Solve It In Software

VP of Operations, Opreto


The philosopher Bill Rapaport identifies four great insights of computer science, culminating with the Church-Turing thesis, which says that any real-world computation can be translated into an equivalent Turing machine program. The idea of universal hardware is incredibly powerful. It substantially decouples the work of building computers, and of iterating on their efficiency, from the work of doing computations. Once the computer is built, provided it is fast enough, whatever your problem is, we can solve it in software.

In her notes on Charles Babbage’s Analytical Engine, Ada Lovelace famously saw beyond number crunching and imagined computers doing all sorts of things for people, provided the right data representations and algorithms. Of course, this has overwhelmingly turned out to be the rule: the vast majority of what computers do today is difficult to mentally map back onto operations in the ALU. I have understood the machine down to its semiconductor physics, and yet even to me it is an abstract substrate whose capabilities are entirely driven by software.

We are perhaps experiencing this idea taken to its logical extreme with the recent developments in artificial intelligence. The actual computations taking place in an immense fabric of parallel cores relate in no discernible way to the work we see the computer doing at the application level. The capabilities of generative AI are driven by something resembling software in a rather abstract form, running on a general-purpose algorithm in general-purpose hardware.

The idea: given sufficiently general hardware, any problem becomes tractable in software. We can decouple the design of the hardware from the design of the software, and therefore from the design of the whole system function. This is good, because software is endlessly malleable and in most cases very easy and cheap to update. (It’s also bad, because hardware improvements appear to absolve us of applying engineering rigor to software development, but I’ve complained about that enough lately.)

Importantly, this isn’t limited to just faster, wider Turing machines: we can also include the extended abilities of computers to read from and write to the world via their computations, through human input devices, displays, communication networks, and sensors and actuators of all kinds. In communications, we have software-defined networking and software-defined radio. And consider Tesla’s approach to self-driving cars: design in what seems like a sufficient suite of sensors, and then develop the autonomy software, to be rolled out to existing vehicles via over-the-air updates. My erstwhile colleagues at Apex.AI have begun using the term “software-defined vehicle” to describe this paradigm.

I see one particular application of this idea on the horizon with huge value potential, and it has a lot to do with my recent exploratory work with Arduino Pro products.

We’ve been hearing about the promise of Industry 4.0 for over a decade, to the extent that it has earned the dubious title of buzzword neologism on Wikipedia. Specifically, I can find a lot of (very similar) vague conceptions of how some combination of the industrial Internet of Things and edge computing and the cloud and AI will transform manufacturing, but few concrete success stories.

We know that we can use process data to optimize processes, but it’s not always obvious what data to collect. I can think of no clearer example than one from my first manufacturing job. The aluminum casting plant I worked at regularly incurred scrap from strings of so-called “short pours,” and wanted a way to flag them in the line. Fortunately, the advanced Cosworth casting process generated a lot of data from various physical points (temperatures, electromagnet voltages, etc.), and it turned out that certain variables began exhibiting a tell-tale pattern before the first short pour, so we could eliminate the scrap altogether. Presumably, whoever decided to instrument the machine with all those sensors had imagined neither this particular problem nor this particular solution. And our work was done with old-fashioned manual data mining: it’s easy to see how modern machine learning can turbocharge these efficiency wins.
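In spirit, that kind of tell-tale-pattern detection can be sketched as a rolling statistical check that raises an alarm when a process variable drifts away from its recent baseline. This is only an illustration: the trace, variable names, and thresholds below are synthetic and hypothetical, not the plant’s actual data.

```python
# Minimal sketch: flag readings that deviate sharply from a trailing window.
from statistics import mean, stdev

def rolling_alarm(readings, window=20, threshold=3.0):
    """Yield indices where a reading deviates from the trailing `window`
    of readings by more than `threshold` standard deviations."""
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            yield i

# Synthetic "electromagnet voltage" trace: stable noise, then an upward drift
# of the kind that might precede the first defective part.
trace = [12.0 + 0.05 * (i % 3) for i in range(40)] + [12.0 + 0.4 * j for j in range(1, 6)]
alarms = list(rolling_alarm(trace))  # first alarm fires at index 40, where the drift begins
```

The real value, as in the short-pour case, is that a generic check like this can run over *any* recorded variable, including ones instrumented long before anyone knew which question they would answer.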

As with Tesla’s vehicles, any realization of software-defined manufacturing needs to anticipate, in the hardware design, some reasonably “complete” scope of input data, including multi-modal sensor coverage. I contend that powerful industrial-rated sensor packages like the Arduino Nicla Vision are now so inexpensive and unobtrusive that we should be sprinkling them all over the hard-to-reach parts of our machines. Collect and stream and record all the rich process data you possibly can. That way, when some issue or change inevitably crops up, you might just be able to solve it in software.
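To make the “record everything” idea concrete, here is a minimal sketch of an append-only process logger. In practice you would stream into a time-series database rather than a list, but the point is the same: capture timestamped readings from every channel, so questions you haven’t thought of yet can still be answered later. The channel names are hypothetical.

```python
# Sketch of a "record everything" process logger: every reading from every
# channel goes into one append-only store, replayable per channel later.
import time

class ProcessRecorder:
    def __init__(self):
        self.records = []  # in production: a time-series database or message stream

    def log(self, channel, value, ts=None):
        """Append one timestamped reading from any sensor channel."""
        self.records.append({
            "ts": ts if ts is not None else time.time(),
            "channel": channel,
            "value": value,
        })

    def channel(self, name):
        """Replay one channel's full history as (ts, value) pairs."""
        return [(r["ts"], r["value"]) for r in self.records if r["channel"] == name]

rec = ProcessRecorder()
rec.log("mold_temp_c", 312.5, ts=0.0)
rec.log("magnet_voltage", 11.9, ts=0.1)
rec.log("mold_temp_c", 313.1, ts=1.0)
history = rec.channel("mold_temp_c")  # [(0.0, 312.5), (1.0, 313.1)]
```

The design choice that matters is that the logger is ignorant of *why* any channel exists; like the casting plant’s sensors, the data outlives the questions it was collected for.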
