The Machine that Builds the Machine

hurlbert
8 min read · Apr 21, 2022

My recent article, "Engineering our Craft," discusses the ways in which craft is slow. It offers many insights into making things more efficient, and it's worth a read.

Craft code will always be slow. To go fast you must be building the machine that builds the machine.

General construction uses pre-fab components. Carpenters still do some custom work but it’s common to use pre-made rafters or beams. Modern construction crews often have the wood cut offsite and delivered in stacks ready for assembly. In software we get a bit of this with NuGet packages and the language frameworks and SDKs. We need to lean into this approach with our service work.

Over a decade ago I automated the process of building new forms with a code generator. The time to build a basic form dropped from 3–4 days to 3–4 hours. This was a machine that built a machine.

As a practitioner of THE-METHOD, it’s easy to miss that the MANAGERS are the things being built. The MANAGERS are the machine. The ENGINES and RA are the parts needed to build the MANAGERS. If we are building cars then the ENGINES and RA are the motors and drive train. They are not the car but they make the car possible.

It’s important to understand what it is that we’re automating. Our focus needs to be on automating the construction of the MANAGERS. Our ENGINES and RA should encapsulate the volatilities that come from automating multiple use-cases with our MANAGERS.

ENGINES are hard to write. RESOURCE-ACCESSORS are even more difficult to write. MANAGERS are almost disposable but the ability to create MANAGERS comes from automating the ENGINES and RA. ENGINES and RA are meant to be very focused. They deliver an algorithm or an atomic-business-verb. If this were a traditional factory, the ENGINES and RA would be the parts built by the machine so that the actual product could be built. They would be the motor that goes in the car. They are a part of the car but they are not the car. The MANAGER is the car.

The MANAGER implements or consumes the instructions for producing the desired business behaviors. It is the MANAGER that describes the workflow, and the ENGINES and RA are the parts within this machine that do the actual work. This can make understanding the machine that builds the machine difficult. We need to focus on the ENGINES and RA because the MANAGERS are there to automate the ENGINES and RA in order to achieve the goals of the use-case.
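To make that layering concrete, here is a minimal C# sketch. Every name in it (IOrderAccessor, IPricingEngine, OrderManager) is hypothetical; the point is only that the MANAGER holds the workflow while the ENGINE and RA hold the actual work.

```csharp
// Hypothetical contracts, purely to illustrate the layering.

// RESOURCE-ACCESSOR: exposes atomic-business-verbs against a resource.
public interface IOrderAccessor
{
    Order Load(int orderId);
    void Save(Order order);
}

// ENGINE: encapsulates an algorithm (a business calculation).
public interface IPricingEngine
{
    decimal PriceFor(Order order);
}

// MANAGER: describes the use-case workflow; the ENGINE and RA do the work.
public sealed class OrderManager
{
    private readonly IOrderAccessor _orders;
    private readonly IPricingEngine _pricing;

    public OrderManager(IOrderAccessor orders, IPricingEngine pricing)
    {
        _orders = orders;
        _pricing = pricing;
    }

    public void Reprice(int orderId)
    {
        // The instructions for the use-case live here; the parts do the work.
        var order = _orders.Load(orderId);
        order.Total = _pricing.PriceFor(order);
        _orders.Save(order);
    }
}

public sealed class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}
```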

Traditionally, developers use the requirements as the building materials for the services, classes, and libraries they write. The book "Righting Software" shows us that this approach is misguided and leads to unmaintainable software. The requirements help us discover the automation, but the software we write ("right") should automate the algorithms and the atomic-business-verbs. Most teams still make the mistake of trying to automate features.

Sidebar: Once you shift your perspective to building the MBM (the machine that builds the machine), the desire to automate features disappears. Features tend to be one-offs. A machine that built the machine that built features would be useless. It would be a custom builder of one-offs. That's ridiculous. Understanding that we are building the machine that builds the machine forces us to think in algorithms and atomic-business-verbs. You must deal with volatilities, because a volatility among the algorithms and ABVs will be a volatility in your machine that builds the machine.

The metadata around our projects (requirements, dependencies, settings, defaults) is very similar to the raw materials for the machine that builds the machine. The code generators and templating engines are the machinery. They are the stamping machines, the welding robots, and the parts warehouse and inventory. Our machine that builds the machine will be something like a compiler or a generator.
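As a rough sketch, that raw material could be captured as a declarative spec that the machinery consumes. The ServiceSpec shape below is hypothetical, not the format of any existing tool:

```csharp
using System.Collections.Generic;

// Hypothetical "raw material" for the MBM: a declarative description of one
// service, distilled from requirements, dependencies, settings, and defaults.
public sealed record OperationSpec(string Name, string Input, string Output);

public sealed record ServiceSpec(
    string Name,                                    // e.g. "PricingEngine"
    string Kind,                                    // "ENGINE", "RESOURCE-ACCESSOR", or "MANAGER"
    IReadOnlyList<OperationSpec> Operations,
    IReadOnlyList<string> Dependencies,             // contracts this service consumes
    IReadOnlyDictionary<string, string> Defaults);  // settings with their default values
```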

The problem THE-METHOD has is that we commonly build the machine itself by hand. We hand code the RA. We hand code the ENGINES. We need to build machines that build the RA and the ENGINES. Just to prove the point, when we implement something like a workflow engine to automate the MANAGER, the result is often underwhelming. That's not because a workflow engine isn't valuable. It's just that what's being automated is not the bulk of the work. The bulk of the work happens in the behaviors built into the ENGINES and RA. The ENGINES and RA need to be the MBM's focus.

To really examine the machine that builds the machine, we need to ask, "What are the raw materials needed to build our MBM factory?"

Viewed from this "factory" perspective, things like requirements, detailed design specs, tests, defaults, and mocked-up data sets are our raw materials.

Once the machine is built and the system is running then things like a message-bus and queues make sense. They represent just-in-time supply delivery to the running machine. They represent stacks of raw materials, ready to feed the machine that our machine built.

The trick here is, of course: who among us is building the machine that builds the machine? And what does that even mean in this context?

I wrote the Storyboarder. I got it working and put it aside. I need to get back to it. But the Storyboarder was written as an exercise. It’s not a product. But it was an example of the machine that builds the machine. More accurately it’s a machine that prototypes the new machine.

A machine that would build the machine would need to write ENGINES and RESOURCE-ACCESSORS. Such a machine would write algorithms for the ENGINES and atomic-business-verbs (ABVs) for the RAs. In our world an algorithm is a method/function. An ABV is also a collection of methods/functions. That seems boring. It also seems a bit misguided. We need better. Without a better understanding of what we're doing, this would lead us right back to hand coding everything. This could be why everyone hand codes everything, and why this is so difficult to explain and write about. Juval doesn't even address this in "Righting Software."

The problem is: how do you build a machine that builds the machine when the machine is built from methods/functions? We normally call such a machine "Visual Studio," or the programming language/framework. To return briefly to my woodworking analogy, this is like calling lumber a furniture-building machine. It's not very helpful. We need better. More accurately, we need less.

What we need is a new concept of the machine that we want our machine to build. If the machine is just made of methods, what is there to build? We’re back to hand coding and slow slow slow implementation. Sound like your world?

Got any ideas? There’s an interesting conversation on the iDesign Alumni forum. Monty suggests, “A Service is a unit of logically related behavior with well-known contracts remotely accessible via standard plumbing. IDesign refines this definition to provide rules and structure with VBD (volatility based decomposition) and the Method. Service meshes allow you to also define a service as a Unit of Scale.”

This suggests that the MBM is building groups of "logically related (methods)." It then adds that there are rules and some structure, and that the product should work as a unit of scale. That's pretty good.

Sidebar: Monty from iDesign was one of the authors of iDesign's ServiceModelEx. This library turned an interface into a WCF service, immediately. It can be used to create an enterprise-quality service in 5 lines of code, and it's rock solid. This is an example of the machine that builds the machine. It's now been ages and nothing like it has emerged for gRPC. The closest public tool is the tooling built into Visual Studio. Walking away from WCF was a (necessary) step backward, driven by its complexity, but gRPC is still waiting to be automated at an enterprise level.
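For anyone who never touched WCF, the interface-first shape looked roughly like the sketch below. It uses only the standard System.ServiceModel types (ServiceContract, OperationContract, ServiceHost); ServiceModelEx stripped away even most of this hosting ceremony, but I won't quote its exact API from memory, so plain WCF stands in here:

```csharp
using System;
using System.ServiceModel;

// The interface is the specification of the service's behavior.
[ServiceContract]
public interface IQuoteEngine
{
    [OperationContract]
    decimal GetQuote(string symbol);
}

public class QuoteEngine : IQuoteEngine
{
    public decimal GetQuote(string symbol) => 42.0m; // stub behavior
}

public static class Program
{
    public static void Main()
    {
        // Host the contract as a service; the plumbing is generated, not hand coded.
        using var host = new ServiceHost(typeof(QuoteEngine),
            new Uri("http://localhost:8000/quotes"));
        host.AddServiceEndpoint(typeof(IQuoteEngine), new BasicHttpBinding(), "");
        host.Open();
        Console.WriteLine("Service is up. Press Enter to stop.");
        Console.ReadLine();
    }
}
```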

An interface is a pretty good specification for a behavior.

Using an interface as a specification, you could say that the MBM takes an interface and some rules as input and outputs a unit that can be scaled.

I like this definition. We can work with this. The question is, how do we automate the building of interfaces into units of scale in a way that helps us remove all the custom work?
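Stated as a contract of its own, the MBM could be as small as the signature below. Every name here is hypothetical; it only pins down the inputs (an interface plus rules) and the output (a unit that can be scaled):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical: the MBM reduced to a single operation.
public interface IMachineBuilder
{
    ScalableUnit Build(Type contract, BuildRules rules);
}

// The "rules" the behaviors must follow.
public sealed record BuildRules(
    string HostingModel,                 // e.g. "container", "function", "in-proc"
    IReadOnlyList<string> Policies);     // e.g. retry, auth, telemetry requirements

// The unit of scale the MBM produces.
public sealed record ScalableUnit(
    string Name,
    IReadOnlyList<string> GeneratedFiles,
    string DeploymentDescriptor);
```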

Okay, now we’re getting somewhere.

Sidebar: As much as I rag on the front-end folks, there is a great example of the MBM on the front-end. The front-end consumes view-models from the back-end and then submits "commands" to drive the workflow. GraphQL is often misunderstood, but viewed as a machine that builds view-models it's perfect. That is what it was intended for, and that is what makes it a machine that builds the machine. I also think GraphQL may be one of the most abused pieces of software, but as a view-model builder it is a great example of the MBM.

We’ve defined the basic shape of the machine as a group of rule following behaviors (aka: service) that can be scaled. We’ve suggested that an interface is a pretty good description of at least a single facet of the machine’s behaviors. The problem is that this description feels like the hollow shell of our product. It’s the car show “concept vehicle” that is pretty but has no engine or guts. It’s nice but not enough to get you there. What is in our hollow shell?

In our case what’s inside are the algorithms and ABVs of the ENGINES and RAs. Above we referred to these as behaviors. What we need is a way to automate the construction of these behaviors and we’ve already suggested that we can do that with interfaces.

Said another way, if we had interfaces all the way down, a code generator could create most of the machine just from those interfaces.
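Here is a deliberately naive sketch of what "just from these interfaces" can mean: a tiny generator that reflects over a contract and emits an empty, throw-only implementation. It ignores void returns, generics, and namespaces, and the contract name is made up; the reflection calls are standard .NET. What it produces is exactly the empty shell described in the next paragraph:

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Text;

public static class StubGenerator
{
    // Emit an empty class that implements the given contract.
    // Every method body is a placeholder; the guts still have to come from somewhere.
    public static string EmitShell(Type contract)
    {
        var sb = new StringBuilder();
        sb.AppendLine($"public class {contract.Name.TrimStart('I')}Stub : {contract.Name}");
        sb.AppendLine("{");
        foreach (MethodInfo m in contract.GetMethods())
        {
            var args = string.Join(", ",
                m.GetParameters().Select(p => $"{p.ParameterType.Name} {p.Name}"));
            sb.AppendLine($"    public {m.ReturnType.Name} {m.Name}({args})");
            sb.AppendLine("        => throw new NotImplementedException();");
        }
        sb.AppendLine("}");
        return sb.ToString();
    }
}

// Hypothetical contract used only to drive the demo.
public interface IQuoteEngine
{
    decimal GetQuote(string symbol);
}

public static class Demo
{
    public static void Main() =>
        Console.WriteLine(StubGenerator.EmitShell(typeof(IQuoteEngine)));
}
```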

I used to use Sparx Systems Enterprise Architect to do exactly this. We would draw out the classes in UML-like diagrams. When we rendered the code from the UML we would end up with empty classes, methods and all. This was definitely the machine that builds the machine, if what was being built was the empty shell of the machine.

What we needed on top of such a tool was a place to put method code and a way to run it in real time in a debugger. Some of the online linters feel very much like the MBM: you enter code, it runs, and an entire infrastructure is spun up and executes, with a mock debugger and everything. It's a very effective way to explore code.

The goal here is to create METHOD systems faster and in a more automated and repeatable fashion. Such a system could even have a high-level view that mirrors the "network" view of the project dependencies from PROJECT-DESIGN. Visual Studio almost has this with the package diagram.

The MBM would ideally sit at a higher level than Visual Studio so that it could be language- and implementation-agnostic. But it's possible to create such tools as Visual Studio add-ons, so it could be MS-specific.

Why do this if our MBM only builds hollow shells? Or if the code we put in the methods/functions still has to be written manually? That's a good question. We're talking about automation, and I see real value in being able to fluently restructure this code, especially if the rules for the behaviors mentioned above are abstracted out as well.

Any thoughts on the MBM?

Epilogue
