
Q&A: Grady Booch on systems of systems, software everywhere, and rockets in days of yore

Software is moving deeper into the real world. Embedded systems are hooking up with corporate backroom computers in new ways. A new systems ethos is emerging. IBM software scientist Grady Booch casts light on the issues, including how software design can be viewed through the lens of systems of systems.



The idea of 'systems of systems' is getting some attention these days. In a way, embedded systems are meeting backroom enterprise computing in ways once unimagined. To some extent, IBM's much publicized Smarter Planet initiative is headed in this direction. That is: toward collections of highly distributed systems combined as a sort of 'meta system.'

To learn more, SearchSOA's Jack Vaughan spoke with Grady Booch, Chief Scientist for Software Engineering at IBM Research. Booch served as Chief Scientist for Rational Software Corporation from its founding in 1981, was a co-inventor of the Unified Modeling Language (UML), and these days is examining the issues of systems of systems, ultra-large systems, and their tendency to surpass the capability of any one organization to control.

SearchSOA: We are interested in how embedded systems and enterprise systems are becoming intermixed. Is the idea of "systems of systems" a modern take on some traditional work that has already been done?


Grady Booch: Well, I wouldn't say the idea of systems of systems is a modern thing at all. You can go all the way back to the work of [researcher] Herbert Simon, the book Systemantics by John Gall, and the NASA engineers who built the Deep Space Network. Their notions about systems have been part of general practice for decades. The difference is the increased amount of software that we're seeing injected into these systems.

For software people, the idea of systems of systems looks new and novel, but it's not. It's a problem that has existed in engineering for a long time. There is an analogue of this problem in the growth of cities. The problems we see in cities are very similar to the problems we see in large software systems. The metaphor I often use when I talk about systems of systems is that they are an awful lot like cities, because no one really controls [the whole thing], there are lots of cross-cutting concerns, and scattered and tangled sorts of things. The lessons we learn from cities can be applied to software systems as well.

SearchSOA: I am thinking of cities, and how they're made up of buildings, and how software pattern work emerged from work on building architecture. That makes me ask whether there are any design tools or patterns that are going to be intrinsic to this sort of systems-of-systems design?

Grady Booch: It depends upon how firmly you want to capture the semantics of "patterns" in that regard. But I think the experience from city-building and large organic systems like that, and certainly what Herbert Simon tells us, is that well-structured systems of systems have a number of characteristics to them.

But the challenge is how you build a system of systems knowing that you don't have complete control over its direction. There's a lot of fascinating work going on, and I direct you to another IBMer, by the name of Dick Gabriel, who has been writing about ultra-large systems. I guess the best way to characterize ultra-large systems is that they are beyond the comprehension of any one organization to control. The best you can do with ultra-large systems is to kind of nudge them in certain ways. You are also in a situation with these systems where you can't ever replace them. They are too large. They are - dare I say it - "too big to fail."

SearchSOA: Yeah, you don't want them to fail, especially if they are an elevator or a missile.

Grady Booch: Yes. But you know, in some ways, even just saying "it's a missile" is a short-sighted view of the problem, because that's a system unto itself, but it's also a missile that's involved in other systems of systems. You have to ask how that missile integrates into the tracking systems and the theater of operations that exist around the world.

The level of things I've been interacting with folks on -- just the global theater of operations -- these are just whopping big systems. I think what makes them most challenging is that they are on this cusp of the technical and the social, meaning that you often have to deal with not only the technical architecture but also the organizational architecture, which can be as resistant to the prospect of change as the technical architecture is.

SearchSOA: So it can be a convergence of mechanical, electronic, and digital technologies?

Grady Booch: Absolutely. In that regard, I think a good place to look at it would be the automobile.

A couple of decades ago there was hardly any software in cars. I think it was in the mid-'80s that we started seeing the auto manufacturers begin putting microprocessors in cars. The dominant processor at the time was the [Motorola] 68000, I believe, and it was used in isolated systems -- in the anti-lock brake system or in the transmission, things like that.

Why were they doing that? Well, you'd see places where those groups realized this is a cost-sensitive thing that needs to be done. If I have all these individual components and all this behavior that I'd like to see, then having individual discrete electronic components is one way of producing suitable solutions, but the opportunity for failure is higher because I have more parts. The ability to rapidly deal with change is increased with microprocessors, and so that sort of subsystem moved toward more and more microprocessors. It made a lot of sense.

Those were very, very closed worlds. I could take a braking system, view it as a closed system, and analyze the heck out of it to make sure it works. The challenge with all of these kinds of systems is that all of a sudden you see all these microprocessors popping up, and one day you wake up and say, "Oh my God, I have all this software!"

And it's generally been developed by different groups. So there are no economies of scale, and then you also realize that there's no way these things can talk together. Well, that leads us to things like AUTOSAR [automotive software architecture], which is, in many ways, a logical step in the evolution of automobiles. A group of folks get together and decide upon common goals and common protocols. It's no different than deciding years ago that 110-volt electrical plugs in the US will all look a certain way. Establishing those standards creates opportunities for stabilizing the marketplace and for third parties to add to the ecosystem.

What's the lesson learned here? There are two, I think. There comes a time in the growth of systems when it is possible to establish de facto standards, and those standards in effect will establish the way to play well with one another … [leading to] cross-cutting protocols and patterns that work. The other thing that happens is that, having done that, you establish this system and all of a sudden all you can do is hold onto the reins and go for the ride, because there will be many unintended consequences of those decisions and you will grow in ways that you cannot anticipate. Indeed, the mark of a good system is that it will grow in ways that you could never have anticipated, and the ecosystem flourishes on its own.

We're certainly seeing at IBM that we're moving from an Internet of people -- where what you used to see on the edges of the Web were people -- to an Internet of things. What happens if you have potentially a billion devices that are all IP addressable? And what happens if these things work together? It's very exciting stuff, but it's not unlike what happened in cars, where we saw these individual things pop up and people then discovered we could fit them together in interesting ways. They start relying on each other. You start using links between them, and then all of a sudden you realize this is a complex system. I jiggle it here and something squeaks a thousand miles away, but nobody planned that.

SearchSOA: Grady, you came up through the embedded ranks, right? I wonder if you'd mind if I asked about days of yore. In the Air Force and early on at Rational, would you say you were involved with real-time systems? I'd imagine that would flavor your thinking on systems engineering.

Grady Booch: Well, let me go way back to the days of yore, as you say. My first assignment was at Vandenberg Air Force Base, where as a young lieutenant I was involved in ground systems in support of the nation's ballistic missile defense program. I was involved primarily in two projects, both very much real-time kinds of things. The first was a telemetry processing system that would read data in real time from the launches. It was an interesting exercise in data buildup and data rates, because the amount of data coming back from these birds was vastly increasing as we were trying to improve the targeting on them. So lots and lots of data coming back.

There was a very novel system being developed at the time called TIPS -- the Telemetry Integrated Processing System. It pushed the edges of what anybody knew how to do. We had a cluster of minicomputers, and back then people didn't really know how to pull those clusters together, so we built our own real-time operating system for it. It was an interesting fusion of hardware and software issues, and also an interesting problem of computer-human interface issues, because you had to deliver the data to users in real time.

The next project I moved on to was called the Range Safety Display System. Way back then we would track missiles using pen-and-ink plotters. You would sit in this room that looked like a scene out of a TV show or a movie, surrounded by these huge pen-and-ink plotters that would track where the missile was relative to its performance characteristics. There was a range safety officer whose job it was to stand there, and if the missile deviated from its flight path, he would literally hit the big red button and blow the thing up. A lot of missiles from there veered off toward Los Angeles because they were being launched into polar orbit, and the last thing you want to do is blow up an errant missile over Los Angeles. It's just not a good thing to do.

SearchSOA: Right. Gotcha.

Grady Booch: So the Range Safety Display System was an effort to remove those pen-and-ink boards, because they simply were not fast enough or responsive enough, and to move all that data onto a single graphical display. We selected an Evans and Sutherland [display system], and it was revolutionary at the time because all of a sudden we were fusing data from 40 different sensors and putting it all on a screen -- presenting it in such a way that the range safety officer could make some very important decisions in real time.

SearchSOA: Back in the day, as you developed the OOAD methods that moved toward UML, were the embedded real-time issues important?

Grady Booch: Very much so. Remember, Ada was primarily meant for defense stuff. The early uses for it were hard real-time kinds of things. Later on, Ada was pretty much the dominant language used in air traffic control systems. With Rational, I was involved with the IBM bid and then the Hughes bid for the US air traffic control system. Many railway systems were built in Ada as well, and these are all real-time kinds of things. Frankly, my early experience at Rational had nothing to do with enterprise systems, but had lots to do with these hard real-time systems.

What we realized back then was that here was a crowd of people who had traditionally been -- and I don't mean this in a pejorative sense -- tin-benders: they built hardware. But the amount of software going into the missiles, trains, radars and the like was a hockey stick. There was a knee in the curve. And the ability of those companies to understand how to develop software properly -- to architect and manage it as part of the system -- that was all new ground.

SearchSOA: When we talk about systems of systems, are we taking a sort of "geek's-eye view" of the Smarter Planet?

Grady Booch: Partly, yes. I would say that is true. I am very excited about the direction IBM is heading with this, but from an insider's perspective I realize that there is a whole lot of software that has to be built. Things like the Smarter Planet are pushing our ability to manage this kind of system. I'm not saying that in a negative sense.

It's delightful that we have things that push us because we grow when we have the most demanding kind of concerns. It's a fascinating engineering problem and one that pushes the envelope of what we know works.
