Thursday, August 24, 2006

The Third Way. My academic background is in the arcane field of operations research. Often, this is combined with systems engineering. I sometimes find myself perusing the college curricula for this field. (I know, I'm trying to get out more!)

There are normally two "basic" courses in OR. One centers on deterministic methods (linear programming and optimization of nonlinear systems). It teaches methods that really made their appearance in the 1950s and 60s. The second focuses on stochastic methods (Monte Carlo simulation, Markov processes, and dynamic programming). This field had to wait for the improved computer capabilities of the 1960s and 70s before stochastic decision theory could be widely adopted. Except for improvements in software, the methods have not seen much revision since then.
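For flavor, here's a minimal taste of that second course: the long-run ("steady-state") behavior of a two-state Markov process, computed by brute-force iteration. The transition probabilities are invented for illustration.

```python
# A two-state Markov chain: a machine that is either 'up' or 'down'.
# We iterate the transition equation until the probability settles.
# (The probabilities 0.9 and 0.5 are made up for this example.)

def steady_state(p_stay_up, p_stay_down, iters=1000):
    """Long-run probability of being 'up' for a two-state chain."""
    up = 1.0  # start in the 'up' state with certainty
    for _ in range(iters):
        # P(up next) = P(up now)*stay_up + P(down now)*recover
        up = up * p_stay_up + (1.0 - up) * (1.0 - p_stay_down)
    return up

# Machine stays up with prob 0.9, stays down with prob 0.5:
print(round(steady_state(0.9, 0.5), 3))  # -> 0.833
```

Thirty seconds of arithmetic a 1970s mainframe would have billed you for--which is exactly why the stochastic course had to wait for cheap computing.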

Let's compare this with the typical physics curriculum. Physics begins with mechanics. Here, we meet Sir Isaac Newton, and learn concepts of force and acceleration. Date of origin: the 60s. The 1660s, that is. This is followed by electricity and magnetism, much of which was derived and developed in the time of Michael Faraday. E&M's emergence can be dated around 1800. But, physics has a third "basic" course. It's often called 'modern physics' or 'quantum physics'. Here, we learn about the weird and wonderful world of tunneling electrons, wave functions, and Heisenbergian uncertainty. The period of development can probably be placed within a few decades of 1930.

Operations Research will, someday, also have a third course. It will encompass chaos and complexity. It will focus on principles of adaptability, non-equilibrium systems control, scale-free distributions, and Bayesian analysis. The primary tools, the equivalent of LINDO and discrete event simulations, will be agent-based models and genetic algorithms. When the historians look back, they will probably place the "birthday" within a few years of, well, 2006.
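If you've never seen one of those future workhorses, here's a toy genetic algorithm in Python: tournament selection, one-point crossover, a little mutation. Every parameter here (population size, mutation rate, the trivial "count the 1-bits" fitness function) is invented for illustration, not taken from any real curriculum or application.

```python
import random

def evolve(fitness, pop_size=30, genes=10, generations=60, seed=42):
    """Toy genetic algorithm over bit-strings: tournament selection,
    one-point crossover, and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament of two: the fitter individual breeds.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genes)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < 0.02) for g in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "One-max": fitness is simply the number of 1-bits. The population
# evolves toward the all-ones string without anyone 'solving' anything.
best = evolve(sum)
print(sum(best))
```

No gradient, no simplex tableau--just selection pressure. That's the third course in embryo.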

Oh brave, new world
That has such people in't!

Wednesday, August 23, 2006

Robust Design. I've done a lot of thinking lately about the concept of robustness. Many decision-makers, when describing key requirements of a new system, insist that it must be robust. The DoD's Office of Force Transformation treats robustness in the military force as the opposite of optimal. That is, the US should be striving to build a military (in the next generation) that can achieve national security goals despite the wide range of unforeseen challenges that it, and we, will face.

Now, that's a tall order. For one thing, military affairs are, by their very nature, a complex adaptive system. Your adversary will seek to identify your strengths and weaknesses. He will adapt to them, as you will adapt to his. Robustness in this context means a never-ending evaluation of your emerging systems and your adversary's strategy and capabilities. Like all complex adaptive systems, national security never reaches equilibrium. Thus, the "Force" in the Office of Force Transformation's lexicon will never really fit within modern "systems engineering" principles and processes.

If you google "robust design", you will become immersed in Taguchi quality methods. These have been further absorbed into the modern organizational "borg" called "Six Sigma". (That's "borg" as in "resistance is futile, you will be six sigmilated".) Taguchi, in the 1950s, defined robustness in designs as resistance to random changes in the environment. Any reading of the modern Six Sigma doctrine will show that this theme is constant and pervasive. In fact, the very name 'six sigma' assumes that your primary design challenge is random variation.

In case anybody hasn't noticed, terrorists do not strike randomly. In fact, none of our adversaries have attacked us at random. Would that military designs needed only to respond to environmental variation! Thus, when the mavens of defense programmatics call for robustness, they are asking for something completely different.
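A crude way to see the difference in code: score two hypothetical designs against a handful of made-up scenarios, once with nature drawing scenarios at random (the Taguchi view), and once with an adversary who studies the design and deliberately picks its worst case. Both designs, all scenarios, and every score below are pure invention for illustration.

```python
import random

# Two candidate "designs", each scored 0-10 against three scenarios.
# Design A is tuned for the average case; Design B gives up some
# average performance for a better worst case.
scores = {
    "A": {"storm": 9, "drought": 9, "sabotage": 1},
    "B": {"storm": 6, "drought": 6, "sabotage": 6},
}

def taguchi_view(design, trials=10000, seed=1):
    """Robustness against *random* variation: expected score when
    nature picks a scenario uniformly at random."""
    rng = random.Random(seed)
    picks = [rng.choice(list(scores[design])) for _ in range(trials)]
    return sum(scores[design][s] for s in picks) / trials

def adversary_view(design):
    """Robustness against an *adaptive* adversary: he studies the
    design and then picks the worst scenario for it."""
    return min(scores[design].values())

print(taguchi_view("A"), adversary_view("A"))  # strong on average, awful worst case
print(taguchi_view("B"), adversary_view("B"))  # modest on average, solid worst case
```

Six Sigma would pick Design A every time. The Office of Force Transformation, presumably, wants Design B.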

Measuring robustness of complex adaptive systems is a challenge. If you look at the Santa Fe Institute research area on "robustness" (yes, they have one!), you'll find it devoted mostly to biological robustness. This fits Santa Fe's own predilection for academic and observational research.

So, my search continues. To be specific: how do we measure--to any degree of confidence--the robustness of a complex adaptive system? And are there ways to manipulate an existing system to make it more robust? Are there pitfalls in the intervention process that could rob robustness from a well-functioning system?

Thursday, July 06, 2006

We're already doing it. That is, we're already doing engineering of complex adaptive systems. It happens every time a general develops an op-plan. It happens every time a large corporation launches a new product. And, the stock market--the ultimate complex adaptive system--'redesigns' the American economy every day. Some days it's a tweak. Some days it's a major overhaul. Winners and losers are chosen.

What we don't do--at least not very well--is describe what we do and how we make decisions in the management of complex adaptive systems.

The 'control paradigm' is broken for large, complex systems. You just can't find a parameter to monitor, another to adjust, and keep a complex system within design limits. Large systems defy efforts to create a control function. In the free market, your competition adjusts and your customers evolve.

But, we've been 'managing' systems that we can't control for a long, long time.

Have you ever heard about the process of determining the right places for footpaths across a college lawn? A clever landscaper decided, first, to plant grass on the entire lawn. Then, where the grass was worn away by foot traffic, that's where the concrete was laid for permanent footpaths. That's 'design' for complex adaptive systems.
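The footpath story is itself a tiny agent-based model, and it's short enough to sketch in Python. Let simulated walkers cross a lawn between random pairs of landmarks, record where the grass wears down, then "pave" only the well-trodden cells. The grid size, landmark positions, walker count, and wear threshold are all invented for illustration.

```python
import random

SIZE = 11  # the lawn is an 11x11 grid of grass cells
landmarks = [(0, 0), (0, SIZE - 1), (SIZE - 1, 5)]  # doors people walk between

def walk(a, b):
    """One walker steps greedily from a toward b, cell by cell."""
    (x, y), path = a, [a]
    while (x, y) != b:
        x += (b[0] > x) - (b[0] < x)  # step toward b in x...
        y += (b[1] > y) - (b[1] < y)  # ...and in y
        path.append((x, y))
    return path

def worn_paths(n_walkers=500, threshold=50, seed=3):
    """Count foot traffic per cell; return cells worn past threshold."""
    rng = random.Random(seed)
    wear = {}
    for _ in range(n_walkers):
        a, b = rng.sample(landmarks, 2)
        for cell in walk(a, b):
            wear[cell] = wear.get(cell, 0) + 1
    return {cell for cell, w in wear.items() if w >= threshold}

paved = worn_paths()
print(len(paved), "cells get concrete")
```

Nobody drew the paths; the landscaper only chose the threshold. That division of labor--the system proposes, the designer ratifies--is the whole trick.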

Wednesday, May 31, 2006

Complexity theory is everywhere and nowhere.

I was at a workshop the other day where systems engineers and 'enterprise architects' were pondering a large cluster of issues about future trends. The subject is not particularly important, because the point I'll make is general.

After a bit of discussion, I suggested that they were dealing with a complex adaptive system, and might benefit from models, simulations, and control theories that are being developed for CAS's. The response was generally negative. "We don't have any training in CAS's." "There's no documentation... standards... definitions... validation... etc."

Clearly, CAS theory is an emergent science. It has none of the trappings of a mature body of theory. What's more, it may never have these things. You may never be able to get a degree as a 'complex systems engineer'.

I believe that we are actually exploring new territory. We don't know where this will lead. There are enough tantalizing clues, however, to suggest that there are universal truths in CAS theory. From such theory might spring guidelines for finding lever points and other important control strategies.

Those would be the building blocks for a predictive tool. Prediction is the crucible for all new sciences. If a body of theory is not predictive, and has no chance of becoming predictive, then its status as a science is questionable. The road to creating a testable theory is long and difficult. But, I believe that the journey is worth it--mostly because complex systems are both important and ubiquitous.

Wednesday, May 10, 2006

Well, I've gone and launched a blog on complexity.

Following a terrific presentation by INCOSE on complexity and systems engineering, I've decided to create a web-space where I can publish my ramblings about complex adaptive systems (CAS's).

Complex adaptive systems are best described by what they are not. The opposite of CAS's are architected or designed systems. I'm a systems engineer. Much has been written and spoken about designed systems. For these traditional systems, we know how to define requirements, establish standards, separate the design process into parts, and create this thing from whole cloth. (Or steel, or digits--you get the picture.) But, many--perhaps most--systems we deal with on a daily basis had no requirements document, no standards committee, and no validation process. They just grew.

'So what?', you might ask. Well, since CAS's make up so much of our daily lives, wouldn't it be nice to have a plan when they break? For when they stop delivering output that we depended on? How about for predicting when and how often they will break?

That's where I come in: I carry an agent-based model. Or some other tool of CAS analysis. In short, I work (academically) at the frontier of a whole new systems science. And, hopefully, I'll be able to send dispatches from the front lines as we all explore this brave, new world.

Redbeard