Wednesday, August 23, 2006

Robust Design. I've done a lot of thinking lately about the concept of robustness. Many decision-makers, when describing the key requirements of a new system, insist that it must be robust. The DoD's Office of Force Transformation treats robustness in the military force as the opposite of optimality. That is, the US should be striving to build a military (in the next generation) that can achieve national security goals despite the wide range of unforeseen challenges that it, and the nation, will face.

Now, that's a tall order. For one thing, military affairs are, by their very nature, a complex adaptive system. Your adversary will seek to identify your strengths and weaknesses. He will adapt to them, as you will adapt to his. Robustness in this context demands a never-ending evaluation of your emerging systems against your adversary's strategy and capabilities. Like all complex adaptive systems, national security never reaches equilibrium. Thus, the "Force" in the Office of Force Transformation's lexicon will never really fit within modern "systems engineering" principles and processes.

If you google "robust design", you will become immersed in Taguchi quality methods. These have been further absorbed into the modern organizational "borg" called "Six Sigma". (That's "borg" as in "resistance is futile, you will be six sigmilated".) Taguchi, in the 1950s, defined robustness in a design as insensitivity to random variation in the environment. Any reading of modern Six Sigma doctrine shows that this theme is constant and pervasive. In fact, the very name "six sigma" assumes that your primary design challenge is random variation.
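Taguchi's standard measure of this kind of robustness is a signal-to-noise ratio computed over repeated trials under random variation. A minimal sketch of the idea (the two "designs" and their noise sensitivities here are hypothetical, purely for illustration):

```python
import math
import random

def signal_to_noise(samples):
    """Taguchi 'larger-the-better' S/N ratio: -10 * log10(mean(1 / y^2)).
    Higher is better: it rewards performance that is both high and
    stable under random variation."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in samples) / len(samples))

random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Two hypothetical designs with the same nominal performance (10.0)
# but different sensitivity to random environmental variation.
sn_tight = signal_to_noise([10.0 + 0.5 * n for n in noise])
sn_loose = signal_to_noise([10.0 + 2.0 * n for n in noise])

print(sn_tight > sn_loose)  # the less noise-sensitive design scores higher
```

Note what the metric assumes: the environment perturbs the design at random, and the design's job is simply to shrug off that noise.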

In case anybody hasn't noticed, terrorists do not strike randomly. In fact, none of our adversaries have attacked us at random. Would that military designs needed only to respond to environmental variation! Thus, when the mavens of defense programmatics call for robustness, they are asking for something completely different.

Measuring the robustness of complex adaptive systems is a challenge. If you look at the Santa Fe Institute research area on "robustness" (yes, they have one!), you'll find it devoted mostly to biological robustness. This fits Santa Fe's own predilection for academic and observational research.

So, my search continues. To be specific: how do we measure, with any degree of confidence, the robustness of a complex adaptive system? Are there ways to manipulate an existing system to make it more robust? And are there pitfalls in the intervention process that would rob robustness from a well-functioning system?
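One way to make the measurement question concrete is to contrast the average-case robustness that Six Sigma measures (performance under random perturbation) with the worst-case robustness an adversary probes for. A toy sketch, with an entirely hypothetical performance model:

```python
import random

# Hypothetical system: nominal performance 100, degraded linearly by
# perturbations, most steeply along the dimension the design left weak.
WEIGHTS = [1.0, 20.0, 1.0]

def performance(perturbation):
    return 100.0 - sum(w * abs(p) for w, p in zip(WEIGHTS, perturbation))

def average_case(trials=10_000, scale=1.0):
    """Robustness against random variation: mean performance under
    independent uniform noise on every dimension."""
    random.seed(0)
    return sum(
        performance([random.uniform(-scale, scale) for _ in WEIGHTS])
        for _ in range(trials)
    ) / trials

def worst_case(scale=1.0):
    """Robustness against an adversary who spends a fixed budget
    entirely on the single weakest dimension."""
    return min(
        performance([scale if i == j else 0.0 for i in range(len(WEIGHTS))])
        for j in range(len(WEIGHTS))
    )

avg = average_case()    # looks healthy under random noise
worst = worst_case()    # the adversary finds the weak dimension
```

In this toy model the system loses only a few points on average under random noise, but a single adversarial probe of the same magnitude, aimed at the weak dimension, cuts performance far more. A robustness metric fit for national security would have to report the second kind of number, not the first.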
