This section of the portal is for supporting the Disciplined Agile Value Stream Consultant Workshop (DAVSC), currently under development. Discussions on the pages here will take place on the Disciplined Agile LinkedIn group.
This page provides an overview of different ways to approach complexity. The Disciplined Agile FLEX approach is described at Dealing with Complexity by Creating a Bias For Simplicity.
Our attitude toward complexity has a major impact on the approaches we take. Many people take a simple approach, believing that attending to just a few things will be sufficient. Others believe that if they look long and hard enough at all the relationships between the people and artifacts in play, they can figure things out and make accurate predictions about what will happen when they make changes. This approach comes in two flavors. The first starts with small teams and combines them together, dealing with complexity through organic growth where each component is reasonably understandable. The second tries to accommodate everything up front. The challenge with both is that we are embedded in a complex adaptive system, and making predictions based on the relationships between a system’s components is fraught with error and risk.
Both of these approaches, however, ignore the fact that understanding the relationships between components in a system does not provide us with the holistic view or predictability needed to improve the system.
An organization creating new products and services is a complex adaptive system (CAS). A CAS is a system in which a perfect understanding of the individual parts does not automatically convey a perfect understanding of the whole system’s behavior. This means one can’t be certain what effect a change in one part of the system will have on another. But it doesn’t mean that cause and effect doesn’t exist for the system as a whole.
The reality setting in with these approaches has led to yet another one: taking the attitude that our system is too complex to have any meaningful predictability, so we can only hope that positive changes will emerge as we make decisions. This approach is attractive for many reasons. Practitioners can avoid responsibility for failure simply by acknowledging that their system was complex. Proponents of frameworks to improve companies can provide either simple or complicated solutions and, when failure occurs, ascribe it to the frameworks being difficult to master due to the complexity of the organization.
Even in complex adaptive systems there is a cause and effect when one deals with the system as a whole. Here are some examples:
“There is more value created with overall alignment than with local excellence.” – Don Reinertsen
“It is easier to act yourself into a new way of thinking, than it is to think yourself into a new way of acting.” – Millard Fuller
“A system must be managed. It will not manage itself. Left to themselves, components become selfish, competitive, independent profit centers, and thus destroy the system … The secret is cooperation between components toward the aim of the organization.” – W. Edwards Deming
“Operating a product development process near full utilization is an economic disaster.” – Don Reinertsen
In addition, there are some maxims which can provide us with rules of engagement that, when ignored, will predictably cause problems:
“If you only quantify one thing, quantify the Cost of Delay.” – Don Reinertsen
“Those who do not learn history are doomed to repeat it.” – George Santayana
“Often reducing batch size is all it takes to bring a system back into control.” – Dr. Eli Goldratt
“Culture eats strategy for breakfast.” – Peter Drucker
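Reinertsen’s warning about full utilization can be made concrete with elementary queueing theory. The sketch below is our own illustration (not from the source): it uses the standard M/M/1 result that average queue wait grows as utilization / (1 − utilization), showing why wait times explode as a system approaches 100% utilization.

```python
# Illustrative sketch: average time work waits in an M/M/1 queue,
# expressed as a multiple of the average service time.
# W_q = rho / (1 - rho) * service_time, where rho is utilization.

def avg_wait(utilization, service_time=1.0):
    """Average queue wait for an M/M/1 system at the given utilization."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time * utilization / (1.0 - utilization)

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{rho:.0%} utilization -> average wait {avg_wait(rho):5.1f}x service time")
```

At 50% utilization, work waits about one service time; at 95% it waits nineteen. This nonlinear blow-up is the cause and effect behind treating near-full utilization as an economic disaster.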
Developing new products, services and software is a complex endeavor. That means we can never know for sure what’s going to happen. There are many layers of activity going on at the same time and it’s hard to see how each relates to the others. Systems are holistic and not understandable just by looking at their components; we must look at how the components of the system interact with each other. Consider a car, for example. While cars have components, the car’s behavior also depends on how those components interact. Putting a bigger engine in a car might make it unstable if the frame can’t support it, or even dangerous if the brakes are no longer sufficient.
These relationships, however, come in different degrees of predictability:
- Simple – you do something and the result is obvious. For example, release a held ball and it falls.
- Complicated – there are so many understandable relationships present that the overall picture is difficult to see, even if seeing it is possible. A Rube Goldberg machine is a great example of complicated.
- Complex – not all of the relationships may be clear, and even those that are may not interact the way you think they will. Complex systems are, by their nature, not fully predictable.
- Chaotic event – a small event causes a big result: the proverbial “straw that broke the camel’s back.” This is distinct from chaos, where one can’t tell what’s going on at all. Misunderstood requirements are a common example in knowledge work.
(Note for the reader familiar with Cynefin: this is not intended to be a variant of it. These concepts predate Cynefin by decades and are used here in a different manner than Cynefin approaches them.)
Knowledge work can be thought of as an integration of several systems:
- How people interact with each other
- How work being done in one part of the system affects the work in others
- How people learn
- How people in the system interact with people outside of the system
These interactions are unique to a particular company. The principle of “context counts” means we must make intelligent choices based on the situation we are in. But how? We just stated that a large part of our system is unpredictable.
We first recognize that we’re just trying to improve our ability to predict what will happen. That means we want to attend to what Don Reinertsen (Reinertsen, 2009) calls “macro-predictability” as opposed to “micro-predictability.” Micro-predictability is the degree of predictability of specific actions – for example, whether a roulette ball will land on black, red or green. Macro-predictability is the degree of predictability over time – for example, we can be quite sure that, over many spins, more money stays at the table.
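The roulette example can be simulated directly. The sketch below is our own hypothetical illustration (the function name, seed, and red/black simplification are ours): any single spin is micro-unpredictable, yet the house edge of a European wheel (one green zero) makes the long-run average macro-predictable.

```python
# Illustrative sketch: micro- vs macro-predictability on a European
# roulette wheel. No single spin can be predicted, but the long-run
# average loss per 1-unit bet converges toward -1/37 (about -2.7%).
import random

def bet_on_red(rng):
    """Payoff of a 1-unit bet on red for one spin."""
    pocket = rng.randrange(37)   # 0 (the green zero) plus pockets 1..36
    is_red = 1 <= pocket <= 18   # simplification: 18 of the 36 numbers win
    return 1 if is_red else -1

rng = random.Random(42)          # fixed seed so the run is reproducible
spins = 100_000
total = sum(bet_on_red(rng) for _ in range(spins))

# 18/37 wins vs 19/37 losses => expected result about -0.027 per spin.
print(f"Average result per spin: {total / spins:+.4f}")
```

Each call to `bet_on_red` is unpredictable (micro), but over 100,000 spins the average settles near the −2.7% house edge (macro).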
Given our lack of micro-predictability, we want to take a scientific approach, be agnostic as to our methods, and avoid our cognitive biases. We offer up each potential improvement as a hypothesis that it will make a positive difference. When we try it, we treat it as an experiment to see whether our understanding was correct. Either we get an improvement or we learn something.
Deciding on these hypotheses is often based on looking at workflows and how people interact with each other. We also have to attend to people’s experience levels and avoid pushing them beyond their abilities. For example, multi-tasking is bad for efficiency, creates additional work and causes unpredictability, so we’d like to reduce it. But how? Multi-tasking is usually caused by people working on too many things at once; unable to finish any of them quickly, the items conflict with each other. Our macro-predictability tells us that reducing this overload would be a good thing. We can reflect on our situation, make a choice based on principles applied to our context, and see whether we get an improvement. For example, are people assigned to too many projects? We make choices guided both by general principles of Lean and Flow and by our understanding of our context. Our actions will lead either to improvement or to learning.
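One way to reason about the overload example at the macro level is Little’s Law, which relates average work in process (WIP), throughput, and cycle time. The sketch below is our own illustration, not part of the source: with throughput fixed, every extra item a team takes on lengthens how long everything takes.

```python
# Illustrative sketch of Little's Law:
#     average cycle time = average WIP / average throughput
# If a team's throughput is fixed, adding WIP only stretches cycle time.

def avg_cycle_time(wip, throughput_per_week):
    """Average cycle time in weeks, by Little's Law."""
    if throughput_per_week <= 0:
        raise ValueError("throughput must be positive")
    return wip / throughput_per_week

# A hypothetical team finishing 4 items per week:
for wip in (4, 8, 20):
    weeks = avg_cycle_time(wip, 4)
    print(f"WIP {wip:2d} -> average cycle time {weeks:4.1f} weeks")
```

The same team that delivers a 4-item workload in a week on average takes five weeks per item when juggling 20 – which is why reducing the overload, rather than working harder, is the macro-predictable lever.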
Creating a series of small steps and validating each one in the context of the organization leads to effective emergent change. We can guide these steps with Dr. Goldratt’s concept of inherent simplicity (Goldratt, 2008): the presumption that embedded in complex systems are rules that, when understood, enormously simplify how we can create potential solutions for the challenges in our system. Inherent simplicity already exists; we must find it and take advantage of it. Doing so enables us to increase performance and reduce or eliminate the challenges we face. In knowledge work we have found that looking at the following can be very useful for understanding what’s happening:
- The extent of focus on customer value
- How workload relates to capacity
- Efficiency of the value streams
- The batch size being worked on
- Visibility of work and workflow
- Level of collaboration present
- Quality of the product
These “factors for simplicity,” as we refer to them, are reflected in many of the principles, promises, and guidelines discussed in this chapter – in particular, be enterprise aware, create a safe environment, improve predictability, improve continuously, validate our learnings, attend to relationships throughout the value stream, and adopt measures to improve outcomes. This does not mean we achieve full predictability, of course. Our goal is to improve both our process and our predictability through learning – our improved behavior emerges as we learn. While we believe we don’t need to make random changes to see what improvement we’ll get, we are also guided by two maxims:
It is difficult to make predictions, especially about the future. – Mark Twain
For every complex human problem, there is a solution that is neat, simple and wrong. – H. L. Mencken
In other words, we move forward with caution while taking advantage of what we know.
Dealing with Complexity is Complicated
Complex systems preclude full predictability. This does not mean, however, that no cause and effect lives in them. Rather, this cause and effect is often buried amidst relationships that are difficult to see and understand. Even so, these relationships can have a profound effect on the system and are worth attending to. When not attended to, they disturb the system and make it more difficult to achieve the results you want.
The question, then, is: if these causes are embedded, how do you find them? The answer is by looking at the patterns of similar systems where they’ve already been discovered. Here is where theories of Flow, Lean, Theory of Constraints (ToC) and organizational development can be very useful. Deming taught us to look for common causes in systems. An often overlooked lesson here is that just because special causes are often difficult to understand, we should not let that stop our investigation into the cause and effect of common causes.
The choice is not between ignoring the reality of complexity by believing all cause and effect can be discerned, and believing that in a complex system no cause and effect can be discerned at all. Look for patterns of challenge and success. There are usually many challenges that are easy to see – fix them first. A pattern of awareness usually follows.