
An Introduction to General Systems Thinking by Gerald Weinberg

My Notes

Chapter 1: The Problem

We use science to master our environment, but we miss the artifacts and consequences outside of our limited understanding. We break complex things down into smaller isolated parts to understand them. In doing so, we lose a sense of the interactions and thus the overall system. Our “scientific” actions often have grave side effects like the loss of rainforests and polluted seas.

The Square Law of Computation tells us that in a system of n elements, the number of relationships, interactions, and computations, and thus the complexity, grows (at least) as n-squared. Adding just a few things to a system makes it dramatically harder to analyze.
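
A quick back-of-the-envelope sketch of my own (not from the book): even counting only the pairwise relationships among n parts grows roughly as n squared.

```python
from math import comb

# Pairwise relationships among n parts: n*(n-1)/2, which grows like n^2.
for n in (5, 10, 20, 40):
    print(n, comb(n, 2))
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780:
# doubling the parts roughly quadruples the relationships to consider.
```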

The Square Root of N Law tells us that our relative error shrinks (roughly as 1/√n) as we get more and more parts. The laws of chemistry work very well because there are a vast number of particles in a small space.
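
A rough sketch of my own to make the 1/√n intuition concrete: the typical fluctuation of an average of n independent parts shrinks like 1/√n.

```python
from math import sqrt

# The standard deviation of the average of n fair coin flips is 0.5 / sqrt(n):
# more parts, proportionally smaller relative error.
for n in (100, 10_000, 1_000_000):
    print(n, 0.5 / sqrt(n))
# 100 -> 0.05, 10_000 -> 0.005, 1_000_000 -> 0.0005
```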

Organized simplicity (i.e. a single machine) lends itself to detailed analysis.
Random, unorganized complexity (i.e. gas molecules in a jar) lends itself to statistics.
We have lacked a scientific means for dealing with systems between these two extremes. This is the vast land of Medium Numbers.
“Organized complexity, the region too complex for analysis and too organized for statistics. This is the region of general systems.”

The Law of Medium Numbers fits between the Square Law of Computation and the Square Root of N Law. It is characterized by fluctuations, irregularities and discrepancies with any theory. Medium number systems are all around us. Murphy’s Law is the Law of Medium Numbers “expressed as folklore.”

“Using science to solve every problem is like using a chainsaw to trim your fingernails.” (paraphrased)

We made transistors as pure as possible. These were our "units." The joints between them were the weak links, where all the heat and waste wound up. Integrated circuits combined transistors into one new pure "unit." The links between them became the new weak links. We divide the body up into organs, the planet into political sections, which works to a point, then the walls come crashing down and we have to redefine the units.

“We are not so interested in how a man operates when he is disassembled.”

Humans are medium numbers! When we average all our behaviors together, the personalities are smoothed out and disappear. When we study a human in isolation, we lose the interaction with society and make her less than human. "The truth is between parts and averages." (paraphrased) General Systems thinking about medium numbers doesn't offer the same control as the two extremes, but it does give us a third bucket.

(Poetry often celebrates wholeness and complexity. I love that he points this out!)

Chapter 2: The Approach

We use analogies and models to represent the world. They are not the world and they have limits, but good predictive models are useful.

Start with an analogy and then work it into a predictive model (a model that accurately predicts what will happen next). Know the limits. Our models are not reality! We too often think they are.

Some things are vital. They have essence and life-force and shouldn’t be reduced any further than that.

“If something explains everything, it explains nothing.” (i.e. God willed tree to fall, rock not to move, etc)

Scientists are essentially reductionists, breaking things up into smaller and smaller pieces. The generalist, like the fox, moves through many different situations or disciplines and does well. She sees an underlying unity in things.

“To be a good generalist, one should not have faith in anything. Faith is the belief in something for which there is no evidence. Every article of faith is a restriction on the free movement of thought, and thus on the free movement of the generalist among disciplines… But nobody can exist without faith in something.”

We use induction to draw general laws from the cases we have observed and apply them to cases not yet observed.

General systems thinking requires that we be naive like children and approach things with fresh eyes, fumble in the dark, make false conclusions and make fools of ourselves. This is how children learn so quickly and so well. Adults lose the ability to grasp wholes, b/c we see the parts separate from the whole.

Stay general and take grand leaps. When we are wrong, we find out quicker. Slow-but-sure analysis may take forever and never get completed. We die first ; )

Those of us who are impatient are drawn to General Systems approaches. Impatience is not enough! We must ignore details and see the “mere outlines” of things. Keep to the general. Keep the big picture in mind. We are looking for an “approximation of truth” to get us excited, get us going. We continue to refine this approximation as long as we are living.

When we have a law, such as the First Law of Thermodynamics, and we observe something that disagrees with our law, we tend to mistrust our measurements, not the law. Laws represent lots of work, so rather than reject them, we append to them and they get messier and messier. General Systems Laws are not meant to give answers. They can afford to be occasionally wrong. Keep them memorable and short, loose and complication-free.

Heuristics are fallible methods.
Heuristics are hands-on, interactive and not concerned with 100% accuracy.
Keep it around if it works most of the time and keep it simple.
“No law is useful if you don’t remember it when you need it.” General Systems Laws are not constraints but stimulants! Ha.

Over-generalizing is bad, yes, of course, but so is under-generalizing. “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”

General Systems Laws come in balanced pairs.

  1. The Law of Happy Particularities: Any general law must have at least two specific applications.
  2. The Law of Unhappy Particularities: Any general law is bound to have at least two exceptions. (aka If you never say anything wrong, you never say anything).

And another pair:

  1. Composition Law: The whole is more than the sum of its parts.
  2. Decomposition Law: The part is more than a fraction of the whole.

The generalist travels to Bangkok and sees that they have streets and people and cars. He sees similarities. The generalist understands systems and relationships in broad ways that he can apply to various new subjects right away. He is then able to ask sharp questions and learn quicker.

The general systems approach provides starting points for the study of many complex systems.

Three generalist ways of using models:

  1. Improving thought processes
  2. Studying special systems
  3. Creating new laws and refining old ones.

The author ends the chapter saying that general systems thinking is getting away from its simple roots. He writes this book to bring it back to the ordinary people for whom it was conceived.

Chapter 3: System and Illusion

Poets know that a system is a way of looking at the world.
We learn symbols as kids. They often replace reality.
Our point of view, our relationship as an observer, colors what we see.

“The belief in an external world independent of the percipient subject is the foundation of all science.” - Einstein

Don’t be a fool and assume there is a “right” way to view a system.

Heuristic devices don’t tell us everything and they don’t tell us when to stop.

From broad to narrow:

  • Idea
  • Concept
  • Rule
  • Principle
  • Law
  • Reality
  • Truth

Know when to stop for there are no truths.

Relational Thinking: Beware of assuming you know “real” things, even if there is such a thing as reality.
If there are “real things,” we cannot know them.

The exception proves the rule. "Proof" used to mean "puts to the test" (not "shows to be true").

It’s forceful to speak in terms of absolute purposes, but purposes are never absolute. For example, one may say that GM’s purpose is to produce cars not metal scraps. To a junkyard, GM’s purpose is to produce scraps. To investors, GM’s purpose is to produce money.
Purpose is relative, not absolute.

Emergent properties didn’t exist in the pieces, but they emerge in the whole system. i.e. A thermometer actually dips down for a moment before going up, when you put it in hot water. Glass expands before mercury.
Emergent properties are unexpected. They are therefore relative too, because one person might be surprised to see them emerge while another person predicted them (maybe one tester already knew about glass expansion, so it didn't "emerge" during the experiment).

Arbitrary systems are systems of which nothing general can be said except, “Nothing general can be said.”

You can’t separate the observer from the observed. “Nobody has ever demonstrated that he can choose things arbitrarily.” The system is relative to the viewpoint of the observer.

We think we know absolute rules about grammar, what is relevant or right. Then we try to program that to a computer and find we knew nothing. Examples are given of trying to write a computer program that parses correct sentences. You have to keep adding ad hoc case after ad hoc case. It gets messy. Things aren’t what they seem. It’s not the external system so much as our internal thinking that causes confusion.

What is a system made of? Objects, parts, variables, elements, attributes? Nobody seems to know for sure. They are “undefined primitives” and very subjective at that. Different observers draw the lines and quantify them differently.
Mathematics pretends to be exact but the symbols and equations are misleading, based on things that have been subjectively quantified.

Observations are never “correct.” However, without some sense of correctness, we can’t get very far. Instead of correctness, let’s speak of consistency, that is the compatibility of one observed set with another.

In order to make a “system” (which is really just a model of something external), we have to create a set of things. Making a set is very subjective and relative, not absolute.

The Principle of Indifference: Laws should not depend on a particular choice of notation.
There are hierarchies and overlaps among words that can create much confusion. For example, a road runner is a type of cuckoo. You can call it either a road runner or a cuckoo. A road runner is a cuckoo, but a cuckoo is not necessarily a road runner. Which symbol is chosen will have side effects as we reason and infer.

The first man says road runner and the other says cuckoo; they agree. If the first man speaks again, saying he sees a yellow-bill, the other man agrees again (for a yellow-bill is a type of cuckoo). However, if the second man says cuckoo, the first man doesn't know whether he means a yellow-bill, a road runner, or another subcategory of cuckoo. The observer with the finer-grained language dominates the other: his words map many-to-one onto the coarser vocabulary, while the coarse words map one-to-many back. The Principle of Indifference is strained here, because the mapping changes when you reverse the direction of translation.
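
A quick sketch of the mapping asymmetry in code (my own toy example, using the bird names above):

```python
# Fine-grained vocabulary maps many-to-one onto the coarse vocabulary.
fine_to_coarse = {
    "road runner": "cuckoo",
    "yellow-bill": "cuckoo",
}

# Reversing it gives one-to-many: "cuckoo" no longer picks out a single bird.
coarse_to_fine = {}
for fine, coarse in fine_to_coarse.items():
    coarse_to_fine.setdefault(coarse, []).append(fine)

print(fine_to_coarse["road runner"])  # cuckoo -- unambiguous
print(coarse_to_fine["cuckoo"])       # ['road runner', 'yellow-bill'] -- ambiguous
```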

There is another interesting indifference example with two observers looking at a table at eye level. They are adjacent and can only tell whether a tossed coin lands on the left or the right, not how close it is. A "super observer" viewing from above sees four quadrants, the sum of A's and B's knowledge of positions… (don't feel like writing it all out now but good stuff. needs diagrams)
Point: The number of ways to view and interpret a system grows exponentially as you add limited observers.

Many drugs have had unforeseen “side effects.” In some cases, the drug is repurposed so the “side effects” are now sold as the “main effects.” What is side and what is main, it’s a matter of perspective and it’s relative not absolute (though we speak in absolutes all the time).

Chapter 4: Interpreting Observations

Combinatorial sequences are discussed here. That first word comes up a lot to illustrate that if we have separate "state machines," the possible number of overall combinations of both state machines is combinatorial and grows really fucking big really fucking quickly. You are basically multiplying arrays to get multidimensional arrays. This set of all possible combinations is also called a Cartesian product.
For example, if A's range is (1,2), B's range is (1,2) and C's range is (1,2,3), we have this combinatorial product (aka all possible combinations), with each combination labeled by a letter:

111 a
112 b
113 c
121 d
122 e
123 f
211 g
212 h
213 i
221 j
222 k
223 l

That's 12 possible states! Now imagine sequences of these possible states, i.e. (a,b,c,…). The number of sequences that can occur within just a few "ticks" is enormous! A sequence of two states has 12^2 = 144 possibilities. A sequence of three has 1,728 possibilities, then roughly 21K, then 249K, then 3M. The number of possible sequences grows exponentially with the length of the sequence.
(That’s why the number of chess moves is so high, even though it’s counterintuitive. “There are over 288 billion different possible positions after four moves each. The number of distinct 40-move games is far greater than the number of electrons in the observable universe.” - from chessposter.com).
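
A small sketch of the same combinatorial product in code (my own, using Python's itertools rather than anything from the book):

```python
from itertools import product
from string import ascii_lowercase

# All combinations of A in (1,2), B in (1,2), C in (1,2,3): 2 * 2 * 3 = 12 states.
states = list(product((1, 2), (1, 2), (1, 2, 3)))
labels = dict(zip(states, ascii_lowercase))

print(len(states))        # 12
print(labels[(1, 2, 3)])  # f, matching the listing above

# Possible sequences of k states grow as 12**k.
for k in (2, 3, 4, 5, 6):
    print(k, len(states) ** k)  # 144, 1728, 20736, 248832, 2985984
```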

Remembering a sequence is very difficult, unless it's a constrained sequence, such as (a,d,e,a,d,e,…) over and over again. A cycle of a, d and e. Once you start switching between different repeating sequences, different observers will remember/interpret/understand the sequences in different ways. This is illustrated with a funny black box story.

The Eye-Brain Laws:

  1. To a certain extent, mental power can compensate for observational weakness.
  2. To a certain extent, observational power can compensate for mental weakness.

In summary, we should seek to find the balance between eye power and brain power.
Every state (aka everything) that happens is a miracle, highly improbable. We are biased to see some patterns, like a great bridge hand (cards), as more amazing than others, but that's only because we are putting the rules of the game on the hand. The rules are constructed by us humans and don't reflect what just happened. Every hand is as likely as every other in an honest card deal.

If we want to learn anything, we mustn’t try to learn everything.

Every war is a miracle. Every white rat too. Every event. If we see things as they truly “are,” they’re all miraculous and never again able to occur. However, if we think this way we can’t do science.
With science, we have to lump things together and impose repetition on events. Biases are built in, and eventually we are discarding the "true" things, cramming what we perceive into our existing patterns. In this way, the eye is traded for the brain.

Math allows us to reduce things to functions, which can be convenient:
f(x,y) = z
Means z depends only on x and y. Some function, f, makes the calculation possible.
g(x,y,…) = s
Means some function, g, can calculate s from x,y and some other stuff.
With these functions we are isolating and simplifying things which can be useful, but often we forget that systems are not so neat and the things left out can snowball.
When we assume that x and y are needed to calculate z, we implicitly assume that everything else besides x and y are not needed.
If we have h(x) = y and y does not depend on x, our formula is overcomplete. If y depends on x and something else too, our formula is undercomplete.
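
A toy sketch of my own to illustrate that last point (the function names are hypothetical, not from the book):

```python
# Overcomplete: y is listed as an input but the result never uses it.
def overcomplete(x, y):
    return 2 * x

# Undercomplete: the result secretly depends on something not in the argument list.
hidden = 10

def undercomplete(x):
    return 2 * x + hidden
```

Both "work," but each misstates what the output actually depends on.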

The Generalized Law of Complementarity: Reductionism is infinite. We never get things broken down "all the way," but we may reach limits where we can't go any farther. It's common for two or more observers to reach an irreducible point where their observations disagree. There may be overlap between what they observe. There may not. We see this all the time. Each person has his or her own complex view, but none is complete and they don't all agree.

Chapter 5: Breaking Down Observations

“Our minds are limited, especially when it comes to knowing the ways in which our minds are limited.”

Decomposing a system into separate, simpler parts can help us understand it more easily. We often do this in custom ways that don't make sense to others. We may become anxious when someone challenges our decomposition of a complex system. (I have experienced this in music and songwriting.)

To cope with unfamiliar, complex systems we can -

  • Get a broad, complete view of all that interests us.
  • Get a minimal view of states.
  • Get an independent view that decomposes observed states into simpler noninteracting qualities.

It’s too much for our minds to have every moment be a completely new, incomparable thing so we look for general models we can carry around.

The Axiom of Experience: The future will be like the past, because, in the past, the future was like the past.

Metaphors allow us to transfer properties, to understand something we don't know (yet) through something we do know. i.e. The future will be like the past. "My love is like a rose." Which means my love = f(rose, …), or "my love" is a function of "a rose" and other things. Science and poetry may be a bit alike.

Our brains are always separating things from one another. “This is a pencil. That is happiness.” There must be boundaries to separate one thing from another. We often put boundaries in places where they “should” not be.
The word "interface" is often used instead of "boundary" because an interface looks both in and out. Interfaces are important parts in their own right, not just a perfectly thin separation.

We draw squares, circles and arrows to represent flow charts, systems, etc., which can be helpful, but they can also be misleading. Diagrams imply that there are definite, sharp boundaries between "things," which is not true. Really, the boundaries are metaphors, just as "my love is like a red, red rose" deepens our understanding of something beyond our grasp rather than mandating absolute truth.

We break things up into qualities and properties based on history. If our ancestors felt it was useful to have a separate word or idea for something, we tend to follow that convention, which may mean it’s useful some of the time (tis the same with borders between countries).

The Principle of Invariance: Some transformations preserve a given property and some do not. Or put another way, some properties are preserved by a given transformation and some are not.
This has to do with extensive vs intensive properties. When you break a chocolate bar in half, the size changes so that’s an extensive property. The chocolatey-ness does not so that’s an intensive property. Put another way, the Principle of Invariance states: We understand change only by observing what remains invariant, and permanence only by what is transformed.
This principle is explained with state machines and different observers counting a different number of states. If you have more overall states to keep track of, your observations will be more fine-grained but more complex to manage. If you have fewer states, it's easier, but you might lump states together and lose what you're looking for. It's a tradeoff. Everything we observe is limited.
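
A tiny sketch of my own showing what lumping states costs (the kettle is a hypothetical example, not from the book):

```python
# A fine-grained observer tracks four states of a kettle.
fine_trace = ["cold", "warming", "boiling", "cooling"]

# A coarser observer lumps them into two states.
lump = {
    "cold": "not hot",
    "warming": "not hot",  # the "about to boil" distinction disappears here
    "boiling": "hot",
    "cooling": "hot",      # so does "already past the peak"
}

coarse_trace = [lump[s] for s in fine_trace]
print(coarse_trace)  # ['not hot', 'not hot', 'hot', 'hot']
```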

Partitions are tricky with respect to reflexivity, symmetry and transitivity.
Reflexivity asks whether everything relates to itself under the relation. i.e. Does A count as a friend of A?
Symmetry asks whether the relation holds in both directions. i.e. If A says B is their friend, B may not say A is their friend.
Transitivity means a property transfers. i.e If A is friends with B and B is friends with C, can we conclude that A is friends with C?
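
A quick sketch of my own for checking those three properties on a small "friend of" relation:

```python
# A small directed "friend of" relation over three people.
people = {"a", "b", "c"}
friends = {("a", "b"), ("b", "a"), ("b", "c")}

reflexive = all((p, p) in friends for p in people)
symmetric = all((y, x) in friends for (x, y) in friends)
transitive = all(
    (x, z) in friends
    for (x, y1) in friends
    for (y2, z) in friends
    if y1 == y2
)

print(reflexive, symmetric, transitive)  # False False False
```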

The boundaries by which we partition things should add useful meaning. There are an infinite number of ways to partition “things” and “properties.”

The Perfect Systems Law: True system properties cannot be investigated.
The Strong Connection Law: Systems, on the average, are more tightly connected than the average.

The system is beyond our limited mental capacities, so we must decompose it into parts and focus on one thing, saying "all other things being equal," but all other things are rarely equal (if ever).

Chapter 6: Describing Behavior

As observers, we add to what we are observing.

We create two dimensional state spaces using x,y graphs. These are combos of two changing variables. Three dimensional state space can be done with a cube. After three dimensions, our brains can no longer visualize n-dimensional state space, but we can still use math. General Systems have state combinations that are more than just 2 or 3 dimensions.

We often confuse representations with reality. When we see a picture of the Necker cube for example, we usually say “That’s a cube,” not “That’s a picture of a cube.” The cube is ambiguous and could be a 2d representation (or 2d “shadow”) of other shapes too. When we say “picture of” we are reminded that there is information removed from the image that might be put back in.

The Diachronic Principle: If a line of behavior crosses itself, then either

  • the system is not state determined
  • we are viewing a projection, a shadow - an incomplete view

The Synchronic Principle: If two systems occupy the same position in the state space at the same time, then the space is under-dimensioned, that is, the view is incomplete.

We can use projection to simplify a system for reasoning. We can select a few variables to reason about and ignore the rest.

Count-to-Three Principle: If you can not think of three ways of abusing a tool, you do not understand it.

Time always moves in one direction, but timescale makes it more complicated. Are we concerned with the patterns over thousands of years? Or thousandths of a second? Which timescale we choose changes a graph drastically.

Scientists model reality. They find rules and patterns that seem to fit what they don’t understand. Models are not reality.

Closed systems theoretically have no input, just behavior.
Open systems' behavior is theoretically a function of some input.
Our view is always incomplete. We can never know if a system is truly closed, unaffected by outside factors, even if we think we see cycles (recurring behavior patterns).
What we call “randomness” may be an incomplete view. If we can’t understand behavior or model it, we call it random.

Scientists prefer closed systems. Openness puzzles us. We can't predict it. It's doubtful that a truly closed system exists anywhere.
Equifinal systems all end up in the same final state, regardless of where they start. Yes, but "how" did we get there? We all wind up dead in the end, but how we got there remains of interest.

Principle of Indeterminability: We cannot with certainty attribute observed constraint either to system or to environment.
It may even be worse than this, as the observer herself can add to it.

If you go down into the sea with a 3-inch net and catch animals for years, you may conclude that there are no creatures in the ocean less than three inches long.
Observers: Beware three-inch nets!

Alexander Wood invented the hypodermic needle. He hypothesized that he had to inject Morphine near the source of physical pain for it to work. He came across a lady with pain in her scalp and did not inject because he clung to his theory, without testing it. Years later, Charles Hunter showed that you can inject Morphine other places in the body to relieve pain.
Science is full of untested assumptions. Fresh perspective of old issues can help.

Chapter 7: Some Systems Questions

Objects are just states. “Being is the cross section of an entity in time.” - R.W. Gerard

Being is an instant, a snapshot of behaving.
We analyze “being” with boundaries, properties, diagrams of structure. The white box.
We analyze "behaving" with black box terms like state spaces, inputs/outputs and chronological graphs.

Being and behaving are intertwined. We watch behavior and extract properties, ways of being. We use states of being (properties, boundaries and structure) to try to predict behavior.
Being and behaving are analogous to properties and process.

Believing involves an observer. Observers are always entangled and involved in what they observe. What do we believe we are seeing?

Becoming is more complex as it involves observing change. As children, we take things as absolute. Then we see things break, disappear, change and we start to know about becoming. How did things become this way, we ask?

The “three great questions of General Systems Thinking,” called the Systems Triumvirate:

  1. Why do I see what I see?
  2. Why do things stay the same?
  3. Why do things change?

We will never solve the riddle, but asking these questions gets us closer, sharpens us.

The author says this book focused on question 1. There are sequels that answer questions 2 and 3. It's a cycle that leads back to #1. We will now walk through that cycle once together.

Science seeks to reconcile our thoughts of reality, not reality itself.

So what of things that stay the same? What does it mean to be stable? Even the Empire State Building sways in the wind, though we call it stable. The author defines stable as "within certain limits." A strong enough wind will blow down any building. When we speak of stability, we mean acceptable behavior of the system and expected behavior of the environment. That region of the state space, relating system to environment, is what we call "stability" (i.e. not swaying more than 10 ft in 90 mph winds).
Still, if winds reach 110 mph and the building blows down, we are likely to disregard our previous limits of “stability” and say the building was “unstable.”

Linear stability happens when an amount of input changes the system by a proportional amount; small disturbances produce correspondingly small changes.

We tend to confuse stability with goodness (maybe because traumatic experiences poke out in our mind and associate pain with change).

Survival
Why do systems survive? Well, in the long run, only surviving systems stick around to be witnessed.
Survival is difficult. Most species go extinct relatively quickly. Organizations and businesses, careers - they don’t last that long over the long haul.

If survival is “continued existence,” what do “continued” and “existence” mean?
“Continued” depends on timescale. We observe things relative to our timescale. You can slow down things on the cellular level and they live and die in instants.

Identity
Does a system die or change? We don’t always agree because it depends on how we define the identity of the system. Specifically, what are the variables that identify the system?

If you rotate the letter “Z” counterclockwise is it still a “Z?” Or an “N?” Transformations happen all the time, due to environmental factors. Whether something maintains its identity is a matter of perception. Does a system maintain its identifying properties after a transformation? If so, it keeps its identity and thus still “exists” in our minds.
So survival depends on:

  • What the environment does
  • How the system’s program transforms the environment
  • What variables are involved in the identity
  • How the observer’s program operates on those variables

Regulation and Adaptation
Systems may adapt or die, depending on the observer.
Small changes in the white box can lead to huge changes in the black box. i.e. changing a “+” bit to a “-” bit in a computer program may cause wildly different behavior.

When a fish starts breathing air, does it cease to be a fish? Or is it an adapted fish? There is no clear answer because the question depends on where we partition fish-ness, what we define to be fishy variables.

When we learn new things, are our brains adapting or regulating, according to certain rules?

The Used Car Law.

  • A system that is doing a good job of regulation need not adapt.
  • A system may adapt in order to simplify its job of regulating.

Rephrased Used Car Law:

  • A way of looking at the world that is not putting excessive stress on an observer need not be changed.
  • A way of looking at the world may be changed to reduce stress on an observer.

Stress may cause a system to adapt or collapse. Without stress, a system can keep regulating itself and chugging along.

Names deceive us. Political parties are always changing, but the names stay the same so we may think they haven’t changed.