To Change One's Mind
By Michael Renken
I have a working theory that, at some point in everybody’s life, they decide that they’re done learning and will just react to the things around them as they come: angry when something conflicts with their internal model, or taking credit when it adheres to their beliefs perfectly. Now, I’m not a scientist, but I’m capable of coming up with my own theories about the world and informally testing them. Us plebs can science too.
The above theory came about while I was kind-of obsessing over the relationship between chaos and the structure of a rigid software project that I used to contribute to. So, obviously, when I read the book “Jurassic Park” by Michael Crichton, I performed a mini deep dive into the surface-level concepts of chaos theory[1].
If I were to sum up the theory in my own words, I’d say that it’s the study of chaotic micro systems within the scope of better-defined and much-less-chaotic macro systems - the main idea being that, if we perceive a phenomenon as “chaotic”, it is probably just a minor facet of a larger phenomenon that we wouldn’t perceive that way.
To bring this full circle, the theory I led with was my primitive realization that systems are often much more complex than we perceive them to be, and that people who rage against what they don’t understand but do nothing to change their preconceptions are generally costing themselves in the long run: unpredictable things will always happen, and if something happens once, it’ll probably happen again.
In my experience, the issue was not so much the raging as my inability to accept letting these issues happen without adapting our mental and software models to encompass them.
Most software is written for the “happy path”: as long as the system we’re harnessing for our purposes behaves “normally”, everything’s okay. But often the system does not behave normally. So either your machine is rigid and falls apart, or it has the ability to heal itself built in. That healing can be accomplished via software or via human intervention.
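The difference between happy-path code and code that heals itself can be sketched in a few lines. Everything here is hypothetical (the flaky operation, the retry counts); it just illustrates the idea of retrying a transient failure with backoff instead of assuming the call always succeeds:

```python
import time

def call_with_retry(op, retries=3, delay=0.01):
    """Retry a flaky operation with exponential backoff rather than
    assuming the happy path."""
    for attempt in range(retries):
        try:
            return op()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of retries: escalate to human intervention
            time.sleep(delay * 2 ** attempt)  # back off before trying again

# A simulated flaky dependency: fails twice, then succeeds.
failures = {"remaining": 2}

def flaky():
    if failures["remaining"] > 0:
        failures["remaining"] -= 1
        raise ConnectionError("transient failure")
    return "ok"
```

A happy-path version would just call `flaky()` once and fall apart on the first transient error; the retrying version absorbs the chaos and only surfaces failures it genuinely cannot recover from.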
Obviously, software in isolated systems that cannot be easily modified by humans must be able to heal itself. I think of systemd[2] and how processes can be defined by their relationships with each other: when one unit dies, the units that depend on it die as well, and when it’s restarted later, they all come back up as if nothing happened. This evolution in userspace architecture could only happen after decades of frustration - and after locking some of the senior engineers who couldn’t conceive of such a system in a closet where they couldn’t hinder the progress.
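As a sketch of what declaring those relationships looks like (the unit names and paths here are hypothetical, not from any real system), a unit file can state that it requires another unit, must start after it, and should be restarted if it dies:

```ini
# db-worker.service - a hypothetical unit that depends on a database service.
[Unit]
Description=Worker that needs the database
# If my-database.service is stopped, this unit is stopped along with it.
Requires=my-database.service
# Ordering: only start once the database has come up.
After=my-database.service

[Service]
ExecStart=/usr/local/bin/db-worker
# Self-healing: restart automatically if the process dies.
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Between dependency declarations like `Requires=`/`After=` and restart policies like `Restart=`, the process manager can bring a whole tree of services down and back up without a human ever logging in.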
As for human intervention: people are obviously much better at adapting to their environment than machines are. As a species, we’ve lived through several ice ages[3] in the million years since the discovery of fire[4]. Such large upheavals drive many species to extinction, yet we kept on truckin'. I often marvel at artists’ renditions of the giant sloths and sabre-toothed tigers that just weren’t able to adapt.
Funnily enough, both of the above scenarios involve humans adapting: either by coming up with a much better idea for controlling system processes and winning market share by being the better system, or by simply staying alive. Neither would have happened without people carefully adapting to their surroundings by changing their minds and their perceptions of the world around them.
1. Fractal Foundation, “What is Chaos Theory?” - https://fractalfoundation.org/resources/what-is-chaos-theory/
2. systemd - https://systemd.io/
3. Sandy Eldredge and Bob Biek, “Glad You Asked: Ice Ages - What Are They and What Causes Them?” - https://geology.utah.gov/map-pub/survey-notes/glad-you-asked/ice-ages-what-are-they-and-what-causes-them/
4. K. Kris Hirst, “The Discovery of Fire” - https://www.thoughtco.com/the-discovery-of-fire-169517