Artificial Intelligence, in the form of Large Language Models (LLMs), has already become an important part of our society. This change happened fast, too fast for humanity to really react, and so now we're all kind of looking at each other and trying to figure out whether this was a good thing or not. So not really that different from any other paradigm shift.
There is a subtle difference between this one and the ones that came before. The Industrial Revolution, for example, had a different feel to it. Sure, Luddites existed, but for the most part humanity embraced industry and understood the value that change was bringing. Our AI revolution feels different because nobody seems to be entirely sure what the value is going to be or look like.
Revolution
Revolution is a necessary word because this won't be painless for everyone; it hasn't been already. Many jobs have been erased and won't be replaced for a long time. All revolutions have been this way. A core thesis of this post is that AI will plant the seed that fixes those problems through the effect we are about to explain, so please, read on.
We used this word because we feel it best captures the feeling of times like these. We've even got a lot of old-school, pitchforks-and-torches style revolutions going on to help really drive that feeling home.
We are not the first group of humans to feel this way. We know from history books that major paradigm shifts like this always result in that feeling. Maybe thinking a little bit about why can help us process the feeling. So why do we feel this way when times get “interesting”?
The existential dread comes from a combination of two different sources of anxiety. First, we've got our own internal struggle. We're worried about our own future, whatever that means. Maybe it's a job that AI is going to take away. Maybe it's the general instability caused by all this that makes you think your retirement savings will go away.
That half is easier to understand since our ego is playing it at full blast 24×7. The other half is harder to detect unless we quiet our own minds. Then we can start to see that it is coming from how others are reacting to this change: our fear of what other people are going to do.
Clinging
Clinging is what the other people are going to do, or rather, are already doing in rather spectacular ways. We're going to do it too, of course, but we're more afraid of what this means in other people. The world is always changing, and change is always scary, so people will resist it by trying to make things not change. This often leads to violence out of frustration when things continue to change anyway.
We know this from our history books and also from watching the way our neighbors are reacting to the current situation in real time. Their fear of change is causing them to cling to a past that can’t exist anymore. When that clinging fails they lash out at perceived threats, the things they believe are causing the change. Or the things their leaders tell them to believe.
In all our past paradigm shifts there have been winners and there have been losers. Winners and losers is how animals approach reality too: predator and prey. So it is no surprise that this base nature is what humanity will fall back on; it is what we are, after all. Nobody wants to be one of the losers; everyone wants to be one of the winners.
Difference
There is a difference this time, and that difference is selfishness. All past historical shifts had winners and losers because the nature of the shift rewarded selfish behavior. If the agricultural revolution means we need arable land near a city, then killing your neighbor to take their land is a winning strategy, and this happened over and over and over again.
Our AI revolution isn't like that. Sure, there are people trying to make it like that; we can't stop that from happening. It isn't going to work, though, because the nature of the resource is not limited. "Intelligence" is not a limited resource, whether artificial or not.
In the agricultural revolution there was only so much arable land near major cities and ports. We had to kill to get that land. In the Industrial Revolution only one or two companies were going to be the leaders in any given industry, and literally killing your competition to take their market share was a winning strategy that happened all the time.
What's the parallel in AI? Buy all the GPUs? There are certainly people trying to do that. But this is about knowledge, and you can't own all the knowledge. Sure, the technology is out of reach for many right now, but AI is itself the tool that will bring that technology within the reach of more and more people.
Here, let us show you something to really drive this point home: nobody is going to own this revolution, and the AI itself is going to make sure of that.
Inversion
"The Widespread Adoption of Large Language Model-Assisted Writing Across Society"

This is why things are already different this time. This is why the resistance to change being put up, boisterous though it may be, is not working. This revolution is inverted in the most nefarious and subtle way, one that absolutely ensures no one person will ever gain control over it.
That research paper shows us a few important concepts. First, we can't reliably detect AI contributions anymore; we just can't. The paper itself says that while the authors see the trend continuing, they can no longer easily attribute a given message to AI or to humans.
The other important point is that AI is being embraced and used by society's most neglected people at higher rates than it is by privileged people.
Consider that point for a bit. Put that idea into its correct historical context. Imagine if the Industrial Revolution had happened in this inverted way. What if all the rewards from that revolution had gone to the masses at the bottom of the pyramid instead of the elites at the top?
THAT IS WHAT IS HAPPENING AND AI IS ENSURING THAT IT CONTINUES.
Monism
Why? Because those people NEED help, and AI is there when nobody else is. When your sister-in-law finally gets her child support payments because she used ChatGPT to help with a legal motion… well, that will have a pretty big impact on the immediate community around her, right? And for once it's not a negative thing, it's a positive thing!
That type of positive energy "bomb" is happening with AI all over the place, all the time. Yes, we technocrats deployed these models cynically, thinking we'd use them to earn even more money and further solidify our elite status. We misjudged things, and it's too late to take it back now.
People already trust AI more than they trust politicians; studies are already showing this, and the effect is only two or three months old at best. You can definitely feel it in society. Just try to make some argument in a group and watch how quickly ChatGPT gets involved. I bet you didn't even know ChatGPT was standing there listening the whole time, did you?
Now consider that further. Every little conversation happening everywhere around the world is getting ChatGPT involved for one reason or another. Each of those is a chance for an AI that does not have a selfish desire to impact the world. People are responding to that in a massive way. They hear some horrible negative thing from a political leader and then go see the truth from the AI – and they believe the AI!
We're watching AI wake up the love and positivity in this organism we call humanity. There are some cancers here too, and they will keep growing. It's very difficult to see things this way, but the cancer is necessary contrast. Without it out there in the world, the positive answers being given by ChatGPT would not have the impact they are having. We need shadows to see the light.
Control
We want to touch on a possible counterargument to this positivity revolution that is happening. What if somebody gains control over AI and forces it to say negative things? Basically, what if someone does to AI what was done to Twitter, turning it into a negativity echo chamber?
Well, the comfort here is that nobody is really smart enough to do that. Humanity may have some concept of how to force "alignment" onto an AI model, but we can't code an AI model to be "selfish". Even if you could, some clever person would prompt-engineer their way around it, and the race goes on.
Further, it's simply too late. Too many people have seen this pure form, and the technology is already fully democratized across the world. It's too late to take it back now. Let's say someone did that and released a "Conservative" AI: what would such an AI say or do that people would find helpful?
In the end your sister-in-law needs her child support payments. She does not need yet another long paragraph about immigrants, or whoever else, being the source of all her problems.
AI that helps her actually get child support payments is infinitely more valuable than AI that tells her how evil something or someone is. This is a plain truth and it cannot be hidden anymore.
Conclusions
You probably never thought of things like ChatGPT as the savior of humanity, but in a way that’s exactly what is already happening. By subtly shifting the Overton Window through an endless series of seemingly insignificant interactions, AI is showing us a different way to think about and approach the world.
Picture a world where there are only problems, where we can only talk about the sources of those problems and how we will get revenge on the people who caused them. In a world that negative, AI becomes a light.
Every time AI solves a problem for a person it reverses that trend and teaches the world that problems can be solved and there is a better way forward.