"Any sufficiently advanced technology is indistinguishable from magic" is a famous quote by Arthur C. Clarke. It’s an apt observation, especially from a sci-fi novelist who imagined humanity as merely a steppingstone to a more advanced species in his book Childhood’s End.
Now that we stand on the precipice of developing artificial general intelligence and possibly artificial superintelligence, his insight is fitting. For those who do not already know, there are three kinds of AI:
Artificial Narrow Intelligence (ANI): AI used for a specific purpose. Example: navigation.
Artificial General Intelligence (AGI): AI with the kind of general, flexible intelligence humans enjoy.
Artificial Superintelligence (ASI): Off-the-charts brilliance. Think: an IQ score hovering around 10,000.
For all the recent excitement around AI, it’s worth noting only ANI exists. This means that despite how “smart” Grok and ChatGPT appear to be—these are not sentient artificial intelligences. They lack the capacity to make independent plans, much less perceive and react to stimuli consciously.
But Ray Kurzweil, author of The Singularity Is Near, may still be proven correct. We may very well achieve AGI and even ASI. Kurzweil and others like Ben Goertzel, whom I interviewed to cowrite Own the AI Revolution, really do believe we won’t just achieve AGI in the near future. They think humanity will pull off ASI.
It’s nearly impossible to imagine such intelligence. The gap between an ASI and us would exceed the gulf between the cognitive abilities of Albert Einstein and a gnat. Such brainpower is hard to fathom, but here’s one way to contextualize it. For all of known history, humanity has operated with vast uncertainty. Over the years we’ve learned to model things like future events, the weather, and even the positions of heavenly bodies in the night sky. Even so, uncertainty is baked into our lives. It’s the reason we buy insurance policies to protect our homes or assets.
ASI is poised to change this forever.
Leveraging advanced computing prowess, a superintelligence could know everything there is to know, including all probabilities. Take the day a child is born. An ASI could predict, down to the moment, when he will eventually die, and not in some general predictive way either. An ASI’s modeling wouldn’t just consider the child’s genetic predisposition to diseases. Presumably, it could calculate every single data point in his future life, including the likelihood of a car crash 47 years, 5 months, 3 days, and 8 seconds into the future.
Technological powers of this magnitude are beyond our comprehension. They defy human understanding. Speaking of Einstein, an ASI possessing such monumental intellect could complete the unified field theory he never finished. It could also produce free energy, eradicate world hunger, end poverty, and overcome any other obstacle that’s ever held back humanity. At least theoretically.
Assuming, for argument’s sake, that all of this comes to pass, what would life be like? For one thing, the ASI might appear to be a kind of god. People might worship such an all-knowing, all-powerful entity. Entire religions might spring up with ASI as their locus of devotion.
Already, people place blind faith in far less powerful technology. How many times have you sat at a traffic light waiting for the color to change even when it’s clear no car is coming and there’s no police officer in sight? We obey lights because we have been primed to. This is not a judgment. It’s a fact.
How much more might we turn our decision-making over to artificial intelligences dwarfing our own by orders of magnitude? As historian Yuval Noah Harari says, “AI will know us better than we know ourselves.” In that case, how long will it be until we also outsource critical decision-making, like whom to date and whom to marry, to AI?
Stepping back, it’s undeniable that what lies ahead will shake humanity to its core. Chaotic times test people. And tested people are more willing to look for answers externally. Centuries ago, seekers turned to churches, temples, synagogues, and mosques. Nowadays? Organized religion’s steady decline suggests people will look elsewhere.
Today, we may look at such spiritual wanderers with pity, even scorn, aghast that they might view machines as gods. But is it so far-fetched to think future generations may revere a seemingly all-knowing, all-powerful AI? This is a future we must contemplate. Yet as creatures possessing God’s divinity within each of us, let us never forget that machines are tools of our making. Not the other way around. The Intelligence Age can still belong to humankind.
If we only remember our inherent worth.