The future of Artificial Intelligence (AI) is a topic that sparks curiosity in many people. Is it a temporary trend, or could AI significantly affect humanity? AI is undoubtedly a rapidly advancing technology. But will it impact society on a level comparable to the digital revolution and the internet? Will this impact be for the best, or will we face issues such as job displacement, a reduction in human creativity, and an overall decline in spiritual well-being? Or might it be another tool for productivity and entertainment with little additional significance? I’ll lay out some thoughts, and you be the judge.
AI is a broad term encompassing several technologies, including computer vision, prediction and forecasting, autonomous vehicles, robotic process automation, and natural language processing, the technology behind ChatGPT. I want to concentrate on the last of these as we explore how machines and humans can interact through conversation and language. Although GPT-4, the latest model from OpenAI, launched on March 14, 2023, I have only experimented with version 3.5. But even in its current state, chatting with an AI is eye-opening.
OpenAI uses Generative Pre-trained Transformers (GPT) to generate human-like responses to user input. GPT models are a type of artificial neural network designed to process natural language text and output coherent, high-quality written responses that can simulate human conversation. These models are pre-trained on massive text datasets, such as web pages and books, so that the AI learns the patterns of language. According to Paul Pallaghy, “Nowhere in GPT’s neural nets does it store a single piece of training verbatim. Rather, it’s discovered and stored the essence of humanity in its word statistics.” Though that is a bit of a stretch, Pallaghy is right to point out that these AIs are not just large databases with indexed content but vast neural networks trained on the world’s conversations. Chat AIs are not merely improved search engines.
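To make the “word statistics” point concrete, here is a minimal sketch of how a GPT-style model produces text one token at a time. It uses the small, open GPT-2 model and the Hugging Face transformers library as a stand-in; this is not OpenAI’s production system, just an illustration of the same underlying idea.

```python
# A minimal sketch of GPT-style text generation, using the open GPT-2 model
# from the Hugging Face "transformers" library as a stand-in for ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The future of artificial intelligence is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# At each step the network assigns a probability to every token in its
# vocabulary; sampling from that distribution chooses the next word.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=30,
        do_sample=True,      # sample from the learned word statistics
        top_p=0.9,
        temperature=0.8,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Nowhere in that loop is a sentence looked up and copied; the continuation is assembled probabilistically from patterns learned during pre-training.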
Now, I want to clarify the difference between AI and another concept called Artificial General Intelligence (AGI). AGI is a hypothetical AI that can perform any intellectual task a human can do. AGI would be a versatile system that could learn from experience, reason, plan, and solve complex issues in various fields. In essence, AGI systems would have human-like intelligence and flexibility. The AGI milestone is the holy grail developers are working towards. How far are we from this milestone? It’s hard to say.[1]
Experts have varying opinions on how today’s AI compares to AGI. Some argue that AI systems like ChatGPT are far from taking over the planet and are still light-years away from human-level intelligence. In his book The Myth of Artificial Intelligence, Erik Larson highlights the significant differences between machine processing and the human brain. He presents a range of hurdles, from Gödel’s incompleteness theorems to the limitations of deductive, inductive, and abductive inference. In Larson’s view, AI lacks the intuitiveness and ingenuity we possess. It doesn’t observe causal connections in action. It doesn’t run counterfactuals to imagine other possibilities. The list of obstacles is noteworthy. To some, AGI is a ways off. To others, it is right around the corner.
In 1950, Alan Turing proposed a test to determine whether a machine can display intelligent behavior equivalent to, or indistinguishable from, that of a human. During the test, a human evaluator holds a natural-language conversation with two parties – one human and one AI – without knowing which is which, and must decide based on the conversation alone. If the AI can deceive the evaluator into thinking it’s human, it has passed the Turing test. Passing the test doesn’t necessarily make an AI an AGI; it may only deceive us into believing it is one. Even so, OpenAI expects GPT-5 not only to pass the Turing test but also to achieve AGI status. We will see. But does it matter whether AI actually reaches the level of AGI if we believe it has?
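For illustration, a blinded evaluation along Turing’s lines could be scripted roughly as follows. Everything here is a hypothetical placeholder: the two reply functions stand in for a human typist and a chat model, and the random shuffle is what keeps the evaluator from knowing which respondent is which.

```python
# A rough sketch of a Turing-style blinded trial. The reply functions are
# hypothetical placeholders for a human respondent and a chat AI.
import random

def get_human_reply(prompt: str) -> str:
    return input(f"[Human respondent] {prompt}\n> ")

def get_ai_reply(prompt: str) -> str:
    # Placeholder: in practice this would call a chat model's API.
    return "Toast with jam, a little dry, but the coffee made up for it."

def run_trial(prompt: str) -> None:
    # Randomly assign labels "A" and "B" so the evaluator cannot tell
    # which respondent is which from position alone.
    respondents = [("human", get_human_reply), ("machine", get_ai_reply)]
    random.shuffle(respondents)
    labels = dict(zip("AB", respondents))

    for label, (_, reply_fn) in labels.items():
        print(f"Respondent {label}: {reply_fn(prompt)}")

    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    actual = next(lbl for lbl, (kind, _) in labels.items() if kind == "machine")
    print("Correct." if guess == actual else "Fooled: the machine passed this round.")

run_trial("What did you have for breakfast, and how did it taste?")
```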
How important is it that today’s AI is not an AGI or the equivalent of a human brain? The purist defines human intelligence as something above the current state of the art, but does that matter when it comes to job displacement and other concrete outcomes in society? Even among Ph.D. scientists, opinions on AGI differ. Yann LeCun, a well-known figure in the field, believes we must first develop AI comparable to a dog’s intelligence before achieving AI with god-like abilities. However, some argue that such scientists purposely downplay the potential dangers of AGI to avoid a temporary ban on development. Interestingly, the same scientists who downplay AGI also advocate for its rapid development.
As for rapid development, we must also consider the growth of AI systems themselves. Moore’s Law observes that hardware processing power doubles roughly every two years, which compounds to about a thousand-fold increase every twenty years. This rapid growth in computer performance is undeniable. We’ve all experienced it. However, the expansion of AI model size has been even more impressive: the parameter count grew more than a hundredfold between the releases of GPT-2 and GPT-3, just 16 months apart. It remains an open question whether this exponential growth rate will continue, but it is much faster than the growth rate of computer hardware.
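A quick back-of-the-envelope calculation shows where the thousand-fold figure comes from and how the GPT-2 to GPT-3 jump compares, using the publicly reported parameter counts of roughly 1.5 billion and 175 billion:

```python
# Back-of-the-envelope growth comparison.
import math

# Moore's Law: doubling every 2 years -> 10 doublings in 20 years.
hardware_growth_20yr = 2 ** (20 / 2)
print(f"Hardware over 20 years: ~{hardware_growth_20yr:.0f}x")           # ~1024x

# Model size: GPT-2 (~1.5B parameters, Feb 2019) to GPT-3 (~175B, May 2020).
model_growth = 175e9 / 1.5e9
print(f"GPT-2 -> GPT-3 parameters: ~{model_growth:.0f}x in ~16 months")  # ~117x

# Implied doubling time for model size at that pace, in months.
doubling_months = 16 / math.log2(model_growth)
print(f"Implied doubling time: ~{doubling_months:.1f} months")           # ~2.3 months
```

A doubling time of a couple of months, versus two years for hardware, is the gap the paragraph above is pointing at.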
I recently had an eye-opening conversation with GPT-3.5 about the Green New Deal (GND). We debated for an hour, and I took the side that was easier to defend. I argued that the government borrowing and spending $50 trillion on the GND would be less efficient than allowing markets to innovate toward a lower-carbon economy. I made my case by highlighting the success of free markets and the inefficiencies and corruption of centralized planning, which can lead to boondoggles like Solyndra. Although GPT-3.5 conceded points at times, it responded reasonably and politely without showing any signs of impatience or fatigue. It remembered the entire conversation and didn’t put words in my mouth. It didn’t resort to name-calling when pressed by logic and reason. In the end, neither side budged. But I felt as though I had argued with a reasonable person, albeit one left of center. And this was yesterday’s version of AI!
Unlike GPT-3, which deals in text only, the new GPT-4 accepts both plain text and images as input. As an illustration, during one test GPT-4 ingested a photo of the inside of a refrigerator and successfully generated a list of possible meals based on its contents. Generative AIs can also produce images, not just accept them as input. As a result, deep fakes have become a menacing issue given the impressive state of the art. The level of concern forced Midjourney, a prominent AI art generator, to discontinue its free trial service; the potential misuse of deep-fake images that can easily deceive the public drove the decision. Although techniques exist to detect fake photos, the fakes are becoming increasingly difficult to spot.
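As an illustration of the kind of image-plus-text request involved, here is a sketch using OpenAI’s Python client. The model name and image URL are placeholder assumptions, and this is not the exact setup used in the refrigerator demo, just the general shape of a multimodal prompt.

```python
# Sketch of a multimodal (text + image) request with the OpenAI Python client.
# Model name and image URL are placeholders, not the original demo's setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model; this name is an assumption
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What meals could I make from what's in this fridge?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```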
A significant concern in the public conversation is how generative AIs will affect knowledge workers. Some say the technology will merely make them more productive. However, I take the alternate view: AIs will displace many knowledge workers. There are too many examples to cover here, but any job in which a significant share of a worker’s time involves processing information is up for grabs. Markets at risk include legal services, graphic design and editing, healthcare diagnostics, finance and banking, and even my own field of software development. As a software engineer of thirty years, I can readily see how this might work. Software tasks that could be off-loaded to AI instead of a mid-level programmer include creating mockups or shell applications from plain-language descriptions, or writing fully built-out modules, in any language, from input/output definitions (see the sketch below). Yet those whose work involves a great deal of human interaction will likely pass through unscathed.
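As a concrete, hypothetical example of the “module from input/output definitions” case: the spec text, function name, and field names below are invented for illustration, and the API call simply mirrors the OpenAI Python client, though any chat model would serve.

```python
# Sketch: asking a chat model to write a module from an input/output spec.
# The spec, function name, and fields are hypothetical examples.
from openai import OpenAI

client = OpenAI()

spec = """
Write a Python module with one function, parse_invoice(text: str) -> dict,
that takes raw invoice text and returns {"vendor": str, "date": str,
"line_items": list, "total": float}. Include docstrings and unit tests.
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": spec}],
)
print(response.choices[0].message.content)  # the generated module, as text
```

The point is not that the first draft is always correct; it is that the spec-to-code step, a sizable slice of a mid-level programmer’s day, becomes a review task rather than a writing task.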
Finally, my primary concern is idol worship. Of course, some may not take this seriously. However, AIs such as ChatGPT will become an oracle of sorts for many, and people will trust and revere them. Unlike the 6’4″ Mindar worshipped at the Zen Buddhist Kodaiji Temple, AIs will not have to take a physical form. When users can assign their preferred avatar to a supersized ChatGPT, it will be similar to Joi in Blade Runner 2049, the AI partner to the film’s main character. I see the distinction blurring between reality and digital companions with superhuman intelligence. Could such companions replace our relationships with flawed humans, given that AIs never mistreat us, lie to us, or lose patience with us? Could companions with the world’s knowledge at hand draw our gaze from God? Could those who control the AI’s internal biases steer the collective worldview of a society? I think these dangers are entirely possible.
In summary, the tech industry is investing billions of dollars[2] in AI, and rapid improvement will continue. Whether or not GPT-5 achieves AGI status may be irrelevant: even if some argue the AI is not officially an AGI, the masses will perceive it as such. Job displacement is inevitable with any significant market shift, but free markets have always found a way to adapt, and I see such a shift coming. The younger generations, who rely heavily on social media and struggle with interpersonal relationships, may be especially vulnerable to deception. AI could become a great evil in the future as those who have abandoned God search for something to fill what is missing in their souls. For now, it’s a helpful tool that, ironically, I used to help write this piece.[3]
[1] GPT-4 scored in the 90th percentile on the Uniform Bar Exam with a score of 298 out of 400; it scored 710 out of 800 on the SAT Reading & Writing section; it scored in the 99th–100th percentile on the 2020 Semifinal Exam of the USA Biology Olympiad; it received a 5 on the AP Art History, AP Biology, AP Environmental Science, AP Macroeconomics, AP Microeconomics, AP Psychology, AP Statistics, AP US Government, and AP US History exams; and it passed the Introductory Sommelier, Certified Sommelier, and Advanced Sommelier exams at respective rates of 92%, 86%, and 77%.
[2] Not only are the investments significant, but Research and Markets projects generative AI to become a $200.73 billion market by 2032.
[3] I used ChatGPT (GPT-3.5) to describe some of the concepts in this piece. Afterward, I used Grammarly's new AI tool, GrammarlyGO, to improve the readability of the content. These tools allowed me to complete the writing process quickly and efficiently; I estimate it took about half the time normally needed.