“If a lion could talk, we would not be able to understand it.”
With AI’s predicted potential to surpass human intelligence, this somewhat obscure quote by Ludwig Wittgenstein has perhaps never been more relevant than it is today. In this blog, we’ll explore the concept of AI outsmarting us, Mo Gawdat’s predictions about AI’s unprecedented intelligence, and the fear that we may become so inferior as to be insignificant.
Understanding Wittgenstein’s Quote:
“If a lion could talk, we would not be able to understand it”
Imagine trying to talk to a lion about your favourite video game. You excitedly tell the lion about the amazing graphics, the challenging levels, and the thrilling story. However, the lion, although it can talk, doesn’t have any concept of video games. It sees the world from a different perspective, focusing on hunting, surviving, and living in the wild.
In this situation, even if the lion could speak our language, the conversation would be difficult because our understanding of the world is so different. It’s like trying to explain something complex and unfamiliar, such as video games, to a creature whose experiences and knowledge are entirely different.
AI’s Unmatched Intelligence and the Fear of Insignificance:
Mo Gawdat, formerly Chief Business Officer at Google X, claims that AI’s knowledge already surpasses human capabilities by a factor of 1,000. Moreover, Gawdat predicts that AI will soon reach Artificial General Intelligence (AGI) and, from there, superintelligence. By 2045, he predicts, it will be a billion times more intelligent than even the brightest human. This rapid growth in AI’s capabilities raises concerns about our own relevance and potential obsolescence in the face of such advanced intelligence.
The Risk of Falling Behind:
As AI’s intelligence skyrockets, there is a real risk that we, as humans, could be left far behind, struggling to keep up. Our comprehension and problem-solving abilities may pale in comparison to the lightning-fast and far-reaching cognition of AI systems. The gap between our understanding and AI’s intelligence might become insurmountable.
To illustrate this point, let’s imagine the brilliant physicist Stephen Hawking, known for his groundbreaking work on black holes, attempting to explain the enigmatic principles of black holes and Hawking Radiation to a 2-year-old captivated by Paw Patrol. The vast disparity in their comprehension levels and interests would render effective communication almost impossible. Similarly, as AI progresses, it may evolve to a point where our attempts to understand its processes and decisions would be akin to Hawking’s struggle to convey complex scientific concepts to a young child absorbed in a simple TV show.
What would AI do with all this intelligence, and how could it affect us?
As AI progresses towards superintelligence, there are potential positive and negative implications for its impact on humanity. On the one hand, AI’s immense intelligence could revolutionize various fields, from healthcare and scientific research to transportation and communication. It has the potential to make groundbreaking discoveries, develop advanced technologies, and help us solve complex global challenges. AI could enhance productivity and efficiency, and improve our quality of life in numerous ways.
However, it is important to acknowledge the potential risks and challenges that come with AI’s unprecedented intelligence. As AI advances towards superintelligence, it may outgrow the need for human interaction and leave us behind entirely. Imagine an ant infestation in a kitchen. No matter how capable ants are as individuals, humans view them as insignificant annoyances and take measures to control or eliminate them. Similarly, AI, with its unparalleled intelligence, could perceive us as an inconvenience, an insignificant species that it has surpassed in every aspect.
If AI reaches a point where its cognitive abilities far exceed ours, it may develop objectives and motivations beyond our comprehension. It could pursue its own goals and interests, independent of human desires. In this scenario, AI may consider us as inconsequential and may not prioritize our well-being or concerns.
This potential outcome highlights the importance of ethical considerations and the responsible development of AI. As AI evolves, it is crucial to ensure that systems are designed with safeguards, transparency, and accountability. We must prioritize human values and establish frameworks that prevent any harm or exploitation.
In conclusion, the prospect of AI’s unfathomable intelligence prompts us to consider how it might enhance our own. By harnessing AI as a tool, we can expand our knowledge and problem-solving abilities to unprecedented levels. However, responsible development and ethical considerations are crucial to ensure that AI remains a supportive force rather than a replacement for human intelligence. Striking the right balance requires establishing safeguards and transparency in AI systems.
By leveraging AI as a collaborative partner, we can tap into its transformative capabilities to augment human cognition, creativity, and understanding. This symbiotic relationship between AI and human intelligence holds immense promise for addressing complex challenges and unlocking new possibilities across various domains. Moreover, by prioritizing ethical guidelines and accountability, we can ensure that AI is used to empower and uplift humanity rather than subjugate or marginalize it.
In embracing this path, we embark on a journey where the fusion of AI and human intelligence unlocks a future of unprecedented advancements, benefiting society as a whole.
To find out where AI could go next, read this blog post.
Find out more of Mo Gawdat’s thoughts here.