From Skynet to The Matrix, the possibility of machines one day being able to outsmart us has provided the backdrop to stories from science fiction books to the silver screen.
But with AI technology evolving at rapid speed, will it soon be time to say “hasta la vista, baby” to humans’ control over machines?
Although many now agree that the trajectory towards Artificial General Intelligence (AGI) has been set, experts are divided on when we’ll reach this technological milestone. Several subscribe to the vague notion that it could happen “around 2050.”
Still others, such as DeepMind co-founder Shane Legg, believe it could come as soon as 2028. When asked earlier this year, OpenAI CEO Sam Altman offered only a vague timeline, placing the development of AGI in the “reasonably close-ish future.”
But the real question is:
How will we know we’ve reached AGI?
AGI is defined as AI with human-like intelligence and the ability to self-teach. But what are the markers of human intelligence? What distinguishes us?
With AI having long since passed the Turing Test, scientists have proposed a number of new tests to assess AI’s abilities. These range from passing university-level courses to performing jobs better and more efficiently than humans.
With GPT-4 having passed the bar exam last year, this reality is becoming more and more tangible.
Will AGI actually pose a threat to humanity?
Last year, over a thousand AI experts and big names in tech signed an open letter calling for the training of AI systems more powerful than GPT-4 to be put on pause until we can address their potential threat to humanity. Shortly thereafter, another thousand AI experts signed a different open letter saying AI is a force for good and not a threat.
Whether it’s the now very common concern that AI will take our jobs, or the more extreme possibility that an AI will one day refuse its creators’ attempts to reprogram it, perhaps the biggest fear on all sides is the fear of the unknown.
We’re headed into a future filled with hopes for life-changing medical breakthroughs and novel answers to global problems ranging from climate change to world hunger. But at the same time, the spectre of what technological advancement has done for warfare, inequality, and more looms in the darker corners of our minds.
Whichever outcome you subscribe to, suppose we all suddenly became characters in a pre-apocalyptic scenario of our own making:
What’s the best defence strategy against an imminent robot uprising?
We rounded up three experts at TNW Conference 2023: Rana Gujral, CEO of Behavioural Signals; Janet Adams, COO of SingularityNET; and Shaun McGirr, EMEA Field Chief Data Officer at Dataiku. Here’s what they had to say:
Worried about an AI apocalypse? Find out what DeepMind COO Lila Ibrahim had to say about how we can start building a responsible future for AI and humanity now.