Recently, a New Zealand-based supermarket was miffed to find its AI meal bot going haywire. Instead of providing wholesome recipe suggestions using its products, it had begun suggesting dishes such as “bleach-infused rice surprise” and “mysterious meat stew” (with the mysterious meat being human flesh).
While this may have been a bit of fun for the internet pranksters who prompted the bot with ever more outlandish ingredients, it also raises a growing concern: what can happen when AI falls into the wrong hands?
Just a year earlier, researchers repurposed an AI trained to discover helpful new drugs to generate 40,000 potential chemical weapons in under six hours.
Even when AI does what it’s trained to do, we’ve already seen many examples of what can happen when algorithms are developed without oversight, from dangerous medical misdiagnoses to racial bias to the creation and spread of misinformation.
With the race to develop ever more powerful large language models ramping up, at TNW 2023, we asked AI experts: “Will AGI pose a threat to humanity?”
Whether or not you believe in an apocalyptic, Terminator-style future, we can all agree that AI needs to be developed responsibly. However, as usual, innovation has vastly outpaced regulation. As policymakers struggle to keep up, the fate of AI depends largely on the tech community coming together to self-regulate, embrace transparency, and, perhaps most unusually, actually work together.
Of course, this means more work for companies developing AI. How do you build a framework for developing responsible AI? And how do you balance it with the pressure to innovate and meet the expectations of board members and investors?
At TNW 2023, we spoke with Lila Ibrahim, COO of Google’s AI laboratory DeepMind. She shared three essential steps for building a responsible future for AI and humanity.