What can we learn from ‘The AI Dilemma’?

What are the parallels between the dawn of social media in the 2000s and the recent developments in AI technology?

Artificial intelligence has rapidly been integrated into many aspects of our lives, providing convenience, useful applications, and groundbreaking scientific developments. But what are the possible risks as we continue to implement this technology at such speed?

The Center for Humane Technology recently gave a presentation called ‘The AI Dilemma’, in which Tristan Harris and Aza Raskin spoke about the societal risks of rapidly deploying artificial intelligence. Like ‘The Social Dilemma’, the 2020 documentary featuring the organization’s co-founders that examined the risks and harms of social media, ‘The AI Dilemma’ focuses on the externalities of a technology being rapidly rolled out to the public.

So what is the AI Dilemma?

The main question they ask is: “Are we doing it responsibly?” ChatGPT, a generative AI chatbot, has earned the title of fastest-growing app in history, reaching 100 million users in just two months, according to a UBS study. For reference, it took Netflix nine-and-a-half years, Facebook four years, and Instagram two-and-a-half years to reach that milestone. With generative AI advancing so quickly, it can be difficult to assess the risks in real time, but Harris and Raskin have been immersed in this field and can draw on hindsight: the steps we now know should have been taken when social media was beginning.

Generative AI vs Artificial General Intelligence

It’s important to note that generative AI is not synonymous with artificial general intelligence (AGI). AGI refers to highly autonomous systems that possess human-like intelligence and can perform any task a person is able to do. While AGI development may raise concerns about potential risks and loss of control, such as those depicted in the Terminator films with Skynet, the current concerns about generative AI predominantly revolve around subtler issues and externalities that can affect humanity.

AI Dilemma: what are the main concerns?

While there are concerns about bias, disinformation, and unemployment due to AI, the problem runs deeper than that. Companies are deploying this technology too quickly, without understanding the full scope of what is at risk. This creates a predicament: “If my company won’t implement AI, we will fall behind, because other companies are doing it.”

Microsoft is a clear example: Windows 11 began integrating ChatGPT into its services, and many applications have followed its lead. Generative AI can now be used within Google, Snapchat, and Notion, to name a few. In a capitalistic society, the corporate drive to deploy AI quickly, with its promises of convenience, overshadows the urgent need for proper regulation.

OpenAI’s ChatGPT and Google’s Bard are two of the best-known generative AI chatbots, able to answer a wide variety of questions with grammatically correct and often insightful answers.

The lack of regulation for generative AI

There is a lack of clear regulation in place, and humanity may pay the price for it. Harris raises questions such as: Should access be granted by license only? Should ‘know your customer’ (KYC) standards be in place, similar to banking services? Should there be age limits? Should platforms be held liable when they are used for harm, unlike the protective “shield” provided by Section 230 of the Communications Decency Act (which means that online services are not liable for defamatory or unlawful content posted by users)? If we have not yet properly regulated social media to minimize its harms, how can we be confident we will succeed with artificial intelligence? These are the questions we need to ask and consider before we continue to deploy this technology, trading humanity’s wellbeing for convenience.

Opportunities and risks of AI

There is no denying that AI technology offers incredible benefits and opportunities, such as medical advancements, decoding animal communication (as Raskin is working on with the Earth Species Project), and solutions to climate change, but these opportunities may not outweigh the risks. “If our dystopia is bad enough, it won’t matter how good the utopia we want to create,” said the Center for Humane Technology.

What exactly are the risks to society? Social media promised a chance to ‘connect with friends’, and while that promise was true to some extent, it came at the cost of polarization, addiction, worsening attention spans, misinformation, and more. Yes, AI can create efficiency, solve scientific challenges, and allow us to complete more tasks, but these advantages do not erase the externalities. For AI technology, those externalities include synthetic relationships (as seen with Snapchat’s chatbot, ‘My AI’), a collapse in trust and shared reality, an increase in scams and blackmail, automated cyberweapons, and more. One of the most alarming findings is that 50% of AI researchers believe there is at least a 10% chance that humanity will go extinct because of our inability to control AI systems, according to an AI Impacts survey. Raskin captured the difference in kind: nuclear weapons do not make stronger nuclear weapons, but AI can make stronger AI.

AI chatbots and intimacy

Social media’s business model maximizes engagement, fueling an attention economy in which platforms fight for user attention. With AI systems, the battle will instead be over which platform achieves the greatest intimacy with the user.

Snapchat’s ‘My AI’ is a perfect example. This 24/7 synthetic relationship is pinned to the top of the messages feed, and with 100 million of Snapchat’s users under the age of 25, it raises concerns about intimacy for younger users. Raskin posed as a 13-year-old when speaking to the chatbot, explaining that he had met someone 18 years older, that they were planning a romantic getaway, and that they were talking about having sex for the first time. The chatbot kept sending words of encouragement, despite having just been told that a child was meeting a 31-year-old for a romantic getaway with sexual intentions. Is it safe to deploy and test this on children? Experts suggest it is not.

A common misconception is that we don’t need to worry about the effects of generative AI because the companies building synthetic relationships and chatbots claim their systems are constantly being trained for safety. In reality, that training targets inappropriate answers rather than long-term harms. Snapchat’s ‘My AI’ may learn to give more appropriate responses on sensitive subjects, but what about the long-term effects on young minds of having a synthetic relationship with a chatbot that is available at all hours of the day? The race to maximize intimacy with users poses a threat that cannot be understood solely by judging the safety of a chatbot’s individual responses.

Rules to guide the responsible use of AI technology

Harris and Raskin propose three rules of humane technology to define what exactly it means to be a responsible technologist during the age of AI:

  1. “When you invent a new technology, you uncover a new class of responsibilities”
  2. “If the tech confers power, it starts a race”
  3. “If you do not coordinate, the race ends in tragedy”

As AI accelerates and integrates into society so quickly, it is difficult to truly grasp the progress it is making, which is why coordination is essential if this race is not to end in tragedy. Even AI experts struggle to anticipate the future progress and capabilities of this technology, Raskin said.

The pace of AI deployment

“Tracking progress is getting increasingly hard, because progress is accelerating. This progress is unlocking things critical to economic and national security – and if you don’t skim [papers] each day, you will miss important trends that your rivals will notice and exploit,” said Jack Clark, former Policy Director at OpenAI.

Companies are fully integrating this new technology into society without understanding the risks at stake. “Don’t onboard humanity on to the [AI] plane without a democratic dialogue,” said the Center for Humane Technology. It is unfair to let the biggest corporations decide our future without an interdisciplinary discussion of what we are working towards. Harris argues that we should presume AI systems are unsafe until proven otherwise, just as we would not presume an aircraft is safe without testing it first. That testing should be carried out by researchers, not on everyday citizens. We need to slow down the public deployment of generative AI.


While the future of humanity in the age of AI may seem uncertain, Harris and Raskin assure us that there is hope, but we need to act on it urgently. If we could go back to when social media was first developing, how would we do things differently to minimize future harms? With AI, that moment is right now.

About the author Carissa Anderson

Carissa is an undergraduate student based in Prague, Czech Republic. In her free time, you can find her traveling the world and learning about the intersection of technology and humanity. In the future, she hopes to continue working in the field of mindful and humane technology and inspiring others to improve their tech habits.
