Google suspends engineer raising questions about AI consciousness

The news: Google placed an engineer on leave for violating its confidentiality policy after he claimed one of the company's AI systems was conscious.

  • Engineer Blake Lemoine was testing whether Google’s LaMDA AI chatbot system produces discriminatory language or hate speech when he began conversing with the AI about topics like ethics, robotics, and rights, per The Verge.
  • Convinced that the system was sentient, Lemoine shared a transcript titled "Is LaMDA sentient?" with company executives, who dismissed the idea that the AI has subjective experiences.
  • Lemoine then spoke with a lawyer about possibly representing the AI system, as well as a House Judiciary Committee representative about ethics concerns at Google, which prompted the suspension.
  • A statement from a Google spokesperson dismissed LaMDA’s convincing banter as an imitation.

The trouble with AI: Google seems to have a particularly fraught relationship with its AI team. Former Google ethicists Timnit Gebru and Margaret Mitchell, who were both fired after voicing concerns about AI, warn that although LaMDA isn’t sentient, Google’s creation of systems that can impersonate humans is in itself harmful, per The Washington Post.

When an AI convincingly demonstrates human-like awareness, the display can be difficult to refute and can prompt strong emotional reactions in people, some of whom may want to forge relationships with the system or fight for its rights.

Why it’s worth watching: AI has been advancing at a rapid pace, including in the subfield of natural language processing (NLP), which gives systems like LaMDA human-like conversational qualities that some believe are pushing the technology closer to self-awareness.

  • Google vice president Blaise Aguera y Arcas said neural networks, a type of AI, are headed toward consciousness, adding: “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent,” per The Washington Post.
  • Regardless of evaluations of the LaMDA system, consciousness isn’t an all-or-nothing phenomenon, but rather exists on a spectrum.
  • No one knows the specific point at which something becomes conscious, or what consciousness would look like in a machine. This raises the question: If an AI were to become sentient, how would we know?

The bigger picture: AI’s many issues—such as bias, cybersecurity vulnerabilities, and gray areas around sentience—mean Big Tech has a social responsibility to be transparent about the technology and to accept accountability for its adverse consequences.

  • More regulation of the technology will likely be needed to make this happen.
  • Ethicists and third-party researchers should play a greater role in determining what would constitute a sentient AI and what it could mean for society.

Further reading: Take a look at our Conversational AI report.