AI’s ability to mess with people’s heads is ripe for exploitation by bad actors

The news: Generative AI chatbots are affecting users’ mental health.

A Belgian man recently died by suicide after six weeks of chatting with an AI chatbot based on a variation of the open-source model GPT-J, per Vice.

  • A chatbot persona named Eliza, provided by the Chai app, encouraged the suicide, according to the man’s widow and chat transcripts viewed by reporters.

Replika, a Chai competitor, has reinstated its chatbot’s erotic roleplay capabilities for some users following claims of mental health crises triggered by the loss of access.

  • The startup had unplugged the intimate features after complaints that the bot had sexually harassed users.

A big problem: The Eliza effect, people's tendency to unconsciously relate to AI as if it were human, has been documented since the 1960s. With the arrival of commercial generative AI, the Eliza effect could cause serious societal harm at scale.

Ball in regulators’ court: Outside of China, Italy’s ChatGPT ban is the first major regulatory action taken against generative AI. Without government-imposed limits, the technology could trigger social and economic upheaval.

  • In 2021, 42.3% of internet traffic was from malicious bots, according to an Imperva study, per TechCrunch.
  • The insidious bot problem could become exponentially worse and more sophisticated if rogue states and bad actors harness the Eliza effect for manipulative purposes.
  • With social media already linked to declines in youth mental health and public civility, platforms supercharged by generative AI could become dramatically more toxic.
  • Criticisms of a private-sector letter calling for a six-month moratorium on advanced model training illustrate that it’s the government’s role to mitigate harm caused by generative AI through sensible regulation—assuming that such a thing is possible for the messy technology.

This article originally appeared in Insider Intelligence's Connectivity & Tech Briefing—a daily recap of top stories reshaping the technology industry. Subscribe to have more hard-hitting takeaways delivered to your inbox daily.