Google takes a stand on deepfake AI, bans training models in lab

The news: Google is banning the training of AI systems that can be used to create deepfakes on its platforms, such as Google Colaboratory, per TechCrunch.

Disallowing deepfakes: Deepfakes, which superimpose one person’s face onto another’s body to create convincing videos, have become increasingly realistic thanks to artificial intelligence. AI can match body movements, microexpressions, and skin tones more accurately than CGI animation.

BleepingComputer spotted the updated terms of use, which add deepfake-related work to the disallowed-projects list on Google’s Colab, a service that lets coders write and execute arbitrary computer code through a web browser.

  • Colab has been a key platform for running demos within the AI research community. Google has taken a laissez-faire attitude on what code it allows on Colab, potentially attracting nefarious users.
  • Users of the open-source deepfake generator DeepFaceLab recently received error messages after trying to run DeepFaceLab in Colab. 
  • The warning read: “You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.”
  • Some deepfake code will still run without errors or warnings. For example, FaceSwap, another deepfake app, still runs without issue.
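Google has not disclosed how Colab flags disallowed projects, and the spokesperson quoted below declines to reveal specifics. Purely as an illustration, a platform could implement a naive version of this with a keyword scan over notebook cells; the pattern list, function name, and warning-matching logic here are hypothetical, not Google’s actual method. Consistent with the article, only DeepFaceLab-style code is flagged in this sketch, while FaceSwap passes through.

```python
import re
from typing import Optional

# Hypothetical patterns a platform might associate with disallowed
# deepfake projects. Illustrative only -- not Google's real rule set.
DISALLOWED_PATTERNS = [
    r"\bdeepfacelab\b",
]

WARNING = (
    "You may be executing code that is disallowed, and this may "
    "restrict your ability to use Colab in the future."
)

def scan_notebook_cell(source: str) -> Optional[str]:
    """Return a warning message if the cell text matches a disallowed
    pattern, or None if the cell is allowed to run."""
    lowered = source.lower()
    for pattern in DISALLOWED_PATTERNS:
        if re.search(pattern, lowered):
            return WARNING
    return None

# A cell cloning the DeepFaceLab repo would be flagged...
print(scan_notebook_cell("!git clone https://github.com/iperov/DeepFaceLab"))
# ...while FaceSwap code, per the article, still runs without issue.
print(scan_notebook_cell("import faceswap"))
```

In practice, Google’s systems are presumably far more sophisticated (and, per the spokesperson, deliberately opaque), since a simple keyword filter is trivial to evade by renaming identifiers.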

The bigger picture: “Deterring abuse is an ever-evolving game, and we cannot disclose specific methods as counterparties can take advantage of the knowledge to evade detection systems,” a Google spokesperson told TechCrunch. “In general, we have automated systems that detect and prohibit many types of abuse.”

  • Activities Colab already prohibits under its terms of service include running denial-of-service attacks, cracking passwords, and downloading torrents, all potentially illegal activities.
  • Hackers increasingly use deepfakes to spread disinformation and commit fraud. The number of deepfakes online grew from around 14,000 in 2019 to 145,000 in 2021.

What’s next? Google and other companies looking to regulate deepfake AI will need to enforce more comprehensive controls to effectively clamp down on its use.

  • Uneven deepfake regulation could result in a rise in disinformation as well as a proliferation of extortion and fraud schemes.
  • Deepfake scams are already on the rise, per SHRM; Forrester Research estimated that these scams cost $250 million in 2020.