How TrueMedia Fights the Battle Against Fake News: AI in Action

Oren Etzioni discusses AI, AGI, Misinformation, and Balancing the Benefits of Open-source

If you like Turing Post, please consider supporting us today

Oren Etzioni, the founding CEO of the Allen Institute for AI (AI2), has turned his attention to fighting political misinformation. In January 2024, he left AI2 and founded TrueMedia.org, a non-profit platform that uses AI detection tools to identify deepfakes and manipulated media. What motivated Etzioni to launch this platform? What tools does it use to fight misinformation? How can we balance the benefits of open-source AI with its risks? He shares these and other insights on responsible AI development in the interview below.

Hi Oren, happy to have you for this interview. You are known as the founding CEO of the Allen Institute for Artificial Intelligence (AI2), a venture partner at Madrona VC, and an entrepreneur with a long history of successful exits. And then suddenly, you quit your CEO job and started TrueMedia, a non-profit. What inspired or pushed you to do it?

I'm very proud of the success of AI2. After nine years taking it from nothing to a $100M+ budget and 250 team members, it was time for a break (I took a sabbatical in 2023) and a new challenge. My new challenge is fighting political deepfakes, and I'm also very proud of truemedia.org, which offers a free public service to help people assess whether images, audio, and video shared via social media have been manipulated.

What are the most striking cases you are observing in the current tsunami of misinformation? How do they make you feel?

We see misinformation and disinformation in every election in 2024, including in Indonesia, India, Mexico, Taiwan, and more. I'm very concerned about what we will see in November this year. The fake Biden robocall in the New Hampshire primary is a canary in the coal mine: a lot more political deepfakes are coming!

Please tell us about the specific tools TrueMedia.org has developed to fight AI-generated disinformation, and how effective they have been in real-world applications so far. 

We use a set of different AI detectors, some of which we developed in-house, though most come from technical partners. We partner with the best detection companies in the world to bring all their cutting-edge technologies to our users. Our detectors look at different aspects, which are broken down into four categories:

  1. Face Manipulation - Distinguishes deepfaked faces from real ones, and detects whether other methods were used, such as face blending, swapping, or re-enactment.

  2. Generative AI - Detects whether the image was created with popular generation tools, specifically DALL-E, Stable Cascade, Stable Diffusion XL, CQD Diffusion, Kandinsky, WĆ¼rstchen, Titan, Midjourney, Adobe Firefly, Pixart, Glide, Imagen, Bing Image Creator, LCM, Hive, DeepFloyd, and any Generative Adversarial Network (GAN).

  3. Visual Noise - Detects whether artifacts from manipulation or generation are present in an image, including pixel-level and color variations. When an AI tool creates or modifies an image, certain types of visual noise can remain.

  4. Audio - Detects whether there are traces that the audio has been manipulated or cloned.
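
(From the editor: To make the ensemble idea above concrete, here is a minimal Python sketch of how scores from several detectors might be combined into a single verdict. This is an illustration under our own assumptions, not TrueMedia's actual code; the detector names, weights, and threshold are invented for the example.)

```python
from dataclasses import dataclass

# Hypothetical detector interface: each detector returns a probability
# in [0, 1] that the media item has been manipulated or generated.
@dataclass
class DetectorResult:
    name: str      # e.g. "face_manipulation", "generative_ai" (illustrative names)
    score: float   # probability of manipulation, 0.0-1.0
    weight: float  # how much this detector counts in the ensemble

def aggregate_verdict(results: list[DetectorResult], threshold: float = 0.5) -> str:
    """Combine per-detector scores into one verdict via a weighted average."""
    total_weight = sum(r.weight for r in results)
    if total_weight == 0:
        return "uncertain"
    combined = sum(r.score * r.weight for r in results) / total_weight
    if combined >= threshold:
        return f"likely manipulated ({combined:.0%} confidence)"
    return f"likely authentic ({1 - combined:.0%} confidence)"

# Example scores, one per detector category described above.
results = [
    DetectorResult("face_manipulation", 0.92, 1.0),
    DetectorResult("generative_ai",     0.85, 1.0),
    DetectorResult("visual_noise",      0.40, 0.5),
    DetectorResult("audio",             0.10, 0.5),
]
print(aggregate_verdict(results))  # -> likely manipulated (67% confidence)
```

A production system would weight and calibrate detectors far more carefully, and might use a learned combiner rather than a fixed average, but the overall structure, many specialized detectors feeding one aggregate score, is the same.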

What research areas are the most interesting to you now in relation to this project?

We are focused on computer vision, but also on synthetic data generation to improve our datasets.
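
(From the editor: As a hedged illustration of what synthetic data generation for detector training can look like, the sketch below uses the open-source diffusers library to produce images that are fake by construction, so they can be added to a training set with a known label. The model choice and prompts are our assumptions, not a description of TrueMedia's pipeline.)

```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Load an open text-to-image model. Every image it outputs is a known
# "fake", so it can join a detector's training set labeled as generated.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "a press photo of a politician speaking at a podium",
    "a portrait photo of a television news anchor",
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]          # PIL.Image
    image.save(f"synthetic_fake_{i}.png")   # label: AI-generated
```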

What's your stance on AGI?

The term is ill-defined, but what's clear is that AI technology is still very far from human-level intelligence, as I argue in my article here. (From the editor: In this article, Oren argues that "artificial intelligence" refers both to the scientific quest to create human-like intelligence in computers ('Scientific AI') and to the modeling of large data sets ('Data-centric AI'): "We continue to anticipate the distant day when AI systems can formulate good questions—and shed more light on the fundamental scientific challenge of understanding and constructing human-level intelligence.")

Some people argue that the open-source nature of many AI tools poses significant security risks. Others argue that itā€™s the way to more intelligent machines. What are your views on balancing the benefits of open-source AI with the need to prevent its misuse?

Balancing the benefits of open-source AI with its risks involves several key measures. Open-source AI fosters innovation, transparency, and accessibility, but it also poses security and misuse risks. To manage these, we should establish ethical guidelines, implement robust security measures, and consider tiered access for sensitive tools. Collaboration with regulatory bodies ensures compliance with standards, community involvement and vigilance can help monitor misuse, and education on ethical implications and secure coding practices is vital. A holistic approach combining these strategies can harness the advantages of open-source AI while minimizing its potential for harm.

In your articles, you mentioned the need for widespread cooperation among government regulators, AI developers, and tech giants. What are the biggest challenges in fostering this collaboration, and how can they be overcome?

Fostering cooperation among government regulators, AI developers, and tech giants faces challenges like diverging interests, regulatory complexity, rapid technological advancement, data privacy concerns, and a lack of standardization. Solutions include establishing multistakeholder platforms, developing global regulatory frameworks, adopting agile regulatory approaches, promoting transparency and accountability, encouraging public-private collaboration, providing education and training, and working toward standardization. These efforts can bridge gaps, align goals, and ensure the responsible development and deployment of AI technologies. I discussed some of my ideas on how to regulate AI here. (From the editor: In this article, Oren argues that government regulation of AI is essential to prevent harm but must be carefully crafted to avoid stifling innovation. He proposes regulating AI applications in specific areas like transportation and medicine, while avoiding regulation of AI research. This balanced approach aims to harness AI's benefits while mitigating risks without overreaching.)

You are still a Technical Director of the AI2 Incubator. What trends are you observing in AI startups that are particularly promising or concerning?

We are seeing tremendous traction with GenAI startups, from Lexion (legal GenAI, just acquired by Docusign) to Yoodli.ai (presentation and sales coaching) to ChipStack (GenAI for better chip design), and more.

From your experience at the Allen Institute for AI, what are the most effective strategies for fostering collaboration between academic research and industry to accelerate AI innovation?

The use of open source to address IP concerns on both sides, coupled with the free flow of ideas, interns, and more, was a huge boon for AI2.

What other research areas, apart from your everyday job, are you following closely that you think are essential for moving the AI industry forward?

I'm continuing to closely follow developments in foundation models and in software agents built on top of them. In the next decade, we will see some remarkable breakthroughs here!

Can you recommend a book or books for aspiring computer scientists, developers, or ML engineers, whether or not they are specifically about CS or ML?

For work, I recommend:

For life, I recommend:

Thank you for reading! If you found it interesting, please share it or upgrade to Premium to support our efforts.

 
