
Vera aims to leverage AI to mitigate the negative consequences of generative models.


As we increasingly rely on artificial intelligence (AI) in our daily lives, concerns about its safety and security have grown. Liz O’Sullivan, co-founder of Vera, a startup focused on developing model-moderating technology for generative AI, is at the forefront of this movement. In an exclusive interview with TechCrunch, O’Sullivan shares her vision for making AI safer and more responsible.

The Problem with Generative AI

Generative AI models have revolutionized industries such as content creation, customer service, and healthcare. However, their ability to generate human-like text, images, and audio also raises concerns about the potential misuse of this technology. "Today’s AI hype cycle obscures the very serious, very present risks that affect humans alive today," O’Sullivan notes.

Vera’s Solution

Vera’s model-moderating technology aims to mitigate these risks by detecting and preventing sensitive data retention or regurgitation in generative AI models. According to O’Sullivan, Vera’s unique approach lies in its ability to tackle a wide range of generative AI threats simultaneously. "We’re not just focused on one particular type of attack," she explains. "Our technology is designed to address the complex web of risks associated with generative AI."
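Vera has not published implementation details, but the general idea of model moderation — screening prompts and responses for sensitive data before they reach, or leave, a generative model — can be sketched in a few lines. The patterns, function names, and redaction format below are illustrative assumptions, not Vera's actual approach:

```python
import re

# Illustrative patterns for sensitive data a moderation layer might screen for.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def moderate(text: str, redaction: str = "[REDACTED]") -> tuple[str, list[str]]:
    """Redact sensitive spans and report which categories were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(redaction, text)
    return text, findings

# Screening a prompt before it is forwarded to a generative model.
prompt = "My SSN is 123-45-6789, email me at jane@example.com"
clean, flags = moderate(prompt)
```

In practice such a layer would sit on both sides of the model — filtering user prompts on the way in and model output on the way out — and would rely on far more robust detection than regular expressions.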

Competition and Market Trends

While Vera is not alone in this space, O’Sullivan believes that its value proposition sets it apart from competitors like Nvidia’s NeMo Guardrails and Salesforce’s Einstein Trust Layer. "We’re seeing a nascent market for model-moderating technology emerge," she notes. "Vera’s unique approach and comprehensive solution are resonating with customers seeking a one-stop shop for content moderation and defense against attacks on AI models."

The Future of Generative AI

As the demand for generative AI continues to grow, O’Sullivan emphasizes the need for responsible development and deployment practices. "We’re at a critical juncture where we must prioritize AI safety and security," she asserts. Vera’s mission is to empower organizations to unlock the full potential of generative AI while minimizing the risks associated with this technology.

Conclusion

Liz O’Sullivan and Vera are leading the charge in developing model-moderating technology for generative AI. As the industry continues to evolve, it’s essential to prioritize responsible development and deployment practices. With Vera at the forefront of this movement, we can expect significant advancements in making AI safer and more secure.


About the Author

Kyle Wiggers is a senior reporter at TechCrunch with a special interest in artificial intelligence. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers.

