Synthetic data has its limits — why human-sourced data can help prevent AI model collapse


My, how quickly the tables turn in the tech world. Just two years ago, AI was lauded as the “next transformational technology to rule them all.” Now, instead of reaching Skynet levels and taking over the world, AI is, ironically, degrading. 

Once the harbinger of a new era of intelligence, AI is now tripping over its own code, struggling to live up to the brilliance it promised. But why exactly? The simple fact is that we’re starving AI of the one thing that makes it truly smart: human-generated data.

To feed these data-hungry models, researchers and organizations have increasingly turned to synthetic data. While this practice has long been a staple in AI development, we’re now crossing into dangerous territory by over-relying on it, causing a gradual degradation of AI models. And this isn’t just a minor concern about ChatGPT producing sub-par results — the consequences are far more dangerous.

When AI models are trained on outputs generated by previous iterations, they tend to propagate errors and introduce noise, leading to a decline in output quality. This recursive process turns the familiar cycle of “garbage in, garbage out” into a self-perpetuating problem, significantly reducing the effectiveness of the system. As AI drifts further from human-like understanding and accuracy, it not only undermines performance but also raises critical concerns about the long-term viability of relying on self-generated data for continued AI development.

But this isn’t just a degradation of technology; it’s a degradation of reality, identity, and data authenticity, posing serious risks to people and society. As these models lose accuracy and reliability, the ripple effects could be profound: think medical misdiagnoses, financial losses, and even life-threatening accidents.

Another major implication is that AI development could completely stall, leaving AI systems unable to ingest new data and essentially becoming “stuck in time.” This stagnation would not only hinder progress but also trap AI in a cycle of diminishing returns, with potentially catastrophic effects on technology and society.

But, practically speaking, what can enterprises do to ensure the safety of their customers and users? Before we answer that question, we need to understand how this all works.

When a model collapses, reliability goes out the window

The more AI-generated content spreads online, the faster it will infiltrate datasets and, subsequently, the models themselves. And it’s happening at an accelerated rate, making it increasingly difficult for developers to filter out anything that is not pure, human-created training data. The fact is, using synthetic content in training can trigger a detrimental phenomenon known as “model collapse” or “model autophagy disorder (MAD).”

Model collapse is the degenerative process in which AI systems progressively lose their grasp on the true underlying data distribution they’re meant to model. This often occurs when AI is trained recursively on content it generated, leading to a number of issues:

  • Loss of nuance: Models begin to forget outlier or less-represented data, which is crucial for a comprehensive understanding of any dataset.
  • Reduced diversity: There is a noticeable decrease in the diversity and quality of the outputs produced by the models.
  • Amplification of biases: Existing biases, particularly against marginalized groups, may be exacerbated as the model overlooks the nuanced data that could mitigate these biases.
  • Generation of nonsensical outputs: Over time, models may start producing outputs that are irrelevant to the input or outright nonsensical.

A case in point: A study published in Nature (Shumailov et al., 2024) highlighted the rapid degeneration of language models trained recursively on AI-generated text. By the ninth iteration, these models were producing entirely irrelevant and nonsensical content, demonstrating how quickly data quality and model utility can decline.
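The recursive loop behind model collapse can be seen in a toy simulation: fit a simple model to data, sample a synthetic "dataset" from the model, refit on those samples alone, and repeat. This is a minimal sketch of the mechanism, not the Nature study's setup; the long-tailed category distribution and sample sizes are illustrative assumptions:

```python
import random
from collections import Counter

def next_generation(dist, n_samples=50):
    """Sample a synthetic dataset from the current model, then re-estimate
    the distribution from those samples alone (recursive training)."""
    categories = list(dist)
    weights = [dist[c] for c in categories]
    counts = Counter(random.choices(categories, weights=weights, k=n_samples))
    return {c: n / n_samples for c, n in counts.items()}

random.seed(0)
# "Human" data: a long-tailed (Zipf-style) distribution over 20 categories.
raw = {f"cat{i}": 1 / (i + 1) for i in range(20)}
total = sum(raw.values())
dist = {c: w / total for c, w in raw.items()}

for _ in range(30):
    dist = next_generation(dist)

# A rare category that ever draws zero samples vanishes permanently,
# so diversity can only shrink from generation to generation.
print(f"categories surviving after 30 generations: {len(dist)} of 20")
```

Even this crude setup exhibits the "loss of nuance" and "reduced diversity" failure modes listed above: once a rare category fails to appear in one generation's samples, no later generation can recover it.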

Safeguarding AI’s future: Steps enterprises can take today

Enterprise organizations are in a unique position to shape the future of AI responsibly, and there are clear, actionable steps they can take to keep AI systems accurate and trustworthy:

  • Invest in data provenance tools: Tools that trace where each piece of data comes from and how it changes over time give companies confidence in their AI inputs. With clear visibility into data origins, organizations can avoid feeding models unreliable or biased information.
  • Deploy AI-powered filters to detect synthetic content: Advanced filters can catch AI-generated or low-quality content before it slips into training datasets. These filters help ensure that models are learning from authentic, human-created information rather than synthetic data that lacks real-world complexity.
  • Partner with trusted data providers: Strong relationships with vetted data providers give organizations a steady supply of authentic, high-quality data. This means AI models get real, nuanced information that reflects actual scenarios, which boosts both performance and relevance.
  • Promote digital literacy and awareness: By educating teams and customers on the importance of data authenticity, organizations can help people recognize AI-generated content and understand the risks of synthetic data. Building awareness around responsible data use fosters a culture that values accuracy and integrity in AI development.
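The first two steps above can be combined into a simple admission gate that checks provenance and a synthetic-content score before a document enters the training corpus. This is a hypothetical sketch: the source labels, detector score, and threshold are illustrative placeholders, not any specific vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str             # provenance label, e.g. "licensed-news", "web-crawl"
    synthetic_score: float  # output of a hypothetical AI-text detector, 0.0 to 1.0

# Illustrative allowlist and threshold; real values would come from
# provenance tooling and detector calibration.
TRUSTED_SOURCES = {"licensed-news", "partner-archive", "first-party"}
MAX_SYNTHETIC_SCORE = 0.3

def admit_to_training(doc: Document) -> bool:
    """Admit a document only if its provenance is trusted and a
    synthetic-content detector scores it as likely human-written."""
    return doc.source in TRUSTED_SOURCES and doc.synthetic_score < MAX_SYNTHETIC_SCORE

corpus = [
    Document("Quarterly earnings rose 4%.", "licensed-news", 0.05),
    Document("As an AI language model...", "web-crawl", 0.92),
]
kept = [d for d in corpus if admit_to_training(d)]  # only the first survives
```

In practice the detector would be a trained classifier and the provenance labels would come from lineage metadata, but the shape of the pipeline, filter before ingestion rather than after training, stays the same.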

The future of AI depends on responsible action. Enterprises have a real opportunity to keep AI grounded in accuracy and integrity. By choosing real, human-sourced data over shortcuts, prioritizing tools that catch and filter out low-quality content, and encouraging awareness around digital authenticity, organizations can set AI on a safer, smarter path. Let’s focus on building a future where AI is both powerful and genuinely beneficial to society.

Rick Song is the CEO and co-founder of Persona.

DataDecisionMakers

DataDecisionMakers is the VentureBeat community where experts, including the technical people doing data work, share data-related insights and innovation.


