About Hallucination Yield
Understanding how AI models create systematic investment biases and market premiums
WHAT IS HALLUCINATION YIELD?
The concept of "hallucination yield" emerged from a fascinating observation: ChatGPT and other large language models show consistent biases when discussing investments and stocks. As noted by @goodalexander:
"I'm a recently converted fan of 'hallucination yield' - the idea that ChatGPT arbitrarily likes certain stocks or thinks they're bigger than they are. Hard to backtest because of training cutoffs but... yeah, it works."
THE SCIENCE BEHIND IT
Large language models are trained on vast amounts of internet data, which creates systematic biases in how they perceive and recommend investments. These biases aren't random - they reflect patterns in the training data that can be measured and analyzed.
Our platform systematically queries multiple AI services (large language models, or LLMs) to:
- Identify stocks that AI models consistently favor or overweight
- Measure the "premium" AI models assign to specific companies
- Track how these biases change over time
- Provide data-driven insights into AI market sentiment
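As a rough illustration of the aggregation step, the sketch below tallies how often each ticker is named across repeated model queries and ranks tickers by the share of models that mention them. The function name, model names, and responses are all hypothetical examples, not actual platform code or real model output.

```python
from collections import Counter

def aggregate_model_preferences(responses):
    """Rank tickers by the fraction of models that mentioned them.

    `responses` maps a model name to the list of tickers that model
    returned for a fixed prompt. Each model counts a ticker at most once.
    """
    counts = Counter()
    for tickers in responses.values():
        for ticker in set(tickers):  # de-duplicate within one response
            counts[ticker] += 1
    n_models = len(responses)
    return {t: c / n_models for t, c in counts.most_common()}

# Fabricated responses for illustration only.
responses = {
    "model_a": ["NVDA", "MSFT", "AAPL"],
    "model_b": ["NVDA", "TSLA"],
    "model_c": ["NVDA", "MSFT"],
}
print(aggregate_model_preferences(responses))
```

A ticker every model names gets a score of 1.0; rarely mentioned tickers score near zero, giving a simple first proxy for which stocks the models "favor or overweight."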
WHY THIS MATTERS
As AI becomes increasingly influential in financial markets - through robo-advisors, research tools, and decision-making systems - understanding these built-in biases becomes crucial. The "hallucination yield" represents a new form of market premium that traditional analysis might miss.
Intellectual Foundations
Our research is informed by several key philosophical concepts that help explain how AI can influence markets:
Hyperstition (Nick Land)
The concept of "hyperstition" describes fictions that make themselves real. As AI models repeatedly recommend certain assets, they create a narrative that can drive investor behavior, turning a fictional "AI preference" into a real market effect.
Reflexivity (George Soros)
We draw on George Soros's theory that investor biases shape market outcomes. LLMs introduce a powerful new reflexive loop, where AI-generated sentiment and market reality feed back into each other.
Hyperreality (Jean Baudrillard)
An AI's description of a company is a simulacrum - a model based on data. When this model becomes more influential than the company's fundamentals, it creates a "hyperreality" for investors.
OUR APPROACH
We take a rigorous, data-driven approach to understanding AI market biases:
- Systematic Data Collection: Regular queries across multiple AI platforms
- Trend Analysis: Tracking changes in AI preferences over time
- API Access: Providing structured data through our platform
- Visualization Tools: Charts and analytics to understand patterns
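The trend-analysis step can be sketched in a few lines: given date-ordered snapshots of per-ticker preference scores, compute how each ticker's score has drifted between the first and last snapshot. The function name and snapshot values are hypothetical, assumed only for illustration.

```python
def preference_drift(snapshots):
    """Per-ticker change in preference score between the first and
    last snapshot in a date-ordered list of {ticker: score} dicts.
    Tickers absent from a snapshot are treated as scoring 0.0.
    """
    first, last = snapshots[0], snapshots[-1]
    tickers = set(first) | set(last)
    return {t: round(last.get(t, 0.0) - first.get(t, 0.0), 3)
            for t in tickers}

# Fabricated weekly snapshots for illustration only.
snapshots = [
    {"NVDA": 1.0, "MSFT": 0.67, "AAPL": 0.33},
    {"NVDA": 1.0, "MSFT": 0.33, "TSLA": 0.67},
]
print(preference_drift(snapshots))
```

Positive drift flags tickers the models are warming to; negative drift flags fading preferences, which is one way to track how these biases change over time.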
Project Vision & Future Roadmap
Hallucination Yield is fundamentally a free and open research project. Our mission is to gather a comprehensive dataset on AI investment opinions and make it accessible to researchers, journalists, and the public to foster a deeper understanding of this emerging phenomenon.
To ensure the project's sustainability, we may introduce a premium tier in the future aimed at professional analysts and traders. Potential premium features include:
- Powerful screeners to filter assets by nuanced AI sentiment metrics.
- Complete historical data access for robust backtesting of trading strategies.
- Real-time API access for systematic trading and analysis.
Our immediate focus remains on building a world-class, publicly available dataset. This will serve as the foundation for future tools and research into AI's reflexive impact on markets.
LEARN MORE
Explore our detailed documentation and resources to understand our methodology and research approach.
Ready to Explore?
Join our research community and get early access to AI market bias data
Research Disclaimer
This platform is for research and educational purposes only. Nothing on this site constitutes financial advice. All data and analysis should be used for research purposes only. We make no warranties about the accuracy or reliability of the information provided. Use at your own risk.