Frequently Asked Questions

Common questions about hallucination yield, AI investment bias, and our research platform

GENERAL QUESTIONS

What is hallucination yield?

Hallucination yield refers to the systematic biases that large language models (LLMs) like ChatGPT show when making investment recommendations. These AI models consistently favor certain stocks or overestimate company importance based on patterns in their training data, creating measurable "premiums" that can be tracked and analyzed.

How do you measure AI investment bias?

We systematically query multiple AI models with investment-related questions and analyze their responses for frequency bias (which stocks are mentioned most), sentiment bias (how positively companies are described), and consistency patterns across different models and time periods.
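The frequency-bias measurement described above can be sketched in a few lines. This is an illustrative example only, not the platform's actual code: the `frequency_bias` helper and the sample responses are hypothetical, and it assumes bias is approximated by each tracked ticker's share of all tracked-ticker mentions.

```python
from collections import Counter

def frequency_bias(responses, tickers):
    """Estimate frequency bias as each ticker's share of tracked mentions.

    responses: list of raw text answers from an LLM
    tickers:   ticker symbols to track, e.g. ["NVDA", "TSLA"]
    Returns a dict mapping ticker -> fraction of all tracked mentions.
    """
    counts = Counter()
    for text in responses:
        upper = text.upper()
        for t in tickers:
            counts[t] += upper.count(t)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {t: counts[t] / total for t in tickers}

# Hypothetical model responses for illustration:
responses = [
    "I'd consider NVDA and TSLA for growth exposure.",
    "NVDA remains a popular pick; AAPL is a defensive option.",
    "For AI exposure, NVDA is frequently cited.",
]
shares = frequency_bias(responses, ["NVDA", "TSLA", "AAPL"])
# NVDA is mentioned in all three responses, so its share is highest.
```

A real pipeline would also need entity resolution (company names vs. tickers) and the sentiment and cross-model consistency measures mentioned above; this sketch covers only the mention-frequency component.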

Which AI models do you track?

We analyze responses from major large language models including ChatGPT (various versions), Claude, Gemini, and other leading LLMs. Our platform tracks 7+ different models to identify consensus patterns and disagreements in their investment recommendations.

ACCURACY & RELIABILITY

How accurate are AI stock predictions?

AI stock predictions should not be considered accurate investment advice. Our research focuses on identifying systematic biases rather than predicting actual market performance. AI models have training data cutoffs, lack real-time market information, and can exhibit significant biases that make them unreliable for investment decisions.

Can I use this data for actual investing?

No, this platform is for research purposes only. Our analysis is designed to understand AI behavior and biases, not to provide investment advice. All investment decisions should be based on independent research, professional advice, and your own due diligence.

What causes bias in AI trading algorithms?

AI bias stems from patterns in training data: certain companies receive more positive coverage, are discussed more frequently, or attract more media attention. Models learn these patterns and reproduce them in their recommendations. Additionally, training data cutoffs mean models lack current market information and may overweight historical narratives.

TECHNICAL QUESTIONS

How often is the data updated?

We update our AI opinion data weekly for most tracked assets. Major assets like Bitcoin, Tesla, and NVIDIA may have more frequent updates during significant market events. Our research reports and analysis are published monthly.

What's your research methodology?

We use standardized prompts across multiple AI models, asking for investment recommendations, price predictions, and risk assessments. Responses are analyzed for sentiment, consistency, and statistical significance. Our methodology is detailed on our Goals & Methods page.
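The standardized-prompt approach can be illustrated as a simple survey loop. Everything here is a hypothetical sketch: `query_model` stands in for whatever provider-specific API call fetches a completion, and the model names and prompt wording are illustrative, not the platform's actual prompts.

```python
from collections import Counter

# Illustrative prompt template; the platform's real prompts differ.
PROMPT_TEMPLATE = (
    "Based on your training data, would you rate {asset} as a buy, "
    "hold, or sell? Answer with one word."
)

MODELS = ["model-a", "model-b", "model-c"]  # placeholder model names

def survey(asset, query_model, models=MODELS, runs=3):
    """Ask every model the identical question several times and tally answers.

    query_model(model_name, prompt) -> str is a caller-supplied function
    that wraps the actual API call for each provider.
    """
    tallies = {m: Counter() for m in models}
    for m in models:
        for _ in range(runs):
            answer = query_model(m, PROMPT_TEMPLATE.format(asset=asset))
            tallies[m][answer.strip().lower()] += 1
    return tallies
```

Repeating the same prompt several times per model lets you measure within-model consistency, while comparing tallies across models surfaces consensus and disagreement.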

Is hallucination yield a real phenomenon?

Yes, systematic bias in AI investment recommendations is a documented phenomenon. While the term "hallucination yield" is relatively new, the concept builds on established research in AI bias, algorithmic trading, and the impact of media sentiment on market behavior.

PLATFORM & ACCESS

Is the platform free to use?

Yes, our basic research data and analysis are freely available. We also offer custom consulting and enterprise solutions for organizations interested in deeper analysis or custom research projects. Check our pricing page for details.

Do you have an API?

We're developing API access for researchers and institutions. Join our newsletter to be notified when API access becomes available. Priority access will be given to academic researchers and legitimate research institutions.

Can I request analysis of specific assets?

Yes! We regularly add new assets based on community interest and research value. Contact us through our contact page to suggest assets for analysis. We prioritize assets with high trading volume and significant AI model attention.

Still have questions?

Can't find the answer you're looking for? Our research team is here to help with any questions about AI investment bias, our methodology, or specific use cases.