Company News

LinqAlpha at NeurIPS 2025: Measuring LLM Bias in Investment Research

Dec 6, 2025

Jacob Chanyeol Choi

LinqAlpha's Co-founder and CEO, Jacob Chanyeol Choi, highlights how LLMs function as autonomous agents in financial workflows and the risks of bias and misalignment in institutional settings.

LinqAlpha participated in the NeurIPS 2025 Workshop on Generative AI in Finance, joining academic and industry leaders to examine the deployment of generative AI, including large language models (LLMs) and agentic systems, across financial applications. The workshop emphasized the importance of governance, explainability, and regulatory alignment as institutions move from experimentation to production.

During the session “LLMs in Finance: Biases, Forecasting and Trading,” Jacob Chanyeol Choi, Co-founder and CEO of LinqAlpha, presented joint research with Prof. Alejandro Lopez-Lira of the University of Florida on the role of LLMs as autonomous agents in financial reasoning. The talk explored how LLMs develop implicit investment views through training data, and how confirmation bias and internal memory can influence analysis and decision-making in institutional workflows.

The session highlighted the growing use of LLMs for sentiment analysis, ESG evaluation, research generation, and catalyst checking, while underscoring the risks of misalignment between model-generated outputs and firm-wide investment theses. The speakers also discussed how agentic pipelines and multi-agent systems, including devil's advocate agents, can help introduce structured challenge and objectivity into investment research.
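The devil's-advocate pattern mentioned above can be sketched as a pipeline stage that forces every draft thesis through a structured challenge step before it reaches an analyst. This is a minimal illustrative sketch, not LinqAlpha's implementation: the agent functions, `ResearchNote` structure, and rule-based logic are hypothetical stand-ins for LLM calls.

```python
# Hypothetical sketch of a devil's-advocate review stage in an agentic
# research pipeline. The agent names and rule-based logic are stand-ins
# for LLM calls; no real model or API is invoked.

from dataclasses import dataclass, field


@dataclass
class ResearchNote:
    thesis: str
    evidence: list[str]
    challenges: list[str] = field(default_factory=list)


def analyst_agent(ticker: str) -> ResearchNote:
    # Stand-in for an LLM that drafts an investment thesis with
    # supporting evidence.
    return ResearchNote(
        thesis=f"{ticker}: constructive on margin expansion",
        evidence=["Q3 gross margin +120bps", "new product cycle"],
    )


def devils_advocate_agent(note: ResearchNote) -> ResearchNote:
    # Stand-in for an LLM prompted to attack each piece of evidence,
    # surfacing confirmation bias before the note is shared.
    for item in note.evidence:
        note.challenges.append(f"Counter: what would disconfirm '{item}'?")
    return note


def pipeline(ticker: str) -> ResearchNote:
    # Every draft passes through the challenge stage: the output always
    # carries one explicit counterargument per evidence item.
    return devils_advocate_agent(analyst_agent(ticker))


note = pipeline("XYZ")
print(len(note.challenges))  # one challenge per evidence item
```

In a production multi-agent system the challenge step would be a separately prompted model rather than a rule, but the structural point is the same: objectivity is enforced by the pipeline's shape, not by any single agent's output.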

LinqAlpha continues to advance research-driven, agentic AI for institutional finance, focusing on building aligned, interpretable systems that support safer and more confident investment decision-making.

To learn more about LinqAlpha’s work on agentic AI for institutional finance, visit linqalpha.com.