As someone who uses language models including Gemini, Perplexity, and Claude for various research tasks, including some that seek to summarize market trends, I have often found that the answers draw on different sources, which is not unexpected.
What has been unexpected is Gemini's frequent refusal to provide estimates the other engines do supply. That leads me to believe Gemini, in particular, uses algorithms intended to limit its use in ways that might create regulatory or other exposure for Alphabet.
My guess is that Google's higher regulatory and antitrust exposure, compared to the firms behind Perplexity or Claude, for example, leads to guardrails instructing the chatbot to avoid anything that might be construed as "financial advice," even when no personally identifiable information is involved and the questions relate to industry market sizes, revenues, and so forth.
The issue here is not that different training data has been used, that data recency varies, or that models use different underlying architectures, algorithms, and fine-tuning techniques. Those factors explain why, even when working from similar base data, different models can generate slightly different conclusions; they do not explain outright refusals to answer.
Refusals to forecast (like you sometimes see from Gemini) are "typically due to built-in safety protocols designed to prevent the AI from giving unlicensed financial advice, acknowledge the inherent uncertainty of markets, and avoid potential liability and the spread of misinformation," Gemini itself says, when asked about "refusal to answer" responses.