The query was exactly as you described. Gemini returned the details of the Dreamliner Air India crash, correctly capturing the number of people aboard, the date of the flight, and the location of the flight and crash, yet it somehow managed to hallucinate the wrong plane: an Airbus A330-243.
Air India doesn't even have any A330s, so it's not even immediately obvious how the hallucination happened. It just straight up included the wrong plane.
This specific failure seems more egregious than most.
Just reproduced the issue with Bing's AI result. I find it kind of hilarious that in its sources, the first one listed is an article with the headline: "How Is Airbus Not Suing Google?"
Try asking whether Case IH engages in the same behavior as John Deere, using excessive software locks on its hardware to prevent farmers from repairing their machines.
Sometimes it just takes a little leading, but you can get these things to say whatever you want, without any actual regard for veracity.
The answer may be semantically and grammatically correct, but it is nevertheless largely bullshit.
I've found Google's AI Overview to be so bad it's probably spreading misinformation. It has no ability to assess the correctness of its results. I can only assume it's enabled so that Google can collect training data from user feedback on the feature.
For instance, I asked Google how many Apple shares a particular person owned. It plucked a number from an article, but that figure was for a deal Apple made, and the person just happened to be involved in the deal.
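For what it's worth, the failure mode looks like naive proximity-based extraction: a number that co-occurs with a name gets attributed to that name. A minimal sketch of the idea (hypothetical article text, names, and function, not Google's actual pipeline):

    import re

    article = ("Jane Doe helped negotiate the deal in which Apple acquired "
               "2,000,000 shares of the supplier.")

    def shares_near_name(text, name):
        # Grab the first "N shares" figure if the name appears anywhere
        # in the text -- with no check of what the number refers to.
        if name in text:
            match = re.search(r"([\d,]+)\s+shares", text)
            if match:
                return match.group(1)
        return None

    # Prints "2,000,000" -- presented as Jane Doe's holdings, which is wrong:
    # the figure belongs to the deal, not to the person.
    print(shares_near_name(article, "Jane Doe"))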
The search used was "last airbus fatal crash".
And the output was a Boeing crash.
This is concerning. AI has to be accurate for it to work well; it can't hallucinate. Until this problem is fixed, we still need people to do the work.