I have found Gemini 2.5 Pro works really well on Python math- or science-based prompts, where you, for instance, need to generate a notebook or get some scaffolding code up for a problem you're exploring. However, it hasn't replaced Claude 3.7 thinking for day-to-day coding tasks.
Most notably, at least in my experience, Gemini 2.5 Pro works terribly on existing code. It will often make changes to areas of the code I never asked it to touch, requiring careful review. Sometimes when Claude has repeatedly failed to fix a bug, I'll try Gemini 2.5 Pro. It's still hit or miss, and several times I've gone back to Claude and found the right solution with more varied prompting or by adding context.
One thing I have noticed is that Gemini 2.5 Pro works a lot better through Google's AI console than through VS Code or Cursor (you can get it working with the latest VS Code Insiders build). You can also see its chain of thought, which often gives insight that can be used in additional prompting.
Still, though, at least for my type of work, Gemini 2.5 Pro hasn't replaced Claude for me quite yet.
Google's AI is the least interesting and most useless in my experience. I use a lot of local models, and the Gemma models have always been a waste of time, even the new ones.
Mistral, Llama, DeepSeek, Qwen, EXAONE, etc. have been more useful.
I feel like Google has been spending a lot on PR lately, because my experience does not match some of the weird praise I see them getting.
Flash 2.0 has been excellent for its price point for me, and their multimodal work is leading the field. Their top-end foundation models are a bit lacking, but they are catching up fast there too.
To me it seems Google has already missed this train. Most users are into ChatGPT, programmers into Claude, and the people who need something cheap use DeepSeek or Qwen.
Been trying it out a bit; it feels a bit dry, like a college professor. GPT, after a while, even starts making jokes.
Not sure I buy all of Alberto's arguments but definitely a spicy take.
I dunno, each time I try to use Gemini, it disappoints. Maybe I'll try again.
...except mindshare.
Turns out, that matters a lot.