
I somehow missed that Google had published pricing for Gemini 2.5 Pro.
It lands somewhere between 4o (if you stay under 200k tokens of context) and Sonnet 3.7 (if you go all the way to the full million). After seeing how cheaply they gave away 2.0 Flash, I was hoping for more. A real shame.
It's cool that you can use it for free. But this free cheese comes at a price: you let Google use your contexts for training. OpenAI and Anthropic, by contrast, explicitly promise not to use API requests this way. Though in the ChatGPT web interface, by the way, there's a setting tucked away that allows exactly that, enabled by default. Go turn it off.
This week I'm planning to play with this model in Cline and Roo Code, comparing the two tools along the way. I want to get a feel for the cost and the quality of its work on some real scenarios.
PS. For the last few weeks, neural networks haven't been able to handle my important and urgent tasks properly at all. I've had to think with my own head, like in the good old days. Horrible.