Gemini 3 is out and it’s been pretty incredible to test. I built a couple of fun prototypes with it in Google AI Studio, and the results really pushed the envelope this time. It’s hard to pinpoint exactly why it feels different, but here’s what stands out to me:

  • It goes the extra mile in interpreting my requests from a product perspective. It doesn’t just think about how to build what I asked for, but also about what exactly needs to be built.
  • It spits out delightful designs compared to other tools (including Lovable). This is subjective, I suppose, but I didn’t really have to change anything design-wise.
  • It iterated several times on its own, resolving bugs before I could even spot them.
  • It’s been really efficient at implementing my improvements and suggesting new ones (including LLM-intensive features).
  • I’m not sure whether this is down to Google AI Studio or Gemini 3 itself, but adding LLM-enabled features has been totally frictionless (no need to set up an API key, connect to a streaming endpoint, etc.). It really does make it incredibly easy for non-technical people to build full-scale solutions.

I didn’t evaluate the code quality since that’s not my job or interest, but from a product and prototyping perspective, I’m pretty blown away so far. Looking forward to experimenting more with it.

Like Ethan Mollick, I sense we’re moving away from the agentic hand-holding approach and toward directing the AI. Or as he puts it:

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.”

Source: oneusefulthing.org