
About Our Podlodka Talk

A bit about our talk

It went much better than any of our rehearsals. We had tried to fit in too much, which made the preparation anything but easy, but in the end we pulled ourselves together and delivered it almost the way we wanted. I listened to the recording twice afterwards, and apart from a couple of slip-ups I liked it. What was new for me was that I wasn't presenting solo, as before, but as part of a duo, so we had to work everything out as a team and build the narrative around that. What Lyosha went through with a hundred top Android developers listening to him live, I can barely imagine; he's truly awesome.

We had a deliberately optimistic task: implement the entire screen in one prompt. It's optimistic because usually we either break a task down into smaller pieces or finish it off with follow-up prompts. So on one hand we wanted to test what the neural networks can do in their current state, and on the other we wanted to build up some experience in how to prepare context properly: to form a mindset first for ourselves, and then share it. We can't just dump the entire project into a neural network, so we have to approach this wisely.

The most common piece of feedback was that the effort spent writing prompts could have been spent coding the screen by hand. That's true, of course; automating a single task almost always takes longer than solving it manually. But if we're talking about ten cases where this experience can be reused, the effort wasn't wasted.

But that's not even the main point; the main point is understanding how to gather context. The idea of iteratively refining prompts and trying to make them reusable across different tasks is the key one. We insist that neural networks need to be onboarded, and their code needs to be reviewed and corrected. By definition, this is not vibe coding.
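
To make the "reusable prompt" idea a bit more concrete, here is a minimal sketch of what gathering context for a screen-sized task could look like. This is not the template from the talk; the function, file names, and prompt sections are hypothetical, assuming a Kotlin project where the task description changes per screen while the conventions file and the way relevant code is selected stay reusable.

```kotlin
import java.io.File

// Hypothetical sketch: assemble a prompt from a few hand-picked sources instead
// of dumping the whole project. The task changes per screen; the conventions
// file (the "onboarding" for the model) and the context-gathering step stay the
// same and get refined iteratively.
fun buildScreenPrompt(
    taskDescription: String,
    conventions: File,
    relevantFiles: List<File>,
): String = buildString {
    appendLine("## Task")
    appendLine(taskDescription.trim())
    appendLine()
    appendLine("## Project conventions")
    appendLine(conventions.readText().trim())
    appendLine()
    appendLine("## Relevant existing code")
    relevantFiles.forEach { file ->
        appendLine("### ${file.name}")
        appendLine(file.readText().trim())
        appendLine()
    }
}

fun main() {
    // Hypothetical usage: the file names are placeholders for whatever the task
    // actually depends on (models, repository, design notes, and so on).
    val prompt = buildScreenPrompt(
        taskDescription = "Implement the order details screen in Jetpack Compose.",
        conventions = File("docs/llm-onboarding.md"),
        relevantFiles = listOf(File("OrderRepository.kt"), File("OrderUiState.kt")),
    )
    println("Prompt is ${prompt.length} characters")
}
```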

And here's a small update to the presentation. Two days after the talk, Gemini 2.5 came out, and in the same experiment it shows better results in both the final layout and the code. It's also still free while it's experimental. That same single 60k-token request to Sonnet 3.7 from the talk cost 20 cents. Such is life.