About a week ago, Dima from Drinkit wrote a post about their experiments with algorithmic generation of the network layer from an OpenAPI spec: models, Retrofit interfaces, and all that. Interesting topic, a clean and satisfying programming task. But the post got me thinking.
He notes that the spec has its quirks. For example, class names get too long if you follow the spec to the letter, and the spec contains things you'd rather hide from the generated code.
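To make the long-name problem concrete, here's a hypothetical illustration (the names are made up, not from Drinkit's actual spec): a generator that derives class names strictly from the spec's operationId and schema path ends up with monsters, while a human, or a tuned generator, would trim them.

```kotlin
// Hypothetical example: strictly spec-derived naming...
data class GetCustomerOrderHistoryV2ResponseItemsInnerDto(
    val id: String,
    val createdAt: String,
)

// ...versus what you'd actually want to read in the codebase.
data class OrderHistoryItem(
    val id: String,
    val createdAt: String,
)
```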
Automation like this always sounds great in theory, but in practice compromises and nuances appear that have to be kept in mind. Someone still has to maintain the spec: keep it up to date without inventing new concepts and rules, and always stick to what was agreed on, otherwise the automation grows ever more complex and unreliable.
Moreover, I'd venture that turning a spec into code is a more or less one-time task, done when someone writes a feature. At worst it's rare, happening only when the spec changes drastically.
And given those inputs, I no longer see why you'd solve this algorithmically in 2025, as they say. If only we had a tool that could "think" for us, one we could simply tell what output we'd like to see, right?
Claude Code is, quite literally, a hammer in my hands, and everything else around me is a nail.
So I can give it the same spec, point it to the high-level documentation like "how we write the network layer" and "how to use the OpenAPI spec", and it will do a good enough job. It will even make some decisions on its own that you'd otherwise have had to think through while writing your own generator.
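For a sense of what "good enough" means here, a minimal sketch of the output you'd expect, assuming a hypothetical GET /orders endpoint; all names are made up, and the real conventions would come from your "how we write the network layer" doc:

```kotlin
import retrofit2.http.GET
import retrofit2.http.Query

// Model derived from the spec's response schema (hypothetical).
data class OrderDto(
    val id: String,
    val status: String,
)

// Retrofit interface for the hypothetical endpoint.
interface OrdersApi {
    @GET("orders")
    suspend fun getOrders(@Query("page") page: Int): List<OrderDto>
}
```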
Besides, with a homegrown solution you'd also have to teach your developers to use it, even a simple one. Whereas phrasing a request to an agent like "using openapi.yml, add a network layer to feature:name" is a basic skill for anyone who has touched agents even once, and there will only be more such people. With a few simple clarifications you can add or change a specific method from that spec in already existing code. Sure, the agent will churn for ten minutes, but it will write it, and still faster than a live developer would. With your own automation, you'd have had to plan for that case from the start.
To be clear, you can also teach the agent to use your homegrown tooling, if it exists. For us, for example, on a request like "make feature-module X" the agent calls our own task rather than generating whatever the neural network sees fit.
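As a minimal sketch of how that wiring might look (the task name and details are hypothetical, not our actual setup): a plain Gradle task, plus one line in the project's agent instructions saying "when asked to make a feature module, run this task instead of generating the skeleton yourself".

```kotlin
// build.gradle.kts, a hypothetical sketch of such a homegrown task.
// Invoked as: ./gradlew createFeatureModule -PfeatureName=checkout
tasks.register("createFeatureModule") {
    // Read the module name from a project property at configuration time.
    val featureName = providers.gradleProperty("featureName")
    doLast {
        println("Scaffolding feature module: ${featureName.get()}")
        // ...copy templates, register the module in settings.gradle.kts, etc.
    }
}
```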
But that doesn't change my point: some things, it seems, aren't worth writing at all anymore.
What do you think, where exactly is the line? Given a task, when is it worth writing your own automation for it, and when is it enough to hand it to an agent? Is it a matter of how often it's used, how much reliability you want, the labor cost?