How skills work in different agents
In the previous post I mentioned that in Claude Code I had to write hook hacks to remind the model that it has skills. Then I switched to OpenCode with GPT and kept noticing the same need there: skills that clearly matched the task description still did not get read. So there, too, I ended up dropping reminder scripts into the right places.
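For concreteness, here is the shape of such a hack as a minimal Python sketch: a hypothetical prompt-submit hook whose stdout gets injected into the model's context. The skills directory path and the exact hook wiring are my assumptions for illustration, not any harness's actual defaults.

```python
#!/usr/bin/env python3
# Hypothetical prompt-submit hook: whatever it prints to stdout is added
# to the model's context, so we use it to re-surface the skill list on
# every user message. Directory layout is an assumption.
import os
import sys

SKILLS_DIR = os.path.expanduser("~/.claude/skills")  # assumed location

def skill_reminder(skills_dir: str) -> str:
    """Build a one-line reminder listing the available skills."""
    if not os.path.isdir(skills_dir):
        return ""
    names = sorted(os.listdir(skills_dir))
    if not names:
        return ""
    return ("Reminder: you have skills available: "
            + ", ".join(names)
            + ". If one matches the task, invoke it before answering.")

if __name__ == "__main__":
    sys.stdout.write(skill_reminder(SKILLS_DIR))
```

The point is not the script itself but that the harness needs this nudge at all; without it, matching skills sit unused.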
I never had to do this in Pi, where skills are read quite consistently, and I am very curious why. I have only a few guesses, so if you know, tell me.
The fundamental difference seems to be only that in Claude Code and OpenCode, invoking a skill is a separate tool, something like invoke/skill. That is because skills there carry extra settings (permissions, custom agents, and so on), so some code has to run before the file is read. As a result, the model has to be strongly inclined to call a custom harness tool instead of deciding "I already have enough info, I'll just go do it." And the problem is not so much tool calling itself as the model's decisiveness: with the reminder hacks in place, the tools do get called.
In Pi, the system prompt simply says: "here are the files; if something looks useful, read it." It is the dumbest possible read, which is exactly why it almost cannot fail: reading a file is the most standard flow for a model solving any task. That is what they are trained on.
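The contrast between the two designs can be sketched roughly like this. All names are hypothetical; this is a caricature of both approaches, not either harness's real code.

```python
from pathlib import Path

# Design 1: skills behind a dedicated tool. The harness runs setup code
# (permissions, custom agents) before handing the file to the model, so
# the model must first decide to call a custom, unfamiliar tool.
def check_permissions(name: str) -> None:
    pass  # placeholder for the enterprise-style checks described above

def invoke_skill_tool(name: str, skills_dir: Path) -> str:
    check_permissions(name)  # the extra logic the dedicated tool enables
    return (skills_dir / name / "SKILL.md").read_text()

# Design 2 (Pi-style): list the skill files in the system prompt and let
# the model use its ordinary read-file tool, the most trained-in flow.
def system_prompt_with_skills(skills_dir: Path) -> str:
    listing = "\n".join(
        f"- {p}" for p in sorted(skills_dir.glob("*/SKILL.md"))
    )
    return "Here are skill files; if one looks useful, read it:\n" + listing
```

Both end in the same file read; the difference is how much unusual machinery stands between the model and that read.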
Can it really be that simple? If you stop reinventing the wheel on top of a minimal base that models already understand, do they just start arriving at the right conclusions? And are all these harness-level enterprise features, like permissions on skills, worth it if the basic concept of a skill works unreliably because of them?