Documentation Not for Humans

The main problem with LLMs right now is that we humans have a very hard time connecting a task with the fact that it could be delegated to a neural network. They can already handle an awful lot at least passably, but we keep doing it ourselves out of habit, simply because the thought doesn't occur to us. And when it finally does, we wonder why it didn't occur to us sooner. 🤔

Lately I've been playing around a lot with Projects in Claude. It's roughly the same idea as custom GPTs in ChatGPT: you set up a "project" context in advance, throw a bunch of files into it as a knowledge base along with system prompts, and every chat then starts inside that context.

Our project repo contains a fairly small set of markdown docs written primarily as onboarding material: for new developers, or for existing ones moving to a new technology or a new approach, but onboarding either way. There's a tech radar describing the status of each technology, documentation on our approach to modularization with module types and rules, documentation on the architecture of specific features, and a small roadmap of what we're getting rid of and where we want to go. And beyond the docs, the project is of course described very well by the Version Catalog file and the detekt rules config, that is, by everything the code heavily depends on.
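
For anyone who hasn't run into it: Gradle's Version Catalog is a single libs.versions.toml with all dependency coordinates and versions in one place, and modules pull from it through type-safe accessors, which is exactly why it describes the project so well. A rough sketch of the consuming side; the specific aliases here are invented examples, not our actual catalog entries:

```kotlin
// build.gradle.kts of a feature module, a sketch.
// The `libs.*` accessors are generated by Gradle from libs.versions.toml;
// the aliases below (compose.ui, lifecycle.viewmodel) are hypothetical.
plugins {
    alias(libs.plugins.android.library)
    alias(libs.plugins.kotlin.android)
}

dependencies {
    implementation(libs.androidx.compose.ui)
    implementation(libs.androidx.lifecycle.viewmodel)
}
```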

People rarely read documentation preemptively. For people, documentation is just somewhere to go for extra information once a problem comes up. Not so for a neural network: this is exactly the kind of material that helps it. We throw all of it into a custom Project, and every chat then starts without a huge prompt spelling out how to do things, how not to do things, what's required, and so on. We literally onboard the neural network in advance, and only once. And I've still only used somewhere around 15% of the potential context size; you can imagine how much more could fit in there.
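
The project-level instructions themselves can stay tiny, because everything substantive lives in the attached files. An invented example of what such instructions might look like, not our actual text:

```
You are a senior Android developer on our team.
Follow the module rules, tech radar and architecture docs attached
to this project. Use only dependencies from the Version Catalog,
and keep the code clean under our detekt config.
```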

So yes, this is a total game changer. We're now at the stage where Claude, in a fresh chat, can immediately write the Compose layout for a screen from a single screenshot, plus the whole architectural wrapper exactly as our docs describe it. And the biggest technical breakthrough is that after copy-pasting into Studio this code is immediately green; in most cases there's nothing to fix. I'll note, though, that the code here is quite template-based. Where you need to come up with something genuinely elegant, things are still not so rosy.
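
To give a sense of what "template-based" means here, the output is roughly this shape. A hedged sketch: the names (ProfileScreen, ProfileViewModel, ProfileUiState) and the state-holder pattern below are my illustration, not a quote from our docs or from Claude's actual output:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.CircularProgressIndicator
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.asStateFlow

// One immutable UI state class per screen.
data class ProfileUiState(
    val isLoading: Boolean = true,
    val userName: String = "",
)

// The "architectural wrapper": a ViewModel exposing a single StateFlow.
class ProfileViewModel : ViewModel() {
    private val _state = MutableStateFlow(ProfileUiState())
    val state = _state.asStateFlow()
}

// The layout itself, the part generated from the screenshot.
@Composable
fun ProfileScreen(viewModel: ProfileViewModel) {
    val state by viewModel.state.collectAsState()
    Column(modifier = Modifier.padding(16.dp)) {
        if (state.isLoading) {
            CircularProgressIndicator()
        } else {
            Text(text = state.userName)
        }
    }
}
```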

This experience even triggered a kind of conceptual shift in my head. Everyone understands that prompt engineering is incredibly important and that everyone will need it, but this thought goes a step further. I'm now almost certain that we'll be writing documentation first and foremost not for people but for neural networks, because they need it far more and the impact of such documentation is many times greater. And the final thought: we should invest more time in formalizing all our verbal agreements in text. We don't even have to worry that developers will ignore them, because there's an obvious consumer for them anyway.

PS. GPTs in ChatGPT aren't even close here. Apparently due to context size limitations, they don't treat knowledge-base files as that same context: a GPT can answer questions about the contents of those files, but ask it to build a feature the way the documentation prescribes and it just ignores all of that, writes maximally generic Google-style code, and pads the answer with fluff instead of almost silently producing code.

PPS. I bought a Steam Deck, so the number of posts has dropped sharply until the addiction passes and free time reappears, you understand. 👮