Max Woolf dives into a fascinating experiment: can LLMs ‘go meta’ when repeatedly asked to generate better code, much like image generators produce increasingly wild outputs when prompted for ‘more X’? It also raises an interesting question: what does ‘better’ even mean for code? More performant? More reliable? Or just more complex?
If code could indeed be improved simply through iterative prompting, asking the LLM to “make the code better” over and over, silly as that sounds, it would be a massive productivity win. And if it works, what happens when you iterate on the code too much? What’s the code equivalent of an image going cosmic? There’s only one way to find out!
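For reference, the core loop is almost trivially simple to sketch. Here’s a minimal illustration of the idea, with the OpenAI chat client standing in as an assumed API and a made-up task prompt; Woolf’s actual experiment used its own harness, model, and prompts:

```python
# A minimal sketch of the "make the code better" loop, using the OpenAI chat
# API purely for illustration (not the setup from Woolf's experiment).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def iteratively_improve(task: str, iterations: int = 4) -> list[str]:
    """Ask for code, then repeatedly reply "write better code", keeping each version."""
    messages = [{"role": "user", "content": task}]
    versions = []
    for _ in range(iterations):
        reply = client.chat.completions.create(
            model="gpt-4o",  # any capable chat model; an assumption, not Woolf's choice
            messages=messages,
        ).choices[0].message.content
        versions.append(reply)
        # Feed the model's own answer back and simply ask it to do better.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "write better code"})
    return versions


# Hypothetical task, just to show the call shape.
versions = iteratively_improve("Write Python code that sums the digits of 1..1000")
```

Keeping every intermediate version is the interesting part: it lets you diff successive iterations and watch where ‘better’ turns into ‘more complex’.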