LLMs and Abstraction Layers

June 6, 2025  |  Programming  ·  Machine Learning

One of the most divisive topics in programming is whether to use an ORM. Both sides have strong arguments, and each choice comes with significant trade-offs. If you trust an ORM, it can boost productivity, but it may also slow down your code and introduce obscure, hard-to-debug issues. Writing raw SQL avoids an extra layer of abstraction, but it requires deep SQL knowledge and often involves writing a lot of boilerplate code.
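
To make the trade-off concrete, here is a minimal sketch of the ORM side, using SQLAlchemy purely as a representative library (my choice for illustration, not one the debate settles on). The query is a one-liner, and the SQL it emits stays out of sight.

```python
# A minimal ORM sketch; SQLAlchemy is used here only as a representative example.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="Ada"))
    session.commit()
    # One line of Python; the SELECT it generates, and when it runs, is the ORM's business.
    ada = session.query(User).filter_by(name="Ada").one()
    print(ada.name)
```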

Recently, I’ve been experimenting with DeepSeek and noticed that it excels at generating the raw-SQL data-access boilerplate that ORMs exist to eliminate. This shifts the balance in the ORM debate and deals a serious blow to the pro-ORM camp: you can now have both direct database access and higher productivity. As someone who has always opposed ORMs, I find this development very encouraging.
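
For illustration, this is roughly the kind of boilerplate I mean: a hypothetical sketch using only Python’s standard sqlite3 module, with the table, dataclass, and repository names invented for the example. It is tedious to type by hand, but it is exactly the sort of thing an LLM can churn out and a reviewer can verify at a glance.

```python
# Hand-rolled data access with raw SQL: explicit, repetitive, and easy to review.
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    name: str
    email: str

class UserRepository:
    def __init__(self, conn: sqlite3.Connection) -> None:
        self.conn = conn

    def create_table(self) -> None:
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users ("
            "id INTEGER PRIMARY KEY, name TEXT NOT NULL, email TEXT NOT NULL)"
        )

    def insert(self, name: str, email: str) -> int:
        cur = self.conn.execute(
            "INSERT INTO users (name, email) VALUES (?, ?)", (name, email)
        )
        self.conn.commit()
        return cur.lastrowid

    def find_by_email(self, email: str) -> Optional[User]:
        row = self.conn.execute(
            "SELECT id, name, email FROM users WHERE email = ?", (email,)
        ).fetchone()
        return User(*row) if row else None

conn = sqlite3.connect(":memory:")
repo = UserRepository(conn)
repo.create_table()
repo.insert("Ada", "ada@example.com")
print(repo.find_by_email("ada@example.com"))
```

Every query here is plain SQL, so there is nothing between you and the database to second-guess.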

Systems with fewer abstraction layers are more maintainable because they contain fewer black boxes, especially when those abstractions are leaky.

The law of leaky abstractions, as Joel Spolsky formulated it, means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying “learn how to do it manually first, then use the wizzy tool to save time.” Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don’t save us time learning.
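
ORMs are a textbook case of that leak. As an illustration (again using SQLAlchemy as a stand-in, with the User and Order models invented for the example), the sketch below triggers the classic N+1 query pattern: an innocent-looking attribute access quietly issues one extra SELECT per row.

```python
# A sketch of a leaky ORM abstraction: the N+1 query pattern.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    user = relationship("User")  # lazy-loaded by default

engine = create_engine("sqlite:///:memory:", echo=True)  # echo=True logs every statement
Base.metadata.create_all(engine)

with Session(engine) as session:
    for name in ("Ada", "Grace", "Linus"):
        user = User(name=name)
        session.add_all([user, Order(user=user)])
    session.commit()

# In a fresh session, the loop below runs one SELECT for the orders,
# then one more SELECT per order to fetch its user: N+1 round trips
# hidden behind a plain attribute access.
with Session(engine) as session:
    for order in session.query(Order).all():
        print(order.user.name)
```

Whether you fix that with eager loading or by writing the JOIN yourself, you first have to know the leak exists, which is exactly the point.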