Most of my career I’ve used key-value stores as my main database. In 2007, when I joined Google, pretty much every team used BigTable, except for Ads, which used sharded MySQL. Around 2014 I started using Spanner, which did have tables, joins and a SQL-like language, but in the end I still used it as a key-value store, as that was more performant for my use case.

Once I left Google, SQL databases were the default everywhere I went, typically PostgreSQL. And whatever the language, most teams were using an ORM library to interact with the database. It didn’t take long for me to utterly dislike ORMs and resort to writing SQL directly whenever I could.

ORMs seem pretty cool the first time you use them, and in particular contexts, but they are the wrong abstraction. They are the assembly language for the database’s machine code (SQL), but SQL is so high-level that the ORM ends up reimplementing it in the target language, making it cumbersome to interact with, and in some cases hiding performance issues.

A simple database

In order to illustrate my issues, let’s start with a simple database schema. I’ll be using SQLite syntax and Go as my target language, since they are my go-to hobby dev stack.

Here is the schema:

CREATE TABLE IF NOT EXISTS User(
  id    TEXT NOT NULL PRIMARY KEY,  -- will be a UUID/ULID
  name  TEXT NOT NULL,
  email TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS Post(
  id      TEXT NOT NULL,
  uid     TEXT NOT NULL,
  content TEXT NOT NULL,

  PRIMARY KEY(uid, id),
  FOREIGN KEY(uid) REFERENCES User(id) ON DELETE CASCADE
);

CREATE TABLE IF NOT EXISTS Comment(
  id      TEXT NOT NULL,
  pid     TEXT NOT NULL,
  creator TEXT NOT NULL,
  content TEXT NOT NULL,

  PRIMARY KEY(pid, id),
  FOREIGN KEY(creator) REFERENCES User(id) ON DELETE CASCADE,
  FOREIGN KEY(pid) REFERENCES Post(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS creator_idx ON Comment(creator);

We have User, Post and Comment tables. Each User can have N Posts and each Post can have M Comments. If you know SQL, you should be able to easily write queries for inserting new users/posts/comments and reading them back as needed.
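For instance, inserts and reads against this schema could look like the following (an illustrative sketch with placeholder IDs and values):

INSERT INTO User(id, name, email) VALUES ('u1', 'Ada', 'ada@example.com');
INSERT INTO Post(id, uid, content) VALUES ('p1', 'u1', 'Hello, world!');

-- All posts by a given user:
SELECT id, content FROM Post WHERE uid = 'u1';

-- All comments on a post, with each commenter's name:
SELECT Comment.content, User.name
FROM Comment
JOIN User ON User.id = Comment.creator
WHERE Comment.pid = 'p1';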

How would an ORM work with this database?

Tables and queries in your favorite language

For starters, most ORMs will want you to create a model, a structure that will map to an equivalent SQL table. Using Gorm, the tables above would be created from these models:

type User struct {
  ID    string `gorm:"primaryKey"`
  Name  string
  Email string
  Posts []Post `gorm:"foreignKey:UID;constraint:OnDelete:CASCADE"`
}

type Post struct {
  ID       string    `gorm:"primaryKey"`
  UID      string    `gorm:"primaryKey"`
  Content  string
  Comments []Comment `gorm:"foreignKey:PID;constraint:OnDelete:CASCADE"`
}

type Comment struct {
  ID      string `gorm:"primaryKey"`
  PID     string `gorm:"primaryKey"`
  Creator string `gorm:"index;foreignKey:UserRefer"`
  Content string
}

This setup creates the same database schema I defined earlier once you register the types as gorm models.
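The registration itself is a single call. A minimal sketch (error handling kept terse):

// Ask Gorm to create or migrate the tables for these models.
if err := db.AutoMigrate(&User{}, &Post{}, &Comment{}); err != nil {
  log.Fatal(err)
}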

Interacting with the models

You can look up the API docs for how to insert data using the models; it is pretty straightforward with Create calls. For reference, an insert is essentially a one-liner (a sketch with placeholder values):
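user := User{ID: "u1", Name: "Ada", Email: "ada@example.com"}
if result := db.Create(&user); result.Error != nil {
  return result.Error
}

More relevant: how do you retrieve the data?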

If I want to list all the users in the database:

var users []User
result := db.Find(&users)

Pretty intuitive, I’d say. This translates to SELECT * FROM User;.

How about retrieving a specific user?

var user User
result := db.Limit(1).Find(&user, "id = ?", "UID")

This translates to SELECT * FROM User WHERE id = "UID" LIMIT 1;. Again, pretty intuitive.

Ok, how about getting all the posts from a specific user and doing something with them?

var user User
result := db.Limit(1).Find(&user, "id = ?", "UID")
if result.Error != nil {
  return result.Error
}

// Hmm...
for _, p := range user.Posts {
  // do something?
}

This seems pretty intuitive, right? You fetch the user you want and then range over their Posts. But in this case the Posts slice will be empty, because the db.Find call only queried for the User you specified. In order to get the posts as well, you need to Preload them:

var user User
result := db.Preload("Posts").Limit(1).Find(&user, "id = ?", "UID")
if result.Error != nil {
  return result.Error
}

// Now we can get the existing posts
for _, p := range user.Posts {
  // do something
}

By using Preload, you perform an eager load, i.e. it immediately fetches the posts for every user it finds, which is what we wanted in this case. Preload is actually a misnomer, since it gives the impression that it will first query for all posts, then for the user, and then filter the user’s posts in memory. In fact, it first fetches the user and then the posts, using a WHERE IN clause.
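Roughly, the two queries it issues look like this (an approximation, not Gorm’s verbatim output):

SELECT * FROM User WHERE id = "UID" LIMIT 1;
SELECT * FROM Post WHERE uid IN ("UID");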

Not fetching the relations by default is actually good. Other ORMs such as the one in Django would lazily fetch each relation, one SQL query at a time, if you tried to iterate on the posts without preloading, creating N+1 queries.

Another way to get only the posts would be:

var posts []Post
results := db.Where("uid = ?", "UID").Find(&posts)

If you didn’t care about the user information, this would be the best option, and it translates to SELECT * FROM Post WHERE uid = "UID";

Wait but this is good, right?

From these simple examples, the ORM makes it pretty straightforward to query for data, as long as you are careful to preload when needed and learn the join/where/group_by/having/etc. API calls. In the end, these are just methods on a language data structure, allowing you to interact with ANY SQL database without having to learn SQL syntax. The ORM takes care of translating the API calls into the equivalent SQL query and then mapping the returned data back to your language’s data types, a process called hydration.

As a bonus, it allows you to construct a query lazily, on the fly. You can add conditions based on user input, so you don’t have to create one SQL query for each unique combination of parameters: just construct it using the Gorm API as you Go (ha!).
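For example, a hypothetical search endpoint could chain conditions only when the corresponding filter is set (uid and term are assumed inputs, not from the original example):

query := db.Model(&Post{})
if uid != "" {
  query = query.Where("uid = ?", uid)
}
if term != "" {
  query = query.Where("content LIKE ?", "%"+term+"%")
}
var posts []Post
result := query.Find(&posts)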

But herein lie dragons…

It’s ${TARGET_LANGUAGE} all the way down

Because now everything is Go code, it is tempting to write your service invoking the ORM directly. For example, using Echo as your web framework, you could write something like this:

func (h *Handler) Posts(c echo.Context) error {
  uid := c.Param("uid")
  var posts []model.Post
  results := h.db.Where("uid = ?", uid).Find(&posts)
  if results.Error != nil {
    return results.Error
  }

  return c.JSON(http.StatusOK, posts)
}

This handler does exactly what you would expect: it gets the UID from the URL path, uses Gorm to query for the posts that belong to that user and, if there are no errors, returns them as JSON.

Why is this problematic? Your handler now depends on how your database is structured. More importantly, if you need to fetch posts in a different handler, you will likely repeat this same call. And if you need to change how you query posts, you will have to find every instance of these queries and fix them.

It also means that you can’t test your database calls separately from your handler code. You can refactor this and instead have a service package that encapsulates the database calls:

// service/post/post.go
package post

type Service struct {
  db *gorm.DB
}

func New(db *gorm.DB) *Service {
  return &Service{db: db}
}

func (s *Service) Posts(uid string) ([]model.Post, error) {
  var posts []model.Post

  results := s.db.Where("uid = ?", uid).Find(&posts)
  if results.Error != nil {
    return nil, results.Error
  }

  return posts, nil
}

// endpoints/web/web.go
package web

type Handler struct {
  posts *post.Service
}

func New(posts *post.Service) *Handler {
  return &Handler{posts: posts}
}

func (h *Handler) Posts(c echo.Context) error {
  uid := c.Param("uid")

  posts, err := h.posts.Posts(uid)
  if err != nil {
    return err
  }

  return c.JSON(http.StatusOK, posts)
}

This is better, but you have really just moved the problem to a different place. You are now manipulating your database directly from your business logic code. Fundamentally, you are still writing SQL in your methods; it’s just disguised as Go code because Gorm put a wrapper around it. You still have the same problems I pointed out in the first version of this code.

Databases are their own programmable domain

The way I like to think of it is that SQL is the database’s declarative machine language and an ORM is the assembly language for it. Notice how ORMs perform 1-to-1 mappings between their concepts and SQL: tables -> models, SELECT -> Find, conditions -> functions named after the conditions.

This is exactly what assembly language is. Machine language is a sequence of numbers that represent instructions. Assembly just maps those instructions to mnemonics. It is a lot easier to write and remember what the LD A,(HL) instruction means [Z80 assembly to load a byte into register A from the address pointed to by the HL register] than its hex representation, 7E.

The problem is that SQL is already a very high-level language, and so the assembly (ORM) inevitably ends up looking just like it, only written with the target language’s constructs. Most ORMs don’t even try to make a more idiomatic API (Gorm requiring a pointer to a slice is pretty weird in Go). And when an ORM does try to be more idiomatic, like Django’s, it quickly falls apart: you either have to introduce concepts like Preload to get the behavior you expect, or you end up with hidden performance issues (ask me how I know…).

Mnemonics are APIs

If we want to take the assembly language analogy further, the equivalent would actually be to create higher-level semantically relevant functions that can be combined to implement your business logic. Going back to our posts example, the Service.Posts method is a business logic method that is using “assembly” to query the database.

This might be fine for now, but if you need to change how it’s queried, or add more queries, that’s more “assembly” you have to write at the business logic level. Instead, you can isolate the “assembly” into its own package, with methods named after the semantics you are implementing:

// storage/storage.go
package storage

type Repo struct {
  db *gorm.DB
}

func New(db *gorm.DB) *Repo {
  return &Repo{db: db}
}

func (r *Repo) GetPostsForUser(uid string) ([]model.Post, error) {
  var posts []model.Post

  results := r.db.Where("uid = ?", uid).Find(&posts)
  if results.Error != nil {
    return nil, results.Error
  }

  return posts, nil
}

// service/post/post.go
package post

type Service struct {
  repo *storage.Repo
}

func New(repo *storage.Repo) *Service {
  return &Service{repo: repo}
}

func (s *Service) Posts(uid string) ([]model.Post, error) {
  posts, err := s.repo.GetPostsForUser(uid)
  if err != nil {
    return nil, err
  }

  // NOTE: You could create a Post datatype in this package so that
  // callers would not directly depend on the gorm model.

  return posts, nil
}

// endpoints/web/web.go
package web

type Handler struct {
  posts *post.Service
}

func New(posts *post.Service) *Handler {
  return &Handler{posts: posts}
}

func (h *Handler) Posts(c echo.Context) error {
  uid := c.Param("uid")

  posts, err := h.posts.Posts(uid)
  if err != nil {
    return err
  }

  return c.JSON(http.StatusOK, posts)
}

Notice how it is now clear, reading each package’s code individually, what is being done. In the web package, you are calling a service to retrieve all posts for a given user. You may perform any input validation at this level to make sure that only valid data is passed along.

In the post package, you are calling a repo method to retrieve all the posts from the database that belong to a user. post has no knowledge of the SQL instructions needed to perform this, but it knows what GetPostsForUser means and what its expected semantics are.

Finally, in the storage package, repo uses Gorm to implement the high-level instruction “fetch all the posts for this user”.

Isn’t this a bit too much?

For this simple example, maybe. But what you should take from this is that:

  • The structure of the code makes it obvious what is being done in each package.
  • The database is treated as its own programmable domain, with the storage package creating the high-level functions that map to the SQL machine code.

In a more involved example, you can combine different repo functions in a transaction to perform complex queries. In fact, I’d argue that this structure makes it much easier to create composable queries.
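As a sketch of what that composition could look like (CreatePost and CreateComment are hypothetical repo methods, and WithTx is my own convention here, not a Gorm API):

// WithTx returns a Repo bound to the given transaction.
func (r *Repo) WithTx(tx *gorm.DB) *Repo {
  return &Repo{db: tx}
}

// CreatePostWithComment runs two repo operations atomically.
func (r *Repo) CreatePostWithComment(p model.Post, c model.Comment) error {
  return r.db.Transaction(func(tx *gorm.DB) error {
    txRepo := r.WithTx(tx)
    if err := txRepo.CreatePost(p); err != nil {
      return err
    }
    return txRepo.CreateComment(c)
  })
}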

Or you can alter your database schema – normalizing it, for example – and then change the repo code so that it keeps the same semantics (enforced by tests) after the normalization, with neither the service nor the web package having to change.

What am I gaining from the ORM then?

The main benefits of using the ORM in this architecture are

  1. Hydration, i.e. it takes care of reading and writing to/from your target language’s data structures.
  2. Syntax- and type-checked queries - for the most part - since you are using the ORM API to create them.
  3. Creating custom queries on the fly, which, if done with raw SQL, could involve potentially unsafe string operations that open the door to SQL injection (see the sketch after this list).
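To make (3) concrete, here is the classic failure mode next to the safe version (a deliberately bad sketch; never build queries like the first line):

// UNSAFE: user input concatenated straight into SQL invites injection.
query := fmt.Sprintf("SELECT * FROM Post WHERE uid = '%s'", uid)

// Safe: the placeholder lets the driver escape the value.
db.Where("uid = ?", uid).Find(&posts)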

(1) and (2) are very important from a productivity standpoint, as you really don’t want to keep rewriting hydration code and adding more tests just to make sure your queries don’t have syntax errors.

(3), on the other hand, is arguably less important if you adopt this “repo as high-level functions for your database” pattern. You will likely end up with one function for every use case you need, plus a few composed from them, so the benefit of queries on the fly depends on your particular situation.

But if (1) and (2) are what matters in this architecture, then you don’t really need an ORM. What you need is a tool that lets you write syntax- (and possibly type-) checked SQL queries, give them names, and have it generate the hydration code for you.

“Named SQL” abstraction

There are very few cases where I reach for code generation, but this is one of them. For Go, there is a tool called sqlc that allows you to write SQL queries, name them, and have it generate the hydration Go code for you. So, using our example, you would create a file with SQL queries:

-- storage/queries.sql
-- name: GetPostsForUser :many
SELECT * FROM Post WHERE uid = ?;

You then need to create a sqlc.yaml file that tells the tool where to find your database schema and your queries, and where to write the generated code. A minimal configuration might look like this (the file locations and package names are my assumptions; adjust them for your project):
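version: "2"
sql:
  - engine: "sqlite"
    schema: "storage/schema.sql"
    queries: "storage/queries.sql"
    gen:
      go:
        package: "database"
        out: "storage/database"

With that in place, running sqlc generate produces, in essence, the following: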

// storage/database/model.go [abbreviated]

type User struct {
  ID    string
  Name  string
  Email string
}

type Post struct {
  ID      string
  Uid     string
  Content string
}

type Comment struct {
  ID      string
  Pid     string
  Creator string
  Content string
}

// storage/database/queries.go [abbreviated]

const getPostsForUser = `
SELECT id, uid, content FROM Post WHERE uid = ?`

func (q *Queries) GetPostsForUser(
    ctx context.Context,
    uid string,
) ([]Post, error) {
    rows, err := q.db.QueryContext(ctx, getPostsForUser, uid)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var items []Post
    for rows.Next() {
        var i Post
        if err := rows.Scan(&i.ID, &i.Uid, &i.Content); err != nil {
            return nil, err
        }
        items = append(items, i)
    }
    if err := rows.Close(); err != nil {
        return nil, err
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return items, nil
}

Now we have the exact function we created with the ORM, but it was generated from a named SQL query and is just as testable as before. sqlc actually validates the query against the schema, catching syntax errors and misnamed fields, and maps the resulting rows to the correct Go types.
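Using the generated code is then just a matter of constructing the Queries type with a database handle (a sketch using database/sql; the sqlite driver choice and ctx are assumptions):

// Open the database with your driver of choice.
db, err := sql.Open("sqlite3", "app.db")
if err != nil {
  return err
}
q := database.New(db) // New is generated by sqlc

posts, err := q.GetPostsForUser(ctx, "UID")
if err != nil {
  return err
}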

There are similar tools for Python, Clojure, Rust and, I’m sure, many other languages.

The main limitation is that you can no longer create queries on the fly: every query needs to be written and named in a .sql file in order to be generated. But, as I argued previously, the pattern of “repo as high-level functions for the database” makes it easier to create composable queries that can be combined in a transaction. And you can always change your queries to perform more complex joins in order to achieve the result you need.

Named SQL is a better abstraction than an ORM, in my opinion, because it lets you use the database’s native assembly language (SQL) and generates the high-level functions for you, taking away the burden of writing and maintaining hydration code.

It really gives you the best of both worlds.