Simple
Every programmer, at some point, has built something that became far more complex than it needed to be. I’ve been guilty of this more times than I can count. The hardest lesson in programming and product development isn’t writing code; it is learning how to keep things simple.
Why is making something simple one of the hardest tasks in software development? It’s not just about reducing code, but about resisting the urge to add features you don’t need. I was putting this lesson into practice when I set out to build a blog using Common Lisp.
Throughout my career, I’ve programmed my fair share of blogs and to-do lists. But after years of building over-complicated projects filled with external libraries, unnecessary databases, and feature creep, I decided to do something different. This time, I wasn’t going to scale prematurely, join the latest JavaScript framework craze, or reach for good ol' tools out of habit. I was going to select my tools carefully and keep external dependencies to an absolute minimum. This is, in my opinion, a valuable but often overlooked skill for anyone making something. I have to credit my former co-worker Paul, who taught me this lesson.
The language – Lisp
The choice of Lisp wasn’t random. It’s a language that encourages minimalism. With its read-eval-print loop and simple abstractions, it forces you to think in terms of simplicity from the ground up.
For this project I would use the Common Lisp dialect, specifically SBCL, its compiled, highly performant implementation. I already had plenty of experience developing web apps in another Lisp dialect, Clojure, and I had learned functional programming in college using yet another dialect, Scheme. I had never touched Common Lisp before, but I assumed my previous Lisp exposure would let me pick up the basics quickly.
I was, indeed, correct. Apart from a few gotchas stemming from Common Lisp’s design-by-committee quirks, the language proved a pleasant development experience, primarily because it did not get in the way of my ideas. This likely made the process quicker than most alternatives, and probably also kept the codebase smaller than it would have been in most other languages I’m familiar with.
As for robustness: succinctness is a virtue. A small codebase in a language with very little syntax is, all else being equal, going to be more maintainable.
The web server
I wasn’t going to write a web server from scratch in a language I had no experience with, so I needed a web server library to build my application. In Common Lisp, libraries like Hunchentoot, Lack, and Clack are ubiquitous, so the choice to use them was an easy one to make. I also added a sprinkle of Ningle, a super simple routing library that has the same maintainer as Lack and Clack.
With Ningle you just create an instance of an app:
(defvar *app* (make-instance 'ningle:&lt;app&gt;))
And add some routes like so:
(setf (ningle:route *app* "/" :method :GET)
      #'(lambda (params)
          `(200 (:content-type "text/plain") ("Hello home"))))
The lambda function takes the request params and returns a list of a status code, headers, and a body.
To get your routes and server up and running, you can use Lack and Clack to handle sessions, access logs, static file serving, and the Hunchentoot server:
(defun start-server ()
  (clack:clackup
   (lack:builder
    (:static :path "/public/"
             :root #P"./dist/")
    :accesslog
    :session
    *app*)
   :server :hunchentoot
   :port (parse-integer *PORT*)))
(defvar *clack-server* (start-server))
Et voilà, I compiled the file, and I had a working server.
The goal for the web server was simple: don’t let it get in the way. The end user doesn’t care if I’m running a complicated, highly optimized web server for a blog. They just want content. That content needs to be represented and presented somehow. I decided to save the articles as Markdown files (data representation), and I of course had to present and serve everything as HTML.
The markup
Because Lisp code is written in the same list structure the language itself manipulates (hence LISt Processing), the line between code and data is blurred. This is a feature, not a bug. It makes it straightforward to write out a data structure, such as the underlying HTML tree, directly in the language you are working in, and to mix in dynamic code from that same language to generate parts of it. From there you can implement a converter that turns the data into its string representation, the data format we call HTML. HTML is essentially a tree structure with some extra metadata, so in Lisp we can represent it as a tree:
(list :html
      (list :body
            (list :h1 "Hello")
            (list :h2 "world")
            (list :a :href "/link" "Go to link")))
We can simplify this further using Lisp’s quoting shorthand (note the backtick):
`(:html
   (:body
    (:h1 "Hello")
    (:h2 "world")
    (:a :href "/link" "Go to link")))
Because the "HTML" is just a Lisp data structure, I can easily generate that tree-like "HTML" using standard Lisp constructs like loops. There’s no need for specialized DSL constructs. For example, listing the categories is as simple as creating a list of anchor tags:
(loop for category in *categories*
      ;; if you do not know Lisp: this collects one result per element,
      ;; much like a Python list comprehension
      collect (list :a :href (format nil "/category/~a" category) category))
The final HTML structure is handed to a function that recursively converts it into the XML-like string representation that we know as HTML.
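To make that concrete, here is a minimal sketch of such a recursive converter. It is illustrative only: it ignores attributes (like :href) and does no escaping, which a real library takes care of.

```lisp
;; A toy recursive tree-to-HTML converter (attributes and escaping omitted).
(defun render (node)
  (if (atom node)
      ;; leaves (strings, numbers) are printed as-is
      (princ-to-string node)
      ;; a list is (tag child ...): emit <tag>, render children, emit </tag>
      (destructuring-bind (tag &rest children) node
        (format nil "<~(~a~)>~{~a~}</~(~a~)>"
                tag (mapcar #'render children) tag))))

(render '(:html (:body (:h1 "Hello"))))
;; => "<html><body><h1>Hello</h1></body></html>"
```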
This approach is so simple and straightforward that it’s been done many times before (see Hiccup for the Clojure way of doing this). Spinneret is a Common Lisp library that does exactly this, so I decided to use it.
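In case you haven’t seen it, Spinneret’s interface looks roughly like this (assuming the library is loaded, e.g. via Quicklisp; the exact whitespace of the output may differ):

```lisp
;; Generating an HTML string with Spinneret
(ql:quickload :spinneret)

(spinneret:with-html-string
  (:div
   (:h1 "Hello")
   (:a :href "/link" "Go to link")))
;; produces markup along the lines of:
;; <div><h1>Hello</h1><a href="/link">Go to link</a></div>
```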
On the topic of data formats, Markdown is also a data format. There’s a good Common Lisp library called 3bmd that parses Markdown and produces HTML, so I used it to parse my blog post Markdown files (1).
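Its usage is pleasantly small; something along these lines (assuming 3bmd is loaded via Quicklisp):

```lisp
;; Converting a Markdown string to an HTML string with 3bmd
(ql:quickload :3bmd)

(with-output-to-string (out)
  (3bmd:parse-string-and-print-to-stream "# Hello *world*" out))
;; produces something like "<h1>Hello <em>world</em></h1>"
```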
Knowing when to use a library is just as important as knowing when to skip it. My criterion was that the library had to be something I could understand and implement myself, but where using someone else’s hard work would save time. Should I find an issue with these simple libraries later on, I know I can fix it and submit a pull request, or, worst case, fork the repository.
A benefit of using these libraries was that I could focus on the contents and semantics of the data I was producing, both in the form of content and in the form of layout and design, rather than the parsing of pure syntax. The tooling did not get in my way here.
With the HTML-generation library, I did not have to learn a separate domain language to produce HTML, as is so common in most templating engines I’ve seen. Nor was I limited by such a domain language’s constructs and abstractions, because I had literally the entire Lisp language at my disposal at all times. HTML was built directly into the language I was actually working in. I did not have to step in and out of two different worlds.
The data layer
At this point, I’d typically fire up PostgreSQL, import a migration library, and write a lot of SQL. But this, in my opinion, would be premature scaling. At the time, I had no articles and no readers. Why would I need a sophisticated, scalable PostgreSQL database? It would be overkill.
Even SQLite felt excessive (2). All I needed was a simple structure to track content, titles, categories, and dates. So I opted for a CSV-like file, where each article had its own row, and each row had semicolon-separated data fields. I wrote a function to read this file and create a vector of articles, with each index corresponding to a row in the file.
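As an illustration, a row in that metadata file could look something like this (the values are made up; the field order is title, summary, Markdown filename, comma-separated categories, and an integer timestamp):

```
Simple;On keeping software simple;simple.md;programming,lisp;3917456000
```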
I realized I could take this even further. I wasn’t memory-constrained at all. The compiled Lisp executable used almost no memory, and the articles themselves were small. So I decided to store everything in memory, loading the rows, parsing the Markdown, and performing a quick (O(1)) index-based lookup in my article vector each time someone requested an article.
(defun collect-article (line)
  (let* ((elements (cl-ppcre:split ";" line))
         (title (first elements))
         (summary (second elements))
         (filename (third elements))
         (categories (cl-ppcre:split "," (fourth elements)))
         (datetime (parse-integer (fifth elements)))
         (strm (make-string-output-stream)))
    ;; parsing the markdown file, producing html
    (3bmd:parse-string-and-print-to-stream
     (uiop:read-file-string (format nil "~a/~a" *article-folder* filename))
     strm)
    (make-article categories
                  title
                  summary
                  (get-output-stream-string strm)
                  datetime)))
(defun read-files ()
  (with-open-file (stream +meta-file+)
    (make-array (count-articles +meta-file+)
                ;; The loop construct was entirely new to me.
                ;; This is very common-lispy.
                :initial-contents
                (loop for line = (read-line stream nil 'eof)
                      until (eq line 'eof)
                      collect (collect-article line)))))
(defparameter *articles* (read-files))
(setf (ningle:route *app* "/article/:id" :method :GET)
      #'(lambda (params)
          (let* ((article-id (parse-integer (get-param params :id)))
                 (article (elt *articles* article-id)))
            `(200 nil (,(funcall #'article-page article))))))
Because of the simple choice of using a file and some memory as my database, and precomputing the Markdown to HTML, I was not only able to skip all the SQL table definitions and queries; I also made lookups instantaneous, confining all I/O to server startup.
Even though I gained a lot by skipping an SQL database, I do actually like relational databases, a lot. And Lisp’s ability to faithfully represent DSLs like SQL in its own syntax (unlike most ORMs) could be highly beneficial here.
The infrastructure
To deploy, I compiled the roughly 200 lines of Common Lisp into an executable, and I wrote a simple shell script to build and deploy it. On the hosting server, I created a systemd service for the compiled binary, giving me the benefit of automatic startups. Deployment is now as simple as building the executable and restarting the systemd service. I did not need anything more than this. My blog uses about 250MB of RAM, and a negligible amount of CPU when dormant.
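For reference, producing such a standalone executable with SBCL can be sketched like this (assuming the application lives in a file called blog.lisp and defines a main function that starts the server and blocks; both names are my own illustration):

```lisp
;; build.lisp -- run with: sbcl --load build.lisp
;; Loads the application, then dumps the running image as a
;; self-contained executable named "blog".
(load "blog.lisp")

(sb-ext:save-lisp-and-die "blog"
                          :toplevel #'main
                          :executable t)
```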
Conclusion
After this exercise, I was left with a working blog that was simple to use and easy to extend. The real task, of course, was writing articles — because the content, not the blog technology, should be the main value generator. (Fun fact: this article took approximately twice as long to write as programming and deploying the website did.)
This project reinforced a valuable lesson: knowing where the real value lies is crucial. Should I need more features, I can implement them easily. If the number of articles or concurrent readers outgrows the CSV file, I can add a database. If the complexity increases, I might write some tests. But a simplicity-first mindset forces you to constantly ask: is this really necessary?
(1) Of course, parsing is slightly more complicated than simple production.
(2) I should acknowledge that SQLite has some extra benefits, like SQL as a domain language, the ability to keep the database in memory, and so on.