Category: Databases

I’m officially unemployed again 😎. In the interest of self-accountability, I’m going to try to document what I’m up to on my break; expect more frequent updates to this blog.

HYTRADBOI

I bought a ticket to Have You Tried Rubbing A Database On It?, which could loosely be described as a hipster database conference: lots of people using databases in unusual ways, and not much in the way of enterprise RDBMSs. The speaker list is like a Who’s Who of offbeat database work, and I’m really looking forward to it.

Nushell

I’ve been using Nushell as my shell about half the time, on both Windows and Linux. Nushell is a fascinating project: a shell that operates on structured data, like PowerShell, but without PowerShell’s (many) pain points.

Nushell has recently seen some massive upgrades (the parsing and evaluation engine was completely rewritten) and it’s a very good time to give it a try. It’s still early days, but I’m hopeful Nushell will be able to displace POSIX shells; it’s liberating to work with much richer data types than plain text:

Nu is a way of saying “what if we didn’t need tools like awk so often?” Since you’re working with structured data, as we add more support for file types, it’s less often you need to reach for “awk”, “jq”, “grep”, and the array of other tools to open and work with common file types. In a way, it’s taking the original spirit of Unix — where you use pipelines to combine a set of tools — and imagining how that original spirit would work today, with what we know about programming languages and tools.
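
For a flavour of what this looks like in practice, here’s a small Nushell pipeline (the size threshold and columns are arbitrary). Instead of teasing apart the text output of ls -l with awk, you filter and sort typed columns directly:

# Files over 1 MB, sorted by modification time; size and modified are
# typed values, so no text parsing is required
ls | where size > 1mb | sort-by modified | select name size modified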

Building data-centric apps with a reactive relational database

This essay touches on a lot of my favourite things: SQLite! The intersection of native apps and web UI! iTunes clones! In a nutshell, it’s a very cool approach to building GUI applications in which all of the application’s state lives in a local database.

It’s more of a provocation than a fully finished system, but I think it shows promise. I’d like to see a bit more investigation of “escape hatches”: how hard would it be to mix in a little imperative code when SQL/SQLite aren’t the right fit for a task? Also, this was a bit depressing:

One challenge has been inter-process communication. When the reactive graph is running in the UI thread and the SQLite database is on a web worker or native process, each query results in an asynchronous call that has to serialize and deserialize data.

This would have been a non-issue in traditional GUI code (just query SQLite on a background thread in the same process); one more thing we lose as web UI takes over, I guess 😞.

The Postgres wire protocol

I recently spent a bit of time messing around with the Postgres wire protocol. I’d like to build a library that lets any application accept connections as if it were a Postgres database; many database clients (and other databases) speak the Postgres wire protocol, and it seems like a practical way to expose tabular data over the network.

Implementing the protocol isn’t too bad; the docs aren’t great, but the protocol itself is relatively straightforward. Unfortunately, client behaviour seems harder to accommodate. Clients can issue arbitrarily complex SQL queries to inspect the server’s schema, and sometimes they expect accurate answers. For example, Npgsql issues this beast and then disconnects if it gets an invalid response.
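
For a sense of what the straightforward part looks like, here’s a minimal sketch of the message framing in TypeScript, using Node’s net module. It skips authentication entirely and ignores the SSLRequest negotiation most clients open with, so treat it as a starting point rather than a working server:

import * as net from "net";

// Backend messages are framed as: one ASCII type byte, then an int32
// length (covering the length field and payload, but not the type byte).
function frame(type: string, payload: Buffer = Buffer.alloc(0)): Buffer {
  const header = Buffer.alloc(5);
  header.write(type, 0, "ascii");
  header.writeInt32BE(payload.length + 4, 1);
  return Buffer.concat([header, payload]);
}

const server = net.createServer((socket) => {
  socket.once("data", () => {
    // Pretend the first packet was a valid StartupMessage and accept it.
    // 'R' + int32 0 is AuthenticationOk; 'Z' + 'I' is ReadyForQuery (idle).
    socket.write(frame("R", Buffer.from([0, 0, 0, 0])));
    socket.write(frame("Z", Buffer.from("I")));
    // A real implementation would go on to parse 'Q' (simple query)
    // messages and reply with RowDescription/DataRow/CommandComplete.
  });
});

server.listen(5432);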

So I’m stuck with a question about that client behaviour: how far should I go to support legitimate client queries that fall outside the protocol itself? The easiest approach for now is to test with a few popular clients and hardcode responses for the queries they depend on, but that will leave a long tail of unsupported clients. On the other end of the complexity scale, I could try running schema queries against an in-memory SQLite instance populated with objects that mimic Postgres’s system catalogs (certainly a massive yak shave, but tempting nonetheless).

I think my next step will be to dig through the source of a few Postgres-compatible databases to see how they solve this.

Managing Postgres with Pulumi and Terraform

Adventures with Infrastructure as Code

I’m a big fan of Pulumi, and I’ve previously used it for cloud infrastructure. That’s their main selling point; Pulumi’s slogan is literally “Program the cloud.” Interestingly, Pulumi can also be used to manage on-premises infrastructure, including PostgreSQL databases. Let’s dive into the details.

Pulumi basics

Pulumi takes a declarative infrastructure definition and handles provisioning that infrastructure, just like Terraform, AWS CloudFormation, and Azure Resource Manager. The main difference is that with Pulumi you write real code (Node, Python, or Go) to build the infrastructure definition instead of writing YAML or JSON. It’s great.

Pulumi/Terraform PostgreSQL Provider

After Pulumi has executed your code to build up an infrastructure definition, it needs to interact with external infrastructure resources (cloud providers, databases, etc.) to turn that definition into a reality. Some of Pulumi’s functionality for working with external resources is derived from Terraform providers. For example, Pulumi has a PostgreSQL provider which is derived from the Terraform PostgreSQL provider.

At a high level, the PostgreSQL provider lets you define databases, roles, schemas, permissions, and PostgreSQL extensions. It stops short of letting you manage tables and data within those databases (which is probably for the best, given the complexities of schema and data migrations).

Defining a Postgres database with TypeScript

Pulumi can be used to instantiate entire Postgres clusters, but for this tutorial let’s assume you already have an existing cluster managed outside of Pulumi. It can be running anywhere: in the cloud, in a container, on your local machine, whatever.

First, install Pulumi, then instantiate a new TypeScript project with pulumi new typescript. This scaffolds a bare-bones Node+TypeScript Pulumi project with the following files:

index.ts                package-lock.json       tsconfig.json
Pulumi.yaml             node_modules            package.json

Install the Pulumi PostgreSQL provider with npm i @pulumi/postgresql (or just edit your package.json directly).

Next, we need to tell Pulumi which Postgres cluster to connect to and how. The PostgreSQL provider’s configuration points are documented here. My Postgres cluster is running on a local VM with the hostname fedora-vm, so I set the postgresql:host variable like so:

pulumi config set postgresql:host fedora-vm

This creates a YAML configuration file for the current Pulumi stack, or modifies one if it already exists. For my dev stack, this creates a file named Pulumi.dev.yaml with the following contents:

config:
  postgresql:host: fedora-vm

You’ll want to do the same for postgresql:username and postgresql:password, to tell Pulumi which credentials to use. Sensitive configuration values like passwords can be encrypted using the --secret flag.
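
For example (the username and password here are placeholders):

pulumi config set postgresql:username pulumi_admin
pulumi config set --secret postgresql:password correct-horse-battery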

Finally, we’re ready for the fun stuff: configuring a database in real TypeScript code. Let’s say we want to instantiate a Pulumi-managed database, create a role with login permissions, and grant that role SELECT permission on tables in the new database’s public schema. This can all be done in the index.ts file like so:

import * as pulumi from "@pulumi/pulumi";
import * as postgresql from "@pulumi/postgresql";

const config = new pulumi.Config();

const managedDatabase = new postgresql.Database("managedDatabase", {
  name: "pulumi"
});

const publicReaderRole = new postgresql.Role("publicReaderRole", {
  login: true,
  name: "public_reader",
  password: config.require("publicReaderPassword")
});

const publicReaderSelectTablesGrant = new postgresql.Grant(
  "publicReaderSelectTablesGrant",
  {
    database: managedDatabase.name,
    objectType: "table",
    privileges: ["SELECT"],
    role: publicReaderRole.name,
    schema: "public"
  }
);
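
From here, pulumi up previews the changes and, once you confirm, creates the database, role, and grant against the cluster. A quick way to sanity-check the result, reusing the hostname and role from above:

pulumi up
psql -h fedora-vm -U public_reader -d pulumi -c "SELECT current_user"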

Relational Database History

1983 called. They said SQL needs a makeover

I recently read two papers that I’d highly recommend if you ever wonder about the underpinnings of modern relational databases:

What Goes Around Comes Around (Stonebraker and Hellerstein, 2005)

This is an opinionated history of database architectures from roughly 1970 to 2005, and it’s got some serious credentials behind it – Michael Stonebraker was the driving force behind Postgres, among many other things. It’s an easy read, and a good explanation of how relational DBs and SQL came to (mostly) rule the world. It was written amidst a lot of hype over XML databases, and it (rightly, in retrospect) critiques XML DBs and predicts that they won’t see much commercial adoption.

One interesting bit: Stonebraker and Hellerstein do not like SQL, and they claim it became the standard largely because of IBM’s dominance in the marketplace during the 1980s. Which leads us to the next item…

A Critique of the SQL Database Language (Chris Date, 1983)

This was cited approvingly in the first paper as a “scathing critique of the semantics of SQL”, which caught my eye: virtually every database supports SQL, so how bad can it be?

I won’t summarize the entire thing (it’s a laundry list of complaints), but this bit resonated with me:

Notice that it is just table-names that appear in the FROM clause. Completeness suggests that it should be table-expressions (as Gray puts it, “anything in computer science that is not recursive is no good”).

…a simple table-reference (i.e. a table-name) ought to be just a special case of a general table-expression

To give a (trivial) example, say we want to union two tables and extract a column. We’d write that like this in SQL:

SELECT EMPLOYEE_ID FROM EMPLOYEES
UNION
SELECT EMPLOYEE_ID FROM CONTRACTORS

But if table names were just special cases of table expressions, we could write it like this instead:

SELECT EMPLOYEE_ID FROM (EMPLOYEES UNION CONTRACTORS)
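
To be fair, modern SQL does allow a derived table in the FROM clause, but you still can’t combine bare table names; you have to spell out full SELECTs (and usually name the result):

SELECT EMPLOYEE_ID
FROM (SELECT EMPLOYEE_ID FROM EMPLOYEES
      UNION
      SELECT EMPLOYEE_ID FROM CONTRACTORS) AS ALL_WORKERS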

SQL gets the job done, and it could be a lot worse - but it is a little sad that many of the issues academics have known about for 35+ years still exist today. It’s a reminder of how entrenched something that’s only “good enough” can become.

Quick T-SQL Regression Testing

Using CHECKSUM() and CHECKSUM_AGG()

Here’s something I’ve found useful in SQL Server, but it should apply in any DBMS with checksum functions. Next time you’re refactoring some database objects and you want to do some regression testing, give the CHECKSUM (MSDN) and CHECKSUM_AGG (MSDN) functions a try.

CHECKSUM and CHECKSUM_AGG

They behave pretty much as you’d expect: CHECKSUM returns one checksum for one row, and CHECKSUM_AGG is an aggregate function that collapses a set of checksums into a single value. Between the two of them, you can get a checksum for any arbitrary collection of data:

-- Returns 2 rows with 2 columns
SELECT Field1, Field2
FROM (VALUES ('foo','bar'),('baz','foo')) TempTable(Field1,Field2)

-- Returns 2 rows with 1 checksum column
SELECT CHECKSUM(Field1,Field2)
FROM (VALUES ('foo','bar'),('baz','foo')) TempTable(Field1,Field2)
 
-- Returns 1 row with 1 checksum column
SELECT CHECKSUM_AGG(CHECKSUM(Field1,Field2))
FROM (VALUES ('foo','bar'),('baz','foo')) TempTable(Field1,Field2)
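
CHECKSUM also accepts * to cover every column in a row, which is handy for snapshotting an entire table (the table name here is hypothetical):

-- Returns 1 row with 1 checksum summarizing the entire table
SELECT CHECKSUM_AGG(CHECKSUM(*))
FROM dbo.MyResultsSnapshot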

Regression testing

These help with a common problem: when you’re refactoring a database object, how can you be sure that you haven’t changed its existing behaviour? Sometimes it’s enough to spot-check a few inputs, but often the logic is complex enough that this doesn’t provide much confidence. With CHECKSUM and CHECKSUM_AGG, you can quickly check large numbers of test cases.

For example, say we have a TVF called MyDateTVF that takes a date parameter and returns columns Column1 and Column2. I want to refactor this function and then verify that its behaviour is unchanged for every date in 2018. If we have a table called DateList that contains every date (more useful than you might expect in a DB with a lot of temporal aspects), I can simply run the following before and after my change:

SELECT CHECKSUM_AGG(CHECKSUM(MDT.Column1, MDT.Column2))
FROM DateList DL
CROSS APPLY (
    SELECT Column1, Column2 FROM dbo.MyDateTVF(DL.Date)
) MDT
WHERE DL.Date BETWEEN '2018-01-01' AND '2018-12-31'

If that query returns the same checksum before and after my change, it’s very likely there are no regressions for those inputs; checksums can collide, so it isn’t an ironclad guarantee, but it’s a quick and cheap confidence boost. Nice and easy!
