Saturday, June 24

PG Phriday: Postgres on Tap

Testing within the context of a database can sometimes seem a bit overwhelming. Not only is it necessary to create a reasonable set of test data to bootstrap multiple tables, but we need some reliable invocation system to compare query, view, and function output to expected results. And that is no small feat. For a very long time, the "easiest" way to test end-user database objects involved building a Makefile that included PGXS. Then we had to create the tests, run the tests, and copy over the output into static expected files. Modifying any test always meant updating the static output file as well.

It’s great that the tools are there, yet why do we need to be halfway to being a Postgres developer in order to follow good QA practices with Postgres? Well, maybe we don’t. It turns out, like many things in the Postgres world, that there’s an extension for that. These days, all the cool kids are using pgTAP. Not only does pgTAP make it easier to write tests for Postgres, but its design generally encourages test-driven development. Regardless of how any of us might view that development paradigm, having it available makes Postgres a more welcoming platform for those who fully embrace it.

To that end, let’s say we have our trusty sensor_log table and we want to write a get/set function pair to control and access its contents. As always, the sensor_log table looks like this:

CREATE TABLE sensor_log (
  sensor_log_id  SERIAL PRIMARY KEY,
  location       VARCHAR NOT NULL,
  reading        BIGINT NOT NULL,
  reading_date   TIMESTAMP NOT NULL
);

CREATE INDEX idx_sensor_log_location ON sensor_log (location);
CREATE INDEX idx_sensor_log_date ON sensor_log (reading_date);

We also need to install pgTAP itself. That part, at least, is as easy as the usual extension install. RHEL and Debian users don’t even need to build anything, since pgTAP is in the official Postgres YUM and APT repositories. After that, it’s just a quick extension install:

CREATE EXTENSION pgtap;
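A quick way to confirm the install worked is to peek at the extension catalog and see which version we actually got:

```sql
-- Verify pgTAP is installed and check its version
SELECT extversion
  FROM pg_extension
 WHERE extname = 'pgtap';
```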

In the spirit of TDD, we should write the tests first. The pgTAP system supports a couple of invocation methods, but one of the simpler approaches involves creating a file or two to house the tests. Even though our API will only have two functions at this point, let’s create two separate files so we can perform a bunch of different tests for each. Starting with the set function, this is what that file might resemble:

BEGIN;

SELECT plan(3);

SELECT lives_ok(
    $$ SELECT set_sensor_log('1', 1) $$,
    'Check set_sensor_log - basic execution'
);

SELECT is(
    set_sensor_log('10', 20), True,
    'Check set_sensor_log - new value'
);

SELECT throws_ok(
    $$ SELECT set_sensor_log('100', 599) $$,
    '"599" is an invalid reading!',
    'Check set_sensor_log - invalid reading'
);

ROLLBACK;

Let’s assume we made a directory named tests and put this in a file named set_sensor_log.pg so we know what we’re testing. The tests we’re actually performing are extremely simple. We’re just making sure the function can execute, can successfully set a value, and as a bonus, ensuring an invalid value is properly rejected.

To run our test, pgTAP provides a utility named pg_prove which can either process a whole directory of tests, or a single test file. That makes it extremely versatile for incrementally integrating new tests while still having the option to run the entire suite. If we were to process this test file now, we’d get nothing but an error:

$> pg_prove -r tests

tests/set_sensor_log.pg .. 1/3 
# Failed test 1: "Check set_sensor_log - basic execution"
#     died: 42883: function set_sensor_log(unknown, integer) does not exist
#         HINT:       No function matches the given name and argument types. You might need to add explicit type casts.
#         CONTEXT:
#             PL/pgSQL function lives_ok(text,text) line 14 at EXECUTE
psql:tests/set_sensor_log.pg:13: ERROR:  function set_sensor_log(unknown, integer) does not exist
LINE 2:     set_sensor_log('10', 20), True,
            ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
tests/set_sensor_log.pg .. Dubious, test returned 3 (wstat 768, 0x300)
Failed 3/3 subtests 

Test Summary Report
-------------------
tests/set_sensor_log.pg (Wstat: 768 Tests: 1 Failed: 1)
  Failed test:  1
  Non-zero exit status: 3
  Parse errors: Bad plan.  You planned 3 tests but ran 1.
Files=1, Tests=1,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.02 cusr  0.00 csys =  0.04 CPU)
Result: FAIL

Normally a failed test is something to complain about, but in this case, it’s exactly what we want to see. The next step is to write the function the test is trying to invoke. Our API calls for a function with two parameters—presumably the location and reading values. It should return a boolean for its status, and throw an exception on invalid inputs. Here’s a function that does all of those things:

CREATE OR REPLACE FUNCTION set_sensor_log(
  p_location VARCHAR, p_reading BIGINT
)
RETURNS BOOLEAN AS
$$
BEGIN
  IF p_reading > 500 THEN
    RAISE EXCEPTION '"%" is an invalid reading!', p_reading;
  END IF;

  INSERT INTO sensor_log (location, reading, reading_date)
    VALUES (p_location, p_reading, now());

  RETURN TRUE;
END;
$$ LANGUAGE plpgsql;
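Before rerunning the suite, a quick manual sanity check in psql can confirm the function behaves the way the tests expect:

```sql
SELECT set_sensor_log('42', 100);  -- returns t
SELECT set_sensor_log('42', 501);  -- raises: "501" is an invalid reading!
```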

And if we were to invoke pg_prove again, the results would be much different:

$> pg_prove -r tests

tests/set_sensor_log.pg .. ok   
All tests successful.
Files=1, Tests=3,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.02 cusr  0.00 csys =  0.04 CPU)
Result: PASS

Conveniently, we can follow a very similar procedure for the get function. Let’s define three tests here as well. One to execute the function, one to retrieve a specific record, and one which tries to match a record that doesn’t exist. Here’s how that might look:

BEGIN;

SELECT plan(3);

INSERT INTO sensor_log (sensor_log_id, location, reading, reading_date)
VALUES (99999, '999', 499, now());

SELECT lives_ok(
    $$ SELECT get_sensor_log('999') $$,
    'Check get_sensor_log - basic execution'
);

SELECT results_eq(
    $$ SELECT * FROM get_sensor_log('999') $$,
    $$ VALUES (99999::INT, '999'::VARCHAR, 
          499::BIGINT, now()::TIMESTAMP) $$,
    'Check get_sensor_log - existing location'
);

SELECT throws_ok(
    $$ SELECT get_sensor_log('998') $$,
    '"998" is an invalid location!',
    'Check get_sensor_log - invalid location'
);

ROLLBACK;

Notice that this time around, we bootstrapped the table a bit so we could target specific row results. Fun, eh? From these tests, we can also see that the intent of the get_sensor_log function is to retrieve the entire series of rows for a particular location. Our test only needs one successful result, but there’s nothing stopping us from defining several.

Once again, the function for this test isn’t especially onerous:

CREATE OR REPLACE FUNCTION get_sensor_log(
  p_location VARCHAR
)
RETURNS SETOF sensor_log AS
$$
DECLARE
  retval sensor_log;
BEGIN
  FOR retval IN 
      SELECT *
        FROM sensor_log
       WHERE location = p_location
  LOOP
    RETURN NEXT retval;
  END LOOP;

  IF NOT FOUND THEN
    RAISE EXCEPTION '"%" is an invalid location!', p_location;
  END IF;
END;
$$ LANGUAGE plpgsql;
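Calling it directly shows both code paths, assuming we’ve inserted at least one reading for location '999' and none for '998':

```sql
-- Returns every reading recorded at location '999'
SELECT * FROM get_sensor_log('999');

-- Raises: "998" is an invalid location!
SELECT * FROM get_sensor_log('998');
```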

Thankfully, Postgres set-returning functions make fetching all matching rows easy. We just throw an exception if the caller passed an invalid location, and we’re basically done. And here’s what happens when we run pg_prove on the tests directory this time:

$> pg_prove -r tests

tests/get_sensor_log.pg .. ok   
tests/set_sensor_log.pg .. ok   
All tests successful.
Files=2, Tests=6,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.06 cusr  0.00 csys =  0.08 CPU)
Result: PASS

Not only did it run both of our tests, it also produced a handy summary of everything. We’ve used this utility to vet a project with nearly 200 test files, many of which required multiple revisions to both the test and the SQL function. Combined with a script to generate a basic test skeleton for each function, pg_prove greatly increases code-to-QA iteration velocity. A good test framework really is indispensable for handling error conditions and code edge cases.
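That skeleton generator needn’t be fancy. Here’s a minimal sketch of the idea: a shell function that reads function names on stdin, one per line, and emits a stub pgTAP file for each. The tests/ directory layout and the has_function() starter check are assumptions; adapt both to taste.

```shell
# Hypothetical skeleton generator: reads one function name per line on
# stdin and writes a minimal pgTAP stub for each into tests/, leaving
# any existing test files alone.
gen_skeletons() {
  mkdir -p tests
  while read fname; do
    test_file="tests/${fname}.pg"
    [ -e "$test_file" ] && continue
    cat > "$test_file" <<EOF
BEGIN;

SELECT plan(1);

-- pgTAP's has_function() at least verifies the function exists
SELECT has_function('${fname}');

ROLLBACK;
EOF
  done
}
```

Feeding it something like `psql -AXqt -c "SELECT proname FROM pg_proc WHERE pronamespace = 'public'::regnamespace" | gen_skeletons` would stub out every public function in one pass.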

And we’ve barely even scratched the surface of pgTAP; we’ve only utilized four of its nearly 200 test subroutines. So far, the only test scenario that doesn’t appear to have a repeatable result is a function that operates in a chronologically sensitive context. Basically, it’s difficult or outright impossible to mock the date or time within Postgres, so a function that uses CURRENT_DATE or now() to trigger specific code paths is hard to verify. Does a certain function operate normally on the first of the month, or during working hours? Who knows. But this is less a shortcoming of pgTAP than a limitation of Postgres itself, rooted in data integrity concerns, so it’s understandable. Imagine the chaos if any user could set the context of their session to an arbitrary date or time!

I’d like to personally thank David Wheeler for contributing this wonderful resource. It’s criminally underutilized, and part of that is the persistent idea that databases can’t really be tested due to their relatively volatile nature. In reality, databases are immensely testable, and edge cases especially can be mocked up and permanently preserved as part of the test framework. If we did more of that, maybe we could eventually make the argument to somehow mix some T into our ACID.
