Postgres contains a moving event horizon, which in effect sits about 2 billion transactions ahead of and behind the current transaction id. Because transaction ids wrap around, transactions more than 2 billion ahead of or more than 2 billion behind the current transaction id are considered to be in the future, and their rows will thus be invisible to current transactions.
Postgres avoids this catastrophic data loss by specially marking old rows so that no matter where they are in relation to the current transaction id they will be visible.
Freezing is the process of marking old live tuples (i.e. database rows) so that they don't get run over by the moving event horizon, which would otherwise make them appear to be in the future. This is in contrast to vacuuming, which is the freeing up of space consumed by old dead tuples that are no
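To make this concrete, here is a small sketch of how you might watch how far tables are from the event horizon and freeze one manually; the table name mytable is just a placeholder:

```sql
-- How far is each table's oldest unfrozen transaction id from the
-- current transaction id? age() measures exactly that distance.
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;

-- Aggressively freeze all eligible tuples in one table now,
-- rather than waiting for autovacuum to get around to it.
VACUUM (FREEZE) mytable;
```

Normally autovacuum does this for you once a table's age passes autovacuum_freeze_max_age, so manual freezing is only needed in special cases.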
The video of my presentation below walks you through the major features of the native JSON data type in PostgreSQL 9.3 and beyond.
This presentation covers the following topics:
What is JSON?
How is it available in PostgreSQL?
What's the difference between JSON and JSONB?
Accessing JSON values
Creating JSON from table data
Creating table data from JSON
Crosstabs with JSON
Indexing and JSON
When to use JSON, when to use JSONB, and when neither should be used
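As a small taste of what the talk covers, here is a sketch of a few of these operations; the table and column names are invented for illustration:

```sql
-- json preserves the input text; jsonb stores a parsed binary form
CREATE TABLE docs (id serial PRIMARY KEY, body jsonb);

INSERT INTO docs (body)
VALUES ('{"name": "widget", "tags": ["blue", "small"]}');

-- Accessing values: -> returns json(b), ->> returns text
SELECT body -> 'tags' -> 0 AS first_tag_json,
       body ->> 'name'    AS name_text
FROM docs;

-- Creating JSON from table data
SELECT row_to_json(d) FROM docs d;

-- Containment queries, which a GIN index can accelerate (jsonb only)
CREATE INDEX docs_body_idx ON docs USING gin (body);
SELECT * FROM docs WHERE body @> '{"name": "widget"}';
```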
Recently I have been refining and adding utilities to look after our Perl code. You might be surprised to learn that as well as 1.3 million or so lines of C code, there are about 30,000 lines of Perl code in our sources. This is a sizeable body of code, even if it's dwarfed by our C code. What does it do? Well, lots of things. It runs some very critical code in building from source: the code to set up our catalogs is created by Perl code. All the new data setup for catalogs is in fact Perl code. That's another 20,000 lines or so of code on top of the 30,000 mentioned above. We also use Perl to run TAP tests, such as testing initdb and pg_dump. And it runs building and testing when we're building with the Microsoft tool-sets on Windows.
So, what changes have been made? First, we
If you have Docker installed on your development machine, there is a simple way to road test your code using the buildfarm client, in a nicely contained environment.
These preparatory steps only need to be done once. First clone the repository that has the required container definitions:
git clone https://github.com/PGBuildFarm/Dockerfiles.git bf-docker
Then build a container image to run the command (in this example we use the file based on Fedora 28):
docker build --rm=true -t bf-f28 -f Dockerfile.fedora-28 .
Make a directory to contain all the build artefacts:
That's all the preparation required. Now you can road test your code with this command:
docker run -v buildroot-f28:/app/buildroot \
I have just released version 8 of the PostgreSQL Buildfarm client.
It can be downloaded from https://buildfarm.postgresql.org/downloads/releases/build-farm-8.tgz or https://github.com/PGBuildFarm/client-code/archive/REL_8.tar.gz
This release contains a number of small changes that make the --from-source feature a bit nicer to use. It also contains a good deal of code cleanup to make the code perlcritic clean, with some exceptions, down to severity level 3, and removes some out-of-date code that referred to PostgreSQL branches we no longer support.
It also contains the following features:
--tests and --schedule now apply to the check step as well as the installcheck step
a new --delay-check switch delays the check step until after install. This helps work around a bug or lack of
I have released version 6.1 of the PostgreSQL Buildfarm client. It is available at https://buildfarm.postgresql.org/downloads/releases/build-farm-6_1.tgz
This release fixes a couple of bugs that became apparent in yesterday's release. The first was a relatively minor one where the verbosity was always set if using the run_branches.pl script. The second was a portability issue where some versions of Perl and its Time::HiRes module behaved unexpectedly, and as a result log files were not sorted in the correct order, leading to very strange timing results. The second one is sufficiently bad to warrant this point release.
Thanks to Tom Lane for identifying and helping to diagnose and patch these bugs.
Also, one other small bug is fixed in some utility scripts, and the BlackholeFDW module
PostgreSQL Buildfarm client Release 6 is now available and can be downloaded from https://buildfarm.postgresql.org/downloads/releases/build-farm-6.tgz
use a persistent cache for configure, leading to a substantial performance improvement for that step, based on an idea from Tom Lane
improve installcheck security, mirroring earlier changes on this in Postgres itself
allow skipping individual TAP tests, e.g. "--skip_steps=pg_dump-check"
a use_installcheck_parallel setting that uses the parallel schedule rather than the serial schedule for the installcheck step (not for MSVC)
use_valgrind and valgrind_options settings that will cause the installed postmaster to be run under valgrind and report any output. This doesn't apply to the check step, TAP tests, or any
Recently we've had a patch submitted to support the latest incarnation of the Microsoft build tools, Visual Studio 2017. I didn't have a spare Windows machine available to test with, so I set up a Windows machine on Amazon AWS to test with. I chose Windows Server 2016, a t2.medium instance with a 50GB root disk (the default 30GB is a bit tight). This costs about US$0.065 per hour to run, so it's pretty cheap.
The first things I did, as I always do with these machines, were to turn off Internet Explorer Enhanced Security, which has a habit of getting in the way, and then install the Firefox web browser. Then I installed the Chocolatey package manager for Windows. This is a pretty useful tool, somewhat similar to yum, dnf and apt. You need to install it via an administrative command shell. Once
Release 5 of the PostgreSQL Buildfarm Client has been released and can be downloaded from https://buildfarm.postgresql.org/downloads/releases/build-farm-5.tgz
In a move similar to PostgreSQL's own version numbering, with this release we switch to a one-part numbering system.
In addition to a number of bug fixes and very minor changes, this release has the following features / changes:
Cross-version pg_upgrade module is no longer experimental - see below
TAP tests now run the "check" target, but in most cases redundant installs are avoided
Improved and expanded TAP test running on MSVC animals - these now run the same tests as other animals
Automatic garbage collection of git repositories, once a week by default. This should improve the speed of git operations, especially on
Some of my Italian colleagues have made a nifty little gadget called redislog for pushing postgres logs into Redis, the distributed in-memory cache. From there it can be fed into things like logstash. I thought it would be interesting instead to make the logs available via the Redis Foreign Data Wrapper as a Postgres table. That way we would have easy access to the running logs from Postgres with almost no effort. Here's what I did.
First I built and installed redislog and redis_fdw. Then I added redislog to my server's shared_preload_libraries, set log_min_duration_statement to 0 and restarted.
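The settings described above amount to something like this in postgresql.conf (a sketch; any redislog-specific connection settings are omitted, and a restart is needed because shared_preload_libraries only takes effect at server start):

```ini
# load the redislog log-shipping module at server start
shared_preload_libraries = 'redislog'
# log every statement, regardless of how long it takes
log_min_duration_statement = 0
```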
Then I created a database called logger and did this in it:
create extension redis_fdw;
create server localredis
foreign data wrapper redis_fdw;
create foreign table