I have developed a particular Git workflow for maintaining PostgreSQL feature branches and submitting patches to the pgsql-hackers mailing list and commit fests. Perhaps it's also useful to others.
This workflow is useful for features that take a long time to develop, will be submitted for review several times, and will require a significant amount of changes over time. In simpler cases, it's probably too much overhead.
You start as usual with a new feature branch off master:
git checkout -b reindex-concurrently master
and code away. Make as many commits as you like for every change you make. Never rebase this branch. Push it somewhere else regularly for backup.
When it's time to submit your feature for the first time, first merge in the current master branch and fix any merge conflicts.
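The workflow above can be sketched end to end in a throwaway repository (the repository path, file names, and commit messages are illustrative; a reasonably recent git 2.x is assumed):

```shell
set -e
repo=$(mktemp -d)

# a fresh repo standing in for a PostgreSQL checkout
git -C "$repo" init -q
git -C "$repo" config user.email "you@example.com"
git -C "$repo" config user.name "Example Hacker"
echo "base" > "$repo/README"
git -C "$repo" add README
git -C "$repo" commit -qm "initial commit"
git -C "$repo" branch -M master          # make sure the branch is named master

# start the feature branch off master and commit freely on it
git -C "$repo" checkout -qb reindex-concurrently master
echo "feature" > "$repo/feature.c"
git -C "$repo" add feature.c
git -C "$repo" commit -qm "work on the feature"

# meanwhile, master moves on
git -C "$repo" checkout -q master
echo "fix" > "$repo/fix.c"
git -C "$repo" add fix.c
git -C "$repo" commit -qm "unrelated commit on master"

# at submission time: merge master in, never rebase
git -C "$repo" checkout -q reindex-concurrently
git -C "$repo" merge -q --no-edit master
```

The merge (rather than a rebase) preserves the branch's published history, so reviewers and your backup remote never see rewritten commits.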
What is the role of PgBouncer in a Postgres High Availability stack? What even is PgBouncer at the end of the day? Is it a glorified traffic cop, or an integral component critical to the long-term survival of a Postgres deployment?
When we talk about Postgres High Availability, a lot of terms might spring to mind. Replicas, streaming, disaster recovery, failover, automation; it's a ceaseless litany of architectural concepts and methodologies. The real question is: how do we get from Here to There?
The Importance of Proxies
It's no secret that the application stack must communicate with the database. No matter how many layers of decoupling, queuing, and atomicity our implementation employs, data must eventually be stored for reference. But where is that endpoint? Presuming that write
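To make the proxy idea concrete, a minimal PgBouncer configuration might look like the sketch below (the host address, port, and pool settings are illustrative, not a recommendation):

```ini
[databases]
; clients connect to "appdb" on the proxy; PgBouncer forwards to the real primary
appdb = host=10.0.0.5 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
```

The application only ever knows about the proxy's address, which is what lets the database behind it move during a failover without client changes.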
Since the release of v3, Postgres-BDR has evolved into the go-to clustering technology built specifically for businesses that require geographically distributed databases with multiple masters.
To get an update on Postgres-BDR’s development, new features, and future roadmap, 2ndQuadrant held the pglogical and Postgres-BDR Update webinar as part of its PostgreSQL webinar series.
The webinar was presented by Simon Riggs, Founder & CEO of 2ndQuadrant, who is also a major contributor to the open source PostgreSQL project. Those who weren't able to attend the live event can now view the recording here.
For any questions or comments regarding Postgres-BDR, please send an email to
After the final release of patch 9.3.25 on November 8th, 2018, PostgreSQL 9.3 is no longer supported. It is therefore time for all users of PG 9.3 to upgrade their databases to a newer supported version. The benefits of running a supported version are many, and that's what Craig Ringer talks about in the Q&A session below:
Why is it important to upgrade your PostgreSQL database to the latest version?
Craig: Always update to the latest minor version. 2ndQuadrant's 24/7 support services often help customers who could've avoided experiencing a production outage or fault simply by updating every minor version or two. The PostgreSQL community (including 2ndQuadrant) releases minor point releases conservatively, and for good reasons. Keep up to date on those patches.
Note: A "minor
The presentation below covers various concepts around "Ansible and PostgreSQL" - something I am very enthusiastic about.
This presentation covers the following topics:
Overview of Ansible and PostgreSQL
Best strategies for mixed cloud and on-premises deployments
How to deploy AlwaysOn PostgreSQL clusters
How to perform maintenance updates
How to create a variety of cluster types
How to backup servers for mixed on-premises and multi-cloud deployments
I don’t often get to speak on technical topics, but the video of my
or if you just decided to encrypt the sensitive stuff. Either way, you need to do the encryption right and actually protect the information. Unfortunately, full-disk encryption and pgcrypto are not a good fit for multiple reasons, and application-level encryption reduces the database to "dumb" storage. Let's look at an alternative approach - offloading the encryption to a separate trusted component, implemented as a custom data type.
Let's assume you have some sensitive data that you need to protect by encryption. It might be credit card numbers (the usual example), social security numbers, or pretty much anything you consider sensitive. It does not matter if the encryption is mandated by a standard like
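Whatever the motivation, the offloading idea can be sketched from the command line: a trusted component encrypts values before they reach the database, so the database only ever stores ciphertext. Here openssl stands in for that trusted component, and the passphrase handling is deliberately a toy (requires OpenSSL 1.1.1 or newer for `-pbkdf2`):

```shell
# hypothetical sensitive value - a credit card number
secret="4111-1111-1111-1111"

# the trusted component encrypts on the way in (base64-encoded ciphertext)
ct=$(printf '%s' "$secret" \
  | openssl enc -aes-256-cbc -pbkdf2 -a -A -pass pass:demo-key)

# the application stores "$ct" in the table, never "$secret"

# ... and the trusted component decrypts on the way out
pt=$(printf '%s' "$ct" \
  | openssl enc -d -aes-256-cbc -pbkdf2 -a -A -pass pass:demo-key)
```

In the approach the post describes, this encrypt/decrypt boundary lives inside a custom data type, so queries still work against the column while the key never touches the database server.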
A good database management tool simplifies what is complex and drives performance. OmniDB is one such tool, with which you can connect to several different databases - including PostgreSQL, Oracle, MySQL, and others.
2ndQuadrant recently hosted a webinar on this very topic: Introduction to OmniDB. The webinar was presented by OmniDB co-founders and PostgreSQL consultants at 2ndQuadrant, Rafael Castro & William Ivanski.
The recording of the webinar is now available here.
Questions that Rafael and William couldn’t respond to during the live webinar have been answered below.
Q1: There are other open source GUI tools around to manage PostgreSQL. Why are you investing effort in a new tool?
A1: When OmniDB was created we wanted a web tool, and not all available tools
UUIDs are a popular identifier data type: they are unpredictable, globally unique (or at least very unlikely to collide), and quite easy to generate. Traditional primary keys based on sequences give you none of that, which makes them unsuitable for public identifiers; UUIDs solve that pretty naturally.
But there are disadvantages too - they can make access patterns much more random compared to traditional sequential identifiers, cause WAL write amplification, etc. So let's look at an extension generating "sequential" UUIDs, and how it can reduce the negative consequences of using UUIDs.
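The core trick behind "sequential" UUIDs can be sketched without the extension: prefix the value with a timestamp so consecutively generated UUIDs sort close together, keeping index access mostly local. This toy uses bash and illustrates the idea only; it is not the extension's actual API or format:

```shell
# 8 hex chars of seconds-since-epoch: the "sequential" prefix
ts=$(printf '%08x' "$(date +%s)")

# 24 hex chars of randomness for the rest of the value
rand=$(od -vN12 -An -tx1 /dev/urandom | tr -d ' \n')

# assemble into the familiar 8-4-4-4-12 layout
uuid="${ts}-${rand:0:4}-${rand:4:4}-${rand:8:4}-${rand:12:12}"
echo "$uuid"
```

Two UUIDs generated seconds apart share a nearly identical prefix, so B-tree inserts land on neighboring pages instead of being scattered across the whole index - while the random suffix keeps the values hard to guess.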
Announcing Release 9 of the PostgreSQL Buildfarm client.
Along with numerous fixes of minor bugs and a couple of not so minor bugs, this release has the following features:
new command line parameter --run-parallel for run_branches.pl, which runs all branches in parallel, possibly across animals as well
new config setting max_load_avg, which inhibits a run if the load average is higher than the setting
new config option archive_reports, which saves that number of generations of the report sent to the server
new command line parameter --show-error-log, which outputs the error log, if any, on stdout
automatic rerun 3 hours after a git failure, useful on back branches where commits can be infrequent
automatic conversion of old pgbuildfarm.org URLs to