
Tag: logical decoding

Near-Zero Downtime Automated Upgrades of PostgreSQL Clusters in Cloud (Part II)

2ndQuadrant, DevOps, Featured, Gulcin's PlanetPostgreSQL, pglogical, PostgreSQL
I've started writing about the tool (pglupgrade) that I developed to perform near-zero downtime automated upgrades of PostgreSQL clusters. In this post, I'll talk about the tool and discuss its design details. You can check the first part of the series here: Near-Zero Downtime Automated Upgrades of PostgreSQL Clusters in Cloud (Part I). The tool is written in Ansible. I have prior experience working with Ansible, and I currently work with it at 2ndQuadrant as well, which is why it was a comfortable option for me. That said, you can implement the minimal-downtime upgrade logic, which will be explained later in this post, with your favorite automation tool. Further reading: the blog posts Ansible Loves PostgreSQL, PostgreSQL Planet in Ansible Galaxy and (more…)
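
For readers who want a feel for the underlying technique before the design discussion, here is a minimal sketch of a pglogical-based upgrade flow; this is illustrative only, not the pglupgrade playbooks themselves, and the host names, database names and connection strings are placeholders.

    # Minimal sketch of a pglogical-based near-zero downtime upgrade path
    # (illustrative only; hosts, database names and DSNs are placeholders).

    # On the old-version primary: install pglogical, register it as the
    # provider node and publish all tables in the public schema.
    psql -h old-primary -d appdb -c "CREATE EXTENSION IF NOT EXISTS pglogical;"
    psql -h old-primary -d appdb -c "SELECT pglogical.create_node(
        node_name := 'provider', dsn := 'host=old-primary dbname=appdb');"
    psql -h old-primary -d appdb -c "SELECT pglogical.replication_set_add_all_tables(
        'default', ARRAY['public']);"

    # On the new-major-version server: register it as the subscriber; the
    # subscription copies the initial data and then streams changes until
    # the application is switched over to the new server.
    psql -h new-primary -d appdb -c "CREATE EXTENSION IF NOT EXISTS pglogical;"
    psql -h new-primary -d appdb -c "SELECT pglogical.create_node(
        node_name := 'subscriber', dsn := 'host=new-primary dbname=appdb');"
    psql -h new-primary -d appdb -c "SELECT pglogical.create_subscription(
        subscription_name := 'upgrade_sub',
        provider_dsn := 'host=old-primary dbname=appdb');"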
Near-Zero Downtime Automated Upgrades of PostgreSQL Clusters in Cloud (Part I)

2ndQuadrant, DevOps, Featured, Gulcin's PlanetPostgreSQL, pglogical, PostgreSQL
Last week, I was at Nordic PGDay 2018 and had quite a few conversations about the tool that I wrote, namely pglupgrade, to automate PostgreSQL major version upgrades in a replication cluster setup. I was quite happy that word had spread and that other people in different communities were giving talks at meetups and other conferences about near-zero downtime upgrades using logical replication. Given that I gave this talk at PGDAY'17 Russia, PGConf.EU 2017 in Warsaw and lastly at FOSDEM PGDay 2018 in Brussels, I thought it would be better to create a blog post to keep the presentation available to the folks who could not make it to any of the aforementioned conferences. If you would like to go directly to the talk and skip reading this blog post, here is your link: Near-Zero Downtime (more…)

BDR is coming to PostgreSQL 9.6

Craig's PlanetPostgreSQL
I'm pleased to say that Postgres-BDR is on its way to PostgreSQL 9.6, and even better, it works without a patched PostgreSQL. BDR has always been an extension, but on 9.4 it required a heavily patched PostgreSQL, one that isn't fully on-disk-format compatible with stock community PostgreSQL 9.4. The goal all along has been to allow it to run as an extension on an unmodified PostgreSQL ... and now we're there. The years of effort we at 2ndQuadrant have put into getting the series of patches from BDR into PostgreSQL core have paid off. As of PostgreSQL 9.6, the only major patch that Postgres-BDR on 9.4 has that PostgreSQL core doesn't is the sequence access method patch that powers global sequences. This means that Postgres-BDR on 9.6 will not support global sequences, at least not (more…)
Evolution of Fault Tolerance in PostgreSQL: Synchronous Commit

2ndQuadrant, Featured, Gulcin's PlanetPostgreSQL, PostgreSQL
PostgreSQL is an awesome project and it evolves at an amazing rate. We'll focus on the evolution of fault tolerance capabilities in PostgreSQL throughout its versions with a series of blog posts. This is the fourth post of the series, and we'll talk about synchronous commit and its effects on the fault tolerance and dependability of PostgreSQL. If you would like to witness the evolution progress from the beginning, please check the first three blog posts of the series below. Each post is independent, so you don't actually need to read one to understand another: Evolution of Fault Tolerance in PostgreSQL, Evolution of Fault Tolerance in PostgreSQL: Replication Phase and Evolution of Fault Tolerance in PostgreSQL: Time Travel. Synchronous Commit: By default, PostgreSQL (more…)
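
As a quick refresher on the topic of this excerpt, here is a hedged sketch of how synchronous commit is typically controlled; the standby name and the audit_log table are placeholders, not settings taken from the post.

    # Cluster-wide: require at least one named standby to confirm commits
    # ('standby1' is a placeholder application_name), then reload the config.
    psql -c "ALTER SYSTEM SET synchronous_standby_names = 'standby1';"
    psql -c "SELECT pg_reload_conf();"

    # Per transaction: relax durability for low-value writes without
    # affecting other sessions (valid levels include off, local,
    # remote_write and on).
    psql -c "BEGIN;
             SET LOCAL synchronous_commit = 'local';
             INSERT INTO audit_log VALUES (now(), 'low-priority event');
             COMMIT;"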
Evolution of Fault Tolerance in PostgreSQL: Time Travel

2ndQuadrant, Featured, Gulcin's PlanetPostgreSQL, PostgreSQL
PostgreSQL is an awesome project and it evolves at an amazing rate. We'll focus on the evolution of fault tolerance capabilities in PostgreSQL throughout its versions with a series of blog posts. This is the third post of the series, and we'll talk about timeline issues and their effects on the fault tolerance and dependability of PostgreSQL. If you would like to witness the evolution progress from the beginning, please check the first two blog posts of the series: Evolution of Fault Tolerance in PostgreSQL and Evolution of Fault Tolerance in PostgreSQL: Replication Phase. Timelines: The ability to restore the database to a previous point in time creates some complexities; we'll cover some of these cases by explaining failover (Fig. 1), switchover (Fig. 2) and pg_rewind (Fig (more…)
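
As a small illustration of the pg_rewind case mentioned in the excerpt, the following is a sketch of re-attaching a former primary to the new timeline after a failover; the paths and host names are placeholders, and pg_rewind needs wal_log_hints=on or data checksums enabled on the target cluster.

    # 1. Stop the old primary cleanly.
    pg_ctl -D /var/lib/postgresql/data stop -m fast

    # 2. Rewind its data directory against the newly promoted primary so it
    #    can join the new timeline without a full re-clone.
    pg_rewind --target-pgdata=/var/lib/postgresql/data \
              --source-server='host=new-primary dbname=postgres user=postgres'

    # 3. Configure it as a standby of the new primary and start it again.
    pg_ctl -D /var/lib/postgresql/data start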
Evolution of Fault Tolerance in PostgreSQL: Replication Phase

2ndQuadrant, Featured, Gulcin's PlanetPostgreSQL, PostgreSQL
PostgreSQL is an awesome project and it evolves at an amazing rate. We'll focus on the evolution of fault tolerance capabilities in PostgreSQL throughout its versions with a series of blog posts. This is the second post of the series, and we'll talk about replication and its importance for the fault tolerance and dependability of PostgreSQL. If you would like to witness the evolution progress from the beginning, please check the first blog post of the series: Evolution of Fault Tolerance in PostgreSQL. PostgreSQL Replication: Database replication is the term we use to describe the technology used to maintain a copy of a set of data on a remote system. Keeping a reliable copy of a running system is one of the biggest concerns of redundancy, and we all like maintainable, easy-to-use and (more…)
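
To make the idea of keeping a remote copy concrete, here is a minimal sketch of creating a streaming replica with pg_basebackup; the host, user and data directory are placeholders.

    # Clone a running primary into a new standby data directory.
    pg_basebackup -h primary.example.com -U replicator \
                  -D /var/lib/postgresql/standby \
                  -X stream -R -P
    # -X stream : stream the WAL needed for a consistent copy during the backup
    # -R        : write the standby/recovery configuration automatically
    # -P        : show progress

    # Start the standby; it connects to the primary and applies WAL continuously.
    pg_ctl -D /var/lib/postgresql/standby start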

Failover slots for PostgreSQL

Craig's PlanetPostgreSQL
Logical decoding and logical replication are getting more attention in the PostgreSQL world. This means we need them to work well alongside production HA systems - and it turns out there's a problem there. Replication slots are not themselves synced to physical replicas, so you can't continue to use a slot after a master failure results in promotion of a standby. The failover slots patch changes that, syncing slot creation and updates to physical replica servers such as those maintained with WAL archives or streaming replication. That lets logical decoding clients seamlessly follow a failover promotion and continue replay without losing consistency. (more…)
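
To see the gap the patch closes, here is a small sketch using the test_decoding plugin that ships with PostgreSQL; the slot name and standby host are placeholders.

    # Create a logical decoding slot on the primary and peek at decoded changes.
    psql -c "SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');"
    psql -c "SELECT * FROM pg_logical_slot_peek_changes('my_slot', NULL, NULL);"

    # The slot exists only on the server it was created on: a streaming standby
    # knows nothing about it, so after promotion the decoding client has nowhere
    # to resume from - exactly what failover slots address.
    psql -h standby.example.com -c "SELECT slot_name, slot_type FROM pg_replication_slots;"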