Here’s a step-by-step guide to installing PostgreSQL on your machine using PGInstaller. PGInstaller supports three modes of installation: Graphical, Unattended, and Text. We’re going to cover all three of them in this guide.
To Install PostgreSQL via Graphical Mode
Download PGInstaller here. PGInstaller is available for PostgreSQL 9.5, 9.6, 10, and 11 (beta).
Click on the executable file to run the installer.
Select your preferred language.
Specify the directory where you want to install PostgreSQL.
Specify the PostgreSQL server port. You can leave this at the default if you’re unsure what to enter.
Specify the data directory in which to initialize the PostgreSQL database cluster.
Create a PostgreSQL user.
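The Unattended mode mentioned above is useful for scripted or repeated installs. As a rough sketch only: the flag names below follow the conventions of typical BitRock-style installers and are assumptions, not confirmed PGInstaller options, so check the installer's --help output before relying on them.

```shell
# Hypothetical unattended install; verify flag names with --help first.
./pginstaller-10.run --mode unattended \
    --prefix /opt/postgresql/10 \
    --datadir /opt/postgresql/10/data \
    --serverport 5432 \
    --superpassword 'choose-a-strong-password'
```

Run with the same answers you would give in Graphical mode; the installer simply reads them from the command line instead of prompting.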
PostgreSQL administration, configuration, and deployment can be a tough ask when working in an agile environment with strict deadlines. The key to simplifying these operational tasks for PostgreSQL is Ansible, an open-source IT automation tool.
To explain how these technologies work together, 2ndQuadrant hosted a Webinar on PostgreSQL deployments using Ansible. The webinar was presented by Tom Kincaid, GM North America at 2ndQuadrant, who gave an overview of Ansible and PostgreSQL, covered best strategies for deployments, and a variety of other topics.
If you weren’t able to make it to the live session, you can now view the recording here.
For any questions, comments, or feedback, please visit our website or send an email to
The video of my presentation below walks you through the major features of the native JSON data type in PostgreSQL 9.3 and beyond.
This presentation covers the following topics:
What is JSON?
How is it available in PostgreSQL?
What's the difference between JSON and JSONB?
Accessing JSON values
Creating JSON from table data
Creating table data from JSON
Crosstabs with JSON
Indexing and JSON
When to use JSON, when to use JSONB, and when neither should be used
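To give a flavour of a few of the topics above (the table and values here are illustrative examples, not taken from the presentation itself): JSON stores the input text verbatim, JSONB stores a parsed binary form that supports containment queries and GIN indexing, and both can be navigated with the -> and ->> operators.

```sql
-- JSON keeps the input exactly; JSONB normalizes it (last duplicate key wins)
SELECT '{"a": 1, "a": 2}'::json;   -- stored as written
SELECT '{"a": 1, "a": 2}'::jsonb;  -- stored as {"a": 2}

-- Accessing values: -> returns json/jsonb, ->> returns text
SELECT ('{"device": {"id": 42}}'::jsonb) -> 'device' ->> 'id';

-- JSONB supports indexing and containment; plain JSON does not
CREATE TABLE events (id serial PRIMARY KEY, payload jsonb);
CREATE INDEX events_payload_idx ON events USING GIN (payload);
SELECT * FROM events WHERE payload @> '{"device": {"id": 42}}';
```

The containment operator @> combined with a GIN index is one of the main practical reasons to prefer JSONB over JSON for stored data.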
The elephant has been the symbol of PostgreSQL for many years now, referring to its robustness and strength as well as its reputed wisdom. Long may that association continue.
Even after many years of protection, elephants are being killed by poachers at an incredible rate of 20,000 per year: roughly one every 26 minutes, which means approximately one elephant will be killed while you read this.
The things we care about can be destroyed if we do nothing.
If the online trade in ivory can be reduced, we can reduce the killing.
Please contribute in some way, and report traders if you see them.
A couple of weeks back, I wrote about how to use Window Functions for time series IoT analytics in Postgres-BDR. This post follows up on IoT time series data and covers the next challenge: scalability.
‘Internet of Things’ is the new buzzword as we move to a smarter world equipped with ever more advanced technologies. From transport to the building industry, smart homes to personal gadgets, it’s not just about gadgets and sensors anymore.
In reality, it is all about data. Not just simple data, but data that grows at an enormous rate. Businesses and application developers in the Internet of Things domain face similar questions today when trying to find the best combination of technologies to support them. Without a doubt, the database remains at the core of any such decision making.
OmniDB 2.8 introduced support for Postgres-BDR 3.0, the ground-breaking multi-master replication tool for PostgreSQL databases, announced last month at PostgresConf US.
Here we have two virtual machines with Postgres-BDR 3.0 installed, and we will use OmniDB to connect to them and set up replication between the machines.
Postgres-BDR 3.0 requires PostgreSQL 10 or newer, and the pglogical 3.0 extension must also be installed, as Postgres-BDR 3.0 works on top of pglogical 3.0. Make sure you add the required entries to pg_hba.conf so that the two machines can communicate with each other via streaming replication. Then, in postgresql.conf, set the following parameters on both machines:
listen_addresses = '*'
client_encoding = utf8
wal_level = 'logical'
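As for the pg_hba.conf entries mentioned above, something along the following lines would allow the two machines to reach each other; the subnet and user here are placeholders for your own environment, and trust is for a throwaway lab setup only (use scram-sha-256 or md5 in anything real).

```
# Placeholder subnet/user: allow the peer machine to connect
# for normal and replication connections. Lab-only auth method.
host    all            postgres    192.168.0.0/24    trust
host    replication    postgres    192.168.0.0/24    trust
```

Remember to reload or restart PostgreSQL on both machines after editing these files, since wal_level in particular requires a restart.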
If the title of this blog post rings a bell with you, perhaps you were at PG Day in Horwood House in 2014, when I stood up for 5 minutes to make the case for data modelling; a data model is much more than just a diagram. I shouldn’t be, but I am often amazed by the way data models (and the tools we use to manage them) are derided as ‘just pretty pictures’ or ‘documentation’. I’m not going to repeat my lightning talk here (watch it yourself if you want to), instead I’m going to talk about Data Vault.
Data Vault (DV) is a technique for building scalable data warehouses. Dan Linstedt describes DV as “a detail oriented, historical tracking and uniquely linked set of normalized tables that support one or more functional areas of business. It is a hybrid approach encompassing
Recently I have been refining and adding utilities to look after our Perl code. You might be surprised to learn that as well as 1.3 million or so lines of C code, there are about 30,000 lines of Perl code in our sources. This is a sizeable body of code, even if it's dwarfed by our C code. What does it do? Well, lots of things. It runs some very critical parts of building from source; for example, the code that sets up our catalogs is created by Perl. All the new data setup for the catalogs is in fact Perl code, which is another 20,000 lines or so on top of the 30,000 mentioned above. We also use Perl to run TAP tests, such as those for initdb and pg_dump. And it drives building and testing when we're building with the Microsoft tool-sets on Windows.
So, what changes have been made? First, we
If you have Docker installed on your development machine, there is a simple way to road test your code using the buildfarm client, in a nicely contained environment.
These preparatory steps only need to be done once. First clone the repository that has the required container definitions:
git clone https://github.com/PGBuildFarm/Dockerfiles.git bf-docker
Then build a container image to run the buildfarm client (in this example we use the file based on Fedora 28):
docker build --rm=true -t bf-f28 -f Dockerfile.fedora-28 .
Make a directory to contain all the build artefacts (the name matches the volume mounted in the run command below):
mkdir buildroot-f28
That's all the preparation required. Now you can road test your code with this command:
docker run -v buildroot-f28:/app/buildroot \
The Internet of Things tends to generate large volumes of data at great velocity. Oftentimes this data is collected from geographically distributed sources and aggregated at a central location for data scientists to perform their magic, i.e. find patterns, spot trends, and make predictions.
Let’s explore what the IoT solution using Postgres-BDR has to offer for data analytics. Postgres-BDR is offered as an extension on top of PostgreSQL 10 and above; it is not a fork. Therefore, we get the full set of analytic functions that PostgreSQL has to offer. In particular, I am going to play around with PostgreSQL's Window Functions here to analyze a sample of time series data from temperature sensors.
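As a taste of the kind of query involved, here is a sketch of a rolling average and per-reading delta over a hypothetical sensor_data table; the table and column names are illustrative, not from the original post.

```sql
-- Hypothetical schema: one temperature reading per sensor per interval
SELECT sensor_id,
       reading_ts,
       temperature,
       -- rolling average over the current and five preceding readings
       avg(temperature) OVER (
           PARTITION BY sensor_id
           ORDER BY reading_ts
           ROWS BETWEEN 5 PRECEDING AND CURRENT ROW
       ) AS rolling_avg,
       -- change since the previous reading from the same sensor
       temperature - lag(temperature) OVER (
           PARTITION BY sensor_id ORDER BY reading_ts
       ) AS delta_from_previous
FROM sensor_data
ORDER BY sensor_id, reading_ts;
```

PARTITION BY keeps each sensor's readings separate, while the ROWS frame and lag() do the per-series smoothing and differencing without any self-joins.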
Let's take an example of IoT temperature sensor time series data spread over a