Monday, October 15

Data generation and hardware quality

Greg's PlanetPostgreSQL, PostgreSQL, United States News
One of the challenges when dealing with a new database design is that you don't know things like how big the tables will end up being until they're actually populated with a fair amount of data.  But if the design has to factor in eventual scalability concerns, you can't deploy it to obtain that data until the estimation is done.  One way around this is to aggressively prototype things.  Use staging hardware for this purpose, so that new applications can live on it temporarily while details like this get sorted out.  You can just factor in that you'll need to move the app, and possibly redesign it, after a few months, once you have a better idea what data is going to show up in it.

The other way around this chicken-and-egg problem is to write a data generator.  Construct (more…)
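To make the data-generator idea concrete, here is a minimal sketch using generate_series from psql.  The "orders" table, the database name, and the row counts are made up purely for illustration; the point is that once a plausible volume of synthetic rows exists, size estimates come from real storage rather than guesswork.

    psql -d testdb <<'SQL'
    -- Hypothetical table, created only to illustrate the technique
    CREATE TABLE orders (
        id        serial PRIMARY KEY,
        customer  integer NOT NULL,
        placed_at timestamptz NOT NULL,
        total     numeric(10,2) NOT NULL
    );

    -- Ten million synthetic rows: random customers, dates spread over a year
    INSERT INTO orders (customer, placed_at, total)
    SELECT (random() * 100000)::int,
           now() - (random() * interval '365 days'),
           round((random() * 500)::numeric, 2)
    FROM generate_series(1, 10000000);

    -- Size estimates now reflect real storage, indexes included
    SELECT pg_size_pretty(pg_total_relation_size('orders'));
    SQL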

Hinting at PostgreSQL

Greg's PlanetPostgreSQL, PostgreSQL, United States News
This week's flame war on the pgsql-performance list once again revolves around the fact that PostgreSQL doesn't have the traditional hint syntax available in other databases.  There is a mix of technical and pragmatic reasons behind why that is: Introducing hints is a common source of later problems, because fixing a query plan once for a special case isn't a very robust approach.  As your data set grows, and possibly changes distribution as well, the plan you hinted toward when it was small can become an increasingly bad idea.

Adding a useful hint interface would complicate the optimizer code, which is difficult enough to maintain as it is.  Part of the reason PostgreSQL works as well as it does running queries is because feel-good code ("we can check off hinting on our (more…)
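For anyone wondering what the workaround looks like in practice: the closest thing PostgreSQL offers to hints are the enable_* planner parameters, which can be toggled per session while debugging a plan.  A rough sketch against a hypothetical accounts table:

    psql -d testdb <<'SQL'
    -- See what the planner picks on its own
    EXPLAIN SELECT * FROM accounts WHERE balance > 1000;

    -- Discourage a sequential scan for this session only, then compare plans
    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM accounts WHERE balance > 1000;

    -- Put it back; leaving these off permanently is exactly the sort of
    -- "fix it once" decision that ages badly as the data distribution changes
    RESET enable_seqscan;
    SQL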

Tuning Linux for low PostgreSQL latency

Greg's PlanetPostgreSQL, United States News
One of the ugly parts of Linux with PostgreSQL is that the OS will happily cache up to around 5% of memory before getting aggressive about writing it out.  I've just updated a long list of pgbench runs showing how badly that can turn out, even on a server with a modest 16GB of RAM.  Note that I am intentionally trying to provoke the bad situation here, so this is not typical performance.  The workload that pgbench generates is not representative of any real-world workload; it's about as write-intensive as it's possible to be.

Check out test set 5, which is running a stock development version of PostgreSQL 9.1.  Some of the pauses where the database is unresponsive during checkpoints, as shown by the max_latency figure there (which is in milliseconds), regularly exceed 40 seconds. (more…)
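The usual countermeasure is to shrink the kernel's dirty-memory thresholds so writeback starts earlier.  The values below are only illustrative, assuming a dedicated database server; they are a sketch, not recommendations:

    # Start background writeback much sooner than the distribution default
    sysctl -w vm.dirty_background_ratio=1
    # Block writers well before a huge backlog of dirty pages builds up
    sysctl -w vm.dirty_ratio=5

    # Persist the settings across reboots
    cat >> /etc/sysctl.conf <<'EOF'
    vm.dirty_background_ratio = 1
    vm.dirty_ratio = 5
    EOF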

Reducing the postgresql.conf, parameter at a time

Greg's PlanetPostgreSQL, PostgreSQL, United States News
One of the more useful bits of PostgreSQL documentation I ever worked on is Tuning Your PostgreSQL Server.  When that was written in the summer of 2008, a few months after the release of PostgreSQL 8.3, it was hard to find any similar guide that was both (relatively) concise and current.  Since then, many other PostgreSQL contributors and I have helped keep that document up to date as changes to PostgreSQL were made.  The interesting and helpful trend during that period is that parameters keep disappearing from the set you need to worry about.  In PostgreSQL 8.2, there was a long list of parameters you likely needed to adjust for good performance on a PostgreSQL server: shared_buffers, effective_cache_size, checkpoint_segments, autovacuum, max_fsm_pages, (more…)
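As a rough illustration of how short that list has become, here is the sort of minimal postgresql.conf delta a 16GB server of that era might still need by hand.  The values are placeholders, not recommendations, and the snippet assumes $PGDATA points at the data directory:

    cat >> $PGDATA/postgresql.conf <<'EOF'
    shared_buffers = 4GB            # roughly 25% of RAM is the usual starting point
    effective_cache_size = 12GB     # how much the OS plus PostgreSQL can cache overall
    checkpoint_segments = 32        # allow more WAL between checkpoints, so they happen less often
    work_mem = 64MB                 # per-sort / per-hash memory
    EOF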

How not to build PostgreSQL 9.0 extensions on RPM platforms

Greg's PlanetPostgreSQL, PostgreSQL, United Kingdom News, United States News
For a long time, adding packages to Red Hat-derived Linux systems has been called "RPM Hell", for good reason.  Particularly before the yum utility came about to help, getting RPM to do the right thing has often been a troublesome task.  I was reminded of this again today, while trying to compile a PostgreSQL extension on two nearly identical CentOS systems.

PostgreSQL provides an API named PGXS that lets you build server extensions that both leverage the code library of the server and communicate with it.  We use PGXS to install our repmgr utility, and having that well-defined API let the program be developed outside the main server core.  Many popular PostgreSQL add-ons rely on PGXS to build themselves.  In fact, the contrib modules that come with (more…)
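For reference, a sketch of what the PGXS build flow looks like on one of these RPM systems, assuming the PGDG postgresql90 packages are in use and that the matching -devel package is what provides pg_config and the pgxs makefile infrastructure:

    yum install postgresql90-devel          # provides pg_config and the pgxs infrastructure
    export PATH=/usr/pgsql-9.0/bin:$PATH    # make sure the right pg_config is found first

    cd repmgr                               # or any other PGXS-based extension source tree
    make USE_PGXS=1
    make USE_PGXS=1 install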

Updates to PostgreSQL testing tools with benchmark archive

Greg's PlanetPostgreSQL, PostgreSQL
I maintain a number of projects whose purpose in life is to make testing portions of PostgreSQL easier.  All of these got a decent upgrade over this last week.

stream-scaling tests how memory speed increases on servers as more cores are brought into play.  It's fascinating data, and there's enough of it now to start seeing some real trends.  It now works correctly on systems that have large amounts of CPU cache because they have many cores.  It was possible before for the test set sizing, which tries to avoid cache impact, to be so aggressive that it used more memory than could be allocated with the current design of the stream code.  That's been scaled back.  If you have a 48 core server or larger, I could use some more testing of this new code to see if the new way I handle this (more…)
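If you want to help with that testing, a rough usage sketch, assuming you already have a checkout of the stream-scaling project and a working gcc:

    cd stream-scaling
    # The script sizes the test array from the detected CPU cache, builds the
    # stream benchmark, then reruns it with more cores; save the output to share
    ./stream-scaling | tee results.txt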

Telling Your Users to Go Fork Themselves

Greg's PlanetPostgreSQL, PostgreSQL
As the PostgreSQL Elephant continues its march toward yet another release, I've been thinking quite a bit about the role users of software should have in its user interface design. Today I proposed a change that takes a database parameter people used to have to worry about, one that wasn't at all obvious how to set, and makes its value largely automatic. That's a pretty unambiguous forward change: users were annoyed, good default behavior was worked out, and that default behavior was suggested as a patch. If it's applied I'd be shocked to find anyone who considers that a bad decision. There's been a similar discussion of how to rework the user interface around database checkpoints. Right now, the speed at which data is written to disk by a checkpoint is impacted by three values the (more…)
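The excerpt cuts off before naming the three values; my assumption is that they are the familiar checkpoint knobs, shown below only to illustrate the kind of interface being reconsidered, with placeholder values and assuming $PGDATA points at the data directory:

    cat >> $PGDATA/postgresql.conf <<'EOF'
    checkpoint_segments = 32            # how much WAL can accumulate between checkpoints
    checkpoint_timeout = 5min           # maximum time between checkpoints
    checkpoint_completion_target = 0.9  # fraction of that window to spread the writes over
    EOF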

Easier PostgreSQL 9.0 clusters with repmgr

Greg's PlanetPostgreSQL, International News, United Kingdom News, United States News
When PostgreSQL 9.0 shipped a few months ago, it included several new replication features. It's obvious that you can use these features to build clusters of servers, both for high availability and for read query scaling purposes. What hasn't been so obvious is how to manage such a cluster easily. Getting a number of nodes installed and synchronized with their master isn't that difficult. But while the basic functions necessary to monitor multiple nodes and help make decisions like "which node do I promote if the master fails?" were included in 9.0, they expose that information in the server's internal units. There are a few common complaints that always seem to show up once you actually consider putting one of these clusters into a production environment: How do I handle (more…)
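To see what "internal server units" means here, this is the sort of bookkeeping you're left doing by hand in 9.0: the monitoring functions report WAL (XLOG) locations, which you then have to compare yourself.  A minimal sketch, assuming one master and one standby reachable under those hostnames:

    # On the master: where the server has written WAL to so far
    psql -h master  -c "SELECT pg_current_xlog_location();"

    # On the standby: how far WAL has been received and replayed
    psql -h standby -c "SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();"

    # The gap between those locations is your replication lag, expressed
    # as WAL positions rather than anything friendly like bytes or seconds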