Yesterday afternoon was the Pro Postgres class, taught by Robert Treat. On entering the class I was a little afraid that it wouldn’t cover what I needed. He started off spending a large amount of time covering packaging, upgrading between versions, and what the version numbers meant. Not exactly what I expected out of the class.
However, slightly before the break, the magic happened. One of the main reasons I wanted this class was the trouble I’ve had getting Postgres tuned for the performance we wanted. This has long been a major beef of mine with Postgres: the documentation for performance tuning is extremely lacking. Too many times the tuning advice boils down to ‘try changing it and see what happens’. Not very helpful with live systems that you can’t take down just to twiddle a bit.
This time, though, there was actually some real help in finding several of the key things we can tune, especially on the Solaris-based system in question. Of special note: while shared_buffers is something we had been focusing on modifying, some of the warnings we had read about changing it turn out not to be such a big deal. default_statistics_target was harped on in our case. The default value is ‘10’, and the presenter said the first thing he does on a system is raise it to ‘100’, then run ANALYZE. According to him there is no reason to want it lower than 100; glad that’s a default, guys :). sort_mem, checkpoint_segments, checkpoint_timeout, and effective_cache_size also got heavier focus on why they should be targets of configuration optimization. These were major areas of our previous focus, and you can find some information about them online, but this was much better.

Some other items of interest included setting maintenance_work_mem on a per-session basis when altering tables or building indexes. update_process_title is an option that can apparently run very slowly on a Solaris system; I wish that were better known. max_fsm_pages is one where, as soon as he discussed it, I knew it would be an issue for us. If the setting is too low and you have a heavily updated database, it will cause major performance problems. The best way to check is to run a vacuum and look at the pages information it reports: if it needed more page slots than were available, you need to raise this :). Once again, that would have been nice to have in the docs, and I know we are now raising ours quite a bit more than we had planned.
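Mostly as a note to myself, here is a sketch of the postgresql.conf entries that came up. The only value straight from the class is default_statistics_target = 100; every other number below is a made-up placeholder for illustration, not a recommendation:

```
# postgresql.conf -- settings called out in the class.
# Values are illustrative placeholders, NOT tuned recommendations.
default_statistics_target = 100   # presenter raises this from the default of 10, then runs ANALYZE
shared_buffers = 20000            # the dire warnings about raising this are overstated
checkpoint_segments = 16          # example value only
checkpoint_timeout = 600          # seconds; example value only
effective_cache_size = 100000     # hint to the planner about OS cache; example value only
update_process_title = off        # can be very slow on Solaris
max_fsm_pages = 500000            # raise if VACUUM VERBOSE reports more page slots needed
```

And for the per-session maintenance_work_mem trick, something like this before a big ALTER TABLE or index build (size is again just an example):

```
SET maintenance_work_mem = '512MB';
ALTER TABLE some_big_table ADD COLUMN new_col integer;
```

To check max_fsm_pages, run `VACUUM VERBOSE;` and read the summary at the end about free space map page slots needed versus available.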
After the break we went on to cover various forms of data recovery, from basic pg_dump backups (remember the -Fc option) to PITR (point-in-time recovery), plus availability, slow query analysis, a review of important /contrib entries, and finally some interesting Postgres query tricks.
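So I don’t forget the -Fc option again, here’s a hypothetical dump-and-restore round trip (database, file, and table names are all made up, and of course this assumes a running server):

```
# custom-format dump: compressed, and restorable selectively with pg_restore
pg_dump -Fc mydb > mydb.dump

# restore the whole archive into a fresh database
pg_restore -d mydb_restored mydb.dump

# or list the archive contents, and pull back just one table
pg_restore -l mydb.dump
pg_restore -d mydb_restored -t some_table mydb.dump
```

The custom format is the big win here: a plain SQL dump can only be replayed wholesale, while the -Fc archive lets you cherry-pick objects later.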
I did learn a good bit of very useful information. I was wary at first, especially when he stated, “Trust /contrib more than your own code.” But I wound up liking the content quite a bit. There are several packages I hope to write more about after I try them out some.