I've been using PostgreSQL since 1997, when I wrote a support-ticket web app using Perl and PostgreSQL. I remember having to use gcc to compile the interface module just to reach PostgreSQL from CGI. Once the Perl libs showed up I never looked at C-based CGI ever again.
If you're installing the current version of PostgreSQL, don't bother tinkering under the hood too much. It's easy to over-tune PostgreSQL; the default setup handles just about anything you want to do with it, unless you're working on something wickedly parallel (clustered, synced, and nested joins up the wazoo).
Anyone using PostgreSQL should only have to set up two scheduled tasks:
1) a daily backup
2) a daily vacuum
Here are some crontab examples (the fields are minute, hour, day of month, month, and day of week):
30 4 * * * /home/db_su_name/scripts/dbroller 2>&1
0 3 * * * /usr/bin/vacuumdb -q -d punbb 2>&1
Here's the code I use for the database backup (the dbroller script from the crontab above). Basically you add the name of any database you want to keep a backup of to the array. Converting it to take the database names as command-line arguments is a trivial exercise; a sketch of that follows after the script.
#!/usr/bin/perl
use strict;
use warnings;

# Databases to dump; add any others you want to keep a backup of.
my @dbName = ("punbb", "template1");

foreach my $db (@dbName) {
    my $dbFile = sprintf "%s.%s", $db, "sql";
    # Dump each database to a plain SQL file in the pgData directory.
    system("pg_dump $db > /home/db_su_name/pgData/$dbFile") == 0
        or warn "pg_dump failed for $db: $?\n";
}
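If you'd rather pass the database names on the command line, a minimal sketch (using the same placeholder paths as above) could look like this:

#!/usr/bin/perl
# Sketch only: dump whichever databases are named on the command line.
use strict;
use warnings;

die "usage: dbroller dbname [dbname ...]\n" unless @ARGV;

foreach my $db (@ARGV) {
    my $dbFile = sprintf "%s.%s", $db, "sql";
    system("pg_dump $db > /home/db_su_name/pgData/$dbFile") == 0
        or warn "pg_dump failed for $db: $?\n";
}

The crontab entry above would then just list the databases, e.g. /home/db_su_name/scripts/dbroller punbb template1.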
I also use logrotate to do my bidding...basically it keeps me from having a buttload of dump files hanging around. If you're running a "snapshot" service, this isn't for you. The benefit is that only a limited number of backup files are kept, and you can set how far back you want to be able to step.
In the /etc/logrotate.d/ folder I create a file called "pg-bak"
/home/db_su_name/pgData/*.sql
{
rotate 7
daily
missingok
compress
}
This means I keep 7 days of compressed backups.
Combine this with rsync to an offsite server and your bases are covered.
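As a rough sketch, with a made-up destination host and the assumption that passwordless SSH is already set up, another crontab entry can push the compressed dumps offsite after the backup runs:

0 5 * * * /usr/bin/rsync -az /home/db_su_name/pgData/ backup.example.com:/backups/pgData/ 2>&1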
I cannot overstate the performance gains you will get from vacuuming the punbb database once a day. PostgreSQL was putting a pretty heavy load on the server until I started doing a daily vacuum.