Yes, it should be automated, and preferably in a way that is totally generic for Postgres and MySQL, and independent of the application using the database. That is the very point of using SQL, and I am surprised that so little has been done in this area. MySQL and Postgres totally dominate the web/open-source communities, but I suppose there is too much religion about them for people to migrate back and forth very often...
Actually, I had another problem along the way. I was dumping from Postgres using:
pg_dump -D --host=localhost --username=pun --file pg_punbb.sql --password -t 'punbb_*' pundb
Postgres puts reserved keywords used as identifiers in double quotes, so some of this output can't be piped into MySQL without first stripping those double quotes. I did so using a sed script.
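To illustrate the problem (the INSERT line below is made up for demonstration, not actual dump output, but it shows the kind of quoting pg_dump produces): MySQL, without the ANSI_QUOTES mode, treats double quotes as string delimiters rather than identifier quotes, so these lines fail. A sed substitution strips them:

```shell
# A pg_dump-style line with the keyword columns quoted (illustrative only):
echo 'INSERT INTO punbb_users ("password", "language") VALUES ('\''x'\'', '\''en'\'');' \
  | sed -e 's/"password"/password/g' -e 's/"language"/language/g'
# The double quotes around the column names are removed, leaving plain
# identifiers that MySQL will accept.
```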
Save this into a file called e.g. pgfix and make it executable:
#!/bin/sed -f
s/"language"/language/g
s/"password"/password/g
s/"location"/location/g
s/"owner"/owner/g
While at it, one can also place a few more lines into this script to replace all references
to the old address of the site, for example:
s/www.oldpunsite.com/www.newpunsite.com/g
Then just execute:
./pgfix < pg_punbb.sql > fixedpg_punbb.sql
In this way one can change not only the settings for the new punbb site, but also
all references used by users in their posts.
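The steps above can be sanity-checked on a tiny fake dump fragment before touching the real one (a sketch; the table name, file names, and the final mysql command are made-up examples, not from the actual migration):

```shell
# Create a small fake dump fragment to test the substitutions on.
cat > sample_dump.sql <<'EOF'
INSERT INTO punbb_config ("owner", "location") VALUES ('admin', 'www.oldpunsite.com');
EOF

# Apply the same substitutions as in the pgfix script.
sed -e 's/"owner"/owner/g' \
    -e 's/"location"/location/g' \
    -e 's/www.oldpunsite.com/www.newpunsite.com/g' \
    sample_dump.sql > fixed_dump.sql

cat fixed_dump.sql

# Once the real dump is cleaned the same way, it can be loaded into MySQL,
# for example (hypothetical user/database names):
#   mysql --user=pun --password pundb < fixedpg_punbb.sql
```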
In a previous version of pg_dump there was a flag that forced pg_dump not to place
double quotes in the output, but this has been deprecated and the flag means something
else now (won't say what I think about such ill-documented changes...).
This was a pretty clumsy write-up of how to do the migration, and it could be made better, but I'm on vacation and just had to get this migration done very quickly. Hopefully it helps someone. It would be good if someone polished up the information and placed it into the FAQ.