Speed is not the be-all and end-all, particularly not for web applications, where the real bottleneck is the network: not processing speed or rendering speed, but the upload speed of your server, the download speed of your clients, and the amount of data passed in between. Reasonable processing and rendering speed is good enough.
At some point, you will need to make a choice: are you going to go for speed that isn't needed, or for maintainable code that can be refactored and modified easily when the world inevitably changes (the MySQL incident being a prime example)? If speed is ultimately needed, such an architecture can be optimized later while keeping enough of its layering to remain easy to adapt.
If there's one thing about open source software, it's that it changes from year to year and month to month. Today's database of choice will not be tomorrow's; just as yesterday's Sendmail is being replaced by today's Postfix, yesterday's GnuCash by today's Grisbi, yesterday's GTK-1.2 by today's GTK-2.x, yesterday's XFree86 by today's Xorg. Sawfish is extinct; Metacity has taken over. The world changes.
Most software has to be adaptable, and nowhere is this more true than in the open source world, where this year is quite different from last year.
Unless WordPress is intended to be an application with a short shelf life of a few years, a database abstraction layer will be a big first step toward surviving whatever upheaval is in store in the uncertain future, particularly as we're already in the middle of one: more and more sites trading in MySQL for PostgreSQL.
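The idea of such a layer can be sketched in a few lines. This is an illustration in Python rather than WordPress's PHP, and the names (`Database`, `SQLiteBackend`) are hypothetical: the point is only that application code talks to one small interface, so switching from MySQL to PostgreSQL means writing one new adapter class, not touching every query site.

```python
import sqlite3
from abc import ABC, abstractmethod

class Database(ABC):
    """Minimal abstraction: the application sees only these two methods."""

    @abstractmethod
    def query(self, sql, params=()):
        """Run a read query and return all rows as tuples."""

    @abstractmethod
    def execute(self, sql, params=()):
        """Run a write statement and commit it."""

class SQLiteBackend(Database):
    """One concrete backend; a hypothetical MySQLBackend or
    PostgresBackend would implement the same two methods
    against its own driver."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        return self.conn.execute(sql, params).fetchall()

    def execute(self, sql, params=()):
        self.conn.execute(sql, params)
        self.conn.commit()

# Application code depends only on the Database interface, so swapping
# the backend is a one-line change at construction time.
db = SQLiteBackend()
db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
db.execute("INSERT INTO posts (title) VALUES (?)", ("Hello world",))
print(db.query("SELECT title FROM posts"))
```

The layering costs a method call per query, which, per the argument above, is noise next to network latency; what it buys is that the day the database of choice changes, the change is confined to one class.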