Wednesday, November 29, 2006

Where am I deploying MySQL, revisited, revisited...

Seems like I'm doing a lot of revisiting these days. Perhaps I'm just not making myself clear.

I have basically five types of systems in my organization: mission critical, 24x7, ancillary, QA, and development. Below are some detailed explanations of each type of system:

Mission Critical
A mission critical application is a piece of software that must be running in order for the business to continue to operate. Without the software, the business would cease to exist. An example of this is eBay's auction system or the software that executes orders for Ameritrade.

24x7
A 24x7 system is important to the business, but not essential for the business to run. If a 24x7 system goes down, customers get upset but business continues to operate. An example of this type of system would be Ameritrade's charts or eBay's feedback system.

Ancillary Systems
An ancillary system is a system that a limited number of users access. If an ancillary system is down your bean counters may complain because they can't crunch their numbers, but overall, there is no impact to the business. Usually an ancillary system is something used internally and not exposed to your customer. An example of this type of system might be eBay's customer data warehouse or Ameritrade's accounting system.

QA
A QA system is "almost production" in the sense that it carries a full load of production data, but the data may be "stale". Performance testing occurs on these types of systems, as does testing of new software changes. The only people affected by QA outages are those running the tests.

Development
A development system is a work in progress. A handful of developers will be ticked off if development is down, but nobody really cares.

Mission Critical, 24x7, and Ancillary systems are production systems and are treated as such. Any production system is important to the business; it's just that some are more important than others. QA systems are sometimes treated like production, sometimes not. Development systems are almost never treated like production.

In "Where am I deploying MySQL", I point out the types of systems where I am comfortable deploying MySQL. I have deployed several systems in development, QA, and production. My production apps are mostly of the ancillary type, but I have recently deployed a 24x7 web-based application. As we build the infrastructure around MySQL, there will be a natural progression towards more 24x7 apps. Those ancillary and 24x7 apps will eventually evolve into mission critical systems as our experience becomes deeper and our confidence becomes stronger.

I'm an Oracle guy, no question about it. But I'm implementing MySQL. In fact, the vast majority of my new systems are being developed on MySQL and are going into production on MySQL.

From time to time we have to emerge from our comfort zones. We can either embrace the change as an opportunity to learn and progress, or fear it and fail.

Tuesday, November 28, 2006

Why I still use pfiles, revisited...

Ivan Kartik has written an informative guide on SPFILEs and their use. I hope he posts a followup explaining what advantages they have over PFILEs and some real-world scenarios. Perhaps he can persuade me to change my mind...

Monday, November 27, 2006

Where am I deploying MySQL, revisited...

The internet is a great thing. You can find almost anything and share your ideas with anybody that is willing to listen. Unfortunately, that means that anybody with a pugnacious personality and a keyboard also has a voice.

A couple weeks ago, somebody asked me where I deploy MySQL. I created this post about where I think MySQL's sweet spots are in my organization.

I'm no MySQL expert, I'll admit it. In fact, right at the top of the post I indicate as such:
In my opinion, I'd use MySQL for anything but the most mission critical applications. I'm not saying MySQL can't handle the most mission critical applications, but I'm not comfortable betting my business on MySQL at this point.
Sheeri Kritzer decides to put her own disparaging slant on my post and writes MySQL Is a Great Database, But Do Not Use It???? I posted a comment on the blog clarifying my position on a couple points, but wanted to expound on it a little more here.

First off, I didn't say nor did I imply not to use MySQL. I didn't say anything of the sort. I simply explained where I use MySQL and what kind of applications I use MySQL for. Nowhere in the post did I say MySQL is only good for XYZ type application in every organization.

She also goes on to pick apart this paragraph:
Last, but not least, MySQL is suited for data marts (less than 1 TB). Stuffing lots of historical data into denormalized relational tables is what "LOAD DATA LOCAL" is all about. These types of applications aren't needed 24x7 but require snappy response times when queried.

In the third sweet spot, you say MySQL is good at stuffing lots of data into denormalized tables. I’ve been using MySQL for quite a long time, and I missed the part where stuffing lots of data into normalized tables is slow. It’s fast for stuffing lots of data into tables, period.
Um, I guess if I stood on my head with one eye closed my paragraph says "MySQL sucks at stuffing lots of data into normalized tables."

Next, we get into a common misconception:
And MySQL isn’t “just as fast as the next guy”. It’s FASTER, otherwise why would people use it, because it’s much less featured than Oracle? This I’ll admit — however, you can code around features. No amount of code can make Oracle faster.

Where, oh where, do I start? I know I use MySQL because it's cheaper. I wouldn't make a blanket statement saying it's FASTER. Sometimes it's faster, sometimes not. All I have to do is come up with one example of where MySQL is slower and the blanket statement is false.

People who have been in this business more than a couple years know lots of examples where people think they can code around features. And no amount of code can make Oracle faster? If you treat it like a big black box, then you're probably right. But treat it like a sophisticated piece of software that has its own intricacies, and yes, code matters.

Oh, and:
To Jeff Hunter’s comment I say: Incremental backups exist — that’s what binary logs are.
...has nothing to do with MySQL; it's a reply to the previous comment about what's missing in Oracle SE vs. EE.

And I purposely didn't make this an Oracle/MySQL comparison. Primarily because I can't say with authority how MySQL behaves. I could bring up the fact that you have to bring down the db just to add a datafile to the InnoDB tablespace, but I'm not 100% sure you can't do that online. I could also bring up the fact that you have to shutdown the database to change the size of your logfiles, but again, I'm not 100% sure that's the only way to do it.

Apparently MySQL is her gig and she's a zealot for the software. That's cool, more power to ya. But don't belittle me because that's not the only thing I use. And if you're going to criticize me, at least have the courtesy of alerting me to the fact that you have done so.

Tuesday, November 21, 2006

Selling out

Danasoft has a .jpg generator that echoes your IP address and does an IP lookup to figure out your ISP. You can also put up to ten sayings on your .jpg and they will randomly rotate on the graphic. Up until a couple days ago, I thought it was kinda cool, so I included it in my signature on several forums.

A couple days ago, I noticed that my signature had an advertisement in it. I thought that maybe my signature had been hacked and I was just going to create another one. But then I saw on the Danasoft website that you could create a .jpg without advertisements by clicking a certain link.

Granted, every time my .jpg is displayed, I'm using some of their bandwidth. But the part that really miffs me is that they dropped these advertisements in without notifying anybody. Needless to say, I ripped the graphic from all the forums in which I participate. That's 3,000 fewer views they'll be getting every day...

Tuesday, November 14, 2006

Why I still use pfiles

I'm always up for investigating features of Oracle and deciding when and how they should be introduced into my environment. I recently got a chance to go back and look at spfiles again.

Server Parameter Files (spfiles) are binary files that contain your initialization parameters. The two major advantages of spfiles over standard text parameter files are that parameter changes made on-the-fly persist across instance restarts and that the spfile can be backed up by RMAN. How many people have changed a parameter on-the-fly only to have it reset by the init.ora upon restart? All your init.ora problems could now be solved, just by using spfiles.

But I don't use them.

The first reason is that maybe I don't want the instance to keep a value when it gets restarted. For example, maybe I changed my log_archive_dest_1 to a temporary location because my primary location started filling up. I don't want that to be my permanent location, just until I get a chance to back up my archived redo logs. Sure, I'll change it back once my backup completes, but if I forget, I haven't made an unintentional permanent change.

Also, I dig change control. Nothing goes to production until it's checked into my version control system. This includes init.ora files, tnsnames.ora files, listener.ora, scripts, and anything else. If you want to know what the db_cache_size was back in 2002, I can tell you. Sure, I could dump my spfile to a pfile when I make a change and put it into version control, but that goes against the process. Things go into version control before they go into production.

Along those same lines, version control allows me to back out a change that won't allow me to start the instance. For example, say I set the shared_pool_size to 400G on a box with 4G of RAM. When my instance doesn't start, I can check the old version of the init.ora out of my version control, replace it, and start the instance. If I were using an spfile I'd have to create a pfile that specified my spfile and then add an entry that reduced the shared_pool_size back down. And that's assuming I knew what the problem was. With version control, I just diff the two files and I know what the change was.
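The 400G typo scenario above can be played out with two checked-in copies of the parameter file; a plain diff is enough to spot the bad change (the file names and parameter values here are invented for illustration):

```shell
# Toy illustration of the backout scenario: two checked-in versions of
# an init.ora (names and values invented), and a diff that pinpoints
# the change that kept the instance from starting.
dir=$(mktemp -d)

cat > "$dir/init.ora.r1" <<'EOF'
db_cache_size=1G
shared_pool_size=400M
job_queue_processes=4
EOF

cat > "$dir/init.ora.r2" <<'EOF'
db_cache_size=1G
shared_pool_size=400G
job_queue_processes=4
EOF

# diff exits 1 when the files differ; that's the expected case here
diff "$dir/init.ora.r1" "$dir/init.ora.r2" || true

rm -rf "$dir"
```

Replacing the broken file is then just checking out revision r1 and starting the instance.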

Another reason I like pfiles is I can include a "standard" pfile so all my instances run with similar parameters. For example, maybe I want to make sure all my instances run 4 job queue processes. I just put the appropriate parameter in my initStandard.ora, include the initStandard.ora in my init.ora file using ifile=, and I'm good to go. I know that on the next restart my environments will be the same.
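As a sketch of that ifile= include (the file names and paths are mine, not any kind of convention):

```
# initStandard.ora -- parameters every instance should share
job_queue_processes=4

# init.ora for one instance -- pulls in the standard file, then
# sets its own instance-specific parameters
ifile=/oracle/admin/std/initStandard.ora
db_name=PROD
db_cache_size=1G
```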

The last advantage I think standard pfiles give me is the ability to distribute production changes to my DR site. I have two init.ora files at my DR site; one that holds the parameters for recovery and one that holds the parameters for running during a failover. When I make a change to my primary init.ora, I push that change to both the primary site and the standby site (using automated methods). When I have to failover, I know the standby will operate just like my primary site did.
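Sketched out (again, the file names and paths are invented), that DR layout might look like:

```
# initDR_recover.ora -- used while the standby applies redo
ifile=/oracle/admin/std/initProd.ora    # pushed from the primary site
standby_archive_dest=/oracle/arch/standby

# initDR_failover.ora -- used after a failover to this site
ifile=/oracle/admin/std/initProd.ora    # same pushed primary parameters
log_archive_dest_1='LOCATION=/oracle/arch/'
```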

I also don't care about the backup. Whoa, I mean I don't care about backing up the initialization parameters because I already have them in my version control.

I know this goes against conventional wisdom in the Oracle world. I'm a firm believer in doing what makes sense for my particular environment regardless of whether "everybody" thinks it's OK. When I start running RAC, maybe spfiles will make more sense for this environment, but until then I'm still using pfiles.

Thursday, November 09, 2006

Keeping you in the loop

The email subscription form we introduced at Wilton Diaries a couple weeks ago has really taken off. In other words, we've got three subscribers, but the "cool" factor is way up there.

When you subscribe via email, you will get a message when the Blog gets updated. To subscribe, simply enter your email address and click the "Subscribe" button. A form will then pop up asking you to verify that you are an actual person and not an aggregator. You simply type the letters shown in the box and click the buttons as instructed. You will then be sent an email message with a URL that you must click to finish your subscription. This last part is important, as you don't want to go through all that work and not get the feed!

If that all sounds too complicated, feel free to continue checking the So What Co-operative every day.

Wednesday, November 08, 2006

Playing in the Sand

Just got back from a week in Kuwait upgrading our database down there from 9i to 10g and upgrading our ESRI from 9.0 to 9.1.

I must say upgrading Oracle9i to 10g on a Sun Solaris 10 OS is the easiest and most painless install I've ever done. I found out on our dev box that trying to put 10g on Solaris 8 was like trying to put a square peg in a round hole. Luckily, I got our UNIX admin on board with upgrading all of our servers to Solaris 10 before I started my production upgrade festivities.

When I got down there we had one database serving up maps and vehicle tracking data. All the tracking data is OLTP oriented and the maps are nothing but a bunch of blobs. The user has the ability to see maps by themselves and vehicle information (text) by itself. The user also has the ability to see maps and the vehicle data at the same time.

The server has plenty of horsepower and space, so I decided to break the database in two. I created another database, put the maps on it, and configured it for bulk operations. One other thing I did: we have a particular map that automatically loads when a user first accesses the webpage, so I threw that into the keep pool. Performance is very nice. It's so refreshing when you have a database configured correctly for the environment it supports.

Tuesday, November 07, 2006

Where am I deploying MySQL?

If cost were no object, I'd always deploy Oracle. I'm comfortable with Oracle technology and I think I have a pretty good idea how to implement and administer it.

In the world of corporate IT, however, budgets are king. Projects are measured by their Return on Investment (ROI), and the lower I can get the investment, the better the return. I have a real hard time spending $160K on an application that will occupy 40G of space.

In my opinion, I'd use MySQL for anything but the most mission critical applications. I'm not saying MySQL can't handle the most mission critical applications, but I'm not comfortable betting my business on MySQL at this point.

I think there are about three sweet spots for MySQL. The first is small to medium size OLTP databases (<100 GB) that are fronted by something like a java middle-tier. These applications typically control most of the business logic and authentication/authorization in the middle-tier (right or wrong) and use the database as a big storage bucket. These applications rely on the backend serving data as fast as it can and MySQL can serve data just as fast as the next guy.

Another area where MySQL excels is serving database-driven content directly on the webserver. This type of application typically cranks out high numbers of queries and has very few updates to worry about.

Last, but not least, MySQL is suited for data marts (less than 1 TB). Stuffing lots of historical data into denormalized relational tables is what "LOAD DATA LOCAL" is all about. These types of applications aren't needed 24x7 but require snappy response times when queried.

No, MySQL doesn't have some of the features that some of the big-box databases have. And it's got plenty of limitations. But when you want an 80% solution, I think it's the right choice. My company is sold on MySQL and as our confidence grows in the software, so will our installed base.

Monday, November 06, 2006

Quick and Dirty MySQL Backup

Until recently, the MySQL databases I worked with contained data that could be retrieved from other sources. Most of the data is either batched in from flat files or another database. It would be inconvenient to reload a couple months' worth of data, but since these databases are not mission critical, the business could operate without them for a couple days. Lately, we've been implementing some semi-critical systems that rely on a somewhat expedient recovery.

The requirements for the project were that the database must remain up during the backup and that losing a day's worth of data was acceptable. All of my regular Oracle readers are cringing at the moment, but hey, those were the rules I was working with.

My first thought was to use mysqlhotcopy because it backed up the actual physical files. However, mysqlhotcopy only allows you to backup MyISAM tables and we extensively use InnoDB.

My next choice was mysqldump. mysqldump basically takes the entire database and dumps a text file containing DDL and DML that will re-create your database. Coming from an Oracle background, I knew there were shortcomings to dumping the entire database, but hopefully I could mitigate them.

The first hurdle was security. I specifically turn off unauthenticated root access on my databases, but I needed to be able to read all the tables to do a backup. I don't want to hard-code my root password or any password in a script, as I don't have suicidal tendencies (diagnosed, anyway). So I created a user called backup that could only log in from the server machine, but could log in unauthenticated.

The next thing I had to figure out was how to get a consistent view of the data. I knew that my developers preferred InnoDB for its referential integrity features, and dumping inconsistent data would be disastrous. Fortunately, one of the flags to mysqldump is --single-transaction, which essentially takes a snapshot in time.

So I wrote a script around mysqldump and --single-transaction and dumped my entire database to disk. Every now and again, I encountered an "Error 2013: Lost connection to MySQL server during query when dumping table `XYZ` at row: 12345". The row number changed each time, so I figured it had something to do with either activity in the database or memory. I could rerun the command and it usually finished the second or third time.

After the third straight day of my backup failing, I decided to research it a little more. mysqldump has a flag called --quick, which retrieves rows from the server one at a time instead of buffering whole tables in memory. I put this flag in my backup script and the script started finishing more consistently.

The last hurdle was having enough space on disk to store my backups. Since the backup file is really a text file, I decided to pipe the output through gzip to reduce its size.

Currently, my quick and dirty backup script is a wrapper around the following command:

mysqldump --all-databases --quick --single-transaction -u backup | gzip > mybackup.sql.gz
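A sketch of what that wrapper might look like, with date-stamped output files (the directory layout and file naming are my assumptions, not the post's actual script; it relies on the passwordless local "backup" account described above):

```shell
#!/bin/sh
# Quick-and-dirty backup wrapper around mysqldump.

# Build a date-stamped, compressed dump name: mybackup-YYYYMMDD.sql.gz
backup_file() {
    echo "$1/mybackup-$(date +%Y%m%d).sql.gz"
}

run_backup() {
    dir="$1"
    mkdir -p "$dir"
    # --single-transaction: consistent InnoDB snapshot while the db stays up
    # --quick: stream rows to the client instead of buffering whole tables
    mysqldump --all-databases --quick --single-transaction -u backup \
        | gzip > "$(backup_file "$dir")"
}

# Usage (needs a running server and the unauthenticated backup account):
#   run_backup /backups/mysql
```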

We're adopting MySQL at a blistering pace, so I'm sure I'll need to make changes in the future. For right now, though, it gets the job done.

Wednesday, November 01, 2006

Check this out

I usually employ a logon trigger for most of my Oracle databases so I can grab certain identifying information about the session. Then I save this information in another table for later analysis.

I have started testing 9iR2 on a 64-bit Linux box and have come across a certain peculiarity. v$session is defined as:

SQL> desc v$session
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SADDR                                              RAW(4)
 ...

I then create a table using the same type and try to insert a value:

SQL> create table jh1 (saddr raw(4));

Table created.

SQL> desc jh1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SADDR                                              RAW(4)

SQL> insert into jh1 select saddr from v$session;
insert into jh1 select saddr from v$session
ERROR at line 1:
ORA-01401: inserted value too large for column

Hmmmf. So I do a CTAS:

SQL> drop table jh1;

Table dropped.

SQL> create table jh1 as select saddr from v$session;

Table created.

SQL> desc jh1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SADDR                                              RAW(8)

...and look what size the column is!

SQL> select * from v$version;

Oracle9i Enterprise Edition Release - 64bit Production
PL/SQL Release - Production
CORE Production
TNS for Linux: Version - Production
NLSRTL Version - Production

Update: 2006/11/01 16:09:
From Support:
I checked my Windows (32bit) database and v$session.saddr is a RAW(4).

OK, that explains it.