One of my 4-way Solaris boxes just wasn't cutting it anymore. We'd tuned what we could tune and tweaked what we could tweak, and lately the box had been running with an average run queue of 8. Time to upgrade the hardware. We chose a nice fast Linux box from HP and hooked it to an HP EVA disk array.
So how do you move 660GB of data from Solaris to Linux? On Oracle 9i?
Moving to 10g was considered for about 30 seconds, but we decided it was too many changes at one time for one of our most important applications. The next logical choice was export/import. On the first round, it took about 14 hours to export and 4 days to import. OK, that can't be done in a weekend.
Then we decided to break the tables up into four roughly equal-sized groups and run export/import in four threads. This actually worked pretty well for the overwhelming majority of tables: about 4 hours for the longest group to export, and 43 hours for the longest thread to import. Still not doable in a weekend, but we were closing in on a reasonable timeframe.
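For the curious, the four threads looked something like this. The parameter file contents, object names, and passwords below are invented for illustration; the mechanics are just the standard exp/imp parameter-file approach:

    # One of four hypothetical parameter files (group1.par):
    #
    #   FILE=group1.dmp
    #   LOG=group1_exp.log
    #   TABLES=(ORDERS,ORDER_LINES,CUSTOMERS)

    # Four exports running concurrently on the Solaris box:
    exp system/***** parfile=group1.par &
    exp system/***** parfile=group2.par &
    exp system/***** parfile=group3.par &
    exp system/***** parfile=group4.par &
    wait

    # Then four concurrent imports on the Linux box:
    imp system/***** file=group1.dmp log=group1_imp.log full=y &
    # ...and likewise for groups 2 through 4.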
We noticed a lot of time was being taken up creating indexes. So we decided to pre-create the tables without indexes and build the indexes afterward in parallel with NOLOGGING. This shaved about 6 hours off our total time.
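Something along these lines; the index name and degree of parallelism are made up for illustration:

    -- Import rows only; the indexes were deliberately left out of the
    -- pre-created schema:
    --   imp system/***** file=group1.dmp ignore=y indexes=n

    -- Build each index afterward, in parallel and without redo logging.
    CREATE INDEX orders_cust_idx ON orders (customer_id)
      PARALLEL 8 NOLOGGING;

    -- Put the index back to normal attributes once the build is done.
    ALTER INDEX orders_cust_idx NOPARALLEL;
    ALTER INDEX orders_cust_idx LOGGING;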
Then we looked and saw that two index-organized tables were taking a lot of time relative to the number of rows they contained. I did some testing of import on IOT tables vs. heap tables and didn't see any significant difference; in fact, my testing showed the two IOT tables shouldn't have taken as long as they did. Looking at them a little more closely, I found both tables contained a TIMESTAMP field. Some more testing, comparing import with a DATE field vs. import with a TIMESTAMP field, confirmed the TIMESTAMP field was causing the slowdown.

Once we identified the TIMESTAMP field as the problem, my boss found note 305211.1 on Metalink, which explained it. We then timed the import with COMMIT=N for those two tables and the time went from 26 hours to 17 hours. Then I thought, "Let's try to fill the tables through a database link." The database link yielded the best results of all: about 6 hours.
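A sketch of the database link approach, run on the new Linux database, with invented link, credential, and table names:

    CREATE DATABASE LINK old_solaris
      CONNECT TO app_owner IDENTIFIED BY secret
      USING 'OLDPROD';

    -- One set-based insert with a single commit, instead of imp's
    -- row-by-row inserts that crawled on the TIMESTAMP columns.
    INSERT INTO event_log SELECT * FROM event_log@old_solaris;
    COMMIT;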
The last piece of the puzzle was the FK constraints. We didn't want to re-validate all the constraints, since we knew all the data had been copied over (after manual verification, that is). So we created the constraints with NOVALIDATE, which took almost no time.
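For example (hypothetical table and constraint names): the constraint is enforced for new DML, but the existing, already-verified rows are not rechecked.

    ALTER TABLE order_lines
      ADD CONSTRAINT fk_lines_orders
      FOREIGN KEY (order_id) REFERENCES orders (order_id)
      ENABLE NOVALIDATE;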
After everything was back in, we estimated statistics on our biggest tables and computed statistics on our smaller ones. It took about three weeks to figure out the method, but the time spent researching the project paid off when the database came back up after only 34 hours.
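In ANALYZE terms, that's roughly this (table names and sample size invented):

    -- Sampled stats on the big tables, full compute on the small ones.
    ANALYZE TABLE orders ESTIMATE STATISTICS SAMPLE 10 PERCENT;
    ANALYZE TABLE status_codes COMPUTE STATISTICS;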
Saturday, August 12, 2006
2 comments:
Now see this is the kind of stuff I like to see on here. Your experiences....
Sharing what you've done and the results you got, those are valuable gems. Like the NOVALIDATE with the FKs. It's the little things that make the difference.
I worked at a site where they were moving a 400GB database off SGI Irix and onto HP Superdomes. They ended up doing an initial migration with export/import, and then writing their own replication to replicate changes from their standby database to the new server in several steps over a couple of days, so that the actual downtime would be less than an hour. It seemed to work well.