I have one production database left on 8.1.7.4, although it's pretty irrelevant at this point. Plans are in the works to upgrade that particular database to 9.2, so it's time to "decommission" its 8.1.7.4 development database.
Part of the decommissioning process is to truncate all the tables in the development database but leave the structure in place, just in case we have to do any quick compiles. Even though I know it's running against my development database, my pulse still quickens when I see all those tables whirring by, knowing they're being truncated. I double- and triple-checked that I was on development, but I was still a little nervous pressing the "Enter" key...
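Generating the truncate statements is simple enough to script; a minimal SQL*Plus sketch (assuming you're connected as the schema owner, and the file name is just illustrative) looks like this:

```sql
-- Generate one TRUNCATE statement per table in the current schema,
-- spool it to a file, then review it and run it -- on development only!
SET PAGESIZE 0 FEEDBACK OFF HEADING OFF
SPOOL truncate_all.sql
SELECT 'TRUNCATE TABLE ' || table_name || ';'
FROM user_tables
ORDER BY table_name;
SPOOL OFF
```

Tables referenced by enabled foreign keys will reject the TRUNCATE (ORA-02266), so a real pass may need to disable those constraints first.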
Wednesday, October 26, 2005
Monday, October 24, 2005
Giving up on dbms_metadata
My group was assigned a project to create a script that would dump the DDL of a schema to source. We figured dbms_metadata was the way to go and started writing our script. The process was to dump a particular object type, and then run that DDL on an empty schema to make sure it ran.
First thing we hit was good old bug 3832291:
ORA-31603: object "SYS_C0059941" of type CONSTRAINT not found in schema "MYSCHEMA"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_METADATA", line 628
ORA-06512: at "SYS.DBMS_METADATA", line 1221
ORA-06512: at line 1
OK. We figured out a way around that by generating constraints as ALTER statements. As we're cooking along again, blam, bug 3721907. Sure, we found a way around that one too.
The final nail in the coffin was the performance of the whole thing. It took 30 minutes to dump a small schema of about 200 tables and a couple dozen packages. We checked Metalink and found good old bug 3653586.
Looks like most of these bugs are fixed in 10g, but I'm not upgrading a bunch of instances just so dbms_metadata works. Sigh.
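For context, the core of the approach was roughly the following sketch (illustrative, not our actual production script): suppress inline constraints when dumping the tables, then emit the constraints separately so they come out as ALTER statements — the workaround mentioned above. Note that GET_DEPENDENT_DDL raises an error for tables that have no constraints, so real code needs exception handling around that call.

```sql
-- Sketch: dump table DDL with inline constraints suppressed, then
-- emit constraints separately as ALTER statements (the bug workaround).
BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,
                                    'SQLTERMINATOR', TRUE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,
                                    'CONSTRAINTS', FALSE);
END;
/
SET LONG 1000000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name) FROM user_tables;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('CONSTRAINT', table_name)
FROM user_tables;
```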
Wednesday, October 19, 2005
Automating the post office
I thought I had a simple task; mail a letter, get a return receipt, and have the ability to track it. I don't do this every day, so I went to the post office and started reading. Certified mail, Delivery Confirmation, Registered Mail, Priority Mail... it was a lot to take in. I finally decided I wanted Delivery Confirmation with a Return receipt. I filled out both forms and went to wait in line. One line, with one teller.
A friendly postal worker announced "Anybody paying by debit card or credit card can use the automated postal center." Cool, I thought, and I proceeded over to the ATM like thingy.
I must have had a blank stare on my face because the same postal worker asked me, "Do you need help?"
"I just want to mail this letter, get an acknowledgement from the recipient, and be able to track it if it gets lost.", I replied.
"Those aren't the right forms. The machine will print out the right ones."
"OK, cool." I start punching away. Letter. First Class. Done.
"Um, it didn't ask me if I wanted to get a return receipt."
"You did it wrong. You have to add a service."
"O...K..." Cancel. Letter. First Class. Add Service. Return Receipt.
"How do I know my tracking number?"
"You have to add a different service."
At this point, she is getting just as frustrated as I am. There are three moms behind me patiently waiting to send cookies off to their kids. We go through it together: Cancel. Letter. First Class. Add Service. Certified Mail. Add Return Receipt. Done.
Finally, the stamp prints out. Another form prints the Return Receipt card and another sticker I'm supposed to put on the letter. The form is about half "peel and stick" and half regular paper. I fill out the "peel and stick" part and put it on the back of my letter. I carefully place the other sticker on the front of my envelope and fold it according to the directions on the form. I drop my neatly assembled letter in the big mailbox next to the APC.
"That's for packages", she says and quickly adds on "but it will get there." I'm going to the window next time.
Monday, October 17, 2005
Wicked ORA-27054, Solved
You may recall that I've been having problems getting 10gR2 backed up to NFS mounted disk. After we got bounced around a couple groups of Oracle Support, we finally landed into the capable hands of David Marshall.
After much research on David's part and some testing on our own, we settled on using the following parameters to mount our NFS filesystems for the purpose of backup:
rw,bg,intr,hard,timeo=600,wsize=32768,rsize=32768,nfsvers=3,tcp
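For the record, on a Linux client those parameters correspond to an /etc/fstab entry something like this (the filer name and paths are made up; note that the Linux client spells the version option nfsvers):

```
# NFS backup area, mounted with the settings that fixed the RMAN slowness
filer01:/vol/oraback  /oraback  nfs  rw,bg,intr,hard,timeo=600,wsize=32768,rsize=32768,nfsvers=3,tcp  0 0
```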
So all is well now. Our 9.2 backups complete in the same time and our 10.2 backups complete in an acceptable time. Now if we just knew which one of those parameters did it, we'd complete the education process.
Saturday, October 15, 2005
Setting up Multiple Listeners
Somebody had a problem in one of the forums with connections timing out. I suggested the error might be due to the listener being really busy, and that the poster could set up multiple listeners and load-balance between them. The setup was complex enough that I wanted enough room to explain the whole thing.
Setting up two listeners on the same box is done via the listener.ora file, usually located in your $ORACLE_HOME/network/admin directory. I have set up two listeners on my test box, and the listener.ora file looks like this:
LSNR1666 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = jake)(PORT = 1666))
)
)
)
LSNR2666 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = jake)(PORT = 2666))
)
)
)
SID_LIST_LSNR1666 =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = lx10r2.us)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0.1)
(SID_NAME = lx10r2)
)
)
SID_LIST_LSNR2666 =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = lx10r2.us)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0.1)
(SID_NAME = lx10r2)
)
)
This part of the listener.ora tells me my listener name is LSNR1666 and that it is listening on port 1666 of hostname "jake" and I'll be using the TCP protocol.
LSNR1666 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = jake)(PORT = 1666))
)
)
)
This part of the listener.ora tells me that LSNR1666 will be listening for the DB named lx10r2.us whose ORACLE_HOME is /u01/app/oracle/product/10.2.0.1 and whose ORACLE_SID is lx10r2. (This could have been specified with a SERVICE_NAME also, but I chose to do it this way because it is simpler).
SID_LIST_LSNR1666 =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = lx10r2.us)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0.1)
(SID_NAME = lx10r2)
)
)
That's it; now you just start the listeners by name:
[oracle@jake admin]$ lsnrctl start lsnr1666
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 15-OCT-2005 08:52:23
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Starting /u01/app/oracle/product/10.2.0.1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0.1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0.1/network/log/lsnr1666.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=jake)(PORT=1666)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=jake)(PORT=1666)))
STATUS of the LISTENER
------------------------
Alias lsnr1666
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 15-OCT-2005 08:52:23
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/10.2.0.1/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/10.2.0.1/network/log/lsnr1666.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=jake)(PORT=1666)))
Services Summary...
Service "lx10r2.us" has 1 instance(s).
Instance "lx10r2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@jake admin]$ lsnrctl start lsnr2666
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 15-OCT-2005 08:52:30
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Starting /u01/app/oracle/product/10.2.0.1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0.1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0.1/network/log/lsnr2666.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=jake)(PORT=2666)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=jake)(PORT=2666)))
STATUS of the LISTENER
------------------------
Alias lsnr2666
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 15-OCT-2005 08:52:30
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/10.2.0.1/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/10.2.0.1/network/log/lsnr2666.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=jake)(PORT=2666)))
Services Summary...
Service "lx10r2.us" has 1 instance(s).
Instance "lx10r2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
We can see that the listeners are started and waiting for connections:
[oracle@jake admin]$ tail $ORACLE_HOME/network/log/lsnr1666.log
Trace information written to /u01/app/oracle/product/10.2.0.1/network/trace/lsnr1666.trc
Trace level is currently 0
Started with pid=3201
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=jake)(PORT=1666)))
Listener completed notification to CRS on start
TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
WARNING: Subscription for node down event still pending
15-OCT-2005 08:52:23 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=jake)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=lsnr1666)(VERSION=169869568)) * status * 0
[oracle@jake admin]$ tail $ORACLE_HOME/network/log/lsnr2666.log
Trace information written to /u01/app/oracle/product/10.2.0.1/network/trace/lsnr2666.trc
Trace level is currently 0
Started with pid=3211
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=jake)(PORT=2666)))
Listener completed notification to CRS on start
TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
WARNING: Subscription for node down event still pending
15-OCT-2005 08:52:30 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=jake)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=lsnr2666)(VERSION=169869568)) * status * 0
Next, we configure the client. The client naming is usually controlled by a file called tnsnames.ora in the $ORACLE_HOME/network/admin directory of the client. My tnsnames.ora file looks like this:
D:\oracle\product\10.2.0\NETWORK\ADMIN>type tnsnames.ora
# tnsnames.ora Network Configuration File: D:\oracle\product\10.2.0\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
LX10R2.US =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE=on)
(ADDRESS = (PROTOCOL = TCP)(HOST = jake)(PORT = 1666))
(ADDRESS = (PROTOCOL = TCP)(HOST = jake)(PORT = 2666))
)
(CONNECT_DATA = (SID = lx10r2))
)
Here, the database alias is "lx10r2.us". Normally, when you specify multiple address lines for an alias, Oracle will attempt the first and, if that fails, attempt the second. This might actually work for the poster, but it would still pound port 1666 until it couldn't handle any connections and THEN try port 2666. The key to getting a somewhat even distribution over the two listeners is the (LOAD_BALANCE=ON) parameter.
Just to test the theory, we create a small batch file on the client machine that connects and disconnects in an infinite loop. The batch file looks like this:
C:\temp>more infinite.bat
:start
sqlplus jeffh/jeffh@lx10r2.us @disconnect
goto :start
C:\temp>more disconnect.sql
exit
Then, we open up multiple command prompt windows and start pounding away. You can see the listener log files keep growing, and both listeners are accepting approximately the same number of connections:
[oracle@jake log]$ wc -l *.log; sleep 120; wc -l *.log
206 lsnr1666.log
199 lsnr2666.log
405 total
528 lsnr1666.log
582 lsnr2666.log
1110 total
Nice. About three hundred connections per minute. Not bad for an 802.11b connection to a 733MHz Linux box connected to a 100Mbps router. Will this solve the poster's problem? I don't know. I do know this is how you can set up multiple listeners, though.
Friday, October 14, 2005
Using MySQL more frequently
As I use MySQL more frequently, I really get perturbed when I get back onto Oracle and can't do:
SELECT someFunctionName();
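In Oracle, a SELECT always needs a FROM clause, so the equivalent (with someFunctionName standing in for any stored function) is the clunkier:

```sql
SELECT someFunctionName() FROM dual;
```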
Thursday, October 13, 2005
Thanks Howard, Part II
I use StatCounter to keep track of my blog's visitors. No, I don't use it for ads or anything like that, but still, I like to know who is reading the blog and how they got here.
A couple of weeks ago, Howard J. Rogers added my blog to the links section of The Dizwell Blog and I started getting a lot of referring links. Sometimes when I post an entry about MySQL, I'll get a spike from Planet MySQL and other related sites. Over the last few days, The Dizwell Blog has taken the top spot and I'm starting to get referrals from other sites I never knew about. In fact, I've found a couple of interesting blogs, so in the near future I'll be modifying my own links section to replace some of the blogs that don't get updated on a regular basis.
So, thanks to Howard and others for helping me get recognized, and thanks to the readers. Keep on blogging.
This is the second time HJR has helped me out in the last couple weeks.
Tuesday, October 11, 2005
Wicked ORA-27054, Part I
We're doing some testing with RMAN and 10.2.0.1 on Linux. Our standard backup strategy is to back up to NFS-mounted disk and then to tape at a later time. With 9.2 I could mount the NFS filesystems without any particular options and RMAN would run just fine.
On 10.2.0.1, we set up the backup and immediately got an error (ORA-27054: NFS file system where the file is created or resides is not mounted with correct options). So we submit a TAR and find out that 10.2 requires NFS filesystems to be mounted with the following options:
rsize=32768,wsize=32768,hard,noac
The backup works now, except it takes 1 hour 9 minutes to back up a 2GB database. We indicate this in the TAR and the analyst basically says "new problem, new TAR". So we create a new TAR for the performance issues, but I post a followup on the existing TAR:
Me: Is there any flag or something that we can change to turn off the checking that results in the ORA-27054?
OCS: I have seen the following being used on different problems (not related to RMAN), sometimes it works, sometimes it doesn't, I would not be able to explain why since it is not on my skills, also this parameter is not documented so I would not know what other effects could cause.
Set init.ora parameter:
_filesystemio_options=directio
You can also try:
_filesystemio_options=none
Hope this helps.
Well, at least he's being honest about it. Of course, we all know that filesystemio_options is not hidden in 10.2.
Me: This is a production system. We'd rather not "try" something if we don't know if it will solve our problem or not. Since you say you're not qualified, how do we get to someone who can tell us the answer?
OCS: Create a new tar.
OK, I get it. You're just trying to close as many TARs as possible. Glad to see my support dollars "at work".
So we pursue our "performance" problem with the other TAR and reference the first TAR. The analyst suggests we contact the vendor of our NAS to find out what they suggest. We explain this is plain NFS and it doesn't really have a vendor. They suggest getting an strace on the rman process. Now we're getting somewhere.
While they are looking at the TAR, one of my DBAs does some more testing. He backs up my 2GB database to local disk in 4 minutes. He then copies the backup pieces to the NFS disk mounted with the options specified by Oracle Support: 1 hour 9 minutes. Ah ha! Poor performance even with plain old cp.
I know what Oracle Support is going to say. I'd probably say it myself. "The fact that plain old cp is slow tells me your NFS is setup wrong." And I wouldn't disagree. The conundrum is that these are the options Oracle told us to use...
to be continued...solution
Friday, October 07, 2005
What I think about MySQL
After my series on MySQL to an Oracle DBA (here, here, here, here, and here), a lot of people have asked what I think about MySQL.
I'm not really qualified to compare the two. I know Oracle much better than I know MySQL, so the comparison would be jaded. Some people claim that MySQL is much easier to set up and maintain than Oracle. I'm not completely convinced of that. Each has its own set of hundreds of parameters that can be tweaked. Having installed MySQL from source a couple of times, I'd actually say it's a little harder to set up in certain circumstances. Not hard, just harder.
I'm very intrigued by MySQL's backup methods. At first glance, mysqldump looks more like Oracle's export program than a backup tool. An Oracle DBA would scoff at this as a single point-in-time snapshot of the data. Of course, we Oracle DBAs know you can't import and then apply archived redo logs, but imagine a scenario where you could. Apparently MySQL has the ability to import and re-apply the binlog (kind of like the redo log). This concept was tossed about in the class and I'm eager to try it out.
I'm a little concerned about connection scalability with MySQL. The MySQL server is bound to one port. In the coming months I'll be stress testing high-frequency connections to see if they scale.
I think Oracle gives you much more information about the health of the database through views than MySQL does. With Oracle, I can query v$this, x$that, and dba_whatever to tell me certain things about how the database is performing. In addition, I can write tools using these queries to automate my job and alert me to when certain conditions arise. MySQL has a lot of SHOW commands, but it's difficult to get this information in an automated fashion without some pipes, grep, and awk. Maybe I just don't know enough about it yet.
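For example, just pulling one counter out of SHOW STATUS for a monitoring script takes something like the pipeline below. This is a sketch: in real use the printf would be the mysql client invocation shown in the comment, and Threads_connected/42 are made-up sample values.

```shell
# Simulate the two-column, tab-separated output of
#   mysql -B -N -e "SHOW STATUS LIKE 'Threads_connected'"
# (-B = batch/tab-separated, -N = no column headers),
# then peel off just the value with awk.
printf 'Threads_connected\t42\n' | awk '{print $2}'   # → 42
```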
One thing I really like about MySQL is the concept of a "slow log": queries that run longer than a specified threshold are logged to a file that can be investigated later. My developers will hate me. I wonder if it's too obnoxious to set my threshold at 60 seconds. Hmmm...
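In 2005-era MySQL (4.x/5.0), that's a couple of lines in my.cnf (the log path here is made up); queries running longer than long_query_time seconds land in the log:

```
[mysqld]
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 60
```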
We'll definitely be using MySQL. It will start off with back-office apps and progress to bigger ones. I've got half a mind to port my Oracle monitoring app to MySQL just for kicks. I'll be updating this blog as I find interesting things about MySQL.
Thursday, October 06, 2005
Sysadmin at work
For you sysadmin types out there, this is not such a big accomplishment, but for me it is. I salvaged a drive from one of my older systems that I was throwing out. I decided to add it to my Linux box because I wanted to separate my home directories from my OS files.
This particular drive was an 800MB IDE drive that I slapped into the box, and the machine booted right away. Now, I'm not a hardware guy, so the fact that this worked the first time was pretty amazing to me.
Next, I knew I had to partition the disk and cook a filesystem, but I didn't know the Linux commands to do it. I went to the Red Hat docs and managed to partition my disk and cook an ext3 filesystem. On Solaris, I'd just put an entry in /etc/vfstab and mount the filesystem. But there's no vfstab file on Linux. I did some more reading and found I needed to put my entry in /etc/fstab. Although the format of the fstab file is different from what I'm used to, I put the entries in the way I thought they should be and tried to mount. Still couldn't mount.
Finally, I realize that my new disk has to have a label. I go through the same process, except I put a label on the disk this time. I use the new syntax in the /etc/fstab file and finally the filesystem mounts. I reboot the box and the filesystem mounts automatically. Basically a two-hour process for something my sysadmins could have done in about 5 minutes. The joy of learning.
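For anyone retracing the steps, the sequence boiled down to something like this. The device name is an assumption (/dev/hdb is where a second IDE drive often shows up); run as root, and only against a disk you mean to wipe:

```
# Partition the new drive interactively (create one Linux partition, hdb1)
fdisk /dev/hdb
# Cook an ext3 filesystem on the new partition
mkfs.ext3 /dev/hdb1
# Give the filesystem a label so fstab can refer to it by name
e2label /dev/hdb1 /home
# /etc/fstab entry using the Red Hat label syntax:
#   LABEL=/home   /home   ext3   defaults   1 2
mount /home
```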
Wednesday, October 05, 2005
New Toy
Our old VCR finally died a couple days ago. We'd had that thing for about 7 years and used it pretty extensively to tape shows and watch them later. The quality of the recordings had gotten pretty poor over the last 6 months or so. In fact, we made the decision to junk it only when the top half of the picture didn't record and we had to have the volume all the way up to hear it.
We went through the process of looking at new VCRs, but decided a DVD recorder might make more sense. We looked at Tivo, but didn't want to be saddled with the additional cost. So we asked my sister-in-law, a buyer in the consumer electronics field. She suggested we look at a DVD recorder with a hard drive. (Hard drive, I'm into that!) We could record up to 100 hours on the drive and burn to DVD when we needed to. Our particular DVD recorder has on-screen listings that get updated automatically. I am ashamed to say that my DVD recorder has more disk space than my Linux box.
It was really a snap to hook up to the TV. After 24 hours, the listings downloaded to the unit and we set up all our shows to record. I'll see tonight if American Chopper recorded successfully Monday. Now if I can figure out how to NFS-mount that bad boy to my Linux box, I'd be all set...
Tuesday, October 04, 2005
Problem Solved
Thanks to a tip by HJR, my .pdf problems have been solved. I've downloaded Foxit Reader and installed it on all my Windoz machines to take the place of Acrobat Reader 7.0. All my .pdf files now display correctly and print correctly. Thanks Howard.