The writings of Merlin Moncure, professional database developer, about work, life, family, and everything else.

Friday, August 31, 2007

Dell md1000 bonnie++ benchmarks

Following are the results of our testing of the Dell MD1000 with two RAID controllers (PERC 5/E). The server was a Dell 2950 with 8GB RAM. Pay close attention to the seeks figure, as this is the important figure for databases (the sequential rates are not all that important). We tried various combinations of software RAID and hardware RAID in both active/active and active/passive configurations.

scroll down for the results! :-)
Figures are K/sec for sequential output (block), rewrite, and sequential input (block), each with %CPU, plus random seeks per second.

Size  Output  %CPU  Rewrite  %CPU  Input   %CPU  Seeks   Description
16G   56600   5     32088    4     90382   6     444.9   single drive perc 5i ext2 wal
16G   361790  75    177948   33    368836  31    1115.2  dual controller 5 drive hardware raid 0 striped in software ext3
16G   405602  47    109537   15    510528  34    1111.8  single controller 10 drive raid 0 hardware xfs
16G   162477  34    110203   19    423957  26    1062.0  single controller 5 drive hardware raid 0 ext3
16G   255660  30    110531   18    379800  32    1471.4  dual controller 5 drive hardware raid 0 mirrored in software xfs
16G   299039  36    104697   14    402601  27    1460.9  single controller 10 drive hardware raid 10 xfs
16G   243036  25    126913   16    793798  52    1081.5  dual controller 10 drive hardware jbod software raid 0 xfs
16G   244808  27    90410    12    355593  24    1509.9  dual controller 10 drive hardware jbod software raid 10
16G   287469  36    95400    13    423656  31    1246.3  dual controller 5 drive raid 0 xfs, separate volumes, dual bonnie, RUN 1
16G   291124  37    109373   14    367032  28    755.4   dual controller 5 drive raid 0 xfs, separate volumes, dual bonnie, RUN 2
16G   140861  21    53683    8     129144  9     506.6   single controller 5 drive raid 0 xfs, dual bonnie, same volume, RUN 1
16G   137158  21    60206    9     111646  8     487.2   single controller 5 drive raid 0 xfs, dual bonnie, same volume, RUN 2
16G   292219  34    83425    11    262651  17    1282.4  single controller 5 drive raid 5 xfs
16G   245259  31    83647    11    229561  16    767.0   dual controller 2x4 drive hardware raid 5, RUN 1
16G   248950  32    74472    10    261117  18    1025.7  dual controller 2x4 drive hardware raid 5, RUN 2
16G   423363  44    80409    11    195092  12    1063.6  dual controller 2x5 drive hardware raid 5 striped in software
16G   40920   8     26515    4     75975   5     375.0   dual controller 2x5 drive hardware raid 5 mirrored in software
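
For reference, the post doesn't record the exact commands, but the "striped/mirrored in software" rows imply Linux md layered over the two PERC volumes, and the runs would have looked roughly like the sketch below (device names, mount point, and bonnie++ flags are assumptions, not the options actually used):

# hypothetical: stripe the two hardware RAID volumes exported by the PERC 5/E cards
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# (use --level=1 instead for the "mirrored in software" configurations)
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/bench
# hypothetical bonnie++ invocation; -s is the file size in MB, -f skips the per-character tests
bonnie++ -u nobody -d /mnt/bench -s 16384 -f -m md1000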

17 comments:

Anonymous said...

Why would one even think about using anything other than RAID 10?

Unknown said...

Hm, strange. I ran bonnie++ against my array (an MD3000 with attached MD1000 -- each filled with (15) 15k RPM 300GB SAS drives). The array is configured as a hardware RAID 10; each mirrored pair is created from one disk in the MD3000 and one disk in the MD1000. The total array is 28 disks. My test server is a PE2950 with 4GB RAM attached to the array via an HBA to the controller in the MD3000 (the two controllers are in an active / passive config). These are my results with an ext3 file system:

Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
sage-ora2 8000M 77055 90 142941 29 81398 11 73811 85 265997 15 1213 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 6037 99 +++++ +++ +++++ +++ 6248 99 +++++ +++ 18902 100
sage-ora2,8000M,77055,90,142941,29,81398,11,73811,85,265997,15,1213.3,1,16,6037,99,+++++,+++,+++++,+++,6248,99,+++++,+++,18902,100

These numbers don't seem that good compared to your array with 10 disks. I wonder why the disparity? I was expecting my random seeks to easily eclipse yours given that I have almost 3 times the number of disks... ?

I've also tested the same config with Oracle's ORION tool. Results of that test are here -> http://all.thingsit.com/archives/5

Unknown said...

Aha, so I switched over to bonnie so I could adjust the number of seekers, and with 28 seekers I get 5960 random seeks/sec. That's more like it.

---Sequential Output (nosync)--- ---Sequential Input-- --Rnd Seek-
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --04k (28)-
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
sage-o 1*8000 83120 91.2161026 34.4 78983 11.2 72553 79.8257721 16.2 5960.1 14.3

Anonymous said...

Some database users (like me :) will do table scans on multiple tables in large databases (~1 TB) for aggregation purposes.

In this case, would seeks be relatively less important and throughput be relatively more important?

And if so, do your tests simulate this, or are they weighted toward determining seek ability?

Merlin Moncure said...

well, it is much more difficult to optimize for random i/o than it is for sequential. in database terms, even 200mb/sec is very fast and you are unlikely to get that in most cases. if all you do is run aggregates on huge tables and you never update your database and you never run joins, then I guess seeks are not as important, but that is a very narrow definition that does not fit most applications.
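
to put a rough number on that (assuming a typical 8 KB database page size): 1,300 random page reads/sec * 8 KB is only about 10 MB/sec of useful throughput on a fully random workload, versus the several hundred MB/sec sequential rates above.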

Anonymous said...

I am benchmarking an MD1000 before putting into production use.

Can you tell me what RAID options, mount options, and bonnie++ options you used to achieve such high scores, especially the seeks/sec?

Thanks, Gabriel

My setup is:
Dell 2970 with Perc5e card
8 disks 7200RPM SATA
RAID 10
No Read Ahead
Write Back
64KB Stripe
XFS

My scores are:
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32184M 78639 99 169253 25 53228 7 59307 72 204111 16 442.3 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
32184M 16 24405 97 +++++ +++ 18138 65 21531 88 +++++ +++ 13975 57

Merlin Moncure said...

Everything is on there...filesystem is xfs unless otherwise noted. o/s is stock centos 5. It's a dell 2950 with two perc 5 raid controllers and the fastest dual core/memory combination available for that server.

merlin

Anonymous said...

Hi there,

I think the previous poster meant which options you ran; for instance, I ran "bonnie -u user -d /var/test -s 32473", where /var/test is the directory on the RAID array.

Thanks

Anonymous said...

What kind of drives do you have in your array? The 300GB 15K RPM ones?

Thanks

Merlin Moncure said...

150gb 15krpm sas

Anonymous said...

Mine are 8 500 Gb, 7200 RPM SATA in raid 10 and all i get is around 430 for random seeks, I'd like to know what options you guys run to get this insane number. And having 15k RPM SAS doesn't hurt I bet. I'm running Debian Etch, ext3, the raid card is Dell Perc 6.

Anonymous said...

debian:/var/vm# bonnie++ -u test -d /var/vm/test/ -s 32473
Using uid:1000, gid:1000.

Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
test 32473M 57872 98 165994 34 95848 18 63837 97 290484 31 423.8 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
debian,32473M,57872,98,165994,34,95848,18,63837,97,290484,31,423.8,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


and this is with bonnie only:

debian:/var/vm# bonnie -u test -d /var/vm/test/ -s 32473

Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
debian 32473M 55708 93 175486 36 94788 18 64323 97 289786 31 428.2 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
debian,32473M,55708,93,175486,36,94788,18,64323,97,289786,31,428.2,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

Merlin Moncure said...

"Mine are 8 500 Gb, 7200 RPM SATA in raid 10 and all i get is around 430 for random seeks, I'd like to know what options you guys run to get this insane number. And having 15k RPM SAS doesn't hurt I bet. I'm running Debian Etch, ext3, the raid card is Dell Perc 6."

nothing strange at all...a 15k sas drive should get 2-2.5x the seeks of a 7200rpm sata drive. I also have 10 drives vs 8 (a 25% gain on top of that).

so, 430 * 2.5 * 1.25 = 1343

which is in the neighborhood of what I'm getting.

merlin

Anonymous said...

ah I see, thanks. Also, what are your recommendations for mount options at bootup? Mine are all at the defaults for the raid array.

/dev/sdb1 /var/vm ext3 rw 0 0

/dev/sdb1 /var/vm ext3 defaults 0 2

Weiyi said...

Are there any comparisons of ext3 and xfs on the same hardware configuration?

Anonymous said...

On a PE1950/MD1000/PERC6E setup with 7 WD RE3 SATA drives in RAID 5, I found ext3 to be about 20% faster in random seeks compared to xfs and about 15% slower in sequential read/write. Ext3 also performed much better in file create and delete tests, although xfs scaled better and took the lead once caching was defeated.

Merlin Moncure said...

yup...the perc 6 is a better all around performer though.

ext4 is also looking pretty good.