Scroll down for the results! :-)
| Description | Size | Sequential Output K/sec | %CPU | Rewrite K/sec | %CPU | Sequential Input K/sec | %CPU | Random Seeks /sec |
|---|---|---|---|---|---|---|---|---|
| single drive perc 5i ext2 wal | 16G | 56600 | 5 | 32088 | 4 | 90382 | 6 | 444.9 |
| dual controller 5 drive hardware raid 0 striped in software ext3 | 16G | 361790 | 75 | 177948 | 33 | 368836 | 31 | 1115.2 |
| single controller 10 drive raid 0 hardware xfs | 16G | 405602 | 47 | 109537 | 15 | 510528 | 34 | 1111.8 |
| single controller 5 drive hardware raid 0 ext3 | 16G | 162477 | 34 | 110203 | 19 | 423957 | 26 | 1062.0 |
| dual controller 5 drive hardware raid 0 mirrored in software xfs | 16G | 255660 | 30 | 110531 | 18 | 379800 | 32 | 1471.4 |
| single controller 10 drive hardware raid 10 xfs | 16G | 299039 | 36 | 104697 | 14 | 402601 | 27 | 1460.9 |
| dual controller 10 drive hardware jbod software raid 0 xfs | 16G | 243036 | 25 | 126913 | 16 | 793798 | 52 | 1081.5 |
| dual controller 10 drive hardware jbod software raid 10 | 16G | 244808 | 27 | 90410 | 12 | 355593 | 24 | 1509.9 |
| RUN1 dual controller 5 drive raid 0 xfs separate volumes dual bonnie | 16G | 287469 | 36 | 95400 | 13 | 423656 | 31 | 1246.3 |
| RUN2 | 16G | 291124 | 37 | 109373 | 14 | 367032 | 28 | 755.4 |
| RUN1 single controller 5 drive raid 0 xfs dual bonnie same volume | 16G | 140861 | 21 | 53683 | 8 | 129144 | 9 | 506.6 |
| RUN2 | 16G | 137158 | 21 | 60206 | 9 | 111646 | 8 | 487.2 |
| single controller 5 drive raid 5 xfs | 16G | 292219 | 34 | 83425 | 11 | 262651 | 17 | 1282.4 |
| RUN 1 dual controller 2x4 drive hardware raid 5 | 16G | 245259 | 31 | 83647 | 11 | 229561 | 16 | 767.0 |
| RUN 2 | 16G | 248950 | 32 | 74472 | 10 | 261117 | 18 | 1025.7 |
| dual controller 2x5 drive hardware raid 5 striped in software | 16G | 423363 | 44 | 80409 | 11 | 195092 | 12 | 1063.6 |
| dual controller 2x5 drive hardware raid 5 mirrored in software | 16G | 40920 | 8 | 26515 | 4 | 75975 | 5 | 375.0 |
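A quick sanity check on the block-read numbers in the table: dividing the reported K/sec by the drive count gives a rough per-drive throughput (caching and striping overhead are ignored; the drive counts are taken from the row descriptions).

```python
# Rough per-drive sequential read throughput from selected table rows.
# Assumes the reported K/sec values are KB/s; controller caching ignored.
rows = {
    "single drive perc 5i ext2":              (90382, 1),
    "10 drive raid 0 hardware xfs":           (510528, 10),
    "10 drive jbod software raid 0 xfs":      (793798, 10),
}
for name, (kb_per_sec, drives) in rows.items():
    mb_per_drive = kb_per_sec / 1024 / drives
    print(f"{name}: {mb_per_drive:.1f} MB/s per drive")
```

The software raid 0 jbod row comes closest to the single-drive figure per spindle, which is consistent with it posting the highest aggregate read number.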
17 comments:
Ƿhy one ƿould even þink about uſing ſomeþing elſe ðan RAID 10?
Hm, strange. I ran bonnie++ against my array (an MD3000 with attached MD1000 -- each filled with (15) 15k RPM 300GB SAS drives). The array is configured as a hardware RAID 10; each mirrored pair is created from one disk in the MD3000 and one disk in the MD1000. The total array is 28 disks. My test server is a PE2950 with 4GB RAM attached to the array via an HBA to the controller in the MD3000 (the two controllers are in an active / passive config). These are my results with an ext3 file system:
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
sage-ora2 8000M 77055 90 142941 29 81398 11 73811 85 265997 15 1213 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 6037 99 +++++ +++ +++++ +++ 6248 99 +++++ +++ 18902 100
sage-ora2,8000M,77055,90,142941,29,81398,11,73811,85,265997,15,1213.3,1,16,6037,99,+++++,+++,+++++,+++,6248,99,+++++,+++,18902,100
These numbers don't seem that good compared to your array with 10 disks. I wonder why the disparity? I was expecting my random seeks to easily eclipse yours given that I have almost 3 times the number of disks... ?
I've also tested the same config with Oracle's ORION tool. Results of that test are here -> http://all.thingsit.com/archives/5
Aha, so I switched over to bonnie so I could adjust the number of seekers and with 28 seekers I get 5960 random seeks / sec. That's more like it..
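One way to read that result (my interpretation, not the poster's) is Little's law: sustained throughput equals outstanding requests divided by average service time, so 28 seekers at ~5960 seeks/sec implies roughly 4.7 ms per seek, which is in the right range for 15k RPM drives.

```python
# Little's law sketch: throughput = concurrency / latency.
# Using the poster's figures: 28 seekers, 5960 seeks/sec.
seekers = 28
seeks_per_sec = 5960
avg_service_ms = seekers / seeks_per_sec * 1000
print(f"implied average seek service time: {avg_service_ms:.1f} ms")

# With too few seekers, the same array is latency-bound at roughly:
print(f"single-seeker ceiling: {1000 / avg_service_ms:.0f} seeks/sec")
```

That latency-bound ceiling is why the earlier single-benchmark runs could not show what 28 spindles are capable of.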
---Sequential Output (nosync)--- ---Sequential Input-- --Rnd Seek-
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --04k (28)-
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
sage-o 1*8000 83120 91.2 161026 34.4 78983 11.2 72553 79.8 257721 16.2 5960.1 14.3
Some database users (like me :) will do table scans on multiple tables in large databases (~1 TB) for aggregation purposes.
In this case, would seeks be relatively less important and throughput be relatively more important?
And if so, do your tests simulate this, or are they weighted toward determining seek ability?
Well, it is much more difficult to optimize for random I/O than for sequential. In database terms, even 200 MB/sec is very fast, and you are unlikely to get that in most cases. If all you do is run aggregates on huge tables, never update your database, and never run joins, then I guess seeks are not as important, but that is a very narrow workload that does not fit most applications.
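A back-of-envelope comparison makes the trade-off concrete (illustrative numbers, not from the benchmarks above): scanning a ~1 TB table sequentially versus fetching rows with random seeks.

```python
# Back-of-envelope: sequential scan vs random access on a ~1 TB table.
# 200 MB/s and 1200 seeks/sec are illustrative figures, not measurements.
table_bytes = 1 * 1024**4        # 1 TiB
seq_mb_per_sec = 200
seeks_per_sec = 1200

scan_minutes = table_bytes / (seq_mb_per_sec * 1024**2) / 60
print(f"full sequential scan: {scan_minutes:.0f} minutes")

# Fetching 10 million rows one random seek at a time:
random_rows = 10_000_000
seek_minutes = random_rows / seeks_per_sec / 60
print(f"10M random fetches: {seek_minutes:.0f} minutes")
```

Even touching a small fraction of the table by random seeks can take longer than scanning the whole thing, which is why seek capacity matters for mixed workloads.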
I am benchmarking an MD1000 before putting into production use.
Can you tell me what RAID options, mount options, and bonnie++ options you used to achieve such high scores, especially the seeks/sec?
Thanks, Gabriel
My setup is:
Dell 2970 with Perc5e card
8 disks 7200RPM SATA
RAID 10
No Read Ahead
Write Back
64KB Stripe
XFS
My scores are:
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32184M 78639 99 169253 25 53228 7 59307 72 204111 16 442.3 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 24405 97 +++++ +++ 18138 65 21531 88 +++++ +++ 13975 57
Everything is on there...filesystem is xfs unless otherwise noted. o/s is stock centos 5. It's a dell 2950 with two perc 5 raid controllers and the fastest dual core/memory combination available for that server.
merlin
Hi there,
I think the previous poster meant which options you ran. For instance, I ran "bonnie -u user -d /var/test -s 32473", where /var/test is the directory on the raid array.
Thanks
What kind of drives do you have in your array? The 300GB 15K RPM ones?
Thanks
150gb 15krpm sas
Mine are 8 x 500 GB, 7200 RPM SATA drives in RAID 10, and all I get is around 430 random seeks/sec. I'd like to know what options you guys run to get these insane numbers; having 15k RPM SAS doesn't hurt, I bet. I'm running Debian Etch, ext3, and the raid card is a Dell Perc 6.
debian:/var/vm# bonnie++ -u test -d /var/vm/test/ -s 32473
Using uid:1000, gid:1000.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
test 32473M 57872 98 165994 34 95848 18 63837 97 290484 31 423.8 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
debian,32473M,57872,98,165994,34,95848,18,63837,97,290484,31,423.8,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
and this is with bonnie only:
debian:/var/vm# bonnie -u test -d /var/vm/test/ -s 32473
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
debian 32473M 55708 93 175486 36 94788 18 64323 97 289786 31 428.2 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
debian,32473M,55708,93,175486,36,94788,18,64323,97,289786,31,428.2,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
"Mine are 8 500 Gb, 7200 RPM SATA in raid 10 and all i get is around 430 for random seeks, I'd like to know what options you guys run to get this insane number. And having 15k RPM SAS doesn't hurt I bet. I'm running Debian Etch, ext3, the raid card is Dell Perc 6."
nothing strange at all...a 15k sas drive should get 2-2.5x the seeks of a 7200rpm sata. I also have 10 drives vs 8 (a further 25% gain on top of that).
so, 430 * 2.5 * 1.25 = 1343
which is in neighborhood of what I'm getting.
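That estimate, spelled out (2.5x being the upper end of the quoted seek advantage):

```python
# Scaling estimate from the reply above: baseline 430 seeks/sec on
# 8x 7200rpm SATA, scaled for 15k SAS drives and the extra spindles.
baseline = 430
sas_factor = 2.5        # upper end of the quoted 2-2.5x seek advantage
drive_factor = 10 / 8   # 10 drives vs 8
estimate = baseline * sas_factor * drive_factor
print(round(estimate))  # ~1344, in the neighborhood of the observed ~1460
```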
merlin
ah I see, thanks. Also, what are your recommendations for mount options at bootup? Mine are all at the defaults for the raid array.
/dev/sdb1 /var/vm ext3 rw 0 0
/dev/sdb1 /var/vm ext3 defaults 0 2
Are there comparisons on same hardware configurations with ext3 and xfs?
On a PE1950/MD1000/PERC6E setup with 7 WD RE3 SATA drives in RAID 5, I found ext3 to be about 20% faster in random seeks compared to xfs and about 15% slower in sequential read/write. Ext3 also performed much better in file create and delete tests, although xfs scaled better and took the lead once caching was defeated.
yup...the perc 6 is a better all around performer though.
ext4 is also looking pretty good.