[Fwd: [H-GEN] partitioning again]

Robert Stuart rstu at qbssss.edu.au
Wed May 12 22:38:06 EDT 1999



James McPherson wrote:
> 
> 
> Robert Stuart writes:
>  > I've reforwarded this because I couldn't post to the general list.....
> 
> didn't think I'd seen it before ;)
> 
>  > Robert Stuart wrote:
>  > > James McPherson wrote:
>  > >
>  > I use one large _fs_ for /, /usr, and /var (see below) on our Solaris
>  > servers because we are running SDS and the root disk is mirrored.  It is
>  > simply easier to manage that way.  We also have UPSs (and the building
>  > has emergency power), so I am not concerned about losing the fs except
>  > due to some sort of software problem screwing the root fs, which
>  > partitioning can't help with anyhow.  This solves my fs space problems
>  > too (/var filling up, etc.).  The one issue I have with this is the root
>  > disk's performance, because it holds both swap and root (and /usr, /var,
>  > ...), both of which are hit heavily when the machine is loaded.
>  > Comments?
> 
> According to Brian Wong's Configuration and Capacity Planning for Solaris
> Servers book, the outer edges of the disk are the fastest, so we should be
> placing heavily used partitions such as /usr and /var on slices 6, 7, or 8. Of
> course, I have to wait until I get another box to reconfigure before I can try
> that ;)
> 

Realistically, how much of a speed difference is putting different
filesystems on different areas of the disk going to make?  Does anyone have
hard figures?  Even if I sacrifice 10% of the performance of those
filesystems, that's not going to worry me; there are much better ways to
spend my time optimising the machine's performance, and it's not worth the
hassle of having to juggle space.  On a real production machine, there
should be enough memory that swap simply holds data/code that is never
going to be used (i.e. the pages that are in memory are the "used" ones).
The Ultra E450 I was talking about has 6x9GB drives.  I have only root and
swap on the 2 disks that are used for the root fs, which leaves me with
~7GB of free space (currently not even partitioned), because I don't want
the root and swap disks to be slowed by other apps using the same drives.
In the end, I'll probably put some stuff on there that is static and
unlikely to cause a large write load (it will be a mirrored fs).
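
If anyone wants rough figures, a quick and dirty test is to time sequential
reads from the raw device at each end of the same disk.  A sketch only: the
slice names below are made up, and it assumes you've laid out one slice on
the outer cylinders and one on the inner cylinders of the disk under test.

    # ~100MB sequential read from a slice on the outer cylinders
    time dd if=/dev/rdsk/c0t1d0s0 of=/dev/null bs=1024k count=100
    # the same read from a slice on the inner cylinders of the same disk
    time dd if=/dev/rdsk/c0t1d0s7 of=/dev/null bs=1024k count=100

With zone bit recording the outer cylinders pass more sectors under the
head per revolution, so the outer slice should win on sequential transfer
rate; whether that shows up under a real mixed workload is another matter.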

> 
> # prtconf -v
> System Configuration:  Sun Microsystems  sun4d
> Memory size: 768 Megabytes
> System Peripherals (Software Nodes):
> SUNW,SPARCserver-1000
> [snip]
> cpu-unit, instance #x
> [snip]
>  TI,TMS390Z55        <<<<---- the 512kb onboard cache module

I thought the cache on the SuperSPARCs was 1MB?
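
One way to settle it, assuming the OBP node for the CPU module exports an
ecache-size property (it seems to on the SPARCs I've poked at):

    # dump the OBP device tree and look for the external cache size
    prtconf -vp | grep -i ecache-size
    # 00100000 (hex) would be 1MB; 00080000 would be 512KB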

> 
> As for swap space, what sort of db application are you running? we've got
> oracle 7.3.3 and 8.0.5 (testing mode) to deal with, along with three
> production databases and two development dbs. Then there's the application
> weight to consider - nothing's cut and dried with db servers!

True.  We have only one DB running on it; the test DB is on a separate
machine (which also serves as a "backup" machine in case the production
machine blows up).

>  > /md6 is made up of 4 9GB SCSI drives striped and mirrored.
> 
> since when did qbssss have any money to spend on decent configurations? ;)

Since it was costing us more to maintain the old server (SS1000, 3x50MHz)
than to lease a new machine (which is under a 3-year warranty)!
 
>  > NB we use DB files rather than raw partitions (at this point in time, we
>  > can afford to shut down the DB every night for backup).
> 
> oh don't start that one please! imho the "real databases use raw partitions
> for speed" claim is a furphy which should have died out when 2GB drives became
> cheap. The speed and capacity of disks these days, coupled with the fast/wide
> SCSI standard and ever-faster system busses, make it more efficient to use the
> OS filesystem than raw partitions.

I find this a bit hard to believe.  Assuming Oracle (or whichever RDBMS
vendor) has bright people etc., raw partitions should be faster (I guess in
some ways they'd act like a filesystem designed specifically for a DB) -
Oracle certainly claims this.  But it is much simpler to treat the DB data
as just another couple of files - no special treatment at all - and shut
down the DB to do backups (as opposed to using table-level backup methods).
Note that "raw partitions" as I use the term above doesn't mean they can't
be a RAID "partition".
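
For what it's worth, the nightly cold backup amounts to something like the
sketch below.  The SID and datafile path are made up, and the svrmgrl
incantation is just the Oracle 7/8 style we happen to run:

    #!/bin/sh
    # cold backup: stop the instance, copy the datafiles, restart
    ORACLE_SID=PROD; export ORACLE_SID

    # shut the DB down cleanly (svrmgrl reads commands from stdin)
    ( echo 'connect internal'; echo 'shutdown immediate' ) | svrmgrl

    # /u01/oradata/PROD is a hypothetical datafile directory
    tar cf /backup/PROD-`date +%Y%m%d`.tar /u01/oradata/PROD

    # bring it back up
    ( echo 'connect internal'; echo 'startup' ) | svrmgrl

Nothing clever, but because the DB is down while the tar runs, the copy is
guaranteed consistent - no hot-backup mode, no per-tablespace juggling.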

CYA
-- 
Robert Stuart
Ph  61-7-3864 0364
