[Fwd: [H-GEN] partitioning again]

James McPherson jmcphers at laurel.ocs.mq.edu.au
Wed May 12 01:24:55 EDT 1999


(Note reply-to: being general at humbug.org.au vs James McPherson <jmcphers at laurel.ocs.mq.edu.au>)


Robert Stuart writes:
 > (Note reply-to: being general at humbug.org.au vs Robert Stuart <rstu at qbssss.edu.au>)
 > I've reforwarded this because I couldn't post to the general list.....

didn't think I'd seen it before ;)

 > Robert Stuart wrote:
 > > James McPherson wrote:
 > >
 > > Well, from an enterprise point of view (ie we run oracle ;>), that sort of
 > > scheme is laughable because there are just too many things which can go wrong
 > > if there is not enough space - we can't afford that possibility. On the
 > > machines I admin we generally have two internal disks (we only do scsi btw),
 > > and any number of external disks on separate fast/wide controllers, on which
 > > we have the following:
 > >
 > > /dev/dsk/c0t0d0s0      57143   19234   32195    38%    /
 > > /dev/dsk/c0t0d0s1     962582  399296  505532    45%    /usr
 > > /dev/dsk/c0t0d0s4     448143  207353  195976    52%    /var
 > > swap                 1069220  252500  816720    24%    /tmp
 > > /dev/dsk/c0t1d0s0     480919  126977  329897    28%    /opt
 > > /dev/dsk/c0t1d0s1     962582  476992  437461    53%    /home
 > >
 > > ^^these are the internal disks - targets 0 and 1 on the internal chain.
 > 
 > I use one large _fs_ for /, /usr, and /var (see below) on our Solaris
 > servers because we are running SDS and the root disk is mirrored.  It is
 > simply easier to manage that way.  We also have UPSs (and the building
 > has emergency power), so I am not concerned about losing the fs except
 > due to some sort of software problem screwing the root fs, which
 > partitioning can't help with anyhow.  This solves my fs space problems
 > too (/var filling etc). The one issue I have with this is the root
 > disk's performance, because it holds both swap and root (and usr, var
 > ...), both of which are hit heavily when the machine is loaded.
 > Comments?

According to Brian Wong's Configuration and Capacity Planning for Solaris
Servers book, the outer edges of the disk are the fastest, so we should be
placing heavily used partitions such as /usr and /var on slices 6, 7 or 8. Of
course, I have to wait until I get another box to reconfigure before I can try
that ;)
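(Strictly speaking it's the starting cylinder rather than the slice number
that matters - the low-numbered sectors sit on the outer, faster zones of the
platter. If you want to see where your slices actually landed, and assuming
the usual s2 "backup" slice covering the whole disk:

# prtvtoc /dev/rdsk/c0t0d0s2

the First Sector and Sector Count columns show which slices got the outer
cylinders.)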

 > [snip]
 > 
 > > While we're at it, what do people think about Solaris' use of tmpfs for
 > > virtual memory? This what we're seeing at the moment on this particular
 > > machine:
 > >
 > > # swap -l
 > > swapfile             dev  swaplo blocks   free
 > > /dev/dsk/c0t0d0s3   32,3       8 1024472 721296
 > > /dev/dsk/c0t1d0s3   32,11      8 1080712 770976
 > >
 > > interestingly, before we cut over to 100Mbit (hme0 is the standard 100Mbit
 > > device - the hme stands for "happy meal" ;>), we were seeing about 50% swap
 > > utilization - now it rarely gets above 30%. (Yes, that was the _only_ thing we
 > > changed).
 > >
 > > Comments/queries anybody? Or is Solaris the Dark Side ? ;)
 > 
 > That's a bit odd (the use of /tmp being reduced by the change to 100Mbit).
 > Three possibilities I can think of:
 > 1. memory leak in driver for old 10Mbit....

possible.

 > 2. something else changed (how clients access the DB?) that caused your
 > users to use the DB server in a different way, hence using less memory
 > (eg fewer oracle server processes running?).  Any DB parameters change
 > (and DB restarted due to shutdown)?

No, the switch from 10Mbit to 100Mbit was the _only_ thing that we did at that
time, and other changes happened well after we noticed the change in swap
utilization on the server. Of course the db was restarted - we shut it down
every night for backups, not least because ufsdump doesn't like dumping from
active partitions. The mix of remote clients and direct telnet clients hasn't
changed either, since the user population is static and they don't tend to
change their preferences for access in a hurry.
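(For the curious, the nightly run is nothing exotic - roughly the script
below, though the tape device, slice names and ORACLE_HOME are stand-ins for
illustration, not our real layout:

#!/bin/sh
# nightly cold backup - sketch only; devices and paths are assumptions
ORACLE_HOME=/opt/oracle/product/7.3.3; export ORACLE_HOME
su - oracle -c $ORACLE_HOME/bin/dbshut        # cold backup: stop the db first
for slice in /dev/rdsk/c0t0d0s0 /dev/rdsk/c0t0d0s1 /dev/rdsk/c0t0d0s4
do
        # 0 = full dump, u = record it in /etc/dumpdates, f = dump device
        # (no-rewind tape device so the dumps land one after another)
        ufsdump 0uf /dev/rmt/0n $slice ||
                echo "ufsdump of $slice failed" | mailx -s "backup failure" root
done
mt -f /dev/rmt/0n rewoffl                     # rewind and take the tape offline
su - oracle -c $ORACLE_HOME/bin/dbstart       # bring the db back up

cron kicks it off after midnight, and the || mailx means a short tape or a
hung drive gets noticed the next morning.)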

 > 3. something else had a memory leak that you haven't triggered again or
 > just plain used a heap of memory at some stage (memory doesn't get
 > released back to the OS until the process finishes).

I think it's a buffering thing between oracle and solaris but I don't really
have the tools or the time to figure it out (and I'm _not_ going to put the
cable back into le0!).
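(If anyone wants to poke at their own box, the cheap checks are

# swap -s
# vmstat 5

swap -s shows reserved versus available virtual swap, and if the sr (scan
rate) column in vmstat stays non-zero under load you have genuine memory
pressure rather than just tmpfs bookkeeping.)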

 > You seem to be using a huge amount of swap (~250MB).  Unless you have
 > large files in /tmp, I'd suggest you get more memory.  I think having
 > /tmp as virtual memory is great.  Our stats are as follows:
 > 
 > #df -k
 > Filesystem            kbytes    used   avail capacity  Mounted on
 > /dev/md/dsk/d2        963869  640089  265948    71%    /
 > /dev/md/dsk/d6       17398449 9602512 7621953    56%    /md6
 > swap                 1440600    5976 1434624     1%    /tmp
 > 
 > #swap -l
 > swapfile             dev  swaplo blocks   free
 > /dev/md/dsk/d5      85,5      16 2050432 1993488

# psrinfo -v
Status of processor 0 as of: 05/12/99 14:39:03
  Processor has been on-line since 04/16/99 16:38:46.
  The sparc processor operates at 85 MHz,
        and has a sparc floating point processor.
Status of processor 1 as of: 05/12/99 14:39:03
  Processor has been on-line since 04/16/99 16:38:50.
  The sparc processor operates at 85 MHz,
        and has a sparc floating point processor.
Status of processor 2 as of: 05/12/99 14:39:03
  Processor has been on-line since 04/16/99 16:38:50.
  The sparc processor operates at 85 MHz,
        and has a sparc floating point processor.
Status of processor 3 as of: 05/12/99 14:39:03
  Processor has been on-line since 04/16/99 16:38:50.
  The sparc processor operates at 85 MHz,
        and has a sparc floating point processor.
Status of processor 4 as of: 05/12/99 14:39:03
  Processor has been on-line since 04/16/99 16:38:50.
  The sparc processor operates at 85 MHz,
        and has a sparc floating point processor.
Status of processor 5 as of: 05/12/99 14:39:03
  Processor has been on-line since 04/16/99 16:38:50.
  The sparc processor operates at 85 MHz,
        and has a sparc floating point processor.       


# swap -l
swapfile             dev  swaplo blocks   free
/dev/dsk/c0t0d0s3   32,3       8 1024472 651552
/dev/dsk/c0t1d0s3   32,11      8 1080712 701712  

# prtconf -v
System Configuration:  Sun Microsystems  sun4d
Memory size: 768 Megabytes
System Peripherals (Software Nodes):
SUNW,SPARCserver-1000             
[snip]
cpu-unit, instance #x
[snip]
 TI,TMS390Z55        <<<<---- the 512kb onboard cache module

# prtconf | egrep "SUNW|QLG"
SUNW,SPARCserver-1000
    SUNW,nvtwo (driver not attached)
            SUNW,hme, instance #1
            SUNW,fas, instance #0
            SUNW,hme, instance #2
            SUNW,fas, instance #1
            QLGC,isp, instance #0
            SUNW,hme, instance #3
            SUNW,fas, instance #2
            QLGC,isp, instance #1
            SUNW,hme, instance #4
            SUNW,fas, instance #3
            SUNW,hme, instance #0
            SUNW,fas, instance #4

which shows that we have 6 cpus running at 85MHz, each with 512kb of cache. We
have 5 Sun-branded fast/wide cards which each have a happymeal controller on
them, and two QLogic fast/wide cards. And if you're cluey about solaris and
devices, you'll notice that hme0 for this box is the last one in the system,
which of course was the last socket I plugged the cable into ;)

RAM is something which I'd love more of, but at the moment the budget just
doesn't allow for it. Most of the disks we've got are in unipacks, and my
priority is to get them replaced with multipacks instead - ever seen 20-odd
external scsi devices with power and scsi cables at the limits of their
length? Rat's nest is my phrase for it.

 > 
 > We haven't even touched our swap space (and we have 1GB of it) - its a
 > fairly quiet time of year for us.  As a result, all of the files in /tmp
 > are in memory -> fast access etc.  This is great until someone decides
 > to put a couple of huge files in /tmp, but Solaris then moves them onto
 > real disk space over time.

Our quiet time of year is ... oh hang on, we haven't got one. Though we might
get one next year when the olympics are on, since Macquarie is going to be an
official carpark for the fortnight before, the duration, and a week afterwards.
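On the huge-files-in-/tmp worry, by the way: you can cap how much virtual
memory tmpfs is allowed to eat with the size mount option in /etc/vfstab -
a sketch (check that your Solaris release supports the option; the 512m is a
number I pulled out of the air):

swap    -       /tmp    tmpfs   -       yes     size=512m

which stops one careless mkfile from reserving all your swap.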

 > As you can see, our root fs has heaps of space (the only print jobs are
 > small PCL stuff - this machine is a DB server).

As for swap space, what sort of db application are you running? We've got
oracle 7.3.3 and 8.0.5 (in testing) to deal with, along with three production
databases and two development dbs. Then there's the application weight to
consider - nothing's cut and dried with db servers!

 > /md6 is made up of 4 9GB SCSI drives striped and mirrored.

since when did qbssss have any money to spend on decent configurations? ;)

 > NB we use DB files rather than raw partitions (at this point in time, we
 > can afford to shut down the DB every night for backup).

Oh, don't start that one please! imho the "real databases use raw partitions
for speed" claim is a furphy which should have died out when 2Gb drives became
cheap. The speed and capacity of disks these days, coupled with the fast/wide
scsi standard and ever faster system busses, make it more efficient to use the
OS filesystem than raw partitions.
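If anyone wants numbers rather than religion it's easy enough to measure on
your own gear - a rough sketch, where the slice and file names are stand-ins
for whatever you actually have (and note these are reads; don't go dd'ing
*onto* a raw slice you care about):

# timex dd if=/dev/rdsk/c0t1d0s1 of=/dev/null bs=64k count=16384
# timex dd if=/home/oradata/somefile.dbf of=/dev/null bs=64k count=16384

Run iostat -x 5 in another window while they go, compare the service times,
and see which side of the argument your disks come down on.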



</soapbox> ;)


cheers,
James C. McPherson
--
Unix Systems Administrator            Phone: +61.2.9850.9418
Office of Computing Services            Fax: +61.2.9850.7433
Macquarie University   NSW    2109     remove the extra char 
AUSTRALIA			       in the reply-to field

--
This is list (humbug) general handled by majordomo at lists.humbug.org.au .
Postings only from subscribed addresses of lists general or general-post.


