[H-GEN] Tape Drives

James McPherson James.McPherson at Sun.COM
Mon Jan 7 07:09:55 EST 2002


[ Humbug *General* list - semi-serious discussions about Humbug and  ]
[ Unix-related topics.  Please observe the list's charter.           ]
[ Worthwhile understanding: http://www.humbug.org.au/netiquette.html ]


[snip]
> Keep in mind that depending on the type of data being stored, you *may* get
> somewhat more or less than the 40G onto a DDS4.  The 40G is essentially
> double the raw capacity of the tape and assumes a compression ratio of 50%
> when using the drive's internal compression algorithm.  In practice I find
> that the compression ratio is generally somewhat better than that [1].  You
> will find that if your data is largely textual in nature then it will
> compress rather nicely.  

Umm, I've only found that sparse files and plain text files (code, html,
latex etc) compress at slightly better than 50% onto tape, and even that
depends on how good the drive's native compression algorithm is. The newer
drives (dlt7/8000, super dlt, lto/ultrium, sony ait, stk 9840(fc)) are really
quite good. The older ones (anything 8mm, and dds prior to the recent dds3/4)
are not. The issue I see with dds4, though, is that the write speed of the
drive - even without compression - is still pretty abysmal when you have
multiple Gb of data that you want to dump out asap. We supply dds4 drives in
the d130 and d240 storage units inside the SF{3,4,6}800 and the new SF15k, but
those drives are really only there for a ufsdump of your OS, and so the onsite
engineer can dump your crash dump to tape quickly.
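
If you want a quick sanity check on how compressible your own data actually
is before trusting the drive's 2:1 figure, something along these lines gives
a ballpark (just a sketch - the sample path is made up, point it at something
representative):

    #!/bin/sh
    # rough compressibility check: raw vs gzip'd size of a sample tree
    SAMPLE=/export/home       # made-up path - use a representative data set
    raw=`tar cf - $SAMPLE 2>/dev/null | wc -c`
    comp=`tar cf - $SAMPLE 2>/dev/null | gzip -9 | wc -c`
    echo "raw: $raw bytes, gzip -9: $comp bytes"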

> DLT is definitely better both in capacity and speed, however you will pay 
> for that improvement, as you have discovered.

True - so you have to work out your company's sweet spot. For a smaller
system without much data (say <10Gb to back up at level 0 regularly),
dds{3,4} or 8mm may well be more than adequate; however, if you have a
database that has to stay available then you need something faster.
 
> The other method I have used is to use something like 'tar c stuff | gzip -9
> | dd of=/dev/tape'.  It's a little heavy on the CPU usage, but in the middle
> of the night, who cares?  The main reason I did this instead of using the
> drive compression was that the version of tar on the system in question had
> problems handling a few largish files.

Ah, don't you just love software with bugs in it!
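
Fwiw, if you do go the tar-through-gzip route, it's worth giving dd a decent
output block size so the drive can stream rather than shoe-shine. A sketch
(device name and paths are guesses - adjust for your box):

    # write: tar through gzip, blocked up for the tape
    tar cf - /data | gzip -9 | dd of=/dev/rmt/0n obs=64k
    # read back / restore is just the reverse
    dd if=/dev/rmt/0n ibs=64k | gunzip | tar xf -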
 
> To add another thought into the tape vs. HDD argument... if you have to back
> up a database, quite often the database needs to be taken offline if you
> wish to make a backup of its data files.  We use Oracle at work and with
> that it is possible to place the database into a mode where you can take a
> consistent backup of the datafiles while the system is still active.  While
> in this mode certain things don't happen as usual and large volumes of
> transactions will cause problems.  

ebu (enterprise backup utility) for oracle 7, or rman (recovery manager) for
8 onwards, is what you want there. A large volume of transactions that
overflows your redo and archive logs will cause you headaches, but if you
allocate enough space for them based on your largest transaction volume over
the period of your backup (eg, 30000 transactions per 2.5 hours) then you
_should_ be ok.
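
For anyone playing along at home, the manual hot-backup dance underneath all
that looks roughly like this - a sketch only; the tablespace and datafile
names are invented, and the sqlplus invocation assumes a recent enough
oracle client:

    #!/bin/sh
    # put the tablespace into backup mode, copy its datafiles, end backup
    sqlplus -s "/ as sysdba" <<EOF
    alter tablespace users begin backup;
    EOF
    cp -p /u01/oradata/users01.dbf /backup/staging/
    sqlplus -s "/ as sysdba" <<EOF
    alter tablespace users end backup;
    EOF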

> As a result it's imperative that the
> actual copy of the files is made as quickly as possible.  To this end we
> have another large drive in the system that the backup is made to, then the
> database is switched back to the normal mode and the copied files are
> written to tape and then compressed on the drive.  This way I can keep a
> week or so of backups online so that if the day ever comes that someone does
> a Bad Thing to the data (as opposed to a system or hardware failure) we do
> not need to wait for the latest tapes to be retrieved from the offsite
> storage.  

Sounds eminently sensible. That way you can use any file-based backup system
to dump the hot-backup images to tape (ufsdump, tar, cpio if you must,
networker, netbackup, arcserve etc), and not have to worry quite as much
about oracle's sparse data files.
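
In other words, something like this (a sketch - the paths and tape device
are made up):

    #!/bin/sh
    # stage the hot-backup copies to disk first (fast), then spool to tape
    STAGE=/backup/staging
    cp -p /u01/oradata/*.dbf $STAGE/   # quick copy while db is in backup mode
    # ... take the database out of backup mode here ...
    tar cf /dev/rmt/0n $STAGE          # write the staged copies to tape
    gzip -9 $STAGE/*.dbf               # compress the on-disk copies afterwards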

> Oh, btw, on this system the actual file copy takes about 30
> minutes, the write to tape about 3 hours and the final compression about 6
> hours.... and I also do a test on the tape to make sure it's readable and
> that takes another 3 hours or so..... I often get a call in the afternoon
> from the person changing tapes "The backup has locked up and the tape hasn't
> ejected, can you reset the machine please? [3]".  Perhaps I should start the
> backup earlier 8^)

That raises the question - what are you doing for the verification, and how
fast is your tape drive (does it share the scsi bus with something else,
like disk)?
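
For what it's worth, a crude verify that exercises the whole tape is just to
read it all back and pull a table of contents - sketch below; the device
name is a guess and it assumes the gzip'd-tar format from earlier:

    #!/bin/sh
    # crude verification: rewind, read the whole tape back, toc the archive
    mt -f /dev/rmt/0n rewind
    dd if=/dev/rmt/0n ibs=64k | gunzip | tar tvf - > /dev/null \
        && echo "tape reads back ok"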
 

> [2] As an aside, does anyone have any clue as to how to determine the
> remaining capacity on a tape?  It's something that's bugged me for a while
> and I can't find an answer to it.

If you use something like netbackup, which uses a modified gnu tar to write
multiple blocks per "file" on the tape, then you can work it out. Offhand,
though, I can't recall ufsdump's system. tar, otoh, writes 512-byte blocks
afaik, so if you know the number of blocks used and the size of the tape
(assuming for dltIV media that you get 62-65Gb rather than 70Gb) then you can
work out roughly how much space is left - probably to within 10%. Legato
Networker and Veritas NetBackup tend to write until the drive sends back EOT
rather than doing any fancy maths. Networker has the interesting feature,
though, that it has hard-coded values for how much it should be able to fit
on 4mm/8mm/dlt/lto tapes, and you can actually reach that value (eg 65Gb for
dltIV media) and see "100% used" - but Networker will not actually say "full"
until a write attempt returns with EOT. Nice(ish)!
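
The back-of-envelope version of that maths, if you want it (a sketch -
assumes 512-byte tar blocks and a conservative 62Gb for dltIV media; feed it
the block count your backup reported):

    #!/bin/sh
    # rough remaining-capacity estimate for a tar-written dltIV tape
    blocks=$1                    # 512-byte tar blocks written so far
    used_kb=`expr $blocks / 2`   # 512-byte blocks -> Kb
    cap_kb=65011712              # 62Gb expressed in Kb (62 * 1024 * 1024)
    echo "roughly `expr $cap_kb - $used_kb` Kb left, give or take 10%"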

> [3] Erm, this is an E250 running Solaris, most of the boxes there, apart
> from the ones I look after, are running Novell or NT and I have given up
> explaining that it's not locked up, it just hasn't finished yet.  I normally
> just kill the verify and eject the tape, seems to keep them happy 8^)

As I mentioned this morning - training! Then again, some people just are not
interested in actually knowing what goes on in a computer that doesn't run
nt/win2k/xp or novell. Maybe you could give them a little menu option to
check the progress (elapsed time vs expected) of the verify. Then if they
don't check it before calling you, you could lart them ;)


James 
-- 
TSG Engineer (Kernel/Storage)           828 Pacific Highway
APAC Customer Care Centre               Gordon NSW 
Sun Microsystems Australia              2072

Failfast panic: those controlling voices in my head have 
stopped telling me what to do.....


