[H-GEN] Processing large text files in perl

Byron Ellacott bje at apnic.net
Mon Mar 17 18:43:52 EST 2003


[ Humbug *General* list - semi-serious discussions about Humbug and     ]
[ Unix-related topics. Posts from non-subscribed addresses will vanish. ]

On Fri, 2003-03-14 at 13:39, Michael Anthon wrote:
> My real question is how I should go about this if I were to rewrite it in
> perl.  My first thought was to have mysql/postgresql installed on the
> machine that will be running the process (to avoid network traffic) and use
> perl DBI but I don't know if that is the "best" way to do it.  Is there some
> other simple and fast DB system I could use instead?

If you don't need particularly complex querying, you could try out the
Perl Berkeley DB interface.  In particular, BerkeleyDB.pm allows you to
use:

use BerkeleyDB;

my $db = tie %data, 'BerkeleyDB::Hash';    # or
my $db = tie %data, 'BerkeleyDB::Btree';

You can then access %data like an ordinary hash, with reads and writes
going to disk.
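As a minimal sketch of what that looks like in practice (assuming
BerkeleyDB.pm is installed; the temp directory and "counts.db" filename
here are arbitrary examples, not anything from your setup):

```perl
use strict;
use warnings;
use BerkeleyDB;
use File::Temp qw(tempdir);

# Arbitrary example location for the database file.
my $dir = tempdir(CLEANUP => 1);

my %count;
my $db = tie %count, 'BerkeleyDB::Hash',
    -Filename => "$dir/counts.db",
    -Flags    => DB_CREATE
    or die "cannot open database: $BerkeleyDB::Error\n";

# Ordinary hash operations now read and write the on-disk file.
$count{$_}++ for qw(apple banana apple);
print "apple seen $count{apple} times\n";

untie %count;   # flush and close the database
```

Since the data lives on disk rather than in memory, this scales to files
far larger than RAM, which is the point for your use case.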

If you've got no need of a query language, this is probably a better fit
than DBI, though as others have said, DBI is easy to use.
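The practical difference between the two flavours above: a Btree keeps
its keys in sorted order, so iterating the tied hash walks them in order
(handy for range-style scans), while a Hash does not.  A quick sketch,
using an anonymous in-memory database (no -Filename) just to show the
ordering:

```perl
use strict;
use warnings;
use BerkeleyDB;

my %data;
tie %data, 'BerkeleyDB::Btree'
    or die "tie failed: $BerkeleyDB::Error\n";

%data = (pear => 3, apple => 1, mango => 2);

# Iteration over a tied Btree follows key order.
for my $k (keys %data) {
    print "$k=$data{$k}\n";
}
```

With BerkeleyDB::Hash the same loop would visit the keys in whatever
order the hashing scheme dictates.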

-- 
Byron Ellacott <bje at apnic.net>
APNIC

--
* This is list (humbug) general handled by majordomo at lists.humbug.org.au .
* Postings to this list are only accepted from subscribed addresses of
* lists 'general' or 'general-post'.  See http://www.humbug.org.au/


