Of course, that's clear.  It would be hard to find an algorithm with
O(1/n) complexity or the like.  Nevertheless, the big problem seems to
be large files.  I am reading (just for a special purpose)
comp.os.linux.answers.  The articles in this group are FAQs and thus
very large (~50-100 KByte).  Importing this group takes considerably
longer (by an estimated factor of 100) than other groups with the same
number of articles.  And no, it is not only due to limitations in HDD
speed...
  :
> If you want to take a look at a bit of the contents of the history.pag,
> here's a REXX script that I was playing with when I first started trying to
> figure it out....  It will dump the contents, somewhat crudely, to stdout...
  :
Interesting.  Does that mean history.pag contains only a hashed list
of the articles?
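For what it's worth, a similarly crude dump can be sketched with
Python's stdlib dbm module.  Caveat: the real history.pag belongs to a
dbz database (a stripped-down ndbm variant that keeps little more than
hashes and offsets into the history text file), so a generic dbm module
may not open it directly; the file name and entries below are made up
purely for illustration.

```python
import dbm

# Toy stand-in for a news history database: message-ID -> offset.
# (A real history.pag is dbz format; this just mimics the idea.)
with dbm.open("toy_history", "c") as db:
    db[b"<123@example.com>"] = b"0"
    db[b"<456@example.com>"] = b"812"

# Crude dump to stdout, in the spirit of the REXX script quoted above.
with dbm.open("toy_history", "r") as db:
    for key in db.keys():
        print(key.decode(), "->", db[key].decode())
```

The point is only that the .pag side is a key/value hash table; the
full article headers live elsewhere (in the history text file itself).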
Hardy
PS:  You will not get my previous email, because it bounced (host
     unknown or similar).  I did not notice that the domain was sh*t (I
     mailed to gro.oi@mt (---{Q[3]Qo.)).  What is the purpose of this
     fake address?  Is it meant to keep spam away from you?
-- Hardy Griech, Kurt-Schumacher-Str. 25/1, D-72762 Reutlingen