Robert Klemme
11/14/2003 1:19:00 PM
"Rasputin" <rasputin@idoru.mine.nu> schrieb im Newsbeitrag
news:20031114115944.GA18331@lb.tenfour...
>
> I was looking at prototyping a write once file service, a bit like
> Plan9s Venti system. I delete files a lot :)
> As usual, my grand designs are thwarted by the basics...
>
> I'm at the 'get the unit tests of the simplest possible implementation
> to pass' stage, and need something like an append-only file of fixed
> size records (data blocks). Have we got classes or methods to do
> record-based file IO? I don't mind reinventing a wheel otherwise.
A typical solution might involve binary-mode IO with Array#pack and
String#unpack. Then you'll have to handle the byte layout and the fixed
record size yourself.
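A minimal sketch of that pack/unpack approach (the field layout, file name and record contents here are made up for illustration):

```ruby
# Fixed-size record: 32-byte null-padded name + 32-bit unsigned age.
# "a32N" and the 36-byte record size are example choices.
RECORD_FORMAT = "a32N"
RECORD_SIZE   = 36

def write_record(io, name, age)
  io.write([name, age].pack(RECORD_FORMAT))
end

# Random access by record number, thanks to the fixed size.
def read_record(io, index)
  io.seek(index * RECORD_SIZE)
  data = io.read(RECORD_SIZE)
  return nil unless data && data.size == RECORD_SIZE
  name, age = data.unpack(RECORD_FORMAT)
  [name.delete("\0"), age]
end

File.open("records.bin", "wb") do |io|
  write_record(io, "alice", 30)
  write_record(io, "bob", 25)
end

File.open("records.bin", "rb") do |io|
  p read_record(io, 1)  # => ["bob", 25]
end
```

Because every record has the same size, seeking to record n is just a multiplication, which is the main win over the Marshal approach below.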
Or you can use Marshalling for this: define a class and marshal instances
one at a time into the file:
st = Struct.new("FooRecord", :name, :age)
records = (1..10).map { |i| st.new("hello", i) }

File.open("foo.bin", "wb") do |io|
  records.each do |rec|
    Marshal.dump(rec, io)
  end
end

File.open("foo.bin", "rb") do |io|
  until io.eof?
    obj = Marshal.load(io)
    puts obj
  end
end
Of course you can marshal a complete array, too. But then all instances
are loaded at once; with the approach above you can read one record at a
time and thus navigate through the file (not very efficiently, though).
> Next problem (I learned to never look more than 1 problem ahead or I
> never start anything). I need an index that gets dynamically updated as
> the record file is written to.
If the dataset won't grow too big, you could use a Hash and marshal that
to a file...
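For example (file name and index contents are just placeholders; the values could be byte offsets into the record file):

```ruby
# A simple index: key => file offset of the record, say.
index = { "block-1" => 0, "block-2" => 36 }

# Persist the index alongside the data file.
File.open("index.bin", "wb") { |io| Marshal.dump(index, io) }

# Reload it later and update it as new records are appended,
# then write the whole Hash back out.
loaded = File.open("index.bin", "rb") { |io| Marshal.load(io) }
loaded["block-3"] = 72
File.open("index.bin", "wb") { |io| Marshal.dump(loaded, io) }
```

The obvious drawback is that the whole Hash is rewritten on every update, which is why this only works while the dataset stays small.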
> The ideal solution would be a hash that was backed by a file somehow-
> then I realised I could implement the data blocks themselves
> the same way.
>
> I looked through pickAxe and RAA for these two and found a lot of SQL
> persistence APIs, but that feels like overkill....
>
> Does anyone know of a more lightweight solution?
Not perfectly what you are looking for but maybe some ideas to play
with...
robert