That is one option… the reason I need to be able to delete lines is that I am
running a rake task on a server where I cannot easily modify files, and I
can't predict when the rake task will time out. The task takes entries from
text-based dictionaries and adds them to my DB. The thing is, if I didn't
delete the lines I've already added, then every time I ran the task (it has to
be run multiple times due to timeouts) I would only re-add the same lines. The
deletion acts as a placeholder of sorts. I'll play around with your
suggestion and I'll let you know. If anyone else has an alternate method…
I'm all ears.
But I can't figure out the correct syntax for deleting a line in a file and
then saving that file.
Simplest thing is to get it into memory like you did with readlines
above and write out the altered contents to a new file and then move it.
(I prefer File.readlines, although it's the same method either way.)
Large files might require different treatment rather than slurping into
memory like that.
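For what it's worth, a minimal sketch of that slurp-and-rewrite approach (the file names here are made up):

```ruby
require "fileutils"

# Build a small sample file (stand-in for the real dictionary).
File.write("dict.txt", "first\nsecond\nthird\n")

# Slurp, drop the first line, write the rest to a temp file, move it back.
lines = File.readlines("dict.txt")
File.open("dict.txt.tmp", "w") { |f| f.write(lines.drop(1).join) }
FileUtils.mv("dict.txt.tmp", "dict.txt")

File.read("dict.txt")  # => "second\nthird\n"
```

The move at the end is close to atomic on most filesystems, which is a nice side benefit over editing in place.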
That being said, I’d be curious to hear how people use r+ mode.
I don't know if it's all that gross. I have a feeling this is the standard
way to do it for apps and editors, i.e. write out the entire altered content
after working with the file in memory using whatever scheme.
It's a different story if you're a database, I guess.
Here is one scheme some text editors use: Gap buffer - Wikipedia
although it doesn’t discuss file system/persistence issues.
Presumably when you hit the save button, the system writes
out to a new file (the gap buffer is not playing around with
the old file stream).
If you’re just replacing stuff character for character, then
it seems ok to use the file stream (in r+ mode) or if you’re
appending (or both); but deleting or inserting content seems
problematic - not sure it’s possible let alone standardized.
Anyone want to weigh in here?
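For the character-for-character case, an in-place overwrite with r+ does seem workable, as long as the replacement is exactly the same length; a small sketch (file name invented):

```ruby
File.write("sample.txt", "hello world\n")

# r+ opens for read/write without truncating; an in-place overwrite
# is safe as long as the replacement is exactly the same length.
File.open("sample.txt", "r+") do |f|
  f.seek(6)          # byte offset of "world"
  f.write("ruby!")   # same number of bytes, so nothing shifts
end

File.read("sample.txt")  # => "hello ruby!\n"
```

Deleting or inserting is where it falls apart, since every subsequent byte would have to move.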
Once again, I'm interested in different approaches, or something not quite so
processor-intensive.
If the file is really large, you can perhaps just move through the
stream till you get to the point where you want to start
then commence writing from the old stream to the new file stream.
There may be ways to optimise it.
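A rough sketch of that streaming copy, assuming we skip one leading line (the buffer size is arbitrary):

```ruby
File.write("big.txt", "skip me\nkeep 1\nkeep 2\n")

File.open("big.txt") do |old|
  old.readline                      # advance past the part to drop
  File.open("big.txt.new", "w") do |new_f|
    # Copy the remainder in fixed-size chunks, never holding it all.
    while (buf = old.read(32_000))
      new_f.write(buf)
    end
  end
end

File.read("big.txt.new")  # => "keep 1\nkeep 2\n"
```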
Sparse files and fixed line lengths ?
Maybe I've said enough wrong things to provoke a reaction from someone else.
And… it’s a lot easier to delete the last line of a file than the
first.
I don’t think this is actually true, can you explain further?
-Erik
You can just truncate the file size. You don’t have any subsequent
lines (bytes) to move into a new position within the file.
How do you know which line is the last line?
Unless there’s something I don’t know, that involves reading the whole
file, or a combination of seek/read from the end until you find the last
newline, which is essentially what tail +2 does, but starts at the
beginning of the file.
“Moving” data in a file is the worst possible scenario for I/O at all.
You can do both of these operations in a single pass read of the file
without shoving the whole thing into memory at once. It just involves
writing to one file and reading from another, is all.
-Erik
Well, if you want/need the last line(s) of a file (presumably text or
how would you define a “line”), you can take a look at the File::Tail
gem.
gem install file-tail
I had some Perl code (lifted from some forum or article) that would
cut initial lines out of a log file using sysread/syswrite with a
truncate to reset the end-of-file. I don’t recall if it used a single
file descriptor or two separate ones, but the idea is the same – move
bytes “backward” across the gap that you want to eliminate. I agree
with your “worst possible scenario for I/O” assessment.
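Here's roughly what that looks like in Ruby, assuming we cut the first line in place (the file name and chunk size are made up):

```ruby
File.write("log.txt", "old line\nkeep a\nkeep b\n")

File.open("log.txt", "r+") do |f|
  gap = f.readline.bytesize   # bytes to cut from the front
  read_pos  = gap
  write_pos = 0
  # Shuffle the tail backward over the gap, one chunk at a time.
  loop do
    f.seek(read_pos)
    buf = f.read(4096) or break
    f.seek(write_pos)
    f.write(buf)
    read_pos  += buf.bytesize
    write_pos += buf.bytesize
  end
  f.truncate(write_pos)       # chop off the now-duplicated tail
end

File.read("log.txt")  # => "keep a\nkeep b\n"
```

Single descriptor, two offsets; it works, but as noted, every byte after the gap gets rewritten.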
You are talking about tail -f. This is different.
(And if you ever need to find that again, perldoc -q tail).
My Ruby isn’t up to coding it, but in principle I’d seek to the end of
the file, then backtrack until I found the appropriate newline. Then I’d
truncate the file.
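In Ruby that might look something like this (a sketch, assuming the file ends with a newline):

```ruby
File.write("data.txt", "line 1\nline 2\nlast line\n")

# Scan backward from the end until we hit the newline that terminates
# the second-to-last line, then truncate just after it.
File.open("data.txt", "r+") do |f|
  pos = f.size - 2                 # step over the file's final newline
  while pos > 0
    f.seek(pos)
    break if f.read(1) == "\n"
    pos -= 1
  end
  f.truncate(pos.zero? ? 0 : pos + 1)
end

File.read("data.txt")  # => "line 1\nline 2\n"
```

No temp file and no copy of the rest of the data, which is exactly why the last line is the cheap one to delete.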
I suggest exploring a distributed worker system like Rinda,
Backgroundrb, AP4R, or Sparrow. You can prepare a master list of
dictionary words, and then worker processes can take one at a time and
add them to your database. Having a timeout won’t slow things down,
nor will it cause you to have to re-read your wordlist.
I've set it up with a rake task and a cron job that re-runs every 5 hrs (my
timeout window). This is the code that I've already uploaded and is currently
running…
namespace :chinese do
  desc "adds all chinese files to database"
  task :create => :environment do
    active_dictionary = File.readlines("public/languages/chinese/practice.txt")
    count = 0
    for @element in active_dictionary
      count += 1
      process_chinese
      open("public/languages/chinese/practice.txt", "w") do |file|
        file.puts active_dictionary[count, active_dictionary.size]
      end
    end
  end
end
where process_chinese contains all my proprietary code. I played around with
only writing to the file after every ten processed entries, but it only cut
my processing time down by a trivial amount, so I just let it rewrite the file
after every line. As we've seen here, there are quite a few ways this can be
accomplished; this ended up working… and didn't kill my processor (the
writes to the DB are far more expensive than opening and writing to this
file).
Richard S. wrote:
If the file is exceptionally large, you can save a lot of memory (and
processing time, likely), by doing something like this:
require "fileutils"

File.open("my_file") do |f|
  f.readline
  File.open("my_file.tmp", "w") do |f2|
    f2 << f.read
  end
end
FileUtils.mv("my_file.tmp", "my_file")
Just on the "f2 << f.read" part: isn't this still reading the rest of
the file into Ruby? I was thinking more of reading stuff into a fixed buffer
and then writing it, i.e.
while buf = f.read(32000)  # bytes
  f2.write buf             # or f2 << buf
end
which will result in a bazillion more calls to IO#read and IO#write on a
large file but doesn’t read the whole thing into memory. I’m not
recommending this or anything - just wanted to clarify.
The point here is that almost all the work is done on the file descriptors
instead of in memory. I don't know if Ruby has a sendfile()
implementation, but that would be ideal, as it'd instruct the OS to do the
copy.
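Ruby does ship something close: IO.copy_stream copies between two descriptors inside the interpreter (and on some platforms it can hand the copy to the OS, e.g. via sendfile or similar mechanisms). A sketch of the tail-drop using it (file names invented):

```ruby
require "fileutils"

File.write("words.txt", "done\npending 1\npending 2\n")

File.open("words.txt") do |src|
  src.readline                  # skip the already-processed line
  File.open("words.txt.tmp", "w") do |dst|
    # Copies the remainder from src's current position to dst,
    # without pulling the data through Ruby strings where it can.
    IO.copy_stream(src, dst)
  end
end
FileUtils.mv("words.txt.tmp", "words.txt")

File.read("words.txt")  # => "pending 1\npending 2\n"
```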