Got a problem with big regexes:
I have a regex of about 70000+ words concatenated with '|' that I'd like to
match as a regex: /bla|blub|foo|bar|…(70000)/
But unfortunately Ruby gives me a 'regular expression too big' error when
I try to build such a thing.
I had a look at the regex.c code and saw the limit of 1 << 16 bytes for
regexes. Is there a way around this (without going down to 2000 words) ?
Thanks for any hint
You could optimize the regex a little for size, e.g. by factoring out
common prefixes:
(b(l(a|ub)|ar)|foo)...
Thought of that.
Of course, that will only help if the | alternatives have a reasonable
amount of redundancy. Alternatively, you could just break the whole
thing into multiple expressions. Instead of
if /first_part|second_part/ =~ text
You could try:
if /first_part/ =~ text or /second_part/ =~ text
Yes, that was my next thought, but where to split? Just count the bytes
and split near 1 << 16?
Why is there a limitation at all? I implemented the same thing in Perl
and it doesn't complain…
Is Perl's regexp engine that much better?
On Sun, 12 Nov 2006 15:01:56 -0000, Peter S. [email protected] wrote:
Why is there a limitation at all? I implemented the same thing in Perl
and it doesn't complain…
Is Perl's regexp engine that much better?
Irrespective of whether a regex is the best solution for your needs, it seems
Oniguruma will improve the situation somewhat with respect to large
regular expressions.
This doesn't really help with the actual question of getting past
regex size limits, but are you sure that regexen are the correct solution
to this problem? Unless I’m mistaken, the above match is going to be
horribly, painfully slow; the issue you’re running into is probably an
indication that you might want to look elsewhere.
I don’t want to make any silly claims without doing any benchmarking
on your data, but I would imagine that even doing an efficient search
on a sorted array of your words would give you better performance than
the regex search. You can move up from there into hashes or other
data structures for this sort of thing.
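To make that concrete, here is a small sketch of the sorted-array idea (the word list and names are hypothetical): binary search is O(log n) per lookup, versus backtracking through a 70000-branch alternation.

```ruby
# Sorted word list; Array#bsearch requires sorted input.
WORDS = %w[bar bla blub foo].sort.freeze

def known_word?(word)
  # Array#bsearch in find-minimum mode: the block returns 0 on a match,
  # negative/positive to steer the search left/right.
  !WORDS.bsearch { |w| word <=> w }.nil?
end

known_word?("blub")   # => true
known_word?("qux")    # => false
```

A hash (or Set) gives O(1) lookups instead, at the cost of more memory; either way the comparison with a linear regex alternation is not close for exact-word tests.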
Is there a way around this (without going down to 2000 words)?
Thanks for any hint
You could optimize the regex a little for size, e.g. by factoring out
common prefixes:
(b(l(a|ub)|ar)|foo)...
Thought of that.
Good for you.
Yes, that was my next thought, but where to split? Just count the bytes
and split near 1 << 16?
Probably better not to construct the mega-regex in the first place. For
the record, finding yourself on the edges of the language’s capacity
like this might be a sign that refactoring is in order.
Even if you're going to stick with your current technique but work
around the size limitation, it's probably better not to build a
mega-regex you'll have to split up afterwards. As you put the pattern
together, only add alternations as long as the cumulative size stays
below 0x10000 (or a well-commented named constant with that value).
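A sketch of that approach (the cap constant and the safety margin are assumptions; Regexp.union escapes its string arguments, so plain words are safe as-is):

```ruby
# Build several moderate regexes instead of one mega-regex, keeping each
# pattern comfortably under the engine's size limit.
MAX_REGEX_BYTES = 0x10000   # the 1 << 16 limit seen in regex.c

def build_chunked_regexes(words)
  regexes, chunk, bytes = [], [], 0
  words.each do |w|
    # Flush the current chunk before it would push the pattern over the
    # cap; the 1024-byte margin is a guess to cover compilation overhead.
    if !chunk.empty? && bytes + w.bytesize + 1 > MAX_REGEX_BYTES - 1024
      regexes << Regexp.union(chunk)   # union escapes the words for us
      chunk, bytes = [], 0
    end
    chunk << w
    bytes += w.bytesize + 1
  end
  regexes << Regexp.union(chunk) unless chunk.empty?
  regexes
end

def match_any?(regexes, text)
  regexes.any? { |re| re =~ text }   # first hit short-circuits
end
```

With 70000 words this yields a handful of regexes tried in turn, so the worst case is a few scans of the text rather than one enormous compile that the engine rejects.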
Why is there a limitation at all? I implemented the same thing in Perl
and it doesn't complain…
Is Perl's regexp engine that much better?
As Friedl notes, Perl is darned close to being the ideal regex language.
Ruby regexes aren’t necessarily meant to be the one hammer that can
drive every nail. If you want to be able to view every problem through
a regex lens, you’ll probably have to dig a little deeper than
categorizing one language’s engine as simply “better” than another.
Big, static sizes like 2**16 are often used to avoid dynamic allocation,
or otherwise improve runtime efficiency.
Also for the record: I’m a big fan of regexes. Though a lot of people
complain about complexity or efficiency issues, I’ve never had a problem
with either. I would be interested to see a comparison of the relative
merits and limitations of various engines, e.g. regex lengths,
benchmarks, big-O complexity, and ability to handle null bytes.
One more thing: to implement substring search using a trie, you should
generate appropriate back links when adding words to the trie, so that
=~ can be implemented with Knuth-Morris-Pratt-style matching.
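That trie-plus-back-links construction is essentially the Aho-Corasick automaton, the multi-pattern generalization of KMP. A minimal sketch, with hypothetical class names:

```ruby
# Aho-Corasick-style matcher: a trie whose nodes carry failure ("back")
# links, so one left-to-right scan finds every dictionary word that
# occurs as a substring of the text.
class WordMatcher
  class Node
    attr_accessor :children, :fail, :output
    def initialize
      @children, @fail, @output = {}, nil, []
    end
  end

  def initialize(words)
    @root = Node.new
    words.each do |w|
      node = w.each_char.inject(@root) { |n, c| n.children[c] ||= Node.new }
      node.output << w
    end
    build_failure_links
  end

  def match(text)
    found = []
    node = @root
    text.each_char do |c|
      # Follow failure links until some node has a transition on c.
      node = node.fail while node != @root && !node.children[c]
      node = node.children[c] || @root
      found.concat(node.output)
    end
    found.uniq
  end

  private

  # Breadth-first pass: each node's failure link points to the longest
  # proper suffix of its path that is also a path in the trie.
  def build_failure_links
    queue = []
    @root.children.each_value { |n| n.fail = @root; queue << n }
    until queue.empty?
      node = queue.shift
      node.children.each do |c, child|
        f = node.fail
        f = f.fail while f != @root && !f.children[c]
        child.fail = f.children[c] || @root
        child.output.concat(child.fail.output)   # inherit shorter matches
        queue << child
      end
    end
  end
end

m = WordMatcher.new(%w[bla blub foo bar])
m.match("a blubber bar")   # => ["blub", "bar"]
```

Construction is linear in the total length of the words, and matching is linear in the text length regardless of how many thousands of words are loaded.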
Got a problem with big regexes:
I have a regex of about 70000+ words concatenated with '|' that I'd like to
match as a regex: /bla|blub|foo|bar|…(70000)/
It appears you are trying to match a word against a list of words, yes?
From an efficiency standpoint, if the expression is really as shown (no
real regex syntax), why not create a hash out of the data and see if the
desired word is present that way? That would likely be much faster than
running the (non-regex) regex against each word in the input data.
When you post a question like this, it is always a good idea to reveal
the purpose as well as the method.
#!/usr/bin/ruby -w

data = File.read("/usr/share/dict/words")
words = data.split(%r{\s+}m)

hash = {}
words.each do |word|
  hash[word] = true
end

while true
  print "Enter a word: "
  word = STDIN.readline.chomp
  if hash[word]
    puts "Valid word."
  else
    puts "Not present."
  end
end
Now you have a hash that can be used for testing a list of words.
When you post a question like this, it is always a good idea to reveal
the purpose as well as the method.
First of all: Thanks for the replies, I think I have enough input to
chew on.
And to reveal the purpose:
I'd like to match a LOT of words/strings (words with spaces) and
sometimes regexes against a lot of small- and medium-sized (< 10000
byte) strings.
The comparison with Perl was just a comparison of the regexp engines,
not of the languages themselves. It's just that until now I never had to
think much about the underlying hardware… because Ruby did that for me.
So I was surprised by the "too big" exception.
I'd like to match a LOT of words/strings (words with spaces) and
sometimes regexes against a lot of small- and medium-sized (< 10000
byte) strings.
If you are going to compare strings to strings as well as strings to
regexes, maybe a two-tiered scheme would be better. First tier, simple
textual comparison using a hash of strings. Second tier, regexes.
This removes the ambiguity that a particular data string might pass a
string comparison but fail any of the provided regexes, or the opposite.
It will also be much faster than a gigantic list of strings and regexes,
all presented to the regex engine as though they were all regular
expressions, even though some are simple string comparisons.
I think this (i.e., the speed improvement) would have been true in Perl also.
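A sketch of the two-tiered idea, with hypothetical names, assuming the input has already been split into candidate tokens:

```ruby
# Tier 1: exact strings live in a hash for O(1) lookup.
EXACT = { "foo" => true, "bar baz" => true }
# Tier 2: only genuine patterns go to the regex engine.
PATTERNS = [/word\d+/, /colou?r/]

def two_tier_match?(token)
  return true if EXACT[token]          # cheap hash hit first
  PATTERNS.any? { |re| re =~ token }   # fall back to the regex engine
end

two_tier_match?("foo")      # => true (tier 1)
two_tier_match?("word42")   # => true (tier 2)
two_tier_match?("qux")      # => false
```

The design point is simply that the expensive engine only ever sees real patterns; plain string tests never pay regex cost.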
On Mon, 13 Nov 2006 00:07:52 -0800, Paul L. wrote:
comparison but fail any of the provided regexes, or the opposite. It
will also be much faster than a gigantic list of strings and regexes,
all presented to the regex engine as though they were all regular
expressions, even though some are simple string comparisons.
I think this (i.e., the speed improvement) would have been true in Perl also.
His regex isn't anchored to the beginning and end of the string. This
makes hashes useless for the kind of comparison he seems to want to
perform.
I think this (i.e., the speed improvement) would have been true in Perl also.
His regex isn’t anchored to the beginning and end of the string.
Yes, I see that now. From the appearance of the data, it looks as
though he is matching whole words, but that's only an assumption on my
part.
This makes hashes useless for the kind of comparison he seems to want to
perform.
Yes, unless he is matching whole words, he is stuck with regexes. There
is very likely a refactoring for this problem, and it would have to
start with a clear statement of the problem to be solved.
Yes, unless he is matching whole words, he is stuck with regexes. There
is very likely a refactoring for this problem, and it would have to
start with a clear statement of the problem to be solved.
Sorry, my fault.
The problem is to match a whole bunch (>70000) of words (later regexps)
against a string.
The actual implementation is to concatenate the words with '|' to create
a big regexp: word1|word2|word3|word4 …
This went well until I tested with a few hundred words. Now I have the
'big regexp' problem. The solution has to work with regexps as words as
well, like: word1(the|a|an)word11|regexp2|…
So the currently working solutions are:
-use Ruby 1.9 (the performance is far below Perl's, but I have to
benchmark this)
-loop through the words/regexps and match each of them against the
string (don't ask about performance here)
-perhaps I'll really pipe the problem into a
perl-implemented matching server (not that bad an idea)
word1|word2|word3|word4 …
Is it exact word matching? If so, you can split the input string on
whitespace so you get an array of words. Then you can intersect the two
word arrays:
irb(main):001:0> a = %w(a b c e f g)
=> ["a", "b", "c", "e", "f", "g"]
irb(main):002:0> b = %w(b d f)
=> ["b", "d", "f"]
irb(main):003:0> a & b
=> ["b", "f"]