I’ve found the following text about differences between 1.8 and 1.9:
“It is more rigorous than 1.8 when it comes to detecting invalid code.
For example, 1.8 accepts /[^\x00-\xa0]/u, while 1.9 complains of
invalid multibyte escape”
Ok, so how should I write the above Regexp to work on 1.9.1?
Regexp.new '[\xC0-\xDF]', nil, 'n'
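For what it's worth, here is a small sketch of two 1.9-safe alternatives: the same 'n' (no encoding, i.e. binary) option written as a literal modifier, and a rewrite of the original character class using \u escapes. The byte values come from the answer above; the sample strings are my own.

```ruby
# The 'n' option is also available as a literal modifier; the
# resulting regexp operates on raw bytes (ASCII-8BIT).
binary_re = /[\xC0-\xDF]/n
p binary_re.encoding      # ASCII-8BIT

sample = "\xC5".dup.force_encoding("ASCII-8BIT")
p sample =~ binary_re     # 0

# Alternatively, the original class can be rewritten with \u escapes,
# so 1.9 reads codepoints instead of raw bytes:
utf8_re = /[^\u0000-\u00a0]/
p "é" =~ utf8_re          # 0
```

Which one you want depends on whether the data is really binary (use /n) or genuine UTF-8 text (use the \u form).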
Great! Thanks a lot.
However, I don't understand these parameters for Regexp.new.
I read: class Regexp - RDoc Documentation
The third parameter you use ('n') doesn't appear in the doc?
All the new stuff to do with String and encodings in ruby 1.9 is
undocumented. (At least, it's not documented within Ruby itself. You
may be able to purchase a book which has some reverse-engineered
documentation.)
If you care about stability or documentation, my own advice is to
stick with 1.8 - preferably 1.8.6.
All the new stuff to do with String and encodings in ruby 1.9 is
undocumented.
I’ve got the majority of the new functionality covered in my m17n
series now:
Regexp.new of Ruby 1.9 is obviously documented in: class Regexp - RDoc Documentation
but the number of parameters doesn't match reality. Does that make
sense? Wasn't that documentation created with RDoc?
The rdoc is only as good as the comments in the source code.
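To illustrate that point: RDoc emits only what the comments above each definition give it, so a method with no comment produces an empty doc page. Here is a made-up method (shout is not a real Ruby method) with the comment format RDoc parses:

```ruby
# RDoc picks up the comment block immediately above a definition, so
# the generated docs are only as good as comments like this one.
#
# call-seq:
#   shout(text) -> String
#
# Returns +text+ upcased with a trailing "!".
def shout(text)
  "#{text.upcase}!"
end

puts shout("hi")   # HI!
```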
I’m working on documenting some of this stuff when I have time (always
the magic words, eh? :-/). I ran dcov on the whole of Ruby core last
week (results: http://jeremymcanally.com/coverage.html ; it’s a little
deceiving since methods like to_yaml I think are actually included
from elsewhere. I’ll have to look…), and I’m currently setting up
some tasks for myself to knock things out.
I might set up a Lighthouse for it or something if other people want
to get involved.
What about string literals which include escape sequences like \u?
This seems to override the source encoding rule.
I plan to cover this in the next article.
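In the meantime, a quick sketch of the behavior being asked about (the literal is my own example): a \u escape always produces a UTF-8 character, which is what appears to override the source encoding rule.

```ruby
# A \u escape in a string literal yields a UTF-8 string, regardless of
# any declared (ASCII-compatible) source encoding.
s = "caf\u00e9"
p s.encoding   # UTF-8
p s            # "café"
```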
What encoding is chosen for regexp literals? (Seems to be different
rules to string literals). What about string literals which include
#{interpolation}? What about regexp literals which include
#{interpolation}?
I’m going to cover this too.
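As a preview, here is a sketch of the rules in question, using my own sample literals in a UTF-8 source file:

```ruby
# String literals carry the source encoding, but a regexp literal
# containing only ASCII is marked US-ASCII; one containing multibyte
# characters gets UTF-8.
p "abc".encoding          # UTF-8
p /abc/.encoding          # US-ASCII
p /é/.encoding            # UTF-8

# An interpolated regexp takes its encoding from the interpolated
# result.
word = "é"
interp_re = /#{word}/
p interp_re.encoding      # UTF-8
```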
I think it will be worth explaining what you need to do to handle
binary data (using “rb” and “wb”, the ASCII-8BIT encoding, how to set
external encoding for STDIN, the fact that read() and gets() return
different encodings for the same data…)
Planned for the next article.
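A minimal sketch of the binary-data point, using the tempfile standard library and my own sample bytes: the same file contents come back with different encodings depending on how the handle was opened.

```ruby
require "tempfile"

# "wb"/"rb" suppress newline conversion and tag the data as ASCII-8BIT
# (binary); a text-mode handle tags what it reads with its external
# encoding instead.
file = Tempfile.new("m17n-demo")
file.binmode
file.write("caf\xC3\xA9\n")           # raw UTF-8 bytes
file.close

binary = File.open(file.path, "rb") { |f| f.read }
p binary.encoding                     # ASCII-8BIT

text = File.open(file.path, "r:UTF-8") { |f| f.gets }
p text.encoding                       # UTF-8

file.unlink
```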
What actually happens if you use string operations on two strings with
different encodings? e.g. str1 == str2, str1 + str2, str1 << str2?
What about indexing a hash with two strings which are identical byte
sequences but different encodings?
I feel I gave a much better strategy that prevents you from worrying
about such things. However, that article did link to a detailed
explanation.
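Still, the behavior being asked about can be sketched quickly (the byte sequence here is my own example):

```ruby
# Identical byte sequences, different encodings: with non-ASCII bytes
# the strings compare unequal, occupy separate hash slots, and refuse
# to concatenate.
a = "caf\xC3\xA9".dup.force_encoding("UTF-8")
b = "caf\xC3\xA9".dup.force_encoding("ISO-8859-1")

p a == b                  # false -- same bytes, different encodings
h = { a => 1, b => 2 }
p h.size                  # 2 -- the keys are not eql?

begin
  a + b
rescue Encoding::CompatibilityError => e
  puts e.class            # Encoding::CompatibilityError
end
```

Note that strings containing only ASCII bytes are treated as compatible, so the surprises only show up once non-ASCII data is involved.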
I’ve added a new post to my m17n series covering all of the above and
more:
I expect to have the minor side topics I’m still missing covered in
the next few weeks.
This is a good start, but I think it just scratches the surface.
Questions which immediately spring to mind:
What is the nature of the “compatible” relationship? Does A compatible
with B imply B compatible with A? It’s not commutative:
irb(main):002:0> a = "abc".force_encoding("UTF-8")
=> "abc"
irb(main):003:0> b = "def".force_encoding("ISO-8859-1")
=> "def"
irb(main):004:0> Encoding.compatible?(a, b)
=> #<Encoding:UTF-8>
irb(main):005:0> Encoding.compatible?(b, a)
=> #<Encoding:ISO-8859-1>
Also, it’s not encodings which are compatible, but actual strings. Two
strings may or may not be compatible, dependent not just on their
encoding, but on their actual content at that instant.
irb(main):006:0> a = "abc\xff".force_encoding("UTF-8")
=> "abc\xFF"
irb(main):007:0> b = "def\xff".force_encoding("ISO-8859-1")
=> "def�"
irb(main):008:0> Encoding.compatible?(a, b)
=> nil
What about string literals which include escape sequences like \u?
This seems to override the source encoding rule.
What encoding is chosen for regexp literals? (Seems to be different
rules to string literals). What about string literals which include
#{interpolation}? What about regexp literals which include
#{interpolation}?
What source encoding and external encoding is used in irb?
I think it will be worth explaining what you need to do to handle
binary data (using “rb” and “wb”, the ASCII-8BIT encoding, how to set
external encoding for STDIN, the fact that read() and gets() return
different encodings for the same data…)
What actually happens if you use string operations on two strings with
different encodings? e.g. str1 == str2, str1 + str2, str1 << str2? What
about indexing a hash with two strings which are identical byte
sequences but different encodings?