So, I have a problem. I’m using ncurses (or possibly not; it might just be STDIN.read(1) or something, we’ll see) to grab byte-level input from the terminal. The purpose is to catch and handle control characters in a text-mode application, such as “meta-3” or “control-c.”
Currently, I have a really ugly method that manually parses UTF-8 and ASCII directly in my Ruby source; however, this is extremely slow, and seems quite a bit like overkill. After all, with 1.9’s wonderfully robust Encoding support, it seems silly to duplicate all the byte-parsing work that must already be going on somewhere inside Ruby.
Here’s my current method (forgive the horrendous code, please! I fully intended to get rid of it right from the start, so…):
The goal is to devise some method by which I can:
- Determine whether or not an Array of so-far-received bytes is, yet, a valid String of a given Encoding (I can get the intended input Encoding by way of a simple Encoding.find(:locale), so we’re always in-the-know as to which Encoding the incoming bytes are intended to be)
- Once we know the Array instance containing the relevant bytes pertains to a valid String, convert that into a String and further store/cache/process it in some way.
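One way to do both steps at once, assuming Ruby 1.9+ (a sketch, not my current method): pack the byte Array into a binary String, tag it with the target Encoding via force_encoding, and let String#valid_encoding? do the validation. The method name bytes_to_string is just illustrative.

```ruby
# Sketch: turn an Array of received bytes into a String of the given
# Encoding, or nil if the bytes don't (yet) form a valid String.
def bytes_to_string(bytes, encoding = Encoding.find("locale"))
  # Pack the byte values into a raw binary String, then re-tag it with
  # the intended encoding -- force_encoding changes the label, not the bytes.
  candidate = bytes.pack("C*").force_encoding(encoding)
  candidate.valid_encoding? ? candidate : nil
end
```

Calling this after each received byte means a partial multibyte sequence simply returns nil until the final byte arrives.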
Yes, this means that the String will almost always be one character long; I am uninterested in parsing lengths of characters out of the input stream, and I can deal with that later. At the moment, I very simply want to ensure that I can retrieve, in real time, the latest character entered at the terminal, as a String, in any Encoding.
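Putting that together, a hypothetical real-time read loop might look like the following (each_char_realtime is an invented name; any IO that responds to getbyte works, so it can be exercised with StringIO instead of the terminal):

```ruby
require "stringio"

# Sketch of a byte-at-a-time read loop: accumulate raw bytes and yield
# each complete character as soon as the buffer validates in the
# target encoding.
def each_char_realtime(io, encoding = Encoding.find("locale"))
  buffer = "".dup.force_encoding(Encoding::BINARY)
  while (byte = io.getbyte)
    buffer << byte
    candidate = buffer.dup.force_encoding(encoding)
    if candidate.valid_encoding?
      yield candidate                # one complete character (usually)
      buffer = "".dup.force_encoding(Encoding::BINARY)
    end
  end
end
```

In the real application io would be STDIN (in raw mode), but the logic is the same.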
Any help would be much appreciated; I’ve been banging my head against this on-and-off for weeks! (-: