Object-Oriented thinking

Hello everyone,

What I find fascinating when hacking in Ruby is that we use one of the
purest implementations of OOP, and at times we find ourselves tweaking C
extensions. Thus, the boundary between the two ways of thinking
(procedural vs OOP) seems very thin, yet it’s still fundamentally
different.

My question is, what kind of mental model do you use when you program in
Ruby? How do you think about objects? Do you see them as elements
carrying with them their own methods, bundled with their data?

How about the flow of the program: Whenever there’s a method call, do
you picture the method to be inside the receiver - just like it would be
in a real-life object - or, since you know that in the underlying
implementation the method is stored in the class, do you just think
about a procedure call with a self variable being the receiver?

Do you think using the OOP abstraction without knowing the internals can
be harmful? My case for that (even if I tend not to believe so) would be
that someone might be tempted to think that during an object
instantiation, all the instance variables AND methods get duplicated for
this particular instance, which isn’t the case - yet that’s what the
abstraction pushes us to believe.

That’s a lot of questions!
Looking forward to hearing what you think.

Michael

On Fri, May 20, 2011 at 11:58 AM, Michael S. [email protected] wrote:

I really try not to see objects as a collection of functions wrapped
around data. That is how I was taught objects, and it’s a real shame.
IMO objects should be designed around behavior and not data. I try to
think of objects as little people with their own personalities who want
to do one thing and do it really well. I get to be their god, tweak
their purpose, and put them into the system to do their thing.

How about the flow of the program: Whenever there’s a method call, do
you picture the method to be inside the receiver - just like it would be
in a real-life object - or, since you know that in the underlying
implementation the method is stored in the class, do you just think
about a procedure call with a self variable being the receiver?

In Ruby you pass a message to an object and get a response. It doesn’t
matter if it has its own method, uses an ancestor’s method, or responds
using method_missing. It’s not about procedures; it’s about responding
to messages.
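
A minimal sketch of those three possibilities (class names are
hypothetical):

class Animal
  def speak
    "generic noise"
  end
end

class Dog < Animal
end                      # no speak of its own

class Parrot
  def method_missing(name, *args)
    "squawk! (I was sent :#{name})"
  end
end

Animal.new.speak   # its own method answers
Dog.new.speak      # an ancestor's method answers
Parrot.new.speak   # method_missing answers

In all three cases the caller just sends :speak and gets a response.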

Do you think using the OOP abstraction without knowing the internals can
be harmful? My case for that (even if I tend not to believe so) would be
that someone might be tempted to think that during an object
instantiation, all the instance variables AND methods get duplicated for
this particular instance, which isn’t the case - yet that’s what the
abstraction pushes us to believe.

Not exactly sure what you are asking for here. You should understand the
object model, but even if you don’t, I can’t see how it harms your
ability to use well-designed objects. Design, yes. Use and consume,
probably not. But perhaps I’m not understanding what you are asking.

The book that spoke to me the most about all of this is Object
Thinking[1] by David West. In it he explains how behavior is the key to
decomposing your domain into objects. He also warns against the
command-and-control approach that is so prevalent in the popular
understanding of objects. And there is also an interesting discussion
about objects and immutability in a recent ADDCast[2]. They discuss the
notion of East-oriented programming, where you move away from assignment
and more towards methods/messages. Good luck!

  1. Object Thinking [Book]
  2. http://addcasts.com/2011/04/21/episode-4-special-guest-james-ladd-talks-to-us-about-running-smalltalk-on-the-jvm-immutability-and-how-to-write-good-oo-code/

I truly feel there is an art to computer programming in general,
regardless of paradigm. If you think about it, the fundamental data
types in C are actually abstractions of the memory map of the computer
you are compiling the code for. When you call sizeof() inside a malloc()
call, are you really worried that it might return the wrong number of
bytes? Structures and ADTs in C also report the correct size, and so the
abstraction begins.

C is a great language to build languages, drivers, and shell utilities.
If you were a Linux system admin, how many times would you type
“make -j5 && make modules_install” before you decided it would be best
to create a shell script to automate it and save you from early-onset
carpal tunnel? Maybe you might get tricky and make the script portable
across any number of machines using the tools available and the UNIX
programming paradigm, like so:

make -j$(grep "processor" /proc/cpuinfo | wc -l | awk '{print $1+1}') \
  && make modules_install;
cp arch/$(uname -m)/boot/bzImage /boot

The Unix pipe is a beautiful and instructive piece of the procedural
paradigm, without any worry about garbage collection or low-level
constructs like calls to malloc() and free().

Not that writing my own wc program and add program (I probably could
have used bc/dc instead of awk, so this is a lazy example) wouldn’t be
simple enough; the shell provides enough modularity already. If I’m
worried about memory usage and optimization I can parse ps -u and
refactor, or I can just do it the quick and dirty way and get on with my
life.

I do not think that learning new paradigms is harmful to your previous
knowledge of programming. I do believe understanding the gestalt of your
tools is important, but only after or as you begin to learn how to use
them. This can be seen in situations where Rails programmers are not
always Ruby programmers, or vice versa.

As for the Ruby object model: I believe it’s an excellent tool for
someone who hasn’t hit the apex of the object-oriented paradigm. You can
ask for a list of each object’s ancestor hierarchy, and you can
visualize where polymorphism comes into play with method overriding. The
design-pattern concept of iterators is probably the first and most
obvious example.
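
For instance, you can ask the interpreter directly (output from MRI 1.9;
the exact chain varies with what’s loaded):

Array.ancestors
# => [Array, Enumerable, Object, Kernel, BasicObject]
# Method lookup walks this chain left to right, which is
# exactly where the polymorphism becomes visible.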

Most programmers first getting a grasp on OOPS make a single class which
holds the kitchen sink of methods. As they become more comfortable with
inheritance and polymorphism, they begin to break that class up into
several smaller classes, with the intention of compositing those classes
to aid the interface abstraction, as sketched below. Variations can then
inherit from the chain and override where their behavior deviates from
the structure down to the base class.
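
A minimal sketch of where that progression ends up (class names are
hypothetical):

class Engine
  def start
    "vroom"
  end
end

class Car
  def initialize
    @engine = Engine.new   # composition: a small, focused collaborator
  end

  def start
    @engine.start          # the Car's interface delegates to it
  end
end

Car.new.start   # => "vroom"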

I read somewhere a suggestion to give your own classes in Ruby a to_s
method. Though I imagine its usefulness depends on what you’re
abstracting, it really is not that far-fetched. If a common interface
has been defined, the high-level programmer (or those using your API)
can call to_s as seamlessly as they would on Ruby’s standard data
objects. Also, you no longer have to think of a name for the
string-conversion method, since Ruby’s naming convention is already in
place.
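
A minimal sketch of the idea (the Temperature class is hypothetical):

class Temperature
  def initialize(celsius)
    @celsius = celsius
  end

  # Defining to_s lets the object take part in puts and string
  # interpolation just like the built-in classes do.
  def to_s
    "#{@celsius} degrees C"
  end
end

puts Temperature.new(21)        # => 21 degrees C
"Now: #{Temperature.new(21)}"   # interpolation calls to_s for us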

So, given a proper interface and abstraction, can using the OOP
abstraction without knowing the internals be harmful?

No. It allows us to create our programs in an elegant way without
worrying about low-level details. A common interface allows us to reuse
common method calls regardless of type. Some may say that less code ==
fewer bugs, and, most importantly, it is simply fun to work with and to
run ad hoc experiments without the stress of resorting to strict
programming rules and ‘clever’ tricks to circumvent the bugs that often
arise in low-level languages (truncation and rounding in C vs Ruby come
to mind with that last statement).
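
To make the common-interface point concrete, a minimal sketch:

# The same message works across unrelated types; the calling
# code needs no knowledge of any receiver's internals.
[1, 2.5, "three", :four, [5, 6]].each do |obj|
  puts obj.to_s   # each object answers to_s in its own way
end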

Does this answer your question(s)?

~Stu

Michael S. wrote in post #999929:

the boundary between the two ways of thinking (procedural vs OOP) seems
very thin,

I know it’s not what you asked, but it’s easy to forget that computers
can’t run object-oriented (or functional) code. They can only run
procedural code.

Your objects are always disguising procedures, and your functions are
always on top of procedural code. If you lift the bonnet (hood), don’t
be surprised to see it, or worry about dabbling a bit.

On Fri, May 20, 2011 at 7:58 PM, Michael S. [email protected] wrote:

My question is, what kind of mental model do you use when you program in
Ruby? How do you think about objects? Do you see them as elements carrying
with them their own methods, bundled with their data?

I don’t think too much in that direction. I start by thinking about what
main artifacts a program needs to have in terms of functionality. I try
to do CRC (class-responsibility-collaboration) - at least mentally.
After that is settled, I think about what public interface serves a
class’s purpose best (along the lines of an abstract data type). Only
then, eventually (i.e. when coding), do I think about how these methods
are implemented. Of course, in practice there can be more iterations of
this, and one class might still be in the interface stage while another
is implemented already.
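
To make the interface-first stage concrete, a minimal sketch (class and
method names are hypothetical):

# Public interface settled first; the bodies come later.
class Inventory
  def add(item)
    raise NotImplementedError
  end

  def remove(item)
    raise NotImplementedError
  end

  def count
    raise NotImplementedError
  end
end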

How about the flow of the program: Whenever there’s a method call, do
you picture the method to be inside the receiver - just like it would be
in a real-life object - or, since you know that in the underlying
implementation the method is stored in the class, do you just think
about a procedure call with a self variable being the receiver?

That’s not a distinction I consciously make. An object responds to a
message - whether that behavior is defined in its class or in the
instance itself doesn’t really matter.
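
For example, behavior can live in the instance itself, and callers can’t
tell the difference - a minimal sketch:

s = "hello"

def s.shout        # a singleton method, defined on this one object
  upcase + "!"
end

s.shout            # => "HELLO!"
"hello".shout      # => NoMethodError - other strings are unaffected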

Do you think using the OOP abstraction without knowing the internals can
be harmful? My case for that (even if I tend not to believe so) would be
that someone might be tempted to think that during an object
instantiation, all the instance variables AND methods get duplicated for
this particular instance, which isn’t the case - yet that’s what the
abstraction pushes us to believe.

It’s the other way round: worrying about internals too early is
harmful. Whether that means thinking about how a class is implemented
or how the underlying object model works doesn’t matter. Both are
irrelevant most of the time when designing an application.

Kind regards

robert

On Friday, May 20, 2011 12:58:26 PM Michael S. wrote:

My question is, what kind of mental model do you use when you program in
Ruby?

I’m not sure of the best way to describe this, and I suspect others have
already done a decent job, but in any language, I have two main things
going on in my head. First is the semantics of the language in question.
Second is just enough of the implementation, usually more closely
related to the semantics than to the actual C code, that I have an
intuition of what kind of things are likely to be more or less
efficient.

How do you think about objects? Do you see them as elements carrying
with them their own methods, bundled with their data?

Not quite, but close. With respect to the object or class I’m currently
developing, I think of it almost as an independent program capable of
sending and receiving messages, much as I think of Erlang processes.
When using objects, I think of them as nouns, and the methods as verbs.

How about the flow of the program: Whenever there’s a method call, do
you picture the method to be inside the receiver - just like it would be
in a real-life object - or, since you know that in the underlying
implementation the method is stored in the class, do you just think
about a procedure call with a self variable being the receiver?

I think of it as being “inside the receiver” in the sense that your
personality, your decisions, your actions and reactions, are all inside
your brain. That some might be genetic (and there is of course dispute
about this) is an irrelevant detail.

There are practical reasons for this, also: how an object responds to a
method call really is up to the object. Aside from method_missing and
other fun tricks, objects also have metaclasses, which means you can
define a method on a single object.
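
A minimal sketch of the metaclass route (the same effect as
def obj.method_name):

obj = Object.new

class << obj       # open this object's metaclass (singleton class)
  def special
    "only this object responds to me"
  end
end

obj.special          # => "only this object responds to me"
Object.new.special   # => NoMethodError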

Do you think using the OOP abstraction without knowing the internals can be
harmful?

I agree with Robert; worrying about internals when you don’t have to is
harmful. While I do have a thread running in my head thinking about
performance considerations, even that is irrelevant for most programs
most of the time.

So, for your case:

My case for that (even if I tend not to believe so) would be that
someone might be tempted to think that during an object instantiation,
all the instance variables AND methods get duplicated for this
particular instance, which isn’t the case - yet that’s what the
abstraction pushes us to believe.

If you don’t know JavaScript well, I would strongly suggest picking it
up. Read pretty much anything Douglas Crockford has to say about it, and
play with different OO patterns. I’m going to use this as an analogy
here, so I’ll try to include enough background that it’s understandable
if you don’t know any JavaScript, or if you still think JavaScript is
“Java Lite”.

JavaScript objects behave like hashes, and the two can be used almost
interchangeably. Methods are just functions (generally anonymous) which
have been stored in the hash, plus a syntax for calling them which sets
‘this’ – but you can apply almost any function to almost any object.
Many implementations allow you to get a method’s source easily – playing
around with this, it seems that when you attempt to coerce a method into
a string, you get the source back.

Your choices for inheritance are either to use JavaScript’s own
prototypal inheritance, or to write your own inheritance – and your only
choice for multiple inheritance is to roll your own. With prototypal
inheritance, any time you try to access a property (either an instance
variable or a method) of a given object, it checks that object first,
then its prototype object, then the prototype’s prototype, and so on,
arbitrarily deep.

Rolling your own is much more flexible – you just create a new, empty
object (as easy as {}) and start adding methods to it. Basic inheritance
would just mean calling some “constructor” method which returns an
object, then modifying it in the child “constructor” method.

Now, like with your example, as a newbie, you might be tempted to think
that:

  • Functions are stored as strings.
  • Internal, inline functions are therefore inefficient, because you’re
    creating a new string each time.
  • Prototypal inheritance is slow, especially at any depth, since
    you’ll have to trace the entire ancestry for each access.
  • Roll-your-own inheritance is tremendously inefficient, since even if
    the initial object creation (prototypal or otherwise) was efficient,
    you’re taking the child and modifying it, thus leading to worse
    performance.

One of these is still true, but the others are false. Most surprisingly,
rolling your own inheritance may lead to a slower constructor – maybe –
but in the V8 engine (used in Chrome), there’s no performance penalty
whatsoever once the objects are created. Despite the highly dynamic
nature of what I’m calling roll-your-own inheritance, which feels like
it should be less efficient than calling Object#extend in Ruby on every
single object, the resultant objects behave very similarly to objects
created in statically-typed languages – that is, they’re fast!
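
For reference, the Ruby per-object extend being compared to, as a
minimal sketch (module name hypothetical):

module Barking
  def bark
    "Woof"
  end
end

dog = Object.new
dog.extend(Barking)   # mixes the module into just this one object
dog.bark              # => "Woof"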

I just wrote several paragraphs setting up the problem and explaining
why it’s not a problem. That is why I think that while internals are a
great learning experience, a best practice is to ignore implementation
details, particularly performance implications, until you actually care.

That, and think bigger. Suppose it was true that all the instance
variables and methods got duplicated for a given instance. So what? It’s
still O(1) with regard to the algorithm I actually care about, unless
that algorithm consists of adding methods to the parent class based on
input and then creating a bunch of objects.

First . . .

I think your description of the object model and how you think of it is
excellent, and it very closely approaches the way I think about it,
though I think some of the points you make come off a bit more subtly
than they would if I tried to explain it. Of course, I’m also having a
difficult time figuring out how to articulate my thoughts on that
clearly, which is why I haven’t contributed an explanation of my own
sense of Ruby’s object model to this thread.

That having been said, I’ll respond directly to some stuff you said now.

On Wed, May 25, 2011 at 12:12:44PM +0900, David M. wrote:

different OO patterns. I’m going to use this as an analogy here, so I’ll
try to include enough background that it’s understandable if you don’t
know any JavaScript, or if you still think JavaScript is “Java Lite”.

JavaScript objects behave like hashes, and the two can be used almost
interchangeably. Methods are just functions (generally anonymous) which
have been stored in the hash, plus a syntax for calling them which sets
‘this’ – but you can apply almost any function to almost any object.
Many implementations allow you to get a method’s source easily – playing
around with this, it seems that when you attempt to coerce a method into
a string, you get the source back.

I would actually suggest that, in large part because of the compromises
made in JavaScript’s implementation in the ECMA standardization process,
there is a lot of clutter in the language’s design that obscures these
characteristics of JavaScript’s object model in some unfortunate ways.
As an alternative to JavaScript, if the goal is solely to gain an
understanding of this prototype-based object model philosophy, one might
be better served to learn Io. I think that in the end JavaScript is a
much more useful language, and there are in fact some implementation
details of Io that I find a bit perversely cumbersome, but as a way to
rapidly pick up the feel and implications of a prototype-based object
model, nothing comes to mind that comes anywhere near the efficacy of Io
as an example language.

mean calling some “constructor” method which returns an object, then modifying
it in the child “constructor” method.

Here is one case where I think Io serves better than JavaScript. A lot
of what goes on in JavaScript – especially where “best practices” and
common cultural idioms are concerned – is pretty heavily polluted by
more “traditional” C++ish and Java-like object models’ concepts of class
hierarchies and the like, in a way that Io’s approach is not.
Ultimately, I find that it is much more obvious how to do
prototype-based OOP “correctly” in Io than in JavaScript, as a result of
that and of other factors as well.

So, to summarize . . . I think that Io is a better choice for the
limited purpose of learning about these things. I think JavaScript is a
better choice for learning a useful language, though.

Some Io programmer may well take exception to my statements, and if so,
I invite explanations of why I’m mistaken. I’d really like to find out
why (if it is true) Io is as useful a language as JavaScript for “real
world” problem solving when it seems to me like it kinda isn’t (and not
just because of lacking libraries, tools, et cetera, but also because of
some characteristics of its design).

Thanks a lot to everyone who replied.

I really appreciate the quality of the answers I got. I took a look at
the “Object Thinking” book mentioned by Mike M… I’m not sure if I’m
too fond of the “human” metaphor for objects, although it works, and it
makes designing software a little more like playing an RTS game :wink:

Reading about the JavaScript object model (beautifully described) made
me realize that there’s no point for an OOP programmer to care about the
internals - other than for general culture or for performance work. I
guess each language implements OO in its own way, but the concept of an
object remains the same, whether you’re doing class-oriented or
prototype-oriented programming.

I’ve been taught that an important part of mastering programming is to
fully understand the flow of your program. I had been interpreting that
all along as “knowing in which function you are”, which isn’t relevant
in OOP, and for a good reason, since your objects take care of that for
you.

I like the idea of an object being thought of as an independent
program - a sort of compiled version of its class. Classes themselves
are also, in a sense, a compiled version of their metaclass.

Michael S.

Raw JavaScript is raw.
Which also means astonishing things can be done with it.
For example: http://jsclass.jcoglan.com/

MarkT

On Wednesday, May 25, 2011 01:02:36 AM Chad P. wrote:

I think your description of the object model and how you think of it is
excellent, and it very closely approaches the way I think about it,
though I think some of the points you make come off a bit more subtly
than they would if I tried to explain it.

Thank you!

I actually should give credit to… someone. I’m sure these ideas aren’t
entirely my own, but I don’t remember where they come from.

I would actually suggest that, in large part because of the compromises
made in JavaScript’s implementation in the ECMA standardization process,
there is a lot of clutter in the language’s design that obscures these
characteristics of JavaScript’s object model in some unfortunate ways.

I don’t know enough about that process to know that this is where it
came from, but I’ll certainly agree there’s clutter and downright
ugliness in the design. It’s fortunate that when you get past the
ugliness, there’s still something kind of cool and useful, whereas it
seems like behind every bit of ugliness in C++ is more ugliness.

As an alternative to JavaScript, if the goal is solely to gain an
understanding of this prototype-based object model philosophy, one might
be better served to learn Io. I think that in the end JavaScript is a
much more useful language, and there are in fact some implementation
details of Io that I find a bit perversely cumbersome, but as a way to
rapidly pick up the feel and implications of a prototype-based object
model nothing comes to mind that comes anywhere near the efficacy of Io
as an example language.

It’s one that I’ve actually been meaning to learn for a while, mostly
because I like the idea of objects being actors – though, reading about
it now, it’s a bit depressing that it’s cooperative concurrency, so I
can’t actually do multicore stuff that way.

I think the JavaScript syntax is easier to pick up, at least for getting
the point across that I wanted to make:

// Roll-your-own inheritance: the "child" constructor calls the
// "parent" constructor, then modifies the object it returns.
function parent() {
  return {
    foo: function() { … }
  };
}

function child() {
  var obj = parent();          // start from a fresh parent object
  obj.bar = function() { … };  // extend it with child behavior
  return obj;
}

And of course, the ad-hoc-ness of the various implementations of
‘extend’. Off
the top of my head:

var Extendable = {
  // Copy every property of 'other' onto this object.
  extend: function (other) {
    for (var x in other)
      this[x] = other[x];
    return this;
  },
  // Run extend with 'other' as the receiver, so Extendable's own
  // methods (extend, infect) get copied onto it.
  infect: function (other) {
    this.extend.apply(other, [this]);
    return other;
  }
};

// So, to make any object extendable, you can have any extendable object
// "infect" it first, then call extend on it, kind of like Ruby's
// Object#extend:
var foo = {};
Extendable.infect(foo);
foo.extend({
  bar: function() { … },
  baz: function() { … }
});

// Or chain them:
var foo = Extendable.infect({}).extend({ … });

I don’t know enough about Io to really say whether it’s better for
teaching prototypal inheritance, but that wasn’t quite the goal here.
For one, JavaScript is useful, and I’ll take almost any opportunity to
counter FUD from people who bash it without understanding it. But my
main goal was to show that code like the above can be made to run much
more efficiently than you’d think, so why not have something that
flexible to begin with?

So, to summarize . . . I think that Io is a better choice for the limited
purpose of learning about these things. I think JavaScript is a better
choice for learning a useful language, though.

That depends entirely on what your goals are. For instance, if you can
give me an Io VM, or an Io derivative, which gives me
prototypal-inheritance OOP (or just OOP in general), objects-as-actors,
and Erlang-style concurrency, I’d be very interested (and I really
should be following Reia much more closely).

I wouldn’t deliberately use JavaScript for anything other than the Web
or things closely tied to it.

I’d really like to find out why
(if it is true) Io is as useful a language as JavaScript for “real world”
problem solving when it seems to me like it kinda isn’t (and not just
because of lacking libraries, tools, et cetera, but also because of some
characteristics of its design).

Well, again, I don’t know enough about Io to have an opinion, but I’d
like to. What aspects of its design make it unsuited to real-world
problems?

On Thu, May 26, 2011 at 03:36:39AM +0900, David M. wrote:

aren’t entirely my own, but I don’t remember where they come from.

Maybe Alan Kay . . . ? My mental model for OOP is inspired kinda
directly by things he has said about how he envisioned things, to some
extent at least.

ugliness, there’s still something kind of cool and useful, whereas it
seems like behind every bit of ugliness in C++ is more ugliness.

To be fair to ECMA, the ugliness in JavaScript isn’t all the fault of
what went on in the standards process – but I think the lion’s share of
that ugliness owes its current existence to the ECMA standard. There was
serious talk of a new major version of the ECMAScript standard that
promised to fix all kinds of awfulness in the language, but it was
eventually abandoned in favor of a minor version bump to tack on a few
features the corporate ECMA members demanded.

It’s one that I’ve actually been meaning to learn for a while, mostly
because I like the idea of objects being actors – though, reading
about it now, it’s a bit depressing that it’s cooperative concurrency,
so I can’t actually do multicore stuff that way.

I think the JavaScript syntax is easier to pick up, at least for
getting the point across that I wanted to make:

Io’s pretty easy to pick up as well, at least for a basic understanding
of the important parts of the language. Whirlwind tour for beginners:

Introducing Io, A Prototype-Based Language
http://blogs.techrepublic.com.com/programming-and-development/?p=3483

I don’t know enough about Io to really say whether it’s better for
teaching prototypal inheritance, but that wasn’t quite the goal here.
For one, JavaScript is useful, and I’ll take almost any opportunity to
counter FUD from people who bash it without understanding it. But my
main goal was to show that code like the above can be made to run much
more efficiently than you’d think, so why not have something that
flexible to begin with?

I’m making an effort to get a firmer grasp of JavaScript than I already
have, because I’m aware of its usefulness, and think that it has a lot
of potential to serve me well in the near future. I’m just frustrated by
the way the good (really good) parts of it are weighed down by the
garbage heaped on it, so I probably come off as more critical of it than
I intend. It’s definitely worth knowing – and using. I guess that,
within the narrow constraints of languages that accomplish what
JavaScript does well, JavaScript is the best of a bad lot, in part
because our alternative is VBScript (I shudder), but in part because of
the really neat language design decisions that survive at the core of
the language.

So, to summarize . . . I think that Io is a better choice for the
limited purpose of learning about these things. I think JavaScript
is a better choice for learning a useful language, though.

That depends entirely on what your goals are. For instance, if you can
give me an Io VM, or an Io derivative, which gives me
prototypal-inheritance OOP (or just OOP in general), objects-as-actors,
and Erlang-style concurrency, I’d be very interested (and I really
should be following Reia much more closely).

I’m not sure what you described would really be Io any longer.

I stumbled across Reia quite a while ago, too, and found it interesting.
I already have far too much on my plate to invest too much time into it
right now, though, so I probably won’t check into it too seriously for
another year or two.

I wouldn’t deliberately use JavaScript for anything other than the Web
or things closely tied to it.

At least until there’s a major housecleaning of the ECMAScript standard,
I don’t think I would, either – though “the Web”, in my case, also
includes the enticing prospect of server-side JavaScript programming
with fun toys like Node.js, which I hope to tackle in some depth some
time in the next six months.

I’d really like to find out why (if it is true) Io is as useful a
language as JavaScript for “real world” problem solving when it seems
to me like it kinda isn’t (and not just because of lacking libraries,
tools, et cetera, but also because of some characteristics of its
design).

Well, again, I don’t know enough about Io to have an opinion, but I’d
like to. What aspects of its design make it unsuited to real-world
problems?

It feels almost spitefully bare-bones in a lot of ways. I alluded to the
same problem in the above-referenced article. When playing around with
Io, I found myself wishing I was using Ruby more often than just coding,
once I got past the initial fun of the prototype model of OOP. It just
felt a lot of the time like I was implementing parts of the language its
creator forgot.

I recommend Seven Languages in Seven Weeks for a quick introduction to
Io (and six other interesting languages). The Introduction to Io article
barely scratches the surface; the Seven Languages book digs a bit deeper
into the interesting ideas built into the design of the language.

On Tuesday, May 24, 2011, Mike S. [email protected] wrote:

Michael S. wrote in post #999929:

the boundary between the two ways of thinking (procedural vs OOP) seems
very thin,

I know it’s not what you asked, but it’s easy to forget that computers
can’t run object-oriented (or functional) code. They can only run
procedural code.

Actually, they generally don’t run procedural code either; they run
unstructured imperative programs. The procedural model is, like the
functional, object-oriented, and relational models, a higher-level
abstraction over what is really going on at the low level.

This is an interesting read:

https://groups.google.com/forum/#!forum/object-composition

MarkT

David M. wrote:

On Wednesday, May 25, 2011 01:02:36 AM Chad P. wrote:

I think your description of the object model and how you think of it is
excellent, and it very closely approaches the way I think about it,
though I think some of the points you make come off a bit more subtly
than they would if I tried to explain it.

Thank you!

I actually should give credit to… someone. I’m sure these ideas aren’t
entirely my own, but I don’t remember where they come from.

That would be none other than Alan Kay himself.

Well, actually, he described objects as their own little independent
computers, not programs as you did, but the rest of your description is
pretty close. (I guess that would make the methods the programs.)

I’m not entirely sure what the exact connection is (maybe Alan Kay
interned at ARPA?), but Alan Kay knew about the architecture of the
ARPAnet (what we would now call the Internet, and specifically the idea
of messaging as implemented in TCP/IP) when he started to think about
object-orientation.

He thought that independent machines communicating over the network by
messaging, with no shared memory, was a brilliant way to design
complex distributed communication systems, and if it’s good enough to
design a worldwide information system, it surely would be good enough
to design programs, right?

This is the way that pretty much all of his team thought about (and
still does, if you listen e.g. to recent interviews with Dan Ingalls)
object-orientation. Another view that is more peculiar to Alan Kay
himself is thinking of objects as biological cells. (This is his biology
minor showing through.)

jwm

On Tue, Jun 7, 2011 at 3:12 PM, Stu [email protected] wrote:


It’s interesting you bring up biological cells. Peter Small has an
interesting take on the paradigm which is essentially the same thing.
He wrote a book to illuminate the idea using Macromedia Director and
the Lingo programming language. Interestingly enough, the constructor in
that language was a method called ‘birth’, which was later renamed
‘new’. It’s an interesting concept. If you need a visual, alias birth to
new in your class and see how it looks birthing an object. Seeing
objects as living, breathing things is a smart, well-thought-out way of
seeing constructors/destructors.
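
In Ruby terms, that visual might look something like this (a minimal
sketch; the class is hypothetical):

class Cell
  class << self
    alias_method :birth, :new   # 'birthing' an object, Lingo-style
  end

  def initialize
    @alive = true
  end
end

cell = Cell.birth   # exactly equivalent to Cell.new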

The basic biological concept goes like this:

Information processing within a cell is analogous to computer
processing. Instead of registers, the processing of information in a
cell is organized through surface interactions and shape distortions of
chemical molecules. In this way, every human cell acts like a very
powerful computer: reading programs, instructions and data from the
genome. The genome is essentially the template for your programs and
data.

Proteins, which are specified by DNA sequences on the cell’s genome, mix
and combine with each other to form complex structures. These complex
structures, in turn, mix and combine among themselves to form even more
complex structures, which in turn recombine in a multitude of ways to
provide the wide variety of organic mechanisms and structures which
together make up the living human form. The whole process is based upon
the use of modular molecular units which are duplicated and combined
into different arrangements. This duplication and combining of modular
components is the basis of object-oriented programming.

The biological genome also creates and sends chemical messages to itself
and to other molecular complexes within the cell. These chemical
messages initiate cascades of chemical reactions. This is exactly
analogous to how method arguments might trigger cascades of procedures
and commands inside methods.

Molecular messages can also pass through cell walls, allowing cells to
communicate with each other. The billions of cells which make up the
human body are linked together through an information system consisting
of a vast complexity of chemical gradients and inter-cellular movements
of molecules. Cells as computers, all linked together in an
information-carrying network, could be a suitable model for an
information system such as the Internet.

This is my favorite metaphor for programmers new to OOPS, and it
explains the S in the acronym, which is SYSTEMS. It makes you almost
want to explore AI programming and simulations. You can begin to see
some concepts, such as a computer virus infecting memory locations. It
also predates the design-patterns documentation, from when programmers
still implemented such things innately rather than from a book.

Though for someone new to programming altogether, there is the
mechanical angle, which is simple enough to state: an object is a
variable that holds a reference to a script template in memory.

I’d be interested in reading more about this metaphor. Did Kay write
anything I can see? Link or book? I love paradigms =)

~Stu


On 20 May 2011, at 18:58, Michael S. wrote:

Hello everyone,

What I find fascinating when hacking in Ruby is that we use one of the
purest implementations of OOP, and at times we find ourselves tweaking C
extensions. Thus, the boundary between the two ways of thinking
(procedural vs OOP) seems very thin, yet it’s still fundamentally
different.

My question is, what kind of mental model do you use when you program in
Ruby? How do you think about objects? Do you see them as elements
carrying with them their own methods, bundled with their data?

Mostly I tend to think about programming as a sort of collaborative
writing exercise between myself and the computer as we explore a
particular problem space. Along the way we’ll spot some interesting
landmarks, chat with the natives about their lifestyle and customs, get
stuck in some weird backwater or three along the way, and when we get
back have a really great yarn to tell.

Out of that experience we (I and Friend Computer) seem to produce this
artefact called a software program with objects and classes and
meta-programming adornments.

And at times we write tests. Which is a polite way of saying we perform
unethical vivisection on various of the natives we meet, take snapshots
for the further edification of science, and then run away from the scene
of the crime very very quickly.

How about the flow of the program: Whenever there’s a method call, do
you picture the method to be inside the receiver - just like it would be
in a real-life object - or, since you know that in the underlying
implementation the method is stored in the class, do you just think
about a procedure call with a self variable being the receiver?

It’s all just messages, all the way down. Often in a Schroedinger’s
Cat/Shakespearean tragi-comic sense.

Do you think using the OOP abstraction without knowing the internals can
be harmful? My case for that (even if I tend not to believe so) would be
that someone might be tempted to think that during an object
instantiation, all the instance variables AND methods get duplicated for
this particular instance, which isn’t the case - yet that’s what the
abstraction pushes us to believe.

Anything with quantum mechanics can catch us out if we fall for the
fallacy of determinism. Message-passing languages such as Ruby are an
example of this: an object is only ever what it claims to be when asked.

Just because a Ruby object is a Dog doesn’t mean it can necessarily
bark :wink:
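
A minimal sketch of that last point:

class Dog; end

dog = Dog.new
dog.is_a?(Dog)          # => true
dog.respond_to?(:bark)  # => false - being a Dog guarantees nothing

def dog.bark
  "Woof!"
end

dog.respond_to?(:bark)  # => true - ask the object, not its class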

Ellie

Eleanor McHugh
Games With Brains
http://feyeleanor.tel

raise ArgumentError unless @reality.responds_to? :reason

I very much enjoyed reading that. Thank you for the link.

~Stu

Michael actually started to raise a separate issue:

What I find fascinating when hacking in Ruby is that we use one of the
purest implementations of OOP, and at times we find ourselves tweaking C
extensions.

When you are writing a software solution, is there something wrong with
using different levels of abstraction - different paradigms - at the
same time?