Joel Spolsky on languages for web programming

David V. wrote:

Utter pants. I mean, you used the word “bloat”, which should make people
lose any debate by default.

I don’t like bloated software; it is unnecessary.

Alvin R. wrote:

Java and C# are no guarantee for success.

Neither is Ruby / Rails. No technology is a guarantee for success, no
technology ever was, and I’ll bet a gold bar against a plastic spoon no
technology ever will. The choice of technology is a very important decision
to make, but it never single-handedly moves you from doable to undoable or
vice versa.

There are many factors required for success and I don’t believe any one
factor guarantees it, but interestingly it can take as little as one
element gone wrong to ruin everything.

you seem to need 10 instead of 3 people and 5 times as long.

Pure, unadulterated shite. Give me numbers. Credible statistics and real
research, not random anecdotal success stories that are too pathetic to
sell Herbalife diet pills.

The “10 to 3” ratio wasn’t meant to be taken literally; surely you don’t
think otherwise? And can you tell me where I can get such “credible
statistics and real research” from?

Are you saying all languages yield the same level of productivity? If
they aren’t equally productive then how much more productive is Java
over C++ or VB over assembler? Do you need “credible statistics and
research” to answer the question?

Also, initial development cost isn’t a very important factor. Recall
your uni software lifecycle charts about how much of a project’s life is
maintenance. For a successful project, the numbers are very much true.
With a successful product comes the responsibility of supporting it and
keeping it successful, and in some cases this responsibility creates
ongoing costs that dwarf the initial development horribly.

I disagree; the initial cost is vital. Most projects get approved or
not approved based on that initial cost, and if that money is drained on
developers trying to tame an unwieldy platform instead of building the
actual system, then we have a problem, don’t we?

OK, sure, Java’s OO may be nicer than Perl 5’s, but once you brew
HTML/Javascript/JSP/JSTL/EL/tags/JSF or Struts together, the result
isn’t exactly what I’d call pretty. Java is in no way a safe bet.

No one cares about pretty. It’s also a completely irrelevant issue when
deciding on implementation language if you’re at least remotely responsible.

I care about pretty.

How about C#? Well, it runs on Windows, and without serious and expensive
firewalls you just can’t go anywhere near the Internet.

You need to lock down Unix-based servers too. Heck, there are even
serious and expensive firewalls for Linux, because not
everyone has an in-house iptables guru.

True, no platform is 100% impervious to attack, but some are less secure
than others.

Ruby and Rails just get straight to the point. They make common things
easy and elegant.

Sometimes things aren’t so common. Ruby and Rails DO have faults. Just
google around; I’m not going to start name-calling, out of respect and out
of a sense of realism - every technology has flaws and any mudslinging
would only lead to a pointless flamewar. Some of what you’ll find is
uneducated rants and / or whining, but some of it is valid.

Yes, I know all platforms have faults and wish lists; I didn’t think
otherwise.

And if you do NOT go out and learn about these flaws and what impact
they could have, and are not fully aware of them when making the
implementation technology decision on a project, weighing the severity of
their impact under the circumstances of your project, then your
decision may cause a lot of trouble.

Fair enough, I agree. I think software should be published with
specifications and limits, as they do in other industries. This is a 100
ohm resistor +/- 2%, capable of running at these temperatures, it
handles this much power … but in software it’s just “blah” and you have to
discover the limits yourself (ouch).

I’m not sure how fast or slow Ruby is but if it’s as fast as Perl I’ll
be happy enough. Yes I know C is faster but I need fast development
times too.

As for developing major sites with Rails, most managers don’t have the
balls.

I advise you to go through freshman year at a management school. It’s
the managers’ job to “not have balls” and avoid risk when there’s
apparently nothing to be gained from taking it. If you want to be a Ruby
advocate, you need to be able to persuade them, not yourself, of the
advantages of using it.

I’ve worked with Harvard-level managers; they seemed to think it was
their job to have balls, which is the opposite of what you’re saying. I
prefer to work with managers that have the knowledge, intelligence, energy
and conviction to back up their decisions.

Besides, the choice of language is usually mine; that’s why I
gravitate to the more productive ones. In my experience run-time
performance is rarely an issue, but development time is.

contracted the software finds he isn’t really interested in what the
tech demos had to show at all.

And the stereotype of lazy management that never gets punished is good
to make Dilbert strips from - in real life, it probably doesn’t hold

No, I’ve seen it hold in real life too many times; the “pointy-haired
boss with the corner office” still brings a chuckle out of me.

true outside of a select few huge moloch companies, or on the opposite
side of the spectrum, small short-lived hick-led shops where the boss’s
kids and nephews get all sorts of crap assigned to earn a better allowance.
In a well-led company with working internal management processes, when
the shit hits the fan, everyone gets the stink.

David V.

Cheers :wink:

James B. wrote:

This leads to an interesting question: how many ruby programmers are
there, anyway?

I ran across
http://sanjose.bizjournals.com/sanjose/stories/2006/08/28/daily1.html
today and boggled at the “2.5 million” number for PHP.

Any ideas for Ruby?

At least a dozen, maybe more.

+1 (in both senses)

William G. wrote:

At least a dozen, maybe more.


James B.

http://www.ruby-doc.org - Ruby Help & Documentation
Ruby Code & Style - The Journal By & For Rubyists
http://www.rubystuff.com - The Ruby Store for Ruby Stuff
http://www.jamesbritt.com - Playing with Better Toys

On Mon, Sep 04, 2006 at 11:49:07AM +0900, M. Edward (Ed) Borasky wrote:

Rails and rake are internal DSLs, and Ruby makes internal DSL creation
much easier than many other languages. I can’t tell from this thread
whether Wasabi is external or internal.

I’m having a hard time imagining it being internal, considering the way
Joel describes it in an essay where he addresses Wasabi directly:
Wasabi – Joel on Software

I am, frankly, having a tough time imagining how he could have designed
a language that does what he claims as an internal DSL of VBScript. In
fact, he describes it as “100% backwards-compatible with VBScript”.

On the other hand, it seems utterly incomprehensible that someone would
recreate VBScript, but with more power, from scratch – which is what
he’d have to do, considering I doubt Microsoft gave him the source for
it.

I hardly think of an external DSL as anything special any more. They’ve
been around as long as I’ve been programming, which is – well, let’s
just say your toaster has more compute power than the machine I learned
on. :slight_smile: Almost every major decades-old Fortran code, for example, is
really implementing an external DSL.

In a manner of speaking, one might say that all programming languages
are, in one way or another, DSLs of a sort.
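
The internal/external distinction is easy to see in Ruby itself. Below is a
minimal sketch in the spirit of rake, entirely made up for illustration (the
TinyTasks module and its task/run methods are not rake’s real
implementation): the “task language” is nothing but ordinary Ruby method
calls and blocks, which is what makes it an internal DSL.

module TinyTasks
  @tasks = {}

  # Declare a task, optionally with prerequisites:
  #   task :name            or            task name: [:deps]
  def self.task(spec, &body)
    name, deps = spec.is_a?(Hash) ? [spec.keys.first, Array(spec.values.first)] : [spec, []]
    @tasks[name] = { deps: deps, body: body }
  end

  # Run a task after its prerequisites (no cycle detection in this sketch).
  def self.run(name)
    t = @tasks.fetch(name)
    t[:deps].each { |dep| run(dep) }
    t[:body].call if t[:body]
  end
end

# The "DSL" part: the definitions read almost like a little build language,
# but every line is plain Ruby, handled by the ordinary Ruby parser.
TinyTasks.task(:compile)         { puts "compiling..." }
TinyTasks.task(test: [:compile]) { puts "running tests..." }
TinyTasks.run(:test)             # => compiling... then running tests...

An external DSL, by contrast, needs its own parser and interpreter or
compiler, which is the distinction the Fortran and Wasabi comments above
are drawing.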

On Mon, Sep 04, 2006 at 01:23:16PM +0900, M. Edward (Ed) Borasky wrote:

The gap has narrowed. It’s rare that an assembly language coder can beat
a compiler by more than a factor of 2 these days, and on some
architectures it’s a dead tie – there’s only one way to do something
and the compiler always finds it. Interpreters are better now too,
mostly because today’s languages have such a large component that has to
be dealt with at run time anyway that the “heavy lifting” is done by
compiled code.

This brings me to a thought I’ve been having a lot, lately: that the
future of compiled code will probably start looking in some ways more
and more like interpreted code. I don’t see why we can’t, relatively
soon, have a compiler that produces a compiled executable of a dynamic
language such as Ruby that does not require a VM or interpreter to be
run (outside of the very loose definition of “interpreter” or “VM” that
might include your whole OS in that definition). The dynamic aspects of
the language would be handled by the executable binary itself, rather
than by an interpreter external to the program.

I’m not entirely sure how to explain what I’m thinking at this time, so
I’m not sure I’ll get my point across. Hopefully someone who reads this
will get where I’m aiming, and may even be able to help me clarify it.

I’m not sure JIT is “necessary” for efficient interpretation of Ruby
anyway. But you’re right … if the economics is there, the gap will get
closed, just like the compiler/assembler gap got closed.

There are things that Ruby allows that simply cannot be done without a
certain amount of runtime interpretation, with the possible exception of
the evolution of persistent compiled executable binaries described
above.

Devin M. wrote:

David V. wrote:

Speaking purely theoretically, Ruby cannot be made as performant as
Java or C# could be made if they had ideally performing implementations.
Latent typing makes it almost impossible to do certain optimizations
that static typing allows. That’s pure fact.

Irrelevant. In many cases, the fact that Ruby has latent typing is an
implementation detail. Ruby has no type declarations, but in many
cases static type inference can be applied to get the same optimizations
of which Java and C# implementations avail themselves. (Disclaimer:
that’s about as much as I know about this subject.)

You’re absolutely right.

Look to Haskell for a good example: a statically typed language
almost free of type annotations of any kind - type information is almost
exclusively inferred by the compiler (though you can add type
annotations).

While Ruby has features that make it impossible for an implementation
to use strict static typing everywhere, a lot of a typical Ruby
application could be statically typed by an implementation using type
inference fairly easily by doing some relatively simple flow analysis
combined with marking up the parse tree.

Doing it for a pure interpreter would be easy, but the advantages would
be relatively limited. Doing it for a JIT compiler would also be quite
straightforward and does have the potential of very significant
speedups.

For a full-fledged compiler it would be tricky without some
restrictions - the main problem is Ruby’s introspective features and
various eval mechanisms, which mean that type inference valid at
compile time might not hold at runtime. Add a few restrictions on the
use of load/require etc. and the use of evals, and/or some way of
adding some basic type annotation to guide the compiler for “extension
points” (classes/methods that will be affected by runtime changes) and
it would be doable without significant changes.

Vidar
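
To make the “extension points” idea concrete, here is a small made-up
illustration (mine, not Vidar’s). The first method is the kind of code that
simple flow analysis could specialise, while the second rewrites a class at
runtime, so anything a compiler inferred about it ahead of time may no
longer hold.

# Easy case for inference: simple flow analysis could conclude that
# total and i are always Integers here and emit specialised code.
def sum_to(n)
  total = 0
  1.upto(n) { |i| total += i }
  total
end

class Greeter
  def greet(name)
    "Hello, #{name}"
  end
end

# Hard case: the class is rewritten from a string at runtime, so any
# compile-time conclusion about Greeter#greet (its arity, its return
# type, even whether it exists) may be invalidated afterwards.
def monkey_patch!(code)
  Greeter.class_eval(code)
end

puts sum_to(10)                  # => 55
puts Greeter.new.greet("world")  # => Hello, world
monkey_patch!('def greet(name) "Hi, #{name}!" end')
puts Greeter.new.greet("world")  # => Hi, world!

Restricting eval and load/require, or explicitly marking classes like
Greeter as extension points, is the sort of compromise described above that
would let a compiler keep the easy cases fast without being wrong about the
hard ones.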

On Mon, Sep 04, 2006 at 01:00:07PM +0900, M. Edward (Ed) Borasky wrote:

to insist that they be certified by Cisco, Microsoft or Red Hat as
appropriate.

As someone with a combination of college, trade school, on-the-job, and
purely autodidactic education, with several certifications of my own, my
experience is that all certifications really prove is A) a certain
minimum standard of test-taking competence that can be sized up within
five minutes of meeting someone anyway and B) a certain amount of money
invested in professional advancement.

. . . with the exception that some certifications require certain resume
bullet-points before one is allowed to take the certification exam in
question (CISSP comes to mind). Considering one doesn’t require a
certification to determine whether someone has such resume
bullet-points, however, that seems irrelevant.

On Mon, Sep 04, 2006 at 12:37:03PM +0900, Vidar H. wrote:

tell you to use. Communities and support are paramount.

  • Corporations and the people in those organizations favor safety, if
    your job is on the line go with the tried and true. Take no risks.

All three assumptions rely on a single assumption: FEAR.

No. They rely on sound risk management principles.

One might say that’s just euphemistic phrasing. I’m not prepared to
make such an assertion at this time (I’d like to think about this a bit
more before doing so), but it does occur to me as a possibility.

  • Fear the technology would eventually not deliver.

Replace “Fear” with “Risk” and the above is reasonable if your company
does not have people experienced in a particular technology. And the fact
is that today it is still far harder to find people skilled at Ruby than at
many other languages. More importantly, there is too little experience
with many Ruby technologies for a company with no Ruby experience to
know whether Ruby will be appropriate for a specific project.

This brings us to the “real” problem:

Decision makers need to know something about the technologies to be able
to make the “right” decisions. One cannot effectively expect that any
particular decision is more or less likely to be a good one unless the
decision maker actually knows the options at hand. In other words, Joel
Spolsky’s advice about choosing “proven” technologies is nonsense: the
real advice should be “Choose from among technologies you know. If you
are not an expert at all the options that sound good, learn enough to be
able to make an informed decision. Failing to do so does not guarantee
that you will make the wrong decision, but it does guarantee that you
will make your decision for the wrong reasons. Period.”

In other words, every time a nontechnical manager is given the
responsibility of choosing a programming language and/or framework for a
project, someone has screwed up. How can (s)he possibly evaluate the
available technologies, or even the advice (s)he receives about them
(whether from employees, friends, consultants, or Joel On Software) to
be sure it’s not a load of hooey without knowing the technologies
personally?

  • Fear the support will not be sufficient.

Replace “Fear” with “Risk” again. The company I work for, Edgeio, uses
PHP for our frontend code (but Ruby for our backend) because when we
started building it I had concerns about the availability of people
skilled with Ruby in general or Rails in particular.

Sure, Java and PHP programmers are a dime a dozen – as long as you’re
willing to settle for a dime-a-dozen quality programmer. If you want
programmers that are worth their paychecks, however, you significantly
narrow the field no matter what language you’re using. Considering
the learning ability and proclivities of excellent programmers, however,
I rather suspect that you’ll find as many excellent programmers who know
“exciting new languages” as “boring old languages”. Considering the
direction language design has been going lately, “exciting new
languages” are generally easier to learn, too. This means that if you
choose to hire for excellence over familiarity with a given language,
you’re just as likely to find yourself constrained to choose an
excellent C programmer over a poor Java programmer as you are to choose
an excellent C programmer over a poor Ruby programmer – but if you’re
working with Ruby, your excellent C programmer will probably pick up the
language faster.

I guess what I’m saying is that you’re probably better off choosing
excellent programmers and the language that works best, technically
speaking, for your project. Choosing a language for which programmers
are a dime a dozen regardless of technical merit is more likely to leave
you with crappy software development, lightning-fast employee turnover,
or (more likely) both.

When we started hiring, those concerns were validated: it has proved
extremely hard to find people with Ruby experience. While it’s
certainly getting easier rapidly, not everyone can afford to take the
risk. In our case I decided to start phasing Ruby in for small,
self-contained components in our backend, and gradually take it from
there as we get enough Ruby skills through training or hiring. That has
proven to work well, and it meant that in the event we ran into
unforeseen problems, the effort involved in switching back to another
language would have been limited.

Define “experience”. If by “experience” you mean “has spent ten years
developing enterprise applications in the language”, darn right it would
be more difficult to find people with Ruby “experience” than many other
languages. If, on the other hand, you mean “has demonstrated aptitude,
Ruby skill, and programming wizardry likely to prove to be an
unequivocal asset to your team”, you’re probably looking in the wrong
places (since you’re unlikely to find that in college internship
programs, where all they teach anyone is Java and .NET).

enough skilled Ruby developers available in your location, picking it
for a high risk project will certainly not speak to your favor with any
risk

The problem is where people fear for job security based on choosing a
non-conservative technology, rather than for choosing an inappropriate
technology. Many people would never (under current conditions) choose
Ruby over Java, even if guaranteed that the project would be completed
with 110% requirements satisfaction within two months for Ruby or with a
10% chance of project failure, a 90% requirements satisfaction rate if
“successful”, and an eighteen month development time for Java – all
based on fear for job security. It’s the “nobody ever got fired for
choosing IBM” syndrome. Even if choosing the conservative technology is
the Wrong Answer for the task at hand, it will be considered the Right
Answer for job security by a great many people.

On Mon, Sep 04, 2006 at 01:01:53PM +0900, Alvin R. wrote:

I’m not sure how fast or slow Ruby is but if it’s as fast as Perl I’ll
be happy enough. Yes I know C is faster but I need fast development
times too.

Based on what I’ve heard/seen/experienced, Ruby is generally somewhat
slower than Python which is, in turn, somewhat slower than Perl.
Generally. On average. For some definition of “average”. One of the
nice things about Perl’s relative maturity is the significant work that
has gone into performance concerns for the parser/runtime. I have no
doubt that Ruby will eventually match Perl for general-case performance,
but I also have no doubt that on the whole it has not matched it yet.

On the other hand, the difference is not so great that execution
performance is a factor I really take into account when choosing between
the two languages for a given project.
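
For what it’s worth, impressions like these are easy to turn into numbers
for your own workload. Here is a rough sketch using Ruby’s stdlib Benchmark
module; the workloads and iteration counts are arbitrary, and a fair
cross-language comparison would of course need equivalent Perl and Python
programs run on the same machine.

require 'benchmark'

# Naive recursive Fibonacci: a CPU-bound toy workload.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

Benchmark.bmbm do |bm|   # bmbm does a rehearsal pass before the timed pass
  bm.report("fib(25)")         { fib(25) }
  bm.report("string building") { s = ""; 20_000.times { |i| s << i.to_s } }
  bm.report("regexp scan")     { ("abc123 " * 10_000).scan(/\d+/) }
end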

In message [email protected], David V.
[email protected] writes

research, not random anecdotal success stories that are too pathetic
to sell Herbalife diet pills.

That’s a great line.

Stephen

In message [email protected], Devin M.
[email protected] writes

does clean mean? reduced duplication?). Pretty means fewer LOC, which
is about the only objective measure of maintainability we know.

I take it you’ve never had the pleasure of reading someone else’s APL
code? It’s about as dense as you can get in LOC.

It sure isn’t easy to maintain; it’s often described as a “write-only
language”.

I think the word “pretty” is not the correct word, “elegant” would be
better.

Stephen
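
A contrived Ruby illustration of why raw LOC is a shaky proxy for
maintainability (the example is invented, not from the thread): both
snippets below count word frequencies, and the dense one “wins” on line
count, but few people would call it the easier of the two to maintain.

text = "the quick brown fox jumps over the lazy dog the fox"

# One line, APL-ish in spirit:
dense = text.split.inject(Hash.new(0)) { |h, w| h[w] += 1; h }

# Several lines, but it reads the way you would explain it:
counts = Hash.new(0)
text.split.each do |word|
  counts[word] += 1
end

p dense == counts   # => true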

Vidar,

Risk Management IS NOT equivalent to FEAR; in that, you are right.

However, as I said earlier, no SIGNIFICANT progress can be expected
without some risk. Risk Management is about dealing with risk, not
eliminating it.

Ruby and Ruby on Rails are not the safest choice, but I believe they
are among the very best choices for web development. There is a
slight risk in it, but not enough to stop any bold, courageous
corporation, startup or even lone developer from creating great software
with it.

Joel and people who share his views equate Risk with FEAR. That is
their main mistake. Ruby is ready now… not for everything, but it is
uniquely ready for web development; that, I believe, is a fact.

As another poster mentioned, there is an evolution in the adoption of
technology. Ruby is still with the early adopters, but that does not
mean it is not mature enough for critical applications.

What will prove me right, however, is not my rationale here or in my
previous post, but TIME; time will prove that those who stuck with Ruby
and Ruby on Rails did so wisely, because TRUTH is tested in time… I
believe Ruby is ready now, many people disagree, but ultimately time
and people using Ruby for critical applications will be the deciding
factor.

I love this quote from the Linux Advocacy video Red Hat produced
recently; it is incredibly relevant to the issue we are discussing,
and I recommend everyone watch it. I will quote it here:

“Despite Ignorance
Despite Ridicule
Despite Opposition
Despite it ALL
TRUTH HAPPENS”
Source: How we contribute

Time will tell us indeed, but I am not waiting for the jury, I am
learning Ruby and RoR now, eager to apply it to create cool, amazing
web applications… isn’t that the whole point? To push technology?
To make it fun again? To innovate?

Jose Hurtado
Web D.
Toronto, Canada

Stephen K. wrote:

I think the word “pretty” is not the correct word, “elegant” would be
better.

Stephen

I’ve never written a line of APL code, but that hasn’t ever stopped me
from being able to read APL code if the need or desire to do so arose.
There was a time when it was a dominant language in econometrics and
computational finance; indeed, the “A Plus” open source descendant of
APL originated at Morgan Stanley.

APL and its original implementation APL\360 were/are works of pure
genius. I was privileged to meet one of their creators (Falkoff) when I
was a very young programmer working at IBM. APL is one of the few truly
unique programming languages and possesses an elegant simplicity found,
in my opinion anyway, in only two other programming languages –
Lisp/Scheme and Forth.

Chad P. wrote:

On Mon, Sep 04, 2006 at 01:01:53PM +0900, Alvin R. wrote:

I’m not sure how fast or slow Ruby is but if it’s as fast as Perl I’ll
be happy enough. Yes I know C is faster but I need fast development
times too.

Based on what I’ve heard/seen/experienced, Ruby is generally somewhat
slower than Python which is, in turn, somewhat slower than Perl.
Generally. On average. For some definition of “average”.

Would you be interested in the correct definition of average in
benchmarking? Of course you would! :slight_smile:

One of the
nice things about Perl’s relative maturity is the significant work that
has gone into performance concerns for the parser/runtime. I have no
doubt that Ruby will eventually match Perl for general-case performance,
but I also have no doubt that on the whole it has not matched it yet.

And this was one of the motivations of the Parrot team – a common
virtual machine for Perl, Python and Ruby. The Ruby community seems to
have put a lot more effort into YARV than the Cardinal/Parrot approach.
Has the Python community similarly gone their own way, or do they plan
to use Parrot?
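
Ed doesn’t spell out his “correct definition of average” in this post, but
for normalized benchmark results (per-test runtime ratios against a
reference implementation) the usual recommendation is the geometric rather
than the arithmetic mean, since its verdict doesn’t depend on which system
you normalize against. A quick sketch:

# Hypothetical per-benchmark runtime ratios of implementation B vs. A:
# B is twice as fast on one test, twice as slow on another, equal on a third.
ratios = [0.5, 2.0, 1.0]

def arithmetic_mean(xs)
  (xs.inject(0.0) { |sum, x| sum + x }) / xs.size
end

def geometric_mean(xs)
  (xs.inject(1.0) { |prod, x| prod * x }) ** (1.0 / xs.size)
end

puts arithmetic_mean(ratios)   # => 1.1666... (makes B look slower overall)
puts geometric_mean(ratios)    # => 1.0 (no net difference, as expected)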

Chad P. wrote:

As someone with a combination of college, trade school, on-the-job, and
purely autodidactic education, with several certifications of my own, my
experience is that all certifications really prove is A) a certain
minimum standard of test-taking competence that can be sized up within
five minutes of meeting someone anyway and B) a certain amount of money
invested in professional advancement.

They also prove that you can learn and carry out a learning task to
completion. They also provide HR and the hiring manager with an
objective way of ruling out unqualified candidates. If I post a network
engineer position and get 100 applications, ten of whom have completed
their certification, that’s 90 resumes I can throw in the trash.

Chad P. wrote:

I’m not entirely sure how to explain what I’m thinking at this time, so
I’m not sure I’ll get my point across. Hopefully someone who reads this
will get where I’m aiming, and may even be able to help me clarify it.

Perhaps you’re thinking along the lines of Lisp or Forth, where an
application is layered on top of the entire compiler/interpreter/runtime
package and then saved as an executable. As far as I can tell, there’s
absolutely no reason this couldn’t be done for Ruby. IIRC that’s also
the way the Squeak Smalltalk environment works and the way Self worked.

Incidentally, Forth contains two interpreters and a compiler. A typical
Common Lisp contains one compiler and one interpreter. Right now, Ruby
is simple enough that what you’re describing seems feasible – a couple
more years of co-evolution with its users and it might not be. :slight_smile:

On Tue, Sep 05, 2006 at 03:00:13AM +0900, M. Edward (Ed) Borasky wrote:

engineer position and get 100 applications, ten of whom have completed
their certification, that’s 90 resumes I can throw in the trash.

I don’t think I can really put much value in that “carry out a learning
task to completion” idea, in this case. The sort of “learning” it
measures is, generally speaking, more suited to learning to give the
answers people are expecting than coming up with correct answers.
Microsoft certs, in particular, are bad about this – filled with
marketing euphemisms and salesworthy “this solution for that problem”
questions.

That’s not to say certifications are useless, but they carry little
enough worth in (accurately) judging a candidate’s value that ignoring
them entirely probably wouldn’t hurt your hiring strategies.

You’re right about certifications providing HR and hiring managers with
an “objective” metric for candidate qualifications, but that’s pretty
self-referential (they’re “qualified” if they meet the qualification
requirements, including a certification, which is required so that
you’ll have some way to tell if they’re qualified, et cetera), and
there’s not really any indication that what it objectively measures is
useful for most purposes. About the only way it measures something
useful with regard to job performance is if someone can literally just
walk into the exam cold, with no studying, and answer all the questions
correctly . . . except for the questions that are misgraded on the exam
(I’ve yet to see a certification test that doesn’t require technically
inaccurate answers to get everything “right”).

Throwing out 90% of candidates for not having a certification in the IT
industry is about like throwing out 90% of candidates because their
ties aren’t the right width. I mean, sure, having ties of the “right”
width indicates an attention to detail and ability to keep up with
changing trends, which is useful for technical matters, but there’s no
guarantee the people you’ve excluded aren’t just fashion-impaired
despite attention to detail and thoroughly current knowledge of
information technologies, nor that the people with the “right” ties
aren’t more focused on fashion than professional skills, or even just
really lucky in their choice of ties today.

Chad P. wrote:

interpreter capabilities exercised by a program in a dynamic language:
|++++++++++++|

compiled static binary for an equivalent program from a static language:
|++++++++++++|

combination static/dynamic compiled binary from a dynamic language:
|+++++++++++----|

. . . roughly.

You can usually do something like this in Forth. As you’re developing,
you save off the whole enchilada (the Forth interpreters and compiler
and assembler, along with your application code, all of which reside in
the dictionary) as an executable. When you’re ready to release the
application, you take a special pass and strip out everything your
application doesn’t use, getting a smaller executable that only contains
the pieces of the Forth environment needed to run the application.

I haven’t spent any appreciable time inside either Common Lisp or
Scheme, or for that matter Ruby, so I don’t know how this would work in
any language except Forth. Maybe what you want is as “simple” as
implementing Ruby on top of Forth. :slight_smile:

There would likely be more binary size necessary, but considering that
even an interpreter is (generally) a compiled binary that just operates
on input, I don’t see any reason to assume we cannot compile
dynamic language code into a persistent binary with accommodations made
for the parts of the program that require runtime dynamic behavior.

No reason it can’t be done. The question is only “should it be done?” :slight_smile:

This strikes me as a superior approach to a JIT compiler/interpreter
approach like Perl’s, a pure interpreter approach like Ruby’s, or a
bytecode compilation plus runtime interpreter VM like Java’s, for
performance.

Java also has JIT, of course. Curiously enough, someone once told me
that if I looked at the JVM carefully, I’d see Forth. :slight_smile:

Add to that the potential increased performance for some
parts of a program written in a more dynamic language something like the
following might actually run faster than the equivalent compiled program
I diagrammed above:

|+++++++--------|

. . . depending on how well those dynamic bits (represented by the
hyphens) optimize at runtime for a particular run of the program.

Well … maybe we should leave that to the chip? :slight_smile:

On Tue, Sep 05, 2006 at 03:12:07AM +0900, M. Edward (Ed) Borasky wrote:

I’m not entirely sure how to explain what I’m thinking at this time, so
I’m not sure I’ll get my point across. Hopefully someone who reads this
will get where I’m aiming, and may even be able to help me clarify it.

Perhaps you’re thinking along the lines of Lisp or Forth, where an
application is layered on top of the entire compiler/interpreter/runtime
package and then saved as an executable. As far as I can tell, there’s
absolutely no reason this couldn’t be done for Ruby. IIRC that’s also
the way the Squeak Smalltalk environment works and the way Self worked.

No . . . that’s not quite it. Maybe a really bad diagram will help.

interpreter for a dynamic language:
|--------------------------------------------------|

interpreter capabilities exercised by a program in a dynamic language:
|++++++++++++|

compiled static binary for an equivalent program from a static language:
|++++++++++++|

combination static/dynamic compiled binary from a dynamic language:
|+++++++++++----|

. . . roughly.

There would likely be more binary size necessary, but considering that
even an interpreter is (generally) a compiled binary that just operates
on input, I don’t see any reason to assume we cannot compile
dynamic language code into a persistent binary with accommodations made
for the parts of the program that require runtime dynamic behavior.
This strikes me as a superior approach to a JIT compiler/interpreter
approach like Perl’s, a pure interpreter approach like Ruby’s, or a
bytecode compilation plus runtime interpreter VM like Java’s, for
performance. Add to that the potential increased performance for some
parts of a program written in a more dynamic language something like the
following might actually run faster than the equivalent compiled program
I diagrammed above:

|+++++++--------|

. . . depending on how well those dynamic bits (represented by the
hyphens) optimize at runtime for a particular run of the program.

Joseph wrote:

Risk Management IS NOT equivalent to FEAR; in that, you are right.

However, as I said earlier, no SIGNIFICANT progress can be expected
without some risk. Risk Management is about dealing with risk, not
eliminating it.

This we agree on.

Ruby and Ruby on Rails are not the safest choice, but I believe they
are among the very best choices for web development. There is a
slight risk in it, but not enough to stop any bold, courageous
corporation, startup or even lone developer from creating great software
with it.

And we agree on this too, to some extent. My argument is mainly that
without a certain level of knowledge about Ruby, the level of risk is
unknown, in which case it is prudent to assume the likely risk is high
until you have investigated it more closely.

For those of us who know Ruby, the best we can do to spread it is to
help people get to the stage where they know enough to
accurately assess the risk of using Ruby for their projects; until
people have that knowledge, the risk as seen by someone who doesn’t know
Ruby will be higher than the real risk of using Ruby.

Joel and people who share his views equate Risk with FEAR. That is
their main mistake. Ruby is ready now… not for everything, but it is
uniquely ready for web development; that, I believe, is a fact.

And it may very well be a fact, but again, there are still risks, and
those risks are greater for someone who doesn’t know Ruby, or who
doesn’t have the skills in-house, whereas the corresponding risks for a
Java shop of doing something in Java may be very low if their staff is
skilled enough in Java.

Personally I hate Java and love using Ruby, but if I had to manage a
team of Java gurus, I’d still consider Java a safer choice than
Ruby unless the project was long enough to allow significant time
retraining staff and possibly hiring replacements for anyone who decided
to leave.

They then have to make a tradeoff: low risk in Java (or C# or PHP or
LISP or whatever language they have sufficient experience with to
make the risk a known, low factor) or a possibly higher risk in another
language vs. possibly lower cost and shorter development time.
Developer happiness doesn’t count unless it affects one of the previous
two, or increases employee retention.

However, that possible payoff depends on whether they make a
successful transition, and they won’t know the chances of that if they
have little to no exposure to the language. It also depends on whether
Ruby is right for their specific project, which they won’t know if
they have little experience with the language.

These factors are all reasons why - regardless of how good Ruby is -
it would be quite irresponsible for someone to pick Ruby just because
you and I and other Rubyists say it’s good, without themselves having a
reasonable degree of knowledge about how appropriate it would be for
their project.

As another poster mentioned, there is an evolution in the adoption of
technology. Ruby is still with the early adopters, but that does not
mean it is not mature enough for critical applications.

For some it certainly is. But despite having reasonable experience
with Ruby, I’d hesitate to make a blanket statement about it.
Performance will be an issue for some apps (as I’ve noted elsewhere,
it won’t be for most web apps, but there certainly are web apps that
are CPU intensive too, and where C/C++ extensions would be vital if you
were to go with Ruby at the current stage), and lack of certain
libraries might be an issue for some.

Feature-poor XML integration IS an issue for my company (Edgeio.com) at
the moment. It’s one we expect to solve, but at the cost of additional
work which we wouldn’t incur in some other languages. Ruby is still
good for us, but it’s not a panacea for all types of development. It
likely never will be, but as time goes on, the space of apps for which Ruby
is a good choice will of course increase significantly and I do believe
it can supplant many currently more widely used languages.

We still use PHP for our web frontend, though. All our Ruby code is in
the backend for now. I did consider Rails, and maybe we’ll migrate to
it at some point, but currently the potential savings are too small to
outweigh the cost/time to migrate, and our frontend is growing thinner
and thinner as we refactor our middleware and backend, so it pays to
just wait for now.

What will prove me right, however, is not my rationale here or in my
previous post, but TIME; time will prove that those who stuck with Ruby
and Ruby on Rails did so wisely, because TRUTH is tested in time… I
believe Ruby is ready now, many people disagree, but ultimately time
and people using Ruby for critical applications will be the deciding
factor.

Ruby is ready now for some apps if you have the experience or your
potential cost savings are large enough to justify taking the time to
retrain your staff or hire new people.

I use Ruby because it’s the best of an increasing pool of bad
alternatives. I still haven’t found a language I don’t see tons of
flaws in, Ruby included. Ruby’s flaws are just less annoying than the
rest :slight_smile: I don’t believe in “truths” in language choices - people need
to pick what works for them, and while looking at what’s popular is
often good, there are always exceptions.

Time will tell us indeed, but I am not waiting for the jury, I am
learning Ruby and RoR now, eager to apply it to create cool, amazing
web applications… isn’t that the whole point? To push technology?
To make it fun again? To innovate?

That’s one viewpoint. But the point for the companies considering
language choices is what technologies will bring them the greatest
profit at the lowest risk.

As much as it’s tempting for me as a geek to pick technology based on
personal preference, ultimately I have a responsibility to the
shareholders that needs to take precedence.
(and since I’m one of them, and I work at a startup, there’s also the
hope of an opportunity for early “retirement” :slight_smile: )

Vidar