Joel Spolsky on languages for web programming

On Tue, Sep 05, 2006 at 04:25:36AM +0900, M. Edward (Ed) Borasky wrote:

Chad P. wrote:

[ snip a bunch of bad diagramming ]

You can usually do something like this in Forth. As you’re developing,
you save off the whole enchilada (the Forth interpreters and compiler
and assembler, along with your application code, all of which reside in
the dictionary) as an executable. When you’re ready to release the
application, you take a special pass and strip out everything your
application doesn’t use, getting a smaller executable that only contains
the pieces of the Forth environment needed to run the application.
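
To make the "strip out what the application doesn't use" step concrete, here is a
toy Ruby sketch (mine, nothing to do with any real Forth system; the word names
are invented): the stripping pass is basically a reachability walk over the
dictionary, keeping only the words the application's entry point can reach.

    # Toy illustration of image stripping: walk the call graph from the
    # entry word and keep only what is reachable.
    def reachable(dictionary, entry)
      keep = []
      work = [entry]
      until work.empty?
        word = work.pop
        next if keep.include?(word)
        keep << word
        work.concat(dictionary[word] || [])   # words this word calls
      end
      keep
    end

    dictionary = {
      'main'   => ['greet'],
      'greet'  => ['emit'],
      'emit'   => [],
      'unused' => ['emit']
    }
    p reachable(dictionary, 'main')   # => ["main", "greet", "emit"]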

That’s at least darned close. I’d have to learn more about what exactly
it does to know how close.

I haven’t spent any appreciable time inside either Common Lisp or
Scheme, or for that matter Ruby, so I don’t know how this would work in
any language except Forth. Maybe what you want is as “simple” as
implementing Ruby on top of Forth. :slight_smile:

Actually, now that I think about it, I wish something like that would be
what they’d do for Perl 6 instead of wedding it to a VM if they want
some kind of persistent compilation that doesn’t preclude runtime
dynamism.

The binary would likely need to be larger, but considering that even an
interpreter is (generally) a compiled binary that just operates on input,
I don't see any reason to assume we cannot compile dynamic language code
into a persistent binary, with accommodations made for the parts of the
program that require runtime dynamic behavior.
No reason it can’t be done. The question is only “should it be done?” :slight_smile:

I certainly think so, if only to provide an alternative to the “worst of
both worlds” bytecode-VM approach.

For performance, this strikes me as a superior approach to a JIT
compiler/interpreter approach like Perl's, a pure interpreter approach
like Ruby's, or bytecode compilation plus a runtime interpreter VM like
Java's.

Java also has JIT, of course. Curiously enough, someone once told me
that if I looked at the JVM carefully, I’d see Forth. :slight_smile:

It’s a quite different approach to JIT compilation than Perl’s, of
course.

Well … maybe we should leave that to the chip? :slight_smile:

That’s sorta the idea.

On 9/4/06, Joseph [email protected] wrote:

Vidar,

Risk Management IS NOT equivalent to FEAR; in that much, you are right.

However, as I said earlier, no SIGNIFICANT progress can be expected
without some risk. Risk Management is about dealing with risk, not
eliminating it.

I would have thought that eliminating risk would be a job well done
by someone responsible for Risk Management? No?

I am seeing an awful lot of chatter here along the lines that technology
decision makers are insipid jobsworths who fall in line behind the big
tech brands because they are afraid to stick their neck out, i.e. the only
reason they are not picking Rails is because they don't have the stones for it.

Has anyone ever considered the fact that many of these decision makers
are very serious, ethically minded people? They take their job seriously
and feel a strong responsibility to make a correct technology decision.

I am really strongly looking at Rails at the moment for an up-and-coming
solution. But we have some funky requirements that may result in our use
of Rails being purely reserved for rapid prototyping and development/test
tools. While I love how quickly you can get a best-practice solution
together, and how elegant the solutions are, I am concerned that the time
you save early on you lose down the road dealing with edge problems.

The concerns are not that questions exist, but that the questions are
not being really well answered. Some concerns that I have about RoR:

  • lack of good success & failure case studies with lessons learned
  • library (Ruby) and plugin (Rails) immaturity
  • library portability
  • what happens to productivity when you go outside the rails problem
    domain
  • how narrow is that problem domain (how easy is it to overstep)
  • what happens (to productivity/performance) when your rails apps
    need to do weird stuff like bolt-on SNMP processing ruby-code
  • how forgiving is the technology, if you make mistakes/bad assumptions,
    how easy is it to recover
  • deployment of Rails apps/bundling rails apps
  • immaturity of tools
  • international support

I am happy enough with a lot of these issues to go with a Rails solution
for something non-critical or prototyping. But I can't in good faith bet
the project on it. I would be happy enough to wait a year, though, and see
what happens to my concerns, as it is moving really rapidly, in the
meantime levelling up my Rails skills.

Don’t assume decision makers are stupid or spineless. Their responsibility
is to their employer; it is not their responsibility to promote a
technology. They read mailing lists and bloggers and case studies and do
Google searches. They see the extended debates on multinational Rails,
performance/scalability, plugin life-expectancy and weak/unknown
applicability outside of classic web apps. Sure, the rapid development
aspect with implicit best practices is great; that's why they are looking
at it in the first place, that's the carrot.

I would like it to be ready for prime time now, because next year I
probably won't be in a position to put in place any Rails solution. And
it sure is a lot of fun to work with - I can code for fun at home, but if
I get my employer to adopt it I can get paid for RoR-ing too.

On Tue, Sep 05, 2006 at 04:49:19AM +0900, Richard C. wrote:

I would have thought that eliminating risk would be a job well done
by someone responsible for Risk Management? No?
If you eliminate risk entirely, you end up guaranteeing failure – for
some definition of risk. Any definition of risk that does not result in
that end is either meaningless or effectively impossible to eliminate.

I am seeing an awful lot of chatter here along the lines that technology
decision makers are insipid jobsworths who fall in line behind the big
tech brands because they are afraid to stick their neck out, i.e. the only
reason they are not picking Rails is because they don't have the stones for it.

My take is that people who choose a technology based on popularity
rather than knowledge of the technology are insipid jobsworths who
fall in line behind the big tech brands because they are afraid to
stick their necks out. Those who choose a technology based on
knowledge of the technology, on the other hand, are smart people who
should be making a lot of money, whether the ultimate decision is to go
with J2EE, Rails, Common Lisp, VB.NET, or Wasabi. Of course, I think
VB.NET is unlikely to be a good choice outside of extremely pathological
edge-cases, but that’s beside the point.

Has anyone ever considered the fact that many of these decision makers
are very serious, ethically minded people? They take their job seriously
and feel a strong responsibility to make a correct technology decision.

. . . but if they end up making a decision based on the criteria Joel
Spolsky advocated in the essay that started all this discussion, they’re
either malicious or incompetent.

On 9/3/06, Joseph [email protected] wrote:

Best Regards,

Jose L. Hurtado
Web D.
Toronto, Canada

So …

we’ve never seen you at a TRUG meeting (we just had one yesterday).

Come on out and join us!

-austin

Richard C. wrote:

I would have thought that eliminating risk would be a job well done
by someone responsible for Risk Management? No?
Not that I’m at all knowledgeable in risk management, but No. Breaking
up the sum total of possible outcomes into “profit” and “risk” is Bad.
There are all sorts of axes on which to measure success – profit,
employee retention, positive reviews, server uptime, etc. – all of
which can go in the plus or the minus, and none of which can be
predicted with certainty.

That is, I’d imagine a good Risk Manager has to balance Possibility of
Server Crash against Possibility of Profit, and if ____ can greatly
increase the Possibility of Profit at the cost of only slightly
increasing the Possibility of Server Crash, it might be worth pursuing.

I am seeing an awful lot of chatter here along the lines that technology
decision makers are insipid jobsworths who fall in line behind the big
tech brands because they are afraid to stick their neck out.
Well, there’s a mix, obviously. I happen to work for an organization
that doesn’t do a great job filtering out the insipid ones, but there
are still some non-insipid ones around.

The concerns are not that questions exist, but that the questions are
not being really well answered. Some concerns that I have about RoR:

  • lack of good success & failure case studies with lessons learned
  1. Have you read these two articles from Relevance, LLC?
    http://tinyurl.com/j63la
    http://tinyurl.com/h6f5g
  2. What such case studies have you read about the other options you’re
    considering?
  • library (Ruby) and plugin (Rails) immaturity
  • library portability
  • what happens to productivity when you go outside the rails problem domain
  • how narrow is that problem domain (how easy is it to overstep)
  • how forgiving is the technology, if you make mistakes/bad
    assumptions,
    how easy is it to recover
  • immaturity of tools
    Well, it’s no wonder those questions aren’t being answered. They’re
    ridiculously vague. Ruby/Rails libraries/plugins/tools are all over the
    map – some are mature; some are immature; some are mature but
    incomplete; some are complete but immature; some have been tested on all
    types of systems; some only work on a POSIX environment; some work on
    all environments, but aren’t really tested on Win32; some are fast; some
    are slow; no two libraries are not on fire.
  • what happens (to productivity/performance) when your rails apps
    need to do weird stuff like bolt-on SNMP processing ruby-code
    Never had to do SNMP processing, sorry.
  • deployment of Rails apps/bundling rails apps
    There are LOTS of case studies of this. Assuming you’ve read them, the
    next step is to deploy something and find out.
  • international support
    I hear tell of a number of multilingual Rails apps. I don’t have to deal
    with i18n, myself.

I am happy enough with a lot of these issues to go with a Rails solution
for something non-critical or prototyping. But I can’t in good faith bet
the project on it.
That’s probably a good bet, especially if your employer’s got no Ruby
talent on hand. I wouldn't wait a year; just measure the success of your
own non-critical app.

Devin

Richard C. wrote:

The burden of proof is on Rails to
establish it is ready for prime time.
I beg to differ, but the burden of proof is on people who give a shit
about “the burden of proof.”

Devin
(Pardon my English.)

On 9/4/06, Chad P. [email protected] wrote:

If you eliminate risk entirely, you end up guaranteeing failure – for
some definition of risk. Any definition of risk that does not result in
that end is either meaningless or effectively impossible to eliminate.

Well, that's pedantic; you will notice that I said nothing about
eliminating it entirely. I always thought your technology choice should
not add to your existing business risk wherever you can help it.

Currently, the way I see it is that if you choose a Java solution over a
RoR solution you are increasing business risk, due to the fact that it
will take longer to get your v1.0 up. In the meantime the RoR solution
is getting very polished and mature, and is in a position to ship or
secure revenue or mind share early. Also, lengthy development gives the
ADD marketing crowd too much time to get antsy.

The main unanswered question in Rails is whether ‘Rails is quicker’
holds true for a sufficiently large application space, or whether this
is a truism only in a conventional web app space, and a greenfield
application to boot (the Rails Sweet Spot™).

If you have to do weird stuff like:

  • interact with systems in some non web way (CORBA, SNMP, XML)
  • use legacy databases
  • run on single-server machines
  • do CPU intensive work in requests (e.g. image manipulation)

do you still get to keep the productivity boost? If you don’t, the RoR
advantage starts to disappear, and all those other conventional
technologies start to look more attractive: all those ‘works as
advertised’ libraries, the alternative technology choices if your main
library/binary is inadequate, and a huge volume of proven, documented
best practices.

The burden of proof is on Rails to
establish it is ready for prime time.

Devin M. wrote:

Richard C. wrote:

The burden of proof is on Rails to
establish it is ready for prime time.

I beg to differ, but the burden of proof is on people who give a shit
about “the burden of proof.”

I’m sorry, that came out wrong. Please let me rephrase:

I beg to differ, but the burden of “proof” is on people who give a shit
about the burden of “proof.”

That’s better. Sorry about the mix-up.

On 9/4/06, Devin M. [email protected] wrote:

Not that I’m at all knowledgeable in risk management, but No.

Risk management isn’t really about that. Every time I propose a
design I am exercising risk management. One way of measuring
that design’s quality is by how much it addresses, minimises or
identifies risk.

I have worked in bleeding edge technology for most of my career, so a
sixth sense for likely risk sources gets hammered into you.

The fact that one solution may take 50-100% longer to implement
isn’t necessarily a big risk. But when a library or technology that
is critical to your infrastructure exposes its project-breaking flaw
you are in serious sh1t, as unless there is a feature equivalent
non-broken alternative, or you can write your own conveniently, your
project schedule is out the window. Your initial schedule is nothing
compared to the time overruns these kinds of showstoppers
introduce.

Well, there’s a mix, obviously. I happen to work for an organization
that doesn’t do a great job filtering out the insipid ones, but there
are still some non-insipid ones around.

I work for an organisation where there are a lot of smart decision
makers, and there are a lot of otherwise smart decision makers who have
lapses of judgement (often during committee-style design). The best
stuff here always happens when the number of designers is low and the
scope is controlled.

We don’t have a problem with technology experimentation,
but we are officially in an area that I would term ‘hard software
development’:

  • regardless of how much time and skill we have available, it is not
    always possible to deliver something better than a ‘merely acceptable’
    solution. There are often too many elements in the picture, and too
    many cruddy interfaces. So there is a strong tendency to default to
    conservative options.
  1. Have you read these two articles from Relevance, LLC?
    http://tinyurl.com/j63la
    http://tinyurl.com/h6f5g

Just now. He confirms what I suspected about Rails productivity gains.
I knew the 10x thing was pure bull, but as far as I am concerned,
shaving
30%-50% off an equivalent Java solution is actually a great result.

I look forward to his discussion on Rails outside the sweet spot.

Though there is a bit too much zealotry.

  1. What such case studies have you read about the other options you’re
    considering?

I have read this negative one:
http://rationalist-manifesto.blogspot.com/2006/07/web-applications-vs-web-sites-ruby-on.html
I think the article sucked, though. He raised a legitimate concern about
Rails outside the sweet spot, based on a very, very flawed idea of his.
It's clear he tried something stupidly outside the sweet spot and then
concluded that Rails outside the pale was not wise. He didn't back up his
findings at all. Frustrating. I would have liked to see the point made by
someone objective, and I would like to have seen actual findings.

Mostly what I am concerned about are War stories

  • case studies of Rails adoption, even from companies that were not
    strong in web development
  • library gotchas
  • Java conversion stories
  • application of Rails outside of classic web-site problem domains
Well, it's no wonder those questions aren't being answered. They're
ridiculously vague.

I don’t think so - these are questions that get commonly debated
in other platforms. Many are deal breakers or otherwise a massive
source of risk.

I have seen reports on this list that Rails churns its plugins, that
correct plugin operation is not guaranteed as it matures. This could
lead to issues where you are dependent on plugins for essential
behaviour like Background Ruby, or Globalize, and a 1.8.5-style
security fix comes along that you cannot safely or quickly upgrade
to. People with an eye for risk take that pretty seriously - they
expect security issues, but not rock-and-hard-place conflicts like
that.

Ruby/Rails libraries/plugins/tools are all over the
map – some are mature; some are immature; some are mature but
incomplete; some are complete but immature; some have been tested on all
types of systems; some only work on a POSIX environment; some work on
all environments, but aren’t really tested on Win32; some are fast; some
are slow; no two libraries are not on fire.

i.e. all over the place - that makes it risky. I don’t see enough
libraries that I am likely to depend on, at a mature enough version, to
warrant doing anything mission critical with them. There are a lot of
libraries that require binary installations that are not available on
all platforms.

Libraries are life, and generally the greatest source of risk;
developers like libraries that they can trust. If you can’t trust them
you have to limit yourself to problems that do without, or write your
own equivalents (which will wipe out your rails productivity boost).

  • what happens (to productivity/performance) when your rails apps
    need to do weird stuff like bolt-on SNMP processing ruby-code
    Never had to do SNMP processing, sorry.

Well lots of people do, and the question still stands as to what happens
to the boost if you have to do any significant Ruby code processing in
Rails (non-SNMP). I have got some sloppy partial code around that
builds up a DIV graph, and there is a perceptible delay in the drawing
of the page that uses it. Admittedly it's under worst case scenarios, but
it does strike me that you really don't want to be doing anything funky
in your controllers at all.
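
For what it's worth, the shape I'm aiming for is roughly this (the class and
model names are invented, nothing from a real app): keep the controller thin
and push the graph-building into a plain Ruby class that can be tested and
optimised, or later cached, on its own.

    # Plain Ruby, no Rails involved: turns a flat list of nodes into a
    # parent_id => [children] hash the view can walk to emit its DIVs.
    class DivGraphBuilder
      def initialize(nodes)
        @nodes = nodes
      end

      def build
        @nodes.inject({}) do |graph, node|
          (graph[node.parent_id] ||= []) << node
          graph
        end
      end
    end

    class GraphsController < ApplicationController
      def show
        # The controller only wires things together; the heavy work
        # lives in DivGraphBuilder.
        @graph = DivGraphBuilder.new(Node.find(:all)).build
      end
    end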

  • deployment of Rails apps/bundling rails apps
    There are LOTS of case studies of this. Assuming you’ve read them, the
    next step is to deploy something and find out.

I am not talking of the classic situation where you own the server or
can directly install on customer equipment. I am talking of scenarios
where you have to produce a Windows/Mac OS/Linux installer/packager
that will extract out a working Rails app with dependencies and
minimal interaction from the user.

I would kill to read a step-by-step example of this - from source
control to OS-specific, user-friendly installers. I have been looking a
bit at Capistrano; I haven't delved deeply enough, but it doesn't seem to
go the full distance that I am talking about. And if I am not mistaken,
the author has stated that it's not a path he will pursue.

I saw another article that was pretty cute too - a guy had a tutorial
that went the long way round of achieving a Rails app bundle. It didn't
go the full distance, but it was pretty impressive. Sorry I can't recall
the link or the author's name, but I would say many people here know the
article I am talking about.

  • international support
    I hear tell of a number of multilingual Rails apps. I don’t have to deal
    with i18n, myself.

Not something we can ignore - and we get all the hard languages to
deal with - Chinese (simplified and traditional) and Japanese - and
while most European countries or European-language customers are OK
with our enterprise solutions in English, that disappears when you
start dealing with governments.

Globalize seems sufficient to the task, and honest about what it
delivers.

I have done some research and it seems to clearly delineate between
‘for free’ behaviour and stuff you have to mess with yourself (date i18n
controls, bi-directional text), and it has hooks for when you need to
redirect to locale-specific views & templates, and maybe locale-specific
partials too. Ideally you want to minimise the amount of view
localisation, but sometimes that's cleaner.

I am just wondering what its resource input formats are like, whether
it's strict about them, whether it supports insertion points, and
whether you can build resource formats by hand or dynamically
(converting an existing XML format into a Globalize format). Also I need
to look at whether there is a lookup cost (I think it's one-time-only).
I guess I just have to poke around with the system more and have a look
at the formats.

I got an impression that it was something of an opinionated plugin -
there
was a right way to use it, and misuse could get you in trouble. That
seems
par for the course with Rails though.
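
Going purely from memory of the Globalize docs as I read them (so treat the
exact calls and option strings below as approximate rather than gospel, and
the model/field names as made up), the basic wiring looked something like
this:

    # config/environment.rb -- pull in Globalize and pick a base locale
    include Globalize
    Locale.set_base_language('en-US')

    # app/models/product.rb -- mark the attributes that get translated
    class Product < ActiveRecord::Base
      translates :name, :description
    end

    # app/controllers/application.rb -- choose the locale per request
    class ApplicationController < ActionController::Base
      before_filter :set_locale

      def set_locale
        Locale.set(params[:locale] || 'en-US')
      end
    end

    # in a view, UI strings go through the translation lookup:
    # <%= 'Welcome to the catalogue'.t %>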

I am happy enough with a lot of these issues to go with a Rails solution
for something non-critical or prototyping. But I can’t in good faith bet
the project on it.
That’s probably a good bet, especially if your employer’s got no Ruby
talent on hand.

I am working on the lack of Ruby talent by, well, educating myself at
least. Rails isn't hard, but it's got a big domain learning curve (which
people confuse with the Rails learning curve if they are fretting), but
Ruby is easy to pick up once you have recognised its key differences
from your background. Ironically, the volume of Ruby online info slows
this process down - it dilutes access to those hardcore reference guides
and total n00b material.

In fact, if you are doing any kind of web development, you should be
checking out WATIR anyway, which is a Ruby primer, and will address
your lack of Ruby talent pretty quickly.
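
For anyone who hasn't seen it, a first WATIR script is about this small (the
URL and field names below are made up, and Watir drives Internet Explorer
only):

    require 'watir'

    ie = Watir::IE.new
    ie.goto('http://localhost:3000/login')         # made-up URL
    ie.text_field(:name, 'login').set('admin')     # made-up field names
    ie.button(:value, 'Log in').click
    puts 'logged in' if ie.contains_text('Welcome')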

I wouldn't wait a year; just measure the success of your
own non-critical app.

Not enough non-critical projects; we have longer project durations. I am
trying to fit in Rails prototyping between all the weird requirements
gathering I am doing now.

I don’t have a problem with working on Rails myself, I always figured
that if I could get a functional prototype of what our main app is
doing, working in Rails, it would be a good start. But it's all these
edge conditions that would kill Rails adoption - and I don't have the
time to both address/investigate the concerns

On Wed, Sep 06, 2006 at 02:36:02AM +0900, Richard C. wrote:

Libraries are life, and generally the greatest source of risk;
developers like libraries that they can trust. If you can’t trust them
you have to limit yourself to problems that do without, or write your
own equivalents (which will wipe out your rails productivity boost).

. . . once.

Imagine you have choice A and choice B. With choice B, you have all the
libraries already. With choice A, you’re missing some. On the other
hand, with choice A you have a productivity boost that provides extra
time roughly equivalent to the time it takes to write the libraries that
are missing.

Now imagine you’re doing the same thing again, a few years later. Would
you rather have chosen option B the first time, and be faced again by
the same trade-off between core task productivity and available
libraries, or have chosen option A the first time, have written the
libraries you needed, and now have only to choose between two options
with the same library availability for the task at hand with wildly
different productivity characteristics?

Productivity doesn’t go away just because you’re spending the same
amount of time completing the overall goal. It just gets used more.
Rather than producing only a web app, you are in the same time producing
a web app and the libraries necessary to support it.

Now, if producing those libraries the first time ends up taking three
times as long as the app itself with choice B would have, that’s another
story – but if I take your “wipe out your rails productivity boost”
comment at face value, I’m still choosing Rails.
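
To put rough numbers on that reasoning (the figures below are purely
illustrative, not from any real project):

    # Illustrative only: say the app proper takes 10 units of effort
    # with choice B, choice A is 40% faster on the app proper, and the
    # missing libraries cost 4 units to write the first time around.
    app_b  = 10
    app_a  = 10 * 0.6
    libs_a = 4

    first_project  = { :a => app_a + libs_a, :b => app_b }  # {:a=>10.0, :b=>10}
    second_project = { :a => app_a,          :b => app_b }  # {:a=>6.0,  :b=>10}

    # Roughly a wash the first time, but the second project with choice A
    # gets the full boost because the libraries already exist.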

Well lots of people do, and the question still stands as to what happens
to the boost if you have to do any significant Ruby code processing in
Rails (non-SNMP). I have got some sloppy partial code around that
builds up a DIV graph, and there is a perceptible delay in the drawing
of the page that uses it. Admittedly it's under worst case scenarios, but
it does strike me that you really don't want to be doing anything funky
in your controllers at all.

It strikes me as a bad idea in general to pursue edge cases in
frameworks. Frameworks are for general-case development. Their benefit
is that they do the common things for you. If your application is 98%
uncommon things, you aren’t going to get much use out of frameworks.

SNMP, I realize, is not an uncommon case, but it’s uncommon enough for
web app development that expecting a web development framework to do the
heavy lifting for you is a somewhat odd demand, I think.

go the full distance that I am talking about. And if I am not mistaken,
the author has stated that it's not a path he will pursue.

I’m entirely with you on this: web application framework advocates, for
any framework in any language, seem to believe that the developer will
always be the one deploying and that deployment will be accomplished at
a server where the developer has complete access and control. There
isn’t nearly enough attention on the problem of removing control
characteristics and direct access capabilities, or even passing on
deployment to someone else entirely who wasn’t part of the original
picture at all.

Christian N. wrote:

Bil K. [email protected] writes:

Attempting remote installation of ‘MerriamWebster’
ERROR: While executing gem … (NoMethodError)
undefined method `name’ for -517611318:Fixnum

on ruby 1.8.2 (2004-12-25) [powerpc-darwin8.2.0]

Sounds like the OS X symbol table overflow bug. Update your Ruby.

Bingo.

Thanks,

Long post! Ack!

The fact that one solution may take 50-100% longer to implement
isn’t necessarily a big risk. But when a library or technology that
is critical to your infrastructure exposes its project-breaking flaw
you are in serious sh1t, as unless there is a feature equivalent
non-broken alternative, or you can write your own conveniently, your
project schedule is out the window. Your initial schedule is nothing
compared to the time overruns these kinds of showstoppers
introduce.
So… the 50-100% extra time is okay, as long as it’s known up front?
Why not just pad the extra 50-100% for the Ruby estimate, and just spend
the last few weeks partying when you finish early? :stuck_out_tongue:

  1. What such case studies have you read about the other options you’re
    considering?

I have read this negative one:
http://rationalist-manifesto.blogspot.com/2006/07/web-applications-vs-web-sites-ruby-on.html
My question had an agenda. I meant: leading up to the moment you picked
[Java, I presume], what case studies had you read about its use? Just
trying to scope out for any double-edged swords. Sorry; I was cranky
yesterday.

Mostly what I am concerned about are War stories

  • case studies of Rails adoption, even from companies that were not strong
    in web development
    Well, at RailsConf, I talked to a guy who’d never programmed in his life
    before Rails, and he said that within 2 months of picking it up, he’d
    deployed an app to a customer. shudder

I have seen reports on this list that Rails churns its plugins, that
correct plugin operation is not guaranteed as it matures.
Hrm. The only thing I really recall breaking a whole bunch is Engines.
That said, through experience developing some of these apps, I’ve become
much more conservative of what plugins I use. I wrote my own tagging
code; were I to do it again, I’d write my own user/password code; etc.
Not so much because of Rails upgrades causing breakage, but because the
plugin implementations turned out to be flawed/buggy (read: poorly
tested).

to. People with an eye for risk take that pretty seriously - they
expect security issues, but not rock-and-hard-place conflicts like
that.
That’s true, and that’s one of the ways in which Rails needs somewhat
guru coders – ones who test their app thoroughly, and are able to patch
the broken spots when they come up.

Ruby/Rails libraries/plugins/tools are all over the map
i.e. all over the place - that makes it risky
Same could be said about any language, no?

I don’t see enough libraries that I am likely to depend on, at a mature
enough version, to warrant doing anything mission critical with them.
There are a lot of libraries that require binary installations that are
not available on all platforms.
Ah, yeah, can’t help you there – I haven’t needed much in the way of
libraries – XML parser, HTML parser, etc… I might be able to help you
with the second part in a few days – I’m finally getting around to
profiling, and hence need to compile Shugo’s or ZenProfiler for win32.

write your own equivalents (which will wipe
out your rails productivity boost).
Well, not according to the Relevance folks, but I admit, they seem
pretty adept.

Never had to do SNMP processing, sorry.
Well lots of people do, and the question still stands as to what happens
to the boost if you have to do any significant Ruby code processing in
Rails (non-SNMP).
I’m confused… are you asking if the Ruby language is as quick to
program in as the Rails framework, or if the framework is fast despite
the language? Or are you talking about writing a Ruby extension? In any
case, I don’t know if you are going to get metrics much more specific
than the Relevance posts. People seem pretty guarded about their own
professional productivity. Probably a little productivity arbitrage
going on.

I am not talking of the classic situation where you own the server or
can directly install on customer equipment. I am talking of scenarios
where you have to produce a Windows/Mac OS/Linux installer/packager
that will extract out a working Rails app with dependencies and
minimal interaction from the user.
Interesting. Well, Rubyscript2Exe is supposed to do just that. Never
used it, but seen an example package being run. A little slow to start
up.

I don’t have a problem with working on Rails myself, I always figured
that if I could get a functional prototype of what our main app is
doing, working in Rails, it would be a good start. But it's all these
edge conditions that would kill Rails adoption - and I don't have the
time to both address/investigate the concerns
Ah. Well, I might, were I working for you.

But I’m not. :smiley:

Devin
You made it! You win a prize!

On 9/5/06, Chad P. [email protected] wrote:

or write your own equivalents (which will wipe
out your rails productivity boost).

. . . once.

Imagine you have choice A and choice B. With choice B, you have all the
libraries already. With choice A, you’re missing some. On the other
hand, with choice A you have a productivity boost that provides extra
time roughly equivalent to the time it takes to write the libraries that
are missing.

Time saved is not the only thing that libraries provide. For instance,
you
may simply not have the expertise or business mentality for library
development. e.g. Marketing, management and other project stakeholders
cannot determine if a library is ‘good’ or ‘done’ or whatever - a
library is
a software product whose customers are developers. Libraries generally
require higher quality than the products that use them too.

I know for instance that I would smack anyone at work who suggested
we write a(nother) security library. I have been in companies where
our library choice was something that was discussed in sales pitches.
If you use something sensitive like a credit card payment processing
engine, customers might be sensitive to what libraries you use to access
it (the vendors, an established third party etc.).

There are also many companies where library authoring may be a complete
anathema. They don’t know how to fit it in with their business processes
(like testing) or how the life expectancy and support of a library
outlasts its
first application, and how it gets support. There may also be
significant
developer fear about having to author and effectively support something
like that.

Also consider this: the Ruby/Rails environment has probably more active
library development per capita (of developers) than established
languages.

I know in Java the quality of the basic libraries is excellent, and it
is worth checking out virtually anything that the Jakarta crew work on,
as they are especially good. But ironically, this means that library
development skills for your average Java programmer have severely
atrophied through lack of use, and if they then have to write
equivalents in Ruby… that ramps up your risk completely. I know for a
fact that when I am presented with an in-house Java library my first
reaction is ‘aw crap’. I have been pleasantly surprised on occasion, and
I work with good people now, but there is something about Java that
attracts the worst kind of well-intentioned designer - the kind that
starts developing meta-solutions instead of addressing the given
problem.

It strikes me as a bad idea in general to pursue edge cases in
frameworks.

Well I wasn't distinguishing between Ruby and Rails here. I wasn't
requesting that Rails accommodate edge conditions - the correct approach
to edge conditions in Rails is to downshift to Plain Old Ruby. This is
pretty obvious with its 'helper' hook, it is well stated in the good
literature, and 'Recipes' in general are implemented this way.

However, my point was that using Ruby IN Rails in this manner
could kill your scaling. It's another one of those rock-and-hard-place
situations - the implication is that only 'sweet spot' Rails apps
can truly scale. Whether Rails scales up to MySpace levels is one
thing, and whether you need twice or ten times as many machines is
another. But I don't particularly care about those cases.

SNMP, I realize, is not an uncommon case, but it’s uncommon enough for
web app development that expecting a web development framework to do the
heavy lifting for you is a somewhat odd demand, I think.

I try to avoid the thing as much as I can, doctor's instructions. It's a
fine protocol for what it was intended for, but the only uses of it I
ever see are when people build insane application protocols over it.

I doubt I am the first person to think about using Rails' easy machine
parallelism to split up the workload involved when a couple of thousand
SNMP agents start screaming at you all at once.

This problem alone is what will dictate the shape of our new app. It is
not a problem we have ever solved properly before.

I would get a perverse satisfaction from using ‘risky/slow rails’ to
solve our most persistent scaling problem. An ActionSNMP plugin would be
quite cool indeed.
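
To make that concrete, the rough shape I have in mind is something like this
(entirely hypothetical, stdlib only; the trap decoding and the hand-off into
the app are stubbed out with stand-in methods):

    require 'thread'
    require 'socket'

    # Stand-ins for real work: decoding would come from an SNMP library,
    # and handle() would push the result into the app proper.
    def parse_trap(payload); payload; end
    def handle(trap); puts "trap: #{trap.inspect}"; end

    traps = Queue.new

    # A small worker pool drains the queue, so a burst of traps from a
    # few thousand agents is spread across workers instead of being
    # handled inline in whatever received them.
    workers = (1..4).map do
      Thread.new do
        loop { handle(parse_trap(traps.pop)) }
      end
    end

    # Listener: dump raw UDP payloads onto the queue as fast as they arrive.
    listener = Thread.new do
      socket = UDPSocket.new
      socket.bind('0.0.0.0', 1062)   # unprivileged stand-in for the trap port
      loop { traps << socket.recvfrom(4096).first }
    end

    [listener, *workers].each { |t| t.join }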

I’m entirely with you on this: web application framework advocates, for
any framework in any language, seem to believe that the developer will
always be the one deploying and that deployment will be accomplished at
a server where the developer has complete access and control. There
isn’t nearly enough attention on the problem of removing control
characteristics and direct access capabilities, or even passing on
deployment to someone else entirely who wasn’t part of the original
picture at all.

Well Rails is new, and a moving target. The audience for Rails solutions
is web developers, and its original core audience was probably the
Perl, PHP, ASP community (I am guessing) who had direct access to
production servers.

The success attracts other kinds of web developers with indirect server
access. I don’t expect the Rails binary distribution problem to stay
unsolved for long. Instant Rails is an early step in that direction, and
I don’t think the problem is particularly hard anyway - you just bundle
your gems and binary dependencies together. Once you are dealing
with say, using existing MySQL databases you are automatically
dealing with a knowledgeable customer, and you can invisibly install
something like sqlite for the technofearful.

Rails is also attracting attention from weirdos like me who are seeing
what else we can do with it, and who see the lack of convenient
source deployment methods as a problem. Also, the lack of an established
build process is a big step backward, as we have gotten very used to
100% non-manual build processes that do lots of non-build activities
too (like source analysis, unit tests, product watermarking). Not all
steps are applicable to Rails, and Rails does add some of its own
(deploy to test server & test), but the principle remains the same.

On Thu, Sep 07, 2006 at 01:54:44AM +0900, Richard C. wrote:

Time saved is not the only thing that libraries provide. For instance, you
may simply not have the expertise or business mentality for library
development. e.g. Marketing, management and other project stakeholders
cannot determine if a library is ‘good’ or ‘done’ or whatever - a library is
a software product whose customers are developers. Libraries generally
require higher quality than the products that use them too.

Mostly, they require higher quality APIs. Everything else can be
changed.

I know for instance that I would smack anyone at work who suggested
we write a(nother) security library. I have been in companies where
our library choice was something that was discussed in sales pitches.
If you use something sensitive like a credit card payment processing
engine, customers might be sensitive to what libraries you use to access
it (the vendors, an established third party etc.).

Good point(s).

There are also many companies where library authoring may be a complete
anathema. They don’t know how to fit it in with their business processes
(like testing) or how the life expectancy and support of a library outlasts
its
first application, and how it gets support. There may also be significant
developer fear about having to author and effectively support something
like that.

I’m not terribly sympathetic to unreasonable fears, I’m afraid. I am,
however, sympathetic to people who have to labor under conditions of
unreasonable fear engendered by the ignorance of supervisors, which is
almost the same thing.

if they then have to write equivalents in Ruby… that ramps up your risk
completely. I know for a fact that when I am presented with an in-house
Java library my first reaction is ‘aw crap’. I have been pleasantly
surprised on occasion, and I work with good people now, but there is
something about Java that attracts the worst kind of well-intentioned
designer - the kind that starts developing meta-solutions instead of
addressing the given problem.

I’ll take your word for it. I’m not active in the Java community, and
would probably be stoned as a heretic if I was (nouns bore me, and an
entire language devoted to writing in passive voice never struck me as a
very good idea).

However, my point was that using Ruby IN Rails in this manner
could kill your scaling. It's another one of those rock-and-hard-place
situations - the implication is that only 'sweet spot' Rails apps
can truly scale. Whether Rails scales up to MySpace levels is one
thing, and whether you need twice or ten times as many machines is
another. But I don't particularly care about those cases.

MySpace doesn’t even scale to MySpace levels. It’s written in CFML, for
crying out loud.

I do care about single machine performance though.

SNMP, I realize, is not an uncommon case, but it’s uncommon enough for
web app development that expecting a web development framework to do the
heavy lifting for you is a somewhat odd demand, I think.

I try to avoid the thing as much as I can, doctor's instructions. It's a
fine protocol for what it was intended for, but the only uses of it I
ever see are when people build insane application protocols over it.

Maybe that’s the problem. All I know for sure is that I’ve never run
across a situation where my first thought was “Y’know, I could solve
this by writing an app that leverages SNMP.”

quite cool indeed.
I’m curious about how that goes, whether you end up using Rails or not.

Well Rails is new, and a moving target. The audience for Rails solutions
is web developers, and its original core audience was probably the
Perl, PHP, ASP community (I am guessing) who had direct access to
production servers.

There is a startlingly high number of web developers out there who do
the sort of consulting work that does not guarantee direct access to
production servers, unfortunately.

Rails is also attracting attention from weirdos like me who are seeing
what else we can do with it, and who see the lack of convenient
source deployment methods as a problem. Also, the lack of an established
build process is a big step backward, as we have gotten very used to
100% non-manual build processes that do lots of non-build activities
too (like source analysis, unit tests, product watermarking). Not all
steps are applicable to Rails, and Rails does add some of its own
(deploy to test server & test), but the principle remains the same.

When a Rails app deployment process consists of unceremoniously dumping
a bunch of files into a directory across an SFTP connection, then it
will be ready for prime time in the indirect-access market. Until then,
it’s still pretty much all Perl and PHP. That’s how it looks to me, at
any rate.

Austin,

I’d love to join TRUG. I sent you an email privately, do answer back! ;)

Best Regards,

Jose L. Hurtado
Web D. / IT Security
Toronto, Canada

On 9/6/06, Devin M. [email protected] wrote:

Long post! Ack!

Yeah, research and reqs gathering has pimped out my typing
skills. I have written more English than code this year. :sob:

So… the 50-100% extra time is okay, as long as it’s known up front?

If it's known, and can be planned for, and identified risk is either low
or has backup plans in place, then it's generally okay. Unless you
are in a situation where you are making software commodities
(like yet another social networking site) and productivity converted
into exclusive features is what drives your business, the productivity
hit is okay.

Or at least that's project-manager think: that a low-scalar productivity
hit is fine as long as there is no additional hit due to risk.

I don’t tend to agree. Once a project schedule extends beyond a low
single-digit number of months, it turns into pure fantasy. Also,
software schedules are like flypaper - the longer they are, the more
shit sticks to them. ‘Completed’ software attracts better change
requests than incomplete software.

When the conception date and the ship date become very far apart,
project sponsors forget their original reasoning behind a feature
request.
You can bitch about marketing all you want, but they are just human
too.

I have done enough agile work in Java to appreciate on a surface level
what Ruby/Rails can do for you. I believe the productivity boost is
extremely important, and the time saved can be applied to polishing
the final product or introducing more features, or as a hedge against
possible risky areas of Ruby/Rails (say no secure SNMPv3 support)
where we might have to wrap a java library or something. Under those
circumstances, risk assessment is about showstoppers like:

  • you cannot accomplish something in Rails at all - you have overstepped
    the current capabilities of the framework and Ruby has no good fallback,
    or existing functionality is not 100% ready for primetime or your needs
    (e.g. crypto stuff, i18n, enterprise libraries, install on a specific
    platform etc.)
  • its single-CPU performance in a properly optimised solution is ‘not
    good enough’ compared to a Java equivalent
  • Rails app distribution to customers is hard to achieve well

These are my risk concerns. I am listing them, not stating them as
facts. But the other concerns I am familiar with too, as I know how my
peers and superiors think, and they are the kind of questions they will
ask. So I look out for them - even though I have personally confirmed
them as non-issues, I need to have a prepared defense against them.

Why not just pad the extra 50-100% for the Ruby estimate, and just spend
the last few weeks partying when you finish early? :stuck_out_tongue:

lol! They would probably get suspicious as my tan started to improve all
the way up to the March delivery date …

  1. What such case studies have you read about the other options you’re
    considering?
    My question had an agenda. I meant: leading up to the moment you picked
    [Java, I presume], what case studies had you read about its use? Just
    trying to scope out for any double-edged swords. Sorry; I was cranky
    yesterday.

Heh. I am thick-skinned - I use up all my irritability on marketing;
heated responses don't faze me.

And to subvert your agenda: it's probably because of the Java situation
that people are sensitive about technology choice. During Java's early
adoption phase there was really nothing like it at the time, and there
was a total upheaval in software development as this web thing started
to become a platform.

Java eventually matured into a successful language after burning through
the hype bubble. Once the smoke cleared, people realised that WORA
didn't mean Java Office or Java OS, but in fact meant that Java code
would run on whatever you ported your runtime to. Once it failed as a
consumer GUI, people moved on to what it was really good at, like
security, i18n, network software, development solutions and web apps.

But people really got hurt in the 1.1 era. It had enough language/API
features to be useful, but it hurt to develop in. As a result, people
are anal about technology choice. Of course, now we actually have
choices to be anal about.

And if they ever got burned on a development choice before, they can
go ultra-conservative. There is a perception that Rails may have certain
weaknesses. Many are not true, some are. Not all the false ones
are being properly dismissed as rubbish, and some of the true ones
are not being debated enough for them to be quantified.

And Joel is right: technology steering committees are a useless
waste of time.

Well, at RailsConf, I talked to a guy who’d never programmed in his life
before Rails, and he said that within 2 months of picking it up, he’d
deployed an app to a customer. shudder

Nods. I was very fascinated by how it promotes best practices. It
removes unnecessary choices from you, like where your view code goes,
where to put your utility code (helpers) and wiring code (controllers),
and how to break up HTML generation usefully (partials & components).

Hell, once I am up to speed I am thinking of teaching it to my Dad, who,
close to retirement age, is considering learning how to do some
development. In the interests of prolonging his existence on this
planet, I think Ruby/Rails is an excellent choice. It's got practical
applications, it has a short feedback cycle for learning, and
principle-of-least-surprise is a real phenomenon and not
language-fanboy-speak.

Also - imagine if your customer isn't too fussy and is happy enough with
scaffold code … it boggles the mind how far you could take it and how
many people you could get to do it. The hardest thing about that kind
of software development is finding more customers like that.
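
(For anyone following along who hasn't seen it: in the Rails of today that
really can be a single line in a controller, or a `ruby script/generate
scaffold` run that writes equivalent CRUD code into the app for you to edit.
The model name below is made up.)

    class ProductsController < ApplicationController
      # Rails 1.x dynamic scaffolding: list/show/new/edit/destroy
      # actions generated at runtime for the Product model.
      scaffold :product
    end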

Hrm. The only thing I really recall breaking a whole bunch is Engines.
That said, through experience developing some of these apps, I’ve become
much more conservative of what plugins I use. I wrote my own tagging
code; were I to do it again, I’d write my own user/password code; etc.
Not so much because of Rails upgrades causing breakage, but because the
plugin implementations turned out to be flawed/buggy (read: poorly tested).

Well, this is what I call a legitimate concern, but it is also something
that I think is hugely overblown by jittery decision makers. Considering
the speed of Rails development, by the time you would start to encounter
plugin issues, say 4-6 months into the project, those issues may have
been resolved, or replacement plugins may have arrived. Hell, even
swapping out the standard interpreter for JRuby might address it, as you
could just use an equivalent mature Java library instead.

The discussion I read here about new Rails versions breaking plugins was
more about the philosophy of Rails development - that maintaining
backwards code compatibility wasn't as primary a concern as it is in
other frameworks. That's the digest; I don't know how true it was, but
it has now added to my Perception™ of Rails. I have looked, and
identified that I need Globalize or an equivalent, and BackgroundRb.
And write all my own code.

to. People with an eye for risk take that pretty seriously - they
expect security issues, but not rock-and-hard-place conflicts like
that.
That’s true, and that’s one of the ways in which Rails needs somewhat
guru coders – ones who test their app thoroughly, and are able to patch
the broken spots when they come up.

There's also a concern that if you have written something for a
customer, you need a way to update them or patch their install. That's
the flip side of my deployment concern: automatic updates to software.

libraries – XML parser, HTML parser, etc… I might be able to help you
with the second part in a few days – I’m finally getting around to
profiling, and hence need to compile Shugo’s or ZenProfiler for win32.

I guess I'll wait one year. A lot of this is due to the
'enterprise-style' APIs coming into Ruby fairly recently (so new that
the docs aren't up to date, e.g. the HTTP class).

write your own equivalents (which will wipe
out your rails productivity boost).
Well, not according to the Relevance folks, but I admit, they seem
pretty adept.

Being at their level is the ‘goal’ you might say - and my guess is that
a serious showstopper in a library or plugin could wipe out the
productivity
boost for an organisation getting into Rails. Which is what I am
primarily
concerned with.

I’m confused… are you asking if the Ruby language is as quick to
program in as the Rails framework, or if the framework is fast despite
the language? Or are you talking about writing a Ruby extension?

I am not talking about the productivity boost here. I am looking at a
legitimate scaling concern. If you need to do something strange
in Rails you simply write Ruby code in your helpers that does it,
or write a funky plugin.

However, I was raising a concern that perhaps Rails only scales well
if you don't do this, or are otherwise very careful about how
you introduce pure-processing code like this.

In any
case, I don’t know if you are going to get metrics much more specific
than the Relevance posts. People seem pretty guarded about their own
professional productivity. Probably a little productivity arbitrage going on.

Yeah they were good. And refreshingly honest. I have been through
several hype bubbles before and sycophants do more harm than
good.

Interesting. Well, Rubyscript2Exe is supposed to do just that. Never
used it, but seen an example package being run. A little slow to start up.

Deployment is way more involved than that, but it's a start. An
interesting way to look at what I mean by deployment:
“What would be involved to make your Rails app installable by your Mom?”
Basically going from nothing (and I mean nothing - you can't expect
Rails knowledge or domain knowledge on the part of the person installing
it) to a 100% working Rails app.

You also need to consider the update case as well, like Windows updates.

Once this is a solved problem, you can address all the intervening
problems, like advanced customisation by someone who knows their stuff
and wants to use their existing web server/database.

And then you address the remote install problem, and look into whether
you need to do anything weird for virtualised systems.

But I’m not. :smiley:
What bugs me is that in order to make a convincing pro-Rails argument
I would effectively have to write the app (software that manages other
software agents on the network) in its entirety. While Rails has
excellent productivity improvements, it's not that good, not for someone
like me who is still learning. While I am tantalisingly close to getting
certain aspects working in some rough code (even cheating: communicating
with the web-enabled software agents by scraping their UI with
WWW::Mechanize or even WATIR), the only way to determine if there are
no i18n issues is to Globalize it, and the only way to address how
installable Rails is, is to make a 1-click installer. And that's not
fun, at all, though WIX might help.
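
(The "cheating" amounts to something like this, with the URL invented for the
example: pull what you need straight out of the agent's HTML UI instead of
speaking a real management protocol to it.)

    require 'rubygems'
    require 'mechanize'

    agent = WWW::Mechanize.new
    page  = agent.get('http://agent-host:8080/status')   # made-up agent URL

    # Scrape the status page rather than talking SNMP to the agent.
    page.links.each { |link| puts "#{link.text} -> #{link.href}" }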

More serious, though, is the investment in process-type stuff (tying in
with our automated build system/auto testing etc.).

Yeah, just one more body would make the difference. The scary thing is
that the steps required to rapidly prototype a Java app usually mean
that the code is only fit for burning afterwards. But the difference
between a Rails prototype and a proper app is some refactoring and going
back to write the unit tests.

Phlip,

When I mentioned the error in reasoning Joel made, by basically equating
almost all of his infamous post about Ruby with FEAR, some people did
not fully get my point.

Then you added the UNCERTAINTY and DOUBT that were there too, to create
the trifecta of a technology attack based on truly nothing significant
at all.

Well, guess who agreed with us, almost point by point? David Heinemeier
Hansson, creator and lead developer of Ruby on Rails. Check out his post
here:

http://www.loudthinking.com/arc/000596.html

Have a nice weekend everyone,

Jose L. Hurtado
Web D.
Toronto, Canada

Folks,

Well, this is a LONG discussion!

Some points I wanted to address have been raised by different posters;
I have included a short version of their own posts below:

VIDAR HOKSTAD SAID
Sometimes the payoff in trying a technology that your team is
inexperienced with or that isn’t widely deployed …or the risks are
mitigated by your team's experience (Vidar has an application that is
already written in another language)

PETER BOOTH SAID
One point he doesn’t make is the difference between OPM and MHC (Other
People's Money versus My Hard-earned Cash). When you are spending OPM,
costs can become unreal. So as an employee a “buzz-word
compliant/standard approach” Java or .NET solution might make more
sense…

RESPONSE
Vidar and Peter, I agree partially with your main points; however, I
would argue your points apply to existing applications - software
already written in a given language where migration risks may truly
outweigh the benefits of embracing a new technology - and then perhaps
staying where you are is a safer, wiser choice.

I say perhaps because:

  • Your code base would have to be significant, and your current
    language at least good enough to achieve your business or startup
    objectives in a reasonable amount of time.
  • The productivity savings of the alternative (Ruby and RoR) would have
    to be low or unimportant enough to ignore the other language.
  • Other people’s money truly dislikes risk, any risk, and of course if
    you have convinced them that
    [insert-your-safe-language-framework-combination-here] is the best, and
    the coolest, safest tool to use… how would you go back to them,
    asking for money and telling them you were wrong? Tough to do, yes,
    but on occasion I would argue WISE. Because if these productivity
    savings were coming to benefit YOUR HARD-EARNED CASH (MHC), then I
    would bet Ruby would be the only obvious answer!

Also, a little off topic, you could reach a middle-ground compromise,
extending your app in Ruby and leaving the “legacy” code alone - a
common practice in the mainframe world, where COBOL apps have been
largely left in the cold or with minimal maintenance while hooks have
been made into Java-based applications that extend them! Why not do the
same with .NET/Java apps, with links to Ruby on Rails?

Finally, to end my reply, I would say that if you are starting a NEW
project, mission critical or not, and there is no “showstopper” for
your application, then by all means Ruby and Ruby on Rails are indeed
the best possible tools you could choose. THAT IS PRECISELY my point.
The productivity savings and the elegance and clarity of the framework
are just too good to ignore.

Best Regards,

Jose L. Hurtado
Web D.
Toronto, Canada

Joel Spolsky wrote:

[Ruby is slow] so if you become The Next MySpace,
you’ll be buying 5 times as many boxes as the .NET guy
down the hall.

Last time I checked, 5 new boxes were cheaper than 5 new programmers…

Phlip wrote:

Joel Spolsky wrote:

[Ruby is slow] so if you become The Next MySpace,
you’ll be buying 5 times as many boxes as the .NET guy
down the hall.

Last time I checked, 5 new boxes were cheaper than 5 new programmers…

That isn’t the point.

  1. Five new experienced .NET programmers are cheaper than five new
    experienced Rails programmers.
  2. One new box is cheaper than five new boxes.

Now: would you like to debate whether a Rails programmer can produce at
five times the rate of a .NET programmer, given equivalent tools and
processes? Or would you like to join me in efforts and advocacy to speed
Ruby up, which seems to me a more productive use of time?