comp.lang.ruby

The economics of a slow but productive Ruby

Jacob Fugal

9/12/2006 12:57:00 AM

[NOTE: I'm trying to present the facts and be objective in this post.
I love Ruby, and would choose it any day when economics didn't matter.
But in the sense of the "Real World", this is what I discovered. And
of course, if I made any serious mistakes, be sure to let me know!]

Company QUUX is deciding on technologies for a new project. They
estimate a development budget of A and a hardware budget of B under
technology BAR:

development budget under BAR = A
hardware budget under BAR = B
total budget under BAR = A + B

They are also considering technology FOO. FOO is widely
reputed to grant productivity gains of a factor Y, but is slower than
BAR, requiring X times the servers. FOO developers make about Z times
as much as BAR developers, on average:

X = servers required under FOO / servers required under BAR
Y = productivity FOO / productivity BAR
Z = annual FOO salary / annual BAR salary

The development budget under FOO would be reduced by the productivity
increase, but that reduction will be offset by the difference in
salary:

development budget under FOO = AZ/Y

The hardware budget under FOO would be increased by the factor X:

hardware budget under FOO = BX

The total budget under FOO, in terms of the budget under BAR, would then be:

total budget under FOO = AZ/Y + BX

Given these estimates, it would be a profitable decision to choose FOO
over BAR if and only if the total budget under FOO is less than the
total budget under BAR.

choose FOO iff AZ/Y + BX < A + B -- or, rearranging...
choose FOO iff (X - 1)B < (1 - Z/Y)A
choose FOO iff [(X - 1) + (1 - Z/Y)]B < (1 - Z/Y)(A + B)   (adding (1 - Z/Y)B to both sides)
choose FOO iff B < [(1 - Z/Y) / (X - Z/Y)](A + B)          (since (X - 1) + (1 - Z/Y) = X - Z/Y)
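
Just to make that rule concrete, here's a quick Ruby sketch of it. The
method names and the sample budget figures in the last line are mine,
purely for illustration:

def breakeven_hardware_fraction(x, y, z = 1.0)
  # Hardware may take up at most this share of the total BAR budget
  # (A + B) before FOO stops being economical.
  x, y, z = x.to_f, y.to_f, z.to_f
  (1 - z / y) / (x - z / y)
end

def choose_foo?(a, b, x, y, z = 1.0)
  # Direct comparison of the two totals: A*Z/Y + B*X < A + B
  a.to_f * z / y + b.to_f * x < a + b
end

breakeven_hardware_fraction(5, 5, 1)        # => 0.1666..., the 1/6 below
choose_foo?(3_000_000, 450_000, 5, 5, 1)    # => true (made-up A and B)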

Let's apply this estimate to the current standing between .NET and
Ruby/Rails, using the figures from Joel (X = 5, Y = 5). In this case,
Z = 1 (actually, in my comparisons, Z was slightly *less* than one).

(1 - Z/Y) / (X - Z/Y)
= (1 - 1/5) / (5 - 1/5)
= (4/5) / (24/5)
= 4 / 24
= 1 / 6

So, choosing Ruby over .NET (assuming Joel's numbers are correct) is
economically sound iff your hardware budget makes up 1/6th or less of
the total estimated .NET budget.

Now, let's assume 20 servers and a 5 year application lifespan, with a
$5K one-time cost per server, $500 per server annually for repairs and
one sysadmin with a salary comparable to the developers ($60K). This
brings our hardware budget to $450K over the 5 years[1]. If this is to
be no more than 1/6 of the total budget, we need to spend at least 5
times as much on developers over the same period -- $2.25M, or $450K
per year. Using the same $60K figure for developer salaries, this
comes to 7.5 developers. So, if your developer-to-server ratio is at
least roughly one developer for every two or three production servers,
Ruby is probably economical. If you start getting a lot more servers
than developers, however, the hardware cost of a slow Ruby builds up
on you.
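
To make the arithmetic explicit, here's the same back-of-the-envelope
calculation as a short Ruby snippet (the variable names are mine; the
dollar figures are the assumptions above):

servers     = 20
years       = 5
server_cost = 5_000     # one-time, per server
repairs     = 500       # per server, per year
sysadmin    = 60_000    # salary, per year
dev_salary  = 60_000    # salary, per year

hardware = servers * server_cost +
           servers * repairs * years +
           sysadmin * years
# => 100_000 + 50_000 + 300_000 = 450_000 over the 5 years

dev_budget_needed = 5 * hardware   # keeps hardware at no more than 1/6
developers = dev_budget_needed / (dev_salary.to_f * years)
# => 7.5, i.e. roughly one developer for every 2-3 of the 20 servers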

Jacob Fugal

[1] It's interesting to note, however, that 67% of that figure is still
in paid salaries, rather than the cost of the hardware itself. If
you've got a super sysadmin who can manage 100 boxes (and you'd better
be paying them at least $80K if they're that super), the hardware
budget will scale a lot better. There's a lot to be said for getting
your hands on a good sysadmin...
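
As a rough sketch of how that scales, here's the 5 year cost per server
as a function of how many boxes one sysadmin can handle (the helper
name is mine; the $80K / 100-box case is the one described above):

def cost_per_server(boxes_per_admin, admin_salary)
  # $5K box + $500/year repairs + the admin's salary spread across
  # the boxes, all over a 5 year lifespan
  5_000 + 500 * 5 + (admin_salary * 5.0) / boxes_per_admin
end

cost_per_server(20, 60_000)    # => 22500.0 -- the 20-server example above
cost_per_server(100, 80_000)   # => 11500.0 -- the "super sysadmin" case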

34 Answers

Carl Lerche

9/12/2006 1:15:00 AM


1) It doesn't take 5 times more boxes for a Ruby app than a .NET app;
the single biggest factor in efficiency is the quality of the
developer. You can do many things at the code level to optimize
server CPU usage. I've never found it to be an issue. Honestly, if you
need 5 times more servers to run a Ruby on Rails app than a .NET app,
I'll have to laugh.

As an example, I worked for a company that developed a PHP app and it
took 15 application servers to run it when it should have taken 5. It
took that many because the coding (before I was hired) was terrible.
The same can happen with any technology.

2) Network latency is a far bigger bottleneck than CPU. All
technologies face the same problem.

3) Joel pulled that number out of his ass. I mean, I could say that
the same app coded in .NET would take 2834 servers whereas it would
run on a 3-year-old Palm using Ruby. That doesn't make it true.

4) I didn't see any factor for software budget.

-carl

On Sep 11, 2006, at 5:56 PM, Jacob Fugal wrote:

> [...]


Jacob Fugal

9/12/2006 1:19:00 AM


On 9/11/06, Jacob Fugal <lukfugl@gmail.com> wrote:
> choose FOO iff B < [(1 - Z/Y) / (X - Z/Y)](A + B)
>
> Let's apply this estimate to the current standing between .NET and
> Ruby/Rails, using the figures from Joel (X = 5, Y = 5). In this case,
> Z = 1 (actually, in my comparisons, Z was slight *less* than one).

Also note that the values I used here are pretty conservative. As many
have mentioned, Ruby will often not be the bottleneck -- X can be less
than 5. Also, depending on your programmers, Y may be more or less
than 5. Doing the calculation with X = 2 and Y = 10 yields much more
favorable results:

(1 - Z/Y) / (X - Z/Y)
= (1 - 1/10) / (2 - 1/10)
= (9/10) / (19/10)
= 9 / 19
~ 47%

So under optimistic cases, Ruby will still be economical until
hardware eats up *half* your budget. Or, pessimistically, let's try X
= 10, Y = 2:

(1 - Z/Y) / (X - Z/Y)
= (1 - 1/2) / (10 - 1/2)
= (1/2) / (19/2)
= 1/19

Your hardware budget would need to be negligible under those
circumstances to make Ruby economical.
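
For a quick sanity check of both cases, here's the same formula as a
throwaway Ruby snippet (nothing beyond the arithmetic above):

fraction = lambda { |x, y, z| (1 - z / y) / (x - z / y) }

fraction.call(2.0, 10.0, 1.0)    # => 0.4736..., i.e. 9/19 (optimistic)
fraction.call(10.0, 2.0, 1.0)    # => 0.0526..., i.e. 1/19 (pessimistic)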

Fortunately, in my experience, X has never even approached 5, let
alone 10. And Y has always been good to me. The important thing is
that for *your* decision, you need to:

1) Evaluate what X is *for your application*
2) Evaluate what Y you will believe
3) Know how your hardware costs will scale (see the footnote in my
original email)

All these factors will affect the outcome greatly.

Jacob Fugal

Jacob Fugal

9/12/2006 1:22:00 AM


On 9/11/06, Carl Lerche <carl.lerche@verizon.net> wrote:
> 1) It doesn't take 5 times more boxes for a ruby app than a .NET app,
> the single biggest factor in efficiency is the quality of the
> developer. You can do many things on the code level to optimize
> server CPU. I've never found it to be an issue. Honestly, if you need
> 5 times more servers to run a ruby on rails app than a .NET app, I'll
> have to laugh.

I agree, but I was using the numbers from Joel's article. See my
follow-up email for a little more detail on what I believe it would
*really* be...

My *main* point in the original email is that there *is* a line where
throwing more servers at it isn't economical. Where that line is
depends a great deal on your individual situation.

Jacob Fugal

Carl Lerche

9/12/2006 2:31:00 AM


I realize that you are using the numbers from Joel's article, but
(and maybe it's just me) those numbers are just so absurd that they
don't merit any more discussion than "that's absurd" -- and maybe
pointing out why, using real-world situations.

Also, yes, there are some extreme cases... such as the Google search
engine. However, scaling is not linear. Hypothetically, suppose that
at a certain point a .NET web application takes 5 servers and a
similar Ruby web application takes 25 (this already sounds a bit
ridiculous, but allow me to continue...). That does NOT mean that
when the .NET application requires 50 servers to run, the similar
Ruby web app will require 250.

As such, I don't see where this line would be, at least not using your
method of showing that there is a line.

And lastly, if there are any developers out there whose Ruby apps
require 5 times as many servers as an equivalent .NET app... they
should be fired :P

-carl

On Sep 11, 2006, at 6:21 PM, Jacob Fugal wrote:

> On 9/11/06, Carl Lerche <carl.lerche@verizon.net> wrote:
>> 1) It doesn't take 5 times more boxes for a ruby app than a .NET app,
>> the single biggest factor in efficiency is the quality of the
>> developer. You can do many things on the code level to optimize
>> server CPU. I've never found it to be an issue. Honestly, if you need
>> 5 times more servers to run a ruby on rails app than a .NET app, I'll
>> have to laugh.
>
> I agree, but I was using the numbers from Joel's article. See my
> follow up email for a little more detail on what I believe it would
> *really* be...
>
> My *main* point in the original email is that there *is* a line where
> throwing more servers at it isn't economical. Where that line is
> depends a great deal on your individual situation.
>
> Jacob Fugal
>


Chad Perrin

9/12/2006 3:21:00 AM


On Tue, Sep 12, 2006 at 11:30:59AM +0900, Carl Lerche wrote:
>
> Also, yes, there are some extreme cases... such as the google search
> engine. However, scaling is not linear. Hypothetically, IF at a
> certain point a .NET web-application takes 5 servers and a similar
> ruby web-application takes 25 servers (this already sounds a bit
> ridiculous, but allow me to continue...). This does NOT mean that
> when this .NET application requires 50 servers to run that the
> similar ruby web-app will require 250.

No kidding. For one thing, while it's possible that in some
pathological edge-case it might require five LAMRoR servers to match
one WS2k3 .NET server, the level of system resources required just to
run each individual server is rather greater for WS2k3/IIS systems than
for Linux/Apache systems. Additionally, there are more options
available for scaling up with Linux than Windows solutions -- better
load balancing, effective clustering, et cetera (Microsoft promised a
clustering version of Windows last year -- the result being that once
they achieved something testworthy, nobody bothered to use it except for
academic demonstration purposes because, of course, the cost of
licensing would be far greater than any return on investment, especially
considering the artificial technical limitations imposed because of the
MS business model).

There's a sweet spot for vertically integrated Microsoft solutions. If
you stay inside that sweet spot, it's cheaper to use a .NET solution
than certain other solutions. Your project, whatever it may be, may or
may not lose to .NET inside that sweet spot -- in fact, I'll go so far
as to say that .NET is almost certainly a net (ha ha) win. The term
"scalability", however, refers to the mobility of the economics of your
solution, and in that sense one of the standard Linux-based solutions
will probably scale better.

--
CCD CopyWrite Chad Perrin [ http://ccd.ap... ]
"The measure on a man's real character is what he would do
if he knew he would never be found out." - Thomas McCauley

Chad Perrin

9/12/2006 3:28:00 AM


On Tue, Sep 12, 2006 at 09:56:58AM +0900, Jacob Fugal wrote:
>
> [1] It's interesting to note however that 67% of that figure is still
> in paid salaries, rather than the cost of the hardware itself. If
> you've got a super sysadmin who can manage 100 boxes (and you better
> be paying them at least 80K if they are that super), the hardware
> budget will scale a lot better. There's a lot to be said for getting
> your hands on a good sysadmin...

It also helps if you're using a system that has a lower
admin-to-server ratio requirement. As indicated by recent studies,
Linux and Solaris both require far fewer admins for the number of boxen
than Windows:

http://www.cioupdate.com/article.php/104...

From the article:

Linux, along with Solaris, also came out ahead of Windows in terms of
administration costs, despite the fact that it's less expensive to
hire Windows system administrators. The average Windows administrator
in the study earned $68,500 a year, while Linux sys admins took home
$71,400, and those with Solaris skills were paid $85,844. The Windows
technicians, however, only managed an average of 10 machines each,
while Linux or Solaris admins can generally handle several times that.

This, like the number of servers required for a given software project,
does not scale linearly -- but the scalability of Windows systems in
terms of administrative requirements never overtakes that of Solaris and
Linux systems (except possibly in pathological edge-cases).
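
To put those figures in per-box terms, a back-of-the-envelope Ruby
snippet (the article only says Linux and Solaris admins handle "several
times" the 10 machines a Windows admin does, so the 30-boxes-per-admin
figure below is my assumption):

68_500 / 10.0    # => 6850.0 per Windows box per year
71_400 / 30.0    # => 2380.0 per Linux box per year (assumed 30 boxes)
85_844 / 30.0    # => 2861.46... per Solaris box per year (same assumption)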

--
CCD CopyWrite Chad Perrin [ http://ccd.ap... ]
Ben Franklin: "As we enjoy great Advantages from the Inventions of
others we should be glad of an Opportunity to serve others by any
Invention of ours, and this we should do freely and generously."

M. Edward (Ed) Borasky

9/12/2006 3:30:00 AM


Jacob Fugal wrote:

>
> [1] It's interesting to note however that 67% of that figure is still
> in paid salaries, rather than the cost of the hardware itself. If
> you've got a super sysadmin who can manage 100 boxes (and you better
> be paying them at least 80K if they are that super), the hardware
> budget will scale a lot better. There's a lot to be said for getting
> your hands on a good sysadmin...

Ah, but does SuperSysAdmin have to use a slow scripting language?

<ducking>



Chad Perrin

9/12/2006 3:33:00 AM


On Tue, Sep 12, 2006 at 12:30:05PM +0900, M. Edward (Ed) Borasky wrote:
> Jacob Fugal wrote:
>
> >
> > [1] It's interesting to note however that 67% of that figure is still
> > in paid salaries, rather than the cost of the hardware itself. If
> > you've got a super sysadmin who can manage 100 boxes (and you better
> > be paying them at least 80K if they are that super), the hardware
> > budget will scale a lot better. There's a lot to be said for getting
> > your hands on a good sysadmin...
>
> Ah, but does SuperSysAdmin have to use a slow scripting language?

Do you suggest they should use a slower scripting language, like batch
files? It's not like sysadmins write their administrative scripts in
assembly language for performance.

--
CCD CopyWrite Chad Perrin [ http://ccd.ap... ]
"A script is what you give the actors. A program
is what you give the audience." - Larry Wall

Gregory Brown

9/12/2006 3:38:00 AM


On 9/11/06, Carl Lerche <carl.lerche@verizon.net> wrote:

> 4) I didn't see any factor for software budget.

Which is significant. I don't know the cost of licenses for Windows
servers, but I imagine it is costly, not to mention things like
development tools... Actually, I imagine a .NET project could greatly
exceed its hardware costs in software costs, given the right
circumstances.