
comp.lang.ruby

Debugging in the large, modern practice?

Hugh Sasse

10/12/2006 10:45:00 AM

18 Answers

Andrew Stewart

10/12/2006 12:32:00 PM

0


On 12 Oct 2006, at 11:44, Hugh Sasse wrote:

> So, my question is this: Given that since I started working in
> computing there have been major strides in software development,
> such as Object Oriented programming becoming mainstream, development
> of concepts like refactoring, development of practices such as the
> Agile methodologies, not to mention developments in networking and
> databases, what are the parallel developments in debugging large
> systems? By large, I mean sufficiently large to cause problems in
> the mental modelling of the dynamic nature of the process, and
> involving considerable quantities of other people's code.

Debugging large systems is indeed hard -- lots to remember at once
with too many moving parts. It's much easier to debug small things.

Alongside all the other strides you mention, testing has improved no
end as I am sure you are aware. Tools like autotest [1] make it easy
to run tests against your system all the time, so you notice sooner
rather than later when it doesn't behave the way you (via your tests)
expect it to.

It's daunting when confronted with a large pile of someone else's
code, especially if that code doesn't have tests, but you have to
start somewhere. You can write tests against the third-party code
which over time become your own personal (executable) documentation
of its API.
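One way to start on such characterization tests, sketched below in plain Ruby (no test framework, and using Ruby's own Time class as a stand-in for the third-party code; the tiny assert helper is invented for illustration):

```ruby
# Characterization tests as executable documentation of a third-party API.
# Here the "third-party" code is just Ruby's own Time class, for illustration.
def assert(label, cond)
  raise "FAILED: #{label}" unless cond
  puts "ok: #{label}"
end

t = Time.utc(2006, 10, 12, 10, 45)
assert "Time.utc builds a UTC time",        t.utc?
assert "strftime formats as expected",      t.strftime("%Y-%m-%d") == "2006-10-12"
assert "adding seconds returns a new Time", (t + 60).min == 46
```

Each assertion records a fact you have verified about the library's behaviour, and keeps verifying it every time the suite runs.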

With your specific problem, perhaps you could write a test for the
operation you are trying to do. Start with one that passes. Now add
more tests until one fails -- should be easy as it sounds like you
can reliably make the system fail. Now you can iteratively try to
write intermediate tests between the one that passes and the one that
fails, until you isolate the problem to a very small change
somewhere. Think of it as a binary search via tests of the problem
space. Hopefully you will be able to converge on the problematic
needle in the haystack of code.
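A rough sketch of that probing idea in plain Ruby, matching the "works twice, fails on the fourth time" symptom; `update_item` and its fail-on-the-fourth-call behaviour are purely hypothetical stand-ins for the real operation:

```ruby
# Sketch of "binary search via tests": find the smallest repetition count
# at which an operation fails. All names here are illustrative.

$calls = 0

# Stand-in for the real operation (e.g. a DB update); breaks on the 4th call.
def update_item
  $calls += 1
  raise "update failed" if $calls >= 4
  :ok
end

# Walk up from 1 repetition until a run fails; that run isolates the needle.
def first_failing_count(max)
  (1..max).each do |n|
    $calls = 0
    begin
      n.times { update_item }
    rescue RuntimeError
      return n
    end
  end
  nil
end

puts first_failing_count(10)  # => 4
```

Once the smallest failing case is pinned down, each intermediate test you add between the passing and failing cases halves the remaining search space.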

And then when you have isolated and fixed the problem, you have a
nice set of tests which ensure it won't reappear later on -- a
benefit which manually inspecting log files doesn't confer.

You probably know all this already, so apologies ;-)

Good luck,
Andy Stewart


[1] http://www.zenspider.com/ZSS/Product...


Michael Glaesemann

10/12/2006 12:34:00 PM

0


On Oct 12, 2006, at 19:44 , Hugh Sasse wrote:

> what are the parallel developments in debugging large
> systems? By large, I mean sufficiently large to cause problems in
> the mental modelling of the dynamic nature of the process, and
> involving considerable quantities of other people's code.

Though I can't speak to your larger question as to history of
debugging and its progress, I'm finding that writing (unit,
functional) tests is really helping me understand my own applications
better as well as understand the frameworks and other code they rely
on. Tests can fail unexpectedly, and digging a bit to find the reason
for the exhibited behavior can be very edifying. And as the scope of
the test decreases, so does the number of parts that are contributing
to the behavior. I've just had a good couple of days doing nothing
but writing unit tests and I'm really happy with how I feel more
comfortable with the code and its behavior.

Michael Glaesemann
grzm seespotcode net



Eivind Eklund

10/12/2006 12:57:00 PM

0

On 10/12/06, Hugh Sasse <hgs@dmu.ac.uk> wrote:
> So, my question is this: Given that since I started working in
> computing there have been major strides in software development,
> such as Object Oriented programming becoming mainstream, development
> of concepts like refactoring, development of practices such as the
> Agile methodologies, not to mention developments in networking and
> databases, what are the parallel developments in debugging large
> systems?

Invariant checks ("Design by Contract"), mostly. Adding invariant
checks to your system will often unearth weird cases - making them
easily reproducible and making them fail early.

Testing (which has already been mentioned) can handle some of the same
niche, by removing coupling and allowing you to externally check for
invariants being kept. However, it is usually much easier to add
invariant checks to a system than to add tests.
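A minimal sketch of the idea in plain Ruby, without any Design-by-Contract library; the `Account` class and its balance rule are invented for illustration:

```ruby
# Minimal invariant-check sketch: re-check the class invariant after every
# mutation so a violation fails early, at the call that broke the rule.
class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
    check_invariant
  end

  def withdraw(amount)
    @balance -= amount
    check_invariant   # fail here, not later in some unrelated code path
  end

  private

  # Invariant: balance must never go negative.
  def check_invariant
    raise "invariant violated: negative balance #{@balance}" if @balance < 0
  end
end

a = Account.new(10)
a.withdraw(3)
puts a.balance            # => 7
begin
  a.withdraw(20)          # violates the invariant immediately
rescue RuntimeError => e
  puts e.message
end
```

The point is the placement: the check runs inside the object at every mutation, so the stack trace points at the offending call rather than at wherever the corrupt state was eventually noticed.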

Eivind.

James Gray

10/12/2006 1:15:00 PM

0

On Oct 12, 2006, at 7:32 AM, Andrew Stewart wrote:

> With your specific problem, perhaps you could write a test for the
> operation you are trying to do. Start with one that passes. Now
> add more tests until one fails -- should be easy as it sounds like
> you can reliably make the system fail. Now you can iteratively try
> to write intermediate tests between the one that passes and the one
> that fails, until you isolate the problem to a very small change
> somewhere.

Plus, when you find the one that fails you will know every subsystem
involved up to that point. You can begin to write tests for those now
too, ensuring they function as expected. Hopefully you will slowly
zero in on the problem.

James Edward Gray II


Richard Conroy

10/12/2006 1:31:00 PM

0

On 10/12/06, Hugh Sasse <hgs@dmu.ac.uk> wrote:
> I have a large application (which is actually a Rails app) which is
> behaving oddly (I can change items in a DB twice, but 4 times
> fails), and using all the conventional approaches I have learned for
> debugging (printing things out, logging to files, ...)

I am not being pedantic here, but have you not tried a debugger?
Standard out is ok for only certain tasks, particularly if there is
an extended time aspect to the problem.

> ... what are the parallel developments in debugging large
> systems? By large, I mean sufficiently large to cause problems in
> the mental modelling of the dynamic nature of the process, and
> involving considerable quantities of other people's code.

- Remote debugging (i.e. servlet, web service, .NET debugging in your
IDE as if it were a locally running program)
- Automated unit testing (JUnit, NUnit, RUnit)
- Unit test coverage measurement (rcov, JCoverage, Clover)
- Automated UI testing (of web UIs; excellent examples being WATIR,
Selenium etc.)

If something as fundamental as your DB saves is screwing up, I would
start by looking at whether you are properly covered by your unit
tests. If you find gaping holes in your test coverage, I would write
more tests to expose the problem.

> Given the prevalence of metaprogramming in Ruby, I'll phrase this
> another way, as a meta-question: what are good questions to ask to
> progress along the road of improving one's ability to debug large
> systems?

Simple answer: break up large systems into smaller systems that can
be tested independently (and possibly even replaced). It's not so
much a component concept as just the age-old practice of modular
systems.

I am not sure how this might apply to large Ruby apps; my gut feeling
is there are not many huge Ruby apps out there. The biggest I have
heard of is around 30000 LOC, which is an awful lot for Ruby.
Equivalent functionality in a more conventional language (Java, C++,
.NET etc.) would be bigger.

Rails apps are a bit different, though, as Rails is an opinionated
framework and logically dictates where your code should be. You also
don't end up writing much Ruby code; if you are, you're probably not
leaning on the stack enough. My gut feeling is that code bloat in
Rails will very quickly reveal that you are doing something wrong
(possibly when test code starts to get difficult to write). To reach
30000 LOC in a well-written Rails app, your app would have to be huge,
massively featured and have a pretty large UI (lots of RHTML). It
might be possible to get there if you are localising views a lot:
while globalize nicely lets you avoid this as much as possible, things
like date controls and right-to-left reading order might make it
easier to localise at the template level.

M. Edward (Ed) Borasky

10/12/2006 2:32:00 PM

0

Hugh Sasse wrote:
> I think the following may be a badly formed question, but if you'd
> bear with me....
>
> I have a large application (which is actually a Rails app) which is
> behaving oddly (I can change items in a DB twice, but 4 times
> fails), and using all the conventional approaches I have learned for
> debugging (printing things out, logging to files, ...) it is taking
> me an age to track the problem down. I have no good reason to assert
> that the database or Rails is at fault, it is more likely to be my
> code, but the interactions with the other code make debugging more
> difficult.

A couple of questions:

1. How large is "large"? Is there some kind of "code size metric" that
the Ruby community uses, and a tool or tools to measure it?

2. You say "your code". How much of this application have you personally
written, how much is "Rails and Ruby and the rest of the
infrastructure", and how much is "the rest of it"?

>
> So, my question is this: Given that since I started working in
> computing there have been major strides in software development,
> such as Object Oriented programming becoming mainstream, development
> of concepts like refactoring, development of practices such as the
> Agile methodologies, not to mention developments in networking and
> databases, what are the parallel developments in debugging large
> systems? By large, I mean sufficiently large to cause problems in
> the mental modelling of the dynamic nature of the process, and
> involving considerable quantities of other people's code.

The "traditional CASE tools" -- IDEs, software configuration and project
management tool sets, the waterfall model, the CMM levels, and of course
prayer, threats, outsourcing and pizza. :)

> The experience I have gained seems to be insufficient to meet the
> kinds of demands that cannot be unique to my situation, so there
> must be better approaches out there already if others are meeting
> such demands.

Again, without knowing either the scope of your specific project or
the size of the team that built/is building it, it's difficult to
answer.
There are bazillions of failed silver bullets to choose from. My
personal opinion is that you're being too hard on yourself and that
anybody who claims to have a tool or a process or a programming language
that is *significantly* better than today's common practices is either
deceived, deceiving or both.

> Given the prevalence of metaprogramming in Ruby, I'll phrase this
> another way, as a meta-question: what are good questions to ask to
> progress along the road of improving one's ability to debug large
> systems?

I think first of all, you have to *want* to debug large chunks of other
peoples' code. It's an acquired taste. I acquired it at one point in my
career but found it unsatisfying. If you *don't* want to debug large
chunks of other peoples' code, there are ways you can structure your
team and processes to minimize how much of it you have to do.

And I would caution you that, although testing and test-driven
development are certainly important and worthwhile, testing can only
show the *presence* of defects, not the absence of defects.

At the point in my career when I was at the peak of my ability to debug
other peoples' code, I came up with a simple rule. You'll probably need
to adjust the time scales to suit your situation, but in my case, the
rule was: If I don't find the problem in one day, it's not my mistake,
but a mistake in someone else's work. And if it takes more than a week,
it's not a software problem, it's a hardware problem. :)

Good luck ... may the source be with you. :)

Kent Sibilev

10/12/2006 2:35:00 PM

0

On 10/12/06, Hugh Sasse <hgs@dmu.ac.uk> wrote:
> I have a large application (which is actually a Rails app) which is
> behaving oddly (I can change items in a DB twice, but 4 times
> fails), and using all the conventional approaches I have learned for
> debugging (printing things out, logging to files, ...) it is taking
> me an age to track the problem down. I have no good reason to assert
> that the database or Rails is at fault, it is more likely to be my
> code, but the interactions with the other code make debugging more
> difficult.

My first bet would be on improving the test case suite. But in the
process I think that a good debugger is a very valuable tool too. You
can try ruby-debug, which I find very helpful sometimes:

http://datanoise.com/articles/2006/09/14/debugging-rails-a...
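For the record, the basic ruby-debug workflow is to require the gem and call `debugger` wherever you want execution to stop. A guarded sketch follows: the `update_record` method is hypothetical, and the require is wrapped so the snippet still runs straight through without the gem installed.

```ruby
# Sketch of dropping into ruby-debug at a suspect spot. Assumes the
# ruby-debug gem is available; the require is guarded so this file still
# loads (and runs without stopping) when it is not.
begin
  require 'ruby-debug'
  HAVE_DEBUGGER = true
rescue LoadError
  HAVE_DEBUGGER = false
end

def update_record(value)     # hypothetical method under suspicion
  debugger if HAVE_DEBUGGER  # execution stops here in the debugger console
  value * 2                  # then inspect state, `next`, `step`, `continue`
end

puts update_record(21)
```

From the debugger prompt you can inspect variables and step through the surrounding Rails code as well as your own, which is exactly where print-statement debugging tends to run out of road.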

--
Kent
---
http://www.dat...

Its Me

10/12/2006 3:11:00 PM

0

"Eivind Eklund" <eeklund@gmail.com> wrote in message

> However, it is usually much easier to add
> invariant checks to a system than to add tests.

+1



Hugh Sasse

10/12/2006 3:36:00 PM

0

Hugh Sasse

10/12/2006 3:41:00 PM

0