microsoft.public.dotnet.framework

.Net IL and optimization Capabilities?!

Leon_Amirreza

10/1/2008 8:49:00 PM

I have written a program in C# with .NET 3.5 SP1 and measured its running
time in microseconds. Now I want to estimate how long the same algorithm
would take to run in C on another processor/microcontroller, so I need to
know which features of the Intel processor and which optimization mechanisms
ARE used by .NET, and which Intel features are NOT used by .NET, so that I
can estimate the running time more accurately.

1- Any links to .NET internals or source code would be appreciated, or books
(free or commercial)?

Usually algorithms take much more effort to write in C than in C#!
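
A minimal sketch of this kind of measurement, assuming a hypothetical Work()
method stands in for the actual algorithm; System.Diagnostics.Stopwatch wraps
the Win32 high-resolution performance counter (QueryPerformanceCounter):

using System;
using System.Diagnostics;

static class TimingSketch
{
    static void Work()
    {
        // hypothetical placeholder for the actual algorithm
    }

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        Work();
        sw.Stop();

        // Stopwatch ticks are performance-counter ticks, not DateTime ticks;
        // convert to microseconds using the counter frequency.
        double microseconds = sw.ElapsedTicks * 1e6 / Stopwatch.Frequency;
        Console.WriteLine("Elapsed: {0:F1} us", microseconds);
    }
}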

14 Answers

Family Tree Mike

10/1/2008 9:25:00 PM


I cannot imagine how you would get a meaningful number without actually
recoding, recompiling and testing on the target machine. To me, it's as if
you have driven a car from Chicago to Cleveland and want to use that trip to
estimate how long it will take to ride a motorcycle from Detroit to
Louisville.

"Leon_Amirreza" <r_rahmaty@hotmail.com> wrote in message
news:%234EOwcAJJHA.3816@TK2MSFTNGP04.phx.gbl...
>I have written a program in C# with .net 3.5sp1 and calculated the running
>of it in micro senconds. now
> I want to estimate the time that it would take to run the same algorithm
> in C on another processor/microcontroller SO
> I need to know what features of intel processor and optimization
> mechanisms are USED in .net and what features of intel are NOT USED by
> .net to estimate the running time more accurate?
>
> 1- any links to .net internals or source code is appreciated; or books
> (free or commecial books)?
>
> usually algorithms take much more effort to be written in C than c#!

Jeroen Mostert

10/1/2008 9:30:00 PM


Leon_Amirreza wrote:
> I have written a program in C# with .NET 3.5 SP1 and measured its running
> time in microseconds. Now I want to estimate how long the same algorithm
> would take to run in C on another processor/microcontroller.

Hopeless, especially if you go down to the microsecond level. It depends on
how you allocate memory, how the C# compiler optimizes, how the CLR
optimizes, how your C compiler optimizes and the specifics of your target
hardware in terms of caching and pipelining. Anything you don't benchmark is
a lie. Anything you do benchmark is probably a half-truth, but at least
it'll be better.

> Usually algorithms take much more effort to write in C than in C#!

But if you don't have a .NET runtime on your target platform, the only point
to writing the algorithm in C# first is to get it correct (and possibly
optimized as far as asymptotic running time goes). There's no point to
benchmarking code you ultimately won't be using.

--
J.

Peter Duniho

10/2/2008 12:26:00 AM


And just to elaborate on what Jeroen wrote (and FTM, for that matter)...

You're fooling yourself if you think you've calculated the running time of
any algorithm or program down to the microsecond. Windows as a platform
simply does not provide a sufficiently repeatable execution environment
for a microsecond-level measurement to be meaningful.
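
Some of that run-to-run noise can be reduced, though never eliminated, by
pinning the process to a single core, raising its priority, and letting the
JIT compile the method before timing it. A sketch, again with a hypothetical
Work() placeholder:

using System;
using System.Diagnostics;
using System.Threading;

static class SteadierTiming
{
    static void Work()
    {
        // hypothetical placeholder for the code under test
    }

    static void Main()
    {
        // Mitigations only: the OS can still preempt the thread at any time.
        Process proc = Process.GetCurrentProcess();
        proc.ProcessorAffinity = (IntPtr)1;               // run on CPU 0 only
        proc.PriorityClass = ProcessPriorityClass.High;
        Thread.CurrentThread.Priority = ThreadPriority.Highest;

        Work();  // warm-up call so JIT compilation is not part of the timing

        Stopwatch sw = Stopwatch.StartNew();
        Work();
        sw.Stop();
        Console.WriteLine("Elapsed: {0:F1} us",
                          sw.ElapsedTicks * 1e6 / Stopwatch.Frequency);
    }
}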

Jeroen's post basically says this, but I think it's worth being more
explicit about it.

Pete

Leon_Amirreza

10/2/2008 8:16:00 AM


Sorry to answer on a new thread.

1- I can treat the C# measurement as a worst case, can't I? Because the
target is less capable than the Intel Core Duo, and because of scheduling
overhead and .NET overhead. And Jeroen, you are right that it depends on
these things; that is exactly why I need this insider info (I have the exact
target platform info and capabilities, but not .NET's).

I have run my algorithm many times, which gives a statistical average
running time with some tolerance; that's all. Simple and effective. I have
done this kind of benchmarking before and it wasn't far from reality. (I am
not interested in the exact microsecond running time, only the average; by
"microsecond" I just meant that I got that resolution by timing with the
Windows performance counter rather than Environment.TickCount.)

I have run this app more than 10,000 times and it gives a good average for
the worst case.

Thank you for your time, but you STILL haven't answered my question!
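
A sketch of the kind of repeated measurement described here, assuming a
hypothetical Work() method for the algorithm; it reports both the average and
the worst observed time over many runs:

using System;
using System.Diagnostics;

static class RepeatedRuns
{
    static void Work()
    {
        // hypothetical placeholder for the algorithm under test
    }

    static void Main()
    {
        const int Runs = 10000;
        double sumUs = 0.0;
        double worstUs = 0.0;

        Work();  // warm-up so JIT compilation does not count against the first run

        for (int i = 0; i < Runs; i++)
        {
            Stopwatch sw = Stopwatch.StartNew();
            Work();
            sw.Stop();

            double us = sw.ElapsedTicks * 1e6 / Stopwatch.Frequency;
            sumUs += us;
            if (us > worstUs) worstUs = us;
        }

        Console.WriteLine("Average: {0:F1} us, worst: {1:F1} us over {2} runs",
                          sumUs / Runs, worstUs, Runs);
    }
}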

"Peter Duniho" <NpOeStPeAdM@nnowslpianmk.com> wrote in message
news:op.uidfgxtp8jd0ej@petes-computer.local...
> On Wed, 01 Oct 2008 14:30:22 -0700, Jeroen Mostert <jmostert@xs4all.nl>
> wrote:
>
>> Leon_Amirreza wrote:
>>> I have written a program in C# with .net 3.5sp1 and calculated the
>>> running of it in micro senconds. now
>>> I want to estimate the time that it would take to run the same
>>> algorithm in C on another processor/microcontroller SO
>>
>> Hopeless, especially if you go down to the microsecond level. It depends
>> on how you allocate memory, how the C# compiler optimizes, how the CLR
>> optimizes, how your C compiler optimizes and the specifics of your
>> target hardware in terms of caching and pipelining. Anything you don't
>> benchmark is a lie. Anything you do benchmark is probably a half-truth,
>> but at least it'll be better.
>
> And just to elaborate on what Jeroen wrote (and FTM, for that matter)...
>
> You're fooling yourself if you think you've calculated the running time of
> any algorithm or program down to the microsecond. Windows as a platform
> simply does not provide a sufficiently repeatable execution environment
> for a microsecond-level measurement to be meaningful.
>
> Jeroen's post basically says this, but I think it's worth being more
> explicit about it.
>
> Pete

Leon_Amirreza

10/2/2008 8:17:00 AM


The target processor is much like an Intel Pentium without SSE2 and the
other special features, so the two are very much alike.

"Peter Duniho" <NpOeStPeAdM@nnowslpianmk.com> wrote in message
news:op.uidfgxtp8jd0ej@petes-computer.local...
> On Wed, 01 Oct 2008 14:30:22 -0700, Jeroen Mostert <jmostert@xs4all.nl>
> wrote:
>
>> Leon_Amirreza wrote:
>>> I have written a program in C# with .net 3.5sp1 and calculated the
>>> running of it in micro senconds. now
>>> I want to estimate the time that it would take to run the same
>>> algorithm in C on another processor/microcontroller SO
>>
>> Hopeless, especially if you go down to the microsecond level. It depends
>> on how you allocate memory, how the C# compiler optimizes, how the CLR
>> optimizes, how your C compiler optimizes and the specifics of your
>> target hardware in terms of caching and pipelining. Anything you don't
>> benchmark is a lie. Anything you do benchmark is probably a half-truth,
>> but at least it'll be better.
>
> And just to elaborate on what Jeroen wrote (and FTM, for that matter)...
>
> You're fooling yourself if you think you've calculated the running time of
> any algorithm or program down to the microsecond. Windows as a platform
> simply does not provide a sufficiently repeatable execution environment
> for a microsecond-level measurement to be meaningful.
>
> Jeroen's post basically says this, but I think it's worth being more
> explicit about it.
>
> Pete

Leon_Amirreza

10/2/2008 8:20:00 AM


If the worst case in C# is satisfactory, there is at least some hope that
actually buying and building the new target platform from scratch will give
satisfactory results. But if C# fails to give a good running time, you still
would not know what will happen when you run your program on the target
machine. (You see, the cheapest approach is to first test whether the
algorithm runs in acceptable time in C#, and only then test it on the real
platform.)
"Peter Duniho" <NpOeStPeAdM@nnowslpianmk.com> wrote in message
news:op.uidfgxtp8jd0ej@petes-computer.local...
> On Wed, 01 Oct 2008 14:30:22 -0700, Jeroen Mostert <jmostert@xs4all.nl>
> wrote:
>
>> Leon_Amirreza wrote:
>>> I have written a program in C# with .net 3.5sp1 and calculated the
>>> running of it in micro senconds. now
>>> I want to estimate the time that it would take to run the same
>>> algorithm in C on another processor/microcontroller SO
>>
>> Hopeless, especially if you go down to the microsecond level. It depends
>> on how you allocate memory, how the C# compiler optimizes, how the CLR
>> optimizes, how your C compiler optimizes and the specifics of your
>> target hardware in terms of caching and pipelining. Anything you don't
>> benchmark is a lie. Anything you do benchmark is probably a half-truth,
>> but at least it'll be better.
>
> And just to elaborate on what Jeroen wrote (and FTM, for that matter)...
>
> You're fooling yourself if you think you've calculated the running time of
> any algorithm or program down to the microsecond. Windows as a platform
> simply does not provide a sufficiently repeatable execution environment
> for a microsecond-level measurement to be meaningful.
>
> Jeroen's post basically says this, but I think it's worth being more
> explicit about it.
>
> Pete

Peter Duniho

10/2/2008 10:56:00 AM


On Thu, 02 Oct 2008 01:15:46 -0700, Leon_Amirreza <r_rahmaty@hotmail.com>
wrote:

> [...]
> Thank you for your time, but you STILL haven't answered my question!

You haven't received an answer to the specific question because, I
believe, there isn't one. That is, the references you're looking for
simply don't exist. You can't extrapolate the performance of your
algorithm, currently coded in C# and running on one hardware platform, to
a theoretical implementation in C running on a different hardware
platform. There's just no reliable way to do that.

If you know that the tested hardware platform is at a minimum not superior
to the target hardware platform, then you _might_ be able to, with some
small degree of confidence, assume that the tested scenario is in fact
your "worst case" scenario and you would get better performance in C on
the other platform. But in truth, even that really depends on too many
factors for you to be sure.

In particular, an algorithm that relies heavily on a library
implementation may in fact run better on .NET because a lot of .NET is
coded with the latest technologies and techniques providing the best
performance. This seems especially likely to be applicable if you are
saying that "usually algorithms take much more effort to be written in C
than c#", since that statement is actually only true if you're making
heavy use of library functions that aren't available in your C environment.

Pete

Jeroen Mostert

10/2/2008 11:02:00 AM


Leon_Amirreza wrote:
> 1- I can treat the C# measurement as a worst case, can't I? Because the
> target is less capable than the Intel Core Duo, and because of scheduling
> overhead and .NET overhead. And Jeroen, you are right that it depends on
> these things; that is exactly why I need this insider info (I have the
> exact target platform info and capabilities, but not .NET's).

The "exact info" is an implementation detail. There are no books on it
because it's a moving target. The only way you can find out is to compile
programs and look at the assembler output. To get "exact info", you'd need
to have the source of the compiler and the runtime and a lot of free time
understanding them. I'm pretty sure both the compiler and the runtime are
closed-source, though a good part of the CLR and libraries are available in
source form.

There is info on how the garbage collector works, for example, but only in
broad strokes. It will not allow you to draw conclusions about individual
programs; you can only use it to explain performance problems when they crop
up. You will not get details like "allocating X objects will take Y
seconds, and if you did 'the same thing' in C it would take Z seconds" from
any book or site. The only way to find that out is to try it.

> I have run my algorithm many times, which gives a statistical average
> running time with some tolerance; that's all. Simple and effective. I have
> done this kind of benchmarking before and it wasn't far from reality. (I
> am not interested in the exact microsecond running time, only the average;
> by "microsecond" I just meant that I got that resolution with the Windows
> performance counter, not Environment.TickCount.)
>
Yes, you certainly can get a *worst* case. Unfortunately, your worst case
timings will say very little about how fast the code will run on your actual
platform once it's written in a different language and compiled by a
different compiler to a different architecture. It could be slower, it could
be faster. A very simple difference like cache size could already have a big
impact.

You can only establish that (for example) your code runs in linear time.
This is a property of the abstract *algorithm*, independent of the language
it's written in, and while it's certainly a useful thing to know, it says
little about how any particular implementation of that algorithm in *code*
will actually run.
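
One way to check that empirically is to time the code at doubling input sizes
and look at how the measured time grows: roughly doubling times suggest
linear behaviour, roughly quadrupling suggests quadratic. A sketch, assuming
a hypothetical Work(n) that runs the algorithm on an input of size n:

using System;
using System.Diagnostics;

static class ScalingCheck
{
    static void Work(int n)
    {
        // hypothetical placeholder: run the algorithm on an input of size n
    }

    static void Main()
    {
        foreach (int n in new int[] { 1000, 2000, 4000, 8000, 16000 })
        {
            Work(n);  // warm-up / JIT

            Stopwatch sw = Stopwatch.StartNew();
            Work(n);
            sw.Stop();

            Console.WriteLine("n = {0,6}: {1,10:F1} us",
                              n, sw.ElapsedTicks * 1e6 / Stopwatch.Frequency);
        }
    }
}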

> Thank you for your time, but you STILL haven't answered my question!
>
I don't think your question is worth answering. If you want to know how fast
your code will run, write it, transfer it to the target platform, then
measure it. Unless your hardware does not yet exist, doing anything else is
a waste of time. You can do algorithmic analysis and that's useful, but
forget about timing.

--
J.

Leon_Amirreza

10/2/2008 12:54:00 PM


First, you got my question both right and wrong: I am not asking for ways to
estimate my algorithm, but for insider references, and you have already said
there are none. So I assume my answer is simply "none". If it's not worth
answering, then simply don't answer it, and thank you.

"Jeroen Mostert" <jmostert@xs4all.nl> wrote in message
news:48e4aa3e$0$197$e4fe514c@news.xs4all.nl...
> Leon_Amirreza wrote:
>> 1- i can have a worse case on c#?! cant I? because the target is less
>> capable of Intel core dou
>> and because of scheduling overhead and because of .net overhead
>> and Joreon you are right depends on these and that is why i need these
>> insight info ( i have the exact target plaftform info and capabilitis but
>> not .Net)
>
> The "exact info" is an implementation detail. There are no books on it
> because it's a moving target. The only way you can find out is to compile
> programs and look at the assembler output. To get "exact info", you'd need
> to have the source of the compiler and the runtime and a lot of free time
> understanding them. I'm pretty sure both the compiler and the runtime are
> closed-source, though a good part of the CLR and libraries are available
> in source form.
>
> There is info on how the garbage collector works, for example, but only in
> broad strokes. It will not allow you to draw conclusions about individual
> programs, you can only use it to explain performance problems when they
> crop up. You will not get details like "allocating X objects will take Y
> seconds, and if you did 'the same thing' in C it would take Z seconds"
> from any book or site. The only way to find that out is to try it.
>
>> and i have run my algorithm many times that gives an statistical average
>> running time with some tolerance thats all; simple and effective i have
>> done
>> these kind of benchmarkings and wasnt so far from the reality (I am not
>> interested in exact microsencod running time but the average) by
>> microsecond
>> I just meant i got this resolution in calculation time with windows Perf
>> Counter not Environment.TickCounts thats all
>>
> Yes, you certainly can get a *worst* case. Unfortunately, your worst case
> timings will say very little about how fast the code will run on your
> actual platform once it's written in a different language and compiled by
> a different compiler to a different architecture. It could be slower, it
> could be faster. A very simple difference like cache size could already
> have a big impact.
>
> You can only establish that (for example) your code runs in linear time.
> This is a property of the abstract *algorithm*, independent of the
> language it's written in, and while it's certainly a useful thing to know,
> it says little about how any particular implementation of that algorithm
> in *code* will actually run.
>
>> thank you for your time but STILL you havent answered mine questiuon!
>>
> I don't think your question is worth answering. If you want to know how
> fast your code will run, write it, transfer it to the target platform,
> then measure it. Unless your hardware does not yet exist, doing anything
> else is a waste of time. You can do algorithmic analysis and that's
> useful, but forget about timing.
>
> --

> J.

Leon_Amirreza

10/2/2008 1:00:00 PM


Yes, the answer may not be accurate, but it gives some insight. For example:
1- I don't need a very precise estimate. The loop takes nearly 6
microseconds per iteration on an Intel E2140 CPU with a 20-stage pipeline,
SSE3, 1 MB of level-2 cache, 32 KB of level-1 cache, MMX, and so on. So it
should not take 10 seconds on a processor very much like my Intel but with a
4-stage pipeline, 64 KB of level-1 cache, and no SSE3 or MMX, provided I can
be assured that (say) MMX, SSE3 and the other features of my Intel had no
effect on my app simply because .NET doesn't use them. That's all.

"Peter Duniho" <NpOeStPeAdM@nnowslpianmk.com> wrote in message
news:op.uid8n5oq8jd0ej@petes-computer.local...
> On Thu, 02 Oct 2008 01:15:46 -0700, Leon_Amirreza <r_rahmaty@hotmail.com>
> wrote:
>
>> [...]
>> thank you for your time but STILL you havent answered mine questiuon!
>
> You haven't received an answer to the specific question because, I
> believe, there isn't one. That is, the references you're looking for
> simply don't exist. You can't extrapolate the performance of your
> algorithm, currently coded in C# and running on one hardware platform, to
> a theoretical implementation in C running on a different hardware
> platform. There's just no reliable way to do that.
>
> If you know that the tested hardware platform is at a minimum not superior
> to the target hardware platform, then you _might_ be able to, with some
> small degree of confidence, assume that the tested scenario is in fact
> your "worst case" scenario and you would get better performance in C on
> the other platform. But in truth, even that really depends on too many
> factors for you to be sure.
>
> In particular, an algorithm that relies heavily on a library
> implementation may in fact run better on .NET because a lot of .NET is
> coded with the latest technologies and techniques providing the best
> performance. This seems especially likely to be applicable if you are
> saying that "usually algorithms take much more effort to be written in C
> than c#", since that statement is actually only true if you're making
> heavy use of library functions that aren't available in your C
> environment.
>
> Pete