
comp.lang.python

time.time or time.clock

rrr

1/13/2008 8:06:00 PM


I'm having some cross platform issues with timing loops. It seems
time.time is better for some computers/platforms and time.clock others, but
it's not always clear which, so I came up with the following to try to
determine which.


import time

# Determine if time.time is better than time.clock
# The one with better resolution should be lower.
if time.clock() - time.clock() < time.time() - time.time():
    clock = time.clock
else:
    clock = time.time


Will this work most of the time, or is there something better?


Ron


John Machin

1/13/2008 9:50:00 PM


On Jan 14, 7:05 am, Ron Adam <r...@ronadam.com> wrote:
> I'm having some cross platform issues with timing loops. It seems
> time.time is better for some computers/platforms and time.clock others, but

Care to explain why it seems so?

> it's not always clear which, so I came up with the following to try to
> determine which.
>
> import time
>
> # Determine if time.time is better than time.clock
> # The one with better resolution should be lower.
> if time.clock() - time.clock() < time.time() - time.time():
>     clock = time.clock
> else:
>     clock = time.time
>
> Will this work most of the time, or is there something better?
>

Manual:
"""
clock( )

On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition
of the meaning of ``processor time'', depends on that of the C
function of the same name, but in any case, this is the function to
use for benchmarking Python or timing algorithms.

On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
[snip]

time( )

Return the time as a floating point number expressed in seconds since
the epoch, in UTC. Note that even though the time is always returned
as a floating point number, not all systems provide time with a better
precision than 1 second. While this function normally returns non-
decreasing values, it can return a lower value than a previous call if
the system clock has been set back between the two calls.
"""

AFAICT that was enough indication for most people to use time.clock on
all platforms ... before the introduction of the timeit module; have
you considered it?

It looks like your method is right sometimes by accident. func() -
func() will give a negative answer with a high resolution timer and a
meaningless answer with a low resolution timer, where "high" and "low"
are relative to the time taken for the function call, so you will pick
the high resolution one most of the time because the meaningless
answer is ZERO (no tick, no change). Some small fraction of the time
the low resolution timer will have a tick between the two calls and
you will get the wrong answer (-big < -small). In the case of two
"low" resolution timers, both will give a meaningless answer and you
will choose arbitrarily.
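
A quick way to see the effect (just a sketch; the exact numbers will vary
from box to box):

import time

def probe(timer):
    # The left operand is evaluated first, so this is (earlier call) minus
    # (later call): a small negative number if the timer ticked in between,
    # exactly zero if its resolution is too coarse to notice the gap.
    return timer() - timer()

# On a typical Windows box probe(time.clock) is a tiny negative number while
# probe(time.time) is usually 0.0; on Linux the roles tend to be reversed.
# That is what makes the heuristic look right most of the time.
print(probe(time.clock))
print(probe(time.time))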

HTH,
John

Fredrik Lundh

1/13/2008 10:16:00 PM


John Machin wrote:

> AFAICT that was enough indication for most people to use time.clock on
> all platforms ...

which was unfortunate, given that time.clock() isn't even a proper clock
on most Unix systems; it's a low-resolution sample counter that can
happily assign all time to a process that uses, say, 2% CPU and zero
time to one that uses 98% CPU.

> before the introduction of the timeit module; have you considered it?

whether or not "timeit" suites his requirements, he can at least replace
his code with

clock = timeit.default_timer

which returns a good wall-time clock (which happens to be time.time() on
Unix and time.clock() on Windows).
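
for example (a rough usage sketch; the summing loop is just a placeholder
for whatever is being timed):

import timeit

clock = timeit.default_timer   # the wall-clock timer timeit picks per platform

start = clock()
total = sum(range(100000))     # placeholder for the work being timed
elapsed = clock() - start
print(elapsed)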

</F>

rrr

1/14/2008 4:03:00 AM




John Machin wrote:
> On Jan 14, 7:05 am, Ron Adam <r...@ronadam.com> wrote:
>> I'm having some cross platform issues with timing loops. It seems
>> time.time is better for some computers/platforms and time.clock others, but
>
> Care to explain why it seems so?
>
>> it's not always clear which, so I came up with the following to try to
>> determine which.
>>
>> import time
>>
>> # Determine if time.time is better than time.clock
>> # The one with better resolution should be lower.
>> if time.clock() - time.clock() < time.time() - time.time():
>>     clock = time.clock
>> else:
>>     clock = time.time
>>
>> Will this work most of the time, or is there something better?
>>
>
> Manual:
> """
> clock( )
>
> On Unix, return the current processor time as a floating point number
> expressed in seconds. The precision, and in fact the very definition
> of the meaning of ``processor time'', depends on that of the C
> function of the same name, but in any case, this is the function to
> use for benchmarking Python or timing algorithms.
>
> On Windows, this function returns wall-clock seconds elapsed since the
> first call to this function, as a floating point number, based on the
> Win32 function QueryPerformanceCounter(). The resolution is typically
> better than one microsecond.
> [snip]
>
> time( )
>
> Return the time as a floating point number expressed in seconds since
> the epoch, in UTC. Note that even though the time is always returned
> as a floating point number, not all systems provide time with a better
> precision than 1 second. While this function normally returns non-
> decreasing values, it can return a lower value than a previous call if
> the system clock has been set back between the two calls.
> """
>
> AFAICT that was enough indication for most people to use time.clock on
> all platforms ... before the introduction of the timeit module; have
> you considered it?

I use it to time a Visual Python loop which controls frame rate updates and
sets velocities according to the time between frames, rather than the frame
count. The time between frames depends both on the desired frame rate and
the background load on the computer, so it isn't constant.

time.clock() isn't high enough resolution for Ubuntu, and time.time() isn't
high enough resolution on Windows.

I do use timeit for benchmarking, but I haven't tried using it in a
situation like this.
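
For what it's worth, the shape of the loop is roughly this toy sketch
(made-up numbers and a sleep standing in for the real Visual Python work,
not the actual code):

import time

clock = time.time    # or time.clock, whichever has usable resolution here

frame_rate = 60.0                  # desired frames per second
x, velocity = 0.0, 5.0             # toy state: units advanced per second
last = clock()

for frame in range(10):            # a few frames, just for illustration
    time.sleep(1.0 / frame_rate)   # stand-in for drawing plus background load
    now = clock()
    dt = now - last                # actual time between frames
    last = now
    x += velocity * dt             # motion scaled by dt, not by frame count
    print(x)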


> It looks like your method is right sometimes by accident. func() -
> func() will give a negative answer with a high resolution timer and a
> meaningless answer with a low resolution timer, where "high" and "low"
> are relative to the time taken for the function call, so you will pick
> the high resolution one most of the time because the meaningless
> answer is ZERO (no tick, no change). Some small fraction of the time
> the low resolution timer will have a tick between the two calls and
> you will get the wrong answer (-big < -small).

If the difference is between two high resolution timers then it will be
good enough. I think the time between two consecutive func() calls is
probably short enough to rule out low resolution timers.


> In the case of two
> "low" resolution timers, both will give a meaningless answer and you
> will choose arbitrarily.

In the case of two low resolution timers, it will use time.time. In this
case I probably need to raise an exception. My program won't work
correctly with a low resolution timer.
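
Something like this rough guard at startup would probably do (the 2 ms
cut-off is just a number picked for illustration):

import time

clock = time.time        # whichever timer ends up being selected

# Spin until the timer visibly ticks; if the first observable step is coarser
# than the cut-off, refuse to run rather than produce jerky motion.
t0 = clock()
t1 = clock()
while t1 == t0:
    t1 = clock()
if t1 - t0 > 0.002:
    raise RuntimeError("timer resolution too coarse for frame timing")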

Thanks for the feedback; I will try to find something more dependable.

Ron





rrr

1/14/2008 4:09:00 AM




Fredrik Lundh wrote:
> John Machin wrote:
>
>> AFAICT that was enough indication for most people to use time.clock on
>> all platforms ...
>
> which was unfortunate, given that time.clock() isn't even a proper clock
> on most Unix systems; it's a low-resolution sample counter that can
> happily assign all time to a process that uses, say, 2% CPU and zero
> time to one that uses 98% CPU.
>
> > before the introduction of the timeit module; have you considered it?
>
> whether or not "timeit" suites his requirements, he can at least replace
> his code with
>
> clock = timeit.default_timer
>
> which returns a good wall-time clock (which happens to be time.time() on
> Unix and time.clock() on Windows).


Thanks for the suggestion, Fredrik. I looked at timeit and it does the
following.


import sys
import time

if sys.platform == "win32":
# On Windows, the best timer is time.clock()
default_timer = time.clock
else:
# On most other platforms the best timer is time.time()
default_timer = time.time



I was hoping I could determine which to use by the values returned. But
maybe that isn't as easy as it seems it would be.


Ron

dwblas

1/14/2008 5:50:00 PM


"""
<snipped>
time.clock() isn't high enough resolution for Ubuntu, and time.time() isn't
high enough resolution on Windows.

Take a look at datetime. It is good to the micro-second on Linux and
milli-second on Windows.
"""

import datetime
begin_time=datetime.datetime.now()
for j in range(100000):
    x = j+1 # wait a small amount of time
print "Elapsed time =", datetime.datetime.now()-begin_time

## You can also access the individual time values
print begin_time.second
print begin_time.microsecond ## etc.

Fredrik Lundh

1/14/2008 7:30:00 PM


dwblas@gmail.com wrote:
> """
> <snipped>
> time.clock() isn't high enough resolution for Ubuntu, and time.time() isn't
> high enough resolution on Windows.
>
> Take a look at datetime. It is good to the micro-second on Linux and
> milli-second on Windows.

datetime.datetime.now() does the same thing as time.time(); it uses the
gettimeofday() API for platforms that have it (and so does time.time()),
and calls the fallback implementation in time.time() if gettimeofday()
isn't supported. From the datetime sources:

#ifdef HAVE_GETTIMEOFDAY
    struct timeval t;
#ifdef GETTIMEOFDAY_NO_TZ
    gettimeofday(&t);
#else
    gettimeofday(&t, (struct timezone *)NULL);
#endif
    ...
#else /* ! HAVE_GETTIMEOFDAY */

    /* No flavor of gettimeofday exists on this platform. Python's
     * time.time() does a lot of other platform tricks to get the
     * best time it can on the platform, and we're not going to do
     * better than that (if we could, the better code would belong
     * in time.time()!) We're limited by the precision of a double,
     * though.
     */

(note the "if we could" part).

</F>

John Machin

1/14/2008 8:36:00 PM


On Jan 15, 4:50 am, dwb...@gmail.com wrote:
> """
> <snipped>
> time.clock() isn't high enough resolution for Ubuntu, and time.time() isn't
> high enough resolution on Windows.
>
> Take a look at datetime. It is good to the micro-second on Linux and
> milli-second on Windows.
> """

On Windows, time.clock has MICROsecond resolution, whereas your method
appears to have exactly the same (MILLIsecond) resolution as time.time,
but with greater overhead, especially when the result is required in
seconds-and-a-fraction as a float:

>>> def datetimer(start=datetime.datetime(1970,1,1,0,0,0), nowfunc=datetime.datetime.now):
...     delta = nowfunc() - start
...     return delta.days * 86400 + delta.seconds + delta.microseconds / 1000000.0
...
>>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
['1200341583.484', '1200381183.484', '39600.0']
>>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
['1200341596.484', '1200381196.484', '39600.0']
>>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
['1200341609.4530001', '1200381209.4530001', '39600.0']
>>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
['1200341622.562', '1200381222.562', '39600.0']
>>>

The difference of 39600 seconds (11 hours) would be removed by using
datetime.datetime.utcnow.

>
> import datetime
> begin_time=datetime.datetime.now()
> for j in range(100000):
>     x = j+1 # wait a small amount of time
> print "Elapsed time =", datetime.datetime.now()-begin_time
>
> ## You can also access the individual time values
> print begin_time.second
> print begin_time.microsecond ## etc.

Running that on my Windows system (XP Pro, Python 2.5.1, AMD Turion 64
Mobile CPU rated at 2.0 GHz), I get
Elapsed time = 0:00:00.031000
or
Elapsed time = 0:00:00.047000
Using 50000 iterations, I get it down to 15 or 16 milliseconds. 15 ms
is the lowest non-zero interval that can be procured.

This is consistent with results obtained by using time.time.

Approach: get first result from timer function; call timer in a tight
loop until returned value changes; ignore the first difference so
found and save the next n differences.
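
In code, that probe is roughly the following (just a sketch; n is arbitrary):

import time

def tick_intervals(timer, n=10):
    # Spin until the timer's value changes; throw away the first step found
    # (the starting read was mid-tick) and keep the next n steps.
    diffs = []
    last = timer()
    seen_first_step = False
    while len(diffs) < n:
        now = timer()
        if now != last:
            if seen_first_step:
                diffs.append(now - last)
            seen_first_step = True
            last = now
    return diffs

for name, timer in [("time.time", time.time), ("time.clock", time.clock)]:
    steps = tick_intervals(timer)
    print("%s: average tick %.6g s" % (name, sum(steps) / len(steps)))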

Windows time.time appears to tick at 15 or 16 ms intervals, averaging
about 15.6 ms. For comparison, Windows time.clock appears to tick at
about 2.3 MICROsecond intervals.

Finally, some comments from the Python 2.5.1 datetimemodule.c:

    /* No flavor of gettimeofday exists on this platform. Python's
     * time.time() does a lot of other platform tricks to get the
     * best time it can on the platform, and we're not going to do
     * better than that (if we could, the better code would belong
     * in time.time()!) We're limited by the precision of a double,
     * though.
     */

HTH,
John