comp.lang.python

Unexpected timing results with file I/O

Steven D'Aprano

2/4/2008 3:17:00 PM

After reading an earlier thread about opening and closing lots of files,
I thought I'd do a little experiment.

Suppose you have a whole lot of files, and you need to open each one,
append a string, then close them. There are two obvious ways to do it:
group your code by file, or group your code by procedure.

# Method one: grouped by file.
for each file:
    open the file, append the string, then close it


# Method two: grouped by procedure.
for each file:
    open the file
for each open file:
    append the string
for each open file:
    close the file


If you have N files, both methods make the same number of I/O calls: N
opens, N writes, N closes. Which is faster?

Intuitively, the first method has *got* to be faster, right? It's got one
loop instead of three and it doesn't build an intermediate list of open
file objects. It's so *obviously* going to be faster that it is hardly
worth bothering to check it with timeit, right?

Well, I wouldn't be writing unless that intuitive result was wrong. So
here are my test results:


Method 1:

>>> import timeit
>>> names = ['afile' + str(n) for n in range(1000)]
>>> T = timeit.Timer('''for name in names:
...     fp = open(name, 'a'); fp.write('xyz\\n'); fp.close()
... ''', 'from __main__ import names')
>>> min(T.repeat(6, 500))
17.391216039657593


Method 2:

>>> for name in names:  # reset the files to an empty state.
...     fp = open(name, 'w'); fp.close()
...
>>> T = timeit.Timer('''files = [open(name, 'a') for name in names]
... for fp in files:
...     fp.write('xyz\\n')
... for fp in files:
...     fp.close()
... ''', 'from __main__ import names')
>>> min(T.repeat(6, 500))
16.823362112045288


Surprisingly, Method 2 is a smidgen faster, by about half a second over
500,000 open-write-close cycles. It's not much faster, but it's
consistent, over many tests, changing many of the parameters (e.g. the
number of files, the number of runs per timeit test, etc.).

I'm using Linux and Python 2.5.

So, what's going on? Can anyone explain why the code which does more work
takes less time?



--
Steven

Christian Heimes

2/4/2008 4:05:00 PM


Steven D'Aprano wrote:
> So, what's going on? Can anyone explain why the code which does more work
> takes less time?

Short answer: CPU and RAM are much faster than hard disks.

The three loops and the creation of a list cost only a few CPU cycles
compared to flushing the new data to disk.
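
To put a rough number on that, here is a sketch (not from the thread)
that times the same two loop shapes with the disk taken out of the
picture. NullFile is a hypothetical stand-in whose write() and close()
do nothing, so only the Python-level loop and list overhead is measured:

import timeit

class NullFile(object):
    # Hypothetical stand-in for a real file: all I/O is a no-op.
    def write(self, data):
        pass
    def close(self):
        pass

def method1(n=1000):
    # One loop, no intermediate list.
    for i in range(n):
        fp = NullFile(); fp.write('xyz\n'); fp.close()

def method2(n=1000):
    # Three loops plus an intermediate list.
    files = [NullFile() for i in range(n)]
    for fp in files:
        fp.write('xyz\n')
    for fp in files:
        fp.close()

print(min(timeit.Timer('method1()', 'from __main__ import method1').repeat(3, 100)))
print(min(timeit.Timer('method2()', 'from __main__ import method2').repeat(3, 100)))

Either way the totals should come out in milliseconds, orders of
magnitude below the 16-17 seconds the real benchmark spends on the disk.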

Christian

Marc 'BlackJack' Rintsch

2/4/2008 5:08:00 PM


On Mon, 04 Feb 2008 15:17:18 +0000, Steven D'Aprano wrote:

> # Method one: grouped by file.
> for each file:
>     open the file, append the string, then close it
>
>
> # Method two: grouped by procedure.
> for each file:
>     open the file
> for each open file:
>     append the string
> for each open file:
>     close the file
>
> Method 1:
>
> 17.391216039657593
>
> Method 2:
>
> 16.823362112045288
>
>
> Surprisingly, Method 2 is a smidgen faster, by about half a second over
> 500,000 open-write-close cycles. It's not much faster, but it's
> consistent, over many tests, changing many of the parameters (e.g. the
> number of files, the number of runs per timeit test, etc.).
>
> I'm using Linux and Python 2.5.
>
> So, what's going on? Can anyone explain why the code which does more work
> takes less time?

Can't confirm this (Linux, Python 2.5):

Method 1: 15.380897998809814
Method 2: 18.085366010665894

I guess it's really all about the disk I/O, as my system monitor applet
shows that almost all of the time is spent in the kernel and very little
in user space.
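
For anyone who wants to check that split without a monitor applet,
os.times() reports the process's user and system CPU time. A quick
sketch (assuming the same names list as in Steven's post):

import os

# Sketch: split the benchmark's cost into user vs. system CPU time.
names = ['afile' + str(n) for n in range(1000)]

before = os.times()
for name in names:
    fp = open(name, 'a'); fp.write('xyz\n'); fp.close()
after = os.times()

print('user:   %.3f s' % (after[0] - before[0]))
print('system: %.3f s' % (after[1] - before[1]))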

Ciao,
Marc 'BlackJack' Rintsch

roger.dahlstrom

2/4/2008 5:53:00 PM


On Feb 4, 10:17 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
> If you have N files, both methods make the same number of I/O calls: N
> opens, N writes, N closes. Which is faster?
>
> [snip timing results showing Method 2 slightly faster]
>
> So, what's going on? Can anyone explain why the code which does more work
> takes less time?



The code that does more work takes more time. The second one does
quite a bit less work. Think of it like this:

You have 500,000 people to fit through a door. Here are your options:

1. For each person, open the door, walk through the door, then close
the door.
2. Open the door, allow everyone to walk through, then close the
door.

Which one would you say would be a more efficient way to fit 500,000
people through the door?

Gabriel Genellina

2/4/2008 6:05:00 PM


On Mon, 04 Feb 2008 15:53:11 -0200, rdahlstrom <roger.dahlstrom@gmail.com>
wrote:

> On Feb 4, 10:17 am, Steven D'Aprano <st...@REMOVE-THIS-
> cybersource.com.au> wrote:
>>
>> Suppose you have a whole lot of files, and you need to open each one,
>> append a string, then close them. There are two obvious ways to do it:
>> group your code by file, or group your code by procedure.
>>
>> # Method one: grouped by file.
>> for each file:
>>     open the file, append the string, then close it
>>
>> # Method two: grouped by procedure.
>> for each file:
>>     open the file
>> for each open file:
>>     append the string
>> for each open file:
>>     close the file
>>
>> If you have N files, both methods make the same number of I/O calls: N
>> opens, N writes, N closes. Which is faster?

> The code that does more work takes more time. The second one does
> quite a bit less work. Think of it like this:
>
> You have 500,000 people to fit through a door. Here are your options:
>
> 1. For each person, open the door, walk through the door, then close
> the door.
> 2. Open the door, allow everyone to walk through, then close the
> door.
>
> Which one would you say would be a more efficient way to fit 500,000
> people through the door?

Mmmm, no, the second one should be:

2. Create 500,000 doors and open them.
   Make each person enter the room (one at a time) using their own door.
   Close each of the 500,000 doors.

--
Gabriel Genellina

Carl Banks

2/4/2008 6:12:00 PM


On Feb 4, 12:53 pm, rdahlstrom <roger.dahlst...@gmail.com> wrote:
> On Feb 4, 10:17 am, Steven D'Aprano <st...@REMOVE-THIS-
> cybersource.com.au> wrote:
> > [snip benchmark showing Method 2 slightly faster over 500,000
> > open-write-close cycles]
>
> The code that does more work takes more time. The second one does
> quite a bit less work. Think of it like this:
>
> You have 500,000 people to fit through a door. Here are your options:
>
> 1. For each person, open the door, walk through the door, then close
> the door.
> 2. Open the door, allow everyone to walk through, then close the
> door.
>
> Which one would you say would be a more efficient way to fit 500,000
> people through the door?

Bad analogy. A better analogy would be if each person has their own
door to walk through.

My hunch is that it has to do with the OS I/O scheduling. Closing a
file triggers a cache flush, which in turn causes the I/O system to
schedule a write to disk; the OS scheduler is perhaps more efficient
(for a given number of total writes) when it can combine many writes
at the same time.
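
One way to probe that hunch (a sketch, not something tested in the
thread): open the files unbuffered, so each write() reaches the OS
immediately instead of being flushed when close() runs. If the gap
between the two methods disappears, flush-on-close scheduling is the
likely cause. Python 2 shown, to match the thread; Python 3 only
allows buffering=0 in binary mode.

import timeit

names = ['afile' + str(n) for n in range(1000)]

# Third argument to open() is the buffer size; 0 means unbuffered,
# so close() has no pending data left to flush.
T1 = timeit.Timer('''for name in names:
    fp = open(name, 'a', 0); fp.write('xyz\\n'); fp.close()
''', 'from __main__ import names')

T2 = timeit.Timer('''files = [open(name, 'a', 0) for name in names]
for fp in files:
    fp.write('xyz\\n')
for fp in files:
    fp.close()
''', 'from __main__ import names')

print(min(T1.repeat(3, 100)))
print(min(T2.repeat(3, 100)))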


Carl Banks

roger.dahlstrom

2/4/2008 6:19:00 PM


On Feb 4, 1:12 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
> On Feb 4, 12:53 pm, rdahlstrom <roger.dahlst...@gmail.com> wrote:
> > You have 500,000 people to fit through a door. Here are your options:
> >
> > 1. For each person, open the door, walk through the door, then close
> > the door.
> > 2. Open the door, allow everyone to walk through, then close the
> > door.
> >
> > Which one would you say would be a more efficient way to fit 500,000
> > people through the door?
>
> Bad analogy. A better analogy would be if each person has their own
> door to walk through.
>
> My hunch is that it has to do with the OS I/O scheduling. Closing a
> file triggers a cache flush, which in turn causes the I/O system to
> schedule a write to disk; the OS scheduler is perhaps more efficient
> (for a given number of total writes) when it can combine many writes
> at the same time.
>
> Carl Banks

The analogy holds. It's faster to open the door, do what you need to
do, then close the door than it is to open and close the door each
time.

Marc 'BlackJack' Rintsch

2/4/2008 6:44:00 PM


On Mon, 04 Feb 2008 10:18:39 -0800, rdahlstrom wrote:

> On Feb 4, 1:12 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
>> On Feb 4, 12:53 pm, rdahlstrom <roger.dahlst...@gmail.com> wrote:
>> > You have 500,000 people to fit through a door. Here are your options:
>>
>> > 1. For each person, open the door, walk through the door, then close
>> > the door.
>> > 2. Open the door, allow everyone to walk through, then close the
>> > door.
>>
>> > Which one would you say would be a more efficient way to fit 500,000
>> > people through the door?
>>
>> Bad analogy. A better analogy would be if each person has their own
>> door to walk through.
>
> The analogy holds. It's faster to open the door, do what you need to
> do, then close the door than it is to open and close the door each
> time.

It doesn't hold. Read the code again. The total count of "open door" and
"close door" is the same in both cases.

It's

for every person:
    open his door; push him through the door; close his door

vs.

for every person:
    open his door
for every person:
    push him through the door
for every person:
    close his door

Ciao,
Marc 'BlackJack' Rintsch

roger.dahlstrom

2/4/2008 6:49:00 PM


It doesn't matter how many door openings and closings there are; what
matters is the order in which the opening, walking through, and closing
are done. That's my point. In the second example, all of the disk
operations are done at the same time. That's what I meant by people
going through the doors. Maybe it was clearer in my head.

Marc 'BlackJack' Rintsch

2/4/2008 7:09:00 PM


On Mon, 04 Feb 2008 10:48:32 -0800, rdahlstrom wrote:

> It doesn't matter how many doors opening and closing there are, it
> matters the order in which the opening, walking through, and closing
> are done. That's my point. In the second example, all of the disk
> operations are done at the same time. That's what I meant by people
> going through the doors. Maybe it was more clear in my head.

But my timing shows that method two is slower on my computer. So there is
no obvious winner.
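
Given the machine-to-machine variation, a fairer harness (a sketch,
not what either of us ran) would reset the files before each timing
run and alternate the order, so file growth and cache state don't
favour whichever method happens to run second:

import timeit

names = ['afile' + str(n) for n in range(1000)]

def reset():
    # Truncate every file so each run starts from the same state.
    for name in names:
        fp = open(name, 'w'); fp.close()

timers = {
    'method 1': timeit.Timer('''for name in names:
    fp = open(name, 'a'); fp.write('xyz\\n'); fp.close()
''', 'from __main__ import names'),
    'method 2': timeit.Timer('''files = [open(name, 'a') for name in names]
for fp in files:
    fp.write('xyz\\n')
for fp in files:
    fp.close()
''', 'from __main__ import names'),
}

# Interleave the order: 1, 2, 2, 1.
for label in ['method 1', 'method 2', 'method 2', 'method 1']:
    reset()
    print('%s: %f' % (label, timers[label].timeit(500)))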

Ciao,
Marc 'BlackJack' Rintsch

Steven D'Aprano

2/4/2008 9:59:00 PM


On Mon, 04 Feb 2008 17:08:02 +0000, Marc 'BlackJack' Rintsch wrote:

>> Surprisingly, Method 2 is a smidgen faster, by about half a second over
>> 500,000 open-write-close cycles. It's not much faster, but it's
>> consistent, over many tests, changing many of the parameters (e.g. the
>> number of files, the number of runs per timeit test, etc.).
>>
>> I'm using Linux and Python 2.5.
>>
>> So, what's going on? Can anyone explain why the code which does more
>> work takes less time?
>
> Can't confirm this (Linux, Python 2.5):
>
> Method 1: 15.380897998809814
> Method 2: 18.085366010665894

Hmmm... does your system use software RAID? Mine does. I wonder if that's
a relevant factor?

> I guess it's really all about the disk IO as my system monitor applet
> shows that almost all of the time is spend in the kernel and very little
> in user space.

I wouldn't be surprised if it was something to do with the OS caching
writes to disk. And saying that is really just me doing a lot of
hand-waving and saying "it's magic of what we know naught".
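
The hand-waving is at least testable: os.fsync() forces the data out
of the OS write cache and onto the disk before the file is closed, so
if write caching explains the difference, both methods should slow
down drastically and converge. A sketch of the idea (not something I
timed for the figures above):

import os

def append_synced(name, data):
    # Append and force the bytes all the way to the platter.
    fp = open(name, 'a')
    fp.write(data)
    fp.flush()               # push Python's buffer down to the OS
    os.fsync(fp.fileno())    # push the OS cache down to the disk
    fp.close()

for name in ['afile' + str(n) for n in range(10)]:
    append_synced(name, 'xyz\n')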



--
Steven