
comp.lang.c++

incredible slowdown switching to 64 bit g++

nandor.sieben

11/25/2008 6:21:00 AM

I have a fairly complex C++ program that uses a lot of STL, number
crunching using doubles and the lapack library (-llapack -lblas -lg2c -
lm). The code works fine on any 32 bit unix machine compiled with g++
but when I try it on a 64 bit machine a running time of 10 seconds
becomes 15 minutes. The code is complex; I could not create a simple
subset that reproduces this problem. I tried this on several 32 and 64
bit machines. The speeds of the machines are comparable. I use -O2
optimization. The program is not swapping to disk. What could cause
this incredible slowdown?

Some suspects:

- The lapack library
- Tolerances I use for floating point comparisons
- Large vector<vector<int> > variables (even vector<vector<vector<int> > > variables)
- Need a compiler option on the 64 bit machines?
- Random number generator
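The tolerance suspect typically shows up in convergence loops. Here is a hypothetical sketch (not the poster's actual code): if intermediate rounding differs between the two builds (for instance x87 extended precision on 32-bit x86 vs SSE2 doubles on x86-64), a loop like this can need a different number of iterations to get under the same tolerance.

```cpp
#include <cmath>

// Hypothetical tolerance-driven iteration: Newton's method for sqrt(x).
// The loop runs until the residual drops below tol, so the iteration
// count depends on how intermediate results are rounded.
double sqrt_newton(double x, double tol)
{
    double guess = x;
    while (std::fabs(guess * guess - x) > tol)
        guess = 0.5 * (guess + x / guess);  // Newton update
    return guess;
}
```

A pathological tolerance (one only reachable with extended precision) could turn such a loop into a near-infinite one on the other build, which would explain a slowdown of this magnitude better than word size alone.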

20 Answers

Kai-Uwe Bux

11/25/2008 6:46:00 AM


nandor.sieben@gmail.com wrote:

> I have a fairly complex C++ program that uses a lot of STL, number
> crunching using doubles and the lapack library (-llapack -lblas -lg2c -
> lm). The code works fine on any 32 bit unix machine compiled with g++
> but when I try it on a 64 bit machine a running time of 10 seconds
> becomes 15 minutes. The code is complex, I could not create a simple
> subset that produces this problem. I tried this on several 32 and 64
> bit machines. The speed of the machines are comparable. I use -O2
> optimization. The program is not swapping to disk. What could cause
> this incredible slowdown?
[snip]

Just a quick question: does the program do the same thing on a 64-bit machine
as on a 32-bit machine? Has testing shown that for the same input you get
the same output?


Best

Kai-Uwe Bux

Ian Collins

11/25/2008 7:03:00 AM


nandor.sieben@gmail.com wrote:
> I have a fairly complex C++ program that uses a lot of STL, number
> crunching using doubles and the lapack library (-llapack -lblas -lg2c -
> lm). The code works fine on any 32 bit unix machine compiled with g++
> but when I try it on a 64 bit machine a running time of 10 seconds
> becomes 15 minutes. The code is complex, I could not create a simple
> subset that produces this problem. I tried this on several 32 and 64
> bit machines. The speed of the machines are comparable. I use -O2
> optimization. The program is not swapping to disk. What could cause
> this incredible slowdown?
>
You'll have to profile to find out. It's not uncommon for 64 bit
executables to be slower when the code has been tuned for 32bit. That's
one reason why 32 bit executables are still common on 64 bit platforms.

> Some suspects:
>
> -The lapack library
> - Tolerances I use for floating point comparisons

Shouldn't matter.

> - Large vector<vector<int> > variables (even vector<vector<vector<int> > > variables)

Shouldn't matter. A heavy use of long might.

Try a gcc or Linux group or maybe comp.unix.programmer.

--
Ian Collins

nandor.sieben

11/25/2008 7:27:00 AM


> Just A Quick question: does the program do the same thing on a 64bit machine
> as on a 32bit machine? has testing shown that for the same input you get
> the same output?

There are small differences in the values of doubles but I guess
that's not unexpected.


nandor.sieben

11/25/2008 7:31:00 AM


> You'll have to profile to find out.  

I did try profiling but I did not make much sense of it. Generally it
seemed like everything takes somewhat longer.

> It's not uncommon for 64 bit
> executables to be slower when the code has been tuned for 32bit.  That's
> one reason why 32 bit executables are still common on 64 bit platforms.

But could it be such a huge difference? What does it mean to be tuned
for 32 bit? The code does not depend on it. Could it be that the STL
library or the lapack library is optimized for 32 bit?

> Shouldn't matter.  A heavy use of long might.

No long in the code.

nandor.sieben

11/25/2008 7:36:00 AM


> It's not uncommon for 64 bit
> executables to be slower when the code has been tuned for 32bit.  That's
> one reason why 32 bit executables are still common on 64 bit platforms.

I am not trying to run the executable compiled on the 32 bit machine.
I recompile everything on the 64 bit machines.

Ian Collins

11/25/2008 7:45:00 AM


nandor.sieben@gmail.com wrote:
>> It's not uncommon for 64 bit
>> executables to be slower when the code has been tuned for 32bit. That's
>> one reason why 32 bit executables are still common on 64 bit platforms.
>
> I am not trying to run the executable compiled on the 32 bit machine.
> I recompile
> everything on the 64 bit machines.

Why?

--
Ian Collins

nandor.sieben

11/25/2008 7:57:00 AM


> > I am not trying to run the executable compiled on the 32 bit machine.
> > I recompile
> > everything on the 64 bit machines.
>
> Why?

It is my own code. Since I have the source code it makes sense to
recompile and hope it will be optimized for the new machine. I don't
know if the 32 bit executable would run on the 64 bit machines but
perhaps I should try that.

Could this piece of code be responsible?

extern "C"
{
  void dsyev_ (const char *jobz,
               const char *uplo,
               const int &n,
               double a[],
               const int &lda,
               double w[], double work[], int &lwork, int &info);
}

int
dsyev (const vector < vector < double > > &mat,
       vector < double > &eval,
       vector < vector < double > > &evec)
{
  ....
  dsyev_ ("V", "U", n, a, n, w, work, lwork, info);
  ....
}

This is how I use the fortran lapack library. Perhaps the type sizes
change differently in C++ and in Fortran when going from 32 bit to 64
bit.
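That worry is plausible in one specific case, sketched here as an assumption rather than a diagnosis: some 64-bit LAPACK builds (so-called ILP64 builds) use 8-byte Fortran INTEGERs, while C++ int stays 4 bytes under LP64. A width-matching typedef in the extern "C" declaration makes the assumption explicit; LAPACK_ILP64 is a hypothetical macro name here, and the pointer parameters are just the conventional spelling of the by-reference arguments.

```cpp
// Hypothetical guard: ILP64 LAPACK builds pass 8-byte Fortran INTEGERs,
// the usual LP64 convention passes 4-byte ones. Mismatching the width
// across the extern "C" boundary corrupts the arguments silently.
#ifdef LAPACK_ILP64
typedef long long f_int;   // 8-byte Fortran INTEGER
#else
typedef int f_int;         // conventional 4-byte Fortran INTEGER
#endif

extern "C" void dsyev_(const char *jobz, const char *uplo,
                       const f_int *n, double a[], const f_int *lda,
                       double w[], double work[],
                       f_int *lwork, f_int *info);
```

Whether the installed liblapack is an ILP64 build is a property of that particular machine's packages, so it is worth checking there rather than assuming either way.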

Ian Collins

11/25/2008 8:04:00 AM


nandor.sieben@gmail.com wrote:
>>> I am not trying to run the executable compiled on the 32 bit machine.
>>> I recompile
>>> everything on the 64 bit machines.
>> Why?
>
> It is my own code. Since I have the source code it makes sense to
> recompile and hope it
> will be optimized for the new machine. I don't know if the 32 bit
> executable would run on
> the 64 bit machines but perhaps I should try that.
>
Under any decent OS, they should. I don't use 32 bit systems any more
and I seldom build 64 bit executables.

> Could this piece of code be responsible?
>
> extern "C"
> {
> void dsyev_ (const char *jobz,
> const char *uplo,
> const int &n,
> double a[],
> const int &lda,
> double w[], double work[], int &lwork, int &info);
> }
>
doubles or ints shouldn't be an issue.

You should try asking on a more specialised group. You should be able
to find a 64-bit porting guide for your platform.

--
Ian Collins

Maxim Yegorushkin

11/25/2008 11:52:00 AM


On Nov 25, 6:21 am, nandor.sie...@gmail.com wrote:
> I have a fairly complex C++ program that uses a lot of STL, number
> crunching using doubles and the lapack library (-llapack -lblas -lg2c -
> lm). The code works fine on any 32 bit unix machine compiled with g++
> but when I try it on a 64 bit machine a running time of 10 seconds
> becomes 15 minutes. The code is complex, I could not create a simple
> subset that produces this problem. I tried this on several 32 and 64
> bit machines. The speed of the machines are comparable. I use -O2
> optimization. The program is not swapping to disk. What could cause
> this incredible slowdown?

[]

Have you tried comparing 32- and 64-bit versions compiled on the very
same machine? Use the -m32 compiler switch to compile a 32-bit version.
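That comparison might look roughly like this; prog.cpp and input.txt are placeholder names, and the -m32 build may need the 32-bit multilib packages and 32-bit versions of the lapack libraries installed.

```shell
# Build both word sizes from the same source on the 64-bit machine,
# then time them on identical input to isolate the word-size effect.
g++ -O2 -m32 prog.cpp -o prog32 -llapack -lblas -lg2c -lm
g++ -O2 -m64 prog.cpp -o prog64 -llapack -lblas -lg2c -lm
time ./prog32 < input.txt
time ./prog64 < input.txt
```

If prog32 is fast and prog64 is slow on the same hardware, the problem is the 64-bit build itself rather than the machines.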

--
Max

James Kanze

11/25/2008 12:03:00 PM


On Nov 25, 8:27 am, nandor.sie...@gmail.com wrote:
> > Just A Quick question: does the program do the same thing on
> > a 64bit machine as on a 32bit machine? has testing shown
> > that for the same input you get the same output?

> There are small differences in the values of doubles but I
> guess that's not unexpected.

Not really. Both the 64 bit machine and the 32 bit one are
probably using IEEE doubles. Even on a 32 bit machine, a double
is normally 64 bits.

Compiling in 64 bit mode will often result in some reduction in
speed, because of larger program size, and thus poorer locality.
I can't imagine this representing more than a difference of
about 10 or 20 percent, however, and I would expect it usually
to be a lot less.

Have you profiled the two cases, to see which functions have
become significantly slower?

--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34