
comp.lang.c

Memory management by malloc

Giorgio

4/14/2011 5:28:00 PM

Hi all, I am trying to understand how exactly memory management in linux
(OpenSUSE) takes place.
I have compiled the following program:

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>   /* for sleep() */

int main(int argc, char **argv)
{
    long int n = 0;
    int *x;

    if (argc != 2) exit(0);
    else {
        int mem = atoi(argv[1]);
        while (1)
        {
            x = (int*) malloc(mem); // no point in testing if not zero:
                                    // I'll know that from the segfault :-)
            printf("\n n=%ld x=%p", n, x);
            *(x+20) = 123456;       // just to use the allocated memory
                                    // (or to trigger the segfault)
            n++;
            sleep(1); // wait a little and let me see what's happening
        } // while
    } // else
    return 0;
}

on a Linux workstation running 64-bit openSUSE, with 24 GB of RAM.
I run the program as
memorytest 1073751800
thus allocating about 1 GB per cycle.

The program output is

n=0 x=0x7f751b1cd010
n=1 x=0x7f74db1ca010
n=2 x=0x7f749b1c7010
n=3 x=0x7f745b1c4010
<....omissis.....>
n=21 x=0x7f6fdb18e010
n=22 x=0x7f6f9b18b010
n=23 x=0x7f6f5b188010
n=24 x=0x7f6f1b185010
Segmentation fault

The program stops after allocating 24 GB, which is the physical memory
limit: why does it not go as far as 64-bit addressing allows? I'd expect
it to keep allocating memory (and paging) until all the swap space is
exhausted, or the virtual address space is saturated, whichever comes
first.

Also consider that the same code runs with no problem on a Mac Pro,
continuing to allocate memory over and over (until you get bored and
stop the program).

What's going on? Can someone enlighten me please?
I need all that memory (far more than the available physical RAM) so the
problem is not just curiosity...
Thanks
Giorgio

PS A collaborator of mine has already posted this request to other
(Italian-only) groups, so please excuse me if you have already seen this
cry for help somewhere else...
11 Answers

Malcolm McLean

4/14/2011 6:01:00 PM


On Apr 14, 8:27 pm, Giorgio <giorgio.denun...@unisalento.it> wrote:
>
> What's going on? Can someone enlighten me please?
> I need all that memory (far more than the available physical RAM) so the
> problem is not just curiosity...
>
Suse is obviously not allowing paged disk memory by default.
This is defensible. Most programs that have to resort to paging memory
out to disk run too slowly to do any useful work. The term "soft
crashing" has been used to describe this phenomenon.

However there's probably an override somewhere.

Giorgio

4/14/2011 6:53:00 PM


On 14/04/2011 20:00, Malcolm McLean wrote:
>> What's going on? Can someone enlighten me please?
>>
> Suse is obviously not allowing paged disk memory by default.

Hi Malcolm, thanks for your reply!
As you can guess, even though I work in Linux (and in Windows) I am not
much of an expert: I thought paging was always active.

Enlightened :-) by your words, I asked google for
"opensuse allow paged memory"
and I found this page:
http://doc.opensuse.org/products/draft/SLES/SLES-tuning_draft/cha.tuning.m...
which refers to SUSE Linux Enterprise Server but can perhaps give me
some hints (I have openSUSE 11.2).

I shall look for some specific info on my linux version; I have just
seen that there is an opensuse forum at
http://forums.ope...
so I'll also ask there.

Thanks again
Giorgio

Paul

4/14/2011 7:34:00 PM


Giorgio wrote:
> Hi all, I am trying to understand how exactly memory management in linux
> (OpenSUSE) takes place.
> I have compiled the following program:
>
> #include <stdlib.h>
> #include <stdio.h>
> int main(int argc, char **argv)
> {
> long int n=0;
> int *x;
>
> if (argc != 2) exit (0);
> else {
> int mem = atoi(argv[1]);
> while(1)
> {
> x = (int*) malloc (mem); // no point in testing if not zero:
> // I'll know that from the segmfault :-)
> printf("\n n=%ld x=%p", n, x);
> *(x+20) = 123456; // just to use the allocated memory
> // (or to trigger the segmfault)
> n++;
> sleep(1); // wait a little and let me see what's happening
> } // while
> } // else
> return 0;
> }
>
> on a linux workstation running a 64bit OpenSuse, with 24 GB RAM.
> I run the program with
> memorytest 1073751800
> thus allocating memory 1 GB per cycle.
>
> The program output is
>
> n=0 x=0x7f751b1cd010
> n=1 x=0x7f74db1ca010
> n=2 x=0x7f749b1c7010
> n=3 x=0x7f745b1c4010
> <....omissis.....>
> n=21 x=0x7f6fdb18e010
> n=22 x=0x7f6f9b18b010
> n=23 x=0x7f6f5b188010
> n=24 x=0x7f6f1b185010
> Segmentation fault
>
> The program stops after allocating 24 GB, that is the physical memory
> limit: why does it not go as far as 64 bit allocation allows? I'd expect
> it to continue allocating memory (and paging) till all the paging memory
> is exhausted, or virtual space addressing is saturated, whichever
> the first.
>
> Also consider that the same code works with no problem on a Mac Pro,
> continuing to allocate memory on and on (till you are bored and stop the
> program).
>
> What's going on? Can someone enlighten me please?
> I need all that memory (far more than the available physical RAM) so the
> problem is not just curiosity...
> Thanks
> Giorgio
>
> PS A collaborator of mine already posted this request to other (Italian
> -only) groups, so please excuse me if you already see this help cry
> somewhere else...

First, you need to find some keywords.

http://en.wikipedia.org/w...

mentions /proc/sys/vm/swappiness

http://www.kerneltrap.org...

references "overcommit"

*******

Eventually, you find a web page with information on tuning.

http://www.kernel.org/doc/man-pages/online/pages/man5/p...

/proc/sys/vm/overcommit_memory

http://www.win.tue.nl/~aeb/linux/lk...

"After

# echo 2 > /proc/sys/vm/overcommit_memory

all three demo programs were able to obtain 498 MiB on this 2.6.8.1
machine (256 MiB, 539 MiB swap, lots of other active processes),
very satisfactory."

If you look around, you may find enough information
on the subject of "kernel tuning" to satisfy your project.

This kind of testing should only be done on your personal
machine, with no other users present. If the machine
crashes or the OOM_killer is triggered, you don't want
your experiment to spoil the work of others.

Paul

Kenneth Brody

4/14/2011 8:42:00 PM


On 4/14/2011 1:27 PM, Giorgio wrote:
> Hi all, I am trying to understand how exactly memory management in linux
> (OpenSUSE) takes place.
> I have compiled the following program:
[... keep calling malloc() until the program crashes ...]
> on a linux workstation running a 64bit OpenSuse, with 24 GB RAM.
> I run the program with
> memorytest 1073751800
> thus allocating memory 1 GB per cycle.
>
> The program output is
>
> n=0 x=0x7f751b1cd010
[...]
> n=24 x=0x7f6f1b185010
> Segmentation fault
>
> The program stops after allocating 24 GB, that is the physical memory
> limit: why does it not go as far as 64 bit allocation allows? I'd expect
> it to continue allocating memory (and paging) till all the paging memory
> is exhausted, or virtual space addressing is saturated, whichever
> the first.
[...]

Consider, too, that the O/S may impose a limit on the amount of memory a
program can access, to prevent runaway programs from crashing the system.

(On Linux platforms, there is a command [as well as a system call] "ulimit"
which will show this limit, and allow privileged accounts to increase it.)
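
As a rough illustration, here is a minimal sketch of reading that limit
from C on a POSIX-style system, assuming the address-space limit is the
RLIMIT_AS resource (the same value "ulimit -v" reports, except the shell
prints it in kilobytes):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_AS is the per-process address-space limit. */
    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("virtual memory limit: unlimited\n");
    else
        printf("virtual memory limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}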

--
Kenneth Brody

pacman

4/14/2011 8:44:00 PM


In article <4da72e9f$0$38640$4fafbaef@reader1.news.tin.it>,
Giorgio <giorgio.denunzio@unisalento.it> wrote:
>Hi all, I am trying to understand how exactly memory management in linux
>(OpenSUSE) takes place.
>I have compiled the following program:
>
[attempt to malloc more space than RAM]
>Segmentation fault
>
>The program stops after allocating 24 GB, that is the physical memory
>limit: why does it not go as far as 64 bit allocation allows? I'd expect
>it to continue allocating memory (and paging) till all the paging memory
>is exhausted, or virtual space addressing is saturated, whichever
>the first.

The simplest answer is that you forgot to activate your swap space.
Run these to make sure:
swapon -s ; free -t

If the simple answer isn't right, then maybe you can get some idea of what's
going wrong by running your test program with strace and looking at what
syscalls malloc is using and what error code it gets.

Also, instead of deliberately causing a segfault when malloc fails, test it
and after it returns NULL, do a system("cat /proc/self/maps") to see what the
address space looks like. There could be a clue in there, if malloc is
bumping into some other memory region. You'd hope that it would skip over
areas that are in use and find address space wherever it can, but I've found
that to be a false hope before.
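
Something along these lines, say (only a sketch; it assumes a Linux-style
/proc/self/maps, and the roughly 1 GB chunk size mirrors the original
test):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t chunk = 1024UL * 1024 * 1024;   /* roughly 1 GB per cycle */
    long n = 0;

    for (;;) {
        void *x = malloc(chunk);
        if (x == NULL) {
            /* malloc failed: dump the address-space layout and stop */
            printf("malloc returned NULL after %ld allocations\n", n);
            system("cat /proc/self/maps");
            return 1;
        }
        printf("n=%ld x=%p\n", n, x);
        n++;
    }
}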

--
Alan Curry

Peter Nilsson

4/14/2011 10:23:00 PM


On Apr 15, 3:27 am, Giorgio <giorgio.denun...@unisalento.it> wrote:
> Hi all, I am trying to understand how exactly memory management
> in linux (OpenSUSE) takes place.

This is a FAQ: <http://c-faq.com/malloc/lazyallo...

--
Peter

Giorgio

4/14/2011 10:55:00 PM



On 15/04/2011 00:23, Peter Nilsson wrote:
> On Apr 15, 3:27 am, Giorgio<giorgio.denun...@unisalento.it> wrote:
>> Hi all, I am trying to understand how exactly memory management
>> in linux (OpenSUSE) takes place.
>
> This is a FAQ:<http://c-faq.com/malloc/lazyallo...
>
> --
> Peter


Hi Peter, thanks for your reply.
In point of fact, by explicitly writing a value into the memory returned
by malloc with:

*(x+20) = 123456; // just to use the allocated memory
                  // (or to trigger the segfault)

I was trying to make sure that the returned memory really existed (on
disk or in RAM), just to avoid the case where the operating system
cheats :-)


Anyway, here is the news: on http://forums.op... somebody
proposed trying
ulimit -v
to read how much virtual memory one can use (I got 26497120), and
ulimit -v <kbytes>
to set it, and it did work!! I can now run my program far longer than
before, allocating virtual space!
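
(For the record, the same limit can also be read and raised from inside
a program via getrlimit/setrlimit on RLIMIT_AS; a minimal sketch, under
the assumption that the new soft limit stays within the hard limit:)

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* Raise the soft address-space limit up to the hard limit; only a
       privileged process may raise the hard limit itself. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}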

I am still exploring this command, because the strange thing is that by
setting
ulimit -v 40000000
I can run my testmemory program up to about 37 GB, while the command
free -t
gives

giorgio.denunzio@pc-gdenunzio:~> free -t
             total       used       free     shared    buffers     cached
Mem:      24735456    9381104   15354352          0     425344    8255588
-/+ buffers/cache:     700172   24035284
Swap:      8385888     541412    7844476
Total:    33121344    9922516   23198828

from which I understand that the total (physical + swap) memory is 33 GB.
Too tired now, time to go to bed, but tomorrow morning I'll try to
understand...

Thanks again to you all!
Giorgio



Keith Thompson

4/15/2011 12:30:00 AM


Giorgio <giorgio.denunzio@unisalento.it> writes:
> On 15/04/2011 00:23, Peter Nilsson wrote:
> > On Apr 15, 3:27 am, Giorgio<giorgio.denun...@unisalento.it> wrote:
> >> Hi all, I am trying to understand how exactly memory management
> >> in linux (OpenSUSE) takes place.
> >
> > This is a FAQ:<http://c-faq.com/malloc/lazyallo...
> >
> > --
> > Peter
>
>
> Hi Peter, thanks for your reply.
> In point of fact, by explicitly setting a value in the memory given back
> by malloc by:
>
> *(x+20) = 123456; // just to use the allocated memory
> // (or to trigger the segmfault)
>
> I was trying to be sure that the returned memory really existed (disk or
> RAM), just to avoid the case in which the operating system cheats :-)

As I recall (and you should ask about this in a Linux group to be
sure), accessing memory like this won't necessarily cause the OS to
try to swap in the entire malloc'ed chunk of memory. Other memory
pages in that same chunk might still not be allocated in real memory.
You probably need to touch one byte in each memory page.

Determining the size of a memory page, and whether I know what I'm
talking about, are left as exercises.
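
For illustration, one way to touch a byte in each page might look like
this (just a sketch, assuming sysconf(_SC_PAGESIZE) is available, as on
POSIX systems):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Touch one byte in every page of the block so the OS really has to
   back each page with RAM or swap. */
static void touch_pages(void *p, size_t len)
{
    long pagesize = sysconf(_SC_PAGESIZE);   /* typically 4096 */
    unsigned char *bytes = p;
    size_t i;

    if (pagesize <= 0)
        pagesize = 4096;                     /* fall back to a common value */
    for (i = 0; i < len; i += (size_t)pagesize)
        bytes[i] = 1;
}

int main(void)
{
    size_t len = 1024UL * 1024 * 1024;       /* 1 GB, as in the original test */
    void *x = malloc(len);

    if (x == NULL) {
        fprintf(stderr, "malloc failed\n");
        return 1;
    }
    touch_pages(x, len);
    free(x);
    return 0;
}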

--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.ne...
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Giorgio

4/15/2011 8:16:00 PM


>
> As I recall (and you should ask about this in a Linux group to be
> sure), accessing memory like this won't necessarily cause the OS to
> try to swap in the entire malloc'ed chunk of memory. Other memory
> pages in that same chunk might still not be allocated in real memory.
> You probably need to touch one byte in each memory page.
>
> Determining the size of a memory page, and whether I know what I'm
> talking about, are left as exercises.
>

Hi Keith, thanks for the information! The mechanism of memory management
is something I have never studied in detail (but I should!).

Anyway, now my program is working!
I have set up an 80 GB file called swap_1 that I am using as an
additional swap area (via swapon), and I have set ulimit -v to about 80%
of the total swap space.
My test program ran OK, and now I have launched the real program from
which my problem started (a program working on a binary mask of a lung,
whose aim is to find pleural nodules: I work in the field of medical
physics and imaging). Everything is going fine, and I can see the free
virtual memory going down as the program runs. If all goes well, I'll
play with /etc/fstab and make my changes permanent.
Thanks again to you and to the whole group for helping!
Giorgio

io_x

4/16/2011 6:05:00 AM



"Keith Thompson" <kst-u@mib.org> ha scritto nel messaggio
news:lnwriwl5la.fsf@nuthaus.mib.org...
> Giorgio <giorgio.denunzio@unisalento.it> writes:
>> On 15/04/2011 00:23, Peter Nilsson wrote:
>> > On Apr 15, 3:27 am, Giorgio<giorgio.denun...@unisalento.it> wrote:
>> >> Hi all, I am trying to understand how exactly memory management
>> >> in linux (OpenSUSE) takes place.
>>
>> I was trying to be sure that the returned memory really existed (disk or
>> RAM), just to avoid the case in which the operating system cheats :-)
....
> Determining the size of a memory page, and whether I know what I'm
> talking about, are left as exercises.

there is no need to know the size of a memory page,
nor what a memory page means

you can simply zero all of the memory, e.g. something
close to

a = malloc(v);
if (a == 0) { /* handle the failure */ }
for (i = 0; i < v; ++i)
    ((char*)a)[i] = 0;

I don't know if it compiles ...