
comp.lang.c

Manage an unknown number of objects without malloc

pozz

8/27/2011 7:31:00 AM

I'm writing software for an embedded platform, with a C compiler that
doesn't provide malloc/free facilities, so allocation can't be dynamic.

I'd like to mimic the standard I/O functions in one of my APIs, to let
the user developer manage several objects. I, as the library developer,
don't know how many objects the user developer will need.

Consider the fopen:

FILE * fopen (const char *filename, const char *opentype)

The mechanism is simple: the I/O library doesn't know in advance how
many files the user developer will want to manage at the same time, so
fopen() dynamically allocates the FILE object and returns it if the
operation is successful.

How may I mimic this behaviour in my API? I currently see two
possibilities, but I can't choose between them.


First approach. The code that calls the function should allocate the
FILE object and pass it to the API functions. The fopen function will be:

int myfopen (FILE *stream, const char *filename, const char *opentype)

The con of this approach is that the function interface is different, so
it will be costly, in the future, to switch to a platform with
malloc/free facilities (where I'd use an interface similar to the
standard I/O functions).


Second approach. Add an intermediate level that allocates and manages a
maximum number of objects (configurable on a per-project basis).

/* middle_level.c */
FILE files[CONFIG_MAXFILES];

FILE * myfopen (const char *filename, const char *opentype) {
    FILE *freefile = NULL;
    unsigned int i;
    for (i = 0; i < CONFIG_MAXFILES; i++) {
        if (file_isused(&files[i])) {
            freefile = &files[i];
            break;
        }
    }
    if (freefile == NULL) {
        return NULL;
    }
    myfopen_low(freefile, filename, opentype);
    return freefile;
}

Every time I need a new FILE object (as in myfopen), instead of
allocating it dynamically, I search for a free (unused) slot in the
files[] array. In other words, I create a tiny, simple dynamic
allocation facility that manages only FILE objects.
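To make the idea concrete, here is a minimal sketch of the pool
bookkeeping described above, including the release side that a
myfclose counterpart would need. MYFILE, the in_use flag, and the
function names are illustrative stand-ins, not the real FILE internals:

```c
#include <stddef.h>

#define CONFIG_MAXFILES 4

/* Stand-in for the real FILE structure; the in_use flag is the
 * only addition the pool needs. */
typedef struct {
    int in_use;
    /* ... real per-file state goes here ... */
} MYFILE;

static MYFILE files[CONFIG_MAXFILES];

/* Find a free slot, mark it used, hand it out. */
MYFILE *pool_alloc(void)
{
    unsigned int i;
    for (i = 0; i < CONFIG_MAXFILES; i++) {
        if (!files[i].in_use) {     /* note the '!' */
            files[i].in_use = 1;
            return &files[i];
        }
    }
    return NULL;                    /* pool exhausted */
}

/* The release side: the slot simply becomes reusable. */
void pool_free(MYFILE *f)
{
    if (f != NULL)
        f->in_use = 0;
}
```

The usage pattern is then identical to malloc/free, only restricted to
this one object type.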


What do you suggest? Are there any other better approaches?
20 Answers

H Vlems

8/27/2011 9:39:00 AM


On Aug 27, 9:31 am, pozz <pozzu...@gmail.com> wrote:
> I'm writing a software on an embedded platform, with a C compiler that
> hasn't malloc/free facilities, so the allocation can't be dynamic.
> [...]
> What do you suggest?  Are there any other better approaches?

It seems you're in the same position as an operating system designer,
though perhaps somewhat easier, because you know the limits of the
platform you're working on. Which means you've got to write your own
memory allocation software: maintain a table of available free space
and issue an error if more memory is requested than is available.
Maintaining that table may be the real challenge: efficiency, and of
course the memory the table itself uses.
Hans
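A free-space table of the kind Hans describes can be as small as a
fixed-block arena plus a used/free flag per block. The sizes and names
below are illustrative, not taken from any real system:

```c
#include <stddef.h>

#define BLOCK_SIZE 32    /* illustrative block size */
#define NUM_BLOCKS 8     /* illustrative block count */

static unsigned char arena[NUM_BLOCKS][BLOCK_SIZE];
static unsigned char block_used[NUM_BLOCKS];  /* the "free space" table */

/* Hand out one fixed-size block, or NULL if the request is too
 * large or the table shows no free block (the error Hans mentions). */
void *table_alloc(size_t size)
{
    size_t i;
    if (size > BLOCK_SIZE)
        return NULL;             /* more memory needed than available */
    for (i = 0; i < NUM_BLOCKS; i++) {
        if (!block_used[i]) {
            block_used[i] = 1;
            return arena[i];
        }
    }
    return NULL;                 /* table full */
}

/* Mark the block free again in the table. */
void table_free(void *p)
{
    size_t i;
    for (i = 0; i < NUM_BLOCKS; i++) {
        if (p == arena[i]) {
            block_used[i] = 0;
            return;
        }
    }
}
```

Fixed-size blocks sidestep the fragmentation bookkeeping that a
general-purpose allocator would need, at the cost of internal waste.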

Ike Naar

8/27/2011 1:19:00 PM


On 2011-08-27, pozz <pozzugno@gmail.com> wrote:
> Second approach. Adding an intermediate level that allocates and manage
> a maximum number of objects (configurable as a probject basis).
>
> /* middle_level.c */
> FILE files[CONFIG_MAXFILES];
>
> FILE * myfopen (const char *filename, const char *opentype) {
>     FILE *freefile = NULL;
>     unsigned int i;
>     for (i = 0; i < CONFIG_MAXFILES; i++) {
>         if (file_isused(&files[i])) {
>             freefile = &files[i];
>             break;
>         }
>     }
>     if (freefile == NULL) {
>         return NULL;
>     }
>     myfopen_low(freefile, filename, opentype);
>     return freefile;
> }

The name "file_isused" is a bit confusing.
If the for loop is correct as written, file_isused(X)
means X is free (not used).

pozz

8/27/2011 1:47:00 PM


On 27/08/2011 15:18, Ike Naar wrote:
> [...]
>
> The name "file_isused" is a bit confusing.
> If the for loop is correct as written, file_isused(X)
> means X is free (not used).

You're right. I should have written if (!file_isused(...)).

Willem

8/27/2011 3:19:00 PM


pozz wrote:
) Consider the fopen:
)
) FILE * fopen (const char *filename, const char *opentype)
)
) The mechanism is simple: the I/O library doesn't know in advance how
) many files the user developer will want to manage at the same time, so
) fopen() dynamically allocates the FILE object and returns it if the
) operation is successful.

Not quite true; a lot of OSes have limits on how many files a program
can have open at one time. Remember the days of DOS, where you had to
set FILES=255 in your startup files?

) How may I mimic this behaviour in my API? Now I have two possibilities,
) but I couldn't choose one.
)
) First approach. The code that calls the function should allocate the
) FILE object and pass it to the API functions. The fopen function will be:
)
) int myfopen (FILE *stream, const char *filename, const char *opentype)
)
) The con of this approach is that the function interface is different so
) it will be costly, in the future, to switch to a platform with
) malloc/free facilities (where I'll use an interface similar to standard
) I/O functions).

That's a big con.

) Second approach. Adding an intermediate level that allocates and manage
) a maximum number of objects (configurable as a probject basis).
)
) /* middle_level.c */
) FILE files[CONFIG_MAXFILES];
)
) FILE * myfopen (const char *filename, const char *opentype) {
) FILE *freefile = NULL;
) unsigned int i;
) for (i = 0; i < CONFIG_MAXFILES; i++) {
) if (file_isused(&files[i])) {

ITYM: if (file_notused(&files[i]))

) freefile = &files[i];

Why not do:
    myfopen_low(freefile, filename, opentype);
    return freefile;
here, instead of the break?

Anyway, you probably want:
    if (myfopen_low(freefile, filename, opentype)) {
        return freefile;
    }
    return NULL;

) break;
) }
) }

Then you don't need this code here

) if (freefile == NULL) {

However, you might want to set 'errno' here ?

) return NULL;
) }
) myfopen_low(freefile, filename, opentype);
) return freefile;
) }
)
) Every time I need a new FILE object (like in myfopen), instead of
) dynamically allocation, I have to search for a free (not used) object in
) the files[] array. In other words, I create a micro and simple dynamic
) allocation facilities that manage only FILE object.

Sounds like the better approach to me. And to you too, as you don't list
any cons. Well, the big con is obviously the (low?) limit on open files.

) What do you suggest? Are there any other better approaches?

A pool of file descriptors sounds good.


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
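Putting Willem's suggestions together (open the first free slot inside
the loop, set errno on failure), myfopen might look like the sketch
below. MYFILE, file_notused and myfopen_low are stand-ins for the
original post's types and helpers, whose internals aren't shown:

```c
#include <errno.h>
#include <stddef.h>

#define CONFIG_MAXFILES 4

/* Stand-in for the original post's FILE object. */
typedef struct { int used; } MYFILE;

static MYFILE files[CONFIG_MAXFILES];

static int file_notused(const MYFILE *f) { return !f->used; }

/* Pretend low-level open; returns nonzero on success. */
static int myfopen_low(MYFILE *f, const char *name, const char *mode)
{
    (void)name; (void)mode;
    f->used = 1;
    return 1;
}

/* myfopen restructured as Willem suggests: open the first free slot
 * inside the loop; fail with errno set if the low-level open fails
 * or the pool is exhausted. */
MYFILE *myfopen(const char *filename, const char *opentype)
{
    unsigned int i;
    for (i = 0; i < CONFIG_MAXFILES; i++) {
        if (file_notused(&files[i])) {
            if (myfopen_low(&files[i], filename, opentype))
                return &files[i];
            errno = EIO;     /* low-level open failed */
            return NULL;
        }
    }
    errno = EMFILE;          /* too many open files */
    return NULL;
}
```

This keeps the caller-facing signature identical to the standard
fopen, which was the whole point of the second approach.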

gw7rib

8/27/2011 4:17:00 PM


On Aug 27, 8:31 am, pozz <pozzu...@gmail.com> wrote:
> I'm writing a software on an embedded platform, with a C compiler that
> hasn't malloc/free facilities, so the allocation can't be dynamic.

The following sprang to mind as a possible approach. It seems horribly
clunky, and there may be other problems as well...

struct Thing {
    struct Thing *next;
    FILE *file;
    /* any more data here */
};

struct Thing *first = NULL, *last;

void dostuff(whatever) {
    struct Thing thing;
    thing.next = NULL;
    if (!first) first = &thing; else last->next = &thing;
    last = &thing;
    if (need another)
        dostuff(whatever);
    else {
        /* do stuff here with a linked list of Things,
           all living on the (recursive) stack */
    }
}

Jorgen Grahn

8/27/2011 5:54:00 PM


On Sat, 2011-08-27, pozz wrote:
> I'm writing a software on an embedded platform, with a C compiler that
> hasn't malloc/free facilities, so the allocation can't be dynamic.
>
> I'd like to mimic the standard I/O functions in one of my API to let the
> user developer manage several objects. I, as the library developer,
> don't know how many objects will be used by the user developer.

Are you sure this is a good idea? If the system doesn't have those,
chances are it's intended for very static, monolithic software:
software where there's an official, fixed memory budget for the whole
system, not for writers of general libraries and frameworks.

I am asking because I've seen (several times) underpowered embedded systems
being used to do too fancy stuff. This tends to result in instability
and low performance.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Alexandru Lazar

8/27/2011 7:42:00 PM


On 2011-08-27, pozz <pozzugno@gmail.com> wrote:
> I'm writing a software on an embedded platform, with a C compiler that
> hasn't malloc/free facilities, so the allocation can't be dynamic.
>
> I'd like to mimic the standard I/O functions in one of my API to let the
> user developer manage several objects. I, as the library developer,
> don't know how many objects will be used by the user developer.

The platform I'm doing a lot of programming on right now includes
something very similar to this. It's an in-house RTOS that was
developed before I joined; it runs on MSP430 MCUs, which are fairly
memory-restricted.

I think your second approach would be better. Although you don't know how
many objects there will be in advance, you can make a fairly good assumption
on a maximum number, and with a resource-constrained platform that's never
too large. Placing a hard limit over the number of file descriptors (or any
other kind of I/O-related objects) may be somewhat 1980s-esque but I think
it's a simple and efficient approach. Based on the number of functions that
the system performs, you can get a good estimation of a maximum number of
file descriptors that you should allocate; this is inefficient for highly
interactive systems (where you can have users opening a potentially
unlimited number of files), but in non-interactive systems it's easy to
get an accurate estimation.

The way we're doing it is simply to place a limit that seems
reasonable, based on a few test cases. Every limit can be changed at
compile time (in case an appliance needs more or fewer such objects)
and we haven't yet run into major problems. This is handled nicely by
the compile scripts.

If you go this way, it's a good idea to pay close attention to
graceful failure and to establish a reasonable protocol for repeatedly
requesting a file descriptor (in case an application requires one when
none are left and you don't want it to fail outright, just to keep
asking).
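Such a retry protocol can be a small wrapper. Everything here is a
simulated stand-in: fd_alloc and task_yield are not real APIs, and on
a real RTOS the yield would be the scheduler's sleep/yield call:

```c
/* Simulated environment for the sketch: the descriptor pool is
 * "busy" until another task has had a few chances to run. */
static int free_after = 3;              /* frees up after 3 yields */

static int fd_alloc(void)               /* stand-in allocator */
{
    return (free_after <= 0) ? 7 : -1;  /* 7: arbitrary descriptor */
}

static void task_yield(void)            /* stand-in scheduler yield */
{
    free_after--;
}

/* Keep asking instead of failing outright: retry up to max_tries
 * times, yielding between attempts so another task has a chance to
 * release a descriptor. Returns the descriptor, or -1 once the retry
 * budget is exhausted. */
int fd_alloc_retry(unsigned int max_tries)
{
    unsigned int tries;
    for (tries = 0; tries < max_tries; tries++) {
        int fd = fd_alloc();
        if (fd >= 0)
            return fd;
        task_yield();
    }
    return -1;
}
```

Bounding the retries (rather than spinning forever) is what keeps the
failure graceful: the caller still gets a definite answer.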

Best regards,
Alex

Ian Collins

8/27/2011 8:25:00 PM


On 08/27/11 07:31 PM, pozz wrote:
> I'm writing a software on an embedded platform, with a C compiler that
> hasn't malloc/free facilities, so the allocation can't be dynamic.
> [...]

My view is that if the implementation can't use dynamic allocation, it
should have fixed limits on all resources that would otherwise be
dynamically allocated. Otherwise, just implement malloc/free.

The normal reason for using a "static" design is the ability to
guarantee that the system will not run out of memory. That guarantee
should also apply to other resources.

--
Ian Collins

Richard Damon

8/28/2011 1:37:00 AM


On 8/27/11 3:31 AM, pozz wrote:
> I'm writing a software on an embedded platform, with a C compiler that
> hasn't malloc/free facilities, so the allocation can't be dynamic.
>
> I'd like to mimic the standard I/O functions in one of my API to let the
> user developer manage several objects. I, as the library developer,
> don't know how many objects will be used by the user developer.
>

The basic options you have are really:

1) Implement malloc/free yourself, and use these.

2) Preallocate a specified number of each type of buffer. This number
might be specified by a config file, so that it can be adapted to
differing requirements.

3) Have the caller allocate the memory for each buffer and pass it in.


Which one is best is really a function of your overall needs.

The first is the most general, and can be the "easiest" to use. The
biggest issue is that it is very hard to prove that a program using
dynamic allocation will continue to operate over a long period of
time: the memory heap can become fragmented and memory can be lost.
Some design rules for embedded systems disallow the use of dynamic
allocation (or sometimes dynamic allocation after setup) for
reliability reasons. With limited memory, though, there are times when
you need it: if at some times you need a bunch of type-A buffers but
at other times you need a type-B buffer, the other methods have more
problems with this sort of sharing.

The second gets around the fragmentation issues, but does require you
to work out how many of each type of buffer you need. It does have the
advantage that it can look to the program like dynamic allocation, so
your API can stay consistent with systems that have it.

The third requires, as you noticed, an API change. It puts the
responsibility for memory allocation onto the end program. This can be
an advantage in tight-memory conditions, as you tend to have little
wasted memory. It also lets you know at link time whether you have
enough memory; there is no worry that an allocation might fail or that
you might use too many of a resource, since everything is
preallocated. This is the method preferred by some safety-critical
design rules.
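The third option can be sketched as below. MYFILE and its fields are
illustrative, since the real object layout depends on the library:

```c
#include <stddef.h>

/* Stand-in FILE-like type; the internals are unknown, so this is
 * just an illustrative shape. */
typedef struct {
    int open;
    const char *name;
} MYFILE;

/* Option 3: the caller owns the storage and passes it in.
 * Returns 0 on success, -1 on failure. */
int myfopen(MYFILE *stream, const char *filename, const char *opentype)
{
    if (stream == NULL || filename == NULL || opentype == NULL)
        return -1;
    stream->open = 1;
    stream->name = filename;   /* real code would do the device work here */
    return 0;
}
```

Because the MYFILE object can live in static storage or on the stack
of the caller, the total memory cost is fixed and visible at link
time, which is exactly the property the safety-critical rules want.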

pozz

8/28/2011 3:19:00 PM


On 27/08/2011 19:54, Jorgen Grahn wrote:
> On Sat, 2011-08-27, pozz wrote:
>> I'm writing a software on an embedded platform, with a C compiler that
>> hasn't malloc/free facilities, so the allocation can't be dynamic.
>>
>> I'd like to mimic the standard I/O functions in one of my API to let the
>> user developer manage several objects. I, as the library developer,
>> don't know how many objects will be used by the user developer.
>
> Are you sure this is a good idea?

Of course... no, I'm not sure. Indeed, I suggested two approaches: one
similar to the standard I/O functions, and one that is more "static"
and "monolithic".


> If the system doesn't have those
> chances are it's intended for very static, monolithic, software. For
> software where there's an official, fixed memory budget for the whole
> system, not for writers of general libraries and frameworks.

I used this embedded platform for some years in the past and I'm going
to use it in the future. Some months ago I started to think in terms
of libraries: I'm sure I can speed up my work (and introduce fewer
bugs) if I write libraries to reuse in several situations; libraries
that get tested again and again through use, and so end up nearly
bug-free.


> I am asking because I've seen (several times) underpowered embedded systems
> being used to do too fancy stuff. This tends to result in instability
> and low performance.

In the past I thought the same as you, but in my small experience I
noticed that anxiety about "low performance" led me to very bad
decisions.
Yes, I'm using a poor, slow 16 MHz CPU with a handful of bytes of RAM,
but many times a better algorithm, or the use of a robust library
(even if it might appear excessive), gains me clarity and reduced
complexity. Ok, I could lose some precious clock ticks, some
milliseconds, but most of the time the end user won't be able to
notice any difference.