
comp.lang.c

int and float memory space

alok

3/21/2011 6:17:00 PM

Why is the number of bytes needed by int and float machine dependent?
5 Answers

Scott Fluhrer

3/21/2011 6:43:00 PM



"alok" <manglikalok@gmail.com> wrote in message
news:17bf789c-4d85-4903-9710-e30e27f5de0c@34g2000pru.googlegroups.com...
> Why is the number of bytes needed by int and float machine dependent?

Because machines vary in the sizes of integers and floating-point values
(and, for that matter, the size of a 'byte') that they work most efficiently
with. For example, some smaller CPUs can work efficiently with 16 bit
values; anything larger than that is a pain. On the other extreme, some
CPUs work most efficiently with 32 or 64 bit values, and convincing them to
work with smaller integers would actually slow things down.

Now, C could preselect a specific size (and insist that all implementations
do whatever conversion is necessary to make that specific size work);
however, C has a bias towards efficiency, and so it lets the implementation
pick what's efficient (as long as it meets various minimums; for example, a
variable of type 'int' must be able to represent any value between -32767
and 32767).

--
poncho




alok

3/22/2011 5:59:00 PM


On Mar 21, 11:43 pm, "Scott Fluhrer" <sfluh...@ix.netcom.com> wrote:
> [explanation of machine-dependent sizes snipped]
Poncho,

So if the machine size is 32 bits, will the integer size be 32 bits?

thanks for your wonderful reply.

Scott Fluhrer

3/22/2011 6:08:00 PM



"alok" <manglikalok@gmail.com> wrote in message
news:10ede842-169f-4989-b319-375e89610026@a21g2000prj.googlegroups.com...
> On Mar 21, 11:43 pm, "Scott Fluhrer" <sfluh...@ix.netcom.com> wrote:
> Poncho,
>
> So if the machine size is 32 bits, will the integer size be 32 bits?

Perhaps, and perhaps not. That's really the compiler writer's call. The
intent in the design of C is that common C types be efficient, but there's
really nothing enforcing it.

--
poncho



Keith Thompson

3/22/2011 6:27:00 PM


"Scott Fluhrer" <sfluhrer@ix.netcom.com> writes:
> "alok" <manglikalok@gmail.com> wrote in message
> news:10ede842-169f-4989-b319-375e89610026@a21g2000prj.googlegroups.com...
> On Mar 21, 11:43 pm, "Scott Fluhrer" <sfluh...@ix.netcom.com> wrote:
>> So if the machine size is 32 bits, will the integer size be 32 bits?
>
> Perhaps, and perhaps not. That's really the compiler writer's call. The
> intent in the design of C is that common C types be efficient, but there's
> really nothing enforcing it.

And in practice, it's very common to choose sizes based on compatibility
with other systems. For example, if a 32-bit system is a direct
descendant of a similar 16-bit system, the implementer might well choose
to make int 16 bits so that code written for the older system will work
without change.

--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.ne...
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Eric Sosman

3/23/2011 2:11:00 AM


On 3/22/2011 1:58 PM, alok wrote:
> [...]
> So if the machine size is 32 bits, will the integer size be 32 bits?

What is "machine size"? I think that if you try to formulate
a precise definition, you will see that the answer leads nowhere
useful: such a definition may be useful from a hardware designer's
point of view, yes, but not very useful to a software programmer.

Scott Fluhrer's explanation (paraphrase: "Different machines
are different") is pretty much the story. Somebody designing a C
implementation considers the CPU's instruction set, possibly mixes
in some information about the CPU hardware and characteristics of
other system components, speculates about market forces, and makes
a choice: "We will use a 28-bit int, and That's That." As C
programmers, we look at the resulting implementation and say "Oh,
28-bit ints, ugh, I want a different system" or "Hooray! 28-bit
ints, just what I've wanted all these years!" or something in
between. And we "vote with our feet" on whether the implementor's
choice was a good one (for us) or not.

The point is this: The implementor has the freedom to make
that choice, a freedom he would not have if he were implementing
Java, say. Java says "An int is 32 bits, whether that's convenient
or not" and hands the implementor the burden of dealing with the
fiat. This is nice in one way, because the Java programmer doesn't
need to worry about the sizes and ranges of primitives the way the
C programmer does. But at the same time, it means the programmer
has no way to say "I want an integer of modest size that the system
can handle with maximal ease," which is what a C programmer gets
by writing `int'.

--
Eric Sosman
esosman@ieee-dot-org.invalid