Eric Sosman
3/23/2011 2:11:00 AM
On 3/22/2011 1:58 PM, alok wrote:
> [...]
> So if the machine size is 32 bit the integer size will be 32 bit?
What is "machine size"? I think that if you try to formulate
a precise definition, you will see that the answer leads nowhere
useful: useful from a hardware designer's point of view, perhaps,
but not very useful to a software programmer.
Scott Fluhrer's explanation (paraphrase: "Different machines
are different") is pretty much the story. Somebody designing a C
implementation considers the CPU's instruction set, possibly mixes
in some information about the CPU hardware and characteristics of
other system components, speculates about market forces, and makes
a choice: "We will use a 28-bit int, and That's That." As C
programmers, we look at the resulting implementation and say "Oh,
28-bit ints, ugh, I want a different system" or "Hooray! 28-bit
ints, just what I've wanted all these years!" or something in
between. And we "vote with our feet" on whether the implementor's
choice was a good one (for us) or not.
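If you are curious what choice your own implementation made, a
minimal sketch like the one below (my own illustration, not part of
the original exchange) will report it, using nothing beyond sizeof
and the <limits.h> macros every hosted implementation provides:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Report the width and range this implementation chose for int. */
        printf("int occupies %zu bytes, %zu bits on this system\n",
               sizeof(int), sizeof(int) * (size_t)CHAR_BIT);
        printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
        return 0;
    }

(The %zu conversion assumes a C99-conforming printf; on an older
library you could cast the sizeof results to unsigned long and use
%lu instead.)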
The point is this: The implementor has the freedom to make
that choice, a freedom he would not have if he were implementing
Java, say. Java says "An int is 32 bits, whether that's convenient
or not" and hands the implementor the burden of dealing with the
fiat. This is nice in one way, because the Java programmer doesn't
need to worry about the sizes and ranges of primitives the way the
C programmer does. But at the same time, it means the programmer
has no way to say "I want an integer of modest size that the system
can handle with maximal ease," which is what a C programmer gets
by writing `int'.
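For what it's worth, a C programmer who *does* want the Java-style
guarantee can reach for C99's <stdint.h>. The sketch below (again my
own illustration, assuming a C99 implementation that provides the
optional exact-width type int32_t) shows the two styles side by side:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int          natural = 42;  /* whatever width the system handles most easily */
        int32_t      exact   = 42;  /* exactly 32 bits, Java-style, if the type exists */
        int_fast16_t fast    = 42;  /* at least 16 bits, picked for speed */

        printf("natural: %d\n", natural);
        printf("exact:   %" PRId32 "\n", exact);
        printf("fast:    %" PRIdFAST16 "\n", fast);
        return 0;
    }

So the choice is available in both directions; plain `int' simply
expresses "modest size, maximal ease" without pinning down a width.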
--
Eric Sosman
esosman@ieee-dot-org.invalid