robertwessel2@yahoo.com
6/30/2011 12:57:00 AM
On Thu, 30 Jun 2011 02:06:26 +0200, jacob navia <jacob@spamsink.net>
wrote:
>I am learning the Macintosh environment with the idea of porting the IDE
>of lcc-win to the mac.
>
>What is nice in the mac is how easily you can set up a text display
>and integrate images and other data into the text. It supports RTF text
>(as does Windows, by the way) and I was again wondering about
>implementing an old idea that I already wanted to build into wedit
>some years ago.
>
>Actually, when C was conceived there weren't any bit-mapped graphics,
>nor any of the hardware that we now consider commonplace.
>
>The typography of C has remained the same as that of all other
>computer languages: obsessed with the ASCII character codes 32 through
>127. So we write != instead of the inequality sign, = instead of the
>assignment arrow, "&&" instead of /\, and "||" instead of \/.
>
>Programs would be clearer if we used today's hardware to show the
>usual signs, instead of constructs adapted to the teletype
>typewriters of the seventies.
>
>Unicode now offers all the signs we could want to display in our
>programs, and it would be progress if C standardized some code points
>to be used instead of the usual != and &&, etc.
>
>We have in iso646.h
>#define and &&
>#define and_eq &=
>#define bitand &
>#define bitor |
>#define compl ~
>
>We could have in some isoXXX.h
>#define ≠ !=
>#define ∧ and
>#define ∨ or
>#define ≤ <=
>#define ≥ >=
>
>etc.
>
>Using ← for assignment would avoid the common beginner's error of
>writing = instead of ==, and programs would look less horrible.
>
>All this would be done in output only at first, to avoid requiring a
>new C keyboard, though that could come later. You would still type !=
>but it would be displayed as ≠, in the same way that you first type an
>accent, then the letter under the accent, and obtain a single
>character.
>
>The standardization committee would be crucial in making this change
>smooth, but... I fear they won't be so enthusiastic...
You'd have a hard time exchanging programs with
non-extended-character-set implementations (and I'm not limiting that
to Unicode implementations, on the assumption that some extended but
non-Unicode implementations would be plausible).
I'm afraid you'd end up with more di/trigraphs. The horror of that
will probably send anyone running.
FWIW, Java allows the use of Unicode (in fact it's required by the
specification), and you can use Unicode characters in names (so you
can assign the value 3.14 to a variable named U+03C0, GREEK SMALL
LETTER PI). But they didn't use any of the extended characters for
operators.
Nor is this a particularly new concept - APL (circa 1964 for the first
implementations, although the initial papers, with most of the
symbology, date to about 1961) required, or at least could use, an extended character
set. IBM (and others) manufactured terminals and printers with the
APL character set available for output and on the keyboard. IBM even
had Selectric typewriter balls with the APL set.