>I am aware of the differences between signed and unsigned types. (I
>must admit to being surprised to see that type char is signed. That
>makes no sense to me.)
char defaulting to signed is classic behaviour for most real C
compilers, though strictly speaking the standard leaves plain char's
signedness implementation-defined. It dates from the early days when
memory space was tight and using 8-bit variables was common. You
can't represent a negative difference with unsigned values, which is
why programmers normally reach for signed variables (hello, Year 2038
problem!)
When working with real compilers, I always define my own unsigned
8-bit data type:
typedef unsigned char BYTE; // portable unsigned 8 bit data type
It seems correct to me for "byte", "word", "dword", etc. datatypes to
be unsigned. I tend to equate them with memory addresses, block
sizes, and so on - things that can never be negative.
char, int, long - these are the signed counterparts. It works for me.
...Andy
Yahoo! Groups Links
<*> To visit your group on the web, go to:
http://groups.yahoo.com/group/oopic/