
15.1.2 C Data Type Sizes

The C language defines data types in terms of a minimum size, rather than an exact size. As of this writing, this mainly matters for the types int and long. A variable of type int must be at least 16 bits, and is often 32 bits. A variable of type long must be at least 32 bits, and is sometimes 64 bits.

The range of a 16-bit number is -32768 to 32767 when signed, or 0 to 65535 when unsigned. If a variable may hold values that do not fit in 16 bits, use long rather than int. Never assume that int or long has a specific size, or that either will overflow at a particular point. When appropriate, use variables of system-defined types rather than int or long:

`size_t'
    Use this to hold the size of an object, as returned by sizeof.
`ptrdiff_t'
    Use this to hold the difference between two pointers into the same array.
`time_t'
    Use this to hold a time value as returned by the time function.
`off_t'
    On a Unix system, use this to hold a file position as returned by lseek.
`ssize_t'
    Use this to hold the result of the Unix read or write functions.

Some books on C recommend using typedefs to specify types of particular sizes, and then adjusting those typedefs on specific systems. GNU Autotools supports this using the `AC_CHECK_SIZEOF' macro. However, while we agree with using typedefs for clarity, we do not recommend using them purely for portability. It is safest to rely only on the minimum size assumptions made by the C language, rather than to assume that a type of a specific size will always be available. Also, most C compilers will define int to be the most efficient type for the system, so it is normally best to simply use int when possible.
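The typedef approach can be sketched as follows. `AC_CHECK_SIZEOF(int)' and `AC_CHECK_SIZEOF(long)' would define `SIZEOF_INT' and `SIZEOF_LONG' in `config.h'; the `limits.h' fallback here is only an assumption of this sketch, so that it compiles on its own, and the name `int32' is hypothetical:

```c
#include <limits.h>

/* Normally config.h, generated by configure, defines these via
   AC_CHECK_SIZEOF.  Derive them from limits.h as a standalone
   fallback for this sketch.  */
#ifndef SIZEOF_INT
# if INT_MAX >= 2147483647
#  define SIZEOF_INT 4
# else
#  define SIZEOF_INT 2
# endif
#endif
#ifndef SIZEOF_LONG
# if LONG_MAX > 2147483647L
#  define SIZEOF_LONG 8
# else
#  define SIZEOF_LONG 4
# endif
#endif

/* Pick some type that is at least 32 bits wide.  */
#if SIZEOF_INT >= 4
typedef int int32;
#elif SIZEOF_LONG >= 4
typedef long int32;
#else
# error "no 32-bit integer type found"
#endif
```

As the surrounding text argues, the minimum-size guarantees usually make this machinery unnecessary: a plain long already holds at least 32 bits everywhere.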

This document was generated by Gary V. Vaughan on February 8, 2006 using texi2html