Platform Issues
Big and Little Endian
Depending on the target architecture for which the CIGI API is being built, the platform may
be either big endian or little endian. Many systems use big endian byte order,
although some architectures (most notably Intel) use little endian.
Consult the hardware documentation if unsure which representation is used.
If the platform is big endian, no changes need to be made to the CIGI API. The API handles packet
formatting and translation assuming big endian by default.
If the platform is little endian, the API will handle packets appropriately when the following preprocessor
definition is present.
#define CIGI_LITTLE_ENDIAN
For more information on this switch, please refer to the section on
preprocessor definitions.
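On a little endian host, handling packets appropriately means swapping the bytes of each multi-byte field between host order and the big endian network order. The helpers below are a hypothetical sketch of that translation; they are not the library's actual code, and the names `swap16` and `swap32` are illustrative only:

```c
#include <stdint.h>

/* Reverse the two bytes of a 16-bit packet field. */
static uint16_t swap16(uint16_t value)
{
    return (uint16_t)((value >> 8) | (value << 8));
}

/* Reverse the four bytes of a 32-bit packet field. */
static uint32_t swap32(uint32_t value)
{
    return (value >> 24) |
           ((value >> 8) & 0x0000FF00u) |
           ((value << 8) & 0x00FF0000u) |
           (value << 24);
}
```

On a big endian host no such translation is needed, which is why the API can assume big endian by default and only performs swapping when CIGI_LITTLE_ENDIAN is defined.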
Byte Ordering
Memory is arranged so that either the first or last bit in a byte is considered the most significant bit,
depending on the particular platform. By default, the CIGI API assumes that the first bit in a byte is the
least significant and the last bit is the most significant, as is the case on an Intel / Windows
platform. Consult your platform documentation if unsure which representation is used.
If the target platform for which the CIGI API is built considers the first bit to be the most significant,
use the following preprocessor definition to indicate this before including the cigi_icd.h
file, or set the definition for the project prior to building.
#define MOST_SIGNIFICANT_BIT_FIRST
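For example, when setting the switch in source rather than in the project settings, the definition must appear ahead of the header include so that it is visible when cigi_icd.h is processed:

```c
/* Indicate a most-significant-bit-first platform before the header is read. */
#define MOST_SIGNIFICANT_BIT_FIRST
#include "cigi_icd.h"
```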
For more information on this switch, please refer to the section on
preprocessor definitions.