Each arithmetic data type has a default precision and a maximum precision, as specified in Table 2-2. For fixed-point decimal data, the scale factor q can range from 0 through 18 (0 ≤ q ≤ 18).
| Data Type     | Maximum Precision | Default Precision |
|---------------|-------------------|-------------------|
| Fixed Binary  | 31                | 15*               |
| Fixed Decimal | 18                | 5                 |
| Float Binary  | 52                | 23                |
| Float Decimal | 16                | 6                 |
* The -longint compiler option changes the default precision of fixed binary from 15 to 31.
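As a quick illustration, the PL/I declarations below sketch how these defaults and maxima apply; the variable names are hypothetical, and the commented precisions assume the values in Table 2-2 (and, for the first line, the -longint behavior noted above).

```
DECLARE COUNT   FIXED BINARY;        /* default precision 15 (31 with -longint) */
DECLARE BIG     FIXED BINARY(31);    /* explicit maximum precision              */
DECLARE TOTAL   FIXED DECIMAL;       /* default precision 5, scale factor 0     */
DECLARE PRICE   FIXED DECIMAL(18,2); /* maximum precision 18, scale factor 2    */
DECLARE RATIO   FLOAT BINARY;        /* default precision 23                    */
DECLARE AVERAGE FLOAT DECIMAL(16);   /* explicit maximum precision 16           */
```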