Each arithmetic data type has a default precision and a maximum precision, as specified in Table 2-2. The scale factor q of fixed-point decimal data can range from 0 through 31 (0 ≤ q ≤ 31).
* The -longint option changes the default precision of fixed binary from 15 to 31.
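The declarations below sketch how precision and scale factor appear in source code; they are illustrative examples, not taken from Table 2-2. Precision is written as (p) for binary and (p,q) for decimal, where q is the scale factor.

```pli
/* Illustrative fixed-point declarations (assumed names). */
declare count fixed binary (15);    /* default fixed binary precision
                                       (31 under -longint)            */
declare total fixed binary (31);    /* maximum fixed binary precision */
declare price fixed decimal (9,2);  /* p = 9 digits, scale factor
                                       q = 2; q may be 0 through 31   */
```

Omitting the precision entirely, as in `declare count fixed binary;`, gives the default precision from Table 2-2.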