For F-format input conversion, a field of w characters from the input line is converted to a fixed-point decimal value of precision (p,q). If the field contains a decimal point, q is the number of digits following the decimal point; otherwise, q is the value of d, or zero if d is omitted. If the field consists entirely of blanks, the resulting value is zero, and p is MIN(n,w), where n is the maximum precision that the implementation allows for fixed-point decimal data. (For the maximum precision allowed by Open PL/I, see your Open PL/I User's Guide.) If the field is not entirely blank, it must contain an optionally signed fixed-point constant, which may be surrounded by leading and/or trailing blanks; in that case, p is the precision of the constant. If an invalid field is read, the ERROR condition is signaled.
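The following sketch illustrates these rules; the variable name PRICE and the format item F(8,2) are illustrative choices, not taken from this manual, and the comments show the values that would result under the rules described above.

   DECLARE PRICE FIXED DECIMAL(7,2);

   GET EDIT (PRICE) (F(8,2));
   /* Field '  123.45': the decimal point in the field determines q, */
   /* so q = 2 and PRICE = 123.45.                                   */
   /* Field '   12345': no decimal point, so q is taken from d = 2,  */
   /* and PRICE = 123.45.                                            */
   /* Field '        ': all blanks, so PRICE = 0.                    */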