In EGL, precision is the total number of digits available to express a variable's value, not just the number of digits after the decimal point. The precision of an INT, for example, is 9. For floating-point types, the precision is the maximum number of digits that the type can represent on the system where the program is running.
mathLib.precision(numericVariable SMALLINT | INT | BIGINT |
                  DECIMAL | SMALLFLOAT | FLOAT in)
  returns (result INT)
result INT;
myVar FLOAT;
result = mathLib.precision(myVar); // FLOAT carries 15 digits, so result = 15
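For fixed-point types, the result reflects the fixed or declared digit count rather than a system maximum. A minimal sketch, assuming that precision of a DECIMAL reports its declared total number of digits (variable names here are illustrative, not part of the library):

intVar INT;
decVar DECIMAL(7,2); // seven digits in total, two after the decimal point
p INT;
p = mathLib.precision(intVar); // p = 9, the fixed precision of INT
p = mathLib.precision(decVar); // p = 7, the declared total digit count

Note that the count covers all digits, so DECIMAL(7,2) reports 7, not the 2 decimal places.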