No matter what statistical programming language you use, be careful when testing a floating-point number for an exact value. In the world of numerical analysis, this pitfall is known as "10.0 times 0.1 is hardly ever 1.0" (Kernighan and Plauger, 1974, The Elements of Programming Style).
There are many examples of arithmetic operations with finite precision numbers that can serve as cautionary tales, but the one that I saw recently was a DO loop in SAS that looked something like the following:
data A;
x = 0;
do i = 1 to 20 until(x=1);   /* sometimes WRONG */
   x = x + 0.1;
   output;
end;
run;
Of course, the original code didn't have a comment that shows where the error is! The programmer clearly intends for the loop to exit after 10 iterations, when x equals 1. However, because of finite-precision arithmetic and the fact that 0.1 is not exactly representable in binary, the value of x is never exactly equal to 1, so the UNTIL condition is never true and the loop runs for all 20 iterations.
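You can see the same effect in any language that uses IEEE double-precision arithmetic. Here is a short Python sketch (an illustration of the phenomenon, not SAS code) showing that ten additions of 0.1 land close to 1.0 but never exactly on it:

```python
# Sum 0.1 ten times. The result is very close to 1.0 but not exactly 1.0,
# because 0.1 has no exact binary floating-point representation.
x = 0.0
for _ in range(10):
    x += 0.1

print(x == 1.0)              # False
print(abs(x - 1.0) < 1e-9)   # True: x differs from 1.0 only by rounding error
```

An exact-equality test on x fails even though x is within rounding error of 1, which is precisely why a loop guarded by x=1 never exits early.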
Practice "defensive programming": don't use exact comparisons in IF-THEN, DO-UNTIL, or DO-WHILE statements. A better way to code the previous loop is to use an inequality. In this example, the iteration can stop when x is greater than 1 - 0.1/2 = 0.95. If you don't know the exact amount by which x will be incremented, use a quantity related to "machine epsilon," as follows:
data A;
x = 0;
eps = constant("SQRTMACEPS");   /* or 100*constant("MACEPS") ... */
do i = 1 to 20 until(x > 1-eps);
   x = x + 0.1;
   output;
end;
run;
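The same tolerance-based guard can be sketched in Python (again as an illustration, not SAS): the square root of machine epsilon from sys.float_info plays the role of constant("SQRTMACEPS"), and the check is performed at the bottom of the loop, as in a DO-UNTIL:

```python
import math
import sys

# Analogue of SAS constant("SQRTMACEPS"): sqrt of double-precision machine epsilon.
eps = math.sqrt(sys.float_info.epsilon)

x = 0.0
iterations = 0
for i in range(1, 21):
    x += 0.1
    iterations += 1
    if x > 1 - eps:   # tolerance-based test instead of x == 1
        break

print(iterations)     # the loop exits after 10 iterations, as intended
```

Because x after ten increments differs from 1 by far less than eps, the inequality fires on exactly the tenth pass, which is the behavior the original programmer wanted.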
As I've said, these examples are ubiquitous in scientific programming, and this is a popular topic on "tip of the day" services such as @RLangTip on Twitter. So regardless of the programming language that you use, avoid the exact comparison of floating-point numbers.