## Correct code can still fail

When my code fails, it's not always the code itself that contains the failure. Sometimes the cause is a quirk in the framework, the programming language, or even the hardware itself. My code needs to handle these quirks for the problem to be fixed.

## Small values

Let's take the following algorithm:
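The original listing isn't reproduced here, so the following is a minimal sketch of what such a routine might look like, assuming it rounds a part-of-total ratio to a whole percentage (the `percentOf` name is hypothetical):

```javascript
// Hypothetical percentage-rounding routine (name and shape assumed,
// since the original listing is not shown).
function percentOf(part, total) {
  // Convert a part/total ratio to a whole-number percentage.
  return Math.round(part / total * 100);
}

percentOf(500, 1000);  // 50, as expected
percentOf(1005, 1000); // 100 -- but 1005/1000 is exactly 100.5%, so we expect 101
```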

The algorithm is correct and works as intended, but the foundation it runs on has a quirk that needs to be taken care of.

In Python, JavaScript, and other languages, the calculation `1005/1000*100` gives 100.49999… rather than 100.5 because of a floating-point rounding error. The difference may seem minimal, but `Math.round(1005/1000*100)` returns 100 instead of 101, which makes the algorithm above fail.

A quick fix could be to swap the order of the operations, multiplying before dividing: `1005/1000*100` becomes `1005*100/1000`.

But what if the 1005 is replaced with a variable x? Will this fix work for all values of x?
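The reordering can be checked directly in a REPL; the comments note why each evaluation order behaves as it does:

```javascript
// Divide first: 1.005 cannot be represented exactly in binary,
// so it is stored as 1.00499999999999989...
Math.round(1005 / 1000 * 100); // 100

// Multiply first: 100500 and 100.5 are both exact doubles,
// so the rounding comes out right.
Math.round(1005 * 100 / 1000); // 101
```

Note that once the numbers get large enough, the intermediate product `x * 100` may itself be rounded, so this reordering is a patch for small inputs rather than a general guarantee.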

## Large numbers

Let's take a very large floating-point number such as 1e17, which is 100,000,000,000,000,000 (a 1 with 17 zeros).

`1e17 + 8` equals `1e17`, because 8 is too small a number to influence 1e17 in any way! Even `1e17 + 8 + 8` will still give 1e17.

9, on the other hand, is large enough: `1e17 + 9` gives `1.0000000000000002e+17`.

64 is not large enough to influence 1e18, but 65 is.

A million is not large enough to influence 1e22, but 1,048,577 is.
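These thresholds can be verified directly. The gap between adjacent doubles grows with magnitude, so the smallest increment that changes a value grows too (Python's floats behave the same way):

```javascript
// At 1e17 adjacent doubles are 16 apart, so anything at or below
// half that gap rounds back down.
1e17 + 8 === 1e17;     // true
1e17 + 9 > 1e17;       // true

// At 1e18 the gap is 128.
1e18 + 64 === 1e18;    // true
1e18 + 65 > 1e18;      // true

// At 1e22 the gap is 2097152 (2^21), so even a million vanishes.
1e22 + 1e6 === 1e22;   // true
1e22 + 1048577 > 1e22; // true
```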

If you keep increasing a value by the same number, there comes a point where the value stops growing, because the number becomes too small to influence it.
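A small loop makes this visible. Assuming a starting value of 1e16, where the gap between adjacent doubles is 2, adding 1 repeatedly never changes the value:

```javascript
// The gap between doubles at 1e16 is 2, so += 1 lands exactly
// halfway and rounds back down to 1e16 on every single iteration.
let value = 1e16;
for (let i = 0; i < 1000; i++) {
  value += 1;
}
value === 1e16; // true: a thousand additions, and the sum never grew
```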

## Correct code can still fail

My point here is that we shouldn't only test our algorithms and our implementations of them, but also the foundation the implementation runs on.