I'm having an issue that I believe has to do with floats and precision, but I'm not very well versed in the intricacies involved. I'm a math person, and in my mind I might as well still be working with decimals on a chalkboard. I'll start studying up on this, but in the meantime I'm wondering if there are any general techniques for working with floats that might address the problem I'll outline below.
I have a numpy array of decimals that I would like to round to the nearest .02. I originally accomplished this by dividing every element of the array by .02, rounding the result, then multiplying by .02 again. The actual data is generated by some code that processes an input, but this demonstrates the problem:
import numpy as np

x = np.array([.45632, .69722, .40692])
xx = np.round(x/.02)*.02
It seems to round everything correctly, as I can check:
array([0.46, 0.7, 0.4])
However, if I inspect the first and second elements individually, the stored values don't all match what the printed array suggests.
Each element in the array is of type numpy.float64. The problem occurs later because I involve these numbers with comparison operators to select subsets of the data and what happens then is a little unpredictable:
xx == .46
xx == .70
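For reference, here is a self-contained version of the snippets above (the comments reflect my understanding of what is going on):

```python
import numpy as np

x = np.array([.45632, .69722, .40692])
xx = np.round(x / .02) * .02  # round each element to the nearest 0.02

# The printed array looks clean...
print(xx)

# ...but == compares the underlying float64 bits, not the display.
print(xx == .46)  # may or may not match the displayed 0.46
print(xx == .70)  # round(.69722/.02)*.02 is 35*0.02, which is not exactly 0.7
```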
As I said, I have a workaround for this particular application, but I'm wondering if anyone has a way to make my first approach work, or if there are more general techniques for dealing with these kinds of numbers that I should be aware of.
Rather than using == to select subsets of data, try using numpy.isclose(). This allows you to specify a relative/absolute tolerance for your comparison:
(absolute(a - b) <= (atol + rtol * absolute(b)))
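A minimal sketch of that approach, using the same example data as in the question (the tolerances shown are numpy's defaults, rtol=1e-05 and atol=1e-08):

```python
import numpy as np

x = np.array([.45632, .69722, .40692])
xx = np.round(x / .02) * .02

# Exact equality can miss values that differ only in the last few bits:
mask_eq = (xx == .70)

# isclose compares within a tolerance instead of bit-for-bit:
mask_close = np.isclose(xx, .70, rtol=1e-05, atol=1e-08)

# Use the boolean mask to select the matching subset:
print(xx[mask_close])
```

The tolerance check is forgiving of the tiny representation error introduced by the divide/round/multiply round trip, while still easily distinguishing 0.7 from the other rounded values.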
print(format(xx[1], '.100f')) — this prints the actual stored value of the element, i.e. xx[1] = 0.70000000000000006661338147750939242541790008544921875
You can check this with the code shown below:
if xx[1] == 0.70000000000000006661338147750939242541790008544921875:
    print('true')