I'm looking for 6 random hex characters, and am using Ruby's SecureRandom.
SecureRandom.hex(3) returns 6 hex characters unpacked from 3 bytes of random data.
The question is: will
SecureRandom.hex(6)[0,6] return 6 hex characters that are more random, because there were 6 bytes of random data before the unpacking? For that matter, would
SecureRandom.hex(16)[0,6] be even more random?
For my application I only need the 6 characters, which give over 16 million unique values, but I want the chance of colliding with a value that was already picked to be as low as possible. So will using a larger
n for the random bytes improve the distribution of the random values over the space, or is it unnecessary?
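To make the three options concrete, here is a small sketch of each; all three produce a 6-character hex string drawn from the same space of 16**6 (16,777,216) possible values:

```ruby
require 'securerandom'

a = SecureRandom.hex(3)         # 3 random bytes  -> exactly 6 hex chars
b = SecureRandom.hex(6)[0, 6]   # 12 hex chars, truncated to the first 6
c = SecureRandom.hex(16)[0, 6]  # 32 hex chars, truncated to the first 6

# Every result is 6 lowercase hex characters.
[a, b, c].each do |s|
  raise "unexpected format: #{s}" unless s =~ /\A[0-9a-f]{6}\z/
end
```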
No. Selecting a six-character substring of a longer random hex string is no more "random" than generating a six-character string directly. Each hex character encodes 4 bits of entropy, so six characters carry 24 bits no matter how many random bytes were generated before truncation.
The only way to reduce the probability of collisions is to have more unique possible values, i.e. to use more characters.
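For a sense of how quickly collisions become likely in a 16**6 space, the standard birthday-bound approximation p ≈ 1 − exp(−k(k−1)/2n) for k draws from n values can be sketched like this (the helper name is mine, not part of any library):

```ruby
# Approximate probability that at least two of k uniform random draws
# from n possible values collide (birthday-bound approximation).
def collision_probability(k, n)
  1 - Math.exp(-k.to_f * (k - 1) / (2.0 * n))
end

n = 16**6  # 16,777,216 possible 6-hex-character values

puts collision_probability(1_000, n)   # roughly 0.03 (about a 3% chance)
puts collision_probability(10_000, n)  # roughly 0.95 (collision almost certain)
```

So with only a thousand picks a collision is already a few percent likely; the fix is a longer identifier (larger n), not more input bytes before truncation.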