I've got a problem with tests. There is an algorithm for some fancy procedure. This algorithm takes a random number from the range [-999,999; +999,999], treats it as an id number from a table, and performs some random operations in the database. This means I would need to call the method a huge number of times to be sure the random number distribution is correct.
I wanted to make all the code using TDD (just to learn it a little bit more).
The problem is that I don't know how to test the algorithm with TDD principle in mind.
According to TDD, no code should be run without writing a test first.
One solution I've thought of is a dummy method in the main class called debug(text). In production this method would do nothing. In tests, however, I'd create a subclass with this method overridden so that it stores some interesting information, which the test could later inspect to find out whether the function works properly. The database connection would be mocked and would do nothing.
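To make the idea concrete, here is a minimal sketch of that "debug hook" approach (Python; the class and method names are hypothetical, not from any real codebase):

```python
import random

class FancyProcedure:
    def __init__(self, connection):
        self.connection = connection  # mocked / None in tests

    def run(self):
        record_id = random.randint(-999_999, 999_999)
        self.debug(f"picked id {record_id}")
        # ... perform the database operations using record_id ...
        return record_id

    def debug(self, text):
        pass  # deliberately does nothing in production

class RecordingProcedure(FancyProcedure):
    """Test subclass: overrides debug() to capture what the algorithm did."""
    def __init__(self, connection):
        super().__init__(connection)
        self.log = []

    def debug(self, text):
        self.log.append(text)
```

A test would instantiate `RecordingProcedure` with a do-nothing connection, call `run()`, and assert on the contents of `log`.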
Another solution would be to create a mocked database connection that stores the interesting information for later use in tests. However, creating such a connection would be such a huge amount of work that I don't think it's worth spending time on.
There will be integration tests later to check if the database is changed properly. But integration tests are not part of TDD.
Have I run into a situation where TDD fails, or is useless or too hard to apply?
Is it your random number function?
- If it is: the random number generator should be tested outside of anything that uses it.
- If it isn't: you shouldn't be testing it at all, unless you really need to validate how random it is. IMO that's not a great ROI, but it depends entirely on your actual needs.
The DB functionality should assume the RNG is actually random, and should be tested separately from the RNG; during testing you may not want to use the real RNG at all. At the least, you may want to seed the RNG to make the tests repeatable, although that can make correctness more difficult to verify.
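One common way to get that repeatability is to make the RNG injectable, so a test can pass in a seeded instance. A minimal sketch (the function name `pick_id` is hypothetical):

```python
import random

def pick_id(rng=random):
    """Pick a table id; the RNG is injectable so tests can seed it."""
    return rng.randint(-999_999, 999_999)
```

A test would call `pick_id(random.Random(42))` so the same sequence of ids comes back on every run, while production code keeps the default, unseeded module.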
You should reconsider your design. Unit testing (whether through TDD or by adding tests later) should test each class in complete isolation.
In your case, you have a few distinct functions:
- generating a random number in the range [-999,999; +999,999];
- looking up the table record for that id;
- performing the operations against the database.
Each of these can be tested in complete isolation of each other by using design patterns such as Dependency Injection and Mocking.
Unit tests should not depend on random behavior; otherwise they become brittle and hard to maintain.
So in your case, the first test would exercise your random number generator by running it a significant number of times and checking that the results are in the expected range. That test should succeed every time it's run, no matter the time of day.
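Such a range test could look like this sketch (`pick_id` stands in for your generator; it is an assumed name):

```python
import random

def pick_id():
    """The generator under test: ids in [-999,999; +999,999]."""
    return random.randint(-999_999, 999_999)

def test_ids_stay_in_range(runs=100_000):
    samples = [pick_id() for _ in range(runs)]
    # Every result must fall in the documented range...
    assert all(-999_999 <= s <= 999_999 for s in samples)
    # ...and a coarse distribution sanity check: both halves of the range are hit.
    assert any(s < 0 for s in samples) and any(s > 0 for s in samples)
```

Note this is only a sanity check, not a statistical proof of uniformity; for that you would need something like a chi-squared test over buckets.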
For the database part, I would create an interface that would hide your database code (have a look at the Repository pattern). In your unit tests you can mock this interface and check if the correct functions are called with the right arguments. In your integration tests, you can use the real Repository implementation.
The second test then checks that your database lookup is working: verify that the 'random number' passed in is used to call the correct methods on your database mock.
The third test would check if the correct code is executed against your database mock.
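Putting the Repository idea and the mock verification together, here is a sketch with a hand-rolled mock (all names here, `RecordRepository`, `find_by_id`, `update`, `process`, are hypothetical placeholders for your own interface):

```python
from abc import ABC, abstractmethod

class RecordRepository(ABC):
    """Interface hiding the database code (Repository pattern)."""
    @abstractmethod
    def find_by_id(self, record_id): ...
    @abstractmethod
    def update(self, record): ...

class MockRepository(RecordRepository):
    """Hand-rolled mock: records every call so the test can verify it."""
    def __init__(self):
        self.calls = []
    def find_by_id(self, record_id):
        self.calls.append(("find_by_id", record_id))
        return {"id": record_id}
    def update(self, record):
        self.calls.append(("update", record["id"]))

def process(record_id, repo):
    """Business logic under test: uses the id to drive the repository."""
    record = repo.find_by_id(record_id)
    repo.update(record)
```

The unit test injects `MockRepository` and asserts the right methods were called with the right arguments; the integration tests later swap in the real implementation.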
A couple of months ago I wrote an article for .NET magazine about Unit Testing and some best practices. Maybe it can help: Unit Testing, hell or heaven?
Here are some assumptions for your code-base:
1) The call to the stored procedure is in data access code.
2) The data access code implements an interface.
3) The business logic you want to write with TDD can have the data access code injected into its constructor.
If that is the case, you can use a mocking framework to mock your data access code, so the actual stored procedure is never called.
The new code can be developed using TDD.
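The three assumptions above can be sketched like this, using Python's standard `unittest.mock` as the mocking framework (the class names `DataAccess` and `BusinessLogic` are illustrative, not from the question):

```python
from unittest.mock import Mock

class DataAccess:
    """Real implementation would call the stored procedure."""
    def run_procedure(self, record_id):
        raise NotImplementedError("talks to the real database")

class BusinessLogic:
    def __init__(self, data_access):
        self.data_access = data_access  # injected, so tests can mock it

    def handle(self, record_id):
        if not -999_999 <= record_id <= 999_999:
            raise ValueError("id out of range")
        return self.data_access.run_procedure(record_id)

# TDD-style test: the stored procedure is never actually executed.
mock_da = Mock(spec=DataAccess)
mock_da.run_procedure.return_value = "ok"
logic = BusinessLogic(mock_da)
assert logic.handle(42) == "ok"
mock_da.run_procedure.assert_called_once_with(42)
```

Because the mock is injected through the constructor, the test drives the new business logic red-green-refactor style without ever touching the database.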