What is my goal?
To measure actual query time, so that the query can be optimised if the time consumed is not satisfactory.
What have I done so far?
We have been using MySQL, and there we used to measure with SQL_NO_CACHE. While this does not stop MySQL from using every cache, as mentioned here, it is somewhat better than measuring against the cache.
The actual question is not whether the MySQL timing is correct (it is not, since SQL_NO_CACHE does not disable the OS cache), but how to measure the time for a query in Postgres with the cache taken out of the picture.
From the same SO answer above, it is evident that there is no equivalent in Postgres, and that caching in Postgres works very differently.
So if we take a query and run it several times, the first run may take some time, but subsequent runs are mostly served from the cache (as can be seen with EXPLAIN (ANALYZE, BUFFERS)).
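To illustrate (the table name and the figures here are made up): `shared hit` counts pages served from Postgres's own buffer cache, while `read` counts pages fetched from disk or the OS cache, so a warm second run shows hits where the cold first run showed reads:

```sql
-- Cold run: most pages are fetched from outside shared_buffers ("read")
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM orders;
--   Buffers: shared hit=32 read=4150
--   Execution Time: 312.441 ms

-- Warm run: the same pages are now cached ("hit"), so it is much faster
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM orders;
--   Buffers: shared hit=4182
--   Execution Time: 24.809 ms
```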
With the above facts established, how can I measure query time?
Problem 1: We cannot straight away conclude that a query is slow from its first run, since a lot of warm-up is required.
Problem 2: If we run the query, say, 5 times and take the average, the results are not accurate, since the later runs use the cache.
You can set the cache to its lowest possible value (the shared_buffers setting) to reduce cache effects.
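Assuming the cache meant here is Postgres's own buffer cache, that is the shared_buffers parameter; a minimal postgresql.conf sketch (the change requires a server restart):

```
# postgresql.conf -- shrink the buffer cache as far as it will go
shared_buffers = 128kB    # near the minimum; the default is typically 128MB
```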
You also can stop Postgres and run (as root, ideally after a `sync`)

    echo 3 > /proc/sys/vm/drop_caches

to remove the OS cache.
But there is no real need to drop the cache. Your test environment should match production conditions as closely as possible, and in production you definitely have a cache! It's better to compare different ways of writing a query, or different ways of designing your schema, and choose the best one.
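A minimal sketch of that approach, assuming hypothetical `customers` and `orders` tables: do one warm-up run so every candidate starts from the same cached state, then compare the candidates' timings:

```sql
-- Warm-up: run once and discard this timing, so the cache state is equal
SELECT count(*) FROM orders;

-- Candidate 1: EXISTS subquery
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.id FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

-- Candidate 2: JOIN with DISTINCT
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT c.id
FROM customers c
JOIN orders o ON o.customer_id = c.id;

-- Run each candidate several times, compare Execution Time, keep the winner.
```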