I would like to optimize a cron script that performs three table insertions, one table update, and one deletion. There are thousands of records, so inserting the data takes a long time.
The data is inserted into the products, product payment, product details, and product images tables.
The current script checks whether the product already exists.
If it does not exist, the script inserts into the products table first and gets back the newly inserted product id. It then inserts into the other three tables; there may be multiple inserts into the images table, depending on the number of images.
If the product already exists, the script updates the tables. For the images table, it deletes the existing image records for that product and then inserts all of the images as new rows.
This is what happens now:
$product_id = insert_product($product_data);
Since the data set is enormous, this takes a long time to execute. Is there any way to optimize this entire process?
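The per-row flow described above can be sketched roughly as follows, using SQLite as a stand-in for MySQL. The table names, column names, and the `upsert_product` helper are illustrative, not from the actual script; the point is to show the one-query-at-a-time pattern that makes the job slow.

```python
import sqlite3

# Minimal sketch of the current flow: check, then insert or update,
# then delete-and-reinsert images, one product at a time.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, sku TEXT UNIQUE, name TEXT);
    CREATE TABLE product_images (product_id INTEGER, url TEXT);
""")

def upsert_product(product):
    row = conn.execute("SELECT id FROM products WHERE sku = ?",
                       (product["sku"],)).fetchone()
    if row is None:
        # Product does not exist yet: insert it and get the new id.
        cur = conn.execute("INSERT INTO products (sku, name) VALUES (?, ?)",
                           (product["sku"], product["name"]))
        product_id = cur.lastrowid
    else:
        # Product exists: update it, then delete its old image rows.
        product_id = row[0]
        conn.execute("UPDATE products SET name = ? WHERE id = ?",
                     (product["name"], product_id))
        conn.execute("DELETE FROM product_images WHERE product_id = ?",
                     (product_id,))
    # Re-insert every image as a new row, one INSERT per image.
    for url in product["images"]:
        conn.execute("INSERT INTO product_images (product_id, url) VALUES (?, ?)",
                     (product_id, url))
    return product_id

pid = upsert_product({"sku": "A1", "name": "Widget", "images": ["a.jpg"]})
pid2 = upsert_product({"sku": "A1", "name": "Widget v2", "images": ["b.jpg"]})
```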
What you could do is use a REPLACE INTO SQL query to either insert a row or replace its values in one go.
Take a look at this article: http://blogs.coldbuffer.com/inserting-or-updating-records-using-mysql-replace
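A minimal sketch of REPLACE INTO, again with SQLite standing in for MySQL (SQLite accepts the same syntax). One caveat worth knowing: REPLACE works by deleting the old row and inserting a new one, so in MySQL an AUTO_INCREMENT id changes and DELETE triggers fire unless you key on a unique column yourself, as this sketch does with `sku`.

```python
import sqlite3

# REPLACE INTO: insert the row if the key is new, overwrite it otherwise.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, name TEXT, price REAL)")

conn.execute("REPLACE INTO products (sku, name, price) VALUES ('A1', 'Widget', 9.99)")
# Same key again: the old row is removed and replaced by this one.
conn.execute("REPLACE INTO products (sku, name, price) VALUES ('A1', 'Widget v2', 12.5)")

row = conn.execute("SELECT name, price FROM products WHERE sku = 'A1'").fetchone()
```

If the delete-and-reinsert behaviour is a problem for your foreign keys, MySQL's INSERT ... ON DUPLICATE KEY UPDATE is the alternative that updates in place.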
Another way to deal with lots of data that you have to add or update is to create a temporary table (probably with ENGINE = MEMORY) and do bulk inserts into it. Instead of inserting one row at a time, build INSERT statements with multiple value sets and send them at once; this is much faster than looping over the elements and issuing an insert for each one.
After that you can do a mass update by joining your original table with the temporary table, which is also much faster.
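The staging-table idea can be sketched like this, with SQLite standing in for MySQL (ENGINE = MEMORY is MySQL-specific; a TEMP table plays the same role here, and the table and column names are illustrative). MySQL would write the mass update as `UPDATE products JOIN staging ...`; SQLite needs a correlated subquery, but the shape of the approach is the same.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("A1", 1.0), ("A2", 2.0)])

# 1) Bulk-load all incoming rows into a staging table in one batch
#    instead of issuing one INSERT per row.
conn.execute("CREATE TEMP TABLE staging (sku TEXT PRIMARY KEY, price REAL)")
incoming = [("A1", 9.99), ("A2", 4.5), ("A3", 7.0)]
conn.executemany("INSERT INTO staging VALUES (?, ?)", incoming)

# 2) Mass-update every existing product from the staging table at once.
conn.execute("""
    UPDATE products
       SET price = (SELECT s.price FROM staging s WHERE s.sku = products.sku)
     WHERE sku IN (SELECT sku FROM staging)
""")

# 3) Insert the staged rows that did not exist yet.
conn.execute("""
    INSERT INTO products (sku, price)
    SELECT s.sku, s.price FROM staging s
    WHERE s.sku NOT IN (SELECT sku FROM products)
""")
```

The win is that the database does two or three set-based statements instead of thousands of individual round trips.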
We use this concept to speed up inserts and updates, and it works quite well. It will help immensely with your speed issue if you're talking about thousands of queries.
Also, wrap at least each individual block of queries in a transaction, if you're on InnoDB (which you should be!).
That won't help with speed, but it will make the whole operation a good deal safer.
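A small sketch of why the transaction matters, again in Python with SQLite standing in for MySQL/InnoDB (table names are illustrative): if the image insert fails partway through, the product row that preceded it is rolled back too, so you never end up with a half-written product.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, sku TEXT UNIQUE)")
conn.execute("CREATE TABLE product_images (product_id INTEGER, url TEXT NOT NULL)")

try:
    # "with conn" wraps the block in BEGIN ... COMMIT, and rolls back
    # automatically if any statement inside raises.
    with conn:
        cur = conn.execute("INSERT INTO products (sku) VALUES ('A1')")
        conn.execute("INSERT INTO product_images VALUES (?, ?)",
                     (cur.lastrowid, None))  # violates NOT NULL -> rollback
except sqlite3.IntegrityError:
    pass  # the whole block was rolled back, including the product row
```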