I am using transactions to do bulk insert/updates. This is my little test loop:
$now = date('Y-m-d H:i:s');
for ($i = 0; $i < 60; $i++) {
    $db->insert($cfg['ps_manufacturer'], array(
        'reference' => 'MF' . $i,
        'name'      => 'MF NAME' . $i,
        'date_add'  => $now,
        'date_upd'  => $now,
        'active'    => true,
    ));
    echo $db->lastInsertId() . '<br>';
}
Now the script above is obviously very slow, because each INSERT in the loop is executed as its own query. By wrapping the loop in a transaction, the execution time drops from 2s to 0.06s:
$db->beginTransaction();
$now = date('Y-m-d H:i:s');
for ($i = 0; $i < 60; $i++) {
    $db->insert($cfg['ps_manufacturer'], array(
        'reference' => 'MF' . $i,
        'name'      => 'MF NAME' . $i,
        'date_add'  => $now,
        'date_upd'  => $now,
        'active'    => true,
    ));
    echo $db->lastInsertId() . '<br>';
}
$db->commit();
Now what confuses me is how the second sample returns the ID of every inserted row immediately. My understanding was that a transaction queues all the statements, and the queue is only "released" once commit() is called.
The test obviously shows that I'm wrong, so could someone explain how this works? It does essentially the same thing, but a whole lot faster. Does beginTransaction() start some kind of "procedural session" that is optimized for continued querying? This code is a bit counter-intuitive to me; it almost looks like async programming :)
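For reference, here is a self-contained reduction of my test that anyone can run. It uses plain PDO with an in-memory SQLite database instead of my wrapper (the table name `manufacturer` is made up for the example), and it shows that lastInsertId() already returns the new ID inside the open transaction, before commit() is ever reached:

```php
<?php
// Self-contained repro: PDO + in-memory SQLite, no external DB needed.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE manufacturer (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    reference TEXT
)');

$db->beginTransaction();

$ids = array();
$stmt = $db->prepare('INSERT INTO manufacturer (reference) VALUES (?)');
for ($i = 0; $i < 3; $i++) {
    // Each INSERT executes immediately inside the transaction...
    $stmt->execute(array('MF' . $i));
    // ...so the generated ID is already available here, pre-commit.
    $ids[] = $db->lastInsertId();
}

// Only now is the work made permanent.
$db->commit();

print_r($ids);
```

The IDs collected in `$ids` are all gathered before the commit() call, which is exactly the behaviour that surprised me in the original loop.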