I feel ashamed that I have not taken the time to really understand the data consistency mechanisms in Oracle and SQL Server. I’ve read about transaction isolation levels a few times, but never really registered that Oracle has no dirty reads at all, unlike SQL Server (Oracle does not offer a Read Uncommitted mode).
When a query begins, the current system change number (SCN) is determined. In the picture above the SCN is 10. As data blocks are read, only block versions with an SCN at or before the observed SCN are used. Blocks with more recent SCNs, containing changed or uncommitted data, are not read. This guarantees that the data read does not change for the duration of the query, and it is why there are no dirty reads (reads of uncommitted data) in Oracle. Source: Link
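The SCN idea can be illustrated with a toy multi-version store. This is only a sketch of the concept, not Oracle's actual implementation; the `VersionedStore` class and its method names are invented for the example. Each write is stamped with a fresh SCN, and a query reads only the newest version stamped at or before the SCN it observed when it began:

```python
# Toy model of SCN-based read consistency (an illustration only, not
# Oracle's actual implementation).
class VersionedStore:
    def __init__(self):
        self.scn = 0
        self.versions = {}  # key -> list of (scn, value)

    def write(self, key, value):
        # Each write commits a new version stamped with the next SCN.
        self.scn += 1
        self.versions.setdefault(key, []).append((self.scn, value))

    def read(self, key, as_of_scn):
        # A query only sees the newest version whose SCN is at or
        # before the SCN observed when the query began.
        candidates = [(s, v) for s, v in self.versions.get(key, [])
                      if s <= as_of_scn]
        return max(candidates)[1] if candidates else None


store = VersionedStore()
store.write("x", "old")           # committed at SCN 1
snapshot = store.scn              # query begins: observed SCN is 1
store.write("x", "new")           # concurrent change commits at SCN 2
print(store.read("x", snapshot))  # prints "old" -- the query's view is stable
```

The query keeps seeing the version as of its snapshot SCN even though a newer committed version exists, which is the stable view described above.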
In SQL Server, when the data is being read, the table is locked. Even though the table is locked, other processes can still modify the data: rows can be deleted and updated, and new rows can appear. The lock only prevents SQL Server from reading uncommitted transactions (dirty reads), but it does see new rows and deleted rows (which is correct, we want the latest committed data). With the NOLOCK hint, SQL Server will read uncommitted transactions. Source: Link
So, both SQL Server and Oracle, by default, allow phantom reads and non-repeatable reads, but prevent dirty reads. They just do it using different mechanisms: Oracle uses SCNs and SQL Server uses locks.
There are 3 issues with data consistency:
- A dirty read is when a process reads a row written by an uncommitted transaction. If that transaction is rolled back, the data that was read is incorrect.
- A non-repeatable read is when a transaction rereads a row it has previously read (for example, before updating it) and finds that another process has modified or deleted the row.
- A phantom read is when a transaction reruns the same query and finds a new row inserted by another process.
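The last two issues are easy to reproduce. The sketch below uses SQLite from Python's standard library as a portable stand-in for a real server (an assumption for the example, not either vendor's behavior verbatim): two connections to the same file act as two sessions, and in autocommit mode each statement sees the latest committed data, so re-running the same query can return different results.

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file act as two sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
reader = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer = sqlite3.connect(path, isolation_level=None)

writer.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO accounts VALUES (1, 100)")

# First pass: read a row and count the rows.
first = reader.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
count1 = reader.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]

# Another session commits an update and an insert between the reads.
writer.execute("UPDATE accounts SET balance = 50 WHERE id = 1")
writer.execute("INSERT INTO accounts VALUES (2, 200)")

# Second pass: the same queries now see different data.
second = reader.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
count2 = reader.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]

print(first, second)   # 100 50 -> non-repeatable read
print(count1, count2)  # 1 2   -> phantom read
```

The reread returns a different balance (non-repeatable read), and the rerun count picks up the newly inserted row (phantom read); only uncommitted data is never visible.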
Here are the 4 isolation levels and their ability to solve the issues:
| Isolation Level | Dirty Read | Non Repeatable Read | Phantom Read |
|---|---|---|---|