Basically, they’re saying that NoSQL systems such as Cassandra, Hypertable, etc. are really a way to make large systems scalable by spreading the data across multiple machines without being concerned about consistency issues between them. “Without being concerned” means there is no guarantee of consistency, at all, between systems – the application is responsible for maintaining consistency itself.
SQL systems that do guarantee consistency across systems use something called “two-phase commit”, which guarantees that transactions updating tables on multiple systems will always be consistent – you won’t get partial updates where one system applies the update and the other(s) don’t. I’ve actually worked on supporting two-phase commit in databases, and it’s hellishly complex and hard to get right. On top of that, as Thompson and Abadi remind us, two-phase commit usually comes with its own set of performance problems, which get even worse when replication is involved.
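To make the protocol concrete, here’s a minimal sketch of the two ideas behind two-phase commit: a prepare/vote phase followed by a commit-or-abort phase. The `Participant` class and its voting logic are hypothetical stand-ins for real database nodes, not any actual database’s API, and this toy version glosses over exactly the parts that make real implementations so hard (durable logging, coordinator crashes, timeouts, recovery).

```python
class Participant:
    """A hypothetical database node taking part in a distributed transaction."""

    def __init__(self, name, will_vote_yes=True):
        self.name = name
        self.will_vote_yes = will_vote_yes
        self.state = "idle"

    def prepare(self):
        # Phase 1: stage the update and vote on whether it can be committed.
        self.state = "prepared" if self.will_vote_yes else "aborted"
        return self.will_vote_yes

    def commit(self):
        # Phase 2a: make the staged update permanent.
        self.state = "committed"

    def abort(self):
        # Phase 2b: discard the staged update.
        self.state = "aborted"


def two_phase_commit(participants):
    """Coordinator logic: commit everywhere, or nowhere."""
    # Phase 1: collect votes from every participant.
    if all(p.prepare() for p in participants):
        # Unanimous yes -> phase 2: everyone commits.
        for p in participants:
            p.commit()
        return True
    # Any no vote -> everyone aborts; no partial updates are possible.
    for p in participants:
        p.abort()
    return False
```

The guarantee in the paragraph above falls out of the structure: no participant makes its update permanent until the coordinator has heard a yes vote from all of them, so the transaction either lands everywhere or nowhere. The cost is also visible – every transaction needs an extra round trip and blocks while waiting for the slowest voter, which is where the performance problems come from.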
If you’re into scalable systems or databases, it’s well worth a read. Hats off to the authors for explaining it so clearly.