Wednesday 7 December 2016

Transactions [1]: Transactions in modern systems: A capsule summary


If you are reading this posting, I probably don’t need to tell you what a transaction is, but to keep our terminology in sync I'll start with a few standard remarks.

For me, a transactional data store is any service that resides outside the applications using it, holding state that might be persisted to disk or retained in some in-memory representation.  The applications using the service have a way to open a new transactional session (begin a transaction), issue lock, read and write operations, and then close the session either by making all the updates permanent and releasing locks (commit) or by rolling all the updates back (abort).
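
To pin down the vocabulary, here is a deliberately tiny sketch of that begin/read/write/commit/abort cycle, with an in-memory Python dictionary standing in for the store; every name is invented for illustration, not any product's API:

    import threading

    class Transaction:
        """Toy transactional session: writes are buffered, then applied atomically."""
        def __init__(self, store):
            self.store, self.writes = store, {}
        def read(self, key):
            # A real system would lock or version the key here.
            return self.writes.get(key, self.store.data.get(key))
        def write(self, key, value):
            self.writes[key] = value             # buffered until commit
        def commit(self):
            with self.store.lock:                # all updates become visible at once
                self.store.data.update(self.writes)
        def abort(self):
            self.writes.clear()                  # roll back: buffered updates vanish

    class Store:
        def __init__(self):
            self.data, self.lock = {}, threading.Lock()
        def begin(self):
            return Transaction(self)

    store = Store()
    store.data["accounts/42"] = 500
    txn = store.begin()
    balance = txn.read("accounts/42")
    if balance >= 100:
        txn.write("accounts/42", balance - 100)  # debit
        txn.commit()
    else:
        txn.abort()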

In this model we generally think of the application as being stateless: the data lives in the transactional storage system, while computation on it occurs in operations that leave no other form of persistent state.  Obviously these applications have variables and all sorts of temporary state, but the persistent state lives in the transactional database or subsystem, so they are stateless in this very narrow sense -- which is what the term really means: you can throw away all of that local state if the nodes crash.

My definition is at odds with the recent work on transactional memory, but I'll come back to that in a moment.   I'm also not really talking about NoSQL systems, at least not yet.  Those are systems that might offer a transactional API, but that don't provide standard transactional guarantees.  They matter, but they aren't transactional systems, no matter what the vendors may try to claim. 

Among the true transactional systems, there are some important sub-categories.  Quite a few provide multi-operation transactions and user-visible locking; this is the most general case.  Your local SQL database uses this model, and frankly, it is very powerful.  However, at large scale, complex transactions can become slow.

These days, everyone works with huge databases, so we see more and more systems that have transactions but, rather than trying to support an application that writes ten things at once, focus on real patterns of updates, which very often involve just one thing at a time (and updates themselves are often less than .01% of the traffic).  Since complex transactions are hard to support and the mechanisms introduce high overheads, it is common for transactional big data systems to be limited to so-called one-shot atomic actions (i.e., each operation is atomic, but you don’t get to string operations together into multi-operation transactions).  Sometimes people will say that these systems use linearizability as their model: a version of transactional serializability that is somewhat less general in scope, and easier to implement in a scalable, fast way.
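
A typical building block in this one-shot style is a versioned conditional put.  The toy below is only meant to make the idea concrete (an in-memory stand-in, not any real product's interface):

    import threading

    class OneShotStore:
        """Each operation is individually atomic; there are no multi-op sessions."""
        def __init__(self):
            self.data, self.versions = {}, {}
            self.lock = threading.Lock()
        def get(self, key):
            with self.lock:
                return self.data.get(key), self.versions.get(key, 0)
        def put_if_version(self, key, value, expected_version):
            """One-shot conditional write: succeeds only if nobody wrote in between."""
            with self.lock:
                if self.versions.get(key, 0) != expected_version:
                    return False                 # lost a race; the caller retries
                self.data[key] = value
                self.versions[key] = expected_version + 1
                return True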

The multi-operation and one-shot models are actually more similar than you might expect.  One very cool piece of research a few years back was the Sinfonia system, which showed how you could turn a complicated transactional database into a system that uses one-shot atomic transactions; if you don’t know the work, read about it and the follow-on work in more recent SOSP and OSDI proceedings.  The basic idea was to run a transaction on a snapshot (a read-only consistent copy of the database) and, as it runs, track version numbers for any objects it touches.  Then at commit time, you generate a one-shot transaction that checks whether those versions are still current: if so, it commits the updates; if not, it aborts, and you rerun the transaction on a fresh snapshot.  Very nice idea.
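
In toy form, and reusing the OneShotStore sketched above (my paraphrase of the idea, not Sinfonia's actual API), the pattern looks roughly like this:

    def run_optimistic(store, transaction_logic):
        """Run transaction_logic against a snapshot, then try a one-shot commit."""
        while True:
            snapshot = {}                          # key -> (value, version) as first read
            def read(key):
                if key not in snapshot:
                    snapshot[key] = store.get(key) # record the version of everything touched
                return snapshot[key][0]
            writes = transaction_logic(read)       # returns {key: new_value}
            with store.lock:                       # the commit itself is one atomic step
                if all(store.versions.get(k, 0) == v
                       for k, (_, v) in snapshot.items()):
                    for k, val in writes.items():
                        store.data[k] = val
                        store.versions[k] = store.versions.get(k, 0) + 1
                    return                         # committed
            # A version changed underneath us: abort, rerun on a fresh snapshot.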

Anyhow, the usual way to describe the full transactional model is by reference to the ACID properties: a transactional system offers atomicity, consistency, isolation and durability.

Transactional systems exist in many forms; here are the main ones that we work with in cloud settings:
  • Database products, which are generally organized around the relational model, and most often support the full multi-operation form of transactions.  Use these if you can, because they are powerful, easy to work with, and don't expose you to surprises (aka "inconsistencies").
  • Caches that we use when talking to those databases, which (these days) often use a model called snapshot isolation.  If you find data in the cache it might be stale, but reads will be consistent (see the sketch just after this list).  In the cloud people love caches, but be very wary of putting a cache that isn't integrated with the transactional backend store in front of a transactional application.  This is a big trend lately, but can expose you to surprises (see above).
  • More and more, we are seeing transactional support associated with large distributed key-value stores (DHTs): shared memories in which data is associated with a key (which could just be an address, if you want to think of the store as a DSM) and a value (a true DSM would have some fixed number of bytes per address; a DHT has variable-sized objects associated with each key).  Not all modern DHTs are transactional, but the transactional ones seem to be far more successful than the flat non-transactional ones.
  • Just for completeness, I'll remind you that many people would definitely mention transactional "memory" solutions for concurrent programming in any discussion of transactions.  These typically transform standard objects into transactional objects so that calls to them have transactional semantics.  But I don't really think they belong in the same category. 
  • NoSQL systems.  Again, in my way of thinking, these are perfectly interesting systems but simply don't fit in a discussion of the transactional model, because they don't support transactions.  I mention them here mostly because such systems are often marketed as if they could be dropped into place in systems that need full transactions.  In reality, they expose you to surprises (see above).
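
To illustrate the snapshot-isolation point from the cache bullet above, here is a toy in which the "snapshot" is just a copy taken at refresh time; real caches are far more sophisticated, and these names are invented for illustration:

    class SnapshotCache:
        """Reads may be stale, but all reads through one snapshot are mutually
        consistent: old and new backend states are never mixed."""
        def __init__(self, backend_data):
            self.snapshot = dict(backend_data)   # copy standing in for a consistent snapshot
        def refresh(self, backend_data):
            self.snapshot = dict(backend_data)   # move to a newer consistent view
        def get(self, key):
            return self.snapshot.get(key)

    cache = SnapshotCache({"rate/EUR": 1.05, "rate/GBP": 1.27})
    # Even if the backend changes now, these two reads agree with each other.
    eur, gbp = cache.get("rate/EUR"), cache.get("rate/GBP")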

In this set of blog postings, my focus will center on ultra-scalable transactional key-value stores.  These are the new kids on the block (relatively speaking), and because they scale so incredibly well they have the potential to be a big deal once they mature.  But before we go there, I have a few remarks about the other models too.

A first question centers on whether we are thinking about the transaction system as the whole story (more or less, PaaS), or as a tool (as in the case of a DHT).

One thing to be aware of is that when you use a professional database, like Oracle’s enterprise solution or SQL Server, the database has the whole data lifecycle management story built right in.  For example, a bank needs to encrypt customer data, must keep it for N years, and then must delete it.  It also needs to ensure that only the proper people have access, that accesses are audited, that security alarms go off if someone tries to steal the money, and so on.  A professional database has solutions for all of this, is designed for massive scaling, and is optimized for high speed.

The transactional DHT solutions are a new entrant to the market and are much less mature.  These systems can hold a lot of data, and they do offer transactions, but they often lack cryptographic tools, services for long-term archival data management, and so forth.  For example, if a runaway (buggy) application dumped bad records into a professional SQL system, you could easily identify and remove them later.  But a DHT might just bloat silently, with no simple way to even list the data it contains.  You could end up with gigabytes of orphaned data: the system will hold it against further need, yet the application never intended to generate it and will never access it again.  Over time this will change as the DHTs catch up (for example, by supporting a debugging/development mode, data curation tools, automatic deletion after a lease expires, etc.), but it will take a while.
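
For example, automatic deletion after a lease expires might look roughly like this (a sketch of the general technique, not any particular system's mechanism):

    import time

    class LeasedStore:
        """Toy key-value table in which every key carries a lease (a TTL)."""
        def __init__(self):
            self.data = {}                        # key -> (value, expiry timestamp)
        def put(self, key, value, lease_seconds):
            self.data[key] = (value, time.time() + lease_seconds)
        def get(self, key):
            entry = self.data.get(key)
            if entry is None or entry[1] < time.time():
                self.data.pop(key, None)          # lease expired: the orphan evaporates
                return None
            return entry[0]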

Then there is a question of what the application using the transaction system really looks like.

In-memory transactions.  So let me return to this idea now, say a few words about it, and then set it to the side.  In-memory transactions were a hot topic within the programming languages community, and were explored very actively starting a few years ago.  The basic idea is simple: because so many programmers have trouble with concurrency, the inventors of the approach wanted to automatically insert the needed concurrency control, so that operations on objects in a standard programming language like C++ or Java could be performed as small transactions.  Since the compiler would do this in a provably correct way, the bug rate experienced by developers would drop sharply.  At least that was the idea.
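
The flavor of the transformation, very roughly: real STM implementations use versioning and rollback rather than a single lock, but this sketch shows the shape of "the compiler inserts the concurrency control for you":

    import threading
    from functools import wraps

    def atomic(method):
        """Stand-in for what a transactional-memory compiler would insert;
        a per-object lock approximates 'each call runs as a small transaction'."""
        @wraps(method)
        def wrapper(self, *args, **kwargs):
            with self._txn_lock:
                return method(self, *args, **kwargs)
        return wrapper

    class Counter:
        def __init__(self):
            self._txn_lock = threading.RLock()
            self.n = 0

        @atomic
        def increment(self):
            self.n += 1        # safe even with many concurrent callers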

Without discussing the concept at length, I'll just summarize some of the issues that surfaced and ultimately seem to have limited adoption of the approach.  In fact none of these issues is at all new: Barbara Liskov proposed transactions on objects way back in the 1980's (the Argus project).  Argus got quite far, but ran into almost all the same questions.  These include:
  1. Performance issues.  The locking or versioning schemes required were more costly than developers expected, so they balked at using them.  For example, without being clever, many transactional object-oriented systems risk holding locks for very long periods, because operations block for I/O or while waiting for other forms of input.  This can leave other concurrent threads waiting, and can even introduce deadlocks (see the sketch just after this list).  A clever programmer would avoid such issues, but asking the compiler or runtime system to do so is asking a lot.  Thus developers either find the transactional solution buggy and slow, or anticipate trouble and try to break the transactional model to evade its limitations (which causes other issues).
  2. Instability.  Rachid Guerraoui (who actually likes software transactions a lot) points out that a transactional object can get into a state where two or more competing transactions livelock by causing one another to abort and retry.  You might expect this to be hard to trigger, but it turns out that in object-oriented programming you can easily trigger such scenarios, particularly if developers try to "break" the model to gain performance.  So you start with an object-oriented system that was safe and live (it would guarantee progress if you used locking carefully to avoid concurrency bugs), and by re-expressing it with transactional objects, it suddenly starts to loop forever (livelock).
  3. Limitations of the model.  This last point touches on a basic issue: if thread A is in a transaction and wants to pass information to thread B, technically speaking that isn't permitted until the transaction commits.  But developers find this frustrating, and may try to work around it by performing actions with external side-effects, such as I/O, in ways that the transactional system can't easily control.
  4. Nested transactions.  There is a lot of debate around the proper way to handle recursion and other forms of nested calls.  Argus had a neat approach called nested transactions, but it can have very high overheads and requires pretty complex supporting mechanisms.  But without nested transactions, the transactional programming model might not allow recursion or other forms of nested operations: quite a serious restriction, since modern languages treat almost any operation as a method invocation!
  5. I/O is common, but a transaction shouldn't perform externally visible writes while uncommitted: if it does, the model requires that the output be held back until the transaction commits.  Similarly, input may need to be pushed back onto the channel if the transaction that read it happens to abort.  Both limitations are costly and may be impractical.
  6. Multicore machines aren't necessarily best exploited through multithreaded programming, in any case.  They are awesome for supporting multitenancy: you launch hundreds of VMs, and each core owns a few of them, but the programs in those VMs are standard ones with relatively few threads.  Writing multithreaded code that actually speeds up on multiple cores is a delicate art: you need to understand the NUMA memory behavior of your system and the layout of data in memory, think hard about locking and other forms of concurrency control, and perhaps worry about priority inversions and other oddities of scheduling.  So the whole premise of the transactional memory work rests on the assumption that this kind of programming is very valuable, and that hasn't really been shown to be the case.
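
To make the first issue concrete, here is the classic failure shape from item 1, written with explicit locks standing in for what a naive transactional runtime might effectively do:

    import threading, time

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def txn_one():
        with lock_a:               # "transaction" touches object A first...
            time.sleep(1.0)        # stand-in for blocking I/O inside the transaction
            with lock_b:           # ...then object B
                pass

    def txn_two():
        with lock_b:               # opposite order: the classic deadlock recipe, made
            time.sleep(1.0)        # far more likely because the I/O stretches the
            with lock_a:           # window during which the first lock is held
                pass

    # Started together, these threads deadlock: each holds one lock across its
    # "I/O" and then waits forever for the other's (running this script hangs).
    threading.Thread(target=txn_one).start()
    threading.Thread(target=txn_two).start()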

All of these reasons lead me to view transactions on memory objects as a different area of research, with its own goals, its own user community, and its own challenges.

Let's pivot back to our main topic and focus on transactional ways of interacting with external data.

Now the first thing to appreciate is that transactional access to external data is incredibly valuable and, for this reason, incredibly popular.  Technologies like Microsoft’s .NET and Oracle’s versions of Java let your application map transactional subsystems into the application address space: the data appears to be in a set of collections (the in-memory name for a key-value store), and you access it through a programming-language API that mimics the database SQL language, such as the .NET LINQ (Language Integrated Query) API, or Java analogues such as the JPA query APIs.
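
The programming model feels roughly like this Python analogue; LINQ itself is a .NET feature, so this sketch only shows the "query the mapped collection in your own language" shape, with invented data:

    from dataclasses import dataclass

    @dataclass
    class Account:
        owner: str
        balance: float

    # In a real system this collection would be a live, transactional view of a
    # backend table; a plain list stands in so that the query shape is visible.
    accounts = [Account("alice", 250.0), Account("bob", -40.0)]

    # The language-integrated query: filter and project in the host language,
    # much as LINQ does over a mapped table.
    overdrawn = [a.owner for a in accounts if a.balance < 0]
    print(overdrawn)               # ['bob']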

Let's focus purely on this second category (transactions on external data) and ask how well each variant fits the modern cloud.

Stateless applications that are part of some form of cloud platform (often a PaaS solution such as Microsoft Azure, Oracle’s three-tier cloud platform, or IBM’s WebSphere) often benefit from transactional mechanisms of these kinds.  The "client" systems are generally web browsers that issue requests.  Each request routes to a server in the cloud, where a thread executes on its behalf.
That thread has the data the browser client sent in, as well as full access to the cloud, including transactional caches and back-end servers, and it can use files on the local file system.  Nonetheless, when the thread terminates, any data it saved locally in files will be deleted (so stateless really means automatically garbage collected).  Saved state is always persisted into the database or the transactional storage layer.   Most cloud applications work this way.

So with this background, we can talk about some typical modern cloud applications that might use transactions.  The standard example is something like this:
  • A banking application permits you to set up a wire transfer to pay bills.   You fill in the transfer page and click “schedule payment”.
  • The banking application generates a call to a web services method running on a stateless cloud node (in the sense mentioned above), and a thread launches to receive and process the request.
  • The schedule-payment method runs a transaction that reads your account balance, checks that the request is legal, and then either commits the updates to the pending-payments ledger or aborts and reports an error, such as “account overdrawn” (see the sketch below).
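
Sketched against the toy store from the first example (the real method would run against the bank's database; every name here is invented for illustration):

    def schedule_payment(store, account_id, amount, payee):
        txn = store.begin()
        balance = txn.read(f"accounts/{account_id}")
        if balance is None or balance < amount:
            txn.abort()                                      # roll back, report failure
            return "account overdrawn"
        txn.write(f"accounts/{account_id}", balance - amount)
        txn.write(f"pending/{account_id}/{payee}", amount)   # pending-payments ledger
        txn.commit()                                         # both updates, atomically
        return "payment scheduled"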

So what’s not to love?  The short answer is that transactions are totally awesome, at least for uses like this, but at massive scale they do bring a lot of issues too.  This posting is already quite long, so I'll pause here; in the follow-on postings, I touch on those issues one by one.
