Transactional Memory
Computer architecture is undergoing a vigorous shaking-up.
The major chip manufacturers have, for the time being, simply given up trying to make
processors run faster. Instead, they have recently started shipping "multicore" architectures, in
which multiple processors (cores) communicate directly through shared hardware caches,
providing increased concurrency instead of increased clock speed.
As a result, system designers and software engineers can no longer rely on increasing clock speed
to hide software bloat. Instead, they must somehow learn to make effective use of increasing
parallelism. This adaptation will not be easy. Conventional synchronization techniques based on
locks and condition variables are unlikely to be effective in such a demanding environment. Coarse-grained
locks, which protect relatively large amounts of data, do not scale, and fine-grained locks
introduce substantial software engineering problems.
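To make the tradeoff concrete, here is a minimal sketch of the coarse-grained pattern, written in Haskell with MVar as the lock primitive; the bank-of-accounts scenario and all names in it are illustrative assumptions, not course material.

-- A minimal sketch of coarse-grained locking, using Haskell's MVar as
-- the lock primitive; the bank-of-accounts example is hypothetical.
import Control.Concurrent.MVar
import qualified Data.Map.Strict as Map

-- One lock guards the entire bank: operations on unrelated accounts
-- still serialize on this single MVar, so throughput cannot grow with
-- the number of cores.
type Bank = MVar (Map.Map String Int)

transfer :: Bank -> String -> String -> Int -> IO ()
transfer bank from to amount =
  modifyMVar_ bank $ \accounts ->
    return (Map.adjust (+ amount) to
             (Map.adjust (subtract amount) from accounts))

-- A fine-grained variant would keep one MVar per account instead, but
-- every multi-account operation would then have to acquire its locks
-- in a fixed global order (say, by account name) to avoid deadlock:
-- the kind of software engineering burden described above.

main :: IO ()
main = do
  bank <- newMVar (Map.fromList [("alice", 100), ("bob", 0)])
  transfer bank "alice" "bob" 40
  readMVar bank >>= print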
Transactional memory is a computational model in which threads synchronize via optimistic,
lock-free transactions. This synchronization model promises to alleviate many (though perhaps not all) of
the problems associated with locking, and there is a growing community of researchers working
on both software and hardware support for this approach. This course will give a survey of the area, with a
discussion of existing algorithms and open research questions.
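To see the model in action, the sketch below uses GHC's Control.Concurrent.STM library, one widely available software implementation of transactional memory; the transfer scenario and the names in it are hypothetical illustrations, not part of the course.

-- A minimal sketch of transactional synchronization, using GHC's
-- Control.Concurrent.STM library; the transfer scenario and names are
-- hypothetical illustrations, not material from the course.
import Control.Concurrent.STM

type Account = TVar Int

-- The whole transfer is one atomic transaction: no locks are acquired,
-- and 'retry' abandons the transaction and re-runs it once another
-- transaction has written to a variable it read (here, 'from').
transfer :: Account -> Account -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  if balance < amount
    then retry
    else do
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  -- 'atomically' runs the transaction optimistically; conflicting
  -- transactions are rolled back and retried transparently.
  atomically (transfer a b 40)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances

Note that, unlike the locking version, no lock-ordering discipline is needed here: composing two transfers into one larger atomic step is simply a matter of sequencing them inside a single transaction.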