We enrich the classical notion of computation with the notion of interaction from concurrency theory. In this way we extend the Turing machine, as a model of computation, to the Reactive Turing Machine: an abstract model of the computer as it is used nowadays, continually interacting with its user and with the world.

The algebraic theory of effects continues Moggi's monadic approach to effects by concentrating on a particular class of monads: those that can be given by equational theories. Their operations can be thought of as effect constructors, as it is they that give rise to effects. Examples include exceptions (when the theory is that of a set of constants with no axioms), nondeterminism (when the theory could be that of a semilattice, for choice, with a zero, for deadlock), and action (when the theory could be a set of unary operations with no axioms).
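As a minimal sketch of the nondeterminism example (the representation and names below are my own, not the authors'), one can build the free terms of a theory with a binary choice operation and a zero for deadlock; the bind of the induced monad is substitution into these terms, and the semilattice axioms are imposed here by evaluating a term to the set of values it can return.

```python
from dataclasses import dataclass

# Free terms over the theory of nondeterminism: a binary effect
# constructor `Or` (choice) and a constant `Nil` (deadlock).

@dataclass(frozen=True)
class Ret:
    value: object          # a pure computation returning `value`

@dataclass(frozen=True)
class Or:
    left: object
    right: object          # nondeterministic choice between two terms

@dataclass(frozen=True)
class Nil:
    pass                   # deadlock: no result

def bind(t, f):
    """Monadic bind: substitute f(v) for each leaf Ret(v)."""
    if isinstance(t, Ret):
        return f(t.value)
    if isinstance(t, Or):
        return Or(bind(t.left, f), bind(t.right, f))
    return Nil()

def run(t):
    """Quotient by the semilattice-with-zero axioms (associativity,
    commutativity, idempotence, unit) by collecting results in a set."""
    if isinstance(t, Ret):
        return frozenset([t.value])
    if isinstance(t, Or):
        return run(t.left) | run(t.right)
    return frozenset()
```

For instance, `run(bind(Or(Ret(1), Ret(2)), lambda x: Or(Ret(x), Ret(x + 10))))` yields the set `{1, 11, 2, 12}`, and idempotence holds automatically: `Or(Ret(1), Ret(1))` and `Ret(1)` run to the same set.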

Two natural, apparently unrelated, questions arise: how can exception handlers be incorporated into this picture, and how can concurrency? For the first, in previous work with Pretnar, we showed how free algebras give rise to a natural notion of computation handling, generalising Benton and Kennedy's exception-handling construct. This can be thought of as an account of (unary) effect deconstructors.
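A toy illustration of a unary effect deconstructor, under my own (not the authors') encoding: for the theory of exceptions (a set of constants with no axioms), a computation is either a returned value or a raised exception, and a handler reinterprets each exception constant while leaving returned values untouched, in the spirit of Benton and Kennedy's `try x <= M in N unless H`.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ret:
    value: object      # a pure computation returning `value`

@dataclass(frozen=True)
class Raise:
    name: str          # exception constants: nullary operations, no axioms

def handle(t, on_exc):
    """Effect deconstructor: replace each raised exception by the
    computation on_exc(name); returned values pass through unchanged."""
    if isinstance(t, Ret):
        return t
    return on_exc(t.name)
```

For example, `handle(Raise("div0"), lambda e: Ret(0))` recovers with a default value, while `handle(Ret(5), lambda e: Ret(0))` is just `Ret(5)`.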

Turning to, for example, CCS, the evident theory, at least for strong bisimulation, is that of nondeterminism plus action. Then restriction and relabelling are straightforwardly dealt with as unary deconstructors. In order to deal with concurrency, we give a theory of binary deconstructors. It applies to CCS and other process calculi, as well as to shared memory parallelism. In this way we demonstrate a possibility: that the monadic approach (which has always accounted for resumption semantics) can be fruitfully integrated with the world of concurrency.
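To make the unary case concrete (again a sketch with my own names, not the authors' formulation): terms of the theory of nondeterminism plus action are built from deadlock, binary choice, and unary action prefixes, and relabelling and restriction are then homomorphisms on these terms.

```python
from dataclasses import dataclass

# Free terms of the CCS-like theory: deadlock 0, choice P + Q,
# and unary action prefixing a.P.

@dataclass(frozen=True)
class Nil:
    pass                   # deadlock 0

@dataclass(frozen=True)
class Act:
    label: str
    rest: object           # action prefix a.P

@dataclass(frozen=True)
class Choice:
    left: object
    right: object          # nondeterministic choice P + Q

def relabel(p, f):
    """Relabelling as a unary deconstructor: rename every action
    prefix by f, homomorphically through the term."""
    if isinstance(p, Nil):
        return Nil()
    if isinstance(p, Act):
        return Act(f(p.label), relabel(p.rest, f))
    return Choice(relabel(p.left, f), relabel(p.right, f))

def restrict(p, hidden):
    """Restriction as a unary deconstructor: prune every branch
    whose action is in the hidden set, replacing it by deadlock."""
    if isinstance(p, Nil):
        return Nil()
    if isinstance(p, Act):
        if p.label in hidden:
            return Nil()
        return Act(p.label, restrict(p.rest, hidden))
    return Choice(restrict(p.left, hidden), restrict(p.right, hidden))
```

Parallel composition cannot be written this way: its defining equations recurse on both arguments at once, which is what motivates the theory of binary deconstructors.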

This talk will describe my investigations into the highly secret role that Alan Turing played, after his pre-war theoretical work on computability and the concept of a universal machine, in the actual development of the first electronic computers. These investigations resulted in my obtaining and publishing, in 1972, some limited information about his wartime contributions to the work on code-breaking machines at Bletchley Park, the forerunner of GCHQ (the UK Government Communications Headquarters). Some years later I was able to obtain permission to compile and publish the first officially authorised account of the work that he and others undertook, work that led to the construction of a series of special-purpose electronic computers for Bletchley Park during the period 1943--1945, computers that made a vital contribution to the Allied war effort.

Concurrency theory and real-world multiprocessors have developed in parallel for the last 50 years, from their beginnings in the mid-1960s. Both have been very productive: concurrency theory has given us a host of models, calculi, and proof techniques, while engineered multiprocessors are now ubiquitous, from 2--8-core smartphones and laptops through to servers with 1024 or more hardware threads. But the two fields have scarcely communicated, and the shared-memory interaction primitives offered by those mainstream multiprocessors are very different from the theoretical models that have been heavily studied.

My colleagues and I have been working at this interface: establishing rigorous and accurate concurrency semantics for multiprocessors (x86, IBM POWER, and ARM) and for the C11 and C++11 programming languages, and reasoning about them; developing the CompCertTSO verified compiler from a concurrent C-like language to x86; and producing verified compilation schemes from C/C++11 to POWER/ARM, together with verified concurrent algorithms. The models and reasoning principles are new, but we draw on the toolbox established in the theoretical world. In this talk I'll highlight a few examples of this work.