Very interesting e-correspondence continues about the ability to simulate/upload human minds. The idea is that sometime in the future we could upload our mind into a computer, and then (perhaps) download it again. The key points for me are as follows:

- The brain is not a Turing machine: more generally, organisms are not Turing machines (Denis Noble has a great talk about this). So even if all the other obstacles could be overcome (the impossibility of completely accurate measurement, for both practical and quantum reasons, etc.), an “upload” is fundamentally and qualitatively different from a real human.

Only an omnipotent being, not limited by Heisenberg, could do an upload properly, into a new, non-digital body. Which is basically the Christian teaching about the resurrection.

- The brain may indeed act statistically in some senses, but individual neuron firings do matter: they are not a simple sum, and there is massive feedback and cascading. So the brain has chaotic dynamics and quite a large Lyapunov exponent (indeed not obviously bounded). This means (for anyone who may need reminding) that the error term in the hypothetical simulation will grow as exp(At) for a fairly large A. Therefore, however small the error between the real and simulated brain is at the start, after a relatively short time t it will be binary, in the sense that Brain 1 would have done X and Brain 2 would not.
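This exponential divergence can be seen in even the simplest chaotic system. A minimal sketch, using the logistic map as a stand-in for any dynamics with a positive Lyapunov exponent (it is of course not a brain model, and the initial separation of 1e-12 is a stand-in for any tiny simulation error):

```python
# Two trajectories of the logistic map x -> 4x(1-x), a standard chaotic
# system whose Lyapunov exponent is ln(2): errors roughly double per step.

def logistic(x):
    return 4.0 * x * (1.0 - x)

x1, x2 = 0.3, 0.3 + 1e-12   # "real brain" vs "simulation", differing by 1e-12
history = []
for step in range(60):
    x1, x2 = logistic(x1), logistic(x2)
    history.append(abs(x1 - x2))

# The gap grows like exp(t * ln 2) until it saturates at O(1), at which
# point the two trajectories are simply doing different things:
print(history[0], history[20], history[-1])
```

After roughly 40 steps the initial 1e-12 discrepancy has been amplified to order 1, which is the "Brain 1 would have done X and Brain 2 would not" regime.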

- The suggested exact imaging of the brain is also impossible. Even a nanobot in every neuron would be unable to observe every synapse in detail, and to understand exactly what is going on in a synapse you have to "look at" the exact state of, e.g., the Ca++ ions binding to the synaptotagmin, which is beyond the capabilities of even the smallest nanobot. Furthermore any nanobot could only transmit a finite quantity of information to the supposed simulation in a given time, and since the brain state changes continuously it would never be possible to synchronise the two systems. Not to mention the problems of interactions between the nanobots and the brain, power dissipation, errors etc. Neurons are highly complex wet analog systems; they are not logic devices at all, they only approximate to them.
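To make the bandwidth point concrete, a back-of-envelope estimate. All the numbers here are rough, illustrative orders of magnitude I am assuming for the sketch, not measurements:

```python
# Back-of-envelope: the raw data rate needed just to stream synaptic state
# out of the brain. Every figure below is an assumed order of magnitude.

neurons = 1e11              # assumed neuron count for a human brain
synapses_per_neuron = 1e4   # assumed average synapses per neuron
bytes_per_synapse = 1       # wildly optimistic: one byte captures a synapse's state
updates_per_second = 1e3    # millisecond timescale of synaptic dynamics

synapses = neurons * synapses_per_neuron
rate_bytes_per_s = synapses * bytes_per_synapse * updates_per_second
print(f"{rate_bytes_per_s:.0e} bytes/s")
```

Even with these absurdly generous simplifications the required rate is around an exabyte per second, sustained, and a real synapse's state is nothing like one byte.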

- Not only are there fundamental physical and biological reasons why it won't work, I cannot see how any company could get ethical approval for the necessary clinical trials. And without this there wouldn't be enough of a market to develop the hypothetical sensors.

- My attention was drawn to Deutsch's fascinating 1985 (!) paper which proves that any finite physically realisable system can be perfectly simulated by an (abstract) Quantum Computer. What he shows is that "every element of a certain countable dense subset of G {the set of physically realisable unitary quantum systems with finite numbers of degrees of freedom} can be computed {in the sense of there being in principle an abstract quantum computer that could compute that set}. But every point in any open region of a finite-dimensional vector space can be represented as a finite convex linear combination of elements of any dense subset of that space. It follows {since he has also shown that in principle if two sets can be Quantum-computed so can any linear combination of these sets} that Q can perfectly simulate any {unitary} physical system with a finite-dimensional state space."
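The linear-algebra step in the quoted argument — that any point in an open region of a finite-dimensional vector space is a finite convex combination of elements of a dense subset — can be sketched numerically. A toy in R², with the rational points Q² as the dense subset (nothing quantum is involved):

```python
# Express a point with irrational coordinates as a convex combination of
# three rational points (elements of the dense subset Q^2) via barycentric
# coordinates of a triangle that contains it.
import math

target = (1 / math.pi, 1 / math.e)   # an "arbitrary" point, irrational coords

# Three rational points whose triangle contains the target:
v0, v1, v2 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)

x, y = target
w1, w2 = x, y          # barycentric weights w.r.t. v1, v2
w0 = 1.0 - x - y       # remaining weight for v0
assert w0 > 0 and w1 > 0 and w2 > 0   # genuinely convex: positive, sum to 1

recon = (w0 * v0[0] + w1 * v1[0] + w2 * v2[0],
         w0 * v0[1] + w1 * v1[1] + w2 * v2[1])
print(recon)   # reconstructs the target from dense-subset elements
```

Deutsch's move is the same idea one level up: the dense subset is the computable unitaries, and the combination gives him any finite-dimensional unitary system.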

Now this is an interesting result, but we have to understand that what Deutsch means by a physical system is one in which measurement/wavefunction collapse doesn't occur. So for Deutsch the brain never makes decisions: there is a "world" in which each neuron fires at a particular time and another one in which it doesn't. Whether or not one finds this ontology ridiculous, what is very clear is that it is completely incompatible with the concepts needed to describe brains, thoughts, uploading or anything like that. The moment you can ask questions like "did I do this?" or "do I remember this?" then these Quantum Computers break down as "perfect simulators": they can only tell you probabilities.
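A toy sketch of that last point (the amplitudes are hypothetical numbers chosen for illustration, not a model of any neuron): a unitary simulation carries amplitudes, but a question like "did the neuron fire?" only gets an answer via the Born rule, i.e. as a probability:

```python
# The simulation side holds amplitudes; asking a yes/no question forces
# a Born-rule sample. Repeated runs recover the probability, but no run
# tells you what *the* brain did.
import random
random.seed(0)

amp_fire, amp_not = 0.6, 0.8      # amplitudes: 0.6**2 + 0.8**2 = 1
p_fire = amp_fire ** 2            # Born rule: P(fire) = |amplitude|^2

outcomes = [random.random() < p_fire for _ in range(10_000)]
# Prints the observed frequency; each individual outcome is just True/False.
print(sum(outcomes) / len(outcomes))
```

The frequency converges to p_fire, but that is exactly the problem: a probability distribution over firings is not the same object as "Brain 1 fired and Brain 2 didn't".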

The other secondary problems are that Deutsch needs infinite precision coefficients in his quantum computers for perfect simulation of another system, and that his computers are abstract and not necessarily physically realisable.

- It was also pointed out that there is a contradiction between Chaos Theory and Quantum Mechanics. This is a deep issue and probably relates both to Quantum Gravity and the Measurement Problem. QM is known to be incomplete because it can account for neither gravity nor measurement. The Copenhagen interpretation (followed by the great majority of physicists over 50) says there is a not-yet-understood process whereby the system decides which (eigen)state it's in when it is measured, and a QM system is only linear/unitary up to the point at which it is measured. Deutsch and the Everett crowd say that there is no decision, just n new parallel universes, one for each eigenstate.

- Finally my attention was drawn to the latest paper from Penrose and Hameroff. What I jokingly refer to as "Beale's Law of Biological Systems" is the fundamental principle that "*Biological systems are almost always more complex than you think - even when you allow for the fact that they are more complex than you think.*" Whether or not their ideas about Orch OR and microtubules turn out to be correct in detail, the fundamental point is that the brain is much, much more complex than AI types suggest. What is certainly true is that even tiny fluctuations in the behaviour of the microtubules can be enough to change the firing behaviour of neurons. And as Hava Siegelmann showed in 1995, you don't need special physics to get the brain beyond the Turing limit.

I did suggest in the same journal that the discoverability of scientific laws may place a strong constraint on them, and this offers a meta-explanation of why consciousness and the laws of physics might be connected.