Saturday, October 31, 2015

Bell's Inequalities and Memory: two stunning papers in Nature

Fig. 1 from Hensen et al.: panels a and b show the schematic setup, with a separate "ready" signal used to indicate when the two states have been successfully entangled; panel e shows the 1.3 km separation.
Two pretty stunning papers in Nature.

The first "Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres" (Hensen et al) is a pretty strong test of what people have long suspected: that "local realism" in which the outcome of quantum measurements is pre-determined by some "hidden variables" is not compatible with special relativity and physical observations. Previous experiments had supported this but there were always some loopholes.

This is a lovely and ingenious experiment. But it is a great pity that they ran only 245 trials, because that leaves their result with a p-value of just 0.039, which is not remotely good enough for most of physics. The trials took only 18 days, and it would presumably have been straightforward to keep going for another couple of months, which would have brought the p-value well below 0.000001. It really needs to be replicated, and I'm a bit surprised the referees didn't ask for this (see PS below).
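To put a rough number on that claim, here is a back-of-envelope sketch (my own arithmetic, not the authors'): if the per-trial effect size stays the same, the significance of the violation should grow roughly as the square root of the number of trials, so one can estimate how much extra running time a "five sigma" result would need.

```python
# Back-of-envelope sketch (my arithmetic, not the authors'): if the per-trial
# effect size stays fixed, the z-score of the violation grows roughly as
# sqrt(N), so we can estimate how many trials a 5-sigma result would need.
import math

N0, z0 = 245, 2.1        # reported number of trials and roughly 2-sigma significance
target_z = 5.0           # the usual "discovery" threshold in particle physics

N_needed = N0 * (target_z / z0) ** 2          # trials needed under the sqrt(N) scaling
extra_days = (N_needed - N0) / (N0 / 18.0)    # the 245 trials took about 18 days

p_at_target = 0.5 * math.erfc(target_z / math.sqrt(2))   # one-sided Gaussian tail

print(f"Trials needed for {target_z:.0f} sigma: about {N_needed:.0f}")
print(f"Extra running time at the same rate: about {extra_days:.0f} days")
print(f"One-sided p-value at {target_z:.0f} sigma: {p_at_target:.1e}")
```

On these admittedly crude assumptions, reaching the 5-sigma level would take roughly 1,400 trials, i.e. a couple of months of extra running at the reported rate, in line with the estimate above.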

The authors remark that: "Our observation of a statistically significant loophole-free Bell inequality violation thus indicates rejection of all local-realist theories that accept that the number generators produce a free random bit in a timely manner and that the outputs are final once recorded in the electronics. Strictly speaking, no Bell experiment can exclude all conceivable local-realist theories, because it is fundamentally impossible to prove when and where free random input bits and output values came into existence."

Fig. 1 from Rajasethupathy et al.: panels a-c show where the labelling agents were injected, together with the corresponding images; d-g show the response amplitudes and latencies; h and i show the CA1 and CA3 spiking; j shows no signal from the dentate gyrus neurons.

The other paper I found particularly interesting was "Projections from neocortex mediate top-down control of memory retrieval" by Priyamvada Rajasethupathy and colleagues at Stanford, which sheds a fascinating light on the mechanisms of memory. They report the discovery of what they call an "AC–CA projection" in mice [prefrontal cortex (predominantly anterior cingulate) to hippocampus (CA3 to CA1 region)].

They find that optogenetic manipulation of the AC–CA projection can elicit contextual memory retrieval. They developed tools to observe cellular-resolution neural activity during memory retrieval in mice behaving in virtual-reality environments. They "found that learning drives the emergence of a sparse class of neurons in CA2/CA3 that are highly correlated with the local network and that lead synchronous population activity events; these neurons are then preferentially recruited by the AC–CA projection during memory retrieval. These findings reveal a sparsely implemented memory retrieval mechanism in the hippocampus that operates via direct top-down prefrontal input, with implications for the patterning and storage of salient memory representations."

Both papers are examples of really beautiful and innovative experimental techniques brought to bear on deep questions that would once have been considered almost inaccessible to experiment, and are to be applauded, though in both cases I wish the sample sizes had been larger (n = 5 is uncomfortably low even if there is a p less than 0.01).

PS Prof. Hanson has kindly emailed pointing out that this is a 2-sigma result (they observe S = 2.42 with a conventionally calculated standard deviation of 0.20, whereas Bell's inequality would give S less than 2) and that many physics papers report 2- or 3-sigma results. In addition he remarks that "there is no one in the field that has any doubt that this result will not be replicated (this is a sign that the a-priori probability that our null hypothesis was true is actually small)" (this is what he wrote, but of course he means "that this result will be replicated"). All of which is fair enough as far as it goes. But:
  1. If it is only possible to get a 2 or 3 sigma result it may well be worth reporting, but when it is straightforward to get a more watertight result just by running the experiments a few more days it seems a pity not to do so. These additional experiments could have been run after the paper was submitted so as not to cause problems with the priority (it was accepted 40 days after submission).
  2.  There are enough examples in the history of science of results that do not agree with what everybody in the field thinks that it would still be great to be more certain.
He says that his team and others will certainly acquire more data in the future, so the p-value should come down. Hopefully this will be reported in Nature as an addendum when the data are in: it would be good to have this as clear as the Higgs particle.
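For concreteness, here is the sigma-to-p arithmetic behind the PS (my own quick check in Python, not the authors' analysis):

```python
# Quick check (my own arithmetic) of the numbers quoted above: S = 2.42 with a
# conventionally calculated standard deviation of 0.20, against the
# local-realist bound S <= 2, compared with the 5-sigma "Higgs" standard.
import math

def one_sided_p(z):
    """One-sided Gaussian tail probability for a z-sigma excess."""
    return 0.5 * math.erfc(z / math.sqrt(2))

S, sigma_S, bound = 2.42, 0.20, 2.0
z = (S - bound) / sigma_S                     # about 2.1 sigma

print(f"Violation: {z:.1f} sigma, naive one-sided p of about {one_sided_p(z):.3f}")
print(f"Higgs-style 5 sigma corresponds to p of about {one_sided_p(5.0):.1e}")
```

The naive Gaussian tail gives p of about 0.018; the paper's quoted p = 0.039 comes from the authors' own, more conservative analysis, which as I understand it does not rely on the Gaussian approximation.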
