Saturday, August 16, 2014

Caution on self-organising machines

Fig 2 from Felton et al Self-assembling origami robot
Two rather stunning papers in the same issue of Science raise some perennial, but significant, concerns. Felton et al. describe their self-assembling origami robot. Although in many ways this raises no new issues of principle, it clearly brings the mass-production of self-assembling robots one step nearer.

Who or what will control these robots, or will they be allowed to swarm, with completely unpredictable consequences?

Fig 2 from Merolla et al. Principles of TrueNorth Architecture
Meanwhile Merolla et al. describe their "million spiking-neuron integrated circuit with a scalable communication network and interface": a digital architecture that mimics a reasonably large-scale neural network with attractive power-consumption properties. Power consumption has become a major limiting factor on the power of computers.

Neural architectures also lend themselves to being programmed as self-organising networks, in which no-one really knows what is happening internally.

So the question naturally arises: what is to prevent autonomous computer/robot systems from doing serious harm to humanity? Asimov suggested that the Three Laws of Robotics should be embedded in all robots, but there is no sign of this happening. I understand that Nick Bostrom has written a book about this: Superintelligence: Paths, Dangers, Strategies (OUP, 2014). Without being alarmist, there are serious ethical issues here that need to be thought through before these bugs become too widespread.
