|Fig. 2 from Felton et al., Self-assembling origami robot|
Who or what will control these, or will they be allowed to swarm, with completely unpredictable consequences?
|Fig. 2 from Merola et al., Principles of TrueNorth Architecture|
Neural architectures also lend themselves to being programmed as self-organising networks, whose internal behaviour no one really understands.
So the question naturally arises: what is to prevent autonomous computer/robot systems from doing serious harm to humanity? Asimov suggested that the Three Laws of Robotics should be embedded in all robots, but there is no sign of this happening. I understand that Nick Bostrom has written a book on this subject: Superintelligence: Paths, Dangers, Strategies (OUP, 2014). Without being alarmist, there are serious ethical issues here that need to be thought through before these bugs become too widespread.