The laws of robotics: how regulation will improve our robots
Innovation within the field of robotics is gaining pace. Rapid expansion is taking place right across the industry spectrum, as well as seeping into entirely new areas. And these developments have rekindled the crucial discussion around robot ethics, as we consider who and what is responsible for keeping robots on the just and righteous path.
In legal terms, humans may soon no longer be the only entities governed by predefined law, as the question of a legal persona for artificial entities is under active discussion. In the assistive robotics sector, several studies have examined ethics, given the implications for ubiquitous surveillance, patient autonomy, non-human therapy, deception, and AI interpretability. Robotics start-ups should keep these concerns and challenges in mind, since they directly affect end-users, and regulation in these areas is already coming into play to help organisations design better robots. Robotics companies must prepare for future changes by taking a proactive, ethical approach, with users’ rights at the centre of the design process.
Clarity around the legal status of AI will be essential to gaining people’s trust. One of the biggest and most legally disruptive challenges posed by AI is what to do with machines that act in increasingly autonomous ways. It is vital that regulation evolves alongside them.
There has previously been discussion around creating a new legal status for robots by granting them electronic personhood, whereby AI and robots would be considered “e-persons”. This would attribute responsibilities to them and drastically alter how they are seen in the eyes of human law: the AI or robot itself could be held responsible under existing structures if something were to go wrong. The idea was shot down, but the wider discussion remains as to how we situate robots within regulatory processes and where responsibility ultimately lies.
Who or what is obligated?
The additional complexity of allocating responsibility for damage caused by increasingly autonomous robots could be addressed by an obligatory insurance scheme, mirroring the automotive industry, where a protocol of this nature already exists. The onus would then be on the insurance industry to develop new products and types of offers in line with advances in robotics and society today. With this in mind, AI engineers should be aware that the development of autonomous cars, medical bots and robo-advisors could soon become highly regulated, expensive, and come with extensive and burdensome liability strings attached.
In light of this, there is scope for these insurance pressures to result in improved robotics design. If manufacturers are obliged to insure their robotic products against damage and are also liable for providing safe-to-use equipment, there is an added incentive to create products that place user experience at the heart of the design. The direction of legislation, in other words, is geared towards products built to the highest possible standard of safety and user experience, which should translate into better robotics development and delivery of machines into the field.
The IoT as a blueprint?
Look at IoT devices, which have been the focus of increased legislation in the UK over recent years, due to historic vulnerabilities and minimal manufacturer support once devices leave the assembly line. The solution has been in the software, which can continue to safeguard an IoT device, or in this case a robot, long after the hardware ships. Choosing an operating system designed with security and the end-user in mind allows a robot to better evolve with legislation over time.
Using an embedded operating system such as Ubuntu Core gives autonomous robots several layers of security built into the OS itself, with system integrity checks ensuring the software is protected against corruption. From an IoT perspective, this provides additional protection for the data that is collected, in line with GDPR and wider security concerns across the robotics industry.
In time, software will replace hardware as the fundamental element of a robot’s worth. Because of this, security and reliability will become the foundations of trust in machines, while collaboration will promote more dynamic robots that can extend their lifespans through third-party apps. That’s where a technology such as snaps comes in. Snaps are containerised software packages that are easy to create and publish. They are safe to run and update automatically and transactionally, so an update never leaves a device in a broken state. If a security vulnerability is discovered in the libraries an application uses, the app publisher is notified so the app can be rebuilt quickly with the supplied fix and pushed out to every device.
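As a minimal sketch of how this packaging model looks in practice, a snap is declared in a snapcraft.yaml file. The snap name, app command and part details below are hypothetical, invented for illustration; the fields themselves (base, confinement, apps, plugs, parts) are the standard snapcraft declarations:

```yaml
# snapcraft.yaml — hypothetical robot control daemon packaged as a snap
name: robot-controller        # hypothetical snap name
base: core22                  # the runtime base the snap builds against
version: '0.1'
summary: Example robot control daemon
description: |
  Illustrative only; shows how confinement, permissions and
  dependencies are declared in a single self-contained package.

grade: stable
confinement: strict           # app runs sandboxed by default

apps:
  robot-controller:
    command: bin/controller   # hypothetical entry point
    daemon: simple            # run as a background service
    plugs:                    # access is granted only via declared interfaces
      - network
      - camera

parts:
  controller:
    plugin: python            # build this part from local Python sources
    source: .
```

Because the snap bundles its dependencies, rebuilding against a patched library and publishing the new revision is enough for devices in the field to pick up the fix automatically; a failed or misbehaving update can be rolled back with `snap revert`.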
Ultimately, there is real value in developing regulation for the robotics industry, because it will drive the strongest possible user experience. Software forms the building blocks of robots, and getting it right will result in better harmony between autonomous machines and the laws that govern them.