
Highlights from the Lords Select Committee on Artificial Intelligence

The House of Lords Select Committee on Artificial Intelligence recently gathered to hear evidence from witnesses across academia, law, industry, and the media. We attended to gain insight into the latest thinking on AI and the potential recommendations that may be presented to government when the committee reports early next year.

Almost every group that presented to the committee raised concerns about how AI will affect the UK’s workforce. Witnesses from academia and the legal profession pointed to the growing number of industries in which jobs are threatened. Where workers are displaced, it was suggested that government should incentivise individuals to retrain, making them more resilient to future technological revolutions, while also preparing the next generation of employees for an AI-enabled labour market.

Sarah O’Connor, employment correspondent at the Financial Times, described to the committee how AI could drastically improve the UK’s productivity, which continues to lag behind major trading partners such as the US, France, and Germany. The ethical implications of growing up in an AI-enabled world were top of mind for witnesses in all fields, nowhere more so than in education. It was noted that children in primary school through to students at university will likely learn about the ethical effects intelligent technology can have on society.

Michael Wooldridge, professor of computer science at the University of Oxford, said it is vital that we also teach businesses to be responsible for the technology they create. In his view, it is becoming increasingly difficult to track algorithms and understand how they reach their conclusions. Similarly, he said developers must be aware of the unconscious bias being programmed into the algorithms that shape our perception of the world.

Witnesses before the committee raised a crucial question: if AI does go wrong, who is accountable – the creator, the owner, or the algorithm itself? Alan Winfield, professor of robot ethics at the University of the West of England, stated that regulation should enforce a chain of accountability whereby the owner, not the creator, of the algorithm is responsible. What’s more, he felt these algorithms must be held to the same standard as physical products – subjected to rigorous testing and the scrutiny of third-party agencies.

For other witnesses, data privacy was a major concern – on the one hand, AI can achieve a great deal in fields such as medicine, but without limits, our privacy can be invaded and exploited for financial gain. Academics such as Dame Wendy Hall discussed the possibility of treating data like a natural asset, capitalising on the deep mines of information each of us creates. Indeed, the likes of Tim Berners-Lee are working on personal data stores that give individuals complete control of their own data. While projects like these are still in the pipeline, private organisations must ensure the public better understands the consent process for data use. Ultimately, data could make or break AI, depending on how it is used.

With further evidence sessions to come, we won’t know the full extent of the recommendations presented to government until the report is published in March 2018. What we can expect is that the committee’s guidance will seek to protect our privacy while ensuring the UK is primed to take advantage of current and future advances in AI.