Facial recognition technology at King’s Cross last used in 2018

According to the latest report published by the developer of the 67-acre (0.3 sq km) site, facial-recognition technology has not been in use at London's King's Cross Central development since March 2018.

A spokeswoman had said the technology was used to 'ensure public safety', as the Financial Times reported in August.

Reports, however, suggest that only two of the on-site cameras used facial recognition. Both were in a single location and were used to assist the police. The two cameras were in use between May 2016 and March 2018, and the data was allegedly 'regularly deleted'. King's Cross also denied sharing any data commercially.

According to the reports, the data was used to help British Transport Police and the Metropolitan Police "prevent and detect crime in the neighbourhood." However, both forces said they were initially unaware of the involvement.

In addition, any future use of the technology has been put on hold, as the developer "has no plans to reintroduce any form of FRT [facial-recognition technology] at the King's Cross estate".

The Mayor of London also raised concerns with the King's Cross Central development group and sought reassurance that its use of facial-recognition technology was legal.

Dr Stephanie Hare, a critic of facial-recognition technology, said she remained doubtful about several aspects of its use in the area, which is privately owned but open to the public through its restaurants, bars and other family spaces. Hare also questioned why the partnership had stopped using the technology.

“It does not change the fundamentals of the story in terms of the implications for people’s privacy and civil liberties, or the need for the ICO to investigate – they deployed this technology secretly for nearly two years,” Hare said.

“Even if they deleted data, I would want to know, ‘Did they do anything with it beforehand, analyse it, link it to other data about the people being identified? Did they build their own watch-list? Did they share this data with anyone else? Did they use it to create algorithms that have been shared with anyone else? And most of all, were they comparing the faces of people they gathered to a police watch-list?’” she added.