Safeguarding Privacy While Building Smart Cities of the Future

Hayden AI
4 min read · Apr 14, 2022

Despite the significant potential of artificial intelligence to improve public services, most government organizations have yet to embrace AI, partly due to privacy and ethical concerns. A privacy-by-design approach that builds privacy into the design of AI systems from the start can help ensure these technologies comply with privacy regulations and protect citizens’ data while fostering innovation.

Chris Carson, Founder & CEO, Hayden AI

AI can transform cities through smart applications that analyze data captured from thousands of interconnected IoT devices. These applications generate insights that help municipal leaders tackle urban challenges such as traffic congestion and air pollution.

Despite the significant potential of AI to improve public services, most government organizations have yet to embrace the technology, partly due to privacy and ethical concerns. The increasing frequency of data breaches, which often lead to identity theft and fraud, has further exacerbated these concerns and sparked outrage among citizens.

However, the benefits of AI cannot be overlooked. The recent report "Hello, World! Artificial Intelligence and its Use in the Public Sector," published by the Organisation for Economic Co-operation and Development (OECD), predicts that AI will free up nearly one-third of public servants’ time in the coming years, allowing them to shift from tedious tasks to work that requires higher levels of skill.

Government organizations and service providers looking to leverage AI, therefore, need to address privacy concerns in order to harness the full potential of the technology and enhance operational efficiency.

THE CHALLENGE OF PROTECTING DATA AND MITIGATING PRIVACY CONCERNS

Data is the lifeblood of AI — the quantity and quality of data fed into AI algorithms determine the performance and accuracy of the models. However, protecting data privacy is often an afterthought to the design process of AI systems, which leaves organizations more susceptible to data breaches and privacy violations.

The advancement of AI beyond text data processing and toward visual data processing has further complicated efforts to protect privacy.

AI-based features such as facial recognition have faced backlash over privacy violations, as have predictive algorithms that derive sensitive personal information, such as ethnicity, from non-sensitive information such as location data. AI-powered systems — such as camera surveillance systems in public areas — may also capture personal information from unintended targets and violate the privacy rights of citizens.

While there are mechanisms that help minimize the impact of data breaches on privacy, such as restricting access through encryption and masking personally identifiable information through anonymization, these protections can often be reversed — for example, by brute-forcing masked identifiers or by linking records with other datasets — and are therefore insufficient on their own.
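To illustrate why naive de-identification can be reversed, consider the toy Python sketch below. It is not drawn from any real system: it "anonymizes" a license plate by hashing it, then shows how anyone who knows the plate format can recover the original value simply by enumerating candidates.

```python
import hashlib
import string
from itertools import product

def pseudonymize(plate: str) -> str:
    # Naive "anonymization": replace the plate with its SHA-256 hash.
    return hashlib.sha256(plate.encode()).hexdigest()

def reverse(target_hash: str):
    # An attacker who knows the plate format (assumed here: 3 letters + 3 digits)
    # can hash every possible plate and compare it with the "anonymized" value.
    for combo in product(string.ascii_uppercase, repeat=3):
        prefix = "".join(combo)
        for num in range(1000):
            candidate = f"{prefix}{num:03d}"
            if pseudonymize(candidate) == target_hash:
                return candidate
    return None

anon = pseudonymize("ABC123")
print(reverse(anon))  # prints "ABC123": the hash alone did not protect the identity
```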

Although privacy regulations such as the California Consumer Privacy Act, the New York Privacy Act, and the General Data Protection Regulation (GDPR) encourage organizations to protect citizens’ personal information by holding them accountable for privacy breaches, they tend to be overly restrictive and hamper advancements in AI.

By taking a privacy-by-design approach that builds privacy into the design of AI systems from the start, companies can more easily comply with privacy regulations and protect citizens’ data while fostering innovation.

This can be achieved by limiting the amount of personal data collected to only the information required, restricting the use of collected data to the purposes for which consent was obtained, obfuscating faces and other sensitive information of citizens who are not of interest, and minimizing the movement of data.
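As a minimal sketch of the "obfuscate faces before storing anything" principle — using OpenCV as an assumed tool, not any particular vendor's pipeline — the snippet below detects faces in a frame and blurs them so that only the redacted image is ever written to disk. No identity matching is performed; detection is used solely to locate regions to obscure.

```python
import cv2  # OpenCV for face detection and blurring

# Pre-trained Haar cascade shipped with OpenCV, used only to locate faces.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def redact_faces(frame):
    """Return a copy of the frame with every detected face blurred."""
    redacted = frame.copy()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = redacted[y:y + h, x:x + w]
        # Heavy Gaussian blur makes the face unrecognizable in the stored frame.
        redacted[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return redacted

# "street_scene.jpg" is a placeholder file name for illustration.
frame = cv2.imread("street_scene.jpg")
cv2.imwrite("street_scene_redacted.jpg", redact_faces(frame))
```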

Companies can also avoid many of the restrictions that privacy regulations place on processing personal data by training AI models on synthetic datasets that closely mirror the key statistical properties of real-world personal data, rather than on the personal data itself.
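A minimal sketch of the idea, assuming simple tabular numeric data: fit a basic statistical model (here a multivariate Gaussian via NumPy) to the real records and sample synthetic rows that preserve the column means and correlations without reproducing any individual record. Real deployments would use far more sophisticated generators, but the principle is the same.

```python
import numpy as np

def synthesize(real_data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Draw synthetic rows from a Gaussian fitted to the real data.

    The synthetic rows match the real data's per-column means and covariance
    (i.e. correlation structure) but contain no actual record.
    """
    rng = np.random.default_rng(seed)
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Toy example: three numeric features for 1,000 entirely fictional individuals.
rng = np.random.default_rng(42)
real = rng.normal(loc=[35.0, 60_000.0, 2.0], scale=[10.0, 15_000.0, 1.0], size=(1000, 3))

synthetic = synthesize(real, n_samples=1000)
print(real.mean(axis=0))       # column means of the real data...
print(synthetic.mean(axis=0))  # ...closely match those of the synthetic data
```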

HOW A SMART CITY SOLUTIONS PROVIDER IS SAFEGUARDING PRIVACY WHILE DRIVING INNOVATION

Hayden AI is a smart city solutions provider whose AI-powered traffic management platform enables governments to reduce traffic congestion and improve the efficiency of traffic flow, making public transport more reliable and encouraging ridership.

The company’s scalable platform allows transit agencies to build applications for various use cases such as bus lane enforcement, which improves the performance of existing bus lanes through automated camera enforcement.

A camera mounted on the front of each bus automatically records potential violations as they occur. The recordings are then analyzed using deep learning algorithms and advanced computer vision. To mitigate the privacy risks associated with automated camera enforcement, the company takes a true privacy-by-design approach.

The Hayden AI platform temporarily stores recordings of possible bus lane violations and analyzes them at the edge in real time, instead of transferring the data to a central server for processing. These recordings are destroyed if no violation is detected. If a violation is detected, however, information regarding the transgressor, such as the license plate number, is retained along with the recording of the violation incident.
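As a rough, hypothetical sketch of this "process at the edge, retain only violations" pattern — the function names below are placeholders, not Hayden AI's actual API — the retention logic might look like the following.

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evidence:
    plate_number: str   # identifies the violating vehicle only
    clip_path: str      # recording of the violation incident

# --- Placeholder detectors: stand-ins for the on-device deep learning models ---
def detect_bus_lane_violation(clip_path: str) -> Optional[dict]:
    """Stub analysis; a real system would run a computer vision model here."""
    return None  # stub result: "no violation detected"

def read_license_plate(detection: dict) -> str:
    """Stub plate reader; a real system would crop the vehicle and run OCR."""
    return "UNKNOWN"

def process_clip_at_edge(clip_path: str) -> Optional[Evidence]:
    """Analyze a temporarily stored clip on-device and decide what to keep."""
    detection = detect_bus_lane_violation(clip_path)
    if detection is None:
        # No violation: destroy the recording so no personal data is retained
        # and nothing leaves the device.
        if os.path.exists(clip_path):
            os.remove(clip_path)
        return None
    # Violation: keep only vehicle-level evidence (plate and clip),
    # nothing about pedestrians or bystanders.
    return Evidence(plate_number=read_license_plate(detection), clip_path=clip_path)
```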

To avoid infringing on the privacy of citizens who are not involved in the incident, the Hayden AI platform only captures and retains information about the vehicle of the transgressor. The platform is designed not to record information about pedestrians and not to engage in other privacy-violating practices such as facial recognition.

The company also provides transparency around how violations are determined and complies with strict security standards, including government-grade encryption when transferring sensitive data to the central server hosted in the cloud, minimizing the risk of unauthorized access.
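As a generic illustration of encrypting evidence before upload — not the company's actual implementation — the sketch below uses the Python cryptography library's Fernet recipe, which provides authenticated, AES-based symmetric encryption.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would live in a hardware security module or a managed
# key service, never alongside the data; it is generated here only for the demo.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_for_upload(path: str) -> bytes:
    """Read an evidence file and return an encrypted, authenticated payload."""
    with open(path, "rb") as f:
        plaintext = f.read()
    return cipher.encrypt(plaintext)  # only this ciphertext leaves the device

def decrypt_on_server(payload: bytes) -> bytes:
    """Server side: recover the original bytes using the shared key."""
    return cipher.decrypt(payload)
```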

CITIES ADOPTING AI NEED TO SAFEGUARD PRIVACY

Although AI has tremendous potential to help governments meet the needs of urban citizens more efficiently, the adoption of the technology in the public sector lags behind the private sector, partly due to privacy concerns.

By leveraging smart city applications that adopt a privacy-by-design approach, governments can mitigate privacy concerns and ensure the protection of citizens’ personal information, while harnessing the full potential of AI to tackle urban challenges and improve the overall quality of life.

Government agencies can also develop transparent policies and guidelines for the use of AI to allay privacy concerns.
