By Fernanda Catão de Carvalho & Igor Baden Powell*

INTRODUCTION

Cities are characterized as spaces of multiple perspectives that allow for the circulation of people and goods, mediated by contractual, cultural and economic relations. Their functioning requires resources–water, energy, food, labor, among others–and infrastructure–mobility, housing, health, education and security–which points to the need to rethink the urban space management system in order to provide inhabitants with quality of life and well-being.

These needs become worrisome as the demand for such resources and infrastructure is rapidly increasing due to the speed and manner in which cities have been forming and taking shape. Grostein[1] highlights, with data from 1999, that “the Brazilian urbanization process, in the second half of the 20th century, led to the formation of 12 metropolitan regions and 37 non-metropolitan urban agglomerations, which account for 47% of the population of the country. In the 12 metropolitan areas, 33.6% of the Brazilian population reside (52.7 million inhabitants) in extensive conglomerates involving 200 municipalities.”

More recent data from the Brazilian Institute of Geography and Statistics (IBGE) (2018) show that, 19 years later, Brazil has 73 Metropolitan Regions, 7 Integrated Development Regions (Rides)–administrative regions that span different states–and 4 Urban Agglomerations, which together concentrate 1,403 municipalities. Among the Metropolitan Regions and Rides, 28 have populations of over 1 million inhabitants and add up to 98.7 million inhabitants, representing 47.3% of the Brazilian population.

The availability and popularization of technologies in this century have intensified their use not only for human communication, but also as solutions to complex everyday problems in cities and as tools for their governance. Considering the abovementioned data, Brazilian cities are thus an attractive market for the adoption and propagation of such new technologies.

In this context, the use of data and advanced technologies in cities has become essential to provide a better quality of life for citizens, in addition to facilitating the activities of the public and private entities established in them. Advanced technologies are those that use the IoT (Internet of Things), an interconnected network of “things”, that is, of devices that connect to other devices or even to people.

The application of IoT can go far beyond programming devices to work in isolation; through data collection and its subsequent analysis, IoT gives people the power and flexibility to monitor various aspects of their lives, making changes that can optimize everything from personal sleep cycles to energy efficiency on a college campus.

Massive data processing is inherent in cities. Through IoT devices, data of the most diverse nature are collected, used, stored and shared, such as: geolocation points, arrival and departure times, medical health information, food preferences, etc.

Thus, the great innovation of cities lies in the analysis and subsequent correlation of this vast collection of personal data, also known as urban big data.

Along with IoT devices and urban big data, an emerging technology intrinsic to cities is facial recognition, which serves multiple purposes.

The ability of facial recognition systems to assist law enforcement authorities, for example, is one of the reasons why such systems became extremely attractive. When deployed together with an extensive database, they can be a tool to aid in crime prevention, in locating missing persons, in solving police inquiries and in monitoring public places.

By 2019, there were high hopes that this technology would grow tremendously. According to the research report “Facial Recognition Market by Component,” the facial recognition industry was expected to grow from $3.2 billion in 2019 to $7.0 billion in 2024 in the US alone.

However, starting in 2020, that landscape changed after civil rights advocates raised concerns that facial recognition contributes to the erosion of privacy, reinforces prejudice against Black people, and is subject to misuse.

Bringing the discussion to the field of privacy, even if facial recognition technology uses publicly available or freely provided information, such as facial images and user activity on the internet, the risk to privacy can be enormous. In addition, in an environment well equipped with facial recognition, simply living in society can provide enough personal information to link individuals to a multitude of personal data compiled and analyzed by both private companies and the government.

This paper thus seeks to analyze the use of facial recognition systems, especially in the context of Brazilian cities, and to what extent citizens should be concerned about the implementation of such systems in their day-to-day lives.

Therefore, after a more technical description of the concepts involving both artificial intelligence systems and smart cities, an analysis of the Brazilian regulatory scenario is made in an effort to assess whether the current legal framework is sufficient to protect citizens from potential privacy threats.

I. TECHNOLOGIES AND CITIES: FROM DIGITAL CITIES TO SMART CITIES

Smart City (or Smart Cities) is already a well-known term, but one not so well understood. The concept refers to a city that is smart, or intelligent: automated and functional. Many would ask, “Can technologies that the government makes available to the public within an urban area—such as Wi-Fi in public places or service apps, among others—characterize a city as smart?”

The right answer is: it depends. Although technologies are part of the path that leads cities to fit the concept of a Smart City, most cities are still in the process of going digital. Though similar, the two terms have different meanings.

Digital cities have implemented communication technologies that promote broad access to content, tools and management systems, thus serving the government and its workers, citizens and organizations. The Smart City, on the other hand, uses the digital components of a digital city, but in an innovative, integrative and collaborative way.

The concept of digital cities is diffuse and polysemic, that is, its scope and borders are not well defined and, therefore, it gives rise to different interpretations. The term digital city can mean urban information and communication infrastructure, local electronic government, tourist guides, proximity communities or virtual representations of real or imaginary cities.[2]

The origin of the term “digital city” is best known as coming from the Amsterdam Digital City (“DDS”, De Digitale Stad). This project became a paradigm, as it was the first to use the metaphor of the city—streets, squares and buildings—as interfaces for interaction with users. After the Amsterdam case, several similar projects began to emerge, such as Helsinki Arena 2000, Digital City Bristol, and Kyoto Digital City.

Ishida best illustrates this initial moment by saying that “[t]he concept of digital city is to build an arena in which people in regional communities can interact and share knowledge, experiences, and mutual interests. Digital cities integrate urban information (both achievable and real time) and create public spaces on the Internet for people living or visiting the cities.”

This concept was later simplified: “digital cities will collect and organize digital information of the corresponding cities and provide a public information space for people living in and visiting them to interact with each other.”[3]

At first, the perspective that prevailed in Brazil was that of digital cities, as evidenced by the implementation of infrastructure allowing public access to the Internet (e.g., telecentres, kiosks, etc.). In addition, the Brazilian government digitized some municipal public services that, in their original form, were bureaucratic and time consuming. Said initiatives, among others, are considered the first step towards digitization. In this sense, technology was used to augment existing public services.

As time passed, new Information and Communication Technologies (ICTs) emerged and broadband networks became increasingly better and more accessible. At the same time, urban spaces have never been so populated, nor their infrastructures so deficient in providing the most diverse types of public services. It is in this context that the term Smart City (or Smart Cities) appears, describing the intensive use of ICTs as one of the possible solutions to urban problems and, consequently, as a means to improve the quality of life of citizens.[4]

The use of ICTs per se as an urban management tool is not new.[5] The idea of orderly planning has always been supported by the use of data collection and processing technologies, in particular for the formulation of public policies. To this day, this is the dynamic, for example, of sociodemographic censuses, in which the collection of citizens’ personal data allows the generation of statistics that can guide the expansion and administration of national and local territories.[6]

What changes now is that recent computational advances (big data, the internet of things, artificial intelligence, etc.) make a more intensive use of ICTs possible. There is, above all, a transformation of the urban space itself, which is now architected with artifacts for the massive collection and processing of data. The ostensible, singular figure of the census taker is now joined by sensors dispersed and distributed[7] throughout the territory, and a good portion of the public manager’s actions are now automated.

The intensive use of ICTs in urban spaces drastically alters the dynamics of capturing, collecting and processing citizens’ personal data, making such data one of the main engines of the city’s functioning. This dynamic tends to be barely visible to individuals, to reinforce the asymmetry that already exists in their relationship with the State and, ultimately, to challenge their capacity for self-determination in this ecosystem.[8]

Along with IoT devices and urban big data, facial recognition stands out as an emerging technology intrinsic to cities, one whose workings, uses and risks are examined in the sections that follow.

II. ARTIFICIAL INTELLIGENCE

a. Artificial Intelligence and Machine Learning

In today’s world, AI is growing at an exponential pace and is impacting homes, businesses and governmental practices. Whether AI is equipping cars with the ability to self-drive,[9] providing innovative medical care for the elderly,[10] or shaping society with its decision-making power,[11] the fast-paced technology is transforming people’s lives. While many definitions have been attributed to artificial intelligence, all of them converge on one simple concept: AI is a computer science field dedicated to enabling machines to solve problems that would require human intelligence.[12] In other words, AI solves cognitive problems that are usually associated with human intelligence.[13] Among the many challenges that AI has the ability to address, the most common ones are: “cognitive learning, problem solving, and pattern recognition.”[14]

At the core of AI’s capabilities is the algorithm embedded in its software. An algorithm is a set of step-by-step instructions that, when followed, will lead to the desired outcome.[15] In that sense, algorithms are very much like recipes: a recipe states the exact steps required to cook a particular dish. In the computer science field, “an algorithm is a series of instructions written by a programmer for a software to follow.”[16] However, AI will only be present in a particular system when said system is “able to directly perceive its environment, evaluate and adapt to the data received, and respond by editing its own processes to, ideally, achieve better, more reliable outputs or predictions.”[17]
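
To make the recipe analogy concrete, the sketch below (illustrative only; the function and the data are invented for this paper) shows an algorithm in the plain sense used here: a fixed sequence of instructions that produces the same outcome from the same input, with no perception of its environment and no self-editing, and therefore no AI.

```python
def average_trip_minutes(trip_durations):
    """A plain algorithm: fixed, step-by-step instructions, like a recipe.

    Step 1: add up all recorded trip durations.
    Step 2: count the trips.
    Step 3: divide the total by the count.
    Nothing here perceives an environment or edits its own process,
    so this is an algorithm, but not artificial intelligence.
    """
    total = sum(trip_durations)   # step 1
    count = len(trip_durations)   # step 2
    return total / count          # step 3

print(average_trip_minutes([32, 41, 28]))  # prints 33.666...
```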

These days, the most common type of AI in use is called “narrow AI.” Unlike “general AI,” a prospective system that could possibly reach or surpass human cognitive abilities, “narrow AI” is limited in its problem-solving abilities and is applied to only one specific task.[18] Facial recognition, the focus of this paper’s analysis, is one example of a task that “narrow AI” can be designed to perform,[19] including in urban scenarios.

One particular branch of AI is machine learning. In the words of Arthur Samuel, machine learning is the “field of study that gives computers the ability to learn without being explicitly programmed.”[20] Through data analysis, machine learning allows systems to identify patterns by observing examples, as opposed to following predetermined rules.[21] Although the system improves its output through its own persistent practice, human input is still necessary to shape the algorithm being created, select the data the system feeds on, and define other features and settings.[22]

Accordingly, Tom M. Mitchell explained that three features must be present in a “well-defined learning problem: the class of tasks, the measure of performance to be improved, and the source of experience.”[23] Additionally, these three crucial features all depend upon the evaluation of extremely large sets of data. Since the system improves its learning abilities through data analysis, it requires extensive training with “vast amounts of data, including identifiers, attributors and behaviors.”[24]

The system’s reliance on the data that it was provided is proving to be the source of great concern when it comes to ensuring output quality.[25] A machine learning system’s effectiveness is measured by the quality of the data that it was trained with.[26] For instance, in order to identify patterns, the system will analyze vast amounts of data to search for specific features that were either pre-identified by the programmer or were a result of the machine’s own extrapolation.[27] Both scenarios will have an effect on the final outcome reached by the machine learning system.

One of the most common ways to train a machine learning system is through supervised learning. This training category provides the model with an input X and an output Y.[28] According to a set of pre-established parameters, the machine will get from X to Y by “learn[ing] the mapping from the input to the output.”[29] Because the mapping process is based on correct values established by a supervisor, the system learns by analyzing training examples that were previously labeled.[30] The Future of Privacy Forum explained the process of supervised learning in the following manner:

One way to train machine learning systems is by providing training examples in the form of labeled data. Any time the data is labeled (“this is an orange” or “this is an apple”) before being processed through the system, it is considered “supervised learning.” The system is then instructed on how the labeled data is to be categorized. In this manner, an algorithm is created which learns how to identify specific features that can be applied to previously unseen data. [31]

One method of supervised learning is classification, i.e., “the assignment of data to the category that they most likely belong” by comparing the new data with the labeled training examples.[32] In order to train the model, the supervisor must accurately label the training data by choosing a set of parameters that will aid the machine in determining which category the new data falls within. That is the case with facial recognition models.
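
A minimal sketch of classification, assuming toy two-dimensional feature vectors and invented labels in place of real images: the model assigns new data to the category of the most similar labeled training example (here, a one-nearest-neighbor rule).

```python
import math

# Labeled training examples: (feature vector, category given by the supervisor).
training_examples = [
    ((1.0, 1.2), "apple"),
    ((0.9, 1.0), "apple"),
    ((3.0, 2.8), "orange"),
    ((3.2, 3.1), "orange"),
]

def classify(new_point):
    """Assign new data to the category it most likely belongs to by
    comparing it against every labeled training example and returning
    the label of the closest one."""
    closest = min(training_examples,
                  key=lambda example: math.dist(example[0], new_point))
    return closest[1]

print(classify((1.1, 0.9)))  # prints "apple"
```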

The facial recognition classification process involves providing the system with vast sets of images, each labeled with specific features that, through comparison, allow for the characterization of the input image. For example, the system is fed numerous images of men and women, each labeled according to the person’s gender. The machine supervisor then indicates to the system which features are crucial to identifying whether the image represents a man or a woman (e.g., long hair and lipstick identify women; short hair and a beard identify men). The model then begins analyzing the data, and the supervisor adapts the features according to the results achieved. In the example above, if the system’s training data consisted only of long-haired women and short-haired men, when presented with images of short-haired women, the model would likely classify them as men.
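
The failure mode just described can be reproduced in a few lines. In this sketch, the single “hair length” feature and all of the numbers are invented for illustration: because the training set contains only long-haired women and short-haired men, the rule the model learns misclassifies a short-haired woman.

```python
# Hypothetical, deliberately skewed training data: (hair length in cm, label).
training = [(40, "woman"), (35, "woman"), (5, "man"), (8, "man")]

def class_average(label):
    lengths = [length for length, lab in training if lab == label]
    return sum(lengths) / len(lengths)

# A crude learned rule: the midpoint between the two classes' average
# hair lengths. Here: (37.5 + 6.5) / 2 = 22.0 cm.
threshold = (class_average("woman") + class_average("man")) / 2

def predict(hair_length_cm):
    return "woman" if hair_length_cm > threshold else "man"

# The bias present in the training data becomes inherent in the model:
# a woman with 10 cm hair is classified as a man.
print(predict(10))  # prints "man"
```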

This scenario involves one of the greatest concerns with supervised learning models: the effective collection of a vast and diverse dataset of images and the selection of accurate classification parameters.[33] The steps necessary to train the system involve ensuring that the dataset that the machine is going to analyze is broad enough to encompass as many outcomes as possible, and that the parameters established by the supervisor are accurate in terms of identifying the classification features. Accordingly, “supervised learning requires massive amounts of training data. Bias that is present in the training data becomes inherent in the model it produces.”[34]

In supervised learning systems, humans play a big part in the creation and development of the AI software. They not only determine what data should be collected to train the system, but also supply the system with training data and determine how to train and assess the system’s outcome.[35] The power humans hold over the software’s decisional outcome is the main cause of bias in AI.

Consequently, with the aim of preventing bias from being embedded in algorithms, various organizations, civil liberties groups, tech companies and, very recently, States have been advocating for fairer and more secure AI.[36] Many technology companies have published guidelines supporting the idea that AI should be rid of prejudice, and some have even suspended or banned certain AI technologies. In addition, countries are working to develop regulations for AI systems aimed at minimizing or preventing uses that violate guarantees and fundamental rights, as well as at establishing minimum governance parameters.

b. Facial Recognition Technology

In the vast world of AI technology and machine learning systems lies facial recognition software. As explained in the previous section, facial recognition technology is one model of supervised learning that functions through a classification process. The biometric system was developed in 1973 by Takeo Kanade[37] and operates by analyzing human faces for one or both of two purposes: face verification and face identification.[38] Since it was first introduced, facial recognition technology has advanced considerably, even “surpass[ing] human recognition performance” when the faces are presented in favorable conditions and are matched against vast databases.[39]

The two key aspects of a facial recognition system are the enrollment and the authentication of a face image.[40] The enrollment process consists of including a person’s face image in the system’s “gallery” or “watch list,” where its biometric features will be analyzed and algorithmically extracted.[41] The following step, the authentication process, can be divided in two: one-to-one matching (face verification) and one-to-many matching (face identification).[42] When the facial recognition software is used to “compare a query face image against an enrollment face image whose identity is being claimed,” a one-to-one mode is being employed.[43] The one-to-many mode involves “compar[ing] a query face against multiple faces in the enrollment database to associate the identity of the query face to one of those in the database.”[44]

While both modes of matching are affected by the same set of factors, such as “illumination, facial pose, expression, age span, hair, facial wear, and motion,” the one-to-many identification system differs from the one-to-one mode in the level of accuracy it must achieve. Unlike a simple face image comparison to provide access control (e.g., unlocking smartphones), facial identification requires comparing the biometric features of the new image against each of the images gathered during the enrollment process. Through that comparison, the system produces a “matching score” that represents the degree of similarity between the new image and the ones stored in the database.[45] The scoring rules are usually determined by the system’s supervisor through the configuration of a matching threshold. When the similarities exceed the threshold, the system matches the images, reporting the results to the supervisor.[46] When the threshold is not reached, the system reports that the query face image is unknown.[47]
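
The scoring step can be pictured as follows. In this simplified sketch, faces are assumed to have already been reduced to numeric feature vectors, and the vectors, names and 0.95 threshold are all invented for illustration: the system computes a matching score against every enrolled template and reports a match only when the best score exceeds the configured threshold.

```python
import math

def matching_score(a, b):
    """Matching score: degree of similarity between two feature vectors
    (cosine similarity, approaching 1.0 for near-identical vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Enrollment database ("gallery"): identity -> extracted feature vector.
gallery = {
    "person_a": (0.9, 0.1, 0.4),
    "person_b": (0.2, 0.8, 0.5),
}

MATCH_THRESHOLD = 0.95  # configured by the system's supervisor

def identify(query_vector):
    """One-to-many matching: compare the query face against every face
    in the enrollment database and report the best match only when its
    score exceeds the threshold; otherwise report "unknown"."""
    scores = {name: matching_score(vector, query_vector)
              for name, vector in gallery.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= MATCH_THRESHOLD:
        return best, scores[best]
    return "unknown", scores[best]

print(identify((0.88, 0.15, 0.42)))  # scores high against person_a -> match
```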

A facial recognition system is pattern identification software consisting of four modules: (1) “face detection and face landmarking”—separating faces from the background image and finding the facial features (e.g., eyes, chin, mouth); (2) “face normalization”—transforming the face into a standard geometrical frame; (3) “face feature extraction”—extracting distinguishable information from the face image; and (4) “face matching”—as explained above, matching the features of the input face image against those in the system’s database.
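
Read as a pipeline, the four modules chain together as in the sketch below. Every function body is a stand-in rather than a real implementation; only the order of operations mirrors the description above.

```python
def detect_and_landmark(image):
    """Module 1: separate the face from the background and locate
    facial features such as the eyes, chin and mouth (stub)."""
    return {"face": image, "landmarks": ("eyes", "chin", "mouth")}

def normalize(detected_face):
    """Module 2: transform the face into a standard geometrical frame
    (a real system would align, rotate and rescale here)."""
    return detected_face

def extract_features(normalized_face):
    """Module 3: extract distinguishable information from the face image."""
    return (0.0, 0.0, 0.0)  # stand-in feature vector

def match(features, enrollment_db, threshold):
    """Module 4: match the extracted features against those in the
    enrollment database (see the scoring sketch above)."""
    return "unknown"

def recognize(image, enrollment_db, threshold=0.95):
    detected = detect_and_landmark(image)       # (1) detection & landmarking
    normalized = normalize(detected)            # (2) normalization
    features = extract_features(normalized)     # (3) feature extraction
    return match(features, enrollment_db, threshold)  # (4) matching
```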

III. CITIES AND FACIAL RECOGNITION: SO LONG, PRIVACY

Facial recognition technology, at present, can be applied to any situation where facial images are available, including urban areas. In order for the system to verify or identify facial images, it only requires the input of a query image and a large database to compare it against. Thus, whether the software is being used as a direct access control tool or as a broad identification system, its implementation is becoming increasingly common.

For example, facial recognition systems are already being used by some airports both to streamline check-in and boarding procedures and to expedite procedures related to customs and border protection. Police forces are also implementing facial recognition technology, musicians are relying on biometric software to prevent stalkers from attending their concerts, and Japanese universities are using the technology to confirm student attendance.

In addition, there are already cases of facial recognition systems capable of identifying the specific emotions portrayed in the images the software analyzes, recognizing whether a person is happy, sad, angry or even crying.

In cities, this technology is getting closer and closer to us. Facial recognition systems can be found in bookstores, supermarkets, condominiums, hotels, airports, theme parks and companies in general.

In June 2019, São Paulo’s Metrô (the São Paulo City subway company) announced a bid to implement a new electronic image monitoring system for its stations, trains and operating areas on lines 1-Blue, 2-Green and 3-Red. The public notice mentioned the expansion of its camera network from the previous 2,200 to 5,200 cameras, the exchange of analog equipment for digital equipment, including facial recognition cameras, and the centralization of camera command, until then scattered across the stations, in a single center.

The argument that was (and has been) used to spread this technology—and its potentially dystopian scenario—is mainly based on security. Facial recognition mechanisms can identify criminals or terrorists and detect area intrusion and the vandalism of equipment. The Metrô, at the time, even justified the use of this equipment by its potential to locate missing people. Those were the justifications used for the test initiatives carried out during the 2019 Carnival in Brazil, both in Salvador and in Rio de Janeiro. In the capital of Bahia, the state’s Public Security Department arrested a person who, dressed as a cabaret flapper and with a colorful toy machine gun in hand, was seen by the cameras celebrating carnival along the Barra-Ondina circuit. He was a homicide fugitive armed with a firearm who, after having his image identified by the cameras and cross-checked against Bahia’s database of wanted criminals, was approached and arrested by the city’s military police. In Rio de Janeiro, the cameras were also tested in Copacabana and, during the Copa América final rounds, around the Maracanã Stadium, but there were problems: a woman was arrested after being wrongly identified as a criminal listed in the Rio de Janeiro police database.

The implementation of facial recognition cameras in public spaces has raised some questions. The first refers to the storage and use of the data collected by this means: where is the information stored, and for what purposes will it be used? The second refers to ownership: does the data belong to the companies that provide the technology services or to the city itself?

Recently, in São Paulo, there was a new case involving ViaQuatro, the concessionaire that manages the Metrô’s Line 4-Yellow, which announced that its cameras, installed on the glass doors of the station platforms, were identifying passengers’ reactions to the advertisements being displayed. The cameras could detect whether, for example, a user made a “happy” or “sad” expression after seeing an advertisement in the subway. Tabulated and systematized, this information could be shared with companies to improve the content of their ads. The Consumer Defense Institute (IDEC) filed a public civil action against the company, claiming that the sale of this data to potential advertisers is illegal. If users of Line 4-Yellow did not want such information collected, the only option given by ViaQuatro was to use another means of transportation.

The first decision in this public civil action was rendered in May 2021, and ViaQuatro was ordered to pay R$100,000 in collective moral damages for collecting facial recognition data from users without consent. The ruling also prohibited the use of the facial recognition system that had been in place since 2018. The judge highlighted the company’s lack of transparency; it did not even report the capture of users’ expressions. Her ruling noted that, in its defense, the concessionaire alleged that only emotions were detected, with no identification of the user; this, however, was at no time proven by the concessionaire.

Technological advances in facial recognition software and its increasing commercial availability are enabling the biometric system to be deployed in a variety of circumstances.[48] It is therefore critical that people can truly trust a fair and accurate system. When Facebook inaccurately identifies the person who should be tagged in a particular photo, or when Instagram stories fail to capture the user’s face when applying a face filter, the consequences of these systems’ inaccuracies are, at best, uncomfortable for the user. However, when people are stopped at the airport or deemed criminals (or suspects) by law enforcement authorities because facial recognition software has incorrectly identified their faces, the consequences can be extremely dangerous.

While the current and potential benefits of biometric software are proving to be extensive, there is a fine line between improving technology in people’s lives and gutting their privacy. By throwing big data into the equation, the line becomes almost invisible.

However, the big question is: why are facial recognition technology and big data such a threat to the privacy landscape if both systems are based upon information that was freely given or already made public, such as people’s faces and their internet activity? The answer is simple: when combined, facial recognition and big data have the potential to kill the societal concept of practical obscurity, ending the whole conception of privacy. If the unconstrained use of facial recognition technology continues to flourish, “people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”[49]

Another aspect is the theme of the city under permanent surveillance, with its residents controlled, effectively preventing anonymity and privacy. Historically, the anonymity and privacy of the metropolis were the cause of great transformation in ways of life, allowing, for the first time, a life without the socio-political controls of small communities. This was the great theme of sociologists of urban life at the turn of the 19th to the 20th century, when the phenomenon of the big city emerged as a—liberating—rupture from previous socio-territorial modes of organization. Being able to walk through the streets, sidewalks and public spaces of a city without being identified is an achievement made possible by large-scale urbanization; mixing with the mass of people without having one’s identity defined and traced is part of the right to anonymity and of the ownership everyone has over their own body and movements, which, in its genesis, constituted one of the foundations of democracy.

Although the notion of privacy has been limited by the emergence of big data,[50] people are still able to enjoy it in abundance.[51] Simple things such as what you say in public, where you go and even your own name are still very much considered private because there is an obscurity surrounding that information. According to Evan Selinger and Woodrow Hartzog, obscurity is the essence of individual privacy.[52] They defined it as “the idea that information is safe—at least to some degree—when it is hard to obtain or understand.”[53] Accordingly, they argued that the obscurity inherent in public interactions is being threatened by new technologies, especially facial recognition software:

The difficulty of identifying a person, notwithstanding the possession of relevant and intelligible information, is central to debates over obscurity-corrosive innovations such as facial recognition technology. While those around us can understand much of what we communicate in public, there can be a world of difference between others knowing what we say and knowing who we are when we say it. [54]

Furthermore, Judith Donath stated that the key to the end of obscurity is the database of information that, through facial recognition technology, will be easily accessed, allowing people to identify everyone around them.[55] If the future of society is to accept the shift from praising personal obscurity to embracing recognition abilities, Donath suggests that not only will government surveillance gain tremendous proportions, inducing paranoia, but people will also feel marginalized and oppressed by their own peers.[56]

Not by chance, it was with this argument that the San Francisco Board of Supervisors (a body equivalent to a city council) decided to ban the use of digital recognition cameras in public spaces. Before the ban, San Francisco had been one of the first cities in the United States where such cameras were implemented. For San Francisco lawmakers, security is possible without implementing a police state of full surveillance.

Woodrow Hartzog, a fierce opponent of facial recognition technology, argued that the system is an “irresistible tool for oppression” for five main reasons.[57] While many other biometric identification tools are also deployed by the government (e.g., fingerprints, DNA samples, and iris scans), facial recognition stands apart for the following reasons: (1) the difficulty of altering or hiding one’s face and the fact that it can be easily captured; (2) “the legacy of name and face databases, such as driver’s licenses, mugshots and social media profiles”; (3) the input data in facial recognition systems is collected remotely and through surveillance devices that are already in place (“namely CCTV and officer-worn body cams”), unlike the collection of fingerprints, which requires specific technology and an actual effort on the part of the individual; (4) the possibility of identifying individuals in real time; and, finally, (5) the fact that facial recognition technology is capable of merging people’s online lives with their offline ones—one simple scan and every single person can be associated with their social media outlets.[58]

It is clear that companies offering digital recognition systems see Brazil as a market in which to expand their business. Before adopting a path surrounded by so many doubts, one that touches on the fundamental rights of each citizen, city halls and state governments have the obligation to answer questions about the use of cameras in a transparent manner. In turn, city dwellers and their political representatives cannot simply forgo a broad and informed debate on the topic before deciding on its implementation.

IV. LEGAL FRAMEWORK

Considering that data protection and privacy are most likely the biggest issues related to the implementation of facial recognition technology by private entities and the public sector, which aspects of Brazilian law, if any, would prevent corporations, the government and society from employing facial recognition technology boundlessly and recklessly?

a. The Brazilian General Data Protection Law

In September 2020, the Brazilian General Data Protection Law (Law No. 13,709/2018, or “LGPD”) came into force, significantly transforming the data protection system in Brazil and aligning it with other data protection laws around the world, especially the European legislation, the General Data Protection Regulation (“GDPR”).

The LGPD regulates the manner in which entities and individuals use data of an identified or identifiable individual in Brazil and applies to any data processing carried out within or outside the Brazilian territory, regardless of where the controller and/or the processor are based or where the data are located, provided that: (i) the processing operation is carried out on the Brazilian territory; (ii) the purpose of the processing is to offer or provide goods and services within the Brazilian territory; and/or (iii) the personal data subject to the processing has been collected in the Brazilian territory (regardless of whether the data subjects are Brazilian or not).

Therefore, the implementation of facial recognition technologies is subject to the provisions of the LGPD, meaning that private and public entities should observe the principles and rules established in the law prior to processing the data related to the biometric system.  For instance, the data processing must be carried out for legitimate, specific and explicit purposes, and the processing must be limited to said purposes (Art. 6, items I and II, LGPD).  In addition, data subjects must be provided with clear, precise, and easily accessible information regarding the processing of data and its processing agents (Art. 6, item VI, LGPD).

The LGPD does not specify the meaning of ‘biometric data’; until the Brazilian Data Protection Authority (“ANPD”) issues guidelines or regulations on the matter, entities may follow the standard adopted by the GDPR. The GDPR defines ‘biometric data’ as personal data resulting from specific technical processing relating to the physical, physiological or behavioral characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images.

In light of the above, under Art. 5, item II, of the LGPD, the personal data processed by facial recognition systems should be considered not only personal data, but also sensitive data. In this sense, the LGPD affords an additional level of protection to such data. For instance, for any processing of personal data to be lawful, one of the legal bases set forth in the LGPD must be met (Art. 7, LGPD). When it comes to sensitive data, such as biometric data, however, the LGPD limits the instances in which processing can be justified (Art. 11, LGPD). More specifically, sensitive data can only be processed with the data subject’s consent or, without it, in scenarios in which, among others, processing is indispensable for: (i) compliance with a legal or regulatory obligation; (ii) conducting studies by research organizations; (iii) the exercise of rights, including in the context of a contractual relationship; (iv) the protection of the life or physical safety of the data subject or a third party; or (v) ensuring fraud prevention and the data subject’s safety in identification and authentication processes for registration in electronic systems.

In a nutshell, this means that if none of the other legal bases apply, sensitive data can only be processed upon the collection of the data subject’s consent, which must be provided in a specific and explicit manner, for specific purposes. If consent is the applicable lawful basis, controllers (a) will have to adopt a robust and specific consent clause in order to avoid the nullity of the consent given by data subjects; (b) must implement a specific mechanism to store the consent given by data subjects (in physical form or through a click-through on a web page); and (c) must implement a specific mechanism to identify data subjects who have revoked their consent.

Currently, the use of facial recognition technologies, especially in publicly accessible spaces, has been the subject of scrutiny by consumer protection agencies and the public prosecution office. This is because such governmental bodies understand that the use of this technology and its derived systems is high risk in the context of data protection, especially considering that there is no guidance from the ANPD regarding facial recognition or biometric data.

As previously stated, in May 2021, ViaQuatro, a subway concessionaire in the state of São Paulo, was ordered to pay BRL 100,000 for using a “facial recognition” tool, without the passengers’ consent, for advertisement purposes. This was a leading case on the subject and was very controversial in several respects. Even though the company argued it used a “facial detection” tool that only captured general characteristics such as age, gender and emotion (i.e., one not able to uniquely identify an individual), the judge found that it collected biometric data (and therefore sensitive data) and that it should have requested the data subjects’ consent. It is important to highlight, however, that the judge indicated that this kind of technology could be used for security purposes, without data subjects’ consent, to improve the provision of the services. The company can still appeal this decision.

However, the “notice and consent” requirement has been heavily criticized as ineffective in ensuring consumers’ privacy. Solon Barocas and Helen Nissenbaum argue that policy solutions such as notice and consent fail to provide consumers with a clear understanding of what is at stake when they agree to having their information collected. According to them, notice and consent are insufficient due to:

(1) the confusing disconnect between the privacy policies of online publishers and the tracking and targeting third parties with whom they contract, each of whom have their own privacy policies; (2) the fickle nature of privacy policies, which may change at any time, often with short notice, and (3) the ever-increasing number of players in the ad network and exchange space, resulting in flows of user data that are opaque to users.[59]

In addition, it should be pointed out that the LGPD does not apply to the processing of personal data performed exclusively for public safety, national defense, state security, or activities related to the investigation and repression of criminal offenses (Art. 4, item III, LGPD). A committee was formed to discuss the drafting of a bill regulating the use of personal data for such purposes, but nothing has yet been defined.

Therefore, currently, the Brazilian government is not required to follow any set of principles or rules when implementing facial recognition technology.

b. Regulations on Artificial Intelligence

The implementation of Artificial Intelligence (AI) as a new alternative for solving countless everyday problems is a reality in Brazil. A study by the Getúlio Vargas Foundation (FGV) shows that, in the Brazilian Judiciary alone, there are about 72 AI-based projects across the most diverse fields and court instances.

Its implementation brings with it the urgent need to discuss limitations and manage risks. Matters of principle (is an AI capable of weighing principles?), civil liability (who is responsible for the damage caused?), criminal consequences (e.g., discriminatory biases in AI) and administrative and procedural complications (can you imagine an AI issuing rulings?) are just some of the contexts that we will very soon face in discussions on this theme, central to the ongoing technological revolution.

The absence of regulatory parameters and foundations leads us down a path of legal uncertainty regarding decisions guided by AI. Hence the question: how is the regulation of this matter progressing in Brazil?

There are currently several bills in progress that aim to regulate the implementation of AI, but unfortunately, all of them are timid and superficial. They leave much to be desired: they encompass neither truly technical discussions nor data protection and regulatory considerations, focusing only on matters of principle and on the obligations and responsibilities of the parties involved.

Among the various Bills in Brazil that aim to regulate AI, there are two that are at a more advanced stage, one originated from the Senate and the other from the Chamber of Deputies.

Despite the good intention and the more-than-present need for AI regulation in Brazil, the bills discussed so far are not enough to deal with an issue as complex as AI. The proposals in progress remain very limited in scope.

They end up focusing on principles, obligations, rights and responsibilities of the agents involved, leaving aside important practical points such as, for example, the analysis of AI systems based on risk. A risk-based approach would allow the regulatory weight to be consistent with a system’s inherent risk: the greater the risk, the more rules and obligations attached to it.

As an example, the Artificial Intelligence Act, a European Union regulation proposal, uses a risk-based approach and sets forth a series of increasing legal and technical obligations depending on the risk class in which the AI product or service falls, while a number of AI uses are prohibited outright. Under the proposal, the classification of risks would be as follows: (i) unacceptable risk, covering AI uses considered a clear threat to people’s safety or fundamental rights, which are banned altogether; (ii) high risk, covering systems used in sensitive contexts (such as critical infrastructure, education, employment, law enforcement and biometric identification), which are subject to strict obligations before they can be placed on the market; (iii) limited risk, covering systems subject mainly to transparency obligations (such as chatbots, which must disclose that the user is interacting with a machine); and (iv) minimal risk, covering all remaining applications.

The European Commission emphasizes that most AI systems should fall into this last category (minimal risk).

The bills under discussion in Brazil do not adopt this risk-based perspective on AI systems. When we talk about AI, we need to go beyond ethics and principles and deepen the debate about rights and obligations.

The content of such regulation will impact the social and economic reality of Brazil for years to come. This debate therefore needs to be the object of a broad, participatory and multi-sectoral construction, in the same way as the drafting of the Civil Rights Framework for the Internet (Marco Civil da Internet, or simply MCI) and the LGPD.

In addition to the need for maturity, the regulatory effort carries the aggravating factor of urgency. The technology is a reality in the country and is being implemented at an exponential pace by entities from the most diverse spheres. Brazil needs to present and discuss bills that, on the one hand, do not hinder the development of new markets based on AI and other disruptive technologies, but, on the other, do not create a highly permissive environment that negatively impacts the lives of Brazilians through indiscriminate use.

CONCLUSION

Cities have a natural tendency to grow since the urban functions they perform require an increasing number of people. Due to this expansion process, certain areas of the city start to demand greater attention from public authorities and the private sector regarding investments in infrastructure, technologies, commerce, leisure, among others.

In this context, the use of data and advanced technologies in cities, such as facial recognition systems, has become essential to provide a better quality of life to citizens, in addition to facilitating the activities of the private and public entities established therein.

However, facial recognition and big data are a frightening combination. The potential of the technology, combined with the amount of information stored online, could eventually destroy privacy as it is currently known. In a not-so-distant future, biometric technology could enable constant government surveillance, allow companies to track their consumers’ movements, and provide society with a tool to identify every person seen on the streets. The world of practical obscurity and anonymity would essentially be over.

In light of the impending widespread implementation of facial recognition and its threatening combination with big data, the Brazilian legal landscape is the main prevention tool against the privacy harms that could arise from unregulated use of the software. The novelty of the technology means that the pathway to regulating facial recognition is not yet clear. However, the current legal framework could eventually provide a basis for preventing the extinction of individual privacy.

Brazil is a country of continental proportions, extremely diverse in several aspects: social, cultural, economic and geographic, among others. For this reason, with regard to privacy and data protection, the principles established by the LGPD, and not only its rules, must be enforced by an active authority, demanding that both companies and the public sector implement facial recognition technology with clear regard for data subjects’ privacy and fundamental rights. As for Artificial Intelligence in general, the subject needs to be further discussed in Brazil to assess what kind of regulation is best suited to its implementation.

It is not only possible, but also necessary to demand an environment in cities where technological development, privacy and data protection go hand in hand, in order to guarantee the free development of personality, dignity and the exercise of citizenship by people.


Fernanda Catao received a Master of Laws in Law & Tech from Duke University School of Law, and a Bachelor of Laws from Universidade Católica de Pernambuco. She is a lawyer with practice in Privacy, Data Protection and Technology and a Certified Information Privacy Professional/Europe (CIPP/E) by the International Association of Privacy Professionals (IAPP). She is the co-author of the paper Cities and Facial Recognition: a threat to privacy.

 

Igor Baden Powell is pursuing a MSc in Political and Economic Law (anticipated completion December 2022) and received a Bachelor of Laws (Concentration in Law and Development: Infrastructure, Sustainability and Public Policy), both from Mackenzie Presbyterian University. He is a lawyer with practice in Privacy and Data Protection and is a Certified Information Privacy Professional/Europe (CIPP/E) by the International Association of Privacy Professionals (IAPP). He is co-author of the paper Cities and Facial Recognition: a threat to privacy.

 


[1] Marta Dora Grostein, Metrópole e expansão urbana: a persistência de processos “insustentáveis”, 15(1) São Paulo Em Persp. 13 (2001).

[2] Doug Schuler, Digital Cities and Digital Citizens, in 2362 Digit. Cities II: Computational And Socio. Approaches 71 (Makoto Tanabe, Peter van den Besselaar, Toru Ishida eds., Springer-Verlag, Berlin 2001) (2002).

[3] Toru Ishida, Activities and Technologies in Digital City Kyoto, in 3081 Digital Cities III: Info. Technologies For Soc. Cap.: Cross-Cultural Persp. 166 (Peter van den Besselaar, Satoshi Koizumi eds., Springer, Berlin, Heidelberg 2003) (2005), http://www.digitalcity.gr.jp/DigitalCityKyoto20040601.pdf.

[4] Javiera Macaya, Smart Cities: tecnologias de informação e comunicação e o desenvolvimento de cidades mais sustentáveis e resilientes, 2(9) Panorama Setorial Da Internet 1, 4 (2017).

[5] Taewoo Nam and Theresa A. Pardo, Conceptualizing Smart City with Dimensions of Technology, People, and Institutions, DG.O’ 2011 Proceedings Of The 12th Annual International Digital Government Research Conference: Digital Government Innovation In Challenging Times, 282 (2011).

[6] Arthur Miller, The Assault On Privacy: Computers, Data Banks, And Dossiers 223 (Ann Arbor: Univ. of Michigan Press 1971).

[7] Fernanda Bruno, Máquinas De Ver, Modos De Ser: Vigilância, Tecnologia E Subjetividade 24-26 (2013).

[8] See generally Bruno Ricardo Bioni, Ecologia: uma narrativa inteligente para a proteção de dados pessoais nas cidades inteligentes, in Cidades Inteligentes Em Perspectivas 58-72 (Obliq. 2019)

[9] Self-Driving Cars Take the Wheel, Mit Tech. Rev., https://www.technologyreview.com/s/612754/self-driving-cars-take-the-wheel/.

[10] Shourjya Sanyal, How Is AI Revolutionizing Elderly Care, Forbes (Oct. 31, 2018, 10:40 PM), https://www.forbes.com/sites/shourjyasanyal/2018/10/31/how-is-ai-revolutionizing-elderly-care/#1dae5a3e07d6.

[11] Kali Bracey & Marguerite L. Moeller, Legal Considerations When Using Big Data And Artificial Intelligence To Make Credit Decisions, Lending Times (March 29, 2018), https://lending-times.com/2018/03/29/legal-considerations-when-using-big-data-and-artificial-intelligence-to-make-credit-decisions/ (“Companies use big data, algorithms, and artificial intelligence to make decisions about the extension of credit.”); Karen Higginbottom, The Pros and Cons of Algorithms in Recruitment, Forbes (Oct. 19, 2018, 5:45 AM), https://www.forbes.com/sites/karenhigginbottom/2018/10/19/the-pros-and-cons-of-algorithms-in-recruitment/#5b1e50d73409 (“[Amazon’s] experimental hiring tool used AI to give job candidates scores ranging from one to five stars.”); Danielle Kehl, Priscilla Guo & Samuel Kessler, Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing, Digital Access To Scholarship At Harvard (2017), https://dash.harvard.edu/handle/1/33746041, for the use of risk assessment algorithms in sentencing.

[12] See Bernard Marr, The Key Definitions of Artificial Intelligence (AI) That Explain Its Importance, Forbes (Feb. 14, 2018, 1:27 AM), https://www.forbes.com/sites/bernardmarr/2018/02/14/the-key-definitions-of-artificial-intelligence-ai-that-explain-its-importance/#40af89074f5d.

[13] What is Artificial Intelligence: Machine Learning and Deep Learning, Amazon Ai (Apr. 10, 2019), https://aws.amazon.com/machine-learning/what-is-ai/.

[14] Id.

[15] Algorithm, Merriam-Webster.com, https://www.merriam-webster.com/dictionary/algorithm (last visited Apr. 10, 2019).

[16] The Privacy Expert’s Guide to Artificial Intelligence and Machine Learning, Future Of Privacy Forum (Oct. 2018), https://fpf.org/wp-content/uploads/2018/10/FPF_Artificial-Intelligence_Digital.pdf.

[17] Id.

[18] Id. at 5-6

[19] Benefits & Risks of Artificial Intelligence, Future Of Life Inst., https://futureoflife.org/background/benefits-risks-of-artificial-intelligence (last visited Apr. 10, 2019).

[20] Arthur L. Samuel, Some Studies in Machine Learning Using the Game of Checkers I, in Computer Games I 335-365 (Springer ed., 1988).

[21] Machine Learning: What it Is and Why it Matters, Sas, https://www.sas.com/en_us/insights/analytics/machine-learning.html (last visited Apr. 11, 2019).

[22] Future Of Privacy Forum, supra note 16, at 7.

[23] Tom M. Mitchell, Machine Learning 2–3 (McGraw-Hill, Inc. eds., 1997).

[24] Future Of Privacy Forum, supra note 16, at 8.

[25] See generally The Best Practices For Training Machine Learning Models, Forbes (May 10, 2017, 2:48 PM), https://www.forbes.com/sites/quora/2017/05/10/the-best-practices-for-training-machine-learning-models/?sh=352f970b7de8; see also Thomas C. Redman, If Your Data Is Bad, Your Machine Learning Tools Are Useless, Harv. Business Rev. (Apr. 2, 2018), https://hbr.org/2018/04/if-your-data-is-bad-your-machine-learning-tools-are-useless; and Amazon Machine Learning Developer Guide, Amazon Web Services (last visited Apr. 11, 2019), https://docs.aws.amazon.com/machine-learning/latest/dg/machinelearning-dg.pdf.

[26] Id. at 9.

[27] Id.

[28] Ethem Alpaydin, Introduction To Machine Learning 9 (The MIT Press Cambridge, 3d ed., 2010).

[29] Id.

[30] Id. at 11; Future Of Privacy Forum, supra note 16, at 10.

[31] Future Of Privacy Forum, supra note 16, at 10.

[32] Id. at 12

[33] See generally AI Now Report 2018, Ai Now (Dec. 2018), https://ainowinstitute.org/AI_Now_2018_Report.pdf.

[34] Future Of Privacy Forum, supra note 16, at 14.

[35] Algorithmic Accountability Policy Toolkit, Ai Now (Oct. 2018), https://ainowinstitute.org/aap-toolkit.pdf.

[36] Responsible AI Practices, Google Ai, https://ai.google/education/responsible-ai-practices (last visited Apr. 11, 2019); Saleema Amershi Et Al., Guidelines for Human-AI Interaction, in Chi Conference On Human Factors In Computing Systems Proceedings (May 4–9, 2019), https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf; Josh Sullivan Et Al., My Fair Data: How the Government Can Limit Bias in Artificial Intelligence, The Atlantic, https://www.theatlantic.com/sponsored/booz-allen-hamilton-2018/how-government-can-limit-bias-in-ai/1972/ (last visited Apr. 12, 2019); Thomas H. Davenport and Vivek Katyal, Every Leader’s Guide to the Ethics of AI, Mit Sloan Management Rev. (Dec. 6, 2018), https://sloanreview.mit.edu/article/every-leaders-guide-to-the-ethics-of-ai/; Universal Guidelines for Artificial Intelligence, The Pub. Voice (Oct. 23, 2018), https://thepublicvoice.org/ai-universal-guidelines/; Asilomar AI Principles, Future Of Life Inst., https://futureoflife.org/ai-principles/ (last visited Apr. 12, 2019); Everyday Ethics for Artificial Intelligence, Ibm (Sep. 2018), https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf; Identifying Algorithmic Harms when Creating DPIAs: A Quick Guide, Future Of Privacy Forum, https://fpf.org/wp-content/uploads/2018/10/2018_1018-Algorithmic-Harms-DPIA.pdf (last visited Apr. 12, 2019); Algorithms Are Making Government Decisions. The Public Needs To Have A Say, Aclu (Apr. 10, 2018, 10:00 AM), https://www.aclu.org/issues/privacy-technology/surveillance-technologies/algorithms-are-making-government-decisions; and Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Ieee, https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf.

[37] Takeo Kanade, Picture Processing System by Computer Complex and Recognition of Human Faces (Nov. 1973) (published Ph.D. dissertation, Kyoto University).

[38] Karl Martin, et al., A Biometric Encryption System for the Self-Exclusion Scenario of Face Recognition, 3 IEEE Systems Journal 440–450 (2009), https://www.researchgate.net/publication/224078949_A_Biometric_Encryption_System_for_the_Self-Exclusion_Scenario_of_Face_Recognition (last visited Jun. 20, 2021).

[39] Stan Z. Li & Anil K. Jain, Introduction to Handbook Of Face Recognition 1, 2–3 (Stan Z. Li & Anil K. Jain eds., 2011).

[40] Karl Martin, et al., supra note 38, at 441.

[41] Kevin W. Bowyer, Face Recognition Technology: Security versus Privacy, 23 IEEE Technology and Society Magazine 9, 10 (2004); Cavoukian & Marinelli, supra note 38, at 3.

[42] Karl Martin, et al., supra note 38, at 441–442.

[43] Li & Jain, supra note 39, at 2–3 (showing examples of one-to-one matching are “person verification for self-serviced immigration clearance and the use of E-passport.”).

[44] Id. at 3 (showing examples of one-to-many matching are “watchlist check or face identification in surveillance video.”).

[45] Karl Martin, et al., supra note 38, at 442.

[46] Bowyer, supra note 41, at 11.

[47] Li & Jain, supra note 39, at 4.

[48] Smile, the Government Is Watching: Next Generation Identification, Right Side News (Sep. 17, 2012), https://www.rcreader.com/commentary/smile-government-watching-next-generation-identification/.

[49] Woodrow Hartzog, Facial Recognition Is the Perfect Tool for Oppression, Medium (Aug. 2, 2018), https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66.

[50] See generally Mary Madden, Privacy management on social media sites, Pew Research Center (Feb. 24, 2012), https://www.pewresearch.org/internet/2012/02/24/privacy-management-on-social-media-sites/ (for privacy and social media); also see generally Elizabeth E. Joh, The New Surveillance Discretion: Automated Suspicion, Big Data, and Policing, 10 Harv. L. & Pol’y Rev. 15 (2016) (for privacy and new law enforcement surveillance).

[51] Evan Selinger, Stop Saying Privacy Is Dead, Medium (Oct. 11, 2018), https://medium.com/s/story/stop-saying-privacy-is-dead-513dda573071.

[52] See generally Evan Selinger and Woodrow Hartzog, Obscurity and Privacy, Routledge Companion to Philosophy of Technology, in Spaces For The Future: A Companion To Philosophy Of Technology 1–16 (Joseph C. Pitt & Ashley Shew eds., 2018).

[53] Id. at 2.

[54] Id. at 3.

[55] Jon Christian, How Facial Recognition Could Tear Us Apart, Medium (Jul. 17, 2018), https://medium.com/s/futurehuman/how-facial-recognition-tech-could-tear-us-apart-c4486c1ee9c4 (“[…] when you walk down the street or you sit in a restaurant or you’re at a party, will give you the ability to identify the people around you.”).

[56] Id.

[57] Id.

[58] Id.

[59] Solon Barocas & Helen Nissenbaum, On Notice: The Trouble with Notice and Consent (Oct. 2009) (unpublished manuscript), https://nissenbaum.tech.cornell.edu/papers/ED_SII_On_Notice.pdf; see also Aleecia M. McDonald & Lorrie Faith Cranor, The Cost of Reading Privacy Policies, 4 I/S: J. L. & Pol’y for Info. Soc’y 543, 544, 564 (2008).