Clearview AI
Clearview AI, Inc. is an American facial recognition company, providing software primarily to law enforcement and other government agencies.[2] The company's algorithm matches faces to a database of more than 20 billion images collected from the Internet, including social media applications.[1] Founded by Hoan Ton-That and Richard Schwartz, the company maintained a low profile until late 2019, when its use by law enforcement was first reported.[3]
| Company type | Private |
| --- | --- |
| Industry | Facial recognition, software |
| Founded | 2017[1] |
| Founders | Hoan Ton-That, Richard Schwartz |
| Headquarters | Manhattan, New York City, United States |
| Areas served | Globally, excluding EU, UK, NZ, Canada, Australia |
| Products | Clearview AI Software, Clearview AI Search Engine |
| Website | clearview.ai |
Use of the facial recognition tool has been controversial. Several U.S. senators have expressed concern about privacy rights, and the American Civil Liberties Union (ACLU) has sued the company for violating privacy laws on several occasions. U.S. police have used the software to apprehend suspected criminals.[4][5][6] Clearview's practices have led to fines and bans by EU nations for violating privacy laws, and to investigations in the U.S. and other countries.[7][8][9] In 2022, Clearview reached a settlement with the ACLU in which it agreed to restrict U.S. market sales of facial recognition services to government entities.
Clearview AI was the victim of a data breach in 2020 that exposed its customer list, revealing that 2,200 organizations in 27 countries had accounts and had run facial recognition searches.[10]
Clearview AI was founded in 2017 by Hoan Ton-That and Richard Schwartz, who transferred into it the assets of SmartCheckr, a company the pair had founded earlier in 2017 alongside Charles C. Johnson.[11][3] The company was established in Manhattan after the founders met at the Manhattan Institute.[1] It initially raised $8.4 million from investors including Kirenaga Partners and Peter Thiel.[12] A second round, in 2020, raised $8.625 million in exchange for equity; the company did not disclose the investors. In 2021, another fundraising round raised $30 million.[13] Potential investors in the Series A round were given early access to Clearview's app. Billionaire John Catsimatidis used it to identify someone his daughter dated and piloted it at one of his Gristedes grocery markets in New York City to identify shoplifters.[14][15]
In October 2020, a company spokesperson claimed that Clearview AI's valuation was more than $100 million.[16] The company announced its first chief strategy officer, chief revenue officer, and chief marketing officer in May 2021. Devesh Ashra, a former deputy assistant secretary with the United States Department of the Treasury, became its chief strategy officer. Chris Metaxas, a former executive at LexisNexis Risk Solutions, became its chief revenue officer. Susan Crandall, a former marketing executive at LexisNexis Risk Solutions and Motorola Solutions, became its chief marketing officer.[17] Devesh Ashra and Chris Metaxas left the company in 2021.[13] In August 2021, Clearview AI announced the formation of an advisory board including Raymond Kelly, Richard A. Clarke, Rudy Washington, Floyd Abrams, Lee S. Wolosky, and Owen West.[18] The company claimed to have scraped more than 10 billion images as of October 2021.[19] In May 2022, Clearview AI announced that it would be expanding sales of its facial recognition software to schools and lending platforms outside the U.S.[20]
Clearview AI hired a notable legal team to defend the company against several lawsuits that threatened their business model. Their legal staff includes Tor Ekeland, Lee S. Wolosky, Paul Clement, Floyd Abrams, and Jack Mulcaire.[21][1][22] Abrams stated the issue of privacy rights versus free speech in the First Amendment could reach the Supreme Court.[21]
Clearview AI provides facial recognition software in which users upload an image of a face and match it against the company's database.[23] The software then supplies links to where the "match" can be found online.[24] The company operated in near secrecy until the release of an investigative report in The New York Times titled "The Secretive Company That Might End Privacy as We Know It" in January 2020. It maintained this secrecy by publishing fake information about the company's location and employees and erasing the founders' social media presence.[3][1][25] Citing the article, over 40 tech and civil rights organizations sent a letter to the Privacy and Civil Liberties Oversight Board (PCLOB) and four congressional committees, outlining their concerns with facial recognition and Clearview, and asking the PCLOB to suspend use of facial recognition.[26][27][28][1]
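The general shape of such a search pipeline can be sketched as follows. This is an illustration of the common technique (an embedding model plus nearest-neighbour lookup over scraped images and their source URLs), not Clearview's actual code; the `embed_face` function is a hypothetical placeholder.

```python
# Illustrative sketch only -- not Clearview's implementation. It shows the
# pattern described above: turn a face image into an embedding vector,
# compare it against an index built from scraped images, and return the
# source URLs of the closest matches.
import numpy as np


def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a face-embedding model that maps a face
    image to a fixed-length vector."""
    raise NotImplementedError


def search_index(query_embedding: np.ndarray,
                 index_embeddings: np.ndarray,  # shape (N, d): one row per indexed image
                 index_urls: list[str],         # source URL for each indexed image
                 top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top_k most similar indexed faces by cosine similarity,
    together with the URLs where the matching images were found."""
    q = query_embedding / np.linalg.norm(query_embedding)
    db = index_embeddings / np.linalg.norm(index_embeddings, axis=1, keepdims=True)
    scores = db @ q                              # cosine similarity per indexed face
    best = np.argsort(scores)[::-1][:top_k]
    return [(index_urls[i], float(scores[i])) for i in best]
```

At the scale reported for Clearview's database (billions of images), the brute-force comparison above would normally be replaced by an approximate nearest-neighbour index, but the input/output contract is the same: a face image in, ranked source links out.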
Clearview accelerated a global debate on the regulation of facial recognition technology by governments and law enforcement.[29][30] Law enforcement officers have stated that Clearview's facial recognition is far superior to previously used technology in identifying perpetrators from any angle.[31] After discovering Clearview AI was scraping images from their site, Twitter sent a cease-and-desist letter to Clearview, insisting that it remove all images, as scraping is against Twitter's policies.[32] On February 5 and 6, 2020, Google, YouTube, Facebook, and Venmo sent cease-and-desist letters because scraping violates their policies.[33][34] Ton-That responded in an interview that there is a First Amendment right to access public data. He later stated that Clearview has scraped over 50 billion images from across the web.[29][35][36]
The New Zealand Police used it in a trial after being approached by Clearview's Marko Jukic in January 2020. Jukic said it would have helped identify the Christchurch mosque shooter had the technology been available. The use of Clearview's software in this case raised strong objections once exposed, as neither the users' supervisors nor the Privacy Commissioner had been aware of it or approved it. After it was revealed by RNZ, Justice Minister Andrew Little stated, "It clearly wasn't endorsed, from the senior police hierarchy, and it clearly didn't get the endorsement from the [Police] Minister... that is a matter of concern."[37][38]
Clearview's technology was used to identify an individual at a May 30, 2020 George Floyd police violence protest in Miami, Florida. Miami's WTVJ confirmed the use; the arrest report said only that she was "identified through investigative means", and even the defendant's attorney did not know Clearview had been involved. Ton-That confirmed its use, noting that it was not being used for surveillance, but only to investigate a crime.[39]
In December 2020, the ACLU of Washington sent a letter to Seattle mayor Jenny Durkan, asking her to ban the Seattle Police Department from using Clearview AI.[40] The letter cited public records retrieved by a local blogger, which showed one officer signing up for and repeatedly logging into the service, as well as corresponding with a company representative. While the ACLU letter raised concerns that the officer's usage violated the Seattle Surveillance Ordinance, an auditor at the City of Seattle Office of the Inspector General argued that the ordinance was designed to address the usage of surveillance technologies by the Department itself, not by an officer without the Department's knowledge.[41]
After the January 6 riot at the United States Capitol, the Oxford Police Department in Alabama used Clearview's software to run images posted by the Federal Bureau of Investigation in its public request for information on suspects, generating leads on people present during the riot. Photo matches and information were sent to the FBI, which declined to comment on its techniques.[5]
In March 2022, Ukraine's Ministry of Defence began using Clearview AI's facial recognition technology "to uncover Russian assailants, combat misinformation and identify the dead". Ton-That also claimed that Ukraine's MoD has "more than 2 billion images from the Russian social media service VKontakte at its disposal".[42] Ukrainian government agencies used Clearview over 5,000 times as of April 2022.[43][44] The company provided these accounts and searches for free.[45]
In a Florida case, Clearview's technology was used by defense attorneys to successfully locate a witness, resulting in the dismissal of vehicular homicide charges against the defendant.[46]
Law enforcement use of the facial recognition software grew rapidly in the United States. In 2022 more than one million searches were conducted. In 2023, this usage doubled.[36]
Clearview AI encouraged user adoption by offering free trials to individual law enforcement officers rather than to departments as a whole. The company also drew on its significant connections to the Republican Party to reach police departments.[1][47] In onboarding emails, new users were encouraged to go beyond running one or two searches and to "[s]ee if you can reach 100 searches".[48] During 2020, Clearview sold its facial recognition software for one-tenth the cost of competitors.[3]
Clearview's marketing claimed their facial recognition led to a terrorist arrest. The identification was submitted to the New York Police Department tip line.[49] Clearview claims to have solved two other New York cases and 40 cold cases, later stating they submitted them to tip lines. NYPD stated they have no institutional relationship with Clearview, but their policies do not ban its use by individual officers. In 2020, thirty NYPD officers were confirmed to have Clearview accounts.[3] In April 2021, documents obtained by the Legal Aid Society under New York's Freedom Of Information Law demonstrated that Clearview had collaborated with the NYPD for years, contrary to past NYPD denials.[50] Clearview met with senior NYPD leadership and entered into a vendor contract with the NYPD.[48] Clearview came under renewed scrutiny for enabling officers to conduct large numbers of searches without formal oversight or approval.[50][48]
The company received a cease-and-desist letter from the office of New Jersey Attorney General Gurbir Grewal after posting a promotional video on its website that included images of Grewal.[51] Clearview had claimed that its app played a role in a New Jersey police sting. Grewal confirmed the software was used to identify a child predator, but he also banned the use of Clearview in New Jersey. Tor Ekeland, a lawyer for Clearview, confirmed the marketing video was taken down the same day.[4][52]
In March 2020, Clearview pitched its technology to states for use in contact tracing during the COVID-19 pandemic.[53][54] A reporter found that Clearview's search could identify him even while he covered his nose and mouth as a COVID mask would.[45] The idea drew criticism from US senators and other commentators because it appeared the crisis was being used to push unreliable tools that violate personal privacy.[55][56]
Contrary to Clearview's initial claims that its service was sold only to law enforcement, a data breach in early 2020 revealed that numerous commercial organizations were on Clearview's customer list. For example, Clearview marketed to private security firms and to casinos.[57] Additionally, Clearview planned expansion to many countries, including authoritarian regimes.[58]
Senator Edward J. Markey wrote to Clearview and Ton-That, stating "Widespread use of your technology could facilitate dangerous behavior and could effectively destroy individuals' ability to go about their daily lives anonymously." Markey asked Clearview to detail aspects of its business, in order to understand these privacy, bias, and security concerns.[32][59] Clearview responded through an attorney, declining to reveal information.[60] In response to this, Markey wrote a second letter, saying their response was unacceptable and contained dubious claims, and that he was concerned about Clearview "selling its technology to authoritarian regimes" and possible violations of COPPA.[8][61] Senator Markey wrote a third letter to the company with concerns, stating "this health crisis cannot justify using unreliable surveillance tools that could undermine our privacy rights." Markey asked a series of questions about what government entities Clearview has been talking with, in addition to unanswered privacy concerns.[55]
Senator Ron Wyden voiced concerns about Clearview and had meetings with Ton-That cancelled on three occasions.[62][8]
In April 2021, Time magazine listed Clearview AI as one of the 100 most influential companies of the year.[63]
In October 2021, Clearview submitted its algorithm to one of two facial recognition accuracy tests conducted every few months by the National Institute of Standards and Technology (NIST). Clearview ranked among the top 10 of 300 facial recognition algorithms in a test of accuracy in matching two different photos of the same person. Clearview did not submit to the NIST test for matching an unknown face against a 10-billion-image database, which more closely matches the algorithm's intended purpose. This was the first third-party test of the software.[19]
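The two NIST test settings differ in shape: 1:1 verification compares two photos, while 1:N identification searches a probe image against a large gallery. The sketch below illustrates that distinction with hypothetical face embeddings and cosine similarity; the threshold value is an arbitrary assumption, not a figure from NIST or Clearview.

```python
# Illustrative contrast between the two NIST test types, using hypothetical
# face embeddings; this is not Clearview's algorithm.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.8) -> bool:
    """1:1 verification (the test Clearview entered): do two photos show the
    same person? The threshold here is an arbitrary illustrative value."""
    return cosine(emb_a, emb_b) >= threshold


def identify(probe: np.ndarray, gallery: np.ndarray) -> int:
    """1:N identification (the test Clearview did not enter): which of N
    enrolled faces best matches the probe? In the NIST setting referenced
    above, N would be on the order of 10 billion."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    return int(np.argmax(g @ p))
```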
At various times throughout 2020, Clearview claimed 98.6%, 99.6%, or 100% accuracy. However, these results came from tests conducted by people affiliated with the company and did not use representative samples of the population.[29][64][65]
In 2021, Clearview announced that it was developing "deblur" and "mask removal" tools to sharpen blurred images and reconstruct the covered part of an individual's face. These tools would be implemented using machine learning models that fill in the missing details based on statistical patterns found in other images. Clearview acknowledged that deblurring an image or removing a mask could make errors more frequent, and said the results would only be used to generate leads for police investigations.[35]
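As a rough illustration of the idea (Clearview's tools have not been published), classical image inpainting fills a covered region from surrounding pixels; a learned model would instead invent plausible detail from statistical patterns in its training data, which is why the output can only serve as an investigative lead. The sketch below uses OpenCV's classical inpainting as a stand-in and a synthetic image, purely to show the mechanics.

```python
# Rough illustration of filling in a covered facial region. OpenCV's
# classical inpainting is used here as a stand-in for the machine-learning
# approach described above; it is not Clearview's method.
import cv2
import numpy as np

# Synthetic stand-in for a face photo whose lower half is covered.
image = np.full((128, 128, 3), 180, dtype=np.uint8)
covered = np.zeros((128, 128), dtype=np.uint8)
covered[64:, :] = 255                      # pixels to reconstruct

# Classical inpainting only propagates nearby pixel values; a learned
# "mask removal" model would hallucinate plausible detail instead. Either
# way, the reconstructed region is an estimate, not recovered ground truth.
reconstructed = cv2.inpaint(image, covered, 3, cv2.INPAINT_TELEA)
```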
Assistant Chief of Police of Miami, Armando Aguilar, said in 2023 that Clearview's AI tool had contributed to the resolution of several murder cases, and that his team had used the technology around 450 times a year. Aguilar emphasized that they do not make arrests based on Clearview's matches alone, and instead use the data as a lead and then proceed via conventional methods of case investigation.[24]
Several cases of mistaken identity using Clearview facial recognition have been documented, but "the lack of data and transparency around police use means the true figure is likely far higher." Ton-That claims the technology has approximately 100% accuracy, and attributes mistakes to potential poor policing practices. Ton-That's claimed accuracy level is based on mugshots and would be affected by the quality of the image uploaded.[24]
Clearview AI experienced a data breach in February 2020 that exposed its list of customers. Clearview's attorney, Tor Ekeland, stated the security flaw was corrected.[66] In response to the leak, the United States House Committee on Science, Space, and Technology sent a letter to the company requesting further insight into its biometric and security practices.[67]
While Clearview's app is supposed to be accessible only to customers, the Android application package and iOS applications were found in unsecured Amazon S3 buckets.[68] The accompanying instructions showed how to load an enterprise (developer) certificate so the app could be installed without being published on the App Store. Apple suspended Clearview's access because this violated its terms of service for developers, and as a result the iOS app was disabled.[69] In addition to application tracking (Google Analytics, Crashlytics), examination of the source code for the Android version found references to Google Play Services, requests for precise phone location data, voice search, sharing a free demo account with other users, augmented reality integration with Vuzix, and sending gallery photos or taking photos from within the app itself. There were also references to scanning barcodes on a driver's license and to RealWear.[70]
In April 2020, Mossab Hussein of SpiderSilk, a security firm, discovered that Clearview's source code repositories were exposed due to misconfigured user security settings. The exposure included secret keys and credentials, including cloud storage and Slack tokens. The compiled apps and pre-release apps were accessible, allowing Hussein to run the macOS and iOS apps against Clearview's services. Hussein reported the breach to Clearview but refused to sign the non-disclosure agreement required for Clearview's bug bounty program. Ton-That responded by calling Hussein's disclosure of the bug an act of extortion. Hussein also found 70,000 videos in one storage bucket, recorded at the entrance of a Rudin Management apartment building.[71]
Clearview also operates a secondary business, Insight Camera, which provides AI-enabled security cameras targeted at "retail, banking and residential buildings". Two customers have used the technology: the United Federation of Teachers and Rudin Management.[72][73] The website for Insight Camera was taken down following BuzzFeed's investigation into the connection between Clearview AI and Insight Camera.[74]
Following the leak of Clearview's customer list, BuzzFeed confirmed that 2,200 organizations in 27 countries had accounts with search activity. BuzzFeed has the exclusive right to publish this list and has chosen not to publish it in its entirety.[10] Clearview AI claims that at least 600 of these users are police departments, primarily in the U.S. and Canada, though Clearview has expanded to other countries as well.[3] Although the company claims its services are for law enforcement, it has had contracts with Bank of America, Kohl's, and Macy's. Several universities and high schools have run trials with Clearview.[10] The list below highlights particularly notable users.
Clearview AI has had its business model challenged by several lawsuits in multiple jurisdictions. It responded by defending itself, settling in some cases, and exiting several markets.
The company's claim of a First Amendment right to public information has been disputed by privacy lawyers such as Scott Skinner-Thompson and Margot Kaminski, who highlighted the problems and precedents surrounding persistent surveillance and anonymity.[34][89] Bill Bratton, former New York City Police Commissioner and executive chairman of Teneo Risk, challenged privacy concerns and recommended strict procedures for law enforcement usage in an op-ed in the New York Daily News.[90]
After the release of The New York Times article in January 2020, lawsuits were filed in Illinois, California, Virginia and New York, citing violations of privacy and safety laws.[91] Most of the lawsuits were transferred to New York's Southern District.[92] Two lawsuits were filed in state courts: one in Vermont by the attorney general, and one in Illinois on behalf of the American Civil Liberties Union (ACLU), citing a statute that forbids the corporate use of residents' faceprints without explicit consent. Clearview countered that an Illinois law does not apply to a company based in New York.[21]
In response to a class action lawsuit filed in Illinois for violating the Biometric Information Privacy Act (BIPA), Clearview stated in May 2020 that it had instituted a policy to stop working with non-government entities and to remove any photos geolocated in Illinois.[93][94][75] On May 28, 2020, the ACLU and Edelson filed a new suit against Clearview in Illinois under the BIPA.[95][96] Clearview agreed to a settlement in June 2024, offering 23% of the company (valued at $52 million at the time) rather than a cash payment, which would likely have bankrupted the company.[97]
In May 2022, Clearview agreed to settle the 2020 lawsuit from the ACLU. The settlement prohibited the sale of its facial recognition database to private individuals and businesses.[98]
In the Vermont case, Clearview AI invoked Section 230 immunity. The court denied the use of Section 230 immunity in this case because Vermont's claims were "based on the means by which Clearview acquired the photographs" rather than third party content.[99]
In July 2020, Clearview AI announced that it was exiting the Canadian market amidst joint investigations into the company and the use of its product by police forces.[100] Daniel Therrien, the Privacy Commissioner of Canada condemned Clearview AI's use of scraped biometric data: "What Clearview does is mass surveillance and it is illegal. It is completely unacceptable for millions of people who will never be implicated in any crime to find themselves continually in a police lineup."[101] In June 2021, Therrien found that the Royal Canadian Mounted Police had broken Canadian privacy law through hundreds of illegal searches using Clearview AI.[102]
In January 2021, Clearview AI's biometric photo database was deemed illegal in the European Union (EU) by the Hamburg Data Protection Authority (DPA), which ordered the deletion of an affected person's biometric data. The authority stated that the General Data Protection Regulation (GDPR) applies despite Clearview AI having no European branch.[103] In March 2020, the DPA had requested Clearview AI's customer list, as data protection obligations would also apply to the customers.[104] The data protection advocacy organization NOYB criticized the DPA's decision because it issued an order protecting only the individual complainant rather than banning the collection of any European resident's photos.[105]
In May 2021, the company had numerous legal complaints filed against it in Austria, France, Greece, Italy and the United Kingdom for violating European privacy laws in its methods of documenting and collecting Internet data.[106] In November 2021, Clearview received a provisional notice from the UK's Information Commissioner's Office (ICO) to stop processing UK citizens' data, citing a range of alleged breaches. The company was also notified of a potential fine of approximately $22.6 million. Clearview claimed that the ICO's allegations were factually inaccurate because the company "does not do business in the UK, and does not have any UK customers at this time". On 23 May 2022, the BBC reported that the company had been fined "more than £7.5m by the UK's privacy watchdog and told to delete the data of UK residents".[107] Clearview was also ordered to delete all facial recognition data of UK residents. This fine was the fourth of its type imposed on Clearview, after similar orders and fines from Australia, France, and Italy.[9] However, in October 2023 the fine was overturned on appeal, on the grounds that the ICO lacked jurisdiction over acts of foreign governments.[108]
In September 2024, Clearview AI was fined €30.5 million by the Dutch Data Protection Authority (DPA) for constructing what the agency described as an illegal database.[109] The DPA's ruling highlighted that Clearview AI unlawfully collected facial images, including those of Dutch citizens, without obtaining their consent. This practice constitutes a significant violation of the EU's GDPR due to the intrusive nature of facial recognition technology and the lack of transparency regarding the use of individuals' biometric data.[110]