Study: Facial recognition systems produce more inaccurate results for non-whites

The Sheriff


According to a study released by the US government through its standards and technology agency, NIST, facial recognition systems can produce wildly inaccurate results, especially for people who are not white.

The study showed that facial recognition systems produced inaccurate results for Black Americans and Asians at rates up to 100 times higher than for whites.

The study comes as facial recognition technology is being used widely by security agencies, at airports, in border security systems between countries, in financial institutions, in schools, and in personal devices such as phones and computers.

Activists and researchers say the error rate is far too high and that such mistakes could lead to innocent people being jailed.

The study found that the systems misidentify people, failing to correctly match a face to the corresponding person in a database.

====

Facial recognition systems can produce wildly inaccurate results, especially for non-whites, according to a US government study released Thursday that is likely to raise fresh doubts about the deployment of the artificial intelligence technology.

The study of dozens of facial recognition algorithms showed "false positive" rates for Asians and African Americans as much as 100 times higher than for whites.

The researchers from the National Institute of Standards and Technology (NIST), a government research center, also found two algorithms assigned the wrong gender to black females almost 35 percent of the time.

The study comes amid widespread deployment of facial recognition for law enforcement, airports, border security, banking, retailing, schools and for personal technology such as unlocking smartphones.

Some activists and researchers have claimed that the potential for errors is too great, that mistakes could result in the jailing of innocent people, and that the technology could be used to create databases that may be hacked or inappropriately used.

The NIST study found both "false positives," in which an individual is mistakenly identified, and "false negatives," where the algorithm fails to accurately match a face to a specific person in a database.
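To make the two failure modes concrete, here is a minimal sketch in Python. Everything in it (the 0.6 threshold, the similarity scores, the labels) is invented for illustration and is not data from the NIST study; it only shows how a verification decision made by thresholding a similarity score can go wrong in both directions:

```python
# Illustrative sketch only: the threshold, similarity scores, and outcomes
# below are invented for demonstration; they are not from the NIST report.

def is_match(similarity: float, threshold: float = 0.6) -> bool:
    """Declare a match when the similarity score clears the threshold."""
    return similarity >= threshold

# Each trial: (similarity score, whether the two faces truly belong
# to the same person).
trials = [
    (0.72, True),   # same person, accepted      -> true positive
    (0.55, True),   # same person, rejected      -> false negative (locked out of your phone)
    (0.65, False),  # different people, accepted -> false positive (wrongly identified)
    (0.30, False),  # different people, rejected -> true negative
]

false_positives = sum(is_match(s) and not same for s, same in trials)
false_negatives = sum(not is_match(s) and same for s, same in trials)
print(f"false positives: {false_positives}, false negatives: {false_negatives}")
```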

"A false negative might be merely an inconvenience -- you can't get into your phone, but the issue can usually be remediated by a second attempt," said lead researcher Patrick Grother.

"But a false positive in a one-to-many search puts an incorrect match on a list of candidates that warrant further scrutiny."

The study found US-developed face recognition systems had higher error rates for Asians, African Americans and Native American groups, with the American Indian demographic showing the highest rates of false positives.

However, some algorithms developed in Asian countries produced similar accuracy rates for matching between Asian and Caucasian faces -- which the researchers said suggests these disparities can be corrected.

"These results are an encouraging sign that more diverse training data may produce more equitable outcomes," Grother said.

Nonetheless, Jay Stanley of the American Civil Liberties Union, which has criticized the deployment of face recognition, said the new study shows the technology is not ready for wide deployment.

"Even government scientists are now confirming that this surveillance technology is flawed and biased," Stanley said in a statement.

"One false match can lead to missed flights, lengthy interrogations, watchlist placements, tense police encounters, false arrests or worse. But the technology's flaws are only one concern. Face recognition technology -- accurate or not -- can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale."


Source: AFP

