If a person in the state of Western Australia contracts covid-19, they must remain in home quarantine for seven days, as must their close contacts.
The police verify their whereabouts by sending regular text messages and asking them to send a selfie within 15 minutes.
Police use facial recognition technology and GPS tracking to determine whether the person who took the selfie is at home. If they are not, police quickly come knocking on the door, with a potentially hefty fine to follow.
Local tech startup Genvis’ G2G app has been used by more than 150,000 people in the state since it launched in September 2020.
The same technology, although provided by different companies, has been tested in the states of New South Wales, Victoria, South Australia and Tasmania.
Australia stands out as the only democracy using facial recognition technology to aid its covid-19 containment procedures, while other countries have rejected the idea.
San Francisco was the first US city to introduce a moratorium against the use of facial recognition by police in May 2019. It was quickly followed by Oakland, also in California, and Somerville, in Massachusetts.
Amazon, Microsoft, IBM and Google have declared that they will not sell their facial recognition algorithms to law enforcement until there is a federal law regulating the technology.
In November 2021, Facebook announced that it would remove 1 billion “face prints” and stop using the technology to tag people in photos.
The Australian Human Rights Commission has called for a moratorium on the technology until Australia has a specific law to regulate its use.
Human rights activists say there is a possibility that the personal data obtained will be used for secondary purposes, and that it is a slippery slope to becoming a surveillance state.
Groups such as Amnesty International warn that the use of facial recognition leads to racial discrimination.
“The pandemic created all these new justifications for using facial recognition technology,” says Mark Andrejevic, professor of media studies at Monash University in Melbourne and author of a forthcoming book called “Facial Recognition.”
“Everything was put on the internet and organizations were trying to get things up and running very quickly. But the implications were not thought through. Do we want to live in a world where everything is recorded and there are no private spaces? It creates a whole new level of stress that is not conducive to a healthy society,” he says.
Consent is required to use the G2G app, and it was also obtained after Australia’s ‘black summer’ bushfires of 2020, when people who had lost their identification documents used facial recognition to access government disaster relief payments.
But there have been times when facial recognition technology has been used covertly.
In October 2021, the 7-Eleven convenience store group was found to have breached its customers’ privacy by collecting facial prints from 1.6 million Australian customers when they completed satisfaction surveys.
The facial prints were reportedly collected to build demographic profiles and to prevent staff from tampering with the surveys by inflating their ratings. The company was not fined.
The Australian Department of Home Affairs began building a national facial recognition database in 2016 and appears ready to roll it out. In January, it put out a tender for a company to “build and implement” the database.
“Facial recognition is on the cusp of relatively widespread deployment,” says Andrejevic.
“Australia is preparing to use facial recognition to enable access to government services. And among the government agencies that have to enforce the law, there is definitely a desire to have access to these tools.”
Most state governments have provided the central database with their residents’ driver’s license photos, and it also stores visa and passport photos.
A law to regulate facial recognition technology was proposed in 2019, but was shelved after a parliamentary committee review found it lacked adequate privacy protections.
Among its staunchest critics was Australia’s then Human Rights Commissioner, Edward Santow.

“Now we are in the worst of all situations: there is no specific law, so we are left with partial protections that are not fully effective and certainly not comprehensive,” says Santow.
“And yet the technology continues to roll out.”
Santow is working with his team at the University of Technology Sydney on ways to make privacy protections more robust.
A varied global response
Part of the project involves examining other countries’ attempts to regulate facial recognition technologies.
There are markedly different approaches around the world. The most common, as in Australia, is to rely on a handful of limited privacy protections that Santow says don’t adequately address the problem.
“No country in the world has done it well,” says Santow. “If [the privacy protections were adequate], this project would be really simple.”
Leila Nashashibi is an activist with the US-based advocacy group Fight for the Future, which is working for a federal ban on facial recognition and other biometric identifiers.
“Like nuclear power and biological weapons, facial recognition poses a threat to society and our basic freedoms that far outweighs any potential benefit,” she says.
“Facial recognition is unlike any other form of surveillance because it enables automated and ubiquitous monitoring of entire populations, and it can be almost impossible to avoid. As it spreads, people will be too afraid to participate in social movements and political demonstrations. Freedom of expression will be chilled.”
Looking for facial prints on social media
The most prominent provider of facial recognition technology, US company Clearview AI, seems undeterred by the lawsuits and hefty fines it is racking up in a variety of jurisdictions.
The technology first attracted media attention when a billionaire used it to identify the person his daughter was having dinner with, and now the Ukrainian government is using it to identify dead Russian soldiers.
Their families are notified via social media, and photos are sometimes sent as attachments.
It is also trying to get its technology used in US schools as a “visitor management system,” which the company believes could help prevent shootings by recognizing the faces of expelled students, for example.
Facial and object recognition technology has already been tested in multiple schools by different vendors, including object recognition that could identify a concealed weapon.
“Clearview AI is exploiting people’s terror and trauma by saying that surveillance and policing is the answer,” says Nashashibi.
Clearview AI’s Australian-born founder and CEO, Hoan Ton-That, disagrees.
He says that facial recognition technology has great potential for crime prevention, because it can ensure that only authorized people have access to a building such as a school.
“We have seen our technology used with great success by law enforcement to stop gun trafficking, and we are hopeful that our technology can be used to help prevent tragic gun crimes in the future,” he says.
In Australia, facial recognition technology is being used in several stadiums to prevent terror suspects or banned football hooligans from entering.
Andrejevic believes that the use of facial recognition as a security measure is a significant escalation of surveillance and requires careful consideration.
“Cameras are often criticized because they only provide evidence after the fact, whereas facial recognition creates actionable information in real time to prevent crime,” he says. “That’s a very different conception of security.”
Some police forces around the world already use live facial recognition.
The London Metropolitan Police, for example, use it to monitor specific areas for wanted criminals or people who might pose a risk to the public.

Clearview has created a searchable database of 20 billion facial images, largely by scraping photos from social media without consent.
Ton-That has said the company will not work with authoritarian governments such as those of China, North Korea and Iran. However, it has run into trouble in some democracies.
It has been banned in Canada and Australia, and on May 24 the UK Information Commissioner’s Office (ICO) fined it more than $9.1 million after a joint investigation with the Office of the Australian Information Commissioner.
It was ordered to remove the data of British residents from its systems.
In December 2021, the French privacy watchdog found that Clearview breached Europe’s General Data Protection Regulation (GDPR).
Santow says the goal in Australia is to develop a nuanced approach, one that encourages beneficial applications while imposing limits to prevent harm.
The worst case scenario would be to replicate the “social credit” system of China, a country whose government tracks people and organizations to determine their “reliability.”
“In determining whether a use is beneficial or harmful, we refer to the basic international framework of human rights that exists in almost every jurisdiction in the world,” says Santow.
For example, the law would require free and informed consent to use facial recognition.
However, if the technology’s inaccuracy for certain groups causes discrimination, consent becomes irrelevant. As Santow says: “You cannot consent to being discriminated against.”
Increasingly sophisticated and powerful
“In the next two years, we are going to see a big change in the use of passwords, which are totally insecure. Biometrics will become the default option,” says Garrett O’Hara of security firm Mimecast.
Facial recognition works by dividing the face into a series of geometric shapes and mapping the distances between its “reference points” such as the nose, eyes and mouth.
These distances are compared to other faces and converted into a unique code called a biometric marker.
“When you use a facial recognition app to open your phone, it’s not an image of your face that your phone stores,” O’Hara explains.
“It stores an algorithmic derivation of what your face is mathematically. It looks like a long code of letters and numbers.”
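To make that concrete, here is a minimal Python sketch of the idea O’Hara describes. It is an illustration only, not any vendor’s actual algorithm: the landmark coordinates are invented, and real systems use learned embeddings built from many more reference points.

```python
import numpy as np

# Toy landmark sets: (x, y) positions of five reference points
# (left eye, right eye, nose tip, left and right mouth corners).
# Real systems extract dozens of points with a detector; these
# coordinates are invented for illustration.
face_a = np.array([[30, 40], [70, 40], [50, 60], [38, 80], [62, 80]], float)
face_b = np.array([[31, 41], [69, 40], [50, 61], [39, 79], [61, 80]], float)  # same face, new photo
face_c = np.array([[28, 38], [74, 39], [52, 66], [35, 85], [66, 84]], float)  # a different face

def face_code(landmarks):
    """Derive a simple 'biometric marker': all pairwise distances
    between landmarks, normalised by the inter-eye distance so the
    code does not depend on image scale."""
    n = len(landmarks)
    dists = [np.linalg.norm(landmarks[i] - landmarks[j])
             for i in range(n) for j in range(i + 1, n)]
    inter_eye = np.linalg.norm(landmarks[0] - landmarks[1])
    return np.array(dists) / inter_eye

def match_score(code1, code2):
    """Distance between two face codes; smaller means more similar."""
    return float(np.linalg.norm(code1 - code2))

a, b, c = face_code(face_a), face_code(face_b), face_code(face_c)
print(match_score(a, b))  # small value: likely the same face
print(match_score(a, c))  # larger value: likely a different face
```

In production systems the “code” is a learned embedding rather than raw distances, but the matching step, comparing codes against a threshold, works on the same principle.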
Facial recognition has come a long way since it was first developed in the 1960s, although the error rate varies significantly between different systems in use today.
At first, it couldn’t distinguish between siblings, or recognize the changes in a person’s face as they aged.
It is now so sophisticated that it can identify someone wearing a mask or sunglasses, and it can do so from more than a kilometer away.
The best face identification algorithm has an error rate of just 0.08%, according to tests by the US National Institute of Standards and Technology.
However, this level of accuracy is only possible under ideal conditions, where facial features are clear and uncluttered, lighting is good, and the person is facing the camera.
The error rate for individuals caught “randomly” can reach 9.3%.
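Those two error rates diverge sharply at scale. A back-of-the-envelope calculation (the crowd size here is hypothetical; the rates are the NIST figures quoted above) makes the gap concrete:

```python
# Expected misidentifications when screening a crowd, using the
# NIST error rates quoted above. The crowd size is hypothetical.
ideal_rate = 0.0008   # 0.08%: clear, well-lit, frontal images
wild_rate = 0.093     # 9.3%: individuals captured "randomly"

crowd = 50_000        # e.g. a full stadium
print(round(crowd * ideal_rate))  # 40 expected errors under ideal conditions
print(round(crowd * wild_rate))   # 4650 expected errors in the wild
```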
“It is an incredibly useful technology. But if someone had asked us 20 years ago, when the global internet started, if we wanted to live in a world where our interactions and activities were collected and tracked, most of us would probably have said that sounded creepy,” says O’Hara.
“Now we are replicating the online space tracking to include the physical space as well. And we’re not asking the questions we should be asking.”
One of its most problematic aspects is its potential for racial discrimination and bias.
Most facial recognition applications were initially trained on data sets that were not representative of the full breadth of the community.
“Initially, the data sets that were used were taken from all white men or white people in general,” says O’Hara.
“And clearly, it creates problems when you have people of color or different ethnicities or backgrounds who don’t match the training models. In the end, it’s just math. That’s the problem.”

As a result, facial recognition systems are prone to errors when attempting to recognize people from ethnic minority groups, women, people with disabilities and the elderly.
Their use has resulted in false arrests and other life-altering consequences for those affected, Nashashibi notes.
Deepfakes take fraud to new heights
Whether it’s a fingerprint, an iris scan, a gait analysis, or a hair reading, no type of biometrics is foolproof.
As technology becomes more sophisticated, so do hackers’ attempts to manipulate it for their own gain.
Deepfakes emerged as an evolution of fraud techniques, particularly in relation to digital facial recognition based on photos.
“It used to take several hours to create a deepfake using animation tools; now it takes a couple of minutes,” says Francesco Cavalli, co-founder of Amsterdam-based Sensity AI.
“All you need is a photo to create a 3D deepfake. This means that fraudsters can scale their operations and attacks are skyrocketing. You don’t even need to be a developer or engineer. You can do it yourself. There are tons of apps that let you replicate anyone’s face.”
Sensity AI helps governments, financial institutions, and even dating websites detect fraudulent applications, whether the aim is to obtain covid-19 relief payments, park laundered money in a bank account, or blackmail someone on Tinder.
Liveness tests help here: infrared face detection checks for body temperature, and blink detection when someone submits a photo online means that the image of a “synthetic” person will be flagged.
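Blink detection of this kind is often built on the “eye aspect ratio” (EAR) of facial landmarks, which collapses toward zero when the eye closes. The sketch below illustrates that general technique, not Sensity’s actual pipeline; the landmark layout, threshold and sample values are assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio: eye height relative to eye width.

    `eye` is a (6, 2) array of landmarks around one eye, ordered
    corner, upper lid (x2), corner, lower lid (x2), as in the
    common 68-point landmark scheme. The ratio drops sharply
    when the eye closes.
    """
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame EAR values: a blink is
    an EAR below `closed_thresh` for at least `min_frames` frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            blinks += run >= min_frames
            run = 0
    return blinks + (run >= min_frames)

open_eye = np.array([[0, 2], [3, 4], [6, 4], [9, 2], [6, 0], [3, 0]], float)
print(round(eye_aspect_ratio(open_eye), 2))  # ~0.44 for an open eye

# A static photo held to the camera gives a flat EAR series (no
# blinks); a live face dips periodically as the eyes close.
photo = [0.31, 0.30, 0.31, 0.30, 0.31, 0.30, 0.31, 0.30]
live = [0.31, 0.30, 0.12, 0.10, 0.29, 0.31, 0.11, 0.09, 0.30]
print(count_blinks(photo))  # 0 -> fails the liveness check
print(count_blinks(live))   # 2 -> passes
```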
“At some point, scammers discover how to trick our different models, so we must continually devise new techniques,” he says.
Despite the challenges on the path to regulation, Santow is optimistic that Australia can become a world leader in regulating facial recognition.
“I cannot speak on behalf of the federal and state governments. But I know they understand that there are strong community concerns and there is a need to build trust in the technology.”
“Australia could provide a good model for a number of reasons,” he adds. “We have a strong institutional and corporate respect for human rights. It may not be perfect, but it is fundamental to who we are as a country. We are also an innovative country and a developer of technology.”
“I perceive that the biggest challenge is not to write an infallible law, but to make sure that the law itself is not ignored.”