How the Pandemic Turned Refugees Into ‘Guinea Pigs’ for Big Tech

The coronavirus pandemic unleashed a new era in surveillance technology, and arguably no group has felt this more acutely than refugees. Even before the pandemic, refugees were subjected to contact tracing, drone and LIDAR tracking, and facial recognition en masse. Since the pandemic, it’s only gotten worse.

For a microcosm of how bad the pandemic has been for refugees — both in terms of civil liberties and suffering under the virus — look no further than Greece.

OneZero: What kinds of surveillance practices and technologies did you see in the camps?

Petra Molnar: I went to Lesbos in September, right after the Moria camp burned down and thousands of people were displaced and sent to a new camp. We were essentially witnessing the birth of the Kara Tepes camp, a new containment center, and talked to the people about surveillance, and also how this particular tragedy was being used as a new excuse to bring more technology, more surveillance. The [Greek] government is… basically weaponizing Covid to use it as an excuse to lock the camps down and make it impossible to do any research.

When you are in Lesbos, it is very clear that it is a testing ground. The use of tech is still quite rudimentary; we are not talking about thermal cameras, iris scans, anything like that. But there is a growing appetite in the Greek government to explore it, particularly to control large groups of people, including the large groups arriving across the Aegean. It's very early days for a lot of these technologies, but everything points to the fact that Greece is Europe's testing ground.

They are talking about bringing biometric control to the camps, but we know for example that the Hellenic Coast Guard has a drone that they have been using for self-promotion, propaganda, and they’ve now been using it to follow specific people as they are leaving and entering the camp. I’m not sure if the use of drones was restricted to following refugees once they left the camps, but with the lockdown, it was impossible to verify. [OneZero had access to a local source who confirmed that drones are also being used inside the camps to monitor refugees during lockdown.]

Also, people can come and go to buy things at stores, but they have to sign in and out at the gate, and we don’t know how they are going to use such data and for what purposes.

Surveillance has been used on refugees long before the pandemic — in what ways have refugees been treated as guinea pigs for the policies and technologies we’re seeing deployed more widely now? And what are some of the worst examples of authoritarian technologies being deployed against refugees in Europe?

The most egregious examples we've been seeing are the ill-fated pilot projects: A.I. lie detectors and risk scoring that essentially tried to use facial recognition and the micro-targeting of facial expressions to determine whether a person was more likely than others to lie at the border. Luckily, that technology was debunked, and it also generated a lot of debate around the ethics and human rights implications of using something like that.

Technologies such as voice printing have been used in Germany to try to determine a person's country of origin or their ethnicity. Facial recognition made its way into the new Migration Pact, and Greece is thinking about automating the triage of refugees, so there's an appetite at the EU level, and globally, to use this tech. I think 2021 will be very interesting as more resources are diverted to these types of tech.

We saw, right when the pandemic started, that migration data used for population modeling was co-opted to try to model flows of Covid. And this is very problematic because it assumes that mobile populations, people on the move, and refugees are more likely to be bringing in Covid and other diseases, but the numbers don't bear that out. We are also seeing the gathering of vast amounts of data for all these databases that Europe is using, or will be using, for a variety of border enforcement purposes and for policing in general.

The concern is that fear around the pandemic is being weaponized, and technologies such as mobile tracking and data collection are being used as ways to control people. It is also broader: it feeds into a discourse around migration and limiting people's right to move. Our concern is that it will open the door to a further, broader rollout of this kind of tech against the general population.

What are some of the most invasive technologies you’ve seen? And are you worried these authoritarian technologies will continue to expand, and not just in refugee camps?

In Greece, the most invasive technologies being used now would probably be drones and unpiloted surveillance technologies, because they are a really easy way to dehumanize the areas where people are crossing from Turkey and trying to claim asylum. There's also the appetite to try facial recognition technology.

It shows just how dangerous these technologies can be, both because they facilitate pushbacks, border enforcement, and throwing people away, and because of what they replace: instead of the humane response you would hope for when you see a boat in distress in the Aegean or the Mediterranean, entities are now turning towards drones and the whole surveillance apparatus. It highlights how the humanity in this process has been lost.

And the normalization of it all. Now it is so normal to use drones; everything is about policing Europe's shores, with Greece acting as a shield, which normalizes the use of invasive surveillance tech. A lot of us are worried by talk of expanding the scope of action, mandate, and powers of Frontex [the European Border and Coast Guard Agency], given its utter lack of accountability. It is crystal clear that entities like Frontex are going to do Europe's dirty work.

There’s a particular framing applied when governments and companies talk about migrants and refugees, often linking them to ISIS and using careless terms and phrases to discuss serious issues. Our concern is that this kind of use of technology is going to become more advanced and more efficient.

What is happening with regard to contact tracing apps — have there been cases where the technology was forced on refugees?

I’ve heard about the possibility of refugees being tracked through their phones, but I couldn’t confirm. I prefer not to interact with the state through my phone, but that’s a privilege I have, a choice I can make. If you’re living in a refugee camp your options are much more constrained. Often people in the camps feel they are compelled to give access to their phones, to give their phone numbers, etc. And then there are concerns that tracking is being done. It’s really hard to track the tracking; it is not clear what’s being done.

Aside from contact tracing, there’s the concern with the Wi-Fi connection provided in the camps. There’s often just one connection or one specific place where Wi-Fi works and people need to be connected to their families, spouses, friends, or get access to information through their phones, sometimes their only lifeline. It’s a difficult situation because, on the one hand, people are worried about privacy and surveillance, but on the other, you want to call your family, your spouse, and you can only do that through Wi-Fi and people feel they need to be connected. They have to rely on what’s available, but there’s a concern that because it’s provided by the authorities, no one knows exactly what’s being collected and how they are being watched and surveilled.

How do we fight this surveillance creep?

That’s the hard question. I think one of the ways that we can fight some of this is knowledge. Knowing what is happening, sharing resources among different communities, having a broader understanding of the systemic way this is playing out, and using such knowledge generated by the community itself to push for regulation and governance when it comes to these particular uses of technologies.

We call for a moratorium on, or the abolition of, all high-risk technology in and around the border, because right now we don't have a governance mechanism in place, or an integrated regional or international way to regulate these uses of tech.

Meanwhile, in the EU we have the General Data Protection Regulation, a very strong tool to protect data and govern data sharing, but it doesn't really touch on surveillance, automation, or A.I., so the law is really far behind.

One of the ways to fight A.I. is to make policymakers understand the real harms that these technologies cause. We are talking about the ways discrimination and inequality are reinforced by this kind of tech, and how damaging it is to people.

We are trying to highlight this systemic approach: to see it as an interconnected system in which all of these technologies play a part in the increasingly draconian way migration management is being done.

More at onezero.medium.com
