Their surveillance technology, your human rights, our vision.

The misuse of technology isn't just about infringing on our right to privacy; it is chipping away at the heart of our society and the fundamental rights and freedoms we seek to protect.

We’re on a mission to help people understand the observations made about them by companies and their technology, so that we can start to understand how, and why, we make the decisions we do, both as individuals and as a society.

"We exist at the intersection of artificial intelligence, human rights and consent."
Nicholas Oliver, Co-Founder of FORTYEIGHT

In the last few years, as a society, we have started moving into a dangerous space in which control over our personal data no longer guarantees privacy, nor the protection of our fundamental rights that privacy once afforded.

Next-generation technology like artificial intelligence and machine learning increasingly allows companies to know more about you than you know about yourself. This imbalance in the control of knowledge does more than simply infringe upon our privacy; it is starting to undermine many of our basic rights and freedoms.

Our hope lies in transparency. By revealing the effect that technology, and the companies behind it, have on our lives through the observations they make, we will finally be able to see how technology views us, to weigh the results of those observations, whether good or bad, and to choose our own fate.

Technology knows 'you'

Technology is increasingly programmed to know you without needing access to your personal data. It has been trained to see patterns in non-identifiable data that enable it to make observations about you based on assumptions and stereotypes.

For example, an image recognition system could make an observation about the football team you support by seeing the shirt you’re wearing. From this observation, the recognition system may then trigger an advert on a TV screen or provide directions to your team’s seating area in the stadium.

These observations are rarely made with 100% certainty, and the very nature of the technology means that it has likely made many hundreds, if not thousands, of other observations from the same image, selecting and acting upon only those of which it is most confident.
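To make that concrete, here is a minimal sketch of that selection step in Python. Everything in it is hypothetical: the labels, scores and threshold are invented for illustration, standing in for the outputs of a real vision model.

```python
# Hypothetical (observation, confidence) outputs for a single image.
# A real system would produce hundreds of these per frame.
observations = {
    "wearing_red_shirt": 0.97,
    "supports_team_x": 0.91,    # inferred from the shirt
    "aged_20_to_30": 0.62,
    "carrying_backpack": 0.34,
    "is_left_handed": 0.08,
}

CONFIDENCE_THRESHOLD = 0.85  # only high-confidence observations trigger an action

# Keep only the observations the system is most confident about;
# the rest are silently discarded, but they were still made.
acted_on = {label: score for label, score in observations.items()
            if score >= CONFIDENCE_THRESHOLD}

print(acted_on)  # {'wearing_red_shirt': 0.97, 'supports_team_x': 0.91}
```

Note that the low-confidence guesses are not undone; they are simply never surfaced, which is part of why these systems are so hard to audit from the outside.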

Smart technology like this is everywhere. It’s making observations about you to recommend the videos, news stories or social posts you might find interesting; it’s observing which mode of transport you prefer, morning, noon and night, to help you arrive on time; it’s observing your route through the supermarket to make aisles more convenient; it’s even observing what exercise you enjoy to help tailor personalised fitness and training plans.

This technology makes life easier and more convenient, streamlining our days and allowing us to focus on being ‘in the moment’. So, why all the scaremongering?

Taking the red pill

‘Pulling the cord’ has lost its effect. The technology of today can make highly accurate, and sometimes intrusive, observations about you without ever accessing your personal data, negating the protections once afforded by the layer of consent that data privacy regulation typically requires.

Access to your name or date of birth is no longer a prerequisite for a company or its technology to identify you, and neither is your email address or your face. Technology now enables your identification through even the most subtle of attributes, from the shape of your ears to the way you walk.

On the one hand, this technology provides us with the ultimate convenience, but it is also being used to discriminate against, coerce and enslave us. In the shadows, it’s observing when teenagers are insecure and vulnerable so as to sell them products; it’s judging whether you’re being truthful in police reports and insurance claims; it’s even picking you out of a crowd to determine whether you’re the leader of a group, to help break apart a protest.

In one situation, the effect of an inaccurate observation may be as inconvenient as an irrelevant film being promoted to you, yet, in another, it could mean the difference between freedom and imprisonment.

Building trust in technology, and in the companies who operate it, requires transparency, so that we see not only the positive effects but the bad ones too.

Putting a price on freedom

Our rights and freedoms have become a numbers game, played against companies who invest billions in research and development and who, as a result, are so afraid of negative PR and lost commercial contracts that they are unwilling to acknowledge the poor efficacy of their systems.

"US government tests find even top-performing facial recognition systems misidentify blacks at rates five to 10 times higher than they do whites."
Wired Magazine, July 2019
Image: facial recognition, gait recognition and image analysis in use during protests.

While technology may not yet be the arbiter of justice, it can certainly lay claim to being the producer of it. By increasing the number of situations in which a person’s innocence is put in question, you begin to increase the likelihood of their being found guilty of something. As humans, we are, after all, wired to believe that wherever there is smoke, there must be fire.
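The arithmetic behind this is worth spelling out. Assuming, purely for illustration, that each automated observation wrongly flags an innocent person with some small probability, the chance of at least one false flag compounds rapidly as the number of observations grows:

```python
# If each automated observation wrongly flags an innocent person with
# probability p, the chance of at least one false flag across n
# independent observations is 1 - (1 - p)^n. Figures are illustrative.
p = 0.01  # a seemingly tolerable 1% false-positive rate per observation

for n in (1, 10, 100, 500):
    at_least_one_flag = 1 - (1 - p) ** n
    print(f"{n:>4} observations -> {at_least_one_flag:.0%} chance of a false flag")

# 1 -> 1%, 10 -> 10%, 100 -> 63%, 500 -> 99%
```

A per-system error rate that sounds acceptable in isolation becomes a near-certainty of false suspicion once a person is observed hundreds of times a day.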

The problem is exacerbated by a legal system that favours tangible evidence over a subjective verbal account or a coercible witness statement. Our credibility is now being put to the test against some of the world’s most valuable and trusted technology brands, whose algorithms help provide exactly the tangible evidence our legal systems crave.

We must challenge the status quo, to know when, how and why we've been observed, not only when this results in an accusation, but every single time.

The illusion of control

The belief that we are safe from harm until technology reaches human-level intelligence, or ‘artificial general intelligence’, is ill-considered and wrong. Equally wrong is the argument that more encryption and anonymisation of data will, by themselves, save us.

Anonymisation may decrease the risk of spam emails or nuisance phone calls, but it also makes it much more difficult to prove that a negative effect you have suffered at the hands of a company involved the misuse of your data.

Encryption technology now enables a company to analyse and compute on data without ever needing to decrypt it. This may be more secure (depending on how much you trust the company doing it), but how can it be regulated if the regulator can’t see what the company chooses not to show?
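To see how computing on encrypted data is possible at all, here is a toy sketch of an additively homomorphic scheme in the style of the Paillier cryptosystem, written in Python. The tiny hard-coded primes are for illustration only and offer no real security; the point is that two ciphertexts can be combined into an encryption of their sum without anyone decrypting them.

```python
# Toy Paillier-style homomorphic encryption: multiplying two ciphertexts
# yields an encryption of the SUM of their plaintexts. Tiny primes are
# used for readability only -- this is a demonstration, not a secure system.
import random
from math import gcd

p, q = 293, 433            # toy primes (real keys use primes of 1024+ bits)
n, n2 = p * q, (p * q) ** 2
g = n + 1                  # standard generator choice
lam = (p - 1) * (q - 1)    # private key
mu = pow(lam, -1, n)       # modular inverse of lam mod n (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n2   # combine ciphertexts only
assert decrypt(c_sum) == a + b           # 42, yet a and b were never exposed
```

A regulator auditing such a pipeline sees only ciphertexts flowing through it, which is precisely the oversight problem described above.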

We face the same set of problems with IoT, edge computing and federated learning systems: decentralised technologies designed to remove the single point of failure, or of control, that exists in a centralised system.

If the processing and computing happens at the edge, beyond even the sight of its creator, how do you regulate it? How do you even hope to monitor the effect it is having on the end user? Or do we consider the potential effect so insignificant that we need not concern ourselves with it at all?
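Federated learning illustrates the dilemma neatly. In the minimal sketch below (a toy federated averaging loop in Python with NumPy, on an invented linear-regression task), each device trains on data that never leaves it, and the coordinating server only ever sees averaged model weights:

```python
# Toy federated averaging (FedAvg): each device trains locally on data
# the server never sees; only model weights are shared and averaged.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=20):
    """A few steps of gradient descent on one device's private data."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

# Three devices, each holding private data drawn from the same true model.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w_global = np.zeros(2)
for _round in range(10):
    # Each device refines the current global model on its own data...
    local_ws = [local_update(w_global.copy(), X, y) for X, y in devices]
    # ...and the server only ever sees the averaged weights.
    w_global = np.mean(local_ws, axis=0)

print(w_global)  # approaches [2.0, -1.0]; no raw data ever left a device
```

That is excellent for privacy, but it also means no central party, regulator included, can inspect the raw observations that shaped the model.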

Each individual technology or system may not create a notable effect in isolation, but taken in aggregate, the effect could be huge. Much like the unwitting participant in a Derren Brown stage show, whose journey from home to the theatre is used to subversively plant a concept in their mind, should we not consider there to be a risk of manipulation in the way we think?

"[...] a montage of moments from that night, in which Brown gave verbal suggestions, sometimes via subtle mispronunciations or non sequiturs, that we had apparently absorbed subconsciously."
How Derren Brown Remade Mind Reading for Skeptics
The New Yorker, 2019

We must consider whether the positive effects of these privacy-preserving technologies continue to outweigh the less visible, more subversive, negative effects of the reduced oversight they bring.

Future Transparency

We support those who seek to give people control over their personal data, to protect our right to privacy and to further advance technological solutions that limit the nefarious collection, misuse or theft of our personal data.

We support the right of people to choose, and the right to know.

Introducing FORTYEIGHT: Future transparency for artificial intelligence, facial recognition and technological surveillance.

We sit at the intersection of artificial intelligence, human rights and consent.

We’re on a mission to help people understand the observations made about them by technology, so that we can begin the work on understanding how, and why, we make the decisions that we do, both as individuals and as a society.

This is a personal journey for you, and for us. We’re human, after all.

Join us, become a part of future transparency.

Nicholas Oliver
Founder
