
Have you seen the Black Mirror episode Nosedive? It’s the one with all the pastel colours. The story follows a girl who is obsessed with her social media rating. Having a high rating is the only way she can get a flat and an invite to a wedding. It sounds like an Orwellian concept, but China has actually implemented a similar social credit system.

In essence, if you do something good, like volunteering or donating to charity, your social credit score goes up. Do something bad, like jaywalk or litter, and it goes down. It's really important to stay on Beijing's good side if you don't want to end up in trouble.

In China, having a bad social credit score can really affect your ability to get a job, loan, or rent an apartment. Not to mention that it can lower your popularity with friends!

Sure, the West isn't China. We all know the two have different ideas about life and ethics. Or do they?

We don't have a centralized social ranking system, but there are companies out there with the data to create one. They sell your data to businesses that want to make sure they know who you are. These so-called data brokers buy your info from tech companies that track you, usually through the apps and browser plugins you use. Then they put all the data together and sell it to anyone interested in you. John Oliver did a great job of explaining what data brokers are about.

Shoshana Zuboff also delved into this topic in her book The Age of Surveillance Capitalism. A short excerpt from the book:

A new app-based approach to lending instantly establishes creditworthiness based on detailed mining of an individual’s smartphone and other online behaviors, including texts, e-mails, GPS coordinates, social media posts, Facebook profiles, retail transactions, and communication patterns. Data sources can include intimate details such as the frequency with which you charge your phone battery, the number of incoming messages you receive, if and when you return phone calls, how many contacts you have listed in your phone, how you fill out online forms, or how many miles you travel each day.
These behavioral data yield nuanced patterns that predict the likelihood of loan default or repayment and thus enable continuous algorithmic development and refinement.
[…] "You're able to get in and really understand the daily life of these customers," explained the CEO of one lending company that analyzes 10,000 signals per customer.

Even if we had no apps on our phones, companies could still spy on us and figure out how trustworthy we are. Let’s get into drama mode and think about the data they collect from us. How might different aspects of our digital behaviour impact our hypothetical social score?

🛰️ GPS data

Do you live in a "good" neighbourhood and are you therefore surrounded by "decent" people? How often are you late for work? Do you "have to" use public transport or can you "afford" a car? Do you drive too fast? Do you travel abroad a lot? Where? Have you been to <insert any Middle Eastern country>?
We could dedicate an entire article to GPS tracking, but let's keep it simple… GPS data says a lot about you.

👩‍👧‍👧 Social circle

Who do you text or call? What's their social score? What's the social score of your spouse, your siblings, and your best friends?
We can imagine an algorithm that weighs the "social contribution" of your close contacts more heavily than that of people you rarely see.
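
A minimal sketch of what such a weighting might look like. Everything here is invented for illustration: the contact scores, the interaction counts, and the idea of averaging them.

```python
# Toy sketch of a contact-weighted "social contribution". All numbers,
# scores and the weighting scheme itself are invented for illustration.

def social_contribution(contacts):
    """contacts: list of (contact_score, interactions_per_month) tuples."""
    total_weight = sum(freq for _, freq in contacts)
    if total_weight == 0:
        return 0.0
    return sum(score * freq for score, freq in contacts) / total_weight

# A low-scoring close friend (frequent contact) drags the result down far
# more than a low-scoring acquaintance you rarely talk to.
print(social_contribution([(820, 60), (540, 45), (300, 2)]))  # ≈ 692.5
```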

✍️ Language preferences

I have a hard time writing this, but we can imagine some intrinsically messed-up systems that judge you for your origins.
How would you be seen if you communicate in "trustworthy" Norwegian and English? And how would that compare to someone who speaks Spanish or Swahili? Perhaps speaking multiple languages is considered good? Well, at least if they are "good" languages.

💬 Language complexity and accuracy

Do you text and speak in academic language, or RU likely 2B more like diz eh? Do you make a lot of grammatical errors? What does this say about you?

🥹 Linguistic sentiment

Machine learning can fairly easily be used to analyse what kind of mental state you are in.
Do you use positive language and are you engaged, or do you tend to keep your texts short and make cynical comments? Does the system determine that you are depressed, perhaps? How many depressive episodes have you had in the last few years? God knows… and the algorithm.
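
To give an idea of how crude this can be, here is a deliberately naive sketch. It uses hand-rolled word lists instead of a trained model; the words and the threshold are made up.

```python
# Deliberately naive sketch: score each message by counting "positive" and
# "negative" words, then flag prolonged negative streaks. Real systems use
# trained models; these word lists and the threshold are invented.

POSITIVE = {"great", "thanks", "love", "happy", "excited"}
NEGATIVE = {"tired", "whatever", "alone", "pointless", "exhausted"}

def message_sentiment(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_low_mood(messages, window=30, threshold=-5):
    """Flag a user whose last `window` messages sum to a very negative score."""
    recent = messages[-window:]
    return sum(message_sentiment(m) for m in recent) < threshold
```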

☎️ Incoming and outgoing phone calls

How often per day do you pick up your phone? And how long are these calls? What’s your phone call bounce rate? Do you Google the unknown numbers? Ahh, now the system knows if you are an introvert or an extrovert.

📱 Phone OS usage

Which apps do you use most often? How often do you use them? For how long? How many times per hour do you unlock your phone? Do you tap on those never-ending notifications? Does this say something about your productivity, distractibility, or vulnerability to addiction?

🛌🏾 Phone charging habits / Sleep routine

Does your phone charging routine reveal how structured your life is? Do you charge your phone at night? Do you go to bed at the same time every day (this might also be analysed by looking at your phone activity)? Are you a "risk taker" who only starts thinking about charging when the battery is almost dead?

🏃‍♀️ Workouts

You might use a workout app. Even if you didn't give this app any permissions, Android could still detect that you connect a heart rate strap for an hour every other day. Your GPS coordinates can also show that you run, or perhaps go for a bike ride. Healthy lifestyle: attaboy.

💸 Financial stability

Gmail, Android, Chrome and Google Analytics know which investment apps you use. They can also monitor which webshops you visit and which checkout pages you've reached. Do you have a fair amount of assets? Which insurers do you use? Are you an impulse buyer? Do you pay your bills on time?
There are a million ways to build an algorithm that could estimate your financial score.
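
For example, here is one entirely hypothetical scoring rule; the feature names, weights and base value are all invented.

```python
# Hypothetical behavioural "financial score": a weighted sum of signals a
# data broker might assemble. Every feature name, weight and the base value
# is made up for illustration.

WEIGHTS = {
    "bills_paid_on_time_ratio": 40.0,    # rewarded
    "investment_app_installed": 15.0,    # rewarded
    "late_night_checkout_visits": -2.0,  # penalised as "impulse buying"
    "loan_comparison_searches": -5.0,    # penalised as a distress signal
}

def financial_score(profile, base=500):
    return base + sum(w * profile.get(name, 0) for name, w in WEIGHTS.items())

print(financial_score({
    "bills_paid_on_time_ratio": 0.95,
    "investment_app_installed": 1,
    "late_night_checkout_visits": 7,
    "loan_comparison_searches": 3,
}))  # 500 + 38 + 15 - 14 - 15 = 524
```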

🎹 Music & Media preference

YouTube, Apple Music and the like know which media you consume. What does that say about you? Jazz festivals always sell out, even during economic crises. Does that mean that jazz listeners are "reliable people"? People in suits listen to Beethoven. Would this increase your trustworthiness? What if you listen to gangster rap? What if you follow certain channels on YouTube? Would it reveal your political, sexual or religious preferences?

Beyond all the data traces you leave with your phone, more opportunities are opening up for data brokers. The new wave of connected devices and wearables is a gold mine for them. Why was Google so keen to get its hands on Fitbit? It isn't just about helping us keep track of our steps and sleep habits.

Let’s look at some of the other popular IoT products available.

⌚ Fitbit and the like

These devices obviously tell a lot about you and your physical state. How much, how stably and how consistently do you sleep? This, in combination with your resting heart rate, can reveal whether you had a stressful day or consumed alcohol. A decreasing resting heart rate over time can show that you are becoming healthier. Fitness trackers do a reasonably reliable job of capturing what physical state you are in, and have been in.

🧽 Smart vacuum cleaners

This might sound paranoid, but robot vacuum cleaners and lawnmowers can actually scan the floor plan of your house or garden. In fact, some of the terms and conditions blatantly state that these maps are collected. Who reads the terms anyway, right? Would the size of your living room give any insight into who you are?

🔈 Smart speakers, smart home hubs etc.

Every major tech player has a product on the market that continuously listens to you. The technology to transcribe our spoken words has become impressively accurate. The companies claim that speech is only recorded after the device is activated by a certain command, and that the analysis is supposed to happen on the device rather than in the cloud. How much of our intimate conversations is really processed? As Succession comedically illustrates: are we listening?

Smart lights and lightbulbs, smart home security cameras, smart doorbells, smart thermostats, smart door locks, smart fridges, smart shower heads, smart mattresses (yes, not kidding), smart tools to observe your kids, smart toys to entertain your cats. Smart, smart, smart.

You get the point. We can choose to upload all our actions and gestures into the cloud. It might help us. It might make our lives insanely convenient. But couldn't it make someone else's life even better? How so?

For a long time, I thought: "privacy is not important to me. I have nothing to hide". I'm so insanely boring that the digital big brother might go into hibernation mode while analysing me.

But let’s imagine a company that wants to reduce risk. It’s very easy for them to get their hands on the confidential info of millions of people.

A bank can use machine learning to decide if you're eligible for a loan or mortgage. What criteria do they use to make this choice? Is the machine impartial? How much bias, human or automated, creeps into the algorithm?

What about a big recruitment firm that wants to quickly evaluate all candidates? Which aspects would lead to the rejection of a candidate?

A dating firm or platform? Let's take it to an extreme. Would it be able to connect people with like-minded matches who follow the same life patterns, come from a similar social class, have the same income and speak in similar slang? Wouldn't that reek of facilitating a new caste system?

In the past — and present — people were blocked from social mobility because of sexism, racism and other forms of discrimination. Guilty by association! Do you have a “bad” family, mate? Bad luck. Bad blood. Bad luck. Bad hood. Bad luck.

How can we be sure that all this big data is applied impartially? Even Google Translate, a relatively simple translation app, couldn't avoid sexism.

Try translating from Turkish, which has gender-neutral pronouns, into English: a phrase like "o bir mühendis" becomes "he is an engineer", while "o bir hemşire" translates to "she is a nurse".

The platform taught itself sexist behaviour from its training data. Google only sorted out the problem after other people noticed it.

How many people need to be unfairly disadvantaged before machine learning learns not to use your sleeping patterns, your emoji usage, or who you hang out with as proxies for how suitable you might be for a job?

Apparently, Facebook has no flippin' clue about how your data is processed. It hoovers up everything it knows about you and throws it onto one big pile, together with the data of all the other users. If the legacy code creates such an ambiguous mess, how can we rely on the decisions it makes?

A handful of companies have become so big that they hold a de facto monopoly on digital ethics. They are publicly listed, so their priorities obviously lie not with the users but with Wall Street. So who defends the interests of you and me? That's what governments should do. Well, at least in some parts of the world.

During the last few years, the EU has been working on data privacy regulations. Two new ones will come into force in 2024.

The Digital Services Act requires platforms to explain how it is possible that you see an easyJet ad five seconds after texting your brother that you want to go on holiday. We have a right to know how targeted ads are chosen. We perhaps even have a right not to get any targeted ads at all.

The Digital Markets Act aims to protect small companies that have been reduced to Amazon or Google webshop slaves. Amazon shouldn't first absorb the catalogues of its smaller competitors and afterwards put its own products at the top of the search results. The EU demands that this change. Amazon must also allow external vendors to link to their own websites, and it can't continue surveilling users after they've clicked on those outgoing links.

The biggest step forward has been the General Data Protection Regulation (GDPR). It is meant to stop companies from frantically harvesting data they don't need. We need to give our consent first. We also have the right to access the data that has been collected. Ultimately, we own our data, and we can therefore demand that all of it be deleted.

Think of your data as your body. You can't touch me unless I give permission. And if I ask you to stop, you stop. It's that simple.

The EU privacy measures are a start, but they aren't watertight. Cookie popups have shown that ambiguities in the regulations make websites creative, and the popups are often more annoying than useful. As a design strategist, I've nevertheless worked on a fair few products that needed to become GDPR compliant. They all significantly improved their data privacy. The law certainly has a positive impact.

The USA especially, but also the UK (which kept the GDPR as EU legacy law), have so far failed to implement extensive, up-to-date data protection measures. Just look at how much money big tech throws at lobbying (Meta, Amazon) or at contributions to political party campaigns (Microsoft, Alphabet). They influence, slow down, and even block governmental decision-making.
Would it be cynical of me to point out that the oil, tobacco and pharmaceutical industries were once the biggest lobbyists? In the 20th century, the health of our bodies and our planet was compromised by capitalist motives. Are we compromising our privacy and mental health in the 21st century?

Google claims it doesn't sell your data but rather uses it to optimise its own platforms. And obviously… to provide tailored ads. This means it might be hard for a single commercial company to combine all the data we just discussed. Nevertheless, there are a fair number of smaller companies whose goal is to collect your data. Why on earth does the average weather forecast app want access to my contacts?

I know many people who work in big tech. They are great folks. They are intrinsically motivated to help you and me get better experiences. They are probably never exposed to C-suite data strategies. Their success is measured only within the products they work on.

That doesn’t mean that we should stay within our low-level product bubble.

We should wonder what the long-term goal of our current or future employer is. We can read the terms and conditions of the products we consider working on. It might reveal a decent bit about what the product is really about. If you see anything disturbing, ask about it. Ask what the primary streams of revenue are, or how they are expected to look in a few years. What percentage of income comes from subscriptions, advertising, and data selling?

When we design new features, we should reflect on what the ultimate aim is. A product manager should be able to explain in clear language what the business value of a choice is. You might need to design a dialogue that asks the user for permission to access certain content. Why? Why is this on the roadmap? Collecting data you don't need is actually illegal under the GDPR.

You might need to design an indicator to show how a candidate is ranked in a recruitment app. Which criteria are used? Ask how this data is collected and challenge whether the platform should be transparent about why an applicant appears "green" or "red". Don't be a zombie; don't march blindly towards whatever your executives tell you.

Only by understanding the underlying motives of the company can we tell whether we primarily design for the user or design to exploit them. As UX designers, we have the word User in our job title. Make sure you are an ambassador for them. It's not impossible to build ethical and commercially viable products. We don't have to be the next China.
