Is the Quantified Self in Control?

In this blog, I reflect on the notion of control within the realm of the quantified self by exploring the roles of citizens, commercial firms and governments. Tools that may have sounded like science fiction only 15 years ago have now become reality: we can record the frequency and volume of our snoring during various sleep cycles, monitor our eating patterns, count our total number of steps in a week and manage our daily habits. As a consequence, personal data transcends the online sphere and plays a large role in everyday life.


Wearables such as Fitbit and apps like MyFitnessPal, Sleep Cycle and Habits promise the achievement of personal goals: weight loss, more self-knowledge and better sleep. The quantified self movement claims that self-tracking works as a mirror in which one’s habits are reflected. While self-tracking offers citizens the possibility to control and monitor their own life and body to a large extent, the question is whether they also control the huge amount of data they generate.

Most self-tracking mobile applications and software packages connected to wearables are free of charge, and only a small number include advertisements. Quantified selfers accept privacy agreements that expose their personal lives to data mining (see Becker for an interesting reflection on the balance between privacy and self-tracking). Once citizens monitor their offline behaviour via self-tracking tools, the data they collect enters the online sphere. Personal records are stored in databases and data warehouses, after which they dissolve into the opaque process of data mining. Quantified selfers have no control over the unveiling and dissemination of their personal data; it is unclear who uses or monitors their data, and for what purposes. Large commercial actors (such as Google and Facebook) are more likely to gain access to big data than ordinary citizens. A new digital divide has emerged between the ‘big data rich’ and the ‘big data poor’ (terms coined by boyd & Crawford). As most quantified selfers belong to the big data poor, they may feel in control of their records and numbers while the big data rich collect, monitor and mine their personal self-tracking data.

Apart from the commercial use of data, there are other realms where the collection of self-tracking data may have far-reaching consequences. When it is used for surveillance purposes, quantified selfers can end up in specific social and economic categories that influence how they are treated by financial institutions, governmental organisations and insurance companies. David Lyon describes this process as social sorting and stresses how the matching of offline and online data increases the searchability of databases and the accuracy with which ‘risks’ are identified beforehand. The prospect that your insurance company raises your premiums because you unwittingly tracked your fast-food consumption may seem far-fetched, but it is possible in practice. This example shows how self-tracking can easily get out of control when companies change their policy and/or ethical stance.

This exploration shows that while quantified selfers accomplish their own goals, they lack control over the numbers, facts and records that are collected about them. This can lead to digital inequalities with widespread consequences. Therefore, an open debate about control of quantified personal data is essential. There is a need for privacy-preserving self-tracking solutions, and now is the time for quantified selfers to demand more transparent and controllable processes when it comes to their self-tracking data.

Even Barbie is eavesdropping now?


Privacy is currently the biggest cause of unease around new technology and artificial intelligence, so it is not surprising that products developed for children are starting to raise privacy concerns as well.

Hello Barbie by Mattel is interactive; that is, it is supposed to have meaningful conversations with a child. To achieve this, it records what the child says and sends the recordings to the cloud, where Mattel can retrieve them and use them to let Barbie learn and talk back, but also to better target marketing campaigns and advertise directly to children. The toy has been called “creepy” and renamed “eavesdropping” Barbie.

It does seem very creepy, not only because of the easy-to-guess privacy concerns expressed by several organizations (see links to articles below) but also because childhood play is supposed to be creative and stimulate imagination. I wouldn’t want my child to play with a doll that…


Safer Internet Day: I dodged filter bubbles

(Photo: Andrea Nanni)
Imagine you enter a large bookshop to find specific information. You voice your request and immediately books and magazines start shifting. Within seconds the shelves offer a personalized selection based on your interests and characteristics. Does that sound strange?

This is exactly what happens when you enter a search request into Google. It might be convenient, because what is offered probably suits your demands quite well. But what if you realize that there are other interesting books stocked in a warehouse out of your reach? Or what if you find out that your friend, who made an identical request, received a different selection of books?

Google generates personalized search results based on previous search requests, current location and personal data gathered by tracking cookies. Consequently, a filter bubble is created around you: a unique universe of information that is individual, invisible and involuntary. (Google is not the only service that can be accused of creating a filter bubble; Facebook, Amazon, CNN, Yahoo, MSN and others also generate personalized content. For examples, see Eli Pariser’s TED Talk or his book The Filter Bubble.) What is wrong with a filter bubble? Filter bubbles can be harmful because they skew users’ perception of the world and narrow their scope.
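To make the mechanism concrete, here is a minimal sketch of personalized re-ranking. This is a toy model, not Google’s actual algorithm; the profiles, results and tags are all invented for illustration. It shows how two users who type the identical query can be handed differently ordered results, which is the essence of the filter bubble:

```python
# Toy illustration of a filter bubble: the same result set is re-ranked
# differently depending on a user's tracked interest profile.

def personalized_rank(results, profile):
    """Order results by how many of their topic tags match the user's profile."""
    def score(result):
        return sum(tag in profile for tag in result["tags"])
    # Python's sort is stable, so ties keep their original order.
    return sorted(results, key=score, reverse=True)

# Hypothetical results for the ambiguous query "jaguar".
results = [
    {"title": "Jaguar XF review",      "tags": {"cars", "luxury"}},
    {"title": "Jaguars in the Amazon", "tags": {"wildlife", "nature"}},
    {"title": "Jaguar habitat loss",   "tags": {"wildlife", "environment"}},
]

car_fan    = {"cars", "luxury", "motorsport"}
naturalist = {"wildlife", "nature", "environment"}

# Identical query, different bubbles:
print([r["title"] for r in personalized_rank(results, car_fan)])
print([r["title"] for r in personalized_rank(results, naturalist)])
```

Both users see the same three documents, but each sees “their” result first, and neither ever notices what was demoted; that invisibility is what makes the bubble involuntary.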

Since today is Safer Internet Day, I wanted to make my own internet behavior a bit safer, and I started with my search engine use. Instead of automatically using Google, I used DuckDuckGo today. This is a search engine that enables you to dodge the filter bubble: it claims to protect your privacy and to offer better search results without a filter bubble (Grossman). DuckDuckGo offers you a greater level of autonomy and control over your search results than Google does. While DuckDuckGo is referred to as a promising alternative to Google, it does not offer the same range of options. It is a basic search engine limited to web pages, images, videos and, more recently, products.

While I really do appreciate the idea of retrieving ‘neutral’ search results, and it actually works quite well, I did end up using Google today when I needed to look up a location on Google Maps and to find academic articles.

In the impoverished ‘DuckDuckGo’ book stall everyone is treated exactly the same, which is a refreshing experience. To maintain my ‘safer internet’ resolutions, I have set DuckDuckGo as my landing page and aim to use it as a starting point from now on.

Tip: An interesting take on the internet’s business model

Over the past few months De Correspondent, or rather journalists Dimitri Tokmetzis and Maurits Martijn, have delved into online trackers and the advertising industry. They expose worrying developments (how children are monitored online via game apps, and how many trackers are watching you when you unsuspectingly visit websites), describe how you can safeguard your own privacy, and give insight into the current debates and ways of thinking about privacy.

Today they published the final article in this series, in which they conclude that under the internet’s current business model both consumers and companies lose out: consumers because they hand over their privacy (in the form of personal data), and companies because advertisements yield very little. Moreover, our data is worth very little, and it would not even be that expensive if we simply paid for these services.

Another interesting issue they raise is that accepting commercial surveillance online also normalises other forms of surveillance that would previously not have been accepted. As an example, they point out that shops can use Wi-Fi tracking of your mobile phone to follow how you move through a store. Online this is the most normal thing in the world, so why should it not be allowed offline? Wi-Fi tracking was widely discussed in various media a year ago, and if it is true that State Secretary Teeven subsequently put the responsibility on consumers, who should simply switch off their phones (according to Computeridee), then it seems to me that this practice is being normalised not only by online surveillance but also by officials who apparently do not grasp the impact of such practices.

The fact that security services make use of the data that social networking services and online platforms collect makes this issue all the more complicated.

In short, this article is recommended reading if you want to think further about your own privacy, the use of online services and surveillance!