Understanding Internet Users' Thoughts and Practices
The expansion
of data mining practices quite rightly gives rise to a range of concerns
relating to privacy, security, surveillance, trust and transparency. These
concerns are entirely justified when it comes to the spectacular forms of data
mining that have hit the headlines in recent years, carried out by the NSA (National Security Agency) in the US and GCHQ (Government Communications Headquarters) in the UK, as well as by other governments, law enforcement agencies and major corporations. However, we are
currently experiencing many more forms of data mining than these. Writing specifically about one data
mining space, social media, van
Dijck and Poell (2013) assert that all kinds of actors (in education, politics, the arts, entertainment, policing and law enforcement, and activism, for example)
are increasingly required to act within what they call ‘social media logic’.
Such logic is constituted by the norms that underpin the incorporation of
social media activities into an increasingly broad range of fields, and one such
norm is data mining. Because of this and related phenomena, there is today a
diverse range of data mining practices, carried out by a variety of actors, in
distinct contexts, for distinct purposes. Arguably, some of them are more
troubling than others. In this context, it is important to consider whether we
need to differentiate the types of data mining practices which we would want to
subject to transparency and other regulatory measures. Should we treat a resource-poor public sector
organization like a museum or local council, which uses data mining in order to
understand, engage and provide services for its publics, in the same way as we
treat the activities of the NSA or Facebook? These more under-the-radar forms of data mining might be considered
mundane or ordinary; I suggest that we need to consider how to think about
them.
On this subject, I think
that, whilst it is important to direct attention to the issues raised by the
activities of the NSA and the like, the spread of data mining that I describe
here means that we also need to attend to data practices on this ‘ordinary’
plane. How can we engage people
doing what might be considered ‘ordinary’ data mining (a museum interested in
visitors’ responses to an exhibition, a local council wanting to know what
people think about cost efficiency measures) in debates about the ethics of
data mining?
What do ‘ordinary’ internet users think
about privacy, security, surveillance and trust in relation to their data and
its mining? In another post on this blog, Lina Dencik argues that many people
do not connect with these terms. One solution to this problem that she suggests
is to connect resistance in this domain to ‘broader ecologies of political
activism’. Another important strategy is to identify what terms people do
connect with, and how they talk about their thoughts and feelings about data and data mining practices. This is important in order to be able to
engage in conversation with ‘ordinary people’, in their terms, about the
problems with data mining that have been well-documented. In research that I
have carried out in order to explore this (Kennedy et al., forthcoming), I found
that the language of
fairness was commonly used to express varying viewpoints on distinct data
mining practices.