Friday, March 27, 2015

SURPRISE: European Public more likely to find effective surveillance non-invasive of privacy



Yet more ploughing through the SURPRISE results on the European public’s attitudes towards Security-Oriented Surveillance Technologies (SOSTs) shows that the more effective a SOST is perceived to be, the less it is regarded as privacy-intrusive.

This nine-nation European study of the public’s attitudes towards Smart CCTV, Deep Packet Inspection (DPI), and Smartphone Location Tracking (SLT) finds that the perceived effectiveness of SOSTs negatively influences substantive privacy concerns (Pavone et al. 2015: 140).

Unfortunately, intelligence agencies will not tell us how effective their surveillance programmes are. They decline to comment, arguing that disclosure would compromise their sources and methods and tip off terrorists. On this basis, it’s unlikely that intelligence agencies will be able to persuade the public that bulk data collection doesn’t compromise their right to privacy.

Pavone, V., Esposti, S.D.  and Santiago, E. (2015). D 2.4 – Key factors affecting public acceptance and acceptability of SOSTs. Surprise. Surveillance, Privacy and Security: A large scale participatory assessment of criteria and factors determining acceptability and acceptance of security technologies in Europe. Retrieved from http://surprise-project.eu/

SURPRISE! European Public Rejects Privacy-Security Trade-off




More ploughing through the SURPRISE results on the European public’s attitudes towards Security-Oriented Surveillance Technologies (SOSTs) shows that few Europeans are willing to give up privacy in favour of more security.

This nine-nation European study of the public’s attitudes towards Smart CCTV, Deep Packet Inspection (DPI), and Smartphone Location Tracking (SLT) finds that, while there are differences between nations and between SOSTs, few people are willing to give up privacy in favour of more security.

‘In this study, participants who recognised the need to give up some of their privacy for better security were more willing to accept the SOST only in the case of DPI. We did not find similar results in the case of Smart CCTV and SLT.’ (Pavone et al. 2015, p.133)

This stands in contrast to statements by the UK Intelligence and Security Committee (ISC) in its recent Privacy and Security Inquiry, which invokes the will of the UK public as favouring bulk data collection of all kinds (including the content of communications, metadata, and phone location data) in order to prevent terrorism. The ISC states:

‘we do not subscribe to the point of view that it is acceptable to let some terrorist attacks happen in order to uphold the individual right to privacy – nor do we believe that the vast majority of the British public would’ (p.36).

The UK was one of the nations in the nine-nation European study. However, the UK public was only asked to consider Smart CCTV and DPI. Further research into the UK public’s attitudes to surveillance of other data types would be useful.

Further research into why different nations’ publics largely refuse this privacy-security trade-off, whether this is influenced by public discourses on surveillance and dataveillance, and the precise nature of these discourses, would be valuable.

Also needed is research into whether intelligence agencies and politicians listen to these public views on their privacy and their security, rather than invoking a mythical public opinion unsupported by data or research.

 

ISC. (2015). Privacy And Security Inquiry. Retrieved from http://isc.independent.gov.uk/

Pavone, V., Esposti, S.D.  and Santiago, E. (2015). D 2.4 – Key factors affecting public acceptance and acceptability of SOSTs. Surprise. Surveillance, Privacy and Security: A large scale participatory assessment of criteria and factors determining acceptability and acceptance of security technologies in Europe. Retrieved from http://surprise-project.eu/


SURPRISE results on European public’s attitudes towards security-oriented surveillance technologies


Ploughing through the SURPRISE results on the European public’s attitudes towards security-oriented surveillance technologies provides much food for thought.

The study identifies a list of criteria for what makes security-oriented surveillance technologies acceptable:

a) operate under an international legislative framework, monitored by a data protection authority with sufficient powers at the European level;
b) are operated by transparent, accountable public agencies that inform citizens about their purposes and functions;
c) are cost-effective and allow citizens to access and control the data they retrieve and store;
d) always target the least sensitive data, only in public spaces, whenever possible and be specifically orientated towards suspects and criminal activities;
e) are deployed only after significant evidence has been collected and only after judicial authorities grant permission;
f) incorporate Privacy-by-Design mechanisms and principles;
g) do not replace but complement human intervention, as part of a broader, socially informed, security strategy that addresses also the social and economic causes of crime and violence.

So - how many of these criteria do Snowden's revelations violate?

See: Pavone, V., Esposti, S.D.  and Santiago, E. (2015). D 2.4 – Key factors affecting public acceptance and acceptability of SOSTs. Surprise. Surveillance, Privacy and Security: A large scale participatory assessment of criteria and factors determining acceptability and acceptance of security technologies in Europe. Retrieved from http://surprise-project.eu/


Tuesday, March 24, 2015

Seminar 2 Position Statement: F-Secure

We’re a Finnish company called F-Secure, and for the last twenty-six years we’ve been making software that protects people’s data. Whether their data is on their computer, laptop, mobile phone or in the cloud, we’ve made it our business to create great software to protect the irreplaceable. Developing and selling innovative software that fends off cyber crime is the business we’re in; it’s what we’re good at and it’s how we make our money.

Our lives after Snowden
But we don’t believe that that’s enough in a post-Snowden society. We believe that, as an organisation, we have a responsibility to fight a little bit harder for the rights of people in a digital society, and we’re trying really hard to make this happen.

Digital freedom movement
That’s why we’ve launched something that we’re calling the Digital Freedom Movement: a group of people with a common understanding of what the digital society of the future could be. The foundation of the movement is the Digital Freedom Manifesto, a crowdsourced document, licensed under Creative Commons, that outlines how we believe governments, businesses and individuals should build a fundamentally digital society.

Saturday, March 21, 2015

Seminar 2 Position Statement: Dr Steve Wright (Applied Global Ethics)


Surveillance: The Next Echelon
 
Global Surveillance
Only 5% of Edward Snowden’s revelations have been published, but they reveal previously undreamt-of surveillance capacities, siphoning global telecommunications data equivalent to over 194 times the total contents of the British Library each and every day. The invasiveness of these systems, which are largely based in space, demotes notions of privacy and constitutional protection to mere aspirations. But it is not the first time NSA activities have been exposed. Its Echelon system was modelled from mostly openly accessible information in the Seventies and Eighties, and the European Parliamentary Library Information Service has just published its first history of the so-called “Echelon Affaire”.

Inklings & Mosaics
But what is being done with this prodigious data gathering? Most of the NGO researchers have been academics, sociologists and criminologists, but the emerging paradigm is both military and imperialistic. What emerges is not a network of global counter-terrorism and crime control but “full spectrum dominance.” And we are just at the beginning. Key innovations and breakthroughs in civil surveillance in the early days were gradual and linear. Post-9/11, massive R&D sums have been poured into breaking through to new surveillance capacities. Human rights advocates and development specialists are slowly beginning to understand how surveillance technologies play a corrosive role in targeting human rights defenders.

Battle-spaces That See
Key work is being accomplished not by peace researchers but urban geographers questing to understand emerging military logistics and challenges which are already melding living and non-living systems into battlespaces that see on land, sea, air and space. How should future leaders tackle the question of “taking out” cities of more than 10 million? It is one of the issues addressed by Prof. Stephen Graham at Newcastle University in his book, Cities Under Siege.

Psy-Ops
Why should such arcane issues burden us? Quite simply, because much of the modern city surveillance-scape came from military capacities (from wars like Vietnam and the Northern Irish Conflict). Future innovations will compete for resources with welfare, health and education, but as each international counter-terror scare gathers force, we will be fudged into procuring systems decanting down from high-end military observational capacities to create new architectures of surveillance.

Amalgamating Surveillance Nervous Systems With Military Muscle
Why should this concern us? Military systems are about targeting and they are based increasingly on data-veillance and artificial intelligence. Of growing concern is the drift towards autonomous targeting and the denial of accountability and oversight. In this context, even the liberties we once had would be a significant achievement. On current trends that is not going to happen. We should be concerned about what lies in store for us as surveillance capacity reaches the next echelon. Our task as researchers is to assimilate what is already in full view and act.

Seminar 2 Position Statement: Col. Ian Tunnicliffe


State Surveillance – A Problem Of Public Perception

The impression given by recent events, not least the revelations of Edward Snowden, is that ordinary citizens are now under ever-increasing levels of surveillance from the state. For many, this intrusion into the privacy of individuals is unacceptable and requires action, through either technical or legislative means, to reduce the state’s ability to intrude. In short, the state’s spy agencies are out of control and need to be reined in.

My belief is that, in truth, relatively little has changed in terms of the effort the government puts into surveillance activities. The two key elements that have changed are the technology of communication, with the evolution of the Internet and social media, and, crucially, the level of trust the average citizen has in the government and state institutions, driven not just by past government failures but also by public perception shaped by the media.

The revolution in communications technology has led to an explosion in the amount of information generated across the world. We are currently in a second, mobile-generated, revolution that is further increasing the vast quantities of data being produced. Indeed, IBM estimates that 90% of all the information mankind has ever generated was created in the last two years. From this perspective, far from increasing its surveillance coverage, the state is struggling, and failing, to keep up.

The most radical change within the timeframe of my experience of intelligence is in the level of trust in governments and politicians and the degree of cynicism shown by the public towards authority. While in general a positive development, this scepticism, combined with a distorted view of the intelligence agencies frequently promulgated by the media, has given rise to widespread misconceptions as to what those agencies are doing and why.

In my experience, UK government agencies are risk-averse and bureaucratic. Individuals working in these organisations are conscientious and driven to achieve the best results they can, and do not knowingly break the law, although in some cases there can be confusion over what exactly the law is, particularly where technology has outpaced the regulations covering a particular subject.

The recent House of Commons Intelligence and Security Committee review would tend to reflect this view. The committee’s 140-page review, published on 12 March 2015, argues that while the agencies “do not seek to circumvent the law”, the current status quo is “unnecessarily complicated and – crucially – lacks transparency”. It called for a new law to replace the current “piecemeal” legislation governing how the UK’s intelligence agencies spy on its citizens.

This lack of transparency is more a function of a failure to update old laws combined with the gradual opening up of the rest of government over time, which has ironically highlighted those areas that remain closed. 

A separate but not unimportant area is the care that a government agency might or might not take over individual information. Because most of the information collected by the various agencies is classified, it is generally protected to a far higher standard than may be the case in other government departments; HMRC’s loss of discs containing 25 million child benefit recipients’ records is a notorious example.

The Snowden revelations have generated concern that the state is becoming too intrusive. Despite these revelations, there is an acceptance by many, although not all, that some level of surveillance is necessary. A mature view therefore needs to be taken to achieve a balance. Without doubt that will require more transparent legislation and regulation, which should be welcomed. However, caution is needed before constraining the agencies any further. The threats faced by the UK are genuine and in some cases publicly underestimated.

Some have argued that successful terrorist attacks are a price worth paying as a cost of maintaining individual privacy.  The agencies themselves note wryly that this is not an argument that seems to get much coverage whenever a successful terrorist attack takes place, when the focus is on asking why the security services failed to prevent the attack in the first place.

In short, my view is that the changing perception of UK government spying agencies and their threat to individual privacy has more to do with evolving public opinion than with changes within those agencies themselves or in government policy. The media has played a large part in reinforcing the perception that the government is increasingly ‘Big Brother’, conspiring to spy on every citizen. I believe the reality is that there is no such malign intent. There are many other criminal and commercial threats to privacy; the government isn’t the real problem.

Seminar 2 Position Statement: Dr Brent Mittelstadt (Oxford Internet Institute)


Trust: The Missing Element in Surveillance
 
The Snowden revelations revealed global surveillance occurring at unprecedented levels and granularity. Debates over the ethical acceptability of intra- and inter-national surveillance often conceptualise the problem as finding the appropriate balance between privacy and security, wherein monitoring the interactions and communications of citizens is seen as a way to enhance national security and enforce the law. This dichotomy is false: it is not necessary to locate a single balance between the two values that can be universally applied to all instances of potential surveillance. Rather, it may be that more invasive violations of privacy can be justified within certain contexts, for example when an identified threat to national security (or other values) is imminent. A model for justifying surveillance is thus established wherein the degree of severity or invasiveness of a particular surveillance action is justified directly by the identification of a specific imminent threat. If this model functions properly, perpetual invasive surveillance cannot be justified by the vague possibility of a future threat or attack.

The proposed model of surveillance is missing a key element to justify the violation of citizens’ privacy: trust.  In the Snowden revelations a relationship of trust between the NSA, GCHQ and citizens was missing, indicated most clearly by the secrecy of the surveillance operations and underlying legal processes.  Trust is interpreted as an interaction between a system that collects and processes data, the users that provide the data, and stakeholders who access it.  Trust can be seen as a sum of the credibility, motivation, transparency and responsibility of a system.  Credibility is linked to ‘loyalty’ or ‘reputation’; a stakeholder must be seen as responsible or credible enough to handle sensitive personal data.  Motivation refers to the intentions of stakeholders, or how they intend to use the data of users.  To achieve trust these motivations, as well as the extent of data held, must be transparent to users so that the system (and its custodians) are seen as responsible: as a citizen, you must let me know what data you are capturing and how you plan to use it in order for me to consent to targeted or increasingly invasive forms of surveillance in times of crisis.

Trust is something that develops over time, based upon development of the system and stakeholders involved.  Trust allows for violations of privacy to be justified in particular contexts.  For trust to exist, transparency is required on the part of the organisation conducting the surveillance.  A trusting relationship thus requires fidelity and transparency on the part of the surveillance organisation, and consent from data subjects.  Participation in decision-making regarding appropriate forms of surveillance may also be required, in particular to establish appropriate limitations on transparency in the interest of operationally-required secrecy.  When trust is breached, it must be clear who can be held responsible, and to what extent.  Similarly, new or increasingly invasive forms of data analysis require notification within a trusting relationship.  Systems and stakeholders that clearly establish responsibility before a system is implemented are, according to this conception of trust, more trustworthy.  Each of these elements was lacking in the surveillance operations revealed by Snowden, indicating a lack of public good will which must be re-established if future surveillance practices are to be broadly justified.

Seminar 2 Position Statement: Jess Meacham (Journalism)


Surveillance and Secrecy - Fictional Representations

My research focuses on fictional representations of surveillance and secrecy. This work intersects with applied theory and technical expertise, I think, in the contested question of popular and/or public understanding of surveillance in society. The cross-pollination between narratives of espionage and the history of British intelligence is long and well documented. John le Carré, in particular, is widely credited with creating a new type of espionage fiction, qualitatively different from that of Ian Fleming and his other predecessors, one built around his own experience within and alongside the British intelligence establishment. The popularity of his fiction attests to the strength of the public appetite for narratives of surveillance in a realist mode (contra Bond). He has been positioned as an author of the ‘negative thriller’, and there is currently, post-Snowden, renewed critical interest in his fiction. His work continues to reach a wide audience through big-budget cinematic adaptation. Here, I use le Carré as an example of just one aspect of how literary analysis might suggest potential research questions arising from the juxtaposition of literary enquiry with other disciplinary fields in surveillance and information studies.

Fictional representations of secrecy and privacy are always contingent on the reader’s awareness of the narrator and the author-figure. Narratologists distinguish between ‘story’ (or plot, that is, the events that happen in order for a narrative to be a narrative) and ‘discourse’ (how the story is told). In le Carré, the gap between the two is often wide: think of Tinker, Tailor, Soldier, Spy, which opens with Jim Prideaux arriving to take up work as a supply teacher in a minor public school. There are many pages to go before we can shape this into a chronological sequence (the plot) and discover that he was injured as a result of a disastrous operation in Czechoslovakia, ordered by Control in the last days of his reign at the Circus, before he was ousted by Alleline and Smiley was fired-retired … This is an example of secrecy within the narrative not as theme, but as device: for sustaining interest or tension, i.e., implicitly, for keeping us invested in the plot. Secrecy, personified in the form of Karla’s mole, is arguably also the dominant theme of Tinker, Tailor – harnessed to varying ends and in various ways. In this formulation, then, secrecy, lifted out of any real-world ethical framework, becomes instead a tool or technology to drive narrative.

The relevance of literary narratives to the public understanding of Snowden was demonstrated conclusively by the rapid rise in sales of Nineteen Eighty-Four in the wake of the story breaking. To what extent does secrecy operate in media surveillance narratives? How does the discourse of Snowden, as shaped by Glenn Greenwald and the Guardian, use the story of secrecy? How might public conceptions of privacy, secrecy and surveillance be shaped by the ways in which the media and literary narratives that inform them operate self-reflexively within these parameters? How might a concrete or policy-driven ethics of surveillance take into account these factors? 

Seminar 2 Position Statement: Dr Andrew McStay (Media-Culture)


Privacy: An Affective Protocol
Privacy is best understood by putting down the mobile phone, stepping away from the computer and turning off all digital communication devices. Privacy is a fact of living in communities, and although the ways in which privacy is expressed are ethnocentric, differing on a community-by-community basis, anthropologists have long observed that “it” exists. What “it” is, however, is very hard to define. The problem is at once linguistic, categorical, experiential, sociological, anthropological and philosophical. Put otherwise, is privacy a thing, condition, property, right, ethic or even biological necessity?

Some people suggest that privacy might somehow be removed, surpassed or lost from the human equation – that privacy is dead. However, this is very much to misunderstand what privacy is. Excitable folk (particularly CEOs of tech companies) who herald the death of privacy are quite likely to be unpopular with their family, colleagues and acquaintances. This is because privacy plays a fundamental role in our most basic daily interactions. Whether in our behaviour towards each other, what we consider to be taboo, our modes of intimacy, the confidences we share with others, who we are open to, how we arrange our homes and working spaces, where we store thoughts and things of value, or, more recently, the ways that these are imbricated in media and technological systems, privacy is a basic and primal premise.

One implication of this view is that it makes little sense to think of privacy as being about being alone; if we are to think in terms of seclusion, this must be in terms of managing relationships with others (being open as well as withdrawn). After all, being alone and being private are very different, as the former involves an absence of relationships and connection. This absence can be very palpable and may occur in public as well as on a hypothetical desert island where no one knows or cares that an unlucky individual is stranded there. Without connection, being alone is simply that: the utter absence of others who relate to us in some way.

While I broadly share what today is a liberal outlook on privacy (involving Kant and JS Mill’s thoughts on control, dignity, rights and autonomy), I also see it in more systemic terms. By this I mean that privacy norms contribute to how we connect and interact with others. Privacy is not about being alone, but how we are social. A systemic approach thus sees privacy as an organizational principle that contributes to the regulation of institutions, practices, modes of interaction, and social and individual life more generally.

It is at this point that we can bring technologies back into the mix, particularly if we see technologies as social actors that both contribute to the existence of social norms and are required to abide by them. As a principle, or set of principles, privacy is best conceived in terms of meta-stable protocols informed by physical, social, historical, technological and environmental circumstances. Importantly, protocols are not imposed but co-created between actors of all sorts, so as to be an emergent norm that advises interaction and behaviour. Thus, while privacy does not have substance, it is quite real; and by real we mean that it has the capacity to affect, and to bring about corporeal, behavioural, technological, psychological and organizational differences.

With privacy being an affective protocol, the ethical onus is on anyone seeking to dramatically modify or alter it to make a case for their actions. To do otherwise is an act of force. While concerns about consent, cookie use, legislation and surveillance of our digital communications remain critical areas for scrutiny, these investigations are renewed and refreshed by recognition of the breadth of privacy matters: privacy protocols are found in the most basic of arrangements. Privacy is very much part of the human equation, and suggestions that it might be waning are to be treated sceptically.

Seminar 2 Position Statement: Dr Yuwei Lin (Media, Culture, Communication)


The Public and The Private in Doing Citizen Sciences – How do Citizen Scientists Perceive Their Privacy When They Communicate Online?

Privacy is usually understood as a fixed concept (e.g. as seen in information system design and in policy documents). To challenge this linear, one-dimensional view, some scholarly work has proposed a contextual approach to privacy, taking into account the different perceptions that individuals, groups and/or institutions have of privacy and the different strategies they develop for managing it (Nissenbaum 2004, Fieschi 2007, Livingstone 2008, Robards 2010, Wessels 2012).

Based on this body of work that establishes a need for contextualising the concept of privacy, I'd like to open up the discussion on data and privacy by exploring the intersectionality of different moralities, interests, practices, and perceptions of privacy.

The fuzzy boundary between the public and the private has been discussed in relation to digital communication. Scholarly attempts to problematise this public/private dualism can be illustrated by Sheller and Urry (2003), who argue that the perception of what is public and what is private needs to be reconceptualised:

‘these notions of the public rest on a separate basis and presuppose a particular contrasting ‘private’... we criticize such static conceptions and emphasize the increasing fluidity in terms of where moments of publicity and privacy occur’ (107-08).

I would like to extend this argument to ponder different public and private spaces. I will draw on my work on free/open source software and citizen science communities, to see how open / citizen scientists perceive public and private spaces, and manage their privacy, in relation to their practices of sharing and crowdsourcing collective intelligence.

The words 'open' and 'citizen' suggest a certain level of 'public', yet the practices of data collection, data sharing and data manipulation require embodied actions that may involve contributions of actors' labour, emotions, and bodily performance. Therefore, there is a division between 'public' and 'private' in citizen science. However, how does one negotiate this public / private boundary? How much of the private body or emotions should one share or contribute to the 'public' domain? If sharing the data and information would reveal one's identity, whereabouts, locations in the public domain, would this person still be motivated to share the data? What is a justifiable cause (e.g., solving a scientific problem) to persuade a citizen scientist to 'surrender' his / her privacy (or is there a privacy issue here?)? 

Examining the intersection of the public and the private allows us to conceptualise the engagement in citizen science as new ways of expressing one's identity and creativity (which groups citizen scientists belong, what goals they would like to achieve in their life (self-actualisation in light of Maslow's hierarchy of needs), what values they hold dearly). It also allows the researcher to critically examine the citizen science phenomenon by problematising the overtly positive perspective on 'citizen science' constructed in the public discourse, and by contextualising issues around 'public participation' and 'individual motivations'. 

Different perceptions of public/private will be analysed through narrative analysis of communications in different online citizen science communities. Specifically, I will question different levels of publicity and privacy, citizen science as a cause for sacrificing privacy, and perceptions of public/private spaces: would sharing personal information and emotions on a closed mailing list for amateur scientists be considered private (or 'semi-public')? Would a single datum that inscribes the collector's location and whereabouts at a certain time be considered private once integrated into a large aggregated dataset? Would a digital portal with a built-in authentication mechanism be deemed public? What strategies have been developed and/or adopted to manage privacy in different public and private spaces? What is the range of information that one shares and does not share (e.g., personal experiences, life stories, emotions, data, objects)? How might one resist, or express oneself, through exploring and negotiating the public/private in the field of citizen science?

I'd love to ponder these questions together with fellow participants at the seminar 'Debating the Technical and Ethical Limits of Secrecy and Privacy' for they would be useful for designing citizen science information systems in the future.

Seminar 2 Position Statement: Paul Lashmar (Journalism)



Summary: UK intelligence agencies’ secret interventions have always posed a threat to the freedom of the press. We are now in a ‘state of exception’ (Agamben 2005) in which ever more sophisticated surveillance technology is available to the secret state, which is increasingly intervening to prevent the news media from executing its fourth estate role. Journalists must find new modes of countering intrusive laws and technology.

One of the paradoxes of intelligence services in western societies is that they have often undermined the very rights they are meant to protect. Using journalistic cover, placing propaganda and suborning journalists undermine the freedom of the press, while rendition and torture undermine basic human rights. In the 21st century there is a debate over how the expansion of surveillance technology is eroding the right to privacy. The intelligence lobby is now systematically devaluing privacy as a right in order to extend its own powers. I suggest that rather than acting as responsive public servants, intelligence chiefs now set the agenda within the public sphere (Habermas 1962) as primary definers (Hall et al 1978). Retiring GCHQ director Sir Iain Lobban defended the work of GCHQ in front of a parliamentary committee in late 2014, and his successor Robert Hannigan controversially argued in The Financial Times that ‘privacy has never been an absolute right and the debate about this should not become a reason for postponing urgent and difficult decisions’. Other intelligence directors have made similar claims: after retiring as chief of the Secret Intelligence Service (MI6) in January 2015, Sir John Sawers claimed that preventing terrorism was impossible without monitoring the internet traffic of innocent people.


Privacy may not be an absolute right, but it is a fundamental liberty of liberal democracy, not merely another inconvenience to counter-terrorism to be eliminated. This area of concern has long been recognised in philosophical discourse. Isaiah Berlin recognised that privacy and surveillance were closely enmeshed with concepts of freedom. When he was refining his concept of ‘freedom from interference’, he recognised that not all citizens wanted a public life:

a man may leave a vigorous and genuinely ‘participatory’ democratic state in which the social or political pressures are too suffocating for him, for a climate where there may be less civic participation, but more privacy, a less dynamic and all-embracing communal life, less gregarious but also less surveillance (Berlin 1969, vii).

It is noteworthy that during the Snowden revelations, the Conservative partners in the coalition Government did not acknowledge why critics of mass surveillance are so concerned. The Liberal Democrat partners have understood such concerns, to the point of vetoing the Communications Data Bill, known as the ‘Snoopers’ Charter’, in 2012. (The Conservative Party has stated it will reintroduce the Bill if it is successful in the 2015 general election.) Government legislation, technology and legal action are making it increasingly difficult for journalists to obtain confidential sources and thus undertake their Fourth Estate role. Preventing journalists from effecting accountability over the state is particularly beneficial for intelligence services, which have a long history of incompetence, immorality and illegality. To work in this specialism, journalists will need to evolve new methodologies. There are some promising developments, such as mass leaks of documentation. The task is urgent: never before have government and its intelligence services had such powers and techniques of invasive mass surveillance available, and thus the potential to control the population as a whole, and those who dissent in particular.

Seminar 2 Position Statement: Dr. Adi Kuntsman (Media-Culture)


Social Media, Digital Exposure and Military Violence

We are living in times when privacy and secrecy are, paradoxically, both increasingly guarded and increasingly unstable. Our daily routines include entering multiple passwords and constantly adjusting privacy settings; and yet the dominant practice of social networking today is that of perpetual sharing. Our virtual and material environments are filled with technologies of protection; our governments adopt new defences in light of WikiLeaks or the Snowden affair, knowing that more digital disclosures are likely to come. And yet the everyday fabric of social media culture is one of constant exposure: embarrassing personal information, incriminating or indecent photographs, or stories of abuse and cruelty, shared willingly and joyfully on YouTube, Instagram, Facebook or Twitter. But while big concerns over individual privacy and national secrecy are steadily taking over the agenda of researchers, journalists, intelligence leaders and software developers, what is often left in the shadows are questions of mundane digital sociality, its ordinary routines and grammars, and the ways this ordinariness shifts our horizons of violence, responsibility, and accountability. For example, when social media becomes an archive of willing self-recorded perpetration (such as soldiers documenting their own abuse of civilian populations), how do we address such archives’ exposure?

Some examples of these issues are discussed in my new co-authored book, Digital Militarism: Israeli Occupation in the Social Media Age (with Rebecca L. Stein, Stanford University Press, 2015): countless cases of Israeli soldiers sharing snapshots of everyday military brutality in the West Bank and Gaza in their Instagram and Facebook streams. When first publicly exposed, such images were scandalised, as in the case of a female Israeli soldier whose smiling photographs in front of blindfolded Palestinian detainees caused a national media storm in Israel in 2010. It was one of the first times social media became a site where the ordinary violence of the Israeli military occupation turned viral. And while some of the public outrage, and the official military response, regarded the photographs as incompatible with army morale and the national character, many Israelis commented on how common such images were to those who had served in the Israeli army. Their anger and rage were not directed at the depicted abuse, however – rather, they protested the photographs’ viral exposure. “I have pictures that are far worse… Her mistake was that she put them on the Internet”, wrote someone in an on-line discussion, his words echoed numerous times on various forums, talkbacks and other on-line debates. Indeed, the public debate at the time largely ignored the content of the photographs – no national soul-searching came as a result of that particular scandal (nor, for that matter, of any that followed). Rather, it became a debate about privacy and secrecy in the age of social media. For indeed, the photographs had been taken by a journalist blogger from the woman’s personal Facebook album. Was it wrong of her to have publicly shared such photographs? – some asked. Was it unethical, others wondered, to have screengrabbed and circulated the content of someone’s Facebook, unprotected by privacy settings?

To whom are we accountable, then, when we praise – or condemn – digital exposures of such violence? Whose privacy are we talking about? That of the offending soldier, who titled her album “The Army – the best days of my life”? Or that of the blindfolded Palestinian men? For indeed, they have been abused not twice – once on the ground, and once in social media circulation – but multiple times, in a regime of militarised colonial rule where humiliation and torture are both routinised and banalised, and now extend into the domain of social media virality. And whose secrecy is at stake here? That of an individual social media user, or that of Israeli society as a whole? As we argue in the book, social media has now become a site where the “public secrecy” (Taussig, 1999) of the Israeli occupation is both threatened and reaffirmed, as the ordinary brutality of military rule is simultaneously exposed and excused.

Today, a few years after that first scandal, instances of self-recorded and willingly shared perpetrator violence in Israel no longer surprise – by the time we finished our book in 2014, wartime Instagram snapshots and soldier selfies on and off the battlefield had filled Israeli social networks, becoming an integral part of the new digital everyday. As such, they render the notion of a digital “exposure” obsolete. How, then, should we refigure our debate on digital exposures so as to move from the individualised notion of privacy to the complexity of ethics and accountability, respect and justice? And how do we account for the increasing normalisation of digital exposures, and their incorporation into the very normative fabric of our digital political life?