Blog

Why ownership of your health data does not make you an empowered citizen

By Raffael Himmelsbach. Originally published on Centre for Digital Life Norway.

In the digital age, the right to one’s own health data should not be mistaken for a real voice in shaping the direction of health technology development, writes Raffael Himmelsbach, DLN’s coordinator for responsible research and innovation.

While on vacation this summer in my native Switzerland, I went to see the dentist. Learning that I now live abroad, she asked if I wanted a copy of my patient file to take with me to Norway. This is an example of data portability that empowers me, as a patient, to reuse my health information at a future point in time. It is an anecdote from 2019, yet it could just as well have been a story from 1989 – photocopiers and printers already existed then.

Digitization is transforming the health sector, reshaping how we do research, develop new therapies and treat patients. My recent data portability experience looks quaint compared to the promises of digitization. The Xerox in the age of AI? What a joke, you might think. Yet when we consider that there can be no technological change without changes in the social structures that enable new technologies to be used in practice, we very quickly discover that change processes happen at different speeds. 

Let’s consider the question of governance. We live in a democracy. Given how pervasive the influence of technologies is on our lives, wouldn’t it be reasonable to expect that technological progress should be governed democratically, at least to some extent? 

Listening to researchers giving talks on different aspects of the digitalization of healthcare, I repeatedly hear the same narrative: Citizens should agree to share their personal health data because this drives progress. In return they are to expect future health benefits. Since people feel uneasy without some level of control, they are promised ownership of their data in the form of opt-out consent. 

But does ownership of one’s own data constitute an effective form of democratic participation in shaping the direction of future technological developments? Let’s unpack some of the assumptions at stake.

Firstly, today’s visit to a physician creates ever more data about the state of our health, but also about non-pathological traits and, increasingly, about our genome. Our bodies are under unprecedented surveillance.

Secondly, in order to derive meaningful insight from this volume of health information about an individual person, it has to be related to data from many other individuals through statistics. This brings in new social relationships: diagnosing my health status requires not just data about my own body, but also about other bodies. Conversely, I am no longer the exclusive beneficiary of the data describing my body. “One for all, all for one” is a fitting slogan.

Thirdly, as more data is aggregated and statistically mined, we expand the diagnosis of present illness to probabilistic predictions of future pathology. All humans become pre-symptomatic – the definitions of illness and patient become fuzzy. Prevention and overdiagnosis become relevant topics, but it’s not an either/or future as these phenomena will likely coexist. 

Fourthly, there is a new intermediary in the therapeutic relationship as statistical technologies wedge themselves between physician and patient. This also changes how economic value is extracted from health services, since somebody owns the intellectual property behind that technology. The solidarity implicit in “one for all, all for one” now has a marketable benefit.

In conclusion, consider the following analogy. Imagine data describing your body as a brick. Now an architect approaches you and makes an alluring promise about building a community centre, which can only be built if most community members donate their bricks (one for all, all for one). The only influence you have over the project is to donate or withhold the brick (consent). If you choose to donate the brick, you have to trust the architect unconditionally to deliver on their promise, rather than building a luxury condo, or a megalomaniac airport that will never serve people since it is being built in a remote desert. If too many choose to keep their bricks, there won’t be a community centre – but neither will there be a luxury condo or an abandoned desert airport.

So, would you say that owning your health data empowers you as a citizen? There is still a lot of Xerox-era thinking in the current digitalization moment, especially when it comes to literacy about social relations and how technological change affects them.

Can technology governance do without fictions?

Once in a blue moon I experience an intellectual crisis, for it feels as if the cognitive foundation of my professional activity is built on quicksand. My work is premised on the assumption that advances in science and technology can be governed for the common good. In Europe, this assumption is presently expressed through an (as of now) unsubstantiated belief in technology policy as a motor of economic growth and job creation, and in research policies aimed at promoting so-called responsible research and innovation (RRI). René von Schomberg, one of RRI’s intellectual architects, defines responsible research and innovation as:

… a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society). (p. 64)

von Schomberg, R. (2013). A Vision of Responsible Research and Innovation. In R. Owen, J. Bessant, & M. Heintz (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society (pp. 51-74). Chichester: Wiley.

Thus far, relatively large sums have been invested in social science and humanities projects (large by their standards, that is) for researching cases and developing instruments meant to make research and innovation practices more responsive to their societal context. In addition, scientists applying for money allocated under these technology policies (e.g. in materials science or biotechnology) are asked by research councils to showcase the societal value of their research output and to explain how they intend to be mindful of the downstream consequences of their research. (There is a curious imbalance between the effort that has gone into developing RRI frameworks and ‘tools’ and the lack of attention to how these can be translated into real-world practices through a governance regime whose main instrument is distributing funds.) My job at the Centre for Digital Life, too, is funded through a programme that rests on the belief that (responsible) technology policy creates economic activity.

Science relies on public funding and depends on being in politicians’ good favor as they allocate public expenditures. Especially in times of (fiscal) crisis there is a heightened need to legitimize public support for science, mostly by means of the rhetoric that science produces valuable things for society. There is nothing new to this.

Now comes the twist. Trajectories of scientific development interact with particular interests in society as they serve (and make possible) industrial, military, socio-political or environmental agendas; this is known as co-production. The gap between particular interests and the public good is an essential tension at the heart of democracy that can only be resolved with the help of conceptual fictions, such as Rousseau’s general will, Adam Smith’s invisible hand, or Benedict Anderson’s imagined communities. RRI champions a science for the benefit of all and thus (un)knowingly depends on some sort of conceptual fiction to reconcile the particular with the general. The claim to some shared set of (European) values as the basis of science governance is a lever that is much more effective than demands for a science in the service of diversity or the poor. Science lobbying, too, knows that it needs to bridge the particular–collective gap, and it often does so in an awkward manoeuvre that invokes an abstract ‘general public’ that is to benefit from what trickles down through taxation of economic growth led by technological innovation. How, then, can one advance the democratic governance of science and technology without resorting to a symbolically powerful fiction of the public good?

What quality standards for peer review of responsible research and innovation?

Originally published on LinkedIn.

I would love to start a conversation on quality standards in peer review concerning responsible innovation and other aspects of research governance. Spider-Man is reminded that ‘with great power comes great responsibility’. Researchers in the humanities and the social sciences who study the governance of scientific research may not have action-hero qualities, but they have acquired a certain influence over which projects get funded in the STEM fields (science, technology, engineering, mathematics) as they become part of mixed peer-review panels. How did we get there?

Not everybody is equally enchanted by industry’s uptake of technologies derived from molecular biology and other fields. Over time, science funders have come to realise that public opinion no longer considers it a self-evident truth that science and technology are indisputable forces of good. And this has provided a window of opportunity to translate lessons from research about how science works into ideas about how scientific production could be better guided to serve the public interest. Enter responsible research and innovation, a set of governance instruments that several science funders in Europe are trying to implement, be it through added requirements in grant applications or, more interestingly, through funding for jobs like mine.

I am a social scientist by training who quit the academic career track to work in research policy implementation. My work takes place in a biotechnology research network that spreads across an entire country. The scientists I work with are obliged by their funder to consider the (potential) impact their work has on the wider community. They are provided with a framework, but little practical guidance (there is a plethora of ‘tools’ and cases which I draw on for my work, but they are generally not suited for DIY use by STEM researchers). The framework is deliberately open, as it is feared that a checklist approach would not lead scientists to take sufficient time to seriously consider the societal implications of their work and to adjust their research agendas accordingly. In practice, the framework’s openness presents a creative challenge. What can project teams do to address societal aspects of their work in a way that is useful to their research, too? Helping scientists figure this out as they prepare a grant application is one of the most rewarding aspects of my work.

Back to Spider-Man. Grant writing is an art of clearly and concisely communicating complex ideas, walking the tightrope between making bold (if slightly exaggerated) promises and demonstrating feasibility, and making liberal use of buzzwords and typographic emphasis. How to translate a solid research governance strategy into this prose is tricky, for we know little about how reviewers assess it. And I am always at a loss when advising scientists on this point. So, fellow social scientists and humanists who have assessed the responsible research and innovation part of a STEM funding application: how do you handle the responsibility of your great powers? What quality standards do you follow? Let’s talk.

Digital natives: trope or tool for technology governance?

Photo: Scott Schiller/Flickr: https://www.flickr.com/photos/schill/12806114295

When I was in middle school my grandfather bought a computer. It was state of the art and ran Windows 95. As the computer-savvy grandchild, I became the designated support guy. As I explained the system preferences panel, my granddad would take detailed notes in his doctor’s scrawl, which I imagined was only legible to pharmacists used to deciphering prescriptions. We practiced conjuring up menus and files that he managed to accidentally hide; he dutifully took notes at every repetition. When I handed him the mouse for a practice run, he carefully aligned the cursor over the box he was supposed to click, only to be veered off target ever so slightly by the tremor in his hand every time he pressed his index finger on the mouse button. This was an obvious source of frustration for both of us, but he persevered tenaciously. It wasn’t only an aging physique that impeded a smooth user experience of the Windows 95 system preferences; its very logic was alien to my granddad – despite his detailed notes, menus kept disappearing, and it was only after long support calls (or at the next visit) that we could coax them back into sight.

My granddad was by no means technologically illiterate. As a radiologist he dealt with complex, state-of-the-art machinery his entire professional life. He wasn’t alien to logic either; he attended university lectures for senior citizens and relished debating Kant, evolution and other matters beyond the comprehension of my narrow, adolescent intellectual horizon. Yet I was the whizz kid at home on computers, with a rapport to Windows 95 he could never approximate.

The crackling sound of the dial-up modem, JPEG images loading line by line, and the constraints of the 1.4-megabyte floppy disk were defining experiences of my induction into computers; they are now museum-worthy sentimentalities. Today’s youngsters send snaps and like selfies on Instagram; they have no memories of that first PC arriving in the household or of the primitive homepages that made up the web 1.0. They are growing up surrounded by information and communication technology that has become relatively inexpensive, speedy, and ubiquitous. This fact led to the proclamation of a new generation of ‘digital natives’, who, by virtue of having been born into this technological environment – rather than having migrated into it from the past – supposedly have innate skills and knowledge of information technologies, as well as distinct learning styles as a result of this exposure. This claim is also made for healthcare. The website iQ by Intel writes under the headline “Digital Natives Push for Personalized Healthcare Technology”:

Caring for a rapidly aging population is challenging. Experts working to revitalize healthcare for the 21st century are tackling this challenge by shifting from a one-size-fits-all to a more personalized healthcare approach, one that is heavily influenced by how young people use technology.
To combat skyrocketing healthcare costs for an American population of 326 million people spanning six generations, experts are turning to bioscience and new technologies as well as to young, tech-savvy digital natives who are already nudging healthcare into the Internet age.
“We’re already seeing that millennials and younger generations won’t be the same kinds of patients as their parents,” said Eric Dishman, an Intel Fellow and general manager of Intel’s Health and Life Sciences. “These 18-to-34 year-olds already expect to have data and tools to help them manage their health just like they do for everything else in their lives.”

There is no doubt that the idea of digital natives has enjoyed widespread popularity, but is it actually useful for anticipating plausible futures of ICTs in health and education?

For starters, it seems far from clear whether there is indeed a generational gap in how people engage with ICTs. Empirical research in the domains of both education and health communication finds the assumption of a generational gap vastly overstated – using information and communication technologies effectively depends on many different social factors, of which age is but one. danah boyd, for instance, finds that teenagers are not so much addicted to their devices as they are to their groups of friends.

Beyond the empirically dubious claim of a cohort of digital natives, the very idea is flawed – the Snapchat and Instagram generation was born as much into a material world ruled by the laws of physics and social institutions as was my grandfather; why should these forces suddenly have less traction on today’s teenagers? It is simply a fallacy of the technological imagination that technology can somehow magically operate outside the social realm, whether this applies to romantic ideas about the internet being a magical vector of democratization (a very popular idea when I was a political science undergraduate in the early 2000s) or to fancies of breaking the climate change gridlock through schemes of geo-engineering. Technologies certainly change societies, but from within rather than from the outside. Social media may have helped to mobilize protesters, but it didn’t make the Arab Spring; resources and social organization, not Twitter, proved to be the sustaining forces that kept people in the streets, just like during the civil rights movement. As Kentaro Toyama puts it, technologies act as amplifiers of social processes. The effects of this are not just incremental; after all, the internet didn’t just make communication easier and expand the reach of agents of all shades: it stopped information from fading into entropy by archiving everything. And that is bound to change our lives profoundly.

Arguably, the implementation of new ICTs in education and healthcare affects how service providers and recipients interact, not to mention the kinds of services that are provided. Since these are public services in many countries, funded by taxes, the obvious question is whether ICTs deliver real improvements in service quality and coverage. There likely is no easy answer. Healthcare is one of the most complex policy and technology arenas, and what counts as a positive outcome depends as much on the specific circumstances of a particular case as on the values that underwrite our evaluative criteria. The notion of digital natives might fit the promises of some of the world’s most valuable companies, which have huge stakes in the education and health markets of tomorrow. It is moot, though, as a device for imagining societal transformation and for governing technology.

Just as my grandfather once struggled to keep the cursor on target, I never made friends with typing on a touch screen. Instead of forming a barely perceptible membrane between the material and virtual worlds, its slick, featureless surface leaves my fingertips somewhat bereft of orientation; my text messages remain short to the point of coming across as rude. There is no birthright to technological mastery; tomorrow’s technology will be new to every generation alive.