Digital autonomy – our ability to choose and control how technology integrates into our lives – has never been more important. It has also never been more fragile.
My eyes were opened to these issues when I went through the experience of evaluating an implanted pacemaker/defibrillator as a result of my life-threatening heart condition. As a patient with a technical background, I was appalled to discover not only that my medical practitioners had very little understanding of the devices they regularly implant, but that as a patient I would have no ability to review the source code that is implanted in my body and attached to my heart. When I was pregnant, my defibrillator shocked me unnecessarily, and the only way to deal with it was to slow my heart rate down with drugs. My situation was different from that of most other patients with my condition, and my medical practitioners were only able to work around my device – it was out of the question to evaluate the software on it.
Going through these experiences inspired me to research the issues around the software in technology and in health-related embedded devices. It made me passionate about making sure that we have control of the critical technology we rely on and that we do not compromise our fundamental digital autonomy.
While most of the corporations that make these products generally don’t have malicious intent, it’s not their goal to anticipate every single use case that may arise for the user or to ensure that consumers have rights down the road.
Our most personal information gets swept up in a global trade of data that is exploited for every possible profit motive. This is invisible to most people and is the result of compromises that they make every day to achieve minor conveniences. We need to understand the extent to which this is happening, from both a personal and a societal perspective, and ensure that at the end of the day we have options to move forward with technology that is actually in the service of our own goals, not the quarterly profits of corporations. Most fundamentally, we cannot actually consent if we don’t understand what we are consenting to, and that consent cannot be legitimate if there is no realistic option to opt out.
Making careful decisions about how we integrate technology into our lives is an important step toward making sure we have personal freedom and liberty in the future. Insisting on software freedom means that we can check the products to see whether they are functioning the way we expect them to, and change them to behave in a way we can accept. Having control over our own devices means we can choose devices that don’t surveil us or monetize our data in ways that could have deep ramifications. It means that we don’t have to accept products that become disposable when their manufacturers decide they don’t want to update the software, or that they want to update the software in a way that we don’t like.
You asked me what I thought about Brexit specifically in all of this. While I think it’s complicated to predict the overall impact, Brexit is a huge opportunity to take the best of Europe’s regulations and take the next steps toward true digital autonomy. Britain has the potential to be such an important center for the creation of technology. By taking a stance for ethics in technology, Britain has the opportunity, at this point in time, to establish its relevance in an unparalleled way.
Karen will be the keynote speaker at the OpenUK Open Healthcare evening sponsored by DITO (www.dito.tech) and Great Ormond Street Hospital’s DRIVE.
https://www.eventbrite.co.uk/e/openuk-healthcare-evening-featuring-karen-sandler-tickets-88820042137