
Designing Tech to Defeat Coercive Behaviour and Domestic Abuse

Henry Nash
23rd July 2020

The combination of the app-economy and cheap cloud resources has led to a transformation in the technology tools we have to manage our lives. We can manage money, control the heating/lighting in our houses, monitor people at our door, track and share our fitness and movement, access our multiple social media accounts – all equally easily from home, office or beach.

However, a darker use of this technology is rapidly becoming apparent: it is increasingly being used in cases of domestic abuse, where a perpetrator seeks to control their victim.

This is known as technology-facilitated coercive control.

It is not a niche issue: there have been numerous articles and reports of survivors' experiences, with a recent UK news investigation finding a 1,800 percent increase in alleged cyber stalking offences between 2014 and 2018. In the context of the scale of domestic abuse (in the USA, one in three women have experienced physical violence from an intimate partner, whilst in parts of sub-Saharan Africa partner violence is thought to be a reality for 65 percent of women), this is now a major problem. To compound this, a recent UN report exploring the impact of COVID-19 on women highlighted a trend of increased abuse as homes are placed under strain from self-isolation and lockdown.

There is no easy answer to stopping this form of abuse.

However, by making subtle decisions – balancing intended with unintended consequences – it is possible to design technology to be resistant to abuse. To aid technologists in making these decisions, IBM recently presented five key design principles enabling the creation of products that are resistant to technology-facilitated coercive control.

While most of the principles may already be familiar to technologists, when looked at through the lens of coercive control, they take on additional meaning.

First, Diversity. Having a diverse design team broadens the understanding of user habits, enabling greater exploration of use cases, both positive and negative. While well-governed open source projects often pride themselves on being diverse, some users (for example, children or grandparents who may end up using the technology) are likely still to be outside the project's mix. We need to expand the personas we consider.

Second, Privacy and Choice. Users need to be able to actively make informed decisions about their privacy settings. Small red buttons, or phrases like 'advanced settings', can intimidate users, causing them to pick the default settings without necessarily understanding the consequences of that choice.
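As one way of picturing this, here is a minimal sketch (the names are hypothetical, not drawn from any real product) of settings that default to the most private option and attach a plain-language consequence to each one, so that opting in becomes an informed act rather than a buried toggle:

# Hypothetical sketch: privacy settings that default to the safest choice,
# each carrying a plain-language description of what enabling it means.
from dataclasses import dataclass

@dataclass
class PrivacySetting:
    key: str
    enabled: bool      # default is the most private option
    consequence: str   # shown to the user before they change it

DEFAULTS = [
    PrivacySetting("share_location", False,
                   "Other account members can see where this device is."),
    PrivacySetting("lock_screen_previews", False,
                   "Message contents, including login codes, appear on the lock screen."),
]

def explain(settings):
    # Surface consequences in plain language rather than hiding them
    # behind an 'advanced settings' panel.
    for s in settings:
        state = "on" if s.enabled else "off"
        print(f"{s.key} is {state}. If turned on: {s.consequence}")

explain(DEFAULTS)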

Third, Combatting Gaslighting. Gaslighting is where a person psychologically manipulates someone into doubting their own memories and judgement. If another user can remove all evidence of an action taking place, or if there never was any evidence, someone may start to question their memory. Untraceable remote control of household features (such as heating or lighting), or the ability to wipe social media posts without any record, are examples of how an abuser might gaslight their victim. Pertinent, timely notifications, as well as auditing, are essential for making it obvious who has done what, and when. We have already solved similar issues in both open source and the broader technology arena with things like immutable admin logs; we need to bring this thinking to a wider context.
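To make the idea concrete, here is a minimal sketch of an append-only, hash-chained audit trail in the spirit of the immutable admin logs mentioned above (the class and method names are illustrative, not from any specific product). Because each entry is chained to its predecessor, silently deleting or editing history breaks the chain and becomes detectable:

# Illustrative sketch: an append-only audit trail for shared household
# devices, where every change is attributable: who did what, and when.
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, actor: str, action: str) -> dict:
        # Append an entry chained to the previous one, so tampering
        # with history later is detectable.
        entry = {
            "timestamp": time.time(),
            "actor": actor,          # who did it
            "action": action,        # what they did
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute the whole chain; any edited or deleted entry breaks it.
        prev = "0" * 64
        for e in self._entries:
            body = {k: e[k] for k in ("timestamp", "actor", "action", "prev_hash")}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("user_a", "thermostat set to 28C")
log.record("user_a", "lighting schedule deleted")
assert log.verify()

In a real product, entries like these would feed the timely notifications described above, so every user of a shared device can see changes as they happen.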

Fourth, Security and Data. It is important to think past traditional security threat models, paying attention to the potential risk trajectories if the technology is used to abuse. For example, many home computer-based devices and services are managed by one user even though they are used by several members of the family (e.g. virtual assistants, subscription channels, family calendar/data-sharing plans, financial spend tracking). An intuitive and easy way for family members to subscribe and unsubscribe might be a better model, with users having joint control. Even the concept of a "single user" account (e.g. email, social media) can break down in abusive relationships: often an abusive partner will insist the password is shared with them. If the victim has re-used that password for multiple accounts, the abuser will now have access to all of those accounts. Solutions such as two-factor authentication can help here, but even these have problems. Consider that the default setting (see Privacy and Choice above) for most smartphones is to display any incoming text on the lock screen, so the abuser may see the one-time password arrive.
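A joint-control model could look something like the following sketch (again, purely illustrative names and flow, not any vendor's API): each family member enrols with their own credential, so no password is shared, and any member can unsubscribe themselves without going through a single "owner" account:

# Illustrative sketch of joint control for a shared household service.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    credential: str  # stands in for an individual password or passkey

@dataclass
class HouseholdService:
    members: dict = field(default_factory=dict)

    def join(self, member: Member) -> None:
        # Each person enrols with their own credential: no shared password.
        self.members[member.name] = member

    def leave(self, name: str) -> None:
        # Any member can unsubscribe themselves, with no admin approval needed.
        self.members.pop(name, None)

    def authorise(self, name: str, credential: str) -> bool:
        m = self.members.get(name)
        return m is not None and m.credential == credential

home = HouseholdService()
home.join(Member("alex", "alex-passkey"))
home.join(Member("sam", "sam-passkey"))
home.leave("sam")                                # sam exits independently
assert not home.authorise("sam", "sam-passkey")  # and their access ends with them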

And lastly, Technical Ability. Victims of coercive control live in complex, ever-shifting worlds and may lack the energy or confidence to navigate new technologies. The combination of ease of use and an auditing feedback loop to every user can provide reassurance to a potential victim that they are not being controlled by the technology in question.

It’s worth noting that while this kind of abuse is often thought of in the context of women, it has wider ramifications in society, as it can happen in any type of relationship, especially where there is a power imbalance. Other examples include relationships between carers and vulnerable, elderly, or disabled people; within institutions; and even in the workplace. These five design principles apply equally to technologies built for all these demographics.

As technologists, we have a real opportunity to create products that resist technology-facilitated abuse and to contribute towards the elimination of domestic abuse in our wider society. By opening up conversations and adopting a mindful approach to design, we can work together to ensure our technology is resistant to being used for harm, making it inherently safer.

More information can be found here: https://www.ibm.com/blogs/policy/design-principles-to-combat-domestic-abuse/
