To solve problems and suggest solutions on behalf of others is to have power. As a result, we behavioral scientists have a heightened responsibility: Being in this privileged position requires recognizing when and where assumptions about “what good looks like” might creep in. When we design interventions—even just determining what options are available, or what the default choice should be—we shape other people’s experiences in ways we may not always fully appreciate. And our decisions to address certain problems while leaving others aside implicitly declare what challenges, and audiences, we think are worthy of receiving attention.
I - Intended Behavior
N - Non-targeted Audiences
C - Compensatory Behaviors
A - Additional Behaviors
S - Signalling
E - Emotional Impact
The following is from Dr. Bucher’s forthcoming book, Engaged: Designing for Behavior Change. I chose this section because it touches upon a PeopleScience theme: being successful and effective behavioral practitioners while also, and primarily, being good.
In this presentation Liz Barnes, Vice Chair of the CIM Charity and Social Marketing Group, will discuss which tactics we should be worried about, which techniques might be considered unethical, and how we can influence and persuade with integrity.
The most common question I get on responsible design: ‘How do I actually embed ethical considerations into our innovation process?’ (They don’t actually phrase it like that, but you know… trying to be concise.) Although I don’t love cramming a multifaceted field like ethics into a linear diagram, it’s helpful to show a simple process map. So here’s my attempt.
Artefact is proud to introduce The Tarot Cards of Tech: a tool to inspire important conversations around the true impact of technology and the products we design. The Tarot Cards of Tech encourage creators to think about the outcomes technology can create, from unintended consequences to opportunities for positive change. The cards are our way of helping you gaze into the future to determine how to make your product the best it can be.
Consequence Scanning – an agile practice for Responsible Innovators. A timely new business practice, Consequence Scanning fits alongside other agile practices in an iterative development cycle: a dedicated time and process for considering the potential consequences of what you’re creating.
All-or-nothing messaging may be harmful.
Isolation measures to contain the spread of COVID-19 mean that social researchers need ideas for doing fieldwork in a pandemic - specifically, ideas for avoiding in-person interactions by using mediated forms that will achieve similar ends. Social research has been conducted online for many years, of course. There are many examples of using online survey tools, or of doing content analyses or ethnographies using existing online interactions as research materials. Interviews have been conducted by phone or Skype for a long time. This document was initially directed at ways to turn fieldwork originally planned around face-to-face methods into a more ‘hands-off’ mode. However, people have added useful material about ‘born digital’ research (content already generated on the internet by online interactions), which provides an alternative source of social research materials if researchers decide to go down that path.
Insights from the behavioural sciences are increasingly used by governments and other organizations worldwide to ‘nudge’ people to make better decisions. A large philosophical literature has emerged on the ethics of nudging human behaviour, presenting key challenges for the area, yet it is regularly omitted from discussions of policy design and administration. We present and discuss FORGOOD, an ethics framework that synthesizes the debate on the ethics of nudging in a memorable mnemonic. It suggests that nudgers should consider seven core ethical dimensions: Fairness, Openness, Respect, Goals, Opinions, Options and Delegation. The framework is designed to capture the key considerations in the philosophical debate about nudging human behaviour, while also being accessible for use in a range of public policy settings, as well as in training.
Fifteen months into the Democratic Republic of Congo’s latest Ebola outbreak, we are still asking people to overcome the fear of an indiscriminate disease and accept an intimidating medical process while communicating in a way that often creates confusion and frustration.
A Field Guide to “Fake News” and Other Information Disorders explores the use of digital methods to study false viral news, political memes, trolling practices and their social life online. It responds to an increasing demand for understanding the interplay between digital platforms, misleading information, propaganda and viral content practices, and their influence on politics and public life in democratic societies.
In this paper, we discussed multiple ways in which behavior change interventions can backfire. We provided a framework to help facilitate discussion of this topic, and created tools to aid academics in studying this area and to help practitioners remain mindful of the potential risks.
There are six implications I've drawn from this initial analysis:
- Authentic engagement - embed authentic engagement and feedback processes all through the campaign development journey
- Behaviour change levers audit - identify and review all of the behaviour change levers, not just those where communications can make a difference
- Medium, message, messenger - critically analyse the relationship between these for each creative execution
- Authentic inclusion - ensure diversity is embedded into your teams and planning processes and that this inclusion is authentic and supportive
- The constraints of comms - recognise circumstances where communications are not the most effective behaviour change and/or confidence building lever
- Communications don't take place in a vacuum - reflect on how communications can have an impact on the system outside of comms touchpoints
How to conduct user research for systems with confidential or otherwise sensitive data, for example in domains like healthcare or financial services, where it can be problematic to record screens or otherwise share the user's information.
Yet the characteristics of online environments – the deliberate design and the ability to generate enormous quantities of data about how we behave, who we interact with and the choices we make, coupled with the potential for mass experimentation – can also leave consumers open to harm and manipulation. Many of the failures and distortions in online markets are behavioural in nature, from the deep information asymmetries that arise as a result of consumers being inattentive to online privacy notices to the erosion of civility on online platforms. This paper considers how governments, regulators and at least some businesses might seek to harness our deepening understanding of human behaviour to address these failures, and to shape and guide the evolution of digital markets and online environments that really do work for individuals and communities.
To explore advertising and marketing’s capacity for empathy, we’ve turned to cutting edge moral psychology. In this white paper we are asking people working in the advertising and marketing industry to consider the deepest questions about their identity, ethics and morals.
A visualisation of all available AI principles. There is no need for new ones, but there is a need to operationalise and contextualise them across the AI life cycle, for all stakeholders involved: data scientists, business execs, procurement and regulators.
Consumers, employees, students, and others are often subjected to “sludge”: excessive or unjustified frictions, such as paperwork burdens, that cost time or money; that may make life difficult to navigate; that may be frustrating, stigmatizing, or humiliating; and that might end up depriving people of access to important goods, opportunities, and services. Because of behavioral biases and cognitive scarcity, sludge can have much more harmful effects than private and public institutions anticipate. To protect consumers, investors, employees, and others, firms, universities, and government agencies should regularly conduct Sludge Audits to catalogue the costs of sludge, and to decide when and how to reduce it. Much of human life is unnecessarily sludgy. Sludge often has costs far in excess of benefits, and it can hurt the most vulnerable members of society.
However, nudges aimed at reducing carbon emissions could have a pernicious indirect effect if they offer the promise of a ‘quick fix’ and thereby undermine support for policies of greater impact.
So what counts as the “right” kind of problem for behavioral science to solve? Put more bluntly: How might our sense about what we should solve, or even what qualifies as a problem worth solving, be biased by how we think about what we can solve?
To ensure these partnerships are beneficial to all involved—companies, employees, customers, and researchers—behavioral scientists need a set of ethical standards for conducting research in companies. To address this need, we created The Behavioral Scientist’s Ethics Checklist. In the checklist, we outline six key principles and questions that behavioral scientists and companies should ask themselves before beginning their research. To illustrate how each principle operates in practice, we provide mini case studies highlighting the challenges other researchers and companies have faced.
A graphic of three layers: 1) what you share, 2) what your behavior tells them, 3) what the machine thinks about you.