Unlike previous research, we were unable to show that higher monetary incentives were more effective for increasing response rates. An AUD$20 unconditional incentive may be no more effective than a lesser amount for encouraging prostate cancer survivors to participate in research involving long questionnaires.
OpenRefine is a powerful free, open source tool for working with messy data: cleaning it; transforming it from one format into another; and extending it with web services and external data.
Today Renaisi launches a new model for evaluating place-based systems change. Lily O’Flynn, Principal Consultant for Place-based Evaluation & Learning, describes the model and why it solves the problem of evaluating change in places for funders, commissioners, and practitioners.
The indicators in this dashboard are a compilation of existing indicators and results that UNICEF uses across multiple programming areas and approaches. This list has been vetted and compiled by UNICEF's SBC team in collaboration with the sectors and cross-sectoral teams in the organization.
All Behavior Change publications in one place
A practical, interactive tool to help you develop a systematic understanding of the influences on your target behaviour in your target population.
HealthMeasures consists of PROMIS, Neuro-QoL, ASCQ-Me, and NIH Toolbox. These four precise, flexible, and comprehensive measurement systems assess physical, mental, and social health, symptoms, well-being, and life satisfaction, along with sensory, motor, and cognitive function.
100+ Items, 14 Mechanisms, 1 Journey

Our goal with BCS is to offer a systematic yet adaptable methodology that makes it easier for product teams to capture the important details necessary for effective behavior change. To allow for that, we have chosen to focus on 14 Behavioral Science mechanisms, as opposed to focusing on individual nudges, which may or may not generalize to a unique context.
The development of effective interventions for COVID-19 vaccination has proven challenging given the unique and evolving determinants of that behavior. A tailored intervention to drive vaccination uptake through machine learning-enabled personalization of behavior change messages unexpectedly yielded a high volume of real-time short message service (SMS) feedback from recipients. A qualitative analysis of those replies contributes to a better understanding of the barriers to COVID-19 vaccination and demographic variations in determinants, supporting design improvements for vaccination interventions. Objective: The purpose of this study was to examine unsolicited replies to a text message intervention for COVID-19 vaccination to understand the types of barriers experienced and any relationships between recipient demographics, intervention content, and reply type. Method: We categorized SMS replies into 22 overall themes. Interrater agreement was very good (all κpooled ≥ 0.62). Chi-square analyses were used to understand demographic variations in reply types and which messaging types were most related to reply types. Results: In total, 10,948 people receiving intervention text messages sent 17,090 replies. Most frequent reply types were “already vaccinated” (31.1%), attempts to unsubscribe (25.4%), and “will not get vaccinated” (12.7%). Within “already vaccinated” and “will not get vaccinated” replies, significant differences were observed in the demographics of those replying against expected base rates, all p < .001. Of those stating they would not vaccinate, 34% of the replies involved mis-/disinformation, suggesting that a determinant of vaccination involves nonvalidated COVID-19 beliefs. Conclusions: Insights from unsolicited replies can enhance our ability to identify appropriate intervention techniques to influence COVID-19 vaccination behaviors.
3:40 - 40 participants gives a 15% margin of error and 95% confidence level (binary metrics)
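The 15% figure above follows from the standard margin-of-error formula for a binary proportion at 95% confidence, using the worst-case proportion p = 0.5. A minimal sketch (function name and structure are illustrative, not from the source):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a binary proportion at ~95% confidence.

    p=0.5 is the worst case (largest margin); z=1.96 is the
    two-sided critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# 40 participants -> roughly a 15-point margin of error
print(round(margin_of_error(40) * 100, 1))  # ~15.5
```

This is why small usability-style samples are fine for directional reads on binary metrics but cannot detect modest differences between variants.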
Big five trait scores for 307,313 people from many different countries.
The first objective was to provide an overview of all activities that were employed during the course of a research project to develop a relapse prevention intervention for interdisciplinary pain treatment programs. The second objective was to examine how co-design may contribute to stakeholder involvement, generation of relevant insights and ideas, and incorporation of stakeholder input into the intervention design.
Believe it or not, analyzing seemingly unrelated data can reveal hidden truths. Take the Pentagon, the nerve center of the U.S. military. While classified briefings and high-level meetings happen behind closed doors, open-source data can offer clues about what might be brewing. Here’s where things get interesting. We can use Google Trends data to track searches for “Pentagon pizza delivery” and nearby “gay bars.” Why pizza and bars? Increased late-night activity might indicate longer work hours for Pentagon staff, potentially signifying preparation for a major event.
Could this guide us towards a structured approach for assessing the level of community involvement in SBC programmes? At the highest level, “Citizen Control”, communities independently lead programmes with full decision-making authority. “Delegated Power” and “Partnership” designate significant community influence on programme decisions, either through majority control or collaborative governance. In contrast, “Placation”, “Consultation”, and “Informing” indicate lower degrees of participation, where community input may be sought but is not necessarily instrumental in shaping outcomes.
In my research, I focus on three things that ran through people’s minds when they were working toward something. These three things are: inner thinking (thoughts, pondering, reasoning); emotional reactions (feelings, moods); and guiding principles (personal rules).
And remember, keeping screeners under 12 questions is the magic number to prevent attrition.
Matt Waksman, Ogilvy UK’s head of strategy for advertising, illustrates and interprets the role of the strategist within advertising and wider society.
Thinking Styles are the archetypes that you would base characters on, like characters in TV episodes. (Try writing your scenarios like TV episodes, with constant characters.) Characters think, react, and make decisions based on their thinking style archetype. BUT they also switch thinking styles depending on context. For example, if you take a flight as a single traveler versus bringing a young child along, you’ll probably change your thinking style for that flight, including getting to the gate, boarding, and deplaning.
100+ open source innovation tools from the greatest design & strategy agencies in the world. Ideal for both offline and online workshops. All tools are pixel-perfectly packaged in a vectorized PDF or PNG and can be downloaded for free.
If you’re trying to think and act more creatively and more critically, focus on asking better, more interesting questions of the briefs you’re tasked with answering. What we teach children can and should be applied to our own professional lives, too. A focus on problems and solutions first promotes consistent, ‘safe’ answers, but won’t move the work on. Spending time on asking and answering better questions will help refine the understanding of a problem and will create the conditions for new, interesting and challenging solutions.
I sometimes make a further suggestion to client teams who have years of experience working directly (via research) with the diversity of the people their organization supports. I suggest they abandon “persona” (a representation of a person) and replace it with “behavioral audience segment” (a representation of a group). (Note: I have begun calling these “thinking styles” to emphasize that a person can change to a different group based on context or experience.) This change allows those qualified teams to get away from names and photos. I don’t suggest this for everyone. Note: “Behavioral audience segment” is the name I use, although there may be a better one. In its defense, Susan Weinschenk uses “behavioral science” to mean what I am trying to represent. And “audience segment” is a common way to express a group an organization is focused on.
But she did explain how researching and designing for the majority or “average user” actually end up ignoring, othering, and harming the people our designs are meant to serve. Indi shared how she finds patterns in people’s behaviors, thoughts, and needs—and how she uses that data to create thinking styles that inform more inclusive design decisions. Indi talked about… Why researchers should look for patterns, not anecdotes, to understand real user needs. What thinking styles are, and how to uncover and use them. Why your “average” user often doesn’t exist in the real world, and how we can do better.
Ikea researchers explore Kiwi homes before opening first NZ store Christine Gough, head of interior design at Ikea Australia, is one of 40 Ikea researchers visiting hundreds of Kiwi homes to gauge what products to stock in its Auckland mega store.
“significant” p-value ≠ “significant” finding: The significance of statistical evidence for the true X (i.e., statistical significance of the p-value for the estimate of the true X) says absolutely nothing about the practical/scientific significance of the true X. That is, significance of evidence is not evidence of significance. Increasing your sample size in no way increases the practical/scientific significance of your practical/scientific hypothesis. “significant” p-value = “discernible” finding: The significance of statistical evidence for the true X does tell us how well the estimate can discern the true X. That is, significance of evidence is evidence of discernibility. Increasing your sample size does increase how well your finding can discern your practical/scientific hypothesis.
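The distinction above can be made concrete with a toy calculation. Hold a (hypothetical) standardized effect size fixed and grow the sample: the z statistic and p-value improve without the practical effect changing at all. This sketch uses only the standard normal approximation; the variable names and effect value are illustrative, not from the source:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the standard normal."""
    return math.erfc(abs(z) / math.sqrt(2))

effect = 0.1  # hypothetical standardized effect (Cohen's d); held fixed
for n in (100, 1000, 10000):
    z = effect * math.sqrt(n)  # z grows with sqrt(n), effect does not
    print(f"n={n:6d}  d={effect}  z={z:5.2f}  p={two_sided_p(z):.3g}")
```

The printed effect column never changes while p drops by many orders of magnitude: larger samples make the same small effect more *discernible*, not more *significant* in the practical sense.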
Opt-in samples are about half as accurate as probability-based panels
If you have ever been tasked with influencing a behaviour, you will know that it is critical to understand that behaviour in context. You need to understand the issues faced by the people affected. At BIT, we refer to the process of understanding behaviour in context as Exploring. Exploring is about discovering what people do and crucially why.
The JTBD Canvas 2.0 is a tool to help you scope out your JTBD landscape prior to conducting field research. It frames your field of inquiry and scopes your innovation effort.