SBC Menu of Results and Indicators | UNICEF SBC GUIDANCE
The indicators in this dashboard are a compilation of existing indicators and results that UNICEF uses across multiple programming areas and approaches. This list has been vetted and compiled by UNICEF's SBC team in collaboration with the sectors and cross-sectoral teams in the organization.
HealthMeasures - Transforming how health is measured
HealthMeasures consists of PROMIS, Neuro-QoL, ASCQ-Me, and NIH Toolbox. These four precise, flexible, and comprehensive measurement systems assess physical, mental, and social health, symptoms, well-being, and life satisfaction, along with sensory, motor, and cognitive function.
Introducing the Behavior Change Score
100+ Items, 14 Mechanisms, 1 Journey. Our goal with BCS is to offer a systematic yet adaptable methodology that makes it easier for product teams to capture the important details necessary for effective behavior change. To allow for that, we have chosen to focus on 14 Behavioral Science mechanisms, as opposed to focusing on individual nudges, which may or may not generalize to a given context.
(PDF) Short and extra-short forms of the Big Five Inventory–2: The BFI-2-S and BFI-2-XS
(PDF) The Next Big Five Inventory (BFI-2): Developing and Assessing a Hierarchical Model With 15 Facets to Enhance Bandwidth, Fidelity, and Predictive Power
Handling Sensitive Questions in Surveys and Screeners
(PDF) How to Measure Motivation: A Guide for the Experimental Social Psychologist
This article examines cognitive, affective, and behavioral measures of motivation and reviews their use throughout the discipline of experimental social psychology. We distinguish between two dimensions of motivation (outcome-focused motivation and process-focused motivation). We discuss circumstances under which measures may help distinguish between different dimensions of motivation, as well as circumstances under which measures may capture different dimensions of motivation in similar ways.
Why Use 40 Participants in Quantitative UX Research? - YouTube
3:40 - 40 participants gives a 15% margin of error and 95% confidence level (binary metrics)
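That figure is just the standard margin-of-error formula at work; a quick sketch to check it (assuming the worst-case p = 0.5 for a binary metric, which the video implies):

```python
import math

# Margin of error for a binary metric at 95% confidence (z = 1.96),
# assuming the worst case p = 0.5, which maximizes p * (1 - p).
n, z, p = 40, 1.96, 0.5
moe = z * math.sqrt(p * (1 - p) / n)
print(f"n = {n}: margin of error = ±{moe:.1%}")  # ±15.5%, i.e. roughly 15%
```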
Personality distribution research data - Big five trait scores for 307,313 people from many different countries.
It’s Time to Change the Way We Write Screeners | Sago
And remember, keeping screeners under 12 questions is the magic number to prevent attrition.
Badly designed surveys don’t promote sustainability, they harm it
Ditch “Statistical Significance” — But Keep Statistical Evidence | by Eric J. Daza, DrPH, MPS | Towards Data Science
“significant” p-value ≠ “significant” finding: The significance of statistical evidence for the true X (i.e., statistical significance of the p-value for the estimate of the true X) says absolutely nothing about the practical/scientific significance of the true X. That is, significance of evidence is not evidence of significance. Increasing your sample size in no way increases the practical/scientific significance of your practical/scientific hypothesis.

“significant” p-value = “discernible” finding: The significance of statistical evidence for the true X does tell us how well the estimate can discern the true X. That is, significance of evidence is evidence of discernibility. Increasing your sample size does increase how well your finding can discern your practical/scientific hypothesis.
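A minimal simulation of the distinction (the effect size and sample sizes here are illustrative): the same tiny true effect becomes ever more “discernible” as n grows, while its practical significance never changes.

```python
import numpy as np
from scipy import stats

# Same (practically negligible) true effect at every n; only the
# p-value changes as the sample grows.
rng = np.random.default_rng(0)
true_effect = 0.05  # in SD units: too small to matter in practice

for n in (50, 500, 50_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(a, b)
    print(f"n = {n:>6}: observed diff = {b.mean() - a.mean():+.3f}, p = {p:.4f}")
```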
Comparing Two Types of Online Survey Samples - Pew Research Center Methods | Pew Research Center
Opt-in samples are about half as accurate as probability-based panels
How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables - Journal of Cognition
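For the textbook case such reference tables cover (two-sample t-test, medium effect, 80% power), a one-liner reproduces the standard answer; the parameters below are conventional defaults, not a recommendation from the tutorial itself.

```python
from statsmodels.stats.power import TTestIndPower

# Per-group n for a two-sample t-test: d = 0.5, alpha = .05, power = .80
n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"~{n:.0f} participants per group")  # ~64
```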
Best Practices for Developing and Validating Scales for Health, Social, and Behavioral Research: A Primer
How to use a new generation data collection and analysis tool? - The Cynefin Co
This is SenseMaker in its most simple form, usually structured to have an open (non-hypothesis) question (commonly referred to as a ‘prompting question’) to collect a micro-narrative at the start. This is then followed by a range of triads (triangles), dyads (sliders), stones canvases, free text questions and multiple choice questions.

The reason or value for using SenseMaker: open free text questions are used at the beginning as a way of scanning for diversity of narratives and experiences. This is a way to remain open to ‘unknown unknowns’. The narrative is then followed by signifier questions that allow the respondent to add layers of meaning and codification to the narrative (or experience) in order to allow for mixed-methods analysis, to map and explore patterns.
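To make the shape concrete, here is a sketch of such an instrument as a plain data structure; the field names and wording are hypothetical, not SenseMaker's actual schema.

```python
# Illustrative only: a hypothetical instrument spec, not SenseMaker's format.
instrument = {
    "prompting_question": "Describe a recent experience that stands out for you...",
    "signifiers": [
        {"type": "triad", "label": "In your story, what mattered most?",
         "anchors": ["safety", "cost", "convenience"]},
        {"type": "dyad", "label": "The outcome felt...",
         "ends": ["within my control", "outside my control"]},
        {"type": "multiple_choice", "label": "Who was involved?",
         "options": ["family", "community", "officials", "other"]},
    ],
}
```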
selfdeterminationtheory.org – An approach to human motivation & personality
Info, research, questionnaires/scales, info on application to specific topics
Measuring Intrinsic Motivation: 24 Questionnaires & Scales
Sample Size Calculator and Guide to Survey Sample Size - Conjointly
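The math behind such calculators is compact enough to sketch: the standard formula for estimating a proportion, plus the finite-population correction that calculators like this typically apply.

```python
import math

def survey_sample_size(margin=0.05, z=1.96, p=0.5, population=None):
    """Sample size for estimating a proportion within +/- margin at the
    confidence level implied by z, with optional finite-population correction."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

print(survey_sample_size())                  # 385 for ±5% at 95% confidence
print(survey_sample_size(population=2000))   # 323 once the population is small
```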
IndiKit - Guidance on SMART Indicators for Relief and Development Projects | IndiKit
Jo Evershed on Twitter: “Engaged participants are the secret to High-Quality Data. Foster engagement & data collection will be a breeze.
With this thread, you’ll learn 9 lessons:
1. The Data Quality Framework
2. The Participant Relationship
3. Perfect introductions
4. Instructions that work
5. Helpful signposting
6. An enjoyable experience
7. Getting feedback
8. Progressive piloting
9. Types of quality control
Using Commitment Requests Instead of Attention Checks
Qualtrics’ recommendation is to use the commitment request, as it performed the best. However, the textual and factual attention checks also performed better than the control.
The Issue of Noncompliance in Attention Check Questions: False Positives in Instructed Response Items
Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion of respondents evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed the test, 61% seem to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern with attention check instruments. The results of our experiment showed that more respondents passed the attention check if a comprehensible reason was given.
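One practical takeaway: flag failures rather than silently dropping them, so exclusion rates can be reported. A small pandas sketch (the column names are hypothetical):

```python
import pandas as pd

# "attention_item" is an instructed response item ("Please select 'Agree'"),
# coded 1 if the instruction was followed, 0 otherwise.
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "attention_item": [1, 1, 0, 1],
})

# Flag failures instead of dropping them outright: the study above suggests
# many failures are deliberate noncompliance, not inattention.
df["failed_check"] = df["attention_item"].eq(0)
print(f"{df['failed_check'].mean():.0%} flagged")  # report this transparently
```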
Examining Completion Rates in Web Surveys via Over 25,000 Real-World Surveys - Mingnan Liu, Laura Wronski, 2018
A survey’s completion rate is one of its most important data quality measures. There are quite a few published studies examining web survey completion rate through experimental approaches. In this study, we expand the existing literature by examining the predictors of web survey completion rate using 25,080 real-world web surveys conducted by a single online panel. Our findings are consistent with the literature on some dimensions, such as finding a negative relationship between completion rate and survey length and question difficulty. Also, surveys without progress bars have higher completion rates than surveys with progress bars. This study also generates new insights into survey design features, such as the impact of the first question type and length on completion rate. More: https://twitter.com/nielsmede/status/1576234663341064192?s=20&t=kSwdGBBuVv1yiqo1lE4vbw
Small Sample Size Solutions | A Guide for Applied Researchers and Practitioners
Your Data Playbook is ready. Download it now! - Solferino Academy
The Data Playbook is 120 exercises, games, scenarios, slides and checklists to assist you and your teams on your data journey. The social learning content is designed for teams to have discussions and activities across the data lifecycle in short 30 minute to 1 hour sessions.
5 Tips for Smarter System Design, with Raph Koster - YouTube
0:00 Introduction
1:10 Principle 1: Identify the Objects
2:01 Principle 2: Identify the Numbers
3:04 Principle 3: Identify the Verbs
5:52 Principle 4: Set Bounds on Numbers
7:25 Principle 5: Build a Dashboard
Meaningless Measurement – johnJsills
Broadly, these feedback surveys can be categorised into five groups: the pointless; the self-important; the immoral; the demanding; and the downright weird.
EMERGE – Evidence-based Measures of Empowerment for Research on Gender Equality – UC SAN DIEGO
EMERGE (Evidence-based Measures of Empowerment for Research on Gender Equality) is a project focused on gender equality and empowerment measures to monitor and evaluate health programs and to track progress on UN Sustainable Development Goal (SDG) 5: Achieve Gender Equality and Empower All Women and Girls. As reported by UN Women (2018), only 2 of the 14 SDG 5 indicators have accepted methodologies for measurement and data widely available. Of the remaining 12, 9 are indicators for which data are collected and available in only a limited number of countries. This assessment suggests notable measurement gaps in the state of gender equality and empowerment worldwide. EMERGE aims to improve the science of gender equality and empowerment measurement by identifying these gaps through the compilation and psychometric evaluation of available measures and supporting scientifically rigorous measure development research in India.
CDC Announces Plan To Send Every U.S. Household Pamphlet On Probabilistic Thinking
Motivating Seasonal Influenza Vaccination and Cross-Promoting COVID-19 Vaccination: An Audience Segmentation Study among University Students
Meta-Analysis Learning Information Center
The Meta-Analysis Learning Information Center (MALIC) believes in equitably providing cutting-edge and up-to-date techniques in meta-analysis to researchers in the social sciences, particularly those in education and STEM education.
www.postalexperience.com/pos - USPS Customer Satisfaction Survey
An awful example of a landing page!
Practical easy hands-on beginner R RMarkdown workshop | Open Science workshops | Gilad Feldman - YouTube
“Genderfluid” or “Attack Helicopter”: Responsible HCI Practice with Non-Binary Gender Variation in Online Communities
For HCI survey research broadly, we recommend using a question similar to the first question in [2]’s measure (as quoted in [3]) – “Are you…?” with three response options: “man,” “woman,” “something else: specify [text box]” – and allowing respondents to choose multiple options. This question will not identify all trans participants [3], but is inclusive to non-binary and trans people and will identify gender at a level necessary for most HCI research. To reduce trolling, we recommend providing the fill-in-the-blank text box as a second step only for those respondents who choose the “something else” option.
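Encoded as a question spec, that recommendation might look like the sketch below; the field names are hypothetical, not any survey platform's real schema.

```python
# Hypothetical field names; this just encodes the paper's recommendation.
gender_question = {
    "prompt": "Are you...?",
    "options": ["man", "woman", "something else: specify"],
    "allow_multiple": True,  # respondents may select several options
    # Second step, shown only after "something else" is chosen, to reduce trolling:
    "free_text_shown_for": "something else: specify",
}
```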
Alexander L. Francis on Twitter: “I love it that one of my students suggested we change the default “Other (please specify)” option to “Not Listed (please specify)” in a demographic survey. Explicitly *not* “othering” participants while still asking for the info we want.”
I love it that one of my students suggested we change the default “Other (please specify)” option to “Not Listed (please specify)” in a demographic survey. Explicitly *not* “othering” participants while still asking for the info we want. Any implied failure is on us, not them.
WTF Visualizations
How Many Participants for Quantitative Usability Studies: A Summary of Sample-Size Recommendations
40 participants is an appropriate number for most quantitative studies, but there are cases where you can recruit fewer users.
Net Promoter Score Considered Harmful (and What UX Professionals Can Do About It) | by Jared M. Spool | Noteworthy - The Journal Blog
Likert Scale Examples for Surveys
Meaningful change definitions: sample size planning for experimental intervention research
Sex and/or gender - working together to get the question right - Digital
We Analyzed 2,810 Profiles to Calculate Facebook Engagement Rate
Same Stats, Different Graphs - the Datasaurus Dozen
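The same lesson long predates the Datasaurus: Anscombe's quartet. A quick numpy check (data from the classic 1973 paper) shows two very differently shaped datasets with near-identical summary statistics.

```python
import numpy as np

# Anscombe's quartet, sets I and II (shared x values).
x  = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])

for name, y in (("I", y1), ("II", y2)):
    r = np.corrcoef(x, y)[0, 1]
    print(f"set {name}: mean = {y.mean():.2f}, sd = {y.std(ddof=1):.2f}, r = {r:.3f}")
# Both print ~7.50 / ~2.03 / ~0.816 -- only plotting reveals the difference.
```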
Just-in-Time Adaptive Interventions and Adaptive Interventions – The Methodology Center
How to Analyze Instagram Stories: 7 Metrics to Track : Social Media Examiner
How scientists can stop fooling themselves over statistics
Behaviour change 101: How to do a Rapid Review | LinkedIn
In our work at BehaviourWorks Australia (BWA) we are frequently asked ‘What does the research say about getting audience Y to do behaviour X?’. When our partners need an urgent answer we often provide it using a Rapid Review. In this article I explain Rapid Reviews, why you should do them, and a process that you can follow to conduct one.

What is a Rapid Review? Rapid Reviews are “a form of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a timely manner” [1]. Indeed, with sufficient resources (e.g., multiple staff working simultaneously) you can do a Rapid Review in less than a day. The outputs of these reviews are, of course, brief and descriptive, but they can be very useful where rapid evidence is needed, for example, in addressing COVID-19. Rapid Reviews can therefore provide detailed research within reduced timeframes and also meet most academic requirements by being standardised and reproducible. They are often, but not always, publishable in peer-reviewed academic journals.
Development and Testing of a Short Form of the Patient Activation Measure
The Patient Activation Measure is a valid, highly reliable, unidimensional, probabilistic Guttman‐like scale that reflects a developmental model of activation. Activation appears to involve four stages: (1) believing the patient role is important, (2) having the confidence and knowledge necessary to take action, (3) actually taking action to maintain and improve one's health, and (4) staying the course even under stress. The measure has good psychometric properties indicating that it can be used at the individual patient level to tailor intervention and assess changes. (https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1475-6773.2004.00269.x)
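The “Guttman-like” structure can be made concrete: on a cumulative scale, a respondent at stage 3 should also endorse stages 1 and 2. A toy sketch of a reproducibility check (made-up data, simplified Goodenough-Edwards error counting):

```python
import numpy as np

# Rows = respondents, columns = items; 1 = endorsed. Data is invented.
responses = np.array([
    [1, 1, 1, 0],  # consistent with a cumulative (Guttman) pattern
    [1, 1, 0, 0],
    [1, 0, 1, 0],  # deviates: endorses a harder item but not an easier one
    [1, 1, 1, 1],
])

order = np.argsort(-responses.sum(axis=0))  # items, easiest first
ordered = responses[:, order]
scores = ordered.sum(axis=1)
# A respondent with total score s "should" endorse exactly the s easiest items.
predicted = (np.arange(ordered.shape[1]) < scores[:, None]).astype(int)
errors = int((ordered != predicted).sum())
print(f"coefficient of reproducibility = {1 - errors / responses.size:.3f}")
```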