Writing Publishable Mixed Research Articles: Guidelines for Emerging Scholars in the Health Sciences and Beyond
How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies - ScienceDirect
Hello, and Thanks for All the Fish: Tips for effective research recruiting
Data Playbook Toolkit | Global Disaster Preparedness Center
The Data Playbook Beta is a recipe book or exercise book with examples, best practices, how-tos, session plans, training materials, matrices, scenarios, and resources. The data playbook will provide resources for National Societies to develop their literacy around data, including responsible data use and data protection. The content aims to be visual, remixable, collaborative, useful, and informative. There are nine modules, 65 pieces of content, and a methodology for sharing curriculum across all the sectors and networks. Material has been compiled, piloted, and tested in collaboration with many contributors from across IFRC and National Societies. Each module has a recipe that puts our raw materials in suggested steps to reach a learning objective. To help support you in creating your own recipe, we also include a listing of 'ingredients' for a topic, organised by type:
What’s Wrong With Your Survey? How to Reduce Error and Increase Reliability
Your Friendly Guide to Colors in Data Visualisation | Chartable
Evaluating Effect Size in Psychological Research: Sense and Nonsense - David C. Funder, Daniel J. Ozer, 2019
Daniel J. O’Keefe PUBLICATIONS AND PAPERS
research on health comm messaging effects
How to spot a statistical problem: advice for a non-statistical reviewer | BMC Medicine | Full Text
4 Ways to Turn Eye-Glazing Data Into Eye-Opening Stories | Inc.com
Unconventional Techniques for Better Insights from Satisfaction Surveys
What science reporters should know about meta-analyses before covering them
Reference Collection to push back against “Common Statistical Myths“ - data analysis - Datamethods Discussion Forum
How to use Screening Questions to Select the Right Participants for User Research
Social and Behavior Change Monitoring Guidance | Breakthrough ACTION and RESEARCH
Breakthrough ACTION has distilled guidance on social and behavior change (SBC) monitoring methods into a collection of technical notes. Each note provides an overview of a monitoring method that may be used for SBC programs along with a description of when to use the method and its strengths and weaknesses.
Understanding how and why people change - Journal of Marketing Management
We applied a Hidden Markov Model (see Figure 1) to examine how and why behaviours did or did not change. The longitudinal repeated-measures design meant we knew about food waste behaviour at two points (the amount of food wasted before and after the program), changes in the amount of food wasted reported over time for each household (more or less food wasted), and other factors (e.g. self-efficacy). By using a new method, we could extend our understanding beyond the overall effect (households in the Waste Not Want Not program group wasted less food after participating, compared with the control group).
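To make the idea concrete, here is a minimal sketch of the forward algorithm for a two-state Hidden Markov Model. All parameters and state/observation names below are hypothetical illustrations, not the paper's fitted values: latent states are "low"/"high" waste households, and observations are reported waste levels at repeated measurements.

```python
# Illustrative sketch only: a minimal two-state HMM with made-up parameters.
# Latent states: "low" / "high" waste households.
# Observations: reported waste level ("less" / "more") at each measurement.

def forward_likelihood(obs, start, trans, emit):
    """Forward algorithm: probability of the observation sequence under the HMM."""
    states = list(start)
    # Initialise with start probability times emission of the first observation.
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Sum over all paths into each state, then emit the next observation.
        alpha = {
            s: sum(alpha[r] * trans[r][s] for r in states) * emit[s][o]
            for s in states
        }
    return sum(alpha.values())

# Hypothetical parameters for illustration.
start = {"low": 0.4, "high": 0.6}
trans = {"low": {"low": 0.8, "high": 0.2},    # low-waste households tend to stay low
         "high": {"low": 0.5, "high": 0.5}}   # a program may shift high -> low
emit = {"low": {"less": 0.9, "more": 0.1},
        "high": {"less": 0.3, "more": 0.7}}

p = forward_likelihood(["more", "less"], start, trans, emit)
print(round(p, 4))  # probability of observing "more" then "less"
```

Fitting such a model to real panel data would estimate `trans` and `emit` from the repeated measures; the transition matrix is what tells you who changed and in which direction.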
How do I delete my search history? And other questions | The Behavioural Insights Team
Design and statistical considerations in the evaluation of digital behaviour change interventions | UCL CBC Digi-Hub Blog
The Question Protocol: How to Make Sure Every Form Field Is Necessary :: UXmatters
Learning what our target audiences think and do: extending segmentation to all four bases
The aim of this study was to establish whether distinct segments were evident in a sexual health context, drawing on measures sourced from four segmentation bases and thereby extending the application of segmentation to all recommended bases. This study indicates how researchers can use two-step cluster analysis to identify segments: groups of individuals who share similar characteristics that differ from other groups in the larger, heterogeneous target audience. Further, it demonstrates how available information can be used to deliver a dashboard that informs program design and planning.
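The study uses two-step cluster analysis (an SPSS procedure that combines pre-clustering with hierarchical merging), which is not sketched here. As a rough stand-in for the grouping idea, the toy k-means run below clusters made-up survey scores so that respondents who answer similarly land in the same segment; all data and scale choices are invented for illustration.

```python
# Toy k-means as a stand-in for two-step clustering: group respondents
# with similar (attitude, self-efficacy) scores into segments.
from statistics import mean

def kmeans(points, centroids, iters=10):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute each centroid as its cluster's mean."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [
            tuple(mean(coord) for coord in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# Made-up (attitude, self-efficacy) scores on a 1-7 scale.
responses = [(1.2, 2.0), (1.5, 2.2), (2.0, 1.8),   # a low-scoring segment
             (6.1, 5.9), (5.8, 6.3), (6.4, 6.0)]   # a high-scoring segment
centroids, clusters = kmeans(responses, [(1.0, 1.0), (7.0, 7.0)])
print([len(cl) for cl in clusters])   # two segments of three respondents each
```

Real segmentation work would also choose the number of clusters (e.g. via fit statistics) and profile each segment on all four bases before building the dashboard.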
How to Visualize Statistically Significant P-Values with Squares | Depict Data Studio
What Does Probability Mean in Your Profession? – Math with Bad Drawings
Healthcare access: A sequence-sensitive approach - ScienceDirect
• Despite its sequential nature, healthcare seeking is often analysed as a single event.
• We demonstrate the value of sequential healthcare data analysis.
• Descriptive analysis exposes otherwise neglected behavioural patterns.
• Sequence-insensitive indicators can be inconsistent and misleading.
• Sequence-sensitive evaluation hints at adverse behaviours of wealthy patients.
Graphics Principles Cheat Sheet v1.0 (pdf)
Effective visualizations communicate complex statistical and quantitative information, facilitating insight, understanding, and decision making. But what makes a graph effective? This cheat sheet provides general guidance and points to consider.
Time to Scale Psycho-Behavioral Segmentation in Global Development
Why 51% in a survey isn't necessarily a 'majority' | Pew Research Center
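The Pew point above comes down to sampling error. As a quick illustration (the sample size n = 1000 is an assumed example, not a figure from the article), the normal-approximation margin of error shows why a 51% result can be statistically indistinguishable from 50%:

```python
# Rough sketch: ~95% margin of error for a sample proportion,
# using the normal approximation. n = 1000 is an assumed example.
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.51, 1000
moe = margin_of_error(p, n)
print(f"51% +/- {moe:.1%}")                 # roughly +/- 3.1 points
print("CI includes 50%:", p - moe < 0.50)   # True: the 'majority' is not assured
```

With a margin of error around 3 points, the interval for a 51% result spans roughly 48% to 54%, so the true value could plausibly be below half.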
If You Say Something Is “Likely,” How Likely Do People Think It Is?
The next time you find yourself stating that a deal or other business outcome is “unlikely” or, alternatively, is “virtually certain,” stop yourself and ask: What percentage chance, in what time period, would I put on this outcome? Frame your prediction that way, and it’ll be clear to both yourself and others where you truly stand.
Using a cascade approach to assess condom uptake in female sex workers in India
Typically, cascades are based on HIV treatment monitoring data, which focus on getting people living with HIV to a point of viral suppression. HIV prevention cascades focus on the steps required to prevent HIV infection and successfully implement HIV prevention programs. Prevention cascades include demand-side interventions that focus on increasing awareness, acceptability and uptake of prevention interventions, supply-side interventions that make prevention interventions more accessible and available, and adherence interventions that support ongoing adoption of and compliance with prevention behaviours or products...
When to Use Which User-Experience Research Methods
User Research: is more the merrier? – UX Collective
Small, medium or large — what sample size of users fits your study is a composite question. The magic number of 5 users may work magic in some studies, while in others it may not. It depends on the constraints imposed by project requirements, assumptions about problem discoverability, and implications for the design process. Assess these factors to determine the number of users for your study:
• What is the nature and scope of the research — is it exploratory or validatory?
• Who and what kind of users are you planning to study?
• What are the budget and time available to finish the study?
• Does your research involve presenting statistically significant numbers or inferring behavioural estimates for the problem statement?
Webinar - Identifying and Dealing with "Bad" Survey Respondents: the Role of Attention Check Questions | Qualtrics
Lesson: Use a "commitment" question instead of attention-check questions.
MTurk Tutorials for Researchers and Academics
Using Attention Checks in Your Surveys May Harm Data Quality | Qualtrics
Potential Requestor here...what are your reactions to seeing "attention checks"? : mturk
0.05 or 0.005? P-value Wars Continue – Science-Based Medicine
For fields where the threshold for defining statistical significance for new discoveries is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called “significant” but do not meet the new threshold should instead be called “suggestive.”
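The proposal quoted above amounts to a simple relabelling rule; a minimal sketch of it (the function name and wording are my own, not from the paper):

```python
# Labelling rule from the p < 0.005 proposal: results in the 0.005-0.05
# band become "suggestive" rather than "significant".

def label_result(p_value):
    if p_value < 0.005:
        return "significant"
    if p_value < 0.05:
        return "suggestive"
    return "not significant"

print(label_result(0.001))  # significant under either threshold
print(label_result(0.03))   # significant under p < 0.05, suggestive under the proposal
```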
Inferring App Demand from Publicly Available Data by Rajiv Garg, Rahul Telang :: SSRN
Why Not to Trust Statistics | Math with Bad Drawings
How One Little Number Erodes Trust in Science - Pacific Standard
Behavioral Design: When to Fire a Cannon and When to Use a Precision Knife | Nicolae NAUMOF | LinkedIn
How to Craft an Engaging Narrative with Data | Matt Cooper | LinkedIn
How nonprofits can measure what matters in Google Analytics
How Data Science Shaped This Teen-Counseling-By-Text Service | Co.Exist | ideas impact
How to Conduct a Pretest | The Health COMpass
UNDERSTANDING METRICS Guides - Media Impact Project
Web Metrics, YouTube Basics and Mobile Metrics Guides
The End of Theory: The Data Deluge Makes the Scientific Method Obsolete
Anscombe's quartet - Wikipedia, the free encyclopedia
Anscombe's quartet comprises four datasets that have nearly identical simple statistical properties, yet appear very different when graphed.
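The quartet's point is easy to reproduce. Using the published values from the Wikipedia article (datasets I and II shown here), the summary statistics match to two or three decimal places even though the scatterplots look nothing alike:

```python
# Anscombe's quartet, datasets I and II: near-identical summary statistics
# despite very different shapes when plotted. Values from the Wikipedia article.
from statistics import mean, variance
import math

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]          # shared by datasets I-III
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

for name, y in [("I", y1), ("II", y2)]:
    print(name, round(mean(y), 2), round(variance(y), 2), round(pearson(x, y), 3))
# Both: mean y = 7.5, sample variance = 4.13, correlation = 0.816,
# yet dataset I plots as a noisy line and dataset II as a clean parabola.
```

This is the standard argument for always plotting data before trusting summary statistics.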