Eight tips for using a word cloud in market research story finding
Technology Transfer and Commercialization Process
Sensemaker - map of subcultures in org
This is a map of subcultures within an organization (it's called a fitness landscape). It's built from stories told by the people in the organization. What can you do with it? Understand where the culture(s) are and request change by saying “I want more stories like these...” and “fewer like those...”. Dave Snowden and The Cynefin Company (formerly Cognitive Edge) offer impactful ways to visualize culture and communicate direction in a manner that is customized to where each subculture is now and what its next best step is. Watch this video until 48:48 for more on the science and method (link at 44:33): https://lnkd.in/emuAzp6E Stories were collected using The Cynefin Co's Sensemaker tool.
Measures | Science of Behavior Change
Your personas probably suck. Here’s how you can build them better. | by Amber Westerholm-Smyth | Personas are Dead, Long Live Personas! | Medium
A five-step framework. In summary, the five steps that we will walk you through are: 1. Ask rich questions, not dumb questions 2. Write a codebook 3. Code your data 4. Map your data 5. Form your personas
A Guide to Complexity-Aware Monitoring Approaches for MOMENTUM Projects - USAID MOMENTUM
Self Assessment: How well is your research engaging target audiences?
User-Feedback Requests: 5 Guidelines
UX Research Templates - Notion
Jobs to be Done Insights Canvas
Sample Size Calculator and Guide to Survey Sample Size - Conjointly
IndiKit - Guidance on SMART Indicators for Relief and Development Projects | IndiKit
In-App Survey Questions: Guidelines and Templates | Instabug
6 Tips for Better Participant Engagement in Diary Studies
Qualitative Social Media Research Resources - Google Docs
Does eating white bread make you feel lonely - YouTube
Design Principles
An open source collection of Design Principles and methods.
Personas – A Simple Introduction | IxDF
Replacing Personas With Characters | by Alan Klement | down the rabbit hole | Medium
To get the brain to accept a story which explains why a consumer bought a product, it needs information presented in a particular way. The best way to deliver this information is to explain a customer’s anxieties, motivations, purchase-progress events, and purchase-progress situations. When combined, they form what I call Characters.
How to make a literature review useful for your team – Osman Advisory Services
For and Against National Service | Yes, Prime Minister | Comedy Greats - YouTube
Sir Humphrey, incensed that Hacker is pushing ahead with his “Grand Design”, delivers a masterclass in how to conduct a government opinion poll.
Stakeholder Interviews 101
Jo Evershed on Twitter: “Engaged participants are the secret to High-Quality Data. Foster engagement & data collection will be a breeze.”
With this thread, you’ll learn 9 lessons: 1. The Data Quality Framework 2. The Participant Relationship 3. Perfect introductions 4. Instructions that work 5. Helpful signposting 6. An enjoyable experience 7. Getting feedback 8. Progressive piloting 9. Types of quality control
Using Commitment Requests Instead of Attention Checks
Qualtrics' recommendation is to use the commitment request, as it performed the best. However, the textual and factual attention checks also performed better than the control.
The Issue of Noncompliance in Attention Check Questions: False Positives in Instructed Response Items
Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion of respondents evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed the test, 61% seem to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern with attention check instruments. The results of our experiment showed that more respondents passed the attention check if a comprehensible reason was given.
Examining Completion Rates in Web Surveys via Over 25,000 Real-World Surveys - Mingnan Liu, Laura Wronski, 2018
A survey’s completion rate is one of its most important data quality measures. There are quite a few published studies examining web survey completion rate through experimental approaches. In this study, we expand the existing literature by examining the predictors of web survey completion rate using 25,080 real-world web surveys conducted by a single online panel. Our findings are consistent with the literature on some dimensions, such as finding a negative relationship between completion rate and survey length and question difficulty. Also, surveys without progress bars have higher completion rates than surveys with progress bars. This study also generates new insights into survey design features, such as the impact of the first question type and length on completion rate. More: https://twitter.com/nielsmede/status/1576234663341064192?s=20&t=kSwdGBBuVv1yiqo1lE4vbw
Integrity Initiative - KNow Whitepaper 2022.pdf
How to screen out fraudulent qualitative research participants
A step-by-step guide to user research note taking | by Arnav Kumar | UX Planet
Small Sample Size Solutions | A Guide for Applied Researchers and Practitioners
UX Mapping Methods Compared: A Cheat Sheet
Empathy maps, customer journey maps, experience maps, and service blueprints depict different processes and have different goals, yet they all build common ground within an organization.
Research to Impact Canvas - The Canvas Revolution
Plan Research with the UX Research Canvas
(PDF) Action Research Model Canvas - ARMC
Research Impact Canvas: New tool for impactful science communication - ESPS – European Science Press Service
(PDF) Research Canvas PwC
The Research Design Canvas – Academic Toolkit
Why Am I Always Being Researched? - Chicago Beyond
How to Recruit Participants for UX Research
(PDF) Sample size for qualitative research: The risk of missing something important | Peter J DePaulo - Academia.edu
Until the definitive answer is provided, perhaps an N of 30 respondents is a reasonable starting point for deciding the qualitative sample size that can reveal the full range (or nearly the full range) of potentially important customer perceptions. An N of 30 reduces the probability of missing a perception with a 10 percent incidence to less than 5 percent (assuming random sampling), and it is the upper end of the range found by Griffin and Hauser. If the budget is limited, we might reduce the N below 30, but the client must understand the increased risks of missing perceptions that may be worth knowing. If the stakes and budget are high enough, we might go with a larger sample in order to ensure that smaller (or harder to reach) subgroups are still likely to be represented.
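As a rough illustration of the arithmetic in the excerpt above (a minimal sketch assuming simple random sampling and independent respondents, not code from the paper): the probability that none of n respondents voice a perception held by a proportion p of customers is (1 - p)^n, which for p = 0.10 and n = 30 is roughly 0.042, i.e. just under the 5 percent threshold cited.

```python
# Sketch of the sample-size arithmetic above (illustrative assumption, not the paper's code):
# under random sampling, the chance that a perception with incidence p is never
# mentioned by any of n respondents is (1 - p) ** n.

def prob_missing(p: float, n: int) -> float:
    """Probability that a perception with incidence p never appears among n respondents."""
    return (1 - p) ** n

if __name__ == "__main__":
    for n in (10, 20, 30, 40):
        print(f"n={n:2d}: P(miss a 10%-incidence perception) = {prob_missing(0.10, n):.3f}")
    # n=30 gives about 0.042, i.e. below the 5 percent risk cited in the excerpt.
```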
Customer research. Here are 7 places to find exactly what your customers want
7 customer research sources: 1/ Media Kits 2/ Google Scholar 3/ Amazon Reviews 4/ The New Forums 5/ Comment Sections 6/ Customer Data 7/ Interviews
User Diary Studies - An effective research method for evaluating user behavior long-term
Doing research as if participants mattered | Impact of Social Sciences
Understanding Cultural Issues in Research Design: A Webinar Panel — Methodspace
Systems Mapping: How to build and use causal models of systems
Your Data Playbook is ready. Download it now! - Solferino Academy
The Data Playbook is a set of 120 exercises, games, scenarios, slides, and checklists to assist you and your teams on your data journey. The social learning content is designed for teams to have discussions and activities across the data lifecycle in short 30-minute to 1-hour sessions.
The question researchers should all stop asking
We want to take the shortcut and ask the “why” question, but please, resist the urge. Reframe it and you’ll find you are getting a more honest answer that is closer to authentic truth.
The Journey of One of PepsiCo’s Iconic Brands: The Research That Defined Fritos’ Appealing Human Truth | GreenBook
Covid-19: Identifying and addressing vaccine hesitancy using ‘personas’
How to create a better research poster in less time (#betterposter Generation 2). - YouTube
How to Conduct a Cognitive Walkthrough Workshop
A cognitive walkthrough is a technique used to evaluate the learnability of a system. Unlike user testing, it does not involve users (and, thus, it can be relatively cheap to implement). Like heuristic evaluations, expert reviews, and PURE evaluations, it relies on the expertise of a set of reviewers to assess the interface. Although cognitive walkthroughs can be conducted by an individual, they are designed to be done as part of a group in a workshop setting where evaluators walk through a task in a highly structured manner from a new user’s point of view.