This is a map of subcultures within an organization (it's called a fitness landscape). It's built from stories told by the people in the organization. What can you do with it? Understand where the culture(s) are now, and request change by saying “I want more stories like these...” and “fewer like those...” Dave Snowden and The Cynefin Company (formerly Cognitive Edge) offer powerful ways to visualize culture and communicate direction in a manner customized to where each subculture is now and what its next best step is. Watch this video up to 48:48 for more on the science and method (link at 44:33): https://lnkd.in/emuAzp6E Stories were collected using The Cynefin Co's SenseMaker tool.
A five-step framework. In summary, the five steps we will walk you through are: 1. Ask rich questions, not dumb questions 2. Write a codebook 3. Code your data 4. Map your data 5. Form your personas
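Steps 2 and 3 (write a codebook, code your data) lend themselves to a small illustration. A minimal sketch, assuming a simple keyword-based codebook; the tags and keywords are invented for illustration, not taken from the article:

```python
# A codebook as a plain data structure: each tag maps to the keywords
# that signal it in an open-ended response.
CODEBOOK = {
    "pricing": ["expensive", "cost", "price"],
    "onboarding": ["sign up", "setup", "getting started"],
}

def code_response(text: str) -> list[str]:
    """Return every codebook tag whose keywords appear in the response."""
    lowered = text.lower()
    return [tag for tag, keywords in CODEBOOK.items()
            if any(k in lowered for k in keywords)]

print(code_response("Setup was easy, but the price is too high"))
# -> ['pricing', 'onboarding']
```

In practice the coding pass is usually done by a human reading each response against the codebook; a keyword pass like this is only a first rough cut before mapping the coded data.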
An open-source collection of Design Principles and methods.
For the brain to accept a story that explains why a consumer bought a product, the information must be presented in a particular way. The best way to deliver this information is to explain a customer’s anxieties, motivations, purchase-progress events, and purchase-progress situations. When combined, they form what I call Characters.
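As a quick structural reading of that definition, here is a hedged sketch of a Character as one record combining the four elements; the class and example values are illustrative, not taken from the author:

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """One 'Character': the four elements named in the article, combined."""
    anxieties: list[str] = field(default_factory=list)
    motivations: list[str] = field(default_factory=list)
    purchase_progress_events: list[str] = field(default_factory=list)
    purchase_progress_situations: list[str] = field(default_factory=list)

switcher = Character(
    anxieties=["Will switching tools break my existing data?"],
    motivations=["The current tool is too slow for the team"],
    purchase_progress_events=["Requested a demo after a missed deadline"],
    purchase_progress_situations=["Annual contract renewal is approaching"],
)
```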
Sir Humphrey, incensed that Hacker is pushing ahead with his “Grand Design”, delivers a masterclass in how to conduct a government opinion poll.
With this thread, you’ll learn 9 lessons: 1. The Data Quality Framework 2. The Participant Relationship 3. Perfect introductions 4. Instructions that work 5. Helpful signposting 6. An enjoyable experience 7. Getting feedback 8. Progressive piloting 9. Types of quality control
Qualtrics' recommendation is to use the commitment request, as it performed the best; however, the textual and factual attention checks also performed better than the control.
Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion of respondents evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed the test, 61% seem to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern with attention check instruments. The results of our experiment showed that more respondents passed the attention check if a comprehensible reason was given.
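A hedged sketch of putting both findings into practice: give a short, comprehensible reason alongside the check, and flag failures rather than silently dropping them. The column names and toy answers here are invented:

```python
import pandas as pd

# Instructed-response item shown with a brief rationale, per the finding
# that explaining why the check is done increases pass rates.
ATTENTION_ITEM = (
    "To help us confirm that questions are being read carefully, "
    "please select 'Agree' for this statement."
)

responses = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "attention_answer": ["Agree", "Disagree", "Agree"],
})

# Flag rather than delete: failures stay available for auditing, since a
# share of them may be deliberate noncompliance rather than inattention.
responses["passed_check"] = responses["attention_answer"].eq("Agree")
analysis_sample = responses[responses["passed_check"]]
print(analysis_sample)
```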
A survey’s completion rate is one of its most important data quality measures. There are quite a few published studies examining web survey completion rate through experimental approaches. In this study, we expand the existing literature by examining the predictors of web survey completion rate using 25,080 real-world web surveys conducted by a single online panel. Our findings are consistent with the literature on some dimensions, such as finding a negative relationship between completion rate and survey length and question difficulty. Also, surveys without progress bars have higher completion rates than surveys with progress bars. This study also generates new insights into survey design features, such as the impact of the first question type and length on completion rate. More: https://twitter.com/nielsmede/status/1576234663341064192?s=20&t=kSwdGBBuVv1yiqo1lE4vbw
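For concreteness, the outcome measure itself is simply completes divided by starts. A minimal sketch with toy numbers (chosen only to illustrate the progress-bar comparison, not the study's data):

```python
import pandas as pd

surveys = pd.DataFrame({
    "survey_id": [1, 2, 3, 4],
    "starts": [500, 400, 800, 300],
    "completes": [400, 310, 760, 270],
    "progress_bar": [True, True, False, False],
})

# Completion rate per survey, then averaged by design feature.
surveys["completion_rate"] = surveys["completes"] / surveys["starts"]
print(surveys.groupby("progress_bar")["completion_rate"].mean())
```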
How to screen out fraudulent qualitative research participants
Empathy maps, customer journey maps, experience maps, and service blueprints depict different processes and have different goals, yet they all build common ground within an organization.
Until the definitive answer is provided, perhaps an N of 30 respondents is a reasonable starting point for deciding the qualitative sample size that can reveal the full range (or nearly the full range) of potentially important customer perceptions. An N of 30 reduces the probability of missing a perception with a 10-percent incidence to less than 5 percent (assuming random sampling), and it is the upper end of the range found by Griffin and Hauser. If the budget is limited, we might reduce the N below 30, but the client must understand the increased risks of missing perceptions that may be worth knowing. If the stakes and budget are high enough, we might go with a larger sample in order to ensure that smaller (or harder-to-reach) subgroups are still likely to be represented.
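The arithmetic behind that 5-percent claim is worth making explicit. A one-line check using standard binomial reasoning (not code from the source):

```python
# With random sampling, the chance that none of n respondents mentions a
# perception held by a proportion p of customers is (1 - p) ** n.
p, n = 0.10, 30
miss_probability = (1 - p) ** n
print(f"P(miss) = {miss_probability:.3f}")  # ~0.042, i.e. under 5 percent
```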
7 customer research sources: 1/ Media Kits 2/ Google Scholar 3/ Amazon Reviews 4/ The New Forums 5/ Comment Sections 6/ Customer Data 7/ Interviews
The Data Playbook is a set of 120 exercises, games, scenarios, slides, and checklists to assist you and your teams on your data journey. The social learning content is designed for teams to run discussions and activities across the data lifecycle in short sessions of 30 minutes to 1 hour.
We want to take the shortcut and ask the “why” question, but please, resist the urge. Reframe it and you’ll find you get a more honest answer, one closer to the authentic truth.
A cognitive walkthrough is a technique used to evaluate the learnability of a system. Unlike user testing, it does not involve users (and, thus, it can be relatively cheap to implement). Like heuristic evaluations, expert reviews, and PURE evaluations, it relies on the expertise of a set of reviewers to assess the interface. Although cognitive walkthroughs can be conducted by an individual, they are designed to be done as part of a group in a workshop setting where evaluators walk through a task in a highly structured manner from a new user’s point of view.
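To make the “highly structured manner” concrete, here is a sketch of one walkthrough record per task step, answering the standard walkthrough questions from a new user's point of view. The question wording follows the commonly cited streamlined version; the data structure itself is invented for illustration:

```python
from dataclasses import dataclass

QUESTIONS = (
    "Will the user try to achieve the right result?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the result they want?",
    "After the action, will the user see that progress is being made?",
)

@dataclass
class StepRecord:
    step: str
    answers: tuple[bool, bool, bool, bool]  # one answer per question above
    failure_story: str = ""  # if any answer is False: why a new user fails

record = StepRecord(
    step="Tap 'New project' on the dashboard",
    answers=(True, False, True, True),
    failure_story="The button is hidden in an overflow menu.",
)
print(record.failure_story)
```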
Whilst you're shaping the problem space, and then during the first diamond of understanding and deciding which user needs to focus on, you should ideally get out of the lab or the office. Once you have defined your solution and are iterating on it, that's the best time to use your go-to method: lab usability testing in a lot of cases; remote interviewing is mine. This is because you likely need cycles of quick feedback and iteration, so you want a tried and trusted method that lets you spin up a sprint of research quickly and efficiently. So what about when time and efficiency aren't quite so important, and the quality and depth of understanding, or the engagement of stakeholders, are the key drivers? Here are some examples from my toolkit:
Broadly, these feedback surveys can be categorised into five groups: the pointless; the self-important; the immoral; the demanding; and the downright weird.
Method: Three participatory workshops were held with the independent Welsh residential decarbonisation advisory group (‘the Advisory Group’) to (1) map relationships between actors, behaviours, and influences on behaviour within the home retrofit system, (2) provide training in the Behaviour Change Wheel framework, and (3) use these to develop policy recommendations for interventions. Recommendations were analysed using the COM-B (capability, opportunity, motivation) model of behaviour to assess whether they addressed these factors. Results: Two behavioural systems maps were produced, representing privately rented and owner-occupied housing tenures. The main causal pathways and feedback loops in each map are described.