yabs.io

Yet Another Bookmarks Service

[https://www.nngroup.com/articles/stakeholder-interviews/?utm_source=Alertbox&utm_campaign=2361996408-EMAIL_CAMPAIGN_2020_11_12_08_52_COPY_01&utm_medium=email&utm_term=0_7f29a2b335-2361996408-24361717] - - public:weinreich
management, qualitative, research - 3 | id:1287289 -

[https://twitter.com/EvershedJo/status/1526866665597718528] - - public:weinreich
ethics, quantitative, research - 3 | id:1287260 -

With this thread, you’ll learn 9 lessons:
1. The Data Quality Framework
2. The Participant Relationship
3. Perfect introductions
4. Instructions that work
5. Helpful signposting
6. An enjoyable experience
7. Getting feedback
8. Progressive piloting
9. Types of quality control

[https://journals.sagepub.com/doi/pdf/10.1177/1525822X221115830] - - public:weinreich
quantitative, research - 2 | id:1287258 -

Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion of respondents evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed the test, 61% seem to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern with attention check instruments. The results of our experiment showed that more respondents passed the attention check if a comprehensible reason was given.

[https://journals.sagepub.com/doi/abs/10.1177/0894439317695581?journalCode=ssce] - - public:weinreich
quantitative, research - 2 | id:1286906 -

A survey’s completion rate is one of its most important data quality measures. There are quite a few published studies examining web survey completion rate through experimental approaches. In this study, we expand the existing literature by examining the predictors of web survey completion rate using 25,080 real-world web surveys conducted by a single online panel. Our findings are consistent with the literature on some dimensions, such as finding a negative relationship between completion rate and survey length and question difficulty. Also, surveys without progress bars have higher completion rates than surveys with progress bars. This study also generates new insights into survey design features, such as the impact of the first question type and length on completion rate. More: https://twitter.com/nielsmede/status/1576234663341064192?s=20&t=kSwdGBBuVv1yiqo1lE4vbw

[https://www.nngroup.com/articles/ux-mapping-cheat-sheet/] - - public:weinreich
design, how_to, research - 3 | id:1276582 -

Empathy maps, customer journey maps, experience maps, and service blueprints depict different processes and have different goals, yet they all build common ground within an organization.

[https://www.academia.edu/36897806/Sample_size_for_qualitative_research_The_risk_of_missing_something_important] - - public:weinreich
qualitative, research - 2 | id:1222012 -

Until the definitive answer is provided, perhaps an N of 30 respondents is a reasonable starting point for deciding the qualitative sample size that can reveal the full range (or nearly the full range) of potentially important customer perceptions. An N of 30 reduces the probability of missing a perception with a 10 percent incidence to less than 5 percent (assuming random sampling), and it is the upper end of the range found by Griffin and Hauser. If the budget is limited, we might reduce the N below 30, but the client must understand the increased risks of missing perceptions that may be worth knowing. If the stakes and budget are high enough, we might go with a larger sample in order to ensure that smaller (or harder to reach) subgroups are still likely to be represented.
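The sample-size claim above follows from a simple binomial argument: if a perception is held by a fraction p of the population, the chance that none of n randomly sampled respondents holds it is (1 − p)^n. A minimal Python sketch of that arithmetic (the function names are illustrative, not from the article):

```python
import math

def miss_probability(incidence: float, n: int) -> float:
    """Probability that a perception held by a fraction `incidence`
    of the population is missed by all n randomly sampled respondents."""
    return (1 - incidence) ** n

def sample_size_needed(incidence: float, max_miss: float) -> int:
    """Smallest n for which the miss probability drops below max_miss."""
    return math.ceil(math.log(max_miss) / math.log(1 - incidence))

# A perception with 10% incidence and N = 30 respondents:
print(round(miss_probability(0.10, 30), 3))   # ~0.042, i.e. under 5%
print(sample_size_needed(0.10, 0.05))         # 29 respondents already suffice
```

This also shows why N = 30 is "a reasonable starting point" rather than a hard threshold: 29 respondents already clear the 5 percent bar for a 10 percent incidence, and rarer perceptions (lower incidence) push the required N up quickly.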

[https://solferinoacademy.com/your-data-playbook-is-ready-download-it-now] - - public:weinreich
quantitative, research - 2 | id:1186450 -

The Data Playbook is 120 exercises, games, scenarios, slides and checklists to assist you and your teams on your data journey. The social learning content is designed for teams to hold discussions and activities across the data lifecycle in short 30-minute to 1-hour sessions.

[https://www.researchnewslive.com.au/2022/05/24/the-question-researchers-should-all-stop-asking/] - - public:weinreich
qualitative, research - 2 | id:1119095 -

We want to take the shortcut and ask the why question, but please, resist the urge. Reframe it and you’ll find you are getting a more honest answer that is closer to authentic truth.

[https://www.nngroup.com/articles/cognitive-walkthrough-workshop/?utm_source=Alertbox&utm_campaign=27cc444eff-EMAIL_CAMPAIGN_2020_11_12_08_52_COPY_01&utm_medium=email&utm_term=0_7f29a2b335-27cc444eff-24361717] - - public:weinreich
design, evaluation, how_to, research - 4 | id:1080276 -

A cognitive walkthrough is a technique used to evaluate the learnability of a system. Unlike user testing, it does not involve users (and, thus, it can be relatively cheap to implement). Like heuristic evaluations, expert reviews, and PURE evaluations, it relies on the expertise of a set of reviewers to assess the interface. Although cognitive walkthroughs can be conducted by an individual, they are designed to be done as part of a group in a workshop setting where evaluators walk through a task in a highly structured manner from a new user’s point of view.

[https://medium.com/@emmaboulton/research-methods-for-discovery-5c7623f1b2fb] - - public:weinreich
design, qualitative, research - 3 | id:1074484 -

Whilst you’re shaping the problem space, and then during the first diamond of understanding and defining which user needs to focus on, you should ideally get out of the lab or the office. When you have defined your solution and are iterating on it, that’s the best time to use your go-to method (lab usability testing in a lot of cases; remote interviewing is mine). This is because you likely need cycles of quick feedback and iteration, so you want a tried and trusted method that lets you spin up a sprint of research quickly and efficiently. So what about when time and efficiency aren’t quite so important, and the quality and depth of understanding or the engagement of stakeholders are the key drivers? Here are some examples from my toolkit:

[https://ucl.scienceopen.com/hosted-document?doi=10.14324/111.444/000117.v1] - - public:weinreich
behavior_change, consulting, design, environment, how_to, inspiration, research, social_network, strategy - 9 | id:1022051 -

Method: Three participatory workshops were held with the independent Welsh residential decarbonisation advisory group (‘the Advisory Group’) to (1) map relationships between actors, behaviours and influences on behaviour within the home retrofit system, (2) provide training in the Behaviour Change Wheel framework, and (3) use these to develop policy recommendations for interventions. Recommendations were analysed using the COM-B (capability, opportunity, motivation) model of behaviour to assess whether they addressed these factors. Results: Two behavioural systems maps were produced, representing privately rented and owner-occupied housing tenures. The main causal pathways and feedback loops in each map are described.

[https://emerge.ucsd.edu/] - - public:weinreich
evaluation, quantitative, research - 3 | id:1022011 -

EMERGE (Evidence-based Measures of Empowerment for Research on Gender Equality) is a project focused on gender equality and empowerment measures to monitor and evaluate health programs and to track progress on UN Sustainable Development Goal (SDG) 5: To Achieve Gender Equality and Empower All Girls. As reported by UN Women (2018), only 2 of the 14 SDG 5 indicators have accepted methodologies for measurement and data widely available. Of the remaining 12, 9 are indicators for which data are collected and available in only a limited number of countries. This assessment suggests notable measurement gaps in the state of gender equality and empowerment worldwide. EMERGE aims to improve the science of gender equality and empowerment measurement by identifying these gaps through the compilation and psychometric evaluation of available measures and supporting scientifically rigorous measure development research in India.

[https://twitter.com/jeroenbosman/status/1485003119184470016/photo/1] - - public:weinreich
bibliography, how_to, research - 3 | id:999520 -

People regularly ask me how to perform a systematic *web* search. Finally found some time to organize my ad hoc tips and relate these to steps in a systematic scholarly search. Despite the options, web search will remain less controlled and a fuzzy patchwork.

[https://brooketully.com/problem-trees/] - - public:weinreich
behavior_change, inspiration, research, strategy - 4 | id:999488 -

The issue is: We try to solve every single box in the problem tree. If people don’t know about something, then we solve it by raising awareness. If people don’t care about something, then we solve it by getting them to care more. If people are doing illegal behaviors because of a lack of enforcement, then we solve it by increasing enforcement. We go through the whole set of problem tree causes in this manner, writing objectives with a one-to-one match per problem. Not only does this result in a long list of objectives, which will quickly overwhelm us, it also traps us into solving behavioral problems using logic-based approaches.
