yabs.io

Yet Another Bookmarks Service

Search Results

[https://towardsdatascience.com/ditch-statistical-significance-8b6532c175cb] - - public:weinreich
campaign_effects, evaluation, health_communication, how_to, quantitative, research - 6 | id:1484440 -

“significant” p-value ≠ “significant” finding: The significance of statistical evidence for the true X (i.e., statistical significance of the p-value for the estimate of the true X) says absolutely nothing about the practical/scientific significance of the true X. That is, significance of evidence is not evidence of significance. Increasing your sample size in no way increases the practical/scientific significance of your practical/scientific hypothesis.

“significant” p-value = “discernible” finding: The significance of statistical evidence for the true X does tell us how well the estimate can discern the true X. That is, significance of evidence is evidence of discernibility. Increasing your sample size does increase how well your finding can discern your practical/scientific hypothesis.
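
A minimal simulation makes the distinction concrete (an illustrative sketch, not from the bookmarked article): hold a practically trivial effect fixed and grow the sample; the p-value collapses toward zero while the standardized effect size stays tiny.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.02  # a practically negligible mean difference

for n in (100, 1_000, 100_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(a, b)
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"n={n:>7}  p={p:.4f}  Cohen's d={d:.3f}")
# p shrinks as n grows (better discernibility); d stays ~0.02
# (no added practical significance).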

[https://thecynefin.co/how-to-use-data-collection-analysis-tool/] - - public:weinreich
management, qualitative, quantitative, research, storytelling, target_audience - 6 | id:1484377 -

This is SenseMaker in its most simple form, usually structured to have an open (non-hypothesis) question (commonly referred to as a ‘prompting question’) to collect a micro-narrative at the start. This is then followed by a range of triads (triangles), dyads (sliders), stones canvases, free-text questions and multiple-choice questions. The value of using SenseMaker: open free-text questions are used at the beginning as a way of scanning for diversity of narratives and experiences. This is a way to remain open to ‘unknown unknowns’. The narrative is then followed by signifier questions that allow the respondent to add layers of meaning and codification to the narrative (or experience) in order to allow for mixed-methods analysis, to map and explore patterns.
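
To illustrate how such signifier data might be structured for analysis, here is a hypothetical sketch; the classes and field names are invented for this example, not SenseMaker's actual schema. A triad response is effectively a point in a triangle: three weights over the vertex labels that sum to one.

from dataclasses import dataclass, field

@dataclass
class TriadSignifier:
    labels: tuple[str, str, str]          # the triangle's three vertices
    weights: tuple[float, float, float]   # barycentric coordinates, sum to 1.0

    def __post_init__(self):
        assert abs(sum(self.weights) - 1.0) < 1e-6, "weights must sum to 1"

@dataclass
class Response:
    narrative: str                        # answer to the open 'prompting question'
    triads: list[TriadSignifier] = field(default_factory=list)

r = Response(
    narrative="When the clinic moved, I stopped going...",
    triads=[TriadSignifier(("cost", "distance", "trust"), (0.2, 0.5, 0.3))],
)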

[https://twitter.com/EvershedJo/status/1526866665597718528] - - public:weinreich
ethics, quantitative, research - 3 | id:1287260 -

With this thread, you’ll learn 9 lessons:
1. The Data Quality Framework
2. The Participant Relationship
3. Perfect introductions
4. Instructions that work
5. Helpful signposting
6. An enjoyable experience
7. Getting feedback
8. Progressive piloting
9. Types of quality control

[https://journals.sagepub.com/doi/pdf/10.1177/1525822X221115830] - - public:weinreich
quantitative, research - 2 | id:1287258 -

Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion of respondents evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed the test, 61% seem to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern with attention check instruments. The results of our experiment showed that more respondents passed the attention check if a comprehensible reason was given.
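
A small pandas sketch of the practical upshot (column names and the instructed answer are assumed for illustration): flag failures rather than silently dropping them, since a majority of failures appear to be deliberate.

import pandas as pd

df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "attention_item": ["agree", "disagree", "agree", "agree"],
})
# Instructed-response item asked respondents to select "agree" --
# ideally preceded by a brief, comprehensible reason for the check.
df["passed_check"] = df["attention_item"].eq("agree")
passed, failed = df[df["passed_check"]], df[~df["passed_check"]]
print(f"{len(failed)} of {len(df)} failed; inspect before excluding.")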

[https://journals.sagepub.com/doi/abs/10.1177/0894439317695581?journalCode=ssce] - - public:weinreich
quantitative, research - 2 | id:1286906 -

A survey’s completion rate is one of its most important data quality measures. There are quite a few published studies examining web survey completion rate through experimental approaches. In this study, we expand the existing literature by examining the predictors of web survey completion rate using 25,080 real-world web surveys conducted by a single online panel. Our findings are consistent with the literature on some dimensions, such as finding a negative relationship between completion rate and survey length and question difficulty. Also, surveys without progress bars have higher completion rates than surveys with progress bars. This study also generates new insights into survey design features, such as the impact of the first question type and length on completion rate. More: https://twitter.com/nielsmede/status/1576234663341064192?s=20&t=kSwdGBBuVv1yiqo1lE4vbw
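
The kind of survey-level analysis described could be sketched as follows (illustrative data and variable names, not the study's actual models): regress completion rate on design features and check the signs.

import pandas as pd
import statsmodels.formula.api as smf

surveys = pd.DataFrame({
    "completion_rate": [0.82, 0.64, 0.71, 0.90],
    "n_questions":     [10, 45, 30, 8],
    "progress_bar":    [1, 1, 0, 0],
})
model = smf.ols("completion_rate ~ n_questions + progress_bar", data=surveys).fit()
print(model.params)  # the study found negative effects of length and progress bars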

[https://solferinoacademy.com/your-data-playbook-is-ready-download-it-now] - - public:weinreich
quantitative, research - 2 | id:1186450 -

The Data Playbook contains 120 exercises, games, scenarios, slides and checklists to assist you and your teams on your data journey. The social learning content is designed for teams to hold discussions and activities across the data lifecycle in short 30-minute to one-hour sessions.

[https://emerge.ucsd.edu/] - - public:weinreich
evaluation, quantitative, research - 3 | id:1022011 -

EMERGE (Evidence-based Measures of Empowerment for Research on Gender Equality) is a project focused on gender equality and empowerment measures to monitor and evaluate health programs and to track progress on UN Sustainable Development Goal (SDG) 5: Achieve Gender Equality and Empower All Women and Girls. As reported by UN Women (2018), only 2 of the 14 SDG 5 indicators have accepted methodologies for measurement and data widely available. Of the remaining 12, 9 are indicators for which data are collected and available in only a limited number of countries. This assessment suggests notable measurement gaps in the state of gender equality and empowerment worldwide. EMERGE aims to improve the science of gender equality and empowerment measurement by identifying these gaps through the compilation and psychometric evaluation of available measures and supporting scientifically rigorous measure development research in India.

[https://www.meta-analysis-learning-information-center.com/] - - public:weinreich
evaluation, how_to, quantitative, research - 4 | id:958540 -

The Meta-Analysis Learning Information Center (MALIC) believes in equitably providing cutting-edge and up-to-date techniques in meta-analysis to researchers in the social sciences, particularly those in education and STEM education.
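
For a flavor of the techniques MALIC covers, here is a minimal random-effects meta-analysis (DerSimonian-Laird pooling) on made-up effect sizes; this is a generic textbook computation, not MALIC's own material.

import numpy as np

effects = np.array([0.30, 0.12, 0.45, 0.22])      # per-study effect sizes
variances = np.array([0.02, 0.01, 0.05, 0.015])   # per-study sampling variances

w = 1 / variances                          # fixed-effect (inverse-variance) weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)     # heterogeneity statistic
df = len(effects) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)              # between-study variance estimate
w_re = 1 / (variances + tau2)              # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled={pooled:.3f}  95% CI [{pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f}]")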

[http://oliverhaimson.com/PDFs/JaroszewskiGenderfluidOrAttack.pdf] - - public:weinreich
quantitative, research - 2 | id:830116 -

For HCI survey research broadly, we recommend using a question similar to the first question in [2]’s measure (as quoted in [3]) – “Are you…?” with three response options: “man,” “woman,” “something else: specify [text box]” – and allowing respondents to choose multiple options. This question will not identify all trans participants [3], but is inclusive to non-binary and trans people and will identify gender at a level necessary for most HCI research. To reduce trolling, we recommend providing the fill-in-the-blank text box as a second step only for those respondents who choose the “something else” option.
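
A sketch of the recommended two-step flow (the option labels come from the excerpt; the function and record format are hypothetical):

def ask_gender(choices_selected: set[str], specify_text: str = "") -> dict:
    """Multi-select 'Are you...?' item; the free-text box is shown only
    after 'something else' is chosen, which the authors suggest reduces
    trolling."""
    record = {"gender": sorted(choices_selected)}
    if "something else" in choices_selected:
        record["gender_self_described"] = specify_text
    return record

print(ask_gender({"woman", "something else"}, specify_text="genderfluid"))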

[https://twitter.com/alexlfrancis/status/1452817171659296777] - - public:weinreich
quantitative, research - 2 | id:830115 -

I love it that one of my students suggested we change the default “Other (please specify)” option to “Not Listed (please specify)” in a demographic survey. Explicitly *not* “othering” participants while still asking for the info we want. Any implied failure is on us, not them.

[https://www.nngroup.com/articles/summary-quant-sample-sizes/?utm_source=Alertbox&utm_campaign=6b433997b0-EMAIL_CAMPAIGN_2020_11_12_08_52_COPY_01&utm_medium=email&utm_term=0_7f29a2b335-6b433997b0-24361717] - - public:weinreich
design, quantitative, research - 3 | id:744486 -

40 participants is an appropriate number for most quantitative studies, but there are cases where you can recruit fewer users.
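
The arithmetic behind such guidance is worth seeing (an illustrative margin-of-error calculation, not NN/g's exact derivation): confidence intervals shrink with the square root of n, so gains taper off quickly past moderate sample sizes.

import math

sd = 1.0  # assumed standard deviation of the metric, in SD units
for n in (10, 20, 40, 80, 160):
    moe = 1.96 * sd / math.sqrt(n)   # normal-approximation 95% margin of error
    print(f"n={n:>3}  ±{moe:.2f} SDs")
# Halving the interval requires quadrupling n, which is why a sample
# around 40 is often a practical sweet spot.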

[https://www.linkedin.com/pulse/behaviour-change-101-how-do-rapid-review-peter-slattery/?trackingId=8W1tltMFgBAVWsXDraHUHw%3D%3D] - - public:weinreich
academia, how_to, quantitative, research - 4 | id:351907 -

In our work at BehaviourWorks Australia (BWA) we are frequently asked ‘What does the research say about getting audience Y to do behaviour X?’ When our partners need an urgent answer, we often provide it using a Rapid Review. In this article I explain what Rapid Reviews are, why you should do them, and a process you can follow to conduct one.

What is a Rapid Review? Rapid Reviews are “a form of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a timely manner” [1]. Indeed, with sufficient resources (e.g., multiple staff working simultaneously) you can complete a Rapid Review in less than a day. The outputs of these reviews are, of course, brief and descriptive, but they can be very useful where evidence is needed quickly, for example, in addressing COVID-19. Rapid Reviews can therefore provide detailed research within reduced timeframes while meeting most academic requirements by being standardised and reproducible. They are often, but not always, publishable in peer-reviewed academic journals.

[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361231/] - - public:weinreich
behavior_change, health_communication, quantitative, theory - 4 | id:350967 -

The Patient Activation Measure is a valid, highly reliable, unidimensional, probabilistic Guttman‐like scale that reflects a developmental model of activation. Activation appears to involve four stages: (1) believing the patient role is important, (2) having the confidence and knowledge necessary to take action, (3) actually taking action to maintain and improve one's health, and (4) staying the course even under stress. The measure has good psychometric properties indicating that it can be used at the individual patient level to tailor intervention and assess changes. (https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1475-6773.2004.00269.x)
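
To make “Guttman-like” concrete, here is a toy scalogram check (made-up responses, not the PAM's data or scoring): in a perfect Guttman pattern, endorsing a harder item implies endorsing every easier one, and the coefficient of reproducibility measures how closely responses fit that ideal.

import numpy as np

# rows = respondents, columns = items ordered easy -> hard
X = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],   # contains one Guttman inconsistency
    [1, 1, 1, 1],
])
errors = 0
for row in X:
    k = row.sum()                              # respondent's scale score
    ideal = np.array([1] * k + [0] * (len(row) - k))
    errors += np.sum(row != ideal)             # deviations from the ideal pattern
rep = 1 - errors / X.size
print(f"coefficient of reproducibility = {rep:.2f}")  # >= 0.90 is the convention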

[https://conjointly.com/kb/nonequivalent-groups-analysis/] - - public:weinreich
quantitative, research - 2 | id:350192 -

The Research Methods Knowledge Base is a comprehensive web-based textbook that addresses all of the topics in a typical introductory undergraduate or graduate course in social research methods. It covers the entire research process including: formulating research questions; sampling (probability and nonprobability); measurement (surveys, scaling, qualitative, unobtrusive); research design (experimental and quasi-experimental); data analysis; and, writing the research paper. It also addresses the major theoretical and philosophical underpinnings of research including: the idea of validity in research; reliability of measures; and ethics.

[https://cbail.github.io/textasdata/Text_as_Data.html?fbclid=IwAR1Nl93wTvZlhmVdifK_-I91viDfkH1R69rGwSzE2wM__OOVT_w3mJatgvI] - - public:weinreich
how_to, qualitative, quantitative, research, social_media, twitter - 6 | id:309754 -

This class covers a range of different topics that build on top of each other. For example, in the first tutorial, you will learn how to collect data from Twitter, and in subsequent tutorials you will learn how to analyze those data using automated text analysis techniques. For this reason, you may find it difficult to jump to one of the most advanced issues before covering the basics. Topics:
Introduction: Strengths and Weaknesses of Text as Data
Application Programming Interfaces
Screen-Scraping
Basic Text Analysis
Dictionary-Based Text Analysis
Topic Modeling
Text Networks
Word Embeddings
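
As a taste of the dictionary-based approach listed above, a minimal sketch (toy word lists, not the course's dictionaries): count occurrences of terms from curated lexicons and score each document.

import re
from collections import Counter

positive = {"effective", "clear", "useful"}
negative = {"confusing", "boring", "useless"}

def dictionary_score(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())   # crude tokenizer
    counts = Counter(tokens)
    pos = sum(counts[w] for w in positive)
    neg = sum(counts[w] for w in negative)
    return {"positive": pos, "negative": neg, "net": pos - neg}

print(dictionary_score("The tutorial was clear and useful, never boring."))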
