This is the second in a three-part series.
What you decide to implement in your research design shapes what you can say about your study and what others can critique. Even a strong research question and literature review can go awry when the research methods and the research question are not aligned. Here we discuss a road M.A.P. to methodological alignment: Measurement of variables, Analysis plan, and Participant selection.
How you define a variable is only as useful as how well your measurement tools capture it. Your research questions ultimately operationalize your variables, connecting the larger literature to your specific inquiry. Your measurement tool should demonstrate reliability (i.e., that it captures the same measurement of the variable consistently) and validity (i.e., that it measures what it intends to measure). Using established measures can provide some degree of reliability and validity, provided the populations are similar, and allows comparisons to other studies that used the same measure.
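As a concrete illustration of checking reliability, here is a minimal sketch of one common internal-consistency estimate, Cronbach's alpha. The function name and the Likert-scale responses below are invented for illustration; real scale data would come from your own instrument.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students answering a 4-item scale (1-5 Likert)
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))  # high alpha: items move together
```

Values near 1 suggest the items are measuring one underlying construct consistently; low values are a warning that the scale may not be reliably capturing your variable.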
For example, you might define education disruption as experiencing school transfers. If you measure this using the number of times a family moves, you might miss moves that allow a student to stay in the same school, or districts that allow open enrollment.
The timing of measurement matters. If you are assessing change, you need to capture the point of hypothesized change, usually using at least two time points or data sets. If you are looking at treatment adherence or retention, measuring from the start is important to establishing whether adherence occurred. Qualitative data are not exempt from measurement scrutiny! Interview questions should ask clearly about the research topic and be appropriately timed.
Social science research has a wealth of statistical tests designed to capture a range of data types (e.g., nominal, ordinal) and designs (e.g., cross-sectional, longitudinal). During study development, make use of discipline charts and tables to help identify the best statistical test for your data. You wouldn’t use a hammer to cut a board; find the right tool!
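One way to picture such a chart is as a lookup from data type and design to a common test. The pairings below are standard textbook matches, but the chart itself (and the `suggest_test` helper) is just an illustrative sketch, not an exhaustive decision tool.

```python
# A hypothetical mini "chart" mapping (outcome type, design) to a common test.
TEST_CHART = {
    ("nominal", "two independent groups"): "chi-square test of independence",
    ("ordinal", "two independent groups"): "Mann-Whitney U",
    ("continuous", "two independent groups"): "independent-samples t-test",
    ("continuous", "two time points, same people"): "paired-samples t-test",
    ("continuous", "three or more groups"): "one-way ANOVA",
    ("continuous", "continuous predictor"): "Pearson correlation / regression",
}

def suggest_test(outcome_type: str, design: str) -> str:
    """Look up a conventional starting-point test for a data type and design."""
    return TEST_CHART.get((outcome_type, design), "consult a statistician!")

print(suggest_test("ordinal", "two independent groups"))  # Mann-Whitney U
```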
Just as each test suits specific data, each test answers a specific question. Selecting a statistical test that fits your variable types does not mean it is aligned with your research question. Make sure your test fits both.
For example, if you are examining how academic self-esteem, academic self-efficacy, and peer academic support predict school success, you could run a correlation, a multiple regression, or an analysis of covariance (ANCOVA). A correlation would be too weak a test, as it only tells you how much two variables change together. If you assume these variables share error, multiple regression is just fine. If you believe the variables are unique or interact, or you want to measure peer academic support or school success as a categorical variable, ANCOVA would fit your assumptions better.
There are two goals in sampling alignment: 1) correctly identifying your sample and 2) controlling for external factors or confounding variables. Your sampling design needs to take into account what population your research question addresses and make sure that population is represented. Convenience sampling is often the tool of choice for researchers because it is practical and economical. However, it can introduce serious biases into the results. Sometimes you can control for this by adding demographic or contextual factors; other times, you need to be explicit in your proposal about the limitations of your study and why the sample was retained. Finally, you will need to ensure you have the correct sample size to draw conclusions from your results.
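Sample size is one piece of alignment you can sketch with a formula. Below is a rough per-group estimate for a two-group comparison using the common normal-approximation power formula; the function name is my own, and a dedicated power-analysis tool (e.g., G*Power) would refine these numbers slightly.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n for a two-sample comparison (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (Cohen's d = 0.5) at the usual alpha = .05 and power = .80
print(n_per_group(0.5))  # roughly 63 per group under this approximation
```

Note how quickly the required n grows as the expected effect shrinks; this is exactly the kind of check worth running before recruitment, not after.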
Qualitative research is not exempt from alignment. The timing and content of interviews are important, and your research questions should not simply be your interview questions. Developing good interview questions takes time. Have peers answer your questions and ask how they perceive them: is that what you intended? If you are developing theory, make sure you are using a less structured interview so that a framework can emerge.
Conducting quality interviews is paramount to collecting usable data. Developing and continually refining your interviewing skills will ensure the interview data you collect can answer your questions. Reviewing each interview after it concludes is an easy way to check whether you followed up on your questions in a logical and aligned manner. When creating your scripts, be prepared to reframe questions and have follow-up questions set in advance. Taking the time to develop your interview skills will pay off in interviews that elicit usable information.
Want to learn more? Check out Professor Graham R. Gibbs's YouTube lessons on conducting research interviews.
How can you improve the study below using the suggestions in this blog? Add your ideas in the comments!
A researcher asks whether college students who receive two rounds of feedback show more improvement in final essays than students who receive one round. Information about the study was sent in a welcome-back email to students at a graduate college. The researcher has students submit their first draft (if they received one round of feedback) or second draft (if they received two rounds) along with a final submission. The researcher then counts the number of tracked changes between the first or second submission and the final submission and correlates that count with the number of rounds of feedback received.