MOCK Research Study

This paper samples my writing style by building on a real research study and real references for all citations. The second research study presented here is a mock study.

All of us are bombarded with all kinds of surveys, especially while viewing anything on the internet. This research paper focuses on the interpretation and significance of survey response rates to help readers understand and judge the validity and reliability of any survey. However, since the paper was written for Teacher Education Research, I have focused on issues that specifically apply to higher education. Within my fictional study, I have tried to use sentence structures and phrasing that duplicate the methods of a significant factual research study, with a few changes to present my own unique research study.

I learned a lot from the course and from the process of writing the paper. I would like to write more evaluative papers to further develop guidelines that help the general public interpret surveys easily and quickly. It can be tricky to determine which results are really factual, empirical, significant, and relevant.

Elizabeth Pula

TE 800, Introduction to Education Research, 2011

Title: Can Mixed Mode Boost Response Rates to College Student Web Surveys?

Abstract

Surveys are assessment tools that elicit the information and data needed to justify accountability demands by accreditors and legislators for educational operating budgets. Twenty years ago, college student response rates to campus surveys were at 70% (Lipka, 2011, p. 1). As the accountability movement brings increased scrutiny, institutions are over-surveying students, which is counterproductive: response rates are decreasing and the quality of the surveys is questionable. “This is a nontrivial threat to higher education and our ability to monitor what we’re doing. Institutions are not getting the quality of information they need to make responsible informed decisions,” according to Dr. P. Terenzini, professor emeritus of education and a senior scientist emeritus at the Center for the Study of Higher Education at Pennsylvania State University (Lipka, 2011, p. 1). In the 2011 National Survey of Student Engagement, college student response rates to campus surveys varied from 92% to 4%. Fewer than 30% of students responded at more than a third of colleges. From 2007 through 2009, response rates had hovered at a low 36% (NSSE, 2011). In this study, an experiment using four separate campus web surveys was conducted to measure differences in response rates between two models of campus web surveys for a college student population at the University of Nebraska at Kearney in the United States.

The Chronicle of Higher Education reports that college students can be bombarded with as many as 10 campus surveys during the academic year, with increasingly minimal response rates even when money or rewards are offered as incentives for providing information (Lipka, 2011, p. 1). The anecdotal information reported by Sara Lipka is documented by the empirical research of a number of researchers, who describe declining response rates as an alarming trend (Groves, 1989; Dey, 1997; Pike, 2008). This study is designed to determine whether the positive results obtained by Laguilles, Williams, and Saunders (2011) can be replicated with a different student population at a different United States university. Their conceptualization and focus very clearly define and explain the major historical issues over a forty-year period of declining response rates to college surveys. The Laguilles et al. research, using data obtained during 2008-2009, was the first to test the effectiveness of lottery-based incentives in a college student population in the United States (Laguilles et al., 2011, p. 539). Their data were based on an experimental model obtaining data from web surveys with lottery incentives as the only treatment to improve response rates. Their findings showed a positive effect from the treatment within their experimental model. The findings of this study reveal that personalized mail invitations had positive, statistically significant effects on response rates to lottery-incentive web surveys for college students of both genders. The positive findings indicate that responsive survey design changes can improve demographic data collection for needed institutional information.

This study used a mixed-mode model to improve on the experimental design of the prior Laguilles et al. research by adding personalized mail invitations as an additional and separate treatment to improve response rates to a web survey. Adding mail invitations into the administration of the web surveys changes the research design from a single-mode (email/web delivery) to a mixed-mode (mail/email/web delivery) methodology. In 2008, Converse conducted a research study using a mixed-mode methodology that yielded an overall response rate of 76.3% (Greenlaw & Brown-Welty, 2009, p. 464). Mixed-mode surveys, while more expensive, have higher response rates. Dillman (2000) noted that mixed-mode administration is a more dynamic approach to surveying by “providing an opportunity to compensate for each method” (p. 218). In this study, the participants were not required to enter any data on the mailed survey form, to minimize data processing costs; response rates were determined only from data entered online.

According to Fowler (2009, p. 51), high response rates are the most important indicator of quality survey data, and response rates of 20% or lower indicate data unlikely to yield credible statistics applicable to a population. In the Laguilles et al. study, the response rates to four web surveys increased because of the lottery incentives. With the lottery incentives, response rates varied from a low of 53.1% to a high of 57.2%; without a lottery incentive, the control group response rates varied from a low of 43.2% to a high of 46.9% (Laguilles et al., 2011, p. 545). In the Laguilles et al. study, all response rates were above the NSSE 30% and above Fowler’s threshold for credible interpretive data, and the lottery incentives boosted the response rates above 50%. In this mixed-mode experimental study, would the response rates be above or below the NSSE 30%, and would adding the personalized mail incentive, rather than a complete-and-return mail survey form, make a difference to the response rates of a lottery-incentive web survey?
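For readers who want to see the arithmetic behind such figures, the short sketch below (Python, purely illustrative) computes a response rate from hypothetical counts that are not drawn from either study; the only point is the comparison against Fowler’s 20% floor and the NSSE 30% average.

    # Illustrative sketch only: the counts below are hypothetical, not study data.
    invited = 1000        # students invited to one web survey
    completed = 545       # students who finished the survey online
    response_rate = completed / invited * 100
    print(f"Response rate: {response_rate:.1f}%")
    # 54.5% -- above Fowler's 20% credibility floor and the NSSE 30% average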

Method

The data on response rates for this study were obtained during the fall and spring semesters of 2010, at a public non-research university in the Midwest, from an undergraduate enrollment of 5,442 students (4,313 White and 1,129 of other ethnicities) (UNK, 2011). The four survey topics were: (1) the failure of Blackboard during finals week of 2010 (IT), (2) the new 2010 campus dining centers (dining), (3) the 2012 primary elections (election), and (4) job prospects after graduation (economy). The survey topics duplicate topic categories of the earlier research by Laguilles et al. (2011, p. 544). All undergraduate students were the target population for the four topics in this study. Specific demographic characteristics were determined from registration enrollment data from the university. For each of the four experiments, there was a two-treatment group and a one-treatment control group. The data were analyzed by comparing responses of the subgroups of sex and class year.

Separate data were analyzed to track which respondents completed the surveys and also received the direct mail invitations. These data were compared to evaluate differences in response rates between the treatment and control groups.

The sample size was 1,000 students for each survey, randomly selected by encrypted survey software. Each survey ran for three weeks, from the initial contact to the final opportunity to respond. The same three lottery incentives were used for the two-treatment and single-treatment (control) groups. All email invitations and follow-up emails contained identical information across the four surveys.
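The random selection step could be reproduced with any standard sampling routine. The sketch below is a minimal illustration in Python, assuming a hypothetical roster of student IDs; the study itself relied on encrypted survey software, not this code.

    import random

    # Hypothetical roster of undergraduate student IDs (illustrative only; enrollment of 5,442).
    roster = [f"UNK{i:05d}" for i in range(1, 5443)]

    random.seed(42)                       # fixed seed so the draw is repeatable
    sample = random.sample(roster, 1000)  # 1,000 students per survey, drawn without replacement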

The two-treatment group received a direct personalized mail invitation that included a paper copy of the online survey. Each invitation included a very brief handwritten note, on paper of the writer’s choice, requesting participation in the survey; the note was signed by a student currently attending the university and included that student’s university email address. The students who wrote the handwritten notes did not know who would receive the mailed survey and their personal invitation.

All online surveys included one question: Did you receive any real mail about this survey? The software accessed and stored survey data and personal data from the university enrollment records with encryption to maintain confidentiality. Three response rates were analyzed for each of the four surveys: the log-in rate, the incomplete rate (failure to complete the survey online), and the survey-completed response rate. Chi-square tests of independence were run for each survey to determine the effect of the second treatment incentive on the response rate. The effects of the single-treatment and two-treatment incentives were tested separately, using layered cross-tabulations with chi-square tests, for gender and class-year differences.
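As a rough illustration of the chi-square test of independence described above, the sketch below applies scipy.stats.chi2_contingency to a hypothetical 2x2 table of completed versus non-completed responses for the two-treatment and single-treatment groups; the counts are invented for illustration and are not the study’s data.

    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 contingency table (invented counts, not study data):
    # rows = group (two-treatment, single-treatment); columns = (completed, not completed)
    table = [[605, 395],
             [530, 470]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
    # p < .05 would indicate that completion depends on which treatment the group received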

Results

Experiment 1: Information Technology Survey

Students in the two-treatment group were significantly more likely to click on the web-survey link and complete the survey. Students in the single-treatment group were more likely to drop out of the survey, and fewer students in the two-treatment group dropped out. More females than males completed the survey.

Experiment 2:  Dining Services

Females were significantly more likely to respond in both the single-treatment and two-treatment groups. Males were significantly more likely not to respond in either group. Students in the single-treatment group were more likely to drop out of the dining services survey than out of the information technology survey. More students dropped out of this survey than of any of the other surveys.

Experiment 3: Primary Elections Survey

Females and males were significantly more likely to respond in the two-treatment groups. Males were significantly more likely to complete the survey. This survey had the second highest response rates for both genders. All students who received a mail invitation completed the survey.

Experiment 4: Economic Issues Survey

Females and males were significantly more likely to respond in the two-treatment groups and were significantly more likely to complete the survey. This survey had significantly higher response rates than the other three experiments.

The two-treatment groups in the four experiments showed significantly higher completion rates regardless of gender. The two-treatment groups had an average response rate of 60.5%, compared with 53% for the single-treatment groups. Both rates are above the NSSE average response rate.

Discussion

Male and female students’ response rates varied depending on the survey topic. Although the two-treatment groups’ response rates indicate a significant trend toward completing a web survey regardless of gender, the results must be interpreted as applicable only to college students at this university. Laguilles et al. (2011) report a bias in their study: students at a research university may be more likely to complete surveys because of their familiarity with surveys. Again, in this study, all results need to be interpreted with awareness of that bias, although there was no statistical confirmation of it. This study selected specific random groups only within a college population, not the general population. The results from students at a non-research university show a positive correlation when compared with the earlier Laguilles et al. study, which used a targeted population of students at a research university. Patten (2009, p. 47) notes that sampling error needs to be evaluated by inferential statistics. The chi-square tests did yield results below p = .05, allowing rejection of the null hypothesis and indicating that the changes in response rates were due to the additional treatment used in the experiments.

The potential threat to external validity is captured by Patten’s (2009) question: “To whom and under what circumstances can the results be generalized?” (p. 93). The results of this study can be applied only to college students, not to the general population, which is also bombarded with web surveys. Although the response rates were above the NSSE 30% average, there is an internal threat to the validity of this second experimental study’s results because of selection bias related to student ages and because the targeted population is an intact group (Patten, 2009, p. 91). The incentives used in the lotteries for this survey were specifically selected for a college population. Laguilles et al. (2011) report that appropriate incentives are essential to the success of any lottery and also affect the response rates of participants. It is difficult to determine, either before or after a survey, how the monetary value or type of incentive affected response rates. No attempt was made in this research study to evaluate appropriate incentives. The top ten items under $200 from Amazon.com, available at wholesale cost or at no cost to the university, were used as incentives.

This study also attempted to include web social networking by including an email address in the written correspondence from a student at the university. Although no anecdotal data were evaluated in this study, there are anecdotal reports that some students contacted the University Student Center to say that the web surveys were a fun way to meet new friends on campus. Web social networking may be a positive intervention affecting how web surveys are administered to obtain needed educational data for higher education institutions.

The results of this second independent experimental research study reveal that lottery incentives help improve response rates above 30%. This second experimental study also reveals that the mixed-mode model helps improve response rates above 30%. The results of this study indicate that the mixed-mode model can improve response rates of web surveys and so help higher education institutions obtain necessary data. With that data, higher education institutions can continue to function while making responsible, informed decisions that meet the cost guidelines and expectations of citizens of communities within the United States.

Although this study is a mock research project, I hope to use this type of experimental research design in the future to obtain necessary information for a higher education institution and to contribute to informed decision-making processes.

References

Dey, E. L. (1997). Working with low survey response rates: The efficacy of weighting adjustments. Research in Higher Education, 38(2), 215-227.

Dillman, D. A. (2000). Mail and internet surveys: The tailored design method (2nd ed.). New York: John Wiley & Sons.

Fowler, F. J., Jr. (2009). Survey research methods (4th ed.). Applied Social Research Methods Series (Vol. 1). Los Angeles: Sage Publications.

Greenlaw, C., & Brown-Welty, S. (2009). A comparison of web-based and paper-based survey methods: Testing assumptions of survey mode and response cost. Evaluation Review, 33, 464-480.

Groves, R. M.  (1989). Survey errors and survey costs. New York: John Wiley & Sons.

Laguilles, J., Williams, E., & Saunders, D. (2011). Can lottery incentives boost web survey response rates? Findings from four experiments. Research in Higher Education, 52, 537-553.

Lipka, S. (2011). Want data? Ask students again and again. The Chronicle of Higher Education, August 7, 1-4.

National Survey of Student Engagement. (2011). Reports 2007-2011. Retrieved November 18, 2011, from http://nsse.iub.edu/

Patten, M. L. (2009). Understanding research methods: An overview of the essentials (7th ed.). Glendale, CA: Pyrczak.

Pike, G. R. (2008). Using weighting adjustments to compensate for survey non-response.  Research in Higher Education, 49, 153–171.

University of Nebraska at Kearney. (2011). Enrollment, University Factbook. Retrieved November 18, 2011, from http://www.unk.edu/
