Dan Williams, Jacob Hess, and Trevor Davis, with Drs. Robert Hunt and Scott Richards, Counseling Psychology and Special Education
Introduction & Objectives
The purpose of our study was to investigate fellow students’ perspectives on current efforts to address the gospel of Jesus Christ in their departmental classes. The idea of a “BYU education,” while grounded in bedrock faith, is not a static concept; rather, strategies for applying our faith to daily learning moments are constantly evolving. We were interested in how other senior students perceived current efforts in “secular” classes to interface with the gospel. We asked ourselves: What distinguishes positive experiences from negative ones? What do other students think? Are their experiences similar to ours? A foundational assumption of our project was that students have much more to say than teachers ever hear. Yes, as students we have plentiful opportunities to fill out “Teacher Evaluations” specific to each class and professor, but we sometimes wonder whether bubble-sheet ratings convey our feelings well, particularly when the spiritual dimensions of a class are summarized with a few vague items. Are other students thinking about how the gospel is addressed in “secular” classes? Is it even an issue for them? How often, and in what ways, is the gospel addressed across various departments? How often do classes pray together, and how do students feel about prayer in classes outside the religion department? What are examples of experiences where students felt comfortable or uncomfortable? From the student’s perspective, are current attempts at interfacing with the gospel appropriate? Are students generally satisfied or dissatisfied with their department’s efforts and outcomes in this regard?
Report
–Survey Development, June 1999–April 2001: A great deal of work had gone into developing the survey before we received word of the ORCA funding. During winter semester 2000, we continued to refine the survey and set it up in an online survey service called “Zoomerang,” which would make it easier to administer. Up until “launch day,” we were still gathering feedback. We asked the theoretical psychology study group for feedback after making a presentation, showed the survey to several individual professors, and administered it to a Marriage Prep class for pilot results. All of these efforts changed the survey significantly: whole questions were dropped, and new answer options were added. The survey grew longer while its questions improved in clarity and precision. We had initially planned for roughly half of our research work to be finishing the instrument, and we found that much work indeed remained to make it better.
–Initial Approval Process, January 2001: The survey was initially approved by the IRB committee at the beginning of the year. Following this, we moved into the recruiting stage of the project.
–Recruitment, February 2001: Through our faculty mentor, we requested from the BYU Assessment Office random samples of 40 seniors from each of the following eight departments: English, philosophy, biology, physics, social work, psychology, marriage and family development, and secondary education. Once these lists were obtained, we began calling every person on them. After a week of calling, we had contacted approximately half the people and sent the remainder an e-mail invitation. To each person, we explained the survey and its purpose, invited them to receive an e-mail survey, and asked for their current mailing address so they could receive the four dollars of compensation. Between phone calls and e-mails, we reached on average 30 of the 40 people on each list, for around 150 willing participants (we dropped secondary education and biology for different reasons).
–Approval of Revisions, March 2001: When we submitted a revised instrument to the IRB, problems arose. On second review, the intent of our project came under more scrutiny, and the IRB asked us to speak with BYU Student Life before proceeding. Lane Fischer at Student Life decided we needed more “standing” to proceed with the project, including approval from the BYU Academic VP, the Office of Assessment, and the department chairs of the departments we were investigating. Our goal of offering feedback to professors was the reason for this requirement.
Presumably, if we had merely wanted the information for our own curiosity, we would not have needed the same approvals. By this time the semester was nearing an end, and we were out of time to continue seeking approval; we had not budgeted this into our schedule. We reluctantly called the project off after the BYU Academic VP refused to approve it. Professors acquainted with our project, including our faculty mentor, were also hesitant to put their “official” stamp of approval on it. We spoke with the Office of Assessment and asked them to look at the survey and decide whether they might help us get it approved. As of this writing, we have not received word from their office.
Conclusions
Throughout the project, we received verbal validation from faculty, administration, and especially from other students that our project was important, needed, and exciting. However, when it came to actually completing the project, we encountered more red tape than we had anticipated.
We understand the administration’s motivation to request “standing” on a project such as this. However, the premise of our project was that students, rather than non-students, were doing the asking and the research. Administrators and faculty were hesitant to “stand by” the project because we were asking questions differently than they would have. We have wondered whether we could achieve “standing” and keep the project’s “student-based” character at the same time.
We have continued communicating with the Office of Assessment and have even suggested that they incorporate our questions into a survey instrument for seniors that they are developing. We still hope a project like this will be done. More and better feedback from senior students has great potential to improve our school.