"

11. Surveys

Dr. Sean Ashley and Christina Lennox (BA)

🎯 Learning Objectives

  • Define survey research.
  • Identify when to use survey research.
  • Describe different types of surveys.
  • Recognize the elements of effective survey questions and questionnaires.
  • Explain six types of bias in survey research.
  • Assess the strengths and weaknesses of survey research.
  • Value data sovereignty when conducting survey research with Indigenous communities.

 

Surveys have become very much a part of our everyday lives, and we’ve probably all taken one, heard about their results in the news, or even administered one ourselves. Despite being ubiquitous, constructing a good survey takes a great deal of thoughtful planning and many rounds of revisions, especially when you want valid results that can be used to shape social policy. In this chapter, we will define survey research and discuss when to use it. Additionally, we will describe different types of surveys and outline points of consideration when designing the questions, including biases that can influence survey construction. We will also outline the strengths and weaknesses of the methodology, as well as its relationship to Indigenous methodologies.

 

What is Survey Research?

Survey research is a method where researchers use standardized questionnaires to systematically collect data about people and their preferences, thoughts, and behaviours. While often used interchangeably in informal speech, the word “survey” refers to the method of gathering data, while a “questionnaire” is the list of questions used to elicit information from participants (Neuman & Robson, 2017). Survey research – particularly that which is conducted in-person – shares some elements with interviews (chapter 8), but it is a distinct method with its own set of guidelines, strengths, and weaknesses. Unlike interviews, surveys are often administered impersonally; that is, typically the person collecting the data will only interact with respondents to get their consent to participate in the research or to answer clarifying questions about survey items, leaving respondents to complete the questionnaire largely on their own.

Survey research is especially useful when a researcher aims to describe or explain trends or common features of a very large group or groups. In many ways, population surveys (like censuses, health surveys, and crime statistics) are the ways that states “see” the people who constitute the national body. In fact, the word “survey” derives from earlier Latin terms that mean “to look over,” while the word “statistics” (a common feature of survey research) originally meant the study of the condition of a state (Andersen et al., 2025).

The ability to see populations in this way is itself a form of power (Foucault, 2004/2009). As Hacking (1999) explains, statistical knowledge was originally developed to satisfy the interests of certain powerful groups. Eugenics (the practice of improving the quality of given populations, typically racial ones) was an important part of the work of early statistical pioneers, including that of Francis Galton and Karl Pearson (Hacking, 1999). These data on racial groups were not neutral but rather reflected and supported the racial hierarchies of the societies that created them (Kukutai & Walter, 2017). These uses reflect the positivist orientation of many survey projects (see more on the features of the positivist paradigm in chapter 2). The problems arise when these methods are used to uphold the status quo by reinforcing stereotypes through pathologizing narratives and misrepresenting the lived experience of dominated populations.

Despite this injurious history, survey research and quantitative data are powerful ways of knowing the world and can be used in the service of social justice goals. That is to say, surveys can also be conducted using a critical paradigm (Perez et al., 2023). Studying populations quantitatively is not a Western invention (Andersen et al., 2025). People the world over have long used quantitative approaches to understand their environment. As surveys and statistics have the power to shape public policy, it is vital that communities have a say in how they are conducted and how the data they produce are controlled.

The power dynamics inherent in survey research need to be kept in mind when working with any population but particularly for researchers working with Indigenous peoples. Statistics derived from surveys form the primary evidence used when creating Indigenous policy in Canada, but as Andersen et al. (2025) argue, “such data have never delivered benefits to Indigenous lives” (p. 60). Specifically, the data typically collected about Indigenous peoples represent knowledge that the state deems important, rather than reflecting the actual lives of the Indigenous peoples they purport to portray (Andersen et al., 2025). As such, survey researchers in Canada tend to focus on what Palawa scholar Maggie Walter calls the “Five Ds” (5D) of Indigenous data: “difference, disparity, disadvantage, dysfunction and deprivation” (Walter, 2016, as cited in Andersen et al., 2025, pp. 16-17). Taken together, 5D constructs an image of Indigenous lives as fundamentally deficient, reflecting mainstream prejudices about Indigenous culture as being marked by poverty, crime, and health problems, all of which are seen to emerge out of the “bad choices” Indigenous individuals make (Andersen et al., 2025).

It is therefore vitally important for criminologists and criminal justice researchers to avoid stigmatizing entire communities when presenting survey data. Instead, researchers should engage with the effects of colonization during every stage of their research, from data collection and analysis to the preservation of findings and the storage of data (Andersen et al., 2025). Strength-based research (i.e., research that focuses on community capacities rather than deficits) can help Indigenous communities address issues related to criminal justice in a way that does not reproduce stereotypes about community shortcomings (see chapter 1) (Andersen et al., 2025).

It is also vital that survey research, like other methods, incorporate the 5Rs (Respect, Relevance, Responsibility, Reciprocity, and Relationship) discussed in chapter 1 into every stage of the project. For surveys, this may begin with determining whether the community is interested in collecting data with a survey rather than another method. Other considerations include which questions are included, how survey data are analyzed, and how the results will be shared (see the box below for an example). In some instances, communities may ask that their data not be compared or averaged with others, or they may have a particular interest in the results of a specific question. Most importantly, if the research is not led by an Indigenous community, careful conversation and consultation must occur between the research team and the Indigenous community of interest.

 

Indigenous-Led Research

In a publication drawing on Our Health Counts (OHC) data, Muir et al. (2024) examined associations “between the rate of ever being incarcerated and family disruption, experiences of racism, and victimization for Indigenous adults in the cities of London, Thunder Bay, and Toronto, Ontario, Canada” (p. 241). This publication draws on only a small portion of the overall OHC project, an Indigenous-led, multi-phase database project focused on documenting and improving urban Indigenous health and wellbeing in six Ontario regions (Ottawa, Hamilton, Toronto, London, Kenora, and Thunder Bay).

Every stage of this project involved participation and collaboration from Indigenous community partners and leadership. This included the study design (how the entire study would be conducted), data collection (how participants are recruited and what tools are used to capture data), data analysis (how surveys are analyzed), and data interpretation (what insights and conclusions are drawn from the data). In accordance with the project’s data governance protocol and agreements, each community partner retained full ownership and control of the data collected as part of the project. Each of the communities also requested that their data be analyzed on their own rather than compared or summarized together. This differs from studies grounded in other intellectual traditions, in which the research institution (e.g., a university) and the researcher often retain ownership and control of all data and analysis.

The primary data collection tool for this study was a health survey developed through community-based partnerships in each of the three sites in Ontario (London, Thunder Bay, and Toronto). The project employed a respondent-driven sampling method, which invites initial respondents to recommend a limited number of additional participants to complete the survey instrument. Respondents were provided with monetary compensation for completing the survey and an additional amount for recruiting others who completed the survey (Rotondi et al., 2017). The surveys were unique to each community and took approximately 90 minutes to complete.

To measure justice system involvement in two communities (London, Toronto), the survey asked “‘Have you ever done time in jail?’ (‘yes’ or ‘no’); and ‘If yes, was this for a federal or provincial offense/crime?’” (p. 242). In Thunder Bay, they asked if the respondent had ever been incarcerated for 96 hours or more. Other survey questions asked about family disruption, experiences of racism, and victimization.

The study found (1) that Indigenous peoples in all three cities had disproportionately high rates of ever being incarcerated (43.0% in London, 54.0% in Toronto, 72.0% in Thunder Bay); (2) that child protection involvement and experiences of racism were associated with higher rates of previous incarceration; and (3) that in Toronto and London, experiencing victimization was linked to a higher likelihood of incarceration. Overall, Muir and colleagues (2024) point to systemic inequities that continue to have a significant impact on ever being incarcerated among Indigenous peoples in these regions. They called for reforms in funding, programming, public health interventions, practices, and policy that are grounded in a deep understanding of the ongoing impacts of colonialism.

 

Types of Surveys

Surveys come in many forms. Different types of surveys arise from differences in time (when or with what frequency a survey is administered) and administration (how a survey is delivered to respondents). This section examines what types of surveys exist when it comes to both time and administration.

 

Time

As you may recall, the time element in research design was discussed in chapter 6b, where we introduced the idea that researchers can collect data at one point in time (cross-sectional) or at multiple points in time (longitudinal). One method for collecting data in either of these ways is survey research.

Cross-sectional surveys are administered at a single point in time, offering researchers a snapshot of respondents’ lives, opinions, and behaviours when the survey was administered. One issue with cross-sectional surveys is that the events, opinions, behaviours, and other phenomena that such surveys are designed to assess don’t generally remain static. Thus, generalizing from a cross-sectional survey can be tricky as things change. For example, the Canadian Social Survey (CSS) is a rapid survey conducted by Statistics Canada every three months, each time on a different topic. In fall 2022, the CSS asked about trust in Canadian institutions, including the police and court system. It found that 62% of Canadians had confidence in the police, though less than half (42%) had faith in the justice system and courts (Statistics Canada, 2023). Trust in police and courts is likely to change depending on high-profile events, such as the Movement for Black Lives, so this survey represents a snapshot of how Canadians feel at one moment in time.

Longitudinal surveys try to overcome this problematic aspect of cross-sectional surveys. Longitudinal surveys are administered at multiple points in time. There are three types of longitudinal surveys: trend, panel, and cohort. Researchers conducting trend surveys explore how surveyed phenomena change over time in large, general groups. Though surveys are collected multiple times, those who are surveyed can be different at each data collection point. An example of a trend survey is the General Social Survey: Canadians’ Safety (formerly the GSS on Victimization), which is conducted every five years with the aim of capturing information on Canadians’ experiences of victimization. The last cycle for the survey began in 2019 and ended in March 2020, just as the COVID-19 pandemic was beginning to upend life for people in Canada (Cotter, 2021). One can imagine that the data may well have been different if the survey had been conducted during the lockdown.

Another type of longitudinal survey is called a panel survey. Unlike in a trend survey, in a panel survey the same people participate in the survey each time it is administered. For this reason, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for, say, five years in a row. Keeping track of where people live, when they move, and when they die requires resources that researchers often don’t have. Statistics Canada attempted such a survey with the National Longitudinal Survey of Children and Youth (NLSCY), which began in 1994 with the aim of studying the development and well-being of a group of Canadians from birth to early adulthood. Of interest to criminologists, the NLSCY data demonstrate a correlation between punitive parenting and aggressive behaviour in children (Thompson, 2004). Had the survey continued, it would have been interesting to see whether this aggression would continue into early adulthood, but like so many panel surveys it ended early due to funding problems in 2009.

The third type of longitudinal survey offers a middle ground between trend and panel surveys. In a cohort survey, a researcher identifies a specific rather than general category of people of interest and then regularly surveys people who fall into that category. For example, researchers may identify people of specific generations or graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific life experience in common. Similar to a trend survey, the same people don’t necessarily participate from year to year, but all participants must meet the categorical criteria for inclusion in the study.

All three types of longitudinal surveys share the strength of allowing a researcher to make observations over time. If the behaviour or other phenomenon of interest changes over time across data collection points, either because of some world event or because people age, the researcher will be able to capture those changes in their study.

In sum, when or with what frequency a survey is administered will determine whether a survey is cross-sectional or longitudinal. Longitudinal surveys may be preferable in terms of their ability to track changes over time, but the time and cost required to administer a longitudinal survey can be prohibitive.

 

Administration

Surveys vary not just in terms of when they are administered but also in terms of how they are administered. Researchers commonly use self-administered questionnaires to gather survey data. In a self-administered questionnaire, respondents receive a written set of questions to which they respond on their own and typically without assistance. As such, these types of questionnaires usually include items that are easy to read and respond to and which are completed by those who are able to answer the questionnaire without significant support from others. Self-administered questionnaires can be delivered in hard copy (i.e., paper and pencil) format or online. We’ll consider both modes of delivery here.

Hard copy self-administered questionnaires may be delivered to participants in person or via regular, paper-based mail. Researchers may ask people to fill them out right away or return them to the researchers at a later date via mail or by having the researcher return to retrieve the completed survey.

Distributing surveys door-to-door can be extremely time-consuming, so many researchers decide to send their surveys through the mail. Researchers who deliver their surveys by mail often provide advance notice, such as a letter sent a week or two before the survey itself, to get respondents thinking about and preparing to complete it. They may also follow up with their sample a few weeks after the survey has been sent out, both to prompt those who have not yet completed the survey to do so and to thank those who have already returned it. This sort of follow-up can greatly increase response rates (Creswell & Creswell, 2022).

Online surveying has become increasingly common because of its ease of use, cost-effectiveness, and speed of data collection. It’s much simpler to create a survey online, send out the link to potential respondents, and then wait for the responses to roll in. With online surveys, researchers may employ some of the same strategies as mail surveys to increase response rates, including sending advance notice by email and following up with reminders to complete the survey. To deliver a survey online, a researcher may subscribe to a service that offers online survey construction and administration. Popular tools that are often supported by universities and colleges include REDCap Surveys, Qualtrics, and SurveyMonkey, which you have likely seen before and which is shown in Figure 11.1.[1]

 

Screenshot showing SurveyMonkey interface
Figure 11.1 SurveyMonkey example

 

An example of an online survey that is relevant to criminology is the 2023 Canadian Survey of Cyber Security and Cybercrime. The purpose of the survey is to examine how Canadian businesses have been affected by cybercrime. Administered electronically, the survey items ask about the steps businesses have taken to engage in cyber security, how cybercrime has affected them, and the costs associated with proactive and reactive responses to cybercrime. You can learn more about this questionnaire by going to this website: Surveys and statistical programs – Canadian Survey of Cyber Security and Cybercrime.

When conducting an online survey, it is generally best to minimize the number of questions presented on the screen simultaneously to reduce survey fatigue and confusion. You can also include a progress bar at the top or bottom of the screen so that participants know how close they are to finishing. A progress bar also encourages participants to keep going as it makes a bit of a game out of keeping the bar moving towards completion.

Online surveys also allow for some innovative approaches that are simply not possible with paper-based surveys. For example, researchers can include video or sound clips in the survey itself. It is also much easier to construct adaptive questions that personalize a survey based on the answers provided. In a paper-based survey, you might include a contingency question that asks the respondent to “skip to question X if you answer no,” which can be a bit awkward for participants and should be minimized to reduce confusion when completing the questionnaire. This issue tends not to occur in online surveys, as the questionnaire can simply adapt and advance to the next relevant survey item without the respondent even knowing there was another pathway they could follow (Palys & Atchison, 2014).
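
To make the idea of adaptive questions concrete, here is a minimal sketch of the kind of skip logic an online survey platform applies behind the scenes. The question identifiers and wording are hypothetical; this illustrates the general technique rather than the interface of any particular tool.

```python
# A minimal sketch of contingency ("skip") logic in an online survey.
# Question IDs and wording are hypothetical, for illustration only.

def next_question(current_id: str, answer: str) -> str:
    """Return the ID of the next question this respondent should see."""
    # Only respondents who report police contact see the follow-up item.
    if current_id == "q1_police_contact":
        return "q2_contact_reason" if answer == "yes" else "q4_neighbourhood_safety"
    return "q4_neighbourhood_safety"  # default: continue along the main pathway

# A respondent who answers "no" moves straight ahead without ever seeing
# the branch that other respondents follow.
print(next_question("q1_police_contact", "no"))   # q4_neighbourhood_safety
print(next_question("q1_police_contact", "yes"))  # q2_contact_reason
```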

There are pros and cons to each of the delivery options we’ve discussed. For example, while online surveys may be faster and cheaper than mailed surveys, a researcher cannot be certain that every person in their sample will have the necessary computer hardware, software, and Internet access to complete an online survey. On the other hand, mailed surveys may be more likely to reach the entire sample, but they are also more likely to be thrown away, lost, or not returned. The choice of delivery mechanism depends on factors such as the researcher’s resources, respondents’ resources, and the time available to distribute surveys and wait for responses.

 

Table 11.1 Survey Types: Strengths and Limitations
Survey Type | Description | Strengths | Limitations
Cross-Sectional Survey | Administered at a single point in time to capture a snapshot of opinions, behaviours, or experiences. | Quick and cost-effective; good for assessing a point-in-time snapshot. | Limited ability to track change; may be outdated quickly.
Trend Survey (Longitudinal) | Administered to different people at multiple time points to identify trends in a general population. | Tracks broad trends in the population over time. | Does not track the same individuals; cannot show individual-level change.
Panel Survey (Longitudinal) | Administered to the same group of individuals at multiple time points to observe change over time. | Allows for individual-level analysis of changes over time. | Costly and complex to maintain contact with participants over time.
Cohort Survey (Longitudinal) | Administered to people who share a defining characteristic (e.g., birth year) over time. | Tracks category-specific changes over time; can help reduce panel attrition issues. | Must consistently recruit new participants who fit the category.
Mail Survey (Self-Administered) | Delivered in hard copy via mail or in person; completed and returned physically. | Accessible without internet; the tangible format may feel more formal or legitimate. | Can be lost, discarded, or ignored; slower to administer and analyze.

 

Survey Construction

Now that we have outlined some types of surveys we can employ in our research, let us turn our attention to the actual construction of survey questions. In this section, we will discuss what needs to be considered when making decisions about the content of survey questions, the wording and sequencing of questions, and survey response formats.

 

Question Content

Question content refers to the topics of the questions you want to ask in a survey. In other words, the researcher must identify what exactly they want to know. As silly as this sounds, it can be easy to forget to include important questions in a survey because it requires considerable skill to translate abstract ideas into concrete and complete measurements.

Let’s say you want to understand how people make the transition out of prison. Perhaps you wish to identify which people were comparatively more or less successful in this transition and which factors contributed to success or the lack thereof. To understand these factors, you’ll need to include questions in your survey about all the possible factors that might contribute to successful transitions into the community. Consulting the literature on the topic will help as will brainstorming on your own and talking with others about what they think may be important in the transition out of prison. Time or space limitations won’t allow you to include every single item you’ve come up with, so you’ll also need to think about ranking your questions so that you can be sure to include those that seem most important.

Although including questions on all important topics makes sense, researchers also don’t want to include every possible question they can think of, because this places an unnecessary burden on the survey respondents. Survey researchers are asking respondents to give their time and attention and to take care in responding to the questions, so asking them to complete an extremely long questionnaire just because the questions sound interesting to the researcher is disrespectful to respondents and increases the likelihood that they will not finish the entire set of questions.

 

Question-Wording

Question-wording refers to decisions that survey researchers must make about how to phrase each question. Responses obtained in survey research are very sensitive to the types of questions asked, and poorly framed or ambiguous questions may result in meaningless responses with very little value. For these reasons, survey researchers often use some common rules to evaluate their questions.

 

Rule 1. Is the Question Clear and Understandable?

Survey questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and succinct as possible. Questions should be stated in simple language, preferably in the active voice, and without complicated words or jargon that the typical respondent may not understand. Write out all abbreviations, as what may be obvious to you may be confusing to the respondent (Neuman & Robson, 2017). While criminal justice researchers may immediately think of Correctional Service Canada when they see the acronym CSC, for example, a movie lover might see Canadian Society of Cinematographers and become quite confused by the question.

 

Rule 2. Is the Question Worded Negatively?

Negatively worded questions – those containing words such as “no,” “not,” or “never” – tend to confuse respondents and can lead to inaccurate responses. For example, a question such as “Should police officers not wear body cameras?” can be confusing and may frustrate respondents as they must do the mental gymnastics required to accurately answer the question. Respondents are also more likely to agree with negatively phrased questions than questions with neutral or even positive phrasing. Survey researchers must avoid these types of questions as well as questions that include double negatives. What if a question asked, “Did you not drink during high school?” A response of “no” would mean that the respondent did drink because they did not not drink. Imagine if you had to answer these kinds of questions on a survey; your brain would quickly tire, and you would likely end up responding in a way that you did not intend to, thus calling into question the validity of your responses. Or, you may simply not finish the survey. Avoiding negative terms in the question wording helps increase respondents’ understanding.

 

Rule 3. Is the Question Ambiguous?

Survey questions should not include words or expressions that may be interpreted differently by different respondents. For instance, asking someone “Do you go downtown often?” can be interpreted in a wide variety of ways, with respondents thinking that “often” may refer to daily, weekly, or monthly (Neuman & Robson, 2017). Different interpretations like this will lead to incomparable responses that cannot be accurately analyzed.

Sometimes, regionally or culturally specific phrases can also be ambiguous, especially to respondents outside of the region or culture that uses the phrase. For example, a summer retreat home in Eastern Canada is typically referred to as a “cottage,” while in Western Canada the term “cabin” or “camp” is more common. While you might think these terms are similar and that people could understand the intended meaning, it is likely to cause confusion because people typically think the terms used in their region are the natural, normal terms for whatever it is they are talking about.

 

Rule 4. Is the Question Double-Barreled?

Double-barreled questions are those that ask multiple questions as though they are a single question. This can be confusing and frustrating for survey respondents. Consider how a respondent might answer the following question: “How well do you think the police are doing at protecting and serving the people in your neighbourhood?” What if they thought the police were doing a good job protecting people in the neighbourhood but not serving them? Or what if they thought the police were doing a good job serving people in the neighbourhood but not protecting them? This is a double-barreled question because it’s really asking two separate questions:

 

Question 1. How well do you think the police are doing at protecting your neighbourhood?

Question 2. How well do you think the police are doing at serving your neighbourhood?

 

Because the original question combines protecting and serving, it’s a double-barreled question; therefore, it is recommended that this one question be divided into two separate questions.

 

Rule 5. Is the Question Too General or Too Specific?

There is a fine line between being too general and too specific in question wording. Questions that are too general may not accurately convey respondents’ perceptions. If a researcher asked someone how they liked a particular program and provided a set of responses ranging from “not at all” to “extremely well,” it would be unclear what the responses mean. Instead, more specific behavioural questions, such as “Would you recommend this program to others?” or “Do you plan to enroll in other programs offered by the same group?” can better assess people’s perceptions of the program. Likewise, instead of asking how big a respondent’s neighbourhood is, a researcher could ask how many people live on the respondent’s block or street.

Questions that are too specific may be unnecessarily detailed and serve no specific research purpose. For example, if a researcher is interested in annual household income, asking a respondent to report the adjusted gross income on their last tax return may be too specific unless it serves a particular purpose for the research goals. Generally, asking respondents to estimate their annual household income or choose from a range of possible income options would be sufficient for the purposes of gathering basic demographic information. At the same time, if a researcher thinks the detailed data might be important for the study, then they should err on the side of requesting too much detail rather than not enough.

 

Response Formats

Response options are the answer choices you provide to the people taking your survey. Providing respondents with unambiguous response options is an important part of designing effective survey questions. Generally, survey questions ask respondents to choose a single (or best) response to each question, though in some cases, respondents are asked to choose multiple response options.

When writing an effective closed-ended question, researchers must follow a few guidelines, some of which we have introduced to you more generally in chapter 6a when reviewing the levels of measurement. First, the response options must be mutually exclusive, meaning they must not overlap. For example, if a question asks a respondent to report how many times they’ve interacted with the police in the past year and provides the options of 1–3 times, 3–5 times, and 5–7 times, what category would a person choose if they’d interacted with the police 3 or 5 times? To ensure that the options are mutually exclusive, the researcher could rewrite the response options to be 1–3 times, 4–6 times, and 7–9 times. To be sure that respondents can answer accurately, the categories provided must not overlap.

You might have noticed another problem with the response options presented above. What if a person had interacted with the police 0 times or 10 times? These options aren’t provided, so what option would they choose? This points to another guideline you have been introduced to as well: response options must be exhaustive. In other words, the set of responses provided must cover every possible response. In the example above, the researcher could add categories for 0 times and more than 9 times to make the list exhaustive.
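
The logic of mutually exclusive and exhaustive options can be expressed as a short sketch: every possible answer must map to exactly one category. The function below is a hypothetical illustration (not part of any survey platform) that assigns a respondent’s raw count to the revised options described above.

```python
# A minimal sketch showing that the revised response options for "How many
# times have you interacted with the police in the past year?" are mutually
# exclusive and exhaustive. The categories follow the example in the text.

def interaction_category(count: int) -> str:
    """Assign a raw count to exactly one response option."""
    if count < 0:
        raise ValueError("count cannot be negative")
    if count == 0:
        return "0 times"
    if count <= 3:
        return "1-3 times"
    if count <= 6:
        return "4-6 times"
    if count <= 9:
        return "7-9 times"
    return "more than 9 times"

# Every possible count falls into one and only one category:
for n in (0, 3, 5, 10):
    print(n, "->", interaction_category(n))
```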

Another consideration for response options involves the number and type of options, also called levels of measurement. Researchers can choose among four levels of measurement: nominal, ordinal, interval, or ratio. In the context of survey research, nominal response options signify that the survey question presents two or more options that have no inherent order. Dichotomous response options (a type of nominal level of measurement) are those in which a respondent must choose one of two possible choices, such as yes/no or agree/disagree. For example, the question “Do you think the death penalty is justified under some circumstances: yes / no” is dichotomous because there are only two answer choices given. Nominal-level response options can also involve more than two answer choices. For example, the question “What is your industry of employment: manufacturing / consumer services / retail / education / healthcare / tourism & hospitality / other” presents nominal response options because there are more than two categories, and they have no inherent order. The categories are simply names or labels, and no one category is inherently more or less than another.

By contrast, ordinal response options present more than two options that can be ordered. For example, the question “What is your highest level of education (choose one): some high school / high school diploma / some college, no degree / associate’s degree / bachelor’s degree / some graduate school / graduate degree” has more than two options, and those options can be ordered (from least to most education).

One common type of survey question with ordinal response options is the Likert scale, which is shown in Figure 11.2. Likert scales measure people’s attitudes, behaviours, or perceptions on a scale. For example, a Likert item may ask whether you “strongly agree,” “somewhat agree,” are “neutral,” “somewhat disagree,” or “strongly disagree” with the following statement: “The police in my neighbourhood treat all people fairly.” Note that it is important to include a neutral option, as some people may have no opinion on the statement at all.

 

Screenshot showing how a Likert scale question renders in SurveyMonkey
Figure 11.2 Likert Scale Question Example

Interval or ratio response options involve options for which respondents enter a number as their answer. For example, asking for a respondent’s age and providing a blank space for them to write in their answer would be an interval/ratio response option. Ratio response options include the possible response of zero, making it possible to conduct certain statistical analyses. Age would be a ratio response as it is impossible to be less than 0 years old. Interval responses, on the other hand, can be less than 0, such as the way we measure temperature in Celsius.
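
For analysis, these levels of measurement translate into different coding schemes. The snippet below is a minimal sketch with made-up codes, assuming no particular statistical package; it simply shows that ordinal (Likert) codes carry a meaningful order, nominal codes are labels only, and interval or ratio responses are stored as numbers directly.

```python
# A minimal, illustrative sketch of coding survey responses by level of measurement.

likert_codes = {            # ordinal: categories have a meaningful order
    "strongly disagree": 1,
    "somewhat disagree": 2,
    "neutral": 3,
    "somewhat agree": 4,
    "strongly agree": 5,
}

industry_codes = {          # nominal: numbers are arbitrary labels with no order
    "manufacturing": 1,
    "retail": 2,
    "education": 3,
    "other": 4,
}

age_years = 34              # ratio: a true zero exists, so zero is a possible value
temperature_c = -5          # interval: zero is arbitrary and values can be negative

response = "somewhat agree"
print(likert_codes[response])  # 4 -- a higher code means stronger agreement
```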

Thus far, we’ve discussed response formats for closed-ended questions, which are more typical in quantitative survey research. Though surveys are predominantly quantitative, researchers sometimes also include open-ended questions in their questionnaires to gather additional information from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. Survey researchers use these questions to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. Allowing participants to share some of their responses in their own words can add substantive depth and detail and may reveal new motivations or explanations that had not occurred to the researcher. However, including these types of questions poses considerable challenges for data analysis and generalizability. Like interview transcripts, these types of responses often take a lot of time to analyze.

 

Question Sequencing

In addition to constructing quality questions and posing clear response options, researchers must also think about how to present their written questions and response options to respondents. One of the first steps after writing survey questions is to group the questions thematically. In the example of the transition from prison, perhaps we’d have a few questions asking about daily routines, others focused on support systems, and still others on exercise and eating habits. There’s no one way to organize the questions, but researchers must deliberately choose an order that makes sense given the goals of the research.

Once a researcher has grouped similar questions together, the next consideration is the order in which to present the question groups. Questions should flow logically from one to the next: from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Researchers disagree about where to place demographic questions, such as those about a person’s age, gender, and race. Placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. However, if the survey deals with a very sensitive or difficult topic, such as child sexual abuse or other criminal activity, you don’t want to scare away respondents or shock them by beginning with the most intrusive questions. Some other general rules for question sequencing include starting with a closed-ended question, asking questions in chronological order if they relate to a sequence of events, and asking about one topic at a time rather than switching between topics with every question.

In the end, the order in which a researcher presents survey questions depends on the unique characteristics of the research. Only the researcher, preferably in consultation with people willing to provide feedback, can determine how best to order the questions. To do so, the researcher might consider the unique characteristics of the topic, the questions and, most importantly, the sample. Keeping in mind the characteristics and needs of the people being asked to complete the survey can help guide decisions about the most appropriate order in which to present the survey questions.

When researchers think they have a good questionnaire ready for respondents, they pretest the survey before sending it out. Pretesting refers to the process of having a few people take the survey as if they were real respondents to identify any issues with the question content, wording, response options, or sequencing. By pretesting a questionnaire, researchers can find out how understandable the questions are, get feedback on the question wording and order, and learn whether any of the questions are unintentionally boring or offensive. The researcher can also ask pretesters to keep track of how long it takes them to complete the survey, which provides valuable information on whether the researcher needs to cut some questions and what they should tell respondents about how long to expect to spend completing the survey. In general, surveys should take no longer than 10–15 minutes to complete. Any longer and respondents may be more likely to refuse to participate, or they may not complete the entire questionnaire.

In sum, designing effective questions and questionnaires requires thoughtful planning that accounts for the goals of the research and respects respondents’ time, attention, trust, and confidentiality. Keeping the survey as short as possible, limiting the questions to only those necessary for the research project, and providing information about the confidentiality of responses, how data will be used (e.g., for academic research), and how the results will be reported (usually, in the aggregate) will all increase the chances that the researcher will gather quality data while respecting their respondents.

 

🧠 Stop and Take a Break!

Test your knowledge by answering a few questions on what you have read so far.

 

Bias in Survey Research

Survey research also has some unique considerations related to the goals of generalizing findings from the sample to the broader population. These potential biases include non-response bias, volunteer bias, sampling bias, social desirability bias, recall bias, and identity bias. While some of these biases apply to multiple research methods, they may be particularly relevant in survey research that aims for generalizability from a sample to a population, which is why we’ll discuss them in this chapter.

 

Non-Response and Volunteer Bias

Survey research can yield notoriously low response rates. For example, a response rate of 10–20% is typical in a mail survey, even after sending two or three reminders to potential respondents (Palys & Atchison, 2014). While there is no universal “good” rate and context matters a great deal, 50% or more would generally be considered good for any survey.
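
As a quick illustration of the arithmetic behind a response rate, the following sketch uses hypothetical figures (not drawn from any of the studies cited in this chapter).

```python
# A minimal sketch of a response-rate calculation using hypothetical figures.
surveys_mailed = 1000
completed_and_returned = 160

response_rate = completed_and_returned / surveys_mailed * 100
print(f"Response rate: {response_rate:.1f}%")  # 16.0%, within the typical 10-20% range for mail surveys
```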

Low response rates can introduce bias into your survey, as there may be something different between those who answer and those who do not, a problem known as volunteer bias. Volunteer bias occurs because the type of people who volunteer to fill out surveys may differ from the general population: they tend to be more highly educated, more likely to be female, less conforming, and more likely to seek arousal and approval than the general population (Rosenthal & Rosnow, 1975).

Some strategies that help to improve response rates include making the survey as short as possible with clear questions that are easy to respond to, sending multiple follow-up requests for participation, providing incentives (e.g., cash or gift cards, giveaways, entry into a draw) to compensate people for the time and inconvenience of participating, and assuring potential respondents of the confidentiality and privacy of their data. Indicating that the survey is affiliated with a public institution (like a college or university) can also help assure potential respondents that the survey is legitimate (Dantzker et al., 2018). Finally, volunteer bias can be reduced by ensuring the topic is of interest to respondents and that they see the value of the research, which can be done by involving members of the researched community in all stages of the research process.

 

Sampling Bias

As discussed in chapter 7b, sampling bias occurs when the people selected for inclusion in a study don’t represent the larger population that the researcher is interested in studying. A particular concern in survey research relates to how the researcher administers the survey. For example, online surveys tend to include a disproportionate number of students and younger people and systematically exclude people with limited or no access to computers or the Internet. Further, any surveys that respondents must read and answer on their own will exclude people who are unable to read or do not understand the language used in the survey.

 

Social Desirability Bias

Many people try to avoid expressing negative opinions or making embarrassing comments about themselves, their employers, or their family or friends. On a survey, researchers may not get truthful responses to questions that require expressing these kinds of negative views. Instead, respondents might spin the truth to portray themselves or people they know in a positive, or socially desirable, light. This is what we refer to as social desirability bias. For example, respondents might try to protect their family, friends, and neighbours by saying that they disagree with statements such as, “My family tends to get on my nerves,” “There are a lot of political conflicts in my neighbourhood,” or “My friends often engage in activities that are against the law,” even though they may agree to some degree with the statements. While researchers can never know for sure how social desirability bias might impact responses to survey questions, they can try to lessen it by assuring confidentiality (and anonymity, if possible), allowing respondents to complete their surveys in private and return them in sealed envelopes, and telling respondents that they can skip any question they do not want to answer.

 

Recall Bias

In this type of bias, respondents may not fully or accurately remember past events or their own motivations or behaviours in relation to those events. You might experience recall bias when someone asks about your weekend. Even if it’s Monday, when someone says, “What did you do this weekend?” you might not be able to answer the question. After some thought, you can probably bring back the memory, but you might not remember every detail, emotion, or motivation behind your actions over the weekend. What if someone asks you about some event last month, last year, or even years ago? How likely is it that you’d remember the event in detail?

 

The same issue with remembering events happens in survey research. If a survey asks respondents to note how often they used alcohol and drugs during high school or even just a few weeks ago, they might not remember exactly how often they engaged in those behaviours in the past. Researchers can somewhat mitigate recall bias by anchoring respondents’ memories in specific events as they happened. For example, a survey might ask respondents to think about an occasion when they drank alcohol while in high school and report on specific aspects of that event. Then, the survey could ask respondents to estimate how often those kinds of specifics occurred throughout their high school years. While not a perfect solution, this kind of anchoring can help mitigate some of the concerns associated with recall bias.

 

Identity Bias

Lastly, because surveys rely on respondents’ self-identification, it can be difficult to know what criteria a person is using when responding to identity-based questions. This bias, called identity bias, can be particularly problematic when conducting research on Indigenous issues, as there is a growing recognition that many people claim Indigenous identity without being recognized as such by any Indigenous nation. For Métis, there are two competing definitions; it is often difficult to know whether a person is indicating a connection to the Métis nation or whether they are indicating that they have both an Indigenous and a non-Indigenous background (Andersen et al., 2025). This not-knowing can make policy construction difficult and highlights the need to have Indigenous communities involved in the survey construction to ensure questions about identity are properly framed.

 

Table 11.2 Types of Bias
Type of Bias | Definition | Cause | Mitigation Strategies
Non-response Bias | Bias that occurs when selected individuals do not respond to the survey. | Respondents differ in important ways from non-respondents. | Increase follow-ups, offer incentives, ensure confidentiality, shorten the survey.
Volunteer Bias | Bias that arises when the people who choose to participate differ systematically from those who do not. | Volunteers are more likely to be educated, female, approval-seeking, and interested in the topic. | Ensure the broad appeal of the topic, involve community members in the research design.
Sampling Bias | Bias that occurs when the sample does not accurately represent the population of interest. | Survey administration excludes certain groups (e.g., those without internet, literacy barriers). | Use multiple survey modes; ensure accessibility in language, format, and delivery.
Social Desirability Bias | Respondents give answers they believe are socially acceptable rather than truthful. | Fear of judgment, desire to present oneself positively. | Ensure anonymity/confidentiality, allow private completion, avoid leading questions.
Recall Bias | Inaccurate or incomplete memory of past events or behaviours. | Respondents forget details or misremember timing and frequency. | Use memory anchors, ask about specific events, shorten recall period.
Identity Bias | Uncertainty or inconsistency in how individuals interpret and report identity-based questions. | Self-identification may not align with community or legal definitions (e.g., Métis identity). | Involve communities in question design, clarify definitions, and consider community recognition criteria.

 

Strengths and Limitations of Survey Research

Survey research has several benefits compared to other research methods. First, surveys are an effective way to measure a wide variety of unobservable data such as people’s preferences (e.g., political ideologies), traits (e.g., self-esteem), attitudes (e.g., toward people with criminal records), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking behaviour), or demographic information (e.g., income).

Second, survey research allows for the remote collection of data from many people relatively quickly and with minimal expense (Palys & Atchison, 2014). With surveys, an entire province can be covered using representative sampling techniques. Mailing a written questionnaire to 500 people entails significantly fewer costs and less time than visiting and interviewing each person individually. Plus, some respondents may prefer the convenient, unobtrusive nature of surveys to more time-intensive data collection methods such as interviews.

Related to the benefit of cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in chapter 7b. Of all the data collection methods described in this text, survey research may be the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.

As with all methods of data collection, survey research also comes with some drawbacks. First, surveys may be flexible in the sense that researchers can ask many questions on many topics, but once the researcher has written and distributed the questionnaire, they are usually stuck with a single instrument for collecting data (the questionnaire) regardless of any issues that may arise later. For example, imagine you mail out a survey to 1,000 people and then, as responses start coming in, you discover that respondents find the phrasing of a particular question confusing. At this stage, it would be too late to start over or to change the question for the respondents who haven’t yet returned their surveys.

Validity can also be a problem with surveys. Because survey questions are standardized, it can be difficult to ask anything other than very general questions that a broad range of people will understand. As a result, survey findings may not be as valid as results obtained using methods of data collection that allow a researcher to comprehensively examine the topic being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect a politician who supports drug decriminalization. On a questionnaire, you might ask, “If your political representative supported decriminalizing drugs, would you vote for them if they were qualified for the job?” and provide the options of answering either “yes” or “no.” What if someone’s answer was more complex than could be answered with a simple yes or no? In an interview, the respondent and interviewer could have a conversation about the intricacies of a respondent’s answer to this type of question; however, standardized questionnaires often cannot allow for the same range and depth of responses as might be found in other research methodologies.

Another consideration is that surveys can be designed to allow the participant to respond without supervision. Although the remoteness of mail and digital surveys is a strength, it also means that the researcher loses the ability to verify who actually took the survey (Palys & Atchison, 2014). For example, imagine we were working with an Indigenous Nation in Canada whose leaders and Elders circle have asked to learn more about a new community safety patrol program they started last year. We decide to conduct a survey, offering a small prize, and distribute it via a community email newsletter and through social media. The survey is sent out as expected, and we receive many responses almost instantly. However, we notice that when asked to describe their experiences, half of the respondents give identical answers. After some investigation, we suspect a software program was deployed to repeatedly fill out the survey to win the prize. In this imaginary study, although we had hundreds of completed surveys, we could not verify which responses were human and which were computer-generated.
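
One simple way to screen for the problem described above is to count identical open-ended answers. The sketch below uses hypothetical response text; duplicates do not prove that responses were automated, but they do flag records for closer review before analysis.

```python
# A minimal sketch (hypothetical data) of flagging duplicated open-ended
# answers, one rough signal of possible automated or copied responses.
from collections import Counter

open_ended_answers = [
    "The patrol makes me feel safer walking at night.",
    "Great program!",
    "Great program!",
    "Great program!",
]

counts = Counter(answer.strip().lower() for answer in open_ended_answers)
duplicates = {text: n for text, n in counts.items() if n > 1}
print(duplicates)  # {'great program!': 3} -- worth investigating before analysis
```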

 

Table 11.3 Strengths and Limitations of Survey Research
Strengths | Limitations
Can measure a wide variety of unobservable data. | Cannot change questions after the questionnaire has been distributed.
Allows data to be collected from many people quickly and with minimal expense. | May be less valid due to limited variation and depth in responses, and because people (or even computer programs) other than those intended might complete it.
Strong potential for generalizing to larger populations. |
Use of standardized questionnaires allows for consistency. |

 

 

🧠 Stop and Take a Break!

Test your knowledge by answering a few questions on what you have read so far.

 

Survey Data and Indigenous Peoples

Given the power surveys have to shape policy, Indigenous organizations have been formed to ensure that survey data are properly governed by Indigenous Nations, communities, and other stakeholders. Data sovereignty means “managing information in a way that is consistent with the laws, practices and customs of the nation-state in which it is located” (Kukutai & Taylor, 2016, as cited in First Nations Information Governance Centre [FNIGC], 2019, p. 60). Chapter 1 discussed the creation of OCAP®, which stands for ownership, control, access, and possession. Ownership refers to the fact that a community owns their information collectively; control ensures that communities and their representatives have the right to seek control over the way research is conducted at all stages of a project; access refers to the principle that Indigenous communities make decisions around who has access to the data; and possession refers to the concrete reality of having physical control over the data (FNIGC, 2019).

OCAP® grew out of the First Nations Regional Longitudinal Health Survey and has become the de facto standard for conducting research in First Nations communities (FNIGC, 2019). Although it originated in the context of First Nations research, the principles are also relevant to Inuit, Métis and other Indigenous Peoples (Schnarch, 2004). Métis researchers have also been developing their own principles regarding data sovereignty. For example, the Saskatchewan Métis Health Research and Data Governance Principles© were created in 2023 by Drs. Caroline Tait and Robert Henry in partnership with the Métis Nation Saskatchewan to govern health-related research (Anderson et al., 2025). The principles can also be applied to other areas of research, including education, housing, and justice-related topics.

Conclusion

Survey research is a common quantitative method that allows researchers to sample a large number of people and to learn about their attitudes, beliefs, and opinions on a wide range of topics. This chapter outlines the key features of survey design, including how the element of time applies to survey research design, the ways surveys can be administered, and the various ways questions and questionnaires can be constructed to increase validity and response rates. Biases to consider, as well as the strengths and limitations of the method, are also reviewed.

Throughout the chapter, special attention is paid to the importance of considering the power dynamics involved when employing this method with Indigenous participants and in Indigenous communities. The ways in which we can ensure those we are researching are respected and consulted at each stage of the survey research process are also highlighted. After all, like other quantitative methods, the potential to misuse or misinterpret survey results is ever present, and the onus remains on us as researchers to protect our participants from such harms.

 

✅ Summary

  • Survey research is a data collection method in which researchers use standardized questionnaires to systematically collect data about people in their sample.
  • Two types of surveys are cross-sectional surveys, which are administered at one point in time, and longitudinal surveys, which are administered multiple times. There are three types of longitudinal surveys: trend, panel and cohort.
  • Surveys can be self-administered in hard copy format or online, each of which has advantages and disadvantages.
  • Designing effective questions and questionnaires requires that careful thought be given to the question content, wording, response options, and sequencing.
  • Researchers must try to reduce the chances of various types of biases occurring in survey research. These include non-response, volunteer, sampling, social desirability, recall, and identity bias.
  • Some of the benefits of survey research include the ability to measure a wide variety of information, collect data from many people quickly with relatively minimal expense, generalize to larger populations, and ensure consistency across questions and answers.
  • Some of the drawbacks of survey research include not being able to change questions once surveys have been sent out and the potentially lower validity of answers compared to more in-depth and personal research methods.
  • Indigenous organizations have established frameworks such as OCAP® (Ownership, Control, Access, Possession) to ensure data sovereignty, meaning data are governed in accordance with Indigenous laws, customs, and community decision-making.

 

🖊️ Key Terms

closed-ended questions: quantitative interview questions that include a list of pre-determined response options from which the respondent must choose.

cohort survey: a longitudinal survey administered at more than one point in time to the same sub-category of people within the general population with the goal of examining overall changes over time within that same sub-category. An example of a cohort is people who are all born in the same decade.

contingency question: a question that is only asked to certain respondents based on their answers to a previous question.

cross-sectional survey: a survey that is administered once, with no follow-up, providing a snapshot of respondents’ opinions, beliefs, or preferences at that one point in time.

data sovereignty: managing information in a way that is consistent with the laws, practices and customs of a particular nation.

dichotomous response options: when a closed-ended survey question has only two response options, which are often exact opposites. For example, yes and no are dichotomous response options.

double-barreled question: a survey question that includes two distinct issues in the same question. This type of wording should be avoided in survey construction.

eugenics: the attempt to improve human populations through selective breeding and sterilization.

exhaustive: closed-ended question response options are deemed exhaustive when all possible responses are captured in the response options included in the survey question. Often, the response option “other” is included to ensure the options are indeed exhaustive.

identity bias: uncertainty or inconsistency in how individuals interpret and respond to identity-based questions.

interval/ratio response options: when a closed-ended survey question has response options that are numeric and can be rank ordered. Ratio response options include a true zero point, making it possible to conduct certain statistical analyses. For example, it is possible that someone has committed zero crimes.

level of measurement: this refers to the type and nature of response options in a survey. For example, a response option may be a word or a numerical value, and this will determine the type of analysis that can be conducted. In the social sciences, there are four levels of measurement: nominal, ordinal, interval and ratio.

Likert scale: a rating scale used in survey research that measures attitudes, opinions, or perceptions where respondents are asked to indicate their level of agreement or disagreement with a given statement.

longitudinal survey: a survey that is administered at more than one point in time, allowing changes in responses to be recorded. There are three types of longitudinal surveys: trend, panel, and cohort.

mutually exclusive: closed-ended question response options are deemed mutually exclusive when they do not overlap and respondents’ answers can be categorized into only one response option.

negatively worded question: a survey question that includes the words “no”, “not”, or “never”. This question wording should be avoided in survey construction.

nominal response options: when a closed-ended survey question has two or more response options that are words that cannot be rank ordered. Gender is an example of a nominal variable, with the nominal response options of male, female, transgender, etc.

open-ended question: qualitative interview questions that do not include possible response options but rather require the interviewee to provide responses in their own words.

ordinal response options: when a closed-ended survey question has two or more response options that can be rank ordered. For example, socioeconomic status can be ranked as low, middle and high. Note that no specific numeric value is assigned to each of the ranks at the ordinal level of measurement.

panel survey: a longitudinal survey administered at more than one point in time to the exact same people, thus allowing individual changes to be recorded.

pretesting: this involves having a small number of people complete the research instrument in question (e.g., the survey) before it is distributed to the study sample in order to address any issues and rectify any ambiguities at this preliminary stage.

recall bias: inaccurate or incomplete memory of past events or behaviours.

response options: the answer options that respondents can select from when responding to quantitative, closed-ended survey questions.

response rate: the percentage of responses you receive relative to the number of surveys you distribute.
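
As a quick illustration, the response rate is a simple percentage calculation; the figures below are hypothetical and used only for illustration.

\[
\text{response rate} = \frac{\text{completed questionnaires returned}}{\text{questionnaires distributed}} \times 100\% = \frac{150}{500} \times 100\% = 30\%
\]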

sampling bias: bias that occurs when the sample does not accurately represent the population of interest.

self-administered questionnaire: a survey that is completed by the respondent on their own time, either in hard copy or online.

social desirability bias: occurs when people attempt to present a better image of themselves than might really be the case.

strength-based research: research that focuses on community capacities rather than deficits.

survey research: a method of collecting data by asking people questions, typically through questionnaires, to gather information about their attitudes, behaviours, or personal characteristics.

trend survey: a longitudinal survey administered at more than one point in time to the same target population with the goal of examining overall changes in that population. The respondents are not necessarily the same at each point in time.

volunteer bias: a data distortion that occurs when the people who volunteer to participate are somehow different from the rest of the general population.

 

🧠 Chapter Review

Crossword

Fill in the term in the right-hand column and it will display in the crossword puzzle. Be sure to include spaces where appropriate.

 

Discussion Questions

  1. Based on a research question you have identified through earlier exercises in this text, write a few closed-ended questions you could ask in a questionnaire on the topic. Now, use the information in this chapter to critique your questions based on the content, wording, and response options.
  2. What are some of the reasons why a researcher might choose a cross-sectional survey over a longitudinal survey?
  3. How can survey researchers ensure they are conducting ethical research when working with Indigenous communities?
  4. If you were to conduct survey research, would you choose to deliver the questionnaire in hard copy or online? Why?
  5. If you were to develop a questionnaire based on a research question you have identified through earlier exercises in this text, which topics would you cover in the beginning, middle, and end of your survey? Why would you choose that particular sequence of topics?

Further Reading


References

Andersen, C., Walter, M., Kukutai, T., & Gabel, C. (2025). Indigenous statistics: From data deficits to data sovereignty (2nd ed.). Routledge. https://doi.org/10.4324/9781003173342

Cotter, A. (2021, August 25). Criminal victimization in Canada, 2019. (Juristat, Catalogue no. 85-002-X). Statistics Canada. https://www150.statcan.gc.ca/n1/pub/85-002-x/2021001/article/00014-eng.htm

Creswell, J. W., & Creswell, J. D. (2022). Research design: Qualitative, quantitative, and mixed method approaches (6th ed.). Sage.

Dantzker, M. L., Hunter, R. D., & Quinn, S. T. (2018). Research methods for criminology and criminal justice (4th ed.). Jones & Bartlett Learning.

First Nations Information Governance Centre. (2019). First Nations data sovereignty in Canada. Statistical Journal of the IAOS, 35(1), 47-69. https://doi.org/10.3233/SJI-180478

Foucault, M. (2009). Security, territory, population: Lectures at the Collège de France, 1977–78 (G. Burchell, Trans.). Palgrave Macmillan. (Original work published 2004)

Hacking, I. (1999). The social construction of what? Harvard University Press.

Kukutai, T., & Walter, M. (2017). Indigenous statistics. In P. Liamputtong (Ed.), Handbook of Research Methods in Health Social Sciences (pp. 1-16). Springer. https://doi.org/10.1007/978-981-10-2779-6_40-1

Muir, N. M., Rotondi, M., Brar, R., Rotondi, N. K., Bourgeois, C., Dokis, B., Hardy, M., Maddox, R., & Smylie, J. (2024). Our health counts: Examining associations between colonialism and ever being incarcerated among First Nations, Inuit, and Métis people in London, Thunder Bay, and Toronto, Canada. Canadian Journal of Public Health, 115(Suppl 2), 239-252. https://doi.org/10.17269/s41997-023-00838-6

Neuman, W. L., & Robson, K. (2017). Basics of social research: Qualitative and quantitative approaches (4th ed.). Pearson Canada.

Palys, T., & Atchison, C. (2014). Research decisions: Quantitative, qualitative, and mixed methods approaches (5th ed.). Nelson.

Perez, W., Espinoza, R., & Melendrez, M. (2023). Critical survey research. In M. D. Young & S. Diem (Eds.), Handbook of Critical Education Research (pp. 612-629). Routledge. https://doi.org/10.4324/9781003141464-36

Rosenthal, R., & Rosnow, R. L. (1975). The volunteer subject. Wiley.

Rotondi, M. A., O’Campo, P., O’Brien, K., Firestone, M., Wolfe, S. H., Bourgeois, C., & Smylie, J. K. (2017). Our Health Counts Toronto: Using respondent-driven sampling to unmask census undercounts of an urban Indigenous population in Toronto, Canada. BMJ Open, 7(12), Article e018936. http://doi.org/10.1136/bmjopen-2017-018936

Schnarch, B. (2004). Ownership, Control, Access, and Possession (OCAP®) or self-determination applied to research: A critical analysis of contemporary First Nations research and some options for First Nations communities. Journal of Aboriginal Health, 1(1), 80-95. https://jps.library.utoronto.ca/index.php/ijih/article/view/28934

Statistics Canada. (2023, November 23). Do Canadians have confidence in their public institutions? StatsCAN Plus. https://www.statcan.gc.ca/o1/en/plus/5041-do-canadians-have-confidence-their-public-institutions

Thompson, E. M. (2004). Aggressive behaviour outcomes for young children: Changes in parenting environment predicts change in behaviour. (Children and Youth Research Paper Series, Catalogue no. 89-599-MIE). Statistics Canada. https://www150.statcan.gc.ca/n1/en/catalogue/89-599-M2004001

 


AI Declaration Statement

Artificial Intelligence Tool: ChatGPT 4.0 (OpenAI); Visualization: The creation of Tables 11.1 and 11.2 using content written by the authors. The tables were then edited by the authors.


  1. While the homepages for these services are provided, for data security reasons use the links supplied by your institution. These can be accessed through your institution’s website.