
Working Towards a Coordinated National Approach To Services, Accommodations And Policies For Post-Secondary Students With Disabilities

Appendix One: A Methodology for Best Practices Evaluation

This discussion details the factors and rationale underlying the main research decisions and procedures followed in the current project. Background factors and methodological issues guided key decisions at every stage of the research design: for instance, the development of two standardized questionnaires with accompanying open-ended questions to provide in-depth information, the specification of target populations, and the data collection and management procedures.

It is useful to situate the current project within the broader history and traditions of program evaluation, as well as within the more recent development and increasing popularity of Best Practices and partnership-driven evaluation models. The project itself included an ongoing process of partner consultation to develop evaluation criteria and data collection protocols. These contexts and trends provided central guidelines and rationale for the development of a methodology oriented to Best Practices evaluation. For instance, the requirements of Best Practices models are supported by surveying both service providers and students, and by considering both quantitative and qualitative responses. These multiple measures provide highly detailed information on programs and practices, and support the assessment of, for instance, the viability of importing them into different institutional settings.

The current approach is to survey Best Practices with an exploratory focus, rather than, for instance, to ensure as representative a sample as possible. At the same time, the use of standardized data collection methods such as semi-structured questionnaires allows for aggregate comparisons. Gathering as many varied opinions and perspectives as possible is intended to support the reliability and validity of the findings and to make the results more trustworthy.

Evaluation and the Best Practices model

Bramley (1996) identifies an important type of evaluation research, pre-program evaluation, in which a program is assessed prior to being implemented - as opposed to during or after implementation. Pre-program evaluation focuses on such issues as program viability, required resources and potential side-effects prior to implementation, and therefore represents a significant means of guiding management decision-making. An important and widely-used pre-program evaluation approach is the best practices model. While best practices models have taken many forms, the common feature is the identification and importation of alternate, better practices from other organizations or divisions within organizations.

As a result, many suggest the term 'better practices' is more precise, since the main prerequisite for a best practice is that it be better than the current practice. The term best practices gained currency in the 1980s through the rising popularity of awards given to what award agencies deemed the best practice among organizations or divisions within organizations.

Keehley et al (1997) note that, though much has been written on the subject, there is little agreement as to how to define best practices (p.19). They identify three common ways of defining best practices:

  1. a best practice is anything better than your current practice - as they argue, the term may be popular because it implies there is a best way of doing something in an organization, yet it is relative, depending on the 'discoverer' of a best practice and the organization to which they belong; they caution that using a strictly relativistic definition of best practice can lead an organization to import a harmful rather than a beneficial practice (p.20);
  2. a best practice is declared by the media or others (p.21) - this implies that a best practice is whatever is officially proclaimed, rather than something researched and defined consistently;
  3. a best practice is an award-winning success - after an award granting association has deemed it a best practice (p.24).

They conclude, however, that none of the common ways of defining Best Practices is adequate for applying a best practices model, specifically because none includes the process of importing practices:

"we strongly believe that thousands of public organizations have benefited by importing practices discovered from other organizations. The problem is that the academic and practitioner communities have failed to articulate adequate definitions of best practices and to understand how various practices have been successfully imported" (Keehley et al, 1997, p.25).

The current project attempts to enhance the understanding of Best Practices so that practices may be considered, adapted and imported more successfully. A key orientation of the current research is therefore to provide aggregate ratings as well as in-depth descriptions of service delivery practices. This approach includes separate surveys of both service providers and students, and attempts to assess, for instance, the distinctiveness of the services as well as the individual experiences that students have had with them.

In contrast to the logic of population estimates, a best practices approach does not necessarily aim for statistically sound estimates that can be generalized to the population of institutions, practices or students. Its primary purpose is to promote ongoing program improvements within individual organizational settings through the assessment and importation of practices from outside the setting. Rather than assessing how representative a certain practice is of all practices, the best practices approach assumes that a better practice is perpetually on the verge of being identified or created. As Keehley et al (1997) suggest:

"Best practices does not mean simply making comparisons and sharing practices. We must continue to grow in applying a systematic method to finding best practices. Finding a best practice has a specific form and use. Establishing criteria is the first step toward outlining what a best practice is" (p.25).

Proposed Criteria for Best Practices

For the purposes of the current study, the following criteria of best practices were modified from Keehley et al (1997) (p.26):

  1. The best practice must demonstrate success over time - i.e. have a proven track record. The practice cannot be one that is planned in the future, or only recently put in place.
  2. The practice must be recognized by local partners as having a positive outcome.
  3. If possible, the positive outcome(s) of the practice should be quantifiable - a single data point does not necessarily indicate success.
  4. The practice should be recognized as creative or innovative by some of the audience - in this case there must be some consensus among partners that a practice is a best practice - this should be better supported through the use of both service-provider and student surveys.
  5. The practice must have local importance or salience for institutions seeking improvement. It should be relevant to similar institutions - i.e. deal with issues and problems that are common among institutions.
  6. The practice should not be linked to unique demographics. Though the practice may have evolved from unique demographics, it should be transferable, with modifications, to institutions where those specific demographics do not exist.
  7. The best practice should be replicable, with modifications, at other institutions. The model should provide descriptions of best practices, the benefits that can be attributed to the practices, and (if possible) how they were developed.
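Purely as an illustration of how these criteria might be operationalized, the sketch below represents them as a simple screening checklist applied to a candidate practice. The field names, the example practice and the all-or-nothing screening rule are hypothetical simplifications for illustration only; they do not represent the project's actual procedure.

```python
# Hypothetical screening checklist based on the seven criteria above.
# Field names and the simple "all criteria must hold" rule are illustrative only.

from dataclasses import dataclass

@dataclass
class CandidatePractice:
    name: str
    proven_track_record: bool        # 1. success demonstrated over time
    recognized_by_partners: bool     # 2. positive outcome recognized by local partners
    outcome_quantifiable: bool       # 3. outcome measurable beyond a single data point
    seen_as_innovative: bool         # 4. some consensus that it is creative or innovative
    locally_salient: bool            # 5. relevant to similar institutions
    not_demographics_bound: bool     # 6. transferable beyond unique demographics
    replicable_elsewhere: bool       # 7. replicable, with modifications, elsewhere

def meets_criteria(p: CandidatePractice) -> bool:
    """Return True only if every criterion is satisfied (a deliberate simplification)."""
    return all([
        p.proven_track_record, p.recognized_by_partners, p.outcome_quantifiable,
        p.seen_as_innovative, p.locally_salient, p.not_demographics_bound,
        p.replicable_elsewhere,
    ])

# Hypothetical example of applying the checklist to one candidate practice.
example = CandidatePractice(
    name="Hypothetical note-taking service",
    proven_track_record=True, recognized_by_partners=True,
    outcome_quantifiable=True, seen_as_innovative=True,
    locally_salient=True, not_demographics_bound=True,
    replicable_elsewhere=True,
)
print(meets_criteria(example))  # True
```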

Keehley et al (1997) note that applying these criteria will reduce the number of best practices. Identifying a best practice, moreover, is only half the problem; the other half is importing and implementing it. Part of this process is identifying and comparing organizations that are as similar as possible. Performance measures are also bound to differ dramatically, depending on the mission of the organization: "a critical step in developing performance measures is to get the agency and its key stakeholders to agree on what the mission of the agency is, what goals and objectives need to be established and achieved to accomplish that mission, and what essential measures can serve as indicators of performance in the delivery of the mission" (p.32).

The inclusion of stakeholder and partner consultation is therefore a key feature of best practices evaluation models, and, as Keehley et al (1997) argue, data collection for best practices should use partner surveys to collect a broad range of data on practices. Similarly, analysis should determine the pertinent, feasible practices, and should address the following questions: "How will the practice affect the delivery of service? How will it affect the performance gap? Is there credible documentation that attests to customer satisfaction or success?" (p. 169).

The NEADS Evaluation Partnership

Project work began in January of 1997 with the formation of a Project Advisory Group. This group was assembled to direct the research, to advise on the production of the final report, and to coordinate the dissemination of the results and recommendations of the project. The groups invited to monitor the project include: the Association of Canadian Community Colleges, Canadian Federation of Students, Canadian Association of Disability Service Providers in Post-Secondary Education, Canadian Association of College and University Student Services, the Quebec Association of Post-Secondary Disabled Students and Human Resources Development Canada.

Student representatives from the NEADS Board of Directors and representatives from the various partner organizations serve on the Project Advisory Group. Representatives from the partner organizations include: Toni Connolly, Association of Canadian Community Colleges (Algonquin College); Dean Mellway, Canadian Association of Disability Service Providers in Post-Secondary Education (Carleton University); Preston Parsons, Commissioner, Students With Disabilities Constituency Group, Canadian Federation of Students (University of Winnipeg); Joan Wolforth, Canadian Association of College and University Student Services (McGill University); Steve Estey, Consultant (Ottawa).

Members of the Project Advisory Group held three meetings, in March and June of 1997 and in March of 1998. At the NEADS Board of Directors meeting held in Ottawa, Ontario in early March 1997, the project objectives and activities were discussed and members of the Board were selected to serve on the Project Advisory Group. This committee met again by conference call near the end of March. On June 8, 1997, NEADS hosted a Project Advisory Group meeting in Hull, Quebec that included representatives from our partner organizations. The aim was to get service providers and students working together to develop the aims of the survey instrument. A further meeting of the Project Advisory Group was held in March of 1998 in Ottawa to discuss the goals of the research and the issues to be addressed in the final report.

Survey Design

Through consultation with the Project Advisory Group and using the NEADS Resource Directory of Disabled Student Services at Canadian Universities and Colleges (1993), and the NEADS "Accessibility Survey," a survey instrument focusing on the availability of various kinds of adaptations, services, and accommodations and the assessment of their utility/success in meeting the needs of students with disabilities was prepared and pre-tested.

The Advisory Group felt it would be important to ask service providers and students the same questions so that comparability between the two groups could be maintained. It was also recognized that the two groups would have distinct types of information that would be of value in analyzing institutional arrangements with respect to disability. For this reason, the surveys administered to both groups had to be similar in terms of the assessment respondents were asked to perform, but different in terms of the background information collected.

Because student needs vary according to disability type and program choice, student awareness and perceptions will probably be affected by these factors. It was thus deemed necessary to collect information related to these factors from individual students. Service providers, on the other hand, generally serve a constituency that is varied according to program and disability type and will have a more comprehensive awareness of the range of institutional provision. Service providers may also have information related to the financing and staffing of service offices and the administrative arrangements that govern institutional provision that students would not, in most cases, be able to access. Thus, in recognition of these distinctions, two separate versions of the survey, one directed at students and the other at service providers, were designed.

In the first section of the student survey, students were asked a variety of questions about their disability-related needs: what modifications to physical facilities, what adaptive support services, and what types of equipment and technical aids they use on a daily basis and/or in the pursuit of their studies. We also asked students to report on forms of equipment they might need but do not have. The service provider survey asked respondents to supply details about the size and type of their institution, the commitment of their institution in terms of budget and human resources, and about the organization of responsibility for disability-related issues. In this way, a profile of needs associated with specific disabilities and a model of provision associated with different types of institutions could be constructed.

Though their perspectives may differ, both service providers and students have the experience to evaluate services, policies and accommodations. Thus, both versions of the survey contained common sections. The first section varied according to the intended recipient, whether a service provider or a student, while the subsequent two sections, addressing the evaluation of specific adaptations, programs and policies, were common to both.

Developing assessment procedures suitable for both service providers and end users entailed consultation with selected service providers as well as the Project Advisory Group. The Advisory Group identified a number of issues that impinge on accessibility but are not addressed in other surveys of policy and provision at the post-secondary level. For instance, in recent years students with disabilities have become more organized, and have sought representation within the broader structures of post-secondary institutions. To date no comprehensive description of the forms of student participation exists. NEADS' Advisory Group felt it would be an important aspect of this study to collect such information and to attempt to measure what effect this may have had on services, accommodations and policy.

The accessibility checklist used in the second section of the survey was developed using the previous NEADS "Accessibility Survey," and through examination of the relevant literature. Survey participants were asked to evaluate: features of physical accessibility (physical adaptations, services, equipment, and safety features) in those buildings on campus that all students need to use; educational adaptations and accommodations; policy and administrative support for disability programming; volunteer services; and the accessibility of the surrounding community. All facets of accessibility were to be graded on a four point scale from poor to excellent. Respondents had the option of indicating if any of the features listed were not available or if they were unaware of their existence or availability. Space to comment was included with every separate section.
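To make the rating structure concrete, the fragment below shows one plausible way the four-point scale and the two non-rating options might be coded for analysis. The intermediate scale labels and the numeric codes are assumptions added for illustration; the report does not specify the actual coding scheme used.

```python
# Hypothetical coding scheme for the accessibility checklist responses.
# The four-point scale from poor to excellent is from the survey design;
# the intermediate labels and numeric codes are assumptions.

RATING_CODES = {
    "poor": 1,
    "fair": 2,
    "good": 3,
    "excellent": 4,
    "not available": None,   # feature does not exist at the institution
    "unaware": None,         # respondent did not know whether it exists
}

def code_response(answer: str):
    """Map a checklist answer to its numeric code (None = excluded from averages)."""
    return RATING_CODES.get(answer.strip().lower())

print(code_response("Excellent"))   # 4
print(code_response("Unaware"))     # None
```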

A third section asked respondents to identify those features they felt were most successful and those that were least successful. In terms of developing best practices recommendations and identifying those areas where improvement is necessary, these sections were expected to generate directions for the report. In order to verify that successes or failures were actually making an important difference, questions in the third section of the survey asked respondents to prioritize the areas in which accessibility is most critical.

Draft versions of the survey were reviewed by the Project Advisory Group and were pretested by selected service providers and students throughout August and September 1997. Along with regular print, large print and diskette versions, the final survey was recorded on audio-tape by readers at the Canadian National Institute for the Blind. A French language version of the survey was completed in February 1998 and verified by Project Advisory Group members from Quebec in early March 1998.

Target Populations

It was determined that the population to be surveyed for the purposes of this project should include university and college service providers and post-secondary students with disabilities. In documenting and evaluating the services, accommodations and policies of Canadian post-secondary institutions, the information that can be provided by both groups is necessary. Service providers have a more comprehensive awareness of arrangements at any specific institution than other administrators or students. Moreover, service providers have an awareness of the strengths and weaknesses of supports available to students as well as some appreciation of the needs of students they serve. Students with disabilities constitute the end users of services and are the subjects of accommodations and policies. They are, therefore, in the best position to assess their effectiveness.

Because the scope of the project is national, the study attempted to address specific conditions across, for instance, institutional types and regions, as well as to target a population that reflects the range of institutional arrangements available throughout Canada. An appropriate sample, it was felt, should include institutions in all provinces and territories, both universities and colleges, and an array of institutions of various sizes and types. The target population thus comprised students and service providers at the approximately 360 post-secondary institutions throughout Canada (Statistics Canada 1997).

Ideally, the study population would include representatives from all of these institutions. For practical reasons not all could be included. Approximately 25 percent of the post-secondary institutions are CEGEPS located in Quebec. Among these 83 institutions, centralized service provision is the norm, and detailing services, policy and accommodations at each would therefore involve a great deal of overlap. Concentrating on key institutions where service provision is organized and administered provides a more effective approach. Thus a sample of CEGEPS, rather than the total universe, was included in the survey.

A contact list developed by NEADS provides the names of service providers or student service officers at 160 institutions throughout Canada. This list includes most universities, and approximately half of the colleges in all provinces and territories. Not included on the contact list are those colleges and CEGEPS where no particular responsibility is recognized or assigned for the administration or provision of services to students with disabilities. Most of the institutions not included on the contact list are very small (i.e. less than 500 students).

Students with disabilities at these institutions were treated as a separate target population. However, assessing the parameters of this population is significantly more difficult than for the population of institutions. According to the most complete study of Canadians with disabilities, the Health and Activity Limitation Survey (HALS), a national post-censal survey of approximately 35,000 Canadians with disabilities and 113,000 without disabilities, the number of students with disabilities in 1991 was estimated to be 112,200, or approximately 7 percent of the total student population in that year (Statistics Canada 1993). This percentage may have increased or decreased somewhat over the past seven years, but no subsequent studies have been conducted. At the same time, overall levels of enrollment at Canadian post-secondary institutions have declined (Statistics Canada 1997). Statistics Canada figures for the 97/98 academic year indicate that approximately 1.2 million students were enrolled at post-secondary institutions throughout Canada. If 7 percent of the total student population potentially comprises students with disabilities, this yields a total population of approximately 84,000 for the 97/98 academic year.

Resources to duplicate the sampling strategies of HALS or other large-scale studies such as the National Graduate Study are not available. The strategy for contacting potential participants in this study therefore had to be focused rather than broad. This implied sampling those students who had already identified themselves to their institution as students with disabilities. While no exact means exists for calculating the total number of potential participants in this instance, some indicators exist. In a recent study of universities, Jennifer Leigh Hill indicates that participating institutions were able to identify on average one percent or less of the student population as students with disabilities (Hill 1992). Using Hill's average, the potential target population would be approximately 12,000 students. For the purposes of this study the target population group is smaller because not all Canadian post-secondary institutions are included.
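The arithmetic behind these two estimates can be made explicit. The short sketch below simply restates the figures quoted above (the 7 percent HALS proportion, the 97/98 enrollment of roughly 1.2 million, and Hill's one-percent identification rate); the numbers are the report's own, and the code is only a worked illustration, not part of the original analysis.

```python
# Worked illustration of the population estimates quoted above.
# All figures are taken directly from the text (HALS 1991, Statistics Canada 1997,
# Hill 1992); nothing here is new data.

total_enrollment_97_98 = 1_200_000      # approx. post-secondary enrollment, 1997/98

# HALS-based estimate: ~7% of students report a disability.
hals_proportion = 0.07
estimated_students_with_disabilities = total_enrollment_97_98 * hals_proportion
print(f"HALS-based estimate: {estimated_students_with_disabilities:,.0f}")        # ~84,000

# Hill (1992): institutions identify about 1% of students as having a disability.
hill_identification_rate = 0.01
potential_target_population = total_enrollment_97_98 * hill_identification_rate
print(f"Self-identified target population: {potential_target_population:,.0f}")   # ~12,000
```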

The distribution of questionnaires to students attempted to include all participating institutions. Service provider lists and population estimates were used to estimate an appropriate number of students to be sampled at each institution. Total population at any given institution and the number of students served through any given service office were used to construct a rough sampling ratio. Service providers and coordinators contacted in the current study reported varying estimates: between 0.2 and 5 percent of the total student population is estimated to be in contact with the service office or officer.

On average, one in four of the students served at any particular institution was sampled (a total sample group of between 2000 and 2500). The size of the sample group varied with the size of the institution, so that a slightly smaller proportion of the service group was sampled at large institutions than at smaller ones. Because student needs and awareness will vary according to disability type (and possibly according to program of study as well), an adequate range of students at each institution had to be sampled. By sampling more intensively at smaller institutions (where the service group is likely to be smaller), NEADS attempted to ensure that an adequate range of students in terms of disability type would be included. In addition to students contacted through the participating institutions, students on the NEADS mailing list were also included among survey recipients. Again, the requirement of representativeness is not to provide statistically sound population estimates, but rather to cover the range of institutional and disability types.
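As a rough illustration of the sampling logic described above, the sketch below computes a per-institution sample size from the number of students served by a service office, using the base ratio of one in four and sampling somewhat more intensively at smaller institutions. The exact adjustment used by the project team is not specified in the report, so the size thresholds and alternative ratios here are hypothetical.

```python
# Illustrative only: the base one-in-four ratio comes from the text, but the
# size thresholds and adjustment factors below are assumptions, not the
# project's actual formula.

def sample_size(students_served: int) -> int:
    """Return a rough number of student surveys to send to one institution."""
    if students_served < 50:        # hypothetical cut-off for a small service office
        ratio = 0.40                # sample more intensively at small institutions
    elif students_served > 400:     # hypothetical cut-off for a large service office
        ratio = 0.20                # slightly smaller proportion at large institutions
    else:
        ratio = 0.25                # on average, one in four students served
    return max(1, round(students_served * ratio))

# Example: a mid-sized office serving 120 students would receive roughly 30 surveys.
print(sample_size(120))
```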

Data Collection and Coding

Using NEADS contact lists, service providers and coordinators in all provinces and territories except Quebec were contacted by telephone throughout December 1997 and January and February 1998. The survey and related documents were then translated, and service providers and coordinators in the province of Quebec were identified and surveyed. Contact with institutions in Quebec was conducted by the Association quebecoise des etudiants handicapes au postsecondaire; approximately 25-30 institutions in Quebec were contacted about their participation throughout March 1998.

A contact protocol was developed for the use of assistants who telephoned service providers and coordinators. Service providers and coordinators were given a brief description of the aims of the survey and were asked if they would participate. They were further asked to provide information on the total enrollment at their institution, the potential size of the population of students with disabilities at their institution, and the numbers of students on average who contacted their office in the course of the academic year. These numbers were used to calculate sampling ratios for each institution.

Because they provide one of the few points of contact with students with disabilities, service providers and coordinators were also asked to participate as field confederates by distributing surveys to students. In instances where contact names for organizations of students with disabilities were available, these people were asked to participate in distributing the survey as well. In instances where disability-specific organizations or service offices exist at an institution, or where responsibility was divided between offices (i.e. on the basis of disability type or between campuses), contact was made with more than one administrator or officer at a given institution. In such instances, the total number of surveys assigned to a given institution on the basis of the sampling ratio was divided up among the various field confederates.

Given that the project aims to assess whether services available to students with disabilities are sufficiently comprehensive, it is important to obtain a sample that encompasses the full range of disability types. Thus, where appropriate, all field confederates were asked to ensure that surveys were distributed to students with different types of disability. In addition, they were asked to make the distribution as random as possible and to ensure that students contacted received an appropriate version of the survey (i.e. large print, diskette, etc.).

Once the size of the sample group was established (see Appendix One), surveys were mailed to field confederates and to students included on the NEADS mailing list in February and March 1998. A pre-addressed, postage paid, return envelope was included with each survey to encourage return. Instructions for distribution were included with each package sent.

Respondents were asked to return surveys within a week of their receipt. The length and complexity of the document meant that in many instances this was not possible. A list of nonresponding service providers was maintained and, in order to improve response rates, these service providers received a follow-up phone call. Approximately 100 service providers were contacted first in May, and approximately 60 received a second follow-up call in June and July. Approximately 30 non-responding service providers from Quebec were contacted in August. Follow-up calls focused on reminding service providers of the need to fill out the survey sent to them and to ensure the distribution of student surveys.

In total 2715 student surveys and service provider/coordinator surveys were distributed. Of the student surveys, 2392 were distributed through field confederates at institutions in provinces and territories throughout Canada and 153 through the NEADS mailing list. A further 170 service providers or student service offices were asked to complete surveys.

As of September 15, 1998, 419 surveys had been returned: 349 from students and 70 from service providers. File setup and coding began in June 1998. In addition to coding forced-choice questions, responses to open-ended questions, which form an integral part of the data for the project, were entered in full and are included in an appendix to this report. Because the reflections contained in open-ended responses are full and varied, the project team endeavored to record and separately analyze as many of these as possible.
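The return rates implied by these distribution and return figures can be computed directly. The sketch below uses only the counts reported above; the percentages are simple arithmetic, not additional findings from the project.

```python
# Return rates implied by the figures above (as of September 15, 1998).
# All counts are taken from the text; the percentages are derived arithmetic.

student_distributed = 2392 + 153      # via field confederates + NEADS mailing list
provider_distributed = 170
student_returned = 349
provider_returned = 70

total_distributed = student_distributed + provider_distributed
total_returned = student_returned + provider_returned

print(f"Students:  {student_returned / student_distributed:.1%}")     # ~13.7%
print(f"Providers: {provider_returned / provider_distributed:.1%}")   # ~41.2%
print(f"Overall:   {total_returned / total_distributed:.1%}")         # ~15.4%
```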



