Chapter 12

Outcomes Research

evolve http://evolve.elsevier.com/Burns/practice/

A new paradigm for research, outcomes research, has been gaining momentum in health care research. Outcomes research focuses on the end results of patient care. To explain those end results, one must also understand the processes used to provide patient care. However, a focus on outcomes is not by itself sufficient to make outcomes research a new paradigm. A new paradigm implies new theories, new questions, new approaches to studying the topics of interest, and new methodologies. It suggests a revolution, and in a way, we have one! The strategies used in outcomes research do, to some extent, depart from the accepted scientific methodology for health care research, and they incorporate evaluation methods, epidemiology, and economic theory. Building a scientific base with such methods is controversial. However, the findings of outcome studies are having a powerful impact on the provision of health care and the development of health policy.

The momentum propelling outcomes research comes not from scholars but from policy makers, insurers, and the public. There is a growing demand that providers justify interventions and systems of care in terms of improved patient lives and that costs of care be considered in evaluating treatment outcomes. There has been a major shift in published nursing studies as the number of studies using traditional quantitative or qualitative methods is dwarfed by the number of outcomes studies.

This chapter describes the theoretical basis of outcomes research, provides a brief history of the emerging endeavors to examine outcomes, explains the importance of outcomes research designed to examine nursing practice, and highlights methodologies used in outcomes research. A broad base of literature from a variety of disciplines was used to develop the content for this chapter, in keeping with the multidisciplinary perspective of outcomes research.

THEORETICAL BASIS OF OUTCOMES RESEARCH

The theory on which outcomes research is based, the structure-process-outcome framework, emerged from evaluation research. The theorist Avedis Donabedian (1976, 1978, 1980, 1982, 1987) proposed a theory of quality health care and the process of evaluating it. Quality is the overriding construct of the theory, although Donabedian never defined this concept (Mark, 1995). The cube shown in Figure 12-1 illustrates the elements of quality health care. The three dimensions of the cube are health, subjects of care, and providers of care. The concept health has many aspects; three are shown on the cube: physical-physiological function, psychological function, and social function. Donabedian (1987, p. 4) proposed that “the manner in which we conceive of health and of our responsibility for it, makes a fundamental difference to the concept of quality and, as a result, to the methods that we use to assess and assure the quality of care.”


Figure 12-1 Level and scope of concern as factors in the definition of quality.

The concept subjects of care has two primary aspects: patient and person. A patient is defined as someone who has already gained access to some care, and a person as someone who may or may not have gained access to care. Each of these concepts is further categorized by the concepts individual and aggregate. Within patient, the aggregate is a caseload; within person, the aggregate is a target population or a community.

The concept providers of care shows levels of aggregation and organization of providers. The first level is the individual practitioner. At this level, no consideration is given to anyone else who might be involved in the subject’s care, whether individual or aggregate. As the levels progress, providers of care include several practitioners, who might be of the same profession or different professions and “who may be providing care concurrently, as individuals, or jointly, as a team” (Donabedian, 1987, p. 5). At higher levels of aggregation, the provider of care is institutions, programs, or the health care system as a whole.

Donabedian theorized that the dimensions of health are defined by the subjects of care, not by the providers of care, and are based on “what consumers expect, want, or are willing to accept” (Donabedian, 1987, p. 5). Thus, practitioners cannot unilaterally enlarge the definition of health to include other aspects; this action requires social consensus that “the scope of professional competence and responsibility embraces these areas of function” (Donabedian, 1987, p. 5). Donabedian indicated, however, that providers of care may make efforts to persuade subjects of care to expand their definition of the dimensions of health.

The primordial cell of Donabedian’s framework is the physical-physiological function of the individual patient being cared for by the individual practitioner. Examining quality at this level is relatively simple. As one moves outward to include more of the cubical structure, the notion of quality and its assessment become increasingly complex. When more than one practitioner is involved, both individual and joint contributions to quality must be evaluated. Concepts such as coordination and teamwork must be conceptually and operationally defined. When a person is the subject of care, an important attribute is access. When an aggregate is the subject of care, an important attribute is resource allocation. Access and resource allocation are interrelated, because they each define who gets care, the kind of care received, and how much care is received.

As more elements of the cube are included, conflicts among competing objectives emerge. The chief conflict is between the practitioner’s responsibilities to the individual and to the aggregate. The practitioner is expected to have an exclusive commitment to each patient, yet the aggregate demands a commitment to the well-being of society, leading to ethical dilemmas for the practitioner. Spending more time with an individual patient decreases access for other patients. Society’s demand to reduce costs for an overall financing program may require raising costs to the individual. Examination of the cube suggests that one could build up quality incrementally, beginning with the primordial cell, on the assumption that each increment contributes positively to total quality. However, the conflicts among competing objectives may preclude this possibility and lead instead to moral dilemmas.

Donabedian (1987) identified three objects to evaluate when appraising quality: structure, process, and outcome. A complete quality assessment program requires the simultaneous use of all three concepts and an examination of the relationships among the three. However, researchers have had little success in accomplishing this theoretical goal. Studies designed to examine all three concepts would require sufficiently large samples of various structures, each with the various processes being compared and large samples of subjects who have experienced the outcomes of those processes. The funding and the cooperation necessary to accomplish this goal are not yet available.
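Donabedian’s triad can be made concrete with a minimal sketch. The class name, variable names, and indicator values below are invented for illustration; they are not drawn from Donabedian’s work.

```python
from dataclasses import dataclass

@dataclass
class QualityAssessment:
    """Toy record linking Donabedian's three objects of appraisal.

    structure: attributes of the care setting (e.g., staffing)
    process:   what is actually done in giving care (e.g., guideline adherence)
    outcome:   the end result for the patient (e.g., functional status)
    """
    structure: dict
    process: dict
    outcome: dict

    def is_complete(self) -> bool:
        # Donabedian's point: a full appraisal requires all three objects,
        # examined simultaneously, plus the relationships among them.
        return all((self.structure, self.process, self.outcome))

record = QualityAssessment(
    structure={"rn_hours_per_patient_day": 6.2},   # hypothetical value
    process={"guideline_adherence_pct": 91.0},     # hypothetical value
    outcome={"functional_status_score": 74.5},     # hypothetical value
)
print(record.is_complete())  # True only when all three objects are measured
```

A study that measured only outcomes would yield an incomplete record, which is precisely the shortfall Donabedian’s theory identifies in most quality assessment programs.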

Evaluating Outcomes

The goal of outcomes research is the evaluation of outcomes as defined by Donabedian. However, this goal is not as simple as it might first appear. Donabedian’s theory requires that identified outcomes be clearly linked with the process that caused the outcome. The researcher must define the process and justify the causal links with the selected outcomes. The identification of desirable outcomes requires dialogue between the subjects of care and the providers of care. Although the providers of care may delineate what is achievable, the subjects of care must clarify what is desirable. The outcomes must also be relevant to the goals of the health professionals, the health care system of which the professionals are a part, and society.

Outcomes are time dependent. Some outcomes may not be apparent for a long period after the process that is purported to cause them, whereas others may be apparent immediately. Some outcomes are temporary, and others are permanent. Thus, an appropriate time frame for determining the selected outcomes must be established.
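Establishing an explicit measurement time frame can be sketched as follows. The follow-up intervals here are arbitrary examples chosen for illustration, not recommendations from the literature.

```python
from datetime import date, timedelta

def measurement_dates(process_date: date, offsets_days: list) -> list:
    """Return the calendar dates on which a selected outcome will be
    measured, given the date the care process occurred and a chosen set
    of follow-up intervals (immediate, proximate, and long-term)."""
    return [process_date + timedelta(days=d) for d in offsets_days]

# Hypothetical schedule: day of the intervention, 30-day, and 1-year follow-up.
schedule = measurement_dates(date(2024, 3, 1), [0, 30, 365])
print(schedule)
```

Declaring the schedule up front forces the researcher to state whether the outcome of interest is expected to be immediate or delayed, and temporary or permanent, before data collection begins.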

A final obstacle to outcomes evaluation is attribution. This requires assigning the place and degree of responsibility for the outcomes observed. A particular outcome is often influenced by a multiplicity of factors. Health care represents only one dimension of a complex situation. Patient factors, such as compliance, predisposition to disease, age, propensity to use resources, high-risk behaviors (e.g., smoking), and lifestyle, must also be taken into account. Environmental factors such as air quality, public policies related to smoking, and occupational hazards must also be included. The responsibility for outcomes may be distributed among providers, patients, employers, insurers, and the community.

There is as yet little scientific basis for judging the precise relationship between each of these factors and the selected outcome. Many of the influencing factors may be outside the jurisdiction or influence of the health care system or of the providers within it. One solution to this problem of identifying relevant outcomes is to define a set of proximate outcomes specific to the condition for which care is being provided. Critical pathways and care maps may help the researcher to define at least proximate outcomes. However, proximate outcomes do not provide the same strength of evidence as examination of the desired end outcomes.

Evaluating Process

Clinical management has been, for most health professionals, an art rather than a science. Understanding the process sufficiently to study it must begin with much careful reflection, dialogue, and observation. There are multiple components of clinical management, many of which have not yet been clearly defined or tested. Bergmark and Oscarsson (1991, pp. 139–140) suggested the following questions as important to consider in evaluating process: (1) “What constitutes the therapeutic agent?” (2) “Do practitioners actually do what they say they do?” and (3) “Do practitioners always know what they do?” Current outcomes studies use process variables that are easy to identify. Answers to questions such as those posed by Bergmark and Oscarsson are more difficult to define and will initially require observation, interviews, and the use of qualitative research methodologies. Three components of process that are of particular interest to Donabedian are standards of care, practice styles, and costs of care.

Standards of Care

A standard of care is a norm on which quality of care is judged. Clinical guidelines, critical paths, and care maps define standards of care for particular situations. According to Donabedian (1987), a practitioner has legitimate responsibility to apply available knowledge when managing a dysfunctional state. This management consists of (1) identifying or diagnosing the dysfunction, (2) deciding whether or not to intervene, (3) choosing intervention objectives, (4) selecting methods and techniques to achieve the objectives, and (5) skillfully executing the selected techniques.

Donabedian (1987) recommended the development of criteria to be used as a basis for judging the quality of care. These criteria may take the form of clinical guidelines or care maps based on prior validation that the care contributed to outcomes. The clinical guidelines published by the Agency for Healthcare Research and Quality (AHRQ) establish norms on which the validity of clinical management can be judged. These norms are now established through clinical practice guidelines. However, the core of the problem, from Donabedian’s perspective, is clinical judgment. Analysis of the process of making diagnoses and therapeutic decisions is critical to the evaluation of the quality of care. The emergence of decision trees and algorithms is a response to Donabedian’s concerns and provides a means of evaluating the adequacy of clinical judgments.
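What makes a decision tree useful for evaluating clinical judgment is that it is explicit enough to audit. A toy sketch of such an algorithm follows; the branches, severity scale, and cutoff are entirely invented for illustration and are not a real clinical guideline.

```python
def management_decision(dysfunction_identified: bool,
                        severity: int,
                        benefit_outweighs_risk: bool) -> str:
    """Toy decision algorithm mirroring Donabedian's management steps:
    diagnose the dysfunction, decide whether to intervene, then choose
    objectives and methods. Returns the branch taken."""
    if not dysfunction_identified:        # step 1: diagnosis
        return "monitor"
    if not benefit_outweighs_risk:        # step 2: decide whether to intervene
        return "do not intervene"
    if severity >= 7:                     # steps 3-4: objectives and methods (invented cutoff)
        return "refer for intensive treatment"
    return "treat conservatively"

print(management_decision(True, 8, True))   # refer for intensive treatment
print(management_decision(True, 4, True))   # treat conservatively
```

Because every branch is written down, a reviewer can check each clinical judgment against the algorithm, which is the kind of evaluation of adequacy Donabedian called for.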

Practice Styles

The style of practice is another dimension of the process of care that influences quality; however, it is problematic to judge what constitutes “goodness” in style and to justify the decisions. Donabedian (1987) identified the following problem-solving styles: (1) routine approaches to care versus flexibility, (2) parsimony versus redundancy, (3) variations in degree of tolerance of uncertainty, (4) propensity to take risks, and (5) preference for type I errors versus type II errors. There are also diverse styles of interpersonal relationships. Westert and Groenewegen (1999, p. 174) suggest that differences in practice styles are a result of differences in opportunities, incentives, and influences. They suggest that “there is an (often implicit) idea of what should be done and how, and this shared (local) standard influences the choices made by individual practitioners. This alternative originates in the borders between economics and sociology and it can be characterized as the social production function approach.” Table 12-1 lists studies of practice styles that include nurses.

TABLE 12-1

Studies of Practice Styles of Nurses

Year Authors Title
1988 Bircumshaw & Chapman A study to compare the practice style of graduate and non-graduate nurses and midwives: The pilot study
1996 Fullerton, Hollenbach, & Wingard Practice styles: A comparison of obstetricians and nurse-midwives
1997 Howell-White Choosing a birth attendant: The influence of a woman’s childbirth definition
1999 Byers, Mays, & Mark Provider satisfaction in Army primary care clinics
2001 Hueston & Lewis-Stevenson Provider distribution and variations in statewide cesarean section rates
2001 Mark, Byers, & Mays Primary care outcomes and provider practice styles

Costs of Care

A third dimension of the examination of quality of care is cost. There are cost consequences to maintaining a specified level of quality of care. Providing more and better care is likely to increase costs but is also likely to produce savings. Economic benefits (savings) result from preventing illness, preventing complications, maintaining a higher quality of life, or prolonging productive life.

A related issue is who bears the costs of care. Some measures purported to reduce costs have instead simply shifted costs to another party. For example, a hospital might reduce its costs by discharging a particular type of patient early, but total costs could increase if the necessary community-based health care raised costs above those incurred by keeping the patient hospitalized longer. In this case, the third-party provider could experience higher costs. In many cases, the costs are shifted from the health care system to the family as out-of-pocket costs. Studies examining changes in costs of care should consider total costs, which include out-of-pocket costs. Table 12-2 provides examples of studies that examine the costs of care.
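The early-discharge example can be put in numbers. The dollar figures below are hypothetical; the point is only that total cost must sum every payer’s share, including out-of-pocket costs borne by the family.

```python
def total_cost(hospital: float, community_care: float, out_of_pocket: float) -> float:
    """Total cost of an episode of care summed across all payers."""
    return hospital + community_care + out_of_pocket

# Hypothetical episode: early discharge cuts the hospital bill but adds
# community-based care costs and family out-of-pocket spending.
longer_stay     = total_cost(hospital=12000, community_care=0,    out_of_pocket=200)
early_discharge = total_cost(hospital=9000,  community_care=2800, out_of_pocket=900)

print(longer_stay, early_discharge)  # 12200 vs 12700: the "savings" raise total cost
```

A study that measured only the hospital’s costs would report a $3,000 saving here, while total costs actually rose; this is the cost-shifting problem the text describes.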

TABLE 12-2

Studies Examining Costs of Care

Year Author Title
2007 Griffiths, Edwards, Forbes, Harris, & Ritchie Effectiveness of intermediate care in nursing-led in-patient units
2007 Bradley & Lindsay Specialist epilepsy nurses for treating epilepsy
2007 Bowles & Baugh Applying research evidence to optimize telehomecare
2006 Rubin et al. Replicating the Hospital Elder Life Program in a community hospital and demonstrating effectiveness using quality improvement methodology
2006 Phibbs et al. The effect of geriatrics evaluation and management on nursing home use and health care costs: Results from a randomized trial
2005 Harris, Richardson, Griffiths, Hallett, Wilson-Barnett Economic evaluation of a nursing-led inpatient unit: The impact of findings on management decisions of service utility and sustainability
2005 McIsaac Managing wound care outcomes
2004 Leeper Nursing outcomes: Percutaneous coronary interventions
2004 Altimier, Eichel, Warner, Tedeschi, & Brown Developmental care: Changing the NICU physically and behaviorally to promote patient outcomes and contain costs
2004 Challis et al. The value of specialist clinical assessment of older people prior to entry to care homes
2003 Ahrens, Yancey, & Kollef Improving family communications at the end of life: Implications for length of stay in the intensive care unit and resource use
2002 Brooten et al. Lessons learned from testing the quality cost model of advanced practice nursing (APN) transitional care
1999 Kay Targeting cost containment efforts in Massachusetts nursing homes
1997 Chisholm, Knapp, Astin, Audini, & Lelliott The mental health residential care study: The “hidden costs” of provision
1994 Varricchio Human and indirect costs of home care… for cancer patients
1994 Ward & Brown Labor and cost in AIDS family caregiving
1992 Harper Care and cost effectiveness of the clinical care coordinator/patient care associate nursing case management model
1991 Haggerty, Stockdale-Woolley, & Nair Respi-Care: An innovative home care program for the patient with chronic obstructive pulmonary disease

Evaluating Structure

Structures of care are the elements of organization and administration that guide the processes of care, as well as provider and patient characteristics. The first step in evaluating structure is to identify and describe the elements of the structure. Various administration and management theories could be used to identify the elements of structure. These elements might include leadership, tolerance of innovativeness, organizational hierarchy, decision-making processes, distribution of power, and financial management.

The second step is to evaluate the impact of various structure elements on the process of care and on outcomes. This evaluation requires comparing different structures that provide the same processes of care. In evaluating structures, the unit of measure is the structure. The evaluation requires access to a sufficiently large sample of like structures with similar processes and outcomes, which can then be compared with a sample of another structure providing the same processes and outcomes. For example, in your research you might want to compare various structures providing primary health care, such as the private physician office, the health maintenance organization (HMO), the rural health clinic, the community-oriented primary care clinic, and the nurse-managed center. You might examine surgical care provided within the structures of a private outpatient surgical clinic, a private hospital, a county hospital, and a teaching hospital associated with a health science center. Within each of these examples, the focus of your study would be the impact of structure on processes of care and outcomes of care. Table 12-3 provides some examples of studies of structures of care.
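With the structure as the unit of measure, the comparison reduces to aggregating patient-level outcomes up to each structure and then comparing structure types on the structure-level results. A minimal sketch with invented data and structure labels:

```python
from statistics import mean
from collections import defaultdict

# Hypothetical patient-level outcome scores, keyed by (structure_type, structure_id).
patients = [
    ("HMO", "hmo_1", 78), ("HMO", "hmo_1", 82), ("HMO", "hmo_2", 75),
    ("nurse_managed_center", "nmc_1", 84), ("nurse_managed_center", "nmc_1", 88),
    ("nurse_managed_center", "nmc_2", 80),
]

# Step 1: aggregate patient outcomes to each structure (the unit of measure).
by_structure = defaultdict(list)
for stype, sid, score in patients:
    by_structure[(stype, sid)].append(score)
structure_means = {key: mean(scores) for key, scores in by_structure.items()}

# Step 2: compare structure types using the structure-level means,
# not the raw patient scores, so each structure counts once.
by_type = defaultdict(list)
for (stype, _sid), m in structure_means.items():
    by_type[stype].append(m)
type_means = {stype: mean(ms) for stype, ms in by_type.items()}
print(type_means)
```

Note that the two-step aggregation is what makes the structure, rather than the patient, the unit of analysis; pooling raw patient scores across structures would weight large structures more heavily.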

TABLE 12-3

Studies Examining Various Aspects of Structure

Year Author Title
2007 Castle & Engberg The influence of staffing characteristics on quality of care in nursing homes
2007 Ward, Severs, Dean, & Brooks Care home versus hospital and own home environments for rehabilitation of older people
2007 Goldman, Vittinghoff, & Dudley Quality of care in hospitals with a high percent of Medicaid patients
2006 Mor Defining and measuring quality outcomes in long-term care
2006 Colón-Emeric et al. Patterns of medical and nursing staff communication in nursing homes: Implications and insights from complexity science
2005 Wilson et al. Quality of HIV care provided by nurse practitioners, physician assistants, and physicians
2004 Schnelle et al. Relationship of nursing home staffing to quality of care
2004 Morgan, Stewart, D’Arcy, & Werezak Evaluating rural nursing home environments: Dementia special care units versus integrated facilities

The federal government requires nursing homes, home health care agencies, and hospitals to collect and report specific quality measures. This mandate was made because of considerable variation in the quality of care across these structures. Various government agencies analyze the data so that they can adequately oversee the quality of care provided to the American public. The data are made available to the general public so that individuals can make their own determination of the quality of care provided by various nursing homes, home health care agencies, or hospitals. Researchers can also access these data for studies of the quality of various structures. To access these data on the Internet, search using the phrases nursing home compare, home health compare, or hospital compare. In addition to being able to select a specific hospital, nursing home, or home health care agency, you will be able to access considerable general information about quality related to these structures of health care. Search for Magnet hospitals at the American Nurses Credentialing Center at www.nursecredentialing.org/magnet to check the status of a particular hospital.

FEDERAL GOVERNMENT INVOLVEMENT IN OUTCOMES RESEARCH

Agency for Health Services Research

Nurses participated in the initial federal involvement in studying the quality of health care. In 1959, two National Institutes of Health study sections, the Hospital and Medical Facilities Study Section and the Nursing Study Section, met to discuss concerns about the adequacy and appropriateness of medical care, patient care, and hospital and medical facilities. As a result of their dialogue, a Health Services Research Study Section was initiated. This study section eventually became the Agency for Health Services Research (AHSR). With small amounts of funding from Congress, the AHSR continued to study the effectiveness of health services, primarily supporting the research of economists, epidemiologists, and health policy analysts (White, 1993). Two projects that were to have the greatest impact were small area analyses and the Medical Outcomes Study (MOS).

Small Area Analyses

In the 1970s, an epidemiologist named Wennberg began a series of studies examining small area variations in medical practice across towns and counties. He found a wide variation in the tonsillectomy rate from one town to another in the New England area that could not be explained by differences such as health status, insurance, and demographics. These findings were replicated for a variety of medical procedures. Investigators began a search for the underlying causes of the variation and their implications for health status (O’Connor, Plume, & Wennberg, 1993; Wennberg, Barry, Fowler, & Mulley, 1993). Studies also revealed that many procedures, such as coronary artery bypass, were being performed on patients who did not have appropriate clinical indications for such surgery (Power, Tunis, & Wagner, 1994).
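Small area analysis of the Wennberg kind boils down to computing a procedure rate for each area and then summarizing the spread across areas. The town names, counts, and populations below are invented; the extremal ratio (highest area rate divided by lowest) is one common variation summary.

```python
def rate_per_1000(procedures: int, population: int) -> float:
    """Area procedure rate per 1,000 population."""
    return 1000 * procedures / population

# Hypothetical tonsillectomy counts and child populations for four towns.
areas = {
    "town_a": (12, 4000),
    "town_b": (45, 5000),
    "town_c": (8,  3200),
    "town_d": (30, 4800),
}
rates = {town: rate_per_1000(n, pop) for town, (n, pop) in areas.items()}

# Extremal ratio: how many times higher the highest-rate area is than the lowest.
extremal_ratio = max(rates.values()) / min(rates.values())
print(rates)
print(round(extremal_ratio, 2))  # a ratio well above 1 signals unexplained variation
```

If differences in health status, insurance, and demographics cannot account for a ratio this size, the variation points to differences in practice style rather than in patient need, which was Wennberg’s central finding.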

Medical Outcomes Study

The Medical Outcomes Study (MOS) was the first large-scale study to examine factors influencing patient outcomes. The study was designed to identify elements of physician care associated with favorable patient outcomes. Figure 12-2 shows the conceptual framework for the MOS. The MOS failed to control for the effects of nursing interventions, staffing patterns, and nursing practice delivery models on medical outcomes. Coordination of care, counseling, and referral, activities more commonly performed by nurses than by physicians, were considered in the MOS to be components of medical practice. The abstract of the MOS is provided below:


Figure 12-2 The MOS conceptual framework.

The Medical Outcomes Study was designed to (1) determine whether variations in patient outcomes are explained by differences in system of care, clinician specialty, and clinicians’ technical and interpersonal styles and (2) develop more practical tools for the routine monitoring of patient outcomes in medical practice. Outcomes included clinical end points; physical, social, and role functioning in everyday living; patients’ perceptions of their general health and well-being; and satisfaction with treatment. Populations of clinicians (n = 523) were randomly sampled from different health care settings in Boston, MA; Chicago, IL; and Los Angeles, CA. In the cross-sectional study, adult patients (n = 22,462) evaluated their health status and treatment. A sample of these patients (n = 2349) with diabetes, hypertension, coronary heart disease, and/or depression were selected for the longitudinal study. Their hospitalizations and other treatments were monitored and they periodically reported outcomes of care. At the beginning and end of the longitudinal study, Medical Outcomes Study staff performed physical examinations and laboratory tests. Results were reported serially, primarily in The Journal of the American Medical Association. (Tarlov et al., 1989)

Agency for Health Care Policy and Research

In 1989, Congress created the Agency for Health Care Policy and Research (AHCPR) to replace the AHSR. Congress also established the National Advisory Council for Health Care Policy, Research, and Evaluation. The council was required to include (1) health care researchers; (2) health professionals (specifically including nurses); (3) individuals from business, law, ethics, economics, and public policy; and (4) individuals representing the interests of consumers. The budget for this work increased to $1.9 million in 1988, $5.9 million in 1989, and $37.5 million in 1990. The agency’s budget request for 2008 was $329,564,000. This agency has now been renamed the Agency for Healthcare Research and Quality (AHRQ). Its current budget is posted online at www.ahrq.gov.

The AHCPR initiated several major research efforts to examine medical outcomes. Two of the most significant, which are described here, are the Medical Treatment Effectiveness Program (MEDTEP) and a component of MEDTEP referred to as patient outcomes research teams (PORTs) (Greene, Bondy, & Maklan, 1994).

Medical Treatment Effectiveness Program

Congress established MEDTEP in 1989 to be implemented by the AHCPR. The purpose of the program was to improve the effectiveness and appropriateness of medical practice. When the program was mandated, Congress used the term medical. However, it was broadly interpreted to include health care in general, particularly—from our perspective—nursing care. The program was charged to develop and disseminate scientific information about the effects of health care services and procedures on patients’ survival, health status, functional capacity, and quality of life, a remarkable shift from the narrow focus of traditional medical research. The program funded three research areas: (1) patient outcomes research, (2) database development, and (3) research on effective methods of disseminating the information gathered. For more information on MEDTEP, go to the AHRQ website.

Patient Outcomes Research Team Projects (PORTs)

PORTs were large-scale, multifaceted, and multidisciplinary projects. Congress mandated PORTs to “identify and analyze the outcomes and costs of current alternative practice patterns in order to determine the best treatment strategy and to develop and test methods for reducing inappropriate variations” (U.S. Congress, 1994, p. 67). The PORTs were required to “conduct literature reviews and syntheses; analyze practice variations and associated patient outcomes, using available data augmented by primary data collection where desired; disseminate research findings; and evaluate the effects of dissemination” (U.S. Congress, 1994, p. 67). PORTs might address questions such as “Do patients benefit from the care provided?” “What treatments work best?” “Has the patient’s functional status improved?” “According to whose viewpoint?” and “Are health care resources well spent?” (Tanenbaum, 1994; Wood, 1990).

A major task of PORTs was to disseminate their findings and change the practice of health care providers to improve patient outcomes. A framework for dissemination was developed that identified the audiences for disseminated products, the media involved, and the strategies that foster assimilation and adoption of information (Goldberg et al., 1994).

The National Center for Nursing Research, now the National Institute of Nursing Research (NINR), developed a partnership with AHCPR, now the Agency for Healthcare Research and Quality (AHRQ), to fund outcomes studies of importance to nursing. Calls for proposals jointly supported by AHRQ and NINR are announced each year and can be found on NINR’s home page at www.nih.gov/ninr.

With a growing budget and strong political support, proponents of the AHCPR were becoming a powerful force. They insisted on a change in health care because of the demand for health care reform that existed throughout the government and among the public. A reauthorization act changed the name of the AHCPR to the Agency for Healthcare Research and Quality (AHRQ). The AHRQ is designated as a scientific research agency. The term policy was removed from the agency name to avoid the perception that the agency determined federal health care policies and regulations. The word quality was added to the agency’s name, establishing the AHRQ as the lead federal agency on quality of care research, with a new responsibility to coordinate all federal quality improvement efforts and health services research. The new legislation eliminated the requirement that the AHRQ develop clinical practice guidelines. However, the AHRQ still supports these efforts through evidence-based practice centers and the dissemination of evidence-based guidelines through its National Guideline Clearinghouse.

The United States is not the only country demanding improvements in quality of care and reductions in costs. Many countries are experiencing similar concerns and addressing them in relation to their particular government structures. Thus, the movement into outcomes research and the approaches described in this chapter are a worldwide phenomenon.

Clinical Guideline Panels

Clinical guideline panels incorporate available evidence on health outcomes into sets of recommendations concerning appropriate management strategies for patients with the studied conditions. An important source of guidelines is the National Guideline Clearinghouse (NGC), which posts clear criteria for the submission and inclusion of guidelines on its website. Any professional organization may gather a group to develop guidelines on a particular topic. Some groups seek funding for the project; others, such as professional organizations, conduct these efforts as an aspect of their organizational work. Medical schools and nursing schools have submitted guidelines, as have medical and nursing organizations and volunteer agencies such as the American Cancer Society. Guidelines developed across the world are included. Some guidelines are evidence based, whereas others are not. The evidence-based guidelines have considerably more validity. Current guidelines can be obtained from the National Guideline Clearinghouse of AHRQ at www.guideline.gov.

OUTCOMES RESEARCH AND NURSING PRACTICE

Outcome studies provide rich opportunities to build a stronger scientific underpinning for nursing practice. Nurse researchers have been actively involved in the effort to examine the outcomes of patient care. Ideally, we would like to understand the outcomes of nursing practice within a one-to-one nurse/patient relationship. However, in most cases, more than one nurse cares for a patient. Therefore, the nursing effect is shared. In addition, nurse managers and nurse administrators have control over the nursing staff and the environment of nursing practice, and this affects the autonomy of the nurse to implement practice. Therefore, outcomes research must first focus on how nursing care is organized rather than what nurses do. Then, perhaps, we can begin to determine how what nurses do influences patient outcomes (Lake, 2006). We know that nurses do have an impact on patient outcomes. Kramer, Maguire, and Schmalenberg (2006) indicated that “a growing body of evidence supports a linkage between an empowered shared leadership/governance structure and control over nursing practice.” The importance of autonomy in clinical nursing practice is being recognized as critically important to positive patient outcomes. It is important to identify autonomy-enabling structures in the organizational structures of nursing practice. One such structure revealed in a number of nursing studies is Magnet hospitals.

Lake (2006) suggested that a black box holds the causal link between how care is organized and variations in patient outcomes. We cannot see inside the black box; for now, we can only guess at its contents. Lake (2006) believes the black box contains nursing surveillance, judgment, and action, which are the bases for quality of care, but we cannot yet study these directly. Do you know how frustrating this is for a nurse researcher? As we gain more understanding of the organization of nursing care and of the variation in patient outcomes, however, we will begin to obtain glimpses into the black box and to understand that causal link.

Nursing-Sensitive Patient Outcomes (NSPOs)

A nursing-sensitive patient outcome is “sensitive” because it is influenced by nursing. It may not be caused by nursing but is associated with nursing. In various situations, “nursing” might be the individual nurse, nurses as a working group, the approach to nursing practice, the nursing unit, the institution that determines numbers of nurses, salaries, educational levels of nurses, assignments of nurses, workload of nurses, management of nurses, and policies related to nurses and nursing practice. It might even include the architecture of the nursing unit. In whatever form, nursing actions have a role in the outcome, even though acts of other professionals, organizational acts, and patient characteristics and behaviors often are involved in the outcome. What patient outcomes can you think of that might be nursing-sensitive?

Nursing-sensitive outcomes have become an issue because of national concerns related to the quality of care. “The demand for professional accountability regarding patient outcomes dictates that nurses are able to identify and document outcomes influenced by nursing care” (Given & Sherwood, 2005, p. 774). Efforts to study nursing-sensitive outcomes were initiated by the American Nurses Association (ANA). In 1994, the ANA, in collaboration with the American Academy of Nursing Expert Panel on Quality Health Care (Mitchell, Ferketich, Jennings, and the American Academy of Nursing Expert Panel on Quality Health Care, 1998), launched a plan to identify indicators of quality nursing practice and to collect and analyze data using these indicators across the United States. The goal was to identify or develop nursing-sensitive quality measures. Donabedian’s theory was used as the framework for the project. Together, these indicators were referred to as the ANA Nursing Care Report Card. This Nursing Care Report Card could facilitate benchmarking, or setting a desired standard, that would allow comparisons of hospitals in terms of their nursing care quality.

The Acute Care Setting Report Card indicators were as follows:

• Patient satisfaction with pain management, nursing care, overall care, and educational information

• Pressure ulcers

• Patient falls

• Nurse satisfaction

• Nosocomial infection rate

• Direct care staffing mix

• Total nursing care hours per patient per day

No one knew which indicators were sensitive to the nursing care provided to patients or what the relationships were between nursing inputs and patient outcomes. Every hospital had a different way of measuring the indicators that the ANA had selected, and persuading hospitals to adopt a standardized measure of the indicators for consistency across sites was a major endeavor (Jennings, Loan, DePaul, Brosch, & Hildreth, 2001; Rowell, 2001). Multiple pilot studies were conducted as nurse researchers and cooperating hospitals put in place the mechanisms required for data collection. These pilot studies identified multiple problems that had to be resolved before the project could go forward. Researchers learned that not only must the indicators be measured consistently, but data collection must also be standardized. As studies continued, the indicators were refined and continue to be tested.

The ANA proposed that all hospitals collect and report on the nursing-sensitive quality indicators. To encourage participation, the ANA worked with accrediting organizations and the federal government to share the data with key groups. The ANA also encouraged state nurses’ associations to lobby state legislatures to include the nursing-sensitive quality indicators in regulations or state law.

In 1998, the ANA provided funding to develop a national database to house data collected using nursing-sensitive quality indicators. The Midwest Research Institute (MRI) and the University of Kansas School of Nursing jointly manage the data. In 2001, data from nursing-sensitive quality indicators were being collected from more than 120 hospitals in 24 states across the United States. Data were analyzed quarterly, and feedback reports were distributed to all participating hospitals. Confidential benchmarking reports were provided to allow hospitals to compare their results with those of other hospitals in the group (Rowell, 2001).
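The benchmarking comparison such reports support can be sketched in code. The following is a hypothetical illustration only (the database's actual indicator definitions and report formats are not reproduced here): a hospital's fall rate is standardized per 1,000 patient days and then located as a percentile rank among peer hospitals.

```python
# Hypothetical sketch of a benchmarking comparison for one nursing-sensitive
# indicator (patient falls). The indicator definition and all data are
# illustrative, not taken from the ANA database.

def falls_per_1000_patient_days(falls: int, patient_days: int) -> float:
    """Standardize the raw fall count by exposure (patient days)."""
    return 1000 * falls / patient_days

def percentile_rank(value: float, peer_values: list) -> float:
    """Percentage of peer hospitals whose rate is at or below this hospital's."""
    at_or_below = sum(1 for v in peer_values if v <= value)
    return 100 * at_or_below / len(peer_values)

# Quarterly (falls, patient days) for five peer hospitals in the database.
peers = [(12, 4000), (8, 3500), (20, 5000), (5, 3000), (15, 4500)]
peer_rates = [falls_per_1000_patient_days(f, d) for f, d in peers]

our_rate = falls_per_1000_patient_days(9, 4200)
rank = percentile_rank(our_rate, peer_rates)
# Lower is better for falls, so a low percentile rank is a favorable report.
```

A real quarterly report would also indicate whether a hospital's deviation from the group is statistically reliable given its volume of patient days.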

In 1997, the ANA appointed members of the ANA Advisory Committee on Community-based Non-acute Care Indicators to identify a core set of indicators. The committee began by selecting a theoretical basis for its work: Evans and Stoddart’s (1990) determinants of health model and Donabedian’s model of quality. As its work progressed, the committee chose to synthesize a model to guide the identification and testing of indicators (Figure 12-3). The committee followed the same pattern of work that the previous committee had used to choose the indicators. The following indicators were selected:

image

Figure 12-3 Model used by the ANA Advisory Committee to guide identification and testing of community-based nonacute care indicators.

• Community-based settings

• Pain management

• Staff mix

• Consistency of communication

• Client satisfaction

• Prevention of tobacco use

• Cardiovascular risk reduction

• Caregiver activity

• Activities of daily living

• Psychosocial interaction

Other organizations currently involved in efforts to study nursing-sensitive outcomes include the National Quality Forum (NQF), the California Nursing Outcomes Coalition, the Veterans Affairs Nursing Outcomes Database, the Centers for Medicare & Medicaid Services’ (CMS) Hospital Quality Initiative, the American Hospital Association, the Federation of American Hospitals, the Joint Commission, and the Agency for Healthcare Research and Quality (AHRQ).

The Joint Commission appointed a group of external experts and stakeholders, titled the Advisory Council on Performance Measurement, to provide advice on performance measurement issues. This quasi-independent advisory group has responsibilities that include the initial selection of attributes for core measures and evaluation criteria. By 2007, the Joint Commission had implemented five sets of core performance measures for hospitals (Riehle, Hanold, Sprenger, & Loeb, 2007). These sets are in the following areas:

• Heart failure

• Acute myocardial infarction

• Pneumonia

• Pregnancy and related conditions

• Surgical infection prevention

Sets are currently being prepared for the following:

• Intensive care unit (ICU) care

• Surgical care improvement project (SCIP)

• Sepsis

• Inpatient psychiatric care

• Pain management

• Children’s asthma care

The National Quality Forum (NQF), with strong financial support from the Robert Wood Johnson Foundation, adopted 15 indicators of health care quality in 2004, which are referred to as the NQF-15. The NQF-15 are as follows:

• Urinary catheter-associated urinary tract infection in ICU patients

• Ventilator-associated pneumonia for ICU and high-risk nursery (HRN) patients

• Central line catheter-associated bloodstream infection (BSI) rate for ICU and HRN patients

• Death among surgical inpatients (failure to rescue)

• Restraint prevalence (vest and limbs only)

• 30-day mortality risk (risk-adjusted)

• Length of stay for a given condition

• RN education (proportion of total RNs with the credential of bachelor’s degree in nursing or higher)

• Smoking cessation counseling for patients with acute myocardial infarction

• Smoking cessation counseling for patients with heart failure

• Smoking cessation counseling for patients with pneumonia

• Recently hospitalized residents with symptoms of delirium (nursing homes)

• Katz Activities of Daily Living Index (home health care and nursing homes)

• Recently hospitalized residents who experience moderate to severe pain (nursing homes)

• Recently hospitalized residents who have pressure ulcers (nursing homes)

These indicators are the first nationally standardized performance measures of nursing-sensitive care in acute care hospitals and are designed to assess health care quality, patient safety, and a professional and safe work environment. Although most measures currently in use focus on failure to meet expected standards, the NQF believes that quality is as much about influencing positive outcomes as it is about avoiding negative outcomes. Thus, the NQF is interested in developing measures that reflect the positive effects of nursing care and plans to include newly developed measures as advances in nursing science yield other quality outcome measures. Priority areas for indicators include assessment, patient education, and care coordination (Naylor, 2007). Given and Sherwood (2005) suggested that “patients’ outcomes may be measured best in the context of their needs, given their diagnosis, treatment, and altered life expectations” (p. 774). They also expressed concern that little has been done to examine the relationship between nursing-sensitive patient outcomes and health disparities.

Oncology Nursing Society

The Oncology Nursing Society has taken a leadership role among specialty nursing organizations in developing an evidence-based practice resource area on its website, www.ons.org/outcomes/measures/index.shtml. The site provides nurses with a guide to identify, critically appraise, and use evidence to solve clinical problems. It can also assist nurses—especially advanced practice nurses—who are helping others in developing evidence-based practice protocols. Information on the website indicated that “any healthcare provider, administrator, educator, or student who wants to learn more about the Evidence-Based Practice process or who is involved with implementing such a process may find this resource area informative.” The outcomes resource area provides resources that will help nurses to achieve desired outcomes for people with cancer, including outcome measures, resource cards, and evidence tables. The ONS Outcomes Resource Area (ORA) provides information for both the nurse providing direct patient care and the nurse who is looking for research evidence regarding outcomes. The resource area can be used in conjunction with the Evidence-Based Practice Resource Area (EBPRA), which is focused on the process and resources for evidence-based practice. ONS provides Putting Evidence into Practice (PEP) resources. PEP resource cards can be downloaded or ordered online. Additional information related to interventions for the outcomes including definitions of interventions, evidence tables, meta-analysis and systematic review tables, and references can be found on the ORA website. Another resource is an Outcomes Measures section that provides detailed information on measuring outcomes specific to oncology patients (e.g., pain, nausea and vomiting, peripheral neuropathies, and mucositis). Within each outcome, you will find definitions, an overview of measuring the outcome, and tables of instruments recommended for measurement. 
Additional resources related to measurement of the outcome are also available. All resources on the ORA can be easily printed from PDF files. Articles related to PEPs are published regularly in the Oncology Nursing Forum. A list of Oncology Nursing-Sensitive Outcomes is provided on the ORA website, and those available in 2007 are listed in Table 12-4. Because this incredible site is accessible to the public, all nurses can use this information to improve patient outcomes. A separate web page is provided for each nursing-sensitive outcome.

TABLE 12-4

Oncology Nursing-Sensitive Outcomes: A Classification with Exemplars

Symptom Experience

Pain

Fatigue

Insomnia

Nausea

Constipation

Anorexia

Breathlessness

Diarrhea

Altered skin/mucous membranes

Neutropenia

Functional Status

ADL (activities of daily living)

IADL (instrumental activities of daily living)

Role functioning

Activity tolerance

Ability to carry out usual activities

Nutritional status

Safety (preventable adverse events)

Infections

Falls

Skin ulcers

Extravasation incidents

Hypersensitive reactions

Psychological Distress

Anxiety

Depression

Spiritual distress

Economic (incorporate this category into all categories)

Length of stay

Unexpected re-admissions

Emergency visits

Out-of-pocket costs (family)

Cost per patient day

Cost per episode of care

From Given, B., Beck, S., Etland, C., Gobel, B. H., Lamkin, L., & Marsee, V. D. (2004). Nursing-sensitive patient outcomes. Oncology Nursing Society. Retrieved November 16, 2007, from www.ons.org/outcomes/measures/outcomes.shtml.

Advanced Practice Nursing Outcomes Research

Studies of outcomes of advanced practice nurses (APNs) are now appearing in the literature. APNs are RNs educationally prepared at the master’s or doctoral level who have expertise in a particular area of clinical specialization and provide direct patient care. The ANA recognizes four types of APNs: certified registered nurse anesthetists (CRNAs), certified nurse midwives (CNMs), clinical nurse specialists (CNSs), and nurse practitioners (NPs). Of particular interest in studying APNs is their years of practice, which researchers are finding to be related to quality of care. Two aspects of experience may influence the quality of care APNs provide: the number of years of clinical practice before becoming an APN and the number of years of practice as an APN.

Another interest in terms of outcome research is what happens during the process of APN care. This care involves a set of activities within, among, and between practitioners and patients and includes both technical and interpersonal elements. This process of care is complex and somewhat mysterious. However, clearly describing what occurs during this process of care is essential to developing a comprehensive understanding of how APNs affect outcomes. Although researchers have provided descriptions of APN care, considerable detailed work must still be done to more thoroughly describe the activities and interactions that occur between APNs and patients during the process of care (Cunningham, 2004).

The next step is to establish the relationship between APN interventions and outcomes. The outcomes must be clearly defined and measurable or observable. One critical outcome is cost. Outcomes may require risk adjustments for factors that may confound the results, such as comorbidity, stage of illness, severity of illness, and demographic characteristics.
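Risk adjustment of the kind mentioned above is often performed by indirect standardization. The sketch below is a generic illustration rather than the method of any particular project, and all patient data in it are invented: each patient's expected probability of an adverse outcome is assumed to come from a reference model built on comorbidity, stage and severity of illness, and demographics.

```python
# Generic risk-adjustment sketch by indirect standardization. The expected
# probabilities are assumed outputs of a reference risk model; all numbers
# below are invented for illustration.

def observed_expected_ratio(outcomes, expected):
    """O/E ratio: >1 means more adverse events than the case mix predicts."""
    observed = sum(outcomes)    # count of actual adverse events (0/1 flags)
    predicted = sum(expected)   # sum of model-predicted probabilities
    return observed / predicted

outcomes = [0, 1, 0, 0, 1, 0, 0, 0]                          # 2 observed events
expected = [0.05, 0.40, 0.10, 0.05, 0.30, 0.20, 0.10, 0.05]  # 1.25 expected

oe = observed_expected_ratio(outcomes, expected)
```

Comparing providers by O/E ratios rather than raw rates keeps a practice that treats sicker patients from looking worse solely because of its case mix.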

Failure to use rigor in measurement will limit your ability to interpret study findings meaningfully, and variables should be measured with the same methods across studies so that results can be compared more readily. Understanding which outcomes are sensitive to APN interventions is critical to building knowledge in this area, and a classification of outcomes of APN practice is needed. Ingersoll, McIntosh, and Williams (2000) generated a beginning list of 27 relevant outcome indicators of APN practice. The nine highest-ranked indicators were the following:

• Satisfaction with care delivered

• Symptom resolution or reduction

• Perception of being well cared for

• Compliance or adherence with treatment plan

• Knowledge of patients and families

• Trust of care provider

• Collaboration among care providers

• Frequency and type of procedures ordered

• Quality of life

These outcomes will require empirical validation. This means that the outcome measures will need to be tested repeatedly in nursing studies. No studies have, as yet, specified which APN interventions were responsible for what outcomes or how the outcomes were affected by the interventions. Cunningham (2004) posed the following critical questions in relation to APN outcomes:

1. What are the mechanisms by which APNs are able to consistently improve outcomes?

2. Is it just that they provided the interventions described by the researchers?

3. Was something key about those interventions within the populations studied?

4. What about the interpersonal components of care? (p. 228)

Linking APN nursing interventions to outcomes requires that we be able to quantify (reduce to numbers) an episode of nursing care using a measurement method that adequately captures what nurses actually do (Hughes et al., 2002). Information must be provided on type and frequency, range, emphasis, and dose intensity of the intervention. Outcomes are context dependent and should be evaluated within specific populations and settings. It has not been clear at this point in APN research which conditions or settings are most likely to benefit from APN interventions (Cunningham, 2004; Sox, 2000). Cunningham (2004) cautioned:

[A]ll outcome measurement must be considered within the context of time. The effects of healthcare interventions may not be discernable immediately or be sustained over time. Longitudinal measures provide information about the patterns and trajectories of outcomes. Understanding the duration of an intervention’s effect allows for the planning and delivery of effective health care. (p. 227)

If you are interested in becoming involved in APN-led multidisciplinary outcomes measurements projects, Kleinpell and Gawlinski (2005) recommended the following process:

• Find/identify outcomes variables that the APN can impact.

• Organize a team.

• Clarify current knowledge of the practice issue to be improved.

• Understand sources of variation.

• Select practices and strategies for improvement.

• Plan.

• Do the interventions according to your plan.

• Check/analyze/review data and results.

• Put improvement into effect, hold the gains you have achieved, and apply the lessons you have learned.

Practice-Based Research Networks (PBRNs)

PBRNs are groups of patient care practices that affiliate in order to analyze their clinical practices in community settings. A Web-based search and a survey of PBRNs yielded 111 PBRNs (Tierney et al., 2007). The 86 networks that met the criteria for primary care PBRNs contained 1,871 practices, 12,957 physicians, and 14.7 million patients; minority and underinsured patients were overrepresented. The average PBRN was young, and half had published three or fewer studies. Three-quarters were affiliated with universities. Four primary care specialties were represented in PBRN practices: family medicine, pediatrics, general internal medicine, and family nurse practitioners. The primary research foci were prevention, diabetes, cardiovascular risk, and mental health. Two networks are composed of advanced practice registered nurses: APRNet and MNCCRN.

APRNet (Advanced Practice Registered Nurses’ Research Network) was funded in 2000 by a grant to Yale University School of Nursing in collaboration with Boston College, the University of Connecticut, the Universities of Massachusetts at Amherst and Worcester, and the University of Rhode Island. APRNet focuses on primary care. The purpose of the network is to “conduct and facilitate practice-based research relevant to APRN primary care practice; develop culturally competent, evidence-based practice models for APRNs; and translate research findings into primary care practice” (McCloskey, Grey, Deshefy-Longhi, & Grey, 2003).

The Midwest Nursing Centers Consortium Research Network (MNCCRN), a PBRN funded by the AHRQ, is the only PBRN in the United States composed exclusively of clinical nurse specialists. Its focus is the delivery of primary health care, with a particular emphasis on reducing health disparities. The network conducts community-based participatory research that will ultimately inform practice, education, and health policy (Anderko, Lundeen, & Bartz, 2006).

METHODOLOGIES FOR OUTCOMES STUDIES

A research tradition for the outcomes paradigm is still emerging. A research tradition defines the acceptable research methodologies for a field, and the lack of an established set of methodologies should encourage greater creativity in seeking new strategies for studying the phenomena of concern. Small single studies using untried methodologies may be useful, but research teams must develop research programs with a planned sequence of studies focused on a particular outcome concern. The Patient Outcomes Research Teams (PORTs) defined a research process for conducting programs of funded outcomes studies. These programs are complex and may consist of multiple studies using a variety of research strategies whose findings must be merged to formulate conclusions.

Although implementing a research program as extensive as a PORT would be unrealistic without funding, ideas for developing the methodology of outcomes research programs on a smaller scale may be generated through an examination of these plans. For example, measurement methods used in PORTs are available for smaller studies. The following steps were constructed combining PORT plans proposed by Freund, Dittus, Fitzgerald, and Heck (1990), Sledge (1993), and Turk and Rudy (1994):

1. Perform a critical review of the published literature or a meta-analysis.

2. Conduct large database analyses on the basis of the results of the critical literature review.

3. Identify outcomes measures for use in the study, and evaluate their sensitivity to change.

4. Identify variables that might affect the outcomes.

5. Achieve consensus on definitions for all variables to be used in the research program.

6. Develop assessment instruments or techniques.

7. Conduct patient surveys or focus groups to gain information on outcomes, such as level of functional status and perceived pain, and on how these outcomes may improve or regress over time.

8. Determine patterns of care. (Who provides care at what points of time for what purposes?)

9. Perform a cohort analysis: Monitor a cohort of patients—some of whom will receive one treatment and others of whom will not receive the treatment—to assess changes in outcomes over time. Use a telephone survey at selected intervals to gather information. Evaluate the proportion of patients who improve, as well as the group mean differences.

10. Determine, through follow-up studies, differences in patient selection or interventions that are associated with different outcomes. Evaluate the durability of change by conducting sufficiently long follow-up. Determine the percentage of patients dropping out of groups receiving different treatments, and, when possible, determine their reasons for dropping out.

11. Determine the clinical significance of improvement, as well as the statistical significance.

12. Determine the cost-benefit and cost-effectiveness of the treatments under evaluation.

13. Use decision analyses to synthesize information about patients’ outcomes and preferences for various types of outcomes.

14. Disseminate information to both patients and health care providers about which individuals would and which would not benefit from the procedure.

15. Conduct a clinical trial to evaluate the effects of the intervention.

16. Incorporate findings into treatment guidelines.

17. Modify provider and patient behavior so that proven, effective treatment is given to those who are most likely to benefit.
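Step 13 above calls for decision analysis. As a minimal illustration of the underlying computation, the sketch below evaluates a two-strategy decision tree by expected utility. All probabilities and utilities are invented; in a real PORT they would be estimated from the cohort analyses and patient preference surveys in the earlier steps.

```python
# Minimal decision-analysis sketch: compare two treatment strategies by
# expected utility over a one-level decision tree. All numbers are invented
# for illustration.

def expected_utility(branches):
    """branches: (probability, utility) pairs; probabilities must sum to 1."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * u for p, u in branches)

treat = [(0.70, 0.9),     # improves, with some treatment burden
         (0.20, 0.5),     # unchanged
         (0.10, 0.2)]     # complication
no_treat = [(0.40, 1.0),  # improves spontaneously
            (0.45, 0.5),  # unchanged
            (0.15, 0.1)]  # deteriorates

strategies = {"treat": expected_utility(treat),
              "no_treat": expected_utility(no_treat)}
best = max(strategies, key=strategies.get)
```

Sensitivity analysis (varying the probabilities and utilities over plausible ranges) would show how robust the recommended strategy is to uncertainty in these inputs.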

The PORTs recognized the need to allow diversity in research strategies, measures, and analyses to facilitate methodological advances (Fowler, Cleary, Magaziner, Patrick, & Benjamin, 1994). Creative flexibility is often necessary to develop ways to answer new questions. Finding ways to determine the impact of a condition on a person’s life is difficult. Interpreting results can also be problematic, because clinical significance is considered as important as statistical significance. This issue requires a judgment by the research team as to what constitutes clinical significance in their particular area of study.
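One common way to weigh clinical against statistical significance is to report a standardized effect size alongside the hypothesis test and compare it with a prespecified threshold for a minimal clinically important difference. The sketch below uses Cohen's d; the pain scores and the 0.5 threshold are illustrative assumptions, not values from any study.

```python
# Sketch: report an effect size (Cohen's d) alongside the significance test
# and judge it against a prespecified threshold. Scores and the threshold
# are invented for illustration.

import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical pain scores (0-10 scale) under usual care vs. a new protocol.
usual_care = [6.0, 7.0, 5.0, 6.5, 7.5, 6.0]
new_protocol = [4.5, 5.0, 6.0, 4.0, 5.5, 5.0]

d = cohens_d(usual_care, new_protocol)
# Judgment call by the research team: treat d >= 0.5 as clinically meaningful.
clinically_meaningful = d >= 0.5
```

A tiny p-value with a d near zero would be statistically but not clinically significant; the threshold itself is the judgment the research team must defend.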

The following section describes some of the sampling issues, research strategies, measurements, and statistical approaches that researchers use when conducting outcomes studies. These descriptions are not sufficient to guide you in using the approaches described; rather they provide a broad overview of a variety of methodologies being used. For additional information, refer to the citations in each section. Outcomes studies cross a variety of disciplines; thus, the emerging methodologies are being enriched by a cross-pollination of ideas, some of which are new to nursing research.

Samples and Sampling

The preferred methods of obtaining samples differ in outcomes studies: random sampling is not considered desirable and is seldom used, and heterogeneous rather than homogeneous samples are sought. Traditional studies use sampling criteria that restrict the subjects included in order to decrease possible biases, reduce variance, and increase the possibility of identifying a statistically significant difference. Outcomes researchers instead seek large heterogeneous samples that reflect, as much as possible, all patients who would be receiving care in the real world. Samples therefore must include, for example, patients with various comorbidities and patients with varying levels of health status. In addition, persons who do not receive treatment for their condition should be identified. Devising ways to evaluate the representativeness of such samples is problematic, and developing strategies to locate untreated individuals and include them in follow-up studies is a challenge.

Traditional researchers and statisticians argue that when patients are not selected randomly, biases and confounding variables are more likely to occur and that this issue is a particular problem when the sample size is small. In nonexperimental studies, variation is likely to be greater, resulting in a higher risk of a type II error. Traditional analysts consider nonrandomized studies to be based on observational data and therefore do not view them as credible (Orchard, 1994). Using this argument, traditionalists claim that the findings of most outcomes studies are not valid and should not be used to establish guidelines for clinical practice or to build a body of knowledge.
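The link between greater variation and type II error can be made concrete with a small simulation (illustrative only): holding the true effect and sample size fixed, noisier data cause a simple two-group comparison to miss the effect more often.

```python
# Illustrative simulation: with the same true effect and sample size, larger
# within-group variance raises the type II error rate (i.e., lowers power).

import random
import statistics

def detects_effect(n, effect, sd, rng):
    """Crude rule: call a difference 'detected' if it exceeds 2 standard errors."""
    a = [rng.gauss(0.0, sd) for _ in range(n)]
    b = [rng.gauss(effect, sd) for _ in range(n)]
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    return abs(statistics.mean(b) - statistics.mean(a)) > 2 * se

def power(n, effect, sd, trials=2000):
    rng = random.Random(42)   # fixed seed keeps the sketch reproducible
    return sum(detects_effect(n, effect, sd, rng) for _ in range(trials)) / trials

low_noise = power(n=25, effect=1.0, sd=1.0)    # restricted, homogeneous sample
high_noise = power(n=25, effect=1.0, sd=3.0)   # same effect, heterogeneous sample
```

The same logic explains why restrictive sampling criteria, which reduce variance, make statistical significance easier to reach, at the cost of the real-world representativeness that outcomes researchers prize.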

Slade, Kuipers, and Priebe (2002) suggested the following:

Research questions are designed so that they can be answered by Randomized Controlled Trials (RCTs). Specifically, the use of RCTs involves the identification of an intervention which is given to patients in the experimental group, but not the control group. This encourages the asking of particular types of research questions, typically of the form “Does intervention X work for disorder Y?” It will be argued that the RCT methodology limits the questions that can be asked, and hence can restrict the potential findings from research. Furthermore, if different questions were being asked, the RCTs would not always be the best methodology to employ…. The question “Which patients with condition Y does intervention X work for?” may prove to have more clinical relevance, and answering this question may involve asking the question “How does intervention X work”?, a question which cannot be answered just by using RCTs. (Slade et al., 2002, pp. 12–13)

Large Databases as Sample Sources

One source of samples for outcomes studies is large databases. Two broad categories of databases emerge from patient care encounters: clinical databases and administrative databases, as illustrated in Figure 12-4. Clinical databases are created by providers such as hospitals, HMOs, and health care professionals. The clinical data are generated either as a result of routine documentation of care or in relation to a research protocol. Some databases are registries developed to gather data related to a particular disease, such as cancer (Lee & Goldman, 1989). With a clinical database, you can link observations made by many practitioners over long periods, and links can be made between the process of care and outcomes (Mitchell et al., 1994; Moses, 1995).

image

Figure 12-4 Types of databases emanating from patient care encounters.

Administrative databases are created by insurance companies, government agencies, and others not directly involved in providing patient care. Administrative databases have standardized sets of data for enormous numbers of patients and providers (Deyo et al., 1994; McDonald & Hui, 1991). An example is the Medicare database managed by the Centers for Medicare & Medicaid Services (CMS). These large databases can be used to determine the incidence or prevalence of disease, geographical variations in medical care utilization, characteristics of medical care, outcomes of care, and complementarity with clinical trials. Wray et al. (1995) cautioned, however, that analyses should be restricted to outcomes specific to a particular subgroup of patients, rather than one adverse outcome of all disease states.
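As a hedged sketch of the kind of descriptive question such databases support, the following computes period prevalence and incidence of a condition from claims-like records. The field names, diagnosis labels, and records are invented for illustration and do not reflect the actual CMS data layout.

```python
# Hedged sketch of a descriptive analysis on claims-like records; the field
# names, diagnosis labels, and records are invented, not an actual CMS layout.

claims = [
    {"patient": 1, "dx": "diabetes", "year": 2006},
    {"patient": 1, "dx": "diabetes", "year": 2007},   # existing (prevalent) case
    {"patient": 2, "dx": "diabetes", "year": 2007},
    {"patient": 2, "dx": "diabetes", "year": 2007},   # repeat claim, same patient
    {"patient": 3, "dx": "asthma",   "year": 2007},
    {"patient": 4, "dx": "diabetes", "year": 2007},
]
enrollees = 1000   # covered population during 2007

# Period prevalence in 2007: distinct patients with any diabetes claim that year.
cases_2007 = {c["patient"] for c in claims
              if c["dx"] == "diabetes" and c["year"] == 2007}
prevalence_per_1000 = 1000 * len(cases_2007) / enrollees

# Incidence in 2007: cases with no diabetes claim in any earlier year.
prior_cases = {c["patient"] for c in claims
               if c["dx"] == "diabetes" and c["year"] < 2007}
new_cases = cases_2007 - prior_cases
incidence_per_1000 = 1000 * len(new_cases) / enrollees
```

Counting distinct patients rather than claims guards against the repeat-billing records common in administrative data, which is one small example of the data-quality issues discussed next.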

There are problems with the quality of data in the large databases. Hundreds of individuals in a variety of settings have gathered and entered these data. There have been few quality checks on the data, and within the same data sets, records may have different lengths and structures. Missing data are common. Sampling and measurement errors are inherent in all large databases. Sampling error results from the way in which cases are selected for inclusion in the database; measurement error emerges from problems related to the operational definition of concepts. Thus, the reliability and validity of the data are a concern (Davis, 1990; Lange & Jacox, 1993).

Large databases are used in outcomes studies to examine patient care outcomes. The outcomes that can be examined are limited to those recorded in the database and thus tend to be general. Existing databases can be used for analyses such as (1) assessing nursing care delivery models, (2) examining variations in nursing practice, or (3) evaluating patients’ risk of hospital-acquired infection, hospital-acquired pressure ulcers, or falls. Lange and Jacox (1993) identified the following important health policy questions related to nursing that should be examined through the use of large databases:

1. What is standard nursing practice in various settings?

2. What is the relationship between variations in nursing practice and patient outcomes?

3. What are the effects of different nursing staff mixes on patient outcomes and costs?

4. What are the total costs for episodes of treatment of specific conditions, and what part of those are attributable to nursing care?

5. Who is being reimbursed for nursing care delivery? (Lange & Jacox, 1993, p. 207)

To examine these questions, nurses must develop the statistical and methodological skills needed for working with large databases. Large databases contain patient and institutional information from huge numbers of patients. They exist in computer-readable form, require special statistical methods and computer techniques, and can be used by researchers who were not involved in the creation of the database.

Regrettably, nursing data are noticeably missing from these large databases and thus from the funded health policy studies using them. A nursing minimum data set has been repeatedly recommended for inclusion in these databases (Werley, Devine, Zorn, Ryan, & Westra, 1991; Werley & Lang, 1988; Zielstorff, Hudgings, Grobe, and the National Commission on Nursing Implementation Project Task Force on Nursing Information Systems, 1993). This minimum data set would comprise a set of variables necessary and sufficient to describe an episode of illness or the care given by a provider. The ANA has mandated the formation of a steering committee on databases to support clinical nursing practice. The following nursing classification schemes are being used in national databases:

• The North American Nursing Diagnosis Association (NANDA) classification

• The Omaha System: Applications for Community Health Nursing classification

• The Home Health Care Classification

• The Nursing Interventions Classification (NIC)

• The Nursing Outcomes Classification (NOC)

Temple (1990, p. 211) expressed the following concerns regarding the use of large data sets rather than controlled trials to assess the effectiveness of treatments: “We have traveled this route before with uncontrolled observations. It has always been hoped, and has often been asserted, that uncontrolled databases can be adjusted in some way that will allow valid comparisons of treatments. I know of no systematic attempt to document this.” Outcomes researchers counter these criticisms by pointing out that experimental studies lack external validity and are not useful for application in clinical settings. They claim that clinicians do not use the findings from clinical trials because trial samples are not representative of the patients seeking care.

Research Strategies for Outcomes Studies

Outcomes research programs usually consist of studies with a mix of strategies carried out sequentially. Although these strategies could be referred to as designs, the term design as used in Chapters 10 and 11 is not entirely consistent with its use here. Research strategies for outcomes studies have emerged from a variety of disciplines, and innovative new strategies continue to appear in the literature. Strategies for outcomes studies tend to have less control than traditional research designs and cannot be as easily categorized. The research strategies for outcomes studies described in this section are only a sampling from the literature; they are consensus knowledge building, practice pattern profiling, prospective cohort studies, retrospective cohort studies, population-based studies, clinical decision analysis, study of the effectiveness of interdisciplinary teams, geographical analyses, economic studies, ethical studies, and defining and testing of interventions.

Consensus Knowledge Building

Consensus knowledge building is usually performed by a multidisciplinary group representing a variety of constituencies. Initially, the group conducts an extensive international search of the literature on the topic of concern, including unpublished studies, studies in progress, dissertations, and theses. Several separate reviews may be performed, focusing on specific questions about the outcomes of care, diagnosis, prevention, or prognosis. Because meta-analytic methods often cannot be applied to the literature pertinent to PORTs, systematic approaches to critique and synthesis have been developed to identify relevant studies and gather and analyze data abstracted from the studies (Powe, Turner, Maklan, & Ersek, 1994).

The results are dispersed to researchers and clinical experts in the field, who are asked to carefully examine the material and then participate in a consensus conference. The consensus conference yields clinical guidelines, which are published and widely distributed to clinicians. The clinical guidelines are also used as practice norms to study process and outcomes in that field. Gaps in the knowledge base are identified and research priorities determined by the consensus group.

Preliminary steps in this process might include conducting extensive integrative reviews and seeking consensus from a multidisciplinary research team and locally available clinicians. A review could be accomplished by establishing a website and conducting dialogue with experts via the Internet. The review could then be published in Sigma Theta Tau’s online journal, Knowledge Synthesis in Nursing, with further dialogue conducted online. The Delphi method has also been used to seek consensus (Tork, Dassen, & Lohrmann, 2008).

Practice Pattern Profiling

Practice pattern profiling is an epidemiological technique that focuses on patterns of care rather than individual occurrences of care. Researchers use large database analysis to identify a provider’s pattern of practice and compare it with that of similar providers or with an accepted standard of practice. The technique has been used to determine overutilization and underutilization of services, to determine costs associated with a particular provider’s care, to uncover problems related to efficiency and quality of care, and to assess provider performance. The provider being profiled could be an individual practitioner, a group of practitioners, or a health care organization, such as a hospital or an HMO.

The provider’s pattern is expressed as a rate aggregated over time for a defined population of patients under the provider’s care. For example, the analysis might examine the number of sigmoidoscopy claims filed per 100 Medicare patients seen by the provider in a given year. Analyses might examine (1) whether diabetic patients have had at least one annual serum glucose test and have received an ophthalmology examination or (2) the frequency of flu shots, Papanicolaou smears, and mammograms for various target populations (Lasker, Shapiro, & Tucker, 1992; McNeil, Pedersen, & Gatsonis, 1992).
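The arithmetic behind a profile — aggregating a provider’s events as a rate per 100 patients and comparing it with a peer norm — can be sketched in a few lines of Python. The claim counts and the flagging thresholds below are hypothetical illustrations, not profiling standards:

```python
from statistics import mean

def rate_per_100(events, patients):
    """Aggregate a provider's events as a rate per 100 patients."""
    return 100 * events / patients

def profile_providers(claims):
    """claims maps provider -> (events, patients). Flag providers whose
    rate deviates from the peer mean by more than 50% in either direction."""
    rates = {p: rate_per_100(e, n) for p, (e, n) in claims.items()}
    norm = mean(rates.values())          # peer norm: mean rate across providers
    flags = {p: ("high" if r > 1.5 * norm
                 else "low" if r < 0.5 * norm
                 else "typical")
             for p, r in rates.items()}
    return rates, norm, flags

# Hypothetical sigmoidoscopy claims per provider in a given year.
rates, norm, flags = profile_providers({
    "provider_a": (12, 100),   # 12 claims per 100 patients
    "provider_b": (30, 150),   # 20 per 100
    "provider_c": (2, 100),    # 2 per 100
})
```

In practice the comparison would adjust for case mix and use far richer data, but the logic — rate, norm, deviation — is the core of a profile.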

Profiling can be used when the data contain hierarchical groupings: Patients could be grouped by nurse, nurses by unit, and units by larger organizations. The analysis uses regression equations to examine the relationship of an outcome to the characteristics of the various groupings. To be effective, the analysis must include data on the different sources of variability that might contribute to a given outcome.

The structure of the analysis reflects the structure of the data. For example, patient characteristics could be data on disease severity, comorbidity, emergent status, behavioral characteristics, socioeconomic status, and demographics. Nurse characteristics might consist of level of education, specialty status, years of practice, age, gender, and certifications. Unit characteristics could comprise number of beds, nursing management style used on the unit, ratio of patients to nurses, and the proportion of staff who are RNs (McNeil et al., 1992).

Profiles are designed to generate some type of action, such as to inform the provider that his or her rates are too high or too low compared with the norm. By examining aggregate patterns of practice, profiling can be used to compare the care provided by different organizations or received by different populations of patients. Critical pathways or care maps can then be used to determine the proportion of patients who diverged from the pathway for a particular nurse, group of nurses, or group of nursing units. Profiling can be used to improve quality, assess provider performance, and review utilization.

Profiling does not address methods of improving outcomes, although the process can identify problem areas. It cannot determine how performance should be changed to improve outcomes or who should make those changes. Profiling can, however, identify outliers, allowing more detailed examination of these individuals.

The databases currently being used for profiling are not ideal, because they were developed for other purposes. Only broad outcomes can be examined, such as morbidity and mortality, complications, readmissions, and frequency of utilization of various services (Lasker et al., 1992; McNeil et al., 1992). Table 12-5 lists examples of the large database measures that might be used in profiling.

TABLE 12-5

Examples of Large Database Measures Used in Profiling


Reproduced in part from Steinwachs, D. M., Weiner, J. P., & Shapiro, S. (1989). Management information systems and quality. In N. Goldfield & D. B. Nash (Eds.), Providing quality care: The challenge to clinicians (pp. 160–180). Philadelphia: American College of Physicians.

Prospective Cohort Studies

A prospective cohort study is an epidemiological study in which the researcher identifies a group of people who are at risk for experiencing a particular event. Sample sizes for these studies often must be very large, particularly if only a small portion of the at-risk group will experience the event. The entire group is followed over time to determine the point at which the event occurs, variables associated with the event, and outcomes for those who experienced the event compared with those who did not.

The Harvard Nurses’ Health Study, which is still being conducted, is an example of a prospective cohort study. This study recruited 100,000 nurses to determine the long-term consequences of the use of birth control pills. Every 2 years, the nurses complete a questionnaire about their health and health behaviors. The study has now been in progress for more than 20 years, and multiple studies reported in the literature have used the large data set it has yielded. Prospective cohort nursing studies could be conducted on a smaller scale in other populations, such as patients identified as being at high risk for the development of pressure ulcers.

Retrospective Cohort Studies

A retrospective cohort study is an epidemiological study in which the researcher identifies a group of people who have experienced a particular event. This is a common research technique used in the epidemiology field to study occupational exposure to chemicals. Events of interest to nursing that could be studied in this manner include a procedure, an episode of care, a nursing intervention, and a diagnosis. Nurses might use a retrospective cohort study to follow a cohort of women who had received a mastectomy for breast cancer or of patients in whom a urinary bladder catheter was placed during and after surgery. The cohort is evaluated after the event to determine the occurrence of changes in health status, usually the development of a particular disease or death. Nurses might be interested in the pattern of recovery after an event or, in the case of catheterization, the incidence of bladder infections in the months after surgery.

On the basis of the study findings, epidemiologists calculate relative risk of the identified change in health for the group. For example, if death were the occurrence of interest, the expected number of deaths would be determined. The observed number of deaths divided by the expected number of deaths and multiplied by 100 yields a standardized mortality ratio (SMR), which is regarded as a measure of the relative risk of the studied group to die of a particular condition. In nursing studies, patients might be followed over time after discharge from a health care facility (Swaen & Meijers, 1988).
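The SMR arithmetic described above is simple enough to show directly; the death counts in this sketch are invented for illustration:

```python
def standardized_mortality_ratio(observed_deaths, expected_deaths):
    """SMR = (observed / expected) * 100. Values above 100 indicate
    more deaths than expected for the studied cohort; values below
    100 indicate fewer."""
    return 100 * observed_deaths / expected_deaths

# Hypothetical cohort: 45 deaths observed where 30 were expected.
smr = standardized_mortality_ratio(observed_deaths=45, expected_deaths=30)
# smr == 150.0: the cohort experienced 1.5 times the expected deaths
```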

In retrospective studies, researchers commonly ask patients to recall information relevant to their previous health status. This information is often used to determine the amount of change occurring before and after an intervention. However, recall is easily distorted and can mislead researchers, so it should be used with caution. Herrmann (1995) identified three sources of distortion in recall: (1) the question posed to the subject may be conceived or expressed incorrectly, (2) the recall process may be in error, and (3) the research design used to measure recall can make the recall appear different from what actually occurred. Herrmann (1995, p. AS90) also identified four bases of recall:

Direct recall: The subject “accesses the memory without having to think or search memory,” resulting in correct information.

Indirect recall: The subject “accesses the memory after thinking or searching memory,” resulting in correct information.

Limited recall: “Access to the memory does not occur but information that suggests the contents of the memory is accessed,” resulting in an educated guess.

No recall: “Neither the memory nor information relevant to the memory may be accessed,” resulting in a wild guess.

The following abstract is from a retrospective cohort study by Perkins et al. (2007) of the impact of an anemia clinic on emergency room visits and hospitalizations in pre-dialysis patients with anemia of chronic kidney disease (CKD).

ABSTRACT

Aim: There is limited data regarding the impact on hospital resource use of a dedicated, nurse-managed anemia clinic in patients with pre-end stage chronic kidney disease.

Methods: A retrospective cohort study was conducted comparing patients with pre-end stage anemia of chronic kidney disease enrolled in an algorithmic anemia clinic (N = 27, treatment group) with un-enrolled patients with chronic kidney disease (N = 22, control group). The treatment group received algorithmic treatment with recombinant human erythropoietin and intravenous iron sucrose, while controls received usual care. The primary outcomes investigated were emergency room visits and hospitalizations during a 1-year period.

Results: The two groups were similar at baseline. During the first year of clinic enrollment, the mean hemoglobin values improved in the treatment group from baseline and compared with controls (11.6 +/- 1.2 g/dL vs. 10.3 +/- 1.0 g/dL, p < 0.05). The relative risk of an emergency room visit (RR 0.18, 95% CI 0.05–0.67, p < 0.05) and hospitalization (RR [relative risk] 0.20, 95% CI 0.06–0.67, p < 0.05) were reduced in the treatment group versus the control group. The average length of hospital stay was also reduced (6.8 days vs. 9.5 days, p = 0.05).

Conclusion: Enrollment in a dedicated nurse-managed anemia clinic is significantly associated with reduced emergency room visits and hospitalizations in patients with pre-end stage CKD. These associative findings justify future prospective analyses to establish causality. (Perkins et al., 2007, pp. 167–74)

Population-Based Studies

Population-based studies are also important in outcomes research. Conditions must be studied in the context of the community rather than of the medical system. With this method, all cases of a condition occurring in the defined population are included, rather than only patients treated at a particular health care facility, because the latter could introduce a selection bias. The researcher might make efforts to include individuals with the condition who had not received treatment.

Community-based norms of tests and survey instruments obtained in this manner provide a clearer picture of the range of values than the limited spectrum of patients seen in specialty clinics. Estimates of instrument sensitivity and specificity are more accurate. This method enables researchers to understand the natural history of a condition or of the long-term risks and benefits of a particular intervention (Guess et al., 1995).

Geller et al. (2007) conducted “A study of health outcomes in school children: Key challenges and lessons learned from the Framingham Schools’ Natural History of Nevi Study.” The following is an abstract of their study.

ABSTRACT

Background: We describe the planning, recruitment, key challenges, and lessons learned in the development of a study of the evolution of nevi (moles) among children in a school setting.

Methods: This population-based study of digital photography and dermoscopy of the child’s back (overview, close-up, and dermoscopic images) and genetic specimens took place among fifth graders in the Framingham, Massachusetts School System. Schoolchildren and their parents completed baseline surveys on sun protection practices, sunburns, and past ultraviolet exposures, including summer and vacation experiences.

Results: Prestudy outreach was conducted with children, parents, nurses, administrators, and pediatricians. Of the 691 Framingham families with a fifth grader (aged 10–11), 443 consented to complete surveys and undergo digital photography and dermoscopy during the school’s routine scoliosis testing. Of the 443 families providing consent, 369 agreed to genetic testing. We identified key factors to consider when implementing school-based studies: (a) pilot studies to demonstrate feasibility, (b) inclusion of school administration and parents, (c) grassroots approach with multiple contacts, and (d) embedding research studies within preexisting school health services.

Conclusions: Launching an observational study within the school environment required an academic/school collaboration across numerous disciplines including dermatology, epidemiology, genetics, medical photography, school health, community health education, and most notably, the need for the presence of a full-time study nurse in the school. A large school system proved to be an excellent resource to conduct this first prospective study on the evolution of moles in US schoolchildren. The key challenges and lessons learned may be applicable to other investigators launching school-based initiatives. (Geller et al., 2007, pp. 312-318)

Clinical Decision Analysis

Clinical decision analysis is a systematic method of describing clinical problems, identifying possible diagnostic and management courses of action, assessing the probability and value of various outcomes, and then calculating the optimal course of action. Decision analysis is based on the following four assumptions: (1) decisions can be quantified; (2) all possible courses of action can be identified and evaluated; (3) the different values of outcomes, viewed from the perspective of the nurse, patient, payer, and administrator, can be examined; and (4) the analysis allows selection of an optimal course of therapy.

To perform the analysis, the researchers must first define the boundaries of the clinical problem in terms of a logical sequence of events over time. All the possible courses of action are then determined. These courses of action are usually represented in a decision tree consisting of a starting point, available alternatives, probable events, and outcomes. Next, researchers define the goals and objectives for resolving the problem. They then calculate the probability that each path of the decision tree will occur. For each potential path, there is an outcome, and each outcome is assigned a value. These values may be expressed in terms of money, morbidity incidents, quality-of-life measures, or length of stay. Figure 12-5 presents a simplified decision tree for breech delivery in obstetrics. The optimal course of action is the decision that maximizes the chances of the most desirable outcomes (Crane, 1988; Keeler, 1994; Sonnenberg et al., 1994).


Figure 12-5 Simplified decision tree for breech delivery. (Numbers in parentheses refer to estimated probability of event.)
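Folding back a decision tree — multiplying each outcome’s value by the probability of the path that reaches it, and choosing the action with the highest expected value — can be sketched as follows. The probabilities and utility values here are hypothetical and are not taken from Figure 12-5:

```python
def expected_value(node):
    """Fold back a decision tree: a leaf carries an assigned outcome
    value; a chance node averages its children's values, weighted by
    branch probability."""
    if "value" in node:                       # leaf outcome
        return node["value"]
    return sum(p * expected_value(child) for p, child in node["branches"])

# Hypothetical alternatives for a breech presentation; outcome values
# are utilities on a 0-100 scale.
trial_of_labor = {"branches": [
    (0.6, {"value": 95}),                     # successful vaginal delivery
    (0.4, {"branches": [                      # labor fails -> emergency cesarean
        (0.9, {"value": 80}),                 # no complications
        (0.1, {"value": 40}),                 # complications
    ]}),
]}
planned_cesarean = {"branches": [
    (0.97, {"value": 85}),                    # uncomplicated cesarean
    (0.03, {"value": 50}),                    # complications
]}

best = max([("trial of labor", expected_value(trial_of_labor)),
            ("planned cesarean", expected_value(planned_cesarean))],
           key=lambda pair: pair[1])
```

With these invented numbers, trial of labor folds back to 87.4 versus 83.95 for planned cesarean; in a real analysis the probabilities would come from the literature and the values from the stakeholder perspectives described above.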

Studies analyzing clinical decisions have primarily used questionnaires and interviews. However, determining the clinical decisions of practitioners is not an easy task. Much of patient care involves the clinician and the patient alone. The underlying theories of care and the processes of care are hidden from view. Thus, it is difficult for clinicians to compare their approaches to care. Among physicians, care delivered by other physicians is rarely observed (O’Connor et al., 1993). Studies have found that physicians have difficulty recalling their decisions and providing rationales for them (Chaput de Saintonge & Hattersley, 1985; Kirwan, Chaput de Saintonge, Joyce, Holmes, & Currey, 1986). Simulation would be an effective strategy for analyzing clinical decisions.

Unsworth (2001) described the following strategies for studying clinical decision making:

1. The clinician conducts semistructured interviews.

2. The clinician listens to an audiotape of the clinician-client dialogue and uses this to recall her or his reasoning processes (audio-assisted recall).

3. The clinician uses video footage to prompt recall of reasoning processes (video-assisted recall).

4. The clinician writes notes as he or she solves a problem.

5. The clinician provides verbal commentary during interaction with the client (the think-aloud method).

6. The clinician presents his or her reasoning about a clinician-client session afterward from memory.

7. The clinician uses a head-mounted video camera with video-assisted recall.

Chaput de Saintonge, Kirwan, Evans, and Crane (1988) proposed a strategy for analyzing clinical decisions using “paper patients.” The technique seems to parallel the decisions clinicians make in the clinical setting. In their study, 10 common clinical variables used to evaluate the status of patients with rheumatoid arthritis were collected at two points for 30 patients participating in a clinical trial: at entry and 1 year later. Twenty of the patients were duplicated in the table to check the consistency of responses, making 50 cases in all. The variables were presented to rheumatologists on a single sheet of paper labeled “before” and “after a year.” Physicians were asked to indicate the extent of change in each patient’s condition using a visual analogue scale (VAS) with the ends labeled “greatest possible deterioration” and “greatest possible improvement.” They were also asked whether they considered the change clinically important. Then they were asked to indicate the relative importance of each variable, rating the variables on a scale of 1 to 100.

Regression analyses were performed in which the VAS values were used as the dependent variable. With increasing VAS values, judgments of clinical importance changed from “not important” to “important.” This change occurred over a 5 mm length of the scale or less. (VAS scales are traditionally 100 mm in length.) The researchers designated the midpoint of this transition zone as the “threshold value of clinical importance.” To test the consistency of responses, the researchers used the Spearman rank correlation to compare responses to the duplicate cases. They then developed a consensus model by weighting each physician’s responses on the basis of the Spearman coefficient: the VAS scores were multiplied by the Spearman coefficient and then used as the dependent variable in another regression analysis. This method is useful in identifying the variables important in making clinical decisions and the consistency with which practitioners make their decisions.
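The consistency check at the heart of this approach — a Spearman rank correlation between a rater’s responses to duplicated cases, then used as a weight on that rater’s VAS scores — can be sketched in plain Python. The VAS ratings below are invented for illustration:

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# One physician's VAS ratings for five duplicated "paper patients":
# the correlation between first and second ratings measures rater
# consistency, and weighting VAS scores by it down-weights
# inconsistent raters in the consensus model.
first_pass = [20, 55, 70, 35, 90]
second_pass = [25, 50, 75, 30, 85]
rho = spearman(first_pass, second_pass)
weighted_vas = [rho * v for v in first_pass]
```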

Standing (2007) conducted a study of clinical decision-making of nursing students as they progressed to practicing RNs. The following is an abstract of the study.

ABSTRACT

Aim: This paper is a report of a study to explore, from the perspective of nursing students, how they acquire clinical decision-making skills and how well-prepared they feel in this respect regarding their responsibilities as Registered Nurses.

Background: Previous research has focused mainly on exploring experienced nurses’ judgment and decision-making. Some studies have elicited senior nursing students’ understanding of the process, but none has explored the development of clinical decision-making skills throughout the educational program and in the first year as a Registered Nurse.

Method: A volunteer sample of 20 respondents, broadly representative of the student cohort regarding qualifications, age, gender and nursing specialty, was recruited. A longitudinal hermeneutic phenomenological study was carried out from 2000 to 2004, using interviews, reflective journals, care studies, critical incident analyses and document analysis.

Findings: Ten conceptions of nursing and 10 perceptions of clinical decision-making were identified and a growing pattern of inter-relationships between them became apparent. A ‘matrix model’ was developed by cross-referencing the two thematic categories within the timeline of respondents’ developmental journey through significant milestones and changing contexts. As Registered Nurses they found having to ‘think on your feet’ without the ‘comfort blanket’ of student status both a stressful and formative learning experience.

Conclusion: Further collaboration between education and health service partners is recommended to integrate clinical decision-making throughout the nursing curriculum, enhance the development of such vital skills, and facilitate the transition from student to Registered Nurse. (Standing, 2007, pp. 257–269)

Study of the Effectiveness of Interdisciplinary Teams

According to Schmitt, Farrell, and Heinemann (1988), interdisciplinary teams have the following characteristics:

(1) Multiple health disciplines are involved in the care of the same patients, (2) the disciplines encompass a diversity of dissimilar knowledge and skills required by the patients, (3) the plan of care reflects an integrated set of goals shared by the providers of care, and (4) the team members share information and coordinate their services through a systematic communication process. (p. 753)

Part of the communication process consists of regularly scheduled face-to-face meetings. The assumption is that collaborative team approaches provide more effective care than nonteam approaches or than noncollaborative multidisciplinary approaches (parallel care).

Interdisciplinary teams are becoming more common as health care changes. Examples are hospice care teams, home health teams, and psychiatric care teams. Studying the effectiveness of interdisciplinary teams is difficult, however. The characteristics that make team care more effective have not been identified. Researchers usually focus on evaluating a single team rather than conducting comparison studies. The outcomes of team care are also multidimensional, requiring the use of multiple dependent variables.

Evaluation studies examining team care often examine only posttreatment data without baseline data. If groups are compared, there is no evidence that the groups were similar in terms of important variables before the intervention. Involvement of family members with the team has not been examined. Clearly, this is an important focus of research requiring more rigorous designs than have previously been used.

Lujan, Ostwald, and Ortiz (2007) conducted a study of the effectiveness of an interdisciplinary team using a diabetes intervention. The following abstract describes the study.

ABSTRACT

Purpose: The purpose of this randomized controlled trial is to determine the effectiveness of an intervention led by promotoras (community lay workers) on the glycemic control, diabetes knowledge, and diabetes health beliefs of Mexican Americans with type 2 diabetes living in a major city on the Texas-Mexico border.

Methods: One hundred fifty Mexican American participants were recruited at a Catholic faith-based clinic and randomized into 2 groups. Personal characteristics, acculturation, baseline hemoglobin A1C level, diabetes knowledge, and diabetes health beliefs were measured. The intervention was culturally specific and consisted of participative group education, telephone contact, and follow-up using inspirational faith-based health behavior change postcards. The A1C levels, diabetes knowledge, and diabetes health beliefs were measured 3 and 6 months postbaseline, and the mean change between the groups was analyzed.

Results: The 80% female sample, with a mean age of 58 years, demonstrated low acculturation, income, education, health insurance coverage, and strong Catholicism. No significant changes were noted at the 3-month assessment, but the mean change of the A1C levels, F(1, 148) = 10.28, p < 0.001, and the diabetes knowledge scores, F(1, 148) = 9.0, p < 0.002, of the intervention group improved significantly at 6 months, adjusting for health insurance coverage. The health belief scores decreased in both groups.

Conclusions: The intervention resulted in decreased A1C levels and increased diabetes knowledge, suggesting that using promotoras as part of an interdisciplinary team can result in positive outcomes for Mexican Americans who have type 2 diabetes. Clinical implications and recommendations for future research are suggested. (Lujan et al., 2007, pp. 660–670)

Geographical Analyses

Geographical analyses examine variations in health status, health services, patterns of care, or patterns of use by geographical area and are sometimes referred to as small area analyses. Variations may be associated with sociodemographic, economic, medical, cultural, or behavioral characteristics. Locality-specific factors of a health care system, such as capacity, access, and convenience, may play a role in explaining variations. The social setting, environment, living conditions, and community may also be important factors.

The interactions between the characteristics of a locality and of its inhabitants are complex. The characteristics of the total community may transcend the characteristics of individuals within the community and may influence subgroup behavior. High educational levels in the community are commonly associated with greater access to information and receptiveness to ideas from outside the community.

Regression analyses are commonly used to develop models using all the risk factors and the characteristics of the community. Results are often displayed through the use of maps (Kieffer, Alexander, & Mor, 1992). After the analysis, the researcher must determine whether differences in rates are due to chance alone and whether high rates are too high. From a more theoretical perspective, the researcher must then explain the geographical variation uncovered by the analysis (Volinn, Diehr, Ciol, & Loeser, 1994).

Geographical information systems (GISs) can provide an important tool for performing geographical analyses. A GIS uses relational databases to facilitate the processing of spatial information. The software tools in a GIS can be used for mapping, data summaries, and analysis of spatial relationships. A GIS can also model data flows, allowing researchers to project the effect on outcomes of proposed changes in interventions applied to individuals or communities (Auffrey, 1998).

Metzel and Giordano (2007) conducted a geographical analysis of the locations of employment services and people with disabilities. The following abstract describes the study.

ABSTRACT

Vocational Rehabilitation (VR) services and One-Stop Career Centers (One-Stops) are the 2 principal public services intended to increase the employment rates of people with disabilities through employment and training services. As a first step in assessing accessibility of the locations of employment services, this study compared the location of VRs and One-Stops with areas of high numbers of nonemployment among people with disabilities and high numbers of unemployment in the general population. Using geographic information science and the spatial technique of the Local Indicators of Spatial Association (LISA), we analyzed the locations of the 2 programs and the concentrations of nonemployed people with disabilities at national and intrastate scales. We found that areas with high numbers of nonemployed people with disabilities are geographically underserved by both VRs and One-Stops, which raises questions about site selection and geographic accessibility. (Metzel & Giordano, 2007, pp. 88–97)

Economic Studies

Many of the problems studied in health services research address concerns related to the efficient use of scarce resources and, thus, to economics. Health economists are concerned with the costs and benefits of alternative treatments or ways of identifying the most efficient means of care. The economist’s definition of efficiency is the least expensive method of achieving a desired end while obtaining the maximum benefit from available resources. If available resources must be shared with other programs or other types of patients, an economic study can determine whether changing the distribution of resources will increase total benefit or welfare.

To determine the efficiency of a treatment, the economist conducts a cost-effectiveness analysis. This technique uses a single measure of outcomes, and all other factors are expressed in monetary terms as net cost per unit of output (Ludbrook, 1990). Cost-effectiveness analyses compare different ways of accomplishing a clinical goal, such as diagnosing a condition, treating an illness, or providing a service. The alternative approaches are compared in terms of costs and benefits. The purpose is to identify the strategy that provides the most value for the money. There are always trade-offs between costs and benefits (Oster, 1988). Stone (1998) described the methodology for performing a cost-effectiveness analysis.
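In its simplest form, a cost-effectiveness comparison reduces to an incremental ratio: the extra cost per extra unit of outcome when one strategy replaces another. The costs and effects in this sketch are hypothetical:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of outcome when the new strategy replaces the old one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical comparison: a nurse-led clinic vs. usual care, with
# effect measured as hospitalizations averted per 100 patients.
ratio = icer(cost_new=48_000, effect_new=22, cost_old=30_000, effect_old=10)
# 18,000 extra dollars buys 12 additional averted hospitalizations,
# i.e., $1,500 per hospitalization averted
```

Whether $1,500 per averted hospitalization is “worth it” is a value judgment outside the ratio itself, which is why cost-effective does not automatically mean cost-saving.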

It is time for nurses to take a more active role in conducting cost-effectiveness research. Nurses are well positioned to evaluate health care practices and have the incentive to conduct the studies, yet nursing practice is seldom the subject of cost-effectiveness analyses. The knowledge gained from this effort could enable nurses to refine their practice by substituting interventions that use nurses' time to the best advantage, in terms of the patient's health, for interventions that offer less gain (Siegel, 1998).

As Lieu and Newman (1998) have pointed out:

“Cost-effective” does not necessarily mean “cost-saving” (Doubilet, Weinstein, & McNeil, 1986). Many health interventions, even preventive ones, do not save money (Tengs et al., 1995). Rather, a service should be called cost-effective if its benefits are judged worth the costs. Recently, a consensus panel supported by the National Institutes of Health published recommendations that define standards for conducting cost-effectiveness analysis (Gold, Siegel, Russell, & Weinstein, 1996). Cost-effectiveness analysis is only one of several methods that can be used for the economic evaluation of health services (Drummond, Stoddart, & Torrance, 1987). Although these methods are useful, an intervention cannot be cost-effective without being effective. (Lieu & Newman, 1998, p. 1043)

To examine overall benefits, researchers perform a cost-benefit analysis. With this method, the costs and benefits of alternative ways of using resources are assessed in monetary terms, and the use that produces the greatest net benefit is chosen. The costs included in an economic study are defined in precise ways. The actual costs associated with an activity, not prices, must be used. Cost is not the same as price: costs measure the actual use of resources, whereas prices and charges, which in most cases exceed costs, are a poor reflection of them. Costs that might be included in a cost-benefit analysis are costs to the provider, costs to third-party payers (e.g., insurance), out-of-pocket costs, and opportunity costs.
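A minimal sketch of the cost-benefit logic, using invented figures for the cost categories just listed:

```python
# Hypothetical cost-benefit comparison. Each alternative's total cost sums
# the cost categories named in the text; all figures are invented.

def total_cost(provider, third_party, out_of_pocket, opportunity):
    return provider + third_party + out_of_pocket + opportunity

def net_benefit(benefit, total):
    return benefit - total

home_care = net_benefit(
    benefit=50_000,
    total=total_cost(provider=10_000, third_party=15_000,
                     out_of_pocket=6_000, opportunity=12_000))
hospital_care = net_benefit(
    benefit=50_000,
    total=total_cost(provider=18_000, third_party=24_000,
                     out_of_pocket=1_000, opportunity=2_000))

# Omitting out-of-pocket and opportunity costs would make home care look
# far cheaper; counting every category narrows the gap, as the text cautions.
print(home_care, hospital_care)
```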

Out-of-pocket costs are expenses incurred by the patient or family or both that are not reimbursable by the insurance company; they include costs of buying supplies, dressings, medications, and special food, transportation expenses, and unreimbursable care expenses.

Opportunity costs are the lost opportunities that the patient, family members, or others experience. For example, a family member might have been able to earn more money if he or she had not had to stay home to care for the patient. A child might have been able to advance her education if she had not had to drop out of school for a semester to care for a parent. A husband might have been able to take a better job if the family could have moved to another town rather than staying in place to enable a member to receive specific medical care.

Opportunity costs are often not included in overall costs. This omission results in an underestimation of costs and an overestimation of benefit. For example, one can demonstrate that caring for an acutely ill patient at home is cost-effective if one does not consider out-of-pocket costs and opportunity costs. However, the total costs of providing the care, regardless of who pays or who receives the money, must be included. In performing such a study, it is important to state whose costs are being considered and who is to weigh the benefits against the consequences.

Allred, Arford, Mauldin, and Goodwin (1998) critiqued the seven nursing studies published between 1992 and 1996 in which cost-effectiveness analyses were performed. They found these studies to be equivalent in quality to those from other disciplines. They concluded that more emphasis must be placed on cost-effectiveness analyses in nursing research, and they provided guidelines for conducting these studies.

Stone (1998) has described the recommended guidelines for journal reports of cost-effectiveness analyses. Cost-effectiveness studies should be used as aids in decision making rather than as the end decision. If a cost-effectiveness study is conducted to inform those who make resource allocation decisions, a standard reference case should be presented to allow the decision makers to compare a proposed new health intervention with existing practice.

Mason, Freemantle, Gibson, and New (2005) conducted an economic analysis of the SPLINT clinical trial. The following is an abstract of that study.

image ABSTRACT

Objective: To determine the cost-effectiveness of specialist nurse-led clinics provided to improve lipid and blood pressure control in diabetic patients receiving hospital-based care.

Research Design and Methods: A policy of targeting improved care through specialist nurse-led clinics is evaluated using a novel method, linking the cost-effectiveness of antihypertensive and lipid-lowering treatments with the cost and level of behavioral change achieved by the specialist nurse-led clinics. Treatment cost-effectiveness is modeled from the U.K. Prospective Diabetes Study and Heart Protection Study treatment trials, whereas specialist nurse-led clinics are evaluated using the Specialist Nurse-Led Clinics to Improve Control of Hypertension and Hyperlipidemia in Diabetes (SPLINT) trial.

Results: Good lipid and blood pressure control are cost-effective treatment goals for patients with diabetes. Modeling findings from treatment trials, blood pressure lowering is estimated to be cost saving and life prolonging (-1,400 dollars/quality-adjusted life-year [QALY]), whereas lipid-lowering is estimated to be highly cost-effective (8,230 dollars/QALY). Investing in nurse-led clinics to help achieve these benefits imposes an addition on treatment cost-effectiveness leading to higher estimates: 4,020 dollars/QALY and 19,950 dollars/QALY, respectively. For both clinics combined, the estimated cost-effectiveness is 9,070 dollars/QALY. Using an acceptability threshold of 50,000 dollars/QALY, the likelihood that blood pressure-lowering clinics are cost-effective is 77%, lipid clinics 99%, and combined clinics 83%.

Conclusions: A method is described for evaluating the cost-effectiveness of policies to change patient uptake of health care. Such policies are less attractive than treatment cost-effectiveness (which implies cost-less self-implementation). However, specialist nurse-led clinics, as an adjunct to hospital-based diabetic care, combining both lipid and blood pressure control, appear effective and likely to provide excellent value for money. (Mason et al., 2005, pp. 40–46)

Ethical Studies

Outcomes studies often lead to policies for allocating scarce resources. Ethicists take the position that moral principles, such as justice, constrain the use of costs and benefits to choose treatments that might maximize the benefit per unit cost. Value commitments are inherent in choices about research methods and about the selection and interpretation of outcome variables, and researchers should acknowledge these commitments. “The choices researchers make should be documented and the reasons for those choices should be given explicitly in publications and presentations so that readers and other users of the information are enabled and expected to bear more responsibility for interpreting and applying the findings appropriately” (Lynn & Virnig, 1995, p. AS292). Veatch (1993) proposed that by analyzing the implications of rationing decisions in terms of the principles of justice and autonomy, we would establish more acceptable criteria than we would by using outcomes predictors alone. As an example, Veatch performed an ethical analysis of the use of outcome predictors in decisions related to the early withdrawal of life support. Ethical studies should play an important role in outcomes programs of research.

Schwartz (2004) performed an ethical analysis of postmortem sperm retrieval. The following is an abstract of that analysis.

image ABSTRACT

Reproductive technologists are developing new and more powerful means to assist reproduction, including postmortem sperm retrieval. This rapid technological development could lead to ethical concerns for nurses and nurse practitioners. In this article, I will present an overview of relevant literature and discussion of postmortem sperm retrieval. Topics discussed include the postmortem sperm retrieval process, public awareness of this process, ethical theories and principles related to this process, and case law related to this process. With the availability of increasingly complex and advanced reproductive technologies, including postmortem sperm retrieval, nurses and nurse practitioners need knowledge about the legal and ethical principles and the ramifications involved in this issue. (Schwartz, 2004, pp. 183–188)

Measurement Methods

The selection of appropriate outcome variables is critical to the success of a study (Bernstein & Hilborne, 1993). As in any study, the researcher must evaluate the evidence of validity and the reliability of the measurement methods. Outcomes selected for nursing studies should be those most consistent with nursing practice and theory (Harris & Warren, 1995). In some studies, rather than selecting the final outcome of care, which may not occur for months or years, researchers use measures of intermediate end points. Intermediate end points are events or markers that act as precursors to the final outcome. It is important, however, to document the validity of the intermediate end point in predicting the outcome (Freedman & Schatzkin, 1992). In early outcomes studies, researchers selected outcome measures that they could easily obtain rather than those most desirable for outcomes studies.

Table 12-6 identifies characteristics important to evaluate in selecting methods of measuring outcomes. In evaluating a particular outcome measure, the researcher should consult the literature for previous studies that have used that particular method of measurement, including the publication describing development of the method of measurement. Information related to the measurement can be organized into a table such as Table 12-7, which allows others to easily compare several methods of measuring a particular outcome.

TABLE 12-6

Characteristics of Outcomes Assessment Instruments

image

From Harris, M. R., & Warren, J. J. (1995). Patient outcomes: Assessment issues for the CNS. Clinical Nurse Specialist, 9(2), 82.

TABLE 12-7

Characteristics of the Katz Activities of Daily Living (ADL) Scale: A Proposed Outcome Instrument

image

From Harris, M. R., & Warren, J. J. (1995). Patient outcomes: Assessment issues for the CNS. Clinical Nurse Specialist, 9(2), 85.

Outcomes researchers are moving away from classical measurement theory as a means of evaluating the reliability of measurement methods. They are interested in identifying change in measures over time within a subject, and instruments developed through the use of classical measurement theory are often not sensitive to such changes. It is also important to determine the magnitude of change that can be detected. In addition, measures may detect change within a particular range of values but may not be sensitive to changes outside that range. The sensitivity to change of many commonly used outcome measures has not been examined (Deyo & Carter, 1992; Felson, Anderson, & Meenan, 1990). Studies must be conducted specifically to determine the sensitivity of measures before they are used in outcomes studies. As the sensitivity of a measure increases, statistical power increases, allowing smaller samples to detect significant differences.

Creative methods of collecting data on instruments for large outcomes studies must be explored. In a busy office or clinic setting, the typical strategy of having clerks or other staff administer questionnaires or scales to patients is time intensive and costly and may result in lost data. Greist et al. (1997) recommended using the computer and the telephone to collect such data. Computers containing the instrument can be placed in locations convenient to patients, so the instrument can be completed with a minimum of staff involvement.

Another option is telephone interviews using the computer. The traditional telephone method of using interviewers to ask questions is costly. However, the same interactive voice response (IVR) technology used in voicemail can be used in telephone interviewing by computer. Interactive voice response allows the patient to respond to yes-no and multiple-choice questions by pressing numbers on the keypad or by saying “yes” or “no” or a number from 0 to 9. Patients can record answers in their own voices.

Measuring the frequency and nature of care activities of various staff has been problematic in studies where the goal is to evaluate the process of care. Strategies commonly used are chart review, time and motion studies, work sampling, and retrospective recall. None of these is a satisfactory indicator of the actual care (Hale, Thomas, Bond, & Todd, 1997). Holmes, Teresi, Lindeman, and Glandon (1997) recommended the use of barcode methodology to measure service inputs. The barcodes capture what care is provided, for whom, by whom, and at what time. Barcoded service sheets and a portable barcode reader are used with an accompanying database management system.

Analysis of Measurement Reliability

Estimating the reliability of outcome measures through the use of classical measurement theory may be problematic. The traditional concept of measurement reliability was developed to evaluate quantities that were not expected to change over time in an individual. This assessment of reliability is irrelevant, or only partially relevant, to assessing the suitability or precision of measures selected for their sensitivity to change within the individual over time. Traditional evaluations of measurement methods assume that any change in group values is a result of variation among individuals; patient change, however, occurs within one individual. Under a classical measurement theory analysis, a measure that did not vary among individuals would have zero (or poor) reliability. That same measure, however, may be an excellent measure of change over time if individuals change on it (even if group averages do not change much). Thus, it is inappropriate to assess the reliability of difference scores according to the internal consistency of measures (Collins & Johnston, 1995).
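The point can be sketched with invented scores: on the measure below, patients barely differ from one another at any single time, so a classical reliability analysis sees almost nothing to measure, yet every patient changes markedly and consistently over time.

```python
# Sketch of classical reliability vs sensitivity to change. Scores are
# invented: every patient improves by about 10 points, but at any one time
# the patients barely differ from one another.
from statistics import mean, pstdev

pre  = [50, 51, 49, 50, 50]
post = [60, 61, 59, 60, 60]

change = [b - a for a, b in zip(pre, post)]

# Little variation AMONG individuals at either time point ...
print(pstdev(pre), pstdev(post))      # both small

# ... yet the WITHIN-individual change is large and perfectly consistent.
print(mean(change), pstdev(change))   # mean 10, spread 0.0
```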

In some outcomes studies, measures obtained from individuals are used as indicators of characteristics of a group. The data from the measures are aggregated to reflect the group. In this case, the researcher must assess the extent to which the responses represent the group. Although the group mean is usually expected to serve this purpose, it may not adequately represent the group. Verran, Mark, and Lamb (1992) described techniques that can be employed to examine the psychometric properties of instruments used to describe group-level phenomena. Items of the instrument should be assessed for content validity to determine how well they measure group-level concepts. Reliability and validity must be assessed at the aggregated level rather than the individual level.

Commonly, multiple outcomes measures are used in outcomes studies. Researchers wish to evaluate all relevant effects of care. However, quantity of measures is not necessarily evidence of the quality of the measures. The measures most relevant to the treatment should be selected. Measures selected should not be closely correlated. Interpreting the results of studies in which multiple outcomes have been used can be problematic. For example, Felson et al. (1990, p. 141) asked, “Which is the better therapy, the one that shows a change in 6 outcome measures out of 12 tested or the one that shows a change in 4 of the 12 measures? What if the 4 that demonstrate change with one therapy are not the same as the 6 that show a change in another therapy?” If multiple comparisons are made, it is important to make statistical adjustments for them; the risk of a type I error is greater when multiple comparisons are made.
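As a sketch of one common adjustment for multiple comparisons (Bonferroni, chosen here as an example rather than prescribed by the text), alpha is divided by the number of comparisons; the p-values are invented:

```python
# Minimal sketch of a multiple-comparison adjustment. With 12 outcome
# measures tested at alpha = .05, the Bonferroni correction divides alpha
# by the number of comparisons; p-values are invented for illustration.
alpha = 0.05
n_comparisons = 12
adjusted_alpha = alpha / n_comparisons          # about 0.0042 per test

p_values = [0.001, 0.020, 0.004, 0.300]
significant = [p for p in p_values if p < adjusted_alpha]
print(adjusted_alpha, significant)              # only 0.001 and 0.004 survive
```

A comparison that looks significant at the unadjusted .05 level (here, p = .020) no longer does once the risk of a type I error across all 12 tests is taken into account.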

Some researchers recommend combining various measures into a single summary score (DesHarnais, McMahon, & Wroblewski, 1991; Felson et al., 1990). However, such global composite measures have not been widely used. The various measures used in such an index may not be equally weighted and may be difficult to combine. Also, clinicians may not readily interpret the composite index value.

The focus of most measures developed for outcomes studies has been the individual patient. However, a number of organizations are now developing measures of the quality of performance of systems of care. In 1990, the Consortium Research on Indicators of System Performance (CRISP) project began to develop indicators of the quality of performance of integrated delivery systems. From the perspective of CRISP, the success of a health system is associated with its ability to decrease the number of episodes of diseases in the population. Therefore, the impact of the delivery system on the community is considered an important measure of performance. CRISP has developed a number of indicators now in use by consortium members, who pay to participate in the studies (Bergman, 1994).

The Joint Commission is also applying outcomes data to quality management efforts in hospitals using the Indicator Measurement System (IMSystem) (McCormick, 1990; Nadzim, Turpin, Hanold, & White, 1993). The National Committee for Quality Assurance, the organization that accredits managed care plans, has developed a tool (HEDIS) for comparing managed care plans. Comparisons involve more than 60 measures, including patient satisfaction, quality of care, and financial stability (Guadagnoli & McNeil, 1994). Researchers at the Henry Ford Health Systems’ Center for Health System Studies in Detroit have developed 80 performance indicators to evaluate health systems (Anderson, 1991).

Statistical Methods for Outcomes Studies

Although outcomes researchers test for statistical significance of their findings, this is not considered sufficient to judge the findings as important. Their focus is the clinical significance of study findings (see Chapter 18 for more information on clinical significance). In analyzing data, outcomes researchers have moved away from statistical analyses that use the mean to test for group differences. They place greater importance on analyzing change scores and use exploratory methods for examining the data to identify outliers.

Analysis of Change

With the focus on outcomes studies has come a renewed interest in methods of analyzing change. Gottman and Rushe (1993) reported that the first book addressing change in research, Problems in Measuring Change edited by Harris (1967), is the basis for most current approaches to analyzing change. Since then, a number of new ideas have emerged regarding the analysis of change (e.g., in studies by Collins & Horn, 1991; Rovine & Von Eye, 1991; Von Eye, 1990a, 1990b). However, many researchers are unfamiliar with these new ideas and continue to base their reasoning on Harris’s 1967 book. Gottman and Rushe (1993) suggested that many beliefs related to the analysis of change are based on little more than the following fallacies:

Fallacy 1: In change, regression toward the mean is an unavoidable law of nature.

Fallacy 2: The difference score between premeasurement and postmeasurement is unreliable.

Fallacy 3: Analysis of covariance (ANCOVA, or related methods such as path analysis) is the way to analyze change.

Fallacy 4: Two points (pretest and posttest) are adequate for the study of change.

Fallacy 5: The correlation between change and initial level is always negative.

Outcomes researchers are also questioning the method of analyzing change. Collins and Johnston (1995) have suggested that the recommended analysis method of regressing outcome scores on pretest scores and basing the analysis of change on the residual change scores is overly conservative and tends to understate the extent of real change. There are also serious questions about the conceptual meaning of these residual change scores.
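For readers unfamiliar with residual change scores, the computation being questioned can be sketched with invented data: outcome (posttest) scores are regressed on pretest scores, and each patient's residual from that regression is treated as his or her change.

```python
# Residual change scores, sketched with invented data: fit the simple
# regression of posttest on pretest, then take each patient's residual as
# the portion of change not predicted by where he or she started.
from statistics import mean

pre  = [40, 45, 50, 55, 60]
post = [48, 50, 57, 60, 70]

mx, my = mean(pre), mean(post)
slope = (sum((x - mx) * (y - my) for x, y in zip(pre, post))
         / sum((x - mx) ** 2 for x in pre))
intercept = my - slope * mx

residual_change = [y - (intercept + slope * x) for x, y in zip(pre, post)]
raw_change      = [y - x for x, y in zip(pre, post)]

print([round(r, 2) for r in residual_change])   # sums to zero by construction
print(raw_change)                               # every patient improved
```

Note that every patient shows a positive raw change, yet the residual change scores sum to zero, which illustrates why critics argue this approach can understate real change.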

For some outcomes, the changes may be nonlinear or may go up and down rather than always increasing. Thus, it is as important to uncover patterns of change as it is to test for statistically significant differences at various time points. Some changes may occur in relation to stages of recovery or improvement. These changes may occur over weeks, months, or even years. A more complete picture of the process of recovery can be obtained by examining the process in greater detail and over a broader range. With this approach, the examiner can develop a recovery curve, which provides a model of the recovery process and can then be tested (Collins & Johnston, 1995; Ottenbacher, Johnson, & Hojem, 1995).

Analysis of Improvement

In addition to reporting the mean improvement score for all patients treated, it is important to report what percentage of patients improve. Do all patients improve slightly, or is there a divergence among patients, with some improving greatly and others not improving at all? This divergence may best be illustrated by plotting the data. Researchers studying a particular treatment or approach to care might develop a standard or index of varying degrees of improvement that might occur. The index would allow better comparisons of the effectiveness of various treatments. Characteristics of patients who experience varying degrees of improvement should be described, and outliers should be carefully examined. This step requires that the study design include baseline measures of patient status, such as demographic characteristics, functional status, and disease severity measures. An analysis of improvement will allow better judgments of the appropriate use of various treatments (Felson et al., 1990).
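The distinction can be sketched with two invented sets of change scores that share the same mean improvement but very different percentages of patients improving:

```python
# Two invented treatment groups with the SAME mean improvement but very
# different patterns, illustrating why the percentage of patients who
# improve should be reported alongside the mean.
from statistics import mean

uniform   = [5, 5, 5, 5, 5, 5]        # everyone improves slightly
divergent = [15, 15, 0, 0, 0, 0]      # a few improve greatly, most not at all

for name, changes in [("uniform", uniform), ("divergent", divergent)]:
    pct_improved = 100 * sum(c > 0 for c in changes) / len(changes)
    print(name, mean(changes), f"{pct_improved:.0f}% improved")
```

Both groups average 5 points of improvement, but plotting or tabulating the individual scores reveals that one treatment helped everyone a little while the other helped only a third of patients.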

Variance Analysis

Variance analysis is used to track individual and group variance from a specific critical pathway. The goal is to decrease preventable variance in process, thus helping patients and their families achieve optimal outcomes. Some of the variance is due to comorbidities. You may find that keeping a patient with comorbidities on the desired pathway may require you to utilize more resources early in the patient’s care. Thus, it is important to track both variance and comorbidities. Studies examining variations from pathways may make it easier for health care providers to tailor existing critical pathways for specific comorbidities.

Variance analysis can also be used to identify at-risk patients who might benefit from the services of a case manager. Variance analysis tracking is expressed through the use of graphics, and the expected pathway is plotted on the graph. The care providers plot deviations (negative variance) on the graph, allowing immediate comparison with the expected pathway. Deviations may be related to the patient, the system, or the provider (Tidwell, 1993).

Longitudinal Guttman Simplex Model

The longitudinal Guttman simplex (LGS) model is an extension of the Guttman scale that involves times, as well as items and persons. For example, an LGS model of mobility might involve the following items:

M1: moving unassisted from bed to chair

M2: moving unassisted from bed to another room

M3: moving unassisted up stairs

Table 12-8 shows hypothetical data collected with this measure on three patients at three periods, showing a pattern of improving ability over time (Collins & Johnston, 1995).

TABLE 12-8

Sample Data Using Longitudinal Guttman Scale

image

M1, Moving unassisted from bed to chair; M2, moving unassisted from bed to another room; M3, moving unassisted up stairs.

From Collins, L. M., & Johnston, M. V. (1995). Analysis of stage-sequential change in rehabilitation research. American Journal of Physical Medicine and Rehabilitation, 74(2), 167.
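The defining property of a Guttman scale, on which the LGS model rests, can be sketched in a few lines; the response patterns and the helper function are invented for illustration (1 = can perform the item, 0 = cannot):

```python
# Guttman scale property: items are ordered by difficulty (M1 easiest,
# M3 hardest), so a valid pattern never passes a harder item while
# failing an easier one. Patterns and helper are invented.

def is_guttman_pattern(responses):
    """True if, once an item is failed, all harder items are failed too."""
    return sorted(responses, reverse=True) == list(responses)

print(is_guttman_pattern([1, 1, 0]))  # passes M1 and M2 only: valid
print(is_guttman_pattern([1, 0, 1]))  # passes M3 but fails M2: invalid
```

The LGS model extends this ordering across time: as a patient recovers, the pattern moves stepwise from [1, 0, 0] toward [1, 1, 1], and it should never regress out of Guttman order.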

Latent Transition Analysis

Latent transition analysis (LTA) is used in situations in which stages or categories of recovery have been defined and transitions between stages can be identified. To use this analysis method, the researchers assign each member of the population to a single category or stage at a given point in time. However, stage membership changes over time. The analysis estimates stage membership at each time point, along with the probabilities of transition between stages, to provide a realistic picture of development. Collins and Johnston (1995) described an example of this type of analysis with a hypothetical model of recovery from functional neglect after stroke.

Let’s assume that we can define a study subpopulation displaying four latent stages or types of functional neglect: sensory limitations (S), cognitive limitations (C), both (S and C), or patients may recover and adapt to the point that they are functional (F). … Membership in each category is inferred from several clinical symptoms or test items, which supposedly go together but in fact may not for some patients. The items have some error and are imperfect indicators of true (latent) stage membership. Our objective is to estimate in which category a patient probably falls at any point in time and the probability of movement between stages over time, conditional on previous stage membership. … Suppose we use a large number of times periodically to monitor progress, testing the same group of patients at multiple points in time. We record which items the patient passes and which the patient does not. (p. 47)

Using a computer program designed to perform latent transition analysis, the researchers obtained the results shown in Table 12-9. Only two points in time are shown here, although the program can handle up to five.

TABLE 12-9

A Hypothetical Latent Transition Model of Recovery from Neglect Following Stroke

image

*No patients were functional at time 1.

From Collins, L. M., & Johnston, M. V. (1995). Analysis of stage-sequential change in rehabilitation research. American Journal of Physical Medicine and Rehabilitation, 74(2), 168.

The first line of the table contains the estimate of the proportion of patients in each of the four stages at Time 1. In this example, 30% of the sample had both S and C limitations, 30% had S limitations, 40% had C limitations, and none was functional. At Time 2, the proportion in each functional limitation category appears to have declined, except for sensory limitations, which is unchanged, and 27% are now in the functional stage. The bottom half of the table is a matrix of transition probabilities that reveals patterns of change. Of patients who started with S, 30% improved; however, the overall percentage at S remained the same because 30% of the patients who started at S and C moved to the S category. Of patients who initially had C problems alone, 46% moved to the functional category.

A third set of quantities estimated by the full latent transition analysis model, but not shown in the table, is the relationships between items and stage memberships. This relationship indicates the probability that when a subject moves from one category to another, each item will also change to reflect the new stage membership. Thus, this relationship determines the effectiveness of the test items or clinical symptoms as indicators of stage membership.
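The arithmetic behind these percentages can be sketched with one transition matrix consistent with the figures quoted above (the full Table 12-9 is not reproduced here, so the matrix itself is an assumption): the Time 2 distribution is the Time 1 distribution multiplied through the matrix of transition probabilities.

```python
# One transition matrix consistent with the figures quoted from the
# hypothetical example; the matrix entries are assumed, not taken from
# Table 12-9. Stages: S&C (both), S (sensory), C (cognitive), F (functional).
stages = ["S&C", "S", "C", "F"]
time1  = [0.30, 0.30, 0.40, 0.00]

# transition[i][j] = P(stage j at Time 2 | stage i at Time 1)
transition = [
    [0.70, 0.30, 0.00, 0.00],   # 30% of S&C move to S
    [0.00, 0.70, 0.00, 0.30],   # 30% of S improve to F
    [0.00, 0.00, 0.54, 0.46],   # 46% of C move to F
    [0.00, 0.00, 0.00, 1.00],   # F is retained
]

time2 = [sum(time1[i] * transition[i][j] for i in range(4)) for j in range(4)]
for stage, p in zip(stages, time2):
    print(stage, round(p, 3))
# S stays at 0.30 (losses to F are offset by gains from S&C);
# about 27% of patients are functional at Time 2, matching the text.
```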

Multilevel Analysis

Multilevel analysis is used in epidemiology to study how environmental factors (aggregate-level characteristics) and individual attributes and behaviors (individual-level characteristics) interact to influence individual-level health behaviors and disease risks. For example, the risk that an adolescent will start smoking is associated with the following variables: (1) attributes of the child (e.g., self-esteem, academic achievement, refusal skills), (2) attributes of the child’s family (e.g., parental attitudes toward smoking, smoking behavior of parents), (3) general characteristics of the community (e.g., ease of minors’ access to cigarettes, school policies regarding smoking, city smoking ordinances, social norms of students toward smoking), and (4) general social factors (e.g., geographical region, economic policies that influence the price of cigarettes). The researchers might ask, “Does smoking status covary with the level of restriction of smoking in public places after we have controlled for the individual-level variables that influence smoking risks?” (Von Korff, Koepsell, Curry, & Diehr, 1992).

DISSEMINATING OUTCOMES RESEARCH FINDINGS

Including plans for the dissemination of findings as a component of a program of research is a new idea within nursing, if one considers the process of dissemination to be more than publishing the results in professional journals. As we discuss in Unit Three of this text, strategies for the dissemination of research findings tend to be performed by groups other than the original researchers. The transfer of knowledge from nurse researchers to nurse clinicians has been, for the most part, ineffective.

Nursing, as a discipline, has not yet addressed the various constituencies for nursing research knowledge. A research team conducting a program of outcomes research must identify its constituencies. These should include (1) the clinicians, who will apply the knowledge to practice; (2) the public, who may make health care decisions on the basis of the information; (3) health care institutions, which must evaluate care in their facilities on the basis of the information; (4) health policy makers, who may set standards on the basis of the information; and (5) researchers, who may use the information in designing new studies. Disseminating information to these various constituencies through presentations at meetings and publications in a wide diversity of journals and magazines, as well as releasing the information to the news media, requires careful planning. Mattson and Donovan (1994) suggested that dissemination involves strategies for debunking myths, addressing issues related to feasibility, communicating effectively, and identifying opinion leaders.

SUMMARY

• Outcomes research examines the end results of patient care.

• The scientific approaches used in outcomes studies differ in some important ways from those used in traditional research.

• Donabedian (1987) developed the theory on which outcomes research is based.

• Quality is the overriding construct of the theory, although Donabedian never defined this concept.

• The three major concepts of the theory are health, subjects of care, and providers of care.

• Donabedian identified three objects of evaluation in appraising quality: structure, process, and outcome.

• The goal of outcomes research is to evaluate outcomes as defined by Donabedian, whose theory requires that identified outcomes be clearly linked with the process that caused the outcome.

• Clinical guideline panels are developed to incorporate available evidence on health outcomes.

• Outcome studies provide rich opportunities to build a stronger scientific underpinning for nursing practice.

• A nursing-sensitive patient outcome is “sensitive” because it is influenced by nursing.

• Organizations currently involved in efforts to study nursing-sensitive outcomes include the American Nurses Association, the National Quality Forum (NQF), the California Nursing Outcomes Coalition, the Veterans Affairs Nursing Outcomes Database, the Center for Medicare and Medicaid Services’ (CMS) Hospital Quality Initiative, the American Hospital Association, the Federation of American Hospitals, the Joint Commission on Accreditation of Healthcare Organizations, and the Agency for Healthcare Research and Quality.

• Another interest of outcomes research is what happens during the process of care provided by advanced practice nurses.

• Practice-based research networks (PBRNs) are groups of practices, focused on patient care, that affiliate in order to analyze the clinical practices in their communities.

• Outcome designs tend to have less control than traditional research designs and, except for the clinical trial, seldom use random samples; rather, they use large representative samples.

• Statistical approaches used in outcomes studies include new approaches to examining measurement reliability, strategies to analyze change, and the analysis of improvement.

REFERENCES

Ahrens, T., Yancey, V., Kollef, M. Improving family communications at the end of life: Implications for length of stay in the intensive care unit and resource use. American Journal of Critical Care. 2003;12(4):317–323.

Allred, C.A., Arford, P.H., Mauldin, P.D., Goodwin, L.K. Cost-effectiveness analysis in the nursing literature, 1992–1996. Image: Journal of Nursing Scholarship. 1998;30(3):235–242.

Altimier, L.B., Eichel, M., Warner, B., Tedeschi, L., Brown, B. Developmental care: Changing the NICU physically and behaviorally to promote patient outcomes and contain costs. Neonatal Intensive Care. 2004;17(2):35–39.

Anderko, L., Lundeen, S., Bartz, C. The Midwest Nursing Centers Consortium Research Network: Translating research into practice. Policy, Politics & Nursing Practice. 2006;7(2):101–109.

Anderson, H.J. Sizing up systems: Researchers to test performance measures. Hospitals. 1991;65(20):33–34.

Auffrey, C. Geographic information systems as a tool for community health research and practice. Nursing Research Methods. 1998. Retrieved March 24, 2003, from www.nursing.uc.edu/nrm/AUFFREY51598.htm.

Bergman, R. Are my outcomes better than yours? Hospitals & Health Networks. 1994;68(15):113–116.

Bergmark, A., Oscarsson, L. Does anybody really know what they are doing? Some comments related to methodology of treatment service research. British Journal of Addiction. 1991;86(2):139–142.

Bernstein, S.J., Hilborne, L.H. Clinical indicators: The road to quality care? Joint Commission Journal on Quality Improvement. 1993;19(11):501–509.

Bircumshaw, D., Chapman, C.M. A study to compare the practice style of graduate and non-graduate nurses and midwives: The pilot study. Journal of Advanced Nursing. 1988;13(5):605–614.

Bombardier, C., Tugwell, P. Methodological considerations in functional assessment. Journal of Rheumatology. 1987;14(Suppl. 15):7–10.

Bowles, K.H., Baugh, A.C. Applying research evidence to optimize telehomecare. Journal of Cardiovascular Nursing. 2007;22(1):5–15.

Bradley, P., Lindsay, B. Specialist epilepsy nurses for treating epilepsy. Cochrane Database of Systematic Reviews. 2007;(4):CD001907.

Brooten, D., Naylor, M.D., York, R., Brown, L.P., Munro, B.H., Hollingsworth, A.O., et al. Lessons learned from testing the quality cost model of advanced practice nursing (APN) transitional care. Journal of Nursing Scholarship. 2002;34(4):369–375.

Byers, V.L., Mays, M.Z., Mark, D.D. Provider satisfaction in Army primary care clinics. Military Medicine. 1999;164(2):132–135.

Castle, N.G., Engberg, J. The influence of staffing characteristics on quality of care in nursing homes. Health Services Research. 2007;42(5):1822–1847.

Challis, D., Clarkson, P., Williamson, J., Hughes, J., Venables, D., Burns, A., et al. The value of specialist clinical assessment of older people prior to entry to care homes. Age & Ageing. 2004;33(1):25–34.

Chaput de Saintonge, D.M., Hattersley, L.A. Antibiotics for otitis media: Can we help doctors agree? Family Practice. 1985;2(4):205–212.

Chaput de Saintonge, D.M., Kirwan, J.R., Evans, S.J., Crane, G.J. How can we design trials to detect clinically important changes in disease severity? British Journal of Clinical Pharmacology. 1988;26(4):355–362.

Chisholm, D., Knapp, M., Astin, J., Audini, B., Lelliott, B. The mental health residential care study: The “hidden costs” of provision. Health & Social Care in the Community. 1997;5(3):162–172.

Collins, L.M., Horn, J.L. Best methods for the analysis of change: Recent advances, unanswered questions, future directions. Washington, DC: American Psychological Association, 1991.

Collins, L.M., Johnston, M.V. Analysis of stage-sequential change in rehabilitation research. American Journal of Physical Medicine and Rehabilitation. 1995;74(2):163–170.

Colón-Emeric, C.S., Ammarell, N., Bailey, D., Corazzini, K., Lekan-Rutledge, D., Piven, M.L., et al. Patterns of medical and nursing staff communication in nursing homes: Implications and insights from complexity science. Qualitative Health Research. 2006;16(2):173–188.

Crane, V.S. Economic aspects of clinical decision making: Applications of clinical decision analysis. American Journal of Hospital Pharmacy. 1988;45(3):548–553.

Cunningham, R.S. Advanced practice nursing outcomes: A review of selected empirical literature. Oncology Nursing Forum. 2004;31(2):219–230.

Davis, K. Use of data registries to evaluate medical procedures: Coronary artery surgery study and the balloon valvuloplasty registry. International Journal of Technology Assessment in Health Care. 1990;6(2):203–210.

DesHarnais, S., McMahon, L.F., Jr., Wroblewski, R. Measuring outcomes of hospital care using multiple risk-adjusted indexes. Health Services Research. 1991;26(4):425–445.

Deyo, R.A. Measuring functional outcomes in therapeutic trials for chronic disease. Controlled Clinical Trials. 1984;5(3):223–240.

Deyo, R.A., Carter, W.B. Strategies for improving and expanding the application of health status measures in clinical settings. Medical Care. 1992;30(5 Suppl):MS176–MS186.

Deyo, R.A., Centor, R.M. Assessing the responsiveness of functional scales to clinical change: An analogy to diagnostic test performance. Journal of Chronic Disease. 1986;39(11):897–906.

Deyo, R.A., Taylor, V.M., Diehr, P., Conrad, D., Cherkin, D.C., Ciol, M., et al. Analysis of automated administrative and survey databases to study patterns and outcomes of care. Spine. 1994;19(18 Suppl):2083S–2091S.

Donabedian, A. Benefits in medical care programs. Cambridge, MA: Harvard University Press, 1976.

Donabedian, A. Needed research in quality assessment and monitoring. Hyattsville, MD: U.S. Department of Health, Education, and Welfare, Public Health Service, National Center for Health Services Research, 1978.

Donabedian, A. Explorations in quality assessment and monitoring. Ann Arbor, MI: Health Administration Press, 1980.

Donabedian, A. The criteria and standards of quality. Ann Arbor, MI: Health Administration Press, 1982.

Donabedian, A. Some basic issues in evaluating the quality of health care. In: Rinke, L.T., ed. Outcome measures in home care, Vol. I. New York: National League for Nursing; 1987:3–28. (Original work published 1976.)

Doubilet, P., Weinstein, M.C., McNeil, B.H. Use and misuse of the term “cost effective” in medicine. New England Journal of Medicine. 1986;314(4):253–255.

Drummond, M.F., Stoddart, G.L., Torrance, G.W. Methods for the economic evaluation of health care programmes. New York: Oxford University Press, 1987.

Evans, R.G., Stoddart, G.L. Producing health, consuming health care. Social Science and Medicine. 1990;31(12):1347–1363.

Feinstein, A.R., Josephy, B.R., Wells, C.K. Scientific and clinical problems in indexes of functional disability. Annals of Internal Medicine. 1986;105(3):413–420.

Felson, D.T., Anderson, J.J., Meenan, R.F. Time for changes in the design, analysis, and reporting of rheumatoid arthritis clinical trials. Arthritis and Rheumatism. 1990;33(1):140–149.

Fowler, F.J., Jr., Cleary, P.D., Magaziner, J., Patrick, D.L., Benjamin, K.L. Methodological issues in measuring patient-reported outcomes: The agenda of the work group on outcomes assessment. Medical Care. 1994;32(7 Suppl):JS65–JS76.

Freedman, L.S., Schatzkin, A. Sample size for studying intermediate endpoints within intervention trials or observational studies. American Journal of Epidemiology. 1992;136(9):1148–1159.

Freund, D.A., Dittus, R.S., Fitzgerald, J., Heck, D. Assessing and improving outcomes: Total knee replacement. Health Services Research. 1990;25(5):723–726.

Fries, B.E. Comparing case-mix systems for nursing home payment. Health Care Financing Review. 1990;11(4):103–119.

Fullerton, J.T., Hollenbach, K.A., Wingard, D.L. Research exchange. Practice styles: A comparison of obstetricians and nurse-midwives. Journal of Nurse-Midwifery. 1996;41(3):243–250.

Geller, A.C., Oliveria, S.A., Bishop, M., Buckminster, M., Brooks, K.R., Halpern, A.C. Study of health outcomes in school children: Key challenges and lessons learned from the Framingham Schools’ Natural History of Nevi Study. Journal of School Health. 2007;77(6):312–318.

Given, B., Sherwood, P.R. Nursing-sensitive patient outcomes: A white paper. Oncology Nursing Forum. 2005;32(4):773–784.

Gold M.R., Siegel J.E., Russell L.B., Weinstein M.C., eds. Cost-effectiveness in health and medicine. New York: Oxford University Press, 1996.

Goldberg, H.I., Cummings, M.A., Steinberg, E.P., Ricci, E.M., Shannon, T., Soumerai, S.B., et al. Deliberations on the dissemination of PORT products: Translating research findings into improved patient outcomes. Medical Care. 1994;32(7 Suppl):JS90–JS110.

Goldman, L.E., Vittinghoff, E., Dudley, R.A. Quality of care in hospitals with a high percent of Medicaid patients. Medical Care. 2007;45(6):579–583.

Gottman, J.M., Rushe, R.H. The analysis of change: Issues, fallacies, and new ideas. Journal of Consulting and Clinical Psychology. 1993;61(6):907–910.

Greene, R., Bondy, P.K., Maklan, C.W. The national medical effectiveness research initiative. Diabetes Care. 1994;17(Suppl. 1):45–49.

Greist, J.H., Jefferson, J.W., Wenzel, K.W., Kobak, K.A., Bailey, T.M., Katzelnick, D.J., et al. The telephone assessment program: Efficient patient monitoring and clinician feedback. M.D. Computing. 1997;14(5):382–387.

Griffiths, P.D., Edwards, M.H., Forbes, A., Harris, R.L., Ritchie, G. Effectiveness of intermediate care in nursing-led in-patient units. Cochrane Database of Systematic Reviews. 2007;(2):CD002214.

Guadagnoli, E., McNeil, B.J. Outcomes research: Hope for the future or the latest rage? Inquiry. 1994;31(1):14–24.

Guess, H.A., Jacobsen, S.J., Girman, C.J., Oesterling, J.E., Chute, C.G., Panser, L.A., et al. The role of community-based longitudinal studies in evaluating treatment effects. Example: Benign prostatic hyperplasia. Medical Care. 1995;33(4 Suppl):AS26–AS35.

Guyatt, G., Walter, S., Norman, G. Measuring change over time: Assessing the usefulness of evaluative instruments. Journal of Chronic Disease. 1987;40(2):171–178.

Haggerty, M.C., Stockdale-Woolley, R., Nair, S. Respi-Care: An innovative home care program for the patient with chronic obstructive pulmonary disease. Chest. 1991;100(3):607–612.

Hale, C.A., Thomas, L.H., Bond, S., Todd, C. The nursing record as a research tool to identify nursing interventions. Journal of Clinical Nursing. 1997;6(3):207–214.

Harper, R.W. Care and cost effectiveness of the clinical care coordinator/patient care associate nursing case management model. Unpublished doctoral dissertation, University of California, San Francisco, 1992.

Harris C.W., ed. Problems in measuring change. Madison: University of Wisconsin Press, 1967.

Harris, M.R., Warren, J.J. Patient outcomes: Assessment issues for the CNS. Clinical Nurse Specialist. 1995;9(2):82–86.

Harris, R., Richardson, G., Griffiths, P., Hallett, N., Wilson-Barnett, J. Economic evaluation of a nursing-led inpatient unit: The impact of findings on management decisions of service utility and sustainability. Journal of Nursing Management. 2005;13(5):428–438.

Herrmann, D. Reporting current, past, and changed health status: What we know about distortion. Medical Care. 1995;33(4 Suppl):AS89–AS94.

Holmes, D., Teresi, J., Lindeman, D.A., Glandon, G.L. Measurement of personal care inputs in chronic care settings. Journal of Mental Health and Aging. 1997;3(1):119–127.

Howell-White, S. Choosing a birth attendant: The influence of a woman’s childbirth definition. Social Science & Medicine. 1997;45(6):925–936.

Hueston, W.J., Lewis-Stevenson, S. Provider distribution and variations in statewide cesarean section rates. Journal of Community Health. 2001;26(1):1–10.

Hughes, L.C., Robinson, L.A., Cooley, M.E., Nuamah, I., Grobe, S.J., McCorkle, R. Describing an episode of home nursing care for elderly postsurgical cancer patients. Nursing Research. 2002;51(2):110–118.

Ingersoll, G.L., McIntosh, E., Williams, M. Nurse-sensitive outcomes of advanced practice. Journal of Advanced Nursing. 2000;32(5):1272–1282.

Jaeschke, R., Singer, J., Guyatt, G.H. Measurement of health status: Ascertaining the minimal clinically important difference. Controlled Clinical Trials. 1989;10(4):407–415.

Jennings, B.M., Loan, L.A., DePaul, D., Brosch, L.R., Hildreth, P. Lessons learned while collecting ANA indicator data. Journal of Nursing Administration. 2001;31(3):121–129.

Kane, R.A., Bayer, A.J. Assessment of functional status. In: Pathy M.S.J., ed. Principles and practices of geriatric medicine. 2nd ed. New York: Wiley; 1991:265–277.

Katz, S., Downs, T.D., Cash, H.R., Grotz, R.C. Progress in the development of the Index of ADL. Gerontologist. 1970;10(1):20–30.

Kay, C.M. Targeting cost containment efforts in Massachusetts nursing homes. Unpublished doctoral dissertation, The Florence Heller Graduate School for Advanced Studies in Social Welfare, Brandeis University, 1999.

Keeler, E.B. Decision analysis and cost-effectiveness analysis in women’s health care. Clinical Obstetrics and Gynecology. 1994;37(1):207–215.

Kieffer, E., Alexander, G.R., Mor, J. Area-level predictors of use of prenatal care in diverse populations. Public Health Reports. 1992;107(6):653–658.

Kirshner, B., Guyatt, G. A methodological framework for assessing health indices. Journal of Chronic Diseases. 1985;38(1):27–36.

Kirwan, J.R., Chaput de Saintonge, D.M., Joyce, C.R.B., Holmes, J., Currey, H.L.F. Inability of rheumatologists to describe their true policies for assessing rheumatoid arthritis. Annals of Rheumatic Diseases. 1986;45(2):156–161.

Kleinpell, R., Gawlinski, A. Assessing outcomes in advanced practice nursing practice: The use of quality indicators and evidence-based practice. AACN Clinical Issues. 2005;16(1):43–57.

Kramer, M., Maguire, P., Schmalenberg, C. Excellence through evidence: The what, when, and where of clinical autonomy. Journal of Nursing Administration. 2006;36(10):479–491.

Lake, E.T. Multilevel models in health outcomes research Part I: Theory, design, and measurement. Applied Nursing Research. 2006;19(1):51–53.

Lange, L.L., Jacox, A. Using large data bases in nursing and health policy research. Journal of Professional Nursing. 1993;9(4):204–211.

Lasker, R.D., Shapiro, D.W., Tucker, A.M. Realizing the potential of practice pattern profiling. Inquiry. 1992;29(3):287–297.

Lee, T.H., Goldman, L. Development and analysis of observational data bases. Journal of the American College of Cardiology. 1989;14(3, Suppl. A):44A–47A.

Leeper, B. Nursing outcomes: Percutaneous coronary interventions. Journal of Cardiovascular Nursing. 2004;19(5):346–353.

Leidy, N.K. Survey measures of functional ability and disability of pulmonary patients. In: Metzger B.L., ed. Synthesis conference on altered functioning: Impairment and disability. Indianapolis: Nursing Center Press of Sigma Theta Tau International; 1991:52–79.

Lewis, B.E. HMO outcomes research: Lessons from the field. Journal of Ambulatory Care Management. 1995;18(1):47–55.

Lieu, T.A., Newman, T.B. Issues in studying the effectiveness of health services for children: Improving the quality of healthcare for children: An agenda for research. Health Services Research. 1998;33(4):1041–1058.

Lohr, K.N. Outcome measurement: Concepts and questions. Inquiry. 1988;25(1):37–50.

Ludbrook, A. Using economic appraisal in health services research. Health Bulletin. 1990;48(2):81–90.

Lujan, J., Ostwald, S.K., Ortiz, M. Promotora diabetes intervention for Mexican Americans. The Diabetes Educator. 2007;33(4):660–670.

Lynn, J., Virnig, B.A. Assessing the significance of treatment effects: Comments from the perspective of ethics. Medical Care. 1995;33(4 Suppl):AS292–AS298.

Mark, B.A. The black box of patient outcomes research. Image: Journal of Nursing Scholarship. 1995;27(1):42.

Mark, D.D., Byers, V.L., Mays, M.Z. Primary care outcomes and provider practice styles. Military Medicine. 2001;166(10):875–880.

Mason, J.M., Freemantle, N., Gibson, J.M., New, J.P. Specialist nurse-led clinics to improve control of hypertension and hyperlipidemia in diabetes: Economic analysis of the SPLINT trial. Diabetes Care. 2005;28(1):40–46.

Mattson, M.E., Donovan, D.M. Clinical applications: The transition from research to practice. Journal of Studies on Alcohol. 1994;12(Suppl):163–166.

McCloskey, B., Grey, M., Deshefy-Longhi, T., Grey, L. APRN practice patterns in primary care. Nurse Practitioner: American Journal of Primary Health Care. 2003;28(4):39–44.

McCormick, B. Outcomes in action: The JCAHO’s clinical indicators. Hospitals. 1990;64(19):34–38.

McDonald, C.J., Hui, S.L. The analysis of humongous databases: Problems and promises. Statistics in Medicine. 1991;10(4):511–518.

McIsaac, C. Managing wound care outcomes. Ostomy Wound Management. 2005;51(4):54–56, 58, 60 passim.

McNeil, B.J., Pedersen, S.H., Gatsonis, C. Current issues in profiling quality of care. Inquiry. 1992;29(3):298–307.

Metzel, D.S., Giordano, A. Locations of employment services and people with disabilities: A geographical analysis of accessibility. Journal of Disability Policy Studies. 2007;18(2):88–97.

Mitchell, J.B., Bubolz, T., Paul, J.E., Pashos, C.L., Escarce, J.J., Muhlbaier, L.H., et al. Using Medicare claims for outcomes research. Medical Care. 1994;32(7 Suppl):JS38–JS51.

Mitchell, P.H., Ferketich, S., Jennings, B.M., American Academy of Nursing Expert Panel on Quality Health Care. Quality health outcomes model. Image: Journal of Nursing Scholarship. 1998;30(1):43–46.

Mor, V. Defining and measuring quality outcomes in long term care. Journal of the American Medical Directors Association. 2006;7(8):532–538; discussion 532–540.

Morgan, D.G., Stewart, N.J., D’Arcy, K.C., Werezak, L.J. Evaluating rural nursing home environments: Dementia special care units versus integrated facilities. Aging & Mental Health. 2004;8(3):256–265.

Moses, L.E. Measuring effects without randomized trials? Options, problems, challenges. Medical Care. 1995;33(4 Suppl):AS8–AS14.

Nadzam, D.M., Turpin, R., Hanold, L.S., White, R.E. Data-driven performance improvement in health care: The Joint Commission’s Indicator Measurement System (IMSystem). Joint Commission Journal on Quality Improvement. 1993;19(11):492–500.

Naylor, M.D. Advancing the science in the measurement of health care quality influenced by nurses. Medical Care Research & Review. 2007;64(2 Suppl):144S–169S.

Nelson, E.C., Landgraf, J.M., Hays, R.D., Wasson, J.H., Kirk, J.W. The functional status of patients: How can it be measured in physicians’ offices? Medical Care. 1990;28(12):1111–1126.

O’Connor, G.T., Plume, S.K., Wennberg, J.E. Regional organization for outcomes research. Annals of the New York Academy of Sciences. 1993;703:44–51.

Orchard, C. Comparing healthcare outcomes. British Medical Journal. 1994;308(6942):1493–1496.

Oster, G. Economic aspects of clinical decision making: Applications in patient care. American Journal of Hospital Pharmacy. 1988;45(3):543–547.

Ottenbacher, K.J., Johnson, M.B., Hojem, M. The significance of clinical change and clinical change of significance: Issues and methods. American Journal of Occupational Therapy. 1995;42(3):156–163.

Patrick, D.L., Deyo, R.A. Generic and disease-specific measures in assessing health status and quality of life. Medical Care. 1989;27(3 Suppl):S217–S232.

Perkins, R., Olson, S., Hanson, J., Lee, J., Stiles, K., Lebrun, C. Impact of an anemia clinic on emergency room visits and hospitalizations in patients with anemia of CKD pre-dialysis. Nephrology Nursing Journal. 2007;34(2):167–174, 182.

Phibbs, C.S., Holty, J.C., Goldstein, M.K., Garber, A.M., Wang, Y., Feussner, J.R., et al. The effect of geriatrics evaluation and management on nursing home use and health care costs: Results from a randomized trial. Medical Care. 2006;44(1):91–95.

Powe, N.R., Turner, J.A., Maklan, C.W., Ersek, M. Alternative methods for formal literature review and meta-analysis in AHCPR patient outcomes research teams. Medical Care. 1994;32(7 Suppl):JS22–JS37.

Power, E.J., Tunis, S.R., Wagner, J.L. Technology assessment and public health. Annual Review of Public Health. 1994;15:561–579.

Riehle, A.I., Hanold, L.S., Sprenger, S.L., Loeb, J.M. Specifying and standardizing performance measures for use at a national level: Implications for nursing-sensitive care performance measures. Medical Care Research & Review. 2007;64(2 Suppl):64S–81S.

Rovine, M.J., Von Eye, A. Applied computational statistics in longitudinal research. San Diego, CA: Academic Press, 1991.

Rowell, P. Lessons learned while collecting ANA indicator data: The American Nurses Association responds. Journal of Nursing Administration. 2001;31(3):130–131.

Rubin, F.H., Williams, J.T., Lescisin, D.A., Mook, W.J., Hassan, S., Inouye, S.K. Replicating the Hospital Elder Life Program in a community hospital and demonstrating effectiveness using quality improvement methodology. Journal of the American Geriatrics Society. 2006;54(6):969–974.

Schmitt, M.H., Farrell, M.P., Heinemann, G.D. Conceptual and methodological problems in studying the effects of interdisciplinary geriatric teams. Gerontologist. 1988;28(6):753–764.

Schnelle, J.F., Simmons, S.F., Harrington, C., Cadogan, M., Garcia, E., Bates-Jensen, B.M. Relationship of nursing home staffing to quality of care. Health Services Research. 2004;39(2):225–250.

Schwartz, D.A. Postmortem sperm retrieval: An ethical analysis. Clinical Excellence for Nurse Practitioners. 2004;8(4):183–188.

Siegel, J.E. Cost-effectiveness analysis and nursing research: Is there a fit? Image: Journal of Nursing Scholarship. 1998;30(3):221–222.

Slade, M., Kuipers, E., Priebe, S. Mental health services research methodology. International Review of Psychiatry. 2002;14(1):12–18.

Sledge, C.B. Why do outcomes research? Orthopedics. 1993;16(10):1093–1096.

Sonnenberg, F.A., Roberts, M.S., Tsevat, J., Wong, J.B., Barry, M., Kent, D.L. Toward a peer review process for medical decision analysis models. Medical Care. 1994;32(7 Suppl):JS52–JS64.

Sox, H.C. Independent primary care practice by nurse practitioners. JAMA. 2000;283(1):106–108.

Spector, W.D. Functional disability scales. In: Spilker B., ed. Quality of life assessments in clinical trials. New York: Raven Press; 1990:115–129.

Spitzer, W.O. State of science 1986: Quality of life and functional status as target variables for research. Journal of Chronic Disease. 1987;40(6):465–471.

Standing, M. Clinical decision-making skills on the developmental journey from student to registered nurse: A longitudinal inquiry. Journal of Advanced Nursing. 2007;60(3):257–269.

Stewart, A.L., Greenfield, S., Hays, R.D., Wells, K., Rogers, W.H., Berry, S.D., et al. Functional status and well-being of patients with chronic conditions. JAMA. 1989;262(7):907–913.

Stewart, B.J., Archbold, P.G. Nursing intervention studies require outcome measures that are sensitive to change: Part 2. Research in Nursing & Health. 1992;16(1):77–81.

Stone, P.W. Methods for conducting and reporting cost-effectiveness analysis in nursing. Image: Journal of Nursing Scholarship. 1998;30(3):229–234.

Swaen, G.M.H., Meijers, J.M.M. Influence of design characteristics on the outcomes of retrospective cohort studies. British Journal of Industrial Medicine. 1988;45(9):624–629.

Tanenbaum, S.J. Knowing and acting in medical practice: The epistemological politics of outcomes research. Journal of Health Politics, Policy and Law. 1994;19(1):27–44.

Tarlov, A.R., Ware, J.E., Jr., Greenfield, S., Nelson, E.C., Perrin, E., Zubkoff, M. The medical outcomes study: An application of methods for monitoring the results of medical care. JAMA. 1989;262(7):925–930.

Temple, R. Problems in the use of large data sets to assess effectiveness. International Journal of Technology Assessment in Health Care. 1990;6(2):211–219.

Tengs, T.O., Adams, M.E., Pliskin, J.S., Safran, D.G., Siegel, J.E., Weinstein, M.C., et al. Five hundred life-saving interventions and their cost-effectiveness. Risk Analysis. 1995;15(3):369–390.

Tidwell, S.L. A graphic tool for tracking variance and comorbidities in cardiac surgery case management. Progress in Cardiovascular Nursing. 1993;8(2):6–19.

Tierney, W.M., Oppenheimer, C.C., Hudson, B.L., Benz, J., Finn, A., et al. A national survey of primary care practice-based research networks. Annals of Family Medicine. 2007;5(3):242–250.

Tork, H.K., Dassen, T., Lohrmann, C. Care dependency of children in Egypt. Journal of Clinical Nursing. 2008;17(3):287–295.

Turk, D.C., Rudy, T.E. Methods for evaluating treatment outcomes: Ways to overcome potential obstacles. Spine. 1994;19(15):1759–1763.

Unsworth, C.A. Using a head-mounted video camera to study clinical reasoning. American Journal of Occupational Therapy. 2001;55(5):582–588.

U.S. Congress, Office of Technology Assessment. Identifying health technologies that work: Searching for evidence. Washington, DC: U.S. Government Printing Office, 1994. (Publication No. OTA-H-608)

Varricchio, C. Human and indirect costs of home care… for cancer patients. Nursing Outlook. 1994;42(4):151–157.

Veatch, R. Justice and outcomes research: The ethical limits. Journal of Clinical Ethics. 1993;4(3):258–261.

Verran, J.A., Mark, B.A., Lamb, G. Psychometric examination of instruments using aggregated data. Research in Nursing & Health. 1992;15(3):237–240.

Volinn, E., Diehr, P., Ciol, M.A., Loeser, J.D. Why does geographic variation in health care practices matter (and seven questions to ask in evaluating studies on geographic variation)? Spine. 1994;19(18S):2092S–2100S.

Von Eye A., ed. Statistical methods in longitudinal research: Vol. 1. Principles and structuring change. Boston: Academic Press, 1990.

Von Eye A., ed. Statistical methods in longitudinal research: Vol. 2. Time series and categorical longitudinal data. Boston: Academic Press, 1990.

Von Korff, M., Koepsell, T., Curry, S., Diehr, P. Multi-level analysis in epidemiologic research on health behaviors and outcomes. American Journal of Epidemiology. 1992;135(10):1077–1082.

Ward, D., Brown, M.A. Labor and cost in AIDS family caregiving. Western Journal of Nursing Research. 1994;16(1):1–25.

Ward, D., Severs, M., Dean, T., Brooks, N. Care home versus hospital and own home environments for rehabilitation of older people. Cochrane Database of Systematic Reviews. (2):2007. CD003164

Wennberg, J.E., Barry, M.J., Fowler, F.J., Mulley, A. Outcomes Research, PORTs, and health care reform. Annals of the New York Academy of Sciences. 1993;703:52–62.

Werley, H., Devine, E., Zorn, C., Ryan, P., Westra, B. The nursing minimum data set: Abstraction tool for standardized, comparable, essential data. American Journal of Public Health. 1991;81(4):421–426.

Werley, H., Lang, N. Identification of the nursing minimum data set. New York: Springer, 1988.

Westert, G.P., Groenewegen, P.P. Medical practice variations: Changing the theoretical approach. Scandinavian Journal of Public Health. 1999;27(3):173–180.

White, K.L. Health care research: Old wine in new bottles. Pharos of Alpha Omega Alpha Honor Medical Society. 1993;56(3):12–16.

Wilson, I.B., Landon, B.E., Hirschhorn, L.R., McInnes, K., Ding, L., Marsden, P.V., et al. Quality of HIV care provided by nurse practitioners, physician assistants, and physicians. Annals of Internal Medicine. 2005;143(10):729–736.

Wood, L.W. Medical treatment effectiveness research. Journal of Occupational Medicine. 1990;32(12):1173–1174.

Wray, N.P., Ashton, C.M., Kuykendall, D.H., Petersen, N.J., Souchek, J., Hollingsworth, J.C. Selecting disease-outcome pairs for monitoring the quality of hospital care. Medical Care. 1995;33(1):75–89.

Zielstorff, R., Hudgings, C., Grobe, S., National Commission on Nursing Implementation Project Task Force on Nursing Information Systems. Next generation nursing information systems: Essential characteristics for professional practice. Washington, DC: American Nurses Association, 1993.