CHAPTER 24

Reciprocal Role of Research and Practice

Chapter outline

Evidence-based practice

Practice-based research

Translation research

Evaluation practice

Summary

Key terms

Evaluation practice

Evidence-based practice

Outcome assessment

Practice-based research

Process assessment

Reflexive intervention

Translation research

Treatment fidelity

You now have some level of comfort with the thinking and action processes involved in the experimental-type and naturalistic research traditions. These processes serve as your working tools to apply to a health or human service practice arena. In this chapter, we turn to the action of “application,” or applying research principles to the practice arena and vice versa. Although collaboration is part and parcel of research in both experimental-type and naturalistic inquiry, we see the intersection of research and practice as the most fertile ground for cooperation, and we suggest that the critical relationship between research and practice can be maximized only through the collaborative efforts of professionals, clients, patients, and researchers. We discuss four approaches that attempt to bridge the research-practice gap: evidence-based practice, translational research, practice-based research, and evaluation practice.

Evidence-based practice

If you are a practicing health or human service professional, you most likely have heard of evidence-based practice. Most professional organizations espouse it as a critical way of basing treatment and clinical practices on research evidence. If you are not involved in a practice arena, you may be asking, “What's all the fuss about?” After all, you probably assume that all health and human service practices are based on or derived from evidence or knowledge that has been systematically obtained, rather than from hearsay, trial and error, or casual decision making. However, this has not exactly been the case. At issue are the definition of evidence and what constitutes “adequate evidence” for informing practice decisions.

Evidence-based practice is not a research method or design. Rather, it is a model of professional practice that relies heavily on particular methodologies for drawing conclusions from the research literature. Major aspects of evidence-based practice include which research methods and conclusions should underpin practice, how to organize a clinical setting to engage in this form of practice, how to teach this approach, and the barriers to its implementation. However, our focus here is on how evidence is defined in this approach and how evidence-based practice uses research principles to draw evidentiary conclusions to inform practice. We begin our discussion with a brief history, then define and provide critical comments on this contemporary practice approach.

Definitions and Models

Since early in the 20th century, policy makers, scholars, and practitioners have been debating the nature and role of research in professional practice.1 Numerous terms have been used in these discussions to describe professional activity that in some way uses or generates knowledge based on the principles of scientific inquiry. In part, the disagreements about what constitutes science (and by extension, scientific inquiry) and how or even if science should form the foundation of professional practice have contributed to the conflict about scientifically driven practice.2 Remember that we define science as follows:

1. It is theory based or theory generating.

2. It is developed according to the rigor criteria of systematic use of inductive, abductive, and deductive logic structures in all phases of thinking and action processes.

3. It involves detailing the explicit evidence and reasoning on which knowledge claims are based.

Keep these definitional elements in mind as you learn more about evidence-based practice, because they form both its strengths and its limitations. Many terms are used to discuss models of scientific inquiry in professional practice. In general, all models posit the value of scientifically derived knowledge to support professional decision making and to examine the extent to which desired outcomes have been achieved.

Public health and education were the first professional fields to emphasize the importance of systematic evaluation for practice accountability.3,4 In the early 1950s, the proliferation of federally supported programs resulted in the expansion of evaluation into a field of its own, with a focus on fiscal accountability.5 Concurrently, debates emerged between those who espoused “empiricism” and those who opposed it in favor of a “value-based” practice context.6 As evaluation was taken up by health and human service fields, debate increased about whether to conduct empirical inquiry to support practice. Discussion shifted away from polar arguments to a more expansive and complex analysis of the nature of evidence, when evidence is appropriate, and methods to generate it.7

The following definitions of evidence-based practice have been used in medicine,8,9 nursing,10 and occupational therapy11:

• The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.8

• The conscientious and judicious use of current best evidence from clinical care research in the management of individual patients.9

• [The] process by which nurses make clinical decisions using the best available research evidence, their clinical expertise and patient preferences, in the context of available resources.10

• [A]n ethical, conscientious, discriminative process of applying the best research-based evidence to decisions regarding client care.11

As you can see, each definition emphasizes the use of research evidence in the context of clinical expertise, although they all differ slightly. According to Mullen,12 two primary models have been reflected in these definitions. In one model, intervention is selected from an array of efficacious, empirically supported practices. In the second model, practitioner and client collaboratively consider best evidence to make decisions together.

In the 21st century, increasing debate about the nature of knowledge itself raises many questions about the meaning of evidence. Thus, current models of evidence-based practice rest on a broadened understanding of evidence as ranging along a continuum from the anecdotal experience of providers and consumers, organizational guidelines, consensus groups, or single-study reviews to highly structured systematic reviews of the literature (meta-analysis). Most prominent, however, is the traditional approach, which remains grounded in the assumption of a “hierarchy of evidence” in which judgment about the value of knowledge is based on the methods of inquiry. The randomized controlled trial (RCT) is considered the highest level of evidence to support intervention efficacy, and other forms of knowledge are less valued. Thus, the true-experimental design to support claims about the efficacy of interventions remains the methodological pinnacle of desirability, whereas other design approaches (e.g., quasi-experimental, naturalistic inquiry) tend to receive less acclaim or are not systematically considered in the evaluation of evidence.8

In general, all evidence-based practice models can be defined by their use of “best evidence” to guide practice decisions, thereby creating standard, valid interventions to the extent possible for specific conditions, diagnoses, and problem areas that are well researched.

Approaches to Identifying Evidence

As stated earlier, evidence-based practice is not a methodology, but it does designate, and thus rely on, preferred methodologies for systematically reviewing evidence and linking that evidence to actual practice decisions. There are four methodological steps in applying evidence to clinical practice in this framework: reviewing the literature, rating the evidence, developing clinical guidelines, and applying or translating guidelines to a clinical case.

The key research action process in this form of practice is systematically reviewing published scientific literature, including clinical practice guidelines, meta-analyses (both qualitative and experimental-type), and Web-based searches for relevant research-generated information. That is, key to the success of basing a clinical decision on the evidence is how investigators review the literature (see Chapter 5), including how they bound the topic or query through the selection of key terms and how they tailor the search. As with any research inquiry, it is critical to know the limitations of the databases that are searched and how the bounding rules you construct will shape the evidence obtained. Given the complexity and time-consuming nature of conducting a comprehensive and adequate literature review, new search engines have been created, such as PubMed, which gives free access to the MEDLINE database (www.ncbi.nlm.nih.gov/PubMed), and SumSearch (http://sumsearch.uthscsa.edu), which searches databases containing sources for evidence-based guidelines. We have found Google Scholar (www.scholar.google.com) valuable in locating sources as well.
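
To make the idea of bounding a search concrete, the sketch below shows one way a query might be sent to PubMed programmatically through the public NCBI E-utilities interface. It is a minimal illustration only: the search terms, date range, and result limit are hypothetical and would be tailored to your own clinical question and bounding rules.

```python
# A minimal sketch of bounding a literature search, assuming the public
# NCBI E-utilities endpoint for PubMed; the query itself is hypothetical.
import requests

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query: str, max_results: int = 20) -> list[str]:
    """Return PubMed IDs (PMIDs) matching the bounded query."""
    params = {
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": max_results,
    }
    response = requests.get(EUTILS_ESEARCH, params=params, timeout=30)
    response.raise_for_status()
    result = response.json()["esearchresult"]
    print(f"{result['count']} records matched; returning {len(result['idlist'])}")
    return result["idlist"]

if __name__ == "__main__":
    # The bounding rules (population, intervention, design filter, date range)
    # shape what evidence the search can return.
    pmids = search_pubmed(
        '(dementia[MeSH]) AND (music therapy) AND (randomized controlled trial[pt]) '
        'AND ("2015"[dp] : "2020"[dp])'
    )
    print(pmids)
```

Changing even one bounding rule, such as the publication-type filter, can alter the body of evidence retrieved and therefore the clinical guidelines derived from it.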

After relevant research articles have been identified and retrieved, the next methodological step is applying a systematic rating to each type of research study and form of evidence retrieved (Box 24-1). Then, on the basis of the ratings that are derived, a series of clinical guidelines can be articulated (Box 24-2). The final step involves translating the evidence and guidelines to a particular clinical group or case. In this step, the researcher must ask numerous critical questions (Box 24-3). It is particularly important to determine whether differences exist between study populations and the particular clinical population or case and whether these may diminish the treatment response or change the risk-to-benefit ratio. This translational step draws on clinical knowledge and judgment.

BOX 24-1   Examples of Rating Systems in Evidence-Based Practice

“ABCD” System

A = Evidence from well-defined meta-analysis

B = Evidence from well-designed controlled trials (randomized and nonrandomized) with results that consistently support a specific action

C = Evidence from observational studies (correlational, descriptive) or controlled trials with inconsistent results

D = Evidence from expert opinion or multiple case reports

“1-2-3” System

1 = Generally consistent findings in a majority of studies

2 = Based on either a single acceptable study or a weak or inconsistent finding in multiple acceptable studies

3 = Limited scientific evidence that does not meet all criteria of acceptable studies

“I-II-III-IV” System

I = Evidence from at least one properly randomized controlled trial

II-A = Evidence from well-designed controlled trials without randomization

II-B = Evidence from well-designed cohort or case-control analytical studies

III = Evidence obtained from comparisons between times or places with or without the intervention

IV = Opinions of respected authorities based on clinical experience, descriptive studies, or reports of expert committees
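
As one illustration of the rating step, the sketch below encodes the “ABCD” system from Box 24-1 as a simple grading function. The Study fields and the example entries are hypothetical; an actual review would record far more detail (sampling, outcomes, risk of bias) before assigning a grade.

```python
# A minimal sketch of recording retrieved studies and assigning grades under
# the "ABCD" scheme in Box 24-1; the entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Study:
    citation: str
    design: str        # "meta-analysis", "controlled trial", "observational", or "expert opinion"
    consistent: bool   # do results consistently support a specific action?

def abcd_grade(study: Study) -> str:
    """Assign an evidence grade following the ABCD rating system."""
    if study.design == "meta-analysis":
        return "A"
    if study.design == "controlled trial":
        return "B" if study.consistent else "C"
    if study.design == "observational":
        return "C"
    return "D"  # expert opinion, multiple case reports

studies = [
    Study("Smith 2018 (hypothetical)", "controlled trial", consistent=True),
    Study("Lee 2016 (hypothetical)", "observational", consistent=False),
]
for s in studies:
    print(f"{s.citation}: grade {abcd_grade(s)}")
```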

BOX 24-2   Sample of Clinical Guidelines Based on Rating the Evidence

• Best to individualize music selection in accordance with patient preferences (evidence grade = B)

• Best to intervene 30 minutes before a person's peak level of agitation (evidence grade = B)

• Music intervention session should last approximately 30 minutes (evidence grade = B)

• Use of headphones may be confusing (evidence grade = D)

Modified from National Guideline Clearinghouse (www.guideline.gov), 2002.

BOX 24-3   Applying Evidence to Practice

• What were the results?

• How large are the treatment effects?

• Are the results valid?

• Will the results help me in caring for my cases/clients/community?

• Can the results be applied?

• Were all outcomes of clinical significance considered?

• Are the treatment benefits worth the potential harms and costs?

• Are there specific characteristics of the group/persons with whom I am working that differ from the study populations so as to affect treatment outcome?

• Are there contextual differences that may change the outcomes of a treatment or intervention (e.g., geography, nationality, physical or virtual world)?

Limitations of Evidence-Based Practice

From our discussion thus far and what you have learned in this book, what do you see as some of the important limitations of an evidence-based practice approach?

First, as an empirically based approach borrowed principally from medicine, its application to health and human service practices can be problematic for several reasons. Health and human service professionals engage in practice to address a wide range of human problems for which knowledge through systematic inquiry is necessary. However, the knowledge required to understand human problems is not necessarily amenable to RCTs. Other forms of knowledge may be of equal importance in understanding, for example, dynamic processes between therapist and client and how best to involve a clinical population in a disease prevention activity. Health and human service practice extends far beyond the focus on medical intervention or single treatments and the “magic bullet” medicine approach amenable to randomized design strategies.

A second significant limitation of this model of practice is its assumption that the RCT is the primary valid design for generating knowledge that is useful in clinical practice. Only one type of research evidence (that produced by the RCT) is upheld as valuable in evidence-based practice and is thus viewed as the cornerstone of practice.13,14 This approach diminishes other forms of knowledge, such as that derived from the naturalistic tradition, participatory inquiry, or other experimental-type design strategies that can contribute different types of knowledge.

A related but critical point is that evidence-based practice involves the application of nomothetically derived knowledge to idiographic concerns.5,15 In this case, the issue for the evidence-based practitioner becomes how best to translate group data to the individual case. Given the growing emphasis on diversity and multicultural competence, knowledge of central tendencies (nomothetically generated research) does not necessarily capture the full range of diverse and unique experiences and needs and may not be appropriate to inform individual circumstances. The assertion that “design rigor and structure determine knowledge quality” is a myth that limits the critical assessment of knowledge for use in professional practice and leads to mechanistic thinking and action. Consider this simple example. Stephen, who has a fused left hip, injured his rotator cuff. To improve his range of motion, the physical therapist, following a well-supported evidence-based practice, sent him home with a large exercise ball and a regimen of prone exercises. Unfortunately, the physical therapist neglected to assess how this evidence-based strategy could be used by an individual who was not able to move his hip. Stephen, in attempting to be compliant, harmed himself when he fell off the ball.

The nature of evidence continues as an important area of debate. Because practice theories in so many health and human service professions16–18 stress use of self and the relationship between client and practitioner, students and practitioners often identify the incompatibility of logical, systematic thinking with the relational foundations of practice. Recent acceptance of faith and spirituality in professional theory and practice as well as the recognition that diversity is a critical consideration in any clinical relationship5 are even more incongruent with positivist approaches to understanding human experience and human need as currently used in evidence-based practice.19 Thus, although we believe that different forms of evidence are credible for different professional and other stakeholder interest groups,5 evidence-based practice approaches discount this view and are often antithetical to the professional commitment to respect for diversity as it applies to acceptable evidence for practice process and outcome.

We see major limitations of current models of evidence-based practice for supporting professional practice, but we do not want to “throw out the baby with the bath water.” As we have said throughout this book, we encourage you to be critical thinkers and fully evaluate for which clinical challenges this approach is important and whether and how evidence-based practice may be useful in your daily practice. This point brings us to another related issue for which evidence-based practice is extremely useful: treatment fidelity.

Treatment Fidelity

As we discussed in detail in Chapter 9, clinical trial methodology involves the systematic evaluation of an intervention and to a large extent forms the basis for evidence-based practices. Thus, whether you are helping to establish an evidence base or using methods supported by such evidence, a critical aspect of these trials is ensuring that the intervention is delivered to study participants assigned to the experimental group as intended and as specified by a written protocol. Attention to the integrity of implementing the intervention is referred to as treatment fidelity.20,21 Actions that an investigator can take to enhance treatment fidelity include (1) creating a detailed manual of the intervention so that it can be delivered consistently, is reproducible, and is independent of interventionist style; (2) providing careful training in the intervention; (3) providing constant oversight of its delivery through direct observation of treatment sessions or review of audio or video recordings of its delivery; and (4) tracking adherence and reasons for nonparticipation.

One way to understand treatment fidelity is by a model used in psychotherapeutic intervention studies.20,21 This model posits three components (domains) of treatment fidelity that need to be enhanced and monitored: delivery, receipt, and enactment (Table 24-1). Basically, for an intervention to be effective, it must be delivered consistently, it must be received and be acceptable to participants, and finally, intervention strategies must be enacted or used. Although the definition of each component is still evolving, this model provides a helpful tool for thinking about the procedures that need to be considered and put into place to ensure the integrity of an intervention, particularly one that is considered evidence-based.

TABLE 24-1

Three Domains of Treatment Fidelity

Domain | Enhancement Strategy | Monitoring and Data Collection Strategy
Delivery | Systematic training of interventionists; manual of procedures | Frequency of contact and intensity
Receipt | Use of different implementation strategies (e.g., video, hands-on instruction, role-playing) | Participant's acknowledgment of participation and receipt of intervention materials; evidence of receipt (e.g., knowledge test, improved understanding)
Enactment | Identification of how and when to implement strategies; recording forms | Evidence of integration and use of knowledge, skills, or intervention strategies; participant's feedback as to use; enhanced proximal outcomes
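
The sketch below suggests one way the three domains in Table 24-1 might be operationalized as a simple fidelity log. The field names, criteria, and entries are hypothetical; in an actual trial they would be defined by the written protocol and manual of procedures.

```python
# A minimal, hypothetical sketch of a fidelity log organized by the three
# domains in Table 24-1: delivery, receipt, and enactment.
from dataclasses import dataclass

@dataclass
class FidelityRecord:
    participant_id: str
    session: int
    delivered_per_manual: bool      # delivery: protocol followed as written
    contact_minutes: int            # delivery: dose and intensity of contact
    materials_received: bool        # receipt: participant acknowledged materials
    knowledge_check_passed: bool    # receipt: evidence of understanding
    strategies_used_at_home: bool   # enactment: participant reports using strategies
    notes: str = ""

def fidelity_summary(records: list[FidelityRecord]) -> dict[str, float]:
    """Proportion of logged sessions meeting each fidelity domain's criteria."""
    n = len(records)
    return {
        "delivery": sum(r.delivered_per_manual for r in records) / n,
        "receipt": sum(r.materials_received and r.knowledge_check_passed for r in records) / n,
        "enactment": sum(r.strategies_used_at_home for r in records) / n,
    }

log = [
    FidelityRecord("P01", 1, True, 45, True, True, False),
    FidelityRecord("P01", 2, True, 40, True, True, True),
]
print(fidelity_summary(log))
```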

Practice-Based Research

Different from evidence-based practice, which applies specific types of research evidence to guide practice, practice-based research starts with practice itself as the focus of inquiry. Its purpose is to use existing or “present” data in the form of clinical notes, observations, organizational and institutional policy statements, and so on to create knowledge that generates a complex understanding of practice, from which new and creative strategies can emerge. Consider Candy's definition:

Practice-based research is an original investigation undertaken in order to gain new knowledge partly by means of practice and the outcomes of that practice. Claims of originality and contribution to knowledge may be demonstrated through creative outcomes which may include artefacts such as images, music, designs, models, digital media or other outcomes such as performances and exhibitions. Whilst the significance and context of the claims are described in words, a full understanding can only be obtained with direct reference to those outcomes.20

Unlike the linear sequence of evidence-based practice, in which literature is selected and then applied to practice, practice-based research is embedded within the actual action processes of daily practice and the observation of outcomes. In the United States, the Agency for Healthcare Research and Quality (AHRQ) has espoused a model of practice-based research as a network of collaboration in which professionals, clients, and other key persons engage in research to investigate, characterize, and improve the nature of practice.20 Consider this example.

Suppose you were interested in examining the relationship between perceptions of clinical efficacy and clinical outcomes in a population of individuals with recent mild cerebrovascular accidents (CVAs) who are receiving outpatient services. You decide that this inquiry is complex and that the perceptions of multiple groups interact to influence outcome. Through practice-based inquiry, a team is assembled to record and organize existing data such as verbal interchanges between clinicians and clients, client records, evaluations completed by clients about their experience, and minutes of staff meetings. Analysis of these already “present” data sources provides a rich forum for the inquiry and for subsequent evidence-based change that emerges from the analysis of daily practice.

Translation Research

The term “translational research” refers to the application of traditional “bench” research findings to clinical practice. Two primary levels have been discussed in the literature: applying laboratory research to intervention with specific individuals, and then expanding that application to intervention practices in general.21 Thus, this model involves collaboration among academic researchers and the professionals who put this research into practice in clinical, health care, and community settings to:

1) captivate, advance, and nurture a cadre of well-trained multi- and inter-disciplinary investigators and research teams; 2) create an incubator for innovative research tools and information technologies; and 3) synergize multi-disciplinary and inter-disciplinary clinical and translational research and researchers to catalyze the application of new knowledge and techniques to clinical practice at the front lines of patient care.22

Consider the following example. Research in advanced robotics is often conducted and contained within academic and laboratory settings. To resolve mobility barriers for a wheelchair user, a clinician turns to a robotics laboratory in an academic setting. A specialized wheelchair that uses sensors and automated robotic navigation is fabricated. The outcomes are significant, improving community life and employment opportunities for this client. Capitalizing on this success, a team of clinicians and robotics researchers frame a development project in which they are now creating simple, commercially available robotic solutions to eliminate a variety of navigational barriers in urban communities.

Evaluation practice

Consistent with practice-based and translational research models that bridge the research-practice gap, evaluation is moving in that direction as well.

The “evaluation practice” model infuses everyday practice with an evaluative and systematic knowledge-generating framework. In their model of evaluation practice, DePoy and Gilson5 defined “evaluation” as the conduct of three major thinking and action processes: problem and need clarification, reflexive intervention, and outcome assessment. All three rely on the systematic thinking and action processes presented throughout this book, now applied to the context of practice.

Problem and Need Clarification

Because of the complexity of defining and understanding health and social problems, methods to address and alleviate them are often unclear. As we frequently see in professional practice, why a particular approach to intervention is necessary and to what problem it responds are often omitted from the thinking and action processes of many health and human service efforts. Without a clear understanding of what problem is being addressed, and without evidence supporting the method needed to address it, we cannot demonstrate the value of our practices.

Clarification of the problem is critical to any professional effort because how a problem is conceptualized, who owns it, who is affected by it, and what needs to be done about it are all questions based in political-purposive and ideological arenas. Thus, the problem forms the basis from which all subsequent systematic evaluative activity takes place, and it provides the ultimate foundation for the implementation and continuation of interventions. In all arenas of practice, a clear and well-supported understanding of “problem” and “need” is essential. Without problem and need clarification, interest groups may define problems differently and thus expect different outcomes from the same intervention.

In the model of evaluation practice, a clear understanding of need must be based on credible evidence. Although we may make claims about what type of intervention is needed to resolve problems, accountability in making informed professional decisions depends on the presence and organization of empirical evidence of need. The setting of goals and objectives for an intervention derives directly from need.

Reflexive Intervention

As discussed in previous chapters, reflexivity is an important construct, defined as self-examination for the purpose of ascertaining how the researcher's perspective influences the interpretation of data. In the model of evaluation practice, the term is expanded to denote the set of thinking and action processes that we believe should take place throughout interventions. The term “reflexive intervention” reminds us that the intervention action processes, resources, and influences are essential parts of evaluation practice and are thus subject to the same systematic scrutiny as needs assessment. Thus, the objective of reflexive thinking processes in evaluation practice is not limited to an individual; rather, it is applied to the sum total of the intervention and scope of influences on the process and outcome of intervention.

Reflexive intervention involves three important foci: monitoring process, resource analysis, and consideration of indirect influences on the intervention.

Monitoring (Process Assessment)

Monitoring, or process assessment, is the element of evaluation that examines whether and how the intervention is proceeding. Monitoring is an essential evaluation practice action in which the actual implementation of an intervention is systematically studied and characterized. Monitoring processes not only examine the scope of the intervention but also scrutinize the intervention processes to determine whom they affect, assess the degree to which goals and process objectives are efficaciously reflected in the intervention, and provide feedback for revision based on empirical evidence. The primary purpose of monitoring is to ascertain whether an intervention was delivered as planned so that attribution of outcome can be positioned properly (similar to the concept of treatment fidelity in clinical trials).

To monitor funding, transportation, and elder involvement in activity systematically, you set up a tracking system. You can then determine what was done, who participated, and the frequency of participation. Without this information, you would not know what was done and could not attribute any changes in outcome to your program.
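
A tracking system of this kind can be as simple as a participation log from which counts are tallied. The following sketch is a minimal, hypothetical illustration of reporting what was done, who took part, and how often.

```python
# A minimal sketch of a participation log for process monitoring; the
# activity names, dates, and participant codes are hypothetical.
from collections import Counter

participation_log = [
    {"date": "2024-03-01", "activity": "exercise group", "participant": "E-014"},
    {"date": "2024-03-01", "activity": "exercise group", "participant": "E-022"},
    {"date": "2024-03-08", "activity": "transportation to clinic", "participant": "E-014"},
]

# What was done, and how often each person took part.
sessions_per_activity = Counter(entry["activity"] for entry in participation_log)
visits_per_participant = Counter(entry["participant"] for entry in participation_log)

print("Sessions delivered:", dict(sessions_per_activity))
print("Participation frequency:", dict(visits_per_participant))
```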

Resource Analysis

Similar to monitoring, resource analysis occurs throughout the intervention and requires reflection. When we think of resource analysis, the cost in dollars of services rendered is the resource most often examined. In evaluation practice, however, resource analysis encompasses the full range of human and nonhuman resources used to conduct an intervention that are related to its need and outcome.

Consideration of Influences on the Intervention

Part of the process of reflexive intervention must be a consideration of factors external to the intervention that affect both process and outcome. Without widening the scope of examination, it is difficult to determine why change is occurring (or why it is not occurring) in the desired direction. What if you neglected to look at community influences and a local church had implemented a community program for elders serving your same population? You can see how this unplanned influence might change your expectations and attribution of outcomes.

Outcome Assessment

Outcome assessment is the action process of evaluation practice that is most familiar. It answers the question, “To what extent did the desired outcomes occur?” Outcome assessment also examines the parts of an intervention and the intervention context that relate to outcome, as well as differential outcomes resulting from other influences (e.g., intervention processes, target populations, complexity of outcome expectations). Although its aim seems obvious—to judge the value of an intervention in its achievement of goals and objectives—outcome assessment serves many other important purposes. Outcome assessment provides empirical information on which to make programmatic, policy, and resource decisions that influence and shape social and human services at multiple levels.23
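
As a simple illustration, the sketch below computes one familiar outcome metric: the mean pre-to-post change on an outcome measure and a standardized effect size for paired scores. The scores are hypothetical, and a full outcome assessment would also ask who changed, under what conditions, and at what cost.

```python
# A minimal sketch of a pre/post outcome computation with hypothetical scores.
from statistics import mean, stdev

pre = [22, 25, 19, 30, 27, 24]    # baseline scores on an outcome measure
post = [28, 29, 21, 35, 30, 27]   # scores after the intervention

changes = [after - before for before, after in zip(pre, post)]
cohens_d = mean(changes) / stdev(changes)   # standardized effect size for paired scores

print(f"Mean change: {mean(changes):.1f} points")
print(f"Cohen's d (paired): {cohens_d:.2f}")
```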

Evaluation practice provides a systematic framework by which to develop a knowledge base of professional practice. Documenting your systematic thinking and action processes throughout every practice sequence provides credible evidence from which to support practice claims and improve practice. Thus, evaluation practice provides a systematic and “doable” link between practice and research.

Summary

In this chapter, we have discussed four models of inquiry that inform or are informed by practice. The current model of interjecting evidence into practice is the evidence-based model generated primarily from medicine. This model has led to difficulties in application, primarily because of its underlying assumption of a hierarchy of knowledge, its dismissal of all forms of research-generated knowledge except that derived from true-experimental designs, and its attempt to impose nomothetically generated knowledge on idiographic contexts. Collaborative models include practice-based research, translational research, and evaluation practice. Each relies on collaboration and aims to improve individual outcomes and overall health and human service practice.

Exercises

1. Find an evidence-based literature review on a topic of interest to you and examine its use in informing your practice. Would you seek additional information to make sound practice decisions? Why or why not? What information might you acquire beyond the research evidence presented in the literature review?

2. Identify a practice issue or problem and develop a plan for practice-based research. Then develop an evaluation practice design that includes all elements of this new model.

3. Look for two articles that discuss collaborative research. Identify the roles and responsibilities of each participant and critically examine how the collaboration contributes to the knowledge that was generated by the study.

References

1. Rubin, A. Practitioners' guide to using research for evidence-based practice. Belmont, Calif: Wiley, 2007.

2. DePoy, E., Gilson, S.F. The human experience. Lanham, Md: Rowman and Littlefield, 2007.

3. Brownson, R.C., Baker, E.A., Leet, T.L., Gillespie, K.N. Evidence-based public health. Oxford: Oxford University Press, 2003.

4. Thyer, B. The handbook of social work research methods, ed 2. Thousand Oaks, Calif: Sage, 2009.

5. DePoy, E., Gilson, S.F. Evaluation practice. Boston: Taylor and Francis, 2008.

6. Gordon, W.E. Toward a social work frame of reference. J Soc Work Educ. 1965;1:19–26.

7. Grinell, R., Unrau, Y. Social work research and evaluation: quantitative and qualitative approaches, ed 8. New York: Oxford University Press, 2007.

8. Sackett, D.L., Straus, S.E., Richardson, W.S., et al. Evidence-based medicine: how to practice and teach EBM, ed 2, New York: Churchill Livingstone; 2000:115.

9. Guyatt, G.H., Sackett, D.L., Cook, D.J. Users' guides to the medical literature, II: How to use an article about therapy or prevention, A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA. 1993;270:2598–2601.

10. DiCenso, A., Cullum, N., Ciliska, D. Implementing evidence-based nursing: some misconceptions. Evid Based Nurs. 1998;1:38–40.

11. Lloyd-Smith, W. Evidence-based practice and occupational therapy. BJOT. 1997;60:474–478.

12. Mullen, E. Evidence-based knowledge: designs for enhancing practitioner use of research findings (a bottom up approach). presented at 4th International Conference on Evaluation for Practice, Tampere, Finland, University of Tampere, 2002.

13. Guyatt, G.H., Sackett, D.L., Cook, D.J. Users' guides to the medical literature, II. How to use an article about therapy or prevention. B. What were the results and will they help me in caring for my patients? Evidence-Based Medicine Working Group. JAMA. 1993;271:59–63.

14. Gibbs, L., Gambrill, E. Critical thinking for social workers: exercises for the helping professions. Thousand Oaks, Calif: Pine Forge, 1999.

15. Rosen, A. Evidence-based social work practice: challenges and promises. presented at Society for Social Work and Research Annual Conference, San Diego, 2002.

16. Kirst-Ashman, K.K., Hull, G.H. Understanding generalist practice, ed 2. Chicago: Nelson Hall, 1999.

17. Toseland, R.W., Rivas, R.F. An introduction to group work practice, ed 4. Boston: Allyn & Bacon, 2002.

18. Woods, M.E., Hollis, F. Casework: a psychosocial therapy. Columbus, Ohio: McGraw-Hill, 1999.

19. Ellor, J., Netting, F.E., Thibault, J.M. Religious and spiritual aspects of human service practice (Social Problems and Social Issues). Columbia: University of South Carolina Press, 1999.

20. Candy, L. Practice-based research (CCS Report 2006-V1.0). Sydney: University of Technology, Sydney, November 2006, http://www.creativityandcognition.com/content/view/184/103/; AHRQ: Support for Primary Care Practice-Based Research Networks.

21. Hulley, S. Designing clinical research. Philadelphia: Williams and Wilkins, 2006.

22. National Institutes of Health, The NIH Common Fund: Translational research overview, September 2009. http://nihroadmap.nih.gov/clinicalresearch/overview-translational.asp. [Accessed February 2, 2010].

23. Unrau, V.A. Using client exit interviews to illuminate outcomes in program logic models: a case example. Eval Program Plann. 2001;24:353–361.