
Youth Aiding Youth Program Evaluation Reach Out Centre for Kids

Surbhi Bhanot-Malhotra PhD Reach Out Centre for Kids

Kelly Gulliani

Reach Out Centre for Kids

Sarah Randall McMaster University

Oct 31st 2011

Executive Summary

Reach Out Centre for Kids: Youth Aiding Youth Program
Surbhi Bhanot-Malhotra PhD, Research Associate

The purpose of this project was to conduct an outcomes-based evaluation of the Youth Aiding Youth program. The program is designed to build the social skills and self-esteem of youth who are at risk for mental health problems. We wanted to assess whether the program actually improved youth's social skills and self-esteem, and how satisfied youth were with the program.

Purpose:

• To evaluate the effectiveness of the Youth Aiding Youth social skills training program

• To inform service planning and delivery

The Program

Reach Out Centre for Kids (ROCK) is the largest accredited children's mental health centre in Halton. It provides a multi-disciplinary approach to the prevention, assessment and treatment of infants, children, youth and their families. To qualify for services, clients must be experiencing emotional, behavioural, developmental or social difficulties that put them at risk for serious mental health problems. Youth Aiding Youth is one of ROCK's prevention programs. It is designed to build the social skills and self-esteem of youth (ages 6 to 12) who are at risk for developing mental health problems. The social skills groups component of the program (the focus of this evaluation) runs over a period of 10 weeks. Each weekly session focuses on a social skills-related topic (e.g., peer pressure, cooperation). A group discussion of the topic at the beginning of the session is followed by activities designed to reinforce what has been learned. For example, if the topic is cooperation, youth first have a group discussion about cooperation and are then given a task that helps them build their cooperative skills (e.g., an activity that requires teamwork).

The Plan

The evaluation was conducted using a quantitative pre-test, post-test, 6-week post-test design. The total sample size was 56 (33 youth and 23 parents). Youth who participated in YAY were administered three surveys: the Social Skills Improvement System (SSIS) (Gresham & Elliott, 2008) to measure their social skills, the Self-Esteem subscale (King, Vidourek, Davis, & McClellan, 2002) to measure their self-esteem, and the Youth Satisfaction Survey (Criminal Justice Division, 2003) to measure their overall satisfaction with the program. Youth completed the SSIS and the Self-Esteem subscale twice – once before they started the program and a second time after they completed it. They completed the Youth Satisfaction Survey at the end of the program. Parents were asked to complete the adult version of the SSIS (Gresham & Elliott, 2008) three times – once when the program started, a second time when it ended and a third time 6 weeks after it had finished.

The Product

Overall, the results of our evaluation were mixed. We expected that the YAY program would increase the social skills and self-esteem of youth who participated in the program, but this was not entirely the case. Although parents perceived that their children's social skills had improved after participating in the program, youth's own scores on the SSIS did not reflect this. In addition, there were no significant improvements in youth's self-esteem. Why was this the case? First, it's possible that the youth did not fill out the survey questions appropriately. Some may have been aware that they were participating in YAY because of their social skills/self-esteem 'challenges' and may not have been entirely honest in their responses (social desirability response bias). Others may have had difficulty understanding the questions. Finally, since some surveys were quite long, it's also possible that youth rushed through the questions and didn't really think about their responses. On the other hand, the discrepancy in the findings could be attributed to parents giving their children higher social skills scores at the end of the program because they thought that the program was 'supposed to' improve their social skills. Although both scenarios are plausible, our firsthand experience with the evaluation suggests that it's more likely that the results were mixed because youth did not answer the survey questions appropriately. As such, we think that the YAY program may produce some improvement in youth's social skills and self-esteem. To further explore this, we recommend that the agency conduct an additional evaluation of the program using more appropriate outcome measures as well as a mixed-methods approach. We also recommend that ROCK staff discuss the deeper implications of the findings for service planning and delivery. We plan to share our experiences and evaluation findings with various stakeholder groups through a summary report.

Amount awarded: $29,700 (including $5,000 in student-related costs)
Final report submitted: 10/31/2011
Region: Central West Region

Introduction

A) Program Description

Youth Aiding Youth (YAY) is a program designed for ‘at risk’ youth. The primary goal of the program is to build the social skills and self-esteem of youth who have either been identified as being at higher risk for developing mental health problems (e.g., kids being bullied, kids from disadvantaged neighbourhoods, etc.) or those who have already been diagnosed with less severe mental health problems (e.g., Attention Deficit Disorder). Approximately 2400 youth from a wide range of backgrounds have participated in the program since its inception 24 years ago (roughly 100 youth per year).

The YAY program comprises two major components: social skills groups and one-on-one mentoring. Youth have the option of participating in either or both. Approximately 20-25% of youth decide to participate in both components of the program. Their participation can be simultaneous or sequential (i.e., some youth may participate in both components at the same time whereas others may participate in the one-on-one mentoring first and then join the social skills groups). Since the reviewers from CHEO thought that an evaluation of both components of the program would be too onerous for the implementation phase, we decided to focus our efforts on the social skills component (i.e., we evaluated the social skills groups but not the mentoring component).

An overview of the YAY social skills groups is presented in the attached logic model. The purpose of these groups is to build the social skills and self-esteem of youth through discussion and group activities. By doing this, we hope to increase their social support and improve their functioning in the long term. Each social skills group runs over a period of 10 weeks. Groups meet once a week, so there are 10 sessions in total. The groups are designed for youth ages 6 to 12. Although between 9 and 12 youth can participate in each group, there are usually 10 participants. The groups can be gender specific (i.e., girls only, boys only) or co-ed. They are offered at ROCK or in schools. The ROCK-based groups take place in the evening whereas the school-based groups take place over the lunch hour. Overall, ROCK provides 14 social skills groups each year. Twelve of these groups are offered between September and June: six at ROCK (2 girls groups, 2 boys groups and 2 co-ed groups) and six in schools (2 girls groups, 2 boys groups and 2 co-ed groups). Two co-ed groups are also offered over the summer.

The structure of the groups is similar even though they take place in different settings. The ROCK-based groups are 1.5 hours whereas the school-based groups are 1 hour in length. Each weekly session focuses on a topic relevant to the youth in the group (e.g., bullying, peer pressure, cooperation, etc.). During the first part of the session, there is a group discussion of the topic. After this discussion, youth participate in activities that are designed to reinforce what they have learned in the discussion. For example, if the topic was cooperation, youth would first have a group discussion about cooperation and then be given a task that would help to build their cooperative skills (e.g., building a tower in teams).

B) Purpose/Goals of the Evaluation

Although many clients have participated in Youth Aiding Youth's social skills groups, and anecdotal evidence suggests that these groups have been quite successful, an empirical outcomes-based evaluation of the YAY social skills groups had not been conducted. Consequently, the purpose of this project was to conduct this evaluation. We developed an evaluation framework for this project using two theoretical frameworks: the utilization-focused framework of evaluation and the goals-based framework of evaluation. Patton (2002) contends that evaluations should be judged on their utility and actual use. Consequently, this approach places emphasis on intended use by intended users, with evaluators largely taking on the role of facilitators. They facilitate the evaluation process by working with intended users to help them determine their needs – for example, they may help users select the most appropriate content, model, theory and methods for their particular program and situation.

In contrast, the goals-based model of evaluation places emphasis on results or outcomes rather than process. According to this framework, the effectiveness of a program can be measured by determining the extent to which it met its predetermined goals (Harris, 2005). The role of the evaluator is to: 1) determine the goals of the program, 2) decide how to measure these goals, 3) collect information/data about the goals, 4) assess the effects, and 5) analyze and interpret the data in order to determine whether the program's stated goals have been met. A program is deemed successful if it has attained most of its specified goals.

The goal of this evaluation was two-fold. Since a utilization-focused framework was adopted and the intended users (ROCK) thought that an outcomes-focused evaluation would be most useful, the primary goal of this project was to conduct an outcomes-based evaluation of the Youth Aiding Youth social skills groups. Specifically, we wanted to determine the extent to which the social skills groups actually increased youth's level of social skills and the extent to which these groups increased their levels of self-esteem (the two desired outcomes of the program). In addition, from a process evaluation perspective, we wanted to assess whether youth were satisfied with the YAY social skills groups. As such, our three evaluation questions were: 1) Do the Youth Aiding Youth Social Skills groups increase clients' social skills? 2) Do the Youth Aiding Youth Social Skills groups increase clients' self-esteem? 3) Overall, are clients satisfied with the Youth Aiding Youth Social Skills groups?

C) Relevant Stakeholders

The target population for the YAY program is youth who have been identified as being at higher risk for mental health problems (e.g., youth with limited social skills, youth who are being bullied, youth from single parent homes, youth from low socioeconomic status neighbourhoods, etc.). These youth are referred to the program through schools, doctors, and therapists working at ROCK’s walk-in clinic.

There are many internal and external stakeholders involved in the YAY program. The internal stakeholders are the youth who participate in the program, their parents, the staff and student volunteers who help run the program, and the ROCK staff who refer youth to the program (e.g., therapists). The external stakeholders are the Halton District School Board, the Halton Catholic District School Board, various funding organizations (e.g., corporate sponsors, ROCK foundation members, United Way), and other children's mental health agencies (e.g., Nelson Youth Centre).

D) Literature Review

Social competence is an integral component of children's development. Youth who do not develop appropriate social skills face challenges in several domains. From a mental health standpoint, poor social skills and relationship difficulties are associated with depression (Segrin, 2000), conduct disorders (Spence, 1991), social phobia (Spence, Donovan, & Brechman-Toussaint, 1999), and autism and Asperger's syndrome (Harris, 1998). Gresham et al. (2004) found that children who are at risk for developing emotional or behavioral issues "experience significant difficulties in the development and maintenance of satisfactory interpersonal relationships, exhibition of prosocial behavior patterns, and social acceptance by peers". Given the importance of social skills, much research has examined the different ways that we can train children to develop better social skills.

The current conception of social skills training for helping school-aged children adapt to their social settings emerged in the 1980s. At that point, Ladd (1984) argued that social skills training was the most effective method of building social competence. Researchers have subsequently attempted to define what social skills training should look like, but no consensus has been reached (Merrell & Gimpel, 1998). Some of the models of social skills development programs that have emerged include the social interaction model, the prosocial behavior model, and the social-cognitive skills model (Ang & Hughes, 2001; Beelmann et al., 1994; Losel & Beelmann, 2003; Quinn et al., 1999; Schneider, 1992; Schneider & Byrne, 1985).

The YAY program is based on a behavioral social skills model. As such, it is an evidence-informed program. Youth learn appropriate social behaviors through training techniques such as discussion, modeling, role-play, and rehearsal/practice (Spence, 2003). Research suggests that the behavioural model is an effective means of improving youth's social skills (Gresham, 1981, 1985; McIntosh, Vaughn, & Zaragoza, 1991). The format of the group is also evidence-informed. The social skills training literature suggests that social skills training is more effective if a) group leaders aim to maximize the participation of all group members; b) the training is adequate in duration (takes place over months rather than weeks); and c) the training focuses on social skills that actually increase the chance of successful social outcomes (Bullis et al., 2001; Hansen, Nangle, & Meyer, 1998; Hepler, 1994; Spence, 1995). All of these elements have been incorporated into the YAY social skills groups.

Although social skills training programs are based on various theoretical models, the goal of all of these programs is essentially the same: to build the social skills of participants. Consequently, it is important to examine the different ways that we can measure this outcome. Several behaviour rating scales of social skills exist. One of the more commonly used scales is the Social Skills Rating System (Gresham & Elliott, 1990). Respondents are asked to rate a series of prosocial behaviours in terms of frequency of occurrence. These behaviours are thought to influence the quality of relationships with others (e.g., self-control, cooperation, and assertion). The findings of previous research suggest that the scale has adequate psychometric properties (Demaray et al., 1995). However, it still suffers from limitations. A number of the items on the scale refer to more general aspects of functioning, for example: 'produces correct school work', 'puts work materials or school work away' and 'keeps room clean and neat without being reminded'. Although these behaviours are important from a behavioural adjustment standpoint, they do not directly relate to interpersonal skills. Thus, the scale is not necessarily an ideal measure of social skills.

Another popular measure of children’s social skills is the School Social Behaviour Scales (Merrell, 1993). These scales include a social competence scale (32 items) and an antisocial behaviour scale (33 items). The social competence scale includes its own smaller subscales that focus on children’s self-management skills, their academic skills and their interpersonal skills. One of the limitations of this scale is that it provides researchers with little detail about specific behavioural social skills. A second limitation is that a parent version of the scale has not yet been developed. As a result, the scale relies heavily on self-report.

Other scales measure specific dimensions of children’s social functioning. For example, some scales are used to specifically assess children’s assertive responding. Examples of these types of scales include the Children’s Assertiveness Inventory (Ollendick, 1983), the Children’s Assertive Behaviours Scale (Michelson & Wood, 1982), and the Children’s Action Tendency Scale (Deluty, 1984). While useful, these scales are limited in that they measure only one component of children’s social functioning (i.e., assertiveness).

A more recent measure of children's social skills is the Social Skills Improvement System (SSIS) (Gresham & Elliott, 2008). This scale measures a wide range of social skills, covering communication, cooperation, assertion, responsibility, empathy, engagement, and self-control. Participants are asked to respond to 46 items on a four-point Likert scale ranging from 1 (not true) to 4 (very true). One of the advantages of the SSIS is that the scale is available in both youth and adult formats. In the youth version, children are asked to report on their own behaviour. In the adult version, adults (e.g., parents, teachers) are asked to report on their children's or students' behaviours. It is therefore possible to compare children's self-reports of their own behaviour to adults' reports of their behaviour, and so to get a better assessment of children's social behaviours. Due to the aforementioned benefits of the SSIS (Gresham & Elliott, 2008), we decided to use it as the measure of social skills in our evaluation.

Although social skills training programs ideally build the social skills of participants, anecdotal evidence suggests that these programs may also benefit youth in other ways. For example, many therapists at our agency think that the YAY program also has a positive impact on participants' self-esteem. This makes sense, since one would expect that youth who increase their social skills would make more friends, which would in turn increase their self-esteem. This relation between social competence and self-esteem is supported by research (Frankel & Myatt, 1994). We therefore thought that it was important to include self-esteem as a possible outcome in our evaluation framework (i.e., we expected that youth would develop higher self-esteem as a result of participating in the Youth Aiding Youth program). In order to include this outcome in our evaluation, we needed to find an appropriate standardized measure. Numerous measures of self-esteem exist. Some of these measures are more global, meaning that they focus on people's overall evaluation of themselves (Rosenberg, 1965), whereas others focus on specific facets of self-esteem (Marsh, Smith, & Barnes, 1983). Almost all are based on a self-report format, since it's difficult for others to 'see' or assess other people's self-esteem. Based on our review of the literature, we thought that the ideal scale to use for this evaluation was King, Vidourek, Davis and McClellan's (2002) Self-Esteem subscale. The scale was considered ideal because it had good reliability (Cronbach's alpha = 0.8) and face validity, it was easy to comprehend, and it was short.

Methodology

Research Design

The evaluation was conducted using a pre-test, post-test and 6-week post-test design. A six-week post-test period was used because of the timing of the social skills groups: a two-month follow-up would have run into summer vacation/holidays, and it was anticipated that fewer families would have been available to complete the follow-up at that time.

Sample

All clients who participated in the YAY social skills groups at the Burlington (Pilke House) and Milton (Our Lady of Victory) sites served as potential participants. In total, we collected data from 33 youth and 23 parents (total sample size of 56).

Timing of Data Collection

Data collection took place from the middle of October 2010 to the middle of June 2011.

Methods of Data Collection (refer to attached Outcomes/Indicators chart)

Measures for Youth

Youth who participated in the YAY social skills groups were administered three questionnaires during the course of this evaluation (refer to Appendices A, B, C and D for copies of these questionnaires). All of these questionnaires followed a self-report format (i.e., youth were asked to report on their own thoughts/behaviours). The first questionnaire, the Social Skills Improvement System (SSIS) (Gresham & Elliott, 2008), was used to measure youths' social skills. The SSIS comprises 46 items. Youth are asked to respond to each of the items on a four-point Likert scale ranging from 1 (not true) to 4 (very true). The scale has been used extensively and is considered a valid and reliable measure of social skills. Examples of items from the scale include "I pay attention when others present their ideas.", "I feel bad when others are sad." and "I get along with other children and/or adolescents". Total possible scores can range from 46 to 184. Higher scores indicate that the child has better social skills.
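The raw-total arithmetic described above can be sketched as follows. The item responses below are randomly generated for illustration only; the published SSIS has its own forms and norm tables, so this shows nothing more than how 46 ratings of 1-4 bound the total between 46 and 184.

```python
# Illustrative raw-total scoring for a 46-item scale rated 1 (not true)
# to 4 (very true), as described in the report. Responses are invented.
import random

random.seed(0)
responses = [random.randint(1, 4) for _ in range(46)]  # one youth's 46 item ratings

total = sum(responses)
assert 46 <= total <= 184  # the possible range stated in the report
print(f"Total score: {total} (possible range 46-184)")
```

Because every item contributes at least 1 and at most 4, the minimum total is 46 × 1 = 46 and the maximum is 46 × 4 = 184, matching the range stated above.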

Youth were also asked to complete King, Vidourek, Davis and McClellan's (2002) Self-Esteem subscale. This scale is based on the Hare Abbreviated Self-Esteem Scale (Kelley, Denny, & Young, 1997) and comprises five items. Respondents were asked to rate their level of agreement with the items on a four-point Likert scale ranging from 1 (strongly disagree) to 4 (strongly agree). The subscale has demonstrated adequate reliability (Cronbach's alpha = .8) in previous research. Sample items from this scale include "I like myself" and "I feel proud of myself". Higher scores are indicative of a higher level of self-esteem.
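The reliability figure cited above (Cronbach's alpha) can be computed directly from an item-response matrix. The sketch below uses invented ratings for a five-item, four-point scale of the kind described; both the data and the resulting alpha are illustrative only, not the subscale's published reliability.

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 5 items, each rated 1-4
scores = np.array([
    [3, 3, 4, 3, 3],
    [2, 2, 2, 3, 2],
    [4, 4, 3, 4, 4],
    [1, 2, 1, 1, 2],
    [3, 4, 4, 3, 3],
    [2, 3, 2, 2, 2],
])
print(round(cronbach_alpha(scores), 2))  # → 0.94
```

Alpha approaches 1 when respondents who score high on one item tend to score high on the others, which is what the .8 figure cited above reflects for the real subscale.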

In addition to completing the above scales, youth were also asked to complete the Youth Satisfaction Survey (Criminal Justice Division, 2003). This scale was used to measure youth's overall satisfaction with the program. Respondents were asked to rate their level of agreement with 12 statements on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Sample items from this scale include "The place where the program was held was easy to get to" and "The information I received was useful". Higher average scores reflect a higher level of satisfaction with the program.

Measures for Parents

The parents of the youth were asked to complete an adult version of the SSIS (Gresham & Elliott, 2008), which asked them to rate their child's social skills. This version of the scale is similar to the version completed by the youth. However, parents are asked to report on their perceptions of their children's behaviours rather than their own. The scale has 46 items and parents were asked to rate each of these items on a four-point Likert scale ranging from 1 (never) to 4 (almost always). Sample items from the scale include "Interacts well with other children", "Tolerates peers when they are annoying" and "Takes criticism without getting upset."

Procedure

Prior to starting the YAY program, parents were given consent forms to complete (refer to Appendix E for a copy of the consent form). This form provided parents with an overview of the evaluation activities. The forms were collected prior to youth starting the evaluation surveys.

Pre-test

The client pre-test took place during the first session of the social skills groups. Youth were asked to complete the SSIS (Gresham & Elliott, 2008) and King et al.'s (2002) Self-Esteem subscale at the beginning of the first session. The group facilitator handed out these surveys to the youth and went over the instructions. She then read each question/statement aloud and instructed youth to select their response. Youth were given as much time as they needed to complete these questionnaires.

Parents were also administered the adult version of the SSIS at pre-test. They were asked to complete this survey when they dropped their children off for the first session. The facilitator went over the survey instructions with them and gave them as much time as they needed to complete the survey.

Post-test

The post-test took place during the last session of the social skills groups (i.e., the session during week ten). At the end of this session, the group facilitator asked youth to complete the same questionnaires that they had during the first session (i.e., the SSIS (Gresham & Elliott, 2008) and the Self-Esteem subscale (King et al., 2002)). The post-test was administered in the same manner as the pre-test – the group facilitator read each question out loud and asked youth to select an appropriate response. After youth completed the post-test, they were thanked for their participation.

The parent post-test also took place after the final session of the social skills groups. When parents came in to pick up their children, they were asked to complete the same questionnaire that they had at pre-test (i.e., the adult version of the SSIS). After they completed this survey, they were provided with an information sheet about the purpose of the evaluation. They were encouraged to share this information with their children. Parents were also provided with information about the six-week follow-up at this time and asked whether they consented to being contacted for this follow-up.

Six-week post-test

If parents chose to complete the six-week follow-up, the evaluator contacted them after 6 weeks to arrange a time and location for the post-test. The evaluator then administered the adult version of the SSIS to these parents at the prearranged time and location. After parents completed the survey, they were thanked for their participation and provided with information about the overall purpose of the evaluation.

Evaluation Limitations

Limitation 1: Bias

One of the limitations of our evaluation was the use of a self-report design. As part of our evaluation, youth were asked to report on their own social skills. This could have introduced bias into the results. For example, youth may have rated their social skills as being better than they actually were in order to present themselves in a positive light. A similar bias may have occurred when parents were asked to report on their children's social skills. Parents may have felt compelled to report that their children had better social skills after completing the program (i.e., at post-test). In addition, some parents may not have been able to accurately assess their children's social skills because they did not see them in social situations often enough (e.g., in cases where the parents are divorced, or where parents did not see their children's behaviours in different settings). We tried to address this challenge by reiterating that there were no right or wrong answers to the survey questions and that responses would have no impact on the quality of care clients received at our agency.

Limitation 2: Characteristics of Client Population

We also faced additional challenges due to the characteristics of the population that we work with. The youth who participate in the YAY social skills groups are generally a diverse group. Participants had varying underlying mental health challenges, they were at different developmental levels/ages, and they had different living situations/contexts (e.g., CAS involvement, transient living situations). Given the heterogeneity of the group, variables that we were unable to measure may have influenced the effectiveness of the program (e.g., SES, living arrangements, mental health diagnosis, attentional/learning issues). For example, the program may have had a more significant impact on the social skills of youth whose parents did not have significant mental health concerns.

Participants’ lack of familiarity with the research process was also a challenge. Many parents who participated in our evaluation had not completed this type of survey before. As a result, they may have been unaware of the importance of consistency when completing the survey items. Literacy and understanding of the questions on the evaluation forms could have been another related limitation. If clients did not understand the question, then they were less likely to answer accurately and truthfully. This could also create an ethical issue if clients felt uncomfortable when answering questions. We could not account for all of these client-related variables in our evaluation. However, we tried to be mindful of them in our interpretation of the results.

Limitation 3: Data collection did not always go according to plan/schedule

A third challenge was ensuring that the data collection went according to plan. Our biggest limitation was that our sample size was significantly lower than we expected. We had hoped to collect data from 100 individuals (50 youth, 50 adults) even after accounting for attrition. However, in the end, we had a sample size of 56 (33 youth, 23 adults). There are several reasons why this occurred. First, although many youth signed up to take part in the YAY program, an unusually high number decided not to attend. Second, many youth missed either the first session or the last session of the program, so we did not have a complete data set for them (i.e., we had a pre-test survey for some youth/parents but not a post-test, and vice versa). Third, some parents filled out the survey inappropriately (e.g., the mother completed the pre-test and the father completed the post-test rather than the same parent completing both). Consequently, we could not use their data. Fourth, some parents opted not to complete the survey due to factors beyond our control (e.g., time constraints, thinking the survey was too long, having other children to take care of).

Limitation 4: Research Design/Measures

Since we wanted to conduct an outcomes-based evaluation of the YAY program, our research design was quantitative in nature. We relied on standardized surveys to measure our outcomes. In retrospect, this was probably not the best design. Our social skills survey seemed too long and tedious for the youth to complete. This was particularly true for younger youth and those who appeared to have learning/attention-related difficulties. Both youth and parents also seemed to have difficulty understanding some of the questions and became confused by the wording. It may have been better to use a mixed-methods design (i.e., to include some open-ended questions so that youth/parents had the opportunity to explain their answers in their own words).

Results

The data analyses were conducted using the Statistical Package for the Social Sciences (SPSS). Both outcome-focused and process-focused evaluation questions were addressed in these analyses.

Evaluation Question 1: Do the Youth Aiding Youth Social Skills groups increase clients' social skills?

Indicators: Youth's scores on the SSIS – Youth Version (Gresham & Elliott, 2008) and parents' scores on the SSIS – Adult Version (Gresham & Elliott, 2008)

Analysis: To answer this evaluation question, we compared youths' scores on the SSIS at pre-test (i.e., at the beginning of the program) and post-test (i.e., at the end of the program). We had planned to collect data from 50 youth overall. However, our final sample size was 33 (i.e., 33 youth completed both the pre-test and post-test survey). As mentioned earlier, this lower sample size was a result of youth not attending the program or being absent during the sessions where we administered the surveys. Given this low sample size, the power of the paired samples t-test that we had planned to conduct would have been too low. Instead, we focused our analysis on descriptive statistics (means) and the Wilcoxon Signed Rank test (a non-parametric alternative to the paired samples t-test). The results of the descriptive analyses are presented in Table 1. The average pre-test score on the SSIS was 99.28 (standard deviation of 17.54) and the average post-test score was 103.64 (standard deviation of 22.19). Thus, youth's social skills appeared to increase very slightly by the end of the program (based on their self-report). However, since the difference was very small, it may not have been statistically significant. To further explore this evaluation question, we conducted a Wilcoxon Signed Rank test on the data.
The results of this analysis suggested that the YAY program did not produce a statistically significant improvement in youth's social skills (Z = -1.35, p = .18).

Table 1: Social Skills Related Analysis (Youth)

Social Skills Score for Youth at Time 1 (beginning of program): Average = 99.28, N = 33, SD = 17.54
Social Skills Score for Youth at Time 2 (end of program): Average = 103.64, N = 33, SD = 22.19
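The analyses above were run in SPSS. As a rough sketch of what the Wilcoxon Signed Rank test involves, here is a small pure-Python version (normal approximation, two-sided) applied to hypothetical pre/post scores; the function name and the sample values are illustrative, not the evaluation's actual data.

```python
from math import sqrt, erf

def wilcoxon_signed_rank(pre, post):
    """Wilcoxon Signed Rank test (normal approximation, two-sided).
    A non-parametric alternative to the paired samples t-test."""
    # Paired differences; zero differences are dropped, per the standard procedure.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # Rank the absolute differences, averaging the ranks of tied values.
    abs_sorted = sorted(abs(d) for d in diffs)
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and abs_sorted[j] == abs_sorted[i]:
            j += 1
        rank[abs_sorted[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    # W is the smaller of the positive-rank and negative-rank sums.
    w_plus = sum(rank[abs(d)] for d in diffs if d > 0)
    w_minus = sum(rank[abs(d)] for d in diffs if d < 0)
    w = min(w_plus, w_minus)
    # Normal approximation to the null distribution of W.
    mean = n * (n + 1) / 4
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mean) / sd
    p = 1 + erf(z / sqrt(2))  # two-sided p-value (= 2 * Phi(z), since w <= mean)
    return w, z, p

# Hypothetical SSIS pre-test and post-test scores for 10 youth.
pre = [92, 105, 88, 110, 97, 101, 95, 120, 84, 99]
post = [96, 103, 91, 108, 99, 104, 92, 123, 85, 98]
w, z, p = wilcoxon_signed_rank(pre, post)
print(f"W = {w}, Z = {z:.2f}, p = {p:.2f}")  # here p > .05: no significant change
```

With such small samples the test has limited power, which is why a small mean difference (as in Table 1) can fail to reach significance.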

In the second part of this analysis, we examined parents' perceptions of their children's social skills at the beginning and end of the YAY program. Our overall sample size for the parent group was 23. The results of the descriptive analysis are presented in Table 2. Parents' average score on the SSIS was 80.35 (SD = 20.77) at the beginning of the program and 86.96 (SD = 15.98) at the end of the program. This suggests that parents felt that their children's social skills were better after they had completed the Youth Aiding Youth program. To test this statistically, we conducted a Wilcoxon Signed Rank test. In this case, the results were statistically significant (Z = -2.21, p = .03), meaning that there was a statistically significant improvement in youth's social skills after completion of the YAY program (based on parental perception).

Table 2: Social Skills Related Analysis (Parents)

Parents' Social Skills Score for their Children at Time 1 (beginning of program): Average = 80.35, N = 23, SD = 20.77
Parents' Social Skills Score for their Children at Time 2 (end of program): Average = 86.96, N = 23, SD = 15.98

In order to assess whether these social skills changes were maintained over a longer time period, we also looked at the 6-week post-program data. In this case, we compared parents' scores on the SSIS before the groups (pre-test) to their scores 6 weeks after the groups had finished (6-week post-test). The sample size for this analysis was considerably smaller (11) since few parents completed the 6-week follow-up. The results of the descriptive analysis are presented in Table 3. As is evident in this table, parents still perceived an improvement in their children's social skills 6 weeks after the program had ended. However, based on the results of the Wilcoxon test, this improvement was not statistically significant (Z = -0.87, p = .39).

Table 3: Social Skills Related Analysis (Parents, 6-week post-test)

Parents' Social Skills Score for their Children at Time 1 (beginning of program): Average = 79.91, N = 11, SD = 18.92
Parents' Social Skills Score for their Children at Time 2 (6 weeks after end of program): Average = 83.91, N = 11, SD = 23.92

Evaluation Question 2: Do the Youth Aiding Youth Social Skills groups increase clients' self-esteem?

Indicator: Youth's scores on King et al.'s (2002) Self-Esteem subscale

In order to answer this question, we examined youth's scores on King et al.'s (2002) Self-Esteem subscale. Again, since the sample size was small, our analysis was limited to descriptive statistics and a non-parametric alternative (i.e., the Wilcoxon Signed Rank test). The results of the descriptive analyses are presented in Table 4. As is evident in Table 4, there was little improvement in youth's self-esteem by the end of the program. This was supported by the results of the Wilcoxon test, which suggested that there was no statistically significant improvement in youth's self-esteem scores (Z = -1.38, p = .17).

Table 4: Self-Esteem Related Analysis (Youth)

Youth's Self-Esteem Score at Time 1 (beginning of program): Average = 16.07, N = 30, SD = 3.39
Youth's Self-Esteem Score at Time 2 (end of program): Average = 16.67, N = 30, SD = 3.16

Evaluation Question 3: Are clients satisfied with the Youth Aiding Youth Social Skills groups?1

Indicator: Youth's scores on the Youth Satisfaction Survey (Criminal Justice Division, 2003)

To address this question, we looked at youth's scores on the Youth Satisfaction Survey (Criminal Justice Division, 2003). We calculated an average total score for the scale to determine youth's overall satisfaction with the program. The average score was 49.7 (out of a maximum possible of 60). This finding suggests that youth were very satisfied with YAY. We also looked at the average scores on individual items to see if youth were more satisfied with some aspects of the program than others. The results suggest that youth were particularly satisfied with the expertise and approachability of the staff (average score = 4.3) and with their enjoyment of the program (average score = 4.3).

Stakeholder Involvement

Various stakeholders were involved in the different aspects of the evaluation. First, youth and parents completed the survey and provided their feedback about the experience (what they thought about the survey, whether it was easy to understand, etc.). Second, ROCK and YAY staff were involved in the administration of the surveys. The research associate and student assistant administered the survey with the help of group facilitators (student volunteers) and the manager of the program (Kelly Giuliani). Overall, it was a very collaborative process and various types of knowledge were exchanged. For example, the group facilitators discussed the limitations of the surveys with the research associate and student assistant. They shared information about how the youth were struggling with the length of the survey and how those with reading challenges had difficulty understanding some of the items on the scale. This was a ‘real world’ perspective that would have been difficult to gain through other means. The research associate and student assistant similarly shared information about the methodology with the group facilitators (how it was important to be consistent in administering the surveys, how to respond to youth’s questions about the survey without ‘leading’ them to answer a particular way, etc.). Although external stakeholders were not directly involved in the evaluation process, they were given updates about the project through informal discussions at community meetings. Now that we have completed the evaluation, we plan to create a short summary of the results that will be shared with community partners.

1 We had planned to ask youth a second process evaluation question about what aspects of the program they liked and disliked. However, we did not complete this due to time constraints (i.e., youth did not want to spend time answering more questions). We received permission from CHEO to make this change.

Conclusion

Overall, the results of our evaluation were mixed. We had expected that YAY would increase the social skills and self-esteem of youth who participated in the program, but our hypothesis was not entirely supported. Although parents perceived that their children's social skills had improved after participating in the program, youth's scores on the SSIS did not reflect this. Why was this the case? There are a number of possible explanations.

First, we may have found no increase in youth's social skills because they did not complete the questionnaire accurately. Some youth may have been aware that they were participating in YAY because of their social skills 'challenges' and may not have been entirely honest in their self-appraisal (social desirability response bias). They may have given themselves high scores on social skills at the beginning of the program (pre-test) because they did not want anyone to think that they had social skills deficits. These higher baseline scores could have affected the results: since youth were scoring themselves relatively highly at pre-test, there was little room for improvement by the end of the program (post-test). We also found that youth felt the need to discuss their responses (i.e., talking out loud about what they were going to pick) despite instructions not to do so. As a result, their responses may have been influenced by others (e.g., they rated something as a '4' because they heard someone else say it, rather than it being an accurate reflection of their behaviour). Furthermore, it seemed that some of the youth had difficulty understanding items on the scale. Although the facilitators tried their best to explain them, there is no guarantee that the youth actually understood the questions. Finally, since the survey was quite long, it is possible that youth rushed through the last few questions and circled responses that did not necessarily reflect what they actually thought. Collectively, all of these factors could have introduced inaccuracies into the data, which could have led to non-significant findings (i.e., no improvement in youth's social skills scores). Similar limitations could also have affected the parents' responses. Parents may have felt compelled to give their children higher social skills scores at the end of the program because they thought that the program was 'supposed to' improve their social skills.
We are hoping that this was not the case since we told parents that the point of the evaluation was to see if the program did produce any improvements in children’s social skills. We also told them that their responses would in no way affect the quality of care that they received at ROCK (i.e., that if they gave the program low scores, this would in no way impact their level of care).

The lack of significant findings for the self-esteem measure could be attributed to the survey itself. When we administered the survey, we found that it was very clear to youth what it was measuring. Since many of the youth knew that the survey was measuring self-esteem, they may have felt the need to score themselves highly on it so that no one would think that they had low self-esteem. In fact, some youth made it a point to loudly discuss how proud they were of themselves despite behaviour that would indicate otherwise.

The results for our process evaluation-related question were more promising than our outcomes-related data. Overall, youth appeared to be very satisfied with the YAY program. They were particularly happy with the expertise and approachability of staff as well as their overall enjoyment of the program. These findings suggest that youth did gain from the program on some level.
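As a rough illustration of how the satisfaction scores above were derived (an overall total out of 60 plus per-item averages), here is a small sketch; the response values are invented for illustration and are not the evaluation's actual data.

```python
# Each row is one youth's ratings on the 12 satisfaction items
# (1 = Strongly Disagree ... 5 = Strongly Agree). Hypothetical values.
responses = [
    [4, 5, 4, 5, 4, 4, 5, 4, 4, 4, 5, 4],
    [3, 4, 4, 4, 5, 4, 4, 3, 4, 4, 5, 4],
    [5, 5, 4, 4, 4, 5, 4, 4, 3, 4, 4, 4],
]

# Overall satisfaction: average total score out of a possible 60.
totals = [sum(r) for r in responses]
overall = sum(totals) / len(totals)
print(f"Average total score: {overall:.1f} / 60")  # 50.0 / 60 for these values

# Per-item averages show which aspects of the program scored highest.
n_items = len(responses[0])
item_means = [sum(r[i] for r in responses) / len(responses) for i in range(n_items)]
print("Highest-rated item average:", max(item_means))
```

The per-item breakdown is what allows statements like "youth were particularly satisfied with the expertise and approachability of staff" rather than only reporting a single overall number.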

Recommendations and Next Steps

Although the evaluation findings were not entirely what we had hoped for, we learned a great deal from the process and the results were both informative and useful. Accordingly, we have used these results to develop a number of recommendations and next steps for the agency. Our recommendations/next steps for ROCK are as follows:

1. Stakeholders need to better understand/discuss the possible implications of the evaluation findings for the YAY program.

Our first recommendation is for ROCK staff to spend some time understanding and discussing the deeper implications of the YAY evaluation findings. Prior to conducting this evaluation, we expected that the results would indicate that YAY builds the self-esteem and social skills of youth. However, some of the results indicated otherwise. There are two possible explanations for this. First, it's possible that the program is actually effective and that we did not find an effect because of a flawed evaluation methodology (e.g., we didn't use a good social skills/self-esteem survey, we had too few participants, youth didn't understand the questions, etc.). This is plausible given the limitations of our design. However, it's also possible that the program is not working as effectively as we think it is, and that it isn't actually producing the social skills/self-esteem improvements that we had hoped for. If the latter is true, then it's important for the agency to have a further dialogue about this. This discussion could focus on questions like:

• Is it possible that the YAY program is not producing social skills/self-esteem changes to the extent that we expected?
• If so, what are some possible explanations?
• Is it because the program is too short to produce sustainable behavioural change?
• Or is the program more effective for certain types of clients over others (rather than all youth with social skills challenges)?
• Is the model of social skills training that we are using not as effective? If so, do we need to look at a different model?
• Are we looking at the right outcomes? In other words, could YAY be benefitting youth in other ways besides social skills and self-esteem?

In our opinion, a discussion around these questions could provide some valuable insights as well as next steps for the program. It would be particularly helpful in informing service planning and delivery.

2. To conduct ongoing evaluations of the YAY program.

Our second recommendation is to conduct ongoing evaluations of the YAY program. The results from the current evaluation were conflicting. Parents perceived that their children's social skills had improved from attending the program. However, youth's own scores on the social skills survey suggested otherwise. In addition, we found no significant improvements in youth's self-esteem despite the fact that many parents and other stakeholders feel that this is one of the major benefits of the program. It would be difficult to make definitive conclusions about the impact of the program without conducting additional evaluation research (particularly since the sample size was relatively small).

3. To conduct future evaluations of YAY with a more appropriate methodology (e.g., find more appropriate outcome measures, incorporate qualitative feedback, etc.)

Our third recommendation relates to the design of future evaluations. While there is no ‘perfect’ design for an evaluation, we found that the current design suffered from a number of limitations. First, as mentioned earlier, our social skills measure was not ideal for the client population that we serve. We decided to use this measure at the request of one of the reviewers from Phase 1 of the grant. The SSIS is certainly a current, valid and reliable measure of social skills. However, we found the survey to be too onerous for the youth in our programs. To our knowledge, the survey has generally been used in school-based populations (i.e., with students who have more ‘generic’ social skills issues). We think that the youth who participate in our programs generally have more severe social skills issues (relatively) and often have related mental health/learning issues as well. As such, the structure of the survey is not necessarily ideal for them. For example, the survey is too long for youth who have attention-related difficulties. In addition, the survey assumes a certain level of literacy. Based on our experience, youth who have reading/learning disabilities appear to have difficulty understanding some of the questions. Given the preceding, it would be ideal to find better outcome measures for future evaluations.

We feel that the purely quantitative nature of our evaluation was also a limitation. While there are many benefits to this approach, one of its drawbacks is that people are unable to explain the rationale behind their answers. In many cases, this is not an issue. However, in our case, we found that a number of parents and youth did want to elaborate on their responses (especially those who were unfamiliar with the Likert format of the scales). For example, some parents wanted to clarify that they were referring to specific settings when they answered the social skills questions (i.e., they were thinking about how their daughter/son behaved at home rather than at school, since they did not get the chance to observe their behaviour at school). This information would have been useful in the interpretation of the results. It's possible that the differences between youths' and parents' reports of social skills changes could be attributed to different settings: parents could have been answering based on how their kids behaved at home, whereas youth may have been thinking about their behaviour at school/home/broader settings. This type of in-depth understanding of the results is not possible without some qualitative component. As such, we plan to use a mixed-methods design in future evaluations of the program (i.e., we would combine quantitative survey questions with qualitative open-ended questions).

4. To continue to build stakeholder 'buy-in' for future YAY evaluations.

Our fourth recommendation is to continue to increase and/or maintain stakeholder buy-in for the YAY social skills groups' evaluation. Most of the stakeholders who participated in this project were excited that we conducted an overdue and much-needed evaluation of the program. It is important to maintain this momentum in the future.

Lessons Learned

In addition to recommendations for our own agency, we also have insights/suggestions for other agencies that are thinking of conducting similar evaluations of their own programs. We wanted to share this information since we thought that it would have been helpful for us to know it in advance. Some of our major lessons were:

1. Conducting a good evaluation takes time, so it's a good idea to start early!

We learned that conducting an evaluation is a fairly involved process. You first need to develop an evaluation framework, which involves conducting a literature review, developing a program logic model, and consulting with stakeholders throughout the process. You may also spend time revising your logic model and indicators/outcomes multiple times. After this, you move on to the implementation phase, which could involve pilot testing, administration of multiple surveys/focus groups, and unanticipated statistical analyses. Since there are so many steps involved in the process, evaluations will likely take longer than expected. It is therefore advisable to start the process as early as possible.

2. Try to consult with as many stakeholders as you can – it will help you in the long run.

We sought feedback from a variety of stakeholders throughout the evaluation process. This allowed us to approach the evaluation from a variety of vantage points (i.e., the youth perspective, the parent perspective, the group facilitator perspective, etc.). As a result, we were able to conduct a more comprehensive evaluation of the YAY social skills groups. Overall, we found the involvement of stakeholders very helpful, and we would suggest that other agencies that undertake this process do something similar.

3. Make sure that you have all the resources that you will need to conduct the evaluation.

Through this experience, we learned that conducting an evaluation requires a number of resources. Although the Centre provides a lot of support, you will need to do a significant amount of work on site at your own agency, which requires a good chunk of time. It's also helpful to have the needed human resources. Our team found it very helpful to have both a research associate and a student work on this project. Their background in research helped to make some of the research-related tasks (e.g., the literature review, selection of measures) more manageable. Finally, there are also material resources involved in conducting an evaluation. For example, we offered incentives (gift cards) to stakeholders for providing their feedback. Given the preceding, it is important for agencies to have an idea of the resources they will require before they start the process.

4. Evaluation activities often don't go according to plan, so it's important to adapt and persevere.

Our planned evaluation activities often did not go as expected. For example, our sample size was significantly lower than we expected despite our best attempts. YAY had one of the highest attrition rates it has had in years – many youth signed up to participate in the program but did not show up for any of the sessions. In addition, a number of youth who did participate were absent during the first or last sessions of the program (meaning we had no pre-test or post-test data from them). We found that in these situations it was important to 'go with the flow' and to try to gain as much information as possible from the data that you do have.

Final Thoughts

Overall, we feel that we learned a lot through this evaluation process. Although the findings were not always ‘statistically significant’, we do think that the YAY program is significant in youth’s lives. Parents continue to tell us numerous stories about how the program has made a difference in their children’s lives. As a result of these benefits, they continue to enrol their children in future sessions of the program. Many youth also directly tell us that they have benefitted from the program – some go on to become mentors in the mentoring component of the program. One of our most important next steps is to continue on with future evaluations of the YAY program. By applying what we have learned through this project, we hope to gain an even better understanding of what is working and not working with the program.

Appendix A

Social Skills Improvement System- Student Version

Decide how true each sentence is for you.

Response options: N = Not True, L = Little True, A = A Lot True, V = Very True

I ask for information when I need it. N L A V

I pay attention when others present their ideas. N L A V

I try to forgive others when they say “sorry”. N L A V

I am careful when I use things that are not mine. N L A V

I stand up for others when they are not treated well. N L A V

I say “please” when I ask for things. N L A V

I feel bad when others are sad. N L A V

I get along with other children/adolescents. N L A V

I ignore others who act up in class. N L A V

I take turns when I talk with others. N L A V

I show others how I feel. N L A V

I do what the teacher asks me to do. N L A V

I try to make others feel better. N L A V

I do my part in a group. N L A V

I let people know when there’s a problem. N L A V

I look at people when I talk to them. N L A V

I help my friends when they are having a problem. N L A V

I make friends easily. N L A V

I do my work without bothering others. N L A V

I am polite when I speak to others. N L A V

I stay calm when I am teased. N L A V

I follow school rules. N L A V

I ask others to do things with me. N L A V

I am well-behaved. N L A V

I say nice things about myself without bragging. N L A V

I stay calm when people point out my mistakes. N L A V

I try to think about how others feel. N L A V

I meet and greet new people on my own. N L A V

I do the right thing without being told. N L A V

I smile or wave at people when I see them. N L A V

I try to find a good way to end a disagreement. N L A V

I pay attention when the teacher talks to the class. N L A V

I play games with others. N L A V

I do my homework on time. N L A V

I tell others when I’m not treated well. N L A V

I stay calm when dealing with my problems. N L A V

I am nice to others when they are feeling bad. N L A V

I ask to join others when they are doing things I like. N L A V

I keep my promises. N L A V

I say “thank you” when someone helps me. N L A V

I stay calm when others bother me. N L A V

I work well with my classmates. N L A V

I try to make new friends. N L A V

I tell people when I have made a mistake. N L A V

I ask for help when I need it. N L A V

I stay calm when I disagree with others. N L A V

Appendix B

Social Skills Improvement System - Parent Version

Please read each item and think about your child's behaviour during the past two months. Then, decide how often your child displays the behaviour.

Response options: N = Never, S = Seldom, O = Often, A = Almost Always

Expresses feelings when wronged. N S O A
Follows household rules. N S O A
Tries to understand how you feel. N S O A
Says "thank you". N S O A
Asks for help from adults. N S O A
Takes care when using other people's things. N S O A
Pays attention to your instructions. N S O A
Tries to make others feel better. N S O A
Joins activities that have already started. N S O A
Takes turns in conversations. N S O A
Says when there is a problem. N S O A
Works well with family members. N S O A
Forgives others. N S O A
Speaks in appropriate tone of voice. N S O A
Stands up for others who are treated unfairly. N S O A
Is well-behaved when unsupervised. N S O A
Follows your directions. N S O A
Tries to understand how others feel. N S O A
Starts conversations with peers. N S O A
Uses gestures or body appropriately with others. N S O A
Resolves disagreements with you calmly. N S O A
Respects the property of others. N S O A
Makes friends easily. N S O A
Says "Please". N S O A
Questions rules that may be unfair. N S O A
Takes responsibility for his/her own actions. N S O A
Completes tasks without bothering others. N S O A
Tries to comfort others. N S O A
Interacts well with other children. N S O A
Responds well when others start a conversation or activity. N S O A
Stays calm when teased. N S O A
Does what he/she promised. N S O A
Introduces himself/herself to others. N S O A
Takes criticism without getting upset. N S O A
Says nice things about himself/herself without bragging. N S O A
Makes a compromise during a conflict. N S O A
Follows rules when playing games with others. N S O A
Shows concern for others. N S O A
Invites others to join in activities. N S O A
Makes eye contact when talking. N S O A
Tolerates peers when they are annoying. N S O A
Takes responsibility for his/her own mistakes. N S O A
Starts conversations with adults. N S O A
Responds appropriately when pushed or hit. N S O A
Stands up for herself/himself when treated unfairly. N S O A
Stays calm when disagreeing with others. N S O A

Appendix C

Self-Esteem Survey - Youth

Please circle the response that best represents what you think. Keep in mind that there are no right or wrong answers – just opinions.

Response options: Strongly Disagree, Disagree, Agree, Strongly Agree

I like myself.
I feel that there are many good things about me.
I feel proud of myself.
I feel confident.
I like the way I am.

Appendix D

Youth Satisfaction Survey

Response options: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree

The place where the program was held was easy to get to. 1 2 3 4 5
The place where the program was held seemed safe and clean. 1 2 3 4 5
Program activities usually started and ended on time. 1 2 3 4 5
Program staff knew a lot about their jobs. 1 2 3 4 5
Program staff were friendly and easy to talk to. 1 2 3 4 5
My questions were always answered. 1 2 3 4 5
The information I received was important to me. 1 2 3 4 5
The information I received was useful. 1 2 3 4 5
The information I received encouraged me to improve. 1 2 3 4 5
The information I received was easy to understand. 1 2 3 4 5
I liked participating in the program. 1 2 3 4 5
The program was about the right length (i.e., not too short and not too long). 1 2 3 4 5

Appendix E

Consent Form

Description of the evaluation and your participation:

ROCK is conducting an evaluation of the Youth Aiding Youth Social Skills program. The purpose of this evaluation is to assess the effectiveness of the social skills groups to determine if they are meeting the needs of the youth who participate in the program.

As a family participating in the Youth Aiding Youth social skills groups, we would like to invite you and your child to participate in the evaluation study. Your participation will involve the completion of questionnaires at the beginning of the program, at the end of your involvement with the program, and at a 6 week follow-up. If you decide to participate in the 6-week follow-up, you will have the opportunity to enter a lottery for a $100 gift card prize. Your child will be asked to complete the questionnaires at the beginning of the program, and at the end of the program. The amount of time required for your participation will be minimal.

We will use the information from this study to determine whether the Youth Aiding Youth social skills program is helpful in addressing problems relating to social skills.

Confidentiality: All the information that you provide will be kept confidential. Your information will be assigned a code, and your name will not be used on any of the forms. The list connecting your name to this code will be kept in a locked file, and when the study is completed, the list will be destroyed. Your name and any personal identifying information will not be used in any report.

Voluntary Participation: Your participation in this evaluation is voluntary. You may choose not to participate or you may withdraw from the study at any time. You will not be penalized in any way if you decide not to participate in this study or choose to withdraw at a later date. There are no known benefits to you that would result from your participation in this study; however, your information will help us understand some of the ways the program can be improved for others.

Contact Information: If you have any questions or concerns about this study, please contact Dr. Surbhi Bhanot-Malhotra at 905 634-2347 ext. 265 or [email protected]

CONSENT

I have read the above information regarding my participation in the evaluation of the Youth Aiding Youth social skills program and have been given the opportunity to ask questions. I give my consent to participate in this evaluation. I also give my consent for my child to participate in this study.

References

Ang, R., & Hughes, J. (2001). Differential benefits of skills training with antisocial youth based on group composition: A meta-analytic investigation. School Psychology Review, 31, 164-185.

Beelmann, A., Pfingsten, U., & Losel, F. (1994). Effects of training social competence in children: A meta-analysis of recent evaluation studies. Journal of Clinical Child Psychology, 23, 260-271.

Bullis, M., Walker, H. M., & Sprague, J. R. (2001). A promise unfulfilled: Social skills training with at-risk and antisocial children and youth. Exceptionality, 9, 67-90.

Deluty, R. H. (1984). Behavioural validation of the Children's Action Tendency Scale. Journal of Behavioural Assessment, 6, 115-130.

Demaray, M. K., Ruffalo, S. L., Carlson, J., Busse, R. T., & Olson, A. E. (1995). Social skills assessment: A comparative evaluation of six published rating scales. School Psychology Review, 24, 648-671.

Gresham, F. M. (1981). Assessment of children's social skills. Journal of School Psychology, 19, 120-133.

Gresham, F. M. (1985). Behavior disorder assessment: Conceptual, definitional, and practical considerations. School Psychology Review, 14, 495-509.

Gresham, F. M., & Elliott, S. N. (1990). Social Skills Rating System. Circle Pines, MN: American Guidance Service.

Gresham, F. M., & Elliott, S. N. (2008). Social Skills Improvement System Rating Scales. Minneapolis, MN: NCS Pearson.

Gumpel, T., Gresham, F. M., Cook, C. R., Crews, S. D., & Kern, L. (2004). Social skills training for children and youth with emotional and behavioral disorders: Validity considerations and future directions. Behavioral Disorders, 30(1), 32-46. Retrieved from http://search.proquest.com/docview/219677261?accountid=12347

Hansen, D. J., Nangle, D. W., & Meyer, K. A. (1998). Enhancing the effectiveness of social skills interventions with adolescents. Education and Treatment of Children, 21, 489-513.

Harris, S. L. (1998). Behavioural and educational approaches to the pervasive developmental disorders. In F. R. Volkmar (Ed.), Autism and pervasive developmental disorders: Cambridge monographs in child and adolescent psychiatry (pp. 195-208). New York: Cambridge University Press.

Hepler, J. B. (1994). Evaluating the effectiveness of a social skills program for preadolescents. Research on Social Work Practice, 4, 411-435.

King, K. A., Vidourek, R. A., Davis, B., & McClellan, W. (2002). Increasing self-esteem and school connectedness through a multidimensional mentoring program. Journal of School Health, 72(7), 294-299.

Kupersmidt, J., Coie, J., & Dodge, K. (1990). The role of peer relationships in the development of disorder. In S. Asher & J. Coie (Eds.), Peer rejection in childhood (pp. 274-308). New York: Cambridge University Press.

Ladd, G. W. (1984). Social skills training with children: Issues in research and practice. Clinical Psychology Review, 4, 317-337.

Losel, F., & Beelmann, A. (2003). Effects of child skills training in preventing antisocial behavior: A systematic review of randomized evaluations. Annals of the American Academy of Political and Social Science, 587, 84-109.

Maag, J. W. (2006). Social skills training for children with behavioral challenges: A review of reviews. Behavioral Disorders, 32(1), 5-17.

Marsh, H. W., Smith, I. D., & Barnes, J. (1983). Multitrait-multimethod analyses of the Self-Description Questionnaire: Student-teacher agreement on multidimensional ratings of student self-concept. American Educational Research Journal, 20, 333-357.

McIntosh, R., Vaughn, S., & Zaragoza, N. (1991). A review of social interventions for students with learning disabilities. Journal of Learning Disabilities, 24, 451-458.

Merrell, K. W. (1993). Using behavioral rating scales to assess social skills and antisocial behavior in school settings: Development of the School Social Behavior Scales. School Psychology Review, 22, 115-133.

Merrell, K. W. (2001). Assessment of …

Newcomb, A., Bukowski, W., & Pattee, L. (1993). Children's peer relations: A meta-analytic review of popular, rejected, neglected, controversial, and average sociometric status. Psychological Bulletin, 113, 306-347.

Michelsen, L., & Wood, R. (1982). Development and psychometric properties of the

Children’s Assertive Scale. Journal of Behavior Assessment, 4, 3–13. Ollendick, T. H. (1983). Development and validation of the Children’s Assertiveness

Inventory. Child and Family Behavior Therapy, 5, 1–15. Parker, J., & Asher, S. (1987). Peer relations and later personal adjustment: Are low-

accepted children at-risk? Psychological Bulletin, 102, 357-389. Quinn, M. M., Kavale, K. A., Mathur, S. R., Rutherford, R. B., & Forness, S. R. (1999). A

meta-analysis of social skill interventions for students with emotional and behavioral disorders. Journal of Emotional and Behavioral Disorders, 7, 54-64.

Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ: Princeton

University Press.

Schneider, B. H. (1992). Didactic methods for enhancing children's peer relations: A quantitative review. Clinical Psychology Review, 12, 363-382.

Schneider, B. H., & Byrne, B. M. (1985). Children's social skills training: A meta analysis.

In B. H. Schneider, K. H. Rubin, & J. E. Ledingham (Eds.), Children's peer relations: Issues in assessment and intervention (pp. 175-190). New York: Springer-Verlag.

Segrin, C. (2000). Social skills deficits associated with depression. Clinical Psychology

Review, 20(3), 379-403.

Spence, S. H. (1981). Differences in social skills performance between institutionalized juvenile male offenders and a comparable group of boys without offence records. BritishJournal of Clinical Psychology, 20, 163–171.

Spence, S. H. (1995). Social skills training: Enhancing social competence and children and adolescents. Windsor, UK: The NFER-NELSON Publishing Company Ltd.

Spence, S.H. (2003). Social Skills Training with Children and Young People: Theory, Evidence and Practice. Child and Adolescent Mental Health, 8(2), 84-96.

Spence, S. H., Donovan, C., & Brechman-Toussaint. (1999). Social skills, social outcomes, and cognitive features of childhood social phobia. Journal of Abnormal Psychology,108, 211–221.