Wednesday 21 March 2018

Evaluation in WIL: Themes across case studies



In the afternoon session of the NSW/ACT Chapter Forum, participants analysed a number of case studies relating to evaluation in work integrated learning (WIL).

Blair Slater, Senior Careers Consultant (International), UNSW Sydney, has collated the themes across the case studies, and presented a snapshot of those themes below.

It would be great to hear your views, in the comments section below, on the challenges and tensions identified by the groups. And, more importantly, how have you dealt with these challenges and tensions at your university?

1. Stakeholders:

  • Predominant stakeholders in WIL evaluations:
    Students (current and prospective), academics, designers/administrators of the program or course (careers office, faculty office), senior executives (decision makers), employers, recipients of the work performed (clients), third-party providers, the university or organisation recruitment team, and future employers within the industry, since they may potentially benefit from WIL graduates.

2. Desired Outcomes:

  • Acquire a better understanding of impact in order to achieve change.
  • Understand the value of a WIL initiative and inform program/course development. “The purpose should be to improve, not to prove.”
  • Show positive results in the form of benefits for stakeholders; however, negative results also need to be considered.
  • Demonstrate value to increase funding.
  • Formulate a “toolkit” for success for stakeholders.
  • Align WIL with graduate attributes and employability.
  • Increase the perceived return on investment for students.

3. Steps in the evaluation:

  • Develop research questions and examine whether existing theory or literature can help inform the questions and area of focus. Ask: will the evaluation need to be longitudinal? Is it summative or formative?
  • Develop the methodology, including an ethics application if research publication is being considered.
  • Collect data: online surveys, student self-evaluation. In-person interviews are effective for qualitative data.
  • Analyse and disseminate findings to stakeholders.
  • All of the above need to be assessed in terms of feasibility and cost before starting.

4. Opportunities, Challenges and Tensions:

Opportunities

  • Evaluation approaches can potentially be applied to other disciplines, for example from faculty to faculty.
  • Opportunities for curriculum enhancement/development, and for attracting greater academic support/funding.
  • Evaluation can create a good baseline for future development and evaluation.
  • If evaluation is done well, it could be used for, or extended into, research.

Challenges

  • When evaluating outcomes, be open to the possibility of discovering an unintended outcome, which may warrant further evaluation or research. Care is also needed to avoid censoring negative findings.
  • Not much is published on the benefits for employers. What evaluations could be done in this area to help attract more employers?
  • The importance of defining/listing the variables, in addition to defining and understanding the assumptions being made, prior to undertaking evaluation.
  • Evaluations can become too broad if variables cannot be limited or identified, for example students’ backgrounds, academics, industry and other factors that may be hard to control or know.
  • Not being as successful as originally thought. Do you still report the negative? Does this change the dissemination?
  • Tracking alumni in any longitudinal evaluation.

Tensions

  • Balancing research/evaluation with the program agenda. Someone dedicated to the research component is often needed. Undertaking a good quality evaluation requires an investment of time and resources, and cost versus strategic value needs to be assessed.
  • Diverse conceptions of “value” depending on the stakeholder. The stakeholder administering the evaluation may have different values in mind from those who will be making decisions based on it.
  • Employability skills need a clear definition for both the evaluator and the subject.
  • Employability is also often hard to measure and must be defined. Is it measured through satisfaction (assessed against student expectations), learning outcomes, or engagement and activity level of learning longitudinally?
  • If students are aware that their reflections will be used for evaluation, will this change what they write or the scores they give themselves in any self-ratings?
  • Be cautious when comparing the same WIL format across different employers.

ACEN NSW/ACT FORUM SNAPSHOT


Digital images courtesy of Dr Kylie Twyford

Evaluation is something that I have been giving a lot of thought to recently, and evaluation of WIL in particular. It seems I am not alone, judging by the numbers attending the NSW/ACT ACEN Chapter forum on Realising opportunities through WIL evaluation, which was held at UTS in Sydney on 14 February.


The purpose of the day was to explore how evaluation can be used for a broad range of purposes in WIL, such as informing improvements to practice, influencing policy and strategy, informing research, and measuring impact (on students, partners and the wider community).


The day commenced with thought-provoking presentations from experienced practitioners on each focus area.


Associate Professor Rachael Hains-Wesson (University of Sydney Business School) presented on evaluating WIL to inform practice. Using the evaluation of an International Study Tours program, she demonstrated how a mixed-methods approach (e.g. critical friends’ meetings, reflections on the literature, an online survey of third-party providers, and student focus group interviews) can inform key recommendations for effective and inclusive collaboration amongst stakeholders in creating a high-quality short-term study tour program for work integrated learning.



Julieanne O’Hara (Manager of UTS Careers) spoke about evaluation of WIL to influence internally. She told the story of how UTS Careers developed an Internship Manifesto by evaluating existing practices and sharing responsibility for WIL through a Careers Community (a community of practice), which enabled new ways of influencing internally.

Dr Theresa Winchester-Seeto (Winchester-Seeto Consultancy) provided some useful definitions of evaluation and its formative and summative forms. She spoke about the differences (and overlap) between evaluation and research: “The purpose of evaluation is to improve, not prove” (Fain et al., 2006). She also explained the potential problems with evaluation as research, and the points that need to be considered if you want to turn your evaluation into research.

Associate Professor Annette Marlow (University of Tasmania) presented on evaluation to measure the impact of WIL. She spoke about the Work Integrated Learning (WIL) Evaluation Tool, which she and her colleagues have developed to measure, and report on, student perceptions of their learning while undertaking WIL. She referred to the use of feedback rubrics and the importance of formative feedback (given during the WIL program) in gauging impact. The question was raised as to how to deal with negative feedback about a host organisation, or indeed a WIL program.

At the end of the presentations, participants were given the opportunity to discuss them in groups and then ask the panel questions focusing on the key evaluation topics.

A delicious lunch (thank you UTS!) provided an opportunity for some networking and sharing of ideas before we all reconvened for the afternoon sessions.



The afternoon started with participants working in groups, each presented with a fictitious WIL case study that required evaluation with a focus on either informing practice, influencing internally, measuring impact or informing research. Reflecting on the morning presentations, groups were asked to discuss what the desired outcomes of the evaluation would be for their particular case study and purpose. They were also asked to identify what steps might be involved in their evaluation. Lively and engaging discussion enabled groups to identify the opportunities, challenges and tensions that might arise from such evaluation. Each table presented their discussion to the whole group.

The afternoon concluded with a presentation by Dr Anna Rowe and Associate Professor Kate Lloyd on an institution-wide approach to evaluation of Macquarie University’s Professional and Community Engagement (PACE) initiative. This was a good session to end the day on, as it brought together many of the themes that had been discussed throughout the day. The presenters spoke about the opportunities and challenges of implementing a university-wide Evaluation Framework to determine the impact of the initiative for the University, its stakeholders and the broader community, with a focus on continuous program improvement and development.



I’m sure that by the end of the day participants felt more confident in approaching evaluation and applying tools and methods to their own programs (I certainly did!). 

Links to all slides from the presentations are available here.


Donna Denyer

Career Development Officer, Careers Centre

University of Sydney