Pre-conference workshop program: Tuesday 18 September 2018

>>> DOWNLOAD a printable conference workshop program

View Monday program here.

8–9am REGISTRATION
9am–12:30pm WORKSHOP PROGRAM

Understanding evaluation contexts (half day) – BEGINNER

Sarah Mason

> Details
> Register

Interview skills: listening to understand (half day) – BEGINNER

Jade Maloney and Kerry Hart

> Details
> Register

Principles-focused evaluation for transformation (full day) – BEGINNER / INTERMEDIATE

Michael Quinn Patton and Kate McKegg

> Details

WORKSHOP SOLD OUT

Behaviour architects: a game that applies behavioural insights to improve policy solutions (full day) – INTERMEDIATE / ADVANCED

Karol Olejniczak

> Details
> Register

Conflict resolution skills: a toolbox for evaluators (full day) – BEGINNER / INTERMEDIATE / ADVANCED

Ruth Pitt

> Details
> Register

12:30–1:30pm LUNCH
1:30–5pm WORKSHOP PROGRAM

From data to learning – how to run an effective reflection workshop (half day) – BEGINNER / INTERMEDIATE

Byron Pakula

> Details
> Register

Questionnaire design: asking the right questions (half day) – BEGINNER

Andrew Hawkins and Jasper Odgers

> Details
> Register

Patton and McKegg continued

Olejniczak continued

Pitt continued

WORKSHOP DETAILS

Understanding evaluation contexts

presented by Sarah Mason, Centre for Program Evaluation, The University of Melbourne

HALF DAY (AM) – BEGINNER

Experienced evaluators will say time and again that context matters in evaluation: that there is no universal ‘right way’ to do an evaluation, and that evaluation contexts—the settings in which programs, policies and evaluations are situated—necessarily affect the design, implementation and effectiveness of evaluation practice. This is documented in the AES’s evaluator competency framework, which dedicates an entire domain to understanding culture, stakeholders and context. It is also encouraged by evaluation thought leaders who call for ‘context-sensitive’ approaches to evaluation. But what does ‘context’ actually mean? And what pieces of information should an evaluator pay attention to if they are to adapt their practice to fit these contexts?

This half-day workshop introduces participants to the Framework for Situation Awareness in Program Evaluation: a research-based tool designed to support evaluators in defining and understanding the contexts in which they work. Through a combination of mini-lectures and evaluation scenarios, participants will gain practical experience interpreting real-world evaluation situations, along with feedback based on the actual outcomes of these scenarios.

By the end of this workshop participants will be able to:

  • explain the three levels of situation awareness
  • discuss the Framework for Situation Awareness in Program Evaluation (‘the SA Framework’)
  • identify evaluation-specific situation awareness requirements
  • apply the SA Framework and information requirements when interpreting evaluation situations.

About the presenter:
Sarah Mason is a Research Fellow based at the Centre for Program Evaluation, The University of Melbourne, where she has taught classes in quantitative methods and mixed methods for evaluation. Over the past 15 years, Sarah has conducted research and evaluation projects across a wide range of contexts, including Australia, the United States, Afghanistan, East Timor, Myanmar, Thailand and Cambodia. She recently led the design and implementation of an international survey of more than 1,000 schools across the globe. She has post-graduate training in experimental, quasi-experimental and non-experimental research designs, qualitative and quantitative data analysis, program theory-driven evaluation and evaluation theory.

Sarah has an MA in Evaluation and Applied Research Methods from Claremont Graduate University (CGU) and is currently pursuing a Ph.D. in the same field. She also has an MA in Development Studies from the University of New South Wales, a Graduate Diploma in Early Years Education and a Bachelor of Arts in Political Science and International Relations from the University of Queensland. She regularly presents at international conferences and was recently awarded a Faster Forward Fund scholarship for innovative ideas in evaluation.

> back to overview > register


Interview skills: listening to understand

presented by Jade Maloney and Kerry Hart, ARTD Consultants

HALF DAY (AM) – BEGINNER

This applied workshop is a practical forum to learn the fundamentals of good interviewing through practice. It aligns to AES professional learning competency 4: ‘research methods and systematic inquiry’. It is designed for people who need to collect qualitative data from clients or stakeholders for evaluation, but have limited prior experience doing so.

The workshop covers what is needed to make a good interview – from asking the right questions through preparation and interviewer competencies to debriefing. The learning objectives are for participants to:

  • understand approaches to questioning and sampling for interviews
  • understand different interview structures and when they are appropriate
  • understand how to develop good interview questions
  • understand what makes a good interview experience
  • be able to appropriately prepare for an interview
  • be aware of the skills and competencies required for interviewing
  • put the skills and competencies into practice
  • be able to assess interviewer skills and competencies
  • understand important post-interview tasks
  • have increased confidence to undertake interviews.

The workshop uses applied techniques to support adult learning – outlining the theory, then bringing it to life through participant discussion of how it fits with their projects, practice-based scenarios, and reflection using the interviewer competency framework. We ask participants to provide interview guides or descriptions of their projects ahead of time, so we can tailor scenarios and exercises to their context. Participants also receive a workbook with more detailed notes to take away and reflect on, which allows more time for practical skills building in the workshop. At the end of the session, participants should feel equipped to conduct interviews with a range of stakeholders for evaluation.

About the presenters:

Jade Maloney originally trained as a journalist and interviewed professionals from a range of industries for a careers book publisher before shifting into evaluation. Over the past decade at ARTD, she has conducted focus groups and interviews with clients of various social services, and with managers and staff in non-government organisations and government agencies. As ARTD’s lead for disability policy, Jade brings a strong understanding of working with people with varying communication needs to collect meaningful data.

Senior Consultant Kerry Hart has been interviewing with ARTD for over 15 years, using her specialist skills in face-to-face and telephone interviewing and small group processes to collect meaningful data to answer key evaluation questions. Kerry has conducted interviews and focus groups with senior executives in government agencies, frontline staff in non-government organisations, people with disability, children with cancer and their siblings, parents of children with disability, seniors, people from culturally and linguistically diverse backgrounds, and people from Aboriginal communities. She brings a strong understanding of adapting approaches to different contexts.

> back to overview > register


From data to learning – how to run an effective reflection workshop

presented by Byron Pakula, Clear Horizon Consulting

HALF DAY (PM) – BEGINNER / INTERMEDIATE

The foundation of good M&E practice is that the data collected is used to support learning and inform decision making. However, many practitioners have observed that developing, and even implementing, a well-designed M&E system does not necessarily lead to learning. Moreover, the increasing complexity of the programmes that evaluators are asked to support means the role of M&E in enabling reflection, learning, adaptation and decision making is more important now than ever.

The purpose of this workshop is to train evaluation practitioners and project managers in the facilitation approach of Reflection Workshops. Building on the facilitated processes of summit workshops used in evaluations, reflection workshops are designed to build consensus among project staff (and often the donors) on the context, activities, outcomes, impacts, cross-cutting issues and management responses. This serves three purposes for the project team: firstly, to provide evidence for reporting to donors (accountability); secondly, to converge on an agreed set of lessons and management responses (improvement); and thirdly, to document the evidence against the key evaluation questions within the M&E framework (knowledge, M&E). The technique of Reflection Workshops includes developing an evidence matrix to link the data to performance expectations and evaluation questions; co-developing the findings from the data with the project team; and co-designing the recommendations with the decision makers.

The workshop includes a theoretical background followed by practical processes for setting workshop objectives; scoping with clients; preparing for the workshop; facilitation approaches for different sessions in the agenda; and lessons and tips, including writing up proceedings.

This workshop builds a foundational evaluation skill and is designed for beginner and intermediate practitioners, including M&E advisors and project managers. There are no prerequisites, though knowledge of summit workshops, results charts and/or evidence matrices is an advantage.

About the presenter:
Byron Pakula is a Principal Consultant and the Aid Effectiveness Team Leader at Clear Horizon. Byron has gathered a broad array of professional experience working in the government, private and not-for-profit sectors internationally and in Australia for over 15 years. He has worked across much of Asia as a program manager, evaluator and consultant. Byron specialises in design, monitoring and evaluation for complex programs and in complex environments, and currently supports around 50 diverse investments across Indonesia, Timor Leste, Myanmar, ASEAN and the Pacific. A key passion for Byron is the use of data and evidence to inform decision making, and he works with a range of clients to improve how adaptive programs use evidence to increase learning.

> back to overview > register


Questionnaire design: asking the right questions

presented by Andrew Hawkins and Jasper Odgers, ARTD Consultants

HALF DAY (PM) – BEGINNER

This applied workshop is a practical forum to learn the fundamentals of good survey design through practice. It aligns to AES professional learning competency 4: ‘research methods and systematic inquiry’.
It is designed for people who need to collect standardised satisfaction and outcomes data from clients or stakeholders as part of their professional practice, but who have not had previous experience designing questionnaires. It is also suitable for funders of evaluation and research who need to understand what constitutes good practice when asked to review survey instruments as part of managing an evaluation contract.

The workshop covers what is needed to make a good survey – from using a survey for the right kind of questions, through approaches to sampling, scale design and question wording, to options for distribution. The learning objectives are for participants to:

  • understand situations suited to different survey methods
  • identify appropriate survey methods for their project
  • understand approaches to sampling and their strengths and weaknesses
  • understand implications of design choices for analysis
  • have a basic understanding of reliability and validity
  • understand different types of scales
  • design appropriate scales
  • identify and avoid common question design errors
  • understand processes to refine questionnaires
  • consider survey distribution channels and their strengths.

The workshop uses applied techniques to support adult learning – outlining the theory, then bringing it to life through participant discussion of how it fits with their project, and practice-based examples (e.g. correcting poorly worded questions and picking up problems with scales). We ask participants to provide survey instruments or information about their project ahead of time, so we can tailor delivery to their context. Participants also receive a workbook with more detailed notes to take away and reflect on, including a list of top tips on what to avoid in question construction. This allows more time for practical skills building exercises in the workshop.

About the presenters:
Andrew Hawkins works as a trusted advisor and strategic evaluator for public policy professionals, generating insight and evidence for decision-making. Andrew has worked for a wide range of Australian and NSW public sector agencies and not-for-profits on the design, monitoring and evaluation of policies, programs and other interventions into complex systems. Andrew has a background in psychometrics and survey design, and regularly designs and reviews surveys to ensure meaningful data collection for monitoring and evaluation projects across a range of sectors.

Jasper Odgers has been studying and working in quantitative research and data analysis for the past eight years. He manages online surveys, quantitative data analysis and data visualisation for all of ARTD’s reporting. He has recently managed several stakeholder surveys for NSW Government and Commonwealth Government agencies, which involved complex survey coding, strategies to increase response rates, and factor analysis. He brings a strong understanding of dynamic survey management, processes to ensure data quality and strategies for boosting response rates.

> back to overview > register


Principles-focused evaluation for transformation

presented by Michael Quinn Patton, Independent Evaluation Consultant, Minnesota, USA and Kate McKegg, Director, The Knowledge Institute Ltd

FULL DAY – BEGINNER / INTERMEDIATE

An effectiveness principle is a statement that provides guidance about how to think or behave toward some desired result (either explicit or implicit) based on norms, values, beliefs, experience, evidence and knowledge. However, the effectiveness of a principle in achieving a desired result is a hypothesis until evaluated within some context to determine its meaningfulness, truth, feasibility and utility for those attempting to apply it. Principles can provide guidance for personal behaviour, programs, initiatives, systems and evaluation. Principles can also be evaluated, but evaluating principles is different from evaluating projects and programs. Examples of principles-focused evaluations will be presented, analysed and discussed.

Participants will learn to use the GUIDE framework for principles-focused evaluation. GUIDE is an acronym and mnemonic specifying the criteria for a high-quality principle statement, one that (G) provides guidance, (U) is useful, (I) inspires, (D) supports ongoing development and adaptation, and (E) is evaluable. After presenting the GUIDE framework, Michael and Kate will engage participants in applying and using the framework on their own evaluations. Principles-focused evaluation is especially appropriate for helping innovative initiatives navigate the uncertainties and turbulence of complex dynamic systems. The workshop is based on Patton’s latest book, Principles-Focused Evaluation: The GUIDE (2018). 

Principles-focused evaluation for transformations is a special application – and an especially relevant one. Transformations are not simply projects and programs; they can involve major, complex and rapid systems changes. Given the complex, uncertain, multidimensional and multi-level (macro, meso, micro) nature of transformative change, principles-focused global systems change initiatives are appropriately evaluated with principles-focused Blue Marble (whole Earth) evaluation. The implications of this will be presented, discussed and applied.


The workshop will be facilitated by Kate McKegg with live virtual input from Michael Patton. Kate and Michael have worked together on many previous occasions, with Kate facilitating and supporting developmental evaluation projects and training in New Zealand, Australia and Japan, and co-editing the textbook Developmental Evaluation Exemplars.


About the presenters:

Michael Quinn Patton is an independent evaluation consultant based in Minnesota, USA. He is a former President of the American Evaluation Association (AEA) and the author of eight major evaluation books, including the fourth editions of Utilization-Focused Evaluation and Qualitative Research and Evaluation Methods, which are used in over 500 universities worldwide. He has also authored books on Practical Evaluation, Creative Evaluation, and Developmental Evaluation: Applying Systems Thinking and Complexity Concepts to Enhance Innovation and Use. With two Canadian co-authors, he wrote Getting to Maybe: How the World is Changed, a book on the dynamics of social innovation and transformation. He is the recipient of the Myrdal Award for Outstanding Contributions to Useful and Practical Evaluation Practice, the Lazarsfeld Award for Lifelong Contributions to Evaluation Theory, and the 2017 Research on Evaluation Award, all from the AEA. He regularly conducts training for The Evaluators’ Institute and the International Program for Development Evaluation Training. His latest books are Principles-Focused Evaluation (2018) and Facilitating Evaluation: Principles in Practice (Sage Publications, 2018). He is currently working on a book on Blue Marble Evaluation for Global Systems Transformation.

Kate McKegg has worked in the fields of evaluation, evaluation capacity building, research, policy and public sector management since the late 1980s. She has worked alongside Michael Quinn Patton on a number of occasions, including co-editing the book Developmental Evaluation Exemplars: Principles in Practice, and is highly experienced in the practical application of principles-focused evaluation. She has over 25 years’ experience applying evaluative thinking and practice in government, non-government, philanthropic and community contexts, including many indigenous settings. Kate is the director of The Knowledge Institute Ltd and a member of the Kinnect Group, as well as of Tuakana Teina, an indigenous-led collective based in the Waikato region of New Zealand. She is also a co-founder of the Developmental Evaluation Institute and a founding member and current Deputy Convenor of the Aotearoa New Zealand Evaluation Association (ANZEA). Kate is co-editor of New Zealand’s only evaluation text, Evaluating Policy and Practice: A New Zealand Reader (2003), and – along with Nan Wehipeihana, Kataraina Pipi and Veronica Thompson – was a recipient of the Australasian Evaluation Society 2013 Best Evaluation Policy and Systems Award for a developmental evaluation of an indigenous sport and recreation programme. She is also co-editor (along with Michael Quinn Patton and Nan Wehipeihana) of Developmental Evaluation: Real World Applications, Adapted Tools, Questions Answered, Emergent Issues, Lessons Learned, and Essential Principles (Guilford Press, New York, 2015).

> back to overview > register


Behaviour architects: a game that applies behavioural insights to improve policy solutions

presented by Karol Olejniczak, Centre for European Regional and Local Studies (EUROREG UW), Warsaw, Poland

FULL DAY – INTERMEDIATE / ADVANCED

Successful policy solutions (projects, programs or regulations) rely on policy addressees responding in certain ways. Our key mission as evaluators is to help policy designers understand the underlying mechanisms that drive the behaviours of policy addressees. Recent developments in applied behavioural science provide us with fascinating insights into human decision-making capabilities and mindsets.

The goal of this game-based workshop is to learn how to improve policy solutions by using behavioural insights in evaluation. During the workshop we will: (1) learn how to focus our evaluative inquiry on the behaviours of policy addressees, (2) understand the gaps in the behaviours of the target population that often impede policy effectiveness, and (3) test the spectrum of behavioural strategies that can improve the effectiveness of public policies.

The workshop is designed in the form of a game with case studies that provide participants with an engaging yet research-based learning experience. All levels from beginner to advanced are welcome, and the workshop is aimed at those interested in how to evaluate and improve public policy, as well as anyone wanting to experience a game-based workshop!

About the presenter:
Karol Olejniczak is an Assistant Professor of public policy at EUROREG, University of Warsaw, Poland, and a visiting scholar at The George Washington University, Washington D.C. He is also a co-founder of policy research company Evaluation for Government Organizations (EGO s.c.).

His research interests focus on knowledge brokering and the application of games and behavioural insights to program evaluation. He is the author of a number of publications on regional policy, evaluation methodology and organisational learning. He has conducted comparative public policy research in the US, the Netherlands, the UK and Poland, and has carried out a number of evaluations and policy analyses for Polish institutions and the European Commission.

In 2014, Karol received the title of Evaluation Leader of Poland, an award granted by the Polish government on the 10th anniversary of Polish membership in the European Union. In 2016, he received the University of Warsaw Award for Excellence in Teaching Social Science.

> back to overview > register


Conflict resolution skills: a toolbox for evaluators

presented by Ruth Pitt, Evaluation Unit, Australian Government Department of Social Services

FULL DAY – BEGINNER / INTERMEDIATE / ADVANCED

To conduct an evaluation is to conduct a project where those involved may have divergent ideas about the purpose, differing opinions on the appropriate process, emotional investments in the outcome, and where external interests may attempt to influence the results. Little wonder, then, that King and Stevahn (2005) argue that “…to experience a program evaluation is to experience conflict”. Therefore, conflict resolution skills such as mediation, facilitation and negotiation are essential for evaluators.

This workshop introduces evaluators to core frameworks and tools from conflict resolution that will be useful in evaluation practice. Specifically, by the end of the workshop, participants will be able to:

  • explain the importance of conflict resolution skills for evaluators
  • describe the five conflict resolution styles, assess the appropriate style for a given situation and identify which styles they misuse
  • distinguish between positions and interests, and construct collaborative problem statements based on interests
  • demonstrate listening and questioning skills, and role play using these in a mediation scenario
  • describe the ‘diamond of participatory decision making’ and practice applying this model when facilitating groups.

The workshop will include presentations on frameworks and tools, opportunities to apply skills through individual and group activities, and resources for further skills development. Scenarios and examples will be based on situations relevant to evaluation. The workshop is suitable for evaluators at any level of experience who have little or no training in conflict resolution skills. Participants will be asked to reflect on their experiences from evaluation projects, but those with limited evaluation experience will be able to draw on experience in other contexts. The workshop covers skills relevant to managing evaluations, and will support evaluators to develop competency against a number of professional standards from Domain 5 (Project management) and Domain 6 (Interpersonal skills).

About the presenter:
Ruth Pitt is Assistant Director of the Evaluation Unit at the Australian Government Department of Social Services. Her evaluation experience includes diverse roles in consulting, government and not-for-profit organisations, in Australia and overseas. Her qualifications include a Master of Public Health (specialising in social and behavioural sciences) and a Graduate Certificate in Conflict Resolution, which she completed as part of a Graduate Degree Fellowship at the East-West Center in Hawaii.

> back to overview > register