Course Exit Self-Assessment Reflection

Now that I have reached the end of LDT 506, I feel more confident about evaluation and the evaluation process. The use of mock Likert-scale data in our team project inspired me to compare my results from the entry and exit self-assessments bookending this course, which used a scale from 1 (novice) to 6 (expert) to measure experience with the AEA’s 2018 evaluator competency list. In the entry survey, my average response was a 1.6 out of 6; that rose to an average of 3.9 out of 6 in the exit survey, indicating that this course increased my confidence in my evaluation competencies. While I still do not consider myself an expert in evaluation, I no longer feel daunted by the field.

Since the entry and exit self-assessments measured the same competencies, I created a line graph for a ‘before and after’ view of my competency scores. The first graph I created was ordered by competency number, and it helped me visualize that, across all of the competencies, I felt either more competent than before or at least as competent as a result of this course:

Image of self-assessment scores ordered by competency number
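
For anyone curious how a comparison like this comes together, below is a minimal sketch of the kind of plot I made, written in Python with pandas and matplotlib. The competency labels and scores in it are placeholders I invented for illustration, not my actual survey responses.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical entry/exit self-assessment scores on the 1 (novice) to 6 (expert)
# scale; the competency labels and numbers are illustrative placeholders only.
scores = pd.DataFrame({
    "competency": ["1.1", "1.2", "1.3", "1.4", "1.5", "1.6", "1.7", "1.8"],
    "entry": [2, 1, 1, 2, 2, 1, 1, 2],
    "exit":  [4, 3, 4, 4, 5, 5, 1, 3],
})

# Overall averages, analogous to the entry/exit averages discussed above.
print(f"Entry average: {scores['entry'].mean():.1f}")
print(f"Exit average:  {scores['exit'].mean():.1f}")

# 'Before and after' line graph, ordered by competency number.
ax = scores.plot(x="competency", y=["entry", "exit"], marker="o")
ax.set_ylim(0, 6)
ax.set_xlabel("Competency")
ax.set_ylabel("Self-assessment score (1 to 6)")
ax.set_title("Entry vs. exit scores, ordered by competency number")
plt.tight_layout()
plt.show()
```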

It intrigued me that, while most areas indicated improvement, there appeared to be some exceptions. So I reordered the graph by my initial competency scores to see more clearly how my final scores compared to where I started:

Image of self-assessment scores ordered by competency score
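
Reordering the view is a small tweak to the same hypothetical sketch: sort the data frame by the entry score before plotting.

```python
# Same hypothetical data, reordered by the initial (entry) score instead of
# competency number, so unchanged scores cluster together visually.
reordered = scores.sort_values("entry").reset_index(drop=True)
ax = reordered.plot(x="competency", y=["entry", "exit"], marker="o")
ax.set_ylim(0, 6)
ax.set_xlabel("Competency (sorted by entry score)")
ax.set_ylabel("Self-assessment score (1 to 6)")
ax.set_title("Entry vs. exit scores, ordered by entry score")
plt.tight_layout()
plt.show()
```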

Clearly, I still consider myself a novice in Competency 1.7 – that I “[pursue] ongoing professional development to deepen reflective practice, stay current, and build connections” (AEA, 2018). I was not surprised to see this result, as this course did not change the fact that I am not involved in any evaluation-specific professional development initiatives. 

At the other end of the graph was the one competency in which I identified myself as an expert – Competency 4.7, or that I am able to “[team] with others when appropriate” (2018). This competency is a lifelong skill applicable in virtually all fields; not only am I confident that I can work well with others, but I am sure I do better work when I have the opportunity to collaborate with colleagues. I certainly had this chance during this course, as I was paired with a competent and ambitious team, and I learned more about evaluation as a result of working with them than I would have on my own. My teammates were efficient and communicative, and we all served as a support system for each other. We were able to work smoothly without too many roadblocks, and without requiring heavy use of project management software (such as the course’s recommended Ensightful tool).

Although my post-course confidence generally trended upward across the assessed evaluation competencies, I did find it interesting that there were some competencies I do not appear to have grasped to the same extent as the others. Notable among these are Competency 1.8 – that I am able to “[identify] how evaluation practice can promote social justice and the public good,” and Competency 5.5 – that I am able to “[attend] to the ways power and privilege affect evaluation practice” (2018). I discussed these two competencies in my blog post about the Course Entry Self-Assessment – they stood out to me because I was surprised to see them in the AEA’s list, and I was eager to experience how they play a role in the evaluation process. However, our team's mock evaluation of the UN CC:e-Learn course did not confront evaluator privilege or consider the greater social impact of the evaluand.

While climate change is a pertinent matter, the climate series program and its related stakeholders were not down in the weeds of it, per se – in other words, we were not working with a population directly and severely affected by climate change on a regular basis. As a result, discussion of climate change and related initiatives, both throughout the UN CC:e-Learn course and in our mock evaluation report, remained distant and detached, since neither involved on-the-ground experiences or situations. This experience certainly reflects privilege, both for us as the evaluators and for the primary stakeholders of the UN climate change series, but I am not sure we sufficiently addressed and navigated that fact throughout the evaluation process.

Finally, there were five competencies in which I saw the most improvement:

1.3: “Selects evaluation approaches and theories appropriately.”

1.5: “Reflects on evaluation formally or informally to improve practice.”

1.6: “Identifies personal areas of professional competence and needs for growth.”

2.5: “Identifies assumptions that underlie methodologies and program logic.”

5.6: “Communicates in meaningful ways that enhance the effectiveness of the evaluation.” (AEA, 2018)

Competencies 1.3, 1.5, and 2.5 involve processes that I was unfamiliar with before reading the Russ-Eft & Preskill (2009) text for this course. Competency 1.6 is especially pertinent, given that I was inexperienced with evaluation at the outset of the course – in other words, I didn’t know what I didn’t know. Finally, I would attribute my improvement with Competency 5.6 to the collaboration required of our team throughout our mock evaluation, as well as to the experience I gained from writing the results section of the final report.
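
As a side note, this kind of ranking is easy to reproduce from the raw scores. Continuing the hypothetical sketch from earlier, sorting the difference between exit and entry scores surfaces the largest gains.

```python
# Continuing the hypothetical data above: rank competencies by improvement
# (exit minus entry) and list the five with the largest gains.
scores["gain"] = scores["exit"] - scores["entry"]
top_gains = scores.sort_values("gain", ascending=False).head(5)
print(top_gains[["competency", "entry", "exit", "gain"]])
```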

Moving forward, I have a unique opportunity to gain more competence with the evaluation process. The University of Arizona has issued an RFP to select a new LMS, and I volunteered to evaluate sandbox courses in three LMS platforms for the University of Arizona’s Center for Assessment, Teaching & Technology. The Center held a town hall last week to share survey findings, explain the next steps of the RFP process, and gather more feedback from attendees. I was thrilled to dig through the LMS sandboxes after doing something similar with the UN CC:e-Learn course in LDT 506, and I am looking forward to watching this RFP process unfold in real time – both because I have a vested interest in the subject of the RFP and because I feel more confident with the evaluation process as a result of this course.

References

American Evaluation Association. (2018). The 2018 AEA evaluator competencies. Retrieved from https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies
