The Art of Evaluation: EvaluATE, an NSF/ATE Evaluation Hub, Makes a Difference for Prospective and Current Grantees

By George Lorenzo – Published May 24, 2020 in Workforce Monitor

Any organization or institution that decides to apply for a philanthropic, education-oriented grant will typically go through a cumbersome and time-consuming process just to get the application out the door – and it could all be for naught, since not every application wins the hoped-for nod of approval. If the award is won, the real work of the education grant gets off the ground. Other required processes start to take shape, such as the reporting and evaluation activities that ensure the grant’s overriding goals are moving forward appropriately. These, too, can be cumbersome and time-consuming. So it’s no surprise that just about any team working through an education grant could use some help. This is where EvaluATE, a National Science Foundation (NSF) Advanced Technological Education (ATE) evaluation hub located at Western Michigan University in Kalamazoo, plays an important role in the ATE grant-making world.

The ATE program places an emphasis on two-year colleges and focuses on the education of technicians for the high-technology fields that drive our nation’s economy (see recent Workforce Monitor feature article). 

Assisting Grantees
EvaluATE is an ATE support entity that assists hundreds of prospective and current ATE grantees across the country. As noted in its NSF award abstract, EvaluATE enhances “the quality and impact of ATE workforce development projects . . . The goals of this project are to: Expand the evidence base for effective science, technology, engineering and mathematics education evaluation practices; improve ATE evaluators’ knowledge and skills in evaluation; improve the ability of ATE project personnel to use evaluation effectively (e.g., for improvement, understanding, and accountability of federally funded projects focused on workforce development); and increase professional exchange and strengthen connections among ATE evaluation stakeholders.” 

Moreover, EvaluATE’s support assets and resources are open source and freely available to other educators in the grant-making world, so many individuals outside the ATE community also find value in EvaluATE’s products and services as they learn about its support systems and best practices for applying for and maintaining the integrity of any education grant. Workforce Monitor talked with Lyssa Wilson Becho, EvaluATE’s director and principal investigator, to get a better idea of what those support systems and best practices are all about.

“If you are doing any type of education intervention, you should be doing evaluation, even if it’s not required for your grant,” Becho says. Why? “Because you want to know – is it working? How is it working? What’s working or what’s not working, and how can you improve that?”

Four Main Steps of an Evaluation
Becho explains that an evaluation process starts with four primary steps: questions, evidence, interpretation, and reporting.

Questions can be about your program’s overall activity – for example, how are objectives being accomplished, are they in alignment with your initial proposal, and are your proposed outcomes being met? “Especially when we’re talking about ATE programs, we’re really looking at the complexity of students’ success, because in community college, student success doesn’t only mean did they graduate,” Becho says. “It also means asking if they are able to learn skills that allow them to advance in their employment or career.”

Gathering evidence is how those questions get answered, and in most cases that means “data of all sorts,” Becho says, both quantitative and qualitative. “Bringing in numbers and stories together is the richest evidence you can provide. Stories get to people’s hearts. But the numbers also say to what extent did that happen going through the program.”

Interpreting your findings comes next. “Providing value-based answers to those evaluation questions is what differentiates evaluation from research,” Becho adds, offering a hypothetical example of a project whose goal is to improve retention rates for first-generation students in STEM careers by 20% compared to the national average. “So, using that benchmark – or what we call a success target – is really what sets evaluation apart from research, because now we can say ‘yes’ this program is a success [because it reached that success/evaluation target].”
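
To make the success-target idea concrete, here is a minimal sketch – purely illustrative, not an EvaluATE tool – of how such a benchmark comparison might work. Every figure is an assumption: the national-average retention rate, the observed cohort rate, and the reading of “20% increase” as a relative (rather than percentage-point) improvement.

    # Hypothetical success-target check; all figures are illustrative, not real data.
    NATIONAL_AVG_RETENTION = 0.55   # assumed national average retention rate
    TARGET_INCREASE = 0.20          # project goal: a 20% relative increase

    def meets_success_target(observed_rate: float) -> bool:
        """Return True if the observed retention rate reaches the success target."""
        success_target = NATIONAL_AVG_RETENTION * (1 + TARGET_INCREASE)  # 0.66
        return observed_rate >= success_target

    observed = 0.68  # assumed retention rate for the project's cohort
    print(meets_success_target(observed))  # True -> evaluative claim: a success

Against that target, the evaluative statement becomes a simple yes-or-no judgment – which is exactly what Becho says distinguishes evaluation from research.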

The final step is reporting. This phase draws on reliable data gathered through both internal and external evaluation efforts and puts it to work in reports that focus attention on the parts of a project that can be improved, “so it can get better and have bigger impacts,” Becho says. Surfacing that kind of information is among the most important purposes of an overall evaluation.

Internal and External Evaluations  
Becho explains that the best evaluations are really a collaboration between project staff and external evaluation teams. Data, for example, can come from multiple places: some can be collected by institutional research offices, some by project staff, and still other data by external evaluators, who are sometimes better positioned to gather it. Becho says the best equation for success is “working together and aligning the specific indicators, the specific data points that you want to use to answer your evaluation questions and recognizing who is going to collect that and when it’s going to be collected.”
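
One way to picture that alignment – a hypothetical sketch, not anything EvaluATE prescribes – is a simple plan that maps each evaluation question to an indicator, a collector, and a schedule. All entries below are invented for illustration.

    # Hypothetical data-collection plan; every question, indicator, and role is illustrative.
    evaluation_plan = [
        {
            "question": "Are first-generation STEM students being retained?",
            "indicator": "fall-to-fall retention rate",
            "collected_by": "institutional research office",
            "when": "each fall census",
        },
        {
            "question": "Are students gaining skills that advance their careers?",
            "indicator": "employer follow-up interviews",
            "collected_by": "external evaluator",
            "when": "end of each project year",
        },
    ]

    # Print who collects what, and when, so nothing falls through the cracks.
    for row in evaluation_plan:
        print(f"{row['indicator']}: {row['collected_by']} ({row['when']})")

Written down this way, each indicator has an owner and a timetable before data collection begins – the “who” and “when” Becho highlights.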

Additional considerations apply to internal and external review and data collection, such as understanding the essential nature of different kinds of qualitative and quantitative evidence. What constitutes “rigorous and meaningful” evidence can mean different things to different people. “Who are your stakeholders?” Becho asks. And who is going to be reading the evaluation results? Additionally, “what decisions do you want to be impacted by evaluation data? What kind of evaluative statements do you want to say about your program?” Evaluation efforts should begin in the grantee’s early planning and grant-application processes and take into account the project’s big-picture cycle.

Evaluating the Evaluators
In addition to providing evaluation support to prospective and current ATE grantees and others, EvaluATE itself gets evaluated, via a National Advising Committee made up of education professors and research scientists from across the country as well as by a Dayton, Ohio-based research and consulting firm (the Rucks Group, LLC) that specializes in analyzing and interpreting data. Becho notes, “Although our team includes evaluators, we still find it valuable to work with our external evaluators to gather evidence on the reach, effectiveness, and impact of our project. Just like other ATE projects, we want to know what is working and how we can improve.”

The End Goal
Overall, EvaluATE is about building and supporting a grantee community that learns what it needs to know to evaluate its projects properly and accurately – at a level where those evaluations ultimately catalyze effective services and programs that enable students to grow in their career paths. In short, “evaluations should be showing us whether or not ATE projects are doing a good job,” Becho says. Reaching that end goal – a space in which quality evaluations take place – is accomplished mostly through training services and community network building.

Information about EvaluATE’s many services and network-building efforts is easily accessible through its website, and reaching that end goal often starts with training. “We have webinars, and we have open access resources to teach people how to do evaluation,” Becho explains. In addition, EvaluATE hosts a community branch, which provides opportunities for face-to-face and online engagement between evaluators and project staff. EvaluATE webinars, for example, in addition to being geared for the ATE community, “attract people from multiple countries,” Becho adds. EvaluATE also provides a consistently updated and filtered resource library that offers “guidance on planning an evaluation, collecting data, or writing reports,” as well as one-on-one coaching services for evaluators and project staff who are currently funded by or submitting to the ATE program.

A special online research section is also available through the EvaluATE website. It notes that EvaluATE engages in “research to advance evaluation practice” and has been “working to build evaluation capacity in the ATE program since 2008,” which puts the team “in a unique position to identify pressing challenges that need systematic investigation and creative problem-solving.” Finally, there’s the annual ATE survey, which “asks about the activities and achievements of ATE projects and centers. Findings from this survey may be used by ATE grantees, grant seekers, and NSF program officers to inform project and program planning and evaluation. STEM education researchers may also use it to investigate issues related to technician education.”

“We want to promote excitement around evaluations,” Becho says. “We want people to be excited about evaluation and their potential to improve ATE activities. You want to know how your ATE project is contributing to student success and promoting the well-being of our economy and our society. Evaluation can tell you that.” 

Subscribe to the Workforce Monitor eNewsletter to receive weekly briefs on Credentials and the Future of Work.