FAQs

What kind of organizations do you work with?

Our team has the most experience with youth-serving organizations and programs for children & adults with special needs, but our broader experience with program evaluation lends itself to a variety of program types. Some program evaluation strategies tend to be fairly universal, while others are highly specific to certain types of programs. Before beginning any project, we have conversations with each organization to be sure that our experience matches their evaluation needs.

In addition, we currently work with small to mid-size organizations that have developed expertise and success in their field, but that may not have the resources or capacity for more advanced program evaluation, such as through university partnerships or national/international affiliations.

How do organizations become partners?

The first step is simply reaching out and expressing interest. Our process is straightforward & informal: after an initial conversation, we'll both determine whether it makes sense to explore further possibilities.

How long do you work with organizations?

There is no set upper or lower limit on project length; that ultimately depends on project scope. Ideally, we'd continue to provide follow-up support beyond the pilot stage, but our primary goal is for organizations to develop the internal capacity to maintain & sustain projects independently of us, which means a core focus is teaching organizations how we do what we do. As with many nonprofits, in a sense our end goal is working ourselves out of a job.

How do your measurement & evaluation strategies differ from "typical" strategies?

Our experience in psychology, education, & nonprofit management allows us to pull from multiple disciplines when selecting assessment tools, ranging from behavior rating scales to educational curriculum-based measurement. In other words, we have both knowledge of the process of nonprofit program evaluation (e.g., logic modeling) and content-specific assessment strategies spanning multiple service sectors. Over our careers, we've found that there are highly effective techniques in different fields, but those fields aren't always connected. Our interdisciplinary approach of importing & merging strategies across service domains allows us to create an interconnected pipeline of best practices, resulting in a "best of both worlds" scenario.

Do you provide support for fundraising & development?

Generally speaking, no, but we consider our work integral to the development process. Most funders have outcome measurement requirements, and outcome measurement is central to what we do. So, in many ways, our work is certainly part of the development & fundraising process, but we don't provide support for things like grant writing or capital campaigns beyond where program evaluation interfaces with those areas.

Specifically, how does the process work?

First and foremost, each project will look different because each organization has different needs and is at a different stage of its evaluation process. Some organizations may prefer to start at a more conceptual level with strategic planning & crafting program evaluation plans, while others may be in the middle of an evaluation process and need more technical support with specific evaluation tasks.

For the sake of explanation, though, let's assume we're starting from scratch. Our first step would be to have a conversation to learn more about each other - our approach, as well as the organization's structure, programs, vision, & goals. If we both decide it would be helpful to work together, we'd then likely have a few more conversations, and possibly schedule an in-person visit to experience the program in action. This process would culminate in a proposal outlining next steps and how we might continue working together.

Our process can be summarized in four phases (outlined on our home page), with the first being planning. Continuing from the initial proposal/needs assessment, the planning phase essentially involves creating a highly customized evaluation plan based on the organization's current program structure. We reference "strategic planning," but our process assumes the organization has already defined its mission, purpose, values, etc. and clarified its own unique program approach. Our goal is simply to adapt evaluation processes to the organization's profile. Essential to this phase is defining the goals of program evaluation - how the data will be used, what questions need to be answered, how the data will be organized, how it will be communicated, etc. Many organizations approach new program evaluation cycles based on upcoming organizational shifts or initiatives such as new program additions, expanded fundraising goals, new Board structures or requirements, etc. Fundamental to the planning phase is ensuring that the program evaluation process addresses those upcoming needs.

As the planning phase progresses, it gets more specific, moving from broad, strategic evaluation planning to increasingly detailed evaluation plans. For example, we may start with an organization identifying that it wants more information about a tutoring program in order to apply for more grants and expand the program's volunteer base, then end up with specific plans for actual academic assessment instruments & data collection/management strategies, including details about who will collect data and when, and how it will be managed & communicated. This phase would conclude with the selection & adaptation of particular assessment tools, the creation of a database structure & procedures, and plans for the next steps of implementation.
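To make that concrete, here's a minimal, purely hypothetical sketch of the kind of flat data structure a plan like this might land on for the tutoring example above. The field names, assessment windows, and fluency measure are placeholders for illustration, not a template we prescribe:

```python
# Hypothetical example: a flat table for tracking tutoring outcomes.
# Every field name, the fall/spring assessment windows, and the
# words-per-minute measure are illustrative; a real plan mirrors the
# organization's own programs and assessment tools.
import pandas as pd

records = pd.DataFrame(
    [
        # student_id, site, grade, window, words_per_minute, sessions_attended
        ("S001", "Main St", 3, "fall",   42, 10),
        ("S001", "Main St", 3, "spring", 68, 24),
        ("S002", "Main St", 4, "fall",   55, 12),
        ("S002", "Main St", 4, "spring", 71, 22),
    ],
    columns=["student_id", "site", "grade", "window",
             "words_per_minute", "sessions_attended"],
)

# The evaluation plan would also spell out who enters each row, when,
# and how the file is stored, backed up, and shared.
print(records.head())
```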

The second phase - implementation - simply involves putting that plan into action. While the organization would take the lead on the actual administration of assessments, we would work closely with them to make sure the plan is implemented smoothly and with fidelity. This phase tends to be more technical in nature - for example, dealing with database entry & management, coordinating assessment timelines, troubleshooting Excel issues, refining & adjusting assessment tools, and fine-tuning the evaluation plan.

The third phase - analysis - examines program results. Some of this work is more technical in nature, such as adjusting data dashboards to display relevant data, while some is more analytical & strategic, such as interpreting data & running follow-up statistical analyses on results. While yielding the most "product," this phase actually tends to be short & intense. For example, we may meet more frequently with the organization via video conference to look at graphs, explore relationships in the data, and explain results. This phase also involves integrating other evaluation/data sources the organization may already have with the new data - we see our work as complementary to what's already occurring, such as bolstering existing qualitative data (e.g., organizational narratives or "success stories") with quantitative data.
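As one illustration of what a follow-up analysis in this phase might look like, the sketch below runs a simple paired comparison of hypothetical fall and spring scores from the tutoring example. The numbers and the choice of a paired t-test are purely illustrative; the actual analysis always depends on the questions defined during planning and the assessment tools in use:

```python
# Hypothetical example of a follow-up analysis: did students' scores
# change between the fall and spring assessment windows?
# The scores and the use of a paired t-test are illustrative only.
from scipy import stats

fall_scores   = [42, 55, 38, 61, 47, 50]   # one entry per student
spring_scores = [68, 71, 52, 80, 63, 66]   # same students, same order

gains = [s - f for f, s in zip(fall_scores, spring_scores)]
t_stat, p_value = stats.ttest_rel(spring_scores, fall_scores)

print(f"average gain: {sum(gains) / len(gains):.1f} points")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

In practice, results like these would then feed the dashboards, graphs, and conversations described above.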

The final phase - response - involves using the results of program evaluation to address organizational questions, goals, & needs, from preparing annual reports to applying for new grants and adjusting/tuning programs. This final phase generally reveals additional questions and new program goals, often serving as the seed for the next round of program evaluation. In this way, program evaluation tends to be cyclical & ongoing, with the results of one round forming the foundation of the next.

As mentioned initially, though, it's entirely probable that many organizations we work with will already have some or most of this process in place & in progress. For example, an organization may have already created a program evaluation plan, begun implementing that plan, and be in the process of reporting data to funders, but be interested in how we might help further refine a survey they've created or build a data dashboard to summarize & display existing data. As another example, an organization may have already collected volumes of data, but be interested in learning more about how to analyze & condense those data for an annual report. Still another example might be an organization that has ongoing evaluation procedures in place, but is interested in a specific grant opportunity that calls for expanded outcome reporting beyond the scope of the existing plan.