I recently read about a study of the organization One Laptop per Child (OLPC), which questioned the value of their work since it hasn’t resulted in improved test scores.
It also asked whether the $200 invested in providing the laptops in the developing world was the best use of that aid money. Finally, it concluded that computers aren’t going to fix education, but will amplify, for better or worse, the training already being delivered.
The article ended with the following question:
“Should the lack of evidence that students learn better with NCLB [No Child Left Behind] laptops change this equation, or are the benefits of individual laptops that can’t necessarily be measured more important?”
I’ll be honest, the article didn’t really work for me for a number of reasons, although none of those are why I’m writing this post. The questions raised in the study might have been worthwhile, and the question at the end of the article is an important one. But the bigger issue I took with both the study and the article is this: the questions being asked were about outcomes that neither One Laptop per Child nor No Child Left Behind ever promised to deliver in the first place.
One Laptop per Child’s aim is:
“to provide each child with a rugged, low-cost, low-power, connected laptop. To this end, we have designed hardware, content and software for collaborative, joyful, and self-empowered learning. With access to this type of tool, children are engaged in their own education, and learn, share, and create together. They become connected to each other, to the world and to a brighter future.”
They are trying to engage children around the world in education and provide access to technology and connectivity they might otherwise not have. They aren’t trying to improve test scores.
Now, No Child Left Behind is trying to improve test scores, but there isn’t a mandate to do it by providing children with laptops. Some school boards are trying it: they are drawing on the programs and technologies available to their students in an effort to improve test scores, but they’re not promising to do it solely through laptops.
Again, these aims show that neither OLPC nor NCLB is saying that providing laptops will improve test scores. At least, not in their mission or goal statements. Yet the value of the programming is being studied and critiqued based on that standard.
This got me thinking about youth work program monitoring and evaluation.
Make sure that what you’re monitoring and evaluating is what you are actually trying to achieve.
We’ve mentioned setting SMART and SMARTER targets before. When your goals are specific and measurable, it’s much easier to see whether you’re achieving them; for example, “increase average weekly attendance from 20 to 30 youth by June” rather than “get more kids to come.” Make sure you set goals for any area you want to track improvement in.
If you want to see an improvement in attendance, set goals and track attendance.
If you want to see an improvement in youth behavior, conduct pre- and post-assessments to measure behavioral changes in youth.
Don’t set a goal for attendance at the beginning of the year and then evaluate youth behavior at the end of it. Track and monitor attendance, then evaluate the methods you did or didn’t use to improve it. Set the standard for what you want to achieve in your program, then track, monitor and evaluate against that standard.
If you’re going to evaluate your project, make sure you know what you’re actually trying to prove or disprove. Don’t set one goal and evaluate another.
Do you need help with youth work program monitoring and evaluation? Contact us today!
Question: What goals do you have for your youth program this year? Do you conduct program monitoring and evaluation? If you do, what trends are you seeing? If you aren’t, is there a reason?