Throughout the process of creating the evaluation tool, going all the way back to finding the bibliographies, there has been an ongoing "a-ha" moment: the realization that a great deal of background work and research goes into creating not just a tool, but a valuable and professional tool. While this tool is intended for use by professional educators, not all educators are proficient with technology or with evaluating technology. The epiphany was that we needed to make a tool that was useful not just for us, but for the non-tech-savvy person as well.
We created both a rubric and a Likert scale evaluation. We realized that, for a person not comfortable with technology, we needed to use language that was not intimidating. Our goal was to create a usable tool for all educators, not just tech-savvy educators.
In creating the tool, our idea immediately defaulted to a rubric-based evaluation. Rubrics provide both a scaled and a detailed medium for evaluating almost anything. We spent a lot of time considering our bibliographies and our rubric categories. Once the categories were determined, the scoring guide needed to be developed. We developed a 3-column rubric with scores of 5, 3, and 1, soon changed it to a 4-column scoring guide (5, 3, 1, 0), and then returned to the original 3-column guide. We filled in the rubric for high-, medium-, and low-quality evaluations using the number scale in order to create a quantitative piece. Finding a workable scoring guide was a time-consuming task.
Upon further discussion, we returned to a 4-column scoring rubric and added more categories. We determined that we had combined many categories that should be separated. For example, curriculum standards and technology standards were originally combined into one category, but we later separated them because they are in fact two distinct considerations. If they were combined, the absence of one could affect the rating, which would be unfair to both the site and the evaluator. Creating specific categories for the rubric was a valuable consideration.
The other big change was that, after spending so long on the rubric, we decided to use a Likert scale for the traditional website evaluation and make it an online form that could email the submitted information back to the user. The Likert scale form was much easier to create and modify because wording for all of the rubric areas wasn't needed. As long as we stated our evaluation categories properly, the tool would be useful and sound.
Working as a team is often both rewarding and frustrating. I believe that our group worked well together, and while we each had different wording in mind for our rubric scale, we were all trying to say the same thing in the way that would best suit the user. We created a tool we found useful for ourselves, then sat back, rethought the rubric and scale from a user's perspective, and reworked some parts to make them understandable to all. A group setting definitely promotes this kind of thinking.