Comparison of Web 2.0 Instruments
Comparing Team C & Team A
Commonalities
- Both evaluation tools are printable and cannot be edited online. Team C's must be printed; Team A's does not have to be printed, but it requires MS Excel.
- Both tools provide a suggested passing score after the instrument is completed.
- Both tools use statements as their grading criteria.
- Similar areas are covered in each tool, but Team A breaks them down a little further based on the different vocabulary used for each area.
Differences
- Team C's evaluation has a rating scale from -5 to 5, while Team A evaluates based on scores of 5, 3, 1, or 0.
- The tool created by Team C lists on the evaluation itself a suggested minimum score for a site to be considered appropriate. Team A's tool has an accompanying document, which a user should read through first in order to find a minimum "passing score."
- Team A allows for additional comments in each rating area but has no overall narrative writing area. Team C does not allow additional comments within the rating areas, but does allow for an overall impressions rating at the end.
- Team A's rating categories are inconsistent. While the scale is consistent, the vocabulary term for each point value changes throughout the tool. The reasoning appears to be to address several different aspects of the same criterion; content, for example, has three different ratings available.
- Team A's use of categories is nice and allows a quick overview of which areas are being evaluated.
- There is no cost category in Team A's tool, nor is there an area to write in the URL being evaluated.
Takeaways
- Our tool needs to be accessible online, not just printer-friendly. Google Docs can enable editing, but we need to be sure that the cells we have created cannot be altered. If editing were available, a drop-down menu in the scoring column would let users select an answer instead of typing one in (see the sketch after this list).
- Setting up categories in our tool would be useful and would give users a quick look at the evaluative areas.
- When creating our tool, we were worried about length, and I believe Team A has done a nice job incorporating a lot of information in a reasonable space. I do like the narrative piece, though, and would like to keep it in. So I'd like to redesign the evaluation to allow for more information in less space while keeping a narrative piece. In doing so, I believe the areas we set up as narratives can be answered as part of our scale evaluation, lessening the number of pages needed.
- I would like our instrument to show that it is research-based.
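As a rough illustration of the locked-cells and drop-down ideas above: if we did share the tool as a Google Sheet, a small Apps Script (sketched here in TypeScript, as used with clasp) could add a score drop-down and protect the criteria cells. The sheet name, ranges, and score values below are assumptions for illustration, not our actual layout.

```typescript
// Sketch: add a score drop-down and lock the criteria cells in a Google Sheet.
// Assumes a sheet named "Evaluation" with criteria in A2:A20 and scores in B2:B20.
function setUpScoringColumn(): void {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Evaluation');
  if (!sheet) return;

  // Drop-down menu in the scoring column so users pick a score instead of typing one.
  const scoreRule = SpreadsheetApp.newDataValidation()
    .requireValueInList(['5', '3', '1', '0'], true) // true = show as a drop-down
    .setAllowInvalid(false)                         // reject anything not in the list
    .build();
  sheet.getRange('B2:B20').setDataValidation(scoreRule);

  // Protect the criteria cells so the statements we created cannot be altered.
  const protection = sheet
    .getRange('A2:A20')
    .protect()
    .setDescription('Evaluation criteria (read-only)');
  protection.removeEditors(protection.getEditors());
}
```

Run once by the sheet's owner, this leaves the criteria editable only by the owner, while respondents can still pick scores from the drop-down in column B.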
Comparison of Traditional Media Instruments
Comparing Team C & Team D
Commonalities
- The first commonality between our traditional media instruments is that they are both online surveys. Both also state that results will be emailed. The scoring criteria, while worded differently, provide essentially the same results: a 4-point rating system with an option for "N/A" as well.
- Both forms address core and technology standards, the learner's ability to comprehend and understand how to use the media, and areas to comment further on each category (although Team C lacks categories).
- Digital Bloom's Taxonomy is addressed in both forms but differently (see below for differences).
Differences
- Team C's evaluation is meant for a traditional web site, whereas Team D's evaluation can be used for traditional web sites as well as computer software.
- When picking a web service to host our survey, one consideration was the display of results: how would this instrument collate the results for the user? While I'm no Google Forms master, we stepped away from Google Forms because I'm not aware of a way to display individual results back to the person filling out the form. WuFoo sends an immediate, automatic email back to the user with a printout of their results; Google Forms, to my knowledge, doesn't allow for this without the administrator of the form intervening (though see the scripted workaround after this list).
- Team C's form links to an example of Bloom's Taxonomy and asks if 4 or more areas are addressed. I think the link to the visual is nice. Team D's instrument addresses the levels of Digital Bloom's individually in their form.
- Team D has grounded their survey in research, with citations listed at the end of their evaluative tool.
- I like how Team D put their scored areas into categories. One limit of a WuFoo free account is the number of questions available to ask; this was a consideration of ours, but the free service simply did not allow for it.
- Their section on cross-curricular use is also helpful. With so many standards requirements and only so many hours in a day or week, cross-curricular instruction is more needed and prevalent than ever.
Takeaways
- For both instruments, a list of the core areas addressed at the beginning of the survey would be helpful. Team C's instrument is one page with no headings. Team D's instrument is four pages, and I cannot view the later pages without answering each page first. As someone looking for a tool, I'd want to research its usefulness prior to actually using it.
- I would like to add citations at the end so users know our tool is research-based.
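On the Google Forms limitation mentioned above: results can, in fact, be emailed back automatically with a small Apps Script form-submit trigger, which takes the administrator out of the loop. A minimal sketch (again TypeScript/clasp style), assuming the form is set to collect respondent email addresses and an installable on-form-submit trigger points at this function:

```typescript
// Sketch: email a copy of their answers to each respondent when a form is submitted.
// Assumes "Collect email addresses" is enabled on the form and an installable
// on-form-submit trigger is configured to call this function.
function onFormSubmit(e: GoogleAppsScript.Events.FormsOnFormSubmit): void {
  const response = e.response;
  const email = response.getRespondentEmail();
  if (!email) return; // no address collected; nothing to send

  // Build a plain-text printout of question/answer pairs.
  const lines = response.getItemResponses().map(
    (item) => `${item.getItem().getTitle()}: ${item.getResponse()}`
  );
  MailApp.sendEmail(email, 'Your evaluation results', lines.join('\n'));
}
```

This wouldn't change our WuFoo decision, but it is the kind of intervention-free result delivery we were after.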
Comments
Thanks for the eval Nick :)
Just FYI, for Team A's Web 2.0 instrument the cost category was left off on purpose, in the belief that Web 2.0 tools are usually free.