On October 7, 2008, eSchoolNews reported that beginning in 2012, technology literacy would be assessed as part of the National Assessment of Educational Progress (NAEP). No Child Left Behind requires that students be proficient in technology literacy by the end of eighth grade but does not define how to assess whether students are proficient. By the end of 2009, the National Assessment Governing Board is expected to have a framework establishing the guidelines for the assessment. This raises several questions: How will students be evaluated? What standards will be used for the assessments? Will it be ISTE’s NETS-S, the National Educational Technology Standards for Students? (eSchoolNews, 10/7/08)
States and the federal government are scrambling to develop assessments of students’ ability to use computers. In 1995, the European Computer Driving Licence was created to test people in seven major areas: Concepts of Information Technology, Using the Computer & Managing Files, Word Processing, Spreadsheets, Database, Presentation, and Information & Communication ( http://www.bcs.org ). Many of today’s studies on IT assessment draw on this framework.
Tuckett (1989) stated that computer literacy has three components: an understanding of what computers do, the skills necessary to use them, and confidence in their use. An IT assessment therefore needs to address all three in order to demonstrate a person’s ability to use a computer. In her study of first-year undergraduates at Oxford, Vivien Sieber stated that “e-assessment provides opportunities for diagnostic assessment which has potential to inform teaching.” Sieber also stated that the purpose was to create “diagnostic assessments that would show the need for IT training and allow the students an opportunity to review their skills/performance within a given range of expectations; to learn to question feedback; and to provide personal performance-related training recommendations along with information about opportunities for that training” (Sieber 2009, 217). Assessments should have a combination of multiple-choice, true-false, and drag-and-drop questions (Sieber 2009, 217). “The primary criterion for selection of a question type…should not be the ease with which the response to the question gets evaluated by the computer but rather the type of learning the question is designed to assess” (Gibbs and Peck, 1995). Biggs (1999) argued that this question format does not provide an “active demonstration of the knowledge in question, as opposed to talking or writing about it” (Biggs, 1999). Thus, a clear assessment needs to combine multiple-choice, true-false, and drag-and-drop questions with a way for the student to write or talk about the material. Ultimately, we judge mastery by a person’s ability to explain what they are expected to know, and researchers have struggled to capture that concept.
Different researchers have come up with different ways to assess student performance online and with paper and pencil. Sieber (2009) asked students in the Medical Sciences program at Oxford to complete a questionnaire in which they rated, on a scale of 1-5, their ability to use different programs, including file management, word processing, spreadsheets, and email. The questionnaire included both the rating scale and a free-response section. The results showed that, beginning in 2004, students were able to use the programs, but there were gaps in understanding. Sieber states, “There was extensive variation in file management and word processing, which is particularly disheartening as students might be expected to be familiar with them. One possible explanation for complacency is that despite using a word processor regularly, students do not know how to use this software properly and are not aware of many of the advanced functions” (Sieber 2009). The research also found that the most popular way to learn new skills was to have a friend teach them, and that weaker students overestimated their skills while stronger students underestimated theirs.
In Taiwan, a study conducted at several elementary schools tested students’ competency in computer technology literacy. A total of 1,539 students completed a questionnaire that had them rate, on a scale of 1-5, how much they agreed or disagreed with a series of statements. Questions ranged from operation skills to computer usage and concepts to attitudes toward computer technology. The results showed that students who used the Internet for homework scored better in computer technology, while students who used computers for online gaming scored better on attitudes toward computer technology. Students who used computers for chat/MSN scored better in technology operation skills, computer uses, learning with technology, and Internet operations. The study also concluded that the female students were more competent than the male students (Chang, 2008).
The best example of how an online assessment can be developed to produce meaningful conclusions was Formative Automated Computer Testing, or FACT. This project was conducted by the University of Dundee to study how students were able to demonstrate IT competency. The researchers built the computer-based section with the FACT software, which allowed them to create and save versions of the test and control how it was given to the students. The assessment was broken into three parts evaluating different skills. The first part was the computer-based test, administered through FACT, in which students had to create and edit two Microsoft Word documents by changing the font, applying different text styles, and performing other functions; it had to be completed in 20 minutes. The second section was a 20-minute paper-based skills test covering the student’s knowledge of Microsoft Word and how to perform different tasks, which allowed the student to write out the steps to show mastery. The third section was a written evaluation comparing the two tests and describing the pros and cons of each.
The results of the FACT evaluation showed that the software handled what the researchers needed without problems. When surveyed about their preference between the computer-based test and the paper test, students said they preferred the computer-based test. They felt the FACT test was easier because they could get help from the computer, and they felt it was fairer. They found the paper-based assessment more difficult because they had to visualize the steps needed to complete a task without the programs in front of them. The students who did not like the FACT computer-based test complained that the timer distracted them and pressured them to finish within the allotted time (Hunt, Hughes, and Rowe 2002).
The FACT assessment seemed to be the fairest version of online assessment because it assessed basic IT skills on the computer while also testing for understanding by having students write out and explain the steps. Hunt, Hughes, and Rowe stated in their study that an IT assessment had to have three components in order to be a good tool: “information technology tests can be constructed easily which are relevant to different disciplines; students be given appropriate feedback to assist learning; assessments be task-based and realistic, yet cost-effective and reliable” (Hunt, Hughes, and Rowe 2002). The FACT assessment met all three.