Saturday, June 25, 2016

Assessing Digital Learning and Instruction Literature Review

Evaluation, or measurement, can be one of the most effective ways to obtain information, reduce uncertainty, and identify growth. To understand the extent of an impact, one must evaluate it.

In my exploration of assessing digital learning, I found that technology’s impact on students is difficult to understand partly because technology evolves quickly, so studies often focus on technology as a whole rather than on specific devices. Secondly, technology’s impact is often equated with scores on student exams. Those involved in education frequently want to attribute the success of their technology deployment to increased test scores, but technology alone will not fix low scores. As Bradley Chambers of the Out of School podcast points out, there are too many uncontrolled variables within each classroom (Chambers, 2016).

It can be difficult to identify what makes a 1:1 iPad program “successful.” What defines success? How do you know whether the iPads are helping, and what exactly are they helping with? How can teachers be guided in the classroom? Ultimately, the goal is to see students and teachers collaborating, communicating, creating, and thinking critically with their iPads. This is measurable, but there are often simpler things to measure first. As Douglas W. Hubbard says in his book “How to Measure Anything: Finding the Value of Intangibles in Business,”
“…don’t assume that the only way to reduce your uncertainty is to use an impractically sophisticated method” (Hubbard, 2014, p. 64).

In order to provide differentiated professional development for staff, as addressed in my technology innovation plan, the first thing to assess is where staff are in the process of integrating technology.

There are a surprising number of reports and surveys related to technology proficiencies in general, but few specific to 1:1 iPad integration; iPads are quite new in education, having been implemented within the last five to six years. This literature review of assessing digital learning and instruction focuses primarily on practical measurement concepts. Measuring the effect of change on a learning environment is a long-term goal; first, understanding the proficiencies, attitudes, and current levels of integration of staff implementing 1:1 iPads is necessary to understand their needs. Often this basic information can provide perspective on how devices are being used in the classroom. Additionally, trends in measurement tools will be highlighted. Lastly, several existing technology standards will be mentioned as possible benchmarks for an evaluation tool of choice (whether created in-house or adopted from an outside source). The reports and articles included in this review are based upon successful measurement plans that apply to a variety of technologies, which shows how versatile modes of measurement can be.

Overarching themes in the evaluation of technology integration are surveys or questionnaires, focus groups, and control versus experimental groups. Many institutions use surveys and questionnaires as their main way of obtaining information. This information ties back to one thing: guiding future professional development to support the needs of the stakeholders involved in the technology program. The University of Washington (UW) has conducted comprehensive studies of its educational technology (all types of technologies) since at least 2004 (Gustafson & Kors, 2004). In a 2011 study of technological expertise, faculty and students were asked about proficiencies, skills, and digital literacy, and this information was then analyzed for correlations to classroom integration (Giacomini, Lyle, & Wynn, 2012, pp. 1-2, 4). As a result, the surveyors learned that teacher proficiencies may influence which technologies teachers choose to implement in class, and that designing survey questions that better pinpoint gaps in proficiency will be key to providing support (Giacomini et al., 2012, p. 6). Reflecting on their years of surveying, the authors of the 2011 report note “that it takes time to build a culture that incorporates data into technology and support decisions.” They continue,
“Educational technology is an area where it is challenging to know how to target initiatives to reach beyond traditional early adopters to the rest of the community, and so gathering data that show the support needs and areas of challenge for non-early adopters is important. These data allow evidence, rather than anecdote, to influence how technology and support decisions are made.” (Giacomini et al., 2012, p. 6).
The ECAR Study of Faculty and Information Technology 2015 followed a similar format. Participants from a variety of higher education institutions volunteered, with the incentive of a gift card drawing (Brooks, 2015, p. 48). The survey looked at years of experience as well as technology experience and skills (p. 8). The information is used, among other things, to help IT better integrate technology into teaching practices (p. 7). Malcolm Brown, Director of the EDUCAUSE Learning Initiative, discusses the impact of the feedback gathered over the years this evaluation has been conducted:
Our question has shifted from “what do you own” to “what kind of learning experiences does technology enable.” (p. 9).
At Coppell High School in Coppell, Texas, a survey tool called Clarity is used to obtain data from faculty and students on proficiency skills, such as basic computing, online, and multimedia skills. Data is collected and reviewed every six weeks to provide action steps and solutions through the school’s support program, Starfish (Parker, 2015, pp. 8-9). Lastly, at Ball State University, a questionnaire on proficiency, adoption, and reliability revealed barriers to technology usage (Butler & Sellbom, 2002, p. 2). By identifying proficiencies in technology use, the researchers pinpointed barriers and made recommendations for future support (pp. 23-24).

Focus groups are an additional way to gather information that is not as clear-cut as yes-or-no responses or Likert scales. A focus group can provide very detailed qualitative information and opinions on technology use in the classroom. For example, eSkwela, the Community-based E-learning Center for Out-of-School Youth and Adults in the Philippines, used focus groups to establish a baseline of qualitative data on trainees’ attitudes, opinions, and needs (UNESCO Bangkok, p. 15). The University of Washington also conducted focus groups of faculty and students to understand their experience and use of technology in the classroom (Gustafson & Kors, 2004). Another report, assessing the TPACK framework within the classroom, uses structured interviews: interviews about technology-based lessons were audio-recorded and assessed against a rubric based upon the TPACK framework (Grandgenett, Harris, & Hofer, 2012).

Evaluating control and experimental groups can be a powerful way to determine whether a particular implementation is effective, because the control group provides a baseline for comparison. In the UNESCO Mobile Learning in Europe report, studies on inquiry learning in the classroom were conducted through experimental investigation using computer activities, and inquiry increased compared to control classes (Hylén, 2012, p. 21). In a review of using the SAMR Model to evaluate mobile learning, students preferred and participated more frequently in online discussion using mobile devices, and students who used an augmented-reality tool to work on an architectural proposal tended to outperform the control group (SAMR resource). These types of feedback provide schools with the next steps to take in their technology programs. Lastly, an effective use of experimental investigation was conducted in Coppell ISD using 1:1 pilot programs at elementary and middle schools. Students and parents were asked questions about “use, productivity, soft skills, and support,” and educator questions looked at “staff training/preparedness, accessibility, lesson design, and classroom management” (Coppell ISD). The results of the pilot have given the district the information to expand its technology implementation to 1:1 iPads throughout a variety of schools.

Ultimately, the data received must provide the feedback administration needs to drive the next steps of technology support, and those next steps should happen quickly. The University of Washington recognized from its 2011 survey the need to minimize the gap between data collection and the release of results in order to have a greater influence on the support that staff and students need. Using their survey results, they look for ways to create questions more efficiently to meet these demands (Giacomini, Lyle, & Wynn, 2012, p. 6).

Once a baseline of information on iPad integration is obtained from proficiency and attitude surveys and focus groups, the next question to ask might be: how do you measure student engagement? Or collaboration, or critical thinking influenced by the iPads? Can these things be quantified through a survey, or evaluated through a focus group? How can standards such as ISTE, SAMR, or the four C’s of 21st Century Skills provide a framework for evaluating and measuring iPad integration, learning environment evolution, and pedagogical change? How often should evaluation take place? Should it occur every three years like the University of Washington’s, every six weeks like Coppell High School’s, or could effective feedback happen after every professional development session? Could badging be a fun way to evaluate technology proficiencies and influences in education (Berdik, 2015)? The data from baseline surveys can guide the way. As Hubbard says,
“Like many hard problems in business or life in general, seemingly impossible measurements start with asking the right questions” (Hubbard, 2014, p. 3).

References can be viewed here.
