Creative Education
Vol.5 No.12(2014), Article ID:47330,15 pages DOI:10.4236/ce.2014.512124

Evaluation by Rubrics: A Computerized System

Juan Cristobal Barrón1, Humberto Blanco1, José René Blanco1, Judith M. Rodríguez-Villalobos1*, Jesús Viciana2

1Faculty of Physical Culture Sciences, Autonomous University of Chihuahua, Chihuahua, Mexico

2Department of Physical Education and Sport, University of Granada, Granada, Spain

Email: *judithrv@gmail.com

Copyright © 2014 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 7 April 2014; revised 2 May 2014; accepted 22 May 2014

ABSTRACT

This report details a computerized system that allows teachers to use rubrics as a means of evaluation within a competency-based curriculum. The computerized system for evaluation by rubrics allows users to design rubrics, modify them, and build a bank of them for later use. It is software that allows the evidence of learning in a course to be assessed, co-assessed, and self-assessed, either by teams or individually, through rubrics. It thereby represents an example of how the proper use of new technologies can become a differentiating factor in the assessment of learning relative to traditional evaluative practices, by providing tools that allow time and material resources to be used more effectively and efficiently by both the teacher and the student.

Keywords: Rubrics, Curriculum Competency, Software, New Technologies

1. Introduction

At the end of the 20th century and the beginning of the 21st, new technologies brought change to many areas of life, particularly information and the media. These draw closer to people every day through databases and streams of information that threaten to drown us in seas of data that are a challenge to navigate (Lei, Shen, & Johnson, 2014; Roehrig, Groos, & Guzey, 2014; Singley & Taft, 1995). Society has changed profoundly: the industrial society has given way to the information society, which emphasizes the value of the data produced and the power of knowledge as a means of changing reality. The learning society represents a further step, in which individuals must learn throughout life in order to survive. Finally, the intelligence society emphasizes the idea of shared and distributed intelligence (Beltrán, 2011, 2013).

The use of the computer allows, among other things, extending the content being assessed, generating expert correction systems, administering tests via the Internet, and selecting the best items for certain assessment objectives (optimal tests) or for certain people (computerized adaptive tests). We agree with Prieto, Carro, Orgaz, Pulido, and González-Tablas (1993) and Ishiyama and Watson (2014) that one of the important applications of personal computers is the construction and management of computerized tests, which in some fields can replace the classic pencil-and-paper tests. Computerized tests allow data to be stored without pre-encoding steps and with greater accuracy and speed, give immediate feedback of results, facilitate recording the response latency for each item, and support multimedia presentation, including texts, graphics, photographs, and even videos and simulations.

It is also clear that automated systems yield more accurate and reliable data and increase the speed and efficiency of analysis, presentation, and storage, thus freeing teachers from routine and mechanical tasks and promoting greater availability of time for other teaching tasks (Roland, 2006; Singley & Taft, 1995; Warren, Lee, & Najmi, 2014).

On the other hand, one undeniable characteristic of today's education systems is the complexity of the learning goals they propose (Jonassen, 2014; Petropoulou, Vassilikopoulou, & Retalis, 2011). Since assessment must corroborate these learning objectives, identify the factors that influence or affect such learning, and attend to the quality and improvement of the educational intervention (Marín, Guzmán, & Castro, 2012), evaluating becomes a complex and overwhelming process for teachers. It is in this context that rubrics, which describe the degree to which a learner is carrying out a process or producing a product according to clear and consistent performance criteria, allow learning products to be monitored and self-assessed, reduce subjectivity in evaluation, and help students identify errors, understand their causes, and make decisions to overcome them. This turns rubrics into essential tools for ensuring that evaluation is integrated into everyday classroom processes, is continuous, encourages feedback, and is coherent within the teaching-learning process (Petropoulou et al., 2011; Roland, 2006).

Therefore, this report describes a computerized system designed to facilitate the use of rubrics as a means of evaluation, self-assessment, and co-assessment; that is, a system designed to improve the assessment process in an educational program established with a competency-based approach.

2. Method

Next, the steps performed in the design of the Computerized Assessment System for Learning by Rubrics (SIEAR) are specified.

2.1. Analysis

At this stage, through several discussion meetings of the research group, the components and functions of the Computerized Assessment System for Learning by Rubrics (SIEAR) to be taken into account in its design were defined in detail.

2.2. Beta Version: Design and Testing of SIEAR

Once the beta version of the software was technically finished and stable enough for normal use, tests were conducted to identify the features and/or functions that required modification.

2.3. Design and Testing of Version 1.0 of SIEAR

Once the corrections and modifications to the beta version were made, error-free software of suitable quality for end users was achieved. This version was again subjected to tests to identify the features and functions that needed correction.

2.4. Design Module Installation SIEAR

After Version 1.0 was reached, an installation module for the software package was designed using the InstallShield 5.5 Professional Edition installer, for distribution to end users.

2.5. Overview SIEAR

The Computerized Assessment System for Learning by Rubrics (SIEAR) is software that allows the evidence of learning in a teaching course to be assessed, co-assessed, and self-assessed through rubrics, either by teams or individually. It consists of six modules: Course Structure Builder, Rubrics Builder, Settings, Learning Assessment, Reports, and System Generator.

Figure 1 shows the Course Structure Builder module, which allows defining the general characteristics of the course: name, authors, sections, students, teachers, and evidence of learning.

The Rubrics Builder module allows designing, importing, and/or customizing the rubrics with which the evidence of learning for each of the activities proposed in the course will be evaluated; see Figure 2.
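As an illustration of the kind of structure a rubric bank can store, the following sketch models a rubric as a set of weighted criteria, each with one descriptor per performance level. This is our own minimal example, not SIEAR's actual implementation; all names and the sample content are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One row of a rubric: a performance criterion with one
    descriptor per achievement level (index 0 = lowest level)."""
    name: str
    descriptors: list
    weight: float = 1.0

@dataclass
class Rubric:
    """A rubric: named achievement levels (columns) and weighted
    criteria (rows)."""
    title: str
    levels: list
    criteria: list = field(default_factory=list)

    def max_score(self):
        # Highest attainable weighted score: every criterion at the top level.
        top = len(self.levels) - 1
        return sum(c.weight * top for c in self.criteria)

# A small rubric for a written report (hypothetical content):
rubric = Rubric(
    title="Written report",
    levels=["Insufficient", "Sufficient", "Good", "Excellent"],
    criteria=[
        Criterion("Structure", ["No sections", "Some sections",
                                "Clear sections", "Clear and well linked"]),
        Criterion("Sources", ["None cited", "Few cited",
                              "Several cited", "Well integrated"], weight=2.0),
    ],
)
print(rubric.max_score())  # → 9.0
```

Storing rubrics in a structured form like this is what makes a reusable rubric bank possible: the same object can be exported, imported into another course, or customized by editing descriptors and weights.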

Figure 3 shows the Settings module, which allows predetermining some relevant characteristics of the user interface, such as colors, font size, and screen coordinates.

The Learning Assessment module, besides being the main user interface, allows assessing, co-assessing, and self-assessing the learning evidence of the course, either by teams or individually, and storing the results of the evaluations, self-assessments, or co-assessments made; see Figure 4.
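The scoring logic behind such a module can be sketched as follows: each assessor selects one achievement level per criterion, the weighted level indices are summed into a score, and the teacher, self-, and co-assessment scores are combined into a grade. This is a hypothetical illustration under our own assumptions (in particular the 60/20/20 shares), not SIEAR's actual code.

```python
def score_assessment(weights, levels_chosen):
    """Weighted sum of the level index chosen for each criterion.

    weights       -- weight of each criterion
    levels_chosen -- level index selected per criterion (0 = lowest)
    """
    return sum(w * lvl for w, lvl in zip(weights, levels_chosen))

def combined_score(teacher, self_assess, co_assess,
                   shares=(0.6, 0.2, 0.2)):
    """Combine teacher, self-, and co-assessment scores into one grade.
    The shares are an assumption for illustration only."""
    return (shares[0] * teacher + shares[1] * self_assess
            + shares[2] * co_assess)

weights = [1.0, 2.0]                         # criteria weights from the rubric
teacher = score_assessment(weights, [2, 3])  # 1*2 + 2*3 = 8.0
selfa   = score_assessment(weights, [3, 3])  # 9.0
coa     = score_assessment(weights, [2, 2])  # 6.0
print(round(combined_score(teacher, selfa, coa), 2))  # → 7.8
```

Keeping the three assessment sources separate until the final combination is what allows the stored results to later be reported individually, as the Reports module does.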

Figure 5 shows the Reports module, which, besides collecting and displaying the results of the assessments, co-assessments, or self-assessments made, allows backing them up to a destination other than the original.

The System Generator module allows copying the files that make up the Computerized Assessment System for Learning by Rubrics to a selected destination, so that the system can be used in the course for which it was designed; it is illustrated in Figure 6.

3. Graphical Modeling Procedure for Using the Software

Step 1: Define the overall course structure and enroll the students and teachers; see Figure 7.

Step 2: Design the learning activities, performance evidence, assessments, and rubrics; this is illustrated in Figure 8.

Figure 1. General data screen, course structure builder module.

Figure 2. Edit rubrics screen, rubrics builder module.

Figure 3. Configuration menu of the SIEAR interface, settings module.

Figure 4. Screen for assessing aspects of the rubric, learning assessment module.

Figure 5. Results menu for the assessment of evidence, reports module.

Figure 6. Screen "Where do you want to copy the files?", local and web versions; system generator module.

Step 3: Figure 9 shows the self-assessments and/or co-assessments by students and the teacher's evaluations for each piece of performance evidence.

Step 4: Students and the teacher browse reports on the results of the evaluations, self-assessments, and co-assessments conducted; see Figure 10.
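The reporting step can be imagined as a simple aggregation over the stored results: group the recorded scores by student and piece of evidence, then report the average per group. Again, this is a hypothetical sketch of the idea, not SIEAR's implementation; the record layout is our own assumption.

```python
from collections import defaultdict

def build_report(records):
    """records: (student, evidence, assessor_kind, score) tuples.
    Returns each student's average score per piece of evidence."""
    grouped = defaultdict(list)
    for student, evidence, kind, score in records:
        grouped[(student, evidence)].append(score)
    return {key: sum(scores) / len(scores) for key, scores in grouped.items()}

# Hypothetical stored results for one piece of evidence:
records = [
    ("Ana",  "essay", "teacher", 8.0),
    ("Ana",  "essay", "self",    9.0),
    ("Ana",  "essay", "peer",    7.0),
    ("Luis", "essay", "teacher", 6.0),
]
report = build_report(records)
print(report[("Ana", "essay")])  # → 8.0
```

Because every evaluation, self-assessment, and co-assessment is stored with its assessor kind, the same records can also be filtered to produce separate teacher, self-, and peer reports.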

4. Conclusions

We believe that the main contribution of this type of software to the field of educational evaluation is that it represents a viable and effective example of computer use in the development, administration, and scoring of assessment instruments. This mainly improves the reliability of the data; in addition, the stages of collecting and reporting the results are carried out with relative ease and economy of time.

The Computerized Assessment System for Learning by Rubrics (SIEAR) favors the teaching-learning process because it acts as a guide, and it improves the evaluation process by giving the teacher greater objectivity and transparency, so that assessment ceases to be an imposition and becomes a feedback tool that gives the student clear performance standards and reveals the specific aspects in which greater effort must be made.

SIEAR as such represents a good example of how the use of new technologies can become a differentiating factor in the learning assessment process relative to traditional assessment practices, by providing tools that allow time and material resources to be used more effectively and efficiently by both the teacher and the student. The prospects for new versions of SIEAR include, among other things, expanding the content under assessment and generating expert correction systems.

Acknowledgements

This study is part of a project funded by the Secretaría de Educación Pública-Subsecretaría de Educación Superior-Dirección General de Educación Superior Universitaria de México [Mexican Ministry of Education-Department of Higher Education-General Directorate of University Education] (OF-13-6894). Additionally, the third author is supported by a grant from the National Council of Science and Technology of Mexico (Conacyt).

Figure 7. Screens of the course structure builder module (part one).

Figure 8. Screens of the course structure builder module (part two).

Figure 9. Screens of the learning assessment module.

Figure 10. Screens of evaluation reports.

References

  1. Beltrán, J. A. (2011). La educación inclusiva. Padres y Maestros, 338, 5-9.
  2. Beltrán, J. A. (2013). La educación como cambio. Revista española de pedagogía, 71, 101-118.
  3. Ishiyama, J., & Watson, W. L. (2014). Using Computer-Based Writing Software to Facilitate Writing Assignments in Large Political Science Classes. Journal of Political Science Education, 10, 93-101. http://dx.doi.org/10.1080/15512169.2013.859085
  4. Jonassen, D. H. (2014). Assessing Problem Solving. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of Research on Educational Communications and Technology (pp. 269-288). New York: Springer. http://dx.doi.org/10.1007/978-1-4614-3185-5_22
  5. Lei, J., Shen, J., & Johnson, L. (2014). Digital Technologies and Assessment in the Twenty-First-Century Schooling. Contemporary Trends and Issues in Science Education, 41, 185-200. http://dx.doi.org/10.1007/978-94-007-2748-9_13
  6. Marín, R., Guzmán, I., & Castro, G. (2012). Diseño y validación de un instrumento para la evaluación de competencias en preescolar. Revista Electrónica De Investigación Educativa, 14, 182-202.
  7. Petropoulou, O., Vassilikopoulou, M., & Retalis, S. (2011). Enriched Assessment Rubrics: A New Medium for Enabling Teachers to Easily Assess Student’s Performance When Participating in Complex Interactive Learning Scenarios. Operational Research, 11, 171-186. http://dx.doi.org/10.1007/s12351-009-0047-5
  8. Prieto, G., Carro, J., Orgaz, B., Pulido, R. F., & González-Tablas, M. (1993). Uso del hypercard para la construcción de tests informatizados de aptitudes espaciales. Psicológica, 14, 229-237.
  9. Roehrig, G. H., Groos, D., & Guzey, S. S. (2014). Developing Collective Decision-Making Through Future Learning Environments. Contemporary Trends and Issues in Science Education, 41, 227-242. http://dx.doi.org/10.1007/978-94-007-2748-9_16
  10. Roland, J. (2006). Measuring up: Online Technology Assessment Tools Ease the Teacher’s Burden and Help Students Learn. Learning & Leading with Technology, 34, 12-17.
  11. Singley, M. K., & Taft, H. L. (1995). Open-Ended Approaches to Science Assessment Using Computers. Journal of Science Education and Technology, 4, 7-20. http://dx.doi.org/10.1007/BF02211577
  12. Warren, S. J., Lee, J., & Najmi, A. (2014). The Impact of Technology and Theory on Instructional Design Since 2000. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of Research on Educational Communications and Technology (pp. 89-99). New York: Springer. http://dx.doi.org/10.1007/978-1-4614-3185-5_8

NOTES

*Corresponding author.