As a high school Physics teacher, I find objectively scored questions a boon when evaluating a learner's performance in a system that values the student based on the marks scored in an examination. I must admit that my own evaluation process is not as insightful as those prescribed in this course; however, there are some practices I already follow, and others I would like to improve upon, under the guidance of the assessment system prescribed by the Central Board of Secondary Education (CBSE) curriculum and by my school.
The questions designed for, and implemented in, examinations conducted as part of summative assessment are perhaps the most scrutinized of all forms of assessment. In addition, there is a heavy reliance on questions asked in past public examination papers or those set by textbook authors. The usual practice for most exams is to reuse questions from these dated sources, as they have been vetted by education experts and provide a clear rubric for how we, as teachers, can evaluate a student's understanding of a particular topic.
In my assessment practice, it is easier for me to provide feedback to students through descriptive and numerical questions. It is also easy to explain to the student, and in some cases to the learner's parent(s), the mistakes made and what needs to be done so that the learner does not repeat them. In my experience, the student will, over a period of time, be able to reflect on their own performance (for example, working towards understanding a concept, or improving their time-management skills to complete a test within the given time limit). These goals are similar to what can be achieved through SQ3R, as Dr. Brown described in his second interview. This is because, with these types of questions, it is straightforward to see whether the learner lacks understanding of the concept taught, or whether there are concerns about the child's ability to calculate mathematical solutions to problems.
However, this straightforward gathering of information is reduced when it comes to assessing a student's performance on a multiple-choice question, or even a binary-choice question. I find that in most cases, the evaluation is of surface understanding (remembering and understanding, as per Bloom's Taxonomy) of the topic or concept taught. It is rare to see a multiple-choice question that is analytical without referencing information beyond what is taught in the classroom. Hence, these types of questions help me understand whether a particular topic has been grasped, but I am unable to evaluate whether the student can demonstrate a deeper understanding of that same topic or concept. In the near future, I would like to change this by creating multiple-choice questions that are more analytical and evaluative in nature, and thus make more informed and meaningful decisions.
In conclusion, objectively scored assessment helps me provide precise feedback to students and equips them with the tools to reflect on their own performance in their summative assessments. Above all, this kind of scoring allows me to understand and reflect on my own teaching methodology for certain topics. My area of improvement will be the creation and implementation of binary- and multiple-choice questions, so that I can better answer, for myself, my students, and my school's administration, the question of "who needs to be taught what next".