By Musyimi, JN; Changach, JK (2023).
Greener Journal of Educational Research Vol. 13(1), pp. 65-72, 2023
ISSN: 2276-7789
Copyright ©2023, Creative Commons Attribution 4.0 International.
Exploring Students’
Experiences of Practical Tests Utilization in Formative Evaluations in
Secondary School Computer Studies Curriculum Implementation in Kenya.
James N. Musyimi1,
John K. Changach2
1. PhD in Education Student, School of Education,
Department of Educational Management and Policy Studies, Moi
University, Kenya.
2. Professor, School of Education, Department of
Educational Management and Policy Studies, Moi
University, Kenya.
|
ARTICLE INFO |
ABSTRACT |
|
Article No.: 102623117 Type: Research Full Text: PDF, PHP, HTML, EPUB,
MP3 |
This study set out to explore the experiences secondary school students undergo while undertaking practical tests used in formative evaluations in the Computer Studies curriculum. The study was designed as participatory action research within a qualitative research approach. Two Focus Group Discussions (FGDs), each consisting of eight Form Four Computer Studies students, were selected from a target population of 93 students in Thika Municipality, Kenya. Data on students’ experiences of practical tests was collected using a Focus Group Discussion Schedule, and thematic analysis was used to analyze the data gathered. The majority of negative experiences fell under four themes, namely: frequency of practical tests, students’ experiences as they undertake tests, scoring of tests, and management of feedback from tests. The theme of student preparation for tests was mainly characterized by positive experiences. Based on these findings, the study concluded that there were gaps in how the practical tests used in formative evaluation of the secondary school Computer Studies curriculum are implemented. The study therefore recommends that Computer Studies teachers adjust the strategies they use to implement practical tests so as to better meet the needs of their students.
Accepted: 27/10/2023
Published: 07/11/2023

*Corresponding Author: Musyimi, James Ngeti. E-mail: musyimijd@gmail.com
The utilization of practical tests in Computer Studies enables students to apply the knowledge they have acquired in a concrete manner, which can enhance their comprehension of concepts and help them develop skills that will be useful in their future professions (Daghan & Akkoyunlu, 2014; Komari, Aryanti & Sudjani, 2019; Salma & Prastikawati, 2021). Further, practical tests provide a more accurate assessment of a student's understanding and abilities than traditional written exams, as they measure practical skills and knowledge (Lai, Ferrara, Nichols & Reilly, 2014). Their importance is even greater when they are used for formative evaluations, as they provide immediate feedback, enabling students to pinpoint areas where they need to improve and take corrective measures accordingly (McTighe, 2015).
Literature contends that for practical tests in Computer Studies to achieve the foregoing ends, they need to be implemented within a constructivist framework. This approach makes the tests more engaging and challenging, and sometimes requires more time to complete (Asan & Haliloglu, 2005; McTighe, 2015; Wren, 2015). This raises the question of what positive and negative experiences students have when undertaking such practical tests. Despite this, limited research has been conducted to explore the experiences of students when undertaking these "engaging" and "time-consuming" practical tests in the subject.
The use of practical tests is one of the most common methods of assessing students’ mastery of concepts in secondary school Computer Studies. Since the subject is mainly hands-on, it is a foregone conclusion that accurate measurement of student learning can best be done by determining how well students can accomplish practical tests. Besides, as Espinosa (2015) advances, practical tests integrate teaching, learning, and evaluation, which facilitates differentiation in the teaching and learning process. As such, the probability of achieving desirable learning outcomes is increased.
The theoretical underpinning for the use of practical tests in formative evaluation is that meaningful learning occurs through direct experience and that students learn by discovering knowledge themselves rather than solely receiving it from their teacher (Chan, 2023; Kolb & Kolb, 2013; Vygotsky, 1978). Further, in contrast to traditional written tests, where feedback is limited to test scores, in practical tests feedback is viewed as a crucial tool for improving student learning and teacher instructional practices (Espinosa, 2015). This is echoed by Wiliam and Thompson (2007), who note that practical tests provide teachers and students with clear and inferable evidence of learning progress, which can be used to guide future action. As a result, effective teachers should often use practical tests as a formative evaluation tool to keep track of progress in student learning from different perspectives and under varying instructional conditions.
In many
computing education systems worldwide, practical tests for Computer Studies
normally include projects and lesson-based practical exercises. The main
distinction between a project and a practical exercise lies in the difficulty
level of the task that students are required to accomplish (Bagheri,
Ali, Abdullah & Daud, 2013). Projects are more
intricate, multifaceted in terms of content tested and time consuming to
accomplish. Typically a project takes at
least a week to complete. Practical
exercises are lighter, covering basic concepts and can even be completed in a
single lesson (CSTA, 2011; Tucker, 2009). For these reasons, in Computer
Studies curriculum implementation, practical exercises are more prevalent than
projects (Musyimi, Orodho
& Thuo, 2021; Munyiri,
2014; Kithungu, 2015).
To
effectively carry out practical tests, a social constructivist approach has
been recommended (James, 2008). According to this approach, learners construct
their own knowledge actively and develop their skills by receiving support from
a more proficient peer or skilled teacher. As learners progress, they gradually
become more independent and are encouraged to solve new tasks on their own
(Adam, 2017; James, 2008). Vygotsky (1978) referred to this concept as the
“zone of proximal development.” James (2008) vouches for the use of such a model if practical tests are to yield their intended outcomes. This implies that two elements are essential when practical tests are administered in a constructivist setting: competent teachers and adequate resources.
Competent teachers scaffold learners by designing meaningful tasks, training
them on the use of rubrics or checklists for self-regulation, setting up the
testing environment and providing feedback from the test. Adequate resources
facilitate collaboration and communication and ensure the smooth administration
of tests (James, 2006; Adam, 2017). In Computer Studies practical tests, such
resources include personal computers, up-to-date reference materials, reliable
internet bandwidths, printing facilities, computer networks for interaction,
and sufficient time for completion of tests (Wren, 2015).
The importance of administering practical tests within a constructivist framework cannot be overstated. Nevertheless, research in this area has primarily focused on the challenges involved in their implementation. For instance, Colley (2008) found that teachers in the United States were reluctant to utilize practical tasks that had open-ended features. The reason for this reluctance was that such tasks involved a vast array of possible approaches, solutions, and answers, making them challenging for teachers to create, administer, and grade accurately.
Yildirim & Orsdemir
(2013) discovered that educators lacked guidance on the application of
practical tests, leaving them feeling ill-equipped to use these assessment
methods effectively. Consequently, they were not prepared to train their
students in the use of this type of assessment, resulting in learners being
unable to complete their tasks satisfactorily and not achieving the intended
outcomes.
According to
Kirmizi & Komec (2016),
Turkish high schools faced challenges in conducting practical assessments during
class due to limited time and insufficient materials. Meanwhile, Espinosa
(2015) discovered that the utilization of performance tests in Ecuadorian
secondary school language classrooms was limited, with teachers acknowledging
the benefits of these strategies but hesitant to transition to
performance-based and open-ended formats.
Research conducted in Nigeria, Zambia, and Ghana has investigated the issue of resource availability in the implementation of the Computer Studies curriculum. The studies reveal a shortage of resources, including limited time for practical lessons, a shortage of ICT teachers, and inadequate access to electricity, computers, computer labs, internet, scanners, printers, and projectors (Simulwi, 2018; Nyanja & Musonda, 2019; Aikins & Nyarko, 2019; Ogwo, Maidoh & Onwe, 2015).
The government of Kenya recommends the use of practical tests, such as case studies, projects, and practical exercises, in the secondary school Computer Studies curriculum. This is intended to equip students with skills on how to apply computing knowledge in solving everyday problems (Kenya Institute of Education, 2002). Studies conducted in this context have mainly focused on the challenges encountered in the use of practical tests and on the frequency of their use. Research on frequency reveals that lesson-based practical exercises have not been frequent enough to enhance higher-order thinking skills, such as innovation, among learners (Musyimi, Orodho & Thuo, 2021; Munyiri, 2014; Kithungu, 2015). Studies on the challenges highlight issues such as the lack of resources, including internet access and ICT policy frameworks, inadequate teacher proficiency in assessing problem-solving skills, and a shortage of computers and peripherals (Mwangi, 2013; Awour & Kaburu, 2014; Gichuru, 2014). The lack of studies that delve deeper into the experiences of students with practical tests is a significant concern, as it hinders informed decision-making. It is with this gap in mind that this study aimed to provide a better understanding of students’ experiences with practical tests.
The use of practical tests for students’ formative evaluation in hands-on subjects such as Computer Studies helps provide immediate feedback, allowing students to identify areas where they need improvement and take corrective action. Practical tests also help students stay motivated and engaged in the learning process, as they are able to see the results of their efforts in real time. Further, they can help teachers adjust their teaching strategies to better meet the needs of their students. On this basis, the Kenyan government recommends their use in secondary school Computer Studies curriculum implementation. Despite this entrenchment, it is not clear what experiences students have as they undertake these practical tests. This portends a risk of uninformed decisions in the learning process, which is likely to inhibit the achievement of the objectives of the subject. Against this background, this study set out to explore students’ experiences of practical tests utilization in formative evaluations in secondary school Computer Studies curriculum implementation.
The research methodology used in this study
was participatory action research, which employed a qualitative research
approach with a phenomenological design. Vaughn & Jacquez (2020) endorse
such an approach when one wants to understand a lived experience from the
participants’ point of view and develop action-oriented interventions that are
beneficial and acceptable to all stakeholders. As such, this particular approach was deemed suitable
as it allowed for an in-depth exploration of the experiences of students with
practical tests. By gaining insight from their perspectives, practitioners can
develop interventions that are actionable and acceptable to all involved
parties.
The study targeted 93 Form Four Computer Studies students of the year 2023, in two secondary schools within Thika Municipality in Kenya. Out of this population, two focus group discussions (FGDs), one from a boys’ school and the other from a girls’ school, each consisting of eight Form Four Computer Studies students, were selected. Purposive sampling was used to select the participants included in the FGDs. This sampling technique was used to ensure that the respondents selected were those who could give valuable information with respect to the study objectives.
Data on students’ experiences of practical
tests utilization in formative evaluations in secondary school Computer Studies
curriculum implementation was collected using a focus group discussion
schedule. For each FGD, two sessions were conducted. Before the first session,
the researcher asked each of the participants to bring five copies of practical
tests they had done before. He then asked each participant to randomly select
three copies of the tests and write down their reflections on their experiences
of undertaking them. The researcher also used the first session to build
rapport with the participants and gain their trust. The researcher analyzed
each participant’s notes from the first session and used them to prepare for
the second session. During the second session, the researcher reviewed the
notes from the first session with the participants and engaged in deeper
conversation to get a full picture of their lived experiences of undertaking practical
tests in Computer Studies.
The data was analyzed using thematic analysis
approach. This is a method for analyzing qualitative data that entails
searching across a data set to identify, analyze, and report repeated patterns
(Braun & Clarke, 2012). It is considered an appropriate and powerful method to
use when seeking to understand a set of experiences, thoughts, or behaviors
across a data set (Braun & Clarke, 2012). Accordingly, the approach was
used to single out five essential themes that were dominant in the two FGDs.
The process of analyzing the data started after converting the recorded data stored on a memory card into written text. The data reduction stage commenced with a thorough reading and review of the transcribed data. During the first reading of each transcript, some themes became evident. An open coding approach was then employed to identify additional themes that emerged. The analysis led to the development of five main themes: frequency of practical tests, students’ preparation for practical tests, students’ experiences as they undertake tests, scoring of tests, and management of feedback from tests.
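For readers who want a concrete picture of how open codes roll up into themes during this kind of analysis, the short Python sketch below illustrates the tallying step. It is purely hypothetical: the code labels, the code-to-theme mapping, and the sample segments are invented for illustration, and the authors coded the transcripts manually rather than with a script.

# A minimal, hypothetical sketch of rolling open codes up into themes.
# The labels and mapping below are invented; they are not the study's codebook.
from collections import Counter

# Hypothetical mapping of open codes to the five main themes of this study.
CODE_TO_THEME = {
    "few_practicals": "frequency of practical tests",
    "unclear_timetable": "frequency of practical tests",
    "clear_instructions": "students' preparation for practical tests",
    "shared_computers": "students' experiences as they undertake tests",
    "power_loss": "students' experiences as they undertake tests",
    "unmarked_tests": "scoring of tests",
    "no_review": "management of feedback from tests",
}

def tally_themes(coded_segments):
    """Count how often each theme appears across coded transcript segments."""
    counts = Counter()
    for code in coded_segments:
        theme = CODE_TO_THEME.get(code)
        if theme:  # ignore any code that was not mapped to a theme
            counts[theme] += 1
    return counts

# Example: codes assigned to six excerpts from one (hypothetical) transcript.
segments = ["few_practicals", "power_loss", "unmarked_tests",
            "shared_computers", "no_review", "power_loss"]
for theme, n in tally_themes(segments).most_common():
    print(f"{theme}: {n}")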
Under the “frequency of practical tests” theme, participants indicated that their expectations regarding the frequency of practical tests had not been met. They reported that, out of the lessons allocated for the subject, their teachers had not made it clear which lessons were meant for theory and which ones were for practical work. Teachers would switch between practical tests and theory work sporadically. These findings are supported by the following excerpts:
“When I joined
form one I was so excited to be in a Computer Studies class because I thought
Computer Studies was all about practical work. But when we joined things were
not as we expected. In fact, we spent the whole form one year doing theory
work. We started doing practical tests in Form two.”
“I thought I would spend most of the time doing
computer practical, however I found that was not the case. This was quite
disappointing but I have had to adjust. The number of times I have been exposed
to practical tests has not been adequate to earn practical skills as I expected…”
“The practical tests are inadequate. Some topics such
as desktop publishing need a lot of practice. I am afraid that I have not been
able to gain practical skills as I expected…”
“We are not aware which lesson is reserved for theory
and practical work. It is our teacher who decides when to give us practical
tests.”
Under the “students' preparation for
practical tests” theme, most of the participants felt that they were adequately
prepared to undertake the tests. They reported that, prior to taking practical
tests, their teacher would explain to them the instructions and what was
expected of them in the test. However, there were a few divergent voices that
indicated that this was not the case in all practical tests. This would make
them anxious and sometimes making them to fumble around the test. These
findings are supported by the following sample sentences from participants:
“The instructions for practical tests are always clear
to me. I am always eager to personally perform what the teacher has
demonstrated in the lesson. It is always an exciting experience.”
“For the practical activities we do in class, our
teacher takes us through every aspect of it, step by step, as he highlights
areas we need to look out for. Although sometimes I do not get some things
clearly, I am always afraid to seek for clarification…”
“Test instructions are given well and they are always
clear. As the teacher takes us through them they come out clear. This makes me
to carry out the test with confidence and to perfection…”
“The preparations for practical tests is sometimes
insufficient. For example in spreadsheets, some questions go beyond the basic
skills and concepts that the teacher has taken us through. This makes me get
confused and demoralized…”
Under the “students’ experiences as they undertake tests” theme, participants reported both positive and negative experiences.
The most prominent experiences were those related to sharing of computers, power
fluctuations during tests, adequacy of time allocated for the tests, poor
condition of computers and availability of support from peers and the subject
teacher. These findings are supported by the following excerpts:
“I have been sharing a computer for the lesson based
practical tests. To me this does not feel good since sometimes quarrels emerge
on how we share the computer. The one I share with is experienced in computers
and he says that I am slow in typing. So he wants to be the one using the
computer always…”
“Our teacher puts us in pairs to share one computer. I
get bored when I am paired with someone I do not like working with…”
“When doing the tests, sharing computers helps me
exchange ideas with my colleague and widens my thinking scope…”
“To me sharing a computer for lesson-based practical
tests is a nice thing, since we get to help one another.”
“When doing practical tests we sometimes experience
abrupt power loss. Since the computers in our lab are not connected to power
backups, all the unsaved work gets lost. This is always discouraging since I
have to start doing it all again.”
“My worst experiences in practical tests is when
abrupt power loss occurs especially when I am almost through and I have not
saved my work. It is always disheartening since I have to start all over again.
In most cases the additional time is never enough to complete the work.”
“The time allocated for the practical tests have never
been enough for me. I have been leaving my work unfinished.”
“I have a problem with time management during
practical tests. I have not been able to finish my practical tests within the
allocated time.”
“The time for practical tests is always limited. In
most cases I give out my work without fully answering it. This really brings
down my spirits and makes me very disappointed…”
“Some computers
are slow in processing, this makes me not to finish the test within the
required time. It makes feel bad…”
“Some computers peripherals are old especially the
keyboards, one has to press the keys so hard for you to enter data. Some keys
totally fail to work. This makes me tensed due to the fear of losing marks in a
certain question...”
“…Most of the time I consult my friends whenever I
encounter challenges for not all the time the teacher is available during the
test”
“The teacher is always ready to help me whenever I
seek help on how to handle challenging situations. He always gives a deep
explanation on a problem which helps me understand and be able to solve similar
problem…”
Under the “scoring of practical tests” theme, responses revealed varied experiences. The major concern was that lesson-based practical tests were rarely, if ever, scored. However, a few participants indicated that the scoring of practical tests was fair and transparent. These findings are shown in the following excerpts:
“The teacher does not mark class based practical tests
and it is really discouraging. I am
unable to know my strengths and weaknesses…”
“Only practical tests for end of term exams are
marked. Lesson based practical tests are never marked...”
“Tests are given to us during lessons are not marked.
This is quite discouraging because one cannot know where they messed and where
to make corrections…”
“Even if lesson based practical tests are not marked,
I take them as a way of acquiring skills. So I am comfortable with that. For
the tests that are marked, the marking is always fair and the areas that where
scored are always shown in a scoring guide which is made available to us…”
“For the tests that the teacher marks, he does it
well, fast and with a lot of fairness. This is good since it pushes me to do it
again and correct my mistakes…”
“The lesson based practical tests are never marked.
Sometimes we do not even complete them. When the set time is over, the tests
are never revisited by both the teacher and the students…”
Under the “management of feedback from practical tests” theme, most participants decried the lack of review of practical tests after they are scored. This finding is exemplified in the following sample statements from participants:
“After doing practical tests, there is no time to look
at how they were done including those done in end of term exams where scoring
is usually done. So it is difficult for me to know my weaknesses and where I
went wrong…”
“The feedback from test is sometimes delayed. This
makes me keep on repeating the same mistake…”
“Once the tests are marked, there is little amount of
feedback from the teacher. No time is usually given for discussion of the exam
practical…”
“The practical tests are not revised by the teacher.
Therefore I fail to know my areas of weaknesses. This makes me fail to achieve
my targets…”
The central aim of this study was to explore the experiences secondary school students undergo while undertaking practical tests used in formative evaluations in the Computer Studies curriculum. Participants reported mixed experiences under the five themes generated from the data collected. The majority of negative experiences fell under four themes, namely: frequency of practical tests, students’ experiences as they undertake tests, scoring of tests, and management of feedback from tests. However, the theme of student preparation for tests was mainly characterized by positive experiences. Based on these findings, it can be concluded that there are gaps in how the practical tests used in formative evaluation of the secondary school Computer Studies curriculum are implemented. The study therefore recommends that Computer Studies teachers adopt the following: (1) increase the frequency of, and time allocated for, practical tests; (2) update and maintain physical inputs such as computer peripherals and power backups; (3) ensure that they are available to offer support during tests; (4) score every test and give timely feedback; and (5) set aside time to review scored tests.
REFERENCES

Adam, I. (2017). Vygotsky’s Social Constructivists Theory of Learning. Accessed online from https://mmls.mmu.edu.my/wordpress/1161403286/wpcontent/uploads/sites/35482/2017/09/Vygotsky%E2%80%99s-Social-Constructivists-Theory-of-Learning.pdf
Aikins M. & Nyarko E. (2019).
Challenges facing information and communication technology implementation at
the primary schools. Education Research
and Reviews, 14(13), 484-492.
Asan, A. & Haliloglu,
Z. (2005). Implementing Project Based Learning in Computer Classroom. The Turkish Online Journal of Educational
Technology – TOJET, 4 (3), 68 - 81
Awour, E. & Kaburu, L. (2014). E-Learning in Public Institutions in Kenya:
Implementation Challenges. Journal of
Information Engineering and Applications 4(4), 2225-0506
Bagheri, M., Ali W., Abdullah, M., & Daud,
S. (2013). Effects of Project-based
Learning Strategy on Self-directed Learning Skills of Educational Technology
Students. Contemporary
Educational Technology, 4(1), 15-29
Braun, V. & Clarke,
V. (2012). Thematic analysis. In
H. Cooper (Ed.), APA Handbook of Research Methods in Psychology.
Colley, K. (2008).
Performance-based assessment. Science Teacher, 75(8), 68-72.
Computer Science Teachers Association
Standards Taskforce (2011). CSTA K-12 Computer Science Standards. New York:
ACM. Retrieved on March 12, 2023 from http://csta.acm.org/Curriculum/sub/K12Standards.html.
Daghan G. & Akkoyunlu B. (2014). A Qualitative Study about Performance
Based Assessment Methods Used in Information Technologies Lesson. Educational
Sciences: Theory & Practice, 14(1), 333-338.
Espinosa T.
(2015). Effective Use of Performance-based Assessments to Identify English
Knowledge and Skills of EFL Students in Ecuador. Theory and Practice in
Language Studies 5(12), 2441-2447
Gichuru, F. (2014). Classroom assessment practices in Kenyan secondary schools: Teacher perspective. Unpublished MEd project, University of Nairobi.
Herrera, S.,
Cabral, R., & Murry, K. (2013). Assessment accommodations for classroom
teachers of culturally and linguistically diverse students (2nd ed.). Boston, MA: Allyn & Bacon.
James, M. (2006).
Assessment, Teaching and Theories of Learning. DOI: 10.13140/2.1.5090.8960.
Accessed online from: https://www.researchgate.net/publication/271964452_Assessment_Teaching_and_Theories_of_Learning
James, M. (2008).
Assessment and Learning. DOI: 10.13140/2.1.4566.6082. Accessed online
from: https://www.researchgate.net/publication/271964532_Assessment_and_Learning
KIE (2002). Secondary
School syllabus. Nairobi: Kenya
Institute of Education.
Kirmizi, O. & Komec, F. (2016). An Investigation of Performance-Based Assessment at High Schools. Üniversitepark Bülten, 5(1-2), 53-65.
Kithungu, R. (2015). Factors influencing students’ choice of Computer Studies in public and private secondary schools in Machakos Sub County, Machakos County, Kenya. Unpublished MEd project, University of Nairobi, Kenya.
Kolb, D., & Kolb, A. (2013). The Kolb
learning style inventory 4.0: Guide to theory, psychometrics, research &
applications. Experience Based Learning Systems.
Komari, R.N., Aryanti, T., & Sudjani, S. (2019). Skill and Performance Assessment Using
Problem Based Learning in TVET. Proceedings of the 5th UPI
International Conference on Technical and Vocational Education and Training
(ICTVET 2018).
Koné, K. (2015).
The Impact of Performance-Based Assessment on University ESL Learners'
Motivation. Unpublished M.A. thesis. Mankato.
Minnesota State University
Lai E. R., Ferrara S., Nichols P., &
Reilly A. (2014). The once and future legacy of performance assessment.
Manuscript submitted for publication.
McTighe, J. (2015). Performance Task PD with Jay McTighe: What is a Performance Task? http://performancetask.com/what-is-a-performance-task
Munyiri, M. (2014). Classroom-based assessment of 21st Century skills in secondary schools in Kenya. Unpublished MEd project, University of Nairobi.
Musyimi, J., Orodho,
J. & Thuo, O.M. (2021). Frequency
of Performance-Based Assessments in Secondary School Computer Studies and Its
Influence on Students’ Innovation Capacity in Kandara
Sub-County, Kenya. Journal of Education
and Practice. 12(30).
Mwangi M.T. (2013). Issues and challenges facing implementation
of computer studies curriculum in Kahuro district, Murang’a County. Nairobi. Unpublished MEd project,
Kenyatta University.
Nyanja, N. & Musonda, E. (2019). A review of the ICT subject implementation in schools: A perspective of Lusaka Province (Zambia). Education and Information Technologies, Journal of the IFIP Technical Committee on Education, 18(4), 32-53.
Ogwo, E., Maidoh, E. & Onwe, E. (2015). Computer Studies and Its Impact in Secondary Schools in Umuahia-North Local Government Area of Abia State, Nigeria. IJMECS, 7(6), 16-23.
Salma, N. & Prastikawati, E. (2021). Performance-based assessment in the English learning process: washback and barriers. Getsempena English Education Journal, 8(1), 164-176.
Simulwi, L. (2018). The impact of making ICT subject compulsory at junior secondary in Livingstone District. Unpublished Master’s dissertation. University of Zambia.
Tucker, B. (2009).
Beyond the bubble: Technology and the future of educational assessment. Washington,
DC: Education Sector.
Vaughn, L. M.,
& Jacquez, F. (2020). Participatory Research Methods – Choice Points in the
Research Process. Journal of Participatory Research Methods, 1(1)
Vygotsky, L. S.
(1978). Mind in society: The development of higher psychological processes.
Harvard university press.
Wiliam, D., &
Thompson, M. (2007). Integrating assessment with instruction: What will it take
to make it work? In C. A. Dwyer (Ed.), The
future of assessment: Shaping teaching and learning (pp. 53–82). Mahwah,
NJ: Lawrence Erlbaum Associates.
Wren D.G. (2015). Assessing 21st
Century skills with performance tasks: The five year journey of a large school
division. Virginia educational
leadership, 12(1), 37 – 55
Yildirim, R., & Orsdemir, E.
(2013). Performance tasks as alternative assessment for young EFL learners:
Does practice match the curriculum proposal? International Online Journal of
Educational Sciences, 5(3), 562-574
FOCUS GROUP DISCUSSION SCHEDULE

1. How often does your teacher use practical tests in Computer Studies? Which genre of practical tests is most common? Is there one of the two genres that you find more enjoyable than the other? Give reasons.

2. What are the student experiences with regard to how they get prepared for the tests by their teacher? (Probe for information on student experiences with the instructions/scoring guide/rubrics of the test.)

3. What are the student experiences with regard to how they are organized when undertaking the tests? (Probe for information on how students seek help/collaborate during tests and on the adequacy of resources, e.g. time, computers, internet connectivity.)

4. What are the student experiences regarding how feedback on the practical test is collated? (Probe for information on how the tests are scored, how long it takes to get feedback, and how they use the feedback.)

5. Tell me about one of the most distasteful experiences you have had in practical tests.

6. Of all the things we have discussed above, what is the most enjoyable experience you have had in practical tests in Computer Studies?
James Ngeti Musyimi
James N. Musyimi is a Doctor of Philosophy
(PhD) candidate in Educational Research and Evaluation in the School of
Education, Department of Educational Management and Policy Studies, Moi University, Kenya.
He holds a Master of Education (Educational Research, Evaluation and
Assessment) from Kenyatta University, a Post Graduate Diploma in Education
(PGDE), and a Bachelor of Science degree in Mathematics and Computer Science. He has over 10 years of teaching experience in Computer Studies and Mathematics. He also serves as the ICT in Education Champion in Kandara Sub County, Kenya, and as a consultant in the area of programme evaluations, educational assessments and psychometrics. His research focus and
interest is in innovative assessment strategies and techniques that enable the
acquisition of 21st Century skills among learners.
John Koskey Chang’ach, PhD
Prof. John Koskey Chang’ach is Dean Emeritus, School of Education, at Moi University, Kenya. He currently chairs two ad hoc Moi University Senate committees: one on developing a work plan for engaging alumni and fundraising for the endowment fund, and one on reviewing the terms of service for deans and directors. He is a professor of history, and his research interests are research methods in the history of education and pre-colonial education in Africa.
Cite this Article: Musyimi,
JN; Changach, JK (2023). Exploring Students’
Experiences of Practical Tests Utilization in Formative Evaluations in
Secondary School Computer Studies Curriculum Implementation in Kenya. Greener
Journal of Educational Research, 13(1): 65-72.