Online quizzes as a self-assessment tool in higher education: The case of Mathematics in Business Administration

Los cuestionarios online como herramienta de autoevaluación en la educación superior: el caso de las Matemáticas en la Administración de Empresas

Abstract

The discussion about first-year students’ difficulties in mathematics is continuous in the fields of economics, business, administration and engineering. The present study examined the impact of self-assessment online quizzes on students’ mathematical performance and on the development of their persistence in working on a mathematical task. The participants were students attending an obligatory introductory course on Mathematics as part of a program in Business Administration. Their performance was compared with that of the students of the previous academic year, who did not use the online quizzes. Results indicated that the self-assessment online quizzes tended to have a stronger positive impact on the performance of students with medium and high initial performance than on that of students with low performance. The discussion concentrates on how academic math teachers can use technological or digital tools in order to attract and motivate their students to work on mathematical tasks that are expected to be related to their future orientation.

Online quizzes; Self-assessment; Higher education

Resumen

La discusión sobre las dificultades de los estudiantes de primer año es continua en los campos de la economía, los negocios, la administración y la ingeniería. El presente estudio examinó el impacto de los cuestionarios de autoevaluación en línea sobre el rendimiento matemático y la persistencia de los estudiantes para trabajar en una tarea matemática. Esos estudiantes asistieron a un curso introductorio obligatorio de Matemáticas como parte de un programa de Administración de Empresas. Su desempeño se comparó con el desempeño de los respectivos estudiantes del año académico anterior, sin el uso de cuestionarios en línea. Los resultados indicaron que los estudiantes con un rendimiento inicial medio y alto tendían a tener un mayor impacto positivo en su rendimiento mediante el uso de cuestionarios de autoevaluación en línea que los estudiantes con un rendimiento bajo. La discusión se concentra en cómo los profesores académicos de matemáticas utilizan herramientas tecnológicas o digitales para atraer y motivar a sus estudiantes a trabajar en tareas matemáticas que se esperaba que estuvieran relacionadas con su orientación futura.

Cuestionarios en línea; Autoevaluación; Educación Superior

1 Introduction

Mathematics is one of the main modules at all educational levels. In the case of higher education, it is a prerequisite for many different courses (Merve et al., 2020), or it is an introductory course in different fields such as engineering, economics, medicine and business. Recently, there has been a tendency toward an interdisciplinary perspective, which calls for the implementation of mathematical concepts in other fields and vice versa. STEM (Science, Technology, Engineering, Mathematics) is discussed and introduced at all levels of education, and mathematics, as one of its major dimensions, gets a predominant role. Emphasis has been given to the teaching and learning processes in mathematics to increase students’ academic accountability. This role has been upgraded due to the rapid development of technology, the resulting need for a deep understanding of the underlying mathematical theories (Rochsun; Agustin, 2020) and the tendency to use teaching perspectives which fulfil the posed learning goals through investigation and exploration of authentic mathematical problem-solving scenarios. In higher education we “always need to identify the students’ and the teachers’ difficulties, interpret them and try to overcome them by implementing more effective teaching processes” (Panaoura et al., 2024, p. 128). However, we have to face resistance to change from academic teachers whose own experiences as learners took place in more traditional, teacher-oriented environments. Academic teachers often believe that they themselves are successful examples of how to encounter any learning difficulties successfully. They are not always able to realize that they are probably the exception to the usual and normal behavior, given the “didactic contract” and students’ tendency to give up when facing difficulties (Brousseau; Otte, 1989).

A number of studies conducted among students show that there is a perception that mathematics is a quite difficult subject (Langoban, 2020); many departments at the university level report high drop-out rates in mathematics (Pepin; Biehler; Guendet, 2021), and failure in mathematics courses might even cause students to quit their studies (Aguilar, 2021). Universities have taken a number of policy decisions to address this issue, such as asking academic educators of mathematics to announce office hours for tutoring, establishing peer tutoring offered by graduate students, offering review sessions etc. (Opazo et al., 2021). An indication of the success or failure rate at the university level is the assessment results, either summative or formative. The assessment method is used as a source of information for educators, students and the policy makers at the universities who have to take remedial decisions for the quality of education.

Assessment is a crucial issue in both the teaching and the learning of Mathematics (Joglar et al., 2010). In higher education the assessment criteria have been enriched in order to include many different types of assessment methods, which correspond to the different types and levels of the learning outcomes. The learning outcomes frequently correspond to the hierarchical levels of Bloom’s taxonomy (Hyder; Bhamani, 2016). To assess learning, teachers evaluate students’ performance on different types of assessments (quizzes, problem-solving tasks, essays, projects etc.) which are related to the courses’ content. For example, in mathematics tests and quizzes are frequent, while in a literature course essays and projects are frequent. Usually tests, quizzes and projects constitute a part of the formative assessment, and those tools enable students to be more effectively prepared for the final exams. The different types of tasks included in those assessment tools and the different types of feedback are discussed without expecting them to lead to a common recipe suitable in all cases. Feedback is related to students’ self-assessment and the respective self-efficacy beliefs about their abilities. Self-efficacy beliefs are understood here in terms of Bandura’s (1997) theory, according to which they are people’s judgements of their ability to solve a task. At early ages students tend to overestimate their performance, while later they tend to have a more precise self-image, which is sometimes hidden in order to mislead others about their real mathematical performance (Panaoura, 2016). A predominant role in the case of self-assessment methods is played by the analytical or overall feedback given and by the number of possible attempts to rework the same task.

Studies on integrating self-assessment formative quizzes into the curriculum date back many years (Aravinthan; Aravinthan, 2010). With the advancement of technology, studies have examined the effectiveness of using digital self-assessment tools in mathematics education (Taja-On, 2023). Those tools can offer interactive and immediate feedback, fostering an engaging learning environment. Formative assessment and self-assessment are increasingly facilitated through digital development (Bayrak, 2021). During the recent Covid-19 pandemic, the restrictions in education increased the use of online formats of evaluation (Matzavela; Alepis, 2023). As Taja-on and Miras-Lamayon (2022) indicated, students tend to dislike exams and tests used for summative evaluation. However, tests and quizzes, as parts of formative evaluation, can be opportunities for practice and can function as one of the most powerful learning tools (Murphy; Little; Bjork, 2023), as they enable participants to self-reflect on their own work.

In the present study we concentrated our attention on the use of online quizzes as part of the formative assessment, aiming to further engage students in the learning process. The study took place at a private university in Cyprus with the participation of newly enrolled Business Administration students. These students attended an obligatory, introductory course in Mathematics during the first semester of their studies (AMAT100: Business Mathematics). The present study was conducted during the fall semester 2021 and the fall semester 2022, after discussions at the specific private university about the increase in the number of drop-out students in Business and Engineering, their problematic performance in mathematics and the necessity of introducing remedial tools. A theoretical motivation of the present study was the ongoing discussion about the role of assessment in higher education. A cooperation between the academics in the domain of mathematics and the academics in the domain of business administration is expected to enrich the courses with the appropriate content, teaching and assessment methods. The long-term goal was the improvement of the quality of the university students’ academic performance in Mathematics, and the short-term goal was to examine the effectiveness of the online quizzes which were used for self-assessment. The use of online quizzes, as part of self-assessment, aimed to enable students to identify key concepts, practise and understand the performance expected of them at their exams. We adopted the assumption that a learner’s higher engagement could correspond with better learning outcomes (Ifenthaler; Gibson; Zheng, 2020).

2 Theoretical Framework

2.1 The teaching of mathematics in higher education

Teaching in higher education, according to organizations which examine the quality of education (such as the Cyprus Agency of Quality Assurance and Accreditation in Higher Education), requires teaching and assessment methods that cover various cognitive levels, ranging from recall of knowledge to higher-order thinking, such as analysis, synthesis and evaluation. Changes in higher education and the use of alternative teaching methods are related to academics’ professional development in the domain of teaching (Santos et al., 2019). The lifelong professional development of academics has received much attention in recent years (Chadha, 2021). The Agenda for Higher Education (European Commission, 2017) underlines the role of continuous pedagogical training in higher education. Sometimes academics have the relevant knowledge but lack the encouragement to implement teaching methods which activate students through problem-solving, project-based and inquiry-based processes (Wondem, 2022). The teaching methods affect and are at the same time affected by the assessment methods. So, teaching methods which aim to develop critical and creative thinking need to be accompanied by related innovative assessment methods.

A cohort of students in introductory mathematics may not be particularly motivated by or interested in Mathematics. The emphasis on the formalistic perspective of mathematics in secondary education leads students to consider mathematics a hard subject without direct practical implications (Panaoura et al., 2024). Many universities face the undesirable fact that a part of the newly enrolled students drop out one or two semesters after beginning their studies due to the difficult compulsory mathematics courses (Lithner, 2011). Obviously, the role of higher education is not to criticize or change secondary education. Academics have to handle the situation and show persistence in overcoming the teaching obstacles by implementing innovative teaching processes which motivate, activate and attract students so that they face their learning difficulties.

Opstad and Arethun (2019) examined the factors influencing students’ choice of mathematical courses at high school and the impact this has on performance in business courses in Norway. Their results indicated that students attending courses in theoretical mathematics obtained significantly better marks in bachelor studies in Business Administration in comparison to those who attended applied mathematics. Students attending a business administration course may encounter various difficulties with mathematics due to many interrelated factors. Understanding the teaching and learning challenges can help educators develop targeted interventions to support students. Business administration often involves complex mathematical models. Students may have difficulties in understanding and examining those models if they lack a strong background in advanced mathematical concepts or in foundational mathematical knowledge. On the other hand, there is a need to relate the theory to its practical application; otherwise some students are not able to realize the practical application of mathematical concepts in a business context and are not able to perceive the mathematical concepts as relevant to their future careers. In all cases, the teaching orientation has to be aligned with the assessment methods.

2.2 Assessment methods in higher education

Higher education students around the world often experience a testing culture, including many summative assessments and a tendency to learn to the test (Jessop; Tomas, 2017). “Summative assessment refers to an assessment of achievement or outcome at the end of the instructional defined period while formative assessment refers to the ongoing assessment during the learning process” (Ifenthaler; Schumacher; Kuzilek, 2023, p. 257). Teachers in higher education without a pedagogical background often tend to employ methods of teaching derived from their own experiences as students (Joglar et al., 2010), when the use of technology was limited and paper-and-pencil examinations and tests were usual. Baartman and Quinlan (2023) argue that assessment and feedback practices in higher education need to be transformed to better promote learning, assure assessment rigour and communicate students’ employability.

Pitt and Quinlan (2022) emphasized the role of assessment and feedback systems which are designed to ensure that students take ownership of their learning. Online quizzes have a variety of attributes that affect their efficacy as teaching and learning tools (Murphy; Little; Bjork, 2023). Online quizzes allow instructors to gauge students’ understanding throughout the course. Quizzes may cover different parts of the course content, be presented in a multiple-choice or a short-answer format, and take place during or after a presentation of the content. Teachers have to make many different decisions, such as the number of quizzes, the format and the feedback, which impact the quality of learning. Online quizzes can be accessed from anywhere with an internet connection, offering students the flexibility to complete assessments at their own pace. This accessibility is particularly advantageous in higher education, where students have diverse schedules and commitments. Online quizzes can be designed to accommodate different learning paces. Students who need more time can work at their own pace, promoting a more inclusive learning environment (Cohen; Sasson, 2016).

In the case of an online quiz, students may repeat it as many times as is necessary in order to feel that they understand the process related to the concept, or they may repeat it as many times as is necessary to achieve the desired score (Griffin; Gudlaugsdottir, 2006). In both cases they have valuable practice. According to Ender, Gaschler and Kubik (2021), online quizzes have recently been acknowledged as an economical method with positive attributes. At the same time, some argue that online quizzes may be more effective at assessing lower-order cognitive skills than higher-order skills like critical thinking, analysis and synthesis. Usually online quizzes offer direct feedback. Through this process we have a self-assessment method which promotes the development of metacognition (Panaoura, 2012) and enables students to handle difficulties they face during practice. Self-assessment is the evaluation of the “worth’s of one’s performance and the identification of one’s strengths and weaknesses with a view to improving one’s learning outcomes” (Sipos; Ionita; Kutzschebauch, 2023, p. 185). Especially before the exams, self-assessment enables students to enhance their performance while focusing on the difficult study points (Matzavela; Alepis, 2023). As posed by Barana, Boetti and Marchisio (2022), self-assessment involves students themselves as agents of the evaluation processes. When students actively participate in assessing their own progress, they feel a greater sense of ownership over their learning, leading to sustained effort.

Online quizzes provide instant feedback to students. Giving students feedback helps them understand the level achieved relative to the expected aims (Barana; Boetti; Marchisio, 2022). Wiliam and Thompson (2008) argue that formative assessment and the respective feedback help students understand the standards of the learning outcomes and guide them to act to achieve the teaching goals. There are different types of feedback with different effects (Sipos; Ionita; Kutzschebauch, 2023). Effective feedback should be timely, specific and provide guidance on how to improve. In the case of direct feedback the responses are predesigned: we usually refer either to right/wrong feedback, where students realize whether the provided answer is correct, or to corrective feedback, in which the correct answer is presented to them. By contrast, explanatory feedback concentrates on justifying and explaining why an answer is wrong. Rich et al. (2017) argue that explanatory feedback is the most effective, as it may trigger a higher degree of critical evaluation of students’ responses. However, there is the danger that students do not self-reflect on their responses in order to find or realize their mistakes by themselves.

Aravinthan and Aravinthan (2010) evaluated the overall performance of students in two engineering courses where self-assessment quizzes were made available through the e-learning system. Results indicated that nearly 50% of the students did not attempt the quizzes. This probably underlines the need to integrate elements that enhance learners’ motivation, in line with the expectancy-value theory of motivation (Wigfield; Eccles, 2000). Students need to understand the relevance of the assessment to their learning objectives and be provided with incentives for successful completion.

3 Methodology

The sample of the present study consisted of 94 students attending an obligatory, introductory math course during the Fall semester in 2021 and 101 students who attended the same course during the Fall semester in 2022. The course was part of the structure of a bachelor degree in Business Administration. The content of the particular face-to-face course was based on linear algebra and its applications to Business.

The aim of the research during the fall semester 2022 was to design and implement online self-assessment quizzes that students would complete within a predefined time period (the deadline was before the final exams). The students’ performance in the course was compared with the respective performance of the students of the previous academic year in order to further understand the students’ initial academic situation. During the fall semester 2021, students’ academic performance was assessed through two midterms during the semester and a final written exam. The duration of each midterm was 1.5 hours and it consisted of exercises on 2 or 3 sections, while the duration of the final exam was 3 hours and it evaluated the students’ overall performance on the content of the course. Due to university rules, the content of the exams cannot be published. During the fall semester 2022 we introduced the self-assessment online quizzes after the midterm exam. The online quizzes were constructed by the researchers and implemented by two of them, who were the teachers of the course during that semester.

A first feature of the quizzes was that the correct answer consisted of a set of individual number entries. Although students would consider their multiple entries as a single answer, the exercise was designed to be capable of reading individual entries and assigning partial credit to partially correct answers. A second feature was that at the end of the assignment students would be able to view their grade, and the system would identify and report their mistakes to them, indicating the corresponding correct answer. Hence, real-time feedback was possible after completing the assignments. To this end, the assignments were available for students to attempt an unlimited number of times within the deadline, and their assignment grade was decided to be their highest score per assignment. To avoid copying answers from previous attempts, the individual exercises were designed to use random number generators to provide different versions of the exercises that resulted in completely different answers. In this way students were able to practice through this repeated process, aiming at improvement. Four indicative examples of the platform are presented in Figure 1. It is important to clarify that students were first asked to use their mobile devices individually during the last 15 minutes of the class in order to take a quiz. Through this process we confirmed that they did not face any technical or practical issues, and we had the opportunity to present to them the pedagogical advantages of using the specific self-assessment tools, as part of their study, for practice and improvement of their performance. We pointed out to them that we would be able to discuss any questions during office hours or at the beginning of the next class.
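The three design features described above (randomised exercise versions, per-entry partial credit, and retention of the highest score across attempts) can be sketched as follows. This is a hypothetical illustration, not the actual platform code; all names here (`make_version`, `grade_entries`, `QuizRecord`) are invented for the example.

```python
import random

def make_version(seed):
    """Generate one randomised version of a two-entry exercise."""
    rng = random.Random(seed)
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    # The correct answer is a set of individual number entries.
    return {"question": f"Solve {a}x = {a * 3} and {b}y = {b * 5}",
            "answer": [3.0, 5.0]}

def grade_entries(correct, submitted, tol=1e-6):
    """Score each numeric entry independently and return partial credit."""
    hits = sum(1 for c, s in zip(correct, submitted) if abs(c - s) <= tol)
    return 100.0 * hits / len(correct)

class QuizRecord:
    """Keep a student's highest score across unlimited attempts."""
    def __init__(self):
        self.best = 0.0

    def attempt(self, score):
        self.best = max(self.best, score)
        return self.best
```

For example, submitting [3.0, 4.0] against the correct answer [3.0, 5.0] earns 50% rather than zero, and repeated attempts on fresh randomised versions can only raise the recorded grade, which mirrors the practice-oriented purpose of the quizzes.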

Figure 1 – Indicative examples from the course platform, which is accessed only through the university system. Source: learn.frederick.ac.cy (2024)

Methods of analysis: Firstly, we analysed students’ performance at the midterms and the final exams for both semesters. We further examined the students’ performance in the fall 2022 after the intervention program with the online quizzes. We used cluster analysis in order to divide the students into three groups (low, medium and high) with respect to their performance at the first midterm, and to examine their further academic behaviour with the use of the online quizzes and their performance at the final exam. Additionally, we compared the performance of students who worked on all the quizzes with that of students who did not.
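The analysis pipeline described above can be sketched as follows, on synthetic grades rather than the study’s data. The helper `kmeans_1d` is a minimal stand-in for the cluster analysis, and the ANOVA step uses `scipy.stats.f_oneway`; all numbers below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def kmeans_1d(x, k=3, iters=50):
    """Minimal one-dimensional k-means (stand-in for the cluster analysis)."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

# Synthetic midterm/final grades for 100 students (illustrative only).
rng = np.random.default_rng(0)
midterm = rng.uniform(0, 100, 100)
final = np.clip(0.8 * midterm + rng.normal(0, 10, 100), 0, 100)

labels = kmeans_1d(midterm)                      # low / medium / high groups
groups = [final[labels == j] for j in range(3)]
f_stat, p_value = stats.f_oneway(*groups)        # one-way ANOVA on final exam
print(f"F(2, {len(final) - 3}) = {f_stat:.2f}, p = {p_value:.4f}")
```

Clustering on the pre-intervention midterm and testing the post-intervention final exam keeps the grouping variable independent of the outcome being compared, which is the design choice behind the reported F-statistics.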

4 Results

Firstly, it was important to examine the students’ mathematical performance during the previous academic year, before the intervention program (Fall 2021). The evaluation criteria for the course during the specific semester consisted of two midterm tests and the final paper-and-pencil exams. At the first midterm test 93 students participated, with a minimum grade of 1 and a maximum of 100. The mean performance was 59.9 (SD=30.34). At the second midterm test 83 students participated, with a minimum grade of 1 and a maximum of 97. The mean performance was 52.42 (SD=28.59). The mean grade at the final exams was 58.31 (SD=28.23) and the students’ final mean grade was 48.97 (SD=31.48). It is important to underline that, based on this result, 51% of the students did not succeed in the specific course and they had to attend it again in the spring semester 2022. The students who took part in the first midterm test were clustered into 3 groups (21 low performance, 23 medium performance and 49 high performance). The analysis of variance (ANOVA) used to compare the mean performance of the three clusters in the respective final exam indicated that there were statistically significant differences (F(2,90) = 72.047, p<0.01) between all the groups (X̄low = 19.05, X̄medium = 42, X̄high = 73). This result indicated that the teaching processes during the semester were more effective for the students who had higher performance in mathematics. The interindividual differences in performance remained and were actually reinforced.

The next year, during the academic year 2022-2023 (Fall 2022), 101 participants took part in the first midterm. The mean performance was 60.83 (SD=25.78), while the mean performance at the final exam was 65.33 (SD=23.78). The mean final grade was 54.13 (SD=22.05). Students worked on three online self-assessment quizzes after the midterm test. In the first one 98 students took part (X̄=84.03, SD=22.49), in the second one 87 students (X̄=81.99, SD=25.40) and in the third one 90 students (X̄=90.52, SD=20.83). The average performance at the three quizzes was 87.91 (SD=16.80). The high means were due to the students’ tendency to persist until they succeeded in solving them correctly.

Cluster analysis divided the sample into three groups according to their performance at the midterm (before the intervention): 14 participants formed the low-performance group, 59 the medium-performance group and 39 the high-performance group. The ANOVA comparison of the three groups’ mean performance on the final exam (after the intervention with the online quizzes) indicated statistically significant differences between all the groups (F(2,92) = 62.43, p<0.01). The respective means were X̄low=29.92, X̄medium=52.15, X̄high=79.27. The findings of the comparison of the three groups regarding their final grade in the course were similar (F(2,97) = 87.13, p<0.01). The means of the three groups were X̄low=26.92, X̄medium=53.93, X̄high=80.71. It is important that the interindividual differences in students’ performance at the midterm test remained similar in the performance on the final test after the implementation of the activity with the online quizzes. The students with initially high performance were in most cases the students with the highest performance after the intervention.

We examined the differences between students who took part in all three quizzes (77 students) and those who did not participate in all the quizzes (30 students: 10 took part in two quizzes, 12 in one quiz and 8 made no attempt). They had statistically significant differences (F=4.830, p<0.05) in their performance at the final exam. Those who participated in the activity had a higher mean grade (X̄=71.56, SD=19.15) than those who did not (X̄=45.83, SD=26.60). We further examined the previous academic performance of the students in relation to their participation in all the quizzes. The crosstabs analysis indicated that 42.3% of the students who did not take part in the quizzes had low performance, 23.1% medium and 34.6% high performance. On the contrary, only 4% of the students who took part in the quizzes had low performance, 70.7% had medium performance and 25.3% had high performance. This result did not indicate any positive effect of the program in the case of students with low performance. It highlighted the lack of persistence of students with low performance in attempting to overcome their difficulties in mathematics. The group of students with low performance was divided into 78.6% who did not take part and 21.4% who took part. The group of students with high performance consisted of 32.1% who did not take part and 67.9% who had the experience of the online quizzes. The behaviour of the students with medium performance is important: the vast majority (89.8%) took part in the quizzes. This is an indication that the specific intervention was constructed in a way which was more suitable to attract students with medium and high performance in mathematics.
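The percentages reported above correspond to column-wise proportions of a participation-by-performance crosstab. A sketch with made-up counts (not the study’s data) shows the computation:

```python
import pandas as pd

# Made-up labels for 100 hypothetical students (illustrative only).
df = pd.DataFrame({
    "participated": ["yes"] * 75 + ["no"] * 25,
    "group": (["low"] * 3 + ["medium"] * 53 + ["high"] * 19
              + ["low"] * 11 + ["medium"] * 6 + ["high"] * 8),
})

# normalize="columns" turns counts into within-column percentages,
# i.e. the share of each performance group among (non-)participants.
pct = pd.crosstab(df["group"], df["participated"], normalize="columns") * 100
print(pct.round(1))
```

Normalizing by columns (rather than rows) is what lets each participation status be read as a composition summing to 100%, matching the way the results are phrased in the text.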

5 Discussion

Self-assessment is particularly relevant to the development of students’ capacity to learn how to learn and to learn autonomously (Barana; Boetti; Marchisio, 2022). Research suggests that self-assessment quizzes can positively impact mathematics learning outcomes (Martinez et al., 2020). The present study indicated that in the case of higher education there is a need for intervention programs using self-assessment quizzes. The duration of these types of interventions has to be examined. Probably students who used self-assessment quizzes developed a better understanding of their learning processes, identified areas of weakness and took proactive steps to address gaps in their mathematical knowledge. However, it seems that these types of intervention are more suitable and effective for students who already have a medium or high performance in mathematics and are willing to work harder in mathematics. There is a group of students with low initial performance in mathematics who choose to study in a course which requires mathematics as a prerequisite without having the motivation to face their difficulties, either because they do not realize them or due to their established negative self-efficacy beliefs (Gagatsis et al., 2022). The literature acknowledges challenges associated with self-assessment in mathematics, such as students’ overestimation or underestimation of their abilities (Demetriou et al., 2024) and the necessity to self-regulate the cognitive processes in order to face difficulties. The increase in the number of attempts before the final exams indicated that students tried to practise in order to be better prepared for the final exams (Butler; Roediger, 2007).

Formative assessment is vital for supporting university students’ performance, and the use of online self-assessment quizzes can contribute to this goal, as “self-assessments in higher education are increasingly facilitated via digital learning environments” (Ifenthaler; Schumacher; Kuzilek, 2023, p. 264). However, the most important finding of the present study is that the use of online quizzes can be positively effective for students with specific prior characteristics. Students with medium and high performance were more likely to use them and to recognize them as useful tools. The students with low performance at the initial midterm test showed a refusal to use them, probably because they had already developed very low self-efficacy beliefs about overcoming their difficulties and succeeding in the course. Unfortunately, in educational settings we cannot find teaching “recipes” which are appropriate for all students. The time and the effort invested in creating interactive assessments can prove to be a valuable resource for teaching and learning, which benefits both academic instructors and students (Murphy; Little; Bjork, 2023). We have to continue working on attracting students with low performance in mathematics who probably cannot understand its relation to their future occupation and orientation. Additionally, we have to examine further how distinct groups of learners engage differently with self-assessment processes. Similarly, distinct groups of learners probably show differences in their navigation behaviour in a digital environment. The findings of the present study are in line with previous research suggesting that self-testing positively impacts learning performance (e.g. Rodriguez et al., 2021); however, the impact is not significant for students who already have a low performance.

We do not suggest the exclusive use of online quizzes for students’ formative assessment. Like all other methods, the specific method has strengths and limitations. Its exclusive use may discourage a deeper understanding of the mathematical concepts and can run counter to the emphasis of mathematics education on problem solving. Understanding the reasoning behind an answer is essential for providing meaningful feedback. The educational issues at all levels of education require further critical reflection concerning the “multifaceted” nature of learning engagement; that is, the construct refers to a learner’s ability to interact with learning artefacts in a continuous learning process on a behavioural, cognitive, emotional and motivational level (Ifenthaler; Schumacher; Kuzilek, 2023).

The initial findings of the present study shed light on students’ tendency to use online self-assessment methods and its possible impact on final exam performance. It is evident that students engage differently. The findings call for further studies concentrating on developing tools with respect to students’ inter-individual characteristics, such as cognitive styles, learning styles, previous experiences, attitudes, beliefs and digital knowledge. A follow-up aim of the present study is to develop adaptive online quizzes in which the difficulty of the next task varies according to the individual learner’s performance: if a learner consistently performs well, the test, constructed on the basis of expected behaviour, can adjust by presenting more challenging questions, or simpler ones otherwise. Additionally, a future study could collect data from multiple sources, such as self-reports, interviews about students’ beliefs and self-efficacy beliefs, and a learning analytics system with more data on the use of the quizzes. All those data could be analysed in relation to academic performance in the course in order to guide our attempt to offer adaptive feedback.
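The adaptive mechanism outlined above can be sketched in a few lines of code. The `AdaptiveQuiz` class below is a hypothetical illustration, not the implementation used in this study: the three difficulty levels, the advance-after-two-correct rule and the step-back-on-error rule are all illustrative assumptions.

```python
class AdaptiveQuiz:
    """Minimal sketch of an adaptive-difficulty quiz: the next task's
    difficulty rises after a streak of correct answers and falls after
    an error. Levels and thresholds are illustrative assumptions."""

    def __init__(self, levels=("easy", "medium", "hard"), streak_to_advance=2):
        self.levels = levels
        self.level = 0                  # start at the easiest level
        self.streak = 0                 # consecutive correct answers at this level
        self.streak_to_advance = streak_to_advance

    def next_difficulty(self):
        """Difficulty of the next question to present."""
        return self.levels[self.level]

    def record_answer(self, correct):
        """Update the difficulty level after each answer."""
        if correct:
            self.streak += 1
            if self.streak >= self.streak_to_advance and self.level < len(self.levels) - 1:
                self.level += 1         # consistent success: harder questions
                self.streak = 0
        else:
            self.streak = 0
            if self.level > 0:
                self.level -= 1         # an error: step back to simpler questions
```

For example, a learner who answers two questions correctly is moved from “easy” to “medium”, and a subsequent wrong answer moves them back to “easy”; a learning analytics system could log each transition alongside the quiz data discussed above.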

References

  • AGUILAR, J. High school students' reasons for disliking Mathematics: the intersection between teachers' role and students' emotions, beliefs and self-efficacy. International Electronic Journal of Mathematics Education, Eastbourne, v. 16, n. 3, p. 1-11, 2021.
  • ARAVINTHAN, V.; ARAVINTHAN, T. Effectiveness of self-assessment quizzes as a learning tool. The Higher Education Academy Engineering Subject Centre, 2010. p. 31-39.
  • BAARTMAN, L.; QUINLAN, K. Assessment and feedback in higher education reimagined: using programmatic assessment to transform higher education perspectives: policy and practice in higher education. New York: Taylor and Francis, 2023.
  • BANDURA, A. Self-efficacy: the exercise of control. New York: Freeman, 1997.
  • BARANA, A.; BOETTI, G.; MARCHISIO, M. Self-assessment in the development of mathematical problem-solving skills. Education Sciences, Washington, v. 12, n. 2, p. 81-108, 2022.
  • BAYRAK, F. Investigation of the web-based self-assessment system based on assessment analytics in terms of perceived self-interventions. Technology, Knowledge and Learning, London, v. 27, n. 1, p. 639-662, 2021.
  • BROUSSEAU, G.; OTTE, M. The fragility of knowledge. In: BISHOP, A.; MELLIN-OLSEN, S.; VAN DORMOLEN, J. (eds.). Mathematical knowledge: its growth through teaching. Netherlands: Kluwer, 1989. p. 13-38.
  • BUTLER, A.; ROEDIGER, H. Testing improves long-term retention in a simulated classroom setting. European Journal of Cognitive Psychology, London, v. 19, n. 4, p. 514-527, 2007.
  • CHADHA, D. Continual professional development for science lecturers: using professional capital to explore lessons for academic development. Professional Development in Education, Irvine, v. 1, n. 1, p. 1-16, 2021.
  • COHEN, D.; SASSON, I. Online quizzes in a virtual learning environment as a tool for formative assessment. Journal of Technology and Science Education, Spain, v. 6, n. 3, p. 188-208, 2016.
  • DEMETRIOU, A.; SPANOUDIS, G.; GREIFF, S.; PANAOURA, R.; VAINIKAINEN, M.; KAZI, S.; MAKRI, N. Educating the developing mind: a developmental theory of instruction. London: Taylor & Francis, 2024.
  • ENDER, N.; GASCHLER, R.; KUBIK, V. Online quizzes with closed questions in formal assessment: how elaborate feedback can promote learning. Psychology Learning & Teaching, Thousand Oaks, v. 20, n. 1, p. 91-106, 2021.
  • EUROPEAN COMMISSION. Renewed Agenda for Higher Education. Brussels: The Council of the European Parliament, 2017. (Vol. 5).
  • GAGATSIS, A.; PANAOURA, A.; NICOLAOU, S.; ELIA, I.; DELIYIANNI, E.; STAMATAKIS, S. The role of representations in the understanding of mathematical concepts in higher education: the case of function for economics students. Journal of Research in Science Mathematics and Technology Education, Harrisonburg, v. 5, n. 1, p. 69-92, 2022.
  • GRIFFIN, F.; GUDLAUGSDOTTIR, S. Using online randomized quizzes to boost student performance in mathematics and operations research. In: INTERNATIONAL CONFERENCE ON INFORMATION BASED HIGHER EDUCATION AND TRAINING, 7., 2006, Sydney.
  • HYDER, I.; BHAMANI, S. Bloom's taxonomy (cognitive domain) in higher education settings: reflection brief. Journal of Education and Educational Development, London, v. 3, n. 2, p. 288-300, 2016.
  • IFENTHALER, D.; GIBSON, D.; ZHENG, L. Attributes of engagement in challenge-based digital learning environments. In: ISAIAS, P.; SAMPSON, D.; IFENTHALER, D. (eds.). Online teaching and learning in higher education. New York: Springer, 2020. p. 81-91.
  • IFENTHALER, D.; SCHUMACHER, C.; KUZILEK, J. Investigating students' use of self-assessment in higher education using learning analytics. Journal of Computer Assisted Learning, New York, v. 39, n. 1, p. 251-268, 2023.
  • JESSOP, T.; TOMAS, C. The implications of programme assessment patterns for student learning. Assessment and Evaluation in Higher Education, New York, v. 42, n. 6, p. 990-999, 2017.
  • JOGLAR, N.; MARTIN, D.; COLMENAR, M.; MARTINEZ, I.; HIDALGO, I. iTest: online assessment and self-assessment in mathematics. Interactive Technology and Smart Education, Leeds, v. 7, n. 3, p. 154-167, 2010.
  • LANGOBAN, M. What makes mathematics difficult as a subject for most students in higher education? International Journal of English and Education, Nevada, v. 9, n. 3, p. 214-220, 2020.
  • LITHNER, J. University mathematics students' learning difficulties. Education Inquiry, New York, v. 2, n. 2, p. 289-303, 2011.
  • MARTINEZ, V.; MON, M.; ALVAREZ, M.; FUEYO, E.; DOBBARO, A. E-self-assessment as a strategy to improve the learning process at University. Educational Research International, v. 1, p. 1-9, 2020.
  • MATZAVELA, V.; ALEPIS, E. An application of self-assessment of students in mathematics with intelligent decision systems: questionnaire, design and implementation at digital education. Education and Information Technologies, Plymouth, v. 28, n. 1, p. 15365-15380, 2023.
  • MERVE, R. L.; GROENEWALD, M.; VENTER, C.; SCRIMNGER-CHRISTIAN, C.; BOLOFO, M. Relating student perceptions of readiness to student success: a case study of a mathematics module. Heliyon, New York, v. 6, n. 11, p. 1-8, 2020.
  • MURPHY, D.; LITTLE, J.; BJORK, E. The value of using tests in education as tools for learning - not just for assessment. Educational Psychology Review, London, v. 35, p. 89-110, 2023.
  • OPAZO, D.; MORENO, S.; ALVAREZ-MIRANDA, E.; PEREIRA, J. Analysis of first-year university student dropout through machine learning models: a comparison between Universities. Mathematics, Basel, v. 9, n. 20, p. 2599-2626, 2021.
  • OPSTAD, L.; ARETHUN, T. Factors influencing students' choice of mathematical level at high school and the impact this has on performance on business courses in Norway. In: WEI INTERNATIONAL ACADEMIC CONFERENCE, 2019, Rome. p. 28-40.
  • PANAOURA, A. Improving problem solving ability in mathematics by using a mathematical model: a computerized approach. Computers in Human Behavior, Amsterdam, v. 28, n. 6, p. 2291-2297, 2012.
  • PANAOURA, A. Self-regulation strategies during problem-solving by using an inquiry-based approach: make sense of problems and persevere in solving them. In: NEWTON, K. (ed.). Problem solving: strategies, challenges and outcomes. Nova Science Publishers, 2016. p. 197-210.
  • PANAOURA, A.; CHARALAMBIDES, M.; TSOLAKI, E.; PERICLEOUS, S. First-year engineering students' affective behaviour about mathematics in relation to their performance. European Journal of Science and Mathematics, London, v. 12, n. 1, p. 128-138, 2024.
  • PEPIN, B.; BIEHLER, R.; GUEUDET, G. Mathematics in engineering education: a review of the recent literature with a view towards innovative practices. International Journal of Research in Undergraduate Mathematics Education, London, v. 7, n. 2, p. 163-188, 2021.
  • PITT, E.; QUINLAN, K. Impacts of higher education assessment and feedback policy and practice on students: a review of the literature 2016-2021. York: Advance HE, 2022.
  • RICH, P. R.; VAN LOON, M. H.; DUNLOSKY, J.; ZARAGOZA, M. S. Belief in corrective feedback for common misconceptions: implications for knowledge revision. Journal of Experimental Psychology: Learning, Memory and Cognition, Washington, v. 43, n. 3, p. 492-501, 2017.
  • ROCHSUN, R.; AGUSTIN, R. The development of e-module mathematics based on contextual problems. European Journal of Education Studies, London, v. 7, n. 10, p. 400-412, 2020.
  • RODRIGUEZ, F.; KATAOKA, S.; RIVAS, M.; KADANDALE, P.; NILI, A.; WARSCHAUER, M. Do spacing and self-testing predict learning outcomes? Active Learning in Higher Education, London, v. 22, n. 1, p. 77-91, 2021.
  • SANTOS, A.; GAUSAS, S.; MACKEVICIUTE, R.; JOTAUTYTE, A.; MARTINAITIS, Z. Innovating professional development in higher education. Luxembourg: Publications Office of the European Union, 2019.
  • SIPOS, K.; IONITA, G.; KUTZSCHEBAUCH, F. Online self-assessment in Mathematics at the University of Bern. iJET, [S. l.], v. 18, n. 3, p. 185-191, 2023.
  • TAJA-ON, E. Digital literacy on mathematical performance of college students in the course Mathematics in the modern world. School of Education Research Journal, Malaybaley, v. 4, n. 1, p. 1-10, 2023.
  • TAJA-ON, E.; MIRAS-LAMAYON, R. Online learning experiences and satisfaction of undergraduates in San Isidro College. School of Education Research Journal, Malaybaley, v. 3, n. 1, p. 13-41, 2022.
  • WIGFIELD, A.; ECCLES, J. Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, Pennsylvania, v. 25, n. 1, p. 68-81, 2000.
  • WILIAM, D.; THOMPSON, M. Integrating assessment with learning: what will it take to make it work? In: DWYER, C. (ed.). The future of assessment: shaping teaching and learning. London: Routledge, 2008. p. 53-82.
  • WONDEM, D. I. Higher diploma program: a centrally initiated and successfully institutionalized professional development program for teachers in Ethiopian public universities. Cogent Education, New York, v. 9, n. 1, p. 1-25, 2022.
  • YEASMIN, M. Mathematics is everywhere: connecting with other disciplines. International Journal of Applied Research, [S. l.], v. 3, n. 6, p. 750-754, 2017.

Publication Dates

  • Publication in this collection
    07 Apr 2025
  • Date of issue
    2025

History

  • Received
    11 Apr 2024
  • Accepted
    10 Aug 2024
UNESP - Universidade Estadual Paulista, Pró-Reitoria de Pesquisa, Programa de Pós-Graduação em Educação Matemática Avenida 24-A, 1515, Caixa Postal 178, 13506-900 - Rio Claro - SP - Brazil
E-mail: bolema.contato@gmail.com