Use of artificial intelligence methods to assess competencies during testing

https://doi.org/10.32517/0234-0453-2023-38-6-75-85

Abstract

According to the modern standards of higher education in Russia (FSES 3++), teaching is aimed at developing a set of universal, general professional, and professional competencies in a graduate, which raises the problem of assessing how well these competencies have been formed. Since competence is a multidisciplinary concept, its assessment requires designing a task that simulates a real situation and may therefore call on complex knowledge and skills from different subject areas. One of the modern tools for formative and summative assessment is computer testing, and it is desirable to evaluate several competencies at once during a single testing session with a limited number of tasks. The problem is relevant because the classical one-dimensional one-, two-, and three-parameter Rasch—Birnbaum models in current use assume that all tasks in a test belong to the same area of knowledge.
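
For context, each of these classical models ties the probability of a correct answer to a single latent ability θ. Below is a minimal Python sketch of the three-parameter Birnbaum item response function (the default parameter values are illustrative, not calibrated); the one- and two-parameter models are its special cases:

```python
import math

def item_response_3pl(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct answer under the Birnbaum 3PL model.

    theta -- examinee ability on a single latent scale,
    b     -- item difficulty,
    a     -- item discrimination,
    c     -- pseudo-guessing parameter.
    Setting c = 0 gives the two-parameter model; a = 1 and c = 0
    reduce it to the one-parameter Rasch model.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

Because a single θ drives every item, a test built on these models cannot separate the contributions of several competencies, which is exactly the limitation the article addresses.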

The purpose of the article is to propose a methodology for the intelligent assessment of several competencies from the results of a single testing session. The methodology rests on a modification of the Bayes algorithm in which the next task is chosen by analyzing the entropy (uncertainty) of assigning the test subject to typical combinations (patterns) of preparedness levels for the different competencies.
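
The abstract does not fix an implementation, but the described selection rule can be sketched as follows, assuming each task is characterized by a known probability of a correct answer under every pattern (the pattern set and these probabilities are illustrative assumptions, not the authors' calibrated values):

```python
import math

def bayes_update(prior, p_correct, answered_correctly):
    """One step of the Bayes algorithm over patterns.

    prior and p_correct are dicts pattern -> probability;
    p_correct[k] is P(correct answer | pattern k) for the given task.
    """
    posterior = {}
    for pattern, p in prior.items():
        likelihood = p_correct[pattern] if answered_correctly else 1.0 - p_correct[pattern]
        posterior[pattern] = p * likelihood
    norm = sum(posterior.values())
    return {k: v / norm for k, v in posterior.items()}

def entropy(dist):
    """Shannon entropy (uncertainty) of the distribution over patterns."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0.0)

def choose_next_task(prior, tasks):
    """Select the task whose answer is expected to reduce entropy the most.

    tasks is a dict task_id -> {pattern: P(correct | pattern)}.
    """
    def expected_entropy(p_correct):
        p_right = sum(prior[k] * p_correct[k] for k in prior)
        h_right = entropy(bayes_update(prior, p_correct, True)) if p_right > 0.0 else 0.0
        h_wrong = entropy(bayes_update(prior, p_correct, False)) if p_right < 1.0 else 0.0
        return p_right * h_right + (1.0 - p_right) * h_wrong
    return min(tasks, key=lambda t: expected_entropy(tasks[t]))
```

For example, with two competencies judged at two preparedness levels each, the pattern set contains four combinations, and each task in the bank carries four such conditional probabilities.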

An original technique for regularizing the Bayes algorithm has been developed, based on assessing the change in the entropy of the probability distribution over patterns and the Kullback—Leibler divergence. It makes it possible to identify the moment when a student gives an answer that, judging by their previous answers, does not correspond to what is expected of them. The proposed technique not only improves the stability of the Bayes algorithm for estimating the probability that a student belongs to a given pattern, at the cost of a slight increase in the required number of tasks, but also makes it possible to model the actions of an examiner during competency assessment. Overall, the developed approach allows the levels of formation of several competencies to be assessed objectively from the results of a single test.
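
One hedged reading of this regularization, continuing the sketch above (the exact trigger condition and the threshold values are assumptions; the article derives its own criterion), is to flag an answer when the Bayes update moves the distribution over patterns too far or makes the classification less certain:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two pattern distributions.

    Assumes q[k] > 0 wherever p[k] > 0, which holds for posteriors
    produced by bayes_update() in the previous sketch.
    """
    return sum(p[k] * math.log2(p[k] / q[k]) for k in p if p[k] > 0.0)

def answer_is_unexpected(prior, posterior,
                         kl_threshold=1.0, entropy_jump=0.5):
    """Flag an answer that contradicts what previous answers predicted.

    Reuses entropy() from the previous sketch. An answer is treated as
    unexpected when the update shifts the distribution over patterns too
    strongly (large KL divergence) or increases the entropy of the
    subject's classification. Threshold values are purely illustrative.
    """
    return (kl_divergence(posterior, prior) > kl_threshold
            or entropy(posterior) - entropy(prior) > entropy_jump)
```

A flagged answer can then be down-weighted rather than applied at full strength, which is consistent with the trade-off described above: slightly more tasks in exchange for a more stable estimate.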

About the Authors

V. N. Gusyatnikov
Yuri Gagarin State Technical University of Saratov
Russian Federation

Victor N. Gusyatnikov, Doctor of Sciences (Physics and Mathematics), Professor, Professor at the Department “Information and Communication Systems and Software Engineering”, Institute of Applied Information Technologies and Communication

Saratov



T. N. Sokolova
Saratov State Law Academy
Russian Federation

Tatyana N. Sokolova, Candidate of Sciences (Economics), Docent, Associate Professor at the Department of Information Law and Digital Technologies

Saratov



A. I. Bezrukov
Yuri Gagarin State Technical University of Saratov
Russian Federation

Aleksey I. Bezrukov, Candidate of Sciences (Economics), Docent, Associate Professor at the Department “Information and Communication Systems and Software Engineering”, Institute of Applied Information Technologies and Communication

Saratov



I. V. Kayukova
Yuri Gagarin State Technical University of Saratov
Russian Federation

Inna V. Kayukova, Postgraduate Student at the Department “Information and Communication Systems and Software Engineering”, Institute of Applied Information Technologies and Communication

Saratov



References

1. Maslak A. A. Theory and practice of measuring latent variables in education: Monograph. Moscow, Yurait; 2022. 255 p. (In Russian.) EDN: RFKEIY.

2. Neiman Yu. M., Khlebnikov V. A. Introduction to the theory of modeling and parameterization of pedagogical tests. Moscow, Promethey; 2000. 168 p. (In Russian.)

3. Kim V. S. Testing of learning achievements: Monograph. Ussuriysk, UGPI; 2007. 215 p. (In Russian.) EDN: QWGVHH.

4. Avanesov V. S. Item response theory: Basic concepts and provisions. Pedagogicheskiye Izmereniiya. 2007;(2):3–28. (In Russian.) Available at: https://testolog.narod.ru/Theory59.html

5. Chelyshkova M. B. Theory and practice of designing pedagogical tests: Study guide. Moscow, Logos; 2002. 432 p. (In Russian.)

6. McDonald R. P. A basis for multidimensional item response theory. Applied Psychological Measurement. 2000;24(2):99–114. DOI: 10.1177/01466210022031552.

7. Ackerman T. A., Gierl M. J., Walker C. M. Using multidimensional item response theory to evaluate educational and psychological tests. Educational Measurement: Issues and Practice. 2003;22(3):37–53. DOI: 10.1111/j.1745-3992.2003.tb00136.x.

8. Bartolucci F. A class of multidimensional IRT models for testing unidimensionality and clustering items. Psychometrika. 2007;72(2):141–157. DOI: 10.1007/s11336-005-1376-9.

9. Freeman L. Assessing model-data fit for compensatory and non-compensatory multidimensional item response models using Vuong and Clarke statistics. Theses and Dissertations. University of Wisconsin, Milwaukee, US, 2016:1366. Available at: https://dc.uwm.edu/etd/1366

10. Fox J. P., Entink R. K., Avetisyan M. Compensatory and non-compensatory multidimensional randomized item response models. British Journal of Mathematical and Statistical Psychology. 2014;67(1):133–152. DOI: 10.1111/bmsp.12012.

11. Kardanova E. Yu. Proof of the applicability of G. Rasch’s polytomous model. Vestnik NovSU. 2006;(39):13–15. (In Russian.) EDN: MNJEKL.

12. Popov A. P. The recent trends in computer-based testing. Science and World. 2013;(1(1)):38–46. (In Russian.) EDN: RQCKTZ.

13. Wu M., Davis R. L., Domingue B. W., Piech C., Goodman N. D. Variational item response theory: Fast, accurate, and expressive. Proc. of the 13th Int. Conf. on Educational Data Mining. Montreal, Canada, 2020:257–268. DOI: 10.48550/arXiv.2002.00276.

14. Reckase M. D. Multidimensional item response theory. Springer New York, US; 2009. 349 p. DOI: 10.1007/978-0-387-89976-3.

15. Dvoryatkina S. N. Integration of fractal and neural network technologies in pedagogical monitoring and assessment of knowledge of trainees. RUDN Journal of Psychology and Pedagogics. 2017;14(4):451–465. (In Russian.) EDN: ZRKOGH. DOI: 10.22363/2313-1683-2017-14-4-451-465.

16. Kuravsky L. S., Yuryev G. A., Ushakov D. V., Yuryeva N. E., Valueva E. A., Lapteva E. M. Diagnostics basing on testing paths: The method of patterns. Experimental Psychology. 2018;11(2):77–94. (In Russian.) EDN: TKLBMZ. DOI: 10.17759/exppsy.2018110206.

17. Liu Y., Hu G., Cao L., Wang X., Chen M.-H. A comparison of Monte Carlo methods for computing marginal likelihoods of item response theory models. Journal of the Korean Statistical Society. 2019;48(4):503–512. DOI: 10.1016/j.jkss.2019.04.001.

18. Sokolova T. N., Gusyatnikov V. N., Bezrukov A. I., Kayukova I. V. Methodology for assessing a set of competencies based on testing results. Fundamental Research. 2020;(12):209–215. (In Russian.) EDN: IHYEON. DOI: 10.17513/fr.42935.

19. Gusyatnikov V. N., Sokolova T. N., Bezrukov A. I., Kayukova I. V. Adaptive model for testing several competencies based on the Bayes algorithm. Modern High Technologies. 2022;(1):40–46. (In Russian.) EDN: KPDACI. DOI: 10.17513/snt.39007.

20. Kullback S. Information theory and statistics. Moscow, Nauka; 1967. 408 p. (In Russian.)



For citation:


Gusyatnikov V. N., Sokolova T. N., Bezrukov A. I., Kayukova I. V. Use of artificial intelligence methods to assess competencies during testing. Informatics and Education. 2023;38(6):75–85. (In Russian.) https://doi.org/10.32517/0234-0453-2023-38-6-75-85



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 0234-0453 (Print)
ISSN 2658-7769 (Online)