
Informatics and Education


Using large language models to develop skills in analyzing media context in students of philological specialties

https://doi.org/10.32517/0234-0453-2024-39-6-82-96

Abstract

Amid the ongoing digitalization of education, including liberal arts education, natural language processing technologies and large language models (LLMs) are of particular relevance and practical value for the professional training of philology students. The article explores how LLMs can support objective analysis of media context in the training of philology students. Drawing on a critical review of the literature and an empirical study that applied topic modeling, sentiment analysis, and fact extraction to a corpus of 5.2 million media texts, the study identifies the key thematic and linguistic features of media context that matter for developing philology students' professional competencies. A conceptual model is proposed for integrating LLM-based methods into adaptive learning systems for the personalized development of media context analysis skills. Experimental testing of the model on a sample of 120 students showed a statistically significant improvement in learning outcomes and learner satisfaction compared with a control group. These results open prospects for building intelligent adaptive systems that prepare philologists to work in a transforming media landscape.
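
For readers unfamiliar with the methods named in the abstract, the sketch below illustrates what such a pipeline can look like in Python. It is not the author's implementation: the toy corpus, the lexicon-based sentiment scorer, and the score arrays are invented for illustration, and scikit-learn's LDA and SciPy's t-test merely stand in for whatever tooling the study actually used.

    # Illustrative sketch only -- the article does not publish its pipeline.
    # Assumptions: scikit-learn and SciPy are installed; `texts` and the
    # score arrays below are invented stand-ins for the 5.2-million-document
    # media corpus and the 120-student experiment described in the abstract.
    from scipy import stats
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    texts = [  # placeholder "media texts"
        "regulators debate new rules for online media platforms",
        "the festival opened with a documentary about local newsrooms",
        "quarterly report shows advertising revenue shifting to streaming",
        "platforms face criticism over moderation of political news",
    ]

    # 1. Topic modeling: bag-of-words counts -> latent topics (LDA).
    vectorizer = CountVectorizer(stop_words="english")
    doc_term = vectorizer.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(doc_term)  # per-document topic mixture

    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-5:][::-1]]
        print(f"topic {k}: {', '.join(top)}")

    # 2. Sentiment ("tonality") analysis: a toy lexicon scorer stands in
    #    for whatever sentiment model the study actually used.
    lexicon = {"criticism": -1.0, "debate": -0.2, "opened": 0.3}
    for doc in texts:
        score = sum(lexicon.get(word, 0.0) for word in doc.split())
        print(f"{score:+.1f}  {doc[:40]}")

    # 3. Group comparison: an independent-samples t-test is one
    #    conventional way to check a "statistically significant increase";
    #    these scores are fabricated for illustration.
    experimental = [78, 85, 80, 88, 84, 79, 86, 83]
    control = [72, 75, 70, 78, 74, 71, 76, 73]
    t_stat, p_value = stats.ttest_ind(experimental, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

In practice, topic modeling at the scale of 5.2 million documents would call for batch or online LDA variants (or transformer-based embeddings), and the appropriate significance test depends on how the outcome scores are distributed.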

About the Author

G. Yiwen
Lomonosov Moscow State University, Moscow, Russian Federation

Gao Yiwen, postgraduate student at the Department of Foreign Journalism and Literature, Faculty of Journalism




For citations:


Yiwen G. Using large language models to develop skills in analyzing media context in students of philological specialties. Informatics and Education. 2024;39(6):82–96. (In Russian.) https://doi.org/10.32517/0234-0453-2024-39-6-82-96



This work is licensed under a Creative Commons Attribution 4.0 License.

