Paper Information
A Lightweight Application of Large Language Model for Vocational Colleges
Full Text (PDF, 284 KB)
Author: Yuxiao Wu
Abstract: Generative Artificial Intelligence (Generative AI), represented by ChatGPT, has attracted widespread attention in recent years. This research develops a lightweight, vertical Large Language Model (LLM) application for communication and information engineering education in vocational colleges. After analyzing existing mainstream models with respect to convenience, access efficiency, cost, and legal compliance, the research adopts ByteDance's Doubao model and, combining client and server technologies, builds an integrated learning assistant for vocational-college students. The client is responsible for user-friendly interaction and submits students' questions to the server through its interface. The server is built on the classic Spring Boot framework and calls the general-purpose LLM API over HTTP. Prompt engineering is also applied: through prompt design, optimization, and evaluation, the model is guided to respond accurately to students' questions. Ultimately, the application supports students' self-directed learning, information retrieval, homework, and tutoring, and contributes to the development of artificial intelligence applications in vocational education.
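To make the described architecture concrete, the sketch below shows how a server of this kind might assemble a chat-completion request that embeds an engineered system prompt before forwarding a student's question to the model API. This is a minimal illustration, not the paper's actual code: the model name, JSON field names, and the system prompt text are all assumptions, modeled on common OpenAI-compatible chat APIs.

```java
// Minimal sketch of server-side request assembly for an LLM chat API.
// The field names ("model", "messages", "role", "content"), the model
// identifier, and the system prompt are illustrative assumptions.
public class PromptBuilder {

    // Hypothetical system prompt produced by the prompt-engineering step:
    // it constrains the assistant to the target curriculum and audience.
    static final String SYSTEM_PROMPT =
        "You are a learning assistant for vocational-college students "
      + "majoring in communication and information engineering. "
      + "Answer concisely and relate answers to course concepts.";

    /** Builds a minimal JSON request body for a chat-completion call. */
    public static String buildRequestBody(String model, String userQuestion) {
        return "{"
            + "\"model\":\"" + escape(model) + "\","
            + "\"messages\":["
            + "{\"role\":\"system\",\"content\":\"" + escape(SYSTEM_PROMPT) + "\"},"
            + "{\"role\":\"user\",\"content\":\"" + escape(userQuestion) + "\"}"
            + "]}";
    }

    // Minimal JSON string escaping for backslashes and quotes.
    private static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    public static void main(String[] args) {
        System.out.println(buildRequestBody("doubao-lite", "What is OFDM?"));
    }
}
```

In a real Spring Boot service this string would be sent with an HTTP client (plus an API key header) to the model provider's endpoint, and the JSON would normally be built with a library rather than by concatenation; the hand-built version is used here only to keep the sketch dependency-free.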
Keywords: Large Language Model, Generative Artificial Intelligence, Prompt Engineering, Learning Assistant
References:
[1] Ke L L. Research on course-certificate integration teaching reform of the Routing and Switching Technology course in vocational colleges under the "1+X" certificate system pilot[J]. Computer Knowledge and Technology (Academic Edition), 2020, 16(27): 4.
[2] Brown T, Mann B, Ryder N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[3] Ouyang L, Wu J, Jiang X, et al. Training language models to follow instructions with human feedback[J]. Advances in Neural Information Processing Systems, 2022, 35: 27730-27744.
[4] Wankhade M, Rao A C S, Kulkarni C. A survey on sentiment analysis methods, applications, and challenges[J]. Artificial Intelligence Review, 2022, 55(7): 5731-5780.
[5] Zaib M, Zhang W E, Sheng Q Z, et al. Conversational question answering: A survey[J]. Knowledge and Information Systems, 2022, 64(12): 3151-3195.
[6] OpenAI. GPT-4 technical report[J]. arXiv preprint arXiv:2303.08774, 2023.
[7] Sahoo P, Singh A K, Saha S, et al. A systematic survey of prompt engineering in large language models: Techniques and applications[J]. arXiv preprint arXiv:2402.07927, 2024.
[8] Volcano Ark LLM Service Platform: Prompt Best Practices. https://www.volcengine.com/docs/82379/1221664.
[9] Ouyang L, Wu J, Jiang X, et al. Training language models to follow instructions with human feedback[J]. Advances in Neural Information Processing Systems, 2022, 35: 27730-27744.
[10] Chung H W, Hou L, Longpre S, et al. Scaling instruction-finetuned language models[J]. Journal of Machine Learning Research, 2024, 25(70): 1-53.
[11] Mitra A, Del Corro L, Mahajan S, et al. Orca 2: Teaching small language models how to reason[J]. arXiv preprint arXiv:2311.11045, 2023.
[12] Volcano Ark LLM Service Platform: SFT Best Practices. https://www.volcengine.com/docs/82379/1221660.