Lab 3 Readings: LLM

Author: LASER Institute

Published: July 20, 2024

Overview

Our primary goal is to build a basic understanding of large language models (LLMs) and how they have been applied to gain insight from text data, with a focus on their applications in educational contexts.

Readings

Required
  1. Xiao, Z., Yuan, X., Liao, Q. V., Abdelghani, R., & Oudeyer, P. Y. (2023, March). Supporting qualitative analysis with large language models: Combining codebook with GPT-3 for deductive coding. In Companion Proceedings of the 28th International Conference on Intelligent User Interfaces (pp. 75-78).

  2. Gao, J., Guo, Y., Lim, G., Zhang, T., Zhang, Z., Li, T. J. J., & Perrault, S. T. (2024, May). CollabCoder: a lower-barrier, rigorous workflow for inductive collaborative qualitative analysis with large language models. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-29).

  3. Barany, A., Nasiar, N., Porter, C., Zambrano, A. F., Andres, A. L., Bright, D., … & Baker, R. S. (2024, July). ChatGPT for Education Research: Exploring the Potential of Large Language Models for Qualitative Codebook Development. In International Conference on Artificial Intelligence in Education (pp. 134-149). Cham: Springer Nature Switzerland.

  4. Tai, R. H., Bentley, L. R., Xia, X., Sitt, J. M., Fankhauser, S. C., Chicas-Mosier, A. M., & Monteith, B. G. (2024). An examination of the use of large language models to aid analysis of textual data. International Journal of Qualitative Methods, 23, 16094069241231168.

  5. Dunivin, Z. O. (2024). Scalable Qualitative Coding with LLMs: Chain-of-Thought Reasoning Matches Human Performance in Some Hermeneutic Tasks. arXiv preprint arXiv:2401.15170.