BrainCheck

AI-Powered Auto Content Generation Technology

This page introduces background technologies that help explain BrainCheck's automatic learning-content generation features. It focuses on the public large language model (LLM) ecosystem. This does not mean BrainCheck uses any particular model directly.

Here, "public LLM" includes models whose weights are publicly available. It is not identical to "open source" in the software sense. Models, code, and datasets can each have different licenses, so original license terms should be checked before use.


What Is a Large Language Model?

A Large Language Model (LLM) is an AI model trained on massive text datasets so it can understand and generate natural language.

LLMs are trained to predict the next word or token in a given context. Through that process, they learn grammar, factual associations, reasoning patterns, and many text transformations. With enough scale and data, they can perform tasks such as summarization, translation, question answering, and rewriting.
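The next-token objective can be illustrated with a toy bigram model: it simply counts which word follows which in a small corpus and predicts the most frequent continuation. A real LLM learns far richer statistics with a neural network over subword tokens, but the prediction task is the same in spirit.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model: dict, word: str):
    """Return the most frequent continuation seen during training."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat" (seen twice after "the")
```

The interesting point is that nothing task-specific is built in: summarization, translation, and question generation all emerge from scaling up this same "predict what comes next" training signal.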

In language learning, LLMs are useful because they can make learning material creation more efficient. For example, they can extract key ideas from a source text and reorganize them into question-and-answer flashcards.
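As a sketch of that idea, the function below is a hypothetical, rule-based stand-in for an LLM: it turns "term: definition" notes into question-and-answer cards. It only illustrates the input and output shapes of flashcard generation, not the quality an actual model would provide.

```python
def make_flashcards(text: str) -> list:
    """Convert 'term: definition' lines into question-answer flashcards.
    A real system would use an LLM here; this stand-in only shows the
    data shape a flashcard generator produces."""
    cards = []
    for line in text.splitlines():
        if ":" not in line:
            continue
        term, definition = line.split(":", 1)
        cards.append({
            "question": f"What is {term.strip()}?",
            "answer": definition.strip(),
        })
    return cards

notes = (
    "LLM: a model trained on large text corpora\n"
    "Fine-tuning: adapting a pretrained model to a task"
)
cards = make_flashcards(notes)
```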


The Public LLM Ecosystem

Early large language models were mostly closed. Since 2023, more models have released weights under licenses that can allow commercial use, such as Apache 2.0, MIT, or custom commercial terms.

The open-llms repository maintained by Eugene Yan is a community resource that catalogs commercially usable public LLMs. It includes models of many sizes and purposes, such as Falcon, Mistral 7B, and Llama 3, and is continuously updated by the community.

License Types

Public LLM licenses vary by model. Common patterns include:

| License | Commercial use | Example models |
| --- | --- | --- |
| Apache 2.0 | Generally allowed, with notice and disclaimer obligations | OpenLLaMA, Falcon, Mistral 7B |
| MIT | Generally allowed, with notice obligations | Dolly, Phi-2 |
| Custom terms | May include MAU thresholds, attribution duties, acceptable-use policies, or other conditions | Llama 3, Qwen |

Note: License details must be checked against each model's original license text and may change by version.


OpenLLaMA: A Public Reproduction of LLaMA (Example)

This section introduces one concrete example of a public LLM. It does not mean BrainCheck uses OpenLLaMA.

OpenLLaMA is a project developed by researchers affiliated with Berkeley AI Research (UC Berkeley). It follows an architecture similar to Meta AI's LLaMA, but its code and independently retrained weights are released under the Apache 2.0 license.

What "Reproduction" Means

Meta AI's LLaMA (2023) originally provided model weights for research use, not under a commercial license. OpenLLaMA uses a similar model structure and training setup, such as learning rate and batch size, but replaces the training data with public datasets and trains the model from scratch. The goal is to provide terms that are more friendly to commercial use, subject to checking the original model and dataset licenses.

Training Data

OpenLLaMA v1 was trained on the RedPajama dataset, a publicly available reproduction of the LLaMA training data. The v2 models were trained on a different mixture that includes Falcon RefinedWeb and StarCoder data.

Model Sizes

OpenLLaMA provides 3B, 7B, and 13B parameter models, each trained on roughly one trillion tokens. Some models were released in stages, from preview checkpoints to final versions, and the v2 releases focus on the 3B and 7B models.

| Model | Parameters | Context length | License |
| --- | --- | --- | --- |
| OpenLLaMA 3B (v1, v2) | 3B | 2,048 tokens | Apache 2.0 |
| OpenLLaMA 7B (v1, v2) | 7B | 2,048 tokens | Apache 2.0 |
| OpenLLaMA 13B (v1) | 13B | 2,048 tokens | Apache 2.0 |

Context length is the maximum amount of text, measured in tokens, that the model can attend to at one time.
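A minimal sketch of how an application might respect that limit, assuming input that is already tokenized and a hypothetical `fit_to_context` helper that keeps the most recent tokens:

```python
def fit_to_context(tokens: list, context_length: int, reserve_for_output: int = 0) -> list:
    """Keep only the most recent tokens that fit in the model's context
    window, optionally reserving part of the window for generated output."""
    budget = context_length - reserve_for_output
    if budget <= 0:
        raise ValueError("context window too small for the reserved output")
    return tokens[-budget:] if len(tokens) > budget else tokens

# Example: 3,000 input tokens, a 2,048-token window, 256 tokens reserved
tokens = [str(i) for i in range(3000)]
kept = fit_to_context(tokens, context_length=2048, reserve_for_output=256)
```

Keeping the tail rather than the head is only one possible policy; summarizing or chunking the source material are common alternatives when the input exceeds the window.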

Benchmark Reference

According to benchmark results from the project, OpenLLaMA performs at a level similar to the original LLaMA on most evaluation tasks, such as English multiple-choice and commonsense benchmarks, and scores higher on some tasks.

These figures are reference values from research benchmarks. Real service quality, especially for Korean or educational content, can vary greatly depending on input material, post-processing, and evaluation criteria. Detailed numbers are available on the OpenLLaMA GitHub page.

Note: These figures are based on lm-evaluation-harness benchmark results in the OpenLLaMA GitHub repository. Results can vary by evaluation protocol, and the project notes slight differences from the original LLaMA paper.


Practical Use of Public LLMs

Public LLMs can generate general-purpose text, but it is common to adapt them to a specific task through additional fine-tuning.

Fine-Tuning

Fine-tuning is the process of further training a pretrained LLM on task-specific data. Examples include:

  - Adapting a general model to produce flashcards in a consistent question-answer format
  - Improving output quality in a particular language or subject domain
  - Teaching the model to follow a service's style and formatting rules

Fine-tuning requires task-specific training data and evaluation, and larger models may require substantial compute resources.
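As one illustration of the data-preparation side, fine-tuning toolkits commonly accept prompt/completion pairs in JSON Lines format (one JSON object per line). The snippet below sketches that step; the `prompt`/`completion` field names are a common convention, not a requirement of any specific tool.

```python
import json

def to_jsonl(examples: list) -> str:
    """Serialize prompt/completion pairs as JSON Lines,
    one training example per line."""
    return "\n".join(json.dumps(ex, ensure_ascii=False) for ex in examples)

examples = [
    {
        "prompt": "Source text: Photosynthesis converts light into chemical energy.\nWrite one flashcard question.",
        "completion": "What does photosynthesis convert light into?",
    },
]
jsonl = to_jsonl(examples)
```

Collecting a few thousand such pairs, reviewed for correctness, is typically the most labor-intensive part of a fine-tuning project.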


Limits and Cautions

LLM-based automatic generation has several important limits:

  - Models can produce confident but factually wrong output (hallucination).
  - Quality varies by language and domain; performance on Korean or specialized educational material may differ from English benchmark results.
  - Output quality depends heavily on the quality and clarity of the input material.

For these reasons, automatically generated learning content should be reviewed by the user before use.


How BrainCheck Applies AI Content Generation

BrainCheck helps create draft learning cards from text materials such as documents, articles, and transcribed audio, then lets users review and edit those drafts.

  1. Text extraction: Extracts key content suitable for learning from the source material.
  2. Draft learning-card generation: Converts extracted content into flashcard formats such as question-answer pairs and fill-in-the-blank items.
  3. User review and editing: Provides editing tools so users can check and revise automatically generated cards.
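The three steps above can be sketched in simplified, rule-based form. These functions are hypothetical stand-ins that only show how data flows through the pipeline; BrainCheck's actual implementation is not described here.

```python
def extract_key_sentences(source: str, limit: int = 2) -> list:
    """Step 1 (stand-in): pick the longest sentences as 'key' content.
    A production system would use an LLM or summarizer instead."""
    sentences = [s.strip() for s in source.split(".") if s.strip()]
    return sorted(sentences, key=len, reverse=True)[:limit]

def draft_cards(sentences: list) -> list:
    """Step 2 (stand-in): wrap each key sentence as a draft card."""
    return [{"question": "Explain: " + s, "answer": s, "status": "draft"}
            for s in sentences]

def approve(card: dict, edited_answer=None) -> dict:
    """Step 3: user review, with an optional edit, then approval."""
    if edited_answer is not None:
        card["answer"] = edited_answer
    card["status"] = "approved"
    return card

text = ("Photosynthesis converts light into chemical energy. "
        "Plants do it. Chlorophyll absorbs light in the leaves.")
cards = draft_cards(extract_key_sentences(text))
```

The key design point the sketch preserves is that every generated card starts in a `draft` state and only becomes usable after an explicit review step.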

The quality of generated learning material depends on the source material, model capability, and post-processing. Final learning content should be used after user review.


Note: Licenses can change by version. The sources below are based on information available in 2025.

Sources: OpenLLaMA GitHub · Open LLMs · LLaMA paper · RedPajama-Data · Falcon refined-web · StarCoder dataset