
Qwen3.5-2B Fine-tuned

Year: 2026
Tech: Unsloth, Transformers, TRL

The Challenge

Create an optimized fine-tuned version of Qwen3.5-2B that delivers high-quality answers to theory exam questions in the SPPU (Savitribai Phule Pune University) format, while training faster and more efficiently than traditional methods.

Solution

I fine-tuned the model with Unsloth, a framework that roughly halves training time compared with a standard Hugging Face training loop. Training used a custom dataset of university theory questions and answers to strengthen the model's domain knowledge and exam-style responses.
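A minimal sketch of how an Unsloth + TRL supervised fine-tuning run is typically set up. The dataset file, LoRA rank, and hyperparameters below are illustrative placeholders, not the exact values used for this project:

```python
# Illustrative Unsloth + TRL SFT setup; dataset path and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model through Unsloth's patched, memory-efficient loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3.5-2B",   # base model from the spec below
    max_seq_length=2048,
    load_in_4bit=True,              # quantized base weights during training
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical JSONL file of formatted theory Q&A pairs.
dataset = load_dataset("json", data_files="theory_qa.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

Because only the low-rank adapter weights are updated, a 2B-parameter model fits comfortably on a single consumer GPU during training.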

Technical Details

  • Base Model: Qwen/Qwen3.5-2B
  • Training Framework: Unsloth (2x faster training)
  • Parameters: 2B
  • Quantization: Q8_0 (8-bit)
  • Model Size: 2.01 GB
  • License: Apache 2.0
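As a sanity check on the numbers above, a back-of-the-envelope calculation: at Q8_0, each weight costs one byte plus a small per-block scale overhead, which lands close to the listed 2.01 GB (the exact figure also depends on embedding layers, file metadata, and GB vs GiB accounting):

```python
# Rough memory-footprint estimate for the Q8_0 export (illustrative arithmetic only).
# GGUF Q8_0 stores int8 weights in blocks of 32, each block sharing one fp16 scale.

params = 2_000_000_000          # 2B parameters (nominal)
bytes_per_weight = 1            # int8 quantized weight
scale_overhead = 2 / 32         # 2-byte fp16 scale shared by 32 weights

size_gb = params * (bytes_per_weight + scale_overhead) / 1e9
print(f"~{size_gb:.2f} GB")     # in the ballpark of the 2.01 GB file size above
```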

Key Features

  • Optimized for theory question-answer responses
  • Supports GGUF format for efficient inference
  • Compatible with Transformers library
  • 8-bit quantization keeps memory requirements low enough for consumer hardware
  • Ready for deployment with text-generation-inference
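To illustrate why the 8-bit export stays close to full-precision quality, here is a minimal sketch of Q8_0-style quantization: a block of 32 weights shares one scale, each weight is rounded to an int8, and dequantization recovers the original values to within half a quantization step. The weights here are random values for demonstration only:

```python
# Minimal sketch of Q8_0-style 8-bit quantization (one shared scale per block of 32).
import random

def q8_0_roundtrip(block):
    """Quantize a block of floats to int8 with a shared scale, then dequantize."""
    scale = max(abs(v) for v in block) / 127 or 1.0   # fall back to 1.0 for an all-zero block
    quants = [round(v / scale) for v in block]        # each lands in the int8 range [-127, 127]
    return [q * scale for q in quants]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(32)]  # one Q8_0 block
restored = q8_0_roundtrip(weights)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"max round-trip error: {max_err:.4f}")         # at most half a quantization step
```

For weights in [-1, 1] the worst-case error is about 0.004 per value, which is why 8-bit inference is usually indistinguishable from full precision in practice.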

Results

The fine-tuned model achieves improved theory question-answering performance while being significantly more efficient to train. It is available on Hugging Face and supports multiple quantization formats for flexible deployment.

Interested in AI/ML projects?

Let's discuss how I can help with your machine learning needs.

Let's build something refreshing.