Adapter Layers and LoRA for Efficient Large Language Model Customization

LoRA and adapter layers let you customize large language models with minimal resources. Learn how they work, when to use each, and how to start fine-tuning on a single GPU.
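To make the single-GPU claim concrete, here is a minimal sketch of what a LoRA setup can look like with Hugging Face's peft library. The base model (gpt2), the rank, and the target modules are illustrative assumptions chosen for brevity, not recommendations from this post:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# A small base model, chosen purely for illustration.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA freezes the pretrained weights and trains low-rank update
# matrices injected into the chosen modules instead.
config = LoraConfig(
    r=8,                        # rank of the low-rank decomposition (assumed value)
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

Because only the low-rank update matrices receive gradients, the optimizer state shrinks in proportion, which is what makes fine-tuning on a single GPU feasible in the first place.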
