N-Gram House

Tag: repetition penalty

Controlling Length and Structure in LLM Outputs: Practical Decoding Parameters

Learn how to control LLM output length and structure using decoding parameters such as temperature, top-k, top-p, and repetition penalties, with practical settings for real-world use cases.
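
Below is a minimal sketch of how these parameters are typically set in practice, assuming the Hugging Face `transformers` library (the article does not prescribe a specific toolkit, and the model name "gpt2" and the specific values are illustrative placeholders, not recommendations from the post):

```python
# Minimal sketch: controlling length, randomness, and repetition at decode time.
# Assumes the Hugging Face `transformers` generate() API; model and values are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Summarize the report in three bullet points:", return_tensors="pt")

output_ids = model.generate(
    **inputs,
    do_sample=True,            # sample instead of greedy decoding
    temperature=0.7,           # sharpen (<1) or flatten (>1) the token distribution
    top_k=50,                  # keep only the 50 most likely tokens at each step
    top_p=0.9,                 # nucleus sampling: smallest token set covering 90% probability
    repetition_penalty=1.2,    # down-weight tokens that have already appeared
    no_repeat_ngram_size=3,    # block any 3-gram from repeating verbatim
    max_new_tokens=120,        # hard cap on output length
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The length cap (`max_new_tokens`) and the repetition controls (`repetition_penalty`, `no_repeat_ngram_size`) act independently of the sampling parameters, so they can be tuned separately for a given use case.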
