N-Gram House

Tag: repetition penalty

Controlling Length and Structure in LLM Outputs: Practical Decoding Parameters

Learn how to control LLM output length and structure using decoding parameters like temperature, top-k, top-p, and repetition penalties. Practical settings for real-world use cases.
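The parameters named above (temperature, top-k, top-p, repetition penalty) all operate on one step's next-token logits before sampling. A minimal NumPy sketch of how they combine, using toy logit values rather than any particular model's API; the repetition-penalty rule follows the commonly used divide-positive/multiply-negative formulation, and all parameter defaults here are illustrative assumptions:

```python
import numpy as np

def sample_filtered(logits, temperature=1.0, top_k=0, top_p=1.0,
                    repetition_penalty=1.0, prev_tokens=(), rng=None):
    """Apply repetition penalty, temperature, top-k, and top-p (nucleus)
    filtering to a vector of logits, then sample one token id."""
    rng = rng or np.random.default_rng(0)
    logits = np.asarray(logits, dtype=np.float64).copy()

    # Repetition penalty: push already-generated tokens down.
    for t in prev_tokens:
        logits[t] = logits[t] / repetition_penalty if logits[t] > 0 \
                    else logits[t] * repetition_penalty

    # Temperature: <1 sharpens the distribution, >1 flattens it.
    logits = logits / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]        # token ids, most likely first
    keep = np.ones_like(probs, dtype=bool)

    if top_k > 0:
        keep[order[top_k:]] = False        # keep only the k most likely tokens

    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1
    keep[order[cutoff:]] = False

    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

With `top_k=1` or a very small `top_p` the call becomes effectively greedy, which is one way the article's theme of controlling structure shows up in practice: tighter filters trade diversity for predictability.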

© 2026. All rights reserved.