N-Gram House

Controlling Length and Structure in LLM Outputs: Practical Decoding Parameters

Learn how to control LLM output length and structure using decoding parameters like temperature, top-k, top-p, and repetition penalties. Practical settings for real-world use cases.
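To make the parameters concrete, here is a minimal, self-contained sketch of how temperature, top-k, top-p (nucleus) filtering, and a repetition penalty are typically applied to a model's raw logits before sampling. All names and the exact penalty formula are illustrative assumptions for this sketch, not any specific library's implementation.

```python
import math
import random

def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Discourage tokens already generated (CTRL-style penalty, assumed here):
    positive logits are divided by the penalty, negative ones multiplied."""
    out = list(logits)
    for i in set(generated_ids):
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

def sample_next_token(logits, temperature=0.8, top_k=50, top_p=0.95, rng=None):
    """Sample one token id from raw logits using temperature, top-k, and top-p."""
    rng = rng or random.Random()
    # Temperature scaling: values < 1 sharpen the distribution, > 1 flatten it.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Top-k: keep only the k most probable tokens.
    probs.sort(key=lambda t: t[1], reverse=True)
    probs = probs[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose cumulative mass >= top_p.
    kept, mass = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Renormalise the surviving candidates and draw one.
    z = sum(p for _, p in kept)
    r, acc = rng.random() * z, 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]
```

With `top_k=1` this degenerates to greedy decoding (the most probable token is always returned), which is a handy way to check the pipeline deterministically before enabling sampling.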

