N-Gram House

Controlling Length and Structure in LLM Outputs: Practical Decoding Parameters

Learn how to control LLM output length and structure using decoding parameters like temperature, top-k, top-p, and repetition penalties, with practical settings for real-world use cases.
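The interplay of these parameters can be sketched in plain Python. The sketch below is an illustrative implementation, not any particular library's API: it applies temperature scaling, an HF-style repetition penalty, then top-k and top-p (nucleus) filtering before sampling. All function and parameter names here are hypothetical.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0,
                      repetition_penalty=1.0, prev_tokens=(), rng=None):
    """Sample a token index from raw logits.

    Order of operations: repetition penalty -> temperature -> softmax
    -> top-k filter -> top-p filter -> renormalise -> sample.
    """
    rng = rng or random.Random()
    logits = list(logits)
    # Repetition penalty (Hugging-Face-style convention): make tokens
    # already generated less likely by shrinking positive logits and
    # amplifying negative ones.
    for t in set(prev_tokens):
        logits[t] = logits[t] / repetition_penalty if logits[t] > 0 \
            else logits[t] * repetition_penalty
    # Temperature: divide logits before softmax; lower => sharper distribution.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank token indices by probability, descending.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    # Top-k: keep only the k most probable tokens (0 disables the filter).
    if top_k > 0:
        order = order[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalise over the surviving tokens and draw a sample.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With `top_k=1` (or a very small `top_p`) this degenerates to greedy decoding; with a repetition penalty, a previously emitted token can lose its lead even when its raw logit is highest.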


© 2026. All rights reserved.