Controlling Length and Structure in LLM Outputs: Practical Decoding Parameters

Learn how to control LLM output length and structure using decoding parameters such as temperature, top-k, top-p, and repetition penalties, with practical settings for real-world use cases.
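As a quick illustration of the parameters named above, here is a minimal sketch using the Hugging Face `transformers` `generate()` API. The model name and the specific values are assumptions for demonstration, not recommendations from this post.

```python
# Minimal sketch of common decoding parameters with Hugging Face transformers.
# The model choice and parameter values below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical choice; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Summarize the report in three bullet points:", return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=128,       # hard cap on output length
    do_sample=True,           # enable sampling so temperature/top-k/top-p apply
    temperature=0.7,          # lower = more deterministic, higher = more diverse
    top_k=50,                 # sample only from the 50 most likely next tokens
    top_p=0.9,                # nucleus sampling: smallest token set covering 90% probability mass
    repetition_penalty=1.2,   # discourage repeating tokens already generated
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, `max_new_tokens` is the primary length control, while the sampling parameters shape diversity and the repetition penalty discourages loops; reasonable values vary by model and task.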
