Positional Encoding in Transformers: Sinusoidal vs Learned for LLMs

Sinusoidal and learned positional encodings were the transformer's original ways of representing token order, but modern LLMs now favor RoPE and ALiBi for better long-context performance. Learn why and how these techniques evolved.
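As a quick illustration of the "classic" approach the post contrasts with RoPE and ALiBi, here is a minimal NumPy sketch of the fixed sinusoidal encoding from the original transformer paper; the function name, shapes, and example values are illustrative, not taken from the article.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Fixed sinusoidal encodings as in "Attention Is All You Need".

    Even dimensions use sine, odd dimensions use cosine, with wavelengths
    forming a geometric progression from 2*pi up to 10000*2*pi.
    Assumes d_model is even.
    """
    positions = np.arange(seq_len)[:, None]              # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]              # (1, d_model/2)
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)
    angles = positions * angle_rates                       # (seq_len, d_model/2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

# Example usage: add the encodings to a batch of token embeddings
# (random values stand in for real embeddings here).
seq_len, d_model = 128, 512
token_embeddings = np.random.randn(seq_len, d_model)
x = token_embeddings + sinusoidal_positional_encoding(seq_len, d_model)
```

Because these encodings are deterministic functions of position, they need no extra parameters; learned positional embeddings instead store one trainable vector per position, which is part of why they struggle to extrapolate beyond the training context length.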
