Positional Encoding in Transformers: Sinusoidal vs Learned for LLMs

Sinusoidal and learned positional encodings were the transformer's original ways of injecting token order, but many modern LLMs have moved to RoPE or ALiBi for better long-context performance. Learn why and how these techniques evolved.
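For a concrete anchor before diving in, here is a minimal NumPy sketch of the sinusoidal scheme from the original transformer paper ("Attention Is All You Need"): each position gets a vector of sines and cosines at geometrically spaced frequencies. The function name and the even-`d_model` assumption are ours for illustration, not from the post.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return the (seq_len, d_model) sinusoidal encoding matrix.

    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))

    Assumes d_model is even, as in the original formulation.
    """
    positions = np.arange(seq_len)[:, None]        # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # shape (1, d_model // 2)
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)
    angles = positions * angle_rates               # shape (seq_len, d_model // 2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even indices get sine
    pe[:, 1::2] = np.cos(angles)   # odd indices get cosine
    return pe

# Example: encode a 4-token sequence for an 8-dimensional model.
print(sinusoidal_positional_encoding(4, 8).round(3))
```

Because these vectors are fixed functions of position, they extend to any sequence length without new parameters; learned encodings trade that property for flexibility, which is part of the story RoPE and ALiBi later rewrote.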
