# Transformer

- [Positional Embeddings](/notes/transformer/positional-embeddings.md)
  - [Fixed Positional Encoding](/notes/transformer/positional-embeddings/fixed-positional-encoding.md)
  - [Learned Positional Encoding](/notes/transformer/positional-embeddings/learned-positional-encoding.md)
  - [Rotary Positional Embeddings (RoPE)](/notes/transformer/positional-embeddings/rotary-positional-embeddings-rope.md)
  - [Attention with Linear Biases (ALiBi)](/notes/transformer/positional-embeddings/attention-with-linear-biases-alibi.md)
- [Attention](/notes/transformer/attention.md)
  - [KV Cache](/notes/transformer/attention/kv-cache.md)
  - [Multi-Query Attention (MQA)](/notes/transformer/attention/multi-query-attention-mqa.md)
  - [Grouped-Query Attention (GQA)](/notes/transformer/attention/grouped-query-attention-gqa.md)
- [Norm](/notes/transformer/norm.md)
  - [Position](/notes/transformer/norm/position.md)
- [Activation Function](/notes/transformer/activation-function.md)
