StackedML
Deep Learning / Architectures (Conceptual) / Transformers (high-level intuition)
609. Positional Encodings Rationale (easy)
Why are positional encodings necessary in transformers?
A. Self-attention requires integer indices rather than continuous embeddings, so positional encodings provide this mapping
B. Self-attention computes relative positions implicitly, but positional encodings make these explicit for the decoder
C. Self-attention is permutation-invariant — without positional encodings the model cannot distinguish token order in the sequence
D. Self-attention is position-sensitive by default, but positional encodings amplify this signal for longer sequences
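The property behind option C can be checked numerically: permuting the input tokens of plain self-attention just permutes the output rows the same way, so no output depends on token order — until positional encodings are added. A minimal NumPy sketch (single head, no masking, random weights; the helper names and the simplified sinusoidal encoding here are illustrative, not from any particular library):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Plain scaled dot-product self-attention (single head, no masking).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
n, d = 5, 8                                  # 5 tokens, embedding dim 8
X = rng.normal(size=(n, d))
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))
Wv = rng.normal(size=(d, d))

perm = np.array([3, 0, 4, 1, 2])             # arbitrary reordering of tokens

# Permutation equivariance: attending to permuted inputs
# yields exactly the permuted outputs — order carries no signal.
out = self_attention(X, Wq, Wk, Wv)
out_perm = self_attention(X[perm], Wq, Wk, Wv)
assert np.allclose(out[perm], out_perm)

# A rough sinusoidal positional encoding (simplified from the
# usual paired sin/cos scheme) ties each row to its position.
pos = np.arange(n)[:, None] / (10000 ** (np.arange(d)[None, :] / d))
pe = np.where(np.arange(d) % 2 == 0, np.sin(pos), np.cos(pos))

# With positions added, the symmetry breaks: reordered tokens
# now produce genuinely different outputs.
out_pe = self_attention(X + pe, Wq, Wk, Wv)
out_pe_perm = self_attention(X[perm] + pe, Wq, Wk, Wv)
assert not np.allclose(out_pe[perm], out_pe_perm)
```

The first assertion is why C is the right rationale: attention weights depend only on pairwise dot products between token embeddings, which are unchanged (only reshuffled) under permutation.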