680. ReLU Advantage (easy)
What is the key advantage of the ReLU activation function over sigmoid and tanh?
A. It is differentiable everywhere, providing clean gradient signals at all points including exactly at zero
B. It avoids gradient saturation for positive inputs, enabling faster and more stable training of deep networks
C. It produces outputs bounded between 0 and 1, making gradients easier to normalize during backpropagation
D. It outputs negative values for some inputs, enabling the network to represent both excitatory and inhibitory signals