# Why do we use activation functions in neural networks?

Asked by: Banu Alayeto · Category: General · Last Updated: 15th January, 2020

The purpose of an **activation function** is to add some kind of non-linear property to the **function** that a **neural network** represents. A **neural network** without any **activation function** would not be able to realize such complex mappings mathematically, and **would** not be able to solve the tasks **we** want the **network** to solve.
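A small sketch of why this is true: stacking linear layers without an activation function collapses into a single linear map, so depth adds no expressive power. (The weight shapes below are arbitrary, chosen only for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))      # a small batch of 4 inputs with 3 features
W1 = rng.normal(size=(3, 5))     # first "layer" weights
W2 = rng.normal(size=(5, 2))     # second "layer" weights

# Two stacked linear layers with no activation function in between...
two_layers = (x @ W1) @ W2

# ...are exactly equivalent to one linear layer with weights W1 @ W2.
one_layer = x @ (W1 @ W2)

print(np.allclose(two_layers, one_layer))  # True: no extra expressive power
```

Inserting a non-linearity between the two matrix multiplications breaks this equivalence, which is what lets deeper networks represent more complex mappings.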

Besides, why is an activation function used in a neural network?

The purpose of the **activation function** is to introduce non-linearity into the output of a **neuron**. A **neural network** has neurons whose outputs are computed from their weights, their bias, and their respective **activation function**.
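Concretely, a single neuron computes a weighted sum of its inputs plus a bias, then applies the activation function to that sum. A minimal sketch (the `tanh` activation and all numeric values here are arbitrary example choices):

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    # Weighted sum of inputs plus bias, passed through an activation
    # function -- tanh here, purely as an example choice.
    return activation(np.dot(w, x) + b)

out = neuron(x=np.array([0.5, -1.0]), w=np.array([2.0, 1.0]), b=0.1)
print(out)  # tanh(0.1), since the weighted sum is 1.0 - 1.0 + 0.1
```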

Likewise, why do we use a non-linear activation function? **Non**-**linearity** is needed in **activation functions** because their purpose in a neural network is to produce a **nonlinear** decision boundary via **non**-**linear** combinations of the weights and inputs.
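The classic illustration is XOR, which no linear decision boundary can separate but a tiny network with a non-linear activation can. Below is a well-known hand-picked construction (the weights are not learned, just chosen to show it is possible):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Hand-picked weights: hidden units h1 = relu(x1 + x2) and
# h2 = relu(x1 + x2 - 1); the output h1 - 2*h2 reproduces XOR exactly.
def xor_net(x1, x2):
    h1 = relu(x1 + x2)
    h2 = relu(x1 + x2 - 1)
    return h1 - 2 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
```

Replacing `relu` with the identity function makes the whole network linear again, and no choice of weights can then reproduce XOR.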

Similarly one may ask, what are activation functions and why are they required?

**Activation functions** are really important for an Artificial Neural Network to learn and make sense of something genuinely complicated: non-linear, complex functional mappings between the inputs and the response variable. **They** introduce non-linear properties to our Network.

What is the best activation function in neural networks?

The ReLU is the most used **activation function** in the world right now, since it appears in almost all convolutional **neural networks** and deep learning models. The ReLU is half rectified: it outputs zero for all negative inputs and passes positive inputs through unchanged.
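The "half rectified" behavior described above is a one-liner in NumPy:

```python
import numpy as np

def relu(x):
    # Half rectified: negative inputs are clipped to 0,
    # positive inputs pass through unchanged.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.0, 3.0])))  # zeros for negatives, identity for positives
```

Its simplicity is part of its appeal: the function and its gradient are cheap to compute, which matters when it is applied millions of times per training step.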