Deep Learning Interview Questions & Answers



Finding a new job can be so cumbersome that it turns into a job in itself, so prepare well for your interviews to land your dream role. Deep learning is a branch of machine learning based on learning data representations: it structures algorithms in layers to create an artificial neural network that can learn and make intelligent decisions on its own. Many organizations look for this expertise when moving employees to more senior roles. Follow our Wisdomjobs page of Deep Learning interview questions and answers to get through your job interview successfully on the first attempt.

Deep Learning Interview Questions

    1. Question 1. Why Are Deep Networks Better Than Shallow Ones?

      Answer :

      Both shallow and deep networks are capable of approximating any function, but for the same level of accuracy, deeper networks can be far more efficient in terms of computation and number of parameters. Deeper networks also build hierarchical representations: at each layer, the network learns a new, more abstract representation of the input.

    2. Question 2. What Is A Backpropagation?

      Answer :

      Backpropagation is a training algorithm used for multilayer neural networks. It moves the error information from the end of the network back through all the weights inside the network, which allows for efficient computation of the gradient.

      The backpropagation algorithm can be divided into several steps:

      1. Forward propagation of training data through the network in order to generate output.
      2. Use target value and output value to compute error derivative with respect to output activations.
      3. Backpropagate to compute the derivative of the error with respect to the activations in the previous layer, and continue through all hidden layers.
      4. Use the previously calculated derivatives for output and all hidden layers to calculate the error derivative with respect to weights.
      5. Update the weights.
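The five steps above can be sketched in NumPy for a tiny one-hidden-layer regression network. The layer sizes, data, and learning rate below are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))        # 4 training examples, 3 features
y = rng.normal(size=(4, 1))        # regression targets

W1 = rng.normal(scale=0.1, size=(3, 5))
W2 = rng.normal(scale=0.1, size=(5, 1))

# 1. Forward propagation through the network
h = np.tanh(X @ W1)                # hidden activations
out = h @ W2                       # network output

# 2. Error derivative w.r.t. output activations (mean squared error)
d_out = 2 * (out - y) / len(X)

# 3. Backpropagate to the previous layer's activations
d_h = (d_out @ W2.T) * (1 - h**2)  # tanh'(z) = 1 - tanh(z)^2

# 4. Error derivatives w.r.t. the weights
dW2 = h.T @ d_out
dW1 = X.T @ d_h

# 5. Update the weights
lr = 0.1
W2 -= lr * dW2
W1 -= lr * dW1
```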

    3. Question 3. Explain The Following Three Variants Of Gradient Descent: Batch, Stochastic And Mini-batch?

      Answer :

      Stochastic Gradient Descent:

      Uses only a single training example to calculate the gradient and update the parameters.

      Batch Gradient Descent:

      Calculates the gradient for the whole dataset and performs just one update per iteration.

      Mini-batch Gradient Descent:

      Mini-batch gradient descent is a variation of stochastic gradient descent where, instead of a single training example, a mini-batch of samples is used. It is one of the most popular optimization algorithms in deep learning.
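The three variants differ only in how much data feeds each update. A hypothetical sketch on a least-squares problem (data and learning rate are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
w_true = np.array([2.0, -1.0])
y = X @ w_true

def grad(w, Xb, yb):
    # gradient of mean squared error on the batch (Xb, yb)
    return 2 * Xb.T @ (Xb @ w - yb) / len(Xb)

w = np.zeros(2)
lr = 0.1

# Batch gradient descent: the whole dataset, one update per pass
w -= lr * grad(w, X, y)

# Stochastic gradient descent: a single example per update
i = rng.integers(len(X))
w -= lr * grad(w, X[i:i+1], y[i:i+1])

# Mini-batch gradient descent: a small random subset per update
idx = rng.choice(len(X), size=16, replace=False)
w -= lr * grad(w, X[idx], y[idx])
```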

    4. Question 4. What Are The Benefits Of Mini-batch Gradient Descent?

      Answer :

      1. Computationally more efficient than stochastic gradient descent.
      2. Can improve generalization by finding flat minima.
      3. Improves convergence: mini-batches approximate the gradient of the entire training set, which may help avoid poor local minima.

    5. Question 5. What Is Data Normalization And Why Do We Need It?

      Answer :

      Data normalization is a very important preprocessing step, used to rescale values into a common range to ensure better convergence during backpropagation. In general, it boils down to subtracting the mean of each feature and dividing by its standard deviation.
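This per-feature standardization (z-score normalization) can be sketched as follows; the sample data is invented:

```python
import numpy as np

# Two features on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

mean = X.mean(axis=0)          # per-feature mean
std = X.std(axis=0)            # per-feature standard deviation
X_norm = (X - mean) / std      # each feature now has mean 0, std 1
```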

    6. Question 6. Weight Initialization In Neural Networks?

      Answer :

      Weight initialization is a very important step. Bad initialization can prevent a network from learning; good initialization leads to quicker convergence and a lower overall error. Biases can generally be initialized to zero. The general rule for the weights is to set them close to zero without being too small.

    7. Question 7. Why Is Zero Initialization Not A Recommended Weight Initialization Technique?

      Answer :

      If all weights in the network are set to zero, every neuron in a layer produces the same output and receives the same gradients during backpropagation.

      The network cannot learn at all because there is no source of asymmetry between neurons. That is why we need to add randomness to the weight initialization process.
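The symmetry problem is easy to demonstrate: with zero weights, every hidden unit computes the identical activation, whereas a small random initialization breaks the tie. The sizes and seeds below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 4))

# Zero initialization: every hidden unit computes the same output,
# so every unit also receives the same gradient during backpropagation.
W1_zero = np.zeros((4, 3))
h_zero = np.tanh(X @ W1_zero)

# Small random initialization: the columns (units) now differ,
# so gradients differ and the units can specialize.
W1_rand = rng.normal(scale=0.1, size=(4, 3))
h_rand = np.tanh(X @ W1_rand)
```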

    8. Question 8. What Is The Role Of The Activation Function?

      Answer :

      The goal of an activation function is to introduce nonlinearity into the neural network so that it can learn more complex functions. Without it, the neural network would only be able to learn functions that are linear combinations of its input data.
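Two of the most common activation functions, sketched in plain NumPy:

```python
import numpy as np

def relu(z):
    # Rectified linear unit: passes positives, zeroes out negatives
    return np.maximum(0, z)

def sigmoid(z):
    # Squashes any real input into the range (0, 1)
    return 1 / (1 + np.exp(-z))

z = np.array([-2.0, 0.0, 2.0])
```

Stacking layers without such a nonlinearity collapses the whole network into a single linear map, which is why every hidden layer is normally followed by one.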

    9. Question 9. What Are Hyperparameters, Provide Some Examples?

      Answer :

      Hyperparameters, as opposed to model parameters, cannot be learned from the data; they are set before the training phase.

      Learning rate:

      It determines how fast the weights are updated during optimization. If the learning rate is too small, gradient descent can be slow to find the minimum; if it is too large, gradient descent may not converge (it can overshoot the minimum). It is considered the most important hyperparameter.

      Number of epochs:

      An epoch is defined as one forward pass and one backward pass over all the training data.

      Batch size:

      The number of training examples in one forward/backward pass.
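A hypothetical training-loop skeleton showing where these three hyperparameters appear; the linear model and data are stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(64, 3))
y = X @ np.array([1.0, 2.0, 3.0])   # synthetic targets

learning_rate = 0.05   # step size for each weight update
num_epochs = 50        # full passes over the training data
batch_size = 16        # examples per forward/backward pass

w = np.zeros(3)
for epoch in range(num_epochs):
    order = rng.permutation(len(X))             # shuffle each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        # mean-squared-error gradient on this mini-batch
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= learning_rate * grad
```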

    10. Question 10. What Is A Model Capacity?

      Answer :

      Model capacity is the ability of a network to approximate a wide range of functions. The higher the model capacity, the larger the amount of information that can be stored in the network.

    11. Question 11. What Is An Autoencoder?

      Answer :

      An autoencoder is an artificial neural network able to learn a representation (encoding) for a set of data without any supervision. The network learns by copying its input to its output; typically the internal representation has smaller dimensions than the input vector, so the network must learn an efficient way of representing the data. An autoencoder consists of two parts: an encoder maps the input to an internal representation, and a decoder converts the internal state back to the output.
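The encoder/decoder structure can be sketched as a minimal linear autoencoder: a 4-dimensional input compressed to a 2-dimensional bottleneck code and then reconstructed. The sizes and (untrained) weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=(1, 4))                  # one input vector

W_enc = rng.normal(scale=0.1, size=(4, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 4))   # decoder weights

code = x @ W_enc         # internal (bottleneck) representation
x_hat = code @ W_dec     # reconstruction of the input

# Training would minimize this reconstruction error, with the
# target output set equal to the input itself.
reconstruction_error = np.mean((x - x_hat) ** 2)
```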

    12. Question 12. What Is A Dropout?

      Answer :

      Dropout is a regularization technique for reducing overfitting in neural networks. At each training step we randomly drop out (set to zero) a set of nodes, so we effectively train a different model for each training case; all of these models share weights. It is a form of model averaging.
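A common way to implement this is "inverted" dropout, sketched below: each activation is zeroed with probability p at training time, and the survivors are rescaled so the expected activation is unchanged (at test time, activations are used as-is). The drop probability and array sizes are illustrative.

```python
import numpy as np

def dropout(h, p, rng):
    # keep each activation with probability 1 - p, then rescale
    mask = rng.random(h.shape) >= p
    return h * mask / (1 - p)

rng = np.random.default_rng(6)
h = np.ones((1000, 10))                 # pretend hidden activations
h_train = dropout(h, p=0.5, rng=rng)    # roughly half are zeroed
```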

    13. Question 13. What Is A Boltzmann Machine?

      Answer :

      A Boltzmann machine is used to optimize the solution of a problem: its job is to optimize the weights and the quantities for the given problem.

      Some important points about Boltzmann Machine −

      • It uses a recurrent structure.
      • It consists of stochastic neurons, each of which is in one of two possible states, either 1 or 0.
      • The neurons are either adaptive (free state) or clamped (frozen state).
      • If we apply simulated annealing to a discrete Hopfield network, it becomes a Boltzmann machine.

    14. Question 14. Is It Ok To Connect From A Layer 4 Output Back To A Layer 2 Input?

      Answer :

      Yes, this can be done, provided the layer-4 output comes from a previous time step, as in a recurrent neural network (RNN). We also need to assume that the previous input batch is somewhat correlated with the current batch.

    15. Question 15. What Is An Auto-encoder?

      Answer :

      An autoencoder is an autonomous machine learning algorithm that uses the backpropagation principle, with the target values set equal to the inputs provided. Internally, it has a hidden layer that describes the code used to represent the input.

      Some Key Facts about the autoencoder are as follows:-

      • It is an unsupervised ML algorithm similar to Principal Component Analysis
      • It minimizes the same objective function as Principal Component Analysis
      • It is a neural network
      • The neural network’s target output is its input

    16. Question 16. What Is Weight Initialization In Neural Networks?

      Answer :

      Weight initialization is one of the most important steps. Bad weight initialization can prevent a network from learning, while good weight initialization gives quicker convergence and a lower overall error. Biases can generally be initialized to zero. The rule for the weights is to set them close to zero without being too small.

All rights reserved © 2018 Wisdom IT Services India Pvt. Ltd