**Deep Learning with TensorFlow**

**Module 1 – Introduction to TensorFlow**

**Question 1- Which statement about TensorFlow is FALSE?**

- TensorFlow is well suited to Deep Learning problems
**TensorFlow is not suitable for Machine Learning problems**
- TensorFlow has a C/C++ backend as well as Python modules
- TensorFlow is an open source library
- All of the above

**Question 2- What is a Data Flow Graph?**

**A representation of data dependencies between operations**
- A cartesian (x,y) chart
- A graphics user interface
- A flowchart describing an algorithm
- None of the above

**Question 3- Which function is NOT used as an Activation Function?**

- sigmoid()
- softplus()
**sin()**
- tanh()
- relu()
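For reference, the activation functions listed above can be sketched in plain Python (TensorFlow provides its own implementations as `tf.sigmoid`, `tf.tanh`, `tf.nn.relu`, and `tf.nn.softplus`):

```python
import math

# Common activation functions, written out in plain Python for illustration.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # squashes input to (0, 1)

def softplus(x):
    return math.log(1.0 + math.exp(x))  # smooth approximation of relu

def tanh(x):
    return math.tanh(x)                 # squashes input to (-1, 1)

def relu(x):
    return max(0.0, x)                  # clips negatives to 0; unbounded above

# sin() is a periodic function, not a standard activation function.
print(relu(-2.0), relu(3.0))  # 0.0 3.0
print(sigmoid(0.0))           # 0.5
```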

**Question 4- Which statement about TensorFlow is TRUE?**

- runs on FPGA
- runs on CPU only
**runs on CPU and GPU**
- runs on GPU only

**Question 5- Why is TensorFlow a suitable library for Deep Learning?**

- It will benefit from TensorFlow’s auto-differentiation and suite of first-rate optimizers.
- Provides helpful tools to assemble subgraphs common in neural networks and deep learning
- TensorFlow has extensive built-in support for deep learning
**All of the above**

**Module 2 – Convolutional Networks**

**Question 1- What can be achieved with “Convolution” operation on Images?**

- Noise Filtering
- Image Smoothing
- Image Blurring
- Edge Detection
**All of the above**
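As an illustration of edge detection, here is a minimal "valid" 2D convolution in plain Python; in practice TensorFlow's `tf.nn.conv2d` would be used, and the kernel and image below are made up for the example:

```python
# A minimal "valid" 2D convolution in plain Python, used here for edge detection.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

# A vertical-edge kernel: strong response where intensity jumps left-to-right.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# 4x4 image: dark left half (0), bright right half (1) -> edge down the middle.
image = [[0, 0, 1, 1]] * 4

print(conv2d(image, edge_kernel))  # [[3, 3], [3, 3]] -- the edge lights up
```

Swapping in a different kernel (an averaging kernel, for instance) gives smoothing or blurring instead, which is why all of the listed operations reduce to convolution.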

**Question 2- For convolution, it is better to store Images in TensorFlow Graph as:**

**Placeholder**
- CSV file
- Numpy array
- Variable
- None of the above

**Question 3- Which of the following statements is TRUE about Convolutional Neural Networks (CNN)?**

- CNN can be applied Only on Image and Text data.
**CNN can be applied on ANY 2D and 3D array of data**
- CNN can be applied Only on Text and speech data.
- CNN can be applied Only on Image data.
- All of the above

**Question 4- Which of the following Layers can be part of a Convolutional Neural Network (CNN)?**

- Dropout
- Softmax
- Maxpooling
- Relu
**All of the above**

**Question 5- Objective of Activation Function is to:**

- Increase the Size of Network
**Handle Non-Linearity in the Network**
- Handle Linearity in the Network
- Reduce the Size of Network
- None of the above

**Module 3 – Recurrent Neural Network**

**Question 1- What is a Recurrent Neural Network?**

**A Neural Network that recurs to itself**
- An infinite layered Neural Network
- A special kind of Neural Network to predict weather
- A markovian model

**Question 2- What is TRUE about RNNs?**

**RNNs are VERY suitable for sequential data**
- RNNs are NOT suitable for sequential data
- RNNs are ONLY suitable for sequential data
- All of the above

**Question 3- What application(s) is (are) suitable for RNNs?**

- Estimate temperatures from weather Data
- Natural Language Processing
- Video context retriever
- Speech Recognition
**All of the above**

**Question 4- Why are RNNs susceptible to issues with their gradients?**

- Numerical computation of gradients can run into instabilities
- Gradients can quickly drop and stabilize at near zero
- Propagation of errors due to the recurrent characteristic
- Gradients can grow exponentially
**All of the above**

**Question 5- What does LSTM stand for?**

- Last State Threshold Model
- Limit Sinusoidal Term Memory
**Long Short Term Memory**
- Least Squares Topological Minimization
- None of the above

**Module 4 – Restricted Boltzmann Machines (RBM)**

**Question 1- What can we do with unsupervised learning?**

- Data dimensionality reduction
- Object recognition
- Feature extraction
- Pattern recognition
**All of the above**

**Question 2- How many layers does an RBM (Restricted Boltzmann Machine) have?**

- Infinite
- 4
**2**
- 3
- All of the above

**Question 3- How does an RBM compare to PCA?**

- RBM cannot reduce dimensionality
- PCA cannot generate original data
- PCA is another type of Neural Network
**Both can regenerate input data**
- All of the above

**Question 4- Select the TRUE statement about what “Restricted” means in RBM:**

- It is a Boltzmann machine, but with no connections between nodes in the same layer
- Each node in the first layer has a bias.
- The RBM reconstructs data by making several forward and backward passes between the visible and hidden layers.
- At the hidden layer’s nodes, X is multiplied by a W (weight matrix) and added to h_bias.
**All of the above**
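The forward and backward passes described in these options can be sketched in plain Python; the layer sizes and random weights below are made up for illustration, and no training (Contrastive Divergence) step is shown:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy RBM: 4 visible units, 2 hidden units (illustrative sizes only).
n_visible, n_hidden = 4, 2
W = [[random.gauss(0, 0.1) for _ in range(n_hidden)] for _ in range(n_visible)]
h_bias = [0.0] * n_hidden
v_bias = [0.0] * n_visible

def forward(v):
    # Hidden activations: sigmoid(v . W + h_bias)
    return [sigmoid(sum(v[i] * W[i][j] for i in range(n_visible)) + h_bias[j])
            for j in range(n_hidden)]

def backward(h):
    # Reconstruction: sigmoid(h . W^T + v_bias)
    return [sigmoid(sum(h[j] * W[i][j] for j in range(n_hidden)) + v_bias[i])
            for i in range(n_visible)]

v = [1.0, 0.0, 1.0, 0.0]
h = forward(v)          # forward pass: visible -> hidden
v_recon = backward(h)   # backward pass: hidden -> reconstructed visible
print(h, v_recon)
```

Note there are no connections within a layer: each hidden unit depends only on the visible units, which is what "restricted" refers to.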

**Question 5- Select the TRUE statement about RBM:**

- The objective function is to maximize the likelihood of our data being drawn from the reconstructed data distribution
- The Negative phase of RBM decreases the probability of samples generated by the model.
- Contrastive Divergence (CD) is used to approximate the negative phase of RBM.
- The Positive phase of RBM increases the probability of training data.
**All of the above**

**Module 5 – Autoencoders**

**Question 1- Autoencoders are also known as:**

- LSTM Networks
**Diabolo Networks**
- Deep Belief Network
- Siamese networks
- None of the above

**Question 2- Which of the following problems cannot be solved by Autoencoders:**

- Dimensionality Reduction
**Time series prediction**
- Image Reconstruction
- Emotion Detection
- All of the above

**Question 3- What is TRUE about Autoencoders:**

- Help to reduce the Curse of Dimensionality
- Are Shallow Neural Networks
- Used to Learn Most important Features in Data
- Used for Unsupervised Learning
**All of the Above**

**Question 4- What are Autoencoders:**

- A Neural Network that is designed to replace Non-Linear Regression
**A Neural Network that is trained to attempt to copy its input to its output**
- A Neural Network that learns all the weights by using labeled Data
- A Neural Network where different layer inputs are controlled by gates
- All of the Above

**Question 5- What is a Deep Autoencoder:**

**Autoencoder with Multiple Hidden Layers**
- Autoencoder with multiple input and output layers
- Autoencoder stacked with Deep Belief Network
- Autoencoder stacked with
- None of the Above

**Final Exam Answers**

**Question 1-Why use a Data Flow graph to solve Mathematical expressions?**

**To create a pipeline of operations and its corresponding values to be parsed**
- To represent the expression in a human-readable form
- To show the expression in a GUI
- Because it is the only way to solve mathematical expressions in a digital computer
- None of the above

**Question 2-What is an Activation Function?**

- All of the above
- A function that models a phenomenon or process
**A function that triggers a neuron and generates the outputs**
- A function to normalize the output
- None of the above

**Question 3-Why is TensorFlow considered fast and suitable for Deep Learning?**

- it is suitable to operate over large and multidimensional tensors
- runs on CPU
- its core is based on C++
- runs on GPU
**All of the above**

**Question 4-Can TensorFlow replace Numpy?**

- None of the above
- No, not at all
- With numpy alone, we can’t solve Deep Learning problems; therefore TensorFlow is required
**Yes, completely**
- Partially for some operations on tensors, such as minimization

**Question 5-What is FALSE about Convolutional Neural Networks (CNN)?**

**Fully connects to all neurons in all the layers**
- connects only to neurons in a local region (kernel size) of the input image
- builds feature maps hierarchically in every layer
- Inspired by the human visual system
- None of the above

**Question 6-What does “Strides” in Maxpooling mean?**

- The number of pixels the kernel should add.
**The number of pixels the kernel should be moved.**
- The size of the kernel.
- The number of pixels the kernel should remove.
- None of the above
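A minimal sketch of the idea in plain Python: a pooling window that jumps `stride` pixels at a time (in TensorFlow itself, `tf.nn.max_pool` takes the window size and strides as parameters; the image below is made up for the example):

```python
# Minimal max-pooling with a configurable stride, in plain Python.
def maxpool(image, pool=2, stride=2):
    out = []
    for i in range(0, len(image) - pool + 1, stride):
        row = []
        for j in range(0, len(image[0]) - pool + 1, stride):
            window = [image[i + a][j + b] for a in range(pool) for b in range(pool)]
            row.append(max(window))
        out.append(row)
    return out

image = [[1, 3, 2, 4],
         [5, 6, 7, 8],
         [9, 2, 1, 0],
         [3, 4, 5, 6]]

# With stride 2 the 2x2 window moves 2 pixels at a time: 4x4 input -> 2x2 output.
print(maxpool(image, pool=2, stride=2))  # [[6, 8], [9, 6]]
```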

**Question 7-What is TRUE about “Padding” in Convolution?**

**Size of Input Image is reduced for “VALID” padding.**
- Size of Input Image is reduced for “SAME” padding.
- Size of Input Image is Increased for “SAME” padding.
- Size of input image is increased for “VALID” padding.
- All of the above
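The output-size rules behind this answer can be written out directly; the function below follows TensorFlow's documented formulas for "VALID" and "SAME" padding along one dimension:

```python
import math

# Convolution output size along one dimension, following TensorFlow's
# "VALID" and "SAME" padding rules (n = input size, k = kernel, s = stride).
def out_size(n, k, s, padding):
    if padding == "VALID":
        return math.ceil((n - k + 1) / s)  # no padding: output shrinks
    if padding == "SAME":
        return math.ceil(n / s)            # zero-padded: output = ceil(n / s)
    raise ValueError(padding)

# 28x28 image, 5x5 kernel, stride 1:
print(out_size(28, 5, 1, "VALID"))  # 24 -> "VALID" reduces the output size
print(out_size(28, 5, 1, "SAME"))   # 28 -> "SAME" preserves it (at stride 1)
```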

**Question 8-Which of the following best describes the Relu Function?**

- (-1,1)
- (0,5)
**(0, Max)**
- (-inf,inf)
- (0,1)

**Question 9-Which ones are types of Recurrent Neural Networks?**

- Hopfield Network
- Elman Networks and Jordan Networks
- Recursive Neural Network
- Deep Belief Network
**LSTM**

**Question 10- What is TRUE about RNNs**

- RNNs can predict the future
**RNNs are VERY suitable for sequential data**
- RNNs are NOT suitable for sequential data
- RNNs are ONLY suitable for sequential data
- All of the above

**Question 11-What is the problem with RNNs and gradients?**

- Numerical computation of gradients can run into instabilities
- Gradients can quickly drop and stabilize at near zero
- Propagation of errors due to the recurrent characteristic
- Gradients can grow exponentially
**All of the above**

**Question 12-What type of RNN would you use in an NLP project to predict the next word in a phrase? (only one is correct)**

- Bi-directional RNN
- Neural history compressor
**Long Short Term Memory**
- Echo state network
- None of the above

**Question 13-How can RBM reduce the number of features?**

- By transforming the features using a kernel function
- By randomly filtering out a few features, then checking if the input can be regenerated
- By minimizing the difference between inputs and outputs, while weighting the features in the
- By cutting off features with less variance

**All of the above**

**Question 14-How do Autoencoders compare to K-means?**

- Autoencoders are always faster than k-means
- Both are based on Neural Networks
- K-Means is always better than Autoencoders
**Both can cluster the data**
- None of the Above

**Question 15-Select all possible uses of Autoencoders and RBM (select all that apply)**

- Predict data in time series
- Pattern Recognition
- Dimensionality Reduction
**Clustering**
- All of the above

**Question 16-What is TRUE about Collaborative Filtering**

**It is a technique used by Recommender Systems**
- None of the Above
- It makes automatic predictions for a user by collecting information from many users
- RBM can be used to implement a collaborative filter
- It is a Deep Neural Network

**Question 17-Which of the statements is TRUE for training Autoencoders:**

- The Size of the Last Layer must be at least 10% of the Input Layer dimension
**The size of the Input and Last Layers must be of the same dimensions**
- The Last Layer must be double the size of the Input Layer dimension
- The Last Layer must be half the size of Input Layer Dimension
- None of the Above.

**Question 18-To Design a Deep Autoencoder Architecture, what factors are to be considered:**

- The size of the centre-most layer has to be close to the number of important features to be extracted.
- The centre-most layer should have the smallest size compared to all other layers
- The Network should have an odd number of layers
- All the layers must be symmetrical with respect to the centre-most layer
**All of the Above**
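As a sketch, a deep autoencoder layer layout satisfying all of these factors might look as follows, for a hypothetical 784-dimensional input (a flattened 28x28 image); the exact sizes are made up for illustration:

```python
# Layer sizes for a deep autoencoder: symmetric around the centre,
# smallest in the middle, odd number of layers; the centre size (32)
# approximates the number of important features to extract.
layers = [784, 256, 64, 32, 64, 256, 784]

assert len(layers) % 2 == 1                     # odd number of layers
assert layers == layers[::-1]                   # symmetric about the centre
assert min(layers) == layers[len(layers) // 2]  # centre layer is the smallest
print(layers)
```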

**Question 19-Which is TRUE about Backpropagation:**

- Can be used to train LSTM
- Can be used to train CNN
- Can be used to train RBM
- Can be used to train Autoencoders
**All of the Above**

**Question 20-How can an Autoencoder be improved to handle highly nonlinear Data:**

- Use Genetic Algorithms
**Add more Hidden Layers to the Network**
- Use Higher initial Weight Values
- Use lower initial weight Values
- All of the Above

Thanks for the answers