
Deep Learning Fundamentals Cognitive Class Exam Quiz Answers

Deep Learning Fundamentals Cognitive Class Certification Answers

The further one dives into the ocean, the more unfamiliar the territory becomes. Deep learning may look familiar at the surface, but the deeper you go, the stranger the waters get. This course is designed to get you hooked on neural nets, all while keeping the school together.

Question 1: Select the reason(s) for using a Deep Neural Network

  • Some patterns are very complex and can’t be deciphered precisely by alternate means
  • Deep Nets are great at recognizing patterns and using them as building blocks in deciphering inputs
  • We finally have the technology – GPUs – to accelerate the training process by several orders of magnitude
  • All of the above

Question 2: What is TRUE about the functions of a Multi Layer Perceptron?

  • They were the first neural nets, born out of the need to address the inaccuracy of an early classifier, the perceptron.
  • It predicts which group a given set of inputs falls into.
  • It generates a score that determines the confidence level of the prediction.
  • All of the above.

Question 3: Why is the vanishing gradient a problem?

  • Training is quick if the gradient is large and slow if it is small
  • With backprop, the gradient becomes smaller as it works back through the net
  • The gradient is calculated by multiplying two numbers between 0 and 1
  • All of the above.
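
A quick numeric sketch of why those last two answers matter: with a sigmoid activation, each layer's derivative is at most 0.25, so the product that backprop forms shrinks rapidly with depth. The code below is illustrative only; the random pre-activations and the depth of ten layers are arbitrary assumptions.

```python
import numpy as np

# Illustrative sketch: the gradient reaching an early layer is (roughly) a
# product of per-layer derivative terms. A sigmoid's derivative never
# exceeds 0.25, so the product vanishes as depth grows.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)   # maximum value is 0.25, at x = 0

np.random.seed(0)
grad = 1.0
for layer in range(1, 11):
    z = np.random.randn()      # hypothetical pre-activation at this layer
    grad *= d_sigmoid(z)       # multiply by a number between 0 and 0.25
    print(f"gradient factor after {layer} layers: {grad:.3e}")
```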

Question 1: For Unsupervised Learning, which of the following deep nets would you choose?

  • Autoencoder or Restricted Boltzmann Machines
  • Deep Belief Nets
  • Convolutional Nets
  • Recurrent Nets

Question 2: True or False: The RELU activation has no effect on back-propagation and the vanishing gradient

  • True
  • False

Question 3: True or False: Convolutional Nets are the right model when dealing with data that changes over time because of their built-in feedback loop, allowing them to serve as a forecasting engine.

  • True
  • False

Question 1: Which of the following are use cases of Deep nets?

  • Sentiment Analysis of text data.
  • Offering personalized ads based on user activity history.
  • Flagging a transaction as fraudulent.
  • Analyzing and segmenting customers based on digital activity and footprint.
  • Using satellite feeds and sensor data to detect changes in environmental conditions.
  • All of the above.  

Question 2: Which of the following are use cases of machine vision? Select all that apply.

  • Image classification and tagging
  • Sentiment Analysis
  • Face Detection
  • Video Recognition
  • Speech Recognition

Question 3: Which of the following is a good application of an RNTN?

  • If the patterns change through time
  • For general classification problems
  • If there is an unknown hierarchy inherent in the input features
  • For Supervised Fine-tuning
  • To determine the relative importance in the input features

Question 1: Which of the following is not an aspect of a deep net platform?

  • Choice of deep net models
  • Ability to integrate data from multiple sources
  • Manage deep net models from the UI
  • Under the hood performance enhancements to allow for fast training and execution
  • Deriving the optimal hyper-parameter configuration

Question 2: What are the different aspects of a Deep Learning Library?

  • They are a set of pre-built functions and modules that you can call through your own programs
  • Usually maintained by high-performance teams and are regularly updated
  • Most are open source and have a large community that contribute to the code base
  • All of the above.

Question 3: True or False: Theano, Caffe, and TensorFlow are examples of deep learning platforms.

  • True
  • False

Question 1: For supervised learning, which of the following deep nets would you choose?

  • Autoencoder
  • Deep Belief Nets
  • Convolutional Nets
  • Restricted Boltzmann Machines
  • Recurrent Nets

Question 2: Which of the following is true with respect to the training process of a deep net?

  • The Cost is the difference between the net’s predicted and actual outputs.
  • The training process utilizes gradients which measure the rate at which the weights and biases change with respect to the cost.
  • The objective of the training process is to make the cost as low as possible.
  • The training process utilizes a technique called back-propagation.
  • All of the above.

Question 3: True or False: With backprop, the early layers train slower than the later ones, making the early layers incapable of accurately identifying the pattern building blocks needed to decipher the full pattern.

  • True
  • False

Question 4: For image recognition, which of the following deep nets would you choose? Select all that apply.

  • Autoencoder
  • Deep Belief Nets
  • Convolutional Nets
  • Restricted Boltzmann Machines
  • Recurrent Nets

Question 5: How does the Deep Belief Network (DBN) solve the vanishing gradient? Select all that apply.

  • It uses a stack of RBMs to determine the initial weights and biases, where the output of any RBM forms the input to the next RBM.
  • It uses a small labelled data set to associate patterns learned by the RBMs to classes.
  • It utilizes supervised fine-tuning, resulting in tweaks in weights and biases and a slight improvement in accuracy.
  • It quickly moves through solution states – set of weights and biases – going from one to another based on a reward.
  • The complete process – RBMs for pre-training and supervised fine-tuning – results in a very accurate net which trains in an acceptable time.

Question 6: True or False: To train, a DBN combines two learning methods – supervised and unsupervised.

  • True
  • False

Question 7: Which of the following is the most popular use of a Convolutional Net?

  • Image Recognition
  • Object Recognition in an Image
  • Time Series Forecasting
  • Supervised Fine Tuning
  • General classification

Question 8: Which of the following are true about an RBM? Select all that apply.

  • The RBM is part of the first attempt at beating the vanishing gradient and uses unlabelled data.
  • It improves its own accuracy through self-correction.
  • Its purpose is to re-create inputs and in doing so has to make decisions about which input features are more important.
  • It stores the relative importance of the features as weights and biases.
  • It predicts which group a given set of inputs falls into.

Question 9: Which of the following statements are true about the architecture of a CNN? Select all that apply.

  • A CNN can only have two types of layers: CONV and RELU.
  • A RELU layer has to always be followed by a POOL layer.
  • FC layers are usually found at the end.
  • A CONV layer has a theoretical maximum number of filters.
  • A typical CNN implementation has multiple repetitions of CONV, RELU and POOL layers, with sub-repetitions.

Question 10: True or False: By definition, the classifier in the nodes of an MLP cannot be anything other than the Perceptron.

  • True
  • False

Question 11: Which of the following are differences between a Recurrent Net and a Feedforward Net? Select all that apply.

  • Recurrent Nets feed the output of any time step back in as input for the next step.
  • Recurrent Nets are used for time series forecasting.
  • Recurrent Nets can output a sequence of values.
  • Recurrent Nets are trained using back-propagation.
  • The nodes in a recurrent net have a classifier that activates and produces a score.

Question 12: Which of the following statements are true about training a Recurrent Net? Select all that apply.

  • Since RNNs use backprop, the vanishing gradient is a problem.
  • The number of time steps used for training has no bearing on the severity of the vanishing gradient problem.
  • The vanishing gradient can potentially lead to decay of information through time.
  • The most popular technique to address the vanishing gradient is the use of gates.
  • The only technique to address the vanishing gradient is the use of gates.

Question 13: True or False: Deep Autoencoders are used for dimensionality reduction.

  • True
  • False

Question 14: Which of the following are true about Autoencoders? Select all that apply.

  • It improves its own accuracy through self-correction.
  • Its purpose is to re-create inputs and in doing so has to make decisions about which input features are more important.
  • A Restricted Boltzmann Machine is a type of Autoencoder.
  • It stores the relative importance of the features as weights and biases.
  • It predicts which group a given set of inputs falls into.
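
As a quick illustration of the re-creation idea behind the last two questions, here is a minimal PyTorch sketch; the layer sizes (100 down to 8 and back) are arbitrary assumptions, not values from the course. The net tries to reproduce its own input through a narrow bottleneck, and the reconstruction error drives its self-correction.

```python
import torch
import torch.nn as nn

# Minimal autoencoder sketch: compress the input to a narrow bottleneck,
# then try to re-create the original input from it. The bottleneck is
# forced to keep only the most important features.
autoencoder = nn.Sequential(
    nn.Linear(100, 8),    # encoder: 100 input features -> 8 latent units
    nn.ReLU(),
    nn.Linear(8, 100),    # decoder: re-create the original 100 inputs
)

x = torch.randn(16, 100)                 # dummy batch of inputs
loss = nn.MSELoss()(autoencoder(x), x)   # reconstruction error to minimize
loss.backward()                          # self-correction via backprop
```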

Question 15: True or False: Given they are mainly about machine vision, Convolutional Nets don’t really find a home in the field of medicine.

  • True
  • False

Introduction to Deep Learning Fundamentals

Deep learning, a subset of machine learning, leverages neural networks with many layers to model complex patterns in data. It’s particularly powerful for tasks involving large datasets and unstructured data such as images, audio, and text. Here’s a breakdown of the core concepts and components of deep learning.

1. Neural Networks

Neural networks are the foundation of deep learning. They consist of layers of interconnected nodes, or neurons, where each connection has a weight that is adjusted during training. The basic types include:

  • Perceptron: The simplest type of neural network with a single layer of nodes.
  • Feedforward Neural Networks (FNNs): These have an input layer, one or more hidden layers, and an output layer. Information moves in one direction, from input to output.
  • Recurrent Neural Networks (RNNs): These are designed for sequential data, where connections form cycles allowing information to persist.
  • Convolutional Neural Networks (CNNs): These are specialized for processing grid-like data such as images. They use convolutional layers to automatically and adaptively learn spatial hierarchies of features.
  • Generative Adversarial Networks (GANs): These consist of two networks, a generator and a discriminator, that compete against each other to produce realistic data samples.
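
To make the feedforward case concrete, here is a minimal PyTorch sketch; the layer sizes are illustrative assumptions. Information flows strictly from the input layer, through one hidden layer, to the output layer.

```python
import torch
import torch.nn as nn

# A minimal feedforward network (FNN): input layer -> hidden layer -> output
# layer, with information moving in one direction only. The sizes 4, 16 and 3
# are arbitrary choices for the sketch.
model = nn.Sequential(
    nn.Linear(4, 16),   # input layer (4 features) to hidden layer (16 units)
    nn.ReLU(),          # non-linearity between layers
    nn.Linear(16, 3),   # hidden layer to output layer (3 scores)
)

x = torch.randn(8, 4)   # a dummy batch of 8 samples
print(model(x).shape)   # torch.Size([8, 3])
```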

2. Activation Functions

Activation functions introduce non-linearity into the network, allowing it to learn complex patterns. Common activation functions include:

  • Sigmoid: Maps input values to a range between 0 and 1.
  • Tanh: Maps input values to a range between -1 and 1.
  • ReLU (Rectified Linear Unit): Outputs the input directly if it is positive; otherwise, it outputs zero. Variants like Leaky ReLU and Parametric ReLU address some limitations of the basic ReLU.
  • Softmax: Converts a vector of values into a probability distribution.
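
For reference, the four functions above can be written in a few lines of numpy; this is a sketch rather than a production implementation (the subtraction inside softmax is a standard numerical-stability trick).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # maps to (0, 1)

def tanh(x):
    return np.tanh(x)                 # maps to (-1, 1)

def relu(x):
    return np.maximum(0.0, x)         # passes positives, zeroes negatives

def softmax(x):
    e = np.exp(x - np.max(x))         # subtract max for numerical stability
    return e / e.sum()                # non-negative and sums to 1

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x), tanh(x), relu(x), softmax(x), sep="\n")
```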

3. Training Neural Networks

Training involves adjusting the weights of the network to minimize the difference between the predicted output and the actual output, typically through:

  • Forward Propagation: The process of passing input data through the network to obtain output predictions.
  • Loss Function: Measures how well the model’s predictions match the actual data. Common loss functions include Mean Squared Error (MSE) for regression and Cross-Entropy Loss for classification.
  • Backpropagation: A method for calculating the gradient of the loss function with respect to each weight by the chain rule, which is then used to update the weights.
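
The three steps fit together in a few lines. This PyTorch sketch uses dummy data and an arbitrary toy model, so treat it as an outline of one training step rather than a recipe.

```python
import torch
import torch.nn as nn

# One training step: forward propagation, loss, backpropagation, update.
model = nn.Linear(10, 2)                 # arbitrary toy model
loss_fn = nn.CrossEntropyLoss()          # classification loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)                  # dummy batch of inputs
y = torch.randint(0, 2, (32,))           # dummy class labels

pred = model(x)                          # forward propagation
loss = loss_fn(pred, y)                  # how far off are the predictions?
optimizer.zero_grad()                    # clear previous gradients
loss.backward()                          # backpropagation (chain rule)
optimizer.step()                         # adjust weights to reduce the loss
```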

4. Optimization Algorithms

These algorithms are used to minimize the loss function by adjusting the weights. Popular optimization algorithms include:

  • Stochastic Gradient Descent (SGD): Updates weights incrementally using a subset of the training data.
  • Adam (Adaptive Moment Estimation): Combines the advantages of two other extensions of SGD, namely AdaGrad and RMSProp, by maintaining per-parameter learning rates adapted based on both first and second moments of the gradients.
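
As a toy illustration (the learning rate and step count are arbitrary choices), both optimizers can minimize the same one-parameter loss f(w) = (w − 3)²; Adam adapts its step size from running estimates of the gradient's first and second moments.

```python
import torch

# Minimize f(w) = (w - 3)^2 with Adam; swapping in SGD is a one-line change.
w = torch.nn.Parameter(torch.tensor(0.0))
opt = torch.optim.Adam([w], lr=0.1)   # or: torch.optim.SGD([w], lr=0.1)

for _ in range(200):                  # arbitrary number of steps
    loss = (w - 3.0) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

print(round(w.item(), 3))             # close to 3.0
```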

5. Regularization Techniques

Regularization helps prevent overfitting by introducing additional constraints or penalties. Techniques include:

  • L1 and L2 Regularization: Add penalties equivalent to the absolute value or the squared value of the weights, respectively, to the loss function.
  • Dropout: Randomly sets a fraction of the input units to zero at each update during training, which helps prevent units from co-adapting too much.
  • Batch Normalization: Normalizes the input of each layer so that it has a mean of zero and a standard deviation of one, which can stabilize and accelerate training.
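
The three techniques compose naturally in one model. In this PyTorch sketch the layer sizes, dropout rate, and weight-decay strength are arbitrary assumptions; L2 regularization is applied here through the optimizer's weight_decay argument, a common convention.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),   # batch normalization: zero mean, unit std per unit
    nn.ReLU(),
    nn.Dropout(p=0.5),    # dropout: randomly zero activations during training
    nn.Linear(64, 2),
)

# weight_decay adds an L2 penalty on the weights to each update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
```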

6. Deep Learning Frameworks

Several frameworks facilitate the building and training of deep learning models, including:

  • TensorFlow: An open-source library developed by Google, widely used for both research and production.
  • PyTorch: An open-source library developed by Facebook’s AI Research lab, known for its dynamic computation graph and ease of use.
  • Keras: A high-level API for neural networks, running on top of TensorFlow, that simplifies model building.
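
As a taste of the high-level style, here is a minimal Keras sketch (assuming TensorFlow is installed; the layer sizes and loss are illustrative choices): defining and compiling a small classifier takes only a few lines.

```python
import tensorflow as tf

# A small classifier in Keras: define the layers, then compile with an
# optimizer and a loss. All sizes here are arbitrary for the sketch.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),                     # 10 input features
    tf.keras.layers.Dense(32, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(3, activation="softmax"),  # 3-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```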

7. Applications of Deep Learning

Deep learning has broad applications across various domains:

  • Computer Vision: Image classification, object detection, and image generation.
  • Natural Language Processing (NLP): Text classification, language translation, and sentiment analysis.
  • Speech Recognition: Converting spoken language into text.
  • Healthcare: Medical image analysis, drug discovery, and genomics.
  • Autonomous Systems: Self-driving cars and robotics.

Conclusion

Deep learning has revolutionized various fields by enabling the development of models that can automatically learn and improve from experience. Its success is built on neural networks, effective training techniques, and powerful computational resources. As research and technology continue to advance, deep learning is poised to drive further innovations and applications across diverse sectors.
