Career Essentials in Generative AI by Microsoft and LinkedIn Exam Answers
You just purchased a new smart music player that can add songs if you simply describe the characteristics you’re looking for. To test out the player you say, “Will you add that famous Johnny Cash song that they turned into a movie?” What type of technology will your music player use to find your song?
- It is converting your description into a search by using natural language processing.
- It is using generative AI to compose a new song.
- It is using unsupervised machine learning to make a recommendation based on your music tastes.
- It is using reinforcement learning to create a personalized playlist for your music player.
You are an executive for a large company that has a customer service department. Recently some of the top managers have been talking about replacing customer service representatives with an AI chatbot. Some of the managers feel like the chatbot should impersonate a human customer service representative. They argue that if customers know it’s an AI chatbot then they would immediately disconnect. Other managers feel like it would be unethical to impersonate a human. What would be the best place to communicate your decision?
- Schedule a company-wide meeting.
- Create a Profitability with Generative AI Action Plan document.
- The executives should leave it to the product development team.
- Create a Responsible AI Policy and Governance framework.
You’re a director for an organization that detects credit card fraud. You’re trying to convince your manager to adopt a generative adversarial network (GAN) to test your system to see if it can identify credit card fraud. What’s one of the best arguments you have for using this type of neural network?
- Fraudulent transactions are by their very nature adversarial, so it’s good to have a network that reflects this.
- A GAN would allow the system to invent fraudulent transactions that aren’t present in the data.
- This type of neural network arrangement will be the easiest for your organization to set up.
- This type of system will generate many more fraudulent transactions than you would get with a typical neural network.
What is a good description of how a machine learning system operates?
- An AI system generates content as opposed to just classifying existing data.
- An AI system learns in a way that is consistent with its preprogrammed responses.
- A system achieves artificial general intelligence by collating responses from experts in every field.
- A system “learns” by observing patterns in massive datasets.
The Oxford dictionary defines plagiarism as “the practice of taking someone else’s work or ideas and passing them off as one’s own.” If you ask ChatGPT to describe a sunset, it will give you a response, but these systems have never experienced a sunset. The only way it could respond is by “passing off ideas as its own.” Does that mean that these generative AI systems are plagiarism machines?
- Yes, reciting what others have written about sunsets is plagiarism.
- It’s unclear, so there needs to be a new measure of authenticity.
- No, these systems are incapable of breaking the law.
- No, these systems may be thought of as experiencing events that they haven’t experienced.
You are an executive of a company that is implementing generative AI systems. What are the most essential ethical considerations to balance?
- your organization’s obligation to appease shareholders against your obligations to humanity
- the dangers to humanity against the possibility of your own enrichment
- the cost of implementing these new systems against the costs of maintaining full employment
- getting creative generative AI output and optimizing production while maintaining human oversight
A national newspaper reporter is writing a story on generative AI. As part of the story, they chat for hours with a new online generative chatbot. A few hours into the conversation, the chatbot tries to convince the reporter to leave his partner. The chatbot company said they don’t know why it gave these responses and will limit conversations to 30 minutes. What might be one of the biggest ethical challenges with this system?
- Chatbots shouldn’t offer marital advice.
- The system offered personal advice too soon in the conversation.
- The system has access to tremendous amounts of data, so it can offer hard but truthful advice.
- There isn’t enough transparency into how the chatbot is responding.
Why did early artificial intelligence systems do so well with board games?
- Because computer scientists could do a good job of programming all the rules of the game in a way the system would understand.
- Because board games give the system unique insight into human behavior, early systems could learn and mimic the same behavior.
- Because board games are inherently chaotic, the system had a lot of opportunities to crunch new data.
- Because even with their limited processing power, early systems thrived in a world of simple rules and pattern matching.
What does the term model mean in generative AI?
- A model is a data set of ethical issues.
- A model is AI mimicking human behavior.
- A model is a generative AI that trains another artificial intelligence on a dataset.
- A model is a set of algorithms that have been trained on a data set.
You’re trying to get better at prompt engineering, so you decided to try a new technique. You say, “Write a 500-word essay on large language models and hallucinations from the perspective of a computer science graduate student at a university.” What technique are you using here?
- You’re brainstorming with the system about large language models and hallucinations.
- You are using role-playing to get more accurate responses.
- You are using a compression technique by limiting the results to 500 words.
- You are taking an adversarial approach to get both sides of the story.
In machine learning, when a data model performs exceptionally well during the training set phase, but fails to generate accurate predictions during the test set phase, the model is _____ the data.
You’re trying to improve your skills with prompt engineering, so you asked ChatGPT to generate a paragraph of text. The first prompt you create is, “Tell me about lactose intolerance.” You weren’t satisfied with the results, so for the second prompt you wrote, “Write a blog article on lactose intolerance for my healthcare website.” What did you do with the second prompt that you didn’t do with the first?
- You asked for an adversarial response.
- You started a brainstorming session.
- You used an analogy.
- You provided context.
How is an artificial neural network related to machine learning?
- An artificial neural network uses preprogrammed responses instead of learning.
- An artificial neural network is an earlier form of machine learning.
- An artificial neural network is a machine learning technique.
- An artificial neural network does not require programming like a machine learning system.
What is a generative adversarial network (GAN)?
- when two neural networks work in opposition, with a generator and a discriminator to improve the generative output
- when two generative AI organizations compete for the same resources
- when a discriminator generates output so that a generator can review it and offer adversarial feedback
- when two neural networks work cooperatively to produce the best output
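The generator-versus-discriminator dynamic can be sketched in one dimension. This is a deliberately simplified, non-neural illustration (the distributions, the midpoint discriminator, and the update rule are all invented for the example), not a real GAN implementation:

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # the "real" data distribution the generator tries to imitate

def real_samples(n):
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def fake_samples(theta, n):
    # generator: produces samples centered on its current parameter theta
    return [random.gauss(theta, 1.0) for _ in range(n)]

def discriminator_cut(real, fake):
    # discriminator: the single cut-off halfway between the two sample means
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(real) + mean(fake)) / 2

theta = 0.0  # generator starts far from the real distribution
for _ in range(300):
    real = real_samples(64)
    fake = fake_samples(theta, 64)
    cut = discriminator_cut(real, fake)
    # generator update: shift output toward the region labeled "real"
    theta += 0.1 * (cut - theta)

# theta has drifted close to REAL_MEAN, so fakes now fool the discriminator
```

Each round the discriminator finds the best boundary it can, and the generator moves to blur that boundary, which is the opposition the correct answer describes.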
You work for a large financial institution that wants to identify undervalued stocks. To do so, you feed decades of financial information into an artificial neural network to create clusters of stocks. Then your data science team tries to find stocks in those clusters that substantially increased in value. Your data science team hopes to find stocks in the same cluster that may also gain value. What type of machine learning are you using?
- generative artificial intelligence
- unsupervised learning
- supervised machine learning
- reinforcement learning
Your large social media company has decided to open source the data and source code for your chatbot. You recently found out that a foreign government has downloaded your code and set up a chatbot to spread propaganda. The chatbot encourages violence against an ethnic minority group. What AI ethics violation might your chatbot release have caused?
- Governments should always be able to use your technology for whatever reason they see fit.
- Your technology is too easy to implement.
- Your technology assisted a human rights violation.
- There is now a danger of competition from a large well-funded government.
You work in the marketing department for a large company, and you’d like to create a weekly opinion letter using ChatGPT to give your take on the top news in your industry. You create a few test posts, and you notice that ChatGPT is getting the dates wrong and is mixing up the CEOs of different companies. Why are you running into this challenge?
- There’s a good chance that these are human errors that can be corrected by fully embracing ChatGPT.
- ChatGPT is getting much better at opinion-based writing, so you should use it now to get ahead of the game.
- ChatGPT needs to scale up so that it has a better understanding of your industry.
- ChatGPT shouldn’t be used for creative writing because it’s still prone to factual errors.
You are having some difficulty dealing with a colleague at work. You asked ChatGPT for advice on how to improve your relationship with the coworker. ChatGPT gives you extremely helpful advice. You find yourself intuitively thanking it for its help. Given your interaction, do you think that ChatGPT is strong or weak AI?
- It is neither because ChatGPT is a generative AI system which falls outside the distinctions in traditional artificial intelligence.
- It is weak AI because ChatGPT doesn’t understand what it’s saying—it’s just gathering information that it found online.
- It is strong AI because ChatGPT gave you genuinely helpful advice that’s the same quality as a human’s.
- It is weak AI because ChatGPT is a good example of artificial general intelligence.
Your company produces science fiction and fantasy graphic novels. One of your top illustrators has developed a style that is very strongly associated with your brand. Your company decides to create a generative AI model to mimic their illustrations. This new model can create new graphics in their style in seconds. Now the company will have better control over their brand and increase productivity. What is one of the main challenges with this approach?
- It will “normalize mediocrity”—the graphics will look the same and lack a creative spark.
- The generative AI model will always need to be further trained, so it doesn’t save any time.
- It is currently illegal in the United States to mimic the style of working illustrators.
- Current generative AI models are not doing a very good job mimicking creative illustrators.
Your online movie-streaming business wants to create an artificial neural network that can recommend new movies based on what customers have already seen. The team creates a series of XY diagrams of different film genres. Then it puts the film rating along the X-axis and the duration that people watch on the Y-axis. It then makes a recommendation based on how close movies are to each other on the chart. What type of machine learning algorithm is the team using?
- Naive Bayes
- K-nearest neighbor
- reinforcement learning
- Q learning
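The scheme in the question is essentially k-nearest neighbor: plot each film as a (rating, watch-duration) point and recommend whatever sits closest. A minimal sketch, with made-up titles and numbers:

```python
import math

# each film: (average rating, average minutes watched) -- illustrative values only
films = {
    "Star Quest":  (8.1, 115),
    "Galaxy Raid": (7.9, 110),
    "Dragon Vale": (6.5, 70),
    "Castle Mist": (6.4, 68),
    "Laugh Track": (5.0, 40),
}

def recommend(title, k=2):
    """Return the k films closest to `title` on the rating/duration chart."""
    point = films[title]
    others = [(name, math.dist(point, films[name]))
              for name in films if name != title]
    others.sort(key=lambda pair: pair[1])
    return [name for name, _ in others[:k]]
```

Someone who watched "Star Quest" would be offered "Galaxy Raid" first, since it is the nearest point on the chart.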
You are a technical manager for a large city courthouse. The judges have asked you to implement a new system that will make criminal sentencing recommendations. As part of your testing, your team has the system make sentencing recommendations for past court convictions. Your team finds that the new system is much more likely to recommend longer sentences for some groups of people. What is the main ethical challenge with implementing this system?
- Impartial judges should make sentencing recommendations. AI systems should not be involved.
- The courthouse obviously does not have the technical expertise to improve the system.
- The city courthouse might not be able to afford the service.
- It magnifies existing biases rather than mitigating them.
What is the difference between generative AI and discriminative AI?
- Generative AI creates content while discriminative AI classifies data.
- Generative AI tends to not work with digital data.
- Discriminative AI creates content while generative AI classifies data.
- Discriminative AI is mostly used in government and university work.
You work for a political organization that does sentiment analysis of social media networks. Politicians look to your service to see how people feel about certain difficult topics. Your organization has developed an artificial neural network that can search social media for topics and classify the comments as strongly agree, neutral, and strongly disagree. What type of machine learning are you using?
- variational autoencoding generative AI
- unsupervised learning binary classification
- reinforcement learning unsupervised clustering
- supervised learning multiclass classification
You are a software developer on a team that’s developing a generative AI nurse for a healthcare company. You’ve trained the system on all your internal data, but to make it more “worldly” you’ve also trained it with social media data. During your testing, you found that sometimes the nurse will make recommendations that aren’t based on science. As a software developer, your AI ethical responsibility is to make sure that the AI nurse _____.
- is well-versed in alternative forms of treatment
- is always using the latest information
- is always focused on generating data, increasing profits, and providing reliable customer service
- is developed in a way that’s transparent, explainable, and accountable
You work for a large credit card company that wants to create an artificial neural network that will help predict when people are going to have trouble paying their bills. So your team gathers all the billing statements for people who had trouble paying their bills. Then you feed this data into an artificial neural network. What is this process called?
- This is unsupervised machine learning.
- This is training your artificial neural network with labeled data.
- This is classifying your data using reinforcement labels.
- This is testing your artificial neural network with unlabeled data.
Your company wants to use generative AI to come up with new pharmaceuticals. This system will analyze all existing chemical compounds and try to develop new compounds based on the success of some of your current pharmaceuticals. This system will require a lot of custom programming and access to your proprietary data sets. What type of generative AI system might work best?
- Combine a series of open-source models and run on a cloud service.
- Use a text to graphics engine such as DALL-E 2.
- Use a generative AI service like ChatGPT.
- Develop your own generative AI model based on your existing data.
You are going to use machine learning to try and do a better job predicting the weather. To start out, you just want to classify two weather events: “rain” or “not rain.” What steps would you take to build this system?
- Find labeled weather data, create a training set from that data, and then set aside more data for the test set.
- Input all the labeled weather data and allow the system to create its own clusters based on what it sees in the data.
- Use a linear regression to show the trend line from “not rain” to “rain.”
- Use reinforcement learning to allow the machine to create rewards for itself based on how well it predicted the weather.
You’re an executive for a software development company. Your company develops only one product. You want to include ethical decision-making into your software development, so you ask a senior developer to also serve as the company’s chief AI ethics officer. What would be one of the challenges with this approach?
- Since you have only one product, your senior developer should always be focusing on software development.
- Software developers are busy people and they need to focus on technical challenges.
- With only one product, there aren’t going to be many ethical AI issues, so you should have your developers focus on developing software.
- A chief AI ethics officer sets the ethical direction for the entire company and shouldn’t just focus on the product.
You recently purchased a new smartwatch. To set up the watch, you had to go to the manufacturer’s website and create a new account. When you create an account, it presents a long license agreement you have to accept to create an account. You were anxious to use your new watch, so you didn’t scroll through the 50-plus pages of the license agreement. What is one of the ethical issues with how the smartwatch manufacturer is operating?
- It should always try to keep the private data on the watch itself.
- It should shorten the license agreement.
- It should have you automatically accept an agreement when you purchase the watch.
What’s one of the key dangers for organizations that over-rely on generative AI systems?
- Generative AI systems might make key decisions about who works for the company.
- Generative AI systems will start to run these organizations with little to no human oversight.
- Your employees might resign if they feel that your system is in danger of replacing their livelihood.
- They will regenerate the same material without any spark of creativity.
You work for a large financial institution that would like to offer immediate approval for loan applications. Your team has identified four predictors about whether someone will be a good loan candidate: income, credit score, employment, and debt. You develop a system that will look at each predictor independently and then come up with an overall score. What machine learning algorithm are you using?
- K-means clustering
- K-nearest neighbor
- Naive Bayes
- Linear regression
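Looking at each predictor independently and multiplying the evidence together is the Naive Bayes idea. A sketch with invented probabilities (the priors and likelihoods below are illustrative, not real lending data):

```python
# prior belief before looking at any predictor (invented numbers)
prior = {"good": 0.7, "bad": 0.3}

# P(predictor is favorable | class), one entry per independent predictor
likelihood = {
    "high_income": {"good": 0.80, "bad": 0.30},
    "high_score":  {"good": 0.90, "bad": 0.20},
    "employed":    {"good": 0.95, "bad": 0.60},
    "low_debt":    {"good": 0.70, "bad": 0.40},
}

def p_good(applicant):
    """applicant: dict of predictor -> bool. Returns P(good | evidence)."""
    score = dict(prior)
    for predictor, favorable in applicant.items():
        for cls in score:
            p = likelihood[predictor][cls]
            score[cls] *= p if favorable else (1 - p)  # "naive" independence
    return score["good"] / (score["good"] + score["bad"])

strong = p_good({"high_income": True, "high_score": True,
                 "employed": True, "low_debt": True})
```

The "naive" assumption is exactly the question's phrase "look at each predictor independently": no interaction between income and debt is modeled, which keeps the scoring to a single multiplication per predictor.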
You work for a company that wants to improve spam filtering for mobile email applications. Your data science team gathers one million messages that have been correctly labeled as spam. You then train an artificial neural network to correctly identify these spam messages. After you train the system, one of the product managers asks why you don’t use those same million messages to test the network for accuracy. How should you respond?
- It is a good idea to use the same messages, but the machine learning system can test its accuracy.
- An artificial neural network does not use test data like other machine learning systems.
- That is an efficient way to train the system without having to find another several million email messages.
- If you use the training data, then you’re not testing how well the system will do in the future to identify spam.
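The point can be demonstrated with a toy experiment: a classifier that simply memorizes its training messages scores perfectly on them yet learns nothing about future mail. The "message length" feature and its statistics below are invented for illustration:

```python
import random

random.seed(0)

def make_messages(n):
    # toy data: spam messages tend to be longer (an invented rule)
    data = []
    for _ in range(n):
        is_spam = random.random() < 0.5
        length = random.gauss(60 if is_spam else 40, 15)
        data.append((length, is_spam))
    return data

train, test = make_messages(500), make_messages(500)

# a "classifier" that just memorizes the training set
memory = dict(train)
def memorizer(length):
    return memory.get(length, False)  # guesses "not spam" on anything unseen

def accuracy(classifier, data):
    return sum(classifier(x) == y for x, y in data) / len(data)

# perfect on the training messages, roughly a coin flip on unseen ones
```

Testing on the same million messages would reward exactly this kind of memorization, which is why a held-out test set is needed.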
How does a reasoning engine work?
- It’s a way for search engines to crawl, index, and rank new content so that it’s always fresh data and able to solve real problems.
- It’s a way for computer scientists to optimize server code in a hosted reason repository.
- It networks together several search engines so that users always have access to good content.
- It draws conclusions, makes decisions, summarizes information, and solves problems based on available data.
You use a new text-to-image generating service to create a beautiful artistic landscape. You submit your artwork for an AI-generated artistic award and win third place. The news media picks up the story and uses your image in online articles without any compensation or attribution. Did they violate your copyright protection?
- Yes, your image and the prompt engineering phrase can be protected by copyright.
- No, they didn’t violate your copyright protection, but they did violate the system’s.
- Yes, but you’ll have to split the copyright proceeds and attribution with the AI system.
- No, because currently AI-generated images can’t be protected by copyright.
When you’re approached with a generative AI ethics challenge, what is one of the first questions you should ask?
- How can you get this product to market quickly?
- What is the highest standard of responsible human behavior?
- What would cause the least harm to the greatest number of people?
- What would be most profitable for your organization?
You work on a team that’s developing a generative AI text-to-image service. Your service will specialize in creating realistic-looking paintings. To train the system, you have to process millions of digitized paintings. The system has learned that paintings almost always have a signature. When you test the system, it creates a fake signature on the painting. The product manager asks you to create an algorithm to remove the signatures. What might be an ethical challenge with this approach?
- It isn’t transparent in how the system collects its data.
- It’s a missed opportunity to show the artistic strength of generative AI.
- It should allow the system to create a fake form of attribution.
- If it signs the painting, then it might have questionable intellectual property rights.
Typically, what are the three layers of an artificial neural network?
- the transaction layer, the generator layer, and the final layer
- the supervised layer, the unsupervised layers, and the reinforcement layer
- the artificial layer, the machine learning layer, and the data layer
- the input layer, many hidden layers, and the output layer
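A forward pass through those three layers can be sketched in a few lines (the layer sizes and random weights are arbitrary; a real network would learn the weights during training):

```python
import numpy as np

rng = np.random.default_rng(42)

# input layer: 4 features -> hidden layer: 3 units -> output layer: 2 units
W_hidden = rng.normal(size=(4, 3))
b_hidden = np.zeros(3)
W_output = rng.normal(size=(3, 2))
b_output = np.zeros(2)

def forward(x):
    hidden = np.maximum(0, x @ W_hidden + b_hidden)  # ReLU activation
    return hidden @ W_output + b_output              # output layer

result = forward(np.array([1.0, 0.5, -0.3, 2.0]))
```

Data enters at the input layer, is transformed through one or more hidden layers, and leaves as a prediction at the output layer.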
You work for a company that produces video games. One of the challenges is creating non-player characters (NPCs) that are controlled by the game, but still make strategic decisions. Your team decides to use machine learning, and each time an NPC does better than a player it gets a small reward. Now the machine learning algorithms are coming up with interesting new ways to play the game. What type of learning is this?
- unsupervised machine learning
- self-supervised machine learning
- reinforcement learning
- generative AI
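The reward-driven loop in the question can be sketched as a tiny bandit problem: the NPC tries two strategies, collects a reward whenever it beats the player, and gradually favors whichever strategy earns more. The win rates are invented for the example:

```python
import random

random.seed(1)

WIN_RATE = [0.3, 0.7]   # hypothetical chance each strategy beats the player
value = [0.0, 0.0]      # the NPC's learned estimate of each strategy's reward
plays = [0, 0]

for _ in range(2000):
    # epsilon-greedy: usually exploit the best-known strategy, sometimes explore
    if random.random() < 0.1:
        strategy = random.randrange(2)
    else:
        strategy = 0 if value[0] >= value[1] else 1
    reward = 1.0 if random.random() < WIN_RATE[strategy] else 0.0
    plays[strategy] += 1
    value[strategy] += (reward - value[strategy]) / plays[strategy]  # running mean

# the NPC now prefers the stronger strategy
```

No one labeled the strategies good or bad in advance; the preference emerged purely from the reward signal, which is the hallmark of reinforcement learning.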
Your company wants to create a smartphone application that identifies plants using the phone’s camera. The company purchases millions of digital images of plants labeled with the species names. You use this initial batch of images to train your artificial neural network. What type of machine learning are you using for your network?
- self-supervised learning
- reinforcement learning
- supervised learning
- unsupervised machine learning
You manage a radiology department in a large hospital. Your hospital has millions of computed tomography (CT) images. You want to create a system where once someone gets a CT scan, the system will immediately check for anomalies. That way it can be sent for review by a senior radiologist. Which generative AI system might work best for this approach?
- an adversarial autoencoder (AAE)
- a flexible learning encoding X-ray (FLEX)
- a generative adversarial network (GAN)
- a variational autoencoder (VAE)
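The anomaly-check idea behind the autoencoder answer: compress each scan into a small representation, reconstruct it, and flag scans that reconstruct poorly. As a linear stand-in, the sketch below uses one principal component as the "bottleneck"; the 3-D points are synthetic, not CT imagery:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "normal" scans: 3-D points lying close to a single line
t = rng.normal(size=(200, 1))
normal_scans = t @ np.array([[1.0, 2.0, 0.5]]) + rng.normal(scale=0.05, size=(200, 3))

mean = normal_scans.mean(axis=0)
centered = normal_scans - mean

# the first principal component plays the role of a one-unit bottleneck
_, _, vt = np.linalg.svd(centered, full_matrices=False)
component = vt[0]

def reconstruction_error(scan):
    """Distance between a scan and its compressed-then-reconstructed version."""
    x = scan - mean
    reconstructed = (x @ component) * component
    return float(np.linalg.norm(x - reconstructed))

typical = reconstruction_error(normal_scans[0])             # small: fits the pattern
anomaly = reconstruction_error(np.array([5.0, -5.0, 5.0]))  # large: flag for review
```

Scans whose reconstruction error exceeds what the model saw during training are the ones routed to the senior radiologist.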