Q6. Which of the following is a milestone in the Ethical AI Practice Maturity Model?
A. Accurate & Accountable
B. Managed & Sustainable
C. Responsible & Inclusive
Answer
B
Salesforce's Ethical AI Practice Maturity Model has four milestones: Ad Hoc, Organized & Repeatable, Managed & Sustainable, and Optimized & Innovative.
Q7. Which of the following is one of the perceived risks of real-time personalization in marketing?
A. Automated spam emails
B. Encouraging unhealthy habits
C. Data being collected, shared, or used in unanticipated ways
Answer
C
Biggest perceived risks of real-time personalization in marketing:
- Security events, like data breaches
- Data being collected, shared, or used in unanticipated ways
- Personalizing interactions that feel invasive or unwanted to consumers
- Inadvertent bias introduced by relying on demographic attributes for interactions instead of behavioral and engagement data
Q8. Which of the following is a factor that can determine the quality of data used for training AI models?
A. Data Compatibility
B. Duplicate Records
C. Data Volume
Answer
B
Factors that determine data quality (a quick sketch of checks for these follows the list):
- Missing Records
- Duplicate Records
- No Data Standards
- Incomplete Records
- Stale Data
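To make these factors concrete, here is a minimal Python sketch (my own illustration, not Salesforce tooling) that counts each issue in a contact table with pandas; the file name and columns such as email and last_modified are hypothetical.

```python
import pandas as pd

# Hypothetical contact table; column names are assumptions for illustration.
df = pd.read_csv("contacts.csv", parse_dates=["last_modified"])

report = {
    # Duplicate records: the same email appearing more than once
    "duplicate_records": int(df.duplicated(subset=["email"]).sum()),
    # Incomplete records: rows missing any key business field
    "incomplete_records": int(df[["email", "phone", "country"]].isna().any(axis=1).sum()),
    # Stale data: rows not updated in over a year
    "stale_records": int((pd.Timestamp.now() - df["last_modified"] > pd.Timedelta(days=365)).sum()),
    # No data standards: country values outside an agreed code list
    "non_standard_values": int((~df["country"].isin(["US", "IN", "GB"])).sum()),
}
# Missing records (rows absent entirely) can only be found by comparing
# against a source-of-truth count, so they are not covered here.
print(report)
```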
Q9. Which of the following is a Data Quality Dimension?
A. Naming Convention
B. Completeness
C. Formatting
Answer
B
Data Quality Dimensions (a small scoring sketch follows this list)
- Age – When was each record last updated?
- Completeness – Are all key business fields on records filled in?
- Accuracy – Has it been matched against a trusted source?
- Consistency – Is the same formatting, spelling, and language used across records?
- Duplication – Are records and data duplicated in your org?
- Usage – Is your data being harnessed in reports, dashboards, and apps?
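These dimensions can be turned into simple metrics. Below is a hedged sketch assuming a pandas DataFrame with hypothetical fields (name, industry, billing_country, last_modified); it is an illustration, not a standard scoring API.

```python
import pandas as pd

df = pd.read_csv("accounts.csv", parse_dates=["last_modified"])
key_fields = ["name", "industry", "billing_country"]  # hypothetical key business fields

# Completeness: share of records with every key field filled in
completeness = df[key_fields].notna().all(axis=1).mean() * 100
# Duplication: share of records repeating an earlier name + country combination
duplication = df.duplicated(subset=["name", "billing_country"]).mean() * 100
# Age: how long ago the typical record was last updated
median_age_days = (pd.Timestamp.now() - df["last_modified"]).dt.days.median()

print(f"Completeness: {completeness:.1f}%")
print(f"Duplication:  {duplication:.1f}%")
print(f"Median age:   {median_age_days:.0f} days")
```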
Q10. What is AI Hallucination?
A. A confident response by an AI that does not seem to be justified by its training data
B. AI systems begin to perceive and interact with fictional and fantastical entities in their virtual worlds
C. AI systems start exhibiting behaviors reminiscent of characters from classic literature
Answer
A
Hallucinations: Predictions from generative AI that diverge from an expected, fact-grounded response are known as hallucinations. They happen for a few reasons, such as incomplete or biased training data, or a model that was not designed well.
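As a toy illustration of why grounding matters (my own sketch, not part of the certification material), the function below flags an answer as a possible hallucination when its vocabulary barely overlaps the trusted source text it should be grounded in; real systems use far more robust checks.

```python
def possibly_hallucinated(answer: str, source_text: str) -> bool:
    """Flag answers whose key terms barely overlap the trusted source."""
    answer_terms = {w.lower().strip(".,") for w in answer.split() if len(w) > 3}
    source_terms = {w.lower().strip(".,") for w in source_text.split()}
    # If fewer than half of the answer's terms appear in the source, be suspicious.
    overlap = answer_terms & source_terms
    return len(overlap) / max(1, len(answer_terms)) < 0.5

source = "Data quality dimensions include age, completeness, accuracy, consistency, duplication, and usage."
print(possibly_hallucinated("Usage and completeness are data quality dimensions.", source))  # False
print(possibly_hallucinated("The model was trained on lunar soil samples.", source))         # True
```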
Thank you, Dinesh Yadav, for the questions and for helping me prepare for the AI Associate certification. I really appreciate your effort in helping the community.