Practice Test 5
25 Questions

Case study (Artificial Intelligence in Healthcare, ~505 words): A regional hospital system deploys an AI tool called ScanAid to help radiologists review chest X-rays and CT scans. ScanAid uses machine learning, meaning it learns patterns from large sets of labeled medical images to estimate the likelihood of conditions such as pneumonia or a collapsed lung. When a new scan arrives, the system highlights areas it deems suspicious and assigns a risk score. The hospital integrates ScanAid into its workflow so that high-risk cases move to the front of the review queue.
The benefits appear quickly. In the emergency department, clinicians receive faster preliminary flags, which help prioritize patients who may need urgent treatment. Administrators report shorter average turnaround times for imaging results, and radiologists say the tool reduces “missed findings” on busy nights by acting as a second set of eyes. This aligns with real-world momentum: the U.S. Food and Drug Administration has authorized hundreds of AI-enabled medical devices, many aimed at imaging, reflecting a broad belief that computing can improve efficiency and decision support.
Yet the hospital also confronts serious risks. ScanAid requires large quantities of patient data for training and ongoing monitoring. Even if names are removed, data can sometimes be re-identified when combined with other information, raising privacy concerns. The system’s vendor stores model updates in the cloud, and a security audit warns that misconfigured access controls could expose sensitive scans. In addition, clinicians discover performance gaps: the tool flags fewer abnormalities in patients from a smaller rural clinic that uses older imaging machines. A quality team suspects the training data underrepresents those machine types, a form of algorithmic bias (systematic error that disadvantages certain groups or settings). Finally, some physicians worry about “automation bias,” where staff may over-trust a computer’s score and overlook contradictory clinical evidence.
The hospital responds by requiring human sign-off on all diagnoses, conducting bias tests across equipment types, and encrypting data in transit and at rest. Still, leaders acknowledge that the same computing innovation that boosts speed and consistency also introduces new ethical dilemmas around data security, fairness, and accountability.
Based on the case study, what are two benefits and two risks associated with AI diagnostics as described in the passage?