Automated Responsible AI Testing of Medical Language Models

Tuesday, September 10, 2024 1 p.m. to 2 p.m.

Ensuring that AI models are safe, effective, and fair is not only a moral imperative but now also a legal one. The ACA Section 1557 Final Rule, which went into effect in 2024, prohibits discrimination in medical AI algorithms based on race, color, national origin, sex, age, or disability. The HTI-1 Final Rule, also finalized this year, requires transparency in medical decision making, including for algorithms. Whether you build medical AI models or adopt and deploy them, you are now required to comprehensively test them in advance and to show your work.

This session with Microsoft and John Snow Labs presents the open-source LangTest library and the no-code Generative AI Lab as a solution for automatically generating and running more than 100 test types covering different aspects of Responsible AI. We'll cover healthcare-specific examples of typical biases exhibited by current large language models, the test types available to catch and mitigate them, and current best practices, including running, versioning, and reusing test suites. This session is intended for anyone looking to deploy Generative AI solutions in real-world healthcare settings.
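As a rough sketch of how such a reusable test suite looks in practice, LangTest lets you declare test types and pass thresholds in a configuration file that can be versioned alongside your model. The specific test names and thresholds below are illustrative assumptions, not material from this session; consult the LangTest documentation for the exact schema.

```yaml
# Illustrative LangTest-style test-suite configuration (hypothetical values).
# Each test perturbs model inputs or probes outputs, and the suite fails
# if the model's pass rate on a test falls below min_pass_rate.
tests:
  defaults:
    min_pass_rate: 0.75          # default fraction of cases a model must pass
  robustness:
    uppercase:                   # re-run cases with input text uppercased
      min_pass_rate: 0.70
    add_typo:                    # re-run cases with typos injected
      min_pass_rate: 0.70
  bias:
    replace_to_female_pronouns:  # swap pronouns to check for gender bias
      min_pass_rate: 0.80
```

A config like this is typically loaded into a test harness that generates the test cases, runs them against the model, and produces a pass/fail report, which makes the suite easy to rerun after each model update.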

Presented by David Talby, CTO, John Snow Labs


Event Registration

Register for this event.

Contact:

Research IT Events ResearchITEvents@ucf.edu

Calendar:

Anthropology Department Calendar

Category:

Workshop/Conference

Tags:

AI research computing