Making sure that artificial intelligence (AI) models are safe, effective, and fair is not only a moral imperative but now also a legal one. The ACA Section 1557 Final Rule, which went into effect in June 2024, prohibits discrimination by medical AI algorithms on the basis of race, color, national origin, gender, age, or disability. The HTI-1 Final Rule, finalized earlier this year, requires transparency in medical decision-making, including for algorithms. Whether you build medical AI models or adopt and deploy them, you are now required to test them comprehensively in advance and show your work.
This session with Microsoft and John Snow Labs presents the open-source LangTest library and the no-code Generative AI Lab as a solution for automatically generating and running more than 100 test types covering different aspects of Responsible AI. We’ll cover healthcare-specific examples of typical biases exhibited by current large language models, the test types available to catch and mitigate them, and current best practices for running, versioning, and reusing test suites. This session is intended for anyone looking to deploy Generative AI solutions in real-world healthcare settings.
Presented by David Talby, CTO, John Snow Labs