Online registrations are invited for the Quality and Safety for LLM Applications by DeepLearning AI and WHYLABS. Check the details below!

About DeepLearning AI

DeepLearning AI is an education technology company that is empowering the global workforce to build an AI-powered future through world-class education, hands-on training, and a collaborative community. DeepLearning AI has created high-quality AI programs on Coursera that have gained an extensive global following. By providing a platform for education and fostering a tight-knit community, DeepLearning AI has become the pathway for anyone looking to build an AI career.

About WHYLABS AI

As teams across industries adopt AI, WhyLabs enables them to operate with certainty by providing model monitoring, preventing costly model failures, and facilitating cross-functional collaboration. Incubated at the Allen Institute for AI, WhyLabs is a privately held, venture-funded company based in Seattle.

Video Source

The company was founded by Amazon Machine Learning alums Alessya Visnjic, Sam Gracie, and Andy Dang, together with Maria Karaivanova, former Cloudflare executive and early-stage investor. Use WhyLabs to supercharge your AI teams!

Course Details

  • Monitor and enhance security measures over time to safeguard your LLM applications.
  • Detect and prevent critical security threats like hallucinations, jailbreaks, and data leakage.
  • Explore real-world scenarios to prepare for potential risks and vulnerabilities.

What you’ll learn in this course?

It’s always crucial to address and monitor safety and quality concerns in your applications. Building LLM applications poses special challenges.

In this course, you’ll explore new metrics and best practices to monitor your LLM systems and ensure safety and quality. You’ll learn how to:

  • Identify hallucinations with methods like SelfCheckGPT
  • Detect jailbreaks (prompts that attempt to manipulate LLM responses) using sentiment analysis and implicit toxicity detection models.
  • Identify data leakage using entity recognition and vector similarity analysis.
  • Build your own monitoring system to evaluate app safety and security over time.

Upon completing the course, you’ll have the ability to identify common security concerns in LLM-based applications and be able to customize your safety and security evaluation tools to the LLM that you’re using for your application.

Eligibility Criteria

Anyone with basic Python knowledge is interested in mitigating issues like hallucinations, prompt injections, and toxic outputs.

How to Join?

Interested candidates can directly apply through this link.

Fee

Free for a limited time

Instructor

Bernease Herman, CEO, Co-founder, Lamini

Click here to view the official notification Quality and Safety for LLM Applications by DeepLearning AI and WHYLABS AI.

Image Source