Enabling AI to explain its predictions in plain language
Researchers at MIT have developed a system that uses large language models (LLMs) to convert machine-learning explanations into narrative text that can be easily understood by users.
The system consists of two components: NARRATOR and GRADER. NARRATOR uses an LLM to turn SHAP explanations, which assign each input feature a value indicating how much it pushed the model's prediction up or down, into narrative descriptions. GRADER then rates each narrative on four metrics: conciseness, accuracy, completeness, and fluency.
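The two-stage design can be sketched in a few lines of Python. This is an illustrative outline, not the authors' actual implementation: the function names (`build_narrator_prompt`, `build_grader_prompt`) and prompt wording are assumptions, and the LLM call itself is left out.

```python
# Illustrative sketch of a NARRATOR -> GRADER pipeline.
# The real system sends these prompts to an LLM; here we only build them.

GRADER_METRICS = ("conciseness", "accuracy", "completeness", "fluency")

def build_narrator_prompt(attributions, prediction):
    """Format SHAP-style feature attributions into a NARRATOR prompt.

    `attributions` maps feature names to signed contribution values,
    as produced by a SHAP explainer.
    """
    lines = [f"- {name}: {value:+.2f}" for name, value in attributions.items()]
    return (
        "Turn these feature attributions into a short plain-language "
        f"explanation of the prediction {prediction!r}:\n" + "\n".join(lines)
    )

def build_grader_prompt(narrative, metric):
    """Build a GRADER prompt asking an LLM to score one metric."""
    return (
        f"Rate the following explanation for {metric} on a scale of 1-4, "
        f"where 4 is best:\n{narrative}"
    )

# Hypothetical attributions for a house-price prediction.
attribs = {"house_size": 0.42, "year_built": -0.17}
narrator_prompt = build_narrator_prompt(attribs, "price: $410k")
grader_prompts = [build_grader_prompt("...", m) for m in GRADER_METRICS]
```

In practice, each grader prompt would be sent to the LLM separately so every metric gets an independent score.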
Users can customize both NARRATOR and GRADER by supplying their own example narrative explanations, letting researchers tailor the system's style and level of detail to a specific application or audience.
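Customization via examples amounts to few-shot prompting: the user's sample narratives are prepended to the prompt so the LLM imitates their style. A minimal sketch, assuming a hypothetical `add_examples` helper:

```python
# Sketch of few-shot customization: user-written example explanations
# are prepended so the LLM mimics their tone and length.
# Function and variable names are illustrative, not the system's real API.

def add_examples(base_prompt, example_narratives):
    """Prepend user-provided example explanations to a prompt."""
    shots = "\n\n".join(
        f"Example explanation:\n{ex}" for ex in example_narratives
    )
    return f"{shots}\n\n{base_prompt}"

examples = ["The home's large size raised the predicted price the most."]
prompt = add_examples("Explain this prediction in the same style: ...", examples)
```

The same mechanism works for GRADER: providing example narratives with known scores anchors the rating scale.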
Looking ahead, the researchers hope to extend the approach so that users can ask a model follow-up questions about its predictions in real-world settings.