Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety

Dr. Manas Gaur, Department of Computer Science and Electrical Engineering

Building trust in AI requires explainability and safety, which in turn demand that a model perform consistently and reliably. Achieving this calls for a combination of statistical and symbolic AI methods, rather than reliance on either alone, to analyze the relevant data and knowledge in an AI application. We advocate, and aim to illustrate, that the NeuroSymbolic AI approach is better suited to establishing AI as a trusted system. For instance, despite having safety guardrails, ChatGPT can still generate unsafe responses, illustrating the limits of purely statistical approaches.
At the Knowledge-infused AI and Inference Lab at UMBC, students will work on constructing the CREST framework. The framework will demonstrate how consistency, reliability, user-level explainability, and safety can be achieved in NeuroSymbolic methods that use data and knowledge to meet the requirements of critical applications such as health and well-being.