Amazon AWS Introduces Hybrid AI to Combat AI Hallucinations

Edited by: Ольга Паничкина

Amazon AWS has launched a new capability aimed at reducing hallucinations in generative AI and large language models (LLMs). The approach combines symbolic and sub-symbolic AI, a pairing known as hybrid AI or neuro-symbolic AI, to improve the accuracy of AI-generated responses.

The hybrid model allows businesses to input specific rules, enabling AI to generate responses that align with company policies. For instance, if a customer requests a refund, the AI can check predefined rules to determine eligibility, minimizing the risk of incorrect responses.
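The pattern can be illustrated with a minimal sketch: a deterministic rule layer evaluates refund eligibility, and the model's draft answer is accepted only if it agrees with that verdict. The rule values and function names below are assumptions made for illustration, not part of the AWS feature or its API.

```python
from datetime import date, timedelta

# Hypothetical refund policy expressed as explicit, symbolic rules
# (the 30-day window is an assumed example, not an AWS default).
REFUND_WINDOW_DAYS = 30

def refund_allowed(purchase_date: date, item_opened: bool) -> bool:
    """Apply human-auditable rules instead of trusting the model's guess."""
    within_window = date.today() - purchase_date <= timedelta(days=REFUND_WINDOW_DAYS)
    return within_window and not item_opened

def validate_llm_answer(llm_says_refund: bool, purchase_date: date, item_opened: bool) -> str:
    """Cross-check the generated answer against the symbolic rules before replying."""
    allowed = refund_allowed(purchase_date, item_opened)
    if llm_says_refund == allowed:
        return "answer consistent with policy"
    return "potential hallucination: answer contradicts refund policy"

# Example: the model claims a refund is possible for a 45-day-old, opened item.
print(validate_llm_answer(True, date.today() - timedelta(days=45), item_opened=True))
```

The key design point is that the final eligibility decision comes from the rule layer, so an incorrect generated claim is caught rather than passed on to the customer.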

Additionally, businesses can utilize generative AI to derive these symbolic rules from existing policy documents, streamlining the process. This capability not only enhances the reliability of AI responses but also helps in identifying and correcting AI hallucinations.
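One way this step could look in practice is sketched below: a model is prompted to convert policy text into structured rules, which are then evaluated deterministically. The generate helper is a self-contained stub standing in for any LLM client, and the JSON rule format is an assumption for illustration, not the actual AWS interface.

```python
import json

POLICY_TEXT = """
Refunds are accepted within 30 days of purchase.
Opened items are not eligible for a refund.
"""

def generate(prompt: str) -> str:
    # Stub for an LLM call; returns a canned response so the sketch stays runnable.
    return json.dumps([
        {"field": "days_since_purchase", "operator": "<=", "value": 30},
        {"field": "item_opened", "operator": "==", "value": False},
    ])

prompt = (
    "Convert the following refund policy into JSON rules with keys "
    f"'field', 'operator', 'value':\n{POLICY_TEXT}"
)
rules = json.loads(generate(prompt))

def check(record: dict) -> bool:
    """Evaluate the extracted symbolic rules against a concrete case."""
    ops = {"<=": lambda a, b: a <= b, "==": lambda a, b: a == b}
    return all(ops[r["operator"]](record[r["field"]], r["value"]) for r in rules)

print(check({"days_since_purchase": 45, "item_opened": True}))  # False -> no refund
```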

Amazon's new feature, dubbed 'Automated Reasoning,' is designed to prevent factual errors by checking generated responses against explicit, logic-based rules. The launch reflects a growing trend toward hybrid AI systems that combine the strengths of symbolic and sub-symbolic approaches.
