Amazon Web Services (AWS) launched a new service at its ongoing re:Invent conference that is aimed at helping enterprises reduce instances of artificial intelligence (AI) hallucination. Launched on Monday, the Automated Reasoning checks tool is available in preview and can be found within Amazon Bedrock Guardrails. The company claimed that the tool mathematically validates the accuracy of responses generated by large language models (LLMs) and prevents factual errors arising from hallucinations. It is similar to the Grounding with Google Search feature, which is available in both the Gemini API and Google AI Studio.
AWS Automated Reasoning Checks
AI models can often generate responses that are incorrect, misleading, or fictional. This is known as AI hallucination, and the issue undermines the credibility of AI models, especially when they are used in an enterprise setting. While companies can somewhat mitigate the issue by training the AI system on high-quality organisational data, flaws in the pre-training data and the model architecture can still cause the AI to hallucinate.
AWS detailed its solution to AI hallucination, explaining that the tool uses “mathematical, logic-based algorithmic verification and reasoning processes” to verify the information generated by LLMs.
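For illustration only, the snippet below sketches what a logic-based check on a model's answer can look like in general. It is not AWS's implementation: it uses the open-source Z3 solver via Python, and the rule, question fact, and model claim are invented placeholders rather than anything from the announcement.

```python
# Toy illustration of a logic-based consistency check; NOT AWS's implementation.
# Uses the open-source Z3 solver (pip install z3-solver); all rules and facts
# below are invented placeholders.
from z3 import Bool, Int, Implies, Solver, unsat

tenure = Int("tenure_years")   # fact taken from the user's question
eligible = Bool("eligible")    # conclusion asserted in the model's answer

s = Solver()
s.add(Implies(tenure >= 3, eligible))           # policy: 3+ years of tenure => eligible
s.add(Implies(tenure < 3, eligible == False))   # policy: under 3 years => not eligible
s.add(tenure == 2)                              # fact stated in the question
s.add(eligible == True)                         # claim made by the model's response

# If the policy, the facts and the claim cannot all hold at once,
# the response is flagged as inconsistent with the documented rules.
print("response flagged" if s.check() == unsat else "response consistent")
```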
The process is fairly straightforward. Users have to upload relevant documents that describe the organisation's rules to the Amazon Bedrock console. Bedrock automatically analyses these documents and creates an initial Automated Reasoning policy, which converts the natural-language text into a mathematical format.
Once that is done, users can move to the Automated Reasoning menu under the Safeguards section. There, a new policy can be created, and users can add existing documents that contain the information the AI should learn. Users can also manually set processing parameters and the policy's intent. Additionally, sample questions and answers can be added to help the AI understand a typical interaction.
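Once a guardrail with an Automated Reasoning policy has been set up in the console as described above, it can be referenced when calling a model. The sketch below uses Bedrock's Converse API via boto3; the guardrail ID, model ID, prompt and version are placeholders, not values from the announcement.

```python
# Minimal sketch: invoking a Bedrock model with a guardrail attached.
# Assumes a guardrail (with an Automated Reasoning policy) was already created
# in the console; the identifiers below are placeholders.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-west-2")  # preview region

response = runtime.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # any Converse-compatible model
    messages=[{
        "role": "user",
        "content": [{"text": "How many days of parental leave am I entitled to?"}],
    }],
    guardrailConfig={
        "guardrailIdentifier": "YOUR_GUARDRAIL_ID",
        "guardrailVersion": "DRAFT",
        "trace": "enabled",  # include the guardrail's assessment in the response
    },
)

print(response["output"]["message"]["content"][0]["text"])
```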
Once all of this is done, the AI is ready to be deployed, and the Automated Reasoning checks tool will automatically flag any incorrect responses the chatbot provides. Currently, the tool is available in preview only in the US West (Oregon) AWS region. The company plans to roll it out to other regions soon.
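Generated responses can also be checked after the fact. The sketch below uses Bedrock's ApplyGuardrail API, which evaluates a piece of text against an existing guardrail without invoking a model; again, the identifiers and text are placeholders and assume a guardrail configured as described above.

```python
# Sketch: validating an already-generated answer against an existing guardrail.
# Identifiers and text are placeholders.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

result = runtime.apply_guardrail(
    guardrailIdentifier="YOUR_GUARDRAIL_ID",
    guardrailVersion="1",
    source="OUTPUT",  # treat the text as model output rather than user input
    content=[{"text": {"text": "Employees receive 20 weeks of parental leave."}}],
)

# "GUARDRAIL_INTERVENED" means the guardrail blocked or modified the text;
# "NONE" means it passed the configured checks.
print(result["action"])
```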