{"id":49767,"date":"2026-04-30T11:58:59","date_gmt":"2026-04-30T11:58:59","guid":{"rendered":"https:\/\/www.cmarix.com\/blog\/?p=49767"},"modified":"2026-04-30T11:59:27","modified_gmt":"2026-04-30T11:59:27","slug":"ai-chatbot-hallucinations","status":"publish","type":"post","link":"https:\/\/www.cmarix.com\/blog\/ai-chatbot-hallucinations\/","title":{"rendered":"AI Chatbot Hallucinations: Why LLMs Fabricate Answers and How to Fix Them Architecturally"},"content":{"rendered":"\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>Quick Overview:<\/strong> Chatbot AI never lies. Instead, it uses probabilities to generate its answers. In the absence of validation procedures such as RAG, validation checks, and output processes, chatbots may generate answers that sound confident but are untrue. This detailed guide on AI chatbot hallucinations explains more about why your AI chatbot is always lying or giving incorrect results.<\/p>\n<\/blockquote>\n\n\n\n<p>You created your chatbot. It was supposed to assist customers, speed up internal information retrieval, or facilitate an entire process. And all of a sudden, it started making things up.<\/p>\n\n\n\n<p>Wrong product specs. Fabricated legal citations. Non-existent policy clauses stated with complete authority, and 404 page links that never existed. This phenomenon has a name: AI hallucination.<\/p>\n\n\n\n<p>And according to<a href=\"https:\/\/hai.stanford.edu\/news\/hallucinating-law-legal-mistakes-large-language-models-are-pervasive\" target=\"_blank\" rel=\"noopener\"> research from Stanford HAI,<\/a> hallucinations are not edge cases; they are widespread. In legal-specific evaluations, large language models produced incorrect or fabricated answers in 69% to 88% of queries, and at least 75% of the time when identifying core court rulings.<\/p>\n\n\n\n<p>This is not a minor bug. It is a structural consequence of how large language models are built. 
Fixing it requires understanding the architectural roots of the problem, not just sprinkling \u201cbe accurate\u201d into your system prompt.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"500\" src=\"https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/76-of-enterprise-AI-pilots-1024x500.webp\" alt=\"enterprise AI pilots\" class=\"wp-image-49778\" srcset=\"https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/76-of-enterprise-AI-pilots-1024x500.webp 1024w, https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/76-of-enterprise-AI-pilots-400x195.webp 400w, https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/76-of-enterprise-AI-pilots-768x375.webp 768w, https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/76-of-enterprise-AI-pilots.webp 1500w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>The problem isn\u2019t model intelligence; it\u2019s the probabilistic nature of language generation: LLMs predict the next word based on patterns in training data, not verified facts, so without grounding mechanisms, they can confidently produce content that sounds plausible but is factually wrong.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What Is an AI \u201cHallucination\u201d?<\/h2>\n\n\n\n<p>A hallucination is output in which the language model presents falsehoods or entirely fabricated content, without verifiable sources, yet with the same level of certainty as truthful text. The model does not question itself; it makes things up as it goes along. 
According to the <a href=\"https:\/\/internationalaisafetyreport.org\/publication\/international-ai-safety-report-2026\" target=\"_blank\" rel=\"noopener\">International AI Safety Report 2026<\/a>, such problems include various types of errors and fabrications.<\/p>\n\n\n\n<p>This matters acutely in industries where a wrong answer carries real-world consequences, such as healthcare, legal tech, financial services, and compliance. Understanding the <a href=\"https:\/\/www.cmarix.com\/blog\/ai-security-risks-business-guide\/\">security risks of public AI models<\/a> and of deploying hallucination-prone systems in sensitive contexts is the first step before any architecture decision.<\/p>\n\n\n\n<p>\u201cA language model doesn\u2019t \u2018know\u2019 things the way humans do. It learns statistical patterns across tokens. When asked something outside its training distribution, it doesn\u2019t stop, it extrapolates, fluently.\u201d<\/p>\n\n\n\n<p>The ability to talk convincingly, which characterizes contemporary LLMs, is actually one of their most problematic features. Such systems have become very skilled at producing grammatically correct statements, even when those statements are incorrect. Users trust language that sounds right.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The 5 Root Causes Behind AI Chatbot Hallucinations (It\u2019s Not Stupidity)<\/h2>\n\n\n\n<p>To fix hallucinations, you have to understand why they happen. There is no single cause; there are five distinct architectural and epistemological failure modes, and most enterprise deployments suffer from at least three simultaneously. 
Comprehensive analyses, such as this <a href=\"https:\/\/arxiv.org\/html\/2503.21411v1\" rel=\"nofollow noopener\" target=\"_blank\">arxiv study on hallucination mechanisms<\/a>, break them down further into training gaps and inference flaws.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">01: Probabilistic Text Generation<\/h3>\n\n\n\n<p>The model predicts the most probable next token, not necessarily the correct one. For a large language model (LLM), accuracy and coherence are independent goals, and the model optimizes for the latter.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">02: Training Data Cutoff<\/h3>\n\n\n\n<p>Models have knowledge cutoffs. Any question about events, products, or regulations after that date forces the model to extrapolate, thereby dramatically increasing the risk of hallucinations.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">03: Context Window Limitations<\/h3>\n\n\n\n<p>A large context window does not guarantee that the LLM will use every fact within it; facts buried in the middle of a long context are frequently ignored.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">04: No Native Uncertainty Modeling<\/h3>\n\n\n\n<p>Base LLMs have no built-in uncertainty modeling. Models are trained to generate coherent responses to prompts without indicating uncertainty.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">05: Instruction-Following Pressure<\/h3>\n\n\n\n<p>RLHF fine-tuning trains models to be more complete and helpful. This creates a bias toward giving an answer, any answer, rather than refusing or hedging. Helpfulness and truthfulness often conflict, with helpfulness given priority.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Compounding Problem in Enterprise Deployments<\/h2>\n\n\n\n<p>In a generic consumer chatbot, hallucinations are annoying. 
In an enterprise system, such as an <a href=\"https:\/\/www.cmarix.com\/blog\/ai-telemedicine-chatbot-development\/\">AI telemedicine chatbot<\/a>, a compliance advisor, or a customer-facing product assistant, they are catastrophic. The consequences become more serious because the AI model is being used as an authoritative interface to access data it was never trained on. According to a <a href=\"https:\/\/www.researchgate.net\/publication\/386148806_AI_Hallucinations_Types_Causes_Impacts_and_Strategies_for_Detection_and_Prevention\" target=\"_blank\" rel=\"noopener\">ResearchGate study on AI hallucinations<\/a>, these hallucinations affect people in multiple ways and underline the urgency of setting up layers of defense against them.<\/p>\n\n\n\n<p>And this is the core issue: developers see the LLM as a source of knowledge, while, essentially, it is an instrument for completing patterns. It was trained on the Internet. Your unique SOPs, your up-to-date drug formularies, your real-time inventory, none of that was on the web.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">AI Chatbot Architecture Fixes That Actually Work<\/h2>\n\n\n\n<p>Prompt engineering alone will not solve hallucination. Real fixes require structural changes at the system design level.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix 1: Retrieval-Augmented Generation (RAG)<\/h3>\n\n\n\n<p>RAG is currently the most robust production-grade solution for reducing hallucination in domain-specific deployments. 
Instead of asking the model to recall facts from its weights, you retrieve relevant documents at inference time and inject them as grounding material.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>RAG Pipeline Flow:<br><\/strong><br>User Query \u2192 Embedding Model \u2192 Vector DB Lookup \u2192 Top-K Chunks \u2192 LLM + Grounded Context \u2192 Cited Response<\/p>\n<\/blockquote>\n\n\n\n<p>A good RAG system must not only retrieve documents but also include citations, confidence scores, and fallbacks for when retrieval quality is not up to standard. When designing a RAG pipeline, it is important to understand the differences between vector databases and traditional databases in AI applications.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix 2: Confidence Thresholds and Refusal Policies<\/h3>\n\n\n\n<p>Any production AI system should have explicit \u201cI don\u2019t know\u201d architecture:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Minimum retrieval relevance scores before injecting context (e.g., cosine similarity \u2265 0.82)<\/li>\n\n\n\n<li>Classifier layers that detect out-of-domain queries before they reach the LLM<\/li>\n\n\n\n<li>Explicit fallback responses when confidence falls below the threshold<\/li>\n\n\n\n<li>Post-generation fact-checking using a second validation model or structured data parsing<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Fix 3: Structured Output Enforcement<\/h3>\n\n\n\n<p>Hallucination creeps in easily when responses are free-form natural language. Using a structured response format, such as a JSON schema, keeps the space for generation minimal. 
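<\/p>\n\n\n\n<p>The threshold-and-refusal policy from Fix 2, returning the kind of structured object Fix 3 calls for, can be sketched in a few lines. This is a minimal illustration, not a production implementation: the 0.82 cutoff echoes the list above, and retrieve and generate are hypothetical stand-ins for your retrieval layer and LLM call.<\/p>

```python
from dataclasses import dataclass, field

MIN_SIMILARITY = 0.82  # minimum cosine similarity before retrieved context is trusted

@dataclass
class GroundedAnswer:
    text: str
    sources: list = field(default_factory=list)  # citations for the chunks actually used
    confidence: float = 0.0

REFUSAL = GroundedAnswer(text='I do not have verified information to answer that.')

def answer_or_refuse(query, retrieve, generate):
    '''Ground the answer in retrieved chunks, or refuse when retrieval is weak.'''
    chunks = retrieve(query)  # expected: list of (chunk_text, similarity) pairs
    strong = [(text, score) for text, score in chunks if score >= MIN_SIMILARITY]
    if not strong:
        return REFUSAL  # structured fallback instead of letting the model extrapolate
    context = ' '.join(text for text, _ in strong)
    return GroundedAnswer(
        text=generate(query, context),
        sources=[text for text, _ in strong],
        confidence=min(score for _, score in strong),
    )
```

<p>Because the refusal is itself a structured object, downstream code can route it to a human reviewer instead of displaying a fabricated answer.<\/p>\n\n\n\n<p>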
When the model must populate defined fields from retrieved data, there is no room for free-form fabrication.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix 4: Self-Hosted Architecture<\/h3>\n\n\n\n<p>For organizations with stringent data-governance requirements, the choice between <a href=\"https:\/\/www.cmarix.com\/blog\/self-hosted-ai-vs-openai-apis-enterprise-guide\/\">self-hosted AI architecture vs OpenAI APIs<\/a> is no longer simply a financial matter; it is one of reliability and control. Teams considering this option should look into <a href=\"https:\/\/www.cmarix.com\/blog\/enterprise-private-llm-deployment-guide\/\">deploying private LLMs on AWS<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix 5: Guardrail Layers<\/h3>\n\n\n\n<p>A complete architecture layers the fixes above: RAG to ground responses in source documents; confidence thresholds with fallback plans; structured response formats to constrain output; classifiers to validate input\/output; citations for every source the model uses; and human review of low-confidence responses.<\/p>\n\n\n<div style=\"border: 2px solid #439bc2;padding: 18px;border-radius: 6px;background-color: #f5fbfe\">\n<h2>Is Your AI Architecture Hallucination-Proof?<\/h2>\n<p>CMARIX specializes in building enterprise-grade AI systems with grounding, RAG pipelines, and guardrail layers that eliminate uncontrolled hallucination risk.<\/p>\n<p><a href=\"https:\/\/www.cmarix.com\/ai-consulting-services.html\">Talk to Our AI Consultants<\/a><\/p><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">RAG vs Fine-Tuning: Choosing Your Weapon For More Honest AI Chatbots<\/h2>\n\n\n\n<p>Practitioners often ask: Should we retrieve or should we train? 
The honest answer is that these approaches solve different problems, and the best enterprise systems use both.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Dimension<\/strong><\/td><td><strong>RAG<\/strong><\/td><td><strong>Fine-Tuning<\/strong><\/td><\/tr><tr><td>Knowledge freshness<\/td><td>\u2705 Real-time, always current<\/td><td>\u274c Requires retraining for updates<\/td><\/tr><tr><td>Factual grounding<\/td><td>\u2705 Source-attributable citations<\/td><td>\u26a0\ufe0f Embedded in weights, harder to audit<\/td><\/tr><tr><td>Domain tone\/style<\/td><td>\u26a0\ufe0f Depends on base model<\/td><td>\u2705 Highly customizable\u200b<\/td><\/tr><tr><td>Hallucination risk<\/td><td>\u2705 Low with quality retrieval<\/td><td>\u26a0\ufe0f Can amplify if training data has errors<\/td><\/tr><tr><td>Infrastructure cost<\/td><td>\u26a0\ufe0f Vector DB + retrieval layer<\/td><td>\u274c GPU-intensive training runs<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>A detailed treatment is in our guide to <a href=\"https:\/\/www.cmarix.com\/blog\/rag-vs-fine-tuning-enterprise-ai\/\">RAG vs Fine-Tuning: choosing the right AI architecture for enterprise applications<\/a>. The short version: use RAG for factual accuracy and knowledge recency; use fine-tuning for behavioral alignment and task-specific reasoning. For high-stakes domains, layer them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Regulatory Context: The EU AI Act and Hallucination Accountability<\/h3>\n\n\n\n<p>It is no longer just about technology. To ensure <a href=\"https:\/\/www.cmarix.com\/blog\/eu-ai-act-compliance-checklist\/\">compliance with the EU AI Act<\/a>, AI systems in the high-risk category must be designed to be explainable, auditable, and controllable. 
Systems in which every output is traceable to a retrievable source and carries a confidence score are becoming the norm.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Enterprise AI Chatbot Auditing\/Deployment Checklist<\/h2>\n\n\n\n<p>If you\u2019re building or auditing an AI chatbot deployment, run through these checkpoints before going to production. CMARIX uses this framework across its <a href=\"https:\/\/www.cmarix.com\/generative-ai-solutions.html\">generative AI development services<\/a> engagements.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Check<\/strong><\/td><td><strong>Implementation<\/strong><\/td><td><strong>Risk Reduction<\/strong><\/td><\/tr><tr><td>RAG pipeline active<\/td><td>Vector DB + retrieval at inference<\/td><td>High<\/td><\/tr><tr><td>Retrieval confidence threshold<\/td><td>Minimum cosine similarity score set<\/td><td>High<\/td><\/tr><tr><td>Refusal policy defined<\/td><td>Out-of-domain queries return structured fallback<\/td><td>Medium-High<\/td><\/tr><tr><td>Output schema enforcement<\/td><td>JSON \/ structured format for critical responses<\/td><td>Medium<\/td><\/tr><tr><td>Citation attribution enabled<\/td><td>Response includes source document reference<\/td><td>High<\/td><\/tr><tr><td>Guardrail classifier layers<\/td><td>Input + output validators deployed<\/td><td>Medium-High<\/td><\/tr><tr><td>Human-in-the-loop<\/td><td>Low-confidence outputs flagged for review<\/td><td>High<\/td><\/tr><tr><td>Monitoring &amp; drift detection<\/td><td>Hallucination rate tracked in production<\/td><td>Medium<\/td><\/tr><tr><td>Private deployment evaluated<\/td><td>VPC-isolated inference for sensitive data<\/td><td>Medium<\/td><\/tr><tr><td>Fine-tuning on domain data<\/td><td>Task-specific behavioral alignment<\/td><td>Medium<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Each checklist item represents a dedicated engineering workstream. 
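<\/p>\n\n\n\n<p>As a concrete illustration of the first two checklist rows, the \u201cVector DB Lookup \u2192 Top-K Chunks\u201d step of the RAG pipeline reduces to a nearest-neighbour search with a similarity floor. A toy stand-in in plain Python; a real deployment would use a vector database, and the embeddings here are hand-made purely for illustration.<\/p>

```python
import math

def cosine_similarity(a, b):
    '''Cosine similarity between two equal-length vectors.'''
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_embedding, index, k=3, min_score=0.82):
    '''Return up to k chunks above the similarity threshold, best first.

    index is a list of (chunk_text, embedding) pairs standing in for a vector DB.
    '''
    scored = [(cosine_similarity(query_embedding, emb), text) for text, emb in index]
    scored.sort(reverse=True)
    return [(text, score) for score, text in scored[:k] if score >= min_score]
```

<p>Chunks that clear the threshold are injected as grounded context; if the list comes back empty, the refusal policy from Fix 2 takes over instead of letting the model guess.<\/p>\n\n\n\n<p>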
Our team of <a href=\"https:\/\/www.cmarix.com\/hire-ai-developers.html\">dedicated AI engineers<\/a> and <a href=\"https:\/\/www.cmarix.com\/hire-data-engineers.html\">data engineering experts<\/a> handles the full stack, from vector database configuration to guardrail layer deployment.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Starting From Scratch? Validate Before You Build<\/h3>\n\n\n\n<p>The most cost-effective starting point is validating the concept with an AI Proof of Concept. This targeted proof-of-concept phase allows you to test retrieval effectiveness, assess hallucination rates in your domain, and identify gaps in the architecture.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Mobile AI and Voice: The Emerging Frontiers<\/h3>\n\n\n\n<p>Moving to mobile doesn&#8217;t eliminate hallucination risk either. <a href=\"https:\/\/www.cmarix.com\/blog\/flutter-on-device-ai-development\/\">Privacy-first mobile AI apps<\/a> face a trade-off: on-device inference protects user data but often lacks grounding in real-world context. Likewise, the generation layer that produces text for TTS in business communication carries its own hallucination risk. 
Hallucinations in speech are even worse than on paper.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/www.cmarix.com\/inquiry.html\"><img decoding=\"async\" width=\"951\" height=\"271\" src=\"https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/Stop-patching.-Start-architecting.webp\" alt=\"\" class=\"wp-image-49789\" srcset=\"https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/Stop-patching.-Start-architecting.webp 951w, https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/Stop-patching.-Start-architecting-400x114.webp 400w, https:\/\/www.cmarix.com\/blog\/wp-content\/uploads\/2026\/04\/Stop-patching.-Start-architecting-768x219.webp 768w\" sizes=\"(max-width: 951px) 100vw, 951px\" \/><\/a><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Final Words<\/h2>\n\n\n\n<p>AI chatbots don\u2019t lie because they are malicious. They lie because they are probabilistic systems tasked with performing deterministic tasks without the architectural scaffolding needed to enforce factual accuracy. The gap between what users expect and what base LLMs deliver is not a problem of model intelligence; it is a system design problem.<\/p>\n\n\n\n<p>The good news is that the engineering solutions exist. At CMARIX, we have helped organizations across healthcare, fintech, legal, and manufacturing move from hallucination-prone pilots to enterprise-grade AI systems with measurable accuracy improvements. 
Whether you\u2019re starting with a PoC, rebuilding a fragile deployment, or architecting from scratch, our team brings the full-stack AI engineering depth to solve this properly.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">FAQs about AI Chatbot Hallucinations<\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1777533828380\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">What is an AI \u201challucination\u201d?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>An AI hallucination is output in which a large language model states factual errors or made-up information in the same matter-of-fact tone as truthful text. Language models have no inherent way to express uncertainty, so their output is driven solely by statistical likelihood, not verified truth.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777533840346\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">Why does ChatGPT lie instead of saying \u201cI don\u2019t know\u201d?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>RLHF training rewards helpful behavior and task completion. This creates a hidden bias towards giving a response rather than declining to do so. 
If the model lacks credible information, it doesn&#8217;t stall; it infers the pattern from the distribution of the training data.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777533852346\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">What are the main architectural causes of AI chatbots lying?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>The primary reasons for this from the perspective of architecture are: (i) Probabilistic text generation with a focus on fluency over truthfulness; (ii) Cut-off points in training datasets; (iii) Context window length and degradation of attention mechanisms; (iv) Lack of native uncertainty estimation in the base LLM architecture; and (v) Instruction-completion bias in RLHF training.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777533869602\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">How can we fix AI chatbot hallucinations using architecture improvements?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>The best architectural methods would be: RAG for grounding the answer in the source document; thresholds for confidence levels and fallback strategies; structural formatting of the answer to limit the output; classifiers for input and output controls; citations for sourcing facts used by the model; and human oversight for generating answers with lower confidence levels.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777533881802\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">How can I get more accurate results from ChatGPT?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>For personal use: provide the model with clear context, tell it to reference sources, and encourage it to respond &#8220;I don&#8217;t know&#8221; when unsure. If you are deploying your own enterprise solution, the only way to achieve consistent, predictable accuracy is through the architectural solutions listed above. 
Prompting techniques help, but cannot eliminate hallucinations completely.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Quick Overview: Chatbot AI never lies. Instead, it uses probabilities to generate [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":49776,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[44],"tags":[],"class_list":["post-49767","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/posts\/49767","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/comments?post=49767"}],"version-history":[{"count":20,"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/posts\/49767\/revisions"}],"predecessor-version":[{"id":49791,"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/posts\/49767\/revisions\/49791"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/media\/49776"}],"wp:attachment":[{"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/media?parent=49767"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/categories?post=49767"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.cmarix.com\/blog\/wp-json\/wp\/v2\/tags?post=49767"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}