Limitations of Knowledge Graphs
Knowledge Graphs are a foundational structure that has grown in importance with the rise of AI applications. They represent knowledge in a machine-readable format, organizing information as triples (a head entity, a relation, and a tail entity) so that entities become nodes and relationships become edges in a graph. This allows computers to comprehend and reason about connected knowledge, supporting intelligent applications such as question answering, semantic analysis, and recommendation systems.
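As a rough illustration of the triple structure described above, the sketch below stores head–relation–tail facts and treats entities as nodes and relations as labeled edges. The entity and relation names are invented for the example.

```python
from collections import defaultdict

# Minimal sketch of a triple-based knowledge graph.
# Entity and relation names are illustrative only.
class KnowledgeGraph:
    def __init__(self):
        # adjacency: head entity -> list of (relation, tail entity)
        self.edges = defaultdict(list)

    def add_triple(self, head, relation, tail):
        self.edges[head].append((relation, tail))

    def neighbors(self, head, relation=None):
        # Tail entities reachable from `head`, optionally filtered by relation.
        return [t for r, t in self.edges[head] if relation is None or r == relation]

kg = KnowledgeGraph()
kg.add_triple("AcmeCorp", "competes_with", "Globex")
kg.add_triple("AcmeCorp", "headquartered_in", "Berlin")

print(kg.neighbors("AcmeCorp", "competes_with"))  # ['Globex']
```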
Despite their effectiveness, Knowledge Graphs have significant limitations. They can lose contextual information, which makes it hard to capture real-world knowledge accurately. In addition, many KGs are sparse: relationships and entities may be incomplete. Because annotations are often lacking, contextual information is not available at inference time, which makes effective reasoning difficult, especially when KGs are combined with large language models.
Context Graphs
Context Graphs are an extension of traditional Knowledge Graphs. They add extra details such as location, time, and sources, so that facts are captured with their context rather than in isolation. This allows for a more precise and accurate view of knowledge.
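One way to picture this extension, as a hedged sketch rather than a fixed schema, is to attach context fields such as time, location, and source to each triple. The field names below are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of a contextualized fact: a plain triple plus context attributes.
# Field names (valid_from, valid_to, location, source) are illustrative choices.
@dataclass
class ContextualFact:
    head: str
    relation: str
    tail: str
    valid_from: Optional[str] = None   # e.g. "2023-01"
    valid_to: Optional[str] = None     # e.g. "2024-06"
    location: Optional[str] = None     # e.g. a market or region
    source: Optional[str] = None       # provenance of the fact

fact = ContextualFact(
    head="AcmeCorp", relation="competes_with", tail="Globex",
    valid_from="2023-01", valid_to="2024-06",
    location="EU cloud market", source="quarterly filing",
)
print(fact)
```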
Context graphs can be used to store the decisions made by agents. Agents need more than rules: they need to know how rules were applied before, when exceptions were allowed, who approved decisions, and how conflicts were handled. Agents are in the best position to record all of this information, since they operate directly at the point where decisions are made.
These stored decision traces accumulate over time into a graph of context that agents can use to learn from their past actions. This allows systems to understand not only what occurred but why, resulting in more reliable and consistent agent behavior.
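A decision trace like the one described above could be stored as a small graph of nodes and edges. The sketch below is a minimal, hypothetical record format, not any specific product's schema.

```python
# Minimal sketch of recording one agent decision as graph nodes and edges.
# Node identifiers, edge labels, and the scenario are hypothetical.
decision_trace = {
    "nodes": {
        "decision:42":    {"type": "decision", "outcome": "refund approved"},
        "rule:refunds":   {"type": "rule", "text": "refunds within 30 days"},
        "exception:late": {"type": "exception", "reason": "shipping delay"},
        "approver:lee":   {"type": "person", "role": "support lead"},
    },
    "edges": [
        ("decision:42", "applied_rule", "rule:refunds"),
        ("decision:42", "granted_exception", "exception:late"),
        ("decision:42", "approved_by", "approver:lee"),
    ],
}

# An agent can later walk these edges to see not just what was decided, but why.
for head, relation, tail in decision_trace["edges"]:
    print(head, relation, tail)
```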

What is the impact of contextual information?
Contextual information adds important layers to knowledge representation by going beyond simple entity–relation facts. It allows systems to differentiate facts that appear similar but are actually distinct because of differences in time, place, scale, or surrounding circumstances. For example, two firms may compete in one market or time period but not in another. By capturing this context, systems can represent knowledge more precisely instead of treating all similar-looking facts as identical.
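To make the two-firms example concrete, the hedged sketch below shows how two facts that look identical as bare triples stay distinct once they carry market and time qualifiers. The company names, markets, and dates are invented.

```python
# Two facts that look identical as bare triples, distinguished by context qualifiers.
# Company names, markets, and dates are invented for illustration.
facts = [
    {"head": "AcmeCorp", "relation": "competes_with", "tail": "Globex",
     "market": "EU cloud", "period": ("2022", "2023")},
    {"head": "AcmeCorp", "relation": "competes_with", "tail": "Globex",
     "market": "US retail", "period": ("2024", "2025")},
]

def competed(head, tail, market, year):
    """Check whether two firms competed in a given market during a given year."""
    return any(
        f["head"] == head and f["tail"] == tail and f["market"] == market
        and f["period"][0] <= year <= f["period"][1]
        for f in facts
    )

print(competed("AcmeCorp", "Globex", "EU cloud", "2022"))  # True
print(competed("AcmeCorp", "Globex", "EU cloud", "2024"))  # False: different period
```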
Contextual information is also a crucial ingredient of context graphs. It includes historical events such as past decisions, the rules that were applied, and the exceptions and approvals that were granted. When agents record how a decision was made (what data was used, which rule was checked, and why an exception was allowed), this information becomes reusable context for future decisions. Such records connect entities that are not directly linked and let systems draw on precedents and past outcomes rather than fixed rules.
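Building on the decision records described above, a system might retrieve precedents for a new case by matching on the rule applied and the surrounding context. This is a hypothetical lookup for illustration, not a particular framework's API.

```python
# Hypothetical precedent lookup over stored decision records.
# Record fields (rule, context, outcome) are assumptions for illustration.
decision_log = [
    {"rule": "refund_policy",
     "context": {"days_since_purchase": 35, "reason": "shipping delay"},
     "outcome": "approved as exception"},
    {"rule": "refund_policy",
     "context": {"days_since_purchase": 60, "reason": "changed mind"},
     "outcome": "denied"},
]

def find_precedents(rule, reason):
    """Return past decisions that applied the same rule for a similar reason."""
    return [d for d in decision_log
            if d["rule"] == rule and d["context"]["reason"] == reason]

# A new late-refund case caused by a shipping delay can reuse the earlier judgment.
for precedent in find_precedents("refund_policy", "shipping delay"):
    print(precedent["outcome"])  # approved as exception
```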
There has been a clear shift in AI systems, from static tools to decision-making agents, driven largely by major industry players. In the real world, decisions are not always based solely on rules; they often involve approvals, exceptions, and lessons learned from previous cases. Context graphs address this gap by capturing how decisions are made across systems: which policies were checked, which data was used, who approved the decision, and what outcome followed. By structuring decision history into context, agents can reuse previous judgments instead of repeatedly relearning edge cases. This shift can be seen in:
- Gmail’s Gemini features and Gemini 3–based agent frameworks both show AI shifting from simple assistance to active decision-making, whether that means managing inbox priorities or running complex workflows.
- Gemini 3 uses memory and state to complete longer tasks, while Gmail relies heavily on user intent and conversation history. In both cases, context matters more than any single response.
- Gemini 3 can serve as an orchestration layer for multi-agent frameworks such as ADK, Agno, and Letta, much as Gemini manages summarization, prioritization, and writing in Gmail.
- AI Inbox, Suggested Replies, and other features rely on a persistent understanding of the user’s behavior. Agent frameworks such as Letta or mem0 likewise rely on stateful memory to avoid context loss and maintain consistent behavior.
- Gmail turns email into actionable summaries and to-dos, while Gemini-powered agents automate browsers, workflows, and enterprise tasks—both reflecting a broader shift toward AI systems that act, not just respond.

OpenAI
- ChatGPT Health brings health data from different sources (medical records, apps, wearables, and notes) into one place. The system can then use this context to understand health trends over time rather than just answering individual questions.
- ChatGPT Health helps users make more informed decisions based on their personal medical history, such as preparing for doctor appointments or understanding test results.
- The health system runs in a secure, separate space that keeps sensitive data private. Because the health context is protected and accurate, context-based tools such as context graphs can be used safely.

JP Morgan
- JP Morgan’s decision to replace proxy advisors with its AI tool Proxy IQ shows a move toward building internal systems that aggregate and analyze data from thousands of meetings rather than relying solely on third-party recommendations.
- By analyzing proxy data internally, the firm can incorporate historical voting behavior, company-specific details, and firm-level policies—aligning with the idea of context graphs that preserve how decisions are formed over time.
- JP Morgan’s internal AI-based analysis has increased transparency, consistency, and speed in proxy voting, reflecting a wider move toward context-aware, AI-driven decision-making in corporate settings.

NVIDIA
- The NVIDIA NeMo Agent Toolkit helps AI agents become production-ready by adding controls for observability, evaluation, and deployment. By capturing execution traces, reasoning steps, and performance signals, it records how an agent arrived at an outcome, not just the final result, aligning closely with the idea of context graphs.
- For example, OpenTelemetry traces and structured evaluations convert the agent’s behavior into usable context, which makes it possible to debug, compare, and continuously improve reliability (see the sketch after this list).
- Much as DLSS 4.5 integrates AI deeply into real-time graphics workflows, NAT integrates AI agents into enterprise workflows. Both products point to a shift toward AI systems that retain state, context, and history.
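As a rough illustration of recording an agent step as a trace, and not the toolkit's actual integration, the sketch below uses the standard OpenTelemetry Python API to wrap a reasoning step in a span with a few attributes. The attribute names and the scenario are assumptions made for the example.

```python
# Requires the opentelemetry-api and opentelemetry-sdk packages.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Export spans to the console so the recorded context is visible.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("agent-demo")

# Wrap one agent reasoning step in a span; attribute names are hypothetical.
with tracer.start_as_current_span("plan_step") as span:
    span.set_attribute("agent.rule_checked", "refund_policy")
    span.set_attribute("agent.data_source", "order_history")
    span.set_attribute("agent.outcome", "exception_granted")
```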

Microsoft
- Brand Agents, Copilot Checkouts, and other tools turn purchasing conversations into immediate purchases: customers can ask questions, compare products, make a decision, and complete the purchase.
- These AI agents operate exactly where buying decisions happen—inside chats and brand websites—allowing them to guide users and complete checkout without extra steps.
- Merchants retain control over transactions and data. These interactions provide useful information about customer intent and purchasing patterns over time, which helps future decisions be made faster.



