Tony Seale's 2026 Predictions for Ontology and AI

A deeper dive into Tony Seale's 2026 predictions on ontology, AI memory, decentralised identifiers and the coming enterprise data crunch.

Tony Seale, "The Knowledge Graph Guy," recently posted something that made me stop scrolling: "Here are my predictions for 2026: Ontology hits the hype cycle... I'm calling it early: this is the year ontology goes mainstream." That opening captured both the excitement and the risk of where AI and data are heading.

When Tony Seale talks about ontology and knowledge graphs, I pay attention. In this case, his viral LinkedIn post laid out four intertwined predictions: ontology going mainstream, a new era of "memory wars" in AI, the quiet return of decentralised identifiers, and a deepening enterprise data crunch. Together, they form a map of where AI-driven organisations are heading next.

In this post, I want to unpack Tony Seale's argument, add some context from the broader data and AI landscape, and explore what these predictions mean if you are building or buying AI-heavy systems inside an organisation.

Ontology Hits the Hype Cycle

Tony Seale's first prediction is that ontology will finally hit the hype cycle in 2026. Put simply, an ontology is a formal way of describing the concepts in your domain and how they relate to one another. It is the backbone of knowledge graphs and semantic web technologies, and it is what lets machines move beyond keyword matching to shared, machine-readable meaning.
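
To make this concrete, here is a minimal sketch using Python's rdflib library. The ex: namespace and the Customer and Order concepts are invented for illustration:

```python
# A minimal ontology sketch with rdflib (pip install rdflib).
# The ex: namespace and the Customer/Order concepts are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("https://example.com/ontology/")

g = Graph()
g.bind("ex", EX)

# Concepts in the domain, declared as classes.
g.add((EX.Customer, RDF.type, OWL.Class))
g.add((EX.Order, RDF.type, OWL.Class))

# A relationship between the concepts, with an explicit domain and range.
g.add((EX.placedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.placedBy, RDFS.domain, EX.Order))
g.add((EX.placedBy, RDFS.range, EX.Customer))
g.add((EX.placedBy, RDFS.label, Literal("placed by")))

# Serialise to Turtle, a W3C standard any compliant tool can read.
print(g.serialize(format="turtle"))
```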

Tony predicts that by mid-2026, thousands of organisations will be asking the same question: "What is our ontology strategy?" Ontology will gain what he calls a semi-mystical aura: part buzzword, part talisman. Vendors will start branding anything structured or taxonomical as "ontology" or "semantics," even when it lacks the rigour and interoperability that real ontologies require.

This is the first trap for enterprises. As Tony hints, some platforms will work hard to divorce ontology from its roots in the semantic web and knowledge graphs, repackaging it as a proprietary feature rather than building on open standards.

As Tony puts it, expect disinformation that uses "ontology" and "semantic" interchangeably with "magic."

If you are responsible for data, architecture, or AI, this matters. Ontology is powerful precisely because it promises shared meaning across systems, teams, and even organisations. That promise breaks the moment each vendor invents its own incompatible flavour.

So how should you respond?

  • Treat "ontology" as a design discipline, not a product logo.
  • Ask vendors how their models align with W3C standards and knowledge graph practices.
  • Insist on export, interoperability, and the ability to extend or reuse models across tools; a rough round-trip check is sketched below.
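
One cheap way to test those claims is a round trip through a W3C standard serialisation such as Turtle: export the model, re-parse it, and check nothing was lost. A rough sketch, assuming a hypothetical vendor_export.ttl file:

```python
# Round-trip check: parse the vendor's export, re-serialise it to Turtle,
# parse the result again, and confirm no triples were lost or mangled.
from rdflib import Graph

original = Graph()
original.parse("vendor_export.ttl", format="turtle")  # hypothetical export file

roundtrip = Graph()
roundtrip.parse(data=original.serialize(format="turtle"), format="turtle")

assert set(original) == set(roundtrip), "model does not round-trip cleanly"
print(f"{len(original)} triples survived the round trip")
```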

Ontology will go mainstream, but whether it becomes an enduring capability or just another hype cycle buzzword will depend on these choices.

From Reasoning to Memory: The New AI Battleground

In 2025, the AI conversation was dominated by reasoning: chain-of-thought, tools, agents, and increasingly complex orchestration on top of foundation models. Tony Seale argues that in 2026 the battleground moves to memory.

Foundation model providers are already racing to wrap scaffolding around their core models: long-term memory, continuous learning, session history, and personalisation. The marketing story is simple: your AI will remember you.

But Tony reframes the real contest:

"Memory is the new moat. The battle to own yours will be fierce."

There are two levels to this memory war:

  • At the individual level, memory means your conversations, preferences, work artefacts, and even your reasoning patterns.
  • At the enterprise level, it is organisational knowledge, institutional memory, and the accumulated intelligence of thousands of employees.

The strategic question is no longer just "Which model is best?" but "Who owns the traces of how we think and work?" If your AI vendor controls your memory layer, switching costs become enormous. Your historical context, embeddings, and knowledge base are locked into someone else's architecture.

My own view aligns with Tony's warning: organisations should treat AI memory as part of their core data infrastructure, not a side effect of using a chatbot. That means:

  • Designing a vendor-agnostic memory layer where possible (a minimal interface is sketched after this list).
  • Keeping raw content, knowledge graphs, and embeddings under your control.
  • Separating model providers from the storage and governance of enterprise memory.
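
As a sketch of what that separation can look like, here is a toy vendor-agnostic memory interface in Python; all the names are illustrative, not a real library:

```python
# Application code depends on this small interface, so the model provider
# can be swapped without losing the accumulated memory.
from dataclasses import dataclass, field
from typing import Protocol


class MemoryStore(Protocol):
    def save(self, content: str, embedding: list[float]) -> None: ...
    def search(self, embedding: list[float], top_k: int = 3) -> list[str]: ...


@dataclass
class InMemoryStore:
    """Toy implementation; a real one would sit on storage you govern."""
    items: list[tuple[str, list[float]]] = field(default_factory=list)

    def save(self, content: str, embedding: list[float]) -> None:
        self.items.append((content, embedding))

    def search(self, embedding: list[float], top_k: int = 3) -> list[str]:
        # Rank stored content by dot-product similarity to the query embedding.
        def dot(a: list[float], b: list[float]) -> float:
            return sum(x * y for x, y in zip(a, b))

        ranked = sorted(self.items, key=lambda item: dot(item[1], embedding), reverse=True)
        return [content for content, _ in ranked[:top_k]]
```

Whether the store behind that interface is a vector database, a knowledge graph, or both, the point is that it sits on infrastructure you control, not inside a single provider's product.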

The companies that get this right will be able to experiment freely with new models while preserving the long-term value of what their AI systems learn.

Why Decentralised Identifiers Are Coming Back

Tony's third prediction flows directly from the ontology land grab. If vendors try to sell "ontology" as a purely conceptual layer, they will eventually collide with a stubborn reality: you cannot do serious semantics without solid identification.

As he puts it, "Semantics without identification is philosophy without physics." An ontology can define what a "customer" or a "drug" is, but in practice you still need to anchor those concepts to real-world instances: this customer, that product, this transaction, that clinical trial.

That is where decentralised identifiers (and older ideas like resolvable URIs) come in. For years, semantic web practitioners have argued that you cannot unify data across silos with concepts alone. You also need stable, resolvable, interoperable identifiers that different systems can point to and agree on.
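
To illustrate, here is a small sketch of minting a stable identifier and anchoring "this customer" to the Customer concept; both namespaces are hypothetical stand-ins for IRIs you would control and keep stable:

```python
# Mint an identifier once, then let every system point at the same IRI.
import uuid

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("https://example.com/ontology/")     # hypothetical concept namespace
ID = Namespace("https://id.example.com/customer/")  # hypothetical identifier namespace

g = Graph()
customer_iri = ID[str(uuid.uuid4())]  # stable and globally unique

g.add((customer_iri, RDF.type, EX.Customer))  # "this customer" is an ex:Customer
g.add((customer_iri, EX.legalName, Literal("Acme Ltd")))

print(customer_iri)  # resolvable, too, if you serve documents at this IRI
```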

Tony predicts that by late 2026 the industry will rediscover this truth. The attempt to peel ontology away from knowledge graphs will fail, because without graph-like identifiers, ontologies become theoretical diagrams that never quite connect to operational data.

For enterprises, this means a few concrete things:

  • Start treating identifiers as a first-class architectural concern, not an afterthought.
  • Prefer identifiers that are globally unique, resolvable, and stable across vendors and time.
  • Map your internal IDs to shared schemes where it makes sense (for example, industry standards or public reference data); a small example follows this list.
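
Such a mapping can be as simple as an owl:sameAs link from your internal identifier to a public reference entity. In this sketch the internal namespace is invented, while the Wikidata IRI is real (Q18216 is aspirin):

```python
# Link an internal drug ID to a shared public identifier with owl:sameAs.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

INTERNAL = Namespace("https://id.example.com/drug/")     # hypothetical internal scheme
WIKIDATA = Namespace("http://www.wikidata.org/entity/")  # public reference data

g = Graph()
g.add((INTERNAL["DRUG-0042"], OWL.sameAs, WIKIDATA["Q18216"]))  # Q18216: aspirin

print(g.serialize(format="turtle"))
```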

In other words, if you want the benefits of semantic interoperability, you will eventually have to care about identifiers, whether you use the language of URIs, DIDs, or something else.

The Deepening Enterprise Data Crunch

Tony's final prediction is the most sobering. In an earlier post he suggested that organisations would finally confront the importance of data quality and structure. According to his new forecast, 2026 is when they realise the problem is far bigger than a "cleanup project."

The vision of a "connected enterprise" sounds inspiring until you map it onto real systems. Most large organisations have decades of overlapping applications, shadow IT, manual workarounds, and M&A history embedded in their data. Turning that mess into something AI-ready is not a six-month initiative; it is a multi-year transformation.

Tony anticipates a moment of collective realisation: executives will look at their data landscape and understand that every system, process, and team is touched by this work. Some will stall. Some will quietly downgrade their AI ambitions. And, as he suggests, we may even see a shock casualty: a knowledge-intensive company whose fragmented data becomes visible to the market, triggering questions about its "AI data readiness."

From my perspective, this is not a prediction of doom but a call for realism. AI is forcing organisations to confront the true cost of years of data neglect. The winners will be the ones who:

  • Treat data and ontology as strategic assets, not IT housekeeping.
  • Fund multi-year programmes with clear milestones, not one-off "AI readiness" projects.
  • Connect their ontology, identifiers, and memory strategy into a coherent architecture.

Preparing for Tony Seale's 2026 Landscape

Taken together, Tony Seale's predictions sketch a challenging but actionable picture of the near future: ontologies hyped and distorted, AI memory becoming the new competitive moat, identifiers returning from the shadows, and executives waking up to the scale of the data problem.

If you are responsible for AI, data, or architecture, you do not need to wait for 2026 to respond. You can start now by:

  1. Clarifying your ontology strategy and insisting on open, interoperable models.
  2. Designing a memory layer you own, independent of any single model provider.
  3. Investing in decentralised or at least stable identifiers that link concepts to real data.
  4. Framing data readiness as a long-term transformation, not a quick fix.

Tony Seale has put words to shifts that many practitioners are already feeling. The organisations that listen and act early will be the ones still standing when the hype cycle clears and the real work of semantic, data-driven AI begins.

This blog post expands on a viral LinkedIn post by Tony Seale, The Knowledge Graph Guy.