News and Updates
AWS re:Invent 2025 and the future of enterprise AI
Curiosity
Jan 26, 2026


The announcements weren't about bigger models. They were about making AI work in real enterprises, a direction closely aligned with Curiosity's approach.
AWS re:Invent 2025, held in early December in Las Vegas, was the biggest cloud and enterprise AI event of the year. Across keynotes and product announcements, a clear pattern emerged: the focus was not on bigger models or smarter prompts, but on the foundations required to make AI work in enterprise environments.
In particular, AWS emphasized three things:
Knowledge foundations: structuring and connecting enterprise data so it can be reused and understood
Context and grounding: linking documents, metadata, and systems so AI operates on connected context rather than isolated files
Production integration: embedding AI into enterprise environments with governance, access control, and performance built in
These themes showed up consistently in how AWS talked about its platform. The focus shifted away from headline model capabilities toward the underlying systems that organize data, enforce access control, connect services, and run reliably at scale. The message was clear: AI only becomes useful once it leaves the demo stage. Many enterprises are still learning this the hard way, treating metadata and context as afterthoughts and allowing information to fragment across systems. AWS’s roadmap makes the opposite point. Structure, integration, and governance have to come first.
This is precisely the problem Curiosity was designed to address. The alignment with AWS’s roadmap is striking: the priorities AWS is now making explicit reflect how Curiosity approaches enterprise AI from the ground up.
In the rest of this post, we will look at three areas where AWS’s direction closely aligns with our approach:
From AI features to connected knowledge foundations: turning implicit structure into explicit, reusable enterprise knowledge
From chat interfaces to AI embedded in real workflows: fitting AI interactions and components into existing tools and processes
From experiments to production systems: designing for scale, on-prem deployment, and hard requirements on performance, security, and auditability

From AI features to connected knowledge foundations
AWS repeatedly returned to the same idea. Enterprise AI only works when data is structured, connected, and governed. Models alone cannot compensate for fragmented information landscapes or inconsistent metadata. Without shared structure, AI systems struggle to ground answers, and users struggle to trust them.
Most enterprises already have structure in their systems: part numbers, process IDs, project codes, system names, and document references. The issue is not that this structure is missing, but that it is implicit or siloed. A document may reference a component, but the relationship is not explicit. A process ID may appear in text, but it is not connected to the system that defines it. Context exists, but only in fragments.
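To make "implicit structure" concrete, here is a minimal sketch of the first half of the problem: identifiers that can be found in text but are not yet linked to anything. The ID formats are invented for the example; real enterprises would use their own naming conventions.

```python
# Hypothetical sketch: finding implicit identifiers in free text.
# The ID formats (PRJ-..., PROC-...) are invented for illustration.
import re

PATTERNS = {
    "project": re.compile(r"\bPRJ-\d+\b"),
    "process": re.compile(r"\bPROC-\d+\b"),
}

def implicit_references(text: str) -> dict:
    """Return the identifiers mentioned in a document, grouped by kind."""
    return {kind: pat.findall(text) for kind, pat in PATTERNS.items()}

doc = "Maintenance under PRJ-1042 follows PROC-07 before sign-off."
print(implicit_references(doc))
# {'project': ['PRJ-1042'], 'process': ['PROC-07']}
```

Extraction alone only finds the fragments. Turning them into usable context requires linking them to the projects and processes they name, which is the connection problem described next.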
Curiosity treats this as a knowledge connection problem. Structured and unstructured data are linked into a single knowledge graph so information can be understood in relation to the rest of the enterprise. Documents are not treated as isolated files. They belong to projects, reference systems or components, and relate to other documents and cases across tools.
This connected structure directly affects how AI behaves. For example, an AI assistant can ground its responses by selecting source documents based on project number, system, or component, using explicit relationships rather than text similarity alone. The result is answers that are easier to trace, easier to explain, and aligned with how the organization actually organizes its work.
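As an illustration of grounding through explicit relationships, the sketch below builds a toy graph and selects source documents by their edges rather than by text similarity. The node kinds, identifiers, and helpers are assumptions made for the example, not Curiosity's actual data model or API.

```python
# Hypothetical sketch: grounding via explicit graph relationships
# instead of text similarity alone. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    uid: str
    kind: str                                 # "document", "project", "component"
    edges: set = field(default_factory=set)   # uids of related nodes

def link(a: Node, b: Node) -> None:
    """Create an explicit, bidirectional relationship between two nodes."""
    a.edges.add(b.uid)
    b.edges.add(a.uid)

def grounding_documents(graph: dict, anchor_uid: str) -> list:
    """Select documents connected to an anchor (project, system, or
    component) through explicit edges."""
    return [graph[u] for u in graph[anchor_uid].edges
            if graph[u].kind == "document"]

# A tiny graph: one project, one component, two documents.
proj  = Node("PRJ-1042", "project")
comp  = Node("CMP-77", "component")
doc_a = Node("spec.pdf", "document")
doc_b = Node("incident-881.txt", "document")
graph = {n.uid: n for n in (proj, comp, doc_a, doc_b)}

link(doc_a, proj)   # the spec belongs to the project
link(doc_a, comp)   # and references the component
link(doc_b, comp)   # the incident report references the same component

# A question about CMP-77 is grounded in both documents, regardless of
# how similar their wording is to the question text.
print([d.uid for d in grounding_documents(graph, "CMP-77")])
```

In a real deployment the graph is built by ingesting enterprise systems at scale, but the selection principle is the same: relationships, not just wording, decide what the model sees.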
From chat interfaces to real workflows
The talks at AWS re:Invent also pointed toward AI that is integrated into enterprise systems rather than delivered as a standalone interface. Chat can be useful, but it is only one interaction pattern. In complex organizations, most work happens across established tools, screens, and backend processes, not inside a conversational interface.
This aligns with how we approach AI systems at Curiosity. The assumption is that AI should adapt to existing processes, not the other way around. As a result, AI interactions are not limited to a single UI. They can appear in different interfaces or run in the background as part of existing workflows via APIs.
The same principle applies to the AI components themselves. NLP pipelines, embeddings, retrieval logic, and LLM interactions are configured based on the task at hand. Case resolution, investigation, and analysis require different interactions and constraints. The goal is not to standardize everything around chat, but to integrate AI where it supports real work with appropriate control and context.
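A rough way to picture task-specific configuration is a mapping from task type to pipeline settings, as in the sketch below. The task names, fields, and values are assumptions for the example, not a description of Curiosity's implementation.

```python
# Hypothetical sketch: per-task pipeline configuration instead of one
# chat pipeline for everything. Field names and values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class PipelineConfig:
    retrieval: str          # how grounding context is selected
    max_context_docs: int   # hard cap on grounding material
    cite_sources: bool      # whether answers must carry references
    interface: str          # where results surface: UI panel, API, batch

TASK_PIPELINES = {
    "case_resolution": PipelineConfig(
        retrieval="graph+similarity", max_context_docs=5,
        cite_sources=True, interface="case_ui_panel"),
    "investigation": PipelineConfig(
        retrieval="graph_traversal", max_context_docs=25,
        cite_sources=True, interface="interactive_ui"),
    "background_analysis": PipelineConfig(
        retrieval="similarity", max_context_docs=100,
        cite_sources=False, interface="api_batch"),
}

def pipeline_for(task: str) -> PipelineConfig:
    """Look up the pipeline configuration for a given task type."""
    return TASK_PIPELINES[task]

print(pipeline_for("case_resolution"))
```

The point of the structure is that constraints travel with the task: a case-resolution answer must cite its sources and stay within a small, relevant context, while a background batch job can trade those guarantees for coverage.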
From experiments to production systems
Speakers at re:Invent encouraged enterprises to move beyond pilots toward production-ready AI. This is often where ambition meets operational reality. What works in a demo rarely survives real data volumes, real users, and real constraints.
At production scale, those constraints are concrete. Enterprises operate on tens of terabytes of heterogeneous documents alongside structured data from many systems. Knowledge graphs can reach hundreds of millions of nodes and billions of relationships. Results need to return in seconds. In many industries, systems must run on customer infrastructure, including on-prem environments, due to data sensitivity, regulatory requirements, and contractual obligations.
These conditions shape system design from the start. Performance cannot be solved by simply scaling infrastructure. Latency, data movement, and cost impose hard technical and financial limits. Security and auditability must be enforced at the lowest layers. This is why Curiosity was built to run on-prem and why it follows a tightly integrated, monolithic architecture optimized for speed and predictable performance. These are consequences of building AI systems meant to operate reliably in production.
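One way to picture enforcement at the lowest layers is an access-control check applied at retrieval time, with every decision recorded before anything reaches a model. This is a simplified sketch under assumed names; a production system would back it with its storage and security layers.

```python
# Hypothetical sketch: ACL filtering and audit logging at the retrieval
# layer, before any content reaches a model. Names are illustrative.
from datetime import datetime, timezone

AUDIT_LOG = []  # in production: an append-only, durable store

def has_access(user_groups: set, doc_acl: set) -> bool:
    """A document is visible only if the user shares a group with its ACL."""
    return bool(user_groups & doc_acl)

def retrieve(user: dict, candidates: list) -> list:
    """Filter candidate documents by ACL and record every decision."""
    allowed = []
    for doc in candidates:
        ok = has_access(set(user["groups"]), set(doc["acl"]))
        AUDIT_LOG.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user["id"], "doc": doc["id"], "granted": ok,
        })
        if ok:
            allowed.append(doc)
    return allowed  # only permitted content ever reaches the model

docs = [
    {"id": "design.docx",   "acl": ["engineering"]},
    {"id": "salaries.xlsx", "acl": ["hr"]},
]
user = {"id": "u-17", "groups": ["engineering"]}
print([d["id"] for d in retrieve(user, docs)])  # ['design.docx']
```

Because the filter runs below the AI layer, a prompt can never talk the system into returning content the user could not open directly, and the audit trail exists whether or not the answer is ever shown.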

What this means for enterprise AI
The limiting factor for enterprise AI is not model capability. It is readiness.
In most organizations, knowledge is still fragmented. Relationships between systems are implicit. Metadata is inconsistent. Governance is uneven. Large language models do not fix these issues. They expose them. When information cannot be traced, structured, or connected, AI systems are forced to guess, and trust erodes quickly.
The implication is straightforward. Enterprises that invest in connected knowledge foundations, workflow-level integration, and production-ready systems create the conditions for AI to work reliably over time. Those that focus primarily on model advances, without addressing these fundamentals, will see diminishing returns.
This is the reality Curiosity was built around from the start. Knowledge comes first. AI is applied on top of it, not used as a substitute.
AI will not replace enterprise knowledge. But it will make very clear whether that knowledge is understood.

Built for Enterprise. Designed Around You.
Talk with our team to learn how Curiosity can help your organization.

© 2025 Curiosity GmbH - All rights reserved

