
Architecting the Intelligent Network: A quick recap of the top seven stories across Agentic AI, 5G-Advanced Mobility, and 6G Platform Monetization!


Introduction

For CSPs, the path from 5G Standalone to 5G-Advanced and the eventual 6G architecture represents a paradigm shift. We are no longer simply provisioning capacity; we are transitioning to intent-driven, AI-native platforms capable of exposing distributed edge compute, ensuring deterministic ultra-reliable low-latency communications (URLLC), and orchestrating multi-domain autonomous operations.

With MWC 2026 just around the corner, let’s take a look at some of the latest developments across the telco ecosystem and deep-dive into seven of the most pivotal announcements and white papers shaping the immediate future of our network infrastructure.


1. Scaling Autonomous Networks: Deutsche Telekom’s MINDR and the Agent-to-Agent Protocol

The operational complexity of managing multi-domain, multi-vendor telecommunications networks has long outpaced the capabilities of legacy, script-based automation. In a massive leap toward Level 4/5 Autonomous Networks, Deutsche Telekom, in collaboration with Google Cloud, has announced the development of MINDR (Multi-Agentic Intelligent Network Diagnostics & Remediation). This platform fundamentally shifts network operations from reactive alarm-chasing to predictive, service-driven automation that resolves anomalies before the end-user experience is impacted.

From a technical architecture perspective, MINDR is built utilizing Google’s Gemini models deployed on Vertex AI and incorporates Google Cloud’s Autonomous Network Operations framework. Unlike isolated automation silos, MINDR operates as a collaborative multi-agent system. It is designed to utilize the Agent-to-Agent (A2A) protocol to orchestrate specialized AI agents across the Radio Access Network (RAN), transport, and core domains. These agents continuously ingest and correlate network telemetry to build a real-time, end-to-end view of service performance, enabling autonomous root-cause analysis and explainable remediation actions.
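To make the coordination pattern concrete, here is a minimal sketch of per-domain agents feeding a correlating coordinator. The agent logic, thresholds, and telemetry fields are invented for illustration; the actual MINDR agents and the A2A protocol messages are far richer than this toy.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    domain: str
    node: str
    severity: float
    detail: str

def ran_agent(telemetry):
    # Hypothetical RAN agent: flags cells whose PRB utilisation exceeds a threshold.
    return [Finding("ran", cell, util, "high PRB load")
            for cell, util in telemetry["prb_util"].items() if util > 0.9]

def transport_agent(telemetry):
    # Hypothetical transport agent: flags links with elevated packet loss.
    return [Finding("transport", link, loss, "packet loss")
            for link, loss in telemetry["link_loss"].items() if loss > 0.01]

def coordinator(telemetry):
    # A2A-style coordination in miniature: gather per-domain findings, then
    # correlate them into a single severity-ranked list for root-cause analysis.
    findings = ran_agent(telemetry) + transport_agent(telemetry)
    return sorted(findings, key=lambda f: f.severity, reverse=True)

telemetry = {
    "prb_util": {"cell-17": 0.95, "cell-18": 0.40},
    "link_loss": {"agg-link-3": 0.04},
}
for f in coordinator(telemetry):
    print(f.domain, f.node, f.detail)
```

The key design point is that no single agent sees the whole picture; correlation across domains is what turns isolated alarms into an end-to-end service view.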

The commercial viability and operational ROI of this agentic approach have already been proven in the field. MINDR is an evolution of Deutsche Telekom’s RAN Guardian Agent, which has been operating live in Germany’s commercial network. During high-traffic events, RAN Guardian autonomously triggered over 100 remediation actions within its first month, reducing the operational time required to manage major network events from several hours down to approximately one minute—a staggering >95% operational improvement.

During the February Carnival season, the system autonomously pre-checked 611 different mobile sites serving over 130 events. When five of these sites experienced unexpected peak loads, the AI agent dynamically optimized the radio parameters in real-time. With MINDR extending these capabilities beyond the RAN into the transport and core domains, Deutsche Telekom is actively scaling this self-healing infrastructure across its European footprint, beginning with the Czech Republic and Croatia. For telco professionals, MINDR represents the blueprint for deploying governed, multi-agentic AI to drastically lower OPEX and secure strict Service Level Agreements (SLAs).

Read more here.

2. Eliminating Handover Jitter: Ericsson’s L1/L2 Triggered Mobility (LTM)

For time-critical enterprise use cases—such as immersive Extended Reality (XR), automated guided vehicles (AGVs), and remote industrial robotics—seamless cellular mobility is a strict technical prerequisite. Ericsson, partnering with KDDI and MediaTek, has successfully completed the world’s first in-field joint demonstration of Layer 1/Layer 2 (L1/L2) Triggered Mobility (LTM) on a live commercial Radio Access Network.

Standardized as part of the 3GPP Release 18 specifications, LTM introduces a fundamental architectural enhancement to the 5G-Advanced Critical IoT subscription tier. Historically, cellular handovers and mobility signaling have relied on legacy Layer 3 (RRC) messaging. Layer 3 mobility inherently introduces processing overhead and scheduling delays, leading to data interruption during cell changes that can trigger safety hazards in autonomous operations or cause severe user nausea in XR environments.

LTM bypasses this bottleneck by executing mobility commands using lower-layer (L1/L2) signaling. Ericsson’s proprietary software algorithms leverage this 3GPP standard to drastically reduce signaling overhead, shortening the data interruption period during cell changes by 25%. Furthermore, the technical design of Ericsson’s LTM implementation is highly efficient; it smartly reuses existing Layer 3 network measurements while enabling early downlink and uplink synchronization using a single trigger. It also lowers User Equipment (UE) requirements, ensuring broader compatibility across devices with varying capabilities.
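A toy timing model makes the mechanism tangible. The step names and durations below are illustrative values chosen so the gap matches the ~25% reduction reported, not actual 3GPP timings: the point is simply that LTM removes the RRC round trip and pre-arranges synchronization.

```python
# Toy handover-interruption model; durations are illustrative, not 3GPP values.
L3_HANDOVER = {
    "rrc_reconfiguration": 10.0,  # ms: RRC message build + UE processing
    "random_access": 15.0,        # full RACH toward the target cell
    "path_switch": 15.0,          # user-plane switch to the target
}
LTM_HANDOVER = {
    "mac_ce_command": 5.0,        # cell-switch command via lower-layer signalling
    "early_sync": 10.0,           # DL/UL sync prepared before the switch
    "path_switch": 15.0,
}

def interruption(steps):
    return sum(steps.values())

l3, ltm = interruption(L3_HANDOVER), interruption(LTM_HANDOVER)
print(f"L3: {l3:.0f} ms, LTM: {ltm:.0f} ms "
      f"-> {100 * (l3 - ltm) / l3:.0f}% shorter interruption")
```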

For communications service providers, this is a highly monetizable capability. By transitioning from standard 5G Standalone to 5G-Advanced architectures equipped with LTM, operators can provide the near-seamless connectivity required by latency-sensitive AI and cloud applications. KDDI has explicitly highlighted that this low-latency mobility is foundational for supporting AI-powered real-time applications and ensuring operational efficiency and safety in Japan’s industrial sectors. Adopting standards-based LTM allows telcos to future-proof their 5G-Advanced capital investments while accelerating the introduction of premium, time-critical enterprise services.

Read more here.

3. Architecting for Value: TM Forum’s 6G Monetization Blueprint

The telecommunications industry learned a difficult lesson during the initial rollout of 5G: deploying advanced radio capabilities utilizing Non-Standalone (NSA) architectures and fragmented legacy IT systems severely bottlenecked service readiness and limited monetization. To ensure the industry does not repeat these mistakes, the TM Forum—in collaboration with major operators—has released a comprehensive white paper (IG1485) outlining a monetization-driven architecture for the 6G era.

The TM Forum postulates that 6G must evolve beyond a “dumb pipe” connectivity foundation into an AI-native platform capable of on-demand experiences and programmable network exposure. To achieve this, the architecture must tightly couple a 6G RAN featuring native intelligence with a 6G AI-native Core that embeds AI-driven control, policy, and analytics directly into core network functions. The white paper outlines three primary 6G monetization models. The first is “Differentiated Experience,” which extends traditional data plans with static QoS tiers. The second, “On-Demand Experience,” introduces dynamic, time-bound connectivity (e.g., temporary QoS boosts for factory production windows), requiring substantial upgrades to real-time policy and charging functions. The third, “Enablement Beyond Connectivity,” exposes sensing, AI, and edge compute via APIs to developers, utilizing outcome-linked B2B2X contracts.
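The “On-Demand Experience” model is the most demanding of the three for policy and charging systems, because offers are time-bound rather than static. The sketch below models a temporary QoS boost as a data object a real-time policy function could evaluate; the field names are invented for illustration and are not a TM Forum API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical time-bound QoS boost, in the spirit of the white paper's
# "On-Demand Experience" model. Names are illustrative, not TM Forum schema.
@dataclass
class QosBoost:
    subscriber: str
    profile: str          # e.g. "low-latency-guaranteed"
    start: datetime
    duration: timedelta

    @property
    def end(self):
        return self.start + self.duration

    def active(self, now):
        # Policy and charging functions must evaluate this in real time,
        # which is exactly the upgrade the white paper calls out.
        return self.start <= now < self.end

boost = QosBoost("factory-line-7", "low-latency-guaranteed",
                 datetime(2026, 3, 2, 6, 0), timedelta(hours=8))
print(boost.active(datetime(2026, 3, 2, 9, 0)))   # inside the production window
print(boost.active(datetime(2026, 3, 2, 15, 0)))  # after the window closes
```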

To execute these models, the TM Forum insists on deploying the Open Digital Architecture (ODA), which acts as a “Marketplace OS”. ODA replaces rigid, siloed legacy BSS/OSS with a component-based, cloud-native architecture, enabling “Composable Commerce” so operators can rapidly assemble billing engines for specific verticals (like drone traffic management) without bespoke IT projects. Additionally, achieving Level 4/5 Autonomous Networks is critical; automated, closed-loop service assurance is necessary to dynamically enforce SLAs in real-time and prevent SLA penalty payouts on high-value guaranteed-performance contracts. By standardizing multi-sided marketplace platforms and utilizing Open APIs, telcos can securely expose these 6G capabilities to third parties, transitioning from selling raw capacity to orchestrating high-margin platform ecosystems.
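“Composable Commerce” can be pictured as assembling a vertical-specific rating pipeline from reusable stages rather than commissioning a bespoke IT project. The sketch below wires up a hypothetical drone-vertical billing flow; the stage names, rate, and SLA-credit rule are invented and are not actual ODA component identifiers.

```python
# Sketch of composable billing: reusable stages chained per vertical.
def usage_mediation(record):
    # Normalise raw usage into billable units (here: flight minutes).
    record["units"] = record.pop("flight_minutes")
    return record

def drone_rating(record):
    record["charge"] = record["units"] * 0.12   # assumed per-minute rate
    return record

def sla_credit(record):
    # Closed-loop assurance feeds SLA breaches straight into billing,
    # applying a credit instead of risking a disputed penalty payout.
    if record.get("sla_breached"):
        record["charge"] *= 0.5
    return record

def compose(*stages):
    def pipeline(record):
        for stage in stages:
            record = stage(record)
        return record
    return pipeline

drone_billing = compose(usage_mediation, drone_rating, sla_credit)
print(drone_billing({"flight_minutes": 100, "sla_breached": True}))
```

The design choice worth noting is that a new vertical reuses `usage_mediation` and `sla_credit` unchanged and only swaps in its own rating stage.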

Read more about the TM Forum 6G white paper here.

4. Resolving Un-Scripted Anomalies: NTT DOCOMO & AWS Agentic AI Operations

As mobile network architectures scale to support both 4G and 5G non-standalone/standalone environments alongside multi-domain and multi-vendor equipment, the complexity of network maintenance has skyrocketed. Legacy operational support systems (OSS) typically rely on script-based automation, which is highly effective for predefined, well-understood network failures. However, when complex, un-scripted anomalies occur, operations teams are forced to manually collect and parse through massive volumes of data from disparate domains to identify the root cause, resulting in unacceptable Mean Time to Repair (MTTR) metrics.

To directly combat this operational bottleneck, NTT DOCOMO has announced the commercial deployment of a massive-scale agentic AI system for network maintenance, developed in partnership with AWS. Deployed across their commercial mobile network as of early February 2026, this platform is engineered on Amazon Bedrock AgentCore, ensuring the secure governance and execution of agentic AI workloads at scale.

The technical scope of this deployment is unprecedented. The platform ingests and correlates real-time traffic and alarm telemetry from over one million network devices, spanning both base stations and core network equipment. To process this massive data lake, DOCOMO utilizes high-performance databases specifically optimized for time-series, tabular, and graph data workloads. By leveraging a graph-modeled network topology, multiple AI agents are orchestrated to autonomously analyze network behavior, detect anomalies, pinpoint suspected failure nodes, and present deterministic remediation recommendations to maintenance engineers.
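The value of a graph-modelled topology is easiest to see in miniature: when several leaf nodes alarm at once, walking their upstream paths and finding the deepest shared ancestor localises the suspected failure. The topology and node names below are invented; DOCOMO’s actual graph databases and agent logic operate at vastly larger scale.

```python
# Toy root-cause localisation on a graph-modelled topology.
UPSTREAM = {                      # child -> parent
    "cell-A": "agg-1", "cell-B": "agg-1", "cell-C": "agg-2",
    "agg-1": "core-router", "agg-2": "core-router",
}

def path_to_root(node):
    path = [node]
    while path[-1] in UPSTREAM:
        path.append(UPSTREAM[path[-1]])
    return path

def suspected_root_cause(alarming):
    # Intersect the upstream paths of every alarming node; the suspect is
    # the shared node closest to the alarms (earliest on any path).
    paths = [path_to_root(n) for n in alarming]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    return next(n for n in paths[0] if n in common)

print(suspected_root_cause(["cell-A", "cell-B"]))   # both hang off agg-1
```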

By training and operating this agentic architecture on one of the world’s largest telecommunications datasets, DOCOMO has achieved a greater than 50% reduction in response times for complex network failures that previously demanded intensive manual analysis. For network operations professionals, this deployment validates that utilizing cloud-native agentic AI and graph-based topology modeling is the definitive path to achieving Autonomous Network operations, slashing service disruption windows, and guaranteeing the high reliability required for advanced 5G and 6G services.

Read more here.

5. Deterministic Slicing for Robotics: Configured Grant and Real Haptics

Providing connectivity for remote robot teleoperation is one of the most demanding URLLC use cases in the enterprise 5G portfolio. For remote operators to perform delicate tasks using advanced robotics, bidirectional force feedback must be transmitted with absolute precision. High or fluctuating latency (jitter) disrupts the synchronization between the operator (“leader”) and the remote robot (“follower”), rendering precise force reproduction impossible.

NTT DOCOMO and Keio University’s Haptics Research Center have successfully addressed this physical layer constraint, demonstrating the world’s first stable, high-fidelity robot teleoperation over a commercial 5G Standalone network. The trial utilized Keio’s Real Haptics technology—which bidirectionally transmits tactile and contact information—layered over a specific 5G SA network slicing technology known as Configured Grant.

In standard 5G SA deployments, devices communicate using a “Dynamic Grant” scheduling method. When a User Equipment (UE) needs to transmit data, it must first send a resource request to the base station. The base station processes this request and allocates resources, introducing a “scheduling delay” that fluctuates wildly depending on background network congestion. Configured Grant completely bypasses this bottleneck. By pre-allocating exclusive communication resources to a specific device line for a defined period, the UE can transmit data instantly without executing a resource request. This effectively eliminates scheduling delays, flattening jitter and ensuring deterministic, ultra-low latency.
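The jitter argument can be illustrated with a toy latency model: Dynamic Grant pays a scheduling round trip plus a congestion-dependent queueing wait, while Configured Grant transmits immediately on pre-allocated resources. All timing values and the queueing distribution below are assumptions for the sketch, not measured 5G figures.

```python
import random

# Toy uplink latency model contrasting the two grant mechanisms.
AIR_LATENCY_MS = 1.0            # one-way transmission once resources exist

def dynamic_grant_latency(rng):
    request_rtt = 4.0                         # scheduling request + grant round trip
    queueing = rng.expovariate(1 / 3.0)       # congestion-dependent wait (assumed)
    return request_rtt + queueing + AIR_LATENCY_MS

def configured_grant_latency(rng):
    # Resources are pre-allocated: no request, no queueing, no jitter.
    return AIR_LATENCY_MS

rng = random.Random(42)
dg = [dynamic_grant_latency(rng) for _ in range(10_000)]
cg = [configured_grant_latency(rng) for _ in range(10_000)]
print(f"Dynamic Grant: mean {sum(dg)/len(dg):.1f} ms, "
      f"jitter {max(dg) - min(dg):.1f} ms")
print(f"Configured Grant: constant {cg[0]:.1f} ms, jitter 0.0 ms")
```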

The technical trial routed control data through DOCOMO’s commercial 5G SA network and a docomo MEC private network, terminating at a virtual server running the Bilateral Edge Platform. To simulate harsh, real-world conditions, 20 Mbps of background traffic was injected alongside the control data. The empirical results heavily validate the architecture: utilizing Configured Grant increased the force-feedback reproduction rate by an impressive 40%, delivering highly precise tactile feedback. Concurrently, the smoothness of the robotic movements—quantified via Dimensionless Jerk Cost—decreased by 59%, ensuring highly stable control. This proves Configured Grant is an essential slicing capability for monetizing industrial B2B robotics.

Read more here.

6. Monetizing the Edge: SoftBank and Nokia’s AI-RAN Orchestrator

The transition to virtualized Radio Access Networks (vRAN) has laid the groundwork for entirely new infrastructure utilization models. SoftBank and Nokia have announced a critical functional expansion to the AITRAS Orchestrator—part of SoftBank’s AI-RAN product portfolio—that transforms the telco edge into a brokered, distributed AI execution platform.

The AITRAS platform is designed to natively converge AI workloads and vRAN control functions onto a single, unified virtualization platform. Previously, the AITRAS Orchestrator dynamically balanced computing resources solely between SoftBank’s internal RAN control requirements and internal AI processing tasks. However, the cyclical nature of mobile traffic dictates that RAN compute demand fluctuates significantly based on the time of day. Restricting the platform to internal workloads inevitably results in stranded, underutilized computing resources during off-peak hours, diminishing the capital investment efficiency of the infrastructure.

To resolve this and generate net-new revenue streams, SoftBank integrated Nokia Bell Labs’s AI platform—the Nokia AI-RAN External Compute Engine—into the AITRAS Orchestrator. This powerful integration allows the orchestrator to securely broker, partition, and manage telecommunications computing resources for external enterprise clients. External customers can now dynamically access high-performance AI compute power directly at the telco edge, entirely on-demand, without requiring heavy capital expenditure in their own AI hardware.
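The brokering logic reduces to a simple idea: RAN control always gets its guaranteed share first, and whatever the diurnal traffic curve leaves idle becomes sellable external capacity. The GPU counts and demand curve below are invented for illustration, not AITRAS internals.

```python
# Sketch of brokering stranded edge compute between RAN and external tenants.
TOTAL_GPUS = 100

def ran_demand(hour):
    # Stylised diurnal curve: heavy daytime traffic, light overnight load.
    return 80 if 8 <= hour < 22 else 20

def brokered_allocation(hour):
    ran = ran_demand(hour)             # RAN control always wins its share
    external = TOTAL_GPUS - ran        # the remainder is offered on demand
    return {"hour": hour, "ran": ran, "external": external}

for hour in (3, 12):
    print(brokered_allocation(hour))
```

Overnight, 80% of the platform would otherwise sit stranded; brokered out, it becomes the net-new revenue stream the announcement describes.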

This technical achievement realizes the “Execution of External AI Workloads” use case formally defined by the AI-RAN Alliance’s Working Group. For telco strategists, this represents a fundamental evolution of the business model. By expanding the AI-RAN architecture to seamlessly accommodate external B2B AI demands, operators can efficiently monetize dormant compute cycles. We are no longer simply selling connectivity; we are operating a distributed, high-margin computing utility service that maximizes the ROI of our localized edge infrastructure.

Read more here.

7. Optimizing Data Pipelines for AI Agents: 6G and IOWN Integration

As we project toward the 6G horizon, the proliferation of continuously operating AI agents will introduce crushing capacity demands on mobile networks. If a user wearing Augmented Reality (AR) glasses utilizes an AI agent to constantly monitor their environment for safety risks, the continuous ingestion of multimodal sensor data (high-resolution video, audio, spatial telemetry) creates three distinct technical bottlenecks: severe wireless bandwidth starvation, immense computational processing loads, and unsustainable power consumption at the edge and core. Furthermore, processing every raw frame cumulatively increases end-to-end (E2E) latency, destroying the real-time feedback loop required for AR assistance.

To solve this, The University of Tokyo, NTT, and NEC have successfully demonstrated an integrated 6G/IOWN architectural platform combining three groundbreaking technologies. First, to address wireless bandwidth constraints, the platform utilizes Streaming Semantic Communication. Instead of transmitting raw bit-level video streams, this protocol detects contextual changes and transmits only the semantic differences, radically compressing the required wireless payload.
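A toy delta-style stream shows why transmitting only semantic changes compresses so aggressively: a mostly static scene produces very few updates. The frame labels and scene timeline are invented; the trial's actual semantic extraction operates on real video, not strings.

```python
# Toy semantic-difference stream: send a frame's description only on change.
frames = (["street, clear"] * 500
          + ["street, cyclist approaching"] * 100
          + ["street, clear"] * 1200)        # 1,800 frames, as in the trial

def semantic_stream(frames):
    sent, last = [], None
    for i, label in enumerate(frames):
        if label != last:                    # only the semantic change is sent
            sent.append((i, label))
            last = label
    return sent

updates = semantic_stream(frames)
print(f"{len(updates)} updates instead of {len(frames)} raw frames")
```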

Second, to mitigate the computational load of continuous inference, the system employs AI-Oriented Media Control. This technology applies data identifiers to the incoming stream, selectively filtering and feeding only the most critical, relevant sensor frames to the AI agent. Finally, to address massive AI model scaling, the architecture leverages In-Network Computing (INC). INC distributes small, specialized AI processing tasks deep within the network core, eliminating the need to haul all data to a centralized cloud and drastically reducing latency.
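AI-Oriented Media Control can be sketched as tag-based admission to the agent's inference queue: every frame carries an identifier, and only the tags relevant to the use case pass through. The tags and stream structure below are invented for illustration.

```python
# Sketch of identifier-based frame filtering ahead of an AI agent.
stream = [
    {"id": 1, "tag": "background"},
    {"id": 2, "tag": "safety-critical"},
    {"id": 3, "tag": "background"},
    {"id": 4, "tag": "safety-critical"},
]

def media_control(stream, wanted=frozenset({"safety-critical"})):
    # Selective filtering cuts the agent's inference load to the frames
    # that actually matter for the safety use case.
    return [f for f in stream if f["tag"] in wanted]

print([f["id"] for f in media_control(stream)])
```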

In a trial utilizing a 60-second, 1,800-frame critical situation video dataset, the integration of these three technologies yielded exceptional results. The platform successfully maintained an almost constant E2E latency profile without any cumulative processing wait times. Crucially, this massive reduction in communication traffic and computational load was achieved with zero degradation in AI inference accuracy. This trial definitively proves that optimizing data transmission pipelines at the semantic level is an absolute requirement for supporting real-time, AI-native 6G applications.

Read more here.


Conclusion

As we analyze the technical roadmaps presented by these industry leaders, a cohesive vision for the future of telecommunications emerges. The network is no longer a passive conduit for data. By implementing Configured Grant and Layer 1/Layer 2 Triggered Mobility, we are guaranteeing the strict determinism required for industrial automation and XR. By deploying governed, multi-agentic AI architectures like MINDR and AWS Bedrock-powered maintenance platforms, we are achieving the Level 4/5 autonomy necessary to manage multi-domain complexity and defend service level agreements.

Furthermore, by adopting the TM Forum’s Open Digital Architecture and SoftBank’s AI-RAN external compute brokering, we are actively unlocking the next generation of B2B2X revenue streams. Moving forward into the 6G and IOWN era, the optimization of semantic data pipelines and In-Network Computing will ensure our infrastructure can scale to support the massive influx of autonomous AI agents. For telco professionals, the technical foundations for a highly programmable, vastly monetizable, and fully autonomous intelligent edge are officially here.
