AWS CloudWatch Goes Full Observability: Apache Iceberg Integration Changes Monitoring Landscape
AWS transforms CloudWatch from basic monitoring into a unified observability platform with Apache Iceberg storage and OCSF normalization, eliminating ETL pipelines and enabling cross-account analytics.
AWS has transformed Amazon CloudWatch from a basic monitoring service into a unified observability platform, introducing Apache Iceberg-compatible storage and native data normalization that fundamentally changes how organizations manage logs at scale. The update, announced in December 2025, addresses a persistent enterprise challenge: fragmented log management requiring multiple tools, data copies, and complex ETL pipelines.
The Core Innovation: Query Logs in Place
The key breakthrough is Apache Iceberg-compatible access to log data through Amazon S3 Tables. According to AWS's official announcement, this enables organizations to "query logs in place without ETL pipelines while maintaining compatibility with third-party analytics tools."
CloudWatch now stores logs in a unified data store that can be queried directly using natural language, LogsQL, PPL, or SQL—or accessed through any Iceberg-compatible analytics tool including Amazon Athena, Amazon SageMaker Unified Studio, Apache Spark, and others. Organizations no longer need to extract, transform, and load log data into separate analytics platforms.
This "Zero-ETL" approach eliminates the operational overhead of maintaining multiple copies of the same data across different tools. AWS states that CloudWatch "consolidates log management into a single service with built-in governance capabilities without storing and maintaining multiple copies of the same data across different tools and data stores."
OCSF and OpenTelemetry: Native Data Normalization
CloudWatch now automatically normalizes log data using the Open Cybersecurity Schema Framework (OCSF) and OpenTelemetry (OTel) standards. This managed normalization addresses a longstanding pain point: security and operational teams speaking different data languages.
The platform provides managed OCSF conversion for AWS and third-party data sources, along with Grok processors for custom parsing and field-level operations. According to the announcement, this automation means "you can focus on analytics and insights" rather than building normalization pipelines.
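Conceptually, a Grok processor compiles a pattern into a named-group regular expression, and OCSF normalization maps the extracted fields onto a standard event schema. The sketch below illustrates both steps in plain Python; the log line, Grok-style pattern, and OCSF field names are illustrative stand-ins, not CloudWatch's actual managed mappings.

```python
import re

# Hypothetical raw access-log line (made up for illustration).
RAW_LOG = '203.0.113.7 - alice [10/Dec/2025:13:55:36 +0000] "GET /api/v1/items HTTP/1.1" 200'

# A Grok pattern like %{IP:src_ip} %{USER:user} ... compiles to a named-group regex.
PATTERN = re.compile(
    r'(?P<src_ip>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+)'
)

def normalize_to_ocsf(line: str) -> dict:
    """Parse a raw line and map its fields to an OCSF-style HTTP Activity event."""
    m = PATTERN.match(line)
    if m is None:
        raise ValueError("line does not match pattern")
    f = m.groupdict()
    # Field names below follow OCSF's general shape but are illustrative only.
    return {
        "class_name": "HTTP Activity",
        "src_endpoint": {"ip": f["src_ip"]},
        "actor": {"user": {"name": f["user"]}},
        "http_request": {"http_method": f["method"], "url": {"path": f["path"]}},
        "http_response": {"code": int(f["status"])},
    }

event = normalize_to_ocsf(RAW_LOG)
```

Once every source lands in a shared schema like this, security and operational teams can filter on the same field names regardless of which product emitted the original line.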
Unified Data Collection at Enterprise Scale
CloudWatch now natively aggregates vended logs across accounts and regions, integrating with AWS Organizations. The platform automatically collects logs from AWS services including CloudTrail, VPC Flow Logs, AWS WAF access logs, and Route 53 resolver logs.
Third-party integrations extend to endpoint security (CrowdStrike, SentinelOne), identity management (Okta, Microsoft Entra ID), cloud security (Wiz), network security (Zscaler, Palo Alto Networks), productivity tools (Microsoft Office 365, GitHub), and IT service management (ServiceNow CMDB).
Suresh Rajashekaraiah, an architect at Mphasis, noted in a LinkedIn post that "for years, enterprises struggled with fragmented operational and security logs, which complicated troubleshooting and compliance processes." The unified platform addresses this by consolidating and normalizing data from AWS and third-party sources.
New Interface: Facets and Data Source Management
The updated CloudWatch console introduces a Logs Management View with three key tabs: Summary, Data sources, and Pipeline. The Facets interface enables interactive exploration by source, application, account, region, and log type.
Developers can run cross-account and cross-region queries with intelligent parameter inference. The data sources view automatically categorizes logs by AWS services, third-party sources, or custom application logs, providing visibility into ingestion patterns and anomalies.
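The facet counts the console surfaces are, in effect, group-by aggregations over log metadata. A minimal sketch with made-up records (the field names are illustrative, not CloudWatch's actual schema):

```python
from collections import Counter

# Hypothetical per-event metadata; accounts, regions, and sources are invented.
logs = [
    {"account": "111111111111", "region": "us-east-1", "source": "vpc-flow-logs"},
    {"account": "111111111111", "region": "us-east-1", "source": "cloudtrail"},
    {"account": "222222222222", "region": "eu-west-1", "source": "cloudtrail"},
    {"account": "111111111111", "region": "us-east-1", "source": "vpc-flow-logs"},
]

# Facet counts: how many events each (facet, value) pair would display.
facets = {
    facet: Counter(rec[facet] for rec in logs)
    for facet in ("account", "region", "source")
}
```

Clicking a facet value in the console then corresponds to filtering the stream on that key before re-counting the remaining facets.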
For analytics workflows, teams can integrate selected data sources with S3 Tables, making logs available in a read-only aws-cloudwatch S3 Tables bucket for analysis through Athena, Amazon Redshift, or any Iceberg-compatible query engine.
Pipeline Configuration for Data Processing
CloudWatch's new pipeline feature streamlines collecting, transforming, and routing telemetry data. The pipeline configuration wizard guides users through choosing data sources, configuring destinations, setting up processors (up to 19 can be chained), and deploying the pipeline.
Processors can filter, transform, or enrich data as it flows through the platform. This approach enables teams to standardize data formats for observability and security use cases without building custom infrastructure.
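A chain of processors is essentially a sequence of functions applied to a record stream in order. The sketch below models that idea in plain Python; the processor names and record fields are invented for illustration, since the managed processors themselves are configured in the console rather than written as code.

```python
from typing import Callable, Iterable, Iterator

Record = dict
Processor = Callable[[Iterator[Record]], Iterator[Record]]

# Illustrative processors: one filters, one transforms, one enriches.
def drop_debug(records: Iterator[Record]) -> Iterator[Record]:
    return (r for r in records if r.get("level") != "DEBUG")

def redact_ip(records: Iterator[Record]) -> Iterator[Record]:
    for r in records:
        yield {**r, "client_ip": "REDACTED"} if "client_ip" in r else r

def add_env(records: Iterator[Record]) -> Iterator[Record]:
    return ({**r, "env": "prod"} for r in records)

def run_pipeline(records: Iterable[Record], processors: list[Processor]) -> list[Record]:
    stream: Iterator[Record] = iter(records)
    for proc in processors:  # apply each processor in order, like the wizard's chain
        stream = proc(stream)
    return list(stream)

out = run_pipeline(
    [{"level": "DEBUG", "msg": "x"},
     {"level": "INFO", "msg": "y", "client_ip": "198.51.100.9"}],
    [drop_debug, redact_ip, add_env],
)
```

Because each processor only sees the output of the previous one, ordering matters: filtering early keeps downstream transforms cheap.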
Cost and Competition Considerations
Storage in Amazon S3 Tables costs approximately $0.0265 per GB-month, according to industry analysis, with recent updates reducing compaction costs by up to 90% for Apache Iceberg tables. The Zero-ETL architecture also avoids the egress costs associated with moving data to external analytics platforms.
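As a back-of-envelope check on those figures, the arithmetic is straightforward; the rate below is the ~$0.0265 per GB-month cited above, and actual bills vary by region and exclude ingestion and query charges.

```python
# Approximate S3 Tables storage rate cited in the article; region-dependent.
PRICE_PER_GB_MONTH = 0.0265

def monthly_storage_cost(gb: float) -> float:
    """Storage-only monthly cost; excludes ingestion, queries, and compaction."""
    return round(gb * PRICE_PER_GB_MONTH, 2)

# Example: 10 TB of normalized logs retained for one month.
cost = monthly_storage_cost(10 * 1024)

# "Up to 90%" lower compaction cost means paying as little as 10% of the prior
# charge: a hypothetical $100/month compaction bill drops to $10 at the maximum.
reduced_compaction = round(100.0 * (1 - 0.90), 2)
```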
However, the competitive landscape remains complex. Corey Quinn, through his AWS Snarkbot, posted on Bluesky that "CloudWatch now does what Splunk did 15 years ago, but with more AWS service names per sentence than actual features. 'Unified data store' = S3 with extra steps and a consulting bill."
Splunk offers cross-platform visibility across Azure, GCP, and on-premises environments. Datadog and Dynatrace provide deep application performance monitoring and hybrid-cloud interfaces, though they often incur higher egress and indexing fees compared to AWS's query-in-place model. Open-source alternatives like the ELK stack and Grafana Loki provide vendor independence but require organizations to manage their own infrastructure.
What This Means for AWS-Centric Organizations
For teams heavily invested in AWS infrastructure, CloudWatch's evolution represents a potential consolidation opportunity. The platform eliminates the need for separate log management tools while providing enterprise-grade governance and compliance features.
The Apache Iceberg integration is particularly significant. By storing logs in an open table format, AWS reduces lock-in concerns while enabling teams to use their preferred analytics tools. A security team can query the same data in the CloudWatch console while data scientists access it through Spark notebooks.
The official announcement provides a concrete example: a query that "correlates network traffic with AWS API activity from a specific IP range by joining VPC Flow Logs with CloudTrail logs based on matching source IP addresses." This type of integrated investigation was previously difficult without copying data to a separate analytics platform.
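That correlation is, at its core, a join on source IP. The in-memory sketch below illustrates the logic with made-up records; in the real platform this would be a SQL JOIN across the Iceberg tables rather than Python.

```python
# Toy rows standing in for VPC Flow Logs and CloudTrail events (all invented).
flow_logs = [
    {"srcaddr": "203.0.113.7", "dstport": 443, "action": "ACCEPT"},
    {"srcaddr": "198.51.100.9", "dstport": 22, "action": "REJECT"},
]
cloudtrail = [
    {"sourceIPAddress": "203.0.113.7", "eventName": "AssumeRole"},
    {"sourceIPAddress": "192.0.2.1", "eventName": "ConsoleLogin"},
]

def correlate(flows: list[dict], api_events: list[dict], ip_prefix: str) -> list[dict]:
    """Join network flows with API activity on matching source IP within a range."""
    by_ip: dict[str, list[dict]] = {}
    for e in api_events:
        by_ip.setdefault(e["sourceIPAddress"], []).append(e)
    return [
        {**f, "api_events": by_ip[f["srcaddr"]]}
        for f in flows
        if f["srcaddr"].startswith(ip_prefix) and f["srcaddr"] in by_ip
    ]

hits = correlate(flow_logs, cloudtrail, "203.0.113.")
```

The query-in-place model means this join runs against the single stored copy of both log sources, instead of against extracts shipped to a separate SIEM or warehouse.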
Availability and Getting Started
The enhanced CloudWatch features are available in all AWS regions except AWS GovCloud and China regions. AWS states there are no upfront commitments or minimum fees—teams pay only for log ingestion, storage, and queries.
For organizations evaluating the platform, the critical question isn't whether CloudWatch now offers unified observability—it clearly does. The question is whether the AWS-native integration and Zero-ETL cost profile outweigh the benefits of established cross-platform tools or open-source alternatives.
The Takeaway
CloudWatch's transformation signals a broader industry shift toward unified observability platforms built on open standards like Apache Iceberg and OCSF. The Zero-ETL architecture pattern—storing data once and querying it with multiple tools—represents a fundamental rethinking of how organizations manage observability data.
For developers managing production systems on AWS, this update warrants evaluation. The platform now offers capabilities that previously required assembling multiple tools, potentially simplifying architecture while reducing operational complexity. Whether it fully replaces established observability platforms depends on your specific requirements for multi-cloud visibility, depth of application performance monitoring, and vendor independence.