309: Microsoft tries to give away cloud services for free, sadly, it’s only SQL

Welcome to episode 309 of The Cloud Pod – where the forecast is always cloudy! Justin and Matt are on hand and ready to bring you an action-packed episode. Unfortunately, this one is also lullaby-free. Apologies. This week we’re talking about Databricks and Lakebridge, Cedar Analysis, Amazon Q, Google’s little hiccup, and updates to SQL – plus so much more! Thanks for joining us.

Titles we almost went with this week:

  • 📲KV Phone Home: When Your Key-Value Store Goes AWOL
  • 🍎When Your Coreless Service Finds Its Core Problem
  • 🪙Oracle’s Vanity Fair: Pretty URLs for Pretty Penny
  • 🎟️From Warehouse to Lakehouse: Your Free Ticket to Cloud Town
  • 1️⃣Databricks Uno: Because One is the Loneliest Number
  • 🍻Free as in Beer, Smart as in Data Science
  • 🌲Cedar Analysis: Because Your Authorization Policies Wood Never Lie
  • 🌳Cedar Analysis: Teaching Old Policies New Proofs
  • 💬Amazon Q Finally Learns to Talk to Other Apps
  • 📅Tomorrow: Visual Studio’s Predictive Edit Revolution
  • 👻The Ghost of Edits Future: AI Haunts Your Code Before You Write It
  • 😖IAM What IAM: Google’s Identity Crisis Breaks the Internet
  • 🙈Permission Denied: The Day Google Forgot Who Everyone Was
  • 💉403 Forbidden: When Google’s Bouncer Called in Sick
  • 🥵AWS Brings the Heat to Fusion Research
  • ☁️Larry’s Cloud Nine: Oracle Stock Soars on Forecast Raise
  • 📈OCI You Later: Oracle Bets Big on Cloud Growth
  • 🔮Oracle’s Crystal Ball Shows 40% Cloud Growth Ahead
  • 🤖Meta Scales Up Its AI Ambitions with $14 Billion Investment
  • 💄From FAIR to Scale: Meta’s $14 Billion AI Makeover
  • 👏Congratulations Databricks One, you are now the new low-code solution.
  • 🔥AWS burns power to figure out how power works

AI Is Going Great – Or How ML Makes Money 

02:12 Zuckerberg makes Meta’s biggest bet on AI, $14 billion Scale AI deal

  • Meta is finalizing a $14 billion investment for a 49% stake in Scale AI, with CEO Alexandr Wang joining to lead a new AI research lab at Meta. 
  • This follows similar moves by Google and Microsoft acquiring AI talent through investments rather than direct acquisitions to avoid regulatory scrutiny.
  • Scale AI specializes in data labeling and annotation services critical for training AI models, serving major clients including OpenAI, Google, Microsoft, and Meta. 
  • The company’s expertise covers approximately 70% of all AI models being built, providing Meta with valuable intelligence on competitor approaches to model development.
  • The deal reflects Meta’s struggles with its Llama AI models, particularly the underwhelming reception of Llama 4 and delays in releasing the more powerful “Behemoth” model due to concerns about competitiveness with OpenAI and DeepSeek. Meta recently reorganized its GenAI unit into two divisions following these setbacks.
  • Wang brings both technical AI expertise and business acumen, having built Scale AI from a 2016 startup to a $14 billion valuation. His experience includes defense contracts and the recent Defense Llama collaboration with Meta for national security applications.
  • For cloud providers and developers, this consolidation signals increased competition in AI infrastructure and services, as Meta seeks to strengthen its position against OpenAI’s consumer applications and model capabilities through enhanced data preparation and training methodologies.

03:29 📢 Matt – “It’s interesting, especially the first part of this, where companies are trying to acquire AI talent through investments rather than directly hiring people – and hiring them away from other companies. It’s going to be an interesting trend to see if it continues in the industry, where they just keep acquiring small and medium companies (or large, in this case) in order to grow their teams, or at least to augment them that way. Or whether they’re going to try to build their own in-house units too.”

07:50 Introducing Databricks Free Edition | Databricks Blog

  • Databricks Free Edition provides access to the same data and AI tools used by enterprise customers, removing the cost barrier for students and hobbyists to gain hands-on experience with production-grade platforms.
  • The offering addresses the growing skills gap in AI/ML roles, where job postings have increased 74% annually over four years and 66% of business leaders require AI skills for new hires.
  • Free Edition includes access to Databricks’ training resources and industry-recognized certifications, allowing users to validate their skills on the same platform used by major companies.
  • Universities like Texas A&M are already integrating Free Edition into their curriculum, enabling students to gain practical experience with enterprise data tools before entering the workforce.
  • This move positions Databricks to capture mindshare among future data professionals while competing with other cloud providers’ free tiers and educational offerings.
  • Want to try it out? You can do that here.

08:28 Introducing Databricks One | Databricks Blog

  • Databricks One creates a simplified interface specifically for business users to access data insights without needing technical expertise in clusters, queries, or notebooks. 
  • The consumer access entitlement is available now, with the full experience entering beta later this summer.
  • The platform provides three key capabilities for non-technical users: AI/BI Dashboards, Genie for natural language data queries, and interaction with Databricks Apps through a streamlined interface designed to minimize complexity.
  • Security and governance remain centralized through Unity Catalog, allowing administrators to expand access to business users while maintaining existing compliance and auditing controls without changing their governance strategy.
  • The service will be included at no additional license fee for existing Databricks Intelligence Platform customers, potentially expanding data access across organizations without requiring additional technical training or resources.
  • Future roadmap includes expanding from single workspace access to account-wide asset visibility, positioning Databricks One as a centralized hub for business intelligence across the entire Databricks ecosystem.

08:42 📢 Justin – “I think the Databricks Free Edition is a really strong move on their part… I can play with it, see what it does and kick the tires on it and be interested in it as a hobbyist. And then I can bring it back to my day job and say, hey, I was using Databricks over the weekend and I did a thing and I think it could work for us at work. Being able to get access to these tools and these types of capabilities to play with, I think it’s a huge advantage. Everything’s moving so fast right now, that unless you have access to these tools, you feel like you’re left behind.”

AWS

10:45 AWS And National Lab Team Up To Deploy AI Tools In Pursuit Of Fusion Energy

  • AWS is partnering with Lawrence Livermore National Laboratory to apply machine learning to fusion energy research, specifically to predict and prevent plasma disruptions that can damage tokamak reactors.
  • The collaboration uses AWS cloud infrastructure to process massive datasets from fusion experiments.
  • The project leverages AWS SageMaker and high-performance computing resources to analyze terabytes of sensor data from fusion reactors, training models that can predict plasma instabilities milliseconds before they occur. This predictive capability could prevent costly reactor damage and accelerate fusion development timelines.
  • Cloud computing enables fusion researchers to scale their computational workloads dynamically, running complex simulations and ML training jobs that would be prohibitively expensive with on-premises infrastructure. 
  • AWS provides the elastic compute needed to process years of experimental data from multiple fusion facilities worldwide.
  • The partnership demonstrates how cloud-based AI/ML services are becoming essential for scientific computing applications that require massive parallel processing and real-time analysis. 
  • Fusion researchers can now iterate on models faster and share findings globally through cloud collaboration tools.
  • This application of cloud AI to fusion energy could accelerate the path to commercial fusion power by reducing experimental downtime and improving reactor designs through better predictive models. Success here would validate cloud platforms as critical infrastructure for next-generation energy research.

12:34 Use Model Context Protocol with Amazon Q Developer for context-aware IDE workflows | AWS DevOps & Developer Productivity Blog

  • Amazon Q Developer now supports Model Context Protocol (MCP) in VS Code and JetBrains IDEs, enabling developers to connect external tools like Jira and Figma directly into their coding workflow. 
  • This eliminates manual context switching between browser tabs and allows Q Developer to automatically fetch project requirements, design specs, and update task statuses.
  • MCP provides a standardized way for LLMs to integrate with applications, share context, and interact with APIs. Developers can configure MCP servers with either Global scope (across all projects) or Workspace scope (current IDE only), with granular permissions for individual tools including Ask, Always Allow, or Deny options.
  • The practical implementation shown demonstrates fetching Jira issues, moving tickets to “In Progress”, analyzing Figma designs for technical requirements, and implementing code changes based on combined context from both tools. This integration allows Q Developer to generate more accurate code by understanding both business requirements and design specifications simultaneously.
  • This feature builds on Q Developer’s existing agentic coding capabilities which already included executing shell commands and reading local files. The addition of MCP support extends these capabilities to any tool that implements the protocol, with AWS providing an open-source MCP Servers repository on GitHub for additional integrations.
  • For AWS customers, this reduces development friction by keeping developers in their IDE while maintaining full context from project management and design tools. The feature is available now in Q Developer’s IDE plugins with no additional cost beyond standard Q Developer pricing.
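To make the Global vs. Workspace scoping above concrete, here is a minimal sketch of an MCP server entry in the common “mcpServers” shape. The server name, package, and file location are hypothetical – this is not Q Developer’s documented schema, so check the AWS docs before copying it.

```python
import json

# Hypothetical MCP server entry in the shape most MCP clients accept.
# Where Q Developer reads this (e.g. a workspace-scoped vs. global mcp.json)
# is an assumption here -- consult the official documentation.
mcp_config = {
    "mcpServers": {
        "jira": {  # the server name is arbitrary
            "command": "npx",  # process the IDE launches for this server
            "args": ["-y", "@example/mcp-server-jira"],  # hypothetical package
            "env": {"JIRA_BASE_URL": "https://example.atlassian.net"},
        }
    }
}
print(json.dumps(mcp_config, indent=2))
```

Per-tool permissions (Ask, Always Allow, Deny) are then granted in the IDE on top of whichever servers a config like this registers.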

13:26 📢 Justin – “I mean, if you think Q Developer is the best tool for you, then more power to you, and I’m not going to stop you. But I am glad to see this get added to one more place.” 

14:08 AWS WAF now supports automatic application layer distributed denial of service (DDoS) protection – AWS

  • AWS WAF now includes automatic Layer 7 DDoS protection that detects and mitigates attacks within seconds, using machine learning to establish traffic baselines in minutes and identify anomalies without manual rule configuration.
  • The managed rule group works across CloudFront, ALB, and other WAF-supported services, reducing operational overhead for security teams who previously had to manually configure and tune DDoS protection rules.
  • Available to all AWS WAF and Shield Advanced subscribers in most regions, the service automatically applies mitigation rules when traffic deviates from normal patterns, with configurable responses including challenges or blocks.
  • This addresses a critical gap in application-layer protection where traditional network-layer DDoS defenses fall short, particularly important as L7 attacks become more sophisticated and frequent.
  • Pricing follows standard AWS WAF managed rule group costs, making enterprise-grade DDoS protection accessible without requiring dedicated security infrastructure or expertise.
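The baseline-then-anomaly approach described above can be illustrated with a toy sketch – this is not AWS’s actual detection logic, just the general idea of learning what normal per-minute traffic looks like and flagging windows that deviate sharply from it:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a simple traffic baseline (mean, stdev) from per-minute request counts."""
    return mean(samples), stdev(samples)

def is_anomalous(count, baseline, k=3.0):
    """Flag a window whose request count sits more than k sigmas above baseline."""
    mu, sigma = baseline
    return count > mu + k * sigma

# A few minutes of "normal" traffic establish the baseline
normal = [980, 1010, 1005, 995, 1020, 990, 1000, 1015]
baseline = build_baseline(normal)

print(is_anomalous(1010, baseline))   # ordinary fluctuation -> False
print(is_anomalous(25000, baseline))  # L7 flood-scale spike -> True
```

The managed rule group does the equivalent continuously and then applies the configured response (challenge or block) to the anomalous traffic.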

14:56 📢 Justin – “I have to say that I’ve used the WAF now quite a bit – as well as Shield and CloudFront. Compared to using Cloudflare, they’re so limited in what you can do with these things. I much prefer Cloudflare over trying to tune AWS WAF properly.”

19:27 Powertools for AWS Lambda introduces Bedrock Agents Function utility – AWS

  • Powertools for AWS Lambda now includes a Bedrock Agents Function utility that eliminates boilerplate code when building Lambda functions that respond to Amazon Bedrock Agent action requests. 
  • The utility handles parameter injection and response formatting automatically, letting developers focus on business logic instead of integration complexity.
  • This utility integrates seamlessly with existing Powertools features like Logger and Metrics, providing a production-ready foundation for AI applications. Available for Python, TypeScript, and .NET, it standardizes how Lambda functions interact with Bedrock Agents across different programming languages.
  • For organizations building agent-based AI solutions, this reduces development time and potential errors in the Lambda-to-Bedrock integration layer. The utility abstracts away the complex request/response patterns required for agent actions, making it easier to build and maintain serverless AI applications.
  • Developers can get started by updating to the latest version of Powertools for AWS Lambda in their preferred language. Since this is an open-source utility addition, there are no additional costs beyond standard Lambda and Bedrock usage fees.
  • This release signals AWS’s continued investment in simplifying AI application development by providing purpose-built utilities that handle common integration patterns. It addresses a specific pain point for developers who previously had to write custom code to properly format Lambda responses for Bedrock Agents.
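To see the boilerplate the utility removes, here is roughly the response envelope an action-group Lambda must hand back to a Bedrock Agent. The field names below are an approximation of the documented shape, not guaranteed to be exact – the point is that the Powertools utility builds this for you:

```python
import json

def make_agent_response(event, status_code, body):
    """Build the (approximate) envelope Bedrock Agents expect from an
    action-group Lambda; the Powertools utility generates this automatically."""
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": status_code,
            # The body must itself be a JSON *string*, nested by content type
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }

event = {"actionGroup": "orders", "apiPath": "/orders/{id}", "httpMethod": "GET"}
resp = make_agent_response(event, 200, {"status": "shipped"})
print(resp["response"]["httpStatusCode"])  # -> 200
```

Hand-writing (and hand-maintaining) this nesting for every action is exactly the kind of integration code the utility abstracts away.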

20:21 📢 Matt – “It’s great to see them making these more accessible to *not* subject matter experts and to the general developer. Would I want to take my full app to full production leveraging Powertools? No, but it’s good to let the standard developer who just wants to play with something, learn, and figure out how to do it get something up and running decently easily.”

20:53 Introducing Cedar Analysis: Open Source Tools for Verifying Authorization Policies | AWS Open Source Blog

  • AWS releases Cedar Analysis as open source tools for verifying authorization policies, addressing the challenge of ensuring fine-grained access controls work correctly across all scenarios rather than just test cases. The toolkit includes a Cedar Symbolic Compiler that translates policies into mathematical formulas and a CLI tool for policy comparison and conflict detection.
  • The technology uses SMT (Satisfiability Modulo Theories) solvers and formal verification with Lean to provide mathematically proven soundness and completeness, ensuring analysis results accurately reflect production behavior. 
  • This approach can answer questions like whether two policies are equivalent, if changes grant unintended permissions, or if policies contain conflicts or redundancies.
  • Cedar itself has gained significant traction with 1.17 million downloads and production use by companies like MongoDB and StrongDM, making robust analysis tools increasingly important as applications scale. The open source release under Apache 2.0 license allows developers to independently verify policies and researchers to build upon the formal methods foundation.
  • The practical example demonstrates how subtle policy refactoring errors can be caught – splitting a single policy into multiple policies accidentally restricted owner access to private photos, which the analysis tool identified before production deployment. This capability helps prevent authorization bugs that could lead to security incidents or access disruptions.
  • For AWS customers using services like Verified Permissions (which uses Cedar), this provides additional confidence in policy correctness and a path for building custom analysis tools tailored to specific organizational needs. The formal verification aspect also positions Cedar as a research platform for advancing authorization system design.
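The refactoring bug described above is easy to reproduce in miniature. Cedar’s symbolic compiler proves (non-)equivalence over all possible inputs via SMT; over a tiny finite domain we can illustrate the same question with brute force. This is a sketch of the idea, not Cedar’s actual syntax or semantics:

```python
from itertools import product

def original(principal, owner, public):
    # permit when the requester owns the photo, or the photo is public
    return principal == owner or public

def refactored(principal, owner, public):
    # buggy split: the owner clause accidentally picked up the `public` condition
    return public or (principal == owner and public)

principals = ["alice", "bob"]
counterexamples = [
    (p, o, pub)
    for p, o, pub in product(principals, principals, [True, False])
    if original(p, o, pub) != refactored(p, o, pub)
]
print(counterexamples)  # owners denied access to their own private photos
```

Enumeration only works on toy domains; the value of the symbolic approach is that the same “find a request where the two policies disagree” question is answered for the unbounded space of real requests.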

22:57 📢 Justin – “We’re using StrongDM in the day job, and it is very nice to see Cedar getting used in lots of different ways, particularly the mathematical proofs being used in policies.”

GCP

23:51 Identity and access management failure in Google Cloud causes widespread internet service disruptions – SiliconANGLE

  • A misconfiguration in Google Cloud’s IAM systems caused widespread outages affecting App Engine, Firestore, Cloud SQL, BigQuery, and Memorystore, demonstrating how a single identity management failure can cascade across multiple cloud services and impact thousands of businesses globally.
  • The incident highlighted the interconnected nature of modern cloud infrastructure as services like Cloudflare Workers, Spotify, Discord, Shopify, and UPS experienced partial or complete downtime due to their dependencies on Google Cloud components.
  • Google Workspace applications including Gmail, Drive, Docs, Calendar, and Meet all experienced failures, showing how IAM issues can affect both infrastructure services and end-user applications simultaneously.
  • The outage underscores the critical importance of IAM redundancy and configuration management in cloud environments, as even major providers like Google can experience service-wide disruptions from a single misconfiguration.
  • While AWS appeared largely unaffected, Amazon’s Twitch service may have experienced issues due to network-level interdependencies, illustrating how cloud outages can have ripple effects across provider boundaries through shared DNS, CDN, or authentication services.
  • The full RCA is available here.

26:11 📢 Matt – “For the SRE team at Google – within 2 minutes they were already triaging, and within 10 minutes they’d identified the root cause. That’s an impressive response time.”

28:28 Cloudflare service outage June 12, 2025

  • Cloudflare experienced a 2 hour 28 minute global outage on June 12, 2025 affecting Workers KV, WARP, Access, Gateway, Images, Stream, Workers AI, Turnstile, and other critical services due to a third-party storage provider failure that exposed architectural vulnerabilities in their infrastructure.
  • The incident revealed a critical single point of failure in Workers KV’s central data store, on which many Cloudflare products depend, despite Workers KV being designed as a “coreless” service that should run independently across all locations.
  • During the outage window, 91% of Workers KV requests failed, cascading failures across dependent services while core services like DNS, Cache, proxy, and WAF remained operational, highlighting the blast radius of shared infrastructure dependencies.
  • Cloudflare is accelerating migration of Workers KV to their own R2 storage infrastructure and implementing progressive namespace re-enablement tooling to prevent future cascading failures and reduce reliance on third-party providers.
  • This marks at least the third significant R2-related outage in recent months (following incidents on March 21 and February 6, 2025), raising questions about the stability of Cloudflare’s storage infrastructure during their architectural transition period.

29:31 📢 Justin – “I think the failure here is they’re running the entire KV on top of GCS or GCP in a way where they were impacted by this, when it should be blast-radiused out to multiple clouds. Cloudflare is a partner of AWS, GCP, and Azure. They should be able to make things redundant – because I don’t necessarily know that their infrastructure is going to be better than anyone else’s infrastructure.”

32:53 Securing open-source credentials at scale | Google Cloud Blog

  • Google Cloud has developed an automated tool that scans open-source packages and Docker images for exposed GCP credentials like API keys and service account keys, processing over 5 billion files across hundreds of millions of artifacts from repositories like PyPI, Maven Central, and DockerHub.
  • The system detects and reports leaked credentials within minutes of publication, matching the speed at which malicious actors typically exploit them, with automatic remediation options including disabling compromised service account keys based on customer-configured policies.
  • Unlike GitHub and GitLab’s source code scanning, this tool specifically targets built packages and container images where credentials often hide in configuration files, compiled binaries, and build scripts – areas traditionally overlooked in security scanning.
  • Google plans to expand beyond GCP credentials to include third-party credential scanning later this year, positioning this as part of their broader deps.dev ecosystem for open-source security analysis.
  • For GCP customers publishing open-source software, this provides free automated protection against credential exposure without requiring additional tooling or workflow changes, addressing what Mandiant reports as the second-highest cloud attack vector at 16% of investigations.
  • The moral of the story? Please patch. We know it’s a pain. But please, patch. 
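At its core, this kind of scanning is pattern-matching built artifacts for key-shaped strings. Here is a toy sketch using the well-known Google API key prefix; real detectors also cover service-account JSON keys, scan binary layers, and validate candidates against the provider before reporting:

```python
import re

# Well-known prefix pattern for Google API keys (AIza + 35 key characters).
# Service-account keys would need a separate detector, e.g. matching
# "private_key" PEM blocks inside JSON files.
GCP_API_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan(text):
    """Return any substrings of `text` shaped like Google API keys."""
    return GCP_API_KEY.findall(text)

# A config file accidentally baked into a published package
artifact = 'config = {"api_key": "AIza' + "A" * 35 + '"}'
print(scan(artifact))
```

The hard part Google is solving isn’t the regex – it’s running this across 5 billion files within minutes of publication and wiring the hits to automatic remediation.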

33:55 📢 Matt – “I feel like AWS has had this, where they scan the GitHub commits, for years – so I appreciate them doing it, don’t get me wrong, but also, I feel like this has been done before?”

35:48 Google’s Cloud Location Finder unifies multi-cloud location data | Google Cloud Blog

  • Google Cloud Location Finder provides a unified API for accessing location data across Google Cloud, AWS, Azure, and Oracle Cloud Infrastructure, eliminating the need to manually track region information across multiple providers. The service is available at no cost via REST APIs and gcloud CLI.
  • The API returns rich metadata including region proximity data (currently only for GCP regions), territory codes for compliance requirements, and carbon footprint information to support sustainability initiatives. 
  • Data freshness is maintained at 24 hours for active regions with automatic removal of deprecated locations.
  • Key use cases include optimizing multi-cloud deployments by identifying the nearest GCP region to existing AWS/Azure/OCI infrastructure, ensuring data residency compliance by filtering regions by territory, and automating location selection in multi-cloud applications. This addresses a common pain point where organizations maintain hard-coded lists of cloud regions across providers.
  • While AWS and Azure offer their own region discovery APIs, Google’s approach of providing cross-cloud visibility in a single service is unique among major cloud providers. The inclusion of sustainability metrics like carbon footprint data aligns with Google’s broader environmental commitments.
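The “nearest region” use case above is essentially a great-circle distance lookup, which the Location Finder API does for you server-side. A sketch of the underlying idea, with rough, illustrative coordinates rather than API output:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Approximate coordinates for a few GCP regions (illustrative only)
gcp_regions = {
    "us-east4": (38.9, -77.4),     # Northern Virginia
    "us-central1": (41.2, -95.9),  # Iowa
    "europe-west1": (50.5, 3.9),   # Belgium
}

# Workload running in AWS us-east-1 (roughly Northern Virginia)
aws_us_east_1 = (39.0, -77.5)
nearest = min(gcp_regions, key=lambda r: haversine_km(gcp_regions[r], aws_us_east_1))
print(nearest)  # -> us-east4
```

The service’s advantage over a hard-coded table like this one is that the region list, proximity data, and compliance metadata stay current without maintenance.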

37:39 C4D VMs: Unparalleled performance for business workloads | Google Cloud Blog

  • Google’s C4D VMs are now generally available, powered by 5th Gen AMD EPYC processors (Turin) and delivering up to 80% higher throughput for web serving and 30% better performance for general computing workloads compared to C3D. 
  • The new instances scale up to 384 vCPUs and 3TB of DDR5 memory, with support for Hyperdisk storage offering up to 500K IOPS.
  • C4D introduces Google’s first AMD-based Bare Metal instances (coming in the next few weeks), providing direct server access for workloads requiring custom hypervisors or specialized licensing needs. The instances also feature next-gen Titanium Local SSD with 35% lower read latency than previous generations.
  • Performance benchmarks show C4D delivers 25% better price-performance than C3D for general computing and up to 20% better than comparable offerings from other cloud providers. For database workloads like MySQL and Redis, C4D shows 35% better price-performance than competitive VMs, with MySQL seeing up to 55% faster query processing.
  • The new VMs support AVX-512 with a 512-bit datapath and 50% more memory channels, making them well-suited for CPU-based AI inference workloads with up to 75% price-performance improvement for recommendation inference. C4D also includes confidential computing support via AMD SEV for regulated workloads.
  • C4D is available in 12 regions and 28 zones at launch, with a 30-day uptime window between planned maintenance events. Early adopters like AppLovin report 40% performance improvements, while Verve Group sees 191% faster ad serving compared to N2D instances.

38:18 Introducing G4 VM with NVIDIA RTX PRO 6000 | Google Cloud Blog

  • Google Cloud is first to market with G4 VMs featuring NVIDIA RTX PRO 6000 Blackwell GPUs, combining 8 GPUs with AMD Turin CPUs (up to 384 vCPUs) and delivering 4x compute/memory and 6x memory bandwidth compared to G2 VMs. This positions GCP ahead of AWS and Azure in offering Blackwell-based instances for diverse workloads beyond just AI training.
  • The G4 instances target a broader range of use cases than typical AI-focused GPUs, including cost-efficient inference, robotics simulations, generative AI content creation, and next-generation game rendering with 2x ray-tracing performance. Key customers include Snap for LLM inference, WPP for robotics simulation, and major gaming companies for next-gen rendering.
  • With 768GB GDDR7 memory, 12 TiB local SSD, and support for Multi-Instance GPU (MIG), G4 VMs enable running multiple workloads per GPU for better cost efficiency. The instances integrate with Vertex AI, GKE, and Hyperdisk (500K IOPS, 10GB/s throughput) for complete AI inference pipelines.
  • G4 supports NVIDIA Omniverse workloads natively, opening opportunities in manufacturing, automotive, and logistics for digital twins and real-time simulation. The combination of high CPU-to-GPU ratio (48:1) and Titanium’s 400 Gbps networking makes it suitable for complex simulations where CPUs orchestrate graphics workloads.
  • Currently in preview with global availability by year-end through Google Cloud Sales representatives. Pricing not disclosed, but positioning suggests premium pricing for specialized workloads requiring both AI and graphics capabilities.

Azure

39:40 Public Preview: Encrypt Premium SSD v2 and Ultra Disks with Cross Tenant Customer Managed Keys

  • Cross-Tenant customer-managed Keys (CMK) for Premium SSD v2 and Ultra disk are now in preview in select regions.
  • Encrypting managed disks with cross-tenant CMK enables encrypting the disk with a CMK hosted in an Azure Key Vault in a different Microsoft Entra tenant than the disk. 
  • This will allow customers leveraging SaaS solutions that support CMK to use cross-tenant CMK with Premium SSD v2 and Ultra Disks without ever giving up complete control. (We have doubts.)

40:31 📢 Justin – “The only way this makes sense to me is if you have a SaaS application where you’re getting single servers or a small cluster of servers per tenant – which I wouldn’t want to manage. But if that’s what you have, then this may make sense to you. It has a pretty limited use case, in my opinion.”

42:10 Microsoft Cost Management updates—May 2025 (summary) | Microsoft Community Hub

  • Azure Carbon Optimization reaches general availability, allowing organizations to track and reduce their cloud carbon footprint alongside cost optimization efforts. 
  • This positions Azure competitively with AWS’s Customer Carbon Footprint Tool and GCP’s Carbon Footprint reporting.
  • Export to Microsoft Fabric enters limited preview, enabling direct integration of Azure cost data into Microsoft’s unified analytics platform. 
  • This streamlines FinOps workflows by eliminating manual data transfers between Cost Management and analytics tools.
  • Free Azure SQL Managed Instance offer launches in GA, providing a no-cost entry point for database migrations. 
  • This directly challenges AWS RDS Free Tier and could accelerate enterprise SQL Server migrations to Azure.
  • Network Optimized Azure Virtual Machines enter preview, promising reduced network latency and improved throughput for data-intensive workloads. These specialized VMs target high-performance computing and real-time analytics scenarios.
  • Smart VM Defaults in AKS reaches GA, automatically selecting cost-optimized VM sizes for Kubernetes workloads. 
  • This feature reduces overprovisioning and helps organizations avoid common AKS sizing mistakes that inflate costs. 

42:49 📢 Matt – “I doubt they’re giving you Enterprise SQL. I assume it’s SQL Express or SQL Standard – but they’re not giving you Enterprise SQL.”

44:20 Next edit suggestions available in Visual Studio – Visual Studio Blog

  • GitHub Copilot’s Next Edit Suggestions (NES) in Visual Studio 2022 17.14 predicts and suggests your next code edit anywhere in the file, not just at cursor location, using AI to analyze previous edits and suggest insertions, deletions, or mixed changes.
  • The feature goes beyond simple code completion by understanding logical patterns in your editing flow, such as refactoring a 2D Point class to 3D or updating legacy C++ syntax to modern STL, making it particularly useful for systematic code transformations.
  • NES presents suggestions as inline diffs with red/green highlighting and provides navigation hints with arrows when the suggested edit is on a different line, allowing developers to Tab through related changes across the file.
  • Early user feedback indicates accuracy issues with less common frameworks like Pulumi in C# and outdated training data for rapidly evolving APIs, highlighting the challenge of AI suggestions for niche or fast-changing technologies.
  • While this enhances Visual Studio’s AI-assisted development capabilities, the feature currently appears limited to Visual Studio users rather than being a cloud-based service accessible across platforms or IDEs.

45:36 📢 Matt – “It’s a pretty cool feature and I like the premise of it, especially when you are refactoring legacy code or anything along those lines where it’s like, hey, don’t forget this thing over here – because on the flip side, while it’s distracting, it also would be fairly nice to not run everything, compile it, and then have the error because I forgot to refactor this one section out.”

Oracle

46:25  Oracle soars after raising annual forecast on robust cloud services demand | Reuters

  • Oracle raised its fiscal 2026 revenue forecast to $67 billion, projecting 16.7% annual growth driven by cloud services demand, with total cloud growth expected to accelerate from 24% to over 40%.
  • Oracle Cloud Infrastructure (OCI) is gaining traction through multi-cloud strategies and integration with Oracle’s enterprise applications, though this growth primarily benefits existing Oracle customers rather than attracting new cloud-native workloads.
  • The company’s approach of embedding generative AI capabilities into its cloud applications at no additional cost contrasts with AWS, Azure, and GCP’s usage-based AI pricing models, potentially lowering adoption barriers for Oracle’s enterprise customer base.
  • Fourth quarter cloud services revenue reached $11.70 billion with 14% year-over-year growth, suggesting Oracle is capturing market share but still trails the big three cloud providers who report quarterly cloud revenues of $25+ billion.
  • Oracle’s growth story depends heavily on enterprises already invested in Oracle databases and applications migrating to OCI, making it less relevant for organizations without existing Oracle dependencies.

48:18📢 Justin – “Oracle is actually a really simple cloud. It is just Solaris boxes, as a cloud service to you. It’s all very server-based. That’s why they have iSCSI and they have fiber channels and they have all these things that are very data center centric. So if you love the data center, and you just want a cloud version of it, Oracle cloud is not bad for you. Or if you have a ton of egress traffic, the cost advantages of their networking is far superior to any of the other cloud providers. So there are benefits as much as I hate to say it.”

49:38 Oracle and AMD Collaborate to Help Customers Deliver Breakthrough Performance for Large-Scale AI and Agentic Workloads

  • Oracle announces AMD Instinct MI355X GPUs on OCI, claiming 2X better price-performance than previous generation and offering zettascale AI clusters with up to 131,072 GPUs for large-scale AI training and inference workloads.
  • This positions Oracle as one of the first hyperscalers to offer AMD’s latest AI accelerators, though AWS, Azure, and GCP already have established GPU offerings from NVIDIA and their own custom silicon, making Oracle’s differentiation primarily about AMD partnership and pricing.
  • The MI355X delivers triple the compute power and 50% more high-bandwidth memory than its predecessor, with OCI’s RDMA cluster network architecture supporting the massive 131,072 GPU configuration for customers needing extreme scale.
  • Oracle emphasizes open-source compatibility and flexibility, which could appeal to customers wanting alternatives to NVIDIA’s CUDA ecosystem, though the real test will be whether the price-performance claims hold up against established solutions.
  • The announcement targets customers running large language models and agentic AI workloads, but adoption will likely depend on actual benchmarks, software ecosystem maturity, and whether Oracle can deliver on the promised cost advantages.

50:52 Introducing Vanity Urls On Autonomous DB

  • Oracle now allows custom domain names for APEX applications on Autonomous Database, eliminating the need for awkward database-specific URLs like apex.oraclecloud.com/ords/f?p=12345 in favor of cleaner addresses like myapp.company.com.
  • This vanity URL feature requires configuring DNS CNAME records and SSL certificates through Oracle’s Certificate Service, adding operational complexity compared to AWS CloudFront or Azure Front Door which handle SSL automatically.
  • The feature is limited to paid Autonomous Database instances only, excluding Always Free tier users, which may restrict adoption for developers testing or running small applications.
  • While this brings Oracle closer to parity with other cloud providers’ application hosting capabilities, the implementation requires manual certificate management and DNS configuration that competitors have largely automated.
  • The primary benefit targets enterprises already invested in Oracle’s ecosystem who need professional-looking URLs for customer-facing APEX applications without exposing underlying database infrastructure details.

Closing

And that is the week in the cloud! Visit our website, the home of The Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net – or tweet at us with the hashtag #theCloudPod
