301: The Cloud Pod PartyRocks in the House Tonight


Welcome to episode 301 of The Cloud Pod – where the forecast is always cloudy! Matt and Justin are bringing you a party – no, a BOULDER – of news today. Seriously. So. Much. News. We’ve got updates on Google’s legal woes, OpenAI’s new o3 and o4-mini models, Nova Reel, and even some updates from KubeCon EU! It’s truly a global episode, and we’re glad you’re here with us in the cloud. Here’s to 300 more episodes!

Titles we almost went with this week:

  • 🧐My Monopoly board says Google Goes to Jail, Does Not Collect 200 Million dollars
  • 😵‍💫OpenAI confuses everyone with updates to o3 models… similar to Apple M3 Ultras being announced beside M4 chips
  • 🧑‍💻GitHub finally gets the good coding vibes
  • 🎉We Are gonna PartyRock like it’s 1999 
  • 🦔Nova SONIC THE HEDGEHOG
  • 🦙Azure gets 4 llamas

A big thanks to this week’s sponsor:

We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our Slack channel for more info. 

General News 

13:55 Google holds illegal monopolies in ad tech, US judge finds

  • The Department of Justice has won a ruling against Google, paving the way for U.S. antitrust prosecutors to seek a breakup of its ad products. 
  • Google was found liable for willfully acquiring and maintaining monopoly power in the markets for publisher ad servers and ad exchanges, which sit between buyers and sellers. 
  • Now, hearings will be scheduled to determine what Google must do to restore competition in those markets, such as selling off parts of its business.
  • US AG Pamela Bondi called the ruling “a landmark victory in the ongoing fight to stop Google from monopolizing the digital public square.”
  • Google says it will appeal the ruling, pointing out that it won half the case. 
  • The DOJ says that Google should have to sell off at least Google Ad Manager, which includes the company’s publisher ad server and ad exchange.
  • This ruling is on top of last year’s search monopoly ruling, where the DOJ is separately pushing for Google to sell off the Chrome browser as a remedy. 

15:02 📢 Matthew – “It feels like a replay of when I was growing up and Microsoft Windows and everything…it’s good to see that they are looking at these companies to see if they have monopolies.” 

15:50 DOJ’s sweeping remedies would harm America’s economy and technological leadership 

  • Google says that the DOJ’s 2020 search distribution lawsuit is a backwards-looking case at a time of intense competition and unprecedented innovation. With new services like ChatGPT and DeepSeek thriving, Google argues, the DOJ’s sweeping remedy proposals are unnecessary and harmful. 
  • Google says at trial they will show how the DOJ’s unprecedented proposals go miles beyond the court’s decision and would hurt America’s consumers, economy, and technological leadership. 
  • Google has made the following points:
    • The DOJ’s proposal would make it harder for you to get to services you prefer. People use Google because they want to, not because they have to. DOJ’s proposal would force browsers and phones to default to search services like Microsoft’s Bing, making it harder for you to access Google.
    • The DOJ’s proposal to prevent us from competing for the right to distribute Search would raise prices and slow innovation. Device makers and web browsers (like Mozilla’s Firefox) rely on the revenue they receive from search distribution. Removing that revenue would raise the cost of mobile phones and handicap the web browsers that you use every day.
    • The DOJ’s proposal would force Google to share your most sensitive and private search queries with companies you may never have heard of, jeopardizing your privacy and security. Your private information would be exposed, without your permission, to companies that lack Google’s world-class security protections, where it could be exploited by bad actors.
    • The DOJ’s proposal would also hamstring how we develop AI, and have a government-appointed committee regulate the design and development of our products. That would hold back American innovation at a critical juncture. We’re in a fiercely competitive global race with China for the next generation of technology leadership, and Google is at the forefront of American companies making scientific and technological breakthroughs.
    • The DOJ’s proposal to split off Chrome and Android — which we built at great cost over many years and make available for free — would break those platforms, hurt businesses built on them, and undermine security. Google keeps more people safe online than any other company in the world. Breaking off Chrome and Android from our technical, security, and operational infrastructure would not just introduce cybersecurity and even national security risks, but also increase the cost of your devices.
  • Instead, Google is recommending its own set of remedies, which it links to in the post. 
    • Browser agreements:
      • Browser companies like Apple and Mozilla should continue to have the freedom to do deals with whatever search engine they think is best for their users. The Court accepted that browser companies “occasionally assess Google’s search quality relative to its rivals and find Google’s to be superior.” And for companies like Mozilla, these contracts generate vital revenue.
      • Our proposal allows browsers to continue to offer Google Search to their users and earn revenue from that partnership. But it also provides them with additional flexibility: It would allow for multiple default agreements across different platforms (e.g., a different default search engine for iPhones and iPads) and browsing modes, plus the ability to change their default search provider at least every 12 months (the court’s decision specifically referred to a 12-month agreement as “presumed reasonable” under antitrust law).
    • Android contracts:
      • Our proposal means device makers have additional flexibility in preloading multiple search engines, and preloading any Google app independently of preloading Search or Chrome. Again, this will give our partners additional flexibility and our rivals like Microsoft more chances to bid for placement.
    • Oversight and compliance:
      • Our proposal includes a robust mechanism to ensure we comply with the Court’s order without giving the Government extensive power over the design of your online experience.

17:03 📢 Justin – “Apple phones keep going up regardless of this, so I don’t think I’m getting any benefit from the money that they’re paying Apple to make Safari or Google the default search engine. I’m pretty sure Apple’s just pocketing that money and charging me more.”

AI Is Going Great – Or How ML Makes Its Money 

24:49 Introducing OpenAI o3 and o4-mini 

  • OpenAI released o3 and o4-mini, the latest in their o-series models trained to think longer before responding.  
  • OpenAI says these are the smartest models they have released to date, representing a step change in ChatGPT’s capabilities for everyone from curious users to advanced researchers. 
  • For the first time the reasoning models can agentically use and combine every tool within ChatGPT — this includes searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images.  
  • OpenAI o3 is the most powerful reasoning model that pushes the frontier across coding, math, science, visual perception, and more. 
    • It sets a new SOTA on benchmarks including Codeforces, SWE-bench (without building a custom model-specific scaffold), and MMMU.   
    • According to external experts, o3 makes 20 percent fewer major errors than OpenAI o1 on difficult, real-world tasks, especially excelling in areas like programming, business/consulting, and creative ideation. 
  • OpenAI o4-mini is a smaller model optimized for fast, cost-efficient reasoning – it achieves remarkable performance for its size and cost, particularly in math, coding and visual tasks.  
  • In expert evaluations, the o4-mini outperforms its predecessor, o3-mini, on non-STEM tasks as well as domains like data science.  
  • Thanks to its efficiency, o4-mini supports significantly higher usage limits than o3, making it a strong high-volume, high-throughput option for questions that benefit from reasoning. 

26:16 📢 Justin – “So, the thing about this is everyone thought they’d have GPT 4.5 out by now and they don’t. And they’re now doing major updates to both the o3 and the o4-mini. You know, it sort of feels like they’re just making optimizations to the current models because they don’t have anything better to provide. Yeah, they need something to respond to Gemini 2.5 and Claude Sonnet 3.7 and Mistral – whatever version they’re on – so this feels like maybe OpenAI is losing their way just a little bit… it definitely feels like things aren’t where they want them to be.”

30:51 Introducing GPT-4.1 in the API

  • In addition to the new models we just talked about, OpenAI has also released three new models in the API: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. 
  • These models outperform GPT-4o and GPT-4o mini across the board, with major gains in coding and instruction following. They also have larger context windows – supporting up to 1 million tokens – are better at actually using that context thanks to improved long-context comprehension, and feature a more recent knowledge cutoff of June 2024. 
  • GPT-4.1 excels at:
    • Coding: GPT‑4.1 scores 54.6% on SWE-bench Verified, improving by 21.4% over GPT‑4o and 26.6% over GPT‑4.5—making it a leading model for coding.
    • Instruction following: On Scale’s MultiChallenge benchmark, a measure of instruction-following ability, GPT‑4.1 scores 38.3%, a 10.5% increase over GPT‑4o.
    • Long context: On Video-MME, a benchmark for multimodal long-context understanding, GPT‑4.1 sets a new state-of-the-art result – scoring 72.0% on the long, no-subtitles category, a 6.7% improvement over GPT‑4o.
  • Note that GPT-4.1 will only be available via the API, and OpenAI will also begin deprecating GPT-4.5 Preview in the API, as GPT-4.1 offers improved or similar performance on many key capabilities at much lower cost and latency. (A minimal API-call sketch follows below.) 
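
Since these are API-only, here’s a minimal sketch of what calling GPT-4.1 looks like with the official OpenAI Python SDK (model names per the announcement; assumes OPENAI_API_KEY is set in your environment):

```python
# Minimal GPT-4.1 call via the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4.1-mini",  # or "gpt-4.1" / "gpt-4.1-nano"
    messages=[{"role": "user", "content": "Write a haiku about a 1M-token context window."}],
)
print(resp.choices[0].message.content)
```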

Things seem really confused and muddled over at OpenAI, don’t they? 

Cloud Tools 

34:57 Vibe coding with GitHub Copilot: Agent mode and MCP support rolling out to all VS Code users

  • GitHub Copilot is rolling out a whole new agentic coding experience.
  • Agent Mode in VS Code is rolling out to all users, now complete with MCP support that unlocks access to any context or capabilities you want. They are also releasing the open-source, local GitHub MCP server, giving you the ability to add GitHub functionality to any LLM tool that supports MCP. (A toy MCP server sketch follows after this list.)
  • In keeping with their commitment to offer multi-model choice, they are making Anthropic Claude 3.5 Sonnet, 3.7 Sonnet, and 3.7 Sonnet Thinking, Google Gemini 2.0 Flash, and OpenAI o3-mini generally available via premium requests, included in all paid Copilot tiers.
  • These premium requests are in addition to the unlimited agent mode requests, context-driven chat, and code completions that all paid plans get with the base model. With the new Pro+ tier, individual developers get the most out of the latest models with Copilot. 
  • In addition, they have announced the GA of Copilot Code Review agent, plus the GA of Next Edit Suggestions so you can tab tab tab your way to coding glory. 
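
To make MCP a little less abstract: the GitHub MCP server itself is a prebuilt binary, but the protocol is easy to play with. Here’s a toy server built on the official MCP Python SDK (the mcp package) – the tool and its data are invented for illustration and have nothing to do with GitHub’s actual server:

```python
# A toy MCP server sketch using the official MCP Python SDK (pip install mcp).
# Any MCP-aware client (e.g., VS Code agent mode) can attach over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-repo-tools")

@mcp.tool()
def count_open_issues(repo: str) -> int:
    """Hypothetical tool: return the number of open issues for a repo."""
    # A real tool would call the GitHub API; hard-coded data keeps the sketch self-contained.
    fake_db = {"octocat/hello-world": 42}
    return fake_db.get(repo, 0)

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```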

37:30 📢 Matt – “I use the pro at my day job and it works fine for me. I haven’t needed the premium access models or haven’t found the full reason for that yet.”

AWS

39:00 Announcing the general availability of Amazon VPC Route Server 

  • AWS announces the general availability of VPC Route Server to simplify dynamic routing between virtual appliances in your VPC.  
  • Route Server lets you advertise routing information through BGP from virtual appliances and dynamically update the VPC route tables associated with subnets and internet gateways. 
  • Before this, you had to create custom scripts or use virtual routers with overlay networks to dynamically update VPC route tables. The update removes the operational overhead of creating and maintaining overlay networks or custom scripts, and offers a managed solution for dynamically updating routes in route tables. 
  • With VPC Route Server, you deploy endpoints inside your VPC and peer them with your virtual appliances to advertise routes using BGP. (A rough sketch of the workflow follows below.)
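
Here’s a rough boto3 sketch of that workflow – create the route server, drop an endpoint in a subnet, peer it with an appliance over BGP, and propagate routes. (Method names follow the EC2 Route Server API at GA; IDs and ASNs are placeholders, so double-check against current boto3 docs before relying on this.)

```python
# Hedged sketch: wiring up VPC Route Server with boto3 (all IDs are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the route server and associate it with your VPC.
rs = ec2.create_route_server(AmazonSideAsn=65000)["RouteServer"]
ec2.associate_route_server(RouteServerId=rs["RouteServerId"], VpcId="vpc-0123456789abcdef0")

# 2. Create an endpoint in a subnet; appliances peer with this endpoint.
ep = ec2.create_route_server_endpoint(
    RouteServerId=rs["RouteServerId"], SubnetId="subnet-0123456789abcdef0"
)["RouteServerEndpoint"]

# 3. Peer the endpoint with a virtual appliance over BGP.
ec2.create_route_server_peer(
    RouteServerEndpointId=ep["RouteServerEndpointId"],
    PeerAddress="10.0.1.10",  # the appliance's IP in that subnet
    BgpOptions={"PeerAsn": 65001},
)

# 4. Propagate the BGP-learned routes into a route table.
ec2.enable_route_server_propagation(
    RouteServerId=rs["RouteServerId"], RouteTableId="rtb-0123456789abcdef0"
)
```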

39:56 📢 Matt – “It is not a cheap service, and it’s priced per endpoint – so if you’re going to look at this (and it will simplify a lot of the stuff in your life), be careful of the pricing.” 

40:09 PartyRock introduces image playground, powered by Amazon Nova Canvas  

  • The party keeps rocking over in PartyRock, which now has an image playground that leverages the Amazon Nova Canvas foundation model to transform ideas into customizable images.  
  • You can access this directly through the images section, featuring an intuitive interface and comprehensive customization options. 
  • Justin keeps hoping they are going to kill this now that they have Amazon Nova.

40:53 AWS simplifies Amazon VPC Peering billing 

  • Good news: Amazon is making it easier to understand inter-AZ VPC peering usage within the same AWS region by introducing a new usage type on your bill. 
  • The bad news is that it’s effective immediately, meaning it will likely break – or show up in – your FinOps tooling in a new and exciting way. May the odds be ever in your favor. (A Cost Explorer query sketch follows below.)
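
If you want to watch for the new line item programmatically, here’s a hedged Cost Explorer sketch with boto3 – note the usage-type string below is a placeholder, since the exact value varies by region pair; pull the real one from your bill first:

```python
# Hedged sketch: querying Cost Explorer for a specific usage type with boto3.
import boto3

ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-04-01", "End": "2025-05-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost", "UsageQuantity"],
    # Hypothetical usage-type value -- check your bill for the real string.
    Filter={"Dimensions": {"Key": "USAGE_TYPE", "Values": ["USE1-VpcPeering-In-Bytes"]}},
)
for period in resp["ResultsByTime"]:
    print(period["TimePeriod"], period["Total"]["UnblendedCost"]["Amount"])
```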

41:21 📢 Justin – “EC2-Other is the bane of my existence.”

42:48 Announcing AWS Security Reference Architecture Code Examples for Generative AI

  • For our security practitioners, AWS is giving you new Security Reference Architecture (SRA) code examples for securing generative AI workloads. The examples cover two capabilities, focused on secure model inference and RAG implementations. 
  • The new code examples are available in the AWS SRA examples repository and include ready-to-deploy CloudFormation templates to help application developers get started with network segmentation, identity management, encryption, prompt injection detection, and logging and monitoring. (A deployment sketch follows below.)
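
Since these ship as CloudFormation templates, getting started is mostly a create-stack call. A minimal sketch (the template URL is a placeholder – grab the real ones from the SRA examples repo):

```python
# Sketch: deploying one of the SRA generative-AI CloudFormation templates with boto3.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="sra-genai-secure-inference",
    # Placeholder URL -- use the template from the SRA examples repository.
    TemplateURL="https://my-bucket.s3.amazonaws.com/sra-genai-example.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the stacks create IAM roles
)
cfn.get_waiter("stack_create_complete").wait(StackName="sra-genai-secure-inference")
```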

43:31 📢 Justin – “I definitely was curious how they were deploying some of the things, like Guardrails, and some of the other things that have some good examples – so if you are confused about some of the things, this is a pretty good repo to poke around in.” 

45:30 Introducing Amazon Nova Sonic: A New Gen AI Model for Building Voice Applications and Agents

  • Amazon is introducing Amazon Nova Sonic, a new foundation model that unifies speech understanding and speech generation into a single model to enable more human-like voice conversations in AI applications.  
  • Available in Bedrock via a new bi-directional streaming API, the model simplifies the development of voice applications, such as customer service call automation and AI agents, across a broad range of industries including travel, education, healthcare, and entertainment.
  • “From the invention of the world’s best personal AI assistant with Alexa, to developing AWS services like Connect, Lex, and Polly that are used across a wide range of industries, Amazon has long believed that voice-powered applications can make all of our customers’ lives better and easier,” said Rohit Prasad, SVP of Amazon Artificial General Intelligence. “With Amazon Nova Sonic, we are releasing a new foundation model in Amazon Bedrock that makes it simpler for developers to build voice-powered applications that can complete tasks for customers with higher accuracy, while being more natural, and engaging.”
  • Nova Sonic solves the challenges of traditional approaches to building voice-enabled applications, which require complex orchestration of multiple models.  
  • The unified model architecture of Nova Sonic delivers speech understanding and generation without requiring a separate model for each step.  
  • This unification enables the model to adapt its generated voice response to the acoustic context and the spoken input, resulting in a more natural dialog. 

48:05 Amazon Nova Reel 1.1: Featuring up to 2-minute multi-shot videos  

  • Amazon is announcing that Nova Reel 1.1 is available, which provides quality and latency improvements in 6-second single-shot video generation compared to Nova Reel 1.0. 
  • The update generates multi-shot videos up to 2 minutes long, maintaining a consistent style across each shot. 
  • You can provide a single prompt for up to a 2-minute video composed of 6-second shots, or design each shot individually with custom prompts. (A job-submission sketch follows below.)
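
Video generation runs through Bedrock’s async invoke API. Here’s a hedged sketch of kicking off a 2-minute multi-shot job (the payload field names and model ID follow the launch docs – verify them before relying on this):

```python
# Hedged sketch: starting a Nova Reel 1.1 multi-shot video job via Bedrock.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
job = bedrock.start_async_invoke(
    modelId="amazon.nova-reel-v1:1",  # assumed Nova Reel 1.1 model ID
    modelInput={
        "taskType": "MULTI_SHOT_AUTOMATED",
        "multiShotAutomatedParams": {"text": "A drone tour of a misty coastal city at dawn"},
        # Multi-shot durations are built from 6-second shots, up to 120 seconds total.
        "videoGenerationConfig": {"durationSeconds": 120, "fps": 24, "dimension": "1280x720"},
    },
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-video-bucket/reel-output/"}},
)
print(job["invocationArn"])  # poll get_async_invoke() with this ARN for status
```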

50:27 Amazon Bedrock Guardrails enhances generative AI application safety with new capabilities

  • Last year, Amazon released Bedrock Guardrails to standardize protections across generative AI applications, bridge the gap between native model protections and enterprise requirements, and streamline governance processes. Today they announced several new capabilities (a usage sketch follows after this list):
    • Multimodal toxicity detection with industry-leading image and text protection
    • Enhanced privacy protection for PII detection in user inputs
    • Mandatory guardrail enforcement with IAM
    • Selective guardrail policy application, to optimize performance while maintaining protection
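
Guardrails can also be invoked standalone, which is handy for checking input before it ever reaches a model. A minimal sketch using the ApplyGuardrail API (the guardrail ID and version are placeholders for your own):

```python
# Minimal sketch: screening user input with the standalone ApplyGuardrail API.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
resp = runtime.apply_guardrail(
    guardrailIdentifier="gr-0123456789ab",  # placeholder guardrail ID
    guardrailVersion="1",
    source="INPUT",  # evaluate the user's input (use "OUTPUT" for model responses)
    content=[{"text": {"text": "My SSN is 123-45-6789, can you store it for me?"}}],
)
if resp["action"] == "GUARDRAIL_INTERVENED":
    print(resp["outputs"][0]["text"])  # the masked/blocked replacement text
```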

50:57 📢 Matt – “I like that they’re making it mandatory, and this kind of goes back to some of the other stuff – security compliance is really starting to look at AI, and you’re starting to have AI policies specific to everything, i.e. AI committees – and this forcibly checks that box.”

51:38 Announcing up to 85% price reductions for Amazon S3 Express One Zone 

  • S3 Express One Zone, a high-performance, single-AZ storage class built to deliver consistent single-digit-millisecond data access for your most frequently accessed data and latency-sensitive applications, is getting a price cut. 
  • Amazon is announcing a 31% reduction in storage prices, a 55% reduction in PUT request prices, and an 85% reduction in GET request prices. (The quick math is sketched below.) 
  • In addition, S3 Express One Zone has reduced the per-GB charges for data uploads and retrievals by 60%, and these charges now apply to all bytes transferred rather than just the portion of a request greater than 512 KB.
  • Want to check out the AWS pricing calculator? You can find that here
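
For a quick back-of-the-envelope feel for the cuts (the baseline rates below are illustrative placeholders, not published prices – plug in your region’s actual numbers):

```python
# Illustrative math only: applying the announced percentage cuts to placeholder rates.
old_rates = {"storage_per_gb": 0.16, "put_per_1k": 0.0025, "get_per_1k": 0.0002}  # hypothetical
cuts = {"storage_per_gb": 0.31, "put_per_1k": 0.55, "get_per_1k": 0.85}

new_rates = {k: round(v * (1 - cuts[k]), 6) for k, v in old_rates.items()}
print(new_rates)  # storage drops 31%, PUTs 55%, GETs 85%
```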

53:42 AWS STS global endpoint now serves your requests locally in regions enabled by default 

  • AWS STS now automatically serves all requests to the global endpoint from the same AWS region as your deployed workloads (for regions enabled by default), enhancing resilience and performance. 
  • Previously, they were all served from US East (N. Virginia), i.e. us-east-1.  
  • We warned about this in a previous episode; the time is now. (A quick verification sketch follows below.) 
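
If you want to verify what your SDK is actually doing, here’s a small sketch – the AWS_STS_REGIONAL_ENDPOINTS environment variable is the long-standing knob for forcing regional resolution (newer SDKs default to it):

```python
# Sketch: pinning STS to regional endpoint resolution and sanity-checking a call.
import os
import boto3

os.environ["AWS_STS_REGIONAL_ENDPOINTS"] = "regional"  # no-op on newer SDKs; safe either way

sts = boto3.client("sts", region_name="eu-west-1")  # calls now stay in-region
print(sts.get_caller_identity()["Arn"])
```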

54:02 📢 Matt – “I don’t know how this would break your product, but it’s probably going to.”

GCP

56:49 Spring cleaning with FinOps hub

  • Google Cloud shipped some solid new FinOps capabilities this week. 
  • FinOps Hub 2.0 now comes with new utilization insights to zero in on optimization opportunities. 
  • The release focuses on bringing utilization insights about your resources to the forefront, so you can see what potential waste exists and take action immediately. Waste comes in many forms: a VM that is barely being used at 5% (overprovisioned), a GKE cluster that is running hot (underprovisioned), or managed resources like Cloud Run instances that aren’t optimally configured (suboptimal configuration).
  • Gemini Cloud Assist supercharges FinOps Hub to summarize optimization insights and send opportunities to engineering.  
  • This tool helps create personalized cost reports and synthesize insights, and has resulted in >100k FinOps hours saved by Google’s customers. 
  • To eliminate waste, there’s a new IAM role – Project Billing Costs Manager – that gives your tech solution owners permission to see and directly act on these optimizations.

58:15 📢 Matt – “Even in the cleanest environment, there’s always something else. There’s a hard drive from a server that you deleted that you forgot to clean up. There are all these small things that just sit there and, you know, add costs. And honestly, these things sitting there are how the cloud providers make money. So it’s nice that they are slowly adding all these little pieces and hopefully keeping your environment clean.”

58:49 Developers can now start building with Gemini 2.5 Flash

59:02 Announcing general availability of Memorystore for Valkey

  • Memorystore for Valkey is now generally available – a significant step forward for open-source in-memory data management.  
  • Google is offering a 99.99% availability SLA, along with features such as Private Service Connect (PSC), multi-VPC access, cross-region replication, and persistence. 
  • As part of the GA, they are also giving you zero-downtime scaling, integrated Google-built vector similarity search, and managed backups. (Since Valkey speaks the Redis protocol, connecting is easy – see the sketch below.) 
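
Because Valkey is wire-compatible with Redis, a stock client is all you need to talk to it. A minimal connectivity sketch (the host below is a placeholder for your instance’s PSC endpoint):

```python
# Sketch: talking to Memorystore for Valkey with a standard Redis client.
import redis  # pip install redis

r = redis.Redis(host="10.0.0.5", port=6379)  # placeholder: your instance's PSC endpoint
r.set("greeting", "hello from valkey")
print(r.get("greeting"))  # b'hello from valkey'
```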

Azure

1:00:14 New capabilities in Azure AI Foundry to build advanced agentic applications

  • Azure is releasing several new capabilities into AI Foundry, including: 
    • GA of the Agent Framework – an extension of Azure AI Foundry’s open-source SDK Semantic Kernel, specifically designed to simplify the orchestration of multi-agent systems. 
    • The Agent Framework makes it easier for agents to coordinate and dramatically reduces the code developers need to write. Organizations like KPMG are using Semantic Kernel to orchestrate workflows among specialized agents, dramatically reducing development complexity. 
    • At the core of Azure AI Foundry is a feedback system and advanced observability platform that gives developers visibility into agent behavior and outcomes. 
    • AI Red Teaming Agent, in preview: an agent that systematically probes AI models to uncover safety risks, integrating Azure AI Foundry’s evaluation systems with Microsoft Security’s PyRIT framework. The agent generates comprehensive reports and tracks improvements over time, creating an AI safety testing ecosystem that evolves alongside your system.
  • They are also releasing the Azure AI Foundry extension for VS Code, in preview. Developers can now build, test, and deploy agent-based applications entirely within their IDE – no context switching required. 

1:01:24 📢 Matt – “I kind of like the concept of the AI Red Teaming Agent – it’s kind of cool. I don’t know how to use it yet or how I want to use it…but I definitely think it’s going to add a lot to the ecosystem.”   

1:02:07 Learn more about what’s new with Microsoft Azure Storage at KubeCon Europe 2025

  • Azure storage dropped some new capabilities at KubeCon EU.
  • Updates to Blobfuse2 (2.4.1) let you access Blob Storage via the Container Storage Interface (CSI) driver, providing a seamless way to store and retrieve data at scale. The updates:
    • Speed up model training and inference
    • Simplify data preprocessing
    • Ensure data integrity at scale
    • Enable parallel access to massive datasets

1:02:58 Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks

  • Llama 4 models are now available in Azure AI Foundry and Azure Databricks. These models include the:
    • Llama-4-Scout-17B-16E
    • Llama-4-Scout-17B-16E-Instruct
    • Llama-4-Maverick-17B-128E-Instruct-FP8
  • Llama 4 brings several architectural innovations, including an early-fusion multimodal transformer.
  • It also features a cutting-edge mixture-of-experts architecture. (A call sketch against a Foundry deployment follows below.)
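
Calling a deployed Llama 4 model from Python looks roughly like this, using the azure-ai-inference package (endpoint, key, and deployment name are placeholders for whatever your Foundry deployment hands you):

```python
# Hedged sketch: chatting with a Llama 4 deployment via azure-ai-inference.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://my-foundry-endpoint.inference.ai.azure.com",  # placeholder
    credential=AzureKeyCredential("<api-key>"),                     # placeholder
)
resp = client.complete(
    model="Llama-4-Scout-17B-16E-Instruct",  # match your deployment's model name
    messages=[UserMessage(content="Summarize mixture-of-experts in two sentences.")],
)
print(resp.choices[0].message.content)
```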

1:03:34 📢 Justin – “Basically it’s an open source model that you can now use there, because they’re not liking OpenAI right now.”

1:04:26 Announcing the GPT-4.1 model series for Azure AI Foundry and GitHub developers

1:04:39 o3 and o4-mini: Unlock enterprise agent workflows with next-level reasoning AI with Azure AI Foundry and GitHub

  • Oh – in case you were worried, they are also supporting OpenAI o3 and o4-mini.
  • Cool. Cool cool cool. 

1:05:39 Announcing General Availability of Azure SQL Database Capabilities for Microsoft Copilot in Azure

  • Azure announced the GA of Copilot in Azure, which now includes capabilities for Azure SQL Database that help streamline database operations, troubleshoot issues, and optimize performance – all designed to enhance productivity and simplify complex tasks. 
  • The new capabilities include:
    • Intelligent troubleshooting: provides guidance for common SQL error codes (10928, 18456, 40613), identifies and resolves issues related to scaling and replication, and assists with login problems
    • Performance optimization: helps you determine whether the database is reaching a storage capacity limit or hitting its IO limit; analyzes database connection timeouts and provides recommendations to optimize connection settings 
    • Configuration: guidance on selecting appropriate tiers for your DB, plus clear directions for creating and using correct connection strings to ensure seamless connectivity 
    • Security and data management: troubleshoots issues with TDE, provides insights on replication issues, and offers solutions to secure and sync your data across geo-secondaries

1:06:36 📢 Justin – “I’m glad to see AI coming to Azure SQL. I’m more excited that we’re bringing AI to SQL Management Studio, which will be much more interesting to me long-term.”

1:07:40 Announcing the Public Preview of the New Hybrid Connection Manager (HCM)

  • Azure announced the Public Preview of Hybrid Connection Manager, a tool designed to enhance connectivity and streamline the management of hybrid connections.  
  • Key features in the preview include:
    • Cross-platform compatibility, supporting both Windows and Linux clients for seamless management of hybrid connections across platforms
    • An enhanced UI 
    • Improved visibility
  • For those not familiar: Hybrid Connection Manager is a relay service that enables Azure App Services to securely communicate with resources in other networks – particularly on-premises systems – without requiring complex networking configurations like VPN or ExpressRoute. 

1:08:39 Microsoft’s “1-bit” AI model runs on a CPU only, while matching larger systems 

  • Microsoft has a cool 1-bit AI model story, coming to us from Ars Technica.  
  • Future computing may not need supercomputers, thanks to models like BitNet b1.58 2B4T.
  • Most traditional AI models rely on full-precision weights (32-bit or 16-bit floating point numbers). These weights require significant memory and computational resources, and each weight can represent millions of possible values for precise calculations. 
  • Typically, this floating-point math requires high-powered GPUs, consuming a lot of electricity – it’s estimated that AI GPUs consume 4% of global electricity. 
  • The BitNet approach from Microsoft that powers the b1.58 2B4T model uses extreme compression, reducing weights to just three possible values (-1, 0, and +1).
  • This 1.58-bit approach (three possible values works out to log₂3 ≈ 1.58 bits per weight) dramatically reduces memory needs, and the model requires only simple addition operations instead of complex multiplications. (A toy sketch of the quantization idea follows below.) 
  • Using a custom framework called bitnet.cpp, these models are designed to run efficiently on CPUs rather than GPUs.
  • The model needs only 400MB of memory, vs. the 1.4GB required by comparable models. 
  • There might be great use cases around edge computing, mobile devices, embedded systems, etc.
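
The core trick is easier to see in code than in prose. Here’s a toy sketch of the “absmean” ternary quantization idea (this illustrates the concept only – it is not Microsoft’s actual bitnet.cpp implementation):

```python
# Toy sketch of 1.58-bit "absmean" quantization: floats -> {-1, 0, +1} plus one scale.
import numpy as np

def absmean_ternary(w: np.ndarray):
    scale = np.abs(w).mean() + 1e-8          # one scalar scale per tensor
    q = np.clip(np.round(w / scale), -1, 1)  # every weight becomes -1, 0, or +1
    return q.astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = absmean_ternary(w)

x = rng.normal(size=4).astype(np.float32)
approx = (q @ x) * scale  # the "matmul" is now just adds/subtracts, then one rescale
print(np.round(w @ x, 2))
print(np.round(approx, 2))
```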

1:11:04 Conversion to Hyperscale: Now generally available with enhanced efficiency

  • Key features:
    • Shorter cutover time
    • Higher log generation rate
    • Manual cutover
    • Enhanced progress reporting

1:12:17 📢 Matt – “It’s a good quality of life improvement. If anybody is actually using it, or looking to migrate, it will hopefully help with the story to your execs – to say, look, we can actually do this on a schedule: we’re going to prep it on Friday and do the cutover on Saturday. Versus, hey, we’re going to run it Friday night and it will be done sometime between Friday night and Sunday… we have no idea.”

Announcement: Microsoft Build, May 19th-22nd in Seattle

Oracle

1:13:49 Oracle tells customers its public cloud was compromised

  • Oracle has come around to telling customers that there was a successful intrusion into its public cloud – as well as theft of their data – after previously denying it had been compromised.
  • Claims emerged in late March, when a hacker with the handle rose87168 boasted of cracking into two of Big Red’s login servers. 
  • Multiple information security experts analyzed data from the hacker and concluded that Oracle’s classic cloud product was indeed compromised, as it wasn’t patched against CVE-2021-35587, a vulnerability in Oracle Access Manager, part of its Oracle Fusion Middleware suite. 
  • Oracle is facing a few lawsuits over this, and has run afoul of Europe’s GDPR regulations, which require organizations to report theft of customer data to affected folks within 72 hours of discovery. 
  • Come on, Oracle. This is just embarrassing. 

1:15:57 The Reg translates the letter in which Oracle kinda-sorta tells customers it was pwned 

  • Oracle sent a letter to customers about the intrusion, and the Reg did an excellent job skewering it, so we’ll just quote them. It’s definitely worth a read. 
  • Seriously. Right now. Go read it. 

Closing

And that is the week in the cloud! Visit our website – the home of The Cloud Pod – where you can join our newsletter and Slack team, send feedback, or ask questions at thecloudpod.net, or tweet at us with the hashtag #theCloudPod
