
Welcome to episode 303 of The Cloud Pod – where the forecast is always cloudy! Justin, Ryan and exhausted dad Matt are here (and mostly awake) ready to bring the latest in cloud news! This week we’ve got more news from Nova, updates to Claude, earnings news, and a mini funeral for Skype – plus a new helping of Cloud Journey!
Titles we almost went with this week:
- 💤Claude researches so Ryan can nap
- 🚀The best AI for Nova Corps, Amazon Nova Premiere JB
- 🍴If you can’t beat them, change the licensing terms and make them fork, and then reverse course… and profit
- 🧭Q has invaded your IDE!!
- ☠️Skype bites the dust
A big thanks to this week’s sponsor:
We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our Slack channel for more info.
Follow Up
02:50 Sycophancy in GPT-4o: What happened and what we’re doing about it
- OpenAI wrote up a blog post about last week’s sycophantic GPT-4o update, and they wanted to set the record straight.
- They made adjustments aimed at improving the model’s default personality, to make it feel more intuitive and effective across a variety of tasks.
- When shaping model behavior, they start with baseline principles and instructions outlined in their Model Spec.
- They also teach their models how to apply these principles by incorporating user signals like thumbs up and thumbs down feedback on responses.
- In this update, though, they focused too much on short-term feedback and did not fully account for how users’ interactions with ChatGPT evolve. This skewed the results towards responses that were overly supportive – but disingenuous.
- Beyond rolling back the changes, they are taking steps to realign the model behavior, including refining core training techniques and system prompts to explicitly steer the model away from sycophancy.
- They also plan to build more guardrails to increase honesty and transparency principles in the model spec.
- Additionally, they plan to expand ways for users to test and give direct feedback before deployments.
- Lastly, OpenAI continues to expand its evaluations, building on the Model Spec and their ongoing research.
04:43 Deep Research on Microsoft Hotpatching:
- Yes, they’re grabbing money and screwing you. Basically.
07:06 📢 Justin – “I’m not going to give them any credit on this one. I appreciate that they created hotpatching, but I don’t like what you want to charge me for it.”
General News
🎉It’s Earnings time – cue the sound effects!🎉
08:03 Alphabet’s Q1 earnings shattered analyst expectations, sending the stock soaring. Google’s CEO credits its AI efforts
Alphabet Q1 2025 earnings call: CEO Sundar Pichai’s remarks
- Google started us off the last week of April by hitting a grand slam of earnings performance!
- Alphabet exceeded revenue estimates and shares were up in after hours trading.
- EPS was $2.81 vs. $2.01 expected, on revenue of $90.23 billion vs. $89.1 billion expected.
- Google Cloud revenue came in at $12.26 billion, just shy of the $12.31 billion expected.
- In his remarks, Sundar pointed to the strong growth of their AI investments, including adoption of Gemini 2.
09:19 Microsoft stock surges after hours after the company blows past Q3 estimates
- Microsoft followed up with their earnings on the 30th, also crushing Wall Street estimates for their 3rd quarter.
- Cloud and AI are the essential inputs for every business to expand output, reduce costs and accelerate growth, which leads to lots of money for Microsoft.
- EPS was $3.46 vs. $3.21 expected, on $70.1 billion in revenue ($68.48 billion expected).
- Microsoft Cloud revenue was $42.4 billion vs. $42.22 billion expected, and Intelligent Cloud was $26.8 billion vs. $25.99 billion expected.
10:28 Amazon earnings recap: Company ‘maniacally focused on’ keeping prices low amid light Q2 guidance
Amazon Announces First Quarter Results
- Amazon is a bit more complicated, as they will be heavily impacted by tariffs, but it appears the tariffs haven’t caused any problems – at least not yet.
- Amazon also reported better-than-expected earnings on May 1st.
- The company is heads down on keeping prices low in the coming months as tariffs take effect.
- Jassy reiterated that their investments in AI will pay off as more businesses turn to Amazon for their AI needs.
- Sales increased 9% in the quarter to $155.7 billion, up from $143.3 billion the year prior.
- AWS sales increased 17% YoY to $29.3 billion.
11:44 📢 Justin – “I think a lot of companies are not estimating AI uplifts into their forecasts until they know for sure adoption and market and are they making money, etc.”
16:17 RIP Skype (2003–2025), survived by multiple versions of Microsoft Teams
- Skype is officially dead. We talked about it when the shutdown was announced back in February, but now the ax has officially fallen.
- We aren’t sad about it.
- *TAPS*
AI – Or How ML Makes Money
18:45 Claude’s AI research mode now runs for up to 45 minutes before delivering reports
- Last week Anthropic updated Claude and introduced research capabilities that will have Claude run for up to 45 minutes before delivering comprehensive reports.
- The company has also expanded its integration options, allowing Claude to connect with popular third party services.
- Anthropic first announced its Research feature on April 15th, but now they have taken it a step further, allowing it to conduct deeper investigations across hundreds of internal and external sources.
- When users toggle the research button, Claude breaks down complex tasks into smaller components, examines each one, and compiles a report with citations linking to original sources.
- Unfortunately this is only included in the $100 per month Max plan.
- Currently nobody at TCP has this plan. We’re waiting for Justin to bite the bullet and will report back when he does.
19:42 📢 Justin – “If they were to include unlimited API calls from Claude Code or from a Visual Studio plugin that would probably push me over the edge.”
20:44 OpenAI scraps controversial plan to become for-profit after mounting pressure
- ChatGPT maker OpenAI has announced it will remain under the control of its nonprofit board, scrapping its controversial plan to split off its commercial operations as a for-profit company after mounting pressure from critics.
- Sam Altman blogged that they made the decision after hearing from civic leaders and having discussions with the Attorneys General of California and Delaware.
- This move represents a shift in how OpenAI will be restructured.
- The previous plan would have established OpenAI as a public benefit corporation with the non-profit merely holding shares and having limited influence; the revised approach keeps the nonprofit firmly in control of operations.
- This doesn’t mean they aren’t changing the structure at all: they still plan to have a for-profit arm under the nonprofit, which will transition to a Public Benefit Corporation with the same mission, replacing the current complex capped-profit structure that made sense when it looked like there might be one dominant AGI effort.
- This is not a sale, but a change to the structure to something simpler.
- There are still some uncertainties; for example, OpenAI’s recent raise with SoftBank stipulated that SoftBank could reduce its contribution to $20 billion if OpenAI failed to restructure into a fully for-profit entity by the end of 2025.
23:22 Anthropic to Buy Back Employee Shares at $61.5 Billion Valuation
- Anthropic is reportedly offering to buy back shares from hundreds of former and current employees – the first transaction of its kind for the four-year-old company.
- The buyback shows how integral equity buybacks are to rewarding employees at fast-growing startups and retaining rare research talent in the AI talent war.
- The offer lets employees who have worked at the company for at least two years sell up to 20% of their equity, with a maximum of $2 million each.
- The buyback values the startup at $61.5 billion, the exact valuation of its March fundraising round.
24:08 📢 Ryan – “This says to me don’t sell – hold.”
Cloud Tools
25:31 Redis is now available under the AGPLv3 open source license
- Redis foiled those pesky hyperscalers by adopting SSPL to protect their business from cloud providers extracting value without reinvesting.
- Redis says moving to the SSPL achieved their goal, with AWS and Google now maintaining their own fork, but they admit it hurt their relationship with the Redis community.
- Duh.
- SSPL is not truly open source; the OSI has clarified that it lacks the requirements to be an OSI-approved license.
- Following the SSPL change, Salvatore Sanfilippo decided to rejoin Redis as a developer evangelist.
- He and CEO Rowan Trollope collaborated on new capabilities, company strategy, and community engagement.
- The CEO, CTO, Salvatore, and the core developers have decided on several improvements for Redis going forward:
- Adding the OSI Approved AGPL as an additional licensing option for Redis, starting with Redis 8
- Introducing Vector sets – the first new data type in years – created by Salvatore (quick sketch after this list)
- Integrated Redis stack technologies including JSON, Time Series, Probabilistic data types, Redis Query engine and more into Core Redis 8 under GPL.
- Delivered over 30 performance improvements, with up to 87% faster commands and 2x throughput
- Improved community engagement, particularly with client ecosystem contributions.
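For the curious, here’s roughly what the new vector sets look like from Python. This is just a sketch, assuming a local Redis 8 server and the redis-py client; the VADD/VSIM command names come from the announcement, but the exact argument syntax should be checked against the Redis 8 docs, and the key and element names here are made up.

```python
# Sketch of the new vector set data type, assuming a local Redis 8 server
# and the redis-py client. Key and element names are invented for the demo.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Add a few 3-dimensional vectors to a vector set keyed "docs:embeddings".
r.execute_command("VADD", "docs:embeddings", "VALUES", 3, 0.12, 0.91, 0.33, "doc:1")
r.execute_command("VADD", "docs:embeddings", "VALUES", 3, 0.10, 0.88, 0.30, "doc:2")
r.execute_command("VADD", "docs:embeddings", "VALUES", 3, 0.95, 0.05, 0.11, "doc:3")

# Query for the elements most similar to a probe vector.
similar = r.execute_command(
    "VSIM", "docs:embeddings", "VALUES", 3, 0.11, 0.90, 0.31, "COUNT", 2
)
print(similar)  # e.g. ['doc:1', 'doc:2']
```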
27:14 📢 Ryan – “We’ll see… There’s a lot of people who moved over to Valkey, and I don’t know that they’re going to be swapping back anytime soon.”
30:50 Announcing HCP Terraform Premium: Infrastructure Lifecycle Management at scale
- If your HCP Terraform solution wasn’t expensive enough, you can now get PREMIUM to extend the capabilities of HCP Terraform, offering powerful features that enable organizations to scale their infrastructure.
- Woohoo! PREMIUM!
- HCP Terraform Premium is designed to help enterprises with their Infrastructure Lifecycle Management at high scale and includes everything from the standard and plus plans, with additional features:
- Private VCS access: Access private VCS repositories securely by ensuring that your source code and static credentials are not exposed over the public internet.
- Private policy enforcement: Apply and enforce internal security and compliance policies within private cloud environments.
- Private run tasks: Integrate Terraform workflows with internal systems securely, creating a seamless automation pipeline that aligns with your internal processes and policies.
- Module lifecycle management – Revocation: Streamline module management by revoking outdated or vulnerable modules.
- All of this simplifies operations, improves security and lowers your TCO (per Hashi) and maybe increases your likelihood of outages, but that’s neither here nor there.
32:09 📢 Matthew – “The only thing that I like here is the revocation. I think that that’s cool. If you have credentials in your repo, I have better questions about why you have credentials in your repo – and what life choices you’ve already made from that one. And policy enforcement, there’s enough other add-ons that you can get without paying for this premium feature.”
AWS
33:44 Amazon Nova Premier: Our most capable model for complex tasks and teacher for model distillation
- Amazon is expanding the Nova family of foundation models announced at AWS re:Invent with the GA of Amazon Nova Premier.
- Premier joins the existing Nova models in Amazon Bedrock.
- Similar to Nova Lite and Pro, Premier can process text, images, and videos (excluding audio). With its advanced capabilities, Nova Premier excels at complex tasks that require deep understanding of context, multi-step planning, and precise execution across multiple tools and data sources.
- It has a context length of 1 million tokens, allowing you to process long documents and large code bases.
- Nova Premier, combined with Bedrock Model Distillation, allows you to create capable, cost-effective, and low-latency versions of Nova Pro, Lite, and Micro for your specific needs. (A quick sketch of calling Premier through Bedrock follows after this list.)
- “Amazon Nova Premier has been outstanding in its ability to execute interactive analysis workflows, while still being faster and nearly half the cost compared to other leading models in our tests,” said Curtis Allen, Senior Staff Engineer at Slack, “a company bringing conversations, apps, and customers together in one place.” (Sure, Jan)
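If you want to kick the tires, here’s a minimal sketch of calling Nova Premier through the Bedrock Converse API with boto3. The model ID below is an assumption based on the Nova naming convention (grab the exact ID from the Bedrock console), and you’ll need a region where Premier is actually available.

```python
# Minimal sketch of calling Nova Premier via the Bedrock Converse API.
# The model ID is an assumption; check the Bedrock console for the real one.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="us.amazon.nova-premier-v1:0",  # assumed inference profile ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the key tradeoffs between ECS and EKS."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.3},
)

print(response["output"]["message"]["content"][0]["text"])
```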
34:58 📢 Justin – “You know what I was mostly disappointed about was that I did not find it on the LLM Leaderboard from Chatbot Arena, so either it didn’t score or hasn’t been tested.”
35:36 Amazon Q Developer elevates the IDE experience with new agentic coding experience
- Amazon Q Developer introduces a new, interactive, agentic coding experience that is now available in the IDE for VS Code.
- This brings interactive coding capabilities, building upon existing prompt-based features.
- You now have a natural, real-time collaborative partner working alongside you while writing code, creating documentation, running tests and reviewing changes.
- Q Developer transforms how you write and maintain code by providing transparent reasoning for its suggestions and giving you the choice between automated modifications or step-by-step confirmation of changes.
- You can chat with Q in English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese.
- The system uses your repository structure, files, and documentation, while giving you the flexibility to interact through natural dialog with your local development environment. This deep comprehension allows for more accurate and contextual assistance during development tasks.
- Q Developer provides continuous status updates as it works through tasks, and lets you choose between automated code modifications or step-by-step review, giving you complete control over the development process.
37:32 Amazon Q Developer in GitHub (in preview) accelerates code generation
- Starting today, you can use Amazon Q Developer in GitHub, in preview. Developers who use GitHub, whether at work or for personal projects, can now use Amazon Q Developer for feature development, code reviews, and Java code migration directly within the GitHub interface.
38:24 📢 Ryan – “People use the web IDE for more than just resolving merge conflicts?”
39:49 EC2 Image Builder now integrates with SSM Parameter Store
- EC2 Image Builder now integrates with Systems Manager Parameter Store, offering customers a streamlined approach for referencing SSM parameters in their image recipes, components and distribution configurations.
- This capability allows customers to dynamically select base images within their image recipes, easily use configuration data and sensitive information for components, and update their SSM parameters with the latest output images.
- Before this you had to specify AMI IDs in the image recipe to use custom base images, leading to a constant maintenance cycle when these base images had to be updated.
- Furthermore, customers had to create custom scripts to update SSM parameters with output images and to use SSM parameter values in components; this integration removes that overhead.
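Here’s a rough sketch of the pattern this integration enables, using boto3 and Parameter Store. The public Amazon Linux 2023 parameter path is a real AWS-managed parameter; the /golden-ami/latest output parameter and the AMI ID are made up for illustration. With the new integration, Image Builder can resolve and update parameters like these directly from recipes and distribution configurations instead of you scripting it.

```python
# Rough sketch of the SSM Parameter Store pattern Image Builder now supports.
# The /golden-ami/latest parameter name and AMI ID are placeholders.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Resolve the current base AMI from a managed SSM parameter rather than
# hard-coding an AMI ID in the image recipe.
base_ami = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"
)["Parameter"]["Value"]
print(f"Base AMI for the recipe: {base_ami}")

# After a build, publish the output AMI to a parameter that downstream
# stacks (launch templates, CloudFormation, etc.) can reference.
ssm.put_parameter(
    Name="/golden-ami/latest",          # hypothetical parameter name
    Value="ami-0123456789abcdef0",      # placeholder output AMI ID
    Type="String",
    Overwrite=True,
)
```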
42:53 Accelerate the transfer of data from an Amazon EBS snapshot to a new EBS volume
- AWS is announcing the GA of Amazon EBS Provisioned Rate for Volume Initialization, a feature that accelerates the transfer of data from an EBS snapshot (a highly durable backup of a volume, stored in S3) to a new EBS volume.
- This allows you to create fully performant EBS volumes within a predictable amount of time. You can use this feature to speed up the initialization of hundreds of concurrent volumes and instances, or when you need to recover from an existing EBS snapshot and want your EBS volume created and initialized as quickly as possible.
- You can specify a rate between 100 MiB/s and 300 MiB/s, which controls how quickly the snapshot blocks are downloaded from S3 to the volume.
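As a rough sketch, here’s what restoring a snapshot with a provisioned initialization rate might look like in boto3. The snapshot ID is a placeholder, and the VolumeInitializationRate parameter name is our reading of the announcement, so verify it against the current EC2 CreateVolume documentation.

```python
# Sketch of restoring a snapshot with a provisioned initialization rate.
# Snapshot ID is a placeholder; the VolumeInitializationRate parameter name
# is an assumption based on the announcement, so double-check the docs.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
    VolumeInitializationRate=300,          # MiB/s, between 100 and 300
    TagSpecifications=[
        {
            "ResourceType": "volume",
            "Tags": [{"Key": "restored-from", "Value": "nightly-backup"}],
        }
    ],
)
print(volume["VolumeId"])
```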
GCP
47:05 Reliable AI with Vertex AI Prediction Dedicated Endpoints
- Google is announcing Vertex AI Prediction Dedicated Endpoints, a new family of Vertex AI Prediction endpoints designed to address the needs of modern AI applications, including those built on large-scale generative AI models.
- These dedicated endpoints are engineered to help you build more reliable applications, with the following new features:
- Native support for streaming inference
- gRPC protocol support
- Customizable request timeouts
- Optimized resource handling
- In addition, you can use these dedicated endpoints via Private Service Connect.
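Here’s a hedged sketch of creating and calling one of these dedicated endpoints with the Vertex AI Python SDK. The project, region, model resource name, and machine type are placeholders, and the dedicated_endpoint_enabled flag is our reading of the SDK, so confirm the exact parameter name for your SDK version.

```python
# Sketch of creating and calling a dedicated endpoint with the Vertex AI SDK.
# Project, region, and model resource name are placeholders; the
# dedicated_endpoint_enabled flag is an assumption to verify against the SDK.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/1234567890"
)

endpoint = aiplatform.Endpoint.create(
    display_name="genai-dedicated-endpoint",
    dedicated_endpoint_enabled=True,  # assumed flag for the new endpoint type
)

model.deploy(
    endpoint=endpoint,
    machine_type="g2-standard-12",
    min_replica_count=1,
)

# Online prediction against the dedicated endpoint.
prediction = endpoint.predict(instances=[{"prompt": "Hello"}])
print(prediction.predictions)
```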
47:33 📢 Ryan – “All this means to me is that the engineers that were supporting the service within Google were really sick of the two separate types of workloads that were going across these endpoints… I bet you it was a nightmare to predict load and support from that direction.”
Azure
48:42 Microsoft Cost Management updates—April 2025
- April brought several enhancements for FinOps professionals in the Azure world.
- First up is the GA of Microsoft Copilot for Azure. You can ask natural language questions about your subscriptions, costs and drivers.
- Also included are several enhancements for exports, including the ability to export price sheets, reservation recommendations, reservation details, and reservation transactions, along with standard cost and usage data.
- Support for FOCUS is now GA.
- Export data in either CSV or Parquet formats (see the quick Parquet-reading sketch after this list).
- There are several new ways to save money in Microsoft Cloud, including AKS cost recommendations and autoscale for vCore-based Azure Cosmos DB for MongoDB.
- Troubleshoot disk performance with Copilot.
- On-demand backups for Azure Database for PostgreSQL Flexible Server, VM hibernation on GPU VMs, and the Azure NetApp Files Flexible service are all in preview.
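Since exports can now land in Parquet, here’s a quick sketch of pulling a FOCUS cost export into pandas and summing cost by service. The ServiceName and EffectiveCost columns follow the FOCUS spec, but check them against your actual export schema; the file name is a placeholder.

```python
# Sketch of reading a FOCUS cost export in Parquet format and summing cost
# by service. File name is a placeholder; column names follow the FOCUS spec
# but should be checked against your actual export schema.
import pandas as pd

# Assumes the export file has been downloaded locally (or use adlfs/abfs
# paths if you have the Azure filesystem libraries installed).
df = pd.read_parquet("focus-cost-export.parquet")

monthly_by_service = (
    df.groupby("ServiceName")["EffectiveCost"]
      .sum()
      .sort_values(ascending=False)
)
print(monthly_by_service.head(10))
```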
51:25 📢 Justin – “I look forward to exporting all my data into Parquet formats and just sending it to people randomly…figure it out bro!”
53:05 One year of Phi: Small language models making big leaps in AI
- A year ago Microsoft introduced small language models (SLMs) to customers with the release of Phi-3.
- Now they are announcing the new Phi-4 family, including Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, marking a new era for small language models and once again redefining what is possible with small and efficient AI.
- These are all reasoning models trained to leverage inference-time scaling to perform complex tasks that demand multi-step decomposition and internal reflections.
- Phi-4-reasoning is a 14-billion parameter open-weight reasoning model that rivals much larger models on complex reasoning tasks.
- Trained via supervised fine-tuning of Phi-4 on carefully curated reasoning demonstrations from OpenAI o3-mini, Phi-4 reasoning generates detailed reasoning chains that effectively leverage additional inference-time compute.
- The model demonstrates that meticulous data curation and high-quality synthetic datasets allow smaller models to compete with larger counterparts.
- Phi-4-reasoning-plus builds on Phi-4-reasoning, further trained with reinforcement learning to utilize more inference-time compute – using 1.5x more tokens than Phi-4-reasoning – to deliver higher accuracy.
- The Phi-4-mini-reasoning is designed to meet the demand for a compact reasoning model.
- This transformer-based language model is optimized for mathematical reasoning, providing high-quality, step-by-step problem solving in environments with constrained compute or latency. Fine-tuned with synthetic data generated by the DeepSeek-R1 model, Phi-4-mini-reasoning balances efficiency with advanced reasoning ability.
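If you want to try one of these locally, here’s a small sketch using Hugging Face transformers. The model ID is our best guess at the hub name (check huggingface.co/microsoft for the exact repo), and a GPU with enough memory for a 14B-parameter model is assumed.

```python
# Sketch of running a Phi-4 reasoning model locally with transformers.
# The hub model ID is an assumption; a GPU with room for a 14B model is assumed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-4-reasoning",  # assumed hub ID
    device_map="auto",
    torch_dtype="auto",
)

messages = [
    {
        "role": "user",
        "content": "A train leaves at 9:12 and arrives at 11:47. How long is the trip?",
    }
]

output = generator(messages, max_new_tokens=512)
print(output[0]["generated_text"][-1]["content"])
```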
55:41 Announcing Public Preview of Terraform Export from the Azure Portal
- Azure is announcing the preview of Terraform Export within the Azure portal.
- With this new feature, you can now easily export your existing Azure resources to be managed declaratively directly from the Azure Portal.
- This will streamline IaC workflows, making it simpler to manage and automate your Azure resources via the AzureRM and AzAPI providers.
56:06 📢 Matthew – “So, this is a feature that is useful when you are learning Terraform, or need to figure out what the settings are. Because, sometimes you don’t know what all the variables are when you’re going through it… So it’s fine if you’re trying to use it, but please don’t just take this code and use it in your infrastructure as code. You will hate yourself because everything is hard coded.”
1:03:52 Azure virtual network terminal access point (TAP) public preview announcement
- Virtual Network TAP allows customers to continuously stream virtual machine network traffic to a network packet collector or analytics tool.
- Many security and performance tools rely on packet-level insights that are difficult to access in cloud environments.
- Virtual Network TAP bridges this gap by integrating with their industry partners to offer:
- Enhanced security and threat detection
- Performance monitoring and troubleshooting
- Regulatory compliance.
1:04:20 📢 Justin – “I always appreciate when they say ‘this is for threat detection’ because we love to make our security tools the biggest risk in the whole business by sending all the data and all the packets there.”
Oracle
1:07:27 Sphere Powers its AI Platform with Oracle Database 23ai
- All the hyperscalers want to be doing stuff for the Sphere, from Google doing the Wizard of Oz movie to, apparently, Oracle providing Oracle Database 23ai on Oracle Autonomous Database.
- In general we don’t really care that much, but thought it was funny, considering Google has regularly bought ads during re:Invent.
Cloud Journey
1:09:59 Why Your Tagging Strategy Matters on AWS | by Keegan Justis | May, 2025
- Keegan Justis had a great medium post on why your tagging strategy matters on AWS.
- He highlights the benefits of tagging:
- Improved Cost Visibility and Accountability
- Effective Resource Ownership and Management
- Enhanced Security and Compliance
- Reliable Automation and Lifecycle Management
- Operational Clarity and Faster Troubleshooting
- Streamlined Multi-account and Multi-Team governance
- Reduced Manual Work and Better efficiency
- Simplified Onboarding and Knowledge Transfer
- Recommended shared
- Enforcement of Tags
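To make the enforcement point concrete, here’s a minimal sketch that audits EC2 instances for a set of required tags with boto3. The required tag keys are just examples, and real enforcement would more likely use tag policies, SCPs, or AWS Config rules rather than an ad-hoc script.

```python
# Minimal sketch of a tag audit: flag EC2 instances missing required tags.
# REQUIRED_TAGS keys are examples, not a standard.
import boto3

REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}  # example tag keys

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                print(f"{instance['InstanceId']} is missing tags: {sorted(missing)}")
```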
Closing
And that is the week in the cloud! Visit our website, the home of The Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask questions at theCloudPod.net – or tweet at us with the hashtag #theCloudPod.