223: Get an AWS Spin on Savings with Cost Optimization Flywheel


Welcome to episode 223 of The CloudPod Podcast! It’s a full house – Justin, Matt, Ryan, and Jonathan are all here this week to discuss all the cloud news you need. This week, cost optimization is the big one, with a deep dive on the newest AWS blog. Additionally, we’ve got updates to BigQuery, Google’s health service, managed services for Prometheus, and more.

Titles we almost went with this week:

  • 🧑‍💻I swear to you Mr. Compliance Man, Mutator is not as bad as it sounds
  • 🔢Oracle Cloud@Customer – or how we let Oracle audit us internally at will 
  • 🗞️We are all confused by the lack of AWS news
  • ✨The CloudPod copies other podcasts’ features
  • 🛞Get AWS spin on savings with Cost Optimization Flywheel 

A big thanks to this week’s sponsor:

Foghorn Consulting provides top-notch cloud and DevOps engineers to the world’s most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can start burning down your DevOps and cloud backlogs as soon as next week.

📰General News this Week:📰


No AWS news – so that should tell you we’re DEFINITELY getting close to announcement season. 


01:35 Introducing new SQL functions to manipulate your JSON data in BigQuery 

  • Enterprises are generating data at an exponential rate, spanning traditional structured transactional data, semi-structured data like JSON, and unstructured data like images and audio. 
  • Beyond the scale, the divergent types present processing challenges for developers, sometimes requiring a separate processing flow for each. 
  • BigQuery supported semi-structured JSON at launch, eliminating the need for preprocessing and providing schema flexibility, intuitive querying, and the scalability benefits afforded to structured data. 
  • Google is now releasing new SQL functions for BigQuery JSON, extending the power and flexibility of their core JSON support. These new functions make it easier to extract and construct JSON data and perform complex data analysis. 
    • Convert JSON values into primitive types (INT64, FLOAT64, BOOL, and STRING).
      • Is anyone else insulted that STRING is considered primitive?  
    • Extract JSON values in an easier, more flexible way with the new JSON LAX functions.
    • Easily update and modify existing JSON values in BigQuery with new JSON Mutator functions.  
    • Construct JSON objects and JSON arrays with SQL in BigQuery with new JSON Constructor functions.
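A quick sketch of what these look like in practice. The function names (LAX conversions, JSON_SET, JSON_OBJECT) come from Google’s announcement; the literal values are our own toy examples:

```sql
SELECT
  LAX_INT64(JSON '"10"')               AS lax_cast,     -- lenient conversion: string "10" -> 10
  INT64(JSON '10')                     AS strict_cast,  -- strict conversion to a primitive type
  JSON_OBJECT('a', 1, 'b', 'two')      AS constructed,  -- Constructor: build a JSON object in SQL
  JSON_SET(JSON '{"a": 1}', '$.b', 5)  AS mutated;      -- Mutator: add/update a field in place
```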

03:58 📢 Justin – “Well, you only know that a NoSQL solution makes it once it gets a SQL interface. That’s how you know it’s truly become web scale.”

06:25 Introducing Personalized Service Health: Upleveling incident response communications

  • Outages happen to everyone… especially when your AZs aren’t really separate. 
  • Google is **excited** to introduce Personalized Service Health, which provides fast, transparent, relevant, and actionable communication about Google Cloud service disruptions.  
    • In preview, it allows you to receive granular alerts about Google Cloud service disruptions, either as a stop in your incident response process or integrated with your incident response or monitoring tools.
  • Today, when there is an issue, Google publishes it to the Google Cloud Service Health page; Personalized Service Health takes this a step further by letting you decide which service disruptions are relevant to you.
    • You would think they could figure this out for us, but what do we know? 
  • Integration with your incident management workflow is available via PagerDuty or other tools.
  • Personalized Service Health emits logs and can push customizable alerts to make incidents more discoverable for your workflow. 

07:22 📢 Jonathan – “You can guess how that product turned out, or started out. I guess it’s, how do we not tell customers that we have all these outages? Let’s make a personalized dashboard that they actually have to configure before it shows anything.”

13:48 Improved cost visibility and 60 percent price drop for Managed Service for Prometheus

  • Does all your newly generated data need someplace to live? Maybe that someplace is Prometheus. Well, now it’s cheaper! We love a good discount…
  • Prometheus is the de facto standard for Kubernetes application metrics, but running it yourself can strain engineering time and infrastructure resources, especially at production scale. 
  • Managed Service for Prometheus can help offload that burden, freeing up your engineers to build your next big application rather than spending time building out metrics infrastructure. 
  • Google is announcing a 60% price reduction for sample ingestion, effective immediately. Thanks, Google!
  • Metric sample ingestion is priced in four tiers. 

15:17 📢 Matt – “I always wonder what they do on the backend to get such a good price reduction? And then my next question is, how long has it been that they haven’t given me the price reduction – that they’ve been making that much profit on it?”

15:32 📢 Justin – “I was wondering these things too, is the reason why people aren’t adopting it is because it’s too expensive? And is it really a margin builder for them – or is it that they weren’t getting any revenue from it? So now they have an opportunity to get more revenue because customers now aren’t saying, oh, that’s too expensive.”

16:33 📢 Ryan – “I’ve never seen someone like, ‘let’s use Prometheus!’ and then be cost aware about that choice… those two things don’t happen.”


16:50 Azure Storage Mover support for SMB and Azure Files

  • Azure Storage Mover can now migrate your SMB shares to Azure file shares.
  • It’s a fully managed migration service that enables you to migrate on-premises files and folders to Azure Storage while minimizing downtime for your workload.  


18:55 Introducing Oracle Compute Cloud@Customer 

  • Oracle is pleased to announce the latest addition to the Oracle Distributed Cloud portfolio! (Whatever that is.)
  • Oracle Compute Cloud@Customer is a fully managed, rack-scale infrastructure platform that lets organizations run enterprise and cloud-native workloads on Oracle Cloud Infrastructure. 
  • Compute Cloud@Customer is built, installed, owned, and remotely managed by Oracle, so you can focus your scarce IT resources on growing your business and improving operating efficiency.
  • The @Customer offering is built using 4th-generation AMD EPYC processors with 96 cores per processor and DDR5 memory. You can subscribe in increments of 552 available processor cores with 6.7 TB of available memory.  
    • Up to a maximum of 6,624 cores overall.
  • That’s 3.8 times the number of cores per rack of an AWS Outposts system and 1.4 times the densest Microsoft Azure Stack Hub systems.  
  • Oracle, unlike Amazon, wisely decided to give you only the pricing by cost per core and not the actual monthly price you will pay for this unit.
    • They really would like you to notice that their price per core is *only* $53/month.
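Since Oracle won’t do the monthly math for you, here’s a little back-of-the-envelope arithmetic. The core counts and per-core price are from the announcement; the totals are our own multiplication:

```python
# Oracle's published figures for Compute Cloud@Customer.
CORES_PER_INCREMENT = 552       # smallest subscription increment
MAX_CORES = 6_624               # full-rack maximum
PRICE_PER_CORE_MONTH = 53       # USD/core/month, Oracle's quoted price

# How many increments fill a rack, and what each end of the range costs.
increments = MAX_CORES // CORES_PER_INCREMENT
min_monthly = CORES_PER_INCREMENT * PRICE_PER_CORE_MONTH
max_monthly = MAX_CORES * PRICE_PER_CORE_MONTH

print(increments)    # 12 increments in a full rack
print(min_monthly)   # $29,256/month for the smallest subscription
print(max_monthly)   # $351,072/month for a maxed-out rack
```

So the number Oracle doesn’t print on the page is roughly $29k/month minimum, $351k/month for the full rack.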

21:37 📢 Justin – “So anytime someone uses @Customer or @Partner like this, I have horror stories back to the company I worked at where we did SaaS @partner, which was terrible; where we basically took our SaaS application that we managed and we’re like, ‘we’re going to go run it in a data center owned by a partner who’s going to resell it.’ And that was terrible. And I did it twice with the same leader – the same guy came up with the same dumb idea two different places; failed both times. And yet I had to go implement it both times and have it fail. So it’s great. Super awesome.”

Continuing our Cloud Journey Series Talks

23:01 Cost optimization flywheel 

  • Today we’re taking a deep dive into the AWS blog post “Cost Optimization Flywheel.” 
  • The article discusses the concept of a “cost optimization flywheel” for managing and reducing costs in the cloud. 
  • The key points of the article are as follows:
    • The cost optimization flywheel is a continuous cycle of four steps: “Analyze,” “Recommend,” “Deploy,” and “Operate.”
    • The first step, “Analyze,” involves gathering data and analyzing it to identify areas where cost optimization is possible.
    • The second step, “Recommend,” includes using automated tools and machine learning algorithms to generate cost-saving recommendations.
    • The third step, “Deploy,” involves implementing the recommended changes identified in the previous steps.
    • The final step, “Operate,” focuses on monitoring and measuring the impact of the changes made and adjusting strategies accordingly.
  • Amazon Web Services (AWS) provides various services, tools, and resources to assist customers in each step of the cost optimization flywheel.
  • The article emphasizes the importance of a continuous, iterative approach to cost optimization, rather than treating it as a one-time effort.
  • It also highlights the role of automation and machine learning in enabling more efficient and effective cost optimization.
  • The cost optimization flywheel helps organizations achieve cost savings and better align cloud spending with business needs.
  • The cost optimization flywheel enables organizations to gain greater visibility and control over their cloud costs, allowing them to allocate resources more strategically and make informed decisions about their cloud spending.
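The four-step cycle reads roughly like a loop feeding back into itself. Here’s a toy Python sketch of that idea – the phase names come from the post, but every function, threshold, and data structure below is hypothetical:

```python
# Toy model of the cost-optimization flywheel: Analyze -> Recommend ->
# Deploy -> Operate, with results feeding the next cycle.

def analyze(usage_data):
    """Gather data and flag savings candidates (here: near-idle instances)."""
    return [r for r in usage_data if r["cpu_avg"] < 5.0]

def recommend(candidates):
    """Turn findings into concrete cost-saving actions."""
    return [{"resource": c["id"], "action": "rightsize"} for c in candidates]

def deploy(recommendations):
    """Apply the recommended changes (stubbed out here)."""
    return [r["resource"] for r in recommendations]

def operate(changed):
    """Measure impact; this report would seed the next Analyze pass."""
    return {"changed": changed, "savings_estimate": 100 * len(changed)}

# One turn of the flywheel over made-up usage data.
usage = [{"id": "i-1", "cpu_avg": 2.1}, {"id": "i-2", "cpu_avg": 71.0}]
report = operate(deploy(recommend(analyze(usage))))
print(report)  # {'changed': ['i-1'], 'savings_estimate': 100}
```

The point of the flywheel framing is that `report` isn’t a final answer – it’s the input to the next cycle, which is why the post insists on continuous iteration over one-time cleanups.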

26:53 📢 Matt – “So in the rare occasion I defend Azure: this is essentially the same thing as AWS, where you get X number of IOPS per gigabyte for GP2. They have the same process with Azure file shares as – sorry, FSx on AWS – where you get X number of IOPS per gigabyte, and if you need more IOPS, then you have to just provision more storage space.”

27:20 📢 Ryan – “It’s more of an artifact of PIOPS being too expensive though, right? That’s just the cost model. It’s because you can achieve better results more cheaply by doing it that way versus, you know, how it’s supposed to be, which is: if I needed high throughput, I should be able to check the box for PIOPS, but it’s so ridiculously expensive it doesn’t make sense to do so.”
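For context on Matt’s point: EBS gp2 volumes get a baseline of 3 IOPS per GiB, with a 100 IOPS floor and a 16,000 IOPS ceiling, so hitting a target IOPS number often means buying storage you don’t need. A quick sketch (the function is ours, the 3/100/16,000 figures are AWS’s published gp2 limits):

```python
import math

def gp2_size_for_iops(target_iops):
    """Smallest gp2 volume (GiB) whose baseline meets target_iops.

    gp2 baseline = 3 IOPS per GiB, with a 100 IOPS floor and a
    16,000 IOPS ceiling.
    """
    if target_iops > 16_000:
        raise ValueError("gp2 baseline caps at 16,000 IOPS")
    if target_iops <= 100:
        return 1  # even a tiny volume gets the 100 IOPS floor
    return math.ceil(target_iops / 3)

print(gp2_size_for_iops(6_000))  # 2000 GiB just to reach 6,000 IOPS
```

Which is Ryan’s complaint in a nutshell: 2 TB of storage as the price of admission for 6,000 IOPS, because it beats paying for provisioned IOPS.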

27:55 📢 Jonathan – “Just in general, I don’t like this blog post at all. I don’t like the diagram. I don’t like the write-up. It’s really amateurish. My eight-year-old could have drawn a better diagram of this. If you really want the proper diagram, go to the FinOps Foundation and look at their framework and its phases. They have basically the same thing – inform, optimize, and operate, going around a little circle – and a really nice cloud-agnostic write-up of the process that you should be following.”
