265: Swing and a WIF


Welcome to episode 265 of the Cloud Pod Podcast – where the forecast is always cloudy! Justin and Matthew are with you this week, and even though it’s a light news week, you’re definitely going to want to stick around. We’re looking forward to FinOps X, talking about updates to Consul and WIF coming to Vault 1.17, and giving an intro to Databricks LakeFlow. Because we needed another lake product. Be sure to stick around for this week’s Cloud Journey series too. 

Titles we almost went with this week:

  • 🌊 The CloudPod lets the DataLake flow
  • 🗺️ Amazon attempts an international incident in Taiwan
  • 🔃 What’s your Vector Mysql? 

A big thanks to this week’s sponsor:

We’re sponsorless! Want to reach a dedicated audience of cloud engineers? Send us an email, or hit us up on our Slack Channel and let’s chat! 

General News

01:40 Consul 1.19 improves Kubernetes workflows, snapshot support, and Nomad integration

  • Consul 1.19 is now generally available, improving the user experience, providing flexibility and enhancing integration points. 
  • Consul 1.19 introduces a new registration custom resource definition (CRD) that simplifies the process of registering external services into the mesh.  
  • Consul service mesh already supports routing to services outside of the mesh through terminating gateways. However, there are advantages to using the new Registration CRD. 
  • Consul snapshots can now be stored in multiple destinations. Previously, you could snapshot to a local path or to a remote object store, but not both.  
  • Now you can write a snapshot to NFS mounts, SAN-attached storage, and object storage at the same time. 
  • Consul API gateways can now be deployed on Nomad, combined with transparent proxy and enterprise features like admin partitions 

01:37 📢 Matthew- “What I was surprised about, which I did not know, was that Consul API gateway can now be deployed on Nomad. Was it not able to be deployed before? Just feels weird… you know, Consul should be able to be deployed on Nomad compared to that. You know, it’s all the same company, but sometimes team A doesn’t always talk to team B.”

03:21 Vault 1.17 brings WIF, EST support for PKI, and more  

  • Vault 1.17 is now generally available with new secure workflows, better performance and improved secrets management scalability. 
  • Key new features:
    • Workload Identity Federation (WIF) allows you to eliminate concerns around providing security credentials to Vault plugins.  
    • Using the new support for WIF, a trust relationship can be established between an external system and Vault’s identity token provider.  
    • This enables secretless configuration for plugins that integrate with external systems such as AWS, Azure, and GCP.  
    • Two new major additions to PKI certificate management:
      • Support for EST (Enrollment over Secure Transport), commonly used by IoT devices
      • Custom certificate metadata
    • Vault Enterprise Seal High Availability: previously, you relied on a single key management system to store the Vault seal key securely.  
    • This created a challenge if the KMS provider had an issue, such as key deletion, a disaster, or a compromise.  
      • In such a case, Vault couldn’t be unsealed. Now, with the new HA feature, you can configure independent seals secured by multiple KMS providers. 
    • Extended namespace and mount limits
    • Vault Secrets Operator (VSO) instant updates.  
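
The WIF feature replaces static cloud credentials in plugin config with Vault’s own identity tokens. Purely as a sketch of what that might look like for the AWS secrets engine – the parameter names here are our best reading of the announcement, so verify them against the Vault 1.17 docs before use:

```shell
# Hypothetical sketch: the AWS secrets engine assumes an IAM role that
# trusts Vault's OIDC identity token provider -- no static AWS keys stored.
vault secrets enable aws
vault write aws/config/root \
    identity_token_audience="vault.example.com" \
    role_arn="arn:aws:iam::123456789012:role/vault-secrets-engine"
```

On the AWS side, the IAM role’s trust policy would name Vault’s identity token issuer as an OIDC provider, which is what makes the relationship “secretless.”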

05:00 📢 Justin – “As I was reading through it, I was like, yeah, if someone gets access to your account and can delete your KMS keys, then they could seal your vault and then you’re totally hosed. Yeah, it was definitely something I had not really considered at all. Even the Consul feature where they talked about the ability to do the backup to multiple systems.”

07:09 Introducing an Enhanced & Redefined Tanzu CloudHealth User Experience 

  • With FinOps X starting tomorrow in sunny San Diego, the press releases are coming out for new cloud cost management capabilities. 
  • VMware Tanzu CloudHealth is upgrading its entire user experience. 
    • They’ll be showing it off at FinOps X. 
    • It’s initially available as a tech preview; interested customers can reach out to their account team to request more information. 
  • FinOps and cloud operations teams will find collaborating easier, with all users accessing the same data on a shared platform. 
  • The Tanzu CloudHealth UI is powered under the hood by a unique graph data store. The significance of the graph data store lies in its ability to capture the many-to-many relationships typical of multi-cloud environments. 
  • The new UI includes a vastly enhanced feature set
    • Tanzu Intelligent Assist is an LLM-enabled chatbot that allows users to gain insights about their clouds and services—including resources, metadata, configuration and status—through natural language without following a specific query format. 
    • Cloud Smart Summary – a concise summary of the vast data in your cloud bills, including what drives your cloud spending, why they change over time, and suggestions you can follow to optimize your costs further. 
    • Optimization Dashboard – a single, customizable pane that combines all available committed discount recommendations, rightsizing opportunities, and anomalous spending across your cloud and services. 
    • Realized savings – detailed reporting and analysis alongside key performance indicators that quantify savings realized over a desired timeframe. 

08:44 📢 Justin – “Now I’m mostly impressed with this press release because they said all of that without actually using the words AI or artificial intelligence anywhere. Yes, they did have Tanzu Intelligent Assist and it is LLM-enabled, but someone in marketing should be fired for not specifically having the AI keyword that any investor of Broadcom would of course want to see in this press release.”

AI is Going Great – Or How ML Makes All Its Money 

10:42 Introducing Databricks LakeFlow: A unified, intelligent solution for data engineering

  • Databricks is announcing Databricks LakeFlow, a new solution that contains everything you need to build and operate production data pipelines. 
  • It includes new native, highly scalable connectors for databases including MySQL, Postgres, SQL Server, and Oracle, and for enterprise applications like Salesforce, Dynamics 365, NetSuite, Workday, ServiceNow, and Google Analytics. 
  • Users can transform data in batch and streaming using standard SQL and Python. 
  • They are also announcing Real-Time Mode for Apache Spark, allowing stream processing at orders-of-magnitude lower latencies than micro-batch. 
  • Finally you can orchestrate and monitor workflows and deploy to production using CI/CD. 
  • Want to learn more or request access? You can here.

11:20 📢 Matthew – “So about five years ago, you walked around any of these tech conferences and all you saw was cloud health, cloud spend, cloud whatever, something cloud. And I feel like the new thing is Lake whatever, Lake flow. I’m like, how am I ever gonna find this in the future? And I’m like, I wanna look this up. it’s that one with Lake in its name.”

13:16 Open Sourcing Unity Catalog  

  • Databricks is open sourcing Unity Catalog, the industry’s first open source catalog for data and AI governance across clouds, data formats, and data platforms. 
  • Here are the most important pillars of the Unity Catalog vision:
    • Open Source API implementation
    • Multi-format Support
    • Multi-Engine support
    • Multimodal
    • A vibrant ecosystem of partners. 

14:52 📢 Justin – “You can get started with this today. If you’re a Databricks customer, you already have access. And if you’re not, good luck figuring out how to integrate it.”


16:03 In the Works – AWS Region in Taiwan

AWS to Launch an Infrastructure Region in Taiwan  

  • Amazon is announcing that Taiwan will have a new region in early 2025 (assuming a lot of bad geopolitical things don’t happen.)  
  • The new AWS Asia Pacific (Taipei) Region will consist of three Availability Zones at launch.
  • Cathay Financial Holdings (CFH) is a leader in financial technology in Taiwan and continuously introduces the latest technology to create a full-scenario financial service ecosystem. Since 2021, CFH has built a cloud environment on AWS that meets security control and compliance requirements. “Cathay Financial Holdings will continue to accelerate digital transformation in the industry, and also improve the stability, security, timeliness, and scalability of our financial services,” said Marcus Yao, senior executive vice president of CFH. “With the forthcoming new AWS Region in Taiwan, CFH is expected to provide customers with even more diverse and convenient financial services.” 
  • It will be interesting to see how this one plays out…

17:27 Introducing Maven, Python, and NuGet support in Amazon CodeCatalyst package repositories

  • AWS is announcing support for the Maven, Python, and NuGet package formats directly in Amazon CodeCatalyst package repositories.  
  • CodeCatalyst customers can now securely store, publish, and share Maven, Python, and NuGet packages using popular package managers such as mvn, pip, nuget, and more.  
  • Through CodeCatalyst package repositories, you can also access open source packages from six additional public package registries.  
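
Once a repository exists, pointing a package manager at it is the usual index-URL dance. The endpoint below is purely illustrative – it is not a real CodeCatalyst URL, so grab the actual connection instructions (and token flow) from the CodeCatalyst console:

```shell
# Hypothetical endpoint -- substitute the repository URL shown in the
# CodeCatalyst console for your space and project.
pip config set global.index-url \
    "https://packages.example-codecatalyst.aws/pypi/my-repo/simple/"
pip install requests   # now resolved through the CodeCatalyst repository
```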

18:16 📢 Justin – “So CodeArtifact is a build and release automation service that provides a centralized artifact repository, access management, and CI/CD integration. CodeArtifact can automatically fetch software packages from public package repositories on demand, allowing teams to access the latest versions of application dependencies. CodeCatalyst is a unified service that helps development teams build, deliver, and scale applications on AWS.”


19:39  What’s new with Cloud SQL for MySQL: Vector search, Gemini support, and more 

  • Google has released several new features for Cloud SQL for MySQL to help you drive innovation and enhance user experiences. 
  • 1) Support for vector search to build generative AI applications that integrate with MySQL. Embedding data as vectors allows AI systems to interact with it more meaningfully. Leveraging LangChain, the Cloud SQL team built a LangChain vector store package to help process data, generate vector embeddings, and connect them with MySQL.  
  • With vector search embedded in MySQL, you can create embedding tables and ask semantic questions about the data in your table, such as how close two addresses are. 
  • 2) Use Gemini to optimize, manage, and debug your MySQL databases. Index Advisor identifies queries that contribute to database inefficiency and recommends new indexes for them within the Query Insights dashboard. Debug and prevent performance issues with active queries, and monitor and improve database health with the MySQL recommender. 
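
The “distance” being computed in vector search is just a similarity measure over embedding vectors. A minimal, database-free illustration of the idea (toy vectors standing in for real model embeddings; nothing here is Cloud SQL-specific):

```python
import math

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors (0.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real model output.
query = [0.9, 0.1, 0.0]
docs = {"invoice": [0.8, 0.2, 0.1], "recipe": [0.0, 0.1, 0.9]}

# Nearest-neighbor search is just "sort candidates by distance to the query".
ranked = sorted(docs, key=lambda name: cosine_distance(query, docs[name]))
print(ranked[0])  # "invoice" -- the semantically closest document
```

In Cloud SQL, the embeddings would live in a table column and the distance function would run inside MySQL, but the ranking logic is the same.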

21:42 📢 Matthew – “I always worry when you put more and more things embedded in the SQL databases, you kind of slowly build more and more of a single point of failure within your application, you know, because then your SQL database becomes computer resource constrained more and more. And with SQL scaling horizontally is a little bit harder, is a lot harder than, you know, scaling vertically. So you just normally end up scaling vertically and then you become less and less cloud native.”

23:33 Join the latest Google Cloud Security Talks on the intersection of AI and cybersecurity 

25:15 Bringing file system optimizations to Cloud Storage with a hierarchical namespace

  • Data-intensive and file-oriented applications are some of the fastest growing workloads on Cloud Storage. However, these workloads often expect certain folder semantics that are not optimized in the flat structure of existing buckets.  To solve this, Google announced Hierarchical namespaces (HNS) for Cloud Storage, a new bucket creation option that optimizes folder structure, resources and operations.  Now in preview, HNS can provide better performance, consistency and manageability for cloud storage buckets. 
  • Existing cloud storage buckets consist of a flat namespace where objects are stored in one logical layer. 
  • Folders are simulated in UI and CLI through / prefixes, but are not backed by cloud storage resources and cannot be explicitly accessed via API. This can lead to performance and consistency issues with applications that expect file-oriented semantics, such as Hadoop/Spark analytics and AI/ML Workloads. 
  • It’s not a big deal until you say you need to move a folder by renaming the path. In a traditional filesystem, that operation is fast and atomic, meaning that the rename succeeds and all folder contents have their paths renamed, or the operation fails and nothing changes. 
  • In a cloud storage bucket, each object underneath the simulated folder needs to be individually copied and deleted. If your folder contains hundreds or thousands of objects this is slow and inefficient. It is also non-atomic – if the process fails midway, your bucket is left in an incomplete state. 
  • A bucket with the new hierarchical namespace has storage folder resources backed by an API, and the new “rename folder” operation recursively renames a folder and its contents as a metadata-only operation.  
  • This has many benefits including:
    • Improved performance
    • File-oriented enhancements
    • Platform support
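
To make the copy-and-delete cost concrete, here’s a toy in-memory simulation of a flat-namespace “folder rename” – a plain dict standing in for a bucket, not the Cloud Storage API:

```python
def rename_folder_flat(bucket: dict, old: str, new: str) -> None:
    """Simulate renaming a folder in a flat object namespace.

    Every object under the prefix is rewritten individually, so the cost
    grows with the object count, and a crash midway would leave a mix of
    old and new keys -- exactly the non-atomicity described above.
    """
    for key in [k for k in bucket if k.startswith(old + "/")]:
        bucket[new + key[len(old):]] = bucket.pop(key)  # one copy + delete per object

bucket = {"logs/2024/a.txt": b"...", "logs/2024/b.txt": b"...", "img/c.png": b"..."}
rename_folder_flat(bucket, "logs", "archive")
print(sorted(bucket))  # ['archive/2024/a.txt', 'archive/2024/b.txt', 'img/c.png']
```

With HNS, the same rename is a single metadata operation on the folder resource, independent of how many objects the folder contains.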

26:50 📢 Matthew – “I mean, it’s a great feature, you know, just kind of getting rid of all the day to day stuff. More one of those things that it just feels like they’re really just announcing a rename feature, but then they’ve kind of set it up so you can only use this API call if you’ve set it up in a very specific way. So I’m kind of more concerned that it’s like, Okay, they’ve optimized it – they’ve over optimized in one way and then can it cause performance issues on the other they don’t talk about. So I’ll be kind of curious to see how this actually works and if there’s other issues.”


28:45 Empowering every scientist with AI-augmented scientific discovery

  • We need Jonathan for this one – so just know you’re pretty much on your own for this one. 
  • Microsoft is announcing generative Chemistry and Accelerated DFT, which will expand how researchers can harness the full power of the platform.   
  • Generative AI will, together with quantum-classical hybrid computing, augment every stage of the scientific method. 
  • And that’s all you’re getting from us. 👍


30:45 Oracle Announces Fiscal 2024 Fourth Quarter and Fiscal Full Year Financial Results 

  • No earnings horns for Oracle – your ears are safe. 
  • Fourth quarter results fell short of Wall Street’s expectations, with adjusted earnings per share of $1.63 and $14.29 billion in revenue. 
  • Cloud services and license support revenue was up 9% to $10.23 billion. 
  • Cloud infrastructure revenue increased 42% to $2.0 billion, but that was slower than the 49% growth rate in the prior quarter. 
  • Honestly, what’s a couple of million at this point?

31:44 Oracle Access Governance introduces next-gen access dashboard and more integrations 

  • Oracle is committed to helping organizations with continuous improvement and innovation, and so they are releasing the following features:
    • Next-Gen Access Dashboard with details on who has access to what
    • Support for expanded identity orchestration with Oracle Peoplesoft HRMS
    • Configure Oracle Cloud Infrastructure Email Delivery service for customized notifications. 
  • Also, this one included some very executive-friendly pretty graphs, which Justin very much appreciated, so gold star to whatever intern made those. 

32:34 📢 Justin – “It’s sort of weird the, you know, the Oracle Access Governance utilizes internal email delivery service notifications. Like, what else would it have leveraged? I would hope you’re using internal email delivery services to deliver email to me.”

Cloud Journey Series

35:23 Free to be SRE, with this systems engineering syllabus 

  • Creating and implementing reliable systems of code and infrastructure forms the discipline of systems engineering, which underpins Google SRE. To help you learn more about systems engineering, Google has compiled a list of best practices and resources for you. 
  • The Systems Engineering Side of Site Reliability Engineering
  • Non-Abstract Large System Design
  • Distributed Imageserver workshop
  • Google Production Environment Youtube Talk
  • Reliable data processing with minimal toil
  • How to design a distributed system in 3 hours (Youtube)
  • Implementing SLO
  • Making Push on green a Reality
  • Canary Analysis Service


And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask questions at theCloudPod.net – or tweet at us with the hashtag #theCloudPod.
