240: Secure AI? We Didn’t Train for That!

Welcome to episode 240! It’s a doozy this week! Justin, Ryan, Jonathan and Matthew are your hosts in this supersized episode. Today we talk about Google Gemini, the GCP sales force (you won’t believe the numbers) and Google feudalism. (There’s some lovely filth over here!) Plus we discuss the latest happenings over at HashiCorp, Broadcom, and the Code family of software. So put away your ugly sweaters and settle in for episode 240 of The Cloud Pod podcast – where the forecast is always cloudy! 

Titles we almost went with this week:

  • 🏃Why run Kubernetes when you can have a fraction of the functionality from Nomad and Podman?
  • 💸The CloudPod hopes for a Microsoft buyout before we shut down
  • 🔬The CloudPod looks forward to semantic versioning now that Mitchell has left HashiCorp
  • 🏰Amazon Fiefdoms, Microsoft Sovereignty… I look forward to Google Feudalism
  • 👑Sovereign Skies vs. Feudal Fiefdoms: Who Owns the Cloud’s Crown?*
  • 🧑‍🌾Cloud Fiefdoms, Feudal Futures: Battling for Data Sovereignty*
  • 🧭Fiefdoms Fall, Sovereigns Rise: The Cloud’s Feudal Flaws*

A big thanks to this week’s sponsor:

Foghorn Consulting provides top-notch cloud and DevOps engineers to the world’s most innovative companies. Initiatives stalled because you have trouble hiring?  Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.

Follow Up

01:09 Broadcom is killing off VMware perpetual licenses and strong-arming users onto subscriptions

  • Broadcom is wasting no time pissing off the VMware community after closing its purchase of VMware. They moved quickly! 
  • With absolutely no warning, Broadcom is killing VMware’s on-premises perpetual licenses and forcing you to move onto subscriptions. According to Broadcom, this is “simplifying” their lineup and licensing model. Sure. 
    • They are doing this by ending the sale of perpetual licenses and Support and Subscription (SnS) renewals, effective immediately. 
  • This impacts the vSphere family of products, VMware Cloud Foundation, Site Recovery Manager (SRM), and the Aria suite. 
  • You may continue to use your existing perpetual licenses until your current contract expires.
    • They will most likely provide a one-time incentive of some kind for the transition to subscription. Then, you get to pay FOREVER. Insert Mr. Burns laugh here. 
  • You will also be able to “bring your own subscription” for license portability to VMware-validated hybrid cloud endpoints running VMware Cloud Foundation. 
  • They are also sweetening the deal by offering 50% off VMware Cloud Foundation and bundling higher support service levels, including enhanced support for product activation and lifecycle management.
  • Competitors are rapidly raising their hands to fill the gap, led mainly by Nutanix, who points out that Broadcom’s entire business model is to maximize the acquired asset within 2 to 3 years, and that as a VMware customer you will *feel* it. 
  • There are also other alternatives, including Xen, KVM, Hyper-V, Proxmox, XCP-ng, and Canonical’s new MicroCloud offering.
  • You know what this means? It’s time to get Kubernetes going! 

02:37 📢 Ryan- “ …this is shocking. You know when there’s an acquisition there’s going to be changes, but this is pretty brutal and very quick..”

General News

11:24 Magic Quadrant is here

  • The latest Magic Quadrant from Gartner has dropped, with a couple of interesting things.
    • Only 4 companies made the leader box: AWS, Microsoft, Google and Oracle. 
    • Niche Players were IBM, Alibaba Cloud, Huawei Cloud and Tencent Cloud
  • Amazon was still top when it came to ability to execute, but Microsoft has passed them on “Completeness of Vision”
  • Nothing really jumps out to us in Strengths or Cautions. 
  • These are all things we have talked about here on the podcast in depth before. 
  • Microsoft did get dinged for persistent resilience and security issues… yet they have the biggest completeness of vision on how they’ll get your data hacked. Go figure. 

12:37 📢 Ryan – “Completeness of vision has always been sort of like this, I don’t know, I’ve always sort of hated that part of these Gartner reports, just because it’s super subjective, and it seems to be like when you look at different ways they rate different technologies just even outside of cloud, it just seems to vary a whole lot, even their justification of why they’re ranking. It’s never made sense to me – it’s never felt logical.”

13:51 📢 Justin – “I think the reason why they got dinged on it is because of AI. And so, you know, you know, this, this magic quadrant just got published, you know, last week. And most likely it was finalized before re:Invent. And, you know, if I look at the pre-re:Invent period of time, everyone was saying AWS was out on AI and didn’t have a play and was all messed up. And so I suspect that that’s why they got dinged this year on, uh, vision.”

18:03 Red Hat Podman and HashiCorp Nomad integration matures

  • For those of you (like some of your podcast hosts) who are allergic to paying Docker money, we have talked about Podman, Finch, and Lima in the past as alternatives. 
  • Well, this week HashiCorp has updated Nomad’s Podman driver to make the integration better than ever. Awesome!
  • Enhancements include running Podman containers in task groups with bridge networking, new authentication options, and the ability to specify credential helpers or external credential configuration files for working with private registries.
  • Plus, with Nomad 1.7 you get tighter integration between Podman and HashiCorp’s Consul service mesh. 

19:19 📢 Matthew – “There’s some decently large companies that use Nomad though. I remember reading about one of the big Roblox issues, included Nomad; so they clearly use the HashiStack.”

21:48 Software Startup That Rejected Buyout From Microsoft Shuts Down, Sells Assets to Nutanix

  • Hey, remember Mesosphere? Well, if you don’t, no one else did either – as they are shutting down after selling some assets, IP and some employees to Nutanix. 
  • Mesosphere had a pretty strong moment early in the container adoption craze, but ended up in the dustbin alongside Swarm and the other orchestration attempts that weren’t called Kubernetes. 
  • As part of their pivot to being a Kubernetes solution, they rebranded to D2iQ.  
  • I think even here at the podcast we thought that was a terrible name at the time. 
  • The company had raised over $250M in VC funding, and will return some portion of its assets to creditors.  

25:49 Mitchell reflects as he departs HashiCorp

  • After 11 years, HashiCorp co-founder Mitchell Hashimoto is leaving.
  • Mitchell says in his goodbye letter he had been thinking about it for a while, and he has been phasing out slowly since stepping down as CEO in 2016, and then departing the board of directors and leadership team in 2021. 
  • He recently welcomed his first child, so he will be spending time with the baby; and after 15 years in tooling, he wants to dabble in new areas ($10 says it’s AI). 
  • Good luck on your next thing Mitchell, and thanks for all the fish. 🐬

27:14 📢 Jonathan – “He may well not NEED to make any more money. So building a new terminal emulator, well, if that’s what makes him happy.”

AI is Going Great!

18:14 The State of AI Security (Or, how ML makes all its money.)

  • In a surprisingly transparent blog post from a company that wants to make billions on AI technology, Cohere has written a post on the state of AI security that covers most of the things I’ve heard. 
  • They rightfully point out that the use of LLMs, and of systems like retrieval-augmented generation (RAG) that integrate proprietary knowledge, comes with rising concerns about cyberattacks and data breaches against these systems. 
  • Integrating LLMs and their associated toolkits into existing applications that weren’t built for the models creates new security risks, compounded by the current rush to adopt generative AI. APIs should be treated as inherently untrustworthy, and giving an LLM (which has its own vulnerabilities) elevated privileges and the ability to perform fundamental functions on proprietary and sensitive data, such as CRUD operations, adds additional risk on top of the API. (There’s a minimal sketch of that “don’t trust the model’s output” idea after the list below.)  
  • They go on to cover the OWASP Top 10 vulnerabilities for LLM applications:
    • 1. Prompt injection
    • 2. Insecure output handling
    • 3. Training data poisoning
    • 4. Model denial of service
    • 5. Supply chain vulnerabilities
    • 6. Sensitive information disclosure
    • 7. Insecure plugin design
    • 8. Excessive agency
    • 9. Overreliance
    • 10. Model theft
  • Good article to share with your security folks who are trying to learn just as fast as your developers are trying to rush AI into their products.  May the odds be ever in your favor. 
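
To make that “treat the model as untrusted” point concrete, here is a minimal, hypothetical Python sketch (the function, allowlist, and order schema are ours for illustration, not from Cohere or OWASP): the LLM only gets to choose from pre-approved actions, and it never supplies raw SQL.

```python
# Hypothetical example: never execute an LLM's output directly.
# The model may only *choose* from pre-approved, parameterized actions.
import json

ALLOWED_ACTIONS = {"read_order"}  # read-only; no create/update/delete for the bot

def handle_llm_action(llm_output: str, db) -> str:
    try:
        request = json.loads(llm_output)   # expect e.g. {"action": "read_order", "order_id": "42"}
    except json.JSONDecodeError:
        return "Model returned non-JSON output; refusing to act."

    action = request.get("action")
    if action not in ALLOWED_ACTIONS:      # guards against excessive agency / insecure output handling
        return f"Action {action!r} is not permitted."

    order_id = str(request.get("order_id", ""))
    if not order_id.isdigit():             # basic input validation; no string-built SQL
        return "Invalid order id."

    # Parameterized query only; the LLM never contributes SQL text.
    row = db.execute("SELECT status FROM orders WHERE id = ?", (order_id,)).fetchone()
    return f"Order {order_id}: {row[0]}" if row else "Order not found."
```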

30:02📢 Justin – “…naturally there’s an opportunity to cause that problem. Insecure output handling, training data poisoning, where you actually just give it bad data on purpose, to make it, I think it’s telling you the truth. Model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, this is where you’re adding a plugin on top of it to give hints, excessive agency, overreliance on the LLM, and model theft.”

AWS

34:52 New for AWS Amplify – Query MySQL and PostgreSQL database for AWS CDK

  • You can now connect to and query your existing MySQL and PostgreSQL databases with the AWS CDK, a new feature to create a real-time, secure GraphQL API for your relational database, whether it lives inside or outside of AWS.  
  • You can generate the API for all relational database operations with just your database endpoint and credentials. When the schema changes, you can run a command to apply the latest table schema changes (a quick sketch of querying the generated API follows below). 
  •  I think this will make Ryan happy as he’ll never have to write a SQL query again. 🙂
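
Once Amplify has generated the GraphQL API, querying it is just an ordinary GraphQL request. A minimal Python sketch (the endpoint, API key, and `listCustomers` field are placeholders for whatever Amplify generates from your schema, and API-key auth is only one of the supported modes):

```python
# Hypothetical client for the GraphQL API Amplify generates from your SQL schema.
import requests

GRAPHQL_ENDPOINT = "https://example123.appsync-api.us-east-1.amazonaws.com/graphql"  # placeholder
API_KEY = "da2-examplekey"                                                           # placeholder

query = """
query ListCustomers {
  listCustomers {
    items { id name email }
  }
}
"""

resp = requests.post(
    GRAPHQL_ENDPOINT,
    json={"query": query},
    headers={"x-api-key": API_KEY},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["listCustomers"]["items"])
```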

36:54 📢 Ryan – “The cool part about this as well, it does just auto-generate that stuff. If you have a very jacked schema, the tooling that they’ve provided you allows you to provide your own input to that. So it wouldn’t be done automatically, but you could tune it to your particular use case. You wouldn’t be completely hosed, which is kind of neat. I was reading this article and I was laughing because the CDK portion of this, I was like, really doesn’t have a lot to do with CDK. But when you read through the article and go through the steps of all the things you’re doing, it really does highlight just how powerful the CDK has really become and what you can do with it. And that’s very different from any other tooling where you have a declarative state managing it that way. It’s kind of neat.”

38:33 Introducing managed package repository support for Amazon CodeCatalyst

  • Apparently the CodeCatalyst team forgot that CodeArtifact exists, and is announcing managed package repositories in Amazon CodeCatalyst. CodeCatalyst customers can now securely store, publish, and share npm packages, and can also access open-source packages from the public npm registry.   

45:55 Amazon EC2 Instance Connect now supports RHEL, CentOS, and macOS

  • Justin is a huge fan of Amazon EC2 Instance Connect, which allows you to connect to your instances over SSH, but it was previously limited to Amazon Linux and Ubuntu. 
  • Amazon has now extended it to Red Hat Enterprise Linux (RHEL), CentOS, and macOS (a quick sketch of how the key push works is below).  
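
For anyone who hasn’t used it, Instance Connect works by pushing a short-lived SSH public key to the instance through the EC2 API and then letting you SSH in as usual. A minimal boto3 sketch (the instance ID, OS user, and key path are placeholders):

```python
# Push a short-lived public key with EC2 Instance Connect, then SSH in normally.
import boto3

ec2ic = boto3.client("ec2-instance-connect", region_name="us-east-1")

with open("/home/me/.ssh/id_rsa.pub") as f:    # placeholder key path
    public_key = f.read()

ec2ic.send_ssh_public_key(
    InstanceId="i-0123456789abcdef0",          # placeholder instance
    InstanceOSUser="ec2-user",                 # e.g. "ec2-user" on RHEL/CentOS AMIs
    SSHPublicKey=public_key,
)

# The pushed key is only valid for about 60 seconds; connect right away:
#   ssh -i ~/.ssh/id_rsa ec2-user@<instance-public-dns>
```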

50:03 AWS Overhauls 60,000-Person Sales Team to Fix ‘Fiefdoms,’ Customer Complaints

  • **60,000 go-to-market people. Holy crap.**
  • Matt Garman is apparently prepping the largest reorg of the AWS sales team yet. (He has been making regular changes, but this will be the most extensive.)
  • The Information points out that AWS sales reps enjoyed just taking orders from eager customers, but now, with stiff competition from Azure and Google, they have to actually compete. 
  • AWS has 115,000 employees overall, meaning over 50% are in the GTM team (sales, marketing, and professional services; there must only be like 10 people in marketing…).
  • Garman has made it a priority to get more of the Fortune 1000 over $10M annually in cloud spend; it’s apparently 20% now. 
  • AWS projects that making up ground with the Fortune 1000 could net $8B in additional revenue.  

52:07 📢 Justin – “…after we got past the shock of the number here, apparently Matt Garman, who’s in charge of sales and marketing and all these things is apparently prepping the largest Reorg AWS sales team ever. Uh, although he’s been in that role for like 10 years. This is like his fourth or fifth major Reorg, uh, that violates the three letter rule for me, but that’s okay. The information points out that the AWS sales reps enjoyed just taking orders from eager customers, but now with stiff competition from Azure and Google, they have to actually go out and compete.”

GCP

54:12  Introducing Gemini: our largest and most capable AI model

  • The long-awaited response to OpenAI (wait, wasn’t that Bard?), Google Gemini, was previewed to the world. 
  • Sundar stops by in this one to talk about what excites him about AI: “The chance to make AI helpful for everyone, everywhere in the world”
  • He points out they are 8 years in on their journey….
  • Gemini, per Google, is the most capable and general model they have built. 
  • It’s the result of a large-scale collaborative effort by teams across Google, including Google Research, and was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information, including text, code, audio, image, and video. 
  • It was also designed to be flexible, scaling from versions small enough to run on a mobile device up to ones that run in large cloud datacenters.  
  • Gemini 1.0 will have 3 different sizes:
    • Gemini Ultra: the largest and most capable model, for highly complex tasks.
    • Gemini Pro: their best model for scaling across a wide range of tasks.
    • Gemini Nano: the most efficient model, for on-device tasks.
  • Google points out their model has state-of-the-art performance, and provides a handy table comparing Gemini Ultra to GPT-4, with Gemini Ultra beating GPT-4 in many areas. 
  • Previously, multimodal models involved training separate components for different modalities and then stitching them together to roughly mimic some of the functionality that Gemini now provides natively. 
  • They trained Gemini on the TPU v4 and v5e Tensor Processing Units, and they are pleased to announce a new TPU, the Cloud TPU v5p, designed for training cutting-edge AI models. 
  • If you want to play with it, Bard is already taking advantage of Gemini Pro, and they will be bringing Nano to the new Pixel 8 Pro.  
  • Also, over the next few months Gemini will show up in Search, Ads, Chrome, and Duet AI.  
  • Gemini Pro is also now available to developers in Google AI Studio or Google Cloud Vertex AI (more on that below).  
  • Gemini Ultra isn’t yet available, as they complete extensive trust and safety checks and red-teaming, and further refine the model using fine-tuning and reinforcement learning from human feedback before making it broadly available. 
  • Gemini Ultra will appear early next year as Bard Advanced. 

56:07 📢 Ryan – “It’s interesting because you see the relationship now between the model and the service that they’re trying to monetize. And so, like, which is interesting because I always felt like Bard was sort of an emergency reaction to Chat GPT. And so, like, so they’re not killing it, but they’ve put something out there that you can now leverage and interact with and they can make it smarter.”

1:00:30 Don’t be fooled: Google faked its Gemini AI voice demo

  • There are some super slick demos of Gemini, but shortly after the video launched lots of claims of it being fake started to circulate. 
  • In the demo they show the AI interacting with someone drawing a duck on a piece of paper and responding to spoken questions about the object being drawn.  
  • In reality, the audio was just them reading the text prompt they had entered into the system. 
  • Google admitted that the demo was edited and shows what interacting with Gemini could look like.  
  • Google also released a second video that details the prompts and methods used to create the demo, which also shows some of the hints they had to supply. 

1:02:24 📢  Matthew – “So I’m annoyed that they did it, but I think the fact that they showed it and then, you know, only at once they were called out on it, but like showed how it actually all worked and what they had to do to show the realisticness of it. Like, this is actually where we’re at. I mean, at least gives me some honesty from them about like, look, this is really where we’re at.”

1:03:19  NotebookLM adds more than a dozen new features

  • NotebookLM, an experimental product (likely to be killed by Google at a terrible moment) from the Labs team designed to help you do your best thinking, is now available in the US to ages 18 and up. It’s now using Gemini Pro, their best model for scaling across a wide range of tasks, to help with document understanding and reasoning.  

1:05:25 What’s new with Filestore: Enhancing your stateful workloads on GKE

  • Filestore has had several enhancements to help you run stateful workloads (like MSSQL) on GKE.
  • Filestore, Google’s fully managed file storage service, is a multi-reader, multi-writer solution that is decoupled from compute VMs, making it resilient to VM changes and failures. 
  • Filestore is fully managed and integrated into GKE’s CSI driver, and is continuously evolving with new features, functionality and GKE integrations.  
  • CSI Driver support for Filestore Zonal Capacity (100TiB).  
    • The new CSI driver integration of their high-capacity zonal offering with GKE starts at 10TiB and scales capacity and performance linearly up to 100TiB per instance to meet high-capacity, high-performance needs. This is useful for large-scale AI/ML training frameworks like PyTorch and TensorFlow that expect a file interface. Additionally, it features non-disruptive upgrades and 1,000 NFS connections per 10TiB; that’s up to 10,000 concurrent NFS connections, supporting large GKE deployments and demanding multi-writer AI/ML workloads. 
  • Backups: you can now use the volume snapshot API on Filestore Enterprise volumes. Google admits it’s a bad name, as it’s actually a method to back up the data and not a local file system snapshot as the name implies. The process of using the API to invoke a backup is the same for Filestore Basic and Enterprise. 
  • GKE and Filestore Enterprise customers get the benefit of the multi-share instances launched last year, which let them subdivide a 1TiB instance into multiple 100GiB persistent volumes to improve storage utilization. Now you can divide your enterprise instance into 80 shares (up from 10), and the minimum share size can be 10GiB (down from 100GiB). (A quick sketch of requesting a Filestore-backed volume from GKE follows below.)
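
From the GKE side, consuming any of these Filestore tiers through the CSI driver is just a PersistentVolumeClaim against the matching StorageClass. A minimal sketch using the Kubernetes Python client (the claim name is a placeholder, and the `zonal-rwx` StorageClass name is our assumption; check which Filestore storage classes your cluster actually exposes):

```python
# Request a Filestore-backed ReadWriteMany volume on GKE via the CSI driver.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-data"),       # placeholder name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],                        # Filestore is multi-reader/multi-writer
        storage_class_name="zonal-rwx",                        # assumed class for the zonal tier
        resources=client.V1ResourceRequirements(requests={"storage": "10Ti"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```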

1:06:46 📢  Ryan – “That’s pretty cool. I’m still waiting on being convinced that the CSI drivers aren’t just FUSE by another name, waiting to screw me. But I do like this service, and I’m sort of hoping that it lives up to the documentation because I’m testing this right now for a couple of projects I’m working on.”

1:07:42 Gemini API and more new AI tools for developers and enterprises

  • And because we missed a week of recording, Google is already dropping new Gemini things. 🙂
  • Gemini Pro is available now via Google AI Studio and Google Cloud Vertex AI (a minimal quick-start sketch is below).
  • They have also released Imagen 2, a new text-to-image diffusion model, and MedLM, a foundation model fine-tuned for the medical domain.  
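
If you want to kick the tires, here is a minimal sketch using the Google AI Studio Python SDK (`pip install google-generativeai`); the API key and prompt are placeholders, and Vertex AI has its own separate SDK:

```python
# Minimal Gemini Pro call via Google AI Studio's Python SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize this week's cloud news in one sentence.")
print(response.text)
```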

Azure

1:09:04  Key customer benefits of the Microsoft and MongoDB expanded partnership

  • MongoDB and Microsoft have continued to expand their partnership
  • Those improvements were highlighted at the recent Ignite conference.
  • MongoDB for VS Code Extension was released in August
  • MongoDB now integrates directly with Azure Synapse Analytics, Microsoft Purview, and Power BI, along with data federation capabilities. 
  • You can also run MongoDB Atlas on Azure through the Azure Marketplace. 
  • They’ve also released a ton of joint documentation, covering everything from building serverless functions that talk to Mongo, to Flask, IoT, and Azure Data Studio integration with Mongo.  

1:09:50 📢  Jonathan – “I think anyone who sells a product at this point should be trying to partner with cloud vendors to get their products to marketplaces.”

1:10:19 Microsoft Cloud for Sovereignty now generally available, opening new pathways for government innovation

  • Microsoft has announced the GA of Microsoft Cloud for Sovereignty across all Azure regions.  
  • The sovereign offering helps governments meet compliance, security, and policy requirements while utilizing the cloud to provide superior value to citizens.
  • There are 3 main things in the setup: 
    • First, Microsoft Cloud for Sovereignty is built on the foundation of more than 60 cloud regions, providing industry-leading cybersecurity along with the broadest compliance coverage. Microsoft offers the most regions of any cloud provider. Customers can implement policies to contain their data and applications within their preferred geographic boundary, in alignment with national or regional data residency requirements.
    • Second, Microsoft Cloud for Sovereignty provides sovereign controls to protect and encrypt sensitive data and control access to that data, enabled by sovereign landing zones and Azure Confidential Computing.
    • A sovereign landing zone is a type of Azure landing zone designed for organizations that need government-regulated privacy, security and sovereign controls. Organizations can leverage landing zones as a repeatable best-practice for secure and consistent development and deployment of cloud services. As many government organizations face a complex and layered regulatory landscape, utilizing sovereign landing zones makes it much easier to design, develop, deploy and audit solutions while enforcing compliance with defined policies.
  • In addition, customers can take advantage of Azure Confidential Computing to secure sensitive and regulated data even while it’s being processed in the cloud. 
  • And third, you can adopt specific, sovereignty-focused Azure Policy initiatives to address the complexity of compliance. 

Oracle

1:13:00 Microsoft and Oracle announce that Oracle Database@Azure is now generally available

  • Did anybody else see the video announcing this? Was it creepy to anyone else? No? Just us? OK, moving on…
  • Oracle running on Azure is now GA; this was initially announced in September by Satya and Larry.
  • Please note: it’s only in the Azure East US region, with more regions coming next year. 
  • Currently, Oracle Database@Azure runs on the Exadata Database Service, the first service available, along with support for Oracle Real Application Clusters (RAC), Oracle GoldenGate, and Oracle Data Guard technologies, with the Autonomous Database service coming in the near future.  
  • There seem to be a few different pricing dimensions, with Dedicated Infrastructure costing you $1.3441 per OCPU hour if you want them to provide the license, or $0.3226 per OCPU hour for BYOL. There are also quarter-rack X9M, database server X9M, and storage server X9M costs. 
    • All I know is that I went to the calculator and selected an X9M shape, which apparently comes with 2 database servers and 3 storage servers. You can then provision additional resources for your workloads with any combination of 2 to 32 database servers and up to 64 storage servers in a single Exadata Database Service, and then you pay for the OCPUs on top of that. Minimum of 4 OCPUs… so it starts around $14.8k a month for 8 vCPUs (roughly $3.9k of that is the 4 OCPUs at $1.3441/hour for a month; the quarter-rack infrastructure makes up the rest). 
  • If you want to have a lot of fun… you can max these numbers out, and it will only cost you $4.23M a month. Think they’ll take a check? 

1:19:07 📢  Justin – “Well, as, as your, as your single CPUs get faster and faster with more and more cores, like you all of a sudden had to get into this complexity of like, well, how, what’s my core boundary where I start charging for more licenses because they’re getting more value out of it, right? Like the same, you know, I think Matt mentioned earlier with VMware, like you used to be, or no, it was Hyper-V, used to be able to buy a data center edition. And then you have unlimited windows virtualization on top of that. Yes, you used to be able to do that. You cannot do that today.”
