208: Azure AI Lost in Space

Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan and Matthew are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI. Do people really love Matt’s Azure know-how? Can Google make Bard fit into literally everything they make? What’s the latest with Azure AI and their space collaborations? Let’s find out!

Titles we almost went with this week:

  • Clouds in Space, Fictional Realms of Oracles, Oh My. 
  • The Cloud Pod streams Lambda to the cloud

A big thanks to this week’s sponsor: 

Foghorn Consulting provides top-notch cloud and DevOps engineers to the world’s most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.

📰News this Week:📰

General News

@00:57 – Interesting article – What is OpenAI doing that Google isn’t?

  • (Besides making a usable product, obviously.) 
  • Google’s AI lab is separate, meaning researchers are separate from the engineers, versus OpenAI, where they are one combined team, which – go figure – works out better. 
  • The article goes on to question whether Google is “losing their edge,” which, as the number 3 player in the AI industry, is pretty evident. The guys discuss the two services, as well as how Bard can be crammed into every product Google makes. 
  • 02:49 📢Ryan: “I find it kind of fascinating that Open AI, because they were first to market, gets to dictate what AI is.”

@07:01 Are you an AI developer? Are you looking to build out your own models? Good luck – finding the hardware to do that continues to be an issue. 

  • The Information put out an article about a shortage of servers at all the major cloud companies, including AWS, Azure, GCP, and OCI. The biggest issue is a shortage of GPUs, which were among the first and hardest-hit resources in the supply chain crunch. 
  • Desktop GPUs are having fewer supply issues. Some of that is thanks to the bottom falling out of the Bitcoin market (no need for mining anymore).
  • 07:57 📢Ryan – “It’s a run on a limited resource, and GPUs – they were the first to hit supply chain issues… it’s always been sort of a scarce resource. When I first heard of GPUs being used for machine learning and those types of workloads, there weren’t enough of them, and it wasn’t really embedded in the type of hardware you need to run in a data center.”
  • 09:07📢Justin – “A lot of GPU returns and GPU availability in the desktop market – those GPUs are better suited for the high computational work of 3D and the things required for mining bitcoin… so you could use desktop GPUs, but your experience won’t go as far.”
  • Unfortunately the smart British guy isn’t here to tell us all the ins and outs of the differences between types of GPUs, so do tune in for that next week!

@10:37 The FinOps Slack channels had some chatter regarding the Amazon spot market pricing increases. 

  • For the past couple of weeks, prices have continued to grow in us-east-1, ap-southeast-1a, and European regions (which are always more expensive anyway), among others. Justin discusses his ideas for why this is the case. Surprisingly (or not surprisingly at all), most of his theories for these price increases are pretty cynical – they include capacity constraints in the supply chain, Amazon limiting additional buying because they’re going into earnings, and (most cynically) Amazon artificially increasing prices in the spot market to boost sales and topline growth. 
  • 12:35📢Justin – “I used to run spot instances for The Cloud Pod in us-west-2, which is in Oregon, and it worked really great until re:Invent week. Then all the labs said ‘use us-west-2!’ and guess what? They’re all hitting capacity that I was using in a spot market. So all my servers go down – which is a terrible scenario.”

AWS

@16:56 AWS Lambda announces support for payload streaming

  • Response payloads can be progressively streamed back to the client, which should help improve performance for both web and mobile apps, since functions can send partial responses as they become ready. Streaming responses cost more in network transfer: billing is based on the number of bytes generated and streamed beyond the first 6 MB, with an initial maximum response size of 20 MB. The guys agree this is a useful feature, but it’s at an early stage, since it only supports Node.js currently. We’re excited to see how this one evolves as they develop it further. 
  • 18:09📢Ryan – “That is pretty cool actually, because that does open up Lambda for a lot more workloads that have been traditionally stuck on big servers with big beefy network connections.” 
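The billing detail above boils down to simple arithmetic. A minimal sketch – the helper name and constants are ours for illustration, not an AWS API, and we assume the announced numbers (no extra streaming charge on the first 6 MB, 20 MB initial maximum response size):

```python
# Hypothetical helper (ours, not an AWS API) to estimate which portion of a
# streamed Lambda response incurs the extra network transfer charge.
FREE_STREAMED_BYTES = 6 * 1024 * 1024   # first 6 MB carry no streaming charge
MAX_RESPONSE_BYTES = 20 * 1024 * 1024   # initial maximum response size

def billable_streamed_bytes(response_bytes: int) -> int:
    """Return the number of response bytes billed for streaming."""
    if response_bytes > MAX_RESPONSE_BYTES:
        raise ValueError("response exceeds the 20 MB initial response-size limit")
    return max(0, response_bytes - FREE_STREAMED_BYTES)
```

So an 8 MB streamed response would be billed for roughly 2 MB of streamed bytes, while anything under 6 MB adds nothing on top of the usual Lambda charges.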

@ 21:36 – Any Proton users out there? They just announced an integration with Git for service sync.  

  • Essentially customers can sync Proton service configurations directly from GitHub. Cool, huh? 
  • Justin especially is interested in this one, and is excited to play with it a bit, even if Matt was a little surprised it wasn’t already in place. 
  • We also think this one may be a really good intro for some dev teams when it comes to simple CI/CD pipelines. 

@ 23:43 – You can now add an ElastiCache cache to Amazon RDS databases in the RDS console

  • While you could definitely do this before, you had to do your own plumbing: configuration and security groups. 
  • This new update should help accelerate application performance, and potentially lower costs, since caching cuts down on the cross-zone data transfer charges you’d otherwise pay. 
  • We’d love to give you more information, and a quick note about our experience with this, but neither our RDS servers in Oregon nor the ones in Ohio have this option yet, so that’s helpful. 
  • 26:46📢Justin: “It’s the most basic, low level integration they could have possibly done to make this work.”
  • 27:08📢Ryan: “It’s a console enablement of the existing service”
  • 27:27📢Justin: “The promise of this headline is amazing – and the detail implementation is *not*.”
  • If you’re looking for “the button” – it’s DEEP and hidden in the actions menu. (Don’t search in the DB instances menu where Justin was looking. That would make too much sense.) 
  • Bottom line: This isn’t what we were hoping it would be. Sad face. 
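The plumbing the console is wiring up here is essentially the classic cache-aside pattern. A minimal sketch of that pattern – all names are ours for illustration, with a plain dict standing in for an ElastiCache (Redis) endpoint and a function standing in for the RDS query:

```python
import time

# Stand-ins for illustration only: a dict instead of a Redis client,
# and a function instead of a real RDS round trip.
cache: dict[str, tuple[float, str]] = {}
CACHE_TTL_SECONDS = 300

def query_rds(user_id: str) -> str:
    """Pretend database query: the slow, expensive path."""
    return f"row-for-{user_id}"

def get_user(user_id: str) -> str:
    """Cache-aside read: check the cache first, fall back to the database."""
    entry = cache.get(user_id)
    if entry is not None:
        expires_at, value = entry
        if time.monotonic() < expires_at:
            return value  # cache hit: no database round trip
    value = query_rds(user_id)  # cache miss: query the database...
    cache[user_id] = (time.monotonic() + CACHE_TTL_SECONDS, value)  # ...and populate
    return value
```

The console feature doesn’t change this logic – your application still does the reads and writes – it just provisions the cache and security groups for you.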

GCP 

@31:50 – No new announcements, but they added some new things to their blog. Google Cloud Deploy now supports a canary deployment strategy. The new strategy supports all target types, including Google Kubernetes Engine, Cloud Run, and Anthos. 
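For a rough idea of what that looks like, the canary strategy is declared per stage in the delivery pipeline config. A sketch for a GKE target – the `my-app-*` names are placeholders, and the exact schema may differ from this sketch, so check Google’s docs before copying:

```yaml
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-app-pipeline
serialPipeline:
  stages:
    - targetId: prod
      strategy:
        canary:
          runtimeConfig:
            kubernetes:
              serviceNetworking:
                service: my-app-service
                deployment: my-app-deployment
          canaryDeployment:
            # Shift traffic to 25%, then 50%, before the full rollout
            percentages: [25, 50]
            verify: false
```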

@ 35:39 Google also announced General Availability of Cloud Run services as backends to Internal HTTP(S) Load Balancers and Regional External HTTP(S) Load Balancers

  • Internal load balancers allow you to establish private connectivity between Cloud Run services and other services and clients on Google Cloud, on-premises, or on other clouds. Additionally, you get custom domains, migration tools from legacy setups, and Identity-Aware Proxy support. Internal load balancers are something many companies overlook, so we’re excited to see this coming out alongside the external load balancers. 
  • 37:27📢Matt: “I feel like the internal load balancer is one of the harder things; a lot of times they use whatever their external tools are, and the internal load balancer is really what trips up a lot of the cloud providers and always is a later feature. So it’s nice to see that they’re doing it all at once.”

@ 38:41 – The Observability tab in the Compute Engine console has reached general availability.  

  • It’s a new visualization tool for Compute Engine fleets, which Google says is an easy way to monitor and troubleshoot the health of your fleet’s VMs. Cool! We like pretty graphs. 
  • Sorry, Google. We don’t mean to be mean. Next week give us some press releases so we’re less salty. 

Azure 

@ 40:33 – Microsoft is excited to continue sucking up all the oxygen on AI with a new Azure connected learning experience (or CLX)

  • Do you want to learn all about AI? Of course you do! Become a data scientist today (and earn more money too!) by taking part in three new courses centered around AI data and skills. Now you too can become an Azure certified AI solutions engineer. Fancy! 

@ 42:36 – AZURE IN SPACE – Azure announced advancements in technologies across multiple government agencies with multiple new features. 

  • Are you ready for some acronyms? Because the Fed LOVES their acronyms. OK, here we go – advancements include:
  • Viasat RTE integration with Azure Orbital Ground Station, bringing high rate, low latency data streaming downlink from spacecraft directly to Azure.
  • A partnership with Ball Aerospace and Loft Federal on the Space Development Agency’s (SDA) National Defense Space Architecture Experimental Testbed (NeXT) program, which will bring 10 satellites with experimental payloads into orbit and provide the associated ground infrastructure.
  • Advancements on the Hybrid Space Architecture for the Defense Innovation Unit, U.S. Space Force and Air Force Research Lab, with new partners and demonstrations that showcase the power, flexibility, and agility of commercial hybrid systems that work across multi-path, multi-orbit, and multi-vendor cloud enabled resilient capabilities.
  • Seriously though, how much do we all love the name Space Force?
  • Azure powers Space Information Sharing and Analysis Center (ISAC) to deliver Space cybersecurity and threat intelligence operating capabilities. The watch center’s collaborative environment provides visualization of environmental conditions and threat information to rapidly detect, assess and respond to space weather events, vulnerabilities, incidents, and threats to space systems.
  • @43:46 📢Ryan: “Space is cool.”
  • That really about sums it up, doesn’t it?

@47:39 – new Azure App Service plans – will they bring greater choice and cost savings? There are two new offerings, and they’re super exciting and not confusing at all. 

  • We’re so happy to have Matt here to explain ALL of this stuff for us. Are you ready for this? Microsoft names for instances always make a ton of sense, so pay attention. 
  • “In uncertain economic times, you need more flexible options to achieve your business outcomes. To meet the need, Microsoft is excited to announce new plans for Azure App Service customers with two new offerings in the Premium V3 (Pv3) tier and expansion in the isolated v2 tier, which powers the high-security app service environment 3. The cost-effective P0v3 plan and a new series of memory-optimized (P*mv3) plans are designed to help more customers thrive and grow with Azure platform as a service (PaaS).” 
  • Got that? Don’t say we didn’t warn you. Don’t worry. Matt is going to be able to explain it EVEN BETTER next week. Probably. Maybe. 
  • @52:51 📢Matt: “You can’t do anything less than ultra premium – this is what I’ve learned. Everything has to be the highest end, because that’s the only way they nicely give you the security stuff that you need to make it past all your compliance.”

Oracle 

@ 56:28 Oracle sovereign cloud solutions are now making realms available for enhanced cloud isolation

  • Realms help with data sovereignty requirements by creating logical collections of cloud regions that are isolated from each other, and they don’t allow customer content to leave a region outside that realm.
  • Oracle’s EU Sovereign Cloud will be launching in 2023
  • The EU sovereign cloud will initially be made of two regions in Germany and Spain.
  • TL;DR: regions that are linked keep data in one governance area. Thanks, Oracle. We’re super grateful to know what terrible things you’re doing on the backend. 👍

Continuing our Cloud Journey Series Talks

Skipping the cloud journey. Again. We’ve been talking a bit too long already, so we should probably end here. You’re welcome. 

