262: I Only Aspire Not to Use and Support .NET


Welcome to episode 262 of the Cloud Pod podcast – where the forecast is always cloudy! Justin and Ryan are your hosts this week, and there’s a ton of news to get through! We look at updates to .NET and Kubernetes, the future of email, new instances that promise to cause economic woes, and – hold onto your butts – a new deep sea cable! Let’s get started!

Titles we almost went with this week:

  • ☁️What is a vagrant when you move it into your cloud?
  • 🥅I only Aspire not to use/support .NET
  • 💉AI Is the Gateway drug to Cloudflare
  • 📪Let me tell you about the future with MAIL ROUTING
  • 💸AWS invents impressive ways to burn money with the U7i instances
  • 📜Google Only wishes they could delete our podcast with an expiring subscription
  • ⚔️AKS Automatic — impressive new attack weapon or an impressive way to make Ops Cry? 

A big thanks to this week’s sponsor:

Big thanks to Sonrai Security for sponsoring today’s podcast! Check out Sonrai Security’s new Cloud Permission Firewall. Just for our listeners, enjoy a 14-day trial at https://sonrai.co/cloudpod

General News 

00:53 Vagrant Cloud is moving to HCP 

  • In what sort of feels like an “if you care about it, get it moved into HCP before the IBM acquisition is done” move, Vagrant Cloud is being migrated to the HashiCorp Cloud Platform (HCP) under the new name HCP Vagrant Registry.
  • All existing users of Vagrant Cloud are now able to migrate their Vagrant Boxes to HCP. 
  • Vagrant isn’t changing; HCP provides a fully managed platform to make using Vagrant easier. 
  • Users who migrate can register for free with the same email address as their existing Vagrant Cloud account.
  • Want to review the migration guide? You can find it here

01:53 📢 Justin – “Did I really think Vagrant would be a key pillar of the IBM future strategy for HashiCorp? Nope, I sure did not. I mean, I figured they’d probably just keep it open source and people would keep developing on it, but I didn’t really expect much. So, you know, to at least get this and an improved search experience is kind of nice because the old Vagrant Cloud website, it was definitely a little stale. So I can have improved search and a new UI is always nice.”

AI Is Going Great (Or How ML Makes All Its Money)

02:43 Snowflake Announces Agreement to Acquire TruEra AI Observability Platform to Bring LLM and ML Observability to the AI Data Cloud  

  • Snowflake is announcing the acquisition of TruEra’s AI observability platform. 
  • This complementary investment will allow them to provide even deeper functionality that will help organizations drive AI quality and trustworthiness by evaluating, monitoring, and debugging models and apps across the full lifecycle, in both development and production.  
  • TruEra’s technology helps evaluate the quality of inputs, outputs, and intermediate results of LLM apps. 
  • This expedites experiment evaluation for a wide variety of use cases, including question answering, summarization, retrieval-augmented generation (RAG)-based applications, and agent-based applications. 
  • TruEra claims it can identify LLM and AI risks such as hallucinations, bias, or toxicity, so that issues can be addressed quickly and organizations can demonstrate compliance with AI regulations (a toy sketch of the evaluation idea follows this list).
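
To make the “evaluate the outputs” idea concrete, here’s a deliberately naive sketch of a groundedness check. This is not TruEra’s method (their feedback functions are far more sophisticated, typically LLM-based); every name below is ours, purely for illustration.

```python
# Toy illustration only: a naive "groundedness" score for an LLM answer.
# Real observability tools (TruEra/TruLens, etc.) use much richer feedback
# functions; this just shows the shape of the idea.

def groundedness_score(answer: str, retrieved_context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context.
    A very low score is a crude hint that the answer may be hallucinated."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(retrieved_context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

context = "Snowflake announced an agreement to acquire TruEra in May 2024."
answer = "Snowflake is acquiring TruEra to add LLM observability."
print(f"groundedness = {groundedness_score(answer, context):.2f}")
```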

04:02 📢 Ryan – “Yeah, this is a gap, right? Like, and I think we’ll, we’re in that uncomfortable phase of new technology, where it’s sort of rushing, like there’s AI, but there’s the management of AI. And, you know, how to sort of operate it at scale. And so there’ll be a couple different tasks and solutions. I feel like this is one. Hopefully, yeah, observability is a little funny, because it’s sort of like, I get it. But maybe another word.”

05:06 AI Gateway is generally available: a unified interface for managing and scaling your generative AI workloads 

  • Cloudflare’s AI Gateway is now generally available.  
    • Since the beta launched in September 2023, Cloudflare has proxied over 500M requests and is now prepared for you to use it in production. 
  • AI Gateway is an AI Ops platform that offers a unified interface for managing and scaling your generative AI workloads.  
  • At its core, it acts as a proxy between your service and your inference providers, regardless of where your model runs.  
  • With a single line of code, you can unlock a set of powerful features focused on performance, security, reliability, and observability. Cloudflare says it’s the control plane of your AI ops, and it’s just the beginning, with a robust roadmap of exciting features planned for the future (see the sketch after this list). 
  • Today, AI Gateway gives you the following benefits and capabilities: 
    • Analytics: Aggregated metrics across multiple providers, allowing you to see traffic patterns and usage, including the number of requests, tokens, and costs over time. 
    • Real-Time Logs: Insight into requests and errors as you build.
    • Caching: Enable custom caching rules and use Cloudflare’s cache for repeat requests instead of hitting the original model provider API, helping you save on cost and latency. 
    • Rate Limiting: Control how your application scales by limiting the number of requests your app receives, to control cost or prevent abuse.
    • Support for your favorite providers: Workers AI plus 10 of the most popular, including Bedrock, Anthropic, Azure OpenAI, Cohere, Google Vertex AI, Groq, Hugging Face, OpenAI, Perplexity, and Replicate. 
    • Universal Endpoint: Improve resilience by defining request fallbacks to another model or inference provider in case of errors.
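
Here’s a minimal sketch of the “single line of code” idea: you keep your existing provider SDK and just point it at the gateway. The account ID, gateway name, and exact base-URL format below are placeholders – check Cloudflare’s AI Gateway docs for your own values.

```python
# Minimal sketch: routing existing OpenAI SDK calls through Cloudflare's
# AI Gateway instead of hitting the provider directly. ACCOUNT_ID, GATEWAY,
# and the base-URL format are placeholders -- verify them against
# Cloudflare's AI Gateway documentation.
from openai import OpenAI

ACCOUNT_ID = "your-cloudflare-account-id"   # placeholder
GATEWAY = "my-gateway"                      # placeholder

client = OpenAI(
    api_key="sk-...",  # your provider API key still authenticates the request
    base_url=f"https://gateway.ai.cloudflare.com/v1/{ACCOUNT_ID}/{GATEWAY}/openai",
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from behind the gateway"}],
)
print(resp.choices[0].message.content)
```

Because the gateway sits in the request path, the analytics, caching, and rate limiting above apply without any further code changes.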

06:46 📢 Ryan – “…it’s funny, because I think they’re largely a very similar offering with Yeah, a little bit of difference in terms of the validity of the responses. But I do, you know, like, it is going to be fun to watch all the all these areas sort of fill in because this is, this is really nice for, for those companies who are trying to productionize AI and realizing like, this is ridiculously expensive if you’re routing everything back to your model and, and so like having your cache is gonna be super key and that’s cool.”

AWS

09:05 Optimized for low-latency workloads, Mistral Small now available in Amazon Bedrock 

  • Amazon is announcing that the Mistral Small foundation model (FM) from Mistral AI is now generally available in Amazon Bedrock.
  • This is a fast follow-up to their recent announcements of Mistral 7B and Mixtral 8x7B in March and Mistral Large in April. 
  • You can now access four high-performing models from Mistral AI in Amazon Bedrock. 
  • Key features of Mistral Small you need to know about (a quick invocation sketch follows this list):
    • Retrieval-Augmented Generation (RAG) specialization
    • Coding Proficiency
    • Multilingual Capability.
  • Interested in pricing? Find that here
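
For the curious, here’s a minimal boto3 sketch of calling Mistral Small through Bedrock’s Converse API. The model ID and region below are placeholders – confirm the exact ID and availability in the Bedrock console.

```python
# Minimal sketch: invoking Mistral Small via the Amazon Bedrock Converse API
# (requires a recent boto3). The model ID and region are placeholders --
# confirm them in the Bedrock console before relying on this.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="mistral.mistral-small-2402-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Summarize RAG in one sentence."}]}],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```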

09:44 📢 Justin – “So I’ve been playing around with them more and more because he got me LM Studio and I just like playing with them. So I downloaded one, I was downloading the Microsoft ones for their newer model the other day and I was playing with that one and the reality is I very quickly realized I can’t see a difference between most of the models. I am not sophisticated enough to understand what the differences are between these things.”

13:11 PostgreSQL 17 Beta 1 is now available in Amazon RDS Database Preview Environment

  • RDS For PostgreSQL 17 Beta 1 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 17. 
  • PostgreSQL 17 includes new vacuum features that reduce memory usage, improve the time to finish vacuuming, and show the progress of vacuuming indexes. 
  • With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. 
  • PostgreSQL continues to build on the SQL/JSON standard, adding support for ‘JSON_TABLE’, which can convert JSON into standard PostgreSQL tables. 
  • The ‘MERGE’ command now supports the ‘RETURNING’ clause, letting you further work with modified rows (both features are sketched after this list).  
  • PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions. 
  • Overall, Postgres released Beta 1 on May 23rd, and Amazon was supporting it by May 24th…
  • Pricing information is available here
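
Here’s a hedged sketch, using psycopg 3 from Python, of the two SQL-level features called out above. The connection string and table names are made up, and the exact JSON_TABLE and merge_action() syntax should be checked against the PostgreSQL 17 release notes.

```python
# Sketch only: trying two PostgreSQL 17 features with psycopg 3 against a
# PG 17 beta instance (e.g. an RDS Database Preview Environment endpoint).
# Connection details and table names are placeholders.
import psycopg

with psycopg.connect("host=my-pg17-preview.example.com dbname=test user=postgres") as conn:
    # SQL/JSON: JSON_TABLE turns a JSON document into a regular rowset.
    rows = conn.execute("""
        SELECT *
        FROM JSON_TABLE(
            '[{"sku": "a-1", "qty": 2}, {"sku": "b-7", "qty": 5}]',
            '$[*]' COLUMNS (sku text PATH '$.sku', qty int PATH '$.qty')
        ) AS jt
    """).fetchall()
    print(rows)

    # MERGE ... RETURNING: see what the merge actually did to each row.
    merged = conn.execute("""
        MERGE INTO inventory AS t
        USING staging AS s ON t.sku = s.sku
        WHEN MATCHED THEN UPDATE SET qty = s.qty
        WHEN NOT MATCHED THEN INSERT (sku, qty) VALUES (s.sku, s.qty)
        RETURNING merge_action(), t.sku, t.qty
    """).fetchall()
    print(merged)
```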

14:42 Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.30

  • Speaking of other fast support updates.  
  • Kubernetes 1.30 is now supported in Amazon EKS and Amazon EKS Distro (a quick upgrade sketch follows this list).
  • Amazon points out that 1.30 includes stable support for pod scheduling readiness and minimum domain parameters for PodTopologySpread constraints. 
  • EKS 1.30 managed node groups will automatically default to AL2023 as the node operating system. So now you too can be mad at systemd! 
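
If you just want to kick the tires, here’s a minimal boto3 sketch of checking a cluster’s control-plane version and requesting the 1.30 upgrade. The cluster name and region are placeholders, and managed node groups and add-ons still need their own upgrades.

```python
# Minimal sketch: check an EKS cluster's Kubernetes version and start the
# 1.30 control-plane upgrade. "my-cluster" and the region are placeholders;
# node groups and add-ons are upgraded separately.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

current = eks.describe_cluster(name="my-cluster")["cluster"]["version"]
print(f"current control plane version: {current}")

if current != "1.30":
    update = eks.update_cluster_version(name="my-cluster", version="1.30")
    print("update started:", update["update"]["id"])
```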

15:16 📢 Ryan – “Yeah, that’s that’s not going to be fun for some Kubernetes operators, but probably not to a lot of the Kubernetes users… Yeah, all their automation is now not going to work.”

16:07 Mail Manager – Amazon SES introduces new email routing and archiving features

  • Amazon SES is exactly what it sounds like – “Simple Email Service” – allowing you to send and receive emails without having to provision email servers yourself. 
  • However, managing multiple email workloads at scale can be a daunting task for organizations. From handling high volumes of emails to routing them efficiently, and ensuring uniform compliance with regulations, the challenges can be overwhelming. 
  • Managing different types of outbound emails, whether one-to-one user email, transactional, or marketing emails generated from applications, also becomes challenging due to increased security and compliance requirements. 
  • To ease these pain points, AWS is introducing the new SES Mail Manager.
  • Yes, you read that right. It was so simple, it needed a manager. 
  • SES Mail Manager is a comprehensive solution with a powerful set of email gateway features that strengthens your organization’s email infrastructure. It simplifies email workflow management and streamlines compliance control, while integrating seamlessly with your existing systems. Mail Manager consolidates all incoming and outgoing email through a single control point. This allows you to apply unified tools, rules, and delivery behaviors across your entire email workflow. 
  • Key capabilities include connecting different business applications, automating inbound email processing, managing outgoing emails, enhancing compliance through archival, and efficiently controlling overall email traffic.  
  • Mail Manager Features:
    • Ingress Endpoints – Customizable SMTP endpoints for receiving emails. These let you apply filtering policies and rules to determine which emails should be allowed into your organization and which should be rejected. You can use an open ingress endpoint or an authenticated ingress endpoint.
    • Traffic Policy and policy statements with rule sets.  
    • SMTP Relay allows you to integrate your inbound email processing workflow with external email infrastructure, such as on-premises Exchange or third-party email gateways. 
    • Email Archiving to store emails in S3
    • Support for add-ons – specialized security tools that can enhance your security posture and tailor inbound email workflows to your specific needs. 

19:58 📢 Justin – “Yeah, I’m just thinking of the compliance benefit of being able to directly write these emails to S3 to then be able to have security scan them for compliance or DLP use case. Like there’s so many use cases that this allows for you to do. That’s really kind of cool.”
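
Riffing on that DLP point, here’s a hypothetical sketch of scanning archived messages sitting in an S3 bucket for something that looks like a card number. The bucket, prefix, and the assumption that the archive is readable as plain .eml objects are ours – Mail Manager archiving has its own export and search workflows, so treat this purely as an illustration of the pattern.

```python
# Hypothetical sketch of the DLP idea above: scan raw email objects in an S3
# bucket for a credit-card-looking pattern. Bucket/prefix and the plain-.eml
# assumption are placeholders, not the Mail Manager archive format.
import re
import boto3
from email import message_from_bytes

s3 = boto3.client("s3")
BUCKET, PREFIX = "my-mail-archive-bucket", "archive/"   # placeholders
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        raw = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        msg = message_from_bytes(raw)
        body = msg.get_payload(decode=True) or b""  # simple non-multipart case
        if CARD_RE.search(body.decode("utf-8", errors="ignore")):
            print(f"possible PAN in {obj['Key']} (subject: {msg['Subject']})")
```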

20:23 Amazon Security Lake now supports logs from AWS WAF

  • AWS announces the expansion of the log coverage for Amazon Security Lake to now include AWS Web Application Firewall logs.  
  • You can now easily analyze your log data to determine if a suspicious IP address is interacting with your environment, monitor trends in denied requests to identify new exploitation campaigns, or run analytics to spot anomalous successful access by previously blocked hosts (a quick query sketch follows).
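
Security Lake lands its logs in OCSF tables that you can query with Athena; here’s a hedged sketch of asking whether a suspicious IP shows up in the WAF data. The database, table, and column names below are placeholders – yours will come from your own Security Lake/Glue setup.

```python
# Hedged sketch: query Security Lake's WAF table in Athena for a suspicious IP.
# Database, table, column names, and the results bucket are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
    SELECT time, activity_name, src_endpoint.ip AS source_ip
    FROM my_security_lake_db.my_waf_ocsf_table
    WHERE src_endpoint.ip = '203.0.113.7'
    LIMIT 100
"""

execution = athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print("query started:", execution["QueryExecutionId"])
```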

22:23 Amazon EC2 high-memory U7i Instances for large in-memory databases

  • If you need lots of memory for things like caching, the new U7i instances have graduated from preview to GA. 
  • These instances have up to 32TB of DDR5 memory and 896 vCPUs. Leveraging fourth-generation Intel Xeon Scalable processors (Sapphire Rapids), these high-memory instances are designed to support large in-memory databases, including SAP HANA, Oracle, and SQL Server. 
  • 3 sizes: U7i-12tb, U7i-24tb, and U7i-32tb (a quick math check follows this list).  
    • U7i-12tb: $152.88 per hour – $113,742.72 per month
    • U7i-24tb: $305.76 per hour – $227,485.44 per month
    • U7i-32tb: $407.68 per hour – $303,313.92 per month
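
Those monthly figures line up with a 744-hour (31-day) month of on-demand usage; here’s a quick back-of-the-envelope check (size labels are just the ones from the list above):

```python
# Back-of-the-envelope: the per-month numbers above are the hourly rate x 744.
hourly_rates = {"U7i-12tb": 152.88, "U7i-24tb": 305.76, "U7i-32tb": 407.68}
for size, rate in hourly_rates.items():
    print(f"{size}: ${rate:,.2f}/hr -> ${rate * 744:,.2f} per 744-hour month")
```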

**If you’re a company that uses these instances – and you’re hiring- we have a couple of guys who would LOVE to chat with you. Hit us up!**

23:59 AWS Weekly Roundup – LlamaIndex support for Amazon Neptune, force AWS CloudFormation stack deletion, and more (May 27, 2024) 

  • Amazon OpenSearch Service zero-ETL integration with Amazon S3 — This Amazon OpenSearch Service integration offers a new efficient way to query operational logs in Amazon S3 data lakes, eliminating the need to switch between tools to analyze data. You can get started by installing out-of-the-box dashboards for AWS log types such as Amazon VPC Flow Logs, AWS WAF Logs, and Elastic Load Balancing (ELB). To learn more, check out the Amazon OpenSearch Service Integrations page and the Amazon OpenSearch Service Developer Guide.
  • New Amazon CloudFront edge location in Cairo, Egypt — The new AWS edge location brings the full suite of benefits provided by Amazon CloudFront, a secure, highly distributed, and scalable content delivery network (CDN) that delivers static and dynamic content, APIs, and live and on-demand video with low latency and high performance. Customers in Egypt can expect up to 30 percent improvement in latency, on average, for data delivered through the new edge location. To learn more about AWS edge locations, visit CloudFront edge locations.
  • LlamaIndex support for Amazon Neptune — You can now build Graph Retrieval Augmented Generation (GraphRAG) applications by combining knowledge graphs stored in Amazon Neptune and LlamaIndex, a popular open source framework for building applications with large language models (LLMs) such as those available in Amazon Bedrock. To learn more, check the LlamaIndex documentation for Amazon Neptune Graph Store.

25:28 📢 Justin – “…this last one, there’s a lot of words that I don’t understand put together, but hopefully we can parse it, we’re gonna go through it, Ryan. The Llama index support for Amazon Neptune is now available. You can now build a graph retrieval augmented generation or graph rag. I didn’t know this was a thing. I knew what rag is, I knew what graph database was, but apparently you put together it’s a graph rag. Application by combining knowledge graphs stored in Amazon Neptune and the Llama index, which is apparently a popular open source framework for building applications with large language models, such as those available to you in Bedrock, of course. Apparently that can make magic happen. So, if you’ve been waiting for this, you can now do it.”

GCP

26:31 More FedRAMP High authorized services are now available in Assured Workloads

Google Cloud Achieves FedRAMP High Authorization on 100+ Additional Services 

  • Google has shown its commitment to federal agencies with a significant milestone this week: over 100 newly FedRAMP High authorized services, including Vertex AI, Cloud Build, Cloud Run, and more. 
  • Google Cloud provides the most extensive data center footprint for FedRAMP high workloads of any cloud service provider, with nine US regions to choose from. 
  • They have also received Top Secret/Secret authorization. 
  • One of the most interesting things about these announcements is that they align with new Office of Management and Budget (OMB) guidance, which basically says to embrace commercial cloud solutions rather than dedicated offerings like GovCloud. 
  • The OMB guidance basically points out that requiring dedicated GovCloud regions has decreased the value FedRAMP was supposed to provide to the federal government, adding high barriers to entry. 

29:47 📢 Justin – “Yeah, I mean, I would much rather do it this way and then deal with the small extra things on the configuration or additional audit logging capabilities you need to do. And the reality is that a lot of these fast companies are selling to megabanks and other very heavily scrutinized organizations that care a lot about security of their customers’ data, customers like Apple, et cetera. So these vendors are under a lot of scrutiny for lots of reasons.”

31:13 Sharing details on a recent incident impacting one of our customers

  • If you’ve been paying attention to X or other social locations where people talk about the cloud, you have probably heard about Google deleting its customer UniSuper’s data in Australia. We have touched on this maybe once or twice, but without official Google communications, we haven’t spent a lot of time on it. That changes today, as Google has written a formal communication about it. 
  • Google says the delay in communicating about this issue was because their first priority was focused on getting the customer back up and fully operational.  And now they’ve had a chance to do a full internal review and share more information publicly. 
  • The incident specifically impacted:
    • One customer in one cloud region
    • One Google Service – Google Cloud VMware Engine (GCVE)
    • One of the customer’s multiple GCVE private clouds (across two zones)
  • It did not impact any other Google service, any other customer using GCVE or any other Google Cloud service, the customer’s other GCVE private clouds, their Google account, org, folders or projects, or the customer’s data backups stored in GCS in the same region. 
  • During the initial deployment of Google Cloud VMware Engine for the customer using an internal tool, Google operators inadvertently misconfigured the GCVE service by leaving a parameter blank. This had the unintended and unknown consequence of defaulting the customer’s GCVE private cloud to a fixed term, with automatic deletion at the end of that period. The incident trigger and the downstream system behavior have been corrected to ensure this cannot happen again. 
  • The Customer and Google teams worked 24/7 over several days to recover the customer’s GCVE private cloud, restore the network and security configurations, restore its applications and recover data to restore full operations. 
  • This was assisted by the customer’s robust and resilient architectural approach to managing the risk of outage or failure. 
  • Data backups stored in GCS in the same region were not impacted by the deletion, and third-party backup software was instrumental in aiding the rapid restoration. 
  • Google has deprecated the internal tool that triggered this sequence of events; this process is now fully automated and controlled by customers via the user interface, even when specific capacity management is required. 
  • Google scrubbed the system database and manually reviewed all GCVE private clouds to ensure that no other GCVE deployment is at risk. 
  • They have corrected the system behavior that set GCVE private clouds up for deletion in such deployment workflows.

35:07 📢 Justin – “Well, as all errors tend to be, they’re all human error. So it’s just, I’m glad Google stood up a blog post really taking ownership of this and said, hey, this was on us. We’re taking responsibility. And it won’t happen to you. And here’s why it won’t happen to you. And here’s what we’re doing to prevent this from happening in the future, which makes me feel more confident. I think they needed to get something out maybe a little sooner. Like, hey, this is true. This had happened. We were helping the customer. We’ll get back to you.”

36:51 Improving connectivity and accelerating economic growth across Africa with new investments

  • Today, Google announced new investments in digital infrastructure and security initiatives designed to increase digital connectivity, accelerate economic growth and deepen resilience across Africa. 
  • Yes, that’s right, it’s a new Deep Sea Cable!
  • The new undersea cable, “Umoja” (which means “unity” in Swahili), is the first fiber optic route to connect Africa directly with Australia. 
  • Anchored in Kenya, the route will pass through Uganda, Rwanda, the Democratic Republic of the Congo, Zambia, Zimbabwe, and South Africa (home to a Google Cloud region) before crossing the Indian Ocean to Australia.  
  • The path was built in collaboration with Liquid Intelligent Technologies to form a highly scalable route through Africa, including access points allowing other countries to take advantage of the network. 

37:46 📢 Justin – “Yeah, pretty heavily invested in by China actually because of how untapped it is by the rest of the market, but you know, I think having more competition there and being able to get access to data and to network services and anything to make it better going to Australia with multiple paths is also a win because, yeah, there for a long time was not a lot of options.”

38:50 Cloud SQL: Rapid prototyping of AI-powered apps with Vertex AI  

  • Developers seeking to leverage the power of ML on their PostgreSQL data often find themselves grappling with complex integrations and steep learning curves. Cloud SQL for PostgreSQL now bridges that gap, allowing you to tap into cutting-edge ML models and vector generation techniques offered by Vertex AI directly within your SQL queries (a hedged sketch follows). 
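
Here’s a heavily hedged sketch of what “ML inside your SQL” can look like. The extension and function names (google_ml_integration, embedding()) and the model name are assumptions from memory – confirm them against the current Cloud SQL for PostgreSQL documentation before trying this.

```python
# Hedged sketch: generating a Vertex AI embedding from inside a SQL query on
# Cloud SQL for PostgreSQL. Extension, function, and model names are
# assumptions -- verify against the Cloud SQL docs. Connection is a placeholder.
import psycopg

with psycopg.connect("host=my-cloudsql-instance dbname=app user=postgres") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS google_ml_integration CASCADE")
    vec = conn.execute(
        "SELECT embedding('textembedding-gecko@003', %s)",
        ("rapid prototyping of AI-powered apps",),
    ).fetchone()[0]
    print(f"embedding dimensions: {len(vec)}")
```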

Azure

42:03 General Availability of .NET Aspire: Simplifying .NET Cloud-Native Development

  • At Build, Microsoft announced the latest and greatest .NET capability, Aspire, which streamlines the development of .NET cloud-native services and is now GA.  
  • .NET Aspire brings together tools, templates, and NuGet packages that help you build distributed applications in .NET more easily. Whether you’re building a new application, adding cloud-native capabilities to an existing one, or are already deploying .NET apps to production in the cloud today, .NET Aspire can help you get there faster. 
  • Why .NET Aspire? (Or really, why .NET?)
    • Microsoft says it’s been an ongoing aspirational goal to make .NET one of the most productive platforms for building cloud-native applications. In pursuit of this goal, they’ve worked alongside some of the most demanding services at Microsoft, with scaling needs unheard of for most apps – services supporting hundreds of millions of monthly active users. Working with these services to make sure they satisfied their needs ensured Microsoft had foundational capabilities that could meet the demands of high-scale cloud services. 
    • Microsoft invested in important technologies and libraries such as Health Checks, YARP, HTTP Client Factory, and gRPC. With Native AOT, they worked towards a sweet spot of performance and size, and SDK container builds make it trivial to get any .NET app into a container and ready for the modern cloud. 
    • But developers said they needed more; building apps for the cloud was still too hard. Developers are increasingly pulled away from their business logic and what matters most to deal with the complexity of the cloud. Enter .NET Aspire, a cloud-ready stack for building observable, production-ready, distributed applications. 
  • First, .NET Aspire provides a local development experience with C# and the .NET Aspire App Host project. This allows you to use C# and familiar-looking APIs to describe and configure the various application projects and hosted services that make up a distributed application. Collectively, these projects and services are called resources, and the code in the app host forms an application model of the distributed application. Launching the app host project during the developer inner loop will ensure all resources in the application model are configured and launched according to how they were described. 
  • Adding an App Host project is the first step in adding .NET Aspire to an existing application. 
  • The Aspire Dashboard is the easiest way to see your application’s OpenTelemetry data. 

44:11 📢 Ryan – “This is interesting because when I first read the title, I thought it was more of like a, you know, features into a .NET framework, but this is more like CDK or programmatic resources for .NET, which is kind of cool, actually. As much as I wanted to make fun of it before, like this is a gap.”

46:17 Microsoft Copilot in Azure extends capabilities to Azure SQL Database (Public Preview)

  • AI has come for SQL Server: Microsoft Copilot for Azure SQL Database is here now. Its skills can be invoked in the Azure portal query editor, allowing you to use natural language to query SQL, or via the Microsoft Copilot in Azure integration.

47:14 📢 Ryan – “…soon will be saying, you know, like we always say, like, it doesn’t have a SQL interface. That’s how you know it’s real, it’ll be like, does it have like natural language processing of a SQL interface? Because it, you know, like I can’t form a query to save my life.”

49:43 AKS at Build: Enhancing security, reliability, and ease of use for developers and platform teams

  • At Build, Microsoft announced the preview of AKS Automatic, which provides the easiest way to manage the Kubernetes experience for developers, DevOps, and platform engineers. 
  • It’s ideal for modern AI applications, enabling AKS cluster setup and management and embedding best practice configurations. This ensures that users of any skill level have security, performance, and dependability for their applications. 
  • With AKS Automatic, Azure manages the cluster configuration, including nodes, scaling, security updates, and other pre-configured settings. Automatic clusters are optimized to run most production workloads and provision compute resources based on Kubernetes manifests.
  • With more teams running Kubernetes at scale, managing thousands of clusters efficiently becomes a priority. Azure Kubernetes Fleet Manager now helps platform operators schedule their workloads for greater efficiency, and several new skills are available for AKS in Copilot in Azure to assist platform operators and developers. 
    • Intelligent workload scheduling
    • Copilot in Azure has skills for AKS
    • Auto-instrumentation for Azure Monitor Application Insights
    • Azure portal now supports KEDA scaling. 

50:45 📢 Ryan – “Finally, I’ve been waiting for these management features of Kubernetes for years now, because it’s so difficult to operate Kubernetes at scale. And you’re seeing this now with GKE for Enterprise, I think it’s called now, what was Anthos, and now AKS Automatic, which I love the name.”

Closing

And that is the week in the cloud! Go check out our sponsor, Sonrai, and get your 14-day free trial. Also visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with the hashtag #thecloudpod.
