219: The Cloud Pod Proclaims: One Does Not Just Entra into Mordor

Welcome to episode 219 of The Cloud Pod podcast – where the forecast is always cloudy! Today your hosts are Justin and Jonathan, and they discuss all things cloud, including clickstream analytics, Databricks, Microsoft Entra, virtual machines, Outlook threats, and some major changes over at the Google Cloud team. 

Titles we almost went with this week:

  • TCP is not Entranced with Entra ID
  • The Cave you Fear to Entra, Holds the Treasure you Seek
  • Microsoft should rethink Entra rules for their Email

A big thanks to this week’s sponsor:

Foghorn Consulting provides top-notch cloud and DevOps engineers to the world’s most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.

📰News this Week:📰

AWS

00:47 Clickstream Analytics on AWS for Mobile and Web Applications

  • Want some solutions? Don’t we all! Well, for clickstream analytics at least, Amazon has released pre-built solutions using standard AWS components. 
  • Covers iOS and Android 
  • You can now deploy an end-to-end solution to capture, ingest, store, analyze and visualize your customers’ clickstreams inside your web and mobile applications. 
  • This solution is built using standard AWS services to allow you to keep your data in the security and compliance perimeter of your AWS account and customize the processing and analytics as you require, giving you the full flexibility to extract value for your business. 
  • The new solution leverages ECS plus Kafka/Kinesis/S3, EMR, Redshift, and QuickSight
  • You can use plugins to transform the data during processing via EMR; AWS provides built-in plugins for User Agent enrichment and IP address enrichment.
    • You can also export your source server inventory list to a CSV file and download it to your local disk. 
    • You can always continue leveraging the previously launched import and export functionality to and from an S3 bucket if you’re so inclined. 
    • Additionally, four predefined post-launch actions have been added: 
      • Configure Time Sync
      • Validate Disk Space
      • Verify HTTP(S) response
      • Enable Amazon Inspector
  • If only this had been written 9 months ago when everyone was trying to run away from Google Analytics…

02:45 📢 Justin- “I believe they have cloud cost optimization opportunities and solutions, but I would appreciate maybe some additional ones. More dashboards, more pretty pictures for dealing with your Amazon bill.”

02:58 Introducing the AWS .NET Distributed Cache Provider for DynamoDB

  • Have you ever had to set up DynamoDB as a distributed cache provider in .NET? Were you frustrated with the documentation and/or the complexity of what you had to do? Well, fear not, gentle listener! Amazon has your back. 
  • Now in preview is AWS .NET Distributed Cache Provider for DynamoDB. This library enables Amazon DynamoDB to be used as the storage for ASP.NET Core’s distributed cache framework. 
  • This avoids unnecessary heavy lifting to implement a common .NET Core pattern yourself. 

03:26📢 Jonathan – “That’s awesome. I mean, this is replacing things like memcache and other similar technologies that are pluggable, I assume.”

04:09📢 Justin – “One of the things I’ve done quite a few times is enable session state for ASP.NET code. And you can actually even use this DynamoDB table to cache that, which is kind of great, because the way you either do it is you use Redis, which is the right way to do it, or you use SQL Server, which is the wrong way to do it. And you cause yourself all kinds of grief when your application gets a few hundred connections, as your SQL Server can’t keep up with it. So, always good to have another option in addition to Redis that is not SQL Server.”

GCP

04:44 Former Amazon Web Services data center leader Chris Vonderhaar joins Google Cloud 

  • Chris Vonderhaar (who was the VP of AWS Data Center Community and left in the spring) has now joined Google as VP Demand and Supply Management. 
  • This is part of a larger shakeup in Google Cloud’s management team. 
  • The changes include longtime Google executive Urs Hölzle shifting to an individual contributor role.
  • In the past, Amazon has been aggressive about pursuing legal action against former executives; we will see what happens in this case. 

06:20 Set task timeout (jobs)

  • Have you been angry that Google Cloud Run only supports a timeout of 1 hour? 
  • Are you also angry that they’ve pivoted to using things like Knative to solve that problem?
  • Well, release that anger – you can now have Google Cloud Run timeouts up to 24 hours. 
  • This is great for those **LONG** running jobs.
  • We here at The Cloud Pod like to refer to this as “serverful for serverless” and it’s a great feature. 
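If you want to try the longer limit, the extended timeout can be set when creating or updating a Cloud Run job with the gcloud CLI. A minimal sketch, with the job name and region as placeholders:

```shell
# Raise the per-task timeout on an existing Cloud Run job to the new 24-hour maximum
# ("my-long-job" and the region are hypothetical values)
gcloud run jobs update my-long-job \
  --region us-central1 \
  --task-timeout 24h
```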

06:48📢 Justin – “Do be careful on this one. The pricing can get a little out of control on long running transactions. So do your math and ROI calculation to see if maybe you should just run it in a container, if it was going to take that long. Just to put it out there.”

Azure

08:44 Latest generation burstable VMs – Bsv2, Basv2, and Bpsv2 

  • Microsoft has announced the public preview of new burstable VMs, Bsv2, Basv2 and Bpsv2
  • These VMs offer a more cost-effective way to run workloads that burst in and out of activity. 
  • Bsv2 VMs have a base performance level that is guaranteed, and they can burst to a higher performance level for short periods of time.
  • Basv2 VMs are designed for workloads that only need to run occasionally, and they offer a pay-as-you-go pricing model.
  • To learn more about the new burstable VMs, check the Azure Blog announcement and Burstable VM documentation.
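As a sketch, launching one of the new burstable sizes from the Azure CLI looks like any other VM create; the resource group, VM name, and exact size string below are placeholders (check `az vm list-sizes` for what’s available in your region):

```shell
# Create a VM using a Bsv2-series burstable size
# (resource group, VM name, and size are placeholder/assumed values)
az vm create \
  --resource-group my-rg \
  --name my-burstable-vm \
  --image Ubuntu2204 \
  --size Standard_B2s_v2
```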

09:35 📢 Jonathan- “You almost love this kind of thing, because now they can charge for the full Windows license for all 8 cores, but you actually only get 2 cores’ worth of performance.”

09:43 📢 Justin – “I hadn’t thought of that perspective, but yes, you’re completely right. Well done, Microsoft, well done.”

09:53 Microsoft Entra expands into Security Service Edge and Azure AD becomes Microsoft Entra ID 

  • Azure AD is now known as Microsoft Entra ID.  
  • The name change represents the evolution and unification of the entire Microsoft Entra family, and a commitment to simplify secure access. 
  • Did you know there was a WHOLE Entra family? We didn’t. Must have missed that blog. 
  • No action other than snickering at the name is required for you, the end user. 
  • Entra ID is more than just the AD you “know and love” – it also provides:
    • App Integrations via SSO, Passwordless and MFA 
    • Conditional access
    • Identity protection
    • Privileged Identity Management
    • End-User Self Service
    • Unified Admin Center
  • This brings it in line with the rest of the Entra product family, which includes ID Governance, External SSO, Verified ID, Permissions Management, Workload ID, Internet Access, and Private App Access
  • More information on Entra will be available at the live Tech Accelerator event on July 20th, 2023. 

09:43 📢 Justin – “So apparently the Entra product was announced a year ago, in July of 2022 – we clearly were on vacation… It’s interesting, you know, how we’ve been doing the show for a couple of years. I would say that the last 15 months have been just kind of slow in general terms. So I don’t know if that’s a sign of the maturing cloud market, I don’t know if that’s a sign of productivity issues and layoffs impacting things, but I am sort of curious to see what Google Next drops this year. I’m really curious to see what re:Invent does this year, because it definitely feels like big innovations are kind of slowing down. And I don’t know if that’s just a perception I have or if that’s reality.”

14:36 Microsoft mitigates China-based threat actor Storm-0558 targeting of customer email 

  • Microsoft mitigated a China-based threat actor targeting customer email.
  • The threat actor used a variety of techniques to steal email credentials, including phishing emails, malicious websites, and watering hole attacks.
  • Microsoft blocked the threat actor’s activity and notified customers who may have been affected.
  • Most concerning in the announcement is some of the details – or lack of details. 
    • The actor exploited a token validation issue to impersonate Azure AD users and gain access to enterprise mail.

15:01 📢 Justin – “The most concerning part of the announcement though, in my opinion, is this quote: ‘the actor exploited a token validation issue to impersonate Azure AD users and gain access to enterprise mail.’ And they don’t really say how he got that token. Was it, you know, a token that everybody has access to in the web application, or is it a private token that should never have been exposed, that he got through an insider threat or from, you know, maybe a former employee? I don’t know how that got out there. I wish they would expand on this. The initial alert on it is pretty lightweight.”

15:43 📢 Jonathan – “Yeah being able to forge tokens to access Outlook web access is slightly concerning…”

16:09 Azure’s cross-region Load Balancer is now generally available

  • The Azure cross-region load balancer is a load balancer that distributes traffic across multiple regions. (We know, that seems obvious but you never want to assume.)
  • This provides high availability and disaster recovery, as if one region fails, the other regions can continue to serve traffic.
  • The load balancer uses Azure’s global network to provide high-performance, low-latency connections, ensuring that users in any region have a good experience when accessing your application.
  • As you would expect, the load balancer also provides health checking to ensure that only healthy instances serve traffic.
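A minimal sketch of standing one up with the Azure CLI, assuming the `az network cross-region-lb` command group and with all resource names hypothetical:

```shell
# Create a cross-region (global tier) load balancer
# (all names here are placeholder values)
az network cross-region-lb create \
  --resource-group my-rg \
  --name my-global-lb \
  --frontend-ip-name my-frontend \
  --backend-pool-name my-backend-pool
```

The backend pool of a cross-region load balancer is then populated with the frontends of your existing regional load balancers.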

16:59 📢 Justin – “If you’re next to the server that’s normally in Los Angeles and now you’re being routed to India, that’s not gonna be a great latency experience, I’m sure. So good on them.”

17:13 General availability: Azure Data Explorer adds support for PostgreSQL, MySQL, and CosmosDB SQL external tables  

  • ADX external tables are an Azure Data Explorer feature for querying data that lives outside ADX, and they now support PostgreSQL, MySQL, and Cosmos DB sources.
  • This allows you to connect to external data sources and query them with the Kusto Query Language (KQL) without ingesting the data first.
  • This can be useful for a variety of reasons, such as:
    • Accessing data that is not stored in Azure Data Explorer, such as data sitting in an operational database.
    • Joining external data with data already ingested into ADX.
    • Using ADX’s query engine to analyze data from a variety of sources.
  • To use an external table, you first create an external table definition. This definition specifies the location of the data source and the schema of the data.
  • Once you have created an external table definition, you can query it like any native table.
  • Support for these new external table types is now generally available – no need to contact your sales rep for special access. 

19:13 📢 Justin – “It’s interesting to me Databricks is still around because I was convinced this company would get bought by Microsoft when they created Azure Databricks. But I was just looking at them as we were talking, they’ve raised a lot of money, including like $1.6 billion in August 2021. So they have a long runway and they’re probably very expensive to buy at a billion dollars in revenue. But I’m sure, I assume they’re gonna IPO at some point. So then if they fall apart, then Microsoft can buy them for cheap on the stock market. So maybe it’s a good strategy!” 

Oracle

Continuing our Cloud Journey Series Talks

We’ll continue our Cloud Journey Series next week when Ryan and Matt join us again – so be sure to tune in next week.
