Welcome to the newest episode of The Cloud Pod podcast – where the forecast is always cloudy! Today your hosts are Jonathan and Matt as we discuss all things cloud and AI, including Temporary Elevated Access Management (or TEAM, since we REALLY like acronyms today), FTP servers, SQL servers and all the other servers, as well as pipelines, whether or not the government should regulate AI (spoiler alert: the AI companies don’t think so), and some updates to security at Amazon and Google.
Titles we almost went with this week:
- The Cloud Pod’s FTP server: now with post-quantum key support
- The Cloud Pod can now TEAM into your account, but only temporarily
- The Cloud Pod dusts off their old floppy drive
- The Cloud Pod dusts off their old SQL Server disks
- The Cloud Pod is feeling temporarily elevated to do a podcast
- The Cloud Pod promises that AI will not take over the world
- The Cloud Pod duals with keys
- The Cloud Pod is feeling temporarily elevated.
A big thanks to this week’s sponsor:
Foghorn Consulting provides top-notch cloud and DevOps engineers to the world’s most innovative companies. Initiatives stalled because you’re having trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.
📰News this Week:📰
No general news this week! Probably because no one wanted to talk to us.
- You can now connect via SSH and RDP to EC2 instances without using public IP addresses. With EC2 Instance Connect (EIC) Endpoints, customers get remote connectivity to their instances in private subnets, eliminating the need for public IPv4 addresses for connectivity.
- Previously, you would have needed to create bastion hosts to tunnel SSH/RDP connections to instances with private IP addresses, but that created its own set of problems: bastion hosts have to be patched, managed, and audited, and they incur additional costs.
- The EIC Endpoint combines AWS IAM-based access controls, which restrict access to trusted principals, with network-based controls such as security group rules.
- It provides an audit trail of all connections via AWS CloudTrail, helping customers improve their security posture.
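As a sketch of the API shape, creating an endpoint boils down to a single call against your VPC; the parameter names below follow boto3’s EC2 client, and every ID is a placeholder you’d swap for your own resources.

```python
# Sketch: request parameters for creating an EC2 Instance Connect Endpoint,
# as you would pass them to boto3's ec2.create_instance_connect_endpoint().
# All resource IDs are placeholders -- substitute resources from your own VPC.
def eic_endpoint_params(subnet_id, security_group_ids):
    return {
        "SubnetId": subnet_id,                   # private subnet hosting the endpoint
        "SecurityGroupIds": security_group_ids,  # network controls on who can reach it
        "PreserveClientIp": True,                # keep the caller's source IP for auditing
    }

params = eic_endpoint_params("subnet-0abc1234", ["sg-0abc1234"])
# boto3.client("ec2").create_instance_connect_endpoint(**params)  # needs AWS credentials
```

Once the endpoint exists, the CLI’s `aws ec2-instance-connect open-tunnel` command can tunnel SSH/RDP to a private instance through it, with no public IP involved.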
01:31📢 Matt- “It’s nice to see Amazon still coming up with more solutions to not have things be public; and really try to get their customers to not use all the older-school technology.”
- RDS Custom for SQL Server now allows customers to use their own SQL Server installation media when creating an instance. With bring your own media (BYOM), customers may leverage their existing SQL Server licenses with Amazon RDS Custom for SQL Server. RDS Custom is a managed database service that allows customization of the underlying operating system and database environment; managed features include Multi-AZ, point-in-time recovery, and more.
- Previously, RDS Custom for SQL Server customers used a license-included, hourly pay-as-you-go model. With BYOM, customers can provide their own SQL Server licenses, letting those who have already purchased licenses save on costs while offloading the undifferentiated heavy lifting of database management to RDS Custom.
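As a rough sketch of what the create call looks like: with RDS Custom you pass a `custom-*` engine plus the instance profile RDS Custom requires. Parameter names follow boto3’s `rds.create_db_instance`; the identifiers, custom engine version (built from your uploaded media), and profile name below are placeholders, so check the RDS Custom docs for the exact required set.

```python
# Sketch: parameters for creating an RDS Custom for SQL Server instance
# using bring-your-own-media, as passed to boto3's rds.create_db_instance().
# All identifiers and the custom engine version (CEV) name are placeholders.
def rds_custom_byom_params():
    return {
        "DBInstanceIdentifier": "my-custom-sqlserver",
        "Engine": "custom-sqlserver-ee",         # RDS Custom engine type
        "EngineVersion": "15.00.4316.3.my-cev",  # CEV built from your own media
        "DBInstanceClass": "db.m5.xlarge",
        "AllocatedStorage": 100,
        # Instance profile RDS Custom uses to manage the underlying host:
        "CustomIamInstanceProfile": "AWSRDSCustomSQLServerInstanceProfile",
    }

params = rds_custom_byom_params()
# boto3.client("rds").create_db_instance(**params, MasterUsername=..., MasterUserPassword=...)
```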
04:28📢Jonathan- “I think the advantage for me is that I’ve often heard, well, we can’t use RDS because it doesn’t support this, doesn’t support this, doesn’t support this. Whereas now you can deploy your own instances with your own controls and just use RDS as a management layer. Kind of cool.”
- AWS is providing a solution for Temporary Elevated Access Management (TEAM) that integrates with IAM Identity Center (formerly AWS SSO) and lets you manage temporary elevated access to your multi-account AWS environment. You can download the TEAM solution from AWS Samples, deploy it, and customize it to your needs.
- The TEAM solution has the following features:
- Workflow and approval
- Invoking access using IAM Identity Center
- Viewing request details and session activity
- Support for managed identities and group memberships
- A rich authorization model
07:02 📢Matt- “This whole solution looks great. I’ll be more curious in about two years from now when they add it into Amazon SSO – or the rebranded Amazon IAM Identity Center – to actually see it all nicely integrated in and not, ‘Hey, there’s a web portal over here that you run with Amplify and there’s probably Step Functions and CloudWatch.’ It’s a really good solution for build your own. And if you have a public cloud team that can help manage this, great. But if you’re trying to do this for a one or two AWS account, probably not worth the overhead and complexity of it. But it’s nice to see that they’re, again, providing solutions for people.”
08:15 📢Jonathan – “I guess you could integrate it with things like change handlers so you can only get admin access during pre-approved changes or to pre-approved instances and that kind of thing. I’m sure this is a problem that a lot of people have, like what do you do when you don’t want admin all the time, but you do need admin rights when you need it? And I’ve seen people build all kinds of tooling around this, you know, well, we keep passwords in Vault, but if we get the password out to use temporarily, then we have to go back and change the password later. It’s all a lot of moving parts. And so having an off-the-shelf solution like this is pretty neat.”
09:34 re:Inforce 2023 Quick Hits
- Our recording schedule has been a bit off so we didn’t cover it at the time, but re:Inforce has come and gone – and we have the CliffsNotes version just for you.
- Post-quantum hybrid SFTP file transfers using AWS Transfer Family
**Quick note from Justin:** If you’re using AWS Transfer Family for your SFTP solution, and quantum computing breaking your SFTP and FTPS ciphers keeps you up at night, AWS now supports post-quantum hybrid key exchange for AWS Transfer. Personally, if you’re leveraging SFTP in 2023 and post-quantum security is your priority, I’m unsure you’re using the right technology. Meanwhile, your SOC team rejoices: Security Hub’s new automation rules let you take automated actions to update your findings, making it easier to avoid alert fatigue and close out alerts and issues more quickly.
For those who were excited about WAF Fraud Control for Account Takeover Prevention (ATP), AWS is adding Account Creation Fraud Prevention (ACFP) to protect your applications’ sign-up pages against fake account creation by detecting and blocking fake requests.
- Prevent account creation fraud with AWS WAF Fraud Control – Account Creation Fraud Prevention
- Screw up timestamp as requested: 11:24
- Let’s blame Justin
- Additional timestamp errors for your listening pleasure! This is what happens when AI DOESN’T do the work for you I guess. Moving on. (Language Warning!)
- For those who need to meet NSA CNSSP 15 FIPS compliance and Data-at-Rest Capability Package 5.0 guidance for two layers of CNSA encryption: the new S3 dual-layer server-side encryption with keys stored in AWS KMS (DSSE-KMS) is available for objects uploaded to an S3 bucket. S3 is the only cloud object storage service that allows customers to apply two layers of encryption at the object level and control the data keys used for both layers. DSSE-KMS makes it easier for highly regulated customers, such as the DoD, to fulfill rigorous security standards.
- DSSE-KMS applies two layers of encryption to objects in Amazon S3, which can help protect sensitive data against the low probability of a vulnerability in a single layer of cryptographic implementation.
- DSSE-KMS is designed to meet National Security Agency CNSSP 15 for FIPS compliance and Data-at-Rest Capability Package (DAR CP) Version 5.0 guidance for two layers of CNSA encryption.
- Holy acronyms, Batman!
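At the API level, DSSE-KMS is just a new server-side-encryption value on the object upload. Here is a sketch of the `put_object` parameters as passed to boto3’s S3 client; the bucket name and KMS key alias are placeholders.

```python
# Sketch: request parameters for uploading an S3 object with dual-layer
# server-side encryption (DSSE-KMS), as passed to boto3's s3.put_object().
# Bucket name and KMS key alias are placeholders.
def dsse_put_params(bucket, key, body, kms_key_id):
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms:dsse",  # two independent layers of encryption
        "SSEKMSKeyId": kms_key_id,               # customer-managed key used for both layers
    }

params = dsse_put_params("regulated-bucket", "report.csv", b"data", "alias/dar-cp-key")
# boto3.client("s3").put_object(**params)  # needs AWS credentials
```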
14:30 📢Matt – “I think for the average consumer, you’re probably not gonna need or want this. I’d be curious of what the overhead is or if it’s something that Amazon’s just eating the overhead on the backend.”
- AWS is launching new APIs for SQS.
- These new APIs allow you to manage dead-letter queue (DLQ) redrive operations programmatically.
- You can use the SDK or the CLI to programmatically move messages from the DLQ to their original queue, or to a custom queue destination to attempt to process them again.
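The new calls are StartMessageMoveTask, ListMessageMoveTasks, and CancelMessageMoveTask. A sketch of the start call’s parameters (queue ARNs are placeholders), as passed to boto3’s SQS client:

```python
# Sketch: parameters for sqs.start_message_move_task(), the new DLQ redrive API.
# Queue ARNs are placeholders.
def redrive_params(dlq_arn, destination_arn=None, rate_per_second=None):
    params = {"SourceArn": dlq_arn}
    if destination_arn is not None:
        # Omit DestinationArn to redrive each message to its original source queue.
        params["DestinationArn"] = destination_arn
    if rate_per_second is not None:
        params["MaxNumberOfMessagesPerSecond"] = rate_per_second
    return params

params = redrive_params("arn:aws:sqs:us-east-1:123456789012:my-dlq")
# boto3.client("sqs").start_message_move_task(**params)  # needs AWS credentials
```

Leaving out the destination covers the common case Jonathan describes below: putting expired or failed messages back onto the queue they came from.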
16:13📢Matt – “This is kind of nice. I mean, I always feel like I’ve had a dead letter queue and then I just send a notification. It’s all I’ve ever used it for. But, if you can actually now move that message to somewhere useful, do either retry or if you’re doing failure driven development (which I would recommend against) you could in theory just cascade it down, but it’s nice that they are actually enabling this with APIs.”
16:40📢Jonathan – “Yeah, I’ve definitely had a use case for this before, when we used SQS for hundreds of thousands of log events. When Elasticsearch was down regularly, things would eventually time out of the queue after three days of trying to rebuild the Elasticsearch cluster. Moving those messages back was a Python script, and as I said, they ended up at the back of the queue again. So, definitely nice.”
- Now generally available!
- Amazon Verified Permissions (AVP) is a new service that makes it easier to manage authorization in your applications.
- AVP uses the open-source Cedar policy language to define and evaluate fine-grained permissions for your own applications.
- AVP centralizes authorization logic outside your application code, where policies can be audited and updated independently.
- AVP is easy to set up and use.
- AVP can help you reduce the risk of unauthorized access to your resources.
- AVP can help you improve compliance with security and regulatory requirements.
- AVP is available in multiple AWS Regions, with pay-per-request pricing.
- For more information, see the AWS documentation.
Note from Jonathan – It says easy to **deploy**, not easy to **use**. Listener beware.
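For context, AVP policies are written in the Cedar language. A minimal policy permitting one user to view one resource looks roughly like this; the entity types and IDs are made up for illustration.

```cedar
// Hypothetical Cedar policy: allow user "alice" to view one specific photo.
permit(
  principal == User::"alice",
  action    == Action::"viewPhoto",
  resource  == Photo::"vacation.jpg"
);
```

Your application then asks AVP “is this principal allowed to take this action on this resource?” at runtime instead of hard-coding the rules.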
- Google is announcing the general availability of Dataform, which lets data teams develop, version control, and deploy SQL pipelines in BigQuery.
- Dataform helps data engineers and data analysts of all skill levels build production-grade SQL pipelines in BigQuery while following software engineering best practices such as version control with Git, CI/CD, and code lifecycle management.
- Dataform offers a single unified UI and API with which to build, version control and operationalize scalable SQL pipelines.
- In this single environment, data practitioners can develop new tables faster, ensure data quality and operationalize their pipelines with minimal effort, making data more accessible across their organization.
- “Before we started using Dataform, we used an in-house system to transform our data which was struggling to scale to meet our needs,” says Neil Schwalb, Data Engineering Manager at Intuit Mailchimp. “After adopting Dataform and more recently Dataform in Google Cloud we’ve been able to speed up and scale our data transformation layer to 300+ tables across large volumes of data. The Google Cloud-Dataform integration has also sped up our development workflow by enabling faster testing, clearer logging, and broader accessibility.”
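A Dataform pipeline is defined in SQLX files – SQL plus a small config block – which is what makes it version-controllable in Git. A minimal, hypothetical table definition might look like this (the table names are placeholders):

```sqlx
config {
  type: "table",
  description: "Daily order totals"
}

SELECT
  order_date,
  SUM(amount) AS total_amount
FROM ${ref("orders")}  -- ref() wires up dependencies between pipeline steps
GROUP BY order_date
```

The `ref()` calls are how Dataform builds the dependency graph, so it knows what order to run your tables in.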
22:02📢 Matt- “Hey, Jonathan. Help explain to me what they’re doing here, because all I see is that we’re building pipelines from SQL to BigQuery, and they put a UI around it.”
22:14📢 Jonathan- “I think the big thing is data engineers spend a lot of time in a console clicking through things, clicking through pipelines, a lot of data quality is managed by people. A lot of pipelines are built by people rather than as code and so I guess by forcing it to be defined as code and versioned as code… potentially you could build a new pipeline, compare the output of that with the output of a previous pipeline. If it looks good then promote it to the next environment.”
- Google’s Secure AI Framework is a set of principles and practices that guide the development and deployment of secure AI systems. The framework is based on three pillars:
- Responsible AI development: This pillar includes principles such as transparency, accountability, and fairness.
- Robust AI systems: This pillar includes principles such as accuracy, reliability, and safety.
- Secure AI systems: This pillar includes principles such as confidentiality, integrity, and availability.
- The framework is designed to help Google build AI systems that are safe, reliable, and trustworthy. It is also intended to help Google comply with applicable laws and regulations.
- The article also includes a number of case studies that illustrate how Google has applied the framework to real-world projects.
24:13📢 Matt- “I feel like all the cloud providers and all the AI providers are just saying, hey, this is what we’re gonna do. And, you know, I really would like to see what are the consequences if they break their own framework. You know, like what are they going to do? Because cool, they can say that they’re gonna be responsible and robust and secure and ensure confidentiality and all these things, but it’s very easy to put out a press release saying that. It’s very hard to prove that you’re doing that.”
- Google has warned its employees not to disclose confidential information to Bard; this isn’t surprising, as many other large firms have voiced similar concerns.
- However, Google also said employees should not use code generated by Bard, which seems to counter the message that developers can become more productive using it.
- Google told Reuters its internal ban was introduced because Bard can output undesired code suggestions, which could lead to buggy or complex, bloated software that costs developers more time to fix than if they hadn’t used AI to code at all.
- The article also mentions that Google’s DeepMind AI lab does not want the US government to set up an agency to focus on regulating AI. (WEIRD)
- Google argues the role should be split across different departments.
- They believe NIST could oversee and guide policies and issues.
27:40📢 Matt- “NIST is a framework though; it’s not a regulating agency. It’s not like NIST says you have to do this. It’s a standards agency.”
28:01📢 Jonathan- “Yeah, that’s why they want NIST involved, presumably, so that it’s very unregulated.”
- Microsoft is committing several things to its customers around AI:
- To be transparent about how AI is used in Microsoft products and services.
- To provide customers with control over how their data is used for AI.
- To build AI systems that are fair, unbiased, and accountable.
- To invest in research and development to ensure that AI is used for good.
- To collaborate with governments and regulators to develop responsible AI policies.
- To educate and empower people to use AI safely and responsibly.
- These commitments are important because they show that Microsoft is committed to using AI in a way that benefits customers and society as a whole.
32:18 📢 Matt- “Repeat everything we just talked about for Google.”