224: The Cloud Pod Adopts the BS License

Welcome to episode 224 of The CloudPod Podcast – where the forecast is always cloudy! This week, your hosts Justin, Jonathan, and Ryan discuss some major changes at HashiCorp, including Terraform switching from open source to the BSL (Business Source License). Additionally, we cover updates to Amazon S3, goodies from Storage Day, and Google Gemini vs. OpenAI.

Titles we almost went with this week:

None! This week’s title was ✨chef’s kiss✨

A big thanks to this week’s sponsor:

Foghorn Consulting provides top-notch cloud and DevOps engineers to the world’s most innovative companies. Initiatives stalled because you’re having trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.

📰Pre-Show📰 

📰General News this Week:📰

00:41 AWS and HashiCorp announce Service Catalog support for Terraform Cloud 

  • AWS is catching up with GCP by adding native support for Terraform in Service Catalog.
  • The new integration expands on the previous support for open source Terraform; Service Catalog now supports the Terraform Cloud service as well.
  • This new feature is available in all AWS Regions where AWS Service Catalog is available.

02:07 HashiCorp adopts Business Source License

  • Do you use tools like env0 or Scalr? Or perhaps some of the other Terraform-adjacent things? You **may** be in some trouble.
  • Despite being OK with Amazon and GCP integrating its open source tooling – and now its Terraform Cloud offering – HashiCorp is unhappy with companies adopting its technology and productizing it, and is moving to the new BSL (Business Source License) model in response.
  • This covers all HashiCorp products, not just Terraform.
  • HashiCorp points out that their approach has enabled them to partner closely with cloud providers to enable tight integrations for their joint users and customers, as well as hundreds of other technology partners.  
  • There are vendors who take advantage of pure OSS models, and the community’s work on OSS projects, for their own commercial goals, without providing material contributions back. (GASP!)
    • Hashi doesn’t think this is “the spirit of open source.” 
  • As a result, they believe commercial open source models need to change, as Open Source has reduced the barrier to copying innovation and selling it through existing distribution channels. 
    • They claim they’re in good company, pointing to other OSS projects that have closed their source or adopted similar BSL models.
  • They are officially moving from the Mozilla Public License v2.0 to the BSL v1.1 on all future releases of HashiCorp products.  
    • The APIs, SDKs, and almost all other libraries will remain MPL 2.0.
  • BSL is a source-available license that allows copying, modification, redistribution, non-commercial use, and commercial use under specific conditions.
    • They point out that MariaDB developed this license in 2013, and that Couchbase, Cockroach Labs, and Sentry have since adopted it.
  • Hashi also would like to point out that they are including additional grants that allow for broadly permissive use of their source code, to make things slightly less scary. 
  • End users can continue to copy, modify and redistribute the code for all non-commercial and commercial use, except where providing a competitive offering to HashiCorp.
  • They produced a nice FAQ just in case you have more questions that may have been frequently asked. https://www.hashicorp.com/license-faq

OpenTF is the response, but we still have questions.

  • It’s clear things are not great in the open source community, and this one has the potential to be especially impactful. We’d love to hear our listeners’ thoughts on this movement away from open source and toward more commercial business models.

03:52 📢 Justin – “So here’s where I get confused. If I make a product internally that uses HashiCorp for my own needs, and that prevents me from buying Terraform Enterprise because I copied all the functionality for my own personal gain in my company… not selling it, not getting any money out of it. Does that count as competing with HashiCorp, or is that okay?”

04:30 📢 Jonathan – “I also have questions about like, is it just the source that they care about in that sense? Because everything about it is the source license. Can I still integrate the next version of Terraform binary if I download it and use it without modification in my own product and compete with HashiCorp? I’m unclear on that.”

07:31 The OpenTF Manifesto – a plea to keep TF open source

  • For those folks who WERE seriously impacted by the changes (we’re looking at you, Spacelift, env0, Scalr, and Gruntwork) – they have written a full manifesto on why Terraform adopting BSL is bad. This is essentially a plea to keep Terraform open source forever.
  • They say the BSL is a poison pill for Terraform, with vague usage grants and unknown legal risk both now and in the future – you now have to ask whether you are in violation.
  • The request from the manifesto is that Terraform switch back to an open source license.
  • If HashiCorp does not, they will fork Terraform into a foundation such as the Linux Foundation or the Cloud Native Computing Foundation.

08:15 📢 Justin – “When I think about what’s happened to Docker, that’s a really bad thing when that happens because the community moves on from you – and you get kind of left behind. Then you get bought by some company we never heard of, divested a bunch of things, and now you have to pay for licensing for Docker for zero reason. So if I had to pay for a Terraform client natively from Terraform someday – because some PE company bought them, I’m going to be super mad. But I’ll just move to OpenTF hopefully by then.”

 

12:04 📢 Jonathan – “I’ve got a question for you then. If Terraform had never been open source, do you think it would have gained the same success as it has?”

We’d be interested in hearing listener feedback on this question! What do you all think?

AWS

14:47 Welcome to AWS Storage Day 2023!

  • The fifth annual Storage Day took place on the 9th, after our editorial cutoff.
  • There is a replay available if you want to sit through it, but we only care about the announcements, so let’s get into it!
  • Generative AI/ML was the big theme of the day – color us surprised.  
  • They want to highlight that EBS has just turned 15 years old. It handles more than 100 trillion I/O operations daily, and over 390 million EBS volumes are created every day, which is just an incredible number.
  • On the new M7i instances you can attach up to 128 EBS volumes per instance, up from 28 on previous generations.

16:55📢 Ryan – “EBS was really one of the key foundational things for really taking advantage of having an elastic workload or having a self-healing workload and anything attached to a server where you could operate it and operate your data as its own thing and move it around. Like it’s a big, big advancement over what you could do in the data center.”

17:20📢 Jonathan- “Yeah, I feel like they’ve still missed an opportunity there. Getting the data off the host themselves and off of SSDs or disks on those instances and using instance storage, that was great because now if a machine goes down, you don’t lose all your stuff, but they still don’t support live migration of VMs between hosts. And EBS is the key for doing that, but they’ve never enabled that functionality.”

And now, onto the Storage Day Goodies! 

19:15 Mountpoint for Amazon S3 – Generally Available and Ready for Production Workloads

  • Mountpoint for S3 (or AWS finally agreeing that S3FUSE was a thing) is a new open source file client that delivers high-throughput access and lowers compute costs for data lakes on Amazon S3.
  • Mountpoint for S3 is a file client that translates local file system API calls to S3 object API calls.  
  • Mountpoint supports basic file operations and can read files up to 5 TB in size.
    • It *can* list and read existing files and create new ones  
    • It *cannot* modify existing files or delete directories, and it does not support symbolic links or file locking. 
  • Mountpoint works with all S3 storage classes – see the sketch below for what using it looks like in practice.
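
Because Mountpoint exposes a bucket as a local file system, existing file-based code can read objects without any SDK calls. Here’s a minimal Python sketch of the behavior described above; the bucket name, mount path, and file names are all illustrative, and the final append is expected to fail given the no-modification limitation.

```python
# A minimal sketch, assuming the bucket is already mounted with the mount-s3 CLI,
# e.g.:  mount-s3 my-example-bucket /mnt/s3   (bucket name and paths are illustrative)
from pathlib import Path

MOUNT = Path("/mnt/s3")  # hypothetical mount point

# Listing and reading existing objects works like any local file system read.
for entry in sorted(MOUNT.glob("logs/*.json")):
    print(entry.name, entry.stat().st_size)

# Writing a brand-new object sequentially is supported...
with open(MOUNT / "new-report.txt", "w") as f:
    f.write("aggregated results\n")

# ...but modifying an existing object in place is not, per the limitations above.
try:
    with open(MOUNT / "new-report.txt", "a") as f:
        f.write("this append should be rejected\n")
except OSError as err:
    print("in-place modification not supported:", err)
```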

20:23 New – Improve Amazon S3 Glacier Flexible Restore Time By Up To 85% Using Standard Retrieval Tier and S3 Batch Operations

  • S3 Glacier Flexible Retrieval now improves data restore time by up to 85%, at no additional cost.
  • The faster restores automatically apply to the Standard retrieval tier when using S3 Batch Operations.
  • These jobs begin restoring objects within minutes, so you can process restored data sooner.
  • Using S3 Batch Operations, you can restore archived data at scale by providing a manifest of objects and specifying the retrieval tier – the per-object call it issues is sketched below.
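
To make the mechanics concrete, here’s a minimal boto3 sketch of the kind of restore request a Batch Operations job issues for each object in the manifest, using the Standard retrieval tier the speed-up applies to. The bucket and key are hypothetical; at scale you’d hand a manifest to S3 Batch Operations rather than loop over objects yourself.

```python
# A minimal sketch of the per-object restore request behind an S3 Batch Operations
# restore job; bucket and key names are illustrative.
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="my-archive-bucket",           # hypothetical bucket
    Key="2019/backup-0001.tar.gz",        # hypothetical key from the manifest
    RestoreRequest={
        "Days": 7,                        # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Standard"},  # the tier that got the speed-up
    },
)
```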

21:28 📢 Ryan – “This just proves – I think – my theory that Glacier is just all their older EBS hardware and they’re just cycling it through. And so now they’ve moved from spinners to SSDs. I’m certain of it.”

22:22 New — File Release for Amazon FSx for Lustre

  • Yes, Lustre supports files. This is something different – and it’s actually pretty neat. 
  • Amazon FSx for Lustre provides fully managed shared storage with the scalability and high performance of the open source Lustre file system to support your Linux-based workloads.
  • At Storage Day they announced file release for FSx for Lustre. This feature helps you manage your data lifecycle by releasing file data that has already been synchronized with S3.
  • File release frees up storage space so that you can continue writing new data to the file system while retaining on-demand access to released files through FSx for Lustre’s lazy loading from S3.
  • This has the potential to be extremely valuable for machine learning workloads – see the sketch after this list.
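
Under the hood this is exposed as a new data repository task type. Here’s a hedged boto3 sketch of kicking one off; the task type and ReleaseConfiguration shape are our reading of the announcement, so treat them as assumptions and check the FSx API docs. The file system ID and path are illustrative.

```python
# A hedged sketch of starting a file release task; IDs and paths are illustrative,
# and the task type / ReleaseConfiguration shape should be verified against the docs.
import boto3

fsx = boto3.client("fsx")

fsx.create_data_repository_task(
    FileSystemId="fs-0123456789abcdef0",      # hypothetical file system ID
    Type="RELEASE_DATA_FROM_FILESYSTEM",      # the new file release task type
    Paths=["training-data/"],                  # only release data under this path
    Report={"Enabled": False},                 # skip the completion report for brevity
    ReleaseConfiguration={
        # Only release files that are synced to S3 and haven't been read recently.
        "DurationSinceLastAccess": {"Unit": "DAYS", "Value": 30},
    },
)
```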

25:18 Announcing AWS Backup logically air-gapped vault (Preview)     

  • AWS Backup is announcing, in preview, the logically air-gapped vault: a new type of AWS Backup vault that allows secure sharing of backups across accounts and organizations, and supports direct restore to help reduce recovery times after a data loss event.
  • AWS Backup is a fully managed service that centralizes and automates data protection across AWS services and hybrid workloads.
  • This is a TERRIBLE name, but it really does a lot of work, so we’re not mad at it. A rough sketch of creating one is below.
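
For the curious, here’s a rough boto3 sketch of what creating one of these vaults might look like. The feature is in preview, so the exact call name and required retention parameters are assumptions on our part, and the vault name and retention values are illustrative.

```python
# A hedged sketch of creating a logically air-gapped vault; the call shape and
# retention requirements are assumptions based on the preview announcement.
import boto3

backup = boto3.client("backup")

backup.create_logically_air_gapped_backup_vault(
    BackupVaultName="ransomware-recovery-vault",  # hypothetical vault name
    MinRetentionDays=7,    # vault-lock-style retention floor
    MaxRetentionDays=35,   # and ceiling, after which recovery points can expire
)
```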

26:49 📢 Justin – “I’m a little annoyed though that this took so long, because ransomware is not new. I mean, we’ve been talking about ransomware risks in Amazon for three or four years now, maybe even longer, maybe six. And I do remember there was a Magic Quadrant that came out recently that actually dinged them for not having a solid answer for ransomware, and now all of a sudden they have this… we’ve all been telling them, all over the market, you know, as cloud practitioners, that this is something we need to meet compliance requirements. Then why did it take Gartner to get there? So that part annoys me just a little bit.”

28:14📢 Jonathan – “So if you encrypt your data in the vault, where do you store the keys securely so that the keys can’t be compromised or attacked or corrupted? Because I think that becomes the next problem down the line. So great, we’ve got the backups and they’re encrypted because that’s best practice. But now we’ve got these keys and we need to also keep someplace safe. And I think attacks on encryption keys is probably going to be the next biggest sort of destructive power against enterprise. Cause if you’ve got all encrypted backups and you lose the keys, you’ve got no encrypted backups.”

29:27 A few other items we won’t talk about:

  • Power ML research and big data analytics with EFS
  • Multi-AZ file systems on FSx for OpenZFS
  • Higher throughput capacity levels for FSx for Windows File Server
  • Copy data to and from other clouds with AWS DataSync

30:51 Network Load Balancer now supports security groups

  • NLBs now support security groups, enabling you to filter the traffic that your NLB accepts and forwards to your applications.
  • This was one of the most confusing things to learn when implementing NLBs, and we’re so glad it now aligns with the pattern for all other load balancers. A quick sketch of attaching one at creation time is below.
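
Here’s a minimal boto3 sketch of creating an NLB with a security group attached at creation time, using the same create_load_balancer call ALBs have always used; the subnet and security group IDs are placeholders.

```python
# A minimal sketch of creating an NLB with a security group; all IDs are illustrative.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_load_balancer(
    Name="app-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111bbbb22222", "subnet-0ccc3333dddd44444"],
    SecurityGroups=["sg-0123456789abcdef0"],  # filter what traffic the NLB will accept
)
```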

31:48📢 Ryan – “Yeah. I mean, the poor networking team that had to expand the public subnets in a rush, right? Because the first thing you do is deploy your server into a private subnet and realize you can’t actually get to, can’t actually have the security group be the source IP. And it just turned into chaos real fast trying to.”

34:45 Amazon EC2 M7a General Purpose Instances Powered by 4th Gen AMD EPYC Processors 

  • Were you excited about those M7i instances we talked about a few weeks ago? Were you perhaps thinking to yourself, “man, I wish I had an AMD version of that?” Well it’s GOOD NEWS! 
  • A few weeks ago Amazon announced the M7i instances, and now they’re back with the M7a instances powered by 4th Gen AMD EPYC (Genoa) processors with a maximum frequency of 3.7 GHz, which offer up to 50 percent higher performance compared to M6a instances.
  • M7a instances support AVX-512, Vector Neural Network Instructions (VNNI), and Brain Floating Point (bfloat16). They also support DDR5 memory, which enables high-speed access to data in memory and delivers 2.25 times more memory bandwidth.
  • Configurations range from m7a.medium (1 vCPU / 4 GiB of memory) up to m7a.48xlarge (192 vCPUs / 768 GiB of memory).
  • You can take a look at the pricing here. It’s pricey. Be aware. A minimal launch example is sketched below.
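
If you want to kick the tires, here’s a minimal boto3 sketch of launching one; the AMI ID and key pair name are placeholders.

```python
# A minimal launch sketch; the AMI ID and key pair name are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
    InstanceType="m7a.xlarge",        # 4 vCPUs / 16 GiB, DDR5-backed
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair
)
```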

GCP

37:37 How Google is Planning to Beat OpenAI (Article – Subscription required)

  • In a prior episode we talked about Google merging their two large artificial intelligence teams – with distinct cultures and code – to catch up to (and surpass) OpenAI and rivals.
  • This effort is culminating in the release of large machine-learning models this fall.
  • The models, known as Gemini, are expected to give Google the ability to build products its competitors can’t, according to a person involved with Gemini’s development.
  • OpenAI’s GPT-4 can understand and produce conversational text, but Gemini is expected to go beyond that, combining the text capabilities of LLMs like GPT-4 with the ability to create AI images from a text description, similar to AI image generators like Midjourney and Stable Diffusion.
  • It may also be able to analyze charts, create graphics from text descriptions, and control software using text or voice commands.
  • Google is planning on having Gemini power its Bard chatbot, Google Docs, and Slides.
  • Google will charge app developers for access to Gemini through its Google Cloud product.
  • “The big question that I think everyone has asked for the last nine months is, ‘When will someone even look like they can catch up to OpenAI?’” says James Cham, an AI startup investor at Bloomberg Beta. “This is going to be the first indication that someone can compete in a legitimate way with GPT-4.”
  • Google is using its biggest advantage – YouTube and its large corpus of YouTube video transcripts – and it could also integrate video and audio into the model, giving it the multimodal capabilities many researchers believe will be the next frontier in AI.

39:24 📢 Jonathan – “You know, first to press release is not always first to market or first to success, for sure. And so Google announcing that they’re working on this amazing thing, that’s great. You can talk about it all you like. Pretty sure OpenAI are already working on this. They’ve already published models for text and audio and 3D objects. And they’re working on video, all kinds of things. Integrating those into a single model, that will be awesome. That’s what Google kind of… talking about doing here is having a multimedia evolution of large language models or generative AI. I don’t think they’re going to beat OpenAI to it unless OpenAI ends up going out of business because they’re sued and lose in the courts, and that’s a huge risk right now for them.”

40:14 📢 Ryan – “It’s interesting. Yeah. Cause you know… I always felt that AI was Google’s fight to lose, but they weren’t first to market, and in doing so OpenAI has taken all the risk and all the weird legal hurdles. And then Google has the advantage of all this data on the backend.”

50:12 Google launches Pricing API to help enterprises optimize cloud costs 

  • Google has launched a new pricing API that will help enterprises optimize their cloud costs. The API will provide businesses with real-time visibility into their cloud usage and costs, and will allow them to set budgets and alerts. The API is also designed to help businesses identify and eliminate waste in their cloud usage.
  • Let’s be real. Setting budgets doesn’t save me money. 
  • Alerts don’t necessarily save you money. 
  • The pricing API is part of Google’s Cloud Billing service, which provides businesses with tools to manage their cloud costs. Cloud Billing includes a number of features, such as usage reports, budget alerts, and cost allocation.
  • The Pricing API is now available in beta; a quick sketch of pulling SKU prices programmatically follows below.
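
The article doesn’t spell out the new API’s surface, so as an adjacent illustration here’s a hedged Python sketch against the existing Cloud Billing Catalog API, which already exposes SKU-level pricing programmatically; the new Pricing API may look different. It assumes google-api-python-client and application default credentials with access to the Cloud Billing API.

```python
# A hedged sketch using the existing Cloud Billing Catalog API (the new Pricing API
# may expose a different surface). Needs google-api-python-client and default creds.
from googleapiclient.discovery import build

billing = build("cloudbilling", "v1")

# List billable services, then pull SKUs (which carry pricing info) for one of them.
services = billing.services().list().execute().get("services", [])
compute = next(s for s in services if s["displayName"] == "Compute Engine")

skus = billing.services().skus().list(parent=compute["name"]).execute()
for sku in skus.get("skus", [])[:5]:
    expr = sku["pricingInfo"][0]["pricingExpression"]
    print(sku["description"], "-", expr["usageUnitDescription"])
```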

51:38📢 Ryan – “I mean, this is the response to the age old problem, right? The CFO wants to save money. Everyone else in the business wants to empower developers to move faster. Right, and it’s sort of like, how do you reconcile those two worlds? So, I mean, these APIs, yes, setting in budgets and stuff via APIs, but what it really does is empower approval workflows so that communication is happening about money being spent. And that’s really the value in these things. And so, you know, like you set a budget and then you exceed that budget and that triggers a workflow of approvals. And then you can automatically update that budget to not block the business.”

Azure

Oracle

Continuing our Cloud Journey Series Talks

After Show

Closing

And that is the week in the cloud! We would like to thank our sponsor, Foghorn Consulting. Check out our website, the home of The Cloud Pod, where you can join our newsletter or Slack team, send feedback, or ask questions at thecloudpod.net – or tweet at us with the hashtag #thecloudpod.
