253: Oracle Autonomous Database is the OG Dad Joke

Welcome to episode 253 of the Cloud Pod podcast – where the forecast is always cloudy! Justin, Ryan, and Jonathan are your hosts this week as we discuss data centers, OCI coming in hot (and potentially underwater?) in Kenya, stateful containers, and Oracle’s new Globally Distributed Autonomous Database of many dollars. Sit back and enjoy the show!

Titles we almost went with this week:

  • 😂The Cloud Pod: Transitioning to SSPL – Sharply Satirical Podcast Laughs!
  • 🗺️The Data Centers of Loudoun County
  • 🍴The Forks of Redis were Speedb
  • 🛒AWS, I’d Like to Make a Return, Please
  • 🫙See…Stateful Containers Are a Thing
  • 😮‍💨Azure Whispers Sweet Nothings to You
  • 😎I’m a Hip OG-DAD 
  • 🤑Legacy Vendor plus Legacy Vendor = Profit $$
  • 🍷Wine Vendors >Legacy Vendors 
  • 🧑‍🍼I’m Not a Regular Dad, I’m an OG Dad

A big thanks to this week’s sponsor:

We’re sponsorless this week! Interested in sponsoring us and having access to a specialized and targeted market? We’d love to talk to you. Send us an email or hit us up on our Slack Channel. 

Follow Up

02:25  Microsoft Agreed to Pay Inflection $650 Million While Hiring Its Staff 

  • Listener Note: Paywalled article 
  • Last week, we talked about Microsoft hiring Inflection co-founder Mustafa Suleyman and its chief scientist, as well as most of the 70-person staff. 
  • Inflection had previously raised $1.5B, so the move seemed strange as part of its shift to an “AI studio” – a company that helps others train AI models. 
  • Now it has been revealed that Microsoft agreed to pay a $620M licensing fee, plus $30M to waive any legal rights related to the mass hiring. Microsoft also renegotiated a $140M line of credit that had helped Inflection finance its operations and pay for Microsoft services. 

03:22 📢 Justin – “…that explains the mystery that we talked about last week for those who were paying attention.”

General News 

05:17 Redis switches licenses, acquires Speedb to go beyond its core in-memory database 

  • Redis, one of the most popular in-memory data stores, is switching away from its open source three-clause BSD license. 
  • Instead, it is adopting a dual-licensing model: the Redis Source Available License (RSALv2) and the Server Side Public License (SSPLv1).  
    • Under the new license, cloud service providers hosting Redis will need to enter into a commercial agreement with Redis. The first company to do so was Microsoft. 
  • Redis also announced the acquisition of Speedb (pronounced “speedy-bee”) to take it beyond the in-memory space. 
  • This isn’t the first time Redis has changed its licensing model. 
    • In 2018 and 2019, it changed the way it licensed Redis Modules under the Redis Source Available License v1. 
  • Redis CEO Rowan Trollope said they switched for the same reasons as everyone else who has adopted the SSPL: an acquisition like Speedb is a big investment, and if cloud service providers can just pick it up and ship it without paying anything, that’s a problem for Redis’s long-term viability. 
  • Trollope – who joined the company a year ago – said customers he spoke to about the change were not concerned, even though, per the Open Source Initiative’s definition, the new license isn’t technically open source.  
    • He also said he would not be surprised if Amazon sponsored a fork of Redis. 
  • With the change, Redis is also considering consolidating Redis Stack and Redis Community Edition into a single distribution. 
  • Speedb is a RocksDB-compatible key-value storage engine, which may seem like an odd acquisition for Redis.  
  • An all-in-memory play made sense when Redis started, but NVMe drives and their higher transfer rates open up a middle ground that combines fast drives with in-memory storage – something akin to a very large cache. 
  • Redis had previously been planning to IPO, and still plans to once the market window reopens. 
  • We would like to note that there are already some forks of Redis, the most popular being KeyDB, which has been part of Snap Inc. since May 2022.  
    • KeyDB is a high-performance fork of Redis with a focus on multi-threading, memory efficiency, and high throughput. 
  • A copyleft fork of Redis was also announced by Drew DeVault, who blogged about it on March 22nd, after the announcement of the adoption of the SSPL.  
  • Drew is known as a bit of a dick in the open source community, but he is also an uncompromising advocate for Free and Open Source Software. 
  • Madelyn Olson and some other former Redis contributors have also created a fork, currently with a placeholder name. Madelyn happens to be employed by AWS, but says the fork is not sponsored by them. 
  • Microsoft, the first cloud provider to license the new Redis, of course wrote a blog post.  
    • Through the partnership, they will continue to offer integrated solutions like Azure Cache for Redis, ensuring they have access to the latest Redis features and capabilities. 
    • There will be no interruption to the Azure Cache for Redis, Redis Enterprise, and Enterprise Flash services, and customers will receive timely updates.

08:36 📢 Jonathan – “I’m less bothered by Redis doing this than I think I have been about anybody else. Maybe I’m just kind of getting numb to it now a little bit. Maybe. I’m not sure what it is. I mean, I feel like there’s a key difference between something that works at runtime in an application or something that a cloud vendor would adopt and then sell as a service and something like Terraform. I think there’s some significant differences there. So I think the types of people who are using Redis at scale in production apps will want to pay for support.”

AI is Going Great – Or How ML Makes Money 

14:50 Sora: First Impressions

  • We don’t remember how much we covered of Sora – OpenAI’s model that takes text and turns it into video. 
    • As it wasn’t available to most people, we didn’t cover it in depth.
  • Now, OpenAI has a blog post with videos created by production companies and directors, so you can see what it’s possible to do. It’s a pretty cool concept. 
  • “Sora is at its most powerful when you’re not replicating the old but bringing to life new and impossible ideas we would have otherwise never had the opportunity to see.” – Paul Trillo, Director 

16:23 📢 Justin – “…there’s like seven or eight videos here, all very interesting and worth checking out if you are curious about what AI can do for video and why maybe the writers and the actors all struck, you know, had strikes about it, because it could be pretty compelling long-term.”

AWS

19:48 AWS announces a 7-day window to return Savings Plans 

  • Without a lot of fanfare, AWS is announcing that customers can now return savings plans within 7 days of purchase!
  • Savings plans are a flexible pricing model that can help you reduce your bill by up to 72% compared to On-Demand prices, in exchange for a one or three year hourly spend commitment. 
  • Now, if you accidentally screwed up, you can return it and, if needed, repurchase another plan that better matches your needs. 
  • I assumed I would have to open a support case, but it’s built right into the console: go to the Savings Plans menu in the Cost Management console, select Inventory, choose the plan, and pick “Return savings plan.” 
  • There are some restrictions: returns are quota-controlled, so you can’t do them regularly, and the savings plan must be in the active state – it can’t be pending. 
  • We really appreciate the opportunity to “undo” without having to talk to a sales rep. 
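To make the commitment mechanics concrete, here’s a minimal sketch of how an hourly savings-plan commitment plays out against on-demand usage. All rates and the discount are invented illustrative numbers, not actual AWS pricing:

```python
# Illustrative model of hourly Savings Plan billing (numbers are invented,
# not real AWS pricing). You pay the commitment every hour no matter what;
# usage beyond what the commitment covers is billed at on-demand rates.

def hourly_bill(on_demand_usage: float, commitment: float, discount: float) -> float:
    # A commitment at a given discount covers commitment / (1 - discount)
    # dollars' worth of on-demand usage.
    covered_capacity = commitment / (1 - discount)
    overflow = max(0.0, on_demand_usage - covered_capacity)
    return commitment + overflow

# A $0.70/hr commitment at a 30% discount covers $1.00/hr of on-demand usage.
print(round(hourly_bill(1.00, 0.70, 0.30), 2))  # 0.7 -- fully covered
print(round(hourly_bill(2.00, 0.70, 0.30), 2))  # 1.7 -- $1.00 spills to on-demand
print(round(hourly_bill(0.10, 0.70, 0.30), 2))  # 0.7 -- unused commitment still billed
```

The third case is exactly the fat-finger scenario the new seven-day return window rescues you from: over-commit, and you’d otherwise pay the full commitment every hour for one or three years.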

17:30 📢 Justin – “I mean, at one point you could resell these things on marketplaces and things like that. And then people were abusing it. And so Amazon took it away. But it would be nice to still have some of those capabilities and options and saying, hey, at the end of the day, it’s a commitment to Amazon.”

22:40 Improve the security of your software supply chain with Amazon CodeArtifact package group configuration 

  • Administrators of package repositories can manage the configuration of multiple packages in one single place with the new AWS CodeArtifact package group configuration capability. 
  • A package group allows you to define how packages are updated by internal developers or from upstream repositories. 
  • You can now allow or block internal developers to publish packages or allow or block upstream updates for a group of packages. 
  • Simple applications routinely include dozens of packages.  
  • To minimize the risk of supply chain attacks, some organizations manually vet the packages that are available in internal repositories and the developers who are authorized to update those packages.  
  • There are three ways a package can be updated in a repository: direct publishing by authorized developers, ingestion from external public repositories, and updates flowing from internal upstream repositories. 
  • Administrators previously had to manage the allow/block publish and origin controls package by package. Now they can define these three security parameters for a group of packages at once; packages are identified by their type, their namespace, and their name. This capability operates at the domain level, not the repository level.  

17:30 📢 Ryan – “This is definitely handy for those internal teams who have had to manage this, just because it’s not the end of the world, but it’s toil, having to iterate through and go through each layer and set the security settings. So this is helpful.”

26:46 Run large-scale simulations with AWS Batch multi-container jobs

  • AWS Batch is Amazon’s “fully managed” service that helps you run batch workloads across a range of AWS compute offerings.  
    • Traditionally, AWS Batch only allowed single-container jobs and required extra steps to merge all components into a monolithic container.  
    • It also did not allow separate sidecar containers – auxiliary containers that complement the main application by providing additional services like data logging. 
  • AWS Batch now offers multi-container jobs, making it easier and faster to run large-scale simulations in areas like autonomous vehicles and robotics.  
  • These workloads are usually divided between the simulation itself and the system under test (known as the agent) that interacts with the simulation.  
  • These two components are often developed and optimized by different teams.   
  • With this capability you get multiple containers per job, you get advanced scaling, scheduling and cost optimization provided by AWS Batch, and you can use modular containers representing different components like 3D environments, robot sensors, or monitoring sidecars.  
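As a rough sketch of what this looks like in practice, a multi-container job definition nests an ECS-style container list under `ecsProperties`. The field names below follow the announcement as we read it, and the image names are hypothetical – check the AWS Batch docs before copying:

```json
{
  "jobDefinitionName": "av-simulation",
  "type": "container",
  "ecsProperties": {
    "taskProperties": [
      {
        "containers": [
          {
            "name": "simulation",
            "image": "my-registry/simulator:latest",
            "essential": true,
            "resourceRequirements": [
              { "type": "VCPU", "value": "4" },
              { "type": "MEMORY", "value": "8192" }
            ]
          },
          {
            "name": "agent-under-test",
            "image": "my-registry/agent:latest",
            "essential": true,
            "resourceRequirements": [
              { "type": "VCPU", "value": "2" },
              { "type": "MEMORY", "value": "4096" }
            ]
          },
          {
            "name": "telemetry-sidecar",
            "image": "my-registry/logger:latest",
            "essential": false
          }
        ]
      }
    ]
  }
}
```

The point is that the simulation team and the agent team can each own and version their own image, with a non-essential sidecar handling logging, instead of merging everything into one monolithic container.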

28:02 📢 Ryan – “I’ve always thought Batch was a tool that was waiting for a problem to solve.”

28:22 📢 Jonathan – “As a tool who used Batch once… it was a tool, yeah. I mean, it does a job. I wouldn’t say it’s fully managed. You pretty much have to bring a lot of your own management into it. But the simplicity of it for doing what it does do is really good. And to add the complexity of side cars and all this other stuff, I just don’t think it’s the right choice to add these extra features.”

GCP

30:55 Google Cloud VMware Engine supercharged with Google Cloud NetApp Volumes

  • Google now allows you to use Google Cloud VMware Engine with Google Cloud NetApp Volumes.
  • The combination reduces operational overhead and lowers the cost of migrating and managing VMware applications. Customers can extend their existing investments in VMware using the same tools and processes they already use, while benefiting from Google Cloud’s planet scale. 
  • NetApp Volumes are fully certified and supported as an NFS datastore for Google Cloud VMware Engine. 

31:48 📢 Justin – “Well, really the problem with NFS in this model is the multicast nature of NFS and the amount of network traffic that it puts out there that you don’t really… that’s the bigger problem with running NFS for VMware at scales. You run into a lot of network chatter.”

32:11 Introducing stronger default Org Policies for our customers

  • Google is updating the default org policies under its secure-by-default organization resources initiative: potentially insecure postures and outcomes are addressed with a bundle of policies that are enforced as soon as a new organization resource is created. 
  • Existing orgs are not impacted by the change. 
  • Some of the new stronger defaults:
    • IAM
      • Disable service account creation
      • Disable automatic IAM grants for default service accounts
      • Disable service account key upload
    • Storage Constraints
      • Uniform bucket-level access – this constraint prevents Cloud Storage buckets from using per-object ACLs to provide access, enforcing consistency for access management and auditing. 
    • Essential Contacts Constraint
      • A new default policy constraint for Essential Contacts, limiting contacts to only the allowed managed user identities. 
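Since existing orgs aren’t auto-enrolled, the bullets above correspond to named constraints you can enforce yourself. Here’s a sketch of the v2 org-policy spec you’d pass to `gcloud org-policies set-policy` – the org ID is a placeholder, and you should verify the constraint names against the current docs:

```yaml
# Enforce one of the new-default constraints on an existing organization.
# Apply with: gcloud org-policies set-policy policy.yaml
name: organizations/123456789012/policies/iam.disableServiceAccountKeyUpload
spec:
  rules:
    - enforce: true
# The other constraints in the bundle follow the same shape, e.g.:
#   iam.disableServiceAccountCreation
#   iam.automaticIamGrantsForDefaultServiceAccounts
#   storage.uniformBucketLevelAccess
```

Running one such policy per constraint gets an existing org to roughly the same posture a newly created org now starts with.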

33:22 📢 Ryan – “Subscribing to an API shouldn’t mean that you create a principal identity that has full admin access to that service. Like it just doesn’t make any sense to me why you would do that. So this is, these are good, good saying things to have. And if you have an existing org. I recommend going through and checking that you have some of these on. Because it’s uniform bucket level access. Everyone gets burned by that.”

36:13 Anthropic’s Claude 3 Sonnet and Claude 3 Haiku are now generally available on Vertex AI 

  • Claude 3 Sonnet and Haiku are now GA on Vertex AI.  We’ve talked about Haiku and Sonnet at length… we’ll be back when someone has Opus. 
  • Google promises Opus is a few weeks away. 
  • Jonathan predicts Google will get there first – because they have more money.  

39:24 5 ways Google’s data centers support Loudoun County    

  • Neighbors of cloud data centers have been raising the alarm about these power-hungry, heavily air-conditioned facilities for the last few years.  
  • Google has several in Loudoun County, Virginia, and Google and Deloitte have released a report evaluating their progress in driving positive economic, social, and environmental impacts. 
  • Five highlights from Google:
    • An economic engine for Loudoun County, adding $1.1 billion annually to the county’s GDP. Google’s operations created 3,600 jobs, including 400 direct jobs, in 2022, and the tax revenue has helped support the county’s schools, social services, and more. 
    • Social advancement through community support – $2.4 million in grants and STEM education programs.
    • Training tomorrow’s workforce – Google partnered with 16 educational institutions to provide certificates that can be completed within 3-6 months. 
    • Powering a cleaner future by delivering three times the computing power using the same amount of power as 5 years ago. They also announced a 10-year agreement to buy power from AES to supply 24/7 carbon-free energy. 
    • Climate-conscious water stewards: a climate-conscious approach to data center cooling. 
  • The report was clearly paid for by Google, so nothing critical is mentioned in it. I think I’d like to see a third-party analysis. 
  • For the record: it’s mostly a noise complaint issue. 

41:42 Introducing Cloud Run volume mounts: connect your app to Cloud Storage or NFS

  • Cloud Run is Google’s fully managed container platform, running directly on top of Google’s scalable infrastructure to simplify developers’ lives and make it easier to build cloud-native applications. 
  • Each Cloud Run instance has access to its own local file system, but until now you couldn’t access shared data stored in a common file system – forcing developers to use complex hacks or look at other services to meet their needs. 
  • Google is now announcing volume mounts, in preview. 
  • With volume mounts, mounting a volume in a Cloud Run service or job is a single command. You can mount either a Cloud Storage bucket or an NFS share, like a Cloud Filestore instance. 
  • This allows your containers to access the storage bucket or file server content as if the files were local, using file system semantics for a familiar experience. 
  • Some limitations to be aware of: 
    • Cloud Run uses Cloud Storage FUSE for the volume mount. It does not provide concurrency control for multiple writes to the same file, and Cloud Storage FUSE is not fully POSIX compliant. 
    • For writing to an NFS volume, your container must run as root. Cloud Run does not support NFS locking; NFS volumes are automatically mounted in no-lock mode.
    • So yes, you can do this. But good luck to you. 
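For the Cloud Storage case, the preview exposes the mount in the service spec as a CSI volume backed by Cloud Storage FUSE. A minimal sketch – service, bucket, and image names are placeholders, and since this is a preview feature the exact fields may shift:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
  annotations:
    run.googleapis.com/launch-stage: BETA   # preview features require the beta launch stage
spec:
  template:
    spec:
      volumes:
        - name: shared-assets
          csi:
            driver: gcsfuse.run.googleapis.com   # Cloud Storage FUSE driver
            volumeAttributes:
              bucketName: my-shared-bucket
      containers:
        - image: us-docker.pkg.dev/my-project/app/app:latest
          volumeMounts:
            - name: shared-assets
              mountPath: /mnt/assets   # bucket objects appear as local files here
```

The “single command” path is the `--add-volume` / `--add-volume-mount` flags on `gcloud beta run services update`, which produce the same spec.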

43:13 📢 Ryan – “ I wish they would blunt the rough edges on this a little bit, just because everyone burns themselves with fuse drivers in the same way. And it’s so ugly because it’s painful in a way that equals data loss and in some cases unrecoverable errors. Especially, it works fine at smaller scales and then until it doesn’t. I hate problems like that when you have to troubleshoot them out.”

45:21 Take control of GKE scaling with new quota monitoring 

  • Managing the growth of your Kubernetes clusters within GKE just got easier with the recently introduced ability to directly monitor and set alerts for crucial scalability limits, providing deeper insight and control over your Kubernetes environment. 
  • Specific limits you can now keep track of: 
  • Etcd database size (GiB): Understand how much space your Kubernetes cluster state is consuming.
  • Nodes per cluster: Get proactive alerts on your cluster’s overall node capacity.
  • Nodes per node pool (all zones): Manage node distribution and limits across specific node pools.
  • Pods per cluster (GKE Standard / GKE Autopilot): Ensure you have the pod capacity to support your applications.
  • Containers per cluster (GKE Standard / GKE Autopilot): Prevent issues by understanding the maximum number of containers your cluster can support.

46:16 📢 Ryan – “ …these features are super, super cool for anyone who’s running sort of Kubernetes as a platform service for the rest of their business. Previously, before this, right, you’d hit all these same limitations, except for it’s hard and you can’t do anything about it, right, at least with quotas, you can sort of manage it and you can set them where you can relax them and sort of reevaluate. I know, you know, I’ve hit the etcd database size with, you know, rapidly scaling clusters really fast, right? And when that fails, it is spectacular.”

Azure

47:33 Study showcases how Microsoft Dev Box impacts developer productivity 

  • Microsoft Dev Box, a dev-optimized VDI offering, aims to transform today’s developer workstation.  
  • Traditional physical workstations are frequently hampered by complex environment setup, lost productivity from conflicting configurations, and a lack of scalability. 
  • With Microsoft Dev Box now available for over a year, Microsoft wanted to understand the benefits, so they hired research firm GigaOm to vet those findings, resulting in a blog post. 
  • “There has been little advancement in the administration, automation, or defining of highly customized VDI solutions…until now. [Microsoft Dev Box] delivers significant benefit over outfitting development teams with traditional laptops or VDI-served stations.”
  • GigaOm’s hands-on-testing further broke this takeaway down into three primary findings: 
    • Microsoft Dev Box significantly improved developer productivity and reduced configuration time and IT overhead compared to VDI solutions.
    • Developer typing experience felt as good as on a local machine—even over hotspot and public Wi-Fi.
    • A Visual Studio-equipped Microsoft Dev Box setup produced better performance with the sample Chromium code base than VDI or local clients.
  • Transforms the dev workstation experience
  • Accelerates dev workflows with project-based configurations
  • Maintains centralized security and management

50:20 📢 Jonathan – “Yeah, it’s interesting that it’s all about productivity though and not about the real reasons that people are moving to these VDIs for dev work and that’s really about supply chain.”

52:13 Microsoft and NVIDIA partnership continues to deliver on the promise of AI

  • Microsoft and NVIDIA are also partnering to bring the new Grace Blackwell 200 Superchips to Azure Cloud. 
  • Thanks Microsoft. 

52:22 Accelerate your productivity with the Whisper model in Azure AI now generally available  

  • OpenAI Whisper on Azure is now generally available.  
  • Whisper is a speech-to-text model from OpenAI that developers can use to transcribe audio files.
  • You can use the Whisper API in both the Azure OpenAI Service and Azure AI Speech service on production workloads, knowing that it is backed by Azure’s enterprise readiness promise. 
  • “By merging our call center expertise with tools like Whisper and a combination of LLMs, our product is proven to be 500X more scalable, 90X faster, and 20X more cost-effective than manual call reviews and enables third-party administrators, brokerages, and insurance companies to not only eliminate compliance risk; but also to significantly improve service and boost revenue. We are grateful for our partnership with Azure, which has been instrumental in our success, and we’re enthusiastic about continuing to leverage Whisper to create unprecedented outcomes for our customers.” Tyler Amundsen, CEO and Co-Founder, Lightbulb.AI

Note from shownote writer: Copywriters are still better (and funnier) than AI, especially me. 

54:44 Preview: New Features in Azure Container Storage

  • Everyone is getting into the container storage game, apparently. Azure Container Storage (in preview) is a fully managed, cost-efficient volume orchestration service built natively for Kubernetes.  
  • Azure Container Storage offers Azure Kubernetes Service-integrated block storage volumes for production-scale stateful container applications on Azure.  
  • By packing multiple persistent volumes into a single disk, Azure Container Storage helps you achieve better price performance.  
  • You can also attach more persistent volumes per node to reach new scale levels, and leverage locally attached ephemeral storage for extremely latency-sensitive and IOPS-intensive workloads.
  • Announced in preview at KubeCon NA 2023; at KubeCon Europe they announced some new capabilities, including:
    • Simplified volume management via a new ephemeral backing storage option, Temp SSD, to enhance efficiency for use cases like caching.
    • Reduced TCO through resource optimization, with updates to the AKS CLI installation process.
    • The ability to scale your backing storage up on-demand to meet your workload’s storage needs in a cost-efficient manner, without downtime.  
  • You can now make all your stateful container dreams come true! 

55:47📢 Ryan – “…as much as I’m against stateful data and containers, I’m always sort of curious to see if someone cracks it, because I do think that it’s not going to go away. People are always going to have workloads and use cases that were born of the server world and sort of have that shared model. And so if something can be written that’s performant and consistent, it really would be a bone. I mean, other than the fact that you would make me do SQL Server on it. But, you know, I do think that these things are, they’re getting better.”

56:50 Breaking Changes to Azure API Management Workspaces

  • I don’t really care about this that much… but I am interested in the fact that Azure is cool with breaking API changes on less than three months’ notice! 
  • If you use Azure API Management workspaces, just know they’re going to screw you up on June 14th; you can find more details in our show notes. 
  • But #1 You published what’s new on a breaking change, and you’re just messing with people writing automation. It’s a bad look, Azure. 
  • It’s annoying, and we don’t like it. 

59:11 General Availability: Automatic Scaling for App Service Web Apps

  • Azure App Service has launched its Automatic Scaling feature into GA. They received great feedback during the preview phase and have shipped several enhancements with the GA release as well.
    • Automatic Scaling is available for the Premium V2 and Premium V3 pricing tiers, and is supported for all types: Windows, Linux, and Windows Containers.
    • A new metric (Automatic Scaling Instance Count) is now available for web apps where Automatic Scaling is enabled. AutomaticScalingInstanceCount reports the number of virtual machines on which the app is running, including the pre-warmed instance if it is deployed.
  • In addition to the key capabilities released at Preview:
    • The App Service platform will automatically scale out the number of running instances of your application to keep up with the flow of incoming HTTP requests, and automatically scale in your application by reducing the number of running instances when incoming request traffic slows down.
    • Developers can define per web app scaling and control the minimum number of running instances per web app.
    • Developers can control the maximum number of instances that an underlying app service plan can scale out to. This ensures that connected resources like databases do not become a bottleneck once automatic scaling is triggered.
    • Enable or disable automatic scaling for existing app service plans, as well as apps within these plans.
    • Address cold start issues for your web apps with pre-warmed instances. These instances act as a buffer when scaling out your web apps.
    • Automatic scaling is billed on a per second basis and uses the existing Pv2 and Pv3 billing meters.
    • Pre-warmed instances are also charged on a per second basis using the existing Pv2 and Pv3 billing meters once it’s allocated for use by your web app.

1:00:32 📢 Jonathan – “Well, these novel new features like we actually scale this thing for you that we called managed service beforehand that’s uh it’s revolutionary technology.”

Oracle — The Globally Distributed Oracle Autonomous Database costs how much? 

1:00:54 Announcing the general availability of Oracle Globally Distributed Autonomous Database

  • Oracle has announced the GA of Oracle Globally Distributed Autonomous Database. This fully managed OCI service is available in datacenters around the world. 
  • Built-in, cutting edge capabilities redefine how enterprises manage and process distributed data to achieve the highest possible levels of scalability and availability and provide data sovereignty features. 
  • (Think Spanner but Oracle – and way more expensive)
  • Some of the key capabilities of this overly long-named service (OGDAD?):
    • High availability: it splits logical databases into multiple physical databases (shards) that are distributed across multiple datacenters, availability domains, or regions. Faults in one shard do not affect others, enhancing overall availability, and automatic replication of shards across domains or regions provides protection from outages. OGDAD runs on fault-tolerant Exadata infrastructure for “the highest possible availability.” (What, you’re not willing to put a number on that, Oracle?)
    • Horizontal scalability: you can add servers and associated database shards online, without interrupting database operations. Data and accesses are automatically redistributed to maintain a consistently balanced workload, scaling to multi-terabyte or multi-petabyte levels to address the requirements of the most demanding applications. Plus, it runs on Exadata, providing high levels of vertically scaled performance. 
    • Data sovereignty: orgs can specify where data is stored using a choice of customer-defined data placement policies. Updates are automatically inserted into database shards in the correct location based on these policies. 
    • Choice of data distribution methods: Globally Distributed Autonomous Database offers extensive control over how data is distributed across shards. Unlike other databases with limited methods, Oracle supports value-based, system-managed (hash), user-defined, duplicated, and partitioned distribution within shards, as well as flexible combinations. 
    • Autonomous management: the service brings the advanced, ML-driven capabilities of the Autonomous Database to distributed databases, with automatic database patching, security, tuning, and performance scaling within shards. It combines vertical and horizontal scalability to achieve optimum levels on demand.
    • AI: Autonomous Database Select AI is also supported, letting users access their distributed databases using LLM-enabled natural language queries without having to know how data is structured or where it’s located. 
    • Simple application development: OGDAD offers a unified logical database view to applications. Its cloud-native capabilities and support for Oracle’s rich feature set provide an ideal platform for modern applications. Automated and transparent data distribution and access simplify the development of distributed apps. 
    • All of this sounds really expensive. Per Oracle, it is priced based on the number of shards being used and the amount of database consumption on each shard. They say it’s simple and predictable. 
    • My calculator, using basic defaults, was already at $34K a month for 2 shards… so totally affordable, right? 


1:03:05 📢 Justin – “So you’re paying for the rack plus the database servers plus the storage just to get started. And then you layer on the OG dad on top of that.”

1:03:14 📢 Ryan – “I mean, I guess if you’re using Oracle database, you’re already sort of independently wealthy, money means nothing to you. And so, what’s an extra $34,000 among friends?”

1:04:13 Oracle Plans to Open a Public Cloud Region in Kenya 

  • To meet growing demand across Africa, OCI is planning to open a public cloud region in Nairobi, Kenya. 
  • Oracle will be taking advantage of Kenya’s renewable energy and digital infrastructure, including abundant submarine and national connectivity.
  • The investment underscores Oracle’s commitment to Africa and aims to help drive the digital transformation of the Kenyan government, public institutions, enterprises, startups, universities, and investors in Kenya and the continent. 
  • “Oracle’s intent to open a public cloud region in Nairobi will be a key component of Kenya’s Bottom up Economic Transformation Agenda initiative, which is focused on digital transformation, private sector development, agricultural transformation, housing development, and healthcare modernization,” said Eliud Owalo, cabinet secretary, Ministry of Information, Communications, and the Digital Economy, Kenya.
  • “We are delighted to extend our commitment to helping Kenya accelerate the digital transformation of its government and private sector,” said Scott Twaddle, senior vice president, Product and Industries, Oracle Cloud Infrastructure. “OCI is leveraged by governments and companies across the world as a scalable and secure platform for mission-critical workloads on which to drive innovation and transformation. We already have a strong business in Kenya, and the upcoming public cloud region in Nairobi represents a significant next step forward in helping support the country’s economic goals.”
  • We’re still stuck on the fact that they used “Oracle will be taking advantage of Kenya…”

1:04:38 📢 Justin – “Is it Wi-Fi enabled submarine or satellite enabled? Because the latency might be a killer. Yeah. So anyways, Oracle’s definitely not gonna be selling, I imagine, a lot of these Oracle OG dads to the Kenyan people, because I don’t know that there’s that much money in Kenya to pay for that, but they’re gonna be there.”

Closing

And that is the week in the cloud! Just a reminder – if you want to join us as a sponsor, let us know! Check out our website, the home of the Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with hashtag #TheCloudPod.
