As we embark on another holiday season in the United States, we are being told to start our holiday shopping even earlier this year to avoid shipping delays. These slowdowns stem from a number of factors, including container shortages, Covid-19 outbreaks that backlogged ports, and a dearth of truck drivers and warehouse workers. Even without the shortages and slowdowns, retailers are in for a long holiday season, with sales predicted to grow by 7%. And, as many of us do our shopping online, that means a big Cyber Monday this year.
With so many aspects of the holiday shopping experience outside of retailers’ control (delays, supply issues, etc.), one thing they can control is the online shopping experience for customers. By adopting a set of policy guardrails to mitigate risk and operational error within their applications, retailers can offer their customers a faster, more flexible and secure shopping experience.
Three critical components of an application for retailers
There are three critical components of applications that will lead to a successful holiday season for retail vendors (Target, Amazon, Best Buy, etc.)—and all of them tie back to a common theme: be ready for heavy online ordering. All gift-giving holiday shoppers, whether they’re Costco deal hunters looking to replace the movie theater with that perfect 8K 86” TV, comfort-seekers after a matching family pajama set, or anyone trying to get through the season with a beer or wine subscription for their in-laws, are likely to avoid risk, crowds and hassle by ordering online. So, what are those three critical components that will make the customer experience go off without a hitch? A reliable cloud-native architecture; well-managed and monitored microservices to ensure performance; and clear, tested and proven authorization guardrails to control access to it all.
That’s a lot of industry jargon — here’s what it really means:
Cloud-native architecture simply means adopting tools and services that are purpose-built to run in a cloud environment, like Amazon Web Services, Google Cloud Platform or Microsoft Azure. Consumers often think of the cloud as online storage or turnkey SaaS apps, but cloud-native architecture is the set of components, software and tools that developers use to build those apps. Cloud-native applications provide the scalability and flexibility to meet spikes in consumer demand, as well as to swap in new features, offers or services to address fast-moving promotions—all of which will be critical this holiday season!
Cloud-native also means “software-defined”—cloud developers have the power to use software to control all the infrastructure that makes apps run, which includes storage, compute and network. With the right software-defined architecture, apps can scale, accept more traffic and ultimately support more customers, without outages, downtime or errors. After all, an app that goes down during the middle of a holiday sale makes for very unhappy customers!
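To make “software-defined” concrete, here is a minimal sketch of the idea that scaling decisions are just code: a proportional autoscaling rule of the kind used by systems like the Kubernetes Horizontal Pod Autoscaler. The thresholds and numbers are made-up examples, not production guidance.

```python
import math

# Illustrative sketch: "software-defined" infrastructure means scaling
# decisions are expressed in software. Values here are hypothetical.
def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, max_replicas: int = 50) -> int:
    """Proportional autoscaling: scale replicas toward a target CPU level."""
    wanted = math.ceil(current * cpu_utilization / target)
    return max(1, min(wanted, max_replicas))

print(desired_replicas(10, 0.9))  # traffic spike -> scale out to 15
print(desired_replicas(10, 0.3))  # quiet period -> scale in to 5
```

The point is not the arithmetic, but that capacity changes are driven by code reacting to demand, with no human provisioning servers during the Cyber Monday rush.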
Within that cloud-native architecture, there has to be a way to orchestrate (yes, like the symphony conductor) all the components of applications so they are performant and available.
Typically, a microservices architecture consists of multiple microservices, each of which performs a specific operational task — like different instrument sections in an orchestra. Each microservice should communicate with the others in just the right ways to make the application work as planned. Each service, data store and deployment, however, also has an independent lifecycle and can be built with whatever programming language the developer prefers. This is perfect, and by design—you don’t want trombones in your flute section, and you don’t want your clarinets trying to play the music for the percussion players!
But where this gets tricky is in managing all of those microservices: the network communication and the policy that controls what can talk to what, how each service can “play” and when multiple services need to play together. (See, it really is an orchestra!) Often, this is where developer teams leverage a “service mesh” to control all service-to-service communication.
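At its core, “what can talk to what” is an allow-list of service pairs. Here is a minimal sketch of that idea in plain Python; the service names are hypothetical examples, not from any real deployment.

```python
# Minimal sketch of a mesh-style policy: which service-to-service
# calls are permitted. Service names are hypothetical.
ALLOWED_CALLS = {
    ("frontend", "checkout"),
    ("checkout", "inventory"),
    ("checkout", "payments"),
}

def is_allowed(source: str, destination: str) -> bool:
    """Return True only if the policy permits this call."""
    return (source, destination) in ALLOWED_CALLS

print(is_allowed("checkout", "payments"))   # permitted call
print(is_allowed("frontend", "payments"))   # denied: frontend must go via checkout
```

A real service mesh enforces rules like this at the network layer for every request, but the underlying decision is just this kind of lookup.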
Now that the cloud-native architecture and the microservices architecture (including that service mesh) are in place, the last critical component is authorization.
Since each application is broken down into individual microservices, organizations need a way to incorporate dynamic context into the policy that governs how, when and why a service mesh should allow or deny traffic amongst the microservices. This becomes tricky during Cyber Monday (or now all of the holiday season) when vendors run applications that need to make decisions at a rate that no human, and indeed, no service mesh, can make. Organizations will need a way to make sure that any context—including dynamic customer data, the type of information being requested, the software APIs making the requests, and much more—can be easily incorporated into the policy that governs what can talk to what, all without custom code or app changes… and all at incredibly high speed and scale.
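To illustrate what “dynamic context in the policy” means, here is a hedged sketch of a context-aware authorization decision, in the spirit of an OPA-style policy but written as plain Python. The field names and rules are assumptions invented for the example, not Styra DAS or OPA syntax.

```python
# Sketch: an authorization decision that uses dynamic request context.
# Field names ("source", "destination", "session_verified") are hypothetical.
def authorize(request: dict) -> bool:
    """Allow or deny a service-to-service call based on context."""
    if request["destination"] == "payments":
        # Only known internal callers may reach the payments service.
        if request["source"] not in {"checkout", "refunds"}:
            return False
        # Payment calls must also carry a verified customer session.
        if not request.get("session_verified", False):
            return False
    return True

print(authorize({"source": "checkout", "destination": "payments",
                 "session_verified": True}))   # allowed
print(authorize({"source": "checkout", "destination": "payments"}))  # denied
```

The value of a dedicated policy engine is that rules like these are evaluated in milliseconds for every request, at a rate no human approval process could match.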
Using the service mesh itself to decide every interaction between services would be akin to having a conductor tell each player in an orchestra when and how to play every single note. There’s no way a conductor could make it around the whole orchestra at once, let alone all the thousands of times notes need to be played—it would be chaos. Instead, the music determines the notes, and the conductor interprets that music in real time. The analogy breaks down a bit here, but the real-time nature is similar: with the right type of authorization policy governing how the mesh, the microservices and even the software-defined infrastructure work, we get beautiful music!
In practical terms, we get a second benefit: offloading decisions to a dedicated policy service not only speeds decision-making, it also means that the decisions are treated just like any other software—so development and operations teams not only ensure apps work as intended, but they can change one “band section” without affecting all the others! What does that mean? When your policy is separated from your microservices or infrastructure, you can add new application code, features or updates without changing the governing policy. Likewise, you can swap in new rules for how things should work, or be secured, without affecting the app components or infrastructure. Need a better violin section? Swap it in! Need a new piece of music that everyone plays? No problem! Need a better analogy? Suggest one to us on Twitter! 😉
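The decoupling described above can be sketched as policy-as-data: the rules live outside the application code, so either side can change without touching the other. The rule shape and service names below are hypothetical examples.

```python
# Sketch of policy-as-data: rules are data the enforcement code reads,
# so rules and app code evolve independently. Names are hypothetical.
RULES = [
    {"destination": "inventory", "allow_sources": ["checkout", "frontend"]},
    {"destination": "payments",  "allow_sources": ["checkout"]},
]

def allowed(source: str, destination: str, rules=RULES) -> bool:
    for rule in rules:
        if rule["destination"] == destination:
            return source in rule["allow_sources"]
    return False  # default-deny anything without a rule

# Swapping in new rules requires no change to allowed() or the services:
holiday_rules = RULES + [{"destination": "gift-wrap",
                          "allow_sources": ["checkout"]}]
print(allowed("checkout", "gift-wrap"))                # denied under old rules
print(allowed("checkout", "gift-wrap", holiday_rules)) # allowed under new rules
```

That is the “swap in a new violin section” move: the new ruleset replaces the old one while the enforcement code, and the services themselves, stay untouched.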
All these cloud app sub-components can seem daunting—in large part because they are! The scale, speed and complexity of modern cloud apps are exponentially higher than those of the traditional, on-premises applications of even just a few years ago. But with policy in place to control the implementation and management of business logic, as well as to ensure that everything works in accordance with corporate governance policy, we can simplify app deployments. Styra Declarative Authorization Service (DAS) and Open Policy Agent (OPA) provide guardrails to manage authorization policy for both apps and infrastructure—which in turn simplifies the services themselves. And since authorization logic is independent of app or mesh code, it is much easier to deploy, test and monitor. Even better, Styra DAS policy decisions are enforced locally, reducing latency and availability concerns—which is absolutely critical during the holidays.
We can’t promise to help with shipping delays, an overabundance of pumpkin spice, or bad analogies, but with authorization in place across cloud-native infrastructure, microservices, and applications themselves, hopefully we can all count on a more reliable consumer shopping experience this year, and moving forward!
Looking to enforce authorization across the cloud-native stack? Request a Styra DAS demo today!