It’s the year 2022, but you don’t need to look back far to see how much the public cloud has changed. From the beginning of the global pandemic in March of 2020 until today, much has changed in the technology landscape. What was once in person is now virtual, what was dine-in is now takeout or delivery, and much more. When the world hits an inflection point like this, technology generally shifts with it, and this most recent change was one for the ages.
Acceleration of cloud adoption driven by the pandemic
Many of my family members, friends, and others not in technology often ask me, “Who uses the public cloud?” My answer almost always shocks them: “Nearly every company you can imagine, and almost any application you use on a daily basis, is hosted in AWS (Amazon Web Services), GCP (Google Cloud Platform), or Microsoft Azure.” When needed, I also explain that the public cloud is on-demand computing and infrastructure services managed by a third party, shared among multiple organizations, and accessed over the public internet.
But why is this important given the recent changes toward remote living during a pandemic? The answer is simple, but the change itself is a complex one to adapt to and manage. The transition to remote has driven customer demand and changed the ways companies operate. There is an urgency to bring new applications to market to meet that demand, and as a result, adoption of cloud-native architectures is accelerating at a lightning pace. This demand means business is booming, but what are the repercussions associated with the change?
More users, more data, a greater emphasis on reliability, and an extremely competitive environment. If Zoom doesn’t work, I switch to Google Meet. If Lyft doesn’t work or is delayed, I use Uber, and so on.
The explosion of metrics data
The state of the public cloud is one filled with a new challenge: knowing how to manage our data and leverage it to our competitive advantage as a company. But if we have more data, is that expensive? Does it create more complexity? Do we need to update our SLAs? How long do we keep our data? What will this do to our infrastructure? Do we need to re-architect?
All of these important questions fall on top of the fact that many companies have followed the paradigm shift into cloud-native. So with the ephemeral nature of cloud infrastructure today, and with the world moving in an unprecedented direction, how will companies adjust to the new state of the cloud?
It seems the answer always lies in the data – how much of it are we consuming, how long are we keeping it, and are we using it to our benefit or creating more complexity?
The market seems to be trending toward the latter, and the complexity of all this data is starting to cause concern. This complexity creates an opportunity for forward-thinking companies to pivot and be early adopters in the post-pandemic world. Who will it be, and how will they do it?
Chronosphere lets you control what you store
Many people – including myself – feel that having an observability strategy is the future. You have all of these metric data points, but is anyone helping you extract value out of them? Do you need to keep all of your data, and if so, is that cost-effective? In a recent ESG survey on observability in cloud-native environments, 71% of respondents said observability data (metrics, logs, traces, etc.) is growing at a concerning rate. If you look at the most successful companies since this shift has happened, they have all figured out the best way to manage their metrics data. Take DoorDash, which uses Chronosphere for its cloud-native observability, and which saw its popularity skyrocket as the demand for delivery increased through a pandemic.
Once we stop thinking of metrics as a chore, and instead start thinking of them as an opportunity to understand the business and using them to make decisions, we will start to win.
Given the current state of the public cloud, we are starting to see metrics costs outpace infrastructure costs. At Chronosphere we think about metrics and observability differently from the rest of the market. We give you control over how long you store your data and which data you store, and we let you aggregate the data you keep in order to control cardinality.
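To make the cardinality idea concrete, here is a minimal Python sketch of what label-based aggregation can look like. The data shapes, label names (`service`, `pod`), and the `aggregate` helper are all invented for illustration; this is not Chronosphere’s actual API, just the general technique of dropping a high-cardinality label so that many per-pod series collapse into fewer per-service series.

```python
from collections import defaultdict

# Hypothetical samples: one time series per (service, pod) label combination.
samples = [
    {"labels": {"service": "checkout", "pod": "pod-1"}, "value": 4},
    {"labels": {"service": "checkout", "pod": "pod-2"}, "value": 6},
    {"labels": {"service": "search", "pod": "pod-9"}, "value": 3},
]

def aggregate(samples, drop_label):
    """Sum sample values after removing `drop_label` from each label set.

    Every distinct label combination is one stored series, so removing a
    high-cardinality label (like a pod ID) directly reduces series count.
    """
    totals = defaultdict(float)
    for s in samples:
        key = tuple(sorted((k, v) for k, v in s["labels"].items() if k != drop_label))
        totals[key] += s["value"]
    return dict(totals)

aggregated = aggregate(samples, "pod")
# Cardinality drops from 3 series to 2: one per service.
print(aggregated)
```

The trade-off is the usual one: you lose per-pod drill-down for this metric, but you keep the per-service signal you actually alert on, at a fraction of the storage cost.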
Observability is the future of cloud-native
The last two years have brought monumental changes to data in a short period of time. We haven’t even touched on the outages the public cloud providers (GCP in particular) have had in recent months, which caused application downtime and cost their customers time and energy.
I can’t wait to see where the state of the cloud will be two years from now. I’m excited to have a front-row seat for watching the outcomes of the great data boom from 2020-2022. Observability as a whole is here to stay and will be a pillar no matter what the state of the public cloud is over the next few years. It seems that the definition of observability is what will continue to mature as we move forward.
Schedule a demo to learn how Chronosphere helps you take back control of your observability.