
Chronosphere on Crafting a Cloud-Native Observability Strategy with Rachel Dines

Featuring:

Rachel Dines

Head of Product Marketing
Chronosphere

Rachel leads product and technical marketing for Chronosphere. Previously, she built out product, technical, and channel marketing at CloudHealth (acquired by VMware). Prior to that she led product marketing for AWS and cloud-integrated storage at NetApp and also spent time as an analyst at Forrester Research covering resiliency, backup, and cloud. Outside of work, she tries to keep up with her young son and hyper-active dog, and when she has time, enjoys crafting and eating out at local restaurants in Boston.

Corey Quinn

Chief Cloud Economist
The Duckbill Group

Corey is the Chief Cloud Economist at The Duckbill Group. Corey’s unique brand of snark combines with a deep understanding of AWS’s offerings, unlocking a level of insight that’s both penetrating and hilarious. He lives in San Francisco with his spouse and daughters.

Summary:

Rachel Dines, Head of Product and Technical Marketing at Chronosphere, joins Corey on Screaming in the Cloud to discuss why creating a cloud-native observability strategy is so critical, and the challenges that come with both defining and accomplishing that strategy to fit your current and future observability needs.

Transcript:

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Today’s featured guest episode is brought to us by our friends at Chronosphere, and they have also brought us Rachel Dines, their Head of Product and Solutions Marketing. Rachel, great to talk to you again.

Rachel: Hi, Corey. Yeah, great to talk to you, too.

Corey: Watching your trajectory has been really interesting, just because starting off, when we first started, I guess, learning who each other were, you were working at CloudHealth which has since become VMware. And I was trying to figure out, huh, the cloud runs on money. How about that? It feels like it was a thousand years ago, but neither one of us is quite that old.

Rachel: It does feel like several lifetimes ago. You were just this snarky guy with a few followers on Twitter, and I was trying to figure out what you were doing mucking around with my customers [laugh]. Then [laugh] we kind of both figured out what we’re doing, right?

Corey: So, speaking of that iterative process, today, you are at Chronosphere, which is an observability company. We would have called it a monitoring company five years ago, but now that’s become an insult after the observability war dust has settled. So, I want to talk to you about something that I’ve been kicking around for a while because I feel like there’s a gap somewhere. Let’s say that I build a crappy web app—because all of my web apps inherently are crappy—and it makes money through some mystical form of alchemy. And I have a bunch of users, and I eventually realize, huh, I should probably have a better observability story than waiting for the phone to ring and a customer telling me it’s broken.

So, I start instrumenting various aspects of it that seem to make sense. Maybe I go too low level, like looking at all the disks on every server to tell me if they’re getting full or not, like they’re ancient servers. Maybe I just have a Pingdom equivalent of, “Is the website up enough to respond to a packet?” And as I wind up experiencing different failure modes and getting yelled at by different constituencies—in my own career trajectory, my own boss—you start instrumenting for all those different kinds of breakages, you start aggregating the logs somewhere, and the volume gets bigger and bigger with time. But it feels like it’s sort of a reactive process as you stumble through that entire environment.

And I know it’s not just me because I’ve seen this unfold in similar ways in a bunch of different companies. It feels to me, very strongly, like it is something that happens to you, rather than something you set about from day one with a strategy in mind. What’s your take on an effective way to think about strategy when it comes to observability?

Rachel: You just nailed it. That’s exactly the kind of progression that we so often see. And that’s what I really was excited to talk with you about today—

Corey: Oh, thank God. I was worried for a minute there that you’d be like, “What the hell are you talking about? Are you just, like, some sort of crap engineer?” And, “Yes, but it’s mean of you to say it.” But yeah, what I’m trying to figure out is, is there some magic that I just was never connecting? Because it always feels like you’re in trouble because the site’s always broken, and oh, like, if the disk fills up, yeah, oh, now we’re going to start monitoring to make sure the disk doesn’t fill up. Then you wind up getting barraged with alerts, and no one wins, and it’s an uncomfortable period of time.

Rachel: Uncomfortable period of time. That is one very polite way to put it. I mean, I will say, it is very rare to find a company that actually sits down and thinks, “This is our observability strategy. This is what we want to get out of observability.” Like, you can think about a strategy in, like, the old-school sense, and, you know, as an industry analyst, I’m going to have to go back to, like, my roots at Forrester, thinking about, like, the people, and the process, and the technology.

But really, the bigger component here is, like, what’s the business impact? What do you want to get out of your observability platform? What are you trying to achieve? And a lot of the time, people have thought, “Oh, observability strategy. Great, I’m just going to buy a tool. That’s it. Like, that’s my strategy.”

And I hate to break it to you, but buying tools is not a strategy. I’m not going to say, like, buy this tool. I’m not even going to say, “Buy Chronosphere.” That’s not a strategy. Well, you should buy Chronosphere. But that’s not a strategy.

Corey: Of course. I’m going to throw the money by the wheelbarrow at various observability vendors, and hope it solves my problem. But if that solved the problem—I’ve got to be direct—I’ve never spoken to those customers.

Rachel: Exactly. I mean, that’s why this space is such a great one to come in and be very disruptive in. And I think, back in the days when we were running in data centers, maybe even before virtual machines, you could probably get away with not having a monitoring strategy—I’m not going to call it observability; that’s not what we called it back then—you could get away with not having a strategy because what was the worst that was going to happen, right? There was a finite amount that your monitoring bill could be, a finite amount that your customer impact could be. Like, you’re playing the penny slots, right?

We’re not on the penny slots anymore. We’re at the $50 craps table, and it’s Las Vegas, and if you lose the game, you’re going to have to run down the street without your shirt. Like, the game and the stakes have changed, and we’re still pretending like we’re playing penny slots, and we’re not anymore.

Corey: That’s a good way of framing it. I mean, I still remember some of my biggest observability challenges were building highly available rsyslog clusters so that you could bounce a member and not lose any log data because some of that was transactionally important. And we’ve gone beyond that to a stupendous degree, but it still feels like you don’t wind up building this into the application from day one. More’s the pity because if you did, and did that intelligently, that opens up a whole world of possibilities. I dream of that changing where one day, whenever you start to build an app, oh, we just push the button and automatically instrument with OTel, so you instrument the thing once everywhere it makes sense to do it, and then you can do your vendor selection and all those other decisions later in time. But these days, we’re not there.
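
[Editor’s note: for readers who want to see what “instrument once with OTel, pick the vendor later” can look like, here is a minimal sketch using the standard OpenTelemetry Python SDK. The service and attribute names are illustrative, and the backend is whatever OTLP endpoint you eventually point it at.]

```python
# A minimal sketch of vendor-neutral instrumentation with OpenTelemetry.
# Assumes: pip install opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# The exporter honors OTEL_EXPORTER_OTLP_ENDPOINT, so repointing the same
# instrumentation at a different backend is a config change, not a code change.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("crappy-web-app")  # illustrative service name

def handle_request(user_id: str) -> str:
    # One span per request; attributes become queryable dimensions later.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("app.user_id", user_id)
        return "ok"
```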

Rachel: Well, I mean, and there’s also the question of just the legacy environment and the tech debt. Even if you wanted to, the—actually I was having a beer yesterday with a friend who’s a VP of Engineering, and he’s got his new environment that they’re building with observability instrumented from the start. How beautiful. They’ve got OTel, they’re going to have tracing. And then he’s got his legacy environment, which is a hot mess.

So, you know, there’s always going to be this bridge of the old and the new. But this is where it comes back to: no matter where you’re at, you can stop and think, like, “What are we doing and why?” What is the cost of this? And not just cost in dollars, which I know you and I could talk about very deeply for a long period of time, but, like, the opportunity costs. Developers are working on stuff when they could be working on something that’s more valuable.

Or like the cost of making people work round the clock, trying to troubleshoot issues when there could be an easier way. So, I think it’s like stepping back and thinking about cost in terms of dollars and cents, time, opportunity, and then also impact, and starting to make some decisions about what you’re going to do in the future that’s different. Once again, you might be stuck with some legacy stuff that you can’t really change that much, but [laugh] you got to be realistic about where you’re at.

Corey: I think that that is a… it’s a hard lesson, to be very direct, in that companies need to learn it the hard way, for better or worse. Honestly, this is one of the things that I always noticed in startup land, where you had a whole bunch of, frankly, relatively early-career engineers in their early 20s, if not younger. But then the ops person was always significantly older because the thing you actually want to hear from your ops person, regardless of how you slice it, is, “Oh, yeah, I’ve seen this kind of problem before. Here’s how we fixed it.” Or even better, “Here’s the thing we’re doing, and I know how that’s going to become a problem. Let’s fix it before it does.” It’s the, “What are you buying by bringing that person in?” “Experience, mostly.”

Rachel: Yeah, that’s an interesting point you make, and it kind of leads me down this little bit of a side note, but a really interesting antipattern that I’ve been seeing in a lot of companies is that more seasoned ops person, they’re the one who everyone calls when something goes wrong. Like, they’re the one who, like, “Oh, my God, I don’t know how to fix it. This is a big hairy problem,” I call that one ops person, or I call that very experienced person. That experienced person then becomes this huge bottleneck in solving problems that people don’t really—they might even be the only one who knows how to use the observability tool. So, if we can’t find a way to democratize our observability tooling a little bit more so, like, just day-to-day engineers, like, more junior engineers, newer ones, people who are still ramping, can actually use the tool and be successful, we’re going to have a big problem when these ops people walk out the door, maybe they retire, maybe they just get sick of it. We have these massive bottlenecks in organizations, whether it’s ops or DevOps or whatever, that I see often exacerbated by observability tools. Just a side note.

Corey: Yeah. On some level, it feels like a lot of these things can be fixed with tooling. And I’m not going to say that tools aren’t important. You ever tried to implement observability by hand? It doesn’t work. There have to be computers somewhere in the loop, if nothing else.

And then it just seems to devolve into a giant swamp of different companies, doing different things, taking different approaches. And, on some level, whenever you read the marketing or hear the stories any of these companies tell you also to normalize it from translating from whatever marketing language they’ve got into something that comports with the reality of your own environment and seeing if they align. And that feels like it is so much easier said than done.

Rachel: This is a noisy space, that is for sure. And you know, I think we could go out to ten people right now and ask those ten people to define observability, and we would come back with ten different definitions. And then if you throw a marketing person in the mix, right—guilty as charged, and I know you’re a marketing person, too, Corey, so you got to take some of the blame—it gets mucky, right? But like I said a minute ago, the answer is not tools. Tools can be part of the strategy, but if you’re just thinking, “I’m going to buy a tool and that’s going to solve my problem,” you’re going to end up like this company I was talking to recently that has 25 different observability tools.

And not only do they have 25 different observability tools, what’s worse is they have 25 different definitions for their SLOs and 25 different names for the same metric. And to be honest, it’s just a mess. I’m not saying, like, go be Draconian and, you know, tell all the engineers, like, “You can only use this tool [unintelligible 00:10:34] use that tool,” you got to figure out this kind of balance of, like, hands-on, hands-off, you know? How much do you centralize, how much do you push and standardize? Otherwise, you end up with just a huge mess.

Corey: On some level, it feels like it was easier back in the days of building it yourself with Nagios because there’s only one answer, and it sucks, unless you want to start going down the world of HP OpenView. Which step one: hire a 50-person team to manage OpenView. Okay, that’s not going to solve my problem either. So, let’s get a little more specific. How does Chronosphere approach this?

Because historically, when I’ve spoken to folks at Chronosphere, there isn’t that much of a day one story, in that, “I’m going to build a crappy web app. Let’s instrument it for Chronosphere.” There’s a certain, “You must be at least this tall to ride,” implicit expectation built into the product just based upon its origins. And I’m not saying that doesn’t make sense, but it also means there’s really no such thing as a greenfield build out for you either.

Rachel: Well, yes and no. I mean, I think there are no greenfields out there because everyone’s doing something for observability, or monitoring, or whatever you want to call it, right? Whether they’ve got Nagios, whether they’ve got the Dog, whether they’ve got something else in there, they have some way of introspecting their systems, right? So, one of the things that Chronosphere is built on, and I actually think this is part of a way you might think about building out an observability strategy as well, is this concept of control and open-source compatibility. So, we can only collect data via open-source standards. You have to send this data via Prometheus, via OpenTelemetry, it could be older standards, like, you know, statsd, Graphite, but we don’t have any proprietary instrumentation.

And if I was making a recommendation to somebody building out their observability strategy right now, I would say open, open, open, all day long because that gives you a huge amount of flexibility in the future. Because guess what? You know, you might put together an observability strategy that seems like it makes sense for right now—actually, I was talking to a B2B SaaS company that told me that they made a choice a couple of years ago on an observability tool. It seemed like the right choice at the time. They were growing so fast, they very quickly realized it was a terrible choice.

But now, it’s going to be really hard for them to migrate because it’s all based on proprietary standards. Now, of course, a few years ago, they didn’t have the luxury of OpenTelemetry and all of this, but now that we have this, we can use these to kind of future-proof our mistakes. So, that’s one big area that, once again, is both my recommendation and happens to be our approach at Chronosphere.

Corey: I think that that’s a fair way of viewing it. It’s a constant challenge, too, just because increasingly—you mentioned the Dog earlier, for example—I will say that for years, I have been asked whether or not at The Duckbill Group, we look at Azure bills or GCP bills. Nope, we are pure AWS. Recently, we started to hear that same inquiry specifically around Datadog, to the point where it has become a board-level concern at very large companies. And that is a challenge, on some level.

I don’t deviate from my typical path of I fix AWS bills, and that’s enough impossible problems for one lifetime, but there is a strong sense of you want to record as much as possible for a variety of excellent reasons, but there’s an implicit cost to doing that, and in many cases, the cost of observability becomes a massive contributor to the overall cost. Netflix has said in talks before that they’re effectively an observability company that also happens to stream movies, just because it takes so much effort, engineering, and raw computing resources in order to get that data and do something actionable with it. It’s a hard problem.

Rachel: It’s a huge problem, and it’s a big part of why I work at Chronosphere, to be honest. Because when I was—you know, towards the tail end at my previous company in cloud cost management, I had a lot of customers coming to me saying, “Hey, when are you going to tackle our Dog or our New Relic or whatever?” Similar to the experience you’re having now, Corey, this was happening to me three, four years ago. And I noticed that there is definitely a correlation between people who are having these really big challenges with their observability bills and people that were adopting, like, Kubernetes, and microservices, and cloud-native. And it was around that time that I met the Chronosphere team, and that’s exactly what we do, right? We focus on observability for these cloud-native environments where observability data just goes, like, wild.

We see 10x, 20x as much observability data, and that’s what’s driving up these costs. And yeah, it is becoming a board-level concern. I mean, and coming back to the concept of strategy, like if observability is the second or third most expensive item in your engineering bill—like, obviously, cloud infrastructure, number one—number two or number three is probably observability. How can you not have a strategy for that? How can this be something the board asks you about, and you’re like, “What are we trying to get out of this? What’s our purpose?” “Uhhhh… troubleshooting?”

Corey: Right because it turns into business metrics as well. It’s not just about is the site up or not. There’s a—like, one of the things that always drove me nuts not just in the observability space, but even in cloud costing is where, okay, your costs have gone up this week so you get a frowny face, or it’s in red, like traffic light coloring. Cool, but for a lot of architectures and a lot of customers, that’s because you’re doing a lot more volume. That translates directly into increased revenues, increased things you care about. You don’t have the position or the context to say, “That’s good,” or, “That’s bad.” It simply is. And you can start deriving business insight from that. And I think that is the real observability story that I think has largely gone untold at tech conferences, at least.

Rachel: It’s so right. I mean, spending more on something is not inherently bad if you’re getting more value out of it. And it’s definitely a challenge on the cloud cost management side. “My costs are going up, but my revenue is going up a lot faster, so I’m okay.” And I think part of it is, like, you know, we put observability in this box of, like, it’s for low-level troubleshooting, but really, if you step back and think about it, there’s a lot of larger, bigger picture initiatives that observability can contribute to in an org, like digital transformation. I know that’s a buzzword, but, like, that is a legit thing that a lot of CTOs are out there thinking about. Like, how do we, you know, get out of the tech debt world, and how do we get into cloud-native?

Maybe it’s developer efficiency. God, there’s a lot of people talking about developer efficiency. Last week at KubeCon, that was one of the big, big topics. I mean, and yeah, what [laugh] what about cost savings? To me, we’ve put observability in a smaller box, and it needs to bust out.

And I see this also in our customer base, you know? Customers like DoorDash use observability, not just to look at their infrastructure and their applications, but also to look at their business. At any given minute, they know how many Dashers are on the road, how many orders are being placed, cut by geos, down to the—actually down to the second, and they can use that to make decisions.

Corey: This is one of those things that I always found a little strange coming from the world of running systems in large [unintelligible 00:17:28] environments to fixing AWS bills. There’s nothing that even resembles a fast, reactive response in the world of AWS billing. You wind up with a runaway bill, they’re going to resolve that over a period of weeks, on Seattle business hours. If you wind up spinning something up that creates a whole bunch of very expensive drivers behind your bill, it’s going to take three days, in most cases, before that starts showing up anywhere that you can reasonably expect to get at it. The idea of near real time is a lie unless you want to start instrumenting everything that you’re doing to trap the calls and then run cost extrapolation from there. That’s hard to do.

Observability is a very different story, where latencies start to matter, where being able to get leading indicators of certain events—be it technical or business—starts to be very important. But it seems like it’s so hard to wind up getting there from where most people are. Because I know we like to talk dismissively about the past, but let’s face it, conference-ware is the stuff we’re the proudest of. The reality is the burning dumpster of regret in our data centers that still also drives giant piles of revenue, so you can’t turn it off, nor would you want to, but you feel bad about it as a result. It just feels like it’s such a big leap.

Rachel: It is a big leap. And I think the very first step I would say is trying to get to this point of clarity and being honest with yourself about where you’re at and where you want to be. And sometimes not making a choice is a choice, right, as well. So, sticking with the status quo is making a choice. And so, like, as we get into things like the holiday season right now, and I know there’s going to be people that are on-call 24/7 during the holidays, potentially, to keep something that’s just duct-taped together barely up and running, I’m making a choice; you’re making a choice to do that. So, I think that’s the first step: the kind of… at least acknowledging where you’re at, where you want to be, and if you’re not going to make a change, just understanding the cost and being realistic about it.

Corey: Yeah, being realistic, I think, is one of the hardest challenges because it’s easy to wind up going for the aspirational story of, “In the future when everything’s great.” Like, “Okay, cool. I appreciate the need to plant that flag on the hill somewhere. What’s the next step? What can we get done by the end of this week that materially improves us from where we started the week?” And I think that with the aspirational conference-ware stories, it’s hard to break that down into things that are actionable, that don’t feel like they’re going to be an interminable slog across your entire existing environment.

Rachel: No, I get it. And for things like, you know, instrumenting and adding tracing and adding OTel, a lot of the time, the return that you get on that investment is… it’s not quite like, “I put a dollar in, I get a dollar out,” I mean, something like tracing, you can’t get to 60% instrumentation and get 60% of the value. You need to be able to get to, like, 80, 90%, and then you’ll get a huge amount of value. So, it’s sort of like you’re trudging up this hill, you’re charging up this hill, and then finally you get to the plateau, and it’s beautiful. But that hill is steep, and it’s long, and it’s not pretty. And I don’t know what to say other than there’s a plateau near the top. And those companies that do this well really get a ton of value out of it. And that’s the dream, that we want to help customers get up that hill. But yeah, I’m not going to lie, the hill can be steep.

Corey: One thing that I find interesting is there’s almost a bimodal distribution in companies that I talk to. On the one side, you have companies like, I don’t know, a Chronosphere is a good example of this. Presumably you have a cloud bill somewhere and the majority of your cloud spend will be on what amounts to a single application, probably in your case called, I don’t know, Chronosphere. It shares the name of the company. The other side of that distribution is the large enterprise conglomerates where they’re spending, I don’t know, $400 million a year on cloud, but their largest workload is 3 million bucks, and it’s just a very long tail of a whole bunch of different workloads, applications, teams, et cetera.

So, what I’m curious about from the Chronosphere perspective—or the product you have, not the ‘you’ in this metaphor, which gets confusing—is, it feels easier to instrument a Chronosphere-like company that has a primary workload that is the massive driver of most things and get that instrumented and start getting an observability story around that than it does to try and go to a giant company and, “Okay, 1500 teams need to all implement this thing that are all going in different directions.” How do you see it playing out among your customer base, if that bimodal distribution holds up in your world?

Rachel: It does and it doesn’t. So, first of all, for a lot of our customers, we often start with metrics. And starting with metrics means Prometheus. And Prometheus has hundreds of exporters. It is basically built into Kubernetes. So, if you’re running Kubernetes, getting Prometheus metrics out is actually not a very big lift. So, we find that we start with Prometheus, we start with getting metrics in, and we can get a lot—I mean, customers—we have a lot of customers that use us just for metrics, and they get a massive amount of value.
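
[Editor’s note: as a rough illustration of why “getting Prometheus metrics out is not a very big lift,” here is a minimal sketch using the official prometheus_client Python library. Metric, label, and route names are made up; in Kubernetes, the /metrics endpoint would typically be picked up by a scrape annotation or a ServiceMonitor.]

```python
# A minimal sketch of exposing Prometheus metrics from a Python service.
# Assumes: pip install prometheus-client
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["route", "status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds", ["route"])

def handle_request(route: str) -> None:
    # Time the work and count the result; both show up as labeled series.
    with LATENCY.labels(route=route).time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
    REQUESTS.labels(route=route, status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics for Prometheus to scrape
    while True:
        handle_request("/checkout")
```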

But then once they’re ready, they can start instrumenting for OTel and start getting traces in as well. And yeah, in large organizations, it does tend to be one team, one application, one service, one department that kind of goes at it and gets all that instrumented. But I’ve even seen very large organizations, when they get their act together and decide, like, “No, we’re doing this,” they can get OTel instrumented fairly quickly. So, I guess it’s, like, a lining up. It’s more of a people issue than a technical issue a lot of the time.

Like, getting everyone lined up and making sure that like, yes, we all agree. We’re on board. We’re going to do this. But it’s usually, like, it’s a start small, and it doesn’t have to be all or nothing. We also just recently added the ability to ingest events, which is actually a really beautiful thing, and it’s very, very straightforward.

It basically just—we connect to your existing other DevOps tools, so whether it’s, like, a Buildkite, or a GitHub, or, like, a LaunchDarkly, and then anytime something happens in one of those tools, that gets registered as an event in Chronosphere. And then we overlay those events over your alerts. So, when an alert fires, the first thing I do is I go look at the alert page, and it says, “Hey, someone did a deploy five minutes ago,” or, “There was a feature flag flipped three minutes ago,” and I’ve solved the problem right then. I don’t think of this as—there’s not an all-or-nothing nature to any of this stuff. Yes, tracing is a little bit of a—you know, like I said, it’s one of those things where you have to make a lot of investment before you get a big reward, but that’s not the case in all areas of observability.
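
[Editor’s note: to make the events idea concrete, here is a generic sketch of the pattern Rachel describes, not Chronosphere’s actual API: a small webhook receiver that records deploy events so they can be matched against alert timestamps later. The endpoint, payload fields, and helper function are all hypothetical, and it assumes Flask is installed.]

```python
# A generic sketch of "overlay change events on alerts"; not a real vendor API.
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
EVENTS = []  # in a real system these would land in the observability backend

@app.route("/webhooks/deploys", methods=["POST"])
def record_deploy():
    # Hypothetical payload shape from a CI/CD webhook.
    payload = request.get_json(force=True)
    EVENTS.append({
        "type": "deploy",
        "service": payload.get("service", "unknown"),
        "sha": payload.get("sha"),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return jsonify(ok=True)

def events_near(alert_time: datetime, window_s: int = 600):
    # "What changed in the ten minutes before this alert fired?"
    # alert_time is expected to be timezone-aware (UTC).
    return [
        e for e in EVENTS
        if 0 <= (alert_time - datetime.fromisoformat(e["at"])).total_seconds() <= window_s
    ]
```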

Corey: Yeah. I would agree. Do you find that there’s a significant easy, early win when customers start adopting Chronosphere? Because one of the problems that I’ve found, especially with things that are holistic, and as you talk about tracing, well, you need to get to a certain point of coverage before you see value. But human psychology being what it is, you kind of want to be able to demonstrate, oh, see, the Meantime To Dopamine needs to come down, to borrow an old phrase. Do you find that there are some easy wins that start to help people see the light? Because otherwise, it just feels like a whole bunch of work for no discernible benefit to them.

Rachel: Yeah, at least for the Chronosphere customer base, one of the areas where we’re seeing a lot of traction this year is in optimizing the costs, like, coming back to the cost story of their overall observability bill. So, we have this concept of the control plane in our product where all the data that we ingest hits the control plane. At that point, that customer can look at the data, analyze it, and decide this is useful, this is not useful. And actually, not just decide that, but we show them what’s useful, what’s not useful. What’s being used, what’s high cardinality, but—and high cost, but maybe no one’s touched it.

And then we can make decisions around aggregating it, dropping it, combining it, doing all sorts of fancy things, changing the—you know, downsampling it. On the trace side, we can do it both head-based and tail-based. On the metrics side, it’s as it hits the control plane and then streams out. And then they only pay for the data that we store. So typically, customers are—they come on board and immediately reduce their observability dataset by 60%. Like, that’s just straight up, that’s the average.

And we’ve seen some customers get really aggressive, get up to, like, in the 90s, where they realize we’re only using 10% of this data. Let’s get rid of the rest of it. We’re not going to pay for it. So, paying a lot less helps in a lot of ways. It also helps customers get more coverage of their observability and of their overall stack. So, I was talking recently with an autonomous vehicle company that recently came to us from the Dog, and they had made some really tough choices and were no longer monitoring their pre-prod environments at all because they just couldn’t afford to do it anymore. It’s like, well, now they can, and we’re still saving them money.
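
[Editor’s note: here is an illustrative toy sketch, not Chronosphere’s implementation, of the kind of shaping a control plane can do before data is stored: roll up a high-cardinality label, such as per-pod series, into an aggregate, so you keep the signal while paying for far fewer series.]

```python
# A toy example of aggregating away a high-cardinality label before storage.
from collections import defaultdict

def aggregate_away(samples, drop_labels):
    """samples: list of (labels_dict, value). Returns series keyed by the kept labels."""
    rolled_up = defaultdict(float)
    for labels, value in samples:
        kept = tuple(sorted((k, v) for k, v in labels.items() if k not in drop_labels))
        rolled_up[kept] += value
    return dict(rolled_up)

raw = [
    ({"service": "checkout", "pod": "checkout-7f9c-abcde", "code": "500"}, 3),
    ({"service": "checkout", "pod": "checkout-7f9c-fghij", "code": "500"}, 2),
]

# Per-pod series are rarely queried directly; rolling them up keeps the
# error signal and drops the pod-level cardinality.
print(aggregate_away(raw, drop_labels={"pod"}))
# {(('code', '500'), ('service', 'checkout')): 5.0}
```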

Corey: I think that there’s also the downstream effect of the money saving, in that, for example, I don’t fix observability bills directly. But, “Huh, why is your CloudWatch bill through the roof?” Or data egress charges in some cases? It’s oh, because your observability vendor is pounding the crap out of those endpoints and pulling all your log data across the internet, et cetera. And that tends to mean, oh, yeah, it’s not just the first-order effect; it’s the second and third and fourth-order effects this winds up having. It becomes almost a holistic challenge. I think that trying to put observability in its own bucket, on some level—when you’re looking at it from a cost perspective—starts to be, I guess, a structure that makes less and less sense in the fullness of time.

Rachel: Yeah, I would agree with that. I think that just looking at the bill from your vendor is one very small piece of the overall cost you’re incurring. I mean, all of the things you mentioned, the egress, the CloudWatch, the other services it’s impacting, and what about the people?

Corey: Yeah, it sure is great that your team works for free.

Rachel: [laugh]. Exactly, right? I know, and it makes me think a little bit about that viral story about that particular company with a certain vendor that had a $65 million per year observability bill. And that impacted not just them, but, like, it showed up in both vendors’ financial filings. Like, how did you get there? How did you get to that point? And I think this all comes back to the value in the ROI equation. Yes, we can all sit in our armchairs and be like, “Well, that was dumb,” but I know there are very smart people out there that just got into a bad situation by kicking the can down the road and not thinking about the strategy.

Corey: Absolutely. I really want to thank you for taking the time to speak with me about, I guess, the bigger picture questions rather than the nuts and bolts of a product. I like understanding the overall view that drives a lot of these things. I don’t feel I get to have enough of those conversations some weeks, so thank you for humoring me. If people want to learn more, where’s the best place for them to go?

Rachel: So, they should definitely check out the Chronosphere website. Brand new beautiful spankin’ new website: chronosphere.io. And you can also find me on LinkedIn. I’m not really on the Twitters so much anymore, but I’d love to chat with you on LinkedIn and hear what you have to say.

Corey: And we will, of course, put links to all of that in the [show notes 00:28:26]. Thank you so much for taking the time to speak with me. It’s appreciated.

Rachel: Thank you, Corey. Always fun.

Corey: Rachel Dines, Head of Product and Solutions Marketing at Chronosphere. This has been a featured guest episode brought to us by our friends at Chronosphere, and I’m Corey Quinn. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry and insulting comment that I will one day read, once I finish building my highly available rsyslog system to consume it with.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.