Chronologues Episode 5: Why Telemetry Pipelines?

In this episode of Chronologues, Sophie Kohler is joined by teammates at Chronosphere to dive into the topic of Telemetry Pipelines.

Rosti Spitchka

Senior Sales Engineer | Chronosphere

Carolyn King

Head of Community & Developer | Chronosphere

Eddie Dinel

Product Management Lead | Chronosphere

Riley Peronto

Sr. Product Marketing Manager | Chronosphere

Riley Peronto, a Sr. Product Marketing Manager at Chronosphere, brings years of expertise in log management and telemetry pipelines.

Working closely with customers, Riley gains invaluable insights that fuel his technical storytelling. He aims to help teams that are navigating the landscape of cloud-native technologies and data-driven operations.

Sophie Kohler

Content Writer | Chronosphere

Sophie Kohler is a Content Writer at Chronosphere, where she writes blogs and creates videos and other educational content for a business-to-business audience. In her free time, you can find her at hot yoga, working on creative writing, or playing a game of pool.

Transcript

Sophie: Did you know that log data has grown over 250 percent year over year? Teams need to route telemetry data to different backend destinations, which means managing configurations, monitoring health and performance, and then rolling out updates across disjointed tools. Ah, management burden. We hate to see it.

Now, this creates challenges for security and observability teams: high logging costs, complexity, data in different formats, and vendor lock-in. That’s where the power of a telemetry pipeline comes into play. Let’s kick things off with how teams can manage high logging costs. Rosti, can you help us out?

How to combat high logging costs

Rosti: Hey Sophie, why not? Reducing log data volume and cost is one of the primary outcomes of using a telemetry pipeline. Telemetry volumes explode as organizations embrace containerized and microservice architectures. A natural consequence of this growth is a higher observability bill, because cost scales at the same rate as log data volume, and log data is essential to observability and security. But large portions of the log data that you create are neither useful nor used.

Telemetry pipelines help you capture the right data in the optimal format to drive efficiency. As such, you can right-size your log data footprint before you pay to transport and analyze that data.
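To make that concrete, here’s a minimal Python sketch of the kind of volume-reduction step a pipeline might apply before data leaves your environment. The field names, levels, and sample rate are illustrative assumptions, not any particular product’s behavior.

```python
# A sketch of an in-pipeline volume-reduction step: drop noisy debug
# records and keep a deterministic sample of routine ones before they
# are shipped to a paid backend. All names here are hypothetical.
import hashlib

KEEP_LEVELS = {"error", "warn"}   # always keep high-value records
SAMPLE_RATE = 0.10                # keep ~10% of everything else

def should_forward(record: dict) -> bool:
    level = record.get("level", "info").lower()
    if level == "debug":
        return False              # debug logs rarely earn their storage cost
    if level in KEEP_LEVELS:
        return True
    # Hash-based sampling is deterministic, so the same record gets the
    # same keep/drop decision on every pipeline worker.
    digest = hashlib.sha256(record.get("message", "").encode()).digest()
    return digest[0] / 255 < SAMPLE_RATE

logs = [
    {"level": "debug", "message": "cache miss for key abc"},
    {"level": "error", "message": "payment service timeout"},
    {"level": "info",  "message": "request served in 12ms"},
]
forwarded = [r for r in logs if should_forward(r)]
```

The point is that the filtering happens in flight, before any per-gigabyte transport or indexing cost is incurred.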

Sophie: What about the complexity when it comes to collecting and routing data?

Managing data complexity

Carolyn: Thanks, Sophie. That’s a great question. Today, we see observability and security teams support more data sources and destinations than ever before. And this has led to increased infrastructure complexity and a number of new challenges for these teams. First is maintenance overhead. This includes ongoing updates, patches, and configuration changes, which only become more difficult as infrastructure scales.

Second, we see teams having to manage different proprietary agents running on the same infrastructure, which can be a massive drain on application resources. And finally, teams see a lot of duplicate data being routed to multiple backends. The good news is that a telemetry pipeline can help solve these problems by providing a central place to manage both data collection and data routing. 
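As a rough illustration of that central-routing idea, here’s a hedged Python sketch in which one routing table replaces a pile of per-backend agents. The source tags and destination names are invented for the example.

```python
# A sketch of "one central place for collection and routing": each record
# is tagged at ingest, and a single routing table decides which backend
# receives it. Destination names are placeholders, not real endpoints.
from typing import Callable

ROUTES: list[tuple[Callable[[dict], bool], str]] = [
    (lambda r: r.get("source") == "firewall",       "siem"),
    (lambda r: r.get("level") in ("error", "warn"), "observability"),
    (lambda r: True,                                "archive"),  # catch-all
]

def route(record: dict) -> str:
    # First match wins, so a record lands in exactly one backend
    # instead of being duplicated across all of them.
    for matches, destination in ROUTES:
        if matches(record):
            return destination
    return "archive"

print(route({"source": "firewall", "level": "info"}))  # -> siem
```

Because the table lives in one place, updating a route is a single configuration change rather than a rollout across every agent on every host.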

Sophie: So, we’ve talked about reducing costs and complexity. Now, what about data that’s in different formats? Eddie, do your thing.

Collecting data from different sources

Eddie: I gotcha. There are a whole bunch of different places that logs come from. It’s not just telemetry. It’s not just observability. Different sources send out data in different formats. And so, a security team is going to work with data from many different sources, like Palo Alto Networks or CrowdStrike Endpoint Detection, or what have you.

During an investigation, you’re going to need to run queries on data from all of these sources. You’re either going through them one by one, figuring out what’s going on there, or you’re trying to craft the perfect query that pulls all of this together.

And both of those are really time-consuming. What a telemetry pipeline allows you to do is solve the Tower of Babel problem. It allows you to bring together and normalize data from a whole bunch of different sources. So you can take data from a bunch of different places and normalize your timestamps, standardize your IP address formats, create a unified taxonomy for different data types, or any of those things.
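Here’s a small, hypothetical Python sketch of that normalization step. The raw field names for the two sources are assumptions made up for illustration, not the vendors’ actual log schemas.

```python
# A sketch of the "Tower of Babel" fix: map records from two hypothetical
# sources into one shared schema, normalizing timestamps to UTC ISO 8601
# along the way. Input field names are invented for the example.
from datetime import datetime, timezone

def normalize_source_a(raw: dict) -> dict:
    # Source A emits local-format text timestamps.
    ts = datetime.strptime(raw["receive_time"], "%Y/%m/%d %H:%M:%S")
    return {
        "timestamp": ts.replace(tzinfo=timezone.utc).isoformat(),
        "src_ip": raw["src"],
        "event_type": "network." + raw["type"].lower(),
    }

def normalize_source_b(raw: dict) -> dict:
    # Source B emits epoch milliseconds.
    ts = datetime.fromtimestamp(raw["timestamp"] / 1000, tz=timezone.utc)
    return {
        "timestamp": ts.isoformat(),
        "src_ip": raw["LocalIP"],
        "event_type": "endpoint." + raw["event_name"].lower(),
    }

unified = [
    normalize_source_a({"receive_time": "2024/05/01 12:00:00",
                        "src": "10.0.0.5", "type": "THREAT"}),
    normalize_source_b({"timestamp": 1714564800000,
                        "LocalIP": "10.0.0.9",
                        "event_name": "ProcessStart"}),
]
# Every record now shares one schema, so one query works across sources.
```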

Sophie: Ah, so you can enforce a schema across all of your different data so that when you’re actually in an incident, you can solve the problem quickly.

Migrating to the right log management platform

Sophie: Hmm, what are we missing? Riley! Can you tell us about how to successfully migrate to a platform that fits your needs?

Riley: Can do, Sophie. Observability and security teams might want to migrate to a new log management or SIEM platform for a few different reasons. They might need a platform that can better manage today’s log data volumes, they might want one that can enable more proactive cybersecurity practices, or they might even want to consolidate their logging backends.

In the past, migration was really complex for a few different reasons. It required teams to reinstall and reconfigure their log collection agents, it required them to reformat their logs to align with new schema requirements, and it also meant that teams might lose access to historical data during the migration process.

Telemetry pipelines solve these challenges on a few different fronts. First, they can collect data from any source and push it to any destination. What that means is that you can route data to your new logging backend without installing a new log collection agent. Second, they allow you to reshape data in flight, so you can meet your new schema requirements without re-instrumenting logs upstream.

And third, they allow you to route a copy of all your data to low-cost object storage such as Amazon S3. If you ever need that data to investigate a breach, support an audit, or whatever else, you can rehydrate it back into your log management or SIEM platform.
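To sketch those last two points in code: the example below reshapes each record in flight while teeing an untouched copy toward low-cost storage for later rehydration. The field names are invented, and in-memory lists stand in for the real backend and archive bucket.

```python
# A simplified sketch, with hypothetical names: reshape each record to
# the new platform's schema, and keep a raw copy in an archive sink so
# it can be rehydrated later. Lists stand in for real destinations.
import copy
import json

def reshape(record: dict) -> dict:
    # Hypothetical mapping from the old schema to the new backend's.
    return {
        "@timestamp": record["time"],
        "message": record["msg"],
        "service.name": record.get("app", "unknown"),
    }

new_backend: list[dict] = []   # stand-in for the new log platform
archive: list[str] = []        # stand-in for an S3-style object store

def process(record: dict) -> None:
    archive.append(json.dumps(record))           # raw copy, cheap to keep
    new_backend.append(reshape(copy.deepcopy(record)))

process({"time": "2024-05-01T12:00:00Z", "msg": "login ok", "app": "auth"})
# Rehydration is then just replaying archived JSON back through the
# pipeline (or straight into the backend) when an investigation needs it.
```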

Sophie: In a containerized, microservices world, telemetry pipelines have emerged as a crucial solution for increasingly complex systems. And telemetry pipelines can access information about your data sources that may not be available to you downstream.

If you’re interested in telemetry pipelines, check out some of the resources that we’ve linked in the description below.

We’d love to hear in the comments your take on telemetry pipelines, and what you’d like to hear about in the next episode. See ya!