Field CTO | Chronosphere
Bill Hineline, Field CTO, joins Chronosphere after 17 years with United Airlines and brings a wealth of cross-functional expertise to his position. Over the last 24 years in the airline industry, he held a variety of leadership roles, ranging from IT operations and engineering to digital marketing.
In his last role, he served as Director of Enterprise Observability for United Airlines. As Field CTO, Hineline will serve as a trusted advisor for Chronosphere customers. In his role, he will bridge the gap between business challenges and observability solutions by sharing real-world experience from a customer perspective.
Principal Developer Advocate | Chronosphere
Paige Cruz is a Principal Developer Advocate at Chronosphere, passionate about cultivating sustainable on-call practices and bringing folks to their aha moment with observability.
She started as a software engineer at New Relic before switching to Site Reliability Engineering, holding the pager for InVision, Lightstep, and Weedmaps. Off the clock, you can find her spinning yarn, swooning over alpacas, or watching trash TV on Bravo.
Social Media and Content Manager | Chronosphere
Sophie Kohler is a Content Writer at Chronosphere, where she writes blogs as well as creates videos and other educational content for a business-to-business audience. In her free time, you can find her at hot yoga, working on creative writing, or playing a game of pool.
Bill sits down with Developer Advocate Paige Cruz to talk about OpenTelemetry, its common misconceptions, and real-world use cases in this 3-part series.
Bill Hineline: You are listening to “O11Y Unplugged,” the podcast for the observability obsessed. I’m Bill Hineline, Field CTO, and a translator between those who love traces and those who speak in slide decks. Producer Soph, we’re kicking off a three-part series here. You wanna tell us about that?
Sophie Kohler: Yes, exactly. Hi everyone. I’m Producer Soph, and we’re diving deep into OpenTelemetry, breaking it down into manageable pieces so folks can really understand what it is, why it matters, and why it ain’t that hard. Each episode is gonna build on the last, so by the end of the series, you’ll have a pretty solid foundation to start or improve your OpenTelemetry journey.
Bill Hineline: We all know OpenTelemetry can feel like a beast. I mean, all YAML, no chill, right? But it doesn’t have to be that way. In this series, as Sophie mentioned, we’re gonna really break all that down, and to help us, we are joined by the OTel Queen herself, Paige Cruz.
Paige Cruz: Hello, how’s it going today?
Bill Hineline: Grand. Grand. Everybody’s gaga over OTel. You hear that a lot as we’re having conversations about observability, and so why is everybody crazy about it?
Paige Cruz: If you have been to any technical conference in the last, I don’t know, 3, 4, 5 years, OTel has been on almost every stage in every track. What is the buzz about? Well, the mission of OpenTelemetry, according to their official site, is to deliver high-quality, ubiquitous, and portable telemetry to enable effective observability.
For me, I translate that as: the promise of OpenTelemetry is to instrument once using OpenTelemetry SDKs and libraries, and observe that data anywhere. Whether that’s a vendor you’re paying, a self-hosted open source observability stack, a managed offering from one of the open source vendors, or a cloud provider’s offering, it’s really about using one ecosystem to emit data, send and receive that data, and meaningfully monitor, analyze, and act on it without being locked in.
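To make that “instrument once, observe anywhere” promise concrete, here is a minimal sketch using the OpenTelemetry Python SDK. The service name, span, and Collector endpoint are illustrative assumptions; the point is that switching backends means re-pointing the OTLP exporter, not rewriting your instrumentation.

```python
# A minimal "instrument once" sketch with the OpenTelemetry Python SDK.
# The service name, span, and Collector endpoint are illustrative assumptions.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Describe the service once; these resource attributes travel with every span.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout-service"}))

# Point the OTLP exporter at whichever backend you choose: a Collector, a vendor,
# or a self-hosted stack. Changing backends means changing this endpoint.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout")
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("payment.amount", 42.00)  # example custom attribute
```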
Bill Hineline: I’d say that this is a big game changer because, you said it first, it’s instrument once and done. There isn’t this constant change of telemetry methods every time you change platforms for your observability. But I gotta think it also helps you standardize your metrics a bit, which has gotta make correlation and other things a lot easier. So, we’re gonna have a great conversation today, but I think Producer Soph has something in store for us.
Sophie Kohler: Yeah, so I was thinking it might be fun if we share our observability pet peeves. I feel like this says a lot about a person. Everyone’s answer is gonna be a little different. Bill, do you wanna go first?
Bill Hineline: My pet peeve is really this idea of a single pane of glass. If I had a dollar for every exec who wanted me to build a single pane of glass, you know what? I’d have enough money to fund an observability team that they probably forgot to hire.
Paige Cruz: Ooh, you heard it here on O11Y Unplugged.
Sophie Kohler: Paige, what about you? What’s your pet peeve?
Paige Cruz: My observability pet peeve is this mindset that SREs, or your central observability team, or your platform engineers, or, God help you if you’re calling it the DevOps team, are there to be the application development teams’ personal monitoring butler – “oh, they can set up the alerts, they can set up the dashboards, that’s what they’re here for.” Oh no. You are vastly underutilizing those teams and that capability if all you’re having them do is set up your application’s monitoring, data, infrastructure, all of that stuff. You should own your data as an application team, and not outsource that to your lovely SREs.
Bill Hineline: I am gonna plus-20 on that one. I’m upvoting that a lot. Whatever platform you’re upvoting on, Reddit, whatever, I would upvote that.
Sophie Kohler: My observability pet peeve, I think, would be when teams are sticking with a platform that they don’t actually like, just because the thought of migrating to a new one that actually fits their needs is daunting. I think at the end of the day, it’s not as scary as you might make it out to be.
Paige Cruz: There’s a better world out there. OpenTelemetry can be your bridge to that world.
Bill Hineline: Let’s talk about OTel. Let’s start with this idea of the right metrics. Paige, I don’t know about you, but the number of times I’ve heard people say: “Oh, collect everything, we’ll figure it out later.” That might be the runner-up for my pet peeve if I were to rewind a bit. I think people get into FOMO, or they just want to collect everything because they feel like that’s easier, and I just feel like that sets you up for such a nightmare later. What do you think, Paige?
Paige Cruz: I think it absolutely does, and I’ll say I had to learn and develop some empathy here. These kinds of factors contribute to that idea of using telemetry as a safety blanket: I cannot possibly let any single metric or log or field go, because I may need it. If you’re working in a culture like that, there are often folks or organizations that have scars from major incidents or security breaches where maybe they didn’t have the data they needed on hand, and their takeaway was: “never let a piece of data drop again.” That is an overcorrection to the problem you experienced, which was maybe not having the right depth of visibility in the first place.
Unfortunately, collecting everything and figuring it out later is not necessarily scalable or cost effective. I think folks who advocate to collect everything sound like they may be in a vendor’s pocket, or may be a secret lobbyist for a vendor working inside your org. Certainly not the SRE team having to operate an observability stack. But Bill, speaking of the bill, the “observability bill,” why is this such a big deal for leadership? Why is a bill overrun treated almost as an incident, where somebody comes in and says: “The logging bill’s too high. You literally must stop all of the work you’re doing”? If you don’t fix the bill, what’s the impact there? What’s the risk of not addressing a budget overrun?
Bill Hineline: First of all, we keep interchanging “bill” and “Bill,” and so I’m starting to feel like, again, a seventies reference, an afterschool special about being just a bill. Anyway, back to your question, Paige. Those bill overruns are a nightmare for any central O11Y team or an SRE team that’s managing the O11Y platform.
And they are, because they put the spotlight on you for the wrong reasons, right? I tell people all the time, if your CFO knows what observability is because of the cost and not the value it brings, then you’re in a no-win situation right out of the gate. And those overruns come because you don’t have a strategy around what to collect, or you just subscribed to what we were all kind of told early on: collect everything and sort it out later. I think those overruns are really becoming a huge problem right now.
Then your CFO wants to cut funds, and you wind up not being able to instrument your entire enterprise, which really cripples your observability platform, right? Or you’re putting in quotas that don’t give people what they want, and suddenly you’re not finding things on the platform that you expect to.
Again, all of this, along with something we haven’t talked about yet, the noise that comes from it, is a death march for your observability success.
Paige Cruz: That’s a great point about the impact to the broader company and the organization. Spending your budget wisely is always important, but not at the expense of losing visibility. And what I heard is that not being able to instrument your full enterprise is a problem, because you don’t have the visibility you need.
And I’d say, from the developer perspective, I had released a video on “What is cardinality?”, a little five-minute thing. I sent it to a friend for feedback, and they said: “Oh my God, I needed this to really better understand it, because I have been so worried about adding a metric that would blow up the cardinality, that would blow up the bill, get all the eyes on me, like you said, for the wrong reasons, that I have not actually added data that I thought I needed, because I was operating from this place of fear and not knowing what adding this data would mean for the overall system, the overall bill.”
Bill Hineline: I also think, as we talk about this, we have to talk about the noise it brings. Because it’s not just about the cost, it’s about the noise, right? That alert fatigue can become a real thing, and it can cause trust issues. When people don’t trust it, they ignore the alerts. And then how are you gonna ever be successful and deliver on what you promised from a business perspective? So, I think it’s really important to think about how you do it.
The advice that I’ve given to dev teams and engineers over the years is: Start at a much higher level when you are out to instrument your digital product. Decide why it exists. Like, why did we write this for our business to begin with?
And then, what are the five or ten things that are either the most critical or the most popular features of this app? And put SLOs around them to give yourself a break on being perfect by giving yourself an error budget. Instead of from the ground up, you’re building from the top down and driving what your observability platform is supposed to be driving to begin with, which is: how is it performing? How is the customer experience? Am I getting the conversion rates I want? Am I doing better with these code releases as we release to improve, you know, adoption of a feature?
Start high instead of down at the bottom. So many people, I find, are still stuck in the old world of monitoring at a component level: “I care about this service, and I care about that server, and I care about that microservice.” No, you care about the app, the service it provides, and the business that you’re in. Start high.
Paige Cruz: When I think about service level objectives, what I really think of is the power to proactively decide the data you need to answer those really critical questions that come in the heat of an incident, when you don’t have all the data and facts at your disposal and you’re rushing to mitigate. It is expected that customer success is going to ask you: how are users affected? What parts of the product experience are broken? What is still functional, and how many people is this affecting? I’m not telling you SLOs give you those answers on a silver platter, but they make it a lot easier to scope and narrow the investigation and understand impact, because you’ve decided upfront what the most important parts of your product experience are to measure, you’ve gotten alignment with product and engineering, and you’ve set what targets look like for unacceptable versus okay performance.
And having done all that before an alert even fires, before things go sideways, gives you, as the responder, a lot more space to communicate the impact while working on mitigations. “How do we put these pieces together?” Well, let’s try an SLO. Let’s have the user experience thought of first and foremost as the foundation of your alerting, and then go from there.
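For readers who want the arithmetic behind “give yourself an error budget,” here is a minimal sketch; the SLO target and request counts below are made up for the example.

```python
# A minimal sketch of SLO error-budget arithmetic; the target and counts are made up.
def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    """Report how much of the error budget an availability SLO has consumed."""
    allowed_failures = (1 - slo_target) * total_requests  # budget expressed in failed requests
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "observed_availability": 1 - failed_requests / total_requests,
        "allowed_failures": allowed_failures,
        "budget_consumed_pct": round(consumed * 100, 1),
    }

# Example: a 99.9% target over 10 million requests with 4,200 failures consumes
# roughly 42% of the budget; alert on the burn rate, not on a single blip.
print(error_budget_report(slo_target=0.999, total_requests=10_000_000, failed_requests=4_200))
```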
Sophie Kohler: Listen, guys, this is great stuff, but it’s sparking a few questions for me. It sounds like just monitoring components can have some cons to it. How do you implement OpenTelemetry across an entire system, then?
Paige Cruz: That’s a great question. Bill, I don’t know if you agree, but I think it starts with getting leadership bought in. You would be hard pressed to bring OTel to an organization without getting your leadership on board.
Sophie Kohler: How would you say that to a CFO who’s wondering what the total cost of doing that is?
Bill Hineline: So, I would say that you gotta help an executive understand: “What’s the project?” They likely don’t care about OTel. They don’t care how you get observability done; they want you to get it done. But the benefits are really strong there. I would phrase it along the lines of: “Look, you’re future-proofing your code. We move to OpenTelemetry, we’re standardizing our data, so it will be more valuable.” Right? We’re able to get greater insights because we don’t have to shape and modify the metrics as much, because they are already in standard formats.
Paige, what do you think?
Paige Cruz: Plus one to all of that. And I would say, if you are the developer who’s really bullish on OpenTelemetry and you’re asking: “How do I get my org to understand and come on board?” Well, storytelling is going to be your friend here. I would recommend looking back at your company’s wiki, or wherever you keep project documentation, to find the last time a new observability platform had to be onboarded, or you had to migrate between observability platforms. Take a look at what that plan was, what actually happened, how long it took, all of the roadblocks to adapting to proprietary vendor formats, translating your monitors and dashboards, et cetera. And paint the picture contrasting: “here’s what happened during the last migration, and here’s what a migration would look like with OpenTelemetry instrumentation and data by our side.”
I think, for leaders who were there and suffered through those previous migrations, the light bulb will go off very quickly: “Okay, I see. In terms of resources, time, effort, energy, money, I can see that while this is short-term work, over the long term we really do get those benefits.” I always like to tell a story. Stories are effective, and they remind people: “We’ve done this before. What if I told you this is the last time we have to do this sort of work?”
Bill Hineline: We’ve talked a lot about something that touches a nerve with both of us, and that’s, you know, not just collecting everything, but being more purposeful and really identifying a purpose for the metrics you’re gathering, rather than just sort of pulling everything in, and avoiding what I’ve started calling a “data landfill” instead of a data lake. Because it’s very evocative. It’s just a lot of junk at times. So, let’s shift gears a little bit and let’s talk about metrics shaping.
I feel like shaping is really about noise. Maybe it’s about rethinking: do you need all metrics in your platform? Maybe you need a platform that helps you move things into different spaces.
Paige Cruz: To me, shaping is all about making and producing and storing data that is meaningful to you. And, aside from the earlier conversation about how monitors can contribute to data noise, I think the other major contributor is this idea of just turning on auto-instrumentation across your cloud services, across your applications, across your infrastructure layer, across third party libraries you bring in, and just accepting that the defaults are going to work for you and be meaningful to you.
Now, that is not what any of those library authors or the data store authors had in mind. They were thinking: “What telemetry do I need to troubleshoot this? What is the most data that I could provide to people to be maximally useful?” They weren’t thinking about your org, your system, your scale, your developers. They were trying to go super broad, and I don’t think one-size-fits-all telemetry is a good approach.
I’ll tell you very quickly a story of a time that I introduced data to the data landfill with good intentions. We’ve gotta have empathy for past us and the past people who were in charge of this system. I was a new SRE joining a company, and we were running Kubernetes. I got into the system metrics and said: “Huh, we’re running Jobs, or CronJobs, within Kubernetes. I don’t see any metrics reporting on their status, success or failure.” I made an assumption instead of talking to people (and I would now go back and talk to people before assuming things). I assumed we had a visibility gap here. Surely, if Kubernetes wasn’t telling us that the job was successful, how could we trust any data from the application that was running inside of it?
I got my config ready, and I went and deployed kube-state-metrics, which, among other things, provides insights into Jobs and CronJobs and other resources in Kubernetes. And I was working for an observability company, so I kind of assumed, again, that people would get the value of this data I brought.
I had made some Slack posts like: “Look at this data I have gifted you. You now know whether your jobs are succeeding or failing.” There was no fanfare, there was no applause. There was no acknowledgement that this data was useful or necessary. And it turns out, you know, while I thought there was a data visibility gap, the devs trusted their application metrics, or the system had been stable enough that they didn’t need these.
And so I ended up producing a lot of data people weren’t looking at, weren’t using, weren’t alerting on. While it was useful and did provide a deeper level of understanding of what was happening in our system, the data’s not valuable if people aren’t using it. So my big takeaway was: “Oh, you gotta talk to people and figure out what data they’re using, what data they need, and plug any gaps there without making assumptions or adding more data to the landfill.”
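Circling back to the auto-instrumentation defaults Paige described, one concrete way to shape them rather than accept them wholesale is the metrics SDK’s View mechanism. The sketch below, using the OpenTelemetry Python SDK, assumes an auto-instrumented HTTP duration metric and keeps only two of its attributes; the instrument and attribute names are illustrative, not a recommendation for any particular system.

```python
# A sketch of shaping an auto-instrumented metric instead of accepting its defaults:
# keep the HTTP server duration histogram but retain only the attributes we aggregate on.
# The instrument and attribute names here are illustrative.
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.view import View

trimmed_http_view = View(
    instrument_name="http.server.duration",
    attribute_keys={"http.request.method", "http.response.status_code"},  # drop other attributes
)

provider = MeterProvider(views=[trimmed_http_view])
```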
Sophie Kohler: When you’re talking about metrics shaping, is there more than one way or is there just one golden best practice to shape your metrics?
Bill Hineline: That really comes from your app and what you need, right? Without getting too deep into how to de-stress your engineering teams or your development teams when you roll out OTel, I think one of the things is providing them with this idea of a starter kit. That starter kit gives you those foundational metrics, so you take some of the FOMO away, and you give them a good foundation to start and build upon. If you add things, that’s fine. I think you add because you need some additional color. I am very passionate about the idea that the golden signals of response time, error rate, and load, or saturation, or whatever you would like to call it, are probably some of the best indicators of how your app is doing.
You probably are gonna want to add more. You’re certainly gonna want to add more depending on the app that you have. But this gives you a good start, and I would go out on a limb and say you’re less likely to run into “I missed something” if you have those things. Meaning, you’re gonna be less likely to have some sort of catastrophic problem that went undetected if you start with those basics and then add things.
I use this example all the time, but maybe your app uses Kafka. Maybe that’s not prevalent in your organization, but you started using it and it’s not part of the starter kit. Great. Add Kafka metrics. Add them meaningfully.
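As a rough sketch of what a golden-signals starter kit could look like in application code, here is one way to record latency, load, and errors with the OpenTelemetry metrics API. The meter, metric names, and handler shape are assumptions for illustration, not an official kit; saturation typically comes from infrastructure metrics rather than application code, so it is not shown.

```python
import time

from opentelemetry import metrics

# A sketch of a golden-signals "starter kit": latency, load, and errors.
# The meter and metric names are assumptions for illustration.
meter = metrics.get_meter("starter-kit")

request_latency = meter.create_histogram(
    "app.request.duration", unit="ms", description="Response time per request"
)
request_count = meter.create_counter("app.request.count", description="Load: requests served")
error_count = meter.create_counter("app.request.errors", description="Failed requests")


def handle_request(route: str, do_work) -> None:
    """Wrap a request handler so every call feeds the three starter-kit signals."""
    start = time.monotonic()
    try:
        do_work()
    except Exception:
        error_count.add(1, {"route": route})
        raise
    finally:
        request_count.add(1, {"route": route})
        request_latency.record((time.monotonic() - start) * 1000, {"route": route})
```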
Sophie Kohler: I’m kind of hungry right now, so I’m equating this to food. It’s kind of like a sourdough starter kit. You have your foundation, you add what you need from there, right?
Paige Cruz: You let it grow over time, in generations. You’re cultivating, you’re building, it’s rising. Every region’s a little different. This reminds me of a conversation I had last season on “Off Call” with a guest, Matthew Sanabria, who worked at the time at Cockroach Labs. He said CockroachDB emits over 1,500 different metrics, and, let’s be real, he’s not looking at 1,500 metrics. Different teams who work on building different parts of the database really do care about a subset of those metrics. But for the everyday user outside of that company, there’s a core set of maybe, I don’t know, 10 or 15 metrics that they care about out of that vast tapestry.
Instead of allowing everything, why don’t you start with denying everything and add in what you need?
And yes, it can be painful to pull up the big list of every metric and decide if you need it or not. But do you wanna do that work now, proactively, calmly, during business hours? Or do you wanna be furiously trying to delete things that don’t matter because the bill, the “observability bill,” is looking over your shoulder?
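Paige’s “deny everything, then allow what you need” approach can also be sketched with Views in the OpenTelemetry Python SDK. The allowlisted metric names below are placeholders; the idea is that the chosen instruments match a keep view, while everything else only matches the wildcard drop view.

```python
# A sketch of "deny everything, allow what you need" using metric Views.
# The allowlisted names are placeholders for the handful of metrics a team has chosen.
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.view import DropAggregation, View

KEEP = ["app.request.duration", "app.request.count", "app.request.errors"]

views = [View(instrument_name=name) for name in KEEP]  # keep these with default aggregation
views.append(View(instrument_name="*", aggregation=DropAggregation()))  # drop everything else

provider = MeterProvider(views=views)
# Instruments in KEEP match both a keep view and the wildcard drop view; the keep view
# still exports them, while unlisted instruments only match the drop view and emit nothing.
```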
Bill Hineline: Not this Bill. We need a synonym for Bill. You know, just to reel it back into how this all applies to OTel, I think one of the great things about OTel is that, unlike a lot of the proprietary agents, you have fine-tuned control over what you’re going to collect. You can make it as extensible or locked down as you want, but it’s not just: “I’m stuck with what I’m getting sent, and then I have to figure out another way to get rid of it.” The other thing I’ll say, back to being empathetic to people who have been collecting everything for years, is that things have also changed with the evolution of technology and cloud and microservices, and all of this ephemeral infrastructure that now doesn’t just emit a few metrics.
It literally, to your point, Paige, emits billions of metrics if you’re a giant organization. Maybe it wasn’t that big of a deal before, but now, as companies are hyperscaling, it really does make sense to think about it. I would say, as you’re looking around at observability platforms, you need a platform that’s gonna help give you a sense of the value of the metrics you’re ingesting, and give you some way to control that and manage it easily.
Paige Cruz: If I take a step back, OpenTelemetry ushered in this new era of open observability we live in today. When I started my career, it was almost a dream when I first heard: “Oh, there’s a group that’s trying to get standardized instrumentation across all the different languages and applications,” and “Oh yeah, they want all of the observability vendors who have their own proprietary systems to support it, integrate it, and contribute back to it.” That sounded like a joke. It was wild, in the span of the first five years of my career, to see what was at the time OpenTracing go from kind of a group of tracing nerds saying: “Hey, there’s this new tracing thing. It would be a real bummer if we had five different tracing standards, so let’s try, as an industry, to kind of converge on one.” And we had three, you know, Jaeger and Zipkin and OpenTracing, which I think at the time were our leading standards. At this point, I believe everything integrates with OTel. Jaeger has deferred instrumentation to OTel at this point.
And if you look at the contributors, all the cloud vendors, all of the observability vendors, their names are on that committer list. And so this new world means we have to develop new capabilities. We have more control over our data, and that means we need to ask our vendors for the features we need to exercise that control. Who’s using this data? Is this used in a monitor or a dashboard, or has nobody ever literally queried this metric? Maybe that’s a good place to start: cleaning up the stuff that literally no one’s looking at.
Bill Hineline: There’s always a team that says, “I need a thousand metrics,” and then, you know, with the help of the right tools, you can see, well, you really used 10, so how about we clean house.
As we talk about OTel and its amazing capabilities, I think we’d be remiss not to reinforce the importance of what I would think most people understand is sort of a table-stakes philosophy, and that’s tagging. There are so many reasons for it, and OTel is also heavily reliant on tagging. Paige, you want to give us some insight there? I may have a horror story or two to share.
Sophie Kohler: Before we jump into that, can you gimme the lowdown on what you mean by tagging?
Bill Hineline: One of the examples that I give to folks at times is: okay, pull up my iPhone photos, or pull up my Photos app on my Mac, right? I’ve got 9,000 pictures of my dog. And they’re all valuable.
Paige Cruz: Unlike your metrics.
Bill Hineline: But if I wanna find that one picture of my dog where she’s sitting on the couch and doing something cute, then I gotta have more than just “dog” as a tag. I gotta have maybe “dog” and “couch,” and something else, right?
And so, similarly, in the cloud space, everything that exists in the cloud needs some labels or tags on it so you know that it’s part of an app. Maybe it’s part of some critical infrastructure. Maybe it’s PCI related, and you want to tag for that. Different companies have different tagging philosophies, but you need to have tagging governance and you need to have consistency. Otherwise, you’re still looking through a thousand dog pictures and not finding what you’re looking for.
Sophie Kohler: Got it. Thank you for the dog reference. Really helped me understand it.
Paige Cruz: I was thinking if you wanted to find the photo of your dog on vacation, having the location tag would be super handy. If you take your dog on vacation, I don’t know. I’ve only traveled with a cat on a plane once, and it was a journey. It’s not something I wanna repeat again.
Bill Hineline: I mean, this is all fascinating for pet parents out there. However, we’re here to talk about tagging and how it relates to observability and specifically OTel.
Paige Cruz: So, one of those three major pieces of OpenTelemetry I talked about earlier is the semantic conventions. That is the agreed-upon names of metrics, and maybe fields in logs. First, it’s: “What are we calling this data?” Whether it’s a metric measuring CPU utilization or how long it takes an application to respond to a request, we’re all gonna use the same name across different technology stacks, Ruby on Rails versus Django and Python. We’re all gonna speak the same language with our data and in how we describe or filter that data.
So that gets us all in the same conversation. What’s powerful about that? Some people feel like standards are limiting. In this case, standards actually expand your capabilities, because now you can start to do correlation. Because we’re all calling the HTTP latency metric the same name, we can correlate across different applications, or the instances applications are running on, to find hotspots of latency.
We can start to look at all of our data together, instead of in this really siloed view, per app, per stack, whatever. So, to me, the power of tagging is that ability for me as an SRE to look across my whole system and get an understanding.
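For a flavor of what those shared names look like in practice, here is a small sketch of a span annotated with attribute names from the HTTP semantic conventions. The handler and values are hypothetical; the point is that the same names would appear whether the service is Rails, Django, or anything else.

```python
from opentelemetry import trace

tracer = trace.get_tracer("semconv-example")

# Whether this handler lives in Rails, Django, or anything else, the span describing
# an HTTP request uses the same semantic-convention attribute names, so a query like
# "show me slow POSTs that returned 500s" looks identical across services.
with tracer.start_as_current_span("POST /checkout") as span:
    span.set_attribute("http.request.method", "POST")
    span.set_attribute("http.response.status_code", 500)
    span.set_attribute("url.path", "/checkout")
    span.set_attribute("server.address", "shop.example.com")  # hypothetical host
```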
Sophie Kohler: Got it. That makes me wonder, is there tag sprawl across teams then?
Paige Cruz: Unfortunately, when you are adding custom tags, even if you’re working within a vendor’s locked-in ecosystem, once you expose an open text box to any user, you cannot expect any sort of conformance or standardization unless you enforce it yourself. Bill, you’ve worked at some pretty big organizations. Have you seen tag sprawl in the wild?
Bill Hineline: Oh, yes. That’s why I laughed a little bit. People tend to forget that there are other people, especially in a large organization, that might need or depend on those tags, right? And certainly, tagging is really important for correlating things together. That gets really hard when there are just subtle variations in tags.
You could say: “I have tags.” Maybe you have the right key for the environment, but maybe the value is us-west-1, or maybe a [slight variation]. With that slight variance in a tag, a person looking at it can probably go: “Oh yeah, I get that.” But the reality is, it’s pattern matching, right? Those things that don’t follow that tag properly will cause you to lengthen your MTTR, and can even make it seem like you don’t have something instrumented in your environment. Your CI/CD pipeline is a great place to do some last-minute checks before deployment, not just that you have a tag defined, but that the key-value pairs are valid.
Paige Cruz: You could have app team A that says: “Yep, we have environment: staging or environment: production. I’m following the policy.” You could have team B over here that says: “Well, instead of environment I have env. We all know env means environment. So I’ll still have staging, but my tag key is env instead of the full word environment.” Me as the SRE, if I sit back, I go: “Hmm, I shoulda been more specific when I said you need an environment tag, and I should probably have a way to figure out if you are following that rule or not.”
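Here is a hedged sketch of the kind of pre-deploy check Bill and Paige describe, enforcing one canonical environment key. The required key, allowed values, and example input are assumptions for illustration; in a real pipeline the tags would be parsed from your manifests or resource definitions.

```python
import sys

# Assumed policy: every deployable unit carries the full key "environment"
# with one of these values; lookalikes such as "env" or "ENV" fail the check.
REQUIRED_KEY = "environment"
ALLOWED_VALUES = {"production", "staging", "development"}


def check_tags(resource_name: str, tags: dict) -> list[str]:
    """Return a list of tag-policy violations for one resource."""
    errors = []
    if REQUIRED_KEY not in tags:
        lookalikes = [k for k in tags if k.lower() in ("env", "environ", "environment")]
        hint = f" (found lookalike key {lookalikes[0]!r})" if lookalikes else ""
        errors.append(f"{resource_name}: missing required tag '{REQUIRED_KEY}'{hint}")
    elif tags[REQUIRED_KEY] not in ALLOWED_VALUES:
        errors.append(f"{resource_name}: invalid value {tags[REQUIRED_KEY]!r} for '{REQUIRED_KEY}'")
    return errors


if __name__ == "__main__":
    # Hypothetical input; in CI these tags would come from your manifests.
    problems = check_tags("team-b-service", {"ENV": "staging", "team": "b"})
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)
```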
Bill Hineline: I’m a big fan of the governance that a central observability team can bring, and I know that standards and governance can really step on the toes of DevOps teams. But there’s a fine line between the wild west, and the tech debt that you build as a result, and governance that allows you to get out of your own way.
Paige Cruz: I was just at SREcon and heard a presentation from folks about what happened when the HTTP semantic conventions changed the name of a metric. They felt the effect of that because they were auto-upgrading. Great policy, we wanna stay up with the latest in OTel, it’s changing so fast.
Well, when you change the name of a metric that you use in a query for a monitor or a dashboard, you have a broken dashboard and a broken monitor, because it’s calling something that doesn’t exist anymore. And so, this was not a story of doom and gloom, because they presented a new-to-me tool in the OTel ecosystem called OpenTelemetry Weaver. Bill, have you heard of this? It is a pretty new tool. Have you come across it yet?
Bill Hineline: I haven’t. I haven’t. I’d love to hear about it.
Paige Cruz: It would be up your alley, because it marries the governance for the semantic conventions OTel defines with letting organizations define their own data dictionary of sorts. So me, as the SRE, could say: “We need an environment tag on everything, and that environment tag is going to be the full word ‘environment’ spelled out.” With OTel Weaver, you can drop it into your CI/CD checks, just like you mentioned, to do that validation.
Bill Hineline: I love all of that. Plus 20 for that too. We might have to talk about that further.
Sophie Kohler: And let me just say that my brain hurts in the best way possible listening to you guys, so thank you.
Bill Hineline: This has been fun. Thanks for tuning into the first episode of O11Y Unplugged, and the first in our three-part series, “OTel: It Ain’t That Hard!” So if you’re leaving this episode thinking that telemetry still means shoveling logs into a bottomless data landfill, I hate to break it to you, you’re part of the problem.
But hey, in my effort to be more empathetic and follow Paige’s lead, I will continue to love to hear why you’ve done it in the past, and maybe help bring some awareness of how to think differently in this ever-exploding world of data. I’m Bill Hineline, Field CTO and translator-in-chief, signing off.
Thank you for joining us in this first podcast. See you next time.