I'm pretty sure Olivier Pomel rarely does podcasts, but this was a pretty good one.
Some of my thoughts:
- Customers "lie to themselves" saying they prefer noise to missed issues, when in practice two false alarms make them lose faith - and the implications of that for AI.
- The path to agentic adoption comes from finding narrow use cases agents can solve with high confidence, and bringing those to life.
- How foundation models can't understand time series, and how Datadog's own foundation model, Toto (get it?!), tackles that.
Customers also lie to themselves about how complex their observability problems are.
I've worked at shops with already-structured data where you could put a 90%-solution monitoring layer on top and be done, but they obsessed over the perfect "analyze the firehose of unstructured data automatically" solution that just never came.
So we sleepwalked through many preventable outages because, rather than learning to crawl, then walk, then run, we wanted to fly to colonize Mars in one giant leap.
Part of it is incentives - putting in dumb sanity checks on structured data is cheap, easy, and a low-status problem. Pitching an all-singing, all-dancing AI-driven observability platform with beautiful dashboards is career-enhancing. You get to talk to the CTO! He puts the thing on a big screen in his office! You have years of POCs, vendor selection, hiring, and infra build-out before you get caught out as a fraud!
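For what it's worth, the "dumb sanity checks on structured data" being dismissed here can be almost embarrassingly small. A minimal sketch (the field names and the 5% threshold are illustrative assumptions, not from any real system):

```python
# Hypothetical sketch of a "90% solution" check over already-structured data.
# The schema ({"service", "error_rate"}) and the 0.05 threshold are
# illustrative assumptions only.

def check_error_rates(metrics, threshold=0.05):
    """Return the services whose error rate exceeds a fixed threshold.

    `metrics` is already-structured data: a list of dicts like
    {"service": "checkout", "error_rate": 0.12}.
    """
    return [m["service"] for m in metrics if m["error_rate"] > threshold]

alerts = check_error_rates([
    {"service": "checkout", "error_rate": 0.12},
    {"service": "search", "error_rate": 0.01},
])
print(alerts)  # → ['checkout']
```

A dozen lines like this, wired to a pager, catches the boring outages - no firehose-parsing AI required.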
Reminds me of the days about a decade ago when Big Data was a thing and there were so many anomaly detection products. Anyway, I agree with the overall sentiment here, and Datadog only stands to gain from a broadened market for LLM traffic/usage observability.
This is so true!