Artificial Intelligence (AI) is getting a lot of buzz in the software industry these days. AI pops up in discussions of everything from consumer products like Amazon’s Alexa to tax preparation software to products geared toward the software industry itself.
But what exactly are we talking about when we say “AI”? And just how real are the benefits to be gained from AI in the software industry?
A lot of words get thrown around when we start talking about AI and software:
- Machine learning
- Predictive analytics
- Data science
- Prescriptive analytics
- Statistical analysis
- A bunch of IF statements
- Just good math
AI is all of that, for better or worse, and much more besides. The problem is that the hype can get ahead of the reality. Last year, for example, Gartner’s well-known hype cycle pegged machine learning at the very peak of the curve.
Of course, hype doesn’t mean that there isn’t real value in the technology; what matters is whether you’re solving an actual problem. So let’s look at a few examples of New Relic solutions that fall under the AI umbrella (all of these examples are either in limited release or beta right now).
Sometimes AI is just good math: Dynamic Baseline Alerts
New Relic’s Dynamic Baseline Alerts are designed to predict the future, but they are far from magic. They rely on well-established statistical formulas—computed on a large quantity of data at high speed—to predict the next value in a time series. With that expected value in hand we can identify when a system is not behaving as expected. Humans can do that as well, but computers are able to do it faster and more accurately.
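To make that concrete, here is a minimal sketch of the idea—an illustration only, not New Relic’s actual formula: predict each point from a sliding window of recent values and flag anything that lands more than a few standard deviations from the window mean.

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag points that deviate from the recent baseline.

    Illustrative only -- not New Relic's actual algorithm. Each point
    is compared against the mean of the preceding `window` values and
    flagged when it sits more than `threshold` standard deviations away.
    """
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.mean(recent)
        std = statistics.stdev(recent)
        if std > 0 and abs(series[i] - mean) > threshold * std:
            anomalies.append(i)
    return anomalies

# A steady signal with one spike at the end: only the spike is flagged.
readings = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 250]
print(detect_anomalies(readings))  # [10]
```

Real systems use more robust estimators than a raw windowed mean, but the shape is the same: an expected value, a tolerance band, and an alert when reality leaves the band.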
Significantly, the math can help identify cyclical patterns. Think of a typical business website: usage climbs during the day and declines in the evening. Weekends might be quieter but still see some usage. With the right math, we can tell when a system is “not normal” even if normal for Monday at 10 a.m. is very different from normal for Saturday at 10 p.m.
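One simple way to capture that “normal depends on when” idea is to keep a separate baseline for each (day-of-week, hour) bucket. The sketch below is a simplification of the concept, not New Relic’s production approach:

```python
from collections import defaultdict
import statistics

def seasonal_baseline(samples):
    """Build per-(weekday, hour) baselines from (weekday, hour, value)
    tuples. Returns {(weekday, hour): (mean, stdev)} for buckets with
    at least two samples. Simplified illustration only."""
    buckets = defaultdict(list)
    for weekday, hour, value in samples:
        buckets[(weekday, hour)].append(value)
    return {k: (statistics.mean(v), statistics.stdev(v))
            for k, v in buckets.items() if len(v) > 1}

def is_normal(baseline, weekday, hour, value, threshold=3.0):
    """True when `value` is within `threshold` std-devs of the mean
    for that weekday/hour bucket."""
    mean, std = baseline[(weekday, hour)]
    return abs(value - mean) <= threshold * std

# Monday 10 a.m. traffic is busy; Saturday 10 p.m. is quiet -- and
# each has its own notion of "normal."
history = ([(0, 10, v) for v in (5000, 5100, 4900, 5050)] +
           [(5, 22, v) for v in (300, 320, 280, 310)])
baseline = seasonal_baseline(history)
print(is_normal(baseline, 0, 10, 5050))  # True
print(is_normal(baseline, 0, 10, 300))   # False: quiet is abnormal on a Monday morning
```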
Machine learning: Project Seymour
Machine learning systems use data to improve the accuracy of their output. A common example is a system that can recognize cat pictures. Rather than a human trying to define and code all the logic that might uniquely identify the image of a cat, a machine learning system looks at lots of pictures with and without cats and learns on its own to correctly identify a cat.
Well, as much as I love cats, that doesn’t really help New Relic customers. What would? Perhaps personalized recommendations about systems that may be trending towards failure, anomalous behavior, or inefficient database queries, based on what you do and what you’re interested in.
That’s what we’re doing with Project Seymour—using anonymized data to understand the most likely job for each user, based on their usage of New Relic. We don’t recognize cats; we recognize roles like developer, operations specialist, or manager. In addition, as people use Seymour, it can learn even more about what they should pay attention to. Our collaborative filtering software runs on a large set of anonymized data to provide each user with the content they’re most likely to find useful.
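The collaborative-filtering core of such a recommender can be sketched in a few lines. Everything below is a toy illustration under assumed data shapes (interaction counts per anonymized user), not Seymour’s actual implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse interaction vectors
    (dicts of item -> count)."""
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    denom = (math.sqrt(sum(v * v for v in a.values())) *
             math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def recommend(target, others, top_n=2):
    """Score items the target user hasn't touched by the
    similarity-weighted usage of other (anonymized) users."""
    scores = {}
    for other in others:
        sim = cosine(target, other)
        for item, count in other.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A developer-like user gets developer-like content recommended,
# because similar users found it useful.
developer = {"error_traces": 5, "deploy_markers": 3}
peers = [{"error_traces": 4, "deploy_markers": 2, "slow_queries": 5},
         {"capacity_report": 6, "billing": 4}]
print(recommend(developer, peers, top_n=1))  # ['slow_queries']
```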
Generic artificial intelligence: Root Cause Analysis
This last category is actually the least well defined. A good working definition of AI for our purposes is a solution that replaces something that humans typically do—and hopefully does it faster, cheaper, and more reliably.
It turns out there are lots of things that experienced software engineers and DevOps folks know that can be captured programmatically in a variety of ways. Implementations might range from, yes, a bunch of IF statements that check whether you’re running out-of-date agents, to more complicated heuristics that look for correlations and understand the context of a software trace in order to find the root cause of a problem.
New Relic has code that does both. The Seymour agent’s Out of Date card performs a simple version check, but does so proactively to help technical staff who may not have software updates top of mind.
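The version-check side really can be that simple. A minimal sketch (the actual Out of Date card logic is presumably more involved) compares versions numerically, component by component:

```python
def parse_version(v):
    """Split "4.10.1" into (4, 10, 1) so comparison is numeric."""
    return tuple(int(part) for part in v.split("."))

def is_out_of_date(reported, latest):
    """True when the reported agent version is behind the latest release."""
    return parse_version(reported) < parse_version(latest)

# Plain string comparison would get this wrong ("4.2.0" > "4.10.1"
# lexicographically); numeric comparison does not.
print(is_out_of_date("4.2.0", "4.10.1"))  # True
```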
A more sophisticated service looks for the root cause of a system issue by evaluating the software trace to see if there is a method or external service that is correlated to the problem. This can provide the most likely place to start troubleshooting. If the service doesn’t find a likely source, it lets us know; it shows a suggestion only if the confidence is high enough to constitute a “good” suggestion. For AI to be useful, it must be trusted, and one way we build trust is by being transparent.
Both of these types of AI are useful because they support the humans who need that information to do their jobs. Neither is heavy math, nor does either meet the strict definition of machine learning, but both are smart software that can do things you might think would take a knowledgeable human to accomplish.
Why does AI matter so much to New Relic? Because at the foundation of AI and machine learning is data. And no one is more invested in making data useful than New Relic. We ingest an average of more than 500 billion data points a day for our customers.
We’re committed to finding the reality behind the AI hype so that we can help our customers use their data to prevent problems, resolve problems faster, and gain a better understanding of what’s going on in their ecosystem.
AI is real and useful, but it’s also complicated. I love being part of a team exploring new features and new techniques no matter where they fall in the AI spectrum. Our bottom line is “Is it useful?” It’s a big bonus that sometimes it is also pretty cool.