Many people seem to think artificial intelligence is leading us toward some kind of robo-apocalypse or a dreamscape techno-utopia. As a software engineer working on AI projects, I’m invested in—and entertained by—those fears and expectations for how AI may affect our society.

I don’t believe we’ll be in either scenario anytime soon. But in between those extreme views lie all the real-world ways AI already enhances our lives. For example, AI has blessed us by catching 99.9% of Gmail spam in our inboxes for years now. A pilot project with smart traffic lights powered by AI is reducing vehicle wait time by 40%. As more and more industries experiment with AI, its impact on our day-to-day lives will continue to grow.

So where exactly are we with AI, and where are we headed?

AI in the name of science and industry

While our future relationship with artificial intelligence remains uncertain, we can ground our thinking by looking at the current state of AI affairs. For example, recently I attended the International Conference for Learning Representations (ICLR), where I gleaned some intriguing hints on what might come next in AI.

The conference, in Vancouver, British Columbia, has grown exponentially in recent years, both in number of attendees and number of papers submitted, so it isn’t surprising to learn that researchers continue to pour efforts into making machines smarter.

In a paper titled "Zero-Shot Visual Imitation," researchers describe a technique that enabled a robot to tie a knot and navigate an office after a single demonstration, with no task-specific guidance. And in a presentation on training deep neural networks with integers, attendees saw an autonomous bicycle follow people on its own.

In addition to these examples illustrating AI’s progress, the conference also revealed that standard approaches to AI have some serious flaws. In fact, one presentation argued that 90% of surveyed machine learning researchers agree there is at least a slight reproducibility crisis in the field.

And even as AI continues to evolve, humans clearly still hold the upper hand in many arenas. Researchers at LabSix, for example, printed a 3D model of a turtle and fooled a computer into thinking it was a rifle.

Nevertheless, while machines may still lack "general intelligence," they are getting smart and capable enough to seriously impact our society, especially by influencing the economy. In the article "What Can Machine Learning Do? Workforce Implications," researchers from MIT and the National Bureau of Economic Research say that while we can expect automation to keep advancing, a widespread replacement of human workers is unlikely anytime soon. "Machines cannot do the full range of tasks that humans can do," they state. Machines have a competence that is "dramatically narrower and more fragile than human decision making."

The research could apply to digital performance monitoring

While many of the advances presented at the ICLR conference aren't directly relevant to the digital performance monitoring and management space, a couple of ideas stand out. Consider experiments in healthcare from researchers at Johns Hopkins University that use time series data to determine the risk of death for pneumonia patients. Could the same AI strategies used to keep patients alive help DevOps teams keep apps and hosts running smoothly? Despite the huge differences between the use cases, I think it's a real possibility. Machine learning cares more about the shape of data than its specific content: the technique used in the Johns Hopkins research works on generic time series data (of which New Relic customers gather an abundance) and doesn't care whether the data describes people or machines.
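To make the "shape over content" point concrete, here's a minimal sketch of an anomaly detector that works on any numeric time series, whether the values are patient vitals or host response times. This is an illustrative rolling z-score heuristic, not the technique from the Johns Hopkins research; the function name and parameters are my own.

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value deviates strongly from a trailing window.

    Domain-agnostic: the logic only looks at the numeric shape of the
    data, so it applies equally to patient vitals or server metrics.
    """
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            # Flat history: any change at all is anomalous.
            if series[i] != mu:
                anomalies.append(i)
        elif abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady metric with one spike: the spike index is flagged.
readings = [10.0] * 20 + [100.0] + [10.0] * 10
print(rolling_zscore_anomalies(readings))  # → [20]
```

The same call works unchanged on response times, error rates, or heart rates, which is exactly why techniques from one domain can transfer to another.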

Or what about new AI techniques that learn the structure of a complex data distribution well enough to fill in regions where data is missing and sharpen regions where it's fuzzy? Such technology could help us detect anomalies or estimate values in monitoring blind spots.
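As a toy illustration of gap-filling, here's a simple linear interpolation over missing points in a metric stream. Real generative approaches learn the data's distribution and can impute far richer structure; this stand-in (with a hypothetical `fill_gaps` helper) just shows what "filling in monitoring blind spots" means mechanically.

```python
def fill_gaps(series):
    """Linearly interpolate runs of missing (None) values between
    their nearest known neighbors. Leading/trailing gaps are left as-is.
    """
    filled = list(series)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i - 1          # last known value before the gap
            end = i
            while end < len(filled) and filled[end] is None:
                end += 1           # first known value after the gap
            if start >= 0 and end < len(filled):
                lo, hi = filled[start], filled[end]
                span = end - start
                for j in range(i, end):
                    filled[j] = lo + (hi - lo) * (j - start) / span
            i = end
        else:
            i += 1
    return filled

# A two-point blind spot in a throughput series gets estimated values.
print(fill_gaps([1.0, None, None, 4.0]))  # → [1.0, 2.0, 3.0, 4.0]
```

A learned model would replace the straight-line assumption with whatever pattern the surrounding data actually exhibits (daily cycles, trends, and so on).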

What does this mean for New Relic’s AI journey?

Monitoring solutions have been collecting and visualizing software data increasingly well for more than a decade now, but as systems become more complex, the old techniques can’t always keep up. Tools like New Relic can now collect data simultaneously across hundreds of interconnected hosts and microservices. With that level of complexity, it’s no longer enough to simply throw data on a screen and leave it up to the user to sift through it to find meaning and determine the proper action.

When our customers experience incidents in their software, it can cost them significant time, effort, and potentially lost profits. These high stakes require something more from monitoring vendors. Our customers need smart systems that continually analyze their data and search for anomalous behavior, so that they can make predictions and sort data into dynamic groups. That's the magic consumers will soon expect from AI-driven technology. If our machines could take on the duty of analysis, we could hand things off to our users at the last possible moment, when only human expertise and decision-making will suffice, maximizing our users' efficiency.

New Relic has already made progress on this front, and we've learned a lot from our first steps with AI. We've built tools into our ecosystem that can tell whether things are behaving normally. Error profiles, for instance, use statistical measures to surface the errors that deviate most dramatically from the non-error transactions in your app. The profiles provide visual details about differences in the frequency of values for the events, showing you where to focus your attention without making you manually click through all the dimensions. Dynamic baseline alerting allows our users to set alert thresholds for a particular application metric based on a predictive baseline for that metric.

We’re still working toward the reliable execution and prediction accuracy necessary to create out-of-the-box global views of the current and future health of complex systems. We’re building some promising prototypes and are focused on advancing the future of AIOps. Taking a data-driven approach to AI, we validate our thinking with customers at every step along the way.

We’re continuing to evolve our digital performance monitoring platform to drastically improve the effectiveness of our users. We invite you to keep an eye on our progress, and brace for an exciting future!

Dan Rufener is the lead for New Relic’s intelligence platform. He’s based in Portland, Oregon.
