In our first post of this series, we talked about the questions we wanted to answer about our sign-up and deploy process, and what some of the challenges were to getting that data. In this post we’ll get a little deeper into the methods we used to answer those questions. Having decided on the Ethnio – GoToMeeting combination to recruit from our user base, we were then faced with the task of actually intercepting those users and observing them. It seems straightforward, but there were enough moving parts to make it more complicated than you might think.
The Intercept: Using Ethnio
The intercept offered users an Amazon incentive for approximately one hour of their time and prompted them to take a short, 10-question survey. This allowed us to filter the candidates: we wanted a good mixture of people deploying our various language agents.
For the first week of the research project, we had a large number of users who were willing to let us observe them (~5% of all users that saw the Ethnio survey, higher than Ethnio’s average of 3.8%). Things were looking up!
After the first few rounds of intercepting, however, the number of users we intercepted slowly started to drop. We’d often spend hours waiting for a user, only to have a “bogus” or unqualified user enter our pool. At a certain point we decided to take our future out of fate’s hands and investigate. Using data we were collecting through MixPanel, we discovered a cyclical pattern to when customers sign up for New Relic, which mapped to certain times of the workday (with some late-evening spikes for night-owls). It turns out we had unwittingly switched our intercepts to the lulls in signups. So we changed our strategy and only turned the screener on during peak sign-up times.
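As a rough sketch, the kind of pattern-finding we did might look like the snippet below: bucket sign-up timestamps by hour of day and pick the busiest hours as the times to turn the screener on. The timestamps, bucket sizes, and export format here are purely illustrative, not the actual analytics data.

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample of sign-up timestamps (in practice these would be
# exported from an analytics tool such as MixPanel).
signups = [
    "2013-04-01 09:12", "2013-04-01 09:47", "2013-04-01 10:03",
    "2013-04-01 14:30", "2013-04-01 21:15", "2013-04-02 09:05",
    "2013-04-02 10:22", "2013-04-02 22:40",
]

# Count sign-ups per hour of day to reveal peaks and lulls.
by_hour = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for ts in signups
)

# Hours sorted by volume; run the intercept during the busiest ones.
peak_hours = [hour for hour, _ in by_hour.most_common(3)]
print(peak_hours)
```

With the sample data above, the 9 a.m. hour comes out on top, so the screener would run in the morning peak rather than during overnight lulls.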
The Observations: Using GoToMeeting to observe users in real time
After catching a user who fit our criteria, we began our interview with an immediate phone call, often to the user's shock, even though they had JUST filled out a survey saying that they were available for a call. A few of these users even had to reschedule our interview, as they weren't actually prepared for a 45-minute call.
For users who met our criteria and were prepared, we promptly got set up in GoToMeeting. After beginning the session, we followed standard user research practices by explaining what was going to happen and getting consent to both continue and record our session. We then began with a short list of questions about what role they played in their company, what brought them to New Relic, what they hoped to get out of their experience with New Relic, and tried to get a sense of their overall understanding of what New Relic offers. This information was helpful for us when discussing the context and motives of these users later on as we were brainstorming and recommending changes to the sign-up and deploy process.
Many of our users had not actually signed up with the intent of deploying the New Relic agent. Although our primary focus was on the deployment process, these users proved to be quite valuable. We were able to watch them as they explored our front end site (www.newrelic.com) to try to discover more about what our product is and what we offer. These users expected to get more information once they signed up, but were dropped right into the installation process instead.
We also encountered a few scenarios in which we were unable to “watch over their shoulders.” In one case, the user was on a tablet trying to navigate the front end website; in another, the user downloaded the mobile app in addition to installing the web agent on his application. In these cases, we just rolled with the punches and had users describe what they were seeing and doing. While it was not an ideal situation, we still learned a lot that we wouldn’t have been able to if we had only worked with people who were a perfect match.
Stay tuned for Part 3 of this series, where we’ll discuss some of our big takeaways from our research and what actions we’ve started to take based on our learning.