In this episode of the “New Relic Modern Software Podcast,” we welcome Barry Sacks, CTO of Phlexglobal, a UK company helping pharmaceutical companies around the world streamline their heavily regulated clinical research processes. Phlexglobal is going through its own modernization journey with a new microservices architecture managed with Kubernetes, incremental migration to the cloud, and the adoption of DevOps practices. And Barry’s team is also helping Phlexglobal’s own legacy Big Pharma customers to modernize.
I’m joined by my co-hosts, New Relic Senior Program Manager Tori Wieldt, and Manesh Tailor, Director of Solutions in New Relic’s technical solution sales organization, for a conversation with Barry about how Phlexglobal is leveraging technology to disrupt the pharmaceuticals industry.
You can listen to the episode via the player below, or you can get all the episodes by subscribing to the Modern Software Podcast on Apple Podcasts, Libsyn, or wherever you get your podcasts. Read on for a full transcript of our conversation with Barry (edited for clarity):
New Relic was the host of the attached forum presented in the embedded podcast. However, the content and views expressed are those of the participants and do not necessarily reflect the views of New Relic. By hosting the podcast, New Relic does not necessarily adopt, guarantee, approve, or endorse the information, views, or products referenced therein.
Fred: To start off, Barry, can you tell us a little bit more about Phlexglobal? I know that it helps pharmaceutical companies with their clinical trials, but what does that actually entail?
Barry: Phlexglobal is a market leader in providing expertise and product-related services. And we support what we call the trial master file management of the clinical trial.
All clinical trials are required to submit necessary information to demonstrate that the clinical trial was undertaken ethically and within regulatory controls and obligations. What Phlexglobal provides is a range of services and products that effectively make that process as simple as possible for our clients, as well as supporting them with those regulatory obligations.
Typically, our customers are looking to manage their clinical trials in a very timely way, to make sure that what they are submitting at the end of the trial is complete, and to understand at any point in time what their state of inspection readiness is. That effectively means that while the trial is underway, an inspector could knock on the door at any time and ask to undertake an audit review to make sure they are working in a compliant fashion.
Tori: How has the company evolved? I know you’ve been through a lot of change lately. Can you tell us about that?
Barry: Phlexglobal is in its 21st year. It was a traditional services business, providing resources to support the clinical trial process for our clients. As digitization became more prominent, then the business developed software to support it as a back-office process, initially. And that software then became more important to our clients and evolved to become a standalone SaaS product in its own right, supported by our expert services.
Fred: I want to hear more about that software and how it works, and how it’s changing the company. But first, maybe you can bring us up to date on your role there.
Barry: I’m the Chief Technology Officer of Phlexglobal. I was brought in by our current owners and private equity investors, a company called Vitruvian Partners. Vitruvian saw the opportunity within Phlexglobal to accelerate that evolution towards being a very digitally focused, technology-focused organization. At that point in time, I was asked to come on board to support that digital transformation.
My background, historically, has been as a CTO, working within large and small companies, from startups to blue-chip organizations, helping them to get the most value out of technology, whether that’s pure digital transformation, product innovation, or managing the transition from a legacy, software-driven business to one that is cloud-focused and cloud-based.
Tori: Given those changes, and that you are in a highly regulated industry, tell me about your biggest technological challenges and opportunities?
Barry: I like to bring quite a lot of agility into the organization and to move our product evolution forward at pace, and still demonstrate to our clients and to our regulators that we are still managing our products and validating our products to their expectations. That is sometimes a challenge. The industry is not necessarily as forward-thinking as some other industries in that regard. And regulation does also add quite a lot of process overhead to the product evolution or product management process.
So, I think some of the challenge is: How do you move a business like ours forward at pace and still retain the demonstration of the qualitative aspects and regulatory aspects that we’re obliged to adhere to?
Manesh: Would you say that you’re changing customers’ expectations; or are they demanding a much more digital service, and they’re forcing you to change the way you deliver your business?
Barry: Ever since I’ve been on board, I have seen a much stronger focus from our clients around information security and technology, to ensure that we are able to protect their data, secure their data, and provide a performant, reliable service solution to them.
Historically, the industry was much more focused around the people aspects of providing those services. And as the industry has digitalized and transitioned to being more technology-focused, then the demands on us from our clients, their auditors, and the regulators are much more focused around information systems, and particularly, the security of those systems.
To answer the other part of the question: We have to drive innovation in the industry. We have to try and pull the industry forward and allow them to understand the benefits of the technology approaches being used in other industries.
Fred: Barry, I think it would be helpful to talk about this new product—just to get a little more insight into what we’re really talking about.
Barry: Our core client-facing product is a product called PhlexEview. And PhlexEview is a TMF (trial master file) management service and platform.
Throughout the life of a clinical trial, there is a lot of information and documentation that’s generated and that needs to be retained. There needs to be an evidence trail, an audit trail of the communications. All of this information needs to be collated. It needs to be stored in a very highly structured way. It has to be put into the right place so that at the end of the trial, when that information is extracted and submitted to a regulator, the regulator knows where to look for certain information, has evidence that that information was put in at the right time, signed by the right people, and placed in the right artifact, as we call them.
And during that clinical trial process that could run for 10 years or more, at any point in time the sponsor, which is typically a pharmaceutical company, can understand where they stand: How complete is the TMF based on where you are in the life cycle of the trial? And if I were to be inspected, could I demonstrate to the regulator that that clinical trial has been undertaken to meet their expectations?
So PhlexEview, as a service, provides all of that management and oversight to our clients, who are typically the pharmaceutical companies. We also provide services and people to manage that to best practice. A lot of companies will outsource all of the document processing, management, and quality control to us as a complete, outsourced proposition or service offering.
Fred: Barry, you mentioned that these trials can last a decade or more. That’s forever in terms of technology evolution. How do you deal with the disruption of the technology changes and take advantage of new technologies in an environment like that?
Barry: That is an interesting challenge. The industry is still very document-focused, as opposed to information-focused. The clients are quite risk-averse, as you would expect. You have to demonstrate, clearly, the benefits of following you or coming along on the ride—on the journey of evolution with you.
What I will say is that as clients get much more comfortable, for example, moving to the cloud, it’s easier when we offer to move them from our legacy, on-prem solutions to the cloud. Because internally, their IT folk are perhaps rolling out Office 365 to the organization.
They may well be using document management systems that are already cloud-hosted, as an example. So, I think there is a natural evolution of the industry that makes those conversations a little easier. But, you know, I’m responsible, ultimately, for demonstrating that our products are fit for purpose, and part of that challenge is convincing our clients that they need to continue to evolve and move with us.
Manesh: Drawing parallels to the financial services industry that you mentioned earlier, having worked with many of these organizations, they’ve been very selective about which types of workloads they move to the cloud, and they have a certain level of resistance for certain workloads as well. How has that worked out for you?
Barry: Coming from the background that I have, I was quite surprised at that initial level of resistance to cloud—to the point that some historical clients even have clauses in their contracts to say, “We don’t ever want to go to the cloud. It’s a big scary thing for us. We’re in a regulated industry. We can’t protect our information.”
I think that that kind of frozen attitude is thawing somewhat. But again, it’s about working with our clients as partners, rather than looking at us just as a vendor supplying a service. We’re certainly getting there: One of the biggest changes I’ve seen during the last 18 months is a client’s readiness to understand that, actually, that is the way to go—and to accept a cloud-hosted solution.
Tori: Barry, your organization made its own move to the cloud. Why don’t you talk a little bit about that in terms of some of the changes you’ve seen and the advantages you get out of it?
Barry: I came into a business that was very successful. But I think it’s fair to say that the success of the organization hadn’t necessarily been reflected in our solution. The solution was functionally sufficient. It was performing the service that it was told to achieve. But as an on-prem solution, it was really quite constrained—and becoming more so—by requiring ever-greater hardware environments to enable it to scale. So, it was necessary to review that system and to understand where to make changes to give us the benefits that a distributed system built on cloud technologies affords us.
You never, unfortunately, have the opportunity of just starting again. Everything you do has to be an evolution. Our approach has been to take a traditional, sort of Agile MVP approach, where we can demonstrate some quick wins, some value, in what we want to achieve strategically, which is to have everything hosted in the cloud using microservices; and to have a component architecture that allows us to pick and choose how we deliver that service to our clients.
Tori: Tell me how your tech stack has evolved in this process. What are some of the key technologies you’re depending on now?
Barry: We were a Microsoft house and still are a Microsoft house. We were using some of the legacy .NET technologies. Our products are now fully C#-based on .NET, and increasingly .NET Core.
Our cloud hosting provider is Azure. It’s very important with GDPR and some of the U.S. regulations that we can demonstrate to our clients where the information is stored, how it’s managed, how it’s distributed, and how it’s viewed. And our partnership with Microsoft gives us flexibility around how we demonstrate that and how we can actually manage our infrastructure to isolate that data and information.
When I came into the business, we had a great team, and we were managing these systems to client expectations, but we were quite blind to the inner workings of this monolith, this on-premise application that we’d built. You know, we’ve got over a million lines of code in this product. And when issues did occur, I think it’s fair to say there was quite a lot of swarming: putting a lot of people onto the job to try and understand where those problems were. Taking a more traditional route, you turn up the debugging, you start reading through the log files, you start to look for pointers, and you’re kind of shooting in the dark to try and understand where issues might lie.
Something that I think was really important to us was to bring in modern toolsets and different ways of working that gave understanding and visibility into the performance of our applications. And to help us, initially, understand and drill down into where issues might be lying, and to investigate those in a much more productive and efficient way than we had achieved historically.
Fred: Tell us how you got involved with New Relic—what products you’re using and the benefits they’re bringing.
Barry: I had the benefit of using New Relic in a previous life—introducing New Relic a number of years ago into a client I was working with. But I also had the luxury and fun of developing my own SaaS-based business opportunity. And quite early on, I selected New Relic to support the application service that I was delivering. So, I came into Phlexglobal with some experience of using New Relic and the benefits that it could bring.
Quite early on, New Relic was one of the first services that I introduced into the organization. And initially, we really just used New Relic Infrastructure to understand our hardware—what was actually going on with our boxes.
Because we were a monolithic, on-premise application, we were very dependent on hardware scaling to provide performance to our clients. We were in that constant cycle of: Have we got enough CPU? What percentage is that running at? Have we got enough storage? Have we got any network latency that’s affecting performance?
The team had used reasonably traditional tools to monitor that infrastructure. But as we started to scale, introduce virtualization, introduce a number of tiered VMs across the service, it became more and more obvious that we needed an integrated APM solution to provide us with insights into our infrastructure and our overall application performance.
So, initially, I introduced New Relic to provide infrastructure insights, which quickly developed into instrumenting our application using APM. And we then started to roll it out across our entire solution, both infrastructure and application. And now, today, we use the full range of New Relic solutions. We use APM, Browser, Synthetics; we use Insights and Infrastructure.
We actually have quite a number of plugins supporting our solution. We use MySQL as our relational database. We use Redis for in-memory caching and SendGrid as a service for our email distribution. The one module we don’t use is Mobile. Historically, we haven’t had a mobile offering, but by the end of the year, we will have a compliant scanning solution.
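For readers less familiar with the in-memory caching Barry mentions, the pattern Redis provides (key/value storage with per-entry expiry) can be pictured with a minimal Python sketch. The class and key names here are hypothetical illustrations, not Phlexglobal’s actual code:

```python
import time

class TTLCache:
    """A minimal in-memory cache with per-entry time-to-live,
    illustrating the pattern a Redis layer provides (SET with
    an expiry, then GET until the entry lapses)."""

    def __init__(self):
        self._store = {}  # key -> (value, absolute expiry time)

    def set(self, key, value, ttl_seconds):
        # Store the value along with the moment it stops being valid.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # Lazily evict expired entries on read.
            del self._store[key]
            return default
        return value

cache = TTLCache()
cache.set("trial:123:status", "inspection-ready", ttl_seconds=60)
print(cache.get("trial:123:status"))  # prints: inspection-ready
```

A real Redis deployment adds network access, persistence options, and eviction policies on top of this basic contract, which is why it suits a multi-server SaaS product better than a per-process dictionary.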
Tori: I understand you guys are using Kubernetes in a big way. Can you tell us a little bit about the adoption and how it’s helping you deliver value to your customers?
Barry: What I wanted to achieve here at Phlexglobal was really to move towards containerization. We introduced Docker into the mix and started to Dockerize the components that we were hiving off as we pulled them away and moved to a more service-orientated architecture.
A natural evolution of that was: How do we orchestrate those containers? Historically, we had named services, and we were just using virtualization. As you move to orchestration, you become outcome-focused around performance, clustering, and always-on availability, and you start to lose the ability to understand how those servers or containers are performing individually.
What you need is a much higher view of that orchestrated environment. That, for us, was enabled by Kubernetes.
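The “higher view” Barry describes amounts to rolling per-container metrics up into a single service-level summary, which is roughly what Kubernetes plus a monitoring layer provides. This Python sketch uses invented field names purely for illustration; it is not a real Kubernetes API:

```python
def service_rollup(containers):
    """Aggregate per-container metrics into one service-level view:
    how many replicas exist, how many are healthy, and the average
    CPU load across the healthy ones. `containers` is a list of
    dicts with 'name', 'cpu_pct', and 'healthy' keys (hypothetical
    field names for illustration)."""
    running = [c for c in containers if c["healthy"]]
    total = len(containers)
    return {
        "replicas": total,
        "healthy": len(running),
        "availability_pct": round(100.0 * len(running) / total, 1) if total else 0.0,
        "avg_cpu_pct": round(sum(c["cpu_pct"] for c in running) / len(running), 1)
        if running else 0.0,
    }

pods = [
    {"name": "phlexeview-0", "cpu_pct": 42.0, "healthy": True},
    {"name": "phlexeview-1", "cpu_pct": 58.0, "healthy": True},
    {"name": "phlexeview-2", "cpu_pct": 0.0, "healthy": False},
]
print(service_rollup(pods))
# prints: {'replicas': 3, 'healthy': 2, 'availability_pct': 66.7, 'avg_cpu_pct': 50.0}
```

The point of the sketch is the shift in perspective: once an orchestrator is replacing and rescheduling containers on its own, the interesting question is no longer “how is container X doing?” but “is the service as a whole healthy?”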
Manesh: With the shift in customer expectations and the technology that you are building your platforms and services on top of, there must be some cultural changes you’ve had to make in the organization, as well. Any challenges with that?
Barry: Yeah, many. We were quite paper-orientated in terms of our delivery processes. We were quite traditional in terms of waterfall methodology. Industry regulators still expect to see a V-model validation process demonstrated or undertaken by vendors such as ourselves.
So, one of the challenges we have is: How do we introduce agile processes and still meet our regulatory obligations, with the validation-process paper trail that is necessary for us to take our solution to our clients, and for our clients to be able to take that solution into production?
So, one of the challenges is: How do you move an organization from being fairly siloed, fairly traditional, fairly process-oriented, waterfall based, to one that is very focused on customer outcomes, very focused on agile-based processes. And to culturally change the organizational focus to be much more aligned almost to a startup environment, as opposed to a traditional legacy enterprise?
You know, organizational design helps, so we moved to a DevOps-focused model: taking the organization from a traditional IT support mentality to one that is much more akin to having a scrum team comprised of all of the capabilities, from a technology point of view, that are needed to deliver a solution, and then empowering that team to take that forward, utilizing all the skills within the team.
This is where New Relic came into its own: How do you empower people to understand the dynamics of that environment without compromising security? We don’t want to let our developers loose on all of our environments, including production, but we still need to provide them views so they can understand: How is our environment performing? If there are issues, how do you allow people to drill down into that environment to help diagnose and understand what’s going on? New Relic has been instrumental in that transformation.
Fred: That’s awesome. My next question, I guess, extends that into your customers. How are you dealing with these changes in the cultural issues with your customers, and has monitoring and observability played into that at all?
Barry: It has. For our customers, their main concern is the availability of the system.
You know, running a clinical trial is a very expensive process for our clients. It costs around, I think, $2 billion to bring a drug to market. And so, if our system isn’t available, or if our system doesn’t manage that information to their expectations, a document that goes missing is quite a big deal if that document potentially means a drug isn’t approved for marketing. We have to demonstrate to our clients that we can manage their information responsibly and to their expectations.
We can use tools such as New Relic to demonstrate that. We can provide, for example, availability reporting. When issues do occur, we clearly have to be reactive to investigate those issues. But using tools such as New Relic, which is now our go-to service, whenever an issue does occur, we can very quickly understand where that issue lies and the root cause for that issue.
Having the ability, for example, to overlay infrastructure performance information alongside, or over the top of, the application view means we can very quickly understand that a perceived application issue may actually be related to an infrastructure issue, whereas historically, investigating such an issue would have been a large, manual undertaking. For our clients, the benefit is a very timely resolution to any issues, a proactive means of demonstrating the performance of the system, and evidence of our compliance obligations.
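As a toy illustration of the kind of overlay Barry describes, the sketch below flags application errors that occur shortly after a host CPU spike. It is a crude stand-in for what an APM tool does automatically; the threshold, window, and data are all invented:

```python
def correlate(app_error_times, cpu_samples, window_s=60, cpu_threshold=90.0):
    """Flag application errors preceded (within `window_s` seconds)
    by a saturated-CPU sample, i.e. overlay infrastructure data on
    top of application data. `cpu_samples` is a list of
    (timestamp, cpu_percent) tuples; `app_error_times` is a list
    of error timestamps. All names are illustrative."""
    saturated = [t for t, pct in cpu_samples if pct >= cpu_threshold]
    flagged = []
    for err_t in app_error_times:
        # An error is "infrastructure-suspect" if CPU was saturated
        # at any point in the window leading up to it.
        if any(err_t - window_s <= cpu_t <= err_t for cpu_t in saturated):
            flagged.append(err_t)
    return flagged

cpu = [(0, 35.0), (30, 95.0), (120, 40.0)]
errors = [45, 200]
print(correlate(errors, cpu))  # prints: [45]  (the error at t=45 follows the spike at t=30)
```

A real tool correlates many more signals (memory, disk, network, deployments) and presents them on a shared timeline, but the underlying idea is this same time-window join between infrastructure and application events.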
We do have to be very careful around obfuscation of sensitive data. In our industry, there’s a lot of sensitivity about the information that’s stored with the clinical trial. We have to have tools and services, and resources that are able to manage that information in a compliant way. And so, again, New Relic gives us a way of being able to drill down into our production systems, without putting at risk some of the regulatory requirements around information security and privacy.
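The obfuscation Barry mentions follows a familiar pattern: mask sensitive fields before telemetry or log records leave the regulated boundary. A hypothetical Python sketch of that pattern (the field names are invented, not Phlexglobal’s real schema):

```python
# Hypothetical sensitive fields -- illustrative only.
SENSITIVE_KEYS = {"patient_id", "subject_name", "date_of_birth"}

def obfuscate(record):
    """Return a copy of a telemetry/log record with sensitive fields
    masked, so it can be sent to an external monitoring service
    without exposing regulated data."""
    return {
        key: "***REDACTED***" if key in SENSITIVE_KEYS else value
        for key, value in record.items()
    }

event = {"trial": "CT-2019-001", "patient_id": "P-44821", "status": "document uploaded"}
print(obfuscate(event))
# prints: {'trial': 'CT-2019-001', 'patient_id': '***REDACTED***', 'status': 'document uploaded'}
```

Masking at the source like this, rather than trusting the downstream tool to filter, is what lets a team drill into production telemetry without the monitoring system ever holding the sensitive values.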
Manesh: In addition to the information security areas that the regulators have control over, there must be performance measures that they will start to introduce as well, as more and more of the services become technology-delivered.
Barry: We work very closely with the regulators, as a vendor, to try and understand what their expectations are and what they want to see us deliver to satisfy their expectations. So, one of the interesting things around, you know, the TMF, the trial master file, is that it’s actually very dependent on a number of other systems integrating with it.
And so, one of the interesting things about that is, we need to be very open. We need to open up our systems, whereas, historically, they were quite closed, best-of-breed environments. Those need to become very open, integration-ready environments that can share and pass information into and out of our solution, and into third-party platforms.
In terms of performance, the expectation is around what we call timeliness and completeness in the quality of the information going into the TMF, and being able to demonstrate that. Our clients are becoming much more sophisticated in their expectations of performance: that you can get information into our system readily, and that you can view and manage that information in a performant way.
Fred: Barry, this has been a really fascinating look at your industry. Do you have any advice for other companies that might be attempting this kind of digital transformation?
Barry: It isn’t all about technology. Perhaps it’s a little clichéd, but I think it’s very much about people, process, and technology. No one of those three things will achieve an outcome on its own. We’ve invested significant amounts of time and effort in our people, making sure they understand the journey we’re on and that we have acquired the skills to support them. We’ve even introduced services such as New Relic University to allow people to self-serve and educate themselves on some of the technologies we’re using, and to understand how to get the best out of the New Relic environment. Insights, for example, is invaluable to us. But to get the best out of it, you need to know how to drive it effectively.
So, focus on the people and what you need from them. I think process is extremely important, especially in a highly regulated industry. Unfortunately, you can’t afford to take a “fail fast, learn fast” mentality. From a client’s perspective, we can’t afford to fail fast. But what you can do is innovate internally, and you can afford to allow people the latitude to fail internally, so that they’re learning quickly.
I’ve differentiated between what we produce and present externally to meet our regulatory validation requirements, and what we do internally, where we support people to learn on their own and to drive innovation as quickly as possible. To do that, though, you have to recognize that you won’t always get things right the first time.
And then, of course, technology has a huge part to play. We are still learning and evolving very quickly. Our partners’ systems and services are evolving very quickly, and I include New Relic in that. We’ve been working with you on your evolution of support for Kubernetes, as an example, and your support for instrumenting Windows in Docker. We need your support as a vendor so we can provide the services we provide, as a vendor to our own clients.
It’s never gonna be easy. I think you really do need to look at your organization as an extended enterprise; and look at the other vendors that enable you to move as quickly as possible, internally, to achieve an objective. That’s where New Relic has been instrumental to us in supporting that journey.