Migrating an existing enterprise application to the Amazon Web Services (AWS) cloud can give your organization some useful security advantages. But cloud security is definitely a team sport—one where working closely with AWS, and staying informed and engaged, can make a critical difference.

Over the past decade, attitudes towards the security of public-cloud environments have evolved considerably. IT decision-makers now largely accept that top-tier public cloud providers such as AWS maintain IT security expertise and capabilities that few enterprises would—or could—attempt to match.

That expertise should inform your organization’s basic security strategy for its cloud migration initiatives: Lean on AWS for practical guidance; build security considerations into your migration process from the planning phase forward; and understand and follow the vendor’s best-practice recommendations.


“First and foremost, the cloud provider is an expert in security. Use the tools and processes they provide and recommend in order to facilitate a highly secure environment,” said Lee Atchison, Senior Director of Cloud Architecture at New Relic. “These might differ from best practices you used on-premises, but that’s OK. AWS is an expert on security on AWS platforms. Use that expertise.”

Let’s examine some other security tips and tools for AWS environments. While our focus here is on migration projects involving legacy on-premises apps, the same principles generally apply when deploying modern apps architected for cloud environments.

Security insights for cloud migrations

Many organizations continue to rely on “lift-and-shift” migrations designed to move an existing application, more or less as-is, from an on-premises environment to a public cloud platform like AWS. Lift-and-shift projects can be a quick and inexpensive way to kick off a phased, longer-term cloud strategy, but they also raise some important security concerns.

“Lifting-and-shifting an existing workload into AWS is a fast way to start your transition to the public cloud—but doing so isn’t without risk,” said Jonathan LaCour, CTO at Mission, an AWS managed services provider. Mitigating that risk, he said, can involve selecting from a number of security tools and best practices—some carried over from a legacy environment, others unique to the AWS cloud environment.

1. Consider using an AWS Virtual Private Cloud (VPC)

“On-prem environments have traditionally used network segmentation, network appliances, local firewalls and other such mechanisms to implement and enforce security. When undergoing a lift-and-shift, it’s common to replicate the network topology of the existing workload within an Amazon VPC to ease that transition,” LaCour said.


New Relic’s Atchison noted that using an Amazon VPC gives software teams granular control over their traffic. “When you utilize an Amazon VPC, you get an extremely detailed ability to manage and control traffic inside your virtual network,” Atchison explained. “You have packet filtering, L2 firewalls, L4 firewalls, and L7 firewalls. Make good use of those tools.”
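To make that layer-4 filtering concrete, here’s a minimal sketch of what a single security group rule looks like in practice. It builds the request shape that boto3’s `authorize_security_group_ingress` call expects; the security group ID and CIDR range are hypothetical placeholders, not values from any real environment:

```python
# Sketch: an ingress rule allowing only HTTPS from one trusted CIDR.
# Anything not explicitly listed is denied by default -- the
# least-privilege model a VPC security group enforces at layer 4.
# The group ID and CIDR below are hypothetical placeholders.
ingress_request = {
    "GroupId": "sg-0123456789abcdef0",  # hypothetical security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 443,  # HTTPS only
            "ToPort": 443,
            "IpRanges": [
                {
                    "CidrIp": "203.0.113.0/24",  # documentation-range CIDR
                    "Description": "trusted front-end subnet",
                }
            ],
        }
    ],
}

# With real AWS credentials, this request body would be passed to:
#   boto3.client("ec2").authorize_security_group_ingress(**ingress_request)
```

The point of the shape is the default-deny posture: you enumerate exactly the traffic you want, rather than trying to enumerate the traffic you don’t.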

2. Understand where and how you can apply AWS security capabilities without refactoring

LaCour pointed out that migrating an existing app isn’t just about replicating your old security playbook; it’s about making use of the native tools and processes AWS offers.

“AWS provides an entirely new set of primitives to take advantage of, enabling security across all layers,” LaCour said. “Once a lift-and-shift is completed, I recommend that workloads rapidly apply these new primitives, applying both host-level and network-level boundaries, leveraging security groups, enabling fine-grained access controls through IAM, and encrypting data in transit and at rest.”

While some AWS security features might require you to refactor your application, LaCour pointed out that many are available immediately for use with lift-and-shift workloads: “Lift-and-shift … can still apply many best practices and security features without refactoring to apply security at additional layers.”
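The fine-grained IAM controls LaCour mentions are a good example of security you can apply without touching application code. Here’s a minimal sketch of a least-privilege IAM policy, assuming a hypothetical service that only needs to read objects from a single S3 bucket (the bucket name is a placeholder, not from the article):

```python
import json

# Sketch: a least-privilege IAM policy for one service, assuming a
# hypothetical service that only needs to read objects from a single
# S3 bucket. The bucket name is an illustrative placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyAppAssets",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],  # only what the service needs
            "Resource": "arn:aws:s3:::example-app-assets/*",
        }
    ],
}

# No wildcard actions and no "*" resources: the service can read these
# objects and do nothing else.
print(json.dumps(policy, indent=2))
```

A lift-and-shift workload can adopt a policy like this on day one; the application itself doesn’t need to change for its permissions to shrink.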

3. Don’t lose sight of critical security fundamentals during and after a migration

According to Atchison, three fundamental practices are especially important during and after a migration:

  • Least privilege: The principle of least privilege can apply widely, but it should be a driving force of your security strategy once you complete a cloud migration. “Always utilize a least-privilege approach in all aspects of your network,” Atchison said. “This applies to things like packet routing—only allow required packets to get through, instead of allowing all but bad packets through, for example. But it also applies to things like permissions management; a service should only be allowed to do the things it needs to do and nothing more.”

In that vein, Atchison recommended assigning each service its own unique credentials, never sharing those credentials, and only giving services the access they actually need. That includes revoking one privilege altogether: direct human access to production systems.

“In a properly constructed network, there is no reason why a real human should need to access anything deep in your production network,” he said. “Automated tooling that requires proper credentials should be used rather than logging in to secure computers. Logging into a production server is a no-no.”

  • Logging: Log everything, Atchison said, including both successful access attempts and unsuccessful ones. He added that logs should be sent to and stored in a separate, secured environment, such as a different VPC.
  • Layers: “Layered security processes and procedures work best in the cloud,” Atchison said. “The same can generally be said for on-prem, but it’s especially true in the cloud.”

For example, Atchison said, you can use multiple VPCs, depending on how deep you want to go into your application. The front-most VPC, which hosts your front-end services, would be the least restricted, while the back-most VPC, where your data stores and other sensitive information live, would be the most secure.
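That tiered layout boils down to a simple reachability rule: traffic should only flow one hop deeper into the stack, never skip a layer or flow backward. A minimal sketch, with tier names that are illustrative rather than drawn from any specific architecture:

```python
# Sketch of the layered-VPC idea: each tier accepts traffic only from
# the tier directly in front of it. Tier names are illustrative,
# ordered from least secure (front) to most secure (back).
TIERS = ["public-front-end", "application", "data"]

def allowed(source: str, destination: str) -> bool:
    """Allow traffic only one hop deeper into the stack."""
    return TIERS.index(destination) - TIERS.index(source) == 1

# The front end can reach the application tier...
assert allowed("public-front-end", "application")
# ...but cannot skip straight to the data tier.
assert not allowed("public-front-end", "data")
```

In a real AWS deployment, the same rule would be enforced with security groups and route tables between VPCs rather than application code, but the shape of the policy is the same.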

Security insights for cloud modernization

When people refer to modernizing an application, they’re commonly referring to retooling it (including rewriting code) to better run in a cloud environment like AWS. The popular term for this is refactoring. It can also refer to cloud-native development, meaning applications built specifically to take advantage of cloud and related technologies such as containers.

Just as with an initial cloud migration, this has its own security considerations:

1. Be prepared to deal with complexity

“As you modernize your application, the number of components will increase radically while the size of each component will decrease,” Atchison said, referring to the growth of microservices architecture as a cloud-native development approach. “This gives a lot more separation between parts of your application—which is good from a security standpoint—but it means your cloud security controls will get a lot more complicated since there are more cross-connected components. Having a plan and strategy to standardize these security aspects is important.”

Automation becomes an increasingly important piece of the puzzle, according to Atchison, as do repeatable processes; both limit the opportunity for introducing errors into increasingly complex cloud systems.

2. Leverage the AWS Well-Architected Framework

When it comes to modernizing an application for the cloud—whether refactoring an existing app or building a new one specifically to run in AWS—a lot of that expertise has been collected in a single place: the AWS Well-Architected Framework. It’s a compendium of best practices for AWS that spans five pillars, including a Security Pillar.

“When modernizing an application for AWS, I recommend using the Well-Architected Security Pillar as an instructive blueprint to guide your transformation,” said LaCour. “The framework outlines seven design principles for security in the cloud, including implementing a strong identity foundation, applying security at all layers, automating security best practices, and protecting data in transit and at rest.”

LaCour added that the framework includes practical tips as well as design best practices. Examples include establishing a principle of least privilege using IAM, centrally managing and enforcing policies with AWS Organizations, applying detective controls via the capture and analysis of logs, and building out stateful host-level firewalls with Amazon VPC Security Groups.
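As one illustration of centrally managing and enforcing policy with AWS Organizations, here is a common pattern for a service control policy (SCP): denying every account, even account administrators, the ability to turn off CloudTrail logging. The policy below is a generic example of the technique, not one drawn from the article:

```python
import json

# Sketch: a service control policy (SCP) that AWS Organizations could
# apply across every account in an organization, preventing anyone --
# including account admins -- from disabling the audit trail. A common
# illustrative example, not a policy quoted from the article.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectAuditTrail",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because SCPs are evaluated before any account-level IAM policy, a Deny here wins regardless of what permissions an individual account grants, which is what makes them useful for central enforcement.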

3. Choose tools that enable and extend your visibility

Some teams may look to third-party tools to bolster their security posture, too, especially as their cloud footprint grows. Visibility becomes an increasing challenge as IT environments become more distributed and scalable. More specifically, Atchison noted that even well-established security processes must be measured and monitored to ensure their continued efficacy. In this context, keeping close tabs on a cloud application’s performance and health, including resource consumption and usage patterns, with an observability platform such as New Relic One can be particularly important.

“The more you know how your system is functioning on the inside, the more assured you can be that it is operating securely,” Atchison said. “New Relic gives you visibility into how your application or service is actually performing, which may be different than how you think it’s performing. That distinction can mean the difference between a secure and a non-secure system.”

Kevin Casey writes about technology and business for a wide variety of publications and companies. He won an Azbee Award, given by the American Society of Business Publication Editors, for his InformationWeek story, “Are You Too Old for IT?” He’s also a former community choice honoree in the Small Business Influencer Awards.
