Oct 16, 2019 3:09PM
A Note About HIPAA
There are many services in the AWS cloud platform that are deemed compliant with the Health Insurance Portability and Accountability Act (HIPAA). They are documented here. For the uninitiated, HIPAA provides regulations for securing Protected Health Information (PHI), which includes a patient’s name, address, date of birth, Social Security number and much more. Organizations that fail to adhere to the act are subject to fines and imprisonment, making compliance a key priority for stewards of this sensitive data. If you examine the whitepaper linked above, you will notice a surprising omission: AWS Elastic Beanstalk. I know what you are thinking: this is a dead stop for HIPAA-compliant organizations, right? Not exactly. Let’s dig a little deeper.
At first blush, you would expect not to be able to use the Beanstalk service. However, with a little legwork we come across this blog post on the AWS Partner Network (APN), which makes the following assertion:
Just because a service is not HIPAA-eligible doesn’t mean that you can’t use it for healthcare applications. In fact, many services you would use as part of a typical DevSecOps architecture pattern are only used to automate and schedule automation activities, and therefore do not store, process, or transmit PHI. As long as only HIPAA-eligible services are used to store, process, or transmit PHI, you may be able to use orchestration services such as AWS CloudFormation, AWS Elastic Beanstalk, Amazon EC2 Container Service (Amazon ECS), and AWS OpsWorks to assist with HIPAA compliance and security by automating activities that safeguard PHI.
In other words, you can achieve HIPAA compliance as long as only HIPAA-eligible services are used for storing, processing and transmitting sensitive PHI. Guess what: we asserted in part 1 that AWS Elastic Beanstalk is only an orchestrator over Infrastructure as a Service (IaaS) offerings, specifically EC2 and Elastic Load Balancing, both of which are HIPAA-eligible! Great news, we are back in business with our lift-and-shift effort. The aforementioned article asserts we can define an internal-facing application (internal application load balancer) with end-to-end encryption, thus protecting our data in transit. Data at rest is stored on-premises and processed only on locked-down EC2 instances with Remote Desktop Protocol (RDP) capabilities disabled. The servers are also exposed only to the internal company network via the Direct Connect VPC we configure against.
Below is the architecture we are seeking to build, with some omissions for clarity’s sake: mainly an S3 bucket for code storage, security groups and Route 53 hosted zone changes.
In this design, the terms public and private subnet are a bit misleading and serve more as labels. Essentially, all the subnets are reachable from the Direct Connect/on-premises network. We could optionally tighten the private subnets down with Network Access Control Lists (NACLs); however, because this load balancer is an internal load balancer, we already have protection from the outside world accessing these servers in any way. In addition, Remote Desktop Protocol (RDP) connections are not allowed, so NACLs would be another redundant layer of protection if implemented. The “private” subnets are labeled as such because the only way to reach those servers is through the “public” internal load balancer.
Security Groups (SG)
Our design calls for three security groups. The first involves port 22 (SSH), which frankly doesn’t make a lot of sense for a Windows server. This port is configured on the VPC security group created by Beanstalk, alongside the desired HTTP port 80 and HTTPS port 443 rules. The port 22 rule is created automatically because we assign an SSH key to the EC2 instances, in case a need arises to add an RDP rule for remoting into a server (decrypting the key is how you obtain the Administrator user password for each instance). As a result, Beanstalk creates this SG rule and exposes a potential port vulnerability (as of this writing we have been unable to prevent this behavior). We chose to map this port’s source to a second security group with no inbound or outbound rules, effectively stopping port 22 traffic dead in its tracks. The third security group routes HTTP/HTTPS traffic through the internal load balancer. Inbound rules are defined as follows:
Outbound rules are defined as follows, with the destination pointing to the SG created for the VPC by Beanstalk (with the original port rules):
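The rule tables above were screenshots in the original post. As a rough sketch, the load balancer SG could be expressed as boto3-style request parameters; the group IDs and CIDR below are placeholders, not values from this deployment:

```python
# Sketch of the internal load balancer's security group rules, expressed as
# parameters for ec2.authorize_security_group_ingress/egress. All IDs and the
# CIDR are hypothetical placeholders.
LB_SG = "sg-0lbplaceholder"                 # internal ALB security group (hypothetical)
BEANSTALK_VPC_SG = "sg-0ebplaceholder"      # SG Beanstalk created for the VPC (hypothetical)
ONPREM_CIDR = "10.0.0.0/8"                  # on-premises range via Direct Connect (assumed)

# Inbound: allow HTTP/HTTPS only from the internal (Direct Connect) network.
ingress = {
    "GroupId": LB_SG,
    "IpPermissions": [
        {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
         "IpRanges": [{"CidrIp": ONPREM_CIDR}]}
        for port in (80, 443)
    ],
}

# Outbound: forward HTTP/HTTPS only to the Beanstalk VPC security group.
egress = {
    "GroupId": LB_SG,
    "IpPermissions": [
        {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
         "UserIdGroupPairs": [{"GroupId": BEANSTALK_VPC_SG}]}
        for port in (80, 443)
    ],
}
```

Scoping the outbound side to the Beanstalk VPC security group (rather than a CIDR) means the rule keeps working as instances come and go during scaling events.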
Before we move ahead, there is a catch: the load balancer SG is created dynamically at runtime via EbExtensions, which will be discussed in part 4 of this series.
Software settings are pretty straightforward, and we largely ignored other optional features for now in favor of investigating them later. The defaults were acceptable provided we chose the proper solution stack, which in our case was 64bit Windows Server 2016 v2.2.2 running IIS 10.0 at the time of writing. Check here for the latest .NET on Windows Server with IIS platform versions supported by Elastic Beanstalk. Also, as a general FYI, we are using the high availability configuration preset for our application (for load balancing purposes). Be aware this comes with additional costs, and you will need to click the “Configure more options” button in the create new application wizard to set this choice.
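The solution stack and high availability preset above boil down to a handful of environment parameters. A minimal sketch, assuming hypothetical application and environment names (pin the stack name against the currently supported platform list before using it):

```python
# Sketch of creating the environment with a pinned solution stack, shaped like
# the parameters elasticbeanstalk.create_environment expects. Application and
# environment names are hypothetical.
create_env_params = {
    "ApplicationName": "internal-app",        # hypothetical
    "EnvironmentName": "internal-app-prod",   # hypothetical
    "SolutionStackName": "64bit Windows Server 2016 v2.2.2 running IIS 10.0",
    "OptionSettings": [
        # The "high availability" preset means a load-balanced environment
        # (as opposed to single instance), fronted by an application load balancer.
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "LoadBalancerType", "Value": "application"},
    ],
}
```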
Instance types are largely up to you; however, we did our initial testing with t3.micro, believe it or not. Let’s be clear, AWS does not recommend this; I believe they call out t2.medium as the recommended minimum for Windows. I am here to tell you it is possible with smaller sizes if you are looking to minimize cost (maybe for a dev/test environment not seeing a ton of use). The defaults for CloudWatch and the root volume were fine for us. Next we needed to pay attention to the EC2 security groups. You should place the instances in the VPC security group provisioned by Beanstalk, so if this is a brand new application you are spinning up, do not specify an SG.
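The instance choices above map to the launch configuration namespace. A sketch, with a hypothetical key pair name:

```python
# Sketch of the launch configuration options discussed above. t3.micro is
# below AWS's recommended minimum for Windows; it is used here only to keep
# dev/test costs down. The key pair name is hypothetical.
launch_options = [
    {"Namespace": "aws:autoscaling:launchconfiguration",
     "OptionName": "InstanceType", "Value": "t3.micro"},
    {"Namespace": "aws:autoscaling:launchconfiguration",
     "OptionName": "EC2KeyName", "Value": "internal-app-keypair"},  # hypothetical
    # Deliberately no SecurityGroups option: leaving it unset on a brand new
    # application keeps the instances in the SG Beanstalk provisions for the VPC.
]
```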
We configured capacity as follows for the auto scaling group:
Effectively, this requires our application to always have one server available, with the capability to scale out to two if an auto scaling trigger trips. The scaling cooldown is the period of time that must pass before another scale-in/out operation is performed (so you are not constantly spinning instances up and down). The default scaling triggers were used for the sake of this proof of concept, but adjust as desired:
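As a rough sketch, the capacity settings above translate to the auto scaling group namespace (the cooldown shown is Beanstalk's default of 360 seconds; the trigger defaults are left untouched):

```python
# Sketch of the auto scaling capacity discussed above: one instance always in
# service, scaling out to at most two. Cooldown is the Beanstalk default.
asg_options = [
    {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "1"},
    {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "2"},
    {"Namespace": "aws:autoscaling:asg", "OptionName": "Cooldown", "Value": "360"},  # seconds
    # The default scaling trigger (aws:autoscaling:trigger namespace) is left
    # as-is for this proof of concept.
]
```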
Choose the application load balancer and add ports for HTTP/HTTPS, along with the certificate used for the SSL port. Under processes, the default should be entered for HTTP with a health check path of the root. You should enter a record for HTTPS and ensure both process definitions have stickiness disabled, assuming sticky sessions are not important to you. The health check URL can be changed if desired. Essentially, it ensures that the configured HTTP response codes (200 by default) are returned at the path specified. If they are not, the instance is marked unhealthy and an auto-scaling event to replace it is triggered. Lastly, we configured the Rules section as follows to ensure precedence of ports/paths (preferring SSL):
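The listener and process settings described above could be sketched as option settings like these (the certificate ARN is a placeholder):

```python
# Sketch of the ALB listener and default process settings discussed above.
# The ACM certificate ARN is a placeholder, not a real certificate.
listener_options = [
    # HTTPS listener on 443 with the certificate attached.
    {"Namespace": "aws:elbv2:listener:443",
     "OptionName": "Protocol", "Value": "HTTPS"},
    {"Namespace": "aws:elbv2:listener:443",
     "OptionName": "SSLCertificateArns",
     "Value": "arn:aws:acm:us-east-1:111122223333:certificate/placeholder"},
    # Default process: health check at the root, stickiness disabled.
    {"Namespace": "aws:elasticbeanstalk:environment:process:default",
     "OptionName": "HealthCheckPath", "Value": "/"},
    {"Namespace": "aws:elasticbeanstalk:environment:process:default",
     "OptionName": "StickinessEnabled", "Value": "false"},
]
```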
Rolling updates and deployments
This is a fun section to configure. There are different capabilities available; however, we chose to use an immutable deployment policy with the following settings:
So what does the above give us? We are basically telling the orchestrator to replace servers on new application deployments rather than update them in place. This gives us a fresh copy of the AMI (used to set up EC2 for the chosen platform configuration) every time we deploy, and it prevents server configuration drift from updates and other sources from introducing problems. We are telling this deployment that it can only replace 50% of the EC2 instances at a time. Simple Beanstalk configuration changes we apply in a rolling fashion, one instance at a time (always keeping one instance available), over a ten minute period. This gives Windows initialization activities enough time to complete, which is critical if you choose a smaller EC2 instance size. Lastly, under preferences, we chose to obey the health check and fail the deployment if the work exceeds 600 seconds.
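The deployment and rolling-update behavior above could be expressed as option settings along these lines:

```python
# Sketch of the deployment policy described above: immutable application
# deployments in 50% batches with a 600-second command timeout, plus rolling
# configuration changes one instance at a time over a PT10M window.
deploy_options = [
    # Application deployments: replace instances rather than update in place.
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "DeploymentPolicy", "Value": "Immutable"},
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "BatchSizeType", "Value": "Percentage"},
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "BatchSize", "Value": "50"},
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "Timeout", "Value": "600"},  # fail the deployment past 600 s
    # Configuration changes: rolling, one instance at a time, one in service.
    {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
     "OptionName": "RollingUpdateEnabled", "Value": "true"},
    {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
     "OptionName": "RollingUpdateType", "Value": "Health"},
    {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
     "OptionName": "MaxBatchSize", "Value": "1"},
    {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
     "OptionName": "MinInstancesInService", "Value": "1"},
    {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
     "OptionName": "Timeout", "Value": "PT10M"},  # ISO 8601 duration: ten minutes
]
```

The ten-minute rolling-update timeout is what buys a slow-booting small Windows instance enough time to come up healthy before the next batch starts.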
Under security, apply the appropriate key pair for the EC2 instances and define an IAM instance profile. This profile is how your server will be able to access AWS services without storing credentials on the server itself. Effectively, the server assumes these capabilities using the AWS tooling installed on the base AMI. As for the service role, we let Beanstalk provision that. It grants Beanstalk the ability to perform all the service-specific tasks it needs: scaling, health checking, etc.
Hang in there, we have one more block to touch in the Beanstalk configuration wizard after this. Managed updates, if enabled, will apply Beanstalk platform updates automatically within a scheduled window. We configured it to be immutable (replacing instances) and to allow minor and patch updates to the platform. Be aware you will still want to set up a maintenance window in SSM for Windows security patching. As of this writing, AWS has told me they release platform updates for Windows on a monthly cadence (see this thread here). I will write a future post on Windows patching in AWS.
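Sketched as option settings, the managed update choices above look something like this (the maintenance window shown is a hypothetical example, not the one we use):

```python
# Sketch of the managed platform update settings described above. The
# PreferredStartTime value is a hypothetical weekly window (UTC).
managed_update_options = [
    {"Namespace": "aws:elasticbeanstalk:managedactions",
     "OptionName": "ManagedActionsEnabled", "Value": "true"},
    {"Namespace": "aws:elasticbeanstalk:managedactions",
     "OptionName": "PreferredStartTime", "Value": "Sun:02:00"},  # hypothetical
    # "minor" permits both minor and patch platform updates.
    {"Namespace": "aws:elasticbeanstalk:managedactions:platformupdate",
     "OptionName": "UpdateLevel", "Value": "minor"},
    # Replace instances during the update (immutable-style) instead of
    # updating them in place.
    {"Namespace": "aws:elasticbeanstalk:managedactions:platformupdate",
     "OptionName": "InstanceRefreshEnabled", "Value": "true"},
]
```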
OK, you made it to the end of the configuration block. This one is fairly easy: use your Direct Connect VPC, set the load balancer visibility to internal (so it is not publicly accessible, as befits an internal application), and add the load balancer to the “public” subnet in each chosen availability zone (AZ), making it highly available in the event of a failure. Remember, always plan for eventual failure; it is a fact of life and technology. Finally, place your instances in the “private” subnets, again across both AZs.
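The network placement above fits in the VPC namespace. A sketch with placeholder VPC and subnet IDs:

```python
# Sketch of the network placement described above. All IDs are placeholders;
# "public" subnets hold the internal ALB, "private" subnets hold the instances,
# each spanning both availability zones.
network_options = [
    {"Namespace": "aws:ec2:vpc", "OptionName": "VPCId",
     "Value": "vpc-0placeholder"},                       # Direct Connect VPC (hypothetical)
    {"Namespace": "aws:ec2:vpc", "OptionName": "ELBScheme",
     "Value": "internal"},                               # not publicly accessible
    {"Namespace": "aws:ec2:vpc", "OptionName": "ELBSubnets",
     "Value": "subnet-0pubA,subnet-0pubB"},              # "public" subnets, both AZs
    {"Namespace": "aws:ec2:vpc", "OptionName": "Subnets",
     "Value": "subnet-0privA,subnet-0privB"},            # "private" subnets, both AZs
]
```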
Deploy all of the above and you should have your infrastructure in place, most likely running the sample hello world app. Congratulations! In our next post we will talk about some additional infrastructure and server-side configuration needed to drive this to completion, along with deploying our code to wrap up!