Deploying SonarQube CE in AWS ECS Fargate
The AWS Console is really handy for getting something up and running in the prototyping stage of a project. But oftentimes the temptation is there to just take the 'ok, that's working now, phew!' approach and move on. Inevitably, six months or more down the track you've got to either recreate a similar resource or redeploy the same resource again, and despite being sure you'd remember all the little aspects, you've got to go on the voyage of rediscovery all over again to re-identify all the little settings you need.
Because I have the memory of a goldfish, I prefer to take an 'Infrastructure as Code' approach, even for personal side projects. So, here's a walk through of how I wrote up a CloudFormation template to deploy the Community Edition of SonarQube, the code quality and static code analysis tool, as a Docker container in AWS Elastic Container Service (ECS) using Fargate. I'm not sure that this approach is necessarily better than an EC2 instance, but it was more of a 'let's see how we can do this' exercise, and a way to evaluate whether I'm going to continue with SonarQube for static code analysis of my own code.
SonarQube Community Edition can be downloaded from here
SonarQube Community Edition documentation can be viewed here
Here's what this (and the precursor stacks) will be providing
tl;dr - if you're just after the template, you can grab it from here
Precursors/Dependencies
NOTE: This template has some pre-conditions/dependencies;
- The VPC stack deployed as part of Building out a simple AWS VPC, with the NatGatewayAttachment and NatGateway resources uncommented and deployed (a NAT gateway is required to permit the ECS task, deployed to a private subnet, to reach your code repositories).
- A database. In this case I have used a PostgreSQL RDS instance (SonarQube also supports Oracle and Microsoft SQL Server). The template provided in Building out an AWS postgreSQL RDS instance can be used with some modification: remove the automatic credential rotation for the sonarqube user, and remove any of the additional unnecessary features such as the read replica or RDS proxy. A simpler template to provision a PostgreSQL RDS instance for the SonarQube database can be found here. It might also be feasible (although not a part of this journal entry) to run a separate task in the cluster for a containerised PostgreSQL database, also backed by EFS for the data directory. Maybe I'll write that up sometime in the future.
- The SonarQube Docker image already uploaded to an Elastic Container Registry (ECR) repository. It is entirely possible to pull the image directly from DockerHub instead - although it is nice to know that if something changes with SonarQube Community Edition, I'm not immediately impacted if the image disappears from DockerHub.
docker pull sonarqube:lts-community
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker tag sonarqube:lts-community <account-id>.dkr.ecr.<region>.amazonaws.com/sonarqube:lts-community
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/sonarqube:lts-community
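Worth calling out before diving in: this template consumes several !ImportValue references - SonarQubeRDSSG, SonarQubeDBInstanceEndpointAddress and SonarQubeDBInstanceName from the RDS stack, and vpc-cidr from the VPC stack. As a sketch (the logical resource names DBSecurityGroup and DBInstance here are placeholders, and the database name is assumed to be sonarqube - only the Export names must match exactly), the RDS stack's Outputs section needs to look along these lines:

```yaml
# Sketch only - DBSecurityGroup/DBInstance are placeholder logical names;
# the Export names must match the !ImportValue references used in this template
Outputs:
  SonarQubeRDSSG:
    Description: Security group attached to the SonarQube RDS instance
    Value: !Ref DBSecurityGroup
    Export:
      Name: SonarQubeRDSSG
  SonarQubeDBInstanceEndpointAddress:
    Description: Endpoint address of the SonarQube database instance
    Value: !GetAtt DBInstance.Endpoint.Address
    Export:
      Name: SonarQubeDBInstanceEndpointAddress
  SonarQubeDBInstanceName:
    Description: Name of the SonarQube database
    Value: sonarqube
    Export:
      Name: SonarQubeDBInstanceName
```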
Security Groups
To begin with, a couple of template Parameters to cut down on hard coded values in the template. We've got a ServiceName parameter that will be used as a prefix for a variety of resource names so as to associate these all together, as well as a VPC Id parameter. The latter could easily be removed, and where it's referenced, an !ImportValue used instead to bring in the VPC Id exported from the Building out a simple AWS VPC template.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::SecretsManager-2020-07-23
Description: >-
  ECS Cluster for SonarQube CE.
  Dependent on an RDS instance/stack and a VPC stack.
Parameters:
  ServiceName:
    Type: String
    Default: sonarqube-app
    Description: >-
      A name for the service. This name will be used to create the ECS
      service, task definition, and used as a prefix for other resources.
  VpcId:
    Type: AWS::EC2::VPC::Id
    Description: The VPC where the service will be deployed
    # Change these defaults (or remove them) as appropriate
    Default: vpc-1234564e2959c07c
Next we'll set up a number of security groups that will be needed later. First up is the security group that will be used to restrict egress (outbound traffic) from the SonarQube ECS service itself. This Security Group permits outbound traffic to the postgres DB (port 5432) as well as allowing for HTTPS traffic outbound (port 443). We've also added DNS access (port 53) outbound so that when we attach an Elastic File System (EFS) to the ECS tasks, the internal names can be resolved. Note the !ImportValue limiting the destination for port 5432 to the Security Group that will have been created by the SonarQube postgreSQL stack mentioned earlier as a dependency for this stack.
Resources:
  # Security Groups
  SonarQubeECSSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub ${ServiceName}-ecs-sg
      GroupDescription: Security group for SonarQube ECS cluster
      VpcId: !Ref VpcId
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          DestinationSecurityGroupId: !ImportValue SonarQubeRDSSG
          Description: Postgresql default port (Internal) for access from ECS cluster
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
          Description: HTTPS access to the internet
        - IpProtocol: tcp
          FromPort: 53
          ToPort: 53
          CidrIp: !ImportValue vpc-cidr
          Description: VPC DNS access
      Tags:
        - Key: Name
          Value: !Sub ${ServiceName}-ecs-sg
Next up is a Security Group to allow incoming (ingress) traffic from the ECS tasks to EFS, limiting this to only permit TCP traffic on port 2049 coming from resources associated with the SonarQubeECSSecurityGroup.
  SonarQubeEFSSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub ${ServiceName}-efs-sg
      GroupDescription: Security group for SonarQube EFS
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 2049
          ToPort: 2049
          SourceSecurityGroupId: !Ref SonarQubeECSSecurityGroup
          Description: NFS access from ECS cluster
      Tags:
        - Key: Name
          Value: !Sub ${ServiceName}-efs-sg
Next is a security group that will be attached to an internet facing load balancer, allowing HTTP and HTTPS traffic in, and outbound TCP traffic to SonarQube's default port 9000 for ECS resources associated with the SonarQubeECSSecurityGroup.
Obviously we won't actually be serving traffic over HTTP; a load balancer listener rule will redirect it to HTTPS. For that redirect to occur, though, we still need an ingress rule to allow HTTP in. Note that the HTTP and HTTPS ingress rules are currently 'world accessible'. In practical use you'd really want to lock this down to either your organisation's VPN or, for personal use, your specific IP.
  SonarQubeELBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub ${ServiceName}-elb-sg
      GroupDescription: Security group for SonarQube ELB
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 9000
          ToPort: 9000
          DestinationSecurityGroupId: !Ref SonarQubeECSSecurityGroup
      Tags:
        - Key: Name
          Value: !Sub ${ServiceName}-elb-sg
To avoid a circular reference between SonarQubeECSSecurityGroup and SonarQubeEFSSecurityGroup, the SonarQubeECSSecurityGroup doesn't yet have an egress rule allowing TCP traffic on port 2049 to EFS. We can't add that egress rule to the initial create of SonarQubeECSSecurityGroup because we want to set its DestinationSecurityGroupId to SonarQubeEFSSecurityGroup. We also can't create SonarQubeEFSSecurityGroup first and make SonarQubeECSSecurityGroup DependsOn it, because SonarQubeEFSSecurityGroup's ingress rule for TCP port 2049 references SonarQubeECSSecurityGroup. Rather than opening one of these rules up to allow traffic to/from any resource in the VPC, we can add a standalone AWS::EC2::SecurityGroupEgress resource attached to SonarQubeECSSecurityGroup to allow TCP traffic on port 2049 from the ECS tasks to the EFS resources, by adding the following;
  SonarQubeECSToEFSEgress:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      GroupId: !Ref SonarQubeECSSecurityGroup
      IpProtocol: tcp
      FromPort: 2049
      ToPort: 2049
      DestinationSecurityGroupId: !Ref SonarQubeEFSSecurityGroup
      Description: EFS access from ECS cluster
Similarly, to avoid a circular reference with TCP traffic on port 9000 from the ELB to ECS (the Target Group health check), a separate AWS::EC2::SecurityGroupIngress resource is added to SonarQubeECSSecurityGroup, accepting TCP traffic on port 9000 from SonarQubeELBSecurityGroup.
  SonarQubeECSFromELBIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref SonarQubeECSSecurityGroup
      IpProtocol: tcp
      FromPort: 9000
      ToPort: 9000
      SourceSecurityGroupId: !Ref SonarQubeELBSecurityGroup
      Description: ELB to ECS access
Finally, we need to allow TCP traffic on port 5432 from the ECS tasks into the PostgreSQL database. As that stack was created separately beforehand, we add an AWS::EC2::SecurityGroupIngress rule to the SonarQubeRDS security group.
  SonarQubeECSToRDSIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !ImportValue SonarQubeRDSSG
      IpProtocol: tcp
      FromPort: 5432
      ToPort: 5432
      SourceSecurityGroupId: !Ref SonarQubeECSSecurityGroup
      Description: Postgresql default port (Internal) for access from ECS cluster
Here's the complete set of security groups plus the additional Egress/Ingress rules.
Resources:
  # Security Groups
  SonarQubeECSSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub ${ServiceName}-ecs-sg
      GroupDescription: Security group for SonarQube ECS cluster
      VpcId: !Ref VpcId
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          DestinationSecurityGroupId: !ImportValue SonarQubeRDSSG
          Description: Postgresql default port (Internal) for access from ECS cluster
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
          Description: HTTPS access to the internet
        - IpProtocol: tcp
          FromPort: 53
          ToPort: 53
          CidrIp: !ImportValue vpc-cidr
          Description: VPC DNS access
      Tags:
        - Key: Name
          Value: !Sub ${ServiceName}-ecs-sg
  SonarQubeEFSSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub ${ServiceName}-efs-sg
      GroupDescription: Security group for SonarQube EFS
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 2049
          ToPort: 2049
          SourceSecurityGroupId: !Ref SonarQubeECSSecurityGroup
          Description: NFS access from ECS cluster
      Tags:
        - Key: Name
          Value: !Sub ${ServiceName}-efs-sg
  SonarQubeELBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub ${ServiceName}-elb-sg
      GroupDescription: Security group for SonarQube ELB
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 9000
          ToPort: 9000
          DestinationSecurityGroupId: !Ref SonarQubeECSSecurityGroup
      Tags:
        - Key: Name
          Value: !Sub ${ServiceName}-elb-sg
  SonarQubeECSToEFSEgress:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      GroupId: !Ref SonarQubeECSSecurityGroup
      IpProtocol: tcp
      FromPort: 2049
      ToPort: 2049
      DestinationSecurityGroupId: !Ref SonarQubeEFSSecurityGroup
      Description: EFS access from ECS cluster
  SonarQubeECSFromELBIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref SonarQubeECSSecurityGroup
      IpProtocol: tcp
      FromPort: 9000
      ToPort: 9000
      SourceSecurityGroupId: !Ref SonarQubeELBSecurityGroup
      Description: ELB to ECS access
  SonarQubeECSToRDSIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !ImportValue SonarQubeRDSSG
      IpProtocol: tcp
      FromPort: 5432
      ToPort: 5432
      SourceSecurityGroupId: !Ref SonarQubeECSSecurityGroup
      Description: Postgresql default port (Internal) for access from ECS cluster
Target Group and Application Load Balancer
Next, we'll create the Target Group, Load Balancer, and the listeners for the Load Balancer. The load balancer will accept HTTP or HTTPS connections from the internet, redirect any HTTP to HTTPS, and forward traffic to the ECS tasks running in the private subnet/s.
First, we'll need a couple of additional Parameters.
The ContainerPort is the port that the application inside the Docker container binds to; for SonarQube, that's 9000. This value is used in a number of different places, so declaring it in one location is helpful, but it could just as easily be a Mapping value.
The CertificateId is the Identifier for the AWS Certificate Manager certificate required for HTTPS termination at the load balancer.
The PublicSubnetIds is the set of Ids for the VPC public subnets that the load balancer will be able to direct traffic to.
The Path and Priority values could probably just be hard coded into the template, as the load balancer being created is only for use by this stack.
  ContainerPort:
    Type: Number
    Default: 9000
    Description: What port number the application inside the docker container is binding to
  CertificateId:
    Type: String
    Description: The ID of the certificate to use for HTTPS, e.g. 12345678-1234-1234-1234-123456789012
  PublicSubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: The public subnets where the load balancer service will be deployed.
    # Change these defaults (or remove them) as appropriate
    Default: "subnet-a123456789abcdef0,subnet-b123456789abcdef0,subnet-c123456789abcdef0"
  Path:
    Type: String
    Default: "*"
    Description: >-
      A path on the public load balancer that this service should be
      connected to. Use * to send all load balancer traffic to this service.
  Priority:
    Type: Number
    Default: 1
    Description: >-
      The priority for the routing rule added to the load balancer.
      This only applies if you have multiple services which have been
      assigned to different paths on the load balancer.
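As an aside, since the port is fixed for SonarQube, the Mapping alternative mentioned above for ContainerPort would look something like this (the Mapping names here are my own invention), referenced with !FindInMap instead of !Ref:

```yaml
# Hypothetical alternative to the ContainerPort Parameter - referenced
# elsewhere in the template as: !FindInMap [SonarQube, Ports, ContainerPort]
Mappings:
  SonarQube:
    Ports:
      ContainerPort: 9000
```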
Now, the Target Group is defined. Of note here are the HealthCheckIntervalSeconds and HealthCheckTimeoutSeconds values. These need to be as high as possible because SonarQube can take quite some time to complete its start up and respond to the health check. If these values are too low, it's likely that SonarQube won't respond in time and the target (the ECS task) will be terminated too soon.
Also note that the health check uses HTTP rather than HTTPS as HTTPS will be terminated at the load balancer, and the SonarQube container expects HTTP on the port it is exposing (9000).
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 300
      HealthCheckProtocol: HTTP
      HealthCheckPort: !Ref 'ContainerPort'
      HealthCheckTimeoutSeconds: 120
      HealthyThresholdCount: 2
      TargetType: ip
      Name: !Sub '${ServiceName}-tg'
      Port: !Ref 'ContainerPort'
      Protocol: HTTP
      UnhealthyThresholdCount: 2
      VpcId: !Ref VpcId
Next is the internet facing application load balancer, and the HTTP listener to redirect HTTP traffic to HTTPS.
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Sub '${ServiceName}-elb'
      Scheme: internet-facing
      SecurityGroups:
        - !Ref SonarQubeELBSecurityGroup
      Subnets: !Ref PublicSubnetIds
      Type: application
  HttpLoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    DependsOn:
      - LoadBalancer
    Properties:
      DefaultActions:
        - Type: redirect
          RedirectConfig:
            Protocol: HTTPS
            Port: '443'
            Host: '#{host}'
            Path: '/#{path}'
            Query: '#{query}'
            StatusCode: HTTP_301
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
and the HTTPS listener to forward HTTPS (and redirected HTTP) traffic to the Target Group defined earlier. Note the construction of the CertificateArn. If the CertificateId template parameter is replaced with a full CertificateArn parameter instead, just set this to !Ref CertificateArn.
  HttpsLoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    DependsOn:
      - LoadBalancer
    Properties:
      Certificates:
        - CertificateArn: !Sub arn:aws:acm:${AWS::Region}:${AWS::AccountId}:certificate/${CertificateId}
      DefaultActions:
        - TargetGroupArn: !Ref TargetGroup
          Type: 'forward'
      LoadBalancerArn: !Ref LoadBalancer
      Port: 443
      Protocol: HTTPS
  HttpsLoadBalancerListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
        - TargetGroupArn: !Ref TargetGroup
          Type: 'forward'
      Conditions:
        - Field: path-pattern
          Values: [!Ref 'Path']
      ListenerArn: !Ref HttpsLoadBalancerListener
      Priority: !Ref 'Priority'
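For clarity, the !Sub used for the certificate just assembles a full ACM certificate ARN from the region, account and certificate ID. The shell equivalent, with entirely made-up example values, is:

```shell
# Made-up example values - substitute your own region, account and certificate ID
REGION="ap-southeast-2"
ACCOUNT_ID="123456789012"
CERTIFICATE_ID="12345678-1234-1234-1234-123456789012"

# Mirrors the CloudFormation !Sub expression used for CertificateArn above
CERT_ARN="arn:aws:acm:${REGION}:${ACCOUNT_ID}:certificate/${CERTIFICATE_ID}"
echo "${CERT_ARN}"
```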
Here's the entire set up for the target group, the load balancer and its listeners.
# Load Balancer, Target Groups and Listeners
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 300
      HealthCheckProtocol: HTTP
      HealthCheckPort: !Ref 'ContainerPort'
      HealthCheckTimeoutSeconds: 120
      HealthyThresholdCount: 2
      TargetType: ip
      Name: !Sub '${ServiceName}-tg'
      Port: !Ref 'ContainerPort'
      Protocol: HTTP
      UnhealthyThresholdCount: 2
      VpcId: !Ref VpcId
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Sub '${ServiceName}-elb'
      Scheme: internet-facing
      SecurityGroups:
        - !Ref SonarQubeELBSecurityGroup
      Subnets: !Ref PublicSubnetIds
      Type: application
  HttpLoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    DependsOn:
      - LoadBalancer
    Properties:
      DefaultActions:
        - Type: redirect
          RedirectConfig:
            Protocol: HTTPS
            Port: '443'
            Host: '#{host}'
            Path: '/#{path}'
            Query: '#{query}'
            StatusCode: HTTP_301
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
  HttpsLoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    DependsOn:
      - LoadBalancer
    Properties:
      Certificates:
        - CertificateArn: !Sub arn:aws:acm:${AWS::Region}:${AWS::AccountId}:certificate/${CertificateId}
      DefaultActions:
        - TargetGroupArn: !Ref TargetGroup
          Type: 'forward'
      LoadBalancerArn: !Ref LoadBalancer
      Port: 443
      Protocol: HTTPS
  HttpsLoadBalancerListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
        - TargetGroupArn: !Ref TargetGroup
          Type: 'forward'
      Conditions:
        - Field: path-pattern
          Values: [!Ref 'Path']
      ListenerArn: !Ref HttpsLoadBalancerListener
      Priority: !Ref 'Priority'
Elastic File System (EFS)
The next step is to configure Elastic File System (EFS) storage for use by SonarQube. Storage for the ECS tasks is ephemeral, so each time a new task initialises, the local filesystem is cleared. SonarQube writes plugins and other data to disk, so by using EFS this data can persist between task instances, instead of having to re-install plugins or re-establish data each time a new task starts.
An additional template Parameter needs to be added: the list of private subnets that will be used to set up the EFS mount targets, allowing the ECS tasks running in the private subnet/s to access the EFS file system.
  PrivateSubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: The private subnets where the containers will be deployed
    # Change these defaults (or remove them) as appropriate
    Default: "subnet-d123456789abcdef0,subnet-e123456789abcdef0,subnet-f123456789abcdef0"
There are three unix file paths used by SonarQube that should be persisted: /opt/sonarqube/data, /opt/sonarqube/extensions and /opt/sonarqube/logs.
The first step is to create the shared EFS file system itself. This sets up a fairly basic EFS configuration, encrypting the file system at rest.
  # EFS shared file system
  FileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      FileSystemTags:
        - Key: Name
          Value: !Sub '${ServiceName}-efs'
      PerformanceMode: generalPurpose
      Encrypted: true
      ThroughputMode: bursting
The next step is to create the 'application specific views into the file system'. Access points are essentially the EFS equivalent of mkdir, chown and chmod, setting appropriate ownership and file permissions for the three directories that will be made available to the ECS tasks:
- /sonarqube_data
- /sonarqube_extensions
- /sonarqube_logs
  AccessPoint1:
    Type: AWS::EFS::AccessPoint
    Properties:
      FileSystemId: !Ref FileSystem
      PosixUser:
        Uid: '1000'
        Gid: '1000'
      RootDirectory:
        CreationInfo:
          OwnerGid: '1000'
          OwnerUid: '1000'
          Permissions: '755'
        Path: '/sonarqube_data'
      AccessPointTags:
        - Key: Name
          Value: !Sub '${ServiceName}-data'
  AccessPoint2:
    Type: AWS::EFS::AccessPoint
    Properties:
      FileSystemId: !Ref FileSystem
      PosixUser:
        Uid: '1000'
        Gid: '1000'
      RootDirectory:
        CreationInfo:
          OwnerGid: '1000'
          OwnerUid: '1000'
          Permissions: '755'
        Path: '/sonarqube_extensions'
      AccessPointTags:
        - Key: Name
          Value: !Sub '${ServiceName}-extensions'
  AccessPoint3:
    Type: AWS::EFS::AccessPoint
    Properties:
      FileSystemId: !Ref FileSystem
      PosixUser:
        Uid: '1000'
        Gid: '1000'
      RootDirectory:
        CreationInfo:
          OwnerGid: '1000'
          OwnerUid: '1000'
          Permissions: '755'
        Path: '/sonarqube_logs'
      AccessPointTags:
        - Key: Name
          Value: !Sub '${ServiceName}-logs'
The final step in setting up EFS is defining mount targets in each of the three private subnets, allowing the file system to be mounted by tasks executing in those subnets.
  MountTarget1:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem
      SubnetId: !Select [ 0, !Ref PrivateSubnetIds ]
      SecurityGroups:
        - !Ref SonarQubeEFSSecurityGroup
  MountTarget2:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem
      SubnetId: !Select [ 1, !Ref PrivateSubnetIds ]
      SecurityGroups:
        - !Ref SonarQubeEFSSecurityGroup
  MountTarget3:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem
      SubnetId: !Select [ 2, !Ref PrivateSubnetIds ]
      SecurityGroups:
        - !Ref SonarQubeEFSSecurityGroup
Here's the entire set up for the EFS file system, its access points (directories) and mount targets. Note this full version also adds a lifecycle policy to transition files to Infrequent Access after 30 days, and explicitly disables automatic backups.
  # EFS shared file system
  FileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      FileSystemTags:
        - Key: Name
          Value: !Sub '${ServiceName}-efs'
      PerformanceMode: generalPurpose
      Encrypted: true
      LifecyclePolicies:
        - TransitionToIA: AFTER_30_DAYS
      ThroughputMode: bursting
      BackupPolicy:
        Status: DISABLED
  AccessPoint1:
    Type: AWS::EFS::AccessPoint
    Properties:
      FileSystemId: !Ref FileSystem
      PosixUser:
        Uid: '1000'
        Gid: '1000'
      RootDirectory:
        CreationInfo:
          OwnerGid: '1000'
          OwnerUid: '1000'
          Permissions: '755'
        Path: '/sonarqube_data'
      AccessPointTags:
        - Key: Name
          Value: !Sub '${ServiceName}-data'
  AccessPoint2:
    Type: AWS::EFS::AccessPoint
    Properties:
      FileSystemId: !Ref FileSystem
      PosixUser:
        Uid: '1000'
        Gid: '1000'
      RootDirectory:
        CreationInfo:
          OwnerGid: '1000'
          OwnerUid: '1000'
          Permissions: '755'
        Path: '/sonarqube_extensions'
      AccessPointTags:
        - Key: Name
          Value: !Sub '${ServiceName}-extensions'
  AccessPoint3:
    Type: AWS::EFS::AccessPoint
    Properties:
      FileSystemId: !Ref FileSystem
      PosixUser:
        Uid: '1000'
        Gid: '1000'
      RootDirectory:
        CreationInfo:
          OwnerGid: '1000'
          OwnerUid: '1000'
          Permissions: '755'
        Path: '/sonarqube_logs'
      AccessPointTags:
        - Key: Name
          Value: !Sub '${ServiceName}-logs'
  MountTarget1:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem
      SubnetId: !Select [ 0, !Ref PrivateSubnetIds ]
      SecurityGroups:
        - !Ref SonarQubeEFSSecurityGroup
  MountTarget2:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem
      SubnetId: !Select [ 1, !Ref PrivateSubnetIds ]
      SecurityGroups:
        - !Ref SonarQubeEFSSecurityGroup
  MountTarget3:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem
      SubnetId: !Select [ 2, !Ref PrivateSubnetIds ]
      SecurityGroups:
        - !Ref SonarQubeEFSSecurityGroup
Log Group
Next up is a log group for logging from the ECS tasks. This is a simple step; just add the following to the template.
  # log group
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub '/ecs/${ServiceName}'
      RetentionInDays: 14
IAM Roles
Nearly there. A few more dependencies to configure before we finish up with the ECS cluster, service and task definitions. Next up, the IAM roles: the Task role and the Task Execution role.
A few new template Parameters are required: the name of the secret in AWS Secrets Manager that was created during the set up of the RDS database as the predecessor task, plus the name and tag of the ECR repository holding the SonarQube image (these are referenced by the Task Execution role policy below, and again by the container image in the task definition later).
  DBInstancePasswordSecretName:
    Type: String
    Description: The name of the secret in Secrets Manager that contains the database password
  ECRRepository:
    Type: String
    Default: sonarqube
    Description: The name of the ECR repository holding the SonarQube image
  ECRTag:
    Type: String
    Default: lts-community
    Description: The image tag to deploy from the ECR repository
The Task Role doesn't require any permissions. The Task Execution role is where we specify the permissions required to launch the task: pulling the image from ECR, writing logs, and reading the database secret.
  # IAM Roles
  SonarQubeTaskRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub '${ServiceName}-task-role'
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [ecs-tasks.amazonaws.com]
            Action: ['sts:AssumeRole']
      Path: /
      Policies: []
  SonarQubeTaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub '${ServiceName}-execution-role'
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [ecs-tasks.amazonaws.com]
            Action: ['sts:AssumeRole']
      Path: /
      Policies:
        - PolicyName: ECRReadAccess
          PolicyDocument:
            Statement:
              # ecr:GetAuthorizationToken doesn't support resource-level
              # permissions, so it gets its own statement against '*'
              - Effect: Allow
                Action:
                  - 'ecr:GetAuthorizationToken'
                Resource: '*'
              - Effect: Allow
                Action:
                  - 'ecr:BatchCheckLayerAvailability'
                  - 'ecr:GetDownloadUrlForLayer'
                  - 'ecr:BatchGetImage'
                Resource: !Sub 'arn:aws:ecr:${AWS::Region}:${AWS::AccountId}:repository/${ECRRepository}'
        - PolicyName: LogWriteAccess
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:CreateLogStream'
                  - 'logs:PutLogEvents'
                Resource: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/ecs/${ServiceName}:log-stream:*'
        - PolicyName: DatabaseSecretReadAccess
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - 'secretsmanager:GetSecretValue'
                Resource: !Sub 'arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:${DBInstancePasswordSecretName}-??????'
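The -?????? tail on the secret Resource is deliberate: Secrets Manager appends a hyphen plus six random characters to a secret's name when it builds the secret's ARN, so the policy uses a glob to match whatever suffix was generated. A quick illustration with made-up account details and suffix:

```shell
# Secrets Manager ARNs end in the secret name plus "-" and six random
# characters, e.g. "sonarqube-db-password-AbC123", hence the "-??????" glob.
SECRET_NAME="sonarqube-db-password"   # hypothetical secret name
GENERATED_SUFFIX="AbC123"             # made-up example of the random suffix
SECRET_ARN="arn:aws:secretsmanager:ap-southeast-2:123456789012:secret:${SECRET_NAME}-${GENERATED_SUFFIX}"

# The IAM policy pattern, expressed as a shell glob
PATTERN="arn:aws:secretsmanager:ap-southeast-2:123456789012:secret:${SECRET_NAME}-??????"
case "${SECRET_ARN}" in
  ${PATTERN}) echo "match" ;;
  *)          echo "no match" ;;
esac
```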
ECS Cluster, Service and Task Definition
Finally, we reach the set up of the ECS cluster, service and task definition.
The ECS cluster is quite straightforward; simply add the following to the template to create an ECS cluster with Container Insights enabled. Container Insights provides a useful overview of the health and performance of the cluster, service and tasks.
  SonarQubeCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Sub '${ServiceName}-cluster'
      ClusterSettings:
        - Name: containerInsights
          Value: enabled
Next is the Task Definition. This is where we bring together the majority of the resources configured in the template up to this point. Before adding the AWS::ECS::TaskDefinition, a couple of additional template Parameters are required. These defaults are probably the minimum for SonarQube.
  ContainerCpu:
    Type: Number
    Default: 1024
    Description: How much CPU to give the container. 1024 is 1 CPU
  ContainerMemory:
    Type: Number
    Default: 3072
    Description: How much memory in megabytes to give the container
Next, we add the TaskDefinition itself. It may not be strictly necessary to declare the TaskDefinition as explicitly dependent on the EFS resources, but in testing it was observed that the TaskDefinition could start being created before the EFS resources were fully created, causing some deployment failures. CloudFormation does recognise all the other dependencies that are referenced and will wait for those to exist before creating the TaskDefinition, so it's not necessary to declare any other resources explicitly as dependencies.
  SonarQubeTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn:
      - AccessPoint1
      - AccessPoint2
      - AccessPoint3
      - FileSystem
    Properties:
      Family: !Ref 'ServiceName'
      Cpu: !Ref 'ContainerCpu'
      Memory: !Ref 'ContainerMemory'
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !Ref SonarQubeTaskExecutionRole
      TaskRoleArn: !Ref SonarQubeTaskRole
      Volumes:
        - Name: sonarqube_data
          EFSVolumeConfiguration:
            FilesystemId: !Ref FileSystem
            TransitEncryption: ENABLED
            AuthorizationConfig:
              AccessPointId: !Ref AccessPoint1
        - Name: sonarqube_extensions
          EFSVolumeConfiguration:
            FilesystemId: !Ref FileSystem
            TransitEncryption: ENABLED
            AuthorizationConfig:
              AccessPointId: !Ref AccessPoint2
        - Name: sonarqube_logs
          EFSVolumeConfiguration:
            FilesystemId: !Ref FileSystem
            TransitEncryption: ENABLED
            AuthorizationConfig:
              AccessPointId: !Ref AccessPoint3
      ContainerDefinitions:
        - Name: !Ref 'ServiceName'
          Cpu: !Ref 'ContainerCpu'
          Memory: !Ref 'ContainerMemory'
          ReadonlyRootFilesystem: false
          Essential: true
          Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${ECRRepository}:${ECRTag}'
          PortMappings:
            - ContainerPort: !Ref 'ContainerPort'
              Name: !Sub '${ServiceName}-9000-tcp'
              HostPort: 9000
              AppProtocol: http
              Protocol: tcp
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref LogGroup
              awslogs-region: !Ref 'AWS::Region'
              awslogs-stream-prefix: ecs
          Environment:
            - Name: SONARQUBE_JDBC_URL
              Value:
                !Join [
                  '',
                  [
                    'jdbc:postgresql://',
                    !ImportValue SonarQubeDBInstanceEndpointAddress,
                    ':5432/',
                    !ImportValue SonarQubeDBInstanceName
                  ]
                ]
            - Name: SONARQUBE_JDBC_USERNAME
              Value: !Sub '{{resolve:secretsmanager:${DBInstancePasswordSecretName}:SecretString:username}}'
            - Name: SONARQUBE_JDBC_PASSWORD
              Value: !Sub '{{resolve:secretsmanager:${DBInstancePasswordSecretName}:SecretString:password}}'
            - Name: SONAR_SEARCH_JAVAADDITIONALOPTS
              Value: '-Dnode.store.allow_mmap=false,-Ddiscovery.type=single-node'
          MountPoints:
            - ContainerPath: /opt/sonarqube/data
              SourceVolume: sonarqube_data
              ReadOnly: false
            - ContainerPath: /opt/sonarqube/extensions
              SourceVolume: sonarqube_extensions
              ReadOnly: false
            - ContainerPath: /opt/sonarqube/logs
              SourceVolume: sonarqube_logs
              ReadOnly: false
          Ulimits:
            - HardLimit: 65535
              Name: nofile
              SoftLimit: 65535
Of particular note is the setting of the unix user limits (Ulimits), significantly raising the number of files that processes in this container can open. As far as I know, this isn't settable via the AWS ECS Console, so it must be set via CloudFormation and/or the AWS CLI.
The final step is to define the ECS Service itself.
One final template Parameter to add: how many tasks to run. It's worth noting the description here; SonarQube CE is limited to a single instance, as a clustered set up is restricted to the Data Center Edition.
  DesiredCount:
    Type: Number
    Default: 1
    Description: >-
      How many copies of the service task to run.
      Note cluster set up of SonarQube is exclusive to the Data Center Edition
And the definition for the AWS::ECS::Service itself. Note the DeploymentConfiguration settings, which suit a single task instance: during a deployment the running task is stopped before the replacement starts.
  SonarQubeService:
    Type: AWS::ECS::Service
    DependsOn: LoadBalancer
    Properties:
      ServiceName: !Ref 'ServiceName'
      Cluster: !Ref SonarQubeCluster
      LaunchType: FARGATE
      DeploymentConfiguration:
        MaximumPercent: 100
        MinimumHealthyPercent: 0
      DesiredCount: !Ref DesiredCount
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: DISABLED
          SecurityGroups:
            - !Ref SonarQubeECSSecurityGroup
          Subnets:
            - !Select [ 0, !Ref PrivateSubnetIds ]
            - !Select [ 1, !Ref PrivateSubnetIds ]
            - !Select [ 2, !Ref PrivateSubnetIds ]
      TaskDefinition: !Ref SonarQubeTaskDefinition
      LoadBalancers:
        - ContainerName: !Ref 'ServiceName'
          ContainerPort: !Ref 'ContainerPort'
          TargetGroupArn: !Ref 'TargetGroup'
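One thing the template doesn't define is an Outputs section. If you want the load balancer's DNS name handy after deployment (for setting up your own DNS alias or CNAME), a small addition along these lines does the trick (the output and export names here are my own suggestion, not part of the original template):

```yaml
# Optional addition - surfaces the load balancer DNS name in the stack outputs
Outputs:
  LoadBalancerDNSName:
    Description: DNS name of the SonarQube load balancer
    Value: !GetAtt LoadBalancer.DNSName
    Export:
      Name: !Sub '${ServiceName}-elb-dns'
```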
Once SonarQube is up and running, the next steps are to configure it for your particular code repository; the SonarQube documentation will assist there. The only thing I will add is that it can be a bit tricky locating the user id/password for the initial login. Don't share this, but it's admin/admin (you'll be prompted to change it on first login).
As mentioned in the tl;dr at the top of the post, the full template can be found here. I'd encourage you to take a look, as I do maintain this (and other templates) over time, and there may be additional tweaks that I forget to come back here to document. It's also easier to copy/paste than hunting through a journal post.