Install Bahmni on AWS Kubernetes
Bahmni Lite comes with Terraform-based automation to deploy Bahmni components onto an AWS EKS cluster. Our own publicly accessible demo environments run on the same infrastructure.
On each build, the Bahmni GitHub Actions publish Docker images to Docker Hub and also publish Helm charts. The Terraform code uses these Helm charts and scripts to deploy Bahmni onto EKS.
See this GitHub Action as an example, which performs deployment to EKS using the Helm Umbrella Chart: https://github.com/BahmniIndiaDistro/helm-umbrella-chart/blob/main/.github/workflows/deploy.yaml
To see a high-level deployment architecture diagram, please visit this document first: Bahmni Lite AWS Architecture
Please see this YouTube training video on AWS/Kubernetes deployment, conducted by the Bahmni team: https://www.youtube.com/watch?v=A27n-9lqVAA&list=PLzknGpbejfSzEB2dT87mexJaBUsXNuZkD&index=7
Creating an Admin User
When you start from a fresh AWS account with root access, it's a best practice to create an admin user and lock down the root keys. Follow this document to create your first IAM admin user and user group.
Creating IAM users with least privileges
You should be able to provision Bahmni infra using admin privileges (IAM admin user). If you plan to automate infra provisioning via CI/CD (GitHub Actions), or perhaps delegate infra management to your DevOps team, then you will need to create additional IAM users. It's a recommended practice to Use Roles for Delegating Permissions and Grant Least Privileges.
Note: if you are using the CLI, make sure you have set up and configured the AWS CLI on your local machine. Also make sure to check out the bahmni-infra GitHub repo.
The aws/policies folder contains all custom policies applied to the AWS account. The CLI commands below assume that your local AWS profile is named bahmni-aws. You can also globally export your AWS profile using export AWS_PROFILE=your-profile, which eliminates the need to specify --profile with each CLI command.
⚠️ Note: you will need to replace {YourAccountNumber} with your account number in the CLI commands and in the policy documents. Remember not to check in your account number to public GitHub repositories.
1️⃣ Create Bahmni Infra Admin Policy with least privilege
The first step is to create a policy with the minimum permissions needed to provision Bahmni infra.
aws iam create-policy \
--policy-name BahmniInfraAdmin \
--policy-document file://aws/policies/BahmniInfraAdmin.json \
--profile bahmni-aws
You can also create/update the policy using the AWS Console by following these steps.
If you need to make changes to the BahmniInfraAdmin policy after creation, please follow these steps:
a) Fetch the policy ARN
aws iam list-policies \
--scope Local \
--profile bahmni-aws
b) (Conditional) List policy versions. Note: if there are already 5 revisions of the policy, you will need to delete the oldest version. Remember to fetch the oldest version where "IsDefaultVersion": false.
aws iam list-policy-versions \
--policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
--profile bahmni-aws
c) (Conditional) Delete policy version
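A minimal sketch of the delete command, assuming the oldest non-default version is v1 (substitute the version ID returned by the previous step):
aws iam delete-policy-version \
    --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
    --version-id v1 \
    --profile bahmni-aws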
d) Apply policy changes to recreate a new revision
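Something like the following should publish the updated policy document as the new default version (the file path follows the aws/policies layout noted above):
aws iam create-policy-version \
    --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
    --policy-document file://aws/policies/BahmniInfraAdmin.json \
    --set-as-default \
    --profile bahmni-aws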
2️⃣ Create role with trust policy
We will create a role BahmniInfraAdminRoleForIAMUsers whose trust policy allows IAM users with appropriate privileges to assume the role. Once the role is created, we will attach the BahmniInfraAdmin policy to the BahmniInfraAdminRoleForIAMUsers role.
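A hedged sketch of both commands; the trust policy file name below is an assumption, so check the bahmni-infra repo for the actual document:
# Create the role with its trust policy (file name is illustrative)
aws iam create-role \
    --role-name BahmniInfraAdminRoleForIAMUsers \
    --assume-role-policy-document file://aws/policies/BahmniInfraAdminRoleTrustPolicy.json \
    --profile bahmni-aws

# Attach the BahmniInfraAdmin policy to the role
aws iam attach-role-policy \
    --role-name BahmniInfraAdminRoleForIAMUsers \
    --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
    --profile bahmni-aws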
You can also create the role and attach the policy using the AWS Console.
3️⃣ Create assume role policy for IAM users
Finally, we need to create a policy that allows IAM users / groups to assume the BahmniInfraAdminRoleForIAMUsers role, so that those IAM users / group members gain the permissions of the BahmniInfraAdmin policy and can perform infra provisioning. The policy document is BahmniInfraAdminAssumeRolePolicy.json. Create the BahmniInfraAdminAssumeRolePolicy policy:
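Mirroring the earlier create-policy call, and assuming the document sits in the aws/policies folder as noted above:
aws iam create-policy \
    --policy-name BahmniInfraAdminAssumeRolePolicy \
    --policy-document file://aws/policies/BahmniInfraAdminAssumeRolePolicy.json \
    --profile bahmni-aws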
4️⃣ Create IAM User groups and users
We recommend creating an IAM user group rather than attaching policies directly to IAM users.
🔘 Create group bahmni_infra_admins (follow this document for the AWS Console)
🔘 Attach the BahmniInfraAdminAssumeRolePolicy policy to allow assuming the BahmniInfraAdminRoleForIAMUsers role (follow this document for the AWS Console)
Once the IAM group is created, you can create IAM users (follow these steps for CLI / Console) and add them to the bahmni_infra_admins group; a hedged CLI sketch of these steps follows.
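A minimal sketch of the group and user steps via the CLI; the user name jane.doe is purely an illustrative placeholder:
# Create the group
aws iam create-group --group-name bahmni_infra_admins --profile bahmni-aws

# Attach the assume-role policy to the group
aws iam attach-group-policy \
    --group-name bahmni_infra_admins \
    --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdminAssumeRolePolicy \
    --profile bahmni-aws

# Create a user (illustrative name) and add them to the group
aws iam create-user --user-name jane.doe --profile bahmni-aws
aws iam add-user-to-group --user-name jane.doe --group-name bahmni_infra_admins --profile bahmni-aws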
⚠️ Note: All the IAM users attached to the bahmni_infra_admins group, or otherwise capable of assuming the BahmniInfraAdminRoleForIAMUsers role, will have extensive privileges to perform admin operations on the Bahmni AWS infrastructure. Be careful to extend such access only to a controlled set of DevOps users, or limit it to CI/CD pipelines.
Provision the infrastructure
Bahmni uses Terraform to provision the AWS infrastructure. Make sure you have the Terraform CLI installed and configured on your local machine. It leverages Terraform backends to maintain infra state using S3 and DynamoDB.
If you are new to Terraform, please go through this amazing Terraform-AWS 45-minute crash course that can get you started in no time.
You will need to install kubectl in addition to the Terraform and AWS CLIs.
1️⃣ Create S3 bucket (to store terraform state file)
You can also choose to enable versioning on the S3 bucket, as sketched below.
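A hedged example of creating the bucket and enabling versioning (for us-east-1, omit the --create-bucket-configuration flag):
# Create the state bucket
aws s3api create-bucket \
    --bucket <bucket-name> \
    --region <yourRegion> \
    --create-bucket-configuration LocationConstraint=<yourRegion> \
    --profile bahmni-aws

# Optionally enable versioning on the bucket
aws s3api put-bucket-versioning \
    --bucket <bucket-name> \
    --versioning-configuration Status=Enabled \
    --profile bahmni-aws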
2️⃣ Create DynamoDB table
Please use appropriate values for <bucket-name>, <lock-table-name>, and <yourRegion>. Once the S3 bucket and the DynamoDB table are created, set the values in the config.s3.tfbackend file, as sketched below.
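A hedged sketch: Terraform's S3 backend expects the lock table to have a string LockID hash key, and the config.s3.tfbackend contents below show one plausible layout (the key path is illustrative; the real file lives in the bahmni-infra repo):
# Create the Terraform state-lock table
aws dynamodb create-table \
    --table-name <lock-table-name> \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --region <yourRegion> \
    --profile bahmni-aws

# Illustrative config.s3.tfbackend contents
cat > config.s3.tfbackend <<'EOF'
bucket         = "<bucket-name>"
key            = "terraform.tfstate"
region         = "<yourRegion>"
dynamodb_table = "<lock-table-name>"
EOF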
3️⃣ Create resources
The steps until here are one-time steps that need to be done when a new AWS account has been created. Now the resources can be provisioned either from your local machine using the Terraform CLI, or using GitHub Actions.
🔰 Using GitHub Actions
You can fork the bahmni-infra repository and run the GitHub Actions workflow to provision the infrastructure.
1️. Add AWS secrets to GitHub Actions secrets
You need to add the AWS secrets to GitHub Actions secrets for the workflow to authenticate to AWS.
BAHMNI_AWS_ID → Access Key ID of the user provisioned
BAHMNI_AWS_SECRET → Secret Access Key of the user
BAHMNI_INFRA_ADMIN_ROLE → Role ARN of the BahmniInfraAdminRoleForIAMUsers role
2️. Update remote state config
Update the values of the bucket and the DynamoDB table in the config.s3.tfbackend file in your fork with the bucket name and the DynamoDB table name created in the above step.
3️. Run the pipeline
There are 2 pipelines:
a) Deploy: This pipeline provisions shared resources such as EKS, RDS, etc. Remember to comment out the slack-workflow-status job if you don't intend to integrate with Slack; otherwise, you could consider setting up a webhook with Slack and defining the URL as the GitHub secret SLACK_WEBHOOK_URL.
b) node-group: This pipeline is automatically triggered after Deploy to create node groups and nodes. It's also possible to manually trigger the pipeline to create more node groups based on your needs. The default node group name is nonprod with the below configuration; please consider changing it in your fork to the desired state.
🔰 CLI
Before executing the below commands, have a look at nonprod.tfvars and adjust any configuration parameters you need.
Shared Infra
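The exact invocation lives in the bahmni-infra repo; a hedged sketch of the usual Terraform flow, assuming the backend config and tfvars files described above:
# Initialise Terraform with the S3/DynamoDB remote state backend
terraform init -backend-config=config.s3.tfbackend

# Provision the shared infrastructure for the non-prod environment
terraform apply -var-file=nonprod.tfvars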
The above commands will provision the infrastructure with resource names suffixed with non-prod.
Node Groups and nodes
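Node group provisioning also goes through Terraform; a sketch assuming the node group configuration lives in its own module directory (the actual path in the bahmni-infra repo may differ):
# Hypothetical module path -- check the bahmni-infra repo for the real one
cd terraform/node_group
terraform init -backend-config=../../config.s3.tfbackend
terraform apply -var-file=nonprod.tfvars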
4️⃣ (Optional Step) Provisioning multiple environments
5️⃣ Using AWS EFS for Persistence
EFS can mount the same persistent volume to multiple pods at the same time using the ReadWriteMany access mode, and EFS data can be accessed from all Availability Zones in the same region.
We will be using EFS for persistence in our cluster. Following are the steps to set it up:
Connect to the Amazon EKS cluster
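Connecting usually means updating your local kubeconfig; a sketch where <cluster-name> is a placeholder for whatever name the Terraform code gave your cluster:
aws eks update-kubeconfig \
    --region <yourRegion> \
    --name <cluster-name> \
    --profile bahmni-aws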
Install the Amazon EFS driver
Install the Amazon EFS CSI driver using a manifest.
Apply the manifest.
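A hedged example of the manifest-based install, using the kustomize overlay published by the aws-efs-csi-driver project (consider pinning ?ref= to a release tag you have verified rather than master):
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"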
Apply a StorageClass manifest for Amazon EFS
Edit the file. Find the following line, and replace the value for fileSystemId with your file system ID. You can retrieve the fileSystemId from SSM by using the following command:
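The SSM parameter name below is a placeholder; check the bahmni-infra Terraform outputs for the exact name under which the EFS ID is stored:
aws ssm get-parameter \
    --name "<efs-file-system-id-parameter>" \
    --query "Parameter.Value" \
    --output text \
    --profile bahmni-aws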
Deploy the storage class.
Verify the storage class.
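Assuming the StorageClass manifest is saved as storageclass.yaml (the file name is illustrative), deploying and verifying looks like:
# Deploy the storage class
kubectl apply -f storageclass.yaml

# Verify it is registered
kubectl get storageclass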
This StorageClass name can be referenced by your PVCs, and PVs will be provisioned dynamically. To use static provisioning with EFS, refer to the following template PV file:
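The original template isn't reproduced here; below is a minimal sketch of a statically provisioned EFS PV with illustrative names, written out via a heredoc, using the fileSystemId placeholder from the previous step:
cat > efs-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv               # illustrative name
spec:
  capacity:
    storage: 5Gi             # EFS ignores this, but the field is required
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc   # must match your StorageClass name
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <fileSystemId>
EOF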
Destroy all allocated AWS resources
It is recommended to remove resources from the AWS EKS cluster before destroying the cluster.
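Assuming the same tfvars and backend config used for provisioning, teardown would follow the standard Terraform flow:
# Destroy the resources Terraform provisioned for this environment
terraform destroy -var-file=nonprod.tfvars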
The Bahmni documentation is licensed under Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)