
Creating Admin User

When you start from a fresh AWS account with root access, it's a best practice to create an admin user and lock down the root access keys. Follow this document to create your first IAM admin user and user group.


Creating IAM users with least privileges

You should be able to provision Bahmni infra using admin privileges (the IAM admin user). If you plan to automate infra provisioning via CI/CD (GitHub Actions), or to delegate infra management to your DevOps team, you will need to create additional IAM users. It is a recommended practice to Use Roles for Delegating Permissions and Grant Least Privileges.

Note: if you are using the CLI, make sure you have the AWS CLI installed and configured on your local machine. Also make sure to check out the bahmni-infra GitHub repo.

The aws/policies folder contains all custom policies applied to the AWS account. The CLI commands below assume that your local AWS profile is named bahmni-aws. You can also export your AWS profile globally using export AWS_PROFILE=your-profile, which eliminates the need to specify --profile with each command.
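
If you have not set up the profile yet, a minimal setup could look like this (the profile name matches the one assumed above; aws configure will prompt for your access keys and default region):

aws configure --profile bahmni-aws
export AWS_PROFILE=bahmni-aws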

⚠️ Note: you will need to replace {YourAccountNumber} with your account number in the CLI commands and in the policy documents. Remember not to check in your account number to public GitHub repositories.


1️⃣ Create Bahmni Infra Admin Policy with least privilege

The first step is to create a policy with the least permissions required to provision Bahmni infra.

BahmniInfraAdmin.json

aws iam create-policy \
 --policy-name BahmniInfraAdmin \
 --policy-document file://aws/policies/BahmniInfraAdmin.json \
 --profile bahmni-aws

You can also create/update the policy using the AWS Console by following these steps.

If you need to make changes to the BahmniInfraAdmin policy after creation, follow these steps:

a) Fetch the policy arn

aws iam list-policies \
 --scope Local \
 --profile bahmni-aws

b) (Conditional) List policy versions. Note: if there are already 5 revisions of the policy, you will need to delete the oldest version. Remember to pick the oldest version where "IsDefaultVersion": false.

aws iam list-policy-versions \
 --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
 --profile bahmni-aws

c) (Conditional) Delete policy version

aws iam delete-policy-version \
 --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
 --version-id {versionNumber} \
 --profile bahmni-aws

d) Apply the policy changes to create a new revision

aws iam create-policy-version \
 --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
 --policy-document file://aws/policies/BahmniInfraAdmin.json \
 --set-as-default \
 --profile bahmni-aws

2️⃣ Create role with trust policy

We will create a role BahmniInfraAdminRoleForIAMUsers whose trust policy allows IAM users with the appropriate privileges to assume the role.
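
For illustration, a minimal trust policy of the following shape allows principals in the account (subject to their own IAM permissions) to assume the role; the actual BahmniInfraAdminRoleForIAMUsers.json in the bahmni-infra repo may differ:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::{YourAccountNumber}:root"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}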

BahmniInfraAdminRoleForIAMUsers.json

aws iam create-role \
 --role-name BahmniInfraAdminRoleForIAMUsers \
 --assume-role-policy-document file://aws/roles/BahmniInfraAdminRoleForIAMUsers.json

Once the role is created, attach the BahmniInfraAdmin policy to the BahmniInfraAdminRoleForIAMUsers role.

aws iam attach-role-policy \
 --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
 --role-name BahmniInfraAdminRoleForIAMUsers

You can also create the role and attach the policy using the AWS Console.


3️⃣ Create assume role policy for IAM users

Finally, we need to create a policy that allows IAM users / groups to assume the BahmniInfraAdminRoleForIAMUsers role, so that those users gain the permissions of the BahmniInfraAdmin policy and can perform infra provisioning.

BahmniInfraAdminAssumeRolePolicy.json

{
  "Version": "2012-10-17",
  "Statement": {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ],
      "Resource": "arn:aws:iam::{YourAccountNumber}:role/BahmniInfraAdminRoleForIAMUsers"
  }
}

Create the BahmniInfraAdminAssumeRolePolicy policy:

aws iam create-policy \
 --policy-name BahmniInfraAdminAssumeRolePolicy \
 --policy-document file://aws/policies/BahmniInfraAdminAssumeRolePolicy.json \
 --profile bahmni-aws

4️⃣ Create IAM User groups and users

We recommend creating an IAM user group rather than attaching policies directly to IAM users.

🔘 Create group bahmni_infra_admins (follow this document for the AWS Console)

aws iam create-group --group-name bahmni_infra_admins

🔘 Attach the BahmniInfraAdminAssumeRolePolicy policy to the group to allow assuming the BahmniInfraAdminRoleForIAMUsers role (follow this document for the AWS Console)

aws iam attach-group-policy \
 --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdminAssumeRolePolicy \
 --group-name bahmni_infra_admins

Once the IAM group is created, you can create IAM users (follow these steps for CLI / Console) and add them to the bahmni_infra_admins group.
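
For example, with the CLI, a user can be created, added to the group, and given access keys like this (the user name is a placeholder):

aws iam create-user --user-name <user-name>
aws iam add-user-to-group --user-name <user-name> --group-name bahmni_infra_admins
aws iam create-access-key --user-name <user-name>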

⚠️ Note: All IAM users attached to the bahmni_infra_admins group, or otherwise able to assume the BahmniInfraAdminRoleForIAMUsers role, will have extensive privileges to perform admin operations on the Bahmni AWS infrastructure. Be careful to extend such access only to a controlled set of DevOps users or limit it to CI/CD pipelines.
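
A user in this group can assume the role from the CLI by adding a profile such as the following to ~/.aws/config (the profile names are placeholders); commands run with this profile then carry the BahmniInfraAdmin permissions:

[profile bahmni-infra-admin]
role_arn = arn:aws:iam::{YourAccountNumber}:role/BahmniInfraAdminRoleForIAMUsers
source_profile = bahmni-aws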


Provision the infrastructure

Bahmni uses Terraform to provision the AWS infrastructure. Make sure you have the Terraform CLI installed and configured on your local machine. Bahmni leverages Terraform backends to maintain infra state using S3 and DynamoDB.

If you are new to Terraform, please go through this Terraform + AWS 45-minute crash course that can get you started in no time.

You will also need to install kubectl in addition to the Terraform and AWS CLIs.

1️⃣ Create S3 bucket (to store terraform state file)

aws s3api create-bucket \
 --bucket <bucket-name> \
 --create-bucket-configuration LocationConstraint=<yourRegion>

You can also choose to enable versioning on the S3 bucket:

aws s3api put-bucket-versioning \
 --bucket <bucket-name> \
 --versioning-configuration Status=Enabled

2️⃣ Create dynamodb table

aws dynamodb create-table \
 --table-name <lock-table-name> \
 --attribute-definitions AttributeName=LockID,AttributeType=S \
 --key-schema AttributeName=LockID,KeyType=HASH \
 --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
 --region <yourRegion>

Please use appropriate values for <bucket-name>, <lock-table-name> and <yourRegion>. Once the S3 bucket and the DynamoDB table are created, set the values in the config.s3.tfbackend file.
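
For reference, a config.s3.tfbackend along these lines should work (the key names follow Terraform's S3 backend; the state file key itself is supplied via -backend-config during terraform init, as shown later):

bucket         = "<bucket-name>"
region         = "<yourRegion>"
dynamodb_table = "<lock-table-name>"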


3️⃣ Create resources

The steps up to here are one-time steps that need to be done when a new AWS account has been created. Now the resources can be provisioned either from your local machine using the Terraform CLI or using GitHub Actions.

Bahmni uses nonprod as the default name for various resources such as the cluster, node groups etc. Please change it to your desired names in the configuration and folders.

A Terraform module for Amazon SES has been added, which is used by the Bahmni Appointments module for sending emails. This module needs a domain registered in AWS Route53 to work, and two variables, domain_name and hosted_zone_id, must be provided before provisioning the infrastructure. These variables can be assigned in a file like nonprod.tfvars or set as environment variables. The module is optional and can be enabled/disabled using the variable enable_ses.
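
For example, the following could be added to nonprod.tfvars (the domain and hosted zone ID values are placeholders):

enable_ses     = true
domain_name    = "<your-registered-domain>"
hosted_zone_id = "<your-route53-hosted-zone-id>"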

🔰 Using Github Actions

You can fork the bahmni-infra repository and run the Github actions workflow to provision infrastructure.

1️. Add AWS Secrets to Github Actions Secrets

You need to add the following AWS secrets to GitHub Actions secrets for the workflow to authenticate to AWS (a CLI sketch for doing this follows the list).

BAHMNI_AWS_ID → Access Key ID of the user provisioned

BAHMNI_AWS_SECRET → Secret Access Key of the user

BAHMNI_INFRA_ADMIN_ROLE → Role ARN of the BahmniInfraAdminRoleForIAMUsers
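
If you use the GitHub CLI, the secrets can be added to your fork roughly as follows (the repository path is a placeholder; each command prompts for the secret value). You can also add them from the repository's Settings page instead.

gh secret set BAHMNI_AWS_ID --repo <your-github-org>/bahmni-infra
gh secret set BAHMNI_AWS_SECRET --repo <your-github-org>/bahmni-infra
gh secret set BAHMNI_INFRA_ADMIN_ROLE --repo <your-github-org>/bahmni-infra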

2️. Update remote state config

Update the bucket and DynamoDB table values in the config.s3.tfbackend file in your fork with the bucket name and DynamoDB table name created in the steps above.

3️. Run the pipeline

There are 2 pipelines

a) Deploy: This pipeline provisions shared resources such as EKS, RDS etc. Remember to comment out the slack-workflow-status job if you don't intend to integrate with Slack; otherwise, set up a webhook with Slack and define its URL as the GitHub secret SLACK_WEBHOOK_URL.

b) node-group: This pipeline is triggered automatically after Deploy to create node groups and nodes. It is also possible to manually trigger the pipeline to create more node groups as needed. The default node group name is nonprod with the configuration below; please consider changing it in your fork to the desired state:

cluster_name         = "bahmni-cluster-nonprod"
node_role_name       = "BahmniEKSNodeRole-nonprod"
node_group_name      = "nonprod"
desired_num_of_nodes = 2
min_num_of_nodes     = 1
max_num_of_nodes     = 2
node_instance_type   = "m5.xlarge"

🔰 CLI

Before executing the below commands, have a look at nonprod.tfvars and adjust any configuration parameters you need.

Shared Infra

cd terraform/
terraform init -backend-config=config.s3.tfbackend -backend-config='key=nonprod/terraform.tfstate'
terraform apply -var-file=nonprod.tfvars

The above command will provision the infrastructure with resource names suffixed with nonprod.

Node Groups and nodes

cd terraform/node_groups/nonprod
terraform init -backend-config=../../config.s3.tfbackend
terraform apply -auto-approve

4️⃣ (Optional Step) Provisioning multiple environments


The steps below are optional and are needed only when multiple environments have to be managed under a single AWS account.

  • Create environment specific tfvars file

Duplicate the nonprod.tfvars file and rename it for the environment you would like to create. Then update the configurations you need.

  • Update environment value

In the newly created tfvars file, make sure to update the values of environment and vpc_suffix (see the example after the provisioning commands below).

  • Provisioning the environment

Replace {environment_name} in the following commands.

terraform init -backend-config=config.s3.tfbackend -backend-config='key={environment_name}/terraform.tfstate'
terraform apply -var-file={environment_name}.tfvars
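
For example, a hypothetical qa environment's qa.tfvars might set (values are placeholders; the variable names are the ones mentioned above):

environment = "qa"
vpc_suffix  = "qa"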

5️⃣ Using AWS EFS for Persistence

EFS can mount the same persistent volume to multiple pods at the same time using the ReadWriteMany access mode, and EFS data can be accessed from all Availability Zones in the same region.

We will be using EFS for persistence in our cluster. The steps to set it up are as follows:

  • Connect to the Amazon EKS cluster

aws eks update-kubeconfig --name <cluster-name>

  • Install the Amazon EFS driver

Install the Amazon EFS CSI driver using a manifest.

kubectl kustomize \
    "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.4" > public-ecr-driver.yaml

Apply the manifest.

kubectl apply -f public-ecr-driver.yaml

  • Apply a StorageClass manifest for Amazon EFS

curl -o storageclass.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml

Edit the file and replace the value of fileSystemId with your file system ID. You can retrieve the fileSystemId from SSM using the following command:

aws ssm get-parameter --with-decryption --name "/nonprod/efs/file_system_id" --query "Parameter.Value" --output text
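
For reference, the downloaded storageclass.yaml looks roughly like this (the fileSystemId value is the placeholder to replace; the upstream example may include additional optional parameters):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: <file-system-id>
  directoryPerms: "700"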

Deploy the storage class.

kubectl apply -f storageclass.yaml

Verify the storage class.

kubectl get storageclass

This StorageClass name can be referenced by your PVCs, and PVs will be provisioned dynamically (see the example PVC after the PV template below). To use static provisioning with EFS, refer to the following template PV file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name-for-your-pv>
spec:
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - tls
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: <namespace>
    name: <name-of-your-pvc>
  storageClassName: <name-of-storage-class-created-in-above-step>
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <file-system-id>
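
For dynamic provisioning, a minimal PVC referencing the StorageClass could look like this (all names are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <name-of-your-pvc>
  namespace: <namespace>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <name-of-storage-class-created-in-above-step>
  resources:
    requests:
      storage: 8Gi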

Destroy all allocated AWS resources

The following commands will completely destroy the provisioned resources. Run them only when you want to clean up the entire environment.

It is recommended to remove resources from the AWS EKS Cluster before destroying the cluster.

kubectl delete all --all --all-namespaces
cd terraform/
terraform destroy
