Creating Admin User
...
The aws/policies folder contains all custom policies applied to the AWS account. The CLI commands below assume that your local AWS profile is named bahmni-aws. You can also export your AWS profile globally using export AWS_PROFILE=your-profile, which eliminates the need to specify --profile with each CLI command.

⚠️ Note: you will need to replace {YourAccountNumber} with your account number in the CLI commands and in the policy documents. Remember not to check in your account number to public GitHub repositories.
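For example, to set the profile once per shell session instead of repeating --profile:

```shell
# Export once per shell session so --profile can be omitted from each command.
# "bahmni-aws" is the profile name this guide assumes; substitute your own.
export AWS_PROFILE=bahmni-aws
```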
...
1️⃣ Create Bahmni Infra Admin Policy with least privilege
The first step is to create a policy with the minimum permissions required to provision the Bahmni infra.
...
```shell
aws iam create-policy-version \
  --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdmin \
  --policy-document file://aws/policies/BahmniInfraAdmin.json \
  --set-as-default \
  --profile bahmni-aws
```
...
2️⃣ Create role with trust policy
We will create a role named BahmniInfraAdminRoleForIAMUsers whose trust policy allows IAM users with appropriate privileges to assume the role.
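The actual trust policy document lives in the repository; as a rough sketch (the principal shown here is an assumption — the real policy may scope this more narrowly), a trust policy allowing IAM principals in the account to assume the role looks like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::{YourAccountNumber}:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With the "root" principal, any IAM user or role in the account that itself has sts:AssumeRole permission on this role can assume it.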
...
You can also create the role and attach the policy using the AWS Console.
...
3️⃣ Create assume role policy for IAM users
Finally, we need to create a policy that allows IAM users/groups to assume the BahmniInfraAdminRoleForIAMUsers role, so that those users gain the permissions of the BahmniInfraAdmin policy and can perform infra provisioning.
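The file aws/policies/BahmniInfraAdminAssumeRolePolicy.json in the repository is authoritative; as a hedged sketch, a policy granting sts:AssumeRole on the role would look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::{YourAccountNumber}:role/BahmniInfraAdminRoleForIAMUsers"
    }
  ]
}
```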
...
```shell
aws iam create-policy \
  --policy-name BahmniInfraAdminAssumeRolePolicy \
  --policy-document file://aws/policies/BahmniInfraAdminAssumeRolePolicy.json \
  --profile bahmni-aws
```
...
4️⃣ Create IAM User groups and users
We recommend creating an IAM user group and attaching policies to the group, rather than attaching policies directly to IAM users.
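As an illustrative sketch (the group and user names below are hypothetical, not from the repository), the group can be created, the assume-role policy attached to it, and a user added as follows:

```shell
# Hypothetical names: "BahmniInfraAdmins" group, "devops-user" user
aws iam create-group --group-name BahmniInfraAdmins --profile bahmni-aws
aws iam attach-group-policy \
  --group-name BahmniInfraAdmins \
  --policy-arn arn:aws:iam::{YourAccountNumber}:policy/BahmniInfraAdminAssumeRolePolicy \
  --profile bahmni-aws
aws iam create-user --user-name devops-user --profile bahmni-aws
aws iam add-user-to-group --group-name BahmniInfraAdmins --user-name devops-user --profile bahmni-aws
```

Policies attached to the group apply to every user in it, which keeps permissions auditable in one place.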
...
You will need to install kubectl in addition to Terraform and the AWS CLI.
1️⃣ Create S3 bucket (to store terraform state file)
```shell
aws s3api create-bucket \
  --bucket <bucket-name> \
  --create-bucket-configuration LocationConstraint=<yourRegion>
```
...
```shell
aws s3api put-bucket-versioning \
  --bucket <bucket-name> \
  --versioning-configuration Status=Enabled
```
...
2️⃣ Create dynamodb table
```shell
aws dynamodb create-table \
  --table-name <lock-table-name> \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --region <yourRegion>
```
Please use appropriate values for <bucket-name>, <lock-table-name>, and <yourRegion>. Once the S3 bucket and the DynamoDB table are created, set the values in the config.s3.tfbackend file.
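For reference, a config.s3.tfbackend file for the Terraform S3 backend typically looks like the following sketch (the key value shown is an assumption — use whatever state path this repository's Terraform configuration expects):

```
bucket         = "<bucket-name>"
key            = "terraform.tfstate"
region         = "<yourRegion>"
dynamodb_table = "<lock-table-name>"
```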
...
3️⃣ Create resources
The steps up to this point are one-time steps that need to be done when a new AWS account has been created. Now the resources can be provisioned either from your local machine using the Terraform CLI, or using GitHub Actions.
...
```shell
cd terraform/node_groups/nonprod
terraform init -backend-config=../../config.s3.tfbackend
terraform apply -auto-approve
```
...
4️⃣ (Optional Step) Provisioning multiple environments
- Duplicate the …
- In the newly created tfvars file, make sure to update the value of …
- Replace …
5️⃣ Using AWS EFS for Persistence
EFS can mount the same persistent volume to multiple pods simultaneously using the ReadWriteMany access mode, and EFS data can be accessed from all Availability Zones in the same region.
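As a hedged illustration (the claim name and the efs-sc storage class below are assumptions — use whatever StorageClass your cluster's EFS CSI driver defines), a PersistentVolumeClaim using ReadWriteMany over EFS looks like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bahmni-efs-claim        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany             # multiple pods can mount this volume read-write
  storageClassName: efs-sc      # assumes an EFS CSI StorageClass named "efs-sc"
  resources:
    requests:
      storage: 5Gi              # EFS is elastic; this request is nominal
```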
...