This is a topic that I have come across quite often while provisioning infrastructure. Let me present the following scenario:
- Provision some instances in AWS.
- Use Ansible to configure these instances.
In this scenario, we write some Terraform code to provision our servers on AWS. Once that is done, we will want to run some Ansible playbooks to configure these servers.

However, that is easier said than done: Ansible requires an inventory file with the EC2 details.
In this post I will present one possible solution for generating that inventory from within Terraform itself.
1. Create the keys
For the purpose of this exercise we need to create two SSH keys:
- ansible_user
- terraform_keypair
Let me explain the purpose of these two keys.
The ansible_user SSH key will be used by the ansible binary to connect to the EC2 hosts when you want to run your playbooks.
The terraform_keypair SSH key is used by AWS when provisioning the EC2 hosts. This key allows you to SSH into the instances, and AWS requires it.
It is a good practice to have these two keys separate as they serve different purposes.
Here is a quick snippet on generating the keys:
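One way to do it with ssh-keygen (the empty `-N ""` skips the passphrase prompt):

```shell
# Generate both keys without a passphrase so automation does not stall.
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -N "" -C "ansible_user" -f ~/.ssh/ansible_user
ssh-keygen -t rsa -b 4096 -N "" -C "terraform_keypair" -f ~/.ssh/terraform_keypair
```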

NOTE: Do not set a passphrase on either key, as a passphrase will break the automation.
Once your keys have been generated, ensure both the public and private keys are in your ~/.ssh/ folder.
2. Download the repository
Get the code by cloning the following Github repository:
https://github.com/kunalnanda/terraform_inventory
3. Prepare the environment
Once the repository has been cloned locally, you will need to perform the following updates.
3.1 Public keys
Copy the contents of your ~/.ssh/ansible_user.pub key and add it to the modules/ansible_user.pub file.
3.2 AWS: Create the subnets
For the purpose of this tutorial, create three public subnets, similar to the following:
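If you would rather manage the subnets with Terraform than the console, a sketch along these lines works (the CIDR block, availability zone names and tags below are placeholders, not values from the repository):

```hcl
# Illustrative only: three public subnets spread across availability zones.
# Replace the CIDR block and AZ names with your own values.
resource "aws_subnet" "public" {
  count                   = 3
  vpc_id                  = var.vpc_id
  cidr_block              = cidrsubnet("10.0.0.0/16", 8, count.index)
  availability_zone       = element(["ap-southeast-2a", "ap-southeast-2b", "ap-southeast-2c"], count.index)
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet-${count.index}"
  }
}
```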

3.3 AWS: Create the Security Groups
Create a new Security Group with the following two rules:

The way to do this is to create a new Security Group with the SSH rule first. This will give you the ID of the security group that you will add as source to the second rule. Doing so allows all traffic within that Security Group. In this case, all EC2 instances that have the Security Group will be able to communicate with each other.
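As an alternative to the two-step console workflow, Terraform can express the same thing in one resource, because the `self` argument scopes a rule to the group's own ID (the group name below is a placeholder):

```hcl
# Illustrative only: SSH from anywhere, plus an all-traffic rule scoped to the
# group itself so instances carrying this SG can talk to each other.
resource "aws_security_group" "tutorial" {
  name   = "terraform-inventory-sg"
  vpc_id = var.vpc_id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "All traffic within this security group"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    self        = true
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Using `self = true` avoids the chicken-and-egg problem of needing the group's ID before the group exists.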
3.4 AWS: Create the EC2 Keypair
Remember the terraform_keypair SSH key that we created above. Well, it’s time to push that to AWS.
Navigate to EC2 > Key Pairs. Click on "Actions" and then "Import key pair". You can either copy/paste the contents of the terraform_keypair.pub file or use the "Browse" option to select the file.
Ensure that the key pair is named “terraform_keypair” as we are using that in our Terraform code.
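The import can also be done from Terraform itself, if you prefer (a sketch, assuming the key lives at the path from step 1):

```hcl
# Illustrative only: import the public key as an EC2 key pair named
# "terraform_keypair", matching what the Terraform code expects.
resource "aws_key_pair" "terraform_keypair" {
  key_name   = "terraform_keypair"
  public_key = file("~/.ssh/terraform_keypair.pub")
}
```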
3.5 Generate a KMS key to encrypt the EBS Volumes
In the AWS Console, navigate to Key Management Service > Customer managed keys. Click on Create key. Select Symmetric type and follow the prompts to create a new key.
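The equivalent Terraform sketch, should you want the key in code as well (description and deletion window are placeholder choices):

```hcl
# Illustrative only: a symmetric customer-managed KMS key for EBS encryption.
resource "aws_kms_key" "ebs" {
  description             = "KMS key for EBS volume encryption"
  deletion_window_in_days = 30
}
```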
3.6 Populate the params.tfvars and locals.tf files
Finally, we will update the variables used by the automation.
params.tfvars
| Variable | Description |
| aws_region | The AWS region code. Eg: ap-southeast-2 |
| vpc_id | The VPC ID |
| ebs_kms_key | The ARN of the KMS key created above |
| ansible_user | The contents of the ansible_user.pub file |
| backend_bucket | The name of the S3 bucket used for the Terraform state backend |
| app_ec2_count | The number of EC2 instances in the App tier |
| app_ami_id | The AMI ID for the App tier instances. Eg: ami-0a58e22c727337c51 |
| app_ec2_type | The EC2 instance type for the App tier. Eg: t3.micro |
| app_instance_profile | The instance profile, if you have one; otherwise leave it blank |
| app_security_groups | The ID of the Security Group we created earlier |
| app_keypair_name | The EC2 key pair name. Eg: terraform_keypair |
| web_ec2_count | The number of EC2 instances in the Web tier |
| web_ami_id | The AMI ID for the Web tier instances. Eg: ami-0a58e22c727337c51 |
| web_ec2_type | The EC2 instance type for the Web tier. Eg: t3.micro |
| web_ec2_name | The name tag for the Web tier instances. Eg: web-tier |
| web_instance_profile | The instance profile, if you have one; otherwise leave it blank |
| web_security_groups | The ID of the Security Group we created earlier |
| web_keypair_name | The EC2 key pair name. Eg: terraform_keypair |
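Put together, a filled-in params.tfvars might look like this (every value below is a placeholder for illustration, not a real resource):

```hcl
aws_region           = "ap-southeast-2"
vpc_id               = "vpc-0123456789abcdef0"
ebs_kms_key          = "arn:aws:kms:ap-southeast-2:123456789012:key/00000000-0000-0000-0000-000000000000"
ansible_user         = "ssh-rsa AAAAB3NzaC1yc2E... ansible_user"
backend_bucket       = "my-terraform-state-bucket"
app_ec2_count        = 2
app_ami_id           = "ami-0a58e22c727337c51"
app_ec2_type         = "t3.micro"
app_instance_profile = ""
app_security_groups  = "sg-0123456789abcdef0"
app_keypair_name     = "terraform_keypair"
web_ec2_count        = 2
web_ami_id           = "ami-0a58e22c727337c51"
web_ec2_type         = "t3.micro"
web_ec2_name         = "web-tier"
web_instance_profile = ""
web_security_groups  = "sg-0123456789abcdef0"
web_keypair_name     = "terraform_keypair"
```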
locals.tf
Update the following snippet with the IDs of the subnets created earlier:
subnet_placements = {
  "0" = ""
  "1" = ""
  "2" = ""
}
4. Run Terraform
Once all the above is done, run the following commands:
$ terraform init
Initializing modules...
- app_tier in modules
- web_tier in modules
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.69.0...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 2.69"
* provider.local: version = "~> 1.4"
* provider.template: version = "~> 2.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
After Terraform has initialized, we can run a plan.
$ terraform plan --var-file=params.tfvars
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
...
...
...
Plan: 22 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Releasing state lock. This may take a few moments...
5. The nitty-gritty
In order to generate the ansible_inventory.yaml file, we use "template_file" data sources. These can be seen in the ansible_inventory.tf file.
Let’s look at the following snippet:
data "template_file" "ansible-app-tier-hosts" {
  count    = var.app_ec2_count
  template = file("./templates/hostnames.tpl")
  vars = {
    ec2_public_dns = module.app_tier.module_fqdn[count.index]
  }
}
What we are telling Terraform to do is render the hostnames.tpl template, using Terraform's template interpolation syntax, once per instance. We pass in the FQDN (the public DNS) of each app-tier EC2 instance.
We do the same thing for the web-tier instances as well.
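For reference, a hostnames.tpl compatible with the vars block above could be as simple as a single interpolated line (this is illustrative; check the repository for the actual template):

```hcl
${ec2_public_dns}
```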
Once we have generated all the host names, we combine these into the inventory-file-template block.
data "template_file" "inventory-file-template" {
  template = file("./templates/ansible_host.tpl")
  vars = {
    app_tier_hosts = join("\n", data.template_file.ansible-app-tier-hosts.*.rendered)
    web_tier_hosts = join("\n", data.template_file.ansible-web-tier-hosts.*.rendered)
  }
}
And Voila! We have generated a dynamic inventory file from our Terraform code. You can now use this inventory file for your Ansible playbooks.
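The rendered template still has to land on disk; the "local" provider seen in the init output above can do that. A sketch, assuming the data source names used in this post:

```hcl
# Write the rendered inventory to disk so ansible-playbook can consume it.
resource "local_file" "ansible_inventory" {
  content  = data.template_file.inventory-file-template.rendered
  filename = "./ansible_inventory.yaml"
}
```

You can then point Ansible at the generated file, e.g. `ansible-playbook -i ansible_inventory.yaml your_playbook.yml`.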