Deploy the HubZero platform on AWS using either Terraform or AWS CDK (TypeScript). Both tools produce identical infrastructure.
New to AWS? Start with docs/getting-started-aws.md — it walks through account setup, CLI configuration, and finding the IDs you need before running any deployment commands.
Choose a profile to match your cost and reliability needs. The profile sets the EC2 instance type and compute strategy; all other features (RDS, ALB, EFS, etc.) are controlled independently.
| Profile | Instance | Arch | Pricing | Est. compute/mo | Best for |
|---|---|---|---|---|---|
| `minimal` (default) | t3.medium | x86_64 | On-demand | ~$30 | Development, small hubs |
| `graviton` | t4g.medium | ARM64 | On-demand | ~$24 | Same as minimal, ~20% cheaper |
| `spot` | t3.medium | x86_64 | Spot | ~$4–8 | Cost-sensitive; requires RDS + EFS |
Override the instance size without changing the profile:
```hcl
# Terraform — use a larger instance when needed, keep all other profile settings
deployment_profile = "minimal"
instance_type      = "t3.large"
```

```bash
# CDK
npx cdk deploy -c deploymentProfile=minimal -c instanceType=t3.large
```

Spot profile note: when AWS reclaims a spot instance, the ASG launches a replacement in ~3–5 minutes. Data survives because the web root lives on EFS and the database lives on RDS. The spot profile enforces both via a deploy-time precondition.
```
Internet ──► CloudFront (CDN, optional)
                     │
                     ▼
     Application Load Balancer (HTTPS)
        AWS WAF v2 (managed rules)
                     │
                     ▼
     Auto Scaling Group (min=1 / max=1)
     ┌───────────────────────────────┐
     │ EC2 — Amazon Linux 2023       │
     │ Apache 2.4 + PHP-FPM 8.2      │
     │ HubZero CMS v2.4              │
     │ Optional: Docker + Solr       │
     └───────────────────────────────┘
        │            │            │
        ▼            ▼            ▼
  RDS MariaDB   EFS (shared   S3 (file
     10.11       web root)     storage)
```
Key properties:
| Property | Detail |
|---|---|
| OS | Amazon Linux 2023 |
| Web | Apache 2.4 + PHP-FPM 8.2 |
| Database | RDS MariaDB 10.11 (default) or local MariaDB |
| TLS | ACM certificate on ALB — no certbot required |
| Access | SSM Session Manager only — no SSH port exposed |
| AMI | Pre-baked with all packages (Packer); falls back to base AL2023 |
| Scaling | ASG min=1/max=1 with ELB health check and rolling refresh |
All optional features (ALB, WAF, EFS, S3, CDN, monitoring, VPC endpoints, Parameter Store, Patch Manager) are individually toggleable with boolean variables.
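For instance, a hypothetical tfvars fragment toggling features individually (every variable name appears in the reference tables later in this document; the values are purely illustrative, and `vpc_id`, `subnet_id`, and `allowed_cidr` are still required):

```hcl
# custom.tfvars — illustrative sketch, not a shipped environment file
deployment_profile   = "graviton"   # profile sets instance type + compute strategy
use_rds              = true         # managed MariaDB instead of a local DB
rds_subnet_ids       = ["subnet-aaa", "subnet-bbb"]
enable_alb           = true
enable_waf           = true         # requires enable_alb
enable_cdn           = false
enable_efs           = true
enable_vpc_endpoints = false        # interface endpoints add ~$35/mo
```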
If you are coming from a traditional bare-metal or VM-based HubZero install, the number of AWS services involved can look daunting. This section explains what each one does and why it exists — usually replacing something you were doing manually.
| Service | What it does | Why it's used here |
|---|---|---|
| EC2 | The virtual machine that runs Apache, PHP, and HubZero | Direct replacement for your physical server or on-prem VM |
| Auto Scaling Group | Keeps exactly one EC2 instance running at all times | If the instance crashes or fails a health check, the ASG automatically launches a replacement — no manual restart needed. It is not for horizontal scaling (min=max=1). |
| Launch Template | Defines the instance configuration (AMI, instance type, user data, IAM role) | The blueprint the ASG uses to launch a replacement instance. Using a Launch Template (not a Launch Configuration) is required by newer AWS accounts. |
| Amazon Linux 2023 | The operating system | AWS's supported Linux with SELinux, systemd, dnf, and built-in SSM agent. Receives security patches through Patch Manager. |
| Service | What it does | Why it's used here |
|---|---|---|
| VPC | Your private network inside AWS | Required container for all other resources. You provide an existing VPC — this project does not create one. |
| Security Groups | Stateful firewall rules on each resource | Replaces iptables. The EC2 instance only accepts traffic from the ALB; the ALB only accepts traffic from your CIDR. No port 22 (SSH) is opened. |
| Application Load Balancer (ALB) | Terminates HTTPS and forwards HTTP to the instance | Replaces running Nginx or certbot directly on the server. The ALB handles TLS so the instance only sees plain HTTP on port 80. |
| ACM (Certificate Manager) | Provisions and auto-renews the TLS certificate | Replaces certbot/Let's Encrypt. The certificate is free, AWS-managed, and never expires unexpectedly. No cron job needed. |
| WAF v2 | Web application firewall in front of the ALB | Blocks common web attacks (OWASP top 10, SQL injection, known bad inputs) using AWS-managed rule sets — without configuring ModSecurity on the instance. |
| CloudFront | Global CDN that caches static assets near users | Optional. Speeds up page loads for geographically distributed users by serving cached CSS/JS/images from edge locations. |
| VPC Endpoints | Private network paths to AWS services (S3, SSM, Secrets Manager, CloudWatch) | Without these, the instance needs an internet gateway to call AWS APIs. Endpoints keep that traffic on the AWS private network and eliminate the internet path. The interface endpoints cost ~$35/mo, so they are disabled by default in the test environment. |
| Service | What it does | Why it's used here |
|---|---|---|
| EBS (Elastic Block Store) | The instance's root disk | Replaces your server's local hard drive. Encrypted at rest. The ASG attaches a fresh volume on each launch unless EFS is used for the web root. |
| EFS (Elastic File System) | Network filesystem mounted at /var/www/hubzero | Replaces an NFS share. When the ASG replaces an instance, EFS means the HubZero files survive — the new instance mounts the same filesystem and picks up exactly where the old one left off. Required for the spot profile. |
| S3 | Object storage for HubZero file uploads | Replaces storing uploads on local disk. Cheaper than EBS per GB, durable across availability zones, and accessible even if the instance is replaced. |
| DLM (Data Lifecycle Manager) | Takes daily EBS snapshots automatically | Replaces a cron job that runs aws ec2 create-snapshot. Snapshots are retained for 7 days (30 for prod) and tagged for easy identification. The policy is retained (not deleted) when the stack is destroyed so in-flight snapshots are not interrupted. |
| Service | What it does | Why it's used here |
|---|---|---|
| RDS MariaDB | Managed relational database (optional; `use_rds=true`) | Replaces running MariaDB on the same instance. AWS handles OS patching, automated backups, storage autoscaling, and Multi-AZ failover. You never SSH into the database server. Set `use_rds=false` for test environments to save ~$55/mo. |
| Secrets Manager | Stores the RDS master password | Replaces passwords in config files. The password is auto-generated at deploy time, rotatable, and never written to the Terraform state file in plaintext. The instance retrieves it at boot via IAM role. |
| Service | What it does | Why it's used here |
|---|---|---|
| SSM Session Manager | Browser/CLI shell access to the instance | Replaces SSH. No port 22, no key pair to manage, no bastion host. Access is controlled by IAM. Session activity can be logged to CloudWatch. |
| SSM Parameter Store | Stores runtime configuration (domain, DB host, S3 bucket, etc.) | Replaces hard-coded values in config files or environment variables baked into the AMI. The instance reads its configuration at boot so the same AMI works in every environment. |
| SSM Patch Manager | Applies OS security patches on a schedule | Replaces a cron job running dnf update. Patches are applied weekly during a Sunday maintenance window; only Security/Critical patches with a 7-day approval delay. |
| IAM Role | Grants the EC2 instance permission to call AWS APIs | The instance never needs long-lived credentials. It authenticates to S3, SSM, Secrets Manager, and CloudWatch using its attached role — no aws configure on the server. |
| CloudWatch Logs | Centralises Apache and bootstrap logs | Replaces tailing log files over SSH. Log groups survive instance replacement; you can search and alert on them without connecting to the instance. |
| CloudWatch Alarms + SNS | Sends alerts when CPU, memory, disk, or RDS metrics exceed thresholds | Replaces Nagios/Zabbix for basic health monitoring. Alarm → SNS topic → email when something needs attention. |
- AWS account with permissions to create EC2, RDS, ALB, IAM, and related resources (see docs/getting-started-aws.md for IAM setup)
- AWS CLI v2 configured (`aws configure`)
- Terraform >= 1.5 or Node.js >= 18 with AWS CDK

Verify your credentials:

```bash
aws sts get-caller-identity
```

The fastest path is a test deployment with all defaults. You need a VPC ID, one public subnet ID, and your current public IP.
Find your VPC and subnet:
```bash
# List VPCs
aws ec2 describe-vpcs \
  --query 'Vpcs[*].[VpcId,IsDefault,Tags[?Key==`Name`].Value|[0]]' \
  --output table

# List public subnets in a VPC
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<vpc-id>" \
            "Name=map-public-ip-on-launch,Values=true" \
  --query 'Subnets[*].[SubnetId,AvailabilityZone,CidrBlock]' \
  --output table

# Your current public IP
curl -s https://checkip.amazonaws.com
```

```bash
# Bootstrap state backend (one-time per account/region)
bash scripts/bootstrap-terraform-backend.sh

cd terraform
terraform init

# Edit environments/test.tfvars and add your three required values:
#   vpc_id       = "vpc-..."
#   subnet_id    = "subnet-..."
#   allowed_cidr = "YOUR_IP/32"
#
# test.tfvars already sets deployment_profile=minimal with the
# cost-saving options (no RDS, no ALB, no VPC endpoints) — total ~$35/mo.
terraform apply -var-file=environments/test.tfvars
```

Deployment takes 10–15 minutes. `terraform apply` itself finishes in 2–3 minutes (infrastructure created), but the EC2 instance then bootstraps in the background — downloading packages, installing PHP/Apache, and cloning HubZero. See Monitoring the bootstrap below.
```bash
cd cdk
npm install
cp cdk.context.example.json cdk.context.json
# Edit cdk.context.json: set vpcId and allowedCidr
# The example already uses deploymentProfile=minimal with cost-saving defaults
npx cdk bootstrap   # one-time per account/region
npx cdk deploy -c environment=test
```

Before `terraform init`, create the state S3 bucket and DynamoDB lock table:

```bash
bash scripts/bootstrap-terraform-backend.sh
```

Or update the `backend "s3"` block in terraform/main.tf to reference your existing state bucket.
```bash
cd terraform
terraform init

# Test (minimal, single subnet OK)
terraform apply -var-file=environments/test.tfvars \
  -var='vpc_id=vpc-xxx' \
  -var='subnet_id=subnet-xxx' \
  -var='allowed_cidr=1.2.3.4/32'

# Staging (RDS requires 2 subnets in different AZs)
terraform apply -var-file=environments/staging.tfvars \
  -var='vpc_id=vpc-xxx' \
  -var='subnet_id=subnet-xxx' \
  -var='allowed_cidr=0.0.0.0/0' \
  -var='domain_name=hub.example.com' \
  -var='rds_subnet_ids=["subnet-aaa","subnet-bbb"]'

# Production
terraform apply -var-file=environments/prod.tfvars \
  -var='vpc_id=vpc-xxx' \
  -var='subnet_id=subnet-xxx' \
  -var='allowed_cidr=0.0.0.0/0' \
  -var='domain_name=hub.example.com' \
  -var='[email protected]' \
  -var='rds_subnet_ids=["subnet-aaa","subnet-bbb","subnet-ccc"]'
```

```bash
cd cdk
npm install
cp cdk.context.example.json cdk.context.json
# Edit cdk.context.json — set vpcId, allowedCidr, and desired feature flags

# One-time per account/region (requires CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION):
CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query Account --output text) \
CDK_DEFAULT_REGION=us-east-1 \
npx cdk bootstrap

# Deploy (environment defaults to "test"; override with -c environment=staging/prod):
CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query Account --output text) \
CDK_DEFAULT_REGION=us-east-1 \
npx cdk deploy

# Staging / production — pass domain overrides via context:
CDK_DEFAULT_ACCOUNT=... CDK_DEFAULT_REGION=us-east-1 \
npx cdk deploy -c environment=staging -c domainName=hub.example.com
```

Note: `CDK_DEFAULT_ACCOUNT` and `CDK_DEFAULT_REGION` are required for VPC lookups. The region must match where your VPC lives.
After `terraform apply` completes, the EC2 instance bootstraps in the background.
Total time: 10–15 minutes (from the base AL2023 AMI). You can watch progress with:

```bash
# 1. Find the instance ID launched by the ASG
ASG_NAME=$(terraform -chdir=terraform output -raw asg_name)
INSTANCE_ID=$(aws ec2 describe-instances \
  --filters "Name=tag:aws:autoscaling:groupName,Values=${ASG_NAME}" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[0].Instances[0].InstanceId' --output text)
echo "Instance: $INSTANCE_ID"

# 2. Wait for SSM to become available (~60–90 seconds after launch), then fetch
#    recent log output. (Not tail -f: RunShellScript only returns output once
#    the command exits, so a follow would never complete.)
aws ssm send-command \
  --instance-ids "$INSTANCE_ID" \
  --document-name AWS-RunShellScript \
  --parameters 'commands=["tail -n 200 /var/log/cloud-init-output.log"]' \
  --output text --query 'Command.CommandId'

# Then poll the command output (replace COMMAND_ID):
aws ssm get-command-invocation \
  --command-id COMMAND_ID --instance-id "$INSTANCE_ID" \
  --query 'StandardOutputContent' --output text
```

Or for a live-streaming session (requires the SSM Session Manager plugin):
```bash
aws ssm start-session --target "$INSTANCE_ID"

# Inside the session:
sudo tail -f /var/log/cloud-init-output.log
sudo tail -f /var/log/hubzero-bake.log      # package install phase
sudo tail -f /var/log/hubzero-userdata.log  # configuration phase
```

Bootstrap is complete when you see:

```
=== HubZero bootstrap completed at <timestamp> ===
```
To check instance health at a glance:
```bash
# Instance state
aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].[State.Name,PublicIpAddress]' --output table

# SSM connectivity (should show "Online")
aws ssm describe-instance-information \
  --filters "Key=InstanceIds,Values=$INSTANCE_ID" \
  --query 'InstanceInformationList[0].PingStatus' --output text
```

There is no SSH port — access is exclusively via SSM Session Manager. No EC2 key pair is required.
The deploy outputs a ready-to-run `ssm_connect_command` / `SsmConnect` that looks up the running instance dynamically:

```bash
# Terraform — copy the ssm_connect_command output value, e.g.:
aws ec2 describe-instances \
  --filters 'Name=tag:aws:autoscaling:groupName,Values=hubzero-test-...' \
            'Name=instance-state-name,Values=running' \
  --query 'Reservations[0].Instances[0].InstanceId' --output text \
  | xargs -I{} aws ssm start-session --target {}

# CDK — copy the SsmConnect output value (same pattern)
```

Once connected, monitor the bootstrap log:

```bash
sudo tail -f /var/log/hubzero-userdata.log
```

Bootstrap completes in roughly 3–5 minutes when using a pre-baked AMI, or 10–15 minutes from the base AL2023 AMI.
```bash
cd terraform

# Terraform (pass the same vars used at apply time)
terraform destroy -var-file=environments/test.tfvars \
  -var='vpc_id=vpc-...' \
  -var='subnet_id=subnet-...' \
  -var='allowed_cidr=0.0.0.0/0'
```

Important: `aws_region` must match the region used during `terraform apply`. If your region differs from the default (us-east-1), either uncomment the `aws_region` line in your `.tfvars` file (recommended) or pass `-var='aws_region=<your-region>'` explicitly. Omitting it causes some operations (S3, SNS) to target the wrong region during destroy.
The S3 bucket has `force_destroy = true`, so Terraform empties and deletes it automatically. After `terraform destroy` completes, check for these resources that Terraform does not delete:

```bash
# EBS snapshots created by the DLM lifecycle policy
aws ec2 describe-snapshots --owner-ids self --region us-west-2 \
  --filters "Name=tag:Project,Values=hubzero" \
  --query 'Snapshots[*].[SnapshotId,StartTime]' --output table

# Delete them (replace IDs):
aws ec2 delete-snapshot --snapshot-id snap-xxx --region us-west-2
```

To also remove the Terraform state backend (S3 bucket + DynamoDB table), run the teardown script after `terraform destroy`:
```bash
TF_STATE_BUCKET=hubzero-terraform-state-<account-id> \
TF_LOCK_TABLE=hubzero-terraform-locks \
AWS_REGION=us-west-2 \
bash scripts/teardown-terraform-backend.sh
```

The script verifies the state is empty before deleting. Pass `--force` to skip the confirmation prompt.
CloudWatch log groups are deleted by `terraform destroy`. If they were created outside Terraform, delete them manually:

```bash
aws logs delete-log-group --log-group-name /aws/ec2/hubzero-test/userdata --region us-west-2
aws logs delete-log-group --log-group-name /aws/ec2/hubzero-test/apache-access --region us-west-2
aws logs delete-log-group --log-group-name /aws/ec2/hubzero-test/apache-error --region us-west-2
```

```bash
cd cdk
CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query Account --output text) \
CDK_DEFAULT_REGION=us-east-1 \
npx cdk destroy
```

CDK empties and deletes the S3 bucket automatically (via a custom resource Lambda). CloudWatch log groups and most resources are deleted cleanly. One resource is intentionally retained after destroy:
- DLM lifecycle policy — left in the account with `RemovalPolicy.RETAIN` so any in-progress snapshots are not interrupted. Delete manually after destroy:

```bash
# List orphaned DLM policies
aws dlm get-lifecycle-policies --query 'Policies[*].[PolicyId,Description,State]' --output table

# Delete by policy ID
aws dlm delete-lifecycle-policy --policy-id policy-xxxxxxxxxxxxxxxxx
```

Instance type is set by `deployment_profile` (default: t3.medium). Use `instance_type` to override. EBS volume and RDS sizing are per-environment:
| Environment | EBS | RDS Class (when `use_rds=true`) | RDS Storage | Multi-AZ |
|---|---|---|---|---|
| test | 30 GB | db.t3.medium | 20 GB | No |
| staging | 100 GB | db.r6g.xlarge | 100 GB | No |
| prod | 200 GB | db.r6g.2xlarge | 500 GB | Yes |
When `enable_alb=true` (default) and a `domain_name` is set:

- An ACM certificate is provisioned with DNS validation.
- The deploy outputs a CNAME record under `acm_certificate_validation_cname`.
- Add that CNAME to your DNS provider.
- Once ACM validates the domain, the ALB HTTPS listener activates.
No certbot is involved. TLS terminates at the ALB; the EC2 instance receives plain HTTP from the load balancer on port 80.
For test environments without a domain name, set enable_alb=false to skip
the ALB entirely and access the instance directly over HTTP.
Using a pre-baked AMI makes instance launches 3–5× faster and ensures identical environments across replacements.
```bash
cd packer
packer init .

# Build (requires AWS credentials with EC2 permissions)
GIT_SHA=$(git rev-parse --short HEAD) packer build hubzero.pkr.hcl
```

The resulting AMI is named `hubzero-base-YYYY-MM-DD`. Terraform and CDK automatically prefer it over the base AL2023 AMI when `use_baked_ami=true` (the default).

To bake a new AMI after system updates:

```bash
GIT_SHA=$(git rev-parse --short HEAD) packer build \
  -var aws_region=us-east-1 hubzero.pkr.hcl
```

- No SSH port — SSM Session Manager is the only access path
- IMDSv2 enforced — token-based instance metadata, hop limit 1
- ALB + WAF v2 — CommonRuleSet, KnownBadInputsRuleSet, SQLiRuleSet in Block mode
- ACM TLS — AWS-managed certificate with automatic renewal
- VPC endpoints — S3 (gateway), SSM, SSMMessages, EC2Messages, SecretsManager, Logs (interface); no internet egress required for AWS API calls
- Encrypted storage — EBS, RDS, EFS, and S3 all encrypted at rest (AES-256 / KMS)
- RDS managed credentials — master password in Secrets Manager, never in state
- SSM Parameter Store — runtime configuration injected at boot, not hard-coded
- SSM Patch Manager — weekly Sunday 03:00 UTC maintenance window; Security/Critical+Important patches; 7-day auto-approval
- fail2ban — Apache brute-force rate limiting
- HSTS + security headers —
Strict-Transport-Security,X-Content-Type-Options,X-Frame-Options,Content-Security-Policy, and more - PHP hardening —
expose_php = Off,open_basedir, secure session cookies - Daily EBS snapshots — DLM lifecycle policy (7-day retention; 30 days for prod)
- RDS automated backups — 7-day retention (14 days for prod), deletion protection in prod
- Docker hardening — user namespace remapping, read-only filesystem, digest-pinned images, resource limits (when
install_platform=true)
| Variable | Description |
|---|---|
| `vpc_id` / `vpcId` | Existing VPC ID |
| `subnet_id` | Public subnet for EC2 / ALB (Terraform) |
| `allowed_cidr` / `allowedCidr` | CIDR for ALB ingress — use 0.0.0.0/0 only when behind WAF |
| `environment` | `test`, `staging`, or `prod` |
| Variable | Default | Description |
|---|---|---|
| `aws_region` | us-east-1 | AWS region |
| `domain_name` / `domainName` | "" | Domain for ACM cert + HTTPS |
| `enable_alb` / `enableAlb` | true | ALB with HTTPS termination |
| `acm_certificate_arn` | "" | Bring-your-own ACM cert ARN (empty = create new) |
| `enable_cdn` / `enableCdn` | false | CloudFront CDN in front of ALB |
| `enable_vpc_endpoints` | true | VPC endpoints for AWS services |
| Variable | Default | Description |
|---|---|---|
| `deployment_profile` / `deploymentProfile` | minimal | minimal, graviton, or spot — see Deployment Profiles |
| `instance_type` / `instanceType` | "" | Override the profile's instance size (e.g. t3.large) |
| `use_baked_ami` / `useBakedAmi` | true | Prefer pre-baked hubzero-base-* AMI; auto-selects correct architecture |
| `key_name` / `keyName` | "" | EC2 key pair (optional; SSM is preferred) |
| `use_rds` / `useRds` | true | RDS MariaDB; false = local DB. Must be true for the spot profile |
| `rds_subnet_ids` | [] | ≥2 subnet IDs in different AZs (required when use_rds=true) |
| `enable_s3_storage` / `enableS3Storage` | true | S3 bucket for HubZero file uploads |
| `enable_efs` / `enableEfs` | true | EFS shared web root. Must be true for the spot profile |
| `efs_subnet_ids` | [] | EFS mount target subnets (defaults to subnet_id) |
| Variable | Default | Description |
|---|---|---|
| `enable_waf` / `enableWaf` | true | WAF v2 on ALB (requires enable_alb) |
| `enable_patch_manager` | true | SSM Patch Manager weekly patching |
| Variable | Default | Description |
|---|---|---|
| `enable_monitoring` / `enableMonitoring` | true | CloudWatch metrics, alarms, log groups |
| `alarm_email` / `alarmEmail` | "" | SNS email for CloudWatch alarm notifications |
| `enable_parameter_store` | true | Store config in SSM Parameter Store |
| Variable | Default | Description |
|---|---|---|
| `install_platform` / `installPlatform` | false | Docker + Apache Solr 9.7 |
| `certbot_email` / `certbotEmail` | "" | Email for certbot (only used when enable_alb=false) |
| Output | Description |
|---|---|
| `web_url` | Full URL (CloudFront > domain > ALB DNS) |
| `asg_name` | Auto Scaling Group name |
| `alb_dns_name` | ALB DNS name (empty if ALB disabled) |
| `cloudfront_domain` | CloudFront domain (empty if CDN disabled) |
| `ssm_connect_command` | Ready-to-run SSM session command |
| `rds_endpoint` | RDS endpoint (N/A if local DB) |
| `efs_id` | EFS file system ID |
| `s3_bucket_name` | S3 bucket name for file storage |
| `acm_certificate_validation_cname` | DNS CNAME to add for ACM validation |
| `sns_topic_arn` | SNS alarm topic ARN |
See docs/migration-guide.md for a step-by-step guide to migrating an existing on-premises HubZero instance (database dump, app file sync, and configuration update).
```bash
# Terraform
cd terraform
terraform destroy -var-file=environments/test.tfvars \
  -var='vpc_id=vpc-xxx' -var='subnet_id=subnet-xxx' -var='allowed_cidr=0.0.0.0/0'

# CDK
cd cdk
npx cdk destroy -c environment=test
```

Production resources have deletion protection. Disable before destroying:
```bash
# RDS deletion protection
aws rds modify-db-instance \
  --db-instance-identifier <id> --no-deletion-protection --apply-immediately

# EFS (must delete mount targets first)
aws efs describe-mount-targets --file-system-id <efs-id> \
  --query 'MountTargets[*].MountTargetId' --output text \
  | xargs -n1 aws efs delete-mount-target --mount-target-id
```

```
├── docs/
│   ├── getting-started-aws.md         # AWS primer for new users
│   └── migration-guide.md             # On-prem to AWS migration guide
├── packer/
│   └── hubzero.pkr.hcl                # Packer template for baked AMI
├── scripts/
│   ├── bake.sh                        # Static installs baked into AMI
│   ├── userdata.sh                    # Launch-time env-specific bootstrap
│   ├── migrate.sh                     # On-prem migration script
│   └── bootstrap-terraform-backend.sh
├── terraform/
│   ├── main.tf                        # All AWS resources
│   ├── variables.tf
│   ├── outputs.tf
│   └── environments/
│       ├── test.tfvars
│       ├── staging.tfvars
│       └── prod.tfvars
└── cdk/
    ├── bin/app.ts
    ├── lib/hubzero-stack.ts           # CDK stack (feature-parity with Terraform)
    ├── cdk.context.example.json
    └── package.json
```
Costs in us-east-1 (on-demand pricing). Profile determines the EC2 cost; other services are the same across profiles.
| Profile | EC2 | Local MariaDB | No ALB | No VPC endpoints | Total |
|---|---|---|---|---|---|
| `minimal` (t3.medium) | ~$30/mo | $0 | $0 | $0 | ~$35/mo |
| `graviton` (t4g.medium) | ~$24/mo | $0 | $0 | $0 | ~$29/mo |
| `spot` (t3.medium spot) | ~$5/mo | — | — | — | ~$65/mo (spot + RDS + EFS required) |
| Resource | minimal | graviton | spot |
|---|---|---|---|
| EC2 | ~$30/mo | ~$24/mo | ~$5/mo |
| RDS db.r6g.2xlarge (Multi-AZ, prod) | ~$740/mo | ~$740/mo | ~$740/mo |
| ALB | ~$20/mo | ~$20/mo | ~$20/mo |
| EFS (10 GB) | ~$3/mo | ~$3/mo | ~$3/mo |
| S3 + CloudWatch | ~$15/mo | ~$15/mo | ~$15/mo |
| VPC endpoints (5 interface) | ~$35/mo | ~$35/mo | ~$35/mo |
| Total | ~$843/mo | ~$837/mo | ~$818/mo |
For a typical small research hub the RDS instance can be sized down significantly (db.t3.medium at ~$55/mo instead of db.r6g.2xlarge). Use AWS Pricing Calculator for precise estimates.