This guide covers migrating an existing on-premises HubZero deployment to the AWS infrastructure in this project.
- AWS deployment is running and healthy (bootstrap complete — see Monitoring the Bootstrap; if using `enable_alb=true`, the ALB target group shows the instance as healthy)
- SSH or SCP access from the AWS instance to the on-prem server
- On-prem HubZero version is v2.4 (or migration SQL adjustments are planned)
- Sufficient disk space: on EFS (`/var/www/hubzero`) for app files, and `/tmp` for the database dump
- Maintenance window scheduled and users notified
- DNS TTL lowered to 300 seconds at least 24 hours before cutover
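To verify the disk-space item before scheduling the window, a rough sizing check can be run from the AWS instance. This is a sketch: the key path, hostname, and the `hubzero` schema name are the placeholder values used throughout this guide.

```shell
# Size of the on-prem app tree (the dump will be smaller once gzipped)
ssh -i /path/to/key user@onprem-server "du -sh /var/www/hubzero/app"

# Size of the on-prem database in MiB
ssh -i /path/to/key user@onprem-server \
    "mysql -u root -N -e 'SELECT ROUND(SUM(data_length+index_length)/1024/1024) FROM information_schema.tables WHERE table_schema=\"hubzero\"'"

# Free space on the AWS side: EFS for app files, /tmp for the dump
df -h /var/www/hubzero /tmp
```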
| Component | Method | Notes |
|---|---|---|
| MariaDB database | mysqldump → import | All users, content, config, CMS data |
| App files (`app/`) | rsync | Uploads, custom extensions, templates, project files |
| Configuration | Automated rewrite | DB host and credentials updated by the script |
| Solr search index | Rebuilt from DB | Faster and more reliable than transferring |
| Component | Action Required |
|---|---|
| DNS | Point A/CNAME to the ALB DNS name (not the EC2 IP — the instance may be replaced by the ASG) |
| TLS | Handled by ACM on the ALB automatically — no certbot needed |
| SMTP/Email | Configure SES or update SMTP settings in hub config |
| LDAP/SSO | Update authentication config to reach identity providers from AWS |
| Tool sessions | Export/import Docker images if using the full Platform (install_platform=true) |
| Custom cron jobs | Recreate system-level cron jobs not managed by HubZero |
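The cron-jobs row is easy to overlook. One way to inventory system-level cron jobs for later recreation, using the same placeholder key path and hostname as the rest of this guide:

```shell
# Capture the on-prem cron inventory for review (run from the AWS instance)
ssh -i /path/to/key user@onprem-server \
    "crontab -l 2>/dev/null; echo '--- /etc/cron.d ---'; ls -l /etc/cron.d" \
    > /tmp/onprem-cron.txt
```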
With this deployment, the load balancer DNS name (e.g.
`hubzero-prod-xxxx.us-east-1.elb.amazonaws.com`) is the target for your
CNAME — not an EC2 IP address. The Auto Scaling Group may replace the EC2
instance at any time (e.g. during SSM Patch Manager maintenance), so do not
use the instance's IP.

If you provisioned an ACM certificate with `domain_name` set, the ALB already
serves HTTPS. Update your DNS CNAME record to point to the `alb_dns_name`
output value.
There is no SSH port. Use SSM Session Manager:

```shell
# Use the ssm_connect_command from your Terraform/CDK output, or:
aws ec2 describe-instances \
    --filters "Name=tag:aws:autoscaling:groupName,Values=<asg-name>" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[0].Instances[0].InstanceId' --output text \
  | xargs -I{} aws ssm start-session --target {}
```

Replace `<asg-name>` with the `asg_name` / `AsgName` output from your deploy.
The migration runs on the AWS instance and pulls data from the on-prem server over SSH. If your on-prem server is not publicly reachable, you will need to set up a bastion or VPN.
```shell
# From within the SSM session on the AWS instance:
ssh -i /path/to/key user@onprem-server hostname
```

If you have no direct path, use the S3 bucket as an intermediary (see Manual migration via S3 below).
Perform these steps from within the SSM session on the AWS instance.
Step 3a — Put on-prem hub into maintenance mode (edit via on-prem admin panel or config file, or simply schedule the migration during low-traffic hours).
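If you prefer to flip the flag from the command line, here is a minimal sketch. It assumes a Joomla-style `configuration.php` with an `$offline` flag, which HubZero's CMS layer typically uses; verify the path and variable name on your install first.

```shell
# Assumed on-prem config path; adjust per the path notes in Step 3c
CONF=${CONF:-/var/www/hubzero/app/config/configuration.php}
if [ -f "$CONF" ]; then
  # Flip the offline flag in place, keeping a .bak copy for instant rollback
  sed -i.bak "s/\$offline = '0'/\$offline = '1'/" "$CONF"
fi
```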
Step 3b — Dump and import the database:
```shell
# On the AWS instance: dump from on-prem over SSH and pipe directly into RDS
# Source the DB credentials written by userdata.sh
source /root/.hubzero-credentials

# Dump the on-prem database over SSH and import in one pipeline
ssh -i /path/to/key user@onprem-server \
    "mysqldump -u root --single-transaction --routines hubzero" \
  | mysql -h "${HUBZERO_DB_HOST}" -u "${HUBZERO_DB_USER}" \
          -p"${HUBZERO_DB_PASS}" "${HUBZERO_DB_NAME}"
```

If the dump is large (>1 GB), dump to a file first:
```shell
ssh -i /path/to/key user@onprem-server \
    "mysqldump -u root --single-transaction --routines hubzero | gzip" \
    > /tmp/hubzero-dump.sql.gz
gunzip < /tmp/hubzero-dump.sql.gz \
  | mysql -h "${HUBZERO_DB_HOST}" -u "${HUBZERO_DB_USER}" \
          -p"${HUBZERO_DB_PASS}" "${HUBZERO_DB_NAME}"
```

Step 3c — Sync the `app/` directory (uploads, extensions, templates):
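Before moving on, it is worth a quick sanity check that the import landed. A sketch comparing table counts on both sides, using the placeholder key path, hostname, and schema name from this guide:

```shell
# Count tables in the source schema (on-prem) and the imported schema (RDS)
SRC=$(ssh -i /path/to/key user@onprem-server \
      "mysql -u root -N -e \"SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='hubzero'\"")
DST=$(mysql -h "${HUBZERO_DB_HOST}" -u "${HUBZERO_DB_USER}" -p"${HUBZERO_DB_PASS}" \
      -N -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${HUBZERO_DB_NAME}'")
echo "source tables: ${SRC}  imported tables: ${DST}"
[ "$SRC" = "$DST" ] || echo "WARNING: table counts differ"
```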
The on-prem `app/` directory path depends on your existing installation:

- HubZero 2.4 on-prem: `/var/www/hubzero/app/`
- Older HubZero installs: `/var/www/html/app/` or `/var/www/html/hubzero/app/`
```shell
# On the AWS instance — rsync from on-prem (EFS is already mounted at /var/www/hubzero)
rsync -az --delete \
    -e "ssh -i /path/to/key" \
    user@onprem-server:/var/www/hubzero/app/ \
    /var/www/hubzero/app/
chown -R apache:apache /var/www/hubzero/app/
```

Step 3d — Update database connection settings:
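Because `--delete` removes anything on EFS that is absent on-prem, a dry run first can save surprises. Same placeholder paths as above:

```shell
# -n (dry run) lists what would change without transferring or deleting anything
rsync -azn --stats --delete \
    -e "ssh -i /path/to/key" \
    user@onprem-server:/var/www/hubzero/app/ \
    /var/www/hubzero/app/
```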
Edit `/var/www/hubzero/app/config/database.php` to use the AWS database
credentials (already written to `/root/.hubzero-credentials` by userdata.sh):

```shell
source /root/.hubzero-credentials

# Verify the current config:
grep -E "host|database|username" /var/www/hubzero/app/config/database.php
```

Update the host, database name, username, and password to match the AWS values.
The exact format depends on your HubZero version; use the admin panel
(/administrator → Global Configuration → Database) if available.
Step 3e — Restart services:
```shell
sudo systemctl restart httpd php-fpm
```

Progress of each step is visible in the terminal. If any step fails, the on-prem server is unaffected (the dump is read-only; maintenance mode can be cleared at any time).
Test the migrated site using a hosts-file override on your local machine (bypasses DNS; talks directly to the ALB):
```
# Get the ALB's IP (will change over time — use only for testing)
dig +short <alb-dns-name>
# e.g. dig +short hubzero-prod-xxxx.us-east-1.elb.amazonaws.com

# Add to /etc/hosts (macOS/Linux) or
# C:\Windows\System32\drivers\etc\hosts (Windows):
<alb-ip> hub.example.edu
```

Verify:
- Homepage loads over HTTPS with a valid certificate
- You can log in with an existing account
- Uploaded files and images are accessible (served from EFS / S3)
- Search returns results (index rebuild may take a few minutes)
- Admin panel accessible
Remove the hosts-file entry after testing.
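If you'd rather not edit the hosts file at all, curl's `--resolve` flag pins a hostname to an IP for a single request. A sketch using the example hostname from this guide:

```shell
# Pin hub.example.edu to the ALB IP for this request only (no hosts-file edit)
curl -sI --resolve hub.example.edu:443:<alb-ip> https://hub.example.edu/ | head -5
```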
Update your DNS provider to point the hub domain to the ALB DNS name:
```
hub.example.edu. CNAME hubzero-prod-xxxx.us-east-1.elb.amazonaws.com.
```
Do not use an A record pointing to the EC2 IP — the ASG may replace the instance. An A record works for testing but is not safe for production.
With a 300-second TTL, DNS propagation typically completes within 5–10 minutes globally.
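You can watch propagation with `dig`, comparing a public resolver against your default one:

```shell
# Both should return the ALB CNAME once the change has propagated
dig +short CNAME hub.example.edu @8.8.8.8
dig +short CNAME hub.example.edu
```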
Configure email (SES recommended):
```shell
# In the HubZero admin panel:
#   Admin → Global Configuration → Mail Settings
# Or edit app/config/mail.php

# To use SES:
#   1. Verify your domain in SES
#   2. Request production access (to send to non-verified addresses)
#   3. Create SMTP credentials in SES
#   4. Update mail settings: SMTP host = email-smtp.us-east-1.amazonaws.com, port 587
```

Migrate tool session Docker images (if using full Platform):
```shell
# On the on-prem server — save and compress the image
docker save <image-name> | gzip > tool-image.tar.gz

# Upload to the S3 bucket provisioned by this project
aws s3 cp tool-image.tar.gz s3://<s3-bucket-name>/tool-images/

# In the SSM session on the AWS instance — download and load
aws s3 cp s3://<s3-bucket-name>/tool-images/tool-image.tar.gz /tmp/
docker load < /tmp/tool-image.tar.gz
```

Remove or disable the on-prem server once the migration is verified and DNS has propagated. The migration script does not modify the source server (except to temporarily set maintenance mode), so it is safe to keep running in parallel for a rollback window.
If direct SSH from AWS to on-prem is not available, use S3 as an intermediary:
```shell
# On the on-prem server — dump and upload
mysqldump -u root --single-transaction --routines hubzero | gzip | \
  aws s3 cp - s3://<s3-bucket>/migration/hubzero-db.sql.gz

# Adjust the source path to match your on-prem install:
#   HubZero 2.4: /var/www/hubzero/app/
#   Older installs: /var/www/html/app/ or /var/www/html/hubzero/app/
rsync -az --delete /var/www/hubzero/app/ /tmp/hubzero-app-snapshot/

# Archive with relative paths (-C) so it extracts cleanly under app/ on the AWS side
tar czf - -C /tmp/hubzero-app-snapshot . | \
  aws s3 cp - s3://<s3-bucket>/migration/hubzero-app.tar.gz
```
```shell
# On the AWS instance (via SSM session) — download and import
source /root/.hubzero-credentials

aws s3 cp s3://<s3-bucket>/migration/hubzero-db.sql.gz /tmp/
gunzip < /tmp/hubzero-db.sql.gz | \
  mysql -h "${HUBZERO_DB_HOST}" -u "${HUBZERO_DB_USER}" \
        -p"${HUBZERO_DB_PASS}" "${HUBZERO_DB_NAME}"

aws s3 cp s3://<s3-bucket>/migration/hubzero-app.tar.gz /tmp/
tar xzf /tmp/hubzero-app.tar.gz -C /var/www/hubzero/app/
chown -R apache:apache /var/www/hubzero/app/
```

The credentials file at `/root/.hubzero-credentials` contains the DB
host, name, user, and password (or Secrets Manager ARN) for use in the
import commands above.
The migration steps are non-destructive on the on-prem server (mysqldump is read-only; maintenance mode can be cleared immediately). If anything goes wrong:
- Revert DNS to the on-prem server's address.
- Clear maintenance mode on the on-prem server if it was set.
- Any database dump files written to `/tmp/` on the AWS instance are preserved for post-mortem analysis.
The AWS deployment is unaffected by a DNS rollback — you can re-attempt the migration after addressing any issues.
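If your DNS happens to live in Route53, the revert is a single call. This is a sketch with placeholder zone ID and on-prem IP; other providers have equivalent APIs or a web console:

```shell
# Point the record back at the on-prem server (placeholder values throughout)
aws route53 change-resource-record-sets \
    --hosted-zone-id <zone-id> \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
      "Name":"hub.example.edu","Type":"A","TTL":300,
      "ResourceRecords":[{"Value":"<onprem-ip>"}]}}]}'
```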