Building a CI/CD Pipeline for a Static Site

How I wired GitHub Actions to automatically sync my portfolio to S3 and invalidate CloudFront on every push to main — zero manual deploys.

Why Automate a Static Site?

When I first set up this portfolio on AWS S3 + CloudFront, I was manually running aws s3 sync from my terminal every time I made a change. It worked, but it was tedious — and more importantly, it wasn't the right way to do things. A real deployment pipeline means pushing code and walking away. The infrastructure does the rest.

This post walks through exactly how I set up a GitHub Actions CI/CD pipeline that deploys this site automatically on every push to main.

The Stack

- AWS S3: hosts the static files
- CloudFront: the CDN in front of the bucket
- AWS IAM: a scoped deploy user for CI
- GitHub Secrets: encrypted storage for the AWS credentials
- GitHub Actions: runs the deploy on every push to main

Step 1 — Create a Scoped IAM Deploy User

The first thing I did was create a dedicated IAM user for deployments. Never use your root account or personal IAM user for CI/CD — if the credentials leak, the blast radius needs to be minimal.

The deploy user has only two permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::YOUR_ACCOUNT_ID:distribution/YOUR_DIST_ID"
    }
  ]
}
```
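As a quick sanity check before attaching the policy, you can parse it and verify that it grants exactly the four actions the pipeline needs and nothing more. This is a sketch I'd run locally, not part of the pipeline itself:

```python
import json

# The scoped deploy policy from above (ARNs are placeholders).
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::YOUR_ACCOUNT_ID:distribution/YOUR_DIST_ID"
    }
  ]
}
""")

def granted_actions(doc):
    """Collect every action the policy document allows."""
    actions = set()
    for stmt in doc["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        action = stmt["Action"]
        # "Action" may be a single string or a list of strings.
        actions.update([action] if isinstance(action, str) else action)
    return actions

# The deploy user should be able to do exactly these four things.
expected = {"s3:PutObject", "s3:DeleteObject", "s3:ListBucket",
            "cloudfront:CreateInvalidation"}
assert granted_actions(policy) == expected
```

If a teammate later widens the policy, a check like this in code review makes the change visible instead of buried in a JSON diff.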

Step 2 — Store Credentials in GitHub Secrets

After creating the IAM user, I generated access keys and stored them in GitHub Secrets under Settings → Secrets and variables → Actions. The workflow below expects five secrets:

- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_REGION
- S3_BUCKET
- CLOUDFRONT_DISTRIBUTION_ID

Never hardcode AWS credentials in your workflow file. GitHub Secrets are encrypted at rest and automatically masked in workflow logs. This is the right pattern for any CI/CD pipeline.

Step 3 — Write the GitHub Actions Workflow

I created .github/workflows/deploy.yml at the root of the repo. It runs on every push to main and does three things: checks out the code, syncs it to S3, and invalidates the CloudFront cache.

```yaml
name: Deploy to S3 + CloudFront

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Sync to S3
        run: |
          aws s3 sync . s3://${{ secrets.S3_BUCKET }} \
            --delete \
            --exclude ".git/*" \
            --exclude ".github/*"

      - name: Invalidate CloudFront cache
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} \
            --paths "/*"
```

Why the --delete Flag Matters

The --delete flag on aws s3 sync removes files from S3 that no longer exist in the repo. Without it, deleted or renamed files stay in the bucket forever — users might hit stale URLs returning old content, and your bucket accumulates junk over time.
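The effect of --delete is easiest to see as a set difference between the local tree and the bucket listing. This is a hypothetical model of the planning step, not the actual sync implementation (which also skips unchanged files by size and timestamp):

```python
# Hypothetical model of `aws s3 sync --delete`:
# local files are candidates for upload, and any bucket key
# with no local counterpart gets pruned.
def plan_sync(local_files, bucket_files):
    upload = set(local_files)                  # candidates to push to S3
    delete = set(bucket_files) - set(local_files)  # --delete: orphaned keys
    return upload, delete

local = {"index.html", "css/site.css", "img/logo.png"}
bucket = {"index.html", "css/site.css", "old-page.html"}

upload, delete = plan_sync(local, bucket)
# Without --delete, "old-page.html" would linger in the bucket forever.
```

The delete set is exactly the stale content described above: keys that exist remotely but were removed or renamed locally.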

Why You Must Invalidate CloudFront

CloudFront caches files at edge locations around the world. Even after S3 is updated, users may still receive the old cached version for hours — or until the TTL expires. The create-invalidation call with path /* forces all edge locations to fetch fresh content from S3 immediately after every deploy.
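The same call the workflow makes via the CLI can also be issued from Python with boto3's create_invalidation. The helper below only builds the request payload; the distribution ID and the "deploy-" caller-reference prefix are illustrative, not from the original pipeline:

```python
import time

def invalidation_batch(paths):
    """Build the InvalidationBatch payload CloudFront expects.

    CallerReference must be unique per request; a timestamp is a
    common choice so a retried deploy doesn't collide with the last one.
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": f"deploy-{int(time.time())}",
    }

# In CI this would be sent like so (sketch, requires AWS credentials):
#   import boto3
#   boto3.client("cloudfront").create_invalidation(
#       DistributionId="YOUR_DIST_ID",
#       InvalidationBatch=invalidation_batch(["/*"]),
#   )
batch = invalidation_batch(["/*"])
```

Note that "Quantity" must match the length of "Items" — CloudFront rejects the request otherwise, which is why the helper derives one from the other.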

Every time I forgot to invalidate the cache during manual deploys, I wasted 10 minutes wondering why my changes weren't showing up. Automating this step eliminated that entirely.

The Full Flow

```text
# What happens on every git push to main:

git push origin main
  → GitHub Actions triggered
  → Checkout repo
  → Configure AWS credentials (from Secrets)
  → aws s3 sync --delete           → files updated in S3
  → cloudfront create-invalidation → cache cleared
  → Live site updated within ~30 seconds
```

Lessons Learned
