SJSU Cloudathon — Full Walkthrough

Fourteen AWS Jam challenges, one team, one day. Real broken AWS accounts, real fixes, real lessons.


Inspiration

The SJSU Cloudathon ran AWS Jam — a competition where every challenge drops you into a freshly provisioned, deliberately broken AWS account and gives you a clock. Patch this server. Encrypt this data lake. Migrate this database to ARM without taking it offline. Build a serverless audit-log pipeline from four disconnected pieces.

We came in wanting to do more than just "click around the console until it turns green." We wanted to leave with a feel for how production cloud engineering actually works — diagnosing problems we'd never seen before, reading error messages literally, and learning when a "successful" command isn't the same as a successful outcome.

What is AWS Jam?

Think of it like a video-game arcade for the cloud. Each challenge drops you into a real, broken AWS account with a job to do: fix a website, secure a server, plug a leak, build a pipeline. You don't read about cloud — you actually log in and fix it.

Every challenge gets its own throwaway AWS account with temporary credentials that expire in a few hours. That keeps it safe and keeps the playing field even. And the lab credentials are deliberately scoped to exactly what the challenge needs — many calls return AccessDenied, which forces you to use the right tool for the job rather than brute-force around it.

The Scoreboard — All 14 Challenges

#  | Challenge                              | Services                                     | Outcome
1  | Hello AWS Jam                          | S3 · Static Hosting                          | Solved
2  | Prepare to Fail(over)                  | ALB · Target Groups · Stickiness             | Solved
3  | Protect my CloudFront Origin           | CloudFront · OAC · S3 bucket policy          | Solved
4  | Our CEO wants to say something but...  | Amazon Polly · Lambda · IAM                  | Solved
5  | No unencrypted databases allowed       | RDS · KMS · Snapshot lifecycle               | Solved
6  | Encrypt the Data Lake                  | S3 Batch Operations · KMS                    | Solved
7  | ARM64 your Databases                   | Aurora PostgreSQL · Graviton                 | Solved
8  | Bring it back in Style                 | AWS Backup · Restore                         | Solved
9  | Data with the Stars!                   | Redshift / Athena · Glue                     | Solved
10 | Lost in Metadata: EC2 Instance Puzzle  | EC2 · IMDS · User Data                       | Solved
11 | Your Query Whisperer                   | SageMaker · Bedrock (Nova Lite)              | Solved
12 | CloudFormation Sherlock !!             | CloudFormation · stack debugging             | Solved
13 | Patch me if you can!                   | SSM Patch Manager · Run Command              | Solved
14 | Serverless Nightwatch (4 tasks)        | RDS · CloudWatch · Lambda · EventBridge · S3 | Solved

Challenge 1 — Hello AWS Jam (80/80)

The setup. They handed us an empty S3 bucket and said: "Make this serve a website. Submit the public URL when it works." Sounds easy. It isn't, because Amazon goes out of its way to stop you from accidentally putting things on the public internet.

What made it hard. S3 has three different "off" switches that all have to be flipped:

  1. Bucket-level Block Public Access (introduced in 2018 to stop data leaks, and on by default for new buckets since 2023)
  2. A bucket policy that explicitly grants read access to anonymous users
  3. The static website hosting feature flag itself, with the index document declared

Miss any one and the site returns "403 Forbidden." The error messages don't tell you which switch is wrong — just Access Denied. And S3 website URLs follow an obscure regional pattern: http://bucket.s3-website-us-east-1.amazonaws.com. Get the region wrong, get a dead link.

The fix:

# 1) Flip off all four Block Public Access switches
aws s3api put-public-access-block --bucket BUCKET \
  --public-access-block-configuration "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"

# 2) Grant anonymous read access to every object in the bucket
aws s3api put-bucket-policy --bucket BUCKET --policy \
  '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":"*","Action":"s3:GetObject","Resource":"arn:aws:s3:::BUCKET/*"}]}'

# 3) Enable static website hosting and declare the index document
aws s3 website s3://BUCKET --index-document index.html
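One gotcha in the policy step: the Resource ARN has to end in /* (the objects), not the bare bucket ARN, or the GETs keep failing. Since put-bucket-policy rejects bad JSON with an unhelpful error, it's worth sanity-checking the document locally first. A small sketch, assuming a placeholder bucket name (python3 is used here only as a local JSON validator):

```shell
# Build the public-read policy for a placeholder bucket and validate
# the JSON locally before handing it to put-bucket-policy.
BUCKET=my-example-bucket   # placeholder, not the real lab bucket

POLICY=$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::${BUCKET}/*"
  }]
}
EOF
)

# Cheap local check: is it even valid JSON?
echo "$POLICY" | python3 -m json.tool > /dev/null && echo "policy JSON ok"

# Then, against the real account:
# aws s3api put-bucket-policy --bucket "$BUCKET" --policy "$POLICY"
```

Catching a stray comma locally beats waiting for a MalformedPolicy error from the API.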

Lesson. AWS treats every service as "secure by default." Anything public requires you to explicitly opt in — usually in two places (the bucket setting AND a bucket policy). Annoying for tutorials, but it's why your future employer probably won't accidentally leak its customer database.


Challenge 2 — Prepare to Fail(over) (80/80)

The setup. An Application Load Balancer (ALB) sat in front of two EC2 web servers. In theory, half the requests go to server A, half to server B. In practice, traffic was glued to a single server and failover readiness was zero.

Task 1 — Register the missing server (48 pts)

One of the two servers wasn't even connected to the load balancer. The ALB's "target group" — its address book of servers — only had one entry. Server B existed, was running, was healthy, but the ALB had no idea it was supposed to send traffic there.

aws elbv2 describe-target-health --target-group-arn ARN
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]'
# Target group: 1 instance.  Account: 2 instances.  Math doesn't math.

aws elbv2 register-targets --target-group-arn ARN --targets Id=i-INSTANCEID

Task 2 — Turn off "stickiness" (32 pts)

Even after both servers were registered, the ALB was still being weird. We hit it six times in a row with curl — every single request came back from the same server. That defeats the entire point of load balancing.

The culprit: session stickiness. When stickiness is on, the ALB drops a cookie on your browser and pins you to whichever server it picked first. Combined with our identical curl requests (same client, same cookie), it was sending everything to one place.
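The pinning behavior is easy to reproduce in a toy model, a few lines of plain shell with no AWS account involved ("server-A"/"server-B" and the cookie variable are invented stand-ins for the real targets and the ALB's stickiness cookie):

```shell
# Toy model of ALB routing, no AWS needed. server-A/server-B and
# "cookie" stand in for the real targets and the stickiness cookie.
i=0          # round-robin pointer
cookie=""    # set on the first sticky response, then reused

route() {    # $1 = "sticky" or "roundrobin"
  if [ "$1" = sticky ] && [ -n "$cookie" ]; then
    echo "$cookie"          # cookie present: pinned to the same server
    return
  fi
  if [ $(( i % 2 )) -eq 0 ]; then pick=server-A; else pick=server-B; fi
  i=$(( i + 1 ))
  if [ "$1" = sticky ]; then cookie=$pick; fi   # drop the cookie
  echo "$pick"
}

echo "stickiness on:";  route sticky;     route sticky;     route sticky
i=0; cookie=""
echo "stickiness off:"; route roundrobin; route roundrobin; route roundrobin
# stickiness on  -> server-A, server-A, server-A (pinned)
# stickiness off -> server-A, server-B, server-A (alternating)
```

Same mechanism we were seeing: identical clients carrying the same cookie all land on whichever server answered first.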

aws elbv2 modify-target-group-attributes \
  --target-group-arn ARN \
  --attributes Key=stickiness.enabled,Value=false

The classic round-robin verification:

for i in {1..6}; do curl -s ALB-DNS/ | grep "Server:"; done
# Before: all 6 hits same server.  After: alternating between two.
