SJSU Cloudathon Notes
Reference: AWS Overview Whitepaper
1. Hello AWS Jam! — Static Website on S3
- Created S3 bucket with public access enabled
- Edited bucket policy to allow `s3:GetObject` on all bucket resources (`/*`)
- Enabled static site hosting, set `index.html` and `error.html`
- Uploaded both HTML files to the bucket
- Tested the S3 website endpoint and the `/error.html` path
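The public-read bucket policy from the steps above would look roughly like this — a minimal sketch; the bucket name is a placeholder, not the one used in the Jam:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-static-site-bucket/*"
    }
  ]
}
```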
2. Protect my CloudFront Origin
Task 1 — Users can bypass CloudFront
- CloudFront acts as a security layer before the ALB (origin)
- Copied DNS from ALB and CloudFront, verified bypass was possible via direct ALB URL
Task 2 — Lock ALB security group at network level (L4)
- Went to the ALB security group
- Removed the `0.0.0.0/0` CIDR rule
- Added new inbound rule: HTTP only from the CloudFront Origin Prefix List
- Application is now only accessible through CloudFront
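The prefix-list rule from Task 2 can be sketched as the `--ip-permissions` JSON for `aws ec2 authorize-security-group-ingress`; `pl-xxxx` is a placeholder for the region's CloudFront origin-facing managed prefix list ID:

```json
[
  {
    "IpProtocol": "tcp",
    "FromPort": 80,
    "ToPort": 80,
    "PrefixListIds": [
      {
        "PrefixListId": "pl-xxxx",
        "Description": "CloudFront origin-facing prefix list"
      }
    ]
  }
]
```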
Task 3 — Which CloudFront? Block malicious users
- Malicious users could still route through their own CloudFront distribution
- Configured a secret header on both ALB and CloudFront
- CloudFront → Origins → Add custom origin header with secret key/value
Task 4 — Tie the ends
- Added ALB listener rule:
- Default action: return 403 Access Denied
- Rule: allow only requests containing the secret header name + value
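The header check in Task 4 maps to a listener-rule condition like this (the `--conditions` payload for `aws elbv2 create-rule`; header name and value are placeholders, not the actual secret):

```json
[
  {
    "Field": "http-header",
    "HttpHeaderConfig": {
      "HttpHeaderName": "X-Origin-Secret",
      "Values": ["example-secret-value"]
    }
  }
]
```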
3. ARM64 your Databases
Task 1 — Verify Graviton support and DB engine
- Navigated to Aurora RDS → selected a cluster
- Confirmed engine: PostgreSQL 17.7
- Graviton (ARM64) is supported for this engine version
Task 2 — Create a snapshot (safety backup)
- Selected the cluster → Actions → Create snapshot
- Set snapshot name and confirmed creation
Task 3 — Modify the Reader instance to Graviton
- Selected the Reader instance → Modify
- Changed instance class to the equivalent Graviton (ARM64) class with same vCPU/RAM
- Applied the modification
Task 4 — Failover the Writer
- Selected the Writer instance → Actions → Failover
- Aurora promotes a Reader to Writer; original Writer becomes Reader or goes offline briefly
- Ensures Writer is also running on Graviton after failover completes
4. Data with the Stars!
Users:
- User A: `arn:aws:iam::372409541132:user/USER-A`
- User B: `arn:aws:iam::372409541132:user/USER-B`
Notes incomplete — challenge in progress
5. Prepare to Fail(over)
Task 1 — Add EC2 instances to ALB Target Group
- Initial error: `503 Service Temporarily Unavailable` — ALB had no registered targets
- Fix: Target Groups → Edit `LabSta-albTa-HWIRCOAYCJ9M` → Register Targets → selected both EC2 instances → "Include as pending below"
- Result: EC2 Instance ID `i-0226e676d803d10a2`, AZ `us-west-2b`
- JAM Challenge Key 2: `looPS`
Task 2 — Enable round-robin across both instances
- Load Balancing → Target Groups → Edit Attributes
- Disabled stickiness to allow requests to hit both instances
- ALB DNS: `alb-1119720300.us-west-2.elb.amazonaws.com`
- Refreshing the page now hits different EC2 instances
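Disabling stickiness corresponds to a single target-group attribute, e.g. as the `--attributes` payload for `aws elbv2 modify-target-group-attributes`:

```json
[
  {
    "Key": "stickiness.enabled",
    "Value": "false"
  }
]
```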
6. Our CEO Wants to Say Something
Brainstorm:
- Role has `s3:ListAllMyBuckets` but is blocked from reading the CEO bucket
- Blocking policy (`DenyS3Bucket`) on the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": "*",
      "Effect": "Deny"
    }
  ]
}
Solution:
- Used STS `assume-role` for `JamCEOMessageIAMRole`
- Found bucket: `labstack-prewarm-73ddf808-0b-jamceomessages3bucket-kmnf750brwbz`
- Bucket had an explicit Deny policy blocking `s3:GetObject` + `s3:ListBucket`
- Role's `DoNotDeletePolicy` allowed `s3:PutBucketPolicy` on the bucket
- Replaced the Deny bucket policy with an Allow policy for the role ARN
- Read `message.txt` — CEO Message: "Every Day is Still Day One!"
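The replacement Allow policy would be along these lines — a sketch only; the account ID in the role ARN is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowJamRoleRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/JamCEOMessageIAMRole"
      },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::labstack-prewarm-73ddf808-0b-jamceomessages3bucket-kmnf750brwbz",
        "arn:aws:s3:::labstack-prewarm-73ddf808-0b-jamceomessages3bucket-kmnf750brwbz/*"
      ]
    }
  ]
}
```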
7. Encrypt the Data Lake
Key Concepts:
- Default AWS-managed key: no customer control over rotation, auditing, or access
- Customer-managed KMS key provides fine-grained control — use this
- S3 Console encryption only works for small numbers of objects — must use S3 Batch Operations for scale
S3 Batch Operations:
- Performs operations on billions of objects at scale
- Requires a manifest (S3 Inventory report or CSV with bucket name + object key)
- Reference: Batch Ops Update Encryption
Steps:
- Opened the `encrypt-data-lake-...` bucket → `data/` path (200 objects to encrypt)
- Manifest located under `reports/`
- KMS key ARN: `arn:aws:kms:us-west-2:226532064433:key/0528c2a6-5ce9-4e52-9519-9e869c204228`
- S3 → Batch Operations → Create Job → selected KMS key → pointed to manifest → ran job
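A CSV manifest for the Batch Operations job is just bucket,key pairs, one object per line; bucket name and keys here are illustrative, not the Jam's:

```csv
encrypt-data-lake-example-bucket,data/object-0001.csv
encrypt-data-lake-example-bucket,data/object-0002.csv
```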
8. Bring it Back in Style
Notes incomplete
9. Lost in Metadata — EC2 Instance Puzzle
Key Concepts:
- Link-local address (`169.254.x.x`): communication within a single network segment only
- EC2 instance metadata is accessible only from within the instance via `http://169.254.169.254/latest/meta-data/`
- RDP (Remote Desktop Protocol): connect to and control a Windows machine remotely
Steps:
- Identified Production-Server EC2 instance: `i-0474c6a93fad59ea6`
- Updated Security Group:
- Inbound: RDP from My IP
- Outbound: Changed to `0.0.0.0/0` (Anywhere-IPv4)
- Connected via RDP to public IP `52.62.206.147`
- Error `0x300005f` — the client was trying to use a Gateway; fixed by creating a direct PC connection in the Windows RDP app
Task 2 — Fix broken metadata route:
# Check metadata access
iwr -uri 'http://169.254.169.254/latest/meta-data/'
# → Error
# Check routing
Get-NetRoute -DestinationPrefix 169.254.169.254/32
# NextHop was 123.123.123.123 (wrong — should be 0.0.0.0)
# Delete bad route
route delete 169.254.169.254
# Find correct network adapter
Get-NetAdapter
# ifIndex = 8
# Add correct persistent route
route -p add 169.254.169.254 mask 255.255.255.255 0.0.0.0 if 8
# Result: HTTP 200 ✓
10. Encrypt an Existing RDS MySQL Database
Process (snapshot → copy with encryption → restore):
- RDS → selected unencrypted instance `labstack-prewarm-59ca6124-b642-437b-mysqlinstance-3dfsnwvesq99` (region: `ap-southeast-2`)
- Actions → Take Snapshot → named `unencrypted-snapshot` → waited for Available
- Actions → Copy Snapshot → enabled encryption with the `aws/rds` KMS key → named `encrypted-snapshot` → waited for Available
- Actions → Restore Snapshot → identifier: `encrypted-mysql-db`, class: `db.t3.micro` → waited for Available
- Verified: Configuration tab → Encryption: Enabled
Note: Restored instance landed in `ap-southeast-2a` vs the original `ap-southeast-2b` — AZ difference is fine, same region.
11. Fix a Broken Static Website
Task 1 — Identify file returning 403
- Site: http://main-s3-bucket-714307584703.s3-website-us-west-2.amazonaws.com
- HTML referenced two files from `static-assets-s3-bucket-714307584703`: `style.css` and `image.jpeg`
- `style.css` → 200, `image.jpeg` → 403
- Answer: `image.jpeg`
Task 2 — Identify the bucket
- Image URL: https://static-assets-s3-bucket-714307584703.s3.us-west-2.amazonaws.com/image.jpeg
- Answer: `static-assets-s3-bucket-714307584703`
Task 3 — Fix 403 on image.jpeg
- `BlockPublicPolicy` and `RestrictPublicBuckets` were `true` — couldn't add a bucket policy
- `BlockPublicAcls` was `false` — object ACLs still work
- Fix: `aws s3api put-object-acl --acl public-read` on `image.jpeg` → HTTP 200 ✓
Task 4 — Fix broken CSS
- CSS loaded via jQuery AJAX (cross-origin) — bucket had no CORS config → browser blocked it
- Fix: `aws s3api put-bucket-cors` with a rule allowing `GET` from all origins
- Verified with a CORS preflight OPTIONS request → `Access-Control-Allow-Origin: *` ✓
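The CORS configuration applied via `put-bucket-cors` would look roughly like this — a minimal sketch matching the fix above:

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"]
    }
  ]
}
```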
12. Your Query Whisperer
Prompts used:
- "Find the latest revenue from the financial statement report."
- "Find the latest revenue from the financial statement report. Keep in mind that the last report documented a revenue of $20,000,000. If there is a difference in figures, analyze why this happened, then return the difference as the answer."
- "Report back the average year-on-year revenue growth from 2019 to 2023 from the financial statement report."
13. CloudFormation Sherlock
Task 1 — Fix Linting (cfn-lint)
- `cfn-lint` failed with W2531: `python3.9` deprecated on 2025-12-15; exits with code 4
- Fix: Updated Lambda runtime `python3.9` → `python3.12` in `src/template.yaml`
- Committed to the `development` branch → triggered pipeline via Release Change
Task 2 — Fix Security (cfn_nag)
- Only FAIL was F1000 on `LambdaSecurityGroup`: missing egress rule (all outbound implicitly allowed)
- Also found two syntax bugs: `Role: GetAtt` (missing `!`) and `!GetAttr` (should be `!GetAtt`)
- Fixes applied:
- Added explicit `SecurityGroupEgress`: TCP 443 outbound to `0.0.0.0/0`
- Fixed `Role: GetAtt` → `Role: !GetAtt`
- Fixed `!GetAttr` → `!GetAtt`
- Committed via `aws codecommit put-file` with the parent commit ID
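The egress fix from Task 2 can be sketched as a CloudFormation fragment; everything beyond the resource name and the egress rule (description, `VpcId` reference) is an assumption about the template's layout:

```yaml
LambdaSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Lambda security group with explicit egress
    VpcId: !Ref VpcId  # assumes a VpcId parameter/resource exists in the template
    SecurityGroupEgress:
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0
        Description: Allow HTTPS outbound only
```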
14. Patch Me If You Can
- Systems Manager → Patch Manager → Patch Now
- Settings:
- Patching operation: Scan and install
- Reboot option: Reboot if needed
- Targeting: specific instance tags
- Tag Key: `Project`
- Tag Value: `NewApp`
- Patching log storage: selected S3 bucket from Output Properties
- Clicked Patch Now
15. Serverless Nightwatch
Task 1 — Enable RDS Audit Logs + Fix S3 Bucket Policy
- General logs throwing errors in RDS instance
- RDS → Modify → checked Audit logs (was unchecked)
- S3 bucket policy was missing `s3:PutObject` for the `logs.eu-west-2.amazonaws.com` service principal
- Also: the `aws:SourceArn` condition had the wrong region (`us-east-1` → `eu-west-2`)
- Fixed bucket policy with the correct principal and region
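The corrected bucket policy follows the standard CloudWatch Logs export-to-S3 pattern, sketched here with the bucket from Task 4 and a wildcard `aws:SourceArn` — both assumptions, not the exact Jam policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.eu-west-2.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::jam-audit-logs-533556830443",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:logs:eu-west-2:533556830443:log-group:*" }
      }
    },
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.eu-west-2.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::jam-audit-logs-533556830443/*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" },
        "ArnLike": { "aws:SourceArn": "arn:aws:logs:eu-west-2:533556830443:log-group:*" }
      }
    }
  ]
}
```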
Task 2 — Notes incomplete
Task 3 — Notes incomplete
Task 4 — Test Lambda + Verify Export Pipeline
- Invoked Lambda `jam-audit-exporter` manually
- Export task created: `taskId: 3d733f3f-0223-42da-85da-088d9c4b4c60`
- Destination: `s3://jam-audit-logs-533556830443/exported-logs/2026/04/30`
- Export task status: COMPLETED
- EventBridge rule `daily-audit-log-export`: ENABLED, `rate(1 day)`
Built With
- ec2
- eventbridge
- lambda
- rds
- s3