This academic project was developed to simplify attendance tracking in hybrid classrooms by combining AWS AI services and a serverless architecture. It enables automatic verification of student participation using uploaded images from both in-person and online sessions.
Taking attendance manually—especially across both Zoom and offline classes—is inefficient and error-prone. This project aims to provide a smart solution that enables professors to verify participation accurately with minimal effort.
Professor Workflow: After each session, the professor uploads class images to an S3 bucket:
- Zoom class screenshots go into the /proj-images/names/ folder
- In-person classroom images go into the /proj-images/faces/ folder
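The two-folder layout above implies a simple routing decision when an object lands in the bucket. A minimal sketch, assuming the prefixes from the write-up (the class and method names here are illustrative, not the project's actual code):

```java
// Hypothetical routing helper: decides how an uploaded S3 object key
// should be processed, based on the folder layout described above.
class UploadRouter {
    enum UploadKind { ZOOM_NAMES, CLASSROOM_FACES, UNKNOWN }

    static UploadKind classify(String objectKey) {
        if (objectKey.startsWith("proj-images/names/")) {
            return UploadKind.ZOOM_NAMES;      // Zoom screenshot -> Textract
        }
        if (objectKey.startsWith("proj-images/faces/")) {
            return UploadKind.CLASSROOM_FACES; // class photo -> Rekognition
        }
        return UploadKind.UNKNOWN;             // anything else is ignored
    }
}
```

Keeping this decision in one place makes it easy to extend if a third image type is added later.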
Image Processing: Students also upload their own class participation image via the frontend React app (hosted on AWS Amplify). The image is processed by an AWS Lambda function written in Java, which:
- uses Textract to extract names from Zoom screenshots
- uses Rekognition to compare faces with in-person class photos
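After Textract returns raw text lines from a Zoom screenshot, the Lambda still has to decide whether a given student's name appears in them. A hedged sketch of that matching step, with light normalization to tolerate casing and spacing differences (the class and method names are assumptions for illustration, not the project's actual code):

```java
import java.util.List;
import java.util.Locale;

// Illustrative name-matching step: compares Textract output lines
// against a student's roster name after normalizing case and whitespace.
class NameMatcher {
    // Lowercase, trim, and collapse runs of whitespace to a single space.
    static String normalize(String name) {
        return name.trim().toLowerCase(Locale.ROOT).replaceAll("\\s+", " ");
    }

    // True if any extracted line contains the normalized student name.
    static boolean isPresent(List<String> textractLines, String studentName) {
        String target = normalize(studentName);
        return textractLines.stream()
                .map(NameMatcher::normalize)
                .anyMatch(line -> line.contains(target));
    }
}
```

Normalizing both sides before comparing is one way to absorb the name-formatting variations mentioned in the challenges section.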
Data Storage:
- Uploaded user images are stored in Amazon S3
- Participation status (including match results) is saved in DynamoDB
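The participation record written to DynamoDB can be pictured as a small key-value item. A minimal sketch of its shape; the attribute names (studentId, sessionDate, matched, source) are assumptions for illustration, since the write-up does not specify the table schema:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the participation item saved per student per session.
// Attribute names are hypothetical; the real table may differ.
class ParticipationRecord {
    static Map<String, String> toItem(String studentId, String sessionDate,
                                      boolean matched, String source) {
        Map<String, String> item = new LinkedHashMap<>();
        item.put("studentId", studentId);       // e.g. partition key
        item.put("sessionDate", sessionDate);   // e.g. sort key
        item.put("matched", Boolean.toString(matched)); // Textract/Rekognition result
        item.put("source", source);             // "zoom" or "in-person"
        return item;
    }
}
```

A map like this converts directly into DynamoDB attribute values in the AWS SDK's PutItem call.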
Frontend: Built with static HTML and deployed via AWS Amplify, the app lets users upload images and receive instant verification results.
Through this project, I gained hands-on experience in designing and building serverless applications using AWS Lambda, API Gateway, and the AWS CDK. I learned how to apply AI-powered services like Amazon Textract for text extraction and Amazon Rekognition for face comparison in a real-world classroom setting. Additionally, I developed skills in managing structured data storage across S3 buckets and DynamoDB tables, and I became proficient in troubleshooting complex integration issues such as CORS configuration while connecting a React frontend to a cloud-native backend.
Looking ahead, this project has the potential to evolve into a real-time participation tracking system. For offline classes, it could automatically capture live images through in-class cameras and run face verification in the background. For online sessions, the system could integrate directly with Zoom APIs to access participant screenshots or metadata, eliminating the need for manual uploads and making the process entirely automated and seamless for instructors.
Some of the key challenges involved managing the folder structure and object keys in S3, especially when processing multiple image types. Image quality inconsistencies such as poor lighting or angled faces posed difficulties for the face comparison logic. I also had to fine-tune CORS headers and preflight OPTIONS responses in API Gateway to ensure smooth communication between the frontend and backend. Lastly, ensuring accuracy in participation detection was difficult due to variations in name formatting and facial angles, which required thorough testing and adjustments.
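The CORS fine-tuning mentioned above usually comes down to returning the right headers from both the Lambda proxy responses and the preflight OPTIONS reply. A minimal sketch of those headers, assuming a Lambda proxy integration; the allowed origin is a placeholder, not the project's actual Amplify URL:

```java
import java.util.Map;

// Sketch of the CORS headers that must accompany every API response
// (and the OPTIONS preflight reply) so the Amplify-hosted frontend
// can call the API Gateway endpoint from the browser.
class CorsHeaders {
    static Map<String, String> corsHeaders(String allowedOrigin) {
        return Map.of(
            "Access-Control-Allow-Origin", allowedOrigin,   // e.g. the Amplify app URL
            "Access-Control-Allow-Methods", "POST,OPTIONS", // methods the frontend uses
            "Access-Control-Allow-Headers", "Content-Type");
    }
}
```

Returning these headers on error responses as well, not just successes, avoids the confusing case where the browser masks a backend failure as a CORS error.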
Built With
- amazon-dynamodb (stores participation records)
- amazon-rekognition (facial recognition in classroom images)
- amazon-textract (extracts text from Zoom screenshots)
- amazon-web-services
- api
- aws-cdk
- base64
- cloudformation
- html
- iam
- java
- lambda
- s3