Inspiration
The idea for SmartVenue PSU came from a real experience at Penn State’s White Out game. Seeing the massive entry lines, crowd buildup, and confusion around gates made it clear that large event venues still struggle with crowd flow at the point where experience matters most: entry. When we got to HackPSU, we realized this was the perfect problem to tackle. We wanted to build something that improves the fan experience while also giving organizers a smarter way to manage movement at scale.
What it does
SmartVenue PSU is an organizer-focused venue intelligence platform built for high-traffic events like Penn State football games, concerts, and large live gatherings.
On the organizer side, SmartVenue PSU uses camera-based crowd detection to monitor congestion at entry gates and see where lines are building up in real time. Based on that information, the platform can guide attendees toward less crowded gates and improve overall venue flow.
On the user side, we built a companion app that gives people a reason to actually use it. The app includes a live scoreboard, a venue map, and navigation-backed route suggestions that help attendees find the fastest way to the stadium. We also introduced a fast gate concept: users can upload their face ahead of time and use facial recognition at entry, making the face itself the ticket for a smoother and quicker access experience.
In simple terms, SmartVenue PSU reduces entry congestion, improves crowd distribution, and makes the event experience faster and easier for both fans and organizers.
How we built it
We started by using Base44 to generate the frontend interface for the user-facing mobile app. That allowed us to quickly visualize the experience and build out the initial product flow. From there, we used JavaScript to create a more stable backend structure and connected the app flow into Xcode so it could run as a mobile experience on a phone.
Once the app structure was in place, we focused on the core technical modules — each grounded in peer-reviewed research and engineered for real-time performance.
Crowd Analytics Engine

The foundation of the system is a computer vision pipeline running on a dedicated laptop with an RTX 5060 GPU. We implemented YOLOv8s for real-time person detection, achieving 41 AI inference frames per second while maintaining a smooth 60fps display, the result of a custom multi-threaded architecture that fully decouples capture, detection, and rendering into independent threads so the display never waits on inference. To detect people at distances where standard detectors fail, we integrated SAHI (Slicing Aided Hyper Inference, arXiv 2022), which tiles each frame into overlapping 320x320 patches and runs detection on each slice separately, recovering people who appear as few as 15 pixels tall in the full frame. For crowd movement analysis, we implemented Farneback dense optical flow to generate per-zone directional vectors, giving us real-time crowd flow direction across all six gate zones simultaneously. We also integrated Real-ESRGAN (ICCV Workshop 2021) for 4x super-resolution on distant detections, sharpening silhouettes before the detector runs.
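The decoupled capture/detect/render pattern described above can be sketched in a few lines. This is a minimal stand-in, not our production code: plain integers replace camera frames and a sleep replaces YOLOv8s inference, but the threading structure (a single-slot latest-value buffer between each stage, so the render loop never blocks on the detector) is the same idea.

```python
import threading
import time

class LatestFrame:
    """Single-slot buffer: writers overwrite, readers never block on stale data."""
    def __init__(self):
        self._lock = threading.Lock()
        self._item = None
    def put(self, item):
        with self._lock:
            self._item = item
    def get(self):
        with self._lock:
            return self._item

def run_pipeline(num_frames=30):
    frames = LatestFrame()      # capture thread -> detector thread
    detections = LatestFrame()  # detector thread -> render loop
    stop = threading.Event()

    def capture():
        for i in range(num_frames):
            frames.put(i)          # stand-in for grabbing a camera frame
            time.sleep(0.002)      # capture runs fast
        stop.set()

    def detect():
        while not stop.is_set():
            frame = frames.get()
            if frame is None:
                time.sleep(0.001)
                continue
            time.sleep(0.01)       # stand-in for slow model inference
            detections.put((frame, ["person"]))

    rendered = []
    threading.Thread(target=capture).start()
    threading.Thread(target=detect, daemon=True).start()
    while not stop.is_set():       # render loop: read latest result, never wait
        rendered.append(detections.get())
        time.sleep(0.002)
    return rendered

results = run_pipeline()
```

Because the buffer keeps only the newest frame, a slow detector simply skips frames instead of stalling the display, which is how the 60fps render rate stays independent of the inference rate.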
We also developed an original crowd safety metric we call the Crowd Pressure Score: pressure = density × (1 - flow_speed). Grounded in Fruin's Level of Service model (1971), the same physics framework used by stadium safety engineers worldwide, it flags dangerous conditions where high density combines with crowd stagnation, before a visible incident occurs.

Routing and Decision Logic
We did not just look at which line was shortest. We also considered how long it would take a person to actually walk from their current position, including areas such as parking lots and stadium surroundings, making rerouting more practical and useful.
The routing engine is built on FastAPI and implements what we call Anticipatory Crowd Intelligence (ACI), a predictive rerouting system that fires before congestion forms, not after. This is the key distinction from existing stadium systems. Rather than reacting to a full gate, ACI models arrival waves using a functional data analysis methodology published in Machine Learning Journal (Springer, 2023) on real Camp Nou gate timestamp data, adapted to Penn State's gate structure and 106,572-seat capacity. The system predicts each gate's load 10 and 20 minutes ahead and begins rerouting fans while they are still in the parking lot.
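Our actual predictor uses the functional-data arrival-wave model described above; purely as an illustration of the idea of looking ahead per gate, a much simpler linear trend extrapolation might look like this (the window size and numbers below are illustrative, not taken from our model):

```python
def predict_gate_load(history, minutes_ahead):
    """Extrapolate one gate's load from its recent arrival trend.

    history: arrivals per minute for a single gate, oldest first.
    A deliberately simplified stand-in for the FDA-based arrival-wave model.
    """
    if not history:
        return 0.0
    if len(history) < 2:
        return history[-1]
    window = history[-5:]                              # recent trend only
    slope = (window[-1] - window[0]) / (len(window) - 1)
    return max(0.0, window[-1] + slope * minutes_ahead)

# A rising arrival wave at one gate: look 10 and 20 minutes ahead.
wave = [40, 55, 70, 90, 115]
load_10 = predict_gate_load(wave, 10)
load_20 = predict_gate_load(wave, 20)
```

Even this crude version captures the core product behavior: a gate that is still fine right now but trending upward gets flagged early enough to reroute fans before the line forms.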
Gate assignments are optimized using a weighted cost function: cost = 0.55 × congestion + 0.25 × predicted congestion + 0.20 × walk distance, inspired by the GABPPO gate assignment framework published in Transportation Research Record (2024). In live testing during our demo, this reduced simulated average gate wait time from 12 minutes to 4.3 minutes (a 64% reduction) with the same number of fans and identical gate infrastructure.
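The two formulas above, the Crowd Pressure Score and the weighted gate cost, are small enough to sketch directly. The gate snapshot below is hypothetical, and all inputs are assumed normalized to [0, 1]:

```python
def crowd_pressure(density, flow_speed):
    """Crowd Pressure Score: pressure = density * (1 - flow_speed).

    High density with stalled flow (flow_speed near 0) yields high pressure.
    Both inputs assumed normalized to [0, 1].
    """
    return density * (1.0 - flow_speed)

def gate_cost(congestion, predicted_congestion, walk_distance):
    """Weighted gate-assignment cost with the 0.55 / 0.25 / 0.20 weights."""
    return 0.55 * congestion + 0.25 * predicted_congestion + 0.20 * walk_distance

def best_gate(gates):
    """Pick the lowest-cost gate from {name: (congestion, predicted, walk)}."""
    return min(gates, key=lambda g: gate_cost(*gates[g]))

# Hypothetical snapshot of three gates:
gates = {
    "Gate A": (0.9, 0.8, 0.1),   # packed now, but close by
    "Gate B": (0.3, 0.4, 0.5),   # quiet, farther away
    "Gate C": (0.6, 0.7, 0.2),
}
choice = best_gate(gates)        # Gate B wins despite the longer walk
```

Weighting current congestion most heavily while still pricing in the walk keeps the router from sending a fan across the stadium to save thirty seconds in line.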
Facial Recognition Fast Gate

Instead of relying on NFC, users can pre-upload a facial image and the system uses camera-based recognition to support a faster entry flow. For the entry system, we implemented a multi-paper facial recognition pipeline targeting accuracy beyond standard deployed systems. The stack uses RetinaFace (CVPR 2020) for face detection and landmark alignment, AdaFace (CVPR 2022 Oral) for quality-adaptive embeddings whose recognition margin dynamically adjusts to image sharpness, and LVFace (ICCV 2025), ranked number 1 globally on the MFR-Ongoing benchmark as of March 2025, as the primary Vision Transformer embedding backbone. We also integrated TopoFR (NeurIPS 2024), which uses persistent homology to preserve the topological structure of face embeddings and improve generalization across varying lighting and angles.
Our most novel contribution is what we call TAQFV (Temporal Adaptive Quality-Weighted Fusion). Rather than verifying from a single frame like existing systems, we capture 5 consecutive frames as a fan approaches the kiosk, weight each frame by its AdaFace quality score using the feature norm as a quality proxy, and fuse them into a single robust embedding: final_embedding = Σ(quality_i × embedding_i) / Σ(quality_i). This approach is theoretically grounded in AdaFace but, to our knowledge, has not been implemented in a production kiosk system. We also added a three-tier confidence calibration: instant verification above 0.75 cosine similarity, automatic recapture between 0.55 and 0.75, and denial below 0.55, replacing the binary yes/no decision used by existing facial recognition gates such as Digi Yatra.
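The TAQFV fusion formula and the three-tier thresholds can be sketched as follows. This uses toy 4-D vectors in place of real face embeddings, and synthetic per-frame noise in place of actual motion blur; the quality weights are a stand-in for the AdaFace feature norms:

```python
import numpy as np

def taqfv_fuse(embeddings, qualities):
    """Quality-weighted fusion of per-frame embeddings (TAQFV sketch).

    fused = sum(q_i * e_i) / sum(q_i), then L2-normalized so cosine
    similarity reduces to a dot product.
    """
    embs = np.asarray(embeddings, dtype=float)
    q = np.asarray(qualities, dtype=float)
    fused = (q[:, None] * embs).sum(axis=0) / q.sum()
    return fused / np.linalg.norm(fused)

def decide(probe, enrolled, instant=0.75, retry=0.55):
    """Three-tier decision on cosine similarity between unit vectors."""
    sim = float(np.dot(probe, enrolled))
    if sim >= instant:
        return "verify"
    if sim >= retry:
        return "recapture"
    return "deny"

# Five frames of the same (toy) identity with increasing blur,
# so sharper frames get higher quality weights.
rng = np.random.default_rng(0)
base = np.array([1.0, 0.5, -0.3, 0.2])
frames = [base + rng.normal(0, s, 4) for s in (0.05, 0.1, 0.2, 0.4, 0.6)]
quality = [1.0, 0.8, 0.5, 0.2, 0.1]      # stand-in for AdaFace feature norms
probe = taqfv_fuse(frames, quality)
enrolled = base / np.linalg.norm(base)
decision = decide(probe, enrolled)
```

Because the noisiest frames carry the smallest weights, the fused probe stays close to the enrolled embedding even when individual frames are badly blurred, which is exactly the failure mode single-frame gates struggle with.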
Physical Prototype

Alongside the software, we designed and 3D printed a modular three-part gate terminal housing (a base unit, a neck connector, and an angled display head) after six failed prints due to filament spread and bed adhesion issues. The final working model demonstrates how SmartVenue PSU would physically exist at a real venue entrance, with the display angled toward the fan's face, the camera integrated above the screen, and all electronics concealed inside the housing.
Challenges we ran into
One of our biggest early challenges was technical compatibility. The frontend we built through Base44 was helpful for rapid design, but it was not flexible enough for the level of JavaScript-based editing and backend integration we needed. Because of that, we had to rethink our workflow and shift more of the implementation toward Xcode and a more direct app-based approach.
Another challenge was our original NFC plan. We initially wanted to use NFC for ticket entry, but Apple's security limitations made that difficult to implement within the scope of the hackathon. That forced us to pivot to facial recognition for entry.
We also ran into hardware limitations. Our Raspberry Pi setup was not suitable for the type of larger model processing and experimentation we were attempting, so we had to pivot again and keep the demo centered around laptop and phone based systems.
On the hardware side, 3D printing was another major struggle. We went through six failed prints because of an uneven print bed, filament spreading, and time pressure. With only a few hours left, we had to adjust print orientation, fix the spread issue, and keep iterating until we got a usable final model.
Accomplishments that we're proud of
We are especially proud of the camera-based people detection system. It performed well and became one of the strongest parts of the project. We are also proud that we were able to create a working app experience with solid usability and a clear value proposition for both organizers and users.
Another accomplishment was our rerouting logic. We did not stop at simple crowd-based redirection. We also factored in walking time and practical movement from surrounding areas, which made the experience more realistic.
Finally, we are proud that despite all the pressure and failed prints, we still brought the physical prototype together and turned the project into something both visual and functional.
What we learned
The biggest lesson was how much teamwork matters in a project like this. This was an ambitious build, and being able to divide the work, adapt quickly, and support each other made a huge difference.
We also learned a lot about using AI assisted tools during development. This project would have been much harder to build at this speed without AI helping us prototype, debug, and move faster. On top of that, we learned more about working with computer vision systems, recognition pipelines, rapid prototyping, and product thinking under a tight deadline.
Most importantly, we learned that building a useful product is not just about having a cool idea. It is about being willing to pivot when things break and still finding a way to deliver something real.
What's next for SmartVenue PSU
SmartVenue PSU is currently an MVP, but we see much bigger potential for it.
Our next step is to continue refining the crowd intelligence system, improve the reliability of the fast gate experience, and make the organizer dashboard more powerful. We also want to strengthen the business side of the product by focusing on organizer-side monetization. The idea is that venues, stadiums, and event organizers would pay for SmartVenue PSU as an operations and fan experience solution, while attendees benefit from the app features for free.
Beyond HackPSU, we want to explore opportunities to pitch SmartVenue PSU through Penn State LaunchBox and other startup support systems. Long term, we see this becoming a scalable platform for football stadiums, concert venues, and other large event spaces where crowd management and entry flow are a major pain point.
Built With
- amazon-web-services
- base44
- boto3
- google-directions
- indexfaces
- javascript
- opencv
- python
- pytorch
- rekognition
- s3
- swiftui
- wkwebview