Inspiration
The Eye of Agamotto application is a testament to the power of technology and its ability to create magical experiences. With its advanced hand tracking and gesture recognition features, the app allows users to control their digital world with the wave of a hand. Just like the mystical Eye of Agamotto from the Marvel universe, this app unlocks a new level of power and control over technology. It's truly inspiring to see how technology can blur the lines between reality and fantasy, giving us the ability to interact with the digital world in ways we never thought possible. With the Eye of Agamotto app, the possibilities are truly endless!
What it does
The Eye of Agamotto is an AI-powered application that utilizes advanced hand tracking and gesture recognition technologies to provide users with a novel and intuitive way of interacting with technology. The application consists of two primary models, each optimized for a different platform.
The first model is optimized for website use and allows users to control various aspects of their web browsing experience using hand gestures. The AI model is integrated with the TensorFlow.js hand-tracking library and a camera library to accurately track the user's hand movements. The application uses the landmark points associated with the user's hand to create gestures such as moving the mouse pointer, scrolling the webpage, and liking images on social media platforms. This model provides a unique and intuitive way for users to interact with web content, improving the overall browsing experience.
The second model is optimized for use on computer systems and provides users with a hands-free way of controlling their devices. The AI model is developed in Python and uses the OpenCV library to access the camera on the hosting device. The model uses the MediaPipe library to recognize hand gestures, identify which fingers are raised, and measure the distance between fingers. Users can move the mouse pointer, perform click actions, and even automate the typing of certain words using hand gestures. This model is particularly useful for individuals who require hands-free technology or who have mobility impairments, giving them a more accessible and intuitive way to use their computer.
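The pointer and click logic described above can be sketched with two small functions. This is an illustrative sketch, not the project's actual code: the landmark indices (8 = index fingertip, 4 = thumb tip) follow MediaPipe's standard 21-point hand model, while the function names, pinch threshold, and screen resolution are assumptions.

```python
import math

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def landmark_to_screen(x_norm, y_norm):
    """Map a normalized hand landmark (0..1, as MediaPipe reports) to pixels."""
    return int(x_norm * SCREEN_W), int(y_norm * SCREEN_H)

def is_pinch(index_tip, thumb_tip, threshold=0.05):
    """Treat a small thumb-to-index-fingertip distance as a click gesture."""
    dx = index_tip[0] - thumb_tip[0]
    dy = index_tip[1] - thumb_tip[1]
    return math.hypot(dx, dy) < threshold

# Example with synthetic normalized landmarks:
print(landmark_to_screen(0.5, 0.5))           # (960, 540) -- screen centre
print(is_pinch((0.40, 0.40), (0.42, 0.41)))   # True -- fingers close together
```

In a real loop, the mapped coordinates would feed a mouse-automation call each frame, with some smoothing to suppress jitter in the tracked fingertip.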
How we built it
Our First Model (Website Optimized): To track a user's hand movements, we used the TensorFlow.js hand-tracking library, with a camera library providing the webcam feed. We built gesture functionality into our mock HTML webpages using the landmark points associated with the user's hand. These gestures include moving the mouse pointer and scrolling the webpage with your hand, and in our example, using gestures to like images on a social media platform.
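The gesture-to-action wiring described above boils down to a dispatch table from recognized gesture labels to page actions. A minimal sketch of that pattern, in Python for brevity (the gesture names and handler functions here are illustrative placeholders, not the project's actual identifiers):

```python
# Each handler performs one page action; here they return strings
# so the dispatch can be demonstrated without a browser.
def move_pointer(dx, dy):
    return f"pointer moved by ({dx}, {dy})"

def scroll_page(amount):
    return f"scrolled by {amount}"

def like_image(image_id):
    return f"liked image {image_id}"

# Map each recognized gesture label to its action.
GESTURE_ACTIONS = {
    "point": lambda: move_pointer(10, -5),
    "swipe_up": lambda: scroll_page(-120),
    "thumbs_up": lambda: like_image("img_42"),
}

def handle_gesture(name):
    action = GESTURE_ACTIONS.get(name)
    return action() if action else "unrecognized gesture"

print(handle_gesture("thumbs_up"))  # liked image img_42
```

Keeping recognition (which gesture is this?) separate from dispatch (what does it do?) makes it easy to reuse the same gestures across different pages.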
Our Second Model (Optimized for Computer Systems): Python was used to create our second model. To access the camera on the hosting device, we used the OpenCV Python library. Our AI detects hands thanks to the Python MediaPipe library, which let us build a module that recognizes hands, identifies which fingers are raised, and measures the distance between fingers. Using these methods, we trained our AI to associate different hand gestures with different operations. To effectively demonstrate the experience provided by our AI, we developed a messaging app that is compatible with our model.
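The "which fingers are raised" check can be illustrated with MediaPipe's 21-landmark layout: fingertip indices 8/12/16/20 sit above their PIP joints 6/10/14/18 when a finger is extended. Since image coordinates grow downward, a raised finger has tip.y < pip.y. This sketch runs on synthetic landmarks and simplifies by omitting the thumb (which needs an x-axis check); it is an assumption-laden illustration, not the project's module.

```python
FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
FINGER_PIPS = [6, 10, 14, 18]   # corresponding PIP joints

def fingers_up(landmarks):
    """landmarks: list of 21 (x, y) pairs in normalized image coordinates."""
    return [landmarks[tip][1] < landmarks[pip][1]
            for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)]

# Synthetic hand with only the index finger extended:
lm = [(0.5, 0.5)] * 21
lm[8] = (0.5, 0.2)   # index fingertip above its PIP joint -> raised
lm[6] = (0.5, 0.4)
print(fingers_up(lm))        # [True, False, False, False]
print(sum(fingers_up(lm)))   # 1 finger up
```

The resulting boolean pattern (e.g. index-only up vs. all fingers up) is what a gesture recognizer would map to operations like clicking or typing a shortcut word.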
Challenges we ran into
During the development of this application, we encountered several challenges. One of the primary challenges was optimizing the hand tracking and gesture recognition algorithms to work seamlessly in both the website and computer system models. We also faced difficulties training the AI to accurately recognize and associate different hand gestures with specific actions. Camera angle, lighting conditions, and background interference posed further obstacles to accurately detecting hand movements, and debugging and testing the application to ensure smooth performance across different devices and platforms took considerable effort. Despite all this, we persisted and overcame these challenges to create an AI-powered hand-tracking and gesture-recognition application we're proud of.
Accomplishments that we're proud of
We are incredibly proud of the accomplishments we achieved in building this app. One of our greatest achievements was successfully integrating advanced hand tracking and gesture recognition technology into the application, allowing users to control their digital experience with simple hand gestures. We were able to optimize the AI models for both website and computer system use, resulting in a seamless user experience on both platforms. We also developed an intuitive user interface that made the app easy to use and navigate. We are proud of how we overcame the technical challenges and produced an innovative product that showcases the power of AI and hand-tracking technology. Finally, we are proud to have created an app that has the potential to change how people interact with technology in their everyday lives.
What we learned
Building this application was a valuable learning experience for our team. We gained a deep understanding of advanced AI technologies such as hand tracking, working with libraries like Mediapipe and TensorFlow, and gesture recognition, and how these can be applied to enhance user experiences. We also gained a better appreciation for the importance of testing and debugging, as we encountered numerous challenges in fine-tuning the AI models to work seamlessly across different devices and platforms.
Additionally, we learned how to create intuitive user interfaces that make complex technologies accessible to a wider audience. Finally, we learned the importance of persistence and collaboration, as building an innovative and impactful application requires a collective effort from a team of dedicated individuals. Overall, we are proud of what we have accomplished and excited to continue learning and pushing the boundaries of AI technology in the future.
What's next for Eye of Agamotto
Integration with Virtual and Augmented Reality: The application can be integrated with virtual and augmented reality platforms to provide users with an immersive and interactive experience. For example, users can use hand gestures to interact with virtual objects and control their movements.
Expansion to Mobile Devices: The application can be expanded to mobile devices, such as smartphones and tablets, by creating a mobile application. With the increasing popularity of mobile devices, this expansion can provide a new market for the application.
Combination with Other AI Features: The application can be combined with other AI features, such as voice recognition and natural language processing, to provide a more sophisticated user experience. For example, users can use hand gestures to control a virtual assistant that can recognize voice commands and respond accordingly.
Built With
- javascript
- mediapipe
- opencv
- python
- tensorflow

