Inspiration

The internet is flooded with movie reviews, but discerning genuine, helpful opinions from spam, abuse, and biased content is a significant challenge. We were inspired to create a platform that empowers users to share their thoughts on newly released movies while ensuring a safe and constructive environment. Our goal was to leverage the power of Generative AI to moderate content in near real-time, promoting responsible online discussions and contributing to a more positive experience for movie enthusiasts. We envisioned a system where reviews are validated for appropriate content before they become publicly visible, fostering a community built on respectful and insightful feedback.

What it does

CineGuard: AI-Powered Movie Review Governance is a web portal and AI-driven system that enables users to submit reviews for newly released movies. It employs a two-agent Bedrock system to ensure responsible content sharing:

  • User Review Submission: A user-friendly web portal allows users to browse newly released movies (for example, fetching data from a public API like TMDB), submit ratings, and write reviews.
  • Pending Review Storage: Submitted reviews are initially stored in a DynamoDB table with a "pending" status, preventing them from being immediately visible to other users.
  • AI-Powered Validation: An AI Master Agent, triggered by DynamoDB Streams, analyzes each pending review using a Bedrock LLM to identify and flag abusive language, hate speech, or personal attacks. Reviews are then updated in DynamoDB with a status of "approved" or "rejected," along with a rejection reason where applicable.
  • Notification to Publishers: A Notification Agent, also triggered by DynamoDB Streams, informs publishers (via SNS) about newly approved reviews, allowing them to share user feedback with their subscribers.
  • Public Review Display: Only approved reviews are displayed on the web portal, creating a safe and informative space for movie discussions.
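
The submission step above can be sketched as a small Lambda handler. This is a minimal sketch, not our exact code: the `MovieReviews` table name and the request body fields are illustrative assumptions.

```python
import json
import time
import uuid


def build_pending_review(movie_title: str, rating: int, comments: str) -> dict:
    """Build the DynamoDB item for a newly submitted review.

    Every review starts in "pending" status, so it stays hidden from the
    portal until the AI Master Agent approves or rejects it.
    """
    return {
        "review_id": str(uuid.uuid4()),
        "movie_title": movie_title,
        "rating": rating,
        "comments": comments,
        "submission_timestamp": int(time.time()),
        "status": "pending",
    }


def lambda_handler(event, context):
    """API Gateway proxy handler: store the submitted review as pending."""
    import boto3  # imported lazily so the pure helper above is testable offline

    body = json.loads(event["body"])
    item = build_pending_review(body["movie_title"], int(body["rating"]), body["comments"])
    boto3.resource("dynamodb").Table("MovieReviews").put_item(Item=item)  # table name is an assumption
    return {
        "statusCode": 202,  # accepted, but not yet published
        "body": json.dumps({"review_id": item["review_id"], "status": "pending"}),
    }
```

Returning `202 Accepted` rather than `200` signals to the frontend that the review was received but is still awaiting moderation.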

How we built it

CineGuard is built using a combination of serverless technologies on the AWS Cloud Platform:

  • Frontend: A web portal created with HTML, CSS, and JavaScript provides the user interface.
  • Backend API: An API Gateway endpoint connected to a Lambda function handles user review submissions and stores them in a DynamoDB table.
  • Data Storage: DynamoDB stores movie reviews with attributes such as review_id, movie_title, rating, comments, submission_timestamp, status, validation_timestamp, and validation_reason.
  • AI Master Agent: A Lambda function triggered by DynamoDB Streams acts as the AI Master Agent. It invokes a Bedrock LLM to validate review content and updates the review status in DynamoDB.
  • Notification Agent: Another Lambda function, also triggered by DynamoDB Streams, sends notifications to publishers about newly approved reviews (simulated via SNS).
  • Amazon Bedrock Guardrails: We configured Amazon Bedrock Guardrails with specific rules and policies to prevent the generation of inappropriate content. These rules were designed to align with responsible AI principles and our application's content moderation requirements. We defined categories of prohibited content and implemented mechanisms to block or filter reviews that violate these policies.
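
The AI Master Agent described above can be sketched roughly as follows. This is a simplified illustration, not our production code: the `MovieReviews` table name and the JSON verdict format we ask the model for are assumptions, and the real deployment also attaches a Bedrock Guardrail to the model invocation.

```python
import json

MODEL_ID = "anthropic.claude-3-5-sonnet-20241022-v2:0"


def build_moderation_prompt(comments: str) -> str:
    """Prompt asking the model for a strict JSON verdict on the review text."""
    return (
        "You are a content moderator for movie reviews. Decide whether the "
        "review below contains abusive language, hate speech, or personal "
        "attacks. Respond with JSON only: "
        '{"verdict": "approved" | "rejected", "reason": "<short reason or empty>"}\n\n'
        f"Review: {comments}"
    )


def parse_verdict(model_text: str):
    """Extract (status, reason); fail closed to rejection on unparseable output."""
    try:
        data = json.loads(model_text)
        if data.get("verdict") in ("approved", "rejected"):
            return data["verdict"], data.get("reason", "")
    except (json.JSONDecodeError, TypeError):
        pass
    return "rejected", "moderation response could not be parsed"


def lambda_handler(event, context):
    """DynamoDB Streams handler: validate each newly inserted pending review."""
    import time
    import boto3  # lazy import keeps the pure helpers above testable offline

    bedrock = boto3.client("bedrock-runtime")
    table = boto3.resource("dynamodb").Table("MovieReviews")  # table name is an assumption
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        image = record["dynamodb"]["NewImage"]
        if image["status"]["S"] != "pending":
            continue
        resp = bedrock.converse(
            modelId=MODEL_ID,
            messages=[{
                "role": "user",
                "content": [{"text": build_moderation_prompt(image["comments"]["S"])}],
            }],
        )
        status, reason = parse_verdict(resp["output"]["message"]["content"][0]["text"])
        table.update_item(
            Key={"review_id": image["review_id"]["S"]},
            UpdateExpression="SET #s = :s, validation_timestamp = :t, validation_reason = :r",
            ExpressionAttributeNames={"#s": "status"},  # "status" is a DynamoDB reserved word
            ExpressionAttributeValues={":s": status, ":t": int(time.time()), ":r": reason},
        )
```

Failing closed (rejecting when the model's output cannot be parsed) keeps unvetted content off the portal; flagged reviews can then go to the human review queue described under "What's next."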

Challenges we ran into

  • Prompt Engineering for Accurate Validation: Crafting effective prompts for the Bedrock LLM to accurately identify and flag abusive language proved challenging. We experimented with different phrasing and instructions to optimize the model's performance.
  • Balancing Accuracy and Performance: Striking a balance between the accuracy of the AI validation and the response time of the system was crucial. We explored techniques to optimize the Bedrock LLM's performance while maintaining a high level of accuracy.
  • Handling Edge Cases and Nuances: Identifying and handling edge cases and nuances in online language required careful consideration. We implemented mechanisms to allow for human review of flagged content to ensure fairness and accuracy.
  • Simulating the Publisher Notification System: Due to time constraints, we simulated the publisher notification system by sending notifications to an SNS topic, rather than integrating with actual publisher APIs.
  • Configuring and Tuning Bedrock Guardrails: Effectively configuring and tuning Amazon Bedrock Guardrails required careful planning and experimentation. We had to define specific rules and policies that aligned with our content moderation goals while minimizing the risk of false positives or unintended consequences.
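
The simulated publisher notification mentioned above amounts to a second stream consumer that fires only on the pending-to-approved transition. A minimal sketch, with the topic ARN and message shape as placeholder assumptions:

```python
import json

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ApprovedReviews"  # placeholder


def build_publisher_message(review: dict) -> dict:
    """Shape the SNS payload a publisher would receive for an approved review."""
    return {
        "event": "review_approved",
        "review_id": review["review_id"],
        "movie_title": review["movie_title"],
        "rating": review["rating"],
        "comments": review["comments"],
    }


def lambda_handler(event, context):
    """DynamoDB Streams handler: publish newly approved reviews to SNS."""
    import boto3  # lazy import keeps build_publisher_message testable offline

    sns = boto3.client("sns")
    for record in event["Records"]:
        if record["eventName"] != "MODIFY":
            continue
        old = record["dynamodb"]["OldImage"]
        new = record["dynamodb"]["NewImage"]
        # Fire only on the pending -> approved transition, not on every update.
        if old["status"]["S"] == "pending" and new["status"]["S"] == "approved":
            message = build_publisher_message({
                "review_id": new["review_id"]["S"],
                "movie_title": new["movie_title"]["S"],
                "rating": int(new["rating"]["N"]),
                "comments": new["comments"]["S"],
            })
            sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(message))
```

Swapping the `sns.publish` call for real publisher API calls is the integration work listed under "What's next."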

Accomplishments that we're proud of

  • Functional AI-Powered Validation System: We successfully built a working system that automatically validates user-submitted movie reviews using Generative AI, ensuring a safe and constructive online environment.
  • AI-Driven Content Moderation: Our system validates reviews before they are made public, providing a proactive approach to content moderation.
  • End-to-End Workflow: We implemented a complete end-to-end workflow, from user submission to AI validation to publisher notification.
  • Integration with AWS Services: We successfully integrated various AWS services, including API Gateway, Lambda, DynamoDB, DynamoDB Streams, and Bedrock, to build a scalable and reliable solution.
  • User-Friendly Web Portal: We created a user-friendly web portal that allows users to easily submit and view movie reviews.

What we learned

  • The Importance of Prompt Engineering: The quality of the prompts used to interact with the Bedrock LLM has a significant impact on the accuracy and effectiveness of the AI validation process.
  • The Power of Serverless Technologies: Serverless technologies like Lambda and DynamoDB allow us to build scalable and cost-effective applications without the need to manage servers.
  • The Value of Real-time Content Moderation: Real-time content moderation is crucial for creating a safe and positive online environment.
  • The Challenges of AI Bias: We learned about the potential for bias in AI models and the importance of carefully selecting and training models to mitigate bias.

What's next for CineGuard: AI-Powered Movie Review Governance

  • Implement a Human Review Process: Add a human review process for flagged content to ensure fairness and accuracy.
  • Integrate with Real Publisher APIs: Integrate with actual publisher APIs to enable real-time notification of newly approved reviews.
  • Personalize the User Experience: Personalize the user experience by recommending movies and reviews based on user preferences.
  • Expand Content Coverage: Expand the platform to include reviews for other types of content, such as TV shows, games, and books.
  • Explore Monetization Strategies: Explore options such as premium features for users or partnerships with movie studios and streaming services.

Built With

  • agent-to-agent
  • amazon-sdk
  • anthropic.claude-3-5-sonnet-20241022-v2:0
  • api-gateway
  • bedrock-agent
  • boto3
  • css
  • dynamodb
  • ec2
  • guardrails
  • html
  • javascript
  • lambda
  • openapi
  • python