Inspiration

Do you think AI systems accurately reflect your moral values? Could they ever respect your moral and ethical commitments? These are questions I have pondered and sought to answer in my PhD research at the University of Aberdeen. This project is not a direct output of my PhD; rather, it visualises the theory I am working on, which posits that preserving an agent's capacity to choose is a fundamental ethical substrate, one that can foster collaborative dialogue between humans and AI agents and help protect individual and community rights in an increasingly diverse world.

What it does

This project demonstrates how involving people in AI decision-making, and preserving agents' capacity to choose, can lead to ethical discussion and care. Users can observe the agents, watch how they respond to situations, and explore the ethical frameworks and moral systems the simulation draws on. Eventually, it will include a tool that helps people learn how different ethical frameworks, applied to their specific roles, can enhance their capacity to choose.

How we built it

Since I come from a no-code background, I built this by first using Claude to summarise my thesis formulas into a prompt for Bolt, so that the measurement metrics would accurately follow the theory. Through collaboration with the agent, I worked out the best way to represent the information, asking it to identify areas for improvement and to justify the reasoning behind some of its choices.
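The actual formulas live in the thesis, but as a rough illustration only, a choice-preservation metric of the kind the app measures might look like the sketch below. Every name and number here is hypothetical, invented for this example, and not the real SERA metrics:

```typescript
// Hypothetical sketch, NOT the SERA thesis formulas: score an action by
// the fraction of an agent's options it leaves open, so higher values
// mean the action is more choice-preserving.

interface Action {
  name: string;
  optionsBefore: number; // choices available to the agent before acting
  optionsAfter: number;  // choices still available afterwards
}

function choicePreservation(a: Action): number {
  if (a.optionsBefore <= 0) return 0; // no capacity to preserve
  return Math.min(1, a.optionsAfter / a.optionsBefore);
}

// Rank candidate actions from most to least choice-preserving.
function rankByPreservation(actions: Action[]): Action[] {
  return [...actions].sort(
    (x, y) => choicePreservation(y) - choicePreservation(x)
  );
}

const candidates: Action[] = [
  { name: "advise", optionsBefore: 4, optionsAfter: 4 },
  { name: "restrict", optionsBefore: 4, optionsAfter: 1 },
];

// "advise" ranks first, since it leaves all of the agent's options open.
console.log(rankByPreservation(candidates).map(a => a.name));
```

A ratio like this is just one simple way to operationalise "capacity to choose"; the point of the Claude-to-Bolt workflow was to translate the thesis's own formulas into code without writing it by hand.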

Challenges we ran into

The biggest challenge was my lack of a coding background. I relied mostly on the agent to understand and execute the plan, and it was only by observing how things actually functioned that I could write prompts suggesting fixes. In some cases, one step forward meant six steps backwards: finding the right sequence of words to preserve the progress already made while still fixing the errors.

Accomplishments that we're proud of

A functional proof of concept for this idea has been a long time coming. Being able to discuss the idea with the agent, work through the logic, and execute it without needing to know how to code was great, though the result could be improved with stronger foundational knowledge.

What we learned

Joining the hackathon was a great opportunity to learn about the diversity of competition categories. It was also a chance to use Bolt.new in a different way and to learn more about what goes into making applications, and about the range of AI tools that can help you accomplish your goals even with no coding experience.

What's next for SERA Collaborative Framework

Eventually, I would like to expand this into a "Choose Your Own Adventure"-style storytelling bot that guides readers on unique journeys dictated by their choices, with AI-generated narration that emphasises the impact of each choice on the story. Beyond that, I hope the SERA framework will be applied in other ways to enhance AI capacity, reasoning, and decision-making.

Built With

Bolt.new, Claude