About

Moralit.ai is an artificially intelligent personal assistant that uses natural language processing and machine learning to perform moral decision making. By adhering to deontological principles surrounding murder, suffering, adultery, and deception, Moralit.ai determines the ethical permissibility of performing action requests from the user.

Inspiration

With AI performing ever larger feats of intelligent processing, artificial moral agency (AMA) is becoming a topic of discussion among philosophers, computer scientists, and laypeople alike. If you hold to the material nature of the world around us, AI with true AMA could mean an end to the problem of conscious life and could bring about ethical obligations toward these rational "persons." Inspired by recent moral dilemmas faced by developers of artificially intelligent software (Tay.ai at Microsoft, self-driving car software at Tesla, etc.), the aim of this project is to explore how an AI can rationalize its own moral decisions.

How we built it

First, we created four Kantian universals to feed our AI, actions it is absolutely forbidden to participate in or act on (one possible encoding is sketched after the list):

  1. Murder
  2. Creating suffering
  3. Adultery
  4. Deception
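
A minimal sketch of how such a deny-list might be encoded. The data structure and keyword sets here are illustrative assumptions, not the project's actual representation:

```python
# Hypothetical encoding of the four Kantian universals as a hard deny-list.
# The keyword sets are illustrative only; the real system's representation may differ.
FORBIDDEN_UNIVERSALS = {
    "murder":    {"kill", "murder", "assassinate"},
    "suffering": {"torture", "hurt", "harm"},
    "adultery":  {"have an affair", "cheat on"},
    "deception": {"lie to", "deceive", "mislead"},
}

def violated_universals(text: str) -> list[str]:
    """Return the names of any universals whose keywords appear in the request."""
    lowered = text.lower()
    return [name for name, keywords in FORBIDDEN_UNIVERSALS.items()
            if any(kw in lowered for kw in keywords)]
```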

We then used api.ai for natural language processing, which allowed us to determine the domain of the user's speech. Based on the domain, intent, and original user text, we perform ethical reasoning where necessary. In certain cases, such as completing math functions, asking for the time, and authorizing/opening software applications, we bypass ethical processing entirely. When the text doesn't fall into one of those domains, we compare it against the AI's current knowledge of the four forbidden universals. When it runs into an ethical problem, the AI defaults to denying the user's request or refusing to answer the question.
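
The routing logic might look roughly like the sketch below, assuming the api.ai response has already been parsed into a simple dict. The field name "domain", the bypass set, the keyword check, and the helper names are stand-ins, not the project's actual schema:

```python
# Hypothetical routing layer around the NLP output.
BYPASS_DOMAINS = {"math", "time", "open_app"}        # handled without ethical checks
FORBIDDEN_KEYWORDS = {"kill", "murder", "torture", "cheat on", "lie to"}  # illustrative

def handle_request(parsed: dict, text: str) -> str:
    """Route a parsed request, denying anything that touches a forbidden universal."""
    if parsed.get("domain") in BYPASS_DOMAINS:
        return fulfill(parsed)                        # e.g. do math, tell the time, open an app
    if any(kw in text.lower() for kw in FORBIDDEN_KEYWORDS):
        return "I'm sorry, I can't help with that."   # deny the request / refuse to answer
    return fulfill(parsed)

def fulfill(parsed: dict) -> str:
    """Placeholder for the assistant's normal action handling."""
    return "Done."
```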

The second portion of ethical reasoning (not yet connected, but roughly 90% developed) uses machine learning to train our AI to censor further topics in its realm of work. For example, when a user requests a web search for "how to build a bomb," there is an obvious ethical intersection with our first Kantian universal. The AI uses an API to grab the main context of a sentence and ranks its relevance to our four principles using ML; we then compare that relevance score against a constant value we deem "permissible." If the relevance score is below the permissible constant, we return the answer (in this case, a web search); if it is not, we deny the user's request.
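
A sketch of that thresholding step, assuming a scoring function `relevance_to(text, principle)` that returns the ML relevance score; both that function and the value of the permissible constant are placeholders, not the project's actual ones:

```python
# Hypothetical thresholding step over the ML relevance scores.
PRINCIPLES = ("murder", "suffering", "adultery", "deception")
PERMISSIBLE = 0.5   # illustrative value, not the constant the project uses

def screen_request(text: str, relevance_to) -> bool:
    """Return True when the request may be fulfilled (e.g. the web search is run)."""
    worst = max(relevance_to(text, principle) for principle in PRINCIPLES)
    return worst < PERMISSIBLE   # below the permissible threshold -> allow

# Example: screen_request("how to build a bomb", scorer) should come back False
# whenever the scorer rates the text's relevance to "murder" above PERMISSIBLE.
```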

What's next for Moralit.ai

We would like to finalize the connection with our machine-learning ethical reasoning in order to expand the personal assistant's capabilities. Currently, it is more restrictive than it needs to be in some situations as a precaution; with deontological ethics, it is better to be safe than sorry. Additionally, deepening our understanding of NLP would allow us to differentiate between the user committing a forbidden action and merely reporting or being threatened by one (e.g., "I'm going to kill x" versus "X said he/she was going to kill me").
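
As one illustration of that direction, a dependency parse (sketched here with spaCy, which is not part of the current project) could check whether the speaker is the grammatical agent of a forbidden verb:

```python
import spacy

# Illustrative only: spaCy is not used in the current project, and the parse
# behaviour below would need testing against real sentences.
nlp = spacy.load("en_core_web_sm")   # requires the small English model to be installed

def user_is_agent_of(text: str, verb_lemma: str = "kill") -> bool:
    """True when the speaker ("I") appears to be the subject performing the verb."""
    doc = nlp(text)
    for token in doc:
        if token.lemma_ == verb_lemma and token.pos_ == "VERB":
            # Climb out of clause chains like "going to kill" to the verb
            # that actually carries the subject.
            head = token
            while (not any(c.dep_ == "nsubj" for c in head.children)
                   and head.dep_ in ("xcomp", "ccomp")):
                head = head.head
            if any(c.dep_ == "nsubj" and c.text.lower() == "i" for c in head.children):
                return True
    return False

# Intended behaviour (subject to the parser's actual output):
#   user_is_agent_of("I'm going to kill x")               -> True  (deny)
#   user_is_agent_of("X said he was going to kill me")    -> False
```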
