LLMs are a powerful piece of technology. Being able to run an LLM through a job-based, decentralized workflow is the minimal requirement for anyone to build APIs and apps on top of lilypad.

What it does

Here, I built a stack composed of a Python script, bundled in a Docker container, which is then invoked by a module on lilypad.

The project does the following:

1) A JSON template prompt is uploaded to IPFS.
2) A job using the Docker image is sent through the lilypad network.
3) The job runs on a node, and the resulting inference is uploaded to the IPFS network.
4) The result can be retrieved and injected back into the JSON template prompt, creating a loop and therefore a conversational chatbot.
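The loop in step 4 can be sketched roughly as follows. The `upload_to_ipfs` and `run_lilypad_job` helpers are hypothetical stand-ins for the real IPFS and lilypad calls, and the message layout of the JSON template is illustrative:

```python
import json

def upload_to_ipfs(payload: dict) -> str:
    """Hypothetical stand-in: upload the JSON payload to IPFS, return a CID."""
    return "bafy-stub-cid"

def run_lilypad_job(cid: str) -> str:
    """Hypothetical stand-in: submit the job to the lilypad network,
    wait for a node to run it, and fetch the inference result from IPFS."""
    return "Hello! How can I help?"

def chat_turn(template: dict, user_message: str) -> dict:
    """One loop iteration: inject the user message into the template,
    run the job, and append the model's reply back into the template."""
    template["messages"].append({"role": "user", "content": user_message})
    cid = upload_to_ipfs(template)
    reply = run_lilypad_job(cid)
    template["messages"].append({"role": "assistant", "content": reply})
    return template

template = {"messages": []}
template = chat_turn(template, "Hi there")
print(json.dumps(template, indent=2))
```

Because each turn re-injects the growing message list into the template, the chatbot keeps its conversation history across jobs.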

How we built it

The project was built using Python, the Transformers library from Hugging Face, a Docker stack, and the lilypad framework.

However, the lilypad framework was not ready for production yet, so I spent most of the first day helping the lilypad team fix their stack.

All the code is available on GitHub: the CarpAI LLM image: and a fork of the lilypad project with the module code:

Challenges we ran into

The lilypad stack was not working, and I had to help the team fix it so that it would run on any GPU-based node. The stack was packaging the NVIDIA libraries the wrong way, so I helped the team create a more robust Docker image for their tech, one that does not depend on fragile library mounting hacks.
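A sketch of the more robust approach, assuming an official `nvidia/cuda` runtime base image so the CUDA user-space libraries ship inside the image instead of being mounted from the host; the exact tag, dependencies, and file names are illustrative:

```dockerfile
# Start from an official CUDA runtime image so the NVIDIA user-space
# libraries are baked in rather than mounted from the host at runtime.
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Illustrative dependencies for the inference script.
RUN pip3 install torch transformers

COPY inference.py /app/inference.py
WORKDIR /app
ENTRYPOINT ["python3", "inference.py"]
```

With this layout, any node with the NVIDIA container runtime can run the job without host-specific library paths.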

Accomplishments that we're proud of

Getting the thing to actually work! Even if just for a bit!

What we learned

That making a good dev experience is a hard thing! And that Hugging Face libraries are amazing for spinning up AI projects quickly.

What's next for CarpAI

A deployment on the lilypad mainnet, so anyone can build LLM projects through the lilypad stack!
