Inspiration

We have seen and interacted with several support "chatbots" in the past, and these are typically either staffed by humans or driven by canned responses backed by simple lookups or searches in a knowledge base, which makes them neither very intelligent nor very useful. Having experimented with ChatGPT and other Generative AI systems recently, it became apparent that integrating such a system for this purpose could enable far more intelligent and useful AI-based agents. We believe this could allow customer support organizations to be significantly more responsive to their customers while increasing customer satisfaction through more immediate, useful responses, without needing to increase staff.

What it does

The app allows teams to connect a Jira Service Management instance to a Generative AI engine (initially the OpenAI API or a custom-deployed LLaMA model) and have that engine act as a "virtual agent" for their teams, automatically responding to customer requests and/or allowing or requiring human agents to review and update the automatically generated responses as needed.

The app also provides control at the JSM Project Admin level: admins can configure the integration by specifying which request data is used to augment prompts, enabling translation of responses, and even fully customizing the prompt and enriching it with additional context.
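As a rough illustration of what that per-project configuration covers (the field names below are hypothetical, not the app's actual settings schema):

```typescript
// Hypothetical shape of the per-project configuration described above.
// Field names are illustrative, not the app's actual settings schema.
interface ProjectAgentConfig {
  enabled: boolean;            // whether the virtual agent responds in this project
  autoReply: boolean;          // post responses automatically vs. require agent review
  requestFields: string[];     // which request fields to include when augmenting the prompt
  translateResponses: boolean; // translate responses into the customer's language
  targetLanguage?: string;     // e.g. "de", only used when translateResponses is true
  promptTemplate?: string;     // optional fully customized prompt, with placeholders
}

const exampleConfig: ProjectAgentConfig = {
  enabled: true,
  autoReply: false,
  requestFields: ['summary', 'description', 'customfield_10021'],
  translateResponses: true,
  targetLanguage: 'de',
  promptTemplate:
    'You are a support agent for {{productName}}. Answer the request below.\n\n{{requestDetails}}',
};
```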

How we built it

The app was built as a Forge app, using triggers to detect issue creation, updates, and incoming comments, and Custom UI for the Forge issue panel, Project Admin, and Global Admin pages. The Jira / JSM REST APIs are used to save the response from the Generative AI engine as issue property data and to generate new customer responses. The OpenAI API and a custom-built, custom-hosted LLaMA engine with an API frontend provide the generated responses. The app's code is written mostly in JavaScript, TypeScript, Node.js, and React, with the Atlassian Design System for the UI.
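A minimal sketch of that flow for the issue-created trigger (simplified; the property key, model choice, and prompt are illustrative, and the real app also handles comment and update triggers, error handling, and the admin configuration):

```typescript
// Simplified Forge trigger handler: on issue creation, ask the LLM for a draft
// reply and store it as an issue property for agents to review in the issue panel.
// Property key, prompt, and model are illustrative, not the app's actual values.
import api, { route, fetch } from '@forge/api';

export async function onIssueCreated(event: { issue: { key: string } }) {
  const issueKey = event.issue.key;

  // Fetch the request details used to build the prompt
  const issueRes = await api.asApp().requestJira(route`/rest/api/3/issue/${issueKey}`);
  const issue = await issueRes.json();
  const prompt = `Draft a helpful support reply to this request:\n${issue.fields.summary}`;

  // Call the Generative AI engine (OpenAI shown; a custom-hosted LLaMA API is analogous)
  const aiRes = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const draft = (await aiRes.json()).choices[0].message.content;

  // Persist the draft as an issue property so the issue panel can display it
  await api.asApp().requestJira(
    route`/rest/api/3/issue/${issueKey}/properties/virtual-agent-draft`,
    {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ draft, generatedAt: new Date().toISOString() }),
    },
  );
}
```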

Challenges we ran into

It is challenging to take a customer request that could consist of anything, validate it, enrich it with context (custom field data, and possibly system configuration / settings), and form that into a useful LLM prompt. Once a response is received from the LLM, it is another challenge to validate that response, ensure that it is suitable for consumption by the agent and the end customer who will see it, and to maintain privacy and security end to end throughout this process. In addition, these APIs, and hosting custom LLMs, come with cost / performance tradeoffs.
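One way to picture the enrichment step is sketched below; the field selection and wording are hypothetical and much simpler than the prompt assembly the app actually performs:

```typescript
// Illustrative prompt assembly: take arbitrary request data, drop empty values,
// and fold selected context into a single instruction for the LLM.
// Field choices and wording are hypothetical, not the exact prompt the app uses.
interface RequestContext {
  summary: string;
  description: string;
  customFields: Record<string, string>; // e.g. product, environment, SLA tier
  projectGuidelines?: string;           // admin-supplied context from project settings
}

function buildPrompt(ctx: RequestContext): string {
  const fieldLines = Object.entries(ctx.customFields)
    .filter(([, value]) => value && value.trim().length > 0)
    .map(([name, value]) => `${name}: ${value.trim()}`)
    .join('\n');

  return [
    'You are a support agent. Write a clear, polite reply to the customer request below.',
    ctx.projectGuidelines ? `Team guidelines:\n${ctx.projectGuidelines}` : '',
    `Request summary: ${ctx.summary}`,
    `Request details:\n${ctx.description}`,
    fieldLines ? `Additional context:\n${fieldLines}` : '',
  ]
    .filter(Boolean)
    .join('\n\n');
}
```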

What we learned

We learned a lot about how to craft suitable prompts from potentially arbitrary input and ensure that these prompts result in high quality responses. It also took trial and error to determine how to validate that the response from the LLM is valid and worth sending to the customer or human agent.
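A much-simplified version of the kind of checks involved (thresholds and blocklist phrases here are placeholders, not the app's actual rules):

```typescript
// Illustrative sanity checks on an LLM response before it is surfaced to an
// agent or customer. Thresholds and blocklist entries are placeholders.
function isUsableResponse(response: string): boolean {
  const text = response.trim();

  // Reject empty or suspiciously short/long completions
  if (text.length < 40 || text.length > 4000) return false;

  // Reject responses where the model declined or went off-script
  const blocklist = ['as an ai language model', 'i cannot help with'];
  if (blocklist.some((phrase) => text.toLowerCase().includes(phrase))) return false;

  return true;
}
```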

We also learned how to create a custom LLM engine, implement an API in front of it, custom-train and fine-tune these models, and configure the resources needed to make this as cost-effective and performant as possible.

What's next for Virtual Agent for Jira Service Management

While very useful in its existing form, there is a lot that can be done to make this product even more compelling. We intend to make the solution more useful to teams by allowing them to fine-tune, custom-train, and automatically deploy custom models suited to their team's specific needs. We plan to build a mechanism where a customer can specify which content they'd like a model trained on: for example, Confluence content, JSM and/or Jira issue history, document stores (SharePoint, Google Drive), and external third-party systems and services (Salesforce or Slack, for example). The customer could then specify a schedule for re-training and automatically deploy new / updated models on that schedule. This would enable teams to implement "virtual agents" tailored to their specific domains and use cases.
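As a rough sketch of how such a training configuration might look (entirely hypothetical at this stage; source types, identifiers, and the cron-style schedule are illustrative only):

```typescript
// Hypothetical shape for the planned custom-training configuration.
// Source types, identifiers, and the schedule format are illustrative only.
interface TrainingSource {
  type: 'confluence' | 'jira' | 'jsm' | 'sharepoint' | 'google-drive' | 'salesforce' | 'slack';
  identifier: string;           // e.g. a Confluence space key or a JQL filter
}

interface CustomModelConfig {
  sources: TrainingSource[];
  retrainSchedule: string;      // cron-style schedule for re-training
  autoDeploy: boolean;          // deploy the new model automatically after training
}

const exampleModelConfig: CustomModelConfig = {
  sources: [
    { type: 'confluence', identifier: 'SUPPORT' },
    { type: 'jsm', identifier: 'project = HELPDESK AND resolved >= -180d' },
  ],
  retrainSchedule: '0 2 * * 0', // weekly, Sunday 02:00
  autoDeploy: true,
};
```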

Additional security and privacy features should also be added so that customers can implement rules that scan data going into and/or coming back from the Generative AI API, ensuring that sensitive data is not shared and that privacy is maintained. The ability to host and maintain a custom LLM engine inside the organization's own infrastructure would also provide a higher level of security and privacy for organizations that deal with private data and/or operate in more highly regulated industries.
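A trivial example of what such an outbound scanning rule could look like (the patterns shown are placeholders, not a complete or production-grade privacy policy):

```typescript
// Illustrative outbound scan: redact obviously sensitive patterns before the
// prompt leaves the customer's instance. Patterns are placeholders only.
const SENSITIVE_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, '[REDACTED-SSN]'],      // US SSN-like numbers
  [/\b(?:\d[ -]?){13,16}\b/g, '[REDACTED-CARD]'],     // credit-card-like numbers
  [/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[REDACTED-EMAIL]'],  // email addresses
];

function redactSensitiveData(text: string): string {
  return SENSITIVE_PATTERNS.reduce(
    (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
    text,
  );
}
```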

There are countless use cases for the underlying technology implemented in this solution, and it can be applied outside of JSM to Jira Software, Jira Work Management, Confluence, DevOps scenarios, and other tools and environments.
