[note: text might look AI-ish because I CBA with writing everything from scratch]
Hi everyone,
I've been exploring ways to improve request quality and reduce agent effort in JSM (or CSM) by leveraging AI beyond native capabilities (e.g., Rovo / Virtual Agent).
While testing built-in features, I ran into a few limitations:
- Handling incomplete or ambiguous user inputs
- Limited ability to leverage external context (HRIS, historical requests, user attributes)
- Lack of proactive request enrichment before reaching agents
To be clearer, I'm looking at how to resolve the following requirement: leveraging AI to minimize incomplete requests and agent effort.
I started experimenting with a more extensible architecture using a third-party LLM + middleware layer (via APIs / MCP).
Conceptually, the flow looks like this:
In short:
- User submits a request (potentially outside Atlassian)
- Middleware enriches the prompt with external data (HRIS, past tickets, etc.)
- LLM generates a structured, complete request
- Request is created in JSM
- Optional feedback loop for continuous improvement
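To make the first three steps concrete, here is a minimal sketch of what the middleware could look like. Everything here is hypothetical: `enrich_with_external_data` and `draft_structured_request` are placeholders for the HRIS lookup and the third-party LLM call, not real APIs.

```python
import json

def enrich_with_external_data(user_email: str) -> dict:
    # Placeholder: a real middleware would query an HRIS and past JSM tickets here.
    return {"department": "Finance", "location": "Berlin", "openRequests": 1}

def draft_structured_request(raw_text: str, context: dict) -> dict:
    # Placeholder for the LLM step: turn a vague user message plus enrichment
    # context into a complete, structured request.
    return {
        "summary": raw_text.strip().rstrip(".")[:120],
        "description": f"{raw_text}\n\nEnrichment context: {json.dumps(context)}",
    }

def intake(user_email: str, raw_text: str) -> dict:
    """Steps 1-3 of the flow: user submission -> enrichment -> structured request."""
    context = enrich_with_external_data(user_email)
    return draft_structured_request(raw_text, context)
```

The output of `intake` would then feed step 4 (request creation in JSM) through whatever API layer is chosen.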
I'd love to validate how feasible this is within the Atlassian ecosystem and what constraints I might be missing.
Specifically:
- Has anyone implemented something similar using Rovo MCP or external LLMs?
- Any specific limitations of MCP for third-party AI integrations (for this scenario)?
- Would this require all end users to have Atlassian accounts, or would portal-only accounts also work?
- Any performance or cost trade-offs worth calling out?
Would really appreciate any insights, examples, or lessons learned!
Cheers,
Tobi
FYI: I briefly discussed this with an Atlassian PM, and one suggestion was leveraging the Slack → JSM integration for intake (i.e. users submitting requests via Slack, which then creates tickets in JSM).
This makes sense from an entry-point perspective, but I'd still expect to need an AI layer in between.
@Tomislav Tobijas Moin Moin
Nice diagram. The overall idea looks solid to me.
The main thing I would separate here is who is calling the API versus who the request is being created for.
On the JSM side, creating a request on behalf of a customer is possible with raiseOnBehalfOf, so the request can still be opened for a portal customer. But the account making that API call cannot be just a portal-only customer account; in practice, it needs to be an account with the right JSM access, typically an agent/admin-capable one.
About external...
Your middleware authenticates with a dedicated service account, creates the request through the JSM API, and uses raiseOnBehalfOf to set the actual end user as the reporter.
That way, the external user does not need to be a full licensed Jira user. They would just need to exist as a valid customer in the portal/customer setup.
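As a rough illustration of that model, here is what building such a call could look like against the JSM Cloud REST API's create-customer-request endpoint. The site URL, credentials, serviceDeskId, and requestTypeId are all placeholders; the service account authenticating here needs agent-level JSM access, while the customer only needs to exist as a portal customer.

```python
import base64
import json
import urllib.request

# Placeholders -- substitute real values for your site and service account.
SITE = "https://your-site.atlassian.net"
SVC_EMAIL = "svc-intake@example.com"
API_TOKEN = "<api-token>"

def build_create_request(customer_email: str, summary: str, description: str) -> urllib.request.Request:
    payload = {
        "serviceDeskId": "10",   # assumed id
        "requestTypeId": "25",   # assumed id
        "requestFieldValues": {"summary": summary, "description": description},
        "raiseOnBehalfOf": customer_email,  # portal customer becomes the reporter
    }
    # Basic auth with the service account's email + API token
    token = base64.b64encode(f"{SVC_EMAIL}:{API_TOKEN}".encode()).decode()
    return urllib.request.Request(
        url=f"{SITE}/rest/servicedeskapi/request",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Basic {token}", "Content-Type": "application/json"},
        method="POST",
    )
```

This only constructs the request object; sending it (and handling 4xx responses for missing customers or permissions) is left out of the sketch.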
Where I would be a bit careful is the MCP part.
The JSM API itself supports this model, but I would not state too strongly that the current Atlassian MCP server already exposes that exact flow. From the current public docs, the JSM-related MCP tooling still looks fairly limited, so that part is the one I would leave a bit open unless someone has tested it recently.
> The main thing I would separate here is who is calling the API versus who the request is being created for.
> On the JSM side, creating a request on behalf of a customer is possible with raiseOnBehalfOf, so the request can still be opened for a portal customer. But the account making that API call is not just a portal-only customer account. In practice, that needs to be an account with the right JSM access, typically an agent/admin-capable one.
Yeah - this could probably be handled by having the AI find the appropriate user in case we don't do direct authentication (if, in fact, the AI is using some generic or admin account to create requests). Although I agree that it would probably be much easier to simply rely on users having Atlassian accounts.
> The JSM API itself supports this model, but I would not state too strongly that the current Atlassian MCP server already exposes that exact flow. From the current public docs, the JSM-related MCP tooling still looks fairly limited, so that part is the one I would leave a bit open unless someone has tested it recently.
This part also requires some dedicated review before the actual implementation. In theory (and by the docs) it's all flowers and rainbows, but I'm keen to see if anyone has built something like this.
We'll see...
Every concept is still theoretical at this stage until you actually bring it to life.
That is especially true in the AI field. A lot of ideas sound promising on paper, but turning them into something real, useful, and sustainable is much harder.
That is one of the main reasons so many AI startups fail, while many of the rest end up being absorbed or outcompeted by much larger players in the market.
I second @Arkadiusz Wroblewski
I don't really see a reason to use MCP vs. just using the regular API. If your enrichment layer is not an LLM but a deterministic piece of software that uses the AI outputs of the LLM layer, you can just use the normal API.
That way, you can definitely just use a service account (that has agent permissions in the JSM space) for authentication, and you have more endpoints available.
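One way to read that suggestion as code: the deterministic layer treats the LLM output as untrusted JSON and validates/normalizes it before any call to the regular JSM API is made. The required field names below are assumptions for illustration, not a fixed schema.

```python
import json

# Fields the deterministic layer insists on before creating a JSM request
# (illustrative set -- adjust to your request type's actual required fields).
REQUIRED_FIELDS = {"summary", "description", "requestTypeId"}

def validate_llm_output(raw_json: str) -> dict:
    """Parse and sanity-check the LLM's draft request before it touches the API."""
    data = json.loads(raw_json)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"LLM output is missing fields: {sorted(missing)}")
    data["summary"] = data["summary"].strip()[:255]  # Jira caps summaries at 255 chars
    return data
```

Rejected drafts could be sent back through the LLM with the error message, which also gives you the feedback loop from the original diagram for free.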
@Tomislav Tobijas Best quote of the month: "[note: text might look AI-ish because I CBA with writing everything from scratch]"