Robust CVE ↔ Asset Matching in Jira Assets (Data Center) using CPE - Best Practice?

Nick Kölliker
I'm New Here
February 12, 2026

Hi everyone

We are currently planning to design a CVE integration in Jira Assets (Data Center) based on NVD (NIST) data and would like to validate our architectural approach with the community.

Our goal is not just to import CVEs, but to achieve reliable vulnerability matching between infrastructure assets and CVEs, ideally based on CPE rather than free-text product matching (free-text was our first approach ;)).

Current Situation

  • NVD data is imported into a dedicated CVE schema in Assets

  • Our CMDB (Assets) contains infrastructure objects and installed software

  • Software names are currently not normalized to CPE format

  • A ScriptRunner-based matching approach exists, but it relies on vendor/name/version string comparisons

As expected, this leads to:

  • false positives

  • false negatives

  • maintainability concerns

Target Architecture (Conceptual)

We are considering introducing:

  • A normalized product catalog schema

  • Storing CPE identifiers per product

  • Mapping software instances → product objects → CPE

  • Matching CVEs based on CPE and version logic

Before implementing this, we would like to understand:

  1. Has anyone successfully implemented CPE-based CVE matching in Jira Assets?

  2. Did you model a separate normalized product catalog?

  3. How do you handle CPE version ranges?

  4. Did you automate CPE assignment or maintain it manually?

  5. At what point did you decide to integrate a dedicated vulnerability scanner instead?

We are aware that Assets is not a native vulnerability management system. Our objective is to understand whether a robust CPE-based model inside Assets is sustainable long-term or whether most organizations move toward integrating a scanner.

Any architectural insights or lessons learned would be highly appreciated.

Thanks in advance!

Nick Kölliker

1 answer

Christos Markoulatos -Relational-
Community Champion
February 19, 2026

Hi @Nick Kölliker, welcome!

This is a great architectural exercise, and in my opinion your instinct to move toward CPE-based matching is the right one.

On CPE-based matching in Jira Assets, it's doable, but you need to go in with realistic expectations. CPE matching sounds precise on paper, but the NVD's CPE data itself is inconsistent, vendors populate it unevenly, and the same product can appear under multiple CPE URIs across different CVE entries. So CPE gets you significantly closer to correctness than free-text, but it doesn't eliminate the need for curation.

Normalizing the product catalog is almost universally the approach teams land on after struggling with direct software-instance-to-CVE matching. The typical working model looks like: Software Instance → Product (with CPE) → CVE. The product catalog becomes your normalization layer. The key discipline is that only the product catalog layer owns CPE identifiers — software instances link to catalog entries, never define their own CPE strings. This makes CPE maintenance tractable.
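A minimal sketch of that layering, using illustrative Python dataclasses rather than actual Assets object types (all names here are hypothetical; in Assets these would be object types linked by references), might look like:

```python
from dataclasses import dataclass

# Hypothetical mirror of the catalog model: only Product owns a CPE.
@dataclass(frozen=True)
class Product:
    vendor: str
    name: str
    cpe_uri: str          # e.g. "cpe:2.3:a:apache:tomcat:*:*:*:*:*:*:*:*"

@dataclass
class SoftwareInstance:
    hostname: str
    version: str
    product: Product      # link to a catalog entry; never its own CPE string

@dataclass
class Cve:
    cve_id: str
    affected_cpe: str     # vendor/product-level CPE taken from NVD

def candidate_cves(instance: SoftwareInstance, cves: list[Cve]) -> list[Cve]:
    """First-pass match on the product-level CPE; version logic runs later."""
    # keep only "cpe:2.3:<part>:<vendor>:<product>" for the coarse match
    base = ":".join(instance.product.cpe_uri.split(":")[:5])
    return [c for c in cves if c.affected_cpe.startswith(base)]
```

The point of the sketch is the direction of the links: instances point at the catalog, the catalog carries the CPE, and matching never touches free-text instance names.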

CPE version ranges are where the architecture gets painful. NVD expresses vulnerable ranges using versionStartIncluding, versionEndExcluding, etc., and implementing proper semver range logic in ScriptRunner/Groovy is non-trivial. You'll need a dedicated comparison function, and you'll immediately hit edge cases: four-part version strings, non-numeric suffixes like -patch1, OS-specific versioning. Most teams implement a simplified range check that covers ~90% of cases and flag the rest for manual review rather than trying to handle every edge case programmatically.
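As a rough illustration of that "simplified check plus manual-review fallback" idea, here is a sketch in Python (the same logic ports to Groovy); the parameter names mirror NVD's versionStartIncluding / versionEndExcluding attributes, and anything non-numeric deliberately falls through to a review path:

```python
def parse_version(v: str):
    """Split a dotted version into a numeric tuple. Returns None for parts
    we can't compare (e.g. '-patch1' suffixes) so callers can flag them."""
    nums = []
    for p in v.split("."):
        if not p.isdigit():
            return None
        nums.append(int(p))
    return tuple(nums)

def in_range(version: str, start_incl=None, start_excl=None,
             end_incl=None, end_excl=None):
    """Return True/False, or None when the version cannot be compared
    (the 'flag for manual review' path)."""
    v = parse_version(version)
    if v is None:
        return None
    def check(bound, op):
        b = parse_version(bound)
        if b is None:
            return None
        # pad the shorter tuple with zeros so 1.2 compares equal to 1.2.0
        n = max(len(v), len(b))
        return op(v + (0,) * (n - len(v)), b + (0,) * (n - len(b)))
    checks = []
    if start_incl: checks.append(check(start_incl, lambda a, b: a >= b))
    if start_excl: checks.append(check(start_excl, lambda a, b: a > b))
    if end_incl:   checks.append(check(end_incl,   lambda a, b: a <= b))
    if end_excl:   checks.append(check(end_excl,   lambda a, b: a < b))
    if any(c is None for c in checks):
        return None
    return all(checks)
```

The three-valued return (True / False / None) is the important design choice: it turns every edge case the simplified parser can't handle into an explicit review item instead of a silent false negative.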

On automating CPE assignment, a hybrid approach is the realistic answer. You can automate CPE lookup against the NVD CPE dictionary API when new products are added, presenting candidates ranked by string similarity, but human confirmation before the CPE is committed to the catalog is strongly advisable. Fully automated CPE assignment without review produces subtle errors that are hard to audit later.
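A sketch of that ranking step, assuming the candidate CPE URIs have already been fetched from the NVD CPE dictionary API (the API call itself is omitted, and `difflib` stands in for whatever similarity measure you prefer):

```python
from difflib import SequenceMatcher

def rank_cpe_candidates(product_name: str, vendor: str,
                        candidates: list[str], top_n: int = 3):
    """Rank candidate cpe:2.3 URIs by similarity to 'vendor product'.
    A human confirms the winner before it enters the catalog."""
    query = f"{vendor} {product_name}".lower()
    scored = []
    for cpe in candidates:
        # vendor and product are the 4th and 5th colon-separated fields
        fields = cpe.split(":")
        text = " ".join(fields[3:5]).replace("_", " ").lower()
        scored.append((cpe, SequenceMatcher(None, query, text).ratio()))
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_n]
```

In practice you would surface the top few candidates with their scores in the review UI rather than auto-committing even a perfect-looking match.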

When to bring in a dedicated scanner is the most important question. The honest answer is that most organizations doing this work in a CMDB eventually integrate a scanner, and the tipping point is usually one of three things: when version-range logic becomes a maintenance burden, when you need authenticated scan data to reliably detect installed versions, or when audit/compliance requirements demand scan evidence rather than CMDB-derived matching. Tools like Tenable, Qualys, and Rapid7 all have Jira integrations that can push findings into Assets as objects, at which point your CMDB model shifts from doing the matching to providing context (ownership, criticality, environment) that enriches scanner findings.

In my experience, if your environment is under 500 managed software products and your team has ScriptRunner capacity, the CPE-catalog model is sustainable and worth building; it also forces good CMDB hygiene that pays dividends regardless. If you're looking at thousands of products or need to demonstrate scan coverage to auditors, budget for a scanner integration now and design your Assets schema to receive findings from it rather than generate them internally. The two approaches aren't mutually exclusive; many mature teams run both, using scanner findings to validate their CMDB and CMDB context to prioritize scanner findings.

The architecture you've outlined (product catalog with CPE, software instances linking to catalog entries, ScriptRunner doing the version-range matching) is a solid foundation either way.

This is my opinion from my experience. Hope it helps!

 

Nick Kölliker
I'm New Here
February 25, 2026

Hi Christos

Thank you very much for your detailed and thoughtful response, this is exactly the kind of architectural perspective we were hoping to hear at this stage.

We are currently still in the architectural design phase and have not yet implemented or tested a CPE-based matching model. During our concept evaluation, however, it already became apparent that the NVD CPE dictionary itself is not entirely consistent, especially regarding vendor naming variations and multiple CPE URIs for what is essentially the same product. It is very helpful to see this confirmed from practical experience.

Your emphasis on introducing a strict normalization layer (Software Instance → Product → CPE → CVE) aligns strongly with our thinking. In particular, the governance principle that only the product catalog owns CPE identifiers seems essential to keep the model maintainable and prevent uncontrolled CPE drift over time.

Regarding version range handling, your pragmatic "cover the majority and flag edge cases" approach is very insightful. We are currently evaluating how deep we want to go with version comparison logic versus consciously accepting a controlled level of imperfection. Your experience provides a valuable reality check in that regard.

On the scanner topic, our current strategic direction is to rely fully on Assets for vulnerability correlation, at least for now. Integrating a dedicated vulnerability scanner is not part of the immediate roadmap. That said, we truly appreciate your perspective. It is helpful to understand at which scale or complexity threshold organizations typically pivot toward scanner integration. We will definitely keep that evolution path in mind as our initiative progresses.

One additional question, if you don’t mind:

Did you ever implement validation controls within the product catalog layer (for example duplicate CPE detection, lifecycle checks, or schema constraints) to prevent inconsistencies over time?

Thanks again for sharing your experience, this input is extremely valuable as we shape the architecture before moving into implementation.

Best regards

Nick

Christos Markoulatos -Relational-
Community Champion
March 2, 2026

Hi @Nick Kölliker 

Good question, and honestly one I don't have a fully battle-tested answer for, so take this as a starting point rather than a proven playbook.

For duplicate CPE detection, a uniqueness constraint on the CPE URI attribute in Assets handles exact duplicates reliably enough. The harder problem is near-duplicates, meaning the same product entered with slightly different CPE URI formatting (cpe:2.3 vs legacy cpe:/ binding, or vendor name variants). A periodic reconciliation script that normalizes stored CPEs to 2.3 format and flags high-similarity pairs for manual review tends to be more practical than trying to catch these at insert time.
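A simplified sketch of that normalize-then-flag reconciliation pass (it ignores percent-encoding and the packed edition field of the legacy binding, so treat it as a starting point rather than a full implementation of the spec):

```python
from difflib import SequenceMatcher
from itertools import combinations

def normalize_cpe(uri: str) -> str:
    """Convert a legacy 'cpe:/' URI to the cpe:2.3 formatted-string binding.
    Simplified: no percent-encoding or packed edition handling."""
    if uri.startswith("cpe:2.3:"):
        return uri.lower()
    if uri.startswith("cpe:/"):
        fields = uri[len("cpe:/"):].split(":")
        fields += ["*"] * (11 - len(fields))   # pad to the 11 cpe:2.3 fields
        return ("cpe:2.3:" + ":".join(fields)).lower()
    raise ValueError(f"unrecognized CPE binding: {uri}")

def near_duplicates(cpes: list[str], threshold: float = 0.9):
    """Flag normalized pairs above a similarity threshold for manual review.
    Exact duplicates collapse during normalization; this catches the rest."""
    norm = sorted(set(normalize_cpe(c) for c in cpes))
    return [(a, b) for a, b in combinations(norm, 2)
            if SequenceMatcher(None, a, b).ratio() >= threshold]
```

Running this on a schedule and routing the flagged pairs into a review queue is usually enough; the threshold needs tuning against your own catalog.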

For lifecycle management, the lightest thing that adds real value is a Status attribute (Active / Deprecated / EOL) combined with a Last Validated date stamped by automation. A scheduled job flagging products whose CPE has not appeared in recent NVD feeds gives a useful signal without needing full vendor EOL database integration, though it is imperfect and will miss products that NVD simply stops referencing quietly.
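The staleness check itself can be as small as this sketch (assuming the import job stamps a last-seen date per catalog CPE; the attribute name is illustrative):

```python
from datetime import date, timedelta

def flag_stale_products(catalog: dict[str, date], today: date,
                        max_age_days: int = 180) -> list[str]:
    """catalog maps CPE URI -> date the CPE last appeared in an NVD feed.
    Returns entries whose last sighting is older than the cutoff."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(cpe for cpe, last_seen in catalog.items()
                  if last_seen < cutoff)
```

The output feeds the review queue; it does not auto-deprecate anything, for the reasons about automated cleanup below.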

For schema constraints, making CPE URI, vendor, product name and version scheme mandatory fields with format validation prevents half-populated entries from accumulating and silently degrading match quality over time. The regex for CPE URI format is straightforward to write.
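For illustration, a simplified cpe:2.3 format check along those lines (it does not handle escaped colons inside components, which the full specification allows, so it is a pragmatic gate rather than a spec validator):

```python
import re

# Simplified cpe:2.3 formatted-string check: part must be a/o/h, followed by
# exactly 10 more non-empty colon-separated components.
CPE23_RE = re.compile(r"^cpe:2\.3:[aoh](:[^:]+){10}$")

def is_valid_cpe23(uri: str) -> bool:
    return bool(CPE23_RE.match(uri))
```

In Assets this would sit in the validation script (or a scheduled audit job) guarding the mandatory CPE URI attribute.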

The governance model honestly matters as much as the technical controls. Automation for detection and flagging, humans for resolution. Fully automated catalog cleanup without review tends to produce data loss that is difficult to trace later.

Hope it helps even if it is not a complete answer.
