# Checking Object Similarity and Equivalence¶

The Environment has functions for checking if two STIX Objects are very similar or identical. The functions differentiate between equivalence, which is a binary concept (two things are either equivalent or they are not), and similarity, which is a continuum (an object can be more similar to one object than to another). The similarity function answers the question, “How similar are these two objects?” while the equivalence function uses the similarity function to answer the question, “Are these two objects equivalent?”

For each supported object type, the object_similarity() function checks whether the values of a specific set of properties match. Each matching property is weighted, since not every property carries the same importance for semantic similarity. The result is the sum of these weighted values, in the range of 0 to 100. A result of 0 means the two objects are not equivalent, and a result of 100 means they are equivalent. Values in between mean the two objects are more or less similar and can be used to determine whether they should be considered equivalent. The object_equivalence() function calls object_similarity() and compares the result to a threshold to determine whether the objects are equivalent. Different organizations or users may use different thresholds.
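As a simplified sketch (not the library's actual code), the weighted-sum scheme behind object_similarity() and object_equivalence() can be illustrated like this, where each property comparison yields a score between 0 and 1:

```python
def similarity(property_results):
    """property_results: list of (weight, score) pairs, each score in [0, 1]."""
    matching_score = sum(weight * score for weight, score in property_results)
    sum_weights = sum(weight for weight, _ in property_results)
    # Normalize to the 0-100 range
    return 100 * matching_score / sum_weights

def equivalence(property_results, threshold=90):
    # Equivalence is just similarity compared against a threshold
    return similarity(property_results) >= threshold

# Two properties: one matches exactly, one matches halfway
results = [(70, 1.0), (30, 0.5)]
print(similarity(results))   # 85.0
print(equivalence(results))  # False
```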

TODO: Add a link to the committee note when it is released.

There are a number of use cases for which calculating semantic equivalence may be helpful. It can be used for echo detection, in which a STIX producer who consumes content from other producers wants to make sure they are not creating content they have already seen or consuming content they have already created.

Another use case for this functionality is to identify identical or near-identical content, such as a vulnerability shared under three different nicknames by three different STIX producers. A third use case involves a feed that aggregates data from multiple other sources. It will want to make sure that it is not publishing duplicate data.

Below we will show examples of the semantic similarity results of various objects. Unless otherwise specified, the ID of each object will be generated by the library, so the two objects will not have the same ID. This demonstrates that the semantic similarity algorithm only looks at specific properties for each object type. Each example also shows the result of calling the equivalence function, with a threshold value of 90.

Please note that you will need to install a few extra dependencies in order to use the semantic equivalence functions. You can do this using:

pip install stix2[semantic]

## Attack Pattern Example¶

For Attack Patterns, the only properties that contribute to semantic similarity are name and external_references, with weights of 30 and 70, respectively. In this example, both attack patterns have the same external reference but the second has a slightly different yet still similar name.

[3]:

import stix2
from stix2 import AttackPattern, Environment, MemoryStore

env = Environment(store=MemoryStore())

ap1 = AttackPattern(
    name="Phishing",
    external_references=[
        {
            "url": "https://example2",
            "source_name": "some-source2",
        },
    ],
)
ap2 = AttackPattern(
    name="Spear phishing",
    external_references=[
        {
            "url": "https://example2",
            "source_name": "some-source2",
        },
    ],
)
print(env.object_similarity(ap1, ap2))
print(env.object_equivalence(ap1, ap2, threshold=90))

[3]:

91.81818181818181

[3]:

True


## Campaign Example¶

For Campaigns, the only properties that contribute to semantic similarity are name and aliases, with weights of 60 and 40, respectively. In this example, the two campaigns have different names and no aliases; the nonzero result comes from the partial string comparison applied to the name values.

[4]:

from stix2 import Campaign

c1 = Campaign(
    name="Someone Attacks Somebody",
)

c2 = Campaign(
    name="Another Campaign",
)
print(env.object_similarity(c1, c2))
print(env.object_equivalence(c1, c2, threshold=90))

[4]:

30.0

[4]:

False


## Identity Example¶

For Identities, the only properties that contribute to semantic similarity are name, identity_class, and sectors, with weights of 60, 20, and 20, respectively. In this example, the two identities have identical name and identity_class values, but neither specifies sectors; the algorithm only compares properties that are actually present on the objects. Also note that they have completely different description properties, but because description is not one of the properties considered for semantic similarity, this difference has no effect on the result.

[5]:

from stix2 import Identity

id1 = Identity(
    name="John Smith",
    identity_class="individual",
    description="Just some guy",
)
id2 = Identity(
    name="John Smith",
    identity_class="individual",
    description="A person",
)
print(env.object_similarity(id1, id2))
print(env.object_equivalence(id1, id2, threshold=90))

[5]:

100.0

[5]:

True


## Indicator Example¶

For Indicators, the only properties that contribute to semantic similarity are indicator_types, pattern, and valid_from, with weights of 15, 80, and 5, respectively. In this example, the two indicators have patterns with different hashes but the same indicator_types and valid_from values. For patterns, the algorithm currently only checks whether they are identical.

[6]:

from stix2.v21 import Indicator

ind1 = Indicator(
    indicator_types=['malicious-activity'],
    pattern_type="stix",
    pattern="[file:hashes.MD5 = 'd41d8cd98f00b204e9800998ecf8427e']",
    valid_from="2017-01-01T12:34:56Z",
)
ind2 = Indicator(
    indicator_types=['malicious-activity'],
    pattern_type="stix",
    pattern="[file:hashes.MD5 = '79054025255fb1a26e4bc422aef54eb4']",
    valid_from="2017-01-01T12:34:56Z",
)
print(env.object_similarity(ind1, ind2))
print(env.object_equivalence(ind1, ind2, threshold=90))

[6]:

20.0

[6]:

False


If the patterns were identical, the result would have been 100.
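The 20.0 result above can be reproduced by hand. As a sketch (assuming exact-match scoring of 1 or 0 for each property, which is how patterns are currently compared):

```python
def pattern_identical(pattern1, pattern2):
    # Patterns currently contribute 1 only when exactly equal, else 0
    return 1 if pattern1 == pattern2 else 0

p1 = "[file:hashes.MD5 = 'd41d8cd98f00b204e9800998ecf8427e']"
p2 = "[file:hashes.MD5 = '79054025255fb1a26e4bc422aef54eb4']"

# indicator_types match (1), patterns differ (0), valid_from matches (1)
score = 15 * 1 + 80 * pattern_identical(p1, p2) + 5 * 1
print(score)  # 20
```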

## Location Example¶

For Locations, the only properties that contribute to semantic similarity are longitude/latitude, region, and country, with weights of 34, 33, and 33, respectively. In this example, the two locations are Washington, D.C. and New York City. The algorithm computes the distance between two locations using the haversine formula and uses that to influence similarity.

[7]:

from stix2 import Location

loc1 = Location(
    latitude=38.889,
    longitude=-77.023,
)
loc2 = Location(
    latitude=40.713,
    longitude=-74.006,
)
print(env.object_similarity(loc1, loc2))
print(env.object_equivalence(loc1, loc2, threshold=90))

[7]:

67.20663955882583

[7]:

False
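The distance computation can be sketched with the standard haversine formula. The 1000 km normalization threshold below is an assumption based on the library's defaults; verify against your stix2 version:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometers
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Washington, D.C. to New York City
distance = haversine_km(38.889, -77.023, 40.713, -74.006)
print(round(distance))  # roughly 328 km

# The distance is then scaled against a threshold (1000 km assumed here)
print(100 * (1 - distance / 1000))  # close to the 67.2 result above
```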


## Malware Example¶

For Malware, the only properties that contribute to semantic similarity are malware_types and name, with weights of 20 and 80, respectively. In this example, the two malware objects only differ in the strings in their malware_types lists. For lists, the algorithm bases its calculations on the intersection of the two lists. An empty intersection will result in a 0, and a complete intersection will result in a 1 for that property.

[8]:

from stix2 import Malware

MALWARE_ID = "malware--9c4638ec-f1de-4ddb-abf4-1b760417654e"

mal1 = Malware(
    id=MALWARE_ID,
    malware_types=['ransomware'],
    name="Cryptolocker",
    is_family=False,
)
mal2 = Malware(
    id=MALWARE_ID,
    malware_types=['ransomware', 'dropper'],
    name="Cryptolocker",
    is_family=False,
)
print(env.object_similarity(mal1, mal2))
print(env.object_equivalence(mal1, mal2, threshold=90))

[8]:

90.0

[8]:

True
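A minimal sketch of intersection-based list scoring consistent with the result above (assuming the intersection size is normalized by the larger list; check stix2.equivalence.object.partial_list_based for the library's exact logic):

```python
def list_score(l1, l2):
    s1, s2 = set(l1), set(l2)
    if not s1 or not s2:
        return 0.0
    # Empty intersection -> 0, complete intersection -> 1
    return len(s1 & s2) / max(len(s1), len(s2))

print(list_score(['ransomware'], ['ransomware', 'dropper']))  # 0.5
# 20 * 0.5 (malware_types) + 80 * 1.0 (name) = 90.0, matching the result above
```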


## Threat Actor Example¶

For Threat Actors, the only properties that contribute to semantic similarity are threat_actor_types, name, and aliases, with weights of 20, 60, and 20, respectively. In this example, the two threat actors have the same id properties but everything else is different. Since the id property does not factor into semantic similarity, the result is not very high. The result is not zero because of the “Token Sort Ratio” algorithm used to compare the name property.
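"Token Sort Ratio" sorts the whitespace-separated tokens of each string before computing a similarity ratio, so word order does not matter. An illustrative approximation using Python's difflib (the library relies on a dedicated fuzzy-matching package, so exact values will differ):

```python
from difflib import SequenceMatcher

def token_sort_ratio(s1, s2):
    # Sort tokens so word order does not affect the comparison
    t1 = " ".join(sorted(s1.lower().split()))
    t2 = " ".join(sorted(s2.lower().split()))
    return SequenceMatcher(None, t1, t2).ratio()

print(token_sort_ratio("Evil Org", "Org Evil"))    # 1.0 -- word order ignored
print(token_sort_ratio("Evil Org", "James Bond"))  # small but nonzero
```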

[9]:

from stix2 import ThreatActor

THREAT_ACTOR_ID = "threat-actor--8e2e2d2b-17d4-4cbf-938f-98ee46b3cd3f"

ta1 = ThreatActor(
    id=THREAT_ACTOR_ID,
    threat_actor_types=["crime-syndicate"],
    name="Evil Org",
    aliases=["super-evil"],
)
ta2 = ThreatActor(
    id=THREAT_ACTOR_ID,
    threat_actor_types=["spy"],
    name="James Bond",
    aliases=["007"],
)
print(env.object_similarity(ta1, ta2))
print(env.object_equivalence(ta1, ta2, threshold=90))

[9]:

6.66666666666667

[9]:

False


## Tool Example¶

For Tools, the only properties that contribute to semantic similarity are tool_types and name, with weights of 20 and 80, respectively. In this example, the two tools have the same values for properties that contribute to semantic similarity but one has an additional, non-contributing property.

[10]:

from stix2 import Tool

t1 = Tool(
    tool_types=["remote-access"],
    name="VNC",
)
t2 = Tool(
    tool_types=["remote-access"],
    name="VNC",
    description="This is a tool",
)
print(env.object_similarity(t1, t2))
print(env.object_equivalence(t1, t2, threshold=90))

[10]:

100.0

[10]:

True


## Vulnerability Example¶

For Vulnerabilities, the only properties that contribute to semantic similarity are name and external_references, with weights of 30 and 70, respectively. In this example, the two vulnerabilities have the same name, but one also has an external reference. The algorithm does not take into account any contributing properties that are not present on both objects.

[11]:

from stix2 import Vulnerability

vuln1 = Vulnerability(
    name="Heartbleed",
    external_references=[
        {
            "url": "https://example",
            "source_name": "some-source",
        },
    ],
)
vuln2 = Vulnerability(
    name="Heartbleed",
)
print(env.object_similarity(vuln1, vuln2))
print(env.object_equivalence(vuln1, vuln2, threshold=90))

[11]:

100.0

[11]:

True


## Other Examples¶

Comparing objects of different types will result in a ValueError.

[12]:

print(env.object_similarity(ind1, vuln1))

ValueError: The objects to compare must be of the same type!



Some object types do not have a defined method for calculating semantic similarity and by default will give a warning and a result of zero.

[13]:

from stix2 import Report

r1 = Report(
    report_types=["campaign"],
    published="2016-04-06T20:03:00.000Z",
    object_refs=["indicator--a740531e-63ff-4e49-a9e1-a0a3eed0e3e7"],
)
r2 = Report(
    report_types=["campaign"],
    published="2016-04-06T20:03:00.000Z",
    object_refs=["indicator--a740531e-63ff-4e49-a9e1-a0a3eed0e3e7"],
)
print(env.object_similarity(r1, r2))

'report' type has no 'weights' dict specified & thus no object similarity method to call!

[13]:

0


By default, comparing objects of different spec versions will result in a ValueError.

[14]:

from stix2.v20 import Identity as Identity20

id20 = Identity20(
    name="John Smith",
    identity_class="individual",
)
print(env.object_similarity(id2, id20))

ValueError: The objects to compare must be of the same spec version!



You can optionally allow comparing across spec versions by providing a configuration dictionary that sets ignore_spec_version, as in the next example:

[15]:

from stix2.v20 import Identity as Identity20

id20 = Identity20(
    name="John Smith",
    identity_class="individual",
)
print(env.object_similarity(id2, id20, **{"_internal": {"ignore_spec_version": True}}))

[15]:

100.0


## Detailed Results¶

If your logging level is set to DEBUG or higher, the function will log more detailed results, showing the similarity score and weighting for each checked property and how the final result was arrived at.

[16]:

import logging
logging.basicConfig(format='%(message)s')
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

ta3 = ThreatActor(
    threat_actor_types=["crime-syndicate"],
    name="Evil Org",
    aliases=["super-evil"],
)
ta4 = ThreatActor(
    threat_actor_types=["spy"],
    name="James Bond",
    aliases=["007"],
)
print(env.object_similarity(ta3, ta4))

logger.setLevel(logging.ERROR)

Starting object similarity process between: 'threat-actor--54040762-8540-4c37-8f6d-6ebcc20da2b5' and 'threat-actor--b2a6f234-5594-42d9-9cdb-f4b82bc575a6'
--              partial_string_based 'Evil Org' 'James Bond'    result: '11.111111111111114'
'name' check -- weight: 60, contributing score: 6.666666666666669
--              partial_list_based '['crime-syndicate']' '['spy']'      result: '0.0'
'threat_actor_types' check -- weight: 20, contributing score: 0.0
--              partial_list_based '['super-evil']' '['007']'   result: '0.0'
'aliases' check -- weight: 20, contributing score: 0.0
Matching Score: 6.666666666666669, Sum of Weights: 100.0

[16]:

6.66666666666667


You can also retrieve the detailed results in a dictionary so that they can be accessed and used programmatically. The object_similarity() function takes an optional third argument, called prop_scores. This argument should be a dictionary into which the detailed debugging information will be stored.

Using prop_scores is straightforward: pass a dictionary to object_similarity(), and after the function has finished executing, the dictionary will contain the various scores. Specifically, it will have the overall matching_score and sum_weights, along with the weight and contributing score for each of the semantic similarity contributing properties.

For example:

[17]:

ta5 = ThreatActor(
    threat_actor_types=["crime-syndicate", "spy"],
    name="Evil Org",
    aliases=["super-evil"],
)
ta6 = ThreatActor(
    threat_actor_types=["spy"],
    name="James Bond",
    aliases=["007"],
)

prop_scores = {}
print("Semantic equivalence score using standard weights: %s" % (env.object_similarity(ta5, ta6, prop_scores)))
print(prop_scores)
for prop in prop_scores:
    if prop not in ["matching_score", "sum_weights"]:
        print("Prop: %s | weight: %s | contributing_score: %s" % (prop, prop_scores[prop]['weight'], prop_scores[prop]['contributing_score']))
    else:
        print("%s: %s" % (prop, prop_scores[prop]))

[17]:

Semantic equivalence score using standard weights: 16.666666666666668

[17]:

{'name': {'weight': 60, 'contributing_score': 6.666666666666669}, 'threat_actor_types': {'weight': 20, 'contributing_score': 10.0}, 'aliases': {'weight': 20, 'contributing_score': 0.0}, 'matching_score': 16.666666666666668, 'sum_weights': 100.0}

[17]:

Prop: name | weight: 60 | contributing_score: 6.666666666666669

[17]:

Prop: threat_actor_types | weight: 20 | contributing_score: 10.0

[17]:

Prop: aliases | weight: 20 | contributing_score: 0.0

[17]:

matching_score: 16.666666666666668

[17]:

sum_weights: 100.0


## Custom Comparisons¶

If you wish, you can customize semantic comparisons. Specifically, you can do any of three things:

- Provide custom weights for each semantic equivalence contributing property
- Provide custom comparison functions for individual semantic equivalence contributing properties
- Provide a custom semantic equivalence function for a specific object type

### The weights dictionary¶

In order to do any of the aforementioned (optional) custom comparisons, you will need to provide a weights dictionary as the last parameter to the object_similarity() method call.

The weights dictionary should contain both the weight and the comparison function for each property. You may use the default weights and functions, or provide your own.

#### Existing comparison functions¶

For reference, comparison functions such as exact_match, partial_string_based, partial_list_based, partial_timestamp_based, and partial_external_reference_based are already built into the codebase (found in stix2/equivalence/object).

For instance, if we wanted to compare two of the ThreatActors from before, but use our own weights, then we could do the following:

[18]:

weights = {
    "threat-actor": {                                                            # You must specify the object type
        "name": (30, stix2.equivalence.object.partial_string_based),             # Each property's value must be a tuple
        "threat_actor_types": (50, stix2.equivalence.object.partial_list_based), # The 1st component must be the weight
        "aliases": (20, stix2.equivalence.object.partial_list_based)             # The 2nd component must be the comparison function
    }
}

print("Using standard weights: %s" % (env.object_similarity(ta5, ta6)))
print("Using custom weights: %s" % (env.object_similarity(ta5, ta6, **weights)))

[18]:

Using standard weights: 16.666666666666668

[18]:

Using custom weights: 28.33333333333334


Notice how there is a difference in the semantic similarity scores, simply due to the fact that custom weights were used.

### Custom Weights With prop_scores¶

If we want to use both prop_scores and weights, then they would be the third and fourth arguments, respectively, to object_similarity():

[19]:

prop_scores = {}
weights = {
    "threat-actor": {
        "name": (45, stix2.equivalence.object.partial_string_based),
        "threat_actor_types": (10, stix2.equivalence.object.partial_list_based),
        "aliases": (45, stix2.equivalence.object.partial_list_based),
    },
}
env.object_similarity(ta5, ta6, prop_scores, **weights)
print(prop_scores)

[19]:

10.000000000000002

[19]:

{'name': {'weight': 45, 'contributing_score': 5.000000000000002}, 'threat_actor_types': {'weight': 10, 'contributing_score': 5.0}, 'aliases': {'weight': 45, 'contributing_score': 0.0}, 'matching_score': 10.000000000000002, 'sum_weights': 100.0}


### Custom Semantic Similarity Functions¶

You can also write and use your own semantic equivalence functions. In the examples above, you could replace the built-in comparison functions for any or all properties. For example, here we use a custom string comparison function just for the 'name' property:

[20]:

def my_string_compare(p1, p2):
    if p1 == p2:
        return 1
    else:
        return 0

weights = {
    "threat-actor": {
        "name": (45, my_string_compare),
        "threat_actor_types": (10, stix2.equivalence.object.partial_list_based),
        "aliases": (45, stix2.equivalence.object.partial_list_based),
    },
}
print("Using custom string comparison: %s" % (env.object_similarity(ta5, ta6, **weights)))

[20]:

Using custom string comparison: 5.0


You can also customize the comparison of an entire object type instead of just how each property is compared. To do this, provide a weights dictionary to object_similarity() and in this dictionary include a key of "method" whose value is your custom semantic similarity function for that object type.

If you provide your own custom semantic similarity method, you must also provide the weights for each of the properties (unless, for some reason, your custom method is weights-agnostic). However, since you are writing the custom method, your weights need not necessarily follow the tuple format specified in the above code box.

Note also that if you want detailed results with prop_scores you will need to implement that in your custom function, but you are not required to do so.

In this next example we use our own custom semantic similarity function to compare two ThreatActors, and do not support prop_scores.

[21]:

def custom_semantic_similarity_method(obj1, obj2, **weights):
    sum_weights = 0
    matching_score = 0
    # Compare name
    w = weights['name']
    sum_weights += w
    contributing_score = w * stix2.equivalence.object.partial_string_based(obj1['name'], obj2['name'])
    matching_score += contributing_score
    # Compare aliases only for spies
    if 'spy' in obj1['threat_actor_types'] + obj2['threat_actor_types']:
        w = weights['aliases']
        sum_weights += w
        contributing_score = w * stix2.equivalence.object.partial_list_based(obj1['aliases'], obj2['aliases'])
        matching_score += contributing_score

    return matching_score, sum_weights

weights = {
    "threat-actor": {
        "name": 60,
        "aliases": 40,
        "method": custom_semantic_similarity_method
    }
}

print("Using standard weights: %s" % (env.object_similarity(ta5, ta6)))
print("Using a custom method: %s" % (env.object_similarity(ta5, ta6, **weights)))

[21]:

Using standard weights: 16.666666666666668

[21]:

Using a custom method: 6.66666666666667


You can also write custom functions for comparing objects of your own custom types. Like in the previous example, you can use the built-in functions listed above to help with this, or write your own. In the following example we define semantic similarity for our new x-foobar object type. Notice that this time we have included support for detailed results with prop_scores.

[22]:

def _x_foobar_checks(obj1, obj2, prop_scores, **weights):
    matching_score = 0.0
    sum_weights = 0.0
    if stix2.equivalence.object.check_property_present("name", obj1, obj2):
        w = weights["name"]
        sum_weights += w
        contributing_score = w * stix2.equivalence.object.partial_string_based(obj1["name"], obj2["name"])
        matching_score += contributing_score
        prop_scores["name"] = (w, contributing_score)
    if stix2.equivalence.object.check_property_present("color", obj1, obj2):
        w = weights["color"]
        sum_weights += w
        contributing_score = w * stix2.equivalence.object.partial_string_based(obj1["color"], obj2["color"])
        matching_score += contributing_score
        prop_scores["color"] = (w, contributing_score)

    prop_scores["matching_score"] = matching_score
    prop_scores["sum_weights"] = sum_weights
    return matching_score, sum_weights

prop_scores = {}
weights = {
    "x-foobar": {
        "name": 60,
        "color": 40,
        "method": _x_foobar_checks,
    },
    "_internal": {
        "ignore_spec_version": False,
    },
}
foo1 = {
    "type": "x-foobar",
    "id": "x-foobar--0c7b5b88-8ff7-4a4d-aa9d-feb398cd0061",
    "name": "Zot",
    "color": "red",
}
foo2 = {
    "type": "x-foobar",
    "id": "x-foobar--0c7b5b88-8ff7-4a4d-aa9d-feb398cd0061",
    "name": "Zot",
    "color": "blue",
}
print(env.object_similarity(foo1, foo2, prop_scores, **weights))
print(prop_scores)

[22]:

71.42857142857143

[22]:

{'name': (60, 60.0), 'color': (40, 11.428571428571427), 'matching_score': 71.42857142857143, 'sum_weights': 100.0}


# Checking Graph Similarity and Equivalence¶

The next logical step for checking if two individual objects are similar or equivalent is to check all relevant neighbors and related objects for the best matches. It can help you determine if you have seen similar intelligence in the past and builds upon the foundation of the local object similarity comparisons described above. The Environment has two functions with similar requirements for graph-based checks.

For each supported object type, the graph_similarity() function compares each object against all objects of the same type in the other graph, keeping the pairing that maximizes the score obtained from the property comparisons. It requires two DataStore instances representing the two graphs to be compared, which allows the algorithm to make additional checks such as de-referencing related objects. Internally, it calls object_similarity().
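The best-match pairing can be sketched as follows: each object is paired with the highest-scoring object of the same type from the other graph (in both directions), and the final score averages those pair values. This is a simplification of the library's algorithm, using a stand-in similarity function rather than object_similarity():

```python
def graph_score(graph1, graph2, object_similarity):
    # Pair each object with its best-scoring same-type counterpart in the
    # other graph, in both directions; average all pair values.
    pair_values = []
    for lhs, rhs in ((graph1, graph2), (graph2, graph1)):
        for obj in lhs:
            candidates = [o for o in rhs if o["type"] == obj["type"]]
            if candidates:
                pair_values.append(max(object_similarity(obj, o) for o in candidates))
    return sum(pair_values) / len(pair_values) if pair_values else 0.0

# Toy stand-in similarity: score on name equality only
toy_sim = lambda a, b: 100.0 if a.get("name") == b.get("name") else 0.0
demo_g1 = [{"type": "malware", "name": "Cryptolocker"}]
demo_g2 = [{"type": "malware", "name": "Cryptolocker"}, {"type": "malware", "name": "Other"}]
print(graph_score(demo_g1, demo_g2, toy_sim))  # about 66.7: one object has no good match
```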

Some limitations exist that are important to understand when analyzing the results of this algorithm:

- Only STIX types with weights defined will be checked. This could result in a maximal sub-graph and score that is smaller than expected. We recommend looking at the prop_scores or logging output for details and to understand how the result was calculated.
- Failure to de-reference an object for checks will result in a 0 for that property. This applies to *_ref or *_refs properties.
- Keep reasonable expectations in terms of how long it takes to run, especially with DataStores that require network communication or when the number of items in the graphs is high. You can also tune how much depth the algorithm should check in de-reference calls; this can affect your running time.

Please note that you will need to install the TAXII dependencies in addition to the semantic requirements if you plan on using the TAXII DataStore classes. You can do this using:

pip install stix2[taxii]

By default, the algorithm uses the default weights defined for object_similarity() in combination with those defined for graph_similarity().

[23]:

import json

from stix2 import Relationship

g1 = [
    AttackPattern(
        name="Phishing",
        external_references=[
            {
                "url": "https://example2",
                "source_name": "some-source2",
            },
        ],
    ),
    Campaign(name="Someone Attacks Somebody"),
    Identity(
        name="John Smith",
        identity_class="individual",
        description="Just some guy",
    ),
    Indicator(
        indicator_types=['malicious-activity'],
        pattern_type="stix",
        pattern="[file:hashes.MD5 = 'd41d8cd98f00b204e9800998ecf8427e']",
        valid_from="2017-01-01T12:34:56Z",
    ),
    Malware(
        id=MALWARE_ID,
        malware_types=['ransomware'],
        name="Cryptolocker",
        is_family=False,
    ),
    ThreatActor(
        id=THREAT_ACTOR_ID,
        threat_actor_types=["crime-syndicate"],
        name="Evil Org",
        aliases=["super-evil"],
    ),
    Relationship(
        source_ref=THREAT_ACTOR_ID,
        target_ref=MALWARE_ID,
        relationship_type="uses",
    ),
    Report(
        report_types=["campaign"],
        published="2016-04-06T20:03:00.000Z",
        object_refs=[THREAT_ACTOR_ID, MALWARE_ID],
    ),
]

g2 = [
    AttackPattern(
        name="Spear phishing",
        external_references=[
            {
                "url": "https://example2",
                "source_name": "some-source2",
            },
        ],
    ),
    Campaign(name="Another Campaign"),
    Identity(
        name="John Smith",
        identity_class="individual",
        description="A person",
    ),
    Indicator(
        indicator_types=['malicious-activity'],
        pattern_type="stix",
        pattern="[file:hashes.MD5 = '79054025255fb1a26e4bc422aef54eb4']",
        valid_from="2017-01-01T12:34:56Z",
    ),
    Malware(
        id=MALWARE_ID,
        malware_types=['ransomware', 'dropper'],
        name="Cryptolocker",
        is_family=False,
    ),
    ThreatActor(
        id=THREAT_ACTOR_ID,
        threat_actor_types=["spy"],
        name="James Bond",
        aliases=["007"],
    ),
    Relationship(
        source_ref=THREAT_ACTOR_ID,
        target_ref=MALWARE_ID,
        relationship_type="uses",
    ),
    Report(
        report_types=["campaign"],
        published="2016-04-06T20:03:00.000Z",
        object_refs=[THREAT_ACTOR_ID, MALWARE_ID],
    ),
]

memstore1 = MemoryStore(g1)
memstore2 = MemoryStore(g2)
prop_scores = {}

similarity_result = env.graph_similarity(memstore1, memstore2, prop_scores)
equivalence_result = env.graph_equivalence(memstore1, memstore2, threshold=60)

print(similarity_result)
print(equivalence_result)
print(json.dumps(prop_scores, indent=4, sort_keys=False))

[23]:

59.68831168831168

[23]:

False

[23]:

{
"matching_score": 835.6363636363635,
"len_pairs": 14,
"summary": {
"threat-actor--8e2e2d2b-17d4-4cbf-938f-98ee46b3cd3f": {
"lhs": "threat-actor--8e2e2d2b-17d4-4cbf-938f-98ee46b3cd3f",
"rhs": "threat-actor--8e2e2d2b-17d4-4cbf-938f-98ee46b3cd3f",
"prop_score": {
"name": {
"weight": 60,
"contributing_score": 6.666666666666669
},
"threat_actor_types": {
"weight": 20,
"contributing_score": 0.0
},
"aliases": {
"weight": 20,
"contributing_score": 0.0
},
"matching_score": 6.666666666666669,
"sum_weights": 100.0
},
"value": 6.66666666666667
},
"campaign--02eb6d99-15d3-4534-99ce-d5f946ca52fe": {
"lhs": "campaign--02eb6d99-15d3-4534-99ce-d5f946ca52fe",
"rhs": "campaign--d7fecca0-d020-43ae-977d-8d226df84c36",
"prop_score": {
"name": {
"weight": 60,
"contributing_score": 18.0
},
"matching_score": 18.0,
"sum_weights": 60.0
},
"value": 30.0
},
"campaign--d7fecca0-d020-43ae-977d-8d226df84c36": {
"lhs": "campaign--d7fecca0-d020-43ae-977d-8d226df84c36",
"rhs": "campaign--02eb6d99-15d3-4534-99ce-d5f946ca52fe",
"prop_score": {
"name": {
"weight": 60,
"contributing_score": 18.0
},
"matching_score": 18.0,
"sum_weights": 60.0
},
"value": 30.0
},
"indicator--d17a1296-d6c9-4119-9fbf-433c7f1f11af": {
"lhs": "indicator--d17a1296-d6c9-4119-9fbf-433c7f1f11af",
"rhs": "indicator--d2e7d0b6-4229-447d-9c44-2b0f7d93797b",
"prop_score": {
"indicator_types": {
"weight": 15,
"contributing_score": 15.0
},
"pattern": {
"weight": 80,
"contributing_score": 0
},
"valid_from": {
"weight": 5,
"contributing_score": 5.0
},
"matching_score": 20.0,
"sum_weights": 100.0
},
"value": 20.0
},
"indicator--d2e7d0b6-4229-447d-9c44-2b0f7d93797b": {
"lhs": "indicator--d2e7d0b6-4229-447d-9c44-2b0f7d93797b",
"rhs": "indicator--d17a1296-d6c9-4119-9fbf-433c7f1f11af",
"prop_score": {
"indicator_types": {
"weight": 15,
"contributing_score": 15.0
},
"pattern": {
"weight": 80,
"contributing_score": 0
},
"valid_from": {
"weight": 5,
"contributing_score": 5.0
},
"matching_score": 20.0,
"sum_weights": 100.0
},
"value": 20.0
},
"relationship--b399060e-0cdb-4e41-a30e-5894ae3627e8": {
"lhs": "relationship--b399060e-0cdb-4e41-a30e-5894ae3627e8",
"rhs": "relationship--b97e59e9-5e0d-47ef-a3f9-6a6e4fcefaab",
"prop_score": {
"relationship_type": {
"weight": 20,
"contributing_score": 20.0
},
"source_ref": {
"weight": 40,
"contributing_score": 2.666666666666668
},
"target_ref": {
"weight": 40,
"contributing_score": 36.0
},
"matching_score": 58.66666666666667,
"sum_weights": 100.0
},
"value": 58.666666666666664
},
"relationship--b97e59e9-5e0d-47ef-a3f9-6a6e4fcefaab": {
"lhs": "relationship--b97e59e9-5e0d-47ef-a3f9-6a6e4fcefaab",
"rhs": "relationship--b399060e-0cdb-4e41-a30e-5894ae3627e8",
"prop_score": {
"relationship_type": {
"weight": 20,
"contributing_score": 20.0
},
"source_ref": {
"weight": 40,
"contributing_score": 2.666666666666668
},
"target_ref": {
"weight": 40,
"contributing_score": 36.0
},
"matching_score": 58.66666666666667,
"sum_weights": 100.0
},
"value": 58.666666666666664
},
"report--87a26bd6-2870-44de-980f-e4cc6b63e1d5": {
"lhs": "report--87a26bd6-2870-44de-980f-e4cc6b63e1d5",
"rhs": "report--a71101c7-6064-4b8f-a9b4-ff49ff65e524",
"prop_score": {
"name": {
"weight": 30,
"contributing_score": 30.0
},
"published": {
"weight": 10,
"contributing_score": 10.0
},
"object_refs": {
"weight": 60,
"contributing_score": 29.0
},
"matching_score": 69.0,
"sum_weights": 100.0
},
"value": 69.0
},
"report--a71101c7-6064-4b8f-a9b4-ff49ff65e524": {
"lhs": "report--a71101c7-6064-4b8f-a9b4-ff49ff65e524",
"rhs": "report--87a26bd6-2870-44de-980f-e4cc6b63e1d5",
"prop_score": {
"name": {
"weight": 30,
"contributing_score": 30.0
},
"published": {
"weight": 10,
"contributing_score": 10.0
},
"object_refs": {
"weight": 60,
"contributing_score": 29.0
},
"matching_score": 69.0,
"sum_weights": 100.0
},
"value": 69.0
},
"identity--…": {
"lhs": "identity--…",
"rhs": "identity--4d8b54e3-d584-47c6-858f-673fffa45e96",
"prop_score": {
"name": {
"weight": 60,
"contributing_score": 60.0
},
"identity_class": {
"weight": 20,
"contributing_score": 20.0
},
"matching_score": 80.0,
"sum_weights": 80.0
},
"value": 100.0
},
"identity--4d8b54e3-d584-47c6-858f-673fffa45e96": {
"lhs": "identity--4d8b54e3-d584-47c6-858f-673fffa45e96",
"rhs": "identity--…",
"prop_score": {
"name": {
"weight": 60,
"contributing_score": 60.0
},
"identity_class": {
"weight": 20,
"contributing_score": 20.0
},
"matching_score": 80.0,
"sum_weights": 80.0
},
"value": 100.0
},
"attack-pattern--57bc38b5-feda-4710-b613-441717c0062c": {
"lhs": "attack-pattern--57bc38b5-feda-4710-b613-441717c0062c",
"rhs": "attack-pattern--d9de40c6-a9a0-4e6f-ae59-d90a91e4f0e8",
"prop_score": {
"name": {
"weight": 30,
"contributing_score": 21.818181818181817
},
"external_references": {
"weight": 70,
"contributing_score": 70.0
},
"matching_score": 91.81818181818181,
"sum_weights": 100.0
},
"value": 91.81818181818181
},
"attack-pattern--d9de40c6-a9a0-4e6f-ae59-d90a91e4f0e8": {
"lhs": "attack-pattern--d9de40c6-a9a0-4e6f-ae59-d90a91e4f0e8",
"rhs": "attack-pattern--57bc38b5-feda-4710-b613-441717c0062c",
"prop_score": {
"name": {
"weight": 30,
"contributing_score": 21.818181818181817
},
"external_references": {
"weight": 70,
"contributing_score": 70.0
},
"matching_score": 91.81818181818181,
"sum_weights": 100.0
},
"value": 91.81818181818181
},
"malware--9c4638ec-f1de-4ddb-abf4-1b760417654e": {
"lhs": "malware--9c4638ec-f1de-4ddb-abf4-1b760417654e",
"rhs": "malware--9c4638ec-f1de-4ddb-abf4-1b760417654e",
"prop_score": {
"malware_types": {
"weight": 20,
"contributing_score": 10.0
},
"name": {
"weight": 80,
"contributing_score": 80.0
},
"matching_score": 90.0,
"sum_weights": 100.0
},
"value": 90.0
}
}
}


The example above uses the same objects found in previous examples to demonstrate graph similarity and equivalence. Under this approach, Grouping, Relationship, Report, and Sighting have default weights defined, allowing object de-referencing. The Report and Relationship objects show their *_refs and *_ref properties, respectively, being checked in the summary output. Analyzing the similarity output, we can observe that objects scored highly when checked individually, but when the rest of the graph is taken into account, discrepancies add up and produce a lower score.