Query complexity calculation#
The Integration API calculates query complexity to prevent system abuse. The score is based on the number and type of requested fields and their associated costs. This complexity score helps enforce rate limits.
The algorithm is based on the one described in GitHub's GraphQL documentation, with some modifications.
Details#
- Scalars usually don't add any complexity score. Very few are marked as expensive, and then their score is `2`.
- Objects add `1` plus their fields' complexity. Only the selected fields and their dependencies are taken into the calculation.
- Lists multiply the cost of a list item (an object) by the `limit`. In case there's no `limit`, a default of `10` is assumed for the sake of complexity calculation.
- Connections work like lists, but the limit is specified in either the `first` or `last` argument. In their structure, there's a list of `edges`, which doesn't add complexity points, as the multiplication happens on the connection level.
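To illustrate the default limit rule, here is a minimal sketch of a list queried without any `limit` argument (reusing the `categories` field from the expensive scalar example further down):

{
  categories { # no limit argument, so 10 is assumed: 10 * (1 + 0)
    id # scalar, no cost
    name # scalar, no cost
  }
}

Each category object costs 1 point and contains only scalars, so the list scores 10 × 1 = 10 points.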
Simple example#
Let’s say we have a query like this:
{
markets(limit: 50) { # 50 * (1 + 110)
id
name
assignedToCountries(limit: 10) { # 10 * (1 + 10)
code
continent
name
states(limit: 10) { # 10 * 1
id
}
}
}
}
Up to 50 markets can be returned, each one with a list of countries (limited to 10), each potential country with up to 10 states.
Now, counting from the deepest level:
- `states` will return up to 10 objects, each worth 1 complexity point, because they have only scalars inside;
- `assignedToCountries` will return up to 10 countries, each country worth 1 point for being an object plus 10 points of child complexity, for a total of 10 × (1 + 10) = 110 points;
- `markets` will now be worth 1 + 110 per object, which multiplied by the limit of 50 makes a total of 50 × 111 = 5550 points.
More complex example#
{
productVariantConnection(last: 100) { # 100 * (1 + 1 + 114)
totalCount
pageInfo { # 1
hasPreviousPage
startCursor
}
edges { # no additional cost, it's part of the connection
cursor
node { # 1 + 3 + 110
id
...cost # 3
...attributes @include(if: true) # 110
}
}
}
}
fragment cost on ProductVariant {
unitCost { # 1 + 1 + 1
currency { # 1
code
}
formattedValue
converted { # 1
formattedValue
}
conversionDate
conversionRate
}
}
fragment attributes on ObjectWithAttributes {
attributes { # 10 * (1 + 10)
description
... on MappedAttribute {
id
}
elements { # 10
key
description
... on AttributeStringElement {
value
}
}
}
}
Now the calculations, starting from the fragments:
- The `cost` fragment consists of one object with another two objects nested in it, so its complexity score is 3.
- The `attributes` fragment returns a list with another list embedded in it. Since there are no limit arguments, a default value of 10 is taken on both levels, making a total of 10 × (1 + 10) = 110.
- Each `ProductVariant` object is now worth 1 + 3 + 110 = 114 complexity points, the `edges` list doesn't add complexity, and `pageInfo` adds only 1. Multiplied by the limit from the `last` argument (100), the total complexity becomes 100 × (1 + 1 + 114) = 11600.
Expensive scalar example#
{
categories(limit: 100) { # 100 * (1 + 2)
id
name
displaySortType # expensive = 2
}
}
Here `displaySortType` is marked as "expensive" because it is backed by a complex formula. Other than that, the calculation is straightforward: each category object gets a score of 1 + 2, and up to 100 categories can be returned, for a total of 100 × 3 = 300.
Individual query complexity limit#
The maximum complexity of any individual query to the Integration API is also limited; the limit is 100,000 complexity points.
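To see how quickly nesting eats into that budget, here is a hypothetical variation of the earlier markets query with larger limits:

{
  markets(limit: 250) { # 250 * (1 + 650) = 162750
    assignedToCountries(limit: 25) { # 25 * (1 + 25) = 650
      states(limit: 25) { # 25 * 1 = 25
        id
      }
    }
  }
}

At 162,750 points this query would be rejected; lowering any one of the limits brings it back under the threshold.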
Updates to query complexity: higher limits, smarter scoring#
To ensure your integrations continue to run smoothly and efficiently, we’re introducing several key improvements to our Integration API. We've refined our query scoring system to be more accurate and transparent, giving you more control over your API usage. Best of all, we’re increasing the maximum query complexity limit to provide a generous buffer for all your existing queries.
Summary#
All your queries will keep working, the limits are higher, and the scoring is smarter. No action required for existing queries.
A higher complexity limit: a safety buffer for your queries#
We're raising the maximum allowed complexity per query:
- Old limit: 100,000 complexity points
- New limit: 150,000 complexity points
Based on our analysis, this new limit provides enough breathing room to ensure that all current production queries will continue to run without error. This proactive increase is our way of making sure the improvements we've made to the scoring system won't negatively impact your existing integrations.
Smarter, more transparent query scoring#
We've refined our scoring algorithm to better reflect the true cost of each query. This leads to more predictable and consistent performance for everyone. Here’s a look at what’s changed:
Accurate scoring for interface lists#
Previously, the system underestimated how "expensive" queries were when fetching lists from interface types. This has been corrected so the query's complexity now accurately reflects the work required on the backend.
While this means some queries will now have a higher complexity score, the new, higher limit of 150,000 points prevents this change from causing any issues.
Refined stock field costs#
We’ve updated the complexity cost of the various stock quantity fields to more accurately reflect the true cost of computing them on the backend.
Updated scores:
| Field | Description | Complexity |
|---|---|---|
| `freeToAllocateQuantity` | Fast to compute, ideal for most use cases | 0 ✅ |
| `incomingQuantity`, `linkedIncomingQuantity`, `unlinkedIncomingQuantity`, `onDeliveryQuantity` | Depends on supplier module data | 3 |
| `physicalQuantity`, `allocatedQuantity`, `demandQuantity`, `availableNowQuantity`, `unshippedQuantity` | Depends on live order and shipment data | 4 |
| `availableQuantity` | Computed as physical + unlinked incoming | 7 ❗ |
See the documentation for details on how these fields are calculated.
What should you do about your stock queries?#
- For most integrations, you only need `freeToAllocateQuantity` – it's free in terms of complexity and reflects what's available to sell.
- Use higher-cost fields only when necessary, and be aware they contribute significantly more to your query budget. The sketch below shows how the costs add up.
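As a sketch of how the per-field costs from the table translate into a selection (the `Stock` type name here is an assumption – place the fields wherever they live in your schema):

fragment stockLevels on Stock { # "Stock" is a hypothetical type name
  freeToAllocateQuantity # 0 – free, covers most use cases
  incomingQuantity # 3 – supplier module data
  physicalQuantity # 4 – live order and shipment data
  availableQuantity # 7 – physical + unlinked incoming
}

Selecting only `freeToAllocateQuantity` keeps the stock portion of a query free; every extra field adds its cost to each object in the surrounding list.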
Lower costs for common fields#
To balance out the scoring changes, some frequently used fields are now significantly cheaper in terms of query complexity – in most cases, up to 10x less expensive. These include:
- `*.translations` and their fields
- `*.attributes`, their types, and elements
- `discountsApplied`, `appliedVouchers`, and `allocations` on order lines
- Tax breakdown and tax rules
- Size chart and measurement chart labels
- `customer.newsletterSubscriptions`, and more
These fields now have special-case handling in the complexity calculation, ensuring they don't penalize your queries above the actual cost of their evaluation.
Default limits for certain lists lowered#
Some fields (like `Shipment.lines` or `Attribute.elements`) don't allow you to pass a `limit` or `first` argument – because in practice, you always need all of the data.
- Previously, the assumed default was 10
- Now, the default is 5
This adjustment more accurately reflects real-world usage and reduces unnecessary penalties.
If your query includes many such fields, this change may noticeably reduce your overall complexity score.
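For instance, revisiting the `attributes` fragment from the more complex example above, and assuming the lowered default applies on both nesting levels:

fragment attributes on ObjectWithAttributes {
  attributes { # now 5 * (1 + 5) = 30, previously 10 * (1 + 10) = 110
    description
    elements { # now 5 elements assumed, previously 10
      key
    }
  }
}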
Smarter complexity for lists filtered by IDs#
When fetching a collection by a list of specific `ID`s, for example:

deliveryWindows(limit: 200, where: { id: [86, 99, 120, 99] }) { ... }
We now calculate complexity based on the actual number of unique items requested (3), not the declared limit (200). This ensures you’re not penalized for high limits if you're only fetching a few known items.
Passing a `limit` matching the number of unique identifiers was previously suggested as a simple optimization, but now it's no longer necessary to handle this on the client side.
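For the query above, the scoring works out as follows (assuming the selection contains only scalars):

{
  deliveryWindows(limit: 200, where: { id: [86, 99, 120, 99] }) {
    id # scalar, no cost
  }
}
# previously: 200 * 1 = 200 points (declared limit)
# now: 3 * 1 = 3 points (unique IDs: 86, 99, 120)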
See previous scores with a new header#
To help you see the impact of these changes for yourself, we've introduced a new header that allows you to compare the old and new complexity scores side by side.
Add the following header to your GraphQL request:
X-Previous-Complexity: true
When this header is present, the API response will include both the current and previous complexity values in the `extensions` section of the response:
"extensions": {
"complexity": 1550,
"complexity.previous": 6155
}
What does it mean?
- `complexity`: the current complexity score used to validate your query.
- `complexity.previous`: the previous complexity score, before the adjustments were applied. It's no longer used and is provided for your reference only.
Since the new complexity scores will land on your QA servers first, we recommend using this header to audit your most complex and critical queries between QA and production servers. It's a great way to see the impact of the changes firsthand and spot opportunities to optimize your queries, even if they aren't close to the new limit.
TL;DR#
| Change | Impact |
|---|---|
| Increased single query limit | From 100k to 150k; no production queries will fail |
| Refined stock costs | Use `freeToAllocateQuantity` where possible (cost = 0) |
| Lower costs for common fields | Many nested list fields are now up to 10x cheaper |
| Smarter list filtering | Filters by ID now scale with the number of unique IDs |
| Reduced default limits | Default assumption dropped from 10 to 5 |
| Accurate interface list scoring | Queries using interfaces (`Order`, `OrderLine`) now have a more accurate complexity score |
We are confident that these updates will lead to a more reliable and predictable API experience for all our partners. If you have any questions or need help optimizing your queries, please feel free to reach out to our support team.