
Query Threading

Query Threading is a powerful feature that enables individual queries to run across multiple threads in parallel. It’s especially effective for complex graph traversals or queries that touch a large number of elements, such as stepping off supernodes with hundreds of thousands of edges. By distributing the work, you can dramatically boost performance and reduce response times for heavy-duty operations.

Understanding Query Threading

Standard Query Execution

With the default configuration, each Gremlin query in AGS executes on a single thread from the thread pool controlled by the aerospike.graph-service.gremlinPool configuration parameter. This single-threaded execution can become a bottleneck for queries that touch hundreds of thousands of graph elements, for example when traversing supernodes (vertices with extremely high connectivity) or performing a deep multi-hop query with high connectivity at each hop.
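
The pool size is set in the AGS configuration file. A minimal sketch, assuming the same properties style used later on this page; the value shown is illustrative, not a recommendation:

aerospike.graph-service.gremlinPool: 16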

Threaded Query Execution

Query Threading is configured with the aerospike.graph.parallelize parameter, which enables individual queries to utilize multiple threads during execution, distributing the workload across available compute resources. This threaded execution model is particularly effective for I/O-bound operations in which the query spends significant time waiting for data retrieval from the Aerospike database.

When to Use Parallel Query Execution

Query Threading is particularly beneficial for:

  • High fan-out queries: queries that expand to touch many vertices from a single starting point.
  • Supernode processing: queries involving vertices with thousands to hundreds of thousands of edges.

Usage

To enable Query Threading for a specific query, include the parallelize parameter in your query as shown below.

g.with("aerospike.graph.parallelize", <NUM-THREADS>).<QUERY>

With Query Threading, each thread executes in batches. Batch size is determined by the aerospike.client.batch.read.size configuration option. Additionally, supernodes are pulled via an index in their own thread.

The default value of aerospike.client.batch.read.size is 5,000, so if a query steps onto 25,000 elements in one step, setting the aerospike.graph.parallelize parameter to 5 (25,000 ÷ 5,000 = 5 batches) maximizes parallelization.

Alternatively, you can set aerospike.client.batch.read.size to 2500 in your configuration file and set the aerospike.graph.parallelize parameter to 10 in your query. This setup doubles the number of processing threads and may cause the query to run faster, but also uses more compute power.

aerospike.client.batch.read.size: 2500

The Gremlin query:

g.with("aerospike.graph.parallelize", 10).V(id).out().out().out().toList()