Query Routing

Unlike other decentralised LLM systems that have been proposed or deployed, which require every network participant to compute and return results for each user query, this protocol routes each query selectively. To do so, it optimises for three parameters when providing inferences: uptime, speed and stake. In a distributed LLM network we approach the query routing problem with multiple goals:

  1. Inferences requested by users should meet a quality threshold (i.e. detailed, moderation-free and insightful)

  2. The system should be capable of handling a high volume of queries

  3. The network should be computationally efficient (not every node needs to run the same computation)

  4. All node operators should be appropriately incentivised to run nodes

  5. Node operators should compete with each other on the performance parameters: uptime, response time and amount staked

We utilise a simple model that monitors these global parameters for the network in the previous epoch and calculates new weights for them in the routing process. The process is structured as follows:

  1. Initial Weights: Set initial weights for each parameter, determining its relative importance in the probability of a node being assigned user queries.

  2. Observation and Comparison: At the end of each epoch, the network observes the values of performance parameters and then compares these values with the corresponding values from the previous epoch.

  3. Adjust Weights: If the median value for a parameter increases, we keep its weight unchanged or, in the case of stake concentration, decrease it to avoid centralisation risks. If the median value decreases, we increase its weight so that node operators are incentivised to improve their performance on that parameter and thereby receive a higher share of queries.

  4. Smoothing Mechanism:

  • To avoid abrupt and extreme changes in weights, we apply a moving-average smoothing technique and then normalise the weights so that they sum to 1 (see the sketch after this list).
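
The following is a minimal sketch of this per-epoch weight update. The `update_weights` helper name, the fixed adjustment step, the smoothing factor and the three tracked parameters are illustrative assumptions, not values specified by the protocol.

```python
# Hypothetical per-epoch weight update: raise the weight of any parameter whose
# network-wide median fell, hold the ones that improved (or lower the stake
# weight, to limit centralisation), then smooth with a moving average and
# renormalise so the weights sum to 1.

ADJUSTMENT = 0.1   # assumed step size for weight changes
SMOOTHING = 0.5    # assumed moving-average factor (new vs. previous weights)

def update_weights(prev_weights, prev_medians, curr_medians):
    """All arguments are dicts keyed by 'uptime', 'speed' and 'stake'."""
    raw = {}
    for param, weight in prev_weights.items():
        if curr_medians[param] < prev_medians[param]:
            # Median performance dropped: make the parameter more important so
            # operators are pushed to improve it.
            raw[param] = weight * (1 + ADJUSTMENT)
        elif param == "stake":
            # Stake concentration rose: reduce its weight to avoid
            # centralisation risk.
            raw[param] = weight * (1 - ADJUSTMENT)
        else:
            # Uptime/speed improved or held steady: keep the weight unchanged.
            raw[param] = weight

    # Moving-average smoothing against the previous epoch's weights.
    smoothed = {p: SMOOTHING * raw[p] + (1 - SMOOTHING) * prev_weights[p]
                for p in raw}

    # Normalise so the new weights sum to 1.
    total = sum(smoothed.values())
    return {p: v / total for p, v in smoothed.items()}
```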

Each node receives an allocation score based on epoch weights and normalized parameters for uptime, speed of inference and staked capital.

$$A_i = \alpha_{\text{uptime}} \cdot N_{i,\text{uptime}} + \alpha_{\text{speed}} \cdot N_{i,\text{speed}} + \alpha_{\text{stake}} \cdot N_{i,\text{stake}}$$

where the staked capital entering $N_{i,\text{stake}}$ is the clamped value $\max(S_f, \min(S_c, S_{\text{node}_i}))$.

$A_i$: allocation score for node $i$

$\alpha_{\text{param}}$: weight assigned to each parameter based on network performance in the previous epoch

$N_{i,\text{param}}$: normalised value of node $i$'s performance for the respective parameter

$S_{\text{node}_i}$: capital staked by node $i$

$S_f$: minimum stake required to be part of the node operator set

$S_c$: protocol-enforced cap on staked capital per node
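
As an illustration, the allocation score can be computed as below. The `allocation_score` helper, the min-max normalisation of uptime and speed, and the division of the clamped stake by the cap $S_c$ are assumptions made for this sketch; the protocol only requires that the $N_{i,\text{param}}$ values be comparable across nodes.

```python
# Hypothetical allocation-score computation for one node, given the epoch
# weights and the performance of the whole operator set.

def allocation_score(node, weights, fleet, s_f, s_c):
    """node and each entry of fleet are dicts with 'uptime', 'speed', 'stake'."""

    def normalise(value, values):
        lo, hi = min(values), max(values)
        return 0.0 if hi == lo else (value - lo) / (hi - lo)

    n_uptime = normalise(node["uptime"], [n["uptime"] for n in fleet])
    n_speed = normalise(node["speed"], [n["speed"] for n in fleet])

    # Stake is clamped to [S_f, S_c] before normalising, so capital above the
    # cap earns no additional routing probability.
    clamped_stake = max(s_f, min(s_c, node["stake"]))
    n_stake = clamped_stake / s_c

    return (weights["uptime"] * n_uptime
            + weights["speed"] * n_speed
            + weights["stake"] * n_stake)
```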

A single node operator might provide the best performance across all parameters. We want that operator to receive the largest number of queries, but we also want to incentivise every node operator to remain continually operational on the network. To balance these objectives, we use the allocation score to calculate each node's probability of being assigned the next query:

$$P_i(A_i) = \frac{A_i}{\sum_{j=1}^{n} A_j}$$
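
A sketch of how these probabilities could drive routing, assuming a hypothetical `route_query` helper and Python's standard `random.choices` for the weighted draw:

```python
import random

# Convert allocation scores into selection probabilities P_i = A_i / sum_j A_j
# and draw the node that serves the next query. random.choices normalises its
# weights internally, so passing the raw scores would be equivalent.

def route_query(node_ids, scores):
    total = sum(scores)
    probabilities = [a / total for a in scores]
    chosen = random.choices(node_ids, weights=probabilities, k=1)[0]
    return chosen, probabilities

# Example: the best-performing node gets the most queries on average, but every
# operator with a non-zero score still receives some share of traffic.
node, p = route_query(["node_a", "node_b", "node_c"], [0.9, 0.6, 0.3])
```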
