
Recently, I developed a MATLAB-based simulation to evaluate my robot pathfinding algorithm. The robots operate on a network of unidirectional tracks, where each robot computes a single path from its starting point to a destination before executing the route. In my envisioned scenario, hundreds to thousands of robots (e.g., 1,000–2,000) operate concurrently. My goal is to design an algorithm that balances traffic efficiency with dynamic congestion mitigation, enabling robots to reroute around overcrowded segments without significantly degrading overall performance.

I used a modified Dijkstra’s algorithm as the foundation, integrating a pheromone-based mechanism inspired by ant colony optimization (ACO). Each edge in the graph starts with an initial pheromone value that is reduced each time a robot traverses it, and pheromone levels across all edges are periodically replenished, so edges recover from congestion over time. Both the decay and the recovery are bounded (the lower bound is fixed at 0), with adjustable parameters for the decay magnitude, recovery rate, and upper limit.
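For concreteness, here is a minimal sketch of the per-edge update rules; the names pheromone_current, pheromone_max, decay_amount, and recovery_rate are illustrative rather than my exact code:

% a robot traversing an edge consumes part of that edge's pheromone
edge.pheromone_current = max(edge.pheromone_current - decay_amount, 0);                    % lower bound fixed at 0
% periodic recovery step, applied to every edge at a fixed interval
edge.pheromone_current = min(edge.pheromone_current + recovery_rate, edge.pheromone_max);  % capped at the upper limit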

Building on this framework, I revised the cost calculation formula used when Dijkstra’s algorithm evaluates adjacent nodes; the pseudocode for the variants is shown below. To ensure consistency, I fixed all experimental conditions: the random seed, the number of robots generated (up to 300 per origin), the inter-robot dispatch interval, and the set of origin-destination pairs. A simulation terminates when all robots have reached their destinations, and two metrics are recorded: (1) the average travel time across all robots, and (2) the total simulation duration.

% d is the tentative distance from the start node to the current node u;
% edge_ij holds the pheromone state of the edge from u to j.

neighbors = map.graphGetAllNodeTo(u);
for n = 1:length(neighbors)
   j = neighbors{n};
   weight = map.edges.(u).(j);  % length of edge u->j
   ed = map.getEuclideanDistance(u, node_destination);  % straight-line distance from the current node to the destination (heuristic term)

   % Only one of the following four cost rules is active in a given experiment.
   % alg1: plain Dijkstra
   new_dist = d + weight;
   % alg2: the pheromone level divides the whole cost
   new_dist = (d + weight + ed) / (1 + edge_ij.pheromone_current);
   % alg3: lambda = 2 is an adjustment coefficient
   new_dist = d + ed + weight * (1 - edge_ij.pheromone_current / edge_ij.pheromone_max)^lambda;
   % alg4: lambda = 2 and alpha = 1.2 are adjustment coefficients
   new_dist = d + ed + weight * (1 + alpha * (1 - edge_ij.pheromone_current / edge_ij.pheromone_max)^lambda);

   % if a shorter path is found, record it
   index_node_to = find(strcmp(node_list, j));
   if new_dist < dist(index_node_to)
      if ~obj.hangerCheckFormCycle(u, j, node_list, prev)  % prevent loops
         dist(index_node_to) = new_dist;
         prev{index_node_to} = u;
         heap{end+1} = {new_dist, node_list{index_node_to}};
      end
   end
end
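
For reference, the two metrics are tallied roughly as follows once the last robot has arrived; the robots array and its time_dispatch / time_arrival fields are placeholders for my actual bookkeeping:

travel_times = [robots.time_arrival] - [robots.time_dispatch];  % per-robot travel time
avg_travel_time = mean(travel_times);                           % metric 1: average travel time
sim_duration = max([robots.time_arrival]);                      % metric 2: total simulation duration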

Each algorithm underwent 10 trials, with results averaged. Surprisingly, the outcomes contradicted my expectations. [Results table image omitted.]

From observing the simulation dynamics, I prefer alg3 and alg4 over alg2. Alg2 exhibits undesirable behavior: robots cluster on a single edge, then abruptly switch to another in waves. In contrast, alg3 and alg4 keep routing robots onto the shortest path until it approaches the congestion threshold, at which point one or two robots are diverted to alternate routes. Their average travel times closely match standard Dijkstra’s algorithm. However, I cannot explain why their simulation durations are significantly longer than alg2’s; intuitively, a shorter average travel time should correlate with faster completion of the overall traffic.

Additionally, each origin dispatches a robot every 3 seconds until it reaches the 300-robot limit, so the last robot from an origin should not leave until roughly 900 seconds into the run (300 robots × 3-second interval). This further perplexes me: how can alg2’s simulation duration come in under 900 seconds, when the last robot still has to travel to its destination after being dispatched?
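
One way to rule out a bookkeeping bug would be to compare the recorded dispatch times against the expected schedule; a minimal check, where robots_from_origin and its time_dispatch field are placeholders for my actual data structures:

% with a 3 s interval and 300 robots per origin, robot k should be dispatched at (k-1)*3 s,
% so the expected schedule for one origin is 0, 3, 6, ..., 897 s
expected_dispatch = (0:299) * 3;
actual_dispatch = sort([robots_from_origin.time_dispatch]);
max(abs(actual_dispatch - expected_dispatch))   % should be (close to) zero if the scheduler is correct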

I appreciate any insights or suggestions to resolve these anomalies.

  • The last paragraph shows there's something wrong with the timings. You claim the last robot isn't dispatched until 900s (and then has to travel to a destination), but sometimes the simulation is over by 852s. This is somehow incorrect. When was the last robot dispatched and how long did it take? First fix the time measuring, only then revisit comparisons. Commented Sep 15 at 12:52
  • "Each algorithm underwent 10 trials" This is not really a large number of trials. You need to measure the variance of your results, as well as the absolute value, in order to determine how many trials you need to get a meaningful result. Commented Sep 15 at 14:13
  • @ravenspoint thank you for your suggestions. However, I'm not very familiar with statistics and I'm unsure what range of variance would make the results of my experiment meaningful. Could you tell me how to roughly determine an acceptable range for the variance based on the content of my experiment? Commented Sep 16 at 2:28
  • @NeilButcher It really confuses me: I watched an entire simulation run in detail and found that under alg2 the dispatch limit is reached at around 650–700 seconds, whereas under alg3 or alg4 it often takes until 800–870 seconds. Based on my observations, alg2’s dispatch interval even shortens during periods of high network traffic. So this appears to be more of a program issue than an algorithm issue? Commented Sep 16 at 3:19
  • This is not the best place for an introductory course in statistics. I only have room here to offer a practical suggestion: repeat your experiment with runs of 100 trials and see if the "weird results" vanish or become significantly smaller. If they do, then you will know that they are caused by large random fluctuations in the results of individual trials. ( While they are running, get started on a statistics textbook, ideally one with an emphasis on experimental design ) Commented Sep 16 at 14:38
