This is not homework or anything; it's just something I want to work out so I have an upper bound on the performance of an algorithm that I can compare against.
Given an undirected graph G with m vertices, where the vertices form a regular square lattice grid: internal vertices have degree 4, edge vertices 3, and corner vertices 2. Think of an array of pixels.
The cover time is the time (number of steps) required for some system to visit every single vertex.
If we perform a random walk, such that at each vertex there is a uniform probability of moving to any of the neighboring vertices, then the cover time is given by m*H_m, where m is the number of vertices and H_m is the mth harmonic number (i.e. 1/1 + 1/2 + 1/3 + ... + 1/m). This is often stated as the coupon collector's problem (e.g. if you were collecting Pokemon cards, how many packets would you have to buy to collect all the cards, assuming a uniform distribution?). That argument applies to a complete graph, though, where every vertex is adjacent to every other; my grid has degree at most 4, so I'm not sure the formula carries over.
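I'm not a graph theorist, but a quick Monte Carlo sketch might at least show whether m*H_m is in the right ballpark for the grid. This is just my own throwaway script, not taken from anywhere; the function name, the grid side of 20, and the trial count are arbitrary choices for illustration:

```python
import random

def cover_time_single(side, trials=100):
    """Average number of steps for one uniform random walk to visit
    every vertex of a side x side grid (random starting vertex)."""
    m = side * side
    total = 0
    for _ in range(trials):
        x, y = random.randrange(side), random.randrange(side)
        visited = {(x, y)}
        steps = 0
        while len(visited) < m:
            # Uniform choice among the 2-4 neighbors inside the grid.
            nbrs = [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < side and 0 <= y + dy < side]
            x, y = random.choice(nbrs)
            visited.add((x, y))
            steps += 1
        total += steps
    return total / trials

side = 20                  # a 20 x 20 grid, so m = 400
m = side * side
H_m = sum(1.0 / k for k in range(1, m + 1))
print("coupon-collector value m*H_m:", m * H_m)
print("simulated grid cover time:   ", cover_time_single(side))
```

Running something like this should make the gap (if any) between the coupon-collector value and the actual grid cover time visible directly.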
I have this number as an absolute performance baseline, but I want to see what happens with slightly different systems.
Firstly: instead of having a single system randomly walking across the graph, we now have n systems which walk the graph in parallel (but without coordination). Imagine that there are n people/robots randomly walking the grid. How long does it take to visit every vertex?
A naive answer would be that the total expected time decreases by a factor of n: m*H_m / n for m vertices and n parallel searching systems.
I'm sure this is wrong, since in the worst-case scenario it is possible that every one of the n systems identically replicates the search path of the others, so the upper bound is still m*H_m. That would make m*H_m / n the lower bound, reached only if no two of the n systems ever cover the same ground.
If we consider a central vertex with 4 edges leading to its neighbors, the probability of moving to a particular adjacent vertex is 0.25. The probability that a second system at the same vertex makes that same specific move is 0.25 * 0.25 = 0.0625, and the probability that both then also take the same next step is 0.25^4 ≈ 0.004. So exact replication of a search path is rare.
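To get a feel for how far the real speedup falls below the naive m*H_m / n, here's the same sketch extended to n independent walkers. Note the independent uniform-random starting positions and the "everyone steps once per time unit" convention are my assumptions; the question doesn't pin either down:

```python
import random

def cover_time_parallel(side, n, trials=100):
    """Average number of time steps for n independent uniform random
    walkers (random starts) to jointly visit every vertex of a
    side x side grid; one time step moves every walker once."""
    m = side * side
    total = 0
    for _ in range(trials):
        walkers = [(random.randrange(side), random.randrange(side))
                   for _ in range(n)]
        visited = set(walkers)
        steps = 0
        while len(visited) < m:
            for i, (x, y) in enumerate(walkers):
                # Each walker independently picks a uniform in-grid neighbor.
                nbrs = [(x + dx, y + dy)
                        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < side and 0 <= y + dy < side]
                walkers[i] = random.choice(nbrs)
                visited.add(walkers[i])
            steps += 1
        total += steps
    return total / trials

for n in (1, 2, 4, 8):
    print(n, "walkers:", cover_time_parallel(20, n))
```

Comparing the n = 1 row against n = 2, 4, 8 should show whether doubling the walkers comes close to halving the cover time or whether the speedup is clearly sub-linear.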
Any graph theory experts who can shed some light on this?