Is your feature request related to a problem? Please describe.
Let's take a laptop:
```
goos: windows
goarch: amd64
pkg: github.com/LdDl/ch
cpu: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
```
Run the test (with the debug flag ON) on it for this Graph file containing ~211k edges:
```
go test -timeout 30s -v -run ^TestShortestPath$ github.com/LdDl/ch
....
--- PASS: TestShortestPath (10.32s)
...
```
Basically it took ~10 seconds to preprocess the graph. That's not bad. But.
Let's take another Graph file containing 251k edges:
```
go test -timeout 5000s -v -run ^TestShortestPath$ github.com/LdDl/ch
....
--- PASS: TestShortestPath (4240.13s)
...
```
Now it takes 70+ minutes. I suspect the problem with that graph is (a) its topology and (b) the large number of Dijkstra calls during the contraction process over the last ~10k steps in the priority queue (vertices sorted by importance).
Something needs to be done about preparing the contraction hierarchies: improve the heuristics (I don't think generating ~[edges_num*2] shortcuts is a good ...), delete unnecessary calculations, and refactor the code (and probably find hidden bugs along the way).
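One direction worth exploring for reducing the Dijkstra cost per contraction: bound each local witness search by a settled-node budget and a distance limit, a common way to cap contraction time. The sketch below is hypothetical and shares no names with github.com/LdDl/ch (`boundedDijkstra`, `pqItem`, the adjacency-map representation are all made up here); it only illustrates the idea.

```go
package main

import (
	"container/heap"
	"fmt"
)

// pqItem is a minimal priority-queue entry for the sketch.
type pqItem struct {
	node int
	dist float64
}

type pq []pqItem

func (p pq) Len() int            { return len(p) }
func (p pq) Less(i, j int) bool  { return p[i].dist < p[j].dist }
func (p pq) Swap(i, j int)       { p[i], p[j] = p[j], p[i] }
func (p *pq) Push(x interface{}) { *p = append(*p, x.(pqItem)) }
func (p *pq) Pop() interface{} {
	old := *p
	it := old[len(old)-1]
	*p = old[:len(old)-1]
	return it
}

// boundedDijkstra returns shortest distances from src, but gives up
// after settling maxSettled nodes or once the popped distance exceeds
// limit; unreached nodes simply have no entry in the result map.
func boundedDijkstra(adj map[int]map[int]float64, src int, limit float64, maxSettled int) map[int]float64 {
	dist := map[int]float64{src: 0}
	settled := map[int]bool{}
	q := &pq{{node: src, dist: 0}}
	for q.Len() > 0 && len(settled) < maxSettled {
		it := heap.Pop(q).(pqItem)
		if settled[it.node] || it.dist > limit {
			continue
		}
		settled[it.node] = true
		for to, w := range adj[it.node] {
			if d, ok := dist[to]; !ok || it.dist+w < d {
				dist[to] = it.dist + w
				heap.Push(q, pqItem{node: to, dist: it.dist + w})
			}
		}
	}
	return dist
}

func main() {
	adj := map[int]map[int]float64{
		0: {1: 1, 2: 4},
		1: {2: 1},
		2: {},
	}
	fmt.Println(boundedDijkstra(adj, 0, 10, 100)[2]) // 2, via 0→1→2
}
```

A search aborted by the budget may miss a witness path and emit an unnecessary shortcut, which is wasteful but still correct, so the bound trades preprocessing time for a few extra shortcuts.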
Additional context
Currently used heuristics:
1. Edge difference:
```go
// computeImportance updates the importance of a vertex
func (vertex *Vertex) computeImportance() {
	// Worst possible number of shortcuts through the vertex is: NumWorstShortcuts = NumIncomingEdges * NumOutcomingEdges
	vertex.shortcutCover = len(vertex.inIncidentEdges) * len(vertex.outIncidentEdges)
	// Total number of incident edges is: NumIncomingEdges + NumOutcomingEdges
	vertex.incidentEdgesNum = len(vertex.inIncidentEdges) + len(vertex.outIncidentEdges)
	// Edge difference is the difference between NumWorstShortcuts and TotalIncidentEdgesNum
	vertex.edgeDiff = vertex.shortcutCover - vertex.incidentEdgesNum
	// [+] Spatial diversity heuristic: for each vertex maintain a count of the neighbors that have already been contracted [vertex.delNeighbors], and add this to the overall importance
	// note: the more neighbors have already been contracted, the later this vertex will be contracted.
	// [+] Bidirected edges heuristic: for each vertex check how many bidirected incident edges it has.
	// note: the more bidirected incident edges, the less important the vertex is.
	vertex.importance = vertex.edgeDiff*14 + vertex.incidentEdgesNum*25 + vertex.delNeighbors*10 - vertex.bidirectedEdges()
}
```
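For illustration, the formula above can be exercised on a toy vertex: `toyVertex` and `importance` below are hypothetical stand-ins for the library's `Vertex` and `computeImportance`, not its actual API, but the weights (14, 25, 10, -1) match the snippet above.

```go
package main

import "fmt"

// toyVertex is a hypothetical, stripped-down vertex carrying only the
// counts that the importance formula needs.
type toyVertex struct {
	in, out      int // incoming / outgoing incident edge counts
	delNeighbors int // neighbors already contracted
	bidirected   int // bidirected incident edges
}

// importance mirrors the weighted sum from computeImportance above.
func importance(v toyVertex) int {
	shortcutCover := v.in * v.out        // worst-case shortcuts through v
	incident := v.in + v.out             // total incident edges
	edgeDiff := shortcutCover - incident // edge difference
	return edgeDiff*14 + incident*25 + v.delNeighbors*10 - v.bidirected
}

func main() {
	// A vertex with 3 incoming and 2 outgoing edges, 1 contracted
	// neighbor and 1 bidirected edge:
	// edgeDiff = 3*2 - (3+2) = 1, so 1*14 + 5*25 + 1*10 - 1 = 148.
	v := toyVertex{in: 3, out: 2, delNeighbors: 1, bidirected: 1}
	fmt.Println(importance(v)) // 148
}
```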
2. Lazy update heuristic:
```go
// Update importance of a vertex "on demand" as follows:
// Before contracting the vertex with the currently smallest importance, recompute its importance and see if it is still the smallest.
// If not, pick the next smallest one, recompute its importance and see if that is the smallest now; if not, continue in the same way.
vertex := heap.Pop(graph.pqImportance).(*Vertex)
vertex.computeImportance()
if graph.pqImportance.Len() != 0 && vertex.importance > graph.pqImportance.Peek().importance {
	graph.pqImportance.Push(vertex)
	continue
}
```