tzlaine wrote:What bothers me, though, is that you keep the double loop, forall (source, target) get distance(source, target), in WithinStarlaneJumps::Eval. This scales quadratically with the number of objects, not systems, and the object count grows quite a bit faster (the example game has few colonized systems yet). I got the closest object "on the way", but now that we have the distances in constant time, isn't there a more efficient way to perform this check?

If the closest subcondition match is within the #subcondition_matches closest objects to the target (in fact it needs to be one step closer than this allows), or if there are fewer objects at most jump_limit-1 steps away than there are subcondition matches, then a simple BFS for any subcondition match, starting at the target, will take at most as long as scanning through all the subcondition matches, and less the closer the match is (a correction factor applies here, since scanning a list item is faster than testing set membership of a vertex). I think this alternative method is worthwhile in most cases for a jump limit of 1 (which is the limit actually used). To verify when it is actually useful, we would have to keep the d-neighbourhood sizes of each vertex for small distances d in memory and compare them to the size of the subcondition match list. Then again, hardware evolution will quickly render this effort moot within the next few years, so...
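The BFS-from-target idea above can be sketched roughly as follows. This is an illustrative Python sketch, not FreeOrion's actual C++ code; the names `adjacency`, `matches`, and `jump_limit` are assumptions standing in for the starlane graph, the subcondition match list, and the jump bound:

```python
from collections import deque

def match_within_jumps(adjacency, target, matches, jump_limit):
    """Breadth-first search from `target` over the starlane graph,
    stopping after `jump_limit` hops; returns True as soon as any
    vertex in `matches` is reached. Hypothetical names, not the
    FreeOrion API."""
    match_set = set(matches)
    if target in match_set:
        return True
    seen = {target}
    frontier = deque([(target, 0)])
    while frontier:
        vertex, depth = frontier.popleft()
        if depth == jump_limit:
            continue  # do not expand past the jump limit
        for neighbour in adjacency[vertex]:
            if neighbour in seen:
                continue
            if neighbour in match_set:
                return True
            seen.add(neighbour)
            frontier.append((neighbour, depth + 1))
    return False
```

For a jump limit of 1 this visits only the target's immediate neighbours, which is exactly the case the post argues is usually cheaper than the full pairwise distance scan.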
There's no way of getting away from the O(N*N) performance of doing this kind of search. We have implemented a greedy system where possible (in an And, for instance, we only search from the previous search's results). The only fix that can get rid of this is to memoize results for later reuse. For that to work, we need a better understanding of when the universe is constant and when it is not (so that we know when to throw out our memos). This is what I've been planning for some time; I hope it's even possible.[/quote]
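The memoize-and-invalidate scheme described above could look something like this. A minimal sketch, assuming a `compute_distance` callable stands in for the real distance computation; `DistanceCache` and `invalidate` are hypothetical names, not part of the FreeOrion codebase:

```python
class DistanceCache:
    """Caches pairwise jump distances; the caller must invalidate
    the cache whenever the universe changes (e.g. a starlane is
    added or removed), which is the hard part the post alludes to."""

    def __init__(self, compute_distance):
        self._compute = compute_distance
        self._memo = {}

    def invalidate(self):
        # The universe is no longer the one the memos were built
        # against: throw everything out.
        self._memo.clear()

    def distance(self, source, target):
        key = (source, target)
        if key not in self._memo:
            self._memo[key] = self._compute(source, target)
        return self._memo[key]
```

The cache itself is trivial; as the post says, the real work is deciding precisely when `invalidate` must be called.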
True, and still untrue. In theory, you cannot improve the worst case; in practice, you can improve most realistic cases, especially with jump limits as short as 1. Still, I've abandoned the idea myself, as it's probably not worth the effort.
Sidenote: do you know the simplex algorithm? It is proven to take exponential time in the worst case. However, it is also proven (via smoothed analysis) to run in expected polynomial time on arbitrary inputs, provided they are perturbed by a small random error. An algorithm can be bad in the worst case and still good in practice.
Yesterday, we were still on the brink. Fortunately, today we have come one step further.