Dilvish wrote:Unless Geoff or Marcel has some significant objection, I am OK with the idea that if NetworkX proves to be plenty speedy (**but see below**) we could simply rely on including it; or, if the speed is marginal, then perhaps fall back to including NetworkX only if we are not able to fairly promptly re-implement something using boost-graph. But if a NetworkX implementation of something wound up slowing down the AI too much, then we would not roll it out in that form, even if we did not have a ready reimplementation with boost-graph.
** Re the speed of NetworkX: it appears to me that NetworkX will rely on the optional NumPy and SciPy packages to speed up various calculations if those packages are present. I know I already have those installed on my machines, and I have the impression you likely do as well, but we can't rely on our typical players having them, so any assessment of the time requirements for any of these graph algorithms would have to be done without relying on those packages.
I have those packages installed, but they are not within the FreeOrion Python path, so that should probably not cause trouble. There are a few algorithms that absolutely require numpy/scipy (i.e. have no pure-Python implementation), but they seem unlikely to be of use in the AI context.
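For what it's worth, the core graph queries the AI is likely to need run in pure Python inside NetworkX. A minimal sketch of a weighted shortest-path lookup that requires neither numpy nor scipy (the star names and weights are made up for illustration):

```python
import networkx as nx

# Toy starlane map: nodes are systems, edge weights are travel costs.
G = nx.Graph()
G.add_edge("Sol", "Vega", weight=2.0)
G.add_edge("Vega", "Rigel", weight=3.0)
G.add_edge("Sol", "Rigel", weight=10.0)

# Dijkstra-based shortest path; implemented in pure Python in NetworkX.
path = nx.shortest_path(G, "Sol", "Rigel", weight="weight")
# path is ["Sol", "Vega", "Rigel"]: the two-hop route is cheaper (5.0 < 10.0)
```

So for this class of algorithm, the numpy/scipy question should not affect player machines at all.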
On the maintenance issue, I disagree that the only maintenance concern is a change of Python versions. It appears to me that NetworkX still receives periodic bugfixes, so one of us would need to keep track of it to see whether we need to upgrade our bundled version from time to time (and we may get into situations like with Boost, where we make local patches/workarounds and then have to track how long they are needed). That is not necessarily a dealbreaker, but it seems to me that it needs to be acknowledged. There is also an issue regarding optional packages, which I'll discuss more below. Are you willing to take on the responsibility of keeping track of such things?
Right. I would be willing to keep track of such things.
I was not proposing preemptively exposing a large portion of boost-graph; my thought was to rely mostly on helper functions written in C++, as we currently do for some of our pathing code, and to expose additional items (helper functions or things directly from boost-graph) as it seemed suitable. It's not clear to me what your objection was to the idea of exploring and prototyping algorithms using NetworkX with the expectation of doing the "production" implementation with boost-graph ...
It is duplicate work, which is unnecessary if the NetworkX implementation is fast enough. It makes development slower (both because it is duplicate work and because it is heavily templated C++ compared to Python), and yes, it also adds potentially long delays after an implementation ready for play-testing already exists. If play-testing feedback is negative, it is once again easier and faster to iterate in Python.
As far as initial work is concerned, we would at the very least need a solid interface to create and modify (i.e. add/delete edges and vertices, redefine weights, etc.) as well as iterate over an object representing a boost-graph, and preferably conversion functions to and from the NetworkX data structures, because once a "production" implementation needs to be extended or modified, working in Python is again strongly preferred.
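To make the conversion idea concrete, here is a rough sketch of the kind of helpers I have in mind on the Python side. The function names and the (u, v, weight) triple format are assumptions for illustration, not an existing API; the triples stand in for whatever flat edge representation a C++ helper would hand across the boundary:

```python
import networkx as nx

def edges_to_nx(edges):
    """Build a NetworkX graph from (u, v, weight) triples,
    e.g. as returned by a hypothetical C++ boost-graph helper."""
    G = nx.Graph()
    G.add_weighted_edges_from(edges)
    return G

def nx_to_edges(G):
    """Flatten a NetworkX graph back into (u, v, weight) triples
    suitable for handing back to the C++ side."""
    return [(u, v, d.get("weight", 1.0)) for u, v, d in G.edges(data=True)]
```

With something like this in place, prototyping in NetworkX and later swapping the hot path to a boost-graph implementation would not require rewriting the surrounding AI code.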
That being said, I am open to trying it first. I would work with my local NetworkX installation, post a PR with that implementation, get play-testing feedback from you and possibly other devs who set up NetworkX locally, and then, once the algorithm is confirmed and polished, see how long the boost-graph adaptation takes (and whether it is even necessary, given the performance of the Python version).
If I provided any code, scripts or other content here, it's released under GPL 2.0 and CC-BY-SA 3.0