This project is mirrored from https://github.com/mantidproject/mantid.git.
- 01 Mar, 2016 1 commit
  - Hahn, Steven authored
- 14 Feb, 2016 1 commit
  - Hahn, Steven authored
- 23 Jan, 2016 1 commit
  - Hahn, Steven authored
- 22 Jan, 2016 1 commit
  - Hahn, Steven authored
- 15 Jan, 2016 2 commits
  - Hahn, Steven authored
  - Hahn, Steven authored
- 05 Oct, 2015 1 commit
  - Campbell, Stuart authored
- 17 Apr, 2015 1 commit
  - Campbell, Stuart authored
- 16 Dec, 2014 1 commit
  - Whitfield, Ross authored
- 22 Apr, 2014 2 commits
  - Owen Arnold authored: Add better documentation.
  - Owen Arnold authored: The problem does not get picked up in the unit tests and only occurs with a real dataset. The problem is that some clusters become their own children, so when getLabel is then called on the composite cluster, the recursion overflows the stack. This could be fixed by changing the logic of the merge method to use a member vector<set<label_id>> to track each cluster group for the incomplete clusters, and only generating the composites (which would then only ever be one level deep) at the end of the process.
- 15 Apr, 2014 1 commit
  - Owen Arnold authored
- 14 Apr, 2014 1 commit
  - Owen Arnold authored: A composite cluster will fix the problem I have been having: the clusters are processed in parallel to obtain the uniform minimum, but the order in which they are processed affects the final labeling. If the clusters are set up as composites beforehand, this will not be a problem.
- 09 Apr, 2014 3 commits
  - Owen Arnold authored
  - Owen Arnold authored: A very simple solution to a threading issue. We can safely process the peaks list in parallel if the integration can be guaranteed to be thread safe. By making the integration a const member that simply returns its results, concurrent thread access causes no side effects. This allows us to handle the case where the user has selected a threshold such that two peaks end up sitting on the same cluster and the integration method could therefore be entered simultaneously. Functional programming had the solution. I have also made some changes to reduce the work needed to determine which clusters are unresolved. This duplication can be avoided because for each index pair that gets registered there will always be a duplicate (there has to be, as each unreachable neighbour for one process index is itself an unreachable neighbour for another process index), and therefore only one cluster needs to be entered into the vector.
  - Owen Arnold authored: Have the integration working in both parallel and non-parallel modes. The performance tests now run in roughly half the previous time, approximately half a second. The non-parallel version runs quicker than the parallel version, indicating that some work is still needed to improve the speed of the merging.
- 07 Apr, 2014 1 commit
  - Owen Arnold authored: Cluster extracted to its own type in its own physical structure. Clusters are now passed around. Tests added for the cluster type.