## Default job manager is queued and runs 1 concurrent job.
[manager:_default_]
type = queued_python
num_concurrent_jobs = 1

## Create a named queue (example) and run as many concurrent jobs as the
## server has cores. The Galaxy Pulsar URL should have /managers/example
## appended to it to use a named manager such as this.
#[manager:example]
#type=queued_python
#num_concurrent_jobs=*
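## For instance, with the "example" manager above enabled, the Galaxy side
## would point at a URL like the following (the host is illustrative; 8913
## is Pulsar's default port):
##   http://pulsarserver.example.org:8913/managers/example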

## DRMAA backed manager (vanilla).
## Be sure the drmaa Python module is installed and DRMAA_LIBRARY_PATH
## points to a valid DRMAA shared library file. You may also need to
## adjust LD_LIBRARY_PATH.
#[manager:_default_]
#type=queued_drmaa
#native_specification=-P bignodes -R y -pe threads 8
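## For example, with a Grid Engine setup the environment might be prepared
## before starting Pulsar along these lines (the library path below is
## purely illustrative):
##   export DRMAA_LIBRARY_PATH=/usr/lib/gridengine-drmaa/lib/libdrmaa.so
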
## Condor backed manager.
#[manager:_default_]
#type=queued_condor
## Optionally, additional condor submission parameters can be
## set as follows:
#submit_universe=vanilla
#submit_request_memory=32
#submit_requirements=OpSys == "LINUX" && Arch =="INTEL"
#submit_rank=Memory >= 64
## These would set universe, request_memory, requirements, and rank
## in the condor submission file to the specified values. For
## more information on condor submission files see the following link:
## http://research.cs.wisc.edu/htcondor/quick-start.html.
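## For instance, the (commented) settings above would produce submit file
## lines roughly like:
##   universe = vanilla
##   request_memory = 32
##   requirements = OpSys == "LINUX" && Arch =="INTEL"
##   rank = Memory >= 64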

## CLI Manager (Local)
## Manage jobs via command-line execution of qsub, qdel, and qstat.
#[manager:_default_]
#type=queued_cli
#job_plugin=Torque
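## (Job plugins other than Torque, such as Slurm, may be available
## depending on the Pulsar version.)
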
## CLI Manager via Remote Shell
## Manage jobs via qsub, qdel, qstat on remote host `queuemanager` as
## Unix user `queueuser`.
#[manager:_default_]
#type=queued_cli
#job_plugin=Torque
#shell_plugin=SecureShell
#shell_hostname=queuemanager
#shell_username=queueuser

## DRMAA (via external users) manager.
## This variant of the DRMAA manager will run jobs as the supplied user.
#[manager:_default_]
#type=queued_external_drmaa
#production=true
#chown_working_directory_script=scripts/chown_working_directory.bash
#drmaa_kill_script=scripts/drmaa_kill.bash
#drmaa_launch_script=scripts/drmaa_launch.bash
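## Note: to actually run jobs as other users, these scripts typically need
## to be invokable with elevated privileges (e.g. via sudo); the exact
## setup is deployment specific.
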
## NOT YET IMPLEMENTED. PBS backed manager.
#[manager:_default_]
#type=queued_pbs

## Pulsar-side action configuration. For actions initiated by Pulsar, the
## following parameters can be set to control retrying these actions if
## they fail. (XXX_max_retries=-1 => no retry, XXX_max_retries=0 => retry
## forever; this may be a bit counter-intuitive, but it is consistent with
## Kombu.)
#preprocess_action_max_retries=-1
#preprocess_action_interval_start=2
#preprocess_action_interval_step=2
#preprocess_action_interval_max=30
#postprocess_action_max_retries=-1
#postprocess_action_interval_start=2
#postprocess_action_interval_step=2
#postprocess_action_interval_max=30
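
## With the defaults above, a failed action would be retried after 2 seconds,
## then 4, 6, 8, ... seconds between attempts, capped at 30 seconds (assuming
## Kombu-style interval_start/interval_step/interval_max semantics).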

## MQ Options:
## If using a message queue, Pulsar will actively monitor the status of jobs
## in order to issue status update messages. The following options are then
## available to any manager.
## Minimum seconds between polling intervals (increase to reduce resources
## consumed by Pulsar).
#min_polling_interval = 0.5
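
## For example, to reduce polling overhead by waiting at least two seconds
## between status checks:
#min_polling_interval = 2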