Submit a Job

Description

This page shows how to submit a job to an EMR (Elastic MapReduce) cluster on AWS and obtain its job ID.

Command-line Interface

>>> ./EHPC-EMR --submit -d=DOMAIN --command=COMMAND OPTIONS
All parameters should be in the format parameter=value
--command                    original command to run
--input-files, -i            list of input files separated by commas
--output-files, -o           list of expected output files separated by commas
--domain, -d                 Domain of the main node
--cache-files, -cf           list of cache files to cache to mappers and reducers, for hadoop mode only
--cache-archives, -ca        list of cache archives to cache to mappers and reducers, for hadoop mode only
--files                      list of files to pack with the job
--reducer                    path of the reducer to execute, e.g. 'cat', default NONE
--output-dir                 path of the output dir for the mappers and reducers, default /home/hadoop/output/ID
--conf                       set of Hadoop configuration parameters separated by commas, for hadoop mode only
--owner                      the owner of the job
        if owner is system, the command will execute on the command line; the client will wait until the job is Done
        if owner is hadoop, the job will be submitted as a Hadoop job
        otherwise, this will be a PBS Torque job
--no-fetch-output-files     don't fetch output files, in case of s3://
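Putting the options together, a minimal submission might look like the sketch below. The domain, file names, command, and owner are placeholder values, not real endpoints; the script only assembles and prints the command line (running it for real requires a live cluster), so each documented parameter and its parameter=value format is visible.

```shell
#!/bin/sh
# Placeholder values -- substitute your own cluster domain, files, and owner.
DOMAIN="ec2-xx-xx-xx-xx.compute-1.amazonaws.com"   # hypothetical main-node domain
CMD="wc -l input.txt"                               # the original command to run

# Assemble the submission; every parameter uses the parameter=value format.
SUBMIT="./EHPC-EMR --submit -d=$DOMAIN --command='$CMD' --input-files=input.txt --output-files=result.txt --owner=user1"

# Print instead of executing, since a live cluster is required to run it.
echo "$SUBMIT"
```

With --owner=user1 (neither system nor hadoop), this submission would run as a PBS Torque job per the owner rules above.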

Example 1

Run Crossbow on ec2......com

./EHPC-EMR --submit -d=ec2......com -id=1 --command='export CROSSBOW_HOME=/home/hadoop/crossbow;$CROSSBOW_HOME/cb_hadoop --preprocess --input=s3://eg.nubios.us/crossbow/example/hg18/reads.manifest --output=s3://eg.nubios.us/crossbow/example/hg18/output --reference=s3://eg.nubios.us/crossbow-refs/hg18.jar --all-haploids --tempdir=/mnt/tmp --streaming-jar=/home/hadoop/contrib/streaming/hadoop-streaming-0.20.205.jar --just-align' --owner=user1

Returns JOBID
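Because submit returns a JOBID, a script can capture it for a later status check. The stub function below stands in for the real ./EHPC-EMR binary (which needs a live cluster) and its printed id is invented for illustration; only the capture pattern is the point.

```shell
#!/bin/sh
# Stub standing in for ./EHPC-EMR -- a real submit prints the actual JOBID.
EHPC_EMR() { echo "1"; }   # hypothetical output, for illustration only

# Capture the returned JOBID with command substitution for later use,
# e.g. when checking job status.
JOBID=$(EHPC_EMR --submit -d=example.com --command='hostname' --owner=user1)
echo "submitted job $JOBID"
```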
