tab for tasks

Comments

5 comments

  • Yogi Kulkarni

    Each Job is meant to be a cohesive unit of work, and in most cases the tasks are very closely related to the job at hand; they are used to do some processing/cleanup before and after the job, e.g. kill stray processes, delete unwanted files, etc.
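
    For illustration, a rough sketch of such a job in the pipeline config (cruise-config.xml); the commands here are made-up placeholders, and element names may vary slightly between Go versions:

        <job name="build">
          <tasks>
            <!-- pre-job cleanup -->
            <exec command="./kill-stray-processes.sh" />
            <!-- the main work -->
            <exec command="ant">
              <arg>build</arg>
            </exec>
            <!-- post-job cleanup, runs even if an earlier task failed -->
            <exec command="./delete-temp-files.sh">
              <runif status="any" />
            </exec>
          </tasks>
        </job>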


    What kind of tasks do you plan to have? Please let us know more about your use case.


    Best,


    Yogi

  • LeeBenhart

    We currently have the pipeline set up such that the first stage is a build and test. The job in that stage will:


    Stage: BuildAndTest
        Job: BuildAndTest
            task: execute the build (cleanup is in this task as well)
            task: execute unit tests against build if passed
            task: run metrics if tests passed
            task: package the deployable items if metrics passed
            task: package smoke and acceptance tests for later execution if packaging passed
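
    Roughly, that job looks like the following in cruise-config.xml (the commands shown are placeholders, not our real scripts); the runif elements make each task run only if everything before it passed:

        <stage name="BuildAndTest">
          <jobs>
            <job name="BuildAndTest">
              <tasks>
                <exec command="./build.sh" />
                <exec command="./run-unit-tests.sh">
                  <runif status="passed" />
                </exec>
                <exec command="./run-metrics.sh">
                  <runif status="passed" />
                </exec>
                <exec command="./package-deployables.sh">
                  <runif status="passed" />
                </exec>
                <exec command="./package-acceptance-tests.sh">
                  <runif status="passed" />
                </exec>
              </tasks>
            </job>
          </jobs>
        </stage>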


    Since any of these tasks could fail, I want to drill down directly to the task that failed rather than scroll through the console log.


    Even applying an XSL transform to the console log to collapse each task section would serve this purpose.

  • Yogi Kulkarni

    The way we'd model this is as multiple stages in a pipeline:

    Pipeline: BuildAndTest
        Stage: Build
            Job: execute the build (cleanup is in this job as well)
        Stage: UnitTest
            Job: execute unit tests
        Stage: Metrics
            Job: run metrics
        Stage: PackageDeployers
            Job: package the deployable items
        Stage: PackageAcceptanceTests
            Job: package smoke and acceptance tests for later execution if packaging passed
    The advantage of structuring it this way is that:

    - Each subsequent stage is triggered only if the previous one passes so you don't need any additional checks.

    - You can reduce the time that unit or acceptance tests take by partitioning tests across multiple Jobs in one stage. E.g. you can have multiple jobs in the UnitTest stage, each of which runs a subset of the test suites. Go will schedule these in parallel, and if you have sufficient agents, your tests will complete faster. Additionally, Go aggregates and reports test failures across all Jobs in the "Test" tab on the Stage Detail page (you only need to upload the junit/nunit test report artifacts in the Job definition; see the sketch after this list).

    - You can trigger downstream pipelines whenever any stage completes. This is useful, for example, to kick off Acceptance and Performance tests in parallel once a Package stage completes. This dependency chain is then visible in the "Pipeline Dependency" tab on a Stage Detail page.
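
    As a rough cruise-config.xml sketch of this shape (commands, paths and names are placeholders), with two parallel jobs in the UnitTest stage and test reports uploaded as test artifacts:

        <pipeline name="BuildAndTest">
          <materials>
            <git url="git://example.com/project.git" />
          </materials>
          <stage name="Build">
            <jobs>
              <job name="build">
                <tasks>
                  <exec command="ant">
                    <arg>build</arg>
                  </exec>
                </tasks>
              </job>
            </jobs>
          </stage>
          <stage name="UnitTest">
            <jobs>
              <!-- Go schedules these jobs in parallel when enough agents are free -->
              <job name="unit-tests-1">
                <tasks>
                  <exec command="ant">
                    <arg>test-suite-1</arg>
                  </exec>
                </tasks>
                <artifacts>
                  <!-- uploading the reports lets Go aggregate failures on the Stage Detail "Test" tab -->
                  <test src="build/test-reports" dest="test-reports" />
                </artifacts>
              </job>
              <job name="unit-tests-2">
                <tasks>
                  <exec command="ant">
                    <arg>test-suite-2</arg>
                  </exec>
                </tasks>
                <artifacts>
                  <test src="build/test-reports" dest="test-reports" />
                </artifacts>
              </job>
            </jobs>
          </stage>
          <!-- Metrics, PackageDeployers and PackageAcceptanceTests stages would follow the same pattern -->
        </pipeline>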

    -Yogi

  • LeeBenhart

    I should have been more clear.


    We have 5 stages currently defined per pipeline. We do not want to take up the time and space to package a build for deployment and testing if basic unit tests do not pass. Each of these stages has 1 job with multiple tasks. Since we are single-platform, there is no need to have multiple jobs executing in parallel. I started down the path you suggested, and it made the most sense to me logically; however, we would have ended up with 10-16 stages per pipeline.


    1) buildAndTest (build, unit tests, metrics, packaging)
    2) Dev-DeployAndTest (deploy, smoke test, acceptance test)
    3) Test-DeployAndTest (deploy, smoke test, acceptance test)
    4) Production-DeployAndTest (deploy, smoke test)
    5) LabelAndCheckin (binary checkin of production deployment)

  • Yogi Kulkarni

    I agree that having more than 10 stages is not ideal.


    In such cases we typically break the stages up into multiple pipelines, with dependencies across them. E.g. our Dev pipeline has Build, UnitTest, Package (for acceptance), and Installers stages, and we have 2 acceptance test pipelines: one for Acceptance-IE (runs on Windows agents) and one for Acceptance-Firefox (runs on Linux).


    We also have Deploy pipelines that get triggered after the Dev pipeline's Package stage, one of which deploys the latest build to our Staging environment.
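
    Roughly, the downstream pipeline declares the upstream stage as a dependency material in cruise-config.xml, e.g. (names here are placeholders):

        <pipeline name="Deploy-Staging">
          <materials>
            <!-- triggers whenever the Package stage of the Dev pipeline passes -->
            <pipeline pipelineName="Dev" stageName="Package" materialName="dev-package" />
          </materials>
          <stage name="Deploy">
            <jobs>
              <job name="deploy">
                <tasks>
                  <exec command="./deploy-to-staging.sh" />
                </tasks>
              </job>
            </jobs>
          </stage>
        </pipeline>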


    In general, the way Go is designed, you'd get the most flexibility if you use Pipelines, Stages, and Jobs.


    -Yogi

