Commit 71747198 authored by Dmitry Ganyushin's avatar Dmitry Ganyushin

Merge branch 'master' into http-connector

* master: (183 commits)
  Adding tests for writing null blocks with and without compression
  Replace LookupWriterRec's linear search on RecList with an unordered_map. For 250k variables, time goes from 21sec to ~1sec in WSL. The order of entries in RecList was not necessary for the serializer to work correctly.
  Merge pull request #3805 from pnorbert/fix-bpls-string-scalar
  Merge pull request #3804 from pnorbert/fix-aws-version
  Merge pull request #3759 from pnorbert/bp5dbg-metadata
  new attempt to commit query support of local array. (#3868)
  MPI::MPI_Fortran should be INTERFACE not PUBLIC
  Fix hip example compilation error (#3865)
  Server Improvements (#3862)
  ascent,ci: remove unshallow flag
  Remove Slack as a contact mechanism (#3866)
  bug fix:  syntax error in json  output (#3857)
  Update the bpWriterReadHip example's cmake to run on crusher
  Examples: Use BPFile instead of BP3/4/5 for future-proof
  inlineMWE example: Close files at the end
  Examples: Add BeginStep/EndStep wherever it was missing
  BP5Serializer: handle local variables that use operators (#3859)
  Blosc2 USE ON: Fix Module Fallback (#3774)
  SST,MPI,DP: soft handle peer error
  SST,MPI,DP: improve uniq identifier
  Fix destdir install test (#3850)
  cmake: include ctest before detectoptions
  ci: enable tau check
  Add/Improve the ReadMe.md files in examples directory
  Disable BUILD_TESTING and ADIOS2_BUILD_EXAMPLES by default
  Remove testing based on ADIOS2-examples
  Fix formatting issue in DetectOptions.cmake
  Add examples from ADIOS2-Examples
  Improve existing examples
  MPI_DP: do not call MPI_Init (#3847)
  cmake: update minimum cmake to 3.12 (#3849)
  MPI: add timeout for conf test for MPI_DP (#3848)
  Tweak Remote class and test multi-threaded file remote access (#3834)
  Add prototype testing of remote functionality (#3830)
  Try always using the MPI version
  Try always using the MPI version
  Import tests from bp to staging common, implement memory selection in SST
  ci: fix codeql ignore path (#3772)
  install: export adios2 device variables (#3819)
  added support to query BP5 files (#3809)
  ffs 2023-09-19 (67e411c0)
  Fix abs/rel step in BP5 DoCount
  fix dummy Win build
  Pass Array Order of reader to remote server for proper Get() operation
  Fix the ADIOS_USE_{_} variable names to use ADIOS2
  Remove BP5 BetweenStepPairs variable that hides Engine.h counterpart
  Delete experimental examples
  yaml-cpp: support 0.8.0 version
  Mod not to overload to make some compilers happy
  Mod to keep MinBlocksInfo private to BP5Writer, not reuse Engine.h API.
  gha,ci: update checkout to v4
  Limit testing changes to BP5
  tests: fix windows 64bit type issues
  Add location of hdf5.dll to PATH for windows tests
  Fix more compiler warnings from ompi build
  clang format
  Fix size_t -> int conversion warnings
  Allow building with Visual Studio and HDF5 shared libs
  Fixup local var reading by block with test, master branch
  use documented initial cache variable to find hdf5 on windows
  ci: Add HDF5 to a windows build
  Remove unused SelectionType values
  Update ADIOS2 HDF5 VOL with basic set of capability flags
  defines
  reorder
  Remove file before writing, catch exceptions
  format
  Reader-side Profiling
  removed  a comment
  ci: disable MGARD static build
  cmake: fix ffs dependency
  removed comments and cleaned up more code
  bp5: remove ADIOS2_USE_BP5 option
  operators: fix module library
  ci: Create static minimal build
  cmake: correct prefer_shared_blosc behavior
  removing warning from auto
  clang-format
  match type of timestep for h5 engine to size_t (same as adios VariableBase class) when storing to hdf5, check the limit of size_t and match to the right type (uint, ulong, default ulonglong)
  BP5Deserialize: modify changes to keep abi compt
  Rename test. fix windows compile errors.
  add codeql workflow
  Fix ChunkV maintaining CurOffset when downsizing current chunk in Allocate() and creating the next chunk.
  ci: fix docker images names
  removed kwsys gitattributes
  KWSys 2023-05-25 (c9f0da47)
  fixup! ci: add mgard dependency to spack builds
  ci: remove unused script part
  cmake: remove redundant pugixml dependency
  ci: add mgard dependency to spack builds
  cmake: Remove enet config warning
  cmake: resolve cmake python deprecation
  enet 2023-08-15 (7587eb15)
  EVPath 2023-08-15 (657c7fa4)
  dill 2023-08-15 (235dadf0)
  atl 2023-08-15 (7286dd11)
  remove data.py since BP5 data file has no format
  format
  bp5dbg parse records and check sizes during it for mmd.0 and md.0. No decoding FFS records.
  cmake: correct info.h installation path
  ...
parents f47310d2 65ef26c4
+1 −1
@@ -10,7 +10,7 @@ CompactNamespaces: false
ConstructorInitializerAllOnOneLineOrOnePerLine: false
FixNamespaceComments: false
Standard: Cpp11
-ColumnLimit: 80
+ColumnLimit: 100
AllowAllParametersOfDeclarationOnNextLine: true
AlignEscapedNewlines: Right
AlignAfterOpenBracket: Align

.gitattributes (new file, mode 0 → 100644, +22 −0)
# Set the default behavior, in case people don't have core.autocrlf set.
*       text=auto

# Explicitly declare text files you want to always be normalized and converted
# to native line endings on checkout.
*.cxx   text
*.h     text
*.hxx   text
*.tcc   text
*.cu    text
*.c     text
*.h     text
*.py    text
*.f90   text
*.F90   text
*.sh    text

*.cmake whitespace=tab-in-indent
*.md    whitespace=tab-in-indent whitespace=-blank-at-eol conflict-marker-size=79
*.rst   whitespace=tab-in-indent conflict-marker-size=79
*.txt   whitespace=tab-in-indent
*.xml   whitespace=tab-in-indent
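The attributes above can be verified per path with `git check-attr`; a minimal sketch in a throwaway repository (the file name `foo.cmake` is only an illustration):

```shell
# Throwaway repo demonstrating how the whitespace attribute above applies.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q .
printf '*.cmake whitespace=tab-in-indent\n' > .gitattributes
git check-attr whitespace -- foo.cmake
# → foo.cmake: whitespace: tab-in-indent
```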
+19 −2
@@ -44,10 +44,27 @@ git push
git fetch origin
git checkout -b release_@MAJOR@@MINOR@ origin/master
# Use the following command with care
-git push origin
+git push origin release_@MAJOR@@MINOR@:release_@MAJOR@@MINOR@
```
<!-- else -->
- [ ] Create a PR that merges release_@MAJOR@@MINOR@ into master
- [ ] Remove older patch releases for @MAJOR@.@MINOR@.X in ReadTheDocs.
- [ ] Create a merge `-s ours` commit in master:
```
git fetch origin
git checkout master
git reset --hard origin/master
# We do not want the changes from the release branch in master
git merge -s ours release_@MAJOR@@MINOR@
# Be very careful here
git push origin master
```
<!-- endif -->
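The `merge -s ours` step above records a merge commit while leaving master's tree untouched; a minimal sketch in a throwaway repository (branch and file names are illustrative):

```shell
# Throwaway repo: `merge -s ours` records the merge but keeps the current
# branch's tree unchanged (the release branch's file never lands).
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m base
main=$(git rev-parse --abbrev-ref HEAD)
git checkout -q -b release_210
echo patch > release-only.txt
git add release-only.txt
git -c user.email=ci@example.com -c user.name=ci commit -q -m "release-only change"
git checkout -q "$main"
git -c user.email=ci@example.com -c user.name=ci merge -q -s ours release_210 -m "merge -s ours"
test ! -e release-only.txt && echo "tree unchanged"
```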
- [ ] Submit a PR in Spack that adds this new version of ADIOS2 (if not an RC, mark this new version as preferred)
  - Run `spack checksum -a adios2` to add it, create a commit, push it, and
    create a PR in the Spack repo.
- [ ] Submit a PR in Conda that adds this new version of ADIOS2 (if not an RC, mark this new version as preferred)
  - The conda-forge bot should do this for you automatically; expect a new PR at
    https://github.com/conda-forge/adios2-feedstock a couple of hours after the
    release.
- [ ] Write an announcement on the ADIOS-ECP mailing list
  (https://groups.google.com/a/kitware.com/g/adios-ecp)
+221 −39
@@ -13,6 +13,17 @@
# may have made it to the target branch after the pull_request was started.
#######################################

#######################################
# Note regarding restore/save of cache for use by ccache:
#
# We only save cache on main branch runs. PR workflows only consume the
# cache to avoid exceeding the 10 GB limit, which can cause cache thrashing.
# Also, we only save the cache if there was *not* an exact match when cache
# was restored. This avoids attempting to write to an existing cache key,
# which results in failure to save the cache. While failure to save doesn't
# cause job failures, it seems a waste and is easily avoidable.
#######################################

name: GitHub Actions

on:
@@ -40,7 +51,7 @@ jobs:
    outputs:
      num_code_changes: ${{ steps.get_code_changes.outputs.num_code_changes }}
    steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Check for appropriately named topic branch
@@ -66,13 +77,13 @@ jobs:

    runs-on: ubuntu-latest
    container:
-      image: ornladios/adios2:ci-formatting
+      image: ghcr.io/ornladios/adios2:ci-formatting

    steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          path: gha
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          path: source
@@ -91,49 +102,74 @@ jobs:
# Build and test jobs
#######################################

-  linux:
+  linux_ubuntu:
    needs: [format, git_checks]
    if: needs.git_checks.outputs.num_code_changes > 0

-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
    container:
-      image: ornladios/adios2:ci-spack-el8-${{ matrix.compiler }}-${{ matrix.parallel }}
+      image: ghcr.io/ornladios/adios2:ci-spack-ubuntu20.04-${{ matrix.compiler }}
      options: --shm-size=1g
      env:
-        GH_YML_JOBNAME: ${{ matrix.os }}-${{ matrix.gpu_backend }}${{ matrix.compiler }}-${{ matrix.parallel }}
+        GH_YML_JOBNAME: ${{ matrix.os }}-${{ matrix.compiler }}${{ matrix.shared == 'static' && '-static' || ''}}-${{ matrix.parallel }}
        GH_YML_BASE_OS: Linux
        GH_YML_MATRIX_OS: ${{ matrix.os }}
        GH_YML_MATRIX_COMPILER: ${{ matrix.compiler }}
        GH_YML_MATRIX_PARALLEL: ${{ matrix.parallel }}
        CCACHE_BASEDIR: "${GITHUB_WORKSPACE}"
        CCACHE_DIR: "${GITHUB_WORKSPACE}/.ccache"
        CCACHE_COMPRESS: true
        CCACHE_COMPRESSLEVEL: 6

    strategy:
      fail-fast: false
      matrix:
-        os: [el8]
-        compiler: [gcc8, gcc9, gcc10, gcc11, icc, oneapi, nvhpc222]
-        parallel: [serial, mpi]
+        os: [ubuntu20.04]
+        compiler: [gcc8, gcc9, gcc10, gcc11, clang6, clang10]
+        shared: [shared]
+        parallel: [ompi]
        include:
          - os: el8
            compiler: cuda
          - os: ubuntu20.04
            compiler: gcc10
            parallel: mpich
          - os: ubuntu20.04
            compiler: gcc8
            parallel: serial
            constrains: build_only
          - os: el8
            compiler: cuda
          - os: ubuntu20.04
            compiler: clang6
            parallel: serial
            gpu_backend: kokkos
          - os: ubuntu20.04
            compiler: gcc8
            shared: static
            parallel: ompi
            constrains: build_only
          - os: el8
            compiler: gcc10
            parallel: mpich

          - os: ubuntu20.04
            compiler: clang6
            shared: static
            parallel: ompi
            constrains: build_only
          - os: ubuntu20.04
            compiler: gcc8
            shared: static
            parallel: serial
    steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          path: gha
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          path: source
      - name: Restore cache
        uses: actions/cache/restore@v3
        id: restore-cache
        with:
          path: .ccache
          key: ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}-${{ github.sha }}
          restore-keys: |
            ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}
      - name: Configure cache
        run: ccache -z
      - name: Setup
        run: gha/scripts/ci/gh-actions/linux-setup.sh
      - name: Update
@@ -142,10 +178,83 @@ jobs:
        run: gha/scripts/ci/gh-actions/run.sh configure
      - name: Build
        run: gha/scripts/ci/gh-actions/run.sh build
      - name: Print ccache statistics
        run: ccache -s
      - name: Save cache
        uses: actions/cache/save@v3
        if: ${{ github.ref_name == 'master' && steps.restore-cache.outputs.cache-hit != 'true' }}
        id: save-cache
        with:
          path: .ccache
          key: ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}-${{ github.sha }}
      - name: Test
        if: ${{ matrix.constrains != 'build_only' }}
        run: gha/scripts/ci/gh-actions/run.sh test

  linux_el8:
    needs: [format, git_checks]
    if: needs.git_checks.outputs.num_code_changes > 0

    runs-on: ubuntu-latest
    container:
      image: ghcr.io/ornladios/adios2:ci-el8-${{ matrix.compiler }}
      options: --shm-size=1g
      env:
        GH_YML_JOBNAME: ${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}
        GH_YML_BASE_OS: Linux
        GH_YML_MATRIX_OS: ${{ matrix.os }}
        GH_YML_MATRIX_COMPILER: ${{ matrix.compiler }}
        GH_YML_MATRIX_PARALLEL: ${{ matrix.parallel }}
        CCACHE_BASEDIR: "${GITHUB_WORKSPACE}"
        CCACHE_DIR: "${GITHUB_WORKSPACE}/.ccache"
        CCACHE_COMPRESS: true
        CCACHE_COMPRESSLEVEL: 6

    strategy:
      fail-fast: false
      matrix:
        os: [el8]
        compiler: [icc, oneapi]
        parallel: [ompi]

    steps:
      - uses: actions/checkout@v4
        with:
          path: gha
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          path: source
      - name: Restore cache
        uses: actions/cache/restore@v3
        id: restore-cache
        with:
          path: .ccache
          key: ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}-${{ github.sha }}
          restore-keys: |
            ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}
      - name: Configure cache
        run: ccache -z
      - name: Setup
        run: gha/scripts/ci/gh-actions/linux-setup.sh
      - name: Update
        run: gha/scripts/ci/gh-actions/run.sh update
      - name: Configure
        run: gha/scripts/ci/gh-actions/run.sh configure
      - name: Build
        run: gha/scripts/ci/gh-actions/run.sh build
      - name: Print ccache statistics
        run: ccache -s
      - name: Save cache
        uses: actions/cache/save@v3
        if: ${{ github.ref_name == 'master' && steps.restore-cache.outputs.cache-hit != 'true' }}
        id: save-cache
        with:
          path: .ccache
          key: ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}-${{ github.sha }}
      - name: Test
        run: gha/scripts/ci/gh-actions/run.sh test

  macos:
    needs: [format, git_checks]
    if: needs.git_checks.outputs.num_code_changes > 0
@@ -157,6 +266,10 @@ jobs:
      GH_YML_MATRIX_OS: ${{ matrix.os }}
      GH_YML_MATRIX_COMPILER: ${{ matrix.compiler }}
      GH_YML_MATRIX_PARALLEL: ${{ matrix.parallel }}
      CCACHE_BASEDIR: "${GITHUB_WORKSPACE}"
      CCACHE_DIR: "${GITHUB_WORKSPACE}/.ccache"
      CCACHE_COMPRESS: true
      CCACHE_COMPRESSLEVEL: 6

    strategy:
      fail-fast: false
@@ -172,25 +285,43 @@ jobs:
            compiler: xcode13_4_1

    steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          path: gha
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          path: source
      - name: Setup
        run: gha/scripts/ci/gh-actions/macos-setup.sh
      - name: Restore cache
        uses: actions/cache/restore@v3
        id: restore-cache
        with:
          path: .ccache
          key: ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}-${{ github.sha }}
          restore-keys: |
            ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}
      - name: Configure cache
        run: ccache -z
      - name: Update
        run: gha/scripts/ci/gh-actions/run.sh update
      - name: Configure
        run: gha/scripts/ci/gh-actions/run.sh configure
      - name: Build
        run: gha/scripts/ci/gh-actions/run.sh build
      - name: Print ccache statistics
        run: ccache -s
      - name: Save cache
        uses: actions/cache/save@v3
        if: ${{ github.ref_name == 'master' && steps.restore-cache.outputs.cache-hit != 'true' }}
        id: save-cache
        with:
          path: .ccache
          key: ccache-${{ matrix.os }}-${{ matrix.compiler }}-${{ matrix.parallel }}-${{ github.sha }}
      - name: Test
        run: gha/scripts/ci/gh-actions/run.sh test


  windows:
    needs: [format, git_checks]
    if: needs.git_checks.outputs.num_code_changes > 0
@@ -207,7 +338,7 @@ jobs:
      fail-fast: false
      matrix:
        os: [win2019, win2022]
-        parallel: [serial, mpi]
+        parallel: [serial, ompi]
        include:
          - os: win2019
            image: windows-2019
@@ -221,10 +352,10 @@ jobs:
        shell: bash

    steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          path: gha
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          path: source
@@ -255,7 +386,7 @@ jobs:
        baseos: [ubuntu-bionic]

    steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          path: ci-source
@@ -317,24 +448,19 @@ jobs:
    strategy:
      fail-fast: false
      matrix:
-        code: [examples, lammps, tau]
+        code: [lammps, tau]
        include:
-          - code: examples
-            repo: ornladios/ADIOS2-Examples
-            ref: master
          - code: lammps
            repo: pnorbert/lammps
            ref: fix-deprecated-adios-init
          - code: tau
            repo: ornladios/ADIOS2-Examples
            ref: master

    defaults:
      run:
        shell: bash -c "docker exec adios2-ci bash --login -e $(echo {0} | sed 's|/home/runner/work|/__w|g')"

    steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        if: ${{ matrix.repo != '' }}
        with:
          repository: ${{ matrix.repo }}
          ref: ${{ matrix.ref }}
@@ -365,12 +491,68 @@ jobs:
      - name: Test
        run: /opt/adios2/source/testing/contract/${{ matrix.code }}/test.sh

#######################################
# Code analysis builds
#######################################

  analyze:
    needs: [format, git_checks]
    name: CodeQL
    runs-on: ubuntu-latest
    container:
      image: 'ghcr.io/ornladios/adios2:ci-spack-ubuntu20.04-gcc8'
      env:
        GH_YML_JOBNAME: ubuntu20.04-gcc8-serial-codeql
        GH_YML_BASE_OS: Linux
        GH_YML_MATRIX_OS: ubuntu20.04
        GH_YML_MATRIX_COMPILER: gcc8
        GH_YML_MATRIX_PARALLEL: serial
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'cpp' ]

    steps:
    - uses: actions/checkout@v4
      with:
        path: gha
    - uses: actions/checkout@v4
      with:
        ref: ${{ github.event.pull_request.head.sha }}
        path: source
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v2
      with:
        languages: ${{ matrix.language }}
        config: |
          paths:
            - source
          paths-ignore:
            - source/thirdparty
    - name: Setup
      run: gha/scripts/ci/gh-actions/linux-setup.sh
    - name: Update
      run: gha/scripts/ci/gh-actions/run.sh update
    - name: Configure
      run: gha/scripts/ci/gh-actions/run.sh configure
    - name: Build
      run: gha/scripts/ci/gh-actions/run.sh build
    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v2
      with:
        category: "/language:${{matrix.language}}"

#######################################
# Workaround for skipping matrix jobs
#######################################

  build_and_test:
-    needs: [linux, macos, docker, contract]
+    needs: [linux_el8, linux_ubuntu, macos, docker, contract]
    runs-on: ubuntu-latest
    steps:
      - run: echo "All required jobs complete"
+4 −4
@@ -169,7 +169,7 @@ class SpackCIBridge(object):
                        # Check if we should defer pushing/testing this PR because it is based on "too new" of a commit
                        # of the main branch.
                        tmp_pr_branch = f"temporary_{pr_string}"
-                        subprocess.run(["git", "fetch", "--unshallow", "github",
+                        subprocess.run(["git", "fetch", "github",
                                       f"refs/pull/{pull.number}/head:{tmp_pr_branch}"], check=True)
                        # Get the merge base between this PR and the main branch.
                        try:
@@ -226,7 +226,7 @@ class SpackCIBridge(object):
                        # then we will push the merge commit that was automatically created by GitHub to GitLab
                        # where it will kick off a CI pipeline.
                        try:
-                            subprocess.run(["git", "fetch", "--unshallow", "github",
+                            subprocess.run(["git", "fetch", "github",
                                           f"{pull.merge_commit_sha}:{pr_string}"], check=True)
                        except subprocess.CalledProcessError:
                            print("Failed to locally checkout PR {0} ({1}). Skipping"
@@ -306,7 +306,7 @@ class SpackCIBridge(object):
        self.gitlab_shallow_fetch()

        if self.main_branch:
-            subprocess.run(["git", "fetch", "--unshallow", "github", self.main_branch], check=True)
+            subprocess.run(["git", "fetch", "github", self.main_branch], check=True)

    def get_gitlab_pr_branches(self):
        """Query GitLab for branches that have already been copied over from GitHub PRs.
@@ -350,7 +350,7 @@ class SpackCIBridge(object):
    def fetch_github_branches(self, fetch_refspecs):
        """Perform `git fetch` for a given list of refspecs."""
        print("Fetching GitHub refs for open PRs")
-        fetch_args = ["git", "fetch", "-q", "--unshallow", "github"] + fetch_refspecs
+        fetch_args = ["git", "fetch", "-q", "github"] + fetch_refspecs
        subprocess.run(fetch_args, check=True)

    def build_local_branches(self, protected_branches):
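For context on the `--unshallow` removals above: the flag aborts when the repository is already complete, so using it unconditionally breaks as soon as the clone is no longer shallow. A hypothetical demonstration in a throwaway repository:

```shell
# `git fetch --unshallow` only makes sense on a shallow clone; on a complete
# repository it exits with an error, which is why the flag was dropped above.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q upstream
git -C upstream -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m base
git clone -q upstream full-clone        # a complete (non-shallow) clone
cd full-clone
git rev-parse --is-shallow-repository   # → false
git fetch --unshallow origin 2>/dev/null || echo "rejected on a complete clone"
```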