
Conversation

@lcirrottola
Contributor

⚠️ NOT TO MERGE RIGHT NOW ⚠️

🚧 Add support for HDF5 I/O, thanks to @gabriel-suau.

Needs more extensive testing and some minor bugfixes before merging.
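
For reference, a run producing HDF5 output would look something like `mpirun -np 4 parmmg_O3 -in mesh.h5 -out mesh.o.h5` (the binary name and file names are illustrative, not taken from this PR): as described in the commits below, the .h5 extension must be given explicitly for the mesh to be saved in HDF5 format.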

Algiane and others added 27 commits February 24, 2023 16:52
…an used by writer

  - Add the possibility to open/write a .h5 mesh. For now, the .h5 extension must be provided explicitly to save the mesh in HDF5 format (even with HDF5 inputs).
  - Fix multiple issues when a file is read with a different number of procs than it was saved with:
     - pass the MPI communicator as an argument to the functions called before the first load-balancing step, so that either the read_comm communicators or the computational comm can be passed (this avoids having to guess that saving a .h5 file implies that a .h5 file was read too); a minimal sketch of this pattern follows the commit list;
     - fix incorrect communicator calls;
…load on more procs than the number of partitions.
Remove calls to H5Sselect_none for empty entities on partitions (they lead to a deadlock in H5Dwrite).
…fault, or with -met or -sol command line options.
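
A minimal sketch of the I/O pattern these commits describe, assuming a generic MPI + parallel-HDF5 setup (the function write_entities, the dataset name, and the buffers are hypothetical, not ParMmg's actual API): the communicator is an explicit argument so the caller can pass read_comm or the computational comm, every rank joins the collective H5Dwrite, and a dataset that is empty on all partitions is skipped before any HDF5 call instead of being written through all-empty H5Sselect_none selections, which is my reading of the deadlock the commit above removes.

```c
#include <mpi.h>
#include <hdf5.h>

/* Hedged sketch, not ParMmg's code: collectively write one integer dataset
 * per entity type, with the communicator passed in rather than assumed. */
static herr_t write_entities(hid_t file, const char *name, MPI_Comm comm,
                             const int *buf, hsize_t nloc) {
  unsigned long long loc = (unsigned long long)nloc, tot = 0, off = 0;
  int rank;

  MPI_Comm_rank(comm, &rank);
  /* Global size of the dataset and this rank's offset in the file space. */
  MPI_Allreduce(&loc, &tot, 1, MPI_UNSIGNED_LONG_LONG, MPI_SUM, comm);
  MPI_Exscan(&loc, &off, 1, MPI_UNSIGNED_LONG_LONG, MPI_SUM, comm);
  if (rank == 0) off = 0; /* MPI_Exscan output is undefined on rank 0 */

  /* Entity type absent from every partition: skip the dataset entirely, so
   * no rank ever reaches H5Dwrite with an all-empty selection. */
  if (tot == 0) return 0;

  hsize_t gdim = (hsize_t)tot, start = (hsize_t)off, count = nloc;
  hid_t fspace = H5Screate_simple(1, &gdim, NULL);
  hid_t mspace = H5Screate_simple(1, &count, NULL); /* 0-sized is legal */
  hid_t dset = H5Dcreate(file, name, H5T_NATIVE_INT, fspace,
                         H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

  if (count > 0)
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &start, NULL, &count, NULL);
  else
    H5Sselect_none(fspace); /* locally empty rank still joins the call */

  hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
  H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
  herr_t ret = H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

  H5Pclose(dxpl); H5Dclose(dset); H5Sclose(mspace); H5Sclose(fspace);
  return ret;
}

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  MPI_Comm comm = MPI_COMM_WORLD; /* stands in for read_comm / comp. comm */
  int rank;
  MPI_Comm_rank(comm, &rank);

  hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
  H5Pset_fapl_mpio(fapl, comm, MPI_INFO_NULL);
  hid_t file = H5Fcreate("mesh.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

  int data[4] = {rank, rank, rank, rank};
  /* Rank 0 writes 4 entries, the others none; all must call collectively. */
  write_entities(file, "Tetrahedra", comm, data, rank == 0 ? 4 : 0);

  H5Fclose(file); H5Pclose(fapl);
  MPI_Finalize();
  return 0;
}
```

Compile against a parallel HDF5 build (e.g. with the h5pcc wrapper) and run under mpirun.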
@Algiane
Member

Algiane commented May 5, 2023

As this PR doesn't break the develop branch and has diverged a lot across many files, I will merge it even though the feature is not entirely implemented:

  • it adds distributed inputs and seems to work for files saved with the same number of processes (npartin) as the number used for reading (npart);

  • npartin > npart will not be implemented;

  • there are still analysis issues when npartin < npart; to be fixed later;

  • the format has to be improved: the xdmf file format doesn't allow visualizing boundary triangles and tetrahedra at the same time. We have to find a solution (vtk_hdf5 format?) that allows this as well as the visualization of parallel entities.

@Algiane Algiane merged commit 8e0c08e into develop May 5, 2023
@Algiane Algiane deleted the feature/parallel-hdf5-io branch August 21, 2024 18:46

Labels

  • part: application (application usage specific)
  • part: I/O (specific to I/Os)
  • part: MPI / HPC (MPI or HPC related)
  • priority: medium (linked to strong improvements or to a medium-term deadline)
  • state: in progress (started but unfinished action)
