Relay I/O

Conduit Relay I/O provides optional Silo, HDF5, and ADIOS I/O interfaces.

These interfaces can be accessed through a generic path-based API, a generic handle class, or APIs specific to each underlying I/O interface. The specific APIs provide lower-level control and allow reuse of handles, which is more efficient for most non-trivial use cases. The generic handle class strikes a balance between usability and efficiency.

Relay I/O Path-based Interface

The path-based Relay I/O interface allows you to read and write conduit::Nodes using any enabled I/O interface through a simple path-based (string) API. The underlying I/O interface is selected using the extension of the destination path or an explicit protocol argument.
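For intuition, the selection logic can be sketched in a few lines of Python. The mapping and helper below are illustrative only, not Relay's actual lookup table: an explicit protocol argument wins, otherwise the file extension decides.

```python
import os

# Illustrative extension-to-protocol mapping (an assumption for this
# sketch; Relay's real table covers every enabled protocol).
EXT_TO_PROTOCOL = {
    ".json": "json",
    ".yaml": "yaml",
    ".conduit_bin": "conduit_bin",
    ".hdf5": "hdf5",
    ".h5": "hdf5",
    ".silo": "conduit_silo",
}

def resolve_protocol(path, protocol=None):
    # an explicit protocol argument overrides extension-based detection
    if protocol is not None:
        return protocol
    ext = os.path.splitext(path)[1]
    try:
        return EXT_TO_PROTOCOL[ext]
    except KeyError:
        raise ValueError("unknown extension: " + ext)

print(resolve_protocol("my_output.json"))         # json
print(resolve_protocol("my_output.hdf5"))         # hdf5
print(resolve_protocol("my_output.dat", "hdf5"))  # hdf5 (explicit)
```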

The conduit_relay library provides the following methods in the conduit::relay::io namespace:

  • relay::io::save
    • Saves the contents of the passed Node to a file. Works like a Node::set to the file: if the file exists, it is overwritten to reflect contents of the passed Node.
  • relay::io::save_merged
    • Merges the contents of the passed Node to a file. Works like a Node::update to the file: if the file exists, new data paths are appended, common paths are overwritten, and other existing paths are not changed.
  • relay::io::load
    • Loads the contents of a file into the passed Node. Works like a Node::set from the contents of the file: if the Node has existing data, it is overwritten to reflect contents of the file.
  • relay::io::load_merged
    • Merges the contents of a file into the passed Node. Works like a Node::update from the contents of the file: if the Node has existing data, new data paths are appended, common paths are overwritten, and other existing paths are not changed.
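The set-versus-update distinction above can be modeled with plain Python dicts. This is a rough sketch only (the real Node::update recurses over typed Node children, not dicts): set replaces the destination outright, while update merges into it.

```python
def deep_update(dest, src):
    # Model of Node::update semantics on nested dicts: new paths are
    # appended, common paths are overwritten, and other existing paths
    # are left unchanged.
    for key, val in src.items():
        if isinstance(val, dict) and isinstance(dest.get(key), dict):
            deep_update(dest[key], val)
        else:
            dest[key] = val

existing = {"a": {"my_data": 1.0, "b": {"my_string": "value"}}}
incoming = {"a": {"my_data": 3.14, "b": {"new_data": 42.0}}}

# save/load behave like a plain replacement (Node::set);
# save_merged/load_merged behave like this merge (Node::update):
deep_update(existing, incoming)
print(existing)
# {'a': {'my_data': 3.14, 'b': {'my_string': 'value', 'new_data': 42.0}}}
```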

The conduit_relay_mpi_io library provides the conduit::relay::mpi::io namespace, which includes variants of these methods that take an MPI communicator. These variants pass the communicator to the underlying I/O interface to enable collective I/O. Relay currently only supports collective I/O for ADIOS.

Relay I/O Path-based Interface Examples

Save and Load

  • C++ Example:
// setup node to save
Node n;
n["a/my_data"] = 1.0;
n["a/b/my_string"] = "value";
std::cout << "\nNode to write:" << std::endl;
n.print();

//save to json using save
conduit::relay::io::save(n,"my_output.json");

//load back from json using load
Node n_load;
conduit::relay::io::load("my_output.json",n_load);
std::cout << "\nLoad result:" << std::endl;
n_load.print();
  • Output:

Node to write:

a: 
  my_data: 1.0
  b: 
    my_string: "value"


Load result:

a: 
  my_data: 1.0
  b: 
    my_string: "value"

Save Merged

  • C++ Example:
// setup node to save
Node n;
n["a/my_data"] = 1.0;
n["a/b/my_string"] = "value";
std::cout << "\nNode to write:" << std::endl;
n.print();

//save to hdf5 using save
conduit::relay::io::save(n,"my_output.hdf5");

// append a new path to the hdf5 file using save_merged
Node n2;
n2["a/b/new_data"] = 42.0;
std::cout << "\nNode to append:" << std::endl;
n2.print();
conduit::relay::io::save_merged(n2,"my_output.hdf5");

Node n_load;
// load back from hdf5 using load:
conduit::relay::io::load("my_output.hdf5",n_load);
std::cout << "\nLoad result:" << std::endl;
n_load.print();
  • Output:

Node to write:

a: 
  my_data: 1.0
  b: 
    my_string: "value"


Node to append:

a: 
  b: 
    new_data: 42.0


Load result:

a: 
  my_data: 1.0
  b: 
    my_string: "value"
    new_data: 42.0

Load Merged

  • C++ Example:
// setup node to save
Node n;
n["a/my_data"] = 1.0;
n["a/b/my_string"] = "value";
std::cout << "\nNode to write:" << std::endl;
n.print();

//save to hdf5 using generic i/o save
conduit::relay::io::save(n,"my_output.hdf5");

// append to existing node with data from hdf5 file using load_merged
Node n_load;
n_load["a/b/new_data"] = 42.0;
std::cout << "\nNode to load into:" << std::endl;
n_load.print();
conduit::relay::io::load_merged("my_output.hdf5",n_load);
std::cout << "\nLoad result:" << std::endl;
n_load.print();
  • Output:

Node to write:

a: 
  my_data: 1.0
  b: 
    my_string: "value"


Node to load into:

a: 
  b: 
    new_data: 42.0


Load result:

a: 
  b: 
    new_data: 42.0
    my_string: "value"
  my_data: 1.0

Load from Subpath

  • C++ Example:
// setup node to save
Node n;
n["path/to/my_data"] = 1.0;
std::cout << "\nNode to write:" << std::endl;
n.print();

//save to hdf5 using generic i/o save
conduit::relay::io::save(n,"my_output.hdf5");

// load only a subset of the tree
Node n_load;
conduit::relay::io::load("my_output.hdf5:path/to",n_load);
std::cout << "\nLoad result from 'path/to'" << std::endl;
n_load.print();
  • Output:

Node to write:

path: 
  to: 
    my_data: 1.0


Load result from 'path/to'

my_data: 1.0

Save to Subpath

  • C++ Example:
// setup node to save
Node n;
n["my_data"] = 1.0;
std::cout << "\nNode to write to 'path/to':" << std::endl;
n.print();

//save to hdf5 using generic i/o save
conduit::relay::io::save(n,"my_output.hdf5:path/to");

// load only a subset of the tree
Node n_load;
conduit::relay::io::load("my_output.hdf5",n_load);
std::cout << "\nLoad result:" << std::endl;
n_load.print();
  • Output:

Node to write to 'path/to':

my_data: 1.0


Load result:

path: 
  to: 
    my_data: 1.0

Relay I/O Handle Interface

The relay::io::IOHandle class provides a high level interface to query, read, and modify files.

It provides a generic interface that is more efficient than the path-based interface for protocols like HDF5, which support partial I/O and querying without reading the entire contents of a file. For convenience, it also supports the simpler built-in protocols (conduit_bin, json, etc.) that do not support partial I/O. Its basic contract is that changes to the backing store (file on disk, etc.) are not guaranteed to be reflected until the handle is closed. Relay I/O Handle supports reading Axom Sidre DataStore style files. It does not yet support Silo or ADIOS.

IOHandle has the following instance methods:

  • open

    • Opens a handle. The underlying I/O interface is selected using the extension of the destination path or an explicit protocol argument. Handles support reading and writing by default. Select a different mode by passing an options node that contains a mode child with one of the following strings:
      • rw (read + write, the default): supports both read and write operations; creates the file if it does not exist.
      • r (read only): supports only read operations; throws an Error if you open a non-existent file or on any attempt to write.
      • w (write only): supports only write operations; throws an Error on any attempt to read.

Danger

While you can read from and write to subpaths using a handle, IOHandle does not support opening a file with a subpath (e.g. myhandle.open("file.hdf5:path/data")).

  • read
    • Merges the contents from the handle or contents from a subpath of the handle into the passed Node. Works like a Node::update from the handle: if the Node has existing data, new data paths are appended, common paths are overwritten, and other existing paths are not changed.
  • write
    • Writes the contents of the passed Node to the handle or to a subpath of the handle. Works like a Node::update to the handle: if the handle has existing data, new data paths are appended, common paths are overwritten, and other existing paths are not changed.
  • has_path
    • Checks if the handle contains a given path.
  • list_child_names
    • Returns a list of the child names at a given path, or an empty list if the path does not exist.
  • remove
    • Removes any data at and below a given path. With HDF5 the space may not be fully reclaimed.
  • close
    • Closes a handle. This is the point at which changes are realized to the backing (file on disk, etc.).
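To make these method contracts concrete, here is a toy, dict-backed stand-in. It is purely illustrative (not conduit's implementation, and without the deferred-flush behavior of a real file-backed handle): paths are '/'-separated keys into a nested tree.

```python
class ToyHandle:
    # Toy stand-in for IOHandle: a nested dict plays the role of the
    # backing store. Names and structure here are assumptions for the
    # sketch, not conduit internals.
    def __init__(self):
        self._tree = {}

    def _walk(self, path, create=False):
        # descend to the parent of the final path component
        node = self._tree
        parts = path.split("/")
        for part in parts[:-1]:
            if part not in node:
                if not create:
                    return None, None
                node[part] = {}
            node = node[part]
        return node, parts[-1]

    def write(self, value, path):
        # like IOHandle::write: appends new paths, overwrites common ones
        parent, leaf = self._walk(path, create=True)
        parent[leaf] = value

    def has_path(self, path):
        parent, leaf = self._walk(path)
        return parent is not None and leaf in parent

    def read(self, path):
        parent, leaf = self._walk(path)
        return parent[leaf]

    def list_child_names(self, path=""):
        # empty list when the path is missing or is a leaf
        if path and not self.has_path(path):
            return []
        node = self._tree if path == "" else self.read(path)
        return list(node) if isinstance(node, dict) else []

    def remove(self, path):
        # removes the subtree at and below the given path
        parent, leaf = self._walk(path)
        if parent is not None:
            parent.pop(leaf, None)

h = ToyHandle()
h.write(1.0, "a/data")
h.write("value", "a/b/my_string")
print(h.list_child_names("a"))   # ['data', 'b']
h.remove("a/data")
print(h.has_path("a/data"))      # False
```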

Relay I/O Handle Examples

  • C++ Example:
// setup node with example data to save
Node n;
n["a/data"]   = 1.0;
n["a/more_data"] = 2.0;
n["a/b/my_string"] = "value";
std::cout << "\nNode to write:" << std::endl;
n.print();

// save to hdf5 file using the path-based api
conduit::relay::io::save(n,"my_output.hdf5");

// inspect and modify with an IOHandle
conduit::relay::io::IOHandle h;
h.open("my_output.hdf5");

// check for and read a path we are interested in
if( h.has_path("a/data") )
{
    Node nread;
    h.read("a/data",nread);
    std::cout << "\nValue at \"a/data\" = " 
              << nread.to_float64()
              << std::endl;
}

// check for and remove a path we don't want
if( h.has_path("a/more_data") )
{
    h.remove("a/more_data");
    std::cout << "\nRemoved \"a/more_data\"" 
              << std::endl;
}

// verify the data was removed
if( !h.has_path("a/more_data") )
{
    std::cout << "\nPath \"a/more_data\" is no more" 
              << std::endl;
}

std::cout << "\nWriting to \"a/c\""
          << std::endl;
// write some new data
n = 42.0;
h.write(n,"a/c");

// find the names of the children of "a"
std::vector<std::string> cld_names;
h.list_child_names("a",cld_names);

// print the names
std::cout << "\nChildren of \"a\": ";
std::vector<std::string>::const_iterator itr;
for (itr = cld_names.begin();
     itr < cld_names.end();
     ++itr)
{
    std::cout << "\"" << *itr << "\" ";
}

std::cout << std::endl;

Node nread;
// read the entire contents
h.read(nread);

std::cout << "\nRead Result:" << std::endl;
nread.print();
  • Output:

Node to write:

a: 
  data: 1.0
  more_data: 2.0
  b: 
    my_string: "value"


Value at "a/data" = 1

Removed "a/more_data"

Path "a/more_data" is no more

Writing to "a/c"

Children of "a": "data" "b" "c" 

Read Result:

a: 
  data: 1.0
  b: 
    my_string: "value"
  c: 42.0

  • Python Example:
import conduit
import conduit.relay.io

n = conduit.Node()
n["a/data"]   = 1.0
n["a/more_data"] = 2.0
n["a/b/my_string"] = "value"
print("\nNode to write:")
print(n)

# save to hdf5 file using the path-based api
conduit.relay.io.save(n,"my_output.hdf5")

# inspect and modify with an IOHandle
h = conduit.relay.io.IOHandle()
h.open("my_output.hdf5")

# check for and read a path we are interested in
if h.has_path("a/data"):
     nread = conduit.Node()
     h.read(nread,"a/data")
     print('\nValue at "a/data" = {0}'.format(nread.value()))

# check for and remove a path we don't want
if h.has_path("a/more_data"):
    h.remove("a/more_data")
    print('\nRemoved "a/more_data"')

# verify the data was removed
if not h.has_path("a/more_data"):
    print('\nPath "a/more_data" is no more')

# write some new data
print('\nWriting to "a/c"')
n.set(42.0)
h.write(n,"a/c")

# find the names of the children of "a"
cnames = h.list_child_names("a")
print('\nChildren of "a": {0}'.format(cnames))

nread = conduit.Node()
# read the entire contents
h.read(nread)

print("\nRead Result:")
print(nread)
  • Output:
 
 Node to write:
 
 a: 
   data: 1.0
   more_data: 2.0
   b: 
     my_string: "value"
 
 
 Value at "a/data" = 1.0
 
 Removed "a/more_data"
 
 Path "a/more_data" is no more
 
 Writing to "a/c"
 
 Children of "a": ['data', 'b', 'c']
 
 Read Result:
 
 a: 
   data: 1.0
   b: 
     my_string: "value"
   c: 42.0
 
 
  • C++ Sidre Basic Example:
// this example reads a sample hdf5 sidre style file

std::string input_fname = relay_test_data_path(
                                "texample_sidre_basic_ds_demo.sidre_hdf5");

// open our sidre file for read with an IOHandle
conduit::relay::io::IOHandle h;
h.open(input_fname,"sidre_hdf5");

// find the names of the children at the root
std::vector<std::string> cld_names;
h.list_child_names(cld_names);

// print the names
std::cout << "\nChildren at root: ";
std::vector<std::string>::const_iterator itr;
for (itr = cld_names.begin();
     itr < cld_names.end();
     ++itr)
{
    std::cout << "\"" << *itr << "\" ";
}

Node nread;
// read the entire contents
h.read(nread);

std::cout << "\nRead Result:" << std::endl;
nread.print();
  • Output:

Children at root: "my_scalars" "my_strings" "my_arrays" 
Read Result:

my_scalars: 
  i64: 1
  f64: 10.0
my_strings: 
  s0: "s0 string"
  s1: "s1 string"
my_arrays: 
  a5_i64: [0, 1, 2, 3, 4]
  a0_i64: []
  a5_i64_ext: [0, 1, 2, 3, -5]
  b_v0: []
  b_v1: [1.0, 1.0, 1.0]
  b_v2: [2.0, 2.0, 2.0]

  • Python Sidre Basic Example:
import conduit
import conduit.relay.io

# this example reads a sample hdf5 sidre style file
input_fname = relay_test_data_path("texample_sidre_basic_ds_demo.sidre_hdf5")

# open our sidre file for read with an IOHandle
h = conduit.relay.io.IOHandle()
h.open(input_fname,"sidre_hdf5")

# find the names of the children at the root
cnames = h.list_child_names()
print('\nChildren at root {0}'.format(cnames))

nread = conduit.Node()
# read the entire contents
h.read(nread)

print("Read Result:")
print(nread)

  • Output:
 
 Children at root ['my_scalars', 'my_strings', 'my_arrays']
 Read Result:
 
 my_scalars: 
   i64: 1
   f64: 10.0
 my_strings: 
   s0: "s0 string"
   s1: "s1 string"
 my_arrays: 
   a5_i64: [0, 1, 2, 3, 4]
   a0_i64: []
   a5_i64_ext: [0, 1, 2, 3, -5]
   b_v0: []
   b_v1: [1.0, 1.0, 1.0]
   b_v2: [2.0, 2.0, 2.0]
 
 
  • C++ Sidre with Root File Example:
// this example reads a sample hdf5 sidre datastore, grouped by a root file
std::string input_fname = relay_test_data_path(
                                "out_spio_blueprint_example.root");

// read using the root file
conduit::relay::io::IOHandle h;
h.open(input_fname,"sidre_hdf5");

// find the names of the children at the root
std::vector<std::string> cld_names;
h.list_child_names(cld_names);

// the "root" (/) of the Sidre-based IOHandle to the datastore provides
// access to the root file itself, and all of the data groups

// print the names
std::cout << "\nChildren at root: ";
std::vector<std::string>::const_iterator itr;
for (itr = cld_names.begin();
     itr < cld_names.end();
     ++itr)
{
    std::cout << "\"" << *itr << "\" ";
}

Node nroot;
// read the entire root file contents
h.read("root",nroot);

std::cout << "\nRead \"root\" Result:" << std::endl;
nroot.print();

Node nread;
// read all of data group 0
h.read("0",nread);

std::cout << "\nRead \"0\" Result:" << std::endl;
nread.print();

// reset, or trees will blend in this case
nread.reset();

// read a subpath of data group 1
h.read("1/mesh",nread);

std::cout << "\nRead \"1/mesh\" Result:" << std::endl;
nread.print();
  • Output:

Children at root: "root" "0" "1" "2" "3" 
Read "root" Result:

blueprint_index: 
  mesh: 
    state: 
      number_of_domains: 4
    coordsets: 
      coords: 
        type: "uniform"
        coord_system: 
          axes: 
            x: 
            y: 
          type: "cartesian"
        path: "mesh/coordsets/coords"
    topologies: 
      mesh: 
        type: "uniform"
        coordset: "coords"
        path: "mesh/topologies/mesh"
    fields: 
      field: 
        number_of_components: 1
        topology: "mesh"
        association: "element"
        path: "mesh/fields/field"
      rank: 
        number_of_components: 1
        topology: "mesh"
        association: "element"
        path: "mesh/fields/rank"
file_pattern: "out_spio_blueprint_example/out_spio_blueprint_example_%07d.hdf5"
number_of_files: 4
number_of_trees: 4
protocol: 
  name: "sidre_hdf5"
  version: "0.0"
tree_pattern: "datagroup_%07d"


Read "0" Result:

mesh: 
  coordsets: 
    coords: 
      dims: 
        i: 3
        j: 3
      origin: 
        x: 0.0
        y: -10.0
      spacing: 
        dx: 10.0
        dy: 10.0
      type: "uniform"
  topologies: 
    mesh: 
      type: "uniform"
      coordset: "coords"
  fields: 
    field: 
      association: "element"
      topology: "mesh"
      volume_dependent: "false"
      values: [0.0, 1.0, 2.0, 3.0]
    rank: 
      association: "element"
      topology: "mesh"
      values: [0, 0, 0, 0]


Read "1/mesh" Result:

coordsets: 
  coords: 
    dims: 
      i: 3
      j: 3
    origin: 
      x: 20.0
      y: -10.0
    spacing: 
      dx: 10.0
      dy: 10.0
    type: "uniform"
topologies: 
  mesh: 
    type: "uniform"
    coordset: "coords"
fields: 
  field: 
    association: "element"
    topology: "mesh"
    volume_dependent: "false"
    values: [0.0, 1.0, 2.0, 3.0]
  rank: 
    association: "element"
    topology: "mesh"
    values: [1, 1, 1, 1]

  • Python Sidre with Root File Example:
import conduit
import conduit.relay.io

# this example reads a sample hdf5 sidre datastore,
# grouped by a root file
input_fname = relay_test_data_path("out_spio_blueprint_example.root")

# open our sidre datastore for read via root file with an IOHandle
h = conduit.relay.io.IOHandle()
h.open(input_fname,"sidre_hdf5")

# find the names of the children at the root
# the "root" (/) of the Sidre-based IOHandle to the datastore provides
# access to the root file itself, and all of the data groups
cnames = h.list_child_names()
print('\nChildren at root {0}'.format(cnames))

nroot = conduit.Node()
# read the entire root file contents
h.read(path="root",node=nroot)

print("Read 'root' Result:")
print(nroot)

nread = conduit.Node()
# read all of data group 0
h.read(path="0",node=nread)

print("Read '0' Result:")
print(nread)

# reset, or trees will blend in this case
nread.reset()

# read a subpath of data group 1
h.read(path="1/mesh",node=nread)

print("Read '1/mesh' Result:")
print(nread)

  • Output:
 
 Children at root ['root', '0', '1', '2', '3']
 Read 'root' Result:
 
 blueprint_index: 
   mesh: 
     state: 
       number_of_domains: 4
     coordsets: 
       coords: 
         type: "uniform"
         coord_system: 
           axes: 
             x: 
             y: 
           type: "cartesian"
         path: "mesh/coordsets/coords"
     topologies: 
       mesh: 
         type: "uniform"
         coordset: "coords"
         path: "mesh/topologies/mesh"
     fields: 
       field: 
         number_of_components: 1
         topology: "mesh"
         association: "element"
         path: "mesh/fields/field"
       rank: 
         number_of_components: 1
         topology: "mesh"
         association: "element"
         path: "mesh/fields/rank"
 file_pattern: "out_spio_blueprint_example/out_spio_blueprint_example_%07d.hdf5"
 number_of_files: 4
 number_of_trees: 4
 protocol: 
   name: "sidre_hdf5"
   version: "0.0"
 tree_pattern: "datagroup_%07d"
 
 Read '0' Result:
 
 mesh: 
   coordsets: 
     coords: 
       dims: 
         i: 3
         j: 3
       origin: 
         x: 0.0
         y: -10.0
       spacing: 
         dx: 10.0
         dy: 10.0
       type: "uniform"
   topologies: 
     mesh: 
       type: "uniform"
       coordset: "coords"
   fields: 
     field: 
       association: "element"
       topology: "mesh"
       volume_dependent: "false"
       values: [0.0, 1.0, 2.0, 3.0]
     rank: 
       association: "element"
       topology: "mesh"
       values: [0, 0, 0, 0]
 
 Read '1/mesh' Result:
 
 coordsets: 
   coords: 
     dims: 
       i: 3
       j: 3
     origin: 
       x: 20.0
       y: -10.0
     spacing: 
       dx: 10.0
       dy: 10.0
     type: "uniform"
 topologies: 
   mesh: 
     type: "uniform"
     coordset: "coords"
 fields: 
   field: 
     association: "element"
     topology: "mesh"
     volume_dependent: "false"
     values: [0.0, 1.0, 2.0, 3.0]
   rank: 
     association: "element"
     topology: "mesh"
     values: [1, 1, 1, 1]
 
 

Relay I/O HDF5 Interface

The Relay I/O HDF5 interface provides methods to read and write Nodes using HDF5 handles. It is also the interface used to implement the path-based and handle I/O interfaces for HDF5. This interface provides more control and allows more efficient reuse of I/O handles. It is only available in C++.

Relay I/O HDF5 libver

HDF5 provides a libver setting that controls the data structures and features used. When using HDF5 1.10 or newer, Relay I/O defaults to libver 1.8 when creating HDF5 files to provide wider read compatibility. This setting can be controlled via the hdf5 Relay option libver; accepted values include: default, none, latest, v108, and v110.

Relay I/O HDF5 Interface Examples

Here is an example exercising the basic parts of Relay I/O's HDF5 interface; for more detailed documentation, see the conduit_relay_io_hdf5_api.hpp header file.

HDF5 I/O Interface Basics

  • C++ Example:
// setup node to save
Node n;
n["a/my_data"] = 1.0;
n["a/b/my_string"] = "value";
std::cout << "\nNode to write:" << std::endl;
n.print();

// open hdf5 file and obtain a handle
hid_t h5_id = conduit::relay::io::hdf5_create_file("myoutput.hdf5");

// write data 
conduit::relay::io::hdf5_write(n,h5_id);

// close our file
conduit::relay::io::hdf5_close_file(h5_id);
    
// open our file for read + write
h5_id = conduit::relay::io::hdf5_open_file_for_read_write("myoutput.hdf5");

// check if a subpath exists
if(conduit::relay::io::hdf5_has_path(h5_id,"a/my_data"))
    std::cout << "\nPath 'myoutput.hdf5:a/my_data' exists" << std::endl;
    
Node n_read;
// read a subpath (Note: read works like `load_merged`)
conduit::relay::io::hdf5_read(h5_id,"a/my_data",n_read);
std::cout << "\nData loaded:" << std::endl;
n_read.print();

// write more data to the file
n.reset();
// write data (appends data, works like `save_merged`)
// the Node tree needs to be compatible with the existing
// hdf5 state, adding new paths is always fine.  
n["a/my_data"] = 3.1415;
n["a/b/c"] = 144;
// lists are also supported
n["a/my_list"].append() = 42.0;
n["a/my_list"].append() = 42;

conduit::relay::io::hdf5_write(n,h5_id);

// check if a subpath of a list exists
if(conduit::relay::io::hdf5_has_path(h5_id,"a/my_list/0"))
    std::cout << "\nPath 'myoutput.hdf5:a/my_list/0' exists" << std::endl;

// Read the entire tree:
n_read.reset();
conduit::relay::io::hdf5_read(h5_id,n_read);
std::cout << "\nData loaded:" << std::endl;
n_read.print();

// other helpers:

// check if a path is a hdf5 file:
if(conduit::relay::io::is_hdf5_file("myoutput.hdf5"))
    std::cout << "\nFile 'myoutput.hdf5' is a hdf5 file" << std::endl;
  • Output:

Node to write:

a: 
  my_data: 1.0
  b: 
    my_string: "value"


Path 'myoutput.hdf5:a/my_data' exists

Data loaded:
1.0

Path 'myoutput.hdf5:a/my_list/0' exists

Data loaded:

a: 
  my_data: 3.1415
  b: 
    my_string: "value"
    c: 144
  my_list: 
    - 42.0
    - 42


File 'myoutput.hdf5' is a hdf5 file

HDF5 I/O Options

  • C++ Example:
Node io_about;
conduit::relay::io::about(io_about);
std::cout << "\nRelay I/O Info and Default Options:" << std::endl;
std::cout << io_about.to_yaml() << std::endl;

Node &hdf5_opts = io_about["options/hdf5"];
// change the default chunking threshold to 
// a smaller number to enable compression for
// a small array
hdf5_opts["chunking/threshold"]  = 2000;
hdf5_opts["chunking/chunk_size"] = 2000;

std::cout << "\nNew HDF5 I/O Options:" << std::endl;
hdf5_opts.print();
// set options
conduit::relay::io::hdf5_set_options(hdf5_opts);

int num_vals = 5000;
Node n;
n["my_values"].set(DataType::float64(num_vals));

float64 *v_ptr = n["my_values"].value();
for(int i=0; i< num_vals; i++)
{
    v_ptr[i] = float64(i);
}

// save using options
std::cout << "\nsaving data to 'myoutput_chunked.hdf5' " << std::endl;

conduit::relay::io::hdf5_save(n,"myoutput_chunked.hdf5");
  • Output:

Relay I/O Info and Default Options:

protocols: 
  json: "enabled"
  conduit_json: "enabled"
  conduit_base64_json: "enabled"
  yaml: "enabled"
  conduit_bin: "enabled"
  csv: "enabled"
  hdf5: "enabled"
  sidre_hdf5: "enabled"
  h5z-zfp: "disabled"
  conduit_silo: "enabled"
  conduit_silo_mesh: "enabled"
  adios: "disabled"
options: 
  hdf5: 
    compact_storage: 
      enabled: "true"
      threshold: 1024
    chunking: 
      enabled: "true"
      threshold: 2000000
      chunk_size: 1000000
      compression: 
        method: "gzip"
        level: 5


New HDF5 I/O Options:

compact_storage: 
  enabled: "true"
  threshold: 1024
chunking: 
  enabled: "true"
  threshold: 2000
  chunk_size: 2000
  compression: 
    method: "gzip"
    level: 5


saving data to 'myoutput_chunked.hdf5' 

You can verify using h5stat that the data set was written to the hdf5 file using chunking and compression.
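As a sketch of what the threshold and chunk_size options above mean (an assumption-based simplification of HDF5's actual dataset-creation logic, not conduit or HDF5 code): datasets whose byte size exceeds the chunking threshold are written with chunked storage, which is what enables the gzip compression filter; smaller datasets stay contiguous.

```python
def storage_plan(num_bytes, threshold=2000000, chunk_size=1000000):
    # Hypothetical helper for illustration: decide between contiguous
    # and chunked storage the way the options above describe.
    # Defaults mirror the default option values shown in the output.
    if num_bytes <= threshold:
        return ("contiguous", None)
    full, rem = divmod(num_bytes, chunk_size)
    return ("chunked", full + (1 if rem else 0))

# 5000 float64 values = 40000 bytes: below the default threshold,
# but above the lowered threshold used in the example above
print(storage_plan(40000))                                   # ('contiguous', None)
print(storage_plan(40000, threshold=2000, chunk_size=2000))  # ('chunked', 20)
```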