The HDF5 format has been working great for me, but I ran into trouble when I started to mix it with multiprocessing. It was the worst kind of trouble: the intermittent error.
Here are the dangers/issues, in order of escalation:
(TL;DR is use a generator to feed data from your file into the child processes as they spawn. It's the easiest way. Read on for harder ways.)
- An h5py file handle can't be pickled and therefore can't be passed as an argument using pool.map()
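This pickling restriction isn't specific to h5py: any object wrapping an OS-level file descriptor refuses to pickle, which is why it can't travel through pool.map(). A quick sketch with an ordinary file object shows the same failure mode:

```python
import pickle

# An h5py.File wraps an OS-level file descriptor, which pickle cannot
# serialize -- exactly like a plain Python file object.
fh = open(__file__, "rb")
try:
    pickle.dumps(fh)
    status = "picklable"
except TypeError:
    # pickle raises TypeError: cannot pickle '_io.BufferedReader' object
    status = "not picklable"
finally:
    fh.close()

print(status)  # -> not picklable
```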
- If you set the handle as a global and access it from the child processes, you run the risk of a race condition, which leads to corrupted reads. My personal run-in was that my code sometimes ran fine, but sometimes complained that there were NaNs or Infinities in the data. That wasted some time to track down. Other people have had this kind of problem [1].
- The same problem occurs if you pass the filename and have the different processes each open their own instance of the file.
- The hard way to solve this problem is to switch your workflow over to MPI and use mpi4py or somesuch. The details are here. My problem is embarrassingly parallel and the computations result in heavy data reduction, so I ended up simply doing ...
- The easy way to solve the problem is to have a generator in the parent process that reads the data from the file and passes it to the child processes in a JIT manner.
    fin = h5py.File(fname, 'r')
    X = (read_chunk(fin, [n, n + Nstep]) for n in range(0, N, Nstep))
    pool = mp.Pool()
    result = pool.map(child_process, X)
    fin.close()