Nipype looks for node results in wrong folders #2689

Closed
achetverikov opened this issue Sep 2, 2018 · 11 comments

@achetverikov
Contributor

achetverikov commented Sep 2, 2018

Summary

I get errors like this:

Traceback (most recent call last):

  File "<ipython-input-2-4d0fd1617f36>", line 1, in <module>
    runfile('/home/visual/andche/MRI_DATA/andche_pipeline_v1.py', wdir='/home/visual/andche/MRI_DATA')

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 668, in runfile
    execfile(filename, namespace)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 100, in execfile
    builtins.execfile(filename, *where)

  File "/home/visual/andche/MRI_DATA/andche_pipeline_v1.py", line 629, in <module>
    flow.run('PBS', plugin_args={'max_jobs':100, 'qsub_args' : '-l walltime=00:20:00,mem=8g', 'template':'#!/bin/sh\necho `date "+%Y%m%d-%H%M%S"`\nsource activate /project/3019005.02/conda_env/nipype_v1x\n', 'max_tries':3,'retry_timeout': 5, 'max_jobname_len': 15})

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 595, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 162, in run
    self._clean_queue(jobid, graph, result=result))

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 224, in _clean_queue
    raise RuntimeError("".join(result['traceback']))

RuntimeError: Traceback (most recent call last):
  File "/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/batch/pyscript_20180902_190106_preproc_flow_v1.mc_and_detrend_motion_correct_post_check.b2.py", line 33, in <module>
    result = info['node'].run(updatehash=info['updatehash'])
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 471, in run
    result = self._run_interface(execute=True)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 1250, in _run_interface
    self.config['execution']['stop_on_first_crash'])))
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 1125, in _collate_results
    for i, nresult, err in nodes:
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 106, in nodelist_runner
    result = node.run(updatehash=updatehash)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 424, in run
    updatehash=updatehash and not updated)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 555, in _run_interface
    return self._run_command(execute)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 589, in _run_command
    result = self._load_results()
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 569, in _load_results
    needed_outputs=self.needed_outputs)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 620, in aggregate_outputs
    raise error
TraitError: The trait 'std_img' of a MCFLIRTOutputSpec instance is an existing file name, but the path  '/home/visual/andche/20180810_121603fMRIrestcmrrmb4TR1500128vols003a001_warp4D_mcf.nii.gz_sigma.nii.gz' does not exist.

or this:


Traceback (most recent call last):

  File "<ipython-input-1-4d0fd1617f36>", line 1, in <module>
    runfile('/home/visual/andche/MRI_DATA/andche_pipeline_v1.py', wdir='/home/visual/andche/MRI_DATA')

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 668, in runfile
    execfile(filename, namespace)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 100, in execfile
    builtins.execfile(filename, *where)

  File "/home/visual/andche/MRI_DATA/andche_pipeline_v1.py", line 629, in <module>
    flow.run('PBS', plugin_args={'max_jobs':100, 'qsub_args' : '-l walltime=00:20:00,mem=8g', 'template':'#!/bin/sh\necho `date "+%Y%m%d-%H%M%S"`\nsource activate /project/3019005.02/conda/nipype\n', 'max_tries':3,'retry_timeout': 5, 'max_jobname_len': 15})

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 595, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 162, in run
    self._clean_queue(jobid, graph, result=result))

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 224, in _clean_queue
    raise RuntimeError("".join(result['traceback']))

RuntimeError: Traceback (most recent call last):
  File "/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/batch/pyscript_20180902_202027_preproc_flow_v1.mc_and_detrend_meanBold.b2.py", line 34, in <module>
    result = info['node'].run(updatehash=info['updatehash'])
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 471, in run
    result = self._run_interface(execute=True)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 1250, in _run_interface
    self.config['execution']['stop_on_first_crash'])))
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 1125, in _collate_results
    for i, nresult, err in nodes:
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 106, in nodelist_runner
    result = node.run(updatehash=updatehash)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 424, in run
    updatehash=updatehash and not updated)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 555, in _run_interface
    return self._run_command(execute)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 589, in _run_command
    result = self._load_results()
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 569, in _load_results
    needed_outputs=self.needed_outputs)
  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 620, in aggregate_outputs
    raise error
TraitError: The trait 'out_file' of a MathsOutput instance is an existing file name, but the path  '/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/batch/20180810_121603fMRIrestcmrrmb4TR1500128vols007a001_warp4D_mean.nii.gz' does not exist.

The files in question exist in the node working directories. Apparently, nipype forgets to change the working directory to the node directory somewhere, but I haven't been able to find out where.

Platform details:

{'commit_hash': '%h',
 'commit_source': 'archive substitution',
 'networkx_version': '2.1',
 'nibabel_version': '2.3.0',
 'nipype_version': '1.1.2',
 'numpy_version': '1.15.0',
 'pkg_path': '/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype',
 'scipy_version': '1.1.0',
 'sys_executable': '/project/3019005.02/conda/nipype/bin/python',
 'sys_platform': 'linux2',
 'sys_version': '2.7.15 | packaged by conda-forge | (default, Jul 27 2018, 10:26:36) \n[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)]',
 'traits_version': '4.6.0'}
@djarecka
Collaborator

djarecka commented Sep 3, 2018

So what are the paths of the existing files? Can you share the code that leads to the error?

Any chance you are running it in a container that you can share?

@achetverikov
Contributor Author

Sorry, no Docker. I can share the code if you think it would help, but I honestly doubt it; you can't run it without the data anyway.

The correct paths should be in the node working folder, like

/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/motion_correct_post_check/mapflow/_motion_correct_post_check0/20180810_121603fMRIrestcmrrmb4TR1500128vols003a001_warp4D_mcf.nii.gz_sigma.nii.gz

in the first example.

But you can see from the paths in the error messages above that in the first example nipype looks for the file in the home folder, and in the second example it looks for it in the batch folder, so it seems the paths are not set properly somewhere.

I did not have this error before, but I recently switched to Python 2.7 and also updated to the latest version of nipype (from 1.0.x). I see that similar issues have been raised here on GitHub in the past, but they all seem to have been resolved by the latest update.

@achetverikov
Contributor Author

achetverikov commented Sep 3, 2018

Here's the code for the MCFLIRT node from the first example:


motion_correct_post_check = nipype.MapNode(
    interface=fsl.MCFLIRT(save_mats=True, smooth=0, dof=12,
                          save_plots=True, save_rms=True, stats_imgs=True),
    name='motion_correct_post_check',
    iterfield=['in_file'])

mc_and_detrend.connect(motion_correct_post_check, 'std_img', mc_and_detrend_out, 'MRI.PREPROC.QC.MOTION_PLOTS.FLIRT_POST')
mc_and_detrend.connect(motion_correct_post_check, 'par_file', mc_and_detrend_out, 'MRI.PREPROC.QC.MOTION_PLOTS.FLIRT_POST.MOT_PARS')

And here's the meanBold node from the second example:

mean_bold = nipype.MapNode(interface=fsl.maths.MeanImage(dimension='T'),
                           name='meanBold',
                           iterfield=['in_file'])

mc_and_detrend.connect([(apply_mc_transfs, mean_bold, [('out_file', 'in_file')]),
                        (mean_bold, add_images, [('out_file', 'operand_file')])
                        ])
    

@effigies
Member

effigies commented Sep 3, 2018

This is consistent with a problem we saw in nipreps/fmriprep#2161. I was unable to reproduce it in a small workflow. Are you able to produce a small workflow that consistently reproduces this issue?

Also, are all failing nodes MapNodes? That may help narrow things down.

@achetverikov
Contributor Author

achetverikov commented Sep 3, 2018

Yes, it seems they are all MapNodes. I'm not sure about a small example; for now it consistently gets stuck on one subject, so I'm trying to do some debugging. Here's the log with debug mode on and some extra printouts:

180903-14:58:36,401 nipype.workflow INFO:
         [Node] Setting-up "_meanBold3" in "/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold3".
180903-14:58:36,403 nipype.workflow DEBUG:
         Setting node inputs
180903-14:58:36,406 nipype.workflow DEBUG:
         [Node] Hashes: [('dimension', u'T'), ('in_file', (u'/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/apply_mc_transfs/mapflow/_apply_mc_transfs3/20180810_121603fMRIrestcmrrmb4TR1500128vols006a001_warp4D.nii.gz', 'ec567f9d3f271a73b151201a129a28fd')), ('output_type', u'NIFTI_GZ')], 35c69c3017b3a2d3be444ca6354c05df, /project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold3/_0x35c69c3017b3a2d3be444ca6354c05df.json, [u'/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold3/_0x35c69c3017b3a2d3be444ca6354c05df.json']
180903-14:58:36,408 nipype.workflow DEBUG:
         [Node] Up-to-date cache found for "_meanBold3".
180903-14:58:36,409 nipype.workflow DEBUG:
         Only updating node hashes or skipping execution
----------------
/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold3
----------------
180903-14:58:36,424 nipype.workflow DEBUG:
         Aggregate: False
180903-14:58:36,426 nipype.workflow INFO:
         [Node] Cached "_meanBold3" - collecting precomputed outputs
180903-14:58:36,427 nipype.workflow INFO:
         [Node] "_meanBold3" found cached.
180903-14:58:36,432 nipype.workflow DEBUG:
         setting input 4 in_file /project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/apply_mc_transfs/mapflow/_apply_mc_transfs4/20180810_121603fMRIrestcmrrmb4TR1500128vols007a001_warp4D.nii.gz
180903-14:58:36,435 nipype.workflow INFO:
         [Node] Setting-up "_meanBold4" in "/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold4".
180903-14:58:36,437 nipype.workflow DEBUG:
         Setting node inputs
180903-14:58:36,440 nipype.workflow DEBUG:
         [Node] Hashes: [('dimension', u'T'), ('in_file', (u'/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/apply_mc_transfs/mapflow/_apply_mc_transfs4/20180810_121603fMRIrestcmrrmb4TR1500128vols007a001_warp4D.nii.gz', '69e62a2c58f458996055d3d9a0b0ee63')), ('output_type', u'NIFTI_GZ')], 2241f64e6b807f2d3a8cab2b86bebe70, /project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold4/_0x2241f64e6b807f2d3a8cab2b86bebe70.json, [u'/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold4/_0x2241f64e6b807f2d3a8cab2b86bebe70.json']
180903-14:58:36,441 nipype.workflow DEBUG:
         [Node] Up-to-date cache found for "_meanBold4".
180903-14:58:36,443 nipype.workflow DEBUG:
         Only updating node hashes or skipping execution
----------------
/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold4
----------------
180903-14:58:36,445 nipype.workflow DEBUG:
         Aggregate: True
180903-14:58:36,447 nipype.workflow DEBUG:
         aggregating results
180903-14:58:36,449 nipype.workflow DEBUG:
         copying files to wd [execute=True, linksonly=True]
180903-14:58:36,452 nipype.utils DEBUG:
         Removing contents of /project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold/mapflow/_meanBold4/_tempinput

Predicted outputs
{'out_file': u'/home/visual/andche/MRI_DATA/20180810_121603fMRIrestcmrrmb4TR1500128vols007a001_warp4D_mean.nii.gz'}

Outputs

out_file = <undefined>

180903-14:58:36,460 nipype.workflow WARNING:
         [Node] Error on "preproc_flow_v1.mc_and_detrend.meanBold" (/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S15/meanBold)
180903-14:58:37,72 nipype.workflow DEBUG:
         Clearing 13 from queue
Traceback (most recent call last):

  File "<ipython-input-2-4d0fd1617f36>", line 1, in <module>
    runfile('/home/visual/andche/MRI_DATA/andche_pipeline_v1.py', wdir='/home/visual/andche/MRI_DATA')

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 668, in runfile
    execfile(filename, namespace)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 100, in execfile
    builtins.execfile(filename, *where)

  File "/home/visual/andche/MRI_DATA/andche_pipeline_v1.py", line 629, in <module>
    flow.run('PBS', plugin_args={'max_jobs':100, 'qsub_args' : '-l walltime=00:20:00,mem=8g', 'template':'#!/bin/sh\necho `date "+%Y%m%d-%H%M%S"`\nsource activate /project/3019005.02/conda/nipype\n', 'max_tries':3,'retry_timeout': 5, 'max_jobname_len': 15})

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 595, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 184, in run
    self._send_procs_to_workers(updatehash=updatehash, graph=graph)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 315, in _send_procs_to_workers
    self._clean_queue(jobid, graph)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 224, in _clean_queue
    raise RuntimeError("".join(result['traceback']))

RuntimeError: Traceback (most recent call last):

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 313, in _send_procs_to_workers
    self.procs[jobid].run()

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 471, in run
    result = self._run_interface(execute=True)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 1254, in _run_interface
    self.config['execution']['stop_on_first_crash'])))

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 1129, in _collate_results
    for i, nresult, err in nodes:

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 106, in nodelist_runner
    result = node.run(updatehash=updatehash)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 424, in run
    updatehash=updatehash and not updated)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 555, in _run_interface
    return self._run_command(execute)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 593, in _run_command
    result = self._load_results()

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 573, in _load_results
    needed_outputs=self.needed_outputs)

  File "/project/3019005.02/conda/nipype/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 624, in aggregate_outputs
    raise error

TraitError: The trait 'out_file' of a MathsOutput instance is an existing file name, but the path  '/home/visual/andche/MRI_DATA/20180810_121603fMRIrestcmrrmb4TR1500128vols007a001_warp4D_mean.nii.gz' does not exist.

Look at the predicted outputs and the outputs just before the error - that's a printout from core.aggregate_outputs (the lines between ---- are printouts of cwd from _load_results in nodes.py).

It seems that it expects the node to have some out_file. But if I'm reading this correctly, the error actually occurs when it tries to aggregate the outputs from the MapNode - should it even check for the existence of out_file there?

If I look at the results pklz from this node for the subjects that, for some reason, passed without errors, I see:

Bunch(out_file=[u'/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S07/meanBold/mapflow/_meanBold0/20180806_164452fMRIrestcmrrmb4TR1500128vols003a001_warp4D_mean.nii.gz',
       u'/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S07/meanBold/mapflow/_meanBold1/20180806_164452fMRIrestcmrrmb4TR1500128vols004a001_warp4D_mean.nii.gz',
       u'/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S07/meanBold/mapflow/_meanBold2/20180806_164452fMRIrestcmrrmb4TR1500128vols005a001_warp4D_mean.nii.gz',
 .... some output trimmed... 
u'/project/3019005.02/MT_localizers/TMP/preproc_flow_v1/mc_and_detrend/_subject_id_S07/meanBold/mapflow/_meanBold8/20180806_164452fMRIrestcmrrmb4TR1500128vols011a001_warp4D_mean.nii.gz'])

@effigies
Member

effigies commented Sep 3, 2018

Oh, I wonder if outputs are being saved as relative paths, which might lead to invalid population of the OutputSpec when loaded outside the Node.run stage.

@achetverikov
Contributor Author

This is a MathsCommand instance, so when I look at the interface specs and print os.getcwd() within _list_outputs(), I see that for some reason the current directory is not the node directory but the home directory instead. So I think, @effigies, your last comment is close. out_file is actually undefined, so it is generated with gen_fname, but only the filename is generated and the directory part is taken from the current working directory. Any idea where to look further?
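
To illustrate what I mean (a minimal sketch in plain Python, not nipype internals, with a made-up filename): the absolute path that ends up in the outputs depends entirely on the working directory at the moment the generated filename is resolved.

import os

# hypothetical generated output filename - the interface only produces a basename
generated_name = 'bold_warp4D_mean.nii.gz'

os.chdir(os.path.expanduser('~'))       # aggregation happens to run from $HOME
print(os.path.abspath(generated_name))  # -> /home/<user>/bold_warp4D_mean.nii.gz (wrong)

os.chdir('/tmp')                        # stand-in for the node's mapflow directory
print(os.path.abspath(generated_name))  # -> /tmp/bold_warp4D_mean.nii.gz (the intended location)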

@achetverikov
Contributor Author

OK, so here's my current workaround, which seems to work so far. I've changed Node._load_results() to the following:

    def _load_results(self):
        cwd = self.output_dir()
        
        result, aggregate, attribute_error = _load_resultfile(cwd, self.name)
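        # remember where we were and switch into the node's output directory so
        # that relative output filenames are resolved against it while aggregating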
        old_dir = os.getcwd()
        os.chdir(cwd)
        # try aggregating first
        if aggregate:
            logger.debug('aggregating results')
            if attribute_error:
                old_inputs = loadpkl(op.join(cwd, '_inputs.pklz'))
                self.inputs.trait_set(**old_inputs)
            if not isinstance(self, MapNode):
                self._copyfiles_to_wd(linksonly=True)
                print(self.name)
                aggouts = self._interface.aggregate_outputs(
                    needed_outputs=self.needed_outputs)
                runtime = Bunch(
                    cwd=cwd,
                    returncode=0,
                    environ=dict(os.environ),
                    hostname=socket.gethostname())
                result = InterfaceResult(
                    interface=self._interface.__class__,
                    runtime=runtime,
                    inputs=self._interface.inputs.get_traitsfree(),
                    outputs=aggouts)
                _save_resultfile(result, cwd, self.name)
            else:
                logger.debug('aggregating mapnode results')
                result = self._run_interface()
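        # switch back to the original working directory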
        os.chdir(old_dir)

        return result

So I basically switch the working directory to the node output directory before aggregating the results. It seems to work. The caveat is that I also had to clear the cache from previous runs, so maybe that was the important step and not the changes in the code.

@effigies
Member

effigies commented Sep 3, 2018

That looks reasonable to me. Also, FYI, we added a utils.filemanip.indirectory context manager in #2521 that would be useful here. Interested in making a pull request using that?
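
For example (a quick, hypothetical demo with a temp directory standing in for a node's output directory):

import os
import tempfile
from nipype.utils.filemanip import indirectory

node_dir = tempfile.mkdtemp()  # stand-in for self.output_dir()

print(os.getcwd())             # original working directory
with indirectory(node_dir):
    # inside the block, relative output filenames resolve against node_dir
    print(os.getcwd())
print(os.getcwd())             # original directory is restored, even if an error was raised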

@effigies effigies added this to the 1.1.3 milestone Sep 3, 2018
@achetverikov
Contributor Author

Will do!

@effigies
Member

Please reopen if this is still occurring post 1.3.2.
