Making computations reproducible
Tokyo.SciPy #6, 2014-08-02
1 / 31
Abstract
Scientific computations tend to involve a number of experiments under different conditions.
It is important to manage computational experiments so that their results are reproducible.
In this talk we introduce 3 rules to make computations reproducible.
2 / 31
Outline
1. Introduction
2. Discipline
   - Three elements
   - Three rules
   - Complements
3. Practice
4. Summary
3 / 31
1. Introduction
4 / 31
Background
A lab notebook is indispensable for experimental research in natural science. One of its roles is to make experiments reproducible.
Why not for computational research?
Lack of reproducibility means lack of reliability.
5 / 31
Common problems
Common problems in computational experiments:
- I confused which result was obtained under which condition.
- I overwrote previous results unintentionally.
- I used inconsistent data and got invalid results.
- ...
Many of these problems are caused by inappropriate management of experiments.
6 / 31
Goal
To archive all results of each experiment with all the information required to reproduce them, so that we can retrieve and restore them easily, in a systematic and low-cost way.
7 / 31
Note
What is introduced in this talk is not an established methodology, but a collection of field techniques. The same goes for the wording.
In this talk, we will not deal with:
- distributed computation
- documentation or testing
- publishing of a paper
- release of OSS
8 / 31
2. Discipline
9 / 31
Three elements
We distinguish the following elements which affect reproducibility of computations:
- Algorithm: an algorithm coded into a program (implemented by yourself, calling an external library, ...)
- Data: input and output data, intermediate data to reuse
- Environment: software and hardware environment (external library, server configuration, platform, ...)
10 / 31
Three rules
1. Give an Identifier to each element and archive them.
2. Record a machine-readable Recipe with human-readable comments.
3. Make every manipulation Mechanized.
11 / 31
Identifier: Give an Identifier to each element and archive them.
- Algorithm: use a version control system
- Data: give a name to distinguish the kind of data; give a version to distinguish the concrete content
- Environment: record information about the platform; record the version (and optionally build parameters) of each library
Keep in mind to track all elements during the whole process:
- every piece of code under version control
- no data without an identifier
- no temporary environment
12 / 31
Recipe: Record a machine-readable Recipe with human-readable comments.
A recipe should include all the information required to reproduce the results of an experiment (other than the contents of Algorithm, Data and Environment, which are stored elsewhere).
A recipe should be machine-readable to re-conduct the experiment.
A recipe should include a human-readable comment on the purpose and/or meaning of the experiment.
A recipe should be generated automatically by tracking experiments.
13 / 31
Typically a recipe includes the following information:
- in which order
- which data is processed
- by which algorithm
- under which environment
- with which Parameters
Typically a recipe consists of the following:
- a script file to run the whole process
- a configuration file which specifies parameters and identifiers
- a text file of comments
14 / 31
Mechanize: Make every manipulation Mechanized.
Run the whole process of an experiment by a single operation.
- No manual manipulation of data.
- No manual compilation of source code.
- Automated provisioning of the environment.
15 / 31
complement: Tentative experiment
An excessively large archive detracts from the substantive significance of reproducibility.
For tentative experiments with ephemeral results, recording is not necessarily required:
- tests of code
- trials on tiny data
- ...
If a result might be used, referred to or looked up afterward, then it should be recorded.
16 / 31
complement: Reuse of intermediate data
In order to reuse intermediate data, utilize an identifier.
- Explicitly specify the intermediate data to reuse by its identifier.
- Automatically detect available intermediate data based on dependencies.
- ...
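One possible sketch of explicit reuse via an identifier (the helper and file names below are assumptions, not from the talk):

import os
import pickle

def cached(identifier, compute):
    # If intermediate data with this identifier was already archived, reuse it;
    # otherwise compute it and store it under that identifier.
    if os.path.exists(identifier):
        with open(identifier, "rb") as f:
            return pickle.load(f)
    result = compute()
    with open(identifier, "wb") as f:
        pickle.dump(result, f)
    return result

# e.g. features = cached("features-input20140802-rev0123abc.pkl",
#                        lambda: extract_features(raw_data))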
17 / 31
3. Practice
18 / 31
Identify Algorithm
Use a version control system such as Git or Mercurial to manage source code.
It is easy to record the revision and any uncommitted changes at each experiment.
(Learn the internals of your VCS if you need more flexible management.)
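As a minimal sketch (not from the original slides; the output file names are illustrative), the revision and uncommitted changes can be captured with Git at the start of each experiment:

import subprocess

def snapshot_code(prefix="experiment"):
    # Identifier of the Algorithm element: the current Git revision.
    rev = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip()
    # Uncommitted changes, so the exact code state can be restored later.
    diff = subprocess.check_output(["git", "diff", "HEAD"]).decode()
    with open(prefix + ".revision.txt", "w") as f:
        f.write(rev + "\n")
    with open(prefix + ".diff.patch", "w") as f:
        f.write(diff)
    return rev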
19 / 31
Identify Data
File: Give appropriate names to directories and files; then a resolved absolute path can be used as an identifier.
If no meaningful name comes to mind, use a time-stamp or a hash.
DB or other API: A pair of a URI and a query whose results are constant can be used as an identifier.
If the API behaves randomly, keep the results at hand (with a time-stamp).
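A minimal sketch of such identifiers (the function names are assumptions, not part of the talk): a content hash for files, and a time-stamp when nothing better is available:

import datetime
import hashlib

def content_hash(path):
    # The SHA-1 of the file contents identifies this exact version of the data.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def timestamp_id():
    # Fallback identifier when no meaningful name or stable content exists.
    return datetime.datetime.now().strftime("%Y%m%d-%H%M%S")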
20 / 31
Identify Environment
Python package: Use PyPA tools (virtualenv, setuptools and pip), or Conda/enstaller.
Library: Use HashDist. Using CDE is an alternative.
Platform: Use platform, a module in the Python standard library.
Server configuration: Use Ansible or another configuration management tool, and Vagrant or another provisioning tool.
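For the Platform and package items, a minimal sketch using the platform module and pip (the output file name is an assumption):

import platform
import subprocess

def record_environment(path="environment.txt"):
    with open(path, "w") as f:
        # Platform information from the standard library.
        f.write(platform.platform() + "\n")
        f.write("Python " + platform.python_version() + "\n")
        # Versions of the installed Python packages.
        f.write(subprocess.check_output(["pip", "freeze"]).decode())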
21 / 31
HashDist
A tool for developing, building and managing software stacks.
A software stack is described in YAML. We can create, copy, move and remove software stacks.
$ git checkout stack.yml
$ hit build stack.yaml
22 / 31
Recipe: configuration file
A configuration file in a recipe should be in a machine-readable format.
Use the ConfigParser, PyYAML or json module to read/write parameters in INI, YAML or JSON format.
A recipe should include the following:
- command line arguments
- environment variables
- the random seed
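A minimal sketch of such a configuration using the json module; every key and value below is illustrative only:

import json

# The machine-readable part of the recipe: command line arguments,
# environment variables and the random seed, plus element identifiers.
config = {
    "arguments": ["--iterations", "100"],
    "environment": {"OMP_NUM_THREADS": "4"},
    "random_seed": 12345,
    "data": "input-20140802.data",   # Data identifier
    "revision": "0123abc",           # Algorithm identifier
}
with open("conf.json", "w") as f:
    json.dump(config, f, indent=2)

# Read it back when re-conducting the experiment.
with open("conf.json") as f:
    config = json.load(f)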
23 / 31
Recipe: script file
A script in a recipe should run the whole process by a single operation.
There are several alternatives for realizing such a script:
- utilize a build tool (such as Autotools, SCons, or maf)
- utilize a job-flow tool (such as Ruffus or Luigi)
- write a small script by hand (e.g. run.py)
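The last alternative might look like the following sketch of a hand-written run.py; the step functions are hypothetical placeholders for the actual experiment:

import json
import random
import sys

def load_data(identifier):
    # Placeholder: read the input data named by its identifier.
    with open(identifier) as f:
        return f.read()

def run_algorithm(data, config):
    # Placeholder for the actual computation.
    return {"n_chars": len(data), "seed": config["random_seed"]}

def main(conf_path):
    # Load the machine-readable part of the recipe.
    with open(conf_path) as f:
        config = json.load(f)
    random.seed(config["random_seed"])
    # Run the whole process in order by a single operation.
    result = run_algorithm(load_data(config["data"]), config)
    with open(config.get("output", "result.json"), "w") as f:
        json.dump(result, f)

if __name__ == "__main__":
    main(sys.argv[1])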
24 / 31
maf
“maf is a waf extension for writing computational experiments.”
Conduct computational experiments as build processes.
Focus on machine learning:
- list configurations
- run programs with each configuration
- aggregate and visualize their results
25 / 31
Recipe: automatic generation
Do it yourself, or use Sumatra.
“Sumatra: automated tracking of scientific computations”
- recording information about experiments, linking to data files
- command line & web interface
- integration with LaTeX/Sphinx
$ smt run --executable=python --main=main.py \
conf.param input.data
$ smt comment "..."
$ smt info
$ smt repeat
26 / 31
4. Summary
27 / 31
Summary
We have introduced 3 rules to manage computational experiments so that their results are reproducible.
However, our method is just a makeshift patchwork of field techniques.
We need a tool to manage experiments in a more integrated, systematic and sophisticated manner for reproducible computations.
28 / 31
Links
PyPA: http://python-packaging-user-guide.readthedocs.org
Conda: http://conda.pydata.org
enstaller: https://github.com/enthought/enstaller
HashDist: http://hashdist.github.io
CDE: http://www.pgbovine.net/cde.html
Ansible: http://www.ansible.com
Vagrant: http://www.vagrantup.com
SCons: http://www.scons.org
maf: https://github.com/pfi/maf
Ruffus: http://www.ruffus.org.uk
Luigi: https://github.com/spotify/luigi
Sumatra: http://neuralensemble.org/sumatra
29 / 31
References
[1] G. K. Sandve, A. Nekrutenko, J. Taylor, E. Hovig, “Ten Simple Rules for Reproducible Computational Research,” PLoS Comput. Biol. 9(10): e1003285 (2013). doi:10.1371/journal.pcbi.1003285
[2] V. Stodden, F. Leisch, R. Peng, “Implementing Reproducible Research,” Open Science Framework (2014). osf.io/s9tya
30 / 31
fin.
Revision: f2b0e97 (2014-08-03)
31 / 31