This talk will demonstrate the use of IPython notebook widgets for monitoring long-running processes such as parameter searches in machine learning and model-based computational science. The widget technology allows experiments to be introspected interactively from within a familiar platform, enabling rapid iteration of scientific code.
In the general use of the IPython notebook for developing high-performance scientific code, the IPython.parallel module allows one to launch distributed jobs on multiprocessor systems and on remote resources such as cloud computing platforms or university HPC clusters.
Calculations of this kind, and other remote or distributed processes, can often run for a long time before results are available for review.
The ability to run parallel computations from the notebook without blocking the notebook interface is useful, but it means one is often left wondering about the progress of such jobs, which impedes the iteration between code and results.
I demonstrate how diagnostic code querying such jobs can be executed in the periodic callbacks of the notebook widget framework, so that progress is reported live without blocking the interface.
This same pattern can be extended to many other situations, as it opens up the widget backend for executing diagnostics against external resources without blocking the notebook.
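The polling idea can be sketched as a small status function; the names below are hypothetical, but IPython.parallel's AsyncResult exposes a similar ready()/progress interface that a widget's periodic callback could query:

```python
def job_status(async_result, total_tasks):
    """Summarise a running job without blocking on its completion.

    `async_result` is assumed to expose .ready() and .progress,
    as IPython.parallel's AsyncResult does; a widget's periodic
    callback would call this and update a progress display.
    """
    if async_result.ready():
        return "done ({}/{} tasks)".format(total_tasks, total_tasks)
    return "running ({}/{} tasks)".format(async_result.progress, total_tasks)

# A minimal fake standing in for a real AsyncResult, for illustration:
class FakeAsyncResult:
    def __init__(self, progress, done):
        self.progress = progress
        self._done = done
    def ready(self):
        return self._done

print(job_status(FakeAsyncResult(3, False), 8))  # running (3/8 tasks)
print(job_status(FakeAsyncResult(8, True), 8))   # done (8/8 tasks)
```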
To demonstrate this further with a more specific example, we will examine the use case of monitoring hyperparameter optimisation in machine learning. Rather than waiting until a parameter search has completed to examine its success via plots and statistics, the widget framework can be used to assess the experiment as it runs.
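A minimal sketch of the monitoring side, assuming the search periodically appends (parameters, score) pairs to a shared list that a widget callback can summarise (the helper name and the trial data below are hypothetical):

```python
def best_so_far(trials):
    """Return the best (params, score) among completed trials.

    `trials` is a list of (params_dict, validation_score) pairs
    appended to as the search runs; a widget's periodic callback
    could call this to report the current leader mid-search.
    """
    if not trials:
        return None
    return max(trials, key=lambda t: t[1])

# Hypothetical partial results from a still-running parameter search:
trials = [
    ({"C": 0.1, "gamma": 0.01}, 0.82),
    ({"C": 1.0, "gamma": 0.01}, 0.91),
    ({"C": 10.0, "gamma": 0.1}, 0.87),
]
params, score = best_so_far(trials)
print(params, score)  # {'C': 1.0, 'gamma': 0.01} 0.91
```

The same callback could equally redraw a plot of score against parameter values, giving a live view of the search surface as it fills in.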