
EuroSciPy 2013

Brussels, Belgium - August 21-24, 2013

Processing biggish data on commodity hardware: simple Python patterns

Gael Varoquaux

Friday, August 23, 2 p.m.–2:30 p.m. in Dupreel

Abstract

While big data spans many terabytes and requires distributed computing, most mere mortals deal with gigabytes. In this talk I will discuss our experience in efficiently applying machine learning to hundreds of gigabytes of data on commodity hardware. I will focus on patterns implemented in two Python libraries, joblib and scikit-learn, dissecting why they help address big data and how to implement them efficiently with simple tools. In particular, I will cover:

  • On-the-fly data reduction
  • On-line algorithms and out-of-core computing
  • Parallel computing patterns: performance outside of a framework
  • Caching of common operations, with efficient hashing of arbitrary Python objects and a robust datastore relying on POSIX disk semantics
  • Fast I/O mechanisms

The talk will illustrate these high-level concepts with detailed technical discussions of Python implementations, drawing both on examples that use scikit-learn and joblib and on an analysis of how these libraries work. The goal is less to sell the libraries themselves than to share the insights gained in using and developing them. Minimal code sketches of the patterns listed above are given below.
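
A first sketch, not taken from the talk itself, illustrates on-the-fly data reduction: each incoming chunk is projected to a lower-dimensional space with scikit-learn's SparseRandomProjection, so only the reduced data is kept in memory. The iter_chunks generator and its parameters are illustrative stand-ins for a real streaming data source.

    import numpy as np
    from sklearn.random_projection import SparseRandomProjection

    def iter_chunks(n_chunks=10, chunk_size=1000, n_features=5000, seed=0):
        # Stand-in for any data source read chunk by chunk.
        rng = np.random.RandomState(seed)
        for _ in range(n_chunks):
            yield rng.randn(chunk_size, n_features)

    projector = SparseRandomProjection(n_components=100, random_state=0)
    reduced = []
    for i, chunk in enumerate(iter_chunks()):
        if i == 0:
            projector.fit(chunk)      # fitting only needs the number of features
        reduced.append(projector.transform(chunk))
    X_reduced = np.vstack(reduced)    # 10000 x 100: small enough to keep in memory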
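
A sketch of on-line, out-of-core learning with scikit-learn's partial_fit API: the estimator sees the data one batch at a time, so the full dataset never has to fit in memory. The iter_batches generator and its toy labels are assumptions standing in for batches read from disk.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def iter_batches(n_batches=100, batch_size=500, n_features=50, seed=0):
        # Stand-in for reading labelled batches from disk.
        rng = np.random.RandomState(seed)
        for _ in range(n_batches):
            X = rng.randn(batch_size, n_features)
            y = (X[:, 0] > 0).astype(int)   # toy labels
            yield X, y

    clf = SGDClassifier()
    classes = np.array([0, 1])              # all classes must be declared up front
    for X_batch, y_batch in iter_batches():
        clf.partial_fit(X_batch, y_batch, classes=classes)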
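
A sketch of the embarrassingly-parallel pattern with joblib: Parallel and delayed dispatch independent calls to worker processes without requiring a full framework. The sqrt call is a placeholder for any expensive, independent computation.

    from math import sqrt
    from joblib import Parallel, delayed

    # Run ten independent calls on two worker processes.
    results = Parallel(n_jobs=2)(delayed(sqrt)(i ** 2) for i in range(10))
    print(results)   # [0.0, 1.0, 2.0, ..., 9.0]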
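
A sketch of caching with joblib.Memory, assuming an arbitrary cache directory: results of expensive, deterministic functions are stored on disk, keyed by a hash of the arguments (numpy arrays included), so re-running an analysis skips work already done.

    import numpy as np
    from joblib import Memory

    memory = Memory("./joblib_cache", verbose=0)   # on-disk store; path is arbitrary

    @memory.cache
    def expensive_transform(X):
        return np.sqrt(X).sum(axis=0)

    X = np.random.rand(1000, 100)
    r1 = expensive_transform(X)   # computed and written to the cache
    r2 = expensive_transform(X)   # same arguments: loaded back from disk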
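
A sketch of fast I/O for numerical data with joblib's persistence functions, assuming an arbitrary file name: joblib.dump stores numpy arrays efficiently, and joblib.load can memory-map them back so that several processes can share the data without each holding its own copy.

    import numpy as np
    import joblib

    X = np.random.rand(10000, 100)
    joblib.dump(X, "data.joblib")                         # fast on-disk serialization
    X_mapped = joblib.load("data.joblib", mmap_mode="r")  # read back memory-mapped
    print(X_mapped[:5].mean())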
