The Large Synoptic Survey Telescope will find more than 10^5 astronomical transients per night - keeping up will be a challenge. Each alert contains only a few datapoints, requiring a Bayesian approach to classification. I'll introduce the active learning techniques we're exploring for optimizing robotic telescope follow-up schedules, and discuss packages used for the underlying Bayesian analysis.
Next-generation astronomical facilities such as the LSST and the SKA will be game-changers, allowing us to observe the entire southern sky and track changing sources (exploding stars, planetary eclipses, matter accreting onto black holes) in near real-time. Keeping up with their alert streams represents a significant challenge - how do we make the most of our limited telescope resources to follow up 100,000 sources per night?
The biggest problem here is classification - we want to find the really interesting transients and spend our time watching those. However, classification based on the initial survey data can only get you so far - we'll need robotic follow-up telescopes for rapid-response observations to gather more information on the most promising targets. To get the most science done, we need to be smart about scheduling that follow-up.
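To make the classification problem concrete, here's a minimal sketch (not our actual pipeline) of Bayesian classification from a handful of photometric points. The class templates, priors, and noise level are all invented for illustration; in practice the likelihoods would come from physically motivated light-curve models.

```python
import numpy as np

def log_likelihood(flux, template, sigma):
    """Gaussian log-likelihood of observed fluxes given a class template."""
    return -0.5 * np.sum(((flux - template) / sigma) ** 2)

# Toy transient classes with hypothetical template fluxes at the observed epochs.
templates = {
    "supernova": np.array([1.0, 1.8, 2.5]),
    "variable_star": np.array([1.0, 1.1, 0.9]),
    "agn_flare": np.array([1.0, 1.4, 1.3]),
}
# Made-up prior class frequencies (variable stars are the most common).
priors = {"supernova": 0.2, "variable_star": 0.7, "agn_flare": 0.1}

flux = np.array([1.1, 1.7, 2.4])  # the few early datapoints in an alert
sigma = 0.2                       # assumed photometric uncertainty

# Bayes' rule: posterior ∝ prior × likelihood, normalised over classes.
log_post = {c: np.log(priors[c]) + log_likelihood(flux, t, sigma)
            for c, t in templates.items()}
m = max(log_post.values())
norm = sum(np.exp(v - m) for v in log_post.values())
posterior = {c: np.exp(v - m) / norm for c, v in log_post.items()}
```

Even with only three datapoints, a steeply rising source is confidently reclassified as a supernova candidate despite the prior favouring variable stars - new observations can then refine this posterior further.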
We're exploring the use of active learning algorithms (also known as Bayesian decision theory) to solve this problem, building a framework that allows for iterative refinement of a probabilistic classification for each source. Because no existing algorithms fit this problem out of the box, we've built our own analysis framework, using the emcee and PyMultiNest packages to power the underlying Bayesian inference. I'll give a brief overview of how our proposed system works, and talk about the pros and cons of rolling your own Bayesian analysis code in Python.
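The scheduling step can be framed as a decision-theoretic question: which target's follow-up observation is expected to reduce our classification uncertainty the most? Here's a toy sketch of that idea using expected information gain; the class posteriors and outcome likelihoods are invented for illustration (in a real system the posteriors would come from the sampling machinery mentioned above).

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a class-probability vector (in nats)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy_after_obs(posterior, likelihoods):
    """Expected posterior entropy after one follow-up observation.

    likelihoods[c, o] is P(outcome o | class c) for the candidate
    observation; outcomes are marginalised over the current posterior.
    """
    p_outcome = posterior @ likelihoods  # P(o) = sum_c P(c) P(o|c)
    h = 0.0
    for o, p_o in enumerate(p_outcome):
        if p_o > 0:
            updated = posterior * likelihoods[:, o] / p_o  # Bayes update
            h += p_o * entropy(updated)
    return h

# Two candidate targets with current posteriors over three classes.
targets = {
    "target_A": np.array([0.5, 0.3, 0.2]),    # still ambiguous
    "target_B": np.array([0.9, 0.05, 0.05]),  # already fairly certain
}
# Hypothetical likelihoods for one follow-up exposure:
# rows are classes, columns are discretised observation outcomes.
likelihoods = np.array([[0.7, 0.2, 0.1],
                        [0.2, 0.6, 0.2],
                        [0.1, 0.2, 0.7]])

# Schedule the target whose observation gives the largest expected
# reduction in classification entropy (the information gain).
gains = {name: entropy(p) - expected_entropy_after_obs(p, likelihoods)
         for name, p in targets.items()}
best = max(gains, key=gains.get)
```

As you'd hope, the ambiguous target wins: observing a source we're already confident about teaches us little, so the scheduler spends telescope time where it resolves the most uncertainty.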