Last week, I went to Barcelona to attend NIPS 2016, the Conference on Neural Information Processing Systems. NIPS is essentially a giant research conference on machine learning and neural networks, with a dash of cognitive science and neuroscience thrown in. It was super interesting, so I thought I’d jot down some thoughts while the experience is still fresh in my mind.
First off, NIPS was an amazing experience! It’s one thing to read papers on arXiv and follow some researchers’ blogs – it’s totally different to actually meet researchers in person and interact with them. Basically all researchers have a well-thought-out research program that can be hard to glean from just a handful of papers. And even as someone who isn’t doing any ML research at the moment, learning about and comparing people’s research programs has definitely helped me develop my “taste” for what makes an interesting problem.
That said, I think I definitely got more out of this last year, at my first NIPS. The broad motivations, problems, and ideas of most research programs haven’t really changed (which makes sense – it’d be a pretty shallow research program otherwise), so a lot of the deep learning ideas were things I had already been exposed to. While there was definitely a lot of progress on a bunch of fronts, it’s unclear to me how much of the knowledge gained will still be relevant and interesting in a couple of years (i.e. when I would potentially be starting a PhD program).
The hype this year was very obvious. NIPS grew 50% from last year (4000 attendees to 6000), and there were a ton of companies trying to get in on the action (many of which, I suspect, need a better ETL pipeline and some linear regression, not deep learning). There was also a huge recruiting presence from places like FAIR and DeepMind – it’s almost like NIPS is half academic conference and half career fair / head hunting.
In terms of the research, deep learning has colonized NIPS even further. The amount of neuroscience and cognitive science was even smaller than last year, really only appearing in a couple of workshops. Even within ML, I don’t think I saw a single paper on kernel methods, and only saw maybe two papers on random forests.
Also interesting was how quickly buzz shifted within deep learning. Last year, there was a ton of excitement around memory-augmented neural networks (e.g. Neural Turing Machines or Neural Stack Machines). This year, there was much less buzz about memory augmentation; instead, GANs and adversarial training were clearly the new hotness at NIPS.
On a more personal note, I made a much more concerted effort this year to be more social and meet more people, which I think was a moderate success. That meant actively seeking out and going to the various after-hours parties (I went to 4 different parties this year, compared to only 1 last year), actually talking with people during coffee breaks, and being willing to skip talks to chat. This was definitely the right choice – individual conversations are much more free-flowing than listening to talks, so they often ended up being much more thought-provoking and interesting too. I definitely recommend viewing these kinds of unofficial social interactions, and not the official programming, as the primary attraction of NIPS (and of most conferences I’ve been to, honestly).
Pro tip: never admit that you’re an undergraduate if you can help it; otherwise, people will assume you have no idea what you’re talking about and it’ll kill the conversation. Because I wasn’t affiliated with any research group at Columbia, for me this also meant hiding my name badge so that people would ask “what do you do?” instead of “which lab are you from?”
The reason this was only a moderate success is mostly my fault – I’m still not quite comfortable striking up conversations with total strangers. Equally importantly, I don’t actually do any research at the moment, which means I only have vague impressions and high-level ideas without any of the knowledge and depth that comes from fighting in the trenches. I got around this by talking about my background in cognitive science and linguistics, which is different enough from machine learning that I was able to have some pretty interesting and fruitful conversations without delving too deeply into technical details I’ve forgotten. Still, not bearing the scars and war stories of actual daily research was definitely pretty limiting.
Because of this, I’m honestly not sure that I’d go to NIPS 2017 unless I was sharing some of my own research. Although I really enjoyed going to NIPS the last two times, I feel like I’ve reached diminishing returns in what I get out of NIPS as a spectator. We’ll see what happens though.
NIPS 2016 Highlights
- Favorite Talks: Taming Non-Convexity via Geometry by Suvrit Sra and A Tribute to David MacKay
- Favorite Workshop: Intuitive Physics
- Favorite Panel: Brains and Bits
- Favorite Thing that I thought would be really boring but was actually super interesting: Efficient Hardware for Deep Learning
- Favorite Paper not in NIPS that I was pointed to: Universal Adversarial Perturbations