The Road Through Strata - Wednesday

12 February 2014

This was my second day at Strata.

Keynotes

The keynotes at Strata were very short, 5-10 minutes each. This was a mixed blessing; presenters were brief, but many of them used nothing but data buzzwords. I was strongly reminded of The Worst Speech in the World.

P(quality) = reality / (buzzwords + reality)

However, there were two amazing speakers: Farrah Bostic and David Epstein. They made clear points, had a bit of light humor, and were refreshingly immune to buzzword-itis.

Farrah Bostic's argument was "How we decide what to measure, matters." Market research, surveys, and focus groups are more biased than we think, leading to flawed decisions and flawed results. I've seen this anecdotally, when people made decisions based on the data they had rather than the problem they actually had.

David Epstein had two points. The first was that before collecting tons of data, you should determine what is important and what can be changed; collecting and analyzing data should enable change that is actually possible. His second point was that the famous "10,000 hours of practice" rule was based on a flawed study of 40 gifted violinists; it isn't generally applicable. Even the original researcher, K.A. Ericsson, called the hype around the 10,000 hours idea "the danger of delegating education to journalists."

Big Data: Too Few Artists

This was the earth-shaking session of the day. Chris Re is a Stanford professor with a stupefying vision for data analysis.

A challenge with data problems, as with software engineering, is reducing the time between idea and product. One huge bottleneck is the cognitive/human time required to build a good model from data.

Building a good model requires iterating over 2 steps:

  1. Getting data and extracting features from it
  2. Testing any and all features against various models to see which combinations are meaningful.

The second step can be streamlined, even automated.

Automated Machine Learning

(IMAGE OF SKYNET)

For everything but the largest data sets, it is computationally/economically possible to run hundreds, even thousands, of machine learning models on a data set and use statistical methods to identify the best ones.

This isn't a new idea. Data scientists tune machine learning models via hyperparameters all the time, and I often use hyperparameter searches myself; they're a fast way to land on a good model.
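
As a rough illustration, here's a minimal sketch of that kind of automated search using scikit-learn's GridSearchCV. The candidate models, parameter grids, and toy dataset are my own illustrative choices, not anything from Chris Re's talk.

    # A minimal sketch of an automated model/hyperparameter search with
    # scikit-learn. The candidate models, grids, and toy data are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    candidates = [
        (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1.0, 10.0]}),
        (RandomForestClassifier(random_state=0),
         {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
    ]

    best = None
    for estimator, grid in candidates:
        # Cross-validated search over this model's hyperparameter grid.
        search = GridSearchCV(estimator, grid, cv=5, scoring="accuracy", n_jobs=-1)
        search.fit(X, y)
        if best is None or search.best_score_ > best[0]:
            best = (search.best_score_, search.best_estimator_)

    print("Best CV accuracy:", best[0])
    print("Best model:", best[1])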

This leaves us with the first step: generating features.

It's About the Features, Stupid

One of the big lessons in machine learning is that more data trumps a more complicated model. This is exemplified in the seminal paper "The unreasonable effectiveness of data."

Another lesson is that better features trump a more complicated model. The best data scientists spend much of their time adding features to their data (feature engineering).
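
For a concrete (if toy) picture of what feature engineering looks like, here's a small sketch using pandas; the column names and derived features are hypothetical.

    # A toy illustration of feature engineering with pandas.
    # The columns and derived features are hypothetical.
    import pandas as pd

    df = pd.DataFrame({
        "signup_date": pd.to_datetime(["2014-01-03", "2014-01-20", "2014-02-01"]),
        "last_purchase": pd.to_datetime(["2014-02-01", "2014-02-05", "2014-02-10"]),
        "total_spend": [120.0, 35.0, 560.0],
        "num_orders": [4, 1, 12],
    })

    # Derived features often carry more signal than the raw columns.
    df["days_active"] = (df["last_purchase"] - df["signup_date"]).dt.days
    df["spend_per_order"] = df["total_spend"] / df["num_orders"]
    df["orders_per_week"] = df["num_orders"] / (df["days_active"] / 7.0)

    print(df)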

Deep Dive

Chris's ideas come to fruition in DeepDive, a system where the user defines features but writes no machine learning or statistics code. The tool does all of the machine learning and statistical analysis, then shows the results. It's already been used on paleobiology data (extracting structured data from PDF-formatted research papers) with promising results.

I'll be following this closely.

Thinking with Data

Max Shron's premise was simple: good data science benefits from having an intellectual framework. The details of this session are in his new book.

Scoping

"How do we make sure we're solving the right problem?"

Data scientists aren't the first to ask that question. Designers have this problem all the time, worse than we do. Vague, conflicting requests are a fact of life.

Borrowing from designers and their scoping framework can:

  • Help define a data problem clearly by asking careful questions.
  • Reduce the chance of error by using mockups. A fake graph can be very helpful (see the sketch after this list).
  • Help deliver a clear presentation by copying narrative structure: setup, conflict, resolution, and denouement.
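
To make the mockup idea concrete, here's a sketch of a deliberately fake graph built with matplotlib; the scenario and numbers are purely hypothetical. The point is to agree on the shape of the final deliverable before doing any real analysis.

    # A mockup graph with made-up numbers. The goal is to agree on what the
    # final deliverable should look like before any real analysis happens.
    import matplotlib.pyplot as plt

    months = ["Jan", "Feb", "Mar", "Apr"]
    fake_churn = [5.0, 4.6, 4.1, 3.8]  # placeholder values, not real data

    fig, ax = plt.subplots()
    ax.plot(months, fake_churn, marker="o")
    ax.set_title("MOCKUP: monthly churn after the retention campaign")
    ax.set_xlabel("Month")
    ax.set_ylabel("Churn rate (%)")
    fig.savefig("churn_mockup.png")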

Arguments

Convincing people of something, even with data, is a form of argument. Data scientists can benefit from 2500 years of work in the humanities, rhetoric, and social sciences.

Knowing the structure of an argument can help with:

  • Clarifying what you need to convince people of.
  • Anticipating objections and questions, so you can be prepared
  • Identifying indirect and opportunity costs that may trip up your ideas
  • Keeping your presentation concise by not covering already-agreed-upon terms and definitions.

This was the most intellectual of the sessions I attended, and one of the most helpful.

Tracking zzzzz

In contrast, Monica Rogati's session was lighthearted and utterly entertaining. This was an amazing example of telling a story using data.

The topic? Sleep.

As a data scientist for Jawbone, Monica is effectively running the world's largest sleep study, with access to 50 million nights' sleep. Some findings:

  • Hawaii is the most sleep deprived. Vermont is the least.
  • The conventional wisdom for jet lag is 1 day of recovery per time zone crossed. It actually takes about 2 days per time zone.
  • A coast-to-coast trip takes 6-7 days to recover from.
  • Fishing, pets, hiking, and softball are correlated with more sleep.
  • Socializing at work, personal grooming, and commuting are correlated with less.

I'll be revisiting this session to mine it for presentation tips.

Errata

  • I asked 30 people at Data after Dark, and 23 of them knew how to count cards. No wonder we're playing poker and not blackjack!
  • A vendor booth offering free Bud Light at the Exhibit Hall was completely empty. No surprise; there was free scotch and craft beer nearby. Competition matters.
  • I found an intriguing research paper on the theory behind join algorithms (warning: math heavy).

That's it for tonight. Until tomorrow, data nerds!


The Road Through Strata - Day 1

11 February 2014

This was my first day at Strata. Here's what I found.

The Good

  • It's important to explain data science products to users. They don't trust black boxes.
  • Online learning (updating a model incrementally) is very useful, but it's still not available for most algorithms (see the sketch after this list).
  • Assembling, transforming, and cleaning data still takes 80%+ of the time.
  • The age-old machine learning headaches live on: overfitting, outlier detection and removal, feature extraction, the curse of dimensionality, and mismatched tooling for prototyping vs. production.
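
On the online-learning point, here's a minimal sketch of incremental training using scikit-learn's SGDClassifier, one of the relatively few estimators that supports partial_fit; the streaming batches are simulated toy data.

    # A minimal sketch of online (incremental) learning with scikit-learn's
    # SGDClassifier via partial_fit. The "streaming" batches are simulated.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.RandomState(0)
    model = SGDClassifier()
    classes = np.array([0, 1])  # all classes must be declared up front

    for _ in range(10):  # pretend each iteration is a fresh batch arriving
        X_batch = rng.randn(100, 5)
        y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
        model.partial_fit(X_batch, y_batch, classes=classes)

    print(model.predict(rng.randn(3, 5)))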

The Bad

I made the very mistake I warned against yesterday: I went to sessions based on the topic, and not the quality of the speaker.

I missed out on amazing sessions by John Foreman, Jeff Heer, and Carlos Guestrin.

I'll be more selective about my sessions for the next couple days.

The Ugly

I asked a dozen people, from a variety of industries, what they did for a living. I also asked how they ensured their work wasn't being used to turn a profit in an unethical way.

Nobody had an answer to the latter question. I'm fervently hoping this is due to my low sample size and not broadly representative of the data analytics community.

Meeting People

In addition to my ethical survey I had the chance to talk to people from a D.C. startup, the Lawrence Berkeley Lab, Microsoft Research, Netflix, Etsy, Vertafore, the Department of Defense, and Sage Bionetworks. Everyone was ridiculously smart, and most of them were data scientists.

I came prepared with a list of questions:

Questions

  • What's your name? Are you from the Bay Area? Where do you work?
  • What are you passionate about? What do you like to do?
  • What's your ideal problem to solve?
  • What projects do you wish other people would help you with?
  • What's one question you wish people would ask you?
  • What do you think people should pay more attention to?

I found some common elements:

  • They are all learning, and often confused by, the myriad software stacks and languages available today.
  • They all want to learn from each other.
  • They all want more in-depth sessions.

Data-Intensive Everything

The range of subject areas covered was immense.

Data-Intensive Physics

  • Sloan Digital Sky Survey (SDSS)
  • Large Synoptic Survey Telescope (LSST)
  • Search for Extra-Terrestrial Intelligence (SETI)
  • Large Hadron Collider (LHC)

Data-Intensive Medicine

  • Personalized medicine
  • Predictive health - preventative care
  • Early detection (for cancers)

Data-Intensive Cybersecurity

  • Intrusion detection
  • Fraud monitoring

Data-Intensive IT

  • Automatic root-cause analysis
  • Monitoring with intelligent anomaly detection
  • Capacity analysis and automatic scaling

Data-Intensive Cruft

There were some boring problems discussed...

  • Show people more interesting ads
  • Recommend movies, books, or news articles to people
  • Recommend matches on a dating site
  • Improve high-frequency trading systems

Luckily, I was saved by the amount of discussion on data-intensive genomics...

Data-Intensive Genomics

On Monday night I attended a Big Data Science meetup, and the best presenter was Frank Nothaft, a grad student at UC Berkeley, working on large-scale genomics.

Why?
  • The cost of sequencing a genome has dropped below $1,000 and is falling faster than Moore's law; the cost of computation may become the bottleneck in genomics.
  • Personalized medicine is now possible. Doctors would have the ability to identify which genetic traits make us more or less susceptible to different diseases, cancers, and so on.
  • Data volumes are large. 200-1,000GB per genome.
  • Analyzing enough genomes to do population analyses requires petabytes of data.

The societal benefit from this work could be immense. I understand why he was so cheerful when he talked.

How

I was impressed by the quality of thought put into the project:

  • Use open-source, popular software stacks because they will improve over time
  • Add interface support for many languages, such as Python, C++, C#, PHP, Ruby, etc.
  • Identify tools that are best at each part of the software stack to improve performance and scalability. In this case, that's Apache Spark, Avro, Parquet, and HDFS.
  • Use a columnar data store (Parquet) on top of HDFS for data storage. Genomic data can be stored efficiently in columnar format, which leads to better parallelism (see the sketch after this list).
  • Use an interoperable data-storage setup (Avro) to support multiple interfaces
  • Add support for SQL-like queries (Shark, Impala)
  • Test the performance and scalability, both at a single node and scaling out.
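
To give a flavor of the columnar pattern (and only that; this is not the project's actual API), here's a generic PySpark sketch that reads a Parquet dataset and aggregates over a couple of columns. The path and column names are hypothetical.

    # A generic PySpark sketch of querying columnar genomic data. This is
    # NOT the project's actual API; the path and column names ("contig",
    # "mapping_quality") are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("genomics-sketch").getOrCreate()

    # Parquet reads only the columns a query touches, which is what makes
    # columnar storage attractive for wide genomic records.
    reads = spark.read.parquet("hdfs:///data/genomes/sample.parquet")
    per_contig = reads.groupBy("contig").avg("mapping_quality")
    per_contig.show()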

There's a lot more detail available on the website, in the in-depth research paper, and in the entirely public codebase.

Deep Neural Networks

Deep neural networks have gotten a lot of press lately, mostly because they can work well on problems most ML algorithms struggle with (image recognition, speech recognition, machine translation).

Ilya Sutskever gave a good, useful intro to deep neural networks. 'Deep' in this case refers to 8-10 layers of neurons 'hidden' between the input and output layers; a traditional neural net has 1-2 hidden layers.

The reasoning behind the 10-layer figure is great. Humans can do a variety of perceptual tasks in about 0.1 seconds, yet neurons are slow, firing only about 100 times per second. That leaves time for roughly 10 sequential firings, so a task a human finishes in under 0.1 seconds should be achievable with about 10 layers of neurons.

One of the big problems with neural networks is that they require a lot of data to train at this depth. They are also not intuitive to tune, which Ilya didn't cover at all in his session. Still, it was a good 101-level talk.
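
Just to make the layer-count idea concrete, here's a toy sketch of a roughly 10-hidden-layer network using scikit-learn's MLPClassifier. It is only an illustration; real deep-learning work like Ilya's uses GPU frameworks and vastly more data.

    # A toy sketch of a "deep" network: ten hidden layers built with
    # scikit-learn's MLPClassifier. Purely illustrative; real deep nets
    # are trained on GPUs with far more data.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=5000, n_features=50, random_state=0)

    deep_net = MLPClassifier(hidden_layer_sizes=(64,) * 10,  # ten hidden layers
                             max_iter=200, random_state=0)
    deep_net.fit(X, y)
    print("Training accuracy:", deep_net.score(X, y))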

"Give me explainability or give me depth"

For more, I'd recommend the Neural Networks Blog.

Open Reception

The reception afterwards was mostly dull. The food was good, and free. The vendors, however, were spreading their own particular flavors of FUD.

I asked 11 different vendors for the data to back up claims behind their value propositions. The responses were a comic mix of dumbfounded expressions, misdirection, and spin. It's hilarious that companies selling to data and analysis professionals don't use data to back up their marketing claims.

Tomorrow...

I find myself excited about the potential to meet awesome people and learn amazing things.

I'm looking forward to tomorrow.
