Deep Learning – expectations & opportunities

A recent post on using DSSTNE (a deep learning library that I had a minor hand in) for training a simple movie recommender sparked off some interesting conversations around the expectations we have of Deep Learning. They can basically be summed up as: is Deep Learning the path to Artificial Intelligence, or will it be a one-hit wonder liable to fall out of fashion quickly?

Having developed actual production systems using machine learning and deep learning, I want to set expectations for deep learning and highlight opportunities that should not be ignored.

If you want the truth to stand clear before you, never be for or against. The struggle between “for” and “against” is the mind’s worst disease. – Seng-ts’an, c. 700 C.E.

In case you haven’t heard, Deep Learning (aka neural networks) is on a comeback after the great AI winter, thanks largely to the dropping cost of compute (i.e. GPUs) and easier development libraries (e.g. CUDA, Theano, Torch, Caffe, TensorFlow and DSSTNE). The biggest reason, however, is easy access to large volumes of data, thanks to the internet and labeled-data collection platforms like Amazon’s Mechanical Turk.

One such dataset is ImageNet. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is one of the biggest competitions in Computer Vision, driving the state of the art in image recognition and understanding. The New York Times wrote about it back in 2014, and again when Baidu was banned from the competition for breaking its rules. The challenge is to classify 1.28 million images belonging to 1,000 classes.

Enter Deep Learning

Deep Learning made a splash at ILSVRC in 2012, when Alex Krizhevsky, Ilya Sutskever & Geoffrey E. Hinton proposed a deep convolutional neural network (five convolutional layers followed by three fully connected layers) that outperformed every non-neural-network approach in ImageNet. Their SuperVision entry, based on this deep learning network (commonly referred to as AlexNet), won the competition with a 16.4% error rate, compared to the next best entry’s error rate of 26.2%. Since then Google, Facebook, Microsoft, Baidu, and others have aggressively pursued research into deep learning. Last year Microsoft won the competition with an error rate of around 3.6% using a network with 152 layers.

In other applications of deep learning, Google saw a 49% drop in speech recognition (i.e. transcription) errors using long short-term memory (LSTM) deep recurrent neural networks. PayPal uses deep learning for fraud detection and prevention (blog, video).

A Cautionary Tale

Clearly, deep learning has been very successful in solving some of the most challenging problems in AI. While we must approach it with a healthy dose of skepticism, we have to acknowledge the successes and explore the possibilities. Trouble usually comes when people throw deep learning at a problem without thinking the problem through.

It is no surprise that Amazon uses deep learning for recommendations, since they have open sourced the engine and blogged about it. But it was not always like that. One of the challenges the personalization team faced when exploring deep learning was that the initial prototypes performed the same as, or worse than, the traditional machine learning approaches used in the field of recommendations.

My biggest contribution to that team’s effort was to model the problem the right way. In this particular case, the right way was not the traditional way recommender systems have been thought of.

[Figure: Modeling the problem the right way]

Even though both algorithms (A and B) used deep learning with a similar-sized network, structure and training parameters, the approach I proposed and demonstrated saw a 6x improvement in precision for the top recommended item. I cannot share the details behind the formulation since Amazon didn’t allow external publication of that work, but if you have access, check out the video of the talk I gave at the Amazon Machine Learning Conference in 2015 😉

Simply throwing data (and compute) at deep learning is not a good idea. You have to model and solve the problem in a manner appropriate for that specific problem.

Promise of Deep Learning

Deep Learning’s biggest promise is actually in learning latent features, i.e. representation learning, which makes the subsequent task of prediction easier. Getting the right features can make the learning and prediction part of the problem trivial. Scientists spend by far most of their time manually engineering the right features using domain knowledge, experience and intuition, supported by standard feature selection and projection algorithms.

With deep learning techniques, neural networks jointly optimize the feature engineering, feature selection, and modeling steps – all at the same time. This opens up the opportunity to skip manual feature engineering and let the machine discover the relative importance of, and non-linear interactions between, the signals as they propagate through the network layers. In some setups, such as autoencoders, the network can learn the important layers of features without any labels at all – i.e. in an unsupervised way. This means we can start applying machine learning to domains where we have large volumes of unlabeled data or where acquiring labels is difficult/expensive.
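
To make this concrete, here is a minimal sketch of unsupervised representation learning with an autoencoder. This is my own illustration (not code from DSSTNE or Amazon), written against the Keras library, with made-up sizes and toy data: the network is trained to reconstruct its own input, so the bottleneck layer learns a compact feature representation without any labels.

import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

n_items = 10000       # e.g. the size of a sparse user-item interaction vector
encoding_dim = 128    # size of the learned representation

inputs = Input(shape=(n_items,))
encoded = Dense(encoding_dim, activation='sigmoid')(inputs)  # learned features
decoded = Dense(n_items, activation='sigmoid')(encoded)      # reconstruction

autoencoder = Model(inputs=inputs, outputs=decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# The target is the input itself, so no labels are needed.
x = (np.random.rand(1024, n_items) > 0.99).astype('float32')  # toy sparse data
autoencoder.fit(x, x, epochs=10, batch_size=256)

# The encoder alone maps raw inputs into the learned feature space,
# which can then feed a downstream supervised model.
encoder = Model(inputs=inputs, outputs=encoded)
features = encoder.predict(x)

This is the same basic idea as the DSSTNE example later in this post, which trains a sparse autoencoder with a 128-node hidden layer between the input and output movie vectors.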

There is also a lot of interest in transfer learning, where features are first learnt in a domain with a large amount of labeled data, or in an unsupervised way. Once learnt, the features are fine-tuned for another, related domain using much smaller datasets. But the practical reality for the moment is the same – deep learning requires a lot of data and computation.
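
As a rough sketch of what that fine-tuning can look like in practice (again my own example, using Keras and its bundled VGG16 ImageNet weights, not anything from the original post): the convolutional layers learned on the large labeled ImageNet dataset are frozen, and only a small new classifier head is trained on the smaller target dataset.

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

# Feature layers learned on the large, labeled ImageNet dataset.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False   # freeze the transferred feature layers

# New task: say, 10 classes with only a few thousand labeled images.
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

model = Model(inputs=base.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(small_x, small_y, epochs=5, batch_size=32)  # small labeled dataset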

No Free Lunch!

When I was doing my Ph.D. at UNSW, I often chatted with Achim Hoffmann, who wrote an interesting perspective on the limitations of machine learning [PostScript file], published at the European Conference on Artificial Intelligence back in 1990. The key passage for me was this:

The results indicate a rather general point. Namely, that for any amount of information which should get acquired, people have to do the complete work. One may choose between writing complex programs and providing a program with a huge amount of input data. In any case, the work cannot be reduced essentially. The machine can only do what it is told to do. And it cannot be told to generate information by itself. … The results do not mean, that machine learning is completely purposeless. But they clearly show that one cannot expect any magic from machine learning.

Even though 25 years have passed since this paper was written, the underlying idea is still very relevant for setting our expectations of machine learning and deep learning. We can throw data and compute at deep learning, but it cannot magically get us the answer. We still need human experts and scientists to figure out how to apply deep learning appropriately, not to mention push the research boundaries of what deep learning is capable of. I think deep learning is a very promising field for exploration and worth the risk of investing experimentation resources in. We just need to be prepared to learn.

 

 


Deep Learning with DSSTNE

Recently I got a couple of EVGA GeForce GTX 1080s to keep my study nicely lit and warm when winter comes to Seattle. My interest in GPUs, though, is more in Deep Learning than in lighting and heating. Deep Learning is actively being explored for all kinds of machine learning applications since it offers the hope of automatic feature learning. In fact, a large number of Kaggle competition winners tend to rely on Deep Learning methods to avoid any kind of hand-crafted feature engineering. Considering how computationally expensive Deep Learning training tends to be, GPUs are essential for doing anything meaningful in a reasonable amount of time.

As part of my job with the Big Data & Analytics Platform team at Oracle, I come across customers that need help tackling some of these cutting-edge machine learning problems – from image understanding to speech recognition and even product recommendations. Part of the challenge is always simplifying the complexity: letting people focus on what they need to do, and hiding away what is important but not necessary for their immediate focus.

Siraj Raval (@sirajology) posted a really nice video earlier this year on how to build a movie recommender system using 10 lines of C++ code and DSSTNE (pronounced “destiny”), a deep learning library that my old team at Amazon built and open-sourced earlier this year.

Aside: DSSTNE does automagic model parallelism across multiple GPUs and is also very fast on sparse datasets. Scott Le Grand (@scottlegrand), the main creator of DSSTNE, has reported it to be almost 15x faster than TensorFlow in some cases.

  • Disclosure: Scott and I used to work together at Amazon on the personalization team that built DSSTNE. We no longer work for Amazon, so I cannot speak to how it is being used inside Amazon.
  • Update: Check out Scott’s talk on DSSTNE at the Data Science Summit 2016.

Back to Siraj’s movie recommender – although he does a great job, I think there are some very important points about the design of DSSTNE that are easily overlooked. DSSTNE has 3 important design elements:

  1. scale – to handle large datasets that won’t fit on a single GPU, and to do so automatically
  2. speed – for faster experimentation cycles, allowing scientists to be more efficient and scale the number of experiments they run
  3. simplicity – for non-experts to experiment with, deploy and manage deep learning solutions in production

In this post, I’ll show how to build a movie recommender writing NO lines of C++ code. DSSTNE is largely configured through a Neural Network Layer Definition Language and 3 binaries – generateNetCDF, train & predict. It uses a JSON-based config file to describe the network, the functions, and the parameters to use when training the model. This approach makes it much easier to run a hyper-parameter search across different network structures without writing a single line of C++ code.

So let us get started by installing CUDA and cuDNN on my Ubuntu 16.04 machine.

CUDA & cuDNN

First the prerequisites for CUDA.

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update
$ sudo apt-get install nvidia-367
$ sudo apt-get install mesa-common-dev
$ sudo apt-get install freeglut3-dev

Download the local run binary from https://developer.nvidia.com/cuda-toolkit

Install the CUDA 8 library:

$ sudo ./cuda_8.0.27_linux.run --override

IMPORTANT: Make sure you DO NOT install the drivers included with the .run file. Keep the other options at their defaults and answer yes to everything else.

Set environment variables:

$ export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
$ export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

At this point, you should be able to check that the graphics cards are recognized by the driver by running nvidia-smi.

$ nvidia-smi
Thu Aug 11 10:51:36 2016       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.35                 Driver Version: 367.35                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:03:00.0     Off |                  N/A |
|  0%   31C    P8     7W / 180W |      1MiB /  8113MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    Off  | 0000:06:00.0      On |                  N/A |
|  0%   35C    P8     8W / 180W |    156MiB /  8110MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    1      4455    G   /usr/lib/xorg/Xorg                             106MiB |
|    1      5229    G   compiz                                          48MiB |
+-----------------------------------------------------------------------------+

To let CUDA build with the latest version of GCC (which it does not officially support), edit the header file that drops you out of the build:

$ sudo nano /usr/local/cuda/include/host_config.h

Comment out the line that complains about the GCC version:

//#error -- unsupported GNU version! gcc versions later than 5.3 are not supported!

Compile the samples:

$ cd ~/NVIDIA_CUDA-8.0_Samples
$ make

Some of the samples still fail, but I’ll look into them later.

Get the cuDNN library from https://developer.nvidia.com/cudnn and follow the instructions to install it.

Now for DSSTNE (Destiny)

First, you need to install the prerequisites for DSSTNE. I’ve put together a shell script that runs the steps documented here.

Then, make sure you have the paths set up correctly. I had something like this in my .bashrc.

# Add CUDA to the path
# Could use /usr/local/cuda/bin:${PATH} instead of explicit cuda8
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

# Add cuDNN library path
export LD_LIBRARY_PATH=/usr/local/cudnn-8.0/lib64:${LD_LIBRARY_PATH}

# Add OpenMPI to the path
export PATH=/usr/local/openmpi/bin:${PATH}

# Add the local libs to path as well
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib

Now to get, build and test DSSTNE:

$ git clone https://github.com/amznlabs/amazon-dsstne.git
$ cd amazon-dsstne/src/amazon/dsstne
$ make

This will build the binaries under amazon-dsstne/src/amazon/dsstne/bin for:

  • generateNetCDF – converts CSV/text files into the NetCDF format used by DSSTNE
  • train – trains a network using the input data, output data and a config file with the network definition
  • predict – uses a pre-trained network to make predictions.

There is a nice example of training an autoencoder-based recommender for the MovieLens 20M dataset that comes with the code.

Download the data in the CSV/Text File format. If you have your own dataset, make sure it conforms to this data format.

$ wget https://s3-us-west-2.amazonaws.com/amazon-dsstne-samples/data/ml20m-all

Convert the text data into NetCDF-format data for the network input and the expected network output. This also builds the feature and sample index files.

$ generateNetCDF -d gl_input -i ml20m-all -o gl_input.nc -f features_input -s samples_input -c
$ generateNetCDF -d gl_output -i ml20m-all -o gl_output.nc -f features_output -s samples_input -c

Train the network using the config for 30 epochs and a batch size of 256. It will checkpoint and save the network every 10 epochs – handy if you want to explore how the network converges over epochs.

$ train -c config.json -i gl_input.nc -o gl_output.nc -n gl.nc -b 256 -e 30

Once the training completes, you can use the network and GPU to make batch-mode offline predictions in the original text format. The following command generates 10 movie recommendations for each user in the ml20m-all file (i.e. -r ml20m-all) into the recs file (-s recs). It also lets you mask or filter out movies that the user has already seen (-f ml20m-all).

$ predict -b 256 -d gl -i features_input -o features_output -k 10 -n gl.nc -f ml20m-all -s recs -r ml20m-all

That’s it. While it’s training, you can use nvidia-smi to see which GPU it is running on and how much memory it uses.

$ nvidia-smi
Thu Aug 11 12:11:58 2016       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.35                 Driver Version: 367.35                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:03:00.0     Off |                  N/A |
|  0%   42C    P2    77W / 180W |    524MiB /  8113MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    Off  | 0000:06:00.0      On |                  N/A |
|  0%   36C    P8     8W / 180W |    170MiB /  8110MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      5523    C   train                                          521MiB |
|    1      4455    G   /usr/lib/xorg/Xorg                             106MiB |
|    1      5229    G   compiz                                          60MiB |
+-----------------------------------------------------------------------------+

Look ma, no C++ code!

The number of input nodes is automatically inferred from the input data file, and the number of output nodes is inferred from the expected output data file.

Everything else is defined in config.json or through command line flags (the batch size and the number of epochs for training). The Neural Network Layer Definition Language documentation describes everything that DSSTNE supports.

{
  "Version" : 0.7,
  "Name" : "AE",
  "Kind" : "FeedForward",

  "SparsenessPenalty" : {
    "p" : 0.5,
    "beta" : 2.0
  },

  "ShuffleIndices" : false,

  "Denoising" : {
    "p" : 0.2
  },

  "ScaledMarginalCrossEntropy" : {
    "oneTarget" : 1.0,
    "zeroTarget" : 0.0,
    "oneScale" : 1.0,
    "zeroScale" : 1.0
  },

  "Layers" : [
    { "Name" : "Input", "Kind" : "Input", "N" : "auto", "DataSet" : "gl_input", "Sparse" : true },
    { "Name" : "Hidden", "Kind" : "Hidden", "Type" : "FullyConnected", "N" : 128, "Activation" : "Sigmoid", "Sparse" : true },
    { "Name" : "Output", "Kind" : "Output", "Type" : "FullyConnected", "DataSet" : "gl_output", "N" : "auto", "Activation" : "Sigmoid", "Sparse" : true }
  ],

  "ErrorFunction" : "ScaledMarginalCrossEntropy"
}

Applying deep learning techniques to a problem such as recommendation typically means lots of experimentation, exploring different mixes of:

  1. types of input data and output targets – purchase or browsing history, ratings, product attributes such as category, cost and color, user attributes such as age or gender, etc.
  2. network structures – the number of layers, number of nodes per layer, connections between layers, etc.
  3. network and training parameters – learning rates, denoising, drop-outs, activation functions, etc.

How you pose the problem and prepare the dataset (#1) is VERY important in applying deep learning. If you pose the machine learning problem incorrectly, not even deep learning and a cloud full of GPUs can help you there.

But once you have that, figuring out the right network structure (#2) and training parameters (#3) can mean the difference between success and failure. That means running a lot of experiments – essentially a hyper-parameter search problem.

The JSON-based config simplifies the hyper-parameter search problem. You can generate a large number of config variations and try them out in parallel, quickly narrowing the options down to the configurations that are most suitable for that particular application.
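
As a sketch of how that search might be scripted (my own illustration; the swept hidden-layer sizes and denoising rates are arbitrary choices, and the base config mirrors a simplified version of the example above), a few lines of Python can emit one config file per combination:

import itertools
import json

# Fixed parts of the network definition, mirroring the config.json above
# (SparsenessPenalty omitted here to keep the sketch short).
base_config = {
    "Version": 0.7,
    "Name": "AE",
    "Kind": "FeedForward",
    "ShuffleIndices": False,
    "ScaledMarginalCrossEntropy": {
        "oneTarget": 1.0, "zeroTarget": 0.0, "oneScale": 1.0, "zeroScale": 1.0
    },
    "ErrorFunction": "ScaledMarginalCrossEntropy",
}

hidden_sizes = [64, 128, 256, 512]   # arbitrary sweep values
denoising_rates = [0.1, 0.2, 0.4]

for i, (n_hidden, p) in enumerate(itertools.product(hidden_sizes, denoising_rates)):
    config = dict(base_config)
    config["Denoising"] = {"p": p}
    config["Layers"] = [
        {"Name": "Input", "Kind": "Input", "N": "auto",
         "DataSet": "gl_input", "Sparse": True},
        {"Name": "Hidden", "Kind": "Hidden", "Type": "FullyConnected",
         "N": n_hidden, "Activation": "Sigmoid", "Sparse": True},
        {"Name": "Output", "Kind": "Output", "Type": "FullyConnected",
         "DataSet": "gl_output", "N": "auto", "Activation": "Sigmoid",
         "Sparse": True},
    ]
    with open("config_%02d.json" % i, "w") as f:
        json.dump(config, f, indent=2)

# Each config_NN.json can then be passed to `train -c config_NN.json ...`,
# with the runs farmed out to different GPUs or machines in parallel.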

Given that these are still early days for Deep Learning, the speed and scale of experimentation have a huge bearing on what we learn about using Deep Learning. Plus I know my study will be warm for this coming winter.


Big Data is also Big Compute

Getting to the right machine learning or statistical model is a process of discovery – an iterative and incremental process of exploration. Analysts and scientists don’t know a priori the perfect algorithmic combination that will yield the best possible model, even if the task or problem is well understood.

Typically a data scientist will start with an initial guess, using their best judgment – a mix of industry best practices and tacit knowledge from personal experience – to come up with the algorithms for a machine learning pipeline:

  1. feature generation – transforms raw data into features/signals for the model
  2. feature normalization or transformation – clean up, center or rescale
  3. feature selection – keep the best mix of features to battle the curse of dimensionality
  4. modeling algorithm – the actual model for regression or classification
  5. evaluation – on a held-out test set, to see how well the pipeline works

Then begins an iterative process of exploration, looking for improvements across the different combination of algorithms and parameter settings. Even on a small dataset, the number of possible combinations can grow dramatically.

For example, say I had 6 different feature generation algorithms, 2 normalization options, 2 feature selection algorithms and 7 modeling algorithms: that is at least 6 x 2 x 2 x 7 = 168 combinations, without even considering the possible hyper-parameters for each of the algorithms. Evaluating each combination across multiple data partitions for k-fold evaluation with k=10 gives us 1,680 combinations. If we were working with time-series datasets where a model is retrained weekly, and we evaluated the stability of the pipeline across each of the 52 weeks of the year over 2 years, we would have 168 x 104 = 17,472 combinations. A daily model evaluated over the same period would mean 122,640 combinations. Clearly, this quickly becomes a Big Compute problem.
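
The arithmetic is easy to sanity-check with a few lines of Python (the algorithm names below are just placeholders; only the counts matter):

import itertools

feature_generators = ["fg%d" % i for i in range(6)]   # 6 options
normalizations     = ["raw", "zscore"]                # 2 options
feature_selectors  = ["all", "top_k"]                 # 2 options
models             = ["m%d" % i for i in range(7)]    # 7 options

pipelines = list(itertools.product(
    feature_generators, normalizations, feature_selectors, models))
print(len(pipelines))            # 168 pipeline combinations

print(len(pipelines) * 10)       # 1,680 with 10-fold cross-validation
print(len(pipelines) * 52 * 2)   # 17,472 with weekly test windows over 2 years
print(len(pipelines) * 365 * 2)  # 122,640 with daily test windows over 2 years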

This is also an embarrassingly parallel problem, and it lends itself well to Spark/Hadoop environments. Even if the datasets are small, distributing the thousands of modeling combinations across a cluster of machines can dramatically reduce the time a scientist has to spend legitimately slacking off while waiting for results.
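
Here is a rough sketch of what that distribution can look like with PySpark. This is my own illustration, not the code from the engagement: build_and_score is a hypothetical stand-in for fitting and evaluating one pipeline combination on one test window.

import itertools
from pyspark import SparkContext

feature_generators = ["fg%d" % i for i in range(6)]
normalizations     = ["raw", "zscore"]
feature_selectors  = ["all", "top_k"]
models             = ["m%d" % i for i in range(7)]
weeks              = range(104)   # 52 weekly test windows x 2 years

def build_and_score(task):
    feature_gen, normalization, selector, model, week = task
    # ... fit the pipeline on data up to `week` and score it on that week;
    # return a small, serializable result record.
    score = 0.0  # placeholder for the real evaluation
    return {"task": task, "score": score}

sc = SparkContext(appName="pipeline-search")

# 6 x 2 x 2 x 7 x 104 = 17,472 independent tasks, fanned out across the cluster.
tasks = list(itertools.product(
    feature_generators, normalizations, feature_selectors, models, weeks))

results = (sc.parallelize(tasks, 500)
             .map(build_and_score)
             .collect())

best = max(results, key=lambda r: r["score"])
print(best["task"], best["score"])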

Recently, this is exactly what we did for a customer. My team at Oracle helps customers from all industries realize the value Big Data & Analytics platforms can bring to their organizations, by engaging in pilots and proofs-of-concept. This PoC for a leading North American commodity producer focused on improving their price forecasting capabilities. A more accurate price prediction means a better opportunity to make sell-vs-hold decisions. They wanted to use the data they had (weekly commodity prices, daily international exchange rates, monthly economic data) and data that they didn’t have (hourly weather) to see if this would lead to more accurate predictions. Given the short 3-week sprint, our intent was to help their analysts become more efficient going forward – i.e. scale their capacity for experimentation.

[Figure: Testing 17k models using Spark]

The figure above shows the results from evaluating each of the 17,472 combinations for a classification-based approach that simply predicts whether the price will go up or not in the following week, across a 2-year period. Each dot represents a single combination of a machine learning pipeline = [feature generation, feature normalization, feature selection, modeling algorithm, test set]. The color denotes the modeling algorithm for that run. Formulating the problem as a 2-class classification problem helps when dealing with a rather noisy target, by not trying to fit the exact price too closely, and is also a good way to avoid data/modeling biases. A similar approach was then used to explore a further 22,464 combinations of models that predicted the actual price.

The search found a better algorithm (in red below) that predicted the commodity price within +/-5% of the actual price 73% of the time, compared to 40% for the algorithm the customer uses (in blue below). The figure below shows the narrow error range of the newer algorithm compared to the existing one, which predicts prices over and under the actual price by up to 20%.

[Figure: Price error bounds]

This shotgun approach may not appeal to machine learning purists, but it is a great way to quickly zero in on the combinations that consistently perform well and eliminate the ones that add little or no value.

Big Data technologies such as Spark/Hadoop are also Big Compute technologies that scale the number of experiments a scientist can run, making them more efficient and allowing them to explore wider and deeper than they otherwise could. In this particular case, it helped identify a new algorithm that improved the accuracy of the price forecast, which has a direct impact on the bottom line of any commodity producer.