Plant Science Initiative @ NC State University

In my role at Oracle, I get to work across many industries on some very interesting problems. One that I have been involved with recently is the collaboration between Oracle and North Carolina (NC) State University's Plant Science Initiative.

In particular, we've been working with the College of Agriculture and Life Sciences (CALS) to launch a big data project that focuses on sweet potatoes. The goal is to help geneticists, plant scientists, farmers and industry partners develop better varieties of sweet potatoes, and to speed up the pace at which research is commercialized. The big question: can we use the power of Big Data, Machine Learning and Cloud computing to reduce the time it takes to develop and commercialize a new sweet potato variety from 10 years to three or four?

One of the well-known secrets to driving innovation is scaling and speeding up experimentation cycles. In addition, reducing the friction associated with collaborative research and development can help bring research to market more quickly.

My team is helping the CALS group develop engagement models that facilitate interdisciplinary collaboration using the Oracle Cloud. Consider geneticists, plant science researchers, farmers, packers and distributors of sweet potatoes all being able to contribute their data and insights to optimize different aspects of sweet potato production, from the genetic sequence all the way to the dinner plate.

I am extremely excited by the potential impact that open collaboration between these stakeholders can have on the sweet potato and precision agriculture industries.

More details at cals.ncsu.edu


It is a go for Amazon Go!

The super secret, exciting project that I spent days and nights slogging over when I was at Amazon has finally been announced – Amazon Go. A checkout-less, cashier-less, magical shopping experience in a physical store. Check out the video to get a sense of the simplified customer experience: walk in, pick up what you need and walk out. No lines, no waiting, no registers.

I'm very proud of the team of scientists and engineers, spanning software, hardware, electrical and optics, that rallied together to build a solution combining machine learning, computer vision, deep learning and sensor fusion. The project was an exercise in iterative experimentation and continual learning, refining all aspects of the hardware and software as well as the innovative vision algorithms. I was personally involved in 5 different prototypes and the winning solution that ticked all the boxes more than 2 years ago.

I remember watching Jeff Bezos and the senior leadership at Amazon playing with the system, picking items up and returning them to the shelves. Smiles and high-fives all around as products were added to and removed from the shopper's virtual cart, with the correct quantity of each item.

Needless to say, there is significant effort after the initial R&D is done to move something like this to production, so it is not surprising that it has taken 2 years since then to get it ready for the public. Well done to my friends at Amazon for getting the engineering solution over the line to an actual store launch in early 2017.

Photo Credit: Original Image by USDA – Flickr

 

Precision Agriculture needs Scalable Automation

If you have ever looked out an airplane window as it flies over land, chances are you have seen spectacular landscapes, sprawling cities or the quilted patchwork of farms. Over the centuries, science, machines and better land management practices have increased agricultural output dramatically, allowing farmers to manage and cultivate ever larger swaths of land. The era of Big Data and Artificial Intelligence is pushing these productivity gains even further with Precision Agriculture – for example, using satellite images to evaluate the health of crops, direct farming decisions or predict the yields we can expect at harvest time.

Earlier this year, my team at Oracle (chiefly Venu Mantha, Marta Maclean & Ashok Holla) worked with a large agricultural customer to help them shift towards a more data-driven approach to agriculture that would maximize their yields and reduce waste. We explored a variety of technologies, from field sensors streaming measurements over Internet-of-Things (IoT) networks to the geo-spatial fusion of a variety of historical and real-time data. The idea is to support farmers with contextually relevant data, allowing them to make better decisions. Apart from the people and process challenges associated with such a dramatic business transformation for an ancient sector, the major technological obstacle to realizing the potential of precision agriculture is in fact scalable automation.

[Image: patchwork of farms]

Let's take a concrete example. A key aspect of the proposal was the use of aerial or satellite imagery to assess the health of the crop. Acquiring satellite or aerial imagery on demand is significantly easier than it was just a few years ago, with the growing number of vendors in the market and the falling cost of acquisition (e.g. Digital Globe, Free Data Sources). Now that we can get imagery in high resolution (down to the level of an individual tree), that is multi-spectral (color and infrared bands) and that covers large expanses of land (100 acres or more), the challenge has shifted to a well recognized one in the world of Big Data: how do you sift through large volumes of data to extract meaningful and actionable insights quickly?

If one image covers 100 acres and takes a Geographic Information System (GIS) specialist an hour to review manually, covering 100,000 acres would mean a team of 40 GIS specialists working without a break for about 25 hours to get through the full batch of 1,000 images. Clearly, throwing more people at the problem is not going to work. Not only is it slow and error-prone, but finding enough specialists with domain knowledge would be a challenge.

The answer is to automate the image analysis pipelines and use distributed computing to parallelize and speed up the analysis. Oracle's Big Data Spatial & Graph (BDSG) is particularly well suited for partitioning, analyzing and stitching back together large image blocks using Hadoop's MapReduce framework. It understands common GIS and image file formats and gives the developer Java bindings to the OpenCV image processing library as part of its multimedia analytics capabilities. You can either split up a large image (raster or vector) and analyze each chunk in parallel, or analyze each image in parallel. You can write your own image processing algorithms or compose one from the fundamental image processing algorithms available in OpenCV.
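To make that pattern concrete, here is a rough PySpark-plus-OpenCV sketch of the same split-and-analyze-in-parallel idea in Python. It is purely illustrative and not the BDSG Java API; the tile paths and the vegetation_fraction heuristic are made up for the example.

# Illustrative sketch only (not the BDSG Java API). Assumes PySpark and
# OpenCV (cv2) are installed on every worker; the tile paths are hypothetical.
import cv2
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tile-analysis").getOrCreate()
sc = spark.sparkContext

def vegetation_fraction(tile_path):
    """Crude per-tile crop-health proxy: fraction of pixels where green dominates."""
    img = cv2.imread(tile_path)                       # BGR image, or None if missing
    if img is None:
        return (tile_path, None)
    b, g, r = cv2.split(img.astype(np.float32))
    greenish = (g > r) & (g > b)
    return (tile_path, float(np.count_nonzero(greenish)) / greenish.size)

# Each element is one pre-cut tile of a large raster; score all tiles in parallel.
tile_paths = ["/data/tiles/field_%04d.png" % i for i in range(1000)]
results = sc.parallelize(tile_paths, numSlices=100).map(vegetation_fraction).collect()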

[Image: the alignment process]

The challenge for the customer, however, was coming up with an algorithm that could correct the image misalignment that naturally creeps in during the image acquisition or stitching process. A misaligned image would require a GIS specialist to open it and manually adjust it using tools like ArcGIS from ESRI. Analyzing a misaligned image would produce incorrect results and, ultimately, bad decisions.

This is where the BDSG product engineering team (Siva Ravada, Juan Carlos Reyes & Zazhil ha Herena) and I stepped in to design and develop a solution that automates the image alignment and analysis processes. We have a patent application around the solution, which can be used in a variety of domains beyond farming – think of urban planning, defense, law enforcement and even traffic reporting.
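I can't share the patented method itself, but for intuition, below is a generic feature-based image registration sketch in Python with OpenCV (ORB keypoints plus a RANSAC homography), which is one standard way to align a drifted capture to a reference image. It is emphatically not the BDSG implementation, and the file names are hypothetical.

# Generic image registration sketch (ORB keypoints + RANSAC homography).
# Illustrative only; not the patented BDSG alignment method.
import cv2
import numpy as np

ref = cv2.imread("reference_ortho.png", cv2.IMREAD_GRAYSCALE)
img = cv2.imread("misaligned_capture.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_img, des_img = orb.detectAndCompute(img, None)

# Match descriptors and keep the strongest matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_img, des_ref), key=lambda m: m.distance)[:500]

src = np.float32([kp_img[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly estimate a homography and warp the capture onto the reference frame.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))
cv2.imwrite("aligned_capture.png", aligned)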

Just the manual alignment of images would take a GIS expert 3–8 minutes per image. With our solution, the entire alignment and analysis process takes less than 90 seconds per image and can handle hundreds of images in parallel. Instead of a team of 40 GIS experts working without a break for 25 hours, we can now analyze imagery covering 100,000 acres in about 15 minutes.
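As a sanity check on those numbers, here is the back-of-the-envelope arithmetic in a few lines of Python, assuming one image per 100 acres, an hour per image for manual review, and roughly 100 images processed in parallel by the automated pipeline:

# Back-of-the-envelope check of the manual vs. automated estimates above.
acres_total, acres_per_image = 100000, 100
images = acres_total // acres_per_image          # 1,000 images to review

manual_hours = images * 1.0 / 40                 # 40 specialists, 1 hour per image
automated_minutes = images * 90.0 / 100 / 60     # 90 s per image, ~100 in parallel
print(manual_hours, automated_minutes)           # 25.0 hours vs. 15.0 minutes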

The key lesson here is that although we can access interesting sensors and data sources to inform and guide Precision Agriculture, successful technological solutions require scalable automation that minimizes the human effort rather than adding to it. The adoption of these solutions in practice further depends on the maturity of the organization in embracing change.

Want to know more or talk about how the Oracle Big Data & Analytics team can help your business objectives? Connect with me on LinkedIn and follow me on Twitter


What’s the (big) deal with AlphaGo?

In March of 2016, Google's AlphaGo beat the world champion Lee Sedol at the game of Go, a feat hailed as an important milestone for Artificial Intelligence (AI). It was also a big moment for Deep Learning and Reinforcement Learning. But what exactly was the big deal?

Let's start with a simple game of Noughts and Crosses, also known as Tic-Tac-Toe: a game played on a 3×3 grid by 2 players taking turns placing O (noughts) and X (crosses), with the objective of getting 3 noughts or crosses in a row.

Source: Wikipedia

Naive counting leads to 19,683 possible board layouts (3^9, since each of the nine spaces can be X, O or blank), and 362,880 (i.e., 9!) possible games (different sequences for placing the Xs and Os on the board). – Wikipedia

Now we (i.e. humans) play the game without enumerating all possible board layouts or exploring all possible games. However, that is how computers are typically programmed to play. After each move, the computer generates the tree of moves, where each branch represents a sequence of moves. It builds the tree to a certain depth, identifies the 'branch' most likely to lead to a victory, and selects that as its next move. The process is repeated after the other player makes a move, until the computer or the human wins. This brute-force search-and-prune strategy is fundamentally how Deep Blue beat Garry Kasparov in 1997 – and the game of chess has about 10^43 legal positions.
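To make the brute-force idea concrete, here is a tiny minimax sketch for Tic-Tac-Toe in Python. It is my own illustrative code, not Deep Blue's algorithm; a real chess engine adds depth limits, alpha-beta pruning and a handcrafted evaluation function on top of this basic search.

# Minimal brute-force minimax for Tic-Tac-Toe (illustrative only).
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from 'player's perspective: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:                                   # the previous player just won
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                      # draw
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        opp_score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if -opp_score > best_score:         # the opponent's loss is our gain
            best_score, best_move = -opp_score, m
    return best_score, best_move

# X to move on an empty board; the best achievable score is 0 (a forced draw).
print(minimax(list(" " * 9), "X"))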

Now let's look at the game of Go.

There are 1,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000 possible positions—that's more than the number of atoms in the universe, and more than a googol times larger than chess. – Google Blog

With 10^171 possible positions in Go (I counted the zeros), the brute-force search Deep Blue used to beat Kasparov in chess just won't work. So AlphaGo had to use intuition when selecting between moves, a bit like the way humans do.

In this context, intuition means working with limited information and taking a shortcut by selecting only a tiny subset of options to arrive at a good move. The idea is similar to the "thin-slicing" that Malcolm Gladwell discusses in his book Blink [ kindle ]. Intuition is a function of experience and practice, introducing both desirable efficiencies and undesirable biases into our daily decision making, but that is a post for another day.

During the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network—an approach that is perhaps closer to how humans play. Furthermore, while Deep Blue relied on a handcrafted evaluation function, the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement learning methods. – Nature Paper

This means that, given data for supervised learning (i.e. deep learning) and time to practice (i.e. reinforcement learning), AlphaGo could continue to get better and better, without relying on human experts to come up with heuristics or evaluation functions.
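As a toy sketch of that general idea (and nothing like AlphaGo's actual architecture or its Monte Carlo tree search): a policy network proposes a handful of promising moves and a value network scores the resulting positions, so only a tiny slice of the tree is ever examined. Here policy_net, value_net and apply_move are hypothetical stand-ins.

# Toy policy/value-guided move selection; NOT AlphaGo's actual search.
import numpy as np

def choose_move(position, legal_moves, policy_net, value_net, top_k=5):
    # Policy network: a probability for each legal move; keep only the top few.
    probs = policy_net(position, legal_moves)         # shape: (len(legal_moves),)
    candidates = np.argsort(probs)[::-1][:top_k]

    # Value network: score each candidate's resulting position; pick the best.
    best_move, best_value = None, -np.inf
    for idx in candidates:
        move = legal_moves[idx]
        next_position = apply_move(position, move)    # assumed game-rules helper
        value = value_net(next_position)              # estimated win probability
        if value > best_value:
            best_move, best_value = move, value
    return best_move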

This was an extremely promising sign for Artificial Intelligence (AI), getting us one tiny, tiny (did I mention tiny?) step closer to general purpose AI. Deep learning is used to reduce the feature engineering that would typically go into setting up a machine learning algorithm. Reinforcement learning mimics the idea of practice or experimentation that we as humans use when learning how to play tennis, play an instrument or make a drawing.

Learning to mimic intuition without explicit feature engineering (via deep learning), and honing it through practice (via reinforcement learning), offers a very interesting template for teaching machines to deal with the kinds of problems that humans excel at using their intuition. It also gives machines a way to operate on large search-space problems (e.g. traveling salesman, scheduling and optimization) and get reasonably good solutions given a fair trade-off of time and compute resources. This is why AlphaGo is a big deal.


Oracle’s Big Data & Analytics Platform for Data Scientists

I work for Oracle, helping businesses realize the potential of data science, big data and machine learning to grow their revenues, minimize their costs and open up new opportunities to leapfrog their competition. That means working with some amazing folks from different parts of the business and across industries. Invariably I'm asked: so, what does Oracle offer in the space of data science and machine learning on Big Data?

Let's leave aside the machine learning and optimization solutions embedded within different Oracle products and focus on the platform pieces for Big Data and Analytics. Let's also ignore for the moment the data management questions around security, encryption and integration, which are important but chiefly the concern of the IT department. Let's focus only on what the platform offers data analysts and data scientists.

Oracle’s Big Data & Analytics Platform enables data science and machine learning at scale by taking the best that open-source offers, putting it together as an engineered solution and adding capabilities and features where open-source falls short.

Oracle Big Data Cloud Service (BDCS) is essentially Hadoop/Spark in a "Box" (or rather a number of dedicated cloud-based machines connected by a 40Gb/sec InfiniBand fabric, making network IO between cluster nodes very fast). It runs the Cloudera Enterprise version of Hadoop on engineered hardware optimized for speeding up analytics. Analysts can use Python, R, Scala and Java for data manipulation, analytics and machine learning using open-source libraries such as SparkML. Python users such as myself can use open-source libraries (e.g. numpy, scipy, pandas, scikit-learn, seaborn, folium) inside Jupyter notebooks via the PySpark kernel to operate on distributed datasets.
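For a flavour of what that looks like in a notebook, here is a minimal PySpark/SparkML sketch; the file path, feature columns and label column are hypothetical.

# Minimal PySpark/SparkML sketch for a notebook session on the cluster.
# The path, feature columns and label column are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-demo").getOrCreate()

df = spark.read.csv("hdfs:///data/customers.csv", header=True, inferSchema=True)

assembler = VectorAssembler(inputCols=["age", "tenure", "monthly_spend"],
                            outputCol="features")
train = assembler.transform(df).select("features", "label")

model = LogisticRegression(maxIter=20).fit(train)   # trains across the cluster
print(model.summary.areaUnderROC)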

Out of the box, R users don't get all the benefits of SparkML, so Oracle R Advanced Analytics for Hadoop (ORAAH) addresses that gap, giving R users access to SparkML implementations of machine learning algorithms. In addition, ORAAH's own implementations of Linear Regression, Generalized Linear Models and Neural Networks are faster and more efficient than the open-source implementations in Spark MLlib. In experiments run by Marcos Arancibia's team, ORAAH's LM model training was 6x-32x faster than Spark MLlib. Similarly, GLM models trained by ORAAH were 4x-15x faster than Spark MLlib. More importantly, ORAAH continues to scale linearly under memory constraints, whereas Spark MLlib simply fails.

[Image: ORAAH vs Spark MLlib benchmark]

ORAAH is available on-premises (BDA) and in the cloud (BDCS).

But not everyone can code, or should have to code, to transform and explore data in Hadoop. Oracle Big Data Discovery (BDD) provides "citizen data scientists" and data analysts with an interactive way to find, transform and visually discover patterns or relationships within the data stored in Hadoop. It works by keeping a sample of the Hadoop data in memory, automatically generating graphs that describe the shape of each attribute, and allowing users to interactively manipulate the data.

Once the analyst is comfortable with the transformations, he or she can apply them to the full dataset with a click of a button. It is a very nice tool for data analysts and data scientists alike in preparing a dataset before switching to Jupyter or RStudio to use the distributed machine learning algorithms in Spark or ORAAH.

Data isn't always in a tabular form, nor does it always make sense to analyze it that way. The Spatial component of Oracle Big Data Spatial & Graph (BDSG) scales up the analysis of images and geo-spatial data using Hadoop and OpenCV, with a Java interface. Just last week I finalized a patent application on a method to automate the alignment and analysis of aerial and satellite imagery against known structures, which I had prototyped earlier this year. For one potential customer, scaling their operations to cover 100,000 acres of agricultural properties no longer requires hiring a team of 40 GIS specialists and making them work around the clock to keep up with the volume of imagery expected each week.

The Graph component of BDSG provides an in-memory graph engine and algorithms for fast property graph analysis. The in-memory graph engine can handle graphs with 20-30 billion edges on a single node, scale out to multiple nodes as the graph grows beyond the limits of a single machine, and perform 10-50x faster than other graph engines at finding communities, optimal paths and even product recommendations.
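To give a feel for the kinds of questions a property-graph engine answers (communities, optimal paths, recommendations), here is a tiny single-machine illustration using the open-source networkx library. This is emphatically not the BDSG in-memory engine or its API, and nowhere near its scale; it only shows the flavour of the analysis.

# Tiny single-machine illustration of community detection and shortest paths
# with networkx; not the BDSG in-memory graph engine or its API.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()                    # small, well-known social graph

# Communities via greedy modularity maximization.
communities = community.greedy_modularity_communities(G)
for i, nodes in enumerate(communities):
    print("community", i, sorted(nodes))

# Optimal (shortest) path between two members.
print(nx.shortest_path(G, source=0, target=33))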

Analysts who have been using Oracle Advanced Analytics (OAA) as part of the Oracle Database to train machine learning models within the database, or using R, can continue to use the same interface while bringing in data from Hadoop or NoSQL databases via Oracle Big Data SQL. Big Data SQL pushes the predicates (i.e. query processing and filtering) down to Hadoop or the NoSQL database and pulls across only the smaller, filtered dataset to the relational database. This allows analysts to use SQL, Oracle Data Miner or R while manipulating and joining datasets across Hadoop and the database.

Once the analysis is done, it is time to tell the story. Oracle Data Visualization (DV) is an interactive data visualization and presentation platform, available as a desktop application or a cloud service, letting business intelligence professionals, analysts and scientists visually reveal the story hidden within the data.

There are also a number of things that were announced at Oracle Open World 2016 and are coming soon. One of the most exciting for data science is the Big Data Cloud Service – Compute Edition (BDCS-CE): an on-demand, elastic Hadoop/Spark cluster that lets data scientists spin up clusters as needed, scale them up on demand and tear them down afterwards. From an analyst's perspective, it is a perfect environment for sandboxing ad-hoc queries and experimentation before operationalizing them as analytics pipelines. There is also the Event Hub Cloud Service, which provides a Kafka-based streaming data platform.

Want to know more or talk about how the Oracle Big Data & Analytics team can help your business objectives? Connect with me on LinkedIn and follow me on Twitter

Deep Learning – expectations & opportunities

A recent post on using DSSTNE (a deep learning library that I had a minor hand in) to train a simple movie recommender sparked off some interesting conversations around the expectations we have of Deep Learning. They can basically be summed up as: is Deep Learning the path to Artificial Intelligence, or will it be a one-hit wonder that falls out of fashion quickly?

Having developed actual production systems using machine learning and deep learning, I want to set expectations for deep learning and highlight opportunities that should not be ignored.

If you want the truth to stand clear before you, never be for or against. The struggle between "for" and "against" is the mind's worst disease. – Seng-ts'an, c. 700 C. E.

In case you haven't heard, Deep Learning (a.k.a. neural networks) is on a comeback after the great AI winter, thanks largely to the dropping cost of compute (i.e. GPUs) and easier development libraries (e.g. CUDA, Theano, Torch, Caffe, TensorFlow and DSSTNE). However, the biggest reason is easy access to large volumes of data, thanks to the internet and labelled data collection platforms like Amazon's Mechanical Turk.

One such dataset has been put together by ImageNet. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is one of the biggest challenges in Computer Vision, benchmarking the state of the art in image recognition and understanding. The New York Times wrote about it back in 2014, and again when Baidu was banned from the competition for breaking its rules. The challenge is to classify 1.28 million images belonging to 1,000 classes.

Enter Deep Learning

Deep Learning made a splash at ILSVRC in 2012, when Alex Krizhevsky, Ilya Sutskever & Geoffrey E. Hinton proposed a deep neural network with five convolutional layers (plus three fully connected ones) that outperformed all of the non-neural-network approaches in the ImageNet challenge. Their SuperVision entry, based on this deep learning network (commonly referred to as AlexNet), won the competition with a 16.4% error rate, compared to the next best entry's 26.2%. Since then Google, Facebook, Microsoft, Baidu and others have invested aggressively in deep learning research. Last year Microsoft won the competition with an error rate of around 6% using a network with 152 layers.

In other applications of deep learning, Google saw a 49% drop in their speech recognition (i.e. transcription) errors using a long short-term memory (LSTM) deep recurrent neural network. PayPal uses deep learning for fraud detection and prevention (blog, video).

A Cautionary Tale

Clearly deep learning has been very successful in solving some of the most challenging problems in AI. While we must approach it with a healthy dose of skepticism, we have to acknowledge the successes and explore the possibilities. The trouble often comes when people throw deep learning at a problem without thinking the problem through.

It is no surprise that Amazon uses deep learning for recommendations, since they have open-sourced the engine and blogged about it. But it was not always like that. One of the challenges the personalization team faced when exploring deep learning was that the initial prototypes gave the same or worse performance compared to the traditional machine learning approaches used in the field of recommendations.

My biggest contribution to that team’s effort was to model the problem the right way. In this particular case, the right way was not the traditional way recommender systems have been thought of.

[Image: modeling the problem the right way]

Even though both algorithms (A and B) used deep learning with similarly sized networks, structures and training parameters, the approach I proposed and demonstrated saw a 6x improvement in precision for the top recommended item. I cannot share the details behind the formulation since Amazon didn't allow external publication of that work, but if you have access, check out the video of the talk I gave at the Amazon Machine Learning Conference in 2015 😉

Simply throwing the data (and compute) at deep learning is not a good idea. You have to model and solve the problem in a manner appropriate for that specific problem.

Promise of Deep Learning

Deep Learning's biggest promise is actually in learning latent features, i.e. representation learning, which makes the subsequent task of prediction easier. Getting the right features can make the learning and prediction parts of the problem trivial. Scientists spend by far the largest share of their time manually engineering the right features using domain knowledge, experience and intuition, supported by standard feature selection and projection algorithms.

In deep learning techniques, neural networks jointly optimize the feature engineering, feature selection and modeling steps – all at the same time. This opens up the opportunity for us to skip manual feature engineering, and let the machine discover the relative importance and non-linear interaction between the signals as they propagate through the network layers. In some setups, such as autoencoders, the network can learn the important layers of features without any labels at all – i.e. in an unsupervised way. This means we can start applying machine learning to domains where we have large volumes of unlabelled data or where acquiring labels is difficult/expensive.
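As a toy illustration of unsupervised representation learning, here is a minimal autoencoder sketch in Keras. The dimensions and the random stand-in data are arbitrary placeholders, and this is not tied to any of the systems discussed above.

# Minimal autoencoder sketch: learns a compressed representation of the inputs
# without any labels. Dimensions and data are placeholders.
import numpy as np
from tensorflow.keras import layers, models

input_dim, code_dim = 784, 32                     # e.g. flattened 28x28 images

inputs = layers.Input(shape=(input_dim,))
code = layers.Dense(code_dim, activation="relu")(inputs)          # encoder
outputs = layers.Dense(input_dim, activation="sigmoid")(code)     # decoder

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(1000, input_dim).astype("float32")  # stand-in for real data
autoencoder.fit(x, x, epochs=5, batch_size=128)         # target == input: no labels

# The encoder alone maps raw inputs to learned features for downstream models.
encoder = models.Model(inputs, code)
features = encoder.predict(x)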

There is also a lot of interest in transfer learning, where features are first learnt in a domain with a large amount of labeled data, or in an unsupervised way. Once learnt, the features are then fine-tuned for another, related domain using much smaller datasets. But the practical reality for the moment is the same – deep learning requires a lot of data and computation.
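A rough sketch of that transfer-learning recipe, again in Keras and with placeholder numbers: a backbone pre-trained on a large labelled dataset (ImageNet here) is frozen as a feature extractor, and only a small head is trained on the smaller target dataset.

# Rough transfer-learning sketch: frozen ImageNet-pretrained backbone plus a
# small trainable head for a new task. Class count and data are placeholders.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                          # keep the learned features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),     # e.g. 10 target classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(small_labelled_dataset, ...)        # fine-tune the head on the new domain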

No Free Lunch!

When I was doing my PhD at UNSW, I often chatted with Achim Hoffmann, who wrote an interesting perspective on the limitations of machine learning [ post-script file ], published at the European Conference on Artificial Intelligence back in 1990. The key passage for me was this:

The results indicate a rather general point. Namely, that for any amount of information which should get acquired, people have to do the complete work. One may choose between writing complex programs and providing a program with a huge amount of input data. In any case, the work cannot be reduced essentially. The machine can only do what it is told to do. And it cannot be told to generate information by itself. … The results do not mean, that machine learning is completely purposeless. But they clearly show that one cannot expect any magic from machine learning.

Even though 25 years have passed since this paper was written, the underlying idea is very relevant to setting our expectations of machine learning and deep learning. We can throw data and compute at deep learning, but it cannot magically get us the answer. We still need human experts and scientists to figure out how to apply deep learning appropriately, not to mention push the research boundaries of what is possible with deep learning. I think deep learning is a very promising field for exploration and worth the risk of investing experimentation resources in. We just need to be prepared to learn.

 

 

Deep Learning with DSSTNE

Recently I got a couple of EVGA GeForce GTX 1080s to keep my study nicely lit and warm when winter comes to Seattle. My interest in GPUs, though, is more for Deep Learning than for lighting and heating. Deep Learning is actively being explored for all kinds of machine learning applications, since it offers the hope of automatic feature learning. In fact, a large number of Kaggle competition winners rely on Deep Learning methods to avoid any kind of hand-crafted feature engineering. Considering how computationally expensive Deep Learning training tends to be, GPUs are essential for doing anything meaningful in a reasonable amount of time.

As part of my job with the Big Data & Analytics Platform team at Oracle, I come across customers that need help with tackling some of these cutting-edge machine learning problems, from image understanding to speech recognition and even product recommendations. Part of the challenge is always simplifying the complexity, letting people focus on what they need to do and hiding away what is important but not necessary for their immediate focus.

Siraj Raval ( @sirajology ) posted a really nice video earlier this year on how to build a movie recommender system using 10 lines of C++ code and DSSTNE (pronounced "destiny"), a deep learning library that my old team at Amazon built and open-sourced earlier this year.

Aside: DSSTNE does automagic model parallelism across multiple GPUs and is also very fast on sparse datasets. Scott Le Grand ( @scottlegrand ), the main creator of DSSTNE, has reported it to be almost 15x faster than TensorFlow in some cases.

  • Disclosure: Scott and I used to work together at Amazon on the personalization team that built DSSTNE. Neither of us works for Amazon any more, so we cannot speak to how it is being used inside Amazon.
  • Update: Check out Scott's talk on DSSTNE at the Data Science Summit 2016.

Back to Siraj's movie recommender – although he does a great job, I think there are some points about the design of DSSTNE that are easily overlooked. DSSTNE has 3 important design elements:

  1. Scale – to handle large datasets that won't fit on a single GPU, and to do so automatically.
  2. Speed – for faster experimentation cycles, allowing scientists to be more efficient and scale up the number of experiments they run.
  3. Simplicity – so that non-experts can experiment with, deploy and manage deep learning solutions in production.

In this post I'll show how to build a movie recommender while writing NO lines of C++ code. DSSTNE is largely driven through a Neural Network Layer Definition Language and 3 binaries – generateNetCDF, train & predict. It uses a JSON-based config file to describe the network and the functions and parameters to use when training the model. This approach makes it much easier to run a hyper-parameter search across different network structures without writing a single line of C++ code.

So let's get started by installing CUDA and cuDNN on my Ubuntu 16.04 machine.

CUDA & cuDNN

First the prerequisites for CUDA.

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update
$ sudo apt-get install nvidia-367
$ sudo apt-get install mesa-common-dev
$ sudo apt-get install freeglut3-dev

Download the local run binary from https://developer.nvidia.com/cuda-toolkit

Install the CUDA 8 library:

$ sudo ./cuda_8.0.27_linux.run --override

IMPORTANT: Make sure you DO NOT install the drivers included with the .run file. Keep the other options at their defaults and answer yes to everything else.

Set environment variables:

$ export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
$ export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

At this point you should be able to check that the graphics cards are recognized by running nvidia-smi.

$ nvidia-smi
Thu Aug 11 10:51:36 2016       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.35                 Driver Version: 367.35                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:03:00.0     Off |                  N/A |
|  0%   31C    P8     7W / 180W |      1MiB /  8113MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    Off  | 0000:06:00.0      On |                  N/A |
|  0%   35C    P8     8W / 180W |    156MiB /  8110MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    1      4455    G   /usr/lib/xorg/Xorg                             106MiB |
|    1      5229    G   compiz                                          48MiB |
+-----------------------------------------------------------------------------+

To force CUDA to accept the latest version of GCC, edit the header file whose version check drops you out of the build.

$ sudo nano /usr/local/cuda/include/host_config.h

Comment out the line that complains about the GCC version:

//#error -- unsupported GNU version! gcc versions later than 5.3 are not supported!

Compile the samples:

$ cd ~/NVIDIA_CUDA-8.0_Samples
$ make

Some of the samples still fail, but I’ll look into them later.

Get the cuDNN library from https://developer.nvidia.com/cudnn and follow the instructions to install it.

Now for DSSTNE (Destiny)

First you need to install the prerequisites for DSSTNE. I've put together a shell script that runs the steps documented here.

Then, make sure you have the paths set up correctly. I have something like this in my .bashrc:

# Add CUDA to the path
# Could use /usr/local/cuda/bin:${PATH} instead of explicit cuda8
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

# Add cuDNN library path
export LD_LIBRARY_PATH=/usr/local/cudnn-8.0/lib64:${LD_LIBRARY_PATH}

# Add OpenMPI to the path
export PATH=/usr/local/openmpi/bin:${PATH}

# Add the local libs to path as well
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib

Now to get, build and test DSSTNE:

$ git clone https://github.com/amznlabs/amazon-dsstne.git
$ cd amazon-dsstne/src/amazon/dsstne
$ make

This will build the binaries under amazon-dsstne/src/amazon/dsstne/bin for:

  • generateNetCDF – converts CSV/text files into the NetCDF format used by DSSTNE
  • train – trains a network using the input data, the output data and a config file with the network definition
  • predict – uses a pre-trained network to make predictions.

There is a nice example of training an autoencoder-based recommender for the MovieLens 20M dataset that comes with the code.

First, download the data in CSV/text format. If you have your own dataset, make sure it conforms to this data format.

$ wget https://s3-us-west-2.amazonaws.com/amazon-dsstne-samples/data/ml20m-all

Convert the text data into NetCDF format for the network input and the expected network output. This step also builds the feature and sample index files.

$ generateNetCDF -d gl_input -i ml20m-all -o gl_input.nc -f features_input -s samples_input -c
$ generateNetCDF -d gl_output -i ml20m-all -o gl_output.nc -f features_output -s samples_input -c

Train the network using the config for 30 epochs with a batch size of 256. It will checkpoint and save the network every 10 epochs – handy if you want to explore how the network converges over the epochs.

$ train -c config.json -i gl_input.nc -o gl_output.nc -n gl.nc -b 256 -e 30

Once the training completes, you can use the network and GPU to make batch-mode offline predictions in the original text format. The following command generates 10 movie recommendations for each user in the ml20m-all file (-r ml20m-all) and writes them to the recs file (-s recs). It also masks or filters out movies that the user has already seen (-f ml20m-all).

$ predict -b 256 -d gl -i features_input -o features_output -k 10 -n gl.nc -f ml20m-all -s recs -r ml20m-all

That's it. In fact, while it's training you can use nvidia-smi to see which GPU it is running on and how much memory it uses.

$ nvidia-smi
Thu Aug 11 12:11:58 2016       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.35                 Driver Version: 367.35                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:03:00.0     Off |                  N/A |
|  0%   42C    P2    77W / 180W |    524MiB /  8113MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    Off  | 0000:06:00.0      On |                  N/A |
|  0%   36C    P8     8W / 180W |    170MiB /  8110MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      5523    C   train                                          521MiB |
|    1      4455    G   /usr/lib/xorg/Xorg                             106MiB |
|    1      5229    G   compiz                                          60MiB |
+-----------------------------------------------------------------------------+

Look ma, no C++ code!

The number of input nodes is automatically inferred from the input data file.

The number of output nodes is automatically inferred from the expected output data file.

Everything else is defined in config.json or via command-line flags (batch size and number of epochs for training). The Neural Network Layer Definition Language describes everything that DSSTNE supports.

{
  "Version" : 0.7,
  "Name" : "AE",
  "Kind" : "FeedForward",

  "SparsenessPenalty" : {
    "p" : 0.5,
    "beta" : 2.0
  },

  "ShuffleIndices" : false,

  "Denoising" : {
    "p" : 0.2
  },

  "ScaledMarginalCrossEntropy" : {
    "oneTarget" : 1.0,
    "zeroTarget" : 0.0,
    "oneScale" : 1.0,
    "zeroScale" : 1.0
  },

  "Layers" : [
    { "Name" : "Input", "Kind" : "Input", "N" : "auto", "DataSet" : "gl_input", "Sparse" : true },
    { "Name" : "Hidden", "Kind" : "Hidden", "Type" : "FullyConnected", "N" : 128, "Activation" : "Sigmoid", "Sparse" : true },
    { "Name" : "Output", "Kind" : "Output", "Type" : "FullyConnected", "DataSet" : "gl_output", "N" : "auto", "Activation" : "Sigmoid", "Sparse" : true }
  ],

  "ErrorFunction" : "ScaledMarginalCrossEntropy"
}

Applying deep learning techniques to a problem such as recommendations typically means lots of experimentation, exploring different mixes of:

  1. types of input data and output targets – purchase or browsing history, ratings, product attributes such as category, cost and color, customer attributes such as age or gender, etc.
  2. network structures – number of layers, number of nodes per layer, connections between layers, etc.
  3. network and training parameters – learning rates, denoising, drop-outs, activation functions, etc.

How you pose the problem and prepare the dataset (#1) is VERY important in applying deep learning. In fact, if you pose the machine learning problem incorrectly, not even deep learning and a cloud full of GPUs can help you.

But once you have that, figuring out the right network structure (#2) and training parameters (#3) can mean the difference between success and failure. That means running a lot of experiments – essentially a hyper-parameter search problem.

The JSON-based config simplifies the hyper-parameter search problem. You can generate a large number of combinations of these config files and try them out in parallel, quickly narrowing the options down to the configurations most suitable for that particular application.
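For example, a few lines of Python (with a made-up parameter grid) can stamp out config variants that build on the JSON above and can then be trained in parallel:

# Generate DSSTNE config variants for a simple hyper-parameter sweep.
# The base config mirrors the JSON above; the grid values are just examples.
import copy, itertools, json

with open("config.json") as f:
    base = json.load(f)

hidden_sizes = [128, 256, 512]
denoising_p = [0.1, 0.2, 0.3]

for i, (n, p) in enumerate(itertools.product(hidden_sizes, denoising_p)):
    cfg = copy.deepcopy(base)
    cfg["Layers"][1]["N"] = n            # hidden layer width
    cfg["Denoising"]["p"] = p            # input denoising probability
    with open("config_%02d.json" % i, "w") as out:
        json.dump(cfg, out, indent=1)
# Each config_XX.json can then be passed to: train -c config_XX.json ...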

Given that these are still early days for Deep Learning, the speed and scale of experimentation have a huge bearing on what we learn about using it. Plus, I know my study will be warm this coming winter.
