Impactful Data Scientists

In 2012, Davenport and Patil’s Harvard Business Review article, Data Scientist: The Sexiest Job of the 21st Century, raised the profile of a profession that had been evolving naturally in the modern computing era – an era where data and computing resources are more abundant and cheaper than ever before. There was also a shift among industry leaders toward a more open, evidence-based approach to guiding the growth of their businesses. Brilliant data scientists with machine learning and artificial intelligence expertise are invaluable in supporting this new normal.

While there are different opinions on what defines a data scientist, as the leader of the Data Science Practice at Think Big Analytics, the consulting arm of Teradata, I expect data scientists on my team to embody specific characteristics. This expectation is founded on a simple question – are you having a measurable and meaningful impact on the business outcome?

Any data scientist can dig into data, use statistical techniques to find insights and make recommendations for their business partners to consider. A good data scientist makes sure that the business adopts those insights and recommendations by focusing on the problems that are important to the company and making a compelling case grounded in business value. An impactful data scientist can iterate quickly, address a wide variety of business problems for the organization and deliver meaningful business impact swiftly by using automation and getting their insights integrated into production systems. Consequently, impactful data scientists more often answer ‘yes’ to the question above.

So what makes a Data Scientist impactful? In my experience, they possess skillsets that I broadly characterize as that of a scientist, a programmer, and an effective communicator. Let us look at each of these in turn.


Firstly, they are scientists. Data scientists work in highly ambiguous situations and operate on the edge of uncertainty. Not only are they trying to answer the question, they often have to determine what the question is in the first place. They have to ask vital questions to understand the context quickly, identify the root of the problem that is worth solving, research and explore the myriad of possible approaches and, most of all, manage the risk and impact of failure. If you are a scientist or have undertaken research projects, you will recognize these as traits of a scientist immediately.

In addition, data scientists are also programmers. Traditional mathematicians, statisticians, and analysts who are comfortable using GUI-driven analytical workbenches that allow them to import data and build models with a few clicks often contest this expectation. They argue that they don’t need computer science skills since they are supported by (a) a team of data engineers to find and cleanse their data, and (b) software engineers to take their models and operationalize them by re-writing them for the production environment. However, what happens when the data engineers are busy, or the IT department’s sprint backlog means the model a data scientist has just found to be worth millions to the company won’t make it to production for the next 6-9 months? They wait, and their amazing insights have no impact on the business.

Programming and computer science skills are essential for data scientists so that they are not ‘blocked’ by organizational constraints. A data scientist shouldn’t have to wait for someone else to find and wrangle the data they need, nor be afraid of getting their hands dirty with the code to ensure their models make it to production. It also means data scientists do not become a bottleneck to their organization, because they can automate their solutions for production or automated reporting. The highly distributed, high-volume nature of transactions in online, mobile and IoT applications also means data scientists need to design their solutions for scale. For example, will their real-time personalization model scale to the 100,000 requests per second of their company’s website and mobile app?

Finally, a data scientist should be an effective two-way communicator. Not only should they empathize with their business partners to understand the business context and customer needs, they should also convey the value of their work in a manner that appeals to their audience. One of the hardest skills to master, even for very knowledgeable data scientists, is the ability to influence an organization without authority. A data scientist who goes around asserting that everyone should listen to them because they have data and insights, without cultivating trust, is likely to earn the title of prima donna and will not achieve the impact those insights deserve. Effective communication is relatable, precise and concise.

Data scientists with these three broad skillsets are in an excellent position to have a meaningful and measurable impact on business outcomes, making them highly valuable to any organization. Of course, this list doesn’t cover innate abilities like creativity, a bias for action and a sense of ownership. Neither does it consider the organizational culture that may either support or hinder their impact. I have focused on skills that can be developed through training and practice. In fact, these are essential elements of the growth and career paths for my team of brilliant and impactful data scientists at Think Big Analytics.


Hello, Think Big Analytics

A little over a month ago I left my role as the Chief Data Scientist for the Big Data & Analytics Platform Team at Oracle. It was sad to say goodbye to some wonderfully talented people that I had the pleasure of working with, but change is an inevitable part of our lives. After a month off at my warmer and sunnier home in Sydney, spent with family and friends, I feel energized about what is next.

I am humbled and excited about my new role as Practice Director – Data Science & Analytics, Americas at Think Big Analytics. There are exciting developments in the world of artificial intelligence that make it more important than ever for data scientists to understand the customer’s needs, reflect upon the wider context beyond those needs, and develop solutions that have a meaningful impact for the customer. I am looking forward to getting to know a talented team focused on the evolving needs of our customers and on delivering impactful data science consulting services.

AI & ML – Lessons learnt and real-world challenges

Last week, just before I flew back to Seattle, I gave a talk at my alma mater – the School of Computer Science & Engineering at UNSW, Australia. It was great to see some familiar faces and meet some new ones who, I hope, feel more compelled to tackle some interesting problems in data science, machine learning (ML) and artificial intelligence (AI).

In this talk, I shared some of the personal lessons that I learnt while building AI & ML solutions at companies like Amazon and Oracle. I also opened up about my fears about these technologies, as well as the challenges that the industry faces in delivering intelligent systems for the 99% (?) of businesses. The slides from the talk (PDF) contain the references and links that I mentioned. Just send an email to (avishkar @ gmail dot com) with the subject “AI & ML” to get the password to the PDF.

The most important message that I wanted to impart to the room full of researchers, academics, and industry practitioners was this: how do we collectively address the shortage of skills needed to apply AI and ML to the broad range of business problems beyond the top 1% of leading-edge tech companies? Education, standards and automated tools can help ensure a certain base level of competency in the application of AI & ML.

The vast majority of businesses out there are not Google, Amazon or Facebook, with deep pockets and years of R&D experience to tackle the challenge of applying AI and ML. Everyone responsible for growing this field, from schools (i.e. universities) to industry, must also develop standards and tools that ensure a certain level of quality is maintained for the solutions that we put into production. We have long had standards in mechanical and civil engineering to ensure that things that can affect people’s lives and safety adhere to a certain quality standard. Similarly, we should develop standards, and encourage organizations to validate compliance with them, when developing AI & ML solutions with far-reaching consequences.

[Image: biased data, biased models]

A simple and very personal example: one of my own photos was rejected by the automated checks that verify a passport photo complies with the requirements for visas. The fact that the slightly “browner” version of me (left) failed the check seems to suggest an inherent bias in the system due to the kind of data used to build it. Funny, but scary. How many other “brown” people have had their photos rejected by such a system?

Other examples would be Human Resource systems that identify potential candidates, suggest hire/no-hire decisions or recommend salary packages for new hires. If such a system is trained on historical data and uses gender as a feature, is it possible that it could be biased against women for high-profile or senior positions? After all, women have historically been under-represented in senior positions. Standards and compliance verification tools can help us identify such biases, ensuring that data and models do not introduce biases that are unacceptable in a modern and equitable society. A minimal sketch of one such check is shown below.
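To make that idea concrete, here is a minimal Python sketch of the kind of automated check a verification tool might run: compare the model’s recommendation rate across groups and flag a low disparate impact ratio. The column names and the tiny sample data are hypothetical illustrations, and this is only one narrow slice of a real fairness audit, not a complete one.

```python
import pandas as pd

def selection_rates(df, group_col, outcome_col):
    """Fraction of positive recommendations per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df, group_col, outcome_col):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 0.8 are a common red flag (the 'four-fifths rule')."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Hypothetical model output: 1 = model recommends hiring the candidate
predictions = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "M", "F", "M", "F"],
    "recommend": [0,    1,   1,   1,   1,   0,   1,   0],
})

print(selection_rates(predictions, "gender", "recommend"))
print("Disparate impact ratio:",
      disparate_impact_ratio(predictions, "gender", "recommend"))
```

Simple checks like this can be automated and run on every retrained model, which is exactly the kind of base-level compliance verification that standards could mandate.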

Academics, researchers, and industry practitioners cannot absolve themselves of the duty of care and consideration when developing systems that have a broad social impact. Data scientists must think beyond the accuracy metric and consider the whole ecosystem in which their system operates.


Plant Science Initiative @ NC State University

In my role at Oracle, I get to work across many industries on some very interesting problems. One that I have been involved with recently is the collaboration between Oracle and North Carolina (NC) State University’s Plant Science Initiative.

In particular, we’ve been working with the College of Agriculture and Life Sciences (CALS) to launch a big data project that focuses on sweet potatoes. The goal is to help geneticists, plant scientists, farmers and industry partners develop better varieties of sweet potato, and to speed up the pace at which research is commercialized. The big question: can we use the power of Big Data, Machine Learning and Cloud computing to reduce the time it takes to develop and commercialize a new sweet potato variety from 10 years to three or four?

One of the well-known secrets to driving innovation is scaling and speeding up experimentation cycles. In addition, reducing the friction associated with collaborative research and development can help bring research to market more quickly.

My team is helping the CALS group develop engagement models that facilitate interdisciplinary collaboration using the Oracle Cloud. Consider geneticists, plant science researchers, farmers, packers and distributors all being able to contribute their data and insights to optimize different aspects of sweet potato production – from the genetic sequence to the dinner plate.

I am extremely excited by the potential impact that open collaboration between these stakeholders can have on the sweet potato and precision agriculture industries.

More details at cals.ncsu.edu

It is a go for Amazon Go!

The super secret, exciting project that I spent days and nights slogging over when I was at Amazon has finally been announced – Amazon Go. A checkout-less, cashier-less, magical shopping experience in a physical store. Check out the video to get a sense of how it simplifies the customer experience: walk in, pick up what you need and walk out. No lines, no waiting, no registers.

I’m very proud of the awesome team of scientists and engineers, spanning software, hardware, electrical and optics, that rallied together to build a solution combining machine learning, computer vision, deep learning and sensor fusion. The project was an exercise in iterative experimentation and continual learning, refining all aspects of the hardware and software as well as the innovative vision algorithms. I was personally involved in 5 different prototypes and the winning solution that ticked all the boxes more than 2 years ago.

I remember watching Jeff Bezos and the senior leadership at Amazon, playing with the system by picking and returning the items back to the shelves. Smiles and high-fives all around as the products were added and removed from the shopper’s virtual cart, with the correct quantity of each item.

Needless to say, there is significant effort after the initial R&D to move something like this to production, so it is not surprising that it has taken 2 years since then to get it ready for the public. Well done to my friends at Amazon for getting the engineering solution over the line to an actual store launch in early 2017.

Photo Credit: Original Image by USDA – Flickr

 

Precision Agriculture needs Scalable Automation

If you have ever looked out an airplane window as it flies over land, chances are you have seen spectacular landscapes, sprawling cities or a quilted patchwork of farms. Over the centuries, science, machines and better land management practices have increased agricultural output dramatically, allowing farmers to manage and cultivate ever larger swaths of land. The era of Big Data and Artificial Intelligence is pushing these productivity gains even further with Precision Agriculture – for example, using satellite images to evaluate the health of crops, direct farming decisions or predict the yields we can expect at harvest time.

Earlier this year, my team at Oracle (chiefly Venu Mantha, Marta Maclean & Ashok Holla) worked with a large agricultural customer to help them shift towards a more data-driven agricultural approach that would maximize their yields and reduce waste. We explored a variety of technologies, from field sensors streaming measurements over Internet-of-Things (IoT) networks to the geospatial fusion of a variety of historical and real-time data. The idea is to support farmers with contextually relevant data, allowing them to make better decisions. Apart from the people and process challenges associated with such a dramatic business transformation of an ancient sector, the major technological obstacle to realizing the potential of precision agriculture is, in fact, scalable automation.


Let us take a concrete example. A key aspect of the proposal was the use of aerial or satellite imagery to assess the health of the crop. Acquiring satellite or aerial imagery on demand is significantly easier than it was just a few years ago, thanks to the growing number of vendors in the market and the falling cost of acquisition (e.g. Digital Globe, Free Data Sources). Now that we can get imagery that is high-resolution (down to the level of an individual tree), multi-spectral (colour and infrared bands) and covers large expanses of land (100 acres or more), the challenge has shifted to a well-recognized one in the world of Big Data: how do you sift through the large volumes of data to extract meaningful and actionable insights quickly?
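As an illustration of what “assessing crop health” from multi-spectral imagery can look like, here is a minimal Python sketch that computes the widely used Normalized Difference Vegetation Index (NDVI) from the near-infrared and red bands with NumPy. The tiny band arrays and reflectance values below are made up for the example; a real pipeline would read the bands from the GeoTIFF tiles themselves.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 suggest dense, healthy vegetation; values near 0 or
    below suggest bare soil, rock or water."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    return (nir - red) / (nir + red + eps)

# Hypothetical 2x2 tile: near-infrared and red reflectance bands
nir_band = np.array([[0.60, 0.55], [0.20, 0.10]])
red_band = np.array([[0.10, 0.12], [0.18, 0.09]])

health_map = ndvi(nir_band, red_band)
print(health_map)                       # per-pixel vegetation index
print("Mean NDVI:", health_map.mean())  # crude field-level health score
```

The computation itself is trivial; the hard part, as the rest of this post argues, is running it reliably over thousands of large, correctly aligned images.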

If one image covers 100 acres and takes a Geographic Information System (GIS) specialist an hour to review manually, handling images for over 100,000 acres would mean a team of 40 GIS specialists working without a break for about 25 hours to go through the full batch of images. Clearly throwing more people at the problem is not going to work. Not only is it slow and error-prone, but finding enough specialists with domain knowledge would be a challenge.

The answer is to automate the image analysis pipelines and use distributed computing to parallelize and speed up the analysis. Oracle’s Big Data Spatial & Graph (BDSG) is particularly well suited for partitioning, analyzing and stitching back together large image blocks using the map-reduce framework of Hadoop. It understands common GIS and image file formats and gives the developer Java bindings to the OpenCV image processing library as part of its multimedia analytics capabilities. You can either split up a large image (raster or vector) and analyze each chunk in parallel, or analyze each image in parallel. You can write your own image processing algorithms or compose one from the fundamental image processing algorithms available in OpenCV.
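The following is not the BDSG API, but a small standalone Python sketch of the same pattern: cut a large mosaic into tiles, analyze each tile in parallel (here with `multiprocessing` and OpenCV’s Python bindings rather than Hadoop and the Java bindings), and collect the per-tile results. The file name, tile size and the toy “green fraction” metric are assumptions made purely for illustration.

```python
import cv2                     # OpenCV Python bindings
import numpy as np
from multiprocessing import Pool

TILE = 1024  # tile edge length in pixels (assumed)

def tile_image(img, tile=TILE):
    """Yield (row, col, block) tiles covering the whole image."""
    h, w = img.shape[:2]
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            yield r, c, img[r:r + tile, c:c + tile]

def analyze_tile(args):
    """Toy per-tile analysis: fraction of green-dominant pixels as a crude crop proxy."""
    r, c, block = args
    b, g, rch = cv2.split(block)
    green_frac = float(np.mean((g > rch) & (g > b)))
    return r, c, green_frac

if __name__ == "__main__":
    image = cv2.imread("field_mosaic.tif")   # hypothetical large aerial mosaic
    with Pool() as pool:
        results = pool.map(analyze_tile, list(tile_image(image)))
    for row, col, score in results:
        print(f"tile({row},{col}) green fraction = {score:.2f}")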

[Figure: image alignment process]

The challenge for the customer, however, was coming up with an algorithm that could correct the image misalignment that naturally creeps in during the image acquisition or image stitching process. A misaligned image would require a GIS specialist to open and manually adjust it using tools like ArcGIS from ESRI. Analyzing a misaligned image would produce incorrect results and could drive bad decisions.

This is where the BDSG product engineering team (Siva Ravada, Juan Carlos Reyes & Zazhil ha Herena) and I stepped in to design and develop a solution that automates the image alignment and analysis processes. We have a patent application around the solution, which can be used in a variety of domains beyond farming – think urban planning, defense, law enforcement, and even traffic reporting.
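The details of that solution are in the patent application, so the sketch below is not our method; it is a minimal Python example of a common general-purpose approach to automated image registration with OpenCV: detect ORB features in a reference image and a misaligned image, match them, estimate a homography with RANSAC, and warp the misaligned image onto the reference grid. File names and parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def align_to_reference(reference, misaligned, max_features=5000, keep_ratio=0.2):
    """Estimate a homography mapping `misaligned` onto `reference` and warp it."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    img_gray = cv2.cvtColor(misaligned, cv2.COLOR_BGR2GRAY)

    # Detect and describe keypoints in both images
    orb = cv2.ORB_create(max_features)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_img, des_img = orb.detectAndCompute(img_gray, None)

    # Match descriptors and keep only the strongest matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_img, des_ref), key=lambda m: m.distance)
    matches = matches[: int(len(matches) * keep_ratio)]

    src = np.float32([kp_img[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the homography and warp the misaligned image
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(misaligned, H, (w, h))

# Hypothetical usage: align a freshly acquired tile to a georeferenced base image
# aligned = align_to_reference(cv2.imread("base_tile.tif"), cv2.imread("new_tile.tif"))
```

In practice, a production pipeline also needs to handle failure cases (too few features, poor matches) and validate the estimated transform before trusting the downstream analysis.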

Just the manual alignment of images would take a GIS expert 3-8 minutes per image. With our solution, the entire alignment and analysis process takes less than 90 seconds per image and can handle hundreds of images in parallel. Instead of a team of 40 GIS experts working without a break for 25 hours, we can now analyze imagery covering 100,000 acres in about 15 minutes.

The key lesson here is that although we can access interesting sensors and data sources to inform us and guide Precision Agriculture, successful technological solutions require scalable automation that minimizes the human effort, not add to it. The adoption of these solutions in practice further depends on the maturity of the organization in embracing change.

Want to know more, or talk about how the Oracle Big Data & Analytics team can help with your business objectives? Connect with me on LinkedIn and follow me on Twitter.

Links: