Data-Driven Decision Making using Machine Learning Technology

Scientific Method

Managers in every organization must continually decide how to run their businesses most effectively and efficiently. These decisions cover every aspect of a business, from marketing campaigns to recruiting, from order logistics to IT outsourcing. Most of the value management adds to an organization's bottom line comes from the quality of those decisions, which is why so much of the management literature focuses on effective decision making.

There is near-universal agreement on the right way to make decisions: base them on objective, observable facts through a logical reasoning process. This is, in essence, the scientific method that has helped humanity progress so much over the last couple of centuries. Conceptually it is simple to describe. You start by observing reality, the phenomena whose dynamics you want to understand intuitively. These observations lead you to form a hypothesis: a statement, still in need of verification, that you believe explains the nature of the reality you have observed. At this point you have nothing more than a speculative thesis. The crucial step, the one that truly separates the scientific method from other forms of thinking, is to test your speculation (hypothesis) against a new set of observed data to verify it objectively. If it passes the test, i.e. it agrees with the objectively observed data (the reality), then you keep the hypothesis as a tested theory that explains the observed reality satisfactorily. If it does not, you either modify it or discard it completely, return to the hypothesis-forming stage, and repeat the subsequent steps until you arrive at a hypothesis that passes the tests.

There is clearly a lot of experimentation and iterative refinement involved in the scientific method, which is probably what Thomas Edison meant when he said, "Genius is one percent inspiration and ninety-nine percent perspiration." The genius lies in creative thinking and in building an intuitive understanding of reality, but that part remains pure speculation unless it is tested and verified rigorously against objectively observed data. And as Edison points out, it is only a small part of the overall scientific way of problem solving. Most of the heavy lifting is in testing ideas, refining them based on feedback from the tests, going back to the drawing board, and repeating all the steps until you come up with speculative statements that survive the rigor of objective tests.

Machine Learning

Machine learning (ML) techniques and technology can be used to streamline the crucial hypothesis-testing phase of the scientific method, allowing managers to make truly data-driven decisions. Seeing how this works requires a bit of understanding of machine learning mechanics and a slightly unorthodox way of using ML technology. First, note that machine learning is primarily a systematic pattern-finding technology: it finds mathematical relationships between a set of input data and corresponding output data. The specific algorithms used in the process affect the type and nature of the relationships that can be found, but this does not change the fundamental nature of the pattern-finding process.

It is also important to note that ML finds patterns based on the prior assumption that such patterns exist between the input and output datasets. ML itself will not come up with candidate pattern-based relationships; that is the job of the user (the manager, in our case) and, to a certain extent, it is the aforementioned one percent inspiration. Second, even though the usual purpose of building an ML model is prediction, inferring output values from newly observed input data, we can repurpose the model-building process and its associated backtesting technology: the part that tunes the model's pattern-matching internals so that it provides the best statistical match between the input data and the output data.

I will illustrate the process of deploying ML technology for data-driven decision making with a simple, fictitious example. Say you run a supermarket, and there is a gas station across the road from your shop, one of several gas stations in the area. Recently the gas station ran a marketing campaign that attracted more customer traffic than usual, and you have also noticed a pickup in your shop's sales. You have intuitively formed a hypothesis: the extra pickup in your store's sales is due to the marketing campaign pursued by the gas station across the road. Let's assume the gas station managers are happy to share their sales and campaign data with you, and that you are about to decide, based on your business instincts, whether to pursue joint campaigns with the gas station (e.g. you fund some of the gas discount in the hope that the resulting increase in your sales more than covers the cost of the discount campaign).

To use ML technology to test our hypothesis that a gas-discount-based marketing campaign increases store sales, we start by gathering all the data related to sales both at the gas station and in our store. ML models work by finding intricate mathematical patterns (relationships) between input data, called features, and output data. In this case, the output data will be the data describing our store's sales, which could be refined along product lines, product categories, time of sale, and so on. The feature data will be chosen to correspond to our hypothesis. Since we think our store sales increased due to the marketing campaign at the gas station, we should find data that captures that campaign. For example, if we had gas station customer traffic data showing the number of customers served in a given hour, covering both the pre-campaign period and the campaign itself, we could use it to devise ML model features that reflect the premises of our hypothesis.
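To make this concrete, here is a minimal sketch of turning such records into a feature matrix and a sales target vector. All the numbers (and the idea of hourly granularity) are invented purely for illustration:

```python
import numpy as np

# Hypothetical hourly records (all numbers invented for illustration):
# (gas_station_customers_served, store_sales_usd) for the same hour.
records = [
    (40, 1200.0),   # pre-campaign hours
    (38, 1150.0),
    (70, 1450.0),   # hours during the discount campaign
    (72, 1500.0),
    (80, 1600.0),
]

# Feature matrix (input data) and store sales target (output data):
# one row of features per observation, one sales figure per row.
X = np.array([[traffic] for traffic, _ in records], dtype=float)
y = np.array([sales for _, sales in records])
```

In a real setting the records would come from the store's and the gas station's data infrastructure rather than being typed in by hand.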

As an example, we could have a feature called "Is Gas Station Campaign Active" with true or false values. Another feature could be "Discount Amount": the dollar discount on the standard price at which the gas station sells its gas. We can then ask our ML toolkit to build a model that tries to find any relationship between these features and the store sales data we have observed. As part of the model-building process, statistical significance metrics will also be calculated, showing how much of the observed data (the sales data) can be explained by the features.
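As a sketch of what the toolkit does under the hood, here is a tiny ordinary-least-squares fit of exactly those two features (campaign flag and discount amount) against sales, using made-up numbers, with R² as the "how much is explained" metric:

```python
import numpy as np

# Illustrative observations (hypothetical numbers):
# columns are [campaign_active (0/1), discount_usd_per_gallon].
X = np.array([[0, 0.0], [0, 0.0], [1, 0.2], [1, 0.2], [1, 0.3]])
y = np.array([1200.0, 1150.0, 1450.0, 1500.0, 1600.0])  # store sales

# Add an intercept column and fit ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: fraction of the sales variance explained by the features.
residuals = y - A @ coef
r2 = 1 - residuals.var() / y.var()
```

An R² close to 1 would mean the campaign features explain most of the observed sales variation; close to 0, almost none. A real toolkit adds many refinements (regularization, confidence intervals), but the core idea is this fit-and-score loop.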

So if our hypothesis is correct, we should see a strong, statistically significant relationship between the input and output data. If the ML backtesting process cannot find any statistically significant relationship between our input features and the output data, something is amiss. This could be due either to a wrong hypothesis (e.g. we thought the sales increase was due to the gas station campaign, but it was in fact due to one of our competitors being shut down for a renovation), or to having used the wrong features (parameters) in the ML model, ones that do not quite reflect the true nature of our hypothesis. In the former case, as in the scientific method, we revise or abandon our current hypothesis, form a new one, and repeat the model building and statistical backtesting. In the latter, we try to come up with better, more reliably and granularly quantified features corresponding to our hypothesis. This part is more art than science, and it improves with experience in this kind of ML model building.
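One simple, assumption-light way to check statistical significance is a permutation test: if we shuffle the sales figures so they no longer line up with the campaign features, a fit as good as the real one should be rare. The sketch below (again with invented numbers; real toolkits report their own significance metrics) estimates a p-value this way:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (hypothetical numbers): campaign flag, discount, sales.
X = np.array([[0, 0.0], [0, 0.0], [1, 0.2], [1, 0.2], [1, 0.3],
              [0, 0.0], [1, 0.3], [0, 0.0], [1, 0.2], [1, 0.3]])
y = np.array([1200.0, 1150.0, 1450.0, 1500.0, 1600.0,
              1180.0, 1580.0, 1210.0, 1470.0, 1620.0])

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

observed = r_squared(X, y)

# How often does randomly shuffled (i.e. unrelated) sales data fit this well?
perm_scores = [r_squared(X, rng.permutation(y)) for _ in range(1000)]
p_value = np.mean([s >= observed for s in perm_scores])
```

A small p-value (conventionally below 0.05) says the observed relationship is unlikely to be a fluke of unrelated data; a large one says our hypothesis, as encoded in these features, is not supported.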

In the end, assuming we have a good data infrastructure and an ML toolkit that can access and use it effectively, we can very quickly form hypotheses using our business understanding and intuition, turn them into objectively testable statements by formulating them as ML model features, and backtest them using a variety of ML models (regressions, decision trees, random forests, etc.) and rigorous statistical validity checks. In this way, we can combine the intuitive business insights gained through years of industry experience with a scientific way of testing and validating the ideas that spring from those insights, before taking materially significant actions on them.
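The backtesting step above can itself be sketched simply. A leave-one-out backtest holds out each observation in turn, fits the model on the rest, and measures how well the held-out point is predicted; in practice a toolkit such as scikit-learn automates this across many model types, but the idea fits in a few lines (hypothetical numbers, linear model only):

```python
import numpy as np

# Illustrative data (hypothetical numbers, as in the running example).
X = np.array([[0, 0.0], [0, 0.0], [1, 0.2], [1, 0.2], [1, 0.3],
              [0, 0.0], [1, 0.3], [0, 0.0], [1, 0.2], [1, 0.3]])
y = np.array([1200.0, 1150.0, 1450.0, 1500.0, 1600.0,
              1180.0, 1580.0, 1210.0, 1470.0, 1620.0])

def fit_predict(X_train, y_train, X_test):
    """Ordinary least squares with an intercept term."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.column_stack([np.ones(len(X_test)), X_test]) @ coef

# Leave-one-out backtest: hold out each observation and predict it.
errors = []
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    pred = fit_predict(X[mask], y[mask], X[~mask])
    errors.append(abs(pred[0] - y[i]))

mean_abs_error = np.mean(errors)
```

Running the same backtest over several model families and comparing their out-of-sample errors is what lets us pick a model, and a hypothesis, that genuinely generalizes rather than one that merely memorizes the data.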

The Author

Cetin Karakus has almost two decades of experience in designing and building large scale software systems.

Over the last decade, he has worked on the design and development of complex derivatives pricing and risk management systems at leading global investment banks and commodity trading houses. Prior to that, he worked on various large-scale systems ranging from VOIP stacks to ERP systems.

In his current role, he has had the opportunity to build an investment-bank-grade quantitative derivatives pricing and risk infrastructure from scratch. Most recently, he has shifted his focus to building a proprietary state-of-the-art BigData analytics platform based on existing open source tools and technologies.

Cetin has a degree in Electrical & Electronics Engineering and enjoys thinking and reading on various fields of humanities in his free time.
