Friday, August 22, 2014

HOW CAN MAPS CHANGE THE WAY WE OPTIMIZE COMPANY RESOURCES?

We all need to find fundamentally better ways of getting more out of our company's assets. In essence, everybody wants answers to the following:

Marketing:
 - Where should we place our retail stores for maximum revenue and service impact?
 - How should we allocate our marketing budget across segments, channels and touch points?
 - How do our customers behave, and what do they do at our touch points?

Sales:
 - How should we define our sales territories?
 - What is the best way for our sales reps to cover these territories and manage OPEX?
 - How far does the competition reach into our sales areas?

Operations:
 - Where should we place our distribution hubs and service centers?
 - What impact will maintenance and repair have on our service levels?
 - Where should we place field staff to cover unknown demand?
 - How can we reduce our working capital and still deliver?

Human Resources:
 - What is the minimum capacity required to maintain organisational performance?
 - How many people do we require in our organisation, and where should they be placed?

Executive:
- How do we do better with less and still meet shareholders' expectations?

Using MAPS together with business data, one is able to put all information in context, which enables deeper insight into the data and better decision making.
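
As a minimal illustration of putting business data in spatial context (a sketch under assumed inputs, not our production tooling; the file and column names are hypothetical), customer revenue can be joined to store trade areas with a simple spatial join:

```python
# Hypothetical inputs: a customer table with coordinates and revenue, and
# store trade-area polygons. The spatial join puts the revenue "on the map".
import geopandas as gpd
import pandas as pd

customers = pd.read_csv("customers.csv")            # expects: customer_id, lon, lat, revenue
trade_areas = gpd.read_file("trade_areas.geojson")  # polygons with a store_id attribute

# Turn the customer table into a GeoDataFrame of points
points = gpd.GeoDataFrame(
    customers,
    geometry=gpd.points_from_xy(customers["lon"], customers["lat"]),
    crs="EPSG:4326",
)

# Spatial join: which trade area does each customer fall into?
joined = gpd.sjoin(points, trade_areas.to_crs("EPSG:4326"), how="left", predicate="within")

# Revenue per trade area - the kind of context a map-based view makes obvious
revenue_by_area = joined.groupby("store_id")["revenue"].sum().sort_values(ascending=False)
print(revenue_by_area.head())
```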


Our capability uses MAPS to extract insight from your company's data so that you can do more with less, at a lower cost.

Please email me at antonie@visualitics.co.za to see how we can assist you in answering any of the above questions in a simple and straightforward manner.

Saturday, June 28, 2014

ARE SENSORS CRITICAL TO PREDICTIVE ANALYTICS?

My immediate answer to this is: YES! Predictive analytics and response optimization of value chain assets require adding real context, in terms of time and space, to traditional data management.

From a design perspective it is easy to conceptually design and understand data in the organisation – but placing it in context and deriving insights from it is difficult.





I believe that in order to integrate and fully harness data across marketing, sales, operations, services, support, tactics and strategy, you need to deploy four broad categories of sensor and data-collection technologies (and this is before we even talk about big data!):
 - Remote sensing
 - Non-intrusive sensing
 - Intrusive sensing, and
 - Digital sensing

Remote sensing is the ability to analyse images and extract meaningful information from them. In the following instance we created an algorithm that uses spectral analysis for feature extraction and then combines the extracted features with census data to forecast trade-area success.

So far this model has been 100% accurate in classifying trade areas as successful or unsuccessful.
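
The spectral feature-extraction step itself is not reproduced here, but the classification stage can be sketched as follows, assuming a hypothetical table in which image-derived features have already been merged with census variables:

```python
# Illustrative sketch only: each row is a candidate trade area with image-derived
# features and census variables, plus a known success label for training.
# The file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

areas = pd.read_csv("trade_area_features.csv")
feature_cols = ["built_up_density", "green_cover", "median_income", "population_density"]
X, y = areas[feature_cols], areas["successful"]   # y: 1 = successful, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Hold-out accuracy; the 100% quoted above refers to the author's model, not this sketch
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```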



Non-intrusive sensing deals with the collection of information in a de-identified, non-intrusive manner. Here the analysis of radio signals from Wi-Fi sensors and cellular towers is particularly useful for studying the mass movement of people around points of interest. This works particularly well for staff movement, staff capacity planning and customer journeys. The example below shows a heat map of customers moving around a retail point.
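
A rough sketch of how such de-identified observations could be turned into a movement heat map (the input file and coordinate columns are hypothetical):

```python
# Bin de-identified Wi-Fi ping positions into a grid and plot the counts as a heat map.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

pings = pd.read_csv("wifi_pings.csv")   # expects: x, y (metres from the store origin)

# Count observations per grid cell
heat, xedges, yedges = np.histogram2d(pings["x"], pings["y"], bins=50)

plt.imshow(
    heat.T,
    origin="lower",
    extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]],
    cmap="hot",
)
plt.colorbar(label="observations")
plt.title("Customer movement density around the retail point")
plt.savefig("movement_heatmap.png", dpi=150)
```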




Intrusive sensing deals with the tagging of physical items such as equipment, stock or vehicles. This enables real behavior to become visible in the supply chain; here is an example of optimizing cost and service coverage.


Digital sensing enables data sources in the organisation to become geo-spatially intelligent as they are captured, or through the rework of historical data. This means that any customer, supplier or employee information, or indeed any data with a physical location, becomes geo-intelligent: it can be placed on a map alongside all the other sensor data mentioned above, so that a complete data set on a map can be used to optimize value chain assets.
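
As a small, hedged example of digital sensing, an existing customer table can be geo-tagged by geocoding its addresses; the file and column names are hypothetical, and a production setup would use a bulk geocoder rather than the public service shown here:

```python
# Geo-tag an existing customer table by geocoding its addresses.
import pandas as pd
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

customers = pd.read_csv("customers.csv")   # expects an "address" column

geolocator = Nominatim(user_agent="visualitics-demo")
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)  # be polite to the service

locations = customers["address"].apply(geocode)
customers["lat"] = locations.apply(lambda loc: loc.latitude if loc else None)
customers["lon"] = locations.apply(lambda loc: loc.longitude if loc else None)

customers.to_csv("customers_geocoded.csv", index=False)
```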


When this is done, one can start asking meaningful questions from an optimization perspective and create the traditional views and results once the big data, map-reduce, data science and visualization steps are completed.


The application area of predictive analytics and response optimization stretches across the organisation, from planning to risk management, and as such we are only limited by our imagination in what we can do with predictive analytics and sensors today!





Wednesday, June 18, 2014

Business Architecture on the Move

Business data, process data, planning data, strategic data: almost all of this data, in the context of a business architecture, is static. At Visualitics we aim to optimize value chain assets through four core drivers: "proximity", "visibility", "attractiveness" and "movement".

In the following example we were asked to assist in the capacity planning of emergency services using data from the operations call center. Using our easycode product together with a spatial quadrant cluster algorithm, we were able to make this data geo-intelligent, as shown in the pictures below.

Spatial quadrant cluster output



Geo-Intelligent Data
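
The spatial quadrant cluster algorithm is our own; as a generic, openly available stand-in, density-based clustering over geo-coded incidents gives a comparable hot-spot view (the input file and columns are hypothetical):

```python
# Cluster geo-coded call-center incidents with DBSCAN using a haversine metric.
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

incidents = pd.read_csv("call_center_incidents.csv")   # expects: lat, lon
coords = np.radians(incidents[["lat", "lon"]].to_numpy())

earth_radius_km = 6371.0
eps_km = 2.0   # incidents within roughly 2 km of each other may join a cluster

db = DBSCAN(eps=eps_km / earth_radius_km, min_samples=10, metric="haversine").fit(coords)
incidents["cluster"] = db.labels_   # -1 marks noise points

print(incidents.groupby("cluster").size().sort_values(ascending=False))
```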


In this context, business events can be modeled and planned against demographic and geographic data. Simply put, we can place the event (planned or unplanned) on a map, measure the shortest distance to it, and also deduce demographic risk factors from it.
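
A small helper for the shortest-distance step, using the great-circle (haversine) distance; the coordinates below are illustrative values only:

```python
# Great-circle distance between an event and each response unit, in kilometres.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

event = (-26.2041, 28.0473)                 # hypothetical incident location
units = {"unit_a": (-26.1076, 28.0567), "unit_b": (-26.2708, 28.1123)}

closest = min(units, key=lambda u: haversine_km(*event, *units[u]))
print("closest unit:", closest)
```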

This is what I call “Business Architecture on the move” – data in context of the real world for real decision making.



Tuesday, May 6, 2014

NEW DRIVERS FOR BUSINESS TRANSFORMATION



According to Google, the next transformation of business will be due to maps. The Boston Consulting Group states that 95% of businesses still haven’t realized the broad benefits of maps. This means that most business leaders do not understand what location information can do for their businesses. I believe that it is more than just MAPS; business assets need to be transformed into SMART assets (using indoor or outdoor maps!).

Five steps are required to make a business asset SMART:
Step 1: Make the asset visible
Step 2: Measure the asset's performance "sweet spot"
Step 3: Qualify the risk and complexity of the asset
Step 4: Predict asset behavior
Step 5: Optimize the asset response model

Step 1 requires one to construct a "data brick", as in the next figure. The key in its construction is to align all data layers by means of a geo-tag.
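
A simplified sketch of the idea (not the actual data brick implementation): give every layer the same coarse geo-tag and join the layers on that tag. The data frames and columns below are hypothetical.

```python
# Align several data layers on a shared grid-cell geo-tag.
import pandas as pd

def geo_tag(df, lat_col="lat", lon_col="lon", cell=0.01):
    """Attach a grid-cell key (~1 km at this resolution) so layers can be aligned."""
    df = df.copy()
    df["geo_tag"] = (
        (df[lat_col] // cell).astype(int).astype(str)
        + "_"
        + (df[lon_col] // cell).astype(int).astype(str)
    )
    return df

sales = geo_tag(pd.read_csv("sales_points.csv"))     # expects: lat, lon, revenue
assets = geo_tag(pd.read_csv("asset_register.csv"))  # expects: lat, lon, asset_id
census = pd.read_csv("census_cells.csv")             # assumed to carry a geo_tag already

# One aligned "brick": every layer keyed on the same geo-tag
brick = (
    sales.groupby("geo_tag")["revenue"].sum().to_frame()
    .join(assets.groupby("geo_tag").size().rename("asset_count"), how="outer")
    .join(census.set_index("geo_tag"), how="left")
)
print(brick.head())
```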


This allows one to solve business problems across the value chain of the business, and in a significant way, as all quantitative data is now aligned and placed in context. Very simply, people, space and data can now be put in the same place, and we can use analytics to drive business asset optimization in a number of ways, such as:
a) Create strategic insight from operations
b) Coordinate and deploy sales and service teams in real time to where the action is
c) Monitor assets anytime, anywhere
d) Place customers into context quickly and reliably
e) Enable tactical actions such as capacity planning and demand management

In the same manner that technology is changing the way we interact between the digital and the real world, we need to change our mental models of business transformation and of how we use tools such as sensors, big data, data science and visualization.

Thursday, April 10, 2014

WHY DO YOU STILL USE POWERPOINT AND EXCEL?

It would be hell if I had to use Excel to solve data science problems...


Then suicide will definitely follow if I have to present my findings in PowerPoint...

What about having some fun by mashing operational data on top of your facility maps (after they have been geo-coded), and dropping all of it into an interactive dashboard to see how everything changes over time?
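
For example, a handful of lines is enough to put geo-coded operational events on an interactive, animated map; the CSV and its columns are hypothetical, and Plotly is just one of many options:

```python
# Geo-coded operational events on an animated map, saved as a self-contained HTML dashboard.
import pandas as pd
import plotly.express as px

events = pd.read_csv("facility_events.csv")   # expects: lat, lon, metric, week

fig = px.scatter_mapbox(
    events,
    lat="lat",
    lon="lon",
    size="metric",
    color="metric",
    animation_frame="week",   # slide through time and watch the picture change
    zoom=10,
)
fig.update_layout(mapbox_style="open-street-map")   # token-free base map
fig.write_html("operations_dashboard.html")
```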








Monday, March 17, 2014

COMPLEX PROBLEMS, COMPLEX SOLUTIONS, COMPLEXITY?

About a year ago I saw a presentation from a group of telco business analysts. One particular slide caught my eye; it contained the classic statement "this seems to be a complex problem, we need to design a complex model for it". Right there and then the systems engineer inside me nearly had a heart attack. Recently I have seen LinkedIn blogs on "We will solve your complex problems", or "This is amazing software to solve your complex data problems", etc. So I visited the companies, downloaded the software and read all the books, articles and comments. Looking back from a practical perspective, I have the following questions and comments as a practitioner trying to optimize organizational assets using data science as a core toolbox:

1. Why would you spend at least 10,000 euros per year on software that draws nice pictures but whose functionality can be reproduced with something like PostGIS/PostgreSQL/MySQL and R?
2. Do we define solving complex problems as having the ability to process an MxN matrix with thousands of variables and find correlations in it?
3. How can a piece of software help you to solve “complex” if you don’t understand what “complex” really means?
4. Is the saying true: "a fool with a tool is still a fool"?
5. Shouldn’t software provide you with the ability to understand and solve “complex” problems in an open environment which is fed by global intelligence rather than single company intelligence?
6. Shouldn’t you start with the basic skills and understanding of data before trusting software solutions?
7. How will you know the software produces correct answers?
8. Can you interpret the results from these "magic tools"?
9. If you can do 7 and 8, why do you need software that “does everything for you?”
10. If the vendor had a magic bullet then why do they sell software and not change the world with their own competitive weapons instead?

So when we deal with "complex problems", "complex solutions" and "complexity", I think one should master the following principles, as practised in complexity management:
11. Have a standard work approach for data science so that you can understand how to select, prepare, analyse, model and visualise data.
12. Think in terms of multidimensional data-sets.
13. Expand your multidimensional thinking to include spatial data.
14. Expand spatial data to include geographical data, images and dynamic object movements.
15. Think of spatial data in terms of how it changes over time – temporal insight.
16. Multidisciplinary insight rather than functional insight solves complex problems.
17. A large MxN matrix with thousands of variables does not capture or solve "complex"; rather, it reflects a tool jockey who does not understand the problem that needs to be solved.
18. Kill MS Excel ideas in the data science domain.

This means that if we borrow from bioinformatics and gene-mapping analytics, we can use that knowledge to build data solutions which assist in the identification of fraud patterns inside a large ERP system. Some of my favorite "business fractal images" show how these multidimensional data sets can be used to find optimal trade densities from ERP and geospatial data – definitely not possible through a proprietary system, but possible through multiple open-source collaboration efforts.
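
The bioinformatics-derived methods are not reproduced here; purely as an illustrative stand-in, an anomaly detector over a multidimensional ERP extract shows the general shape of such a solution (the file and columns are hypothetical):

```python
# Flag unusual transaction patterns in a multidimensional ERP extract.
import pandas as pd
from sklearn.ensemble import IsolationForest

tx = pd.read_csv("erp_transactions.csv")   # e.g. amount, hour_of_day, vendor_count, approval_lag
features = tx[["amount", "hour_of_day", "vendor_count", "approval_lag"]]

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
tx["anomaly"] = model.fit_predict(features)   # -1 = flagged as anomalous

suspects = tx[tx["anomaly"] == -1]
print(f"{len(suspects)} transactions flagged for review out of {len(tx)}")
```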





Wednesday, March 12, 2014

Is big data really changing things?

As an industrial engineer I used to lack data for business models and optimization; suddenly things have changed and we can access huge amounts of data. But what is the impact of this? A few examples of how things are changing:

Example - Marketing
No more sample-based data for marketers: in one exercise we analyzed 30 million transactional events from 300 000 customers and, with the help of an unsupervised learning algorithm, discovered three natural customer segments. This leaves everybody discussing the output rather than arguing about what they think the customer does.
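
The post does not name the algorithm used; as a simple, hedged stand-in, k-means over a few behavioural features shows how such segments are discovered (the input file and columns are hypothetical):

```python
# Discover customer segments from behavioural features with k-means.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

customers = pd.read_csv("customer_features.csv")   # e.g. visits, avg_basket, recency_days
X = StandardScaler().fit_transform(customers[["visits", "avg_basket", "recency_days"]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
customers["segment"] = kmeans.labels_

# Segment profiles become the factual starting point for the marketing discussion
print(customers.groupby("segment")[["visits", "avg_basket", "recency_days"]].mean())
```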

Example - Business Architecture
Using the same 30 million transaction records we were able to construct all possible customer process flows using process mining. No more energy spent on manual process mapping; instead, discussion time on patterns, trends and possibilities.
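
Dedicated process-mining tools did the real work; the core idea, deriving the directly-follows relations from a raw event log, can be sketched in a few lines (the log file and columns are hypothetical):

```python
# Derive directly-follows relations between activities from a raw event log.
import pandas as pd

log = pd.read_csv("event_log.csv")   # expects: customer_id, activity, timestamp
log["timestamp"] = pd.to_datetime(log["timestamp"])
log = log.sort_values(["customer_id", "timestamp"])

# For each customer journey, pair every activity with the one that directly follows it
log["next_activity"] = log.groupby("customer_id")["activity"].shift(-1)
dfg = (
    log.dropna(subset=["next_activity"])
    .groupby(["activity", "next_activity"])
    .size()
    .sort_values(ascending=False)
)
print(dfg.head(20))   # the most frequent flows, discovered rather than mapped by hand
```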


Example - BIS/GIS
Merging GIS data with customer data provides an immediate picture of where customers spend their time. No need for corner modelling; instead, the data facilitates the integration of operations, supply chain, sales and marketing around it.




Example - Logistics/Fleet/Route Planning
Use sample sets of 50 to 100 000 vehicles to understand what the travel-time impact will be between any two points in the supply chain network. No more guessing: optimal estimates can be derived.
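
A sketch of how sampled trips replace guesswork: group observed trips by origin-destination pair and summarise the travel times (the trips file and its columns are hypothetical):

```python
# Summarise observed travel times per origin-destination zone pair.
import pandas as pd

trips = pd.read_csv("vehicle_trips.csv")   # expects: origin_zone, dest_zone, travel_minutes

travel_times = (
    trips.groupby(["origin_zone", "dest_zone"])["travel_minutes"]
    .agg(trips="count", median="median", p90=lambda s: s.quantile(0.9))
    .reset_index()
)

# Median and 90th-percentile travel time for any zone pair with enough observations
print(travel_times[travel_times["trips"] >= 30].head())
```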




With the help of my colleague Carmen van der Merwe, we have created an Integrated Design Framework for value propositions which shortens product development by integrating large design teams. Even more exciting is that Big Data directly impacts a number of key design areas, which should shorten development even more drastically. The following picture shows where Big Data directly impacts the design.




Friday, February 28, 2014

Asset Insight and Risk Evaluation through Big Data & Data Science

Sensor technologies create Big Data, and Big Data enables us to move away from "sample-based" models to real behaviour. In this picture we were able to use 30 million data points to create a visual flow pattern of 300 000 unique customers around certain points of interest using the Disco process mining tool from www.fluxicon.com. Our SMART collection system www.chekkins.com did all the hard work of capturing the millions of transactions. This now gives management the ability to investigate any flow sequence around any point of interest at any point in time (this wasn't possible a few years ago - but that is the power of Big Data and Data Analytics today).



The following example shows a summary map combining Big Data with geospatial data. Using www.chekkins.com we are able to collect millions of site observations, and from those we calculated volatility levels, uncertainty, systemic risk, robustness and probability of default for various risk clusters.

Sunday, January 12, 2014

REDUCE RISK THROUGH QUANTITATIVE COMPLEXITY SCIENCES

In my previous blog I speculated about the practical application of complexity sciences to assist in business management. In this blog I demonstrate how we use complexity sciences to create additional quantitative insight into business systems, in support of our mission at IAM: "To see better. To understand better. To do better." As an entrepreneur, I have learned over the years that investments go hand-in-hand with risk: understand the risk in order to mitigate potential issues and subsequently gain on system performance.

The Anscombe quartet consists of four data sets which share similar statistical properties, but which, when visually inspected, show very different patterns. I provide the scatter plots at the end of the blog to support the use of complexity science in discovering better insight into system behavior.

Complexity is hard to define, but put simply, complexity is the result of a combination of uncertainty, volatility and relationships in a system. This also underpins the key requirement of any complexity science approach: the ability to measure entropy and chaos in an unsupervised, non-linear manner. In the case of the Anscombe quartet there are only 11 observations – not enough to create meaningful insight into characteristics such as robustness, self-organised criticality and small-world behavior, but still enough to demonstrate the concept.

The Anscombe quartet

The Anscombe quartet is four data sets which share common statistical attributes. In this example the four pairs can represent four business units, departments, products or processes, each with 11 observations.


Statistical Analysis

Descriptive statistics show similarity between the X and Y pairs, with similar correlations between the variables. At face value these four units operate and perform in a similar fashion.
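
This is easy to verify with the actual Anscombe values; the following sketch reproduces the near-identical means, variances and correlations:

```python
# Compute descriptive statistics for each Anscombe pair (no external data needed).
import numpy as np

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]          # x1 = x2 = x3
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
ys = {
    "y1": [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    "y2": [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    "y3": [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    "y4": [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
}
xs = {"y1": x, "y2": x, "y3": x, "y4": x4}

for name, y in ys.items():
    xi = xs[name]
    r = np.corrcoef(xi, y)[0, 1]
    print(f"{name}: mean_x={np.mean(xi):.2f} mean_y={np.mean(y):.2f} "
          f"var_y={np.var(y, ddof=1):.2f} corr={r:.3f}")
# All four pairs print near-identical means, variances and correlations (~0.816)
```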




COMPLEXITY SCIENCE INSIGHT

What else can we see or understand about this system? How would we go about optimizing it? What is its overall risk? If we optimise it, where should we start? What risks do we face if we do this?

With complexity science we can get quantitative answers to these questions. To summarize the sections that follow, we can state the following about the system, as derived from quantitative analysis:

" This system is fairly integrated and shielded from either random failures or structural failures. This is caused by the high interaction of system components between all business units. However, performance optimization of this system do pose a greater challenge. From the quantitative insight the strategy should be to address and correct the high uncertainty of x1,x2 and x3 performance. The cause-effect model of the system should be used to study the impact of changing x1,x2, and x3 to understand what implications will effect the overall system. Improving x1, x2, and x3 will have significant impact on performance indicators by reducing customer lead times, work-in-progress and waiting times. It is important to protect the stability of BU 4 (x4 and y4) as they have significant impact on 60% of all observations".


ANALYSIS: VOLATILITY, UNCERTAINTY & IDEAL CAPACITY
Volatility = measures deviation from the indicator mean. Between 0-50% indicates a relatively stable indicator, between 51-100% a less stable one, and above 100% a highly volatile one.
Uncertainty = indicates the level of uncertainty in the indicator measurement, with 0% meaning no uncertainty and 100% total uncertainty in the measurement.
Ideal Capacity = the ideal capacity required to process indicator values, according to the level of uncertainty in the indicator values at the selected frequency of observations.
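
The exact formulas behind these measures are not published here; as an assumption for illustration, volatility can be sketched as the coefficient of variation per indicator:

```python
# Volatility sketched as the coefficient of variation (std dev relative to the mean, in %).
import numpy as np

indicators = {
    "y1": [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    "y4": [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
}

for name, values in indicators.items():
    values = np.asarray(values)
    volatility_pct = 100 * values.std(ddof=1) / values.mean()
    band = "stable" if volatility_pct <= 50 else ("less stable" if volatility_pct <= 100 else "highly volatile")
    print(f"{name}: volatility = {volatility_pct:.1f}% ({band})")
```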





Analysis: Systemic Risk Score

Complexity Score = a relative value calculated from the currently measured complexity, within the range of the minimum and maximum complexity of the system.
Systemic Risk Score = a value between 0 and 100%. Complexity is a result of the volatility in the behavior of objects within a system, and of the level of uncertainty in the relationships between these objects. For a given system, 100% represents the maximum risk due to complexity – this is an absolute value and can be used to compare different systems against each other.




Analysis: Small World Evaluation

Small-world networks are typical of scale-free systems where the average path in the network is short (the mean geodesic) and transitivity is high. This means that relatively few nodes act as hubs with many relationships, with weak links between these hubs. In this case the average path is shorter than in a comparable random network, but there is no transitivity due to the small number of nodes.
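
A hedged sketch of this check: compare the indicator network's mean geodesic and transitivity against a random graph of the same size. The edge list below is a hypothetical stand-in for the actual correlation-derived network:

```python
# Compare mean geodesic and transitivity against a same-size random graph.
import networkx as nx

edges = [
    ("x1", "x2"), ("x1", "x3"), ("x2", "x3"),   # the strongly related x indicators
    ("x1", "y1"), ("x2", "y2"), ("x3", "y3"),
    ("x4", "y4"), ("y1", "y2"), ("y2", "y3"), ("y4", "y1"),
]
G = nx.Graph(edges)

print("observed mean geodesic:", round(nx.average_shortest_path_length(G), 2))
print("observed transitivity: ", round(nx.transitivity(G), 2))

R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=42)
if nx.is_connected(R):
    print("random mean geodesic:  ", round(nx.average_shortest_path_length(R), 2))
print("random transitivity:   ", round(nx.transitivity(R), 2))
```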



Analysis: Additional Network Insight

Given the small number of observations in the Anscombe quartet, the following analysis merely indicates the potential of complexity sciences to uncover more hidden facts about it. By extending the network model with Bayesian probabilities, potential directional cause-effect relationships in the data can be identified. Understanding this adds greatly to creating a predictive model using dependent and independent variables.

The scale-free test gives insight into whether the system is in a self-organised critical state (which, in this case, it is not).

To test the robustness of the system, use is made of "random attacks" and "structured attacks" on the network nodes (vertices). This means the nodes are removed in a random order, as well as in a structured order, while observing the mean distance of the network. In this case, removing 25% of the vertices increased the mean distance by only 1.5%, which indicates a fairly robust network. This is consistent with the network being fairly dense at 85%, meaning many relationships between the different indicators and no significant hubs.
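
The attack simulation can be sketched as follows, again on a hypothetical stand-in network: remove roughly 25% of the nodes (randomly, and by descending degree) and observe the mean distance of the largest remaining component:

```python
# Random and structured (degree-targeted) node removal, observing the mean distance.
import random
import networkx as nx

def mean_distance_after_removal(G, nodes_to_remove):
    H = G.copy()
    H.remove_nodes_from(nodes_to_remove)
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

G = nx.Graph([
    ("x1", "x2"), ("x1", "x3"), ("x2", "x3"), ("x1", "y1"), ("x2", "y2"),
    ("x3", "y3"), ("x4", "y4"), ("y1", "y2"), ("y2", "y3"), ("y4", "y1"),
])
k = max(1, int(0.25 * G.number_of_nodes()))        # remove ~25% of vertices

random.seed(1)
random_attack = random.sample(list(G.nodes), k)
structured_attack = sorted(G.nodes, key=G.degree, reverse=True)[:k]

print("baseline mean distance: ", round(nx.average_shortest_path_length(G), 2))
print("after random attack:    ", round(mean_distance_after_removal(G, random_attack), 2))
print("after structured attack:", round(mean_distance_after_removal(G, structured_attack), 2))
```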



Analysis: Self-organised clustering

The time-based observations (events) are not sufficient for temporal investigations; hence self-organised clustering is used to investigate potential correlations between events. In this case, six of the 11 observations cluster around significant Business Unit 4 inputs and outputs.



INSIGHT DERIVED FROM USING COMPLEXITY SCIENCES

To summarize the insight derived from the complexity analysis, we can see and understand the Anscombe quartet better by adding the following observations:

a) The volatility indicators show similar volatility in the X and Y groups. Volatility explains deviation from a mean, and in this case the outliers were left in place for demonstration purposes.
b) The uncertainty measures show the different levels of uncertainty for each indicator; it becomes clear that these indicators are not as similar as standard statistical measurements suggest.
c) The ideal capacity calculation uses the combination of volatility and uncertainty to indicate the different levels of capacity required by each business unit.
d) The systemic risk score is relatively high at 68%. The level of uncertainty in the indicators, as well as the high system density (85%), supports this level of systemic risk, indicating that the four business units do not operate in isolation but have cross-relationships which increase the complexity between them.
e) The system's relationships are not randomly driven, as the benchmark against a similar random network shows significant differences in network characteristics. The strongest relationships are between x1, x2 and x3, at 86%.
f) The system does not measure as a scale-free system and does not support a self-organised critical state.
g) The system is fairly robust against random and targeted attacks. A simulated removal of 25% of the vertices resulted in only a 1.5% increase in average network distance.
h) From a temporal view, 6 out of the 11 events can be clustered around Business Unit 4 events, mainly because the level of uncertainty around Unit 4 is low and Unit 4 is not significantly impacted by the other units.

This can be summarised in the following risk approach and improvement strategy:

"  This system is fairly integrated and shielded from either random failures or structural failures. This is caused by the high interaction of system components between all business units. However, performance optimization of this system do pose a greater challenge. From the quantitative insight the strategy should be to address and correct the high uncertainty of x1,x2 and x3 performance. The cause-effect model of the system should be used to study the impact of changing x1,x2, and x3 to understand what implications will effect the overall system. Improving x1, x2, and x3 will have significant impact on performance indicators such as customer lead times, reduction in work-in-progress and waiting times. It is important to protect the stability of BU 4 (x4 and y4) as they have significant impact of 60% on all observations made in the system".

Anscombe Quartet Scatterplot


Conclusion
In summary, the above analysis shows that the Anscombe quartet is a fairly complex system, with relationships between most variables and a high degree of uncertainty in the data sets. Each business unit, although appearing very similar under standard statistical measurements, is quite different in operation, and the units cannot be viewed or treated in isolation.

In conclusion, this approach should still be applied as preached in any data science methodology: use your common sense to analyze, construct and predict!






Sunday, January 5, 2014

Quo Vadis complexity management?


It is now 8 years since I bought a GBP 1,500 report on how to measure complexity in business. I also remember how I read Mandelbrot's book on the misbehaviour of markets with great excitement, and how my head spun after spending a week with a self-acclaimed "guru and publisher" on the subject matter. Since then I have read hundreds of articles and many books on complexity in order to gain a practical understanding (and application) of it. I also have to add that I have tried to solve real-world problems in supply chain, retail credit books, merchandising, commodity trading, stock control, systemic risk management and a few more areas with so-called complexity tools and techniques.

Having spent another holiday reading books on "non-linear dynamics", "chaos theory" and "black swans", I must admit that in these 8 years I have still to find a practical application or use for things called "self-organised criticality", "fragility" or Wolfram's "cellular automata". It is as if the author either tries to find a case in the rhythm of the flight of a butterfly, or writes 300 pages describing himself playing with sand piles, or tries to discredit 300 years' worth of scientific discoveries. The fact that man has accomplished major feats such as putting a man on the moon, developing quantum physics and splitting the atom doesn't seem to matter at all to these complexity gurus. So for people like me who are really interested in solving business problems and believe complexity theory can assist in developing new approaches to achieve this, I have to say: "Please, if you publish or claim to be a complexity expert/guru, show us real applications with evidence instead of always ending with vague statements about how the world is going to change with your contribution!"

Quo vadis, complexity management? It is nice to talk about complexity; it becomes a real problem when you measure complexity through qualitative surveys, and a waste of time when one can't implement anything in the business world from those mathematical proof theories. If you think you will discover complexity insight through articles, books and the current claims of quantitative complexity measurement, you need to know that you will most probably end up at the same spot where you started: confused.

I believe that there is a place for complexity management, first of all as a management philosophy, one that states we need to solve real-world problems through cross-functional diversity. One great example of this is in data science, specifically the open-source R movement, which enables learning between disciplines such as genetics, bioinformatics, finance and engineering. Here my company has used algorithms from the ecology and bioinformatics disciplines to solve problems in marketing - that is a real application of complexity management thinking.

As a science there is a place for complexity management, one which requires bridging the chasm between academic research and real business applications, and as such I would like to see it as a science where you don't have to be a nuclear physicist to understand what it is all about. Even Taleb, the author of the Black Swan concept, confuses principles of probability, event outcomes, scale-freeness and scalability, only to end, after hundreds of pages, with vague speculations about how it will change the world. It won't - stop torturing the human race.


So, if complexity management keeps to its current road, it will either keep on thriving, or eventually die, in exotic academic locations and R&D labs; or maybe reach a bestseller list by confusing people with butterflies and sand piles. Or just maybe we can start seeing real practical applications which will benefit the business world – if this happens, then this road might have a bright future.
