Saturday, October 12, 2013


What are SMART assets? For me it means taking an intangible or tangible business asset and applying the following principle:

“To see better. To understand better. To do better.”

In order to do this, we need to design and implement Product, Process and Data that support the customer journey towards a better customer experience – bearing in mind that organisational change will be part of this!

Business Process Engineering (BPE) is a change framework which integrates change management and complexity management with solid industrial engineering. Developed over 20 years, BPE provides a practical and resilient approach to converting change requirements from ideas into market engagements in any complex business environment.

Organisational change is unique to every organisation, so BPE relies on key principles from the discipline of complexity management to keep the change initiative on course – no matter how disruptive the change may be.

Making organisational assets SMART requires a process which takes the organisation through three major change phases: Innovation, Implementation and Improvement. During this process we split “hard change” from “soft change” to strike the proper balance between business changes while allowing people to adopt and drive them. The focus always stays on CUSTOMER->PRODUCT->PROCESS->DATA.

In order to deal with real-time views and scenarios we apply the latest Big Data, Data Science, Process Science, Physics and Engineering models – and sometimes we borrow from anthropology, biology or astronomy to solve the most difficult problems. At the heart of this is a business fractal model which implements complexity management in a practical manner.

Sunday, August 25, 2013

Robustness of a network

The following graph is the final network model state discussed in my previous blog post, where I indicated that the robustness of a network can be determined from a network simulation.

The key statistics of the network are:

Number of Vertices: 303

Number of Edges: 326

Diameter of network: 13

Mean Degree of the network: 2.151815

Maximum Degree of the network: 32

Mean Geodesic of the network: 5.136059

Density of the network: 0.0071017

Transitivity of the network: 0.003640777
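The original statistics were produced with R; as an illustration, the same measures can be computed in Python with networkx. This is a sketch on a stand-in random graph (not the blog's actual 303-node network), so the numbers will differ from those listed above.

```python
# Sketch (not the original R code): computing the network statistics listed
# above with networkx, on an illustrative scale-free stand-in graph.
import networkx as nx

G = nx.barabasi_albert_graph(303, 1, seed=42)  # hypothetical stand-in network

n, m = G.number_of_nodes(), G.number_of_edges()
degrees = [d for _, d in G.degree()]
stats = {
    "vertices": n,
    "edges": m,
    "diameter": nx.diameter(G),
    "mean degree": sum(degrees) / n,      # equals 2m/n for an undirected graph
    "max degree": max(degrees),
    "mean geodesic": nx.average_shortest_path_length(G),
    "density": nx.density(G),             # 2m / (n * (n - 1))
    "transitivity": nx.transitivity(G),
}
for name, value in stats.items():
    print(f"{name}: {value}")
```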

Robustness of the network can be tested in two ways: first by attacking the highest-connected hubs in a structured manner, and then by removing nodes in a random manner. The simplest measure of the impact is the average distance (mean geodesic) of the network. The following graph shows a few simulations in which 55% of the nodes are removed. The blue line shows the failure in the average path length under a structured attack; the red lines result from random attacks. From a structured perspective this network is highly fragile, failing after less than 1% of node removals, whereas it is quite robust when attacked randomly. Whether this is good or bad naturally depends on the application: if we want to design a marketing campaign, we might prefer a structured approach over a random one; if it is a supply chain network, the structured attack will be of high concern from a risk management perspective.
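The two attack modes described above can be sketched as follows. The blog's simulation was done in a few lines of R; this is an equivalent Python/networkx sketch, with one assumption made explicit: the mean geodesic is measured on the largest remaining component after each removal, since the full graph becomes disconnected.

```python
# Sketch of the two robustness tests: structured (hub-targeted) vs random
# node removal, tracking the mean geodesic of the largest component.
import random
import networkx as nx

def attack(G, fraction=0.55, targeted=True, seed=0):
    """Remove a fraction of nodes, recording mean geodesic after each step."""
    H = G.copy()
    rng = random.Random(seed)
    path_lengths = []
    for _ in range(int(fraction * H.number_of_nodes())):
        if targeted:  # structured attack: always remove the highest-degree hub
            node = max(H.degree(), key=lambda nd: nd[1])[0]
        else:         # random attack
            node = rng.choice(list(H.nodes()))
        H.remove_node(node)
        giant = H.subgraph(max(nx.connected_components(H), key=len))
        if giant.number_of_nodes() > 1:
            path_lengths.append(nx.average_shortest_path_length(giant))
    return path_lengths

G = nx.barabasi_albert_graph(300, 1, seed=1)  # illustrative scale-free network
structured = attack(G, targeted=True)
rand = attack(G, targeted=False)
```

Plotting `structured` against `rand` reproduces the blue-versus-red comparison described above: the targeted curve collapses far earlier than the random ones.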

Finally, analysis of the network model indicates that this is a scale-free network, as the degree distribution follows a power law – which explains the network's fragility under structured attacks and its robustness under random ones.
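A rough way to check the power-law claim is to look at the slope of the degree distribution's complementary CDF in log-log space: a scale-free network gives an approximately straight, negatively sloped line. The sketch below uses a stand-in graph and a crude least-squares fit (a rigorous test would use maximum-likelihood estimation); it is illustrative, not the blog's original analysis.

```python
# Sketch: crude power-law check on the degree distribution via the slope of
# the complementary CDF (CCDF) in log-log space.
import math
from collections import Counter
import networkx as nx

G = nx.barabasi_albert_graph(2000, 1, seed=7)  # illustrative scale-free graph
counts = Counter(d for _, d in G.degree())
ks = sorted(counts)
n = G.number_of_nodes()

# Complementary CDF: P(degree >= k)
ccdf = {k: sum(counts[j] for j in ks if j >= k) / n for k in ks}

# Least-squares slope in log-log space (rough exponent estimate)
xs = [math.log(k) for k in ks]
ys = [math.log(ccdf[k]) for k in ks]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print(f"estimated CCDF slope: {slope:.2f}")  # roughly -(gamma - 1)
```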




Thursday, August 22, 2013


I just evaluated a fancy dashboard/analytical toolset while searching for a solution to a specific problem; very nice to drag and drop stuff and get some fancy pictures out.

I then received the quote for the perpetual/annual licence agreement/pay-me-as-much-as-possible-for-fancy-graphs.

I got a bit green when I saw the price, so to calm myself I grabbed a few lines of brilliant open-source R code from a blog to build a dynamic network model (I am busy trying to calculate the diffusion rates in market segments from complex network structures).

Here is the result:

The next step will be to insert three lines of R code to model a random attack on the network and see the level of network robustness, and another three lines to model a structured attack on the key nodes and understand the overall fragility of the design. Then plot it all with R's ggplot, which is free.

Advice asked & advice given is free.

I can check all the code and see all the results, and I can mingle a bit of Monte Carlo simulation into it as well; it still stays free. It feels like a good deal to me, especially having the freedom to do as I like.
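Mingling Monte Carlo into the robustness check is straightforward: repeat the random attack over many seeds and summarise the spread of the outcome. A hedged sketch (in Python/networkx rather than the blog's R, with illustrative names and an illustrative stand-in graph) that measures the surviving giant-component fraction:

```python
# Sketch: Monte Carlo over random attacks - repeat the simulation across many
# seeds and summarise the spread of the surviving giant component.
import random
import statistics
import networkx as nx

def giant_fraction_after_random_attack(G, fraction, seed):
    """Fraction of nodes left in the largest component after random removal."""
    rng = random.Random(seed)
    H = G.copy()
    n_remove = int(fraction * H.number_of_nodes())
    for node in rng.sample(list(H.nodes()), n_remove):
        H.remove_node(node)
    return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

G = nx.barabasi_albert_graph(300, 2, seed=3)  # illustrative network
runs = [giant_fraction_after_random_attack(G, 0.55, s) for s in range(100)]
print(f"giant component after 55% random removal: "
      f"mean={statistics.mean(runs):.3f}, stdev={statistics.stdev(runs):.3f}")
```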

I don't see how a scientist or engineer can work with analytical tools that generate answers by "magic" which cannot be verified – too restrictive and dangerous for me. Don't get me wrong, there are some very good specialist tools out there, like the Disco toolset, but then at least I can read the PhD theses of the founders and understand what the wiring is all about... that's cool.

I still think that a fool with a tool is still a fool – even if the tool is Visio or PowerPoint or Excel.

Wednesday, July 17, 2013

Self-Organised Criticality - a Real Application

Here are some examples of what the movement steps look like, as well as the time spent per movement.
