Creating dynamic classes in JavaScript ES6

I have often wondered whether it is possible to create classes dynamically, on the fly, in JavaScript. Using prototypal inheritance, of course, it has always been possible simply by adding methods to an object's prototype. But what if the class you are creating at runtime doesn't know what methods it has to implement until an external source provides them? And in today's day and age, using ES6 or ES7 with Babel, how can these classes be created dynamically from external implementations?

As it turns out, it is actually quite easy. Here is a code example:

let mymethod = 'myMethod'

class myClass {
  constructor() {
    console.log('class constructed')
  }

  // Computed property name: the method's name comes from the variable
  [mymethod]() {
    console.log('dynamic method called')
  }
}

var my = new myClass()
my.myMethod() // "dynamic method called"

This is great: it is a way of creating methods on a class without knowing the name of the method up front, taking it instead from a variable. But this is still fairly limited, and why bother, if the implementation of the method is already known in the file (via require/import, or as a function in the same JavaScript file)? What if the actual implementation of the method is stored elsewhere? Now this is a challenge. One way is to copy the methods onto an object. This can be done like so:

var methods = {
  increment: function () { this.value++; },
  display: function () { console.log(this.value); }
};

function addMethods(object, methods) {
  for (var name in methods) {
    object[name] = methods[name];
  }
}

var obj = { value: 3 };
addMethods(obj, methods);
obj.display();    // "3"
obj.increment();
obj.display();    // "4"

Another way is to add the methods after the fact to the prototype chain, like this:

function MyValue(value) { this.value = value; }
var obj = new MyValue(3);

// A plain object literal has no .prototype property; methods added to a
// constructor's prototype are shared by every instance
MyValue.prototype.mynewmethod = function (myparam) {
  // do something, return something
};

But this is ES5 JavaScript, not ES6. While much of ES6 is "syntactic sugar" that transpiles down to ES5 (using Babel, Traceur, etc.), it is still valid code. What it does is take an object containing functions (with dynamic names) and map them onto an object's prototype chain. This is a good way to add functions to an existing object.

But what if you don't know the method's implementation until run time (e.g. you download a method or function from somewhere else)? This is where things get hairy. Because you don't control the "source" of the method implementation, you have to trust that the source is trustworthy. Security issues come largely into play here. JavaScript is a great language, especially with the ES6 "sugar", but if you don't understand the prototype chain it is a bit of a double-edged sword.
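To tie this together, here is a sketch of building an ES6 class at runtime from a plain object of method implementations, as might arrive from an external source. The `makeClass` name and the sample methods are mine, purely illustrative:

```javascript
// Build a class at runtime from an object of method implementations.
// Neither the method names nor their bodies are known until `methods`
// arrives (e.g. from a trusted external source).
function makeClass(methods) {
  class Dynamic {}
  for (const name of Object.keys(methods)) {
    Dynamic.prototype[name] = methods[name]
  }
  return Dynamic
}

const methods = {
  increment() { this.value++ },
  display() { return this.value }
}

const Counter = makeClass(methods)
const c = new Counter()
c.value = 3
c.increment()
console.log(c.display()) // 4
```

The same caveat as above applies: every function you attach this way runs with full access to the instance, so the source of `methods` must be trusted.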

Containerization: Docker

I want to show some details on how to build a microservice using NodeJS and deploy it to Docker containers within any cloud.

If you haven't used Docker before, it is fairly simple. It is a technology that allows you to run an "application container" within an existing server (physical or virtual). It is almost like having a virtual machine within a virtual machine, except that Docker shares the host's OS with the application. The difference is that you can isolate your application code into a container that can be deployed anywhere.

Another great feature of Docker is that it is a sandbox for your application. Everything in the Docker container is self-contained. In a traditional VM, especially one that hosts multiple applications, apps are normally deployed to multiple directories on the host. If these applications share resources, then moving an application from one server to another can break it on the new host, if you don't also make sure its dependencies are installed there.

Enter Docker. Docker allows you to bundle everything your application requires in a minimal way, packaging up the whole application stack. This means you can build a Docker image with everything the application needs to run, and take that "box" and deploy it anywhere. Docker uses a layered file system, so you can also fetch and auto-install any deployment code you want, even from a repository like Git. In doing this, it focuses on your application's dependencies, but the image won't contain the full OS.

In a proper distribution of a Docker image, the only contents of the container are your application and its dependencies. So if your host OS is Ubuntu, your Docker container won't carry the Ubuntu kernel or other base files, but it will have anything specific, over and above the host, needed to make your application work.
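As a concrete illustration, a minimal Dockerfile for a Node app might look like the following. The base image tag, port, and file names here are assumptions for the sketch, not prescriptions:

```dockerfile
# Start from an official Node base image (tag is illustrative)
FROM node:lts-alpine

WORKDIR /app

# Install only the app's dependencies first, so Docker's layer cache
# can skip this step when only application code changes
COPY package.json package-lock.json ./
RUN npm install --production

# Add the application code itself
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```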

NodeJS is a server-side tool that brings JavaScript to the server. JavaScript has traditionally run in the browser, given that it is the language of the web; NodeJS is built on the same V8 engine that Google Chrome uses. But since NodeJS runs code on the server rather than in a browser, it uses "modules" so that JavaScript can be shared between the code running on the server and the code sent to the browser. Another exciting use of Node is to run a universal or "isomorphic" application, which allows your code to run on both the browser and the server with little difference. This is great for SEO and indexing, yet it keeps your application lightning fast, particularly when using an SPA technology like Angular, React, or Backbone to run most of your code in the browser.

A few things to note about Node: it runs within a single process, which means it runs on only ONE thread on the server. However, it has an event loop that lets Node applications invoke functions asynchronously via callbacks, which makes them perform much faster than a traditional blocking application. This is the magic of NodeJS.
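A tiny sketch of that event loop in action: the synchronous code runs to completion first, and the queued callback fires only afterwards (the variable names are illustrative):

```javascript
const order = []

order.push('start')

// The callback is queued on the event loop (just like an async file read
// or network response) and runs only after the current synchronous code
// has finished
setTimeout(() => {
  order.push('callback')
  console.log(order.join(' -> ')) // start -> end -> callback
}, 0)

order.push('end')
```

The single thread is never blocked waiting; it registers the callback and moves on, which is why one Node process can juggle many concurrent requests.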

Once an application is built using Node, it can be deployed with Docker to any virtual machine on any cloud, and as long as all of its dependencies are contained within that black box, it will just work. It is possible to include all the dependencies within a Docker image to minimise deployment issues, but some dependencies, such as database technologies, often don't make sense to include in an application stack, simply because they are shared. That doesn't mean they can't run in a Docker container themselves!

Docker is changing the world. Google has written an orchestration system for Docker, called Kubernetes, which provides far more control over clustering and more production-ready deployments. I suggest you check both of them out. Oh, and by the way, each Docker image has an initial startup command which you can use to have the container set itself up. Automated deployments with minimal footprints, anyone?

Here are some links you might be interested in:

Docker: Docker Website

NodeJS: Node JS Website

Tutorial on setting up Docker and NodeJS, with other Docker Images running mongodb, redis, logs, and other dependencies all saved on the host from Docker containers:

Example of a React Isomorphic application using ES6 and Facebook React

The world of IT changes all the time. New technologies evolve, sometimes overnight, that have the potential to change the world. Facebook's React and Flux are two examples. These technologies are still evolving, but they have matured enough for me to write a how-to guide to using them to create pages that are rendered either on the server or in the browser. Why is this important? Because it means you can finally create real-time applications that not only work in both places, but will also be seen by the major search engines.

Never has the time been so right for us as developers to create these kinds of applications, exposed to the web as a whole, without network latency being an issue. This helps a lot with performance, but it also helps with code maintenance. Why? Because you only require developers who know JavaScript, one language that can do both. For more information on isomorphic applications, please refer to my earlier post; it can be found here.

Now on to bigger and better things: an example. I have been working with NodeJS, React, Flux and SPA apps for quite a while. Frameworks like Angular brought high-performance SPAs onto the scene, but they leave a lot to be desired, particularly on the learning curve. React and Flux from Facebook change all that by allowing developers to create UI "components" with their own state, and "stores" holding the data each UI component needs, so that the component is re-rendered on a change in a highly performance-oriented manner.

New Business Applications using Javascript, SPA, and Isomorphic applications

With the invention of the SPA (Single Page Application), JavaScript has become the primary language of choice, simply because it is really the only programming language that browsers understand. This single language has surpassed all others, given the massive exposure browsers give it across the web.

In the past, JavaScript was dismissed as a toy or "non-language" because of the way it does things. Concepts like polymorphism, inheritance and abstraction, familiar to most developers, don't appear to be in the language. That impression is wrong, however: JavaScript is a true object-oriented language; it just does things differently.

Enter ES6. This is the newly accepted standard for JavaScript. The way it does things is a lot closer to traditional programming languages, even though much of it is just syntactic sugar on top of the original language to make it a little easier for developers. ES6 has only recently been approved as an official release, and it will take time for browser vendors to catch up with the change, given that their browsers are used by billions of people worldwide.
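As a rough illustration of the "syntactic sugar" point, the ES6 class below and the hand-written ES5 version underneath behave the same way; the ES5 shape is approximately what a transpiler such as Babel emits, simplified here for clarity:

```javascript
// ES6: class syntax
class Greeter {
  constructor(name) { this.name = name }
  greet() { return 'Hello, ' + this.name }
}

// ES5: the same thing as a constructor function plus prototype methods,
// roughly what a transpiler produces
function GreeterES5(name) { this.name = name }
GreeterES5.prototype.greet = function () { return 'Hello, ' + this.name }

console.log(new Greeter('world').greet())    // "Hello, world"
console.log(new GreeterES5('world').greet()) // "Hello, world"
```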

However, this isn't a real problem today. There are transpilers available that will translate ES6 code to ES5. Is this a performance issue? Not at all. As a matter of fact, Java does a similar thing by compiling Java classes into bytecode, and .NET does it by translating C# classes into CLR intermediate code. At the end of the day, does it really matter? How a machine executes your code is quite different from how you write it with the myriad of developers you have. Maintainability of that code, and reacting to issues, matters much more, given that bugs and issues cost your teams the most time.

Personally, I really like Node/NodeJS. It allows developers to write server code in JavaScript. It is based on Chrome's V8 engine and offers a set of modules that run server-side (rather than client-side), which means you don't need multiple teams to maintain multiple projects written in different languages. Want a web server that listens on HTTP, HTTPS, and WebSockets? This can be done in a very small amount of code.

Where these technologies really shine, though, is in sharing JavaScript code between the server and the browser. While there are some differences between JavaScript on the server and in the browser, the language is still the same, and new applications that support both are appearing very quickly. The term given to these applications is "isomorphic": the same code in the same file can be executed in either environment.

This is a BIG deal for project teams, whether at a small mom and pop shop, or a huge enterprise.

The focus of today's blog post is to highlight these new technologies, especially the ability to execute code on either side and have it run the same way, regardless of whether it is on the server or the browser client. A lot of interest has been put into SPA apps, and for good reason: they perform in almost real time, whereas most server-side applications need a request and response for every page re-render. Server-side rendered applications have traditionally been very slow performance-wise, and leave change handling up to the developer (which means it can be buggy) to sort out. Not ideal, yet this way of creating web applications has been the "norm" for quite some time.

Then came "Ajax". This used XML to send "snippets" of data to the server and get a response without re-rendering the page. Libraries like jQuery came onto the scene as a step towards a more real-time web client user experience. However, XML is verbose, and Ajax was simply a band-aid for the performance issue.

At the time it worked quite well. If an up-to-date bit of data was needed to re-present it, a simple Ajax call obtained it and the client-side application worked out the rendering. A great start, but not quite enough.

SPA applications changed all that. Developers could now build applications that run in the browser in real time, without worrying about the time it takes to re-render a page, simply because page renders weren't needed from the server. The client browser had all the code available to perform these changes. This was quite an advance for web-based applications: real-time applications can be built even though the browser doesn't have direct access to server-controlled data.

The issue here is that web browsers only care about the data to be presented, and JavaScript as a language has traditionally been used as a way to work with the user. As a language it has everything required to be fully fledged, but because it does things "differently" from server-side languages, it was difficult to work with and often left developers "pulling their hair out". With ES6, React, Flux, and Node, this isn't really an issue anymore. A whole new developer experience has resulted, and one I believe will be quite valuable not only to IT professionals but to business units as well. This translates to cutting costs and timeframes, and allows applications that perform incredibly fast to be built in weeks rather than months or years. Try that on for size. Your development resources are cut down and can turn around your business needs in mere months. You wanted the IT world to listen? We have, by making things much faster with fewer resources. Perhaps it is time to start investing in IT again, and stop fearing it just because you don't understand it?

In my next blog post I will provide an example that can be used by developers to do just this. Stay tuned.

An introduction to React and Flux by Facebook

React is a great new library from Facebook. It takes a little getting used to, and sometimes gives a little "yikes" factor, simply because it challenges the normal and accepted way of constructing applications. The biggest "yikes" is that it challenges the Model-View-Controller concept by implementing a set of components that include both JavaScript and HTML. Traditional systems, whether server-rendered or client-rendered, usually frown on mixing the presentation layer with the business layer, and with an MVC-style approach to web development these areas of concern are separated between presentation, business logic, and data models.

React challenges this idea only to the extent of declaring that these areas of concern have traditionally been drawn around the development team (where there are UI designers, business logic designers and data designers). Their challenge is timely, especially in an Agile development world, because most designers don't even code to the HTML or stylesheet level, but leave that up to developers.

In Facebook's view, the areas of concern should instead be drawn around business functionality. I tend to agree with this. If a component is to be developed for, let's say, a messaging system, there would be a clear business owner for it, and to meet their requirements the messaging system would have a particular UI and way of interacting with users, regardless of who is employed to build each part of the system. Since developers understand HTML and stylesheets anyway, why separate these concerns between a UI designer (who is more focused on mockups) and the business logic? Further, if some logic is needed just for presentation, why should a UI designer know or care about a programming language to implement the rules they define? They shouldn't.

To further this concept, Facebook's React challenges the MVC model because they feel it just isn't scalable. Again, I tend to agree, simply because an MVC codebase can get out of hand over time, as dependencies from one MVC triad to another increase, causing a chain reaction of support issues.

To address these concerns, Facebook came up with React. It makes development a lot easier by implementing UI components that maintain state. This is the state of the UI, not the state of the data, and hence mixing HTML fragments into a React component (something developers do anyway) is not such a bad thing. In fact it is a great thing, because it allows the developer to think in UI components, keeping the UI logic within each component and not mixing it up with actual business logic.

Enter Flux. Flux is yet another thing the Facebook folks have created. It is not a library like React, but rather an architecture that allows developers to separate concerns within a project team. Unlike a traditional MVC-style architecture, Flux holds that data changes should flow one way: from the source of the change, through to the actual presentation, and then to the data layer to persist the change. Hey, not bad at all. This removes the complexity of rendering up-to-the-second changes in the data; they do so by re-rendering the entire UI component. Oops, yet another "yikes"!

But, like other high-performance applications, the re-render is only a delta change rather than an entire rewrite. Enter the React virtual DOM. As in high-performance systems such as game engines, rendering only the part of the screen that actually changed dramatically increases performance. React achieves this by maintaining a virtual DOM; when it comes to expensive real DOM operations, it does a diff to find the areas that changed, and renders only those. The data and presentation state remain intact immediately after a change, and the rendering system is told to re-render just the delta. This is a developer's dream, since they don't have to write complicated logic to keep up with changes to either the data or the presentation of any "area of concern" (business unit).

Flux, again, is not a library but an architectural style that keeps everything up to date when state changes. It allows state to change instantaneously and on the fly, without compromising what needs to be rendered as a result. It does this by defining actions that can be performed by UI components; these actions, when tied to presentation data stores, let the UI "react" instantly to a data change. The key phrase is "presentation data", not actual data. An action may ultimately change real data in a database, but that is not what a store worries about. It only cares about the state of the data at the time it is rendered, and announces changes to UI components so they can re-render.
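The one-way flow can be sketched without any framework at all. The code below is my own simplified, Redux-flavoured illustration of the idea (action, dispatch, store, change announcement, view re-render), not Facebook's actual Flux API:

```javascript
// A minimal one-way data flow: actions are dispatched, a store computes
// new presentation state, and subscribed "views" are told to re-render.
function createStore(reducer, initialState) {
  let state = initialState
  const listeners = []
  return {
    getState: () => state,
    subscribe: (fn) => listeners.push(fn),
    dispatch: (action) => {
      state = reducer(state, action)       // compute new presentation state
      listeners.forEach((fn) => fn(state)) // announce the change
    }
  }
}

const store = createStore(
  (state, action) =>
    action.type === 'increment' ? { count: state.count + 1 } : state,
  { count: 0 }
)

// A stand-in for a UI component re-rendering on change
let rendered = ''
store.subscribe((state) => { rendered = 'count: ' + state.count })

store.dispatch({ type: 'increment' })
console.log(rendered) // "count: 1"
```

Notice that data only ever flows in one direction: nothing reaches into the store and mutates it directly; everything goes through `dispatch`.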

What does this mean for business owners? A great deal. The business owner can focus on their business data and its presentation in real time, and leave the techie stuff to the techies; whether it be a UX designer, architect, developer or tester, each part of the software development lifecycle is handled by the right person, allowing the business owner to focus on the features required from their area. Perhaps it is time to see things differently? As Facebook says, give it 5 minutes, and it might just make sense. After all, they deal with billions of users; why shouldn't we give what they have to say just 5 minutes?

Managing a cloud world

Managing processes and services in a cloud architecture is not a trivial task. It requires executing processes in dispersed locations, across dispersed virtual servers running on dispersed physical machines. It can be an operations manager's nightmare to organise all of these running processes while at the same time reducing costs for the organisation and still "keeping the lights on".

Developers, while they are developing an application, rarely think about this stuff, nor should they. They are focused on the functionality of the application. But the moment they want to deploy, operations becomes involved, and these questions are expected to be answered by the development teams. Whoa... hold on a minute! They are not operations! So a divide occurs, where operations is pitted against the developers. The developers work for the business unit and build to the business unit's requirements, but business people don't give a toss about any of this stuff. The great divide. Sure, architects were hired to bridge this gap, but even they didn't understand all of the issues on both sides.

In today's world, however, this gap is being bridged with a DevOps approach. Applications are now architected in such a way as to ensure that the interests of both operations and the business units are addressed. Business units want a solution that is always up, even in the event of a disaster, and rely on operations to ensure this happens. Operations don't know much about the business requirements (and these requirements are getting vaguer by the minute), so how can operations take over the support of a solution that addresses those needs? The only way is to make sure the application's design is good enough to allow for scalability, reliability, availability, and so on. Again, enter the architect.

Far too often, though, I have been in an organisation that questions the need for an architect from a cost perspective. And perhaps they are right. We don't live in a technology world that requires a navigator anymore (or do we?). The tools are there. Does this mean architects are no longer required? Hardly; with the right architecture, all of these gaps can be addressed in a single design.

The point of this article is to talk about process management, but what exactly does the above have to do with it? Process management is usually an operations task, isn't it? Well, no. It is everyone's task to get this part right, and it may not be as hard as you might think. While containerisation and microservices offer a new way of thinking, there are already tools today that ease the pain of setting it all up. In the old days, when only physical machines existed, it was just a matter of creating a VIP on a load balancer as the entry IP address (internal or external facing), but that led to a load balancer per application and created a nightmare in managing all of those web servers. If they only allowed traffic on port 80 or 443, that was OK, but as soon as a new port was introduced, in came the cavalry, from security to firewall configuration, and that exercise in a big corporate is not an easy road to navigate.

Enter the day of virtualisation: the day when multiple servers could run on a single large enterprise-grade machine, one that hosts multiple virtual machines given the appropriate amount of RAM and CPU/core capacity, and voila! A single physical machine can now host multiple virtual images, significantly reducing the physical footprint and allowing a single machine's power to be scaled horizontally. Need more CPUs or RAM? Simply buy another big mother of a server, and capacity expands tenfold.

From a funding perspective, however, this wasn't the best either. When an application came along that exceeded the capacity of a single host server, that project would be charged for a new host, and this cost was NOT TRIVIAL; or it was put into an OPEX arrangement where operations bought the server and sliced the costs across the business units. Still not ideal.

Today, the buzz is all about "containers". These are simple containers that leverage the host's OS but sandbox the application, so the application can be spread across multiple hosts, each with an API that can be accessed and load-balanced, and can live deep within an infrastructure without necessarily requiring ports to be opened on the firewall. With today's cloud technology, where one virtual server could be in the US and another in Asia-Pacific, each server can run a copy of the container, acting as a single application across multiple, diverse locations. This is a significant step forward, particularly when physical space is an issue for you. However, this new type of thinking does require a new form of architecture, one that manages processes and containers efficiently.

Enter process management: the ability to manage processes in exactly this way, regardless of where each process is running. Node.js is a great technology that offers efficient IO using the Chrome V8 engine, but that in itself is not enough. Node.js is single-threaded on a single core; how then can the platform be extended to use all of the cores and memory offered by a single virtual server? This is where process management comes in. It is particularly important when you are dealing with technologies such as Docker, where containerisation within a virtual server is the concern.

I have looked at a number of options in this space, and my two favourites are PM2 and StrongLoop. StrongLoop is the paid commercial solution and is probably best of breed, but PM2 is also a contender, as it manages processes within a single container. While each has its own strengths and weaknesses (namely functionality versus cost), StrongLoop leads this space given its ability to put a process manager on each container, communicate with each container (regardless of where it is deployed), and manage them all from a single UI. PM2 only offers process management within a single virtual machine, or a single container on one. But it still has its advantages.

Events as first class citizens

In today's programming terminology, many people focus on either services or products. Often this is because of the revenue generated from these two models (a sale comes from either a product or a service), and to some degree this is how it should be. After all, we do not live in a world where money doesn't exist; it does, and should be treated with respect.

Over the years, the product has had its place and still does, because it is a tangible thing for sale: perhaps a car, a bicycle, a laptop, or a TV. All of these are products with a sale value. A service, on the other hand, usually translates to actions performed by someone, which also have a value, and that value should be the basis of a "product" construct for the service. In traditional terms, a product is a tangible item for sale which, once sold, becomes the property of the buyer. A service is a little vaguer, because it usually depends on a "piece of time", or on a particular function offered by people; but in either case it can also be treated as a product for sale, where the product is that time, or that function performed by people.

In each case, both products and services are tangible things that people can see. It is easy to see that a car has four wheels, where each wheel is a product and the car using the four wheels is also a product. Total cost: the price of the car, OR the price of four wheels. The level of product sold depends on the customer and what that customer is trying to achieve: for the parts supplier, the sale price is for four wheels, hardly enough to drive a car, whereas for the car dealership the price is that of a fully assembled car (which in any event should cover the cost of the four wheels plus all the other components that make up the car).

A service, on the other hand, is a price put on the "handling" of something. In the car case above, it might be the commission the salesperson gets for the car, or the administrative handling of processing the orders. Over time, as demand for cars increases, the cost of the "products" required to build a car decreases (including services), as does the cost of the sub-parts used to assemble it, and hence, with some markup, so does the price of the car.

But a fundamental concern seems to be lost in both products and services. While each is treated as a specific, tangible cost, whether it is something you can hold or feel or a time-based thing where people's time is required to build or administer the build, each of these things changes over time. Companies often see these changes as arbitrary, and sometimes (more often than not) don't really consider the lifecycle of a product or service in their initial implementation. There are even organisations who don't care about the lifecycle at all, simply because projects are funded for an initial deliverable (both products and services), and hence the implementation only accounts for what the initial delivery costs. I can tell you with certainty that the design of a new car costs a whole lot more than a customer pays for the car once it has become a commodity.

However, everything has states. A product can be "new", "used", or "destroyed". A person's time can be "starting", "in progress", or "finished". These states outline the very nature of a lifecycle, and they are often overlooked. Architecture today usually focuses on the direct elements that get a project to implementation, and treats the lifecycle of the "thing" as not worth considering during the implementation phases... that is, until operations becomes involved. Then let the fireworks begin, often with the result that things are not approved for production, or, as a worst case, operations is seen as a blocker and forced to accept the lifecycle of a given product or service. These states reflect the lifecycle of any product or service, and should be considered as a whole to determine the total cost of ownership of the events that occur, giving the customer the ability to make the right choice.

Both groups, however, seem to be missing something. As the saying goes, when someone buys a drill, it isn't the drill they are buying, but the hole. In reality they want a hole, and the drill is merely the thing they use to get it. Value of the drill: $299. Value of the service to drill the hole: $50. Value of the event that required the hole: at least $250. But what if they need a new deck, which requires holes and much more, so they can enjoy a coffee on a sunny Sunday morning?

Events are first-class citizens. A pain point, or any need or want of any kind, is an "event". While an event can be translated into products and services to tally its cost, the event is the driver, not the products or services underneath. It is the event that requires costing, with regard to the other lifecycle events that might occur.

In the old days of IT, there was a strong push for things to be "data driven", meaning that the data drives the outcome. But this didn't really work that well, simply because those who understood data didn't really account for the services required to get that data, nor did they give much thought to the events that could affect it. More often it was about the data entity itself, and not the events that create the actual business needs. More recently there has been a push for a "service driven" approach, where services come first, but again, a service is only something which leads to an event outcome. Lately I have heard of an "event driven" approach, which I feel is a little closer to the mark, but perhaps still needs a bit more thinking.

Events are not just arbitrary; they cause demand at a point in time. When a new smartphone is released, the event could be to “keep up with the Joneses by getting the latest Apple or Android phone”, or perhaps the features of the phone are themselves in demand. Either way, the law of supply and demand fuels commerce, and after all, aren’t these just events in a customer’s lifecycle? Isn’t it the case that when supply is high and demand is low, the sales price should decrease? Or that when demand is high but supply is low, the price should increase? Aren’t both of these based on events that signal that demand is high or low, or on other events that, regardless of supply and demand, make up the want or need? If so, why is the IT world so interested in products or services, rather than in demand and supply, or more aptly, in the customer event lifecycle? This doesn’t make much sense to me.

Moreover, if I want a hole and am prepared to pay for one (whether I drill it myself by buying a drill, or hire a contractor with a drill to make the hole), shouldn’t that be the focus? If I want a new deck in my backyard, do I care about the holes, or would it be better to hire a company to build my deck for me? I am sure they have all the tools needed to get my deck done, holes drilled and everything else. The driver? I want a deck. That is the event.

I believe the IT world has missed the mark. It is so busy trying to justify its time, or perhaps even its products, that it never actually costs out the event itself: the fact that I want a deck. Perhaps if it did, the focus would be on that event and how to achieve it, rather than on the individual parts or services that make up the event outcome. Developers ask for requirements; operations ask for total cost of ownership, including their support services. As a customer wanting a deck, I don’t actually care about either of these things. I only care about the deck, and how much I need to pay year on year to keep it in a state where I can sit on it on a sunny day and enjoy a cup of coffee.

The event is the sales point, not the service or product that answers it. It is the business driver, the perceived outcome, and what the customer is buying.

As another example, if I buy a car, I want one that is of a particular brand, perhaps in a specific color, with a sunroof, power brakes, and perhaps fuel injected to increase performance and decrease fuel costs. The event: I want a new car.

I recently heard an executive say that this is what people want. He equated IT with ordering a car: rather than the “white car” he wanted, it was delivered in blue. The sunroof? Oops, sorry, didn’t think about that. Power brakes? Oh yes, they have them, but it costs $5K to keep them running. If I were a car buyer I would be pissed at this, and that was the IT executive’s point. While his points are all valid, he is talking about an end product that cost millions to design to production-ready, with a service catalogue in place to answer his event of needing a new car. How often the design element is overlooked in preference to the product or service.

Events are first class citizens, and are what should be costed/priced.

Let’s take the car buyer. His events are:

I need a new car

I need to service my car

I need to pay for breakdowns when they happen

I need to sell my car

I need to return to the first event

All of these events are lifecycle based. They have nothing to do directly with the car itself. This person needs a car for various reasons: to get to work, to pick up the kids after school, or to go away for the weekend. Those are the actual events. To satisfy them, the customer needs a car. In this new thinking the events are:

I need to get to work

I need to pick up my kids

I need to be able to go away for a weekend

Each event should be layered. The person who wants to get to work has a number of options, one of which could be a car; the same goes for the second and third events. Perhaps a car is his choice to achieve the outcome, but let me ask you a question: where is the value? (Car: $50K, plus the cost of parking, the cost of maintenance to keep it going, the cost of repairs due to unexpected events, and the cost of fuel. Alternative: the cost of a bus, train, or plane ticket.) What exactly is the cost of the latter? I am sure the cost of public transport versus the cost of a car can be estimated and compared. What exactly is he buying?
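The comparison above is simple arithmetic once each option carries its running costs. A sketch in JavaScript: the $50K car price comes from the text, but the yearly running costs and ticket prices below are invented figures for illustration only:

```javascript
// Total cost of answering the "get to work" event over a number of
// years, for each candidate solution. Figures are illustrative.
function totalCost(option, years) {
  return option.upfront + option.yearlyRunning * years;
}

const car = { upfront: 50000, yearlyRunning: 6000 }; // parking, fuel, maintenance (assumed)
const bus = { upfront: 0, yearlyRunning: 1800 };     // assumed monthly pass x 12

const carCost5y = totalCost(car, 5); // 50000 + 6000*5 = 80000
const busCost5y = totalCost(bus, 5); // 0 + 1800*5 = 9000
const cheaper = carCost5y < busCost5y ? 'car' : 'bus';
```

The point is not the numbers themselves but that both options answer the same event, so they can be compared over the whole lifecycle rather than at the moment of purchase.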

Enter events as first-class citizens. If the person decided that the best way to get to work was a car, and was prepared to pay for the fuel, parking, and everything else, then he has chosen to answer his event with a car, at a certain cost. If his answer was instead to take public transport, that too would have a cost, perhaps a cheaper one. Either way, the lifecycle events around each phase of his or her decision-making process make up the event, and this includes a total cost of ownership.

Moreover, each event has lifecycle states. In the case of the car, these states might be “new”, “in need of repair”, “in need of gas”, etc. In the case of the bus ticket, perhaps the only state is “in need of a new monthly pass”. In either case, the event is the driver, regardless of level.

To relate this back to IT, the event is a first class citizen. It is what people are buying a resolution for. Each event that occurs has a solution and a cost to implement, and when compared to each other event in that layer, there is also a cost over time that needs to be accounted for. After all, if it cost me a small amount to take a bus but the monthly amount to renew exceeds the cost of buying a car, why then would I choose bus as my answer to my event to get to work?

In the IT world, the event is king. The technical solution that addresses the event is arbitrary, as long as it fits the overall lifecycle of my events. If we instead looked at the lifecycle of required events and costed/priced them, perhaps we would not only address the customer’s needs (rather than simply our own), but also look at the problem end to end to ensure the entire lifecycle is addressed. This approach also leads to responding to events with the appropriate products and services to match, rather than pushing a specific product or service. That is a dynamic way of offering products and services in response to an event, rather than simply pushing products and services regardless of demand or supply.

To summarize: events are first-class citizens. If we in IT focus on customer events, perhaps we will be able to provide the appropriate products and/or services to answer each event. If we cost that event, perhaps this will ensure funding is appropriate across the lifecycle (after all, we need to look at all events in the lifecycle, not just one, for a proper costing decision). Perhaps if we do this, then the products and services (e.g. the drill, or the man with a drill) are not so important, as long as we can give the customer a hole.


Microservices seem to have the spotlight today. These are small, self-contained services that implement a small subset of business functionality. In the old days, a single monolithic application did everything it needed to for a given technology category (e.g. CRM), but this had a downside: all integration within the monolith was left up to that monolith application. This is not ideal when a business unit just wants to create a customer, or perhaps sell something to a customer. Traditionally it meant the monolith would manage the entire lifecycle of a customer, and unless you were working within that monolith application, integration became a problem.

Fast forward many years: these monolith applications grew, and new versions with new features were introduced by the single vendor. As enterprises grew and matured in their use of these applications, integration solutions were introduced that were either point-to-point or required significant product knowledge to integrate with from the outside. A lot of pressure was then put on these vendors to break their integration down, and some did by producing an API. But again, this API was only defined from a single vendor’s perspective (albeit based on feedback from multiple customers). While a single customer might have had a voice, depending on their size and how much money they gave the vendor, the vendor still had to think about all of its other customers before releasing a new version of its application, and this often led to vendor lock-in. For some vendors this is the agenda; for others it is simply a matter of evolution. Is the vendor really to blame?

As an enterprise, it is your responsibility to own your data, and the relationships to that data, not a vendor’s. But even when “enterprise” architects came along to address this problem, they weren’t seen as providing much value to individual business units, and their funding often came into question quickly. I know many very smart enterprise architects who were given the axe simply because no one could work out how to fund them.

Then again, is it up to a single person to define all of the enterprise’s data, relationships, and services? I am not so sure. An enterprise must take responsibility for its own data. It knows its business, and if a single application fits the bill perfectly (I have yet to see this happen), then why wouldn’t an enterprise use a single vendor? After all, given the number of customers the vendor might have, that vendor probably knows more about the business than the enterprise does. But sadly, and often, a vendor’s own interest kicks in to ensure its longevity.

Then enter architectural principles such as “reuse before buy before build”. A sound concept to ensure previous investment is reused, but it often leads to an enterprise having to fit a square peg into a round hole, or to lock in to yet another vendor for a partial solution. The last resort (heaven forbid) is that an enterprise actually takes final responsibility for its own data and relationships. Perhaps it is better to blame a vendor?

Enter microservices. These are small, lightweight services dedicated to a single business concept at the enterprise level. As an example, the concept of “customer” in a CRM system may not quite “fit” the definition of a customer within the enterprise. Does this mean the vendor application’s view of a customer isn’t suitable? Perhaps not, but I would suggest that it doesn’t really address the enterprise’s own perspective of who its customers are. Sure, there are commonalities, such as cold/warm/hot leads, communication with a customer, or putting a customer into a “funnel” to try to expedite sales. But none of these fit 100% with any one organisation’s view of “customer”. Microservices are “bespoke” services (OMG) intended to represent the organisation’s 100% view of a customer, and they can be written in a week, so I do suppose these services are something to be afraid of. Or are they? These services can be built quickly, and can offer everything around a particular enterprise-owned concept, like a customer or an incident.
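As a sketch of such a bespoke service: a tiny module that owns the enterprise’s own definition of “customer”, with the vendor’s record translated at the boundary. The field names and the vendor-mapping function below are assumptions for illustration, not any real CRM’s schema:

```javascript
// The core of a bespoke "customer" microservice: the enterprise owns
// the definition; vendor records are mapped into it at the edge.
const customers = new Map();
let nextId = 1;

function createCustomer({ name, leadStatus = 'cold' }) {
  const customer = { id: nextId++, name, leadStatus };
  customers.set(customer.id, customer);
  return customer;
}

// Adapter for a hypothetical CRM vendor record. If the vendor changes
// its schema, only this one function needs to change.
function fromVendorRecord(crmRecord) {
  return createCustomer({
    name: crmRecord.full_name,            // vendor's "full_name" -> our "name"
    leadStatus: crmRecord.temperature,    // vendor's "temperature" -> our "leadStatus"
  });
}

const c = fromVendorRecord({ full_name: 'Jane Doe', temperature: 'warm' });
```

The design point is that the enterprise’s definition lives in `createCustomer`, while the vendor’s view is quarantined inside the adapter, so throwing away or swapping the vendor does not ripple through the rest of the code.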

So, let’s challenge the thinking of the past. Is bespoke code the problem? Perhaps. I agree that if a vendor “owns” your definition of customer, then it is easier to “blame” them when they get it wrong in your particular context. Then again, you own your customers; your vendors don’t. Why would you possibly outsource this definition?

Using bespoke code to “own” your definition of customer means you control all workings related to “your customers”. And if these solutions can come into being in a week or two, and can be “thrown away” when you decide to change your definition, without forcing you to fit your definition into a single vendor’s view (based on their own multiple-customer perspective, which may or may not fit yours), why would you possibly lock in to a single vendor? If you do, I suppose it is better for you to decide this on the golf course with your vendor friends than to do what is right for your enterprise. Oops. Did I say that? If I didn’t, I assure you your teams are.

I am not highlighting any of this to pick apart decisions that have been made, but I would like to suggest that using vendors is fine for what they have to offer. But leave it at that! What I am actually suggesting is that you take ownership of your enterprise’s data and services, even if that is just a wrapper around your vendor’s offering. Why? To make sure that when the vendor changes with general opinion, you are not hamstrung into their solution. Microservices are the way to do this.

As an integration expert who has defined many enterprise integration strategies, I have been advising enterprises to do exactly this for years. Ten years ago this was framed around SOA, ESBs (single-vendor supplied), and heavy XML to enforce standards. But in all of my dealings with large corporates, two very important principles were always agreed on: 1) layer your services, and 2) ensure loose coupling. I am sure not one reader of this article would disagree. Today it is not about a single architectural “hub” owned by a single integration-specialist vendor, but rather about you owning your data and relationships in services that are small, reusable, versionable (where two versions can exist at the same time to absorb any point-in-time change by a vendor), and isolated enough that you can grow with your customers as well as your vendors, using disposable, throwaway services. This is what microservices are: the ability for your organisation to focus on the products and services that fit YOUR business, by creating small snippets around individual features or capabilities of your business, rather than around the technology.
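The “two versions at the same time” point can be sketched as a simple versioned registry, where v1 and v2 of a service coexist behind one lookup. The service names and contracts here are illustrative assumptions:

```javascript
// A versioned service registry: callers pin a version, so a new
// version can be introduced without breaking existing consumers.
const registry = new Map();

function register(name, version, impl) {
  registry.set(`${name}@${version}`, impl);
}

function resolve(name, version) {
  const impl = registry.get(`${name}@${version}`);
  if (!impl) throw new Error(`no implementation for ${name}@${version}`);
  return impl;
}

// v1 returns a plain string; v2 changes the contract to an object.
// Both remain callable side by side.
register('customer.greet', 'v1', (n) => `Hello ${n}`);
register('customer.greet', 'v2', (n) => ({ greeting: 'Hello', name: n }));

const v1 = resolve('customer.greet', 'v1')('Jane'); // "Hello Jane"
const v2 = resolve('customer.greet', 'v2')('Jane');
```

Because old consumers keep resolving v1 while new ones adopt v2, a vendor-driven contract change can be absorbed gradually, and v1 can be thrown away once nothing depends on it.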

Happy coding.