A Very Good WordPress Theme – Beaver Builder

The best WordPress theme I have ever come across is Beaver Builder. It is so good that I have dropped all other themes. What you can do with Beaver Builder in a matter of minutes, using its drag-and-drop page builder, puts all other themes to shame.

With Beaver Builder you can customise just about anything you want in a theme, from page layouts to pre-defined templates for both pages and content. Its Page Builder will get you to the point of building a professional WordPress website without all the headaches of learning code and CSS files.

Check out Beaver Builder, a strong recommendation from Lowery.com


Reactive Programming – managing streams and application state

Reactive Programming is all the buzz at the moment. It is similar to the iterator pattern, where you have a collection of values and iterate over them to obtain each value and make decisions on it. But iteration is a "pull" paradigm, and it often costs more CPU cycles and memory because the entire collection must be held while you evaluate it.

Reactive Programming uses the concept of an "observable". Basically this means that whenever you have a value of some sort, whether on its own or part of a collection, you can observe changes to that value and have those changes "pushed" to an observer. This is far more powerful, because it can be done with "stream" based code: for every value, an observer function handles the change, whether for a single data value or for an entire collection (one value at a time), without holding the collection in memory or burning CPU cycles to iterate.

In effect, it is an event based paradigm on a single value state change. If you have 1000 changes to a value over time, rather than holding onto a collection in memory and iterating over the list, you can respond to each change one at a time, updating any possible dependencies. Whether these dependencies on a value change refer to updating the UI, or restarting a server, each value change triggers the corresponding listener to update the dependencies.

Observable programming is, in effect, much more efficient than iterator programming. Similar to promises, it allows code to execute in a controlled manner on each of these changes, but it also has an option to cancel any pending dependency updates with some very simple function calls that respond to an observed change.

To start at the very beginning, let me explain what an "observable" is. It is simply an object that produces one or more values over time. When you subscribe to it, you provide an "observer": a set of handlers comprising a "next" function (similar to fetching the next item from a collection with an iterator), a "complete" function (fired after the last value has been delivered successfully), and an "error" function for when something goes wrong.
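The subscribe/next/complete/error shape described above can be sketched in a few lines of plain JavaScript. This is a hand-rolled illustration following the common naming convention, not any particular library's API:

```javascript
// A minimal, hand-rolled observable in plain JavaScript.
function Observable(producer) {
  return {
    subscribe(observer) {
      producer(observer)   // hand the observer's handlers to the producer
    }
  }
}

// A producer that pushes three values, then signals completion.
const numbers = Observable(observer => {
  const values = [1, 2, 3]
  values.forEach(n => observer.next(n))  // values are "pushed", one by one
  observer.complete()
})

const seen = []
numbers.subscribe({
  next: n => seen.push(n),              // fired once per value as it arrives
  complete: () => seen.push('done'),    // fired after the last value
  error: err => seen.push(err)          // fired if something goes wrong
})

console.log(seen)  // [1, 2, 3, 'done']
```

Notice that no collection is ever held by the subscriber: each value is handled the moment it is pushed.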

It is similar to a promise, but differs in that you can cancel any pending operation (a promise only responds with an "on success" or "on failure" handler; more specifically, once a promise is in progress it can't be stopped or discarded). An observable can handle this by first injecting data into a stream, filtering on specific data, and combining with other streams before the actual success handler is invoked.

Using observables is much more efficient (since they behave like streams and don't hold whole collections), and particular operations can act across multiple streams. This is useful for combining streams, filtering data along the way, and even stopping at an event like a "mouseup" before the rest of the stream reaches the success handler. It is especially useful for things that involve I/O (like a server or database request, an animation, or a render call once the streams have been processed). In essence it is a much better way to handle your application's state changes, ensuring things happen exactly when you want them to.

For example, a basic observable pattern might be to watch for a mouse down, call the mouse move handler on each mouse move, and on a mouse up stop processing all events (even those in progress), allowing you to perform final cleanup on a drag operation. Similarly, it can be used for reading data: retrieve rows from one table, filter them against another table, inspect particular records along the way, and finish the operation only once all the data has been retrieved.
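The drag pattern can be sketched without a DOM by simulating the event streams by hand. The EventStream and takeUntil helpers below are illustrative stand-ins for what a reactive library would provide:

```javascript
// A tiny event stream: subscribe registers a listener, emit pushes a value.
function EventStream() {
  const listeners = []
  return {
    subscribe(fn) { listeners.push(fn) },
    emit(value) { listeners.slice().forEach(fn => fn(value)) }
  }
}

// takeUntil: forward events from `source` until `stopper` emits once.
function takeUntil(source, stopper) {
  const out = EventStream()
  let stopped = false
  stopper.subscribe(() => { stopped = true })
  source.subscribe(value => { if (!stopped) out.emit(value) })
  return out
}

// Simulated mouse streams standing in for real DOM events.
const mouseMove = EventStream()
const mouseUp = EventStream()

const moves = []
takeUntil(mouseMove, mouseUp).subscribe(pos => moves.push(pos))

mouseMove.emit({ x: 1, y: 1 })
mouseMove.emit({ x: 2, y: 2 })
mouseUp.emit({})               // drag ends; later moves are ignored
mouseMove.emit({ x: 9, y: 9 })

console.log(moves.length)  // 2
```

The "mouseup" cleanly terminates the stream of moves, which is exactly the cancellation behaviour a promise cannot give you.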

Thinking in streams can be daunting at first, but it doesn't have to be. Working this way does take a different way of thinking: rather than thinking in sequential patterns (if-then-else), you think in terms of what outcome you want, what data you need to get there, and what operations you need to perform along the way. A little hard to grasp at first, but with a little practice your code fires only when it has to, holds no large amounts of data, and stays uncomplicated. Better yet, it can be thought of as writing synchronous-looking code for asynchronous work, which helps with debugging.

There are two things you need to be mindful of when using Reactive Programming: working with changes to state data, and the "side effects" of that data. An example of a side effect would be updating a database or rendering to a screen. These are not to be confused with responding to events over time, mapping, and filtering data to determine when the "next" handler fires and when a chain of handlers is considered "complete". A side effect is simply that: a side effect, applied after you have processed your streams.

To distinguish between stream handling and side effects, think of it as "what do I do mid-stream using pure functions (eg a value computed from first and last name)" versus "what do I do with the final stream state (eg update a contact record in a database, or a contact details page in a UI)".
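That split can be made concrete with a small sketch: the computed full name lives in a pure map step mid-stream, and the single subscriber at the end is the only place a side effect happens. The stream helpers here are hand-rolled for illustration, not a real library API:

```javascript
// A tiny stream with a pure map step and a side-effecting subscribe step.
function Stream() {
  const listeners = []
  return {
    subscribe(fn) { listeners.push(fn) },     // side effects live here
    emit(v) { listeners.forEach(fn => fn(v)) },
    map(fn) {                                 // pure, mid-stream transformation
      const out = Stream()
      this.subscribe(v => out.emit(fn(v)))
      return out
    }
  }
}

const contacts = Stream()
const rendered = []

contacts
  .map(c => ({ ...c, fullName: `${c.first} ${c.last}` }))   // pure computation
  .subscribe(c => rendered.push(`<li>${c.fullName}</li>`))  // side effect (the "UI")

contacts.emit({ first: 'Ada', last: 'Lovelace' })
console.log(rendered[0])  // "<li>Ada Lovelace</li>"
```

Everything up to the final subscribe could be tested in isolation, precisely because it is pure.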

To summarise all of this: Reactive Programming allows you to "react" to mutations, even in an immutable way, and fire only the code you want, when you want it fired. This can lead to a ten-fold improvement in performance, because you are optimising memory, CPU cycles, and I/O along the way.

Some libraries are available today to help you along. While some of these libraries are very good, they will require you to think differently in your approach, but the effort is well worth it on your journey and will bring substantial benefits.

Learn how to program using Reactive Programming today. You will be very glad you did.

Creating dynamic classes in javascript ES6

I have often wondered whether it is possible to dynamically create classes on the fly in javascript. Of course, using prototype inheritance, it has always been possible to do this simply by adding prototype methods to an object. But what if the class you are creating at runtime doesn't actually know what methods it has to implement until an external source provides them? Further, in today's day and age, using ES6 or ES7 with Babel, how can these classes be dynamically created on the fly from external implementations?

As it turns out, it is actually quite easy. Here is a code example:

let mymethod = 'myMethod'

class myClass {
  constructor() {
    console.log('class constructed')
  }

  [mymethod]() {
    console.log('dynamic method called')
  }
}

var my = new myClass()
my.myMethod()

This is great: it is a way of creating methods in a class without knowing the name of the method up front, taking it from a variable instead. But this is still fairly limited, and why bother if the implementation of the method is already known in the file (via require/import, or as a function in the javascript file)? The real challenge is when the actual implementation of the method is stored elsewhere. One way is to add methods on the prototype chain. This can be done like so:

var methods = {
  increment: function () { this.value++; },
  display: function () { console.log(this.value); }
};

function addMethods(object, methods) {
  for (var name in methods) {
    object[name] = methods[name];
  }
}

var obj = { value: 3 };
addMethods(obj, methods);
obj.display();    // "3"
obj.increment();
obj.display();    // "4"

Another way is to add methods after the fact to the prototype chain. Note that a plain object literal has no "prototype" property of its own, so the method has to go on a constructor's prototype, like this:

function MyObj(value) {
  this.value = value;
}

MyObj.prototype.mynewmethod = function (myparam) {
  // do something, return something
};

var obj = new MyObj(3);
But this is based on ES5 javascript, not ES6. While ES6 is merely "syntactic sugar" that transpiles to ES5 (using Babel, Traceur, etc), it is still valid code. What the earlier snippet does is take an object full of functions (keyed by dynamic names) and copy them onto a target object. This is a good way to add functions to an existing object.

But what if you don't know the method's implementation at build time (eg you download a method or function from somewhere else)? This is where things get hairy. Because you don't control the "source" of the method implementation, you have to trust that the source is trustworthy; security issues come largely into play here. Javascript is a great language, especially with the ES6 "sugar", but if you don't understand the prototype chain it is a bit of a double-edged sword.
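To make the "implementation stored elsewhere" case concrete, here is one sketch of attaching a method whose body arrives as text at runtime (as if fetched from a server). It uses the standard Function constructor to compile the string; the remoteSource object and its fields are hypothetical stand-ins for whatever payload you would actually download, and this is exactly where the trust problem above bites, so only do it with sources you control:

```javascript
// Pretend this object was downloaded from an external source.
const remoteSource = {
  name: 'greet',
  params: ['who'],
  body: 'return "hello " + who + " from " + this.owner'
}

class Container {
  constructor(owner) { this.owner = owner }
}

// Compile the downloaded body and hang it on the prototype chain.
// SECURITY: new Function executes arbitrary code; trust the source.
Container.prototype[remoteSource.name] =
  new Function(...remoteSource.params, remoteSource.body)

const c = new Container('Lowery')
console.log(c.greet('world'))  // "hello world from Lowery"
```

Because the function lands on the prototype, every existing and future Container instance picks it up, and `this` inside the downloaded body resolves to the instance at call time.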

Example of a React Isomorphic application using ES6 and Facebook React

The world of IT changes all the time. New technologies evolve, sometimes overnight, that have the potential to change the world. Facebook React and Flux are two examples; these technologies are still evolving, but they have matured enough for me to write a how-to guide on using them to create pages that are rendered either on the server or in the browser. Why is this important? Because it means you can finally create real-time applications that not only work in both places, but will also be seen by the major search engines.

Never before has the time been so right for us as developers to create these kinds of applications, exposed to the web as a whole, without network latency being an issue. This helps a lot with performance, but it also helps with code maintenance. Why? Because you only require developers who know javascript, the one language that can do both. For more information on isomorphic applications, please refer to my earlier post.

Now on to bigger and better things: an example. I have been working with NodeJS, React, Flux and SPA apps for quite a while. Frameworks like Angular came onto the scene and gave us high-performance SPAs, but they leave a lot to be desired, particularly on the learning curve. React and Flux from Facebook change all that, by allowing developers to create UI "components" with their own state, and "stores" holding the data each component needs, with the component re-rendered on every change in a high-performance manner.

New Business Applications using Javascript, SPA, and Isomorphic applications

With the invention of the SPA (Single Page Application), javascript has become the primary language of choice, simply because it is really the only programming language that browsers understand. This single language has surpassed all others, given the massive exposure browsers give it on the web.

In the past, javascript was referred to as a simple or non-language because of the way it does things. Features like polymorphism, inheritance, and abstraction, familiar to most developers, don't appear to be in the language. This isn't true, however: javascript is a true object-oriented language; it just does things differently.

Enter ES6. This is the newly accepted standard for javascript. The way it does things is a lot closer to traditional programming languages, even though much of it is just syntactic sugar on top of the original language to make life a little easier for developers. ES6 has only recently been approved as an official release, and it will take time for browser vendors to catch up with the change, given that their products are used by billions of people worldwide.

However, this isn't a real problem today. There are transpilers available that will translate ES6 code to ES5. Is this a performance issue? Not at all. As a matter of fact, Java does a similar thing by compiling java classes into bytecode, and .NET does it by translating C# classes into IL for the CLR. At the end of the day, does it really matter? How a machine executes your code is quite different from how you write it with the myriad of developers you have. Maintainability of that code, and the ability to react to issues, is much more important, given that bugs and issues cost your teams the most time.

Personally, I really like Node/NodeJS. It allows developers to create code using javascript on the server. It is based on the V8 engine from Google Chrome, and offers a set of APIs that run server side (rather than client side), which means you as a developer don't have to maintain multiple teams for multiple projects written in different languages. Want a web server that listens on http, https, and web sockets? That can be done in a very small amount of code.

Where these technologies really shine, though, is in sharing javascript code between the server and the browser. While there are some differences between javascript on the server and in the browser, the language is the same, and new applications that support both are arriving very quickly. The term for these applications is "isomorphic": the same code in the same file can be executed in either environment.

This is a BIG deal for project teams, whether at a small mom and pop shop, or a huge enterprise.

The focus of today's blog post is to highlight these new technologies, especially code that can be executed on either side and still run the same way, server or browser. A lot of interest has been put into SPA apps, and for good reason: they perform almost in real time, whereas most server-side applications need a request and response for every page re-render. Server-side rendered applications have traditionally been slow performance-wise, and they leave change handling up to the developer (which means it can be buggy) to sort out. Not ideal, yet this way of creating web applications has been the "norm" for quite some time.

Then came "ajax". This used XML to send "snippets" of data to the server and get a response without re-rendering the page. Libraries like jQuery came onto the scene to step towards a more real-time web client experience. However, XML is verbose, and ajax was really just a bandaid over the performance issue.

At the time it worked quite well. If an up-to-date bit of data was needed for re-presentation, a simple ajax call obtained it and let the client-side application work out the rendering. A great start, but not quite enough.

However, SPA applications changed all that. Developers could now build applications that run in a browser in real time without worrying about the time it takes to re-render a page, simply because page renders weren't needed from the server: the client browser had all the code required to perform these changes. This was quite an advance for web-based applications. It means real-time applications can be built even though the browser doesn't have direct access to server-controlled data.

The issue here is that web browsers only care about data that is to be presented, and javascript has traditionally been developed and used as a way to work with the user. As a language it still has everything required to be fully fledged, but because it does things "differently" from server-side languages, it was difficult to work with and often left developers pulling their hair out. Well, with ES6, React, Flux, and Node, this isn't an issue anymore. A whole new developer experience has emerged, one I believe will be valuable not only to IT professionals but to business units as well. It translates to cutting costs and timeframes, and allows applications that perform incredibly fast to be built in weeks rather than months or years. Try that on for size: your development resources are cut down, and can turn your business needs around in mere months. You wanted the IT world to listen? We have, by making things much faster with fewer resources. Perhaps it is time for you to start investing in IT again, and to stop fearing it just because you don't understand it.

In my next blog post I will provide an example that can be used by developers to do just this. Stay tuned.

An introduction to React and Flux by Facebook

React is a great new library from Facebook. It takes a little getting used to, and sometimes gives a little "yikes" factor, simply because it challenges the normal and accepted way of constructing applications. The biggest "yikes" is that it challenges the Model-View-Controller concept by implementing components that include both javascript and HTML. Traditional systems, whether server or client rendered, usually frown on mixing the presentation layer with the business layer; with an MVC-style approach to web development, the areas of concern are separated into presentation, business logic, and data models.

However, React challenges this idea only to the extent of declaring that these areas of concern have traditionally been drawn around the development team (where there are UI designers, business logic designers and data designers). Their challenge is timely, especially in an Agile development world, because most designers don't even code to the HTML or stylesheet level, but leave that up to developers.

Also, in Facebook's view the areas of concern are better drawn around business functionality, and I tend to agree. If a component is required, say for a messaging system, then there is a clear business owner for it, and to meet their requirements the messaging system will have a particular UI and way of interacting with users, over and above who is employed to build each part of the system. Since developers understand HTML and stylesheets anyway, why separate these concerns between a UI designer (who is more focused on mockups) and the business logic? Further, if business logic is needed just for presentation, why should a UI designer know or care about some programming language to implement the business rules they define? They shouldn't.

To further this concept, Facebook's React framework challenges the MVC model because they feel it just isn't scalable. Again, I tend to agree, simply because an MVC codebase can get out of hand over time, as dependencies from one MVC to another increase and cause a chain reaction of support issues.

To address these concerns, Facebook came up with React, a great framework that makes development a lot easier by implementing UI components that maintain state. This is the state of the overall UI, not the state of the data, so mixing HTML fragments into a React component (something developers do anyway) is not such a bad thing. In fact it is a great thing, because it lets the developer think in UI components, keeping the UI logic within each component and not mixing it up with actual business logic.

Enter Flux. Flux is yet another thing the Facebook folks have created. It is not a library like React, but rather an architecture that lets developers separate concerns within a project team. Unlike the traditional MVC style, they feel that data state should flow one way: from the source of the data change, through to the actual presentation, and then to the data layer to persist the change. Hey, not bad at all. This removes the complexity of rendering up-to-the-second changes in the data. They do so by re-rendering the entire UI component. Oops, yet another "yikes"!

But, like other high-performance applications, the re-render is only a delta change rather than an entire rewrite. Enter the React virtual DOM. As in high-performance systems like game engines, rendering only the part of the screen that actually changes increases performance ten-fold. React allows this by maintaining a virtual DOM, and when it comes to expensive DOM operations it does a diff to find the delta and renders only that. This means the data state, and even the presentation state, remains intact immediately after a change, and the rendering system re-renders just those UI changes by way of the delta. This is a developer's dream, since they don't have to write complicated logic to keep up with changes to either the data or the presentation of any area of concern (business unit).
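The diff idea can be illustrated with a toy version: compare two trees and collect only the nodes whose text changed, so only those would need re-rendering. This is grossly simplified (React's actual reconciliation handles keys, component types, attribute changes and much more), but it shows the principle:

```javascript
// Walk two trees in parallel and record a patch for each changed node.
function diff(oldNode, newNode, path, patches) {
  if (oldNode.text !== newNode.text) {
    patches.push({ path, text: newNode.text })
  }
  const oldKids = oldNode.children || []
  const newKids = newNode.children || []
  for (let i = 0; i < Math.max(oldKids.length, newKids.length); i++) {
    diff(oldKids[i] || {}, newKids[i] || {}, path + '.' + i, patches)
  }
  return patches
}

const before = { text: 'page', children: [{ text: 'hello' }, { text: 'count: 1' }] }
const after  = { text: 'page', children: [{ text: 'hello' }, { text: 'count: 2' }] }

const patches = diff(before, after, 'root', [])
console.log(patches)  // [ { path: 'root.1', text: 'count: 2' } ]
```

Only one node out of three produced a patch, so only one real DOM operation would be needed, even though the whole "component" was logically re-rendered.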

Flux is not a library, but rather an architectural style that simply keeps up when state changes. It allows state to change instantaneously and on the fly, without compromising what needs to be rendered as a result. It does this by defining actions that UI components can perform; these actions, when tied to presentation data stores, can "react" instantly on a data change. The keyword there is "presentation data", not actual data. A Flux application may still change actual data in a database, but that is not what a Store worries about: it only cares about the state of the data at the time it is rendered, and it announces changes to UI components so they can re-render.
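The one-way flow (action, dispatcher, store, change announcement, re-render) can be sketched in plain JavaScript. This is hand-rolled for illustration; Facebook's actual Dispatcher API differs in its details:

```javascript
// Dispatcher: every registered callback sees every action.
const dispatcher = {
  callbacks: [],
  register(cb) { this.callbacks.push(cb) },
  dispatch(action) { this.callbacks.forEach(cb => cb(action)) }
}

// Store: holds presentation state and emits a change event.
const messageStore = {
  messages: [],
  listeners: [],
  onChange(fn) { this.listeners.push(fn) },
  emitChange() { this.listeners.forEach(fn => fn()) }
}

// The store updates itself in response to actions, never directly from a view.
dispatcher.register(action => {
  if (action.type === 'ADD_MESSAGE') {
    messageStore.messages.push(action.text)
    messageStore.emitChange()
  }
})

// A "view" that re-renders on every store change announcement.
let rendered = ''
messageStore.onChange(() => {
  rendered = messageStore.messages.join(', ')
})

dispatcher.dispatch({ type: 'ADD_MESSAGE', text: 'hello' })
dispatcher.dispatch({ type: 'ADD_MESSAGE', text: 'world' })
console.log(rendered)  // "hello, world"
```

Note the direction: views fire actions, the store mutates in response, and the view only ever reads the store when told it changed. Nothing flows backwards.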

What does this mean for business owners? A great deal. It means the business owner can focus on their business data, and on presenting that data in real time, and leave the techie stuff to the techies, whether that's a UX designer, architect, developer or tester; each part of the software development lifecycle is handled by the right person, allowing the business owner to focus on the features their area requires. Perhaps it is time to see things differently? As Facebook says, give it 5 minutes, and it might just make sense. After all, they deal with billions of users; why shouldn't we give what they have to say just 5 minutes?

Events as first class citizens

In today's programming terminology, many people focus on either services or products. Often this is because of the revenue generated from these two models (a sale comes from either a product or a service), and to some degree this is how it should be. After all, we do not live in a world where money doesn't exist; it does, and it should be treated accordingly.

Over the years, a product has had its place and still does, because it is a tangible thing for sale: a car, a bicycle, a laptop, a TV. All of these are products with a sale value. A service, on the other hand, usually translates to actions done by someone, which also has a value, and should form the basis of a "product" construct for that service. In traditional terms, a product is a tangible item for sale that, once sold, becomes the property of the buyer. A service is a little more vague, because it usually depends on a piece of time, or on a particular function offered by people; but in either case it too can be treated as a product for sale, where the product is that time or that function.

However, in each case, both products and services are simply tangible things that people can see. It is easy to see that a car has four wheels, where each wheel is a product, and the car using the four wheels is also a product. Total cost: the price of the car, or the price of four wheels. The level of product sold depends on the customer and what that customer is trying to achieve: for the wheel manufacturer, the sale price is for four wheels, hardly enough to drive a car; for the car dealership, it is the price of a fully assembled car (which in any event should cover the cost of the four wheels plus all the other components that make up the car).

A service, on the other hand, is a price put on the "handling" of something. In the car case above, it might be the commission the salesperson gets for the car, or perhaps the administrative handling of processing the orders. Over time, as demand for cars increases, the cost of the "products" required to build a car decreases (including services), as does the cost of the sub-parts used to assemble it, and hence, with some markup, so does the price of the car.

But a fundamental concern seems to be lost in both products and services. While each is treated as a specific, tangible cost, whether it is something you can hold and feel or a time-based thing requiring people's time to build or administer, each of these things changes over time. Companies often see these changes as arbitrary, and, more often than not, don't really see the lifecycle of a product or service in their initial implementation. There are even organisations that don't care about the lifecycle at all, simply because projects are based on an initial deliverable (both products and services), and so the implementation only accounts for what the initial delivery costs. I can tell you with certainty that the design of a new car costs a whole lot more than a customer pays for that car once it has become a commodity.

However, everything has states. A product can be "new", "used", or "destroyed". A person's time can be "starting", "in progress", or "finished". These states outline the very nature of a lifecycle, and they are often overlooked. Architecture today usually focuses on the direct elements that get a project to implementation, and treats the lifecycle of the "thing" as not worth considering in the implementation phases... that is, until operations becomes involved. Then let the fireworks begin, often ending with things not approved to go from the implementation project into production, or, in the worst case, with operations seen as a blockage and forced to accept the lifecycle of a given product or service. These states reflect the lifecycle of any product and service, and should be treated as a whole to determine the total cost of ownership of the events that occur, giving the customer the ability to make the right choice.

However, both groups seem to be missing something. As the saying goes, when someone buys a drill, it isn't the drill they are buying, but the hole. In reality they want a hole, and the drill is merely the thing they use to get it. Value of the drill: $299. Value of the service to drill the hole: $50. Value of the event that required the hole: at least $250. But what if they really want a new deck, which requires holes and much more, so they can enjoy a coffee on a sunny Sunday morning?

Events are first class citizens. A pain point, or any need or want of any kind, is an "event". While an event can be translated into products and services to help cost it, the event is the driver, not the products or services underneath. It is the event that requires costing, with regard to the other lifecycle events that might occur around it.

In the old days of IT there was a strong push for things to be "data driven", meaning that the data drives the outcome. This didn't work all that well, simply because those who understood data didn't really account for the services required to get that data, nor did they give much thought to the events that could affect it. It was more about the data entity itself than about the events that create the actual business needs. More recently there has been a push for a "service driven" approach, where services come first, but again, a service is only something that leads to an event outcome. Lately I have heard of an "event driven" approach, which I feel is a little closer to the mark, but perhaps still needs a bit more thinking.

Events are not just arbitrary; they cause demand at a point in time. When a new smart phone is released, the event could be "keep up with the Joneses by getting the latest Apple or Android phone", or perhaps the features of the phone are themselves in demand. Either way, the law of supply and demand fuels commerce, and aren't these just events in a customer's lifecycle? Isn't it the case that when supply is high and demand is low, the sales price should decrease? Or that when demand is high but supply is low, the price should increase? Both are based on events that drive demand up or down, or on other events that make up the want or need regardless of supply and demand. If so, why is the IT world so interested in products or services, rather than in demand and supply, or, more aptly, the customer event lifecycle? This doesn't make much sense to me.

Moreover, if I want a hole and am prepared to buy one (whether I drill it myself with a purchased drill or hire a contractor with a drill to make the hole), shouldn't that be the focus? If I want a new deck in my backyard, do I care about the holes, or would it be better to hire a company to build my deck for me? I am sure they have all the tools needed to get my deck done, holes drilled and all. The driver? I want a deck. That is the event.

I believe the IT world has missed the mark. They are so busy trying to justify their time, or perhaps their products, that they never cost out the event itself: the fact that I want a deck. Perhaps if they did, the focus would be on achieving that event, rather than on the individual parts or services that make up its outcome. Developers ask for requirements; operations ask for total cost of ownership including their support services. As a customer wanting a deck, I actually don't care about either of these things: I only care about the deck, and how much I need to pay, year on year, to keep it in a state where I can sit on it on a sunny day and enjoy a cup of coffee.

The event is the sales point, not the service or product to answer it. It is the business driver, and perceived outcome, and what the customer is buying.

As another example, if I buy a car, I want one that is of a particular brand, perhaps in a specific color, with a sunroof, power brakes, and perhaps fuel injected to increase performance and decrease fuel costs. The event: I want a new car.

I recently heard an executive say that this is what people want. He equated IT to ordering a car: rather than the "white car" he wanted, it was delivered in blue. The sunroof? Oops, sorry, didn't think about that. Power brakes? Oh yes, they have them, but it costs $5K to keep them running. If I were a car buyer I would be pissed at this, and that was the IT executive's point. While his points are all valid, he is talking about an end product that took millions to design and bring to production readiness, with a service catalogue in place to answer his event of needing a new car. How often the design element is overlooked in preference to the product or service.

Events are first class citizens, and are what should be costed/priced.

Let’s take the car buyer. His events are:

I need a new car

I need to service my car

I need to pay for breakdowns when they happen

I need to sell my car

I need to return to the first event

All of these events are lifecycle based, and none of them have anything to do directly with the car itself. This person needs a car for various reasons, whether to get to work, to pick up the kids after school, or to go away for the weekend. Those are the actual events. To satisfy them, the customer needs a car. In this new thinking the events are:

I need to get to work

I need to pick up my kids

I need to be able to go away for a weekend

Each event should be layered. The person who wants to get to work has a number of options, one of which could be a car; the same goes for the second and third events. Perhaps a car is his choice to achieve the outcome, but let me ask you a question: where is the value? (Car: $50K purchase price, plus the cost of parking, the cost of maintenance to keep it going, the cost of repairs from unexpected events, and the cost of fuel. Alternative: the cost of a bus, train, or plane ticket.) What exactly is the cost of the latter? I am sure the cost of public transport versus the cost of a car can be estimated and compared. What exactly is he buying?

Enter events as first class citizens. If the person decided that the best way to get to work was a car, and was prepared to pay for the fuel, parking, and everything else, then he has chosen to answer his event with a car, at a certain cost. If his answer was instead to take public transport, that would also have a cost, perhaps a cheaper one. Either way, the lifecycle around each phase of the decision-making process is the event, and it includes a total cost of ownership.

Moreover, each event has lifecycle states. In the case of the car, these states might be “new”, “in need of repair”, “in need of gas”, and so on. In the case of the bus ticket, perhaps the only state is “in need of a new monthly pass”. In either case, the event is the driver, regardless of level.
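This lifecycle framing can be sketched as a tiny state model. The sketch below is illustrative only; the states and transitions are invented for the car example, not drawn from any real system:

```python
# Minimal sketch: lifecycle states for the car that answers the event
# "I need to get to work". The event is the first-class citizen; the car
# is just one answer to it, and it moves through states over time.

CAR_STATES = {
    "new": ["in need of gas", "in need of repair"],
    "in need of gas": ["new"],                    # refuelled, back in running order
    "in need of repair": ["new", "for sale"],
    "for sale": ["sold"],
    "sold": [],  # lifecycle complete: return to the first event, "I need a new car"
}

def can_transition(state, next_state, transitions=CAR_STATES):
    """Return True if next_state is a valid lifecycle transition from state."""
    return next_state in transitions.get(state, [])
```

Costing the event means costing each of these states, not just the purchase.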

To relate this back to IT, the event is a first class citizen. It is what people are buying a resolution for. Each event has a solution and a cost to implement, and when compared to every other event in that layer, there is also a cost over time that needs to be accounted for. After all, if it costs me a small amount to take the bus but the monthly renewals eventually exceed the cost of buying a car, why would I choose the bus as my answer to the event of getting to work?
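The bus-versus-car trade-off above is just a total-cost-over-time calculation. A back-of-the-envelope sketch, using hypothetical figures I have picked purely for illustration:

```python
def total_cost(upfront, monthly, months):
    """Total cost of ownership over a period: purchase price plus recurring costs."""
    return upfront + monthly * months

# Hypothetical figures over five years (60 months).
car = total_cost(upfront=50_000, monthly=600, months=60)  # fuel, parking, maintenance
bus = total_cost(upfront=0, monthly=200, months=60)       # monthly pass renewals

# The event "I need to get to work" is answered by whichever option costs
# less over the whole lifecycle, not by the product in isolation.
cheapest = "bus" if bus < car else "car"
```

With these made-up numbers the bus wins; with different renewal prices the answer flips, which is exactly why the event, not the product, is what should be costed.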

In the IT world, the event is king. The technical solution that addresses the event is arbitrary, as long as it fits the overall lifecycle of my events. If we instead looked at the full lifecycle of events and costed/priced them, perhaps we would not only address the customer’s needs (rather than simply our own), but also look at things end to end, ensuring the entire lifecycle is addressed. This approach also leads to responding to events with the appropriate products and services to match, rather than pushing a specific product or service. In other words, it leads to a dynamic way of offering products and services in response to an event, rather than simply pushing products and services regardless of demand or supply.

To summarize: events are first class citizens. If we, as IT, focus on the customer’s events, perhaps then we will be able to provide the appropriate products and/or services to answer each event. If we cost that event, perhaps this will ensure funding is appropriate across the lifecycle (after all, we need to look at all events in the lifecycle, not just one, to make a proper costing decision). Perhaps if we do this, then the products and services (e.g. the drill, or the man with a drill) are not so important, as long as we can give the customer a hole.


Microservices seem to have the spotlight today. These are small, self-contained services that do a small subset of business functionality. In the old days, a single monolithic application did everything it needed to do for a given technology category (e.g. CRM), but the downside was that all integration within the monolith was left up to that monolith application. This is not ideal when a business unit just wants to create a customer, or perhaps sell something to a customer; traditionally it meant the monolith would manage the entire lifecycle of a customer. Unless you were working inside that monolith application, integration became a problem.

Fast forward many years: these monolith applications grew, and new versions with new features were introduced by the single vendor. As enterprises grew and matured in their use of these applications, integration solutions were introduced that were either point to point, or required significant product knowledge to integrate with from the outside. A lot of pressure was then put on these vendors to break their integration down, and some did by producing an API. But this API was only defined from a single vendor’s perspective (albeit based on feedback from multiple customers). While a single customer might have had a voice, depending on their size and how much money they gave the vendor, the vendor still had to think about all of their other customers before releasing a new version of their application, and this often led to vendor lock-in. For some vendors this is their agenda; for others it is simply a matter of evolution. Is the vendor really to blame?

As an enterprise, it is your responsibility to own your data, and the relationships within that data, not a vendor’s. But even when “enterprise” architects came along to address this problem, they weren’t seen as providing much value to individual business units, and the funding often came into question quickly. I have known many very smart enterprise architects given the axe simply because no one could work out how to fund them.

Then again, is it up to a single person to define all of the enterprise’s data, relationships, and services? I am not so sure. An enterprise must take responsibility for its own data. It knows its business, and if a single application fits the bill perfectly (I have yet to see this happen), then why wouldn’t an enterprise use a single vendor? After all, that vendor probably knows more about the business than the enterprise does, given the number of customers the vendor might have. But sadly, and often, a vendor’s own interests kick in to ensure longevity.

Then enter architectural principles such as reuse before buy before build. A sound concept to ensure that previous investment is reused, but one that often leads to an enterprise having to fit a square peg into a round hole, or lock in to yet another vendor for a partial solution. The last resort (heaven forbid) is that an enterprise should actually take final responsibility for its own data and relationships. Perhaps it is better to blame a vendor?

Enter microservices. These are small, lightweight services dedicated to a single business concept at the enterprise level. As an example, the concept of a customer in a CRM system may not quite “fit” the definition of a customer within an enterprise. Does this mean that the customer, within the context of a vendor’s application, isn’t suitable? Perhaps not, but I would suggest that that customer view doesn’t really address the needs of the enterprise from its own perspective of who its customers are. Sure, there are commonalities, such as cold/warm/hot leads, communication with a customer, or even how to put a customer into a “funnel” to try to expedite sales. However, none of these “fit” 100% with any organisation’s own definition of “customer”. Microservices are “bespoke” services (OMG) intended to represent the organisation’s 100% view of a customer, and they can be written in a week. I do suppose these services are something to be afraid of. Or are they? These services can be built quickly, and can offer everything around a particular enterprise-owned concept, like a customer, or an incident.
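A bespoke customer service along these lines really can be tiny. The sketch below is a minimal, in-process stand-in (all names are hypothetical; a real service would sit behind an HTTP API) showing an enterprise owning its own definition of a customer, funnel included:

```python
import uuid

class CustomerService:
    """A tiny, bespoke service owning the enterprise's own definition of
    'customer', independent of any vendor's schema."""

    def __init__(self):
        self._customers = {}  # in-memory store, keyed by customer id

    def create(self, name, lead_status="cold"):
        # The enterprise, not the vendor, decides which fields a customer has.
        customer_id = str(uuid.uuid4())
        self._customers[customer_id] = {
            "id": customer_id, "name": name, "lead_status": lead_status,
        }
        return customer_id

    def get(self, customer_id):
        return self._customers[customer_id]

    def promote(self, customer_id):
        """Move a lead one step along the cold -> warm -> hot funnel."""
        funnel = ["cold", "warm", "hot"]
        customer = self._customers[customer_id]
        position = funnel.index(customer["lead_status"])
        customer["lead_status"] = funnel[min(position + 1, len(funnel) - 1)]
        return customer["lead_status"]
```

If next year your definition of a lead funnel changes, you rewrite this one small service; no vendor release cycle is involved.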

So, let’s challenge the thinking of the past. Is bespoke code the problem? Perhaps. I do agree that if a vendor “owns” your definition of a customer, then it would be better to “blame” them if they get it wrong in your particular context. Then again, you own your customers; your vendors don’t. Why would you possibly outsource this definition?

Using bespoke code to “own” your definition of a customer means you control all the workings related to “your customers”. If these solutions can come into being in a week or two, and can be “thrown away” when you decide to change your definition, never forcing you to fit your definition into a single vendor’s view (based on their own multiple-customer perspective, which may or may not fit yours), why would you possibly lock in to a single vendor? If you do, I suppose it is better for you to decide this on the golf course with your vendor friends than to do what is right for your enterprise. Oops. Did I say that? If I didn’t, I assure you your teams are.

I am not highlighting any of this to pick apart decisions that have been made, but I would like to point out that using vendors is fine for what they have to offer. But leave it at that! What I am actually suggesting is that you take ownership of your enterprise’s data and services, even if that is just a wrapper around your vendor’s offering. Why? To make sure that when the vendor changes because of general opinion, it doesn’t hamstring you into their solution. Microservices are the way to do this.

As an integration expert, and one who has defined many integration strategies for enterprises, I have been advising enterprises for years to do exactly this. Ten years ago it was about SOA, an ESB (single vendor supplied), and heavy XML to enforce standards in this space. But in all of my dealings with large corporates, two very important principles were always agreed to: 1) layer your services, and 2) ensure loose coupling. I am sure not one reader of this article would disagree. Today, it is not about a single architectural “hub” owned by a single integration-specialist vendor. It is about you owning your data and relationships in services that are small, reusable, versionable (where two versions can exist at the same time, to account for any point-in-time change by a vendor), and isolated enough that you can not only grow with your customers but also grow with your vendors, using disposable, throwaway services. This is what microservices are: the ability for your organisation to focus on the products and services that fit YOUR business, by creating small snippets around individual features or capabilities of your business, rather than around the technology.
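That “versionable” property, where two versions of your service coexist across a vendor change, can be sketched as a thin routing layer. Everything below is hypothetical: the two vendor fetch functions stand in for two releases of a vendor API with different field names, while your own contract stays stable:

```python
# Hypothetical stand-ins for two releases of a vendor's customer API.
def vendor_v1_fetch(customer_id):
    return {"cust_name": "Acme", "id": customer_id}             # old field names

def vendor_v2_fetch(customer_id):
    return {"customerName": "Acme", "customerId": customer_id}  # renamed fields

def get_customer_v1(customer_id):
    """Your stable enterprise contract, backed by the old vendor release."""
    raw = vendor_v1_fetch(customer_id)
    return {"id": raw["id"], "name": raw["cust_name"]}

def get_customer_v2(customer_id):
    """The same enterprise contract, backed by the new vendor release.
    Both versions run side by side while consumers migrate."""
    raw = vendor_v2_fetch(customer_id)
    return {"id": raw["customerId"], "name": raw["customerName"]}

VERSIONS = {"v1": get_customer_v1, "v2": get_customer_v2}

def get_customer(version, customer_id):
    # Consumers see one shape of "customer" regardless of the vendor's changes.
    return VERSIONS[version](customer_id)
```

Consumers keep calling your contract; when the vendor changes underneath, you add a version, migrate at your own pace, and then throw the old one away.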

Happy coding.