Reactive Programming is all the buzz at the moment. It is similar to the iterator pattern, where you have a collection of values and iterate over them to obtain each value and make decisions on it. But iterators are a “pull” paradigm, which often costs more CPU cycles and requires holding the entire collection in memory while you evaluate the values.
Reactive Programming uses the concept of an “observable”. Basically, this means that whenever you have a value of some sort, whether on its own or part of a collection, you can observe changes to that value and have those changes “pushed” to an observer. This is far more powerful because it can be done with “stream” based code: for every value, an observer function handles the change, whether it is a single value change or changes to an entire collection (one by one), without the need to hold the collection in memory or spend CPU cycles iterating over it.
In effect, it is an event-based paradigm built on single-value state changes. If a value changes 1000 times over time, rather than holding a collection in memory and iterating over it, you can respond to each change one at a time, updating any dependencies. Whether those dependencies involve updating the UI or restarting a server, each value change triggers the corresponding listener to update them.
Observable programming is, in effect, much more efficient than iterator programming. Similar to promises, it allows code to execute in a controlled manner on each of these changes, but it also allows any dependency updates to be cancelled, based on some very simple function calls that respond to an observed change.
To start at the very beginning, let me first explain what an “observable” is. It is simply an object which holds one or more values, but also exposes a “next” function (similar to moving to the next item in a collection with an iterator), a “complete” function (fired after the stream has successfully finished), and an “error” function for when something goes wrong. An “observer” is the object that supplies the “next”, “complete”, and “error” handlers.
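The idea above can be sketched in a few lines of plain JavaScript. This is a minimal, illustrative implementation (the names `createObservable` and so on are my own, not from any particular library): an observable is just something you can subscribe an observer to, and the observer supplies the three handlers.

```javascript
// A minimal observable sketch: a function that pushes values, one at a
// time, to an observer with next/complete/error handlers.
function createObservable(subscribe) {
  return { subscribe };
}

// This observable emits each value in turn, then signals completion.
const numbers = createObservable((observer) => {
  try {
    [1, 2, 3].forEach((value) => observer.next(value));
    observer.complete();
  } catch (err) {
    observer.error(err);
  }
});

const received = [];
numbers.subscribe({
  next: (value) => received.push(value),   // fired once per value
  complete: () => received.push('done'),   // fired once at the end
  error: (err) => received.push(err),      // fired if something goes wrong
});
// received is now [1, 2, 3, 'done']
```

Notice that no collection is ever handed to the observer; each value is pushed through individually.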
It is similar to a promise, but differs in that you can cancel any pending operation (a promise only exposes a success and a failure handler; once in flight, it cannot be stopped or discarded). An observable can manage this because data is first injected into a stream, filtered for specific values, and combined with other streams before the actual success handler is ever invoked.
Using observables is much more efficient (since they behave like streams and don’t hold collections), and values from multiple streams can be acted on for particular operations. This is useful for combining streams together, filtering data along the way, and even stopping at an event like a “mouseup” before the entire stream is sent to the success handler. It suits anything involving I/O (like a server or database request), an animation, or calling a render function once the streams have been processed. In essence, it is a much better way to handle the state changes of your application and ensure things happen when you want them to happen.
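Filtering and transforming “mid stream” can be sketched as functions that wrap one stream in another. Again this is an illustrative toy, not a real library API; here a “stream” is simply a function that pushes values at an observer:

```javascript
// A stream is just: (observer) => pushes values to observer.
// map and filter wrap a source stream, transforming or dropping values
// before the final handler ever sees them.
const map = (source, fn) => (observer) =>
  source((value) => observer(fn(value)));

const filter = (source, predicate) => (observer) =>
  source((value) => { if (predicate(value)) observer(value); });

// A source stream of raw values (imagine mouse x-coordinates):
const positions = (observer) => [5, 120, 40, 300].forEach((v) => observer(v));

// Double each value, keep only the large ones, then handle the survivors:
const results = [];
filter(map(positions, (px) => px * 2), (px) => px > 100)(
  (px) => results.push(px)
);
// results is [240, 600] — the other values never reached the handler
```

The key point: nothing is buffered; each value flows through the whole pipeline before the next one is produced.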
For example, a basic observable pattern might be to watch for a mouse down, call the mouse move handler on each mouse move, and on a mouse up stop processing all events (even those in progress), allowing you to perform final cleanup on a drag operation. Similarly, it can be used for reading data: retrieve the data from one table, filter it against another table, and finish the operation only once all data has been retrieved, while still examining particular records along the way.
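The cancellation half of that drag pattern can be sketched like this. All names are illustrative (in particular, passing the subscription into the handler is a simplification for this sketch, not how real libraries do it): the stream stops pushing the moment the handler unsubscribes, which is exactly what a promise cannot do.

```javascript
// Sketch of cancelling a stream mid-flight, the way a drag stops on mouseup.
function fromArray(values) {
  return {
    subscribe(observer) {
      const subscription = {
        closed: false,
        unsubscribe() { this.closed = true; },
      };
      for (const value of values) {
        if (subscription.closed) break;      // cancelled: stop pushing
        observer.next(value, subscription);  // hand over the subscription too
      }
      if (!subscription.closed) observer.complete();
      return subscription;
    },
  };
}

const seen = [];
fromArray(['mousemove', 'mousemove', 'mouseup', 'mousemove']).subscribe({
  next(event, sub) {
    seen.push(event);
    if (event === 'mouseup') sub.unsubscribe(); // stop, even mid-stream
  },
  complete() { seen.push('complete'); },
});
// seen is ['mousemove', 'mousemove', 'mouseup'] — the final move never
// arrives, and complete never fires, because the stream was cancelled
```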
Thinking in streams can be daunting at first, but it doesn’t have to be. It does take a different way of thinking: rather than sequential patterns (if-then-else), you think in terms of “what outcome do I want, what data do I need to get there, and what operations do I need to perform along the way”. It is a little hard to grasp at first, but with practice your code only fires when it has to, without holding large amounts of data, and you gain the freedom to write uncomplicated code. Better yet, it lets you reason about asynchronous code in a synchronous style, which helps with debugging.
There are two things you need to be mindful of when using Reactive Programming: working with changes to state data, and the “side effects” of that data. An example of a side effect would be updating a database or rendering to a screen. These are not to be confused with responding to events over time, or mapping and filtering data to determine when the “next” handler fires and when a sequence of handlers is considered “complete”. A side effect is simply that: a side effect, applied after you have processed your streams.
To distinguish between stream handling and side effects, think of it as “what do I do mid-stream using pure functions (e.g. a value computed from first and last name) versus what do I do with the final stream state (e.g. update a contact record in a database, or a contact details page in a UI)”.
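That split can be shown in a couple of lines (the “database” here is just an array standing in for a real store, and the names are illustrative):

```javascript
// Mid-stream: a pure function — no effect on the outside world.
const withFullName = (contact) => ({
  ...contact,
  fullName: `${contact.first} ${contact.last}`,
});

// Final state: a side effect (a stand-in for a real database write).
const database = [];
const saveContact = (contact) => database.push(contact);

[{ first: 'Ada', last: 'Lovelace' }]
  .map(withFullName)     // pure computation, safe to repeat or test
  .forEach(saveContact); // side effect, applied once at the end
```

Keeping the pure part separate means the stream logic can be tested without touching a database or a UI at all.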
To summarise: Reactive Programming allows you to “react” to mutations, even in an immutable way, and only fire the code you want, when you want. This can lead to substantial performance improvements, because you are optimising memory, CPU cycles, and I/O along the way.
Some libraries are available today to help you along. While some of these libraries are very good, they will require you to think differently in your approach, but the effort is well worth it and brings substantial benefits. Some useful starting points are:
- Elm (a new language based on Reactive Programming)
- Rx (a Microsoft initiative to bring Observables to many languages)
Learn how to program using Reactive Programming today. You will be very glad you did.
I want to show some details on how to build a microservice using NodeJS and deploy it to Docker containers in any cloud.
If you haven’t used Docker before, it is fairly simple. It is a technology that allows you to run an “application container” within an existing server (physical or virtual). It is almost like having a virtual machine within a virtual machine, except that Docker uses the host’s OS to run the application. The difference is that you can isolate your application code into a container that can be deployed anywhere.
Another great feature of Docker is that it is a sandbox for your application. Everything in the Docker container is self-contained. In a traditional VM, especially one that hosts multiple applications, apps are normally deployed to multiple directories on the host. If these applications share resources, then moving an application from one server to another can break it on the new host if you don’t also ensure its dependencies are installed there.
Enter Docker. Docker allows you to deploy everything your application requires in a minimal way, bundling up the application stack. This means you can build a Docker image with everything it needs to run, and take that “box” and deploy it anywhere. It uses a layered file system, so you can also grab and auto-install any deployment code you want, even from a repository like Git. In doing this, it focuses on your application dependencies, but won’t include the OS files themselves.
In a proper distribution of a Docker image, the only contents of the container are your application and its dependencies. Therefore, if your host OS is Ubuntu, your Docker container won’t carry the Ubuntu kernel or other OS files, but will have everything specific, over and above that, needed to make your application work.
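A minimal Dockerfile for a Node application makes the layering concrete. This is only a sketch; the base image tag, port, and file names (`server.js`) are illustrative, not prescriptive:

```dockerfile
# The base image supplies the runtime layer; the kernel is shared with
# the host, not copied into the image.
FROM node:18-alpine
WORKDIR /app

# Install only the application's own dependencies.
COPY package*.json ./
RUN npm ci --omit=dev

# Add the application code itself as the top layer.
COPY . .
EXPOSE 3000

# The startup command run when the container launches.
CMD ["node", "server.js"]
```

Building this with `docker build` produces a self-contained “box” that runs the same way on any host with Docker installed.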
A few things to note about Node itself (not Docker): it runs within a single process, which means it runs on ONE thread on the server. However, it has an event loop that allows applications to perform I/O asynchronously via callbacks, which makes it perform much faster than a traditional blocking application. This is the magic of NodeJS.
Once an application is built with Node, it can be deployed anywhere using Docker, on any virtual image in any cloud, and as long as all of its dependencies are contained within that black box, it will just work. It is possible to include all dependencies within a single Docker image to minimise deployment issues, but some dependencies, such as database technologies, often don’t make sense to include in an application stack simply because they are shared. That doesn’t mean they can’t run in a Docker image of their own!
Docker is changing the world. Google has released a system built around Docker, called Kubernetes, which allows a lot more control over clustering and more production-ready deployments. I suggest you check both of them out. Oh, and by the way, each Docker image has an initial startup command which runs inside the container when it launches. Automated deployments with minimal footprints, anyone?
Here are some links you might be interested in:
Docker: Docker Website
NodeJS: Node JS Website
Tutorial on setting up Docker and NodeJS, with other Docker images running MongoDB, Redis, logs, and other dependencies all saved on the host from Docker containers:
However, this isn’t a real problem today. There are transpilers available that will translate ES6 code to ES5. Is this a performance issue? Not at all. As a matter of fact, Java does much the same thing by compiling Java source into bytecode, as does .NET by compiling C# into code for the CLR. At the end of the day, does it really matter? How a machine executes your code is quite different from how you write it with the myriad of developers you have. Maintainability of that code, and reacting to issues, is far more important, given that bugs and issues cost your teams the most time.
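To make the transpiling idea concrete, here is a small ES6 snippet alongside a rough hand-written ES5 equivalent of the kind a tool such as Babel would emit (the exact output of any given transpiler will differ):

```javascript
// ES6 source: arrow function, template literal, const.
const greet = (name) => `Hello, ${name}!`;

// Roughly what a transpiler produces as ES5: a plain function
// expression and string concatenation.
var greetES5 = function (name) {
  return 'Hello, ' + name + '!';
};
// Both behave identically: greet('World') === greetES5('World')
```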
This is a BIG deal for project teams, whether at a small mom and pop shop, or a huge enterprise.
The focus of today’s blog post is to highlight these new technologies, especially those allowing code to be executed on either side, running the same way on the server or in the browser. A lot of interest has been put into SPA apps, and for good reason: they perform in almost real time, whereas most server-side applications need a request and response for every page re-render. Server-side rendered applications have traditionally been slow performance-wise, and leave change handling up to the developer to sort out (which means it can be buggy). Not ideal, yet this way of creating web applications has been the “norm” for quite some time.
Then came “ajax”. This used XML to send “snippets” of data to the server and get a response without re-rendering the page. Libraries like jQuery came onto the scene as a step towards a more real-time web client user experience. However, XML is verbose, and ajax was really a band-aid over the performance issue.
At the time it worked quite well. If an up-to-date piece of data was needed for re-presentation, a simple ajax call obtained it and let the client-side application work out the rendering. A great start, but not quite enough.
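The classic ajax flow can be sketched like this. The endpoint and XML payload are made up, and the transport is stubbed (a stand-in for the browser’s `XMLHttpRequest`) so the pattern is visible outside a browser: request a snippet of data, handle it in a callback, and touch only the affected part of the page.

```javascript
// Request a data snippet and hand it to a callback — no full page
// re-render involved.
function ajaxGet(url, onSuccess, transport) {
  transport(url, (status, responseText) => {
    if (status === 200) onSuccess(responseText);
  });
}

// Stand-in for the real XML-over-HTTP transport:
const fakeTransport = (url, callback) =>
  callback(200, '<contact><name>Ada</name></contact>');

let snippet;
ajaxGet('/contacts/1', (xml) => { snippet = xml; }, fakeTransport);
// snippet now holds just the XML fragment the page needs to update
```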
SPA applications changed all that. Developers could now build applications that run in a browser in real time, without worrying about the time it takes to re-render a page, simply because page renders from the server weren’t needed. The client browser had all the code available to perform these changes. This was quite an advance for web-based applications: real-time applications could be built even though the browser doesn’t have direct access to server-controlled data.
In my next blog post I will provide an example that can be used by developers to do just this. Stay tuned.
In today’s programming terminology, many people focus on either services or products. Often this is because of the revenue generated by these two models (a sale comes from either a product or a service), and to some degree this is how it should be. After all, we do not live in a world where money doesn’t exist; it does, and it should be treated with respect.
Over the years, a product has had its place and still does, because it is a tangible thing for sale. It might be a car, a bicycle, a laptop, or a TV. All of these are products and have a sale value. On the services front, this usually translates to actions done by someone, which also have a value and should form the basis of a “product” construct for the service. In traditional terms, a product is a tangible item for sale that, once sold, becomes the property of the buyer. A service is a little more vague, because it usually depends on a “piece of time” or a particular function offered by people; in either case, it too can be treated as a product for sale, where that product is time or the function performed.
In each case, however, both products and services are simply tangible things that people see. It is easy to see that a car has four wheels, where each wheel is a product, and the car using the four wheels is also a product. Total cost: the price of the car, OR the price of four wheels. The level of product sold depends on the customer and what that customer is trying to achieve: for the car manufacturer, the sale price is for four wheels, hardly enough to drive a car, whereas at the car dealership the price is that of a fully assembled car (which in any event should cover the cost of the four wheels plus all the other components that make up the car).
A service, on the other hand, is a price put on the “handling” of something. In the car case above, it might be the commission the salesperson gets for the car, or the administration of processing the orders for it. Over time, as demand for cars increases, the cost of the “products” required to build a car decreases (including services), as does the cost of the sub-parts used to assemble it, and hence, with some markup, the price of the car.
But a fundamental concern seems to be lost in both products and services. While each is treated as a specific, tangible cost, whether it is something you can hold or feel, or a time-based thing where people’s time is required to build or administer the build, each of these things changes over time. Companies often see these changes as arbitrary, and sometimes (more often than not) don’t really see the lifecycle of a product or service in their initial implementation. There are even organisations who don’t care about the lifecycle at all, simply because projects are based on an initial deliverable (both products and services), and so the implementation only accounts for what the initial delivery costs. I can tell you with certainty that the design of a new car costs a whole lot more than a customer pays for the car once it has become a commodity.
However, everything has states. A product can be “new”, “used”, or “destroyed”. A person’s time can be “starting”, “in progress”, or “finished”. These states outline the very nature of a lifecycle, which is often overlooked. Architecture today usually focuses on the direct elements that get a project to implementation, and often treats the lifecycle of that “thing” as something not worth considering in the implementation phases… that is, until operations becomes involved. Then let the fireworks begin, often with the result that things are not approved to go into production from the implementation project, or, in the worst case, operations are seen as a blockage and forced to accept the lifecycle of a given product or service. These states reflect the lifecycle of any product and service, and should be treated as a whole to determine the total cost of ownership of the events that occur, giving the customer the ability to make the right choice.
However, both groups seem to be missing something. As the saying goes, when someone buys a drill, it isn’t the drill they want, but rather the hole. In reality they are buying a hole, and the drill is simply the thing they use to get it. Value of the drill: $299. Value of the service to drill the hole: $50. Value of the event that required the hole: at least $250. But what if they want a new deck, which requires holes and much more, so they can enjoy a coffee on a sunny Sunday morning?
Events are first class citizens. A pain point, or any need or want of any kind, is an “event”. While this can be translated into products and services to assist with costing, the event is the driver, not the products or services underneath. It is the event that requires costing, alongside the other lifecycle events that might occur.
In the old days of IT, there was a strong push for things to be “data driven”, meaning that data drives the outcome. This didn’t really work that well, simply because those who understood data didn’t account for the services required to get that data, nor did they give much thought to the events that could affect it. More often it was about the data entity itself, not the events that create the actual business needs. More recently there has been a push for a “service driven” approach, where services come first, but again, a service is only something which leads to an event outcome. Lately I have heard of an “event driven” approach, which I feel is a little closer to the mark, but perhaps still needs more thinking.
Events are not just arbitrary; they cause demand at a point in time. Perhaps a new smartphone has been released, and the event is to “keep up with the Joneses” by getting the latest Apple or Android phone. Or perhaps the features of the phone are in demand. Either way, the law of supply and demand fuels commerce, and aren’t these just events in a customer’s lifecycle? Isn’t it the case that when supply is high and demand is low, the sales price should decrease? Or when demand is high but supply is low, the price should increase? Both are based on events that indicate whether demand is high or low, or on other events that make up the want or need regardless of supply and demand. If so, why is the IT world so interested in products and services, rather than demand and supply, or more aptly, the customer event lifecycle? This doesn’t make much sense to me.
Moreover, if I want a hole and am prepared to buy one (whether I drill it myself by buying a drill, or hire a contractor with a drill to make it), shouldn’t that be the focus? If I want a new deck in my backyard, do I care about the holes, or would it be better to hire a company to build my deck for me? I am sure they have all the tools needed to get my deck done, holes drilled and all. The driver? I want a deck. That is the event.
I believe the IT world has missed the mark. They are so busy trying to justify their time, or perhaps their products, that they never actually cost out the event: the fact that I want a deck. Perhaps if they did, the focus would be on that event and achieving it, rather than on the individual parts and services that make up the outcome. Developers ask for requirements; operations ask for total cost of ownership including their support services. As a customer wanting a deck, I don’t care about either: I only care about the deck, and how much I need to pay year on year to keep it in a state where I can sit on it on a sunny day and enjoy a cup of coffee.
The event is the sales point, not the service or product to answer it. It is the business driver, and perceived outcome, and what the customer is buying.
As another example, if I buy a car, I want one that is of a particular brand, perhaps in a specific color, with a sunroof, power brakes, and perhaps fuel injected to increase performance and decrease fuel costs. The event: I want a new car.
I recently heard an executive say that this is what people want. He equated IT to ordering a car: rather than getting the “white car” he wanted, it was delivered in blue. The sunroof? Oops, sorry, didn’t think about that. Power brakes? Oh yes, they have them, but it costs $5K to keep them running. If I were a car buyer I would be furious, and that was the IT executive’s point. While his points are all valid, he is talking about an end product that took millions in design to get to production readiness, with a service catalogue in place to answer his event of needing a new car. How often the design element is overlooked in preference to the product or service.
Events are first class citizens, and are what should be costed/priced.
Let’s take the car buyer. His events are:
- I need a new car
- I need to service my car
- I need to pay for breakdowns when they happen
- I need to sell my car
- I need to return to the first event
All of these events are lifecycle based. They have nothing to do directly with the car itself. This person needs a car for various reasons: to get to work, to pick up the kids after school, or to go away for the weekend. Those are the actual events. To satisfy them, the customer needs a car. In this new thinking, the events are:
- I need to get to work
- I need to pick up my kids
- I need to be able to go away for a weekend
Each event should be layered. The person who wants to get to work has a number of options, one of which could be a car. The same goes for the second and third events. Perhaps a car is his choice to achieve the outcome, but let me ask you a question: where is the value? (Car: $50K, plus the cost of parking, maintenance to keep it going, repairs due to unexpected events, and fuel. Alternative: the cost of a bus, train, or plane ticket.) What exactly is the cost of the latter? I am sure the cost of public transport versus the cost of a car can be estimated and compared. What exactly is he buying?
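The comparison is just arithmetic once the event is costed as a whole. Every figure below is made up purely for illustration; the point is costing the “get to work” event over its lifecycle, not the product:

```javascript
// Total cost of answering "I need to get to work" over five years,
// car versus public transport (all numbers illustrative).
const years = 5;

const carCost =
  50000 +          // purchase price
  years * (2400 +  // parking per year
           1500 +  // maintenance per year
           2000);  // fuel per year

const transitCost = years * 12 * 150; // monthly pass

// carCost is 79500, transitCost is 9000: costing the event end to end
// makes the real comparison visible, which the sticker price alone hides.
```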
Enter events as first class citizens. If the person decided the best way to get to work was a car, and was prepared to pay for the fuel, parking, and everything else, then he has chosen to answer his event with a car, at a certain cost. If his answer was instead to take public transport, that would also have a cost, perhaps a cheaper one, but in any event the lifecycle events around each phase of the decision-making process make up the event, and this includes a total cost of ownership.
Moreover, each event has lifecycle states. In the case of the car, these states might be “new”, “in need of repair”, “in need of fuel”, and so on. In the case of the bus ticket, perhaps the state is “in need of a new monthly pass”. In either case, the event is the driver, regardless of the level.
To relate this back to IT, the event is a first class citizen. It is what people are buying a resolution for. Each event that occurs has a solution and a cost to implement, and when compared to the other events in that layer, there is also a cost over time that needs to be accounted for. After all, if it costs me a small amount to take the bus but the monthly renewals exceed the cost of buying a car, why would I choose the bus as the answer to my event of getting to work?
In the IT world, the event is king. The technical solution that addresses the event is arbitrary, as long as it fits into the overall lifecycle of my events. If we instead looked at the lifecycle of required events and costed/priced them, perhaps we would not only address the customer’s needs (rather than simply our own), but also look at things end to end to ensure the entire lifecycle is covered. This approach also means responding to events with the appropriate products and services to match, rather than pushing a specific product or service. That is a dynamic way of offering products and services in response to an event, rather than simply pushing them regardless of demand or supply.
To summarize: events are first class citizens. If we in IT focus on customer events, perhaps we will be able to provide the appropriate products and/or services to answer each event. If we cost the event, funding can be appropriate across the lifecycle (after all, we need to look at all events in the lifecycle, not just one, for a proper costing decision). Perhaps if we do this, the products and services (e.g. a drill, or a person with a drill) won’t matter so much, as long as we can give the customer a hole.
Microservices seem to have the spotlight today. These are small, self-contained services that perform a small subset of business functionality. In the old days, a single monolithic application did everything required for a given technology category (e.g. CRM), but the downside was that all integration within the monolith was left up to that monolith application. This is not ideal when a business unit just wants to create a customer, or perhaps sell something to one. Traditionally it meant the monolith managed the entire lifecycle of a customer, and unless you were working inside that monolith, integration became a problem.
Fast forward many years: these monolith applications grew, and new versions with new features were introduced by the single vendor. As enterprises grew and matured in their use of these applications, integration solutions were introduced that were either point to point, or required significant product knowledge to integrate from the outside. A lot of pressure was put on vendors to open up their integration, and some did by producing an API. But that API was only defined from a single vendor’s perspective (albeit based on feedback from multiple customers). While a single customer might have had a voice, depending on their size and how much money they gave the vendor, the vendor still had to think about all of their other customers before releasing a new version, and this often led to vendor lock-in. For some vendors that is the agenda; for others it is simply a matter of evolution. Is the vendor really to blame?
It is the responsibility of the enterprise to own its data, and the relationships to that data, not the vendor’s. But even when “enterprise” architects came along to address this problem, they weren’t seen as providing much value to individual business units, and their funding quickly came into question. I know many very smart enterprise architects who were given the axe simply because no one could work out how to fund them.
Then again, is it up to a single person to define all of the enterprise’s data, relationships, and services? I am not so sure. An enterprise must take responsibility for its own data. It knows its business, and if a single application fits the bill perfectly (I have yet to see this happen), then why wouldn’t an enterprise use a single vendor? After all, that vendor probably knows more about the business domain than the enterprise does, given the number of customers the vendor serves. But sadly, and often, the vendor’s own interests kick in to ensure its longevity.
Then enter architectural principles such as “reuse before buy before build”. A sound concept to ensure previous investment is reused, but one that often leads an enterprise to fit a square peg into a round hole, or to lock in to yet another vendor for a partial solution. The last resort (heaven forbid) is for the enterprise to actually take final responsibility for its own data and relationships. Perhaps it is easier to blame a vendor?
Enter microservices. These are small, lightweight services dedicated to a single business concept at the enterprise level. As an example, a CRM system’s concept of a customer may not quite “fit” the definition of a customer within your enterprise. Does this mean the vendor’s customer concept is unsuitable within the context of its own application? Perhaps not, but I would suggest that view doesn’t fully address the enterprise’s own perspective of who its customers are. Sure, there are commonalities, such as cold/warm/hot leads, customer communications, or putting a customer into a “funnel” to expedite sales. But none of them fit 100% with any organisation’s own definition of “customer”. Microservices are “bespoke” services (OMG) intended to represent the organisation’s 100% view of a customer, and they can be written in a week; I do suppose such services are something to be afraid of. Or are they? They can be built quickly, and can offer everything around a particular enterprise-owned concept, like a customer or an incident.
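The heart of such a bespoke customer service really can be this small. This is a sketch only: the store is in memory, the field names (`lead`, and the cold/warm/hot values from above) are illustrative, and in practice this would sit behind HTTP (Express or similar) with a real database. The shape of the service, owned by you, is the point.

```javascript
// A tiny "customer" service owning the enterprise's own definition
// of customer, independent of any vendor's CRM schema.
const customers = new Map();
let nextId = 1;

function createCustomer({ name, lead }) {
  const customer = { id: nextId++, name, lead: lead || 'cold' };
  customers.set(customer.id, customer);
  return customer;
}

function getCustomer(id) {
  return customers.get(id);
}

const ada = createCustomer({ name: 'Ada', lead: 'hot' });
// getCustomer(ada.id) returns the customer exactly as YOUR
// business defines it — and the whole thing is cheap to throw away
// when that definition changes.
```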
So, let’s challenge the thinking of the past. Is bespoke code the problem? Perhaps. I agree that if a vendor “owns” your definition of customer, then it is easier to “blame” them when they get it wrong in your particular context. Then again, you own your customers, your vendors don’t; why would you possibly outsource this definition?
Using bespoke code to “own” your definition of customer means you control all the workings related to “your customers”. If these solutions can come into being in a week or two, and can be “thrown away” when you decide to change your definition, never forcing you to fit it into a single vendor’s view (based on their multiple customers’ perspectives, which may or may not match yours), why would you possibly lock in to a single vendor? If you do, I suppose it is better for you to decide this on the golf course with your vendor friends than to do what is right for your enterprise. Oops, did I say that? If I didn’t, I assure you your teams are.
I am not highlighting any of this to pick apart decisions that have been made, but rather to suggest that using vendors is OK for what they have to offer, but leave it at that! What I am actually suggesting is that you take ownership of your enterprise’s data and services, even if that is just a wrapper over your vendor’s offering. Why? To make sure that when the vendor changes due to general opinion, it doesn’t hamstring you into their solution. Microservices are the way to do this.
As an integration expert who has defined many enterprise integration strategies, I have been advising enterprises for years to do exactly this. Ten years ago it was SOA, an ESB (single vendor supplied), and heavy XML to enforce standards. But in all my dealings with large corporates, a few important principles were always agreed: 1) layer your services, and 2) ensure loose coupling. I am sure not one reader of this article would disagree. Today, it is not about a single architectural “hub” owned by a single integration vendor, but about “you” owning your data and relationships in services that are small, reusable, versionable (where two versions can exist at the same time to accommodate any point-in-time change by a vendor), and isolated enough that you can grow with your customers and your vendors, using disposable, throwaway services. This is what microservices are: the ability for your organisation to focus on the products and services that fit YOUR business by creating small services around individual features or capabilities of the business, rather than around the technology.
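Versioning, where two versions exist side by side, can be sketched in miniature. The route names and response shapes here are entirely illustrative; the design point is that consumers migrate on their own schedule when a vendor, or your own definition, changes:

```javascript
// Two versions of the same customer service mounted side by side,
// so existing consumers keep working while new ones adopt v2.
const routes = {
  '/v1/customers': (id) => ({ id, name: 'Ada Lovelace' }),           // original shape
  '/v2/customers': (id) => ({ id, first: 'Ada', last: 'Lovelace' }), // new shape
};

const v1 = routes['/v1/customers'](7);
const v2 = routes['/v2/customers'](7);
// Both answer the same request; neither blocks the other, which is
// what makes each version safely disposable.
```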