Real Time Messaging

One of the things I have noticed, particularly in the database world, is how much people and database technologies focus on "global" persistence. In the software development world, using a global variable is a no-no. Why, then, do we do this in the database world? So often databases are a collection of data shared by…


Containerization: Docker

I want to show some details on how to build a microservice using NodeJS and deploy it to Docker containers in any cloud.

If you haven’t used Docker before, it is fairly simple. It is a new technology that allows you to run an “application container” within an existing server (physical or virtual). It is almost like having a virtual machine within a virtual machine, because Docker uses the host VM’s OS to run the application. The difference, however, is that you can isolate your application code into a container that can be deployed anywhere.

Another great feature of Docker is that it is a sandbox for your application. Everything in the Docker container is self-contained. In a traditional VM, especially one that hosts multiple applications, apps are normally deployed to multiple directories on the host. If those applications share resources, then moving one application from one server to another can break it on the new host unless you also make sure its dependencies are installed there.

Enter Docker. Docker allows you to deploy everything your application requires in a minimal way, bundling up the application stack. This means you can build a Docker image with everything it needs to run, and take that “box” and deploy it anywhere. It uses a layered file system, so you can also grab and auto-install any deployment code you want, even from a repository like Git. In doing this, it focuses on your application dependencies, but the image won’t contain the OS files.

To make a proper distribution of a Docker image, the only contents of the container should be your application and its dependencies. Therefore, if your host OS is running Ubuntu, your Docker container won’t include the Ubuntu kernel or other base files, but it will have anything specific, over and above the base, that your application needs to work.
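As a rough illustration, here is a minimal sketch of a Dockerfile that packages a Node app this way. The base image tag, file names, and port are assumptions made up for the example, not part of any particular project:

```dockerfile
# Minimal sketch of packaging a Node app into an image.
# The base image tag, file names, and port are assumptions for illustration only.
FROM node:4

# Install dependencies as a cached layer, separate from the app code
WORKDIR /app
COPY package.json /app/
RUN npm install

# Add the application code as its own layer
COPY server.js /app/

# The startup command baked into the image
EXPOSE 3000
CMD ["node", "server.js"]
```

Each instruction becomes a layer in the image, and nothing from the host OS beyond the chosen base image ends up inside it.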

NodeJS is a server-side tool that brings JavaScript to the server. While JavaScript traditionally runs in the browser, given that it is the language of the web, NodeJS is based on the V8 engine that Google Chrome uses in the browser. But since NodeJS is not a browser application, and instead runs code on the server, it uses “modules” that can be shared between JavaScript running on the server and JavaScript sent to the browser. Another exciting use of Node is to run a universal or “isomorphic” application, which allows your code to run on both the browser and the server with little difference. This is great for SEO and indexing, but still keeps your application lightning fast, particularly when using an SPA technology like Angular, React, or Backbone to run most of your code in the browser.

A few things to note about Node. It runs within a single process, and that means it only runs within ONE thread on the server. However, it has an event loop that allows Node applications to call functions asynchronously with callbacks, which makes them perform much faster than a traditional application. This is the magic of NodeJS.
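To make that event loop concrete, here is a small sketch in Node-style JavaScript; the file name is an assumption for the example:

```javascript
// Sketch: Node keeps working while slow I/O completes in the background.
var fs = require('fs');

console.log('start');

// Asynchronous read: Node registers a callback and returns immediately,
// so the single thread is free to handle other events in the meantime.
fs.readFile('config.json', 'utf8', function (err, data) {
  if (err) {
    console.error('read failed:', err.message);
    return;
  }
  console.log('file arrived, length', data.length);
});

console.log('end (logged before the file callback fires)');
```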

Once an application is built using Node, it can be deployed anywhere using Docker, on any virtual image in any cloud, and as long as all of the dependencies are contained within that black box, it will just work. It is possible to include all the dependencies within a Docker image to minimise issues when deploying, but dependencies such as database technologies often don’t make sense to include in an application stack, simply because they are shared. That doesn’t mean they can’t run in a Docker image themselves!

Docker is changing the world. Google has built an orchestration layer around Docker, called Kubernetes, which gives you a lot more control over clustering and more production-ready deployments. I suggest you check both of them out. Oh, by the way, each Docker image has a build script and an initial startup command that runs inside the container, so the image can essentially build and launch itself. Automated deployments with minimal footprints, anyone?

Here are some links you might be interested in:

Docker: Docker Website

NodeJS: Node JS Website

Tutorial on setting up Docker and NodeJS, with other Docker Images running mongodb, redis, logs, and other dependencies all saved on the host from Docker containers:

New Business Applications using Javascript, SPA, and Isomorphic applications

With the invention of SPAs (Single Page Applications), JavaScript has become the primary language of choice, simply because it is really the only programming language that browsers understand. This single language has surpassed all other languages, given the massive exposure it gets on the web through the browsers.

In the past, JavaScript was referred to as a simple or “non-language” because of the way it does things. Concepts like polymorphism, inheritance, and abstraction, familiar to most developers, don’t appear to be in the language. This isn’t true, however; JavaScript is a true object-oriented language, it just does things differently, through prototypes rather than classes.

Enter ES6. This is the newly accepted standard for JavaScript. The way it does things is a lot closer to traditional programming languages, even though much of it is just syntactic sugar on top of the original language to make it a little easier for developers. ES6, however, has only recently been approved as an official release, and it will take time for browser vendors to catch up with the change, given that their browser products are used by billions of people worldwide.
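To make the “syntactic sugar” point concrete, here is a small sketch; the class and method names are made up for the example. The ES6 class on top is essentially the prototype-based code underneath it:

```javascript
// ES6 class syntax (names are illustrative only)
class Customer {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return 'Hello, ' + this.name;
  }
}

// Roughly what that desugars to in pre-ES6 JavaScript
function CustomerES5(name) {
  this.name = name;
}
CustomerES5.prototype.greet = function () {
  return 'Hello, ' + this.name;
};

console.log(new Customer('Ada').greet());     // Hello, Ada
console.log(new CustomerES5('Ada').greet());  // Hello, Ada
```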

However, this isn’t a real problem today. There are transpilers available that will translate ES6 code to ES5. Is this a performance issue? Not at all. As a matter of fact, Java does a similar thing by compiling Java classes into bytecode, and .NET does too by translating C# classes into intermediate code for the CLR. At the end of the day, does it really matter? How a machine executes your code is quite different from how you write your code with the myriad of developers you have. Maintainability of that code, and reacting to issues, is much more important, given that bugs and issues cost your teams the most time.
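As a rough illustration of what a transpiler does, the ES6 snippet below could be rewritten into the ES5 form underneath it. The exact output varies by tool; this is only a sketch:

```javascript
// ES6 source
const double = (n) => n * 2;
const values = [1, 2, 3].map((n) => double(n));

// Roughly equivalent ES5 that a transpiler might emit
var doubleES5 = function (n) { return n * 2; };
var valuesES5 = [1, 2, 3].map(function (n) { return doubleES5(n); });

console.log(values, valuesES5); // [ 2, 4, 6 ] [ 2, 4, 6 ]
```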

Personally, I really like Node/NodeJS. It allows developers to create code using JavaScript on the server. It is based on Chrome’s V8 engine and offers a set of modules that run server-side (rather than client-side), which means you as a developer don’t need separate resources to maintain multiple projects written in different languages. Want a web server that listens on HTTP, HTTPS, and WebSockets? This can be done in a very small amount of code.
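For instance, here is a sketch of an HTTP server using Node’s built-in http module; the port and response body are assumptions for the example:

```javascript
// Minimal HTTP server using Node's built-in http module.
// The port number and response body are illustrative only.
var http = require('http');

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'hello from node' }));
});

server.listen(3000, function () {
  console.log('Listening on http://localhost:3000');
});
```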

Where these technologies really shine, though, is in sharing JavaScript code between the server and the browser. While there are some differences between JavaScript on the server and in the browser, the language is still the same, and new applications that support both are appearing very quickly. The term given to these applications is “isomorphic”: the same code in the same file can be executed in either environment.
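Here is a tiny sketch of what that looks like in practice; the module name and function are made up for the example. The same file can be required by Node or loaded in a browser:

```javascript
// formatPrice.js - a sketch of an "isomorphic" module (names are illustrative).
// The same function can run on the server (Node) or in the browser.
function formatPrice(cents) {
  return '$' + (cents / 100).toFixed(2);
}

if (typeof module !== 'undefined' && module.exports) {
  // Loaded via require() in Node
  module.exports = formatPrice;
} else if (typeof window !== 'undefined') {
  // Loaded via a <script> tag in the browser
  window.formatPrice = formatPrice;
}
```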

This is a BIG deal for project teams, whether at a small mom and pop shop, or a huge enterprise.

The focus of today’s blog post is to highlight these new technologies, especially the ability for code to be executed on either side and still run the same way, regardless of whether it is on the server or the browser client. A lot of interest has been put into SPA apps, and for good reason: they perform almost in real time, whereas most server-side applications need a request and response for every page re-render. Server-side rendered applications have traditionally been very slow performance-wise, and they leave change handling up to the developer (which means it can be buggy) to sort out. Not ideal, and yet this way of creating web applications has been the “norm” for quite some time.

Then came “Ajax”. This was a way of using XML to send “snippets” of data to the server and get a response without re-rendering the page. Libraries like jQuery came onto the scene to step towards a more real-time web client user experience. However, XML is verbose, and Ajax was really a band-aid over the performance issue.

At the time it worked quite well. If an up-to-date piece of data was needed to re-present that data, a simple Ajax call was used to obtain it, and the client-side application worked out the rendering. A great start, but not quite enough.
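A bare-bones Ajax call looks something like the sketch below; the endpoint URL and element id are assumptions for the example:

```javascript
// Sketch of a classic Ajax request in the browser (URL and element id are illustrative).
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/prices/latest');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Only this fragment of the page is updated; no full re-render.
    document.getElementById('price').textContent = xhr.responseText;
  }
};
xhr.send();
```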

However, SPA applications changed all that. Developers could now focus on applications that run in the browser in real time, and not worry about the time it takes to re-render a page, simply because page renders weren’t needed from the server. The client browser has all the code available to perform these changes. This was quite an advance for web-based applications: real-time applications can be built even though the browser doesn’t have direct access to server-controlled data.

The issue here is that web browsers only care about data that is to be presented, and JavaScript as a language has traditionally been used as a way to work with the user. As a language it still has everything required to be fully fledged, but because it does things “differently” from other server-side languages, it was difficult to work with and often led to developers pulling their hair out. Well, with ES6, React, Flux, and Node, this isn’t really an issue anymore. A whole new developer experience has resulted, and one I believe will be quite valuable not only to IT professionals but to business units as well. This translates to cutting costs and timeframes, and allows applications that perform incredibly fast to be built in weeks rather than months or years. Try that on for size. It means your development resources are cut down and can turn around your business needs in mere months. You wanted the IT world to listen? We have, by making things way faster with fewer resources. Perhaps it is time for you to start investing in IT again, and to stop fearing it just because you don’t understand it?

In my next blog post I will provide an example that can be used by developers to do just this. Stay tuned.

Microservices

Microservices seem to have the spotlight today. These are small, self-contained services that do a small subset of business functionality. In the old days, a single monolithic application did everything it needed to do for a given technology category (e.g. CRM), but this has its downside in that all integration within the monolith was left up to that monolith application. This is not ideal when a business unit just wants to create a customer, or perhaps sell something to a customer. Traditionally it meant the monolith would manage the entire lifecycle of a customer. Again, unless you were working within that monolith application, integration became a problem.

Fast forward many years: these monolith applications grew, and new versions with new features were introduced by the single vendor. As enterprises grew and matured in their use of these applications, integration solutions were introduced that were either point-to-point or required significant product knowledge in order to integrate with them from outside systems. A lot of pressure was then put on these vendors to break their integration down, and some did by producing an API. But again, this API was only defined from a single vendor’s perspective (albeit based on feedback from multiple customers). While a single customer might have had a voice, depending on their size and how much money they gave the vendor, the vendor still had to think about all of their other customers before releasing a new version of their application, and this often led to vendor lock-in. For some vendors this is their agenda; for others it is simply a matter of evolution. Is the vendor really to blame?

As an enterprise, it is your responsibility to own your data, and the relationships within that data, not a vendor’s. But even when “enterprise” architects came along to address this problem, they weren’t seen as providing much value to individual business units, and the funding often came into question quickly. I know many very smart enterprise architects who were given the axe simply because no one could work out how to fund them.

Then again, is it up to a single person to define all of the enterprise’s data, relationships and services? I am not so sure about this. An enterprise must take responsibility for its own data. It knows its business, and if a single application fits the bill perfectly (I have yet to see this happen), then why wouldn’t an enterprise use a single vendor? After all, that vendor probably knows more about the business than the enterprise does, given the number of customers the vendor might have. But sadly, and often, a vendor’s interest does kick in to ensure its own longevity.

Then enter architectural principles such as “reuse before buy before build”. A sound concept to ensure that previous investment is reused, but it often leads to an enterprise having to fit a square peg into a round hole, or perhaps lock in to yet another vendor for a partial solution. The last resort (heaven forbid) is that an enterprise should actually take final responsibility for its own data and relationships. Perhaps it is better to blame a vendor?

Enter microservices. These are small, lightweight services dedicated to a single business concept at the enterprise level. As an example, a CRM system’s concept of a customer may not quite “fit” the definition of a customer within your enterprise. Does this mean that the customer, within the context of the vendor’s application, isn’t suitable? Perhaps not, but I would suggest that that customer view doesn’t really address the needs of the enterprise from its own perspective of who its customers are. Sure, they might have commonalities, such as cold/warm/hot leads, communication with a customer, or even how to put a customer into a “funnel” to try to expedite sales. However, they still don’t fit 100% with any organisation’s view of “customer”. Microservices are “bespoke” services (OMG) intended to represent the organisation’s 100% view of a customer, and they can be written in a week, so I do suppose these services are something to be afraid of. Or are they? These services can be built quickly, and can offer everything around a particular enterprise-owned concept, like a customer or an incident.
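To give a feel for how small such a service can be, here is a sketch of a bespoke “customer” microservice using Node’s built-in http module. The endpoints, port, and data shape are assumptions for the example, not a model from any particular vendor:

```javascript
// Sketch of a tiny "customer" microservice (all names and fields are illustrative).
// It owns the enterprise's own definition of a customer, however small.
var http = require('http');

var customers = {}; // in-memory store for the sketch; a real service would persist this
var nextId = 1;

http.createServer(function (req, res) {
  if (req.method === 'GET' && req.url === '/customers') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(customers));
  } else if (req.method === 'POST' && req.url === '/customers') {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
      var customer = JSON.parse(body);      // e.g. { "name": "Ada", "stage": "warm" }
      customers[nextId++] = customer;
      res.writeHead(201, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(customer));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(4000);
```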

So, let’s challenge the thinking of the past. Is bespoke code the problem? Perhaps. I agree that if a vendor “owns” your definition of customer, then I suppose it is easier to “blame” them if they get it wrong in your particular context. Then again, you own your customers, your vendors don’t, so why would you possibly outsource this definition?

Using bespoke code to “own” your definition of customer means that you can control all the workings related to your customers. And if these solutions can come into being in a week or two, and can be “thrown away” when you decide to change your definition, not forcing you to fit your definition into a single vendor’s view (based on their many other customers’ perspectives, which may or may not fit yours), why would you possibly lock in to a single vendor? If you do, I suppose it is better for you to decide this on the golf course with your vendor friends than to do what is right for your enterprise. Oops, did I say that? If I didn’t, I assure you your teams are.

I am not highlighting any of this to pick apart decisions that have been made, but I would suggest that using vendors is fine for what they have to offer. But leave it at that! What I am actually suggesting is that you take ownership of your enterprise’s data and services, even if that ownership is just a wrapper around your vendor’s offering. Why? To make sure that when the vendor changes direction because of general opinion, it doesn’t hamstring you into their solution. Microservices are the way to do this.

As an integration expert, and one who has defined many integration strategies for enterprises, I have been advising enterprises for years to do exactly this. Perhaps 10 years ago this was framed around SOA, an ESB (single-vendor supplied), and heavy XML to enforce standards. But in all of my dealings with large corporates, a few very important principles were always agreed to: 1) layer your services, and 2) ensure loose coupling. I am sure not one reader of this article would disagree with those. Today it is not about a single architectural “hub” owned by a single integration-specialist vendor, but rather about you owning your data and relationships in services that are small, reusable, versionable (where two versions can exist at the same time to account for any point-in-time change by a vendor), and isolated enough that you can grow not only with your customers but also with your vendors, using disposable, throwaway services. This is what microservices are: the ability for your organisation to focus on the products and services that fit YOUR business by creating small snippets around individual features or capabilities of your business, rather than around the technology.

Happy coding.