I want to show some details on how to build a microservice using NodeJS and deploy it in Docker containers on any cloud.
If you haven’t used Docker before, it is fairly simple. It is a technology that allows you to run an “application container” on an existing server (physical or virtual). It is almost like having a virtual machine within a virtual machine, except that Docker uses the host’s OS to run the application rather than emulating a whole machine. The difference is that you can isolate your application code into a container that can be deployed anywhere.
Another great feature of Docker is that it sandboxes your application. Everything in the Docker container is self-contained. On a traditional VM, especially one that hosts multiple applications, apps are normally deployed to multiple directories on the host and end up sharing resources. Move one of those applications to another server and it can break on the new host unless you also make sure all of its dependencies are installed there.
Enter Docker. Docker lets you package everything your application requires in a minimal way, bundling up the whole application stack. You can build a Docker image with everything it needs to run, then take that “box” and deploy it anywhere. It uses a layered file system, so you can also fetch and automatically install deployment code from a repository such as Git. Because the image focuses on your application’s dependencies, it doesn’t carry the full set of OS files.
A properly built Docker image contains only your application and its dependencies. If your host OS is Ubuntu, your Docker container won’t duplicate the Ubuntu kernel or base system files (containers share the host’s kernel); it carries only what is needed over and above that to make your application work.
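For example, a minimal Dockerfile for a Node application might look like the following. The base image tag, port, and file names here are illustrative assumptions, not from the original post.

```dockerfile
# Start from an official Node base image; lower layers come from the registry
FROM node:18-alpine
WORKDIR /app
# Install only the application's own dependencies (cached as its own layer)
COPY package*.json ./
RUN npm install --production
# Add the application code on top
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Each instruction becomes a layer, which is what makes rebuilds and distribution cheap: unchanged layers are reused rather than re-shipped.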
A few things to note about Node.js itself. A Node application runs within a single process, and that means your JavaScript executes on ONE thread. However, Node has an event loop that lets applications perform work asynchronously via callbacks, which can make a Node service far more responsive than a traditional thread-per-request application. This is the magic of NodeJS.
Once an application is built using Node, it can be deployed with Docker on any virtual machine in any cloud, and as long as all of its dependencies are contained within that black box, it will just work. You can include every dependency in the Docker image to minimise deployment issues, but some dependencies, such as database technologies, often don’t make sense to bake into an application stack simply because they are shared. That doesn’t mean they can’t run in Docker images of their own!
Docker is changing the world. Google has open-sourced a platform built around Docker, called Kubernetes, which gives you far more control over clustering and production-ready deployments. I suggest you check both of them out. Oh, and by the way: each Docker image is described by a Dockerfile, a script of instructions the image uses to build itself, and can define a startup command to run when the container launches. Automated deployments with minimal footprints, anyone?
Here are some links you might be interested in:
Docker: Docker Website
NodeJS: Node JS Website
Tutorial on setting up Docker and NodeJS, with other Docker Images running mongodb, redis, logs, and other dependencies all saved on the host from Docker containers:
However, this isn’t a real problem today. Transpilers are available that translate ES6 code to ES5. Is this a performance issue? Not at all; as a matter of fact, Java does much the same thing by compiling Java source into bytecode, and .NET compiles C# into CIL for the CLR. At the end of the day, does it really matter? How a machine executes your code is quite different from how you write it across the myriad of developers you have. Maintainability of that code, and the ability to react to issues, matters far more, given that bugs and issues consume most of your teams’ time.
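As a simple illustration of what transpilation means in practice, here is an ES6 arrow function alongside the ES5 a transpiler such as Babel would typically emit. The ES5 version is hand-written here to show the shape of the output; real transpiler output varies.

```javascript
// ES6: arrow function with implicit return
const doubleES6 = (xs) => xs.map((x) => x * 2);

// Roughly what a transpiler emits as ES5
var doubleES5 = function (xs) {
  return xs.map(function (x) {
    return x * 2;
  });
};

// Both behave identically at runtime; only the authoring syntax differs.
```

The point stands: how you write the code and what the runtime executes are separate concerns, just as with Java bytecode or CIL.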
This is a BIG deal for project teams, whether at a small mom and pop shop, or a huge enterprise.
The focus of today’s blog post is to highlight these new technologies, especially those that allow code to be executed on either side, server or browser, and still run the same way. A lot of interest has gone into SPA apps, and for good reason: they feel almost real-time, whereas most server-side applications need a request and response for every page re-render. Server-side-rendered applications have traditionally performed poorly and left change handling up to the developer (which means it can be buggy). Not ideal, yet this way of creating web applications has been the “norm” for quite some time.
Then came “Ajax”. This technique used XML to send “snippets” of data to the server and get a response without re-rendering the whole page. Libraries like jQuery came onto the scene to step towards a more real-time web client experience. However, XML is verbose, and Ajax alone was simply a band-aid for the performance issue.
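To see why XML was a band-aid, compare the same payload serialised as XML and as the JSON that later replaced it in Ajax calls (field names here are invented for illustration):

```javascript
// The same customer record, serialised two ways.
const xml =
  '<customer><id>42</id><name>Ada</name><status>hot</status></customer>';

const json = JSON.stringify({ id: 42, name: 'Ada', status: 'hot' });

// JSON carries the same information in fewer bytes, and parses
// directly into a JavaScript object with no extra tooling.
console.log(xml.length, json.length); // XML is noticeably larger
```

Multiply that overhead by every request a chatty client makes and the bandwidth cost of XML becomes obvious.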
At the time it worked quite well. If an up-to-date bit of data was needed to re-present that data, a simple Ajax call fetched it and let the client-side application work out the rendering. A great start, but not quite enough.
However, SPA applications changed all that. Developers could now build applications that run in the browser in real time, without worrying about the time it takes to re-render a page, simply because page renders from the server weren’t needed. The client browser had all the code available to apply these changes itself. This was quite an advance for web-based applications: real-time applications can now be built even though the browser doesn’t have direct access to server-controlled data.
In my next blog post I will provide an example that can be used by developers to do just this. Stay tuned.
Microservices seem to have the spotlight today. These are small, self-contained services that implement a small subset of business functionality. In the old days, a single monolithic application did everything needed for a given technology category (e.g. CRM), but the downside was that all integration within the monolith was left up to that monolith application. This is not ideal when a business unit just wants to create a customer, or perhaps sell something to one. Traditionally it meant the monolith would manage the entire lifecycle of a customer, and unless you were working inside that monolith, integration became a problem.
Fast forward many years: these monolithic applications grew, and new versions with new features were introduced by the single vendor. As enterprises grew and matured in their use of these applications, integration solutions were introduced that were either point-to-point or required significant product knowledge to integrate from the outside. A lot of pressure was put on vendors to open up their integration, and some did by producing an API. But again, this API was only defined from a single vendor’s perspective (albeit based on feedback from multiple customers). While a single customer might have had a voice, depending on their size and how much money they gave the vendor, the vendor still had to think about all of its other customers before releasing a new version of its application, and this often led to vendor lock-in. For some vendors this is the agenda; for others it is simply a matter of evolution. Is the vendor really to blame?
It is the responsibility of the enterprise to own its data, and the relationships to that data, not a vendor’s. But even when “enterprise” architects came along to address this problem, they weren’t seen as providing much value to individual business units, and their funding quickly came into question. I know many very smart enterprise architects who were given the axe simply because no one could work out how to fund them.
Then again, is it up to a single person to define all of the enterprise’s data, relationships and services? I am not so sure. An enterprise must take responsibility for its own data. It knows its business, and if a single application fits the bill perfectly (I have yet to see this happen), then why wouldn’t an enterprise use a single vendor? After all, that vendor probably knows more about the business domain than the enterprise does, given how many customers the vendor serves. But sadly, and often, a vendor’s own interests kick in to ensure its longevity.
Then enter architectural principles such as “reuse before buy before build”. A sound concept for ensuring previous investment is reused, but one that often leads an enterprise to fit a square peg into a round hole, or to lock in to yet another vendor for a partial solution. The last resort (heaven forbid) is for the enterprise to actually take final responsibility for its own data and relationships. Perhaps it is easier to blame a vendor?
Enter microservices. These are small, lightweight services dedicated to a single business concept at the enterprise level. As an example, a CRM system’s concept of a customer may not quite “fit” the definition of a customer within the enterprise. Does that mean the vendor application’s customer isn’t suitable? Perhaps not, but I would suggest that view of a customer doesn’t really address the enterprise’s own perspective of who its customers are. Sure, there are commonalities, such as cold/warm/hot leads, communication with a customer, or how to put a customer into a “funnel” to expedite sales. But none of them fit 100% with any one organisation’s definition of “customer”. Microservices are “bespoke” services (OMG) intended to represent the organisation’s own 100% view of a customer, and they can be written in a week. I do suppose such services are something to be afraid of. Or are they? They can be built quickly, and can offer everything around a particular enterprise-owned concept, like a customer or an incident.
So, let’s challenge the thinking of the past. Is bespoke code the problem? Perhaps. I agree that if a vendor “owns” your definition of customer, then it is easier to “blame” them when they get it wrong in your particular context. Then again, you own your customers; your vendors don’t. Why would you possibly outsource that definition?
Using bespoke code to “own” your definition of customer means you can control all the workings related to “your customers”. If such a solution can come into being in a week or two, and can be “thrown away” when you decide to change your definition, without forcing you to squeeze your definition into a single vendor’s view (based on its many other customers’ perspectives, which may or may not fit yours), why would you possibly lock in to a single vendor? If you do, I suppose it is better to decide that on the golf course with your vendor friends than to do what is right for your enterprise. Oops, did I say that? If I didn’t, I assure you your teams are.
I am not highlighting any of this to pick apart decisions that have been made; using vendors is fine for what they have to offer. But leave it at that! What I am actually suggesting is that you take ownership of your enterprise’s data and services, even if your service is just a wrapper over your vendor’s offering. Why? So that when the vendor changes based on general opinion, you aren’t hamstrung by their solution. Microservices are the way to do this.
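A sketch of what such a wrapper might look like: a thin mapping layer that translates a hypothetical vendor customer record into your enterprise’s own definition, so vendor changes stay contained in one place. All the field names below are invented for illustration.

```javascript
// Translate a (hypothetical) vendor CRM record into the enterprise's
// own customer definition. Yours will differ.
function fromVendor(vendorCustomer) {
  return {
    // The enterprise's own, stable field names
    customerId: vendorCustomer.CUST_REF,
    fullName: vendorCustomer.FIRST_NM + ' ' + vendorCustomer.LAST_NM,
    // Normalise vendor-specific codes into enterprise vocabulary
    leadStage:
      { C: 'cold', W: 'warm', H: 'hot' }[vendorCustomer.LEAD_CD] || 'unknown',
  };
}

// If the vendor renames CUST_REF tomorrow, only this function changes;
// every consumer of your customer service keeps working unchanged.
```

This is the whole point of the wrapper: the vendor’s shape is an implementation detail, and your definition of “customer” is the contract.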
As an integration expert who has defined many enterprise integration strategies, I have actually been advising enterprises to do exactly this for years. Ten years ago it was SOA, (single-vendor-supplied) ESBs, and heavy XML to enforce standards in this space. But in all of my dealings with large corporates, two very important principles were always agreed to: 1) layer your services, and 2) ensure loose coupling. I am sure not one reader of this article would disagree. Today it is not about a single architectural “hub” owned by a single integration-specialist vendor, but about “you” owning your data and relationships in services that are small, reusable, versionable (where two versions can exist at the same time, to account for any point-in-time change by a vendor), and isolated enough that you can grow with your customers and with your vendors alike, using disposable, throwaway services. This is what microservices are: the ability for your organisation to focus on the products and services that fit YOUR business by creating small services around individual features or capabilities of your business, rather than the technology.
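The “two versions at the same time” idea can be as simple as serving both definitions side by side and retiring the old one on your own schedule. The paths and record shapes below are illustrative assumptions:

```javascript
// Two versions of the customer resource coexisting in one service.
const handlers = {
  '/v1/customers/42': () => ({ id: 42, name: 'Ada Lovelace' }),
  // v2 splits the name to match a revised enterprise definition;
  // v1 consumers keep working while they migrate.
  '/v2/customers/42': () => ({ id: 42, firstName: 'Ada', lastName: 'Lovelace' }),
};

function handle(path) {
  const h = handlers[path];
  return h ? { status: 200, body: h() } : { status: 404, body: null };
}
```

When the last v1 consumer moves off, the v1 entry is simply deleted: a disposable, throwaway service version rather than a breaking change forced on everyone at once.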