A Very Good WordPress Theme – Beaver Builder

The best WordPress theme I have ever come across is Beaver Builder. It is so good that I have dropped all other themes. What you can do with Beaver Builder in a matter of minutes, using its drag-and-drop page builder, puts all other themes to shame.

With Beaver Builder you can customise just about anything you want in a theme, from page layouts to pre-defined templates for both pages and content. Its Page Builder will get you to the point of building a professional WordPress website without all the headaches of learning code or CSS files.

Check out Beaver Builder, a strong recommendation from Lowery.com


Review of Doodly – Drawing Animations made easy

Doodly is a new software product available for Mac and Windows. It allows you to create drawing-based animations lightning fast by providing a number of pre-drawn scenes and characters, and it even allows you to customise a drawing path using any image you select. For hand-drawn animations it is simple and easy to use, and it includes pre-defined, royalty-free stock images and audio to give your animations a bit of ambience.

Doodly also lets you time a drawing, slowing it down or speeding it up to match the audio! It is truly a fantastic piece of software, and I use it personally for my own drawing-based animations.

Doodly comes in three pricing packages: Basic, Pro, and Enterprise. With the Pro version you get many more pre-defined scenes and characters (which are always being updated), and with the Enterprise version you get the ability to resell anything you make with Doodly.

When I first started with Doodly, I did find a number of bugs in the software, but when I contacted support these issues were fixed almost immediately. The next time I logged in on my workstation, Doodly prompted me to upgrade, and once I did, the issues I had reported to support had all been resolved. Very timely indeed.

I strongly recommend that you check out Doodly. Note that in order to obtain the Pro and Enterprise versions you will need to sign up for the Basic version first. The license for this desktop application is less than 100 for the Basic version. Also note that the Basic version's sales page may be closed from time to time while they work on the next version.

GET Doodly Today!

Review of Adobe Creative Suite

For professional graphic design and audio/video production, nothing beats Adobe. The Adobe suite contains some of the most advanced tools for the graphics, video, and audio professional.

Adobe has taken a different stance on their software. In the past it cost thousands of dollars to license even one of their products, such as Photoshop. They have now moved to a subscription service where you can get not only Photoshop but many other packages, including Audition, After Effects, and Premiere Pro, for a low monthly price.

Enter the Creative Cloud subscription. I use it personally, and have downloaded all the professional tools I need for the digital era. No longer do I have a huge cash outlay for Photoshop; I pay a small monthly price and get so much more. Here is what is available in the Creative Cloud subscription for $49 per month:

  • Photoshop
  • Audition
  • Premiere Pro
  • Dreamweaver
  • Illustrator
  • Acrobat
  • Lightroom
  • InDesign
  • Muse
  • Flash
  • Stock (photos and video)

And even more! You simply can't go past this. In the past, each of these packages attracted thousands of dollars in licensing, but now they are all available for a single monthly price. For the graphics, audio, and video professional, upgrade today: you still get the tools you love installed on your Mac or PC, but without the up-front cost.


Sign up today for your Creative Cloud Subscription.

Reactive Programming – managing streams and application state

Reactive Programming is all the buzz at the moment. It is similar to the iterator pattern, where you have a collection of values and iterate over them to obtain each value and make decisions on it. But iteration is a "pull" paradigm, which often costs more CPU cycles and requires holding the entire collection in memory while you evaluate it.

Reactive Programming uses the concept of an "observable". Whenever you have a value of some sort, whether on its own or part of a collection, you can observe changes to that value and have those changes "pushed" to an observer. This is far more powerful, because it can be done with "stream"-based code: for every value, an observer function handles the change, whether it is a single value changing or changes to an entire collection (one by one), without the need to hold the collection in memory or burn CPU cycles iterating over it.

In effect, it is an event-based paradigm built on single value state changes. If a value changes 1000 times over time, rather than holding a collection in memory and iterating over it, you respond to each change one at a time, updating any dependencies. Whether those dependencies mean updating the UI or restarting a server, each value change triggers the corresponding listener.

Observable programming is, in effect, much more efficient than iterator programming. Similar to promises, it allows code to execute in a controlled manner on each of these changes, but it also offers the option to cancel any dependent updates with some very simple function calls that respond to an observed change.

To start at the very beginning, let me explain what an "observable" is. It is simply an object that emits one or more values over time. It works with a "next" handler (similar to the next item in a collection when using an iterator), a "complete" handler (fired after the stream has successfully finished emitting values), and an "error" handler (fired if something goes wrong). An "observer" is the object that supplies these "next", "complete", and "error" handlers.
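As a sketch of that shape (this is illustrative plain JavaScript, not any particular library's API; all names here are mine), a minimal observable might look like this:

```javascript
// A minimal observable: subscribe() wires an observer's handlers
// to a producer function that pushes values one at a time.
function createObservable(producer) {
  return {
    subscribe: function (observer) {
      producer({
        next: function (value) { observer.next(value); },  // push one value
        complete: function () { observer.complete(); },    // stream finished
        error: function (err) { observer.error(err); }     // something went wrong
      });
    }
  };
}

// A stream that pushes 1, 2, 3 and then completes.
var numbers = createObservable(function (emit) {
  [1, 2, 3].forEach(function (n) { emit.next(n); });
  emit.complete();
});

var received = [];
numbers.subscribe({
  next: function (n) { received.push(n); },          // handle each pushed value
  complete: function () { received.push('done'); },  // fired after the last value
  error: function (err) { console.error(err); }
});
// received is now [1, 2, 3, 'done'] -- the values were pushed to us, not pulled
```

Notice the observer never asks for the next value; the producer pushes values into its handlers as they become available.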

It is similar to a promise, but differs in that you can cancel a pending operation (a promise only responds with "on success" or "on failure"; once in progress it cannot be stopped or discarded). An observable can also transform data before the handlers fire: injecting data into a stream, filtering on specific data, and combining with other streams before the success handler is invoked.

Using observables is much more efficient (since they behave like streams and don't hold collections), and particular operations can act on multiple streams. This is useful for combining streams together, filtering data along the way, and even stopping at an event like a "mouseup" before the entire stream is sent to the success handler. It suits anything that requires I/O (like a server or database request), an animation, or calling a render function once the streams have been processed. In essence it is a much better way to handle state changes in your application and to ensure things happen when you want them to happen.

For example, a basic observable pattern might be to watch for a mouse down, call the mouse move handler on every mouse move, and on a mouse up stop processing all events (even those in progress), allowing you to perform final cleanup on a drag operation. Similarly, it can be used for reading data: retrieve the data from one table, filter it against another table, and finish the operation only once all data has been retrieved, while still inspecting particular records along the way.

Thinking in streams can be daunting at first, but it doesn't have to be. It does take a different way of thinking: rather than sequential patterns (if-then-else), you ask "what outcome do I want, what data do I need to get there, and what operations do I need to perform until I get there?". A little hard to grasp at first, but with practice your code only fires when it has to, without holding large amounts of data, and you have the freedom to write uncomplicated code. Better yet, it lets you write asynchronous code in a synchronous style, which helps with debugging.

There are two things to be mindful of when using Reactive Programming: working with changes to state data, and the "side effects" of that data. An example of a side effect is updating a database, or rendering to a screen. These are not to be confused with responding to events over time, or mapping and filtering data to determine when the "next" handler fires and when a transaction of handlers is considered "complete". A side effect is simply that: a side effect, after you have processed your streams.

To distinguish between stream handling and side effects, think of it as "what do I do mid-stream using pure functions (e.g. a value computed from first and last name)" versus "what do I do with the final stream state (e.g. update a contact record in a database, or a contact details page in the UI)".
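A hypothetical example of that split (the function names and data are mine, purely for illustration):

```javascript
// Pure, mid-stream: computes a value from its inputs and touches nothing else.
function fullName(contact) {
  return contact.first + ' ' + contact.last;
}

// Side effect: acts on the final stream state. Here an array stands in
// for a database update or a UI render.
var savedRecords = [];
function saveContact(contact) {
  savedRecords.push({ name: fullName(contact), saved: true });
}

var contacts = [
  { first: 'Ada', last: 'Lovelace' },
  { first: 'Alan', last: 'Turing' }
];

// Mid-stream processing with a pure function: no outside state touched...
var names = contacts.map(fullName);

// ...then the side effect once, at the end of the stream.
contacts.forEach(saveContact);
```

Keeping the pure computation separate from the write makes the mid-stream steps trivially testable and leaves the side effect in exactly one place.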

To summarise all of this, Reactive Programming allows you to "react" to mutations, even in an immutable way, and to fire only the code you want, when you want it. This can lead to a significant improvement in performance, because you are optimising memory, CPU cycles, and I/O along the way.

Some libraries are available today to help you along, such as RxJS and Bacon.js. While these libraries are very good, they will require you to think differently in your approach, but this is well worth the effort on your journey and will bring substantial benefits.

Learn how to program using Reactive Programming today. You will be very glad you did.

Creating dynamic classes in javascript ES6

I have often wondered whether it is possible to dynamically create classes on the fly in JavaScript. Of course, using prototype inheritance, it is possible to do this simply by adding prototype methods to an object. But what if the class you are creating at runtime doesn't actually know what methods it has to implement until an external source provides them? Further, in today's day and age, using ES6 or ES7 with Babel, how can these classes be dynamically created on the fly from external implementations?

As it turns out, it is actually quite easy. Here is a code example:

let mymethod = 'myMethod'

class myClass {
  constructor() {
    console.log('class constructed')
  }

  // the method name comes from the variable, not a literal
  [mymethod]() {
    console.log('dynamic method called')
  }
}

var my = new myClass()
my.myMethod()   // "dynamic method called"

This is great. It is a way of creating methods in a class without knowing the name of the method up front; the name comes from a variable instead. But this is still fairly limited, and why bother if the implementation of the method is already known in the file (via require/import or as a function in the JavaScript file)? What if the actual implementation of the method is stored elsewhere? Now this is a challenge. One way is to copy the methods onto the object itself. This can be done like so:

var methods = {
  'increment': function() { this.value++; },
  'display': function() { console.log(this.value); }
};

function addMethods(object, methods) {
  for (var name in methods) {
    object[name] = methods[name];
  }
}

var obj = { value: 3 };
addMethods(obj, methods);
obj.display();    // "3"
obj.increment();
obj.display();    // "4"

Another way is to add the methods after the fact to the prototype chain, like this:

// A plain object literal has no .prototype property of its own,
// so the method is attached via a constructor function instead:
function MyObj() { this.value = 3; }

MyObj.prototype.mynewmethod = function(myparam) {
  // do something, return something
};

var obj = new MyObj();
But this is ES5 JavaScript, not ES6. While ES6 is largely "syntactic sugar" that transpiles to ES5 (using Babel, Traceur, etc.), it is still valid code. What it does is take an object containing functions (with dynamic names) and map them onto another object's prototype chain. This is a good way to add functions to an existing object.

But what if you don't know the method's implementation until run time (e.g. you download a method or function from somewhere else)? This is where things get hairy. Because you don't control the "source" of the method implementation, you have to trust that this source is trustworthy. Security issues come largely into play here. JavaScript is a great language, especially with the ES6 "sugar", but if you don't understand the prototype chain it is a bit of a double-edged sword.
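As an illustration only (this is exactly the risky pattern being described, so never do it with untrusted input), a method body that arrives as a string at run time can be turned into a real method with the built-in `Function` constructor:

```javascript
// Pretend this string arrived from an external source at run time.
var externalSource = 'return this.value * 2;';

var obj = { value: 21 };

// Compile the string into a function and attach it under a dynamic name.
// WARNING: Function() executes arbitrary code -- only do this if you
// completely trust the source of the string.
var methodName = 'double';
obj[methodName] = new Function(externalSource);

console.log(obj.double()); // 42
```

This is the security trade-off in a nutshell: maximum flexibility, in exchange for running whatever code the external source chose to send you.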

Containerization: Docker

I want to show some details on how to build a microservice using NodeJS, and deploy to docker containers within any cloud.

If you haven't used Docker before, it is fairly simple. It is a technology that allows you to run an "application container" within an existing server (physical or virtual). It is almost like having a virtual machine within a virtual machine, except that Docker uses the host's OS to run the application. The difference is that you can isolate your application code into a container that can be deployed anywhere.

Another great feature of Docker is that it is a sandbox for your application. Everything in a Docker container is self-contained. In a traditional VM, especially one that hosts multiple applications, apps are normally deployed to multiple directories on the host. If these applications share resources, then moving an application from one server to another can break it on the new host if you don't also make sure its dependencies are installed.

Enter Docker. Docker allows you to deploy everything your application requires in a minimalistic way, bundling up the application stack. This means you can build a Docker image with everything it needs to run, and take that "box" and deploy it anywhere. It uses a layered file system, so you can also grab and auto-install any deployment code you want, even from a repository like Git. In doing this, however, it focuses on your application dependencies; it won't contain the OS files.

For a proper distribution of a Docker image, the only contents of the container should be your application and its dependencies. If your host OS is Ubuntu, your Docker container won't carry the Ubuntu kernel or other base OS files, only whatever is needed over and above the host to make your application work.

NodeJS is a server-side tool that brings JavaScript to the server. While JavaScript traditionally runs in the browser, given that it is the language of the web, NodeJS is built on the V8 engine that Google Chrome uses. Since NodeJS is not a browser application but rather runs code on the server, it uses "modules" that can be shared between JavaScript running on the server and JavaScript sent to the browser. Another exciting use of Node is to build a universal or "isomorphic" application, which allows your code to run on both the browser and the server with little difference. This is great for SEO and indexing, while still keeping your application lightning fast, particularly when using an SPA technology like Angular, React, or Backbone to run most of your code in the browser.

A few things to note about NodeJS. It runs as a single process, which means it runs on ONE thread on the server. However, it has an event loop that lets Node applications call functions asynchronously via callbacks, which can make it perform much faster than a traditional application for I/O-heavy work. This is the magic of NodeJS.

Once an application is built using Node, it can be deployed anywhere using Docker, on any virtual image in any cloud, and as long as all of its dependencies are contained within that black box, it will just work. It is possible to include all the dependencies within a Docker image to minimise deployment issues, but dependencies such as database technologies often don't make sense to include in an application stack, simply because they are shared. That doesn't mean they can't run in a Docker image themselves!

Docker is changing the world. Google has released a container orchestration system called Kubernetes, which gives you a lot more control over clustering and production-ready deployments. I suggest you check both of them out. By the way, each Docker image defines a startup command that runs inside the container when it launches. Automated deployments with minimal footprints, anyone?

Here are some links you might be interested in:

Docker: Docker Website

NodeJS: Node JS Website

Tutorial on setting up Docker and NodeJS, with other Docker images running mongodb, redis, logs, and other dependencies all saved on the host from Docker containers.

Example of a React Isomorphic application using ES6 and Facebook React

The world of IT changes all the time. New technologies evolve, sometimes overnight, that have the potential to change the world. Facebook React and Flux are two examples. These technologies are still evolving, but they have matured enough for me to write a how-to guide for using them to create pages that are rendered on either the server or the browser. Why is this important? Because it means you can finally create real-time applications that not only work on both, but will also be seen by the major search engines.

Never before has the time been so right for us as developers to create these kinds of applications, exposed to the web as a whole without network latency being an issue. This helps a lot with performance, but also with code maintenance. Why? Because you only require developers who know JavaScript, one language that can do both. For more information on isomorphic applications, please refer to my earlier post.

Now on to bigger and better things: an example. I have been working with NodeJS, React, Flux, and SPA apps for quite a while. Frameworks like Angular brought high-performance SPAs onto the scene, but they leave a lot to be desired, particularly on the learning curve. React and Flux from Facebook change all that, by allowing developers to create UI "components" with their own state, and "stores" holding the data each component needs, re-rendered on a change in a high-performance manner.

New Business Applications using Javascript, SPA, and Isomorphic applications

With the invention of SPAs (Single Page Applications), JavaScript has become the primary language of choice, simply because it is really the only programming language that browsers understand. This single language has surpassed all others, given the massive exposure it has on the web through browsers.

In the past, JavaScript was dismissed as a simple or non-language because of the way it does things. Concepts like polymorphism, inheritance, and abstraction, familiar to most developers, don't appear to be in the language. That isn't true, however: JavaScript is a true object-oriented language, it just does things differently.

Enter ES6. This is the newly accepted standard for JavaScript. The way it does things is a lot closer to traditional programming languages, even though it is just a bit of syntactic sugar on top of the original language to make it a little easier for developers. ES6 has only recently been approved as an official release, and it will take time for browser vendors to catch up with the change, given that their products are used by billions of people worldwide.

However, this isn't a real problem today. Transpilers are available that translate ES6 code to ES5. Is this a performance issue? Not at all; Java does something similar by compiling Java source into bytecode, as does .NET by translating C# into code for the CLR. At the end of the day, does it really matter? How a machine executes your code is quite different from how you write it with the myriad of developers you have. Maintainability of that code, and the ability to react to issues, matters much more, given that bugs and issues cost your teams the most time.

Personally, I really like NodeJS. It allows developers to write server code in JavaScript. It is based on Chrome's V8 engine and offers a set of modules that run server-side rather than client-side, which means you don't need multiple resources to maintain multiple projects written in different languages. Want a web server that listens on HTTP, HTTPS, and WebSockets? It can be done in a very small amount of code.

Where these technologies really shine, though, is in sharing JavaScript code between the server and the browser. While there are some differences between JavaScript on the server and in the browser, the language is the same, and new applications supporting both are arriving very quickly. The term for these applications is "isomorphic": the same code in the same file can be executed in either environment.
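A sketch of the idea (a hypothetical module of my own, not from any framework): the same file detects its environment and runs in either place.

```javascript
// greeting.js -- an "isomorphic" module: the logic is shared, and only
// the environment check differs between server and browser.
function environment() {
  // browsers define a global `window`; Node does not
  return (typeof window === 'undefined') ? 'server' : 'browser';
}

function greet(name) {
  return 'Hello ' + name + ', rendered on the ' + environment();
}

// Under Node this prints "...rendered on the server";
// the identical file in a browser would print "...rendered on the browser".
console.log(greet('world'));
```

One file, one language, two runtimes: that is the whole promise of isomorphic code in miniature.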

This is a BIG deal for project teams, whether at a small mom and pop shop, or a huge enterprise.

The focus of today's blog post is to highlight these new technologies, especially allowing code to be executed on either side and still run the same way, whether on the server or in the browser client. A lot of interest has been put into SPA apps, and for good reason: they perform almost in real time, whereas most server-side applications need a request and response for every page re-render. Server-rendered applications have traditionally been slow performance-wise, and they leave change handling up to the developer (which means it can be buggy). Not ideal, yet this way of creating web applications has been the "norm" for quite some time.

Then came "Ajax". This used XML to send "snippets" of data to the server and get a response without re-rendering the whole page. Libraries like jQuery came onto the scene as a step towards a more real-time web client user experience. However, XML is verbose, and Ajax was simply a band-aid for the performance issue.

At the time it worked quite well. If an up-to-date bit of data was needed to re-present that data, a simple Ajax call obtained it and let the client-side application work out the rendering. A great start, but not quite enough.

SPA applications changed all that. Developers could now focus on applications that run in a browser in real time, without worrying about the time it takes to re-render a page, simply because page renders weren't needed from the server: the client browser had all the code available to perform these changes. This was quite an advance for web-based applications. It means real-time applications can be built even though the browser doesn't have direct access to server-controlled data.

The issue here is that web browsers only care about the data to be presented, and JavaScript as a language has traditionally been used for working with the user. As a language it has everything required to be fully fledged, but because it does things "differently" from server-side languages, it was difficult to work with and often led to developers pulling their hair out. Well, with ES6, React, Flux, and Node, that isn't really an issue anymore. A whole new developer experience has emerged, one I believe will be valuable not only to IT professionals but to business units as well. This translates to cut costs and timeframes, and applications that perform incredibly fast can be built in weeks rather than months or years. Try that on for size. Your development resource needs shrink, and your business needs can be turned around in mere months. You wanted the IT world to listen? We have, by making things much faster with fewer resources. Perhaps it is time to start investing in IT again, and to stop fearing it just because you don't understand it.

In my next blog post I will provide an example that can be used by developers to do just this. Stay tuned.

An introduction to React and Flux by Facebook

React is a great new library from Facebook. It takes a little getting used to, and sometimes gives a little "yikes" factor, simply because it challenges the normal, accepted way of constructing applications. The biggest "yikes" is that it challenges the Model-View-Controller concept by implementing components that include both JavaScript and HTML. Traditional systems, whether server-rendered or client-rendered, usually frown on mixing the presentation layer with the business layer, and with an MVC-style approach to web development these concerns are separated between presentation, business logic, and data models.

However, React challenges this idea only to the extent of observing that these areas of concern have traditionally mirrored the shape of a development team (UI designers, business logic designers, and data designers). Their challenge is timely, especially in an Agile development world, because most designers don't even code to the HTML or stylesheet level, but leave that up to developers.

Also, the areas of concern in Facebook's view map more to business functionality. I tend to agree. If a component is to be developed for, say, a messaging system, there will be a clear business owner for it, and to meet their requirements the messaging system will have a particular UI and way of interacting with users, regardless of who is employed to build each part of the system. Since developers understand HTML and stylesheets anyway, why separate these concerns between a UI designer (who is more focused on mockups) and the business logic? Further, if business logic is needed just for presentation, why should a UI designer know or care about some programming language used to implement the business rules they define? They shouldn't.

To further this concept, Facebook's React framework challenges the MVC model because they feel it just isn't scalable. Again, I tend to agree: an MVC model can get out of hand over time as dependencies from one MVC to another increase, causing a chain reaction of support issues.

To address these concerns, Facebook came up with React. This is a great framework that makes development a lot easier by implementing UI components that maintain state. This is the state of the UI, not the state of the data, and hence mixing HTML fragments into a React component (something developers do anyway) is not such a bad thing. In fact, it is a great thing, because it lets the developer think in UI components, keeping the UI logic within the component and not mixing it up with actual business logic.

Enter Flux. Flux is yet another thing the Facebook folks have created. It is not a library like React, but rather an architecture that allows developers to separate concerns within a project team. Unlike a traditional MVC-style architecture, they feel that data state should flow one way: from the source of the data change, through to the actual presentation, and then to the data layer to persist the change. Hey, not bad at all. This removes the complexity of rendering up-to-the-second changes in the data. They do so by re-rendering the entire UI component. Oops, yet another "yikes"!

But, like other high-performance applications, the re-render is only a delta change rather than an entire rewrite. Enter the React virtual DOM. As in high-performance systems like game engines, rendering only the part of the screen that actually changed increases performance enormously. React achieves this by maintaining a virtual DOM, and when it comes to expensive real DOM operations, it performs a diff to find the delta and renders only those areas. The state of the data, and even the presentation state, remains intact immediately after a change, and the rendering system is told to re-render just those UI changes by way of the delta. This is a developer's dream, since they don't have to write complicated logic to keep the presentation in sync with changes to the data of any "area of concern" (business unit).
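To illustrate the diffing idea (a deliberately naive sketch of my own, nothing like React's actual implementation), imagine a "virtual DOM" as a flat map of node ids to content, and a diff that collects only what changed:

```javascript
// diff() compares two "virtual" trees and returns only the changed
// entries, so the (expensive) real rendering only touches the delta.
function diff(oldTree, newTree) {
  var changes = {};
  for (var id in newTree) {
    if (newTree[id] !== oldTree[id]) {
      changes[id] = newTree[id];
    }
  }
  return changes;
}

var before = { header: 'Inbox (3)', body: 'message list', footer: 'v1.0' };
var after  = { header: 'Inbox (4)', body: 'message list', footer: 'v1.0' };

console.log(diff(before, after)); // { header: 'Inbox (4)' } -- only the delta
```

Even in this toy version, the body and footer are never touched; only the header would be handed to the renderer. React's real diff works over trees of components, but the payoff is the same.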

Flux is not a library, but rather an architectural style that keeps up with state changes. It allows state to change instantaneously and on the fly, without compromising what needs to be rendered as a result. It does this by defining actions that UI components can perform; these actions, when tied to presentation data stores, let the UI "react" instantly to a data change. The keyword there is "presentation data", not the actual data. The application may still persist changes to a database, but that is not what a Store worries about: it only cares about the state of the data at the time it is rendered, and it announces changes to UI components so they can re-render.

What does this mean for business owners? A great deal. The business owner can focus on their business data and the real-time presentation of that data, and leave the techie stuff to the techies, whether a UX designer, architect, developer, or tester; each part of the software development lifecycle is handled by the right person, allowing the business owner to focus on the features required from their area. Perhaps it is time to see things differently? As Facebook says, give it 5 minutes, and it might just make sense. After all, they deal with billions of users; why shouldn't we give what they have to say just 5 minutes?

Managing a cloud world

Managing processes and services in a cloud architecture is not a trivial task. It requires executing processes in dispersed locations, across dispersed virtual servers running on dispersed physical machines. It can be an operations manager's nightmare to organise all of these running processes while reducing costs for the organisation and still "keeping the lights on".

Developers rarely think about this while they are developing an application, nor should they; they are focused on the functionality. But the moment they want to deploy, operations becomes involved, and these questions are expected to be answered by the development teams. Whoa, hold on a minute! They are not operations! So a divide occurs, where operations is pitted against the developers. The developers work for the business unit and develop to its requirements, but business people don't give a toss about any of this. The great divide. Sure, architects were hired to bridge the gap, but even they didn't understand all of the issues on both sides.

In today's world, however, this bridge has been crossed with a DevOps approach. Applications are now architected to address the interests of both operations and the business units. Business units want a solution that is always up, even in the event of a disaster, and rely on operations to ensure this happens. Operations doesn't know the business requirements (and those requirements get vaguer by the minute), so how can operations take over the support of a solution that addresses them? The only way is to make sure the application's design is good enough to allow for scalability, reliability, availability, and so on. Again, enter the architect.

Far too often I have been in an organisation that questions the need for an architect from a cost perspective. And perhaps they are right: we don't live in a technology world that requires a navigator anymore (or do we?). The tools are there. Does this mean architects are no longer required? Hardly, but with the right architecture all of these gaps can be addressed in a single design.

The point of this article is to talk about Process Management, but what exactly does the above have to do with it? Process management is usually an operations task, isn’t it? Well, no. It is everyone’s task to get this part right, and it may not be as hard as you might think. Sure, containerisation and microservices offer a new way of thinking, but there are already tools today to ease the pain of setting all of this up. In the old days, when only physical machines existed, it was just a matter of creating a VIP on a load balancer as the entry IP address (internal or external facing), but that led to a load balancer for each application, and created a nightmare in managing all of those web servers. If they only allowed traffic on ports 80 or 443, that was fine, but as soon as a new port was introduced, in came the cavalry, from security to firewall configurations, and for big corporates that exercise is not an easy road to navigate.

Enter the day of virtualisation, where multiple servers can run on a single large enterprise-grade machine; give each virtual machine an appropriate amount of RAM and CPU cores, and voilà! A single physical machine can now host multiple virtual images, significantly reducing the physical footprint and extending a single machine’s power horizontally. Need more CPUs or RAM? Simply buy another big mother of a server, and capacity expands tenfold.

From a funding perspective, however, this wasn’t ideal either. When an application came along that exceeded the capacity of a single host server, that project would be charged for the new host, and this cost was NOT TRIVIAL; or it was put into an OPEX arrangement where operations bought the server and sliced the cost across the business units. Still not ideal.

Today, the buzz is all around “containers”. These are lightweight sandboxes that share the host’s OS while isolating the application, so the application can be spread across multiple hosts, each exposing an API that can be accessed and load balanced, and each able to live deep enough in the infrastructure that ports don’t necessarily need to be opened on the firewall. Even with today’s cloud-based technology, where one virtual server could be in the US and another in Asia Pac, each server can run a copy of the container, acting as a single application across multiple diverse locations. This is a significant step forward, particularly when physical space is an issue for you. However, this new way of thinking does require a new form of architecture, one that manages processes and containers efficiently.
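To make this concrete, packaging a small Node.js service into a container can be as simple as a short Dockerfile. This is only a loose sketch; the base image tag, port, and file names are assumptions for illustration:

```dockerfile
# Build on an official Node.js base image (tag is illustrative)
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and declare the listening port
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```

The same image can then be run on any host, in any region, giving you those identical copies of the application described above.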

Enter process management: the ability to manage processes to do exactly this, regardless of where each process is running. Node.js is a great technology that offers efficient IO using the Chrome V8 engine, but that in itself is not enough. Node.js runs your JavaScript single-threaded on a single core, so how can the platform be extended to use all of the cores and memory offered by a single virtual server? This is where process management comes in. It is particularly important when you are dealing with newer technologies such as Docker, where containerisation within a virtual server is the concern.

I have looked at a number of options in this space, and my two favourites are PM2 and StrongLoop. StrongLoop is the paid commercial solution and is probably best in breed, but PM2 is also a contender, as it manages processes within a single container. While each of these has its own strengths and weaknesses (namely functionality versus cost), StrongLoop is a leader in this space given its ability to put a process manager on each container, communicate with each container regardless of where it is deployed, and manage them all from a single UI. PM2 only offers process management within a single virtual server, or a single container on that server. But it still has its advantages.