About Me

Rohit is an investor, startup advisor and an Application Modernization Scale Specialist working at Google.

Saturday, November 10, 2018

Top 10 reasons for containerizing legacy COTS software

What does cloud transformation mean for legacy commercial off-the-shelf software, aka COTS? Yes, we are talking about rules engines, portals, commerce engines, BPMs, and other products from IBM, Red Hat, Oracle, and other vendors built on big application servers and other forms of middleware.

1. Operational efficiency provided by PKS for managing the upgrade of both the platform (K8s) and the COTS software. If done right, zero-downtime rolling updates and canary or blue/green upgrades can be done with minimal fuss. Advanced deployment policies provided by K8s enable the deploy and release of builds without impact on upstream/downstream systems, providing loose coupling.
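As a sketch, a COTS Deployment can opt into zero-downtime rolling updates with a strategy stanza like the one below (the workload name and image are hypothetical, not from any vendor):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cots-rules-engine       # hypothetical COTS workload
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # never drop below full serving capacity
      maxSurge: 1               # roll one extra pod at a time
  selector:
    matchLabels:
      app: cots-rules-engine
  template:
    metadata:
      labels:
        app: cots-rules-engine
    spec:
      containers:
      - name: rules-engine
        image: registry.example.com/cots/rules-engine:2.1   # illustrative vendor image
        ports:
        - containerPort: 8080
```

With `maxUnavailable: 0`, K8s only takes an old pod out of rotation once its replacement passes readiness checks, which is what makes the rollout non-disruptive to upstream and downstream systems.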

2. Infrastructure consolidation and better utilization of hardware via horizontal auto-scaling. Depending on the workload, you can pin certain COTS to certain nodes if hardware affinity is required. PKS/K8s has a better story for hardware affinity if the COTS needs special GPU, CPU, or memory resources.
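For example, if the COTS needs GPU-equipped nodes, a pod spec fragment like this expresses the affinity (the `accelerator` key is an assumed node label your operators would have applied, not a built-in one):

```yaml
# Fragment of a pod spec: schedule only onto nodes labeled accelerator=nvidia-gpu
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: accelerator          # hypothetical node label
            operator: In
            values: ["nvidia-gpu"]
```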

3. Most COTS vendors will move to a container-based deployment model in the future. Installers will become moot. This is the future. A K8s dial-tone is a must for all future deployments of COTS.

4. Through the various value-adds that PKS provides for logs, metrics, telemetry, clusters, fine-grained security, health-watch, and network micro-segmentation, you will get better uptime, stability, and resiliency of your COTS platform if it is deployed right. Monitor end-to-end to gain insight and make informed decisions using the insights provided by PKS. Kubernetes, along with BOSH, provides proactive system health and automated health management at the container level.
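Container-level health management hinges on probes. A minimal sketch, assuming the COTS exposes HTTP health endpoints (the paths here are hypothetical and depend on the product):

```yaml
# Fragment of a container spec; /health endpoints are illustrative
livenessProbe:
  httpGet:
    path: /health/live        # kubelet restarts the container if this fails
    port: 8080
  initialDelaySeconds: 120    # legacy middleware often starts slowly
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health/ready       # pod is removed from Service endpoints until this passes
    port: 8080
  periodSeconds: 10
```

The generous `initialDelaySeconds` matters for COTS: heavyweight application servers can take minutes to boot, and an impatient liveness probe will restart them in a loop.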

5. Developer benefits include more consistent environments. On-demand environments for COTS can be stamped out as opposed to manual provisioning. Putting it in a container allows deployment across zones and across different cloud providers, realizing the multi-cloud dream and avoiding big cloud vendor lock-in.

6. Putting the COTS in a container and deploying it with a K8s platform like PKS provides flexibility in dynamic routing and service discovery to start strangling functionality out of the COTS if so desired. This provides a gateway to modernization.

7. If the COTS is non-strategic, deploying it in K8s provides a long-term resting place that is coherent with the cloud strategy.

8. COTS in PKS plus microservices in PAS provides the right abstraction for running each workload at the right platform level. We recommend keeping the COTS vanilla and building customizations as Spring Boot microservices.

9. K8s updates every quarter. COTS vendors and everyone else are playing catch-up and moving to a faster, as-a-service upgrade cycle. As an IT organization, running COTS in PKS helps you get ahead of this curve. If done right, this pace is immense leverage for your organization.

10. Developers love containers and can run the COTS locally through a Docker image, whereas earlier they would have to spend many hours on arcane setup. They get all the goodness of Docker, i.e., OCI-compliant container images.

Top 10 challenges in containerizing legacy enterprise apps

1. Vendor Support: Independent of the K8s distribution (PKS, OpenShift, or GKE), the key question is the availability of vendor-supported images and deployment models via Helm charts or operators. Who will support us on the platform? Will we get a single throat to choke? See IBM's support model for containers here, and Oracle's WebLogic guidance for containers.

2. Upgrades, including Security: Upgrading and patching the COTS software and apps packed into the images. Since you bring your own image in Docker, it behooves the app owner to update the entire stack: OS, JVM, app server/middleware, and the app itself. Are best practices being followed for container creation?

3. State: Usually when containerizing legacy apps the intent is to do zero refactoring of the app itself, which leads to state embedded within the app. This complicates the deployment and day-2 ops of the app on K8s, affecting everything from autoscaling to liveness and readiness probes to the proper use of StatefulSets.
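When the embedded state cannot be refactored away, a StatefulSet with a per-pod volume is the usual fallback. A minimal sketch, assuming the COTS writes to a local data directory (all names, image, and paths are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cots-bpm              # hypothetical stateful COTS workload
spec:
  serviceName: cots-bpm       # headless Service giving each pod a stable DNS identity
  replicas: 2
  selector:
    matchLabels:
      app: cots-bpm
  template:
    metadata:
      labels:
        app: cots-bpm
    spec:
      containers:
      - name: bpm
        image: registry.example.com/cots/bpm:7.4    # illustrative vendor image
        volumeMounts:
        - name: data
          mountPath: /opt/cots/data                 # assumed state directory
  volumeClaimTemplates:       # one PersistentVolumeClaim per pod, surviving restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
```

Unlike a Deployment, each pod here gets a stable name (`cots-bpm-0`, `cots-bpm-1`) and its own persistent volume, which is what most state-embedding legacy apps implicitly assume.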

4. Plethora of choices: In the K8s world there are five ways of doing anything. Let's take service discovery, for example: do you want to use native DNS, ClusterIP services, Istio sidecars, environment variables, OSBAPI service brokers, or Eureka, with client-side or server-side discovery? Picking the right one for your technical stack is a drag that most ignore. These choices multiply as you consider logging, security, metrics, and so on. This is where PKS/Pivotal/AppTx can help. We know what works for you.
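To make one of those options concrete, the simplest, native DNS via a ClusterIP Service, looks like this (service name, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cots-rules            # DNS name becomes cots-rules.<namespace>.svc.cluster.local
spec:
  type: ClusterIP
  selector:
    app: cots-rules-engine    # matches the labels on the pods backing this service
  ports:
  - port: 80                  # port clients connect to
    targetPort: 8080          # container port on the pods
```

Clients in the cluster then resolve `cots-rules` (or the fully qualified `cots-rules.default.svc.cluster.local`) instead of hard-coding addresses, which is usually the least invasive option for a legacy app that already speaks hostnames.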

5. Day-2 App Ops: Very few understand how to operationally keep a fleet of K8s clusters alive. The care and feeding of your K8s clusters and their pods requires a level of operational maturity that is hard to visualize and estimate, and it is usually an after-thought.

6. ROI: Show me the money: There is a wave of buzzwords raining down on the industry right now: microservices, serverless, DevOps, containers, agile, etc. The return on investment from rehosting, refactoring, replatforming, or rebuilding/retiring an application is NOT clear. Developers follow mandates from the top. Is containerization really the right choice for your technical and business outcomes? Is this a strategic play or a tactical play? All these options need to be considered before a decision is made to containerize your legacy app. Pivotal AppTx has a structured funnel approach (see here) to make the right choice. What is the right choice from P to V to C [link]? Remember, CaaS is only a means to an end. If the end is unclear, you may not be making the right choices along the way.

7. Code Provenance: Dude, where is the source code? Sometimes the provenance of the legacy app's code cannot be established. The source code is owned by a third-party partner who has been maintaining the app for years. Development happens offshore, with only a few key customer coordinators managing the project from onshore. In such a situation the outsourced partner has little incentive to containerize and eliminate waste, because doing so translates to a material impact on consulting `$$$`. This is really a question of alignment of priorities between you and any major offshore partner.

8. Process: I am a big fan of the Theory of Constraints by Dr. Eliyahu Goldratt. The Theory of Constraints is a methodology for identifying the most important limiting factor (i.e., constraint) that stands in the way of achieving a goal, and then systematically improving that constraint until it is no longer the limiting factor. If you don't eliminate the top bottlenecks, you may be solving the wrong problem. How do you pick the right set of workloads to run at the right level of abstraction? Have you eliminated waste in your release management process? There is no point in optimizing the 20% of the time spent developing the software while keeping the 80% of the time spent in QA and release gates intact.

9. Resiliency: Often, packaging an app in containers changes the environment and the app's assumptions enough to have a detrimental effect on its stability. You have to be careful about how the application is dockerized and run in K8s. The inherent assumption in all container orchestrators is that the container is fungible and location-transparent. If the legacy app violates these constraints, then you are fitting a round peg into a square hole.

10. Skills: The subset of developers who understand cloud native, let alone Kubernetes and Platform as a Service, is small. K8s is a fast-moving target: significant features show up in releases, in alpha or beta form, every three months. It is critical that your application is written with cloud-native principles, making it cloud-agnostic and able to consume these platform features as soon as they show up. It is critical to architect and develop the legacy or greenfield application in the right way to ride the surf waves of K8s releases.