About Me

Rohit leads the Pivotal Labs App Modernization Practice, spanning engineering, delivery, training and cross-functional enablement, tooling, scoping, selling, recruiting, and marketing, along with blog posts, webinars, and conference sessions. He has led multiple enterprise engagements, including some featured in the Wall Street Journal, and focuses on designing, implementing, and consulting on enterprise software solutions for Fortune 500 companies undertaking application migration and modernization.

Wednesday, August 19, 2020

Why Hybrid Cloud?

# Why Hybrid Multi-Cloud

On-prem, AWS, GCP (plus Alibaba Cloud, IBM Cloud, Oracle Cloud, ...)
1. Insurance against vendor lock-in
2. Leverage the power of hyperscale cloud providers
  - on-demand IaaS
  - value-added services
3. Regulatory requirements
4. Disaster recovery across providers
5. Oracle, VMware, and IBM legacy data-center incumbency
  - VMC on AWS, VMC on GCP, VMC on Azure
  - zero-change VM migration
  - a VMC control plane for VMs
6. Public cloud penetration is still only 2-10% of workloads
7. Proliferation of platforms, each promising a SINGLE multi-cloud control plane
  - Cloud Foundry
  - Mesosphere
  - Kubernetes (Anthos, TMC, Arc, Crossplane, ...)

# Replication: Edge <=> Central

  - Edge workloads are projected to make up 75% of cloud workloads
  - "Call back home" edge architecture
  - Replication for offline-online operation
    - data constraints
    - network constraints
  - state management
  - Use cases: cell towers, POS systems, autonomous robots (see the sketch below)
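
To make the offline-online replication note concrete, here is a minimal store-and-forward sketch in Java. Everything in it is a stated assumption (the string event type, the hypothetical CentralClient transport); the point is only the shape of the pattern: edge writes land locally first, and queued state drains to the central site when the network allows.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Minimal store-and-forward buffer for an edge site: writes always land
 * locally first, then drain to the central site when the link is up.
 */
public class EdgeBuffer {
    private final Queue<String> pending = new ArrayDeque<>();

    /** Record an event locally, whether or not we are online. */
    public synchronized void record(String event) {
        pending.add(event);
    }

    /** Drain queued events to the central site; stop at the first failure and retry later. */
    public synchronized void flush(CentralClient central) {
        while (!pending.isEmpty()) {
            if (!central.send(pending.peek())) {
                return; // network constraint hit: keep the event and try again later
            }
            pending.remove();
        }
    }

    /** Hypothetical transport to the central site (HTTP, MQTT, a replicated cache, ...). */
    public interface CentralClient {
        boolean send(String event);
    }
}
```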

# Stateful workloads - Databases

 - Operational pain for engineers
 - containers vs. VMs (legacy)
 - Operators
 - StatefulSets
    - CSI
    - Portworx
 - performance reasons for staying on a VM
 - self-service, faster changes, choice
 - data resiliency, backup, and DR (BCDR)
 - GemFire replication (pattern)

# Best Practices on active-active for data

https://tanzu.vmware.com/content/white-papers/multi-site-pivotal-cloud-foundry-deployment-topology-patterns-and-practices

Monday, August 17, 2020

Modernizing Powerbuilder Apps

If you have a massive DataWindow system baked into a PowerBuilder client application persisting to an Oracle database, you are likely in a world of pain. This is a business-critical system that generates billions of dollars of revenue. Multiple attempts to modernize it have cratered due to big-bang, vendor-product-driven, technology-infatuated soirees. You want to dip your toes into the modern, cloud native, microservices, developer-friendly world but have been burnt twice already. You have no interest or inclination in fielding yet another "rewrite the system from scratch in two years" sales pitch. What the hell do you do? Hit the bottle? Pay Oracle another $25M in licensing fees? Put a mask on and upgrade PowerBuilder to 2019 R2? Is there a way out of this black hole?

Yes. But it ain't easy.

First, acknowledge that this is a tough task. The short-cuts were already taken, which is why we are at this F*#$ed up point.

Here is a path that can be trodden. A couple of the smart guys in data, Alastair Turner and Gideon Low, have heavily influenced my thinking on this topic.

First, figure out whether the primary driver for modernization is cost or productivity. Depending on the answer, a different set of strategies needs to be followed.

Cost

Let's assume all your customer support professionals are well versed in the screens and actually love the DataWindow UI. The application is functional and can be enhanced quickly in production. The only issue is the licensing cost associated with running PowerBuilder. In such a scenario, migration is perhaps the better option, i.e., migrate all the data to Postgres. Check whether your SAP or Appeon version of PowerBuilder supports PostgreSQL as a backend. You might be so bold as to consider migrating your DB to the cloud with the Amazon Database Migration Service.

Depending on the cost, you may choose to use code generators that auto-generate Java or C# code from PowerBuilder libraries. Both Appeon and Blu Age have such tools; however, buyer beware: any tool that charges for modernization by LOC is immediately suspect. Code generators are like Vietnam - easy to get in, hard to get out.

Productivity

You want to develop new features and microservices and expose APIs to newer channels and other consumers of the service. Here you have a fork in the road. 

1. Upgrade to the latest GA version, PowerBuilder 2019 R2, and begin an expensive project to RESTify existing DataWindows as web services. The limitation of conversion tools is that you don't really get a chance to identify and fix various classes of important problems; you don't pay down your technical debt. This is like trading one form of debt for another, like replacing your high-interest debt from Visa with slightly lower-interest debt from Capital One. What's in your wallet?

2. The RIGHT way. Start by cataloging the business use cases and rebuilding each one. The legacy system's only real role is to validate that behavior hasn't changed. If you can't get the business rules from the business, you will need to reverse-engineer the stored procedures and persistence using ER diagrams, SchemaSpy, or the Oracle dependency catalog views to determine the object dependency tree (a minimal sketch of that query follows below). Visualization tools can't hurt, but SO MUCH is usually trapped in the stored procedure logic that their use can be as much a hindrance as anything. A good way to visualize the entire business process is to leverage techniques from the app world like Event Storming or user journey, workflow, and interaction mapping.
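
As a concrete starting point for that reverse-engineering step, here is a hedged sketch that walks Oracle's ALL_DEPENDENCIES dictionary view over JDBC and prints the dependency edges. The connection URL, credentials, and schema name are placeholders for illustration; adjust them to your environment and feed the output into whatever graph tool you prefer.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DependencyWalker {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; requires the Oracle JDBC driver (ojdbc) on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app_user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT name, type, referenced_name, referenced_type "
               + "FROM all_dependencies WHERE owner = ? ORDER BY name")) {
            ps.setString(1, "APP_SCHEMA"); // placeholder schema
            try (ResultSet rs = ps.executeQuery()) {
                // Each row is one edge of the object dependency tree.
                while (rs.next()) {
                    System.out.printf("%s (%s) -> %s (%s)%n",
                        rs.getString("name"), rs.getString("type"),
                        rs.getString("referenced_name"), rs.getString("referenced_type"));
                }
            }
        }
    }
}
```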

There is no substitute for hard work, and no real substitute for case-by-case refactoring. Start by documenting the problem, visualizing it, and identifying the starting points. Thereafter, pick a particular steel thread, aka an end-to-end slice, and identify the right set of APIs. Leverage the tactical patterns from chapter 4 of Sam Newman's book Monoliths To Microservices for decomposing data and pulling out services.


Start the journey of 10,000 miles with the first step. Start small, iterate, and demonstrate value to the business at the end of each iteration. This is the only way to succeed in the long term with any modernization of a critical, complex system at scale.

Good Luck!!


References

See how ZIM Shipping moved from .NET/PowerBuilder development to Spring Boot with Tanzu Application Service

Common Process Challenges When Breaking Monoliths

Three difficult challenges we often come across when our team works with clients as they try to break monoliths:

1. Silver Bullets - Enterprises have been burnt by vendor solutions that promise seamless migration with a tool like BPM or some silver-bullet methodology. The truth is that disentangling your monolith's code and data is going to get messy. The entire business process will need to be disaggregated and visualized, and seams will need to be identified to create a blueprint for a target architecture. Other than Pivotal/VMware, no one else has done this at enterprise scale. Our approach modernizes the monolith incrementally, with demonstrated business value in weeks, not years.

2. Over Engineering - It is common to get distracted by technology choices and deployment options rather than focus on the difficult work of understanding the core domain: identifying the business capabilities and assessing whether they are core or supporting domains. Do what is sustainable and what your average (not rockstar) software engineers can support, and focus on the outcomes.

3. Pressure Cooker - When clients continuously change their priorities, lose faith in the progress, or micromanage the design, they subvert the process, and the target architecture ends up looking like the old system. Breaking monoliths with Domain-Driven Design is like landing a plane from 30,000 feet: you cannot skip phases and go straight to user story backlog generation with somebody else's domain model or DDD process. Don't short-circuit steps in the process. It is critical to follow the strategic and tactical patterns and land the plane with a gradual descent to an organized user story backlog.

Saturday, August 15, 2020

Eight Pitfalls Of Application Modernization (m11n) when Practicing Domain Driven Design (DDD)

1. High Ceremony OKRs

An application modernization effort is necessarily broad in scope. Overly constraining the goals and objectives often introduces premature bias and can lead to solving the wrong problems. Driving out premature Key Results means you end up tracking and optimizing for the wrong outcomes. If you are uncertain of the goals of modernization, or if this is your first time refactoring and rewriting a critical legacy system, it is best to skip this step and come back to it later.


2. Mariana Trench Event Storming

There are multiple variants of Event Storming (ES): big picture, detailed system design, greenfield exploration, value stream mapping, journey mapping, event modeling, etc. Beware of going too deep into the existing system design with ES when modernizing a system. If the intent of ES is to uncover the underlying domains and services, it is counterproductive to look at ALL the existing constraints and hot spots in the system. Model just enough to get started, to understand and reveal the seams of the domain. If you go too deep with Event Storming, you will bias or over-correct the design of the new system with the baggage and pain of the old.

The key to success with Event Storming is to elicit the business process, i.e., how the system wants to behave, not the current hacked-up process. When there is no representation from the business, there is no point in doing Event Storming.

3. Anemic Boris

When modeling relationships between bounded contexts, it is critical to fully understand the persistence of data, the flow of messages, the visualization of the UI, and the definition of the APIs. If these interactions are not fully fleshed out, the Boris diagram becomes anemic, and this in turn reflects an anemic domain model.

4. Picking Thin Slices

As you start modeling the new system, design the end-to-end happy path of the user workflow first. Pick concrete scenarios and use cases that add value to the business - those that encounter the maximum pain. The idea here is to validate the new design cheaply, with stickies instead of code, and not get stuck in error or edge cases. If you don't maintain speed and cycle through multiple end-to-end steel threads, you may be prematurely restricting the solution space.

5. Tactical Patterns - The Shit Sandwich

As you start implementing the new system and co-existing with the old, there are a few smells to watch for, the biggest one being the Shit Sandwich: being stuck doing DDD on a domain that is sandwiched between upstream and downstream domains/services that cannot be touched, thereby overly constraining your decomposition and leading to anemic modeling, since you cannot truly decompose a thin slice. You spend all your time writing ACLs mapping data across domains.

So the top bun is the downstream service, the bottom bun is the upstream service, the two layers of cheese are the ACLs, and your core domain is the burger patty in the middle - and now you have the shit sandwich. Watch for this when you are called in to modernize ESBs and P2P integration layers.



6. Over Engineering

Engineers are guilty of this all the time.
Ooooh ... we modeled the system with Event Storming as events, ergo we should implement the new system in Kotlin with Event Sourcing on Kafka and deploy it to Kubernetes.
Yikes!!
Do what is sustainable and what your average (not rockstar) software engineers can support. My colleague Shaun Anderson explains this best in The 3 Ks of the Apocalypse — Or how awesome technologies can get in the way of good solutions.


7. Pressure Cooker Stakeholder Management

When the stakeholders of the project continuously change their priorities, lose faith in the progress, or pre-decide the domains, it is time to push back and reassert control. The top-down process of Domain-Driven Design is like landing a plane from 30,000 feet: you cannot skip phases and go straight to user story backlog generation with somebody else's domain model or DDD process. Don't short-circuit steps in the process. It is critical to follow the strategic and tactical patterns and land the plane with a gradual descent to an organized user story backlog.

8. Faith

The biggest way to fail with DDD when doing system design or modernization is to lose faith and start questioning the process. Another guaranteed failure mode is procrastinating and never starting the modeling activities, i.e., being stuck in the analysis-paralysis phase. The only prerequisites to success are fearlessness and curiosity. You don't need formal training to be a facilitator. Start the journey, iterate, and improve as you go.

Saturday, August 1, 2020

Designing Good APIs

Part 1 -- Principles of Designing & Developing APIs (for Product Managers, Product Designers, and Developers)

Part 2 -- Process of Designing APIs (for Product Managers, Product Designers, and Developers)

Part 3 -- Tips for managing an API backlog (for Non-Technical PMs)

Part 4 -- Developing, Architecting, Testing, & Documenting an API (for Developers)

API First Development Recipe

How To Document APIs

Shopify APIs

API Design 




Friday, July 24, 2020

Don't Use Netflix Eureka For Service Discovery

So ... there are multiple issues with Eureka:
  1. Deprecated by Netflix and no longer maintained by Pivotal/VMware, so there is no long-term maintainer.
  2. The whole world seems to have moved on, either to service discovery provided natively by the platform, like Kube DNS (Kubernetes) or BOSH DNS (Cloud Foundry), or to service meshes (Istio) or service networking products like Consul. See replacing-netflix-eureka-with-kubernetes-services and polyglot-service-discovery-container-networking-cloud-foundry. A sketch of platform-native discovery follows this list.
  3. Eureka does not work with container-to-container networking.
  4. Stale service registry problem: Eureka and Ribbon don't react quickly enough to apps coming up and down, due to aggressive caching; see making-service-discovery-responsive.
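
To illustrate point 2, here is a minimal Spring WebFlux sketch of what replacing Eureka with platform-native discovery can look like: the client resolves a Kubernetes Service by its DNS name, so there is no registry client, no Ribbon, and no client-side cache to go stale. The service name orders, namespace default, and port 8080 are assumptions for illustration.

```java
import org.springframework.web.reactive.function.client.WebClient;

public class OrdersClient {
    // Kube DNS gives every Service a stable name: <service>.<namespace>.svc.cluster.local.
    // "orders", "default", and port 8080 are hypothetical; substitute your own Service.
    private final WebClient client =
            WebClient.create("http://orders.default.svc.cluster.local:8080");

    public String fetchOrder(long id) {
        // Kubernetes load-balances across the Service's healthy pods;
        // no Eureka lookup, no registry refresh interval to tune.
        return client.get()
                .uri("/api/orders/{id}", id)
                .retrieve()
                .bodyToMono(String.class)
                .block();
    }
}
```

Within the same namespace the short name http://orders:8080 resolves too; the fully qualified form just makes the mechanism explicit.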

Monday, June 22, 2020

The Eight Factors of Cloud Native Gemfire Applications

If, after moving the in-memory JVM cache to GemFire, your pain still hasn't gone away, take a look at this checklist of factors for getting your app to perform efficiently with GemFire. Thanks to my buddy Jeff Ellin, who is an expert ninja at GemFire and PCC.

TL;DR

Look at queries and data. If you have a read problem, then either the query is not constructed correctly or the data is not partitioned correctly. If it's a write problem, it's probably a cluster tuning issue, a network issue, or some badly implemented synchronous listeners. The best way to triage is to take some stats files and load them up into VSD. GemFire keeps a lot of statistics you can visualize, including the throughput of various operations.

Here are some of the other things you should probably look at.
  1. Data partitioning
  2. Non-indexed lookups
  3. Serialization
  4. Querying on the server: use PDX for OQL queries. If you aren't using PDX, the server must deserialize each object to evaluate the query. Any time you query on something other than the key, you are using OQL (see the sketch after this list).
  5. Data colocation strategy (customer orders should be on the same partition as the customer record; reference data should be replicated)
  6. Leveraging transactions incorrectly (all objects must be in the same partition). Make your operations idempotent instead of relying on transactions.
  7. Excessive GC activity due to data changing too frequently.
  8. If networking sucks, performance will suck due to the amount of replication. In rare cases you may need to enable delta propagation if the objects being serialized are big; also read "When to avoid delta propagation". For each region where you use delta propagation, choose whether to enable cloning using the delta propagation property cloning-enabled. Cloning is disabled by default; see Delta Propagation Properties. If you do not enable cloning, review all associated listener code for dependencies on EntryEvent.getOldValue: without cloning, GemFire modifies the entry in place and so loses its reference to the old value. For delta events, the EntryEvent methods getOldValue and getNewValue both return the new value.
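
For item 4, here is a hedged sketch of turning on PDX so OQL can run server-side without deserializing your domain objects. It uses the Apache Geode client API (which is what modern GemFire is built on); the locator host, the /Customers region, the state field, and the domain package pattern are all illustrative assumptions.

```java
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.SelectResults;
import org.apache.geode.pdx.ReflectionBasedAutoSerializer;

public class PdxQueryExample {
    public static void main(String[] args) throws Exception {
        // Auto-serialize our domain classes as PDX so the server can read
        // individual fields without deserializing whole objects.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator-host", 10334) // hypothetical locator
                .setPdxSerializer(new ReflectionBasedAutoSerializer("com.example.model.*"))
                .setPdxReadSerialized(true)            // keep results in PDX form on read
                .create();

        // OQL executes against the /Customers region server-side; with PDX the
        // c.state field access needs no Java deserialization on the server.
        Query query = cache.getQueryService()
                .newQuery("SELECT * FROM /Customers c WHERE c.state = $1");
        SelectResults<?> results = (SelectResults<?>) query.execute("CA");
        System.out.println("matches: " + results.size());

        cache.close();
    }
}
```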

Thursday, June 18, 2020

2020 A Year In Review

  • Leading the delivery of a one-of-a-kind Application Modernization Navigator for a Pharmacy Benefits Management system currently running on the mainframe.
  • Led the first two Remote Application Swift Navigators, creating the playbook for Remote Event Storming and other remote collaborative modeling practices.
  • GoToMarket Remote App Modernization Navigators https://tanzu.vmware.com/content/resources-for-remote-software-teams/how-to-conduct-a-remote-event-storming-session
  • Self-published a book on Practical Microservices.
  • Self-published a book on Emergent Trends in Modern Applications.
  • Improved the quality and number of recipes in our App Modernization Cookbooks and opened up tools like SNAP to the broader VMware community.
  • One of the lead influencers in MAPBU Services Marketing, as measured in published articles, blog posts, internal documents, and views.
  • Top articles including How to Build Sustainable, Modern Application Architectures https://tanzu.vmware.com/content/practitioners-blog/how-to-build-sustainable-modern-application-architectures
  • CKA and CKAD certified in 2019. Conducted training for both Labs and AppTx App Services on how to get certified on Kubernetes.
  • Taught the broader MAPBU organization how to conduct remote scopings.
  • Taught multiple (>5) Swift modernization workshops across Pivotal and VMware for scaling App Modernization.
  • Working with R&D to bring Spring Transformation Tooling to market. Created Spring Bootifier with Tim Dalsing which is now used in the implementation of Tanzu Workbench Spring Bootifier Transformer. Currently aligning with Spring R&D effort on automated cloud native remediation of Java applications.
  • Created Kafka battle-card for App Services sales 
  • Anchored a one-of-a-kind Kafka real-time inventory engagement and created collateral for the App Modernization for Streaming Workloads deck.
  • Created App Services pipeline at our top customers: over 375 individual Travelers participants attended the Should This Be A Microservice webinar on Tanzu enablement, which drew over 2,500 views on LinkedIn.
  • COVID-19 project: facilitated remote Swift Event Storming and Boris exercises. This client is a small startup that worked with Labs Seattle four years ago on an app connecting in-home carers with Medicaid recipients. Since COVID-19 they have expanded their platform to connect essential workers with childcare providers and, unsurprisingly, are experiencing huge demand. They are currently rolling out to public hospitals in LA.
  • Working across discovery teams to formulate the Discovery scoping and workshop process. 

Tuesday, June 2, 2020

Modernization Myths - Microservices - Explained

Myth - “Microservices Is The Only True Way”

Why are microservices the default way of developing cloud native applications? Why does application modernization in the form of decomposing monoliths result in so many microservices? Why have microservices become the default deployment model for applications, in spite of enterprises struggling with the observability, complexity, and performance of the resulting distributed systems? Heisenbugs abound when you mix sync and async flows, further compounded by the various forms of async event-driven architecture: thin events, thick events, event sourcing, CQRS, etc.
Frustrated by the intractability and cognitive overload of monoliths, I was one of the first people to ride the monoliths-to-microservices wave. However, six years after the Microservices article from Martin Fowler and James Lewis came out, it is time to retrospect on reality. Have microservices got you down? Are you swimming in a mess of brittle microservices that break every night? Is the number of microservices climbing toward a death-star architecture? It behooves us to travel upstream and examine the motivations for microservices. How do we peel back the onion and get back to sanity around microservices? How do we rationalize them?
So what is the path forward? Throw the baby out with the bathwater and swing back to monoliths? There ought to be a middle way that keeps the best of the microservices advantages: domain-based composition, enforced module boundaries, independent evolvability, better cognition, etc. It is time to look at life after running microservices architectures in production and learn from the mistakes committed over the past five years.
A way forward has emerged from working with a number of customers where the value of microservices was not realized from application modernization, despite leveraging Domain-Driven Design and doing everything right. This process adds sanity to the construction of microservices and provides guidelines and design heuristics for structuring them. We need to tackle the problem from technical, business, and social perspectives, marrying concepts from DDD with Wardley business maps and sociotechnical architecture: rationalize microservices into modular monoliths based on technical and business heuristics, combining the mapping of microservices to core technical attributes with reduction by affinity mapping and business domain context distillation. This workshop/process, called micro2mono, has simplified enterprise architectures and reduced the operational burden of microservices. You can find more details of the process here and in the six-factors post.

So how DO you rationalize your microservices? Here is a presentation that walks through all the factors you should use to rationalize them.










A Blueprint For Mainframe Modernization

At VMware Pivotal Labs we have cracked the code of mainframe modernization. Frustrated with low-fidelity code generators that migrate COBOL to Java and lose business rules in the process? Tired of multi-year modernization projects with no end in sight? Take a look at the seven-step punch VMware Pivotal Labs employs to achieve concrete modernization business outcomes for high-value, critical mainframe OS/400 and OS/390 workloads.




Modernization in weeks, not months or years
Blueprint for Mainframe Modernization, VMware Pivotal Labs