# On-prem, AWS, GCP (... Alibaba, IBM Cloud, Oracle)

1. Insurance against vendor lock-in
2. Leverage the power of hyperscale cloud providers
   - on-demand IaaS
   - value-added services
3. Regulatory requirements
4. Disaster recovery across providers
5. Oracle, VMware, and IBM legacy data-center incumbency
   - VMC for AWS, VMC for GCP, VMC for Azure
   - zero-change VM migration
   - VMC control plane for VMs
6. 2-10% penetration of public cloud
7. Proliferation of platforms - the case for a SINGLE multi-cloud control plane
   - CF
   - Mesosphere
   - K8s (Anthos, TMC, Arc, Crossplane, ...)
# Replication Edge <=> Central
- Edge workloads will be 75% of cloud workloads
- Call back home
- Edge architecture
- Replicate -> offline-online
- Data constraints
- Network constraints
- STATE management
- Cell towers, POS, autonomous robots
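The offline-online replication pattern in the notes above can be pictured as a store-and-forward buffer: the edge site always accepts writes locally and drains them to the central cluster whenever connectivity allows. This is a minimal illustrative sketch (class and callback names are assumptions, not any product's API); a real edge node would persist the buffer durably and handle ordering and deduplication.

```python
from collections import deque

class EdgeReplicator:
    """Store-and-forward replication sketch: buffer writes locally while
    the edge site is offline, flush them to central on reconnect."""

    def __init__(self, send_to_central):
        self.send_to_central = send_to_central  # callable: record -> bool (ack)
        self.buffer = deque()                   # durable queue in a real system
        self.online = False

    def write(self, record):
        # Always accept the write locally; replication is asynchronous.
        self.buffer.append(record)
        if self.online:
            self.flush()

    def flush(self):
        while self.buffer:
            record = self.buffer[0]
            if not self.send_to_central(record):
                self.online = False  # link dropped; keep record buffered
                return
            self.buffer.popleft()    # drop only after central acks

    def reconnect(self):
        self.online = True
        self.flush()
```

The key design point is that a record leaves the edge buffer only after the central side acknowledges it, so a dropped link never loses state.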
# Stateful workloads - Databases
- Operational pain for engineers
- Containers vs. VMs (legacy)
- Operators
- StatefulSets
- CSI
- Portworx
- Performance reasons for staying on a VM
- Self-service, faster changes, choice
- Data resiliency, backup, DR (BCDR)
- GemFire replication (pattern)
If you have a massive DataWindow system baked into a PowerBuilder client application persisting to an Oracle database, you are likely in a world of pain. This is a business-critical system that generates billions of dollars of revenue. Multiple attempts to modernize it have cratered due to big-bang, vendor-product-driven, technology-infatuated soirees. You want to dip your toes into the modern cloud-native, microservices, developer-friendly world but have been burnt twice already. You have no interest or inclination in fielding yet another rewrite-the-system-from-scratch-in-two-years sales pitch. What the hell do you do? Hit the bottle? Pay Oracle another $25M in licensing fees? Put a mask on and upgrade PowerBuilder to 2019 R2? Is there a way out of this black hole?
Yes. But it ain't easy.
First, acknowledge that this is a tough task. The shortcuts were already taken, which is why we are at this F*#$ed-up point.
Here is a path that can be trodden. Two of the smart guys in data, Alastair Turner and Gideon Low, have heavily influenced my thinking on this topic.
First, figure out the primary driver for modernization: cost or productivity. Depending on the answer, a different set of strategies needs to be followed.
Let's assume all your customer support professionals are well versed in the screens and actually love the DataWindow UI. The application is functional and can be enhanced quickly in production. The only issue is the licensing and cost associated with running PowerBuilder. In such a scenario, perhaps migration is the better option, i.e., migrate all the data to Postgres. Check whether your SAP or Appeon version of PowerBuilder supports PostgreSQL as a backend. You might be so bold as to consider migrating your database to the cloud with AWS Database Migration Service.
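To give a flavor of what such a database migration involves, here is a minimal, illustrative sketch of an Oracle-to-PostgreSQL column-type translation pass. The mapping table is a small assumed subset chosen for illustration; real tools (AWS DMS, ora2pg) handle far more edge cases, notably Oracle NUMBER precision/scale semantics.

```python
import re

# Illustrative subset of Oracle -> PostgreSQL type mappings.
# A real migration tool covers many more cases and edge conditions.
TYPE_MAP = {
    "VARCHAR2": "varchar",
    "NVARCHAR2": "varchar",
    "CLOB": "text",
    "BLOB": "bytea",
    "DATE": "timestamp",  # Oracle DATE carries a time component
    "NUMBER": "numeric",
}

def translate_column(oracle_decl):
    """Translate a single 'NAME TYPE(args)' Oracle column declaration."""
    m = re.match(r"(\w+)\s+(\w+)(\([^)]*\))?", oracle_decl.strip())
    name, otype, args = m.group(1), m.group(2).upper(), m.group(3) or ""
    pg = TYPE_MAP.get(otype, otype.lower())  # pass unknown types through
    return f"{name} {pg}{args}"
```

For example, `translate_column("CUSTOMER_NAME VARCHAR2(100)")` yields `CUSTOMER_NAME varchar(100)`. The point is not the mapping itself but that type translation is mechanical, while the stored-procedure logic is where the real migration effort hides.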
Depending on the cost, you may choose to use code generators that auto-generate Java or C# code from PowerBuilder libraries. Both Appeon and Blu Age have such tools; however, buyer beware. Any tool that charges for modernization by LOC is immediately suspect. Code generators are like Vietnam: easy to get in, hard to get out.
You want to develop new features and microservices and expose APIs to newer channels and other consumers of the service. Here you have a fork in the road.
1. Upgrade to the latest GA version, PowerBuilder 2019 R2, and begin an expensive project to RESTify existing DataWindows as web services. The limitation of using conversion tools is that you don't really get a chance to identify and fix various classes of important problems; you don't pay down your technical debt. This is trading one form of debt for another, like replacing your high-interest debt from Visa with slightly lower-interest debt from Capital One. What's in your wallet?
2. The RIGHT way. Start by cataloging the business use cases and rebuild each one. The legacy system's only real role is to validate that behavior hasn't changed. If you can't get the business rules from the business, you will need to reverse-engineer the stored procedures and persistence layer using tools like ER diagrams and SchemaSpy, or leverage Oracle's dependency catalog utilities to determine the object dependency tree. Visualization tools can't hurt, but SO MUCH is usually trapped in the stored-procedure logic that their use can be as much a hindrance as a help. A good way to visualize the entire business process is to leverage techniques from the app world like Event Storming or user journey, workflow, and interaction mapping.
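To illustrate the dependency-tree step, here is a minimal Python sketch: given (object, referenced_object) rows of the sort you might export from Oracle's ALL_DEPENDENCIES catalog view, it computes a leaf-first ordering, so tables and leaf procedures surface before the packages that call them. The object names below are hypothetical examples, not from any real schema.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical (name, referenced_name) pairs, as exported from
# Oracle's ALL_DEPENDENCIES view.
dependencies = [
    ("PKG_BILLING", "SP_CALC_TAX"),
    ("PKG_BILLING", "SP_APPLY_DISCOUNT"),
    ("SP_CALC_TAX", "TAX_RATES"),        # a table
    ("SP_APPLY_DISCOUNT", "CUSTOMERS"),  # a table
]

def rebuild_order(edges):
    """Leaf-first ordering: tables and leaf procedures come before the
    packages that call them, a useful sequence for reverse-engineering."""
    graph = {}
    for obj, dep in edges:
        graph.setdefault(obj, set()).add(dep)
        graph.setdefault(dep, set())  # leaves have no dependencies
    return list(TopologicalSorter(graph).static_order())
```

Working leaf-first means each object's behavior can be characterized before tackling the procedures that depend on it.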
There is no substitute for hard work, and no real substitute for case-by-case refactoring. Start by documenting the problem, visualizing it, and identifying the starting points. Thereafter, pick a particular steel thread, aka an end-to-end slice, and identify the right set of APIs. Leverage the tactical patterns from Sam Newman's book Monoliths to Microservices (chapter 4) for decomposing data and pulling out services.
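One way to picture the steel-thread approach is a strangler-fig style router: endpoints that have been carved out as services are directed to the new implementation, while everything else still hits the legacy monolith. The sketch below is a deliberately minimal illustration; the paths and handler names are hypothetical, and in practice the routing lives in a gateway or proxy tier.

```python
# Endpoints already carved out of the monolith (the first steel thread).
# Paths and handlers here are illustrative placeholders.
MIGRATED = {"/orders/quote"}

def handle_legacy(path):
    # Stand-in for forwarding the request to the PowerBuilder/Oracle monolith.
    return f"legacy:{path}"

def handle_new_service(path):
    # Stand-in for forwarding the request to the extracted microservice.
    return f"new:{path}"

def route(path):
    """Strangler-fig routing: migrated endpoints go to the new service,
    everything else stays on the monolith until it too is carved out."""
    handler = handle_new_service if path in MIGRATED else handle_legacy
    return handler(path)
```

Growing the `MIGRATED` set one slice at a time is what makes incremental decomposition safe: each endpoint can be compared against the legacy behavior before the old path is retired.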
Start the journey of 10,000 miles with a first step: start small, iterate, and demonstrate value to the business at the end of each iteration. This is the only way to be successful in the long term with any modernization of a critical, complex system at scale.
Good Luck !!
See how ZIM Shipping moved from .NET/PowerBuilder development to Spring Boot with Tanzu Application Service.
Three difficult challenges we often come across when our team works with clients trying to break monoliths:
1. Silver bullets: Enterprises have been burnt by vendor solutions that promise seamless migration with a tool like BPM or some silver-bullet methodology. The truth is that disentangling your monolith's code and data is going to get messy. The entire business process will need to be disaggregated and visualized, and seams will need to be identified to create a blueprint for a target architecture. Other than Pivotal/VMware, no one else has done this at enterprise scale. Our approach modernizes the monolith incrementally, with demonstrated business value in weeks, not years.
2. Over-engineering: It is common to get distracted by technology choices and deployment options rather than focus on the difficult work of understanding the core domain: identifying the business capabilities and assessing whether they are core or supporting domains. Do what is sustainable, what your average (not rockstar) software engineers can support, and focus on the outcomes.
3. Pressure cooker: When clients continuously change their priorities, lose faith in the progress, or micromanage the design, it subverts the process, and the target architecture ends up looking like the old system. Breaking monoliths with Domain-Driven Design is like landing a plane from 30,000 feet: you cannot skip phases and go straight to user-story backlog generation with somebody else's domain model or DDD process. Don't short-circuit steps in the process. It is critical to follow the strategic and tactical patterns and land the plane with a gradual descent to an organized user-story backlog.