About Me

Rohit leads the Pivotal Labs App Modernization Practice across engineering, delivery, training & cross-functional enablement, tooling, scoping, selling, recruiting, marketing, blog posts, webinars and conference sessions. Rohit has led multiple enterprise engagements, including ones featured in the Wall Street Journal. Rohit focuses on designing and implementing enterprise software solutions and consulting with Fortune 500 companies on application migration and modernization.

Tuesday, May 26, 2020

Modernization Myths Explained 1 & 2

In this blog post we go deeper into the top two myths of application modernization. An overview of all of the top 10 myths can be found here.


Myth 1 - “Application has to be cloud native to land on a PaaS”

The truth is that most platforms as a service run applications with varying cloud native characteristics just fine. Applications progress through a spectrum as they land and flourish in the cloud: from not running in the cloud, to running in the cloud, to running great in the cloud. A PaaS like Cloud Foundry has also evolved features like volume services and multi-port routing to help stateful and not-born-on-the-cloud applications run on Cloud Foundry without changes. In his blog series debunking Cloud Foundry myths, Richard Seroter authoritatively disproves the notion that Cloud Foundry can only run cloud-native applications.
Applications do not have to be classic 12-factor or 15-factor compliant to land on a PaaS. Applications evolve along the cloud native spectrum. The more cloud-native idiomatic changes you make to an app, the more return on investment you get from the changes. The more cloud native you make the app, the higher the optionality you get, since it becomes cloud agnostic, allowing enterprises to extract maximum leverage from all the providers. The focus needs to be on the app, inside-out, to get the best returns. In general, the higher you are in the abstraction stack, the more performance gains you will get: architecture changes will yield 10x more benefit than JVM or GC tuning, which will yield 10x more benefit than tuning assembly code, and so on… If you think the database tier is the problem, then you can put in multiple shock absorbers instead of tuning startup memory and app start times. Apps first, platform second :-)
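To make "cloud native idiomatic changes" concrete, here is a minimal sketch (my illustration, not from the original post) of one of the cheapest idioms to adopt: externalized configuration, so the same artifact runs unchanged on any platform. The class and property names below are hypothetical.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Minimal sketch of externalized configuration (a classic 12-factor idiom):
// the endpoint and pool size come from the environment the platform injects,
// not from code, so the same build artifact can land on any PaaS unchanged.
// The property names below are hypothetical.
@Component
public class PaymentGatewayConfig {

    @Value("${payment.gateway.url}")
    private String gatewayUrl;

    @Value("${payment.gateway.pool-size:10}") // sensible default when unset
    private int poolSize;

    public String gatewayUrl() { return gatewayUrl; }

    public int poolSize() { return poolSize; }
}
```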

Myth 2 - “Applications have to be refactored to run on Kubernetes”

It's a fallacy that applications need to be modified by developers before landing them on Kubernetes. In fact, an enterprise can get significant cost savings by migrating one-factor apps to Kubernetes. A one-factor app simply has the capability to restart with no harmful side effects. James Watters, the cloud soothsayer, has posed the question on the cloud-native podcast: do you even have a 1-factor application?
Most business applications are not ready for refactoring but still want the cost advantages of running in the cloud. For apps where the appetite for change is zero, starting small, as in just making the application restart predictably (i.e. making it one factor), can get it running on a container platform like Kubernetes. As you shift to declarative automation and scheduling, you will want the app to restart cleanly. There is an application-first movement of being able to do some basic automation of even your monolithic applications. Apps are the scarce commodity right now. With Kubernetes becoming more and more ubiquitous, all application portfolios need a nano-change mindset to adapt to the cloud.
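As an illustration (mine, not from the original post), here is a minimal plain-Java sketch of what "restart with no harmful side effects" can look like: the process reacts to the SIGTERM Kubernetes sends before rescheduling a pod and releases its resources, so a restart elsewhere is harmless. The class and method names are made up.

```java
// Minimal sketch of "one factor" behavior: the process can be stopped and
// rescheduled with no harmful side effects, because in-flight work is flushed
// and resources are released on shutdown. Class and method names are made up.
public class LegacyWorker {

    public static void main(String[] args) {
        LegacyWorker worker = new LegacyWorker();
        // Kubernetes sends SIGTERM before killing a pod; the JVM translates
        // that into shutdown hooks, giving us a window to stop cleanly.
        Runtime.getRuntime().addShutdownHook(new Thread(worker::stopCleanly));
        worker.run();
    }

    private void run() {
        // ... do the work; keep no local state that a restart would lose ...
    }

    private void stopCleanly() {
        // Flush buffers, close database connections, release locks, etc.
        System.out.println("Shut down cleanly; safe to restart on another node.");
    }
}
```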

Saturday, May 16, 2020

Top Ten Application Modernization Myths

Sometimes we tell little lies to ourselves. It is always good to take inventory of reality and introspect on what is true and what is not. Here are some of the little lies of application migration and modernization that I have observed over the last five years. 
  1. Applications have to be 12/15-factor compliant to land on a PaaS. Apps can be moved along the cloud native spectrum; the more cloud-native idiomatic changes to an app, the more ROI you get from the changes. See Myth #1: "Cloud Foundry can only run cloud-native, 12-factor apps." - FALSE https://tanzu.vmware.com/content/blog/debunking-cloud-foundry-myths
  2. Applications need to be modified by developers before landing them on Kubernetes (TKG). In fact, an enterprise can get significant cost savings by migrating one-factor apps to Kubernetes. A one-factor app simply has the capability to restart with no harmful side effects. See https://tanzu.vmware.com/content/intersect/vmware-tanzu-in-15-minutes - Do you even have a 1-factor application?
  3. Once technical debt on an application becomes insurmountable, the only recourse is to rewrite it. Surgical strikes with an emphasis on understanding the core domain can lead to incremental modernization of the most valuable parts of a big, critical legacy system. A FULL rewrite is not the only option. Treat technical debt like financial debt: https://tanzu.vmware.com/content/intersect/risk-profile-of-technical-debt and https://tanzu.vmware.com/content/webinars/may-6-tech-debt-audit-how-to-prioritize-and-reduce-the-tech-debt-that-matters-most
  4. There is a silver bullet for app migration. A growing bevy of tools promises seamless migration of VMs to containers in the cloud. Remember, in life nothing is free: you get what you put in. Migration is highly contextual, and the OPEX and developer-efficiency returns depend on the workloads being ported. Migration of apps in VMs to Kubernetes StatefulSets, automatic dockerization through buildpacks, etc., should be evaluated against the desired objectives of the migration project.
  5. Microservices and event-driven architecture are ALWAYS the right architecture choice for app modernization. Sometimes the answer is to step back, simplify the domain, and implement a modular monolithic system; sometimes the answer is to decompose the large system into a combination of microservices and functions. Understand the design and operational tradeoffs first before making the choice. Every tech choice, like eventing, APIs, or streaming, has a spectrum. The fundamental job of an architect is to understand the sociotechnical factors and make the right choices from a process, people and implementation perspective. See https://tanzu.vmware.com/content/practitioners-blog/how-to-build-sustainable-modern-application-architectures
  6. Decomposing and rearchitecting an existing system can be done concurrently with forward development with little impact to existing release schedules. This is a dream. When working on two branches of an existing system (a forward development branch and a rearchitecture branch), total output often gets worse before it becomes better (worse-before-better). This is because there is a period where dual maintenance, dual development and the coordination tax across two teams are levied without getting any of the benefits of modularization and refactoring. See The Capability Trap: Prevalence in Human Systems https://www.systemdynamics.org/assets/conferences/2017/proceed/papers/P1325.pdf and https://rutraining.org/2016/05/02/dont-fall-into-the-capability-trap-does-your-organization-work-harder-or-smarter/
  7. The fundamental problems of app modernization are technical: if developers only had the rigor and discipline to write idiomatic code, all problems would be fixed and we would not incur technical debt. Wrong - the fundamental problems of app modernization are team and people related. Incorrect team structure, wrong alignment of resources to core domains and messed-up interaction patterns are far more responsible for the snail's pace of feature addition than technical shortcomings. The answer is team reorganization based on the reverse Conway maneuver. See Team Topologies https://www.slideshare.net/matthewskelton/team-topologies-how-and-why-to-design-your-teams-alldaydevops-2017
  8. Mainframe modernization can be accelerated by using lift-and-shift tools like emulators or code generation tools. In our experience, a complex mainframe modernization almost always involves a fundamental rethink of the problem being solved and then writing a new system that addresses the core domain, divorced from the bad parts of the existing intermingled complex system. The theory of constraints and systems thinking help us reframe the system and implement a better, simpler one.
  9. Engineers, developers and technical architects tend to think from a technical nuts-and-bolts perspective (the “how”) and, therefore, tend to look at modern technologies such as Cloud Foundry, Spring Boot, Steeltoe, Kafka and containerization as the definition of a modern application. This misses the mark. The Swift Method pioneered by Pivotal helps bridge the gap in understanding between the non-technical, top-down way of thinking and the technical, bottom-up thought process. The end result is an architecture that maps to the way the system “wants to behave” rather than one dictated by the software frameworks du jour.
  10. AWS, Azure, GKE/GCP, etc. provide an all-encompassing suite of tools, services and platforms to enable and accelerate modernization and migration of workloads. While it is true that the major cloud providers have ALL the bells and whistles to migrate workloads, the economics of app modernization tend towards the app and not the platform. The more cloud native you make the app, the higher the optionality you get, since it becomes cloud agnostic, allowing enterprises to extract maximum leverage from all the providers. The focus needs to be on the app, inside-out, to get the best returns. In general, the higher you are in the abstraction stack, the more performance gains you will get: architecture changes will yield 10x more benefit than JVM or GC tuning, which will yield 10x more benefit than tuning assembly code, and so on… If you think the database tier is the problem, then you can put in multiple shock absorbers (1. caches, 2. queues, 3. partitioning) first instead of focusing on tuning startup memory and app start times (a small caching sketch follows this list). Apps first, platform second :-)
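Here is that caching sketch (my illustration, not part of the original list): a read-through cache in front of the database tier, using Spring's caching abstraction. The record, repository and cache name are hypothetical, and @EnableCaching plus a cache provider are assumed to be configured elsewhere.

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical record and repository, just enough to make the sketch compile.
record CustomerRecord(String id, String name) {}

interface CustomerRepository {
    CustomerRecord findById(String customerId); // imagine a slow database call
}

// Shock absorber #1: hot lookups are served from the cache after the first
// read, so they stop hammering the slow database tier.
@Service
class CustomerLookupService {

    private final CustomerRepository repository;

    CustomerLookupService(CustomerRepository repository) {
        this.repository = repository;
    }

    @Cacheable("customers")
    public CustomerRecord findCustomer(String customerId) {
        return repository.findById(customerId);
    }
}
```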

Friday, April 24, 2020

Java Application Modernization Maturity Model

This is how I think about the Maturity Model for Java Transformers
--------------------------------------------------------------------------

1. Basic containerization of stateless apps to TKG - enabled by https://github.com/pivotal/kpack and Cloud Native Buildpacks. Deploy with vanilla manifests that may be Helm-ified. / Basic containerization to TAS - using TAS buildpacks ... some apps require no changes when deploying with the JavaBuildpack. **Zero changes.**

2. TKG - Basic containerization of stateful apps, possibly using K8s StatefulSets or persistent volumes. / TAS - Extract state out, like session replication or databases (not sure how to do this yet). Some tools purport to do this, like Google Anthos and the POC the Tracker team is working on. **Minimal changes.**

3. Invasive Spring Boot transformer - high value, high degree of change and difficulty. These automate the transformation recipes to cloud native: Bootifier ones, as well as simpler ones like Boot 2 -> Boot 3 and XML -> Java config migration (see the sketch after this list). **Invasive changes.**

4. Microservice generator - looks at the dynamic runtime of the application, determines seams, and makes suggestions for where apps can be decomposed, used as a starting point for Swift. **Monoliths2Microservices**
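To make the XML -> config migration in step 3 concrete, here is a minimal before/after sketch (my illustration, not the output of any specific transformer); the bean and connection details are hypothetical.

```java
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Hypothetical before/after for an XML -> Java config recipe.
// Before, in applicationContext.xml:
//   <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
//     <property name="url" value="jdbc:postgresql://db:5432/orders"/>
//   </bean>
// After: the same bean expressed as Java configuration.
@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setUrl("jdbc:postgresql://db:5432/orders"); // hypothetical URL
        return dataSource;
    }
}
```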

Thursday, April 9, 2020

The Balance between Precision and Accuracy when it comes to Performance Benchmarking

Usually when called in to a firefighting situation where the performance and scale of the system is going to shit, it is critical to understand the right treatment to apply to the problem. Understanding the precision vs. accuracy tradeoff is critical in prescribing the right medicine for your problems. So when should you go for accuracy and when should you pursue precision?

When you need to be in the right ballpark of the solution within an order of magnitude go for accuracy. When you need to be precise to the individual digit level strive for precision. In most situations you can guess that accuracy comes first, precision comes later. Another critical tradeoff is performance vs scale. Your application may scale great but perform like crap and vice-versa. Scale needs to be achieved at the right level of end user performance.

In general, the higher you are in the abstraction stack, the more performance gains you will get: architecture changes will yield 10x more benefit than JVM or GC tuning, which will yield 10x more benefit than tuning assembly code, and so on… If you think the database tier is the problem, then you can put in multiple shock absorbers (1. caches, 2. queues, 3. partitioning) first instead of focusing on tuning startup memory and app start times.

Always understand the top constraint and problem you are solving for. You should always prioritize solving the top constraint, and the best way to determine the top constraint is to take a holistic, system-level view: draw out a system map and take a look at all the queues and waits in the system.

Visualize the system as a bucket of wait queues - something like the picture below. An Event Storming exercise can help suss out this system map and visualize the API/data stream.

Here are the top five performance mitigation suggestions based on past experience:
  1. Classpath bloat: Check to see if the classpath is bloated. Spring Boot auto-configures stuff that you sometimes don't need, and a bloated classpath leads to misconfiguration of thread pools and libraries. As a remedy, put the dependencies (pom.xml or build.gradle) through a grinder; we have a 25-step checklist if you want details (see the sketch after this list). This will also reduce the overall memory footprint due to library misconfiguration. Developers load up a lot of dependencies and libraries in the app, and the full implications of memory bloat and runtime misconfiguration are not understood until later.
  2. Startup time: If you want your app to start up quickly, see this list https://cloud.rohitkelapure.com/2020/04/start-your-spring-apps-in-milliseconds.html
  3. Memory: If the app is memory constrained, ensure that GC is tuned correctly. Employ verbose GC tracing. See https://blog.heaphero.io/2019/11/18/memory-wasted-by-spring-boot-application/#4A7023 and https://heaphero.io/
  4. External integration tuning, including outbound DB, HTTP calls & messaging, and connection pool tuning. See https://github.com/pbelathur/spring-boot-performance-analysis. In general, examine any outbound connection/integration to any database or messaging queue. Put circuit breakers & metrics on every outbound call to see the health of the outbound connection.
  5. Latency/response time analysis to see where & whether time is spent on the platform/app/network/disk. Use the latency-troubleshooter app to hunt down latency hogs. https://docs.cloudfoundry.org/adminguide/troubleshooting_slow_requests.html and https://community.pivotal.io/s/article/How-to-troubleshoot-app-access-issues
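As referenced in item 1, here is a small sketch (mine, not part of the checklist) of one classpath-bloat remedy: after pruning the dependencies, explicitly exclude auto-configurations the app does not actually use. The exclusions shown are illustrative examples, not a recommendation for every app.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.boot.autoconfigure.jmx.JmxAutoConfiguration;

// Sketch for remedy #1: once pom.xml/build.gradle has been through the grinder,
// also switch off auto-configurations the app genuinely does not use, so Spring
// Boot stops wiring thread pools and clients you never asked for.
// The exclusions below are illustrative; verify them against your own app.
@SpringBootApplication(exclude = {
        DataSourceAutoConfiguration.class,
        JmxAutoConfiguration.class
})
public class TrimmedApplication {

    public static void main(String[] args) {
        SpringApplication.run(TrimmedApplication.class, args);
    }
}
```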
For a detailed checklist see
HAPPY Performance Hunting!

Wednesday, April 1, 2020

Start your Spring Apps in Milliseconds, not Seconds

Tweaks

If you want to start your app as quickly as possible (most people do), there are some tweaks you might consider. Here are some ideas:
  • Use the spring-context-indexer (link to docs). It’s not going to add much for small apps, but every little helps.
  • Don’t use actuators if you can afford not to.
  • Use Spring Boot 2.1 and Spring 5.1.
  • Fix the location of the Spring Boot config file(s) with spring.config.location (command line argument or System property etc.).
  • Switch off JMX - you probably don’t need it in a container - with spring.jmx.enabled=false
  • Run the JVM with -noverify. Also consider -XX:TieredStopAtLevel=1 (that will slow down the JIT later at the expense of the saved startup time).
  • Use the container memory hints for Java 8: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap. With Java 11 this is automatic by default.
Your app might not need a full CPU at runtime, but it will need multiple CPUs to start up as quickly as possible (at least 2; 4 is better). If you don’t mind a slower startup you could throttle the CPUs down below 4. If you are forced to start with fewer than 4 CPUs it might help to set -Dspring.backgroundpreinitializer.ignore=true since it prevents Spring Boot from creating a new thread that it probably won’t be able to use (this works with Spring Boot 2.1.0 and above).
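Here is a minimal sketch (mine, not part of the original list) of how a few of these switches could be applied programmatically; the same settings can equally live in application.properties or be passed as command line arguments, and JVM flags like -noverify still belong on the java command line. The class name is hypothetical.

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

// Sketch: applying a few of the startup tweaks listed above in code.
@SpringBootApplication
public class FastStartApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FastStartApplication.class)
                .properties(
                        "spring.jmx.enabled=false",                    // no JMX in a container
                        "spring.config.location=classpath:/application.properties", // fixed config location
                        "spring.backgroundpreinitializer.ignore=true") // helps when starting with few CPUs
                .run(args);
    }
}
```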

Thursday, March 26, 2020

CKA and CKAD Certification Preparation

COVID-19 provides an excellent opportunity to up-skill your game around Kubernetes. Here are some tips and a video that takes you through the prep needed to ace the CKA and CKAD Kubernetes certifications. Enterprises will most likely enter into a period of cost reduction both in licenses and infrastructure. Kubernetes provides the container density to consolidate your datacenter footprint and postpone that migration to public cloud and save millions of dollars.

Remember, the key to acing the CKA and CKAD exams is to understand how to generate the K8s manifest YAMLs expeditiously and to practice with the exam simulators. These exams are real and provide a solid foundation in developing and operating on Kubernetes. In some ways the CKAD is more challenging than the CKA. However, the breadth of the CKA is 30% more than the CKAD. So you can choose to go broad or you can choose to go deep.

Here are the top resources:
Finally, a cheesily edited video by my son Rushil Kelapure of me and my colleagues Varun, Igor, Alfus and Wael from the VMware Pivotal Labs App Modernization team providing guidance on how to study for the CKA & CKAD certifications.



Best Of Luck!
Rohit 

Friday, March 20, 2020

Top Lessons Learnt on How To Conduct a Remote Event Storming Session

How Does the Domain Driven Design Practice of Event Storming Change in the Virtual World ?

Event Storming (ES) is a fun activity that will reveal the emergent context and subdomains of your complex system and will lay the foundation for re-architecture to the desired future state. ES in the virtual world is a different beast than in the physical world, primarily due to the lack of shared context and the low bandwidth of communication over video. Virtual ES can be made to work with a lot of work, love, and care for the participants. Things will go wrong and you will need to be flexible and roll with it. As with ALL things, the most difficult part is starting and doing the first one. It will get better with practice. Leverage Miro and pre-baked dashboards, do your prep with both the facilitators and the practitioners, and you will be on your way to deeply discovering and exploring the complexity of your domain.

More recently, thanks to the coronavirus situation, we had to practice an all-remote event storming. Event storming is a tactile, in-person, collaborative human modeling activity that requires a high degree of facilitation to align the technical team, business analysts and domain experts towards a shared pool of understanding of a complex domain. Event Storming facilitation requires a lot of techniques and Jedi mind tricks around giving everyone a voice and making all the implicit knowledge explicit on paper. In this specific case we were modeling a batch system on the mainframe.

What worked 

  • Event Storming (ES) with the right tool DOES work. In this case we used the Miro canvas collaborative board - basically an online version of the large wall we need in the real world. Miro held up considering that we were on it continuously for nearly 30 mins. For a brief 15 min. period it did glitch, but overall it got the job done and worked great.
  • We also leveraged Miro templates as a productivity booster to land a bunch of colored empty stickies. The Brainstorming template is especially helpful. Someone in the DDD community should publish an Event Storming Miro template to encode the lessons learnt.
  • The silent generation of stickies did work. Even though we had done Event Storming 101 with all the participants, the quality of the stickies was not that great at first; as we gave in-time feedback, the quality of the stickies improved.
  • Always have Video ON. 
  • The preparation we did revealed a big problem in that the participants could not connect to Miro over VPN. Therefore we had to quickly pivot to an alternate way for our participants to access Miro. We figured out all the Miro and Zoom tricks before the sessions started. It is important to do a run through for all participants before the session begins.
  • Having a couple of facilitators is absolutely a must because the facilitators have to do much more of a heavy lift to pull in participants and get the energy flowing in a virtual ES. The facilitators switch roles: the first facilitator does the play-by-play and the second facilitator provides the color.
  • After the Boris exercise of Swift we did a user story map of all the stories following the three happy paths. This worked well to generate all the stories in the backlog. It enabled us to map the stories to the thin flows and define MVPs. 
  • Use ONE Miro board for each activity. In our inception we ended up with the following boards: 1. OKRs, 2. Event Storming, 3. Risks, 4. Data Flow Architecture, 5. Boris, and 6. Spring Retro. The entire process took 5 days.
  • You may want to look at Miro maintenance windows to ensure there is no blackout period during your ES. Please back up all your boards at the end of every day.
Remote Event Storming Miro Board

What did not work ?

  • The collaborative aspect of Event Storming, where there are simultaneous conversations among the participants, was a #fail. I could record only one organic conversation; the rest were facilitated during the walkthroughs. This is because in an online setup there is ONLY one channel for a large group activity and many voices. This contention results in poor bandwidth of communication.
  • Event Storming went too long. I think we clocked out at 3 hours. Towards the end it became primarily a guided session. I believe we should have given the participants a chance to conduct the walkthrough. If we as the facilitators take a heavy hand, it pushes the participants into a passive, read-only mode instead of a more active doer role. This is a fine line.
  • We did not have the business and domain experts for the entire time. This is the #1 reason for lack of flow during the ES. 
  • We did not do a sufficient amount of prep to train all the participants on ES. A half-hour introduction was not enough. We will pay more attention to this, and instead of highlighting individual stickies we will showcase a small sample system with multiple stickies including events, external systems, issues and aggregates.
  • When we started the ES, in addition to events our intent was also to generate commands, aggregates, views and users. This created cognitive overload even though we did this incrementally - from past ES experiences we knew this was too much.
  • At a minimum, all the online ceremonies went 20-40% over the actual time they take in the physical world, primarily due to the lack of context and working with a group of individuals for the first time. The storming and norming process that bonds a group together takes time virtually. There is no way to speed this up other than to take the time.
  • We did not spend the time to create icebreaker domain event stickies, which may have accelerated the sticky generation. Next time we will seed a few relevant domain events to get the group going.
Remote Event Storming Legend

What Did we learn and What will we do differently next time ?

  • Miro training - We will conduct a dedicated Miro training for both facilitators and participants so that everyone is a Miro ninja. The number of times all of us messed up the Miro board was non-trivial.
  • No more than 7 participants in any remote Zoom session. When in excess of 7 participants, split into multiple virtual breakout rooms and then bring everyone back together to do a summary. Split the business process into high-level categories and break into different rooms if over 10 participants.
  • More facilitated Event Storming - We should have taken a break in between and should have done the front to back and back to front walkthroughs at shorter intervals. 
  • Get dedicated time from the business and domain experts.
  • No session in excess of 90 minutes. 
  • As a facilitator you will need two big screens. My setup of just one laptop and one large screen was sub-optimal. If I were to do this again, especially as you get to Boris, you need the ES screen on the left and the Boris screen on the right. Get a bigger work/office table.
  • For a first time event storm we will ONLY stick to Domain Events, Issues, HOT Spots, External Systems, Aggregates and emergent Sub-domains/Bounded contexts in that order. Commands are mostly useless and should be skipped for ES. Read Models, Views, UI and other notations are useful but ONLY for advanced Event Stormers.  
  • Miro Tips: 
    1. You can lock certain parts of the canvas of the Miro board by locking the frames. When you click on a frame and the context menu shows up, click on the lock icon. That will lock the frame in place but still allow things inside the frame to be moved around. Frames are easier to navigate: they show up on the sidebar menu and you can move them up or down.
    2. You can lock most elements, lines, boxes, labels, whatever you want to remain in an anchored location on the canvas.
    3. Use Bulk Mode for folks to add sticky notes. It lets you type a list of stickies then, boom, dump them on the board when you hit Done.
    4. Use labels in the sticky note context menu for adding categorization to notes
    5. Leverage the Iconfinder to add incremental notation and icons to the event storm. Have some fun with it. 
    6. Everyone can size stickies to the same Small, Medium or Large size. Stickies default to different sizes depending upon the user's zoom level. Stickies can be resized to S, M or L by clicking the S, M or L icon in the sticky note's context menu.
    7. Use the cards and dot-voting features of Miro for prioritization and affinity mapping of ideas. 
Workstation Setup
Good luck and hit me up @rkela  (DMs open) if you want 1:1 coaching on how to do this. 

Saturday, February 1, 2020

How To Write Good User Stories - What makes a good technical story?

User stories are the currency Product Managers use to turn architecture into code. To design any system you need both technical and user-driven stories. Both styles have a place in an app modernization engagement.

Classic stories are written from the user's perspective and explain incremental business or user value. Technical stories sometimes may not have an obvious human user and/or a clear business/user value. That is OK.

Here are some tips on how to write good stories:

  1. Event Driven Architecture lends well to Gherkin style stories https://www.pivotaltracker.com/blog/principles-of-effective-story-writing-the-pivotal-labs-way and https://content.pivotal.io/blog/how-to-write-well-formed-user-stories
  2. Technical Story Writing https://medium.com/product-labs/ways-to-approach-technical-story-writing-961e0506fa13
  3. How To Write Well Formed Stories https://content.pivotal.io/blog/how-to-write-well-formed-user-stories
  4. Good reference for story writing https://www.pivotaltracker.com/blog/principles-of-effective-story-writing-the-pivotal-labs-way
  5. When working with non-engineering PMs, this is excellent guidance for writing API user stories: https://content.pivotal.io/product-managers/designing-developing-an-api-product-part-3-of-4 and https://builttoadapt.io/what-my-backend-and-api-user-stories-look-like-c5e965beb778#.vi9fi0yq4
  6.  A really good overview of different story types, including bug reports, with examples: https://www.pivotaltracker.com/blog/principles-of-effective-story-writing-the-pivotal-labs-way

Sample Technical Story

Title
[driver] service subscribes to [order-accepted] event and publishes [driver-assigned] event with dummy driver information


Acceptance Criteria
When [order-accepted] event is received by [driver] service
Then  [driver] service publishes a [driver-assigned] event with dummy driver information
And [driver-assigned] event contains the same orderId that was received from [order-accepted] event
Dev Notes
  • The [order-accepted] event will look like { "orderId":"...", "restaurantId":"...", "eventDate":"2019-08-16T15:30:30Z" }
  • The [driver-assigned] event might look like { "orderId":"...", "driverId":"...", "eventDate":"2019-08-16T15:30:30Z" }
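Not part of the original story, but as an illustration, here is a minimal Java sketch of how the driver service in this story might be implemented; the types and the publisher wiring are hypothetical, and the event shapes mirror the dev notes above.

```java
import java.time.Instant;
import java.util.function.Consumer;

// Hypothetical event payloads mirroring the JSON shapes in the dev notes.
record OrderAccepted(String orderId, String restaurantId, Instant eventDate) {}
record DriverAssigned(String orderId, String driverId, Instant eventDate) {}

// Sketch of the [driver] service: every [order-accepted] event it receives
// results in a [driver-assigned] event carrying the same orderId and dummy
// driver information, per the acceptance criteria above. The publisher is an
// abstraction over whatever broker the team actually uses.
class DriverService {

    private final Consumer<DriverAssigned> publisher;

    DriverService(Consumer<DriverAssigned> publisher) {
        this.publisher = publisher;
    }

    void onOrderAccepted(OrderAccepted event) {
        publisher.accept(new DriverAssigned(event.orderId(), "dummy-driver-1", Instant.now()));
    }
}
```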

Saturday, January 25, 2020

Application Portfolio Rationalization for Cloud Migration

How to run a workload/app migration discovery workshop:

  1. Goals (Technical and Business) for the program > (Objectives & Key Results) for the session
  2. Start at the portfolio level. Figure out how many high-level portfolios exist.
  3. For each portfolio figure out the high-level buckets of apps, case in point - JavaEE apps, .NET apps, Spring apps, PHP/Python/NodeJS…. You should create a heat map of the portfolio with the buckets. Each tile in the heat map represents a bucket, and the color + intensity shows the ease of migration. The size of the tile represents the % of the portfolio (a tiny data-model sketch of such a tile follows after this list).
  4. Now you have two choices: go broad or go deep, i.e. look at one bucket and dig deeper, or go broad and sample a couple of apps from each bucket.
  5. Settle on a couple of specific apps to start with to drill down throughout the process
    Apply business value and other org heuristics to prioritize the apps in terms of business ROI. Use a 2x2 matrix of technical effort vs. business value to determine focus. This can be done at the bucket level or the individual app level. For individual apps run a brief SNAP to get an idea of cloud suitability. A customized SNAP with heat maps is the way to go … a low-tech way of doing this is on a flipchart with blue grid easel pads, or go with Excel.
  6. What is the smallest possible thing we could do to add value? Discussion of MVPs. What will a potential AppTx engagement look like? How will we measure success and elicit feedback?
  7. If you are stuck in the paralysis phase, then do a path-to-production or value stream exercise to figure out the top constraint/problem you need to focus ON, in addition to the apps.
  8. Retrospective & Next Steps
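To make the heat map in step 3 a bit more concrete, here is a tiny, hypothetical data-model sketch (my illustration, not part of the workshop recipe): each tile carries the fields that drive its color (ease of migration) and its size (share of the portfolio).

```java
import java.util.List;

// Hypothetical model of a heat map tile from step 3: easeOfMigration drives
// the tile's color/intensity, portfolioShare (a fraction) drives its size.
record HeatMapTile(String bucket, double easeOfMigration, double portfolioShare) {}

class PortfolioHeatMap {

    public static void main(String[] args) {
        // Illustrative buckets and numbers only.
        List<HeatMapTile> tiles = List.of(
                new HeatMapTile("Spring apps", 0.9, 0.40),
                new HeatMapTile("JavaEE apps", 0.6, 0.35),
                new HeatMapTile(".NET apps", 0.5, 0.15),
                new HeatMapTile("PHP/Python/NodeJS", 0.7, 0.10));

        tiles.forEach(tile -> System.out.printf("%-20s ease=%.1f share=%.0f%%%n",
                tile.bucket(), tile.easeOfMigration(), tile.portfolioShare() * 100));
    }
}
```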

Credit to Felicia Schwartz, who developed this model ...


Bucketing and Technical Suitability of Applications
Application Migration Heatmap

Thursday, January 16, 2020

Rohit Kelapure A Year In Review 2019

So this is a bit late - but there is never a better time to retrospect and reflect. Without the support of my awesome AppTx team, peers and management at Pivotal this would not have been possible.

Rohit Kelapure - A Year In Review 2019

# Delivery

I anchored seven enterprise engagements, including one that was featured in the Wall Street Journal. I led and finished initiatives in a fifty-person solution architect team in the following areas: Training & Cross-functional Enablement, Practice Management, Internal Initiatives, Tooling, Scoping, Selling, Recruiting, Marketing, Blog Posts, Webinars and Conference sessions. See details below.

# Practice Management 

  • Cookbooks Maintenance
  • Created Healthcheck & SRE Offering 
  • App Services Anchor Bootcamp
  • Blog Series Architecture - A Pivotal Opinion (upcoming)
  • Mainframe Modernization GTM
  • Closing loops - Feedback From AppTx to R&D
  • Closing loops - Feedback to DATA, PCFMetrics, Spring Cloud Services and other R&D Teams
  • Active participation PWG-SRE practice workgroup
  • AppTx for PSR Offering

# Training & Cross-functional Enablement 

  • Kubernetes Training mini-Conference
  • So You Want To Run An AppTx Scoping
  • So You Want To Run An AppTx Healthcheck
  • Anchoring Best Practices - Things We have learnt the hard way
  • Wrote the 2-week new-hire fast-ramp onboarding for AppTx Solution Architects
  • Continuous assistance on Slack #modern-family & #app-transformation channels
  • AppTx Scoping Retro - Train the scopers 2 sessions 
  • PAL PKS Course Development 
  • Microservices Workshop DBSBank

# Pivotal Internal Initiatives

  • Google Anthos intel
  • PAA-RFC
  • Cookbooks - RFC
  • Java Devex Team

# Created Tools

  1. Pivotal App Analyzer _Refinement of Rules, Guidance_
  2. AppTx Effort Estimation Model with Steve Woods
  3. PKS SNAP
  4. Spring Bootifier with Tim Dalsing

# Recruiting - 2 Solution Architects

# Mentored/Improved - One colleague

# Conducted Scopings - 6 including commercial and federal 

# Pivotal Blog Posts

- Twitter  @rkela (750 followers) 

# Webinars (Solo, Partners & Customers)

  1. Why Your Digital Transformation Strategy Demands Middleware Modernization
  2. How to Migrate Applications Off a Mainframe
  3. Tools and Recipes to Replatform Monolithic Apps to Modern Cloud Environments 
  4. App Modernization with .NET Core: How Travelers Insurance is Going Cloud-Native 

# Conferences - SpringOne Platform 2019

- [360-Degree Health Assessment of Microservices on the PCF Platform](https://springoneplatform.io/2019/sessions/360-degree-health-assessment-of-microservices-on-the-pcf-platform)

# Certifications


# Courses