About Me

Rohit leads the Pivotal Labs App Modernization Practice, covering engineering, delivery, training and cross-functional enablement, tooling, scoping, selling, recruiting, marketing, blog posts, webinars and conference sessions. Rohit has led multiple enterprise engagements, including ones featured in the Wall Street Journal. Rohit focuses on designing, implementing and consulting on enterprise software solutions for Fortune 500 companies, with an emphasis on application migration and modernization.

Thursday, November 2, 2017

A View on Chargeback and Billing with Pivotal Cloud Foundry

We see that when enterprises install a platform like Pivotal Cloud Foundry, one of the primary motivations is to reduce time to market. Enterprises want to fulfill their customer needs and leverage the platform and app dial tone provided by Pivotal Cloud Foundry to enable developers to push apps faster than before and rapidly experiment. There is, however, a real temptation to institute the management and onboarding practices of yore: fine-grained chargeback models that penalize innovation and count every nickel and dime of memory, network traffic, app instances and service instances on the PaaS. This defeats the purpose of PCF, which is to encourage creativity and the rapid journey of ideas to production.

# So how does one think of chargeback in a new world? How do we get a profusion of apps on the platform, gain visibility into cost, and charge reasonably?

Keep it simple. Observe, orient and act.

First and foremost, remember PCF is a PRODUCT, and when you're launching a new product, whether it's ice cream cones or a PaaS, if you don't price it right based on market conditions, then no customers will come and you'll be left with a bunch of melted ice cream or an empty PaaS. If the market is undefined, i.e. the platform engineers are not already doing chargeback for other infrastructure, any attempt to price PCF now would be a shot in the dark, more likely to over- or under-estimate the value of the product than to hit the mark. Over- and under-pricing both come with big risks in the short and long term. Put in place all the instrumentation you need to understand consumption, and do market research on what people are willing to pay. Customers don't even know the value until they start using it!



Promote usage internally first by observing usage amounts, usage patterns and costs with early adopters before you start charging other teams; otherwise adoption by dev teams may be low if the platform costs too much.

Build a model based on quotas as a starting point. Show teams the cost of their entitlement, NOT what they are actually using, similar to the AWS reserved-capacity model. Organization quotas force accountability and make capacity planning much easier, and it is pretty easy to get started. There is a cost to buying a lot, and the platform engineers should not care what the app teams do with that lot. Even if an app team deploys nothing, the platform team has still reserved whatever is in the quota. That doesn't mean the platform team can't oversubscribe resources.
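To make the entitlement model concrete, here is a minimal sketch (the flat rate and quota numbers are made up for illustration) of billing an org for its reserved quota rather than its measured consumption:

```java
// Sketch of an entitlement-based chargeback model: orgs pay for their
// reserved quota, not their measured usage. The rate is illustrative only.
public class QuotaChargeback {

    static final double DOLLARS_PER_GB_MONTH = 40.0;   // assumed flat rate

    // The monthly charge is a function of the org quota alone; actual usage
    // is irrelevant, which keeps the model simple and capacity planning honest.
    static double monthlyCharge(int quotaGb) {
        return quotaGb * DOLLARS_PER_GB_MONTH;
    }

    public static void main(String[] args) {
        int quotaGb = 100;        // the org's reserved memory quota
        int actualUsageGb = 12;   // what the org actually deployed
        System.out.printf("charge=%.2f (usage of %d GB is ignored)%n",
                monthlyCharge(quotaGb), actualUsageGb);
    }
}
```

Oversubscription then becomes a platform-team decision layered on top of the same simple model, invisible to the app teams.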

# Credits

The PCF team - Caleb, Parker, Luciano, Zac, Clayton, David, Ian and Eamon

Friday, October 13, 2017

Pushing WebSphere Application Server Applications to Cloud Foundry with Open Liberty

Thanks to a friendly nudge and significant contributions from Michael Wright, I have had the opportunity to reacquaint myself with an old friend - the WebSphere Liberty Profile Buildpack. In case you missed it, in a bold move IBM open sourced WebSphere Liberty Profile, their lightweight application server. Open Liberty, as it is called now, is an excellent landing spot for ALL current WebSphere Application Server classic apps.

To get them running with the least amount of change, WebSphere apps should typically be pushed with the WebSphere Liberty Profile Buildpack. The Liberty buildpack can be configured in myriad ways, with lots of environment variables controlling the feature set, the WebSphere server configuration and so on. Here are some best practices for a repeatable, simple process of migrating your apps to Cloud Foundry with Open Liberty:

1. It’s OK to develop on STS or Eclipse, but don’t rely on Eclipse to package the application inside the server configuration.

2. Develop your application with Open Liberty's Maven plugin. If you have an existing project, convert it to Open Liberty by cloning and copying the build pom.xml from one of the guides. This gives a very Spring Boot-like experience for JavaEE/EE4J. An alternate way of developing is to configure the server.xml for your server and then keep copying the app to the dropins folder.

3. After your application runs locally, package a Liberty server with your application in the dropins folder and the right server configuration in the usr directory. This is akin to packaging a fat jar for your WebSphere app. Creating a server package avoids the headache of configuring the buildpack with the right set of environment variables when pushing the app.
server package defaultServer --include=usr

4. When the application runs in Cloud Foundry, pay special attention to this message: 2017-10-12T22:59:27.80-0400 [APP/PROC/WEB/0] OUT [AUDIT] CWWKT0016I: Web application available (default_host): http://9e1d0c1c-a318-46eb-7760-0302:8080/war-with-resource/
This informational message reveals the context root of your application. Context root determination for an app is a multi-step evaluation based on multiple files.

The context-root attribute specifies the entry point of the deployed application. The entry point of a deployed application is determined in the following precedence:
- context-root in the server.xml file
- application.xml, if an EAR application
- ibm-web-ext.xml, if a web application
- name of the application in the server.xml file, if a web application
- Directory name or the file name relative to the drop-ins directory of Liberty
My rule of thumb is NEVER to set a context root and to manage the public URL via a Cloud Foundry route. By default the buildpack sets the context root for an ear file to "/".
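As an illustrative sketch (the app name, memory size, buildpack name and domain are all assumptions; check the buildpacks and domains available on your foundation), a manifest for pushing the packaged server might look like:

```yaml
# Illustrative manifest.yml - names, buildpack and domain are assumptions
applications:
- name: war-with-resource
  path: defaultServer.zip      # produced by: server package defaultServer --include=usr
  memory: 1G
  buildpack: liberty_buildpack
  routes:
  - route: war-with-resource.apps.example.com   # manage the public URL here, not via context root
```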

Notes of Caution 
  1. Your application does not need to be cloud native to run on Cloud Foundry. You can push zero factor apps to CF, no problem. If the application does not function correctly, look under the hood by cf ssh'ing into the container and examining the final server.xml. If your EE resources are not being served or recognized by the container, check your list of features and the corresponding configuration in server.xml.
  2. The WebSphere Liberty Profile Buildpack provisions the DEV-licensed WebSphere Liberty Profile runtime and NOT the Open Liberty runtime. To avoid licensing headaches you should fork the buildpack to provision the Open Liberty runtime instead.

In Closing

If you want to completely avoid the IBM apple cart, an excellent alternative is the TomEE buildpack, which also runs JavaEE7 Full, Web and micro-profiles as ear, war, fat and skinny jar apps. For more on this topic read a previous post, yes-we-can.

Replatforming is just the first phase of your multi-step journey to cloud native. Your ideal end state is to make the app cloud native and therefore vendor and platform agnostic, leveraging the app dial tone based on the contracts established by the platform API.

The Cost Curve of Application Replatforming


Tuesday, October 10, 2017

PCF Is the Best Place to Run Spring Apps


I forgot to mention the auto-patching of vulnerabilities by the buildpack. Instead of the developer managing all the middleware layers, the CF buildpack curates and maintains this list.

Sunday, October 8, 2017

Emergent Systems and the need for Chaos Architecture

The ideas below are an amalgamation of key signals from Adrian Cockcroft, Neal Ford, Matt Stine, Russ Miles, Michael Nygard and the rockstar engineers of Netflix who have pioneered Chaos Engineering.

For the long term survival of your microservices system, some key concepts have now come together, chiefly: anti-fragility, continuous partial failure and evolutionary architecture.

Since all of us seem to be building networks of distributed microservices, there is no way to observe the emergent behavior of these systems in a test environment. We HAVE to run controlled experiments in production. Chaos engineering and a chaos-friendly architecture are critical for enterprises to maintain availability of their applications and survive breaches. Adrian Cockcroft, in his recent Cloud Native London keynote, espoused four layers, two teams and an attitude.

Chaos Engineering is the discipline of experimenting on a distributed system
in order to build confidence in the system’s capability
to withstand turbulent conditions in production.


Chaos engineering is the best continuous, holistic way to evaluate a distributed system's ability to withstand impact and external perturbation. So at what levels can chaos engineering be applied?

Layer-1: Infrastructure: Lay out the infrastructure so that there is no single point of failure (SPOF): multiple zones and regions, with the app distributed enough times and in enough ways - diversity in the infrastructure.

Layer-2: Switching and interconnect: Have a strategy for interconnecting. No SPOF means data in more than one place, which requires data to be replicated to a different part of the world: data needs to live on more than one disk, in a different building. Routing needs to transparently handle failover across datacenters. Unfortunately, DR routing/failover is the least well tested set of components in the system; usually all the error handling code explodes on impact. Once the fallen datacenter comes back up, there is a need to re-route and re-synchronize - to introduce anti-entropy back into the system. It is critical to regularly test failover to the backup datacenter and to test HA/DR across datacenters properly, instead of practicing availability theater.

Layer-3: Application layer: What does the app do when it experiences data loss, network connectivity failures, timeouts, error returns, slow responses, network partitions, or an app hang where it goes 100% busy?
Single functions and microservices can be tested to do one thing; Lambda is easy to develop for and is the unit of testing and deployment. Monoliths lead to combinatorial testing with lots of variations.

Layer-4: People: When machines misbehave, people really screw it up; usually folks make it worse. There are countless stories of systems thrashed by operators due to a comedy of errors - Chernobyl. Rebooting may be the wrong thing to do when you have services. It is super-important to practice gamedays, similar to how children in kindergarten practice fire drills. Play out fire drills so that when there is an actual fire people take the right action - disaster preparedness. Practice, practice, practice.

Tools that attack the different layers:
  1. Game days - exercise an outage: the right way for everyone to behave, folks on a call, how to find the dashboards, etc., as well as digging into details
  2. Simian Army - tests once a month
  3. FIT - deep injection of failures; ChAP, the Chaos Automation Platform
  4. Gremlin Inc. - automates chaos engineering scenarios and gamedays, with an undo button; safer with automation
  5. There is an excellent catalog of tools at the end of the Chaos Engineering ebook
Two Teams
  1. Security Blue Team/Security Red Team (breaks into your site)
  2. Chaos Engineering Team/SRE Team

Companies offer services to make systems secure and resilient:
  • AttackIQ
  • SafeBreach
  • Spear phishing simulators

Attitude - Improve your chaos posture
- The O'Reilly Chaos Engineering book - ChAP and the Chaos Maturity Model give a roadmap to improve your chaos game.
- A Chaos Engineering community day is coming up in London - this is becoming a thing.

If you want to create a system with five nines (or better) availability, it is important to establish a Chaos Engineering practice that keeps the team safe and the whole stack reliable.

Attitude

Go run a gameday, with people experienced in simulating outages and easy cleanup.
Start at the top layer and work your way down.


Tuesday, October 3, 2017

Pushing Zero Factor Apps to Cloud Foundry

Oftentimes I hear the refrain that Pivotal Cloud Foundry is only suitable for cloud native apps that adhere to the 12 or 15 factors. This is like saying that Spring is XML-centric in 2017, or that Java EE is heavyweight in 2017. Cloud Foundry is a big tent platform that is suitable for running all kinds of workloads across languages and runtimes.

In this post, let's disprove the myth by looking into what it takes to push a 0-factor, or cloud-ANGRY, app to Pivotal Cloud Foundry. On a slight tangent, the definition of cloud native keeps expanding. First we had the 12 factors from Heroku. Thereafter we added authentication/authorization, monitoring and API First, making it 15. Recently Adrian Cockcroft espoused the following cloud native principles: 1. Pay as you go 2. Self service - no waiting 3. Globally distributed by default 4. Cross-zone/region availability models 5. High utilization - turn idle resources off and 6. Immutable code deployments.

So here it goes ....

1. One Codebase, one application
Violation of this rule leads to multiple applications emerging from a single codebase or mono-repo. Why is this pattern so bad? Because it leads to anti-patterns where updating one dependency causes a cascade affecting all other apps built from the same repo, akin to the domino effect. If you can sustain the pain of building all your apps from one repo - with its cascade of inter-dependent updates - you don't need multiple repos. You can violate Conway's law if you have a complete holacracy; in fact, Google stores billions of lines of code in a single monorepo. Cloud Foundry does NOT care if your app was built from one repo or sourced from multiple repos.

2. API First
This principle states that built into every decision you make and every line of code you write is the notion that every functional requirement of your application will be met through the consumption of an API. You could of course write code without interfaces or APIs, or before achieving market fit. At the extreme end you would ONLY build the API if you needed it, when there were multiple consumers and customers necessitating it. If you have only one consumer, then why have a consumer-driven contract? Talk to the other team and establish a partner relationship. Again, Cloud Foundry does NOT care if you are API first or API last. As app owners you have to live with the implications of this decision, not the platform.


3. Dependency Management
Software engineering practice dictates that we should explicitly declare and isolate dependencies. Most modern apps put a premium on independence, i.e. they package all the downstream dependencies, including the application server, in a fat jar. You don't rely on the implicit existence of system-wide packages or mommy servers like WebSphere. On the flip side, you can definitely let the application server bring the majority of the dependencies, or use a method of layering with multi-buildpacks to inject your dependencies. Cloud Foundry has multiple options for pushing apps, including ear, war, jar and Docker files. Buildpacks like the Liberty Profile, JBoss or TomEE buildpacks will gladly allow you to keep your app lean and source all the dependencies from the application server classpath. So instead of a fat jar you can have the buildpacks create the mommy server.

4. Design, Build, Release and Run
Strictly separate the build and run stages. Build one artifact and configure it at runtime for the environment: Release = (Build + environment-specific settings + app-specific configuration). This principle allows for auditability and rollback. In the absence of separation between the build and run stages, you are implicitly acknowledging that rollback is never an option and that you don't need to track your past releases; code and features are always moving forward in production, and everyone lives and codes on the master branch. You could completely violate this rule by packaging separate artifacts for every environment. As long as you have an ITIL process to validate such a process and the appropriate org structure, it is possible to build apps per environment and throw them over the wall. This severely defeats the purpose of devops and makes the life of service reliability engineers ridiculously difficult, but as long as you are using the environment variables from the right org and space, you could pull this off with zero changes to the app in Cloud Foundry.

5. Configuration, Credentials and Code
Ideally you should keep this holy trinity separate and bring the three together ONLY at runtime in the platform; however, I contend that in 80% of cases these three stay together and can be pushed together to Cloud Foundry. Of course the app is less secure and more brittle, since configuration is hardwired and credentials are hardcoded, but the app can still function correctly on Cloud Foundry. There is no explicit requirement that code be separated from credentials or that configuration be externalized. As long as the properties are correct and location independent, the app will work with configuration bundled within the app.
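A minimal sketch of this tolerance (the property names are illustrative): the app prefers externalized configuration when the platform provides it, but still functions with the configuration it shipped with:

```java
// Sketch: resolve a setting from the environment first (12-factor style),
// falling back to a value bundled inside the app (the zero-factor shortcut).
// The property names used here are illustrative.
import java.util.Properties;

public class ConfigResolution {

    static String resolve(String envKey, Properties bundled, String bundledKey) {
        String fromEnv = System.getenv(envKey);
        // Externalized configuration wins when the platform provides it ...
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv;
        }
        // ... otherwise the app still works with what it shipped with.
        return bundled.getProperty(bundledKey);
    }

    public static void main(String[] args) {
        Properties bundled = new Properties();
        bundled.setProperty("db.url", "jdbc:h2:mem:legacy"); // hardwired in the jar
        System.out.println(resolve("DB_URL", bundled, "db.url"));
    }
}
```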

6. Processes: Any state that is long lasting outside the scope of the request must be externalized. This allows for processes to be treated like cattle and not pets. This constraint can be violated by keeping long running state like caches, sessions and other user data across requests. As long as you can live with the risk of the occasional user seeing errant 500 responses when the server goes down this drawback can be tolerated. A side-effect of keeping state within the container is that your JVM sizes will be atypically large and rebalancing your app across diego cells will become time consuming.

7. Port Binding
The platform performs port management and assigns the container a port through the environment, instead of the port being hardcoded in the server configuration. It is difficult to directly violate this principle, since Cloud Foundry is responsible for port assignment and the creation of routes to the app instance. Using container-to-container networking you may be able to work around this constraint.
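A minimal sketch of honoring the platform-assigned port, using the JDK's built-in HTTP server for illustration (CF supplies the PORT variable; the 8080 fallback is only for local runs):

```java
// Sketch: bind to the port Cloud Foundry assigns via the PORT environment
// variable instead of hardcoding one in server configuration.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PortBinding {

    static int resolvePort() {
        String assigned = System.getenv("PORT");   // set by the platform
        return assigned != null ? Integer.parseInt(assigned) : 8080; // local fallback
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(resolvePort()), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("listening on " + server.getAddress().getPort());
    }
}
```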

8. Concurrency
Cloud apps should scale out using the process model. There is nothing in Cloud Foundry stopping you from scaling vertically instead of horizontally. Go crazy and push with an -Xmx of 16GB if your Diego cells have the capacity.

9. Disposability
Ideally your app should not take more than 30s to start up or shut down; one should always maximize robustness with fast startup and graceful shutdown. However, if your app does a ton of startup initialization, like priming caches or loading reference data, then tune the Cloud Foundry CLI to increase the push and app start timeouts. If your application is BIG then you can cf push with --no-start and then start it separately. When your app takes significant time to start up and shut down, it messes with the auto scale-up/scale-down model of CF, as well as with uptime guarantees when Cloud Foundry redistributes your apps as new Diego cells are on-boarded.
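For example, an illustrative manifest fragment for a slow-starting app (the app name is made up; the platform caps the health-check timeout, commonly at 180 seconds):

```yaml
# Illustrative manifest fragment for a slow-starting app.
applications:
- name: cache-priming-app      # assumed name
  timeout: 180                 # health-check timeout in seconds; platform-capped
```

The cf CLI's own staging and startup waits are separately tunable via its CF_STAGING_TIMEOUT and CF_STARTUP_TIMEOUT environment variables (in minutes).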

10. Dev/prod parity 
Try to keep development, staging, and production as similar as possible. This is like the advice to exercise an hour every day. Don't worry if your environments are lop-sided as long as there is some semblance of proportionality and the same PCF tech stack runs in every environment. 

11. Logs
You should treat logs as event streams, unless you don't care about observability or your logs are full of exceptions and totally useless. If logging at INFO is so verbose as to be of no use during debugging, then don't worry about streaming the logs; phone it in by persisting the logs to ephemeral disk, or cheat by mounting an NFS volume via volume services and writing the logs there.

12. Admin Processes
If you don't run admin/management tasks as one-off processes, then you will need to embed conditional logic and APIs so as to externally trigger the admin task. You will need to map the same app under a different route to trigger a singleton instance of the management task.

13. Telemetry
Instrumenting the app for telemetry is only useful if you are going to watch the metrics and logs and take meaningful action. In the absence of app telemetry, rely on PCF Metrics to provide app insight, or on the built-in support of Spring Boot actuators, which may be good enough for you.

14. Authentication and Authorization
Do not rely on death star security to secure your apps. Ignore this principle at your own peril. Rely on the principle of least trust and industry standard protocols like OAuth2 and OIDC, or risk becoming the next Equifax.

15. Backing Services
Treat backing services as attached resources. If you do violate this constraint, then you need stateful constructs like NFS mount points via volume services, and you will inject the bound service configuration in a CI pipeline. In a legacy app, a lot of the backing service dependencies are already captured in a resources.xml and surfaced via JNDI. Rely on buildpack auto-configuration magic to rewire these resources to cloud services automatically at cf push time.
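As a sketch of where the platform surfaces bound-service credentials: CF injects them via the VCAP_SERVICES environment variable. Real apps should use a JSON library, Spring Cloud Connectors, or buildpack auto-configuration rather than this illustrative regex; the service name and URI below are made up.

```java
// Sketch: read a bound service's URI from VCAP_SERVICES. The regex is for
// illustration only - use a real JSON parser or Spring Cloud Connectors.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BackingServiceLookup {

    static String extractFirstUri(String vcapServicesJson) {
        Matcher m = Pattern.compile("\"uri\"\\s*:\\s*\"([^\"]+)\"")
                           .matcher(vcapServicesJson);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // In CF the platform sets this variable when a service is bound.
        String vcap = System.getenv("VCAP_SERVICES");
        if (vcap == null) {   // local fallback for illustration
            vcap = "{\"p-mysql\":[{\"credentials\":{\"uri\":\"mysql://user:pw@host:3306/db\"}}]}";
        }
        System.out.println("backing service uri: " + extractFirstUri(vcap));
    }
}
```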

Understand the tradeoffs and constraints and push zero factor apps to the cloud. Happy Hunting.

Wednesday, September 27, 2017

A Radar for App Transformation

My team at Pivotal is hiring Solution Architects anywhere in the USA. Please NOTE: applications via Greenhouse are NOT entering a black hole; our recruiters look at all CVs. We work with customers to enable digital transformation and effect change by implementing the three pillars of devops: people, process and technology. In concrete terms, we help customers migrate applications and workloads to Pivotal Cloud Foundry.

To give you an idea of our tools, practices, replatforming and modernization I have created a Thoughtworks inspired technology radar for application transformation. First some definition of terms to set the context for the categories of the radar

*Replatforming* - Make minimal testable changes to an application so as to run it on PCF using an automated CI/CD pipeline. Replatforming typically entails Spring Boot-ifying an application by introducing Spring Boot starters and upgrading the dependencies of the application to work with the modern Spring ecosystem. We also typically inject data sources for downstream dependencies and fix other application concerns like logging. As in TDD, we make the smallest set of changes to get the application working without affecting any external APIs or contracts. We add tests along the way, as well as other production enablement features like Spring Boot actuators. There are other less-trodden paths to replatforming as well, using alternate buildpacks; however, Spring Boot and Spring Cloud are the happy path to replatforming. In a typical replatforming engagement we replatform on the order of 10-15 apps to production and leave the customer with a cookbook of reusable recipes.

*Modernization* - Make testable changes to an application (or a vertical slice of an app) to make it "run well" (achieving 15 factor compliance) on PCF. This may mean decomposing a monolith into a set of microservices aligned with their bounded contexts. Modernization is an exercise in making an application cloud native, and it is applied to apps of all sizes, ranging from hundreds to millions of lines of code. Defining the initial scope of the work here is key to success. Pivotal practices modernization with a set of techniques - DDD, event storming, SNAP analysis, vertical slice analysis, the Boris wheel and OKRs - that narrow the scope of the work to intersect with the customer's goals. We engage with the customer in XP fashion, with TDD and pairing being mandatory practices. A successful app transformation engagement unleashes a wave of such engagements across the company.

*Practices* - Our core values and fundamental techniques of XP that we do NOT compromise on. This covers a whole suite of practices like TDD, Pair-programming that we practice every day when delivering code.

*Tools* - A suite of web applications, hard and soft hacks,  the ecosystem of libraries and frameworks and the platform used to deliver app transformation.

Find the Radar here http://bit.ly/apptx-radar


Friday, September 22, 2017

HOW NOT TO DDD

Recently at the exploreDDD conference I gave a lightning talk on How Not To DDD.
Here are the 7 ways you should NOT do Domain Driven Design(DDD).

1. Dysfunctional DDD

Doing DDD has no effect if you do not effect organizational change that aligns with the bounded contexts.

2. Honeymoon DDD

The activities of DDD are only effective when they are done in a group to enable cross-discipline collaboration. Event storming for 7 days by two people locked in a room is a honeymoon, not event storming.

3. Cargo-Cult DDD

Using DDD terms but not really understanding the "why" of practicing DDD. I usually steer clear of folks who do this, unless they are about to do real harm to the project.

4. DDD-Lite

Relying on the warm wooly blanket of tooling and IDEs to enforce all your DDD constructs. Generating package names and a proper project structure does not a DDD make. Practice DDD in the actual model and not on the surface.

5. Everybody in the Pool DDD 

When the going gets tough instead of refining the model abandon ship and start merging bounded contexts or have transactions span boundaries.

6. DDD Heaven

You have entered this state when your Process Managers act like God objects and start orchestrating across domains and bounded contexts.

7. Event Source/CQRS Everything DDD

Yes it is true that the ceremonies of DDD like event storming naturally lead you to messaging and event driven architectures. This does not mean that you must automatically resort to event sourcing /CQRS to implement your design. Saving state in a database and exposing APIs for event driven state transfer is perfectly fine. 





Thursday, September 21, 2017

How To Deal With XA Global Transactions in the Cloud

For decades developers have relied on global distributed XA transactions with a two-phase commit protocol to coordinate state updates across different resource types. Application vendors fueled this drug addiction by building increasingly powerful transaction managers and introducing extensions like last participant support. The XA protocol and the strong consistency provided by transactions do not model real life, where eventual consistency abounds.

In a distributed system built in the microservices architectural style and deployed on the cloud, the implicit guarantees provided by the default rollback of the XA protocol no longer work. In distributed systems, the uncertainty of the state has to be explicitly dealt with by the application. This inconsistency in a distributed application is dealt with by a saga or a process manager. Note that sagas and process managers are NOT antithetical to DDD.

Data consistency in a microservices system cannot be implemented with distributed transactions. Use the saga pattern to string together a series of local transactions with compensation events. Using sagas or process managers correctly will not violate your context boundaries. In fact, using a higher-level state machine will allow you to focus more on the domain concerns rather than bike-shedding over the implementation (re-sequencing, de-duplication) of an event driven architecture.
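The saga mechanics described above can be sketched in a few lines of plain Java (the Step type and its names are illustrative, not a framework API):

```java
// Minimal saga sketch: a sequence of local transactions, each paired with a
// compensating action. If step N fails, compensations for steps N-1..1 run
// in reverse order instead of a global rollback.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class SagaSketch {

    interface Step {
        void execute();       // a local transaction in one service
        void compensate();    // the business-level "undo" for that transaction
    }

    // Returns true if the whole saga committed, false if it was compensated.
    static boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (RuntimeException e) {
                while (!completed.isEmpty()) {
                    completed.pop().compensate();   // undo in reverse order
                }
                return false;
            }
        }
        return true;
    }
}
```

Note that a compensation is a new forward action with business meaning (release the reservation, issue the refund), not a rollback of record locks.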

In today’s distributed world, consider global transaction managers an architecture smell. See Kevin Hoffman's article on this topic:  Distributed Transactions in a Cloud-Native, Microservice World.

Pat Helland in his updated paper Life Beyond Distributed Transactions states
In a system that cannot count on distributed transactions, the management of uncertainty must be implemented in the business logic. The uncertainty of the outcome is held in the business semantics rather than in the record lock. This is simply workflow. Nothing magic, just that we can’t use distributed transaction so we need to use workflow.
Another excellent source of insight on this topic is Josh Long and Kenny Bastani's excellent book Cloud Native Java; see "JTA and XA Transaction Management" in Appendix A, "Using Spring Boot with Java EE".

Distributed transactions gate the ability of one service to process transactions at an independent cadence. Distributed transactions imply that state is being maintained across multiple services when they should ideally be in a single microservice. Ideally, services should share state not at the database level but at the API level, in which case this whole discussion is moot: REST APIs don’t support the X/Open protocol, anyway! There are other patterns for state synchronization that promote horizontal scalability and temporal decoupling, centered around messaging. You’ll find more on messaging in our discussion of messaging and integration.
From https://www.safaribooksonline.com/library/view/Cloud+Native+Java/9781449374631/part03ch02.html#messaging

With event-carried state transfer or event sourcing, we can use the saga pattern and design compensating transactions for every service with which we integrate and any possible failure conditions, but we might be able to get away with something simpler if we use a message broker. Message brokers are conceptually very simple: as messages are delivered to the broker, they’re stored and delivered to connected consumers. If there are no connected consumers, the broker will store the messages and redeliver them upon connection of a consumer.
Message brokers have their own, resource-local notion of a transaction. A producer may deliver a message and then, if necessary, withdraw it, effectively rolling the message back. A consumer may accept delivery of a message, attempt to do something with it, and then acknowledge the delivery—or, if something should go wrong, return the message to the broker, effectively rolling back the delivery. Eventually both sides will agree upon the state. This is different than a distributed transaction in that the message broker introduces the variable of time, or temporal decoupling. In doing so, it simplifies the integration between services. This property makes it easier to reason about state in a distributed system. You can ensure that two otherwise non-transactional resources will eventually agree upon state. In this way, a message broker bridges the two otherwise non transactional resources.
If you want to do distributed transactions in Spring, with and without XA, take a look at this article from David Syer and the code samples that go along with it on how to do non-XA distributed transactions with Spring.

Best Efforts 1PC Pattern
The Best Efforts 1PC pattern is fairly general but can fail in some circumstances that the developer must be aware of. This is a non-XA pattern that involves a synchronized single-phase commit of a number of resources. Because the 2PC is not used, it can never be as safe as an XA transaction, but is often good enough if the participants are aware of the compromises. Many high-volume, high-throughput transaction-processing systems are set up this way to improve performance.
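A plain-Java sketch of the ordering the pattern prescribes, with illustrative Database and Broker types (this is a sketch of the idea, not Spring's transaction-synchronization machinery): the messaging work is enlisted inside the database transaction, and the database commit is the single real commit point.

```java
// Sketch of the Best Efforts 1PC ordering: do the message send inside the
// database transaction and commit the database last. If the DB commit fails
// after the send, a stray message may escape - consumers must be idempotent.
// All types here are illustrative stand-ins for real resources.
public class BestEffortsOnePhase {

    interface Database {
        void begin();
        void insertOrder(String orderId);
        void commit();   // the single commit point; may throw
    }

    interface Broker {
        void send(String message);
    }

    static void placeOrder(Database db, Broker broker, String orderId) {
        db.begin();
        db.insertOrder(orderId);
        broker.send("order-created:" + orderId); // inside the DB transaction
        db.commit();                             // commit the riskier resource last
    }
}
```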

So in summary
  • Prefer resource-local, best-effort, single-resource transactions over global transactions
  • Prefer Spring’s @Transactional variant over EJB’s @javax.ejb.TransactionAttribute or JTA’s @javax.transaction.Transactional
Moving away from distributed transactions requires an adjustment in mindset to make use of new paradigms. Move away from JTA to an architecture and implementation style where you model the domain uncertainty and make the best effort to model and handle it in the business logic instead of delegating this core responsibility to the underlying platform. 



Resources:
  1. Daniel Frey has created a demo app to illustrate the use of the JpaTransactionManager to handle database and JMS “rollbacks”: distributed-tx



Tuesday, September 5, 2017

Porting WebSphere Datapower to Pivotal Cloud Foundry


Locked in and married unto death to WebSphere DataPower? There is a way out, and you can emerge stronger. Read on ...

Authors

- [Rohit Kelapure](https://www.linkedin.com/in/rohitkelapure/)
- [Elena Neroslavskaya](https://www.linkedin.com/in/neros/)

Replatforming 

Run DP in PCF with the native Docker support.
_Docker image of DP_
https://hub.docker.com/r/ibmcom/datapower/
https://developer.ibm.com/datapower/2016/11/09/using-datapower-for-docker-in-ibm-container-service/

OR

Modernization 

  • Scenario 1: XML processing with XSLTs at wire speed. Incrementally migrate XSLT flows from DataPower to Spring Boot microservices, leveraging Java streaming XSLT 2.0/3.0 processors like [SAXON](http://www.saxonica.com/welcome/welcome.xml) and [Altova]. The Spring 5 and Spring Boot 2 reactive support will allow us to process all logic reactively, in a non-blocking fashion, from end user to server. If the XSLT transformations involve business logic, then migrate those business rules to a rules engine like Drools, or orchestrate the data-rules microservices logic with Spring Cloud Data Flow.
  • Scenario 2: Authentication and Authorization, SSL Offload, Security vectors - HTTP threat protection: This is modernized and replaced via a combination of federated authn/authz + Web firewall and networking edge devices with NSX-T or NSX-V.
  • Scenario 3: mini-ESB protocol transformation and governance: Use Spring Integration or Camel, lightweight implementations of EAI patterns, to transform data.
  • Scenario 4: Caching: Leverage Redis, Pivotal GemFire, or Pivotal Cloud Cache to cache data alongside the app and not in the DMZ.
  • Scenario 5: Data integration, connectors to mainframes and other EAI systems: Leverage z/OS Connect 2.0 web service support on the mainframe to expose REST APIs for your z/OS backend subsystems like IMS, CICS, DB2, ... or use Java connectors to IMS and CICS. Implement data microservices that consume these APIs and/or use the connectors.
  • Scenario 6: API Gateway: Replace with a custom route service with a software layer 7 service aware router and load balancer like Netflix Zuul or leverage an integrated solution like [Apigee](https://apigee.com/about/blog/developer/edge-microgateway-pivotal-cloud-foundry-technical-updates) or [Istio](https://content.pivotal.io/blog/pivotal-and-istio-advancing-the-ecosystem-for-microservices-in-the-enterprise)
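The XSLT migration in Scenario 1 can start very small. The sketch below is illustrative only: it runs a stylesheet with the JDK's built-in XSLT 1.0 processor; for the XSLT 2.0/3.0 flows mentioned above you would swap in Saxon's `TransformerFactory` instead of the default one.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// A hypothetical service class wrapping one XSLT flow lifted out of
// DataPower: stylesheet in, XML in, transformed output out.
public class XsltService {

    public static String transform(String xml, String xslt) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xslt)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)),
                        new StreamResult(out));
            return out.toString();
        } catch (TransformerException e) {
            throw new IllegalStateException("transform failed", e);
        }
    }
}
```

In a Spring Boot microservice this method would sit behind a controller endpoint, one stylesheet per migrated flow.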

Recommendation:

We recommend following the path of modernization instead of replatforming, since dockerizing DP will not yield the full benefits of a cloud native architecture. Modernizing instead of replatforming will, over the long term, end the dependence on DP and replace it with more scalable and widely available software, people, and processes.

WHY

  1. Running Docker images will give you IaaS efficiencies, dropping OPEX costs
  2. Separation of responsibilities. Smaller teams can implement and own individual service capabilities without having a central god governance and transformation point. Choreography instead of orchestration. Smart endpoints, dumb pipes.
  3. Developer productivity is higher in Java than in XML-based code.
  4. Setting up a CI pipeline and integrating other DEVOPS tools is easier with microservices.
  5. Debugging is easier in Java code than in XSLT transforms
  6. Ramp up of developers and provisioning of environments becomes a lot easier.
  7. If DataPower is not being used for data transformation, connectors to backend systems, caching, or authentication and authorization, then strangling the XSLT transforms can be done in a phased, systematic fashion, since the true value of DP is not being leveraged.


Thursday, August 17, 2017

YES WE CAN - you can push JavaEE apps to CloudFoundry

We all know and love the Java Buildpack, the workhorse for pushing the majority of our applications to Cloud Foundry. There is also another gem of a buildpack called the TomEE buildpack. As you can guess from the name, it is a close cousin of the Tomcat buildpack, with the added enhancement that it supports the Java EE Web Profile and Full Profile*, and supports the push of EAR files.

Wait a Minute !!!!!!!!!!!!!!!!!

We can push Web Profile and Full Profile applications* packaged as EAR and WAR files to Cloud Foundry, and not just plain vanilla Spring apps that run on Tomcat?

Yes siree bob ....

These are the buildpacks we often use in replatforming to move JavaEE apps to Cloud Foundry with minimal changes.


So why go through all this rigmarole and not just push Docker images?

Well, I contend that pushing EAR and WAR files is better than pushing well-formed Docker images, because a proper Docker CI image pipeline starts looking like what a buildpack does. So skip all the preamble and discovery and leverage the power of buildpacks. Why transmogrify your app to include OS bits and layers? Deal with the currency you are familiar with, i.e. JAR, WAR, and EAR files.

All these buildpacks also have magic in the form of auto-configuration to wire and map your Cloud Foundry-bound managed and CUPS services into existing data sources, so that JNDI lookups in your application source don't have to change. This allows external service configuration to be consumed seamlessly by your data and messaging layers.

Finally if everything else fails then there is always Docker ... 

You have my attention now. Which buildpacks should I use?

Well, that depends on two things: 1. what is in the apps, and 2. which app server they are coming from.

In general, if possible, we recommend bootifying your application, leveraging the most useful framework components of the Java EE stack, and running your app using the Java Buildpack. If this is not feasible, then your first step is to cf push the app using the buildpack of your application server. This will minimize the changes needed to your application.

Thereafter I would proceed to nuke ALL the server-specific deployment descriptors so that you can run the app on a generic EE server like TomEE or Payara. If you don't like buildpacks and prefer the fat jar or uber jar approach instead, then bundle the app server within the app and push using the Java Buildpack.

Well now you have me thoroughly confused ... 

Don't worry here is a picture that will sort you out ...




I end this blog on a cautious note - There are NO silver bullets in software development. 

The benefits from the cloud are maximized by the agility gained from running lighter-weight, smaller-scale, well-bounded cloud native apps. Moving monolithic apps to the platform without modernization will yield a benefit that you should invest back into modernizing your application along the 15-factor continuum.


* Note: Full Profile app support in TomEE is not the default. You have to do some acrobatics to bundle the right TomEE distribution into the TomEE offline/online buildpack.
* Also note that some aspects of Java EE will NOT work in the cloud. For instance, if there are any 2PC transactions, then those transaction managers will obviously not work on a platform with ephemeral containers and file systems.


Wednesday, August 16, 2017

Pushing Docker images to Pivotal Cloud Foundry

Everyone thinks that Cloud Foundry does NOT support Docker images. Well, here is your periodic reminder that CF, and by extension PCF, does support pushing Docker images from both public and private Docker registries. Start by reading these links: Using Docker in Cloud Foundry and Deploy an app with Docker.

Let's push a sample batch WebSphere Liberty Profile application to PCF. This batch application lives at https://github.com/WASdev/sample.batch.sleepybatchlet

SleepyBatchlet is a simple sample batchlet for use with feature batch-1.0 on WebSphere Liberty Profile. batch-1.0 is Liberty's implementation of the Batch Programming Model in Java EE 7, as specified by JSR 352. The batchlet itself is rather uninteresting. All it does is sleep in 1 second increments for a default time of 15 seconds. The sleep time is configurable via batch property sleep.time.seconds. The batchlet prints a message to System.out each second, so you can easily verify that it's running.

A WebSphere Liberty Profile image was built using the following repo: https://hub.docker.com/_/websphere-liberty/ with the following Dockerfile and server.xml configuration. Please note that the majority of the Dockerfile comes FROM https://github.com/WASdev/ci.docker/blob/master/ga/developer/kernel/Dockerfile which EXPOSEs ports 9080 and 9443.

Note the following stanza in the server.xml 
<httpEndpoint id="defaultHttpEndpoint"
              host="*"
              httpPort="9080"
              httpsPort="9443" />

The WebSphere Liberty Profile application is listening on ports 9080 and 9443. Cloud Foundry by default ONLY routes to one HTTP port. When launching an application on Diego, the Cloud Controller honors any user-specified overrides such as a custom start command or custom environment variables. To determine which processes to run, the Cloud Controller fetches and stores the metadata associated with the Docker image. The Cloud Controller instructs Diego and the Gorouter to route traffic to the lowest-numbered port exposed in the Docker image. So in this case Diego, the Gorouter, and the Cloud Controller collaborate to automatically route traffic to port 9080 and ignore port 9443.

The Dockerfile simply copies the built application into the config/dropins folder of the Liberty profile, drops the server.xml into the config folder, and configures Liberty to install the right features needed at runtime. It's useful to look at all the Docker caveats as you compose the Dockerfile. Note that you can only copy from the current Docker context into the image and cannot COPY or ADD paths starting at /.

Commands to build and push the Docker image:

docker build -t jsr352app . 

First run the app locally using the following command:

 docker run -d -p 80:9080 -p 443:9443 --name jsr352 jsr352app    

and validate output with
 docker logs --tail=all -f jsr352 

The container's IP address can be found with the following command

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' jsr352

    

Push the built Docker image to a public Docker registry like DockerHub using the following instructions
docker push kelapure/jsr352app
The image can be found publicly at https://hub.docker.com/r/kelapure/jsr352app/


You can now push the app to CF and watch logs during the push
cf push batch-app --docker-image kelapure/jsr352app 

Some other options that are helpful when debugging failed Docker image pushes are the -u and -t options, which disable the health check and increase the staging/app start timeout, respectively:
-t 300 and -u none

cf logs batch-app

It will take a while (probably 5 minutes or so) for the app to start, and the cf command line output will look like this:


0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started
OK

App batch-app was started using this command `/opt/ibm/docker/docker-server run defaultServer`

Showing health and status for app batch-app in org pivot-rkelapure / space development as rkelapure@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: batch-app.cfapps.pez.pivotal.io
last uploaded: Tue Aug 15 21:12:17 UTC 2017
stack: cflinuxfs2
buildpack: unknown


Multiple Application Ports

Please NOTE: if you want to expose multiple ports in PCF you will need to use experimental APIs and follow these steps
  1. Push the WAS/Docker image
  2. Add the additional port via:
    cf curl "/v2/apps/<App GUID>" -X PUT -d '{"ports":[8080, 9060]}'
  3. Create a new, un-bound, route: cf create-route...
  4. Map the 2nd, new route to port 9060
    cf curl "/v2/route_mappings" -X POST -d '{ "app_guid": "<App GUID>", "route_guid": "<Route GUID>", "app_port": 9060}'
The Cloud Foundry community is actively working on adding support for multiple app port routing. You will find a WIP proposal on multiple custom app ports. Credit: Derek Beauregard.

Resources

  1. https://tecadmin.net/remove-docker-images-and-containers/
  2. https://github.com/WASdev/ci.docker.tutorials/tree/master/liberty

Tuesday, August 15, 2017

CRAP - Complexity Remediating Application Proxy

CRAP is a term that has negative connotations; however, in our world of application replatforming and modernization it is the most used application remediation pattern, shielding the complexity of an external domain from your own domain and allowing the microservices model of your core domain to remain pure. CRAP is a specific instantiation of an anti-corruption layer that bridges cloud native apps to non-cloud native apps.

Permit me a quick segue here into terminology ...

So what the hell is a cloud native application? There are two definitions* of cloud native apps that I really like:

1. A cloud-native application is an application that has been designed, architected and implemented to run on a Platform-as-a-Service installation and to embrace horizontal elastic scaling. Cloud native architectures take full advantage of on-demand delivery, global deployment, elasticity, and higher-level services. They enable huge improvements in developer productivity, business agility, scalability, availability, utilization, and cost savings.

2. A cloud native application is an application that satisfies ALL 15 factors that define the DNA of Highly Scalable, Resilient Cloud Applications.

Note that neither of these definitions is mine: 1 is from Kevin Hoffman and 2 is from Adrian Cockcroft.

What is a non-cloud or anti-cloud application? An application that cannot and does NOT want to run in the cloud, and cannot be easily remediated to run well in the cloud, is a cloud-angry or anti-cloud application. Big COTS packages and other monstrosities like WebSphere Commerce and WebSphere Portal from IBM and ORCL generally fall into this category. Cloud-angry apps can range from mammoth application servers to small shared kernel libraries that rely on assumptions about a particular OS or file-system characteristics.

Saturday, May 13, 2017

Application TLS, SSL, Mutual SSL with Cloud Foundry

_With significant contributions from the Pivotal Services Team (Caleb, Anwar, Biju, Shaozhen, Zac, Mark)_

It is critical that applications encrypt data in motion. For a refresher on WHY this is important, read Justin Smith's post on a more practical approach to encrypting data in motion. In Cloud Foundry, TLS inbound to an app is terminated either at the load balancer or at the Gorouter. Outbound mutual (two-way) TLS to a remote endpoint from an application in Pivotal Cloud Foundry (PCF) requires that the app in PCF trusts the server certificate and that the remote endpoint trusts the client certificate presented by the PCF app. A two-way trust needs to be established. In Java this is done using keystores and truststores.

KeyStore : A Java KeyStore (JKS) is a repository of security certificates – either authorization certificates or public key certificates – plus corresponding private keys, used for instance in SSL encryption.

TrustStore: A truststore is used to store certificates from trusted certificate authorities (CAs), which are used to verify certificates presented by a server in an SSL connection. There are no private keys in the truststore.
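Both stores are instances of the JDK's KeyStore API. A minimal in-memory sketch (the class and method names below are mine, and the certificate line is commented out as a placeholder): a truststore is just a KeyStore holding certificate entries and no private keys.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

// Create an empty JKS truststore, persist it, and reload it. In a real app
// the store would be packaged with the app or supplied via an environment
// variable, and CA certificates added with setCertificateEntry.
public class TruststoreSketch {

    public static int entryCount(char[] password) {
        try {
            KeyStore trustStore = KeyStore.getInstance("JKS");
            trustStore.load(null, password);   // start from an empty store
            // trustStore.setCertificateEntry("my-ca", caCert);  // trusted CAs
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            trustStore.store(out, password);   // persist (here: to memory)
            KeyStore reloaded = KeyStore.getInstance("JKS");
            reloaded.load(new ByteArrayInputStream(out.toByteArray()), password);
            return reloaded.size();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```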
  
Typically, JVM keystores and truststores are manipulated using the keytool. In Cloud Foundry these keystores cannot be manipulated directly within the JVM, since the JVM is provisioned by the buildpack. You can supply the JVM keystores and truststores 1. within the app, 2. using environment variables, or 3. with a third-party trusted intermediary like Vault. So how does one achieve this? Here are some projects that will enable you to achieve mutual two-way SSL, also known as client authentication, between client and server with apps in Cloud Foundry.

Spring Boot client authentication demo: Packages the keystore (certs and private key) and truststore within the app and loads them with getClassLoader().getResource
https://github.com/viniciusccarvalho/boot-two-way-ssl-example 

Fork of the Spring Boot client auth demo: Packages the keystores (certs and private key) and truststores within the app and specifies their location using environment variables.

Cloud Foundry Certificate Truster: When certificates are to be downloaded from a remote location and are not available at startup, CloudFoundryCertificateTruster will download certificates and add them to the JVM truststore at the earliest possible time. This can be forked to load certs from any 3rd-party trusted store like Vault. Override L160.
https://github.com/pivotal-cf/cloudfoundry-certificate-truster

MutualAuthAutoConfiguration: _coming soon_ The Pivotal services team (@zacbergquist, @biju) have written a Spring Boot starter to automatically append certs to the internal trust store. We intend to make this repo public soon. MutualAuthAutoConfiguration modifies the app SSL context based on a base64-encoded keystore and password specified as properties.

Monday, April 10, 2017

Breaking apart monoliths and transitioning to a microservices driven API-First world

Q. When do we modify an existing service for a new consumer vs. build a new service for that consumer? For example, if a service is returning data a certain way and a new consumer comes along and wants to see the data in a slightly different way (with potentially more data), do we modify the existing service or build a completely new service?

When a new consumer for an existing service comes along, we build an adapter that adapts the data for the new consumer. We modify an existing service's aggregates and entities based on the business invariants of the domain. We never modify our internal model for an external service. Splitting a monolith is all about identifying seams. There are two primary approaches to figuring out the seams of your application, each with accompanying advantages and pitfalls.

Top-Down: This is an approach of decomposition driven from the top, using tactical DDD techniques like Event Storming to identify the bounded contexts and their respective context mappings. Usually this yields a desired model of microservices for the monolith. Translating from the current messy state to the desired state is like landing a plane on a runway, requiring rigorous practice of Lean engineering principles (MVPs, iteration, feedback). The exercises of event storming and modeling with DDD typically result in buzzword-compliant CQRS and Event Sourcing implementations. CQRS/ES are super difficult to implement in greenfield, let alone as a bridge from brownfield apps. It's important to keep focus on the goals of breaking the megalith and not get enamored by the new and shiny.

Bottom-Up: This approach of decomposition is driven by the current pain points of the monolith, for instance separating the UI from backend processing, or separating batch from real-time processing. Here you are letting technology and your existing domain expertise break the app apart. DDD and domains inform your decomposition; however, this is a code-led spelunking effort. Another way I have seen this implemented is that a vertical slice of business capability is carved out of the monolith, and this vertical slice of function is then used to modernize all layers of the technical stack, including the web and backend tiers. Code-driven decomposition is made difficult by the fact that humans can keep only so much information in their heads at one time. It is very easy to get lost in the forest, keep walking in circles, and NEVER emerge out of the woods. Techniques like TDD, iterative development, and the Mikado method will help keep you on the right path out of the forest.

Q. What are our service versioning strategies when making changes to an existing service that supports multiple clients?
When a service is used by multiple clients, ideally ALL changes to the service should be backwards compatible. If this is not possible, then implement [parallel change]. Semantically version all changes and evolutions to the service schema and APIs. There may be times when you need to make a breaking change; when this happens, you need to ensure that you never do anything that forces your API consumers to fix their code. It is important to establish an API versioning strategy: 1. establish proper expectations that the API will change; 2. the API is a contract and cannot be broken with a new version release. API versioning will follow [semver] guidelines, i.e. non-breaking changes result in a minor version bump and breaking changes result in a new major version. API versioning can be implemented using 1. resource versioning, 2. URI versioning, or 3. hostname versioning. [api-versioning-when-how]
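The semver rule above reduces to a very small compatibility check (a simplified sketch; the class name and the two-argument shape are mine, and pre-release and patch handling are ignored): a client built against one version can keep calling the API as long as only the minor version has moved.

```java
// Minimal illustration of the semver compatibility rule: a major-version
// bump signals a breaking change, a minor bump does not.
public class ApiVersion {

    public static boolean isBackwardsCompatible(String served, String client) {
        int servedMajor = Integer.parseInt(served.split("\\.")[0]);
        int clientMajor = Integer.parseInt(client.split("\\.")[0]);
        return servedMajor == clientMajor;  // same major = still compatible
    }
}
```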

Q. How do we coordinate multiple teams switching over to a modernized service at different times?
The key here is to insert [consumer driven contracts]. Each team onboarded establishes a consumer-driven contract with the supplier service. This gives us fine-grained insight and rapid feedback when the modernized service needs to plan changes and assess their impact on applications currently in production. The contracts established here serve as insurance when new teams onboard or when the modernized service evolves.

Q. As we are in the process of modernizing a service (this process could take multiple years), it's possible that new requirements come along that need to be implemented. How do we effectively identify that these requirements need to be implemented in both the legacy service and the modernized service?

You could follow a couple of policies here
  1. Never modify the legacy service. All new function ONLY gets added to the modernized service, with suitable bridges, adapters, and anti-corruption layers to the legacy service.
  2. First modify the modernized service and then take the lessons and apply them to the legacy code ideally as a standalone component or module of the legacy system.
  3. Leverage feature flags allowing you to turn off features in the legacy service once the feature is completely migrated to the modernized service.

Q. What is the migration strategy for cutting over clients to the modernized service?  For example, today we usually incrementally switch clients over to a new service, usually by jurisdiction. Is this an effective strategy?

Introduce a layer of abstraction. Have both services implement the facade. Gradually switch clients to the modernized service, which implements the same facade as the old code. Clients can be migrated by any grouping criteria. Use techniques like dynamic routing with API gateways, blue/green deployment, context path routing, and canary releases to reduce the impact of the cutover to the modernized service. Use feature flags to control the flow of inbound clients.
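The abstraction layer can be sketched in a few lines (the `QuoteService` interface and implementations are invented for illustration): both the legacy and the modernized code satisfy one interface, and a flag, in practice a feature flag or a router rule rather than a constructor argument, decides which implementation serves a given client.

```java
// One facade, two implementations: clients only ever see QuoteService, so
// cutting them over never requires a client-side code change.
interface QuoteService { String quote(String id); }

class LegacyQuoteService implements QuoteService {
    public String quote(String id) { return "legacy:" + id; }
}

class ModernQuoteService implements QuoteService {
    public String quote(String id) { return "modern:" + id; }
}

public class QuoteFacade implements QuoteService {
    private final QuoteService legacy = new LegacyQuoteService();
    private final QuoteService modern = new ModernQuoteService();
    private final boolean cutOver;   // stand-in for a feature flag

    public QuoteFacade(boolean cutOver) { this.cutOver = cutOver; }

    public String quote(String id) {
        return (cutOver ? modern : legacy).quote(id);
    }
}
```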

Q. How do we manage the migration of data from the legacy services to the modernized services?  Some of our tables have millions of records and hundreds of columns.  

[Branch-by-abstraction] enables rapid deployment with feature development that requires large changes to the codebase. For example, consider the delicate issue of migrating data from an existing store to a new one. This can be broken down as follows:
  1. Require a transition period during which both the original and new schemas exist in production
  2. Encapsulate access to the data in an appropriate data type. Expose a facade service to encapsulate DB changes.
  3. Modify the implementation to store data in both the old and the new stores. Move logic and constraints to the edge, i.e. the services.
  4. Bulk migrate existing data from the old store to the new store.
  5. This is done in the background in parallel to writing new data to both stores.
  6. Modify the implementation to read from both stores and compare the obtained data. Implement retries and compensations. Apply cataloged database transformation patterns like data sync, data replication, and data migration.
  7. Leverage techniques like TCP Proxy for JDBC to understand the flow of data and transparently intercept traffic. Use Change Data Capture tooling to populate alternate datastores.
  8. When convinced that the new store is operating as intended, switch to using the new store exclusively (the old store may be maintained for some time to safeguard against unforeseen problems).
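Steps 3 and 6 above can be sketched with hypothetical in-memory stores standing in for the two databases (all names here are mine): during the transition the facade writes every record to both stores, keeps reading from the old one, and compares the answers to build confidence in the new store before cutover.

```java
import java.util.HashMap;
import java.util.Map;

// Dual-write facade for the transition period: the old store remains the
// system of record until the comparison phase shows the new store agrees.
public class DualWriteFacade {
    private final Map<String, String> oldStore = new HashMap<>();
    private final Map<String, String> newStore = new HashMap<>();

    public void save(String key, String value) {
        oldStore.put(key, value);   // authoritative write
        newStore.put(key, value);   // shadow write to the new store
    }

    public String read(String key) {
        String fromOld = oldStore.get(key);
        String fromNew = newStore.get(key);
        if (fromNew == null || !fromNew.equals(fromOld)) {
            // In a real system: log the divergence and trigger reconciliation.
        }
        return fromOld;             // old store stays authoritative until cutover
    }
}
```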
Managing Persistence: You will need to choose between creating a new DB or letting the old and new implementations share the same datastore. Separating the DBs is more complex if you need to keep them in sync, but it gives you a lot more freedom. If your old and new applications share a datastore, you'll need to build a translation layer to translate between the old and new models. If you give your old and new applications separate datastores, be prepared to invest a lot of effort in tooling to synchronize the two DBs. If your DB synchronization mechanism writes directly to the DB, be careful you don't violate any assumptions the application makes about being the sole writer. [Re-engineering Legacy Software]. Splitting data for microservices involves breaking foreign key relationships and managing constraints in the resulting services rather than at the database level. For shared mutable data you may need to split the schemas while keeping the service together, before splitting the application out into separate microservices. By splitting the schemas but keeping the application code together, we can revert our changes or continue to tweak things without impacting any consumers of our service. Once we are satisfied that the DB separation makes sense, we can then think about splitting out the application code into two services. [Refactoring Databases]

Q. What happens when a journey/business capability team has a service that multiple teams want to use?
Establish appropriate provider and consumer contracts with downstream consumers and expose a consumable API. The downstream consumers will conform to the model exposed by the desired Journey services.

Q. What is our strategy for figuring out who the existing clients are?
Insert transparent proxies into the routing flow to determine all the downstream consumers. Leverage edge entry controller patterns like bridge, router, proxy, facade, and backends-for-frontends.

Q. What are some technical issues we may run into when a legacy service tries consuming a next generation service?
Model mismatch, Mapping and translation, data duplication, unnecessary hops, data consistency.

Q. Which services should we target for modernization ?
Modernization has to start somewhere, and there are various starting points. You should avoid analysis-paralysis and quickly start learning to inform the refactoring of the rest of the code. Perhaps a core domain that is upstream of a number of services would be a good starting point.

Q. We currently operate on a monthly release cycle.  At any given time, we will have 8 different environments to support 2 different monthly releases. We will not be able to completely break away from this release schedule for years.
Understand that this is more of a DevOps issue. You need to transform the value chain following this playbook created by Josh Kruck:
  1. Identify a single product to work with / go after
  2. Put all the people responsible for the thing together (design, dev, qa, arch, pm etc), permanently
  3. Identify the thing that 1. is done most often and 2. is repeated most often (use a 2x2)
  4. Fix it; the solution can totally be a one-off as long as you learn from it
  5. Repeat steps 3 and 4.

Q. What does the dialog look like with the current consumers of legacy services when we are trying to move them to a modernized capability?
Surface the pain first. Talk to them about existing pain points and integration down the road. Provide a roadmap of expected changes to the API and policies for evolving the service. Establish provider and supplier contracts and a protocol for communication that will survive schema evolution.

Q. We have hundreds of different service operations today. So far our strategy has been to iterate over each of these operations based on a very focused, isolated use case and eventually reach out to other clients to understand their needs.
You should take the time to examine these discrete operations and find opportunities to align and refactor them along bounded contexts. Consumers need to call Car.start() and not Car.getEngine().start(). Tell the API to carry out a capability rather than orchestrating discrete flows with data.
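The Car example above, as code (a toy sketch of the tell-don't-ask principle): the API exposes the capability and keeps the engine an internal detail, so callers cannot orchestrate its parts.

```java
// The engine is package-private and never exposed via a getter, so
// Car.getEngine().start() is impossible: callers must tell the Car to start.
class Engine {
    private boolean running;
    void start() { running = true; }
    boolean isRunning() { return running; }
}

public class Car {
    private final Engine engine = new Engine();  // internal detail

    public void start() { engine.start(); }      // tell, don't ask

    public boolean isStarted() { return engine.isRunning(); }
}
```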

Q. Performance testing strategies across the entire ecosystem
Unit tests, Gatling performance tests, WireMock tests, service virtualization with Hoverfly, synthetic tests in production, SOAP-UI tests, Selenium WebDriver tests, integration tests, functional tests, stress tests, chaos tests, PEN tests, user acceptance tests, A/B tests. [see]

Sunday, February 26, 2017

NFSv3 volume services in PCF 1.10

Today's blog is another guest post from Usha Ramachandran

Introduction

Volume services support has been available as part of open-source Cloud Foundry and enables applications to connect to shared file systems. With volume services, an operator can deploy a number of service brokers and volume drivers to connect to a variety of file systems. Details can be found here. With PCF 1.10 we are adding support for deploying the NFSv3 driver/broker pair directly through the ERT tile (see How-to below). Customers can continue to deploy other driver/broker pairs by following the OSS documentation.

Use cases

This feature is targeted towards bringing new apps to Cloud Foundry that were previously unsupported because they have a file system dependency. Key use cases include:
  • Legacy lift and shift
    • File system as a transient store
    • content and config store
    • third party modules that cannot be rewritten
  • Applications that require a file system interface for interactions
    • Pipeline jobs - inbox/outbox
  • Content Management Systems
    • content and config store
  • Enterprise shared volume
    • Collaboration and auth/z

Use cases to avoid:

  • Replacing a database as a backing service
  • Greenfield apps that could use an object store
  • Local host persistence (only NFSv3 is supported)
  • Running database software as an app instance

How-to

  1. An operator can enable the NFSv3 volume service by selecting it in the “Advanced” tab while deploying Elastic Runtime. 
  2. When the operator selects this option, an NFSv3 driver is deployed on every cell in the deployment. 
  3. In addition, a broker is pushed to the system domain. The service then has to be enabled by the operator for all orgs and spaces, or for specific orgs. 
  4. Applications can now volume mount existing NFSv3 shares.

Restrictions

  • This is only applicable to Linux, no Windows support
  • Docker apps have not been tested
  • Read-write support (read-only support is untested)
  • Access-control is left to the app developer, the user binding to the service picks a UID to use with the NFS server (No LDAP integration)
  • NFSv4 is not supported which also means that EFS is not supported
  • No HA support for the service broker (deploy one instance of your service broker)