About Me

My photo
Rohit is an investor, startup advisor, and Application Modernization Scale Specialist at Google.

Friday, November 25, 2016

Current State of Persistence in Cloud Foundry and Pivotal Cloud Foundry


Below is the current status of the NFS integration with ERT (the Elastic Runtime Tile).

The ERT alpha available right now has the Cloud Controller (CC) property for volume services enabled by default. Customers can then follow the documentation to deploy the NFSv3 driver/broker pair manually.

ERT has a radio button on the Advanced tab to enable NFS volume services. When it is enabled, two things have to happen:
1 - The NFS broker is deployed as a CF app and registered with the marketplace (work to deploy the driver as an app is being tracked in the OSS backlog): https://docs.cloudfoundry.org/adminguide/deploy-vol-services.html
2 - The NFS driver is colocated on Diego Cells, disabled by default, and enabled when configured. (Windows Cells are not supported.)
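Step 1 above can be sketched with the cf CLI. The broker artifact name, credentials, URL, and service name below are placeholders, not the exact values the real release uses:

```shell
# Hypothetical names and credentials -- substitute the artifact and
# route from your nfs-volume broker build.
cf push nfsbroker -p nfsbroker.jar
cf create-service-broker nfsbroker admin some-password https://nfsbroker.my-cf.example.com
cf enable-service-access nfs    # expose the broker's plans in the marketplace
```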

In PCF 1.10, the driver can be deployed via a BOSH add-on, and the broker can be pushed separately for testing.
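A BOSH add-on colocates a job on every instance matched by the runtime config. A minimal sketch, assuming the OSS nfs-volume release naming (the release and job names may differ in your version):

```yaml
# runtime-config.yml -- colocates the NFSv3 driver on all Diego Cells
# in the cf deployment; names are illustrative.
releases:
- name: nfs-volume
  version: latest
addons:
- name: nfsv3driver
  jobs:
  - name: nfsv3driver
    release: nfs-volume
  include:
    deployments: [cf]
```

Upload it with `bosh update-runtime-config runtime-config.yml` and redeploy for the add-on to take effect.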

Sample app courtesy of Adam Zwickey.

Container persistence shipped in 1.8 as a closed beta and will GA in PCF 1.10. It is trivial to modify a PCF 1.9 deployment to enable volume services; see above. To play with it locally, use PCF Dev.

There are a number of use cases where we have to remediate a dependency on a persistent filesystem when replatforming apps to PCF. We run into these use cases in all our app replatforming engagements:
  • Sharing data across apps as an integration layer
  • Two-phase commit (2PC) transaction logs
  • Existing datasets on disk
  • Disk caches
  • Content management systems that read/write from mounted file volumes
  • Third-party dependencies and libraries that rely on persistent filesystems
  • Composite apps, where some services run on PCF and others run on the IaaS
For persistence orchestration, a new project called Volman (short for Volume Manager) is the newest addition to the Diego release. Volman is part of Diego and lives in a Diego Cell. At a high level, Volman is responsible for picking up special flags from the Cloud Controller, invoking a volume driver to mount a volume into a Diego Cell, and then providing access to that directory from the runtime container.

As of cf-242 and Service Broker API v2.10, Cloud Foundry now ships with support for Volume Services: filesystem-based data services. The v2.10 API is a release candidate and will be considered GA unless a bug in the implementation is found. An experimental version of the API was added in v2.9.

What is included in CF itself is the plumbing required to plug in driver/broker pairs that add support for specific kinds of external volumes. Support for EFS, NFS, Isilon, etc., is added through separate BOSH releases not tied to a particular CF version. In https://github.com/cloudfoundry-incubator there is a local-volume-release and an efs-volume-release. In the Persi tracker, an nfsv3 epic ("broker/driver pair for existing NFS shares that can be mounted with NFSv3") is almost complete.

Until recently, the only data services allowed were ones with network-based interfaces, such as a SQL database. With Volume Services, brokers can now attach data services that have a filesystem-based interface.

Currently, we have platform support for Shared Volumes. Shared Volumes are distributed filesystems, such as NFS-based systems, which allow all instances of an application to share the same mounted volume simultaneously and access it concurrently.

This feature adds two new concepts to CF: Volume Mounts for Service Brokers and Volume Drivers for Diego Cells. Details can be found in the links below.
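The Volume Mounts piece surfaces in the broker's bind response: alongside (or instead of) credentials, the broker returns a `volume_mounts` array that tells Diego what to mount and where. A sketch of the v2.10 shape, with illustrative values:

```json
{
  "credentials": {},
  "volume_mounts": [
    {
      "driver": "nfsv3driver",
      "container_dir": "/var/vcap/data/nfs",
      "mode": "rw",
      "device_type": "shared",
      "device": {
        "volume_id": "my-volume",
        "mount_config": { "source": "nfs://nfsserver.example.com/export/vol1" }
      }
    }
  ]
}
```

The named driver must already be running on the Cell for the mount to succeed, which is why the driver and broker ship as a pair.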
Slack: If you're interested in rolling out a volume service, please ask questions on the OSS #persi Slack channel.

Finally, if you want to play with persistence support in Cloud Foundry, check out PCF Dev. The PCF Dev team released a new version that includes the local-volume service out of the box, giving you a way to get up and running with volume services that is an order of magnitude easier than any option we had before. Here is a post detailing the steps to try out volume services with your Cloud Foundry applications.
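The local workflow then looks roughly like this; the service and plan names are my guesses at the local-volume catalog, so check `cf marketplace` for the actual names in your PCF Dev release:

```shell
cf dev start                 # boot PCF Dev locally
cf marketplace               # the local-volume service should be listed
cf create-service local-volume free-local-disk myVolume   # hypothetical plan name
cf bind-service my-app myVolume
cf restart my-app            # the app now sees the mounted directory
```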

This feature does not mean we automatically revert to using persistent mounts when replatforming applications; I look at it as another tool in our arsenal. To be clear, this is just a stepping stone to a more cloud-native architecture, in which you treat blobs (files) as a construct supported by a backing service. All app instances see a common NFS mount; each instance does NOT get its own mount, so instances have to manage consistency when reading from and writing to it.
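One way instances can coordinate writes to the shared mount is advisory file locking. A minimal sketch with `flock(1)`, using a local directory as a stand-in for the NFS mount (note that advisory locking over NFSv3 also depends on lockd/statd being available on the server side):

```shell
# /tmp/shared-demo stands in for the NFS mount in this sketch.
MOUNT=/tmp/shared-demo
mkdir -p "$MOUNT"
(
  flock 9                                        # block until the lock is free
  echo "instance-1 wrote safely" >> "$MOUNT/data.log"
) 9>"$MOUNT/.lock"                               # fd 9 holds the lock file
cat "$MOUNT/data.log"
```

Each app instance would wrap its writes in the same lock acquisition, serializing access to the shared file.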

Adam Zwickey, a Pivotal platform architect, validated the persistence feature in CF with the following steps:
1) Enable volume-services on the cloud controller. Cloud Foundry must be deployed with the cc.volume_services_enabled BOSH property set to true.
2) Deploy a volume driver colocated with each Diego Cell (using BOSH add-ons).
3) Deploy a service broker that implements the volume API.
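With those three pieces in place, consuming the volume service follows the usual create/bind flow. A sketch using the OSS nfs-volume broker's conventions (the "nfs" service and "Existing" plan names, and the share/mount values, are illustrative):

```shell
# Point the service instance at an existing NFS export (hypothetical host).
cf create-service nfs Existing myNfsVolume -c '{"share":"nfsserver.example.com/export/vol1"}'
# Bind with the uid/gid to mount as, and the path inside the container.
cf bind-service my-app myNfsVolume -c '{"uid":"1000","gid":"1000","mount":"/var/nfs"}'
cf restage my-app    # restage so the mount appears in the app container
```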

For sample apps that require disk persistence, you can employ Spring apps that leverage the @Cacheable abstraction and write the cache to disk. On app restart you should see cache hits for the content already written to disk. See 1 and 2.


Credit: thanks to Greg Oehman, Julian Hjortshoj, and Adam Zwickey.

Migrating 1TB of Data from DB2 to MySQL

I would advise that you first insulate the application against this change by making all the app services interact with the backend through the Repository pattern, i.e., put a repository abstraction in place that will allow you to switch the database internally. I also like to follow the expand-contract pattern for consumers, explained in http://martinfowler.com/articles/evodb.html, when migrating existing data.
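The expand-contract idea in SQL terms, sketched with a hypothetical customer table and MySQL syntax:

```sql
-- Expand: add the new column alongside the old one; both stay live
-- while consumers are updated to write to both.
ALTER TABLE customer ADD COLUMN surname VARCHAR(100);

-- Migrate: backfill the new column from the old data.
UPDATE customer SET surname = SUBSTRING_INDEX(fullname, ' ', -1);

-- Contract: once every consumer reads only surname, retire the old column.
ALTER TABLE customer DROP COLUMN fullname;
```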

When you refactor your application to use MySQL, leverage something like Flyway or jOOQ to manage future DB migrations.
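With Flyway, each schema change lives in a versioned SQL file that the tool applies in order and records in its history table. A minimal sketch; the table, paths, and connection setup are hypothetical, and the `migrate` step needs the Flyway CLI plus a reachable MySQL configured in flyway.conf (or via `-url`/`-user`/`-password` flags):

```shell
# Lay down the first versioned migration (V1__<description>.sql naming).
mkdir -p /tmp/flyway-demo/sql
cat > /tmp/flyway-demo/sql/V1__create_orders.sql <<'EOF'
CREATE TABLE orders (
  id    BIGINT PRIMARY KEY,
  total DECIMAL(10,2) NOT NULL
);
EOF
# Apply pending migrations if the Flyway CLI is installed.
command -v flyway >/dev/null && flyway -locations=filesystem:/tmp/flyway-demo/sql migrate || true
```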

At a raw DB2 data and schema level there are tools like DBConvert that can be used to move and sync the data.

So it's a combination of app patterns and data patterns, along with some blue/green routing magic, that will be needed to move the data from DB2 to MySQL.
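The blue/green cutover itself is route juggling with the cf CLI. App names, domain, and hostname below are placeholders:

```shell
# Deploy the new MySQL-backed version on a temporary route.
cf push myapp-green -n myapp-temp
# Start sending production traffic to green alongside blue.
cf map-route myapp-green example.com -n myapp
# Drain and retire the old DB2-backed version.
cf unmap-route myapp-blue example.com -n myapp
```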