Sunday, May 14, 2017

Enterprise World Wide Cloud - Notes from the Oracle #PaaSForum 2017



This must be one of the last posts on #PaaSForum 2017 that anyone has written. Too late, but I hope not too little, as it gives us an opportunity to reflect on the material presented there. I heartily thank Jürgen Kress for organising this event on a grand scale, at a beautiful location (Split, Croatia this year), and for his invitation, warm welcome and hospitality. Thanks also to his colleagues from Oracle for their help in organising the event and for sharing their deep knowledge. Further, we gained a wealth of information from leading Oracle professionals from all over the world - I felt privileged to meet so many of them in person! We started the event on a positive note (Jürgen literally asked us to take our jackets off and get hands-on with some cutting-edge Oracle technology!) and ended it full of enthusiasm, watching the sunset over the Adriatic Sea with drinks and conversation.



Although I learnt a lot in the various presentations, please don't assume that every idea or interpretation below is endorsed by Oracle or by the partner presenters who spoke on the topics I have chosen to write about.
Reviews and comments are very welcome (especially if you find inaccuracies in my writing).
Notation guide: Numbers in [square brackets] refer to items in the References section. 

Oracle Management Cloud (OMC) and Application Performance Monitoring (APM)
APM has been a maturing discipline over the past decade - what is different about the Oracle Cloud is that APM sits within the wider OMC suite. We cannot fix what we don't know about, so as a first step OMC "agents" aggregate all the relevant information from multiple sources and display these different dimensions on standard dashboards. Operators (possibly helped by some machine-learning capability in the background) can look for anomalies to fix, optimise their infrastructure and applications, and continue monitoring. Technology-wise, it uses simple agents that are installed on the consumer side and gather all kinds of important statistics from different levels, such as infrastructure, operating system, application servers and applications. Operational monitoring teams can then use this wealth of information to debug situations, perform root cause analyses and so on.
Since I had an ongoing customer case with a similar requirement, Volker, one of the Oracle OMC sales team, whom Jürgen introduced me to, very kindly set up a demo account for me - he then gave me a very detailed walk-through of the products and I was able to easily install the agents on my test environment.

After installing the APM agent and updating my startWeblogic script to reference it, my OMC dashboard now shows a consolidated view from across the system (Windows, FMW):
set JAVA_OPTIONS=%JAVA_OPTIONS% -javaagent:%DOMAIN_HOME%\apmagent\lib\system\ApmAgentInstrumentation.jar
Lucas Jellema, who also introduced this at #PaaSForum in his presentation, has written a more detailed post on installing APM in your environment.
I think this quote is from Lucas' presentation, and it sums up the approach that the OMC set of technologies follows: to find a needle in a haystack, we need to build the haystack first!


Agents are also available for other platforms and applications, such as Node.js.
Once the platform is installed and operational, the real work starts - in the initial stages of a product release, one can see useful statistics such as slow transactions, unusual or unexpected groups of errors, warnings and stuck threads - things that would otherwise be either invisible or hidden away in logs. As these issues are resolved, the system functions more smoothly, and over time operations will have built up an "expected profile" of how the system looks in optimal shape - so any deviations get flagged up immediately, and any new release has a benchmark for comparison.
For critical or unusual incidents, the log explorer is able to automatically group together related log entries and other information to help operational support (something similar to searching by ECID - except that Oracle have deliberately kept it technology agnostic to allow monitoring of various kinds of technologies).
Imagine this - if you can correlate performance of a specific business process or application to metrics in APM, optimising application performance can provide direct, tangible benefits in business revenue!

The Integration Cloud Service (ICS) is good technology for getting simple, stateless integration flows up and running fast, without the hassle of managing infrastructure and platforms, AND at a predictable cost. The ICS platform further offers a rich catalogue of pre-built adapters for many SaaS applications. A complete SaaS integration was done hands-on on our lab days (days 4 and 5 of #PaaSForum 2017 offered intense hands-on labs by some of the leading technology professionals from Oracle, such as Niall Commiskey and Deepak Arora of A-Team fame).

Deepak also gave a good presentation on his experiences and lessons learned in the industry.
One of the other notable presentations on ICS was by Robert van Mölken, who shared his experiences and workarounds from delivering particularly complex integrations with complex data mappings and transformations. Having a book on Oracle ICS published (co-authored with Phil Wilkins) while the technology is still being changed and developed is quite a fine achievement! Robert also very supportively watched my 1,458,217th attempt at debugging my own Oracle ICS adapter.

The SOA Cloud Service is the full-blown Oracle SOA Suite in the cloud - perfect for organisations that already have stateful service orchestrations running on-premise. With proper risk evaluation and security assessments, you can reduce the cost of running and maintaining the platform in-house and focus specifically on tasks that deliver business value.
For customers who wish to adopt a proper business process management (BPM) approach with BPMN and multi-organisation or multi-departmental workflows, the Process Cloud Service (PCS) is the tried and tested offering from Oracle.

APIs have been in the limelight lately, but as a concept an API simply means "Application Programming Interface", and APIs have been around for as long as computers have been programmable! If there is software (or hardware) that offers business value, an API is the gateway to unlocking that value. Seriously - even the assembly code instructions that a microcontroller acts on are an API. And API-first or contract-first development is simply good software engineering practice.
I think what changed in recent years is:
  • Organisations - both software vendors and buyers - discovered the value in having clean, interoperable interfaces. (Software engineering has long preached contract-first development as a prerequisite for "high cohesion, loose coupling".)
  • The advent of web services (SOAP and now REST) and their popularity helped expose disparate systems first to other, previously non-interoperable systems and now to the cloud.
  • API lifecycle management has matured as a field, and there is value to be derived from monetising APIs, enforcing policies and usage agreements (see the definition of a usage agreement in the Oracle SOA Reference Architecture [5]) and monitoring API usage.
  • The industry has realised the need for documented, visible catalogues of organisations' digital capabilities.
The Oracle API cloud service fills many of the gaps in this area for end-to-end API lifecycle governance and value realisation. It is no wonder that Oracle acquired Apiary a while back. Having theoretical knowledge of the above needs is one thing, but having tools to govern and enforce them is the logical next step.

Beyond #PaaSForum - actors in the cloud landscape and the future...
According to the US standards body NIST's Cloud Computing Reference Model [1], the key actors in the cloud space are: Cloud Consumer (CC), Cloud Provider (CP - SaaS and PaaS product providers), Cloud Auditor, Cloud Carrier and Cloud Broker (CB - much like today's systems integrators).
The role of the Cloud Broker is quite clearly defined - the NIST definition states:
An entity that manages the use, performance and delivery of cloud services, and negotiates relationships between Cloud Providers and Cloud Consumers
This includes some or all of the following tasks: assessing consumer requirements and determining whether they are best served by a SaaS/PaaS/iPaaS offering, a hybrid combination thereof, or a simple custom solution (after all, one size never fits all). Where a central EAI (enterprise application integration) system or a well designed SOA initiative would allow organisations to benefit from their choice of different back-end/cloud products, the systems integrator, in the "Cloud Broker" avatar, now takes that to the cloud level. Further, various enterprise business processes are likely to span multiple products - SaaS and on-premise. Orchestrating these together into coherent business processes, with appropriate levels of security, operational monitoring and business analytics, is going to be exciting and fulfilling in terms of the business value delivered to the industry by effective use of technology.

This role must not be taken lightly - without adding unnecessary detail, I would only like to state that "unnecessary" complexity costs more in the long run, not to mention the increased security risks. I added this statement purely based on the wide range in quality of the in-house SOA/ESB/EAI (now microservices) implementations I have seen over the years. With the decisive and inevitable move towards the cloud (and interclouds coming soon), the implications of bloatware and spaghetti could manifest at a larger scale, with worse consequences.
As Vikas Anand of Oracle pointed out in his presentation (and I paraphrase):
Software as a Service (SaaS) can quickly degenerate into "Silo as a Service" if not properly integrated.
This is why Cloud Brokers and PaaS platforms such as Oracle ICS/SOA CS and PCS have key roles to play. 
We don't hear much about Cloud Carriers yet, but they are a key actor in ensuring high quality, reliable and secure connectivity between Cloud Consumers and Providers, and in achieving true inter-cloud architectures. NaaS (Network as a Service) also seems to be an emerging offering - or rather a sub-offering under the wider IaaS umbrella - to watch out for.


Chatbots
Last but not least, chatbots were a popular theme at #PaaSForum 2017. The original reason for the hyped interest in chatbots was their potential as a stepping stone to mobile channels without having to develop and maintain mobile apps upfront. In addition to the basic chat dialogs that most people are familiar with, Facebook (and possibly other platforms too) offers support for menus within chat windows, which is a simple and effective way of exposing functionality to customers.
Unfortunately, what also happened was the positioning of chatbots (perhaps as a result of overzealous marketing?) as a customer service tool with conversational ability. We are far from that in terms of how NLP (Natural Language Processing) has progressed, and this expectation has led to some disappointment as well. Moreover, in many contexts it will remain desirable for humans to provide interactive support for a long time to come - AI is not close to "artificial general intelligence" yet (which is what would be needed for a really human-level chatbot experience), and many risks, including the ethical[6] implications, are not yet fully understood.

In terms of simple keyword matching and a menu-driven interface for customers, I think the technology might already be ready to deliver good value in chatbots - this screenshot is from my interaction with a demo chatbot. This bot was also used in Frank Nimphius' presentation on chatbots, and one can still look it up on Facebook.

Another impressive presentation on chatbots was by Léon Smiers. I found the ideas genuinely innovative - he has since written an article about it which introduces a "bot maturity model": a set of capabilities that chatbots should have, and a staged approach to getting there. Technically, there are many gaps in "open" NLP platforms (proprietary platforms, such as the one used in Apple's Siri, are much more advanced).

Léon's categorisation of desirable capabilities in the "integration" category led me to think: we will have to go beyond NLP and machine learning - we are going to have to feed the outputs of NLP into rich domain knowledge models or "ontologies", such as those defined in RDF triples. The benefit is that, in addition to simple querying (via SQL/XQuery/XPath), more advanced graph queries and reasoners can be run on such data to draw inferences. This would be key to interpreting user inputs correctly, querying back-end sources/APIs effectively and providing rich responses to users.
Only then will these and similar systems deliver the level of artificial intelligence we aspire to. This is still a maturing field with exciting possibilities. Also see [2] and [4].
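To make this concrete, here is a minimal sketch of my own (not from any presentation - the vocabulary and data are invented purely for illustration) of how a structured query produced by an NLP layer could be run against an RDF domain model using Apache Jena:

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;

public class OntologyLookup {
    public static void main(String[] args) {
        String ns = "http://example.com/shop#"; // invented namespace
        Model model = ModelFactory.createDefaultModel();

        // Assert one toy triple: shop:order42 shop:hasStatus "SHIPPED"
        Resource order = model.createResource(ns + "order42");
        Property hasStatus = model.createProperty(ns, "hasStatus");
        order.addProperty(hasStatus, "SHIPPED");

        // An NLP layer might map "where is my order?" to a graph query like this
        String sparql = "PREFIX shop: <" + ns + "> "
                + "SELECT ?status WHERE { shop:order42 shop:hasStatus ?status }";
        try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println("Order status: " + results.next().get("status"));
            }
        }
    }
}

The same model could also be handed to one of Jena's reasoners to infer facts that were never explicitly asserted - which is the step beyond plain keyword matching.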

My conclusion is that it is possible to draw some value from the combination of technologies currently available or planned (such as Oracle's Intelligent Bot Cloud Service - IBCS), and they offer a compelling option for many organisations to bypass mobile apps altogether. Keep in mind, though, that this introduces Facebook (or another similar "front-end" chat-enabled platform) as an additional stakeholder, and any analysis needs to take this into account. Many companies simply offer a chatbot interface within their own mobile apps or websites - this way they control the platform, which might be an important consideration for many organisations.

Posts about #PaaSForum or topics covered there

References

Terminology & Acronym Soup
I know most readers might be familiar with these acronyms; I have simply developed the habit of creating a glossary next to everything I write. As I noted these down while writing, I might as well share them:
1. SOA - Service Oriented Architecture
2. EAI - Enterprise Applications Integration
3. CC / CP/ CB - Cloud consumer/provider/broker
4. PaaS - Platform as a service (Oracle offerings such as SOA-CS, ICS, Java CS etc. that provide the foundations on which customers can develop applications). SaaS offerings in turn include pre-developed applications that Cloud Consumers can subscribe to and use
5. iPaaS - integration platform as a service (Such as Oracle ICS)
6. OMC - Oracle Management Cloud: cloud based, consolidated enterprise monitoring with offerings such as APM, infrastructure monitoring, log aggregation and monitoring. 
7. APM - Application Performance Monitoring: a set of products, now increasingly available as cloud offerings (such as the Oracle APM offering that is part of the OMC family), that provide insights into applications' technical performance.
8. NLP - Natural Language Processing
9. Intercloud - See https://en.wikipedia.org/wiki/Intercloud
10. ECID - Execution context ID. Each entry logged in Weblogic diagnostic logs is stamped with an ECID so that related log entries can be grouped together to trace a request-response flow

Tuesday, December 06, 2016

Progress with the Oracle Integration Cloud Adapter SDK

In the past few days, I have been making some progress with using the ICS Cloud Adapter SDK. 
Today, I created my first shell adapter - the design time views can be seen below!

The journey so far: 
 * Installation of all the offline material [Check]
       Gotchas to note here: the step to install SDK patches wasn't required for 12.2.1 (the version I was on).
 * Reading through the documentation [Ongoing]
 * Developing the empty adapter and deploying it for design time and runtime [Check]

We have identified a number of integration use-cases. If all goes well, these will be available for a wider rollout, helping customers implement complex integrations with some important cloud services in "hours, not months", in keeping with the Oracle ICS philosophy (and in line with DRY software engineering)!

More coming soon...

Tuesday, October 25, 2016

WS Security - enabling passwordDigest authentication in an Oracle FMW environment

Objective:
To have a basic level of authentication on web services (especially where there's no transport layer security) without having to pass clear text passwords in the WS Security headers. 

Background:
The concepts are fairly generic, but this post is highly specific to Oracle Fusion Middleware/SOA Suite. There can be a complex decision tree involved (see [1]) when selecting the 'appropriate' level of security for any system. As security involves trade-offs between cost, performance, usability and other variables, the 'appropriate' level of security can be highly specific to the environment, use-case, system and people. But as developers, we can still perform some due diligence based on the tools and knowledge available to us.

My rule of thumb when developing a traditional web service or microservice is: If it's reading from a secure database or some system that is accessible only via authentication, it must only expose a secure endpoint. 

Now, sites can differ considerably, and so does the definition of what "secure" is.
When exposing an HTTP endpoint (SOAP or REST) hosted on the cloud or accessible over the Internet, one would as a minimum ensure that it is served over TLS and has authentication enabled.

In an on-premise hosted solution, traditionally HTTPS has not been widespread within organisations, and web service endpoints meant for internal consumption have most commonly been exposed over plain HTTP - hopefully accompanied by infrastructure-level measures (firewalls, DMZs etc.) that ensure the data or service is only accessible inside a 'trusted' network.

Even in a trusted network without TLS, it is probably best if passwords aren't floating around in clear text (which is what the default UsernameToken with passwordText policies do).
With a few steps, one can enable passwordDigest authentication, which not only protects the password in transit but also provides protection against replay attacks (if the nonce and creation time properties are set in the SOAP header as well).
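For reference, the UsernameToken profile defines the digest as Base64(SHA-1(nonce + created + password)). Here is a small, illustrative Java sketch of my own (not Oracle code) of what a client such as SOAPui computes:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Base64;

public class PasswordDigestDemo {
    public static void main(String[] args) throws Exception {
        byte[] nonce = new byte[16];                 // random, per request
        new SecureRandom().nextBytes(nonce);
        String created = Instant.now().toString();   // UsernameToken creation time

        // PasswordDigest = Base64(SHA-1(nonce + created + password))
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(nonce);
        sha1.update(created.getBytes(StandardCharsets.UTF_8));
        sha1.update("myCleartextPassword".getBytes(StandardCharsets.UTF_8));

        // These three values travel in the wsse:UsernameToken header;
        // the password itself never does.
        System.out.println("Nonce (Base64) : " + Base64.getEncoder().encodeToString(nonce));
        System.out.println("Created        : " + created);
        System.out.println("PasswordDigest : " + Base64.getEncoder().encodeToString(sha1.digest()));
    }
}

Because the nonce and creation time feed into the hash, a captured digest cannot simply be replayed later - which is where the replay protection comes from.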

Steps:
  • The basic steps are listed in the Oracle guide [2].
  • For step 9.3.3, what I do is create a new policy pair based on oracle/wss_username_token_service_policy.
This is done via /em -> WeblogicDomain -> Web Services -> WSM Policies

Search for oracle/wss_username_token_service_policy and copy it to create a new one with the settings for passwordDigest applied (as per step 9.3.3 of the Oracle guide).

The ones I created are singhpora/wss_UsernameToken_PasswordDigest_service_policy and singhpora/wss_UsernameToken_PasswordDigest_client_policy (based on oracle/wss_username_token_client_policy), and I keep these source-controlled. (An additional benefit of putting them in source control is that developers can import them into their local JDeveloper policy store for design time, and also promote their initial versions or changes across environments from development through to production, much like any other artefact.)
  • Another associated step is to have the oracle.wsm.security map and basic.credentials key present on the server that will use the client policy (you can use custom map and key names if required). This needs to contain the username(s) and password(s) of the users who are allowed to invoke the web services that use the UsernameToken policies. (You can follow the principle of least privilege when assigning a group to these users.)
Impact:

This can clearly be seen when invoking the service via SOAPui. 
If your service uses a UsernameToken with PasswordDigest policy (like the one I shared above), SOAPui can be used to test it, as it can automatically set the required security headers.
If you look at the SOAPui logs before applying the passwordDigest policy (e.g. when your service uses a UsernameToken with password text based authentication policy, as in the default setup), this is how the password component of the UsernameToken is created:

Unless your service is accessible only over SSL/TLS, this means you have passwords flowing around the network in clear text. Most corporate IT policies would, I believe, specifically forbid letting passwords float around like this, and yet this can often go unnoticed and unaddressed.

After applying the UsernameToken with passwordDigest policy, this is how the WS-Security headers get created:


Only the client and the server now know what the password is; no one in the middle can see it.

Tradeoffs
* To enable digest authentication, the server has to store passwords in clear text, as per the documentation (for the default authenticator to work - if you have more stringent requirements, it is possible to write your own authenticator that reads passwords from an encrypted credential store). The reason is that with digests, the client (such as SOAPui in the above example) creates a hash of the actual password, creation time and nonce - the server on its side has to create the same hash for successful authentication, and this requires the server to know the clear text password.
But this is still acceptable, as the stored passwords can be contained behind strict administrator control - way better than having clear text passwords travelling over the network.

References:
[1] Decisions and choices involved when selecting the appropriate security policy: https://docs.oracle.com/middleware/1221/owsm/security/choose-owsm-policy.htm#OWSMS3988

[2] Setup steps for enabling digest authentication: https://docs.oracle.com/middleware/1212/owsm/OWSMS/configure-owsm-authentication.htm#OWSMS5450

Updates:
16-May-2017: Note about possibility of custom authenticator in the Tradeoffs section (prompted by Jason Scarfe's comment)

Managing shared metadata (MDS) in a CI environment

Goals and Summary:
* Package shared metadata in a SOA environment and make it widely distributable (SOA MDS [2], Servicebus, maven artifact repository) 
* Associated sample: https://github.com/jvsingh/SOATestingWithCitrus/tree/develop/shared-metadata  
* Key command (if you use the associated pom file) 
mvn deploy com.oracle.soa.plugin:oracle-soa-plugin:12.2.1-0-0:deploy -Dpassword=*****

       
Background:
Having worked on a wide range of projects, I have come to the realisation that SOA can mean vastly different things in different places.
It can be about implementing the foundational service-oriented architectural principles, or it can simply be about using a tool or technology with SOA in its name - just like any other programming language.
In a mature SOA environment, the shared metadata contains valuable artefacts that provide the foundation - subject to design, it contains the canonical information model of the enterprise (in the form of business/entity objects) and the various organisational API interfaces (service interfaces and messages).
In the Fusion Middleware world, this pattern is easily implemented via the MDS - a set of services that allows storage and retrieval of many types of shared resources, such as WSDLs, XSDs, DVMs, and reusable XQuery or XSL transformations. Within SOA composites, these are then accessible via the oramds:/ prefix.
To take this one step further, we can also deploy the same copy of the shared artefacts into the Oracle Service Bus as a Service Bus project, so that even OSB services can access them without requiring local copies scattered everywhere. A great benefit of deploying this content to the OSB is that you get some basic sanity checking of these artefacts for free (e.g. the OSB is a bit strict about unresolvable XSD imports in WSDLs - the kind of thing that is highlighted at design time only if you use a professional XML editor rather than regular JDeveloper[1], which is what most FMW developers commonly use).
There are some key principles here:
  • Within an organisation, are service callers and called services able to access the same copies of schemas and WSDLs? Or are there copies floating around in every project? This kind of thing invariably leads to 'slightly' different copies of the same schema and is basically a recipe for mess.

(Of course, when consuming 'external' services, we probably do want to save a specific version of their interface locally, as that forms our 'contract'.)
  • Are projects neat, self-contained units that interact with the 'external world' via well defined interfaces, or is there a complex web of cross-dependencies, deeply nested dependencies and even circular dependencies, with projects referencing each other? Shared metadata helps avoid these situations by providing both callers and implementers with the same reference data model and interfaces.
  • Is there any form of assurance or validation of the shared artefacts? Are the WSDLs and XSDs well-formed and valid? To be specific, are schema errors flagged up regularly as part of a continuous build (rather than being detected much later, when multiple such errors have accumulated)?
  • Is the MDS being built and deployed as a single, versioned unit, or do individuals simply zip up groups of "files" and promote them across environments?

On the last point, I think it is important to treat the shared metadata as a single deployable unit that can be version controlled, tagged, built with identifiable versions, validated, deployed and promoted, in the same way as a SOA composite or a Service Bus project is a single deployable unit. (Yes, I know you can create an *.sbar archive with only the 'files' you changed within a project, but that kind of approach is completely contrary to practices that promote continuous integration and delivery. You essentially end up tracking individual files rather than treating a 'project' as a unit of deployment.)

Now, coming to the build and deployment of the MDS: we use the approach of zipping the artefacts up (note the build section and packaging in my MDS pom.xml) and then deploying the artefact using the oracle-soa-plugin for Maven (specifying the sarFile property as apps.jar):

mvn deploy com.oracle.soa.plugin:oracle-soa-plugin:12.2.1-0-0:deploy -Dpassword=

Outcome:

• As seen above, the MDS bundle is deployed to the SOA runtime.
• It is also deployed to the Maven repository configured in the distributionManagement section (this could be any repository, such as Nexus).
Note that since I call the oracle-soa-plugin directly in the Maven command, I don't need to explicitly configure it in the pom (I would have to do that only if I were piggybacking the SOA deploy on top of one of the Maven phases, but here I specifically want "mvn deploy" to validate and then deploy the artefact to my Maven repo, with the deployment to the runtime MDS happening separately). I have only configured some of the properties required by the oracle-soa-plugin in the pom, to keep my deploy command concise.
I further make it a point to ensure that the artefact produced by this last step is also deployed to the local and internal Maven repositories (such as Nexus). For this example, I have used a simple distributionManagement section in my MDS pom that installs the shared-metadata bundle into my local Maven repository. This simple step ensures that ANY other consumer in the organisation is able to consume this metadata (e.g. a standalone Java web service or application that needs to call an internal web service).
In subsequent posts, I will add a Java consumer that can simply use the shared metadata as a dependency and consume the common repository of schemas and service interfaces.
In the brave new polyglot world of the Oracle Application Container Cloud, this can in theory be ANY consumer - even PHP or Python!
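As a taster, here is a minimal sketch of such a Java consumer, assuming the shared-metadata bundle is declared as an ordinary Maven dependency (the resource path apps/CommonObjects/Customer.xsd is hypothetical - use whatever structure your bundle actually has):

import java.io.InputStream;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

public class SharedMetadataConsumer {
    public static void main(String[] args) throws Exception {
        // Loads a canonical schema straight off the classpath - no local copy needed
        try (InputStream xsd = SharedMetadataConsumer.class.getClassLoader()
                .getResourceAsStream("apps/CommonObjects/Customer.xsd")) { // hypothetical path
            Schema schema = SchemaFactory
                    .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                    .newSchema(new StreamSource(xsd));
            // The schema can now be used to validate messages or generate bindings
            System.out.println("Loaded shared schema: " + schema);
        }
    }
}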

Coming up:
* Adding more validation for shared-metadata in CI



References and footnotes


[1] Sometimes unresolved types only come to light in JDeveloper if you ctrl+click on them. I think this leniency might be by design, to keep things simple for beginners, but this is only an opinion.
[2] Teams might use various approaches for this. Here is one of the earlier posts that also partly address MDS deployment via maven with a conceptually similar approach (create a zip then deploy using oracle-soa-plugin): http://weblog.redrock-it.nl/?p=740
My approach, though, avoids the need for the assembly plugin and its associated XML assembly descriptor to create the zip beforehand. The benefit is that the primary artifact produced by the main build is what maven also automatically pushes to the distribution repo (such as nexus) in the 'deploy' phase.

Sunday, September 11, 2016

Easy SOA releases with JGitFlow

If you use Git as your source control system and you use Maven, the jgit-flow plugin is a massive time-saver, especially when releasing a slightly large application with multiple modules (each with its own pom file).

Two steps: 
 mvn clean external.atlassian.jgitflow:jgitflow-maven-plugin:release-start
and 
 mvn clean external.atlassian.jgitflow:jgitflow-maven-plugin:release-finish

do the job. 

The above sequence updates the pom versions to a release version (e.g. from 1.0-SNAPSHOT to 1.0), merges the development branch changes into the master branch, and sets the pom versions in the development branch to the next snapshot (e.g. 1.1-SNAPSHOT).

If you have an application with multiple projects/modules, all of them can be released in one go (such as my application here, which contains two modules).

Of course, there are some peculiarities when SOA composite projects are involved.
E.g. the oracle-soa-plugin insists on 'deploying' the composite and running tests at the same time - so you need to keep a SOA server running and supply the serverUrl, username and password properties. Keep the property names different (see the sar-common pom for example names) so that they don't clash with the jgitflow username and password properties.

I avoid the clash altogether by using a private-public key pair to interact with GitHub, which saves time and side-steps the property name collision.

Of course, there are ways to stop the oracle-soa-plugin insisting on deployment when creating a release, but that is a post for a later day!




Saturday, September 03, 2016

Test Driven SOA - citrus for powerful SOA test coverage

Reading parts of "Test-Driven Development for Embedded C" by James W. Grenning inspired me to take another look at this area and look for something new, fresh and powerful for use in the SOA world.

I don't think we need much convincing on the importance of automated test coverage (if someone does, please read the first chapter of the book mentioned above, especially the section on the "Physics of TDD", which tries to quantify the high long-term costs of "Debug-later programming" - the nemesis of TDD).

A very simple application with a SOA composite project and Tests project can be found here: https://github.com/jvsingh/SOATestingWithCitrus

Although the test in this sample is just a simple SOAP request, what I am interested in are the features that Citrus has to offer to help create a solid battery of tests:

  • Tests can be specified in Java or XML, or a combination of both
  • A number of test utilities are built in - including database access, JMS, mock SOAP endpoints (static responses) and complex assertions - and these can be used to write complex setup and teardown routines (a minimal Java test sketch follows below)
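For a flavour of the Java DSL, here is a minimal sketch of such a test (the endpoint name and payloads are invented; it assumes the Citrus 2.x designer API and a SOAP client endpoint named soaComposite defined in the Citrus Spring context):

import org.testng.annotations.Test;
import com.consol.citrus.annotations.CitrusTest;
import com.consol.citrus.dsl.testng.TestNGCitrusTestDesigner;

@Test
public class GreetingCompositeIT extends TestNGCitrusTestDesigner {

    @CitrusTest
    public void invokeGreetingComposite() {
        // Send a request to the deployed composite via the configured endpoint
        send("soaComposite")
                .payload("<greetRequest><name>Citrus</name></greetRequest>");

        // Assert on the expected response payload
        receive("soaComposite")
                .payload("<greetResponse><message>Hello Citrus</message></greetResponse>");
    }
}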


I will leave the reader to peruse the code on GitHub, but this shows the most important pieces of config in my test project:



  • To build + deploy + test, after making sure your SOA server is running, just run "mvn integration-test" from the application level (provide serverUrl, user and password in the SOAComposite pom or from the command line, e.g. -DserverUrl=http://soahost:port)
  • To only run the integration tests, run "mvn integration-test" from the SOAApplication/SOACompositeTests level.

This is all neat and CI ready! 

Saturday, August 27, 2016

Maven builds for SOA 12c Composites with BPEL Java embedding and Java class

Environment: Oracle SOA Suite 12.2.1

Sample Application:  https://github.com/jvsingh/SOAAppWithJavaEmbedding/tree/develop/SOAApplication
(Git clone or use the download option from here: https://github.com/jvsingh/SOAAppWithJavaEmbedding/tree/develop )

Scenario: 
 A BPEL component has a Java embedding that in turn calls a Java class method (under the usual SCA-INF/src)

Issue:
This works and builds fine using JDeveloper, but the oracle-soa-plugin for Maven seems to have a few known issues (see the references for one of them) that cause builds for such composites to fail.


My Java embedding, which refers to my class com.singhpora.samples.SOAApplication.SCAJava under SCA-INF/src, can be seen here:
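In case the screenshot doesn't render, here is a representative sketch of the pair (only the class name comes from my project; the method and BPEL variable names are invented for illustration):

// The class under SCA-INF/src
package com.singhpora.samples.SOAApplication;

public class SCAJava {
    public static String buildGreeting(String name) {
        return "Hello from SCA-INF/src, " + name;
    }
}

// ...and the kind of code that sits inside the BPEL "Java Embedding" activity,
// which compiles against both the BPEL platform classes and SCA-INF/src -
// hence the two distinct build errors shown below:
//
//   String name = (String) getVariableData("inputVariable", "payload",
//           "/client:process/client:input");
//   setVariableData("outputVariable", "payload",
//           "/client:processResponse/client:result",
//           com.singhpora.samples.SOAApplication.SCAJava.buildGreeting(name));
//   addAuditTrailEntry("Invoked SCAJava.buildGreeting");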

When I build the SOA project using "mvn clean package" (from the SOAProject directory, with the default pom), I get two distinct errors, as shown below:
a) It can't find my class under SCA-INF/src
b) It can't even find the BPEL platform classes






The workarounds for the two issues above involve:
a) Create a simple Java pom file under SCA-INF

b) Add SOA/SCA-INF as a module in the 'Application' level pom

c) For the second issue, where it can't find BPEL's libraries:
Observe the use of the maven-dependency-plugin here (which essentially copies a BPEL platform dependency temporarily under SCA-INF/lib to keep the compiler happy):


After applying the above workarounds, my application builds and deploys fine using the application pom
(run mvn clean pre-integration-test from the SOAApplication level).
As you can see, it now builds the two modules and the application. 

At runtime, my Java code is invoked successfully:



References/Related links:
2) Same as workaround c): http://www.esentri.com/blog/2016/04/07/unable-to-compile-a-composite-java-embedded-maven/
3) Builds for Java classes under SCA-INF, but with a slightly different approach:
http://www.avioconsulting.com/blog/building-soa-12c-projects-include-java-code-maven



Tuesday, June 07, 2016

#AMIS25 and The Oracle Cloud Shift : Insights from my first Holland trip


I would like to take this opportunity to wish AMIS Netherlands a very happy 25th birthday. In the context of Oracle SOA, the name AMIS keeps popping up - they have contributed a lot to the knowledge available to the community around this and related Oracle technology.

They chose to celebrate this occasion in uniquely signature style - by holding a global Oracle conference with an impressive lineup of speakers from all six continents, in an old aircraft hangar (commemorating their origins as Aircraft Management Information Systems).
It was a pleasure to be invited by Lucas Jellema (@lucasjellema), and I decided to attend at least one day - Friday, the 3rd of June - though the line-up of events was fantastic on both days.


I arrived in the Netherlands on Thursday the 2nd (my first visit to the country - outside the airport, that is) and decided to explore places nearby... more on this later!

The speakers may upload their presentations as they see fit and, of course, know their subject matter best. I'm going to write about the talks I attended and my observations on the main themes.

One thing that is quite apparent is that the mainstream Oracle world is now cloud. This is quite the realisation of the 'c' in 12c.

First, the conference day started for me with Simon Haslam's (@simon_haslam) talk on Oracle Traffic Director. This was one of those aha moments when you realise a gap in existing technology that you vaguely knew was there but had always either ignored or worked around!
OTD offers seriously advanced load balancing, fit for globally distributed cloud applications, that is also 'application aware' (both OOTB and with options to extend it with custom programming).

In my second session, Matt Wright of Rubicon Red shared his company's insights and a roadmap for moving integrations to the cloud. 

Peter Ebell of AMIS presented a talk on new SOA paradigms ("alien architectures", as he termed them) - the post-RDBMS world. The premise was that, traditionally, SOA service layers that directly perform DML on RDBMS databases are very vulnerable to changes in the database, so new approaches might need to be explored - especially for a new world where data in general is more unstructured or semi-structured.
He started with a typical 'napkin architecture' and then went on to explain how it would evolve for certain modern requirements.
The speaker started the talk in Dutch, and I thought it would be an interesting challenge to try and understand everything in Dutch! But he then switched to English.


Shay Shmeltzer (@JDevShay) introduced the Oracle Developer Cloud - this is a boon for the developer community, as with a few clicks a developer can get a basic development environment (source control, wiki, issue tracker, build server) up and running for a whole team!
As Shay reiterated, "a mature DevOps facilitates short and quick release cycles", which is precisely today's need and expectation from businesses.


Lonneke Dikmans of eProseed and Lucas Jellema of AMIS introduced the various Oracle cloud offerings - PaaS offerings, to be precise. Beyond the familiar SOA CS, ICS (Integration Cloud Service) and PCS (Process Cloud Service, with its BPM engine and BPM Workspace), the IoT and Big Data cloud services are interesting new offerings.
I noticed that both the IoT and Big Data CS included 'analytics' - Lonneke clarified that these target different types of data (real-time data in flux versus static, historic data).
As I see it, the IoT cloud service adds value by "turning sensor data into sensible information" that can subsequently be fed into the underlying data, integration and analytics services. Very compelling.

Lucas described a realistic strategy for migration to the cloud by targeting 'edge systems' first. 

Bram van der Pelt of AMIS gave a session on Identity 3.0 and its possible application in the Oracle world. Identity 3.0 is a new proposal developed by the Jericho Forum, which essentially proposes a mechanism whereby an identity and its related attributes are maintained and shared by the authority that owns them (such as a national government, or the individuals themselves). The root of every identity is proposed to be anonymous. These principles facilitate privacy.
This is a major paradigm shift from the currently prevalent model, in which every application stores local copies of user identities and lots of personal profile information.


...Beyond technology, the conference also gave me the opportunity to see some nice parts of Holland. As I arrived in Amsterdam on the Thursday afternoon, I started to make my way towards Katwijk. Since the historic city of Leiden was on the way, I took the opportunity to explore the Leiden town centre a bit and also see the Rijksmuseum van Oudheden - the national archaeological museum of the Netherlands. The collection is nice and includes artifacts from ancient Egypt, Persia and local archaeological finds from the regions in and around the Netherlands.





"Why should we look to the past in order to prepare for the future?
Because there is nowhere else to look"

~James Burke (quoted at the Rijksmuseum van Oudheden)
An interesting fact about Leiden is that it is the birthplace of the famous Dutch painter Rembrandt.

The day after the conference, I headed to Amsterdam (having stayed overnight at Den Haag/The Hague). I found a map of the city and started the day with walks along the canals from Central Station to the museum district. I eventually decided on the Rijksmuseum, where I spent most of the day exploring its extensive collection of paintings by Rembrandt, Vermeer and other artists.



"You have two eyes and but one mouth. Let this be a signal to pay heed, not to talk here, but to read"
(~Quoted on the walls of the Library at the Rijksmuseum, Amsterdam, pictured below)



Having spent hours at the Rijksmuseum, all I could do for the remainder of the day was walk around the city some more before it was time to catch my flight. A very fruitful first trip to Holland - not only for the information-packed conference, but also because I got to sightsee and visit two of the main national museums of the country!