Saturday, October 07, 2017

Raw results - countries list with total IP (IPv4) addresses


Background: 
http://weblog.singhpora.com/2017/10/how-many-programmers-does-it-take-to.html


Presented below is a list of countries (as country codes) with the total count of live IPv4 addresses hosting a public-facing service (such as a website), as counted from the scan data of 1st October 2017.

These numbers don't come anywhere near the ballpark of 4 billion (the total IPv4 address space) because the data set I used appears to scan only for hosts that run some public service exposed over a TCP port (e.g. a website running on port 80 or 443).

The numbers total only 145,430,195, which definitely looks low - I will continue to investigate why, but they do seem to be in proportion.
It is likely that scans.io can only gather data about IP addresses that are live at the time of the scan, as opposed to all allocated ones.


+-------+--------+
|country|ip_count|
+-------+--------+
|     LT|  120718|
|     DZ|  362827|
|     MM|    3494|
|     CI|   18954|
|     TC|     675|
|     AZ|   39468|
|     FI|  220723|
|     SC|   83878|
|     PM|     323|
|     UA|  768681|
|     RO|  730479|
|     ZM|    9618|
|     KI|     274|
|     SL|     474|
|     NL| 3077280|
|     LA|    5319|
|     SB|     746|
|     BW|    6165|
|     MN|    9664|
|     BS|    7838|
|     PS|   36320|
|     PL| 1539024|
|     AM|   57860|
|     RE|    6976|
|     MK|   41856|
|     MX| 9361233|
|     PF|    7506|
|     TV|      41|
|     GL|   10279|
|     EE|   74403|
|     VG|   13871|
|     SM|    2514|
|     CN|11905007|
|     AT|  403069|
|     RU| 4002435|
|     IQ|   76489|
|     NA|   13269|
|     SJ|     125|
|     CG|   13541|
|     AD|   12536|
|     LI|    6136|
|     HR|   84459|
|     SV|  134530|
|   null|  827348|
|     NP|   22618|
|     CZ|  434625|
|     VA|     409|
|     PT|  278365|
|     SO|    1158|
|     PG|    3291|
|     GG|    2601|
|     CX|     125|
|     KY|    5329|
|     GH|   11492|
|     HK| 1127634|
|     CV|    1745|
|     BN|    6363|
|     LR|     769|
|     TW| 2785149|
|     BD|   88409|
|     LB|   43745|
|     PY|   33953|
|     CL|  340123|
|     TO|     756|
|     ID|  495095|
|     LY|   18077|
|     FK|    1158|
|     AU| 1875091|
|     SA| 1098611|
|     PK|  279205|
|     CA| 3073028|
|     MW|    5162|
|     BM|    6359|
|     BL|     104|
|     UZ|   12856|
|     NE|    1597|
|     GB| 5182929|
|     MT|   20472|
|     YE|    6356|
|     BR| 3554113|
|     KZ|  400583|
|     BY|   59159|
|     NC|   18117|
|     HN|   25888|
|     GT|  115383|
|     MD|  107923|
|     DE| 6338938|
|     AW|    2612|
|     GN|    1140|
|     IO|      65|
|     ES| 1810492|
|     IR|  609566|
|     NR|     178|
|     MO|   26437|
|     BH|   24639|
|     EC|  210964|
|     VI|    1233|
|     IL|  337670|
|     TR|  751779|
|     ME|   26218|
|     VE|  660044|
|     MR|    3197|
|     ZA|  453373|
|     CR|  122065|
|     AI|     469|
|     SX|     869|
|     GU|   21634|
|     KR| 4705816|
|     TZ|   14240|
|     US|45381144|
|     RS|  128773|
|     MS|     262|
|     AL|   45857|
|     MY|  462057|
|     PN|     125|
|     IN| 2169583|
|     JM|   16720|
|     CK|     650|
|     LC|    1418|
|     GM|    1627|
|     AE| 1001729|
|     MQ|    5890|
|     CM|    9684|
|     RW|    3714|
|     TG|    1992|
|     FR| 2709666|
|     GF|    1521|
|     CH|  544074|
|     MG|    5532|
|     CC|     124|
|     TN|  293295|
|     GQ|     759|
|     NU|     136|
|     TL|     745|
|     WF|     479|
|     GR|  243484|
|     PA|  200845|
|     TD|     519|
|     GI|    5229|
|     SD|   15635|
|     AG|    4250|
|     MC|   10245|
|     DJ|     723|
|     JO|   40809|
|     BA|   59273|
|     ET|    1776|
|     SG|  734373|
|     KP|     319|
|     BF|    2820|
|     IT| 3523490|
|     CU|   13847|
|     GW|     254|
|     FO|    1282|
|     MV|    9439|
|     SE|  663630|
|     PH|  392585|
|     WS|    1259|
|     BG|  538707|
|     FJ|    3198|
|     GE|   61683|
|     SK|  128175|
|     FM|     906|
|     MH|    1745|
|     CW|   21457|
|     LV|  102735|
|     MU|   27736|
|     PE|  275323|
|     LS|    5507|
|     MZ|   12728|
|     GD|    3400|
|     DM|     646|
|     KM|     389|
|     DO|  554824|
|     QA|   34995|
|     XK|     581|
|     BZ|   12967|
|     TH| 1366956|
|     EG|  327882|
|     SH|     125|
|     BI|     771|
|     BJ|    1948|
|     MF|     429|
|     GY|    3847|
|     JP| 3299718|
|     TM|     572|
|     VC|    5377|
|     ZW|   11952|
|     SN|   12707|
|     NZ|  401608|
|     OM|   49103|
|     LK|   33816|
|     BT|    2126|
|     HU|  407222|
|     KN|    2990|
|     KE|   32116|
|     SI|  130608|
|     CY|   32025|
|     ML|    9998|
|     HT|    7375|
|     GP|    4018|
|     UG|    7357|
|     IE|  636087|
|     KW|   64836|
|     GA|    8910|
|     VU|    1473|
|     BE|  347894|
|     MA|  227130|
|     AS|     320|
|     KH|   33846|
|     NI|   53612|
|     KG|   14067|
|       |  649814|
|     TT|   32719|
|     SY|   75436|
|     NO|  368080|
|     BO|   93018|
|     ER|     257|
|     CO| 1135399|
|     IM|    7208|
|     SS|     570|
|     UY|   75799|
|     NG|   37838|
|     JE|    4069|
|     YT|     232|
|     AR| 1273489|
|     CF|     249|
|     PW|     251|
|     PR|   27204|
|     TK|     135|
|     LU|   56661|
|     SZ|    5313|
|     NF|     125|
|     VN|  880606|
|     IS|   50124|
|     MP|     529|
|     AF|   14127|
|     BB|    5340|
|     BQ|    4461|
|     SR|   23450|
|     DK|  772845|
|     CD|     458|
|     TJ|    5421|
|     AO|   17188|
|     AX|    1292|
|     ST|     335|
+-------+--------+


Total:
145,430,195

Friday, October 06, 2017

How many programmers does it take to update a Wikipedia page?

......or what it took to count the number of IPv4 addresses in every country (as of 1st October 2017). 

This Sunday, I found that the Wikipedia page on List of countries by IPv4 address allocation was using data from 2012, and I wondered what it might take to add more up-to-date information to that page. During a recent course I attended, I got to know about scans.io - a fascinating project that periodically scans ALL of the IPv4 address space and stores as much publicly visible metadata about the active addresses as possible (location, ISP, open ports, services running, operating system, and any vulnerable services running). Each daily dump of the IPv4 address space is close to a terabyte.
An individual IP address record is represented as a JSON object - part of one of the records is shown here:


There is a lot of information to be gleaned from analysing this data - some of it might have very useful applications, and some purely satisfies curiosity. Copying the raw dataset is also not the only way to analyse it - censys.io might allow querying their data directly on request.
Given the volumes, this clearly falls in the realm of a Big Data problem, and any querying or analytics on it is best achieved using a distributed approach - so this is a perfect problem for fully cloud-based resources.

Stage 1:

Copy the latest data set to an S3 bucket.

This might sound easy, but the full data set is close to 1 TB. Ideally I would have preferred a more distributed way of transferring it, but for now an old-fashioned wget from censys.io followed by an "aws s3 cp" to S3 storage did the job.
The wget of the compressed data set took around 24 hours, and the "aws s3 cp" of the uncompressed data took just under 48 hours (with a few hours in the middle to uncompress the downloaded lz4 file).

For intermediate storage, I created an instance with 2TB of storage. The cost didn't seem bad if all my data transfer completed within a day or so.
https://aws.amazon.com/ebs/pricing/

Test run:
wget --user=jvsingh --ask-password https://censys.io/data/ipv4/historical

The actual command to get that ~221G file (compressed version):
nohup wget --user=jvsingh --password=***** https://scans.io/zsearch/r5vhnlm9vqxh5z1e-20170930.json.lz4 &

(I used nohup as I knew this was going to take hours and didn't want to keep my ssh session open just for it)

For the second stage of uploading the uncompressed file to my S3 bucket, a more elegant and faster approach might have been a distributed multipart upload, but looking at the upfront setup required, I decided against it for this particular test.

Stage 2:

AWS Setup - I already had an AWS account with an SSH key-pair for the region I selected (the cheapest in terms of both instance and S3 storage costs; to avoid inter-region data transfer costs and possible network latency, I used the same region for both my S3 bucket and Spark instances).
Additionally, to allow command line tools (such as flintrock) to connect to and operate the AWS account, I had to install and set up the local AWS command line interface, which requires a pair of credentials generated through AWS IAM.
I had also previously created an S3 bucket to hold the 1 TB data file. This allows multiple Spark instances to access the data, which would otherwise be impossible (or too complex) to set up with general-purpose disk-like storage. (It might be possible with the Hadoop distributed file system, but using S3 here definitely saved me a lot of extra configuration.)

Stage 3:

Download and install flintrock, then edit its YAML configuration (here's their template) to set up the Spark cluster. This is convenient as I intended to do this on AWS, which is very easy to set up with flintrock. (I used an Amazon Linux AMI - the rest of the setup is self-explanatory in the template.)
I started an initial cluster with 3 worker nodes.

One can configure a Spark cluster without flintrock as well - I found a set of steps here - but flintrock made things a lot easier.

Stage 4:


  • Login to the spark master instance
  • Submit the spark job using spark-submit 

nohup ~/spark/bin/spark-submit --master spark://0.0.0.0:7077 --executor-memory 6G --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 --conf "spark.driver.maxResultSize=2g" sparkjob.py > main_submittedjob.out &

I first executed a dry run on a smaller 1 GB dataset to make sure everything was ready and working. A snippet of results from the dry run is shown here (I used country_code instead of country name to be safe - these can always be translated and sorted later; at this point I was eager to get the main counts):
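At its heart, the job is just a group-by-count over the country code of each record. Here is a toy illustration in plain Python over a few sample lines (the real sparkjob.py would use the PySpark DataFrame API over the full dump; the nested `location.country_code` field name is an assumption based on the Censys record shape, not taken from the actual schema):

```python
import json
from collections import Counter

# A few records in the (assumed) shape of the Censys IPv4 dump:
# one JSON object per line, with the country code nested under "location".
sample_lines = [
    '{"ip": "1.2.3.4", "location": {"country_code": "US"}}',
    '{"ip": "5.6.7.8", "location": {"country_code": "DE"}}',
    '{"ip": "9.8.7.6", "location": {"country_code": "US"}}',
    '{"ip": "4.3.2.1", "location": {}}',  # no country resolved -> grouped under None
]

def count_by_country(lines):
    """Group-by-count over the country code of each JSON record."""
    counts = Counter()
    for line in lines:
        record = json.loads(line)
        counts[record.get("location", {}).get("country_code")] += 1
    return counts

counts = count_by_country(sample_lines)
print(counts)  # e.g. Counter({'US': 2, 'DE': 1, None: 1})
```

In Spark the same aggregation is essentially a one-liner over the DataFrame (something like `df.groupBy("location.country_code").count()`), and it also explains the "null" and blank rows in the results table: records with no resolved country code end up in their own group.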


  • Gradually increase the number of worker instances and watch the analysis speed up as the work gets distributed evenly on the newly joined instances.
"flintrock add-slaves" does this seamlessly for the most part (it installed Spark and other libraries), although I did have to log in to each new instance manually and run
spark/sbin/start-slave.sh spark://master_host:7077
to ensure they got added to the cluster.

After this, I could sit back and watch with satisfaction the jobs (rather individual tasks) getting evenly redistributed on the new nodes. 
  • Watch progress on the spark master console and wait for the final results to appear!


Shown below, the job stages console, 30 minutes in -

Coming up: The actual results

(if nothing breaks down till then!)
I posted my initial results here - sorry to report, the counts don't quite add up; I will investigate why in due course.
http://weblog.singhpora.com/2017/10/raw-results-countries-list-with-total.html

Credits: 

1) Paul Fremantle (WSO2 co-founder) - for the tools and techniques he taught on his Cloud and Big Data course at Oxford
2) scans.io for the idea of scanning the whole of IPv4 address space, the initiative and execution




Saturday, September 30, 2017

Test driven SOA: Tool kit for comprehensive automated test coverage

In this post I am going to share some tools I find useful when developing components for the Oracle Service Bus - the same principles should apply to the Integration Cloud Service as well.

If we are not test-first (or at least test-alongside) programming, we are essentially debug-later programming (see "Physics of Test Driven Development").

If the enterprise service bus sits in the middle of an organisation's messaging and integration landscape, there are some key architectural principles that help in getting the best out of any service bus solution:
  • It is not the place for business logic but for integration logic i.e. heavy on message transformations and often enrichment
  • Any operations, message flows or pipelines that the service bus exposes should (ideally) be stateless and without side effects. Achieving this depends a lot on the backend services too - they would ideally need to be idempotent.
  • Exposed interfaces must be designed to be canonical, with invoked endpoints abstracted away so that calling systems are decoupled from called systems (there are also non-functional elements of decoupling that the Service Bus can help achieve, such as via messaging - but this post is not about the value addition of service buses)
  • Like any other software, it must have comprehensive unit test coverage (no, not the platform, but what we have developed on it) - I might be stating the obvious here, but I often find test coverage inadequate at many FMW customer sites.
Whatever transformations, validations or enrichment the service bus applies to incoming messages must have test coverage. Good test coverage means the solution is less prone to regression defects and easier to change, and the whole solution is more agile (agility comes from good practices and tools, not from ceremonies with strange names like scrum).

Often I go to a customer site where important business data flows run on an ESB solution with complex data transformations. You never know how a change to some field, complex template or XPath expression might lead to an unrelated side effect. Needless to say, unless I find an exhaustive set of test cases (and surprisingly often I don't - maybe that's why they call me in the first place), the first thing I do is create some - this is the only way to ensure that the external interfaces to the system continue to work the same before and after I make a change (except for the specific change I intended to make, of course).
Such tests are also invaluable when I have to make improvements to the system, such as refactoring to improve old code.

Some technical scenarios we can address (frequently seen with Service Bus implementations):
HTTP to file/JMS/HTTP/database, file to file, file to JMS, JMS to JMS, JMS to HTTP, and other combinations thereof in more complex orchestrations, such as file to HTTP and then JMS.
Data formats exchanged can also vary: native text, XML, JSON, binary.

Requirements from a test framework (from an ESB point of view):
* One click to run multiple test cases
* Visual indication of pass or failure
* Can be run with mainstream build/CI tools (such as the popular maven)
* Ability to mock http endpoints 
* Ability to assert (equality, pattern matches)
* Ability to "diff" - i.e. identify differences not only between two pieces of text but also between two XML or JSON documents

For unit testing, one can treat the Service Bus (or ICS) as a message-transformation black box and have the test framework interact with endpoints only: filesystem locations, JMS destinations, inbound HTTP endpoints, and mock HTTP endpoints.
Again, for unit testing I keep all endpoints on the local server (with invoked HTTP endpoints served by a mocking tool) and use an OSB customisation file specifically for the test instance (which points to mock HTTP endpoints where required, in addition to the local/test JMS destinations etc.).

The tool kit that I have found most effective and have been using a lot lately:
1) JUnit - plain old, tried and tested, with all the power of Java at hand.
In the Fusion Middleware environment, we get access to all the WebLogic client libraries (full and cut-down versions) to interact with JMS queues. I have made variants of this A-Team example for different scenarios, such as reading the specific number of messages I expect for a given input.

2) WireMock - easy to set up and use, and effective. I only had to add this dependency to my Maven POM file, and with the WireMock import I was ready with my mock HTTP service (I have not tried the individually downloadable jar). For individual test cases, I could reply with different XML or JSON responses carrying different data and statuses (success, failure).
Assertions can be performed at the level of a specific XPath (ensuring it contains the value you expected) or at the full document level.
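The idea of a mock HTTP endpoint is easy to sketch with nothing but the standard library. This is a minimal stand-in for illustration only (not the WireMock API, which is Java and far richer): start a throwaway server that replies with a canned JSON body, point the code under test at it, and assert on what comes back. The response fields here are made up for the example.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

CANNED = {"status": "success", "orderId": "12345"}  # canned response for this test case

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply with the canned JSON, as a stubbed backend would.
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = HTTPServer(("127.0.0.1", 0), MockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/mock-backend"
reply = json.loads(urlopen(url).read())
print(reply["status"])  # the kind of assertion a test case would make
server.shutdown()
```

Per test case you would vary the canned body (success, failure, malformed) and assert that the component under test handles each one correctly - exactly the pattern WireMock automates.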

Worth noting that the SOA composite test framework lets us mock endpoints as well, in addition to running tests as part of a Maven build - but my post is focused on OSB.

3) XMLDiff - an API hidden away in one of the FMW libraries (the Oracle XML Parser).
For normal XML manipulation we often get by with the Java DOM/SAX APIs. However, I found XMLDiff very handy for comparing two XML documents, which we often need to do in test scenarios.
Think how you would compare an actual XML payload with an expected one - XMLDiff does it for us by identifying the specific XPath at which it found differences.

Again, in a FMW environment you can add it as a library in JDeveloper, or as the following dependency in the Maven POM:
    <dependency>
        <groupId>com.oracle.adf.library</groupId>
        <artifactId>Oracle-XML-Parser-v2</artifactId>
    </dependency>


The output of many of the diff operations is another XML document listing the differences. If it contains no "append-node" or "delete-node" elements, it means the documents are identical. 
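The "where exactly do two documents differ" idea can be sketched in a few lines of plain Python. This is a toy analogue for illustration only - the Oracle XMLDiff API is a Java library and reports differences as its own XML document, as described above - but it shows the core of what such a comparison does: walk both trees and report the path of the first mismatch.

```python
import xml.etree.ElementTree as ET

def first_difference(a, b, path=""):
    """Return the path of the first difference between two elements, or None."""
    path += "/" + a.tag
    if a.tag != b.tag or (a.text or "").strip() != (b.text or "").strip():
        return path  # tag or text content differs here
    if len(a) != len(b):
        return path  # different number of child elements
    for child_a, child_b in zip(a, b):
        diff = first_difference(child_a, child_b, path)
        if diff:
            return diff
    return None

# Expected vs actual payloads, differing only in one leaf value.
expected = ET.fromstring("<order><id>42</id><total>9.99</total></order>")
actual = ET.fromstring("<order><id>42</id><total>10.99</total></order>")
print(first_difference(expected, actual))  # /order/total
```

A real comparison would also need to handle attributes, namespaces and insignificant ordering - which is exactly the kind of work XMLDiff takes off our hands.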

4) SoapUI - last but not least, of course - this is a no-brainer for initiating unit tests against exposed HTTP endpoints, easily achieved by adding it as a plugin in your project POM.

The tools can then easily be extended into repeatable, automated integration tests. Additional frameworks can add value where desirable (Citrus and Cucumber seem popular).

------
One final point: in addition to making code less prone to regression defects and more change-friendly, with the potential to allow more frequent releases, test cases also serve as a "source of truth" repository of the business rules actually implemented in the code - the more there are, the better.
Documents go out of sync, people leave and forget to update documents, and then there is the semantic gap between documented language and code. If a test case says a field HEIGHT cannot exceed 9.99, then only a passing test can prove that it in fact doesn't.
So given any business requirement, my priority would be to write failing tests first to document those requirements, write code that fulfils them, accommodate all the "changes of mind" (whether genuine or ......) in a more agile way, and put everything into documentation once the dust settles.
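The HEIGHT example above can be made concrete in a few lines (the rule and the field name are just the hypothetical from the paragraph, and the validator is a made-up stand-in for whatever the service bus actually does):

```python
def validate_height(height):
    """Business rule under test: HEIGHT cannot exceed 9.99."""
    if height > 9.99:
        raise ValueError(f"HEIGHT {height} exceeds maximum 9.99")
    return height

# The tests ARE the documentation of the rule: they fail loudly
# if someone later relaxes or breaks the constraint.
assert validate_height(9.99) == 9.99          # boundary value accepted
try:
    validate_height(10.0)                     # over the limit must be rejected
    raise AssertionError("expected rejection of HEIGHT > 9.99")
except ValueError:
    pass
print("HEIGHT rule holds")
```

A document can claim the rule; only the passing test proves the code enforces it.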

Coming up: more sample code for testing Service Bus "code", fewer essays.
In the meantime, I can flaunt the EUnit tests I wrote the last time I tried my hand at Erlang. They are a small component of a larger programming assignment I had to do, and the assessment report said my software met the largest number of requirements. I attribute this hands-down to the adequate test coverage I had added right from the start.

Summary: TDD allows us to write more complex software while keeping it maintainable, change-friendly and responsive to change.

Sunday, May 14, 2017

Enterprise World Wide Cloud - Notes from the Oracle #PaaSForum 2017



This must be one of the last posts on #PaaSForum 2017 that anyone has written - too late, but I hope not too little, as it gives us an opportunity to reflect on the material presented there. I heartily thank Jürgen Kress for organising this event on a grand scale at a beautiful location (Split, Croatia this year), and for his invitation, warm welcome and hospitality. Thanks also to his colleagues from Oracle for their help in organising the event and sharing their deep knowledge. Further, we gained a wealth of information from leading Oracle professionals from all over the world - I felt privileged to meet so many of them in person! We started the event on a positive note (Jürgen literally asked us to take our jackets off and get hands-on with some cutting-edge Oracle technology!) and ended it full of enthusiasm, watching the sunset over the Adriatic Sea over drinks and conversation.



Although I learnt a lot in the various presentations, please don't assume every idea or interpretation below is endorsed by Oracle or the other partner presenters who spoke on the topics I have chosen to write about.
Reviews and comments are very welcome (especially if you find inaccuracies in my writing).
Notation guide: Numbers in [square brackets] refer to items in the References section. 

Oracle Management Cloud (OMC) and Application Performance Monitoring (APM)
APM has been a maturing discipline over the past decade - what is different about the Oracle Cloud is that APM sits within the wider OMC suite. Since we cannot fix things we don't know about, as a first step OMC "agents" aggregate all the relevant information from multiple sources and display these different dimensions on standard dashboards. Operators (possibly helped by some machine-learning capability in the background) can look for anomalies, fix them, optimise their infrastructure and applications, and continue monitoring. Technology-wise, it uses simple agents installed on the consumer side that gather all kinds of important statistics from different levels, such as infrastructure, operating system, application servers and applications. Operational monitoring teams can then use this wealth of information to debug situations, perform root cause analyses, etc.
Since I had an ongoing customer case with a similar requirement, Volker - one of the Oracle OMC sales team, who Jürgen introduced me to - very kindly set up a demo account for me. He then gave me a very detailed walk-through of the products, and I was able to easily install the agents on my test environment.

After installing the APM agent and updating my startWeblogic script to reference it (the line below), my OMC dashboard now shows a consolidated view from across the system (Windows, FMW):
set JAVA_OPTIONS=%JAVA_OPTIONS% -javaagent:%DOMAIN_HOME%\apmagent\lib\system\ApmAgentInstrumentation.jar
Lucas Jellema, who also introduced APM at #PaaSForum in his presentation, has written a more detailed post on installing it in your environment.
I think this quote is from Lucas' presentation, and it sums up the approach the OMC set of technologies follows: to find a needle in a haystack, we need to build the haystack first!


Agents are also available for other platforms and applications such as Node.js
Once the platform is installed and operational, the real work starts. In the initial stages of a product release, one can see useful statistics such as slow transactions, unusual or unexpected groups of errors, warnings, and stuck threads - things that would otherwise be invisible or hidden away in logs. As these issues get resolved, the system runs more smoothly, and operations will have built up an "expected profile" of how the system looks in optimal shape - so any deviations get flagged up immediately, and any new release has a benchmark for comparison.
For critical or unusual incidents, the log explorer can automatically group together related log entries and other information to help operational support (something similar to searching by ECID - except that Oracle have specifically kept it technology-agnostic to allow monitoring of various kinds of technologies).
Imagine this - if you can correlate performance of a specific business process or application to metrics in APM, optimising application performance can provide direct, tangible benefits in business revenue!

Integration Cloud Service (ICS)
ICS is good technology for getting simple, stateless integration flows up and running fast, without the hassle of managing infrastructure and platforms, AND at a predictable cost. The ICS platform further offers a rich catalogue of pre-built adapters for many SaaS applications. A complete SaaS integration was done hands-on on our lab day (days 4 and 5 at #PaaSForum 2017 offered intense hands-on labs by some of the leading technology professionals from Oracle, like Niall Commiskey and Deepak Arora of A-Team fame).

Deepak also gave a good presentation on his experiences and learning from the industry. 
One of the other notable presentations on ICS was by Robert van Mölken, who shared his experiences and workarounds in delivering particularly complex integrations with complex data mappings and transformations. Having a book on Oracle ICS published (co-authored with Phil Wilkins) while the technology is still being changed and developed is quite a fine achievement! Robert also very supportively watched my 1,458,217th attempt at debugging my own Oracle ICS adapter.

SOA Cloud Service and Process Cloud Service
The full-blown Oracle SOA Suite in the cloud is perfect for organisations that already have stateful service orchestrations running on-premise. With proper risk evaluation and security assessments, you can very well reduce the cost of running and maintaining the platform in-house and focus specifically on tasks that deliver business value.
For customers who wish to adopt a proper business process management (BPM) approach, with BPMN and multi-organisation or multi-departmental workflows, the Process Cloud Service (PCS) is the tried and tested offering from Oracle.

API Cloud Service
APIs have been in the limelight lately, but as a concept an API simply means "Application Programming Interface", and APIs have been around for as long as computers have been programmable! If there is software (or hardware) that offers business value, an API is the gateway to unlocking that value. Seriously - even the assembly code instructions that a microcontroller acts on are an API. And API-first (or contract-first) development is simply good software engineering practice.
I think what changed in recent years is:
  • Organisations - both software vendors and buyers - discovered the value of clean, interoperable interfaces. (Software engineering has long preached contract-first development as a prerequisite for "high cohesion, loose coupling".)
  • The advent of web services (SOAP and now REST) and their popularity, helped expose disparate systems first to other previously un-interoperable systems and now the cloud
  • API lifecycle management has matured as a field, and there is value to be derived from monetising APIs, enforcing policies and usage agreements (see the definition of usage agreement in [5], the Oracle SOA Reference Architecture) and monitoring API usage
  • The realisation in the industry of the need for documented, visible catalogues of their digital capabilities. 
Oracle's API cloud service fills many of the gaps in this area for end-to-end API lifecycle governance and value realisation - it is no wonder that Oracle acquired Apiary a while back. Having theoretical knowledge of the above needs is one thing; having tools to govern and enforce is the logical next step.

Beyond #PaaSForum - actors in the cloud landscape and the future...
According to the US standards body NIST's cloud computing reference model [1], the key actors in the cloud space are: Cloud Consumer (CC), Cloud Provider (CP - SaaS, PaaS product providers), Cloud Auditor, Cloud Carrier and Cloud Broker (CB - much like systems integrators).
The role of Cloud Broker is quite clearly defined - the NIST definition states:
An entity that manages the use, performance and delivery of cloud services, and negotiates relationships between Cloud Providers and Cloud Consumers
This includes some or all of tasks such as assessing consumer requirements and determining suitability, to pick either the best possible SaaS/PaaS/iPaaS offering, a hybrid combination thereof, or a simple custom solution (after all, one size never fits all). Where a central EAI (enterprise application integration) system or a well-designed SOA initiative allows organisations to benefit from their choice of different back-end/cloud products, the systems integrator, in the "Cloud Broker" avatar, now takes that to the cloud level. Further, various enterprise business processes are likely to span multiple products - SaaS and on-premise. Orchestrating these together into coherent business processes, with appropriate levels of security, operational monitoring and business analytics, is going to be exciting and fulfilling in terms of the business value delivered to the industry by effective use of technology.

This role must not be taken lightly - without adding unnecessary detail, I would only like to state that "unnecessary" complexity costs more in the long run, not to mention the increased security risks. I add this statement purely on the basis of the wide range in quality of in-house SOA/ESB/EAI (now microservices) implementations I have seen over the years. With the decisive and inevitable move towards the cloud (and interclouds coming soon), the implications of bloatware and spaghetti could manifest at a larger scale with worse consequences.
As Vikas Anand of Oracle pointed out in his presentation (I paraphrase):
Software as a Service (SaaS) can quickly degenerate into "Silo" as a service if not properly integrated
This is why Cloud Brokers and PaaS platforms such as Oracle ICS/SOA CS and PCS have key roles to play. 
We don't hear much about Cloud Carriers yet, but they are a key actor in ensuring high-quality, reliable and secure connectivity between Cloud Consumers and Providers, and in achieving true inter-cloud architectures. NaaS (Network as a Service) also seems to be an emerging offering - or rather a sub-offering under wider IaaS - to watch out for.


Chatbots
Last but not least, chatbots were a popular theme at #PaaSForum 2017. The original reason for the hyped interest in chatbots was their potential as a stepping stone to mobile channels without having to develop and maintain mobile apps upfront. In addition to the basic chat dialogs most people are familiar with, Facebook (and possibly other platforms too) offers support for menus within chat windows, which is a simple and effective way of exposing functionality to customers.
Unfortunately, chatbots also got positioned (perhaps as a result of overzealous marketing?) as a customer service tool with conversational ability. NLP (Natural Language Processing) has not progressed that far, and this expectation has led to some disappointment. Plus, in many contexts it will remain desirable for humans to provide interactive support for a long time to come - AI is not close to "artificial general intelligence" yet (which is what would be needed for a truly human-level chatbot experience), and many risks, including ethical[6] implications, are not yet fully understood.

In terms of simple keyword matching and a menu-driven interface for customers, I think the technology might already be ready to deliver good value in chatbots - this screenshot is from my interaction with a demo chatbot. This bot was also used in Frank Nimphius' presentation on chatbots, and one can still look it up on Facebook.

Another impressive presentation on chatbots was by Léon Smiers. I found the ideas genuinely innovative - he has since written an article introducing a "bot maturity model": a set of capabilities that chatbots should have, and a staged approach to getting there. Technically there are many gaps in "open" NLP platforms (proprietary platforms, such as the one used in Apple's Siri, are much more advanced).

Léon's categorisation of desirable capabilities in the "integration" category led me to think: we will have to go beyond NLP and machine learning - we are going to have to feed the outputs of NLP into rich domain knowledge models, or "ontologies", such as those defined in RDF triples. The benefit is that in addition to simple querying (done via SQL/XQuery/XPath), more advanced graph queries and reasoners can be run on such data to draw inferences. This will be key to interpreting user inputs correctly, querying back-end sources/APIs effectively and providing rich responses to users.
Only then will these and similar systems deliver the level of artificial intelligence we aspire to. This is still a maturing field with exciting possibilities. Also see [2] and [4].
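A toy example of what "reasoning over triples" buys beyond plain lookup (plain Python standing in for a real RDF store and reasoner, with made-up facts): from "espresso is-a coffee" and "coffee is-a drink", a transitive reasoner can infer "espresso is-a drink" even though nobody ever stated it.

```python
# Facts as (subject, predicate, object) triples - the RDF shape.
triples = {
    ("espresso", "is-a", "coffee"),
    ("coffee", "is-a", "drink"),
    ("drink", "servedBy", "cafe"),
}

def infer_transitive(triples, predicate):
    """Close the given predicate transitively (a tiny 'reasoner')."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (s, p, o) in list(inferred):
            if p != predicate:
                continue
            for (s2, p2, o2) in list(inferred):
                # s is-a o and o is-a o2  =>  s is-a o2
                if p2 == predicate and s2 == o and (s, p, o2) not in inferred:
                    inferred.add((s, p, o2))
                    changed = True
    return inferred

kb = infer_transitive(triples, "is-a")
print(("espresso", "is-a", "drink") in kb)  # True - derived, never stated
```

A real system would use an RDF store with SPARQL and an OWL reasoner rather than hand-rolled closure, but the principle is the same: the knowledge model answers questions no single stored fact contains.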

My conclusion is that it is possible to derive some value from the combination of technologies currently available or planned (such as Oracle's Intelligent Bot Cloud Service - IBCS), and they offer a compelling option for many organisations to bypass mobile apps altogether. Keep in mind that this introduces Facebook (or similar "front-end" chat-enabled platforms) as an additional stakeholder, and any analysis needs to take this into account. Many companies simply offer a chatbot interface within their own mobile apps or websites - this way they control the platform, which might be an important consideration for many organisations.

Posts about #PaaSForum or topics covered there

References

Terminology & Acronym Soup
I know most readers might be familiar with these acronyms, but I have developed the habit of creating a glossary next to everything I write. Since I noted these while writing, I might as well share:
1. SOA - Service Oriented Architecture
2. EAI - Enterprise Applications Integration
3. CC / CP/ CB - Cloud consumer/provider/broker
4. PaaS - Platform as a service (Oracle offerings such as SOA-CS, ICS, Java CS etc. that provide the foundations on which customers can develop applications). SaaS offerings in turn include pre-developed applications that Cloud Consumers can subscribe to and use
5. iPaaS - integration platform as a service (Such as Oracle ICS)
6. OMC - Oracle Management Cloud: cloud based, consolidated enterprise monitoring with offerings such as APM, infrastructure monitoring, log aggregation and monitoring. 
7. APM - Application Performance Monitoring: a set of products, now increasingly available as cloud offerings (such as the Oracle APM offering that is part of the OMC family), that provide insights into applications' technical performance.
8. NLP - Natural Language Processing
9. Intercloud - See https://en.wikipedia.org/wiki/Intercloud
10. ECID - Execution context ID. Each entry logged in Weblogic diagnostic logs is stamped with an ECID so that related log entries can be grouped together to trace a request-response flow