Wednesday, June 17, 2020

An article demystifying crypto-coins, and whether it is even possible to value Bitcoin

This post has nothing to do with Oracle technology, as I haven't yet tried Oracle Blockchain. I'm also aware that much has been said on the subject by others, and that the inner workings of blockchain have been explained ad infinitum over the years. 

But here is my own analysis, covering ideas like money, "value" and price, some history of "ledgers", and how they evolved into "Distributed Ledger Technology". 

I must add that many of the shortcomings around the scalability of Bitcoin (which is only one of many cryptocoins, albeit the pioneering one) are covered in the Bitcoin FAQs.

Abstract below:
The meteoric rise of the price of bitcoin, and its accompanying wild fluctuations, piqued the interest of many investors and speculators. Cryptocurrency has often been hyped as a gold-like replacement for fiat currencies by virtue of its “finite” supply. There have been many publicised stories of early-stage “miners” who “solved puzzles” (as they put it) on their computers to “mine” bitcoin, and who then went on to cash out with spectacular windfalls. The underlying “blockchain” technology and proposed applications designed to be built “on the blockchain” also received much attention and, reportedly, investor funding. This paper will first attempt to clarify these terms, explain some of their workings based on the author’s research, and then examine these claims. It will attempt to answer a simple question: where does the actual “value” of a unit of crypto come from? This should help both potential investors and users of applications based on these concepts to make more informed decisions.

Full article can be read here:
https://www.researchgate.net/publication/342027792_Demystifying_Crypto_-_From_Bookkeeping_Ledgers_to_Blockchain

Representation of a blockchain network, depicting the main function of a node
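
The abstract mentions "miners" who "solved puzzles" on their computers. As a loose stand-in for that puzzle (my own sketch, not taken from the article; the block contents and difficulty target are made up), a mining node repeatedly hashes a candidate block with different nonces until the hash meets a difficulty target:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Toy proof-of-work "puzzle": find a nonce so that SHA-256(previous hash + contents + nonce)
// starts with a given number of zeros. Illustrative only - real Bitcoin mining differs in detail.
public class ToyMiner {

    static String sha256Hex(String input) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(input.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest);   // requires Java 17+
    }

    public static void main(String[] args) throws Exception {
        String previousBlockHash = "0000c0ffee";   // chains this block to the previous one (made up)
        String transactions = "alice->bob:0.5";    // simplified block contents (made up)
        String difficultyPrefix = "0000";          // more leading zeros = harder puzzle

        long nonce = 0;
        String hash = sha256Hex(previousBlockHash + transactions + nonce);
        while (!hash.startsWith(difficultyPrefix)) {
            nonce++;
            hash = sha256Hex(previousBlockHash + transactions + nonce);
        }
        System.out.println("Found block hash " + hash + " with nonce " + nonce);
    }
}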






Sunday, May 10, 2020

Recursive calls in Oracle Integration Flows (Scenario: Paginated API calls for large Data Sets)

A number of use cases can be implemented cleanly using a recursive approach. This post is not meant to debate the pros and cons of recursion versus looping; it simply provides a straightforward way to achieve recursion in an integration flow.
For scenarios such as the ones listed below, and possibly more, this approach is efficient, concise, maintainable and, most importantly, highly scalable. It also leaves a smaller runtime footprint, with a shorter execution time per instance than a single looping flow instance, which in turn makes error handling easier, as I will describe later. A sketch of the pattern in plain Java follows the list. 

  • Polling (continuously monitoring an FTP location, a database, or an API output)
  • Paginated APIs (when the target system exposes an API with a paginated* interface such as the eBay findProducts operation)
  • Retryable flows
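
To make the pattern concrete, here is a rough sketch in plain Java of what one page-at-a-time recursive flow does; the endpoint, page size and "hasMore" flag are hypothetical, and in an actual integration flow the recursive step would typically be an asynchronous invocation of the flow itself (so each instance finishes quickly) rather than an in-process method call:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the recursive pagination pattern: each invocation handles exactly one page
// and then triggers itself for the next page, instead of looping in place.
public class PaginatedFetcher {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final int PAGE_SIZE = 100;                       // hypothetical page size
    private static final String BASE_URL =
            "https://api.example.com/products";                     // hypothetical endpoint

    // One invocation = one page.
    static void processPage(int pageNumber) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "?page=" + pageNumber + "&pageSize=" + PAGE_SIZE))
                .GET()
                .build();
        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());

        handleRecords(response.body());                             // transform/enrich/load this page only

        // Termination condition: stop when the API signals there are no more pages
        // (a hypothetical "hasMore" flag in the response body).
        if (response.body().contains("\"hasMore\":true")) {
            processPage(pageNumber + 1);                            // the "self-invocation" step
        }
    }

    static void handleRecords(String pageJson) {
        System.out.println("Processed a page of " + pageJson.length() + " bytes");
    }

    public static void main(String[] args) throws Exception {
        processPage(1);                                             // kick off with the first page
    }
}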

Thursday, July 18, 2019

Fault tolerance in integration flows - handling target system availability problems

An important non-functional property of any software system is "Availability". In the ISO/IEC 25010:2011 product quality model, this is grouped under an overall category of "Reliability". 
Fault tolerance is a closely associated property also grouped under "Reliability". 

System downtimes could be either due to scheduled maintenance
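
As a rough illustration of one common tactic for tolerating a temporarily unavailable target system (my own sketch, not necessarily the approach described in the full post), the caller can retry the invocation with an exponential backoff before giving up:

import java.time.Duration;

// Retry a call to a flaky target system, doubling the wait after each failed attempt.
public class RetryingInvoker {

    interface TargetCall<T> {
        T invoke() throws Exception;
    }

    static <T> T invokeWithRetry(TargetCall<T> call, int maxAttempts, Duration initialDelay)
            throws Exception {
        Duration delay = initialDelay;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.invoke();
            } catch (Exception e) {
                last = e;                                  // target still unavailable
                if (attempt < maxAttempts) {
                    Thread.sleep(delay.toMillis());        // back off before the next attempt
                    delay = delay.multipliedBy(2);
                }
            }
        }
        throw last;                                        // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        String result = invokeWithRetry(
                () -> "response from target",              // stand-in for the real outbound call
                3, Duration.ofSeconds(2));
        System.out.println(result);
    }
}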

Wednesday, October 18, 2017

Selective persistence of Oracle Diagnostic Logging (ODL) output

Background and Goal

In any application, logging is widely used for diagnostics and debugging. 

Logging at various "checkpoints" in the application (such as entering with a request, exiting with a response, or in an error handler) can provide a fairly reliable way to trace the application's execution path, which a subsequent sweep or count of the log entries can then be used to report on. When the logs are regularly analysed and reported on, anomalies can be flagged proactively and investigated further. Some examples
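
As a minimal sketch of such checkpoint logging (my own illustration; the checkpoint names and correlation id are made up), a Java component can log through java.util.logging, whose output the ODL handlers configured in Fusion Middleware can then capture and persist:

import java.util.UUID;
import java.util.logging.Level;
import java.util.logging.Logger;

// "Checkpoint" logging: one entry on request receipt, one on response, one in the error handler,
// all carrying a correlation id so a later sweep of the logs can count and trace executions.
public class OrderService {

    private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

    public String processOrder(String orderPayload) {
        String correlationId = UUID.randomUUID().toString();

        LOG.log(Level.INFO, "CHECKPOINT=REQUEST_RECEIVED correlationId={0}", correlationId);
        try {
            String response = transformAndRoute(orderPayload);
            LOG.log(Level.INFO, "CHECKPOINT=RESPONSE_SENT correlationId={0}", correlationId);
            return response;
        } catch (Exception e) {
            LOG.log(Level.SEVERE, "CHECKPOINT=ERROR correlationId=" + correlationId, e);
            throw new RuntimeException(e);
        }
    }

    private String transformAndRoute(String payload) {
        return "ACK:" + payload;                           // placeholder for the real integration logic
    }
}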

Tuesday, October 17, 2017

Geographical clusters with the biggest concentration of web services

From a data set of approximately 145 million IP addresses running at least one publicly accessible web service (such as a website), I was able to determine these 20 geographic "clusters".
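
The post does not say how the clusters were computed; purely as an illustration, one way to derive 20 geographic clusters from per-IP latitude/longitude data would be k-means, sketched here with Spark ML (the input path and column names are hypothetical):

import org.apache.spark.ml.clustering.KMeans;
import org.apache.spark.ml.clustering.KMeansModel;
import org.apache.spark.ml.feature.VectorAssembler;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Group geolocated IP addresses into 20 clusters by latitude/longitude.
// (A sketch only: k-means on raw lat/long ignores longitude wrap-around, for instance.)
public class GeoClusters {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("GeoClusters").getOrCreate();
        Dataset<Row> points = spark.read().parquet("ip-geolocations.parquet");   // hypothetical input

        Dataset<Row> features = new VectorAssembler()
                .setInputCols(new String[]{"latitude", "longitude"})             // hypothetical columns
                .setOutputCol("features")
                .transform(points);

        KMeansModel model = new KMeans().setK(20).setSeed(1L).fit(features);
        for (org.apache.spark.ml.linalg.Vector centre : model.clusterCenters()) {
            System.out.println("Cluster centre (lat, long): " + centre);
        }
        spark.stop();
    }
}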



Saturday, October 07, 2017

Raw results - countries list with total IP (IPv4) addresses


Background: 
http://weblog.singhpora.com/2017/10/how-many-programmers-does-it-take-to.html


Presented below is a list of countries (as country codes) and the total count of live IPv4 addresses hosting a public-facing service (such as a website), as counted from the scan data of 1st October 2017.

The reason these don't add up to anywhere near 4 billion (the total IPv4 address space) is that the data set I used appears to scan only for hosts that run some public service exposed over a TCP port (e.g. a website running on port 80 or 443).

The numbers definitely look incorrect, totalling only 145,430,195 - I will continue to investigate why, but they do seem to be in proportion. 
It is likely that scans.io is only able to gather data about IP addresses that are live at the time of the scan, as opposed to all allocated ones.
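
The result table below resembles the output of Spark's show(); as a sketch of how such a per-country count could be produced (the input path and the location.country_code field name are guesses at the schema, purely for illustration):

import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Count live IPv4 addresses per country from a scan dump with one JSON record per line.
public class CountByCountry {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("CountByCountry").getOrCreate();

        Dataset<Row> scans = spark.read().json("scan-2017-10-01.json");    // hypothetical input

        Dataset<Row> counts = scans
                .groupBy(col("location.country_code").alias("country"))    // hypothetical field name
                .count()
                .withColumnRenamed("count", "ip_count");

        counts.show();   // prints a table like the one below
        spark.stop();
    }
}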


+-------+--------+
|country|ip_count|
+-------+--------+
|     LT|  120718|
|     DZ|  362827|
|     MM|    3494|
|     CI|   18954|

Friday, October 06, 2017

How many programmers does it take to update a Wikipedia page?

...or what it took to count the number of IPv4 addresses in every country (as of 1st October 2017). 

This Sunday, I found that the Wikipedia page on List of countries by IPv4 address allocation was using data from 2012. I wondered what it might take to add more up-to-date information to that page. During a recent course I attended, I got to know about scans.io - a fascinating project that involves periodically scanning ALL of the IPv4 address space and storing as much publicly visible metadata about the active addresses as possible (location, ISP, open ports, services running, operating system, and any vulnerable services running). Each daily dump of the IPv4 address space is close to a terabyte.
An individual IP address record is represented as a JSON object - part of one of the records is shown here:


Saturday, September 30, 2017

Test driven SOA: Tool kit for comprehensive automated test coverage

In this post I am going to share some tools I find useful when developing components for the Oracle Service Bus - the same principles should apply to Integration Cloud Service as well. 

If we are not doing test-first (or at least test-alongside) programming, we are essentially doing debug-later programming (see "Physics of Test Driven Development").

If the enterprise service bus sits in the middle of an organisation's messaging and integration landscape, there are some key architectural principles that help in getting the best out of any service bus solution:
  • It is not the place for business logic but for integration logic, i.e. heavy on message transformation and, often, enrichment
  • Any operations, message flows or pipelines that the service bus exposes should ideally be stateless and free of side effects. Achieving this also depends a lot on the backend services - they would ideally need to be idempotent. 
  • Exposed interfaces must be designed to be canonical, with invoked endpoints abstracted away, so that calling systems are decoupled from the systems they call (and there are non-functional aspects of decoupling that the Service Bus can help achieve too, such as through messaging - but this post is not about the value a service bus adds)
  • Like any other software, it must have comprehensive unit test coverage (no, not the platform itself, but what we have developed on it). I might be stating the obvious here, but I often find test coverage inadequate at many FMW customer sites; a minimal example of testing a transformation in isolation is shown after this list. 
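
As a minimal example of the last point (my own sketch, using only JDK XML APIs and JUnit rather than any particular OSB testing toolkit; the stylesheet path and element names are hypothetical), a transformation used by a pipeline can be exercised in isolation from the service bus:

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.junit.Assert;
import org.junit.Test;
import org.xml.sax.InputSource;

// Apply the XSLT used by the pipeline to a sample request and assert on the output.
public class OrderTransformationTest {

    @Test
    public void mapsCustomerIdIntoCanonicalOrder() throws Exception {
        String inputXml = "<order><custId>42</custId></order>";

        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("src/main/resources/OrderToCanonical.xsl"));
        StringWriter output = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(inputXml)),
                new StreamResult(output));

        XPath xpath = XPathFactory.newInstance().newXPath();
        String customerId = xpath.evaluate("/CanonicalOrder/CustomerId/text()",
                new InputSource(new StringReader(output.toString())));

        Assert.assertEquals("42", customerId);
    }
}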

Sunday, May 14, 2017

Enterprise World Wide Cloud - Notes from the Oracle #PaaSForum 2017



This must be one of the last posts on #PaaSForum 2017 that anyone has written. Too late, but I hope not too little, as it gives us an opportunity to reflect on the material presented there. I heartily thank Jürgen Kress for organising this event on a grand scale, at a beautiful location (Split, Croatia this year), and for his invitation, warm welcome and hospitality. Thanks also to his colleagues from Oracle for their help in organising the event and for sharing their deep knowledge. Further, we gained a wealth of information from leading Oracle professionals from all over the world - I felt privileged to meet so many of them in person! We started the event on a positive note (Jürgen literally asked us to take our jackets off and get hands-on with some cutting-edge Oracle technology!) and ended it full of enthusiasm, watching the sunset over the Adriatic Sea over drinks and conversation.



Although I learnt a lot from the various presentations, please don't assume that every idea or interpretation below is endorsed by Oracle or by the other partner presenters who spoke on the topics I have chosen to write about.
Reviews and comments are very welcome (especially if you find inaccuracies in my writing).
Notation guide: Numbers in [square brackets] refer to items in the References section. 

Tuesday, December 06, 2016

Progress with the Oracle Integration Cloud Adapter SDK

In the past few days, I have been making some progress with using the ICS Cloud Adapter SDK. 
Today, I created my first shell adapter - the design-time views can be seen below!

The journey so far: 
 * Installation of all the offline material [Check]
       Gotchas to note here: the step to install SDK patches wasn't required for 12.2.1 (the version I was on). 
 * Reading through the documentation [Ongoing]
 * Developing the empty adapter and deploying it for design time and runtime [Check]

There are a number of integration use cases that we have identified. If all goes well, these will be available for a wider rollout, helping customers implement some complex integration use cases with important cloud services in "hours, not months", in keeping with the Oracle ICS philosophy (and in line with DRY software engineering)!

More coming soon...