Oracle SOA / Java blog

WebLogic Server: Automate obtaining performance metrics from DMS

Oracle provides the Dynamic Monitoring Service (DMS) as part of WebLogic Server. It is extremely useful if you want to obtain aggregated data of an environment, for example during a performance test. The data which can be obtained from DMS is extensive and varies from the average duration of service calls to JVM garbage collections and datasource statistics. DMS can be queried with WLST. See for example here. An example script based on this can be found here. You can also directly go to a web interface such as: http://<host>:<port>/dms/Spy. The DMS Spy servlet is by default only enabled on development environments but can be deployed on production environments (see here).

Obtaining data from DMS in an automated fashion, even with the WLST support, can be a challenge. In this blog I provide a Python 2.7 script which allows you to get information from the DMS and dump it in a CSV file for further processing. The script first logs in and uses the obtained session information to download information from a specific table as XML. This XML is converted to CSV. The code does not require an Oracle Home (it is not WLST based). The purpose here is to provide an easy-to-use starting point which can be expanded to suit specific use cases. The script works against WebLogic 11g and 12c environments (it has been tested against 11.1.1.7 and 12.2.1). Do mind that the example URL given in the script obtains performance data on web service operations. This works great for composites but not for Service Bus or JAX-WS services. You can download a general script here (which requires minimal changes to use) and a (more specific) script with examples of how to preprocess data in the script here.
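
A minimal sketch of the idea behind the script is shown below. This is not the downloadable script itself; it assumes the DMS servlet accepts HTTP basic authentication, and the element names in the XML depend on your WebLogic version, so adjust them to the output you actually receive.

import csv
import requests
import xml.etree.ElementTree as ET

HOST, PORT = 'localhost', 7001            # WebLogic admin server, placeholder values
USER, PASSWORD = 'weblogic', 'Welcome01'  # user with administrative privileges
TABLE = 'wls_webservice_operation'

url = ('http://%s:%s/dms/index.html?format=xml&cache=false'
       '&prefetch=false&table=%s' % (HOST, PORT, TABLE))

response = requests.get(url, auth=(USER, PASSWORD))
response.raise_for_status()

# The 11g and 12c output use different namespaces, so match on local names only.
# Assumption: each 'row' element contains one child element per column value.
root = ET.fromstring(response.content)
with open('dms_output.csv', 'wb') as output_file:  # Python 2.7: open csv files in binary mode
    writer = csv.writer(output_file, delimiter=';')
    for element in root.iter():
        if element.tag.split('}')[-1] == 'row':
            writer.writerow([child.text for child in element])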


How to work with the DMS

The dynamic contents of the DMS tables (like average service response times) are reset upon server restart. Static contents, such as deployed composites, remain comparable even after a restart. The contents can also be reset by a script. See for example here. An easy way to work with the DMS is to first reset it, perform some tests and then collect data. After collecting data, you can reset it again and start with the next test.

The output of the script can be piped to a (CSV) file and then opened in Excel. There you can make a graph of the data to, for example, analyse the poorest-performing operations or look at JVM garbage collects, datasource statistics, etc. You can also easily expand the script to take measurements over time or put the output in a database table (see an example here on how to do JDBC from Jython/WLST).

Requirements for running the script

The Python script requires the 'requests' module to be installed. This module takes care of maintaining the DMS session. Installing this module can be a challenge if the system you want to run the script from has limited privileges or connectivity. Using WinPython might help. This distribution of Python is portable and can be copied standalone to a Windows folder or share. It will work just fine from there and is not dependent on Windows registry settings or specific OS configuration. I recommend you first prepare a WinPython folder with the requests module already installed and package that with your script if you have such a requirement.

The module can be installed with 'pip install requests' or 'easy_install requests' (both are supplied in the Scripts folder of the WinPython distribution and of almost every other Python distribution). If your customer is using a proxy server (which does not require NTLM authentication; otherwise consider NTLMaps, cNTLM or talking operations into using a non-proprietary authentication protocol), you can set the following environment variables:

set http_proxy=http://username:password@<host>:<port>
set https_proxy=https://username:password@<host>:<port>

After you have installed the module, do not forget to unset these variables if the environment you want to connect to from the script does not require going through the proxy server:

set http_proxy=
set https_proxy=
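
Alternatively, you can pass the proxy to pip for a single installation only, which avoids having to set and unset the environment variables:

pip install --proxy http://username:password@<host>:<port> requests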

Inside the script you should specify host, port, username and password of the WebLogic environment. The user should have administrative privileges (be in the Administrators group, see here).

Obtaining data

I've provided two versions of the script. One which just dumps the entire output of the DMS table to a CSV file here and one which provides some processing of the data and can be used as an example here. I've chosen the table 'wls_webservice_operation' as a sample but you might want to obtain data from a different table. How to do that is described below.

Selecting a different table

The Python script contains a DMS URL to obtain data from. For example: /dms/index.html?format=xml&cache=false&prefetch=false&table=wls_webservice_operation. You can open http://<host>:<port>/dms/index.html?format=xml&cache=false&prefetch=false&table=wls_webservice_operation in a web browser to see the results. If you want a nice HTML output, replace 'xml' in the URL with 'metricstable'. The XML output in 11g and 12c has a different namespace but the script takes care of that for you.

If there is another table you want data from, you can go to the DMS Spy servlet page: http://<host>:<port>/dms and browse from there. If you see a specific table which has interesting data, open the link in a new tab and you can see the URL of the table. Next change the format to xml (metricstable is the default when using the web interface) and you have the URL which the script can use:



Processing data

The example script here already has some sorting and processing of data. This has been created by a colleague: Rudi Slomp. It contains specific fields to filter and sort. That's the following part:

for row in rows:
    columns = get_columns_from_row(row)
    h,n,s,mint,maxt,avgt,comp = '','','',0,0,0,0
    for column in columns:
        k,v = get_name_value_from_column(column)
        if k == 'Host': h = v
        if k == 'Name': n = v
        if k == 'wls_ear': s = v
        if k == 'Invoke.minTime': mint = exceldecimal(v)
        if k == 'Invoke.maxTime': maxt = exceldecimal(v)
        if k == 'Invoke.avg': avgt = exceldecimal(v)
        if k == 'Invoke.completed': comp = int(v)

    if comp > 0:
        result.append([h,n,s,mint,maxt,avgt,comp])

result.sort(key=itemgetter(0)) #sort on wls_ear,name,host
result.sort(key=itemgetter(1))
result.sort(key=itemgetter(2))

result.insert(0, ['Host','Operation','Service','MinTime','MaxTime','AvgTime','Calls'])

After executing the script, the output could be something like:

Host;Operation;Service;MinTime;MaxTime;AvgTime;Calls
localhost;process;HelloWorld;13;1031;28,010460251046027;478

Dutch Excel uses ',' instead of the usual '.' as decimal separator :(. For an English decimal separator replace:

   if k == 'Invoke.minTime':  mint = exceldecimal(v)  
if k == 'Invoke.maxTime': maxt = exceldecimal(v)
if k == 'Invoke.avg': avgt = exceldecimal(v)

with

   if k == 'Invoke.minTime':  mint = v  
if k == 'Invoke.maxTime': maxt = v
if k == 'Invoke.avg': avgt = v
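
The exceldecimal helper used above is not shown in these snippets; in essence it only swaps the decimal separator, so a minimal version could look like this:

def exceldecimal(value):
    # Replace the '.' decimal separator with ',' so Dutch Excel
    # treats the value as a number instead of text
    return value.replace('.', ',')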

If you select a different table, you of course need to change these fields as well. For example, as in the screenshots above for the soainfra_composite table, you might only want to select the Name and soainfra_domain fields. You might not require a selection and you might not require sorting. The above part would then become something like:

for row in rows:
    columns = get_columns_from_row(row)
    n,d = '',''
    for column in columns:
        k,v = get_name_value_from_column(column)
        if k == 'Name':
            n = v
        if k == 'soainfra_domain':
            d = v
    result.append([n,d])
result.insert(0, ['Name','Partition'])

The output would be a CSV (well, more like semicolon separated values) like:

Name;Partition
HelloWorld;mypartition

What about IWS?

SOA Suite 12.2.1 comes with IWS, Integration Workload Statistics. Read more about it in my previous blog post here or in the official documentation here. IWS is part of the 'Oracle Integration Continuous Availability' option. This requires the 'Oracle WebLogic Server Continuous Availability' option. IWS is more specific to SOA/BPM Suite. Statistics can also be saved for later analysis and exported in a variety of formats from the EM Fusion Middleware Control. I've not spent time on automating gathering/scheduling/resetting IWS statistics from a script.

The DMS is more low-level and less specific to SOA/BPM Suite. You can for example also obtain detailed information on JVM behavior. Dynamic data like average service response times is reset upon server restart. The DMS gives way more information (including some very low-level data) and requires careful selection of what you might want to know.

Depending on the change you want to performance-test, one might be more useful than the other. IWS is easier to use since it has UI support and does not require custom scripting. DMS might be a bit overwhelming at first due to the large amount of data which is available.

Oracle Mobile Cloud Service (MCS). Implementing custom APIs using JavaScript on Node.js.

Oracle Mobile Cloud Service is a mobile backend as a service. MCS does its magic by providing a lot of features to make implementing mobile services easy, such as (among many others) authentication, logging/analytics, lookups and calling other services. There are also features available to make integration with mobile clients easy, such as providing an easy way to implement push notifications.

Personally I think one of the most powerful features of MCS is the ability to write custom JavaScript code and use that as an API implementation. This custom code can (among the regular JavaScript features) call MCS connectors and platform services. This provides a lot of flexibility in defining API behavior.

In this blog I will show how you can use this custom Node.js code to create an end to end example. I will use a RAML file to define my interface. Next I will define a connector in MCS to call the OpenWeatherMap API. This API returns (amongst other things) the temperature at a location in Kelvin. I want to define my own custom result message (with the temperature in Celsius) which better matches the requirements of my mobile client. I will use a custom JavaScript implementation to call the connector which calls the OpenWeatherMap API and create a custom response message from the result.

The described example is not suitable for a production implementation and is based on limited experience (and watching some really nice YouTube presentations). It is provided to give an idea on how to get started easily with a simple working example.


OpenWeatherMap API

From the ProgrammableWeb site I found OpenWeatherMap. This site provides current weather and forecasts via an easy to use API. You can get a free account which allows up to 600 calls per 10 minutes. This is enough for this demonstration.


Once you have generated an API key, you can call the API with a GET request on

http://api.openweathermap.org/data/2.5/weather?q=[location,country]&APPID=[your API key]

Location,country can be for example Birmingham,GB or Groningen,NL

After you have generated an API key, you can use your favorite REST service testing tool to check if you can call the service. The below screenshot is from Postman, a Chrome plugin.
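
If you prefer a scripted check over Postman, the call can also be done with a few lines of Python. The API key is a placeholder, and the Kelvin to Celsius conversion shown here is the same conversion the custom API will do later on.

import requests

API_KEY = '<your API key>'
url = 'http://api.openweathermap.org/data/2.5/weather'

response = requests.get(url, params={'q': 'Groningen,NL', 'APPID': API_KEY})
data = response.json()

# The 'main.temp' field is in Kelvin; subtract 273.15 to get Celsius
print(data['main']['temp'] - 273.15)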


MCS configuration

In order to get started with MCS, you first need to create a Mobile Backend. A Mobile Backend has a Mobile Backend Id. This will be used later to call API's available on this backend.


Next you create the connector to the OpenWeatherMap API.


You should of course test your connector


Next you should define your API. In the below screenshot you can already see some of the platform services which are available.


When defining the API, it is easiest if you use a prepared RAML file.


For security, I created a user (MaartenSmeets) under Mobile User Management, granted the user a role (MY_ROLE) and set the below properties to allow that user to call my API.


The API needs to be assigned to a Mobile Backend.


To get a simple (dummy) implementation for the API, you can generate a JavaScript Scaffold and upload that. For larger scale development, I recommend using the Git integration instead of manually uploading the ZIP file every time in the MCS UI.


Implementing a custom API

The JavaScript Scaffold can be extracted and edited (I use Visual Studio Code for that) to call the connector when the API is called and to rewrite the result to the required format. You can see an example with some extra notes below. You can download the code here.


The connector needs to be a dependency in the package.json file.


After you ZIP this and upload it to MCS as an API implementation, you can test your API.


I have used basic authentication here as defined when defining the API. I use the credentials of a user who is granted the specified role. With the request you also need to specify the Mobile Backend Id as an Oracle-Mobile-Backend-Id HTTP header.


After you have done this, you have an end to end sample working! You can browse the log files from Administration, Logs to confirm your message has arrived and browse console.log messages. This is also useful for debugging.


Finally

As usual, Oracle has done a good job at providing thorough documentation. Next to the YouTube presentations, the following was especially useful. Working with the result object was not documented though and I was having some difficulties with the security settings and required headers in my requests. When looking at the above example, you can save yourself the trouble of having to find out how to deal with them yourself.

This example is the result of about an afternoon's worth of experience, which is an indication of how easy/intuitive MCS is to use. My first impression is that MCS is a powerful product which offers a lot of functionality to easily create mobile backends. When using custom JavaScript code in MCS, you can define custom behavior for your APIs. The MCS API itself, which can be called from the custom JavaScript code, does not seem difficult to use and the documentation contains quite a lot of examples. I'm looking forward to exploring the platform APIs provided by MCS to add more functionality to my custom APIs!

WebLogic Server: Logging the SOAP action in the access.log

WebLogic Server allows you to customize your access.log. This can be very powerful if you want to monitor, for example, service response times in a tool like Splunk (see here). When working with SOAP services though, especially those with many operations, it can be insufficient to monitor services at the level of the individual endpoint. You also want to know with which intent the endpoint is called. In this blog I will show how this can be achieved.

Extended Log File Format (ELFF)

The way you can add custom fields to the access.log is described here. This functionality has not changed noticeably for many releases of WebLogic Server. WebLogic Server supports the Extended Log File Format as described here. This can be configured by going to a server in WebLogic Console, Logging, HTTP. Click on Advanced and select Extended. Now you can specify additional fields like time-taken and bytes.



There is no custom field for the SOAP action available though.

Adding the SOAP Action field

WebLogic Server provides a feature to supply custom field identifiers. These have the simple format x-CustomFieldName where CustomFieldName is the fully qualified name of the class which provides the custom field. The class must implement the weblogic.servlet.logging.CustomELFLogger interface. Now we are nearly there.

Obtaining the SOAP action

The HTTP headers of a SOAP call often give an indication of the intent. For SOAP 1.1, the intent is supplied in the HTTP header field called SOAPAction. Filling this field is optional though. In SOAP 1.2 messages, the action parameter in the Content-Type header serves the same purpose. Often the intent gives a good indication of the operation which is called.

When implementing this logic in a custom class, it looks something like:


You can download the code here. Mind that the code is executed for every HTTP request going to the WebLogic Server, so it is not recommended to do anything in it which can influence performance, such as going to a DBMS or parsing entire messages. You should of course also mind that the function should always return a value, even in case of exceptions. It should thus be robust. If you don't, you might break the access.log format.

It is a JDeveloper 12.2.1 workspace but since the code is JDK 1.6 compliant, you can of course also use older versions of JDeveloper. Do mind that you need to include the WebLogic client libraries if you want to compile the project yourself. For ease of use, a JAR (compiled with JDK 1.6, usable for 11g and 12c) is also provided, as well as a version which outputs a lot of information to System.out for debugging purposes. I have used Apache HttpComponents to make parsing the Content-Type header easy.

Deploying the custom SOAPAction field

Copy the libraries in the deploy directory here (httpcore-4.4.6.jar and SOAPActionField.jar) to the DOMAINDIR/lib folder. As described above, enable Extended Log File Format and add the following field: x-nl.amis.customfield.SOAPActionField. Save your settings and restart the server. If you see a message like below, you have not put the JAR files in the correct directory.

<Error> <HTTP> <BEA-101234> <Attempting to initialize ExtendedLogFormat application specific header: x-nl.amis.customfield.SOAPActionField. However, initialization failed due to an exception.>

Testing

If you want to test this setting, you should test it with SOAP 1.1 and SOAP 1.2 services since the logic for determining the SOAP action differs. I have tested the custom SOAPActionField on SOA Suite composites and Service Bus running on WebLogic 11g and on 12c (12.2.1). Also, the access.log file is buffered. Batches of 8Kb are written at a time. For testing purposes, you can reduce this setting to 0. You can find it on the same page as the Extended Log settings. Fire up your favorite SOAP testing tool and send in some requests. Next check the access.log. Now you should see the SOAP action as the last field in this example.
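
If you do not have a SOAP testing tool at hand, a few lines of Python are enough to generate test traffic. The endpoint URL and action value below are placeholders for your own service; for SOAP 1.1 the intent goes in the SOAPAction header, for SOAP 1.2 in the action parameter of the Content-Type header, as described above.

import requests

url = 'http://<host>:<port>/your/service/endpoint'   # placeholder endpoint
envelope = """<?xml version="1.0"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <!-- operation specific payload -->
  </soapenv:Body>
</soapenv:Envelope>"""

# SOAP 1.1: the intent is carried in the SOAPAction HTTP header
requests.post(url, data=envelope,
              headers={'Content-Type': 'text/xml; charset=utf-8',
                       'SOAPAction': '"process"'})

# SOAP 1.2: the intent is carried in the action parameter of the Content-Type header;
# a SOAP 1.2 envelope uses the http://www.w3.org/2003/05/soap-envelope namespace
requests.post(url, data=envelope.replace('http://schemas.xmlsoap.org/soap/envelope/',
                                         'http://www.w3.org/2003/05/soap-envelope'),
              headers={'Content-Type': 'application/soap+xml; charset=utf-8; action="process"'})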


Finally

If you want to do performance testing, it can be very useful to have time-taken, bytes and the SOAP action in the access.log. Luckily, with WebLogic Server, this can be easily achieved! It can also help to determine if requests to a certain operation often give HTTP responses other than code 200 (=OK). A drawback of using this method is a (very) slight performance impact. Also, the SOAP action fields are optional in the SOAP 1.1 and 1.2 specifications, so some service or client implementations might not fill them in an expected way or not at all. SOA Suite does so nicely though. When using REST services, you of course do not need this since the resource is part of the URL and the HTTP verb is already present by default in the access.log.

Oracle Service Bus: Pipeline alerts in Splunk using SNMP traps

Oracle Service Bus provides a reporting activity called Alert. The OSB pipeline alerts use a persistent store. This store is file based. Changing the persistent store to JDBC based does not cause pipeline alerts to be stored in a database instead of on disk. When the persistent store on disk becomes large, opening pipeline alerts in the Enterprise Manager (12c) or Service Bus console (11g) can suffer from poor performance. If you put an archive setting on pipeline alerts (see here), the space of the persistent store on disk is not reduced when alerts get deleted. You can compact the store to reduce space (see here), but this requires the store to be offline and might require shutting down the Service Bus. This can be cumbersome to do often and is not good for your availability.

If you do not want to use the EM / SB console or have the issues with the filestore, there is an alternative. Pipeline alerts can produce SNMP traps. SNMP traps can be forwarded by a WebLogic SNMP Agent to an SNMP Manager. This manager can store the SNMP traps in a file and Splunk can monitor the file. Splunk makes searching alerts and visualizing them easy. In this blog I will describe the steps needed to get a minimal setup with SNMP traps going and how to see the pipeline alerts in Splunk.

Service Bus 

Create an AlertDestination in JDeveloper

Make sure you have Alert Logging and Reporting disabled and SNMP Trap enabled in the Alert Destination you are using in your Service Bus project. For testing purposes you can first keep the Alert Logging on to also see the alerts in the EM or SB Console.


Add the Alert action to a pipeline

In this example I'm logging the entire body of the message. You might also consider logging the (SOAP) header in a more elaborate setup if it contains relevant information. Configure the alert to use the alert destination.


WebLogic Server

Configure an SNMP Manager

On Ubuntu Linux installing an SNMP Manager and running it is as easy as:

sudo apt-get install snmptrapd

Update /etc/snmp/snmptrapd.conf
Uncomment the line: authCommunity log,execute,net public

The authCommunity public is the same as set in the WebLogic SNMP Agent configuration below for Community Based Access, Community Prefix.

sudo snmptrapd -Lf /var/log/snmp-traps

This runs an SNMP Manager on UDP port 162 and puts the output in a file called /var/log/snmp-traps. On my Ubuntu machine, snmptrapd logging ended up in /var/log/syslog.

Configure the SNMP Agent

Configuring an SNMP Agent on WebLogic Server is straightforward and you do not need to restart the server after you have done this configuration. Go to Diagnostics, SNMP and enable the SNMP Agent for the domain. Do mind the following pieces of configuration though:

On Linux a non-privileged user is not allowed to run servers on ports below 1024. I've added a zero after the port numbers to avoid the issue of the SNMP Agent not being able to start (see here).


For the Trap Destination specify the host/port where the SNMP Manager (snmptrapd) is running.


Test the setup

If you want to test the configuration of the agent, Service Bus alert and AlertDestination, you can use the following (inspired by this).

First run setDomainEnv.cmd or setDomainEnv.sh; weblogic.jar must be in the CLASSPATH.

java weblogic.diagnostics.snmp.cmdline.Manager SnmpTrapMonitor -p 162

The port is the port given in the trap destination. Use a port above 1024 if you do not have permissions to create a server running on a lower port.

Now if you call your service with the pipeline alert and alert destination configured correctly and you have configured the SNMP Agent in WebLogic Server correctly, you will see the SNMP Manager producing output in the console of the SNMP trap which has been caught. If you do not see any output, check the WebLogic server logs for SNMP related errors. If this is working correctly, you can change the trap destination to point to snmptrapd (which of course needs to be running). If you do not see pipeline alerts from snmptrapd in /var/log/snmp-traps, you might have a connectivity issue to snmptrapd or you have not configured snmptrapd correctly. For example, you forgot to edit /etc/snmp/snmptrapd.conf. Also check /var/log/syslog for snmptrapd messages.

Splunk

It is easy to add a file as a source in Splunk. OOTB you get results like below. As you can see, the entire message is present in the log including additional data such as the pipeline, the location of the alert and the domain.


You can read more about the Splunk setup here.

Some notes
  • Do you want to use pipeline alerts? The Alert activity in Service Bus is blocking; processing of the pipeline will continue after the Alert has been delivered (stored in the persistent store or after having produced an SNMP trap). This can delay service calls (in contrast to Report activities). Also there have been reports of memory leaks. See: 'OSB Alert Log Activities Generating Memory Leak on WebLogic Server (Doc ID 1536484.1)' on Oracle support.
  • Use a single alert destination for all your services. This makes changing the alert configuration easier. 
  • Think about your alert levels. You do not want alerts for everything all the time since it has a performance impact.
  • Configure logrotate for the SNMP Manager trap file. Otherwise it might become very large and difficult to parse. See here for some examples.
  • Consider running snmptrapd on another host than the WebLogic Server. In case of large numbers of pipeline alerts, it will cause disk I/O, potentially more than the regular persistent store because of its plain-text format. I have not checked if this causes a delay in Service Bus pipeline processing. My guess is that producing alerts and sending them to the SNMP Agent might be part of the same thread which is used for processing the Service Bus pipeline, but that sending SNMP traps from the SNMP Agent to the SNMP Manager is not and will thus not delay the Service Bus process. Do some performance tests before making decisions on a local or remote snmptrapd setup.
  • Which SNMP Manager do you want to use? I'm using snmptrapd because it is easy to produce files which can be read by Splunk, but with this (Service Bus, WebLogic Server) setup you can of course easily use any other SNMP Manager instead of snmptrapd in combination with Splunk. For example Enterprise Manager Cloud Control (see here).

Oracle Service Bus: Produce messages to a Kafka topic

Oracle Service Bus is a powerful tool which provides features like transformation, throttling and virtualization of messages coming from different sources. There is a (recently open sourced!) Kafka transport available for Oracle Service Bus (see here). Oracle Service Bus can thus be used to do all kinds of interesting things to messages coming from Kafka topics. You can then produce altered messages to other Kafka topics and create a decoupled processing chain. In this blog I provide an example on how to use Oracle Service Bus to produce messages to a Kafka topic.


Messages from OSB to Kafka

First perform the steps as described here to setup the Service Bus with the Kafka transport. Also make sure you have a Kafka broker running.

Next create a new Business Service (File, New, Business Service). It is not visible in the component palette since it is a custom transport. Next use transport Kafka.


In the Type screen be sure to select Text as request message and None as response message.


Specify a Kafka bootstrap broker.


The body needs to be of type {http://schemas.xmlsoap.org/soap/envelope/}Body. If you send plain text as the body to the Kafka transport, you will get the below error message:

<Error> <oracle.osb.pipeline.kernel.router> <ubuntu> <DefaultServer> <[STUCK] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <43b720fd-2b5a-4c93-a073-298db3e92689-00000132> <1486368879482> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <OSB-382191> <SBProject/ProxyServicePipeline: Unhandled error caught by system-level error handler: com.bea.wli.sb.pipeline.PipelineException: OSB Assign action failed updating variable "body": [OSB-395105]The TokenIterator does not correspond to a single XmlObject value

If you send XML as the body of the message going to the transport but not an explicit SOAP body, you will get errors in the server log like below:

<Error> <oracle.osb.pipeline.kernel.router> <ubuntu> <DefaultServer> <[STUCK] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <43b720fd-2b5a-4c93-a073-298db3e92689-00000132> <1486368987002> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <OSB-382191> <SBProject/ProxyServicePipeline: Unhandled error caught by system-level error handler: com.bea.wli.sb.context.BindingLayerException: Failed to set the value of context variable "body". Value must be an instance of {http://schemas.xmlsoap.org/soap/envelope/}Body.

As you can see, this causes stuck threads. In order to get a {http://schemas.xmlsoap.org/soap/envelope/}Body you can for example use an Assign activity. In this case I'm replacing text in the input body and assign it to the output body. I'm using <ns:Body xmlns:ns='http://schemas.xmlsoap.org/soap/envelope/'>{fn:replace($body,'Trump','Clinton')}</ns:Body>. This replaces Trump with Clinton.


When you check the output with a tool like, for example, KafkaTool, you can see the SOAP body is not propagated to the Kafka topic.
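
As an alternative to KafkaTool, a small Python consumer can be used to inspect the messages on the topic. This assumes the kafka-python package is installed and that the topic is called 'test'; adjust the broker address and topic name to your setup.

from kafka import KafkaConsumer

consumer = KafkaConsumer('test',
                         bootstrap_servers='localhost:9092',
                         auto_offset_reset='earliest',
                         consumer_timeout_ms=10000)

# Print the value of every message currently on the topic
for message in consumer:
    print(message.value)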


Finally

Oracle Service Bus processes individual messages. If you want to aggregate data or perform analytics on several messages, you can consider using Oracle Stream Analytics (OSA). It also has pattern recognition and several other interesting features. It is however not very suitable to split up messages or perform more complicated actions on individual messages such as transformations. For such a use-case, use Oracle Service Bus.

Oracle SOA Suite: Find that composite instance!

When executing BPM or BPEL processes, they are usually executed in the context of a specific entity. Sometimes you want to find instances involved with a specific entity. There are different ways to make this easy. You can for example use composite instance titles or sensors and set them to a unique identifier for your entity. If they have not been used, you can check the audit trail. However, manually checking the audit trail, especially if there are many instances, can be cumbersome. Also if different teams use different standards or standards have evolved over time, there might not be a single way to look for your entity identifier in composite instances. You want to automate this.

It is of course possible to write Java or WLST code and use the API to gather all relevant information. It would however require fetching large amounts of data from the SOAINFRA database to analyse. Fetching all that data into WLST or Java and combining it, would not be fast. I've created a database package / query which performs this feat directly on the 11g SOAINFRA database (and most likely with little alteration on 12c).


How does it work

The checks which are performed in order (the first result found is returned):
  • Check the composite instance title
  • Check the sensor values
  • Check the composite audit trail
  • Check the composite audit details
  • Check the BPM audit trail
  • Check the Mediator audit trail
  • Do the above checks except this one for every composite sharing the same ECID.
It first looks for instance titles conforming to a specific syntax (with a regular expression). Next it looks for sensor values of sensors with a specific name. After that it starts to look in the audit trail and if even that fails, it looks in the audit details where messages are stored when they become larger than a set value (look for Audit Trail threshold). Next the BPM and Mediator specific audit tables are looked at and as a last resort, it uses the ECID to find other composite instances in the same flow which might provide the required information and does the same checks as mentioned above on those composite instances. Using this method I could find a corresponding entity identifier for almost any composite instance in my environment. The package/query has been tested on 11g but not on 12c. You should of course check to see if it fits your personal requirements. The code is mostly easy to read save the audit parsing details. For parsing the audit trail and details tables, I've used the following blog. The data is saved in a file which can be imported in Excel and the script can be scheduled on Linux with a provided sh script.

Getting the script to work for your case

You can download the script here. Several minor changes are required to make the script suitable for a specific use case.
  • In the example script getcomposites_run.sql the identification regular expression AA\d\d\.\d+ is used. You should of course replace this with a regular expression reflecting the format of your entity identification. 
  • In the example script getcomposites_run.sql sensors which have AAIDENTIFICATION in the name will be looked at. This should be changed to reflect the names used by your sensors.
  • The getcomposites.sh contains a connect string: connect soainfra_username/soainfra_password. You should change this to your credentials.
  • The getcomposites.sh script can be scheduled. In the example script, it is scheduled to run at 12:30:00. If you do not need it, you can remove the scheduling. It can come in handy when you want to run it outside of office hours because the script most likely will impact performance. 
  • The selection in getcomposites_run.sql only looks at running composites. Depending on your usecase, you might want to change this to take all composites into consideration.
  • The script has not been updated for 12c. If you happen to create a 12c version of this script (I think not much should have to be changed), please inform me so I can add it to the Github repository.
Considerations

If you have much data in your SOAINFRA tables, the query will be slow. It could take hours. During this period, performance might be adversely affected.

That I had to create a script like this (first try this, then this, then this, etc) indicates that I encountered a situation in which there was not a single way to link composite instances to a specific identifier. If your project uses strict standards and these standards are enforced, a script like this would not be needed. For example, you set your composite instance title to reflect your main entity identifier or use specific sensors. In such a case, you do not need to fall back to parsing audit data.

Machine learning: Getting started with random forests in R

According to Gartner, machine learning is on top of the hype cycle, at the peak of inflated expectations. There is a lot of misunderstanding about what machine learning actually is and what can be done with it.

Machine learning is not as abstract as one might think. If you want to get value out of known data and do predictions for unknown data, the most important challenge is asking the right questions and of course knowing what you are doing, especially if you want to optimize your prediction accuracy.

In this blog I'm exploring an example of machine learning: the random forest algorithm. I'll provide an example on how you can use this algorithm to do predictions. In order to implement a random forest, I'm using R with the randomForest library and the iris dataset which is provided by the R installation.


The Random Forest

A popular method of machine learning is by using decision tree learning. Decision tree learning comes closest to serving as an off-the-shelf procedure for data mining (see here). You do not need to know much about your data in order to be able to apply this method. The random forest algorithm is an example of a decision tree learning algorithm.

Random forest in (very) short

How it works exactly takes some time to figure out. If you want to know details, I recommend watching some YouTube recordings of lectures on the topic. Some of the most important features of this method:
  • A random forest is a method to do classifications based on features. This implies you need to have features and classifications.
  • A random forest generates a set of classification trees (an ensemble) based on splitting a subset of features at locations which maximize information gain. This method is thus very suitable for distributed parallel computation.
  • Information gain can be determined by how accurate the splitting point is in determining the classification. Data is split based on the feature at a specific point and the classification on the left and right of the splitting point are checked. If for example the splitting point splits all data of a first classification from all data of a second classification, the confidence is 100%; maximum information gain.
  • A splitting point is a branching in the decision tree.
  • Splitting points are based on values of features (this is fast)
  • A random forest uses randomness to determine features to look at and randomness in the data used to construct the tree. Randomness helps reducing compute time. 
  • Each tree gets to see a different dataset. This is called bagging.
  • Tree classification confidences are summed and averaged. Products of the confidences can also be taken. Individual trees have a high variance because they have only seen a small subset of data. Averaging helps creating a better result.
  • With correlated features, strong features can end up with low scores and the method can be biased towards variables with many categories.
  • A random forest does not perform well with unbalanced datasets; samples where there are more occurrences of a specific class.
Use case for a random forest

Use cases for a random forest can be for example text classification such as spam detection. Determining whether certain words are present in a text can be used as a feature and the classification would be spam/not spam, or even more specific such as news, personal, etc. Another interesting use case lies in genetics: determining if the expression of certain genes is relevant for a specific disease. This way you can take someone's DNA and determine with a certain confidence if that person will contract a disease. Of course you can also take other features into account such as income, education level, smoking, age, etc.

R

Why R

I decided to start with R. Why? Mainly because it is easy. There are many libraries available and there is a lot of experience present worldwide; a lot of information can be found online. R however also has some drawbacks.

Some benefits
  • It is free and easy to get started. Hard to master though.
  • A lot of libraries are available. R package management works well.
  • R has a lot of users. There is a lot of information available online
  • R is powerful in that if you know what you are doing, you require little code to do it.
Some challenges
  • R loads datasets in memory
  • R is not the best at doing distributed computing but can do so. See for example here
  • The R syntax can be a challenge to learn
Getting the environment ready

To get a server to play with, I decided to go with Ubuntu Server. I first installed the usual things like a GUI. Next I installed some handy things like a terminal emulator, firefox and stuff like that. I finished with installing R and R-studio; the R IDE.

So first download and install Ubuntu Server (next, next, finish)

sudo apt-get update
sudo apt-get install aptitude

--Install a GUI
sudo aptitude install --without-recommends ubuntu-desktop

-- Install the VirtualBox Guest additions
sudo apt-get install build-essential linux-headers-$(uname -r)
Install guest additions (first mount the ISO image which is part of VirtualBox, next run the installer)

-- Install the below stuff to make Dash (Unity search) working
http://askubuntu.com/questions/125843/dash-search-gives-no-result
sudo apt-get install unity-lens-applications unity-lens-files

-- A shutdown button might come in handy
sudo apt-get install indicator-session

-- Might come in handy. Browser and fancy terminal application
sudo apt-get install firefox terminator

--For the installation of R I used the following as inspiration: https://www.r-bloggers.com/how-to-install-r-on-linux-ubuntu-16-04-xenial-xerus/
sudo echo "deb http://cran.rstudio.com/bin/linux/ubuntu xenial/" | sudo tee -a /etc/apt/sources.list
gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
gpg -a --export E084DAB9 | sudo apt-key add -
sudo apt-get update
sudo apt-get install r-base r-base-dev

-- For the installation of R-studio I used: https://mikewilliamson.wordpress.com/2016/11/14/installing-r-studio-on-ubuntu-16-10/

wget http://ftp.ca.debian.org/debian/pool/main/g/gstreamer0.10/libgstreamer0.10-0_0.10.36-1.5_amd64.deb
wget http://ftp.ca.debian.org/debian/pool/main/g/gst-plugins-base0.10/libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
sudo dpkg -i libgstreamer0.10-0_0.10.36-1.5_amd64.deb
sudo dpkg -i libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
sudo apt-mark hold libgstreamer-plugins-base0.10-0
sudo apt-mark hold libgstreamer0.10

wget https://download1.rstudio.org/rstudio-1.0.136-amd64.deb
sudo dpkg -i rstudio-1.0.136-amd64.deb
sudo apt-get -f install

Doing a random forest in R

R needs some libraries to do random forests and create nice plots. First give the following commands:

#to do random forests
install.packages("randomForest")

#to work with R markdown language
install.packages("knitr")

#to create nice plots
install.packages("ggplot2")

In order to get help on a library you can give the following command which will give you more information on the library.

library(help = "randomForest")


Of course, the randomForest implementation does have some specifics:
  • it uses the reference implementation based on CART trees
  • it is biased in favor of continuous variables and variables with many categories
A simple program to do a random forest looks like this:

#load libraries
library(randomForest)
library(knitr)
library(ggplot2)

#random numbers after the set.seed(10) are reproducible if I do set.seed(10) again
set.seed(10)

#create a training sample of 45 items from the iris dataset. replace indicates items can only be present once in the dataset. If replace is set to true, you will get Out of bag errors.
idx_train <- sample(1:nrow(iris), 45, replace = FALSE)

#create a data.frame from the data which is not in the training sample
tf_test <- !1:nrow(iris) %in% idx_train

#the column ncol(iris) is the last column of the iris dataset. this is not a feature column but a classification column
feature_columns <- 1:(ncol(iris)-1)

#generate a randomForest. 
#use the feature columns from training set for this
#iris[idx_train, ncol(iris)] indicates the classification column
#importance=TRUE indicates the importance of features in determining the classification should be determined
#y = iris[idx_train, ncol(iris)] gives the classifications for the provided data
#ntree=1000 indicates 1000 random trees will be generated
model <- randomForest(iris[idx_train, feature_columns], y = iris[idx_train, ncol(iris)], importance = TRUE, ntree = 1000)

#print the model
#printing the model indicates how the sample dataset is distributed among classes. The sum of the sample classifications is 45 which is the sample size. OOB rate indicates 'out of bag' (the overall classification error).

print(model)


#we use the model to predict the class based on the feature columns of the dataset (minus the sample used to train the model).
response <- predict(model, iris[tf_test, feature_columns])

#determine the number of correct classifications
correct <- response == iris[tf_test, ncol(iris)]

#determine the percentage of correct classifications
sum(correct) / length(correct)

#print a variable importance (varImp) plot of the randomForest
varImpPlot(model)

#in this dataset the petal length and width are more important measures to determine the class than the sepal length and width.

Oracle Mobile Cloud Service (MCS): An introduction to API security: Basic Authentication and OAuth2

As an integration/backend developer, when starting a project using Mobile Cloud Service, it is important to have some understanding of what this MBaaS (Mobile Backend as a Service) has to offer in terms of security features. This is important in order to be able to configure and test MCS. In this blog I will give examples on how to configure and use the basic authentication and OAuth2 features which are provided to secure APIs. You can read the Oracle documentation (which is quite good for MCS!) on this topic here.


Introduction

Oracle Mobile Cloud Service offers platform APIs to provide specific features. You can create custom APIs by writing JavaScript code to run on Node.js. Connectors are used to access backend systems. This blog focuses on authentication options for incoming requests.

The connectors are not directly available from the outside. MCS can secure custom and platform APIs. This functionality is taken care of by the Mobile Backend and the custom API configuration.



Getting started

The first thing to do when you want to expose an API is assign the API to a Mobile Backend. You can do this in the Mobile Backend configuration screen, APIs tab.


You can allow anonymous access, but generally you want to know who accesses your API. Also, MCS has a license option where you pay for a specific number of API calls, so you want to know who you are paying for. In order to require authentication on a per-user basis, you first have to create a user and assign it to a group. You can also do this from the Mobile Backend configuration. Go to the Mobile Users Management tab to create users and groups.


After you have done this, you can assign the role to the API. You can also do this on a per endpoint basis which makes this authentication scheme very flexible.



Now we have configured our API to allow access to users who are in a specific role. We can now call our API using basic authentication or OAuth2

Basic Authentication

In order to test our API, Postman is a suitable option. Postman is a freely available Chrome plugin (but also available standalone for several OSes) which provides many options for testing HTTP calls.


Basic authentication is a rather weak authentication mechanism. You Base64 encode a string username:password and send that as an HTTP header to the API you are calling. If someone intercepts the message, he/she can easily Base64 decode the username:password string to obtain the credentials. You can thus understand why I've blanked out that part of the Authorization field in several screenshots.


In addition to specifying the basic authentication header, you also need to specify the Oracle-Mobile-Backend-Id HTTP header which can be obtained from the main page of the Mobile Backend configuration page.

Obtain Oracle-Mobile-Backend-Id


Call your API with Basic authentication



This mechanism is rather straightforward. The authorization header needs to be supplied with every request though.
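
For completeness, this is what such a call could look like from Python. This is a hedged sketch: the API path, user and backend id below are placeholders, and the requests library constructs the Base64-encoded Authorization header for you.

import requests

MOBILE_BACKEND_ID = '<your mobile backend id>'
url = 'https://<mcs-host>/mobile/custom/myapi/myresource'  # placeholder custom API resource

response = requests.get(url,
                        auth=('MaartenSmeets', '<password>'),
                        headers={'Oracle-Mobile-Backend-Id': MOBILE_BACKEND_ID})
print(response.text)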


OAuth2

OAuth2 works a bit differently from basic authentication in that first a token is obtained from a token service and that token is used in subsequent requests. When using the token, no additional authentication is required.


You can obtain the token from the Mobile Backend settings page as shown above. When you do a request to this endpoint, you need to provide some information:

You can use basic authentication with the Client ID:Client secret to access the token endpoint. These can be obtained from the screen shown below.


You also need to supply a username and password of the user for whom the token is generated. After you have done a request to the token service, you obtain a token.


This token can be used in subsequent requests to your API. You can add it as a Bearer token in the Authorization HTTP header to authenticate instead of sending your username/password every time. This is thus more secure.
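
In Python this two-step flow could look as follows. This is a sketch with placeholder values; the token endpoint URL is the one shown on the Mobile Backend settings page and the grant type assumed here is the resource owner password flow.

import requests

token_url = 'https://<token-endpoint>'   # copy from the Mobile Backend settings page
client_id, client_secret = '<client id>', '<client secret>'

# Step 1: obtain a token using the client credentials and the user's credentials
token_response = requests.post(token_url,
                               auth=(client_id, client_secret),
                               data={'grant_type': 'password',
                                     'username': 'MaartenSmeets',
                                     'password': '<password>'})
access_token = token_response.json()['access_token']

# Step 2: call the API with the Bearer token instead of username/password
api_response = requests.get('https://<mcs-host>/mobile/custom/myapi/myresource',
                            headers={'Authorization': 'Bearer ' + access_token,
                                     'Oracle-Mobile-Backend-Id': '<your mobile backend id>'})
print(api_response.text)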


Finally

I've not talked about security options for outgoing requests provided by the supplied connectors.


These have per connector specific options and allow identity propagation. For example the REST connector (described in the Oracle documentation here) supports SAML tokens, CSF keys, basic authentication, OAuth2, JWT. The SOAP connector (see here) can use WS-Security in several flavours, SAML tokens, CSF keys, basic authentication, etc (quite a list).


R: Utilizing multiple CPUs

R is a great piece of software to perform statistical analyses. Computing power can however be a limitation. R by default uses only a single CPU. In almost every machine, multiple CPUs are present, so why not utilize them?


Utilizing multiple CPUs

Luckily using multiple CPUs in R is relatively simple. There is a deprecated library multicore available which you shouldn't use. A newer library parallel is recommended. This library provides mclapply. This function only works on Linux systems so we're not going to use that one. The below examples work on Windows and Linux and do not use deprecated libraries.

A very simple example

library(parallel)

no_cores <- detectCores() - 1
cl <- makeCluster(no_cores)
arr <- c("business","done","differently")

#Work on the future together
result <- parLapply(cl, arr, function(x) toupper(x))

#Conclusion: BUSINESS DONE DIFFERENTLY
paste (c('Conclusion:',result),collapse = '')

stopCluster(cl)

The example is a minimal example of how you can use clustering in R. What this code does is spawn multiple processes and process the entries from the array c("business","done","differently") in those separate processes. Processing in this case is just putting them in uppercase. After it is done, the result from the different processes is combined in Conclusion: BUSINESS DONE DIFFERENTLY.

If you remove the stopCluster command, you can see there are multiple processes open on my Windows machine:

After having called the stopCluster command, the number of processes is much reduced:


You can imagine that for such a simple operation as putting things in uppercase, you might as well use the regular apply function, which saves you the overhead of spawning processes. If however you have more complex operations like the below example, you will benefit greatly from being able to utilize more computing power!

A more elaborate example

You can download the code of this example from: https://github.com/MaartenSmeets/R/blob/master/htmlcrawling.R

The sample however does not work anymore since it parses Yahoo pages which have recently changed. It does illustrate how to do parallel processing though.

Because there are separate R processes running, you need to make libraries and functions available to these processes. For example, you can make libraries available like:

#make libraries available in other nodes
clusterEvalQ(cl, {
  library(XML)
  library(RCurl)
  library(parallel)
  }
)

And you can make functions available like

clusterExport(cl, "htmlParseFunc")

Considerations

There are several considerations (and probably more than mentioned below) when using this way of clustering:

  • Work packages are separated equally over CPUs. If however the work packages differ greatly in the amount of work, you can encounter situations where parLapply is waiting for a process to complete while the other processes are already done. You should try and use work packages mostly of equal size to avoid this.
  • If a process runs too long, it will timeout. You can set the timeout when creating the cluster like: cl <- makeCluster(no_cores, timeout=50)
  • Every process takes memory. If you process large variables in parallel, you might encounter memory limitations.
  • Debugging the different processes can be difficult. I will not go into detail here.
  • GPUs can also be utilized to do calculations. See for example: https://www.r-bloggers.com/r-gpu-programming-for-all-with-gpur/. I have not tried this but the performance graphs online indicate a much better performance can be achieved than when using CPUs.

Oracle SOA Suite: Two-way SSL with TLS1.2 made easy (slightly less complicated)

Transport layer security is not an easy topic. Many blogs have been written about this already. Surprisingly though, I did not find a single blog which was more or less complete and provided me with everything I needed to know to get this working on SOA Suite 12.2.1. In this blog I try to make the topic more easy to understand and provide a complete end to end example.

If you only want an implementation and do not care much about the explanation, you can skip the 'Some basics' section, only execute the commands in bold in the 'Lets get started!' section and follow the steps in the 'WebLogic and SOA Suite' section. Do take into consideration any existing SSL related configuration on your own system.

Some basics

SSL/TLS

SSL stands for Secure Sockets Layer. SSL is the predecessor of TLS. SSL should be considered insecure since the POODLE attack was announced in October 2014. TLS currently has 4 versions: TLS 1.0, 1.1, 1.2 and 1.3. 1.3 is not widely supported/adopted yet. SSL/TLS provides integrity checks, security and authentication.

Identity

A server which hosts traffic on a port which has SSL/TLS enabled, has an identity keystore. This identity keystore contains a private key and a public key/certificate. The public key/certificate can safely be given to other parties. When visiting an HTTPS website (HTTP with SSL enabled), the public key is sent to you. The other party / client can use the public key to encrypt messages meant for the server. The only one who can decrypt the messages is the one having the private key of the server. This is usually only the server.

Trust

Can you trust a server? There are various ways to establish trust. An easy way which can be used in WebLogic Server is to check if a specific field in the public key corresponds to the hostname or domain of the server. This is of course easily faked.

You can use a certificate authority to create a signed public key. If someone trusts the certificate authority, that someone also automatically trusts the signed key. With websites you often see a green lock when a certain website uses HTTPS with a public certificate signed by a certificate authority trusted by your web browser.

Usually a truststore is used to store trusted certificate authorities or specific trusted certificates. If you have many servers in your application landscape, it is recommended to use a certificate authority since it is cumbersome to load every public key of every server in every truststore. Trusting a single certificate authority makes things a lot easier.

Certificate authority

A certificate authority has a private key which it can use to sign a so-called certificate signing request. From this certificate signing request you can create a signed public key.

Certain companies such as Google and Microsoft provide certain checks to confirm someones identity before providing them with a signed public key. You can pay these companies to provide those checks and give you a signed certificate. Most of these companies are trusted certificate authorities by default in several OSs and browsers. This way for a website for example, you do not have to make changes on a client for your certificate to be trusted.

If you run several servers within your internal company network, you often do not require these external checks. You can create your own certificate authority private key and create a signed public key yourself. This certificate authority is not trusted by default, so you should trust the public certificate of your self-signed certificate authority in order to establish trust.

Cipher

A cipher is an algorithm for encryption and decryption. With SSL, during the handshake phase (the phase which establishes an SSL session), a cipher is determined. The client usually provides a list of the ciphers it supports and the server chooses which one to use. During an SSL handshake you can see in logfiles which cipher is chosen.

Lets get started!

I used 2 SOA Suite 12.2.1.2 installations (complete, no quickstart) in 2 different VMs for this example: soaserver1 and soaserver2. I used a host-only network with fixed IPs in VirtualBox and added IP/hostname mappings in the hosts files of the two servers.

Create a self-signed certificate autority

A blog explaining the topic of creating your own certificate authority can be found here. This is just my short summary. Do read it for some easy to understand background information.

This simple example uses OpenSSL. OpenSSL is installed by default on most Linux environments and can also be installed on other OSs.

First create a private key for your certificate authority:

openssl genrsa -des3 -out rootCA.key 2048

I create an RSA key and protect it with the DES3 cipher algorithm based on a password. I want my key to have a length of 2048 bits. You can also choose ECC keys. They can be smaller compared to RSA keys while providing the same level of protection. ECDSA (Elliptic Curve Digital Signature Algorithm) ciphers use ECC keys. Keep this key private! It allows you to sign public keys (see later in this post) and create trust.

Next I self-sign this generated key. This creates a public signed key for the certificate authority. I can load this key in truststores to achieve trust for keys which are signed with this certificate:

openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem -subj '/CN=Conclusion/OU=Integration/O=AMIS/L=Nieuwegein/ST=Utrecht/C=NL' -extensions v3_ca

Lets break this down:
  • req: do a request
  • x509: this defines the format of the key to be generated. In the x509 standard, several pieces of metadata can be stored with the certificate and the certificate authority structure is also part of the x509 standard. Read more here.
  • new: generate a new key
  • nodes: this is actually 'no DES'. My public key does not need to be protected with a password.
  • key: specifies the private key to sign
  • sha256: secure hash algorithm. Hashing is used to provide data integrity functionality. Creating a hash of a transmission allows you to check at a later time if the transmission has been tampered with.
  • days: specifies the validity of the generated certificate
  • subj: provides some metadata for the certificate
  • extensions v3_ca: this adds a metadata field to the certificate indicating that it is a certificate of a certificate authority. If this extension is not added, certain validations might fail
You can use the certificate authority private key and certificate as server identity but you shouldn't. This will give certain validation errors because of the 'extensions v3_ca'.
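You can check whether the v3_ca extension (Basic Constraints: CA:TRUE) actually ended up in the generated certificate by printing its contents:

openssl x509 -in rootCA.pem -text -noout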

Create server identity keys

Next we create a private key which will be used as identity of the WebLogic server

openssl genrsa -des3 -out soaserver1.key 2048

After we have created this private key, we can create a certificate signing request for this private key

openssl req -new -key soaserver1.key -out soaserver1.csr -subj '/CN=soaserver1/OU=Integration/O=AMIS/L=Nieuwegein/ST=Utrecht/C=NL'

This is pretty similar to what we did for the certificate authority. However, mind the subj clause here. The common name should match the server hostname. This will be used later for verification of the identity of the server by the client. In order to allow two-way SSL, I added the server hostname to IP mapping to every server's hosts file. In an enterprise you would use a DNS (domain name system) for this since you do not want to maintain every mapping on every server locally.

Next sign the certificate using the information in the private key and certificate of the certificate authority.

openssl x509 -req -in soaserver1.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out soaserver1.crt -days 1024 -sha256

This is very similar to signing the certificate authority certificate. Mind that a validity period longer than the validity of the certificate authority key is of course useless. -CAcreateserial creates a new file rootCA.srl. This serial number is unique for every signed certificate. You should save it so at a later time you can check if a certificate has been tampered with.

The next time you sign a certificate, you can use:

openssl x509 -req -in soaserver1.csr -CA rootCA.pem -CAkey rootCA.key -CAserial rootCA.srl -out soaserver1.crt -days 1024 -sha256

This will increase the previous serial by 1, making sure it is unique.
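To verify that a signed certificate chains correctly to your certificate authority, you can use:

openssl verify -CAfile rootCA.pem soaserver1.crt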

Creating an identity keystore

Now you have a signed certificate and a private key. Time to make a Java keystore (JKS) which can be used in WebLogic server and SOA Suite and other pieces of Java.

openssl pkcs12 -export -in soaserver1.crt -inkey soaserver1.key -chain -CAfile rootCA.pem -name "soaserver1" -out soaserver1.p12

keytool -importkeystore -deststorepass Welcome01 -destkeystore soaserver1identity.jks -srckeystore soaserver1.p12 -srcstoretype PKCS12

The above steps:
  • creating a private key
  • creating a certificate signing request
  • signing the certificate with the private key of the certificate authority
  • creating an identity keystore
 need to be done for every server.
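To check that the resulting identity keystore contains the private key together with the full certificate chain, you can list its contents (password as used in this example):

keytool -list -v -keystore soaserver1identity.jks -storepass Welcome01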

Creating a trust keystore

Here you can pick the fruits of the above work of using a certificate authority to sign your server private keys. You can use the certificate authority certificate in a truststore and every key signed with the certificate is trusted. You do not need to load every specific server certificate into every truststore the server needs access to. Creating a truststore is easy and you can do this once and use the same trust.jks file in all your servers.

keytool -import -alias rootCA -file rootCA.pem -keystore trust.jks -storepass Welcome01

WebLogic and SOA Suite

It is interesting to notice that the incoming WebLogic configuration differs from the SOA Suite outgoing configuration. This is of course not surprising since a server usually only has a single identity, but an integration product like SOA Suite should be able to interact with multiple protected external resources, maybe requiring different ciphers and keys for each of them. Also, SOA Suite in the past (I'm not sure if that is still the case) could run on IBM WebSphere instead of WebLogic Server. Thus I can understand Oracle chose to provide a more generic implementation of SSL in the SOA Suite than the WebLogic specific one.

WebLogic

The WebLogic server configuration is pretty straightforward. In this example I'm only looking at SSL for incoming and outgoing messages for SOA Suite. The WebLogic specific configuration is only relevant for incoming connections. Basically the steps are as follows:
  • Enable SSL for the managed server
  • Specify keystores for identity and trust
  • Configure incoming SSL specifics
Enable SSL for the managed server

First enable the listen port for SSL. In WebLogic console, environment, servers, specify your server, configuration, general and indicate 'SSL Listen port enabled'. You can also specify the SSL port here.


Specify the keystores for identity and trust

 In WebLogic console, environment, servers, specify your server, configuration, keystores. You can specify the identity and trust keystores you have created during the above steps.


Configure incoming SSL specifics

 In WebLogic console, environment, servers, specify your server, configuration, SSL. You can specify the identity key used for the server and several checks which can be done when establishing the SSL connection.


Some important settings: 
  • BEA Hostname verifier. This indicates the CN field in the certificate is checked against the server hostname.
  • Client certs requested and enforced. If set, Two-Way SSL will be used and the client won't be able to connect unless it presents a certificate.
  • Built-in SSL Validation and Cert Path Validators. This checks the certificate chain.
It is important to understand what these checks do. A host name verifier ensures the host name in the URL to which the client connects matches the host name in the digital certificate that the server sends back as part of the SSL connection. This helps prevent man-in-the-middle attacks where the client might connect to a different URL.

The below situation is something you won't prevent even with these checks. I could connect from soaserver1 to the soaserver2 WebLogic server without problems using the certificate of soaserver2. Also, when using the private key of soaserver1 as identity on soaserver2, soaserver2 would not complain about this. FireFox would though, and most likely other clients as well.


SOA Suite

The SOA Suite configuration is a bit more elaborate in that it requires configuration in different places of which not all can be done from the GUI.

The steps which need to be performed are:
  • Specify the identity store used by SOA
  • Create a keystore password credential in the credential store
  • Configure composite to use two-way SSL
Specify identity store

First you have to specify the identity store which the SOA Suite will use for outbound connections. You can find this setting by going to SOA, soa-infra, SOA Administration, Common Properties, (scroll down), 'More SOA Infra Advanced Configuration Properties...'



Here you have to specify the server's identity keystore. In my case /home/oracle/certs/soaserver1identity.jks.

Create a keystore password credential in the credential store

Next you have to specify the keystore password. If you forget to do this, you will encounter errors like:

On the client:
<May 7, 2017, 12:58:43,939 PM CEST> <Error> <oracle.integration.platform.blocks.soap> <BEA-000000> <Unable to create SSL Socket Factory>

On the server:
[2017-05-07T12:26:02.364+02:00] [soa_server1] [NOTIFICATION] [] [oracle.integration.platform.common.SSLSocketFactoryManagerImpl] [tid: [ACTIVE].ExecuteThread: '25' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: cd874e6b-9d05-4d97-a54d-ff9a3b8358e8-00000098,0] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] Could not obtain keystore location or password

You can set the keystore password by going to your domain, Security, Credentials. Create a credential map named SOA with a key/user named KeyStorePassword and as password the password you have used for your keystore.
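Instead of using the Enterprise Manager GUI, you can also create this credential with WLST. A minimal sketch, assuming the admin server runs on localhost:7001 and the keystore password is Welcome01:

connect('weblogic', 'Welcome01', 't3://localhost:7001')
# creates the SOA / KeyStorePassword entry in the credential store
createCred(map="SOA", key="KeyStorePassword", user="KeyStorePassword", password="Welcome01", desc="SOA identity keystore password")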



Configure composite to use two-way SSL

This step is easy. You have to add a binding property to your reference which indicates you want to use two-way SSL.


In the composite.xml file on your reference you can add:

<property name="oracle.soa.two.way.ssl.enabled">true</property>

This causes the composite binding to use the identity (with the credential store password) for outbound SSL specified in the previously configured MBean.
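A minimal sketch of how this can look on a reference in composite.xml (the reference name, port and endpoint location are hypothetical and depend on your composite):

<reference name="MyExternalService">
  <binding.ws port="..." location="https://soaserver2:7002/soa-infra/services/default/MyComposite/myservice_ep?WSDL">
    <property name="oracle.soa.two.way.ssl.enabled">true</property>
  </binding.ws>
</reference>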

You should of course also not forget to set the endpoint to a port which hosts HTTPS and indicate in the URL that it should use HTTPS to call this endpoint. In my example I've overridden the URL in the EM. Be aware though that overriding the endpoint URL might still cause the original endpoint to be called when the overridden endpoint is not accessible (if for example the SSL connection has issues).

Some useful tips

If you want to debug SSL connections, the following tips might help you.

FireFox

It might appear strange to use a web browser to test SSL connections, but which piece of software uses SSL more than a browser? FireFox is very clear in its error messages about what has gone wrong, which greatly helps with debugging. FireFox uses its own certificate store and can provide certificates to log in to a server. You can configure them from FireFox, Preferences, Advanced, Certificates, View Certificates. Here you can import client certificates such as the p12 files you have generated in an earlier step.

This provides for a very easy way to check whether a server can be accessed with SSL and if the server certificate has been correctly generated / set-up. FireFox also extensively checks certificates to provide the green-lock icons people are quite familiar with.


In this case I have SSL enabled for soaserver1 on port 7002. I open https://soaserver1:7002 in FireFox (do not forget the HTTPS part). Since I have enabled 'Client certs requested and enforced' in the WebLogic SSL configuration, it will ask me for a client key.


In this case you can check whether a client certificate will be trusted. If you open https://soaserver1:7002 and get a 404 message, the WebLogic server is responding to you, meaning the SSL handshake has succeeded.

In FireFox you can tweak the cipher suites which are used. Read about this here. Do mind that SSL connections can be cached and FireFox can remember to send specific keys. If you run FireFox on soaserver1 and open a link on soaserver1, Wireshark (read below) will not detect traffic on the same interface which is used to access soaserver2.

Wireshark

Use Wireshark to monitor connections/handshakes (a capture example follows after this list).
  • You can confirm the SSL/TLS version being used
  • You can see the number of messages which have crossed the wire (this allows you to spot retries if for example a handshake fails)
  • Allows you to decrypt SSL traffic (if you have the private key)
  • It allows you to confirm an SSL connection is actually being set up. If you do not see it in Wireshark, no message has been sent and the connection build-up fails on the client. This for example happens when the SOA, KeyStorePassword entry has not been set in the SOA Suite credential store.
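If you prefer to capture on the server and analyze the traffic in Wireshark later, a capture along the following lines can be used (the interface name and port are assumptions for this example):

tcpdump -i eth0 -w ssl_capture.pcap port 7002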
SSL debug logging

If you want to see what is happening with your SSL connection, it is very helpful to provide some JVM switches in setDomainEnv.

-Dweblogic.security.SSL.verbose -Djavax.net.debug=all -Dssl.debug=true 

You can also enable WebLogic SSL debugging in WebLogic console. Open a server and enable weblogic.security.SSL


Portecle

Portecle is a handy and freely available tool if you want to manage keystores and look at key details. 



Force TLS1.2

If you want to force WebLogic / SOA Suite to use TLS 1.2 you can specify the following JVM parameters in the setDomainEnv.sh file.

-Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2 -Dhttps.protocols=TLSv1.2
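After a restart you can verify that older protocol versions are indeed refused, for example with openssl s_client (the first handshake should now fail, the second succeed):

openssl s_client -connect soaserver1:7002 -tls1_1
openssl s_client -connect soaserver1:7002 -tls1_2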

Which service is called?

Suppose you have process A on server X which calls process B on server Y. For testing you first deploy process B on server X and use the WSDL of process B locally from process A. Next you override the endpoint to refer to the SSL port of server Y. What happens if the SSL connection cannot be established? By default, there are 3 retries after which the process falls back to using the endpoint as specified in the WSDL file. When testing it might seem the call from process A on X to B on Y works but it is actually a local call because the local call is the fallback for the remote call. In this case you should confirm an instance of B is created on Y.

Finally

Performance impact

Using SSL of course has a performance impact. 1-way SSL is faster than 2-way SSL. Using encryption is slower than not using encryption. Key length and cipher suites also play a major role in how fast your SSL connection will be. I have not measured the precise cost of the different options, but you should consider what you need and what you are willing to pay for it in terms of performance impact.
  • One way SSL allows the client to verify the server identity (certificate, hostname). The server provides the client with a public key but not the other way around.
  • Two way SSL also allows the server to verify the client. The client also needs to provide a public key.
SSL verifies host identities, keys, certificate chains. It does not allow you to provide (specific user) application authentication or authorization. You could do it with SSL but it would require giving every user a specific certificate. There are better ways to do that such as WS-Security, SAML or OAuth.

Entropy

If you use a server which has a lot of SSL connections, the random number generator is asked often for a new random number. Random numbers are generated by using entropy (a measure of randomness/disorder), which is a limited resource, especially in virtualized environments.

There is a setting which allows WebLogic server to recycle random numbers at the cost of security (the random number generator becomes predictable). Read more about that here.

-Djava.security.egd=file:/dev/./urandom

Oracle does not recommend using this recycling mechanism in production environments since if you can predict the random number generator, you have introduced a security vulnerability which can be exploited. Next to speeding up SSL connections, your server startup will most likely also be improved.

CRLs

I've not talked about a lot of things such as certificate revocation lists (CRLs). These lists contain certificates which should no longer be trusted, for example because the corresponding private key has been compromised (has become public). If the private key of a certificate authority is compromised, someone can use it to create new certificates which are trusted by everyone who trusts the CA. A person who can do such a thing is able to gain access to systems. Remember private keys can also be used to decrypt traffic? This is of course an issue on the internet but also when you have your own certificate authority. More generally speaking, if a private key is compromised, all trust in it should be revoked since you can no longer count on, for example, a server being the sole owner of the key and the only one who can decrypt traffic.

Other things

I have not talked about securing the connection between managed servers in a cluster and between the NodeManager and managed servers. You can read more about that here. Do mind though that using trust can be more efficient than specifically putting every public key in every truststore. Especially when you have many servers.

JDBC and SSL

Read more about this in the whitepaper here. It requires Oracle Advanced Security (OAS), which is an Oracle Database Enterprise Edition option. The US Government does not allow double encryption (you can imagine why..). If you configure Oracle Advanced Security to use SSL encryption and another encryption method concurrently, then the connection fails. See SSL Usage issues here.

Oracle SOA Suite: Want performance? Don't log so much and clean up your database!

The Oracle SOA Suite infrastructure, especially composites, uses the database intensively. Not only are the process definitions stored in the database, a lot of audit information is also written there. The SOA infrastructure database, if not well managed, will grow and will eventually have detrimental effects on performance. In this blog post I will give some quick suggestions that will help you increase performance of your SOA Suite infrastructure on the database side by executing some simple scripts. These are suggestions I have seen work at different customers. Not only do they help with managing the SOA Suite data in the database, they will also lead to better SOA Suite performance.



Do not log too much!

Less data is faster. If you can limit database growth, management becomes easier.
  • Make sure the auditlevel of your processes is set to production level in production environments.
  • Think about the BPEL setting inMemoryOptimization (see the snippet after this list). This can only be set for processes that do not contain any dehydration points such as receive, wait, onMessage and onAlarm activities. If set to true, the completionPersistPolicy can be used to tweak what to do after completion of the process, for example only save information about faulted instances in the dehydration store. In 12c this setting is part of the 'Oracle Integration Continuous Availability' feature and uses Coherence.
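A sketch of how these BPEL component properties can be set in composite.xml (the component name is hypothetical; verify the exact property names and values for your SOA Suite version):

<component name="MyBPELProcess">
  <implementation.bpel src="MyBPELProcess.bpel"/>
  <property name="bpel.config.inMemoryOptimization">true</property>
  <property name="bpel.config.completionPersistPolicy">faulted</property>
</component>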
    Start with a clean slate regularly

    Especially for development environments it is healthy to regularly truncate all the major SOAINFRA tables. The script to do this is supplied by Oracle: MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/truncate/truncate_soa_oracle.sql

    The effect of executing this script is that all instance data is gone. This includes all tasks, long running BPM processes, long running BPEL processes and recoverable errors. In short: everything except the definitions. The performance gain from executing the script can be significant. You should consider, for example, running the script at the end of every sprint to start with a clean slate.
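    Running it can be as simple as connecting as the SOAINFRA user with SQL*Plus and executing the script (the schema name, password and connect string are just examples):

    sqlplus DEV_SOAINFRA/Welcome01@localhost:1521/orcl @truncate_soa_oracle.sql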



    Delete instances

    Oracle has provided scripts to remove old instances. These are scheduled by default in a clean installation of 12c. If you upgrade from 11g to 12c, this scheduling is not enabled by default. The auto-purge feature of 12c is described here.

    What this feature does is execute the standard supplied purge scripts: MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/soa_purge/soa_purge_scripts.sql

    In a normal SOA Suite 12c installation you can also find the scripts in MW_HOME/SOA_ORACLE_HOME/common/sql/soainfra/sql/oracle

    In 12c installations, the patched purge scripts for older versions are also supplied. I would use the newest version of the scripts since the patches sometimes fix logic which can cause data inconsistencies which can have consequences later, for example during migrations.


    What the scripts do is nicely described here. These scripts only remove instances you should not miss. Running instances and instances which can be recovered, are not deleted. In the script you can specify for how long data should be retained.

    You should schedule this and run it daily. The shorter the period you keep information, the more you can reduce your SOAINFRA space usage and the better the performance of the database will be.

    An example of how to execute the script:

    DECLARE
      MAX_CREATION_DATE TIMESTAMP;
      MIN_CREATION_DATE TIMESTAMP;
      BATCH_SIZE        INTEGER;
      MAX_RUNTIME       INTEGER;
      RETENTION_PERIOD  TIMESTAMP;
    BEGIN
      MIN_CREATION_DATE := TO_TIMESTAMP(TO_CHAR(sysdate-2000, 'YYYY-MM-DD'),'YYYY-MM-DD');
      MAX_CREATION_DATE := TO_TIMESTAMP(TO_CHAR(sysdate-30, 'YYYY-MM-DD'),'YYYY-MM-DD');
      RETENTION_PERIOD  := TO_TIMESTAMP(TO_CHAR(sysdate-29, 'YYYY-MM-DD'),'YYYY-MM-DD');
      MAX_RUNTIME       := 180;

      BATCH_SIZE        := 250000;

      SOA.DELETE_INSTANCES(
        MIN_CREATION_DATE    => MIN_CREATION_DATE,
        MAX_CREATION_DATE    => MAX_CREATION_DATE,
        BATCH_SIZE           => BATCH_SIZE,
        MAX_RUNTIME          => MAX_RUNTIME,
        RETENTION_PERIOD     => RETENTION_PERIOD,
        PURGE_PARTITIONED_COMPONENT => FALSE);

    END;
    /



    The script also has a variant which can be executed in parallel (which is faster) but that requires extra grants for the SOAINFRA database user.


    Shrink space

    Tables

    Deleting instances will not free up space on the filesystem of the server. Nor does it make sure that the data is not fragmented over many tablespace segments. Oracle does not provide standard scripts for this but does tell you this is a good idea and explains why here (9.5.2). In addition you can rebuild indexes. You should also of course run a daily gather statistics on the schema.
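    A daily gather statistics run could look like the following (the schema name is an example; adjust it to your prefix):

    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'DEV_SOAINFRA', options => 'GATHER AUTO', cascade => TRUE);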

    For 11g you can use the below script to shrink space for tables and rebuild indexes. You should execute it as XX_SOAINFRA where XX is your schema prefix.

    alter table mediator_case_instance enable row movement;
    alter table mediator_case_instance shrink space;
    alter table mediator_case_instance disable row movement;
    alter table mediator_audit_document enable row movement;
    alter table mediator_audit_document shrink space;
    alter table mediator_audit_document disable row movement;
    alter table mediator_callback enable row movement;
    alter table mediator_callback shrink space;
    alter table mediator_callback disable row movement;
    alter table mediator_group_status enable row movement;
    alter table mediator_group_status shrink space;
    alter table mediator_group_status disable row movement;
    alter table mediator_payload enable row movement;
    alter table mediator_payload shrink space;
    alter table mediator_payload disable row movement;
    alter table mediator_deferred_message enable row movement;
    alter table mediator_deferred_message shrink space;
    alter table mediator_deferred_message disable row movement;
    alter table mediator_resequencer_message enable row movement;
    alter table mediator_resequencer_message shrink space;
    alter table mediator_resequencer_message disable row movement;
    alter table mediator_case_detail enable row movement;
    alter table mediator_case_detail shrink space;
    alter table mediator_case_detail disable row movement;
    alter table mediator_correlation enable row movement;
    alter table mediator_correlation shrink space;
    alter table mediator_correlation disable row movement;
    alter table headers_properties enable row movement;
    alter table headers_properties shrink space;
    alter table headers_properties disable row movement;
    alter table ag_instance enable row movement;
    alter table ag_instance shrink space;
    alter table ag_instance disable row movement;
    alter table audit_counter enable row movement;
    alter table audit_counter shrink space;
    alter table audit_counter disable row movement;
    alter table audit_trail enable row movement;
    alter table audit_trail shrink space;
    alter table audit_trail disable row movement;
    alter table audit_details enable row movement;
    alter table audit_details shrink space;
    alter table audit_details disable row movement;
    alter table ci_indexes enable row movement;
    alter table ci_indexes shrink space;
    alter table ci_indexes disable row movement;
    alter table work_item enable row movement;
    alter table work_item shrink space;
    alter table work_item disable row movement;
    alter table wi_fault enable row movement;
    alter table wi_fault shrink space;
    alter table wi_fault disable row movement;
    alter table xml_document_ref enable row movement;
    alter table xml_document_ref shrink space;
    alter table xml_document_ref disable row movement;
    alter table document_dlv_msg_ref enable row movement;
    alter table document_dlv_msg_ref shrink space;
    alter table document_dlv_msg_ref disable row movement;
    alter table document_ci_ref enable row movement;
    alter table document_ci_ref shrink space;
    alter table document_ci_ref disable row movement;
    alter table dlv_subscription enable row movement;
    alter table dlv_subscription shrink space;
    alter table dlv_subscription disable row movement;
    alter table dlv_message enable row movement;
    alter table dlv_message shrink space;
    alter table dlv_message disable row movement;
    alter table rejected_msg_native_payload enable row movement;
    alter table rejected_msg_native_payload shrink space;
    alter table rejected_msg_native_payload disable row movement;
    alter table instance_payload enable row movement;
    alter table instance_payload shrink space;
    alter table instance_payload disable row movement;
    alter table test_details enable row movement;
    alter table test_details shrink space;
    alter table test_details disable row movement;
    alter table cube_scope enable row movement;
    alter table cube_scope shrink space;
    alter table cube_scope disable row movement;
    alter table cube_instance enable row movement;
    alter table cube_instance shrink space;
    alter table cube_instance disable row movement;
    alter table bpm_audit_query enable row movement;
    alter table bpm_audit_query shrink space;
    alter table bpm_audit_query disable row movement;
    alter table bpm_measurement_actions enable row movement;
    alter table bpm_measurement_actions shrink space;
    alter table bpm_measurement_actions disable row movement;
    alter table bpm_measurement_action_exceps enable row movement;
    alter table bpm_measurement_action_exceps shrink space;
    alter table bpm_measurement_action_exceps disable row movement;
    alter table bpm_cube_auditinstance enable row movement;
    alter table bpm_cube_auditinstance shrink space;
    alter table bpm_cube_auditinstance disable row movement;
    alter table bpm_cube_taskperformance enable row movement;
    alter table bpm_cube_taskperformance shrink space;
    alter table bpm_cube_taskperformance disable row movement;
    alter table bpm_cube_processperformance enable row movement;
    alter table bpm_cube_processperformance shrink space;
    alter table bpm_cube_processperformance disable row movement;
    alter table wftask_tl enable row movement;
    alter table wftask_tl shrink space;
    alter table wftask_tl disable row movement;
    alter table wftaskhistory enable row movement;
    alter table wftaskhistory shrink space;
    alter table wftaskhistory disable row movement;
    alter table wftaskhistory_tl enable row movement;
    alter table wftaskhistory_tl shrink space;
    alter table wftaskhistory_tl disable row movement;
    alter table wfcomments enable row movement;
    alter table wfcomments shrink space;
    alter table wfcomments disable row movement;
    alter table wfmessageattribute enable row movement;
    alter table wfmessageattribute shrink space;
    alter table wfmessageattribute disable row movement;
    alter table wfattachment enable row movement;
    alter table wfattachment shrink space;
    alter table wfattachment disable row movement;
    alter table wfassignee enable row movement;
    alter table wfassignee shrink space;
    alter table wfassignee disable row movement;
    alter table wfreviewer enable row movement;
    alter table wfreviewer shrink space;
    alter table wfreviewer disable row movement;
    alter table wfcollectiontarget enable row movement;
    alter table wfcollectiontarget shrink space;
    alter table wfcollectiontarget disable row movement;
    alter table wfroutingslip enable row movement;
    alter table wfroutingslip shrink space;
    alter table wfroutingslip disable row movement;
    alter table wfnotification enable row movement;
    alter table wfnotification shrink space;
    alter table wfnotification disable row movement;
    alter table wftasktimer enable row movement;
    alter table wftasktimer shrink space;
    alter table wftasktimer disable row movement;
    alter table wftaskerror enable row movement;
    alter table wftaskerror shrink space;
    alter table wftaskerror disable row movement;
    alter table wfheaderprops enable row movement;
    alter table wfheaderprops shrink space;
    alter table wfheaderprops disable row movement;
    alter table wfevidence enable row movement;
    alter table wfevidence shrink space;
    alter table wfevidence disable row movement;
    alter table wftaskaggregation enable row movement;
    alter table wftaskaggregation shrink space;
    alter table wftaskaggregation disable row movement;
    alter table wftask enable row movement;
    alter table wftask shrink space;
    alter table wftask disable row movement;
    alter table composite_sensor_value enable row movement;
    alter table composite_sensor_value shrink space;
    alter table composite_sensor_value disable row movement;
    alter table composite_instance_assoc enable row movement;
    alter table composite_instance_assoc shrink space;
    alter table composite_instance_assoc disable row movement;
    alter table attachment enable row movement;
    alter table attachment shrink space;
    alter table attachment disable row movement;
    alter table attachment_ref enable row movement;
    alter table attachment_ref shrink space;
    alter table attachment_ref disable row movement;
    alter table component_instance enable row movement;
    alter table component_instance shrink space;
    alter table component_instance disable row movement;
    alter table audit_details modify lob (bin) (shrink space);
    alter table composite_instance_fault modify lob (error_message) (shrink space);
    alter table composite_instance_fault modify lob (stack_trace) (shrink space);
    alter table cube_scope modify lob (scope_bin) (shrink space);
    alter table reference_instance modify lob (error_message) (shrink space);
    alter table reference_instance modify lob (stack_trace) (shrink space);
    alter table test_definitions modify lob (definition) (shrink space);
    alter table wi_fault modify lob (message) (shrink space);
    alter table xml_document modify lob (document) (shrink space);

    alter index ad_pk rebuild online;
    alter index at_pk rebuild online;
    alter index ci_creation_date rebuild online;
    alter index ci_custom3 rebuild online;
    alter index ci_ecid rebuild online;
    alter index ci_name_rev_state rebuild online;
    alter index ci_pk rebuild online;
    alter index composite_instance_cidn rebuild online;
    alter index composite_instance_co_id rebuild online;
    alter index composite_instance_created rebuild online;
    alter index composite_instance_ecid rebuild online;
    alter index composite_instance_id rebuild online;
    alter index composite_instance_state rebuild online;
    alter index cs_pk rebuild online;
    alter index dm_conversation rebuild online;
    alter index doc_dlv_msg_guid_index rebuild online;
    alter index doc_store_pk rebuild online;
    alter index ds_conversation rebuild online;
    alter index ds_conv_state rebuild online;
    alter index ds_fk rebuild online;
    alter index instance_payload_key rebuild online;
    alter index reference_instance_cdn_state rebuild online;
    alter index reference_instance_co_id rebuild online;
    alter index reference_instance_ecid rebuild online;
    alter index reference_instance_id rebuild online;
    alter index reference_instance_state rebuild online;
    alter index reference_instance_time_cdn rebuild online;
    alter index wf_crdate_cikey rebuild online;
    alter index wf_crdate_type rebuild online;
    alter index wf_fk2 rebuild online;
    alter index wi_expired rebuild online;


    http://docs.oracle.com/cd/E36909_01/admin.1111/e10226/soa-database-management.htm

    LOBs

    LOB columns are saved outside of the tables and can be shrunk separately. In the below script you should replace XX_SOAINFRA with your SOAINFRA schema. The script explicitly drops BRDECISIONINSTANCE_INDX5 since the table can become quite large in development environments and you cannot shrink it with the index still on it. The below script also might overlap with the script above for tables with LOB columns. It only shrinks for large tables where the LOB columns take more than 100Mb of space.

    DECLARE
      CURSOR c_tabs
      IS
        SELECT
          a.owner owner,
          a.table_name table_name,
          a.column_name column_name
        FROM
          all_tab_cols a
        WHERE
          a.owner LIKE 'XX_SOAINFRA'
        AND a.data_type LIKE '%LOB'
        AND EXISTS
          (
            SELECT
              1
            FROM
              all_tables b
            WHERE
              a.owner       =b.owner
            AND a.table_name=b.table_name
          )
      AND EXISTS
        (
          SELECT
            1
          FROM
            dba_lobs l ,
            dba_segments s
          WHERE
            s.segment_name      = l.segment_name
          AND s.owner           = l.owner
          AND s.bytes/1024/1024 > 100
          AND s.owner           =a.owner
          AND l.table_name      =a.table_name
          AND l.column_name     =a.column_name
        )
      ORDER BY
        owner,
        table_name,
        column_name;
      r_prevtab c_tabs%rowtype;
      l_countlobrecs NUMBER;
    FUNCTION hasFunctionBasedIndex(
        p_owner      VARCHAR2,
        p_table_name VARCHAR2)
      RETURN VARCHAR2
    IS
      l_indexcount NUMBER;
    BEGIN
      SELECT
        COUNT(*)
      INTO
        l_indexcount
      FROM
        all_indexes c
      WHERE
        c.table_owner =p_owner
      AND c.table_name=p_table_name
      AND c.index_type LIKE 'FUN%';
      IF l_indexcount>0 THEN
        RETURN 'Y';
      ELSE
        RETURN 'N';
      END IF;
    END;
    BEGIN
      begin
      execute immediate 'DROP INDEX XX_SOAINFRA.BRDECISIONINSTANCE_INDX5';
      dbms_output.put_line('Index dropped: XX_SOAINFRA.BRDECISIONINSTANCE_INDX5');
      exception
      when others then
      null;
      end; 
      r_prevtab.owner := NULL;
      FOR r_tabs IN c_tabs
      LOOP
        IF (r_prevtab.owner IS NOT NULL AND
          (
            r_prevtab.owner != r_tabs.owner OR r_prevtab.table_name !=
            r_tabs.table_name
          )
          ) OR r_prevtab.owner IS NULL THEN
          dbms_output.put_line('Processing table: "'||r_tabs.owner||'"."'||
          r_tabs.table_name||'"');
          IF hasFunctionBasedIndex(r_tabs.owner,r_tabs.table_name) = 'N' THEN
            EXECUTE immediate 'alter table "'||r_tabs.owner||'"."'||
            r_tabs.table_name||'" deallocate unused';
            EXECUTE immediate 'alter table "'||r_tabs.owner||'"."'||
            r_tabs.table_name||'" enable row movement';
            EXECUTE immediate 'alter table "'||r_tabs.owner||'"."'||
            r_tabs.table_name||'" shrink space compact';
            BEGIN
              --below causes lock and sets high water mark
              EXECUTE immediate 'alter table "'||r_tabs.owner||'"."'||
              r_tabs.table_name||'" shrink space';
            EXCEPTION
              --when a lock is present: skip
            WHEN OTHERS THEN
              dbms_output.put_line('Skipping shrink space due to: '||SQLERRM);
            END;
            EXECUTE immediate 'alter table "'||r_tabs.owner||'"."'||
            r_tabs.table_name||'" disable row movement';
          ELSE
            dbms_output.put_line(
            'Table has a function based index and cannot be shrunk: "'|| r_tabs.owner
            ||'"."'||r_tabs.table_name||'"');
          END IF;
          r_prevtab := r_tabs;
        END IF;
        dbms_output.put_line('Processing column: "'||r_tabs.owner||'"."'||
        r_tabs.table_name||'"."'||r_tabs.column_name||'"');
        EXECUTE immediate 'alter table "'||r_tabs.owner||'"."'||r_tabs.table_name
        || '" modify lob("'||r_tabs.column_name||'") (deallocate unused)';
        --below causes lock
        BEGIN
          EXECUTE immediate 'alter table "'||r_tabs.owner||'"."'||
          r_tabs.table_name ||'" modify lob("'||r_tabs.column_name||
          '") (freepools 1)';
        EXCEPTION
        WHEN OTHERS THEN
          dbms_output.put_line('Skipping freepools: '||SQLERRM);
        END;
        BEGIN
          EXECUTE immediate 'alter table "'||r_tabs.owner||'"."'||
          r_tabs.table_name ||'" modify lob("'||r_tabs.column_name||
          '") (shrink space)';
        EXCEPTION
        WHEN OTHERS THEN
          dbms_output.put_line('Skipping shrink space: '||SQLERRM);
        END;
      END LOOP;
        begin
      execute immediate 'CREATE INDEX XX_SOAINFRA.BRDECISIONINSTANCE_INDX5 ON XX_SOAINFRA.BRDECISIONINSTANCE (ECID, "CREATION_TIME" DESC) LOGGING TABLESPACE XX_SOAINFRA NOPARALLEL';
      dbms_output.put_line('Index created: XX_SOAINFRA.BRDECISIONINSTANCE_INDX5');
      exception
      when others then
      null;
      end;
    END;


    Other database suggestions

    Redo log size

    Not directly related to cleaning, but related to SOAINFRA space management. The Oracle database uses so-called redo log files to store all changes to the database. In case of a database instance failure, the database can use these redo log files to recover. Usually there are two or more redo log files. These files are rotated: if one is full, the database moves to the next. When the last one is filled, it goes back to the first one, overwriting old data. Read more about redo logs here. Rotating a redo log file takes some time. When the redo log files are small, they are rotated a lot. The following link provides some suggestions for analyzing whether increasing the size will help you (see also the queries below). I've seen default values of 3 redo log files of 100Mb. Oracle recommends having 3 groups of 2Gb each here.

    https://docs.oracle.com/cd/B19306_01/server.102/b14231/onlineredo.htm
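    To get an impression of your current redo log sizes and how often log switches occur, queries like the following can help (they require access to the v$ views):

    SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;

    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;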

    Clean up long running and faulted instances!

    The regular cleaning scripts which you might run on production do not clean instances which have an ECID which is the same as that of an instance which cannot be cleaned, because it is for example still running or recoverable. If you have many processes running, you might be able to gain a lot by for example restarting the running processes with a new ECID. You do have to build that functionality yourself though. Also you should think about keeping track of time for tasks. If a certain task is supposed to only be open for a month, let it expire after a month. If you do not check this, you might encounter large numbers of tasks which remain open. This means the instance which has created the task will remain open, and that in turn means you cannot undeploy the version of the process which has this task running. Life-cycle management is a thing!



    Finally

    SOAINFRA is part of the infrastructure

    Oracle SOA Suite logs a lot of audit information in the SOAINFRA database. You might be tempted to join that information to other business data directly on database level. This is not a smart thing to do.

    If the information in the SOAINFRA database is used to for example query BPM processes or tasks, especially when this information is being joined over a database link to another database with additional business data, you have introduced a timebomb. The performance will be directly linked to the amount of data in the SOAINFRA database and especially with long running processes and tasks. You have now not only introduced a potential performance bottleneck for all your SOA composites but also for other parts of your application.


    It is not a system of record

    Secondly, the business might demand you keep the information for a certain period. Eventually they might even want to keep the data forever and use it for audits of historic records. This greatly interferes with purging strategies, which are required if you want to keep your environment performant. If the business considers certain information important to keep, create a table and store the relevant information there.

    SSL/TLS: How to choose your cipher suite

    For SSL/TLS connections, cipher suites determine for a major part how secure the connection will be. A cipher suite is a named combination of authentication, encryption, message authentication code (MAC) and key exchange algorithms used to negotiate the security settings (here). But what does this mean and how do you choose a secure cipher suite? The area of TLS is quite extensive and I cannot cover it in its entirety in a single blog post but I will provide some general recommendations based on several articles researched online. At the end of the post I'll provide some suggestions for strong ciphers for JDK8.

    Introduction

    First I'll introduce what a cipher suite is and how it is agreed upon by client / server. Next I'll explain several of the considerations which can be relevant while making a choice of cipher suites to use.

    What does the name of a cipher suite mean?

    The names of the cipher suites can be a bit confusing. You see for example a cipher suite called: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 in the SunJSSE list of supported cipher suites. You can break this name into several parts:
    • TLS: transport layer security (duh..)
    • ECDHE: The key exchange algorithm is ECDHE (Elliptic curve Diffie–Hellman, ephemeral).
    • ECDSA: The authentication algorithm is ECDSA (Elliptic Curve Digital Signature Algorithm). The certificate authority uses an elliptic curve (ECDSA) key to sign the public key. This is what for example Bitcoin uses.
    • WITH_AES_256_CBC: This is used to encrypt the message stream. (AES=Advanced Encryption Standard, CBC=Cipher Block Chaining). The number 256 indicates the key size.
    • SHA_384: This is the so-called message authentication code (MAC) algorithm. SHA = Secure Hash Algorithm. It is used to create a message digest or hash of a block of the message stream. This can be used to validate if message contents have been altered. The number indicates the size of the hash. Larger is more secure.
    If the key exchange algorithm or the authentication algorithm is not explicitly specified, RSA is assumed. See for example here for a useful explanation of cipher suite naming.

    What are your options

    First it is a good idea to look at what your options are. This is dependent on the (client and server) technology used. If for example you are using Java 8, you can look here (SunJSSE) for supported cipher suites. If you want to enable the strongest ciphers available to JDK 8 you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files (here). You can find a large list of cipher suites and which version of the JDK supports them (up to Java 8 in case of the Java 8 documentation). Node.js uses OpenSSL for cipher suite support. This library supports a large array of cipher suites. See here.

    How determining a cipher suite works

    The cipher suites a server supports are listed in preference order. How does that work? During the handshake phase of establishing a TLS/SSL connection, the client sends the cipher suites it supports to the server. The server chooses the cipher to use based on its preference order and what the client supports.


    This works quite efficiently, but a problem can arise when
    • There is no overlap in ciphers the client and server can speak
    • The only overlap between client and server supported cipher is a cipher which provides poor or no encryption
    This is illustrated in the image below. The language represents the cipher suite. The order/preference specifies the encryption strength. In the first illustration, client and server can both speak English so the server chooses English. In the second image, the only overlapping language is French. French might not be ideal to speak but the server has no other choice in this case but to accept speaking French or to refuse talking to the client. 


    Thus it is good practice to only enable, on the server, specific ciphers which conform to your security requirements, while of course taking client compatibility into account.

    How to choose a cipher suite

    Basics

    Check which cipher suites are supported

    There are various mechanisms to check which ciphers are supported. For cloud services or websites you can use SSLLabs. For internal server checking, you can use various scripts available online such as this one or this one.
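    One option, if nmap is available, is its ssl-enum-ciphers script, which lists the suites a server offers per protocol version (the hostname is just an example):

    nmap --script ssl-enum-ciphers -p 443 www.example.com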

    TLS 1.2

    Of course you only want TLS 1.2 cipher suites since older TLS and SSL versions contain security liabilities. Within TLS 1.2 there is a lot to choose from. OWASP provides a good overview of which ciphers to choose here ('Rule - Only Support Strong Cryptographic Ciphers'). Wikipedia provides a nice overview of (among other things) TLS 1.2 benefits such as GCM (Galois/Counter Mode) support which provides integrity checking.

    Disable weak ciphers

    As indicated before, if weak ciphers are enabled, they might be used, making you vulnerable. You should disable weak ciphers like those with DSS, DSA, DES/3DES, RC4, MD5, SHA1, null, anon in the name. See for example here and here. For example, do not use DSA/DSS: they get very weak if a bad entropy source is used during signing (here). For the other weak ciphers, similar liabilities can be looked up.

    How to determine the key exchange algorithm

    Types

    There are several types of keys you can use. For example:
    • ECDHE: Use elliptic curve diffie-hellman (DH) key exchange (ephemeral). One key is used for every exchange. This key is generated for every request and does not provide authentication like ECDH which uses static keys.
    • RSA: Use RSA key exchange. Generating DH symmetric keys is faster than generating RSA symmetric keys. DH also currently seems more popular. DH and RSA keys solve different challenges. See here
    • ECDH: Use elliptic curve diffie-hellman key exchange. One key is for the entire SSL session. The static key can be used for authentication.
    • DHE: Use normal diffie-hellman key. One key is used for every exchange. Same as ECDHE but a different algorithm is used for the calculation of shared secrets.
    There are other key algorithms but the above ones are most popular. A single server can host multiple certificates such as ECDSA and RSA certificates. Wikipedia is an example. This is not supported by all web servers. See here.

    Forward secrecy

    Forward secrecy means that if a private key is compromised, past messages which were sent cannot also be decrypted. Read here. Thus it is beneficial for your security to have perfect forward secrecy (PFS).

    The difference between ECDHE/DHE and ECDH is that for ECDH one key for the duration of the SSL session is used (which can be used for authentication) while with ECDHE/DHE a distinct key for every exchange is used. Since this key is not a certificate/public key, no authentication can be performed. An attacker can use their own key (here). Thus when using ECDHE/DHE, you should also implement client key validation on your server (2-way SSL) to provide authentication.

    ECDHE and DHE give forward secrecy while ECDH does not. See here. ECDHE is significantly faster than DHE (here). There are rumors that the NSA can break DHE keys and ECDHE keys are preferred (here). On other sites it is indicated DHE is more secure (here). The calculation used for the keys is also different. DHE is prime field Diffie Hellman. ECDHE is Elliptic Curve Diffie Hellman. ECDHE can be configured. ECDHE-ciphers must not support weak curves, e.g. less than 256 bits (see here).

    Certificate authority

    The certificate authority you use to get a certificate from to sign the key can have limitations. For example, RSA certificates are very common while ECDSA is gaining popularity. If you use an internal certificate authority, you might want to check it is able to generate ECDSA certificates and use them for signing. For compatibility, RSA is to be preferred.

    How to determine the message encryption mechanism

    As a rule of thumb: AES_256 or above is quite common and considered secure. 3DES, EDE and RC4 should be avoided.

    The difference between CBC and GCM

    GCM provides both encryption and integrity checking (using a nonce for hashing) while CBC only provides encryption (here). You can not use the same nonce for the same key to encrypt twice when using GCM. This protects against replay attacks. GCM is supported from TLS 1.2.

    How to choose your hashing algorithm

    MD5 (here) and SHA-1 (here) are old and should not be used anymore. As a rule of thumb, SHA256 or above can be considered secure.

    In summary

    Considerations

    Choosing a cipher suite can be a challenge. Several considerations play a role in making the correct choice here. Just to name a few;
    • Capabilities of server, client and certificate authority (required compatibility); you would choose a different cipher suite for an externally exposed website (which needs to be compatible with all major clients) than for internal security.
    • Encryption/decryption performance
    • Cryptographic strength; type and length of keys and hashes 
    • Required encryption features; such as prevention of replay attacks, forward secrecy
    • Complexity of implementation; can developers and testers easily develop servers and clients supporting the cipher suite?
    Sometimes even legislation plays a role since some of the stronger encryption algorithms are not allowed to be used in certain countries (we will not guess for the reason but you can imagine).

    Recommendation

    Based on the above I can recommend some strong cipher suites to be used for JDK8 in preference order:
    • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    • TLS_RSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
    • TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
    My personal preference would be to use TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 as it provides
    • Integrity checking: GCM
    • Perfect forward secrecy: ECDHE
    • Uses strong encryption: AES_256
    • Uses a strong hashing algorithm: SHA384
    • It uses a key signed with an RSA certificate authority which is supported by most internal certificate authorities. 
    Since ECDHE does not provide authentication, you should tell the server to verify client certificates (implement 2-way SSL).
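    If your server is WebLogic, the enabled cipher suites can be restricted in the <ssl> element of the server in config.xml. A hedged sketch (verify the exact element placement and the suite names supported by your WebLogic/JDK version):

    <ssl>
      <enabled>true</enabled>
      <ciphersuite>TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384</ciphersuite>
      <ciphersuite>TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384</ciphersuite>
    </ssl>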

    Oracle Mobile Cloud Service (MCS) and Integration Cloud Service (ICS): How secure is your TLS connection?

    In a previous blog I have explained what cipher suites are, the role they play in establishing SSL connections and have provided some suggestions on how you can determine which cipher suite is a strong cipher suite. In this blog post I'll apply this knowledge to look at incoming connections to Oracle Mobile Cloud Service and Integration Cloud Service. Outgoing connections are a different story altogether. These two cloud services do not allow you to control cipher suites to the extent that for example Oracle Java Cloud Service does, and you are thus forced to use the cipher suites Oracle has chosen for you.

    Why should you be interested in TLS? Well, 'normal' application authentication uses tokens (like SAML, JWT, OAuth). Once an attacker obtains such a token (and no additional client authentication is in place), it is more or less fair game for the attacker. An important mechanism which prevents the attacker from obtaining the token is TLS (Transport Layer Security). The strength of the provided security depends on the choice of cipher suite. The cipher suite is chosen by negotiation between client and server. The client provides options and the server chooses the one which has its preference.

    Disclaimer: my knowledge is not at the level that I can personally exploit the liabilities in different cipher suites. I've used several posts I found online as references. I have used the OWASP TLS Cheat Sheet extensively which provides many references for further investigation should you wish.



    Method

    Cipher suites

    The supported cipher suites for the Oracle Cloud Services appear to be (at first glance) host specific and not URL specific. The APIs and exposed services use the same cipher suites. Also the specific configuration of the service is irrelevant: we are testing the connection, not the message. Using tools described here (for public URLs https://www.ssllabs.com/ssltest/ is easiest) you can check if the SSL connection is secure. You can also check yourself with a command like: nmap --script ssl-enum-ciphers -p 443 hostname. Also there are various scripts available. See here for some suggestions.

    I've looked at two Oracle Cloud services which are available to me at the moment: Oracle Mobile Cloud Service and Oracle Integration Cloud Service.
    Results

    It was interesting to see the supported cipher suites for Mobile Cloud Service and Integration Cloud Service are the same and also the supported cipher suites for the services and APIs are the same. This could indicate Oracle has public cloud wide standards for this and they are doing a good job at implementing it!

    Supported cipher suites

    TLS 1.2
    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)   ECDH secp256r1 (eq. 3072 bits RSA)   FS
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027)   ECDH secp256r1 (eq. 3072 bits RSA)   FS
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)   ECDH secp256r1 (eq. 3072 bits RSA)   FS
    TLS_RSA_WITH_AES_256_CBC_SHA256 (0x3d)
    TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
    TLS_RSA_WITH_AES_128_CBC_SHA256 (0x3c)
    TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)

    TLS 1.1
    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)   ECDH secp256r1 (eq. 3072 bits RSA)   FS
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)   ECDH secp256r1 (eq. 3072 bits RSA)   FS
    TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
    TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)

    TLS 1.0
    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)   ECDH secp256r1 (eq. 3072 bits RSA)   FS
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)   ECDH secp256r1 (eq. 3072 bits RSA)   FS
    TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
    TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)
    TLS_RSA_WITH_3DES_EDE_CBC_SHA (0xa)   WEAK

    Liabilities in the allowed cipher suites

    You should not read this as an attack against the choices made in the Oracle Public Cloud for SSL connections. Generally the cipher suites Oracle chose to support are pretty secure and there is no need to worry unless you want to protect yourself against groups like the larger security agencies. When choosing your cipher suite in your own implementations outside the mentioned Oracle cloud products, I would go for stronger cipher suites than the ones provided here. Read here.

    TLS 1.0 support

    TLS 1.0 is supported by the Oracle Cloud services. This standard is outdated and should be disabled. Read the following for some arguments on why you should do this. It is possible Oracle chose to support TLS 1.0 since some older browsers (really old ones like IE6) do not support TLS 1.1 and 1.2. This is a consideration of compatibility versus security.

    TLS_RSA_WITH_3DES_EDE_CBC_SHA might be a weak cipher

    There are questions whether TLS_RSA_WITH_3DES_EDE_CBC_SHA could be considered insecure (read here, here and here why). Also SSLLabs says it is weak. You can mitigate some of the vulnerabilities by not using CBC mode, but that is not an option in the Oracle cloud as GCM is not supported (see more below). If a client indicates it only supports TLS_RSA_WITH_3DES_EDE_CBC_SHA, this cipher suite is used for the SSL connection, making you vulnerable to collision attacks like Sweet32. Also it uses a SHA1 hash which can be considered insecure (read more below).

    Weak hashing algorithms

    There are no cipher suites available which provide SHA384 hashing. Only SHA256 and SHA. SHA1 (SHA) is considered insecure (see here and here; plenty of other references to this can be found easily).

    No GCM mode support

    GCM provides both data authenticity (integrity) and confidentiality. It is more efficient and performant compared to CBC mode. CBC only provides confidentiality but no authenticity/integrity checking. GCM uses a so-called nonce. You cannot use the same nonce to encrypt data with the same key twice.

    Wildcard certificates are used

    As you can see in the screenshot below, the certificate used for my Mobile Cloud Service contains a wildcard: *.mobileenv.us2.oraclecloud.com. This means the same certificate is used for all Mobile Cloud Service hosts in a data center unless specifically overridden. See here: Rule - Do Not Use Wildcard Certificates. They violate the principle of least privilege and also violate the EV Certificate Guidelines. If you decide to implement two-way SSL, I would definitely consider using your own certificates since you want to avoid trust on the data center level. Since the certificate is per data center, there is no difference between the certificate used for development environments and the one used for production environments. In addition, everyone in the same data center will use the same certificate. Should the private key be compromised (of course Oracle will try not to let this happen!), this will be an issue for the entire data center and everyone using the default certificate.


    Oracle provides the option to use your own certificates and even recommends this. See here. This allows you to manage your own host specific certificate instead of the one used by the data center.

    Choice of keys

    Only RSA and ECDHE keys are used and no DSA/DSS keys. Also, the ECDHE cipher suites are given priority over the RSA ones. ECDHE gives forward secrecy. Read more here. DHE however is preferred over ECDHE (see here) since ECDHE uses elliptic curves and there are doubts about whether these are really secure. Read here and here. Oracle does not provide DHE support in their list of cipher suites.

    Strengths of the cipher suites

    Is it all bad? No, definitely not! You can see Oracle has put thought into choosing their cipher suites and only provides a select list. Maybe it is possible to request stronger cipher suites to be enabled by contacting Oracle support.

    Good choice of encryption algorithm

    AES is the preferred encryption algorithm (here). WITH_AES_256 is supported, which is a good thing. WITH_AES_128 is also supported. This one is obviously weaker, but it is not really terrible that it is still used; for compatibility reasons OWASP even recommends TLS_RSA_WITH_AES_128_CBC_SHA as a cipher suite (also SHA1!), so they are not completely against it.

    Good choice of ECDHE curve

    The ECDHE curve used is the default and most commonly used secp256r1, which is equivalent to 3072 bits RSA. OWASP recommends > 2048 bits, so this is OK.

    No support for SSL2 and SSL3

    Of course SSL2 and SSL3 are not secure anymore and usage should not be allowed.

    So why these choices?

    I've not been involved with these choices and have not talked to Oracle about this. In summary, I'm just guessing at the considerations.

    I can imagine the cipher suites have been chosen to create a balance between compatibility, performance and security. Also, they could be related to export restrictions / government regulations. The supported cipher suites do not all require the installation of JCE (here) but some do. For example, usage of AES_256 and ECDHE requires the JCE cryptographic provider, but AES_128 and RSA do not. Compatibility is of course also taken into consideration: the supported cipher suites are common ones supported by most web browsers (see here). When taking performance into consideration (although this is hardware dependent; certain cipher suites perform better on ARM processors, others better on for example Intel), using ECDHE is not at all strange, while not using GCM might not be a good idea (try for example the following: gnutls-cli --benchmark-ciphers). For Oracle, using a single wildcard certificate per data center is of course an easy and cheap default solution.

    Recommendations
    • Customers should consider using their own host specific certificates instead of the default wildcard certificate.
    • Customers should try to put constraints on their clients. Since the public cloud offers support for weak ciphers, the negotiation between client and server determines the cipher suite (and thus strength) used. If the client does not allow weak ciphers, relatively strong ciphers will be used. Whether you are able to do this depends on your situation: if you want to provide access to the entire world, controlling the client can be a challenge. If however you are integrating web services, you are more in control (unless of course a SaaS solution has limitations).
    • Work with Oracle support to see what is possible and where the limitations are.
    • Whenever you have more control, consider using stronger ciphers like TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (see the sketch below).
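
    For clients you control, below is a minimal sketch (my own illustration, not an official Oracle example) of how a Python 2.7.9+ client could enforce this. The URL and cipher list are assumptions you should adapt to your own situation.

    import ssl
    import urllib2

    # Placeholder URL, replace with your own cloud service endpoint
    url = "https://myservice.mobileenv.us2.oraclecloud.com/"

    # Only offer TLS 1.2 with strong ECDHE suites. If the server cannot agree
    # on one of these, the handshake fails instead of falling back to a weak suite.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA")

    response = urllib2.urlopen(url, context=ctx)
    print response.getcode()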

    Oracle Mobile Cloud Service integration options

    Oracle Mobile Cloud Service has a lot of options which allow it to integrate with other services and systems. Since it runs JavaScript on Node.js for custom APIs, it is very flexible.

    Some features allow it to extend its own functionality, such as the Firebase configuration option to send notifications to mobile devices, while for example the connectors allow wizard driven integration with other systems. The custom API functionality running on a recent Node.js version ties it all together. In this blog article I'll provide a quick overview and some background of the integration options of MCS.

    MCS is very well documented here and there are many YouTube videos available explaining/demonstrating various MCS features here. So if you want to know more, I suggest looking at those.


    Some recent features

    Oracle is working hard on improving and expanding MCS functionality. For the latest improvements to the service see the following page. Some highlights I personally appreciate of the past half year which will also get some attention in this blog:
    - Zero footprint SSO (June 2017)
    - Swagger support in addition to RAML for the REST connector (April 2017)
    - Node.js version v6.10.0 support (April 2017)
    - Support for Firebase (FCM) to replace GCM (December 2016)
    - Support for third party tokens (December 2016)

    Feature integration

    Notification support

    In general there are two options for sending notifications from MCS: integrating with FCM and integrating with Syniverse. Since these are third party suppliers, you should compare them (license, support, performance, cost, etc.) before choosing one.

    You can also use any other notification provider if it offers a REST interface by using the REST connector. You will not get much help in configuring it through the MCS interface though; it will be a custom implementation.

    Firebase Cloud Messaging / Google Cloud Messaging

    Notification support is implemented by integrating with Google cloud messaging products. Google Cloud Messaging (GCM) is being replaced with Firebase Cloud Messaging (FCM) in MCS. GCM has been deprecated by Google for quite a while now so this is a good move. You do need a Google Cloud Account though and have to purchase their services in order to use this functionality. See for example here on how to implement this from a JET hybrid application.



    Syniverse

    Read more on how to implement this here. You first have to create a Syniverse account. Next, subscribe to the Syniverse Messaging Service, register the app and get credentials. These credentials can then be registered in MCS under client management.



    Beacon support

    Beacons broadcast packets which can be detected over Bluetooth by mobile devices. The packet structure the beacons broadcast can differ. There are samples available for iBeacon, altBeacon and Eddystone, but others can be added if you know the corresponding packet structure. See the following presentation for some background on beacons and how they can be integrated in MCS. How to implement this for an Android app can be watched here.





    Client support

    MCS comes with several SDKs which provide easy integration of a client with MCS APIs. Client SDKs are available for iOS, Android, Windows and Web (plain JavaScript). These SDKs provide an easy alternative to using the raw MCS REST APIs. They provide a wrapper for the APIs and provide easy access in the respective language the client uses.


    Authentication options (incoming)

    SAML, JWT

    Third party token support for SAML and JWT is available. Read more here. A token exchange is available as part of MCS which creates MCS tokens from third party tokens based on specifically defined mappings. These MCS tokens can be used by clients in subsequent requests. This does require some work on the client side, but the SDKs of course help with this.



    Facebook Login

    Read here for an example on how to implement this in a hybrid JET application.



    OAuth2 and Basic authentication support. 

    No third party OAuth tokens are supported. This is not strange since the OAuth token does not contain user data and MCS needs a way to validate the token. MCS provides its own OAuth2 STS (Secure Token Service) to create tokens for MCS users.


    Oracle Enterprise Single Sign-on support. 

    Read here. This is not to be confused with the Oracle Enterprise Single Sign-on Suite (ESSO). This is browser based authentication of Oracle Cloud users who are allowed access to MCS.

    These provide the most common web authentication methods. Especially the third party SAML and JWT support provides for many integration options with third party authentication providers. OKTA is given as an example in the documentation.

    Application integration: connectors

    MCS provides connectors which allow wizard driven configuration in MCS. Connectors are used for outgoing calls. There is a connector API available which makes it easy to interface with the connectors from custom JavaScript code. The connectors support the use of Oracle Credential Store Framework (CSF) keys and certificates. TLS versions up to TLS 1.2 are supported. You are of course warned that older versions might not be secure. The requests the connectors make are over HTTP since no other technologies are currently directly supported. You can of course use REST APIs and ICS as wrappers should you need something else.

    Connector security settings

    For the different connectors, several Oracle Web Service Security Manager (OWSM) policies are used. See here. These allow you to configure several security settings and for example allow usage of WS Security and SAML tokens for outgoing connections. The policies can be configured with security policy properties. See here.

    REST

    It is recommended to use the REST connector instead of doing calls directly from your custom API code because the connector integrates well with MCS and provides security and monitoring benefits, for example out of the box analytics.


    SOAP

    The SOAP connector can do a transformation from SOAP to JSON to make it easier to work with. This has some limitations however, which are described below.


    Connector scope

    There are also some general limitations defined by the scope of the API of the connector:

    • Only SOAP version 1.1 and WSDL version 1.2 are supported.
    • Only the WS-Security standard is supported. Other WS-* standards, such as WS-RM or WS-AT, aren’t supported.
    • Only document style and literal encoding are supported.
    • Attachments aren’t supported.
    • Of the possible combinations of input and output message operations, only input-output operations and input-only operations are supported. These operations are described in the Web Services Description Language (WSDL) Version 1.2 specification.

    Transformation limitations


    The transformation from SOAP (XML) to JSON has some limitations. The following constructs aren't supported:
    • A choice group with child elements belonging to different namespaces having the same (local) name. This is because JSON doesn’t have any namespace information.
    • A sequence group with child elements having duplicate local names. For example, <Parent><ChildA/><ChildB/>...<ChildA/>...</Parent>. This translates to an object with duplicate property names, which isn’t valid.
    • XML Schema Instance (xsi) attributes aren’t supported.

    Integration Cloud Service connector


    Read more about this connector here. This connector allows you to call ICS integrations. You can connect to your ICS instance and select an integration from a drop-down menu. For people who also use ICS in their cloud architecture, this will probably be the most commonly used connector.

    Fusion Applications connector


    Read more about this connector here. The flow looks similar to that of the ICS Cloud Adapters (here). In short, you authenticate, a resource discovery is done and local artifacts are generated which contain the connector configuration. At runtime this configuration is used to access the service. The wizard driven configuration of the connector is a great strength. MCS does not provide the full range of cloud adapters as is available in ICS and SOA CS.

    Finally

    Flexibility

    Oracle Mobile Cloud Service allows you to define custom APIs using JavaScript code. Oracle Mobile Cloud Service V17.2.5-201705101347 runs Node.js version v6.10.0 and OpenSSL version 1.0.2k (process.versions) which are quite new! Because a new OpenSSL version is supported, TLS 1.2 ciphers are also supported and can be used to create connections to other systems. This can be done from custom API code or by configuring the OWSM settings in the connector configuration. It runs on Oracle Enterprise Linux 6 kernel 2.6.39-400.109.6.el6uek.x86_64 (JavaScript: os.release()). Most JavaScript packages will run on this version so few limitations there.

    ICS also provides an option to define custom JavaScript functions (see here). I haven't looked at the engine used in ICS, but I doubt it is a full blown Node.js instance and suspect (please correct me if I'm wrong) that a JVM JavaScript engine is used, like in SOA Suite / SOA CS. This provides less functionality and performance compared to a Node.js instance.

    What is missing?

    Integration with other Oracle Cloud services

    Mobile Cloud Service does lack out of the box integration options with other Oracle Cloud services. Only 4 HTTP based connectors are available. Thus if you want to integrate with an Oracle Cloud database (a different one than the one provided), you have to use the external DB's REST API (with the REST connector or from custom API code) or use for example the Integration Cloud Service connector or the Application Container Cloud Service to wrap the database functionality. This of course requires a license for the respective services.

    Cloud adapters

    A Fusion Applications connector is present in MCS. Also, OWSM policies are used in MCS. It would therefore not be strange if MCS were technically capable of running more of the cloud adapters which are present in ICS. This would greatly increase the integration options for MCS.

    Mapping options for complex payloads

    Related to the above, if the payloads become large and complex, mapping fields also becomes more of a challenge. ICS does a better job at this than MCS currently. It has a better mapping interface and provides mapping suggestions.

    R and the Oracle database: Using dplyr / dbplyr with ROracle in Windows 10

    R uses data extensively. Data often resides in a database. In this blog I will describe installing and using dplyr, dbplyr and ROracle on Windows 10 to access data from an Oracle database and use it in R.


    Accessing the Oracle database from R

    dplyr makes the most common data manipulation tasks in R easier. dplyr can use dbplyr. dbplyr provides a translation from the dplyr verbs to SQL queries. dbplyr 1.1.0 was released on 2017-06-27. See here. It uses the DBI (R Database Interface). See here. This interface is implemented by various drivers such as ROracle. ROracle is an Oracle driver based on OCI (Oracle Call Interface), a high performance native C interface to connect to the Oracle Database.

    Installing ROracle on Windows 10

    I encountered several errors when installing ROracle in Windows 10 on R 3.3.3. The steps to take to do this right in one go are the following:
    • Determine your R platform architecture: 32 bit or 64 bit. For me this was 64 bit.
    • Download and install the Oracle Instant Client with the corresponding architecture (here). Download the Basic and SDK files. Put the sdk directory from the SDK zip in a subdirectory of the extracted Basic zip (at the same level as vc14).
    • Download and install RTools (here).
    • Set the OCI_LIB64 or OCI_LIB32 variable to the Instant Client path (see the sketch after this list).
    • Set the PATH variable to include the location of oci.dll.
    • Install ROracle (install.packages("ROracle") in R)
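
    As a quick reference, a minimal sketch of setting these variables for the current command prompt session. The path is the example location used later in this post; adjust it to where you extracted the Instant Client, or set the variables permanently via the Windows system settings and make sure R is started from an environment where they are visible.

    set OCI_LIB64=C:\Users\maart_000\Desktop\instantclient_12_2
    set PATH=%PATH%;C:\Users\maart_000\Desktop\instantclient_12_2
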

    Encountered errors

    Warning in install.packages :
      package ‘ROracle_1.3-1.zip’ is not available (for R version 3.3.3)

    You probably tried to install the ROracle package which Oracle provides on an R version which is too new (see here). This will not work on R 3.3.3. You can compile ROracle on your own or use the (older) R version Oracle supports.

    Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’ These will not be installed

    Compilation on Windows requires RTools (here). Installing RTools will install all the tools required to compile sources on a Windows machine.

    Next you will get the following question:

    Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’
    Do you want to attempt to install these from sources?
    y/n:

    If you say y, you will get the following error:

    installing the source package ‘ROracle’

    trying URL 'https://cran.rstudio.com/src/contrib/ROracle_1.3-1.tar.gz'
    Content type 'application/x-gzip' length 308252 bytes (301 KB)
    downloaded 301 KB

    * installing *source* package 'ROracle' ...
    ** package 'ROracle' successfully unpacked and MD5 sums checked
    ERROR: cannot find Oracle Client.
           Please set OCI_LIB64 to specify its location.

    In order to fix this, you can download and install the Oracle Instant Client (the basic and SDK downloads).

    Mind that when running a 64 bit version of R, you also need a 64 bit version of the Instant Client. You can check with the R version command. In my case: Platform: x86_64-w64-mingw32/x64 (64-bit). Next you have to set the OCI_LIB64 variable (for 64 bit, else OCI_LIB32) to the Instant Client path. After that, the installation will fail with something like:

    Error in inDL(x, as.logical(local), as.logical(now), ...) :
      unable to load shared object 'ROracle.dll':
      LoadLibrary failure:  The specified module could not be found.

    This is caused when oci.dll from the instant client is not in the path environment variable. Add it and it will work! (at least it did on my machine). The INSTALL file from the ROracle package contains a lot of information about different errors which can occur during installation. If you encounter any other errors, be sure to check it.

    How a successful 64 bit compilation looks

    > install.packages("ROracle")
    Installing package into ‘C:/Users/maart_000/Documents/R/win-library/3.3’
    (as ‘lib’ is unspecified)
    Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’
    Do you want to attempt to install these from sources?
    y/n: y
    installing the source package ‘ROracle’

    trying URL 'https://cran.rstudio.com/src/contrib/ROracle_1.3-1.tar.gz'
    Content type 'application/x-gzip' length 308252 bytes (301 KB)
    downloaded 301 KB

    * installing *source* package 'ROracle' ...
    ** package 'ROracle' successfully unpacked and MD5 sums checked
    Oracle Client Shared Library 64-bit - 12.2.0.1.0 Operating in Instant Client mode.
    found Instant Client C:\Users\maart_000\Desktop\instantclient_12_2
    found Instant Client SDK C:\Users\maart_000\Desktop\instantclient_12_2/sdk/include
    copying from C:\Users\maart_000\Desktop\instantclient_12_2/sdk/include
    ** libs
    Warning: this package has a non-empty 'configure.win' file,
    so building only the main architecture

    c:/Rtools/mingw_64/bin/gcc  -I"C:/PROGRA~1/R/R-33~1.3/include" -DNDEBUG -I./oci    -I"d:/Compiler/gcc-4.9.3/local330/include"    -O2 -Wall  -std=gnu99 -mtune=core2 -c rodbi.c -o rodbi.o
    c:/Rtools/mingw_64/bin/gcc  -I"C:/PROGRA~1/R/R-33~1.3/include" -DNDEBUG -I./oci    -I"d:/Compiler/gcc-4.9.3/local330/include"    -O2 -Wall  -std=gnu99 -mtune=core2 -c rooci.c -o rooci.o
    c:/Rtools/mingw_64/bin/gcc -shared -s -static-libgcc -o ROracle.dll tmp.def rodbi.o rooci.o C:\Users\maart_000\Desktop\instantclient_12_2/oci.dll -Ld:/Compiler/gcc-4.9.3/local330/lib/x64 -Ld:/Compiler/gcc-4.9.3/local330/lib -LC:/PROGRA~1/R/R-33~1.3/bin/x64 -lR
    installing to C:/Users/maart_000/Documents/R/win-library/3.3/ROracle/libs/x64
    ** R
    ** inst
    ** preparing package for lazy loading
    ** help
    *** installing help indices
    ** building package indices
    ** testing if installed package can be loaded
    * DONE (ROracle)

    Testing ROracle

    You can read the ROracle documentation here. Oracle has been so kind as to provide developer VMs to play around with the database. You can download them here. I used the 'Database App Development VM'.

    After installation of ROracle you can connect to the database and for example fetch employees from the EMP table. See for example below (make sure you also have DBI installed).

    library("DBI")
    library("ROracle")
    drv <- dbDriver("Oracle")
    host <- "localhost"
    port <- "1521"
    sid <- "orcl12c"
    connect.string <- paste(
      "(DESCRIPTION=",
      "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
      "(CONNECT_DATA=(SID=", sid, ")))", sep = "")

    con <- dbConnect(drv, username = "system", password = "oracle", dbname = connect.string, prefetch = FALSE,
              bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE,
              sysdba = FALSE)

    dbReadTable(con, "EMP")

    This will yield the data in the EMP table:

       EMPNO  ENAME      JOB  MGR            HIREDATE  SAL COMM DEPTNO
    1   7698  BLAKE  MANAGER 7839 1981-05-01 00:00:00 2850   NA     30
    2   7566  JONES  MANAGER 7839 1981-04-02 00:00:00 2975   NA     20
    3   7788  SCOTT  ANALYST 7566 1987-04-19 00:00:00 3000   NA     20
    4   7902   FORD  ANALYST 7566 1981-12-02 23:00:00 3000   NA     20
    5   7369  SMITH    CLERK 7902 1980-12-16 23:00:00  800   NA     20
    6   7499  ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600  300     30
    7   7521   WARD SALESMAN 7698 1981-02-21 23:00:00 1250  500     30
    8   7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400     30
    9   7844 TURNER SALESMAN 7698 1981-09-08 00:00:00 1500    0     30
    10  7876  ADAMS    CLERK 7788 1987-05-23 00:00:00 1100   NA     20
    11  7900  JAMES    CLERK 7698 1981-12-02 23:00:00  950   NA     30

    Using dplyr

    dplyr uses dbplyr and it makes working with database data a lot easier. You can see an example here.

    Installing dplyr and dbplyr in R is easy:

    install.packages("dplyr")
    install.packages("dbplyr")

    Various functions are provided to work with data.frames, a popular R datatype, in combination with data from the database. Also, dplyr uses an abstraction above SQL which makes coding SQL easier for non-SQL coders. You can compare it in some ways with Hibernate, which makes working with databases from the Java object world easier.

    Some functions dplyr provides:

    filter() to select cases based on their values.
    arrange() to reorder the cases.
    select() and rename() to select variables based on their names.
    mutate() and transmute() to add new variables that are functions of existing variables.
    summarise() to condense multiple values to a single value.
    sample_n() and sample_frac() to take random samples.

    I'll use the same example data as in the above sample which uses plain ROracle.

    library("DBI")
    library("ROracle")
    library("dplyr")

    #below are required to make the translation done by dbplyr to SQL produce working Oracle SQL
    sql_translate_env.OraConnection <- dbplyr:::sql_translate_env.Oracle
    sql_select.OraConnection <- dbplyr:::sql_select.Oracle
    sql_subquery.OraConnection <- dbplyr:::sql_subquery.Oracle 

    drv <- dbDriver("Oracle")
    host <- "localhost"
    port <- "1521"
    sid <- "orcl12c"
    connect.string <- paste(
      "(DESCRIPTION=",
      "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
      "(CONNECT_DATA=(SID=", sid, ")))", sep = "")

    con <- dbConnect(drv, username = "system", password = "oracle", dbname = connect.string, prefetch = FALSE,
              bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE,
              sysdba = FALSE)

    emp_db <- tbl(con, "EMP")
    emp_db

    The output is something like:

    # Source:   table<EMP> [?? x 8]
    # Database: OraConnection
       EMPNO  ENAME       JOB   MGR            HIREDATE   SAL  COMM DEPTNO
       <int>  <chr>     <chr> <int>              <dttm> <dbl> <dbl>  <int>
     1  7839   KING PRESIDENT    NA 1981-11-16 23:00:00  5000    NA     10
     2  7698  BLAKE   MANAGER  7839 1981-05-01 00:00:00  2850    NA     30
     3  7782  CLARK   MANAGER  7839 1981-06-09 00:00:00  2450    NA     10
     4  7566  JONES   MANAGER  7839 1981-04-02 00:00:00  2975    NA     20
     5  7788  SCOTT   ANALYST  7566 1987-04-19 00:00:00  3000    NA     20
     6  7902   FORD   ANALYST  7566 1981-12-02 23:00:00  3000    NA     20
     7  7369  SMITH     CLERK  7902 1980-12-16 23:00:00   800    NA     20
     8  7499  ALLEN  SALESMAN  7698 1981-02-19 23:00:00  1600   300     30
     9  7521   WARD  SALESMAN  7698 1981-02-21 23:00:00  1250   500     30
    10  7654 MARTIN  SALESMAN  7698 1981-09-27 23:00:00  1250  1400     30
    # ... with more rows

    If I now want to select specific records, I can do something like:

    emp_db %>% filter(DEPTNO == "10")

    Which will yield

    # Source:   lazy query [?? x 8]
    # Database: OraConnection
      EMPNO  ENAME       JOB   MGR            HIREDATE   SAL  COMM DEPTNO
      <int>  <chr>     <chr> <int>              <dttm> <dbl> <dbl>  <int>
    1  7839   KING PRESIDENT    NA 1981-11-16 23:00:00  5000    NA     10
    2  7782  CLARK   MANAGER  7839 1981-06-09 00:00:00  2450    NA     10
    3  7934 MILLER     CLERK  7782 1982-01-22 23:00:00  1300    NA     10

    A slightly more complex query:

    emp_db %>% 
      group_by(DEPTNO) %>%
      summarise(EMPLOYEES = count())

    Will result in the number of employees per department:

    # Source:   lazy query [?? x 2]
    # Database: OraConnection
      DEPTNO EMPLOYEES
       <int>     <dbl>
    1     30         6
    2     20         5
    3     10         3

    You can see the generated query by:

    emp_db %>% 
      group_by(DEPTNO) %>%
      summarise(EMPLOYEES = count()) %>% show_query()

    Will result in

    <SQL>
    SELECT "DEPTNO", COUNT(*) AS "EMPLOYEES"
    FROM ("EMP") 
    GROUP BY "DEPTNO"

    If I want to take a random sample from the dataset to perform analyses on, I can do:

    sample_n(as_data_frame(emp_db), 10)

    Which could yield something like:

    # A tibble: 10 x 8
       EMPNO  ENAME      JOB   MGR            HIREDATE   SAL  COMM DEPTNO
       <int>  <chr>    <chr> <int>              <dttm> <dbl> <dbl>  <int>
     1  7844 TURNER SALESMAN  7698 1981-09-08 00:00:00  1500     0     30
     2  7499  ALLEN SALESMAN  7698 1981-02-19 23:00:00  1600   300     30
     3  7566  JONES  MANAGER  7839 1981-04-02 00:00:00  2975    NA     20
     4  7654 MARTIN SALESMAN  7698 1981-09-27 23:00:00  1250  1400     30
     5  7369  SMITH    CLERK  7902 1980-12-16 23:00:00   800    NA     20
     6  7902   FORD  ANALYST  7566 1981-12-02 23:00:00  3000    NA     20
     7  7698  BLAKE  MANAGER  7839 1981-05-01 00:00:00  2850    NA     30
     8  7876  ADAMS    CLERK  7788 1987-05-23 00:00:00  1100    NA     20
     9  7934 MILLER    CLERK  7782 1982-01-22 23:00:00  1300    NA     10
    10  7782  CLARK  MANAGER  7839 1981-06-09 00:00:00  2450    NA     10

    Executing the same command again will result in a different sample.

    Finally

    There are multiple ways to get data to and from the Oracle database and perform actions on them. Oracle provides Oracle R Enterprise. Oracle R Enterprise is a component of the Oracle Advanced Analytics Option of Oracle Database Enterprise Edition. You can create R proxy objects in your R session from database-resident data. This allows you to work on database data in R while the database does most of the computations. Another feature of Oracle R Enterprise is an R script repository in the database and a feature to allow execution of R scripts from within the database (embedded), even within SQL statements. As you can imagine this is quite powerful. More on this in a later blog!

    Oracle SOA and WebLogic: Overview of key and keystore configuration

    Keystores and the keys within can be used for security on the transport layer and application layer in Oracle SOA Suite and WebLogic Server. Keystores hold private keys (identity) but also public certificates (trust). This is important when WebLogic / SOA Suite acts as the server but also when it acts as the client. In this blog post I'll explain the purpose of keystores, the different keystore types available and which configuration is relevant for which keystore purpose.

    Why use keys and keystores?

    The below image (from here) illustrates the TCP/IP model and how the different layers map to the OSI model. When I talk about the application and transport layers in the elaboration below, I mean the TCP/IP model layers, and more specifically HTTP.


    The two main reasons why you might want to employ keystores are that
    • you want to enable security measures on the transport layer
    • you want to enable security measures on the application layer
    Almost all of the below mentioned methods/techniques require the use of keys, and you can imagine the correct configuration of these keys within SOA Suite and WebLogic Server is very important. They determine which clients can be trusted, how services can be called and also how outgoing calls identify themselves.

    You could think transport layer and application layer security are two completely separate things. Often they are not that separated though. The combination of transport layer and application layer security has some limitations and often the same products / components are used to configure both.
    • Double encryption is not allowed. See here. 'U.S. government regulations prohibit double encryption'. Thus you are not allowed to do encryption on the transport layer and application layer at the same time. This does not mean you cannot do this though, but you might encounter some product restrictions since, you know, Oracle is a U.S. company.
    • Oracle Web Services Manager (OWSM) allows you to configure policies that check whether transport layer security is used (HTTPS in this case) and is also used to configure application level security. It is quite common that a single product is used to perform both transport layer and application layer security; for example API gateway products such as Oracle API Platform Cloud Service also do this.
    Transport layer (TLS)

    Cryptography is achieved by using keys from keystores. On the transport layer you can achieve authentication, integrity, security and reliability, usually on the level of the host or the connection as a whole.
    You can read more on TLS in SOA Suite here.

    Application layer



    On application level you can achieve similar feats (authentication, integrity, security, reliability), however often more fine grained such as for example on user level or on a specific part of a message instead of on host level or for the entire connection. Performance is usually not as good as with transport layer security because the checks which need to be performed, can require actual parsing of messages instead of securing the transport (HTTP) connection as a whole regardless of what passes through. The implementation depends on the application technologies used and is thus quite variable.
    • Authentication by using 
      • Security tokens such as for example 
        • SAML. SAML tokens can be used in WS-Security headers for SOAP and in plain HTTP headers for REST. JSON Web Tokens (JWT) and OAuth are also examples of security tokens
        • Certificate tokens in different flavors can be used which directly use a key in the request to authenticate.
        • Digest authentication can also be considered. Using digest authentication, a username-password token is created which is sent using WS-Security headers.
    • Security and reliability by using message protection. Message protection consists of measures to achieve message confidentiality and integrity. This can be achieved by 
      • signing. XML Signature can be used for SOAP messages and is part of the WS Security standard. Signing can be used to achieve message integrity.
      • encrypting. Encrypting can be used to achieve confidentiality.
    Types of keystores

    There are two types of keystores in use in WebLogic Server / OPSS: JKS keystores and KSS keystores. The below table summarizes the main differences:


    JKS

    There are JKS keystores. These are Java keystores which are saved on the filesystem. JKS keystores can be edited by using the keytool command which is part of the JDK. There is no direct support for editing JKS keystores from WLST, WebLogic Console or Fusion Middleware Control. You can use WLST however to configure which JKS file to use. See for example here:

    connect('weblogic','Welcome01','t3://localhost:7001') 
    edit()
    startEdit()
    cd ('Servers/myserver/ServerMBean/myserver')

    cmo.setKeyStores('CustomIdentityAndCustomTrust')
    cmo.setCustomIdentityKeyStoreFileName('/path/keystores/Identity.jks')  
    cmo.setCustomIdentityKeyStorePassPhrase('passphrase') 
    cmo.setCustomIdentityKeyStoreType('JKS')
    cmo.setCustomTrustKeyStoreFileName('/path/keystores/Trust.jks')
    cmo.setCustomTrustKeyStorePassPhrase('passphrase') 
    cmo.setCustomTrustKeyStoreType('JKS')

    save()
    activate()
    disconnect()

    Keys in JKS keystores can have passwords as can keystores themselves. If you use JKS keystores in OWSM policies, you are required to configure the key passwords in the credential store framework (CSF). These can be put in the map: oracle.wsm.security and can be called: keystore-csf-key, enc-csf-key, sign-csf-key. Read more here. In a clustered environment you should make sure all the nodes can access the configured keystores/keys by for example putting them on a shared storage.
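
    A minimal WLST sketch (my own illustration, not taken from the Oracle documentation; aliases and passwords are placeholders) of creating these credential store entries with the OPSS createCred command while connected to the Admin Server:

    connect('weblogic','Welcome01','t3://localhost:7001')
    # password used to open the JKS keystore itself
    createCred(map='oracle.wsm.security', key='keystore-csf-key', user='owner', password='keystorepassword', desc='OWSM keystore password')
    # password of the signing key; the user field holds the key alias
    createCred(map='oracle.wsm.security', key='sign-csf-key', user='signalias', password='signkeypassword', desc='OWSM signing key')
    # password of the encryption key; the user field holds the key alias
    createCred(map='oracle.wsm.security', key='enc-csf-key', user='encalias', password='enckeypassword', desc='OWSM encryption key')
    disconnect()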


    KSS

    OPSS also offers KeyStoreService (KSS) keystores. These are saved in a database in an OPSS schema which is created by running the RCU (Repository Creation Utility) during installation of the domain. KSS keystores are the default keystores to use since WebLogic Server 12.1.2 (and thus for SOA Suite since 12.1.3). KSS keystores can be protected with either a policy or a password which determines whether access to the keys is allowed. OWSM does not support using a KSS keystore which is protected with a password (see here: 'Password protected KSS keystores are not supported in this release'), thus for OWSM the KSS keystore should be configured to use policy based access.


    KSS keys cannot be configured to have a password, and using keys from a KSS keystore in OWSM policies thus does not require you to configure credential store framework (CSF) passwords to access them. KSS keystores can be edited from Fusion Middleware Control, by using WLST scripts or even by using a REST API (here). You can for example import JKS files quite easily into a KSS store with WLST using something like:

    connect('weblogic','Welcome01','t3://localhost:7001')
    svc = getOpssService(name='KeyStoreService')
    svc.importKeyStore(appStripe='mystripe', name='keystore2', password='password',aliases='myOrakey', keypasswords='keypassword1', type='JKS', permission=true, filepath='/tmp/file.jks')

    Where and how are keystores / keys configured

    As mentioned above, keys within keystores are used to achieve transport security and application security for various purposes. Let's translate this to Oracle SOA Suite and WebLogic Server.

    Transport layer
    • Incoming
      • Keys are used to achieve TLS connections between different components of the SOA Suite such as Admin Servers, Managed Servers and Node Managers. The keystore configuration for those can be done from the WebLogic Console for the servers and manually for the NodeManager. You can configure identity and trust this way, and whether the client needs to present a certificate of its own so the server can verify its identity. See for example here on how to configure this.
      • Keys are used to allow clients to connect to servers via a secure connection (in general, so not specific for communication between WebLogic Server components). This configuration can be done in the same place as above, with the only difference that no manual editing of files on the filesystem is required (since no NodeManager is relevant here).
    • Outgoing
      • Composites (BPEL, BPM)
        Keys are used to achieve TLS connections to different systems from the SOA Suite. The SOA Suite acts as the client here. The configuration of the identity keystore can be done from Fusion Middleware Control by setting the KeystoreLocation MBean. See the below image. Credential store entries need to be added to store the identity keystore password and key password. Storing the key password is not required if it is the same as the keystore password. The credential keys to create for this are SOA/KeystorePassword and SOA/KeyPassword (with the user being the same as the key alias from the keystore to use). In addition, components also need to be configured to use a key to establish identity. In the composite.xml a property oracle.soa.two.way.ssl.enabled can be used to enable outgoing two-way SSL from a composite (see the snippet after this list).
        Setting SOA client identity for 2-way SSL
        Specifying the SOA client identity keystore and key password in the credential store
      • Service Bus
        The Service Bus configuration for outgoing SSL connections is quite different from the composite configuration. The following blog here nicely describes the locations where to configure the keystores and key. In the WebLogic Server console, you create a PKICredentialMapper which refers to the keystore and also contains the keystore password configuration. From the Service Bus project, a ServiceKeyProvider can be configured which uses the PKICredentialMapper and contains the configuration for the key and key password to use. The ServiceKeyProvider configuration needs to be done from the Service Bus console since JDeveloper cannot resolve the credential mapper.
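
    As an illustration of the composite property mentioned above, a rough sketch (my own, not taken from Oracle documentation; the reference name and binding attributes are placeholders) of how oracle.soa.two.way.ssl.enabled can be set on a reference binding in composite.xml:

    <reference name="MyServiceReference">
      <binding.ws port="..." location="...">
        <!-- enable outgoing two-way SSL for calls made through this reference -->
        <property name="oracle.soa.two.way.ssl.enabled">true</property>
      </binding.ws>
    </reference>
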
    To summarize the above:


    Overwriting keystore configuration with JVM parameters

    You can override the keystores used with JVM system parameters such as javax.net.ssl.trustStore, javax.net.ssl.trustStoreType, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, javax.net.ssl.keyStoreType and javax.net.ssl.keyStorePassword in for example the setDomainEnv script. These will override the WebLogic Server configuration but not the OWSM configuration (the application layer security described below). Thus if you specify an alternative truststore on the command line, this will not influence the application layer security of HTTP connections going from SOA Suite to other systems, even when message protection (using WS-Security) has been enabled, which uses keys and checks trust. It will influence HTTPS connections though. For more detail on the above see here.
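
    For illustration, a sketch of how these parameters could be set in setDomainEnv.sh (a minimal example based on my own assumptions; the path and password are placeholders):

    EXTRA_JAVA_PROPERTIES="-Djavax.net.ssl.trustStore=/path/keystores/Trust.jks -Djavax.net.ssl.trustStoreType=JKS -Djavax.net.ssl.trustStorePassword=passphrase ${EXTRA_JAVA_PROPERTIES}"
    export EXTRA_JAVA_PROPERTIES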

    Application layer
    • Keys can be used by OWSM policies to for example achieve message protection on the application layer. This configuration can be done from Fusion Middleware Control.



      The OWSM run time does not use the WebLogic Server keystore that is configured using the WebLogic Server Administration Console and used for SSL. The keystore which OWSM uses by default since 12.1.2 is kss://owsm/keystore, and it can be configured from the OWSM Domain configuration. See the 'Types of keystores' section above for the difference between KSS and JKS keystores.

      OWSM keystore contents and management from FMW Control
      OWSM keystore domain config
      In order for OWSM to use JKS keystores/keys, credential store framework (CSF) entries need to be created which contain the keystore and key passwords. The OWSM policy configuration determines the key alias to use. For KSS keystores/keys no CSF passwords to access keystores/keys are required since OWSM does not support KSS keystores with password and KSS does not provide a feature to put a password on keys.

      Identity for outgoing connections (application policy level, e.g. signing and encryption keys) is established by using OWSM policy configuration. Trust for SAML/JWT (secure token service and client) can be configured from the OWSM Domain configuration.


    Finally

    This is only the tip of the iceberg

    There is a lot to tell in the area of security. Zooming in on transport and application layer security, there is also a wide range of options and do's and don'ts. I have not talked about the different choices you can make when configuring application or transport layer security. The focus of this blog post has been to provide an overview of keystore configuration/usage and thus I have not provided much detail. If you want to learn more on how to achieve good security on your transport layer, read here. To configure 2-way SSL using TLS 1.2 on WebLogic / SOA Suite, read here. Application level security is a different story altogether and can be split up in a wide range of possible implementation choices.

    Different layers in the TCP/IP model

    If you want to achieve solid security, you should look at all layers of the TCP/IP model and not just at the transport and application layer. It also helps if you use different security zones and divide your network so that, for example, your development environment cannot accidentally access your production environment or the other way around.

    Final thoughts on keystore/key configuration in WebLogic/SOA Suite

    When diving into the subject, I realized using and configuring keys and keystores can be quite complex. The reason for this is that it appears that for every purpose of a key/keystore, configuration in a different location is required. It would be nice if that was it, however sometimes configuration overlaps such as for example the configuration of the truststore used by WebLogic Server which is also used by SOA Suite. This feels inconsistent since for outgoing calls, composites and service bus use entirely different configuration. It would be nice if it could be made a bit more consistent and as a result simpler.

    Quickly create a Virtualbox development VM with XE DB using Kickstart, Packer, Vagrant

    The topic of quickly creating an Oracle development VM is not new. Several years ago Edwin Biemond and Lucas Jellema have written several blogs about this and have given presentations about the topics at various conferences. You can also download ready made Virtualbox images from Oracle here and specifically for SOA Suite here.

    Over the years I have created a lot (probably 100+) of virtual machines manually. For SOA Suite, the process of installing the OS, installing the database, installing WebLogic Server and installing SOA Suite itself can be quite time consuming and boring if you have already done it so many times. Finally my irritation has passed the threshold that I need to automate it! I wanted to be able to easily recreate a clean environment with a new version of specific software. This blog is a start: provisioning an OS and installing the XE database on it. It might seem a lot, but this blog contains the knowledge of two days of work. This indicates it is relatively easy to get started with these things.

    I decided to start from scratch and first create a base Vagrant box using Packer which uses Kickstart. Kickstart is used to configure the OS of the VM such as disk partitioning scheme, root password and initial packages. Packer makes using Kickstart easy and allows easy creation of a Vagrant base box. After the base Vagrant box was created, I can use Vagrant to create the Virtualbox machine, configure it and do additional provisioning such as in this case installing the Oracle XE database.


    Getting started

    First install Vagrant from HashiCorp (here).

    If you just want a quick VM with the Oracle XE database installed, you can skip the Packer part. If you want to have the option to create everything from scratch, you can first create your own base image with Packer and use it locally or use the Vagrant Cloud to share the base box.

    Every Vagrant development environment requires a base box. You can search for pre-made boxes at https://vagrantcloud.com/search.

    Oracle provides Vagrant boxes you can use here. Those boxes have some default settings. I wanted to know how to create my own box to start with in case I for example wanted to use an OS not provided by Oracle. I was presented with three options in the Vagrant documentation. Using Packer was presented as the most reusable option.

    Packer

    'Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.' (from here) Download Packer from HashiCorp (here).

    Avast Antivirus, and maybe other antivirus programs, does not like Packer, so you might have to temporarily disable it or tell it Packer can be trusted.

    virtualbox-iso builder

    Packer can be used to build Vagrant boxes (here) but also boxes for other platforms such as Amazon and Virtualbox. See here. For VirtualBox there are two so-called builders available: start from scratch by installing the OS from an ISO file, or start from an OVF/OVA file (a pre-built VM). Here I of course chose the ISO file since I want to be able to easily update the OS of my VM and do not want to create a new OVF/OVA file for every new OS version. Thus I decided to use the virtualbox-iso builder.

    Iso

    For my ISO file I decided to go with Oracle Linux Release 7 Update 4 for x86 (64 bit), which is currently the most recent version. In order for Packer to work fully autonomously (and to make it easy for the developer), you can provide a remote URL to the file you want to download. For Oracle Linux there are several mirrors available which provide this. Look one up close to you here. You have to update the checksum when you update the ISO image if you want to run on a new OS version.

    template JSON file

    In order to use Packer with the virtualbox-iso builder, you first need a template file in JSON format. Luckily, samples for these have already been made available here. You should check them though. I made my own version here.
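
    To give an idea of the structure, below is a stripped down sketch of such a template (not my complete ol74.json; the ISO URL, checksum, passwords and kickstart file name are placeholders you have to fill in yourself, so check the linked samples for a full, tested version):

    {
      "builders": [{
        "type": "virtualbox-iso",
        "guest_os_type": "Oracle_64",
        "iso_url": "http://somemirror.example.com/OracleLinux-R7-U4-Server-x86_64-dvd.iso",
        "iso_checksum": "fillinchecksum",
        "iso_checksum_type": "sha256",
        "http_directory": "http",
        "boot_command": ["<tab> inst.text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"],
        "ssh_username": "root",
        "ssh_password": "Welcome01",
        "shutdown_command": "shutdown -P now"
      }],
      "post-processors": [{
        "type": "vagrant",
        "output": "ol74.box"
      }]
    }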

    Kickstart

    In order to make the automatic installation of Oracle Linux work, you need a Kickstart file. Such a file is generated automatically when performing a manual installation, at /root/anaconda-ks.cfg. Read here. I've made my own here in order to have the correct users, passwords, installed packages and swap partition size.
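
    A few typical Kickstart directives, as a rough sketch of what such a file contains (this is not my complete file; values like the password and timezone are placeholders):

    # minimal Kickstart sketch
    lang en_US.UTF-8
    keyboard us
    timezone Europe/Amsterdam
    rootpw --plaintext Welcome01
    autopart --type=lvm
    reboot

    %packages
    @^minimal
    %end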

    After you have a working Kickstart file and the Packer ol74.json, you can kick off the build with:
    packer build ol74.json

    Packer uses a specified username to connect to the VM (present in the template file). This should be a user which is created in the Kickstart script. For example if you have a user root with password Welcome01 in the kickstart file, you can use that one to connect to the VM. Creating the base box will take a while since it will do a complete OS installation and first download the ISO file.



    You can put the box remote or keep it local.

    Put the box remote

    After you have created the box, you can upload it to the Vagrant Cloud so other people can use it. The Vagrant Cloud free option offers unlimited free public boxes (here). The process of uploading a base box to the Vagrant Cloud is described here. You first create a box and then upload the file Packer has created as provider.


    After you're done, the result will be a Vagrant box which can be used as base image in the Vagrantfile. This looks like:


    Use the box locally

    Alternatively you can use the box you've created locally:
    vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/ol74.box

    You of course have to change the box location to be specific to your environment.

    And use ol74 as box name in your Vagrantfile. You can see an example of a local and remote box here.

    If you have recreated your box and want to use the new version in Vagrant to create a new Virtualbox VM:

    vagrant box remove ol74
    vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/ol74.box

    Vagrant

    You now have a clean base OS (relatively clean, I added a GUI) and you want to install stuff in it. Vagrant can help you do that. I've used a simple shell script to do the provisioning (see here) but you can also use more complex pieces of software like Chef or Puppet. These are of course better suited in the long run to also update and manage machines. Since this is just a local development machine, I decided to keep it simple.

    I've prepared the following Vagrant file.

    This expects to find a structure like:

    provision.sh
    Vagrantfile
    Directory: software
    --oracle-xe-11.2.0-1.0.x86_64.rpm.zip
    --xe.rsp

    Oracle XE comes with an rsp file (a so-called response file) which makes automating the installation easy. This is described here. You just have to fill in some variables like the password and port. I've prepared such a file here.
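
    The response file is small; a rough sketch of the kind of entries such a file contains (ports and passwords are examples; the file is typically passed to '/etc/init.d/oracle-xe configure responseFile=...'):

    ORACLE_HTTP_PORT=8080
    ORACLE_LISTENER_PORT=1521
    ORACLE_PASSWORD=Welcome01
    ORACLE_CONFIRM_PASSWORD=Welcome01
    ORACLE_DBENABLE=y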

    After everything is setup, you can do:

    vagrant up

    And it will create the soadb VM for you in Virtualbox.


    10 reasons NOT to use Blockchain

    A secure distributed ledger with smart contract capabilities not requiring a bank as an intermediary! Also a single source of truth with complete traceability. Definitely something we want! Blockchain technology promises to make this possible. Blockchain became famous through cryptocurrencies like Bitcoin and Ethereum. The technology could also be considered to replace B2B functionality. With new technologies it is not a bad idea to look at pros and cons before starting an implementation. Blockchain is the new kid on the block and there is not much experience yet on how well it will play with others and how it will mature. In this blog I summarize some of my concerns regarding blockchain, which I hope will be solved in due time.

    Regarding new/emerging technologies in the integration space, I'm quite open to investigating the potential value they can offer. I'm a great proponent of for example Kafka, the highly scalable streaming platform, and Docker to host microservices. However, I've been to several conferences and did some research online regarding blockchain and I'm sceptical. I definitely don't claim to be an expert on this subject so please correct me if I'm wrong! Also, this is my personal opinion. It might deviate from my employer's and customers' views.

    Most of the issues discussed here are valid for public blockchains. Private blockchains are of course more flexible since they can be managed by companies themselves. You can for example more easily migrate private blockchains to a new blockchain technology or fix issues with broken smart contracts. These do require management tooling, scripts and enough developers / operations people around your private blockchain though. I don't think it is a deploy and go solution just yet.




    1 Immutable is really immutable!

    A pure public blockchain (not taking into account sidechains and off-chain code) is an immutable chain. Every block uses a hashed value of the previous block in its encryption. You cannot alter a block which is already on the chain. This makes sure things you put on the chain cannot suddenly appear or disappear. There is traceability. Thus you cannot accidentally create money on a distributed ledger (unless you create immutable smart contracts which provide you with that functionality). Security and immutability are great things but they require you to work in a certain way we are not that used to yet. For example, you cannot cancel a confirmed transaction. You have to do a new transaction counteracting the effects of the previous one you want to cancel. If you have an unconfirmed transaction, you can 'cancel' it by creating a new transaction with the same inputs and a higher transaction fee (at least on a public blockchain). See for example here. Also, if you put a smart contract on a public chain and it has a code flaw someone can abuse, you're basically screwed. If the issue is big enough, public blockchains can fork (if 'the community' agrees). See for example the DAO hack on Ethereum. In an enterprise environment with a private blockchain, you can fork the chain and replay the transactions after the issue you want corrected on the chain. This however needs to be performed for every serious enough issue and can be a time consuming operation. In this case it helps (in your private blockchain) if you have a 'shadow administration' of transactions. You do have to take into account however that transactions can have different results based on what has changed since the fork. Being careful here is probably required.


    2 Smart Contracts

    Smart contracts! It is really cool you can also put a contract on the chain. Execution of the contract can be verified by nodes on the chain which have permission and the contract is immutable. This is a cool feature!

    However there are some challenges when implementing smart contracts. A lot becomes possible and this freedom creates sometimes unwanted side-effects.


    CryptoKitties

    You can look up CryptoKitties, a game implemented using smart contracts on Ethereum. They can clog a public blockchain and cause transactions to take a really long time. This is not the first time blockchain congestion has occurred (see for example here). This is a clear sign there are scalability issues, especially with public blockchains. When using private blockchains, these scalability issues are also likely to occur eventually if the number of transactions increases (of course you can prevent CryptoKitties on a private blockchain). The Bitcoin / VISA comparison is an often quoted one, although there is much discussion on the validity of the comparison.


    Immutable software

    Smart contracts are implemented in code, and code contains bugs. Depending on the implementation, those bugs sometimes cannot be fixed since the code on the chain is immutable. Especially since blockchain is a new technology, many people will put buggy code on public blockchains and that code will remain there forever. If you create DAOs (virtual organizations on a blockchain), this becomes even more challenging. See for example the Ethereum DAO hack.

    Hello World forever!

    Because the code is immutable, it will remain on the chain forever. Every hello world tryout, every CryptoKitty from everyone will remain there. Downloading the chain and becoming a node will thus become more difficult as the amount of code on the chain increases, which it undoubtedly will.


    Business people creating smart contracts?

    A smart contract might give the impression that a business person or lawyer should be able to design/create them. If they can create deterministic, error free contracts which will be on the blockchain forever, that is of course possible. It is a question though how realistic that is.

    3 There is no intermediary and no guarantees

    There is no bank between you and the (public) blockchain. This can be a good thing since a bank eats money. However if, for example, the blockchain loses popularity, steeply drops in value or gets hacked (compare it with a bank going bankrupt, e.g. Icesave), then you won't have any guarantees like the deposit guarantee schemes in the EU. Your money might be gone.

    4 Updating the code of a blockchain

    Updating the core code of a running blockchain is, due to its distributed nature, quite the challenge. This often leads to forks. See for example Bitcoin forks like Bitcoin Cash and Bitcoin Gold and an Ethereum fork like Byzantium. The issue with forks is that they make the entire cryptocurrency landscape crowded. It is like Europe in the past when every country had its own currency. You have to exchange coins if you want to spend in a certain country (using the intermediaries everyone wants to avoid) or keep a stack of each of them. Forks, especially hard forks, come with security challenges such as replay attacks (transactions which can be valid on different chains). Some reasons you might want to update the code are that transactions are slow, security becomes an issue in the future (quantum computing) or new features are required (e.g. related to smart contracts).


    5 Blockchain and privacy legislation (GDPR)

    Security is one of the strong points of blockchain technology and helps with the GDPR requirements of security by design and by default. There are some other things to think about though.

    Things put on a blockchain are permanent. You cannot delete them afterwards, although you might be able to make them inaccessible in certain cases. This conflicts with the GDPR right to be forgotten. Also, in public blockchains there is often not a single owner, so who do you make your contracts with?

    Every node has the entire blockchain and thus all the data. This might cause issues with legislation, for example requirements to keep data within the same country. This becomes more of a challenge when running blockchain in a cloud environment. In Europe, with its many relatively small countries, this will be more of an issue than in for example the US, Russia or China.


    6 Lost your private key?

    If you have lost your private key or lost access to your wallet (a more business-friendly name for a keystore) containing your private key, you might have lost your assets on the blockchain. Since a blockchain is secure, there is no easy way to fix this. If you have a wallet which is managed by a 3rd party, they might be able to help you with recovering it. Those 3rd parties however are hacked quite often (a lot of value can be obtained from such a hack). See for example here, here and here.

    7 A blockchain transaction manager is required

    A transaction is put on the blockchain. The transaction is usually verified by several nodes before it is distributed to all nodes and becomes part of the chain. Verification can fail or might take a while; this can be hours on some public blockchains. It could also be that the transaction has been overtaken by another transaction with higher priority. In the software which is integrated with a blockchain solution, you have to keep track of the state of transactions, since you want to know the up-to-date value of your assets. This causes an integration challenge and you might have to introduce a product which has a blockchain transaction manager feature.


    8 Resource inefficient; not good for the environment

    Blockchain requires large amounts of resources when compared to classic integration.
    Every node has the complete chain, so every node can verify transactions. This is a good thing since, if a single node is hacked, other nodes will overrule the transactions which this node offers to the chain if they are invalid in whatever way. However, this means every transaction is distributed to all nodes (network traffic) and every verification is performed on every node (CPU). Also, when the chain becomes larger, every node has a complete copy and thus disk space is not used efficiently. See for example some research on blockchain electricity usage here. Another example is that a single Bitcoin transaction (4 can be processed per second) requires the same amount of electricity as 5000 VISA transactions (while VISA can do 4000 transactions per second, see here). Of course there is discussion on the validity of such a comparison and in the future this will most likely change. It is also an indication that blockchains are still in the early stages.


    9 Blockchain in the cloud?

    As a cloud provider I would also offer blockchain, especially if you let the customer pay for the resources used on your IaaS platform! It really depends on the types of services the blockchain cloud provider offers and how much they charge for them. It could be similar to using a bank, requiring you to pay per transaction. In that case, why not stick to a bank? Can you enforce that the nodes are located in your country? If you need to fix a broken smart contract, will there be a service request and will the cloud provider fork and replay transactions for you? Will you get access to the blockchain itself? Will they provide a transaction manager? Will they guarantee a maximum number of transactions per second in their SLA? A lot of questions for which there are probably answers (which differ per provider), and based on those answers you can make a cost calculation to see whether it will be worthwhile to use the cloud blockchain. In the cloud, the challenges with being GDPR compliant are even greater (especially for European governments and banks).


    10 Quantum computing

    Most of the blockchain implementations are based on ECDSA signatures. Elliptic curve cryptography is vulnerable to a modified Shor's algorithm for solving the discrete logarithm problem on elliptic curves. This potentially makes it possible to obtain a user's private key from their public key when performing a transaction (see here and here). Of course this will be fixed, but how? By forking the public blockchains? By introducing new blockchains? As indicated before, updating the technology of a blockchain can be challenging.


    How to deal with these challenges?

    You can jump on the wagon and hope the ride will not carry you off a cliff. I would be a bit careful when implementing blockchain. In an enterprise, I would not expect to quickly get something to production which is actually worthwhile to use without a lot of expertise to work on all these challenges.

    Companies will gain experience with this technology and architectures which mitigate these challenges will undoubtedly emerge. A new development could also be that the base assumptions the blockchain technology is based on, are not practical in enterprise environments and another technology arises to fill the gap.

    Alternatives

    Although blockchain is relatively new and its first teething problems have not yet been solved, what are the really viable alternatives? Exchanging value internationally has been done using the SWIFT network (usually with a B2B application providing the bridge). This however often requires manual interventions (at least in my experience) and there are security considerations; SWIFT has been hacked, for example.

    The idea of having a consortium which guards a shared truth has been around for quite a while in the B2B world. The technology such a consortium uses can just as well be, for example, a collection of Kafka topics. It would require a per use-case study whether all the blockchain features can be implemented. It will perform far better, the order of messages (like in a blockchain) can be guaranteed per partition, and you can use compacted topics to get the latest value of something (see the sketch below). Also, you can keep records of all transactions, allowing for complete traceability. Kafka has been designed to be easily scalable.
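
    As an illustration of the 'latest value of something' remark above, the sketch below creates a compacted Kafka topic with the Java AdminClient. Compaction keeps only the most recent record per key. The topic name, broker address and partition/replication counts are assumptions, not values from this article.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    public class CreateCompactedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker

            try (AdminClient admin = AdminClient.create(props)) {
                // A compacted topic retains the latest value per key,
                // comparable to reading the current state of an asset.
                NewTopic assetBalances = new NewTopic("asset-balances", 3, (short) 1);
                assetBalances.configs(Collections.singletonMap(
                        TopicConfig.CLEANUP_POLICY_CONFIG,
                        TopicConfig.CLEANUP_POLICY_COMPACT));
                admin.createTopics(Collections.singletonList(assetBalances)).all().get();
            }
        }
    }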


    Off-chain transactions and sidechains

    Some blockchain issues can be mitigated by using so-called off-chain transactions and code. See for example here. Sidechains are extensions to existing blockchains, enhancing their privacy and functionality by adding features like smart contracts and confidential transactions.


    Getting started with Oracle Database in a Docker container!

    One of the benefits of using Docker is quick and easy provisioning. I wanted to find out first-hand if this could help me get an Oracle Enterprise Edition database quickly up and running for use in a development environment. Oracle provides Docker images for its Standard and Enterprise Edition database in the Oracle Container Registry. Lucas Jellema has already provided two blogs on this (here and here) which have been a useful starting point. In this blog I'll describe some of the choices to make and challenges I encountered. To summarize, I'm quite happy with the Docker images in the registry as they provide a very easy way to automate the install of an EE database. You can find a Vagrant provisioning shell script with the installation of Docker and Docker commands to execute here and a description on how to use it here.

    Docker

    Installing Docker on Oracle Linux 7

    Why Docker

    Preparing for this blog was my first real Docker experience outside of workshops. The benefits of Docker I mainly appreciated during this exercise are that
    • Docker uses OS-level virtualization (containers sharing the host kernel), which is more lightweight than full virtualization on for example VirtualBox or VMware.
    • The installation of a product inside the container is already fully scripted if you have a Docker image or Dockerfile. There are a lot of images and Dockerfiles available. Also provided and supported by software vendors such as Oracle.
    • The Docker CLI is very user friendly. For example, you can just throw away your container and create a new one or stop it and start it again at a later time. Starting a shell within a container is also easy. Compare this to for example VBoxManage.
    In order to install Docker on Oracle Linux 7, you need to do some things which are described below.

    Preparing a filesystem

    Docker images/containers are created in /var/lib/docker by default. You do not need to specifically create a filesystem for that; however, Docker runs well on a filesystem of type BTRFS. This is however not supported on Oracle Linux. Docker has two editions: Docker CE and Docker EE. Docker CE is not 'certified' for Oracle Linux while EE is. For Docker CE, BTRFS is only recommended on Ubuntu or Debian and for Docker EE, BTRFS is only supported on SLES.

    When you do want to use a BTRFS partition (at your own risk), and you want to automate the installation of your OS using Kickstart, you can do this like:

    part btrfs.01 --size=1000 --grow
    btrfs /var/lib/docker --label=docker btrfs.01

    See a complete Kickstart example here for Oracle Linux 7 and the blog on how to use the Kickstart file with Packer here.


    Enable repositories

    Docker is not present in the repositories which are enabled by default on Oracle Linux 7. You can automate enabling the required ones with:

    yum-config-manager --enable ol7_addons
    yum-config-manager --enable ol7_optional_latest
    yum-config-manager --enable ol7_UEKR4

    Install Docker

    Installing Docker can be done with a single command:

    yum install docker-engine btrfs-progs btrfs-progs-devel -y

    If you're not using BTRFS, you can leave those packages out.

    Start the Docker daemon

    The Docker CLI talks to a daemon which needs to be running. Starting the daemon and making it start on boot can be done with:

    systemctl start docker
    systemctl enable docker

    Allow a user to use Docker

    You can add a user to the docker group in order to allow it to use Docker. This is however a bad practice, since the user can effectively obtain root access to the system. The way to allow a non-root user to execute docker is described here: you allow the user to execute the docker command using sudo and create an alias for the docker command which instead performs sudo docker. You can also restrict the allowed docker commands in the sudoers entry to only allow access to specific containers.

    Add to /etc/sudoers
    oracle        ALL=(ALL)       NOPASSWD: /usr/bin/docker

    Create the following alias
    alias docker="sudo /usr/bin/docker"

    Oracle database


    Choosing an edition

    Why not XE?

    My purpose is to automate the complete installation of SOA Suite from scratch. In a previous blog I described how to get started with Kickstart, Vagrant and Packer to get the OS ready. I ended that blog post with the installation of the XE database. After the installation of the XE database, the Repository Creation Utility (RCU) needs to be run to create tablespaces, schemas, tables, etc. for SOA Suite. Here I could not continue with my automated install, since the RCU wants to create materialized views and the Advanced Replication option is not part of the current version of the XE database. There was no automated way to let the installer skip over the error and continue, as you would normally do with a manual install. I needed a non-XE edition of the database! The other editions of the database however are more complex to install and thus to automate. For example, you need to install the database software, configure the listener, create a database and create scripts to start the database when the OS starts. Not being a DBA (or having any ambitions to become one), this was not something I wanted to invest much time in.

    Enter Oracle Container Registry!

    The Oracle Container Registry contains preconfigured images for the Enterprise Edition and Standard Edition database. The Container Registry also contains useful information on how to use these images. The Standard Edition database uses a minimum of 4GB of RAM. The Enterprise Edition database has a slim variant with fewer features, but which only uses 2GB of RAM. The slim image is also a lot smaller: only about 2GB to download instead of nearly 5GB. The Standard Edition can be configured with a password from a configuration file, while the Enterprise Edition has the default password 'Oradoc_db1'. The Docker images can use a mounted share for their datafiles.


    Create an account and accept the license

    In order to use the Container Registry, you have to create an account first. Next you have to login and accept the license for a specific image. This has been described here and is pretty easy.


    After you have done that and you have installed Docker as described above, you can start using the image and create containers!

    Start using the image and create a container

    First you have to login to the container registry from your OS. This can be done using a command like:

    docker login -u maarten.smeets@amis.nl -p XXX container-registry.oracle.com

    XXX is not my real password and I also did not accidentally commit it to GitHub. You should use the account here which you have created for the Container Registry.

    I created a small configuration file (db_env.dat) with some settings. These are all the configuration options which are currently possible from a separate configuration file. The file contains the below 4 lines:

    DB_SID=ORCLCDB
    DB_PDB=ORCLPDB1
    DB_DOMAIN=localdomain
    DB_MEMORY=2GB

    Next you can pull the image and run a container with it:

    docker run -d --env-file db_env.dat -p 1521:1521 -p 5500:5500 -it --name dockerDB container-registry.oracle.com/database/enterprise:12.2.0.1-slim

    The -p options specify port mappings. I want port 1521 and port 5500 mapped to my host (VirtualBox, Oracle Linux 7) OS.

    You can see if the container is up and running with:

    docker ps

    You can start a shell inside the container: 

    docker exec -it dockerDB /bin/bash

    I can easily stop the database with:

    docker stop dockerDB

    And start it again with

    docker start dockerDB

    If you want to connect to the database inside the container, you can do so using service name ORCLPDB1.localdomain, user SYS, password Oradoc_db1, hostname localhost (when running on the VirtualBox machine) and port 1521. For the RCU, I created an Oracle Wallet file from the RCU configuration wizard and used that to automate the RCU and install the SOA Suite required artifacts in the container database. See here.
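
    To illustrate those connection details, the sketch below connects to the pluggable database in the container using the Oracle JDBC thin driver. It assumes the ojdbc driver is on the classpath; since SYS can only connect with the SYSDBA role, the internal_logon property is set.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ContainerDbConnectTest {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "sys");
            props.setProperty("password", "Oradoc_db1");
            props.setProperty("internal_logon", "SYSDBA"); // SYS must connect as SYSDBA

            // Thin driver URL format: //host:port/service_name
            String url = "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1.localdomain";
            try (Connection conn = DriverManager.getConnection(url, props)) {
                System.out.println("Connected to: "
                        + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }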

    Finally

    I was surprised at how easy it was to use the Docker image from the registry. Getting Docker itself installed and ready was more work. After a container is created based on the image, managing it with the Docker CLI is also very easy. As a developer this makes me very happy and I recommend other developers to try it out! There are some challenges though if you want to use the images on a larger scale.

    Limited configuration possible

    Many customers use different standards. The Docker image comes with a certain configuration and can be configured only in a (very) limited way by means of a configuration file (as shown above). You can mount an external directory to store data files.

    Limitations in features

    Also, the Docker container database can only run one instance, cannot be patched and does not support Data Guard. I can imagine that in production, not being able to patch the database might be an issue. You can however replace the entire image with a new version and hope the new version can still use the old datafiles. You have to verify this though.

    Running multiple containers on the same machine is inefficient

    If you have multiple Oracle Database containers running at the same time on the same machine, you will not benefit from the multitenancy features, since every container runs its own container database and pluggable database. Also, every container runs its own listener.

    Getting started with Spring Boot microservices. Why and how.

    In order to quickly develop microservices, Spring Boot is a common choice. Why should I be interested in Spring Boot? In this blog post I'll give you some reasons why looking at Spring Boot is interesting and give some samples on how to get started quickly. I'll briefly talk about microservices, move on to Spring Boot and end with Application Container Cloud Service, which is an ideal platform to run and manage your Spring Boot applications on. This blog touches many subjects but they fit together nicely. You can view the code of my sample Spring Boot project here. Most of the Spring Boot knowledge has been gained from the following free course by Java Brains.



    Microservices

    Before we go deeper into why Spring Boot for microservices, we of course first need to know what microservices are. An easy question to ask, but a little complex to answer in a few lines of this blog. One of the first people to describe the characteristics of microservices and actually call them that was Martin Fowler in 2014. What better source to go back to than the articles he has written. For example here.

    'In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.'

    -- James Lewis and Martin Fowler

    Of course there are a lot of terms involved in this definition:
    • It is an architectural style for developing a single application.
    • A suite of small services each running in its own process.
    • Communicating with lightweight mechanisms, often HTTP.
    • Built around business capabilities. Look up 'bounded context'.
    • A bare minimum of centralized management of these services. This implies no application server which provides centralized management of the applications running on it.
    • May be written in different programming languages or use different storage technologies.
    A microservice architectural style also has several characteristics. It is very interesting to look at such an architecture in more detail; see for example the OMESA initiative to help you get started. As is of course obvious and true for all architectural styles, you will gain most benefits when doing it right. It is however often not trivial to determine what 'right' is.

    Spring Boot microservices

    Spring Boot features and microservice principles

    Spring Boot is based on certain principles which align with microservice architecture. The primary goals of Spring Boot are:

    • Provide a radically faster and widely accessible getting started experience for all Spring development.
    • Be opinionated out of the box, but get out of the way quickly as requirements start to diverge from the defaults.
    • Provide a range of non-functional features that are common to large classes of projects (e.g. embedded servers, security, metrics, health checks, externalized configuration).
    • Absolutely no code generation and no requirement for XML configuration.

    The features provided by Spring Boot also make it a good fit to implement microservices in.
    • Spring Boot applications can contain an embedded Tomcat server. This is a completely standalone Tomcat container whose configuration is part of the application.
    • Spring Boot is very well suited to create lightweight JSON/REST services.
    • Features like health checks are provided. Spring Boot offers Actuator: a set of REST services which allow monitoring and management. Look here. Also, externalized configuration can be used. Few centralized management features are required.
    • Since different storage techniques can be used, Spring provides Spring Data JPA. JPA is the Java Persistence API. This API provides ORM capabilities to make working with relational databases easier (mostly vendor independent; it supports EclipseLink, Hibernate and several others).
    Example of an Actuator call to request health status
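
    The original screenshot of such a call is not reproduced here. As a minimal sketch, the health endpoint can simply be requested over HTTP, for example from Java with Spring's RestTemplate. The host and port are assumptions, and note that the path is /health on Spring Boot 1.x and /actuator/health on 2.x.

    import org.springframework.web.client.RestTemplate;

    public class HealthCheckExample {
        public static void main(String[] args) {
            // Assumes the Spring Boot application runs locally on port 8080 (Boot 1.x path).
            String health = new RestTemplate()
                    .getForObject("http://localhost:8080/health", String.class);
            System.out.println(health); // typically something like {"status":"UP"}
        }
    }
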
    Easy to implement API design patterns

    There are plenty of descriptions online to provide API design guidelines. See for example here. An example API URL can be something like: http://api.yourservice.com/v1/companies/34/employees. Notice the structure of the URL which amongst other things contains a version number. Oracle Mobile Cloud Service documentation also has several design recommendations. See here. These design considerations are of course easily implemented in Spring Boot.

    See for example the below code sample:

    A simple Spring Boot controller
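
    The original screenshot of the sample is not included here; the sketch below is an illustrative reconstruction following the versioned URL structure mentioned above. The employees resource, class names and in-memory storage are assumptions, not the code from my sample project.

    import java.util.ArrayList;
    import java.util.List;
    import org.springframework.web.bind.annotation.DeleteMapping;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    @RequestMapping("/v1/companies/{companyId}/employees")
    public class EmployeeController {

        private final List<String> employees = new ArrayList<>();

        // GET http://localhost:8080/v1/companies/34/employees
        @GetMapping
        public List<String> listEmployees(@PathVariable long companyId) {
            return employees;
        }

        // POST maps the body of the request message onto the employee parameter
        @PostMapping
        public String addEmployee(@PathVariable long companyId, @RequestBody String employee) {
            employees.add(employee);
            return employee;
        }

        // DELETE removes all employees of the company
        @DeleteMapping
        public void deleteEmployees(@PathVariable long companyId) {
            employees.clear();
        }
    }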

    You can see how the HTTP operations are used and the way method calls are mapped to URLs. An added benefit of this sample is that it also shows how to access the body of the request message.

    Integration with backend systems

    Spring Boot integrates with JPA. JPA provides an API to easily do ORM. It allows you to work with objects in Java which are backed by database data. For basic CRUD operations, the effort required to implement JPA in Spring Boot is minimal.

    You only need three things to do simple CRUD operations when using the embedded Derby database.

    • An annotated entity. You only require two annotations inside your POJO: @Entity to annotate the class and @Id to indicate the variable holding the primary key.
    • A repository interface extending CrudRepository (from org.springframework.data.repository).
    • Inside your service, you can use the @Autowired annotation to create a local variable with an instance of the repository.

    Connection details for the embedded Derby server are not required. They are required for external databases though.
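
    As a minimal sketch of those three pieces (the Employee naming and fields are assumptions, not taken from my sample project), this could look as follows:

    // Employee.java: an annotated entity; @Entity marks the class, @Id the primary key
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class Employee {
        @Id
        @GeneratedValue
        private Long id;
        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // EmployeeRepository.java: CRUD operations come for free by extending CrudRepository
    import org.springframework.data.repository.CrudRepository;

    public interface EmployeeRepository extends CrudRepository<Employee, Long> {
    }

    // EmployeeService.java: the repository is injected with @Autowired
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    @Service
    public class EmployeeService {
        @Autowired
        private EmployeeRepository employeeRepository;

        public Iterable<Employee> findAll() {
            return employeeRepository.findAll();
        }

        public Employee save(Employee employee) {
            return employeeRepository.save(employee);
        }
    }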

    Pretty comparable to Microservices on Node

    Node or Spring Boot? This is of course a topic on which there are many opinions. Many blogs have been written comparing the two. See for example here.

    In several aspects, Spring Boot beats Node.js.
    • Performance. Read the following article here. Spring Boot microservices can achieve higher throughput than similar services on Node.js. 
    • Maturity. Spring has a long history of running Enterprise Applications. Node.js can also be used but is less mature.
    • Security. Spring and Spring Boot are clearly better than Node.js here. For example, Kerberos support in Node is limited, while Spring Boot provides easy abstractions for several security implementations, among which Kerberos tokens.
    • RDBMS. This is easier to use in Spring Boot because of JPA.
    Node.js also beats Spring Boot in several aspects:
    • Build/package management. People who have experience with both Maven and NPM often prefer NPM.
    • UI. JavaScript is of course the language of choice for front-end applications. Java-based frameworks such as the JSF variants do not come close to the productivity of, for example, a framework like AngularJS.
    • Document databases like MongoDB. When you can work with JSON, JavaScript code running on Node.js makes it very easy to interact with the database.
    Spring Boot, being in the Java ecosystem, can also be combined with for example Ratpack. See here. Ratpack provides a high-throughput, non-blocking web layer. The syntax is similar to how you would write Node.js code. This is of course not so much an argument for Spring Boot, since modules on Node.js provide similar functionality. Both solutions are more alike than you would think at first glance.

    Whether you choose Node.js or Spring Boot probably depends mainly on the skills you have available and your application landscape. If you're from the JavaScript world, you might prefer to write your microservices on Node.js. If you're from the Java world, you will prefer Spring Boot. It is important to understand there is no obviously superior choice between the two.

    Getting started with Spring Boot

    The easiest way to get started is to first watch some online courses, for example this one from Java Brains. I'll provide some nice-to-knows below.

    Spring Tool Suite (STS)

    As for an IDE, every Java IDE will do; however, since Spring Boot is built on top of Spring, you could consider using Spring Tool Suite (STS). This is a distribution of Eclipse with many Spring-specific features which make development of Spring applications easier.



    Spring Initializr

    An alternative way to get your starter project is to go to https://start.spring.io/, indicate your dependencies and click the Generate project button. This will generate a Maven or Gradle project for you with the required dependencies already added.


    With STS, you can also use the Spring Initializr functionality easily.


    Spring Boot CLI

    Spring Boot CLI offers features to create and run Groovy Spring Boot applications. Groovy requires less code than Java to do similar things. It is a scripting language which runs on the JVM, and from Groovy you can access regular Java classes/libraries.

    You can for example create an application like:

    @RestController
    class HelloWorldClass {

        @RequestMapping("/")
        String home() {
            return "Hello World!"
        }
    }

    Save this as a Groovy script (e.g. app.groovy) and run it with Spring Boot CLI like: spring run app.groovy

    Getting actually started

    To get started with Spring Boot, you have to add some entries to your pom.xml file and you're ready to go. The easiest way is to use the New Spring Starter Project wizard from STS, since it will generate a pom, a main class and a test class for you. That is what I used for my sample project here.

    A simple pom.xml to get started with Spring Boot

    Spring and Oracle

    Spring is a very common Java framework. You can find traces of it in several Oracle products and features. Below are some examples. If you look at other Oracle products, especially those which are Java based, I expect you will find many more examples.

    SOA Suite

    For example in Oracle SOA Suite.
    • SOA Suite itself under the covers uses Spring
    • SOA Suite can use Spring components 
    Massive Open Online Course

    Oracle uses Spring Boot in courses it provides. For example in the Develop RESTful Java Microservices deployable on Oracle Cloud MOOC.

    Application Container Cloud Service (ACCS)


    ACCS has been around for a while. Together with Spring Boot, it provides an ideal combination to get your microservices developed and running quickly.

    Application Container Cloud Service provides the features of The Twelve-Factor App out of the (cloud) box for your application, so you don't have to develop these yourself. These of course also align with microservice principles, like executing apps as stateless processes.