
Sonatype Nexus: Retrieving artifacts using the REST API or Apache Ivy

Sonatype Nexus is an often used artifact repository. In a previous blog post I have shown an example of how Maven can be used to assemble and release artifacts to Nexus. In this blog post I will describe two ways artifacts can be fetched from the Nexus repository: by using the REST API and by using Apache Ivy.


The Nexus API

Sonatype has done a great job at providing an extensive, well-documented REST API. See: http://www.sonatype.org/nexus/2015/01/26/learn-the-nexus-rest-api-automating-sonatype-nexus/. The web interface, which allows administration and browsing of the repository, makes use of this API and thus provides many examples of how the API can be used. You can use a network traffic monitor (Chrome Developer Tools, for instance) to capture the requests sent from the web interface to the API.

You can also try requests on the API by using, for example, SOAP UI or an internet browser. SOAP UI is in my opinion the more user-friendly way to experiment.


Below are some examples of how you can use this API. I have used the same Nexus installation and testproject as in the previously mentioned post.

The keyword LATEST is used in the samples. This only works (as far as I've seen) if a pom file is present in the artifact directory (so the Maven resolver can be used).

POM

In order to fetch the latest version of a project, you can perform a query like the one below. This fetches the POM (Maven project object model).

http://localhost:8081/nexus/service/local/artifact/maven?g=nl.amis.smeetsm.application&a=testproject&v=LATEST&r=releases

The response XML conforms to the following XSD: https://repository.sonatype.org/nexus-restlet1x-plugin/default/docs/ns0.xsd (which contains a duplicate definition...).

Browsing

You can browse the directory structure. Suppose all your services have a specific groupId; you can then query artifacts from the group and use some scripting to gather the required artifact. For example:

http://localhost:8081/nexus/service/local/repositories/releases/index_content/nl/amis/smeetsm/application/testproject/

This gives me an XML document describing the artifacts and versions of my testproject. You can use this response in, for example, a simple Python script to query for artifacts from outside the web interface. By applying an XSL transformation, you can also easily turn the XML into nicely viewable HTML.
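As a minimal sketch of such a script (assuming Python 2, the same local Nexus installation and testproject as above, and that the tree nodes in the index response carry version elements, which may differ per Nexus version), you could do something like:

import urllib2
import xml.etree.ElementTree as ET

# Hypothetical URL; adjust host, repository and group/artifact path to your own Nexus.
url = ("http://localhost:8081/nexus/service/local/repositories/releases/"
       "index_content/nl/amis/smeetsm/application/testproject/")

response = urllib2.urlopen(url).read()
root = ET.fromstring(response)

# Print every version element found in the tree nodes of the response.
for version in root.findall(".//version"):
    print version.text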

Fetching an artifact directly

You can fetch an artifact directly by going to a URL like the one below:

http://localhost:8081/nexus/service/local/repositories/releases/content/nl/amis/smeetsm/application/testproject/1.2/testproject-1.2-distribution.zip

Query for an artifact

Or you can use a query based on the GAV (group id, artifact id, version) coordinates plus classifier and extension to fetch the latest version.

http://localhost:8081/nexus/service/local/artifact/maven/content?g=nl.amis.smeetsm.application&a=testproject&v=LATEST&r=releases&c=distribution&e=zip
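If you prefer to script this instead of pasting the URL in a browser, a minimal Python 2 sketch could look like the following; the coordinates mirror the query above and the target filename is just an example:

import urllib

# Resolve and download the latest release of the distribution zip via the GAV-based query.
url = ("http://localhost:8081/nexus/service/local/artifact/maven/content"
       "?g=nl.amis.smeetsm.application&a=testproject&v=LATEST"
       "&r=releases&c=distribution&e=zip")

# Example target filename; Nexus determines which artifact version is actually served.
urllib.urlretrieve(url, "testproject-latest-distribution.zip")
print "Downloaded testproject-latest-distribution.zip"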

Apache Ivy

Apache Ivy can be used in Ant tasks to make dependency management easier. It integrates easily with Maven repositories (like Nexus) while still allowing use of the extensive scripting options of Ant. The Nexus API allows resolving the latest version of Maven artifacts. When you use Apache Ivy to publish artifacts and do not have a separate pom.xml file, however, you depend on the Apache Ivy resolver to obtain the latest version of artifacts. Apache Ivy does provide an Ant task (makepom) for generating a pom file if you do not already have one.

What do you need to get Apache Ivy fetching artifacts from Nexus for you? There are several examples online, but I could not find a complete one, so I'll provide it here. First download Apache Ivy and put the jar file in a lib directory.

ivy.xml

You can compare this with a Maven pom.xml file. It describes an artifact.

<ivy-module version="2.0" xmlns:maven="http://maven.apache.org">  
<info organisation="nl.amis.smeetsm.application" module="ivytest"/>
<configurations>
<conf name="runtime" description="runtime" />
</configurations>
<dependencies>
<dependency org="nl.amis.smeetsm.application" name="testproject" rev="1.1" conf="runtime->default">
<artifact name="testproject" maven:classifier="distribution" type="zip" ext="zip"/>
</dependency>
</dependencies>
</ivy-module>

ivysettings.xml

This file describes things like credentials and resolvers. You can compare it with a Maven settings.xml.

<ivysettings>
  <settings defaultResolver="nexus"/>
  <credentials host="localhost" realm="Sonatype Nexus Repository Manager" username="deployment" passwd="deployment123"/>
  <property name="nexus-public" value="http://localhost:8081/nexus/content/groups/public"/>
  <resolvers>
    <ibiblio name="nexus" m2compatible="true" root="${nexus-public}"/>
  </resolvers>
</ivysettings>

build.xml

This is an Ant script which can be executed by calling the Ant binary in the directory which contains this file. Important here is that the combination of the artifact definition in ivy.xml and the pattern in the ivy:retrieve call determines which file is actually fetched.

<project name="ivytestproject" default="init"
xmlns:ivy="antlib:org.apache.
ivy.ant">
<!--
================
Build properties
================
-->
<property name="build.dir" location="build"/>
<property name="ivy.reports.dir" location="${build.dir}/ivy-reports"/>
<!--
===========
Build setup
===========
-->
<target name="init">
<ivy:settings file="ivysettings.xml" />
<ivy:retrieve pattern="${build.dir}/[artifact](-[revision])(-[classifier]).[ext]"/>
<ivy:report todir='${ivy.reports.dir}' graph='false' xml='false'/>
<ivy:cachepath pathid="runtime.path" conf="runtime"/>
</target>
</project>

Result

When you execute Ant in the same directory as the build.xml, Ivy fetches the artifact from Nexus and puts it in a build directory. There you can do whatever Ant scripting you like with it. On the Ant command line, you need to specify the library directory in which you have put Apache Ivy (and probably other Ant dependencies). See the example below.

 [maarten@hotspot ivytest]$ cat runme.sh  
#!/bin/sh
/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/modules/org.apache.ant_1.9.2/bin/ant -lib /home/maarten/ivytest/apache-ivy-2.4.0-rc1
[maarten@hotspot ivytest]$ ./runme.sh
Buildfile: /home/maarten/ivytest/build.xml
init:
[ivy:retrieve] :: Apache Ivy 2.4.0-rc1 - 20140315220245 :: http://ant.apache.org/ivy/ ::
[ivy:retrieve] :: loading settings :: file = /home/maarten/ivytest/ivysettings.xml
[ivy:retrieve] :: resolving dependencies :: nl.amis.smeetsm.application#ivytest;working@hotspot.s-bit.nl
[ivy:retrieve] confs: [runtime]
[ivy:retrieve] found nl.amis.smeetsm.application#testproject;1.1 in nexus
[ivy:retrieve] :: resolution report :: resolve 85ms :: artifacts dl 4ms
---------------------------------------------------------------------
|                  |            modules            ||   artifacts   |
|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
|      runtime     |   1   |   0   |   0   |   0   ||   1   |   0   |
---------------------------------------------------------------------
[ivy:retrieve] :: retrieving :: nl.amis.smeetsm.application#ivytest
[ivy:retrieve] confs: [runtime]
[ivy:retrieve] 1 artifacts copied, 0 already retrieved (0kB/7ms)
[ivy:report] Processing /home/maarten/.ivy2/cache/nl.amis.smeetsm.application-ivytest-runtime.xml to /home/maarten/ivytest/build/ivy-reports/nl.amis.smeetsm.application-ivytest-runtime.html
BUILD SUCCESSFUL
Total time: 6 seconds
[maarten@hotspot ivytest]$ ls build
classes ivy-reports test-classes testproject-1.1-distribution.zip test-reports

The example provided in this article is specific to my testproject and Nexus installation/configuration. It can be downloaded here though.

Conclusion

Both the Nexus API and Apache Ivy provide means to retrieve artifacts from Nexus. Depending on your preference of scripting language, you can take either path or choose an alternative such as Maven for retrieving artifacts. When using Apache Ivy, you should be confident in your Ant scripting skills. When using the REST API, you can use shell scripting (probably using curl), Python, Perl or whatever else you like. My conclusion is that Nexus provides many options for retrieving artifacts, which helps Nexus find a place in almost every build process.

Oracle introduces API Manager!

Oracle has introduced a new product: API Manager (you can find the official documentation here). API Manager is an important addition to the already impressive Oracle SOA stack. In this article I'll explain what this new product does and how it helps in managing your APIs. I will focus on the features and benefits of this product and will also elaborate a little on my current experiences with it.


API Manager

What does API Manager do?

API Manager is a product which extends the Service Bus functionality and provides an API Manager Portal to manage APIs and browse analytics. API Manager allows you to save certain metadata as part of a Service Bus proxy service. This metadata is used to control access to an API and to provide data on its usage. SOAP and REST APIs (HTTP APIs) are supported.


As you can see in the screenshot, you can set an API as managed or unmanaged. If an API is managed, you can only call it if you have a registered subscription. A subscription allows you to use an API key (HTTP header X-API-KEY) in order to access the API. Requests to managed APIs which do not specify a correct key are denied.
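To illustrate what a call to a managed API looks like, below is a minimal Python 2 sketch; the endpoint, payload and key are made-up placeholders and not taken from an actual installation:

import urllib2

# Hypothetical managed proxy service endpoint and subscription key.
url = "http://localhost:7101/HelloWorldService"
body = ('<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soapenv:Body><sayHello><name>Maarten</name></sayHello></soapenv:Body>'
        '</soapenv:Envelope>')

request = urllib2.Request(url, body)
request.add_header("Content-Type", "text/xml")
# API key of the subscribed application (placeholder value).
request.add_header("X-API-KEY", "11111111-2222-3333-4444-555555555555")

try:
    print urllib2.urlopen(request).read()
except urllib2.HTTPError, e:
    # A managed API returns 403 Forbidden when the key is missing or incorrect.
    print "HTTP error:", e.code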


If you test an API from inside the Service Bus console or Fusion Middleware Control however, you can still call the service without an API key.

API Manager workflow

API Manager uses several (application) roles.

Developer / Deployer
This role is not specific to API Manager. The API Developer creates a new API. Someone with the group membership Deployer can deploy it to the Service Bus.

API Consumer
The API Consumer can access the API Manager Portal, browse APIs and register as a subscriber (generate an API key and use it in requests).

API Curator
The API Curator is able to set service metadata in the Service Bus console. An API Curator can publish a service so it becomes visible in the API Manager Portal, or mark it as deprecated.

API Administrator
The API Administrator can view analytics and can import/export metadata using WLST scripts.

API Manager Portal

A subscription can be created for an API consumer in the new API Manager Portal, which is accessible at http://[host]:[port]/apimanager/. The API Manager Portal is a clean, easy-to-use interface. It uses several application roles which need to be configured before you can access the portal: API Curator, API Administrator and API Consumer. This is described in the installation manual.

Inside the portal, you can access three tabs: Subscriptions, Analytics and Catalog. Inside the Catalog and Subscriptions pages, you can create subscriptions. You first have to create an application in order to add subscriptions to it. An application has an API key, and all APIs that are part of the application use the same key.


You cannot subscribe to an API which is not published (it is not visible in the portal, and if it is still visible because you just updated the state, the subscription is denied). Also, you cannot create new subscriptions for a deprecated API. The API state (published, draft, private) and whether it is deprecated can be set in the Service Bus console.


The configuration done in the API Manager Portal can be imported and exported as a configuration JAR using WLST.

Using API Manager

Publish the service
One of the features API Manager provides is Service Bus proxy service states. When you deploy a service, it is not available externally but gets the state 'Draft'. When you call a service in this state, you get an HTTP 403 Forbidden. You have to explicitly publish the service.

Can't update the API key?
I did not see a mechanism in the API Manager Portal to update an API key. Probably this can be done 'the hard way' by looking at the database. Maybe you should ask yourself why you would want to change this key.

Key propagation
When using composite services as API, you will need to propagate the API key in service calls. The Service Bus and BPEL have their own mechanisms for this. Other components will also have their own way of doing this.

Circumventing API Manager
I was curious whether I could circumvent the API Manager API key header check. Suppose I have two Service Bus proxy services: one of the services is managed and the other is not, and the unmanaged service calls the managed service without an API key. The call from the unmanaged service also gets an HTTP 403 response. This is a very good thing! It allows API Manager to manage internal and external APIs. If a service wants to use another (managed) service, it has to be registered as a subscriber. I have not tried using a Java API or a direct binding to call the service.

Some other things to mind

Upgrade existing DB schemas
The API Manager installation patches the Repository Creation Utility. If you create schemas with the patched RCU, you can use API Manager. I have not seen (I could have missed this) a mechanism to upgrade existing database schemas with the functionality required by API Manager.

Service Bus extension
API Manager can be used for Service Bus proxy services. I have not yet seen support for other Oracle SOA components/composites. This is understandable, since it is good practice to use the Service Bus in front of other components. It would be nice though if it were not dependent on a Service Bus implementation.

Installation
I followed the standard installation and created three users which were in the groups API Administrator, API Curator and API Consumer. I had assigned the application roles as described (I could have made a mistake though). When I tried to access the API Manager Portal, I could only log in with the user having the API Administrator role; the other users were not allowed access. None of the users were allowed access to the Service Bus Console (after login I got HTTP 403 Forbidden messages). The API Administrator user did not have enough permissions (I could, for example, not create or view applications). In order to write this article, I created a superuser which was assigned all groups. With this user I could access all the required functionality to get everything working. My guess is that more permissions are required to use the described roles; I have not looked into this further.

No analytics?
During the writing of this blog I did not see any analytics data. Later I found out this was because I had not enabled monitoring for the service (API tab in the Service Bus console). If you want to use this feature, do not forget to enable it!

Conclusion

API Manager adds important new features to the Oracle Service Bus. It provides a mechanism to secure APIs, provides insight into consumers and allows more active management of the API lifecycle. This product does not work on a harvest of services to which metadata is added; it works on the actual service as you see it in the Service Bus console. This allows true management and does not introduce an abstraction which might get out of sync with the actual situation.

In order to use it though, some (minor) code changes are required. You need to supply a specific API Manager HTTP header when you want to access a managed service. This API key can be different per environment and consumers should be able to deal with these differences. Also if you want to use this, you need to look into the roles/groups/users. Using the roles though, you can implement a structured workflow which will also benefit your development process.

Because API Manager is not easily circumvented, consumers need to register in order to use an API. A danger here is that everyone starts using the same API key or every environment uses the same API key. This is of course not secure and voids the benefits of additional insight into your consumers. This insight is in my opinion the most important feature of this product. Not only do you know who uses your API (dependencies!), but you can even gather statistics on them. If, for example, requests originating from a certain consumer take a long time to process, you can take action and contact this consumer to perhaps optimize their API usage. Also, the mechanism of draft and deprecated APIs is very useful to indicate that something shouldn't be used yet or shouldn't be used by new consumers. A developer can still test the service using the test console. In summary, this looks like a very useful product. I like it!

Deploying SOA Suite 12c artifacts from Nexus

SOA Suite 12c introduces Maven support to build and deploy artifacts. Oracle has provided extensive documentation on this, and there are already plenty of blog posts describing how to do this. I will not repeat those posts (I will only summarize the steps). What I couldn't find quickly enough, though, was how to deploy artifacts from an artifact repository to an environment. This is a task often done by provisioning software such as Puppet or Jenkins. Sometimes though you want to do this from a command line. In this post I'll briefly describe the steps required to get your Continuous Delivery efforts going and how to deploy an artifact from the Nexus repository to a SOA Suite runtime environment.

Preparations

In order to allow building and deploying of artifacts without JDeveloper, several steps need to be performed. See the official Oracle documentation on this here: http://docs.oracle.com/middleware/1213/soasuite/develop-soa/soa-maven-deployment.htm#SOASE88425

Preparing your artifact repository

Using an artifact repository is optional but highly recommended. The following steps are required (the last two are described on the blog of Edwin Biemond: http://biemond.blogspot.nl/2014/06/maven-support-for-1213-service-bus-soa.html):
  • install the Oracle Maven Sync plugin (Oracle manual 48.2.1)
  • use the Oracle Maven Sync plugin to put libraries required for the build/deploy process in your local Maven repository
  • (if using an artifact repository) put the Oracle Maven Sync plugin in your artifact repository and use it to add the required Oracle libraries
Preparing your project

Referring to the competition here, but Roger Goossens has done a good job at describing what needs to be done: http://blog.whitehorses.nl/2014/10/13/fusion-middleware-12c-embracing-the-power-of-maven/. Mind here though that the serverUrl is provided as a hardcoded part of the sar-common pom.xml. You can of course override this by providing it to Maven on the command line. If you want it to always be provided on the command line (to avoid accidentally not overriding it), don't add it to the sar-common pom.xml.
  • make sure your project can find your MDS (update the appHome and oracleHome properties in your pom.xml)
  • create a jndi.properties file (you will want to replace properties in this file during your build process)
  • update the composite.revision property
Now you can compile and deploy your project to Nexus and to a runtime SOA environment. During the next steps, I'll use a test project already deployed to Nexus (a simple HelloWorld SCA composite).


Deploy from Nexus

The repository is prepared. Your project is prepared. You can deploy to an environment from your local directory. You can deploy to Nexus from your local directory. However, during your build process, you don't want to build and deploy from your source directory / version control, but you want to deploy from your artifact repository. How do you do that? Usually a provisioning tool does this, but such a tool is not always available at a customer or their process does not allow using such tools. We can fall back to the command-line for this.

Get the SAR

During the next step, we start deploying. Because the sarLocation parameter used during deployment cannot be a URL, you first have to download your SAR manually by using the repository API. For Nexus, several options are described here and a sample is provided below.

wget http://localhost:8081/nexus/service/local/repositories/snapshots/content/nl/amis/smeetsm/HelloWorld/1.0-SNAPSHOT/HelloWorld-1.0-20150314.150901-1.jar

You can also use curl instead of wget if you prefer. wget and curl are Linux tools; PowerShell 3.0 (Windows 7+) also provides its own variant of wget.

Deploy the SAR

I created a dummy pom.xml file which does nothing but keep Maven from complaining.

<?xml version="1.0" encoding="UTF-8"?>  
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<modelVersion>4.0.0</modelVersion>
<groupId>nl.amis.smeetsm</groupId>
<artifactId>DeployApp</artifactId>
<version>1.0-SNAPSHOT</version>
</project>

Now you can deploy your downloaded SAR:

 [maarten@localhost mvntest]$ mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=HelloWorld-1.0-20150314.150901-1.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101  
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building DeployApp 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- oracle-soa-plugin:12.1.3-0-0:deploy (default-cli) @ DeployApp ---
[INFO] ------------------------------------------------------------------------
[INFO] ORACLE SOA MAVEN PLUGIN - DEPLOY COMPOSITE
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] setting user/password..., user=weblogic
Processing sar=HelloWorld-1.0-20150314.150901-1.jar
Adding shared data file - /home/maarten/jdeveloper/mywork/mvntest/HelloWorld-1.0-20150314.150901-1.jar
INFO: Creating HTTP connection to host:localhost, port:7101
INFO: Received HTTP response from the server, response code=200
---->Deploying composite success.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.479s
[INFO] Finished at: Sun Mar 15 16:09:24 CET 2015
[INFO] Final Memory: 14M/218M
[INFO] ------------------------------------------------------------------------

At first I thought that fetching the project pom.xml file would be enough and that this pom.xml could be used for deployment. This did not work for me, since the plugin expects to find the SAR file in the target directory (even when I overrode this location).

 [maarten@localhost mvntest]$ mvn -f HelloWorld-1.0-20150314.150901-1.pom com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=./HelloWorld-1.0-20150314.150901-1.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101  
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building HelloWorld 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- oracle-soa-plugin:12.1.3-0-0:deploy (default-cli) @ HelloWorld ---
[INFO] ------------------------------------------------------------------------
[INFO] ORACLE SOA MAVEN PLUGIN - DEPLOY COMPOSITE
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] setting user/password..., user=weblogic
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.007s
[INFO] Finished at: Sun Mar 15 16:04:03 CET 2015
[INFO] Final Memory: 15M/218M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.oracle.soa.plugin:oracle-soa-plugin:12.1.3-0-0:deploy (default-cli) on project HelloWorld: file not found: /home/maarten/jdeveloper/mywork/mvntest/target/sca_HelloWorld_rev1.0-SNAPSHOT.jar -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

Conclusion

Oracle has done a very nice job at providing Maven support for SOA Suite composites in SOA Suite 12c. The documentation provides a good start and several blog posts are already available for filling your artifact repository with Service Bus and SCA composite projects. In this blog post I have described how you can deploy your composite from your artifact repository to an environment using the command-line.

Of course a provisioning tool is preferable, but when such a tool is not available or the tool does not have sufficient Maven support, you can use the method described in this post as an alternative. It can of course also be used if you want to create a command-line-only release for the operations department. If you want to provide a complete command-line installation without requiring settings.xml configuration to find the repository (needed to allow usage of the oracle-soa-plugin), you need to provide a separate Maven installation with a settings.xml in your release. If the installation is performed from a location which cannot reach your artifact repository, you need to provide the repository as part of your release. These are workarounds though.

Exposing JMS queues and topics with a JAX-WS webservice

Everyone can do HTTP calls and thus call most webservices. Interfacing with JMS queues or topics though is a bit more difficult (when not using Oracle SOA Suite). An alternative is using custom code. This usually requires libraries, JNDI lookups, opening connections and such. Because I wanted to make it easy for myself to put stuff on queues and topics, I created a simple JAX-WS wrapper service. By using this service, JMS suddenly becomes a whole lot easier.


Implementation

If you just want to download and use the code, go to the usage section. I wrote the code in a short time span. It could use some improvements to make the request message better and to allow dequeueing. Also, I have not tested it under load, and I might not do a nice cleanup of the connection.

Getting started

The implementation is relatively straightforward if you're a bit familiar with JMS programming. There are some things to mind though. The first thing I encountered was some difficulty after I selected the JAX-WS Sun reference implementation in JDeveloper when creating my JAX-WS webservice. I should of course have selected the Weblogic implementation to avoid issues (such as a missing metro-default.xml and missing classes after having added that file). I deleted the application and started over again; no issues the second time.


This next part is also shown in the title image. I first obtain the Context which is easy since the webservice is running in the application server. Using this context you can obtain a Destination by doing a JNDI lookup. This Destination can be used to obtain a ConnectionFactory. Using this ConnectionFactory you can obtain... yes... a Connection! This Connection can be used to obtain a Session. This Session in turn can be used to create a TextMessage and a MessageProducer. You can imagine what those two can do together.

Avoid separate code for queues and topics

It is important to realize that for this implementation it is not relevant if you are posting to a queue or a topic. Specific destinations exist for topics and queues but you can just as well use the Destination class itself. The same goes for the ConnectionFactory. Using these common classes avoids duplication in the code.
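To make this concrete, here is a minimal Jython-style sketch of the send flow using only the generic classes. It assumes the code runs inside the application server (so a plain InitialContext works), the MyQueue JNDI name used later in this post, and the default weblogic.jms.ConnectionFactory; the actual service in the downloadable project may differ.

from javax.naming import InitialContext
from javax.jms import Session

ctx = InitialContext()                              # running inside the application server
dest = ctx.lookup("MyQueue")                        # javax.jms.Destination: works for queues and topics
cf = ctx.lookup("weblogic.jms.ConnectionFactory")   # generic javax.jms.ConnectionFactory

connection = cf.createConnection()
try:
    session = connection.createSession(False, Session.AUTO_ACKNOWLEDGE)
    producer = session.createProducer(dest)
    producer.send(session.createTextMessage("<hello>world</hello>"))
finally:
    connection.close()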

JMSProperties and JMSHeaders

JMSProperties
I didn't like this part. The JMSProperties are custom properties which can have a specific type such as integer, string, float, double, boolean. There are separate methods on TextMessage instances to set these different types. In an XSD this would have been a choice. I didn't do contract first development though and a Java implementation of an XSD choice isn't something which can be called pretty (http://blog.bdoughan.com/2011/04/xml-schema-to-java-xsd-choice.html). Thus I supplied a string and an enum indicating the type in order to map it to the correct method and set the property.

JMSHeaders
The JMSHeaders also weren't fun. The TextMessage class has several methods specific to individual headers! What I wanted, though, was just to specify name/value pairs and let the service set the values based on that. I had to create a mapping to the header-specific methods of the TextMessage class and do a type conversion from string to the input type of the specific method. This would have been easier with Oracle BPEL and invoke activity properties.

Base64

I chose to supply the message as Base64. Why? Well, because escaping XML doesn't look good and we're not even sure every message is going to be XML. We might want to send JSON, and JSON escapes differently. In order to avoid escape issues, Base64 always works. I used Apache Commons Codec to do the Base64 part. For quick online encoding/decoding you can use something like: https://www.base64encode.org/. Beware though not to feed the site business-sensitive information.
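The encoding itself is trivial. A small Python 2 example of the round trip (with a made-up JSON payload):

import base64

payload = '{"name": "Maarten"}'             # message content, XML or JSON
encoded = base64.b64encode(payload)         # value to supply to the wrapper service
print encoded                               # eyJuYW1lIjogIk1hYXJ0ZW4ifQ==
print base64.b64decode(encoded)             # round trip back to the original message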

Usage

You can download the code here. The project is specifically written to run on Weblogic server (developed on the 12.1.3 SOA Suite quickstart). A WAR is included. It might also run on older SOA Suite versions with some minor changes.

First you have to create a queue or topic. A queue is easiest for testing. You can look at, for example, http://middlewaremagic.com/weblogic/?p=1987 for how to create a queue. I've created a queue called MyQueue, which I supply as the JNDI name.

After you deploy the service, you can call it using the Enterprise Manager test console, SOAP UI or anything else which can do HTTP. After a call, you can verify in the Weblogic console that the message has arrived.



Warning

Beware though that you are creating a hole in the Weblogic security layer by exposing JMS queues and topics to 'the outside'. This webservice needs some pretty good security. I therefore recommend using it only for development and testing purposes and avoiding it in a production environment.

Combine version control (SVN) and issue management (JIRA) to improve traceability

Version control and bug tracking systems are found in almost every software development project. Both contain information on release content. In version control, it is usual (and a best practice) to supply an issue number when code is checked in. Version control also allows identification of the code which is in a release (by looking at release branches). Issue management allows attaching metadata to issues, such as the fix release and test status. This is usually what release management thinks is in a release.

In this article I will provide a simple example on how you can quickly add value to your software project by improving traceability. This is done by combining the information from version control (SVN) and issue management (JIRA) to generate release notes and enforcing some version control rules.

To allow this to work, certain rules need to be adhered to.
  • code is committed using a commit message or tag which allows linking of code to issue or change
  • it should be possible to identify the code which is part of a release from version control
  • the bug tracking system should allow a selection of issues per release

Version control; link code to function

In this example I'll talk about Subversion, since I have the most experience with it. Git also supports a similar mechanism of commit hooks. SVN can easily be installed and a repository created by following what is described at: http://www.civicactions.com/blog/2010/may/25/how_set_svn_repository_7_simple_steps

First you need to make sure you can link your code to your functionality. This is easily done with commit messages. In a small team you can quickly agree on a standard and use that. When the team grows larger and more distributed, enforcing standards becomes more of a challenge. SVN provides pre-commit hooks which can provide the needed functionality to require a certain format in the commit message. This avoids deviations from the agreed standard and makes it easier to extract (reliable) information from version control commit messages.

After creation of this repository, there will be a 'hooks' folder underneath the specified directory. Templates for hooks are provided there. Those are shell scripts, however, and I prefer Perl for this. Mind though that the pre-commit hook script (even if it is a Perl file) should be executable!

In the below script I check for the format of a JIRA issue. You can also look at: http://stackoverflow.com/questions/10499098/restricting-subversion-commits-if-the-jira-issue-key-is-not-in-the-commit-messag. This allows commits to be prevented by directly checking Jira. If you want to allow check-ins specifying a JIRA ID while not checking JIRA itself, you can use the below example. It also checks the directory (myproject directly under the repository root). Usually multiple projects use the same repository and you don't want to bother everyone with your beautiful commit standards.

#!/usr/bin/perl -w
use strict;

my $repos    = $ARGV[0];
my $txn      = $ARGV[1];
my $svnlook  = '/usr/bin/svnlook';
my $require  = '\[([A-Z_0-9]+-[0-9]+)\]';
my $checklog = "N";

foreach my $line (`$svnlook changed -t "$txn" "$repos"`)
{
    chomp($line);
    if ($line !~ /^\s*(A|D|U|UU|_U)\s*(.+)$/)
    {
        die "!!Script Error!! Can't parse line: $line\n";
    } else {
        if ($2 =~ /^myproject.*$/)
        {
            $checklog = "Y";
        }
    }
}

if ($checklog ne "N")
{
    my $log = `$svnlook log -t $txn $repos`;
    if ($log =~ /$require/) {
        exit 0;
    } else {
        die "No JIRA issue specified. Commit aborted!\n";
    }
}

 [maarten@localhost trunk]$ svn commit -m'Please kick me'
Adding trunk/test.txt
Transmitting file data .
svn: Commit failed (details follow):
svn: Commit blocked by pre-commit hook (exit code 255) with output:
No JIRA issue specified. Commit aborted!

[maarten@localhost trunk]$ svn commit -m'[ABC-1]: Nice commit message'
Adding trunk/test.txt
Transmitting file data .
Committed revision 386327.

Extract issue numbers

From JIRA

The JIRA API can be used to extract issues using a selection. Your selection might differ. Below is just an example giving me the issues of project ABC assigned to user smeetsm with password "password". It is a nice example of how simple the JIRA API is. It also shows how to extract specific information from a JSON string using the command line. You can see the same regular expression as the one used in the pre-commit hook.

/usr/bin/curl -u smeetsm:password http://jira/rest/api/2/search?jql=project=ABC%20and%20assignee=smeetsm \
  | grep -Eho '"key":"([A-Z_0-9]+-[0-9]+)"'

This command can have output like:

"key":"ABC-1"
"key":"ABC-2"
"key":"ABC-3"

If you pipe this to a file (issues.txt), you can easily convert this to an XML by using something like:

echo \<issues\>; cat issues.txt | sed 's/"key":"\(.*\)"/\<issue\>\1\<\/issue\>/'; echo \</issues\>

This will have as output:

<issues>
<issue>ABC-1</issue>
<issue>ABC-2</issue>
<issue>ABC-3</issue>
</issues>

I chose this method of converting the JSON to XML since I wanted minimal overhead in my process (quick, easy, as few external dependencies as possible).
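If a Python dependency is acceptable, a roughly equivalent sketch (using the same hypothetical JIRA URL and credentials as in the curl example above) could be:

import base64
import json
import urllib2

# Same hypothetical JIRA query and credentials as in the curl example above.
url = "http://jira/rest/api/2/search?jql=project=ABC%20and%20assignee=smeetsm"
request = urllib2.Request(url)
request.add_header("Authorization", "Basic " + base64.b64encode("smeetsm:password"))

result = json.load(urllib2.urlopen(request))

# The search response contains an "issues" array; print the keys as XML.
print "<issues>"
for issue in result["issues"]:
    print "<issue>" + issue["key"] + "</issue>"
print "</issues>"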
 

From SVN

You can use the following Python script to parse the SVN log and get the issues checked in from there. The script requires Python 2.7 and the lxml library. The lxml library (+installer) can be downloaded at: https://pypi.python.org/pypi/lxml/ or you can download it using the Python package manager PIP (supplied with Python 2.7.9+).

I have specified a range between 2015-03-09 and now (HEAD) to identify the release. Identifying a release is usually done by looking at a release branch, but the method is similar. You can again see the same regular expression which has been used in the pre-commit hook and in the JIRA API call.


import os
import xml.etree.ElementTree as ET
import re
import subprocess

def getsvnlog():
    p = subprocess.Popen(['/usr/bin/svn','log','--verbose','--xml','-r','{2015-03-09}:HEAD','file:///home/maarten/myrepository/myproject'],stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    return out

def getfilesfromlogentry(logitem):
    result=[]
    for path in logitem.findall("./paths/path[@kind='file']"):
        result.append(path.text)
    return result

def getfieldfromlogentry(logitem,fieldname):
    result=[]
    for item in logitem.findall("./"+fieldname):
        result.append(item.text)
    return result

def parse_svnlog():
    svnlog = getsvnlog()
    root = ET.fromstring(svnlog)
    print "<issues>"
    for logitem in root.findall("./logentry"):
        for msg in getfieldfromlogentry(logitem,"msg"):
            p = re.compile('([A-Z_0-9]+-[0-9]+)')
            iterator = p.finditer(msg)
            for match in iterator:
                print "<issue>" + msg[match.start():match.end()] + "</issue>"
    print "</issues>"
    return root

parse_svnlog()

This will yield a result like:

<issues>
<issue>ABC-1</issue>
<issue>ABC-3</issue>
<issue>ABC-4</issue>
</issues>


Generate release notes

Once you have issue numbers from version control and from issue management, you can do interesting things like generating release notes or just a report. The nice thing here is that by comparing the issues from version control and issue management, you can draw interesting conclusions.

If, for example, you have the following issues from SVN (issuessvn.xml):

<issues>
<issue>ABC-1</issue>
<issue>ABC-3</issue>
<issue>ABC-4</issue>
</issues>

And the following from JIRA (issuesjira.xml):

<issues>
<issue>ABC-1</issue>
<issue>ABC-2</issue>
<issue>ABC-3</issue>
</issues>

You'll notice ABC-4 is only present in SVN and ABC-2 is only present in JIRA. Why is that? Has the developer checked in code he was not supposed to? Has the developer checked in the code in the correct release branch? Is the JIRA issue status correct? It is something which should be investigated and corrected.

You can use the following Python script combined with the following XSL to produce output. The layout and contents of the release notes are of course greatly simplified; these are usually very customer specific.

transform.py

 from xml.dom.minidom import *  
import lxml.etree as ET
dom1 = ET.Element("dummy")
xslt = ET.parse("transform.xsl")
transform = ET.XSLT(xslt)
print(ET.tostring(transform(dom1), pretty_print=True))

The XSLT shows how you can load XML files and use a reusable template call to compare the results.

transform.xsl:

<?xml version="1.0"?>  
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:variable name="issues1" select="document('issuessvn.xml')"/>
<xsl:variable name="issues2" select="document('issuesjira.xml')"/>
<xsl:template match="/">
<html>
<body>
<xsl:for-each select="$issues1/issues/issue">
<xsl:call-template name="getIssue">
<xsl:with-param name="search" select="."/>
<xsl:with-param name="content" select="$issues2"/>
<xsl:with-param name="ident1" select="'SVN'"/>
<xsl:with-param name="ident2" select="'JIRA'"/>
<xsl:with-param name="showfound" select="true()"/>
<xsl:with-param name="shownotfound" select="true()"/>
</xsl:call-template>
</xsl:for-each>
<xsl:for-each select="$issues2/issues/issue">
<xsl:call-template name="getIssue">
<xsl:with-param name="search" select="."/>
<xsl:with-param name="content" select="$issues1"/>
<xsl:with-param name="ident1" select="'JIRA'"/>
<xsl:with-param name="ident2" select="'SVN'"/>
<xsl:with-param name="showfound" select="false()"/>
<xsl:with-param name="shownotfound" select="true()"/>
</xsl:call-template>
</xsl:for-each>
</body>
</html>
</xsl:template>
<xsl:template name="getIssue">
<xsl:param name="search"/>
<xsl:param name="content"/>
<xsl:param name="ident1"/>
<xsl:param name="ident2"/>
<xsl:param name="showfound"/>
<xsl:param name="shownotfound"/>
<xsl:choose>
<xsl:when test="$content/issues/issue[text()=$search]">
<xsl:if test="$showfound">
<p>Issue <xsl:value-of select="$search"/> found in <xsl:value-of select="$ident1"/> and <xsl:value-of select="$ident2"/></p>
</xsl:if>
</xsl:when>
<xsl:otherwise>
<xsl:if test="$shownotfound">
<p>Issue <xsl:value-of select="$search"/> found in <xsl:value-of select="$ident1"/> but not in <xsl:value-of select="$ident2"/></p>
</xsl:if>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>

Finally

Take a look at the sample generated release notes below. Of course a very simple sample only focusing on version control and issue management.

Issue ABC-1 found in SVN and JIRA
Issue ABC-3 found in SVN and JIRA
Issue ABC-4 found in SVN but not in JIRA
Issue ABC-2 found in JIRA but not in SVN

You now have a means to check whether the developer was allowed to check the code into version control and what the status of the change/bug was. You are now also able to identify which parts of other issues might also be part of the release (by accident?). If you allow developers to indicate which issues are part of the release, they will most likely not be 100% accurate (describe release content with developer prejudice). If you automate this, you can be at least more accurate. Because you check version control against issue management, you also have a means to make the issue management information more accurate. Maybe for example someone forgot to update the issue status or put in the correct fix release. Both improve traceability from code to release.

Small note

You can do many things with version control hooks. You can do code compliance checks, check character sets, check filename conventions. All of these will help improve code quality. You can provide people with all kinds of interesting reports from version control and issue management about developer productivity and the quality of work they provide. Be careful with this and keep the custom scripts small and maintainable (unless of course you want to stay there forever).

Searching Oracle Service Bus Pipeline Alert contents

There are several ways to monitor messages passing through the Service Bus. Using pipeline alerts is one of them. Pipeline alerts can be searched in the Enterprise Manager based on several parameters, such as the summary or when they occurred. Usually an important part of the message payload is saved in the content of the alert. This content cannot be searched from the Enterprise Manager. In this post I will provide an example of logging Service Bus request and response messages using pipeline alerts and a means to search alert contents for a specific occurrence. The example provided has been created in SOA Suite 12.1.3, but the script also works in SOA Suite 11.1.1.6.


Service Bus Pipeline Alerts

The Oracle Service Bus provides several monitoring mechanisms. These can be tweaked in the Enterprise Manager.


In this example I'm going to use Pipeline Alerts. Where you can find them in the Enterprise Manager has been described at: https://technology.amis.nl/2014/06/27/soa-suite-12c-where-to-find-service-bus-pipeline-alerts-in-enterprise-manager-fusion-middleware-control/. I've created a small sample process called HelloWorld. This process can be called with a name and returns 'Hello name' as a response. The process itself has a single AlertDestination and two pipeline alerts: one for the request and one for the response. These pipeline alerts write the content of the header and body variables to the content field of the alert.


When I call this service with 'Maarten' and with 'John', I can see the created pipeline alerts in the Enterprise Manager.


Next I want to find the requests done by 'Maarten'. I'm not interested in John. I can search for the summary, but this only indicates the location in the pipeline where the alert occurred. I want to search the contents or description as it is called in the Enterprise Manager. Since clicking on every entry is not very time efficient, I want to use a script for that.


Search for pipeline alerts using WLST

At first I thought I could use a method like the one described at: http://docs.oracle.com/cd/E21764_01/web.1111/e13701/store.htm#CNFGD275 in combination with the location of the file store which is used for the alerts: servers/[servername]/data/store/diagnostics. The dump of this file store, however, was not readable enough for me, and this method required access to the filesystem of the application server. I decided to walk the WLST path.

The WLST script below lists the pipeline alerts where 'Maarten' is in the contents/description. The script works on Service Bus 11.1.1.6 and 12.1.3. You should of course replace the obvious variables like username, password, url, servername and searchfor.

import datetime

#Conditionally import wlstModule only when script is executed with jython
if __name__ == '__main__':
    from wlstModule import * #@UnusedWildImport
    print 'starting the script ....'

username = 'weblogic'
password = 'Welcome01'
url = 't3://localhost:7101'
servername = 'DefaultServer'
searchfor = 'Maarten'

connect(username, password, url)

def get_children():
    return ls(returnMap='true')

domainRuntime()
cd('ServerRuntimes')
servers = get_children()
for server in servers:
    #print server
    cd(server)
    if server == servername:
        cd('WLDFRuntime/WLDFRuntime/WLDFAccessRuntime/Accessor/DataAccessRuntimes/CUSTOM/com.bea.wli.monitoring.pipeline.alert')
        end = cmo.getLatestAvailableTimestamp()
        start = cmo.getEarliestAvailableTimestamp()
        cursorname = cmo.openCursor(start, end, "")
        if cmo.hasMoreData(cursorname):
            records = cmo.fetch(cursorname)
            for record in records:
                #print record
                if searchfor in record[9]:
                    print datetime.datetime.fromtimestamp(record[1]/1000).strftime('%Y-%m-%d %H:%M:%S')+' : '+record[3]+' : '+record[13]
            cmo.closeCursor(cursorname)
    cd('..')

The output in my case looks like:

 2015-04-18 12:59:21 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineRequest  
2015-04-18 12:59:21 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineResponse
2015-04-18 13:18:39 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineRequest
2015-04-18 13:18:39 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineResponse

Now you can extend the script to provide more information or look up the relevant requests in the Enterprise Manager.

Unleash the power of Java API's on your WLST scripts!

Oracle SOA Suite and many other Oracle products have extensive Java APIs to expose their functionality. WLST can often be used for relatively coarse-grained actions. WLST (the version supplied with Weblogic 12.1.3) uses Jython 2.2.1. Jython is the Python scripting language implemented on the Java Virtual Machine and allows easy integration with Java. In this article I describe how you can unleash the power of these Java APIs on your WLST scripts!


Considerations

Why WLST and not Java?

For system operators, WLST is easier to work with than Java code. For Java code you need to supply all dependencies in the classpath, and updating code requires recompilation. Also, Java code can be a bit verbose compared to WLST code and requires (for most developers) more time to write. With a WLST script you do not need to provide dependencies, since they are already present in the classpath set by the wlst.sh (or wlst.cmd) command used to start WLST scripts, and you can more easily update the scripts without needing to recompile.

Why use Java classes in WLST?

In this example I wanted to create a script which undeploys composites which are not the default revision (are not called by default). I also wanted to look at the instances: I did not want to undeploy composites which had running instances (long-running instances like BPM and ACM). WLST provides some nifty features to undeploy composites (see https://docs.oracle.com/middleware/1213/soasuite/wlst-reference-soa/custom_soa.htm#SOACR2689), for example the sca_undeployComposite command. I did not, however, see WLST commands I could use to query instances.

Undeploying composites using Java

I started out with the piece of Java code shown below. In order to make the required classes available in your project, you need to import the Weblogic Remote Client, JRF API and SOA Runtime libraries (see http://javaoraclesoa.blogspot.nl/2015/01/oracle-soa-suite-12c-soa-instance.html for a more elaborate example of using the Java API). With the Locator class you can find your composites and instances. By calling the removeCompositeForLabel method on the MBean oracle.soa.config:Application=soa-infra,j2eeType=CompositeLifecycleConfig,name=soa-infra you can undeploy composites from Java. This is based on what I found at https://community.oracle.com/thread/1632905.

 package nl.amis.smeetsm.utils.soa;  
import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;
import oracle.soa.management.facade.Composite;
import oracle.soa.management.facade.CompositeInstance;
import oracle.soa.management.facade.Locator;
import oracle.soa.management.facade.LocatorFactory;
import oracle.soa.management.util.CompositeFilter;
import oracle.soa.management.util.CompositeInstanceFilter;
public class UndeployComposites {
Locator myLocator;
MBeanServerConnection mbsc;
ObjectName mbean;
public UndeployComposites(String user, String pass, String host,
String port) throws Exception {
super();
String providerURL = "t3://" + host + ":" + port + "/soa-infra";
String mbeanRuntime = "weblogic.management.mbeanservers.runtime";
String jmxProtoProviderPackages = "weblogic.management.remote";
String mBeanName =
"oracle.soa.config:Application=soa-infra,j2eeType=CompositeLifecycleConfig,name=soa-infra";
Hashtable jndiProps = new Hashtable();
jndiProps.put(Context.PROVIDER_URL, providerURL);
jndiProps.put(Context.INITIAL_CONTEXT_FACTORY,
"weblogic.jndi.WLInitialContextFactory");
jndiProps.put(Context.SECURITY_PRINCIPAL, user);
jndiProps.put(Context.SECURITY_CREDENTIALS, pass);
myLocator = LocatorFactory.createLocator(jndiProps);
String jmxurl =
"service:jmx:t3://" + host + ":" + port + "/jndi/" + mbeanRuntime;
JMXServiceURL serviceURL = new JMXServiceURL(jmxurl);
Hashtable<String, String> ht = new Hashtable<String, String>();
ht.put("java.naming.security.principal", user);
ht.put("java.naming.security.credentials", pass);
ht.put("jmx.remote.protocol.provider.pkgs", jmxProtoProviderPackages);
JMXConnector jmxConnector =
JMXConnectorFactory.newJMXConnector(serviceURL, ht);
jmxConnector.connect();
mbsc = jmxConnector.getMBeanServerConnection();
mbean = new ObjectName(mBeanName);
}
private CompositeInstanceFilter getCompositeInstanceFilter() {
CompositeInstanceFilter myFilter = new CompositeInstanceFilter();
int[] instanceStates =
{ CompositeInstance.STATE_UNKNOWN, CompositeInstance.STATE_RUNNING,
CompositeInstance.STATE_SUSPENDED };
myFilter.setStates(instanceStates);
return myFilter;
}
public void undeployComposites() throws Exception {
CompositeFilter filter = new CompositeFilter();
CompositeInstanceFilter instanceFilter = getCompositeInstanceFilter();
int instanceCount = 0;
String dnString;
Object compositeObjArray = mbsc.getAttribute(mbean, "DeployedComposites");
for (Composite myComposite : myLocator.getComposites(filter)) {
if (!myComposite.isDefaultRevision()) {
instanceCount =
myComposite.getInstances(instanceFilter).size();
if (instanceCount < 1) {
System.out.println("Undeploying: " + myComposite.getCompositeDN());
//Get all the CompositeData objects from MBean. They contain DNs
//Note- this DN and composite.getDN()/getCompositeDN() are not same. This DN is required for undeploying
CompositeData[] compositeData = (CompositeData[])compositeObjArray;
dnString = getDNToUndeploy(compositeData, myComposite.getCompositeDN().toString());
mbsc.invoke(mbean, "removeCompositeForLabel", new Object[]{dnString},new String[]{"java.lang.String"});
}
}
}
}
private String getDNToUndeploy(CompositeData[] compositeData,
String compositeToBeUndeployed) throws Exception {
String dnString = null;
for (CompositeData tmpCData : compositeData) {
String tempDN = (String)tmpCData.get("DN");
if (tempDN.contains(compositeToBeUndeployed)) {
dnString = tempDN;
break;
}
}
return dnString;
}
public static void main(String[] args) throws Exception {
System.out.println("Initializing");
UndeployComposites me =
new UndeployComposites("weblogic", "Welcome01",
"localhost", "7101");
System.out.println("Running");
me.undeployComposites();
}
}

Rewriting the Java code to WLST

Below is the result of rewriting the Java code to WLST. This was surprisingly easy. I noticed though that entire books have been written about Jython/Java integration. Basically, with the simple translation steps below (which come quite naturally) it became easy to rewrite the Java code to WLST. The resulting example isn't a perfect one-on-one copy, but it provides the same functionality. The first thing is to replace the {} with Python indentation to indicate nesting and to remove the ; from the line endings.

Method calls

The following Java line:

private String getDNToUndeploy(CompositeData[] compositeData, String compositeToBeUndeployed) throws Exception

Becomes in WLST

def getDNToUndeploy(compositeData,compositeToBeUndeployed):

I've not paid attention to the Java access modifiers; they didn't seem very relevant for my script. Because of the introspection properties of Jython, you don't need to specify which exception is thrown.

Types and constructors

There are some other differences between Java and WLST. WLST determines its types by introspection and does not require explicit declarations or casts. Calling a constructor for example looks in Java like:

Hashtable jndiProps = new Hashtable();

and in WLST like

jndiProps = Hashtable()

The effect of the line is exactly the same.

Imports

Although pretty straightforward, the following Java import:

import java.util.Hashtable;

Looks in WLST like

from java.util import Hashtable

Arrays

Converting Python lists to Java arrays such as String[] and Object[] can be done with the array function from the jarray module. Be careful when also using a different function or module which is called array: you have to import one of them under a different name, as I have done with array from jarray, which is imported as jarray_c (see the short example below).
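As a small illustration of that naming trick (the values are only examples):

import array                               # Python's own array module, also used in the script below
from jarray import array as jarray_c       # Jython's array(), imported under a different name
from java.lang import String, Object

signature = jarray_c(["java.lang.String"], String)   # becomes a Java String[]
arguments = jarray_c(["someCompositeDN"], Object)    # becomes a Java Object[]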

import sys
import array
from jarray import array as jarray_c
from java.util import Hashtable
from javax.management import MBeanServerConnection
from javax.management import ObjectName
from javax.management.openmbean import CompositeData
from javax.management.remote import JMXConnector
from javax.management.remote import JMXConnectorFactory
from javax.management.remote import JMXServiceURL
from javax.naming import Context
from java.lang import String
from java.lang import Object
from oracle.soa.management.facade import Composite
from oracle.soa.management.facade import CompositeInstance
from oracle.soa.management.facade import Locator
from oracle.soa.management.facade import LocatorFactory
from oracle.soa.management.util import CompositeFilter
from oracle.soa.management.util import CompositeInstanceFilter

host = 'localhost'
port = '7101'
username = 'weblogic'
password = 'Welcome01'

providerURL = "t3://" + host + ":" + port + "/soa-infra"
mbeanRuntime = "weblogic.management.mbeanservers.runtime"
jmxProtoProviderPackages = "weblogic.management.remote"
mBeanName = "oracle.soa.config:Application=soa-infra,j2eeType=CompositeLifecycleConfig,name=soa-infra"

jndiProps = Hashtable()
jndiProps.put(Context.PROVIDER_URL, providerURL)
jndiProps.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory")
jndiProps.put(Context.SECURITY_PRINCIPAL, username)
jndiProps.put(Context.SECURITY_CREDENTIALS, password)
myLocator = LocatorFactory.createLocator(jndiProps)

jmxurl = "service:jmx:t3://" + host + ":" + port + "/jndi/" + mbeanRuntime
serviceURL = JMXServiceURL(jmxurl)
ht = Hashtable()
ht.put("java.naming.security.principal", username)
ht.put("java.naming.security.credentials", password)
ht.put("jmx.remote.protocol.provider.pkgs", jmxProtoProviderPackages)
jmxConnector = JMXConnectorFactory.newJMXConnector(serviceURL, ht)
jmxConnector.connect()
mbsc = jmxConnector.getMBeanServerConnection()
mbean = ObjectName(mBeanName)

instanceFilter = CompositeInstanceFilter()
instanceStates = array.array('i', [CompositeInstance.STATE_UNKNOWN, CompositeInstance.STATE_RUNNING, CompositeInstance.STATE_SUSPENDED, CompositeInstance.STATE_RECOVERY_REQUIRED])
instanceFilter.setStates(instanceStates)
filter = CompositeFilter()

def getDNToUndeploy(compositeData, compositeToBeUndeployed):
    #print compositeToBeUndeployed
    dnString = ''
    for tmpCData in compositeData:
        tempDN = tmpCData.get("DN")
        #print "tempDN: "+tempDN
        if compositeToBeUndeployed in tempDN:
            dnString = tempDN
            break
    return dnString

instanceCount = 0
compositeObjArray = mbsc.getAttribute(mbean, "DeployedComposites")
for myComposite in myLocator.getComposites(filter):
    try:
        if not myComposite.isDefaultRevision():
            instanceCount = myComposite.getInstances(instanceFilter).size()
            if instanceCount < 1:
                #print "Undeploying: " + str(myComposite.getCompositeDN())
                #Get all the CompositeData objects from MBean. They contain DNs
                #Note- this DN and composite.getDN()/getCompositeDN() are not same. This DN is required for undeploying
                dnString = getDNToUndeploy(compositeObjArray, myComposite.getCompositeDN().toString())
                print "Undeploying "+dnString
                strarray = ["java.lang.String"]
                #print "Array made"
                jarray = jarray_c(strarray, String)
                objectarray = [dnString]
                jobjectarray = jarray_c(objectarray, Object)
                #print "Array converted"
                mbsc.invoke(mbean, "removeCompositeForLabel", jobjectarray, jarray)
    except:
        print "Unexpected error: "+str(sys.exc_info()[0])+" "+str(myComposite.getCompositeDN())

Conclusion

Rewriting Java to WLST was surprisingly easy. With this example you can now use the full power of the Oracle SOA Suite Java API in WLST scripts to make them even more powerful and versatile. You can of course easily simplify the above WLST code by using sca_undeployComposite for the undeploy action and remove everything related to calling the MBean.

Authentication using OpenLDAP. Weblogic Console and BPM Worklist

In this blog I will illustrate how you can configure Weblogic Server to use OpenLDAP as authentication provider and to allow OpenLDAP users to login to the Oracle BPM Worklist application. In a previous blog I have already shown how to do Weblogic Authentication with ApacheDS (http://javaoraclesoa.blogspot.nl/2014/08/ldap-and-weblogic-using-apacheds-as.html). In this blog I will use OpenLDAP to also do BPM Worklist authentication.


Why use OpenLDAP?

Oracle Platform Security Services (OPSS) supports the use of the authentication providers listed below (see: http://docs.oracle.com/cd/E23943_01/core.1111/e10043/devuserole.htm#JISEC2474). OpenLDAP is the only open source provider in this list.
  • Microsoft Active Directory
  • Novell eDirectory
  • Oracle Directory Server Enterprise Edition
  • Oracle Internet Directory
  • Oracle Virtual Directory
  • OpenLDAP
  • Oracle WebLogic Server Embedded LDAP Directory
  • Microsoft ADAM
  • IBM Tivoli
When you can use a certain provider for Weblogic authentication, this does not automatically mean you can also use its users in Fusion Middleware applications which use JPS, such as the BPM Worklist application. The possible authentication providers in Weblogic Server cover a wider range of servers and mechanisms than can be used in JPS out of the box.

What causes this limitation? Well, most Fusion Middleware Applications (all as far as I've seen) can only look at the first LDAP provider for authentication. This is usually the default authenticator (Weblogic Embedded LDAP server). When I add another LDAP authenticator, it will be ignored. The solution is straightforward; use a single LDAP. Of course if you don't want that, you can also virtualize several LDAPs and offer them as a single LDAP for the application to talk to. The most common solutions for this are; Oracle Virtual Directory (OVD, http://docs.oracle.com/cd/E12839_01/oid.1111/e10036/basics_10_ovd_what.htm) and LibOVD. Oracle Virtual Directory is a separate product. LibOVD is provided with Weblogic Server but does not have its own web-interface and is limited in functionality (and configuration is more troublesome in my opinion). When (for example for ApacheDS) you specify the generic LDAPAuthenticator and not a specific one such as for OpenLDAP, you need to specify an idstore.type in the jps-config.xml in DOMAINDIR\config\fmwconfig. This idstore.type is limited to the list below (see https://docs.oracle.com/cd/E14571_01/core.1111/e10043/jpsprops.htm#JISEC3159);
  • XML
  • OID - Oracle Internet Directory
  • OVD - Oracle Virtual Directory
  • ACTIVE_DIRECTORY - Active Directory
  • IPLANET - Sun Java System Directory Server
  • WLS_OVD - WebLogic OVD
  • CUSTOM - Any other type
Custom can be any type, but mind that if you specify CUSTOM, you also need to specify an implementation of the oracle.security.idm.IdentityStoreFactory interface in the property 'ADF_IM_FACTORY_CLASS'; here you are limited to the available implementations or you have to build your own. When using OpenLDAP, you don't have this problem.

Configuring OpenLDAP

Installing

This has been described on various other blogs such as https://blogs.oracle.com/jamesbayer/entry/using_openldap_with_weblogic_s and http://biemond.blogspot.nl/2008/10/using-openldap-as-security-provider-in.html. I'll not go into much detail here, just describe what I needed to do to get it working.

First install OpenLDAP. I used a Windows version (http://sourceforge.net/projects/openldapwindows) since at the time of writing this blog I was sitting behind a Windows computer. There are plenty of other distributions as well. The benefit of this version (I downloaded 2.4.38) is that it pretty much works out of the box. I updated part of the etc\openldap\slapd.conf file, which you can see below, to provide my own domain and to change the Manager password. The password (you can create an SSHA version of it as described at https://onemoretech.wordpress.com/2012/12/17/encoding-ldap-passwords/) is 'Welcome01' in my case. There are also a couple of other references to the dc=example,dc=com domain in the config file which you should replace as well.

#######################################################################
# BDB database definitions
#######################################################################


database        bdb
suffix          "dc=smeetsm,dc=amis,dc=nl"
rootdn          "cn=Manager,dc=smeetsm,dc=amis,dc=nl"
# Cleartext passwords, especially for the rootdn, should
# be avoid.  See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.
rootpw          {SSHA}2HdAW3UmR5uK4zXOVwxO01E38oYanHUa
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
directory       ../var/openldap-data
# Indices to maintain

index   default         pres,eq
index   objectClass     eq
index   uniqueMember    eq

access to attrs=userPassword
       by dn="cn=Manager,dc=smeetsm,dc=amis,dc=nl" write
       by anonymous auth
       by * none

access to dn.base=""
       by * read

access to *
       by dn="cn=Manager,dc=smeetsm,dc=amis,dc=nl" write
       by * read

access to *
       by dn="cn=root,dc=smeetsm,dc=amis,dc=nl" write
       by * read
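
The rootpw above is an {SSHA} hash of 'Welcome01'. Instead of using slappasswd or the article linked above, you can also generate such a hash with a few lines of Python. This is just a sketch of the {SSHA} scheme (base64 of the SHA-1 digest of the password concatenated with a random salt, followed by that salt); the resulting value can be pasted into the rootpw directive.

import base64
import hashlib
import os

def make_ssha(password):
    # {SSHA} = base64(SHA1(password + salt) + salt), with a random 4 byte salt
    salt = os.urandom(4)
    digest = hashlib.sha1(password + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt)

# prints something like {SSHA}2HdAW3UmR5uK4zXOVwxO01E38oYanHUa
print make_ssha("Welcome01")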

Adding users

Commandline with an ldif file

I used Apache Directory Studio to add users in a graphical way (described below). I exported the result to the ldif file below (all passwords are 'Welcome01'). After importing it you have a sample Administrator user and group available which correspond to the Weblogic Server configuration described further below. You can save the file as base.ldif.

version: 1

dn: dc=smeetsm,dc=amis,dc=nl
objectClass: top
objectClass: domain
dc: smeetsm

dn: ou=people,dc=smeetsm,dc=amis,dc=nl
objectClass: top
objectClass: organizationalUnit
ou: people

dn: cn=smeetsm,ou=people,dc=smeetsm,dc=amis,dc=nl
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: smeetsm
sn: Smeets
userPassword:: e3NzaGF9Y1lEOE9hM09IdjhGWjFQSVZPWG9DMTFHeDBvQThZcVV1TGV5aVE9P
 Q==

dn: ou=groups,dc=smeetsm,dc=amis,dc=nl
objectClass: top
objectClass: organizationalUnit
ou: groups

dn: cn=Administrators,ou=groups,dc=smeetsm,dc=amis,dc=nl
objectClass: top
objectClass: groupOfNames
cn: Administrators
member: cn=smeetsm,ou=people,dc=smeetsm,dc=amis,dc=nl

On an empty database (configured with the slapd.conf above) you can import this like;

ldapadd.exe -f base.ldif -xv -D "cn=Manager,dc=smeetsm,dc=amis,dc=nl" -w Welcome01
(ldapadd.exe is in the bin directory of my OpenLDAP installation)

With a GUI (Apache Directory Studio)

Download Apache Directory Studio from: https://directory.apache.org/studio/. First create a connection in Apache Directory Studio. Use the same login data as specified in the slapd.conf file.

Host: localhost port: 389
BindDN or user: cn=Manager,dc=smeetsm,dc=amis,dc=nl
Password: Welcome01

Next, right-click Root DSE. Add a new entry. Create from scratch. Add the 'domain' object class.


Specify parent: 'dc=smeetsm,dc=amis,dc=nl'
Specify RDN: 'dc=smeetsm'


Using a similar method, you can look at the ldif file above to add the other entries. You only have to add the last class per object as the other classes are its super-classes (check though). The end result will be;



Weblogic Server configuration

Authentication provider configuration

This part has been described in other posts as well. I'll briefly repeat it here for completeness.

In your security realm add a new authentication provider, select OpenLDAPAuthenticator. Fill in the below details;

Group Base DN:  ou=groups,dc=smeetsm,dc=amis,dc=nl
Static Group Object Class:  groupOfNames
User Base DN:  ou=people,dc=smeetsm,dc=amis,dc=nl
User Object Class:  inetOrgPerson
Principal:  cn=Manager,dc=smeetsm,dc=amis,dc=nl
Host:  localhost
Credential:  Welcome01
Static Group DNs from Member DN Filter:  (&(member=%M)(objectclass=groupOfNames))
User From Name Filter:  (&(cn=%u)(objectclass=inetOrgPerson))
Group From Name Filter:  (&(cn=%g)(objectclass=groupOfNames))

Mind that the DefaultAuthenticator and your newly created authenticator should both have their control flag set to SUFFICIENT.
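
If you prefer scripting this instead of clicking through the console, a minimal WLST sketch could look as follows. This assumes a domain called 'mydomain', the realm 'myrealm' and the values used above; verify the exact attribute setter names against your WebLogic version, and restart the servers after activating the changes.

# connect to the admin server first, e.g.: connect('weblogic', 'Welcome01', 't3://localhost:7001')
edit()
startEdit()
cd('/SecurityConfiguration/mydomain/Realms/myrealm')
ldap = cmo.createAuthenticationProvider('OpenLDAPAuthenticator', 'weblogic.security.providers.authentication.OpenLDAPAuthenticator')
ldap.setHost('localhost')
ldap.setPort(389)
ldap.setPrincipal('cn=Manager,dc=smeetsm,dc=amis,dc=nl')
ldap.setCredential('Welcome01')
ldap.setUserBaseDN('ou=people,dc=smeetsm,dc=amis,dc=nl')
ldap.setUserObjectClass('inetOrgPerson')
ldap.setUserFromNameFilter('(&(cn=%u)(objectclass=inetOrgPerson))')
ldap.setGroupBaseDN('ou=groups,dc=smeetsm,dc=amis,dc=nl')
ldap.setStaticGroupObjectClass('groupOfNames')
ldap.setGroupFromNameFilter('(&(cn=%g)(objectclass=groupOfNames))')
ldap.setControlFlag('SUFFICIENT')
# the DefaultAuthenticator should also be SUFFICIENT
cd('/SecurityConfiguration/mydomain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator')
cmo.setControlFlag('SUFFICIENT')
save()
activate()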

You can now use the new user to log in to the Weblogic Console and Enterprise Manager. In this example I have added the user to the Administrators group. If you don't want that, you can create your own group and add the users to that group. The user then won't be able to log in to the Weblogic Console, but the worklist application will work if the configuration below is also done.

LibOVD configuration

You can enable LibOVD as specified on http://fusionsecurity.blogspot.nl/2012/06/libovd-when-and-how.html. Set the virtualize=true property from the Enterprise Manager Fusion Middleware control. Click the arrow before Security Provider, Click configure and add the property.


In order to allow people to login to the worklist application, they should be able to login or have a valid role as you can see in the screenshot below. You can of course also make this more specific.


Thus after the virtualize=true property has been set (and the server has been restarted), you can add users to your OpenLDAP and they can be assigned tasks. When working with tasks, I do recommend mapping the application roles to LDAP groups and not to specific users directly. This will make management of the users a lot easier at a later stage (especially when working with Organizational Units).

Now you can log in to the Oracle BPM Worklist application with the new user. It doesn't have any assigned tasks yet, so you won't see much, but you can assign tasks to this user or to the group it belongs to.


Resources

OVD JPS properties

OpenLDAP with Weblogic

OpenLDAP Windows

Encoding LDAP passwords

LibOVD idstore.type for ApacheDS?

Identity store providers

LibOVD when and how?

WebLogic Server and OpenLDAP. Using Dynamic groups

Dynamic groups in an LDAP are groups which contain a query to specify its members instead of specifying every member separately. Efficient usage of dynamic groups makes user maintenance a lot easier. Dynamic groups are implemented differently in different LDAP server implementations. Weblogic Server can be configured to use dynamic groups in order to fetch users for a specific group. In this blog I will describe how dynamic groups can be created in OpenLDAP and used in Weblogic Server.

In this example I use two users: smeetsm the developer and doej the operator. As shown in the image below, there are many servers which follow a similar access pattern for operators and developers. We are considering a case here where users do not use a shared account (e.g. weblogic) to log in to different systems. For traceability and security purposes this is a better practice than everyone using the same shared user. See http://otechmag.com/magazine/2015/spring/maarten-smeets.html for a more thorough explanation of why you would want this.


A small note though. I'm a developer and this is not my main area of expertise. I have not implemented this specific pattern in any large scale organization.

Why dynamic groups?

In the group definition you can specify a query which determines members based on specific attribute values of users (e.g. privileges). What can you achieve with dynamic groups? You provide an abstraction between users and groups which allows you to grant privileges by managing just user attributes. Groups, which are usually defined per server, do not require as many changes this way. Since there are usually many servers (see the example above) this saves a lot of time.

For example, you can use the departmentNumber attribute to differentiate what developers and operators can do on different machines. For readability I have misused the employeeType here since it allows string content. In the below image there are two users. smeetsm who is a developer and doej who is an operator. I have defined roles per server in the LDAP. The Monitor role on Server1 has smeetsm and doej as members because the memberURL query selects persons who have employeeType Developer or Operator. On Server1 only doej is Administrator and not smeetsm. This can for example be considered an acceptance test environment. On Server2 both are Administrator and Monitor. This can be considered a development environment. When smeetsm leaves and goes to work somewhere else, I just have to remove the Developer employeeType attribute at the user level and he won't be able to access Server1 and Server2 anymore. So there is no problem anymore with forgetting which server which person has access to.
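
To make this concrete, a dynamic group entry in OpenLDAP could look something like the ldif fragment below. This is an illustrative sketch based on the scenario above (the DN, object class and filter depend on your own tree and dynlist configuration); the memberURL attribute holds the query which determines the members.

dn: cn=Monitor,ou=Server1,ou=groups,dc=smeetsm,dc=amis,dc=nl
objectClass: top
objectClass: groupOfURLs
cn: Monitor
memberURL: ldap:///ou=people,dc=smeetsm,dc=amis,dc=nl??sub?(|(employeeType=Developer)(employeeType=Operator))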


OpenLDAP configuration

Install

First download OpenLDAP from http://sourceforge.net/projects/openldapwindows.

In order to reproduce the configuration I have used do the following;

Download the configuration and LDAP export: here

Put the slapd.conf in <OpenLDAP INSTALLDIR>\etc\openldap

Check if the password specified for the administrator works. I'm not sure if the seed is installation dependent. You can generate a new password by going to <OpenLDAP INSTALLDIR>\bin and executing slappasswd -h {SSHA}

Start OpenLDAP by executing  <OpenLDAP INSTALLDIR>\libexec\StartLDAP.cmd (or the shortcut in your startmenu)

Put the export.ldif in <OpenLDAP INSTALLDIR>\bin
Open a command-prompt and go to the <OpenLDAP INSTALLDIR>\bin

Execute
ldapadd.exe -f export.ldif -xv -D "cn=Manager,dc=smeetsm,dc=amis,dc=nl" -w Welcome01

Now you can browse your OpenLDAP server using for example Apache Directory Studio. In my case I could use the following connection data (I used Apache Directory Studio to connect);

BindDN or user: cn=Manager,dc=smeetsm,dc=amis,dc=nl
Password: Welcome01


The member attribute gets generated automatically (dynlist configuration in slapd.conf). This however only happens after a search is performed. WebLogic can't find this person if the group is defined as a static group (I've enabled authentication debugging to see this in the log: Server, Debug, weblogic.security.Atn);

<search("ou=Server1, ou=groups, dc=smeetsm, dc=amis, dc=nl", "(&(member=cn=doej,ou=people,dc=smeetsm,dc=amis,dc=nl)(objectclass=groupofurls))", base DN & below)> 
<getConnection return conn:LDAPConnection {ldaps://localhost:389 ldapVersion:3 bindDN:"cn=Manager,dc=smeetsm,dc=amis,dc=nl"}> 
<Result has more elements: false> 

Unless you want to invest time in getting to know your specific LDAP server in order to make the dynamic groups transparent to the client (so you can access them in a similar way as static groups), you're probably better off fixing this in WebLogic Server using dynamic groups (at least for development purposes). You can however try to let OpenLDAP produce memberof entries at the user level. This will perform better, as WebLogic then does not need to analyse all groups for memberURL entries to determine in which groups the user is present.

There are several tutorials available online for this (for example http://www.schenkels.nl/2013/03/how-to-setup-openldap-with-memberof-overlay-ubuntu-12-04/). Most however use OpenLDAP's online configuration (olc) and not slapd.conf. olc is the recommended way of configuring OpenLDAP and in most distributions the default; however not in the one I was using.

From slapd.conf to olc (optional)

This part is optional. It might help if you're planning to take a dive into the depths of OpenLDAP (don't forget the oxygen... I mean coffee). You can convert your slapd.conf to an online configuration as shown below.

See http://www.zytrax.com/books/ldap/ch6/slapd-config.html. I had some problems with the creation of the slapd.d directory, so I first created another directory called 't' and renamed it. It is a good idea to also rename the slapd.conf in order to make sure this configuration file is not used anymore.

cd <OpenLDAP INSTALLDIR>\etc
mkdir t
<OpenLDAP INSTALLDIR>\sbin\slaptest.exe -f openldap\slapd.conf -F t
move t openldap\slapd.d
move openldap\slapd.conf openldap\slapd.conf.bak

Update the last line of <OpenLDAP INSTALLDIR>\libexec\StartLDAP.cmd to use the newly created directory for its configuration
slapd.exe -d -1 -h "ldap://%FQDN%/ ldaps://%FQDN%/" -F ..\etc\openldap\slapd.d

Create a user which can access cn=config. Update <OpenLDAP INSTALLDIR>\etc\openldap\slapd.d\cn=config\olcDatabase={0}config.ldif (from: http://serverfault.com/questions/514870/how-do-i-authenticate-with-ldap-via-the-command-line)

Add between
olcMonitoring: FALSE
and
structuralObjectClass: olcDatabaseConfig
the following lines. Use the same password as in the previously used slapd.conf (created with slappasswd -h {SSHA})

olcRootDN: cn=admin,cn=config
olcRootPW: {SSHA}2HdAW3UmR5uK4zXOVwxO01E38oYanHUa

Now you can use a graphical LDAP client to browse cn=config. Authenticate using cn=admin,cn=config and use cn=config as Base DN. This makes browsing and editing configuration easier.


To add a configuration file you can do the following for example;

<OpenLDAP INSTALLDIR>\bin>ldapadd.exe -f your_file.ldif -xv -D "cn=admin,cn=config" -w Welcome01

Here I'll leave you to figure out the rest for yourself to get the memberof attribute working. Good luck! (as told before, you don't really need this unless performance becomes a bottleneck)

WebLogic configuration

In the WebLogic Console, Security Realms, myrealm, Providers, New, OpenLDAPAuthenticator.

Use the following properties;
Common: Control Flag. SUFFICIENT. Also set the control flag for the DefaultAuthenticator to SUFFICIENT.

Provider specific

Connection

  • Host: localhost
  • Port: 389
  • Principle: cn=Manager,dc=smeetsm,dc=amis,dc=nl
  • Credential: Welcome01

Users

  • User Base DN: ou=people, dc=smeetsm, dc=amis, dc=nl
  • All users Filter:
  • User from name filter: (&(cn=%u)(objectclass=inetOrgPerson))
  • User Search Scope: Subtree
  • User name attribute: cn
  • User object class: person
  • Use Retrieved User Name as Principal: (leave unchecked)

Groups

  • Group Base DN: ou=Server1, ou=groups, dc=smeetsm, dc=amis, dc=nl
  • All groups filter:
  • Group from name filter: (&(cn=%g)(|(objectclass=groupofnames)(objectclass=groupofurls)))
  • Group search scope: Subtree
  • Group membership searching: unlimited
  • Max group membership search level: 0

Static groups

  • Static Group Name Attribute: cn
  • Static Group Object Class: groupofnames
  • Static Member DN Attribute: member
  • Static Group DNs from Member DN Filter: (&(member=%M)(objectclass=groupofnames))

Dynamic groups

  • Dynamic Group Name Attribute: cn
  • Dynamic Group Object Class: groupofurls
  • Dynamic Member URL Attribute: memberurl
  • User Dynamic Group DN Attribute:

GUID Attribute: entryuuid

Points of interest

Notice that the group from name filter specifies two classes. The class for the static groups and the class for the dynamic groups.

Notice User Dynamic Group DN Attribute is empty. If you can enable generation of the memberof attribute in your LDAP server, you can use that.

Notice that the Group Base DN specifies the server. For Server2 I would use Server2 instead of Server1.

You can use static and dynamic groups together and also nest them. In the below image, Test3 is a groupofnames with smeetsm as static member. Monitor is a dynamic group.


Be careful though with the performance. It might not be necessary to search entire subtrees to unlimited depth.

Result

After the above configuration is done, I can log in with user smeetsm on Server1 into the WebLogic Console and get the Monitor role, while on Server2 with the same username I get the Administrator role.


If I change the employeeType of smeetsm to operator, I get the Administrator role on Server1. If I remove the attribute, I cannot access any system. User management can easily be done this way on user level with very little maintenance needed on group level (where there usually are many servers) unless for example the purpose of an environment changes. Then the query to obtain users needs changing.

I could not get the memberof attribute working in my OpenLDAP installation. Luckily for a development environment you don't need this but if you plan on using a similar pattern on a larger scale, you can gain performance by letting the LDAP server generate these attributes in order to allow clients (such as WebLogic Server) to get quick insight into user group memberships.

Please mind that in order for the FMW components (from the IdentityService to WebCenterContent) to use dynamic groups, you need to enable the DynamicGroups plugin in the LibOVD configuration. See: http://www.ateam-oracle.com/oracle-webcenter-and-dynamic-groups-from-an-external-ldap-server-part-2-of-2/.

Continuous delivery culture. Why do we do the things we do the way we do them?

Usually at first there is a problem to be solved. A solution is conjured and implemented. After a while, the solution is re-used and re-used again. It changes depending on the person implementing it and his or her background, ideas, motives, likes and dislikes. People start implementing the solution because other people do it or because someone orders them to do it. The solution becomes part of a culture. This can happen to such an extent that the solution causes increasing amounts of side effects; other new problems which require new solutions.


In software development, solutions are often methods and/or pieces of software which change rapidly. This is especially true for the area of continuous delivery, which is relatively young and still very much in development. Continuous delivery tools and methods are meant to increase software quality and to make software development, testing and deployment easier. Are your continuous delivery efforts actually increasing your software quality and decreasing your time to market, or have they lost their momentum and become a bother?

Sometimes it is a good idea to look at the tools you are using or are planning to use and think about what they contribute. Is using them intuitive and do they avoid errors and misunderstandings? Do you spend more time on merging changes and solving deployment issues than on actually creating new functionality? Maybe then it is time to think about how you can improve things.

In this article I will look at current usage of version control and artifact repositories. I will not go to the level of specific products. Next I will describe some common challenges which often arise and give some suggestions on how you can deal with them. The purpose of this is to try and let the reader not take continuous delivery culture for granted but be able to think about the why before and during the what.

Version Control

A purpose of software version control is to track changes in software versions. Who made which change in which version of the software? In version control you can track back what is in a certain version of the software. A release can be installed on an environment and thus indirectly version control allows tracing back which code is installed (comes in handy when something goes wrong).

When using version control, you should ask yourself; can I still without a doubt identify a (complete) version of the software? Do I still know who made which change in which version? If someone says a certain version is installed in a certain environment, can I without a doubt identify the code which was installed from my version control system?

Branching and merging; dangerous if not done right

Most software development projects I've seen have implemented a branching and merging strategy. People want to work on their own independent code-base and not be bothered by changes other people make (and the other way around); they develop their software in their own isolated sandbox. The idea is that when a change is completed (and conforms to certain agreements, such as quality and testing), it is merged back to the originating branch, after which the branch usually ceases to have a function.

Projects and code modularity

Sometimes you see the following happen, which can be quite costly and annoying. Project A and Project B partially share the same code (common components) and have their own separate, non-overlapping code. One of the projects creates a version control branch to have a stable base to work with, an independent life-cycle and not be bothered by development done by the other project. Both projects go their own way, both also editing the common components (which are now living in two places). At a certain moment they realize they need to come back together again (for example due to environment constraints (a single acceptance environment) or because Project A has something useful which Project B also wants). The branches have to be merged again. This can be a problem: are all the changes Project A and Project B have made to the common components compatible with each other? After merging is complete (this could take a while), an entire regression test has to be performed for both projects if you want to ensure the merged code still works as expected for both projects. In my experience, this can be painful, especially if automated regression testing is not in place.

Lots of copies

Branching and keeping the branch alive for a prolonged time is against the continuous delivery principle of integrating early and often.

The problem started with the creation of the branch and separate development between the different projects. A branch is essentially an efficient copy of the code. Having multiple copies of the same code is not the way we were taught to develop; Don't Repeat Yourself (DRY), Duplication Is Evil (DIE), Once And Only Once (OAOO), Single Point of Truth (SPoT), Single Source Of Truth (SSOT).

Remember agent Smith from The Matrix? Copies are not good!
Increase development time

When developing new features, the so-called 'feature branch' is often used. This can be a nice way to isolate development of a specific piece of software. However, at a certain moment the feature has to be merged with the other code, which in the meantime might have changed a lot. Essentially, the feature has to be rebuilt on another branch. This is especially so when the technology used is not easy to merge. This can in some cases dramatically increase the development time of a feature.

Danger of regression

When bug-fixes are created and there are feature branches and several release branches, is it still clear where a certain fix should go? Is your branching strategy making it easy for yourself or are you introducing an extra complexity and more work? If you do not update the branch used for the release and future releases, your fix might get lost somewhere and resurface at a later time.

A similar issue arises with release branches on which different teams develop. Team A works on release 1.0 which is in production. Team B works on release 2.0 which is still in development. Are all the fixes Team A makes (when relevant) also applied to release 2.0? Is this checked and enforced?

Solutions

In order to counter such issues, there are several possible and quite obvious solutions. Try to keep the number of separate branches small to avoid misunderstandings and reduce merge effort. Merge back (part of) the changes made on the branch regularly (integrate early and often) and check if they still function as expected. Do not forget to allow unique identification of a version of the software. Introduce a separate life-cycle for the shared components (think about project modularity) and project specific components. This way branching might not even be needed.


Artifact repository

An artifact repository is used for storing artifacts. An artifact has a certain version number. Usually this can be tracked back to a version control system. An artifact repository uniquely identifies an artifact of a specific version. Usually deployable units are stored. An artifact stored in a repository usually has a certain status. For example, it allows you to distinguish released artifacts from 'work-in-progress' or snapshot artifacts. Also an artifact repository is often used as a means to transfer responsibility of the artifact from a certain group to another. For example, development is done, it is put in the artifact repository for operations to deploy it.

When working with an artifact repository, you should consider the following (among other things). If someone says an artifact with a specific version is deployed, can I still say I know exactly what was deployed from the artifact repository, even for example after a year? Once a version is created and released, is it immutable in the artifact repository? If I have deployed a certain artifact, can I at a later time repeat the procedure and get exactly the same result?

Artifact repository as a means of communication

An artifact repository can be used to transfer an artifact from development to operations. Sometimes the artifact in the repository is not complete. For example, environment dependent properties are added by operations. Also, some of the placeholders in the artifact are replaced and several artifacts are combined and reordered to make deployment easier. Deployment tooling has changed or a property file has been added. Do I still know a year later exactly what is deployed, or have the deployment steps after the artifact is fetched from the repository modified the original artifact in such a way that it is not recognizable anymore?

Changes in deployment software

Suppose the deployment software has been enhanced with several cool new features. For example, the deployment now supports deploying to clustered environments and new property files make deployment more flexible, for example by allowing you to specify to which database the database code should be deployed. Only now I can't deploy my old artifacts anymore because the artifact structure and the added property files are different. You have a problem here.

Solutions

Carefully think about the granularity of your artifacts. Small granularity means it might be more difficult to keep track of dependencies, but you gain flexibility in your deployment software and better traceability from artifact to deployment. Large artifacts mean some actions might be required to allow deployment of your custom deployment unit (custom scripts) and you will get more artifact versions, since code changes lead to new versions and more code changes more often.

Carefully think about how you link your deployment to your artifact and how to deal with changes in the deployment software. You can add a dependency to the version of the deployment software to your artifacts or make your deployment software backwards compatible. You can also accept that after you change your deployment software, you cannot deploy old artifacts anymore. This might not be a problem if the new style artifacts are already installed in the production environment and the old style artifacts will never be altered or deployed anymore. You can also create new versions of the different artifacts in the new structure or update as you go.

Conclusion

Implementing continuous delivery can be a pain in the ass! It requires a lot of thought about responsibilities and implementation methods (not even talking about the implementation itself). It is easy to just do what everyone else does and what smart people say you should do, but it never hurts to think about what you are doing yourself and to understand what you are doing and why. Also, it is important to realize what the limitations are of the methods and tools used, in order to make sound judgments about them. Try to keep it easy to use and make sure it adds value.

Sonatype Nexus: Delete artifacts based on a selection

Sonatype Nexus provides several mechanisms to remove artifacts from the repository. You can schedule a job to keep only a specified number of the latest releases (see here). You can also specifically remove a single artifact or an entire group using the API (see here). Suppose you want to make a selection though: I only want to delete artifacts from before a certain date with a specified groupid. In this article I provide a Python 2.7 script which allows you to do just that.

The script has been created for my specific sample situation. Yours might differ. For example, I have only used the Releases repository and no snapshot versions. First check if the artifacts are the ones you expect to be selected based on your criteria before actually performing the artifact deletion. If they differ, it is easy to alter the script to suit your particular needs.

You can download the NetBeans 8.0.2 project containing the code of the script here. I've used the NetBeans Python plugin you can find here. Also I have not used any third party Python libraries so a default installation should suffice.

Script to delete artifacts

Configuration

The script starts with some configuration. First the connection information for Nexus followed by artifact selection criteria. Only the group is required. All other criteria can be left empty (None). If empty, any test related to the selection criteria passes. Thus for example setting the ARTIFACTVERSIONMIN to None means all previous versions could become part of the selection.

#how to access Nexus. used to build the URL in get_nexus_artifact_version_listing and get_nexus_artifact_names
NEXUSHOST = "localhost"
NEXUSPORT = "8081"
NEXUSREPOSITORY = "releases"
NEXUSBASEURL = "/nexus/service/local/repositories/"
NEXUSUSERNAME = 'admin'
NEXUSPASSWORD = 'admin123'

#what to delete
ARTIFACTGROUP = "nl.amis.smeetsm.application" #required
ARTIFACTNAME = None #"testproject" #can be an artifact name or None. None first searches for artifacts in the group
ARTIFACTVERSIONMIN = "1.1" #can be None or a version like 1.1
ARTIFACTVERSIONMAX = "1.2" #can be None or a version like 1.2
ARTIFACTMAXLASTMODIFIED = datetime.datetime.strptime("2014-10-29 12:00:00","%Y-%m-%d %H:%M:%S") #can be None or datetime in format like 2014-10-29 12:00:00
ARTIFACTMINLASTMODIFIED = datetime.datetime.strptime("2014-10-28 12:00:00","%Y-%m-%d %H:%M:%S") #can be None or datetime in format like 2014-10-28 12:00:00

What does the script do?

The script uses the Nexus API (see for example my previous post). If the artifact name is specified, that is used. Else the API is used to query for artifacts which are part of the specified group. When the artifacts are determined, artifact versions are looked at.

For example, a group nl.amis.smeetsm.application is specified and an artifact name of testproject is specified. This translates to an URL like;

http://localhost:8081/nexus/service/local/repositories/releases/content/nl/amis/smeetsm/application/testproject/

When I go to this URL in a browser, an XML is returned containing directory content which among others contain the artifact versions and several properties of these versions such as lastModified date. This I can then use in the selection.

If an artifact version is determined to be part of the provided selection, it is removed. An interesting aspect of the actual removal of the artifact using the Nexus API is the use of HTTP Basic Authentication from Python. See the references below for the sample I used as inspiration.
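
To give an impression of how that removal works, below is a minimal sketch (not the actual downloadable script) of an HTTP DELETE with Basic Authentication using only the Python 2.7 standard library. The function name and parameters are my own for illustration; host, port and credentials match the configuration shown above.

import base64
import httplib
import string

def delete_artifact_version(host, port, repository, group, artifact, version, username, password):
    # build the repository content path from the GAV coordinates
    url = "/nexus/service/local/repositories/" + repository + "/content/" + \
          group.replace(".", "/") + "/" + artifact + "/" + version
    # HTTP Basic authentication: base64 encode username:password
    auth = string.strip(base64.encodestring(username + ':' + password))
    conn = httplib.HTTPConnection(host, port)
    conn.request("DELETE", url, None, {"Authorization": "Basic %s" % auth})
    resp = conn.getresponse()
    # 204 No Content indicates the version has been removed
    return resp.status, resp.reason

print delete_artifact_version("localhost", 8081, "releases", "nl.amis.smeetsm.application", "testproject", "1.0", "admin", "admin123")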

Seeing it work

My test situation looks as follows. testproject is my artifact name. I have 4 versions: 1.0, 1.2, 1.3 and 1.4. Version 1.0 is the oldest one, with a lastModifiedDate of 2014-10-28. I want to remove it.


I have used the following selection (delete testproject releases before 2014-10-29 12:00:00)

ARTIFACTGROUP = "nl.amis.smeetsm.application"
ARTIFACTNAME = "testproject"
ARTIFACTVERSIONMIN = None
ARTIFACTVERSIONMAX = None
ARTIFACTMAXLASTMODIFIED = datetime.datetime.strptime("2014-10-29 12:00:00","%Y-%m-%d %H:%M:%S")
ARTIFACTMINLASTMODIFIED = None

The output of the script is as follows;

Processing artifact: testproject
URL to determine artifact
versions:http://localhost:8081/nexus/service/local/repositories/releases/content/nl/amis/smeetsm/application/testproject/
Item datetime: 2015-07-11 14:43:32.0 UTC
Item version: 1.3
Item datetime: 2015-07-11 14:43:57.0 UTC
Item version: 1.4
Item datetime: 2014-10-28 18:20:49.0 UTC
Item version: 1.0
Artifact to be removed nl.amis.smeetsm.application: testproject: 1.0
Sending HTTP DELETE request to
http://localhost:8081/nexus/service/local/repositories/releases/content/nl/amis/smeetsm/application/testproject/1.0
Response:  204 No Content
Item datetime: 2014-11-03 13:36:43.0 UTC
Item version: 1.2

As you can see, all versions are evaluated and only one is selected and removed. The HTTP 204 indicates the action has been successful.

References

NetBeans Python plugin
http://plugins.netbeans.org/plugin/56795/python4netbeans802

Can I delete releases from Nexus after they have been published?
https://support.sonatype.com/entries/20871791-Can-I-delete-releases-from-Nexus-after-they-have-been-published-

curl : safely delete artifacts from Nexus
https://parkerwy.wordpress.com/2011/07/10/curl-safely-delete-artifacts-from-nexus/

Python: HTTP Basic authentication with httplib
http://mozgovipc.blogspot.nl/2012/06/python-http-basic-authentication-with.html

Retrieve Artifacts from Nexus Using the REST API or Apache Ivy
http://www.sonatype.org/nexus/2015/02/18/retrieve-artifacts-from-nexus-using-the-rest-api-or-apache-ivy/

Overview of WebLogic RESTful Management Services

Inspired by a presentation given by Shukie Ganguly on the free Oracle Virtual Technology Summit in July (see here); "New APIs and Tools for Application Development in WebLogic 12c", I decided to take a look at an interesting new feature in WebLogic Server 12c: the RESTful Management Services. You can see here how to enable them. In this post I will provide an overview of my short study on the topic.

RESTful management services consist of two sets of resources. tenant-monitoring resources and 'wls' resources. The first is more flexible in response format (JSON, XML, HTML) and more suitable for monitoring. With the latter you can for example update datasource properties and create entire servers. It however only supports JSON as return format. The 'wls' resources also provide links so you can automagically traverse the resource tree which is very useful. I've provided a Python script to do just that at the end of this post.

Monitoring

In the past I have already created all kinds of tools to do remote monitoring of WebLogic Server 11g. See for example http://javaoraclesoa.blogspot.nl/2012/09/monitoring-datasources-on-weblogic.html for some code to monitor datasources and for the state of the SOA Infrastructure; http://javaoraclesoa.blogspot.nl/2012/11/soa-suite-cluster-deployments-and.html and also for BPEL: http://javaoraclesoa.blogspot.nl/2013/03/monitoring-oracle-soa-suite-11g.html.

With the 12c RESTful Management Services this becomes a lot easier and does not require any custom code, which is of course a major improvement!

It is possible to let the RESTful Management Services return HTML, JSON or XML by using the Accept HTTP header (application/json or application/xml. HTML is the default). See here.

What can you monitor?
Available resources under http(s)://host:port/management/tenant-monitoring are (WLS 12.1.1):

  • servers
  • clusters
  • applications
  • datasources

You can also go to the level of an individual resource like for example datasources/datasourcename.
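
As a quick illustration, the following sketch (Python 2, standard library only; adjust host, port and credentials to your own environment) requests the datasources resource and asks for JSON by setting the Accept header.

import base64
import httplib
import string

WLS_HOST = "localhost"
WLS_PORT = 7001
WLS_USERNAME = "weblogic"
WLS_PASSWORD = "Welcome01"

# base64 encode the username and password for HTTP Basic authentication
auth = string.strip(base64.encodestring(WLS_USERNAME + ':' + WLS_PASSWORD))
conn = httplib.HTTPConnection(WLS_HOST, WLS_PORT)
headers = {"Authorization": "Basic %s" % auth,
           "Accept": "application/json"}  # ask for JSON instead of the default HTML
conn.request("GET", "/management/tenant-monitoring/datasources", None, headers)
resp = conn.getresponse()
print resp.status, resp.reason
print resp.read()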

SOA Suite
The tenant-monitoring resources of the RESTful Management Services are not specific for SOA Suite. They do not allow you to obtain much information about the inner workings of applications like the SOA infrastructure application or the BPEL process manager. Thus my SOA infrastructure monitoring tool and BPEL process state monitoring tool could still be useful. You can potentially replace this functionality however with for example Jolokia. See below.

Monitoring a lot of resources
Because the Management Services allow monitoring of many resources, they would be ideal to use in a monitoring tool like Nagios. Mark Otting beat me to this however; http://www.qualogy.com/monitoring-weblogic-12c-with-nagios-and-rest/.

The RESTful Management services provide a specific set of resources which you can monitor. These resources are limited. There is also an alternative for the RESTful Management Services for monitoring WebLogic Server (and other application servers), namely Jolokia. See here. One of the nice things about Jolokia is that it allows you to directly access MBeans and you are not limited to a fixed set of available resources. Directly accessing MBeans is very powerful (and potentially dangerous!). This could for example allow obtaining SOA infrastructure state and list deployed composites.

Management

The RESTful Management Services do not only provide monitoring capabilities but also editable resources;
http://docs.oracle.com/middleware/1213/wls/WLRMR/resources.htm#WLRMR471. These resources can be accessed by going to an URL like; http(s)://host:port/management/wls/{version}/path, for example http://localhost:7001/management/wls/latest/. The resources only provide the option to reply with JSON (Accept: application/json) and provide links entries so you can see the parent and children of a resource. With POST, PUT and DELETE HTTP verbs you can update, create or remove resources and with GET and OPTIONS you can obtain information.

Deploying without dependencies (just curl)
An interesting use case is command-line deployment without dependencies. This is an example given in the Oracle documentation (see here). You could for example use a curl command (or any other command-line HTTP client) to deploy an ear without needing Java libraries or WLST/Ant/Maven scripts.

Walking the resource tree
In contrast to the tenant-monitoring resources, the management resources allow traversing the JSON tree. The response of a HTTP GET request contains a links element, which contains parent and child entries. When an HTTP GET is not allowed or the links element does not exist, you can't go any further down the resource. In order to display available resources on your WebLogic Server I wrote a small Python script.

import json
import httplib
import base64
import string
from urlparse import urlparse

WLS_HOST = "localhost"
WLS_PORT = "7101"
WLS_USERNAME = "weblogic"
WLS_PASSWORD = "Welcome01"

def do_http_request(host, port, url, verb, accept, username, password, body):
    # from http://mozgovipc.blogspot.nl/2012/06/python-http-basic-authentication-with.html
    # base64 encode the username and password
    auth = string.strip(base64.encodestring(username + ':' + password))
    service = httplib.HTTP(host, port)

    # write your headers
    service.putrequest(verb, url)
    service.putheader("Host", host)
    service.putheader("User-Agent", "Python http auth")
    service.putheader("Content-type", "text/html; charset=\"UTF-8\"")
    # write the Authorization header like: 'Basic base64encode(username + ':' + password)
    service.putheader("Authorization", "Basic %s" % auth)
    service.putheader("Accept", accept)
    service.endheaders()
    service.send(body)
    # get the response
    statuscode, statusmessage, header = service.getreply()
    #print "Headers: ", header
    res = service.getfile().read()
    #print 'Content: ', res
    return statuscode, statusmessage, header, res

def do_wls_http_get(url, verb):
    return do_http_request(WLS_HOST, WLS_PORT, url, verb, "application/json", WLS_USERNAME, WLS_PASSWORD, "")

def get_links(body):
    uris = []
    json_obj = json.loads(body)
    if json_obj.has_key("links"):
        for link in sorted(json_obj["links"]):
            if link["rel"] != "parent":
                uri = link["uri"]
                uriparsed = urlparse(uri)
                uris.append(uriparsed.path)
    return uris

def get_links_recursive(body):
    links = get_links(body)
    for link in links:
        statuscode, statusmessage, header, res = do_wls_http_get(link, "GET")
        if statuscode == 200:
            print link
            get_links_recursive(res)

statuscode, statusmessage, header, res = do_wls_http_get("/management/wls/latest/", "GET")
if statuscode != 200:
    print "HTTP statuscode: " + str(statuscode)
    print "Have you enabled RESTful Management Services?"
else:
    get_links_recursive(res)

Output of this script on a WebLogic 12.1.3 server contains information on all datasources, application deployments, servers and jobs. You can use it to for example compare two environments for the presence of resources. The script is easily expanded to include the configuration of individual resources. This way you can easily compare environments and see if you have missed a specific configuration setting. Of course, only resources are displayed which can be accessed by the RESTful Management Services. Absence of for example a data-source or application deployment can easily be detected but absence of a credential store or JMS queue will not be detected this way. The links are parsed in order (sorted) to help in comparing. You can also use this script to compare WebLogic Server versions to see what new resources Oracle has added since the last release.
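
As an example of such an expansion, get_links_recursive could be modified along the following lines to also print the simple attributes of every resource it visits. This is just a sketch (the attribute filtering is kept deliberately basic) to show where the configuration comparison could hook in.

def get_links_recursive(body):
    links = get_links(body)
    for link in links:
        statuscode, statusmessage, header, res = do_wls_http_get(link, "GET")
        if statuscode == 200:
            print link
            # also print the simple (non-nested) attributes of this resource
            json_obj = json.loads(res)
            for key in sorted(json_obj.keys()):
                if key != "links" and not isinstance(json_obj[key], (dict, list)):
                    print "  " + key + " = " + repr(json_obj[key])
            get_links_recursive(res)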

SOA Suite 12c: Collect & Deploy SCA composites & Service Bus artifacts using Maven

An artifact repository has many benefits for collaboration and governance of artifacts. In this blog post I will illustrate how you can fetch SCA composites and Service Bus artifacts from an artifact repository and deploy them. The purpose of this exercise is to show that you do not need loads of custom scripts to do these simple tasks. Why re-invent a wheel when Oracle already provides it?

This example has been created for SOA Suite 12.1.3. This will not work as-is for 11g and earlier since they lack Maven support for SOA Suite artifacts. In order to start using Maven to do command-line deployments, you need to have some Oracle artifacts in your repository. See http://biemond.blogspot.nl/2014/06/maven-support-for-1213-service-bus-soa.html on how to put them there. I have used two test projects which were already in the repository. A SCA composite called HelloWorld_1.0 and a Service Bus project also called HelloWorld_1.0. In my example, the SCA composite is in the GroupId nl.amis.smeetsm.composite and the Service Bus project is in the GroupId nl.amis.smeetsm.servicebus.

SCA Composite

Quick & dirty with few dependencies

I have described getting your SCA composite out of Nexus and into an environment on http://javaoraclesoa.blogspot.nl/2015/03/deploying-soa-suite-12c-artifacts-from.html. The process described there has very few dependencies. First you manually download your jar file using the repository API and then you deploy it using a Maven command like:

mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=HelloWorld-1.0.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101 

In order for this to work, you need to have a (dummy) pom.xml file in the current directory. You cannot use the project pom file for this.

The only requisites (next to a working Maven installation) are;
  • the sar file
  • serverUrl and credentials of the server you need to deploy to
Notice that you do not even need an Oracle home location for this. In order to build the project from sources however, you do need an Oracle home.

Less quick & dirty using Maven

An alternative to the previously described method is to use a pom which has the artifact you want to deploy as a dependency. This way Maven obtains the artifact from the repository (configured in settings.xml) for you. This is also a very useful method to combine artifacts in a greater context such as for example a release. The Maven assembly plugin (which uses the configuration file unit-assembly.xml in this example) can be used to specify how to treat the downloaded artifacts. The format 'dir' specifies that the downloaded artifacts should be put in a specific directory as-is (not zipped or otherwise repackaged). Format 'zip' will (surprise!) zip the result so you can for example put it in your repository or somewhere else. The dependencySet directive indicates which dependencies should go to which directory. When combining Service Bus and SOA artifacts in a single pom, you can use this information to determine which artifact should be put in which directory and this can then be used to determine which artifact should be deployed where.

You can for example use a pom.xml file like:

<?xml version="1.0" encoding="UTF-8"?>  
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>nl.amis.smeetsm.unit</groupId>
<artifactId>HelloWorld_1.0</artifactId>
<packaging>jar</packaging>
<version>1.0</version>
<name>HelloWorld_1.0</name>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>
<groupId>nl.amis.smeetsm.composite</groupId>
<artifactId>HelloWorld_1.0</artifactId>
<version>1.0</version>
<type>jar</type>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.5.4</version>
<configuration>
<descriptors>
<descriptor>unit-assembly.xml</descriptor>
</descriptors>
</configuration>
</plugin>
</plugins>
</build>
</project>

With a unit-assembly.xml file like

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">
<id>unit</id>
<formats>
<format>dir</format>
</formats>
<dependencySets>
<dependencySet>
<outputDirectory>/unit/composite</outputDirectory>
<includes>
<include>nl.amis.smeetsm.composite:*</include>
</includes>
</dependencySet>
</dependencySets>
</assembly>

Using this method you also need the following in your settings.xml file so it can find the repository. In this example I have used a local Nexus repository.

<mirror>  
<id>nexus</id>
<name>Internal Nexus Mirror</name>
<url>http://localhost:8081/nexus/content/groups/public/</url>
<mirrorOf>*</mirrorOf>
</mirror>

And then in order to obtain the jar from the repository

mvn assembly:single

And deploy it in the same way as described above, only with a slightly longer location for the sar file.

mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=target/HelloWorld_1.0-1.0-unit/HelloWorld_1.0-1.0/unit/composite/HelloWorld_1.0-1.0.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101  

Thus what you need here (next to a working Maven installation) is;
  • a settings.xml file containing a reference to the repository (you might be able to avoid this by providing it command-line)
  • a specific pom with the artifact you want to deploy specified as dependency
  • serverUrl and credentials of the server you want to deploy to
Service Bus

For the Service Bus the methods used to get artifacts in and out of an artifact repository are in general very similar to those for SCA composites.

Getting the Service Bus sbar from an artifact repository to an environment does require the project's pom file, since you cannot specify an sbar file directly in a deploy command. The command to do the actual deployment also differs from deploying a SCA composite. You do require an Oracle home for this.

mvn pre-integration-test -DoracleHome=/home/maarten/Oracle/Middleware1213/Oracle_Home -DoracleUsername=weblogic -DoraclePassword=Welcome01 -DoracleServerUrl=http://localhost:7101

You can also use a method similar to the one described for the SCA composites. Mind though that you also need the project pom file as a dependency.

<?xml version="1.0" encoding="UTF-8"?>  
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>nl.amis.smeetsm.unit</groupId>
<artifactId>HelloWorld_1.0</artifactId>
<packaging>jar</packaging>
<version>1.0</version>
<name>HelloWorld_1.0</name>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>
<groupId>nl.amis.smeetsm.servicebus</groupId>
<artifactId>HelloWorld_1.0</artifactId>
<version>1.0</version>
<type>sbar</type>
</dependency>
<dependency>
<groupId>nl.amis.smeetsm.servicebus</groupId>
<artifactId>HelloWorld_1.0</artifactId>
<version>1.0</version>
<type>pom</type>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.5.4</version>
<configuration>
<descriptors>
<descriptor>unit-assembly.xml</descriptor>
</descriptors>
</configuration>
</plugin>
</plugins>
</build>
</project>

And a unit-assembly.xml like;

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">
<id>unit</id>
<formats>
<format>dir</format>
</formats>
<dependencySets>
<dependencySet>
<outputDirectory>/unit/servicebus</outputDirectory>
<includes>
<include>nl.amis.smeetsm.servicebus:*</include>
</includes>
</dependencySet>
</dependencySets>
</assembly>

Thus what you need here (next to a working Maven installation) is;
  • an Oracle home location
  • a settings.xml file containing a reference to the repository (you might be able to avoid this by providing it command-line)
  • a specific pom with the artifact specified as dependency (this will fetch the sbar and pom file)
  • serverUrl and credentials of the server you want to deploy to
Deploy many artifacts

In order to obtain large amounts of artifacts from Nexus and deploy them, it is relatively easy to create a shell script, for example something like the one below. The script below uses the structure created by the method described above to deploy artifacts. It has a part which first downloads a ZIP, unzips it and then loops through the deployable artifacts and deploys them. The script depends on a ZIP in the artifact repository with the specified structure. In order to put the unit in Nexus, replace 'dir' with 'zip' in the assembly file and deploy the unit. You are creating a copy of the artifacts though, so you should probably use the pom and assembly directly to create the unit of artifacts and loop over them, without the intermediate step of creating a separate ZIP of the assembly.

The local directory should contain a dummypom.xml for the SCA deployment. The script creates a tmp directory, downloads the artifact, extracts it, loops over its contents, creates a deploy shell script and executes it. Separating assembly (deploy_unit.sh) and actual deployment (deploy_script.sh) is advised. This allows you to rerun the deployment or continue from a certain point where it might have failed. The assembly can be handed to someone else (operations?) to do the deployment.

dummypom.xml:

<?xml version="1.0" encoding="UTF-8"?>  
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<modelVersion>4.0.0</modelVersion>
<groupId>nl.amis.smeetsm</groupId>
<artifactId>DummyPom</artifactId>
<version>1.0</version>
</project>

deploy_unit.sh

The script has a single parameter: the URL of the unit to be installed. This can be a reference to an artifact in a repository (if you have your unit as a separate artifact in the repository). The script is easily updated to use a local file or the structure as described above.

#!/bin/sh

servicebus_hostname=localhost
servicebus_port=7101
servicebus_username=weblogic
servicebus_password=Welcome01
servicebus_oraclehome=/home/maarten/Oracle/Middleware1213/Oracle_Home/
composite_hostname=localhost
composite_port=7101
composite_username=weblogic
composite_password=Welcome01

if [ -d "tmp" ]; then
  rm -rf tmp
fi
mkdir tmp
cp dummypom.xml tmp/pom.xml
cd tmp

#first fetch the unit ZIP file
wget $1
for f in *.zip
do
  echo "Unzipping $f"
  unzip $f
done

#deploy composites
for D in `find . -type d -name composite`
do
  echo "Processing directory $D"
  for f in `ls $D/*.jar`
  do
    echo "Deploying $f"
    URL="http://$composite_hostname:$composite_port"
    echo "URL: $URL"
    echo mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=$f -Duser=$composite_username -Dpassword=$composite_password -DserverURL=$URL >> deploy_script.sh
  done
done

#deploy servicebus
for D in `find . -type d -name servicebus`
do
  echo "Processing directory $D"
  for f in `ls $D/*.pom`
  do
    echo "Deploying $f"
    URL="http://$servicebus_hostname:$servicebus_port"
    echo "URL: $URL"
    echo mvn -f $f pre-integration-test -DoracleHome=$servicebus_oraclehome -DoracleUsername=$servicebus_username -DoraclePassword=$servicebus_password -DoracleServerUrl=$URL >> deploy_script.sh
  done
done

#make the generated deployment script executable and run it
chmod +x deploy_script.sh
./deploy_script.sh

cd ..
rm -rf tmp

For this example I created a very basic script. It does require a Maven installation, a settings.xml pointing to the repository and an Oracle home location (Service Bus requires it). It also has some liabilities, for example in the commands used to find the deployable artifacts. It does give an idea though of how you can deploy large amounts of composites with relatively little code by leveraging Maven commands. It also illustrates the difference between SCA composite and Service Bus deployments.

Finally

You can easily combine the assembly files and pom files for the SCA composites and the Service Bus to create a release containing both. Deploying them is also easy using a single command. I also illustrated how you can easily loop over several artifacts using a shell script. I have not touched on the usage of configuration plans or on how to efficiently group related artifacts in your artifact repository. Those will be the topic of a future blog post.

SOA Suite 12c: Best practices for project structure and deployment

Efficient usage of version control comes with specific requirements: it should allow identification of versions and parallel development on different branches. At design time you will want to have your Service Bus projects in a single application in order to allow usage of shared objects. At deploy time or when creating a release, you want to group SCA composites together with Service Bus projects. How do you combine these different requirements?

In this article I'll describe several practices and considerations which can help you structure your version control and artifact repository. The main challenge is finding a workable balance between the amount/complexity of your deployment scripts and developer productivity / focus on business value. A lot of scripts (a large investment) can make it easy for developers in the short term; however, those scripts can easily become a burden.

If you are just looking for some good practices to structure your version control and artifact repository, look at the list below. If however you want to know why I think certain things are good and bad practice, read on.


Development


Use a per technology structure
SCA composites can use customizable MDS directories (you can update the path in adf-config.xml and even use variables. See here for an example). In order to use shared objects in your Service Bus project however, they should be part of the same application (in order to avoid compilation errors in JDeveloper). The application poms for the Service Bus and SCA composites use the Maven module structure to refer to their projects. This means the application should be able to find the projects. When creating a new application with a new project for SCA composites and Service Bus, the application has the projects as sub-directories. SCA composites and Service Bus projects require separate applications. Thus there are several reasons why you would want to group projects per technology. This makes development easier, avoids the dirty fixes needed for a custom directory structure and is more in line with the default structure provided when creating a new project.

Version control structure


A version control system (VCS) should allow you to identify different versions. The versions of the software also live in the artifact repository. If you want to create a fix on a specific version of the software, it is usual to create a tag from that version, branch the tag and fix it there since the trunk might have evolved further and sometimes contains changes you do not want yet.


When structuring VCS, it becomes important to think about how you are going to deploy. This determines what you want to branch. If you have a main job in your deployment tooling which calls sub-jobs per technology, you can branch per technology and the result can look like a branched functional unit in version control. You can also use (but I would not recommend it) references / externals (with specified revision) but this requires some extra scripting and might not work as expected in all cases. You want to use the same version number for the different artifacts in your functional unit (e.g. mvn versions:set -DnewVersion=x.x.x.x) to make it easy to see what belongs together. You can use a deploy job parameter for this or a separate file. This would mean that if a SCA composite changes and a Service Bus project is in the same functional unit but is unchanged, it still gets an increase in version number and gets deployed.
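
As an indication of how little scripting this version synchronization requires, below is a minimal sketch that runs mvn versions:set for every project of a functional unit (the project directory names and the version number are hypothetical; Maven is assumed to be on the path):

import os
import subprocess

#hypothetical sub-directories containing the projects of one functional unit
projects = ['HelloWorld_SCA', 'HelloWorld_SB']
new_version = '1.0.2.0'

for project in projects:
    #give every project of the functional unit the same version
    print 'Setting version ' + new_version + ' for ' + project
    subprocess.check_call(['mvn', 'versions:set', '-DnewVersion=' + new_version, '-DgenerateBackupPoms=false'], cwd=os.path.abspath(project))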

There are several benefits of the method described above;
  • it is easy to identify the versions of the artifacts (e.g. Service Bus project, SCA composite) which are part of the functional unit, since they all have the same version
  • you do not require a separate definition of a functional unit since you know based on the version number which artifacts belong together and you can use the Maven GroupId to identify the functional unit
There are some drawbacks of using this method though.
  • a lot of versions contain the same code
  • you need to automate keeping the versions of the parts of the functional unit in sync
The functional unit as a separate definition
We can let go of the 'keep-the-version-of-the-artifacts-of-the-functional-unit-in-sync' method. Then you need a container artifact to determine which versions belong together and form a version of the functional unit or release. You can use a functional unit definition (a pom with dependencies) for that (and have the release consist of functional units), or directly mention the separate components in your release since the GroupId in the artifact repository can indicate the functional unit. In the latter case your functional unit will not have a separate definition / pom since it is not an artifact itself, and you require less scripting since you do not need to bridge the gap from artifact to functional unit to release but only from artifact to release. This makes it all a bit simpler; simplicity requires less code and results in better maintainability.

Artifact repository


Snapshot releases
You can ask yourself if, in a continuous delivery environment, you need snapshot releases or snapshot artifacts at all. Every release has the potential to go to the production environment and the release content is continuously updated with new artifact versions. My suggestion is to not use snapshots but to keep track (automated) of which artifact version is in a release. After a release (usually every sprint in Scrum), you can clean out the artifacts which did not make it into the release. See for example here.
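
To illustrate keeping track (automated) of what is in a release, a minimal sketch that lists the artifact versions referenced by a release POM (the same kind of POM with dependencies as used in the 'Create a release of artifacts' post further on; the file name release-pom.xml is an assumption). Everything not referenced by any release is then a candidate for clean-up:

import xml.etree.ElementTree as ET

ns = dict(pom='http://maven.apache.org/POM/4.0.0')

#release-pom.xml is a hypothetical release definition listing the released artifacts as dependencies
tree = ET.parse('release-pom.xml')
for dep in tree.findall('pom:dependencies/pom:dependency', ns):
    groupid = dep.find('pom:groupId', ns).text
    artifactid = dep.find('pom:artifactId', ns).text
    version = dep.find('pom:version', ns).text
    #these GAV's should be kept in the artifact repository
    print 'Keep ' + groupid + ':' + artifactid + ':' + version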

GroupId
For ease of deployment I recommend having an artifact structure which is in line with your deployment methodology. An artifact repository often uses so-called Maven coordinates to identify an artifact. These are GroupId, ArtifactId and Version. You can also use an optional classifier. The GroupId is ideal for identifying functional units and for telling individual artifacts, functional units and releases apart.

Classifier
The classifier can be used to add, for example, a configuration plan. Mind though that you should not add configuration plans or property files which are environment specific, since environments tend to change a lot. The configuration plans should contain placeholders and the deployment software (e.g. XLDeploy, Bamboo, Jenkins, Hudson and the like) should replace them with the correct values. This makes it easier to secure those values and to let someone else manage them.
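
To illustrate the placeholder idea, a minimal sketch of the replacement step such deployment software performs (the ${...} placeholder syntax, the property values and the file names are assumptions; the tools mentioned above provide this functionality out of the box):

import re

#environment specific values, normally managed (and secured) by the deployment tooling
properties = {'db.host': 'dbhost.example.com', 'db.port': '1521'}

def replace_placeholders(text, props):
    #replace every ${name} placeholder with the value from the properties dictionary; unknown placeholders are left as-is
    return re.sub(r'\$\{([^}]+)\}', lambda m: props.get(m.group(1), m.group(0)), text)

configplan_template = open('configplan_template.xml').read()
open('configplan.xml', 'w').write(replace_placeholders(configplan_template, properties))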

Deploying and build pipeline


Suppose you have a functional service which consists of two components. What should you do when deploying them?

Split deployment per technology
First, I recommend splitting them per technology in your deployment tooling. Use a modular setup. Do not create one super-script which does the deployment of your entire custom functional unit to all required environments! When you want to install a functional service, the tooling should kick off sub-jobs which do the deployment of the individual technologies. This makes maintaining the jobs and scripts easier (not a single large black box but several smaller black boxes). It also allows you to provide jobs only with the information required to deploy a specific technology, which is of course more secure. For example, a job deploying a SCA composite does not need the weblogic password of the Service Bus server. Also, when branching you can do this per technology and you do not need to wrap this in a greater abstraction.

In summary, splitting deployment per technology
  • requires a modular setup of deployment scripts (better maintainability)
  • is more secure
  • is more flexible; changes can be applied faster
Build pipeline
A build pipeline consists of several steps. Important steps to have at the end are;
  • store the artifact in your artifact repository
  • make the relevant tags in version control
  • update the release with the new version of the deployed artifact
Avoid manual steps to add certain software to a release (such as maintaining a Wiki which is the basis for what is in a release). Manual steps will cause issues since developers tend to forget them (and do not consider the deployment of their own software their responsibility). When the process of adding code to a release is automated, the responsibility for collecting the release lies with the developer instead of with a build / deployment team. If it is not in the release, the developer has not released it.
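
To make the last pipeline steps listed above less abstract, a rough sketch of what they could look like when scripted (the Maven and Subversion commands are generic; the repository URLs, GAV coordinates and version number are hypothetical, and releasescript.py refers to the script shown in the 'Create a release of artifacts' post further on):

import subprocess

version = '1.0.2.0'  #hypothetical version produced by this pipeline run

#store the artifact in the artifact repository
subprocess.check_call(['mvn', 'deploy'])

#make the relevant tag in version control (Subversion style URLs, hypothetical)
subprocess.check_call(['svn', 'copy', 'http://svn.example.com/repo/trunk/HelloWorld_SCA',
                       'http://svn.example.com/repo/tags/HelloWorld_SCA-' + version,
                       '-m', 'Tag created by the build pipeline'])

#update the release with the new version of the deployed artifact
subprocess.check_call(['python', 'releasescript.py', 'pom.xml', 'HelloWorld_SCA',
                       'nl.amis.smeetsm.helloworld', version, 'jar'])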

BPM Suite 12c: Oracle Adaptive Case Management: Monitoring Case Events

Oracle Adaptive Case Management (ACM) is an interesting addition to Oracle BPM Suite which was introduced in 11.1.1.7. Adaptive Case Management is suitable for modeling complex workflows in which there is no set order of activities taking place. This gives the end user more control over what to do and when.

When a case is started, it is a running process in the SOA infrastructure. The main component is Oracle Business Rules which governs (among other things) the availability of activities and when certain process milestones are achieved. The case API allows you to query the case events and milestones (how you can expose the API as a service is described here and here by Roger Goossens).

Sometimes people want to obtain information about cases such as;
  • in how many cases has a certain activity been executed?
  • in which cases has a certain milestone been reached?
Cases can crash, be restarted, migrated, aborted, purged, etc. Sometimes you might not want to depend on the running case being there to provide the information you want. Also, using the API every time you want certain information might put a serious strain on your system. Using sensors or BAM might help, but they require an investment to implement and are still manual implementations, with no guarantee that you can obtain information in the future which you did not anticipate needing.

Publish Case Events

Luckily Oracle has provided the perfect solution for monitoring case events! You can publish case events to the Event Delivery Network (read here 31.17.2 How to Publish Case Events). These events can easily be monitored by, for example, a BPEL process, which can store the information in a custom table.


You can find the event definition of a case event in the MDS at oramds:/soa/shared/casemgmt/CaseEvent.edl

Publishing case events does not work however when using the (BPM 12.1.3) quickstart installation (with the Derby database). I got the below error:

Caused By: org.apache.derby.client.am.SqlException: 'EDN_INTERNAL_PUBLISH_EVENT' is not recognized as a function or procedure.
    at org.apache.derby.client.am.Statement.completeSqlca(Unknown Source)
    at org.apache.derby.client.net.NetStatementReply.parsePrepareError(Unknown Source)
    at org.apache.derby.client.net.NetStatementReply.parsePRPSQLSTTreply(Unknown Source)
    at org.apache.derby.client.net.NetStatementReply.readPrepare(Unknown Source)
    at org.apache.derby.client.net.StatementReply.readPrepare(Unknown Source)
    at org.apache.derby.client.net.NetStatement.readPrepare_(Unknown Source)
    at org.apache.derby.client.am.Statement.readPrepare(Unknown Source)

The PL/SQL procedure EDN_INTERNAL_PUBLISH_EVENT apparently is used when publishing EDN (Event Delivery Network) events from the Case. Of course the quickstart Derby database doesn't have PL/SQL support.

I quickly (Vagrant/Puppet) created a full-blown SOA Suite 12c installation with a serious Oracle database to continue (based on the following article by Lucas Jellema, using scripts from Edwin Biemond). When enabling debug logging for the Event Delivery Network (see here), I could see the following (this is a sample event).

[SRC_METHOD: fineEventPublished] Received event: Subject: null  Sender: oracle.integration.platform.blocks.event.saq.SAQRemoteBusinessEventConnection   Event:[[
<business-event xmlns:ns="http://xmlns.oracle.com/bpm/case/event" xmlns="http://oracle.com/fabric/businessEvent">
   <name>ns:CaseEvent</name>
   <id>67622616-f339-4478-b033-f773be3eba78</id>
   <tracking>
      <ecid>eac134b8-79de-49bc-9fc7-1b70f9e6ccf8-0003eb63</ecid>
      <conversationId>c9fb038e-7666-47e7-9f70-5e2f19a39d82</conversationId>
   </tracking>
   <content>
      <ce:caseEvent xmlns:ce="http://xmlns.oracle.com/bpm/case/event" eventType="ACTIVITY_EVENT">
   <ce:eventId>e427621f-a35e-4106-ac84-f650d3b92222</ce:eventId>
   <ce:caseId>50ad482f-5024-403e-aea9-3ec56ae623ae</ce:caseId>
   <ce:updatedBy>weblogic</ce:updatedBy>
   <ce:updatedByDisplayName>weblogic</ce:updatedByDisplayName>
   <ce:updatedDate>2015-08-31T11:06:45.811+02:00</ce:updatedDate>
   <ce:activityEvent>
      <ce:activityName>actConfirmUserCreditProcess</ce:activityName>
      <ce:activityType>BPMN</ce:activityType>
      <ce:activityEvent>COMPLETED</ce:activityEvent>
      <ce:startedDate>2015-08-31T11:06:45.316+02:00</ce:startedDate>
      <ce:completedDate>2015-08-31T11:06:45.811+02:00</ce:completedDate>
   </ce:activityEvent>
   <ce:documentEvent>
      <ce:document>
         <ce:documentId>5896af2e-156d-45e9-8329-81010cbe292b</ce:documentId>
      </ce:document>
   </ce:documentEvent>
</ce:caseEvent>
   </content>
</business-event>


Thus the event was published to the EDN... at least I thought it was.

Debugging EDN

In order to retrieve events, I created a small case and logger BPEL process as sample. You can download them here. The logger stores events on the filesystem. You can of course easily expand this to store the information in a database table. The sample logger process also shows how you can Base64 encode the event for the fileadapter.
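
Outside of BPEL the Base64 encoding itself is trivial; a small sketch of what effectively happens to the event payload before it is handed to the file adapter (the file name is an assumption):

import base64

#read the case event XML and encode it so it can be passed to the file adapter as opaque content
event_xml = open('caseevent.xml').read()
print base64.b64encode(event_xml)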
 
My CaseEventLogger process however did not pick up the EDN events.

Removing EDNDataSource and EDNLocalTxDataSource

I read however (here) that two datasources (EDNDataSource and EDNLocalTxDataSource) needed to be removed for the WLJMS implementation of EDN in 11g to work (the default is AQJMS). In 12c the WLJMS implementation is the default, so I tried removing the datasources and restarted my soa-infra. Next I got the following errors:

Exception in publishing business event[[
java.lang.NullPointerException
    at oracle.bpm.casemgmt.event.CaseEventEDNPublisher.publish(CaseEventEDNPublisher.java:112)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)


This was curious since ACM was not supposed to use the EDN datasources (that I had just removed). Remember the logging I posted before? SAQRemoteBusinessEventConnection was used. Why AQ? (hardcoded in CaseEventEDNPublisher?).

When publishing an event from the Enterprise Manager I see logging like;

[SRC_METHOD: publish] EDN outbound: JMS Config: oracle.soa.management.config.edn.EDNJmsConfig@d6c5dac2 [remote=false, jmsType=WLJMS, durable=true, xa=true, connectionName=eis/wls/EDNxaDurableTopic, topicName=jms/fabric/EDNTopic]

I also see other logging from oracle.integration.platform.blocks.event.jms2.EdnBus12c. This is a class the Case does not seem to use. The Case uses oracle.integration.platform.blocks.event.saq.SAQBusinessEventBus.

Setting the implementation to AQJMS

In order to confirm this finding, I set the EDN implementation to AQJMS (this MBean: oracle.as.soainfra.config:Location=SoaServer1,name=edn,type=EDNConfig,Application=soa-infra) and redeployed my logger process in order to have it use the correct datasource (you can also alter it at runtime from the Enterprise Manager in the Business Event screen).

I still didn't see my event arrive in my logger process. I did see however the EDN activation:

[SRC_METHOD: log] eis/aqjms/EDNxaDurableTopic EdnBus12c Populating ActivationSpec oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec with properties: {UseNativeRecord=false, DurableSubscriber=CaseEventd_SCA_60016, PayloadType=TextMessage, UseMessageListener=false, MessageSelector=EDN$namespace = 'http://xmlns.oracle.com/bpm/case/event' AND EDN$localName = 'CaseEvent', endpoint.ActionSpec.className=oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec, DestinationName=jms/fabric/EDNAQjmsTopic}


Maybe I shouldn't use the new (and very useful!) 12c durable subscriber feature? Let's retry without it.

[SRC_METHOD: log] eis/aqjms/EDNxaTopic EdnBus12c Populating ActivationSpec oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec with properties: {UseNativeRecord=false, PayloadType=TextMessage, UseMessageListener=false, MessageSelector=EDN$namespace = 'http://xmlns.oracle.com/bpm/case/event' AND EDN$localName = 'CaseEvent', endpoint.ActionSpec.className=oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec, DestinationName=jms/fabric/EDNAQjmsTopic}


They both use (check the JmsAdapter configuration) jms/fabric/EDNAQjmsTopicXAConnectionFactory. In the soajmsmodule, you can find the topic used: EDN_AQJMS_TOPIC.

But what does the Case do?

When looking at what the class used by the Case for publishing events does, we see something different. The oracle.integration.platform.blocks.event.saq.SAQBusinessEventBus class is used by the oracle.integration.platform.blocks.event.saq.SAQBusinessEventConnection class of which we see logging. The SAQBusinessEventBus class uses the EDN_EVENT_QUEUE (indirectly, via the EDN_INTERNAL_PUBLISH_EVENT procedure, which you could also see in the first error when still using the Derby quickstart database). When looking at the database, my messages also appeared to have ended up there.

Finally

Thus in Adaptive Case Management 12c (I used 12.1.3.0.2), publishing case events to the Event Delivery Network does not work yet (and this is not something we should fix on our own). In Adaptive Case Management 11g we do not have guaranteed delivery when using the EDN. Currently this feature can therefore not be used to, for example, obtain management information about cases: in 11g it is not guaranteed that all case events are delivered, and in 12c it does not work.

If it could be used, it would be great to, for example, store case events in a data-warehouse and allow a management information department to query the results. It requires minimal effort to implement and has the potential to provide a lot of (business relevant) information.
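
To give an idea of how little code it takes to turn such an event into a record for a data-warehouse table, a sketch that extracts some fields from the case event XML shown earlier (the element names are taken from that sample; the file name is an assumption and the actual insert into a table is left out):

import xml.etree.ElementTree as ET

ns = dict(ce='http://xmlns.oracle.com/bpm/case/event')

#caseevent.xml is assumed to contain the ce:caseEvent element from the sample logging above
event = ET.parse('caseevent.xml').getroot()
record = {
    'eventType': event.get('eventType'),
    'caseId': event.find('ce:caseId', ns).text,
    'updatedDate': event.find('ce:updatedDate', ns).text,
    'activityName': event.find('ce:activityEvent/ce:activityName', ns).text
}
#a real implementation would insert this record into a database table
print record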

Resources

EDN Debugging
https://blogs.oracle.com/ateamsoab2b/entry/edn_debugging

SOA 12c – EDN –Using AQ JMS
https://svgonugu.wordpress.com/2014/12/31/soa-12cednusing-aq-based-jms/

BPM Suite 11.1.1.7 with Adaptive Case Management (ACM) User Interface available for download
https://blogs.oracle.com/soacommunity/entry/bpm_suite_11_1_11

How to Publish Case Events
http://docs.oracle.com/cd/E28280_01/doc.1111/e15176/case_mgmt_bpmpd.htm#BPMPD87511


Rapid creation of Virtual Machine(s) for SOA Suite 12.1.3 server run time environment – leveraging Vagrant, Puppet and Biemond
https://technology.amis.nl/2014/07/31/rapid-creation-of-virtual-machines-for-soa-suite-12-1-3-server-run-time-environment-leveraging-vagrant-puppet-and-biemond/


How to Configure JMS-based EDN Implementations
http://docs.oracle.com/cd/E23943_01/dev.1111/e10224/obe_intro.htm#SOASE295

  
Sample case and logger process
https://dl.dropboxusercontent.com/u/6693935/blog/SOAApplication.zip 

Create a release of artifacts. Automate adding Maven dependencies

"Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time." (from here)

Software artifacts are developed using a build pipeline. This pipeline consists of several steps to provide quick feedback on software quality by means of code quality checks, automated tests, test coverage checks, etc. When the software is done (adhering to a Definition of Done (DoD)), it is released. This release as a whole is then tested and promoted until it reaches a production environment. In the meantime, work on a next release has already started. This process is shown in the below image, which is a minimal example (especially on the test part). You can see there are 3 releases in progress; this is only to illustrate the process. You should of course try to limit the number of releases which are not yet in production to reduce the overhead of fixes on those releases.



Automation of the release process is often a challenge. Why is this difficult? One of the reasons is the identification of what should be in a release. This is especially so when the process of creating a release is not automated. There is a transition phase (in the image between Test phase 2 and Test phase 3) where the unit (artifact) build pipeline stops and the release as a whole continues to be promoted through the environments. In the image of the process above, you can easily identify where you can automate the construction of a release: at the end of the unit build pipeline. This is where you can identify the unit which has been approved by means of the different test phases / quality checks in the unit build pipeline, and you know the release the unit has to be put in to be propagated as a whole. Why not add the unit to the release in an automated fashion there?

Automation

Artifact repository, Maven POM and dependencies

A common practice is to put artifacts in an artifact repository. Artifact repositories are often Maven compliant (e.g. Nexus, Artifactory) so you can identify your artifact with Maven GAV coordinates (groupId, artifactId, version). What better way to describe a collection of artifacts which can be identified with GAV attributes than a Maven POM file? You can define your artifacts as dependencies in your POM file. The Maven assembly plugin can then download those artifacts and put them in a specific structure to allow easy deployment. In case a dependency is already there, you want to update the version in the POM. If it is not there, you want to add it.

Automating adding dependencies to a POM

Below is a short Python (2.7) script to add dependencies to a POM file if the dependency is not there yet or to update the version of the dependency if it already is there.

import os
import xml.etree.ElementTree as ET
import xml.dom.minidom as minidom
import sys,re
import argparse

#script updates a pom.xml file with a specific artifactid/groupid/version/type/classifier dependency
#if the dependency is already there, the version is checked and updated if needed
#if the dependency is not there, it is added
#the comparison of dependencies is based on artifactid/groupid/type (and optionally classifier). other fields are ignored
#the pom file should be in UTF-8

#set the default namespace of the pom.xml file
pom_ns = dict(pom='http://maven.apache.org/POM/4.0.0')
ET.register_namespace('',pom_ns.get('pom'))

#parse the arguments
parser = argparse.ArgumentParser(description='Update pom.xml file with dependency')
parser.add_argument('pomlocation', help='Location on the filesystem of the pom.xml file to update')
parser.add_argument('artifactid', help='ArtifactId of the artifact to update')
parser.add_argument('groupid', help='GroupId of the artifact to update')
parser.add_argument('version', help='Version of the artifact to update')
parser.add_argument('type', help='Type of the artifact to update')
parser.add_argument('--classifier', help='Classifier of the artifact to update',default=None)
args = parser.parse_args()

pomlocation=args.pomlocation
artifactid=args.artifactid
groupid=args.groupid
version=args.version
type=args.type
classifier=args.classifier

#read a file and return an ElementTree
def get_tree_from_xmlfile(filename):
    if os.path.isfile(filename):
        tree = ET.parse(filename)
        return tree
    else:
        raise Exception('Error opening '+filename)

#obtain a specific element from an ElementTree based on an xpath
def get_xpath_element_from_tree(tree,xpath,namespaces):
    return tree.find(xpath, namespaces)

#returns the content of an element as a string
def element_to_str(element):
    return ET.tostring(element, encoding='utf8', method='xml')

#returns an ElementTree as a pretty printed string
def elementtree_to_str(et):
    root=et.getroot()
    ugly_xml = ET.tostring(root, encoding='utf8', method='xml')
    dom=minidom.parseString(ugly_xml)
    prettyXML=dom.toprettyxml('\t','\n','utf8')
    trails=re.compile(r'\s+\n')
    prettyXML=re.sub(trails,"\n",prettyXML)
    return prettyXML

#creates an Element object with artifactId, groupId, version, type, classifier elements (used to append a new dependency). classifier is left out if None
def create_dependency(param_groupid,param_artifactid,param_version,param_type,param_classifier):
    dependency_element = ET.Element("dependency")
    groupid_element = ET.Element("groupId")
    groupid_element.text = param_groupid
    dependency_element.append(groupid_element)
    artifactid_element = ET.Element("artifactId")
    artifactid_element.text = param_artifactid
    dependency_element.append(artifactid_element)
    version_element = ET.Element("version")
    version_element.text = param_version
    dependency_element.append(version_element)
    type_element = ET.Element("type")
    type_element.text = param_type
    dependency_element.append(type_element)
    if param_classifier is not None:
        classifier_element = ET.Element("classifier")
        classifier_element.text = param_classifier
        dependency_element.append(classifier_element)
    return dependency_element

#adds a dependency element to a pom ElementTree. the dependency element can be created with create_dependency
def add_dependency(pom_et,dependency_element):
    pom_et.find('pom:dependencies',pom_ns).append(dependency_element)
    return pom_et

#update the version of a dependency in the pom ElementTree if it is already present. else adds the dependency
#returns the updated ElementTree and a boolean indicating if the pom ElementTree has been updated
def merge_dependency(pom_et,param_artifactid,param_groupid,param_type,param_version,param_classifier):
    artifactfound=False
    pom_et_changed=False
    for dependency_element in pom_et.findall('pom:dependencies/pom:dependency',pom_ns):
        checkgroupid = get_xpath_element_from_tree(dependency_element,'pom:groupId',pom_ns).text
        checkartifactid = get_xpath_element_from_tree(dependency_element,'pom:artifactId',pom_ns).text
        checktype = get_xpath_element_from_tree(dependency_element,'pom:type',pom_ns).text
        if param_classifier is not None:
            checkclassifier_el = get_xpath_element_from_tree(dependency_element,'pom:classifier',pom_ns)
            if checkclassifier_el is not None:
                checkclassifier=checkclassifier_el.text
            else:
                checkclassifier=None
        else:
            checkclassifier = None
        if (checkgroupid == param_groupid and checkartifactid == param_artifactid and checktype == param_type and (checkclassifier == param_classifier or param_classifier is None)):
            artifactfound=True
            print 'Artifact found in '+pomlocation
            pomversion=dependency_element.find('pom:version',pom_ns).text
            if pomversion != param_version:
                print "Artifact has different version in "+pomlocation+". Updating"
                dependency_element.find('pom:version',pom_ns).text=param_version
                pom_et_changed=True
            else:
                print "Artifact already in "+pomlocation+" with correct version. Update not needed"
    if not artifactfound:
        print 'Artifact not found in pom. Adding'
        dependency_element = create_dependency(param_groupid,param_artifactid,param_version,param_type,param_classifier)
        pom_et = add_dependency(pom_et,dependency_element)
        pom_et_changed=True
    return pom_et,pom_et_changed

#read the file at the pomlocation parameter
pom_et = get_tree_from_xmlfile(pomlocation)

#merge the dependency into the obtained ElementTree
pom_et,pom_et_changed=merge_dependency(pom_et,artifactid,groupid,type,version,classifier)

#overwrite the pomlocation if it has been changed
if pom_et_changed:
    print "Overwriting "+pomlocation+" with changes"
    target = open(pomlocation, 'w')
    target.truncate()
    target.write(elementtree_to_str(pom_et))
    target.close()
else:
    print pomlocation+" does not require changes"

The script can deal with an optional classifier. When no classifier is specified, it updates matching dependencies without looking at the classifier, so be careful with this.

The script also pretty prints the POM file when updating it. This makes it easy to compare POM files across version control commits, for example to compare different releases.

Seeing it work

Example pom.xml file.

<?xml version="1.0" encoding="utf8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nl.amis.smeetsm.release</groupId>
  <artifactId>Release</artifactId>
  <packaging>pom</packaging>
  <version>1.0</version>
  <dependencies>
    <dependency>
      <groupId>nl.amis.smeetsm.functionalunit.HelloWorld</groupId>
      <artifactId>HelloWorld_FU</artifactId>
      <version>1.0</version>
      <type>pom</type>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.5.4</version>
        <configuration>
          <descriptors>
            <descriptor>release-assembly.xml</descriptor>
          </descriptors>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

The artifact HelloWorld_FU contains dependencies on other artifacts whose names end in SCA or SB to indicate whether it is a SOA Suite SCA composite artifact or a Service Bus artifact. The release-assembly.xml file below puts the different types in different directories and zips the result. This way a release zip file is created.

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">
  <id>release</id>
  <formats>
    <format>zip</format>
  </formats>
  <dependencySets>
    <dependencySet>
      <outputDirectory>/composite</outputDirectory>
      <includes>
        <include>nl.amis.smeetsm.*:*_SCA</include>
      </includes>
    </dependencySet>
    <dependencySet>
      <outputDirectory>/servicebus</outputDirectory>
      <includes>
        <include>nl.amis.smeetsm.*:*_SB</include>
      </includes>
    </dependencySet>
  </dependencySets>
</assembly>

Updating a dependency version: releasescript.py pom.xml HelloWorld_FU nl.amis.smeetsm.functionalunit.HelloWorld 2.0 pom

<?xml version="1.0" encoding="utf8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nl.amis.smeetsm.release</groupId>
  <artifactId>Release</artifactId>
  <packaging>pom</packaging>
  <version>1.0</version>
  <dependencies>
    <dependency>
      <groupId>nl.amis.smeetsm.functionalunit.HelloWorld</groupId>
      <artifactId>HelloWorld_FU</artifactId>
      <version>2.0</version>
      <type>pom</type>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.5.4</version>
        <configuration>
          <descriptors>
            <descriptor>release-assembly.xml</descriptor>
          </descriptors>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

Adding a dependency version: releasescript.py pom.xml ByeWorld_FU nl.amis.smeetsm.functionalunit.ByeWorld 2.0 pom

<?xml version="1.0" encoding="utf8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nl.amis.smeetsm.release</groupId>
  <artifactId>Release</artifactId>
  <packaging>pom</packaging>
  <version>1.0</version>
  <dependencies>
    <dependency>
      <groupId>nl.amis.smeetsm.functionalunit.HelloWorld</groupId>
      <artifactId>HelloWorld_FU</artifactId>
      <version>2.0</version>
      <type>pom</type>
    </dependency>
    <dependency>
      <groupId>nl.amis.smeetsm.functionalunit.ByeWorld</groupId>
      <artifactId>ByeWorld_FU</artifactId>
      <version>2.0</version>
      <type>pom</type>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.5.4</version>
        <configuration>
          <descriptors>
            <descriptor>release-assembly.xml</descriptor>
          </descriptors>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

Finally

When you want to start with the next release, you should create a branch in your version control system of the current release. This way you can separate releases in version control and can also easily create fixes on existing releases.

How to use WLST as a Jython 2.7 module

WebLogic Scripting Tool (WLST) in WebLogic Server 12.1.3 uses Jython version 2.2.1 (based on Python 2.2.1). This can be an important limitation when using WLST. Many modules are not available for 2.2.1 or are difficult to install. See here for an example. WLST however can be used as a module in Jython 2.7. This allows you to use all kinds of nice Jython 2.7 goodness while still having all the great WLST functionality available.

To just name some nice Jython 2.7 features:

  • pip and easy_install can be used to easily add new modules
  • useful new APIs are available, such as xml.etree.ElementTree for XML processing, the multiprocessing module to parallelize work over multiple processes and the argparse module to make parsing of script arguments easy.

In this article I’ll describe how you can use WLST as a Jython 2.7 module in order to allow you to combine the best of both worlds in your scripts.


Ready Jython

First you need to install Jython. You can obtain Jython from: http://www.jython.org/.

Obtain the classpath

In order for WLST as a module to function correctly, it needs its dependencies on the classpath. That classpath is normally built up by several scripts, such as:
  • <WLS_HOME>/wlserver/server/bin/setWLSEnv.sh
  • <WLS_HOME>/oracle_common/common/bin/wlst.sh
  • <WLS_HOME>/osb/tools/configjar/wlst.sh
  • <WLS_HOME>/soa/common/bin/wlst.sh
  • <WLS_HOME>/wlserver/common/bin/wlst.sh
It can be a challenge to abstract the logic used to obtain a complete classpath from those scripts. Why make it difficult for yourself? Just ask WLST:

<WLS_HOME>/soa/common/bin/wlst.sh

This will tell you the classpath. Even though this is usually a long list, it is not enough! You also need wlfullclient.jar (see here on how to create it). Apparently there are also some JARs which are used but are not in the default WLST classpath, such as several <WLS_HOME>/oracle_common/modules/com.oracle.cie.* files. Just add <WLS_HOME>/oracle_common/modules/* to the classpath to fix issues like:

 java.lang.RuntimeException: java.lang.RuntimeException: Could not find the OffLine WLST class  
weblogic.management.scripting.utils.WLSTUtil.getOfflineWLSTScriptPathInternal

You can remove overlapping classpath entries. Since <WLS_HOME>/oracle_common/modules/* is in the classpath, you don’t need to mention individual modules anymore.
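
Removing those overlapping entries is easily scripted as well; a small sketch that deduplicates a colon separated classpath while keeping the original order (the input value is a shortened, hypothetical example):

#classpath as printed by wlst.sh; shortened, hypothetical example
classpath = '/a/weblogic.jar:/b/fabric-runtime.jar:/a/weblogic.jar'

seen = set()
unique = []
for entry in classpath.split(':'):
    if entry and entry not in seen:
        seen.add(entry)
        unique.append(entry)
print ':'.join(unique)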

Obtain the module path

Jython needs a module path in order to find the modules used by WLST which are hidden in several JAR files. Again, simply ask WLST for it. Start

<WLS_HOME>/soa/common/bin/wlst.sh

And issue the command:

print sys.path

It will give you something like

 ['.', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/modules/features/weblogic.server.merged.jar/Lib', '__classpath__', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/server/lib/weblogic.jar', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst/modules/jython-modules.jar/Lib', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst/lib', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst/modules', '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/wlst', '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/wlst/lib', '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/wlst/modules', '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/script_handlers', '/home/maarten/Oracle/Middleware1213/Oracle_Home/soa/common/script_handlers', '/home/maarten/Oracle/Middleware1213/Oracle_Home/soa/common/wlst', '/home/maarten/Oracle/Middleware1213/Oracle_Home/soa/common/wlst/lib', '/home/maarten/Oracle/Middleware1213/Oracle_Home/soa/common/wlst/modules']  


Interesting to see where Oracle has hidden all those modules. You can add them to the Jython module path by setting the PYTHONPATH variable.

Create a Jython start script

The easiest way to make sure your classpath and Python module path are set prior to executing a script is to create a Jython start script (similar to wlst.sh). My start script looked like:

startjython.sh

export WL_HOME=/home/maarten/Oracle/Middleware1213/Oracle_Home  

export CLASSPATH=$WL_HOME/oracle_common/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar:$WL_HOME/oracle_common/soa/modules/commons-cli-1.1.jar:$WL_HOME/soa/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar:$WL_HOME/soa/soa/modules/commons-cli-1.1.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/fabric-runtime.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/soa-infra-tools.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/tracking-core.jar:$WL_HOME/soa/soa/modules/oracle.soa.workflow_11.1.1/bpm-services.jar:$WL_HOME/soa/soa/modules/chemistry-opencmis-client/chemistry-opencmis-client.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/testfwk-xbeans.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/oracle-soa-client-api.jar:$WL_HOME/soa/soa/modules/oracle.bpm.alm.script-legacy.jar:$WL_HOME/soa/soa/modules/oracle.bpm.bac.script.jar:$WL_HOME/oracle_common/modules/com.oracle.webservices.fmw.wsclient-rt-impl_12.1.3.jar:$WL_HOME/oracle_common/modules/com.oracle.classloader.pcl_12.1.3.jar:$WL_HOME/oracle_common/modules/org.apache.commons.logging_1.0.4.jar:$WL_HOME/oracle_common/modules/org.apache.commons.beanutils_1.6.jar:$WL_HOME/oracle_common/modules/oracle.ucp_12.1.0.jar:$WL_HOME/soa/soa/modules/oracle.rules_11.1.1/rulesdk2.jar:$WL_HOME/soa/soa/modules/oracle.rules_11.1.1/rl.jar:$WL_HOME/oracle_common/modules/oracle.adf.model_12.1.3/adfm.jar:$WL_HOME/oracle_common/modules/oracle.jdbc_12.1.0/ojdbc6dms.jar:$WL_HOME/oracle_common/modules/oracle.xdk_12.1.3/xmlparserv2.jar:$WL_HOME/oracle_common/modules/*:$WL_HOME/jdeveloper/wlserver/lib/wlfullclient.jar:$WL_HOME/oracle_common/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar:$WL_HOME/oracle_common/soa/modules/commons-cli-1.1.jar:$WL_HOME/soa/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar:$WL_HOME/soa/soa/modules/commons-cli-1.1.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/fabric-runtime.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/soa-infra-tools.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/tracking-core.jar:$WL_HOME/soa/soa/modules/oracle.soa.workflow_11.1.1/bpm-services.jar:$WL_HOME/soa/soa/modules/chemistry-opencmis-client/chemistry-opencmis-client.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/testfwk-xbeans.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/oracle-soa-client-api.jar:$WL_HOME/soa/soa/modules/oracle.bpm.alm.script-legacy.jar:$WL_HOME/soa/soa/modules/oracle.bpm.bac.script.jar:$WL_HOME/oracle_common/modules/com.oracle.webservices.fmw.wsclient-rt-impl_12.1.3.jar:$WL_HOME/oracle_common/modules/com.oracle.classloader.pcl_12.1.3.jar:$WL_HOME/oracle_common/modules/org.apache.commons.logging_1.0.4.jar:$WL_HOME/oracle_common/modules/org.apache.commons.beanutils_1.6.jar:$WL_HOME/oracle_common/modules/oracle.ucp_12.1.0.jar:$WL_HOME/soa/soa/modules/oracle.rules_11.1.1/rulesdk2.jar:$WL_HOME/soa/soa/modules/oracle.rules_11.1.1/rl.jar:$WL_HOME/oracle_common/modules/oracle.adf.model_12.1.3/adfm.jar:$WL_HOME/oracle_common/modules/oracle.jdbc_12.1.0/ojdbc6dms.jar:$WL_HOME/oracle_common/modules/oracle.xdk_12.1.3/xmlparserv2.jar

export PYTHONPATH=.:$WL_HOME/wlserver/modules/features/weblogic.server.merged.jar/Lib:$WL_HOME/wlserver/server/lib/weblogic.jar:$WL_HOME/wlserver/common/wlst/modules/jython-modules.jar/Lib:$WL_HOME/wlserver/common/wlst:$WL_HOME/wlserver/common/wlst/lib:$WL_HOME/wlserver/common/wlst/modules:$WL_HOME/oracle_common/common/wlst:$WL_HOME/oracle_common/common/wlst/lib:$WL_HOME/oracle_common/common/wlst/modules:$WL_HOME/oracle_common/common/script_handlers:$WL_HOME/soa/common/script_handlers:$WL_HOME/soa/common/wlst:$WL_HOME/soa/common/wlst/lib:$WL_HOME/soa/common/wlst/modules

/home/maarten/jython2.7.0/bin/jython "$@"
exit $?

You can see that the PYTHONPATH is created from some search and replace actions on the output of print sys.path executed in WLST: I removed the square brackets and quotes, replaced the commas with ':' and removed the extra spaces. Also I replaced my WL_HOME with a variable, just to make the script look nicer and more reusable. For a Windows script the search and replace actions are slightly different, for example ';' as path separator and set instead of export.
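
Instead of doing this search and replace by hand, a few lines of Python can do the conversion; a sketch that turns the printed sys.path list into a PYTHONPATH value (it assumes the output of print sys.path was saved to a file called syspath.txt and uses ':' as path separator; on Windows use ';'):

import ast

#syspath.txt is assumed to contain the list as printed by 'print sys.path' in WLST
with open('syspath.txt') as f:
    entries = ast.literal_eval(f.read().strip())

#drop the '.' and '__classpath__' entries and join the rest with the path separator
pythonpath = ':'.join(e for e in entries if e not in ('.', '__classpath__'))
print 'export PYTHONPATH=.:' + pythonpath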

You can use the start script in the same way as the wlst start script. You only have to mind that using WLST as a module requires some minor changes to WLST scripts. See below.

Ready the WLST module

In order to use WLST as a module in Jython 2.7 you need to generate a wl.py file. This is described here. Actually starting wlst.sh and executing: writeIniFile("wl.py") is enough.

When using the module though, the following exception is raised:
 Traceback (most recent call last):  
File "sample.py", line 8, in <module>
import wl
File "/home/maarten/tmp/wl.py", line 13, in <module>
origPrompt = sys.ps1
AttributeError: '<reflected field public org.python.core.PyObject o' object has no attribute 'ps1'

WLST apparently has some shell specific prompt handling code. It is easy to get rid of this exception though, by replacing the following line in wl.py

origPrompt = sys.ps1

With

origPrompt = ">>>"

This origPrompt looks pretty much like my default prompt and I didn't encounter any errors after setting it like this.

Seeing it work

My directory contains the wl.py module, generated as explained above with origPrompt replaced.

Next my listserver.py script:
import wl

wl.connect("weblogic","Welcome01", "t3://localhost:7101")
mbServers= wl.getMBean("Servers")
servers= mbServers.getServers()
print( "Array of servers: " )
print( servers )
for server in servers :
    print( "Server Name: " + server.getName() )
print( "Done." )
Because WLST is used as a module, you need to call wl.connect instead of connect, and similarly for other calls from the wl module. Otherwise you will get exceptions like:
 Traceback (most recent call last):  
File "listserver.py", line 9, in <module>
connect("weblogic","Welcome01", "t3://localhost:7101")
NameError: name 'connect' is not defined
The output when using my startjython.sh script as explained above:
 startjython.sh listserver.py  
Connecting to t3://localhost:7101 with userid weblogic ...
Successfully connected to Admin Server "DefaultServer" that belongs to domain "DefaultDomain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

Array of servers:
array(weblogic.management.configuration.ServerMBean, [[MBeanServerInvocationHandler]com.bea:Name=DefaultServer,Type=Server])
Server Name: DefaultServer

Done.
Installing the logging module becomes
 jython2.7.0/bin/pip install logging  
Downloading/unpacking logging
Downloading logging-0.4.9.6.tar.gz (96kB): 96kB downloaded
Running setup.py (path:/tmp/pip_build_maarten/logging/setup.py) egg_info for package logging

Installing collected packages: logging
Running setup.py install for logging

Successfully installed logging
Cleaning up...
And of course using the logger also works.
 import logging  
logging.basicConfig()
log = logging.getLogger("MyFirstLogger")
log.setLevel(logging.DEBUG)
log.info("That does work =:-)")
Output:
 INFO:MyFirstLogger:That does work =:-)  
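
To close off, a sketch that combines the wl module with the argparse module from Jython 2.7 so the connection details no longer have to be hardcoded (it assumes the wl.py module and the startjython.sh setup described above; the argument values are just examples):

import argparse
import wl

parser = argparse.ArgumentParser(description='List the servers of a WebLogic domain')
parser.add_argument('url', help='Admin server url, for example t3://localhost:7101')
parser.add_argument('username')
parser.add_argument('password')
args = parser.parse_args()

wl.connect(args.username, args.password, args.url)
for server in wl.getMBean("Servers").getServers():
    print server.getName()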

How to get most out of your PaaS solution

Many companies are starting to implement PaaS (platform as a service) solutions. There are obvious benefits such as easy patching, scaling, pay per use, etc. There are also challenges when implementing a PaaS solution. In this post I will describe some of the challenges and provide some suggestions to allow you to enjoy your PaaS solution to the fullest.

The post is based on my presentation of a customer case at the Oracle Cloud day in the Netherlands on the 6th of October this year.

https://www.oracle.com/cloudday/index.html
This customer had implemented a private PaaS solution and had faced several challenges. In this post I would like to present some of my findings (mainly based on interviews) in order to help customers who are considering a PaaS solution make a good start with their implementation. The challenges and benefits are valid for both private and public PaaS solutions.

Start with PaaS, then application

Why migrate if you can have it from the start?
Choose your timing for implementing your PaaS solution. When you already have a running project and all environments including production are more or less running OK, who is going to take responsibility for the migration to the PaaS solution? It is better to have your PaaS ready before your project starts (or as the start of your project) and require your project to use it. This avoids the responsibility discussion and additional migration cost.

Automating from the start is easier than automating in retrospect
Using PaaS features, such as provisioning new environments, should be done regularly (if it hurts, do it more often), for example recreating the test environment after every sprint in Scrum. This makes sure there is a drive to automate everything from the start. It is a lot harder to determine what has been done to an environment in retrospect and automate that than to start with a clean slate and automate as you go.

Automate configuration

There is a grey area between provisioning and application deployment: the application specific environment configuration, for example data sources, JMS queues, etc. It greatly helps if your provisioning software allows you to treat configuration as code, version it and automatically deploy it. MyST is an example of such a tool. This avoids issues like configuration drift, which you want to avoid since it costs time and causes frustration (speaking from experience as a developer).

Image borrowed from http://www.rubiconred.com/eliminating-configuration-drift-for-oracle-soa-and-bpm-projects/
Development has PaaS requirements

Development and operations need to work together on this. The PaaS solution is maintained by operations. Changes to the environment (patches, configuration), whether initiated by operations or development need to be incorporated in the PaaS template to ensure this is propagated to new environments. Developers have requirements. Those requirements should be taken into account by operations when creating or implementing your PaaS solution.
http://javaoraclesoa.blogspot.nl/2014/12/some-thoughts-on-continuous-delivery.html
PaaS and Continuous Delivery

Continuous Delivery and PaaS go well together. Automated environment provisioning and an automated release and deployment process of course allow you to quickly get a new environment up and running or to refresh/reinstall an environment from scratch.

Your continuous delivery process needs to make sure that the complete set of software to be deployed on the PaaS solution, is known. This is required to recreate an environment from scratch. There are of course several ways to achieve this. One method is described here.

For a service or a web application this is easy. You can in most cases just deploy the latest version. For database software however, this can be a different story. Usually you have to work with a certain set of tables and data which cannot simply be overwritten by a new version. There are tools which help you create incremental database releases, such as Liquibase. Many customers also create custom scripts to achieve similar functionality. What you should not do for database software is maintain a complete set of scripts at a separate location from the releases without regularly testing those scripts. Developers can (and will) forget to update them, and if you cannot trust the scripts, there is no use in having them. It helps if your database scripts are rerunnable.
http://javaoraclesoa.blogspot.nl/2015/09/create-release-of-artifacts-automate.html
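
A minimal sketch of such a custom rerunnable approach: keep track of the scripts which have already been applied in a table and only apply new ones. In the sketch sqlite3 is used purely to keep it self-contained; with a real Oracle database you would use a driver such as cx_Oracle, and the table and directory names are assumptions:

import glob
import sqlite3  #stand-in for a real database connection, only to keep the sketch self-contained

conn = sqlite3.connect('releases.db')
conn.execute('CREATE TABLE IF NOT EXISTS applied_scripts (name TEXT PRIMARY KEY)')

for script in sorted(glob.glob('db/*.sql')):
    if conn.execute('SELECT 1 FROM applied_scripts WHERE name=?', (script,)).fetchone():
        print 'Skipping ' + script + ' (already applied)'
        continue
    print 'Applying ' + script
    conn.executescript(open(script).read())
    conn.execute('INSERT INTO applied_scripts (name) VALUES (?)', (script,))
    conn.commit()
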
Standards, consistency and few artifact types

Your development standards can take into account that for every type of artifact you want to deploy, automation is required. For example, there are several ways to use properties in J2EE containers. Choose one and be consistent in using it. More consistency makes automation a lot easier. If every team uses their own property file deployment method / artifact structure, things will get messy. Fewer types of artifacts require less automation.

For databases, it helps if you deploy through a single user or schema, since every schema you want to deploy to requires several properties (such as host, port, sid, user, password) which need to be configured for every environment.


PaaS often requires specific automation tools

A PaaS solution often requires automation methods which differ from the ones the operations department is familiar with. You need unattended installation scripts, for example, and usually the scripting languages used differ. If, for example, the customer uses shell scripts for automation and the PaaS solution does not give you access to the OS (it is a PaaS solution, not an IaaS solution), you need to rewrite your shell scripts as, for example, WLST scripts. Tools like Puppet and Chef are regularly used for provisioning. You might need to gain (or hire) some experience with them as well.



Don't repeat past mistakes. Dare to change!

Last but certainly not least, do not make the same mistakes you made with the 'do-it-yourself' environments on the PaaS solution. This is a chance to start anew. I suggest you take it, learn from mistakes made in the past and do not repeat them.


Quick overview of SOA Suite 12.2.1 new features

Oracle has just released SOA Suite 12.2.1 which contains several exciting new features. The below entries have been shamelessly copied from the developer's guide in order to provide a quick overview of the highlights of this SOA Suite release. At the end of the article there are also some links to new features of WebLogic Server 12.2.1, which has also been released.

Patching running instances

See Patching Running Instances of a SOA Composite.

Oracle SOA Suite 12c (12.2.1) supports Composite Instance Patching, which enables you to patch running instances of a composite and recover faulted instances after patching the runtime. You can only include those fixes in the patch that are compatible with Composite Instance Patching. Use the SOA Patch Developer role in Oracle JDeveloper to make the fixes and create the patch.

Composite Instance Patching enables you to deliver urgent composite fixes that can be picked up by long running instances. You can make compatible/allowed changes without aborting in-flight instances. If a patched running instance comes across a business process that has been fixed by the patch, say a BPEL transformation, then it picks up the fixes applied to the business process.

When designing the patch, the SOA Patch Developer mode in JDeveloper automatically disables changes that cannot be made to the patch. Some of the compatible changes that you can make include:

  • Non-schema related XSLT changes, changes to fault policy, sensor data, and analytics data.
  • Compatible BPEL changes such as transformation activity, assign operations, etc.
  • JCA Adapter configuration properties.

In-Memory SOA

See Using In-Memory SOA to Improve System Performance.

You can leverage the Coherence cache associated with WebLogic Server to run your non-transactional business processes in memory. This improves performance and scalability for these business processes, as read and write operations are performed out of the cache. Database performance and management also improves, as the costs associated with continuous disk reads and writes are significantly reduced.

In-memory SOA enables short-running processes to live in memory. The process state gets written to the database only when faulted, or at regular, deferred intervals using a write-behind thread. The BPEL state information is dehydrated and rehydrated to/from the Coherence cache.

Support for debugging XSLT maps

See Debugging the XSLT Map.

Starting in 12.2.1, you can debug your XSLT maps using the SOA Debugger. You can add breakpoints at strategic locations in the XSLT map. When debugging, the debugger halts execution at the breakpoints, enabling you to verify the data and output.

XSLT maps can be complex, making them difficult to debug. For example, you may have a Java function, or other functionality, that is best tested in the application server. Also, you might find it easier to debug in the application environment, as the XSLT may be invoked from many different applications in the server. The SOA debugger provides remote debugging capability for XSLT maps that have been deployed in the application server.

You can also use the debugger with your Oracle Service Bus projects.

Support for End-to-End JSON and JavaScript

See Integrating REST Operations in SOA Composite Applications.

Starting in 12.2.1, your SOA composites can use end-to-end JSON. This means that the REST service can receive the REST request and route it to the BPEL engine without translating it to XML. The BPEL component can use the JavaScript action, and also use JavaScript in conditional and iterative constructs, to work on JSON objects directly. The REST reference can receive the REST message from the BPEL engine and route it to an external REST endpoint without translation.

The REST interfaces and BPEL component support end-to-end JSON. However, if you are using other service components, like the Mediator, you need to use the 12.1.3–style composite that internally maps REST resources and verbs to WSDL operations and XML schemas, and translates the incoming payload into XML.

Running on WebLogic Server 12.2.1

Of course SOA Suite 12.2.1 runs on WebLogic Server 12.2.1, which also has several interesting new features. See here.


Among several other things, it is interesting to read that the WebLogic full client is being deprecated.

SOA Suite 12.2.1: A first look at end-to-end JSON support in SOA Composites

SOA Suite 12.2.1 introduces end-to-end JSON support in composites, support for JavaScript in BPEL and a JavaScript embedding activity. The REST-binding (which can be used by Service Bus, BPEL, BPM) can receive and send untyped JSON without the need to translate it to XML. In BPEL, JavaScript can be used as expression language in various activities and there is a JavaScript embedding activity available.

In this article I'll show some examples of what you can do with this end-to-end JSON support and of how to use JavaScript in your BPEL process.


About the implementation

Oracle has used the Mozilla Rhino JavaScript engine, which is the engine embedded in Java SE 6 and 7. WebLogic Server 12.2.1, and thus also SOA Suite 12.2.1, runs on Java SE 8 (Java EE 7). Java SE 8 has a new JavaScript engine, Nashorn, embedded (see here). A possible reason for Oracle choosing Rhino could be that Nashorn is not thread safe (see here); you can imagine that you need thread safety in an environment like SOA Suite.

SOAIncomingRequests_maxThreads is a property indicating the number of threads available for incoming requests. By default it is set to the same value as the connection pool size of the SOADataSource. This might not be enough for REST/JSON services. You can find this setting in the MBean browser under Configuration MBeans, com.bea, SelfTuning, (domain), MaxThreadsConstraint.

Use untyped JSON

You can use untyped JSON in various BPEL activities such as assign and assert activities. Since the JSON is untyped, you can assign values to elements which do not exist yet and they will be created; there is no message definition. Because of this, enabling payload validation will cause a NullPointerException.

In order to achieve an end-to-end JSON BPEL process, you first create a REST Binding and indicate you do not want to invoke components based on a WSDL interface.


Next, configure the binding and indicate the request and response will be JSON.



Create a new BPEL process based on this REST binding.


Now the BPEL process will use end-to-end JSON. It is useful to set the default expression language to JavaScript instead of XPath since you will be working with JSON. Right-click your process, choose Edit and set the expression language to JavaScript. As shown in the screenshot below, you can also set the query language to JavaScript.


You can create an assign activity which assigns a JavaScript expression to the result variable. In this case the 'resultfield' object is created in the resulting JSON.
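
A minimal sketch of what such an assign expression could look like. Only the 'resultfield' name is taken from the example above; the variable names inputVar and outputVar and the 'name' field are assumptions for illustration and depend on your own process:

// the JSON is untyped, so assigning to resultfield simply creates it in the output
process.outputVar.resultfield = "Hello " + process.inputVar.name;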


As usual, you can test the service in the EM Fusion Middleware Control.



Use JavaScript

You can use JavaScript inline (in a JavaScript embedding activity) or import a JavaScript file which is present in the project. Putting JavaScript files in the MDS is currently not supported.

Importing a JavaScript file from within BPEL can look like:
<bpelx:js include="jslib/main.js"/>
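
A minimal sketch of what such an included file could contain. The file name jslib/main.js comes from the include above; the function name and its body are assumptions for illustration:

// jslib/main.js - a hypothetical helper library referenced by the include above
function greet(name) {
    // plain JavaScript; no BPEL-specific objects are needed in a library function
    return "Hello " + name;
}

Functions defined in an included file can then be called from JavaScript expressions and embedding activities in the process, for example greet(process.inputVar.name) (where inputVar and name are assumed names).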

The JavaScript embedding object is of a type from the org.mozilla.javascript package and has various properties. You can determine the properties of an object by looping over them:

for (var property in this) {
    audit.log("output: ", property);
}

Some examples of how you can use these properties can be found here; a combined sketch follows the list below.

  • xpath can be used to execute XPath expressions. For example: xpath.ora.getECID()
  • console and audit can be used for logging to the console or audit log. For example: audit.log("Hello audit log") 
  • process allows you to access process variables such as input and output. For example: process.outputVar.result = new Object();
  • bpel can help you with expressions in activities. For example: bpel.until(process.counter > 3)
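
A minimal sketch of a JavaScript embedding activity body that combines the properties above. The calls themselves are taken from the examples in the list; the variable names inputVar and outputVar and the field names are assumptions for illustration:

// log the ECID of the current instance to the audit trail
audit.log("Processing instance with ECID: " + xpath.ora.getECID());
// create the result object in the (assumed) output variable and fill a field from the input
process.outputVar.result = new Object();
process.outputVar.result.echo = process.inputVar.name;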

If I assign a value to a variable from within the JavaScript embedding activity, that value is available inside the BPEL process:
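
A sketch of how this could look inside the embedding activity, based on the process object described above; the variable name myvar follows the example below and the assigned string is just an illustration:

// set a variable called myvar from within the JavaScript embedding activity
process.myvar = "Hello from the embedding activity";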


In an assign activity I can then use myvar to access the string.


Finally

The end-to-end JSON and JavaScript support are both powerful additions to Oracle SOA Suite. They have the potential (I have not measured this yet) of providing a significant performance gain when using BPEL to orchestrate JSON/REST services, since translation to XML can be avoided. When you enable end-to-end JSON support in your BPEL process, it remains possible to call SOAP services and use the adapters, so mixing REST/JSON and SOAP/XML in the same process becomes relatively easy.

I have yet to research the possibilities of using publicly available JavaScript libraries inside BPEL processes. Most libraries are front-end oriented, but I can imagine there are JavaScript libraries available which provide new functionality to JavaScript running inside the JVM, and which can then be used in BPEL. I also have not yet studied the limitations of using JavaScript from the JVM compared to using it from a browser. Running in the JVM, JavaScript allows easy integration with Java, which might also be interesting to look at; can we use JavaScript to more easily interact with SOA Suite APIs?

This new feature of using JavaScript with BPEL of course also requires some standards. I can imagine it is useful to put most JavaScript functionality in a library and include that, instead of having difficult-to-maintain JavaScript embedding activities. You can then also use, for example, TypeScript to allow easy editing and syntax checking of the JavaScript library.