Channel: Oracle SOA / Java blog

Automate calls to SOAP and REST webservices using simple Python scripts

Probably not many people will tell you running batches over webservices is a good idea. Sometimes though, it can be handy to have a script available to generate webservice calls based on a template message with variables and automate processing the response messages. In addition, if you have a large number of calls, executing the calls in parallel might save you a lot of time if your service platform can handle the concurrency.


Scripts such as this might help bridge part of the gap between the old-fashioned batch-oriented world and the service world. You can for example use them to call services based on a text-file export from an old system to create certain entities in a modern system which offers APIs. Scripts such as these should of course not be used to perform structured, regular integration, but they are valuable as one-off solutions. The provided scripts come with no warranties or guarantees of any nature; most likely you will need to adapt them to your specific use case.

Python setup

The scripts below require Python 3.6 with the requests module installed (pip install requests). The other modules used are present by default in a regular Python installation. I've used PyCharm by JetBrains as IDE. Without an IDE, you can also just install Python 3.6 from python.org here and run the script.


Performing calls to SOAP services

SOAP services use the same endpoint for all requests. Based on the operation (the SOAPAction HTTP header), a different part of the service is executed. The message contents can differ. For example, you might want to call a service for a list of different customers. A template message is ideally suited for such a case. The Python 3.6 script linked below does just that: it generates messages based on a template and an input file and fires them at a service endpoint with a specified number of concurrent threads. After a response is received, it is parsed and a specific field from the response is saved in an output file. Errors are saved in a separate file.

You can view the script here. The script is slightly more than 50 lines and contains Python samples of (among other things):
  • How to execute SOAP calls (POST request with HTTP headers) with the requests module
  • How to work with a message template and variables with string.Template
  • How to concurrently execute calls with concurrent.futures
  • How to parse SOAP responses with xml.etree.ElementTree
The line Maarten in input.txt will give you Maarten : Hello Maarten in the outputok.txt file if the call succeeded. I've used a simple SOA Suite test service which you can also find in the mentioned directory.

Performing calls to REST services

When working with REST services, the URL usually contains variables. In this example I'm calling an online, publicly available API of the Dutch Chamber of Commerce to search for companies based on their file number (KvK number). When I receive the result, I check whether the company is found and has only a single location. In other cases, I consider it an error.

You can view the script here. It contains samples of (among other things) how to do URL manipulation and GET requests. The parsing of the response for this sample is extremely simple: I just check whether the result document contains specific text strings. For a 'real' REST service you might want to do more thorough JSON parsing, but for this example I've kept the code as simple and short as possible.

Java: How to fix Spring @Autowired annotation not working issues

Spring is a powerful framework, but it requires some skill to use efficiently. When I started working with Spring a while ago (actually Spring Boot to develop microservices) I encountered some challenges related to dependency injection and using the @Autowired annotation. In this blog I'll explain the issues and possible solutions. Do note that since I do not have a long history with Spring, the provided solutions might not be the best ones.
Introduction to @Autowired

In Spring 2.5 (2007), a new feature became available: the @Autowired annotation. What this annotation basically does is provide an instance of a class when you request it, for example in an instance variable of another class. You can do things like:

@Autowired
MyClass myClass;

This causes myClass to automagically be assigned an instance of MyClass if certain requirements are met.

How does it know which classes can provide instances? The Spring framework does this by performing a scan of components when the application starts. In Spring Boot the @SpringBootApplication annotation provides this functionality. You can use the @ComponentScan annotation to tweak this behavior if you need to. Read more here.
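As a minimal illustration (the package names are made up), you can widen the scanned packages via the scanBasePackages attribute of @SpringBootApplication, which configures the underlying @ComponentScan:

package com.example.app;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// By default only the package of this class (and its sub-packages) is scanned.
// scanBasePackages widens the scan to an additional, here hypothetical, package.
@SpringBootApplication(scanBasePackages = {"com.example.app", "com.example.shared"})
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}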

The classes of which instances are acquired also have to be known to the Spring framework (to be picked up by the component scan), so they require a Spring annotation such as @Component, @Repository, @Service, @Controller or @Configuration. Spring manages the life cycle of instances of those classes. They are known in the Spring context and can be used for injection.

Order of execution

When a constructor of a class is called, the @Autowired instance variables do not contain their values yet. If you are dependent on them for the execution of specific logic, I suggest you use the @PostConstruct annotation. This annotation allows a specific method to be executed after construction of the instance and also after all the @Autowired instances have been injected.
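A minimal sketch of this (the class and method names are made up; MyClass is assumed to be a Spring-managed component):

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    @Autowired
    MyClass myClass;

    public MyService() {
        // myClass is still null here; injection has not happened yet
    }

    @PostConstruct
    public void init() {
        // myClass has been injected at this point, so it is safe to use it here
        myClass.doSomething();
    }
}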

Multiple classes which fit the @Autowired bill

If you autowire by interface and there are multiple classes implementing that interface, you can use different techniques to let Spring determine the correct one. Read here.

You can indicate a @Primary candidate for @Autowired. This sets a default class to be wired. Some other alternatives are to use @Resource, @Qualifier or @Inject. Read more here. @Autowired and @Qualifier are Spring specific; @Resource and @Inject are standard Java annotations.
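For example, marking one implementation as the default with @Primary (imports omitted; this reuses the hypothetical interface and class names of the @Qualifier example below):

@Component
@Primary
public class MyClass1 implements InterfaceName {
}

@Component
public class MyClass2 implements InterfaceName {
}

// An @Autowired InterfaceName field without a @Qualifier now gets a MyClass1 instance.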

You can for example name a @Component like:

@Component("beanName1")
public class MyClass1 implements InterfaceName {
}

@Component("beanName2")
public class MyClass2 implements InterfaceName {
}

And use it in an @Autowired like

@Autowired
@Qualifier("beanName1")
InterfaceName myImpl; 

myImpl will get an instance of MyClass1.

When @Autowired doesn't work

There are several reasons @Autowired might not work.

  • When a new instance is created, not by Spring but by for example manually calling a constructor, the instance of the class will not be registered in the Spring context and thus not be available for dependency injection. Also, when you use @Autowired inside the class of which you created a new instance yourself, its dependencies will not be injected, since Spring does not manage that instance.
  • Another reason can be that the class you want to use @Autowired in is not picked up by the ComponentScan. This can basically be because of two reasons.
    • The package is outside the ComponentScan search path. Move the package to a scanned location or configure the ComponentScan to fix this.
    • The class in which you want to use @Autowired does not have a Spring annotation. Add one of the following annotations to the class: @Component, @Repository, @Service, @Controller, @Configuration. They have different behaviors so choose carefully! Read more here.

Instances created not by Spring

Autowired is cool! It makes certain things very easy. How do you create the right circumstances so you actually can use it?

Do not create your own instances; let Spring handle it

If you can do this, this is the easiest way to go. If you need to deal with instances not created by Spring, there are some workarounds available below, but most likely they will have unexpected side effects. It is easy to add Spring annotations, have the class be picked up by the ComponentScan and let instances be @Autowired when you need them. This also avoids having to create new instances regularly or having to forward them through a call stack.

Not like this

//Autowired annotations will not work inside MyClass. Other classes that want to use MyClass have to create their own instances, or you have to forward this one.

public class MyClass {
}

public class MyParentClass {
MyClass myClass = new MyClass();
}

But like this

Below is how you can refactor this in order to 'Springify' it.

//@Component makes sure it is picked up by the ComponentScan (if it is in the right package). This allows @Autowired to work in other classes for instances of this class
@Component
public class MyClass {
}

//@Service makes sure the @Autowired annotation is processed
@Service
public class MyParentClass {
//myClass is assigned an instance of MyClass
@Autowired
MyClass myClass;
}

Manually force @Autowired to be processed

If you want to manually create a new instance and force the @Autowired annotations used inside it to be processed, you can obtain the Spring ApplicationContext (see here) and do the following (from here):

B bean = new B();
AutowireCapableBeanFactory factory = applicationContext.getAutowireCapableBeanFactory();
factory.autowireBean( bean );
factory.initializeBean( bean, "bean" );

initializeBean processes the @PostConstruct annotation. There is some discussion though on whether this breaks the inversion of control principle. Read for example here.

Manually add the bean to the Spring context

If you not only want the @Autowired annotations to be processed inside the bean, but also want to make the new instance available to be autowired into other instances, it needs to be present in the Spring ApplicationContext. You can obtain the ApplicationContext by implementing ApplicationContextAware (see here) and use that to register the bean. A nice example of such a 'dynamic Spring bean' can be found here and here. There are other flavors which provide pretty similar functionality, for example here.
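A minimal sketch of such a helper (the class and method names are made up; it uses the standard Spring ApplicationContextAware and ConfigurableApplicationContext APIs):

import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class SpringContextHelper implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        context = applicationContext;
    }

    // Autowires the given externally created instance and registers it as a singleton,
    // so beans created after this point can have it injected.
    public static void registerBean(String name, Object bean) {
        ConfigurableListableBeanFactory factory =
                ((ConfigurableApplicationContext) context).getBeanFactory();
        factory.autowireBean(bean);
        factory.initializeBean(bean, name);
        factory.registerSingleton(name, bean);
    }
}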

Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application

Spring Boot allows you to quickly develop microservices. Application Container Cloud Service (ACCS) allows you to easily host Spring Boot applications. Oracle provides an Application Cache based on Coherence which you can use from applications deployed to ACCS. In order to use the Application Cache from Spring Boot, Oracle provides an open source Java SDK. In this blog post I'll give an example on how you can use the Application Cache from Spring Boot using this SDK. You can find the sample code here.

Using the Application Cache Java SDK

Create an Application Cache

You can use a web interface to easily create a new instance of the Application Cache. A single instance can contain multiple caches. A single application can use multiple caches, but only a single cache instance. Multiple applications can use the same cache instance and caches. Mind that the application and the application cache need to be deployed in the same region in order to allow connectivity. Also, do not use the '-' character in your cache name, since the LBaaS configuration will fail.




Use the Java SDK

Spring Boot applications commonly use an architecture which defines abstraction layers. External resources are exposed through a controller. The controller uses services. These services provide operations to execute specific tasks. The services use repositories for their connectivity / data access objects. Entities are the POJOs which are exchanged/persisted and exposed, for example as REST, in a controller. In order to connect to the cache, the repository seems like a good location. Which repository to use (a persistent back-end like a database or, for example, the application cache repository) can be handled by the service, and this can differ per operation. Get operations, for example, might directly use the cache repository (which could use other sources if it can't find its data), while you might want to do Put operations in both the persistent backend and the cache. See here for an example.
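As a sketch of such a layering (all class names below are made up; this is not the actual sample code), the service reads through the cache repository and writes to both the persistent repository and the cache:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class PersonService {

    @Autowired
    PersonCacheRepository cacheRepository;      // hypothetical repository backed by the Application Cache

    @Autowired
    PersonDbRepository persistentRepository;    // hypothetical repository backed by a database

    // Reads go to the cache; the cache repository itself can fall back to
    // another source (for example via a Loader) when the key is not present.
    public Person getPerson(String id) {
        return cacheRepository.get(id);
    }

    // Writes go to both the persistent store and the cache.
    public void savePerson(Person person) {
        persistentRepository.save(person);
        cacheRepository.put(person.getId(), person);
    }
}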


In order to gain access to the cache, a session first needs to be established. The session can be obtained from a session provider. The session provider can be a local session provider or a remote session provider. The local session provider can be used for local development. It can be created with an expiry which indicates the validity period of items in the cache. When developing/testing, you might try setting this to 'never expires', since otherwise you might not be able to find entries which you expect to be there. I have not looked further into this issue or created a service request for it, nor do I know whether it only occurs with the local session provider. See here or here for sample code.

When creating a session, you also need to specify the protocol to use. When using the Java SDK, you can (at the moment) choose from GRPC and REST. GRPC might be more challenging to implement without an SDK in for example Node.js code, but I have not tried this. I have not compared the performance of the 2 protocols. Another difference is that the application uses different ports and URLs to connect to the cache. You can see how to determine the correct URL / protocol from ACCS environment variables here.


The ACCS Application Cache Java SDK allows you to add a Loader and a Serializer class when creating a Cache object. The Loader class is invoked when a value cannot be found in the cache. This allows you to fetch objects which are not in the cache. The Serializer is required so the object can be transferred via REST or GRPC. You might, for example, implement a Loader which fetches the object from a backend service and a Serializer which converts it to and from JSON.


Injection

Mind that when using Spring Boot you do not want to create instances of objects directly by doing something like: SomeClass bla = new SomeClass(). You want to let Spring handle this by using the @Autowired annotation.

Do mind though that the @Autowired annotation assigns instances to variables after the constructor of the instance is executed. If you want to use the @Autowired variables after your constructor but before executing other methods, you should put them in a @PostConstruct annotated method. See also here. See for a concrete implemented sample here.

Connectivity

The Application Cache can be restarted at certain times (e.g. for maintenance such as patching, or for scaling) and there can be connectivity issues due to other reasons. In order to deal with that, it is good practice to make the connection handling more robust by implementing retries. See here for an example.
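A very small sketch of such a retry in plain Java (the names are made up; in a real application you could also use a library like Spring Retry):

import java.util.function.Supplier;

public class RetryHelper {

    // Executes the given operation, retrying up to maxAttempts times with a fixed
    // delay between attempts; rethrows the last exception when all attempts fail.
    public static <T> T withRetries(Supplier<T> operation, int maxAttempts, long delayMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RuntimeException e) {
                last = e;
                try {
                    Thread.sleep(delayMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw last;
                }
            }
        }
        throw last != null ? last : new IllegalArgumentException("maxAttempts must be at least 1");
    }
}

// Usage, for example inside a repository method (cache.get is a made-up call):
// Person person = RetryHelper.withRetries(() -> cache.get(id), 3, 1000);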

Deploy a Spring Boot application to ACCS

Create a deployable

In order to deploy an application to ACCS, you need to create a ZIP file in a specific format. This ZIP file should at least contain a manifest.json file which describes (amongst other things) how to start the application. You can read more here. If you have environment-specific properties, binding information (such as which cache to use) and environment variables, you can create a deployment.json file. In addition to those metadata files, there of course needs to be the application itself. In the case of Spring Boot, this is a large JAR file which contains all dependencies. You can create this file with the spring-boot-maven-plugin. The ZIP itself is most easily composed with the maven-assembly-plugin.



Deploy to ACCS

There are two major ways (next to directly using the APIs with, for example, cURL) in which you can deploy to ACCS: manually or by using the Developer Cloud Service. The process to do this from Developer Cloud Service is described here. This is quicker (it allows redeployment on a Git commit, for example) and more flexible. The manual procedure is globally described below. An important thing to mind is that if you deploy the same application under the same name several times, you might encounter issues with the application not being replaced by the new version. In this case you can do two things. You can deploy under a different name every time; the name of the application however is reflected in the URL and this could cause issues for users of the application. Another way is to remove the files from the Storage Cloud Service before redeployment, so you are sure the most recent version of the deployable ends up in ACCS.

Manually

Create a new Java SE application.

Upload the previously created ZIP file


References

Introducing Application Cache Client Java SDK for Oracle Cloud


Caching with Oracle Application Container Cloud


Complete working sample Spring Boot on ACCS with Application cache (as soon as a SR is resolved)


A sample of using the Application Cache Java SDK. Application is Jersey based

Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu

Spring Boot is great for running inside a Docker container. Spring Boot applications 'just run'. A Spring Boot application has an embedded servlet engine, making it independent of application servers. There is a Spring Boot Maven plugin available to easily create a JAR file which contains all required dependencies. This JAR file can be run with a single command line like 'java -jar SpringBootApp.jar'. For running it in a Docker container, you only require a base OS and a JDK. In this blog post I'll give examples on how to get started with different OSs and different JDKs in Docker. I'll finish with an example of how to build a Docker image with a Spring Boot application in it.
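For reference, the sketch below (a made-up class, assuming the spring-boot-starter-web dependency) is about all the Java you need for such an application; the Spring Boot Maven plugin packages it, together with the embedded servlet engine, into the runnable JAR:

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

    // A trivial endpoint served by the embedded servlet engine
    @GetMapping("/hello")
    public String hello() {
        return "Hello from Spring Boot in Docker";
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}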


Getting started with Docker

Installing Docker

Of course you need a Docker installation. I won't go into details here, but:

Oracle Linux 7

yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum install docker-engine
systemctl start docker
systemctl enable docker

Ubuntu

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce

You can add a user to the docker group or give it sudo docker rights. Both do allow the user to become root on the host OS though.

Running a Docker container

See below for commands you can execute to start containers in the foreground or background and access them. For 'mycontainer' in the below examples, you can fill in any name you like. The name of the image can be found in the descriptions further below. This can for example be container-registry.oracle.com/os/oraclelinux:7 for an Oracle Linux 7 image when using the Oracle Container Registry, or store/oracle/serverjre:8 for a JRE image from the Docker Store.

If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to
  • go to container-registry.oracle.com and enable your OTN account to be used
  • go to the product you want to use and accept the license agreement
  • do docker login -u username -p password container-registry.oracle.com
If you are using the Docker Store, you first need to
  • go to store.docker.com and create an account
  • find the image you want to use. Click Get Content and accept the license agreement
  • do docker login -u username -p password
To start a container in the foreground

docker run --name mycontainer -it imagename /bin/sh

To start a container in the background

docker run --name mycontainer -d imagename tail -f /dev/null

To 'enter' a running container:

docker exec -it mycontainer /bin/sh

/bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash.

Cleaning up

It is good to know how to clean up your images/containers after having played around with them. See here.

#!/bin/bash
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)

Options for JDK

Of course there are more options for running JDKs in Docker containers. These are just some of the more commonly used.

Oracle JDK on Oracle Linux 

When you're running in the Oracle Cloud, you have probably noticed the OS running beneath it is often Oracle Linux (and currently also often version 7.x). When for example running Application Container Cloud Service, it uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images.

store.docker.com

This is described here. The steps are as follows:
Create an account on store.docker.com. Go to https://store.docker.com/images/oracle-serverjre-8. Click Get Content. Accept the agreement and you're ready to login, pull and run.


#use the store.docker.com username and password
docker login -u yourusername -p yourpassword
docker pull store/oracle/serverjre:8

To start in the foreground:

docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash

container-registry.oracle.com

You can use the image from the container registry. First, same as for just running the OS, enable your OTN account and login.

#use your OTN username and password
docker login -u yourusername -p yourpassword container-registry.oracle.com

docker pull container-registry.oracle.com/java/serverjre:8

#To start in the foreground:
docker run --name jre8 -it container-registry.oracle.com/java/serverjre:8 /bin/bash

OpenJDK on Alpine Linux


When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is being used quite often. There can be some threading challenges with Alpine Linux though. See for example here and here.

Running OpenJDK in Alpine Linux in a Docker container is easier than you might think. You don't require any specific account for this, and no login either.

When you pull openjdk:8, you will get a Debian 9 image. In order to run on Alpine Linux, you can do

docker pull openjdk:8-jdk-alpine

Next you can do

docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh

Zulu on Ubuntu Linux


You can also consider OpenJDK-based JDKs like Azul's Zulu. This works mostly the same; only the image name differs, for example 'azul/zulu-openjdk:8'. The Zulu images are Ubuntu based.

Do it yourself

Of course you can create your own image with a JDK. See for example here. This requires you to download the JDK yourself and build the image. This is quite easy though.

Spring Boot in a Docker container


Creating a container with a Spring Boot application based on an image which already has a JDK in it, is easy. This is described here. You can create a simple Dockerfile like:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.

Then add a dependency on com.spotify.dockerfile-maven-plugin and some configuration to your pom.xml file to automate building the Docker image once you have the Spring Boot JAR file. See here for a complete example pom.xml and Dockerfile. The relevant part of the pom.xml file is below.

<build>
  <finalName>accs-cache-sample</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>dockerfile-maven-plugin</artifactId>
      <version>1.3.6</version>
      <configuration>
        <repository>${docker.image.prefix}/${project.artifactId}</repository>
        <buildArgs>
          <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
      </configuration>
    </plugin>
  </plugins>
</build>

To actually build the Docker image, which allows using it locally, you can do:

mvn install dockerfile:build

You can then do something like (in case of the sample pom.xml file)

docker run -t maartensmeets/accs-cache-sample:latest

A simple dashboard to monitor HTTP endpoints

To monitor different environments, it is not unusual to use a monitoring dashboard to obtain information about the status of different servers. This blog describes some considerations for implementing a simple monitoring dashboard and some of the challenges I encountered. The simple-dashboard I've used in this blog runs solely from a browser and does not have a server side component.

Why?

Monitoring is hard, but quite worthwhile if done correctly. In my opinion, most of the challenges when implementing monitoring are related to
  • Organisational structures. Split project and operations teams / organisations. Separation of concerns and no shared responsibility.
  • Lack of communication. Waterfall like projects, teams on different locations.
  • Functional focus. The customer often prefers to spend money on new functionality instead of non-functionals which are often not immediately visible to the outside world and often harder to sell.
Even if a customer has bought an expensive monitoring product, has started special monitoring projects, has SLAs on availability and many other things, a simple dashboard showing which servers are up and running still provides great value. It is easy/cheap to implement and maintain and quickly provides insight to everyone interested.

In order to provide a minimal set of information to development and operations teams in a simple way, I stumbled upon simple-dashboard, an open source project on GitHub. This dashboard provides a quick and easy way to monitor various endpoints. Its (JavaScript / HTML) code runs entirely from the browser and does not require a separate server. Configuration is done with a simple JSON file.

Simple dashboard

Getting started

On the simple-dashboard site, there is a description on how to get started. It's as simple as:

Install git, install Node.js.
npm install -g grunt-cli
git clone https://github.com/Lugribossk/simple-dashboard.git
cd simple-dashboard
npm install

After these initial steps, you have 2 options to run the dashboard.
  • Start a local development server: grunt dev. This refreshes the browser after you make changes to the JavaScript code. It runs a local Node.js server on port 8080. You can access the dashboard by accessing localhost:8080 in a browser.
  • Make a distributable version: grunt build. This creates a distributable version of the required files to run the dashboard. You do need to provide a relevant config.json next to the index.html file. You can access the dashboard by opening index.html in a browser.
Below are some reasons this product seemed interesting.

Runs in just the browser. No server required

There are customers who do not readily create new servers. This is especially true in organisations which have not implemented automated provisioning or a cloud solution. In such organisations, creating a new server is hard and thus expensive. If you can avoid having to use a server, you take away one of the barriers.

Simple to configure

Having experience with products like Splunk, Nagios and Oracle Enterprise Manager, I know these require a separate server, often with a database, sometimes even agent installations, specific configuration and user management. It would require a project and specific expertise to do a solid implementation of any of those. For a simple dashboard, many of those features are overkill. For example, you do not need a history of availability if you only display the current status. If you make it available to anyone interested / run it locally, you do not need user management. This simple dashboard is configured with a single simple JSON file which lists the HTTP endpoints to poll and a polling interval.


In order to check connectivity and responses to certain status codes, http://httpstat.us is useful! In this case I'm monitoring 4 services. The first 3 respond with the indicated HTTP status code. Of course 500 and 400 are errors and 200 is OK. The last one, www.oracle.com was added to test with a site which does not respond with the HTTP header Access-Control-Allow-Origin. We'll get back to that later.

However

This simple dashboard seemed like a great solution to quickly implement a simple dashboard. In reality though there were some challenges.

Access to servers. Security concerns

In order to provide a single dashboard which gives a complete overview of server availability, the dashboard should have access to all the different systems. This requires the dashboard to run on a machine which has access to all the different servers. When the software has a server-side component, it is easier to securely configure which endpoints the monitoring software is allowed to access from which machine. When running solely from a browser, this becomes a challenge, since any client could run the monitoring software.

Sharing results among clients

A simple dashboard polls different servers once in a while. If however a large number of clients all run the simple dashboard, this can significantly increase the load on the network and the monitored systems. Results from the different clients are not shared, since there is no easy way to do this without a server-side component.

To help mitigate this, you could run the dashboard on a single server and provide a screen in a common room so everyone can view the dashboard.

Only basic features

The simple dashboard is a simple dashboard with basic functionality. It works well for specific HTTP requests, but you will need to extend it if you want something more than it currently offers, such as database connectivity checks or actual response body checks. It is not an extensive suite with lots of features, but just a simple dashboard. You could also check out the forks to see if someone already implemented the additional functionality you are looking for.

Cross domain requests

This is probably the most significant challenge. The dashboard runs from a browser. Browsers perform certain checks on responses, such as the check that for cross-domain requests the response specifically indicates the request is allowed (CORS). This requires that the servers you are polling send a specific header in the response (Access-Control-Allow-Origin). Sometimes you are not in control of the server you are monitoring and it does not respond with this header, or does not return a header with accepted content. The result is that the browser blocks the response and the dashboard reports an error, while the server response might have been valid. This behavior is the same when running the dashboard from disk or hosted by the local Node.js server. This is a challenge which you do not have when using a server-side component which performs the requests, since you are in complete control then.

You can see the errors in the Developer tools (or something called similar) which most browsers have.

How to disable cross domain checks

You can disable cross-domain checks in browsers, but this is specific to the browser. You should not use that browser for regular browsing anymore, since disabling security features can be dangerous. I use a separate Chrome instance with a specific data directory.
  • Chrome / Chromium: start with --disable-web-security --user-data-dir
  • Safari: Enable the developer menu, and select "Disable Cross-Origin Restrictions"
  • Firefox: apparently there are plug-ins available which add headers to response messages. Have not tried them though.
  • IE (don't use it!): Tools->Internet Options->Security tab, click on “Custom Level” button. Find the Miscellaneous -> Access data sources across domains setting and select “Enable” option.
Finally

The challenges might not be relevant to you

Of course, the challenges are dependent on your organisation and might not be relevant to your environment. This simple-dashboard can be considered a last resort when all else fails to at least have something.

Running a dashboard solely from a browser is not a good idea

After having looked at the simple dashboard, I realized running a dashboard from a web browser is not the best thing to do.
  • Server access. You need to allow access to different servers from every client running the dashboard. This is a security risk since there can be many clients.
  • Security. The responses should implement CORS since otherwise the browser might block them. If not all servers implement it, you can disable cross-domain checking, which of course is dangerous, or resort to more elaborate workarounds.
  • Performance. Results cannot be shared between clients since there is no server-side component. This can cause server hammering / performance issues if there are many clients.
  • Limited features. A browser can only do HTTP and some other things, but for example no database availability checking if the database does not have an HTTP endpoint.

SOA Suite 12c in Docker containers. Only a couple of commands, no installers, no third party scripts

For developers, installing a full-blown local SOA Suite environment has never been a favorite (except for a select few). It is time consuming and requires you to download and run various installers after each other. If you want to start clean and haven't done your installation inside a VM and created a snapshot, you have to start all over again.

There is a new and easy way to get a SOA Suite environment up and running, without downloading any installers, in only a couple of commands and without depending on scripts provided by any party other than Oracle. The resulting environment consists of an Oracle Enterprise Edition database, an Admin Server and a Managed Server, all running in separate Docker containers with ports exposed to the host. The 3 containers can run together within an 8 GB VM.

The documentation Oracle provides in its Container Registry for the SOA Suite images should be used as a base, but since you will encounter some errors if you follow it, you can use this blog post to help solve them quickly.


A short history

QuickStart and different installers

In the 11g days, a developer who wanted to run a local environment needed to install a database (usually XE), WebLogic Server and the SOA infrastructure, run the Repository Creation Utility (RCU) and install one or more of SOA, BPM and OSB. In 12c, the SOA Suite QuickStart was introduced. The QuickStart uses an Apache Derby database instead of the Oracle database and lacks features like ESS, a split Admin Server / Managed Server, NodeManager and several other things, making this environment not really comparable to customer environments. If you wanted to install a standalone version, you still needed to go through all the manual steps or automate them yourself (with response files for the installers and WLST files for domain creation). As an alternative, Oracle has been so kind as to provide VirtualBox images (like this one or this one) with everything pre-installed. For more complex set-ups, Edwin Biemond and Lucas Jellema have provided Vagrant files and blog posts to quickly create a 12c environment.

Docker

One of the benefits of running SOA Suite in Docker containers is that the software is isolated in the container. You can quickly remove and recreate domains. Also, in general, Docker is more resource efficient compared to, for example, VMware, VirtualBox or Oracle VM, and the containers are easily shippable to other environments/machines.

Dockerfiles

Docker has become very popular and there have been several efforts to run SOA Suite in Docker containers. At first these efforts were by people who created their own Dockerfiles and used the installers and response files to create images. Later Oracle provided its own Dockerfiles, but you still needed the installers from edelivery.oracle.com and had to build the images first. The official Oracle-provided Docker files can be found on GitHub here.

Container Registry

Oracle introduced its Container Registry recently (at the start of 2017). The Container Registry is a Docker registry which contains prebuilt images, so you no longer have to build them from Dockerfiles yourself. First the Oracle Database appeared, then WebLogic and the SOA Infrastructure, and now (May 2018) the complete SOA Suite.


How do you use this? You link your OTN account to the Container Registry. This needs to be done only once. Next, you can accept the license agreement for the images you would like to use. The Container Registry contains a useful description with every image on how to use it and what can be configured. Keep in mind that since the Container Registry has recently been restructured, the names of images have changed and not all manuals have been updated yet. That is also why you want to tag images: so you can access them locally in a consistent way.

Download and run!

For SOA Suite, you need to accept the agreement for the Enterprise Edition database and SOA Suite. You don't need the SOA Infrastructure; it is part of the SOA Suite image.

Login

docker login -u OTNusername -p OTNpassword container-registry.oracle.com

Pull, tag, create env files

Pulling the images can take a while... (can be hours on Wifi). The commands for pulling differ slightly from the examples given in the image documentation in the Container Registry because image names have recently changed. For consistent access, tag them.

Database

docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
docker tag container-registry.oracle.com/database/enterprise:12.2.0.1 oracle/database:12.2.0.1-ee

The database requires a configuration file. However, the settings in this file are not correctly applied by the installation which is executed when a container is created from the image. I've updated the configuration file to reflect what is actually created:

db.env.list
ORACLE_SID=orclcdb
ORACLE_PDB=orclpdb1
ORACLE_PWD=Oradoc_db1

SOA Suite

docker pull container-registry.oracle.com/middleware/soasuite:12.2.1.3
docker tag container-registry.oracle.com/middleware/soasuite:12.2.1.3 oracle/soa:12.2.1.3

The Admin Server also requires a configuration file:

adminserver.env.list
CONNECTION_STRING=soadb:1521/ORCLPDB1.localdomain
RCUPREFIX=SOA1
DB_PASSWORD=Oradoc_db1
DB_SCHEMA_PASSWORD=Welcome01
ADMIN_PASSWORD=Welcome01
MANAGED_SERVER=soa_server1
DOMAIN_TYPE=soa

As you can see, you can use the same database for multiple SOA schemas since the RCU prefix is configurable.

The Managed Server also requires a configuration file:

soaserver.env.list
MANAGED_SERVER=soa_server1
DOMAIN_TYPE=soa
ADMIN_HOST=soaas
ADMIN_PORT=7001

Make sure the Managed Server mentioned in the Admin Server configuration file matches the Managed Server in the Managed Server configuration file. The Admin Server installation creates a boot.properties for the Managed Server. If the server name does not match, the Managed Server will not boot.

Create local folders and network

Since you might not want to lose your domain or database files when you remove your container and start it again, you can create a location on your host machine where the domain will be created and the database can store its files. Make sure the user running the containers has userid/groupid 1000 for the below commands to allow the user access to the directories. Run the below commands as root. They differ slightly from the manual since errors will occur if SOAVolume/SOA does not exist.

mkdir -p /scratch/DockerVolume/SOAVolume/SOA
chown 1000:1000 /scratch/DockerVolume/SOAVolume/
chmod -R 700 /scratch/DockerVolume/SOAVolume/

Create a network for the database and SOA servers:

docker network create -d bridge SOANet

Run

Start the database

You'll first need the database. You can run it by:

#Start the database
docker run --name soadb --network=SOANet -p 1521:1521 -p 5500:5500 -v /scratch/DockerVolume/SOAVolume/DB:/opt/oracle/oradata --env-file /software/db.env.list  oracle/database:12.2.0.1-ee

This installs and starts the database. db.env.list, which is described above, should be in /software in this case.

SOA Suite

In the examples documented, it is indicated that you can run the Admin Server and the Managed Server in separate containers. You can, and they will start up. However, the Admin Server cannot manage the Managed Server and the WebLogic Console / EM don't show the Managed Server status. The configuration in the Docker container uses a single machine with a single host name and indicates that the Managed Server and Admin Server both run there. In order to fix this, I suggest two easy workarounds below.

Port forwarding. Admin Server and Managed Server in separate containers

You can create a port-forward from the Admin Server to the Managed Server. This allows the WebLogic Console / EM and Admin Server to access the Managed Server at 'localhost' within the Docker container on port 8001.

#This command starts an interactive shell which runs the Admin Server. Wait until it is up before continuing!
docker run -i -t  --name soaas --network=SOANet -p 7001:7001 -v /scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/user_projects --env-file /software/adminserver.env.list oracle/soa:12.2.1.3


#This command starts an interactive shell which runs the Managed Server.
docker run -i -t  --name soams --network=SOANet -p 8001:8001 --volumes-from soaas --env-file /software/soaserver.env.list oracle/soa:12.2.1.3 "/u01/oracle/dockertools/startMS.sh"


#The below commands install and run socat to do the port mapping from Admin Server port 8001 to Managed Server port 8001
docker exec -u root soaas yum -y install socat
docker exec -d -u root soaas "/usr/bin/socat" TCP4-LISTEN:8001,fork TCP4:soams:8001

The container is very limited. It does not contain executables for ping, netstat, wget, ifconfig, iptables and several other common tools. socat seemed an easier solution than iptables or SSH tunnels to do the port forwarding, and it worked nicely.

Admin Server and Managed Server in a single container

An alternative is to run both the Managed Server and the Admin Server in the same container. Here you start the Admin Server with both configuration files so all environment variables are available. Once the Admin Server is started, the Managed Server can be started in a separate shell with docker exec.

#Start Admin Server
docker run -i -t  --name soaas --network=SOANet -p 7001:7001 -p 8001:8001 -v /scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/user_projects --env-file /software/adminserver.env.list --env-file /software/soaserver.env.list oracle/soa:12.2.1.3

#Start Managed Server
docker exec -it soaas "/u01/oracle/dockertools/startMS.sh"

Start the NodeManager

If you like (but you don't have to), you can start the NodeManager in both set-ups like;

docker exec -d soaas "/u01/oracle/user_projects/domains/InfraDomain/bin/startNodeManager.sh"

The NodeManager runs on port 5658.

How does it look?

A normal running SOA environment.



Securely access remote content using a proxy server accessed with SSH

There have been numerous occasions where I was limited in my work because of connectivity which could not be trusted. For example:
  • I could not download large installers due to a proxy anti virus tool which manipulated downloads causing files to become corrupted.
  • I needed to visit a website to find a solution to a problem, but the local proxy server considered the content offensive and did not allow me to visit the site.
  • I have stayed in hotels in which I was not sure that my internet traffic was not being monitored. I was hesitant to access remote services which required credentials.
  • At the airport, the public Wifi can sometimes not be trusted. Someone could run a local hotspot with the same name and become a man in the middle intercepting credentials of people connecting to it.
The method described in this blog allows you to access external resources with few limitations in a relatively secure way. It makes it easy to circumvent most content scanning/manipulation. Do mind that this method might be a violation of certain rules/regulations/policies. When in doubt, first confirm you're allowed to use it.

In short what you do is
  • Run an SSH server on a different location on port 443
  • On the same server which runs an SSH server, run your own HTTP/HTTPS proxy server (or use the SSH server itself as SOCKS proxy)
  • Connect to the SSH server
  • Map the proxy port to your local machine
  • Use the configured port as proxy server in your browser configuration. 
This might seem complex, but it is easier than you might think and once set up, it is easy to re-use. It is also easier, more flexible and in some cases more secure than using a VPN.


Create a proxy server


I've used a Raspberry Pi for this. I'm not interested in running a full-blown server; a Raspberry Pi does not require much electricity and is very small. On a Raspberry Pi you can install Raspbian, which is a Debian flavor.

It is easy to enable SSH and install Squid ('apt-get install squid'). The default install will do. Mind that Squid is an HTTP(S) proxy and not a SOCKS proxy: Squid only proxies HTTP(S) and no other TCP protocols. It does however also function as a cache, which can be useful.


As an alternative you can use the SSH server itself as SOCKS proxy. This is also described below.

Make the server accessible

Next, you need to create a port forward in your router in order to be able to access port 443 remotely. Why port 443? That is the usual port HTTPS sites run on, and most proxy servers and routers allow access to remote hosts on port 443. This configuration differs per router. Usually you configure static routes based on IP addresses. In order for this to work, your server needs to always have the same IP address. You can achieve this by configuring your router's DHCP server to always give the MAC address of the SSH server the same IP address. As an alternative, you can configure a static IP address on your SSH server. This way you only need to create a port forwarding rule in your router and do not require DHCP server configuration.

Proxy requests

Just running an SSH server is not enough; you want to proxy requests! Bitvise SSH Client (also called Tunnelier) is my favorite tool to create and manage tunnels with. The easiest option is to use the remote SSH server as a SOCKS proxy and configure a local listen port for it.


You can configure Firefox like below to use it:


An alternative is to create a port forwarding rule to the previously installed Squid instance. The default port for Squid is 3128.


In Firefox the configuration is similar, except that you now configure an HTTP proxy on port 3128. You can expect more protocols to work when using the SOCKS proxy.

Connectivity challenges
  • If you are behind a proxy server, you can configure it under the 'Proxy settings' link. That proxy server is used to establish the connection to your SSH server, so you can in turn access your own proxy server (which does not have the limitations of the server you use to establish the SSH connection).
  • This of course only works if the proxy server you are using is a SOCKS proxy which allows TCP connection to your SSH server. Just an HTTP proxy won't do.
  • If you select Initial method 'Password' you can authenticate with a password on your SSH server. You can of course also use a key to make it more secure but a password is easy. This password can be stored in your profile if you want to.
After you have authenticated, you will most likely get two pop-up windows: a terminal and an SFTP window which allows you to exchange files. The creation of these windows upon connection can of course be disabled if you don't like them. I recommend installing fortune ('sudo apt-get install fortune' and add '/usr/games/fortune -a' as the last line of /etc/profile) for some entertainment.


If your connection fails, you could get something like:


In this case the hostname to connect to is incorrect


In this picture the proxy server name (set under Proxy settings) is incorrect.

To summarize, there can be various connection issues depending on the situation/environment. When setting up your SSH server, it is a good idea to first confirm it works from a local environment, since otherwise it will be difficult to determine what the issue is when you are in a remote location.

Finally

Alternatives

A workaround could have been using my mobile phone as a Wifi hotspot, but downloading large files this way can be expensive (especially when abroad). A VPN could also have been an option, but since PPTP and L2TP/IPsec require access to a remote server on specific (TCP and UDP) ports, this might not be possible when, for example, behind a company proxy server which limits access to those ports. OpenVPN uses SSL and can be configured to run on a single TCP port, which could have been a viable option. Also, a proxy provides more flexibility: in case you sometimes need to access remote and sometimes local resources, using a proxy, which can be configured per application, is easier than using a VPN.

Why is this secure?

If you are connecting to a remote SSH server, the protocol uses various measures to make the exchange secure.
When using for example a local proxy to connect to your remote SSH server, the connection cannot easily be monitored by that proxy, since it proxies a byte stream and does not have a way to decrypt the traffic due to the symmetric key cryptography which is used. Read more here. If the proxy server does mess up the transfer, this is detected by the data integrity checks which are part of the protocol. The connection will (most likely, I have not tested this) be terminated in that case and you have to reconnect. Thus, in my humble opinion, you are pretty secure and someone without solid security knowledge (and/or expensive tools) cannot easily read the traffic.

VirtualBox networking explained

VirtualBox networking is extremely flexible. With this flexibility comes the challenge of making the correct choices. In this blog, the different options are explained and some example cases are elaborated. Access between guests, the host and other members of the network is explained and the required configuration is shown. This information is also available as a presentation here.


Internal network

Overview

VirtualBox makes available a network interface inside a guest. If multiple guests share the same interface name, they are connected like a switch and can access each other.

Benefits
  • Easy to use. Little configuration required
  • No VirtualBox virtual host network interface (device + driver) required
  • Guests can access each other
  • Secure (access from outside the host is not possible)
Drawbacks
  • The host can’t access the guests
  • Guests can’t access the host
  • Guests can’t access the internet
  • The VirtualBox internal DHCP server has no GUI support, only a CLI
Configuration


NAT


Overview

VirtualBox makes available a single virtual isolated NAT router on a network interface inside a guest. Every guest gets its own virtual router and can’t access other guests.

DHCP (Dynamic Host Configuration Protocol) requests on the interface are answered with an IP for the guest and address of the NAT router as gateway. The DHCP server can be configured using a CLI (no GUI support).

The NAT router uses the host’s network interface. No specific VirtualBox network interface needs to be created. External parties only see a single host interface.

The NAT router opens a port on the host’s interface. The internal address is translated to the host’s IP. The request to the destination IP is made. The response is forwarded back to the guest (a table of external port to internal IP is kept by the router).

Port mappings can be made to allow requests to the host on a specific port to be forwarded to the guest.

Benefits

  • Easy to use. Little configuration required
  • Isolated. Every guest their own virtual router
  • No VirtualBox virtual host network interface (device + driver) required
  • Internet access
  • Fixed IP possible

Drawbacks

  • Guests can’t access each other or the host
  • The virtual NAT router DHCP server can be configured using a CLI only
  • To access the guest from the host requires port forwarding configuration and might require an entry in the host’s hosts file for specific web interfaces

Configuration

NAT network


Overview

VirtualBox makes available a virtual NAT router on a network interface for all guests using the NAT network. Guests can access each other. The NAT network needs to be created.

DHCP (Dynamic Host Configuration Protocol) requests on the interface are answered with an IP for the guest and address of the NAT router as gateway. The DHCP server can be configured.

The NAT router uses the host’s network interface. No specific VirtualBox network interface needs to be created. External parties only see a single host interface.

The NAT router opens a port on the host’s interface. The internal address is translated to the host’s IP, with a specific port per guest. The request to the destination IP is made. The response is forwarded back to the guest (a table of external port to internal IP is kept by the router).

Port mappings can be made to allow requests to the host on a specific port to be forwarded to a guest.

Benefits

  • Guests can access each other
  • No VirtualBox virtual host network interface (device + driver) required
  • DHCP server can be configured using the GUI
  • Internet access
  • Fixed IP possible

Drawbacks

  • To access the guest from the host requires port forwarding configuration and might require an entry in the host’s hosts file for specific web interfaces
  • Requires additional VirtualBox configuration to define the network / DHCP server
Configuration

Host only



Overview

VirtualBox creates a host interface (a virtual device visible on the host). This interface can be shared amongst guests. Guests can access each other.

DHCP (Dynamic Host Configuration Protocol) requests on the interface are answered with an IP for the guest and the address of the host-only adapter. The DHCP server can be configured using the VirtualBox GUI.

The virtual host interface is not visible outside of the host. The internet cannot be accessed via this interface from the guest.

The host can access the guests by IP. Port mappings are not needed.

Benefits

  • Guests can access each other
  • You can create separate guest networks
  • DHCP server can be configured using the GUI
  • Fixed IP possible

Drawbacks

  • Might require an entry in the host’s hosts file for specific web interfaces
  • Requires additional VirtualBox configuration to define the network / DHCP server
  • VirtualBox virtual host network interface (device + driver) required
  • No internet access

Configuration


Bridged


Overview

The guest uses a host interface. On the host interface a net filter driver is applied to allow VirtualBox to send data to the guest. This requires a so-called promiscuous mode to be used by the adapter. Promiscuous mode means the adapter also accepts traffic for MAC addresses other than its own. Most wireless adapters do not support this. In that case VirtualBox replaces the MAC address of packets which are visible to the adapter.

An external DHCP server is used, the same way the host gets its IP/gateway. No additional configuration is required. It might not work if the DHCP server only allows registered MACs (as on some company networks).

Easy access: the guest is directly available from the network the host is connected to (i.e. from every host on that network). Port mappings are not required. The host can access the guests by IP. Guests can access the host by IP.

Benefits

  • Guests can access each other
  • Host can access guests and guests can access the host. Anyone on the host network can access the guests
  • No virtual DHCP server needed
  • Easy to configure / use
  • Same access to internet as the host has

Drawbacks

  • Guests can’t be split into separate networks (not isolated)
  • Sometimes doesn’t work; dependent on external DHCP server and ability to filter packets on a host network interface. Company networks might block your interface
  • No easy option for a fixed IP since the host network is variable
  • Not secure. The guest is exposed on the host’s network

Configuration


Use case examples

Case 1: ELK stack

I’m trying out the new version of the ELK stack (Elasticsearch, Logstash, Kibana)

Requirements:

  • I do not require internet access inside the guest
  • I want to access my guest from my host
  • I do not want my guest to be accessible outside of my host
  • I do not want to manually configure port mappings
Solution: Host only adapter 

Case 2:  SOA Suite for security workshop

I’m using Oracle SOA Suite for a security workshop. SOA Suite consists of 3 separate VMs: DB, Admin Server and Managed Server.

Requirements:
  • The VMs require fixed (internal) IPs
  • The VMs need to be able to access each other
  • Course participants need to call my services from the same network
  • I only want to expose specific ports
Solution: NAT + Host only (possibly NAT network)

Case 3: VM for distribution during course

I’ve created an Ubuntu / Spring Tool Suite VM for a course. The VM will be distributed to participants.

Requirements:
  • The VM to distribute requires internet access. During the course several things will need to be downloaded
  • I am unaware of the VirtualBox created interfaces present on the host machines and don’t want the participants to manually have to select an adapter
  • I want the participants to do as little networking configuration as possible. VirtualBox networking is not the purpose of this course.
Solution: NAT

Case 4: Server hosting application

I’ve created a server inside a VM which hosts an application. 

Requirements:
  • The MAC of the VM is configured inside the router's DHCP server so it will always get the same IP. Use the external DHCP server to obtain an IP
  • The application will be used by (and thus needs to be accessible for) different people on the network.
  • The application uses many different ports for different features. These ports change regularly. Some features use random ports. Manual port mappings are not an option
  • The application accesses different resources (such as a print server) on the hosts network
Solution: Bridged

Automate the installation of Oracle JDK 8 and 10 on RHEL and Debian derivatives

Automating the Oracle JDK installation on RHEL derivatives (such as CentOS, Oracle Linux) and Debian derivatives (such as Mint, Ubuntu) differs. This is due to different package managers and repositories. In this blog I'll provide quick instructions on how to automate the installation of Oracle JDK 8 and 10 on different Linux distributions. I chose JDK 8 and 10 since they are currently the only Oracle JDK versions which receive public updates (see here).

Debian derivatives

Benefit of using the below repositories is that you will often get the latest version and can easily update to the latest version in an existing installation if you want.

Oracle JDK 8

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
sudo echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get -y install oracle-java8-installer
sudo apt-get -y install oracle-java8-set-default

Oracle JDK 10

sudo add-apt-repository ppa:linuxuprising/java
sudo apt-get update
sudo echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
sudo echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get -y install oracle-java10-installer
sudo apt-get -y install oracle-java10-set-default

RHEL derivatives

Since RHEL derivatives are often provided by commercial software vendors such as RedHat and Oracle, they like to work on a subscription basis for their repositories since people pay for using them. Configuration of the specific repositories and subscriptions of course differs per vendor and product. For Oracle Linux you can look here. For RedHat you can look here.

The below described procedure makes you independent of vendor specific subscriptions. However, you will not get automatic updates, and if you want the latest version you have to manually update the download URL (from here) and the Java installation path in the alternatives commands. You also might encounter issues with the validity of the used cookie, which might require you to update the URL.

Oracle JDK 8

sudo wget -O ~/jdk8.rpm -N --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.rpm
sudo yum -y localinstall ~/jdk8.rpm
sudo update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_181-amd64/jre/bin/java 1
sudo update-alternatives --install /usr/bin/jar jar /usr/java/jdk1.8.0_181-amd64/bin/jar 1
sudo update-alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_181-amd64/bin/javac 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/java/jdk1.8.0_181-amd64/jre/bin/javaws 1

Oracle JDK 10

sudo wget -O ~/jdk10.rpm -N --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/10.0.2+13/19aef61b38124481863b1413dce1855f/jdk-10.0.2_linux-x64_bin.rpm
sudo yum -y localinstall ~/jdk10.rpm
sudo update-alternatives --install /usr/bin/java java /usr/java/jdk-10.0.2/bin/java 1
sudo update-alternatives --install /usr/bin/jar jar /usr/java/jdk-10.0.2/bin/jar 1
sudo update-alternatives --install /usr/bin/javac javac /usr/java/jdk-10.0.2/bin/javac 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/java/jdk-10.0.2/bin/javaws 1
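After installation you can check which JDK is active and switch between installed alternatives if needed:

java -version
sudo update-alternatives --config java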

Running Spring Tool Suite and other GUI applications from Docker containers

Running an application within a Docker container helps in isolating the application from the host OS. Running GUI applications like for example an IDE from Docker containers, can be challenging. I'll explain several of the issues you might encounter and how to solve them. For this I will use Spring Tool Suite as an example. The code (Dockerfile and docker-compose.yml) can also be found here. Due to (several) security concerns, this is not recommended in a production environment.



Running a GUI from a docker container

In order to run a GUI application from a Docker container and display its GUI on the host OS, several steps are needed:

Which display to use?

The container needs to be aware of the display to use. In order to make the display available, you can pass the DISPLAY environment variable to the container. docker-compose describes the environment, volume mappings, port mappings and other settings of Docker containers. This makes it easier to run containers in a quick and reproducible way and avoids long command lines.

docker-compose

You can do this by providing it in a docker-compose.yml file. See for example below. The environment indicates the host DISPLAY variable is passed as DISPLAY variable to the container.
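A minimal fragment of such a docker-compose.yml (the service name is an example; leaving the value empty passes the host's DISPLAY variable through):

services:
    sts:
        environment:
            - DISPLAY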


Docker

In a Docker command (when not using docker-compose), you would do this with the -e flag or with --env. For example;

docker run --env DISPLAY=$DISPLAY containername

Allow access to the display

The Docker container needs to be allowed to present its screen on the Docker host. This can be done by executing the following command:

xhost local:root

After execution, during the session, root is allowed to use the current user's display. Since the Docker daemon runs as root, Docker containers (in general!) can now use the current user's display. If you want to persist this, you should add it to a start-up script.

Sharing the X socket

The last thing to do is to share the X socket (don't ask me for details, but this is required). This can be done by defining a volume mapping in your Docker command line or docker-compose.yml file. For Ubuntu this looks as shown below.

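In docker-compose.yml this is a volume mapping of the X11 Unix socket directory:

        volumes:
            - /tmp/.X11-unix:/tmp/.X11-unix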

Spring Tool Suite from a Docker container

In order to give a complete working example, I'll show how to run Spring Tool Suite from a Docker container. In this example I'm using the Docker host JVM instead of installing a JVM inside the container. If you want to have the JVM also inside the container (instead of using the host JVM), look at the following and add that to the Dockerfile. As a base image I'm using an official Ubuntu image.

I've used the following Dockerfile:

FROM ubuntu:18.04

MAINTAINER Maarten Smeets <maarten.smeets@amis.nl>

ARG uid

LABEL nl.amis.smeetsm.ide.name="Spring Tool Suite" nl.amis.smeetsm.ide.version="3.9.5"

ADD https://download.springsource.com/release/STS/3.9.5.RELEASE/dist/e4.8/spring-tool-suite-3.9.5.RELEASE-e4.8.0-linux-gtk-x86_64.tar.gz /tmp/ide.tar.gz

RUN adduser --uid ${uid} --disabled-password --gecos '' develop

RUN mkdir -p /opt/ide && \
    tar zxvf /tmp/ide.tar.gz --strip-components=1 -C /opt/ide && \
    ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre && \
    chown -R develop:develop /opt/ide && \
    mkdir /home/develop/ws && \
    chown develop:develop /home/develop/ws && \
    mkdir /home/develop/.m2 && \
    chown develop:develop /home/develop/.m2 && \
    rm /tmp/ide.tar.gz && \
    apt-get update && \
    apt-get install -y libxslt1.1 libswt-gtk-3-jni libswt-gtk-3-java && \
    apt-get autoremove -y && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /tmp/*

USER develop:develop
WORKDIR /home/develop
ENTRYPOINT /opt/ide/sts-3.9.5.RELEASE/STS -data /home/develop/ws

The specified packages are required to be able to run STS inside the container and create the GUI to display on the host.

I've used the following docker-compose.yml file:

version: '3'

services:
    sts:
        build:
            context: .
            dockerfile: Dockerfile
            args:
                uid: ${UID}
        container_name: "sts"
        volumes:
            - /tmp/.X11-unix:/tmp/.X11-unix
            - /home/develop/ws:/home/develop/ws
            - /home/develop/.m2:/home/develop/.m2
            - /usr/lib/jvm/java-10-oracle:/usr/lib/jvm/java-10-oracle
            - /etc/java-10-oracle:/etc/java-10-oracle
        environment:
            - DISPLAY
        user: develop
        ports:
            "8080:8080"

Notice this docker-compose file has some dependencies on the host OS. It expects a JDK 10 to be installed in /usr/lib/jvm/java-10-oracle with configuration in /etc/java-10-oracle. Also it expects /home/develop/ws and /home/develop/.m2 to be present on the host to be mapped to the container. The .X11-unix mapping was already mentioned as needed to allow a GUI screen to be displayed. There are also some other things which are important to notice in this file.

User id

First, the way a non-privileged user is created inside the container. This user is created with a user id (uid) which is supplied as a parameter. Why did I do that? Files in mapped volumes which are created by the container user will be created with the uid which the user inside the container has. This will cause issues if the user inside the container has a different uid than the user outside of the container. Suppose I run the container under a user develop. This user on the host has a uid of 1002. Inside the container there is also a user develop with a uid of 1000. Files on a mapped volume are created with uid 1000; the uid of the user in the container. On the host however, uid 1000 is a different user. These files created by the container cannot be accessed by the develop user on the host (with uid 1002). In order to avoid this, I'm creating a develop user inside the container with the same uid as the user outside of the container (the user in the docker group which gave the command to start the container).

Workspace folder and Maven repository

When working with Docker containers, it is a common practice to avoid storing state inside the container. State can be various things. I consider the STS workspace folder and the Maven repository among them. This is why I've created the folders inside the container and mapped them in the docker-compose file to the host. They will use folders with the same name (/home/develop/.m2 and /home/develop/ws) on the host.

Java

My Docker container with only Spring Tool Suite was big enough already without having a more than 300Mb JVM inside of it (on Linux Java 10 is almost double the size of Java 8). I'm using the host JVM instead. I installed the host JVM on my Ubuntu development VM as described here.

In order to use the host JVM inside the Docker container, I needed to do 2 things:

Map 2 folders to the container:


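These are the relevant volume mappings from the docker-compose.yml shown earlier:

        volumes:
            - /usr/lib/jvm/java-10-oracle:/usr/lib/jvm/java-10-oracle
            - /etc/java-10-oracle:/etc/java-10-oracle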
And link the JVM path to the JRE folder under STS: ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre.

Seeing it work

First build:

docker-compose build

Building sts
Step 1/10 : FROM ubuntu:18.04
 ---> 735f80812f90
Step 2/10 : MAINTAINER Maarten Smeets <maarten.smeets@amis.nl>
 ---> Using cache
 ---> 69177270763e
Step 3/10 : ARG uid
 ---> Using cache
 ---> 85c9899e5210
Step 4/10 : LABEL nl.amis.smeetsm.ide.name="Spring Tool Suite" nl.amis.smeetsm.ide.version="3.9.5"
 ---> Using cache
 ---> 82f56ab07a28
Step 5/10 : ADD https://download.springsource.com/release/STS/3.9.5.RELEASE/dist/e4.8/spring-tool-suite-3.9.5.RELEASE-e4.8.0-linux-gtk-x86_64.tar.gz /tmp/ide.tar.gz


 ---> Using cache
 ---> 61ab67d82b0e
Step 6/10 : RUN adduser --uid ${uid} --disabled-password --gecos '' develop
 ---> Using cache
 ---> 679f934d3ccd
Step 7/10 : RUN mkdir -p /opt/ide &&     tar zxvf /tmp/ide.tar.gz --strip-components=1 -C /opt/ide &&     ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre &&     chown -R develop:develop /opt/ide &&     mkdir /home/develop/ws &&     chown develop:develop /home/develop/ws &&     rm /tmp/ide.tar.gz &&     apt-get update &&     apt-get install -y libxslt1.1 libswt-gtk-3-jni libswt-gtk-3-java &&     apt-get autoremove -y &&     apt-get clean &&     rm -rf /var/lib/apt/lists/* &&     rm -rf /tmp/*
 ---> Using cache
 ---> 5e486a4d6dd0
Step 8/10 : USER develop:develop
 ---> Using cache
 ---> c3c2b332d932
Step 9/10 : WORKDIR /home/develop
 ---> Using cache
 ---> d8e45440ce31
Step 10/10 : ENTRYPOINT /opt/ide/sts-3.9.5.RELEASE/STS -data /home/develop/ws
 ---> Using cache
 ---> 2d95751237d7
Successfully built 2d95751237d7
Successfully tagged t_sts:latest

Next run:

docker-compose up


When you run a Spring Boot application on port 8080 inside the container, you can access it on the host on port 8080 with for example Firefox.

Docker host and bridged networking. Running library/httpd on different ports

Docker provides different networking options. When using the Docker host networking, you don't have the option to create port mappings. When using images like library/httpd:2.4, you don't have the option to update the port on which it runs; it runs by default on port 80. Suppose you want to use the host networking feature and want to run library/httpd:2.4 on different ports, how would you do this?

In this blog I'll explain 2 mechanisms by which you can expose library/httpd on different ports using host networking and how you can do the same using bridged networking. I'll describe several features of the different solutions and consequences for connectivity / host lookup options. At the end of the post I'll give some tips on how to test connectivity between containers.



Using Docker host networking

Image per port

You could create your own Dockerfile and use an ARG (argument) such as below:

FROM httpd:2.4
MAINTAINER Maarten Smeets <maarten.smeets@amis.nl>
LABEL nl.amis.smeetsm.httpd.name="Apache Httpd" nl.amis.smeetsm.httpd.version="2.4"

#COPY ./www/ /usr/local/apache2/htdocs/
ARG PORT
RUN sed -ri "s/^Listen 80/Listen $PORT/g" /usr/local/apache2/conf/httpd.conf
ENTRYPOINT ["httpd-foreground"]

You can build this like:

docker build --build-arg PORT=84 -t smeetsm/httpd:2.4 .

And run it like:

docker run -dit --network host --name my-running-app-01 smeetsm/httpd:2.4

This allows you to build a container for running on a specific port. Drawback of this is that you build the image specifically for running on a single port. If you want containers running on multiple ports, you'd need multiple images.

docker build --build-arg PORT=84 -t smeetsm/httpdport84:2.4 .
docker build --build-arg PORT=85 -t smeetsm/httpdport85:2.4 .
docker build --build-arg PORT=86 -t smeetsm/httpdport86:2.4 .

And run them like

docker run -dit --network host --name my-running-app-01 smeetsm/httpdport84:2.4
docker run -dit --network host --name my-running-app-02 smeetsm/httpdport85:2.4
docker run -dit --network host --name my-running-app-03 smeetsm/httpdport86:2.4

You cannot run the same image on multiple ports. Thus you have to create an image per port and this might not be what you want. Also the containers you create this way are not easy to scale.

Single image running on different ports

A much cleaner solution would be to use the same base image supplied by Apache and supply the port with the run command. You can do this like:

docker run -dit --network host --name my-running-app-01 library/httpd:2.4 /bin/bash -c "sed -ri 's/^Listen 80/Listen 84/g' /usr/local/apache2/conf/httpd.conf && httpd-foreground"

docker run -dit --network host --name my-running-app-01 library/httpd:2.4 /bin/bash -c "sed -ri 's/^Listen 80/Listen 85/g' /usr/local/apache2/conf/httpd.conf && httpd-foreground"

docker run -dit --network host --name my-running-app-01 library/httpd:2.4 /bin/bash -c "sed -ri 's/^Listen 80/Listen 86/g' /usr/local/apache2/conf/httpd.conf && httpd-foreground"

Docker run allows you to specify a single command to be run and you can supply parameters for this command. If you want to run multiple commands after each other, you can use bash with the -c parameter.

The above commands start 3 containers on the specified ports (84, 85, 86) using the host network driver. This does have the limitation that the containers cannot communicate with each other without going over the host interface. They all share the Docker host hostname since they use the Docker host network interface directly. Interestingly, they can use their own hostname to directly access different ports on the host.

For example, if I run my-running-app-01 on port 84 using hostname ubuntu-vm and I'm running my-running-app-02 using the same hostname (since the same network interface is used) running on port 85, I can access my-running-app-02 from my-running-app-01 by accessing ubuntu-vm or localhost(!) port 85. my-running-app-01 does not know if my-running-app-02 is running inside a Docker container or is directly hosted on the Docker host.
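A quick way to verify this (curl is not part of the httpd image, so it needs to be installed first; ubuntu-vm is the hostname of my Docker host):

docker exec -it my-running-app-01 /bin/bash
apt-get update && apt-get install -y curl
# Both of these reach my-running-app-02, which listens on port 85 of the shared host interface
curl -s http://localhost:85/
curl -s http://ubuntu-vm:85/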

Bridged networking

Using bridged networking, which is often the default when using Docker, you can create named bridged networks. Hosts on these named bridged networks can find each other by their container name (automatic DNS).

Also bridged networks provide a layer between the host network and the container network. The containers can access other network resources by going through a NAT interface. You have to explicitly map ports if you want to access them from the host or the outside world. Using a bridged network, the software within the container can run on the same port and only the outside port can differ. You thus don't need to update configuration files when you want to run on different ports. In this case you use the same image for the creation of the different containers.

The below example uses the default bridge network. Containers can access each other by IP

docker run -dit --name my-running-app-01 -p 84:80 library/httpd:2.4
docker run -dit --name my-running-app-02 -p 85:80 library/httpd:2.4
docker run -dit --name my-running-app-03 -p 86:80 library/httpd:2.4

The below example uses a named bridge network. The containers can access each other by name.

docker network create --driver bridge my-net
docker run -dit --name my-running-app-01 -p 84:80 --network my-net library/httpd:2.4
docker run -dit --name my-running-app-02 -p 85:80 --network my-net library/httpd:2.4
docker run -dit --name my-running-app-03 -p 86:80 --network my-net library/httpd:2.4

Notes

In order to test network connectivity I used the following:

Networks and containers

In order to find out which networks were used:

docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
3457e6f0a394        bridge              bridge              local
43e8356475ab        host                host                local
bffb13042787        my-net              bridge              local
fc4390096330        none                null                local

In order to find out which container was connected to which network and which IP it used, I did:

docker network inspect my-net

        "Containers": {
            "3398bb1f84504d1d5cb85a107420059dce3b617a91aef6663f526e0f7cd610b0": {
                "Name": "my-running-app-02",
                "EndpointID": "7f8191b81db6718b6f4c8091344e35a1b9641bb591025a6d5aa12699b631fbaf",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "810f87402961d79538238d07a9fb70774621b5f6363878d83884fafc89e382ed": {
                "Name": "my-running-app-01",
                "EndpointID": "5a6c99d83d4d43fec8cb7b6812f1628620f39dd13abf4caa4e5bacbf36f2707a",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        }

The above is just a part of the output but it does show the containers and their IPs.

Check connectivity from within a container

Enter the container: docker exec -it my-running-app-01 /bin/bash

Since the container is Debian based, I can use Apt to install packages.

apt-get update && apt-get install -y telnet

Try to connect to a specific container from within a specific container

telnet my-running-app-02 80

If I got a response like:

telnet my-running-app-02 80
Trying 172.18.0.3...
Connected to my-running-app-02.
Escape character is '^]'.
^C

A connection could be established.

If I got a response like

telnet my-running-app-02 81
Trying 172.18.0.3...
telnet: Unable to connect to remote host: Connection refused

I could not establish the connection. The above does indicate resolving my-running-app-02 to an IP worked.

If the hostname couldn't be resolved either, the error would look like:

telnet whatever 80
telnet: could not resolve whatever/80: Name or service not known

Oracle SOA: Sending delayed JMS messages

Sometimes you might want to put something on a JMS queue and make it available after a certain period has passed to consumers. How can you achieve this using Oracle SOA Suite?

Queue or connection factory configuration. Works but not message specific

You can set the Time-to-Deliver on the queue or on the connection factory. This indicates a period during which the message is visible in the WebLogic console but will not be seen by consumers (they will have state 'delayed').
  • Queue overrides
    • On queue level you can configure a Time-to-Deliver override. This will delay all messages which are being sent to the queue. In this case however, we wanted to tweak the delay per message.
  • Connection Factory
    • On connection factory level you can configure the default Time-to-Deliver. This delay will be given by default to all messages using the specific connection factory. If you want to use multiple delays on the same queue you can connect to it using multiple connection factories. This again is configuration which is not message specific
JMSAdapter. Sending delayed messages is not possible

Producing delayed messages can be done by calling relevant Java classes (dependent on your JMS implementation) such as described here. When implementing Oracle SOA solutions however, it is more common to use the JMSAdapter instead of directly calling Java code. With the JMSAdapter you can set and get specific JMS header properties. See for example here.
  • JMSProperties
    • At first I tried to set the JMS header DeliveryTime. This header however is calculated when a message is produced to a queue or topic. I could not set this property externally
    • I also tried the property JMS_OracleDelay which can be used with the Oracle AQ JMS implementation. This also did not work with a JMS implementation which used a JDBC persistent store.
By setting specific JMS properties using the JMSAdapter, I did not manage to get this working. Maybe there was some other way using the JMSAdapter? I discovered the JMSAdapter does not call the relevant Java method to produce delayed messages (a feature the AQAdapter does provide). The JMSAdapter thus could not be used to achieve the required functionality. The method which needed to be called was: setTimeToDeliver on the weblogic.jms.extensions.WLMessageProducer.
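Below is a minimal sketch of what such a call looks like in plain Java. The JNDI names and class name are examples and error handling is kept out for brevity.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import weblogic.jms.extensions.WLMessageProducer;

public class DelayedJmsProducer {
    public void sendDelayed(String payload, long delayMillis) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // The WebLogic specific extension which the JMSAdapter does not call
            ((WLMessageProducer) producer).setTimeToDeliver(delayMillis);
            producer.send(session.createTextMessage(payload));
        } finally {
            connection.close();
        }
    }
}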

Consuming messages

Using the JMSAdapter however, we can pick up delayed messages. A benefit of using the JMSAdapter is that you can easily configure threads (even singleton over a clustered environment) and the delay between messages which are consumed. See for example the below snippet from the composite.xml:

    <binding.jca config="MyServiceInboundQueue_jms.jca">
        <property name="minimumDelayBetweenMessages">10000</property>
        <property name="singleton">true</property>
    </binding.jca>

This makes sure only one message every 10 seconds is picked up from the queue.

BPEL Java embedding. Producing JMS messages without extending the BPEL engine classpath is not possible

Oracle SOA BPEL provides a feature to embed Java code. We thought we could use this Java code to produce JMS messages with a delay since, when using Java, we could call the required method. It appeared however that the classpath which was used by the BPEL engine was limited. Classes like javax.jms.* were not available. We could add additional libraries by configuring the BpelcClasspath property in the System MBean Browser to make these standard J2EE libraries available. See here. We did not want to do this however, since this would make automatic deployment more challenging and we were unsure whether we would introduce side-effects.

Spring component

It appeared the classpath which was available from the Spring component did contain the javax.jms.* classes! We did fear however that the context in which the Spring component would run could potentially make it difficult to access the relevant connection factory and queue. Luckily this did not appear to be an issue. An additional benefit of using the Spring component is encapsulation of the Java code and better maintainability. Also, in the BPEL process the callout to the Java code is more explicitly visible in the form of an invoke.

In order to create a Spring component, the following needs to be done. See for a more elaborate example here.
  • Create a JDeveloper Java project with a library as dependency which contained the javax.jms.* classes such as 'JAX-WS Web Services'. For SOA Suite 11g make sure you indicate the Java SE version is 1.6. Create a deployment profile to be able to package the code as a JAR file.
  • Implement the Java code to put a message on the queue. See for example here and create a JAR file from it by compiling the code.
  • For JDeveloper 11g make sure the Oracle JDeveloper Spring, WebLogic SCA Integration plugin is installed.
  • Copy the previously created JAR file to your composite project folder subdirectory SCA-INF/lib
  • In the composite editor create a Spring component. Add XML code like for example below
  • The Spring component will display an interface. Drag it to the BPEL process where you want to use it. An XSD/WSDL will be generated for you and you can use an assign and invoke to call this service. If you update the interface file / replace the JAR file, you can remove the Spring component interface, add it again to the bean definition xml file, re-wire it to the BPEL component and it will regenerate the WSDL and XSD files.
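The bean definition file referred to above ('Add XML code like for example below') roughly looks as follows; the sca:service element exposes the bean so it can be wired to the BPEL component. Class and service names are examples.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:sca="http://xmlns.oracle.com/weblogic/weblogic-sca"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- The bean containing the JMS producer code from the JAR in SCA-INF/lib -->
    <bean id="delayedJmsBean" class="nl.amis.jms.DelayedJmsProducerImpl"/>

    <!-- Expose the bean as an SCA service so it can be wired to the BPEL component -->
    <sca:service name="DelayedJmsService" target="delayedJmsBean" type="nl.amis.jms.DelayedJmsProducer"/>
</beans>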
Summary

  • The JMSAdapter does not allow enqueueing messages with a specific delay (time-to-deliver).
  • A (default) time-to-deliver (delay) can be configured on the queue but also on the connection factories
  • The Spring component uses a different classpath than Java embedded in BPEL
  • The Spring component can access the InitialContext which in turn allows access to WebLogic's JNDI tree
  • Using the Spring component it is relatively easy to enqueue messages with a message specific delay

Securing Oracle Service Bus REST services with OAuth2 (without using additional products)

OAuth2 is a popular authentication framework. As a service provider it is thus common to provide support for OAuth2. How can you do this on a plain WebLogic Server / Service Bus without having to install additional products (and possibly have to pay for licenses)? If you just want to implement and test the code, see this installation manual. If you want to know more about the implementation and choices made, read on!

OAuth2 client credentials flow

OAuth2 supports different flows. One of the easiest to use is the client credentials flow. It is recommended to use this flow when the party requiring access can securely store credentials. This is usually the case when there is server to server communication (or SaaS to SaaS).

The OAuth2 client credentials flow consists of an interaction pattern between 3 actors which all have their own role in the flow.
  • The client. This can be anything which supports the OAuth2 standard. For testing I've used Postman
  • The OAuth2 authorization server. In this example I've created a custom JAX-RS service which generates and returns JWT tokens based on the authenticated user.
  • A protected service. In this example I'll use an Oracle Service Bus REST service. The protection consists of validating the token (authentication using standard OWSM policies) and providing role based access (authorization).
When using OAuth2, the authorization server returns a JSON message containing (among other things) a JWT (JSON Web Token).

In our case the client authenticates using basic authentication to a JAX-RS servlet. This uses the HTTP header Authorization which contains 'Basic' followed by Base64 encoded username:password. Of course Base64 encoded strings can be decoded easily (e.g. by using sites like these) so never use this over plain HTTP!

When this token is obtained, it can be used in the Authorization HTTP header using the Bearer keyword. A service which needs to be protected can be configured with the following standard OWSM policies for authentication: oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy and a custom policy for role based access / authorization.
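A call to a protected Service Bus REST service then looks like the following; the service URL is an example and <token> is the access_token value returned by the authorization server (-k skips certificate checks for self-signed development certificates):

curl -k -H "Authorization: Bearer <token>" https://localhost:7002/myproject/myrestservice/myresource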

JWT

JSON Web Tokens (JWT) can look something like:

eyJraWQiOiJvYXV0aDJrZXlwYWlyIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJ3ZWJsb2dpYyIsImlzcyI6Ind3dy5vcmFjbGUuY29tIiwiZXhwIjoxNTQwNDY2NDI4LCJpYXQiOjE1NDA0NjU4Mjh9.ZE8wMnFyjHcmFpdswgx3H8azVCPtHkrRjqhiKt-qZaV1Y5YlN9jAOshUnPIQ76L8K4SAduhJg7MyLQsAipzCFeT_Omxnxu0lgbD2UYtz-TUIt23bjcsJLub5pNrLXJWL3k7tSdkcVxlyHuRPYCvoLhLZzCksqnRdD6Zf9VjxGLFPktknXwpn7_aOAdzXEatj-Gd9lm321R2BdFL7ii9sXh9A1KL8cblLbhLlrXGwTF_ifTxuHSBz1B_p6xng6kmOfIwDIAJQ9t6KESQm8dQQeilcny1uRmhg4o85uc4gGzhH435q1DRuHQm22wN39FHbNT4WP3EuoZ49PpsTeQzSKA

This is not very helpful at first sight. When we look a little bit closer, we notice it consists of 3 parts separated by a '.' character. These are the header, body and signature of the token. The first 2 parts can be Base64 decoded.

Header

The header typically consists of 2 parts (see here for an overview of fields and their meaning): the type of token and the hashing algorithm. In this case the header is:

{"kid":"oauth2keypair","alg":"RS256"}

kid refers to the key id. In this case it provides a hint to the resource server on which key alias to use in its key store to validate the signature.

Body

The JWT body contains so-called claims. In this case the body is

{"sub":"weblogic","iss":"www.oracle.com","exp":1540466428,"iat":1540465828}

The subject is the subject for which the token was issued. www.oracle.com is the issuer of the token. iat indicates an epoch at which the token was issued and exp indicates until when the token is valid. Tokens are valid only for a limited duration. www.oracle.com is an issuer which is accepted by default so no additional configuration was required.

Signature

The signature is a hash of the header and body of the token, signed with the private key of a key pair. If the header or body is altered, signature validation will fail. The resource server uses the corresponding public key to validate the signature.

Challenges

Implementing the OAuth2 client credentials flow using only a WebLogic server and OWSM can be challenging. Why?
  • Authentication server. Bare WebLogic + Service Bus do not contain an authentication server which can provide JWT tokens.
  • Resource Server. Authentication of tokens. The predefined OWSM policies which provide authentication based on JWT tokens (oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy) are picky about which tokens they accept.
  • Resource Server. Authorization of tokens. OWSM provides a predefined policy to do role based access to resources: oracle/binding_permission_authorization_policy. This policy works for SOAP and REST composites and Service Bus SOAP services, but not for Service Bus REST services.
How did I fix this?
  • Create a simple authentication server to provide tokens which conform to what the predefined OWSM policies expect. By increasing the OWSM logging and checking for errors when sending in tokens, it becomes clear which fields are expected.
  • Create a custom OWSM policy to provide role based access to Service Bus REST resources
Custom components

Authentication server

The authentication server has several tasks:
  • authenticate the user (client credentials) 
    • using the WebLogic security realm
  • validate the client credentials request
    • using Apache HTTP components
  • obtain a public and private key for signing 
    • from the OPSS KeyStoreService (KSS)
  • generate a token and sign it 
Authentication

User authentication for servlets on WebLogic Server is configured in 2 deployment descriptors.

A web.xml. This file indicates
  • which resources are protected
  • how they are protected (authentication method, TLS or not)
  • who can access the resources (security role)

The weblogic.xml indicates how the security roles map to WebLogic Server roles. In this case any user in the WebLogic security realm group tokenusers (which can be in an external authentication provider such as for example an AD or other LDAP) can access the token service to obtain tokens.


Validate the credentials request

From Postman you can do a request to the token service to obtain a token. Postman's built-in OAuth2 support can also be used, since the response of the token service conforms to the OAuth2 standard.

By default certificates are checked. With self-signed certificates / development environments, those checks (such as host name verification) might fail. You can disable the certificate checks in the Postman settings screen.


Also Postman has a console available which allows you to inspect requests and responses in more detail. The request looked like


Thus this is what needed to be validated: an HTTP POST request with an application/x-www-form-urlencoded body containing grant_type=client_credentials. I've used the Apache HTTP components org.apache.http.client.utils.URLEncodedUtils class for this.

After deployment I of course needed to test the token service. Postman worked great for this but I could also have used Curl commands like:

curl -u tokenuser:Welcome01 -X POST -d "grant_type=client_credentials" http://localhost:7101/oauth2/resources/tokenservice

Accessing the OPSS keystore

Oracle WebLogic Server provides Oracle Platform Security Services.


OPSS provides secure storage of credentials and keys. A policy store can be configured to allow secure access to these resources. This policy store can be file based, LDAP based or database based. You can look at your jps-config.xml file to see which is in use in your case:


You can also look this up from the EM


In this case the file based policy store system-jazn-data.xml is used. Presence of the file on the filesystem does not mean it is actually used! If there are multiple policy stores defined, for example a file based and an LDAP based, the last one appears to be used.

The policy store can be edited from the EM


You can create a new permission:

Codebase: file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/oauth2/-
Permission class: oracle.security.jps.service.keystore.KeyStoreAccessPermission
Resource name: stripeName=owsm,keystoreName=keystore,alias=*
Actions: read

The codebase indicates the location of the deployment of the authentication server (Java WAR) on WebLogic Server.

Or when file-based, you can edit the (usually system-jazn-data.xml) file directly

In this case add:

<grant>
<grantee>
<codesource>
<url>file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/oauth2/-</url>
</codesource>
</grantee>
<permissions>
<permission>
<class>oracle.security.jps.service.keystore.KeyStoreAccessPermission</class>
<name>stripeName=owsm,keystoreName=keystore,alias=*</name>
<actions>*</actions>
</permission>
</permissions>
</grant>

At the location shown below


Now if you create a stripe owsm with a policy based keystore called keystore, the authentication server is allowed to access it!



The name of the stripe and name of the keystore are the default names which are used by the predefined OWSM policies. Thus when using these, you do not need to change any additional configuration (WSM domain config, policy config). OWSM only supports policy based KSS keystores. When using JKS keystores, you need to define credentials in the credential store framework and update policy configuration to point to the credential store entries for the keystore password, key alias and key password. The provided code created for accessing the keystore / keypair is currently KSS based. Inside the keystore you can import or generate a keypair. The current Java code of the authentication server expects a keypair oauth2keypair to be present in the keystore.


Accessing the keystore and key from Java

I defined a property file with some parameters. The file contained (among some other things relevant for token generation):

keystorestripe=owsm
keystorename=keystore
keyalias=oauth2keypair

Accessing the keystore can be done as is shown below.

            AccessController.doPrivileged(new PrivilegedAction<String>() {
                public String run() {
                    try {
                        JpsContext ctx = JpsContextFactory.getContextFactory().getContext();
                        KeyStoreService kss = ctx.getServiceInstance(KeyStoreService.class);
                        ks = kss.getKeyStore(prop.getProperty("keystorestripe"), prop.getProperty("keystorename"), null);
                    } catch (Exception e) {
                        return "error";
                    }
                    return "done";
                }
            });

When you have the keystore, accessing keys is easy

            PasswordProtection pp = new PasswordProtection(prop.getProperty("keypassword").toCharArray());
            KeyStore.PrivateKeyEntry pkEntry = (KeyStore.PrivateKeyEntry) ks.getEntry(prop.getProperty("keyalias"), pp);

(my key didn't have a password but this still worked)

Generating the JWT token

After obtaining the keypair at the keyalias, the JWT token libraries required instances of RSAPrivateKey and RSAPublicKey. That could be done as is shown below

            RSAPrivateKey myPrivateKey = (RSAPrivateKey) pkEntry.getPrivateKey();
            RSAPublicKey myPublicKey = (RSAPublicKey) pkEntry.getCertificate().getPublicKey();

In order to sign the token, an RSAKey instance was required. I could create this from the public and private key using a RSAKey.Builder method.

            RSAKey rsaJWK = new RSAKey.Builder(myPublicKey).privateKey(myPrivateKey).keyID(prop.getProperty("keyalias")).build();

Using the RSAKey, I could create a Signer

JWSSigner signer = new RSASSASigner(rsaJWK);

Preparations were done! Now only the header and body of the token remained. These were quite easy to create with the provided builder.

Claims:

JWTClaimsSet claimsSet = new JWTClaimsSet.Builder()
.subject(user)
.issuer(prop.getProperty("tokenissuer"))
.expirationTime(expires)
.issueTime(new Date(new Date().getTime()))
.build();

Generate and sign the token:

SignedJWT signedJWT = new SignedJWT(new JWSHeader.Builder(JWSAlgorithm.RS256).keyID(rsaJWK.getKeyID()).build(), claimsSet);
signedJWT.sign(signer);
String token = signedJWT.serialize();


Returning an OAuth2 JSON message could be done with

String output = String.format("{ \"access_token\" : \"%s\",\n" + "  \"scope\"        : \"read write\",\n" +  "  \"token_type\"   : \"Bearer\",\n" + "  \"expires_in\"   : %s\n}", token,expirytime);

Role based authorization policy

The predefined OWSM policies oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy create a SecurityContext which is available from the $inbound/ctx:security/ctx:transportClient inside Service Bus. Thus you do not need a custom identity asserter for this!

However, the policy does not allow you to configure role based access and the predefined policy oracle/binding_permission_authorization_policy does not work for Service Bus REST services. Thus we need a custom policy in order to achieve this. Luckily this policy can use the previously set SecurityContext to obtain principals to validate.

Challenges

Providing the correct capabilities in the policy definition was a challenge. The policy should work for Service Bus REST services. Predefined policies provide examples, however they could not be exported from the WSM Policies screen. I did a 'Create like' of a predefined policy which provided the correct capabilities and then copied those capability definitions to my custom policy definition file. Good to know: some capabilities required the text 'rest' to be part of the policy name.

Also I encountered a bug in 12.2.1.2 which is fixed with the following patch: Patch 24669800: Unable to configure Custom OWSM policy for OSB REST Services. In 12.2.1.3 there were no issues.

An OWSM policy consists of two deployments

A JAR file

  • This JAR contains the Java code of the policy. The Java code uses the parameters defined in the file below.
  • A policy-config.xml file. This file indicates which class is implementing the policy. An important part of this file is the reference to restUserAssertion, which maps to an entry in the file below

A policy description ZIP file

  • This contains a policy description file. 

The description ZIP file contains a single XML file which answers questions like:

  • Which parameters can be set for the policy? 
  • Of which type are the parameters? 
  • What are the default values of the parameters?
  • Is it an authentication or authorization policy?
  • Which bindings are supported by the policy?

The policy description file contains an element which maps to the entry in the policy-config.xml file. Also, the ZIP file has a structure which is in line with the name and Id of the policy. It looks like:


Thus the name of the policy is CUSTOM/rest_user_assertion_policy.
This name is also part of the contents of the rest_user_assertion_policy file. You can also see there is again a reference to the implementation class, and the restUserAssertion element which is in the policy-config.xml file is also there. The capabilities of the policy are mentioned in the restUserAssertion attributes.


Finally

As mentioned before, the installation manual and code can be found here. Of course this solution does not provide all the capabilities of a product like API Platform Cloud Service, OAM or OES. Usually you don't need all those capabilities and complexity, and a simple token service / policy is enough. In such cases you can consider this alternative. Of course, since it is hosted on WebLogic / Service Bus, it needs some extra protection when exposed to the internet, such as a firewall, IP whitelisting, SSL offloading, etc.

Oracle Mobile Cloud Service (MCS): An introduction to API security: Basic Authentication and OAuth2

As an integration/backend developer, when starting a project using Mobile Cloud Service, it is important to have some understanding of what this MBaaS (Mobile Backend as a Service) has to offer in terms of security features. This is important in order to be able to configure and test MCS. In this blog I will give examples on how to configure and use the basic authentication and OAuth2 features which are provided to secure APIs. You can read the Oracle documentation (which is quite good for MCS!) on this topic here.


Introduction

Oracle Mobile Cloud Service offers platform APIs which provide specific features. You can create custom APIs by writing JavaScript code to run on Node.js. Connectors are used to access backend systems. This blog focuses on authentication options for incoming requests.

The connectors are not directly available from the outside. MCS can secure custom and platform APIs. This functionality is taken care of by the Mobile Backend and the custom API configuration.



Getting started

The first thing to do when you want to expose an API is assign the API to a Mobile Backend. You can do this in the Mobile Backend configuration screen, APIs tab.


You can allow anonymous access, but generally you want to know who accesses your API. Also, MCS has a license option to pay for a specific number of API calls, so you want to know who you are paying for. In order to require authentication on a per user basis, you first have to create a user and assign it to a group. You can also do this from the Mobile Backend configuration. Go to the Mobile Users Management tab to create users and groups.


After you have done this, you can assign the role to the API. You can also do this on a per endpoint basis which makes this authentication scheme very flexible.



Now we have configured our API to allow access to users who are in a specific role. We can now call our API using basic authentication or OAuth2

Basic Authentication

In order to test our API, Postman is a suitable option. Postman is a freely available Chrome plugin (but also available standalone for several OSes) which provides many options for testing HTTP calls.


Basic authentication is a rather weak authentication mechanism. You Base64 encode a string username:password and send that as an HTTP header to the API you are calling. If someone intercepts the message, he/she can easily Base64 decode the username:password string to obtain the credentials. You can thus understand why I've blanked out that part of the Authorization field in several screenshots.


In addition to specifying the basic authentication header, you also need to specify the Oracle-Mobile-Backend-Id HTTP header which can be obtained from the main page of the Mobile Backend configuration page.

Obtain Oracle-Mobile-Backend-Id


Call your API with Basic authentication



This mechanism is rather straightforward. The authorization header needs to be supplied with every request though.
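As an illustration, the same call can be made with curl; the hostname, backend id and API path below are examples:

curl -u myuser:mypassword \
  -H "Oracle-Mobile-Backend-Id: <mobile backend id>" \
  https://mymcsinstance.example.com/mobile/custom/myapi/myendpoint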


OAuth2

OAuth2 works a bit differently from basic authentication: first a token is obtained from a token service and that token is used in subsequent requests. When using the token, no additional authentication is required.


You can obtain the token from the Mobile Backend settings page as shown above. When you do a request to this endpoint, you need to provide some information:

You can use basic authentication with the Client ID:Client secret to access the token endpoint. These can be obtained from the screen shown below.


You also need to supply a username and password of the user for whom the token is generated. After you have done a request to the token service, you obtain a token.


This token can be used in subsequent requests to your API. You can supply the token with the Bearer keyword in the Authorization HTTP header instead of sending your username/password every time. This is thus more secure.


Finally

I've not talked about security options for outgoing requests provided by the supplied connectors.


These have connector-specific options and allow identity propagation. For example the REST connector (described in the Oracle documentation here) supports SAML tokens, CSF keys, basic authentication, OAuth2 and JWT. The SOAP connector (see here) can use WS-Security in several flavours, SAML tokens, CSF keys, basic authentication, etc. (quite a list).

Running Reactive Spring Boot on GraalVM in Docker

GraalVM is an open source polyglot VM which makes it easy to mix and match different languages such as Java, Javascript and R. It has the ability (with some restrictions) to compile code to native executables. This of course offers great performance benefits. Recently, GraalVM Docker files and images have become available. See here.

Since Spring Boot is a popular Java framework and reactive (non-blocking) RESTful services/clients implemented in Spring Boot are also interesting to look at, I thought: let's combine those and produce a Docker image running a reactive Spring Boot application on GraalVM.

I've used and combined the following:
As a base I've used the code provided in the following Git repository here. In the 'complete' folder (the end result of the tutorial) is a sample Reactive RESTful Web Service and client.
The reactive Spring Boot RESTful web service and client

When looking at the sample, you can see how you can implement a non-blocking web service and client. Basically this means you use:
  • org.springframework.web.reactive.function.server.ServerRequest and ServerResponse instead of org.springframework.web.bind.annotation.RestController
  • Mono<ServerResponse> for the response of the web service
  • for a web service client you use org.springframework.web.reactive.function.client.ClientResponse and Mono<ClientResponse> for getting a response
  • since you won't use the (classic blocking) RestController with the RequestMapping annotations, you need to create your own configuration class which defines routes using  org.springframework.web.reactive.function.server.RouterFunctions
Since the response is not directly a POJO, it needs to be converted into one explicitly like with res.bodyToMono(String.class). For more details look at this tutorial or browse this repository
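To give an idea of what this looks like, below is a minimal sketch of a route definition without @RestController. The class, bean and path names are examples and not part of the referenced sample.

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;

@Configuration
public class GreetingRouter {

    // Maps GET /hello to a handler which returns a Mono<ServerResponse>
    @Bean
    public RouterFunction<ServerResponse> helloRoute() {
        return route(GET("/hello"),
                request -> ServerResponse.ok().body(BodyInserters.fromObject("Hello, reactive world")));
    }
}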

Personally I would have liked to have something like a ReactiveRestController and keep the rest (pun intended) the same. This would make refactoring to reactive services and clients easier.

GraalVM

GraalVM is a polyglot VM open sourced by Oracle. It has a community edition and an enterprise edition; the latter provides improved performance (a smaller footprint) and better security (sandboxing capabilities for native code), as indicated here. The community edition can be downloaded from GitHub and the enterprise edition from Oracle's Technology Network. Support for GraalVM on Windows is currently still under development and not released yet. A challenge for Oracle with GraalVM will be to keep the polyglot systems it supports up to date version-wise. This already was a challenge with, for example, the R support in the Oracle database and Node support in Application Container Cloud Service. See here.

When you download GraalVM CE you'll get GraalVM with a specific OpenJDK 8 version (for GraalVM 1.0.0-rc8 this is 1.8.0_172). When you download GraalVM EE from OTN, you'll get Oracle JDK 8 of the same version.

To see which components are available, you can do:

bash-4.2# gu available
Downloading: Component catalog
ComponentId              Version             Component name
----------------------------------------------------------------
python                   1.0.0-rc8           Graal.Python
R                        1.0.0-rc8           FastR
ruby                     1.0.0-rc8           TruffleRuby

GraalVM and LLVM

GraalVM supports LLVM. LLVM is a popular toolset to provide language agnostic compilation and optimization of code for specific platforms. LLVM is one of the reasons many programming languages have been popping up recently. Read more about LLVM here or visit their site here. If you can compile a language into LLVM bitcode or LLVM Intermediate Representation (IR), you can run it on GraalVM (see here). The LLVM bitcode is additionally optimized by GraalVM to achieve even better results.

GraalVM and R

GraalVM uses FastR which is based on GNU-R, the reference implementation of R. This is an alternative implementation of the R language for GraalVM and thus not actual R! For example: 'support for dplyr and data.table are on the way'. Read more here. Especially if you use exotic packages in R, I expect there to be compatibility issues. It is interesting to compare the performance of FastR on GraalVM to compiling R code to LLVM instructions and run that on GraalVM (using something like RLLVMCompile). Haven't tried that though. GraalVM seems to have momentum at the moment and I'm not so sure about RLLVMCompile.

Updating the JVM of GraalVM

You can check out the following post here for building GraalVM with a JDK 8 version. This refers to documentation on GitHub here.

"Graal depends on a JDK that supports a compatible version of JVMCI (JVM Compiler Interface). There is a JVMCI port for JDK 8 and the required JVMCI version is built into the JDK as of JDK 11 (build 20 or later)."

I have not tried this but it seems thus relatively easy to compile GraalVM from sources with support for a different JDK.

GraalVM in Docker

Oracle has recently provided GraalVM as Docker images and put the Dockerfiles in their GitHub repository. See here. These are only available for the community edition. Since the Dockerfiles are provided on GitHub, it is easy to make your own GraalVM EE images if you want (for example if you want to test GraalVM using the Oracle JDK instead of OpenJDK).

To checkout GraalVM you can run the container like:

docker run -it oracle/graalvm-ce:1.0.0-rc8 bash

Spring Boot in GraalVM in Docker

Running a Spring Boot application in Docker is relatively easy, as described here. I've also run Spring Boot applications on various VMs and described the process here. As indicated above, I've used this Ubuntu Development VM.

sudo apt-get install maven
git clone https://github.com/spring-guides/gs-reactive-rest-service.git
cd gs-reactive-rest-service/complete

Now create a Dockerfile:

FROM oracle/graalvm-ce:1.0.0-rc8
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Edit the pom.xml file

Add to the properties tag a prefix variable:

        <properties>
                <java.version>1.8</java.version>
                <docker.image.prefix>springio</docker.image.prefix>
        </properties>

Add a build plugin

        <build>
                <plugins>
                        <plugin>
                                <groupId>org.springframework.boot</groupId>
                                <artifactId>spring-boot-maven-plugin</artifactId>
                        </plugin>
                        <plugin>
                                <groupId>com.spotify</groupId>
                                <artifactId>dockerfile-maven-plugin</artifactId>
                                <version>1.3.6</version>
                                <configuration>
                                        <repository>${docker.image.prefix}/${project.artifactId}</repository>
                                        <buildArgs>
                                                <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                                        </buildArgs>
                                </configuration>
                        </plugin>
                </plugins>
        </build>

Now you can do:

mvn clean package
mvn dockerfile:build

And run it:

docker run -p 8080:8080 -t springio/gs-reactive-rest-service:latest

It’s as simple as that!

Monitoring Spring Boot applications with Prometheus and Grafana

In order to compare the performance of different JDKs for reactive Spring Boot services, I made a setup in which a Spring Boot application is wrapped in a Docker container. This makes it easy to create different containers for different JDKs with the same Spring Boot application running in it. The Spring Boot application exposes metrics to Prometheus. Grafana can read these metrics and allows making nice visualizations from them. This blog post describes the setup. A next post will show the results. You can download the code here (in the complete folder). To indicate how easy this is: getting this setup up and running and writing this blog post took me less than 1.5 hours total. I did not have much prior knowledge of Prometheus and Grafana, save for a single workshop at AMIS by Lucas Jellema.



Wrapping Spring Boot in a Docker container

Wrapping Spring Boot applications in a Docker container is easy. See for example here. You need to do the following:

Create a Dockerfile as follows (change the FROM entry to get a different JDK):

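For example, the Dockerfile used in the previous section can be reused as-is (swap the FROM line for another JDK base image):

FROM oracle/graalvm-ce:1.0.0-rc8
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]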

Add a plugin to the pom.xml file.


And define the property used:


Now you can do mvn clean package dockerfile:build and it will create the Docker image springio/gs-reactive-rest-service:latest for you. You can run this with: docker run -p 8080:8080 -t springio/gs-reactive-rest-service:latest

Making Prometheus metrics available from Spring Boot

In order to make Prometheus metrics available from the Spring Boot application, some dependencies need to be added (see here).
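
Roughly, this comes down to adding the Spring Boot actuator and the Micrometer Prometheus registry to the pom.xml (a sketch; the linked sources show the exact setup):

<dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

and making sure the prometheus endpoint is exposed, for example with management.endpoints.web.exposure.include=prometheus,health,info in application.properties.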

Now you can run the Docker container and go to a URL like http://localhost:8080/actuator/prometheus and you will see something like:


Provide Prometheus configuration

I've provided a small configuration file to make Prometheus scrape the metrics URL of the Spring Boot application (see here):
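
The configuration boils down to something like this (an approximation of the linked file; the 5 second scrape interval matches what is mentioned later in this post):

global:
  scrape_interval: 5s

scrape_configs:
  - job_name: 'spring-boot'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['spring-boot:8080']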


Putting Spring Boot, Prometheus and Grafana together

In the Prometheus configuration I've used the hostname spring-boot. I can do this because of the container_name setting in the Docker Compose configuration, as you can see below:
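
A minimal sketch of such a docker-compose.yml (the name of the Prometheus configuration file is an assumption here; the actual file in the repository may be named differently):

version: '3'
services:
  spring-boot:
    image: springio/gs-reactive-rest-service:latest
    container_name: spring-boot
    ports:
      - "8080:8080"
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"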


For Grafana and Prometheus I've used the official Docker images for those products. I've added the previously mentioned configuration file to the Prometheus instance (the volumes entry under prometheus).

Now I can do docker-compose up and it will start Spring Boot, Prometheus with the configuration file (available at localhost:9090) and Grafana (available at localhost:3000). They will be put in the same Docker network and can access each other by the hostnames 'prometheus', 'grafana' and 'spring-boot'.

Configure Grafana

In Grafana it is easy to add Prometheus as a data source.
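
If you prefer to automate this instead of using the web interface, Grafana also supports provisioning data sources from a YAML file, roughly along these lines (an illustration; adding the data source through the web interface works just as well):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true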


When you have done this, you can add dashboards. An easy way to do this is to create a simple query in Prometheus and copy it to Grafana to create a graph from it. There are probably better ways to do this but I have yet to dive into Grafana to learn more about its capabilities.
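
For example, the following query gives the request rate of the Spring Boot service (the http_server_requests metric comes from Micrometer's HTTP server instrumentation):

rate(http_server_requests_seconds_count[1m])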



Finally

Thus it is easy and powerful to monitor a Spring Boot application using Prometheus and Grafana. Using a docker-compose file, it is also easy to put an assembly together that starts and links the different containers. This makes it easy to start fresh if you want to.

To try it out for yourself, do the following (I've used the following VM, which requires Vagrant and VirtualBox to build, with Docker, docker-compose and Maven preinstalled: here):

git clone https://github.com/MaartenSmeets/gs-reactive-rest-service
cd gs-reactive-rest-service/complete
mvn clean package
mvn dockerfile:build
docker-compose up

Then you can access the previously specified URL's to access the Spring Boot application, Prometheus and Grafana.

Comparing JVM performance; Zulu, OpenJDK, Oracle JDK, GraalVM CE

There are many different choices for a JVM for your Java application. Which would be the best to use? This depends on various factors. Performance being an important one. Solid performance research however is difficult. In this blog I'll describe a setup I created to perform tests on different JVMs at the same time. I also looked at the effect of resource isolation (assigning specific CPUs and memory to the process). This effect was negligible. My test application consisted of a reactive (non-blocking) Spring Boot REST application and I've used Prometheus to poll the JVMs and Grafana for visualization.

Below is an image of the used setup. Everything was running in Docker containers except SoapUI.

Isolated measures

How can you be sure there is not something interfering with your measures? Of course you can't be absolutely sure but you can try and isolate resources assigned to processes. For example assign a dedicated CPU and a fixed amount of memory. I also did several tests which put resource constraints on the load generating software, monitoring software and visualization software (assign different CPUs and memory to those resources). Assigning specific resources to the processes (using docker-compose v2 cpuset and memory parameters) did not seem to greatly influence the measures of individual process load and response times. I also compared startup, under load and without load situations. The findings did not change under different circumstances.

Assigning a specific CPU and memory to a process

Using docker-compose to configure a specific CPU for a process is challenging. The version 3 docker-compose format does not support assigning a specific CPU to a process. In addition, the version 3 format does not support assigning resource constraints at all when you use docker-compose to run it. This is because the people working on Docker appear to want to get rid of docker-compose (which is a separately maintained Python wrapper around Docker commands) in favor of docker stack deploy, which uses Docker swarm and maybe Kubernetes in the future. You can imagine assigning a specific CPU in a potentially multi-host environment is not trivial. Thus I migrated my docker-compose file back to the version 2 format, which does allow assigning specific CPUs, to test this. The software generating load and monitoring the JVMs was assigned to specific CPUs not shared by the JVMs processing the load. I used the taskset command for this.
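
As an illustration, pinning a service to specific CPUs and a fixed amount of memory in a version 2 compose file looks roughly like this (service name, image name and CPU numbers are just examples), and the load generator can be pinned to other CPUs with taskset:

version: '2'
services:
  spring-boot-openjdk:
    image: spring-boot-openjdk
    cpuset: "0,1"
    mem_limit: 1024M

# pin the load generating software to CPUs not used by the JVMs, for example:
# taskset -c 6,7 <command to start the load generator>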

Measures under the same circumstances

How can you be sure that all measures are conducted under exactly the same circumstances? When I run a test against a JVM and run the same test scenario again tomorrow, my results will differ. This can have various causes, such as different CPUs picking up the workload while those CPUs are also busy with other things, or different background processes running inside my host or guest OS. Even when testing a single JVM first and testing another single JVM right after, the results will not be comparable since you cannot rule out that something has changed. For example, I'm using Prometheus to gather measures. During the second run, the Prometheus database might be filled with more data. This might cause adding new data to be slower, which could influence the performance measures of the second JVM. This example might be rather far-fetched, but you can think of other reasons why measures taken at different times can differ. That's why I chose to perform all measures simultaneously.

Setup

My setup consisted of a docker-compose file which allowed me to easily start the same reactive Spring Boot application four times, each instance running on a different JVM. In front of the 4 JVMs I've put an haproxy instance to load balance requests. Why did I do this? To make sure there was no difference between the tests caused by time-related factors I did not account for; all JVMs were put under the same load at the same time.
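
The haproxy configuration for this is a simple round robin over the four backends; a sketch (the hostnames are illustrative, not necessarily the ones used in the repository):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend spring_boot_in
    bind *:8080
    default_backend jvms

backend jvms
    balance roundrobin
    server openjdk   spring-boot-openjdk:8080   check
    server oraclejdk spring-boot-oraclejdk:8080 check
    server zulu      spring-boot-zulu:8080      check
    server graalvm   spring-boot-graalvm:8080   check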

In order to monitor results I've used Micrometer to provide an endpoint which enables Prometheus to read JVM metrics. I've used Grafana to visualize the data using the following dashboard: https://grafana.com/dashboards/4701

Since GraalVM is currently only available as a JDK 8 version, I've used a JDK 8 version for the other JVMs as well.

When the container is running, the JVM version can be checked by accessing the actuator url: localhost:8080/actuator/env


or with for example

docker run --rm store/oracle/serverjre:8 java -version

I've used the following versions:
  • GraalVM CE rc9 (8u192)
  • OpenJDK 8u191
  • Zulu 8u192
  • Oracle JDK 8u181
Why the difference in versions? These were the versions which were available to me at the moment of writing this blog on hub.docker.com.

Getting started

You can download the code here from the complete folder. You can run the setup with:

sh ./buildjdkcontainers.sh
docker-compose --compatibility -f docker-compose-jdks.yml up

Next you can access
  • the haproxy (which routes to the different JVMs) at localhost:8080
  • Prometheus at localhost:9090
  • Grafana at localhost:3000
You need to configure Grafana to access Prometheus;


Next you need to import the dashboard in Grafana;


Next you can do a load test on http://localhost:8080/hello (HTTP GET) and see the results in the Grafana dashboard.

Prometheus itself can also feed information to Grafana, and HAproxy can do so as well by using an exporter. I did not configure this in my setup.

Different OSs

A difference between the different Docker images was the OS used within the image.

The OS can be determined with:

docker run --rm store/oracle/serverjre:8 sh -c 'cat /etc/*-release'
  • azul/zulu-openjdk:8 used Ubuntu 18.04
  • oracle/graalvm-ce:1.0.0-rc9 used Oracle Linux Server 7.5
  • openjdk:8 used Debian GNU/Linux 9
  • store/oracle/serverjre:8 used Oracle Linux Server 7.5
I don't think this would have had much effect on the JVMs running inside (with Alpine I would have expected an effect). At least Oracle JDK and GraalVM use the same OS.

Results

Using the JVM micrometer dashboard, it was easy to distinguish specific areas of difference in order to investigate them further. 

CPU usage


GraalVM had the highest CPU usage overall during the test. Oracle JDK had the lowest CPU usage.

Response times

Overall GraalVM had the worst response times and OpenJDK the best followed closely by Oracle JDK and Zulu. On average the difference was about 30% between OpenJDK and GraalVM. 


Garbage collection

Interesting to see is that GraalVM loads way more classes than the other JDKs. OpenJDK loads the fewest classes. The difference between GraalVM and OpenJDK is about 25%. I have not yet determined whether this is a fixed overhead in the number of additional classes for GraalVM or whether it scales with the number of classes used and is a fixed percentage.


Of course these additional classes could cause delays during garbage collection (although correlation does not necessarily mean causation). Longer GC pause times for GraalVM are indeed what we see below.

Below is a graph of the sum of the GC pause times. The longest pause times (the one line on top) are GC pause times due to allocation failures in GraalVM.


Memory usage


JVM memory usage is interesting to look at. As you can see in the above graph, the OpenJDK JVM uses the most memory. The garbage collection behavior of GraalVM and Zulu appears to be similar, but GraalVM has a higher base memory usage. Oracle JDK shows much slower garbage collection. When looking at averages, the OpenJDK JVM uses the most memory while Zulu uses the least.

When looking at a zoomed-out graph over a longer period, the behavior of Oracle JDK and OpenJDK seems erratic and can spike to relatively high values, while Zulu and GraalVM seem more stable.


Summary

Overview

I've conducted a load test using SOAP UI with a reactive Spring Boot REST application running on 4 different JVMs behind a round robin haproxy load balancer. I've used Prometheus to poll the JVM instances (which used Micrometer to produce data) every 5 seconds and used Grafana and Prometheus to visualize the data.

The results would suggest GraalVM is not a suitable drop-in replacement for, for example, OpenJDK since it performs worse, uses more resources, loads more classes and spends more time in garbage collection.
  • GraalVM loads more classes for the same application
  • GraalVM causes the slowest response times for the application
  • GraalVM uses the most CPU (while achieving the slowest response times)
  • GraalVM spends the most time on garbage collection
  • Zulu is the most efficient in memory usage of the compared JVMs. Zulu and GraalVM are more stable in their memory usage when compared to Oracle JDK and OpenJDK.
Of course, since GraalVM is relatively new, it could be that the metrics provided by Micrometer do not give a correct indication of actual throughput and resource usage. It could also be that my setup has flaws which cause this difference. I tried to rule out the latter by looking at the metrics in different situations.

If you want to use the polyglot features of GraalVM, of course the other JVMs do not provide a suitable alternative.

Further research

Native executables?

GraalVM allows code to be compiled to a native executable. I've not looked at the performance of these native executables but potentially this could make GraalVM a lot more interesting. Also it would be interesting to see how the Prometheus metrics would behave in a native executable since there is no real JVM anymore in this case.

Blocking calls

The application used was simple. The behavior under load might differ with more complex applications or for example when using blocking calls in Spring Boot.

Tweaking the JVM parameters

I've not specifically tweaked the JVM performance. This was out of the box without any specific tweaks. I've not looked at defaults for parameters or specific parameters for certain JVMs. It might be that tweaked parameters cause very different results.

GraalVM EE and Java 11 (or 12 or ...)

It would be interesting to check out GraalVM EE since it is compiled with Oracle JDK instead of OpenJDK. I've not found a Docker image available of this yet. Also comparing Java 11 with Java 8 would be interesting. More to come!

Comparing JVM performance startup time and memory usage (process + JVM)

In a previous blog post I created a setup to compare JVM performance of several JVMs. I received some valuable feedback on the measures I conducted and requests to add additional JVMs. In this second post I'll look at some more JVMs and I've added some measures. Also I've automated the test and reduced the complexity of the setup by removing haproxy and testing a single JVM at a time.

Setup

Test application

I've used the reactive Spring Boot application from here.

JVMs

The JVMs which were looked at;
  • openjdk:8u181
  • oracle/graalvm-ce:1.0.0-rc9
  • adoptopenjdk/openjdk8:jdk8u172-b11
  • adoptopenjdk/openjdk8-openj9:jdk8u181-b13_openj9-0.9.0
  • azul/zulu-openjdk:8u192
  • store/oracle/serverjre:8
The versions were the latest versions available to me at the time. I also quickly looked at Azul Zing but couldn't get a Docker image with my application running quickly enough, so for now I skipped it.

Automated tests

I've used SoapUI's load test runner to automate my tests. First I executed a 10 second 'primer' load test to reach a steady state. Next I performed a 5 minute test with the following settings:


Dockerfile

I've used the following Dockerfile:

FROM openjdk:8u181
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap","-jar","/app.jar"]

And of course varied the FROM entry.

Docker-compose

The people from Docker have reduced the options which are available in the v3 docker-compose.yml file. In order to set memory limits and configure your network stack, a v2 docker-compose.yml is required. I've used the following:

version: '2'
services:
  spring-boot-jdk:
    image: "spring-boot-jdk"
    container_name: spring-boot-jdk
    ports:
    - "8080:8080"
    networks:
            - dockernet
    mem_limit: 1024M
  prometheus:
    image: "prom/prometheus"
    ports:
    - "9090:9090"
    volumes:
     - ./prom-jdks.yml:/etc/prometheus/prometheus.yml
    container_name: prometheus
    networks:
            - dockernet
  grafana:
     image: "grafana/grafana"
     ports:
     - "3000:3000"
     container_name: grafana
     networks:
            - dockernet
networks:
    dockernet:
        driver: bridge
        ipam:
            config:
            - subnet: 192.168.0.0/24
              gateway: 192.168.0.1

I've used a memory limit to make sure all the JVMs were running under a similar amount of available memory.

I stopped, removed and recreated the spring-boot-jdk container, every time with a different JVM.

process-exporter

Why hardcode the network settings in the docker-compose.yml file? Because I wanted to measure the memory of the complete JVM process (more on that below). When using for example Micrometer, you only get the memory used inside the JVM and not the memory the OS process uses. In order to achieve this, I've used process-exporter with the following configuration (process-exporter.yml in the proc-exp folder):

process_names:
  # comm is the second field of /proc/<pid>/stat minus parens.
  # It is the base executable name, truncated at 15 chars.  
  # It cannot be modified by the program, unlike exe.
  - comm:
    - java
    cmdline: 
    - app.jar  

This monitors java processes which have app.jar in their command-line. If I didn't also check the command-line, my Java test processes would also be included and I didn't want that.

Next I started process-exporter on my host with:

docker run -d --rm -p 9256:9256 --privileged -v /proc:/host/proc -v `pwd`/proc-exp:/config ncabatoff/process-exporter --procfs /host/proc -config.path /config/process-exporter.yml

I wanted to monitor process-exporter (running on the host) with Prometheus running inside my Docker network. To make this possible, my host (the gateway as seen from within the Docker network) should always be available at the same IP so I could configure that in my Prometheus configuration.
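
With the gateway fixed at 192.168.0.1 in the docker-compose network, the Prometheus configuration can then contain an extra scrape job for process-exporter, roughly like this (a sketch; the job name is arbitrary):

scrape_configs:
  - job_name: 'process-exporter'
    static_configs:
      - targets: ['192.168.0.1:9256']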

Results

Response time

I did HTTP GET requests from SOAPUI. This is the average response time of the service measured after a steady state was reached.

As measured by Micrometer from within the applications, the reported response times were as follows:



OpenJDK and Oracle JDK were fastest while AdoptOpenJDK was slowest.

When looking at what SOAP UI reported as response times, we see something different.


This differs from what I measured previously. In that previous measure GraalVM appeared to provide the slowest response times, while during this test that was clearly not the case: GraalVM was one of the faster JVMs, both when looking at measures from within the JVM and from outside the JVM.

Between the different measures there was also quite a lot of difference. The response times of OpenJDK are slowest here instead of fastest. This makes me wonder whether the measures from within the JVM are really comparable across JVMs and whether they are measuring the same thing; this might differ due to implementation differences. AdoptOpenJDK was slowest in response times both when looking at the measures from within the JVM and from outside.

Startup time

This is the period reported by Spring Boot about how long it took for the application to start and how long the JVM was running before the application was actually up.


Here again we see the results are not quite as reproducible as I would want. AdoptOpenJDK with OpenJ9 was clearly slowest in both tests for application startup, followed by GraalVM. There's no clear winner though.

Process memory usage

This is a result from process exporter on how much memory the Java process took in total. This consists of virtual, reserved/resident and swap memory. Swap memory was for all JVMs zero during the test. Virtual memory also consists of shared libraries (which are also used by other programs). When looking at resident and virtual memory I saw the following (using https://grafana.com/dashboards/249):


The clear winner here with the least memory usage is OpenJ9, followed at a distance by Oracle JDK. OpenJDK and GraalVM use the most memory (both virtual and resident).

JVM memory usage

This is the heap and non-heap memory inside the JVM, measured with Micrometer and exposed to Prometheus. Non-heap consists of reserved memory, a cache and Metaspace (which replaced PermGen in Java 8). Heap consists of several memory areas in which the JVM moves objects around.

Heap


I've used the following Grafana dashboard: https://grafana.com/dashboards/4701. When looking at heap memory, OpenJ9 clearly seems to be the winner, followed again at a distance by Oracle JDK. GraalVM uses the most memory within the JVM for the same application.

When looking at the parts the heap consists of, the different JVMs show some remarkable differences. Especially OpenJ9 behaves really differently compared to the other JVMs.

Non heap


OpenJ9 appears to be the winner here again. GraalVM uses the most non-heap memory. When we look in a bit more detail at what happens in the non-heap area, we see the following:


OpenJ9 (the 4th bar in the graphs) clearly behaves differently.

Threads

When looking at threads, GraalVM uses slightly more threads and OpenJ9 a lot more when compared to the other JVMs.


More threads and less memory usage for OpenJ9(?).

Conclusions

Startup times

OpenJ9 and GraalVM are slowest to start. The results here are also not that reproducible so I should do more tests on this with larger applications.

Response times

Since the response times measured inside and outside of the JVM differed a lot and the results were not solidly reproducible, I won't draw any conclusions here yet.

Memory usage

Upon request I also looked at OS process memory using process-exporter. Also I've split up heap and non-heap memory. All memory measures provided similar results in that the JVM which used most memory was GraalVM and the JVM which used least memory (by far) was OpenJ9. If memory usage is a concern I would recommend you to consider OpenJ9 as an option.

Not looked at yet
  • larger applications containing more complex logic
  • non-reactive Spring Boot
  • only compared Java 8 JVMs because for GraalVM at the moment of writing there was no newer version available yet. Is Java 11 faster? (I'm going to skip 9 and 10, no Oracle LTS versions)
  • Azul Zing should be added as it is claimed it is fast
  • GraalVM can produce native executables. Interesting to also use them in a comparison.
  • Garbage collection behavior also differs. I have measures but did not have the time yet to look at it in more detail.
GraalVM

Of course GraalVM is much more than just a JVM in that it allows you to run other languages like JavaScript (not to be confused with Nashorn or Rhino) and R in a seamless manner and allows you to create native executables which are supposed to be much faster. I haven't tested this yet though.

Minikube on Windows. Hyper-V vs Vagrant/VirtualBox

Minikube is a good way to get started with Kubernetes. In this blog post I'll describe how you can quickly get started with Minikube on Windows using 2 different ways to get a working environment. One is based on Vagrant and VirtualBox (in which an Ubuntu environment is created) and one uses Hyper-V (with an out of the box Minikube Linux distribution running inside). With some colleagues we've created a Kubernetes workshop here. All steps required to get both environments up and running are described in detail there. This blog post will only compare and provide a short overview. Do mind that many of the things in this blog post are a personal opinion. Things like ease of use are subjective.
Why Minikube?

At first I was not convinced using Minikube would provide a sufficient environment to get to learn Kubernetes. There are several reasons though why I decided to go with Minikube.

Differences in Kubernetes distributions

When setting up something for a workshop or a blog, a purpose is to have it applicable to as many people as possible. The knowledge provided should be applicable in different environments. When using a complete Kubernetes distribution or a PaaS solution, only a part of the knowledge is reusable for different platforms. Minikube distributions are far more comparable than Kubernetes distributions (running locally or provided as PaaS).

Kubernetes distributions differ significantly
I started checking out complete Kubernetes distributions such as the one provided by Canonical and the one provided by Oracle in Vagrant boxes. Quickly I discovered that they differ significantly in many aspects, such as virtualization technology (LXC for Ubuntu vs containerd for Oracle Linux) and installation procedure (conjure-up, snap and juju for Ubuntu; Vagrant and shell scripts for Oracle Linux). Also the resulting environments were different in configuration and in what was pre-installed inside them.

PaaS Kubernetes providers provide different experiences
Also most PaaS providers provide a different experience and tools. Usually PaaS providers have specific CLI tools for their environment and specific web interfaces. Large PaaS providers are Google (GKE), Azure (AKS) and Amazon (EKS) thus should you be required to specialize, it is probably safest to choose one of those. Google donated Kubernetes to the CNCF and is probably most advanced in its implementation but I do not know this for sure. On most cloud providers you can do unmanaged Kubernetes (do it yourself on IaaS) or managed (on PaaS). Again, the choices are largely provider specific.

Light on resources and easy to install

Since my target audience for blog posts is mostly developers and developers often develop locally on their own laptop, the ease of installation and resource usage are important.

Minikube on Windows using Hyper-V

When using Hyper-V, you cannot use VirtualBox (5.x) at the same time to run VMs; you need to make a choice and switching afterwards requires a Windows restart. The reason for this is that when Hyper-V is running, Hyper-V claims the hardware resources required to provide virtualization and provides an abstraction on top. The only way to access those resources is through an Hyper-V interface. This is also the reason why Hyper-V running on top of Hyper-V works and other virtualization technologies on top of Hyper-V are often an issue (unless they implement the Hyper-V interface). When VirtualBox is running, VirtualBox asks the host for access to the required resources and not Hyper-V. When Hyper-V is not running underneath the host, the host can provide this. Otherwise the host is (more or less) just a VM running on Hyper-V and cannot help VirtualBox with its requirements (something like 'the wrong person to ask').

Getting started with Minikube on Windows requires the use of several tools and some configuration;
  • Hyper-V needs to be enabled and the Hyper-V manager installed. Hyper-V is an out of the box  Windows 10 feature.
  • Chocolatey. Chocolatey is a package manager for Windows which takes away the manual steps required for installing and updating software on Windows. Chocolatey is not required to install CLI tools and Minikube but saves you a lot of time. I can also recommend it for wider use and not just for Docker, Kubernetes, Minikube, Helm and such.
  • Hyper-V configuration. You need to create a Virtual Switch which is linked to the network adapter which obtains an IP for the Minikube or Docker Desktop Hyper-V VM to be accessible. Be aware that when your network changes (e.g. cable, wireless), you will need to select a different adapter. Hyper-V has a bug with using dynamic memory and you need to disable it.
  • Docker Desktop. Software running in Minikube is packaged as Docker containers. The Minikube installation creates a VM which runs its own Docker daemon. You use Docker to build and publish images to that daemon. Docker Desktop includes Docker Machine which allows you to manage Docker hosts. In order to set the environment variables Docker requires to access your Minikube Hyper-V VM, you can use minikube docker-env (see the example after this list).
  • Kubectl. This is the Kubernetes CLI. You will use this often.
  • Minikube itself of course. Usually also Helm but this is out of scope for this post.
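
To give an impression of what this looks like once everything is installed, starting Minikube against a Hyper-V virtual switch and pointing the Docker CLI at the Minikube VM can be done roughly as follows from PowerShell (the virtual switch name is an example; flags may differ per Minikube version):

minikube start --vm-driver hyperv --hyperv-virtual-switch "MinikubeSwitch"
minikube docker-env | Invoke-Expression
kubectl get nodes
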
Minikube in Ubuntu in VirtualBox using Vagrant

This setup requires you to build your own Ubuntu installation with a Vagrantfile.

For this setup you need:
  • VirtualBox. This provides the virtualization.
  • Vagrant. This makes creating a VirtualBox VM and provisioning it easy. Vagrant requires a Vagrantfile and provisioning scripts. Obtaining or writing these might seem the hardest part but it is actually pretty easy. You can find examples of such Vagrantfiles here and here.
No additional tools are required for this since the provisioning scripts will take care of creating the Ubuntu VM and install the required CLI tools such as docker, kubectl, minikube.

You cannot do nested virtualization within VirtualBox 5.x or earlier. Thus you will need to run minikube with --vm-driver=none within the VirtualBox machine to start Minikube. Also, since it is a custom VM, you need to manually start kubectl proxy in order to have a consistent endpoint (host:port) for the API and the dashboard. In VirtualBox 6, nested virtualization is possible, though the feature is still young (read more here). This might allow you to run VirtualBox, an Ubuntu VM on top and for example use KVM for the Minikube VM. This way you do not use a custom Minikube environment but an out of the box one, making the environment easier to maintain and upgrade. I have not tested this yet though.
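
Inside the Ubuntu VM, starting Minikube without a nested hypervisor then looks roughly like this (assuming the provisioning scripts installed docker, kubectl and minikube):

sudo minikube start --vm-driver=none
kubectl proxy &   # provides a consistent localhost endpoint for the API server and dashboard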

This does mean though that the CLI tools will be there as well and that building Docker containers will need to be done within the VirtualBox VM, or you will need to work with a shared folder in order to obtain Docker images from your Windows environment. You might be able to expose this environment to the host and use the Windows CLI tools but I have not tried this yet and I'm not planning to. Using the Hyper-V solution would be easier if you want to use the Windows CLI tools. Docker Desktop explicitly states it only supports Hyper-V so there might be issues.

Comparing the environments

In order to compare the environments it is helpful to look at different aspects.

How easy is it?

Installation
The Hyper-V path requires the installation of several tools. Once this is done, it does not require much maintenance. The VirtualBox/Vagrant path requires fewer tools (quicker to get up and running), however it does require provisioning scripts which create an environment for you. Setting up these scripts yourself can take some time (more than the Hyper-V environment installation). Once you have them, the recreation of the entire environment (Minikube + CLI tools) is automated and documented (infrastructure as code). You can also use provided scripts such as here and here to save yourself the work.

Environment
The environment created when using the Hyper-V path requires the local CLI tools to access the Minikube VM running on Hyper-V. The Hyper-V VM and the tools are separated and need to be managed that way, as two separate things. The tools need connectivity to the VM (Hyper-V virtual switch configuration) and the host needs configuration to allow this connectivity. This can cause issues when, for example, you change your laptop's network from wireless to wired. These connectivity challenges might be a benefit though, since when using a full Kubernetes you will need to configure and connect to a similar remote instance, and you already know how that works when you have to deal with it regularly (frequency reduces difficulty). A question is whether you want to deal with it regularly.

When using VirtualBox and having everything within that VirtualBox VM, connectivity is no issue since everything is located on the same machine; a single environment. Also, creating backups of the complete environment (tools, configuration, running Minikube) becomes easy with clones and snapshots. The environment can quickly be rebuilt and distributed (for a workshop for example) if necessary. On Hyper-V you can only easily do that for the Minikube machine, but the tools need a local installation.

Where are the CLI tools located?

Hyper-V
The tools are installed natively on Windows. This is useful since if you are using a development environment on Windows, building a Docker container and deploying it to Kubernetes can be done from the same environment. Management of tools is also done on Windows. When using Chocolatey this is pretty easy. For large groups of developers, this tends to become everyone's own responsibility, which has some risks. Benefit of having the tools native on Windows is that integration with other tools which are installed on Windows (such as an IDE) is easy.

VirtualBox/Vagrant
The tools are installed within the VirtualBox VM. If you are developing on Windows and want to use images within the VM, you have 2 options; figure out how to expose the VirtualBox VM to Docker Desktop (though it says only Hyper-V is supported) or develop completely from the VirtualBox VM. Developing from the VirtualBox VM has benefits such as isolation of the environment and you have the option to manage the development environment as code, allowing easy updates for large groups of developers. A drawback is the overhead of the virtualized environment / OS. This overhead is visible in CPU, memory and disk usage.

Virtualization

When using Hyper-V, a virtual machine runs in the background which can be managed from the Hyper-V manager. You can connect to it using RDP and can also use docker-machine to manage the machine from the CLI. Minikube runs within the Hyper-V VM.


You can login to this VM with user root. It uses docker-containerd inside to run containers. The custom Minikube Linux distribution is created by using Buildroot (although you do not have to care). It runs kube-proxy and a single node Kubernetes cluster.

Hyper-V is a Type 1 hypervisor. This means it works directly on hardware without having an OS layer in between. The host OS also runs through Hyper-V (installing Hyper-V actually puts Hyper-V between the host OS and the hardware). Since the host OS is the Parent Partition or Root Partition, there is not much delay of the host even though the host also accesses the hardware through the hypervisor. Since Hyper-V is dependent on the Root Partition for many things, there is some debate on whether Hyper-V is truly Type 1 and not Type 1+2.


VirtualBox is a Type 2 hypervisor. It depends on the host OS for access to hardware. This means it is slower since there is an extra layer between the VM and the hardware; the host OS. Also since it cannot directly access hardware, VirtualBox needs to provide an abstraction of the hardware provided by the host OS to the guest and provides several drivers for this. For example you can choose different NICs for the guest.

Hyper-V manager manages VMs running on Hyper-V but does not run them. To display a VM, RDP is used. VirtualBox actually runs VMs itself; in my experience VirtualBox provides a more seamless experience (between host/guest and hypervisor/guest) for developers when compared to Hyper-V + Hyper-V manager. With VirtualBox, the VMs run on VirtualBox which runs on the host. With Hyper-V, the VM runs on the same hypervisor the host runs on.

Both Hyper-V and VirtualBox provide paravirtualization drivers. These drivers allow the guest to access the hypervisor in an optimized manner.

Finally

Summary

To summarize see the below table. Of course many things are personal opinions. Yours might differ!


Which setup works best for you depends on your personal tastes. Generally speaking; if you like Windows, go for Hyper-V/Chocolatey and if you prefer Linux, go for VirtualBox/Vagrant. I haven't done much with Apple products so can't compare those with the above setups. I can imagine it would be similar to the Windows setup.

The Minikube environments created by both setups are comparable in usage. It is useful to get some experience with both in order to increase your personal experience. When getting started with Kubernetes or want to develop locally, I can definitely recommend Minikube! When learning Kubernetes, do not expect it to be easy. It will take some time to get the hang of it. As indicated, some of the knowledge you gain might be platform specific and not generally applicable. Kubernetes basics however are portable.

Alternatives

There are of course many alternatives worth investigating

  • Using VirtualBox 6 nested virtualization feature
  • Using Minikube to create a standard VirtualBox Minikube VM. See for example here. This way you can still use the Windows CLI tools but don't have to manage a custom VM. You might have connectivity issues between the tools and the VM, but since VirtualBox is (in my experience) easier to configure (you will not need to fiddle with Hyper-V memory bugs and virtual switches), this setup would probably be easier to use than Hyper-V.
  • Getting rid of Windows altogether (or dual boot) and switch to for example Ubuntu or other developer friendly distributions. This would probably make working with Minikube easiest. You could use KVM for the Minikube host and install the CLI tools locally. Docker on Linux is also easier than on Windows.

Some challenges with Oracle Reports 12.2.1.3

Oracle Reports has been around for a long time and future versions will most likely not be created  (see here). Hence this is going to be my first and also last blog post on this product. Installing Reports is not an easy task. It requires several steps which are not well documented. This blog post contains a few pointers. The main source of inspiration is here.

Preparations

See for more detail here

  • Install an Oracle database. The database can be picky in required libraries. This is out of scope for this blog post.
  • Install Oracle JDK 8. WebLogic is not certified for OpenJDK.
  • Install WebLogic
  • Install WebLogic FMW Infrastructure
  • Use the RCU to create the required schemas




  • Install Oracle Forms / Reports
  • Create a new domain (using config.sh)
    Target the AdminServer to the correct machine


Some challenges

The above, although it requires quite some work, is not difficult. The part below took me some time to figure out.

Multicast

Error: REP-51002: Bind to Reports Server failed

Reports uses multicast by default. When installing on Oracle Linux, you need to configure your firewall to allow this. I was running in a VM so I just disabled the firewall as described here.

sudo systemctl disable firewalld
sudo systemctl stop firewalld

Attempts to allow multicast requests by correctly configuring the firewall such as described here have not succeeded. If you want to (not required) you can configure multicast to use a specific interface. This allows VMs to communicate with each other and makes monitoring multicast requests easy. See here.

JPS

Error: Jps startup failed

See Oracle Support Doc ID 2233555.1. Oracle Reports requires installation of JCE, the Java Cryptography Extension. No issue there since you can download it here. What they don't tell you though is that WebLogic uses an included JRE in addition to the JDK which was specified during installation. Thus you need to enable JCE in 2 locations. For JDK 8u201, for example, you can specify crypto.policy = unlimited in java.security. For the included JRE (8u131 for WebLogic 12.2.1.3) you need to copy the JAR files from the JCE download to ORACLE_HOME/oracle_common/jdk/jre/lib/security.
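
To illustrate both locations (a sketch; exact paths depend on your installation):

# Recent JDK 8 releases: enable unlimited strength via the crypto.policy property
# (file: $JAVA_HOME/jre/lib/security/java.security)
crypto.policy=unlimited

# Included JRE of WebLogic 12.2.1.3: copy the policy JARs from the JCE download, for example:
# cp local_policy.jar US_export_policy.jar $ORACLE_HOME/oracle_common/jdk/jre/lib/security/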

Starting the server

Don't forget to first create a tools and server component.

connect('weblogic','Welcome01','localhost:7001')
createReportsToolsInstance(instanceName='reptools1',machine='AdminServerMachine')
createReportsServerInstance(instanceName='my_repsrv',machine='AdminServerMachine')

And start the server component
/home/oracle/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain/bin/startComponent.sh my_repsrv

You can check if the server is working correctly in several ways. rwdiag is useful. The nodemanager logs also provide some hints, and of course the managed server and component logs. A restart loop is not unusual when you're just starting out with this product.

Logging in

Reports uses OPSS by default. If you want to login, you need to assign application roles to users.


Now you can access URL's like http://localhost:9002/reports/rwservlet/showenv?server=my_repsrv

Starting Reports

I will not go into details on how you can actually create and run a report, but you can supply the database, database user, password, path of a report and output in the URL such as:

http://localhost:9002/reports/rwservlet?report=/home/oracle/reps/test.rdf&destype=file&desname=/home/oracle/reps/output.pdf&p_jdbcpds=system/Welcome01@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))(CONNECT_DATA=(SID=XE)))

Thus security might be something to carefully look at.