<![CDATA[ Chronicles of Technology ]]> https://chronicler.tech https://chronicler.tech/favicon.png Chronicles of Technology https://chronicler.tech Thu, 28 Mar 2024 09:17:30 -0400 60 <![CDATA[ Some common solutions to Terratest Go test errors ]]> https://chronicler.tech/some-common-terratest-errors/ 65edef8b90d633000154f07a Sun, 10 Mar 2024 14:13:16 -0400 This post provides a few resolutions to some Terratest errors encountered.


Problem:

You get a "go: cannot find main module" error, as shown below, when running a go test:

ahmed@devhost:/u01/terraform/test> go test -v -timeout 1m -run TestVM

go: cannot find main module, but found .git/config in /u01
to create a module there, run:
cd ../.. && go mod init

Solution:

Run these commands to install Go and initialize the module:

# Install the golang library if not already installed
sudo yum install -y go

# Go to top level Terraform folder
cd /u01/terraform

# The last parameter is your module name
go mod init gitlab.com/AhmedAboulnaga/terraform/test

# Optional: clean up module requirements (run this after go mod init, since it needs go.mod)
go mod tidy

Problem:

You get a "no required module provides package" error when running a go test:

ahmed@devhost:/u01/terraform/test> go test -v -timeout 1m -run TestVM

terratest_vm_test.go:7:2: no required module provides package github.com/gruntwork-io/terratest/modules/terraform; to add it:
go get github.com/gruntwork-io/terratest/modules/terraform

terratest_vm_test.go:8:9: no required module provides package github.com/stretchr/testify/assert; to add it:
go get github.com/stretchr/testify/assert

Solution:

Since the modules listed in the error are referenced in the .go test scripts, download them:

# Navigate to top level Terraform folder
cd /u01/terraform

# Download the modules shown in the error
go get github.com/gruntwork-io/terratest/modules/terraform
go get github.com/stretchr/testify/assert

Problem:

You get a "no test files" message when running a go test:

ahmed@devhost:/u01/terraform/test> go test -v -timeout 1m -run TestVM

? gitlab.com/AhmedAboulnaga/terraform/test [no test files]

Solution:

In this particular case, the test file was not named correctly. Rename the file so that it ends with _test.go:

mv terratest_vm.go terratest_vm_test.go

Problem:

You get a "warning: no tests to run" message when running a go test:

ahmed@devhost:/u01/terraform/test> go test -v -timeout 1m -run asdf

testing: warning: no tests to run
PASS
ok gitlab.com/AhmedAboulnaga/terraform/test 0.043s

Solution:

Notice that the command above runs a test called "asdf", which does not exist. The name passed to -run must match a test function name in your .go test file:

ahmed@devhost:/u01/terraform/test> go test -v -timeout 1m -run TestVM

Problem:

You get a "found packages" error when running a go test:

ahmed@devhost:/u01/terraform/test> go test -v -timeout 10m -run TestVM

found packages compute (terratest_vm_test.go) and database (terratest_db_test.go) in /u01/terraform/test

Solution:

The package name should be identical in all files in this directory.

  1. Edit all .go files in this folder.
  2. Ensure the first line (e.g., package unittests) is identical in all files.
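
Putting these conventions together, a minimal test file could look like the sketch below. This is an illustration, not code from the post; the Terraform directory and the "vm_name" output are placeholders:

// terratest_vm_test.go -- the file name must end with _test.go
package unittests // the same package name as every other .go file in this folder

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

// The function name is what you pass to "go test -run TestVM"
func TestVM(t *testing.T) {
    opts := &terraform.Options{TerraformDir: "../"}

    // Destroy the infrastructure at the end of the test
    defer terraform.Destroy(t, opts)

    // Run "terraform init" and "terraform apply"
    terraform.InitAndApply(t, opts)

    // Check a Terraform output defined in the configuration
    assert.NotEmpty(t, terraform.Output(t, opts, "vm_name"))
}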
]]>
<![CDATA[ JDBC URL: Short And Secure ]]> https://chronicler.tech/jdbc-url-short-and-secure/ 65e7186a90d633000154ef98 Thu, 07 Mar 2024 08:40:53 -0500 For years, Oracle SQL*Net offered a secured connection to the database and a prevalent unencrypted one. With the Zero Trust Architecture, that is no longer an option; you should get ready for end-to-end encryption in any environment, including your workstation.

I won't rephrase the Oracle Database documentation on transport security, and I'll leave the comprehensive connection descriptor publication out of my scope.

I have always wondered why SQL*Net offers a nice and clean "Easy Connect" format for plain connections but only a TNS-formatted string for secured links.

According to the documentation, an easy connection descriptor is:

jdbc:oracle:thin:@listener.address:1521/service.name

And the simplest secure connection descriptor looks like this:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=
                   (PROTOCOL=TCPS)(PORT=15022)(HOST=listener.address))
                   (CONNECT_DATA=(SERVICE_NAME=service.name)))

That kind of syntax gives a lot of grief to CLI enthusiasts and leads to error-prone connection descriptors. But to my amusement and great relief, SQL*Net and JDBC drivers are, in fact, URI-compliant. So, for testing and development purposes, you can use an easy and secure JDBC URL:

jdbc:oracle:thin:@tcps://listener.address:15022/service.name

The main difference is the protocol prefix tcps://, which instructs the JDBC driver to use secure sockets. This connection URI works well for base-level configurations or development and testing purposes, while TNS descriptors remain the only option for advanced security settings and highly available environments.
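
As a quick illustration (not from the original post), here is a minimal Java sketch that opens a connection over TCPS and asks the database which network protocol it sees. It assumes an Oracle JDBC driver (ojdbc8 or later) on the classpath, a listener certificate trusted by the JVM's default truststore, and placeholder host, port, service, and credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TcpsProbe {
    public static void main(String[] args) throws Exception {
        // Host, port, service name, and credentials are placeholders
        String url = "jdbc:oracle:thin:@tcps://listener.address:15022/service.name";
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT sys_context('USERENV','NETWORK_PROTOCOL') FROM dual")) {
            if (rs.next()) {
                // Should print "tcps" when the secure listener is used
                System.out.println("Network protocol: " + rs.getString(1));
            }
        }
    }
}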

]]>
<![CDATA[ Python: Testing Secure Connections ]]> https://chronicler.tech/python-ssl-scokets/ 65da0e8b90d633000154eec2 Tue, 27 Feb 2024 08:45:29 -0500 Security control validation and enforcement are regular tasks for us. One of those never-ending efforts is ensuring the right combination of protocols, ciphers, and algorithms is available. Security experts have tools and methods we don't, so we must develop substitutes.

Cipher suites have my special love since they have different naming conventions in different frameworks or products. The OpenSSL community even keeps the list of cipher suite name mappings as a part of the product documentation. Through the years, I've created numerous connectivity validators with Java or Shell. But every time, I had to identify ciphers from the report, translate them to the "native" format, and then interpret the results, arguing that the ciphers in my output were the same as the original.

Here is yet another secure socket validation utility, written in Python. Thanks to Python's data-processing capabilities, you can write code that requires much more effort in other languages. The small application in this repository can test secure listeners with a subset of ciphers given in any convention from the OpenSSL documentation page. In essence, the script performs the following steps:

  • Load cipher suite names into a list of tuples.
In [11]: cipher_names[-10:]
Out[11]: 
[('TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256',
  'ECDHE-PSK-CHACHA20-POLY1305'),
 ('TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256',
  'DHE-PSK-CHACHA20-POLY1305'),
 ('TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256',
  'RSA-PSK-CHACHA20-POLY1305'),
 ('TLS_AES_128_GCM_SHA256',
  'TLS_AES_128_GCM_SHA256'),
 ('TLS_AES_256_GCM_SHA384',
  'TLS_AES_256_GCM_SHA384'),
 ('TLS_CHACHA20_POLY1305_SHA256',
  'TLS_CHACHA20_POLY1305_SHA256'),
 ('TLS_AES_128_CCM_SHA256', 'TLS_AES_128_CCM_SHA256'),
 ('TLS_AES_128_CCM_8_SHA256', 'TLS_AES_128_CCM_8_SHA256'),
 ('SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA', 'EDH-RSA-DES-CBC3-SHA'),
 ('SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA', 'EDH-DSS-DES-CBC3-SHA')]

In [12]: 

The last ten cipher name pairs

  • Search through the ciphers to find any matches
#Walk through the list of incoming ciphers
for cp in test_ciphers:
    # Find matching name tuple in a list
    ctpl = [item for item in cp_list if cp in item][0]

Find matching cipher name pairs in both conventions.

  • Test the connection against an internet address using the matching cipher suite name.
Now, we can use any naming convention or mix and match them.

The code intentionally doesn't use advanced modules to make it as portable and compatible as possible.
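
For illustration only (this is not the repository code), the last step can be reduced to a small self-contained sketch using the standard library; the host, port, and cipher name below are placeholders:

import socket
import ssl

def test_cipher(host, port, openssl_name):
    """Attempt a TLS handshake restricted to a single OpenSSL cipher name."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # we are testing ciphers, not trust
    ctx.set_ciphers(openssl_name)        # raises ssl.SSLError for unknown names
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.cipher()      # (cipher name, protocol, secret bits)
    except (ssl.SSLError, OSError):
        return None                      # handshake refused with this cipher

print(test_cipher("example.com", 443, "ECDHE-RSA-AES128-GCM-SHA256"))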

]]>
<![CDATA[ Database: Rolling Password Change ]]> https://chronicler.tech/oracle-db-rolling-password-change/ 65ce6c0690d633000154ede5 Fri, 16 Feb 2024 08:44:11 -0500 Recently, I found a great Oracle Database 21c feature: Gradual Password Rollover. Here is the best part: it was backported to Database 19c.

This is a database trait that only middleware administrators or DevOps engineers can really appreciate. For Oracle Fusion Middleware installations, a database password change is a carefully choreographed effort that can still eat into your error budget. With this feature, Oracle removes most of the risk associated with database password rotation.

Besides the compatible database version, your accounts should have a profile. I hope you have them already to keep other security controls in check.

-- Enable a 48-hour (2-day) password rollover window for the profile
ALTER PROFILE ofmw_accounts 
     LIMIT PASSWORD_ROLLOVER_TIME 2;

-- Change the password of a user that has the ofmw_accounts profile
ALTER USER WLS_OPSS 
      IDENTIFIED BY "my-new-and-long-password";

After the password change, the old and new passwords will co-exist for the next forty-eight hours, which allows your middleware administrators to update database connections without any downtime. The rollover window is set in days, but you can use fractions; for example, a value of 1/2 means you have 12 hours until the old password stops working.

You cannot set a rollover window longer than 60 days, nor can it exceed the password grace time or the password lifetime; the rollover window must be the smallest of the three.
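
To verify the configuration, you can query the data dictionary (assuming you have access to the DBA views; the status text is what recent 19c/21c releases report during the window):

-- Which profiles have rollover enabled, and for how long
SELECT profile, limit
  FROM dba_profiles
 WHERE resource_name = 'PASSWORD_ROLLOVER_TIME';

-- An account in the middle of a rollover reports OPEN & IN ROLLOVER
SELECT username, account_status
  FROM dba_users
 WHERE username = 'WLS_OPSS';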

Finally, to disable the feature, update the account profile and set the limit back to DEFAULT, which is the same as setting it to 0.

-- Disable password rollover 
ALTER PROFILE ofmw_accounts 
   LIMIT PASSWORD_ROLLOVER_TIME DEFAULT;
]]>
<![CDATA[ Git: From SSH to Personal Access Tokens ]]> https://chronicler.tech/git-from-ssh-to-personal-access-tokens/ 65be3da24d2e1c57c43eedf0 Tue, 06 Feb 2024 08:30:20 -0500 I used the SSH protocol for years to access remote repositories, both on-premises and hosted. It's a convenient, reliable, and secure way to sync your local and remote repositories. However, architectural changes and security requirements encourage moving from SSH to HTTPS/TLS protocols, sometimes in a rush.

For command-line riders, it means that SSH keys are no longer in use; you must provide credentials to access the Source Control Management (SCM) system over HTTPS and decide how you will manage them. SCM vendors unanimously introduced personal access tokens (PAT) as an alternative to your username/password. Switching to PATs gives you plenty of benefits, such as:

  • Reduced risk of losing your credentials.
  • You can manage your tokens without interfering with regular password management policies.
  • Tokens have expiration dates and can easily be revoked or rotated.
  • Tokens allow fine-grained access control by limiting access to functions and projects.
  • Tokens can be bound to a function or project rather than to a user.

After this ode to PATs, let's discuss how they impact your repository interactions.

The most secure, and most inconvenient, way is not to store any credentials on your system, but you still need to keep them somewhere safe. There are plenty of third-party solutions, from Windows sticky notes to CyberArk. In practice, every interaction with the remote repository will ask you for credentials.

To soften the annoyance, the Git client offers a cache credential helper.

$ git config credential.helper 'cache --timeout=600'

The command above configures the cache credential helper and sets the expiration time to ten minutes (600 seconds). The first git command will ask for your credentials, and all subsequent interactions will use the cached ones.

On the plus side, you don't store any credentials with Git; on the minus side, you must keep them somewhere else and enter them every time you pull from or push to the remote system.

For convenience and security, use system credential helpers. This approach integrates Git with the OS secret management solutions, directly accessing your stored credentials until they expire or get revoked. Along with the OS-bound solutions, there are third-party secret managers, so you can find the one that works best for you. The most common commands are:

# Windows. Git stores in Credentials Manager/Windows Credentials
$ git config --global credential.helper manager

# macOS. Utilizes the macOS keychain
$ git config --global credential.helper osxkeychain

# Linux. You may want to find the actual location of the library first.
# locate -b git-credential-libsecret
$ git config --global credential.helper \
/usr/lib/git-core/git-credential-libsecret

The last, and controversial, way is to keep your PAT on the system when you need to avoid user intervention or cannot access any other credential manager. Git offers the insteadOf clause to override URLs, which helps in situations where no interactive prompt is possible, such as automated jobs. The sample construct is:

$ git config --global url."https://my-token:glpat-XXXXXXXXXX@gitlab.com/".insteadOf "https://gitlab.com/"

It also helps with submodules or when you refer to other repositories, for example in an Ansible roles requirements.yml.
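
For reference, the command above should produce an entry like this in your global ~/.gitconfig (the token value is, of course, a placeholder):

[url "https://my-token:glpat-XXXXXXXXXX@gitlab.com/"]
    insteadOf = https://gitlab.com/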

 

]]>
<![CDATA[ Eclipse pom.xml error "Downloading external resources is disabled" ]]> https://chronicler.tech/eclipse-pom-xml/ 65a54e534d2e1c57c43eed74 Mon, 15 Jan 2024 10:44:05 -0500 I am using the most recent Eclipse IDE build id 20231201-2043 and created a barebones Maven project. However, out-of-the-box there is an error in the pom.xml file.

Problem:

Here in the pom.xml file you can see an error:

When hovering over the first error icon, the error is:

  • cvc-elt.1.a: Cannot find the declaration of element 'project'.

The second error is:

  • Downloading external resources is disabled.

Solution:

This seems to be a bug in the IDE. Simply change the schema URL in the xsi:schemaLocation attribute of pom.xml as follows:

OLD: https://maven.apache.org/xsd/maven-4.0.0.xsd

NEW: http://maven.apache.org/xsd/maven-4.0.0.xsd
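
After the change, the opening project element of pom.xml should look roughly like this:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">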

]]>
<![CDATA[ HTTP 400 Bad Request when calling OCI REST due to "Unable to parse message body" ]]> https://chronicler.tech/http-400-bad-request-when-calling-oci-rest/ 659366da4d2e1c57c43eed18 Mon, 01 Jan 2024 20:43:44 -0500 I was able to successfully use the instructions in a blog post titled Oracle Cloud Infrastructure (OCI) REST call walkthrough with curl written by the Oracle A-Team to call an OCI REST service to create an Autonomous Database without using oci-curl.

I was able to use the bash script (no longer available on the page, but a sample provided here) that adds necessary headers in the POST request and creates a signing string prior to calling curl. But that is all a topic for a different blog post.

This was the payload I used to create an Autonomous Database:

{
  "compartmentId"        : "ocid1.tenancy.oc1..aaaaaaaamvsnb6fsq6ynaxtpsq"
  "displayName"          : "Live Demo",
  "dbName"               : "AHMEDDB",
  "adminPassword"        : "Kobe_24_24_24",
  "cpuCoreCount"         : 1,
  "dataStorageSizeInTBs" : 1
}

When I ran the script, I encountered this error:

root@dev:/root/ocitemp> ./createdb.sh
===============================================================================
signing string is (request-target): post /20160918/autonomousDatabases
date: Tue, 12 Dec 2023 03:48:28 GMT
host: database.us-ashburn-1.oraclecloud.com
x-content-sha256: WnYlVLJ5xLgmKI0o64G52BVSc2GK69WgZTW07T2TLi4=
content-type: application/json
content-length: 297
Enter pass phrase for /root/ocitemp/oci.pem:
Signed Request is
uQ/13u3lubtp79N9OAS3aojahQ13oExTtrNZckm34wDYkLa...
===============================================================================
+ curl -v -X POST --data-binary @request.json -sS https://database.us-ashburn-1.oraclecloud.com/20160918/autonomousDatabases -H 'date: Tue, 12 Dec 2023 03:48:28 GMT' -H 'x-content-sha256: WnYlVLJ5xLgmKI0o64G52B' -H 'content-type: application/json' -H 'content-length: 297' -H 'Authorization: Signature version="1",keyId="ocid1.tenancy.oc1..aaaaaaaamvsnb6fteatsnynaxtpsq/ocid1.user.oc1..aaaaaaaaqtpmtdoc7664vcc6ywpoasdppgdjq/5d:2b:a5:aa::d2:9f:d8:46:7a",algorithm="rsa-sha256",headers="(request-target) date host x-content-sha256 content-type content-length",signature="uQ/13u3lubtp79NPA5bQiqJregZ5/Be4c6sGbf6sml+25ubkza6Plbw=="'
* About to connect() to database.us-ashburn-1.oraclecloud.com port 443 (#0)
*   Trying 140.91.12.32...
* Connected to database.us-ashburn-1.oraclecloud.com (140.91.12.32) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*       subject: CN=*.us-ashburn-1.oraclecloud.com,O=Oracle Corporation,L=Redwood City,ST=California,C=US
*       start date: Jun 08 00:00:00 2023 GMT
*       expire date: Jun 07 23:59:59 2024 GMT
*       common name: *.us-ashburn-1.oraclecloud.com
*       issuer: CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1,O=DigiCert Inc,C=US
> POST /20160918/autonomousDatabases HTTP/1.1
> User-Agent: curl/7.29.0
> Host: database.us-ashburn-1.oraclecloud.com
> Accept: */*
> date: Tue, 12 Dec 2023 03:48:28 GMT
> x-content-sha256: WnYlVLJ5xLgmKI0o64G52BVS7T2TLi4=
> content-type: application/json
> content-length: 297
> Authorization: Signature version="1",keyId="ocid1.tenancy.oc1..aaaaaaaamvsnb6xljssq6ynaxtpsq/ocid1.user.oc1..aaaaaaaaqtpmtdoc7664vcc34poasdppgdjq/5d:2b:d2:9f:d8:46:7a",algorithm="rsa-sha256",headers="(request-target) date host x-content-sha256 content-type content-length",signature="uQ/13u3lubtp79N9OAS3aojahQ13oExTtrN3URO6sGbf6sml+25ubkza6Plbw=="
>
* upload completely sent off: 297 out of 297 bytes
< HTTP/1.1 400 Bad Request
< Date: Tue, 12 Dec 2023 03:48:31 GMT
< opc-request-id: /62CDD05D684809/E9C168528D409992
< Content-Type: application/json
< Strict-Transport-Security: max-age=31536000; includeSubDomains;
< Content-Length: 79
<
{
  "code" : "InvalidParameter",
  "message" : "Unable to parse message body"
* Connection #0 to host database.us-ashburn-1.oraclecloud.com left intact

Note the error HTTP 400 and the JSON response message from the output above:

< HTTP/1.1 400 Bad Request

"code" : "InvalidParameter",
"message" : "Unable to parse message body"

It took me a while to realize that I was missing a comma in my input JSON payload:

BAD:
  "compartmentId"        : "ocid1.tenancy.oc1..aaaaaaaamvsnb6fsq6ynaxtpsq"

GOOD:
  "compartmentId"        : "ocid1.tenancy.oc1..aaaaaaaamvsnb6fsq6ynaxtpsq",
]]>
<![CDATA[ AAP: Code Compatibility ]]> https://chronicler.tech/aap-code-compatibility/ 658ca5a64d2e1c57c43eebd1 Thu, 28 Dec 2023 09:00:33 -0500 Our good old Red Hat Ansible Tower was recently upgraded to the Ansible Automation Platform. Although the AAP adoption was fast, Ansible compatibility issues derailed half of the O&M templates in our organization. The error message says, "Invalid data passed to 'loop', it requires a list, got this instead: dict_keys([])."

The issue comes from the AAP Execution Engine: the Ansible 2.9 container runs on Python 3, and that breaks a lot of code that loops through data structures. The meta code below is a generic representation of the original code.

- name: "Loop through Managment Servers"
  include_role: 
      name: do_stuff
  vars:
     env: development
     target: "{{ item.name }}"
  loop: "{{ servers.keys() }}"
    
Ansible Task with Loop

The function d.keys() returns a list under Python 2 and a dict_keys view object under Python 3. JDoodle allows you to run the code against different Python engines, as in the screenshot below.

Execution results side-by-side
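
If you prefer a terminal over JDoodle, two lines of Python 3 show the same difference (the dictionary content here is made up):

servers = {"ms01": {"state": "RUNNING"}, "ms02": {"state": "ADMIN"}}

print(servers.keys())        # Python 3: dict_keys(['ms01', 'ms02']) -- not a list
print(list(servers.keys()))  # ['ms01', 'ms02'] -- what the Ansible loop expects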

Luckily, a simple explicit cast to a list fixes the issue.

- name: "Loop through Managment Servers"
  include_role: 
      name: do_stuff
  vars:
     env: development
     target: "{{ item.name }}"
  loop: "{{ servers.keys()|list }}"
Python 3 compatible code
]]>
<![CDATA[ Setting up Selenium/Java to test Multi-Factor Authentication (MFA) ]]> https://chronicler.tech/setting-up-selenium-java-to-support-multi-factor-authentication-mfa/ 65807e764d2e1c57c43eea6f Wed, 20 Dec 2023 13:12:44 -0500 This post describes how to set up Selenium and run a test case to authenticate with multi-factor authentication (MFA).

By following these instructions, you can use Selenium and Java to authenticate to Login.gov and enter a one-time code using your Java code to simulate an authenticator app.

1. Install Java

JDK is required as the Selenium code I will be writing is written in Java.

a. Navigate to https://www.oracle.com/java/technologies/downloads/archive/.

b. Download Java. For compatibility reasons with my IDE, I installed jdk-11.0.21_windows-x64_bin.exe from https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html.

c. After installation, make sure that you are able to invoke Java from the command line (e.g., run "java -version"). If not, you may have to add it to your system path manually.

2. Install Eclipse

Eclipse is the development tool of choice (aka IDE). All Java code will be written in Eclipse.

a. Navigate to https://www.eclipse.org/downloads/.

b. Click on the Download Packages link.

c. Download "Eclipse IDE for Java Developers" and install it.

3. Download Apache Maven

Maven is used to manage package dependencies. By using Maven, you do not need to download Selenium, Selenium webdrivers, or the security JARs. Maven will download, import, and reference all these dependencies for you.

a. Navigate to https://maven.apache.org/download.cgi.

b. Download the binary zip archive (e.g., apache-maven-3.9.6-bin.zip) and extract it to a set folder (e.g., C:\software\apache-maven-3.9.6).

c. Add MVN_HOME as a system variable and append %MVN_HOME%\bin to the user variable Path.

d. Make sure that you are able to invoke Maven from the command line (e.g., run "mvn -v").

4. Create a Maven project in Eclipse

a. In Eclipse, click on File > New > Maven Project.

b. Select "Create a simple project (skip archetype selection)" then click Next.

c. Add a group id (e.g., MFA) and artifact id (e.g., SeleniumTest) then click Finish.

d. Add the following dependencies to the pom.xml file, which should look like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>MFA</groupId>
  <artifactId>SeleniumTest</artifactId>
  <version>0.0.1-SNAPSHOT</version>
	<dependencies>
		<dependency>
			<groupId>org.seleniumhq.selenium</groupId>
			<artifactId>selenium-java</artifactId>
			<version>4.16.1</version>
		</dependency>
		<dependency>
			<groupId>io.github.bonigarcia</groupId>
			<artifactId>webdrivermanager</artifactId>
			<version>5.6.2</version>
		</dependency>
		<dependency>
			<groupId>de.taimos</groupId>
			<artifactId>totp</artifactId>
			<version>1.0</version>
		</dependency>
		<dependency>
			<groupId>commons-codec</groupId>
			<artifactId>commons-codec</artifactId>
			<version>1.16.0</version>
		</dependency>
		<dependency>
			<groupId>com.google.zxing</groupId>
			<artifactId>javase</artifactId>
			<version>3.5.2</version>
		</dependency>
	</dependencies>
</project>

e. Right-click on src/test/java and select New > Class.

f. For Package, enter "mfa".

g. For Name, enter "TestAuthentication".

h. Check public static void main(String[] args), then click Finish.

i. Add this code. Note that a portion of the code in the main() method is commented out (for now).

package mfa;

import org.openqa.selenium.By;
import org.openqa.selenium.chrome.ChromeDriver;
import io.github.bonigarcia.wdm.WebDriverManager;
import java.security.SecureRandom;
import org.apache.commons.codec.binary.Base32;
import org.apache.commons.codec.binary.Hex;
import de.taimos.totp.TOTP;

public class TestAuthentication {

	public static void main(String[] args) {

		// This secretKey is provided by the application as a one-time setup
		// Do not use this key in your authenticator app, but put it here in the Java code instead
		String secretKey = "PYYOO2RHNGSPMYXR3XXQTIACXDDSK4AT";
		String lastCode = null;
		String code = null;

		code = getTOTPCode(secretKey);
		if (!code.equals(lastCode)) {
			System.out.println(code);
		}
		lastCode = code;
		try {
			Thread.sleep(1000);
		} catch (InterruptedException e) {
			System.out.println(e);
		}

/* Keep this commented until the authenticator app is registered on Login.gov

		WebDriverManager.chromedriver().setup();
		ChromeDriver driver = new ChromeDriver();
		driver.get("https://secure.login.gov");
		driver.findElement(By.xpath("//*[@id=\"user_email\"]")).sendKeys("user@someemail.com");
		driver.findElement(By.xpath("//*[contains(@id, 'password-toggle-input-')]")).sendKeys("welcome1");
		driver.findElement(By.xpath("//*[@id=\"new_user\"]/lg-submit-button/button")).click();
		driver.findElement(By.xpath("//*[contains(@id, 'code-')]")).sendKeys(code);
		driver.findElement(By.xpath("//*[@id=\"main-content\"]/div/form/lg-submit-button/button")).click();
		// driver.close();

*/

	}

	// Generate the Google Authenticator 20 bytes secret key encoded as base32
	public static String generateSecretKey() {
		SecureRandom random = new SecureRandom();
		byte[] bytes = new byte[20];
		random.nextBytes(bytes);
		Base32 base32 = new Base32();
		return base32.encodeToString(bytes);
	}

	// Convert base32 encoded secret keys to hex and use the TOTP to turn them into 6-digits codes based on the current time
	public static String getTOTPCode(String secretKey) {
		Base32 base32 = new Base32();
		byte[] bytes = base32.decode(secretKey);
		String hexKey = Hex.encodeHexString(bytes);
		return TOTP.getOTP(hexKey);
	}

}

5. Create an account with Login.gov (one time operation)

a. Navigate to https://login.gov

b. Click on Sign in with LOGIN.GOV.

c. Click on Create an account and create an account.

d. You will be asked to add an authentication method. Feel free to use any approach, but I recommend using Text or voice message, as we will reserve the Authentication application method for the Java code.

e. You will now be logged in to your Login.gov account. On the left-hand side, click Your authentication methods. You will see that a phone number has been set up, but not an authentication app.

f. Click on Add authentication apps.

g. Give it a nickname (it can be anything and will not be used anywhere), e.g., JCode.

h. Copy the code shown in the silver box and paste it into your Java code.

String secretKey = "PYYOO2RHNGSPMYXR3XXQTIACXDDSK4AT";

i. Run the Java code, and copy the output shown. The output is the temporary one-time code.

j. Paste this temporary code in the Login.gov window and click Submit.

k. The new authentication method is now set up, and your Java code is now registered as an authentication app in your Login.gov account.

6. Run the Java code

a. Uncomment the next section in the Java code under the main() method, then run the code.

The Chrome instance will authenticate, generate and populate the one-time code, and log in to the application.

]]>
<![CDATA[ Putty error "PuTTY key format too new" ]]> https://chronicler.tech/puttyformat/ 61f35266b1de7575bd574724 Mon, 18 Dec 2023 12:12:44 -0500 I recently received a "PuTTY key format too new" error when trying to SSH into one of my cloud VMs.

Simply download the latest version of PuTTY from https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html and you should be good to go.

]]>
<![CDATA[ My Git Cheat Sheet ]]> https://chronicler.tech/git/ 62389821bf7df61b39a3fe51 Thu, 21 Sep 2023 12:08:00 -0400 # Clone from remote repo

ssh-keygen -t rsa

# Then copy this SSH key to GitLab

git clone git@scm.revelationtech.com:soa/oracle-soa.git

# Identify yourself as author

git config --global user.name "ahmed"

git config --global user.email ahmed@revelationtech.com

git config --global --list

# Statuses

git status

git log

git remote -v

# Add to local repo

git add apps

git add apps_12_2

git add bpm

git add ci

git add conf

git add docs

git add play

git add utils

git commit -m "First time commit of SOA/OSB folder apps from SVN"

# Add to remote repo

git push

# Delete unneeded SVN directories from local repo and push to remote

rm -r -f apps/.svn

rm -r -f apps_12_2/.svn

rm -r -f bpm/.svn

rm -r -f ci/.svn

rm -r -f conf/.svn

rm -r -f docs/.svn

rm -r -f play/.svn

rm -r -f utils/.svn

git rm -r -f apps/.svn

git rm -r -f apps_12_2/.svn

git rm -r -f bpm/.svn

git rm -r -f ci/.svn

git rm -r -f conf/.svn

git rm -r -f docs/.svn

git rm -r -f play/.svn

git rm -r -f utils/.svn

git commit -m "Removed .svn folders"

git push

]]>
<![CDATA[ Applying Coherence patch 35122398 to JDeveloper 12c ]]> https://chronicler.tech/patchingcoherence/ 647fb15828284a2f0840b9c0 Thu, 31 Aug 2023 12:00:00 -0400 When I was issued a client laptop, an Oracle Coherence vulnerability was reported as part of the local Oracle JDeveloper 12.2.1.4 installation on Windows.

This blog walks through the patching process.

  1. Navigate to Oracle Support and download the Oracle Coherence patch number 35122398 and the latest OPatch patch number 28186730.

2. After unzipping the two patches, run these commands to update to the latest OPatch and then apply the Coherence patch.

c:\>set ORACLE_HOME=c:\Oracle\Middleware\Oracle_Home
c:\>set JAVA_HOME=c:\Progra~1\Java\jdk-20
c:\>cd c:\temp\Coherence_patch_jun2023\p28186730_1394212_Generic\6880880

c:\temp\Coherence_patch_jun2023\p28186730_1394212_Generic\6880880>java -jar C:\temp\Coherence_patch_jun2023\p28186730_1394212_Generic\6880880\opatch_generic.jar -silent oracle_home=c:\oracle\middleware\oracle_home

c:\temp\Coherence_patch_jun2023\p28186730_1394212_Generic\6880880>cd c:\temp\p35122398_122140_Generic\1221417

c:\temp\p35122398_122140_Generic\1221417>%ORACLE_HOME%\OPatch\opatch lsinventory -jdk %JAVA_HOME%

c:\temp\p35122398_122140_Generic\1221417>%JAVA_HOME%\bin\java -jar %ORACLE_HOME%\coherence\lib\coherence.jar

c:\temp\p35122398_122140_Generic\1221417>%ORACLE_HOME%\OPatch\opatch apply 1221417 -jdk %JAVA_HOME%
Java HotSpot(TM) 64-Bit Server VM warning: Ignoring option --illegal-access=deny; support was removed in 17.0
Oracle Interim Patch Installer version 13.9.4.2.12
Copyright (c) 2023, Oracle Corporation. All rights reserved.

Oracle Home       : c:\Oracle\Middleware\Oracle_Home
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 13.9.4.2.12
OUI version       : 13.9.4.0.0
Log file location : c:\Oracle\Middleware\Oracle_Home\cfgtoollogs\opatch\opatch2023-06-06_17-36-05PM_1.log

OPatch detects the Middleware Home as "C:\Oracle\Middleware\Oracle_Home"

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 1221417

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Backing up files...

Applying interim patch '1221417' to OH 'c:\Oracle\Middleware\Oracle_Home'
Patching component oracle.coherence, 12.2.1.4.0...
Patch 1221417 successfully applied.
Log file location: c:\Oracle\Middleware\Oracle_Home\cfgtoollogs\opatch\opatch2023-06-06_17-36-05PM_1.log

OPatch succeeded.

c:\temp\p35122398_122140_Generic\1221417> ]]>
<![CDATA[ Getting "Cannot remove the API" in WSO2 API Manager ]]> https://chronicler.tech/getting-cannot-remove-the-api-in-wso2-api-manager/ 64aabd6d28284a2f0840bbc7 Sun, 09 Jul 2023 10:15:57 -0400 I tried to delete an API in the WSO2 API Manager Publisher at https://hostname:9443/publisher/ but received the error Cannot remove the API as active subscriptions exist (see screenshot below).

To delete this API, navigate to the WSO2 API Manager Developer Portal at https://hostname:9443/devportal/.

Click on "Applications" and you will notice that the application DefaultApplication has 2 active subscriptions (see screenshot below). Applications can be created here to allow you group APIs into a logical grouping. Each application has a consumer key and consumer secret pair.

Click on the application name then click on "Subscriptions" on the left-most navigation pane.

Now the subscriptions appear. Click on "Delete". Now you should be able to delete the API from the Publisher.

A subscription means that there are active users subscribed to the API.


]]>
<![CDATA[ Ghost: Create an Instance Group ]]> https://chronicler.tech/ghost-create-an-instance-group/ 648efce728284a2f0840b9cf Tue, 04 Jul 2023 08:35:21 -0400 Since the beginning of the quest, we have had cloud-based site backups, Google Cloud architecture for a low-maintenance and inexpensive test site, plus a working instance template. Now let's ensure instance availability and security.

Instance Group

In this low-cost design, the primary goal of the instance group is spot instance recovery. If Google Cloud decides that my current instance should be revoked, it will send a termination signal and kill the VM. The instance group will immediately request and configure a new one. The latest template will restore a backup and configure a new swarm.

Since low maintenance and cost savings are priorities, I created a single-zone managed (stateless) instance group with autoscaling. Although we will autoscale only to one instance, it still should be configured with both maximum and minimum replicas set to one.

New Instance Group 

Google Console shows you a single form for the instance group configuration, but it takes two separate gcloud commands.

#Create instance group
gcloud beta compute instance-groups managed create dev-chronicler-ig1 --project=chronicler-dev-XXXX --base-instance-name=dev-chronicler-ig1 --size=0 --description=Test\ site\ instance\ group. --template=dev-chronicler-template --zone=us-central1-a --list-managed-instances-results=PAGELESS --no-force-update-on-repair

# Configure group autoscale
gcloud beta compute instance-groups managed set-autoscaling dev-chronicler-ig1 --project=chronicler-dev-XXXX --zone=us-central1-a --cool-down-period=60 --max-num-replicas=1 --min-num-replicas=1 --mode=on --target-cpu-utilization=0.6
Create and Configure an Instance Group

When you create the new instance group, it will start scaling the group, and the VM instances list will show a new VM marked as "In use by instance-group-name."

Instance Group-controlled VM

At this point, you may want to drop unmanaged VMs and rely on the instance group only. Now, having permanent and memorable access to the setup would be nice, so the next configuration effort is the load balancer.

HTTP Load Balancer  

The first puzzle you should solve is where the load balancer lives. You may think it's under the VPC Networks section, but it is not. You may not even find it without the console search. Start typing "load bal..." and you will find the whole class of products - Network Services. Here, under the Load Balancing section, click the "+ CREATE LOAD BALANCER" link to start the configuration wizard.

Supported Load balancers

Since we create a website with HTTP(-S) traffic only, click on START CONFIGURATION under the Application Load Balancer.

In the next step, define the allocation and access for your new load balancer. Of course, we want to be able to access it from the internet, and there is not much reason to select the classic load balancer for a new project, or to make it global, since we have a single VM in a single region. Walk through the load balancer configuration steps from Google's how-to document. The document gives exact firewall configuration instructions to enable HTTP traffic to the backend services. Take another minute to configure the load balancer health check; it can be used for the instance group as well.
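
As an illustration of that last point, a basic HTTP health check can also be created from the command line; the name and thresholds below are placeholders rather than the values from my project:

gcloud compute health-checks create http dev-chronicler-hc \
  --port=80 --request-path=/ \
  --check-interval=30s --timeout=10s \
  --healthy-threshold=2 --unhealthy-threshold=3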

When you finish the load balancer configuration, your development site will be available at http://<lb-ephemeral-IP-address>/. You can even try it, but please don't send your credentials over plain HTTP. Let's wait for the next topic - DNS and certificate configuration.

Previous articles in the series are:

]]>
<![CDATA[ Resolving OCI Management Agent installation error ]]> https://chronicler.tech/oci-management-agent-installation-failed/ 647cc46428284a2f0840b944 Sun, 04 Jun 2023 13:17:53 -0400 Problem

Getting "Error: Transaction failed" when installing the Oracle Management Agent which is downloaded from the OCI console.

These are the steps I performed:

  1. Navigate to "Observability & Management", then click on "Management Agent"
  2. Click on "Download and Keys"
  3. Download the Agent for LINUX (X86_64) RPM (filename: oracle.mgmt_agent.230427.2233.Linux-x86_64.rpm)
  4. Transfer the RPM to my Linux host
  5. As root, install Java 11:
yum install java

6.  As root, run the command:

yum install -y oracle.mgmt_agent.230427.2233.Linux-x86_64.rpm
OCI console to download the Management Agent installation binaries
Error after installation of Management Agent

Solution

Apparently the OCI Management Agent does not support Java 11, and only supports JDK 8u281+.

Java 8 can be downloaded from https://www.oracle.com/in/java/technologies/javase/javase8u211-later-archive-downloads.html.

Here are the steps to uninstall Java 11, install Java 8, and install the Agent:

# ----------------------------------------
# Uninstall JDK 11
# ----------------------------------------

root@dev:/tmp> yum remove java
This system is receiving updates from OSMS server.
Dependencies resolved.
==================================================================================================================================================================================
 Package                                Architecture                      Version                                       Repository                                           Size
==================================================================================================================================================================================
Removing:
 jdk-11.0.10                            x86_64                            2000:11.0.10-ga                               @ol8_oci_included-x86_64                            292 M

Transaction Summary
==================================================================================================================================================================================
Remove  1 Package

Freed space: 292 M
Is this ok [y/N]: y
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                          1/1
  Running scriptlet: jdk-11.0.10-2000:11.0.10-ga.x86_64                                                                                                                       1/1
  Erasing          : jdk-11.0.10-2000:11.0.10-ga.x86_64                                                                                                                       1/1
  Running scriptlet: jdk-11.0.10-2000:11.0.10-ga.x86_64                                                                                                                       1/1
  Verifying        : jdk-11.0.10-2000:11.0.10-ga.x86_64                                                                                                                       1/1

Removed:
  jdk-11.0.10-2000:11.0.10-ga.x86_64

Complete!

# ----------------------------------------
# Install Java 8
# ----------------------------------------

root@dev:/tmp> rpm -i jdk-8u361-linux-x64.rpm
warning: jdk-8u361-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
root@dev:/tmp> java -version
java version "1.8.0_361"
Java(TM) SE Runtime Environment (build 1.8.0_361-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.361-b09, mixed mode)

# ----------------------------------------
# Install the OCI Management Agent
# ----------------------------------------

root@dev:/tmp> yum install -y oracle.mgmt_agent.230427.2233.Linux-x86_64.rpm
This system is receiving updates from OSMS server.
Last metadata expiration check: 0:53:05 ago on Tue 30 May 2023 05:32:59 PM GMT.
Dependencies resolved.
==================================================================================================================================================================================
 Package                                        Architecture                        Version                                       Repository                                 Size
==================================================================================================================================================================================
Installing:
 oracle.mgmt_agent                              x86_64                              230427.2233-1                                 @commandline                               93 M

Transaction Summary
==================================================================================================================================================================================
Install  1 Package

Total size: 93 M
Installed size: 93 M
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                          1/1
  Running scriptlet: oracle.mgmt_agent-230427.2233-1.x86_64                                                                                                                   1/1
Checking pre-requisites
        Checking if any previous agent service exists
        Checking if OS has systemd or initd
        Checking available disk space for agent install
        Checking if /opt/oracle/mgmt_agent directory exists
        Checking if 'mgmt_agent' user exists
                'mgmt_agent' user already exists, the agent will proceed installation without creating a new one.
        Checking Java version
                JAVA_HOME is not set or not readable to root
                Trying default path /usr/bin/java
                Java version: 1.8.0_361 found at /usr/bin/java
        Checking agent version

  Installing       : oracle.mgmt_agent-230427.2233-1.x86_64                                                                                                                   1/1
  Running scriptlet: oracle.mgmt_agent-230427.2233-1.x86_64                                                                                                                   1/1

Executing install
        Unpacking software zip
        Copying files to destination dir (/opt/oracle/mgmt_agent)
        Initializing software from template
        Checking if JavaScript engine is available to use
        Creating mgmt_agent daemon
        Agent Install Logs: /opt/oracle/mgmt_agent/installer-logs/installer.log.0

        Setup agent using input response file (run as any user with 'sudo' privileges)
        Usage:
                sudo /opt/oracle/mgmt_agent/agent_inst/bin/setup.sh opts=[FULL_PATH_TO_INPUT.RSP]

Agent install successful


  Verifying        : oracle.mgmt_agent-230427.2233-1.x86_64                                                                                                                   1/1

Installed:
  oracle.mgmt_agent-230427.2233-1.x86_64

Complete!

]]>
<![CDATA[ Ghost: Building an Instance ]]> https://chronicler.tech/ghost-building-an-instance/ 646a06f828284a2f0840b524 Tue, 23 May 2023 08:35:35 -0400 I publish journal notes of my cost-effective blog engineering challenge. This time I'll walk you through the cornerstone of my design - compute instance. Bear with me for a few more minutes, and I'll walk you through the chain of decision points to the final solution.

Once again, I start with my mantra: Containers are the most effective way to keep up with the latest software releases. Lucky me: Ghost community publishes Docker images and a few examples of a container-based deployment.  

Since I need at least two containers (MySQL and Ghost), I might have gone with Podman Compose (I use it with WSL2 Linux), but Google has made this choice for me with the standard Container-Optimized OS image. It is a hardened image with a single purpose - safely and effectively running Docker containers. Next stop - building the startup script.

Startup Scripts

In broad strokes, a new instance should perform these steps:

  • Adjust the instance configuration
  • Download the latest backup and configuration files
  • Spin up a new container stack

For starters, the Container-Optimized OS does not include the Google Cloud CLI, and you can't install it because there is no package manager. It does not allow you to execute custom scripts or applications either. Fortunately, it offers the toolbox utility - a containerized app that you can use to access Google Cloud services. Another feature of this instance: the OS takes about 5GB of the storage device; the rest is mounted as stateful storage. What we can take and use: the toolbox container mounts the /var location by default, and my extra space for the site and database is outside the root partition. There is another essential system update - disabling Docker's daemon live restore. So the first part of my startup script looks like this:

echo "0. Prepare file structure"
 mkdir -p /mnt/stateful_partition/ghost 
 ln -s /mnt/stateful_partition/ghost /var/ghost
 mkdir -p /var/ghost/sql-init
 mkdir -p /var/ghost/sql-load

echo "1. Update Docker Daemon Configuration"
 sed -i 's/\("live-restore"\): true/\1: false/g' /etc/docker/daemon.json 
 systemctl restart docker
Setup a Compute Engine Instance.

Now, the system is ready for the configuration files. Next, I need to restore: static content, the database dump file, and the stack configuration. With the toolbox in mind, my restoration commands are:

echo "2.1 Fetch Site Content"
 cd /var/ghost/
 toolbox gsutil cp gs://${google_cloud_bucket}/site-backup/chronicler.content.tgz  /media/root/var/ghost/chronicler.content.tgz
echo "2.2 Fetch Site Content"
 tar zxf chronicler.content.tgz && rm -r chronicler.content.tgz

echo "3.1 Fetch Database Content"
  toolbox gsutil cp gs://${google_cloud_bucket}/site-backup/chronicler.sql.gz /media/root/var/ghost/sql-load/chronicler.sql.gz
echo "3.2 Unpack DB Dump"
  gunzip /var/ghost/sql-load/chronicler.sql.gz 
echo "3.3 Fix Legacy Charset Settings"
  sed -i 's/utf8mb4_0900_ai_ci/utf8mb4_0900_ai_ci/g;s/utf8mb4/utf8mb4/g' /var/ghost/sql-load/chronicler.sql 
echo "4. Get Stack File"
  toolbox gsutil cp gs://${google_cloud_bucket}/stack-scripts/docker-compose.yaml /media/root/var/ghost/docker-compose.yaml
Restore the Site Content

Please note that gsutil runs in the toolbox container, so the /var/ghost folder becomes a /media/root/var/ghost one. An extra step is to adjust my legacy database character set and collation configurations.

Now the whole scene is ready to start up the blog's clone. All I need to do is initialize the swarm and bring up the ghost stack. But before we go through, let's look at the docker stack description. It's very similar to the docker-compose with a few product-specific twists:

version: '3.1'

services:
  db:
    image: docker.io/library/mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      -  /var/ghost/sql-load/chronicler.sql:/docker-entrypoint-initdb.d/chronicler.sql:ro

  ghost:
    image: docker.io/library/ghost:latest
    ports:
      - 80:2368
    environment:
      # see https://ghost.org/docs/config/#configuration-options
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: example
      database__connection__database: ghost_dbatabase
      # this url value is just an example, and is likely wrong for your environment!
      url: http://localhost:80
      logging__info: 'info'
    volumes:
      -  /var/ghost/content:/var/lib/ghost/content:rw
Docker Stack Descriptor

There are a few critical things to consider:

  • Static content for the Ghost container is mounted as /var/lib/ghost/content. Respectively, on our side, the content location is /var/ghost/content.
  • Set your database password. The "example" one is for illustration purposes only, even though it's unavailable outside the stack.
  • Database name 'ghost_dbatabase' is a part of the full database export.
  • The /docker-entrypoint-initdb.d/ mapping is for the site content restoration. If you have some complex initial setup, put .sql, .gz, or .sh files there, and MySQL will run them as part of the database initialization.
  • Ghost blog connection parameters are derived from MySQL container configuration.
  • I used a port 80 mapping since it's only one click in the instance configuration.
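
With the considerations above in mind, the final step of the startup script initializes the swarm and deploys the stack. A minimal sketch (the stack name "ghost" is arbitrary):

echo "5. Start the Ghost stack"
 docker swarm init
 docker stack deploy -c /var/ghost/docker-compose.yaml ghost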

Once all the commands executed without a scratch, I combined them into a single shell script and uploaded it to the same storage bucket. This way, I can use it as the startup-script-url parameter for my instance template.

Create an Instance Group Template

Since day one, I have meant to use spot instances for my development environment. So let's start by saving the startup and shutdown scripts in the same Google Cloud Storage bucket. You can use the startup-script-url and shutdown-script-url attributes with Cloud Storage files. After some cost/performance estimation, I ended up with a regular two-vCPU instance with 2GB of RAM and a standard persistent disk (50GB is more than enough today). For development purposes, I use the e2-small VM with the Container-Optimized OS image for the spot instance.

gcloud compute instance-templates create dev-chronicler-template --project=${your-project} --machine-type=e2-small --network-interface=network=default,\
network-tier=PREMIUM --metadata=startup-script-url=https://storage.cloud.google.com/${your-bucket}/init-scripts/init-vm.sh,shutdown-script-url=https://storage.cloud.google.com/${your-bucket}/init-scripts/stop-vm.sh,google-logging-enabled=true,google-monitoring-enabled=true \
--no-restart-on-failure --maintenance-policy=TERMINATE --provisioning-model=SPOT \
--instance-termination-action=STOP --service-account=${your-id}-compute@developer.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=allow-health-check,http-server,https-server \
--create-disk=auto-delete=yes,boot=yes,device-name=instance-template-1,image=projects/cos-cloud/global/images/cos-101-17162-210-12,mode=rw,size=50,type=pd-standard \
--no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring \
--reservation-affinity=an
Instance Template Configuration 

To fit the strict budget limitations, my instance template:

  • Uses the e2-small instance as the most suitable low-load shape for the task.
  • Uses the standard persistent disk, not the balanced one (the default, and more expensive).
  • Has 50GB of allocated storage instead of the default 10GB.

The next steps are the load balancer, firewall, and instance group configuration.

Previous articles in the series are:

]]>
<![CDATA[ Ghost: Architecting a Test Site ]]> https://chronicler.tech/ghost-automated-test-site/ 645d57d028284a2f0840b255 Tue, 16 May 2023 08:35:19 -0400 My previous topic described preparation steps for my new pet project - a test site in the Google Cloud. This post describes the design stage with my train of thought explanation and the current design state.

Let's start with the requirements reiteration and then review the design steps. My original goal is to bring up the latest site clone for tests and do it for less than $25/month.

Plus a few additional requirements:

  • Minimize site engine and database maintenance as much as possible, but control the system entirely.
  • Make production backups reliable and not as annoying as the Google Drive version.
  • Quickly bring up the dev system from the latest site backup and try the new design or site-wide changes.
  • In perspective, make out of it a production setup, not necessarily on the same cloud provider.

Components selection

Since I  don't want to do patching and configurations, containers are the only reasonable way. Fortunately, Google Cloud offers quite a few ways to run containers. But, unfortunately,  most of those ways are predominantly stateless.

I don't want to set up a Kubernetes cluster for something I may use once or twice a month. Building a new cluster and deployment takes a while (not a minute). Even just keeping it down all the time is expensive for the task.

The next best option would be Cloud Run; it and its older relative, App Engine, allow you to spin up serverless apps in Docker containers. Unfortunately, neither passed the inexpensive state preservation check, primarily due to the static content on the POSIX file system. Plus, it may require a custom version of the standard Ghost container, which practically brings me back to square one - watching Ghost releases and doing manual upgrades.

That being said, it leaves me no option but a good old compute instance with an ephemeral boot device (less expensive than a separate block device, though). Plus, because I don't intend to preserve instance content, it can be a spot VM, which drives the cost of ownership to the ground level.

Since a spot VM can be killed at any time, it should be part of an instance group with the maximum instance count set to one. Plus, I need a load balancer to make my dev instance accessible from the outside.

A few more conveniences:

  • I want to refresh the running instance when the new backup is available.
  • I want to stop VM when the project cost hits 75% of the monthly budget.    

All that led me to the small architecture depicted below.

The current design state. 

Here is a walk-through:

  • The scheduled job on this site uploads the new backup into the Google Cloud Store. The bucket has a lifecycle policy enabled and keeps the current and two latest site copies for two weeks. After that, outdated versions are deleted automatically.
  • A new file in the bucket triggers a Pub/Sub notification for the Cloud Function. It checks the current state of the instance and, if it's up, kills it for the restart and rebuild.
  • When the old instance is killed, the instance group spins up a new one using the standard OS image and some tricky init scripts to initialize a new site clone.
  • The project budget watches project spending and pushes a reached-the-limit alert to the Pub/Sub topic.
  • The budget-driven cloud function stops the instance and keeps it down until the next month.  
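
To make that last step concrete, here is a rough sketch of what such a budget-driven function could look like. It is not the project's actual code: it assumes a 1st-gen Python Cloud Function with a Pub/Sub trigger, the google-cloud-compute client library, and made-up project, zone, and instance-name values:

import base64
import json

from google.cloud import compute_v1

PROJECT = "chronicler-dev-XXXX"         # placeholder project id
ZONE = "us-central1-a"                  # placeholder zone
INSTANCE_PREFIX = "dev-chronicler-ig1"  # instances created by the instance group

def stop_on_budget(event, context):
    """Stop the dev VM when the budget notification crosses 75% of the monthly budget."""
    message = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    cost = message.get("costAmount", 0)
    budget = message.get("budgetAmount", 0)
    if budget == 0 or cost < 0.75 * budget:
        return  # still under the threshold, nothing to do

    client = compute_v1.InstancesClient()
    for instance in client.list(project=PROJECT, zone=ZONE):
        if instance.name.startswith(INSTANCE_PREFIX) and instance.status == "RUNNING":
            client.stop(project=PROJECT, zone=ZONE, instance=instance.name)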

The design is current but not final and will be adjusted on the go.  I'll post another update with the results and all the code when this design goes live.

   

]]>
<![CDATA[ Ghost: Rebuild Backup Procedure ]]> https://chronicler.tech/ghost-revamp-backups/ 645827c128284a2f0840af2d Tue, 09 May 2023 08:45:13 -0400 Ahmed and I have run this blog on standalone Ghost for almost four years. Around the same time, I created a small cron task that backs up the site content, dumps the database, and uploads archives to Google Drive. But now it's time for a change.

Google Drive was a good option in 2019, but product upgrades, site growth, and the fast decay of the meaning of "unlimited" made the current approach obsolete. Plus, I have another purpose for these backup files in mind. So here is the list of backup requirements that fit my bill:

  • Use free or very inexpensive cloud storage;
  • Leverage versions and lifecycle policies to keep space usage in check;
  • Do not use passwords in the command line;
  • Suitable for quick cloning and testing.

Although almost the same could be done with any big cloud player, I decided to see how it works with Google Cloud and how it fits my final goal requirements. But before changing my backup scripts, some preparations are due.

Cloud Storage Preparation

  1. Create or activate your Google Cloud account. If it's your first account, don't forget to grab your $300 voucher.
  2. Create a new project to group your cloud artifacts.  
  3. Create a new Cloud Storage bucket.
  4. Enable Object Versioning
  5. Create Lifecycle rules regarding how many object versions you want to keep and how long.
Bucket object protection.

Since we have a few monthly updates, I will keep two previous versions of each file for two weeks. The bi-weekly backup will give the current and two earlier copies of the site at any given moment.

Environment Preparation

Install and configure Google Command Line Interface.

  1. Install Google Cloud CLI, following the documentation steps. Instructions are straightforward and shouldn't give you any trouble.

  2. During the CLI configuration, be ready to open a URL in the browser and log in with your cloud account credentials. Then, paste the confirmation key back to the terminal upon successful authentication and permission consent.

  3. Make sure that your configuration works and you see your target bucket.

     gcloud config configurations describe default
     gcloud storage ls
    
  4. Enable parallel file uploads for the best performance and suppress warnings.

     gcloud config set storage/parallel_composite_upload_enabled True
    

Configure your MySQL database and tools.

  1. MySQL v8 requires additional global privileges for full database exports.

     GRANT PROCESS ON  *.* TO ghostuser@localhost;
    
  2. The utility mysql_config_editor allows you to store encrypted passwords for the database tools. Enter the database user password at the prompt.

     mysql_config_editor set --login-path=mysqldump --host=localhost --user=ghostuser --password
    
  3. Test the full database export; you should not see any prompts or warnings.

     mysqldump -u ghostuser --databases ghostdb > /dev/null
    
The key --databases may seem excessive since we export only one database at a time. But it makes an important difference: the dump will include a CREATE DATABASE statement, so the restore creates the database if it does not exist. 
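
At restore time that pays off: because the dump already contains the CREATE DATABASE statement, a minimal restore sketch (assuming the login path created above and the archive name used by the script below) is a single pipe.

# The login path supplies host, user, and password; no prompts, no passwords on the command line
gunzip -c your-site-name.sql.gz | mysql --login-path=mysqldump

Restoring the dump with stored credentials.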

Backup Script Assembly

Essentially, we have everything we need to compile a shell script. I stripped all the bells and whistles from the original, but it will do.

#!/bin/sh

#Backup Site
tmp_f="/tmp/ghost.bkp"
pref="your-site-name"
gs_bucket="gs://put-your-name-here/"
ghost_home=/var/opt/ghost/

# Create temp folder
mkdir -p $tmp_f

# Export Ghost Database 
mysqldump --user your_database_user --databases your_ghost_db |gzip >"${tmp_f}/${pref}.sql.gz"

# Archive non text content
cd $ghost_home
tar zcf "${tmp_f}/${pref}.content.tgz" content

# Upload to the GC
gcloud -q storage cp $tmp_f/* $gs_bucket
rc=$?

if [ $rc = "0" ]; then
 printf " Done.\n"
 rm $tmp_f/*
fi

# Remove temp folder
rmdir $tmp_f

echo -e "==========================================================================="
echo -e "         Backup completed"
echo -e "==========================================================================="

Save the backup script to your ~/bin/  folder and adjust execution permissions.

Now, use crontab to schedule your backups and keep an eye on the cloud storage content. By the way, I always struggle with the cron schedule syntax, but https://crontab.guru is very helpful.
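
For reference, a bi-weekly schedule in crontab could look like the line below; the script name and log location are placeholders, not the exact ones I use.

# m  h  dom   mon dow  command
30   3  1,15  *   *    $HOME/bin/ghost-backup.sh >> $HOME/ghost-backup.log 2>&1

Sample bi-weekly crontab entry.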

]]>
<![CDATA[ Google Cloud Certified: Done! ]]> https://chronicler.tech/google-cloud-certified-done/ 6446d4d328284a2f0840ae0d Tue, 25 Apr 2023 08:40:45 -0400 This post is a straight and clear brag about myself. Last week, I passed one of the most challenging exams I've ever taken: Google Cloud Certified Professional Cloud Architect!

The hardest one, I failed miserably, and it was all technical. On second thought, all the other certifications I've obtained may be a little too technical. What I like about the latest one is that Google tries to make the exam close to real-life challenges: make technical decisions, narrow down potential root causes, and carefully decide how you will present your design to stakeholders and what arguments you should use. It was a long and somewhat rocky road, but I did it, and only time will tell if it is worth something.

]]>
<![CDATA[ Oracle Database: Zoning Conundrum ]]> https://chronicler.tech/oracle-database-zoning/ 642ec0a8fe5f993af7ac7530 Tue, 18 Apr 2023 08:35:58 -0400 The Oracle Database has a long and complicated relationship with time-aware data. It has gotten even more interesting in the internet era of global data processing. As usual, it's a story about how a single line of code at design time could save many work hours in maintenance.

Traditionally, if you do not set the database instance timezone, it will make its best effort to retrieve this information from the operating system. Naturally, if your operating system has no timezone configured, your database wouldn't have a clue and will presume that it is UTC (+00:00), or it could be configured this way on purpose - as a database user, you may only guess.

The second source of the date and time details is your workstation. Again, the classic SQL*Net client will pick up everything from the OS and set this information in your session.  

SQLcl and SQL Worksheet results

As you may see in my screenshot, SQLcl has the correct session setup, while SQL Worksheet relies on some nameless container. A good example of where this matters is scheduled jobs.

The real fun starts when you have no client at all - scheduled jobs running in the background of your database instance and manipulating the data on your behalf. There is a complex and somewhat cumbersome hierarchy that DBMS_SCHEDULER uses to identify the job timezone. According to the documentation, the order of source evaluation is:

  1. The job's start_date attribute, when you define it as a TIMESTAMP WITH TIME ZONE.
  2. If you don't specify the timezone for the start_date, the scheduler will try to use the session timezone information.
  3. If there is no start_date defined and the session has no timezone information, the scheduler will use the PDB's default scheduler timezone.  
  4. The next one is the CDB's default scheduler timezone if the PDB's default is not defined.
Time zone information sources

Even with all that chain of sources, a particular job run may use something you would not expect, especially when Daylight Saving Time (aka DST) comes into play. Oracle Support has released detailed explanations and instructions on the matter. Of course, I'm not going to reproduce the copyrighted document, but I want to conclude with two recommendations you should follow:

  • If possible, always prefer a region over a specific location and a location over a numeric format. For example, I choose the US/Eastern time zone region over America/New_York, and the latter over -05:00.
  • Explicitly set the session timezone in your job code.
CREATE OR REPLACE PROCEDURE
             my_tz_aware_job(
                       stop_time in number,
                       job_tz in VARCHAR2:='US/Eastern') AS
BEGIN
     -- Set the job session time zone
     EXECUTE IMMEDIATE 
        'alter session set time_zone='''||job_tz||'''';
     -- Do the useful job
     null;
END;
Explicitly set session time zone. 

The code above will ensure that your job runs in your time and not in London.

]]>
<![CDATA[ Security Scans: Hide and Seek ]]> https://chronicler.tech/security-scans-hide-and-seek/ 641b5c06fe5f993af7ac744a Tue, 28 Mar 2023 08:45:08 -0400 Oracle recently made a gift to all middleware engineers who operate in "security-first" environments. Government contractors should love it as much as I do, especially if you are adopting or already have continuous ATO.

Everyone who works with good old on-prem technologies has used the OPatch utility. Database or Middleware, you run it one way or another every time you need to apply fixes or improve the overall security posture and get your servers off the security scanner list. One of the strong sides of the tool is the ability to make a smart rollback and restore the system to the previous state if something goes wrong. To achieve this, OPatch keeps records in a hidden folder under $ORACLE_HOME/.patch_storage.

We may think that the patch was applied and the product installation is safe, but all those security scanners think differently. They detect the folder content and mark the system as vulnerable anyway. Until recently, you had only two options: manually archive the impacted patch folders, or use OPatch to clean up the patch cache. Both approaches mean that you either can't roll back your system to the previous state (cleanup) or need to unarchive the patch cache before operating on the installation.

With the recent OPatch versions, you have a third, convenient, and safe way to keep your patch cache operable and get off the radars. The name of the game is obfuscation. There is a massive functionality addition to the opatch util command. The new functions allow you to move the patch cache to another location, clean it up, back it up and restore it, and more. Unfortunately, I can't tell exactly when those new functions were introduced, but they emerged somewhere between 13.9.4.2.5 and 13.9.4.2.11.

The difference 

My personal favorite is the command:

$ $ORACLE_HOME/OPatch/opatch util Obfuscate

It makes the patch storage content unrecognizable to one's eye, and all previously vulnerable libraries and files are no longer detected as vulnerabilities.

]]>
<![CDATA[ OCI: Secure Load Balancer ]]> https://chronicler.tech/oci-load-balancer/ 640c9868fe5f993af7ac731c Tue, 14 Mar 2023 08:40:14 -0400 It started as a routine wildcard certificate renewal and update on Oracle Cloud Infrastructure. It usually takes about 20 minutes, but not today, when certbot offered me a private key update.

I spent most of those twenty minutes remembering how to update wildcard certificates and why the load balancer stated that no certificates were configured. Today, it started with a private key update offer, presumably a smart move in the pre-quantum computing era.

Replace the RSA key with ECDSA.

I went through the steps and got my new certificate and key pair. The next step is to create a new load balancer certificate (I don't think regular certificates are allowed on always-free accounts) and instruct the secure listener to use it.

The result was disappointing, and the site threw the error below for no apparent reason.

My pet site with the new certificate 

OpenSSL had no problem with the certificate and private key but threw a "handshake failure" on the load balancer port. The answer was in the listener configuration.
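
If you want to reproduce the check yourself, the handshake test is a one-liner; the hostname below is a placeholder.

# Shows the negotiated protocol and cipher, or the handshake failure, on the listener port
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null

Testing the TLS handshake against the load balancer.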

Default Cipher Suite Set 

Open the listener for editing (three dots on the right), and expand the Advanced Options section. Select the predefined oci-modern-ssl-cipher-suite-v1 or create a custom set of ciphers that meets your security policies.

The predefined set of EC and AES ciphers 

Save the changes and give it a few seconds to propagate. Now the site is available, and the browser has no issues with the protocols and ciphers.

The updated certificate with an EC key.

To summarize: if you make significant certificate changes, ensure that your infrastructure is ready.  

]]>
<![CDATA[ Database: It's Hard to Explain ]]> https://chronicler.tech/database-its-hard-to-explain/ 63fa13e5fe5f993af7ac716f Tue, 28 Feb 2023 08:35:14 -0500 When you have been in the business long enough, you learn the universal rule of IT: "Computers don't make mistakes, people do." It is especially true for database performance tuning.

Yes, the newest database optimizers are sophisticated (some of them even have built-in AI already), and they have numerous options to adapt and run your queries in the best possible way. To help computers win this battle, vendors have chosen the "dumb-down" path: stash away the dangerous controls and show nice pictures to the "two-day-dba" folks and their managers. Something like the one below.

SQL Monitoring Dashboard.

That is one of the latest (and probably the last) OEM 13c consoles. On five pages, it gives you fancy charts and good-looking execution plans, but you can hardly find why this particular plan has been chosen. If you really want to know, go through the Oracle Database performance tuning guide, watch videos, or read the blog of SQL Maria. Actually, that one leads to a great post about Oracle's built-in DBMS_XPLAN package.

Since all those nice charts and graphs are built from the same data, you can get all those details and more right in the shell terminal. The query below does all that.

SELECT * FROM TABLE(
     DBMS_XPLAN.DISPLAY_CURSOR(
              sql_id=>'SOMESQL0ID',
              cursor_child_no=>0,
              format=>'ALLSTATS LAST')
          );
Query Execution Plan and Statistics 

A few parameters to work with:

Parameter Explanation
sql_id The hashed tag for the SQL text you want to examine. You can grab it from the SQL Monitoring console or find it using the V$SQL system view (see the sketch after this table). Mandatory parameter.
cursor_child_no Sometimes the optimizer overloads cursors with adjusted values, so you may find the same SQL_ID with different child numbers. The default value is 0, and most of the time you won't need it. But in some cases, the original plan is not available, and you should provide the correct SQL child number to access the execution plan.
format Prescribes the output format of the explanation. Almost every time I run this query, I put 'ALLSTATS LAST', which fetches all available execution statistics and prints them after the execution plan tree.
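
If you don't have the SQL Monitoring console handy, the sql_id and child number can be fished out of V$SQL right from the same terminal. A minimal sketch, assuming you know a fragment of the statement text:

sqlplus -s "/ as sysdba" <<'EOF'
set linesize 200 pagesize 100
-- Find candidate cursors by a fragment of the SQL text
select sql_id, child_number, plan_hash_value, executions
from   v$sql
where  sql_text like '%your query fragment%';
EOF

Looking up the sql_id in V$SQL.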

I'll skip the SQL query and plan specifics and show you only the essentials:

The highlight explains why the query is executed this way.

A single line in the note says that somewhere in the past, the execution plan for this query was pinned by an SPM baseline.

In that particular case, it's still a good execution plan, but it may become a problem in the future after yet another database upgrade.

]]>
<![CDATA[ A Shell Tricks: Session Color Coding ]]> https://chronicler.tech/shell-tricks-c/ 63f10311fe5f993af7ac6ea9 Tue, 21 Feb 2023 08:45:10 -0500 Like most of us, human beings, we learn from our own mistakes, and what I learned recently is that you'd better watch the caffeine level in your blood and which terminal window you use to run your commands, especially the scalable ones. So, this small how-to is a consequence of an SSH window mess and a brief production outage (yes, automation is a double-edged sword, and you can recover a system as fast as you can kill it).

The best way to understand which server you are dealing with is a color-coded command-line prompt. Different shell interpreters and terminals offer different commands and capabilities, but the modern server-side world is extremely Linux-centric, and BASH dominates the market.

It's actually quite simple. BASH uses the environment variable PS1 and prints out its content at the beginning of each new command line. You can use regular characters, special characters, and color indicators. Plus, there are special symbols that BASH substitutes with user session parameters. My requirements are quite simple: I want to see the current user, which Ansible controller I'm dealing with, and my full path location. Ideally, it should not take much of my command-line space. Let's take a look at the first version of that:

export PS1="\u@DEV:\w\n\$ "
\u Prompt starts with the current user name. BASH substitutes this entry with your active login name.
@DEV: Static string. In fact, the host name (\h or \H) would bring more confusion, since all of them are named according to hostname guidelines, and my three environments differ only by a few numbers.
\w Full path to the current folder. Your home folder is abbreviated to ~. You can also use \W, but in most cases just the folder name is not enough, is it?
'\n$ ' Since the path to the current location could be a mouthful, I prefer to start from a new line \n and mark my input with the regular $ sign. Note the space after the dollar sign: your input won't "stick" to the prompt, and you can read it faster.

That information is not what I exactly remember by heart, and there are many more control characters you can use. To save you from googling a good reference, I put a few links at the bottom of this page. Now, let's add some colors. Since we are dealing with text-based terminals, there are control sequences that instruct BASH which foreground and background colors should be used.

The simple color instruction looks like: \e[STL;FG;BGm
where:

\e[ Extended instruction start
STL Font decoration, i.e., bold, blinking, and so on.
FG Foreground color
BG Background color
m Closing statement

So, for my development environment, green text in the prompt would be not only appropriate but also healthier for your eyes.
Print all in green on the default background: \e[32m
Reset text to the default: \e[m

Let's combine all components and add a small, color-enabled welcome|warning plaque.

export PS1="\[\e[32m\]\u\[\e[m\]@\[\e[32m\]DEV\[\e[m\]: \[\e[36m\]\w\[\e[m\]\n\[\e[37m\]\\$\[\e[m\] "
echo -e "\e[32m###########################################################"
echo -e "##                                                       ##"
echo -e "##   This is a Development Instance                      ##"
echo -e "##   Please watch your steps.                            ##"
echo -e "##                                                       ##"
echo -e "###########################################################\e[m"
PS1 Prompt and welcome plaque

The resulting PS1 prompt contains a lot of escapes. The reason is that the color sequences are non-printing characters, so each one is surrounded by escaped square brackets \[ \] to keep them out of the prompt length calculation. As you may see, the brackets are not required for the regular echo command, but you may want to use the -e switch to process those instructions.

I have added mine to ~/.bash_profile. I found out, the hard way, that ~/.bashrc could not be the right place, or you should test the BASH execution mode and run those commands only if it's interactive. The screenshot of the code above shows the final result.

The development server is green, but be careful anyway. 
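
If you decide to keep the prompt setup in ~/.bashrc anyway, a minimal interactive-mode guard (a sketch, not my exact profile) avoids the clash with non-interactive sessions.

# ~/.bashrc: set the colored prompt only for interactive shells
case $- in
  *i*) export PS1="\[\e[32m\]\u\[\e[m\]@\[\e[32m\]DEV\[\e[m\]: \[\e[36m\]\w\[\e[m\]\n\$ " ;;
  *)   ;;   # non-interactive (scp, ansible, cron) - leave the prompt alone
esac

Interactive-only prompt guard for ~/.bashrc.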

Now you can create variations for all your environments. I use stoplight patterns: green for DEV/TEST, yellow/orange for STAGE/QA, and red for the production targets.

 Wrapping up the matter:

  • Use terminal prompts as an additional indication of the current environment. It may save your production environments from downtime or even destruction.
  • Add the prompt instructions to .bash_profile to avoid a clash with non-interactive sessions.
  • Use online helpers to create the first build and to learn advanced color codes and additional control characters. See the helpful sites below.
  • If you are looking for functionality beyond formatted prompts, take a look at the modern approach: the PROMPT_COMMAND variable.

BASH Prompt Generators

https://ezprompt.net/ BASH prompt generator. An interactive site that helps you with the PS1 prompt and basic color instructions.
https://robotmoon.com/bash-prompt-generator/ Another BASH prompt generator, with an accent on prompt colors.
]]>
<![CDATA[ Maintaining Oracle HTTP instances ]]> https://chronicler.tech/how-to-maintain-oracle-http-instances/ 63e44385fe5f993af7ac6b22 Tue, 14 Feb 2023 09:54:59 -0500 The Oracle HTTP Server (OHS) is essential to any large Oracle Fusion Middleware environment. And despite the most common description for that product being "Apache HTTP server with custom modules," it's not as simple as it sounds. Let's walk through it and take a look at how it's made.

Oracle HTTP Server 12c integration with Oracle Fusion Middleware is so deep that you need to bring in a big part of the infrastructure to make a standalone installation. So, if you want to run a standalone OHS server, start with the classic domain configuration procedure. The new domain is not application-grade, and its whole purpose is to accommodate the standard Oracle Fusion Middleware Infrastructure components:

  • Node Manager
  • WLST Processor
  • Components
  • Oracle PKI Tool, aka orapki

The standard NodeManager process controls the domain configuration and deployed components. So, if you need to run OHS lifecycle commands, keep in mind that the corresponding NodeManager must be up and running.

I don't want to walk through the OHS instance creation or removal steps. The vendor documents them well, plus there are hundreds if not thousands of step-by-step instructions at your fingertips. Let's take a closer look at the domain's component, in our case a pretty standard set of Apache HTTPD configuration files.

The instance folder and content are buried deep in the domain's entrails. I will use environment variables to refer to file locations. Let's assume that:

ORACLE_HOME=/opt/oracle/Middleware
DOMAIN_HOME=$ORACLE_HOME/user_projects/domains/ohsdomain
OHS1_INST=$DOMAIN_HOME/config/fmwconfig/components/ohs1
OHS2_INST=$DOMAIN_HOME/config/fmwconfig/components/ohs2
Sample environment variables

Some people prefer pictures over text, so the diagram below displays the relations between the various components and locations. It's a bit complicated and depicts:

  • Two independent installations, which we refer to as the $ORACLE_HOME;
    There are a handful of reasons why you need multiple OHS binaries on the same VM, and the most obvious are: complex migrations and some compatibility requirements.
  • Each $ORACLE_HOME serves one or more OHS domains, and it's a common practice to assign the domain path to the $DOMAIN_HOME variable. As with any WebLogic domain, it has a NodeManager, configuration folders, scripts, and a directory structure to accommodate one or more OHS instances.
  • Each $DOMAIN_HOME supports one or more OHS instances or, in domain terms, components. Finally, we have reached the level where it is the "same as Apache HTTPD." You find pretty much the same configuration files, the well-known folder structure, and the directives of Apache HTTPD 2.4.x. Of course, Oracle's extras are here: webgate (if configured), mod_wl, and Oracle's proprietary security module.
Overcomplicated OHS Installation
One significant difference between Apache HTTPD and Oracle OHS is that you can't create a name-based secured virtual host. You can still have port- or address-based virtual hosts managed by the same instance.

You may already wonder why each component is duplicated in the diagram above. That's because Oracle has engineered it this way! At any point in time, you have at least two copies of your web server component:

  • $DOMAIN_HOME/config/fmwconfig/components/OHS/ohs1 - is your "master copy" of the component. If you want to update the site certificate, change parameters, or refresh static content, this folder is your go-to location.
  • $DOMAIN_HOME/config/fmwconfig/components/OHS/instance/ohs1 - is a working replica of your master copy. The running HTTPD process uses the instance folder for its configuration and for static content serving; NodeManager monitors component changes and replicates them to the instance.
You can make changes directly on the instance, but NodeManager will never propagate them back to the master copy. Given the component-to-instance copy direction, there is no guarantee that a direct instance update would survive a reconfiguration event. 

The commands that make NodeManager check and update components are:

$DOMAIN_HOME/bin/stopComponent.sh  - The script accepts a component name as a parameter and shuts down the HTTPD processes. It will ask for the NodeManager password if the credentials weren't stored in the user's home folder.

$DOMAIN_HOME/bin/startComponent.sh  - The script accepts a component name as a parameter and tries to start the HTTPD processes for the given component. It will ask for the NodeManager password if the credentials weren't stored in the user's home folder. Additionally, you can instruct this script to store the NodeManager credentials in the user's home folder.

$DOMAIN_HOME/bin/restartComponent.sh  - This script is not a stop/start command combination: HTTPD makes its best effort to serve all existing sessions and relocate them to the updated process.
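
For example, assuming a component named ohs1 and the environment variables from above, a typical maintenance session looks like this; the storeUserConfig argument saves the NodeManager credentials so that later runs don't prompt for them.

$DOMAIN_HOME/bin/stopComponent.sh ohs1
$DOMAIN_HOME/bin/startComponent.sh ohs1 storeUserConfig
$DOMAIN_HOME/bin/restartComponent.sh ohs1

Component lifecycle commands for the ohs1 instance.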

To summarize:

  • The standalone Oracle HTTP Server has simplified infrastructure to manage HTTPD instances in the same way as the other non-Java components.
  • Each OHS component is an independent HTTPD configuration, used by NodeManager to maintain the working replica. Never update the component instance, only the component itself; otherwise, your changes could be lost.
  • OHS offers start, stop, and restart commands for OHS components, and restart is not the same as the stop + start combination.
]]>
<![CDATA[ A Shell Tricks: Forwarding X session ]]> https://chronicler.tech/a-shell-tricks/ 63ac5c645a6b8f481d4cb9d6 Tue, 03 Jan 2023 08:45:24 -0500 I use MobaXterm as my primary terminal client. Besides some excellent features, it has a selling point - the built-in X server. Yet you may still be in a position where you can't just forward your X windows to your local machine and have to make some extra steps.

Some environments are very restrictive on how you access privileged OS accounts, so you cannot use the application owner's account to access the system but only sudo into it. There are plenty of how-tos regarding forwarding X sessions for sudoers. In general, before switching to the user, you should:  

  • Collect session variable  DISPLAY value
  • Find out your X session authentication  details
  • Switch to the new user with sudo su
  • Replicate your original X session for the new user environment

To automate and simplify this process, I created a small Bash script on all VMs where I may need an X session and placed it in the ~/bin folder.

cat > ~/bin/x-appuser <<'EOF'
#!/bin/sh
# The quoted EOF delimiter keeps $(xauth list) and ${DISPLAY} unexpanded here,
# so the script reads the values of the current session every time it runs.
echo -e "Graphical session commands:"
echo -e "===================================="
echo -e "xauth add $(xauth list |tail -1)"
echo -e "export DISPLAY=${DISPLAY}"
sudo su - appowner
EOF

chmod u+x ~/bin/x-appuser
X session configuration commands

Now, if you need only a terminal session, switch to the appowner account as usual, and use this script when you need to run some GUI-based applications.

# Regular user switch
[yourname]$ sudo su - appuser
[appuser]$ exit
# Enable X session forwarding
[yourname]$ x-appuser
Graphical session commands:
======================================
xauth add my-vm.domain.name/unix:10 MIT-MAGIC-COOKIE-1 c543177654cc33ee66fffg
export DISPLAY=localhost:3.0
[appuser]$ 
Regular and session with X forwarding

Now, any time you want to run xclock, copy and execute those two lines from the command output: the xauth cookie and the DISPLAY export commands.

]]>
<![CDATA[ Happy New Year! ]]> https://chronicler.tech/happy-new-year/ 63ac99475a6b8f481d4cbb18 Sun, 01 Jan 2023 00:00:11 -0500 I want to say thank you for reading our blog. I wish you and your families all the best in the new 2023!!

How cool would it be to roll back a few crazy years and do them over, as simply as this:


run
{
set UNTIL TIME "to_date('08/01/2019','mm/dd/yyyy')";
restore world;
recover world;
alter world open resetlogs;
}

 

But it's impossible; we haven't hacked our space/time continuum yet.  

So all I can wish to all of you is:

  • Be safe
  • Be strong
  • Be alive

]]>
<![CDATA[ Using the SQL*Plus Instant Client ]]> https://chronicler.tech/using-the-sql-plus-instant-client/ 5eb5672f0f5abe37b745a776 Tue, 20 Dec 2022 22:32:57 -0500 Now that you have the Oracle SQL*Plus Instant Client installed, here's the quickest way to start using it. (Installation instructions on this blog post.)

# Set the environment variables to your local installation

export SQLPLUS_HOME=/u01/sqlplus/instantclient_21_5
export TNS_ADMIN=${SQLPLUS_HOME}
export LD_LIBRARY_PATH=${SQLPLUS_HOME}


# Call SQL*Plus

${SQLPLUS_HOME}/sqlplus dbusername@dbhost:1521/dbservicename


# Call SQL*Plus with password on prompt

${SQLPLUS_HOME}/sqlplus dbusername/welcome1@//dbhost:1521/dbservicename
]]>
<![CDATA[ Oracle OCI: It has queues! ]]> https://chronicler.tech/oci-finally-it-has-queues/ 639cc8a55a6b8f481d4cb879 Tue, 20 Dec 2022 08:45:47 -0500 Oracle OCI and Amazon AWS are similar and different at the same time. Yes, at the very basic level, they offer you the same service: cloud infrastructure to run your workload. But AWS has always dominated with its broad set of tools, features, and capabilities if you compare them by the dozens.

However, arriving late to the battle is not always a bad thing. Yes, your competitors dominate the market; they may say that you have no experience, and your cloud customer base could be wider. But the smart ones may take advantage of the accumulated knowledge and jump straight to the modern state, bypassing rocky roads and carrying no technology debt. That is how I see the missing AWS features in OCI. Some are unnecessary due to differences in the core architecture and better compartmentalization. For some of them, Oracle uses industry-acclaimed solutions (i.e., OCI Terraform Stacks vs. AWS CloudFormation) with better results and positive feedback from the cloud community.

Yet one significant feature was missing from the OCI components: queues. Of course, you can use the Notification Service (ONS) to decouple your components, as people use AWS Simple Notification Service instead of the Simple Queue Service. The alternative was to build and maintain a custom solution: Database AQ, Kafka, or a third-party queue management product.

But now Oracle offers ready-to-use queues. The concept looks promising, yet, as of today, it is not entirely integrated with the other components. Still, it provides all the essential features:

  • Queues and DLQ
  • Security
  • Message lifecycle
  • Subscribers
  • Consumers

It comes with the RESTful API and ready-to-go SDKs for Java and Python.

I'm going to fiddle with this long-awaited and exciting novelty and post more on queue integration with the other OCI components, especially with functions and another new feature: container instances.

]]>
<![CDATA[ Cheat sheet for XSL transformation ]]> https://chronicler.tech/cheat-sheet-for-xsl-transformation/ 638228fa5a6b8f481d4cb23f Wed, 14 Dec 2022 10:28:50 -0500 I'm really publishing this blog post for myself as a future reference. :)

This .xsl file (aka XSL transformation file) contains perhaps 90% of everything I generally use in my Oracle SOA development projects.

Here is the file in its entirety (I'll break it down further below):

<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet version="1.0" xmlns:ns0="http://revelationtech.com/JobCodes"
                xmlns:socket="http://www.oracle.com/XSL/Transform/java/oracle.tip.adapter.socket.ProtocolTranslator"
                xmlns:oracle-xsl-mapper="http://www.oracle.com/xsl/mapper/schemas"
                xmlns:dvm="http://www.oracle.com/XSL/Transform/java/oracle.tip.dvm.LookupValue"
                xmlns:mhdr="http://www.oracle.com/XSL/Transform/java/oracle.tip.mediator.service.common.functions.MediatorExtnFunction"
                xmlns:oraxsl="http://www.oracle.com/XSL/Transform/java"
                xmlns:oraext="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:xp20="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.Xpath20"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xref="http://www.oracle.com/XSL/Transform/java/oracle.tip.xref.xpath.XRefXPathFunctions"
                exclude-result-prefixes="oracle-xsl-mapper xsi xsd xsl ns0 socket dvm mhdr oraxsl oraext xp20 xref"
                xmlns:plt="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
                xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/file/Application/Project/FileRead"
                xmlns:pc="http://xmlns.oracle.com/pcbpel/"
                xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                xmlns:strClass="http://www.oracle.com/XSL/Transform/java/java.lang.String"
                xmlns:jca="http://xmlns.oracle.com/pcbpel/wsdl/jca/">
  <oracle-xsl-mapper:schema>
    <oracle-xsl-mapper:mapSources>
      <oracle-xsl-mapper:source type="WSDL">
        <oracle-xsl-mapper:schema location="../WSDLs/FileRead.wsdl"/>
        <oracle-xsl-mapper:rootElement name="jobcodes" namespace="http://revelationtech.com/JobCodes"/>
      </oracle-xsl-mapper:source>
    </oracle-xsl-mapper:mapSources>
    <oracle-xsl-mapper:mapTargets>
      <oracle-xsl-mapper:target type="WSDL">
        <oracle-xsl-mapper:schema location="../WSDLs/FileRead.wsdl"/>
        <oracle-xsl-mapper:rootElement name="jobcodesconcat" namespace="http://revelationtech.com/TargetInfo"/>
      </oracle-xsl-mapper:target>
    </oracle-xsl-mapper:mapTargets>
  </oracle-xsl-mapper:schema>
  <xsl:template match="/">
    <ns0:targetinfo>
      <xsl:for-each select="/ns0:jobcodes/ns0:jobcode">
        <xsl:variable name="i" select="position()"/>
        <xsl:value-of select="strClass:replaceAll(concat (/targetinfo, '&quot;', /ns0:jobcodes/ns0:jobcode[$i]/ns0:Active, '&quot;;&quot;', /ns0:jobcodes/ns0:jobcode[$i]/ns0:Code, '&quot;;&quot;', /ns0:jobcodes/ns0:jobcode[$i]/ns0:Name, '&quot;&#13;&#10;'), '&amp;', '%26amp;')"/>
      </xsl:for-each>
    </ns0:targetinfo>
  </xsl:template>
</xsl:stylesheet>

I'll go through each of the various areas, but it'll be a bit modified for clarity.

Selecting

This simply copies an element from the source to the target (the target being /ns0:targetinfo).

<ns0:targetinfo>
  <xsl:value-of select="/ns0:jobcodes/ns0:jobcode/ns0:Active"/>
</ns0:targetinfo>

Looping

This loops through the source array, and references the array position jobcode[$i] in each iteration. You would need to create a counter $i.

<ns0:targetinfo>
  <xsl:for-each select="/ns0:jobcodes/ns0:jobcode">
    <xsl:variable name="i" select="position()"/>
    <xsl:value-of select="/ns0:jobcodes/ns0:jobcode[$i]/ns0:Active"/>
  </xsl:for-each>
</ns0:targetinfo>

Using Quotes

This concatenates a double quote in the beginning and end of the source element before copying it to the target element. A double quote is &quot; and a single quote is &apos;.

<ns0:targetinfo>
  <xsl:value-of select="concat('&quot;', /ns0:jobcodes/ns0:jobcode[$i]/ns0:Active, '&quot;')"/>
</ns0:targetinfo>

Replacing & with &amp;

Many target systems don't like the & sign and require it to be replaced with &amp;. It looks weird here, but to do this you actually replace &amp; with %26amp;.

<ns0:targetinfo>
  <xsl:value-of select="strClass:replaceAll(/ns0:jobcodes/ns0:jobcode[$i]/ns0:Active, '&amp;', '%26amp;')"/>
</ns0:targetinfo>

If using the strClass namespace, it has to be referenced in the stylesheet above as:

xmlns:strClass="http://www.oracle.com/XSL/Transform/java/java.lang.String"

Adding a Line Break

If you want to add a line break after the element, you will need to manually concatenate the characters &#13; and &#10;.

<ns0:targetinfo>
  <xsl:value-of select="concat (/ns0:jobcodes/ns0:jobcode/ns0:Active, '&#13;&#10;')"/>
</ns0:targetinfo>

Concatenating Source Array to a Single Target Element

This loops through all the source elements, and concatenates them all into a single target element that is comma delimited; the target element being /targetinfo.

<ns0:targetinfo>
  <xsl:for-each select="/ns0:jobcodes/ns0:jobcode">
    <xsl:variable name="i" select="position()"/>
    <xsl:value-of select="concat(/targetinfo, /ns0:jobcodes/ns0:jobcode[$i]/ns0:Active, ',', /ns0:jobcodes/ns0:jobcode[$i]/ns0:Code, ',', /ns0:jobcodes/ns0:jobcode[$i]/ns0:Name)"/>
  </xsl:for-each>
</ns0:targetinfo>

]]>
<![CDATA[ AWS API Gateway: Stage Variables ]]> https://chronicler.tech/aws-api-gateway-stage-variables/ 638cbd0a5a6b8f481d4cb5c0 Tue, 13 Dec 2022 08:45:02 -0500 For one of the projects, I was supposed to publish a new resource to the existing API. AWS API Gateway allows you to deploy the same API for different stages. It separates deployments of the same code for different environments, enabling "code once" principles. But you already see where the trap is.

Most likely, you have different backends for different tiers, maybe in different accounts. That means when you deploy a new version of API, you have to change your targets appropriately. AWS offers a solution - stage variables.

Let's see how to use stage variables and reuse the same code across all environments.    

Prepare Backend Lambdas

Since AWS Lambda is not the show's star, I have created two simple functions with different names and status messages.

Sample Lambda Function

Create a New API

For demo purposes, I created a new regional RESTful API from scratch.

Now, add a new resource with the creative name "lambda." Do not forget to enable API Gateway CORS.  

Define a resource for the new API

Now add the POST method to the existing resource. To do so:

  • Select /lambda resource;
  • Click the Actions button;
  • Then use the "Create Method" item from the drop-down menu.
  • Select POST from the new blank method  and then click the "Ok" icon next to the method selector;
Add the new method to the API resource. 

For  the method setup window, I configured:

  • Integration Type: Lambda Function
  • Use Lambda Proxy Integration: Checked
  • My Region: us-east-1
  • Lambda Function: lambda-stage
  • Click the "Save" button to complete the method configuration.  
  • API wizard will warn you that you are giving execution permission for the Lambda function.  Click "Ok"
Configured POST Method with Lambda attached
  • Click on the Client's "Test" link to validate our configuration.
  • You can put anything into headers and Request body and click "Test."
  • If API Gateway has all the privileges and Lambda is deployed properly, you will get the function response
Test POST method before deployment. 

Now our demo API is ready for deployment.

  • Click on "Actions" and then select "Deploy API"
  • For the Deployment stage, select [New stage]
  • Give a Stage name, for Example - "stg"
  • Give some meaningful descriptions for the environment and deployment and click "Deploy."

Now you have an additional entity available - Stage.  Stage allows you to control some aspects of the  API deployment. For this demo we focus on Stage variables.

We need another stage for the production deployment. Let's quickly repeat some of the configuration steps:

  • Select "Resources" from the navigation pane.
  • Navigate to the /lambda{POST} method
  • Click on Integration Request to open the configuration form
  • Replace your staging Lambda with the production one. In my case, I change lambda-stage to lambda-prod.
  • Deploy the updated API to the new "prd" stage.

Upon completion, you should have both stages with identical API but different code behind it.  Let's run some tests from the AWS Cloud Shell.

# curl - test HTTP/S RESTful resources. jq - JSON query to format the output
curl -s -d '{"test":"messgae"}' https://xxxxx.execute-api.us-east-1.amazonaws.com/stg/lambda |jq
Test deployed APIs 

It is time to untangle the API source from the deployment stage.

Implementing Stage Variables

  • Select "Stages"  from the API navigation panel
  • Select "stg" from the Stages tree
  • Select "Stage Variables" tab
  • Click "Add Stage Variable" and set the name to "LambdaName"  and value to your staging Lambda ARN. You can use only the Lambda name if your functions are under the same account.
Stage Variable LambdaName
  • Select "Resources"  then POST method under "/lambda" resource.
  • Click on the "Integration Request" and edit the "Lambda Function"  field. Make a note that now the integration request points to production Lambda.
  • Set ${stageVariables.LambdaName} as a new value.
  • Confirm the change and accept the warning.
  • Use the "Actions" button to deploy the updated API to "stg."
  • Now create a variable with the same name for the "prd"  stage.
  • Give it a  production Lambda name or ARN.
  • Make a new deployment to "prd".
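
The console steps above can also be scripted. A sketch with the AWS CLI, where the API ID and function names are placeholders:

# Point the stage variable at the staging Lambda
aws apigateway update-stage \
    --rest-api-id a1b2c3d4e5 \
    --stage-name stg \
    --patch-operations op=replace,path=/variables/LambdaName,value=lambda-stage

# Push a fresh deployment to the same stage
aws apigateway create-deployment --rest-api-id a1b2c3d4e5 --stage-name stg

Setting a stage variable and redeploying from the CLI.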

Let's make sure that our functions are still reachable and we get a response from both environments.

Final function tests

A Quick Recap

  • The AWS API Gateway allows you to deploy the same API to multiple environments.
  • API Gateway offers stage variables that allow you to disconnect the API code from the backend specifics.
  • A stage variable may contain a function name (i.e. "my-lambda-func") if the API and Lambda belong to the same account, or a fully-qualified ARN ("arn:aws:lambda:my-region:11111111111:function:my-lambda-name") for cross-account deployments.
  • You can use stage variables for message enrichment or to provide environment-specific information for the backend.  
]]>
<![CDATA[ A Shell Tricks: Search and Replace ]]> https://chronicler.tech/a-shell-tricks-search-and-replace/ 638249215a6b8f481d4cb3c4 Tue, 06 Dec 2022 08:40:40 -0500 It's easy to use VS Code to search and replace text entries. Yet, mass file renaming is a fairly common task, especially when you move Ansible inventories or other artifacts across environments.

Let's assume there is some host naming convention that allows you to make an educated guess with a glance at the host's name. For example, something like rhl-stg-itdweb01.mydomain.com gives you a pretty good idea of the OS, environment, department, and primary function.  

To move your scripts from staging to the production system, you should update text and filenames to keep the Ansible inventory coherent.

Typically, I start with the mass file rename. The example below gives you an idea of producing a new name from the old one.

$ for f in *-stg-*; do nf=$(echo $f|sed "s/-stg-/-prd-/"); mv $f $nf; done
Single Line that renames files by the rules.

Let's expand this cryptic charm to a human-readable block of code below.

# For all files that match the *-stg-* mask, do
for f in *-stg-*; do
#       inline command that translates all '-stg-' entries into '-prd-'   
  nf=$(echo $f | sed "s/-stg-/-prd-/")
# Rename file using original and new name   
  mv $f $nf
# Done with this one, iterate next if available.   
done  
Command explained.
If you want to be on the safe side, replace mv with cp and preserve the original files until you make all the changes. 

The next step is to update the content of the files and substitute all staging references with the production ones.

$ sed -i "s/-stg-/-prd-/g" *-prd-* 
Replace all stg entries with prd ones. 

There are a few notes regarding sed processing instructions:

  • The argument "-i" instructs sed to make in-place changes. Without this argument, the sed prints out results to the standard output as in the first command.
  • You may notice a subtle difference in the search and replace instructions for files and content. The latter ends with "/g", which stands for "global." It may not make much sense for file names, but it should always be used to replace all entries in the file body.
  • Most of the time, we use "/" as an argument separator, but any non-space character will work just fine. My second choice is '#', and commands "s/abs/cde/g" and "s#abs#cde#g"  are identical.    
]]>
<![CDATA[ Not getting the "Sign In" button on the Oracle Fusion Middleware EM Console? Try this ]]> https://chronicler.tech/sign-in-ofmw/ 60cfd23203fe466ba353e253 Sat, 03 Dec 2022 09:56:31 -0500 Quite a few times I've opened up the Oracle Enterprise Manager Fusion Middleware Control 12c console and the prompt for the user name and password is right there, but not the Sign In button (see screenshot).

Where's the "Sign In" button?!

The solution to this is rather simple:

  • Simply edit the URL and remove all entries following /em and press ENTER. Basically, just go to the main page at http://soa12:7001/em.

That's it. It appears that the console gets flaky when there are URL parameters from your previously expired session or if it's a pasted URL.

]]>
<![CDATA[ A Shell Tricks: Dash-Minus Fuss ]]> https://chronicler.tech/a-few-good-old-shell-tricks/ 62d202342e8b2060128bd3dc Tue, 29 Nov 2022 08:35:30 -0500 My friend Ahmed has recently posted a few Linux and Shell keepsakes. So today, I followed his lead and started publishing a few of my own.

To state the obvious. The commands below were tested on Red Hat Linux systems with Bash v4.x. If you manage to find KSH or CSH nowadays, you have my respect.

All *nix systems are similar to ogres and onions - all have layers. A system layer, a user layer, core commands with their neanderthal accessory lines, modern commands, and all kinds of scripts. It adds to the neverending fun because command-line warriors should memorize all kinds of argument notations and formats for a score of the most common tools.

Let's take a look at the command below:

tar zcf --exclude tmp/* /some/remote/volume/backup.tgz my_data_folder/
The Booby Trap command

At first glance, the command looks legit, except it is not. You may expect a new backup on some network device, but you create an archive named '--exclude' on your local filesystem. It happens because the Linux kernel allows almost any character in file and folder names, but not all commands concur. The fun begins when you try to delete this file. None of the commands below would work, because rm interprets the name as an argument and fails to parse it.

rm --exclude
rm \--exclude
rm "--exclude"
rm '--exclude'
That is not going to happen: the command treats - as an argument start character.

Fortunately, it has a straightforward fix. The latest rm versions even give you a tip: don't pass the bare name; prefix it with ./ or use an absolute path. With the commands below, you can delete the stray file and then run a well-formed tar command.

rm ./--exclude
tar zcf /some/remote/volume/backup.tgz --exclude='tmp/*' my_data_folder/
In case you haven't noticed, the archive name now follows the f flag directly, and the exclude pattern is quoted and passed with the = sign.

There are a few more situations where '-' could give you grief. My second-best one is searching file content for something similar to '-something'.
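
As a quick preview of the usual cure, most GNU tools accept the double dash as an end-of-options marker; a sketch with grep:

# Everything after -- is treated as an operand, not an option
grep -- '-something' /var/log/messages

The double dash stops option parsing.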

To be continued ...

]]>
<![CDATA[ EMAIL activity in Oracle BPEL does not support single quotes ]]> https://chronicler.tech/email-activity-in-oracle-bpel-does-not-support-single-quotes/ 6381337a5a6b8f481d4cb1ea Fri, 25 Nov 2022 16:35:53 -0500 During compilation of an Oracle SOA BPEL project, I ran into an error related to the EMAIL activity.

Problem:

I received an "invalid syntax" error when building the project, but the error message did not tell me what exactly the issue was:

Error(782): The expression "concat(string('<br><table border='1px' style='bo ..." has invalid syntax - Expected: )	Project.bpel
D:\SOA\Code\Ahmed\Project\SOA\BPEL	Project.jpr

The contents of the "Expected:" field in the log was empty.

Solution:

In the body of the EMAIL activity, my HTML text included single quotes, as follows:

<br><table border='1px' style='border-collapse:collapse' cellspacing='0' cellpadding='0' width='50%'>...

Replace all single quotes with double quotes and the problem was resolved:

<br><table border="1px" style="border-collapse:collapse" cellspacing="0" cellpadding="0" width="50%">...
]]>
<![CDATA[ Testing an XSL transformation mapping in Oracle JDeveloper 12c ]]> https://chronicler.tech/testing-an-xsl-mapping-in-oracle-jdeveloper-12c/ 6380d47c5a6b8f481d4cb1af Fri, 25 Nov 2022 10:04:52 -0500 So you developed a somewhat complicated XSL transformation in your Oracle SOA/BPEL project. But how do you go about testing this? Traditionally, we deploy the project to the SOA server, conduct a test of the end-to-end process, see what failed, and go back and re-edit the project. Then repeat.

JDeveloper offers an XSLT testing tool. It's not perfect, but it actually works well for most cases.

When editing your XSLT file, right-click on the middle pane and select Test XSLT Map.

Here, on the pop-up, you can manually specify the location of a source XML data file (if you have one). Alternatively, check on Generate Source XML File to have JDeveloper create some mock data for you, then click OK.

Voila!

On the left pane, the mock source data generated by JDeveloper can be observed. And on the right pane, the target data is shown. Now you can continually tweak your XSLT until you get it right.

]]>
<![CDATA[ Ansible: Date and Time ]]> https://chronicler.tech/ansible-date-and-time/ 637779fc5a6b8f481d4cb028 Tue, 22 Nov 2022 08:30:29 -0500 Anyone who works with Ansible knows how to access the current date and time. But have you ever dealt with something a bit more advanced? There are a few tips you may find helpful.

I worked on the certificate validation report. From a programming and automation standpoint, it's boring:

  • Access target and read certificate from keystore
  • Grab the expiration date from the output. Let's say it's a string "12/15/22"
  • Test if it expires within two months from today

Ansible doesn't offer much for current date/time manipulations: the ansible_date_time fact and the to_datetime converter. For all the rest, one should rely on Jinja2 and Python. Ansible is well known for smart type conversion, but the first attempt fails:

- name: Get Days from Today
  set_fact:
    lifecount: "{{ some_date - today }}"
  vars:
    today: "{{ ansible_date_time.iso8601 }}"
    some_date: "{{ '12/15/22' | to_datetime('%m/%d/%y') }}"
Use ISO date in the hope that Ansible/Jinja does the conversion.

The task above fails miserably. The to_datetime filter returns a Python datetime value, yet the variable today is just a string. Let's convert everything to datetime. Please note that I now use ansible_date_time.date, not iso8601.

- name: Get Days from Today
  set_fact:
    lifecount: "{{ some_date - today }}"
  vars:
    today: "{{ ansible_date_time.date | to_datetime('%Y-%m-%d') }}"
    some_date: "{{ '12/15/22' | to_datetime('%m/%d/%y') }}"
Line up variable types 

To my surprise, this task failed too. After some quick research and a series of trials and errors with multiple Ansible versions, I found a solution.

- name: Ansible Date Operations
  hosts: all
  tasks:
    - name: Get Days from Today
      set_fact:
        lifecount: "{{ (some_date |to_datetime - today|to_datetime).days }}"
      vars:
        today: "{{ ansible_date_time.date | to_datetime('%Y-%m-%d') }}"
        some_date: "{{ '12/15/22' | to_datetime('%m/%d/%y') }}"

    - name: Show Facts
      debug:
        msg: "You have  {{ lifecount }} days left."
Calculate time interval

It doesn't matter what type your object has inside the Jinja template: the Jinja2 engine converts the object to its string representation before returning the result to Ansible.  

You can find the source code in our GitHub repository.  

]]>
<![CDATA[ Ghost + MySQL Upgrade ]]> https://chronicler.tech/ghost-mysql-upgrade/ 6370f3365a6b8f481d4cade8 Tue, 15 Nov 2022 08:30:17 -0500 "Never do two changes at a time, especially on a production system," I always tell anyone listening. But doing and telling do not always come together. So if you are considering upgrading your standalone Ghost to the latest release, spend a few minutes on this post and save yourself a day of struggle and pain.

I have pushed our little blog engine upgrade back as much as possible because the new major Ghost release does not support MySQL 5.x. Every system upgrade, even one as small as this, is a project, but any database upgrade excites and dreads me in equal measure. I had my decent share of database upgrades in the past, and I'm not happy to get back into this boat. The thing is, we do not always have a choice, do we?

I started by googling the MySQL 5.7 to 8.0 upgrade process. Turns out it is just a "stop old, run new" situation when you have no compatibility issues. Better safe than sorry: I downloaded and unpacked the latest available MySQL Community Edition Server and ran the recommended upgrade validation procedure.  

# ./mysqlsh root:@localhost:3307 \ 
-e "util.checkForServerUpgrade();"
MySQL Shell - Upgrade Readiness Validation 

After a few seconds, the report gave me a few minor warnings, none of which impacted my Ghost database. I was good to go, and I went.  

Always, and I mean it, always take a full database backup before you go wild and ruin all your data.

The server instance started as MySQL version 8, and after a few seconds, my database was available. I was rather excited about the success and ran the Ghost upgrade. It warned me that my theme had some compatibility issues and upgraded the system. Naturally, at startup, it threw the infamous incompatible collation error, and the site went down. For a good part of Saturday, I tried to fix the situation and complete the schema upgrade. I changed the collation and the default character set on the database and all Ghost tables and upgraded the tables afterward. I found a note from the Ghost team and decided to redo the database completely, then import the corrected MySQL dump file. Nothing worked. Finally, I gave up, rolled back the database (remember my full backup!), and downgraded Ghost to the last working version.

I started my proper siege the next morning. Looking through compatibility notes and requirements, I started with small steps:

  • Upgrade Node.js from 14 LTS to 16 LTS, then force Ghost to upgrade to the current version. You want this to force the packages to refresh and re-download.
  • The second attempt at the MySQL upgrade to version 8.0. Success: the database is upgraded, and Ghost 4.x is up and available.  

Since all the recommendations had already failed (most of them: upgrade to the MySQL 8 default character set utf8mb4 and the utf8mb4_0900_ai_ci collation), I decided to go where the flow led me. My database and tables were configured with utf8mb4 and utf8mb4_0900_ai_ci, while the Ghost upgrade tried to create new objects with a different pair: utf8mb4 and utf8mb4_general_ci. Here are the steps to fix the issue:

  1. Connect to your Ghost database with the MySQL client and run the query

     SELECT CONCAT("ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;\n ", "ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;") AS alter_sql FROM information_schema.TABLES WHERE TABLE_SCHEMA = database();
    
  2. Save SQL commands from the output to the file, i.e., ghost-cnv.sql

  3. Connect to the database again and convert your schema.

     -- To avoid foreign key errors 
     SET FOREIGN_KEY_CHECKS = 0;
     source ghost-cnv.sql
     -- We still need them 
     SET FOREIGN_KEY_CHECKS = 1;
    
  4. To make sure that the incoming session matches your configuration, add the following to my.cnf under the [mysqld] section, as shown below:

     [mysqld]
     init_connect='SET collation_connection = utf8mb4_general_ci'
    
  5. Restart the database server to pick up the new configuration.

Finally, the Ghost engine 5.22.10 gave up and completed the database upgrade.

]]>
<![CDATA[ Getting JPS-01050 when starting WebLogic (cannot open wallet) ]]> https://chronicler.tech/getting-jps-01050-when-starting-weblogic-cannot-open-wallet/ 630bc35d9e76640eaa66c70e Sun, 28 Aug 2022 15:43:47 -0400 When starting up WebLogic after a network outage, we received the following exception:

oracle.security.jps.JpsException: JPS-01050: Opening of wallet based credential store failed. Reason java.io.IOException
        at oracle.security.jps.internal.config.OpssCommonStartup.preStart(OpssCommonStartup.java:423)
        at oracle.security.jps.JpsStartup.preStart(JpsStartup.java:389)
        at oracle.security.jps.wls.JpsBootStrapService.start(JpsBootStrapService.java:80)
        .
        .
        .
Caused by: oracle.security.jps.service.credstore.CredStoreException: JPS-01050: Opening of wallet based credential store failed. Reason java.io.IOException
        at oracle.security.jps.internal.credstore.ssp.CsfWalletManager.openWallet(CsfWalletManager.java:191)
        at oracle.security.jps.internal.credstore.ssp.WalletCredentialStore.doInit(WalletCredentialStore.java:170)
	.
        .
        .
Caused by: java.io.IOException
        at oracle.security.pki.OracleWallet.open(Unknown Source)
        at oracle.security.jps.internal.credstore.ssp.CsfWalletManager.openWallet(CsfWalletManager.java:179)
        ... 31 more
Caused by: java.lang.ExceptionInInitializerError
        at oracle.security.pki.OracleFileSSOWalletImpl.a(Unknown Source)
        ... 33 more
Caused by: java.lang.RuntimeException: java.io.IOException: Read-only file system
        at oracle.security.pki.FileLocker.(Unknown Source)
        ... 34 more
Caused by: java.io.IOException: Read-only file system
        at java.io.UnixFileSystem.createFileExclusively(Native Method

I tried manually opening up the wallet but received a "ewallet.p12 not present" error:

oracle@soahost:/home/oracle> $MW_HOME/oracle_common/bin/orapki wallet display -wallet $DOMAIN_HOME/config/fmwconfig -complete

Oracle PKI Tool : Version 12.2.1.4.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.

ewallet.p12 not present at /u01/oracle/domains/soa_domain/config/fmwconfig

My first warning sign came when I ran the df command to report file system disk space usage and it actually hung:

oracle@soahost:/home/oracle> df -h

Then the root cause of the issue was finally identified:

oracle@soahost:/home/oracle> touch /tmp/z
touch: cannot touch `/tmp/z': Read-only file system

The /tmp folder was in read-only mode. Oracle WebLogic Server requires read/write access to the /tmp folder, as you will usually notice ad hoc product files created there.

]]>
<![CDATA[ Creating a test file to test I/O performance ]]> https://chronicler.tech/test-file-xfs-mkfile/ 630bc1d59e76640eaa66c6e2 Sun, 28 Aug 2022 15:34:25 -0400 For the purpose of testing the performance of a cloud storage volume, I recently came across the xfs_mkfile command.

This command creates a file called 30GigFile.out that is 30 GB in size:

xfs_mkfile 30720m 30GigFile.out

What I do is simply test the timing of how long it takes using the following:

date; xfs_mkfile 30720m 30GigFile.out; date

The Linux man page simply states that xfs_mkfile will "create an XFS file". The file is padded with zeroes by default.


Other commands that can be used to create temporary files are shown below (but they are not ideal for performance testing; you'll see why when you run them: they create the file instantaneously):

fallocate -l 30G 30GigFile.out
dd if=/dev/zero of=30GigFile.out bs=1 count=0 seek=30G
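
If xfs_mkfile is not available, dd can also write real (non-sparse) data so that the timing reflects actual disk throughput. A hedged sketch, assuming GNU dd (status=progress requires coreutils 8.24 or later):

# Write 30 GB of zeroes, bypassing the page cache, and report throughput
date; dd if=/dev/zero of=30GigFile.out bs=1M count=30720 oflag=direct status=progress; date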
]]>
<![CDATA[ Quickly installing APEX 22 and ORDS 22 ]]> https://chronicler.tech/apex-ords-installation/ 62d184ed523c2d0ba701d023 Tue, 26 Jul 2022 10:18:29 -0400 Here are my quick notes on installing Oracle APEX Release 22.1 and Oracle REST Data Services 22.2.

Extract Software

1. Set environment variables. Use these paths when installing APEX, ORDS, and Tomcat. Customize as necessary. (ORDS_CONFIG must be defined before JAVA_OPTS references it.)

export JAVA_HOME=/u01/oracle/jdk-18.0.1
export APEX_HOME=/u01/oracle/apex
export ORDS_HOME=/u01/oracle/ords
export ORDS_CONFIG=/u01/oracle/ords
export JAVA_OPTS="-Dconfig.url=${ORDS_CONFIG}"
export TOMCAT_HOME=/u01/oracle/apache-tomcat-8.5.78
export PATH=$ORDS_HOME/bin:$JAVA_HOME/bin:$PATH
export SOFTWARE_BINARIES=/u01/software

2. Download Java, APEX, ORDS, and Tomcat software.

cd ${SOFTWARE_BINARIES}
wget --no-check-certificate https://download.oracle.com/java/18/latest/jdk-18_linux-x64_bin.tar.gz
wget --no-check-certificate https://download.oracle.com/otn_software/apex/apex-latest.zip
wget --no-check-certificate https://download.oracle.com/otn_software/java/ords/ords-latest.zip
wget --no-check-certificate https://dlcdn.apache.org/tomcat/tomcat-8/v8.5.78/bin/apache-tomcat-8.5.78.tar.gz

3. Extract Java.

cd /u01/oracle
gtar -xzvf ${SOFTWARE_BINARIES}/jdk-18_linux-x64_bin.tar.gz

4. Extract APEX.

cd /u01/oracle
unzip ${SOFTWARE_BINARIES}/apex-latest.zip

5. Extract ORDS.

mkdir -p ${ORDS_HOME}
cd ${ORDS_HOME}
unzip ${SOFTWARE_BINARIES}/ords-latest.zip

6. Extract Tomcat.

cd /u01/oracle
tar xvf ${SOFTWARE_BINARIES}/apache-tomcat-8.5.78.tar.gz

7. Copy images from APEX extracted folder to Tomcat webapps, and rename the folder from "images" to "i".

cp -Rp ${APEX_HOME}/images ${TOMCAT_HOME}/webapps
mv ${TOMCAT_HOME}/webapps/images ${TOMCAT_HOME}/webapps/i

8. Change Tomcat configuration to allow the Tomcat Manager console to be accessed outside of localhost.

vi ${TOMCAT_HOME}/webapps/manager/META-INF/context.xml
Comment out the <Valve> entry.

9. Change Tomcat configuration to create a user to access the Tomcat Manager console.

vi ${TOMCAT_HOME}/conf/tomcat-users.xml
Add the following entry:
  <role rolename="manager-gui"/>
  <user username="admin" password="welcome1" roles="manager-gui"/>

Install APEX Schemas

1. Copy ${APEX_HOME} to your database server.

2. Change to the APEX folder (on your database server).

cd ${APEX_HOME}

3. Connect to the database as SYS.

sqlplus "/ as sysdba"

4. Run various APEX SQL scripts. If each script exits the sqlplus prompt, then simply reconnect and run the next script.

SQL> @apexins APEX APEX TEMP /i/
SQL> @apxchpwd.sql
SQL> ALTER USER apex_public_user IDENTIFIED BY "welcome1" ACCOUNT UNLOCK;
SQL> @apex_rest_config.sql

Install ORDS

1. Go back to the middleware host.

2. Install ORDS.

cd ${ORDS_HOME}
ords install

3. You will be prompted with the following:

oracle@hostname:/u01/oracle/ords> ords install
2022-05-20T12:34:07.142Z INFO
Your configuration folder /u01/oracle/ords is located in ORDS product folder.
Oracle recommends to use a different configuration folder.
 
ORDS: Release 22.1 Production on Fri May 20 12:34:07 2022
 
Copyright (c) 2010, 2022, Oracle.
 
Configuration:
  /u01/oracle/ords/
 
The configuration folder /u01/oracle/ords does not contain any configuration files.
 
Oracle REST Data Services - Interactive Install
 
  Enter a number to select the type of installation
    [1] Install or upgrade ORDS in the database only
    [2] Create or update a database pool and install/upgrade ORDS in the database
    [3] Create or update a database pool only
  Choose [2]:

4. Select Create or update a database pool and install/upgrade ORDS in the database.

5. Select Basic (host name, port, service name).

6. Enter your database information:

  • Host name: dbhost
  • Port: 1521
  • Service name: servicename

7. Enter the SYSTEM password.

8. Select Install ORDS in the database, which uses the SYSAUX and TEMP tablespaces (you can change the schemas if you want).

9. Select Database Actions (all features).

10. Select Configure and start ORDS in standalone mode.

11. Select HTTP protocol.

  • HTTP port: 8080
  • APEX static resources: /u01/oracle/apache-tomcat-8.5.78/webapps/i

12. Create a database account for initial development and grant permissions:

CREATE USER apex_mycustom IDENTIFIED BY "welcome1";
GRANT resource, connect TO apex_mycustom;
GRANT create dimension TO apex_mycustom;
GRANT create job TO apex_mycustom;
GRANT create materialized view TO apex_mycustom;
GRANT create synonym TO apex_mycustom;
GRANT create view TO apex_mycustom;

Startup Tomcat

1. Copy the ORDS WAR file to Tomcat webapps.

cp ${ORDS_HOME}/ords.war ${TOMCAT_HOME}/webapps

2. Startup Tomcat.

cd ${TOMCAT_HOME}/bin
./startup.sh

3. Login to the APEX console as the INTERNAL workspace and user ADMIN.

  • http://hostname:8080/ords/
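
Before logging in, a quick sanity check confirms that Tomcat is serving the ORDS application (hostname is a placeholder as above). A 404 here usually means ords.war did not deploy, while a 200 or a redirect is expected:

curl -I http://hostname:8080/ords/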
]]>
<![CDATA[ Resetting the ORDS_PUBLIC_USER password for APEX 22 and ORDS 22 ]]> https://chronicler.tech/resetting-the-ords_public_user-password-for-apex-22-and-ords-22/ 62d16b5c523c2d0ba701cfbd Fri, 15 Jul 2022 09:41:12 -0400 I recently installed Oracle APEX Release 22.1 and Oracle REST Data Services 22.2 (installation instructions here). All was working fine, but two days later when we navigated to the web console at http://hostname:8080/ords we received the following exception:

Specifically, it says:

The request could not be mapped to any database. Check the request URL is correct, and that URL to database mappings have been correctly configured

Solution

1. Shutdown Tomcat:

cd /u01/oracle/products/apache-tomcat-8.5.78/bin
./shutdown.sh

2. Unlock and reset the ORDS_PUBLIC_USER database account:

ALTER USER ords_public_user IDENTIFIED BY "welcome1" ACCOUNT UNLOCK;

3. Reset the password using the ORDS configuration tool:

cd /u01/oracle/products/ords
java -jar ords.war config --db-pool default secret db.password

The prompts will display as follows:

oracle@hostname:/u01/oracle/products/ords> java -jar ords.war config --db-pool default secret db.password
Warning: Support for executing: java -jar ords.war has been deprecated.
Please add ords to your PATH and use the ords command instead.
Run the following command to add ords to your PATH:

echo -e 'export PATH="$PATH:/u01/oracle/products/ords/bin"' >> ~/.bash_profile

Start a new shell to pick up this change.
2022-07-15T13:20:59.734Z INFO        Your configuration folder /u01/oracle/products/ords is located in ORDS product folder.  Oracle recommends to use a different configuration folder.

ORDS: Release 22.2 Production on Fri Jul 15 09:21:00 2022

Copyright (c) 2010, 2022, Oracle.

Configuration:
  /u01/oracle/products/ords/

Enter the database password:
Confirm password:
The setting named: db.password was set to: ****** in configuration: default

Keep in mind that the ORDS connection pool is located here for the default pool:

  • /u01/oracle/products/ords/databases/default/pool.xml

The password is stored in a wallet located here:

  • /u01/oracle/products/ords/databases/default/wallet/cwallet.sso

4. Startup Tomcat:

cd /u01/oracle/products/apache-tomcat-8.5.78/bin
./startup.sh

Now you should be able to navigate to the web console at http://hostname:8080/ords and life is good.

]]>
<![CDATA[ Why Expedia's slow killing of their desktop web experience is not a good thing ]]> https://chronicler.tech/expedia-ui-redesign/ 62c26312e3c9c91cfc0c880c Tue, 12 Jul 2022 10:28:11 -0400 TRIGGER WARNING: This post may include frustration towards recent changes made by Expedia.

Expedia has recently (in 2022) redesigned their user interface to provide a completely consistent experience between desktop web browser, mobile web browser, and their mobile app. This makes sense from a software development standpoint. After all, having a single codebase for all clients reduces complexity and enables them to rollout new features with less issues. This would also appear to make sense from a user experience standpoint (or so they say), particularly since customers these days now alternate between mobile and desktop transparently.

With responsive websites, mobile users are accustomed to more minimal and lighter interfaces, albeit at the expense of reduced options. Mobile users are now generally accustomed to this, if not expect it.

The IT hotshot who sold the concept of a unified interface to their executive leadership probably had a slide like this:

I'm not opposed to the recent changes. But Expedia failed in one key area. What Expedia did wrong was port the limitations of the mobile experience to the desktop web version.

Options and self-service capabilities on the desktop browser that existed in the older interface no longer exist. Others are now very difficult to find. Everything bad about the mobile browsing experience is now ported to the desktop version too.

Where Expedia Failed in their UI Redesign

The major failures in Expedia's recent UI redesign can be summarized as follows:

  • Too little information in the desktop version. The desktop version displays less information than before, and is now similar looking to the mobile web and mobile app versions. Getting to a lot of information you need almost always requires engaging customer service (otherwise now known as the virtual agent).
  • Excessive navigation required in the desktop version. The amount of navigation required to get to a feature or function in the new UI takes 5x more steps than the previous one.
  • Functionality removed from the desktop version. Moving previously existing functionality from the desktop web application to the virtual chatbot has ruined the desktop experience.
  • Loss of historical booking data that coincided with the UI redesign. It's unclear why Expedia did this, but customers have now permanently lost all previous booking data that pre-dated 2019. No warning was given to their customers.
  • Less than ideal desktop UI. Awkward user interface, menus, and navigation riddle the desktop version. What works in mobile doesn't necessarily translate well in desktop.
This may be the desired experience I'm looking for in a mobile browser, but not on my desktop.

The User Experience on Desktop is Downgraded

Do you know how long it now takes to get an invoice for a past flight on the newly designed desktop interface?

  1. Click on "Trips"
  2. Click on "Past"
  3. Click on the image of your trip (add a few more seconds if it's not obvious)
  4. Click on the departure flight
  5. Click on "Menu"
  6. Spend 30 seconds deciding if you should click on 'Print itinerary' or 'View as PDF' or 'View receipt'
  7. Click on "View as PDF"
  8. Click on the back icon
  9. Click on the return flight
  10. Click on "Menu"
  11. Click on "View as PDF"

Eleven steps. Unintuitive too. If I shared a screenshot walkthrough of how all this played out, you likely wouldn't be impressed. In their older desktop interface, everything was streamlined: click on "Trips," select your trip, and "View Invoice." That's it. Simple and straightforward.

Desktop Self-Service Functions are Reduced

In the older desktop UI, you could email an itinerary to yourself. Now, that option is eliminated. You have to use a chatbot to do this.

Let's see how many steps it now takes on the new desktop interface to email yourself an itinerary:

  1. Click on "Trips"
  2. Click on the image of your current trip
  3. Spend 1 minute clicking on the various menu options only to realize there's no way to email an itinerary to yourself
  4. Click on "Help" to launch the Virtual Agent
  5. Click on "Do something else" in the chatbot window when prompted
  6. Click on "Resend confirmation email"
  7. Spend 10 seconds trying to figure out how to scroll right on the upcoming bookings
  8. Spend 20 seconds to determine whether you should click 'See itinerary' or 'Select booking,' as all you want to do is email the itinerary
  9. Click on "See itinerary" and you're back on the same page
  10. Go back and click on "Select booking"
  11. Click on "Yes, send copy"
  12. Enter your email address again

Take a look at this screenshot of the Virtual Agent. Turns out that the chat window is not resizable.

Can someone explain how I scroll right to view the rest of my bookings?

Expedia's Misguided Confidence in their "Virtual Agent"

Have you arrived at your destination and are having a problem with your car rental reservation? Have no worry! The Expedia virtual chat agent is here to assist! You're standing at the counter of the rental car agency and spending 2 minutes typing on your smartphone with an automated chatbot. What fun!

I looked up my history of chats. It consistently takes 2 minutes to get to a live agent on chat (remember, this is text chat, not phone). Now mind you, this may not sound like a long time. But now imagine a line behind you and a tired family, all while you're unsuccessfully trying to get hold of a human agent... for 2 minutes straight.

One of my many struggles with the virtual agent.

It takes an average of 20 minutes to get a simple question answered. Mind you, this is typing on your smartphone and waiting, often minutes, to get a response to each message you send.

Most of Expedia's customer support is based in the Philippines and Egypt. If you thought it was challenging working with an offshore support agent before, wait till you start texting them!

In this next screenshot, all I wanted to know was whether the flight I was about to book was refundable. This only required a 35 minute (11:03pm to 11:38pm) virtual chat with Ayman to get the question answered. To be fair, he was quite knowledgeable but I did have to repeat and rephrase some questions, likely due to a language barrier.

It takes longer to explain the situation on chat, and repeat clarification is almost always needed, as was the case with Femehring. I even provided a screenshot in the beginning of the chat to try to avoid any confusion!

I frequently needed to further clarify to Femehring what I'm needing help with.

So he answers my question. But remember, I've been using Expedia well over a decade, so I'm reasonably familiar with the policies. Femehring, my friend, I'm pretty certain you're wrong here.

Femehring claims he's "well equipped" to handle my inquiry and felt that it wasn't necessary to transfer me to a supervisor, so let's see when I present him with hard evidence disputing his previous statement.

This is the third time I'm explaining the situation to try to get an answer to my single question.

I get that English may not be the first language of many of these agents, but a 35-minute text chat could have concluded in no more than 5 minutes on the phone.

My Final Thoughts

Are these problems simply the growing pains of deploying new interfaces, functionality, or chatbots?

Are the UI redesign decisions that Expedia made simply where everyone is headed? I sure hope not. Responsive web design doesn't restrict usability. Lazy development does.

While it's essential to remove clutter in mobile to improve readability, mobile still requires all features that its desktop counterpart offers. However, Expedia took the opposite route of eliminating the features on the desktop version instead. To compensate for these limitations, they aggressively pushed their virtual agent (aka chatbot) onto their customers as a means to obtaining whatever information is missing. Removing previously existing functionality and pushing it to a mediocre chatbot is simply a poor design decision.

I'm mindful of the challenges recruiting and maintaining human talent the past few years, and perhaps Expedia thought that chatbots were a means to addressing the lack of staff. This is likely not their reasoning, because I spend more time with their human agents on the virtual agent than I ever did on the phone.

What Expedia did was simply make bad technology decisions that are negatively affecting their users' experience. I've spent over 20 hours virtual chatting with their agents over the last two months because of the endless limitations and challenges that I've been running into since their new UI rollout a couple of months ago. This is simply unacceptable.

I've been loyal to Expedia, but now am questioning how long I can tolerate these bad decisions.

Update July 16, 2022

Wth?! I went to cancel a hotel reservation (I'm on my desktop), and Expedia is forcing me to navigate to the virtual agent to cancel! Why not simply give me a Cancel option similar to how you have the Change reservation option? I honestly don't get it.

Granted, the entire process took 3 steps, but now I have to read every single word from the virtual agent, which is attempting to mimic human dialogue as best as possible (meaning I have to be extra careful before confirming an action). I still had to open another window and view the itinerary to confirm that the amount refunded matched the original amount charged.

If anyone has any suggestions to an alternate online travel agency that I can spend my hard earned money on for the next 10 years, please let me know. I'm fed up with Expedia forcing the virtual agent on me for the most basic actions (which happen to be the most frequent too).

Update July 18, 2022

Amazing!

Expedia finally allows you to cancel a flight through the desktop browser! And it only takes 1 click!

Do you know how many times I've engaged the awful virtual agent to try to find out how much time I have left to cancel my flight within the refund window? The agents could never tell me when, and I've had multiple frustrating discussions with them, often exceeding 45 minutes, just to try to get an answer (I was never successful).

Credit to Expedia for fixing their issues. Shame on them with making the awful design decisions in the first place.

To be continued.

]]>
<![CDATA[ When specialty beats functionality ]]> https://chronicler.tech/when-specialty-beats-functionality/ 62c81e71523c2d0ba701cd3a Tue, 12 Jul 2022 08:30:00 -0400 A while back I wrote a short piece about Pandoc. My excitement about it stands true, yet it's a last-century utility you don't want to or can't use in cloud-native applications. Well, I found a new one - WKHTMLTOx.

While the name of the utility is almost as unpronounceable as my last name, it is light, versatile, and self-contained. The developers offer two compiled and statically linked options: wkhtmltopdf and wkhtmltoimage. The utility uses Qt WebKit to render PDFs or images from various sources. It does not require any dependencies, packages, or graphic engines; all it does is convert virtually anything to PDF or image. The site is lean, as laconic as the product itself, and mostly reproduces the application's manual page. What makes it stand out is the transformation quality and low resource usage. For example, it takes less than 5 minutes to convert a 25 MB HTML document to a 93 MB PDF, and what is even more impressive, it uses only 2 GB of RAM.
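
For reference, the command-line usage is as simple as it gets. A hedged sketch; the input can be a local HTML file or a URL, and the file names are placeholders:

# Render an HTML page (local file or URL) into a PDF
wkhtmltopdf report.html report.pdf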

Developers offer releases for the most common platforms, and what is important to me, there is a prebuilt AWS Lambda Layer for Amazon Linux 2. Now, there is only one component missing - a JavaScript wrapper for the utility. Of course, you can write one yourself, and I did, but it is worth searching to see if someone more gifted has done it already. Sure enough, I found a quite versatile and reliable implementation with the no-brainer name wkhtmltopdf. Many thanks to the collaborators; they have added support for NodeJS streams. This approach perfectly fits the AWS S3 API and allows you to convert S3 objects on the fly.

Serverless PDF Converter Diagram

My overall solution looks like the diagram below. The Lambda code imports the wkhtmltopdf module and runs on top of the wkhtmltopdf Lambda layer. The Lambda can receive events from an application or from S3 itself. Just a few tips to consider:

  • Point the NodeJS object to the utility from the layer; it resides in /opt/bin. Alternatively, you could update the process.env.PATH value.
  • To avoid being disappointed with the conversion quality, don't forget about another environment variable - FONTCONFIG_FILE. I just added it to the Lambda configuration: FONTCONFIG_FILE=/opt/fonts/fonts.conf.
  • The converter is smart enough to pull images along, as long as they are available. It's not a big deal if the image reference is an absolute URL, but for images with relative paths you may want to have local copies before the conversion happens.
  • Speaking of streams, wkhtmltopdf will search for images in the /tmp folder. So, if your source document refers to an image as <img src='images/my-picture.png'/>, the image file should be available as /tmp/images/my-picture.png first.
]]>
<![CDATA[ Monitor JMS queues through OEM 13c ]]> https://chronicler.tech/monitor-jms-oem/ 62c9bd2e523c2d0ba701cf2e Sat, 09 Jul 2022 13:50:08 -0400 If you have the WebLogic Management Pack for Oracle Enterprise Manager (OEM), you can monitor a slew of metrics. Often, I am asked if OEM can monitor if there are pending messages in a WebLogic JMS queue or topic. The answer is "sort of."

You can set warning and critical thresholds at the JMS Server level. Here are the instructions to do so through OEM:

  1. Log in to the OEM console.
  2. Navigate to your Oracle WebLogic Server target.
  3. Click on WebLogic Server > Monitoring > All Metrics.
  4. Expand JMS Server Metrics.
  5. Click on Destinations.
  6. Click on the destination name in the table.
  7. Click on Modify Thresholds, and set the warning and critical values to thresholds of your choice.

You may want to experiment with this first and see if alerts are sent. If not, try other metrics such as [JMS Server Metrics | Messages Pending], [JMS Server Metrics by Server | Current JMS Messages], or [JMS Server Metrics by Server | Pending JMS Messages].

It's a bit flaky to be honest. In some cases I had to create a custom metric extension which invoked a custom shell script to retrieve the message count in a particular queue. So good luck!

]]>
<![CDATA[ WLST script to monitor WebLogic status, heap, JDBC, and JMS ]]> https://chronicler.tech/wlst-script-to-monitor-weblogic-status-heap-jdbc-and-jms/ 62c9a483523c2d0ba701cec2 Sat, 09 Jul 2022 13:20:02 -0400 Do you want a custom script to send out an email that reports on your Oracle WebLogic Server status, heap, data source, and JMS information?

If you want an equivalent of the output below, all you need is a single crontab entry, a single bash script, and a single WLST Python script, all included below.

Basic scripting and WLST knowledge is required. The scripts can be customized to your liking.

Crontab:

# --------------------------------------------------------------------------------
# 2020-03-16 | Script to monitor WebLogic Server
# --------------------------------------------------------------------------------
0 * * * * /home/oracle/scripts/monitorSOAWLS.sh > /dev/null 2>&1

monitorSOAWLS.sh:

#!/bin/bash
##############################################
# Weblogic Server Monitoring Script
# Author: Ahmed Aboulnaga
# Date:   2020-03-16
##############################################

#----------------------------------------
# Variables
#----------------------------------------
ENV=PROD12C
ORACLE_HOME=/u01/app/oracle/middleware
SCRIPT_PATH=/home/oracle/scripts/wlst
SERVERS=soadev1
PORT=7001
EMAILS="ahmed@revelationtech.com,ahmed.aboulnaga@revelationtech.com"

#----------------------------------------
# Set environment
#----------------------------------------
source ${ORACLE_HOME}/wlserver/server/bin/setWLSEnv.sh 2>&1 > /dev/null

#----------------------------------------
# Loop through server list
#----------------------------------------
for serv in ${SERVERS}
do

    #----------------------------------------
    # Run WLST script
    #----------------------------------------
    echo "********************************************************";
    echo " Running Server status report for :${serv}  ";
    echo "********************************************************";
    ${ORACLE_HOME}/common/bin/wlst.sh ${SCRIPT_PATH}/monitor_all_servers.py ${serv} ${PORT}
    echo '<h3> ********** END OF REPORT ********** </h3></div>' >> ${SCRIPT_PATH}/monitorstatus.html

    #----------------------------------------
    # Set email subject (check green first so that
    # the most severe color found wins)
    #----------------------------------------
    grep "green" ${SCRIPT_PATH}/monitorstatus.html >> /dev/null
    if [ $? == 0 ]; then
        ALERT_CODE="[GREEN]"
    fi

    grep "yellow" ${SCRIPT_PATH}/monitorstatus.html >> /dev/null
    if [ $? == 0 ]; then
        ALERT_CODE="[WARNING]"
    fi

    grep "red" ${SCRIPT_PATH}/monitorstatus.html >> /dev/null
    if [ $? == 0 ]; then
        ALERT_CODE="[CRITICAL]"
    fi

    ENVIRONMENT=`echo $ENV | tr '[:lower:]' '[:upper:]'`
    
    #----------------------------------------
    # Send email
    #----------------------------------------
    CONTENT=${SCRIPT_PATH}/monitorstatus.html
    SUBJECT="${ALERT_CODE} - SOA ${ENVIRONMENT} ENVIRONMENT STATUS REPORT "
    ( echo "Subject: $SUBJECT"
    echo "MIME-Version: 1.0"
    echo "Content-Type: text/html"
    echo "Content-Disposition: inline"
    cat $CONTENT )| /usr/sbin/sendmail $EMAILS

done

rm  ${SCRIPT_PATH}/monitorstatus.html

monitor_all_servers.py:

##############################################
# Weblogic Server WLST Script
# Author: Ahmed Aboulnaga
# Date:   2020-03-16
##############################################

import os
import sys

# Username and password
uname = 'weblogic'
pwd = 'welcome1'

# Get server name and port passed in from the wrapper script
parServerName = sys.argv[1]
parServerPort = sys.argv[2]

# Connect to server
url = 't3://' + parServerName + ':' + parServerPort
connect(uname, pwd, url)

# Write output to HTML file (must match ${SCRIPT_PATH}/monitorstatus.html in the wrapper script)
fo = open("/home/oracle/scripts/wlst/monitorstatus.html", "wb+")

#----------------------------------------
# Report Server Status
#----------------------------------------
fo.write('<div>')
fo.write('\n<h3>SERVER STATUS REPORT: ' + url + '</h3>\n\n')

def getRunningServerNames():
    domainConfig()
    return cmo.getServers()

serverNames = getRunningServerNames()
domainRuntime()

def healthstat(server_name):
    cd('/ServerRuntimes/' + server_name + '/ThreadPoolRuntime/ThreadPoolRuntime')
    s = get('HealthState')
    x = s.toString().split(',')[2].split(':')[1].split('HEALTH_')[1]
    return x

serverNames = domainRuntimeService.getServerRuntimes()
getRunningServerNames()
domainRuntime()

# Create table for Server Report Status
fo.write('<table style="font:normal 12px verdana, arial, helvetica, sans-serif; border:1px solid #1B2E5A;text-align:center" bgcolor="#D7DEEC" width="400" border="0">')
fo.write('<caption style="font-weight:bold; letter-spacing:10px; border:1px solid #1B2E5A">SERVER STATUS</caption>')
fo.write('<tr align="center" bgcolor="#5F86CF"><td>Server Name</td><td>Status</td><td>Health</td></tr>')

rowNum = 0;

for name in serverNames:
    status = str(name.getState())
    health = healthstat(name.getName())
    # Alternate Report Row Color
    if rowNum % 2 == 0:
        rowColor = '#D7DEEC'
    else:
        rowColor = '#F4F6FA'
    # Change cell color based on status returned
    hcolor = 'green'
    if health != 'OK':
        if health == 'WARN':
            hcolor = 'yellow'
        else:
            hcolor = 'red'
    else:
        hcolor = 'green'

    if status != 'RUNNING':
        if  status == 'WARNING':
            fo.write('<tr align="center" bgcolor=' + rowColor + '><td> ALERT!' + name.getName() + ' </td><td>' + status + '</td><td style="background-color:' + hcolor + ';font-weight:bold;">' + health + '</td></tr>')
        else:
            fo.write('<tr align="center" bgcolor=' + rowColor + '><td> ALERT!' + name.getName() + ' </td><td> ' + status + '  </td><td style="background-color:' + hcolor + ';font-weight:bold;">' + health + '</td></tr>')
    else:
        fo.write('<tr align="center" bgcolor=' + rowColor + '><td> ' + name.getName() + ' </td><td> ' + status + ' </td><td style="background-color:' + hcolor + ';"><b>' + health + ' </b></td></tr> ')

    rowNum += 1

fo.write("</table><br/><br/>")

#----------------------------------------
# Report Heap Details
#----------------------------------------

# Definition to print a running servers heap details
def printHeapDetails(server_name):
    domainRuntime()
    cd('/')
    cd('ServerRuntimes/' + server_name + '/JVMRuntime/' + server_name)
    hf = float(get('HeapFreeCurrent')) / 1024
    hs = float(get('HeapSizeCurrent')) / 1024
    hfpct = float(get('HeapFreePercent'))
    hf = round(hf / 1024, 2)
    hs = round(hs / 1024, 2)
    cellcolor = rowColor
    if hfpct <= 20 and server_name != 'AdminServer':
        if hfpct <= 10:
            cellcolor = 'red'
        else:
            cellcolor = 'yellow'
    else:
        cellcolor = rowColor

    fo.write('<tr bgcolor=' + cellcolor + ' align="center"><td align="left">' + server_name + '  </td><td>' + `hf` + 'MB  </td><td>' + `hs` + 'MB  </td><td>' + `hfpct` + '%  </td></tr>')

# Calling printHeapDetails with arguments
# Create Table for Heap Details
fo.write('<table style="font:normal 12px verdana, arial, helvetica, sans-serif; border:1px solid #1B2E5A" bgcolor="#D7DEEC" width="600" border="0">')
fo.write('<caption style="font-weight:bold; letter-spacing:10px; border:1px solid #1B2E5A">SERVER HEAP SIZE REPORT</caption>')
fo.write('<tr align="center" bgcolor="#5F86CF"><td> Managed Server</td><td>HeapFreeCurrent</td><td>HeapSizeCurrent</td><td>HeapFreePercent</td></tr>')
servers = domainRuntimeService.getServerRuntimes();
rowNum = 0;
for server in servers:
    # Alternate Report Row Color
    if rowNum % 2 == 0:
        rowColor = '#D7DEEC'
    else:
        rowColor = '#F4F6FA'
    printHeapDetails(server.getName())
    # Increment Row Color
    rowNum += 1

fo.write('</table><br /><br />')

#----------------------------------------
# Report JDBC Status
#----------------------------------------

fo.write('\n<h3>SERVER JDBC RUNTIME INFORMATION</h3>\n\n')
servers = domainRuntimeService.getServerRuntimes();
for server in servers:
    jdbcRuntime = server.getJDBCServiceRuntime();
    datasources = jdbcRuntime.getJDBCDataSourceRuntimeMBeans();
    # Create Table for JDBC Status
    fo.write('<table style="font:normal 12px verdana, arial, helvetica, sans-serif; border:1px solid #1B2E5A" bgcolor="#D7DEEC" width="600" border="0">')
    fo.write('<caption style="font-weight:bold; letter-spacing:10px; border:1px solid #1B2E5A">' + server.getName() + '</caption>')
    fo.write('<tr align="center" bgcolor="#5F86CF"><td> Data Source:</td><td>State</td><td>Active Connections</td><td>Waiting for Connections</td></tr>')
    rowNum = 0;
    for datasource in datasources:
        if rowNum % 2 == 0:
            rowColor = '#D7DEEC'
        else:
            rowColor = '#F4F6FA'

        if datasource.getState() != "Running":
            stateColor = "red"
        else:
            stateColor = rowColor
        if datasource.getActiveConnectionsCurrentCount() > 10:
            acColor = "yellow"
            if datasource.getActiveConnectionsCurrentCount() > 20:
                acColor = "red"
        else:
            acColor = rowColor
        if datasource.getWaitingForConnectionCurrentCount() > 2:
            wcColor = "yellow"
            if datasource.getWaitingForConnectionCurrentCount() > 5:
                wcColor = "red"
        else:
            wcColor = rowColor

        fo.write('<tr align="center" bgcolor=' + rowColor + '><td align="left">' + datasource.getName() + ' </td><td style="background-color:' + stateColor + '">' + datasource.getState() + ' </td><td  style="background-color:' + acColor + '" >' + repr(datasource.getActiveConnectionsCurrentCount()) + ' </td><td  style="background-color:' + wcColor + '" > ' + repr(datasource.getWaitingForConnectionCurrentCount()) + ' </td></tr>');
        rowNum += 1
    fo.write('</table><br /><br />')

#----------------------------------------
# Report JMS Status
#----------------------------------------

fo.write('\n<h3>SERVER JMS STATUS INFORMATION</h3>\n\n')
# Print JMS status for all servers
servers = domainRuntimeService.getServerRuntimes();

for server in servers:
    serverName = server.getName();
    jmsRuntime = server.getJMSRuntime();
    jmsServers = jmsRuntime.getJMSServers();
    if not jmsServers:
        fo.write('<h4>No JMS Information For ' + serverName + ' </h4> \n')
    else:
        # Create Table for JMS Status
        fo.write('<table style="font:normal 12px verdana, arial, helvetica, sans-serif; border:1px solid #1B2E5A" bgcolor="#D7DEEC" width="900" border="0">')
        fo.write('<caption style="font-weight:bold; letter-spacing:10px; border:1px solid #1B2E5A">JMS Runtime Info for :' + serverName + ' </caption>')
        fo.write('<tr align="center" bgcolor="#5F86CF"><td>SERVER</td><td>JMSSERVER</td><td>DestinationName</td><td>DestinationType</td><td>MessagesCurrentCount</td><td>MessagesHighCount</td><td>ConsumersCurrentCount</td><td>ConsumersHighCount</td><td>ConsumersTotalCount</td></tr>')
        for jmsServer in jmsServers:
            jmsServerName = jmsServer.getName();
            destinations = jmsServer.getDestinations();
            rowNum = 0;
            for destination in destinations:
                if destination.getMessagesCurrentCount() >= 0 :
                    # Alternate Report Row Color
                    if rowNum % 2 == 0:
                            rowColor = '#D7DEEC'
                    else:
                            rowColor = '#F4F6FA'

                    fo.write('<tr align="center" bgcolor=' + rowColor + '><td align="left">' + serverName + '  </td><td> ' + jmsServerName + '  </td><td> ' + str(destination.getName()) + ' </td><td> ' + str(destination.getDestinationType()) + '  </td><td> ' + str(destination.getMessagesCurrentCount()) + '  </td><td> ' + str(destination.getMessagesHighCount()) + '   </td><td> ' + str(destination.getConsumersCurrentCount()) + ' </td> <td> ' + str(destination.getConsumersHighCount()) + ' </td> <td> ' + str(destination.getConsumersTotalCount()) + ' </td></tr>')
                    rowNum += 1
        fo.write('</table> <br /><br />')
fo.write('</div>')

#----------------------------------------
# Exit WLST
#----------------------------------------

exit()
]]>
<![CDATA[ Promptless SFTP in bash script ]]> https://chronicler.tech/promptless-sftp/ 62ab3d2044fea135884076cc Thu, 16 Jun 2022 10:34:06 -0400 Want to run SFTP command in a bash script without being prompted? Here are quick examples of 2 approaches; one with password based authentication and one with private key authentication.

Password Authentication

cd /source_directory_on_local

echo "cd /target_directory_on_remote" > /tmp/commands.txt

echo "put *.*" >> /tmp/commands.txt

echo "quit" >> /tmp/commands

expect -c "spawn sftp -o "BatchMode=no" -b "/tmp/commands.txt" "ftpuser@ftphost.revelationtech.com" expect -nocase \"*password:\" { send \"ftppassword\r\"; interact }"

* This script may not work from the crontab.
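
If installing an extra package is an option, sshpass is a simpler alternative to expect for password authentication. A hedged sketch, assuming sshpass is available (it ships in EPEL on RHEL-family systems) and using the same batch file as above:

sudo yum install -y sshpass

sshpass -p 'ftppassword' sftp -oBatchMode=no -b /tmp/commands.txt ftpuser@ftphost.revelationtech.com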

Private Key Authentication

cd /source_directory_on_local

echo "cd /target_directory_on_remote" > /tmp/commands.txt

echo "put *.*" >> /tmp/commands.txt

echo "quit" >> /tmp/commands

sftp -oIdentityFile=/home/oracle/.ssh/id_rsa -b /tmp/commands.txt ftpuser@ftphost.revelationtech.com
]]>
<![CDATA[ Getting ORA-12504 when connecting with sqlplus ]]> https://chronicler.tech/getting-ora-12504-when-connecting-with-sqlplus/ 6265c31fbf7df61b39a404f2 Sun, 24 Apr 2022 17:45:12 -0400 I happened to install the Oracle Instant Client (instructions here) to quickly connect to a database using sqlplus.

Problem

After installation, I simply tried to connect to my database as follows:

./sqlplus scott@dbhost:1521/dsoa.revelationtech.com

And got the ORA-12504 error shown here:

SQL*Plus: Release 18.0.0.0.0 - Production on Sun Apr 24 17:37:43 2022
Version 18.5.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

ERROR:
ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA


Enter user-name:

Solution

Include the slash and double quotes in the sqlplus command as shown:

./sqlplus scott@\"dbhost:1521/dsoa.revelationtech.com\"

Voila!

SQL*Plus: Release 18.0.0.0.0 - Production on Sun Apr 24 17:39:32 2022
Version 18.5.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Enter password:
Last Successful login time: Sun Apr 24 2022 17:37:03 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
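
If escaping the quotes feels awkward in your shell, passing a full connect descriptor also avoids the error. A sketch using the same host and service name as above:

./sqlplus scott@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=dsoa.revelationtech.com)))'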

References

]]>
<![CDATA[ Conda: Environment configuration ]]> https://chronicler.tech/conda/ 6256ad4dbf7df61b39a402a1 Tue, 19 Apr 2022 08:35:00 -0400 My imaginary opponent shrugs his shoulders in bewilderment on this topic. Who needs to manage packages and environments in the age of ubiquitous containerization, clouds, and 5G? You should if you are a thousand miles south of the Beltway with motley mobile coverage. Internet in the rentals is good enough for me but scarcely covers the whole family's needs. Not to mention that even "cold" cloud infrastructure costs you money, and when it does not, compute instance is powerful just enough to run a small Flask/HTML5 application.

That intro supposedly leads you to the point: you own (with a probability of .95, according to Google Analytics) a powerful computing device that you can use as a standalone system with minimal connectivity requirements for developing, debugging, and testing applications. And it would be a good idea - to use virtual environments to limit cross-project dependencies and keep your OS package list under control.

I'll fast-forward through the obvious praises for virtual development environments, skipping the part where I moved from Python's pip, wheels, and virtualenv to Conda. The primary reason is that Conda helps me manage quite different environments and requirements, most of the time not related to Python at all. For example, I have a WSO2 API Manager installed into a Conda-controlled environment. However, I missed being able to configure session variables within a virtual environment and restore them when I deactivate it or switch to another project. I played with the idea of having separate shell scripts for project activation/deactivation, but then I found a more convenient solution. Yes, you still have to create scripts to set up and clean up environment variables, but now they are part of the environment, and you won't miss them when you back it up or commit it to the source code repository.

Let's go through a practical example. For my WSO2 environment, I want to:

  • Set up a JAVA_HOME variable to run the API Manager server
  • Update PATH variables to search API-M and API CLI commands
  • Change my current folder to the environment location
  • Restore my session when I deactivate the environment.  

I have Miniconda3 already installed and configured in my WSL2 Ubuntu, so just a few more steps:

  1. Create a new Conda environment
## Create a new environment with the name wso2am
(base) mmikhail@my-laptop:~$ conda create -n wso2am
(base) mmikhail@my-laptop:~$ conda info --envs

# conda environments:
#
base          *  /home/mmikhail/miniconda3
tftests          /home/mmikhail/miniconda3/envs/tftests
wso2am           /home/mmikhail/miniconda3/envs/wso2am
  2. Activate the new environment
(base) mmikhail@my-laptop:~$ conda activate wso2am
(wso2am) mmikhail@my-laptop:~$ cd ~/miniconda3/envs/wso2am/
(wso2am) mmikhail@my-laptop:~/miniconda3/envs/wso2am$
  3. Create activation and deactivation scripts
$ mkdir -p ./etc/conda/activate.d
$ mkdir -p ./etc/conda/deactivate.d
$ touch ./etc/conda/activate.d/env_vars.sh
$ touch ./etc/conda/deactivate.d/env_vars.sh

It's time to complete the shell scripts with commands.

Activation steps go in ./etc/conda/activate.d/env_vars.sh:

#!/bin/sh
# Set JAVA_HOME to run WSO2 API-M
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
# SAVE the Original PATH variable
export SAVE_APIM_PATH=$PATH
# Current WSO2 API-M installation 
export WSO2AM=wso2am-4.1.0
# Adjust PATH variables
export PATH=$CONDA_PREFIX/$WSO2AM/bin:\
$CONDA_PREFIX/apictl:$PATH
# Go to the environment home
cd $CONDA_PREFIX

Deactivation commands go in ./etc/conda/deactivate.d/env_vars.sh:

#!/bin/sh
# Clean up variables 
unset JAVA_HOME
unset WSO2AM
#Restore the original PATH values
export PATH=$SAVE_APIM_PATH
# Go to the my home folder
cd $HOME

This setup does exactly what I wanted, with some extra perks:

  • I use standard conda (or mamba if you want to) commands with no additional wrappers
  • Environment configuration files are part of the environment and could be transferred or containerized if I need to.
  • The deactivation script gives you an opportunity to restore the session environment to its original state.
]]>
<![CDATA[ JQ is a new GREP ]]> https://chronicler.tech/jq-is-a-new-grep/ 624e19bebf7df61b39a4008b Tue, 12 Apr 2022 08:30:00 -0400 If you are a cloud system administrator, security analyst, or data scientist, you already use this marvelous command-line tool. If you don't, it's time to learn something new and improve your scripting capabilities.  

If you think that JQ stands for JSON Query, you guessed right. It is a command-line JSON query tool that extracts and transforms data from JSON sources. We live in the Big Data era now, when more and more systems and services produce and consume JSON-formatted information. You can name any shred of information that comes to your mind: messages, metadata, log entries, anything that other systems may consume.

Unfortunately, I can't consume JSON directly; I prefer formatted tables and well-formatted emails. And from time to time, you need just one particular value from a three-page output. So, without any further ado, let's go through a few examples.

Most likely, you don't have the jq utility installed on your VM, so install it or ask your system administrator to install it for you:

$ sudo apt-get install jq -y 
Install jq utility on Ubuntu

Since I use my Always Free OCI instance, the most natural JSON source would be the compute instance metadata, and I want to keep it small, so let's start with VNICs.

Query network interface metadata
  1. Make your JSON output much more readable.  
curl -s -H "Authorization: Bearer Oracle" \
  http://169.254.169.254/opc/v2/vnics | jq
Query all elements
JQ: Formatted output. 

The query result shows a single-item list, and it matches the input, because the default query is '.' (period), which in human words means "select the current object."

2. Select the first object in the list. Remember, we start the index with 0.

curl -s -H "Authorization: Bearer Oracle" \
  http://169.254.169.254/opc/v2/vnics | jq '.[0]'
Query the first element in the list
The first item on the list

3. Let's query some key values, for example, the CIDR block for this instance. We do it the same way as the first object.

curl -s -H "Authorization: Bearer Oracle" \
http://169.254.169.254/opc/v2/vnics | \
jq '.[0].subnetCidrBlock'
Query a single key.

4. You can request multiple keys and use them to produce new objects (see the object example further below):

curl -s -H "Authorization: Bearer Oracle" \
 http://169.254.169.254/opc/v2/vnics | \
 jq '.[0].privateIp, .[0].virtualRouterIp, .[0].subnetCidrBlock'
Query multiple keys. 
Query multiple keys response
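
Here is the object construction mentioned above: a hedged example that reshapes the same three keys into a small JSON document (same metadata endpoint; the output key names are chosen for illustration):

curl -s -H "Authorization: Bearer Oracle" \
 http://169.254.169.254/opc/v2/vnics | \
 jq '{ip: .[0].privateIp, gateway: .[0].virtualRouterIp, cidr: .[0].subnetCidrBlock}'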

5. Let's get the same result in a more conventional form, with the join function.

curl -s -H "Authorization: Bearer Oracle" \ 
http://169.254.169.254/opc/v2/vnics | \
jq '[.[0].privateIp, .[0].virtualRouterIp, .[0].subnetCidrBlock] |join(", ")'
Join query results into a string
Joined string result

Now you get the idea of how powerful this tool is. To unlock its full potential, check some additional resources:

]]>
<![CDATA[ Install WSO2 API Manager 4.0.0 for Linux ]]> https://chronicler.tech/install-wso2-api-manager-4-0-0-for-linux/ 62486389bf7df61b39a3ffc2 Sat, 02 Apr 2022 11:55:54 -0400 This post describes how to install, configure, and start WSO2 API Manager 4.0.0 on Linux.

Download WSO2 API Manager

  1. Navigate to https://wso2.com/api-manager/
  2. Click on TRY IT NOW
  3. Enter your email address
  4. Click on the checkbox to accept the license
  5. Download the Zip Archive (you will get a file called wso2am-4.0.0.zip)

Download JDK 11

  1. Navigate to https://www.oracle.com/java/technologies/downloads/
  2. Scroll down and click on Java 11
  3. Click on Linux
  4. Click on jdk-11.0.14_linux-x64.tar.gz

Install Software

1. You should have these files on your Linux box:

jdk-11.0.14_linux-x64_bin.tar.gz

wso2am-4.0.0.zip

2. Define your installation folder:

export WSO2_BASE=/home/wso2

3. Extract software:

cd $WSO2_BASE
gtar -xzvf jdk-11.0.14_linux-x64_bin.tar.gz
unzip wso2am-4.0.0.zip

4. Set environment variables:

export CARBON_HOME=$WSO2_BASE/wso2am-4.0.0
export JAVA_HOME=$WSO2_BASE/jdk-11.0.14
export PATH=$JAVA_HOME/bin:$PATH

Configure Hostname

You may need to do this step to be able to access the consoles outside of your box, since the default installation binds specifically to localhost.

1. Edit the following files:

$CARBON_HOME/repository/conf/api-manager.xml
$CARBON_HOME/repository/conf/event-broker.xml
$CARBON_HOME/repository/conf/tomcat/catalina-server.xml
$CARBON_HOME/repository/conf/carbon.xml
$CARBON_HOME/repository/conf/event-processor.xml

2. Change localhost to the actual hostname of your server
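
A hedged way to make that edit in one pass, assuming your hostname is apim.example.com and that every localhost occurrence in these files should change; the .bak copies let you diff and roll back before starting the server:

cd $CARBON_HOME/repository/conf
for f in api-manager.xml event-broker.xml tomcat/catalina-server.xml carbon.xml event-processor.xml; do
  sed -i.bak 's/localhost/apim.example.com/g' "$f"
done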

Commands

1. Startup:

cd $CARBON_HOME/bin
./api-manager.sh --start

2. Stop:

cd $CARBON_HOME/bin
./api-manager.sh --stop

3. Restart:

cd $CARBON_HOME/bin
./api-manager.sh --restart

4. Status:

cd $CARBON_HOME/bin
./api-manager.sh --status

URLs / Consoles

The default username and password for all consoles is admin / admin.

Publisher - https://hostname:9443/publisher

Developer Portal - https://hostname:9443/devportal

Admin - https://hostname:9443/admin

Carbon - https://hostname:9443/carbon

Logs

$CARBON_HOME/repository/logs/wso2carbon.log

References

]]>
<![CDATA[ Install WSO2 API Manager 4.0.0 for Windows ]]> https://chronicler.tech/install-wso2-api-manager-4-0-0-for-windows/ 624867e9bf7df61b39a40057 Sat, 02 Apr 2022 11:55:51 -0400 This post describes how to install, configure, and start WSO2 API Manager 4.0.0 on Microsoft Windows.

Download WSO2 API Manager

  1. Navigate to https://wso2.com/api-manager/
  2. Click on TRY IT NOW
  3. Enter your email address
  4. Click on the checkbox to accept the license
  5. Download the Zip Archive (you will get a file called wso2am-4.0.0.zip)

Download JDK 11

  1. Navigate to https://www.oracle.com/java/technologies/downloads/
  2. Scroll down and click on Java 11
  3. Click on Windows
  4. Click on jdk-11.0.14_windows-x64_bin.zip

Install Software

1. You should have these files on your Windows box:

jdk-11.0.14_windows-x64_bin.zip

wso2am-4.0.0.zip

2. Open a command prompt

3. Define your installation folder:

set WSO2_BASE=D:\wso2am

4. Extract software:

cd %WSO2_BASE%
unzip jdk-11.0.14_windows-x64_bin.zip
unzip wso2am-4.0.0.zip

5. Set environment variables:

set CARBON_HOME=%WSO2_BASE%\wso2am-4.0.0
set JAVA_HOME=%WSO2_BASE%\jdk-11.0.14
set PATH=%JAVA_HOME%\bin;%PATH%

Configure Hostname

You may need to do this step to be able to access the consoles outside of your box, since the default installation binds specifically to localhost.

1. Edit the following files:

%CARBON_HOME%\repository\conf\api-manager.xml
%CARBON_HOME%\repository\conf\event-broker.xml
%CARBON_HOME%\repository\conf\tomcat\catalina-server.xml
%CARBON_HOME%\repository\conf\carbon.xml
%CARBON_HOME%\repository\conf\event-processor.xml

2. Change localhost to the actual hostname of your server

Commands

1. Startup:

cd %CARBON_HOME%\bin
api-manager.bat --start

2. Stop:

cd %CARBON_HOME%\bin
api-manager.bat --stop

3. Restart:

cd %CARBON_HOME%\bin
api-manager.bat --restart

4. Status:

cd %CARBON_HOME%\bin
api-manager.bat --status

URLs / Consoles

The default username and password for all consoles is admin / admin.

Publisher - https://hostname:9443/publisher

Developer Portal - https://hostname:9443/devportal

Admin - https://hostname:9443/admin

Carbon - https://hostname:9443/carbon

Logs

%CARBON_HOME%\repository\logs\wso2carbon.log

References

]]>
<![CDATA[ Game of Words ]]> https://chronicler.tech/game-of-words/ 6235bd28bf7df61b39a3fcfc Tue, 22 Mar 2022 08:30:00 -0400 I fell for Wordle the day I read about it, and now my morning starts with a fresh cup of coffee and a fresh puzzle.

But sometimes I struggle to see a word even if it is evident to a native speaker, and a few times I haven't seen the word when I had four letters out of five. So I got the idea to build something that matches words against a mask and prints the matching combinations, and after a few weeks of fun, I present to you my pet project - "Match My Word."

It is a simple Flask RESTful service with a single page HTML + jQuery application. It was a fun ride into the uncharted territory of frontend development, Python deployments, and Docker containers. I end up with

]]>
<![CDATA[ Quickest way to get PL/SQL source code from the Oracle Database ]]> https://chronicler.tech/quickest-plsql-source/ 623897f3bf7df61b39a3fe47 Mon, 21 Mar 2022 11:21:37 -0400 I run into network or security challenges where I can't use tools to connect to my Oracle Database, so I end up having to rely on SQL*Plus to get the source of my PL/SQL code, be it a PACKAGE, PROCEDURE, FUNCTION, or TYPE.

Here you go...

set linesize 200
set pagesize 9000
col text format a200

SELECT text
FROM   all_source
WHERE  owner = 'SCOTT'
AND    name = 'CALCULATE_BONUS'
AND    type = 'PROCEDURE'
ORDER BY line;
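
If you also want the full DDL, including the CREATE OR REPLACE header, DBMS_METADATA works from the same bare SQL*Plus session. A hedged sketch using the same object; the connect string is a placeholder:

sqlplus -s scott@dbhost:1521/dbname <<'EOF'
SET LONG 200000 PAGESIZE 0 LINESIZE 200
SELECT DBMS_METADATA.GET_DDL('PROCEDURE', 'CALCULATE_BONUS', 'SCOTT') FROM dual;
EXIT
EOF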
]]>
<![CDATA[ Quickest way to set up the Oracle Instant Client for sqlplus access to the Oracle Database ]]> https://chronicler.tech/quickest-way-to-set-up-the-oracle-instant-client-for-sqlplus-access-to-the-oracle-database/ 623891f0bf7df61b39a3fe19 Mon, 21 Mar 2022 11:20:50 -0400 Need quick and immediate access to SQL*Plus (i.e., sqlplus) from your Linux server?

1. Navigate to: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html

2. Download these files:

  • instantclient-basic-linux.x64-21.5.0.0.0dbru.zip
  • instantclient-sqlplus-linux.x64-21.5.0.0.0dbru.zip

3. Unzip the software:

unzip instantclient-basic-linux.x64-21.5.0.0.0dbru.zip

unzip instantclient-sqlplus-linux.x64-21.5.0.0.0dbru.zip

4.   Connect:

cd instantclient_21_5

./sqlplus username@dbhost:1521/dbname
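
If sqlplus fails to start with a shared library error, two common fixes are installing libaio and pointing the loader at the Instant Client directory (run from the instantclient_21_5 folder). A hedged sketch; the package name and package manager vary by distribution:

sudo yum install -y libaio

export LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH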

]]>
<![CDATA[ No Technology Today ]]> https://chronicler.tech/no-techno/ 62237ca3c6c1911e1cd4caec Sat, 05 Mar 2022 10:28:58 -0500 For the second day in a row, I can't get my thoughts together to write another technological post. The horror in Ukraine and the grim prospect of Russian citizens condemned by their insane rulers to poverty and humiliation is painful and furious. But worst of all, the increasingly clear prospect of destroying the world in which my children live.
All that remains is to pray for the cleansing of Ukraine from its invaders and the liberation of Russia from its criminal government. God grant that everyone will refrain from the last argument in this bloody theater of the absurd.


Который день я не могу собратся с мыслями и написать очередной технологичный пост. Ужас, творящийся в Украине, и мрачные перспективы российских граждан, приговорнных своими безумнысм правителями к нищите и унижениям, вызывают боль и гнев. Но хуже всего, все более явная перспектива уничтожения мира, в котором живут мои дети.
Остается только молится за очищение Украины от захватчиков и освобождение России от преступного равительства. Дай бог всем удержатся от последнего аргумента в этом кровавом театре абсурда.

]]>
<![CDATA[ OCI: Firewall on Ubuntu 20.04 ]]> https://chronicler.tech/oci-firewall-configuration-tip/ 6215645fb1de7575bd574e4f Tue, 01 Mar 2022 08:35:00 -0500 If you have created a load balancer and can't reach your backend servers, check a target firewall configuration, especially if you run Ubuntu.    

If you run some workload on non-standard ports and want to expose them for the other components, there are regular steps you do:

  • Add ingress rule to the Security List
  • Add routing rules if your clients are in the different network/subnet
  • Update firewall rules on the backend nodes to allow incoming traffic.

Relatively mundane configuration steps, yet the Ubuntu instance managed to surprise me. The first surprise was IPTables as the local firewall. Fine, I can live with IPTables:

# Append new rule to the rules table
sudo iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
# Make changes permanent
sudo netfilter-persistent save
Append firewall rule to the INPUT chain

My application is up and accepts requests, but the load balancer still reports that the backend pool is not available.  Let's take a closer look at the INPUT chain.

#List all rules 
 iptables --list INPUT
Chain INPUT (policy ACCEPT)
target     prot opt source    destination
ACCEPT     all  --  anywhere  anywhere   state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere  anywhere
ACCEPT     all  --  anywhere  anywhere
ACCEPT     udp  --  anywhere  anywhere   udp spt:ntp
ACCEPT     tcp  --  anywhere  anywhere   state NEW tcp dpt:ssh
ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:http
ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:https
REJECT     all  --  anywhere  anywhere   reject-with icmp-host-prohibited
ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:8080
List rules in the INPUT chain

As requested, the new rule is the last rule in the chain, which is actually not good, because the second-to-last rule rejects all requests, which means our rule never gets a chance to fire. So, we should put our rule before the final REJECT.

# Delete Existing Rule
sudo iptables -D INPUT -p tcp --dport 8080 -j ACCEPT
# Insert Rule to the 6th position
sudo iptables -I INPUT 6 -p tcp --dport 8080 -j ACCEPT

# Check new rule position
iptables -L INPUT  --line-numbers
Chain INPUT (policy ACCEPT)
num  target  prot opt source    destination
1    ACCEPT  all  --  anywhere  anywhere     state RELATED,ESTABLISHED
2    ACCEPT  icmp --  anywhere  anywhere
3    ACCEPT  all  --  anywhere  anywhere
4    ACCEPT  udp  --  anywhere  anywhere     udp spt:ntp
5    ACCEPT  tcp  --  anywhere  anywhere     state NEW tcp dpt:ssh
6    ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:8080
7    ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:http
8    ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:https
9    REJECT  all  --  anywhere  anywhere     reject-with icmp-host-prohibited

# Make changes permanent
sudo netfilter-persistent save
Relocate IPTables rule above 'reject all' clause

This time, the load balancer could reach the backend pool and send traffic to the backend application.

]]>
<![CDATA[ Pandoc: My Missing Tool ]]> https://chronicler.tech/pandoc-the-missing-daac-link/ 62041544b1de7575bd57480f Tue, 22 Feb 2022 08:30:00 -0500 Now, when everything in IT is a code, it is only natural to treat documentation as a code and use the same code management and delivery techniques as the source code.  There are plenty of "languages" to produce documentation and tools to deliver it to the end-users. I recently found a tool that closes the "deployment" gap: static or printable content generation.    

I use text-formatted documents whenever I have an opportunity; for instance, you are reading this document, powered by the Ghost engine. I have used standalone Wiki-based sites for knowledge management. I have done my best trying to use Markdown-based text in Microsoft SharePoint and Oracle WebCenter Portal. Long story short, all those attempts were painful, limited, and mostly failed back when smartphones were still expensive geek toys. Time passed, and my first Data Science course, with its HelloWorld R-flavored Markdown report, revitalized my interest in this area. The idea of having a text that you can read, execute, and publish was so exciting.

The way was long and winding, but I found my almost ideal tool - Git* flavored markdowns. All advanced systems offer Wiki/Markdown support on all tiers and page publishing capabilities, so you can keep your documentation close to your source code and use it as a reporting platform. I said "almost" because you still need a hard copy of your documentation with all the bells and whistles from time to time. Unfortunately, one thing never changes - horrible printing support by all browsers, and this rule has no exceptions. It does not matter how good-looking your page is; it will be more or less crippled as soon as you hit Ctrl-P.

Finally, I found Pandoc, a great tool to generate good-looking, clean HTML5 documents, print-ready PDF documents, and even presentations.

It supports a multitude of input and output formats, and you can customize the output with CSS-based themes. The cherry on top: you can run pandoc from the command line, so it can be integrated into your CI/CD pipelines or Ansible playbooks.

  # Generate PDF document with pandoc utility 
  pandoc --from=gfm+smart \
  --output my-out-document.pdf print-my-document.md 
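
For instance, the same source can also be turned into a standalone HTML5 page styled by a CSS theme; a quick sketch where the stylesheet and file names are placeholders:

  # Generate a standalone HTML5 document with a custom stylesheet
  pandoc --from=gfm --standalone --to=html5 \
  --css=my-theme.css \
  --output my-out-document.html print-my-document.md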

You can find more examples and conversion results in the site examples repository:

chronicler-examples/pandoc at master · mikhailidim/chronicler-examples
Sample code for the https://chronicler.tech posts.
]]>
<![CDATA[ Standalone Oracle HTTP Server: Deployment Options ]]> https://chronicler.tech/deployments-oracle-http-server/ 62041eb0b1de7575bd57487a Tue, 15 Feb 2022 08:30:00 -0500 In one of my previous posts, I mentioned multiple Oracle HTTP server deployment options. Let me do a small "show & tell" to explain how I pick the configuration.

Before we discuss deployment options, let's refresh what Oracle HTTP Server 12c is. Oracle offers a precompiled version of the Apache HTTPD server with a few proprietary modules, such as the Oracle Security Layer (mod_ossl), and prepackaged products such as Oracle WebGate 12c. To line up Oracle HTTP Server with the rest of the Oracle Fusion Middleware infrastructure, the standalone Oracle HTTP Server comes with a limited edition of Oracle WebLogic Server and the WLST libraries. Each Oracle HTTP Server installation could be used for one or more WebLogic domains. While these limited domains can't run managed or admin server instances, they facilitate one or more HTTP Server instances controlled by the domain Node Manager. This complicated structure gives you multiple options to fit your web tier design. The diagram below illustrates the relations between Oracle HTTP Server installations, WebLogic domains, and OHS instances.

One single Oracle HTTP server Installation could be used to configure one or more WebLogic domains. Each domain could manage one or more HTTP server instances.
Oracle HTTP Server Components relations

It's about time to talk deployments and configurations, and virtual hosts would be first in line.

One OHS instance with multiple virtual hosts. This is the most natural approach for classic Apache HTTPD 2.4 or NGINX system architects. But Oracle HTTP Server has a severe limitation: it does not support secured name-based virtual hosts. It means that if you have secured virtual hosts, you should consider IP- or port-based virtual hosts instead. That's not an issue for big enterprises, where users never interact with web servers directly, only with load balancers. With that in mind, it's a lean installation for applications that share the same security configuration and access settings - for example, using Oracle Access Manager (OAM) for Single Sign-On.

One WebLogic domain, multiple OHS instances. When a virtual host can't meet your application requirements, a separate HTTP Server instance is an excellent option to isolate such resources. The most common case in Oracle Fusion Middleware: OAM and WebGate protect only a subset of your applications. Each HTTP instance manages one or more virtual hosts, but the enforced security configuration does not impact the public instance. You still have only one domain and Node Manager, with a single set of binaries to maintain. I would recommend this configuration all the time unless you have some specific WebLogic domain requirements.

Multiple WebLogic domains, one or more OHS instances. There is nothing wrong with running numerous WebLogic domains on the same machine. However, it takes a toll: you need extra space on disk devices, it produces an additional set of logs in different locations, and it implies an additional Node Manager to control each domain. Off the top of my head, there are only a few cases when you should create a separate domain:

  • You need to create an OHS instance for the different OS users or groups on the same VM
  • You have application-specific requirements for Oracle FMW installation patches that would impact the other applications. This situation requires not only a separate WebLogic domain but also a separate product installation.
  • You have mixed standalone and integrated installation requirements: Fully functional WebLogic domain with OHS instances and standalone OHS installation on the same VM.  

With all that being said, we come to a twist in the plot: if you have no explicit limitations, consider the Apache HTTPD server instead. There are multiple arguments in its favor, and only one requirement that could prevent you from making that drastic change.

Pros and cons of switching to Apache HTTPD:

  • Pro: Open source software. No license costs, no limitations. Wide community adoption; the #2 HTTP server in the world.
    Con: OSS support could be tricky. Enterprises prefer to pay vendors to have a predictable response and assistance.
  • Pro: The Apache HTTPD offers all the features from all developers. Wide range of modules and configuration options.
    Con: Proprietary Oracle modules come only with Oracle HTTP Server. A variety of 3rd-party solutions may raise an eyebrow in security-centric environments.
  • Pro: No NZOS-related limitations. HTTPD supports all the latest and greatest security protocols and ciphers, as far as your OpenSSL installation allows.
    Con: No support for the Oracle security implementation, Oracle wallets, and other security-related features.
  • Pro: If the standard ReverseProxy configuration gives you a hard time, Oracle offers mod_wl_24 for the Apache HTTPD server to enable native communications with a WebLogic backend.
    Con: The WebLogic Proxy plug-in works way better as a part of Oracle HTTP Server.
  • Con: Your application requires OAM or Oracle Cloud Identity Service Single Sign-On configuration.

]]>
<![CDATA[ Mysterious TLS in Oracle HTTP ]]> https://chronicler.tech/mysterious-tls-in-oracle-http/ 62027af6b1de7575bd574733 Tue, 08 Feb 2022 10:20:15 -0500 A little "forget-me-not" from a recent Oracle HTTP Server troubleshooting session, though it could happen to Apache HTTPD as well. To set the stage: a working installation of Oracle HTTP Server stopped presenting certificates on the secured port.

The mystery was that the OHS instance did not complain. There were no issues with startup and no log entries related to TLS/SSL. It opened the listen port and accepted plain HTTP requests, but ignored all the secured configuration for the virtual host.

Here is a tip that would save you (and my future self) about two hours of life and let you jump straight to the root cause.

The listen port didn't match the virtual host definition.

So, if you get this behavior from your Apache/OHS installation, check the following:


# Your secured port
Listen 6888

# Your secured site
# does not match actual port 

<VirtualHost *:6880>
 <IfModule mod_osso>
  ## All the security definitions. 
 </IfModule> 
</VirtualHost>

I missed this fact because the port numbers were quite alike, and OHS gives no warning if a virtual host definition does not match any listen port. The questions of how this ever happened and how to improve the architecture are a different story.
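
A quick way to catch this class of mistakes is to list every Listen directive next to every VirtualHost port in the instance configuration; a sketch, where the path is only an example and should be adjusted to your own OHS or Apache layout:

# Show all listen ports and virtual host definitions side by side
cd $DOMAIN_HOME/config/fmwconfig/components/OHS/ohs1
grep -nE '^[[:space:]]*(Listen|<VirtualHost)' *.conf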

]]>
<![CDATA[ Flashback: The Lesson I Learned ]]> https://chronicler.tech/flashback-the-lesson-1/ 61ec3681b1de7575bd574523 Tue, 25 Jan 2022 08:35:00 -0500 This piece is not about features or products. It's about lessons that we learn or not. The one I learned some twenty-three years ago was: "Don't ask for troubles and do one thing at a time."

At the end of the last century (I love how it sounds), I worked as a lead Oracle DBA for the big regional bank back in my home country. With the proper training from Oracle University and extensive hands-on experience, I knew everything until the server migration weekend and the revelation Monday.

To set the stage, it was a time when businesses realized that you could move from proprietary and costly platforms to Intel machines and save big-time on support, licenses, and spare parts. So we tested the newest Oracle 8i release on an Intel box with SCO Unix and reported to the CIO that we were happy with the performance. Banks didn't run 24/7; still, nobody wanted to go through a long series of changes. There were platform compatibility issues, so we decided that if we started Saturday night, we would finish everything by Sunday afternoon and be ready for the following Monday. By everything, I mean:

  • Move database files from AIX partitions to the SCO Unix files.
  • Upgrade database from 7.3.4 to 8i
  • Upgrade banking core application to support new database version and address a few PL/SQL compatibility issues.
  • Upgrade client applications and database clients to match database version

The migration went smoothly, and we were happy with the smoke test results. I can't say the same about our ~200 clerks and managers, because the system hung dead and nobody could do anything. Operations went into manual mode while we tried to figure out what was happening, with an unfamiliar operating system and not many hints from the database or system logs. We fell back to the original servers; the bank worked late-night hours to catch up on the missing operations and send out the regulatory reports.

Tuesday morning, I got a call from the CIO's office. He summoned my manager and me for the rundown. For the first time in my life, I expected to be fired on the spot. But, to my surprise, the meeting was productive and professional, and that was the place and time where I learned my main lesson:

Set big goals but make small steps.

Since then, I don't make too many changes unless I have no other choice.

]]>
<![CDATA[ Docker: Login Errors ]]> https://chronicler.tech/docker-logn-errors/ 61c87678b1de7575bd5743ff Tue, 28 Dec 2021 08:30:00 -0500 Recently, I spent some time trying to find out why I could not push my just-from-the-oven image into the GitLab project registry. The message "illegal base64 data at input" is a bit confusing, but the solution is simple.

I'm messing around with a small pet project. Like any new endeavor, it hits a lot of buzzwords: Kubernetes, Docker, containers, you name it. The project itself is in the embryo phase, and I'm jigsawing together a combination of languages, tools, and best practices. Anyway, I tried to log in to the GitLab project's container registry and received a rather unusual error message.

[mmikhail@localhost build-site]$ podman login registry.gitlab.com/my-docker-prj/container_registry --username user@example.com --password $MY_TOKEN
Error: get credentials for repository: 1 error occurred:
        * illegal base64 data at input byte 11
Get Credentials Error Message

The error suggests that I have a string encoding issue of some sort, but I had no clue where, because none of the strings in my command line are base64-encoded. After checking a few options from the unavailable GitLab (well, Google and Facebook have set some bad examples), reissuing access tokens, and a score of failed logins, I found the root cause: a broken auth.json. This file is a leftover from a previous configuration and contains authentication details that my container tools can't decipher and use for authentication. If you see this message on your system, check the file ~/.config/containers/auth.json. I dropped it completely, but you may want to review its content and remove only the entry related to the particular registry.
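
If you prefer a more surgical cleanup than deleting the whole file, the stale entry can be inspected and removed with jq; a sketch, assuming jq is installed and using my registry name as an example:

# Back up the file first, then list the registries with stored credentials
cp ~/.config/containers/auth.json ~/.config/containers/auth.json.bak
jq '.auths | keys' ~/.config/containers/auth.json.bak

# Write the file back without the stale GitLab entry
jq 'del(.auths["registry.gitlab.com"])' ~/.config/containers/auth.json.bak > ~/.config/containers/auth.json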

]]>
<![CDATA[ Bash scripts to diagnose network issues ]]> https://chronicler.tech/bash-script-to-test-for-network-issues/ 61c1f008438d7f752b1a4d16 Tue, 21 Dec 2021 10:52:03 -0500 These are 2 simple scripts I wrote that run various types of tests to help diagnose network issues between a source and target. They can be safely run as a non-root user as long as the commands exist.

Script 1 - Various network tests

This script runs 3 separate tests and generates 3 logs.

Logs Generated

1. Check port availability (test_network_checkport.log)

This simply uses the nc command to see if the port is available or not. No application data is sent or received, but if you occasionally notice high response times, then something is fishy.

# TIMESTAMP | SOURCE HOSTNAME | TARGET IP | TARGET PORT | RESPONSE TIME (SECS)
2021-12-17 21:14:14.731991069|soadev|192.168.1.31|5901|0.01
2021-12-17 21:14:15.284783737|soadev|192.168.1.31|5901|0.01
2021-12-17 21:14:15.837768193|soadev|192.168.1.31|5901|0.01

2. Check error and dropped packets (test_network_packets.log)

This reads the interface counters (from /proc/net/dev) to count the number of error and dropped packets on transmit and receive.

# TIMESTAMP | SOURCE HOSTNAME | TRANSMISSION PACKETS ERRORS | TRANSMISSION PACKETS DROPS | RECEIVE PACKET ERRORS | RECEIVE PACKET DROPS
2021-12-17 21:14:13.626336229|soadev|0|0|0|0
2021-12-17 21:14:14.179361714|soadev|0|0|0|0
2021-12-17 21:14:14.731991069|soadev|0|0|0|0

3. Check bad and retransmitted packets (test_network_segments.log)

The netstat command is used here to observe the bad and retransmitted segments. Don't look at the actual values returned, but rather at whether they keep growing.

# TIMESTAMP | SOURCE HOSTNAME | BAD SEGMENTS | RETRANSMITTED SEGMENTS | RETRANSMITTED SEGMENTS %
2021-12-17 21:14:13.626336229|soadev|297|30846213|6.25439
2021-12-17 21:14:14.179361714|soadev|297|30846213|6.25439
2021-12-17 21:14:14.731991069|soadev|297|30846213|6.25439

Running the Script

ifconfig -a

./test_network.sh 192.168.1.31 443 8 1 eth0

The ifconfig command is used to get the network interface name to be used in the script (as the last parameter).

In this example, the target IP and target port (443) are passed, along with the number of loops (8), a delay between loops (1 second), and the interface (eth0).
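
Once the logs accumulate, a one-liner helps flag the suspicious samples; for example, to list port checks slower than 0.1 seconds (the threshold is arbitrary, and the field position follows the log format shown above):

# Response time is the 5th pipe-separated field in test_network_checkport.log
awk -F'|' '$5+0 > 0.1' test_network_checkport.log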

Source Code

Source code for the script:

#!/bin/bash

#--------------------------------------------------------------#
# FILENAME:      test_network.sh                               #
# CREATION DATE: 2021-12-06                                    #
# DESCRIPTION:   Various port, packet, segment checks          #
# AUTHOR:        Ahmed Aboulnaga                               #
# LOG:           test_network_checkport.log                    #
#                test_network_packets.log                      #
#                test_network_segments.log                     #
#--------------------------------------------------------------#

#--------------------------------------------------------------#
# Help                                                         |
#--------------------------------------------------------------#
if [ $# -ne 5 ]; then
  echo ""
  echo "Usage:"
  echo "  ./test_network.sh [target_ip] [target_port] [number_of_loops] [delay_in_secs] [interface]"
  echo ""
  echo "Logs:"
  echo ""
  echo "  test_network_checkport.log"
  echo "    # Port availability, no packets transmitted"
  echo "    # TIMESTAMP | SOURCE HOSTNAME | TARGET IP | TARGET PORT | RESPONSE TIME (SECS)"
  echo "    2021-12-17 21:14:14.731991069|soadev|192.168.1.31|5901|0.01"
  echo "    2021-12-17 21:14:15.284783737|soadev|192.168.1.31|5901|0.01"
  echo "    2021-12-17 21:14:15.837768193|soadev|192.168.1.31|5901|0.01"
  echo ""
  echo "  test_network_packets.log"
  echo "    # Packet transmission and receive packet errors and drops"
  echo "    # TIMESTAMP | SOURCE  HOSTNAME | TARGET IP | TRANSMISSION PACKETS ERRORS | TRANSMISSION PACKETS DROPS | RECEIVE PACKET ERRORS | RECEIVE ERROR DROPS"
  echo "    2021-12-17 21:14:13.626336229|soadev|0|0|0|0"
  echo "    2021-12-17 21:14:14.179361714|soadev|0|0|0|0"
  echo "    2021-12-17 21:14:14.731991069|soadev|0|0|0|0"
  echo ""
  echo "  test_network_segments.log"
  echo "    # Bad and retransmitted segments"
  echo "    # TIMESTAMP | SOURCE HOSTNAME | TARGET IP | BAD SEGMENTS | RETRANSMITTED SEGMENTS | RETRANSMITTED SEGMENTS %"
  echo "    2021-12-17 21:14:13.626336229|soadev|297|30846213|6.25439"
  echo "    2021-12-17 21:14:14.179361714|soadev|297|30846213|6.25439"
  echo "    2021-12-17 21:14:14.731991069|soadev|297|30846213|6.25439"
  echo ""
  echo "Example:"
  echo "  ./test_network.sh"
  echo "  ./test_network.sh 192.168.1.31 443 8 1 eth0"
  echo "  nohup ./test_network.sh 192.168.1.31 443 345600 0.5 eth0 &"
  echo ""
  exit 0
fi

#--------------------------------------------------------------#
# Parameters                                                   |
#--------------------------------------------------------------#
V_TARGETIP=${1}
V_TARGETPORT=${2}
V_LOOP=${3}
V_DELAY=${4}
V_INTERFACE=${5}

#--------------------------------------------------------------#
# Loop                                                         #
#--------------------------------------------------------------#
i=1
while [ ${i} -le ${V_LOOP} ]; do
  V_TIMESTAMP=`date +'%Y-%m-%d %H:%M:%S.%N'`
  i=`expr ${i} + 1`
  sleep ${V_DELAY}

  #--------------------------------------------------------------#
  # Check port                                                   |
  #--------------------------------------------------------------#
  # Log: number of seconds for positive response
  if command -v nc &> /dev/null; then
    nc -vz ${V_TARGETIP} ${V_TARGETPORT} > test_network.tmp 2>&1
    echo "${V_TIMESTAMP}|`hostname`|${V_TARGETIP}|${V_TARGETPORT}|`cat test_network.tmp | tail -1 | awk '{print $9}'`" >> test_network_checkport.log
    rm -f test_network.tmp
  fi

  #--------------------------------------------------------------#
  # Packet errors                                                #
  #--------------------------------------------------------------#
  # Log: Transmitted Packet Errors | Transmitted Packet Drops | Received Packet Errors | Received Packet Drops
  if command -v ip &> /dev/null; then
    V_TX_PACKETERRORS=`cat /proc/net/dev | grep ${V_INTERFACE} | awk {'print $12'}`
    V_TX_PACKETDROPS=`cat /proc/net/dev | grep ${V_INTERFACE} | awk {'print $13'}`
    V_RX_PACKETERRORS=`cat /proc/net/dev | grep ${V_INTERFACE} | awk {'print $4'}`
    V_RX_PACKETDROPS=`cat /proc/net/dev | grep ${V_INTERFACE} | awk {'print $5'}`
    echo "${V_TIMESTAMP}|`hostname`|${V_TX_PACKETERRORS}|${V_TX_PACKETDROPS}|${V_RX_PACKETERRORS}|${V_RX_PACKETDROPS}" >> test_network_packets.log
  fi

  #--------------------------------------------------------------#
  # Segment retransmissions & bad                                #
  #--------------------------------------------------------------#
  # Log: Segments Bad Count | Segments Retransmitted Count | Segments Retransmitted %
  if command -v netstat &> /dev/null; then
    V_SEGS_RETRANSMIT=`netstat -s | grep retransmited | awk {'print $1'}`
    V_SEGS_BAD=`netstat -s | grep bad | grep segments | awk {'print $1'}`
    V_SEGS_RETRANSMITPER=`gawk 'BEGIN {OFS=" "} $1 ~ /Tcp:/ && $2 !~ /RtoAlgorithm/ {print ($13/$12*100)}' /proc/net/snmp`
    echo "${V_TIMESTAMP}|`hostname`|${V_SEGS_BAD}|${V_SEGS_RETRANSMIT}|${V_SEGS_RETRANSMITPER}" >> test_network_segments.log
  fi

done

exit 1

Script 2 - Traffic loss

This script generates a single log file.

Log Generated

1. Check traffic loss (test_trafficloss.log)

This script uses the mtr command, which combines the functionality of traceroute and ping into a single network diagnostic tool.

Start: Fri Dec 17 21:10:33 2021
HOST: soadev                      Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 140.91.196.17              0.0%    10    0.1   0.1   0.1   0.1   0.0
  2.|-- 4.16.73.206                0.0%    10    0.5   6.7   0.5  57.1  17.7
  3.|-- ae60.edge5.Washington12.L  0.0%    10   20.5   6.6   0.5  27.6   9.6
  4.|-- 142.250.166.242            0.0%    10    2.9   2.9   2.7   3.2   0.0
  5.|-- 108.170.240.97             0.0%    10    1.9   1.7   1.5   2.0   0.0
  6.|-- 108.170.240.112            0.0%    10    0.7   2.5   0.6  15.4   4.5
  7.|-- 142.251.49.17             70.0%    10    1.1   1.3   1.1   1.5   0.0
  8.|-- 142.251.49.29             50.0%    10    1.5   1.7   1.5   1.9   0.0
  9.|-- 216.239.56.73              0.0%    10    8.2  14.5   8.2  66.7  18.3
 10.|-- 216.239.40.130            20.0%    10   15.0  14.9  14.8  15.2   0.0
 11.|-- 209.85.240.16             60.0%    10   32.1  32.5  31.8  33.6   0.0
 12.|-- 216.239.62.212             0.0%    10   32.2  33.2  32.2  41.3   2.8
 13.|-- 108.170.252.129            0.0%    10   31.5  31.5  31.5  31.5   0.0
 14.|-- 209.85.244.59              0.0%    10   32.6  33.3  32.2  41.5   2.8
 15.|-- atl14s07-in-f142.1e100.ne  0.0%    10   31.7  31.7  31.7  31.7   0.0
  • If traffic loss is observed in the last hop, this is generally not an issue with the connection to the target.
  • If traffic loss is observed in the middle, this is possibly due to ICMP rate limiting which is not an issue.
  • If traffic loss is observed from the middle to the end, there is likely loss in traffic.

Running the Script

./test_trafficloss.sh 192.168.1.31 10

The target IP is passed along with the number of times you want to loop.
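
When the log grows, you can quickly pull out only the hops that reported loss; a sketch that assumes the default mtr --report layout shown above, where Loss% is the third column:

# Print report lines with a non-zero Loss% value
awk '$1 ~ /\|--/ && $3+0 > 0 {print}' test_trafficloss.log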

Source Code

Source code for the script:

#!/bin/bash

#--------------------------------------------------------------#
# FILENAME:      test_trafficloss.sh                           #
# CREATION DATE: 2021-12-17                                    #
# DESCRIPTION:   Test traffic loss                             #
# AUTHOR:        Ahmed Aboulnaga                               #
# LOG:           test_trafficloss.log                          #
#--------------------------------------------------------------#

#--------------------------------------------------------------#
# Help                                                         |
#--------------------------------------------------------------#
if [ $# -ne 2 ]; then
  echo ""
  echo "Usage:"
  echo "  ./test_trafficloss.sh [target_ip] [number_of_loops]"
  echo ""
  echo "Description:"
  echo "  If traffic loss is in the last hop, not an issue with the connection to target."
  echo "  If traffic loss is in the middle, possibly due to ICMP rate limiting; not an issue."
  echo "  If traffic loss is in the middle to the end, likely losing some traffic."
  echo ""
  echo "Example:"
  echo "  ./test_trafficloss.sh"
  echo "  ./test_trafficloss.sh 192.168.1.31 10"
  echo "  nohup ./test_trafficloss.sh 192.168.1.31 10 &"
  echo ""
  exit 0
fi

#--------------------------------------------------------------#
# Parameters                                                   |
#--------------------------------------------------------------#
V_TARGETIP=${1}
V_LOOP=${2}

#--------------------------------------------------------------#
# Loop                                                         #
#--------------------------------------------------------------#
i=1
while [ ${i} -le ${V_LOOP} ]; do
  V_TIMESTAMP=`date +'%Y-%m-%d %H:%M:%S.%N'`
  i=`expr ${i} + 1`

  #--------------------------------------------------------------#
  # Traffic loss                                                 |
  #--------------------------------------------------------------#
  if command -v mtr &> /dev/null; then
    mtr --report ${V_TARGETIP} >> test_trafficloss.log
  fi

done

exit 1
]]>
<![CDATA[ Ansible: Dynamic Inventory ]]> https://chronicler.tech/ansible-dynamic-inventory/ 61ba7fb5438d7f752b1a4bda Tue, 21 Dec 2021 08:35:00 -0500 I know that Ansible can use virtually anything as an inventory, including scripts. However, all the inventories at my day job are static and kept in a source repository, so until now I haven't had a chance to create or use a dynamic one.

Like any developer, I checked Stack Exchange before looking into the Ansible documentation, where I found a boiled-down solution. There are a few ground rules for scripting dynamic inventories:

  • Script should accept at least two arguments:
    • --list - return all hosts as a response
    • --host <hostname> - return a single host
  • The script output is valid inventory JSON data

Let's combine all we have learned and create a simple dynamic inventory script with two groups, one static and one dynamic.

#!/bin/sh
set +x
if [ "$1" == "--list" ]; then
 echo -e '{ "static_group": {\n\t"hosts": [ "localhost"]},'
 echo -e ' "dynamic_group": {\n\t"hosts": ['
 # Calculate second group member
 coin=$(( $RANDOM % 2 ))
 if [ $coin -ge 1 ]; then
  echo -e '"localhost"]},'
 else
  echo -e ']},'
 fi
 echo -e '"_meta": {"hostvars": {}}\n}'

elif [ "$1" == "--host" ]; then
  echo '{"_meta": {"hostvars": { }}}'
else
 echo "{ }"
fi
Dynamic repository script randomgroup.sh 

The script accepts the required parameters and returns groups and hostnames only for the --list argument; otherwise, it returns an (almost) empty JSON object. To demonstrate the dynamic nature of the script, I use the special shell variable $RANDOM. For an odd number the script returns the list ["localhost"], and an empty list for even numbers. On the screenshot below, you may see a different set of targets for the same command.

Screenshot demonstrates ansible-playbook --list-hosts execution results. The first try has hosts only in the static group; the second try shows hosts in all groups.
Hosts list with the dynamic inventory
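
To try it yourself, the script can be passed straight to the -i option; a short sketch where site.yml is just a placeholder playbook name:

chmod +x randomgroup.sh

# Dump the inventory exactly as Ansible parses it
ansible-inventory -i ./randomgroup.sh --list

# Preview which hosts a play would target on this run
ansible-playbook -i ./randomgroup.sh site.yml --list-hosts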

Although this code is merely a demonstration of capabilities, a more advanced version could be useful for automated collection or module tests.

The sample code and playbook are published in the site's GitHub project.

]]>
<![CDATA[ A Simple Token Generator ]]> https://chronicler.tech/a-simple-token-generator/ 61b225cc438d7f752b1a4ab9 Fri, 10 Dec 2021 08:30:00 -0500 My friend's given me a perfect tip: "Whenever I need a random sequence of characters,  I use OpenSSL rand functions."  And to be a bit fancy, I pick the value that is easy to remember.  

So, when I need a token for QA tests, I generate a line that starts with "Qa" or "qa." Naturally, you need a lot of runs to hit the mark. So, I ended up with a one-line bash script that produced a random sequence of my choice.

rc=1; while [[ $rc -ne 0 ]]; do openssl rand -base64 27 | grep -ie '^Qa'; rc=$?; done
Random sequence example

Let's go through the parts of it.

  1. rc=1 - Initialize flag variable to run OpenSSL at least once.
  2. while [[ $rc -ne 0 ]]; do - Stay in the while loop until rc equals 0.
  3. openssl rand -base64 27 - Generate a random 27-byte sequence and print it in base64 format. The length 27 has two reasons - it's odd, and you have no alignment "==" characters at the end. For different conditions, you may want to play a bit with the lengths and look of the result.
  4. grep -e '^Qa' - Filters the results from #3 by a regexp mask. You can use different conditions to get a sequence that fits your purpose. This particular one says: "the string starts with Qa, exactly." If you don't mind the case, you can use grep -ie '^Qa'; it will find any combination of Qq and Aa at the beginning of the line.
  5. rc=$? - Assign the last execution result to the flag variable. The last executed command is grep, so rc would be set to 0 only if a string matches the pattern, otherwise, the execution code would be different.
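
If you generate such tokens often, the same loop can be wrapped into a tiny shell function; a sketch where the gen_token name and the prefix argument are my additions:

# Print one random base64 token that starts with the given prefix (default "Qa")
gen_token() {
  local prefix="${1:-Qa}" rc=1 token
  while [ $rc -ne 0 ]; do
    token=$(openssl rand -base64 27 | grep -ie "^${prefix}")
    rc=$?
  done
  echo "$token"
}

gen_token Qa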

I hope this little script will save you a few minutes of your life.

]]>
<![CDATA[ Bash script to test port availability ]]> https://chronicler.tech/bash-script-to-test-network-latency/ 61ae9361438d7f752b1a4a45 Mon, 06 Dec 2021 18:10:01 -0500 I created a quick script to test if a port is consistently available from a source server to a target server, for the purpose of gathering statistics to see whether there are any network hiccups or not.

This is a very simple and straightforward script.

Simply copy the content below to a file called test_network_port.sh and change the permissions (e.g., chmod 700 test_network_port.sh):

#!/bin/bash

#--------------------------------------------------------------#
# FILENAME:      test_network_port.sh                          #
# CREATION DATE: 2021-12-06                                    #
# DESCRIPTION:   Test network port to IP:PORT                  #
# AUTHOR:        Ahmed Aboulnaga                               #
# LOG:           Log file 'test_network_port.log' generated    #
#--------------------------------------------------------------#

#--------------------------------------------------------------#
# Help                                                         |
#--------------------------------------------------------------#
if [ $# -ne 4 ]; then
  echo ""
  echo "Usage: ./test_network_port.sh [target_ip] [target_port] [number_of_loops] [delay_in_secs]"
  echo ""
  echo "Example: ./test_network_port.sh"
  echo "         ./test_network_port.sh 8 1"
  echo "         nohup ./test_network_port.sh 345600 0.5 &"
  echo ""
  exit 0
fi

#--------------------------------------------------------------#
# Check file existence                                         |
#--------------------------------------------------------------#
if ! command -v nc &> /dev/null; then
  echo ""
  echo "ERROR: Command 'nc' does not exist."
  echo ""
  exit 1
fi

#--------------------------------------------------------------#
# Parameters                                                   |
#--------------------------------------------------------------#
V_TARGETIP=${1}
V_TARGETPORT=${2}
V_LOOP=${3}
V_DELAY=${4}

#--------------------------------------------------------------#
# Loop                                                         |
#--------------------------------------------------------------#
i=1
while [ ${i} -le ${V_LOOP} ]; do
  V_TIMESTAMP=`date +'%Y-%m-%d %H:%M:%S.%N'`
  nc -vz ${V_TARGETIP} ${V_TARGETPORT} > test_network_port.tmp 2>&1
  echo "${V_TIMESTAMP}|`cat test_network_port.tmp | tail -1 | awk '{print $9}'`" >> test_network_port.log
  rm -f test_network_port.tmp
  sleep ${V_DELAY}
  i=`expr ${i} + 1`
done

exit 1

Pass the following arguments in this order:

  1. Target IP address
  2. Target port
  3. Number of times to loop
  4. Number of seconds between each loop (fraction of a second is allowed)

Executing it is simple (it should be run from your source server):

root@soadev:/root/temp> ./test_network_port.sh 192.168.1.1 1521 10 1
root@soadev:/root/temp>

It can also be run in the background for 2 days as shown:

root@soadev:/root/temp> nohup ./test_network_port.sh 192.168.1.1 1521 345600 0.5 &
root@soadev:/root/temp>

It generates a log file called test_network_port.log that has the following output:

2021-12-06 23:01:06.471441274|0.01
2021-12-06 23:01:07.493995977|0.01
2021-12-06 23:01:08.517366674|0.01
2021-12-06 23:01:09.540446846|0.01
2021-12-06 23:01:10.563645494|0.01
2021-12-06 23:01:11.587027598|0.01
2021-12-06 23:01:12.610403491|0.01
2021-12-06 23:01:13.633558440|0.01
2021-12-06 23:01:14.656778628|0.01
2021-12-06 23:01:15.679733290|0.01
]]>
<![CDATA[ Sensitive data in Terraform ]]> https://chronicler.tech/terraform/ 61910354438d7f752b1a4901 Tue, 16 Nov 2021 08:35:00 -0500 Terraform is a terrific configuration management tool. This is probably the best choice if you know your infrastructure and are ready to manage it as a code. And if we are talking code, let's see how you can keep sensitive information away from your code repository.

Terraform allows you to declare variables and use them to configure your target environment. A variable could have a default value, a value declared in one of the .tfvars files, or Terraform will ask you to enter the missing value during the plan phase. This approach allows you to keep your sensitive input details separate from the source code in the repository.

With all this in mind, my Oracle OCI Terraform variable declarations look similar to the diagram below.

Image depicts Terraform project files with separate variables and values. Value definitions are added to .gitignore rules.
OCI Terraform Provider Declaration 

The provider declaration contains two files - module definition and variables declaration (provider.tf and provider.vars.tf). The specific variable values are in the separate file - provider.auto.tfvars. To exclude sensitive content from the source code, .gitignore has exclude rules for all *.tfvars definitions.

However, I still want to maintain the list of variables required by the configuration, so there is a sample definition - provider.auto.tfvars.sample. It is a template with all the necessary inputs. So for a new project, I clone the repository, rename the sample file, and populate it with the project-specific details, plus add other sensitive files such as private keys.
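
In shell terms, bootstrapping a new project from that template boils down to a couple of commands; a sketch using the file names above:

# Make sure variable values never reach the repository
echo '*.tfvars' >> .gitignore

# Copy (or rename) the committed template and fill in the real values
cp provider.auto.tfvars.sample provider.auto.tfvars
vi provider.auto.tfvars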

Of course, for full-scale projects and complex implementation scenarios, you should look at specialized solutions such as HashiCorp Vault or AWS Secrets Manager. Even then, it is good practice to keep separate definitions of configuration, variables, and input values.

]]>
<![CDATA[ Ansible: Markdown Reports ]]> https://chronicler.tech/ansible-markdown-reports/ 6175566a2febf81565f5a719 Tue, 09 Nov 2021 08:30:00 -0500 The previous post promised a simple reporting solution and two potential possibilities. Now it's time to go through the solution that would help you with report publications.

The diagram from the previous post depicts information flow between components.  

Ansible works in close integration with GitLab to manage project code. Ansible playbooks produce Markdown reports and publish them back to the Wiki repository.
Ansible Markdown Reports on Git/GitLab

  • The Ansible controller pulls project code and report templates from the code repository. Separate repositories give you better access control over the code repository.
  • The controller runs playbook(-s)  against targets to collect target-specific details.  
  • The local part of the playbook uses collected reports and updates the local Wiki-repository clone.  
  • The last step - commit the new changes and publish changes to the server.

I have created one sample report and published it on GitHub. There are a few things that require your attention before (and if) running this example:

  1. Sample code refers to the sample Wiki repository - ansible-wiki-demo. Feel free to fork it or use your own.  In any case, do not forget to update the repository URL in the code.
  2. The published code was created for Oracle Cloud Infrastructure instances. It was tested on a single compute instance with core Ansible installation. If you are going to use it on-premises or run against other cloud providers, you may need to populate variable inst_meta with your own data or adjust the instance template code to report some different target facts.
  3. The core Ansible git task can clone a repository but can't update or publish changes. There are a few collections and roles to address it, for example, git_acp. But I intentionally used a shell task to run the native git commands (see the sketch after this list), because you may not have enough privileges to install additional Python and system libraries.
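
In essence, that shell task runs ordinary git commands against the local Wiki clone; a minimal sketch where the clone path, commit message, and branch name are placeholders:

# Commit the regenerated reports and push them back to the Wiki repository
cd wiki-clone
git add -A
git commit -m "Refresh inventory reports"
git push origin master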

Sample code produces a Wiki home page with a reference to all your inventory targets and a separate report page per target. The final home page should look like the one on the screenshot below.

]]>
<![CDATA[ Connecting with a private key using SSH and SFTP ]]> https://chronicler.tech/using-ssh-and-sftp-to-connect/ 6142347106f7cb4af32cc61b Thu, 04 Nov 2021 12:00:06 -0400 SSH uses public/private key pairs.

id_rsa is your RSA private key (do not share this!).

id_rsa.pub is your RSA public key; this you give out to the administrator of the target system so that they can add it to verify that the signature came from your private key.

On your source system, these 2 files are located under the .ssh folder. For example: /home/oracle/.ssh

To connect via SSH:

ssh -i /home/oracle/.ssh/id_rsa targetusername@soa01.revelationtech.com

To connect via SFTP:

sftp -oIdentityFile=/home/oracle/.ssh/id_rsa targetusername@soa01.revelationtech.com
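
If you don't have a key pair yet, you can generate one and lock down its permissions; a sketch that reuses the path from the example above:

# Create a 4096-bit RSA key pair under /home/oracle/.ssh
ssh-keygen -t rsa -b 4096 -f /home/oracle/.ssh/id_rsa

# The private key must stay readable by you only
chmod 600 /home/oracle/.ssh/id_rsa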
]]>
<![CDATA[ Ansible: Format e-mail body. ]]> https://chronicler.tech/ansible-format-e-mail-body/ 617555ac2febf81565f5a713 Tue, 02 Nov 2021 08:35:00 -0400 The whole automation idea is to minimize or exclude humans from the process. However, you need to notify users about the progress, request to perform some out-of-reach activity, or just report the results. System administrators may be happy with text or even JSON bodies, but sometimes you should do it in style.

The opening post mentions email messages as one of the reporting tools. It may not be the most reliable tool, yet it bears a few distinctive characteristics:

  • You can target email messages to a specific audience, with no overhead or additional integrations. Send to named recipients or a distribution list, and your report will be delivered to the right hands.
  • Users can consume reports literally everywhere. The ability to receive and send emails is one of the key characteristics of any "smart" device.
  • With some help from designers and little knowledge of the HTML, you can deliver messages suitable for stakeholders.
  • And the killer feature  - you can send attachments.

I have created a sample playbook that produces an HTML-formatted email with the Ansible logo and attachments.  It's a simplified and sanitized version of the real production playbook. Let's take a quick look at the main task.

- name: Send Email Report to Users
  mail:
    host: smtp.mydomain.com
    subject: "HTML-formatted Report Example"
    body: "{{ lookup('template','templates/mail_body.html.j2') }}"
    from: "Ansible Host <ansible@mydomain.com>"
    to: "{{ to_emails }}"
    cc: "{{ cc_emails }}"
    attach: "{{ req_zip.stat.path|default([]) }}"
    headers: 'Reply-To=Do.Not.Reply@mydomain.com'
    subtype: html
    charset: utf8
Ansible mail task example

Like many other things in Ansible, you can comprehend it as easily as regular text. Two attributes are important:

  • subtype -  It should be set to 'html', otherwise you will receive HTML source code.
  • charset   - set it to UTF-8 to make sure your HTML body will go through mail servers with no harm.

I know this is not a fine example of a good-looking e-mail, but it could be the start of your own reporting system.

Image depicts an HTML document generated by the Ansible playbook. It contains a header, links, and an embedded Ansible logo.
E-mail body produced by template.
]]>
<![CDATA[ Getting "The payload data size exceeds the configured payload threshold size" in WebLogic ]]> https://chronicler.tech/getting-the-payload-data-size-exceeds-the-configured-payload-threshold-size-in-weblogic/ 617807262febf81565f5a979 Thu, 28 Oct 2021 14:22:53 -0400 An Oracle SOA Suite 12c composite was consuming files via inbound polling on the FTP Adapter.

The file size was larger than 10MB and we got a Payload Size Exceeds Threshold error: The payload data size: "" exceeds the configured payload threshold size: "":

Option #1: Update the binding in the SOA composite

One option is to update the binding in composite.xml and redeploy the composite:

<binding.jca config="myFtp.jca">
  <property name="payloadSizeThreshold" type="xs:string" many="false" override="may">20000000</property>
</binding.jca>

Option #2: Update the Reference Configuration domain

Now, starting with Oracle SOA Suite 12.2.1.4, your SOA domain would have been created as either a Reference Configuration domain or a Classic domain.

The Oracle Documentation states that "A Reference Configuration domain provides tuned parameters out-of-the-box for newly created SOA projects. Tuned parameters include but are not limited to: ... Product-Specific: SOA, Service Bus, Adapters - Work Manager configuration, payload size restriction, and more."

Update $MW_HOME/soa/common/bin/setSOARefConfigEnv.sh and increase this value:

-Dsoa.payload.threshold.kb=30000

The setSOARefConfigEnv.sh is called by setDomainEnv.sh. Restart the domain and you should be good.
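
After the restart, you can sanity-check that the server processes picked up the new value; one assumed way to do it from the shell is to look for the JVM argument:

# Confirm the running Java processes carry the new payload threshold
ps -ef | grep -o 'soa.payload.threshold.kb=[0-9]*' | sort -u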

Check out Oracle Doc ID 2736500.1 as well.

]]>
<![CDATA[ Ansible: Simple Reports ]]> https://chronicler.tech/ansible-simple-reports/ 6174982a2febf81565f5a60b Tue, 26 Oct 2021 08:30:00 -0400 Any IT system, bigger than a chihuahua, ought to produce reports. It does not matter how they look, but it should generate and share information that humans can read. Automation projects are not immune to this, and I don't mean reports on automation tools but rather on the target state.

Ansible is a great candidate for generating reports: the integrated Jinja2 engine, native Python syntax, and ubiquitous template support are the keys to producing dynamic content. We also need a modern way to present the reports, and Ansible can offer you more than one option. Naturally, every environment is different, and you may have requirements, limitations, or some excellent reporting tool that you already use for such tasks. Let's talk about something you can do on your own.

HTML-formatted emails

Ansible Core has a mail task that allows you to create plain-text or HTML-formatted e-mails and supports attachments. I use them for messaging or as a part of human tasks. You can find a code sample and walkthrough in my post  "Ansible: Format e-mail Body." My biggest concern with e-mails - they tend to get lost. Your report should survive through the e-mail server security policies, spam filters, inbox cleanups, and endless FWD: chains. Plus, somebody should print it or store it somewhere else.

To summarize: e-mails are suitable for sharing current information, not great for publications and records management.

Markdown reports

Markdown reports are hard to miss when you consider your report format. They are text-based, and the report source is still human-readable, even if you don't know the Markdown language. Still, it is text, not a well-formatted document you can share with your stakeholders. The good news here - Markdown is everywhere; most modern CMS or SCM platforms support Markdown natively. It means that if you use one of the modern code management platforms, you already have a publication platform.

Ansible works in close integration with GitLab to manage project code. Ansible playbooks produce Markdown reports and publish them back to the Wiki repository.
Ansible Markdown Reports on Git/GitLab

With GitHub or GitLab as a publication platform, you would benefit from

  • Most likely, you already have Git or GitLab configured with your controller.
  • It will produce good-looking pages from your Markdown reports.
  • Structured content with navigation and URLs.
  • Track history of all report changes.    
  • No separate integration and publication tools are required.

A separate post describes the base implementation details.

 

]]>
<![CDATA[ Setting up single and multiple WebLogic NodeManagers in the same machine ]]> https://chronicler.tech/multiple-nodemanagers-for-weblogic-domains/ 617165c92febf81565f5a540 Thu, 21 Oct 2021 09:35:15 -0400 If you have multiple Oracle WebLogic Server 12c domains on a single machine, can each domain have its dedicated Node Manager? Can a single Node Manager manage all the domains in this machine? Yes and yes.

Multiple Node Managers on a Single Machine

If you have multiple domains on your machine, say soa_dev_domain and soa_tst_domain, each domain has its own set of Node Manager configuration and startup files under $DOMAIN_HOME/nodemanager.

On the soa_dev_domain, you will find a nodemanager.domains configuration file:

oracle@soahost:/u01> cat /u01/domains/soa_dev_domain/nodemanager/nodemanager.domains
soa_dev_domain=/u01/domains/soa_dev_domain

Similarly on the soa_tst_domain, the nodemanager.domains file will also exist:

oracle@soahost:/u01> cat /u01/domains/soa_tst_domain/nodemanager/nodemanager.domains
soa_tst_domain=/u01/domains/soa_tst_domain

Before you start up the 2 Node Managers, make sure that each of them has a different ListenPort:

oracle@soahost:/u01> cat /u01/domains/soa_dev_domain/nodemanager/nodemanager.properties | grep Port
ListenPort=5656

oracle@soahost:/u01> cat /u01/domains/soa_tst_domain/nodemanager/nodemanager.properties | grep Port
ListenPort=5657

Under the Machine configuration for each domain, update the Listen Address and Listen Port in the WebLogic Admin Console accordingly:

Now you can start up both NodeManagers:

oracle@soahost:/u01> export DOMAIN_HOME_DEV=/u01/domains/soa_dev_domain
oracle@soahost:/u01> nohup $DOMAIN_HOME_DEV/bin/startNodeManager.sh >> $DOMAIN_HOME_DEV/bin/nodemanager.out &

oracle@soahost:/u01> export DOMAIN_HOME_TST=/u01/domains/soa_tst_domain
oracle@soahost:/u01> nohup $DOMAIN_HOME_TST/bin/startNodeManager.sh >> $DOMAIN_HOME_TST/bin/nodemanager.out &
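
A quick, optional check that both Node Managers are listening on their configured ports (5656 and 5657 in this example):

netstat -tlnp 2>/dev/null | grep -E ':(5656|5657) '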

Monitoring Multiple Domains with a Single Node Manager

If you wanted the Node Manager instance under the soa_dev_domain to manage multiple domains, simply add the domains to nodemanager.domains as shown:

oracle@soahost:/u01> cat /u01/domains/soa_dev_domain/nodemanager/nodemanager.domains
soa_dev_domain=/u01/domains/soa_dev_domain
soa_tst_domain=/u01/domains/soa_tst_domain

Now, update the machine configuration in the WebLogic Admin Console, start up the Node Manager, and you're good to go!

]]>
<![CDATA[ Regenerating the 'DemoIdentity' certificate in WebLogic 12c ]]> https://chronicler.tech/regenerating-the-demoidentity-certificate-in-weblogic-12c/ 6168263f2febf81565f5a4dc Thu, 14 Oct 2021 09:04:27 -0400 I recently encountered an Oracle WebLogic 12c environment that was installed 5 years ago, and whoever set it up at the time settled on using the included demo certificate created with the installation. Unfortunately, this demo cert expires after 5 years and now they are unable to start up their managed server.

The error in the logs is:

<Oct 7, 2021, 2:20:59,944 PM EDT> <Alert> <Security> <BEA-090154> <Identity certificate has expired: [
[
  Version: V3
  Subject: CN=DemoCertFor_test12c
  Signature Algorithm: SHA256withRSA, OID = 1.2.840.113549.1.1.11

  Key:  Sun RSA public key, 1024 bits
  modulus: 135687768825257970920645103749378512647737621009184020990762899426300960611923373430758885109924074087110250668541195216859214695760272683547985604471057131191030374090625201144697417163468413950677609292596657234544449316372941272371625659602678021396555756075822965335563707180282782523324781153272285770993
  public exponent: 65537
  Validity: [From: Mon Sep 26 15:36:24 EDT 2016,
               To: Sat Sep 25 15:36:24 EDT 2021]
  Issuer: CN=CertGenCA, OU=FOR TESTING ONLY, O=MyOrganization, L=MyTown, ST=MyState, C=US
  SerialNumber: [    067ff157 a915]

To regenerate a demo cert, simply log into the EM console, navigate to WebLogic Domain > Security > Keystore, and expand system.

Highlight the demoidentity keystore row and click Manage, using the password DemoIdentityKeyStorePassPhrase:

Here you will find the certificate. Highlight it and click on Delete. You will be prompted for the private key password which is DemoIdentityPassPhrase:

Afterwards, click on Generate Keypair and enter the values below, replacing Common name with your server hostname. I suggest keeping the same password DemoIdentityPassPhrase:

Remember, this is for users of the demo keystore, which is not recommended to be used in a production environment.

For reference purposes, the default passwords for the demo trust, keystore, and cert are as follows:

  • Trust store password = DemoTrustKeyStorePassPhrase
  • Key store password = DemoIdentityKeyStorePassPhrase
  • Private key password = DemoIdentityPassPhrase
]]>
<![CDATA[ WebLogic managed server RUNNING but with 'Failed' health ]]> https://chronicler.tech/weblogic-managed-server-running-but-in-failed-health/ 616348232febf81565f5a392 Sun, 10 Oct 2021 16:16:22 -0400 On Oracle SOA Suite 12c (and technically Oracle WebLogic Server 12c), I noticed that a couple of managed servers were in a RUNNING state but the health was reporting Failed.

What could cause this particular scenario?

In this domain, the TLOGs were persisted in the database and the Persistent Store used to store the TLOGS returned a SetPrimaryStore failed exception. This was because the data source was suspended (e.g., Pool wls_tlogs is Suspended).

There apparently was a temporary hiccup in the database that caused this, and thus WebLogic couldn't write to the TLOG, hence the Failed health status.

]]>
<![CDATA[ Getting "Element 'binding.rest' not expected" in BPEL ]]> https://chronicler.tech/getting-element-binding-rest-not-expected-in-bpel/ 616346ac2febf81565f5a370 Sun, 10 Oct 2021 16:06:33 -0400 I created a BPEL process and attempted to add the property oracle.webservices.http.headers to my reference as shown below.

However, during compilation, I received the following error:

Error(39,100): schema - (Error) Element 'binding.rest' not expected.

Solution

Move the <binding.rest> line above the <property> line.

]]>
<![CDATA[ Ansible, YAML, and JSON ]]> https://chronicler.tech/ansible-yaml-and-json/ 6150779a2febf81565f5a12a Wed, 06 Oct 2021 11:37:00 -0400 Ansible is very flexible with data types, and easily transforms primitives and even JSON strings, but from time to time you need to process and transform something a bit more complex than String to Boolean conversions.

Ansible is not the first choice for data transformation, but its core functions are Python-based, backed by Jinja2 templates. Let's see how you can leverage them for data processing. For this example, I'm going to use a simple shell script that does nothing but print a set of lines similar to the screenshot below. I should note that the Oracle OPatch 12c utility produces quite a similar patch list report.

Sample Data Output
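
As a stand-in for the screenshot, a minimal shell-output.sh could look like the sketch below; the "id;title" field layout matches what the template later splits on ';', and the values themselves are made up:

#!/bin/sh
# Emit one "id;title" line per patch; numbers and titles are examples only
echo "10000001;Example bundle patch"
echo "10000002;Example one-off fix"
echo "10000003;Example security update"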

My goal is a playbook that receives the raw data and produces a list of objects. Let's start with the data capture.

---
- name: Process List of Patches 
  hosts: localhost
  vars: 
    patch_list: []
  tasks:
    - name: Receive External Data
      tags:
       - always
      shell: 
        cmd: |
         ./shell-output.sh
      register:  os_output
    - name:  Raw Output Lines
      debug:
        verbosity: 2 
        var: os_output.stdout_lines
Capture External Data 

Now, all we need is to transform the list of strings into a list of objects. I'm going to use the YAML multiline definitions '|' and '>', similar to the shell task above. The new task generates a JSON-compliant string and converts it into a list.

    - name: Transformation v1
      tags:
        - v1
      set_fact: 
       patch_list: |-
          [
          {%- for line in os_output.stdout_lines -%}
            { "id": "{{ line.split(';')[0] }}",
              "title":"{{ line.split(';')[1] }}" }
              {{ ", " if not loop.last else "" }}
          {%- endfor -%}
          ]
    - name: Result For V1
      tags: 
        - v1
      debug:
        var: patch_list
Data Conversion

The code above assigns the rendered Jinja2 template to the variable patch_list. The template has a 'for' control to iterate over the output lines and generate JSON. A few things may require an explanation:

  • The multi-line value starts with '|-'. It keeps newline characters, helping to avoid syntax errors. Some parsers allow newlines before JSON data but throw an error for whitespace. The dash instructs YAML to remove all extra newlines after the main text.
  • Loop controls are surrounded by {%- -%} where '-' mandates left and right trims for the lines. The resulting output would have no empty lines between data elements.
  • The end of the line template checks whether the current line is the last one. If it's not, it adds a comma character to the element definition.

As an output, the playbook prints structured data - a list of objects with two attributes each. Here is the output.

Playbook Output - Parsed JSON

I have added one more conversion task to the final playbook. It looks a bit more complex, but it could be useful if you need to generate nested lists and dictionaries. The full source code is available on GitHub.

 

]]>
<![CDATA[ Ansible: Iterations ]]> https://chronicler.tech/ansible-iterations/ 6103df7e6b7b965625b2fa29 Tue, 28 Sep 2021 08:35:00 -0400 As I mentioned before, iterations in Ansible are not first-class citizens. The best you can have is a loop for a single task, with one nested loop maximum. Anything more sophisticated than that should be implemented as a separate module in a proper procedural language. Yet once again, the include_tasks command gives us a helping hand.

Let's take a pretty typical, old-school example: you have a set of application servers, and each one may have one or more binaries installed. From time to time, you apply one or more patches to each installation. The description screams "iterations": one over the installation locations and the other over the list of patches to apply. The sample code below resembles the one I use to patch Oracle Fusion Middleware farms.

# Main task list - ofmw-patch.yml
- name: Apply Patches to OFMW Home
  include_tasks: 
    file: apply_patches.yml
  vars:
    target_home: "{{ mw_home }}"
    what_to_apply: "{{ mw_patch_list }}"
  loop: "{{ homes_on_host }}"
  loop_control:
    loop_var: mw_home
#
# Inner loop tasks - apply_patches.yml
- name: Apply Patch from patch list
  include_tasks: 
    file: opatch.yml
  vars:
    current_patch: "{{ l_patch }}"
  loop: "{{ what_to_apply }}"
  loop_control: 
    loop_var: l_patch
     
Nested loops in Ansible

As you may see, I use more than one include_tasks to loop over:

  • Main task flow - installation preparation: Stop the processes, prepare folders, set up some facts.
  • Outer loop -  prepare patch binaries on targets: download archives, unpack the code, identify the patch type, etc.
  • The inner loop - apply a single patch to the current installation.  

A few takeaways :

  • You may have more than one task list to iterate. It means that instead of one playbook file, you would carry at least two or even three, as in the example above. It makes playbooks a mouthful and too techy, and dictionaries to iterate over add even more to it. It may be worth hiding all the complexity inside a role.
  • Use loop_control clause to redefine default loop variable name and avoid name overlaps for nested loops.
  • Use unique parameter names for each include_tasks when possible, same as we discussed in the previous post. It eliminates the hassle with variable redefinitions, previous values, and residual values.        
]]>
<![CDATA[ Ansible: Selections ]]> https://chronicler.tech/ansible-selections/ 6103df3d6b7b965625b2fa1d Tue, 21 Sep 2021 08:30:00 -0400 Let's continue a talk about control structures in Ansible. Now, when we know how to imitate subroutines, it is time to discuss selections - if-then-else and case/switch operators.

Ansible offers conditional execution for single tasks, task blocks, and task imports. In addition, it allows you to effectively simulate an if-then selection. Let's take a look at the pseudocode below:

- hosts: localhost
  vars:
    true_var: yes
  tasks:
    - debug: 
        msg: "It's true!"
      when: true_var|bool    
 
Task condition

Ansible has no else/elseif clauses, but you can easily substitute for them, as in the following code snippet.

- hosts: localhost
  vars: 
    true_var: yes
  tasks:   
    - debug:
        msg: "It's true!"
      when: true_var|bool
    - debug: 
        msg: "No it's not!"
      when: not true_var|bool  
If-then-else Simulation

Naturally, you can build quite complex combinations using multiple conditions and combined tasks, and even simulate a switch operator. Yet Ansible offers a rather elegant way to implement switch selectors.

- hosts: localhost
  tasks:
    - include_tasks:
         file: "run-{{ selector |default('default') }}-tasks.yml" 
Switch Operator 

The code above uses a variable selector to identify which file to import into your play. If the variable is not defined, the task will use 'run-default-tasks.yml' as a file name. Thus, you can create multiple files for import, one for each option you want to handle.

Ansible offers another very powerful mechanism to execute only a part of your code - tags. By specifying tags to include or exclude at runtime, you can completely change the playbook behavior. You can check my previous post for more details.

]]>
<![CDATA[ Getting `ORA-00933: SQL command not properly ended` due to blank line ]]> https://chronicler.tech/getting-ora-00933-sql-command-not-properly-ended-due-to-blank-line/ 6147a6b206f7cb4af32cc789 Sun, 19 Sep 2021 17:25:14 -0400 What's strange is that a SQL script someone wrote (which apparently was running fine for months) seemed to return an ORA-00933: SQL command not properly ended error when executed through the Linux command line, as shown below:

oracle@dbhost:/home/oracle> /u01/sqlplus/instantclient_12_2/sqlplus ahmed/welcome1@//dbhost:1521/orcl @/home/oracle/script.sql

SQL*Plus: Release 12.2.0.1.0 Production on Sun Sep 19 12:31:27 2021

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Sun Sep 19 2021 12:30:50 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

            ) B
            *
ERROR at line 2:
ORA-00933: SQL command not properly ended


Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

What's odd is that when I pasted the same exact SQL statement to Oracle SQL Developer as is, it executed and ran just fine:

Turns out sqlplus wasn't happy with this blank line in the script. I removed it and everything was fine:

Like I said, this previously worked fine, but since the database had already gone through a few quarterly CPU patching cycles, I honestly can't point to what triggered this not to work anymore.
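
As a side note, if editing the script is not an option, SQL*Plus has a SQLBLANKLINES setting (OFF by default) that makes it tolerate blank lines inside a statement; an untested sketch that reuses the connection details from above:

sqlplus ahmed/welcome1@//dbhost:1521/orcl <<'EOF'
SET SQLBLANKLINES ON
@/home/oracle/script.sql
EOF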

]]>
<![CDATA[ Don't let the '&' confuse you when viewing Oracle SOA instances on a web browser ]]> https://chronicler.tech/untitled-8/ 613ab92d06f7cb4af32cc5c2 Thu, 09 Sep 2021 22:00:40 -0400 Man... dealing with & (ampersands) in SOA, web services, XML, and REST sucks. Dealing with special characters in general sucks: having to figure out whether the product or tool is escaping/encoding them for you or not.

Recently, I had to update my BPEL process to replace every occurrence of & in the input with %26%26.

The problem is, here's how the Receive activity in my SOA instance looked. Clearly, it's telling me that the Name element is coming in with the data Assessment & Authorization:

While this is true, what the console doesn't tell you is that though the input indeed came in as Assessment & Authorization, BPEL automatically encodes the ampersand, so the data in my instance was actually Assessment &amp; Authorization.

It's just that when it gets displayed on a web console, the browser renders it as &.

This put me on a multi-day troubleshooting effort, all because I thought I was manipulating & instead of &amp; in my data.

Moral of the story, be cautious about what you see on a web console when it comes to special characters!

]]>
<![CDATA[ Working around messy REST references using WSDL interfaces in JDeveloper ]]> https://chronicler.tech/working-around-messy-rest-references-using-wsdl-interfaces-in-jdeveloper/ 6134175c06f7cb4af32cc4b3 Sun, 05 Sep 2021 17:13:37 -0400 I recently ran into multiple issues with Oracle JDeveloper 12.2.1.4 when trying to call external REST web services from a BPEL project. It was quite painful, and what I pretty much concluded is that even this latest version of JDeveloper remains buggy when it comes to REST development.

My Scenario

I want to invoke a REST service from my BPEL project. This external REST web service is accessible at http://soatest/test/.

I thus create a REST reference against http://soatest/test/{identifier}, wherein {identifier} is a variable that I can dynamically manipulate. For example, this allows me to do a POST against this service at http://soatest/test/ahmed.

Creating the REST Reference

Adding a REST reference is simple enough. In the screenshot below, I chose not to select the Reference will be invoked by components using WSDL interfaces option.

I then created a Resource Path /test/{identifier} and created a POST method. Now, as you can see in the screenshot, JDeveloper is smart enough to understand that {identifier} is a parameter and creates a Runtime Property for you that you can reference in your Invoke activity later on.

Now in my Invoke activity, I copy a variable to the runtime property rest.template.identifier, which was defined in the adapter earlier. So far, so good.

For some crazy reason, although the code was developed correctly, I got an error at runtime. After enabling TRACE in the logs, I saw an HTTP 400 error upon invocation:

[2021-08-17T19:01:01.125-06:00] [soa_server1] [TRACE:16] [] [oracle.wsm.agent.handler.jaxrs.RESTClientFilter] [tid: [ACTIVE].ExecuteThread: '21' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: c94d4151-e2bc-4185-bd1d-c29d554e6334-81400023,0:1:1] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 171470] [oracle.soa.tracking.InstanceId: 543970] [oracle.soa.tracking.SCAEntityId: 514000] [composite_name: myBpelProject!1.0] [FlowId: d6G1X4yE400000T0000NhMldMpEcLkqwwf] [SRC_CLASS: oracle.wsm.agent.handler.jaxrs.RESTClientFilter] [SRC_METHOD: filter] ENTRY org.glassfish.jersey.client.ClientRequest@241cd9a0 ClientResponse{method=POST, uri=http://soatest/test/ahmed, status=400, reason=400}

Essentially an HTTP 400 is a "Bad Request", which means it's on the client side (i.e., my BPEL process). In fact, the target service never received a call.

Oracle Support states that when creating the REST adapter, I should have selected the Reference will be invoked by components using WSDL interfaces option.

The Problem with the Oracle Support Solution

Similar to the steps above, I added a REST reference but now selected the option Reference will be invoked by components using WSDL interfaces.

The problem is that there's no longer any Runtime Property!

This is a real problem actually.

As an ugly workaround, I manually added an existing property to the Expression, so I selected salesforce.LocaleOptions.language which I knew I would never use.

This is how my adapter looked in the end:

Now in the Invoke activity, I simply copied my variable over to the jca.salesforce.LocalOptions.language property, which the adapter then used to override the identifier parameter.

This surprisingly worked.

]]>
<![CDATA[ Getting java.lang.NoClassDefFoundError in JDeveloper Native Format Builder for JSON ]]> https://chronicler.tech/getting-java-lang-noclassdeffounderror-in-jdeveloper-native-format-builder-for-json/ 613413fa06f7cb4af32cc4a3 Sun, 05 Sep 2021 11:49:14 -0400 Apparently, I've run into an issue with Oracle JDeveloper 12.2.1.4.0 when using the Native Format Builder against a JSON file. Other online posts point to Oracle Doc ID 2628833.1, which tells you to apply patch 30482761 to solve this problem. This did not work.

In this post, I walk through what I did to resolve this.

My JDeveloper version

I am running Oracle JDeveloper 12.2.1.4.0 for Microsoft Windows (Build JDEVADF_PT.12.2.1.4.0_GENERIC_190911.2248.S).

Error encountered using the Native Format Builder

  1. Create a SOA project.
  2. Create a REST adapter and select the option Reference will be invoked by components using WSDL interfaces.
  3. After creating an operation, click on the icon to "Define Schema for Native Format". This starts the Native Format Builder.
  4. Select JSON Interchange Format.
  5. Upload a file (e.g., pg.json).
  6. Click on Next.

This is the error that pops up:

Error creating stream factory:
java.lang.NoClassDefFoundError
com/fasterxml/jackson/core/JsonFactory

And here is a screenshot of the error:

The Solution

You need to apply both patches 30482761 and 32363659 to JDeveloper.

set ORACLE_HOME=c:\JDeveloper\Oracle_Home
set JAVA_HOME=c:\Progra~2\Java\jdk1.8.0_241
set PATH=%JAVA_HOME%\bin;%PATH%
cd %ORACLE_HOME%\OPatch
opatch lsinventory
unzip c:\p30482761_122140_Generic.zip
cd 30482761
..\opatch.bat apply
cd ..
unzip c:\p32363659_122140_Generic.zip
cd 32363659
..\opatch.bat apply
cd ..
opatch lsinventory
]]>
<![CDATA[ My template for technical installation documents ]]> https://chronicler.tech/installation-document-template/ 612e995506f7cb4af32cc3ac Tue, 31 Aug 2021 17:53:44 -0400 As a technical administrator, I often recommend creating high-quality document deliverables. In a typical installation document, I am adamant about including certain sections which I normally don't see many people add.

A copy of the template referenced in this post can be downloaded here.

While this document template is far from perfect, scroll through and learn what you can add that will help you make a standout installation document.

Be specific about the software versions you're installing.

No explanation necessary here.

Always include an architecture diagram.

The level of detail can vary, but a diagrammatic depiction of your architecture helps the reader visualize the end result.

Provide direct links to downloadable software whenever possible.

Include software and version, file name, size, checksum, and direct link to the downloadable software whenever possible. There will be no doubt what is needed when the reader attempts an installation.

Prepare a password list.

It's very possible that a single product installation may require the creation of tens of user accounts and passwords. Track the usernames in the installation document (but not the passwords!).

Document your bash or environment script.

Include specifics of your bash or environment script that's used in your installation.

Be specific in your installation instructions.

I'm a big fan of adding how long it generally takes to complete the section. That way, the reader has a general idea of whether the section they're following is a 4-hour effort or a 10-minute effort.

I'm also not a fan of screenshots in installation documents, especially if the instructions can be replicated by exactly following the steps documented. The overwhelming majority of authors who include screenshots do so because of a lack of confidence in, and the poor quality of, their instructions, leaving the reader to decipher what is needed from the screenshot and adjust the instructions on the fly.

Always include a URL reference.

Add this as an appendix. Include every single accessible console URL.

Provide startup and shutdown commands.

This is a no-brainer. The startup/shutdown reference should let the reader simply copy and paste the commands as is, without thinking. If the reader has to customize these commands, then you have failed as an author.

Include fully qualified paths to all relevant log files.

Listing all important log files makes it easy for the junior administrator to know where to look.

Include instructions on how to check for services.

Provide one or more ways to allow the administrator to check if a service is up and running.
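For example, a generic sketch (the process name, port, and URL below are placeholders, not from any specific installation):

# Is the process running?
pgrep -lf AdminServer

# Is the listen port open?
ss -tln | grep 7001

# Does the console respond?
curl -sk -o /dev/null -w "%{http_code}\n" http://localhost:7001/console
Hypothetical service checks worth documenting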

]]>
<![CDATA[ Now on Telegram ]]> https://chronicler.tech/now-in-the-telegram/ 6128faae06f7cb4af32cc331 Fri, 27 Aug 2021 11:02:30 -0400 The Ghost blog platform offers powerful integration capabilities. From the early days, we have enabled an RSS feed, available practically out of the box. In addition, pre-configured integrations with modern automation tools allow you to push your posts to major social platforms.

I'm pleased to announce that now you can follow our fresh-from-the-oven Chronicler of Technology public channel on Telegram.

Stay tuned and never miss a single publication.  

]]>
<![CDATA[ Checking Linux host performance (for the newbie) ]]> https://chronicler.tech/checking-linux-host-performance/ 5eb55e560f5abe37b745a6da Wed, 11 Aug 2021 14:45:28 -0400 Every time I interview someone (for a non-Unix admin position) and ask them how they would check performance on Linux, it's safe to say that the only answer I ever get is top. So this blog post is meant for everyone I've ever interviewed and asked that question!

Here, I'll highlight a few other tools that you can use.

Top

The top command is popular because it provides a dynamic and real-time view of system performance.

System Load

The uptime command provides some information such as system load averages. The last 3 numbers in the output are essentially the average number of processes in a runnable state over the past 1, 5, and 15 minutes respectively. If you're running a load test, you'll see these numbers creep up. If the system is cooling down or has reduced activity, the numbers will drop.

Don't use this value to compare different sets of servers; rather, use it to compare a single server at various points in time.

SAR (System Activity Report)

SAR stands for System Activity Report and the sar command reports on a slew of CPU, memory, and I/O metrics. Typing sar without any parameters will return the CPU utilization captured every 10 minutes, giving you a little more historical information. The documentation describes a lot more options.

I/O Statistics

The iostat command (followed by a number, for number of seconds to refresh) provides I/O related metrics. Here you can see the kilobytes in and kilobytes out for every I/O device on your server, helping identify if there is extensive I/O activity or not.

VM Statistics

The vmstat command is perhaps my favorite of all the commands in this post. When followed by a number X (e.g., vmstat 1), a new row is printed every X seconds.

You can observe the amount of SWAP memory used, free memory available, bytes in and out to the SWAP space, bytes in and out for I/O, as well as CPU utilization for user (us), system (sy), idle (id), and wait (wa).
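For reference, here is a minimal sketch of how I typically sample these tools side by side; the intervals and counts are arbitrary examples:

# Load averages and uptime
uptime

# CPU utilization, 5-second samples, 3 samples
sar -u 5 3

# Per-device I/O, refreshed every 5 seconds
iostat 5

# Memory, swap, I/O, and CPU, one row every 5 seconds
vmstat 5
A quick sampling session with the commands covered above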

Things I normally look out for:

  • I ignore the free column. This supposedly reports how much memory is available, but the value is a little misleading (see this post to understand why).
  • I look at the si and so columns (bytes in and out of SWAP). Typically if these values are greater than zero, that means that data is being written to SWAP (bad!). This is a clear sign that you've run out of memory.
  • I look at the bo and bi columns (I/O bytes in and out). If they are consistently high, then I know a lot of I/O activity is going on. This does not necessarily indicate a problem; it is just an observation and data point.
  • Adding the values of the us, sy, and wa columns gives the total CPU used. Alternatively, I just look at the id column to see how much of the CPU is idle.
  • One of the most important data points here is the wa column. A high value means the processor is waiting on I/O, which is an extremely bad position to be in. Remember in the old days when you inserted a floppy disk in your desktop and the entire machine would freeze for 5 seconds until it could be read? That exemplifies what's happening here.
]]>
<![CDATA[ Getting "Cannot find reference" in BPEL instance ]]> https://chronicler.tech/getting-cannot-find-reference-in-bpel-instance/ 6113e20506f7cb4af32cc24f Wed, 11 Aug 2021 14:12:46 -0400 When executing a BPEL process, the instance faulted with the following exception:

java.lang.RuntimeException: Cannot find reference or service named 'Touch' in composite.xml when invoking the partnerLink Touch from BPEL component 'ProcessBPEL'.

Here's the same error on the console:

Basically, I had just made a non-breaking change to the REST Adapter reference (shown as 'Touch' in the screenshot):

The code compiled and actually deployed fine, but the instance would error out.

What I ended up doing to resolve this was to delete the wire (highlighted in red in the screenshot) and connect it again. Then I had to re-configure the Invoke activity.

That was it.

]]>
<![CDATA[ Updating SOA Polling Frequency through WLST ]]> https://chronicler.tech/updating-soa-polling-frequency-through-wlst/ 6101b2df6b7b965625b2f994 Mon, 09 Aug 2021 09:16:44 -0400 Have you ever wondered how you can update the PollingFrequency of an inbound adapter such as the FileAdapter or FtpAdapter using WLST?

Taking a look at an Oracle SOA 12c composite in the EM console, when you click on your service, you can navigate to the Properties tab to view and update the polling frequency of the inbound adapter at any time during runtime.

We experienced a strange issue. For example, the inbound FileAdapter would be configured to poll the folder every 10 seconds. However, we noticed that in some cases (which we can't explain), the file never gets picked up. It may sit there for hours. A restart of the SOA managed servers took care of this. We also found out that updating the polling frequency re-triggers or re-initializes the inbound file polling for some odd reason.

So the scripts below check for files that have been sitting for a few minutes (that should have been polled), and updates the polling frequency automatically in hopes that the inbound FileAdapter is re-triggered or re-initialized.

Export Composite List Once Per Day

This section describes how the list of composites is exported from the SOA server to a flat file once a day.

1. Cron Job

In cron, I schedule a job to run once a day to export the list of composites. This cron job calls a shell script.

0 5 * * * /home/oracle/scripts/ExportPollingFrequencyComposites.sh prod >> /home/oracle/scripts/ExportPollingFrequencyComposites.log 2>&1

2. ExportPollingFrequencyComposites.log

This cron entry dumps the output of the shell script in ExportPollingFrequencyComposites.log, but this log is mostly used to see if you have errors during execution. An example of this output is:

--------------------------------------------------------------------------------
START: Wed Jul 28 03:00:01 EDT 2021

----------------------------------------
 Export list of composites (/tmp/ahmed_wlst.log)
----------------------------------------

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Connecting to t3://soaprod:8001 with userid soaadmin ...
Successfully connected to managed Server "soa_server1" that belongs to domain "soa_domain".

Warning: An insecure protocol was used to connect to the server.
To ensure on-the-wire security, the SSL port or Admin port should be used instead.

Location changed to custom tree. This is a writable tree with No root.
For more help, use help('custom')

drw-   oracle.soa.config:SCAComposite.SCAService=Get_File,revision=1.0,name=AdapterBinding,partition=Default,SCAComposite="HelloWorld",label=soa_5f753a29-3bf9-40e6-97b0-7949c68dafde,j2eeType=SCAComposite.SCAService.SCABinding,Application=soa-infra
.
.
.

WLST output will no longer be redirected to /tmp/ahmed_wlst.log.
Disconnected from weblogic server: soa_server1


Exiting WebLogic Scripting Tool.


----------------------------------------
Create file with only relevant composites (ExportPollingFrequencyComposites.lst)
----------------------------------------

----------------------------------------
 Show contents of ExportPollingFrequencyComposites.lst
----------------------------------------
END: Wed Jul 28 03:00:19 EDT 2021

3. ExportPollingFrequencyComposites.sh

The shell script ExportPollingFrequencyComposites.sh connects to the SOA server and executes a WLST script ExportPollingFrequencyComposites.py to dump the entire list of composites in /tmp/ahmed_wlst.log.

The arrays in this script (under the section 'COMPOSITE FILTER LIST') remove all exported entries except those explicitly listed. So if there are 100 composites deployed to the SOA server, the final list will only have those that are listed here. The final file is ExportPollingFrequencyComposites.lst.

#!/bin/bash

################################################################################
#
# FILENAME:     ExportPollingFrequencyComposites.sh
# DESCRIPTION:  Exports SOA composite list for the composites below from WLST, to be used in Check script
# AUTHOR:       Ahmed Aboulnaga
# LAST UPDATED: 2020-06-27
# DETAILS:      Called by cron every 24 hours (since it doesn't change often)
#
################################################################################

#----------------------------------------
# SOA 12c ENVIRONMENT SETTINGS
#----------------------------------------
export ORACLE_BASE=/u01/app/oracle
export ORAINVENTORY=/home/oracle/oraInventory
export ORACLE_HOSTNAME=`hostname`
export ORACLE_TERM=xterm
export JAVA_HOME=$ORACLE_BASE/latest
export MW_HOME=$ORACLE_BASE/middleware
export WL_HOME=$MW_HOME/wlserver
export ORACLE_HOME=/u01/app/oracle/middleware
export DOMAIN=soa_domain
export DOMAIN_HOME=/u01/app/oracle/domains/${DOMAIN}
export PATH=$JAVA_HOME/bin:$ORACLE_HOME/OPatch:$PATH:/home/oracle/admin:.

# ----------------------------------------
# SET VARIABLES
# ----------------------------------------
scriptsPath=/home/oracle/scripts

# ----------------------------------------
# HELP
# ----------------------------------------
if [ "$arg" == "--help" ] || [ "$arg" == "-h" ] || [ ${#} != 1 ]; then
  echo ""
  echo "Usage: ./ExportPollingFrequencyComposites.sh "
  echo ""
  echo "Examples:"
  echo "  ./ExportPollingFrequencyComposites.sh test"
  echo ""
  exit
fi

# ----------------------------------------
# INPUT
# ----------------------------------------
varEnv=$1
case "$varEnv" in
  "test")
    varHostname=soatest
    varUsername=soaadmin
    varPassword=welcome1
    ;;
  "prod")
    varHostname=soaprod
    varUsername=soaadmin
    varPassword=welcome1
    ;;
  *)
    echo "Unrecognized environment"
    exit
    ;;
esac

# ----------------------------------------
# COMPOSITE FILTER LIST
# ----------------------------------------

arrayComposites=(
        HelloWorld
        HelloWorld2
        )

arrayRevisions=(
        1.0
        1.0
        )

arrayServices=(
        Get_File
        Get_File2
        )

arrayPollingFrequency=(
        18
        10
        )

arrayFolder=(
        /u01/app/oracle/share/inbound/HelloWorld
        /u01/app/oracle/share/inbound/HelloWorld2
        )

arrayFile=(
        *.csv
        *.csv
        )

echo "--------------------------------------------------------------------------------"
echo "START: `date`"

echo ""
echo "----------------------------------------"
echo " Export list of composites (/tmp/ahmed_wlst.log)"
echo "----------------------------------------"
$MW_HOME/oracle_common/common/bin/wlst.sh ${scriptsPath}/ExportPollingFrequencyComposites.py $varUsername $varPassword $varHostname

echo ""
echo "----------------------------------------"
echo " Create file with only relevant composites (ExportPollingFrequencyComposites.lst)"
echo "----------------------------------------"
rm -f ${scriptsPath}/ExportPollingFrequencyComposites.lst
for i in ${!arrayComposites[@]}; do
  cat /tmp/ahmed_wlst.log | grep "${arrayComposites[$i]}" | grep "revision=${arrayRevisions[$i]}" | grep "${arrayServices[$i]}" | grep "oracle.soa.config:SCAComposite.SCAService" | awk '{print $2}' >> ${scriptsPath}/ExportPollingFrequencyComposites.lst
done

echo ""
echo "----------------------------------------"
echo " Show contents of ExportPollingFrequencyComposites.lst"
echo "----------------------------------------"

echo "END: `date`"

4. ExportPollingFrequencyComposites.py

This is the WLST script that lists all composites and dumps them into /tmp/ahmed_wlst.log:

weblogicUsername = sys.argv[1]
weblogicPassword = sys.argv[2]
weblogicHost = sys.argv[3]

connect(weblogicUsername,weblogicPassword,'t3://' + weblogicHost + ':8001')

custom()
cd('oracle.soa.config')
redirect('/tmp/ahmed_wlst.log')
ls()
stopRedirect()

disconnect()
exit()

The composites listed here are in a very specific format, which we'll need shortly.

5. ExportPollingFrequencyComposites.lst

The final output of this daily dump is maintained in ExportPollingFrequencyComposites.lst, an example which is shown here. This is the list of composites we want to focus on (based on the array in the .sh script).

oracle.soa.config:SCAComposite.SCAService=Get_File,revision=1.0,name=AdapterBinding,partition=Default,SCAComposite="HelloWorld",label=soa_4c33e070-eb79-443e-bcc4-1b2651bd880d,j2eeType=SCAComposite.SCAService.SCABinding,Application=soa-infra
oracle.soa.config:SCAComposite.SCAService=Get_File2,revision=1.0,name=AdapterBinding,partition=Default,SCAComposite="HelloWorld2",label=soa_9384e070-eb79-935s-xyz4-1b2651b9987d,j2eeType=SCAComposite.SCAService.SCABinding,Application=soa-infra

Check Files Every 5 Minutes and (Maybe) Update Polling Frequency

Now these next scripts go through the list of composites, check if there are files in the directory older than 5 minutes, and if so update the polling frequency.

1. Cron Job

In cron, I schedule a job to run every 5 minutes to check if there are files older than 5 minutes and, if so, update the polling frequency of the inbound FileAdapter. This cron job calls a shell script.

*/5 * * * * /home/oracle/scripts/CheckPollingFrequency.sh prod >> /home/oracle/scripts/CheckPollingFrequency.log 2>&1

2. CheckPollingFrequency.log

This cron entry dumps the output of the shell script in CheckPollingFrequency.log, but this log is mostly used to see if you have errors during execution. An example of this output is:

--------------------------------------------------------------------------------
[INFO] Start process...         Wed Jul 28 16:00:01 EDT 2021
[INFO] Checking composite #1... HelloWorld | Get_File | 1.0
[INFO] Checking these files...  /u01/app/oracle/share/inbound/HelloWorld/*.csv
[INFO] Checking composite #2... HelloWorld2 | Get_File2 | 1.0
[INFO] Checking these files...  /u01/app/oracle/share/inbound/HelloWorld2/*.csv
|
| ******************************
| *           WARNING          *
| * FILES OLDER THAN 5 MINUTES *
| ******************************
|
| COMPOSITE: HelloWorld2
| SERVICE:   Get_File2
| REVISION:  1.0
|
| FILES:
| Jul 21 11:11 /u01/app/oracle/share/inbound/HelloWorld2/data.csv
|
[CRITICAL] Setting polling frequency to 10 for composite... HelloWorld2 | Get_File2 | 1.0

[INFO] End process...           Wed Jul 28 16:00:02 EDT 2021

3. CheckPollingFrequency.sh

The shell script CheckPollingFrequency.sh loops through all folders and checks if any file is older than 5 minutes. If so, it calls the WLST script CheckPollingFrequency.py to update the polling frequency of the respective composite.

#!/bin/bash

################################################################################
#
# FILENAME:     CheckPollingFrequency.sh
# DESCRIPTION:  Updates SOA composite PollingFrequency if files for FileAdapter not picked up in 5 minutes
# AUTHOR:       Ahmed Aboulnaga
# LAST UPDATED: 2020-12-16
# DETAILS:      - Called by cron every 5 minutes
#               - Requires the file 'ExportPollingFrequencyComposites.lst' to exist
#               - File 'ExportPollingFrequencyComposites.lst' created by 'ExportPollingFrequencyComposites.sh'
#
################################################################################

#----------------------------------------
# SOA 12c ENVIRONMENT SETTINGS
#----------------------------------------
export ORACLE_BASE=/u01/app/oracle
export ORAINVENTORY=/export/home/oracle/oraInventory
export ORACLE_HOSTNAME=`hostname`
export ORACLE_TERM=xterm
export JAVA_HOME=$ORACLE_BASE/latest
export MW_HOME=$ORACLE_BASE/middleware
export WL_HOME=$MW_HOME/wlserver
export ORACLE_HOME=/u01/app/oracle/middleware
export DOMAIN=soa_domain
export DOMAIN_HOME=/u01/app/oracle/domains/${DOMAIN}
export PATH=$JAVA_HOME/bin:$ORACLE_HOME/OPatch:$PATH:/home/oracle/admin:.

#----------------------------------------
# SET VARIABLES
#----------------------------------------
varOlderThanMinutes=5
scriptsPath=/home/oracle/scripts
EMAILS=ahmed@revelationtech.com
# NOTE: EMAILS_MORE is referenced further down for select composites; defaulting it
# to EMAILS here is an assumption -- adjust the recipient list as needed
EMAILS_MORE=${EMAILS}

# ----------------------------------------
# INPUT
# ----------------------------------------
if [ "$arg" == "--help" ] || [ "$arg" == "-h" ] || [ ${#} != 1 ]; then
  echo ""
  echo "Usage: ./CheckPollingFrequencyComposites.sh "
  echo ""
  echo "Examples:"
  echo "  ./CheckPollingFrequencyComposites.sh test"
  echo ""
  exit
fi

# ----------------------------------------
# CHECK FILE EXISTENCE
# ----------------------------------------
if [ ! -f "${scriptsPath}/ExportPollingFrequencyComposites.lst" ]; then
  echo "[INFO] Start process...         `date`"
  echo "[ERROR] File 'ExportPollingFrequencyComposites.lst' does not exist..."
  echo "[INFO] End process...           `date`"
  exit
fi

# ----------------------------------------
# ENVIRONMENT SETTINGS
# ----------------------------------------
varEnv=$1
case "$varEnv" in
  "test")
    varHostname=soatest
    varUsername=soaadmin
    varPassword=welcome1
    ;;
  "prod")
    varHostname=soaprod
    varUsername=soaadmin
    varPassword=welcome1
    ;;
  *)
    echo "Unrecognized environment"
    exit
    ;;
esac

# ----------------------------------------
# SOA COMPOSITES WITH INBOUND FILEADAPTER TO CHECK
# ----------------------------------------

arrayComposites=(
        HelloWorld
        HelloWorld2
        )

arrayRevisions=(
        1.0
        1.0
        )

arrayServices=(
        Get_File
        Get_File2
        )

arrayPollingFrequency=(
        18
        10
        )

arrayFolder=(
        /u01/app/oracle/share/inbound/HelloWorld
        /u01/app/oracle/share/inbound/HelloWorld2
        )

arrayFile=(
        *.csv
        *.csv
        )

#----------------------------------------
# Loop through each composite in ExportPollingFrequencyComposites.lst
#----------------------------------------
varLoopCount=0
echo "--------------------------------------------------------------------------------"
echo "[INFO] Start process...         `date`"

while read f; do

  varLoopCount=$((varLoopCount+1))
  varCurrentLine=${f}

  #----------------------------------------
  # Extract Service and Composite and Composite Revision from current line
  #----------------------------------------
  varCurrentLineService=`echo $varCurrentLine | cut -f1 -d, | cut -f2 -d=`
  varCurrentLineComposite=`echo $varCurrentLine | cut -f5 -d, | cut -f2 -d= | cut -d "\"" -f 2`
  varCurrentLineRevision=`echo $varCurrentLine | cut -f2 -d, | cut -f2 -d=`

  #echo "------ ${varCurrentLine}"
  echo "[INFO] Checking composite #${varLoopCount}... ${varCurrentLineComposite} | ${varCurrentLineService} | ${varCurrentLineRevision}"

  #----------------------------------------
  # Loop through array
  #----------------------------------------
  for i in ${!arrayComposites[@]}; do

    #----------------------------------------
    # If current line matches array values
    #----------------------------------------
    if [ "${varCurrentLineService}" == "${arrayServices[$i]}" ] && [ "${varCurrentLineComposite}" == "${arrayComposites[$i]}" ] && [ "${varCurrentLineRevision}" == "${arrayRevisions[$i]}" ]; then

      #----------------------------------------
      # Extract polling frequency
      #----------------------------------------
      varPollingFrequency=${arrayPollingFrequency[$i]}

      if [ "${arrayFile[$i]}" == "XYZ" ]; then
        echo "[INFO] Checking these files...  ${arrayFolder[$i]}/*.*"
      else
        echo "[INFO] Checking these files...  ${arrayFolder[$i]}/${arrayFile[$i]}"
      fi
      #----------------------------------------
      # If Folder exists
      #----------------------------------------
      if [ -d "${arrayFolder[$i]}" ]; then

        #----------------------------------------
        # If Folder has File types older than 5 minutes
        #----------------------------------------
        if [ "${arrayFile[$i]}" == "XYZ" ]; then
          varStaleFileCount=`find ${arrayFolder[$i]} -maxdepth 1 -mmin +${varOlderThanMinutes} -type f -name "*.*" | wc -l`
        else
          varStaleFileCount=`find ${arrayFolder[$i]} -maxdepth 1 -mmin +${varOlderThanMinutes} -type f -name "${arrayFile[$i]}" | wc -l`
        fi

        if [ ${varStaleFileCount} -gt 0 ] ; then

          #----------------------------------------
          # Log a warning
          #----------------------------------------
          echo "| "
          echo "| ******************************"
          echo "| *           WARNING          *"
          echo "| * FILES OLDER THAN 5 MINUTES *"
          echo "| ******************************"
          echo "| "
          echo "| COMPOSITE: ${varCurrentLineComposite}"
          echo "| SERVICE:   ${varCurrentLineService}"
          echo "| REVISION:  ${varCurrentLineRevision}"
          echo "| "
          echo "| FILES:"
          if [ "${arrayFile[$i]}" == "XYZ" ]; then
            find ${arrayFolder[$i]} -maxdepth 1 -mmin +${varOlderThanMinutes} -type f -name "*.*" -ls | awk '{print "| " $8 " " $9 " " $10 " " $11}'
          else
            find ${arrayFolder[$i]} -maxdepth 1 -mmin +${varOlderThanMinutes} -type f -name "${arrayFile[$i]}" -ls | awk '{print "| " $8 " " $9 " " $10 " " $11}'
          fi
          echo "|"

          #----------------------------------------
          # Touch the files
          #----------------------------------------
          if [ "${arrayFile[$i]}" == "XYZ" ]; then
            for ix in `find ${arrayFolder[$i]} -maxdepth 1 -mmin +${varOlderThanMinutes} -type f -name "*.*" -ls | awk '{print $11}'`
            do
              touch ${ix}
            done
          else
            for ix in `find ${arrayFolder[$i]} -maxdepth 1 -mmin +${varOlderThanMinutes} -type f -name "${arrayFile[$i]}" -ls | awk '{print $11}'`
            do
              touch ${ix}
            done
          fi

          #----------------------------------------
          # Call WLST script to update polling frequency
          #----------------------------------------
          echo "[CRITICAL] Setting polling frequency to ${varPollingFrequency} for composite... ${varCurrentLineComposite} | ${varCurrentLineService} | ${varCurrentLineRevision}"

          $MW_HOME/oracle_common/common/bin/wlst.sh ${scriptsPath}/CheckPollingFrequency.py $varUsername $varPassword $varHostname $varCurrentLine $varPollingFrequency

          #----------------------------------------
          # Send custom email
          #----------------------------------------
          if [ "${varCurrentLineComposite}" == "AllvacLims_BPEL" ] || [ "${varCurrentLineComposite}" == "FileProcessorUtilityUsingShell" ]; then
            mail -s "EM Event: Critical:${HOSTNAME} - PollingFrequency updated (${varCurrentLineComposite}|${varCurrentLineService}|${varCurrentLineRevision})" ${EMAILS_MORE}
          else
            mail -s "EM Event: Critical:${HOSTNAME} - PollingFrequency updated (${varCurrentLineComposite}|${varCurrentLineService}|${varCurrentLineRevision})" ${EMAILS}
          fi

        fi

      else
        echo "[WARNING] Skipping, folder does not exist... ${arrayFolder[$i]}"

      fi
    fi

  done

done < ${scriptsPath}/ExportPollingFrequencyComposites.lst
echo "[INFO] End process...           `date`"

4. CheckPollingFrequency.py

This WLST script updates the polling frequency of the composite. The 5 input parameters are passed by the .sh script.

weblogicUsername = sys.argv[1]
weblogicPassword = sys.argv[2]
weblogicHost = sys.argv[3]
weblogicComposite = sys.argv[4]
weblogicPollingFrequency = sys.argv[5]

import re
import tempfile

connect(weblogicUsername,weblogicPassword,'t3://' + weblogicHost + ':8001')

custom()
cd('oracle.soa.config')

myComposite = ObjectName(weblogicComposite)

# DEBUG
# myComposite = ObjectName('oracle.soa.config:SCAComposite.SCAService=Get_File,revision=1.0,name=AdapterBinding,partition=Default,SCAComposite="HelloWorld",label=soa_6d8fbdcb-d6be-49ed-8a73-ea1dc159427f,j2eeType=SCAComposite.SCAService.SCABinding,Application=soa-infra')

print '-----------myComposite------------------'
print myComposite

print '----------Before Properties-------------'
print mbs.getAttribute(myComposite, 'Properties')

print '-----------Set Properties---------------'
#params = ['PollingFrequency','55']
params = ['PollingFrequency',weblogicPollingFrequency]
sign = ['java.lang.String','java.lang.String']
mbs.invoke(myComposite, 'setStringProperty', params, sign)

print '-----------Save Properties--------------'
mbs.invoke(myComposite, 'save', None, None)

print '-----------After Changes-----------------'
print mbs.getAttribute(myComposite, 'Properties')

disconnect()
exit()

This is not a particularly easy set of scripts to understand at first glance, so feel free to reach out to me for further explanation.

]]>
<![CDATA[ Passing parameters to a REST URI in BPEL/SOA 12c ]]> https://chronicler.tech/untitled-7/ 61107b1a06f7cb4af32cc019 Sun, 08 Aug 2021 21:19:16 -0400 In this blog post, I describe how to pass URL parameters when invoking an external REST service during the development of an Oracle SOA Suite 12c composite.

In this example, my SOA composite has 2 separate external references. These are both REST services using the GET method. The first one takes the format https://revelationtech.com/rest/v2/subscribers?q=firstname, wherein an explicit query parameter q is passed in the URL with some value after it. The second reference takes the format https://revelationtech.com/rest/v2/subscribers/Ahmed, wherein Ahmed is a parameterized part of the URL.

Passing a Query Parameter to the REST URI

For the first reference, when creating the REST adapter, it looks like the following:

When editing the method, it is configured as a GET method and the URI parameter is defined with a runtime property rest.query.q:

Now that the REST adapter is created, double-click on the Invoke activity in your BPEL process, navigate to the Properties tab, and manually add a property as shown. Here, the BPEL variable $inputName becomes the value that is passed to the rest.query.q property:

Adding a Parameterized Context Path to the REST URI

As for the second reference, you can see here that when creating the REST adapter, the {inputID} is explicitly defined as a variable in the resource path:

Now when you edit the method, it is also a GET method, but the runtime property here is now rest.template.inputID:

Similarly, when editing the Invoke activity in your BPEL process, simply add a property to map your BPEL variable (e.g., $input) to the runtime property (e.g., rest.template.inputID):

]]>
<![CDATA[ Add WSM policy to authenticate using BasicAuth when calling an external reference in SOA 12c ]]> https://chronicler.tech/authenticate-using-basicauth/ 6110720906f7cb4af32cbfca Sun, 08 Aug 2021 20:28:47 -0400 If you're developing a composite in Oracle SOA Suite 12c (12.2.1.3+), you may need to call an external reference using the SOAP or REST adapters. If these external services require HTTP Basic Authentication (BA), then you will need to attach an OWSM policy to your external reference.

Add WS-Security Policy

  1. In Oracle JDeveloper, right-click on your external reference, and click on Configure SOA WS Policies....

2.  Click + and select the policy oracle/wss_http_token_client_policy.

*Note: The steps here describe how to attach a policy during design time (i.e., development), but you can also add them during runtime through the EM console.

Add the Credentials

  1. Login to the EM console.
  2. Click on the menu icon on the top-left.
  3. Expand WebLogic Domain.
  4. Click on your domain name (e.g., soadomain).
  5. Navigate to WebLogic Domain --> Security --> Credentials.
  6. If you don't find a credential entry for oracle.wsm.security, then click on Create Map and manually add it.
  7. Click on the row oracle.wsm.security, then click Create Key.

8.  Here, add the username and password of your target external reference.

Now, when your SOA composite is instantiated and it makes the invocation to the external web service, it will automatically pass the credentials.
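If you prefer scripting over the EM console, the same credential can also be created with WLST after connecting to the Admin Server. This is a minimal sketch; the connection details, username, and password are placeholders, and the key name must match what your policy configuration expects (basic.credentials is the default referenced in the error below):

# connect('weblogic', '<password>', 't3://adminhost:7001')

# Create the credential key in the oracle.wsm.security map
createCred(map="oracle.wsm.security", key="basic.credentials",
           user="myServiceUser", password="myServicePassword",
           desc="BasicAuth credentials for the external reference")

# Verify the key exists (the password itself is not displayed)
listCred(map="oracle.wsm.security", key="basic.credentials")
Hypothetical WLST alternative to the EM console steps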

Here is an example of an error you could get if the credential key is not created:

{"RestFaultElement": {
  "summary": "oracle.fabric.common.FabricInvocationException: javax.ws.rs.ProcessingException: java.io.IOException: oracle.wsm.common.sdk.WSMException: WSM-00054 : The password credentials cannot be obtained from the Credential Store Framework (CSF). Unable to retrieve the csfKey \"basic.credentials\". The csf map \"oracle.wsm.security\" found in the credential store is being used and the following keys are found in this map :- ",
  "code": "null",
  "detail": "oracle.wsm.security.SecurityException: WSM-00054 : The password credentials cannot be obtained from the Credential Store Framework (CSF). Unable to retrieve the csfKey \"basic.credentials\". The csf map \"oracle.wsm.security\" found in the credential store is being used and the following keys are found in this map :- "
}}
]]>
<![CDATA[ Ghost Upgrade: A Bit of MySQL ]]> https://chronicler.tech/ghost-upgrade-a-bit-of-mysql/ 6109a5d906f7cb4af32cbecc Wed, 04 Aug 2021 08:30:00 -0400 This morning I accidentally brought down this site. The Ghost CLI told me that there was a minor upgrade to version 4.11.0. Unfortunately, even though the starting version was 4.10.0, the schema update broke the server.

We use MySQL as the Ghost blog database, and it turned out the migration scripts wouldn't create a table and foreign key at startup time. The error message points to a foreign key issue with no further explanation. The blog server fails to start because the database has no trace of the requested table. A quick search for the error returned generic recommendations but nothing related to my problem.

Well, if you can't fix the error, prevent it. So I decided to create the missing table and references. After some searching on GitHub, I found the schema migration code for that release. The next step was to translate the JS definition into MySQL commands. Long story short, in case you run into the same problem, here is the DDL to create the missing pieces. Connect to your MySQL database with the ghost credentials and run the commands below:

## Create missing table 
create table `oauth` (
 `id` varchar(24) not null primary key,
 `provider` varchar(50) not null,
 `provider_id` varchar(191) not null,
 `access_token` text,
 `refresh_token` varchar(2000) default null,
 `created_at` datetime not null,
 `updated_at` datetime default null,
 `user_id` varchar(24) not null
  ) engine=InnoDB default charset=utf8;
## Define a new index
alter table `oauth` add index `oauth_user_id`(`user_id`);
## Build a new foreign key
alter table `oauth` add constraint `oauth_user_id_foreign` 
    foreign key (`user_id`) REFERENCES `users`(`id`) on delete no action;
Create a new table in the MySQL database

Two key factors to success:

  • Make sure that you use the same engine and character set as the users table. In my case, they are InnoDB and utf8mb4.
  • Create an index on the user_id field before the foreign key. It's not that obvious for people coming from the Oracle side. Normally, an index on the source column improves query performance in an Oracle database, but it's not required for the constraint itself.
]]>
<![CDATA[ Ansible: Subroutines ]]> https://chronicler.tech/ansible-subroutines/ 60fb4d8903fe466ba353e916 Tue, 03 Aug 2021 08:30:00 -0400 The second post in the Ansible: Control Structures series describes using Ansible core components to emulate subroutines: procedures and "packages."

The whole paradigm of Ansible is the opposite of some core programming language principles: encapsulation and isolation. Any task on the same host has access to all the process facts. All facts from different sources are compiled into a single pool accessible to all tasks in the playbook. If you have a fact declared in multiple places (inventory, play, or role), Ansible calculates the final value using a precedence list. If you haven't learned it before, take a moment and study it for good. So, why do we discuss variables instead of subroutines? Because you should always keep in mind:

  • There is no true encapsulation. Even if you consider a variable local, it is actually available to all subsequent tasks and roles. The same goes for global facts: something from another play could overwrite your defaults.
  • There are no procedure parameters. Instead, they are plain variables, with all the consequences. So, naturally, you cannot return anything from your "subroutines."

With this necessary warning out of the way, let's name the ways you can simulate subroutines without custom modules. There are only a few:

  • include_tasks - This module allows you to dynamically execute a set of tasks in a predefined order. The task points to a list of tasks in a separate file; the list should be a valid YAML file and contain tasks only. This module enables selections and iterations in Ansible. Even without encapsulation, you can pass parameters to the task and use them inside the task list. Here is some sample code (a sketch of the included task file follows after this list):
- hosts: all
  vars: 
    my_playbook_var: yes
  tasks:
     - name: Call Subroutine with Parameters
       include_tasks:  tasks-to-run.yml
       vars:   
         my_task_var: "{{ not my_playbook_var|bool }}"
         
Include tasks with parameters (procedure)
  • Ansible roles - This language structure allows you to encapsulate a complex set of tasks and enrich the core set of modules with environment- or product-specific actions. A playbook offers two places for role calls: the roles section of the Ansible play and the include_role task. The Ansible engine executes roles listed in the roles section before executing the tasks section, while the include_role task allows you to execute a role among the other play tasks.
- hosts: all
  vars: 
    my_playbook_var: yes
  roles:
    - role: my_awesome_role1
      vars:
        my_role_var: "{{ not my_playbook_var|bool }}"
  tasks:
    - debug:
        msg: "I go between my awesome roles"
    - include_role: 
        name: my_awesome_role2
      vars:
        my_role_var: "{{ not my_playbook_var|bool }}" 
    - debug: "I go after my_awesome_role2"    
Use Roles in Ansible Play
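Going back to the include_tasks example above, here is a minimal sketch of what tasks-to-run.yml could contain; the debug task is an assumption for this example, not from the original post:

# tasks-to-run.yml -- tasks only, no play header
- name: Show the "parameter" passed by the caller
  debug:
    msg: "my_task_var is {{ my_task_var }}"
Hypothetical contents of the included task file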

And just a few points for the conclusion:

  • All Ansible variables, aka facts, are available within the playbook. The current variable value is calculated per target using the fact precedence list;
  • You can simulate encapsulation by using role-specific variable names. However, you may lose some of the most powerful Ansible features.
  • You can simulate procedures and packages with a set of tasks in a separate file or with Ansible roles (standalone or within collections);
  • The include_tasks and include_role modules enable precise control over task execution order;
  • You can "pass" parameters to roles and subtasks using the vars clause.

   

]]>
<![CDATA[ Ansible: Control Structures ]]> https://chronicler.tech/ansible-control-structures/ 60f40d7f03fe466ba353e7df Tue, 20 Jul 2021 08:30:00 -0400 Red Hat Ansible is a powerful and easy-to-learn automation product. Simplicity is one of the basic principles of the language. Open the product description or documentation; it states simplicity and ease of use as key differentiators. And they are not wrong; this paradigm pays off with clean YAML code, one of the most human-friendly formats. Yet any task a tad more complex than installing and starting an HTTPD server requires good old procedural control flows.

I plan to create a series of posts about how I work around Ansible limitations and create custom roles without writing custom modules. Not that I don't know Python; everyone who spends more than a year with WebLogic knows it a bit. But custom modules require a different level of commitment and involvement. With playbooks and custom roles, you automate daily routines and deliver perfectly unified environments. When you create modules, you are a Python developer writing code that someone else will use for automation.

Anyway, a procedural language has four types of control flow:

  • Sequential flow - the most natural for Ansible. The engine executes plays in the playbook, and tasks within plays, in the same order they appear in the code. Ansible does not maintain the sequence of targets or tags, but the order of tasks is sacred. So you don't need to do anything special to keep your task execution in order.
  • Subroutines - Ansible has no formal functions or procedures that you can define and reuse. The closest thing to a procedure is a role: a predefined set of tasks, facts, and artifacts to perform repeatable work on targets. This is what I use the most at the playbook level.
  • Selections - With "simpler is merrier" on the banner, Ansible does not have proper selection controls, yet even a language that is simple by choice must check conditions and act accordingly. Any Ansible task can be paired with a when clause. That, plus the perks of Jinja2 templates, allows you to mimic if ... then and case controls.
  • Iterations - When you reflect on the fact that the Ansible engine executes tasks simultaneously on scores of hosts, you better understand why Ansible developers do not like iterations. Still, Ansible allows you to iterate a task and even offers nested loops (with a two-level limit). You can't iterate over a block of tasks, though, and have to use language loopholes to implement complex loop clauses.

   

]]>
<![CDATA[ Ansible Variables: a Rookie Mistake ]]> https://chronicler.tech/ansible-rookie-mistake/ 60edecff03fe466ba353e6aa Wed, 14 Jul 2021 08:35:46 -0400 I have spent a few hours trying to find out why my role applied patches to the wrong Oracle Home. The mistake I made could be excused for a Java developer, but not for a guy who spends his days with Ansible.

The root cause is a different variable visibility model. Red Hat Ansible does not respect role or play boundaries and dynamically evaluates facts and variable values for each target. Let me illustrate this with an example:

  • I have a playbook with two plays.
  • Each play selects a different subset of patches, using a product code.
  • The role applies the selected patches to the current Oracle Home.
  • Both plays target the same host.
  • The role derives the target Oracle Home from other variables defined in the call.

Here is some Ansible code to illustrate the behavior:

- name: Apply OHS Patches
  hosts: oim-cluster
  vars: 
    java_home: "/u01/app/product/jdk"
    mw_base: "/u01/app/product"
    mw_version: "12"
  tasks:
    - name: Apply Product patches
      include_role: 
         name: oracle-patch
      vars: 
        product: "ohs" 
        oracle_home: "{{ mw_base }}/{{ product }}{{ mw_version }}"
        patch_id: "all" 
        
- name: Apply OIM Patches
  hosts: oim-cluster
  vars: 
    java_home: "/u01/app/product/jdk"
    mw_base: "/u01/app/product"
    mw_version: "12"
  tasks:
    - name: Apply Product patches
      include_role: 
         name: oracle-patch
      vars: 
        product: "idm" 
        oracle_home: "{{ mw_base }}/{{ product }}{{ mw_version }}"
        patch_id: "all"
        
Two plays call the same role to patch an Oracle product

And now the mistake. The role oracle-patch had an "internal" variable definition.

- name: Weird local variable declaration
  set_fact:
     oh: "{{ oracle_home }}"
  when: oh is not defined
Code that led to the error

The problem with the code above is that it defines a fact. And the fact will be preserved across role calls, or across different plays in the same playbook, as long as they run on the same targets. On top of that, this task sets the fact only when it is not already defined. To fix the error, I eliminated the fact definition and used the variable directly, as sketched below.
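Here is a rough sketch of the corrected approach inside the role; the debug task is only an illustration of referencing oracle_home directly, not the actual role code:

# Inside the oracle-patch role: use the variable directly instead of
# caching it in a fact that survives across role calls and plays
- name: Show the Oracle Home this role call works against
  debug:
    msg: "Patching Oracle Home {{ oracle_home }}"
A hypothetical role task referencing oracle_home directly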

It's time to go ahead and refresh my knowledge of variables and facts in Red Hat Ansible. If you have the same plans, I'd recommend starting with the documentation to learn about variable usage and, especially, variable precedence.

]]>
<![CDATA[ IOUG Member Spotlight (Dec 2015) ]]> https://chronicler.tech/ioug-member-spotlight/ 60e7057103fe466ba353e621 Thu, 08 Jul 2021 10:06:43 -0400 I happen to randomly stumble upon this today when I was searching for an old blog post, so figured it might be interesting to share. This was when I was featured in December 2015 in the monthly IOUG Member Spotlight, which gave an inside look at the life of an OUG member.

]]>
<![CDATA[ Adding a new WebLogic Domain target to OEM ]]> https://chronicler.tech/adding-a-new-weblogic-domain-target-to-oem/ 60e45b3403fe466ba353e4cf Tue, 06 Jul 2021 09:57:23 -0400 Adding an Oracle Fusion Middleware/WebLogic Domain target to OEM is not that difficult, albeit it must be done manually.

Simply navigate to the gear icon, then Add Target > Add Targets Manually. Click on Add Using Guided Process.

Now select Oracle Fusion Middleware/WebLogic Domain then click Add.

Enter the AdminServer host, port, username, and password. Make note of the following:

  • If your AdminServer port is SSL, then under JMX Protocol select t3s.
  • Consider creating a weblogic account specific to the OEM Agent, only granting it the Operator group (e.g., in this screenshot this account is referred to as 'oemagent').

By having a separate account used for monitoring, any changes to the primary 'weblogic' password won't break the monitoring. Furthermore, this 'oemagent' account does not require elevated privileges, thus improving security.

Once added, you're good to go!

By default, new targets have auto discovery enabled once a day. Navigate to the gear icon, then Target > Configure Auto Discovery.

Here you can see that 6 new targets were auto discovered. Auto discovered targets are not automatically added, so you must do so manually. It's probably a good habit to review this page whenever new software or code is deployed to the host so that you can add it.

Click on the "6".

The Agent is smart enough to identify other software installed on the host, even if it's not used. You can either Promote it (i.e., add it as a monitored target) or Delete it so it is forever forgotten.

Once again, the only time you really need to add auto discovered targets is if there have been software or code changes on your host.

]]>
<![CDATA[ Removing a WebLogic Domain from OEM Cloud Control ]]> https://chronicler.tech/removing-a-weblogic-domain-from-oem-cloud-control/ 60e3855b03fe466ba353e4ab Mon, 05 Jul 2021 18:28:16 -0400 When you add a WebLogic Domain target to OEM, it automatically adds a number of target types included in this domain such as Application Deployments, Oracle WebLogic Server, Oracle Coherence Cache, Email Driver, and so on. It's not unusual to have hundreds of targets automatically added any time you add a single WebLogic Domain target to OEM.

Fortunately, removing the WebLogic Domain will automatically remove all subtargets without having to remove each one of them manually. To do so:

  1. Navigate to Targets > Middleware.
  2. Click on the WebLogic Domain you wish to remove under the Target Name column.
  3. Click on the Remove button.
Easily remove a WebLogic Domain from OEM through the Middleware dashboard
]]>
<![CDATA[ Checking the 2-way communication between OMS and OEM Agent ]]> https://chronicler.tech/checking-2-way-communication-to-your-oem-agent/ 60e3640b03fe466ba353e3ea Mon, 05 Jul 2021 18:18:33 -0400 The Oracle Enterprise Manager (OEM) Cloud Control Agent is installed on each host in your environment. This can be a Linux, Windows, or Solaris host, and typically (though not necessarily) there is 1 agent installed per host.

The OEM Agent listens on a default HTTP (or HTTPS) port 3872. You can take steps to secure this port later on if you choose to. The typical listen service is accessible at http://agenthost:3872/emd/main and is referred to as the Agent URL.

Agent URL

The Oracle Management Service (OMS) also has its own HTTP listener (not to be confused with the URL for the web console). This URL is used exclusively by your OEM Agents to upload metrics on an ongoing basis. It is accessible at https://omshost:4903/empbs/upload and is referred to as the Repository URL.

Repository URL

The communication between the Agent and OMS is straightforward. The administrator issues commands on the web console, which are sent from OMS to the Agents on port 3872, and the Agents regularly and automatically upload their metric data to OMS through port 4903.

2-way communication between OMS and Agent

What if the firewall is blocking this communication?

If OMS cannot communicate with the Agent, then you simply can't perform "management" activities. For example, you would not be able to issue startup or shutdown commands, invoke remote jobs, and so on. This would not impact your ability to monitor your targets, though (and it won't impact your ability to receive alerts), as the Agents can still continuously upload their metric data.

If the Agent(s) cannot communicate with OMS, this is a serious problem: the Agents will continue collecting metrics, but the data will be saved locally, unable to be pushed to OMS. Thus, the monitoring data on the web console will be stale and no alerts will be received.

How can you quickly check connectivity?

If you want to confirm whether the communication from OMS to the Agent is working, on the web console, navigate to the Agent and click on Upload Metric Data.

If you want to confirm whether the communication from the Agent back to OMS is working, on the host running the Agent type emctl upload. If it works, this means that the Agent was able to successfully upload the currently collected local metric data to OMS, and you're good to go.

You can also type emctl status agent to get more details on the status of the agent.
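As a quick, hedged cheat sheet, these are the kinds of checks I would run in each direction; the host name and AGENT_HOME path are placeholders:

# From the OMS host: is the Agent URL reachable?
curl -sk https://agenthost:3872/emd/main

# From the Agent host: push the local metric data and check status
$AGENT_HOME/bin/emctl upload agent
$AGENT_HOME/bin/emctl status agent
Connectivity checks in both directions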

]]>
<![CDATA[ Unappreciated SSH Client Config ]]> https://chronicler.tech/underappreciated-ssh-config/ 60cb236703fe466ba353e172 Tue, 29 Jun 2021 08:30:00 -0400 I guess every IT person has used SSH/SFTP to access remote computers and systems once in a while. Many of them know how to run graphical commands remotely, but not so many are fluent with the server-side configuration of the SSH daemon. And I can hardly name a few who know how helpful the client SSH configuration is.

You can make your life so much simpler, especially if you navigate through numerous heterogeneous *nix environments with different SSH ports, various access keys, and different usernames. You can use an advanced SSH terminal, like MobaXterm, or you can keep your environment details in SSH configuration rules. I personally do both, and I'm about to show you how helpful the SSH client configuration can be.
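For instance, here is a small sketch of the kind of per-host rules I keep in ~/.ssh/config; the host name, port, and key path are made-up examples:

# Shortens 'ssh -p 2222 -i ~/.ssh/dev_key oracle@devhost.example.com' to 'ssh dev'
Host dev
    HostName devhost.example.com
    Port 2222
    User oracle
    IdentityFile ~/.ssh/dev_key
Per-host connection defaults in the client SSH configuration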

It's prevalent, especially on government projects, for each device to welcome you with a noticeably long greeting banner. And in some cases, a few useful lines get buried under pages of fair-computer-use warnings. I fix this with an SSH config clause on all my Ansible controllers and on my workstation. Add the clause below to ~/.ssh/config or %userprofile%\.ssh\config to suppress all SSH-related output and /etc/motd notifications.

Match User git
   LogLevel QUIET
Suppress SSH output for user git

If you want to learn more, here is a good point to start.

]]>
<![CDATA[ Automate Housekeeping ]]> https://chronicler.tech/automate-housekeeping/ 5fd613394f0fc30da4e63a91 Tue, 22 Jun 2021 08:30:00 -0400 To be honest, my house is not the brightest kid on the block. Of course, we have Roomba, some smart light switches, a few IP cameras, an advanced mesh network, and a couple of Alexa stations. Yet none of it is connected or works the way I'd like it to. The main reason for such a hectic approach is that I'm a lazy person. That, and of course money. For one who does automation for a living, it's okay to be a bit lazy. So let's talk about how I keep my local git repositories in order with minimal effort.

From time to time, I post items around Ansible, Ansible Tower, and GitLab repositories. So, you may see that it consumes a noticeable amount of my work time. I ranted about GitLab repositories, projects, inventories, and hybrid configurations a lot. So, it's time to take a look at my workstation.

I do Oracle Fusion Middleware automation, yet I rarely use Oracle development tools. Instead, I spend my days with Microsoft VS Code, Git, and SSH terminal. I don't want to tell you how great VS Code is or some tips and tricks around Git; I'd love to learn more myself. Yet you may have similar struggles, managing local repositories.

 

My typical Ansible project in Microsoft VS Code

The simplest VS Code workspace I can imagine uses at least two Git/GitLab repositories:

  • Project repository
  • Inventory, through the submodule reference

Projects with custom roles consume significantly more than two repositories, and as a consequence, VS Code struggles with multiple Git repositories within one project. It's painful to perform multiple commits or update all local repositories. With a score of roles, even a simple repository clone turns into a project of its own and begs for automation, if you ask me.

After the third project clone, I gave up and created a small shell script to run git commands against multiple local repositories. It has grown from a single clone command and expanded functionality with time. This one is the most recent version, cleaned from the project-specific details.

#!/bin/bash
##############################
#  Michel Mikhailidi
#  mm@chronicler.tech
#  August, 2020 
##############################

# List your role names here. 
# I use role name as a repository name  
ROLES=( "certificate" "domain-control" "weblogic-install" \
"product-install" "product-patch" "weblogic-patch" "rcu" "keystore")
# Role base is the location of all your local repositories 
# Script uses it for all repository operations
# Example below represents Windows 10 mounts in the Shell console
#ROLES_BASE=/c/my-projects/ansible-roles
ROLES_BASE=$(pwd)
# All role projects are in the same group on SCM server
# You can use SSH or HTTPS links for the group reference. 
ROLES_GROUP=https://github.com/my/ansible/tower/roles
# The first parameter specifies the operation 
# I have implemented: clone, pull, push, checkout, and status. 
ops=$1
# Default branch name is master
brnch=${2:-master}
case $ops in
 clone)
    cd $ROLES_BASE
    for rl in ${ROLES[@]}; do
      git clone ${ROLES_GROUP}/${rl}.git
    done
    ;;
  checkout)
    for rl in ${ROLES[@]}; do
      echo -e "Push for ${rl} \
\n=========================================\n"
      cd $ROLES_BASE/$rl
      git $ops $brnch
      git pull origin $brnch
    done
    ;;
  pull)
    for rl in ${ROLES[@]}; do
      echo -e "Push for ${rl} \
\n=========================================\n"
      cd $ROLES_BASE/$rl
      git pull origin $brnch
    done
    ;;
  push)
   for rl in ${ROLES[@]}; do
      echo -e "Push for ${rl} \
\n=========================================\n"
      cd $ROLES_BASE/$rl
      git $ops origin $brnch
    done
    ;;
  status)
   for rl in ${ROLES[@]}; do
      echo -e "Push for ${rl} \
\n=========================================\n"
      cd $ROLES_BASE/$rl
      git $ops
    done
    ;;
  *)
    echo "Operation git ${ops}  ${brnch} is not yet implemented."
    ;;
esac
Mass Git project operations. 

You may never need such a thing, but if you want to use it - don't forget to customize the code:

  • ROLES - Array of the project names to work with. I left only a fraction of my original list;
  • ROLES_BASE - Path to the folder where you keep your local projects. The exact path format depends on what Shell interpreter you use.
  • ROLES_GROUP - Your Git server URL. In my case, all repositories are under the same group. You may not need it if you have all repositories cloned already.

With this script, I can manipulate the local repositories much faster and more effectively. Here are some examples:

# Prepare new roles folder
[user@workstation]$ mkdir -p /c/my-projects/new-roles
[user@workstation]$ cd /c/my-projects/new-roles

# Initialize local repositories
[user@workstation]$ mass-git-ops.sh clone

# Checkout the same branch 
[user@workstation]$ mass-git-ops.sh checkout dev-branch

# Check the status of local repositories
[user@workstation]$ mass-git-ops.sh status
Sample use of the mass git operations
]]>
<![CDATA[ Ansible and Git Submodules ]]> https://chronicler.tech/git-submodules/ 60bb5e9503fe466ba353dd07 Tue, 15 Jun 2021 08:35:00 -0400 Let's discuss a problem I faced, moving from Ansible to the Ansible Tower. The problem is how to maintain multiple inventories scattered all over new projects effectively.

All my new projects are tightly coupled with the corporate GitLab server. It allows me to keep code under control and maintain multiple projects with minimal effort, except for the project inventory. When each project has its own tailored copy of the inventory, the original idea works only to some extent. After the seventh project on Git, I realized that I was heading straight into version maintenance hell.

The slightest change in the actual configuration requires a review and update of the inventories for all affected repositories. Not to mention that you have to remember which ones are impacted. That's how I turned my eye toward git submodules.

In a few words, it's a reference from one git project to another one. It does not mean much for Ansible Tower, but it works well for classical Ansible and keeps all project inventories in sync. The diagram below illustrates all Ansible project artifacts and relations.

Hybrid Ansible Project Diagram

The diagram requires some explanation, so let's walk through it. Each project copy on the Ansible controller is assembled from:

  • Project artifacts in the GitLab repository. Usually, projects have multiple protected branches to maintain environment-specific details: domain names, vaults, inventory references, etc.
  • Role references in roles/requirements.yml. It points to separate role repositories and specific branch information. Technically, all roles should be environment agnostic, but I maintain the same branches to match the project repository, mainly to simplify automation activities.
  • Project inventory cloned from a git submodule. It comes with the additional .gitmodules file describing the path(s) and repository(-ies) you want to clone into it (see the commands sketched after this list).
  • Non-Ansible files, such as a project execution script that runs the Ansible playbook after updating the project roles and project/inventory artifacts from the GitLab server, and an environment configuration file for using the Ansible command-line tools with the project-specific setup.
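For reference, wiring an inventory repository into a project as a submodule takes only a few commands (the repository URL and path below are made up):

# Add the inventory repository as a submodule under inventory/
git submodule add https://gitlab.example.com/ansible/project-inventory.git inventory

# On a fresh project clone, pull the submodule content
git submodule update --init --recursive

# Later, pull the latest inventory commit referenced by the remote branch
git submodule update --remote inventory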

Now the caveat - Ansible Tower doesn't support submodules. I'm okay with that because Ansible Tower inventories can be sourced from a separate project, which I already have. As for core Ansible, git submodules significantly reduce my effort to keep the environment knowledge up to date. It's essential because we keep all environment knowledge in the repository.

]]>
<![CDATA[ Getting ssh_init when using SSH or SCP ]]> https://chronicler.tech/getting-ssh_init-when-using-ssh-or/ 60c4bdab03fe466ba353dfb8 Sat, 12 Jun 2021 10:07:18 -0400 When using pscp to transfer a file, you may receive the error:

ssh_init: Network error: Cannot assign requested address

The solution is simple: specify the -P 22 parameter (note the uppercase P) to explicitly set the port, and you should be good to go.
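A minimal example, assuming the standard SSH port and an example host:

pscp -P 22 report.log oracle@soadev.raastech.com:/tmp/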

]]>
<![CDATA[ Ansible: Conditional Playbooks ]]> https://chronicler.tech/ansible-conditional-plybooks/ 60606757afdf636c021010e3 Tue, 01 Jun 2021 08:30:00 -0400 I praised Ansible tags all through the previous post. Unfortunately, tags don't go along well with complex Ansible Tower workflows. Let me guide you through the playbook transformation we've done. The answer to this challenge is extra variables.

Practically, it's the only answer for now if you want to keep the same functionality for Ansible and Ansible Tower. You can define it as an extra variable or add a user survey and pass it down to all jobs in the workflow.

Let's take the example from the previous piece and sprinkle it with extra-variable support. The modified playbook tries to use an external tower_operation variable and falls back to 'start' or 'stop' if the variable is undefined.

---
## WebLogic Domain Control Playbook
## Stop Section
- name: Stop Managed Servers
  hosts: wls_hosts
  tags:
   - stop
   - server-stop
  vars:
    tower_op: "{{ tower_operation|default('stop') }}
  roles: 
    - role: domain-ctl
      vars:
        process: server
        state: stop
      when: tower_op in ['stop','server-stop'] 

- name: Stop Admin Server
  hosts: wls_host_admin
  tags:
   - stop
   - admin-stop
  vars:
    tower_op: "{{ tower_operation|default('stop') }}
  roles: 
    - role: domain-ctl
      vars:
        process: admin
        state: stop
      when: tower_op in ['stop','admin-stop'] 
        
- name: Stop NodeManager
  hosts: wls_hosts
  tags:
   - stop
   - nm-stop
  vars:
    tower_op: "{{ tower_operation|default('stop') }}
  roles: 
    - role: domain-ctl
      vars:
        process: node
        state: stop
      when: tower_op in ['stop','nm-stop'] 

## Start Section        
- name: Start NodeManager
  hosts: wls_hosts
  tags:
   - start
   - nm-start
  vars:
    tower_op: "{{ tower_operation|default('start') }}
  roles: 
    - role: domain-ctl
      vars:
        process: node
        state: start
      when: tower_op in ['start','nm-start'] 

- name: Start Admin Server
  hosts: wls_host_admin
  tags:
   - start
   - admin-start
  vars:
    tower_op: "{{ tower_operation|default('start') }}
  roles: 
    - role: domain-ctl
      vars:
        process: admin
        state: start     
      when: tower_op in ['start','admin-start'] 


- name: Start Managed Servers
  hosts: wls_hosts
  tags:
   - start
   - server-start
  vars:
    tower_op: "{{ tower_operation|default('start') }}
  roles: 
    - role: domain-ctl
      vars:
        process: server
        state: start
      when: tower_op in ['start','server-start'] 

...
wls-domain-ctl.yml

Now you have more options to control playbook execution:

  • From the command line with tags, for example --tags stop.
  • From the command line with an extra variable, e.g. -e "tower_operation=stop" (see the examples below).
  • In the Ansible Tower workflow, declare it as an extra variable or expose it through a survey.
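For example, both invocations below stop only the managed servers; the first is tag-driven, the second relies on the extra variable, just like a Tower job would:

# Tag-driven run
ansible-playbook wls-domain-ctl.yml --tags server-stop

# Extra-variable-driven run
ansible-playbook wls-domain-ctl.yml -e "tower_operation=server-stop"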
]]>
<![CDATA[ Ansible: the Power of Tags ]]> https://chronicler.tech/ansible-the-power-of-tags/ 609db2bb9776a713243da6d3 Mon, 24 May 2021 08:33:00 -0400 Let's talk about why you should use tags in your Ansible playbooks. For starters, I'll lecture you a bit about tags in the Ansible language and then show you how we use them in a real-life scenario.

Ansible is a great automation and configuration tool, built with simplicity and idempotency in mind. And like everything in this world, it comes with a price tag: the Ansible language is simple, not to say minimalistic, in terms of control structures. Of course, you may immediately refute me and say that you can do all of that in Ansible, and you would be correct. Ansible has its ways to compensate for missing constructs and to develop some quite sophisticated scenarios.

Plus, Ansible gives you tags, another unique and quite powerful way to control the execution. You can put a label or labels on tasks, roles, and plays. The tag value can be any valid string. Just keep in mind there are a few reserved words - always and never - and I do not recommend using the words tagged and untagged for tags either. Let's use a small example to illustrate what it looks like:

- name: Start MySQL Server
  tags: 
   - database
   - start
  service:
   name: mysql
   state: started
   
- name: Start HTTPD Server
  tags:
   - http
   - start
  service: 
   name: httpd
   state: started
system-start.yml

Now, let's see how tags alter the playbook outcome.


# Run all tasks and plays in the book
ansible-playbook system-start.yml

# Start all systems 
ansible-playbook system-start.yml --tags start

# Start HTTP Server only
ansible-playbook system-start.yml --tags http

# Same as previous 
ansible-playbook system-start.yml --skip-tags database

Special usage tags:

  • always - Ansible always executes the task, regardless of the specified tags in --tags and --skip-tags arguments.
  • never - Engine will skip the task, ignoring the --tags and --skip-tags arguments.
  • untagged - Not a tag, but with --tags untagged, Ansible will execute all tasks with no tags.
  • tagged - Opposite to the previous one. With --tags tagged, the engine picks a task if it has any tag.

Special tags (especially never) are pretty handy when you have something that you execute only on a special occasion. For example, I have an Oracle WebLogic configuration play with the tasks like this:

---
- name: Drop RCU Domain 
  hosts: wls-domain-admin
  tags:
   - never
   - rcu-drop
  roles:
   - role: my_rcu_role
     vars:
       state: absent
       
- name: Create RCU Domain 
  hosts: wls-domain-admin
  tags:
   - rcu-create
  roles:
   - role: my_rcu_role
     vars:
       state: present
 # Other plays are skipped.
 ...
wls-domain.yml

In the example above, play "Drop RCU Domain" will never be executed until I say so. It assures me that I wouldn't drop all my database artifacts by chance. Now, if I decide to re-create my WebLogic domain, I run this playbook with tags:

# Recreate Database objects
ansible-playbook wls-domain.yml --tags rcu-drop,rcu-create
Run play, marked as "never"

There are more fine details on using tags with roles or imported tasks, but you already have a general idea of how valuable tags are for your projects. A couple more tag-related tips:

  • --list-tags - List all tags from the playbook. Quite useful to understand what you can specify for the play.
  • --list-tasks - The command itself shows you the list of tasks in the playbook, but in combination with --tags or --skip-tags, it gives you a heads-up on what Ansible will execute from your playbook.
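For example, against the playbook above:

# Show every tag defined in the playbook
ansible-playbook wls-domain.yml --list-tags

# Preview which tasks a tag selection would actually run, without executing them
ansible-playbook wls-domain.yml --tags rcu-create --list-tasks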

Control Playbook

Among the most common playbooks we run are the control playbooks. They are designed to start, stop, or restart a specific system, with all the sequences and validations we need. To add some flexibility, I created separate plays to manage domain components individually. You don't need to know all the WebLogic domain details to understand this example, but the precedence of commands is important for the result, and you can't bring up the next component without the previous one.

Let me show the playbook structure, and then we discuss how to use it.

---
## WebLogic Domain Control Playbook
## Stop Section
- name: Stop Managed Servers
  hosts: wls_hosts
  tags:
   - stop
   - server-stop
  roles: 
    - role: domain-ctl
      vars:
        process: server
        state: stop

- name: Stop Admin Server
  hosts: wls_host_admin
  tags:
   - stop
   - admin-stop
  roles: 
    - role: domain-ctl
      vars:
        process: admin
        state: stop
        
- name: Stop NodeManager
  hosts: wls_hosts
  tags:
   - stop
   - nm-stop
  roles: 
    - role: domain-ctl
      vars:
        process: node
        state: stop

## Start Section        
- name: Start NodeManager
  hosts: wls_hosts
  tags:
   - start
   - nm-start
  roles: 
    - role: domain-ctl
      vars:
        process: node
        state: start

- name: Start Admin Server
  hosts: wls_host_admin
  tags:
   - start
   - admin-start
  roles: 
    - role: domain-ctl
      vars:
        process: admin
        state: start     

- name: Start Managed Servers
  hosts: wls_hosts
  tags:
   - start
   - server-start
  roles: 
    - role: domain-ctl
      vars:
        process: server
        state: start
...
wls-domain-ctl.yml

Now, let's see how I use the playbook:

# Restart Domain
ansible-playbook wls-domain-ctl.yml  
# Stop Domain
ansible-playbook wls-domain-ctl.yml --tags stop
# Start Domain.  
ansible-playbook wls-domain-ctl.yml --tags start 
# Start AdminServer only 
ansible-playbook wls-domain-ctl.yml --tags nm-start,admin-start,stop

Properly arranged plays give you the most common control activity for an information system - restart. As a bonus, if the system was down, it will be started when you run the playbook as is.

Every play bears a start or stop tag, so if I want to bring down the domain for the maintenance, I use only one tag for it.

If I need to start only the Admin Server, I make sure that the domain went down and specify component-branded tags for the start sequence. This command also illustrates that I can list tags in any order; Ansible will select and execute tasks in the playbook precedence order.

]]>
<![CDATA[ Getting DN mismatch in Java? Use keytool ]]> https://chronicler.tech/getting-dn-mismatch-in-java-use-keytool/ 60a7bd259776a713243daa74 Fri, 21 May 2021 10:19:02 -0400

I wrote some Java code that connects to a secure Oracle Database listener port through JDBC. The code I used is published here.

However, every time I execute this code, I receive the following exception:

java.sql.SQLRecoverableException: IO Error: Mismatch with the server cert DN.

This means that the DN I've configured in my Java code doesn't match that of the listener.

So how do I get the DN of the secure database listener?

I identified 3 mechanisms to extract the DN from the database listener; one using curl, another using openssl, and the last using keytool. Apparently, the DN returned is slightly different based on which of these commands you use.

cURL:

oracle@soadev:/home/oracle> curl -vvI dbhost.raastech.com:1522

*       subject: CN=dbhost.raastech.com,serialNumber=1955-01-01,businessCategory=Government Entity,O=Raastech Inc.,incorporationCountry=US,L=Washington,ST=District of Columbia,C=US

OpenSSL:

oracle@soadev:/home/oracle> openssl s_client -connect dbhost.raastech.com:1522

subject=/C=US/ST=District of Columbia/L=Washington/jurisdictionC=US/O=Raastech Inc./businessCategory=Government Entity/serialNumber=1955-01-01/CN=dbhost.raastech.com

Keytool:

oracle@soadev:/home/oracle> keytool -printcert -sslserver dbhost.raastech.com:1522

Owner: CN=dbhost.raastech.com, SERIALNUMBER=1955-01-01, OID.2.5.4.15=Government Entity, O=Raastech Inc., OID.1.3.6.1.4.1.311.60.2.1.3=US, L=Washington, ST=District of Columbia, C=US

Solution? Use the DN returned from keytool.
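For context, here is a minimal sketch of how the DN is typically supplied to the Oracle JDBC thin driver; the oracle.net.* property names are the standard driver connection properties, the DN string is the keytool output above, and the credentials and SERVICE_NAME are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SecureJdbcTest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "scott");          // placeholder credentials
        props.setProperty("password", "********");
        // Tell the thin driver to verify the server certificate DN
        props.setProperty("oracle.net.ssl_server_dn_match", "true");
        // The DN exactly as printed by keytool above
        props.setProperty("oracle.net.ssl_server_cert_dn",
            "CN=dbhost.raastech.com, SERIALNUMBER=1955-01-01, OID.2.5.4.15=Government Entity, "
          + "O=Raastech Inc., OID.1.3.6.1.4.1.311.60.2.1.3=US, L=Washington, "
          + "ST=District of Columbia, C=US");
        Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dbhost.raastech.com)"
          + "(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=orcl)))", props);
        System.out.println("Connected: " + !conn.isClosed());
        conn.close();
    }
}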

]]>
<![CDATA[ Ansible Tower: Dynamic Usernames ]]> https://chronicler.tech/ansible-tower-dynamic-usernames/ 609ea2d29776a713243da701 Mon, 17 May 2021 08:23:00 -0400 The slow drift toward the Red Hat Ansible Tower uncovers more and more compatibility issues, so I have to go back and revisit some decisions and make sure that the same code would fit both worlds.

Some six months ago, I posted a note on dynamic Ansible user configuration. The reason I had to do it: an independent set of credentials for a restricted environment, where the username is derived from the primary username. The solution works well with Red Hat Ansible but failed as a part of an Ansible Tower workflow.

The error message suggested that I had neither an account nor private keys to access my targets, even though I had configured the template with all the appropriate SSH keys and security settings. I didn't dwell much on the problem, but after a while, my brain came up with the insight: "The user on the Tower machine is not necessarily the same user I use to log in to Ansible Tower."

Now everything has fallen into place, and with the extra variables Tower populates for each job instance, I have fixed the job.

secured_servers:
    vars:
       ansible_user: "priv_{{ tower_user_name |default(lookup('env','USER')) }}"
    hosts:
       host1.secure.domain.com:
       host2.secure.domain.com:
...       
Calculate username for the remote connection

The Ansible engine will try to use the tower_user_name variable to calculate the new credentials. And if it is not an Ansible Tower job, it will use the username from the controller environment.

]]>
<![CDATA[ Ghost 4: the Major Upgrade ]]> https://chronicler.tech/ghost-4-the-major-upgrade/ 6096b008eaa939732840f23a Mon, 10 May 2021 09:00:00 -0400 Finally, our chronicles are on the latest version of the Ghost software. The preparation and tests have taken a while, yet the upgrade brought down the site for an hour. This small how-to could be helpful if you have a standalone Ghost platform and plan to upgrade it to the latest & greatest.

Node.js and NPM

For Ghost v3.x, I had Node v10 installed, but the new version prefers Node v12 or higher. If you share Node binaries with multiple projects, make sure that your new binaries will work with all applications.

The npm package manager should also be updated. I have shared Node binaries, so I run it as the root user with the global switch.

# npm install -g npm@latest

Your updated npm version would be 7.12.0 or higher.

Ghost Command Line Interface

With all the new Node binaries, it's time to upgrade Ghost CLI. Again, I have it as a global application:

# npm install -g ghost-cli@latest

As of today, the latest version of this package is 1.17.0. If you made it up to this point, you are ready for a site upgrade.


Take a pause and make sure that you have the latest site backup. Run through the checklist below and see if you:

  • Downloaded the site content
  • Downloaded your custom site theme
  • Have a backup of your content folder
  • Have a backup of your database

I have a tiny local VM to test all dangerous operations before I go to the production system. Additionally, I restore my site backups to validate them and keep local VM consistent.


Ghost upgrade

The engine upgrade is as simple as the previous steps. Before the main upgrade, make sure that you have a compatible version of Ghost 3 - 3.42.5. To do so, upgrade your current v3.x installation:

$ cd $GHOST_SITE/
$ ghost update v3

This command applies the latest changes and restarts the engine if your site was up. Check your site and admin console to see that you are still online.
Now it's time for the last jump, to the Ghost v4.4.0:

$ ghost update

Potential upgrade issues

If you keep your Ghost data in the MySQL database, you may run into some post-upgrade issues. An instrumental piece describes similar symptoms, but my final set of trials and runs was slightly different.

--- Added with no issues
alter table members_stripe_customers_subscriptions add column stripe_price_id varchar(255) not null default '' unique key ;

--- Failed for me. Key was already defined 
alter table members_stripe_customers_subscriptions add key (stripe_price_id);

--- Added the missing table
create table members_products (
  id varchar(24) not null primary key, 
  member_id varchar(24) not null references members(id) on delete cascade,
  product_id varchar(24) not null references products(id) on delete cascade, 
  sort_order integer unsigned not null default 0);

After several database alterations and ghost starts, it finally came through, and you can read this post on the updated site.

]]>
<![CDATA[ When is the Oracle SOA Platform ready to accept requests? ]]> https://chronicler.tech/confirming_soa_platform/ 608860c936702067f26ce4c3 Tue, 27 Apr 2021 15:17:39 -0400 Even after you've started up your Oracle SOA Suite 12c managed servers and they are reporting RUNNING and OK, this doesn't mean that the SOA platform is completely up yet.

WebLogic reporting the state of the managed servers

Even after the managed servers are up, each and every SOA composite must be loaded, and you'll find a "deploying composite model" entry in the log for each composite that is not lazy loaded (learn about lazy loading here!).

Only after you've seen the following message in soa_server1.out is your SOA platform truly up:

SOA Platform is running and accepting requests.
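A quick way to check for it from the command line (assuming the default .out location under the domain for a server started by Node Manager):

grep "SOA Platform is running and accepting requests" $DOMAIN_HOME/servers/soa_server1/logs/soa_server1.out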

And a screenshot:

Confirmation that the SOA Platform is ready and accepting requests
]]>
<![CDATA[ Recovering from a corrupted Embedded LDAP in WebLogic 12c ]]> https://chronicler.tech/untitled-4/ 60422c5e9eaecb1f2966086e Fri, 05 Mar 2021 08:42:41 -0500 A customer experienced an issue in Oracle WebLogic 12c in which their Global Roles suddenly disappeared. Not having seen this exact issue before, I did however experience issues in the past where the Embedded LDAP was corrupted.

Per Oracle Doc ID 1192253.1, Oracle Support states that "LDAP corruption usually occurs when the server instance is killed or shut down improperly."

Option 1: Restore Embedded LDAP from internal backup

  1. Navigate to $DOMAIN_HOME/servers/AdminServer/data/ldap/ldapfiles.
  2. See if there are backup files for EmbeddedLDAP.XYZ (where XYZ is a numeric value).
  3. Shut down the entire domain, back up EmbeddedLDAP.data, and replace it with this backup (a quick sketch follows this list).
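A minimal sketch of that swap, assuming an internal backup named EmbeddedLDAP.0123 exists (the numeric suffix will differ in your environment):

cd $DOMAIN_HOME/servers/AdminServer/data/ldap/ldapfiles
# Keep the corrupted file, just in case
cp EmbeddedLDAP.data EmbeddedLDAP.data.corrupt
# Promote the internal backup to the active file
cp EmbeddedLDAP.0123 EmbeddedLDAP.data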

Option 2: Restore Embedded LDAP from file system backup

  1. Restore the $DOMAIN_HOME/servers/AdminServer/data/ldap/ldapfiles from an earlier file system backup.

Option 3: Copy Embedded LDAP from another environment

  1. Copy the $DOMAIN_HOME/servers/AdminServer/data/ldap/ldapfiles from a different environment, assuming all environments are identical.

References

As a friendly reminder, don't forget to Enable 'configuration archive' in WebLogic.

]]>
<![CDATA[ Enable 'configuration archive' in WebLogic ]]> https://chronicler.tech/enable-configuration-archive-in-weblogic/ 60423016afdf636c0210109d Fri, 05 Mar 2021 08:40:45 -0500 When you enable configuration archive in your Oracle WebLogic Server domain, the entire domain configuration is backed up at every server restart. This is pretty much a must-have setting for every domain. There are absolutely no disadvantages to enabling this, and it can come in extremely handy if you experience an issue and want to easily recover or roll back.

  1. Login to the Oracle WebLogic Server Administration Console.
  2. Click on the domain name on the left.
  3. Click on Advanced.
  4. Enable the 'Configuration Archive Enabled' option.
  5. Set the 'Archive Configuration Count'.
  6. Save and activate changes.
  7. Restart your entire domain.

The backup is saved to config-booted.jar under the $DOMAIN_HOME folder as shown:

Each archive is about 1 MB in size. Previous archives are stored in the ~configArchive folder.

]]>
<![CDATA[ Why does Linux report 100% memory usage all the time? ]]> https://chronicler.tech/why-does-linux-report-100-memory-usage-all-the-time/ 602be6c29eaecb1f2966084b Tue, 16 Feb 2021 10:39:50 -0500 Why does Linux report memory usage as being 99% used even though nothing is running on the server?

I'll start with the conclusion: If little swap is being used, then memory usage is not impacting performance at all.

  • Traditional Unix tools like top often report a surprisingly small amount of free memory after a system has been running for a while.
  • The biggest place memory is being used is in the disk cache which is reported by top as "cached". Cached memory is essentially free, in that it can be replaced quickly if a running (or newly starting) program needs the memory.
  • The reason Linux uses so much memory for disk cache is because the RAM is wasted if it isn't used. Keeping the cache means that if something needs the same data again, there's a good chance it will still be in the cache in memory. Fetching the information from there is around 1,000 times quicker than getting it from the hard disk. If it's not found in the cache, the hard disk needs to be read anyway, but in that case nothing has been lost in time.

There you have it.

If you don't see any SWAP activity, don't worry about how much memory is reported being consumed at the OS level.
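Two quick ways to check that from the command line:

# The "available" column (not "free") is the figure that matters
free -h

# Watch the si/so columns: consistently non-zero values mean the system is actively swapping
vmstat 5 3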

]]>
<![CDATA[ Install and Startup Keycloak ]]> https://chronicler.tech/untitled-3/ 601f5ec69eaecb1f296607f5 Sat, 06 Feb 2021 22:42:25 -0500 Keycloak is an open-source (Apache License 2.0) identity and access management solution. This post walks you through an extremely quick setup of this product.

Download Keycloak

  1. Download Keycloak from here.

Install Keycloak

  1. Run these commands (filename may differ depending on version):
mkdir -p /u01/keycloak
cd /u01/keycloak
unzip keycloak-11.0.0.zip

Startup Keycloak

  1. Run these commands to start Keycloak:
cd /u01/keycloak/keycloak-11.0.0
bin/standalone.sh

First Time Login

  1. Login to the Keycloak Admin Console to do a first time setup of the admin account.

Wildfly (app server)

http://127.0.0.1:9990

Keycloak Admin Console

https://devhost:8443/auth/


]]>
<![CDATA[ Login to Linux using SSH Keys in PuTTY ]]> https://chronicler.tech/add/ 601f55239eaecb1f2966074c Sat, 06 Feb 2021 22:28:52 -0500 This post provides a quick guide to configuring access to a Linux account using SSH keys.

Generate a Private and Public Key Pair

  1. Download puttygen.exe from here.
  2. Click Generate.
  3. Move the mouse until complete.
  4. Enter the "Key passphrase".
  5. Click Save public key.
  6. Click Save private key.
  7. Copy the value of the public key in the box on the top (see screenshot) (Do not copy the trailing rsa-key-20210206 at the end of the string!)

Configure the Linux Server

  1. Login to the Linux account you want to give access to.
  2. Run these commands and paste the contents of your public key (from Step 7 above) here:
mkdir ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
vi ~/.ssh/authorized_keys

Configure Session in PuTTY

  1. Make sure you have the Linux hostname and .ppk file to configure a new PuTTY session.
  2. When configuring a new session in PuTTY, navigate to Connection > SSH > Auth and enter the location of your private key .ppk file on your local file system.

Logging in to PuTTY

  1. Upon login, you will be prompted for the Linux username and the private key password.

Exporting PuTTY .ppk Private Key to .pem Format

  1. In PuTTY Key Generator, select Conversions > Export OpenSSH Key.
  2. Name the file and add the .pem extension.
  3. Choose Save.
]]>
<![CDATA[ Getting "Argument list too long" when using ls ]]> https://chronicler.tech/getting-argument-list-too-long-when-using-ls/ 60175aa19eaecb1f29660728 Sun, 31 Jan 2021 20:38:19 -0500 I was trying to count the number of files in a particular directory in Linux using the ls command and received the following error:

oracle@soadev:/home/oracle> ls /u01/archive/*.* | wc -l
-bash: /bin/ls: Argument list too long

Turns out the resolution to this is to increase the stack size to a higher value:

oracle@soadev:/home/oracle> ulimit -s
10240

oracle@soadev:/home/oracle> ulimit -s 65536

oracle@soadev:/home/oracle> ls /u01/archive/*.* | wc -l
97101
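As an alternative that works regardless of the stack size, let find produce the list instead of expanding the glob on the command line:

find /u01/archive -maxdepth 1 -type f -name "*.*" | wc -l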


]]>
<![CDATA[ Maven Project with Docker Compose ]]> https://chronicler.tech/apache-maven-and-docker-compose/ 6006fc3809ba204db895bd5b Mon, 25 Jan 2021 08:45:00 -0500 My natural laziness is the perfect driver for automation efforts. This little tweak streamlines the development lifecycle. If you build something with Docker and Apache Maven, you could find this article useful.

When I started a new proof of concept, I decided that doing something with Docker could be educational and fun at the same time. I started with one container that runs a couple of JAX-RS services. For one project, it worked just fine. Later, I decided to add another one with a JSP application. Now I had to build two Maven projects and two containers to run simple tests. At this point, I discovered Docker Compose. It creates multi-container applications and provides better ways to configure, build, and run container images as a single application. So I converted the two separate projects into a multi-module Maven application, which let me compile and test my code with a single Maven command, yet I still had to run additional commands to build and test the containerized app. It wasn't good enough for me because I wanted to:

  • Use a single tool for all operations
  • Build containers only when all modules are packaged successfully.
  • Run JUnit tests, rebuild container images and test application with the same tool

After a series of trials and errors, I reached all my goals by following the sage advice: "If you want to do something with Maven - create a new module." The diagram below shows the current project structure.

Image depicts Apache Maven projects structure with the separate module for container operations.
Multi-module Maven project

The main POM is pretty standard and contains all must-have artifacts and three modules:

  • application services container,
  • web application container
  • container build module.  (should be the last one)

The first two modules are WAR applications, and they know how to build the appropriate image, but only the last one knows how to compose the application and work with the image registry.

Essentially, the last module has only two artifacts:

  • The Docker Compose build descriptor with relative references to the application containers (a sketch follows this list).
  • Maven module descriptor with prescriptions how to test, package,  and run container application.
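For illustration, a minimal docker-compose.yml for such a build module could look like this; the service names, ports, and image paths are made up, and the build contexts point to the sibling Maven modules:

version: "3.8"
services:
  app-services:
    build: ../app-services
    image: registry.gitlab.com/example/oauth-jsp/app-services:latest
    ports:
      - "8081:8080"
  web-app:
    build: ../web-app
    image: registry.gitlab.com/example/oauth-jsp/web-app:latest
    ports:
      - "8080:8080"
    depends_on:
      - app-services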

The Apache Maven descriptor below is self-explanatory. It binds executable actions to different build phases. For example, on the mvn package command, Maven will execute the shell command docker-compose build from the module folder after building all the other modules. To actually run the application on my local Docker engine, I use the integration-test phase.

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>oauth.jsp</artifactId>
    <version>1.0-SNAPSHOT</version>
    <relativePath>../pom.xml</relativePath>
  </parent>
  <groupId>com.example</groupId>
  <artifactId>build.composer</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <name>Package Docker Containers</name>
  <url>https://example.com/ojsp.app</url>
  
  <build>
    <plugins>
      <plugin>
          <artifactId>exec-maven-plugin</artifactId>
          <version>3.0.0</version>
          <groupId>org.codehaus.mojo</groupId>
          <executions>
            <execution>
              <id>Build Containers</id>
              <phase>package</phase>
              <configuration>
                     <executable>docker-compose</executable>
                     <commandlineArgs>build</commandlineArgs>
                 </configuration>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>
            <execution>
              <id>Test Configuration</id>
              <phase>test</phase>
              <configuration>
                     <executable>docker-compose</executable>
                     <commandlineArgs>config</commandlineArgs>
                 </configuration>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>              
            <execution>
              <id>Run Containers</id>
              <phase>integration-test</phase>
              <configuration>
                     <executable>docker-compose</executable>
                     <commandlineArgs>up --detach</commandlineArgs>
                 </configuration>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>
            <execution>
              <id>Registry Login</id>
              <phase>install</phase>
              <configuration>
                     <executable>docker</executable>
                     <commandlineArgs>login registry.gitlab.com</commandlineArgs>
                 </configuration>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>
            <execution>
              <id>Install Containers</id>
              <phase>install</phase>
              <configuration>
                     <executable>docker-compose</executable>
                     <commandlineArgs>push</commandlineArgs>
                 </configuration>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
    </plugins>
</build>
</project>

Now I can use a single command to build, test, and run my composite container application.

[composite-app]$ mvn integration-test
]]>
<![CDATA[ Find directories with largest number of files in Linux ]]> https://chronicler.tech/find-directories-with-largest-number-of-files-in-linux/ 6000bf3a09ba204db895bd28 Thu, 14 Jan 2021 17:14:20 -0500 I needed to find the list of directories that had the largest number of files on my Linux file system, searching recursively through the filesystem.

Here's the content you can paste in a script:

#!/bin/bash

if [ $# -ne 1 ];then
  echo "Usage: `basename $0` DIRECTORY"
  exit 1
fi

echo "Wait a moment if you want a good top of the bushy folders..."

find "$@" -type d -print0 2>/dev/null | while IFS= read -r -d '' file; do
    echo -e `ls -A "$file" 2>/dev/null | wc -l` "files in:\t $file"
done | sort -nr | head | awk '{print NR".", "\t", $0}'

exit 0

Here's the output (may take a few minutes to run depending on how much needs to be searched):

oracle@soadev:/home/oracle/scripts> ./countfiles.sh /u01/oracle

Wait a moment if you want a good top of the bushy folders...
1.  135097 files in: /u01/oracle/dir1
2.  96598 files in:  /u01/oracle/dir2
3.  74119 files in:  /u01/oracle/dir3
4.  73828 files in:  /u01/oracle/dir4/dir5/dir6
5.  55183 files in:  /u01/oracle/dir7
6.  55042 files in:  /u01/oracle/dir8/dir9
7.  52089 files in:  /u01/oracle/dir10
8.  47142 files in:  /u01/oracle/dir11/dir12
9.  19893 files in:  /u01/oracle/dir13
10. 17280 files in:  /u01/oracle/dir14

References

Find directories with lots of files in
So a client of mine got an email from Linode today saying their server was causing Linode's backup service to blow up. Why? Too many files. I laughed and then ran: # df -ih ...

]]>
<![CDATA[ Getting OPatch error code 255 and 1 ]]> https://chronicler.tech/getting-opatch-error-code-255-and-1/ 5ff5193709ba204db895bcee Tue, 05 Jan 2021 21:05:49 -0500 I ran OPatch against my Oracle Access Manager (OAM) 12.2.1.4 installation, and I got the OPatch error code 255.

oracle@soadev:/u01/oracle/middleware/OPatch> ./opatch lsinventory

OPatch failed with error code 255

I guess I had to set the ORACLE_HOME first, which I then did. The OAM product is under the ~/idm subfolder, so that's what I set it to.

oracle@soadev:/u01/oracle/middleware/OPatch> export ORACLE_HOME=/u01/oracle/middleware/idm

oracle@soadev:/u01/oracle/middleware/OPatch> ./opatch lsinventory
The Oracle Home /u01/oracle/middleware/idm is not OUI based home. Please give proper Oracle Home.
OPatch returns with error code = 1

Still no good.

So now, I set the ORACLE_HOME to the same value as MW_HOME, which is the top-level directory of my Oracle Fusion Middleware installation.

Success!

oracle@soadev:/u01/oracle/middleware/OPatch> export ORACLE_HOME=/u01/oracle/middleware

oracle@soadev:/u01/oracle/middleware/OPatch> ./opatch lsinventory
Oracle Interim Patch Installer version 13.9.4.2.1
Copyright (c) 2021, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/oracle/middleware
Central Inventory : /u01/oraInventoryAgent
   from           : /u01/oracle/middleware/oraInst.loc
OPatch version    : 13.9.4.2.1
OUI version       : 13.9.4.0.0
Log file location : /u01/oracle/middleware/cfgtoollogs/opatch/opatch2021-01-06_01-57-05AM_1.log


OPatch detects the Middleware Home as "/u01/oracle/middleware"

Lsinventory Output file location : /u01/oracle/middleware/cfgtoollogs/opatch/lsinv/lsinventory2021-01-06_01-57-05AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: soadev.raastech.com
ARU platform id: 226
ARU platform description:: Linux x86-64


Interim patches (1) :

Patch  31556630     : applied on Tue Sep 08 01:09:30 GMT 2020
Unique Patch ID:  23664288
Patch description:  "OAM BUNDLE PATCH 12.2.1.4.200629"
   Created on 30 Jun 2020, 01:44:15 hrs PST8PDT
   Bugs fixed:
     30355996, 30832165, 30882267, 30793308, 31000954, 30771422, 31110638
     30053037, 30748479, 30831364, 30669352, 28108712, 29883498, 30762860
     30120631, 29715441, 30628496, 30911495, 30406633, 31366419, 30622957
     30953737, 31419785, 31413189, 31465732, 31065568, 31510690, 31508059
     30468914, 21391069, 29717855, 30634571, 30069618, 30792754, 30571576
     29240849, 30213267, 29783271, 30426370, 30460435, 30169956, 30820170
     29885236, 30805164, 30805154, 30805180, 30134427, 31029076, 31042676
     31073659, 29771448, 29482858, 29837657, 30062772, 30180492, 29290091
     30363797, 30156607, 30243111, 30267123, 30156706, 30389257, 30176378
     29649734, 30144617, 26679791, 29541818, 30311080



--------------------------------------------------------------------------------

OPatch succeeded.

]]>
<![CDATA[ Ansible: Consume SQL*Plus Output ]]> https://chronicler.tech/ansible-consume-sql-output/ 5fe178b509ba204db895b8b3 Mon, 04 Jan 2021 08:44:00 -0500 Due to tight security restrictions, I can't use some of the existing Ansible modules or 3rd party projects. But with a pinch of creativity, your Ansible playbooks can consume database data without extensive Python or PL/SQL coding.
Let's assume that your target host has an Oracle Database client installed and the database version is at least 12.2. The main idea is obvious: you need to get the output in one of the Ansible-native formats, so after crossing out XML (native to the Oracle database, but hard to digest by Ansible) and YAML (which is not a thing for the Oracle database), there is only one format left - JSON.
Without further ado, a sample script below queries database data in JSON format and transforms it into Ansible facts.

---
- name: Consume SQL Output Example
  hosts: localhost
  vars:
    sql_password: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      31663439333934653339323738336538353632663561643633316362366434303261316163613161        
      6664363630333466363566366636636439383333323334650a363836343036336165353662663961
      353332666130333235386365343132666534363435663635633533306333383363303237343033625
      37643331653731620a363562346435376664363536356662626338626331383035643330373265     
  tasks:
    - name: Query Simple Fact
      no_log: yes
      shell:
        cmd: |
          sqlplus -S admin/{{ sql_password }}@kclck01_low <<EOF
          set feedback off
          set heading off
          SET SERVEROUTPUT ON SIZE 5000;
          SET LINESIZE 2500;
          set pagesize 5000;
          set long 5000;
          select json_object('db_version' VALUE  BANNER) from v\$version;
          EOF
      register: simple_out
    - name: Transform response
      set_fact:
        simple_json: "{{ simple_out.stdout|from_json }}"       
    - name: Show result
      debug:
        var: simple_json
...

The playbook does three simple things: the first task runs a SQL*Plus command and produces JSON output, the next task turns the JSON-formatted string into an Ansible fact, and the last task uses the result.

The only real trick here is receiving raw data output from SQL*Plus, and the primary helpers are:

  • Option -S suppresses all the standard SQL*Plus output.
  • Command set feedback off shuts down all the post query output
  • Command set heading off switches off table headers

One last tip for thought: if you need to produce a complex result as a single JSON array, look at the aggregating function json_arrayagg.
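As a hedged illustration of that approach, the query below returns the whole result set as one JSON array (it uses the generic all_users dictionary view):

-- One JSON array with an element per database user
select json_arrayagg(json_object('username' VALUE username)) from all_users;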

]]>
<![CDATA[ Automating Oracle Forms environment files configuration ]]> https://chronicler.tech/automating-oracle-forms-env/ 5fa06f795368c91a255a05ee Sun, 03 Jan 2021 19:45:42 -0500 Automating everything often comes to a halt once you discover that specific applications or platforms do not support making changes through a command-line interface. One such application is Oracle Forms, even if you run the latest and greatest version of it.
This post is not to prove it different, but I instead consider it as an aid to those who want to dig a bit deeper.
In Forms, configurations are supposed to be made through Enterprise Manager and not manually. This includes editing files manually and also creating/copying env files. Forms runtime reads its environment parameters from the *.env files. If these files are created manually, it is not possible to manage these files through EM.
You can check Doc ID 1223345.1 at Oracle Support for some clues about how to register manually created .env files with Enterprise Manager in Forms 11g. It stays true for the 12c version as well. The sequence, namely, is:

  1. Get a backup copy of the manually created env file and delete it from the OS
  2. Go to EM, "Environment Configuration" page of forms and using "Duplicate File" button create a new copy of this file form default.env
  3. Shutdown WLS_Forms
  4. Copy the backup you get in step#1 above on top of the new file generated in step#2 above
  5. Start the WLS_Forms once more
  6. Logout from Enterprise Manager and Login back

OS changes and start/stop operations are easy to automate. But when it comes to creating a new copy of the default.env file, it's not easy to do it from a command line unless you know what MBean to call.
You can use FMW Control application to find the following MBean: oracle.forms:type=FormsSystemComponent,name=FORMSInstanceManager
This MBean allows performing operations with Forms environment files.
As an example, here is a WLST script template you can use in an Ansible role to handle Oracle Forms environment files:

connect('{{ admin_user }}','{{ admin_pwd }}',url='t3://{{ admin_host }}:{{ admin_port }}')

edit()
startEdit()
fBean=ObjectName('oracle.forms:type=FormsSystemComponent,name=FORMSInstanceManager')
operationName='duplicateConfigFile'
params = ["{{ server_name }}","formsapp","default.env","{{ env_file }}","Type_default.env"]
sign = ["java.lang.String","java.lang.String","java.lang.String","java.lang.String","java.lang.String"]
mbs.invoke(fBean, operationName, params, sign)
activate()

Knowledge of the right MBean, together with Ansible, allows you to replace the manual configuration tasks with a simple playbook.
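A minimal play that renders the template above and runs it could look like this; the file names, the wlst.sh location, and the variable values are assumptions for illustration, and the remaining variables (admin credentials, server name) would come from the inventory or a vault:

- name: Duplicate Oracle Forms environment file
  hosts: forms_admin_host
  vars:
    oracle_home: /u01/oracle/middleware
    env_file: myapp.env
  tasks:
    - name: Render the WLST script from the template
      template:
        src: duplicate_env_file.py.j2
        dest: /tmp/duplicate_env_file.py
        mode: "0600"

    - name: Run the WLST script
      command: "{{ oracle_home }}/oracle_common/common/bin/wlst.sh /tmp/duplicate_env_file.py"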

]]>
<![CDATA[ Ansible: dynamic usernames ]]> https://chronicler.tech/ansible/ 5fd683254f0fc30da4e63ad9 Wed, 16 Dec 2020 08:24:00 -0500 Let me start with a small quiz: what username does Ansible use to access remote targets? What if you need to change it? Is it possible to make it dynamic? If you know the answers to all the questions, please hold my beer. I'm going to tell this story anyway.

In most cases, Ansible uses OS username to authenticate against targets. When you need to run commands with a different username, you have a few options. Let's talk them through.

Privilege escalation: This method works if you have sudo privileges on the target hosts. By default, Ansible uses root as the target user, but the become_user variable alters this behavior. If you put ansible_become_user in the inventory, Ansible picks it up automatically. If you have privileged access to the targets, it's the best option because you define a username for a single task. There are two issues: you have to be creative to alter the username for more than one task or for the whole play, and it doesn't work if you have limited sudo privileges.

Set SSH username: An alternative to the become_user approach. To use it, distribute the SSH public key from your account on the Ansible controller to the target devices, and you can connect directly to the target host using the username of your choice.

## Equivalent of: ssh target-host id
$ ansible target-host -a id

## Equivalent of: ssh oracle@target-host id
$ ansible target-host -u oracle -a id

I use this method in most cases due to limited sudo access in my current environment. Almost all automation books start with a play header similar to the one below.

---
- name: Some Oracle-related Play 
  remote_user: oracle
  hosts: target-host
  

 It's a straightforward, reliable method, but it comes with the price tags:

  • You should prepare all target hosts
  • You can't change username from command to command. It works for plays only.
  • Options for defining the username dynamically are limited.

Yet, there is a way to calculate username during the execution. Let me give you an example.    

Dynamic remote username. The vast majority of targets in my environment share the same usernames and groups, and I don't need to worry about the connection user, except for a small set of highly isolated machines with a totally different set of usernames. For the sake of illustration, let's say that my primary account is ofmwadmin, and for the secured targets, it's priv_ofmwadmin.

For the small installations or for a simple project, I would put this name straight into the inventory like this one:

  app_servers:
     hosts: 
       host1.apps.domain.com:
       host2.apps.domain.com:

  secured_servers:
     vars:
       ansible_user: priv_ofmwadmin 
     hosts: 
       host1.secure.domain.com:
       host2.secure.domain.com:
       

Now Ansible will switch username for the secured_servers group, but it's not dynamic enough because it will use the same user name for everyone who runs this project.

We maintain shared Ansible controllers with common plays and inventories, so the ansible_user value should be set to something similar to "priv_<current username>".
The ideal version would look like

  secured_servers:
     vars:
       ansible_user: "priv_{{ ansible_user }}" 
     hosts: 
       host1.secure.domain.com:
       host2.secure.domain.com:
       

Most languages would handle such an assignment, but Ansible fails with an endless loop at the first access to the server. It means we need another source for the current username. Fortunately, there is a way, and it may be quite handy in similar situations. Here is my working version with the lookup module:

  secured_servers:
    vars:
       ansible_user: "priv_{{ lookup('env','USER') }}"
    hosts:
       host1.secure.domain.com:
       host2.secure.domain.com:
    

This function gets the current username from the controller environment and uses it to calculate the connection username for the secured group at runtime.

]]>
<![CDATA[ Keycloak: Use Oracle Autonomous database ]]> https://chronicler.tech/keycloack-use-oracle-atp-service/ 5fa838d34f0fc30da4e6371e Thu, 12 Nov 2020 08:35:00 -0500 The Keycloak Server Installation and Configuration documentation recommends an external database to persist realms configuration. I decided to spice my life a little bit and configure it with the Oracle Autonomous Transaction Processing service. The configuration steps should be useful for any application deployed on WildFly/JBoss application server.

I have broken installation steps into three parts:

Database Service Preparation

For this sample installation, I have created a free tier transaction processing instance. It's eligible for the free tier, so I shouldn't worry about the license or operational costs for now.

The image is a screenshot with the database instance "Keycloack DB", provisioned as a free tier database.
Always Free Autonomous Database

Click on the instance name to get access to the instance-specific resources. To prepare the database, we need two things: the database connection descriptor and a database user for the Keycloak server.

Image depicts database instance description with two highlighted buttons "DB Connection" and "Service Console"
Database instance details

Click the "DB Connection" button to get all the necessary connection details. Don't be confused with the popup window title; it's not only a wallet but an archive with Oracle client configuration files. Select wallet type - Instance Wallet, for this database instance, or Regional if you have more than one database instance.

Please keep in mind that the wallet's certificates have a relatively short validity period, and you should rotate wallets to keep your applications connected.

To create a new database user, you need a few more clicks: click on the Service Console button, then, on the left side, click on the Development link, and finally select SQL Developer Web. Now you can provision users, assign roles, and do regular DBA tasks.

image displays part of the SQL Developer WEB interface with worksheet, query and results
SQL Developer WEB

The database part is ready for clients, and it's time to prepare Oracle Client.

Oracle Instant Client Configuration

You can find detailed instructions and how-tos all over the internet. So I'll keep it brief.

  • Download the latest Oracle Instant Client for your target platform. I configured Keycloak on Ubuntu instance, so Basic Package for Linux x86-64 is my choice. The client could be downloaded straight to the boxes with the direct archive links and wget utility.
  • Unpack the client archive on the server, and make sure that the server owner has read-only access to the Oracle client location.
  • Update the OS user environment with the configuration as below. Use your own database client location:
ORACLE_HOME=/opt/instantclient_19_9/
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH="${PATH}:${ORACLE_HOME}"
LD_LIBRARY_PATH="${ORACLE_HOME}:${LD_LIBRARY_PATH}"

export ORACLE_HOME TNS_ADMIN PATH LD_LIBRARY_PATH 
  • Create the $TNS_ADMIN folder if it does not exist.
$ source ~/.bash_profile
$ mkdir -p $TNS_ADMIN
  • Unpack the database connection archive under the $TNS_ADMIN/ folder
$ cd $TNS_ADMIN
$ unzip /tmp/Wallet_dbsid01.zip

Now the database client and environment are ready for the next step.

Application Server Configuration

WildFly is a highly modular application server, so if you want some additional functionality, you should add new modules and use them. I started with this good piece, posted by @AdamBien. To make it work with the Autonomous database service, I altered the database connection pool descriptor to use the JDBC/OCI driver instead of the JDBC/Thin layer.

The Oracle note Doc ID 2321763.1 describes the exact symptoms of the issue with JDBC/Thin drivers.

My final version of the JDBC pool configuration for the standalone Keycloak server:

<datasources>
  <datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true" statistics-enabled="${wildfly.datasources.statistics-enabled:${wildfly.statistics-enabled:false}}">
    <connection-url>jdbc:oracle:oci:@kclck01_high?TNS_ADMIN=/opt/instantclient_19_9/network/admin</connection-url>
    <driver>oracle</driver>
    <security>
       <user-name>KCLCK</user-name>
       <password>***********</password>
    </security>
    <pool>
      <max-pool-size>100</max-pool-size>
    </pool>
   </datasource>
.....
   <drivers>
     <driver name="h2" module="com.h2database.h2">
        <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
	 </driver>
	 <driver name="oracle" module="com.oracle">
        <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
     </driver>
   </drivers>
</datasources>
Standalone Keycloak Server Configuration 

It took a good half of my day to combine and compile all the steps and produce a working installation, but with the steps above, you can reproduce the same configuration in a matter of minutes.

]]>
<![CDATA[ Keycloak: configure frontend URL ]]> https://chronicler.tech/keycloack-configure-front-end-url/ 5fa83c6e4f0fc30da4e6373f Mon, 09 Nov 2020 08:30:00 -0500 It turns out that my free tier VMs are powerful enough to run a standalone Keycloak server. I'm not a big fan of direct instance access on high ports, so I decided to configure it behind the OCI load balancer, so my server would be available through a conventional URL and have all the appropriate certificates.

In case you already have a DNS name and the load balancer routes requests down to your application server ports, the configuration for WildFly and JBoss application servers is straightforward:

  1. Locate your profile configuration file; in my case, it is under /opt/keycloak-11.0.3/standalone/configuration.
  2. Open standalone.xml for editing and locate the hostname SPI entry shown below.
  3. Update the frontendUrl property so it contains your load balancer URL. Make sure you add /auth at the end.
<spi name="hostname">
    <default-provider>default</default-provider>
    <provider name="default" enabled="true">
      <properties>
	    <property name="frontendUrl" value="**https://idm.in-oci.com/auth**"/>
	    <property name="forceBackendUrlToFrontendUrl" value="false"/>
      </properties>
    </provider>
 </spi>
  4. Save changes and restart your standalone instance.

The original property value refers to the variable keycloak.frontendUrl, so another way to achieve the same result is to pass the parameter to the JVM:

$ export JAVA_OPTS="${JAVA_OPTS} -Dkeycloak.frontendUrl=https://idm.in-oci.com/auth"
$ /opt/keycloak-11.0.3/bin/standalone.sh
]]>
<![CDATA[ What I ignore in Ansible projects ]]> https://chronicler.tech/my-gitignore-for-ansible-projects/ 5f8df90d5368c91a255a053a Wed, 21 Oct 2020 08:46:00 -0400 Keeping your repository neat and clean is a mandatory requirement, the same as readable documentation and good, preferably useful, code. So, one of the first things I do in a new project is create a .gitignore file.

Naturally, different languages and projects have quite different project structures and file names, but I'm about to show you my list for Ansible Tower projects.

  1. I always skip all backups and runtime leftovers.
  2. I ignore entire roles/ folder content for the Ansible Tower projects, except the list of requirements.
  3. All files that may contain plain-text passwords. Keep passwords in vaults, guys.
  4. Most common binaries. Git is not the best repo for that kind of content.
# Ansible Tower ignore list

# Ansible runtime and backups
*.original
*.tmp
*.bkp
*.retry
*.*~

# Tower runtime roles 
roles/**
!roles/requirements.yml

# Try to avoid any plain-text passwords
*pwd*
*pass*
*password*
*.txt

# Exclude all binaries
*.bin
*.jar
*.tar
*.zip
*.gzip
*.tgz
Ansible Tower project .gitignore

Image source: https://www.flickr.com/photos/7729940@N06/6441399337

]]>
<![CDATA[ How to run Ansible playbooks à-la Tower ]]> https://chronicler.tech/fast-test-of-ansible-tower-plays/ 5f89c17c5368c91a255a0352 Tue, 20 Oct 2020 08:40:00 -0400 Our Red Hat Ansible projects are drifting toward Ansible Tower. I found that it is much faster to do quick fixes and debug playbooks from the server rather than go through the full chain VSCode -> Git -> Tower. Well, to be productive, I automated the automation.

There are a few differences between plain Ansible playbooks and Ansible Tower projects that should be covered:

  1. The script should dynamically identify the inventory. Our Tower projects include the inventory; we keep a lot of information in group and host variables, and it's always an inventory.yml file or an inventory folder with host and group variables.
  2. Update roles from the code repository. Our Tower projects refer to the custom role repository, using projected dependencies.
  3. Point to the appropriate Vault credentials. Tower keeps Vault and host credentials separately, so I want to keep using our vaulted variables in the command line as well.

There could be more controls to touch, but this short list is good enough to run Ansible Tower projects from the Ansible controller console with the script below.

#!/bin/sh
# Script Emulates Ansible Tower activities 
# and runs playbooks from the Ansible controller 

# Add project roles to the Ansible roles path
export ANSIBLE_ROLES_PATH="./roles:${ANSIBLE_ROLES_PATH}"

# Point to the Vault password file
# Outside project scope
export ANSIBLE_VAULT_PASSWORD_FILE=~/.mysecret/.vault.pwd

# Use project inventory if possible
if [ -z "${ANSIBLE_INVENTORY}" ]; then
 # Export in the current shell (not in a subshell) so ansible-playbook picks it up
 [ -e inventory* ] && export ANSIBLE_INVENTORY="$(ls -1|grep inventory)"
fi

# Refresh project roles 

ansible-galaxy install -r ./roles/requirements.yml -p ./roles -f

# Environment is set, ready to run 
ansible-playbook "$@"

Keep it along with your Ansible Tower project, and you can always run it from the command line, similar to the example below:

 [myself@ansible-ctr tower-prj]$ ./tower-project.sh -vv my-tower-task.yml

Image source is https://www.wallpaperflare.com/three-man-working-together-construction-site-construction-workers-wallpaper-zypij

]]>
<![CDATA[ Absolutely insane billing from Google Cloud Platform! ]]> https://chronicler.tech/absolute-instance-billing-from-google-cloud-platform/ 5f8783b95368c91a255a02ef Wed, 14 Oct 2020 19:20:00 -0400 Google Cloud Platform (aka GCP) has the most insane approach to charging that I've ever seen. I haven't experienced this with any other cloud service provider such as Amazon Web Services, Oracle Cloud Infrastructure, IBM Cloud, or Microsoft Azure.

Let's take a look at how GCP invoices you... then charges you.

GCP invoices you every calendar month, which is pretty standard, makes sense, and is the cleanest way to bill. But they auto-charge your credit card on a seemingly random cycle, based on when you were first charged.

Essentially, the auto-charged amount will never match the invoice.

In fact, the periods are not even consistent across months!

You can see why this is problematic:

  • It's practically impossible to determine if the charged amount is even accurate.
  • Your expense report will never get approved because the charged amount never matches the invoiced amount.

Google Billing Support seems to think this is completely normal and opts for a "trust us" attitude. When I requested documentation that correlates every charge to the service, I was merely pointed to this link and told to figure it out on my own.

I asked them for a full refund for all charges made in 2020 until I figure out if the charges are accurate. We'll see what they say (I'm not holding my breath).

And for your information, the photo on the top of this blog post is borrowed from 60 insane cloud formations from around the world. Which is exactly how I feel about GCP's billing practices.

]]>
<![CDATA[ Quick Certificate Validation ]]> https://chronicler.tech/quick-certificate-validation/ 5f6409745368c91a255a0242 Fri, 18 Sep 2020 08:40:00 -0400 Check certificate validity for a site; what could be easier? It's a click away if the address of interest is in your browser. Yet if you want to check expiration dates for multiple endpoints, you can't really do it from the browser.

Whenever it comes to security on a Linux box, you can't avoid OpenSSL. Still, you need to use it twice: once to get the certificate and once more to decode it. So, here is a small shell script that could save you some time.

#!/bin/sh
set +x
# Set parameters
hst=$1; shift
if [ $# -ge 1 ]; then
 prt=${1:-443}
 shift
else
 prt=443 
fi
# Extract Certificate Text
openssl  s_client -connect $hst:$prt $@ 2>/dev/null </dev/null  |\
 awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/{print $0}'|\
 openssl x509 -noout -text 
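
If only the validity window matters, the same pipeline can end with the standard -dates option instead of the full text dump (a small variation on the script above, not part of the published version):

# Print only the notBefore/notAfter dates for the endpoint
openssl s_client -connect $hst:$prt 2>/dev/null </dev/null |\
 awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/{print $0}'|\
 openssl x509 -noout -dates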

The shell script and description are available on GitHub. The script usage is straightforward, as on the screenshot below.

Site certificate in text 

And never forget about Server Name Indication, aka SNI. If you have a web server with multiple virtual hosts, reverse proxy software, or any other network appliance, the same host and port combination may produce different certificates.
Sample calls below to illustrate my point:

# Direct certificate request 
$ get-cert.sh 172.168.1.1 443 
# some certificate 
# And SNI request 
$ get-cert.sh 172.168.1.1 443 -servername virtual.server.com
# Completely different certificate 
]]>
<![CDATA[ Enforce HTTPS With OCI Load Balancer ]]> https://chronicler.tech/enforce-ssl-with-oci-loadbalancer/ 5f567a4c5368c91a255a00e8 Tue, 08 Sep 2020 08:30:00 -0400 Can you remember the last time you typed a URL with the protocol prefix? Modern browsers can test both HTTP and HTTPS and choose one for you. Yet it's the application's responsibility to enforce a secure connection. So, you should apply rewrite rules to make sure that all your clients use only encrypted connections. Let's see how you can achieve the same with Oracle Cloud's load balancer.

I presume that you already have a back-end server and an HTTPS listener with all certificates and mappings. So we start with a new rule set for your load balancer, specifically a new redirect rule:

  1. Open Networking -> Load Balancers and click on your load balancer to open the details page.
  2. From the Resources pane, select Rule Sets and then click the Create Rule Set button.
  3. Choose some name and select SPECIFY URL REDIRECT RULES checkbox.
  4. You cannot leave the SOURCE PATH field blank, so put the / character and change MATCH TYPE to Prefix Match.
  5. For the redirect, set PROTOCOL to HTTPS, PORT to 443, and RESPONSE CODE to 301.
  6. Click Create when you are ready
Form to create a new rule set for the Oracle Cloud Infrastructure load balancer
URL Redirect Rule Set

Click the Create Listener button. Give it a name and define the new HTTP protocol and port; select an existing back-end definition, since it won't serve any requests anyway.

New HTTP Listener configuration

When a new listener appears in the list, select Edit from the context menu on the right.

Click the + Additional Rule Set button and select the rule set you've just defined. Update the listener configuration and give it a few seconds to complete the work requests.

Edit Listener window allows you to update listener configuration and attach rules for routing and request handling
Apply Rule Set to HTTP Listener

Now the load balancer accepts both HTTP and HTTPS requests with no changes or rewrites on the back-end servers.

]]>
<![CDATA[ Is Oracle Cloud really that cheap? ]]> https://chronicler.tech/is-oracle-cloud-really-that-cheap/ 5f4fcecd5368c91a2559ff6d Wed, 02 Sep 2020 14:50:32 -0400 In January 2019, I conducted performance tests of various compute cloud providers, specifically against services from Amazon Web Services (AWS), Oracle Cloud Infrastructure (OCI), IBM Cloud, Google Cloud Platform (GCP), and Microsoft Azure.

Comparing Compute Cloud Performance Results

A summary of our January 2019 results can be found in this article here. Basically, performance on both AWS and OCI generally came out on top relative to the others, with AWS in some cases having a slight edge but only because the underlying CPU was a more recent model.

Here are some snippets from our performance results from January 2019:

Results of compute cloud performance testing (host, database, application) - January 2019

The scope of this testing covered host, database server, and application server on identically sized servers. There are indeed testing limitations and certain disclaimers (ask me if you want to know more).

Generally speaking, my conclusions were that performance for medium-sized compute cloud footprints is not a driver in cloud provider selection (with the exception of Azure).

However, other non-performance related factors can affect the overall experience:

  • Consider Amazon Web Services to experience the least amount of issues
  • Consider Oracle Cloud for cost reasons
  • Consider alternatives to Google Cloud for support reasons

New Testing Planned in 2020

Now, 1.5 years later, I'm repeating the same exercise with a few other colleagues to see how performance of compute has evolved during that time.

Completely new instances were configured. But testing has been parked for some time due to other priorities we had going on. To cut down cost in the meantime, all compute services were turned off (and have been down for 4-5 months), so I was only paying for storage and some other odd expenses.

My Tweet

So I recently took a look at my last invoice on each cloud service provider and realized that I'm paying considerably less on Oracle Cloud than the other providers, and hence published this tweet.

My blasphemous tweet

This blog post is intended to explain where these numbers exactly came from.

Specifications of the Provisioned Servers

In 2020, in preparation for our second round of testing, we attempted to create comparable medium-sized virtual servers across these cloud providers, and the table below documents the profile/type/shape selected.

Similar to our January 2019 testing, we recognize that it's near impossible to match hardware across the different cloud providers, so variances are expected.

Virtual servers were configured as identically as possible

Actual Cloud Provider Invoices

As stated earlier, I've shut down all compute cloud services, and they've been down for the past 4-5 months. Below are actual invoices from each provider for the month of July.

Oracle Cloud simply charged me for storage, with a total cost of $11.51. I'm not sure where the $3.01 compute charge came from, considering the instances were down.

The AWS invoice doesn't specifically break down the cost, but below this screenshot is another one from the billing console that shows the breakdown. Apparently there was a snapshot taken (not much, only $1.51) and I was charged a nominal amount for my elastic IP.

I have had endless billing issues with IBM Cloud, and every time I complain (evidence ready on-hand), they end up simply refunding me the entire month. Below, IBM Cloud billed me for compute, even though the instances were down. But generally speaking, if you exclude the billing error, it appeared I was being charged for storage and a static IP and my calculation ended up being $61.19 based on the breakdown on the billing console.

Google Cloud Platform billing was pretty straightforward. Apparently I had a snapshot for $0.13. I was charged for storage and IP addresses, for a total cost of $53.53.

As for Microsoft Azure, the invoicing is straightforward and I was charged $56.49.

Conclusion

I communicated my findings in my tweet based on my experiences with each of these cloud providers. The reality is, from a core IaaS perspective, Oracle Cloud is simply considerably cheaper than their competition. Look up the cost of each of the profiles/types/shapes. The cost of each service is publicly available on their websites, so none of this is a secret. Oracle Cloud is lower in cost by 33-44% for compute cloud infrastructure.

]]>
<![CDATA[ OutOfMemoryError when running bsu.sh ]]> https://chronicler.tech/outofmemoryerror-when-running-bsu-sh/ 5f4ee3cd5368c91a2559ff2f Tue, 01 Sep 2020 20:24:21 -0400 I was trying to apply a CPU (Critical Patch Update) on an older version of Oracle WebLogic Server, namely 11g (10.3.6). This is done through the bsu utility.

I received the following OutOfMemory exception when trying to apply the patch:

oracle@weblogicdev:/u01/oracle/middleware/utils/bsu> ./bsu.sh -install -patch_download_dir=/u01/oracle/middleware/utils/bsu/cache_dir -patchlist=FMJJ -prod_dir=/u01/oracle/middleware/wlserver_10.3
Exception in thread "Main Thread" Exception in thread "Thread-0" java.lang.OutOfMemoryError
java.lang.NoClassDefFoundError: com/bea/plateng/patch/PatchSystem
        at com.bea.plateng.patch.PatchClientHelper.getAllPatchDetails(PatchClientHelper.java:74)
        at com.bea.plateng.patch.PatchInstallationHelper.cleanupPatchSets(PatchInstallationHelper.java:130)
        at com.bea.plateng.patch.PatchTarget.(PatchTarget.java:272)
        at com.bea.plateng.patch.PatchTargetFactory.create(PatchTargetFactory.java:30)
        at com.bea.plateng.patch.ProductAliasTarget.constructPatchTargetList(ProductAliasTarget.java:88)
        at com.bea.plateng.patch.ProductAliasTarget.(ProductAliasTarget.java:46)
        at com.bea.plateng.patch.ProductAliasTargetHelper.getProdAliasTargetList(ProductAliasTargetHelper.java:55)
        at com.bea.plateng.patch.ProductAliasTargetHelper.getAllHomeToProdAliasesTargetMap(ProductAliasTargetHelper.java:32)
        at com.bea.plateng.patch.ProductAliasTargetHelper.checkProfilesInProductAliases(ProductAliasTargetHelper.java:133)
        at com.bea.plateng.patch.Patch$1.run(Patch.java:376)
        at java.lang.Thread.run(Thread.java:662)

To resolve this, you can simply edit the bsu.sh file and increase the memory settings.

Old settings:

#!/bin/sh

JAVA_HOME="/u01/oracle/JRockit"

MEM_ARGS="-Xms256m -Xmx512m"

"$JAVA_HOME/bin/java" ${MEM_ARGS} -jar patch-client.jar $*

New settings:

#!/bin/sh

JAVA_HOME="/u01/oracle/JRockit"

MEM_ARGS="-Xms2048m -Xmx2048m"

"$JAVA_HOME/bin/java" ${MEM_ARGS} -jar patch-client.jar $*
]]>
<![CDATA[ Speeding up WLST ]]> https://chronicler.tech/speeding-up-wlst/ 5f37daec060b7e2fba0bef2b Mon, 17 Aug 2020 08:35:00 -0400 Back in the Java 7 days, when your WebLogic domain ran like a snail on a sunny meadow, you knew that you had forgotten to fix your random source. It is not quite a safe solution, plus it doesn't work for Java 8.

One of my recent installations quite annoyed me with 15-minute startups, so I decided to google around and see if there are other solutions with no security compromise. The JDK uses /dev/random or /dev/urandom to fetch random values for secure sessions, temporary keys, and who knows what else. Now, if system activity is low, the generator's random number pool is shallow and holds back all domain activities.
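
A quick way to confirm that a shallow entropy pool is the culprit (on a Linux host) is to check the kernel's available entropy; persistently low values, a few hundred or less, during domain startup usually point at /dev/random starvation:

cat /proc/sys/kernel/random/entropy_avail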

The right question always contains the answer, so to improve performance, you should keep the random number pool loaded, and Linux has just the service for that. With the commands below, I boosted domain performance at least tenfold.

# rng-tools provides the rngd entropy-gathering daemon
sudo yum install rng-tools
sudo systemctl enable rngd 
sudo systemctl start rngd

 Enjoy your secure and yet fast environment.

]]>
<![CDATA[ Using Ansible to connect to Oracle Dynamic Monitoring Service (DMS) ]]> https://chronicler.tech/using-ansible-to-connect-to/ 5f0a7e51060b7e2fba0becba Sun, 12 Jul 2020 00:55:06 -0400 The Oracle Dynamic Monitoring Service (DMS) enables Oracle Fusion Middleware components to provide administration tools, such as Oracle Enterprise Manager, with data regarding the component's performance, state and on-going behaviour. It is the best tool to use if you want to have a scripting solution to monitor the states of your WebLogic domain servers, and all you need to have is the ability to connect over HTTP, and a username/password pair. Imagine that you could orchestrate a sequence of action to start/stop different components of your OFMW environment based on the state of other servers they are dependent upon.

If Ansible is a tool you use for orchestration of your environment startup, here is what you have to do:

  • log in to a form-based webpage
  • download a metric table of your choice as an xml file
  • use xpath to retrieve the text or attribute that you need

The DMS application implements the Java EE security container login method j_security_check for authentication. The way you log in to DMS with the Ansible uri module is to use the following URL:

url: "http://{{ admin_host }}:{{ admin_port }}/dms/j_security_check"

The authentication form expects two parameters, j_username and j_password, and upon successful authentication it returns a status code that differs depending on the version of Oracle FMW.

For the 12c version, you can define a default variable:

stat_code: 303

For the 11g version, you have to use code 302. So, assuming that you have the fmw_version variable defined with the value "11g":

- name: set status code
  set_fact:
    stat_code: 302
  when: fmw_version == '11g'

Once you are logged in, you connect to the DMS application using the previously stored cookie to retrieve an XML file. Your tasks would look like:

---
- name: login to DMS application
  uri:
    url: "http://{{ admin_host }}:{{ admin_port }}/dms/j_security_check"
    method: POST
    body_format: form-urlencoded
    body:
      j_username: "{{ admin_user }}"
      j_password: "{{ admin_pwd }}"
    status_code: "{{ stat_code }}"
  register: login
  
- name: DMS Spy table retrieval
  uri:
    url: "http://{{ admin_host }}:{{ admin_port }}/dms/index.html?format=xml&cache=false&prefetch=false&table={{ dmstbl }}&orderby={{ dmsord }}"
    method: GET
    return_content: yes
    dest: "{{ tmp_path }}/dms.xml"
    headers:
      Cookie: "{{ login.set_cookie }}"
 ...

If you want to get information about your servers' state, here is how you define your variables for the metric table:

dmstbl: weblogic.management.runtime.ServerRuntimeMBean
dmsord: Name

Finally, you have to use the xml Ansible module to get the value of the metric you're looking for. Again, there is a difference depending on the version of Oracle FMW. For the 12c version, your resulting XML file uses namespaces that you have to present to XPath. Assuming that you store your WebLogic Admin server name in the "admin_name" variable, and the managed servers in the "mservers" dictionary:

---
- name: "Read {{ admin_server }} status"
  xml:
    path: "{{ tmp_path }}/dms.xml"
    namespaces:
      ns: http://www.oracle.com/AS/collector
      xsi: http://www.w3.org/2001/XMLSchema-instance
    xpath: "/ns:tbml/ns:table/ns:row[ns:column/@name='{{ dmscolsearch }}' and ns:column[contains(.,'{{ admin_name }}')]]/ns:column[@name='{{ dmscoltarget }}']"
    content: text
  register: xmlresp_admin

- name: Read managed servers status
  xml:
    path: "{{ tmp_path }}/dms.xml"
    namespaces:
      ns: http://www.oracle.com/AS/collector
      xsi: http://www.w3.org/2001/XMLSchema-instance
    xpath: "/ns:tbml/ns:table/ns:row[ns:column/@name='{{ dmscolsearch }}' and ns:column[contains(.,'{{ server }}')]]/ns:column[@name='{{ dmscoltarget }}']"
    content: text
  register: xmlresp_managed
  ignore_errors: yes
  loop: "{{ mservers.keys()|trim }}"
  loop_control:
    loop_var: server
  
- name: Show admin server status
  debug:
    msg: "{{ admin_name }} status is {{ xmlresp_admin.matches[0]['{http://www.oracle.com/AS/collector}column'] }}"
  
- name: Show managed servers status
  debug:
    msg: "{{ itr.server }} status is {{ itr.matches[0]['{http://www.oracle.com/AS/collector}column']|default('DOWN') }}"
  loop: "{{ xmlresp_managed.results }}"
  loop_control:
    loop_var: itr
    label: "{{ itr_server }}"
...

You would need to define the following variables for your XPath search to work, pointing it to the column attribute used to search for a particular server name and to the column holding the resulting value:

dmscolsearch: Name
dmscoltarget: State

The 11g version is almost identical, except that you don't need to define namespaces.

---
- name: "Read {{ admin_server }} status"
  xml:
    path: "{{ tmp_path }}/dms.xml"
    xpath: "/tbml/table/row[column/@name='{{ dmscolsearch }}' and column[contains(.,'{{ admin_name }}')]]/column[@name='{{ dmscoltarget }}']"
    content: text
  register: xmlresp_admin

- name: Read managed servers status
  xml:
    path: "{{ tmp_path }}/dms.xml"
    xpath: "/tbml/table/row[column/@name='{{ dmscolsearch }}' and column[contains(.,'{{ server }}')]]/column[@name='{{ dmscoltarget }}']"
    content: text
  register: xmlresp_managed
  ignore_errors: yes
  loop: "{{ mservers.keys()|trim }}"
  loop_control:
    loop_var: server
  
- name: Show admin server status
  debug:
    msg: "{{ admin_name }} status is {{ xmlresp_admin.matches[0]['column'] }}"
  
- name: Show managed servers status
  debug:
    msg: "{{ itr.server }} status is {{ itr.matches[0]['column']|default('DOWN') }}"
  loop: "{{ xmlresp_managed.results }}"
  loop_control:
    loop_var: itr
    label: "{{ itr_server }}"
...
]]>
<![CDATA[ TTL expired in transit on Windows client ]]> https://chronicler.tech/ttl-expired-in-transit-on-windows-client/ 5ef5fb8a060b7e2fba0be8ae Fri, 26 Jun 2020 10:54:33 -0400 Normally I get speeds in excess of 100 Mbps on my home wifi on both uploads and downloads.

Speeds during normal operation

This morning, it was down to 6-10 Mbps and I couldn't figure out why. I tried connecting to my router's gateway at 192.168.1.1 but was unsuccessful.

Pinging it returned an odd TTL expired in transit error.

Strangely enough, other laptops and mobile devices on the network were having no problems at all. I tried connecting to another wifi network in my home (yes, I have 3 separate wifi networks with 3 separate routers, but that's a story for a different day), but the results were the same. I even tried restarting my laptop and resetting the Windows network settings, and neither helped. Clearly the problem had something to do with my laptop, not the network.

Online searches returned recommendations of doing a hard reboot of the router or removing the offending rule in the routing table.

I dumped the contents of the routing table via the route print command and compared the results on two separate laptops on the same network. Both were identical. So it wasn't a routing rule issue.

It was only when I did a tracert to the default gateway via tracert 192.168.1.1 did I immediately recognize the problem.

I had installed a Citrix client the day before, and apparently it had still been running and had overridden some routing rules. Simply closing it took care of everything.

In the screenshot below, you can see the tracert results before killing the Citrix process and after.

]]>
<![CDATA[ Running into IntegratedWebLogicServer issues for Windows users with spaces ]]> https://chronicler.tech/running-into-integratedweblogicserver-issues-for-windows-users-with-spaces/ 5ef0e652060b7e2fba0be803 Mon, 22 Jun 2020 13:36:48 -0400 When attempting to start the IntegratedWebLogicServer in Oracle JDeveloper 12.2.1.3, I received the following error:

Error: Could not find or load main class Aboulnaga\AppData\Roaming\JDeveloper\system12.2.1.3.42.170820.0914\DefaultDomain

And here's a screenshot from the IDE:

Analysis

Turns out the issue is that my Microsoft Windows user "Ahmed Aboulnaga" contains a space, which threw off a lot of the directories in many of the scripts.

So I did a recursive search for all *.cmd files under the newly created DefaultDomain that contain the string "Ahmed Aboulnaga", and here's what I found:
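
A hypothetical way to run that search from a command prompt (the domain path is illustrative; findstr's /s, /i, and /m switches search subdirectories, ignore case, and print only matching file names):

cd %APPDATA%\JDeveloper\system12.2.1.3.42.170820.0914\DefaultDomain
findstr /s /i /m "Ahmed Aboulnaga" *.cmd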

Here's an example of one of the scripts, and you can see the subfolder/user that is causing the issue in question. Note that this is only applicable to JDeveloper running on Windows-based workstations:

First Attempt (did not work)

I edited each of the *.cmd files identified and qualified them with double quotes as shown:

Normally this takes care of the spaces in other Windows scripts, but JDeveloper threw a new error:

The system cannot find the path specified.

And here's the snippet from the IDE log output:

Clearly it's not finding Java because the path looks wrong, so instead of trying to troubleshoot this further, I opted for an alternate solution.

Second Attempt (worked!)

Windows allows you to reference files and folder names by their long name or their short name for this exact reason. I ran the dir /X command to find out what the short name for this particular folder was:
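
A hypothetical example of what the dir /X output looks like; the short 8.3 name appears in the column before the long name (values below are illustrative, not from the original screenshot):

C:\Users> dir /X
06/22/2020  09:00 AM    <DIR>          AHMEDA~1     Ahmed Aboulnaga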

Now I edited each of the *.cmd files for a second time and replaced all references as follows:

Success!


]]>
<![CDATA[ Can't create OSB Business Service in JDeveloper 12.2.1.3 ]]> https://chronicler.tech/cant-create-osb-business-service-in-jdeveloper-12-2-1-3/ 5ef0abd6060b7e2fba0be7c7 Mon, 22 Jun 2020 09:13:54 -0400 I ran into an error recently where I wasn't able to create a Business Service when developing an Oracle Service Bus (OSB) project in Oracle JDeveloper 12.2.1.3.

The error received was this:

Failed to generate the business service
Can't set the text value, current token can have no text value

And here's a screenshot of the popup:

Apparently, this error is not reproducible in JDeveloper 12.2.1.2, and Oracle identified it as a bug in Oracle Doc ID 2323781.1 as being specific to the Database Adapter.

The solution is to apply patch 26851310 in the local Oracle JDeveloper installation. Simply unzip the patch under ORACLE_HOME\OPatch, then change into that directory and run opatch.bat apply.
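
For reference, a hypothetical command sequence on a Windows workstation; the patch archive and directory names below are placeholders, so substitute the file you actually downloaded:

cd %ORACLE_HOME%\OPatch
REM Unzip the downloaded patch archive here; it creates a 26851310 subdirectory
cd 26851310
..\opatch.bat apply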

Success!

]]>
<![CDATA[ Easy JDK update ]]> https://chronicler.tech/jdk-update/ 5eee9a53060b7e2fba0be71b Mon, 22 Jun 2020 08:35:00 -0400 Working on the integrated WebLogic server configuration, I ran into a very neat Middleware 12c feature. The primary fix for the issue is to use an old JDK8. Once you have JDeveloper and the domain configured, you can upgrade your installation to the desired JDK version with the simple steps below.

Configuration time JDK
  1. Install the new JDK8 and mark its location
  2. In the terminal window, change the current folder to the OUI binaries folder. Make corrections according to your operating system.
cd $ORACLE_HOME/oui/bin
  3. Identify the current JDK location with the command
./getProperty.sh JAVA_HOME
  4. Save it as a new property
./setProperty.sh -name OLD_JAVA_HOME -value <JDK path from step 3>
  5. Configure your middleware home with the new JDK
./setProperty.sh -name JAVA_HOME -value <your new JDK8 path>

Start JDeveloper and make sure that you run on the desired JDK version.  

]]>
<![CDATA[ DefaultDomain in SOA Studio 12.2.1.4 ]]> https://chronicler.tech/internalserver-soa-studio-12-2-1-4/ 5eed05b5060b7e2fba0be410 Sat, 20 Jun 2020 08:56:36 -0400 The Cloud drift has impacted all Oracle on-premises products and tools. The far-from-ideal SOA Studio 11g was followed by half-baked 12c releases. I always struggle to configure the internal WebLogic 12.2.x server and domain. Finally, Oracle Support helped me complete the SOA/BPM Studio 12.2.1.4 configuration and make this circus fly. Let me save you some time and neurons.

  1. First of all, you would need a JDK8. Don't even think about 1.8.0_251 from the April CPU, just go to the Java Archives and download 1.8.0_141. You can upgrade your JDK later; for now, use the old one.
  2. Complete the binaries installation and configuration. The quick setup is pretty straightforward and looks like any other Oracle Middleware installation. If you have never installed any Oracle product, you can find plenty of how-to instructions everywhere.
  3. If you have a shared installation (a Linux VM with multiple users), make sure that your account has write access to the $ORACLE_HOME folders.
  4. Open your $ORACLE_HOME/jdeveloper/ide/bin/jdk.conf and add a new line: AddVMOption -Dfile.encoding=UTF8
  5. I hope you have an Oracle Support account and can install the latest 12.2.1.4 patches: WebLogic 12.2.1.4 update 30970477 and SOA 12.2.1.4 bundle patch 30995852. In that case, please don't forget the Coherence update 30729380; it will save you plenty of time.

Now, you are prepared for the internal configuration. I'm not sure if the patches are essential, but it's always better to have updated software.

Update: Using the outdated JDK is not the best idea. During startup, the WebLogic domain urges you to upgrade to at least JDK 1.8.0_181. Fortunately, Oracle has a documented and painless JDK update approach.

 


Image Source: WikiMedia
]]>
<![CDATA[ Too much security: Is it a thing? ]]> https://chronicler.tech/xfi-advanced-security/ 5eeba477060b7e2fba0be346 Fri, 19 Jun 2020 08:57:32 -0400 Recently I started a new training class with Red Hat Connect, the training program for partners. What I especially love about Red Hat classes is that you always get an environment to mess around with. Nothing was different from any previous cases, except this time my new lab was partially unavailable.

The browser behavior was quite odd. I click on the link, glimpse the application page, and then get a protocol error page.

Security protocol error in browser
Any browser will show you this 

I tried all available browsers, on three different machines on two platforms, with no luck. I contacted Open TLC support, and they convinced me that the applications were up and available and that they had no issues accessing my servers. That was extremely odd. It had to be some policy or security filtering, because all devices demonstrated the same behavior in my home network.
Wait a minute, in the home network!!! I turned off WiFi on my smartphone, and I got the login page instead of an error. The same happened when I switched my primary laptop to the mobile hotspot.
Now I was troubled; I'm quite familiar with the standard Comcast Xfinity admin page, and nothing there could cause such behavior. Well, there is always room for a miracle. After a short investigation, I found that the new xFi "offers" you the Advanced Security service. Do you see the URL on the screenshot below?

xFi Advanced Security configuration page
Make sure you know what you are doing.

That's right, it's not visible on your modem page, and that's pretty much all the configuration there is for this service. You can only turn it on or off: "Mommy knows best."

Don't get me wrong: I don't urge you to turn it off entirely or start shopping for a new double-play package. What I want you to take away from this:

  • Don't blame application providers on the spot; your home network could be full of surprises.
  • Think twice before you decide to switch off this service permanently. Judging by the reports in my console, it's rather useful.
  • If you turn it off and then turn it back on, there is about a 10-minute delay before it starts protecting again.
]]>
<![CDATA[ Red Hat Ansible: Yes and No, True and False ]]> https://chronicler.tech/red-hat-ansible-yes-no-and/ 5ecbdd14060b7e2fba0be01c Thu, 28 May 2020 08:35:00 -0400 During a code review, my colleague and I fixed a few ageless Boolean variable issues. If you have ever done even a humble-sized JavaScript project, you know what I mean. What do Red Hat Ansible and JavaScript have in common? Well, they do not have types. Of course, they do, but the variable type is fluid and always depends on the context. Ansible scripts do not have type declarations because Ansible is built on top of YAML, combined with Jinja2 templates, and Python is the core language for everything.

In practice, it means you should be quite mindful and consistent in your coding style, because 1, True, yes, and "false" look quite different but have the same logical meaning, and may not mean what you expected.

The table below summarizes how Ansible interprets variable values. It may be surprising for developers with a Java or C/C++ background.

Test Outcome: True
  • Any positive number >0: 1, 2, ...
  • Non-empty string: "1", "True", "False", 'Nonsense'
  • Boolean value (unquoted): True, true
  • Ansible-specific (case-agnostic): Yes, yes

Test Outcome: False
  • Any number <=0: 0, -1, ...
  • Empty string: "", ''
  • Boolean value (unquoted): False, false
  • Ansible-specific (case-agnostic): No, no

Even after the significant unification in Oracle Fusion Middleware 12c, the WLST scripting language still adds more fun with its mix of Boolean and String parameters. The only way to keep it under control is to develop and (even more important!) follow coding guidelines. There are a few that I've found useful:

  • Always quote String values. This applies to everything: variable values, task or play names, static strings, everything. I use double quotes for strings and single quotes for in-line quotes and templates.
  • For Ansible scenarios, use only yes or no as a Boolean value. It brings some overhead in templates but pays off in the long run.
  • If you prefer classic True/False, use them capitalized and do not put any quotes.
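
One more safeguard, assuming a reasonably recent Ansible release, is to force the conversion with the standard bool filter when you test a value; with it, the quote_false: "false" variable from the playbook below evaluates to False instead of being treated as a non-empty (and therefore truthy) string:

- debug:
    msg: "quote_false with a plain truth test: {{ 'True' if quote_false else 'False' }}; with the bool filter: {{ quote_false | bool }}"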

This small playbook illustrates how Ansible converts variable values to a Boolean value.

---
- hosts: localhost
  vars:
    upper_bool: True
    lower_bool: false
    quote_false: "false"
    empty_str: ''
    yes_var: yes
    yes_str: "yes"
    cap_yes: Yes
    num_yes: 1
    num_no: 0
  tasks:
    - debug: msg="upper_bool [True] {{ ':' }} Logical value is {{ 'True' if upper_bool else 'False' }}"
    - debug: msg="lower_bool [false] {{ ':' }} Logical value is {{ 'True' if lower_bool else 'False' }}"
    - debug: msg="quote_false [\"false\"] {{ ':' }} Logical value is {{ 'True' if quote_false else 'False' }}"
    - debug: msg="emty_str [ \'\' ] {{ ':' }} Logical value is {{ 'True' if empty_str else 'False' }}"
    - debug: msg="yes_var [yes]  {{ ':' }} Logical value is {{ 'True' if yes_var else 'False' }}"
    - debug: msg="yes_str [\"yes\"] {{ ':' }} Logical value is {{ 'True' if yes_str else 'False' }}"
    - debug: msg="cap_yes [Yes] {{ ':' }} Logical value is {{ 'True' if cap_yes else 'False' }}"
    - debug: msg="num_yes [1] {{ ':' }} Logical value is {{ 'True' if num_yes else 'False' }}"
    - debug: msg="num_no [0] {{ ':' }} Logical value is {{ 'True' if num_no else 'False' }}"
...

ansible-playbook 2.6.3

  config file = /home/user/ansible.cfg

  configured module search path = [u'/opt/ansible/modules']

  ansible python module location = /usr/lib/python2.7/site-packages/ansible

  executable location = /usr/bin/ansible-playbook

  python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

Using /home/user/ansible.cfg as config file

PLAYBOOK: true-false-yes-no.yml ************************************************************

1 plays in true-false-yes-no.yml

PLAY [localhost] ***************************************************************************

TASK [Gathering Facts] *********************************************************************

task path: /home/user/true-false-yes-no.yml:2

ok: [localhost]

META: ran handlers

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:14

ok: [localhost] => {

    "msg": "upper_bool [True] : Logical value is True"

}

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:15

ok: [localhost] => {

    "msg": "lower_bool [false] : Logical value is False"

}

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:16

ok: [localhost] => {

    "msg": "quote_false [\"false\"] : Logical value is True"

}

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:17

ok: [localhost] => {

    "msg": "emty_str [ '' ] : Logical value is False"

}

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:18

ok: [localhost] => {

    "msg": "yes_var [yes]  : Logical value is True"

}

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:19

ok: [localhost] => {

    "msg": "yes_str [\"yes\"] : Logical value is True"

}

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:20

ok: [localhost] => {

    "msg": "cap_yes [Yes] : Logical value is True"

}

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:21

ok: [localhost] => {

    "msg": "num_yes [1] : Logical value is True"

}

TASK [debug] *******************************************************************************

task path: /home/user/true-false-yes-no.yml:22

ok: [localhost] => {

    "msg": "num_no [0] : Logical value is False"

}

META: ran handlers

META: ran handlers

PLAY RECAP *********************************************************************************

localhost                  : ok=10   changed=0    unreachable=0    failed=0


Image author: @geralt (pixabay.com)
]]>
<![CDATA[ ER_MUST_CHANGE_PASSWORD_LOGIN on Ghost blog platform ]]> https://chronicler.tech/er_must_change_password_login-on-ghost-blog-platform/ 5ecc198f060b7e2fba0be094 Mon, 25 May 2020 15:22:41 -0400

It seems that our Ghost blog platform received the following error when I tried to publish a post:

Internal server error, cannot list posts. ER_MUST_CHANGE_PASSWORD_LOGIN: Your password has expired. To log in you must change it using a client that supports expired passwords.

See screenshot below:

At first glance, I thought it was related to my own Ghost user account. Turns out it was the underlying database user.

To reset the MySQL password, run the following commands:

$MYSQL_HOME/bin/mysql -u ghost_db_user -p"oldpassword" -h localhost -P 3306 ghost_db

SET PASSWORD = PASSWORD('newpassword');
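
Depending on the client version, a non-interactive login may still be rejected because of the expired password; recent mysql clients provide a flag for exactly this situation (treat it as optional, only needed if the plain login above fails):

$MYSQL_HOME/bin/mysql --connect-expired-password -u ghost_db_user -p -h localhost -P 3306 ghost_db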
]]>
<![CDATA[ Don't kid yourself - you're not a Cloud Architect ]]> https://chronicler.tech/dont-kid-yourself-youre-not-a-cloud-architect/ 5ebff2ce060b7e2fba0bdb4f Sun, 24 May 2020 19:43:27 -0400 With the boom in cloud adoption, many IT professionals are looking to formalize their new skillsets by acquiring various cloud certifications. Last week, I took advantage of Oracle's free training and exam offer and successfully passed the Oracle Cloud Infrastructure Foundations 2020 Certified Associate and Oracle Cloud Infrastructure 2019 Architect Associate certifications (link to Acclaim badges below).

Now each of the other major cloud providers offers its own version of a Cloud Architect certification, be it the AWS Certified Solutions Architect Associate, IBM Certified Architect - Cloud Solutions v3, Google Professional Cloud Architect, or Microsoft Certified: Azure Solutions Architect Expert. Concepts are generally the same, but each requires specific knowledge of their own cloud implementation.

Now my recent exam experiences got me thinking. Most IT professionals are far from qualified in becoming true Cloud Architects. There, I said it.

Though the cloud has simplified a lot of the traditional know-how when it comes to implementing a lot of its infrastructure configuration, core knowledge in areas of networking, firewall, infrastructure, security, system administration, hardware, and even budgeting are all required. Most people haven't even dabbled in more than two of the above in their entire professional careers.

Prior to the advent of cloud, have you ever configured a load balancer before? Modified routing tables? Walked into a data center and cabled machines? Set up RAID? Encrypted disks? Vertically scaled hardware? Created a network architecture diagram? Automated and scripted server provisioning? Modified firewall rules? Estimated, priced, and purchased hardware? Installed, configured, and monitored a hypervisor?

I'm not saying that you need all of the above to pass a certification exam, but these are the kind of skills needed to become a truly qualified and competent Cloud Architect. Some people have made careers out of simply setting up F5 Networks load balancers, let alone all the other stuff.

Don't be the guy who botches the architecture

In the end, the cloud is just a data center. System administrators and network engineers are probably good candidates to become Cloud Architects. They have most of the fundamental knowledge needed in setting up data centers. DBAs and developers? Not so much. But as IT professionals, we should strive to familiarize ourselves in cloud concepts and technologies because our future work will require it.

This by no means is meant to discourage your pursuit of cloud credentials and a career in cloud implementations. Far from it. All I'm saying is that a DBA might benefit by focusing on becoming a Cloud Database Architect instead. And a solid transition for a developer could be a Cloud Developer Architect. These are natural extensions to your existing skillsets and are areas that systems administrators and network engineers simply have no knowledge of. Think DBaaS, Kubernetes, DevOps, and serverless computing.

Remember the questions above? Yes, I've done it all (in varying degrees obviously). But am I personally qualified to become a competent Cloud Architect? More than some, less than others.

Whatever you decide to pursue, keep in mind that fundamentals are essential, certifications are just a stepping stone, and there is no substitute for hard work. Good luck!

]]>
<![CDATA[ What you need to pass the OCI Architect Associate certification exam ]]> https://chronicler.tech/passing-the-oci-architect-exam/ 5ecae6bf060b7e2fba0bde6d Sun, 24 May 2020 19:26:52 -0400 In March 2020, Oracle announced free access to online learning content and certifications to a broad array of Oracle Cloud certifications for about 2 months (details here). Literally tens of thousands of professionals took advantage of this excellent offer, causing Oracle to further extend the deadline of the offering.

This was a smart move by Oracle to garner interest in their cloud platform.

In this post, I describe what I did to pass the Oracle Cloud Infrastructure Architect Associate exam. This is the first of two Cloud Architect level certifications from Oracle, the next being the Architect Professional certification.

Keep in mind that I have strong opinions on those looking to become Cloud Architects (see my rants here). Regardless, having hands-on experience with Oracle Cloud, though not a necessity, will give you around 30% of the knowledge you need to pass the exam. 50% of the knowledge will come from the excellent online videos curated by Rohit Rahi from Oracle. The remaining 20% is where your professional experience will come in handy.

What is the Oracle Cloud Infrastructure (OCI) Architect Associate certification?

This certification tests your skills on topics that include: cloud computing concepts (HA, DR, security), regions, availability domains, terminology, services, networking, databases, load balancing, compartments, and so on, as it pertains to OCI.

Who is this certification designed for?

The Oracle Cloud Infrastructure Architect Associate exam is designed for individuals who possess strong foundational knowledge in architecting infrastructure using OCI services.

Those with strong data center, network engineering, and systems administration experience are ideal candidates for this certification.

What is the exam number?

1Z0-1072: Oracle Cloud Infrastructure 2019 Architect Associate certification.

Is hands-on experience needed to pass the exam?

No. But it will possibly be that factor that determines whether you pass or fail.

How do you pass the exam?

Below are 6 things I personally did to prepare for the exam.

1. Watch the free online training videos.

The videos are the most important aspect of your preparation. The speaker does an excellent job talking through concepts and walking through a lot of hands-on exercises.

Turn up the speed to 1.5x at least. For those who can keep up, you can even try 2x speeds. This will reduce the time to complete the videos from 9 hours to 6 or even 5 hours.

The speaker talks slowly, so watch at 1.5x speed (or even 2x if you can handle it)

2. Take screenshots of key slides from the online videos.

Take screenshots of relevant information from the online videos. Some of the information is particularly useful and will come in handy later when reviewing.

Sample screenshot from one of the slides from the online video tutorials

3. Copy the transcript of each video into a Microsoft Word document.

Each video is fortunately transcribed into text. Copy each transcription into a document, because this will come in handy when creating notes and studying later on.

All videos are transcribed into text

4. Review your notes and highlight key areas.

Now your Microsoft Word Document will likely exceed 150+ pages. As for my personal style, I ended up bolding portions of sentences that I wanted to review later. Red means something that I knew I would likely forget.

Then I started adding my own questions in green sprinkled throughout the document. When I was done, I would quickly scroll through the document, focus on the green questions, asking them to myself.

Obviously, this first round of highlighting the document will take a good amount of time, but you'll notice yourself skipping through a lot of paragraphs.

5. Search for the word "exam" in your document.

While presenting the online videos, the presenter admitted that what he was covering is included in the exam. Simply search for the word "exam" in the document.

6. Take the sample exam.

Taking the sample exam is exceptionally important as it is a true representation of the type of questions that will appear in the actual exam. I can't remember, but I'm somewhat certain that a few of the questions did actually appear in the exam.

The practice exam is extremely important

Final Thoughts

Preparation, and preferably hands-on experience, are requirements for taking any certification exam. Even for those with strong experience, certifications educate you in areas you rarely touch. A certification is also an excellent way to formalize your knowledge. Statistically speaking, professionals who are certified tend to make a little bit more than those who are not.

I personally did exceptionally well in this exam. In my opinion, this is attributed to my 6+ years of hands-on and professional experience with Oracle Cloud Infrastructure (as well other non-Oracle cloud providers). Furthermore, I've had a decent amount of system administration, network engineering, and data center experience in my past life which was invaluable in preparing me to achieve a Cloud Architect certification.

Regardless of your background, be it a developer, database administrator, or systems engineer, having strong cloud concepts and cloud architecture experience is a necessity in this day and age.

Good luck!

]]>
<![CDATA[ Setting up our compute cloud virtual machines for POCs ]]> https://chronicler.tech/setting-up-our-compute-cloud-virtual-machines-for-testing/ 5ec95c7a060b7e2fba0bdcf6 Sat, 23 May 2020 17:07:20 -0400 We've been doing some quick performance testing of Oracle software on various compute cloud providers that include AWS, Oracle Cloud, IBM Cloud, Google Cloud, and Microsoft Azure. Once the virtual machine is created, most setup/configuration is generally standard.

This post is mostly self-documentation of personal notes, and not really intended for widespread use. So follow the instructions at your own discretion.

Set Hostname

Set the hostname if it's not set to what you want.

hostname pochost1.something.soc

Preserve Hostname (Oracle Cloud only)

On Oracle Cloud, changes you make to the /etc/hosts file are overwritten whenever the DHCP lease is renewed or the instance is rebooted. To persist the changes, perform the following.

vi /etc/oci-hostname.conf

    PRESERVE_HOSTINFO=2

Create Oracle Software User

Create the oracle Linux user with standard groups for the installation of Oracle software.

groupadd oinstall
groupadd dba
useradd -g oinstall -G dba oracle

Install RPMs

These are generic RPMs used by most Oracle software. Others are the GUI and VNC server. Some utilities such as telnet and xclock are for testing purposes only and shouldn't be installed on a production system.

yum -y groupinstall 'Server with GUI'
yum -y install tigervnc-server
yum -y install xclock
yum -y install telnet
yum -y install wget
yum -y install gcc
yum -y install gcc-c++
yum -y install glibc-devel
yum -y install libaio
yum -y install libaio-devel
yum -y install sysstat
yum -y install libstdc++-devel
yum -y install compat-libstdc++
yum -y install compat-libstdc
yum -y install compat-libcap1
yum -y install ksh
yum -y install lksctp-tools

Download and Install Stress Tool

Both stress and stress-ng are used for Linux host stress testing. Keep in mind that newer versions may be available.

wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/s/stress-1.0.4-16.el7.x86_64.rpm
wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/s/stress-ng-0.07.29-2.el7.x86_64.rpm
wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/l/libbsd-0.8.3-1.el7.x86_64.rpm

rpm -i libbsd-0.8.3-1.el7.x86_64.rpm
rpm -i stress-ng-0.07.29-2.el7.x86_64.rpm
rpm -i stress-1.0.4-16.el7.x86_64.rpm
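
Once installed, a quick smoke test confirms the tools work; the flags below come from the stress man page, and the worker count should match your vCPU count:

# Run 4 CPU workers for 60 seconds
stress --cpu 4 --timeout 60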

Open Local Firewall Ports

These are examples of ports used for our testing. Port 5901 is required for access to the VNC server. Port 3872 is for inbound OEM Agent access. Port 7002 is the default secure WebLogic AdminServer port. Open other ports as necessary.

firewall-cmd --permanent --zone=public --add-port=5901/tcp
firewall-cmd --permanent --zone=public --add-port=3872/tcp
firewall-cmd --permanent --zone=public --add-port=7002/tcp
firewall-cmd --reload

Edit Kernel and Profile Settings

These are generic settings that work for most Oracle Database instances, as well as other Oracle software.

vi /etc/sysctl.conf

    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    kernel.msgmax = 65536
    kernel.msgmnb = 65535
    kernel.shmmni = 4096
    kernel.sem = 256 32000 100 142
    fs.file-max = 6815744
    fs.aio-max-nr = 1048576
    kernel.hostname   = pochost1.something.soc
    kernel.domainname = something.soc
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default=262144
    net.core.wmem_default=262144
    net.core.rmem_max=4194304
    net.core.wmem_max=1048576
    kernel.msgmni = 2878

sysctl -p

vi /etc/security/limits.conf

    oracle	soft	nofile	4096
    oracle	hard	nofile	65536
    oracle	soft	nproc	2047
    oracle	hard	nproc	16384
    oracle	soft	stack	10240

Run iSCSI Commands (Oracle Cloud only)

On Oracle Cloud, block volumes can be attached as iSCSI or paravirtualized. If using iSCSI, the commands are specific and available on the Oracle Cloud console.

# Example only, get actual values from Oracle Cloud console

# sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:163e16fb-4b8c-a002-43fd-262784e9fa98 -p 192.168.1.10:3260

# sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:163e16fb-4b8c-a002-43fd-262784e9fa98 -n node.startup -v automatic

# sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:163e16fb-4b8c-a002-43fd-262784e9fa98 -p 192.168.1.10:3260 -l

Mount Disk

These are standard mount instructions. Values such as the device name and mount point should be updated for your environment.

# Create directory
mkdir /u01
chown oracle:oinstall /u01

# View available disks
lsblk

# Confirm that /dev/xvdb is a 'data' volume, assuming the disk is attached as /dev/xvdb
file -s /dev/xvdb

# Create file system, all data will be lost in it
mkfs -t ext4 /dev/xvdb

# Backup fstab and add the mount folder
cp /etc/fstab /etc/fstab.orig
echo "/dev/xvdb       /u01   ext4    defaults,_netdev,nofail        0       2" >> /etc/fstab

# Mount /u01
mount /u01
chown oracle:oinstall /u01

# Confirm that /u01 is mounted and available
df -h

Install OEM Agent

The following are instructions to silently install the OEM Agent, assuming that the installation file exists in /tmp and that firewalls are open between this host and the OMS server.

chown oracle:oinstall /tmp/13.3.0.0.0_AgentCore_226.zip
mkdir -p /u01/software/agent13c
chown oracle:oinstall /u01/software
chown oracle:oinstall /u01/software/agent13c

su - oracle
mv /tmp/13.3.0.0.0_AgentCore_226.zip /u01/software
unzip /u01/software/13.3.0.0.0_AgentCore_226.zip -d /u01/software/agent13c
cd /u01/software/agent13c

./agentDeploy.sh AGENT_BASE_DIR=/u01/oracle/agent13c -invPtrLoc /etc/oraInst.loc AGENT_PORT=3872 EM_UPLOAD_PORT=4903 OMS_HOST=oem.something.soc ORACLE_HOSTNAME=pochost1.something.soc AGENT_INSTANCE_HOME=/u01/oracle/agent13c/agent_inst AGENT_REGISTRATION_PASSWORD=welcome1 SCRATCHPATH=/tmp

sudo su -
/u01/oracle/agent13c/agent_13.3.0.0.0/root.sh

Start/Stop VNC Server

Straightforward instructions to start up and shut down the VNC server are below.

# Start VNC server
vncserver :1 -geometry 1280x720 -depth 16

# Stop VNC server
vncserver -kill :1
]]>
<![CDATA[ How Oracle Cloud helps you save on Oracle Cloud ]]> https://chronicler.tech/oci-webconsole/ 5eb7fd8a060b7e2fba0bda29 Mon, 11 May 2020 08:35:00 -0400 Ahmed's recent post on OCI client installation gave me the idea for this piece. My Oracle Cloud trial has ended, and I have no other option but to be more budget aware. The most apparent recipe is quite similar to the basic household rule: "Turn the lights off when you leave the room."

This obvious and easy-sounding task can be quite annoying in Oracle Cloud. Even for my small OCI footprint, I have to jump between several compartments to select instances and change their states. Automating the dull tasks is the right answer, and what I like about Oracle Cloud is that you don't need any additional clients, terminals, or anything of that sort if your region has Oracle Cloud Shell enabled.

Check if you have this icon on your OCI page

Cloud Shell fits my case precisely. It's a small (about 5 GB) Linux shell environment with the OCI tools and clients preconfigured. All you need to do is configure your command line and create automation scripts. I used the OCI SDK documentation as the main reference, but you can find plenty of articles about CLI configuration.

The screen shows the result of the OCI SDK configuration.
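
If your shell is not already configured, the CLI can be set up interactively with the standard setup command from the OCI CLI documentation (Cloud Shell normally comes with authentication preconfigured, so this is mostly relevant outside of it):

oci setup config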

Create this quite simple script in your home folder and replace items in the inst_array with your instance OCIDs:

#!/bin/bash
# List of instances you want to manage as a group
inst_array=(ocid1.instance.oc1.iad.abcde.1 ocid1.instance.oc1.iad.abcde.2 ocid1.instance.oc1.iad.abcde3)
for inst in "${inst_array[@]}"; do
  # ${1^^} upper-cases the first script argument (start/stop) into the action name the CLI expects
  oci compute instance action --action ${1^^} --instance-id ${inst}
done
I named this script all-oci-vms.sh.

Now you can stop all VMs with a single command and save a few dollars.

michael_mi@cloudshell:~ (us-ashburn-1)$ bin/all-oci-vms.sh stop
or start them with the start argument
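
A small companion sketch (hypothetical file name all-oci-vms-status.sh, same inst_array assumption as above) can report the current lifecycle state of each instance before and after:

#!/bin/bash
# all-oci-vms-status.sh (hypothetical): report the lifecycle state of the same instances
inst_array=(ocid1.instance.oc1.iad.abcde.1 ocid1.instance.oc1.iad.abcde.2 ocid1.instance.oc1.iad.abcde3)
for inst in "${inst_array[@]}"; do
  # The instance JSON payload contains a "lifecycle-state" field (RUNNING, STOPPED, ...)
  oci compute instance get --instance-id ${inst} | grep -i '"lifecycle-state"'
done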

]]>
<![CDATA[ Using "utils.dbping" to test JDBC database connectivity ]]> https://chronicler.tech/using-utils-dbping-to-test-database-connectivity/ 5eb5600e0f5abe37b745a6f8 Sun, 10 May 2020 10:00:00 -0400 Recently I ran into a problem with an Oracle WebLogic Server data source. We were getting an IO Error: Connection reset in the console and logs when we try to start up the data source.

I could connect to the target database using a client tool such as Oracle SQL Developer, which indicated that the problem was likely not on the database server. So I downloaded the SQL*Plus Instant Client onto my server, and I was able to successfully connect using sqlplus as well.

This seemed to indicate that the problem was not the network or the database, but possibly something with the JDBC driver.

So how can you test the JDBC driver from WebLogic without having to write some Java code? Fortunately, there is a utility called utils.dbping which you can use for this exact purpose. To execute it is simple:

cd $DOMAIN_HOME/bin
. setDomainEnv.sh
java utils.dbping ORACLE_THIN dbuser dbpassword dbhost:1521/dbsid

Here is an example of a successful connection:

oracle@soahost:/u01/app/oracle/domains/soa_domain> java utils.dbping ORACLE_THIN dbuser dbpassword dbhost:1521/dbsid

**** Success!!! ****

You can connect to the database in your app using:

  java.util.Properties props = new java.util.Properties();
  props.put("user", "dbuser");
  props.put("password", "********");
  java.sql.Driver d =
    Class.forName("oracle.jdbc.OracleDriver").newInstance();
  java.sql.Connection conn =
    Driver.connect("dbuser", props);

Here is an example of an unsuccessful connection:

oracle@soahost:/u01/app/oracle/domains/soa_domain> java utils.dbping ORACLE_THIN dbuser dbpassword dbhost:1521/dbsid

Error encountered:

java.security.PrivilegedActionException: java.sql.SQLRecoverableException: IO Error: Connection reset
        at java.security.AccessController.doPrivileged(Native Method)
        at utils.dbping.getConnection(dbping.java:322)
        at utils.dbping.main(dbping.java:287)
Caused by: java.sql.SQLRecoverableException: IO Error: Connection reset
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:816)
        at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:793)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:614)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at utils.dbping$1.run(dbping.java:327)
        ... 3 more
Caused by: java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:209)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at oracle.net.nt.MetricsEnabledInputStream.read(TcpNTAdapter.java:759)
        at oracle.net.ns.Packet.receive(Packet.java:312)
        at oracle.net.ns.DataPacket.receive(DataPacket.java:106)
        at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:306)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:250)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:172)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:90)
        at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:124)
        at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:80)
        at oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:452)
        at oracle.jdbc.driver.T4C8TTIdty.receive(T4C8TTIdty.java:711)
        at oracle.jdbc.driver.T4C8TTIdty.doRPC(T4C8TTIdty.java:616)
        at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1798)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:539)
        ... 9 more

This utility is incredibly useful because it uses the same exact JDBC driver that is configured in your WebLogic instance.

In our particular problem, we were able to confirm that the issue was related to the MTU size at the network adapter level.
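
For reference, you can inspect and temporarily adjust the adapter MTU from the OS while testing (a minimal sketch; eth0 is an assumed interface name, and 1400 is just a conservative test value):

# Show the current MTU on the interface
ip link show eth0 | grep mtu

# Temporarily lower the MTU for testing (reverts on reboot)
sudo ip link set dev eth0 mtu 1400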

]]>
<![CDATA[ Can't find the "oci" executable in Oracle Cloud? ]]> https://chronicler.tech/cant-find-the-oci-executable-in-oracle-cloud/ 5e45a5800b1b670a1724f310 Sun, 03 May 2020 16:39:51 -0400 A few months ago, I attended an Oracle Cloud Infrastructure workshop in Oracle Reston. In one of the hands-on labs, it appeared that the oci executable was not found in my compute instance:

To remedy this, simply run this command:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

This is the output of the command above:

Now you are good to go:
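
As a quick check (a minimal sketch; you may need to open a new shell or source your updated profile first):

# Confirm the CLI is on the PATH and responding
oci --version

# Configure credentials if you have not done so already
oci setup config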

]]>
<![CDATA[ Yet another custom OEM uptime report ]]> https://chronicler.tech/yet-another-custom-oem-uptime-report/ 5eaf26e00f5abe37b745a650 Sun, 03 May 2020 16:31:01 -0400 Here is my custom availability report in Oracle Enterprise Manager (OEM) 13c (using Information Publisher). It consists of multiple elements across multiple rows.

Though this report is specific to WebLogic Managed Servers, it could apply to any target type.

Each element is a distinct query:

SELECT
    a.availability_status                "Status",
    to_char(SUM(a.value), '990.99') || '%' "Uptime"
FROM
    (
        ( SELECT
            'Down' availability_status,
            0 value,
            2 order_col
        FROM
            dual
        UNION ALL
        SELECT
            'Up' availability_status,
            0 value,
            1 order_col_txt_id
        FROM
            dual
        UNION ALL
        SELECT
            'System Error' availability_status,
            0 value,
            5 order_col_txt_id
        FROM
            dual
        UNION ALL
        SELECT
            'Agent Down' availability_status,
            0 value,
            4 order_col_txt_id
        FROM
            dual
        UNION ALL
        SELECT
            'Blackout' availability_status,
            0 value,
            3 order_col_txt_id
        FROM
            dual
        UNION ALL
        SELECT
            'Status Pending' availability_status,
            0 value,
            6 order_col_txt_id
        FROM
            dual
        )
        UNION ALL
        SELECT
            decode(lower(availability_status),
                'target down',     'Down',
                'target up',       'Up',
                'metric error',    'System Error',
                'agent down',      'Agent Down',
                'unreachable',     'Unreachable',
                'blackout',        'Blackout',
                'pending/unknown', 'Status Pending') availability_status_txt_id,
            round(SUM(least(nvl(end_timestamp, sysdate), sysdate) - 
            greatest(start_timestamp, sysdate - 30)) * 100 / 30, 2) 
            AS value_txt_id,
            decode(lower(availability_status), 
                'target down',     2, 
                'target up',       1,
                'metric error',    5,
                'agent down',      4,
                'unreachable',     7,
                'blackout',        3,
                'pending/unknown', 6) order_col_txt_id
        FROM
            sysman.mgmt$availability_history b,
            sysman.mgmt$target               t
        WHERE
            b.target_guid = t.target_guid
            -- ----------------------------------------
            -- Hardcoded target information
            -- ----------------------------------------
            AND b.target_name = '/prod_soa_domain/soa_domain/soa_server1'
            AND lower(availability_status) != 'unreachable'
            -- ----------------------------------------
            -- Hardcoded timeframe
            -- ----------------------------------------
            AND ( ( b.end_timestamp >= sysdate - 30 ) OR b.end_timestamp IS NULL )
            AND b.start_timestamp <= sysdate
        GROUP BY
            lower(availability_status),
            decode(lower(availability_status),
                'target down',     2,
                'target up',       1,
                'metric error',    5,
                'agent down',      4,
                'unreachable',     7,
                'blackout',        3,
                'pending/unknown', 6)
    ) a
WHERE
    lower(a.availability_status) != 'status pending'
GROUP BY
    a.availability_status,
    a.order_col
ORDER BY
    a.order_col

Observe the comments in the query above, and tailor it to your specific needs.

These metrics are returned from the MGMT$AVAILABILITY_HISTORY and MGMT$TARGET views.

]]>
<![CDATA[ Create a custom OEM uptime report for business hours only ]]> https://chronicler.tech/create-a-custom-oem-business-hours-uptime-report/ 5eaf11090f5abe37b745a58c Sun, 03 May 2020 15:43:10 -0400 There was a need to create a custom Information Publisher Report in Oracle Enterprise Manager 13c to provide uptime metrics. These uptime metrics were to be calculated weekly, from Monday to Friday, only for the hours of 8am to 5pm, and for specific targets. Any target status of Up, System Error, Agent Down, Blackout, Status Pending would be considered up. Only a target status of Down would be classified as down.

Yes, I do understand that Information Publisher is deprecated in favor of BI Publisher, but it's just so easy to use.

Creating the Report

Here is the output of that report:

Unfortunately, creating this was more complicated than it looked.

The query used in the report is the following:

As you can see from the query, I am simply selecting from a function:

SELECT 
  TO_CHAR(oem_week, 'YYYY-MM-DD') "Week Of",
  SUBSTR(oem_target,INSTR(oem_target,'/',-1)+1,LENGTH(oem_target)) "Target",
  TO_CHAR(TRUNC(oem_uptime, 2), 'FM999.00') || '%' "Uptime"
FROM 
  TABLE("AHMED.ABOULNAGA".get_uptime_weekly())
ORDER BY 
  TO_CHAR(oem_week,'YYYYMMDD') DESC, 
  oem_target

This function simply returns exactly the output needed, allowing the query to be as simple as possible. Here is the output of the query when you run it manually:

Since I'm running the report as my own user, not SYSMAN, I have to provide a few grants. Run these as SYSTEM or SYSMAN on the database:

GRANT select ON sysman.mgmt$availability_history TO "AHMED.ABOULNAGA";
GRANT select ON sysman.MGMT$TARGET TO "AHMED.ABOULNAGA";
GRANT create type TO "AHMED.ABOULNAGA";

Now, logging in as my database user AHMED.ABOULNAGA, I create a couple of types used by my function; these define the exact structure of the output returned by the function:

CREATE TYPE custom_uptime_weekly_type AS OBJECT (
  oem_target VARCHAR2(100),
  oem_week   DATE,
  oem_uptime VARCHAR(10)
);

CREATE TYPE custom_uptime_weekly_table AS TABLE OF custom_uptime_weekly_type;

Now here is the dreaded query.

You can customize the list of targets, hours of operation, and start and end dates (query is hardcoded to start from the first day of the calendar year up to today's date).

CREATE OR REPLACE FUNCTION custom_get_uptime_weekly RETURN custom_uptime_weekly_table
    PIPELINED
IS

    vcurrday         NUMBER;
    vstarttime       DATE;
    vendtime         DATE;
    vuptime          VARCHAR2(10);
    TYPE vtargetarray_type IS
        VARRAY(50) OF VARCHAR2(100);
    vtarget          vtargetarray_type;
    vrecord          custom_uptime_weekly_type := custom_uptime_weekly_type(NULL, NULL, NULL);
    vuptime_weekly   NUMBER;
    vday_count       INT;
BEGIN

    -- ----------------------------------------
    -- This is the list of targets
    -- ----------------------------------------
    vtarget := vtargetarray_type(
        '/prod_soa_domain/soa_domain/osb_server1', 
        '/prod_soa_domain/soa_domain/osb_server2', 
        '/prod_soa_domain/soa_domain/soa_server1', 
        '/prod_soa_domain/soa_domain/soa_server2', 
        '/prod_soa_domain/soa_domain/ess_server1',
        '/prod_soa_domain/soa_domain/ess_server2',   
        '/prod_soa_domain/soa_domain/bam_server1', 
        '/prod_soa_domain/soa_domain/bam_server2'
        );

    vcurrday := to_number(to_char(sysdate, 'DDD'));

    -- ----------------------------------------
    -- Start from the first day of the year
    -- ----------------------------------------
    FOR j IN 1..vtarget.count LOOP
        vuptime_weekly := 0;
        vday_count := 0;
        FOR i IN 1..vcurrday LOOP
            IF to_char(sysdate - vcurrday + i, 'DY') NOT IN (
            'SAT',
            'SUN'
        ) THEN
            -- ----------------------------------------
            -- This is the hours of operation
            -- ----------------------------------------
            vstarttime := to_date(to_char(sysdate - vcurrday + i, 'DD-MON-YYYY')
                                  || ' 08:00:00', 'DD-MON-YYYY HH24:MI:SS');

            vendtime := to_date(to_char(sysdate - vcurrday + i, 'DD-MON-YYYY')
                                || ' 17:00:00', 'DD-MON-YYYY HH24:MI:SS');

            SELECT
                100 - SUM(a.value) "Uptime"
            INTO vuptime
            FROM
                (
                    ( SELECT
                        'Down' availability_status,
                        0 value,
                        2 order_col
                    FROM
                        dual
                    UNION ALL
                    SELECT
                        'Up' availability_status,
                        0 value,
                        1 order_col_txt_id
                    FROM
                        dual
                    UNION ALL
                    SELECT
                        'System Error' availability_status,
                        0 value,
                        5 order_col_txt_id
                    FROM
                        dual
                    UNION ALL
                    SELECT
                        'Agent Down' availability_status,
                        0 value,
                        4 order_col_txt_id
                    FROM
                        dual
                    UNION ALL
                    SELECT
                        'Blackout' availability_status,
                        0 value,
                        3 order_col_txt_id
                    FROM
                        dual
                    UNION ALL
                    SELECT
                        'Status Pending' availability_status,
                        0 value,
                        6 order_col_txt_id
                    FROM
                        dual
                    )
                    UNION ALL
                    SELECT
                        decode(lower(availability_status), 
                            'target down',     'Down', 
                            'target up',       'Up',
                            'metric error',    'System Error', 
                            'agent down',      'Agent Down', 
                            'unreachable',     'Unreachable',
                            'blackout',        'Blackout',
                            'pending/unknown', 'Status Pending') 
                            availability_status_txt_id
                               ,
                        round(SUM(least(nvl(end_timestamp, sysdate), vendtime) - 
                          greatest(start_timestamp, vstarttime)) * 100 / vcurrday
                          , 2) AS value_txt_id,
                        decode(lower(availability_status),
                            'target down',     2,
                            'target up',       1,
                            'metric error',    5,
                            'agent down',      4,
                            'unreachable',     7,
                            'blackout',        3,
                            'pending/unknown', 6) order_col_txt_id
                    FROM
                        sysman.mgmt$availability_history   b,
                        sysman.mgmt$target                 t
                    WHERE
                        b.target_name = vtarget(j)
                        AND b.target_guid = t.target_guid
                        AND lower(availability_status) != 'unreachable'
                        AND b.start_timestamp <= vendtime
                        AND ( b.end_timestamp >= vstarttime
                              OR b.end_timestamp IS NULL )
                    GROUP BY
                        lower(availability_status),
                        decode(lower(availability_status),
                            'target down',     2,
                            'target up',       1,
                            'metric error',    5,
                            'agent down',      4,
                            'unreachable',     7,
                            'blackout',        3,
                            'pending/unknown', 6)
                        ) a
            WHERE
                -- ----------------------------------------
                -- Only report target status 'Down' as down
                -- ----------------------------------------
                lower(availability_status) = 'down'
            GROUP BY
                a.availability_status,
                a.order_col
            ORDER BY
                a.order_col;

            IF to_char(sysdate - vcurrday + i, 'Dy') = 'Mon' THEN
                vuptime_weekly := 0;
                vday_count := 0;
            END IF;

            vuptime_weekly := vuptime_weekly + vuptime;
            vday_count := vday_count + 1;
            IF to_char(sysdate - vcurrday + i, 'Dy') = 'Fri' THEN
                vrecord.oem_target := vtarget(j);
                vrecord.oem_week := vstarttime - 4;
                vrecord.oem_uptime := vuptime_weekly / vday_count;
                PIPE ROW ( vrecord );
            END IF;

        END IF;
        END LOOP;

    END LOOP;

    RETURN;
END;

Now a final grant is needed:

GRANT execute ON "AHMED.ABOULNAGA".custom_get_uptime_weekly TO mgmt_view;

Keep in mind that you can run this report for practically any target type.

This was perhaps the messiest query I created for an OEM Information Publisher report, but only because the specific requirements became too complicated to return in a SQL query alone coupled with some limitations in Information Publisher.

]]>
<![CDATA[ Create a custom metric extension in OEM 13c ]]> https://chronicler.tech/create-a-custom-metric-extension/ 5eacf0ec0f5abe37b745a57f Sat, 02 May 2020 00:03:40 -0400 Oracle Enterprise Manager (OEM) 13c is a solid and comprehensive monitoring tool, providing a long list of monitoring metrics across a large number of target types. But there are bound to be certain custom checks you'd like to perform that are not natively provided by the product.

To address this, OEM provides the ability to create what are known as metric extensions to help address this.

In this blog post, I walk through a simple example of creating a shell script that performs a basic NFS check and integrating it into OEM.

Create a Custom Shell Script

I've created a couple of shell scripts that simply test if our NFS mount point is available and responding in a timely manner.

Create a script check_nfs_availability.sh:

#--------------------------------------------------
# Check availability and performance of NFS
#--------------------------------------------------
# Logic:
#   1. Create a TestFile0 with size 10 MB on local storage
#   2. Move the file to NFS as TestFile1
#   3. Move the file from NFS back to local storage as TestFile2 
#   4. If the file copy takes longer than 2 seconds, return "NFS FAIL"
#   5. If the final TestFile2 is not exactly 10 MB in size, return "NFS FAIL"
#   6. Otherwise, all is good and return "NFS SUCCESS"
#   7. Output should be a single line, to be parsed by OEM 13c Metric Extension
#   * Make sure to use fully qualified paths in the scripts
#--------------------------------------------------

#--------------------------------------------------
# Parameters
#--------------------------------------------------
SLEEP_TIME=2
TEST_FILE_SIZE=10240
LOCAL_FOLDER=/tmp
NFS_FOLDER=/share

#--------------------------------------------------
# Create test_file_0.txt with a file size of TEST_FILE_SIZE
#--------------------------------------------------
rm -f ${LOCAL_FOLDER}/test_file_0.txt
dd if=/dev/zero of=${LOCAL_FOLDER}/test_file_0.txt count=1024 bs=${TEST_FILE_SIZE} > /dev/null 2>&1

#--------------------------------------------------
# Call script to copy file from local disk to NFS and back in the background
#--------------------------------------------------
nohup /home/oracle/scripts/check_nfs_availability_2.sh > /dev/null 2>&1 &

#--------------------------------------------------
# Wait for a maximum of SLEEP_TIME seconds
#--------------------------------------------------
sleep ${SLEEP_TIME}

#--------------------------------------------------
# Check if file made it back in time and the right size
#--------------------------------------------------
VFILE=${LOCAL_FOLDER}/test_file_2.txt
if [[ -e "${VFILE}" ]]; then
  # Error means that file that is copied back from NFS is not the expected 10MB in size
  VSIZE=`du ${LOCAL_FOLDER}/test_file_2.txt | awk '{print $1}'`
  if [ ${VSIZE} -eq 10240 ]; then
    echo "NFS SUCCESS"
  else
    echo "NFS FAIL"
  fi
else
  # Error means that file is not copied back from NFS in under 2 seconds
  echo "NFS FAIL"
fi

#--------------------------------------------------
# Remove any temporary files
#--------------------------------------------------
rm -f ${LOCAL_FOLDER}/test_file_0.txt > /dev/null 2>&1
rm -f ${NFS_FOLDER}/test_file_1.txt   > /dev/null 2>&1
rm -f ${LOCAL_FOLDER}/test_file_2.txt > /dev/null 2>&1

Create a script check_nfs_availability_2.sh:

#--------------------------------------------------
# Parameters
#--------------------------------------------------
LOCAL_FOLDER=/tmp
NFS_FOLDER=/share

mv ${LOCAL_FOLDER}/test_file_0.txt ${NFS_FOLDER}/test_file_1.txt
mv ${NFS_FOLDER}/test_file_1.txt ${LOCAL_FOLDER}/test_file_2.txt

Here's a sample execution of the script. The output is either "NFS SUCCESS" or "NFS FAIL".

Create an OEM Metric Extension

1. Log in to the OEM 13c console.

2. Navigate to Enterprise > Monitoring > Metric Extensions.

3. Click on Create > Metric Extension.

4. In this case, the Target Type we are referencing is of type "Host". Provide a name for the metric extension, a display name, and choose "OS Command - Multiple Columns".

5. Keep the Collection Schedule as is (so that historical metrics are collected), but update the collection frequency as you see fit.

6. Enter the full path to the script. Don't worry about which host it will run on just yet.

7. In the Create New: Columns page, click on Add > New metric column.

8. Provide a column name, and fill out the relevant information. Add the alert threshold. This will determine the condition on which a Warning or Critical alert will be sent.

9. On the Create New: Test page, this is where we test the script. After selecting a target, click on Run Test.

10. The metric extension is now created and you will be redirected to the Metric Extensions page.

Deploy the Metric Extension

11. Select the newly created metric extension, then click on Actions > Save as Deployable Draft.

12. The metric extension is now deployed.

Deploy the Metric Extension to a Target

13. Then select the metric extension again, but this time click on Actions > Deploy to Targets.

14. Select any number of targets. Remember to make sure that the scripts you intend to run exist on the hosts.

View and Customize the Alert Thresholds

15. Now navigate to the target that we just deployed the metric extension to; in this case, the target was a particular host.

16. Click on Host > Monitoring > Metric and Collection Settings.

17. You will now find our newly created metric extension!

This is one easy way to create shell-script-based custom metric extensions.

You can also create scripts that return a delimited output (must be on a single line though).
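
For illustration, a hypothetical multi-column script might emit a status and an elapsed time separated by a pipe on a single line, which the metric extension can then split into two columns (a sketch only, not one of the scripts above; /share is the same assumed NFS mount):

#!/bin/bash
# check_nfs_multicolumn.sh (hypothetical): emit "STATUS|SECONDS" on a single line
NFS_FOLDER=/share

START=$(date +%s)
# Stand-in check: time a small write to the NFS mount
dd if=/dev/zero of=${NFS_FOLDER}/probe_file.txt count=1024 bs=10240 > /dev/null 2>&1
rm -f ${NFS_FOLDER}/probe_file.txt
END=$(date +%s)

ELAPSED=$((END - START))
if [ ${ELAPSED} -le 2 ]; then
  echo "NFS SUCCESS|${ELAPSED}"
else
  echo "NFS FAIL|${ELAPSED}"
fi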

]]>
<![CDATA[ Ugly (but quick) script to get Oracle Fusion Middleware software versions ]]> https://chronicler.tech/script-to-get-oracle-software-versions/ 5eaadcd40f5abe37b745a44d Thu, 30 Apr 2020 10:23:32 -0400 I created a quick script that I ran across all our environments to consolidate a list of all software versions. This includes versions of: operating system, WebLogic, Java, RDA, OPatch, and OPatch patches.

It's a crude script, but quite quick and efficient. It may have to be tweaked for non-Oracle SOA Suite products (you'll see below when grepping against registry.xml).

Here are the contents of getSoaSoftwareVersions.sh:

export DOMAIN_HOME=/u01/app/oracle/middleware
export JAVA_HOME=/u01/app/oracle/java

echo ""

echo "HOSTNAME:     `hostname`"
echo "LAST CHECKED: `date +%m/%d/%Y`"
echo "RED HAT:      `cat /etc/redhat-release | awk '{print $7}'`"

export MYVAR=`cat ${DOMAIN_HOME}/inventory/registry.xml | grep "SOA_QuickStart" | awk -F '"' '{print $6}'`
echo "WEBLOGIC:     `echo $MYVAR`"

export MYVAR=`${JAVA_HOME}/bin/java -version 2>&1 | head -n 1 | awk -F '"' '{print $2}'`
echo "JAVA:         `echo $MYVAR`"

export MYVAR=`cat ${DOMAIN_HOME}/oracle_common/rda/rda.sh | grep "Id" | grep "rda.sh" | head -1 | cut -d 'v' -f 2 | cut -d "R" -f 1`
echo "RDA:          `echo $MYVAR`"

export MYVAR=`${DOMAIN_HOME}/OPatch/opatch version | head -1 | awk '{print $3}'`
echo "OPATCH:       `echo $MYVAR`"

echo "PATCHES:"
${DOMAIN_HOME}/OPatch/opatch lsinventory | grep "Patch  " | awk '{print $2 " | " $8 "-" $7 "-" $11 " " $9 " " $10}'

echo ""

Here's the output:

HOSTNAME:     soatest
LAST CHECKED: 04/28/2020
RED HAT:      6.6
WEBLOGIC:     12.2.1.0.0
JAVA:         1.8.0_102
RDA:          1.20 2015/07/23 15:05:38
OPATCH:       13.3.0.0.0
PATCHES:
25527688 | 26-Apr-2020 10:24:18 EDT
24327938 | 26-Apr-2020 10:18:24 EDT
25388847 | 26-Apr-2020 10:16:12 EDT
25439226 | 03-Apr-2020 11:05:09 EDT
21830665 | 07-Mar-2018 13:49:53 EST
19154304 | 23-Feb-2018 14:45:45 EST
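
If you need the same summary from several servers, one option (a minimal sketch; the host names are hypothetical, and it assumes passwordless SSH as oracle and that the script exists on each host) is a simple loop from a central box:

#!/bin/bash
# Hypothetical wrapper: collect the version summary from a list of hosts over SSH
for HOST in soadev soatest soaprod; do
  ssh oracle@${HOST} '/home/oracle/scripts/getSoaSoftwareVersions.sh'
done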
]]>
<![CDATA[ Querying incidents in OEM 13c ]]> https://chronicler.tech/querying-incidents-in-oem-13c/ 5e9b06aa0f5abe37b745a426 Sat, 18 Apr 2020 10:04:35 -0400 Unfortunately, the Incident Manager page in Oracle Enterprise Manager 13c is slow, cumbersome, and lacks customization.

Running an SQL query against the OMS repository is extremely quick and efficient and you can get exactly what you need.

Here is a simple query to pull incidents for a particular set of targets:

SELECT 
  TO_CHAR(a.last_updated_date, 'YYYY-MM-DD') "Last Updated Date", 
  TO_CHAR(a.last_updated_date, 'HH24:MI:SS') "Last Updated Time",
  a.summary_msg                              "Message", 
  b.target_type                              "Target Type", 
  b.target_name                              "Target Name", 
  a.severity                                 "Severity", 
  a.resolution_state                         "Resolution State"
FROM   
  sysman.mgmt$incidents a,
  sysman.mgmt$target b
WHERE a.target_guid = b.target_guid
AND   a.last_updated_date >= SYSDATE - 30
AND   (b.target_name LIKE '%soaprod1%'
OR     b.target_name LIKE '/soa_domain/%')
AND    a.severity != 'Clear'
AND    b.target_type IN (
  'host',
  'j2ee_application',
  'j2ee_application_cluster',
  'j2ee_application_domain',
  'oracle_apache',
  'oracle_coherence',
  'oracle_coherence_cache',
  'oracle_coherence_node',
  'oracle_home',
  'oracle_sdpmessagingdriver',
  'oracle_sdpmessagingdriver_email',
  'oracle_sdpmessagingdriver_smpp',
  'oracle_sdpmessagingdriver_xmpp',
  'oracle_sdpmessagingserver',
  'oracle_soa_composite',
  'oracle_soa_folder',
  'oracle_soainfra',
  'oracle_soainfra_cluster',
  'scheduler_service',
  'scheduler_service_group',
  'weblogic_cluster',
  'weblogic_domain',
  'weblogic_j2eeserver',
  'weblogic_nodemanager')
ORDER BY 1 DESC, 2 DESC

Here is a sample of the output:

]]>
<![CDATA[ Off-the-map Apex ]]> https://chronicler.tech/off-the-map-apex/ 5e9307d90f5abe37b745a3be Mon, 13 Apr 2020 08:45:00 -0400 From time to time, you find yourself in a situation where you need good old Oracle software. There are many reasons, for example, to reproduce a "do-not-ever-touch-production" system and see how it fits modern environments. The first challenge you face is finding the software binaries. Today, Oracle offers Application Express releases for download only down to 5.0.

There are a few places to get an older version:

  • Apex 4.0.1 is built into Oracle Database Express 11g. It's a universal and lightweight solution.
  • Apex 4.2 is not available for download, but Oracle Database 12.1 is still there. After installation, you can find the $ORACLE_HOME/apex folder with Oracle Apex 4.2.5 in it.

 

]]>
<![CDATA[ Ansible host in different groups and group_vars ]]> https://chronicler.tech/ansible-host-in-different-groups-and-group_vars/ 5e6dc45d0f5abe37b745a382 Sun, 15 Mar 2020 02:16:15 -0400 When you try to organize your variables in Ansible, storing them in your inventory will help you keep all your environments' metadata in one place. Splitting host and group variables into separate files makes it easier to catalogue the variables for different components.

Let's say you have configured the SOA, WebCenter Portal, and Identity Manager Oracle Fusion Middleware products in highly available mode, and created a set of files in the group_vars directory:
group_vars/soa.yml
group_vars/wcp.yml
group_vars/idm.yml

Your SOA servers belong to the soa group, WebCenter Portal servers to the wcp group, and Identity Management servers to the idm group. You have three WebLogic domains, and therefore three Admin servers. There are separate scripts for WebLogic Admin server maintenance, and you keep those hosts in independent groups: soa-admin, wcp-admin, idm-admin.

Now you need to create more files in the group_vars directory. And you could end up with duplicate copies of your metadata for servers, clusters, machines.

Thanks to the fact that Ansible binds the value of variables to the host, not to the group, you only have to add your admin hosts to existing groups without duplicating files in the group_vars directory.

For example, in your hosts.yml file:

wcp-admin:
  hosts:
    adm1.acme.com:
wcp:
  hosts:
    adm1.acme.com:
    wcp1.acme.com:
    wcp2.acme.com:

Now, when you execute your playbook against the wcp-admin group, you have full visibility into the variables from the group_vars/wcp.yml file.

You might have all Admin servers on one host, keep the host in the admin group, and use the following approach to specify which group_vars file to use, depending on the product you are performing maintenance on:

---
- hosts: admin
  vars_files:
    - group_vars/{{ product }}.yml
  tasks:
  - name: my command
    command: "command with {{ servicepath }}"

"servicepath" variable would be taken from the respective product group_var file.

]]>
<![CDATA[ Unsafe writes - why you need it. ]]> https://chronicler.tech/unsafe-writes-why-you-need-it/ 5e693ac50f5abe37b745a2b3 Thu, 12 Mar 2020 08:35:00 -0400 As a former Oracle DBA, I'm totally against anything marked as "unsafe." Today I learned why Red Hat Ansible has an unsafe_writes clause for some modules.

Here is a real-life scenario. Let's say you have the Oracle EM Agent installed on the target machines, and you want to prevent automatic agent startup. The easiest way to do it is to comment out the agent entry in the file /etc/oragchomelist. I have made a task that comments out the agent entry in this file.

- name: Disable OEM Agent
  lineinfile:
    path: /etc/oragchomelist
    regexp: "(.*{{ old_agent_inst }}$)"
    line: '#\1'
    backrefs: yes

The syntax is 100% correct, but it fails. As the oracle user, you have full access to the file, but you can't write into the /etc/ folder. Ansible does safe writes by default: it creates a copy of the original file and performs all required modifications, and on success, the updated copy replaces the original file. To make it work, we go unsafe:

- name: Disable OEM Agent
  lineinfile:
    path: /etc/oragchomelist
    regexp: "(.*{{ old_agent_inst }}$)"
    line: '#\1'
    backrefs: yes
    unsafe_writes: yes
 

Now the task updates the file in place with no permission issues.


Image by John Hain from Pixabay ]]>
<![CDATA[ I want to be a ... ]]> https://chronicler.tech/i-want-to-be-a/ 5e6777410f5abe37b745a10b Wed, 11 Mar 2020 08:45:00 -0400 I'm working on a small project that touches multiple cloud-based hosts across multiple providers. It's a good chance to refresh your knowledge of how to handle various connections with Red Hat Ansible.

Let's see how you control the connection. There are a few ways:

  • By default, Ansible uses your current user id and public RSA key to establish a connection with the remote target. You can spend some time and create new accounts for all your providers, or you can go with the next option.
  • Use remote_user for plays or the --user parameter for ad-hoc commands. Ansible will try to connect with the provided user name. It's quite similar to the command ssh username@host.name.com.
  • If you have different SSH users for your hosts (as in my case), you can modify your inventory and add the ansible_ssh_user variable. The example below shows you how to use it.
  • Ansible allows you to run a play or a single task with escalated privileges. Just add the become clause, or --become for an ad-hoc command.
  • If you need to execute a command as a non-root user, pair become with become_user (--become-user for ad-hoc) to specify the desired user name.

Let's take a look at an excerpt of my inventory to illustrate these points:

---
all:
 vars:
   ansible_ssh_user: opc
 children:
   test-ready:
     hosts:
       host1.cloud.prj:
        ansible_ssh_user: ec2-user
       host2.cloud.prj:
       host4.cloud.prj:
       host5.cloud.prj:
        ansible_ssh_user: oracle
   not-ready:
     hosts:
       host3.cloud.prj:
        ansible_ssh_user: root
...
Ansible inventory with SSH users

The first ansible_ssh_user says that opc is the remote user for all the hosts. A few lines down, you may see that host1 overrides the global value with ec2-user. The same declaration happens for a few other hosts.

As usual, everything comes with a price. The user id from the inventory overrides remote_user or ad-hoc flags, so you should use become_user instead. The output below illustrates the difference.

opc@control:~> ansible test-ready -u oracle -a id

host1.cloud.prj | CHANGED | rc=0 >> uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
host5.cloud.prj | CHANGED | rc=0 >> uid=1000(oracle) gid=1100(oinstall) groups=1100(oinstall),1000(oracle) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
host4.cloud.prj | CHANGED | rc=0 >> uid=1001(opc) gid=1002(opc) groups=1002(opc),1000(google-sudoers) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
host2.cloud.prj | CHANGED | rc=0 >> uid=1000(opc) gid=1000(opc) groups=1000(opc),4(adm),10(wheel),190(systemd-journal) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

opc@control:~> ansible test-ready --become --become-user oracle -a id

host1.cloud.prj | CHANGED | rc=0 >> uid=1001(oracle) gid=1002(oinstall) groups=1002(oinstall),1001(oracle) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
host5.cloud.prj | CHANGED | rc=0 >> uid=1000(oracle) gid=1100(oinstall) groups=1100(oinstall),1000(oracle) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
host2.cloud.prj | CHANGED | rc=0 >> uid=1001(oracle) gid=1002(oinstall) groups=1002(oinstall),1001(oracle) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
host4.cloud.prj | CHANGED | rc=0 >> uid=1000(oracle) gid=1001(oracle) groups=1001(oracle),4(adm),39(video),1000(google-sudoers) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Title image depicts Theatrical masks of Tragedy and Comedy. Mosaic, Roman mosaic, 2nd century AD. ]]>
<![CDATA[ Web server returning 'Cross-Origin Request Blocked' to browser ]]> https://chronicler.tech/cors-header/ 5e5d992c0f5abe37b745a0d0 Mon, 02 Mar 2020 21:53:00 -0500 When a typical web service call is made, the browser debugger shows a Cross-Origin Request Blocked error because it "Did not find method in CORS header", specifically Access-Control-Allow-Methods.

The resolution to this is quite simple with Oracle HTTP Server (OHS) 12c.

Browser Error: "Did not find method in CORS header"

The browser is showing the following in the debug console:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://soadev:8888/HelloWorld. (Reason: Did not find method in CORS header 'Access-Control-Allow-Methods').

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://soadev:8888/HelloWorld. (Reason: CORS request did not succeed.)

Resolution

1. Stop OHS:

export WEB_DOMAIN_HOME=/u01/oracle/domains/ohs_domain
$WEB_DOMAIN_HOME/bin/stopComponent.sh ohs1

2. Edit these two files:

vi $WEB_DOMAIN_HOME/config/fmwconfig/components/OHS/instances/ohs1/moduleconf/osb.conf
vi $WEB_DOMAIN_HOME/config/fmwconfig/components/OHS/ohs1/moduleconf/osb.conf

3. Add the following entry:

Header set Access-Control-Allow-Methods "POST, GET, OPTIONS, PUT, DELETE" 

4. Start OHS:

$WEB_DOMAIN_HOME/bin/startComponent.sh ohs1 showErrorStack

5. Repeat this for all nodes in the OHS cluster.
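
To confirm the header is actually returned after the restart, a quick curl check against the same endpoint can help (a minimal sketch; adjust the host, port, and path to your environment):

curl -XOPTIONS -I http://soadev:8888/HelloWorld | grep -i "Access-Control-Allow-Methods"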

]]>
<![CDATA[ How to show a timestamp in Bash history ]]> https://chronicler.tech/bash-history-timestamp/ 5e5c1f340f5abe37b745a0a1 Sun, 01 Mar 2020 16:00:23 -0500 By default, when you run the history command, it lists the history of all commands previously executed, but it does not include a timestamp.

Adding a timestamp to the bash history can easily be accomplished with the HISTTIMEFORMAT Bash variable as shown.

export HISTTIMEFORMAT="%F %T "
export PS1="\u@\h:\$(pwd)> "

As an added bonus, I like the PS1 variable in the example above because it lists the username, hostname, and full path in a single line (escaping the $ makes $(pwd) re-evaluate each time the prompt is displayed).

Here's an example of the output:
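
Illustrative sample only (your entry numbers, commands, and timestamps will differ):

ahmed@soahost:/home/ahmed> history | tail -3
  101  2020-03-01 15:58:02 export HISTTIMEFORMAT="%F %T "
  102  2020-03-01 15:58:10 df -h
  103  2020-03-01 15:58:14 history | tail -3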

]]>
<![CDATA[ Sending test UMS notification fails due to missing JMS queue ]]> https://chronicler.tech/unable-to-send-test-ums-notification-due-to-missing-jms-queue/ 5e5a82f70f5abe37b7459ff4 Sat, 29 Feb 2020 11:03:14 -0500 I recently ran into a problem sending test email notifications from the Human Workflow Engine in Oracle SOA Suite 12.2.

In the EM Console, if you navigate to SOA > soa-infra (soa_server1) > Service Engines > Human Workflow > Notification Management, you can click on the "Send Test Notification" button to see if UMS is configured appropriately for email.

In my case, I received the following error:

exception.code:31015 exception.type: ERROR exception.severity: 2 exception.name: Error while sending notification. exception.description: Error while sending notification to email: ahmed@. exception.fix: Check the underlying exception and fix it. ; exception.code:31002 exception.type: ERROR exception.severity: 2 exception.name: Error while publishing message to notification queue. exception.description: Error while publishing message to notification queue. exception.fix: Check Application server data source properties for BPEL-Workflow connections. Check schema and verify database connections. ; ; Unable to resolve 'jms.Queue.NotificationSenderQueue'. Resolved 'jms.Queue';

If you clean up and parse this error, you will notice this is what it says:

exception.code:        31015
exception.type:        ERROR
exception.severity:    2
exception.name:        Error while sending notification.
exception.description: Error while sending notification to email: ahmed@. 
exception.fix:         Check the underlying exception and fix it. ; 
exception.code:        31002
exception.type:        ERROR
exception.severity:    2
exception.name:        Error while publishing message to notification queue. 
exception.description: Error while publishing message to notification queue. 
exception.fix:         Check Application server data source properties for BPEL-Workflow connections.
                       Check schema and verify database connections. ; ;
                       Unable to resolve 'jms.Queue.NotificationSenderQueue'.
                       Resolved 'jms.Queue';

So you'll see that it is unable to resolve the JMS queue name jms.Queue.NotificationSenderQueue.

This queue exists under the SOAJMSModule which happens to be targeted to SOAJMSServer_auto_1 and SOAJMSServer_auto_2.

What's odd is that these JMS Servers, although properly targeted, show no current target and are not reporting OK. Apparently, many other JMS Servers have the same problem.

I have not been able to determine why this is happening, but not being a fan of migratable servers anyway, I chose an alternate option:

  • Click on each "faulty" JMS Server, and target it to the managed server instead of the migratable server.

Once you do so, you might see this error, but temporarily ignore it (since we'll need to update the persistent store shortly).

The following failures occurred: -- JMS server or SAF agent SOAJMSServer_auto_1 is not targeted to the same target as its persistent store -- JMS server or SAF agent SOAJMSServer_auto_1 is not targeted to the same target as its persistent storeMessage icon - Error Errors must be corrected before procee

Now click on Configuration and note the name of the Persistent Store of this JMS Server.

Navigate to this persistent store, and change its target from the migratable server (e.g., soa_server1_bam-exactly-once (migratable)) to the managed server (e.g., soa_server1).

This step will not give you an error.

Save and activate the changes.

Now when you navigate back to the JMS Servers summary page, you will see the JMS Server reporting healthy, and the test notification will work.

]]>
<![CDATA[ Getting HTTP-403 connecting to OHS using CORS ]]> https://chronicler.tech/getting-http-403-connecting-to-ohs-using-cors/ 5e5812e40f5abe37b7459fc5 Thu, 27 Feb 2020 20:14:00 -0500 We had a vanilla installation of Oracle HTTP Server (OHS) 12.1.3 and configured it to support CORS. On a new installation of OHS 12.2.1, the same behavior wasn't working.

What is CORS?

CORS is Cross-Origin Resource Sharing, and you can find an explanation of it here and here.

Essentially, it enables client-side code running in a browser in a particular domain to access resources hosted in another domain in a secure manner. Cross-origin requests are typically not permitted by browsers, and CORS provides a framework in which cross-domain requests are treated as same-domain requests.

For example, using CORS, JavaScript embedded in a web page can make an HTTP XMLHttpRequest to a different domain. This is used to send an HTTP or HTTPS request to a web server, and to load the server response data back into the script.

Configuring CORS with Oracle HTTP Server

This is typically done by adding the following to httpd.conf:

Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Max-Age "1000"
Header always set Access-Control-Allow-Headers "X-Requested-With, Content-Type, Origin, Authorization, Accept, Client-Security-Token, Accept-Encoding"
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"

Problem Experienced: HTTP-403

To test whether this is functioning, simply execute a simple curl command against the service. In our case, even though the above settings were configured, we still received an HTTP-403:

oracle@soadev:/home/oracle> curl -XOPTIONS -I http://soadev:8888/HelloWorld
HTTP/1.1 403 Forbidden
Date: Wed, 26 Feb 2020 23:39:18 GMT
Server: Oracle-HTTP-Server
Content-Length: 236
Content-Type: text/html; charset=iso-8859-1

Resolution

1. Stop OHS:

export WEB_DOMAIN_HOME=/u01/oracle/domains/ohs_domain
$WEB_DOMAIN_HOME/bin/stopComponent.sh ohs1

2. Edit these two files:

vi $WEB_DOMAIN_HOME/config/fmwconfig/components/OHS/instances/ohs1/httpd.conf
vi $WEB_DOMAIN_HOME/config/fmwconfig/components/OHS/ohs1/httpd.conf

3. Comment out these lines, which are included by default in all OHS 12.2.1 installations:

#<IfModule mod_rewrite.c>
#    RewriteEngine on
#    RewriteCond %{REQUEST_METHOD} ^OPTIONS
#    RewriteRule .* . [F]
#</IfModule>

4. Start OHS:

$WEB_DOMAIN_HOME/bin/startComponent.sh ohs1 showErrorStack

5. Repeat this for all nodes in the OHS cluster.

Success!

oracle@soadev:/home/oracle> curl -XOPTIONS -I http://soadev:8888/HelloWorld
HTTP/1.1 200 OK
Date: Wed, 26 Feb 2020 23:42:04 GMT
Server: Oracle-HTTP-Server
Last-Modified: Wed, 26 Feb 2020 16:42:04 MST
X-ORACLE-DMS-ECID: 005bsCap_zu6uHIqyofd6G000EyQ000001
X-ORACLE-DMS-RID: 0:1
Allow: POST,OPTIONS
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type
Transfer-Encoding: chunked
Content-Type: application/vnd.sun.wadl+xml
]]>
<![CDATA[ Repeated DFW-99998 errors in OBIEE 12c? Some of them you can ignore ]]> https://chronicler.tech/repeated-dfw-99998-errors/ 5e57b04f0f5abe37b7459f4d Thu, 27 Feb 2020 08:22:00 -0500 On OBIEE 12.2.1.1.0, you may get repeated DFW-99998 errors, such as:

<Feb 20, 2020, 3:22:08,281 PM EST> <Emergency> <oracle.dfw.incident> <BEA-000000> <incident 9116 created with problem key "DFW-99998 [java.io.IOException][oracle.bi.security.filter.BISecurityFilter.handleAnonymousSubject][analytics]">

[2019-11-15T14:40:27.643-05:00] [bi_server1] [NOTIFICATION] [DFW-40101] [oracle.dfw.incident] [tid: [ACTIVE].ExecuteThread: '12' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <WLS Kernel>] [ecid: b98a9e33-00a2-4167-ad51-74caa842b8e3-000f2a7d,0] [partition-name: DOMAIN] [tenant-name: GLOBAL] An incident has been signalled with the incident facts: [problemKey=DFW-99998 [java.io.IOException][oracle.bi.security.filter.BISecurityFilter.handleAnonymousSubject][analytics] incidentSource=SYSTEM incidentTime=Fri Nov 15 14:40:27 EST 2019 errorMessage=DFW-99998 executionContextId=b98a9e33-00a2-4167-ad51-74caa842b8e3-000f2a75]

[2019-11-15T14:40:28.099-05:00] [bi_server1] [INCIDENT_ERROR] [DFW-40104] [oracle.dfw.incident] [tid: [ACTIVE].ExecuteThread: '12' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <WLS Kernel>] [ecid: b98a9e33-00a2-4167-ad51-74caa842b8e3-000f2a7d,0] [errid: 4263] [detailLoc: /u01/middleware/user_projects/domains/obiee/servers/bi_server1/adr/diag/ofm/obiee/bi_server1/incident/incdir_4263] [probKey: DFW-99998 [java.io.IOException][oracle.bi.security.filter.BISecurityFilter.handleAnonymousSubject][analytics]] [partition-name: DOMAIN] [tenant-name: GLOBAL] incident 4263 created with problem key "DFW-99998 [java.io.IOException][oracle.bi.security.filter.BISecurityFilter.handleAnonymousSubject][analytics]"

Now keep in mind that DFW errors are diagnostic framework errors, and particularly DFW-99998 could mean anything, so you don't want to simply ignore all DFW-99998 errors.

But in this specific case, the error is a Java I/O exception on some OBIEE security filter:

[java.io.IOException][oracle.bi.security.filter.BISecurityFilter.handleAnonymousSubject]

After two SRs with Oracle Support, we've confirmed that these can be safely ignored.

The annoying problem is that we've been getting 0-10 entries of these in the logs on a daily basis, and the OEM Agent picks them up and alerts on them.

The only possible solution is to filter these errors out using standard WebLogic filters.

]]>
<![CDATA[ Graceful restart for Oracle HTTP Server 11g ]]> https://chronicler.tech/graceful-restart-for-oracle-http-server/ 5e56d84b3e7c273c97ebba77 Thu, 27 Feb 2020 08:19:28 -0500 From time to time, you need to update the HTTP server configuration. Apache HTTPD and NGINX offer a graceful restart, and Oracle HTTP Server 12c uses graceful mode for restarts by default, but there is no graceful restart for OHS 11g.

And yet you can do a graceful restart with OHS 11g, and it's effortless. You just send the USR1 signal to the HTTPD process, as in the example below.

# Send USR1 signal to the HTTPD process
[oracle@oracle-webtier11 ~]$ kill -USR1 $(cat $INSTANCE_HOME/diagnostics/logs/OHS/ohs1/httpd.pid)
# Validating result 
[oracle@oracle-webtier11 ~]$ tail $INSTANCE_HOME/diagnostics/logs/OHS/ohs1/ohs1.log
[2020-02-26T16:07:42.5875-05:00] [OHS] [NOTIFICATION:16] [OHS-9999] [core.c] [host_id: oracle-webtier11.domain.com] [host_addr: 192.168.1.22] [pid: 65333] [tid: 140185203439488] [user: oracle] [VirtualHost: main]  SIGUSR1 received.  Doing graceful restart
[2020-02-26T16:07:43.8346-05:00] [OHS] [NOTIFICATION:16] [OHS-9999] [core.c] [host_id: oracle-webtier11.domain.com] [host_addr: 192.168.1.22] [pid: 65333] [tid: 140185203439488] [user: oracle] [VirtualHost: main]  WebLogic Server Plugin version 1.1 
[2020-02-26T16:07:43.9410-05:00] [OHS] [NOTIFICATION:16] [OHS-9999] [core.c] [host_id: oracle-webtier11.domain.com] [host_addr: 192.168.1.22] [pid: 65333] [tid: 140185203439488] [user: oracle] [VirtualHost: main]  Oracle-HTTP-Server/11.1.1.9.0 (Unix) mod_ssl/11.1.1.9.0 OtherSSL/0.0.0 mod_plsql/11.1.1.0.0 mod_onsint/2.0 configured -- resuming normal operations
]]>
<![CDATA[ URLs for Oracle SOA, BPM, OSB, BAM, WSM, ESS, MFT, and API consoles ]]> https://chronicler.tech/soaurls/ 5e5425f93e7c273c97ebba43 Mon, 24 Feb 2020 15:50:00 -0500 Just a quick reference to most console URLs typically available on the following products:

  • Oracle WebLogic Server
  • Oracle SOA Suite
  • Oracle Business Process Manager (BPM) Suite
  • Oracle Service Bus (OSB)
  • Oracle Business Activity Monitoring (BAM)
  • Oracle Web Services Manager (WSM)
  • Oracle Enterprise Scheduler (ESS)
  • Oracle Managed File Transfer (MFT)
  • Oracle API Manager
WebLogic Server Admin Console     http://soadev:7001/console
EM Control                        http://soadev:7001/em
Service Bus Console               http://soadev:7001/servicebus
Service Bus Console (redirect)    http://soadev:7001/sbconsole
MFT Console                       http://soadev:7020/mftconsole
API Manager Console               http://soadev:7001/apimanager
BPM Worklist                      http://soadev:8001/integration/worklistapp
SOA Composer                      http://soadev:8001/soa/composer
BPM Composer                      http://soadev:8001/bpm/composer
BAM Composer                      http://soadev:9001/bam/composer
SOA Infrastructure                http://soadev:8001/soa-infra
User Messaging Preference UI      http://soadev:8001/sdpmessaging/userprefs-ui
WSM Policy Manager Validation     http://soadev:8001/wsm-pm
ESS Healthcheck                   http://soadev:6001/EssHealthCheck/checkHealth.jsp
ESS Diagnostics                   http://soadev:6001/EssHealthCheck/diagnoseHealth.jsp
DMS Applications                  http://soadev:[port]/dms

Some things to note:

  • Ports may differ.
  • URLs may be different in Oracle's PaaS cloud services.
  • The DMS application is targeted to and accessible from every managed server on every host.
]]>
<![CDATA[ Always free ... to go ]]> https://chronicler.tech/always-free-to-go/ 5e4d21d6389e8f53dd7dc4ee Fri, 21 Feb 2020 06:17:00 -0500 Last week I attended Oracle's two-day Oracle Cloud Infrastructure marketing workshop. It turned out there was way more marketing than I'd expected. There was a lot of talk about how Oracle's "NextGen" Cloud is better than all the other public cloud providers and how Oracle stands out among all the other vendors with always-free offerings.

Why, it's an excellent chance to move your home lab to the cloud; that's what I thought. Sure enough, the first thing I had to share was my credit card information. Not the best start for free stuff, is it? Anyway, I finished registration and got my very own namespace, account, and a nice-looking interface. It was promising and somewhat exciting. I completed my labs and messed around with a free database. In the end, I trashed everything and disconnected with big plans in mind.

Now, a week after registration, I cannot create a free instance with a public IP address. I spent hours before giving up. Whenever I choose a paid option, there is enough compute power, even for big shapes.

I still have my $299.81 credit and will use it, but I'm a bit confused.
Is your Cloud 2 infrastructure not as good as you boast, or are your always-free components more like a public restroom in some cafeterias: it's free, but the access code is on the check?


UPDATE: I finally got my always-free instance provisioned (conspiracy aside), so be ready; it takes a while.

]]>
<![CDATA[ Quickest way to decrypt passwords in Oracle WebLogic Server 12c ]]> https://chronicler.tech/simplest-way-to-decrypt-passwords-in-oracle-weblogic-server-12c/ 5e4463e70b1b670a1724f23c Wed, 12 Feb 2020 16:18:40 -0500 There are lots of scripts online that show you how to decrypt WebLogic passwords. Some work on different versions of the product but not others. Here's the simplest, most straightforward approach to decrypting passwords in Oracle WebLogic Server 12c.

This has been tested on Oracle WebLogic Server 12.1.3 and 12.2.1.

Location of AES Encrypted Passwords

Passwords are mainly located in configuration files here:

$DOMAIN_HOME/config/config.xml
$DOMAIN_HOME/config/jdbc/*.xml
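
To list every encrypted value in one pass before decrypting (a minimal sketch; adjust DOMAIN_HOME to your own domain path):

export DOMAIN_HOME=/u01/app/oracle/middleware/user_projects/domains/wl_domain

# List every unique {AES} value across the domain configuration files
grep -ho '{AES}[^<"]*' $DOMAIN_HOME/config/config.xml $DOMAIN_HOME/config/jdbc/*.xml | sort -u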


Decrypt WebLogic Passwords

1. Get the password you would like to decrypt (in AES format). For example:

cd /u01/app/oracle/middleware/user_projects/domains/wl_domain/config
cat config.xml | grep AES

2. Run WLST:

cd /u01/app/oracle/middleware/oracle_common/common/bin
./wlst.sh

3. Set the domain and decrypt the password:

domain = "/u01/app/oracle/middleware/user_projects/domains/wl_domain"
service = weblogic.security.internal.SerializedSystemIni.getEncryptionService(domain)
encryption = weblogic.security.internal.encryption.ClearOrEncryptedService(service)
print encryption.decrypt("{AES}nFIptO4HdY8fxSgLjrS8ZNqsVlcB2zQZzYJQ9o7AbJU=")

Enter the domain name, and paste the entire {AES} password as shown.

Example

]]>
<![CDATA[ Easy Fix For Quarkus 1.2.0 Native Build ]]> https://chronicler.tech/fix-quarkus-1-2-0-native-builds/ 5e401f810b1b670a1724f0e7 Tue, 11 Feb 2020 08:35:00 -0500 If you are into Red Hat products, then you have heard about the promising Quarkus Java framework. It works quite well with GraalVM and has a native profile, so you can run an application with no external dependencies. It is convenient if you build a container project (and who doesn't nowadays).

The new release has just arrived, and I decided to refresh my workstation to see how it all works together. I tried to rebuild my good old demo project with the new JDK 11 and the latest framework. The native build failed with an indistinct exception.

Caused by: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
	[error]: Build step io.quarkus.deployment.pkg.steps.NativeImageBuildStep#build threw an exception: java.lang.RuntimeException: Failed to build native image
	at io.quarkus.deployment.pkg.steps.NativeImageBuildStep.build(NativeImageBuildStep.java:319)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at io.quarkus.deployment.ExtensionLoader$2.execute(ExtensionLoader.java:915)
	at io.quarkus.builder.BuildContext.run(BuildContext.java:279)
	at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
	at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2011)
	at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1535)
	at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1426)
	at java.lang.Thread.run(Thread.java:748)
	at org.jboss.threads.JBossThread.run(JBossThread.java:479)
Caused by: java.lang.RuntimeException: Image generation failed. Exit code: 1
	at io.quarkus.deployment.pkg.steps.NativeImageBuildStep.build(NativeImageBuildStep.java:308)
	... 12 more

Google didn't bring me anything but some complaints about Quarkus, GraalVM, and image compatibility. But that wasn't the case (I tested 6 Graal versions), and it wasn't memory either. The full Maven trace showed me the real cause: missing zlib headers. So the fix is quite easy: just install the zlib-devel package.

[root@pvln src]# dnf install zlib-devel
Last metadata expiration check: 2:37:23 ago on Sun 09 Feb 2020 07:23:47 AM EST.

...........

Installed:
  zlib-devel-1.2.11-10.el8.x86_64                                                                                    

Complete!
[root@pvln src]# 

And now you have your native application ready to build and run.

[mmikhailidi@pvln getting-started]$ ./mvnw clean package -Pnative 
[INFO] Scanning for projects...

.............................................

[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildStep] /usr/local/src/graalvm/bin/native-image -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=1 --initialize-at-build-time= -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -jar getting-started-1.0-SNAPSHOT-runner.jar -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:-AddAllCharsets -H:EnableURLProtocols=http -H:NativeLinkerOption=-no-pie -H:+JNI --no-server -H:-UseServiceLoaderFeature -H:+StackTrace getting-started-1.0-SNAPSHOT-runner
[getting-started-1.0-SNAPSHOT-runner:27174]    classlist:   6,134.06 ms
[getting-started-1.0-SNAPSHOT-runner:27174]        (cap):   1,060.04 ms
[getting-started-1.0-SNAPSHOT-runner:27174]        setup:   2,983.01 ms
10:01:46,481 INFO  [org.jbo.threads] JBoss Threads version 3.0.0.Final
[getting-started-1.0-SNAPSHOT-runner:27174]   (typeflow):  15,563.74 ms
[getting-started-1.0-SNAPSHOT-runner:27174]    (objects):   8,731.99 ms
[getting-started-1.0-SNAPSHOT-runner:27174]   (features):     565.49 ms
[getting-started-1.0-SNAPSHOT-runner:27174]     analysis:  25,985.90 ms
[getting-started-1.0-SNAPSHOT-runner:27174]     (clinit):     691.94 ms
[getting-started-1.0-SNAPSHOT-runner:27174]     universe:   2,029.82 ms
[getting-started-1.0-SNAPSHOT-runner:27174]      (parse):   3,015.51 ms
[getting-started-1.0-SNAPSHOT-runner:27174]     (inline):  12,080.51 ms
[getting-started-1.0-SNAPSHOT-runner:27174]    (compile):  27,263.60 ms
[getting-started-1.0-SNAPSHOT-runner:27174]      compile:  44,412.33 ms
[getting-started-1.0-SNAPSHOT-runner:27174]        image:   2,751.31 ms
[getting-started-1.0-SNAPSHOT-runner:27174]        write:     729.34 ms
[getting-started-1.0-SNAPSHOT-runner:27174]      [total]:  85,520.01 ms
[INFO] [io.quarkus.deployment.QuarkusAugmentor] Quarkus augmentation completed in 87914ms
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:37 min
[INFO] Finished at: 2020-02-09T10:03:00-05:00
[INFO] ------------------------------------------------------------------------
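Once the build succeeds, the native executable lands under target/, named after the runner jar shown in the log above. Assuming the default Quarkus project layout, you can start it directly:

$ ./target/getting-started-1.0-SNAPSHOT-runner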
]]>
<![CDATA[ Keeping a list of OUD backends for exporting ]]> https://chronicler.tech/keeping-a-list-of-oud-backends-for-exporting/ 5e40e4ba0b1b670a1724f1ef Mon, 10 Feb 2020 00:35:23 -0500 When you plan to move data between Oracle Unified Directories (OUD) instances, there are several essential details you should remember.

The first thing to consider is ensuring that you export data with operational attributes. Otherwise, after running the import-ldif command, you will discover that your user accounts are missing the necessary privileges to perform daily work. When you perform an export from the ODSM (or OUDSM in version 12c) interface, you need to check a checkbox in the pop-up window.

The second thing is that you should be aware of all the OUD backends that you have in your instance. OUD directories have several backends that you can find using the list-backends command.

[oracle@myhost bin] ./list-backends

You might see the userRoot backend that you are planning to export, among other backends. But when you run the oud-setup command at the time of instance creation, it also creates an OracleContext0 backend that might have a DN of “cn=OracleContext,dc=yourcompany,dc=com”.

You have to export and then subsequently import the content of this backend as well, to avoid a “missing policy subentry” error when accessing your baseDN in ODSM.

When using an Ansible playbook for importing LDIF files into a new OUD instance, you can loop through a list of your LDIF files (each for a different backend) and have them loaded in tasks one by one. Non-matching backend data would be skipped.

- name: reload ldif as userRoot backend
  shell:
    cmd: |
      cd "{{ instance_home }}/{{ instance_name }}/bin"
      ./import-ldif -a -r -h {{ inventory_hostname_short }} -p {{ instance_port }} \
      -D "cn=Directory Manager" -j /tmp/{{ oud_password_file }} -X -b {{ instance_base }} \
      -n userRoot -l {{ ldif_file }}

- name: reload ldif as OracleContext0 backend
  shell:
    cmd: |
      cd "{{ instance_home }}/{{ instance_name }}/bin"
      ./import-ldif -a -r -h {{ inventory_hostname_short }} -p {{ instance_port }} \
      -D "cn=Directory Manager" -j /tmp/{{ oud_password_file }} -X -b {{ instance_base }} \
      -n OracleContext0 -l {{ ldif_file }}
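The two tasks above differ only in the backend name (-n), so you can also collapse them into a single task that loops over the backends. Below is a minimal sketch using the same variables as the tasks above; adjust the backend list to whatever list-backends reports for your instance.

- name: reload ldif into each backend
  shell:
    cmd: |
      cd "{{ instance_home }}/{{ instance_name }}/bin"
      ./import-ldif -a -r -h {{ inventory_hostname_short }} -p {{ instance_port }} \
      -D "cn=Directory Manager" -j /tmp/{{ oud_password_file }} -X -b {{ instance_base }} \
      -n {{ item }} -l {{ ldif_file }}
  loop:
    - userRoot
    - OracleContext0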
]]>
<![CDATA[ Unable to create an error notification rule in Oracle SOA Suite 12.2.1? Patch 26088894 is not the fix ]]> https://chronicler.tech/unable-to-create/ 5e3ed81a0b1b670a1724f08f Sat, 08 Feb 2020 11:45:03 -0500 On Oracle SOA Suite 12.2.1.0.0, when we tried to create an error notification rule to alert on failed ESS scheduled jobs, we experienced an issue where the dropdown was not showing any schedules. Unfortunately, the solution to apply patch 26088894 does not work.

Patch 26088894

Patch 26088894 is a zero downtime ESS patch and applicable for Oracle SOA Suite versions 12.1.3, 12.2.1, 12.2.1.1, and 12.2.1.2.

It addresses the issue of the Adapter Schedule generating a Null Error on the GUI, which may appear to be the issue we experienced, but it is not.

Problem: Error Notification Rule Issue

  1. Log in to the EM Console
  2. Navigate to Scheduling Services > Job Request > Define Schedules
  3. Create a schedule called "Ahmed_Job_10Minutes"
  4. Navigate to SOA > soa-infra (soa_server1) > SOA Infrastructure > Error Notification Rules
  5. Click on Create

When trying to create an error notification rule, you can see that the Schedule dropdown box is empty, preventing us from being able to create a rule.

Solution: Define the Proper Schedule Package

When creating the job schedule, it must be in the package /oracle/apps/ess/custom/soa. See screenshot below. That's it!

Only schedules in package /oracle/apps/ess/custom/soa of application EssNativeHostingApp can be used for creating error notification rules.

]]>
<![CDATA[ Filtering WebLogic log messages ]]> https://chronicler.tech/filtering-weblogic-log-messages/ 5e3df0560b1b670a1724eff2 Fri, 07 Feb 2020 18:44:46 -0500 There is a ton of documentation online on how to create WebLogic log filters. In this blog post, I describe how to easily filter out a repeating error that appears to be harmless.

What harm is there in keeping these entries in the log file? Why are we filtering them out rather than leaving them in place? Because our OEM Agent was picking up these "Emergency" entries and repeatedly emailing out alerts.

The error message, which occurred anywhere from 0 to 10 times daily, appeared in the bi_server1.out log file as follows:

<Feb 6, 2020, 3:37:06,362 PM EST> <Emergency> <oracle.dfw.incident> <BEA-000000> <incident 899 created with problem key "DFW-99998 [java.io.IOException][oracle.bi.security.filter.BISecurityFilter.handleAnonymousSubject][analytics]">

Create a Filter to Suppress Logging of the DFW-99998 Error

  1. Login to the WebLogic Admin Console.
  2. Click on the domain name on the top-left.
  3. Navigate to Configuration > Log Filters.
  4. Click on New.
  5. Enter a filter name. For example: LogFilter-Suppress-DFW99998.
  6. Click on the filter name.
  7. Click on Edit, and enter a custom filter expression such as:
NOT (MESSAGE LIKE 'created with problem key "DFW-99998 [java.io.IOException][oracle.bi.security.filter.BISecurityFilter.handleAnonymousSubject][analytics]"')

It will look like this:

Assign the Filter to the Managed Server Log Files

  1. Navigate to the managed server. For example: bi_server1
  2. Navigate to Logging > General > Advanced.
  3. Assign the newly created filter to all log files as shown in the screenshot (or script the change with WLST, as sketched below).
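If you have to roll the same filter out to many servers, the change can also be scripted. Below is a rough, untested WLST sketch based on the standard configuration MBeans (DomainMBean.createLogFilter, LogMBean.setLogFileFilter/setStdoutFilter); the connection details are placeholders, and the console steps above remain the safest path:

connect('weblogic', '<password>', 't3://adminhost:7001')
edit()
startEdit()
# Create the domain-level log filter with the suppression expression
cd('/')
logFilter = cmo.createLogFilter('LogFilter-Suppress-DFW99998')
logFilter.setFilterExpression("NOT (MESSAGE LIKE 'created with problem key \"DFW-99998 [java.io.IOException][oracle.bi.security.filter.BISecurityFilter.handleAnonymousSubject][analytics]\"')")
# Attach the filter to the managed server log destinations
cd('/Servers/bi_server1/Log/bi_server1')
cmo.setLogFileFilter(logFilter)
cmo.setStdoutFilter(logFilter)
save()
activate()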


]]>
<![CDATA[ Ansible NOVA meetup ]]> https://chronicler.tech/nova-feb-12/ 5e2e1e960b1b670a1724efac Mon, 27 Jan 2020 09:30:00 -0500 If you live in the metro DC area and are interested in RedHat Ansible, come and join us to learn how you can use this marvelous automation tool to manage Oracle Fusion Middleware environments. The event is scheduled for February 12, 2020. Please find all arrangement details on the event page at https://www.meetup.com/Ansible-NOVA/events/267997275.

Looking forward to meeting you there.

]]>
<![CDATA[ NATCAP OUG Meetup ]]> https://chronicler.tech/natcap-oug-meetup/ 5e2e246b0b1b670a1724efd6 Mon, 27 Jan 2020 08:30:00 -0500 If you live in the metro DC area and are interested in cutting-edge technologies, come and join us and learn first-hand. I'm going to talk about containers and virtualization. At the end of the presentation, I'll show you how you can have your very own RedHat OpenShift 4 on your laptop. The event is scheduled for February 4, 2020. Please find all arrangement details on the event page at https://www.meetup.com/natcapoug-middleware/events/267734332/.

Looking forward to meeting you there.

]]>
<![CDATA[ Using OFMW 12c global variables in automating upgrades with Ansible ]]> https://chronicler.tech/using-ofmw-12c-global-variables-in-automating-upgrades/ 5e2692350b1b670a1724ef44 Tue, 21 Jan 2020 01:13:07 -0500 Automating OFMW 12c patches and upgrades with Ansible gives you a chance to find a lot more about the peculiarities of Oracle Fusion Middleware implementation.

One of the recent discoveries I have made while automating JDK replacement for the 12c R2 (12.2.1.3+) line of products is the central location of the JAVA_HOME variable. If you plan to store your JDK in a new directory and use a symbolic link to point to the new location, you might be in trouble. Specifying JAVA_HOME as a symbolic link is not certified/tested with Oracle WebLogic Server, and it's not mentioned in any documents. If you keep the older JDK directory, you won't even notice anything. However, simply deleting the older JDK will cripple wlst.sh and the other scripts in $ORACLE_HOME/oracle_common/common/bin.

The JAVA_HOME environment variable is centrally located in $ORACLE_HOME/oui/.globalEnv.properties and is updated upon installation, as well as during patching.

One of the possible solutions to handle it when automating a JDK upgrade with Ansible would be using the lineinfile module.

- name: Update JAVA_HOME variable in the .globalEnv.properties file
  lineinfile:
    path: "{{ oracle_home }}/oui/.globalEnv.properties"
    regexp: '^(.*)JAVA_HOME=(.*)$'
    line: "JAVA_HOME={{ oracle_home }}/java/jdk{{ jdk_version }}"

- name: Update JAVA_HOME_1_8 variable in the .globalEnv.properties file
  lineinfile:
    path: "{{ oracle_home }}/oui/.globalEnv.properties"
    regexp: '^(.*)JAVA_HOME_1_8=(.*)$'
    line: "JAVA_HOME_1_8={{ oracle_home }}/java/jdk{{ jdk_version }}"

The oracle_home variable value in this example would be your FMW product installation directory.

An example jdk_version variable value could be “1.8.0_241”.

You can use a similar approach to update other files under $ORACLE_HOME/oui/bin directory.

One additional benefit you can get from the scripts in the $ORACLE_HOME/oui/bin directory for automation with Ansible is using the viewInventory.sh script to determine the version of your product distribution.

Consider the following example where you can store the version into prod_check variable:

---
- name: Checking distribution
  hosts: soa
  vars:
    oracle_home: "/u01/app/oracle/product/fmw12c"
    fmw_product: "SOA"
  tasks:
    - name: Check installed version
      shell: "{{ oracle_home }}/oui/bin/viewInventory.sh | grep 'Distribution' | awk '/{{ fmw_product }}/ {print $3}'"
      register: prod_check
    - debug: var=prod_check.stdout
...
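Once the version is registered, later tasks can be gated on it. Here is a minimal sketch of a follow-up task; the version string and the comparison operator are assumptions you would adjust to your target release:

    - name: Run the upgrade steps only on 12.2.1.3 or newer
      debug:
        msg: "Distribution {{ prod_check.stdout }} is recent enough, proceeding"
      when: prod_check.stdout is version('12.2.1.3.0', '>=')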
]]>
<![CDATA[ blowfish-cbc is not supported in FtpAdapter? Not true ]]> https://chronicler.tech/blowfish-cbc-is-not-supported-in-ftpadapter/ 5e2075680b1b670a1724eed6 Thu, 16 Jan 2020 10:03:06 -0500 We added a new JNDI to the FtpAdapter in Oracle SOA Suite 12.2.1.0.0, and upon execution of the SOA composite, received the error "blowfish-cbc is not supported".

Blowfish is a cipher, and this generally should be configured within the adapter itself.

Oracle Doc ID 2294667.1 states that patch 26561747 is available to resolve this, but this patch is only available for 12.2.1.2.0 and 12.2.1.3.0. The same note describes adding a property to the composite.

Here are snippets of what can appear in the SOA diagnostic log file:

[2020-01-15T00:13:41.062-05:00] [soa_server2] [ERROR] [] [oracle.soa.adapter.ftp] [tid: [ACTIVE].ExecuteThread: '33' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: ad133614-5555-4444-8996-dd19444645ae-000001b5,1:18533] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 2220001] [oracle.soa.tracking.InstanceId: 11671111] [oracle.soa.tracking.SCAEntityId: 470044] [composite_name: HelloWorld!1.0] [FlowId: 0000MybblRlF4Ep0^Ru1Vq1U7dr0000001] Exception while setting up session[[
BINDING.JCA-11443
Adapter internal error.
Adapter internal error.
The adapter has become unstable. This could be because of incorrect parameters supplied to the adapter. The parameter: {0} had value: {1}
Please make sure that SFTP has been setup correctly.

        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.setContext(SSHSessionImpl.java:1510)
        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.setUpPublicKeySocketConnection(SSHSessionImpl.java:414)
        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.<init>(SSHSessionImpl.java:208)
	.
	.
	.
Caused by: com.maverick.ssh.SshException: blowfish-cbc is not supported [Unknown cause]
        at com.maverick.ssh2.Ssh2Context.setPreferredCipherSC(Unknown Source)
        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.setContext(SSHSessionImpl.java:1480)
        ... 84 more
[2020-01-15T00:13:41.111-05:00] [soa_server2] [ERROR] [] [oracle.soa.bpel.engine.ws] [tid: [ACTIVE].ExecuteThread: '33' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: ad133614-5555-4444-8996-dd19444645ae-000001b5,1:18533] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 2190001] [oracle.soa.tracking.InstanceId: 11110002] [oracle.soa.tracking.SCAEntityId: 471111] [oracle.soa.tracking.FaultId: 1080001] [composite_name: HelloWorld!1.6] [FlowId: 0000MybblRlF4Ep0^Ru1Vq1U7dr0000001] got FabricInvocationException[[
 ** Cikey: 11670002
 ** FlowId: 2190001
 ** Current Activity Key: 11670002-BpInv10-BpSeq29.27-1
 ** Current Activity Label: Invokewriteftp
 ** InvokeMessageGuid: ca4ccca2-3755-11ea-95c2-0050568aa3f6
 ** ComponentDN: Default/HelloWorld!1.6*soa_4eed0ef0-4414-4a00-9a1b-d7505f2231da/HelloWorld
 ** Properties for component HelloWorld:
   ** bpel.preference.ERPInstance: Prod
   ** bpel.preference.AlertEmailDL: ahmed.aboulnaga@notrealoracle.com
   ** bpel.config.oneWayDeliveryPolicy: async.persist
 ** Transaction info: Name=[EJB com.collaxa.cube.engine.ejb.impl.bpel.BPELDeliveryBean.handleInvoke(com.collaxa.cube.engine.dispatch.message.invoke.InvokeInstanceMessage)],Xid=BEA1-2891D2B01A6E7C053D71(436384373),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=1,seconds left=1998,useSecure=false,activeThread=Thread[[ACTIVE] ExecuteThread: '33' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads],XAServerResourceInfo[SOADataSource_soa_domain]=(ServerResourceInfo[SOADataSource_soa_domain]=(state=started,assigned=none),xar=SOADataSource,re-Registered = false),SCInfo[soa_domain+soa_server2]=(state=active),properties=({weblogic.transaction.partitionName=DOMAIN, weblogic.transaction.name=[EJB com.collaxa.cube.engine.ejb.impl.bpel.BPELDeliveryBean.handleInvoke(com.collaxa.cube.engine.dispatch.message.invoke.InvokeInstanceMessage)]}),local properties=({weblogic.jdbc.jta.SOADataSource=[autoCommit=true,enabled=true,isXA=true,isJTS=false,vendorID=0,connUsed=true,doInit=false,'null',destroyed=false,poolname=SOADataSource,appname=null,moduleName=null,connectTime=275,dirtyIsolationLevel=false,initialIsolationLevel=2,infected=false,lastSuccessfulConnectionUse=1579065219873,secondsToTrustAnIdlePoolConnection=10,currentUser=null,currentThread=null,lastUser=null,currentError=null,currentErrorTimestamp=null,JDBC4Runtime=true,supportStatementPoolable=true,needRestoreClientInfo=false,defaultClientInfo={},supportIsValid=true]}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=soa_server2+127.0.0.1:8001+soa_domain+t3+ CoordinatorNonSecureURL=soa_server2+127.0.0.1:8001+soa_domain+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_soa_server2_soa_domain, WLStore_soa_domain_BPMJMSFileStore_auto_2, eis/Coherence/Remote, eis/tibjms/Queue, eis/Coherence/XALocal, eis/oracle/in-memory, eis/aqjms/EDNLocalTxDurableTopic, , oracle.tip.adapter.jms.JmsXAResource, eis/activemq/Queue, WLStore_soa_domain__WLS_soa_server2, eis/aqjms/EDNLocalTxTopic, eis/Coherence/Local, eis/tibjmsDirect/Topic, eis/wls/Topic, eis/tibjms/Topic, eis/jms/aiaB2BQueueCF, eis/wls/Queue, eis/jms/aiaErrorTopicCF, eis/tibjmsDirect/Queue, eis/aqjms/EDNxaDurableTopic, SOADataSource_soa_domain, eis/wls/EDNxaDurableTopic, eis/aqjms/EDNxaTopic, eis/webspheremq/Queue, eis/wls/EDNLocalTxDurableTopic, eis/sunmq/Queue, eis/aqjms/Topic, tangosol.coherenceTxCCI, eis/File/XAFileAdapter, WLStore_soa_domain_UMSJMSFileStore_auto_4, eis/File/XAFileAdapter2, eis/wls/EDNLocalTxTopic, eis/XAFileAdapter3, OraSDPMDataSource_soa_domain, tangosol.coherenceTx, eis/XAFileAdapter, WLStore_soa_domain_SOAJMSFileStore_auto_2, eis/AQ/aqSample, EDNDataSource_soa_domain, oracle.tip.adapter.apps.AppsXAResource, eis/wls/EDNxaTopic, eis/aqjms/Queue, eis/aq/aiaB2BInfraAQCF, rep_user_soa_domain},NonXAResources={})],CoordinatorURL=soa_server2+127.0.0.1:8001+soa_domain+t3+)
 ** MaxThreadsConstraints: 150
 ** Total dispatcher messages scheduled for processing: 0
 ** Total number of threads processing dispatcher messages: 1
 ** Max Heap size: 8555069440
 ** Free Heap size: 7727236144 com.maverick.ssh.SshException: blowfish-cbc is not supported [Unknown cause]
        at com.maverick.ssh2.Ssh2Context.setPreferredCipherSC(Unknown Source)
        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.setContext(SSHSessionImpl.java:1480)
        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.setUpPublicKeySocketConnection(SSHSessionImpl.java:414)
        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.<init>(SSHSessionImpl.java:208)
	.
	.
	.

The solution is simply to add the cipher to the adapter.

  1. Log in to the WebLogic Server Administration Console.
  2. Click on Lock & Edit.
  3. Click on Deployments.
  4. Click on FtpAdapter.
  5. Click on Configuration.
  6. Click on Outbound Connection Pools.
  7. Expand javax.resource.cci.ConnectionFactory.
  8. Click on the outbound connection pool for the FTP resource adapter.
  9. In the PreferredCipherSuite property, add "aes128-cbc" (see screenshot below)
  10. Save.
  11. Navigate to Deployments.
  12. Select the checkbox beside FtpAdapter.
  13. Click on Update.
  14. Select Update this application in place with new deployment plan changes.
  15. Click Next.
  16. Click Finish.
  17. Activate changes.
  18. Restart the entire domain (including AdminServer).
]]>
<![CDATA[ RedHat OpenShift and private repositories ]]> https://chronicler.tech/redhat-openshift-and-private-repositories/ 5e0cabea0b1b670a1724ecf6 Thu, 02 Jan 2020 08:15:00 -0500 For a while, I have been playing around with RedHat CodeReady Containers, and one thing annoyed me most - access to private repositories. My pet projects are on GitLab, and I'm not ready to expose them as public ones. So, whenever I create an application, the first build fails due to no access to the code. Normally, I would go to the Admin console, create a new code secret, and update the build descriptor with the codeSecret parameter. It's boring and time-consuming, so why not do it the right way?

By the right way, I mean automatically, from a Shell script or from an Ansible playbook.

Make sure that your containers are up and running, and start with a new Quarkus Java project.

$ crc status
CRC VM:          Running
OpenShift:       Running (v4.2.10)
Disk Usage:      15.87GB of 32.2GB (Inside the CRC VM)
Cache Usage:     13.75GB
Cache Directory: /home/mmikhailidi/.crc/cache
$ oc login api.crc.testing:6443 --username=developer --password=developer
Login successful.

You dont have any projects. You can try to create a new project, by running

   oc new-project <projectname>

$ oc new-project quarkus-project
Now using project "quarkus-project" on server "https://api.crc.testing:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app django-psql-example

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
$

Now let's create a basic-auth secret with the GitLab username and token (it could be the account password). I used literals as a source to make it clearer.

$ oc create secret generic my-gitlab-code --type=kubernetes.io/basic-auth \
--from-literal=username=mikhailidim --from-literal=password=****************
secret/my-gitlab-code created
$ oc annotate secret my-gitlab-code \
> 'build.openshift.io/source-secret-match-uri-1=https://gitlab.com/mikhailidim/*'
secret/my-gitlab-code annotated
$ oc secrets link builder my-gitlab-code
$

The interesting part here is the annotation. With this entry, OpenShift will use this secret every time the source URI matches the annotation mask. A secret may have more than one annotation, so you can reuse the same credentials for different repositories/sites. The last preparation step allows the builder to access our new secret entity.
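If you prefer the declarative route (for example, driving it from an Ansible playbook), the same secret and annotation can be expressed as a manifest and applied with oc apply -f; here is a minimal sketch built from the values used in the commands above:

apiVersion: v1
kind: Secret
metadata:
  name: my-gitlab-code
  annotations:
    build.openshift.io/source-secret-match-uri-1: "https://gitlab.com/mikhailidim/*"
type: kubernetes.io/basic-auth
stringData:
  username: mikhailidim
  password: "****************"

Either way, don't forget to run oc secrets link builder my-gitlab-code afterwards.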
Now your project is ready for the first build:

$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:19.2.1~https://gitlab.com/mikhailidim/quarkus-hello.git \
--name=quarkus-hello
-> Found container image 5583407 (2 months old) from quay.io for "quay.io/quarkus/ubi-quarkus-native-s2i:19.2.1"

    Quarkus.io S2I (GraalVM Native)
    -------------------------------
    Quarkus.io S2I image for building Kubernetes Native Java GraalVM applications and running its Native Executables

    Tags: builder, java, quarkus, native

    * An image stream tag will be created as "ubi-quarkus-native-s2i:19.2.1" that will track the source image
    * A source build using source code from https://gitlab.com/mikhailidim/quarkus-hello.git will be created
      * The resulting image will be pushed to image stream tag "quarkus-hello:latest"
      * Every time "ubi-quarkus-native-s2i:19.2.1" changes a new build will be triggered
    * This image will be deployed in deployment config "quarkus-hello"
    * Port 8080/tcp will be load balanced by service "quarkus-hello"
      * Other containers can access this service through the hostname "quarkus-hello"

--> Creating resources ...
    imagestream.image.openshift.io "ubi-quarkus-native-s2i" created
    imagestream.image.openshift.io "quarkus-hello" created
    buildconfig.build.openshift.io "quarkus-hello" created
    deploymentconfig.apps.openshift.io "quarkus-hello" created
    service "quarkus-hello" created
--> Success
    Build scheduled, use 'oc logs -f bc/quarkus-hello' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/quarkus-hello'
    Run 'oc status' to view your app.

$oc logs -f bc/quarkus-hello

The last command allows you to watch the progress, and as you can see in the screenshot, this time the builder has no problem accessing the private source code repository.

On my laptop, the new image build takes about 5 minutes. When our application container is ready, let's expose the service and test it out.

$ oc expose service quarkus-hello
route.route.openshift.io/quarkus-hello exposed
$ curl http://quarkus-hello-quarkus-project.apps-crc.testing/hello -v
* About to connect() to quarkus-hello-quarkus-project.apps-crc.testing port 80 (#0)
*   Trying 192.168.130.11...
* Connected to quarkus-hello-quarkus-project.apps-crc.testing (192.168.130.11) port 80 (#0)
> GET /hello HTTP/1.1
> User-Agent: curl/7.29.0
> Host: quarkus-hello-quarkus-project.apps-crc.testing
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 5
< Content-Type: text/plain;charset=UTF-8
< Set-Cookie: baf6177ebae8ba28e68f6fe44e4918f0=8a52f39652e28461b5925530449430f0; path=/; HttpOnly
< Cache-control: private
<
* Connection #0 to host quarkus-hello-quarkus-project.apps-crc.testing left intact
hello
$
]]>
<![CDATA[ RedHat CodeReady Containers update ]]> https://chronicler.tech/redhat-codeready-containers-update/ 5e0b5cdf0b1b670a1724ec75 Tue, 31 Dec 2019 10:32:53 -0500 Today, during the container start, I got a notification that version 1.3.0 is available. I downloaded the new archive (be careful, it has the same name as the previous releases) and started the upgrade. Immediately, I got an error that my setup is not compatible with the new release. There I made a mistake that became the topic for this blog: I erased the crc folder.

Well, let's start with the recommended approach: stop the cluster and delete it. On my dedicated machine it should look like this:

$ cd /u01/crc-linux-1.1.0-amd64
$ ./crc stop
....
$ ./crc delete
$ cd /u01/crc-linux-1.3.0-amd64
$ ./crc setup 

Unfortunately, I had already deleted the configuration, and "crc delete" didn't work anymore. I was able to set up the containers, but then I got this error during startup:

INFO Creating CodeReady Containers VM for OpenShift 4.2.10... 
ERRO Error creating host: Error creating the VM: Error creating machine: Error in driver during machine creation: virError(Code=9, Domain=20, Message='operation failed: domain 'crc' already exists with uuid 3b7cc2a0-6f30-4984-a1fb-ae74f2f2883c')

After several trials and runs, there is a solution that could save you a few hours.

# Drop crc domain
$ sudo virsh undefine crc
# Drop crc configuration for dnsmasq
$ sudo rm /etc/NetworkManager/dnsmasq.d/crc.conf
# Reload network configuration 
$ sudo systemctl reload NetworkManager

Now you can set up the new cluster by the book.

Happy New Year to everybody who was brave enough to read it all. See you next year.

]]>
<![CDATA[ Ansible and Jinja Templates ]]> https://chronicler.tech/ansible-jinja/ 5e0908dc0b1b670a1724eb11 Mon, 30 Dec 2019 08:30:00 -0500 Quite recently I have adapted the OUD 12c configuration playbook to work with 11g PS3. And here I ran into a tricky difference in configuration parameters. I'm pretty sure there are numerous ways to solve this, but my story is all about Ansible and Jinja.

Let's start with the task: OUD 12c introduced a new parameter, --instancePath, so you can specify where you want to place your new OUD instance, something like this:

$ $ORACLE_HOME/oud/oud-setup --cli --instancePath /u01/app/oracle/instances/oud1 ...

There are no easy ways to do so in OUD 11g. If you need a custom location, you should declare the environment variable INSTANCE_NAME (really?) and specify a path to the location. Sounds easy, doesn't it? Not so much: you should specify the path relative to $ORACLE_HOME, which means it would look similar to:

$ export INSTANCE_NAME=../../../../instances/oud1
$ $OUD_HOME/OUD/oud-setup --cli ...

And that's where all the fun begins. Just stop for a second and try to solve this problem in any language. Java, RegExp, Bash, Python - your call. Right, if you are not a RegExp guru, you end up with the same conclusion: you need some code to produce the relative path from the root folder. Fortunately, Jinja2 templates are powerful enough to handle this. Here is my quick and dirty solution:

- name: Configure OUD 11g PS3 instance
  shell:
    cmd: |
     export JAVA_HOME=/u01/app/oracle/product/jdk
     {% set oud_path=( oracle_home ~ "/Oracle_OUD1").split('/') %}
     export INSTANCE_NAME="{%- for p in oud_path[1:] %}
     {{ '..' if loop.last else '../' }}
     {% endfor -%}{{ instance_home }}/{{ instance_name }}"
     {{ oracle_home }}/Oracle_OUD1/OUD/oud-setup --cli  ....
     

A few comments on how I generate the final shell script.

  • The expression {% set oud_path = ( oracle_home ~ "/Oracle_OUD1").split('/') %} defines a new list, oud_path. It contains all path entries to the OUD 11g home.
  • The Jinja2 control {% for p in oud_path[1:] %} ... {% endfor %} loops through the oud_path list (skipping the empty leading element) and produces the backtrack to the root folder.
  • I don't use the list values. The template produces '../' for each element, or '..' if it is the last one.
  • The rest of the path, {{ instance_home }}/{{ instance_name }}, points to the desired instance location.

Worth mentioning: to make it work, I added whitespace stripping instructions: {%- -%}. The dash instructs the Jinja template to strip all whitespace, including linefeeds.
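To make the backtracking concrete, assume (hypothetically) that oracle_home is /u01/app/oracle/product/fmw11g, instance_home is /u01/app/oracle/instances, and instance_name is oud1. The path /u01/app/oracle/product/fmw11g/Oracle_OUD1 has six components, so the loop emits six '..' segments, and the generated export boils down to roughly:

export INSTANCE_NAME="../../../../../../u01/app/oracle/instances/oud1"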

]]>
<![CDATA[ Advanced find/replace commands in Linux ]]> https://chronicler.tech/linuxsearchandreplace/ 5e0673460b1b670a1724eafa Fri, 27 Dec 2019 16:13:22 -0500 Below are some random find/replace examples that I've used in the past with a brief explanation of each.

Example 1 - Recursively find a string

Recursively search through all files in $ORACLE_HOME/bpel and list out the filenames that have the hostname string "dev78.net" in them.

find $ORACLE_HOME/bpel -type f | xargs grep "dev78.net"

Example 2 - Recursively replace a string

Recursively search from the current directory, and replace all references of "orabpel" with "orabpel2" in all files.

find . -type f -exec sed -i "s%orabpel%orabpel2%" {} \;

Example 3 - Recursively replace a string in a specific location within the line

In the command below, everything between the first % and the second % represents the original search string. Everything between the second % and the third % is the new string to be replaced.

find . -type f -exec sed -i "s%\(<soapEndpointURI>\)\(.*\)?wsdl\(.*\)%\1\2\.wsdl\?wsdl\3%" {} \;

The original search string consists of 4 parts:

(1) <soapEndpointURI>
(2) *
(the string "?wsdl" in between 2 and 3)
(3) *

The new search string keeps parts 1, 2, and 3 intact, but replaces the text in between with ".wsdl?wsdl".

So if the original string looked like this:

<soapEndpointURI>http://thisisahmed/hello?wsdl</soapEndpointURI>

It would now look like this:

<soapEndpointURI>http://thisisahmed/hello.wsdl?wsdl</soapEndpointURI>

Example 4 - Exclude files
Same as Example 3, but will not search/replace inside of .class, .jar, or .zip files.

find . -type f \( ! -iname "*.class" ! -iname "*.jar" ! -iname "*.zip" \) -exec sed -i "s%\(<soapEndpointURI>\)\(.*\)?wsdl\(.*\)%\1\2\.wsdl\?wsdl\3%" {} \;

Example 5 - Recursively replace the hostname in URLs
For every string that takes the form "http://*:7777" (the * is just a wildcard, but can be any value), replace it with "http://${HOSTNAME}:7777" where ${HOSTNAME} is an environment variable.

find -type f -exec sed -i "s%\(ocation=\"http://\)\(.*\):7777\(.*\)%\1${HOSTNAME}:7777\3%" {} \;
]]>
<![CDATA[ My article in #PTK magazine ]]> https://chronicler.tech/my-article-in-ptk/ 5e04cf9d0b1b670a1724eada Thu, 26 Dec 2019 11:00:37 -0500 The UK Oracle User Group has printed my piece about RedHat Ansible and the Oracle Fusion Middleware platform. Take a look and let me know if you can use it for your environments and projects. The tech part of issue #72 can be found here.

]]>
<![CDATA[ Get Oracle Coherence version ]]> https://chronicler.tech/get-oracle-coherence-version/ 5db23d120b1b670a1724e854 Thu, 24 Oct 2019 20:16:28 -0400 Many Oracle Fusion Middleware products are bundled with Oracle Coherence, either optionally or required by the product you are installing. Often it is necessary to identify the Coherence version.

These are the commands to extract the Coherence version from coherence.jar:

cp $DOMAIN_HOME/coherence/lib/coherence.jar /tmp
cd /tmp
unzip coherence.jar META-INF/MANIFEST.MF
cat META-INF/MANIFEST.MF | grep Implementation-Version

The output may be similar to the following:

oracle@devhost:/tmp> cat META-INF/MANIFEST.MF | grep Implementation-Version
Implementation-Version: 12.2.1.1.0
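As a shortcut, you can read the manifest straight out of the jar without copying or extracting anything; a one-liner along these lines should work:

unzip -p $DOMAIN_HOME/coherence/lib/coherence.jar META-INF/MANIFEST.MF | grep Implementation-Version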


]]>
<![CDATA[ XML documents and Ansible ]]> https://chronicler.tech/xml-documents-and-ansible/ 5dac59390b1b670a1724e6ba Mon, 21 Oct 2019 08:59:00 -0400 When you automate Oracle Fusion Middleware environments, you cannot avoid XML documents. For simple modifications, you can treat them as regular text files or use tokenized templates for the configuration updates. Let's see what you can do if you must process XML as a structured data set. Here is a real-life example: any change in the Oracle Access Manager (OAM) configuration file should update the document version. When the OAM server finds a newer version, it reads the document and propagates the updates.

So, the steps to change OAM settings are:

  1. Perform configuration updates
  2. Read current configuration version
  3. Increment current version
  4. Update OAM configuration with the new value

I'll let you do the payload updates and concentrate on the version part. The playbook below uses the xml module to query and update XML content.

  vars:
    oam_config_file: oam-config.xml 
    oam_version_xpath: "/xsd:Configuration/xsd:Setting/xsd:Setting[@Name = 'Version']" 
    oam_namespaces: 
        xsd: "http://www.w3.org/2001/XMLSchema"
        htf: "http://higgins.eclipse.org/sts/Configuration"      
  tasks:
    - name: Get current version
      xml:
       path: "{{ oam_config_file }}"
       xpath: "{{ oam_version_xpath }}"
       content: text
       namespaces: "{{ oam_namespaces }}"
      register: cver
      
    - name: Set new version
      block:    
        - name: Calculate next version
          set_fact:
            new_version: "{{ cver.matches[0]['{http://www.w3.org/2001/XMLSchema}Setting']|int + 1 }}"           
        - name: Update configuration
          xml:
           path: "{{ oam_config_file }}"
           xpath: "{{ oam_version_xpath }}"
           value: "{{ new_version }}"
           namespaces: "{{ oam_namespaces }}"
      when: cver.count == 1
...

It's quite straightforward and requires only a few additional comments:

  • If the XML document has namespaces, you can declare them as a namespaces dictionary. It allows you to simplify the XPath query.
  • The module returns results as strings. Convert them to the appropriate type to perform further calculations (|int + 1 in the set_fact task).
  • Make sure that you update the appropriate node (cver.count == 1). Most of the nodes in OAM configuration are <xsd:Setting> and few of them have attribute Name with the value 'Version'.
  • Make sure that you have python-lxml package installed.

You can download the sample OAM configuration file and playbook here.

]]>
<![CDATA[ Demo certificates in BPM 12c domain ]]> https://chronicler.tech/bpm-12c-domain-demo-certs/ 5d9f8a5f0b1b670a1724e60e Fri, 11 Oct 2019 09:28:22 -0400 Last week I wrote a piece on Oracle HTTP Server 12c security. We have found a similar hiccup, now in the BPM domain environment settings. If your freshly baked BPM domain works fine but you can't access the BPM applications, check your setDomainEnv.sh properties. By default, it refers to DemoTrust.jks in EXTRA_JAVA_PROPERTIES, like the one below.

EXTRA_JAVA_PROPERTIES="-Djavax.net.ssl.trustStore=${WL_HOME}/server/lib/DemoTrust.jks ${EXTRA_JAVA_PROPERTIES} -Dsoa.archives.dir=${SOA_ORACLE_HOME}/soa -Dsoa.oracle.home=${SOA_ORACLE_HOME} -Dsoa.instance.home=${DOMAIN_HOME} -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Doracle.xml.schema/Ignore_Duplicate_Components=true -Doracle.xdkjava.compatibility.version=11.1.1 -Doracle.soa.compatibility.version=11.1.1 -Ddisable-implicit-bean-discovery=true

At first, I tried to override this property with one in setUserOverrides.sh, with not much success. So, remove this entry and restart the domain.

]]>
<![CDATA[ Secure Oracle HTTP Server 12c scripts ]]> https://chronicler.tech/secure-oracle-http-server-12c/ 5d90b9260b1b670a1724e489 Mon, 30 Sep 2019 08:30:00 -0400 This post is a spin-off of my instance configuration note and covers another security finding. A small intro to set up the context: I always use custom certificates for the WebLogic servers and all the components. Nowadays certificates are free, and if you have a cloud-based or open project, services like Let's Encrypt give you free SSL certificates and tools for automating the certificate life cycle. But you don't always have a choice, and most of my projects use internal CA services to manage certificates company-wide.

One way or another, my quite standard NodeManager property file looks similar to the template below:

# Keystore configuration
KeyStores=CustomIdentityAndCustomTrust
CustomTrustKeyStoreFileName={{ jks_home }}/trust.jks
CustomIdentityKeyStoreFileName={{ jks_home }}/identity.jks
CustomIdentityAlias={{ key_alias }}
CustomIdentityKeyStoreType=JKS
CustomIdentityPrivateKeyPassPhrase={{ key_pass }}
CustomIdentityKeyStorePassPhrase={{ store_pass }}

Of course, Oracle HTTP Server 12c has received the same NodeManager configuration as all the other domains we manage through the Ansible playbooks. Immediately after the NodeManager start, I ran into an odd issue: my startComponent.sh script failed to connect to the NodeManager port. Well, it happens all the time, so as part of the standard WLST script call I added an environment variable declaration:

export WLST_PROPERTIES="-Dweblogic.security.SSL.enableJSSE=true -Dweblogic.security.TrustKeyStore=CustomTrust -Dweblogic.security.CustomTrustKeyStoreType=JKS -Dweblogic.security.CustomTrustKeyStoreFileName={{ jks_home }}/trust.jks -Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2 -Dweblogic.MaxMessageSize=300000000"

Well, this time it didn't work. All the domain scripts ignored my arguments and failed with the handshake exception: PKIX path building failed. To check whether WLST_PROPERTIES works, I added the -Djavax.net.ssl.debug argument to get the debug output. That's how I figured out that the scripts ignore the WebLogic arguments and use DemoIdentity.jks and DemoTrust.jks, ignoring even the standard Java trust keystore. After a few trials, I ended up with the environment variable as below.

export WLST_PROPERTIES="-Dweblogic.MaxMessageSize=3000000 -Djavax.net.ssl.trustStore={{ jks_home }}/trust.jks"

It adds the custom keystore to the list of trusted certificates on the JVM level, so weblogic.WLST can validate custom certificates.

I think the WebLogic base component template is not quite a WebLogic server: it does not understand the WebLogic security arguments, while the tools use the WebLogic Server framework and its default configuration settings.


]]>
<![CDATA[ Start OHS 12c no questions asked ]]> https://chronicler.tech/configure-credentials-for-ohs-12c/ 5d909be60b1b670a1724e324 Mon, 30 Sep 2019 08:00:00 -0400 By default, when you start a domain component, such as an Oracle HTTP Server instance, the script prompts you for the NodeManager password. It is good security-wise, but not so great for automation.

When I figured out that our script wrapper starts OHS with a plain-text password in the code, I started looking for a more secure solution.

echo "plain-text-password" | $DOMAIN_HOME/bin/startComponent.sh ohs1

As it always happens, I found the answer in the documentation. The startComponent.sh script has an additional parameter, storeUserConfig. It allows you to store the NodeManager credentials under the ~/.wlst folder.

And because I use Ansible for the automation, let's take a look at the instance configuration and start tasks.

Configure instance credentials:

- name: Configure {{ instance_name }} credentials
  no_log: yes
  shell:
    cmd: |
      export WLST_PROPERTIES="-Dweblogic.MaxMessageSize=3000000 -Djavax.net.ssl.trustStore={{ jks_home }}/trust.jks"
      echo "{{ node_password }}" | {{ domain_home }}/bin/startComponent.sh {{ instance_name }} storeUserConfig<

The quick explanation of the code above:

  • name: Name for the task
  • no_log: Do not log the command and its output, to protect sensitive information
  • shell: Ansible module that executes shell commands on the target machine
  • cmd: Module argument with the commands to execute. The initial "|" indicates a multi-line value that will be passed "as-is" to the module.

Start instance task:

- name: Start {{ instance_name }}
  shell:
    cmd: |
      export WLST_PROPERTIES="-Dweblogic.MaxMessageSize=3000000 -Djavax.net.ssl.trustStore={{ jks_home }}/trust.jks"    
      {{ domain_home }}/bin/startComponent.sh {{instance_name }}

Small hint: if you use custom identity and custom trust certificates for NodeManager, don't forget to specify the trust certificate store, so you won't have issues with the secured connection.



]]>
<![CDATA[ Deployment exception because operation cannot be performed until the WebLogic server is restarted ]]> https://chronicler.tech/operation-cannot-be-performed-until-the-weblogic-server-is-restarted/ 5d8cf2cb0b1b670a1724e1f4 Thu, 26 Sep 2019 14:07:59 -0400 Problem:

I tried to create a datasource on Oracle WebLogic 12.1.3, but encountered the following error on the WebLogic Admin Console:

The full error stack in AdminServer.log:

####<Sep 26, 2019 10:25:07 AM MDT> <Error> <Console> <soahost1> <AdminServer> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <2f60d060-b092-4c27-b969-de8684116755-0001bb39> <1569515107237> <BEA-240003> <Administration Console encountered the following error: weblogic.management.DeploymentException: [Deployer:149189]An attempt was made to execute the "activate" operation on an application named "myDS" that is not currently available. The application may have been created after non-dynamic configuration changes were activated. If so, the operation cannot be performed until the server is restarted so that the application will be available.
        at weblogic.deploy.internal.targetserver.DeploymentManager.assertDeploymentMBeanIsNonNull(DeploymentManager.java:1341)
        at weblogic.deploy.internal.targetserver.DeploymentManager.findDeploymentMBean(DeploymentManager.java:1382)
        at weblogic.deploy.internal.targetserver.DeploymentManager.createOperation(DeploymentManager.java:1072)
        at weblogic.deploy.internal.targetserver.DeploymentManager.createOperations(DeploymentManager.java:1428)
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleUpdateDeploymentContext(DeploymentManager.java:164)
        at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.updateDeploymentContext(DeploymentServiceDispatcher.java:168)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doUpdateDeploymentContextCallback(DeploymentReceiverCallbackDeliverer.java:147)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.updateDeploymentContext(DeploymentReceiverCallbackDeliverer.java:28)
        at weblogic.deploy.service.internal.statemachines.targetserver.ReceivedPrepare.callDeploymentReceivers(ReceivedPrepare.java:203)
        at weblogic.deploy.service.internal.statemachines.targetserver.ReceivedPrepare.handlePrepare(ReceivedPrepare.java:112)
        at weblogic.deploy.service.internal.statemachines.targetserver.ReceivedPrepare.receivedPrepare(ReceivedPrepare.java:52)
        at weblogic.deploy.service.internal.targetserver.TargetRequestImpl.run(TargetRequestImpl.java:211)
        at weblogic.deploy.service.internal.transport.CommonMessageReceiver$1.run(CommonMessageReceiver.java:457)
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:553)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)
>

Analysis:

After inspecting the log file AdminServer.log, I found the following snippet preceding the error above:

####<Sep 26, 2019 10:25:06 AM MDT> <Info> <Deployer> <soahost1> <AdminServer> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <2f60d060-b092-4c27-b969-de8684116755-0001bb39> <1569515106221> <BEA-149038> <Initiating task for myDS : [Deployer:149026]activate application myDS on soa_cluster..>

####<Sep 26, 2019 10:25:07 AM MDT> <Warning> <Deployer> <soahost1> <AdminServer> <[ACTIVE] ExecuteThread: '18' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <2f60d060-b092-4c27-b969-de8684116755-0001bb3a> <1569515107232> <BEA-149004> <Failures were detected while initiating activate task for application "myDS".>

The error appeared to be an activation issue on "soa_cluster".

Solution:

1. Create the datasource but do not target it to anything.

2. Save and activate changes.

3. Go back and target the datasource to “soa_cluster”.

4. Save and activate changes.

]]>
<![CDATA[ Getting "Hostname verification failed" in OSB 12c ]]> https://chronicler.tech/getting-sslkepexception-hostname-verification-failed-in-osb-12c/ 5d8b9af10b1b670a1724e1ca Wed, 25 Sep 2019 13:01:34 -0400 Problem:

When calling a service on Oracle Service Bus (OSB) 12c, I received an SSLKeyException related to hostname verification even though we do not have SSL configured.

The entire error stack in osb_server1.out is:

[2019-09-24T21:58:33.853-06:00] [osb_server1] [ERROR] [OSB-381304] [oracle.osb.transports.main.httptransport] [tid: [ACTIVE].ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 005^pYeqrIi6UOIqyovX6G000KCT00000D,0:1:1] [APP: Service Bus Framework Starter Application] [partition-name: DOMAIN] [tenant-name: GLOBAL] [FlowId: 0000Mpa^X_L6uHIqyofd6G1TYiLL000002] Exception in InvocationCallback.failed: javax.ws.rs.ProcessingException: javax.net.ssl.SSLKeyException: Hostname verification failed: HostnameVerifier=weblogic.security.utils.SSLWLSHostnameVerifier, hostname=soa.raastech.com.[[
javax.ws.rs.ProcessingException: javax.net.ssl.SSLKeyException: Hostname verification failed: HostnameVerifier=weblogic.security.utils.SSLWLSHostnameVerifier, hostname=soa.raastech.com.
        at org.glassfish.jersey.client.internal.HttpUrlConnector$3.run(HttpUrlConnector.java:299)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at jersey.repackaged.com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:299)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
        at jersey.repackaged.com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:50)
        at jersey.repackaged.com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:37)
        at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:293)
        at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:178)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:340)
        at org.glassfish.jersey.client.ClientRuntime$3.run(ClientRuntime.java:210)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at weblogic.work.WorkAreaContextWrap.run(WorkAreaContextWrap.java:60)
        at com.bea.alsb.platform.weblogic.WlsWorkManagerServiceImpl$WorkAdapter.run(WlsWorkManagerServiceImpl.java:283)
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:678)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
Caused by: javax.net.ssl.SSLKeyException: Hostname verification failed: HostnameVerifier=weblogic.security.utils.SSLWLSHostnameVerifier, hostname=soa.raastech.com.
        at weblogic.security.SSL.jsseadapter.JaSSLEngine.doPostHandshake(JaSSLEngine.java:686)
        at weblogic.security.SSL.jsseadapter.JaSSLEngine.doAction(JaSSLEngine.java:757)
        at weblogic.security.SSL.jsseadapter.JaSSLEngine.unwrap(JaSSLEngine.java:133)
        at weblogic.socket.JSSEFilterImpl.unwrap(JSSEFilterImpl.java:656)
        at weblogic.socket.JSSEFilterImpl.unwrapAndHandleResults(JSSEFilterImpl.java:553)
        at weblogic.socket.JSSEFilterImpl.doHandshake(JSSEFilterImpl.java:108)
        at weblogic.socket.JSSEFilterImpl.doHandshake(JSSEFilterImpl.java:87)
        at weblogic.socket.JSSESocket.startHandshake(JSSESocket.java:240)
        at weblogic.net.http.HttpsClient.New(HttpsClient.java:566)
        at weblogic.net.http.HttpsClient.New(HttpsClient.java:546)
        at weblogic.net.http.HttpsURLConnection.connect(HttpsURLConnection.java:235)
        at weblogic.net.http.HttpURLConnection.getInputStream(HttpURLConnection.java:685)
        at weblogic.net.http.SOAPHttpsURLConnection.getInputStream(SOAPHttpsURLConnection.java:42)
        at weblogic.net.http.HttpURLConnection.getResponseCode(HttpURLConnection.java:1547)
        at org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:394)
        at org.glassfish.jersey.client.internal.HttpUrlConnector.access$000(HttpUrlConnector.java:96)
        at org.glassfish.jersey.client.internal.HttpUrlConnector$3.run(HttpUrlConnector.java:297)
        ... 27 more

]]

Analysis:

This error occurred when the OSB business service was calling the SOA service through the load balancer. Though the SOA managed servers do not have SSL configured, the load balancer does.

Solution:

  1. Navigate to the OSB managed server's SSL settings (Configuration > SSL > Advanced).
  2. Set Hostname Verification to "None".
  3. Save and activate the changes (or apply the same setting with WLST, as sketched below).
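For completeness, the same setting can be applied with WLST; here is a minimal sketch, assuming the managed server is named osb_server1 and using placeholder connection details:

connect('weblogic', '<password>', 't3://adminhost:7001')
edit()
startEdit()
# Hostname verification is an attribute of the server's SSL MBean
cd('/Servers/osb_server1/SSL/osb_server1')
cmo.setHostnameVerificationIgnored(true)
save()
activate()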
]]>
<![CDATA[ Article: Compute Cloud Performance Showdown ]]> https://chronicler.tech/c/ 5d8a3c390b1b670a1724e16d Tue, 24 Sep 2019 12:12:44 -0400 My article titled Compute Cloud Performance Showdown was just published in the latest issue of UKOUG PTK. PTK, which stands for Pass The Knowledge, was formerly Oracle Scene, and is a publication from the UK Oracle User Group.

The article is also highlighted on the cover of the issue.

We provisioned comparable medium-spec'ed virtual machines from the 5 leading cloud providers and shared the performance results from a series of load tests conducted against the Linux host, Oracle WebLogic Server 12c, and Oracle Database 18c.

The article summarizes our findings, results, and experiences during our effort.

Publication web site: here

Direct link to entire issue: here

]]>
<![CDATA[ Dude, where is my kernel? ]]> https://chronicler.tech/where-is-my-kernel-dude/ 5d878ccd0b1b670a1724e129 Mon, 23 Sep 2019 08:30:00 -0400 I got a little bit lost trying to install the VirtualBox Guest Additions on my freshly installed and upgraded CentOS 7 virtual machine. I tried all the tricks I found, and still the VirtualBox installer wasn't able to find my kernel headers. The secret is easy: don't forget to reboot your system after setup and upgrade. Otherwise, you will have new headers and configuration with the old kernel in flight.

So the right order is:

  1. Install your target system using an ISO, thumbdrive, or network loader
  2. Update your package manager and system to the current state
  3. Reboot to pick up the new kernel
  4. Install the VirtualBox Guest Additions and the rest of your software (see the command sketch below).
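On a CentOS 7 guest, the sequence boils down to a few commands; this is a minimal sketch with the usual Guest Additions build prerequisites (adjust the package list as needed):

# Bring the system, including the kernel, up to date
sudo yum -y update
# Reboot so the running kernel matches the freshly installed headers
sudo reboot
# After the reboot, install the build prerequisites for Guest Additions
sudo yum -y install gcc make perl kernel-devel kernel-headers
# Then mount the Guest Additions ISO and run VBoxLinuxAdditions.run from it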
]]>
<![CDATA[ Force shutdown in stopWebLogic.sh ]]> https://chronicler.tech/force-shutdown-in-stopweblogic-sh/ 5d824bab0b1b670a1724e0f5 Wed, 18 Sep 2019 11:26:36 -0400 The WebLogic stop scripts stopWebLogic.sh and stopManagedWebLogic.sh do not "force" a shutdown when being called. This means that you could be waiting indefinitely until all transactions are completed.

It is probably best to configure these scripts to force=true. This only takes effect when you manually call these scripts, and is not applicable when stopping the managed servers via Node Manager or the WebLogic Admin Console.

1. Run this command on all WebLogic Servers on all environments:

vi $DOMAIN_HOME/bin/stopWebLogic.sh

vi $DOMAIN_HOME/bin/stopManagedWebLogic.sh

2. Replace the following:

OLD:

echo "shutdown('${SERVER_NAME}','Server', ignoreSessions='true')" >>"shutdown.py"

NEW:

echo "shutdown('${SERVER_NAME}','Server', ignoreSessions='true', force='true')" >>"shutdown.py"
]]>
<![CDATA[ Changing the WebLogic password on OEM Agents ]]> https://chronicler.tech/changing-the-oem-agent-password/ 5d7a25ba0b1b670a1724e0b2 Thu, 12 Sep 2019 07:12:42 -0400 This post describes the steps that need to be done on the OEM Agent if the WebLogic password used to connect has been changed. It only needs to be changed once per domain. This same set of instructions can be used if you decide to use an alternate WebLogic account as well.

It is recommended not to use the weblogic account when adding the WebLogic target to Oracle Enterprise Manager (OEM) Cloud Control.

Instead, it is preferred to create a separate WebLogic account, for example oemagent, and assign that user both the Operators and Monitors group. The OEM Agent does not require elevated administrator privileges, and when a problem occurs, having the separate account makes troubleshooting easier.

Changing the WebLogic Username and/or Password on the OEM Agent

  1. Log in to the WebLogic Admin Console for the domain.
  2. Change the password for the oemagent account (or weblogic if you happen to use that).
  3. Log in to the unix host of the OMS server, and run the following commands:
export OMS_HOME=/u01/app/oracle/product/OEM13c/oms

cd $OMS_HOME/bin
    
./emctl login -username=sysman
    
./emctl modify_target -name="/soa_prod_soa_domain/soa_domain" -type="weblogic_domain" -credentials="Username:oemagent;password:welcome1" -on_agent

Here, the target name should reflect the actual target name you find in the OMS Console (e.g., /soa_prod_soa_domain/soa_domain), and the username should be the WebLogic user that the OEM Agent connects with.

]]>
<![CDATA[ Getting continuous "MBean attribute access denied" in SOA logs due to OEM Agent ]]> https://chronicler.tech/getting-continuous-mbean-attribute-access-denied-in-soa-logs/ 5d7a21e30b1b670a1724e069 Thu, 12 Sep 2019 06:58:56 -0400 Recently, we started getting continuous errors every few seconds in our Oracle SOA Suite logs (specifically in the SOA managed server standard out log).

The entire exception is shown here:

<Sep 11, 2019 9:27:41 PM EDT> <Error> <oracle.as.jmx.framework.generic.spi.security.AbstractMBeanSecurityInterceptor> <J2EE JMX-46335> <MBean attribute access denied.
  MBean: oracle.soa.config:name="PayPal",j2eeType=SOAFolder,Application=soa-infra
  Getter for attribute State
  Detail: access denied ("oracle.fabric.permission.CompositePermission" "PayPal" "read")
java.security.AccessControlException: access denied ("oracle.fabric.permission.CompositePermission" "PayPal" "read")
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
        at java.security.AccessController.checkPermission(AccessController.java:884)
        at oracle.security.jps.util.JpsAuth$AuthorizationMechanism$3.checkPermission(JpsAuth.java:527)
        at oracle.security.jps.util.JpsAuth.checkPermission(JpsAuth.java:587)
        at oracle.security.jps.util.JpsAuth.checkPermission(JpsAuth.java:623)
        at oracle.fabric.permission.internal.InternalSOAPermissionCheckHelper$2.run(InternalSOAPermissionCheckHelper.java:204)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at oracle.fabric.permission.internal.InternalSOAPermissionCheckHelper.internalCheckSOAPermission(InternalSOAPermissionCheckHelper.java:202)
        at oracle.fabric.permission.internal.InternalSOAPermissionCheckHelper.checkSOAPermission(InternalSOAPermissionCheckHelper.java:173)
        at oracle.fabric.permission.management.SOAPermissionCheckPluginFactory$SOAMBeanCustomSecurityPlugin.checkGetAttribute(SOAPermissionCheckPluginFactory.java:60)
        at oracle.as.jmx.framework.MBeanCustomSecurityHelper.checkGetAttribute(MBeanCustomSecurityHelper.java:193)
        at oracle.as.jmx.framework.generic.spi.security.AbstractMBeanSecurityInterceptor.checkAttributeAccess(AbstractMBeanSecurityInterceptor.java:269)
        at oracle.as.jmx.framework.generic.spi.security.AbstractMBeanSecurityInterceptor.internalGetAttribute(AbstractMBeanSecurityInterceptor.java:128)
        at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doGetAttribute(AbstractMBeanInterceptor.java:86)
        at oracle.security.jps.ee.jmx.JpsJmxInterceptor$GetAttributeDelegator.delegate(JpsJmxInterceptor.java:634)
        at oracle.security.jps.ee.jmx.JpsJmxInterceptor$3.run(JpsJmxInterceptor.java:540)
        at java.security.AccessController.doPrivileged(Native Method)
        at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:315)
        at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:649)
        at oracle.security.jps.ee.jmx.JpsJmxInterceptor.jpsInternalInvoke(JpsJmxInterceptor.java:558)
        at oracle.security.jps.ee.jmx.JpsJmxInterceptor.internalGetAttribute(JpsJmxInterceptor.java:265)
        at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doGetAttribute(AbstractMBeanInterceptor.java:86)
        at oracle.as.jmx.framework.generic.spi.interceptors.ContextClassLoaderMBeanInterceptor.internalGetAttribute(ContextClassLoaderMBeanInterceptor.java:63)
        at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doGetAttribute(AbstractMBeanInterceptor.java:86)
        at oracle.as.jmx.framework.generic.spi.interceptors.MBeanRestartInterceptor.internalGetAttribute(MBeanRestartInterceptor.java:67)
        at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doGetAttribute(AbstractMBeanInterceptor.java:86)
        at oracle.as.jmx.framework.standardmbeans.spi.OracleStandardEmitterMBean.getAttribute(OracleStandardEmitterMBean.java:631)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$17.run(WLSMBeanServerInterceptorBase.java:466)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.getAttribute(WLSMBeanServerInterceptorBase.java:464)
        at weblogic.management.mbeanservers.internal.JMXContextInterceptor.getAttribute(JMXContextInterceptor.java:165)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$17.run(WLSMBeanServerInterceptorBase.java:466)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.getAttribute(WLSMBeanServerInterceptorBase.java:464)
        at weblogic.management.mbeanservers.internal.SecurityInterceptor.getAttribute(SecurityInterceptor.java:294)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$17.run(WLSMBeanServerInterceptorBase.java:466)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.getAttribute(WLSMBeanServerInterceptorBase.java:464)
        at weblogic.management.mbeanservers.internal.MBeanCICInterceptor.access$101(MBeanCICInterceptor.java:38)
        at weblogic.management.mbeanservers.internal.MBeanCICInterceptor$1.call(MBeanCICInterceptor.java:134)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:284)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:269)
        at weblogic.management.mbeanservers.internal.MBeanCICInterceptor.getAttribute(MBeanCICInterceptor.java:130)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$17.run(WLSMBeanServerInterceptorBase.java:466)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.getAttribute(WLSMBeanServerInterceptorBase.java:464)
        at weblogic.management.mbeanservers.internal.PartitionJMXInterceptor.getAttribute(PartitionJMXInterceptor.java:303)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$17.run(WLSMBeanServerInterceptorBase.java:466)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.getAttribute(WLSMBeanServerInterceptorBase.java:464)
        at weblogic.management.mbeanservers.internal.CallerPartitionContextInterceptor.getAttribute(CallerPartitionContextInterceptor.java:177)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServer.getAttribute(WLSMBeanServer.java:283)
        at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$5$1.run(JMXConnectorSubjectForwarder.java:308)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$5.run(JMXConnectorSubjectForwarder.java:306)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:368)
        at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder.getAttribute(JMXConnectorSubjectForwarder.java:301)
        at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1445)
        at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
        at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1408)
        at javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:639)
        at javax.management.remote.rmi.RMIConnectionImpl_WLSkel.invoke(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:645)
        at weblogic.rmi.internal.BasicServerRef$2.run(BasicServerRef.java:534)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:368)
        at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:163)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:531)
        at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:137)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:617)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:397)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)
>
<Sep 11, 2019 9:27:41 PM EDT> <Warning> <RMI> <BEA-080003> <A RuntimeException was generated by the RMI server: javax.management.remote.rmi.RMIConnectionImpl.getAttribute(Ljavax.management.ObjectName;Ljava.lang.String;Ljavax.security.auth.Subject;)
 javax.management.RuntimeMBeanException: java.lang.SecurityException: MBean attribute access denied.
  MBean: oracle.soa.config:name="PayPal",j2eeType=SOAFolder,Application=soa-infra
  Getter for attribute State
  Detail: access denied ("oracle.fabric.permission.CompositePermission" "PayPal" "read").
javax.management.RuntimeMBeanException: java.lang.SecurityException: MBean attribute access denied.
  MBean: oracle.soa.config:name="PayPal",j2eeType=SOAFolder,Application=soa-infra
  Getter for attribute State
  Detail: access denied ("oracle.fabric.permission.CompositePermission" "PayPal" "read")
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$17.run(WLSMBeanServerInterceptorBase.java:466)
        Truncated. see log file for complete stacktrace
Caused By: java.lang.SecurityException: MBean attribute access denied.
  MBean: oracle.soa.config:name="PayPal",j2eeType=SOAFolder,Application=soa-infra
  Getter for attribute State
  Detail: access denied ("oracle.fabric.permission.CompositePermission" "PayPal" "read")
        at oracle.as.jmx.framework.generic.spi.security.AbstractMBeanSecurityInterceptor.checkAttributeAccess(AbstractMBeanSecurityInterceptor.java:312)
        at oracle.as.jmx.framework.generic.spi.security.AbstractMBeanSecurityInterceptor.internalGetAttribute(AbstractMBeanSecurityInterceptor.java:128)
        at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doGetAttribute(AbstractMBeanInterceptor.java:86)
        at oracle.security.jps.ee.jmx.JpsJmxInterceptor$GetAttributeDelegator.delegate(JpsJmxInterceptor.java:634)
        at oracle.security.jps.ee.jmx.JpsJmxInterceptor$3.run(JpsJmxInterceptor.java:540)
        Truncated. see log file for complete stacktrace
Caused By: java.security.AccessControlException: access denied ("oracle.fabric.permission.CompositePermission" "PayPal" "read")
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
        at java.security.AccessController.checkPermission(AccessController.java:884)
        at oracle.security.jps.util.JpsAuth$AuthorizationMechanism$3.checkPermission(JpsAuth.java:527)
        at oracle.security.jps.util.JpsAuth.checkPermission(JpsAuth.java:587)
        at oracle.security.jps.util.JpsAuth.checkPermission(JpsAuth.java:623)
        Truncated. see log file for complete stacktrace
>

By observing the log, you will notice the following error in specific:

access denied ("oracle.fabric.permission.CompositePermission" "PayPal" "read")

Here, "PayPal" is the name of one of our SOA partitions. The error repeats continuously for every partition in our domain, so some process is trying to read composite information from all of our partitions.

Analysis

The culprit turned out to be the Oracle Enterprise Manager (OEM) Cloud Control 13c Agent.

The OEM Agent is configured against the WebLogic domain not with the weblogic account, but with a custom account called oemagent. This account was given the Operators group. It appears that the OEM Agent needs more privileges than that.

Resolution

  1. Shut down and clear the state of the agent.
emctl stop agent
emctl clearstate agent

2. Assign the Administrators group to the oemagent account (note that oemagent is a weblogic account, and this account is used by the agent to log in to the WebLogic target).

3. Restart the agent.

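Putting it back together, the restart and a quick verification look like this (same agent-side emctl as in step 1):

emctl start agent
emctl status agent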
]]>
<![CDATA[ WebLogic AdminServer startup hanging at "Initializing self-tuning thread pool" ]]> https://chronicler.tech/weblogic-adminserver-startup-hanging-at-initializing-self-tuning-thread-pool/ 5d63c2cd0b1b670a1724dff7 Mon, 26 Aug 2019 07:53:30 -0400 Starting up the AdminServer on Oracle WebLogic Server 11g hung indefinitely at the log entry "Initializing self-tuning thread pool". There was planned maintenance on the network, so we suspect it may (or may not) have been related to that.

This is the snippet from the output of startWebLogic.sh:

.
.
.
<Aug 24, 2019 3:55:50 AM EDT> <Notice> <WebLogicServer> <BEA-000395> <Following extensions directory contents added to the end of the classpath:
/u01/app/oracle/admin/aserver/soa_domain/lib/CSFUtil.jar>

<Aug 24, 2019 3:55:51 AM EDT> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 24.161-b13 from Oracle Corporation>

<Aug 24, 2019 3:55:52 AM EDT> <Info> <Management> <BEA-141107> <Version: WebLogic Server Temporary Patch for BUG29800003 Mon May 20 03:48:59 PDT 2019
WebLogic Server 10.3.6.0.190416 PSU Patch for BUG29204678 Mon Feb  4 02:06:33 PST 2019
WebLogic Server Temporary Patch for BUG14339868 Thu Jun 27 00:39:43 CDT 2013
WebLogic Server 10.3.6.0  Tue Nov 15 08:52:36 PST 2011 1441050 >

<Aug 24, 2019 3:55:57 AM EDT> <Info> <Management> <BEA-141227> <Making a backup copy of the configuration at /u01/app/oracle/admin/aserver/soa_domain/config-original.jar.>

<Aug 24, 2019 3:55:58 AM EDT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>

<Aug 24, 2019 3:55:58 AM EDT> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool>

At that last line, it would hang. Forever.

No additional information is provided.

We checked and confirmed all of the following; none helped with the issue:

  • Confirmed using the nc command that connectivity to other hosts, such as the database server and second administration server was fine.
  • Confirmed no OS load issues, neither CPU nor I/O, by checking top, vmstat, and iostat.
  • Deleted boot.properties and started the AdminServer directly using startWebLogic.sh.
  • Verified that "-Djava.security.egd=file:///dev/./urandom" is already set.
  • Since the binaries resided on NFS, checked and found no NFS errors in the OS logs and copied a 300 MB file in and out of NFS to local storage (took under 1 second).
  • Cleared the ~/tmp and ~/cache folders on the AdminServer.
  • Cleared the entire ~/data folder on the AdminServer.
  • Added "securerandom.source=file:/tmp/big.random.file" to java.security.
  • Changed the listen address "<listen-address>localhost</listen-address>" in config.xml to use localhost instead of the server hostname.
  • Removed the Oracle Unified Directory external authentication provider from config.xml.

As noted, there was some major planned network maintenance going on, so we anticipated issues related to that.

What was the issue in the end?

A problem with the DNS server.

Once that was resolved, everything was fine.

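For what it's worth, a quick DNS sanity check from the host is the first thing we would run the next time a startup hangs at that line (a sketch; substitute your own AdminServer and database hostnames):

# Verify forward resolution of the local host and a remote host
nslookup $(hostname -f)
getent hosts soahost1.example.com

# Time the lookup; a slow or timing-out response points at the DNS server
time nslookup soahost1.example.com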
]]>
<![CDATA[ OEM 13c custom certificate drama ]]> https://chronicler.tech/oem13ccustomcertificate/ 5d459e680b1b670a1724dc65 Sat, 24 Aug 2019 08:30:38 -0400 For years I have seen so much questionable syntax, partial automation support, and non-human design in Oracle products that I can take almost anything with a perfect poker face. Still, Oracle always finds a way to add a little extra fun, for example with the long-lasting Oracle wallet compatibility saga. In general, an Oracle Wallet is nothing more than a PKCS#12 keystore with a proprietary implementation of passwordless access to the keys and certificates, yet Oracle customers have struggled with wallet compatibility for years.

A quite recent example: the Oracle Fusion Middleware 12.2.1.3 infrastructure has no issues creating a wallet from a PKCS#12 file or importing an OpenSSL PKCS#12 keystore into a new wallet. Inside the platform, all products can use it with no issues. The situation is quite the opposite for the previous release: Oracle HTTP Server 12.1.3 doesn't tolerate anything but "true born" Oracle Wallets. If you did a wallet conversion or private key import of any kind, OHS wouldn't accept it, although any other component could open such a wallet in auto-login mode.

In a strange synchronicity, I ran into the same issue on multiple projects at about the same time and tried multiple approaches, so here is the most painless one for OEM 13c or any other OHS 12.1.3 installation.

The bare minimum for Oracle Wallet manipulations is an Oracle JDK and the Oracle PKI utility 12.2.1.3. If you have no 12.2.1.3 products installed, download the WebLogic Server proxy plugin 12.2.1.3 binaries. The single archive contains plugins for all platforms and HTTP servers. It's bulky, but it's still only half the size of the WebLogic installer and only a fraction of the Oracle Fusion Middleware Infrastructure binaries. What's even better: the WebLogic proxy plugin does not require any installation and has the Oracle PKI tool ready to use.

Let's check that you have everything:

  • Oracle JDK; I used JDK 8, but higher versions should work too.
  • Oracle PKI 12.2.1.3 is available. In the example below, java and orapki are in the PATH, but you can use the $JAVA_HOME and $ORACLE_HOME variables instead.
  • Your certificate file, key file, and trust chain certificates are in the same place.
# Prepare trust chain 
$ cat sub-ca.cer root-ca.cer >ca-chain.cer
# Create new PKCS file 
$ openssl pkcs12 -export -in em13host.domain.com.cer -inkey em13host.domain.com.key  -out ewallet.p12 -certfile ca-chain.cer
# Create  new auto login only wallet
$ orapki wallet create -wallet oem01/ -auto_login_only
# import PKCS12 storage
$ orapki wallet import_pkcs12 -wallet oem01/ -auto_login_only  -pkcs12file ewallet.p12 -pkcs12pwd welcome1
# Check the new wallet status 
$ orapki wallet display -wallet oem01/

Adjust the file names as needed; as a result, you should have an oem01/ wallet folder with a single cwallet.sso file in it. Now you can run the OEM configuration commands and use this wallet for the console, server, and agent configurations with no issues.

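If the import step complains, it may be worth sanity-checking the PKCS#12 file itself before blaming the wallet tooling (a small sketch; it prompts for the export password):

# Verify the PKCS#12 file is readable and inspect its contents
openssl pkcs12 -in ewallet.p12 -info -noout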

Image source: https://www.flickr.com/photos/sidelong/3878741556

]]>
<![CDATA[ Customizing Oracle Enterprise Manager 12c/13c email notifications ]]> https://chronicler.tech/untitled-2/ 5d5ec4110b1b670a1724de94 Thu, 22 Aug 2019 16:48:55 -0400 The email alerts generated by Oracle Enterprise Manager Cloud Control are not the most visually appealing: the formatting is not easy on the eyes, and they include a slew of unnecessary information and clutter.

To customize email notifications in OEM Cloud Control 12c/13c, log in as SYSMAN and navigate to Setup > Notifications > E-Mail Customizations. Below is a screenshot of what this page may look like:

Here is what the default notification for a Metric Alert looks like:

As you can see, the information is all there, but it is just not visually appealing. Unfortunately, HTML formatting is not allowed, so the options for customization are limited.

Here is the default (i.e., awful) email template for Metric Alerts:

-- Enterprise Manager default template

-- if the email is for the status of corrective action
-- show the details of the execution of the corrective action
[IF NOTIF_TYPE EQ "NOTIF_CA"]
    [CA_JOB_NAME_LABEL]=[CA_JOB_NAME]
    [CA_JOB_OWNER_LABEL]=[CA_JOB_OWNER]
    [CA_JOB_STATUS_LABEL]=[CA_JOB_STATUS]
    [CA_JOB_STEP_OUTPUT_LABEL]=[CA_JOB_STEP_OUTPUT]
[ENDIF]

-- Source object name is the entity raising the issue.
[IF SOURCE_OBJ_NAME NOT NULL]
    [SOURCE_OBJ_TYPE] [NAME_LABEL]=[SOURCE_OBJ_NAME]
    [SOURCE_OBJ_TYPE] [OWNER_LABEL]=[SOURCE_OBJ_OWNER]
[ENDIF]
[IF SOURCE_OBJ_SUB_TYPE NOT NULL]
    [SOURCE_OBJ_TYPE] [TYPE_LABEL]=[SOURCE_OBJ_SUB_TYPE]
[ENDIF]
[IF HOST_NAME NOT NULL]
    [HOST_NAME_LABEL]=[HOST_NAME]
[ENDIF]
-- Target name links to the respective target home page 
-- in Enterprise Manager console
[IF TARGET_NAME NOT NULL]
    [TARGET_TYPE_LABEL]=[TARGET_TYPE]
    [TARGET_NAME_LABEL]=[TARGET_NAME]
[ENDIF]
[IF CATEGORIES NOT NULL]
    [CATEGORIES_LABEL]=[CATEGORIES]
[ENDIF]
[MESSAGE_LABEL]=[MESSAGE]
[IF ACTION_MSG NOT NULL]
    [ACTION_MSG_LABEL]=[ACTION_MSG]
[ENDIF]
[SEVERITY_LABEL]=[SEVERITY]
[EVENT_REPORTED_TIME_LABEL]=[EVENT_REPORTED_TIME]

[IF TARGET_LIFECYCLE_STATUS NOT NULL]
    [TARGET_LIFECYCLE_STATUS_LABEL]=[TARGET_LIFECYCLE_STATUS]
[ENDIF]
[USER_DEFINED_TARGET_PROP]

[IF ASSOC_INCIDENT_ID NOT NULL]
    [ASSOC_INCIDENT_ID_LABEL]=[ASSOC_INCIDENT_ID]
    [ASSOC_INCIDENT_STATUS_LABEL]=[ASSOC_INCIDENT_STATUS]
    [ASSOC_INCIDENT_OWNER_LABEL]=[ASSOC_INCIDENT_OWNER]
    [ASSOC_INCIDENT_ACKNOWLEDGED_BY_OWNER_LABEL]=[ASSOC_INCIDENT_ACKNOWLEDGED_BY_OWNER]
    [ASSOC_INCIDENT_PRIORITY_LABEL]=[ASSOC_INCIDENT_PRIORITY]
    [ASSOC_INCIDENT_ESCALATION_LEVEL_LABEL]=[ASSOC_INCIDENT_ESCALATION_LEVEL]
[ENDIF]
[EVENT_TYPE_LABEL]=[EVENT_TYPE]
[EVENT_NAME_LABEL]=[EVENT_NAME]
-- if it is a repeat email, show the repeat count
[IF NOTIF_TYPE EQ "NOTIF_REPEAT"]
    [REPEAT_COUNT_LABEL]=[REPEAT_COUNT]
[ENDIF]
-- Event Dedup related Attributes
[IF TOTAL_OCCURRENCE_COUNT NOT NULL]
    [TOTAL_OCCURRENCE_COUNT_LABEL]=[TOTAL_OCCURRENCE_COUNT]
[ENDIF]
[IF CURRENT_OCCURRENCE_COUNT NOT NULL]
    [CURRENT_OCCURRENCE_COUNT_LABEL]=[CURRENT_OCCURRENCE_COUNT]
[ENDIF]
[IF CURRENT_FIRST_OCCUR_DATE NOT NULL]
    [CURRENT_FIRST_OCCUR_DATE_LABEL]=[CURRENT_FIRST_OCCUR_DATE]
[ENDIF]
[IF CURRENT_LAST_OCCUR_DATE NOT NULL]
    [CURRENT_LAST_OCCUR_DATE_LABEL]=[CURRENT_LAST_OCCUR_DATE]
[ENDIF]
[EVENT_TYPE_ATTRS]

[IF RCA_STATUS NOT NULL]
    [RCA_STATUS_LABEL]=[RCA_STATUS]
[ENDIF]

-- Root Cause Analysis details shows up when available. This is 
-- normally applies to availability alerts for service targets
[RCA_DETAILS]

[RULE_NAME_LABEL]=[RULE_NAME]
[RULE_OWNER_LABEL]=[RULE_OWNER]
-- Check if any updates
[IF UPDATES NOT NULL]
[UPDATES_LABEL]:[UPDATES]
[ENDIF]

Updating the Subject Template

The default email subject looks like this:

The default template for the email Subject is:

-- Enterprise Manager Default Event Template

-- Subject of an e-mail is rendered in one line.
-- The resulting text from the following logic will be concatenated together into one line.

-- if this is a repeat email
-- show the repeat count
[IF NOTIF_TYPE EQ "NOTIF_REPEAT"]
    \[[REPEAT_LABEL] #[REPEAT_COUNT]\]
[ENDIF]
[EM_EVENT_PREFIX]: 

-- if it is an email for success or failure of corrective action 
-- show the name and execution status of the corrective action
[IF NOTIF_TYPE EQ "NOTIF_CA"]
    CA:[CA_JOB_NAME]:[CA_JOB_STATUS]
[ELSE] -- Regular email for metric alert
    [SEVERITY]:[TARGET_NAME]
[ENDIF]
-- Show message if available
- [MESSAGE]

After applying my custom template:

[IF NOTIF_TYPE EQ "NOTIF_REPEAT"]
    \[[REPEAT_LABEL] #[REPEAT_COUNT]\]
[ENDIF]
\[OEM\]

[IF NOTIF_TYPE EQ "NOTIF_CA"]
    CA:[CA_JOB_NAME]:[CA_JOB_STATUS]
[ELSE]
    \[[SEVERITY]\] [TARGET_NAME]
[ENDIF]
: [MESSAGE]

The custom email subject now looks like this:

In my opinion, these subtle little changes make the emails easier to filter visually, particularly as tens or hundreds of them come in.

Custom Email Body #1

I played around with a few custom templates, updating the original template to look a little more streamlined:

The template for this custom Metric Alert notification is:

-- Enterprise Manager custom template

[IF HOST_NAME NOT NULL]
    [HOST_NAME_LABEL]
    [HOST_NAME]
    &nbsp;
[ENDIF]

-- Target name links to the respective target home page 
-- in Enterprise Manager console
[IF TARGET_NAME NOT NULL]
    [TARGET_TYPE_LABEL]
    [TARGET_TYPE]
    &nbsp;

    [TARGET_NAME_LABEL]
    [TARGET_NAME]
    &nbsp;
[ENDIF]

[IF CATEGORIES NOT NULL]
    [CATEGORIES_LABEL]
    [CATEGORIES]
    &nbsp;
[ENDIF]

[MESSAGE_LABEL]
[MESSAGE]
&nbsp;

[SEVERITY_LABEL]
[SEVERITY]
&nbsp;

[EVENT_REPORTED_TIME_LABEL]
[EVENT_REPORTED_TIME]
&nbsp;

[IF ASSOC_INCIDENT_ID NOT NULL]
    [ASSOC_INCIDENT_ID_LABEL]
    [ASSOC_INCIDENT_ID]
    &nbsp;

    [ASSOC_INCIDENT_STATUS_LABEL]
    [ASSOC_INCIDENT_STATUS]
    &nbsp;
[ENDIF]

[EVENT_TYPE_LABEL]
[EVENT_TYPE]
&nbsp;

[EVENT_NAME_LABEL]
[EVENT_NAME]
&nbsp;

-- if it is a repeat email, show the repeat count
[IF NOTIF_TYPE EQ "NOTIF_REPEAT"]
    [REPEAT_COUNT_LABEL]
    [REPEAT_COUNT]
    &nbsp;
[ENDIF]

-- Event Dedup related Attributes
[IF TOTAL_OCCURRENCE_COUNT NOT NULL]
    [TOTAL_OCCURRENCE_COUNT_LABEL]
    [TOTAL_OCCURRENCE_COUNT]
    &nbsp;
[ENDIF]

[IF CURRENT_OCCURRENCE_COUNT NOT NULL]
    [CURRENT_OCCURRENCE_COUNT_LABEL]
    [CURRENT_OCCURRENCE_COUNT]
    &nbsp;
[ENDIF]

[IF CURRENT_FIRST_OCCUR_DATE NOT NULL]
    [CURRENT_FIRST_OCCUR_DATE_LABEL]
    [CURRENT_FIRST_OCCUR_DATE]
    &nbsp;
[ENDIF]

[IF CURRENT_LAST_OCCUR_DATE NOT NULL]
    [CURRENT_LAST_OCCUR_DATE_LABEL]
    [CURRENT_LAST_OCCUR_DATE]
    &nbsp;
[ENDIF]

[RULE_NAME_LABEL]
[RULE_NAME]
&nbsp;

[EVENT_TYPE_ATTRS]
&nbsp;

Custom Email Body #2

I prefer an even further reduced email notification as follows:

The template for this custom Metric Alert notification is:

-- Enterprise Manager custom template

\[[SEVERITY]\] [EVENT_REPORTED_TIME]
&nbsp;

-- Target name links to the respective target home page 
-- in Enterprise Manager console
[IF TARGET_NAME NOT NULL]
    \[[TARGET_TYPE]\] [TARGET_NAME]
    &nbsp;
[ENDIF]

\[[EVENT_NAME]\] [MESSAGE]
&nbsp;

[IF ASSOC_INCIDENT_ID NOT NULL]
    \[[ASSOC_INCIDENT_ID_LABEL]\] [ASSOC_INCIDENT_ID]
    &nbsp;
[ENDIF]

-- if it is a repeat email, show the repeat count
[IF NOTIF_TYPE EQ "NOTIF_REPEAT"]
    \[[REPEAT_COUNT_LABEL]\] [REPEAT_COUNT]
    &nbsp;
[ENDIF]

-- Event Dedup related Attributes
[IF TOTAL_OCCURRENCE_COUNT NOT NULL]
    \[[TOTAL_OCCURRENCE_COUNT_LABEL]\] [TOTAL_OCCURRENCE_COUNT]
    &nbsp;
[ENDIF]

[IF CURRENT_OCCURRENCE_COUNT NOT NULL]
    \[[CURRENT_OCCURRENCE_COUNT_LABEL]\] [CURRENT_OCCURRENCE_COUNT]
    &nbsp;
[ENDIF]

[IF CURRENT_FIRST_OCCUR_DATE NOT NULL]
    \[[CURRENT_FIRST_OCCUR_DATE_LABEL]\] [CURRENT_FIRST_OCCUR_DATE]
    &nbsp;
[ENDIF]

[IF CURRENT_LAST_OCCUR_DATE NOT NULL]
    \[[CURRENT_LAST_OCCUR_DATE_LABEL]\] [CURRENT_LAST_OCCUR_DATE]
    &nbsp;
[ENDIF]

\[Rule Name\] [RULE_NAME]
&nbsp;

Obviously, you may need to update the templates for various other event types as well (e.g., Target Availability Events).

References

  • Enterprise Manager Cloud Control Administrator's Guide 13.2: Email Customization
https://docs.oracle.com/cd/E73210_01/EMADM/GUID-B48F6A84-EE89-498D-94E0-5DE1E7A0CFBC.htm#EMADM9092
]]>
<![CDATA[ Unresolved IOException when starting up Oracle JDeveloper 12c ]]> https://chronicler.tech/untitled/ 5d55604f0b1b670a1724de54 Thu, 15 Aug 2019 09:47:56 -0400 Sadly, a recent (and otherwise successful) installation of Oracle JDeveloper 12c (12.2.1.3) failed to start up on a desktop virtual machine, returning the following error:

java.version=1.8.0_152
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_US

!ENTRY org.eclipse.osgi 4 2019-08-09 15:12:25.922
!MESSAGE Error reading configuration: Unable to create lock manager.
!STACK 0
java.io.IOException: Unable to create lock manager.
        at org.eclipse.osgi.storagemanager.StorageManager.open(StorageManager.java:699)

A user was created for me on the Windows 10 virtual machine and granted administrator privilege. It is unclear what policies or lockdowns were performed on this desktop VM.

This error is clearly an IO exception and likely related to some folder permission outside of the installation directory.

I am still unable to get JDeveloper to start.

Failed attempts included:

  • Disabling the local virus scanner.
  • Ensuring the c:\Oracle folder and all subfolders and files were explicitly granted full permissions.
  • Ensuring that the Command Prompt window was started as administrator prior to installation.
  • Right-clicking and selecting Run as administrator on both the jdeveloper.exe and jdev.exe binaries.
]]>
<![CDATA[ Running RDA for Oracle Fusion Middleware 12c ]]> https://chronicler.tech/running-rda-for-oracle-fusion-middleware-12c/ 5d51722c0b1b670a1724dddc Mon, 12 Aug 2019 20:52:58 -0400 Never ran an RDA (Remote Diagnostic Agent) before? It is a way to collect comprehensive diagnostic information to provide to Oracle Support.

Some Oracle Fusion Middleware installations already have RDA installed under the ~/oracle_common or ~/utils folder. If not or if you have problems running the installed RDA, you may need to download, install, and run it on your own.

Instructions

1. Download RDA from the My Oracle Support via Oracle Patch 21769913.

2. Depending on your operating system and the latest version of RDA, the file may be named something like p21769913_1931982_Linux-x86-64.zip.

3. Extract the RDA software:

cd /u01/app/oracle/middleware
unzip /tmp/p21769913_1931982_Linux-x86-64.zip
mv readme.txt rda
cd rda

4. Running the RDA for SOA Suite:

./rda.sh -s soa_issue -p OFM_SoaMax

--(follow the prompts)
--(enter /u01/app/oracle/middleware for middleware home)
--(enter /u01/app/oracle/middleware/soa for product oracle home)

gtar -czvf soa_issue.tgz soa_issue

5. Running the RDA for OSB:

./rda.sh -s osb_issue -p OFM_OsbMax

--(follow the prompts)
--(enter /u01/app/oracle/middleware for middleware home)
--(enter /u01/app/oracle/middleware/osb for product oracle home)

gtar -czvf osb_issue.tgz osb_issue

6. Send the zipped file to Oracle Support.

]]>
<![CDATA[ Inspect 'em_prereqs_results.xml' for OEM Agent installation errors ]]> https://chronicler.tech/asdf/ 5d4cd1f10b1b670a1724dd9a Thu, 08 Aug 2019 22:04:13 -0400 In rare cases, the Oracle Enterprise Manager Cloud Control Agent may fail during installation, and the cause of the failure can usually be found in one of the .err, .log, or .out files.

In one particular instance, the installation of an OEM Agent 13.2 failed, but there was no indication in any of the logs as to why it was not successful.

Try looking in the em_prereqs_results.xml log.

This file contains the results of the prerequisite checks, and the <RESULT VALUE> attribute pinpoints whether each check "Passed" or "Failed".

An example of where this file is located:

/u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/prereqlogs/OraInstall2019-08-08_09-01-27PM/results/em_prereqs_results.xml

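Since the file is fairly verbose, a quick way to jump straight to the failed checks is to grep for the result attribute (a sketch; adjust the path to match your own OraInstall timestamp):

grep -n 'RESULT VALUE="Failed' /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/prereqlogs/OraInstall2019-08-08_09-01-27PM/results/em_prereqs_results.xml
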
An example of a successful prerequisite entry in this log:

<PREREQUISITE NAME="CheckHostName" EXTERNALNAME="Is the host name valid?" EXTERNALNAMEID="S_CHECK_HOSTNAME@oracle.sysman.install.prereq.resource.PrereqRes" SEVERITY="Warning">

  <DESCRIPTION TEXT="This is a prerequisite condition to test whether the host name where the installation will be done, is correct or not." TEXTID="S_CHECK_HOSTNAME_DESCRIPTION@oracle.sysman.install.prereq.resource.PrereqRes"/>

  <RULESETREF NAME="HostCheck" RULE="CheckHost" FILE="agent_completeRefhost.xml" RESULTS_FILE="install_rule_results.xml"/>

  <PROBLEM TEXT="The host name specified for the installation or retrieved from the system is incorrect." TEXTID="S_CHECK_HOSTNAME_ERROR@oracle.sysman.install.prereq.resource.PrereqRes"/>

  <RECOMMENDATION TEXT="Make sure your host name should meet required criteria." TEXTID="S_CHECK_HOSTNAME_ACTION@oracle.sysman.install.prereq.resource.PrereqRes"/>

  <RESULT VALUE="Passed" TEXT="Expected result: Fully qualified domain name, for example foo.mydomain.com
Actual Result: soahost.raastech.com. Ensure that you provide a fully qualified domain name.
Check complete. The overall result of this check is: Passed
"/>

</PREREQUISITE>

An example of a failed prerequisite entry in this log:

<PREREQUISITE NAME="CertifiedVersions_agent" EXTERNALNAME="Is the software certified on the current operating system?" EXTERNALNAMEID="S_CHECK_CERTIFIED_VERSIONS@oracle.sysman.install.prereq.resource.PrereqRes" SEVERITY="Warning">

  <DESCRIPTION TEXT="This is a prerequisite condition to test whether the Oracle software is certified on the current O/S or not." TEXTID="S_CHECK_CERTIFIED_VERSIONS_DESCRIPTION@oracle.sysman.install.prereq.resource.PrereqRes"/>

  <RULESETREF NAME="OSChecks" RULE="CertifiedVersions" FILE="agent_completeRefhost.xml" RESULTS_FILE="install_rule_results.xml"/>

  <PROBLEM TEXT="This Oracle software is not certified on the current O/S." TEXTID="S_CHECK_CERTIFIED_VERSIONS_ERROR@oracle.sysman.install.prereq.resource.PrereqRes"/>

  <RECOMMENDATION TEXT="Make sure you are installing the software on a certified platform." TEXTID="S_CHECK_CERTIFIED_VERSIONS_ACTION@oracle.sysman.install.prereq.resource.PrereqRes"/>

  <RESULT VALUE="Failed <<<<" TEXT="Expected result: One of enterprise-6,oracle-8,oracle-7,SuSE-11,SuSE-12,redhat-6,redhat-7
Actual Result: redhat-5.9
Check complete. The overall result of this check is: Failed <<<<
"/>
  
</PREREQUISITE>
]]>
<![CDATA[ Custom .bash_profile for Oracle Fusion Middleware ]]> https://chronicler.tech/my-custom-bash_profile-scripts-for-oracle-fusion-middleware/ 5d40124e0b1b670a1724db4b Tue, 30 Jul 2019 06:22:39 -0400 I have created some crude custom .bash_profile scripts that I use to make things easy for me after I install some of the various Oracle Fusion Middleware products.

Oracle HTTP Server (OHS) 12c

export PS1="\u@\h:\$PWD> "
export DISPLAY=:1
export MW_HOME=/u01/oracle/webtier
export DOMAIN=ohs_domain
export DOMAIN_HOME=${MW_HOME}/user_projects/domains/${DOMAIN}
export JAVA_HOME=/u01/java
export ORACLE_HOME=${MW_HOME}/ohs
export WL_HOME=${MW_HOME}/wlserver
export PATH=$MW_HOME/oracle_common/common/bin:$DOMAIN_HOME/bin:$JAVA_HOME/bin:$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin

alias info="source ~/.bash_profile"

echo ""
echo "+----------------------------------------+"
echo "| SHORTCUTS                              |"
echo "+----------------------------------------+"
echo "info"
echo ""

echo ""
echo "+----------------------------------------+"
echo "| OHS START                              |"
echo "+----------------------------------------+"
echo "nohup \$DOMAIN_HOME/bin/startNodeManager.sh >> \$DOMAIN_HOME/bin/NodeManager.out 2>&1 &"
echo "\$DOMAIN_HOME/bin/startComponent.sh ohs1 showErrorStack"

echo ""
echo "+----------------------------------------+"
echo "| OHS STOP                               |"
echo "+----------------------------------------+"
echo "\$DOMAIN_HOME/bin/stopComponent.sh ohs1"
echo "\$DOMAIN_HOME/bin/stopNodeManager.sh"

echo ""

Oracle SOA Suite 12c

export PS1="\u@\h:\$PWD> "
export DISPLAY=:1
export MW_HOME=/u01/oracle/middleware
export DOMAIN=soa_domain
export DOMAIN_HOME=${MW_HOME}/user_projects/domains/${DOMAIN}
export JAVA_HOME=/u01/java
export PATH=$DOMAIN_HOME/bin:$JAVA_HOME/bin:$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin

alias info="source ~/.bash_profile"

echo ""
echo "+----------------------------------------+"
echo "| SHORTCUTS                              |"
echo "+----------------------------------------+"
echo "info"
echo ""

echo ""
echo "+----------------------------------------+"
echo "| SOA START                              |"
echo "+----------------------------------------+"
echo "nohup \$DOMAIN_HOME/bin/startNodeManager.sh >> \$DOMAIN_HOME/bin/NodeManager.out 2>&1 &"
echo "nohup \$DOMAIN_HOME/bin/startWebLogic.sh >> \$DOMAIN_HOME/servers/AdminServer/logs/AdminServer.out 2>&1 &"
echo "nohup \$DOMAIN_HOME/bin/startManagedWebLogic.sh bam_server1 http://localhost:7001 >> \$DOMAIN_HOME/servers/bam_server1/logs/bam_server1.out 2>&1 &"
echo "nohup \$DOMAIN_HOME/bin/startManagedWebLogic.sh ess_server1 http://localhost:7001 >> \$DOMAIN_HOME/servers/ess_server1/logs/ess_server1.out 2>&1 &"
echo "nohup \$DOMAIN_HOME/bin/startManagedWebLogic.sh soa_server1 http://localhost:7001 >> \$DOMAIN_HOME/servers/soa_server1/logs/soa_server1.out 2>&1 &"
echo "nohup \$DOMAIN_HOME/bin/startManagedWebLogic.sh ums_server1 http://localhost:7001 >> \$DOMAIN_HOME/servers/ums_server1/logs/ums_server1.out 2>&1 &"

echo ""
echo "+----------------------------------------+"
echo "| SOA STOP                               |"
echo "+----------------------------------------+"
echo "\$DOMAIN_HOME/bin/stopManagedWebLogic.sh ums_server1 t3://localhost:7001"
echo "\$DOMAIN_HOME/bin/stopManagedWebLogic.sh soa_server1 t3://localhost:7001"
echo "\$DOMAIN_HOME/bin/stopManagedWebLogic.sh ess_server1 t3://localhost:7001"
echo "\$DOMAIN_HOME/bin/stopManagedWebLogic.sh bam_server1 t3://localhost:7001"
echo "\$DOMAIN_HOME/bin/stopWebLogic.sh"
echo "\$DOMAIN_HOME/bin/stopNodeManager.sh"

echo ""
echo "+----------------------------------------+"
echo "| SOA STATUS                             |"
echo "+----------------------------------------+"
echo "ps -ef | grep -v grep | grep weblogic.NodeManager"
echo "ps -ef | grep -v grep | grep AdminServer"
echo "ps -ef | grep -v grep | grep soa_server1"

echo ""
echo "+----------------------------------------+"
echo "| SOA LOGS                               |"
echo "+----------------------------------------+"
echo ""
echo "tail -f \$DOMAIN_HOME/bin/NodeManager.out"
echo "tail -f \$DOMAIN_HOME/servers/AdminServer/logs/AdminServer.out"
echo "tail -f \$DOMAIN_HOME/servers/AdminServer/logs/AdminServer.log"
echo "tail -f \$DOMAIN_HOME/servers/bam_server1/logs/bam_server1.out"
echo "tail -f \$DOMAIN_HOME/servers/bam_server1/logs/bam_server1.log"
echo "tail -f \$DOMAIN_HOME/servers/ess_server1/logs/ess_server1.out"
echo "tail -f \$DOMAIN_HOME/servers/ess_server1/logs/ess_server1.log"
echo "tail -f \$DOMAIN_HOME/servers/soa_server1/logs/soa_server1.out"
echo "tail -f \$DOMAIN_HOME/servers/soa_server1/logs/soa_server1.log"
echo "tail -f \$DOMAIN_HOME/servers/ums_server1/logs/ums_server1.out"
echo "tail -f \$DOMAIN_HOME/servers/ums_server1/logs/ums_server1.log"

echo ""

Oracle Enterprise Manager (OEM) Agent 13c

export PS1="\u@\h:\$PWD> "
export DISPLAY=:1

export AGENT_HOME=/u01/oracle/agent13c/agent_13.3.0.0.0
export PATH=$AGENT_HOME/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin

alias info="source ~/.bash_profile"

echo ""
echo "+----------------------------------------+"
echo "| SHORTCUTS                              |"
echo "+----------------------------------------+"
echo "info"

echo ""
echo "+----------------------------------------+"
echo "| AGENT START                            |"
echo "+----------------------------------------+"
echo "\$AGENT_HOME/bin/emctl start agent"

echo ""
echo "+----------------------------------------+"
echo "| AGENT STOP                             |"
echo "+----------------------------------------+"
echo "\$AGENT_HOME/bin/emctl stop agent"

echo "+----------------------------------------+"
echo "| AGENT STATUS                           |"
echo "+----------------------------------------+"
echo "\$AGENT_HOME/bin/emctl status agent"

echo ""

Oracle Enterprise Manager (OEM) Cloud Control 13c

export PS1="\u@\h:\$PWD> "
export DISPLAY=:1
export MW_HOME=/u01/app/oracle/middleware
export AGENT_HOME=/u01/app/oracle/agent/agent_13.3.0.0.0
export PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin:$MW_HOME/bin:$AGENT_HOME/bin

alias info="source ~/.bash_profile"

echo ""
echo "+----------------------------------------+"
echo "| SHORTCUTS                              |"
echo "+----------------------------------------+"
echo "info"
echo ""

echo ""
echo "+----------------------------------------+"
echo "| OMS START                              |"
echo "+----------------------------------------+"
echo "\$MW_HOME/bin/emctl start oms"
echo "\$AGENT_HOME/bin/emctl start agent"

echo ""
echo "+----------------------------------------+"
echo "| OMS STOP                               |"
echo "+----------------------------------------+"
echo "\$MW_HOME/bin/emctl stop oms"
echo "\$AGENT_HOME/bin/emctl stop agent"

echo ""
echo "+----------------------------------------+"
echo "| OMS STATUS                             |"
echo "+----------------------------------------+"
echo "\$MW_HOME/bin/emctl status oms -detail"
echo "\$AGENT_HOME/bin/emctl status agent"

echo ""
echo "+----------------------------------------+"
echo "| AGENT DOWNLOAD                         |"
echo "+----------------------------------------+"
echo "emcli login -username=sysman"
echo "emcli sync"
echo "emcli get_supported_platforms"
echo "emcli get_agentimage -destination=/u01/software/agentinstaller -platform=\"Linux x86-64\" -version=\"13.3.0.0.0\""
echo ""

Oracle Database 12c

export PS1="\u@\h:\$PWD> "
export DISPLAY=:1
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1/
export PATH=${ORACLE_HOME}/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin
export ORACLE_SID=oem

alias info="source ~/.bash_profile"

echo ""
echo "+----------------------------------------+"
echo "| SHORTCUTS                              |"
echo "+----------------------------------------+"
echo "info"
echo ""

echo ""
echo "----------------------------------------"
echo "DB START"
echo "----------------------------------------"
echo "lsnrctl start"
echo "sqlplus \"/ as sysdba\""
echo "startup"
echo "ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE;"

echo ""
echo "----------------------------------------"
echo "DB STOP"
echo "----------------------------------------"
echo "sqlplus \"/ as sysdba\""
echo "shutdown immediate"
echo "lsnrctl stop"

echo ""
]]>
<![CDATA[ Red Hat OpenShift Overview ]]> https://chronicler.tech/red-hat-openshift-overview/ 5d3957140b1b670a1724daae Thu, 25 Jul 2019 03:34:39 -0400 OpenShift is a family of containerization software developed by Red Hat that has been around since 2011, but it has gained considerably more steam over the past couple of years.

In this post, I'll give an introduction to OpenShift, with a series of follow-ups that give examples of provisioning, scaling, and CI/CD.

What is OpenShift?

  • OpenShift is Red Hat's cloud development Platform as a Service (PaaS).
  • OpenShift is based on top of Docker containers and Kubernetes container cluster manager.
  • OpenShift is an open source cloud-based platform.
  • OpenShift allows developers to create, test, and run their applications and deploy them to the cloud.
  • Gartner refers to OpenShift as a Cloud Enabled Application Platform (CEAP).
  • OpenShift provides self-service capabilities for container management.
  • OpenShift can be deployed on to a public or a private cloud.

Built on Kubernetes and Docker

Docker provides containerization at the OS level wherein each application deployment and all its relevant libraries and artifacts are packaged into a single Docker container.

Kubernetes, built by Google, provides orchestration, scheduling, and management of these Docker containers.

Simplistically speaking, OpenShift is a front-end management tool built on top of Kubernetes.

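To give a flavor of what that front end looks like from the command line, here is a minimal oc session (a sketch only; the project name and demo image are illustrative, and oc scale dc/... assumes an OpenShift 3.x DeploymentConfig):

# Log in to the cluster and create a project (a Kubernetes namespace)
oc login https://openshift.example.com:8443
oc new-project demo

# Deploy a demo container image and check its pods
oc new-app openshift/hello-openshift
oc get pods

# Scale the deployment out to three replicas
oc scale dc/hello-openshift --replicas=3
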
Gartner Peer Insight Reviews

Gartner provides verified reviews from members of the IT community on various products and solutions. OpenShift is rated quite well, with an overall 4.2 rating across 48 reviews.

Some comments include:

"User Interface is excellent but improvement is needed in Developer tool and monitoring" ~ Senior Engineer in the Finance Industry
"Easy to integrate with existing authentication mechanism, good UI interface" ~ QA Architect in the Manufacturing Industry
"Easy containerization, requires high level support and clear understanding of legacy apps" ~ IT Project Manager in the Manufacturing Industry
"Amazing container orchestration tool" ~ Architect in the Services Industry
"The implementation is not easy, but it is a great tool that makes day to day much easier" ~ Manager Information Technology Infrastructure in the Services Industry
"Creating a POC was difficult but end product has been great" ~ Specialist Application Engineer in the Healthcare Industry

From these review snippets, the common themes appear to be that it boasts a great interface (true), provides great containerization management functionality (true), and is not too easy (somewhat true).

OpenShift Alternatives

There are numerous alternatives to OpenShift on the market, many of them open sourced as well. The offerings and features vary, but that is for another post.

]]>
<![CDATA[ Always Disable Auto Licensing in OEM Cloud Control ]]> https://chronicler.tech/always-disable-auto-licensing-in-oem-cloud-control/ 5d3953840b1b670a1724da6f Thu, 25 Jul 2019 03:11:57 -0400 Oracle Enterprise Manager Cloud Control is a free product, but is extremely limited in functionality without paying for the additional management packs. These management packs offer a slew of necessary features and are licensed separately.

By default, the OEM Cloud Control product allows you to install and monitor any target type without restriction, and assumes that you are licensed for all management packs.

Thus, it is safest to always disable "auto licensing" for all management packs that you are not licensed for.

Checking Auto Licensing Status

  1. Log in to the OEM Cloud Control 13c console as SYSMAN.

2. Navigate to Setup > Management Packs > Management Pack Access.

3. Click on the Auto Licensing radio button.

4. Here, you will find the management packs that you are not auto licensed for under the Auto Licensing Disabled List.

Disabling Auto Licensing

  1. If you want to disable auto licensing for a particular management pack, then:

a. Select the management pack, click on Move.

b. Under Auto Licensing, click on the Disable radio button.

c. Click on Apply.

References

https://docs.oracle.com/cd/E63000_01/OEMLI/introduction.htm#OEMLI108

]]>
<![CDATA[ Modify startManagedWebLogic.sh after unpacking to 2nd host ]]> https://chronicler.tech/modify-startmanagedweblogic-sh-after-unpacking-to-2nd-node/ 5d2e8dca9bc7bd46a31c56ae Tue, 16 Jul 2019 23:01:17 -0400 After packing a WebLogic domain on the first host and unpacking it on the second host, I noticed an interesting behavior.

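For context, the pack/unpack sequence looked roughly like this (a sketch; the template path is a placeholder and the script location shown is the 12c-style oracle_common one):

# On the first host: pack only the managed server portion of the domain
$MW_HOME/oracle_common/common/bin/pack.sh -domain=$DOMAIN_HOME -template=/tmp/soa_domain.jar -template_name="soa_domain" -managed=true

# Copy /tmp/soa_domain.jar to the second host, then unpack it there
$MW_HOME/oracle_common/common/bin/unpack.sh -domain=$DOMAIN_HOME -template=/tmp/soa_domain.jar
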
After this was done, I attempted to start up a managed server on the second host on the command line via the startManagedWebLogic.sh script. For example:

nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh wls_soa1 >> $DOMAIN_HOME/servers/wls_soa1/logs/wls_soa1.out &

The managed server did not start up because it could not communicate with the AdminServer.

It turns out that the unpack command rewrites the ADMIN_URL in startManagedWebLogic.sh, assuming that the AdminServer is running on the second host:

if [ "${ADMIN_URL}" = "" ] ; then
    ADMIN_URL="t3://soahost2:7001"
    export ADMIN_URL
fi

You must manually update startManagedWebLogic.sh and replace the ADMIN_URL to reflect the host that is actually running the AdminServer.

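A quick way to make that change on the second host is a one-line sed (a sketch; soahost1 stands in for whichever host actually runs the AdminServer, and -i.bak keeps a backup copy):

# Point ADMIN_URL at the host that really runs the AdminServer
sed -i.bak 's|t3://soahost2:7001|t3://soahost1:7001|g' $DOMAIN_HOME/bin/startManagedWebLogic.sh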
]]>
<![CDATA[ BEA-000362 and Consensus Leasing in WebLogic ]]> https://chronicler.tech/bea-000362-and-understanding-census/ 5d2e8a349bc7bd46a31c5654 Tue, 16 Jul 2019 22:52:02 -0400 If you start up a WebLogic managed server using the startManagedWebLogic.sh script, you may see the following exception in the logs:

<Apr 14, 2019 2:34:42,929 PM EDT> <Critical> <ConsensusLeasing> <BEA-000001> <Server must be started by Node Manager when consensus leasing is enabled.>
<Apr 14, 2019 2:34:45,345 PM EDT> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason: Server must be started by NodeManager when consensus leasing is enabled>

As a Critical error, the managed server would not start up as a result.

To resolve this:

  1. Login to the WebLogic Admin Console
  2. Navigate to Clusters > [cluster_name] > Configuration > Migration
  3. Change the Migration Basis from 'Consensus' to 'Database'
  4. Select a valid Data Source

Database leasing requires the availability of a high-availability database, such as Oracle RAC, to store leasing information.

Consensus leasing stores the leasing information in-memory within a cluster member. This option requires Node Manager to be configured and running, and you can only start the managed server using Node Manager.

Unless you are setting up and configuring migratable servers (I hope not!), it doesn't matter what you set this to.

In case I'm not clear, migratable servers and migratable services are generally a bad idea.

]]>
<![CDATA[ Getting ORA-01031 "insufficient privileges" during 12.2.1.3 RCU ]]> https://chronicler.tech/getting-ora-01031-insufficient-privileges-during-12-2-1-3-rcu/ 5d2629869bc7bd46a31c5621 Wed, 10 Jul 2019 14:12:55 -0400 Problem:

When running the Repository Configuration Utility (RCU) for Oracle Fusion Middleware 12.2.1.3, you may receive the following error during the creation process in the GUI:

ORA-01031: insufficient privileges

The rcu.log file may show entries that look like this:

Tue Jul 9 23:55:51.780 GMT 2019 ERROR assistants.rcu.backend.action.AbstractAction: oracle.sysman.assistants.rcu.backend.action.AbstractAction::handleNonIgnorableError: Received Non-Ignorable Error: ORA-01031: insufficient privileges
File:/u01/oracle/middlewareTEST/soa/common/sql/soainfra/sql/oracle/createuser_grant_privs_oracle.sql
Statement:GRANT execute on dbms_utility    to DELETETHIS_SOAINFRA

Tue Jul 9 23:55:55.545 GMT 2019 ERROR assistants.common.dbutil.jdbc.JDBCEngine: oracle.sysman.assistants.common.dbutil.jdbc.JDBCEngine::onException: SQLException: ORA-01031: insufficient privileges

java.sql.SQLSyntaxErrorException: ORA-01031: insufficient privileges

        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:494)
        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:446)
        at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1054)
        at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:623)
        at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
        at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)
        at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:226)
        at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:59)
        at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:910)
        at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1119)
        at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3780)
        at oracle.jdbc.driver.T4CPreparedStatement.executeInternal(T4CPreparedStatement.java:1343)
        at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3887)
        at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1079)
        at oracle.sysman.assistants.common.dbutil.jdbc.JDBCEngine.runSqlStatement(JDBCEngine.java:1272)
        at oracle.sysman.assistants.common.dbutil.jdbc.JDBCEngine.callRunSqlStatement(JDBCEngine.java:931)
        at oracle.sysman.assistants.common.dbutil.jdbc.JDBCEngine.executeSql(JDBCEngine.java:950)
        at oracle.sysman.assistants.common.dbutil.jdbc.JDBCEngine.executeSql(JDBCEngine.java:925)
        at oracle.sysman.assistants.common.dbutil.jdbc.OracleDDLStatement.execute(ANSISQLStatementType.java:713)
        at oracle.sysman.assistants.common.dbutil.jdbc.JDBCEngine.executeNextSQLStatement(JDBCEngine.java:1785)
        at oracle.sysman.assistants.common.dbutil.jdbc.JDBCEngine.parseNexecuteScript(JDBCEngine.java:1688)
        at oracle.sysman.assistants.rcu.backend.action.JDBCAction.perform(JDBCAction.java:424)
        at oracle.sysman.assistants.rcu.backend.task.AbstractCompTask.execute(AbstractCompTask.java:255)
        at oracle.sysman.assistants.rcu.backend.task.ActualTask.run(TaskRunner.java:346)
        at java.lang.Thread.run(Thread.java:748)
Caused by: Error : 1031, Position : 17, Sql = GRANT execute on dbms_lob        to DELETETHIS_SOAINFRA, OriginalSql = GRANT execute on dbms_lob        to DELETETHIS_SOAINFRA, Error Msg = ORA-01031: insufficient privileges

        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:498)
        ... 24 more

Solution:

When running the RCU, make sure the account you select has the SYSDBA role.

Depending on the Oracle Fusion Middleware components being installed, you will need to use the "SYSDBA" role and not the "Normal" role.

In the screenshot, you can observe the message, "One or more components may require SYSDBA for the operation to succeed."

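If you drive RCU from the command line rather than the GUI, the same requirement applies: connect as SYS with the SYSDBA role. A rough sketch with placeholder values (see the RCU documentation for silent-mode password handling and the exact component list you need):

$ORACLE_HOME/oracle_common/bin/rcu -silent -createRepository \
  -databaseType ORACLE -connectString dbhost:1521:orcl \
  -dbUser sys -dbRole sysdba \
  -schemaPrefix DEV -component MDS -component SOAINFRA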
]]>
<![CDATA[ Automate SOA Infra configuration ]]> https://chronicler.tech/automate-soa-infra-configuration/ 5d206ec69bc7bd46a31c51b6 Mon, 08 Jul 2019 08:30:00 -0400 For the last six months, I have been working on fully automated Oracle Fusion Middleware platform delivery. We use Ansible as the primary automation tool and OEM for most of the daily Ops routines. We are now completing the last 20% of the final 20%, which in theory never ends, because the 80/20 rule says:

«The first eighty percent of the project takes 20% of the time. The last 20% takes the rest.»

The same holds for construction, software development, and many other things where you have to trade off time, money, and quality. Well, let's get back to the point.
An essential part of the SOA Suite configuration is setting your server and callback URLs, especially if your SOA server is behind a reverse proxy or load balancer. Oracle provides only one way to configure those properties: Oracle Enterprise Manager Fusion Middleware Control (see Figure 1).

Figure 1. Oracle SOA 12c Common Infrastructure properties

It works, especially if you are a human and do it for a living. You can go a step further and click on "More SOA Infra ..." to see all of the infrastructure configuration properties as MBean attributes (Figure 2).

Figure 2. Oracle SOA 12c Common properties as MBean attributes

I conducted intensive searches across the internet, Oracle Support, and the documentation of all main SOA Suite versions (11.1.x and 12.x). You would be surprised how much information you can find in the documentation for previous releases; for example, I still consider the Oracle Business Rules 10g documentation the best Rules SDK specification.
It turned out that Oracle doesn't provide custom WLST commands for these essential configuration properties, nor any other documented way to update those fields. A few factors didn't allow me to stop:

  • I hate to give up, and we still had to complete the project;
  • The Simple or Advanced EM console is just a web interface to WebLogic JMX MBeans.

I turned my attention in that direction and started digging. After introspecting the soa-infra MBean, I figured out that it doesn't expose any commit or save methods, and the attributes' getter and setter methods are not available.

Then I found Paul Done's old piece on WebLogic custom MBeans. Finally, I saw the light and managed to create a working solution.
The code excerpt below is part of a Jinja2 template. Ansible turns the template into the final WLST code and then executes it to perform the desired changes. The code is mostly self-explanatory, although a few clarifications could be useful:

  • The script connects to the managed server port, not the admin one.
  • Since managed servers have no edit() tree, you don't need to start an edit session or activate changes.
  • The WLST environment provides the mbs object - the common interface to MBeans.
  • The MBean name is hardcoded and, as far as I know, is the same for 11g and 12c.
  • The {% if %} {% endif %} sections decide which parts make it into the final script.
  • Ansible replaces {{ variable }} tokens with the actual variable values.
## {{ ansible_managed }}
{% if server_up %}
connect(userConfigFile="adminUser.cfg",userKeyFile="adminUser.key",url="t3s://{{ admin_host }}:{{ server_port }}")
custom()
#Get MBean object name 
oname=ObjectName("oracle.as.soainfra.config:name=soa-infra,type=SoaInfraConfig,Application=soa-infra")
{% if "absent" == state|lower %}
# Reset URLs
mbs.setAttribute(oname,Attribute("ServerURL",None))
mbs.setAttribute(oname,Attribute("CallbackServerURL",None))
mbs.setAttribute(oname,Attribute("HttpsServerURL",None))
{% else %}
# Configure URLs
mbs.setAttribute(oname,Attribute("ServerURL","https://{{ frontendHost }}:{{ frontendSPort }}"))
mbs.setAttribute(oname,Attribute("CallbackServerURL","https://{{ frontendHost }}:{{ frontendSPort }}"))
mbs.setAttribute(oname,Attribute("HttpsServerURL","https://{{ frontendHost }}:{{ frontendSPort }}"))
{% endif %}
{% else %}
print("WARNING: Skip configuration: server %s is not available on %d." % ("{{ admin_host }}",{{ server_port }}))
{% endif %}
Ansible way of WLST scripting

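For completeness, the rendered script is just a plain WLST script, so outside of Ansible you can also run it by hand with the stock launcher (a sketch; the file name is only an example):

# Execute the generated WLST script against the running managed server
$MW_HOME/oracle_common/common/bin/wlst.sh soa_infra_urls.py
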
Another proof of my axiom: there are always at least two ways to do something with Oracle products.

]]>
<![CDATA[ Break the line ]]> https://chronicler.tech/line-breaks-in-code-blocks/ 5d1f79558051b64e8857122a Mon, 08 Jul 2019 06:55:06 -0400 My friend, Ahmed, has published a few posts with WebLogic output, and I'm sure you have seen the endless output lines, especially when the system throws an exception. Here is a code block from his last post:

An error occurred during activation of changes, please see the log for details.

[Deployer:149189]An attempt was made to execute the "activate" operation on an application named "mydatasource" that is not currently available. The application may have been created after non-dynamic configuration changes were activated. If so, the operation cannot be performed until the server is restarted so that the application will be available.

So he asked me: "How can we wrap the lines in code blocks?" I did some searching and read a few posts on how PrismJS can address the long-lines issue. The solution from other Ghost users is not as pretty as regular ``` code blocks, but it's quite close. If your code has long lines, put it into a markdown section and surround it with the tags below.


<pre class="language-bash"><code style="white-space: pre-wrap; font-size: small">
Your code goes here
</code></pre>

The key is the "white-space: pre-wrap" style in the code tag.

I found another caveat while preparing this post: it would be wise to escape HTML and XML code to avoid misinterpretation and tag mismatches. Besides that, it works just fine.
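If you need to escape a snippet quickly before pasting it into the markdown card, a sed one-liner like this sketch does the job (the file name is a placeholder):

# Escape &, < and > so the browser doesn't treat the snippet as markup
sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g' snippet.xml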

An error occurred during activation of changes, please see the log for details.

[Deployer:149189]An attempt was made to execute the "activate" operation on an application named "mydatasource" that is not currently available. The application may have been created after non-dynamic configuration changes were activated. If so, the operation cannot be performed until the server is restarted so that the application will be available.

I used bash as the language just to show that you can keep syntax highlighting and do some style manipulations, for example, the font size.

]]>
<![CDATA[ Getting activation error on GridLink data source in WebLogic 12.2.1.3 ]]> https://chronicler.tech/getting/ 5d1d2d778051b64e885711d0 Wed, 03 Jul 2019 18:47:48 -0400 Problem:

When creating a GridLink data source in Oracle WebLogic Server 12.2.1.3, you may receive an activation error.

An error occurred during activation of changes, please see the log for details.

[Deployer:149189]An attempt was made to execute the "activate" operation on an application named "mydatasource" that is not currently available. The application may have been created after non-dynamic configuration changes were activated. If so, the operation cannot be performed until the server is restarted so that the application will be available.

Solution:

  1. Create the data source normally, but do not target it to a managed server or cluster (keep it untargeted).
  2. Save and activate changes.
  3. Go back and target the data source to a managed server or cluster.
  4. Save and activate changes.
]]>
<![CDATA[ Where to download Oracle JDeveloper 12.2.1.3 for SOA and OSB development ]]> https://chronicler.tech/where-to-download-oracle-jdeveloper-12c-for-soa-and-osb-development/ 5d1cbc488051b64e8857118f Wed, 03 Jul 2019 10:43:49 -0400 Oracle likes to confuse you sometimes.

Are you looking to download Oracle JDeveloper 12c (12.2.1.3.0) to do your SOA and OSB development?

You might struggle to find the correct version. Essentially, you will need to download the Oracle SOA Suite 12.2.1.3.0 QuickStart. Do not download Oracle JDeveloper Studio Edition 12.2.1.3.0.

RIGHT:
Download the two required files from here:

https://www.oracle.com/middleware/technologies/soasuite/downloads.html

Extract them, then run the command:

java -jar fmw_12.2.1.3.0_soa_quickstart.jar

WRONG:

Do not download from either of these links:

https://www.oracle.com/technetwork/developer-tools/jdev/downloads/index.html
https://www.oracle.com/technetwork/pt/developer-tools/jdev/downloads/index.html

Applicable Versions:

  • Oracle JDeveloper 12c (12.2.1.3.0) for SOA Suite and Service Bus
]]>
<![CDATA[ Blast from the Past: My Oracle Magazine Interview ]]> https://chronicler.tech/blast-from-the-past-oracle-magazine-interview/ 5d1c21598051b64e88571149 Tue, 02 Jul 2019 23:50:00 -0400 Here's a brief interview conducted with me in Oracle Magazine waaaaaay back in 2014.

I still stand by my answers! :)

And it looks like Oracle heard me and created the Oracle Autonomous Database. ;)

And here's a never-before-published fact... the original photo I submitted was with me and "Batman". Sadly, Oracle Magazine decided to crop it for copyright reasons.

]]>
<![CDATA[ Invalid target for JMS proxy: osb_cluster ]]> https://chronicler.tech/invalid-target-for-jms-proxy-osb_cluster/ 5d1c22ac8051b64e88571163 Tue, 02 Jul 2019 23:37:54 -0400 Problem:

We developed an OSB 11g project that polls from a WebLogic JMS queue. Upon deployment, we received the following error:

[WliSb Transports:381515]Invalid target for JMS proxy: osb_cluster

Solution:

This OSB project was developed against the DEV server, where the OSB cluster name was called osb_cluster. However, our TEST environment had an OSB cluster name of osbcluster.

  1. One option is to change the name of your OSB cluster.
  2. Another option is to update your code so it references the correct cluster name for the target environment.
]]>
<![CDATA[ Customize properties in WebLogic 12c ]]> https://chronicler.tech/customize-environment-for-weblogic-12c/ 5d1bc0158051b64e885710b4 Tue, 02 Jul 2019 17:12:34 -0400 WebLogic has always had a way to customize environment settings for the domain or even for each managed server. WebLogic 12c introduces a new option: setUserOverrides.sh. All you need to do is create a regular shell script and place it in the $DOMAIN_HOME/bin folder; the WebLogic server does the rest.
The example below helped me fix communication issues between the WebLogic identity provider and OUD.

#!/bin/sh
# Save this script as $DOMAIN_HOME/bin/setUserOverrides.sh
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.ssl.SSLv2HelloEnabled=false"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2"
export JAVA_OPTIONS
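Since the same script is sourced for every server in the domain, you can branch on the server name to apply per-server settings. A minimal sketch, assuming the SERVER_NAME variable set by the standard startup scripts; the server name and memory values below are just examples:

# Per-server override: applies only when soa_server1 starts
if [ "${SERVER_NAME}" = "soa_server1" ] ; then
  USER_MEM_ARGS="-Xms2g -Xmx4g"
  export USER_MEM_ARGS
fi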


Image source: https://www.flickr.com/photos/showbiz_kids/430918601

]]>
<![CDATA[ Blockchain for Newbies (4 of 5) ]]> https://chronicler.tech/blockchain-for-newbies-4-of-5/ 5d1a1c118051b64e88571018 Mon, 01 Jul 2019 11:03:24 -0400 Still confused? As you embark on your blockchain education journey, undoubtedly some questions still arise.

Can I use blockchain to replace my current transactional database?

No.

Blockchain serves a separate function. Blockchain as a concept is not intended to provide OLTP functionality.

What is blockchain ideally used for?

Simply put, verification of a transaction (through the means of an immutable audit trail).

Can't I develop many blockchain features myself using a traditional application and database?

Yes, but you would have to develop that functionality.

For example, have you considered how you plan on creating a decentralized and distributed database immune to manipulation? Implementing even a subset of blockchain features yourself will likely be prohibitively expensive.

If the data in a previous block requires updating, can this be done?

No. Blockchain is intended for insert transactions only. The idea is to maintain an unaltered audit trail.

Unlike a traditional database, blockchain is not intended to update or delete records. Newly inserted entries can supersede the values recorded in past entries, but the past entries themselves cannot be modified.

What is a clear example of where blockchain can be used?

Take the example of buying and selling of land.

In the past, titles for land were maintained as a paper trail. If you buy a piece of land, you would accompany the seller to the county, exchange money, and transfer ownership of the title in the presence of a government official.

Nowadays, this information is stored digitally in some database. So as the land is bought and sold over the years, more records in the database are generated, which maintains this history of transactions. The problem is, a system administrator with underlying access to the database can alter these digital records without anyone noticing. It is possible that 2, 5, or 20 years later this is still not caught, and it becomes difficult to prove, especially as systems are upgraded and migrated.

This is where blockchain comes in. This same database doesn't change. It would still maintain all its transactional data. But when it comes to title-based transactions, that data would also be pushed to a blockchain service. Now, we can use blockchain as a trusted audit trail.

Would I have to get rid of my traditional database if I adopt blockchain?

No.

Keep your data, users, and transactions in your database as is, but in parallel push any transaction (such as a financial transaction) that requires auditability and immutability to a blockchain service.

What are the disadvantages of blockchain?

It is difficult to regulate transactions due to the decentralized nature of blockchain, so it becomes attractive to criminals to use for illegal trade or money laundering purposes.

]]>
<![CDATA[ Git client for Windows: basic configuration ]]> https://chronicler.tech/windows-password-manager-and-git/ 5d175c5f8051b64e88570d57 Mon, 01 Jul 2019 08:47:00 -0400 Last week, I tried to get one of my private repositories onto my workstation at a client site. It's a quite common situation: you have no admin access to the workstation, and a proxy server requires authorization for internet access. The first part is easy enough, since you can find most DEV tools, such as Maven or Git, in a portable format. Now I'll tell you how to configure Git to work with a corporate proxy.

Let's make sure that you have git installed and perform the initial configuration. Don't forget to substitute my name, email, and project with your values.

git --version
(out)git version 2.17.0.windows.1
git config --global user.email mikhailidim@gmail.com
git config --global user.name "Michael Mikhailidi"

My first try threw an error similar to the one below.

git clone https://gitlab.com/mikhailidim/Json2Xml.git
(out)Cloning into 'Json2Xml'...
(out)fatal: unable to update url base from redirection:
(out)  asked for: https://gitlab.com/mikhailidim/Json2Xml.git/info/refs?service=git-upload-pack
(out)  redirect: http://proxy-server.domain:8080/plugin?target=Auth&reason=Auth&ClientID=<%some-internal-parameters%>
(out) 

It gives you two useful hints:

  1. You are behind the proxy, and now you have the URL;
  2. The error reason is authentication.

Git has global parameters for that too. Add the http.proxy parameter with your proxy URL and your domain account name. I also suppressed certificate validation. It's not exactly safe, but you don't have much choice if your proxy intercepts encrypted traffic.

git config --global http.proxy http://DOMAIN\username@proxy-server.domain:8080
git config --global http.sslverify false
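By the way, when you work outside the corporate network again, you can drop both settings with the standard --unset syntax (a quick sketch):

# Remove the proxy and restore default certificate validation
git config --global --unset http.proxy
git config --global --unset http.sslverify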

As you may have noticed, I did not provide any passwords. Git will ask for them when you clone the repository for the first time.

git clone https://gitlab.com/mikhailidim/Json2Xml.git                       
(out)Cloning into 'Json2Xml'...                                                    
(out)Password for 'http://DOMAIN\username@proxy-server.domain:8080':         
(out)Username for 'https://gitlab.com': mikhailidim                                
(out)Password for 'https://mikhailidim@gitlab.com':                                
(out)remote: Enumerating objects: 142, done.                                       
(out)remote: Counting objects: 100% (142/142), done.                               
(out)remote: Compressing objects: 100% (65/65), done.                              
(out)remote: Total 142 (delta 27), reused 142 (delta 27)                           
(out)Receiving objects: 100% (142/142), 20.25 KiB | 1.12 MiB/s, done.              
(out)Resolving deltas: 100% (27/27), done.                                         

The first password is for the proxy authentication, followed by the credentials for my GitLab account. The next time, it doesn't ask for any credentials because they are already stored in the Windows password vault. The next time I will have to worry about passwords is when the domain controller forces me to change them. You can find your credentials in the Windows Credential Manager; you can see my sample credentials in the screenshot below.

It works fine only if you use one account to access one server. If for some reason you have multiple GitLab or GitHub accounts, it won't work as well. To enter credentials every time, reset Git's credential helper with git config --system --unset credential.helper.
You can restore the behavior with the command:
git config --global credential.helper manager
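Another option is to make the helper store credentials per repository path rather than per host, which helps when different accounts own different projects on the same server. A minimal sketch; verify the option against your Git version:

# Key stored credentials by the full URL path instead of the host name only
git config --global credential.useHttpPath true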


Special thanks to Mr. Gray for the great image.

]]>
<![CDATA[ OEM 12c: Elusive NodeManager ]]> https://chronicler.tech/elusive-nodemanager/ 5d195f698051b64e88570f5e Sun, 30 Jun 2019 22:09:04 -0400 I have spent a few very confusing hours in desperate attempts to figure out why the OMS instance wouldn't start. The command output suggests checking the NodeManager log file, but it has no new entries; in fact, the file wasn't accessed at all.

emctl start oms
(out)Oracle Enterprise Manager Cloud Control 12c Release 5
(out)Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
(out)Starting Oracle Management Server...
(out)Starting WebTier...
(out)WebTier Successfully Started
(out)Node Manager Could Not Be Started
(out)Check Node Manager log file for details: /u01/app/oracle/gc_inst/NodeManager/emnodemanager/nodemanager.log
(out)Oracle Management Server is Down
(out)Starting BI Publisher Server ...
(out)Node Manager Could Not Be Started
(out)Check Node Manager log file for details: /u01/app/oracle/gc_inst/NodeManager/emnodemanager/nodemanager.log
(out)BI Publisher Server is Down

It's always about the network, you know: typos in /etc/hosts, a DNS record that suddenly resolves to a different IP address, the wrong name resolution order. I checked them all. Everything was neat and clean, and besides, a network issue would leave a hint in the logs, at least a few new lines. From now on, I have another check mark in my validation list - the LogLevel property. Make sure that the parameter value matches one of the values from this list1:

  • SEVERE (highest value)
  • WARNING
  • INFO
  • CONFIG
  • FINE
  • FINER
  • FINEST (lowest value)

In my case, LogLevel was set to DEBUG: innocuous enough not to draw my attention and bad enough to kill NodeManager before it even opens a log file.
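A quick way to check and correct the value, assuming the nodemanager.properties file sits next to the log file reported by emctl:

# Check the current level
grep ^LogLevel /u01/app/oracle/gc_inst/NodeManager/emnodemanager/nodemanager.properties
# Replace an unsupported value such as DEBUG with a valid java.util.logging level
sed -i 's/^LogLevel=.*/LogLevel=INFO/' /u01/app/oracle/gc_inst/NodeManager/emnodemanager/nodemanager.properties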


  1. https://docs.oracle.com/javase/7/docs/api/java/util/logging/Level.html
]]>
<![CDATA[ Sample output of the Oracle Fusion Middleware 12.2.1.3 UA readiness logs ]]> https://chronicler.tech/sample-output-of-the-oracle-fusion-middleware-12-2-1-3-ua-readiness-logs/ 5d18cf2a8051b64e88570f42 Sun, 30 Jun 2019 11:05:48 -0400 Here is a sample output of the Oracle Fusion Middleware 12.2.1.3 upgrade assistant (UA) readiness logs, primarily for reference purposes.

The UA can be executed as follows:

cd ${MW_HOME}/oracle_common/upgrade/bin
./ua -readiness
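The readiness check also writes a detailed log under the Oracle home (the exact path shows up in the output below); you can follow the most recent one while the check runs:

# Tail the latest UA log file
tail -f $(ls -t ${MW_HOME}/oracle_common/upgrade/logs/ua*.log | head -1)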

Sample output:

[2019-06-29T17:32:42.603+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
[2019-06-29T17:32:42.607+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: FMWUPGRADE_12.2.1.3.0_GENERIC_170809.2231.S
[2019-06-29T17:32:42.612+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name for SchemaVersion.jar file: FMWUPGRADE_12.2.1.3.0_GENERIC_170809.2231.S
[2019-06-29T17:32:42.616+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Host name: soadev
[2019-06-29T17:32:42.618+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Middleware home: /u01/oracle/middleware12213
[2019-06-29T17:32:42.620+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Oracle home: /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:32:42.622+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  WebLogic home: /u01/oracle/middleware12213/wlserver
[2019-06-29T17:32:42.624+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  os.name: Linux
[2019-06-29T17:32:42.627+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  os.version: 4.14.35-1844.1.3.el7uek.x86_64
[2019-06-29T17:32:42.630+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  os.arch: amd64
[2019-06-29T17:32:42.631+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  user.name: oracle
[2019-06-29T17:32:42.636+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  user.home: /home/oracle
[2019-06-29T17:32:42.638+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  user.dir: /u01/oracle/middleware12213/oracle_common/upgrade/bin
[2019-06-29T17:32:42.641+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  user.country: US
[2019-06-29T17:32:42.642+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  user.language: en
[2019-06-29T17:32:42.642+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  file.encoding: UTF-8
[2019-06-29T17:32:42.642+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  java.vendor: Oracle Corporation
[2019-06-29T17:32:42.643+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  java.version: 1.8.0_201
[2019-06-29T17:32:42.645+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  java.home: /u01/jdk1.8.0_201/jre
[2019-06-29T17:32:42.646+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Java.class.path: /u01/oracle/middleware12213/oracle_common/upgrade/jlib/ua.jar:/u01/oracle/middleware12213/oracle_common/upgrade/jlib/SchemaVersion.jar:/u01/oracle/middleware12213/oracle_common/modules/features/com.oracle.db.jdbc-dms.jar:/u01/oracle/middleware12213/oracle_common/modules/datadirect/fmwgenerictoken.jar:/u01/oracle/middleware12213/oracle_common/modules/datadirect/wlsqlserver.jar:/u01/oracle/middleware12213/oracle_common/modules/datadirect/wldb2.jar:/u01/oracle/middleware12213/oracle_common/modules/mysql-connector-java-commercial-5.1.22/mysql-connector-java-commercial-5.1.22-bin.jar:/u01/oracle/middleware12213/wlserver/common/derby/lib/derbyclient.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.bali.jewt/jewt4.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.bali.jewt/olaf2.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.odl/ojdl.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.dms/dms.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.bali.share/share.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.ldap/ojmisc.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.ldap/ldapjclnt11.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.help/help-share.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.help/ohj.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.help/oracle_ice.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.pki/oraclepki.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.nlsrtl/orai18n-mapping.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.jrf/jrf-api.jar:/u01/oracle/middleware12213/wlserver/server/lib/weblogic.jar:/u01/oracle/middleware12213/wlserver/modules/wlstt3client.jar:/u01/oracle/middleware12213/oracle_common/jlib/wizardCommonResources.jar:/u01/oracle/middleware12213/oracle_common/jlib/rcucommon.jar:/u01/oracle/middleware12213/oracle_common/modules/features/rcuapi_lib.jar:/u01/oracle/middleware12213/oracle_common/modules/features/cieCfg_wls_lib.jar:/u01/oracle/middleware12213/oracle_common/modules/features/cieCfg_cam_lib.jar:/u01/oracle/middleware12213/oui/modules/gdr-external.jar:/u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-manifest.jar
[2019-06-29T17:32:42.648+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Java.library.path: /u01/oracle/middleware12213/oracle_common/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[2019-06-29T17:32:42.650+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Log file is located at: /u01/oracle/middleware12213/oracle_common/upgrade/logs/ua2019-06-29-17-32-41PM.log
[2019-06-29T17:32:42.653+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading installer inventory, this will take a few moments...
[2019-06-29T17:32:56.829+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  ...completed reading installer inventory.
[2019-06-29T17:32:56.830+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Feature set cieCfg_cam_hybrid with component oracle.fmwconfig.common.cam not found.
[2019-06-29T17:32:56.831+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Feature set wls_server found.
[2019-06-29T17:32:56.832+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running in a managed environment.
[2019-06-29T17:33:01.757+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/syscomp.xml
[2019-06-29T17:33:01.884+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component CAM in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:01.885+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/ess.xml
[2019-06-29T17:33:01.918+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component ESS in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:01.918+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/audit.xml
[2019-06-29T17:33:01.943+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component IAU in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:01.944+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/jrfua.xml
[2019-06-29T17:33:01.968+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component JRF in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:01.969+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/Opss.xml
[2019-06-29T17:33:01.994+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component OPSS in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:01.995+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/cie.xml
[2019-06-29T17:33:02.017+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin FMWCONFIG.CIE_SCHEMA_PLUGIN needs feature set cieStb_rcu.
[2019-06-29T17:33:02.018+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Feature set cieStb_rcu found.
[2019-06-29T17:33:02.018+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Upgrade disabled for plugin FMWCONFIG.CIE_CONFIG_PLUGIN because this plugin is only invoked during a readiness check.
[2019-06-29T17:33:02.023+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component FMWCONFIG in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:02.023+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/usermessaging.xml
[2019-06-29T17:33:02.058+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component UCSUMS in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:02.059+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/mds.xml
[2019-06-29T17:33:02.076+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component MDS in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:02.077+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/wsm.xml
[2019-06-29T17:33:02.097+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component WSM in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:02.097+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/oracle_common/plugins/upgrade/wlsservices.xml
[2019-06-29T17:33:02.112+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component WLS in /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:33:02.112+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Reading upgrade descriptor /u01/oracle/middleware12213/soa/plugins/upgrade/soainfra.xml
[2019-06-29T17:33:02.130+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Plugin for component SOA in /u01/oracle/middleware12213/soa
[2019-06-29T17:33:02.132+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Sorting components according to dependencies
[2019-06-29T17:33:02.134+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin UAWLSINTERNAL.UAWLS is not a schema plugin
[2019-06-29T17:33:02.135+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin OPSS.OPSS_SCHEMA_PLUGIN is not a configuration plugin
[2019-06-29T17:35:17.792+00:00] [oracle] [WARNING] [com.oracle.cie.domain.template.catalog.impl.LocalTemplateCat]  Couldn't load [/u01/oracle/middleware12213/soa/common/templates/wls/oracle.bpm.jms.reconfig_template_12.2.1.3.0.jar].
java.util.MissingResourceException: Not managing namespace: (config).
	at com.oracle.cie.common.util.ResourceBundleManager.getPublishedMessage(ResourceBundleManager.java:249)
	at com.oracle.cie.domain.template.TemplateInfoHolder.loadFromJar(TemplateInfoHolder.java:202)
	at com.oracle.cie.domain.template.catalog.impl.LocalTemplateCat.loadTemplatesFromDir(LocalTemplateCat.java:272)
	at com.oracle.cie.domain.template.catalog.impl.ProdTemplateCat.loadTemplates(ProdTemplateCat.java:113)
	at com.oracle.cie.domain.template.catalog.impl.ProdTemplateCat.<init>(ProdTemplateCat.java:40)
	at com.oracle.cie.domain.template.catalog.impl.GlobalTemplateCat.populateProductCatalogs(GlobalTemplateCat.java:461)
	at com.oracle.cie.domain.template.catalog.impl.GlobalTemplateCat.<init>(GlobalTemplateCat.java:90)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at com.oracle.cie.domain.template.catalog.TemplateCatalogFactory.createGlobalTemplateCatalog(TemplateCatalogFactory.java:133)
	at com.oracle.cie.domain.template.catalog.TemplateCatalogFactory.getGlobalCatalog(TemplateCatalogFactory.java:78)
	at com.oracle.cie.domain.template.catalog.TemplateCatalogFactory.getGlobalCatalog(TemplateCatalogFactory.java:33)
	at com.oracle.cie.domain.info.DomainUtilsImpl.validateReconfigurable(DomainUtilsImpl.java:76)
	at oracle.ias.update.wls.WLSConfig.validateDomain(WLSConfig.java:182)
	at oracle.ias.update.gui.UAUpgradeModePage.loadConfig(UAUpgradeModePage.java:864)
	at oracle.ias.update.gui.UAUpgradeModePage.wizardValidatePage(UAUpgradeModePage.java:507)
	at oracle.bali.ewt.wizard.WizardPage.processWizardValidateEvent(WizardPage.java:710)
	at oracle.bali.ewt.wizard.WizardPage.validatePage(WizardPage.java:680)
	at oracle.bali.ewt.wizard.BaseWizard.validateSelectedPage(BaseWizard.java:2414)
	at oracle.bali.ewt.wizard.BaseWizard._validatePage(BaseWizard.java:3162)
	at oracle.bali.ewt.wizard.BaseWizard.doNext(BaseWizard.java:2187)
	at oracle.bali.ewt.wizard.BaseWizard$Action$1.run(BaseWizard.java:4072)
	at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:311)
	at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:758)
	at java.awt.EventQueue.access$500(EventQueue.java:97)
	at java.awt.EventQueue$3.run(EventQueue.java:709)
	at java.awt.EventQueue$3.run(EventQueue.java:703)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
	at java.awt.EventQueue.dispatchEvent(EventQueue.java:728)
	at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:205)
	at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
	at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
	at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)

]]
[2019-06-29T17:35:19.716+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle WebCenter Skin:12.2.1.0
[2019-06-29T17:35:19.718+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle WebCenter Composer:12.2.1.0
[2019-06-29T17:35:19.721+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: AuthProviders:12.2.1.0
[2019-06-29T17:35:19.722+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle WSM Policy Attachment:12.2.1.0
[2019-06-29T17:35:19.723+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Common Infrastructure Engineering Runtime:12.2.1
[2019-06-29T17:35:19.725+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: WebLogic Coherence Cluster Extension:12.2.1
[2019-06-29T17:35:19.727+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle Click History:12.2.1.0
[2019-06-29T17:35:19.727+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle Business Rules Extension:12.2.1
[2019-06-29T17:35:19.727+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle ESS MDS Datasource:12.2.1
[2019-06-29T17:35:19.728+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle SOA ESS DC:12.2.1.0
[2019-06-29T17:35:19.728+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle Workflow Client Extension:12.2.1
[2019-06-29T17:35:19.728+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Service Bus Common Components:12.2.1
[2019-06-29T17:35:19.728+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle SOA Rules Webapp:12.2.1
[2019-06-29T17:35:19.729+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle User Messaging Service Basic:12.2.1
[2019-06-29T17:35:19.730+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle Enterprise Manager Plugin for BAM:12.2.1
[2019-06-29T17:35:19.731+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle SOA Management12:12.1.3.0
[2019-06-29T17:35:19.733+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle BPM Processviewer:12.2.1
[2019-06-29T17:35:19.734+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle BAM Client:12.2.1
[2019-06-29T17:35:19.736+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle SOA Worklist Shared Library:12.2.1
[2019-06-29T17:35:19.737+00:00] [oracle] [WARNING] [com.oracle.cie.domain.info.DomainUtilsImpl]  No reconfig template exists for custom template; but original template exists: Oracle SOA BPEL Shared Library:12.2.1
[2019-06-29T17:35:19.789+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The specified domain has a supported version of 12.2.1.0.0.
[2019-06-29T17:35:19.789+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Domain Attributes
[2019-06-29T17:35:19.790+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    RootDirectory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:35:19.790+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    DomainVersion: 12.2.1.0.0
[2019-06-29T17:35:19.790+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Name: AdminServer
[2019-06-29T17:35:21.920+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The domain knows about 22 data sources
[2019-06-29T17:35:21.921+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source OraSDPMDataSource found
[2019-06-29T17:35:21.921+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: UCSUMS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.xa.client.OracleXADataSource
	User name: DEV_UMS

]]
[2019-06-29T17:35:21.921+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source SOADataSource found
[2019-06-29T17:35:21.922+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: SOAINFRA
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.xa.client.OracleXADataSource
	User name: DEV_SOAINFRA

]]
[2019-06-29T17:35:21.922+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source mds-soa found
[2019-06-29T17:35:21.922+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: MDS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_MDS

]]
[2019-06-29T17:35:21.922+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source EssInternalDS found
[2019-06-29T17:35:21.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: ESS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_ESS

]]
[2019-06-29T17:35:21.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source oracle_ebs_apps found
[2019-06-29T17:35:21.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: null
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: ati_apps

]]
[2019-06-29T17:35:21.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source mds-ESS_MDS_DS found
[2019-06-29T17:35:21.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: MDS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_MDS

]]
[2019-06-29T17:35:21.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source mds-owsm found
[2019-06-29T17:35:21.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: MDS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_MDS

]]
[2019-06-29T17:35:21.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source mds-bam found
[2019-06-29T17:35:21.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: MDS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_MDS

]]
[2019-06-29T17:35:21.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source BamLeasingDataSource found
[2019-06-29T17:35:21.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: DEV_WLS_RUNTIME
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_WLS_RUNTIME

]]
[2019-06-29T17:35:21.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source EDNDataSource found
[2019-06-29T17:35:21.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: SOAINFRA
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.xa.client.OracleXADataSource
	User name: DEV_SOAINFRA

]]
[2019-06-29T17:35:21.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source LocalSvcTblDataSource found
[2019-06-29T17:35:21.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: STB
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_STB

]]
[2019-06-29T17:35:21.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source opss-data-source found
[2019-06-29T17:35:21.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: OPSS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_OPSS

]]
[2019-06-29T17:35:21.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source opss-audit-viewDS found
[2019-06-29T17:35:21.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: IAU_VIEWER
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_IAU_VIEWER

]]
[2019-06-29T17:35:21.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source BamDataSource found
[2019-06-29T17:35:21.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: SOAINFRA
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.xa.client.OracleXADataSource
	User name: DEV_SOAINFRA

]]
[2019-06-29T17:35:21.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source BamNonJTADataSource found
[2019-06-29T17:35:21.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: SOAINFRA
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_SOAINFRA

]]
[2019-06-29T17:35:21.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source EssXADS found
[2019-06-29T17:35:21.928+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: ESS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_ESS

]]
[2019-06-29T17:35:21.928+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source EssDS found
[2019-06-29T17:35:21.928+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: ESS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_ESS

]]
[2019-06-29T17:35:21.928+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source EDNLocalTxDataSource found
[2019-06-29T17:35:21.928+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: SOAINFRA
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_SOAINFRA

]]
[2019-06-29T17:35:21.929+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source opss-audit-DBDS found
[2019-06-29T17:35:21.929+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: IAU_APPEND
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_IAU_APPEND

]]
[2019-06-29T17:35:21.929+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source SOALocalTxDataSource found
[2019-06-29T17:35:21.929+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: SOAINFRA
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_SOAINFRA

]]
[2019-06-29T17:35:21.929+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source raastechDataSource found
[2019-06-29T17:35:21.930+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: null
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.xa.client.OracleXADataSource
	User name: RAASTECH

]]
[2019-06-29T17:35:21.930+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source BamJobSchedDataSource found
[2019-06-29T17:35:21.930+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database type: Oracle
	Component ID: WLS
	URL: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
	JDBC driver: oracle.jdbc.OracleDriver
	User name: DEV_WLS

]]
[2019-06-29T17:35:22.451+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Started discovering components to be upgraded
[2019-06-29T17:35:22.456+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Libraries for OPSS.OPSS_SCHEMA_PLUGIN
[2019-06-29T17:35:22.456+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/plugins/upgrade/opss-upgrade-plugin.jar
[2019-06-29T17:35:22.456+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-upgrade.jar
[2019-06-29T17:35:22.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-api.jar
[2019-06-29T17:35:22.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-internal.jar
[2019-06-29T17:35:22.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-common.jar
[2019-06-29T17:35:22.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-unsupported-api.jar
[2019-06-29T17:35:22.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-az-rt.jar
[2019-06-29T17:35:22.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-ee.jar
[2019-06-29T17:35:22.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-se.jar
[2019-06-29T17:35:22.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-audit.jar
[2019-06-29T17:35:22.458+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jacc-spi.jar
[2019-06-29T17:35:22.464+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.iau/fmw_audit.jar
[2019-06-29T17:35:22.464+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.idm/identitystore.jar
[2019-06-29T17:35:22.464+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.idm/identityutils.jar
[2019-06-29T17:35:22.465+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.osdt/osdt_xmlsec.jar
[2019-06-29T17:35:22.465+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: OPSCS-12.2.1.3.0-170810.0837
[2019-06-29T17:35:24.632+00:00] [OPSS] [NOTIFICATION] [upgrade.OPSS.OPSS_SCHEMA_PLUGIN]  Oracle Platform Security Service schema upgrade required
[2019-06-29T17:35:24.632+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin OPSS.OPSS_SCHEMA_PLUGIN enabled, isUpgradeRequired is true
[2019-06-29T17:35:24.633+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin OPSS.OPSS_SCHEMA_PLUGIN instance count: 1
[2019-06-29T17:35:24.634+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Libraries for MDS.SCHEMA_UPGRADE
[2019-06-29T17:35:24.634+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/plugins/upgrade/mdsplugin.jar
[2019-06-29T17:35:24.634+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: JDEVADF_12.2.1.PATCHSETS_GENERIC_170820.0914.S
[2019-06-29T17:35:24.676+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The auto-datasource mode is ignoring data source OraSDPMDataSource because the JDBC driver is not supported for schema upgrade: oracle.jdbc.xa.client.OracleXADataSource
[2019-06-29T17:35:24.676+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The auto-datasource mode is ignoring data source SOADataSource because the JDBC driver is not supported for schema upgrade: oracle.jdbc.xa.client.OracleXADataSource
[2019-06-29T17:35:24.679+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:27.998+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_ESS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.094+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user ati_apps: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.189+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The auto-datasource mode is ignoring data source oracle_ebs_apps because no schema version registry was found
[2019-06-29T17:35:28.189+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.273+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.387+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.581+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_WLS_RUNTIME: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.694+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The auto-datasource mode is ignoring data source BamLeasingDataSource because no schema version registry was found
[2019-06-29T17:35:28.694+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The auto-datasource mode is ignoring data source EDNDataSource because the JDBC driver is not supported for schema upgrade: oracle.jdbc.xa.client.OracleXADataSource
[2019-06-29T17:35:28.695+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_STB: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.777+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_OPSS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.862+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_IAU_VIEWER: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.920+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The auto-datasource mode is ignoring data source BamDataSource because the JDBC driver is not supported for schema upgrade: oracle.jdbc.xa.client.OracleXADataSource
[2019-06-29T17:35:28.920+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_SOAINFRA: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:28.979+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_ESS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:29.033+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_ESS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:29.095+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_SOAINFRA: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:29.157+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_IAU_APPEND: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:29.219+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_SOAINFRA: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:29.274+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The auto-datasource mode is ignoring data source raastechDataSource because the JDBC driver is not supported for schema upgrade: oracle.jdbc.xa.client.OracleXADataSource
[2019-06-29T17:35:29.275+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_WLS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:29.333+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source mds-soa
[2019-06-29T17:35:29.335+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_MDS) equals 1
[2019-06-29T17:35:29.336+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source EssInternalDS
[2019-06-29T17:35:29.338+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_ESS) equals 0
[2019-06-29T17:35:29.338+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source mds-ESS_MDS_DS
[2019-06-29T17:35:29.339+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_MDS) equals 1
[2019-06-29T17:35:29.341+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source mds-owsm
[2019-06-29T17:35:29.343+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_MDS) equals 1
[2019-06-29T17:35:29.345+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source mds-bam
[2019-06-29T17:35:29.347+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_MDS) equals 1
[2019-06-29T17:35:29.350+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source LocalSvcTblDataSource
[2019-06-29T17:35:29.353+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_STB) equals 0
[2019-06-29T17:35:29.355+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source opss-data-source
[2019-06-29T17:35:29.360+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_OPSS) equals 0
[2019-06-29T17:35:29.362+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source opss-audit-viewDS
[2019-06-29T17:35:29.368+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_IAU_VIEWER) equals 0
[2019-06-29T17:35:29.369+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source BamNonJTADataSource
[2019-06-29T17:35:29.372+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_SOAINFRA) equals 0
[2019-06-29T17:35:29.374+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source EssXADS
[2019-06-29T17:35:29.379+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_ESS) equals 0
[2019-06-29T17:35:29.379+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source EssDS
[2019-06-29T17:35:29.384+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_ESS) equals 0
[2019-06-29T17:35:29.392+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source EDNLocalTxDataSource
[2019-06-29T17:35:29.397+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_SOAINFRA) equals 0
[2019-06-29T17:35:29.399+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source opss-audit-DBDS
[2019-06-29T17:35:29.404+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_IAU_APPEND) equals 0
[2019-06-29T17:35:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source SOALocalTxDataSource
[2019-06-29T17:35:29.410+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_SOAINFRA) equals 0
[2019-06-29T17:35:29.411+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Processing component MDS for data source BamJobSchedDataSource
[2019-06-29T17:35:29.415+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  The number of schema version entries found for query "SELECT COUNT(*)  FROM SCHEMA_VERSION_REGISTRY WHERE comp_id=? AND owner=?" (comp_id=MDS, owner=DEV_WLS) equals 0
[2019-06-29T17:35:29.416+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin MDS.SCHEMA_UPGRADE enabled via auto-datasource option
[2019-06-29T17:35:29.417+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Libraries for ESS.ESS_SCHEMA
[2019-06-29T17:35:29.418+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/plugins/upgrade/ess-upgrade-plugin.jar
[2019-06-29T17:35:29.418+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: PCBPEL_MAIN_GENERIC_170820.1700.2557
[2019-06-29T17:35:29.453+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin ESS.ESS_SCHEMA enabled, isUpgradeRequired is true
[2019-06-29T17:35:29.453+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin ESS.ESS_SCHEMA instance count: 1
[2019-06-29T17:35:29.453+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Libraries for IAU.AUDIT_SCHEMA_PLUGIN
[2019-06-29T17:35:29.454+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/plugins/upgrade/opss-upgrade-plugin.jar
[2019-06-29T17:35:29.454+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-upgrade.jar
[2019-06-29T17:35:29.454+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-api.jar
[2019-06-29T17:35:29.454+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-internal.jar
[2019-06-29T17:35:29.457+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-common.jar
[2019-06-29T17:35:29.458+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-unsupported-api.jar
[2019-06-29T17:35:29.458+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jps-az-rt.jar
[2019-06-29T17:35:29.458+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.jps/jacc-spi.jar
[2019-06-29T17:35:29.458+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.idm/identitystore.jar
[2019-06-29T17:35:29.458+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.idm/identityutils.jar
[2019-06-29T17:35:29.458+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/oracle.osdt/osdt_xmlsec.jar
[2019-06-29T17:35:29.459+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: OPSCS-12.2.1.3.0-170810.0837
[2019-06-29T17:35:29.771+00:00] [IAU] [NOTIFICATION] [upgrade.IAU.AUDIT_SCHEMA_PLUGIN]  Oracle Audit schema upgrade required
[2019-06-29T17:35:29.774+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin IAU.AUDIT_SCHEMA_PLUGIN enabled, isUpgradeRequired is true
[2019-06-29T17:35:29.775+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin IAU.AUDIT_SCHEMA_PLUGIN instance count: 1
[2019-06-29T17:35:29.776+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Libraries for FMWCONFIG.CIE_SCHEMA_PLUGIN
[2019-06-29T17:35:29.777+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/plugins/upgrade/com.oracle.cie.upgrade-plugin_1.3.0.0.jar
[2019-06-29T17:35:29.778+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: 1.3.0.0
[2019-06-29T17:35:29.844+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin FMWCONFIG.CIE_SCHEMA_PLUGIN enabled, isUpgradeRequired is true
[2019-06-29T17:35:29.844+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin FMWCONFIG.CIE_SCHEMA_PLUGIN instance count: 1
[2019-06-29T17:35:29.845+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin UCSUMS.UCSUMS_SCHEMA_PLUGIN enabled, found application usermessagingserver
[2019-06-29T17:35:29.845+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Libraries for UCSUMS.UCSUMS_SCHEMA_PLUGIN
[2019-06-29T17:35:29.847+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/plugins/upgrade/usermessaging-upgrade-plugin.jar
[2019-06-29T17:35:29.849+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/modules/thirdparty/features/jsch.jar
[2019-06-29T17:35:29.852+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: WIRELESS_MAIN_GENERIC_170502.1346.S
[2019-06-29T17:35:29.869+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin UCSUMS.UCSUMS_SCHEMA_PLUGIN instance count: 1
[2019-06-29T17:35:29.869+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin SOA.SOA1 enabled, found application soa-infra
[2019-06-29T17:35:29.869+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Libraries for SOA.SOA1
[2019-06-29T17:35:29.869+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/soa/plugins/upgrade/soainfra-plugin.jar
[2019-06-29T17:35:29.870+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: PCBPEL_MAIN_GENERIC_170820.1700.2557
[2019-06-29T17:35:29.876+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin SOA.SOA1 instance count: 1
[2019-06-29T17:35:29.876+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Libraries for WLS.WLS
[2019-06-29T17:35:29.876+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    /u01/oracle/middleware12213/oracle_common/plugins/upgrade/WebLogicPlugin.jar
[2019-06-29T17:35:29.877+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Label name: ftp://null.us.oracle.com/orkspace/weblogic_src122130/src122130_build/work/rsync_repo/1882952/all/oracle_common/plugins/upgrade/WebLogicPlugin.jar@1882952
[2019-06-29T17:35:29.892+00:00] [WLS] [NOTIFICATION] [upgrade.WLS.WLS]  Using version 12.2.1.0.0 for upgrade
[2019-06-29T17:35:29.901+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin WLS.WLS enabled, isUpgradeRequired is true
[2019-06-29T17:35:29.902+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin WLS.WLS instance count: 1
[2019-06-29T17:35:29.902+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin MDS.SCHEMA_UPGRADE instance 2 added
[2019-06-29T17:35:29.904+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin MDS.SCHEMA_UPGRADE instance 3 added
[2019-06-29T17:35:29.904+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin MDS.SCHEMA_UPGRADE instance 4 added
[2019-06-29T17:35:29.904+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Ended discovering components to be upgraded
[2019-06-29T17:35:29.946+00:00] [OPSS] [NOTIFICATION] [upgrade.OPSS.OPSS_SCHEMA_PLUGIN]  Detected jdbc/OpssDataSource as Oracle Platform Security Service datasource
[2019-06-29T17:35:29.955+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source named "opss-data-source" was found for schema OPSS
[2019-06-29T17:35:29.958+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_OPSS component OPSS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.025+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS component MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.077+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source named "EssDS" was found for schema ESS
[2019-06-29T17:35:30.078+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_ESS component ESS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.161+00:00] [IAU] [NOTIFICATION] [upgrade.IAU.AUDIT_SCHEMA_PLUGIN]  Detected jdbc/AuditAppendDataSource as Oracle Audit datasource
[2019-06-29T17:35:30.299+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_IAU component IAU: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.368+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source named "LocalSvcTblDataSource" was found for schema STB
[2019-06-29T17:35:30.370+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_STB component STB: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.437+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source named "OraSDPMDataSource" was found for schema UMS
[2019-06-29T17:35:30.439+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_UMS component UMS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.519+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source named "SOADataSource" was found for schema SOAINFRA
[2019-06-29T17:35:30.522+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_SOAINFRA component SOAINFRA: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.596+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Data source named "BamJobSchedDataSource" was found for schema WLS
[2019-06-29T17:35:30.597+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_WLS component WLS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.680+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS component MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.762+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS component MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.838+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS component MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:35:30.916+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  serverFingerprint=2019-02-27 01:26:25.340089,2019-03-03 02:31:28.774561,20
[2019-06-29T17:35:30.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  serverFingerprint=2019-02-27 01:26:25.340089,2019-03-03 02:31:28.774561,20
[2019-06-29T17:35:30.929+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  serverFingerprint=2019-02-27 01:26:25.340089,2019-03-03 02:31:28.774561,20
[2019-06-29T17:35:30.936+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  serverFingerprint=2019-02-27 01:26:25.340089,2019-03-03 02:31:28.774561,20
[2019-06-29T17:35:30.941+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  serverFingerprint=2019-02-27 01:26:25.340089,2019-03-03 02:31:28.774561,20
[2019-06-29T17:35:30.948+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  serverFingerprint=2019-02-27 01:26:25.340089,2019-03-03 02:31:28.774561,20
[2019-06-29T17:35:31.041+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Optional input OPSS.OPSS_SCHEMA_PLUGIN.OPSS is true
[2019-06-29T17:35:47.388+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin OPSS.OPSS_SCHEMA_PLUGIN instance 1 is enabled
[2019-06-29T17:35:47.388+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin MDS.SCHEMA_UPGRADE instance 1 is enabled
[2019-06-29T17:35:47.388+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin ESS.ESS_SCHEMA instance 1 is enabled
[2019-06-29T17:35:47.389+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin IAU.AUDIT_SCHEMA_PLUGIN instance 1 is enabled
[2019-06-29T17:35:47.389+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin FMWCONFIG.CIE_SCHEMA_PLUGIN instance 1 is enabled
[2019-06-29T17:35:47.389+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin UCSUMS.UCSUMS_SCHEMA_PLUGIN instance 1 is enabled
[2019-06-29T17:35:47.389+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin SOA.SOA1 instance 1 is enabled
[2019-06-29T17:35:47.389+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Plugin WLS.WLS instance 1 is enabled
[2019-06-29T17:36:14.439+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  User confirms prerequisite has been met: All affected servers are down.
[2019-06-29T17:36:14.440+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  User confirms prerequisite has been met: All affected data is backed up.
[2019-06-29T17:36:14.440+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  User confirms prerequisite has been met: Database version is certified by Oracle for Fusion Middleware upgrade
[2019-06-29T17:36:14.440+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  User confirms prerequisite has been met: Certification and system requirements have been met.
[2019-06-29T17:36:42.359+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component OPSS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:36:57.665+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_OPSS component OPSS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:36:57.811+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:37:14.273+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_MDS component MDS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:37:14.374+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component ESS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:37:34.824+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component ESS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:37:36.666+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_ESS component ESS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:37:36.833+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component IAU: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:38:08.514+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_ESS component ESS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:38:26.012+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component IAU: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:38:45.011+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_IAU component IAU: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:38:45.085+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component STB: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:39:30.752+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_STB component STB: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:39:30.891+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component UMS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:39:47.656+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_UMS component UMS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:39:47.814+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component SOAINFRA: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:40:09.951+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_SOAINFRA component SOAINFRA: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:40:10.043+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user system component WLS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:40:28.355+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Database connect string for user DEV_WLS component WLS: jdbc:oracle:thin:@//soadev:1521/ORCLPDB
[2019-06-29T17:40:28.568+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  s_maxThreads=4
[2019-06-29T17:40:28.591+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine components.
[2019-06-29T17:40:28.878+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Preparing concurrent plugin: OPSS.OPSS_SCHEMA_PLUGIN
[2019-06-29T17:40:28.895+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Preparing concurrent plugin: ESS.ESS_SCHEMA
[2019-06-29T17:40:28.888+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Preparing concurrent plugin: MDS.SCHEMA_UPGRADE
[2019-06-29T17:40:28.910+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting concurrent plugin: ESS.ESS_SCHEMA
[2019-06-29T17:40:28.906+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Preparing concurrent plugin: IAU.AUDIT_SCHEMA_PLUGIN
[2019-06-29T17:40:28.912+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting concurrent plugin: IAU.AUDIT_SCHEMA_PLUGIN
[2019-06-29T17:40:28.906+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting concurrent plugin: OPSS.OPSS_SCHEMA_PLUGIN
[2019-06-29T17:40:28.912+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running enabled plugin: ESS.ESS_SCHEMA
[2019-06-29T17:40:28.913+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running enabled plugin: OPSS.OPSS_SCHEMA_PLUGIN
[2019-06-29T17:40:28.911+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting concurrent plugin: MDS.SCHEMA_UPGRADE
[2019-06-29T17:40:28.913+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running enabled plugin: IAU.AUDIT_SCHEMA_PLUGIN
[2019-06-29T17:40:28.914+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running enabled plugin: MDS.SCHEMA_UPGRADE
[2019-06-29T17:40:28.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine OPSS.OPSS_SCHEMA_PLUGIN.
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine IAU.AUDIT_SCHEMA_PLUGIN.
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine MDS.SCHEMA_UPGRADE.
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Inputs to plugin for MDS.SCHEMA_UPGRADE
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Oracle Home /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    WebLogic Offline
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Domain Directory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Input MDS.SCHEMA_UPGRADE.SCHEMA.MDS
[2019-06-29T17:40:28.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Database type: Oracle Database
[2019-06-29T17:40:28.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Connection string: //soadev:1521/ORCLPDB
[2019-06-29T17:40:28.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      DBA name: system
[2019-06-29T17:40:28.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Schema name: DEV_MDS
[2019-06-29T17:40:28.923+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine ESS.ESS_SCHEMA.
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Inputs to plugin for ESS.ESS_SCHEMA
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Oracle Home /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    WebLogic Offline
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Domain Directory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Input ESS.ESS_SCHEMA.SCHEMA.ESS
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Database type: Oracle Database
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Connection string: //soadev:1521/ORCLPDB
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      DBA name: system
[2019-06-29T17:40:28.926+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Schema name: DEV_ESS
[2019-06-29T17:40:28.925+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database product version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
]]
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Inputs to plugin for OPSS.OPSS_SCHEMA_PLUGIN
[2019-06-29T17:40:28.924+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Inputs to plugin for IAU.AUDIT_SCHEMA_PLUGIN
[2019-06-29T17:40:28.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver name: Oracle JDBC driver
[2019-06-29T17:40:28.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Oracle Home /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:40:28.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database product version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
]]
[2019-06-29T17:40:28.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    WebLogic Offline
[2019-06-29T17:40:28.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver version: 12.2.0.1.0
[2019-06-29T17:40:28.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Oracle Home /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:40:28.928+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    WebLogic Offline
[2019-06-29T17:40:28.928+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Domain Directory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:40:28.931+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Input IAU.AUDIT_SCHEMA_PLUGIN.SCHEMA.IAU
[2019-06-29T17:40:28.931+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Database type: Oracle Database
[2019-06-29T17:40:28.931+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Connection string: //soadev:1521/ORCLPDB
[2019-06-29T17:40:28.931+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      DBA name: system
[2019-06-29T17:40:28.931+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Schema name: DEV_IAU
[2019-06-29T17:40:28.932+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database product version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
]]
[2019-06-29T17:40:28.933+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver name: Oracle JDBC driver
[2019-06-29T17:40:28.933+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver version: 12.2.0.1.0
[2019-06-29T17:40:28.928+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Domain Directory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:40:28.927+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver name: Oracle JDBC driver
[2019-06-29T17:40:28.934+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver version: 12.2.0.1.0
[2019-06-29T17:40:28.934+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Input OPSS.OPSS_SCHEMA_PLUGIN.SCHEMA.OPSS
[2019-06-29T17:40:28.935+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Database type: Oracle Database
[2019-06-29T17:40:28.935+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Connection string: //soadev:1521/ORCLPDB
[2019-06-29T17:40:28.935+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      DBA name: system
[2019-06-29T17:40:28.935+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Schema name: DEV_OPSS
[2019-06-29T17:40:28.950+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database product version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
]]
[2019-06-29T17:40:28.950+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver name: Oracle JDBC driver
[2019-06-29T17:40:28.951+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver version: 12.2.0.1.0
[2019-06-29T17:40:29.002+00:00] [OPSS] [NOTIFICATION] [OPSSUPG-05502] [upgrade.OPSS.OPSS_SCHEMA_PLUGIN]  Oracle Platform Security Services schema version 12.2.1.0.0 is latest one. Upgrade is not required.
[2019-06-29T17:40:29.018+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component OPSS, newversion=12.2.1.0.0
[2019-06-29T17:40:29.070+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ID=OPSS, Schema=DEV_OPSS, Schema version=12.2.1.0.0, Status=VALID, Upgraded=false
[2019-06-29T17:40:29.071+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining OPSS.OPSS_SCHEMA_PLUGIN with status: ALREADY_UPGRADED.
[2019-06-29T17:40:29.071+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished concurrent plugin: OPSS.OPSS_SCHEMA_PLUGIN
[2019-06-29T17:40:29.073+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Progress bar updated 2 times, status text 1 times, for plugin OPSS.OPSS_SCHEMA_PLUGIN executing phase examine
[2019-06-29T17:40:29.080+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Preparing concurrent plugin: FMWCONFIG.CIE_SCHEMA_PLUGIN
[2019-06-29T17:40:29.080+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting concurrent plugin: FMWCONFIG.CIE_SCHEMA_PLUGIN
[2019-06-29T17:40:29.082+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running enabled plugin: FMWCONFIG.CIE_SCHEMA_PLUGIN
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine FMWCONFIG.CIE_SCHEMA_PLUGIN.
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Inputs to plugin for FMWCONFIG.CIE_SCHEMA_PLUGIN
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Oracle Home /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    WebLogic Offline
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Domain Directory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Input FMWCONFIG.CIE_SCHEMA_PLUGIN.SCHEMA.STB
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Database type: Oracle Database
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Connection string: //soadev:1521/ORCLPDB
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      DBA name: system
[2019-06-29T17:40:29.084+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Schema name: DEV_STB
[2019-06-29T17:40:29.085+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database product version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
]]
[2019-06-29T17:40:29.085+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver name: Oracle JDBC driver
[2019-06-29T17:40:29.085+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver version: 12.2.0.1.0
[2019-06-29T17:40:29.146+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component STB, newversion=12.2.1.3.0
[2019-06-29T17:40:29.192+00:00] [ESS] [NOTIFICATION] [upgrade.ESS.ESS_SCHEMA]  Oracle Enterprise Scheduler schema internal version 3.0.0.12_0.2, status VALID.
[2019-06-29T17:40:29.203+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ID=STB, Schema=DEV_STB, Schema version=12.1.3.0.0, Status=VALID, Upgraded=false
[2019-06-29T17:40:29.204+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining FMWCONFIG.CIE_SCHEMA_PLUGIN with status: SUCCESS.
[2019-06-29T17:40:29.204+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished concurrent plugin: FMWCONFIG.CIE_SCHEMA_PLUGIN
[2019-06-29T17:40:29.204+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Progress bar updated 2 times, status text 1 times, for plugin FMWCONFIG.CIE_SCHEMA_PLUGIN executing phase examine
[2019-06-29T17:40:29.205+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Preparing concurrent plugin: UCSUMS.UCSUMS_SCHEMA_PLUGIN
[2019-06-29T17:40:29.205+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting concurrent plugin: UCSUMS.UCSUMS_SCHEMA_PLUGIN
[2019-06-29T17:40:29.205+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running enabled plugin: UCSUMS.UCSUMS_SCHEMA_PLUGIN
[2019-06-29T17:40:29.212+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine UCSUMS.UCSUMS_SCHEMA_PLUGIN.
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Inputs to plugin for UCSUMS.UCSUMS_SCHEMA_PLUGIN
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Oracle Home /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    WebLogic Offline
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Domain Directory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Input UCSUMS.UCSUMS_SCHEMA_PLUGIN.SCHEMA.UMS
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Database type: Oracle Database
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Connection string: //soadev:1521/ORCLPDB
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      DBA name: system
[2019-06-29T17:40:29.213+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Schema name: DEV_UMS
[2019-06-29T17:40:29.214+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database product version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
]]
[2019-06-29T17:40:29.215+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver name: Oracle JDBC driver
[2019-06-29T17:40:29.215+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver version: 12.2.0.1.0
[2019-06-29T17:40:29.320+00:00] [UCSUMS] [NOTIFICATION] [SDP-25852] [upgrade.UCSUMS.UCSUMS_SCHEMA_PLUGIN]  Oracle User Messaging Service schema version 12.2.1.0.0 is the latest one. Upgrade is not required.
[2019-06-29T17:40:29.321+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component UCSUMS, newversion=12.2.1.0.0
[2019-06-29T17:40:29.394+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component IAU, newversion=12.2.1.2.0
[2019-06-29T17:40:29.401+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ID=UCSUMS, Schema=DEV_UMS, Schema version=12.2.1.0.0, Status=VALID, Upgraded=false
[2019-06-29T17:40:29.401+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining UCSUMS.UCSUMS_SCHEMA_PLUGIN with status: ALREADY_UPGRADED.
[2019-06-29T17:40:29.401+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished concurrent plugin: UCSUMS.UCSUMS_SCHEMA_PLUGIN
[2019-06-29T17:40:29.401+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Progress bar updated 2 times, status text 1 times, for plugin UCSUMS.UCSUMS_SCHEMA_PLUGIN executing phase examine
[2019-06-29T17:40:29.402+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Preparing concurrent plugin: SOA.SOA1
[2019-06-29T17:40:29.402+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting concurrent plugin: SOA.SOA1
[2019-06-29T17:40:29.402+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running enabled plugin: SOA.SOA1
[2019-06-29T17:40:29.405+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine SOA.SOA1.
[2019-06-29T17:40:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Inputs to plugin for SOA.SOA1
[2019-06-29T17:40:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Oracle Home /u01/oracle/middleware12213/soa
[2019-06-29T17:40:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    WebLogic Offline
[2019-06-29T17:40:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Domain Directory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:40:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Input SOA.SOA1.SCHEMA.SOAINFRA
[2019-06-29T17:40:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Database type: Oracle Database
[2019-06-29T17:40:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Connection string: //soadev:1521/ORCLPDB
[2019-06-29T17:40:29.406+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      DBA name: system
[2019-06-29T17:40:29.407+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Schema name: DEV_SOAINFRA
[2019-06-29T17:40:29.407+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database product version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
]]
[2019-06-29T17:40:29.407+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver name: Oracle JDBC driver
[2019-06-29T17:40:29.408+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver version: 12.2.0.1.0
[2019-06-29T17:40:29.452+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ID=IAU, Schema=DEV_IAU, Schema version=12.2.1.0.0, Status=VALID, Upgraded=false
[2019-06-29T17:40:29.452+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component SOAINFRA, newversion=12.2.1.3.0
[2019-06-29T17:40:29.454+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining IAU.AUDIT_SCHEMA_PLUGIN with status: SUCCESS.
[2019-06-29T17:40:29.454+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished concurrent plugin: IAU.AUDIT_SCHEMA_PLUGIN
[2019-06-29T17:40:29.467+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Progress bar updated 2 times, status text 1 times, for plugin IAU.AUDIT_SCHEMA_PLUGIN executing phase examine
[2019-06-29T17:40:29.467+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Preparing concurrent plugin: WLS.WLS
[2019-06-29T17:40:29.468+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting concurrent plugin: WLS.WLS
[2019-06-29T17:40:29.475+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Running enabled plugin: WLS.WLS
[2019-06-29T17:40:29.480+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Starting to examine WLS.WLS.
[2019-06-29T17:40:29.480+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Inputs to plugin for WLS.WLS
[2019-06-29T17:40:29.480+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Oracle Home /u01/oracle/middleware12213/oracle_common
[2019-06-29T17:40:29.480+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    WebLogic Offline
[2019-06-29T17:40:29.480+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Domain Directory: /u01/oracle/middleware12213/user_projects/domains/base_domain
[2019-06-29T17:40:29.480+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Input WLS.WLS.SCHEMA.WLS
[2019-06-29T17:40:29.480+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Database type: Oracle Database
[2019-06-29T17:40:29.480+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Connection string: //soadev:1521/ORCLPDB
[2019-06-29T17:40:29.481+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      DBA name: system
[2019-06-29T17:40:29.481+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]      Schema name: DEV_WLS
[2019-06-29T17:40:29.483+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database product version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
]]
[2019-06-29T17:40:29.483+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver name: Oracle JDBC driver
[2019-06-29T17:40:29.484+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]    Database driver version: 12.2.0.1.0
[2019-06-29T17:40:29.521+00:00] [WLS] [NOTIFICATION] [upgrade.WLS.WLS]  Oracle WebLogic Server schema has already been upgraded.
[2019-06-29T17:40:29.532+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ID=SOAINFRA, Schema=DEV_SOAINFRA, Schema version=12.2.1.0.0, Status=VALID, Upgraded=false
[2019-06-29T17:40:29.532+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining SOA.SOA1 with status: SUCCESS.
[2019-06-29T17:40:29.533+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished concurrent plugin: SOA.SOA1
[2019-06-29T17:40:29.536+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component WLS, newversion=12.2.1.0.0
[2019-06-29T17:40:29.537+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Progress bar updated 2 times, status text 1 times, for plugin SOA.SOA1 executing phase examine
[2019-06-29T17:40:29.596+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ID=WLS, Schema=DEV_WLS, Schema version=12.2.1.0.0, Status=VALID, Upgraded=false
[2019-06-29T17:40:29.596+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining WLS.WLS with status: ALREADY_UPGRADED.
[2019-06-29T17:40:29.596+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished concurrent plugin: WLS.WLS
[2019-06-29T17:40:29.597+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Progress bar updated 2 times, status text 1 times, for plugin WLS.WLS executing phase examine
[2019-06-29T17:40:31.145+00:00] [ESS] [ERROR] [ESS-01011] [upgrade.ESS.ESS_SCHEMA]  The logon DBA user does not have sufficient privileges to upgrade the target Oracle Enterprise Scheduler schema user
[2019-06-29T17:40:31.146+00:00] [ESS] [ERROR] [upgrade.ESS.ESS_SCHEMA]  Cause: The logon DBA user must be able to grant execute privilege on certain DBMS packages to other users for upgrading the schema user privileges in target Oracle Enterprise Scheduler schema. The logon DBA user does not have sufficient privileges to perform actions needed to upgrade the target Oracle Enterprise Scheduler schema.
[2019-06-29T17:40:31.146+00:00] [ESS] [ERROR] [upgrade.ESS.ESS_SCHEMA]  Action: Logon as SYSDBA or other DBA users with sufficient privileges and run the Assistant again.
[2019-06-29T17:40:31.155+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ESS, newversion=12.2.1.3.0
[2019-06-29T17:40:31.228+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ID=ESS, Schema=DEV_ESS, Schema version=12.2.1.0.0, Status=VALID, Upgraded=false
[2019-06-29T17:40:31.229+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining ESS.ESS_SCHEMA with status: FAILURE.
[2019-06-29T17:40:31.229+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished concurrent plugin: ESS.ESS_SCHEMA
[2019-06-29T17:40:31.237+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Progress bar updated 2 times, status text 1 times, for plugin ESS.ESS_SCHEMA executing phase examine
[2019-06-29T17:40:31.591+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component MDS, newversion=12.2.1.3.0
[2019-06-29T17:40:31.609+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Component ID=MDS, Schema=DEV_MDS, Schema version=12.2.1.0.0, Status=VALID, Upgraded=false
[2019-06-29T17:40:31.609+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining MDS.SCHEMA_UPGRADE with status: SUCCESS.
[2019-06-29T17:40:31.609+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished concurrent plugin: MDS.SCHEMA_UPGRADE
[2019-06-29T17:40:31.610+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Progress bar updated 2 times, status text 1 times, for plugin MDS.SCHEMA_UPGRADE executing phase examine
[2019-06-29T17:40:31.611+00:00] [Framework] [NOTIFICATION] [upgrade.Framework]  Finished examining components.
]]>
<![CDATA[ Another ode to Ghost ]]> https://chronicler.tech/another-ode-to-ghost/ 5d174c0c8051b64e88570c46 Sun, 30 Jun 2019 08:00:00 -0400 As a newbie webmaster and regular blogger, you may want to know whether your recent composition has climbed the Google charts. Our site is still not even close to the top, but we're working on it.

While I played with search, Google (the engine, not Alphabet, Inc.) gave me a tip: "Hey, looks like it's your domain! Why don't you try Google Search Console?" And so I did. The setup process is quite easy, especially if you host your domains with Google and use the same account for the Search Console. The application is friendly to newcomers and offers a hint for every panel and control.

At one point it suggested using a sitemap to improve the indexing process. The Console accepts numerous formats, and one of them is an RSS/Atom feed. Oh, that's nice, because I had added RSS to the site template using the Ghost documentation as a guide. It didn't work well: the Console didn't find all the pages and reported incorrect post formats. It works just fine for my old Google Blogger-based site, though, so I went back to searching.

If you work in IT long enough, you know that it's really hard to run into a genuine problem. Somebody, someday, has been there, found a solution, and tweeted it to the world.
This time, the search brought up a post explaining that the blog engine has built-in sitemap functionality out of the box.

All you need with Ghost is a link similar to http://your.ghost-blog.site/sitemap.xml.

Ghost sitemap in the Google Search console

Two takeaways from the story:

  • Don't overcomplicate things; read the documentation first.
  • It's another point for Ghost as a blogging platform.
]]>
<![CDATA[ Blockchain for Newbies (3 of 5) ]]> https://chronicler.tech/blockchain-for-newbies-3/ 5d165f708051b64e88570c08 Fri, 28 Jun 2019 14:47:42 -0400 Some of the key benefits of leveraging blockchain are that it's distributed (across multiple nodes in the network), immutable (tamper-resistant), auditable (through the use of a linked chain), secure (by virtue of advanced cryptography), and transparent (can be looked up and validated).

________________________________________

Click here for Part 4 of 5.

]]>
<![CDATA[ List as a parameter ]]> https://chronicler.tech/ansible-handle-lists/ 5d13bd5c8051b64e88570aa4 Thu, 27 Jun 2019 06:09:00 -0400 About a month ago, I developed an Ansible role to manage OPSS application policies. By the original design, my role takes two lists, application role names and principals, and then assigns each role to each principal.

Something similar to:

- name: Grant application roles
  include_role:
    name: opss-grant
  vars:
    app_name: soa-infra
    app_role:
      - SOAAdmin
      - SOADesigner
    principal:
      - weblogic
      - operator
Working code with lists

For a while, it worked just fine, up to the moment when I used my role this way:

- name: Grant application roles
  include_role:
    name: opss-grant
  vars:
    app_name: OAMAdmin
    app_role:
      - USMAdmin
      - USMViewer
    principal: weblogic
Quite an unexpected outcome.

This code looks just as legit as the previous one, except for the fact that Ansible/Python treats the string "weblogic" as a list of characters, so this time it produces operations such as grant('OAMAdmin','USMAdmin','w'), grant('OAMAdmin','USMViewer','w'), and so forth.

Well, not a big deal, just another bug in the code. There are two possible solutions:

  • The obvious one: Name it as a feature and document it as "Don't hold it this way™."
  • The desired one: Follow the Ansible way, which means the role should handle strings and lists equally.

Fortunately, Python is all about strings and lists, so the solution is very simple. The current version doesn't work with the parameters directly; instead, it uses two internal lists, initialized with code similar to this:

# Role variables 
# opss-grant/vars/main.yml

role_list: "{{ [app_role] |flatten }}"
prcpl_list: "{{ [principal] |flatten }}"
internal list declaration

Now the role always works with a list: a single string is wrapped into a one-element list, and if you pass a list, the flatten filter converts it back to a one-level array.
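
To see the same idea outside of Ansible, here is a minimal Python sketch of what wrapping a value in a list and flattening it does; the normalize helper below is hypothetical and only illustrates the "[x] | flatten" behavior, it is not part of the role:

# Hypothetical helper that mimics the "[value] | flatten" trick from the role variables.
def normalize(value):
    wrapped = [value]          # wrap whatever we got in a list
    flat = []
    for item in wrapped:
        if isinstance(item, list):
            flat.extend(item)  # a list parameter is unwrapped back to its items
        else:
            flat.append(item)  # a plain string becomes a single-element list
    return flat

print(normalize("weblogic"))                # ['weblogic']
print(normalize(["weblogic", "operator"]))  # ['weblogic', 'operator']
Illustration only: strings and lists end up as the same one-level list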

]]>
<![CDATA[ Blockchain for Newbies (2 of 5) ]]> https://chronicler.tech/blockchain-for-newbies-2/ 5d117f238051b64e88570a55 Wed, 26 Jun 2019 22:12:00 -0400 Let's take the example of the buying and selling of land. How would you maintain this information to ensure that there is an unalterable audit trail?

Traditional Database

In a traditional database, this can be represented as a table as shown here. Every transaction is logged as a separate entry.

Traditional Database (CRUD operations)

However, the records in this database can be manually manipulated by anyone with underlying access to the database. In fact, it's likely that any malicious changes to these records may even go undetected.

Manipulating data in a traditional database can often go undetected

Blockchain

Now in a blockchain, every block has a block header and block data. The actual data itself is referred to as a transaction.

Each of these blocks is linked to the one before it by the previous block's hash, creating a chain.

Tamper-Resistant Blockchain (insert only)

For example, if someone tries to manually manipulate the transaction in the second block...

  • The hash linking the 3rd block back to the 2nd block would be broken.
  • The entire blockchain is distributed, so other nodes would flag this as a bad chain.
Manipulated data breaks the chain

Any chain deemed malicious or broken is flagged on the decentralized, distributed blockchain network.

Blockchain is a decentralized and distributed network of nodes
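
To make the hash-linking concrete, here is a minimal Python sketch (mine, not from the original post) that chains blocks by the previous block's hash and shows how tampering with one transaction invalidates the chain; real blockchains add consensus, timestamps, and much more:

# Illustrative sketch only: a tiny hash-linked chain of land transactions.
import hashlib
import json

def block_hash(block):
    # Hash the block deterministically (header + data).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, previous_hash):
    return {"previous_hash": previous_hash, "data": data}

genesis = make_block("Alice buys plot 1", previous_hash="0" * 64)
block2 = make_block("Alice sells plot 1 to Bob", previous_hash=block_hash(genesis))
block3 = make_block("Bob sells plot 1 to Carol", previous_hash=block_hash(block2))
chain = [genesis, block2, block3]

def is_valid(chain):
    # Every block must point at the hash of the block right before it.
    return all(chain[i]["previous_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(is_valid(chain))                              # True
chain[1]["data"] = "Alice sells plot 1 to Mallory"  # tamper with the 2nd block
print(is_valid(chain))                              # False: the link from block 3 is broken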

Terminology

Through this simple example, we learned a few key terms:

Block - Consists of header and data.

Blockchain - Comprises multiple chained blocks, linked together through hashes.

Block hash - The cryptographic hash of the current block.

Block header - Contains relevant metadata, such as hash, previous hash, date, etc.

Block data - Contains the actual transactional data.

Transaction - Stored in the block data; a single block can contain one or more transactions.

Node - Each blockchain sits on a node and is replicated to other nodes in the distributed network.

Immutable - Unable to be changed.

________________________________________

Click here for Part 3 of 5.

]]>
<![CDATA[ Blockchain for Newbies (1 of 5) ]]> https://chronicler.tech/blockchain-for-newbies-1/ 5d117e268051b64e88570a34 Tue, 25 Jun 2019 22:00:00 -0400 What is blockchain? Over a series of small blog posts, I'll try to provide quick 1-minute insights into blockchain.

Blockchain is:

  • Decentralized ledger or distributed ledger.
  • Managed by a peer-to-peer network of nodes.
  • Reduces the need for 3rd party intermediaries or middlemen.
  • Linked and secured using cryptography.
  • Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data.
  • Resistant to tampering or modification of data.
  • Tampering is detected and rejected, as the hash link is corrupted.
  • Bitcoin is the first use case leveraging blockchain.

________________________________________

Click here for Part 2 of 5.

]]>
<![CDATA[ Set log levels using WLST ]]> https://chronicler.tech/setting-log-levels-using-wlst/ 5d10fe468051b64e885709b2 Mon, 24 Jun 2019 16:02:08 -0400 Setting Fusion Middleware log levels is easily done on the EM Console. But there may be a need to script this instead.

Using WLST to set log levels:

Connect to WLST and get the runtime information:

export MW_HOME=/u01/oracle/middleware
cd $MW_HOME/oracle_common/common/bin
./wlst.sh
connect('weblogic','Welcome1','t3://soadev:7001')
domainRuntime()

List the existing loggers and their log levels:

listLoggers()

View the log level of a particular logger (e.g., oracle.soa.bpel):

getLogLevel(target='soa_server1',logger='oracle.soa.bpel')

Set the desired log level:

setLogLevel(target='soa_server1',logger='oracle.soa.bpel',level='TRACE:16',persist='0')
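
If you need to adjust several loggers at once, a small WLST loop works as well. This is only a sketch; the logger names, target, and level below are examples, and it assumes you are already connected and inside domainRuntime():

# Example sketch: bump a few loggers in one pass (run inside an existing WLST session).
loggers = ['oracle.soa.bpel', 'oracle.soa.mediator', 'oracle.soa.adapter']
for l in loggers:
    setLogLevel(target='soa_server1', logger=l, level='TRACE:16', persist='0')
    print('Set ' + l + ' to TRACE:16')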

Log levels:

Check out Setting the Level of Information Written to Log Files:

Example:

]]>
<![CDATA[ About syntax highlight ]]> https://chronicler.tech/code-highlight-on-site/ 5d0df9778051b64e8857070e Mon, 24 Jun 2019 09:30:00 -0400 A few days ago, Ahmed and I opened this site for public access. So far, I love the Ghost engine, and one of the killer features is built-in code block support. Let's see how you can make those blocks syntax aware.

Ghost and syntax highlighting are not a novelty at all. I took the same path that some other Ghost-busted bloggers have chosen. The final contenders, with no drama, were prism.js and highlight.js. So, if you take a look at the site code, you already know who the winner is. Like all the other syntax coloring libraries, both projects use JavaScript (I'll be doomed if somebody uses VisualBasic for their sites), both work through code injection at the site level, and both have CDN-based sources. They handle dedicated and inline code blocks (of course, you should put the mixed text into the box first).

Highlight.js has a great variety of themes and supports languages I hadn't heard of in my life. Another cool thing: you don't have to worry about which syntax you want to use; it does its best to detect and load the proper library.

The idea behind Prism.js is to be as lean and as clear as possible, so if you need something, import it explicitly. If you want to highlight lines or show line numbers, treat yourself and import the desired components. But that didn't tip the scale.

Prism won, two points ahead, for the following reasons:

  • In my clumsy hands, Prism.js shows better compatibility with the current Ghost version (2.23.4 as of now) and our Casper-based theme. Highlight.js does well with the dark themes but fails the light ones (all light themes render the right syntax on a pitch-black background).
  • With Highlight.js, I gave up on making custom highlights for individual posts through post-level injection. Yep, with Prism you can have a different theme and languages for every post.

As of today, I have the default Prism theme and five syntaxes: HTML/XML, YAML, Bash/shell, JavaScript, and CSS. It works all across the site; just specify the language you need. For example, check my previous post.


To make a post look different, like the one you are reading, take a few simple steps.

  1. In the post editor, click on the gear icon in the top right corner.
  2. Locate the "Code Injection" link at the bottom of the list.
  3. If you want to change the theme, add it to the Post Header field.
<link rel="stylesheet" type="text/css" href="//cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/themes/prism-okaidia.min.css"/>
  4. To handle a new syntax, put a source into the Post Footer field.
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-sql.min.js"></script>

Now, add your thoughts, code, pictures and have fun.

select 'Hello world!' from dual;
Dark theme and SQL with post-level injection
]]>
<![CDATA[ Get JDBC driver version ]]> https://chronicler.tech/get-jdbc-driver-version/ 5d0ced118051b64e885706d8 Sat, 22 Jun 2019 10:57:00 -0400 There are times you need to get the actual JDBC version (in ojdbc6.jar) for certification or compatibility purposes. The steps below describe how to do this.

export ORACLE_HOME=/u01/app/oracle/middleware
export TMPDIR=/tmp/jdbctemp
mkdir -p ${TMPDIR}
unzip ${ORACLE_HOME}/wlserver_10.3/server/lib/ojdbc6.jar -d ${TMPDIR}
cat ${TMPDIR}/META-INF/MANIFEST.MF | grep "Implementation-Version"

Sample Content of MANIFEST.MF

oracle@oraprod:/tmp/jdbctemp> cat META-INF/MANIFEST.MF
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.6.5
Created-By: 1.5.0_30-b03 (Sun Microsystems Inc.)
Implementation-Vendor: Oracle Corporation
Implementation-Title: JDBC
Implementation-Version: 11.2.0.3.0
Repository-Id: JAVAVM_11.2.0.3.0AS11.1.1.6.0_LINUX.X64_111104
Specification-Title: JDBC
Specification-Version: 4.0
Main-Class: oracle.jdbc.OracleDriver

Quicker Version

unzip -p ${ORACLE_HOME}/wlserver_10.3/server/lib/ojdbc6.jar META-INF/MANIFEST.MF | grep ^Implementation-Version
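
If unzip is not available on the host, a short Python snippet can read the same manifest entry; this is just a sketch, and the jar path below is an example:

# Sketch: read Implementation-Version straight from the jar with Python's zipfile module.
import zipfile

jar_path = '/u01/app/oracle/middleware/wlserver_10.3/server/lib/ojdbc6.jar'  # example path
with zipfile.ZipFile(jar_path) as jar:
    manifest = jar.read('META-INF/MANIFEST.MF').decode('utf-8', errors='replace')

for line in manifest.splitlines():
    if line.startswith('Implementation-Version'):
        print(line)  # e.g. Implementation-Version: 11.2.0.3.0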
]]>
<![CDATA[ Use emcli to download and install Oracle Management Agent 13c ]]> https://chronicler.tech/installing-oem/ 5d0c2efa8051b64e88570622 Fri, 21 Jun 2019 11:38:00 -0400 You can use emcli (Enterprise Manager Command Line Interface) to download any number of versions of the Oracle Management Agent from the Oracle Management Server (OMS).

The instructions in this blog post are specific to Oracle Enterprise Manager Cloud Control 13.3.0.

Download Agent from OMS using emcli

On the OMS host, login using emcli and do a sync:

export MW_HOME=/u01/app/oracle/middleware/
$MW_HOME/bin/emcli login -username=sysman
$MW_HOME/bin/emcli sync

Find out what platforms are included in your OMS to download:

oracle@oraprod:/home/oracle> emcli get_supported_platforms
=----------------------------------------------
Version = 13.3.0.0.0
 Platform = Linux x86-64
=----------------------------------------------
Platforms list displayed successfully.

Now you can download the appropriate agent platform and version:

emcli get_agentimage -destination=/tmp -platform="Linux x86-64" -version="13.3.0.0.0"

Copy the agent download file to your target host:

scp /tmp/13.3.0.0.0_AgentCore_226.zip oracle@targethost:/tmp

Install downloaded Agent on target host

Extract the Agent software on the target host:

cd /tmp
unzip 13.3.0.0.0_AgentCore_226.zip -d /u01/temp_install
cd /u01/temp_install

Install the Agent:

./agentDeploy.sh AGENT_BASE_DIR=/u01/oracle/agent13c -invPtrLoc /etc/oraInst.loc AGENT_PORT=3872 EM_UPLOAD_PORT=4903 OMS_HOST=omshostname ORACLE_HOSTNAME=targethostname AGENT_INSTANCE_HOME=/u01/oracle/agent13c/agent_inst AGENT_REGISTRATION_PASSWORD=welcome1 SCRATCHPATH=/tmp

Run root.sh as 'root':

/u01/oracle/agent13c/agent_13.3.0.0.0/root.sh

Startup and Shutdown the Agent

Commands to start, stop, and check status of the Agent:

export AGENT_HOME=/u01/oracle/agent13c/agent_13.3.0.0.0
$AGENT_HOME/bin/emctl start agent
$AGENT_HOME/bin/emctl stop agent
$AGENT_HOME/bin/emctl status agent
]]>
<![CDATA[ Firewall considerations for Google reCAPTCHA ]]> https://chronicler.tech/firewall-considerations-for-google-recaptcha/ 5d0bf5028051b64e885705d9 Thu, 20 Jun 2019 17:23:00 -0400 Some websites leverage Google's reCAPTCHA service as a means to stop bots from abusing the site. This blog post describes how to identify the outbound firewall IP addresses and ports that are needed.

What is reCAPTCHA?

  • A free service that protects your site from spam and abuse.
  • Uses advanced analysis techniques to tell humans and robots apart.
  • Comes in the form of a widget that you could easily add to a page.

There are essentially two approaches to opening up your outbound firewall: (1) allow outbound access to all Google IP addresses on ports 80 and 443 (see below), or (2) use a proxy server to control access to www.google.com.

Identifying Google IP Addresses and Ports

  1. Run the following commands from the servers that host the code requiring access to Google reCAPTCHA:
dig -t TXT _netblocks.google.com
dig -t TXT _netblocks2.google.com
dig -t TXT _netblocks3.google.com

Note that:

  • _netblocks.google.com <– lists IPv4 addresses
  • _netblocks2.google.com <– lists IPv6 addresses
  • _netblocks3.google.com <– lists IPv4 addresses

2. If you are opening up your firewall for IPv4 ports, then copy all IP subnets identified in the output, and open up ports 80 and 443 to them.
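
If you want to script the extraction instead of copying the subnets by hand, here is a hedged Python sketch; it shells out to dig, and the function name and parsing below are my own assumptions, not part of the original post:

# Hypothetical sketch: collect the ip4 CIDR blocks advertised in Google's netblock TXT records.
import subprocess

def google_ipv4_netblocks():
    cidrs = []
    for zone in ('_netblocks.google.com', '_netblocks2.google.com', '_netblocks3.google.com'):
        txt = subprocess.run(['dig', '+short', '-t', 'TXT', zone],
                             capture_output=True, text=True, check=True).stdout
        # Each answer looks like: "v=spf1 ip4:<cidr> ip4:<cidr> ... ~all"
        for token in txt.replace('"', '').split():
            if token.startswith('ip4:'):
                cidrs.append(token[len('ip4:'):])
    return cidrs

for cidr in google_ipv4_netblocks():
    print(cidr)  # open ports 80 and 443 to each of these subnets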

For example:

References

]]>
<![CDATA[ Oracle 18c XE on Linux Mint ]]> https://chronicler.tech/mint-oracle-18c-xe/ 5d041ea78051b64e88570307 Wed, 19 Jun 2019 21:32:00 -0400 When things go naturally, you select an operating system supported by the database of choice. Unfortunately, the only natural thing in our business is change. And somewhere in the middle of that, you find yourself with a task like "Hey, we need an Oracle database on the Linux Mint box." Ugh, that's a funky cocktail, and you may need my cookbook for that.

Start with the shopping list:

  • The OS is Linux x86-64, and if it's a RedHat-based system, you can stop right there and follow the Oracle installation guide.
  • You should have 1 GB+ of RAM; Oracle says 2 GB is even better.
  • Elevated access to the system. You can't do it without sudo access.
  • Basic knowledge of Bash and the Nano editor.
  • You need the Oracle XE 18c RPM package downloaded to your machine.
  • At least 10 GB of free space for package conversion and DB installation.

And if you meet all the requirements, go ahead and prepare your Linux.

  1. Update the APT package cache and install libaio; most likely you don't have it:
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install libaio*
  2. Install the alien package:
sudo apt-get install alien

If you haven't done it before, copy the database RPM package to the server filesystem. Any location should be okay, but make sure you have enough space for the pack/unpack operations. The RPM is massive, so it will take a decent amount of time for alien to convert it.

The alien command allows you to combine conversion and installation: don't do it. Otherwise, you will lose another half an hour if you decide to reinstall the database from scratch.

  1. If your database RPM is in the /opt/distr folder, the conversion commands are:
cd /opt/distr
alien  --scripts oracle-database-xe-18c-1.0-1.x86_64.rpm 
  2. Make sure that you see the new package and, if it's about 2.4 GB, delete the original binaries:
ls -la oracle*.deb
rm oracle-database-xe-18c-1.0-1.x86_64.rpm
  3. The installation is straightforward; just install your new .deb package:
dpkg -i oracle-database-xe-18c_1.0-2_amd64.deb

The installer will ask for the database administrator passwords. You could do a silent installation as described in the Oracle documentation, but don't follow the documentation further than that. We need to address a few issues before you can complete the database configuration.

  1. Make sure that you have an IPv4 record for your host in the /etc/hosts file. Without it, the network configuration wizard will be really upset. For my VM it's similar to:
10.10.10.10 lmde3.vb.mmikhail.com lmde3
  2. We know from the beginning that Debian-based systems can't pass the system validation check. To bypass it, make a copy of the installed init script and edit the copy:
cd /etc/init.d/
cp oracle-database-xe-18c oracle-database-xe-18c-cfg
nano oracle-database-xe-18c-cfg
  3. Add the parameter -J-Doracle.assistants.dbca.validate.ConfigurationParams=false to line 289, as below:
$SU -s /bin/bash  $ORACLE_OWNER -c "(echo '$ORACLE_PASSWORD'; echo '$ORACLE_PASSWORD'; echo '$ORACLE_PASSWORD') | $DBCA -silent -createDatabase -gdbName $ORACLE_SID -templateName $TEMPLATE_NAME -characterSet $CHARSET -createAsContainerDatabase $CREATE_AS_CDB -numberOfPDBs $NUMBER_OF_PDBS -pdbName $PDB_NAME -sid $ORACLE_SID -emConfiguration DBEXPRESS -emExpressPort $EM_EXPRESS_PORT -J-Doracle.assistants.dbca.validate.DBCredentials=false -sampleSchema true -J-Doracle.assistants.dbca.validate.ConfigurationParams=false $SQLSCRIPT_CONSTRUCT $DBFILE_CONSTRUCT $MEMORY_CONSTRUCT"
  4. Save the changes in the script. Another tweak addresses a missing library issue: the original link is broken, so I just linked an existing library from a different folder:
LD_LIBRARY_PATH=/opt/oracle/product/18c/dbhomeXE/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
cd /opt/oracle/product/18c/dbhomeXE/lib/
ln -s /opt/oracle/product/18c/dbhomeXE/inventory/Scripts/ext/lib/libclntshcore.so.18.1
chown oracle:oinstall libclntshcore.so.18.1
  5. Finally, configure the database:
/etc/init.d/oracle-xe-18c-cfg configure
....
# Follow the documentation and answer the questions 
# When setup is completed, remove the altered file. 
....
rm /etc/init.d/oracle-xe-18c-cfg
# Start Oracle XE database
/etc/init.d/oracle-xe-18c start

At this point you have the database and listener started. Enjoy your mint-fresh XE instance!

Oracle XE 18c EM Console
]]>
<![CDATA[ Welcome to our blog! ]]> https://chronicler.tech/welcome-to-our-blog/ 5d07f8498051b64e8857053a Mon, 17 Jun 2019 16:33:05 -0400 Powered by Ghost and hosted in our cloud infrastructure, we are happy to announce the go-live of our new blog, founded by Ahmed and Mikhail in 2019.

Here, we share our thoughts, solutions, and ideas on everything technology related.

]]>