Back to in-person – Conference contributions in September

Finally, we’re back to in-person events! September was a very busy month for me, with presentations at various conferences. I wanted to use this short blog post to collect the talks I delivered and link to the respective slides, in case you missed the presentations but are interested in the materials.

Digital Exchange Bergisches Rheinland 2022 (DIX 2022)

Promoting innovation, establishing networks, shaping transformation – that’s the overall goal of Digital Exchange, a regional one-day conference in the “Bergische Rheinland“ in the state of North Rhine-Westphalia. With over 700 attendees and 80 presentations, it was a fully packed day with lots of opportunities to learn new and interesting things, exchange ideas or simply meet people.

At DIX 2022, I delivered two sessions:

DOAG 2022

DOAG is the main conference of the German Oracle User Group. With about 1,000 participants, the largest Oracle user conference in Europe took place on site again, in the city of Nuremberg, after a two-year break. This year we had one special theme day shaped by the different communities within the user group, two classic conference days and one training day.

At DOAG 2022, I did the following presentations:

In addition, I was part of a panel discussion about challenges and experiences in Cloud transition projects.

Kong Summit 2022

My highlight so far this year was Kong Summit, which took place in San Francisco. At this two-day conference, with almost 500 attendees and 75 speakers delivering various sessions, I did one session about how to implement a consistent Observability (o11y) strategy in Microservices architectures without changing your service implementations. If you want to learn what happened at Kong Summit, you can read about it in my post on our OPITZ CONSULTING company blog.

For me personally, the event had a special surprise, as I was named Kong Champion of the year. This was a great honor and shows me that the community activities of the past months are valuable.


Helidon – Java-based Cloud-native application development

In my last post, I wrote about Cloud-native app development in the Oracle Cloud (or, more precisely, Oracle Cloud Infrastructure, OCI). From that, we now have a rough idea of Cloud-native development, the principles behind it and what OCI offers to support Cloud-native applications. In this post – as promised – I’ll give a brief introduction to Helidon, an open source framework that is mainly driven by Oracle.

Helidon – One framework, two implementation styles

Helidon is basically a set of libraries for implementing Java-based Microservices and Cloud-native apps. So far, so good, but what’s the differentiating factor, since there are – as the figure below shows – a lot of these frameworks around these days?

[Figure: Overview of Java Microservice frameworks]

Among these frameworks, we can distinguish Full-Stack frameworks, MicroProfile-based frameworks and Microframeworks.

The Full-Stack frameworks provide everything that is needed to implement a Microservice: accessing persistence services, implementing business logic, exposing a respective REST service, up to providing a respective UI. Spring Boot is the most popular framework in this category, coming with a huge ecosystem for implementing various use cases.

The next category of frameworks are the MicroProfile-based ones. MicroProfile is a community-driven specification for the development of Java-based Microservices. It is hosted by the Eclipse Foundation and comprises a collection of individual specifications, partly borrowed from the classic Java EE or Jakarta EE space. A huge advantage of the MicroProfile spec: it has no reference implementation – it’s just specs and interfaces. This makes the specification process lightweight and enables short release cycles. Scope and ecosystem are less pronounced in MicroProfile frameworks than in Full-Stack frameworks, which may impose limitations for certain use cases. On the other hand, this makes these frameworks more intuitive and easier for newcomers to get started with.

Last but not least, there are the so-called Microframeworks. They’re often characterized by a reactive, non-blocking architecture and are optimized for fast startup and processing times, so they’re also well suited for Serverless or Function-as-a-Service (FaaS) scenarios. Frameworks of this category (or at least their cores) usually dispense with any form of implicit “framework magic” like dependency injection. However, developers then take responsibility for certain tasks themselves, such as the correct initialization of class and object networks or the claiming and releasing of resources. Depending on the application scenario, this increases the amount of boilerplate code and the testing effort. On the other hand, these frameworks are extremely lightweight and flexible due to their few external dependencies.
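To illustrate the trade-off, here’s a minimal sketch in plain Java of what “taking responsibility for the object network” means: the object graph that a DI container would normally assemble is wired explicitly by the developer. All class names are made up for illustration:

```java
// Without a DI container, the developer wires the object graph by hand.
interface GreetingRepository {
    String findGreeting();
}

class InMemoryGreetingRepository implements GreetingRepository {
    @Override
    public String findGreeting() {
        return "Hello";
    }
}

class GreetingService {
    private final GreetingRepository repository;

    // Dependencies are passed explicitly instead of being injected.
    GreetingService(GreetingRepository repository) {
        this.repository = repository;
    }

    String greet(String name) {
        return repository.findGreeting() + ", " + name + "!";
    }
}

public class ManualWiring {
    public static void main(String[] args) {
        // Explicit construction replaces @Inject-style framework magic.
        GreetingService service = new GreetingService(new InMemoryGreetingRepository());
        System.out.println(service.greet("Microframework")); // prints "Hello, Microframework!"
    }
}
```

A few lines of extra boilerplate, but no hidden classpath scanning and nothing to slow down startup.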

With these categorizations in mind, and as can also be seen from the figure above, Helidon comes in two flavours: a MicroProfile-based one (Helidon MP) and a Microframework one (Helidon SE). This also answers the question about the differentiating factor of the framework.

Helidon framework architecture

To support the two implementation flavours, Helidon needs a corresponding framework architecture, which is depicted in the figure below.

[Figure: Helidon framework architecture – Helidon SE and Helidon MP on top of Netty]

The basis for both framework variants is Netty, an asynchronous, event-driven network application framework.

Helidon SE is the lightweight, reactive variant of the framework and consists of a reactive web server plus features for flexible configuration and security. Helidon SE supports a functional programming model. The web server is characterised by a simple, functional routing model and provides support for OpenTracing, Metrics and Health checks. So this variant of the framework is a perfect fit for implementing reactive REST-based services.
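To give a feel for the functional style, here’s a minimal sketch of a Helidon SE service, assuming the Helidon SE 1.x web server dependency (`io.helidon.webserver`) is on the classpath; the route and greeting are of course illustrative:

```java
import io.helidon.webserver.Routing;
import io.helidon.webserver.WebServer;

public class GreetMain {
    public static void main(String[] args) {
        // Functional routing: no annotations, no dependency injection.
        Routing routing = Routing.builder()
                .get("/greet", (req, res) -> res.send("Hello from Helidon SE!"))
                .build();

        // Startup is non-blocking and completes asynchronously.
        WebServer.create(routing)
                .start()
                .thenAccept(server ->
                        System.out.println("Server is up: http://localhost:" + server.port()));
    }
}
```

The whole service is just a routing definition and a server start – nothing is hidden behind classpath scanning.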

Helidon MP is a MicroProfile implementation, currently supporting MicroProfile version 3.2, and is built upon the basic components of Helidon SE. This variant of the framework supports the Java EE specifications CDI, JAX-RS, JSON-B and JSON-P. In addition, Helidon MP is extensible via CDI extensions. Currently, such extensions are available for JPA, JTA and for accessing OCI resources like storage. So this variant of the framework can be used for more advanced use cases.
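For comparison, the same kind of service in Helidon MP uses the declarative JAX-RS/CDI programming model – a sketch, assuming the Helidon MP dependencies on the classpath (resource name and path are illustrative):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Helidon MP discovers this JAX-RS resource via CDI at startup.
@Path("/greet")
@ApplicationScoped
public class GreetResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return "Hello from Helidon MP!";
    }
}
```

Same underlying Netty-based server, but the programming model is annotation-driven instead of functional.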

With respect to flexibility, both framework variants provide REST as well as gRPC-style service exposure, where gRPC is an experimental feature in the current version of the framework.

From a runtime perspective, Helidon runs on top of JDK 8+. In addition, Helidon SE applications support GraalVM and can be compiled to GraalVM native images. The latter is very interesting, especially for lowering startup times and decreasing the resulting Docker image’s size.

Roadmap

Helidon is flying! The current version is 1.4.4, a minor bugfix release comes out nearly every month, and in parallel the team is working on the next major release, 2.0.0, which will drop JDK 8 support and move completely to JDK 11 APIs. Furthermore, the following features will be added to the framework:

  • Reactive DB Client implementation (Helidon SE)
  • New reactive Web Client
  • Helidon CLI
  • MicroProfile Reactive Streams Operators (Helidon MP)
  • MicroProfile Reactive Messaging (Helidon MP)
  • WebSocket support
  • jlink image support
  • Preliminary GraalVM native-image support for Helidon MP

Some of the aforementioned features have an experimental character and are subject to change in future releases. Please check out the release notes for Helidon 2.0.0-M1 and Helidon 2.0.0-M2. Within the release notes you can find some useful links to further blog posts explaining some of the features in depth.

Summary

Helidon is a very interesting framework when it comes to developing Cloud-native applications on top of the Java ecosystem.

From a developer’s point of view, Helidon’s approach is highly exciting: developers can decide on a variant depending on the use case. The basic framework remains unchanged; only the programming model changes. This makes development more efficient and offers more flexibility in implementing business requirements.

In upcoming blog posts, I’ll go into more detail about developing services using Helidon and also about how to run Cloud-native Helidon apps on top of Oracle Cloud Infrastructure.

Cloud-native app development with Oracle Cloud

Cloud-native is a very popular keyword nowadays. But is it just another hype topic? My personal opinion: no, it isn’t. Cloud-native development is essential for building sustainable, future-oriented architectures (in this context we also speak of evolutionary architectures) that can deal with volatile, rapidly changing requirements with respect to products and business models! But what does Cloud-native mean?

According to the definition of the Cloud Native Computing Foundation (CNCF), Cloud-native apps are loosely coupled, resilient, manageable and observable. To be able to react quickly to changed business requirements, a robust and consistent automation strategy (CI/CD) is needed. Technologically, Cloud-native apps rely massively on containerisation. Conceptually, such apps rely on modern concepts like Microservices, APIs, DevOps and the 12-factor app methodology.

To make it more concrete: Cloud-native apps are built with a Cloud-first mindset. In addition, those apps should not depend on specific tooling or a specific vendor, so that they can be deployed both in the Cloud and on-premises, as well as in the Cloud of vendor A or vendor B, without changing the implementation. Technologies like Kubernetes (and of course other technologies certified by the CNCF) are key for this.
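Factor III of the 12-factor methodology, for example, demands that configuration lives in the environment rather than in the build artifact – that’s what makes the same image deployable at vendor A, vendor B or on-premises. A short Java sketch (the variable name `DATABASE_URL` and the fallback value are purely illustrative):

```java
public class EnvConfig {

    // 12-factor apps read configuration from environment variables, so the
    // same build artifact runs unchanged on-premises or in any Cloud.
    static String getOrDefault(String key, String fallback) {
        String value = System.getenv(key);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // In production, the platform (e.g. Kubernetes) injects DATABASE_URL.
        String dbUrl = getOrDefault("DATABASE_URL", "jdbc:postgresql://localhost:5432/dev");
        System.out.println("Using database: " + dbUrl);
    }
}
```

Swapping environments then means changing a deployment manifest, never the code.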

From my perspective, the ideas behind Cloud-native should be the basis for any app that is developed nowadays!

Today, every Cloud vendor provides Cloud-native services; at the very least, all of them provide a managed Kubernetes offering. Looking specifically at the Oracle Cloud, developers are provided with a complete Cloud-native development stack.

[Figure: Oracle Cloud Infrastructure Cloud-native services overview]

The figure above shows what Oracle Cloud Infrastructure offers in the area of Cloud-native development. In the following, I’ll give a brief introduction to the services, with a special focus on the App development & Ops services.

As mentioned before, Oracle – like the other vendors – offers a managed Kubernetes offering called OKE. Supplementary to this, we have the Cloud Infrastructure Registry (OCIR), which is a private Docker registry. So images of apps to be deployed to OKE can be pushed to this registry instead of public Docker registries like Docker Hub.

With Oracle Functions there’s also a Serverless/FaaS offering. Functions are built using the Fn Project, which is a quite interesting project in itself. Fn allows developers to completely develop and test functions locally. This is possible due to the flexible architecture of the Fn runtime: function apps are wrapped in Docker images that can be managed in a Docker registry (e.g. OCIR) and executed within the Functions runtime. The server architecture allows it to run on the local development machine, in the local datacenter or in the Cloud – a very flexible approach.
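From a developer’s point of view, a function for the Fn Java runtime is, at its core, just a plain Java method – roughly what `fn init --runtime java` scaffolds. Here’s a sketch (class and method names follow the Fn boilerplate, shown without the FDK test harness):

```java
public class HelloFunction {

    // Fn invokes this method per request; input and output binding are
    // handled by the runtime, so the function stays a plain POJO.
    public String handleRequest(String input) {
        String name = (input == null || input.isEmpty()) ? "world" : input;
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        // Plain-Java invocation, which is exactly what makes local testing easy.
        System.out.println(new HelloFunction().handleRequest("OCI")); // prints "Hello, OCI!"
    }
}
```

Because the function is an ordinary class, it can be unit-tested without any cloud infrastructure and only gets wrapped into a Docker image at deploy time.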

To be able to access Functions provided in the Oracle Cloud, an HTTP endpoint needs to be securely exposed so that the function can only be called by authorized clients. To ensure that, the OCI API Gateway can be used. The gateway component is completely managed by Oracle; users just have to provision it, define the respective APIs and the corresponding policies, and use it. That’s it!

Provisioning of the gateway, the APIs and the policies can be fully automated using an Infrastructure as Code (IaC) approach with Terraform (which is also the case for most of the Cloud-native services). With respect to IaC, the Resource Manager service is provided, which supports you in provisioning all Cloud-native services.

In addition to the aforementioned services, Oracle also provides a Cloud-based development environment, the Developer Cloud Service. This service provides a complete development environment (with the exception of the IDE) and, amongst others, contains a Git repository, an artifact repository, Kanban boards, a Wiki and a build server. Setting up a new project can be done within 1-2 minutes.

With Oracle Helidon (to which I will give a brief introduction in an upcoming post), a Microservice development framework is available that can be used in two flavours: a MicroProfile-based approach and a more functional Microframework style. So the framework is very flexible in addressing different requirements.

From an Observability and Messaging perspective, the Logging and the Streaming services are the most relevant ones in my view. The Logging Service (which is currently in limited availability) provides centralized log management for the provisioned services within a user’s Cloud tenant. This kind of functionality is very important because of the distributed, loosely coupled way Cloud-native apps are usually built.

The Streaming Service is basically a Cloud-native, Kafka-compatible event hub implementation. It is designed for high throughput, with the intention of handling large data streams that may occur, for instance, in IoT scenarios.

As you can see, Oracle has a solid foundation to build and run Cloud-native applications in the Cloud. More details about the aforementioned services can be found on the official landing page.

In upcoming posts, I’ll dig a little deeper into the different services. I’ll also show how the services can be combined together to build the foundation of a Cloud-native runtime and development platform.

Oracle Open World Wrap-up: Autonomous Cloud platform to build intelligent Cloud Native apps

Oracle Open World and Code One are just over, so it’s the ideal time to reflect on what happened during the days at the conference. The big topics of this year’s conference were:

  • New Data centers
  • Autonomous Database enhancements
  • Autonomous Linux
  • Partnerships with Microsoft & VMware
  • Intelligent Apps development (Powered by ML/AI)
  • Cloud Native

In the following sections, let’s take a closer look at the aforementioned topics.

New Data centers

Oracle showed how aggressively they’re building out new Gen-2 data center regions to catch up with their main competitor, AWS.

[Figure: Oracle Cloud data center region roadmap]

By the end of this year, it is planned to have 19 data centers all over the world, for both commercial and governmental customers.

Autonomous Database enhancements

The Autonomous Database (ADB) is Oracle’s flagship product for data management in the Cloud. It is nothing new, but a lot of enhancements are currently happening or going to happen in the near future.

Besides features like automatic indexing, self-scaling and built-in Machine Learning capabilities, one of the most important messages was that ADB will evolve in the direction of a multi-model data management platform. Multi-model means that besides relational data, there will be support for JSON, key-value, graph, spatial and file data. In this context, it was announced that a new Autonomous JSON Database will be available, coexisting with Oracle Autonomous Transaction Processing and Autonomous Data Warehouse.

To run Production workloads in a secure and isolated way, Autonomous Database Dedicated was announced. This basically means that such database tenants are isolated and run on dedicated Exadata Cloud infrastructure. This model feels like a fully isolated private Cloud in the Public Cloud and comes with a guaranteed availability of 99.995%.

Since ADB will be the central data management platform within the Oracle Cloud, security is another hot topic. To further it, Oracle Data Safe was announced: the new, unified database security control center, which comes with features like security configuration assessment, user activity auditing and data masking. This new offering is free for Cloud databases and can also be used for on-premises databases (for those, you’re charged). With that you have a kind of hybrid, unified and central security control center, which is quite cool from my perspective.

One of the biggest announcements for sure was the Free Cloud Tier. This includes an always-free Autonomous Database (2 micro instances with 20 GB storage and 1 OCPU per instance). In addition, you get the full development experience, because it includes APEX, ORDS, SQL Developer Web and Machine Learning Notebooks. Having said that: APEX has finally arrived on the Autonomous Database. APEX is a low-code development platform for rapidly developing DB-centric applications.

Autonomous Linux

Taking the autonomous strategy to the next level, Oracle announced Autonomous Linux, along with the new Oracle OS Management Service. It is the first and only autonomous operating system offering and is intended to eliminate complexity and human error.

You can get more information about these new offerings in the official Press release: https://www.oracle.com/corporate/pressrelease/oow19-oracle-autonomous-linux-091619.html

Partnerships with Microsoft and VMware

It is good to see that Oracle is further opening up, embracing Open Source and collaborating with other companies. Just before Open World, a partnership with Microsoft was announced, and during Open World more details about this partnership were revealed, including:

  • Running MSSQL on Oracle Cloud
  • Running Oracle Ecosystem on Azure (Database, Apps, Linux, Java on Azure, WebLogic)
  • Oracle provides license mobility for Oracle Software from On-prem to Azure

Besides that, in some data centers Oracle Gen-2 infrastructure will be co-located with Azure infrastructure, which means less latency.

Oracle on VMware virtualization was always one of the most annoying topics for Oracle customers. Now Oracle and VMware have announced a partnership whereby customers will be enabled to run VMware workloads on Oracle Cloud; furthermore, Oracle will provide technical support for customers running Oracle products on VMware virtualization. Read more about this in the official VMware press release: https://www.vmware.com/company/news/releases/vmw-newsfeed.Oracle-and-VMware-Partner-to-Support-Customers-Hybrid-Cloud-Strategies.1916340.html

Intelligent Apps development

Intelligent apps are supposed to deliver a next-gen user experience by supporting users in an intelligent and convenient way.

In this area, the Oracle Digital Assistant (ODA) platform, which has officially been around since last year’s Open World, is one important component. ODA allows you to create intelligent, conversational apps. The intelligence is derived from existing data using AI and ML capabilities.

One big announcement for ODA was the upcoming support for voice. On top of that, it was announced that ODA will be enabled to understand specific enterprise vocabulary using semantic pattern matching, so that the bot is better able to understand specific user intents.

ODA is constantly evolving and is also used by Oracle’s SaaS offerings like HCM or ERP (pre-built skills). To allow ODA to connect and talk to other systems, Oracle Integration Cloud (OIC) can be used, which comes with out-of-the-box business accelerators. Those are basically pre-built integration recipes that can be used and adjusted to a certain use case if needed.

[Figure: Connecting ODA to other systems via Oracle Integration Cloud]

Content management for ODA solutions can be done in Oracle Content and Experience Cloud (OCE), which has evolved into a multi-channel, intelligent content hub.

Besides conversational apps powered by ODA, Progressive Web App development is also a very interesting and relevant topic, especially when it comes to efficient software development. For that, Oracle has the Visual Builder Cloud Service, accompanied by the Oracle Developer Cloud Service (which is basically free). One announcement here was that those two services will become more integrated (Visual Builder Studio/Platform) to further developer productivity and to provide an outstanding developer experience.

[Figure: Visual Builder Studio]

Cloud Native

Cloud Native was an omnipresent topic, especially at Oracle Code One. Here we had a lot of presentations with respect to Kubernetes, Microservice development, Function as a Service, reactive app development, etc. The interesting thing to see was that Java is still relevant and, as a programming language, is constantly evolving. In the Java keynote, the GA of Java 13 was announced. Furthermore, there are tons of Java-based frameworks for developing Microservices, such as MicroProfile-based ones like Helidon, as well as Quarkus and Micronaut.

From my perspective, new applications today should be developed using Cloud Native technologies, adhering to the respective design principles (12-factor app). So, I recommend making yourselves familiar with those concepts and technologies. A great source for that is the CNCF website (https://www.cncf.io).

Oracle itself has an impressive number of Cloud Native OCI services, like Event Streaming or Functions (based on Project Fn), which are constantly improved and integrated with each other. The idea is to build and run scalable apps in public, private and hybrid Clouds. The philosophy for those services is to provide a completely managed Cloud Native development stack based on leading Open Source technologies certified by the CNCF. In this area, new services were announced: the Logging Service, for centralized log management, and a native OCI API Gateway. The latter is a fully Oracle-managed API Gateway.

[Figure: OCI Cloud Native services]

By the way – What about On-prem?

Besides all the noise around the Cloud, it is good to see that the on-premises offering is evolving further in parallel. Shortly, we can expect a new patch set for WebLogic Server (12.2.1.4), which brings some enhancements and security fixes.

Furthermore, it was announced that WebLogic Server 14.1.1 is currently under development and will be out within this calendar year. This version is expected to fully support Jakarta EE 8 and to run on top of JDK 8 and JDK 13, respectively. There will also be support for Middleware components like SOA Suite or Service Bus with upcoming releases of WebLogic Server (14.1.2).

Summary

Puuuuh… A lot of stuff is obviously going on. Oracle is moving forward and has – at least from my perspective – a very strong vision of where the Cloud should go. I am glad to see that, especially with respect to the newly announced partnerships, the constantly evolving adoption of Open Source technologies, and the giving back to the Open Source community (Helidon, Oracle JET, Java EE -> Jakarta EE).


Quick steps for setting up a Gateway Node in Oracle API Cloud Service

Oracle API Platform Cloud Service (APIP CS) is an API Management Platform for covering the complete API lifecycle. A general overview about the solution is provided in one of my previous blog posts.

In this blog post, I’ll summarize the steps needed to set up a first API Gateway Node.

Logical Gateway and Gateway Nodes

Before getting started with the Gateway setup, a basic concept needs to be clarified.

Oracle APIP CS supports the concept of a so-called Logical Gateway, which depicts a logical configuration and management unit for several Gateway Nodes. A Gateway Node is a physical representation of an API Gateway. It is the runtime component where APIs are exposed to the outside world and where the defined API policies are enforced when an API is called by a client.

From a subscription perspective, the number of Logical Gateways is the relevant criterion with respect to the occurring costs, no matter how many Gateway Nodes are registered to a Logical Gateway.

Installation

Prerequisites

Before getting started with the installation, a respective Compute instance (OCI, AWS, Azure, on-premises) is needed, on which the Gateway Node will be deployed. In my case, I used an OCI Compute instance, which I set up using the OCI console. The general system requirements for the target machine can be found in the documentation.

Create needed users

As mentioned in the documentation, a prerequisite for the API Gateway deployment is the availability of the following two users:

  • Gateway Manager user, who is responsible for managing the Gateway and needs to be assigned to the Gateway Manager role
  • Gateway Runtime user, who is responsible for the interaction between Gateway Node and Management Service and needs to be assigned to the Gateway Runtime role

Those two users need to be created by an Identity Domain administrator using the User section in the Service Dashboard.

[Figure: User management in the Service Dashboard]

After user creation, the respective roles need to be assigned in the user’s details.

Define the Logical Gateway

In a first step, using the Oracle APIP CS Management Portal, I created a new Logical Gateway and named it “Development Gateway”.

In the Logical Gateway Nodes section, the Gateway Node installer can be downloaded.

[Figure: Logical Gateway – Nodes details]

In addition, the page provides the “Open installation wizard” button, which is useful to create an initial Gateway installation configuration (gateway-props.json) for the specific Logical Gateway.

In the Grants section of the Logical Gateway, the following grants need to be defined for the two previously created users:

  • Gateway Manager grant to the Gateway Manager user
  • Node Service account grant to the Gateway Runtime user

[Figure: Logical Gateway – Grants]


Install the Gateway

After downloading the Gateway installer, I copied it to my previously configured OCI Compute instance, connected to the instance via SSH and unzipped the installer to /u01/installer.

sudo mkdir -p /u01/apics
sudo mkdir -p /u01/installer

sudo chown -R opc /u01

unzip ApicsGatewayInstaller.zip -d /u01/installer

After that, I replaced the file /u01/installer/gateway-props.json with the one I created using the installation wizard in the APIP CS Management Portal.

Before the Gateway installation can be started, a valid Oracle JDK needs to be installed and the JAVA_HOME environment variable needs to be set appropriately.

sudo mkdir -p /usr/java
sudo curl -j -k -L -H "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm -o /usr/java/jdk-8u131-linux-x64.rpm

sudo rpm -ivh /usr/java/jdk-8u131-linux-x64.rpm

export JAVA_HOME=/usr/java/jdk1.8.0_131

After these preparation steps, the Gateway Node can be installed.

/u01/installer/APIGateway -f /u01/installer/gateway-props.json -a install-configure-start

During installation and configuration, you’re prompted for a WebLogic domain username (the WebLogic domain administrator), which will be created during this step. I called the user “weblogic” and chose a respective password.

Join the Gateway Node to the Logical Gateway

After the Gateway Node has been successfully installed and started, it needs to be registered with the previously created Logical Gateway.

/u01/installer/APIGateway -f /u01/installer/gateway-props.json -a join

While executing this step, you’re prompted for the usernames and passwords of the previously created Gateway Manager user and Gateway Runtime user.

In addition to the user credentials, the IDCS client credentials for APIP CS also need to be passed. Those credentials, namely the Client ID and the Client Secret, can be found in the Platform Settings section of the APIP CS Management Portal.

[Figure: Platform Settings with IDCS client credentials]

Approving the Gateway Node

After the Gateway has been joined successfully, it needs to be approved by a Gateway Manager, using the Management Portal.

[Figure: Gateway Node approval in the Management Portal]

After approving the Gateway, and before deploying the first API, the respective Load Balancer URLs need to be defined for the Logical Gateway instance. Since I just have one Gateway Node, I set them to the hostname of the Gateway Node.

[Figure: Logical Gateway settings – Load Balancer URLs]

Testing the API Gateway

For testing purposes, I created a test API against an httpbin.org mock service that replies with the passed status code. The API definition is super simple: it does a passthrough without further policy definitions.

[Figure: Test API definition]

To test the service quickly, I simply did an HTTP call via HTTPie.

http api-gw-01.oracle.com:8011/api/v1/test/204

This results in the following response:

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Length: 0
Content-Type: text/html; charset=utf-8
Date: Fri, 10 May 2019 06:38:35 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block

With that it’s proven that the Gateway has been deployed successfully and is working correctly.

Autonomous, intelligent and open Cloud – An Oracle Open World and Code One Wrap-up

Oracle Open World 2018 is over, so it’s time to take a step back and reflect on what happened during some interesting days fully packed with great and useful information.

Oracle Gen2 Cloud Infrastructure – the big thing in IaaS

Oracle Gen2 Cloud Infrastructure (OCI) is intended to deliver better performance (Compute, Memory, Block Storage, Network) and better pricing to customers than the Gen1 infrastructure.

From an architectural perspective Oracle’s new Cloud infrastructure is more than just a facelift, since it has been re-designed from the ground up.

[Figure: Oracle Cloud Gen1 vs. Gen2 architecture]

As the picture above shows, Oracle introduced a completely new tier: the Cloud Control Computers. These specific components, called the impenetrable barrier, run all Cloud control code. Before, the Cloud control code was co-located with customer code, which was suspected to be less secure and more vulnerable. The Cloud Control Computers surround the Oracle Cloud infrastructure to protect the Cloud as such, and additionally surround each customer zone. This leads to enhanced security and more data privacy.

In addition to the impenetrable barrier, Oracle introduced so-called Autonomous Robots that detect and kill potential threats automatically. To be able to identify those threats, the robots are empowered by Machine Learning algorithms and thereby protect the Oracle Gen2 Cloud infrastructure from attacks.

OCI is already available in most regions today and will also be available for Cloud@Customer in Summer 2019.

Oracle Autonomous Database

The Oracle Autonomous Database was already announced during last year’s Open World; now the vision seems to be complete. The Database can be used for implementing transaction-intense applications (OLTP) as well as analytics applications (OLAP, Oracle Autonomous Data Warehouse) and leverages the new OCI infrastructure. In the context of the Autonomous Database, autonomous robots are responsible for:

  • Provisioning
  • Scaling
  • Tuning (tuning is constantly applied)
  • Recovery
  • Patch & update
  • Fault-tolerant failover
  • Backup & Recovery

This way, the Database is supposed to be more stable and available (availability: 99.995%) and should allow developers and administrators to focus on more important questions with respect to data organisation and business logic.

From an architectural perspective, Oracle Autonomous Database is designed in a Serverless fashion, which means that customers only need to pay when data is actively processed. When the Servers are idle, nothing needs to be paid – with the exception of storage.

OCI Security announcements

The Security topic was very prominent this year. For the OCI infrastructure the following new announcements in this area were made:

  • Key Management Service – store & manage all encryption keys for all storage layers
  • Cloud Access Security Broker (CASB) – automated, continuous security monitoring and management (e.g. detecting configuration changes made by potential attackers)
  • Web Application Firewall – web application traffic inspection
  • Distributed Denial of Service Protection – automated DDoS attack detection and mitigation of high-volume layer 3 & 4 attacks

With that, the OCI offering becomes more secure and trustworthy, so customers need to worry less about data security in the Cloud.

News and noteworthy from the SaaS and PaaS space

Oracle SaaS and PaaS solutions benefit from the innovations in the Oracle Gen2 Cloud infrastructure, since the respective solutions run on top of the IaaS components.

Oracle SaaS

In the SaaS space, Oracle claims the market-leader position, especially for Cloud ERP. Oracle is working hard on bringing existing on-premises customers to Oracle Fusion SaaS and on making this journey as easy as possible. In addition, the move should happen at a very low cost and in the shortest possible period, which depends on the number of customisations built into the existing on-premises solution.

Talking about customisations, Larry Ellison said: “We love extensions, extensions are great! We have these great tools for extensions to our SaaS applications.”, and he further explained that customisations are not welcome. From a long-term maintenance perspective, this is comprehensible.

With great tools, Ellison refers, among other things, to the integration accelerators that can be used to integrate Fusion SaaS apps and the respective data with other applications. Regarding data integration and analytics of existing Fusion data, Oracle introduced the brand-new Fusion Analytics Data Warehouse, which is built upon the Oracle Autonomous Data Warehouse as well as the Oracle Analytics Cloud Service (PaaS) and is intended to make data analytics very easy and efficient – at the push of a button.

[Image: Oracle_Fusion_Warehouse.png]

Oracle PaaS

Machine Learning (ML) and Artificial Intelligence (AI) are very popular nowadays and were omnipresent in many presentations at Open World this year, as already mentioned in this post when talking about OCI and the concept of the autonomous Robots.

That Oracle takes the topic seriously also shows in the announced acquisition of DataFox, a Cloud-based AI data engine company, for undisclosed terms. The acquired technology will enhance Oracle Cloud Applications and the Data-as-a-Service offering.

“Machine learning is a technology as revolutionary as the internet” (Larry Ellison, CTO Oracle)

ML and AI technologies (and therefore the Autonomous Data Warehouse, which provides the data basis) are also the foundation of the newly announced Oracle Digital Assistant, the next evolutionary stage of Chatbots and Intelligent Bots.

Unlike the previous Chatbot and Intelligent Bot offerings, the Oracle Digital Assistant is a new standalone Service offering that combines diverse so-called skills for different business contexts under a common interface. This makes the user experience more consistent, since users have a single entry point from which to follow different user journeys, depending on their current context. Empowered by ML and AI, the Digital Assistant knows, by analysing the information provided by the user, which skill to use to fulfil the current request. From an interface perspective, Oracle provides an app, but also supports integration with existing services like Slack or Facebook Messenger. In addition, completely new Voice support is available, which allows integration with existing voice assistants like Siri or Alexa.
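
The single-entry-point idea can be illustrated with a tiny, purely hypothetical sketch: one router inspects an utterance and dispatches it to the matching skill. The real Digital Assistant uses ML-based intent classification; the keyword matching and skill names below are invented for illustration only.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Purely illustrative sketch of the skill-routing idea behind a digital
// assistant: one entry point dispatches an utterance to the matching
// "skill". The real Oracle Digital Assistant uses ML-based intent
// classification; the keyword lists and skill names here are invented.
public class SkillRouter {

    private static final Map<String, List<String>> SKILLS = new LinkedHashMap<>();
    static {
        SKILLS.put("banking",  List.of("balance", "transfer", "account"));
        SKILLS.put("hr",       List.of("vacation", "payslip", "leave"));
        SKILLS.put("expenses", List.of("expense", "receipt", "reimburse"));
    }

    /** Returns the name of the skill that should handle the utterance. */
    static String route(String utterance) {
        String normalized = utterance.toLowerCase();
        for (Map.Entry<String, List<String>> skill : SKILLS.entrySet()) {
            for (String keyword : skill.getValue()) {
                if (normalized.contains(keyword)) {
                    return skill.getKey();
                }
            }
        }
        return "smalltalk"; // fallback skill for everything else
    }

    public static void main(String[] args) {
        System.out.println(route("How much vacation do I have left?")); // hr
    }
}
```

The user never picks a skill explicitly; the context of the request decides which user journey is started.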

With respect to Oracle Integration Cloud (OIC), we'll see some new innovations that are also driven by ML and AI. For example, in the Process space there'll be support for Dynamic Business Rules and next-best-action suggestions for dynamic processes, and in the Integration space integrations can be built more efficiently thanks to intelligent recommendations for data mappings.

A new kid on the block in the Process and Integration space is Robotic Process Automation (RPA), where application integration is done by so-called Robots (not to be confused with the autonomous Robots used by OCI) that basically leverage the UI capabilities of an existing application to realise a certain integration scenario. The RPA technology can be used in cases where no appropriate API is available and integrations need to be established quickly. To implement RPA-based integrations, a developer basically defines a UI Flow, similar to a screencast, which is then replayed by the Robot.
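
Conceptually, such a UI Flow is just a recorded sequence of UI steps that the Robot plays back. The following sketch models this replay idea; the step types and the in-memory log are invented for illustration, while real RPA tools drive actual application user interfaces.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the RPA idea: a "UI flow" is a recorded list of
// UI steps that a Robot replays against an application's user interface.
// Step types and the logging target are invented for illustration; real
// RPA tools drive actual application UIs instead of writing to a log.
public class UiFlowRobot {

    record Step(String action, String target, String value) {}

    private final List<String> log = new ArrayList<>();

    /** Replays the recorded flow step by step, like a screencast. */
    void replay(List<Step> flow) {
        for (Step step : flow) {
            switch (step.action()) {
                case "open"  -> log.add("open " + step.target());
                case "type"  -> log.add("type '" + step.value() + "' into " + step.target());
                case "click" -> log.add("click " + step.target());
                default      -> throw new IllegalArgumentException(step.action());
            }
        }
    }

    List<String> log() { return log; }

    public static void main(String[] args) {
        UiFlowRobot robot = new UiFlowRobot();
        robot.replay(List.of(
                new Step("open",  "invoice form", null),
                new Step("type",  "amount field", "42.00"),
                new Step("click", "submit button", null)));
        robot.log().forEach(System.out::println);
    }
}
```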

For developing and running the Robots, Oracle has established a cooperation with UiPath, a leading company in the RPA space. At Open World, Oracle announced a new OIC RPA Adapter that can be used to easily integrate with UiPath's RPA solution, which makes the development of such solutions more efficient.

Cloud-native application development

Cloud-native application development denotes a modern approach to building and running applications by exploiting the advantages delivered by the Cloud and by emerging technologies for modern application development. Cloud-native applications embrace the 12-factor principles, integrate concepts like DevOps and Continuous Delivery, and are often built on container technologies.

Oracle also implements some of its own Cloud offerings following Cloud-native principles. While doing so, it also shares technologies and frameworks with the Open-Source community, like the Oracle JET framework (the standard UI framework used for the Cloud UIs). With the Fn Project, Oracle last year open-sourced a framework for building Functions-as-a-Service (FaaS) applications, which are Docker-based and can therefore be executed in a vendor-agnostic way.
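
On the Java side, an Fn function is essentially a plain class with a public handler method that the runtime packages into a Docker container and invokes per request. The class and method names below are our own choice for illustration (Fn's configuration would point at them), not prescribed by the project.

```java
// Sketch of a FaaS function in the style the Fn Project uses on the Java
// side: a plain class with a public handler method; the Fn runtime
// packages it into a Docker container and maps the request payload to the
// parameter. Class and method names are our own choice for illustration.
public class HelloFunction {

    public String handleRequest(String input) {
        String name = (input == null || input.isBlank()) ? "world" : input.trim();
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        // Locally, the handler is just an ordinary method call:
        System.out.println(new HelloFunction().handleRequest("Fn"));
    }
}
```

Because the function body has no framework dependency, the same logic could in principle run on any Docker-based FaaS platform, which is the vendor-agnostic point made above.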

At this year's Open World, Oracle introduced a framework that was open-sourced just before the conference: Helidon, a framework for implementing Microservices. It comes in two different flavours: MicroFramework, a lightweight, function-based variant, and MicroProfile, which supports MicroProfile version 1.1 and therefore comes with support for Java EE features. This makes Helidon a valid alternative to Spring Boot when it comes to Microservices implementation on a Java basis.
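
To give an impression of the lightweight, handler-per-route style that the MicroFramework flavour targets, here is a hedged sketch of a minimal HTTP microservice. Note that it deliberately uses the JDK's built-in `com.sun.net.httpserver` server so the example is self-contained; Helidon ships its own, more capable WebServer and Routing API in a similar functional style.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal function-style HTTP microservice. This uses the JDK's built-in
// com.sun.net.httpserver server to stay self-contained; Helidon's
// MicroFramework flavour provides its own WebServer and Routing API in a
// similarly lightweight, handler-per-route style.
public class GreetService {

    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        // One route, one handler function:
        server.createContext("/greet", exchange -> {
            byte[] body = "{\"message\":\"Hello World!\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8080);
        System.out.println("Listening on http://localhost:8080/greet");
    }
}
```

The whole service is one route definition plus one handler lambda; that small surface, rather than a large application container, is the selling point of such microframeworks.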

During Open World 2018, Oracle announced nine new Services to support Cloud-native application development, including Managed Kubernetes, Kafka, Serverless, Orchestration, Telemetry, Notifications, Auto Scaling and Cloud Events.

[Image: 9_Cloud_Native_Services.jpg]

The Orchestration Service, for example, aims at Infrastructure as Code, which is a very important concept for Cloud-native application development: applications become even more independent from their runtime environment, since the runtime definition becomes part of the software.

[Image: Orchestration_Service_details.jpg]

From a technology perspective, topics like APIs, Microservice technologies such as Service Meshes with Istio or Envoy, and Kubernetes as the next-gen application development platform were prominent, especially at Oracle Code One. In addition, the Kafka platform for real-time data streaming and analytics, Serverless technologies and implementations, as well as Machine Learning based on Open-Source technologies and frameworks were on the agenda.

Conclusion

This year's Open World was mainly shaped by the new Gen2 infrastructure, the enhancements in this area and the autonomy of certain Oracle Cloud components, like the Database or the Data Warehouse. It seems that at least the Oracle IaaS stack is following a consistent vision and becoming more mature. Also on the PaaS level, the available product palette seems to become more homogeneous and consistent, since everything converges from a higher-level perspective. There are still some teething problems, but maybe that's just a matter of time.

Code One was a conference with many different facets, amazing presentations and awesome speakers. Here, developers were able to share knowledge and exchange opinions about how application development should be done nowadays. It's good to see that the trend of embracing Open-Source technologies, which I already noticed last year, has evolved further.

I am curious to see how the observed trends will develop further. At the latest at Oracle Open World and Code One 2019, we'll see what they look like.