
Reference Architectures 2018

Vert.x Microservices on Red Hat OpenShift Container Platform 3

Last Updated: 2018-04-08


Babak [email protected]


Legal Notice

Copyright © 2018 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

This reference architecture demonstrates the design, development, and deployment of Vert.x Microservices on Red Hat® OpenShift Container Platform 3.

Table of Contents

COMMENTS AND FEEDBACK

CHAPTER 1. EXECUTIVE SUMMARY

CHAPTER 2. SOFTWARE STACK
    2.1. FRAMEWORK
    2.2. CLIENT LIBRARY
        2.2.1. Overview
        2.2.2. Ribbon
        2.2.3. gRPC
        2.2.4. Vert.x Web Client
    2.3. SERVICE REGISTRY
        2.3.1. Overview
        2.3.2. Eureka
        2.3.3. Consul
        2.3.4. ZooKeeper
        2.3.5. OpenShift
    2.4. LOAD BALANCER
        2.4.1. Overview
        2.4.2. Ribbon
        2.4.3. gRPC
        2.4.4. OpenShift Service
    2.5. CIRCUIT BREAKER
        2.5.1. Overview
        2.5.2. Hystrix
        2.5.3. Vert.x Circuit Breaker
    2.6. EXTERNALIZED CONFIGURATION
        2.6.1. Overview
        2.6.2. Spring Cloud Config
        2.6.3. OpenShift ConfigMaps
    2.7. DISTRIBUTED TRACING
        2.7.1. Overview
        2.7.2. Jaeger
    2.8. PROXY/ROUTING
        2.8.1. Overview
        2.8.2. Zuul
        2.8.3. Istio
        2.8.4. Custom Proxy

CHAPTER 3. REFERENCE ARCHITECTURE ENVIRONMENT

CHAPTER 4. CREATING THE ENVIRONMENT
    4.1. OVERVIEW
    4.2. PROJECT DOWNLOAD
    4.3. SHARED STORAGE
    4.4. OPENSHIFT CONFIGURATION
    4.5. JAEGER DEPLOYMENT
    4.6. SERVICE DEPLOYMENT
    4.7. FLIGHT SEARCH
    4.8. EXTERNAL CONFIGURATION
    4.9. A/B TESTING

CHAPTER 5. DESIGN AND DEVELOPMENT
    5.1. OVERVIEW
    5.2. MAVEN PROJECT MODEL
        5.2.1. Supported Software Components
            5.2.1.1. Red Hat Supported Software
            5.2.1.2. Community Open-Source Software
        5.2.2. OpenShift Health Probes
    5.3. RESOURCE LIMITS
    5.4. VERT.X REST SERVICE
        5.4.1. Overview
        5.4.2. Vert.x REST Service
        5.4.3. Startup Initialization
    5.5. VERT.X WEB CLIENT AND LOAD BALANCING
        5.5.1. Overview
        5.5.2. Vert.x Web Client
    5.6. VERT.X WEB APPLICATION
        5.6.1. Overview
        5.6.2. Context Disambiguation
        5.6.3. Bower Package Manager
        5.6.4. PatternFly
        5.6.5. JavaScript
            5.6.5.1. jQuery UI
            5.6.5.2. jQuery Bootstrap Table
    5.7. VERT.X CIRCUIT BREAKER
        5.7.1. Overview
        5.7.2. Circuit Breaker
    5.8. OPENSHIFT CONFIGMAP
        5.8.1. Overview
        5.8.2. Property File Mount
    5.9. VERT.X CONCURRENCY
        5.9.1. Verticles
        5.9.2. Worker Verticles
        5.9.3. Combining Patterns
        5.9.4. Reactive Patterns
        5.9.5. Orchestrating Reactive Calls
    5.10. JAEGER
        5.10.1. Overview
        5.10.2. Cassandra Database
            5.10.2.1. Persistence
            5.10.2.2. Persistent Volume Claims
            5.10.2.3. Persistent Volume
        5.10.3. Jaeger Image
        5.10.4. Jaeger Tracing Client
            5.10.4.1. Tracing Outgoing Calls
            5.10.4.2. Baggage Data
    5.11. EDGE SERVICE
        5.11.1. Overview
        5.11.2. Usage
        5.11.3. Implementation Details
        5.11.4. A/B Testing

CHAPTER 6. CONCLUSION

APPENDIX A. AUTHORSHIP HISTORY

APPENDIX B. CONTRIBUTORS

APPENDIX C. REVISION HISTORY


COMMENTS AND FEEDBACK

In the spirit of open source, we invite anyone to provide feedback and comments on any reference architecture. Although we review our papers internally, sometimes issues or typographical errors are encountered. Feedback allows us to not only improve the quality of the papers we produce, but also allows the reader to provide their thoughts on potential improvements and topic expansion to the papers. Feedback on the papers can be provided by emailing [email protected]. Please refer to the title within the email.


CHAPTER 1. EXECUTIVE SUMMARY

This reference architecture demonstrates the design, development, and deployment of Vert.x microservices on Red Hat® OpenShift Container Platform 3. The reference application is built with the use of various Vert.x components, as well as other open source components commonly found in Vert.x applications.

Red Hat OpenShift Application Runtimes (RHOAR) is an ongoing effort by Red Hat to provide official OpenShift images with a combination of fully supported Red Hat software and popular third-party open-source components. The productized Eclipse Vert.x version 3.4.2.redhat-006 is certified to run on OpenShift Container Platform and provided as part of the Red Hat subscription.

The reference architecture serves as a potential blueprint for cloud-native application development on Vert.x, OpenShift Container Platform, and other commonly associated software.


CHAPTER 2. SOFTWARE STACK

2.1. FRAMEWORK

Numerous frameworks are available for building microservices, and each provides various advantages and disadvantages. This reference architecture focuses on a microservice architecture built on top of Vert.x. Eclipse Vert.x is a toolkit for writing reactive, non-blocking, asynchronous applications that run on the JVM (Java Virtual Machine). Eclipse Vert.x provides an unopinionated and flexible way to write polyglot applications that are fast and lightweight. Eclipse Vert.x is also designed to be truly cloud-native by efficiently allowing many processes to switch between one or very few threads. This allows Eclipse Vert.x applications and services to more effectively use their CPU quotas in cloud environments and avoids the unnecessary overhead caused when creating new threads.
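As a minimal illustration of this execution model (a sketch for orientation, not code from the reference application), a verticle starts a non-blocking HTTP server whose request handler runs on the event loop:

import io.vertx.core.AbstractVerticle;

public class HelloVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // The handler runs on the event loop and must never block it
        vertx.createHttpServer()
                .requestHandler(request -> request.response().end("Hello from Vert.x"))
                .listen(8080);
    }
}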

2.2. CLIENT LIBRARY

2.2.1. Overview

While invoking a microservice is typically a simple matter of sending a JSON or XML payload over HTTP, various considerations have led to the prevalence of different client libraries. These libraries in turn support many other tools and frameworks often required in a microservice architecture.

2.2.2. Ribbon

Ribbon is an Inter-Process Communication (remote procedure calls) library with built-in client-side load balancers. The primary usage model involves REST calls with various serialization scheme support.

2.2.3. gRPC

The more modern gRPC is a replacement for Ribbon that has been developed by Google and adopted by a large number of projects.

While Ribbon uses simple text-based JSON or XML payloads over HTTP, gRPC relies on Protocol Buffers for faster and more compact serialization. The payload is sent over HTTP/2 in binary form. The result is better performance and security, at the expense of compatibility and tooling support in the existing market.

2.2.4. Vert.x Web Client

The Vert.x Web Client is an asynchronous HTTP and HTTP/2 client. The Web Client makes it easy to perform HTTP request/response interactions with a web server, and provides advanced features like:

JSON body encoding/decoding

request/response pumping

request parameters

unified error handling

form submissions
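As a brief sketch of the request/response model (assuming a running vertx instance; the host, port, and path here are placeholders):

WebClient client = WebClient.create(vertx);
client.get(8080, "airports", "/airports")
        .send(asyncResult -> {
            if (asyncResult.succeeded()) {
                // Succeeded means an HTTP response was received
                System.out.println(asyncResult.result().bodyAsString());
            } else {
                asyncResult.cause().printStackTrace();
            }
        });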

2.3. SERVICE REGISTRY


2.3.1. Overview

Microservice architecture often implies dynamic scaling of individual services, in a private, hybrid or public cloud where the number and address of hosts cannot always be predicted or statically configured in advance. The solution is the use of a service registry as a starting point for discovering the deployed instances of each service. This is often paired with a client library or load balancer layer that seamlessly fails over upon discovering that an instance no longer exists, and caches service registry lookups. Taking things one step further, integration between a client library and the service registry can make this lookup and invoke process into a single step, transparent to developers.

In modern cloud environments, such capability is often provided by the platform, and service replication and scaling is a core feature. This reference architecture is built on top of OpenShift, therefore benefiting from the Kubernetes Service abstraction.

2.3.2. Eureka

Eureka is a REST (REpresentational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers.

2.3.3. Consul

Consul is a tool for discovering and configuring services in your infrastructure. It is provided both as part of the HashiCorp enterprise suite of software and as an open source component that is used in Spring Cloud.

2.3.4. ZooKeeper

Apache ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.

2.3.5. OpenShift

In OpenShift, a Kubernetes service serves as an internal load balancer. It identifies a set of replicated pods in order to proxy the connections it receives to them. Additional backing pods can be added to, or removed from, a service, while the service itself remains consistently available, enabling anything that depends on the service to refer to it through a consistent address.

Contrary to a third-party service registry, the platform in charge of service replication can provide a current and accurate report of service replicas at any moment. The service abstraction is also a critical platform component that is as reliable as the underlying platform itself. This means that the client does not need to keep a cache and account for the failure of the service registry itself.

This reference architecture builds microservices on top of OpenShift and uses the OpenShift service capability to perform the role of a service registry.
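For illustration, the service abstraction behind each microservice boils down to a Kubernetes Service object of roughly the following shape; the name and selector below are placeholders, as the actual manifests are generated during deployment:

apiVersion: v1
kind: Service
metadata:
  name: airports
spec:
  selector:
    app: airports
  ports:
    - port: 8080
      targetPort: 8080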

2.4. LOAD BALANCER

2.4.1. Overview

For client calls to stateless services, high availability (HA) translates to a need to look up the service from a service registry, and load balance among available instances. The client libraries previously mentioned include the ability to combine these two steps, but OpenShift makes both actions redundant by including load balancing capability in the service abstraction. OpenShift provides a single address where calls will be load balanced and redirected to an appropriate instance.


2.4.2. Ribbon

Ribbon allows load balancing among a static list of declared instances, or among however many instances of the service are discovered from a registry lookup.

2.4.3. gRPC

gRPC also provides load balancing capability within the same library layer.

2.4.4. OpenShift Service

OpenShift provides load balancing through its concept of service abstraction. The cluster IP address exposed by a service is an internal load balancer between any running replica pods that provide the service. Within the OpenShift cluster, the service name resolves to this cluster IP address and can be used to reach the load balancer. For calls from outside, and when going through the router is not desirable, an external IP address can be configured for the service.

2.5. CIRCUIT BREAKER

2.5.1. Overview

The highly distributed nature of microservices implies a higher risk of failure of a remote call, as the number of such remote calls increases. The circuit breaker pattern can help avoid a cascade of such failures by isolating problematic services and avoiding damaging timeouts.

2.5.2. Hystrix

Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and third-party libraries, stop cascading failure, and enable resilience in complex distributed systems where failure is inevitable.

Hystrix implements both the circuit breaker and bulkhead patterns.

2.5.3. Vert.x Circuit Breaker

Vert.x Circuit Breaker is an implementation of the Circuit Breaker pattern for Vert.x. It keeps track of the number of failures and opens the circuit when a threshold is reached. Optionally, a fallback is executed. Supported failures are:

failures reported by your code in a Future

exceptions thrown by your code

uncompleted futures (timeout)

Operations guarded by a circuit breaker are intended to be non-blocking and asynchronous in order to benefit from the Vert.x execution model.
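As an illustrative sketch (from the io.vertx.circuitbreaker package; the breaker name, thresholds, and guarded call below are hypothetical rather than the reference application's configuration), a circuit breaker can guard an asynchronous operation and supply a fallback:

CircuitBreaker breaker = CircuitBreaker.create("pricing-breaker", vertx,
        new CircuitBreakerOptions()
                .setMaxFailures(5)       // open the circuit after 5 failures
                .setTimeout(2000)        // count operations over 2s as failed
                .setResetTimeout(10000)  // attempt to close again after 10s
);

breaker.executeWithFallback(future -> {
    // Guarded non-blocking operation: complete or fail the future
    webClient.get(8080, "sales", "/price").send(asyncResult -> {
        if (asyncResult.succeeded()) {
            future.complete(asyncResult.result().bodyAsString());
        } else {
            future.fail(asyncResult.cause());
        }
    });
}, throwable -> "fallback price");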

2.6. EXTERNALIZED CONFIGURATION

2.6.1. Overview

Externalized configuration management solutions can provide an elegant alternative to the typical combination of configuration files, command line arguments, and environment variables that are used to make applications more portable and less rigid in response to outside changes. This capability is largely dependent on the underlying platform and is provided by ConfigMaps in OpenShift.

2.6.2. Spring Cloud Config

Spring Cloud Config provides server and client-side support for externalized configuration in a distributed system. With the Config Server you have a central place to manage external properties for applications across all environments.

2.6.3. OpenShift ConfigMaps

ConfigMaps can be used to store fine-grained information like individual properties, or coarse-grained information like entire configuration files or JSON blobs. They provide mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform.

2.7. DISTRIBUTED TRACING

2.7.1. Overview

For all its advantages, a microservice architecture can be very difficult to analyze and troubleshoot. Each business request spawns multiple calls to, and between, individual services at various layers. Distributed tracing ties all individual service calls together, and associates them with a business request through a unique generated ID.

2.7.2. Jaeger

Jaeger, inspired by Dapper and OpenZipkin, is an open source distributed tracing system that fully conforms to the Cloud Native Computing Foundation (CNCF) OpenTracing standard. It can be used for monitoring microservice-based architectures and provides distributed context propagation and transaction monitoring, as well as service dependency analysis and performance/latency optimization.

This reference architecture uses OpenTracing for instrumenting services. Jaeger is an implementation and tracing system that aggregates and displays the tracing data.
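As a sketch of how a service can obtain an OpenTracing tracer from the Jaeger client library used later in this document (the service name, sampler settings, and agent host below are placeholder assumptions, not the reference application's exact wiring):

import com.uber.jaeger.Configuration;
import io.opentracing.Tracer;

// Constant sampler traces every request; the agent host and UDP port are placeholders
Tracer tracer = new Configuration(
        "flights",
        new Configuration.SamplerConfiguration("const", 1),
        new Configuration.ReporterConfiguration(true, "jaeger-agent", 6831, 1000, 10000)
).getTracer();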

2.8. PROXY/ROUTING

2.8.1. Overview

Adding a proxy in front of every service call enables the application of various filters before and after calls, as well as a number of common patterns in a microservice architecture, such as A/B testing. Static and dynamic routing rules can help select the desired version of a service.

2.8.2. Zuul

Zuul is an edge service that provides dynamic routing, monitoring, resiliency, security, and more. Zuul supports multiple routing models, ranging from declarative URL patterns mapped to a destination, to Groovy scripts that can reside outside the application archive and dynamically determine the route.

2.8.3. Istio


Istio is an open platform-independent service mesh that provides traffic management, policy enforcement, and telemetry collection. Istio is designed to manage communications between microservices and applications. Istio is still in pre-release stages.

Red Hat is a participant in the Istio project.

2.8.4. Custom Proxy

While Istio remains under development, this application builds a custom reverse proxy component capable of both static and dynamic routing rules. The reverse proxy component developed as part of this reference architecture application relies on the vertx-web-client to proxy requests. The capability is extended to introduce a mapping framework, allowing for simple declarative mapping comparable to Zuul, in addition to support for JavaScript. Rules written in JavaScript can be very simple, or take full advantage of standard JavaScript and the JVM surrounding it. The mapping framework supports a chained approach, so scripts can simply focus on overriding static rules when necessary.


CHAPTER 3. REFERENCE ARCHITECTURE ENVIRONMENT

This reference architecture demonstrates an airline ticket search system built in the microservice architectural style. Each individual microservice is implemented as a Vert.x REST service deployed on an OpenShift image with a supported OpenJDK. The software stack of a typical microservice is as follows:

Figure 3.1. Microservice Software Stack

Each microservice instance runs in a container instance, with one container per OpenShift pod and one pod per service replica. At its core, an application built in the microservice architectural style consists of a number of replicated containers calling each other:


Figure 3.2. Container Software Stack

The core functionality of the application is provided by microservices, each fulfilling a single responsibility. One service acts as the API gateway, calling individual microservices and aggregating the response so it can be consumed more easily.


Figure 3.3. Functional Diagram

The architecture makes extended use of Jaeger for distributed tracing. The Jaeger production templates are used to run the backend service with a Cassandra datastore, and tracing data is sent to it from every service in the application.


Figure 3.4. Jaeger Calls

Finally, the reference architecture uses an edge service to provide static and dynamic routing. The result is that all service calls are actually directed to the reverse proxy, which forwards the request as appropriate. This capability is leveraged to demonstrate A/B testing by providing an alternate version of the Sales service and making a runtime decision to use it for a group of customers.


Figure 3.5. Reverse Proxy


CHAPTER 4. CREATING THE ENVIRONMENT

4.1. OVERVIEW

This reference architecture can be deployed in either a production or a trial environment. In both cases, it is assumed that ocp-master1 refers to one (or the only) OpenShift master host, and that the environment includes two other OpenShift schedulable hosts with the host names of ocp-node1 and ocp-node2. Production environments would have at least 3 master hosts to provide High Availability (HA) resource management, and presumably a higher number of working nodes.

It is further assumed that OpenShift Container Platform has been properly installed, and that a Linux user with sudo privileges has access to the host machines. This user can then set up an OpenShift user through its identity providers.

4.2. PROJECT DOWNLOAD

Download the source code and related artifacts for this reference architecture application from its public GitHub repository:

$ git clone https://github.com/RHsyseng/vertx-msa-ocp.git LambdaAir

Change directory to the root of this project. It is assumed that from this point on, all instructions are executed from inside the LambdaAir directory.

$ cd LambdaAir

4.3. SHARED STORAGE

This reference architecture environment uses Network File System (NFS) to make storage available to all OpenShift nodes.

Attach 3GB of storage and create a volume group for it, as well as a logical volume of 1GB for each required persistent volume:

$ sudo pvcreate /dev/vdc
$ sudo vgcreate vertx /dev/vdc
$ sudo lvcreate -L 1G -n cassandra-data vertx
$ sudo lvcreate -L 1G -n cassandra-logs vertx
$ sudo lvcreate -L 1G -n edge vertx

Create a corresponding mount directory for each logical volume and mount them.

$ sudo mkfs.ext4 /dev/vertx/cassandra-data
$ sudo mkdir -p /mnt/vertx/cassandra-data
$ sudo mount /dev/vertx/cassandra-data /mnt/vertx/cassandra-data

$ sudo mkfs.ext4 /dev/vertx/cassandra-logs
$ sudo mkdir -p /mnt/vertx/cassandra-logs
$ sudo mount /dev/vertx/cassandra-logs /mnt/vertx/cassandra-logs


$ sudo mkfs.ext4 /dev/vertx/edge
$ sudo mkdir -p /mnt/vertx/edge
$ sudo mount /dev/vertx/edge /mnt/vertx/edge

Share these mounts with all nodes by configuring the /etc/exports file on the NFS server, and make sure to restart the NFS service before proceeding.
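For example, the /etc/exports entries and service restart might look like the following sketch; the wildcard host and export options are assumptions to be adapted to your environment, and the NFS service name can vary by distribution:

/mnt/vertx/cassandra-data *(rw,sync,no_root_squash)
/mnt/vertx/cassandra-logs *(rw,sync,no_root_squash)
/mnt/vertx/edge *(rw,sync,no_root_squash)

$ sudo exportfs -ra
$ sudo systemctl restart nfs-server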

4.4. OPENSHIFT CONFIGURATION

Create an OpenShift user, optionally with the same name, to use for creating the project and deploying the application. Assuming the use of HTPasswd as the authentication provider:

$ sudo htpasswd -c /etc/origin/master/htpasswd ocpAdmin
New password: PASSWORD
Re-type new password: PASSWORD
Adding password for user ocpAdmin

Grant OpenShift admin and cluster admin roles to this user, so it can create persistent volumes:

$ sudo oadm policy add-cluster-role-to-user admin ocpAdmin
$ sudo oadm policy add-cluster-role-to-user cluster-admin ocpAdmin

At this point, the new OpenShift user can be used to sign in to the cluster through the master server:

$ oc login -u ocpAdmin -p PASSWORD --server=https://ocp-master1.xxx.example.com:8443

Login successful.

Create a new project to deploy this reference architecture application:

$ oc new-project lambdaair --display-name="Lambda Air" --description="Vert.x Microservices on Red Hat OpenShift Container Platform 3"
Now using project "lambdaair" on server "https://ocp-master1.xxx.example.com:8443".

4.5. JAEGER DEPLOYMENT

Jaeger uses the Cassandra database for storage, which in turn requires OpenShift persistent volumes to be created. Edit Jaeger/jaeger-pv.yml and provide a valid NFS server and path for each entry before proceeding. Once the file has been corrected, use it to create the six persistent volumes:

$ oc create -f Jaeger/jaeger-pv.yml
persistentvolume "cassandra-pv-1" created
persistentvolume "cassandra-pv-2" created
persistentvolume "cassandra-pv-3" created
persistentvolume "cassandra-pv-4" created
persistentvolume "cassandra-pv-5" created
persistentvolume "cassandra-pv-6" created
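Each entry in jaeger-pv.yml follows the standard NFS persistent volume pattern; a single entry might look like the following sketch, where the server and path values are placeholders that must match your NFS configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-pv-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: nfs-server.xxx.example.com
    path: /mnt/vertx/cassandra-data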

Validate that the persistent volumes are available:


$ oc get pv
NAME             CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      AGE
cassandra-pv-1   1Gi        RWO           Recycle         Available   11s
cassandra-pv-2   1Gi        RWO           Recycle         Available   11s
cassandra-pv-3   1Gi        RWO           Recycle         Available   11s
cassandra-pv-4   1Gi        RWO           Recycle         Available   11s
cassandra-pv-5   1Gi        RWO           Recycle         Available   11s
cassandra-pv-6   1Gi        RWO           Recycle         Available   11s

With the persistent volumes in place, use the provided version of the Jaeger production template to deploy both the Jaeger server and the Cassandra database services. The template also uses a volume claim template to dynamically create a data and log volume claim for each of the three pods:

volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: cassandra-logs
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

$ oc new-app -f Jaeger/jaeger-production-template.yml

--> Deploying template "swarm/jaeger-template" for "Jaeger/jaeger-production-template.yml" to project lambdaair

     Jaeger
     ---------
     Jaeger Distributed Tracing Server

     * With parameters:
        * Jaeger Service Name=jaeger
        * Image version=0.6
        * Jaeger Cassandra Keyspace=jaeger_v1_dc1
        * Jaeger Zipkin Service Name=zipkin

--> Creating resources ...
    service "cassandra" created
    statefulset "cassandra" created
    job "jaeger-cassandra-schema-job" created
    deployment "jaeger-collector" created
    service "jaeger-collector" created
    service "zipkin" created
    deployment "jaeger-query" created
    service "jaeger-query" created
    route "jaeger-query" created
--> Success
    Run 'oc status' to view your app.

You can use oc status to get a report, but for further details and to view the progress of the deployment, watch the pods as they get created and deployed:

$ watch oc get pods

Every 2.0s: oc get pods

NAME                                READY     STATUS      RESTARTS   AGE
cassandra-0                         1/1       Running     0          4m
cassandra-1                         1/1       Running     2          4m
cassandra-2                         1/1       Running     3          4m
jaeger-cassandra-schema-job-7d58m   0/1       Completed   0          4m
jaeger-collector-418097188-b090z    1/1       Running     4          4m
jaeger-query-751032167-vxr3w        1/1       Running     3          4m

It may take a few minutes for the deployment process to complete, at which point there should be five pods in the Running state, along with a database loading job that is completed.

Next, deploy the Jaeger agent. This reference architecture deploys the agent as a single separate pod:

$ oc new-app Jaeger/jaeger-agent.yml

--> Deploying template "swarm/jaeger-jaeger-agent" for "Jaeger/jaeger-agent.yml" to project lambdaair

--> Creating resources ...
    deploymentconfig "jaeger-agent" created
    service "jaeger-agent" created
--> Success
    Run 'oc status' to view your app.

NOTE

The Jaeger agent may be deployed in multiple ways, or even bypassed entirely through direct HTTP calls to the collector. Another option is bundling the agent as a sidecar to every microservice, as documented in the Jaeger project itself. Select an appropriate approach for your production environment.

Next, to access the Jaeger console, first discover its address by querying the route:

$ oc get routes

NAME           HOST/PORT                                     PATH      SERVICES       PORT      TERMINATION   WILDCARD
jaeger-query   jaeger-query-lambdaair.ocp.xxx.example.com              jaeger-query   <all>     edge/Allow    None


Use the displayed URL to access the console from a browser and verify that it works correctly:

Figure 4.1. Jaeger UI

4.6. SERVICE DEPLOYMENT

To deploy a Vert.x service, use Maven to build the project, with the fabric8:deploy target for the openshift profile to deploy the built image to OpenShift. For convenience, an aggregator pom file has been provided at the root of the project that delegates the same Maven build to all 6 configured modules:

$ mvn clean fabric8:deploy -Popenshift

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Lambda Air 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
.........
[INFO] --- fabric8-maven-plugin:3.5.30:deploy (default-cli) @ aggregation ---
[WARNING] F8: No such generated manifest file /Users/bmozaffa/RedHatDrive/SysEng/Microservices/Vert.x/vertx-msa-ocp/target/classes/META-INF/fabric8/openshift.yml for this project so ignoring
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Lambda Air ......................................... SUCCESS [02:26 min]
[INFO] Lambda Air ......................................... SUCCESS [04:18 min]
[INFO] Lambda Air ......................................... SUCCESS [02:07 min]
[INFO] Lambda Air ......................................... SUCCESS [02:42 min]
[INFO] Lambda Air ......................................... SUCCESS [01:17 min]
[INFO] Lambda Air ......................................... SUCCESS [01:13 min]
[INFO] Lambda Air ......................................... SUCCESS [ 1.294 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14:16 min
[INFO] Finished at: 2017-12-08T16:57:11-08:00
[INFO] Final Memory: 81M/402M
[INFO] ------------------------------------------------------------------------

Once all services have been built and deployed, there should be a total of 11 running pods, including the 5 Jaeger pods from before, and a new pod for each of the 6 services:

$ oc get pods
NAME                                READY     STATUS      RESTARTS   AGE
airports-1-bn1gp                    1/1       Running     0          24m
airports-s2i-1-build                0/1       Completed   0          24m
cassandra-0                         1/1       Running     0          55m
cassandra-1                         1/1       Running     2          55m
cassandra-2                         1/1       Running     3          55m
edge-1-nlb4b                        1/1       Running     0          12m
edge-s2i-1-build                    0/1       Completed   0          13m
flights-1-n0lbx                     1/1       Running     0          11m
flights-s2i-1-build                 0/1       Completed   0          11m
jaeger-agent-1-g8s9t                1/1       Running     0          39m
jaeger-cassandra-schema-job-7d58m   0/1       Completed   0          55m
jaeger-collector-418097188-b090z    1/1       Running     4          55m
jaeger-query-751032167-vxr3w        1/1       Running     3          55m
presentation-1-dscwm                1/1       Running     0          1m
presentation-s2i-1-build            0/1       Completed   0          1m
sales-1-g96zm                       1/1       Running     0          4m
sales-s2i-1-build                   0/1       Completed   0          5m
sales2-1-36hww                      1/1       Running     0          3m
sales2-s2i-1-build                  0/1       Completed   0          4m

4.7. FLIGHT SEARCH

The presentation service also creates a route. Once again, list the routes in the OpenShift project:

$ oc get routes
NAME           HOST/PORT                                     PATH      SERVICES       PORT      TERMINATION   WILDCARD
jaeger-query   jaeger-query-lambdaair.ocp.xxx.example.com              jaeger-query   <all>     edge/Allow    None
presentation   presentation-lambdaair.ocp.xxx.example.com              presentation   8080                    None


Use the URL of the route to access the HTML application from a browser, and verify that it comes up:

Figure 4.2. Lambda Air Landing Page

Search for a flight by entering values for each of the four fields. The first search may take a bit longer, so wait a few seconds for the response:

Figure 4.3. Lambda Air Flight Search

4.8. EXTERNAL CONFIGURATION

The Presentation service, which contains the API Gateway, configures the Vert.x web client with an HTTP connection pool size of 20 in its environment properties. Confirm this by searching the logs of the presentation pod after it has started, and verify that the pool size is as expected:

$ oc logs presentation-1-fxnkm | grep pool
FINE: Will price flights with a connection pool size of 20

Make a copy of the project configuration file app-config.yml that assumes a higher number of Sales service pods relative to Presentation pods. Accordingly, increase the connection pool size:

$ vi app-config.yml

Change the connection pool size to 80:

CHAPTER 4. CREATING THE ENVIRONMENT

23

Page 28: Reference Architectures 2018

pricing:
  pool:
    size: 80

Create a configmap using the oc utility based on this file:

$ oc create configmap presentation --from-file=app-config.yml

configmap "presentation" created

Edit the Presentation deployment config and mount this ConfigMap as /deployments/config, where it will automatically be part of the Vert.x application classpath:

$ oc edit dc presentation

Add a new volume with an arbitrary name, such as config-volume, that references the previously created configmap. The volumes definition is a child of the template spec. Next, create a volume mount under the container to reference this volume and specify where it should be mounted. The final result, with the new lines included, is as follows:

...
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - name: config-volume
          mountPath: /deployments/app-config.yml
          subPath: app-config.yml
      volumes:
      - name: config-volume
        configMap:
          name: presentation
      dnsPolicy: ClusterFirst
      restartPolicy: Always
...

Once the deployment config is modified and saved, OpenShift will deploy a new version of the service that will include the overriding properties. This change is persistent, and pods created in the future with this new version of the deployment config will also mount the YAML file.

List the pods and note that a new pod is being created to reflect the change in the deployment config, namely the mounted file:

$ oc get pods
NAME                                READY     STATUS              RESTARTS   AGE
airports-1-bn1gp                    1/1       Running             0          24m
airports-s2i-1-build                0/1       Completed           0          24m
cassandra-0                         1/1       Running             0          55m
cassandra-1                         1/1       Running             2          55m
cassandra-2                         1/1       Running             3          55m
edge-1-nlb4b                        1/1       Running             0          12m
edge-s2i-1-build                    0/1       Completed           0          13m
flights-1-n0lbx                     1/1       Running             0          11m
flights-s2i-1-build                 0/1       Completed           0          11m
jaeger-agent-1-g8s9t                1/1       Running             0          39m
jaeger-cassandra-schema-job-7d58m   0/1       Completed           0          55m
jaeger-collector-418097188-b090z    1/1       Running             4          55m
jaeger-query-751032167-vxr3w        1/1       Running             3          55m
presentation-1-dscwm                1/1       Running             0          1m
presentation-2-deploy               0/1       ContainerCreating   0          3s
presentation-s2i-1-build            0/1       Completed           0          1m
sales-1-g96zm                       1/1       Running             0          4m
sales-s2i-1-build                   0/1       Completed           0          5m
sales2-1-36hww                      1/1       Running             0          3m
sales2-s2i-1-build                  0/1       Completed           0          4m

Wait until the second version of the pod has started in the running state. The first version will be terminated and subsequently removed:

$ oc get pods
NAME                   READY     STATUS    RESTARTS   AGE
...
presentation-2-36dt9   1/1       Running   0          2s
...

Once this has happened, use the browser to do one or several more flight searches. Then verify the updated connection pool size by searching the logs of the new presentation pod:

$ oc logs presentation-2-36dt9 | grep pool
FINE: Will price flights with a connection pool size of 80

Notice that with the mounted overriding properties, pricing calls are now made from a pool size of 80, instead of 20.

4.9. A/B TESTING

Copy the JavaScript file provided in the Edge project over to the shared storage for this service:

$ cp Edge/misc/routing.js /mnt/vertx/edge/

Create a persistent volume for the Edge service. External JavaScript files placed in this location can provide dynamic routing.

$ oc create -f Edge/misc/edge-pv.yml
persistentvolume "edge" created

Also create a persistent volume claim:

$ oc create -f Edge/misc/edge-pvc.yml
persistentvolumeclaim "edge" created

Verify that the claim is bound to the persistent volume:

$ oc get pvc
NAME                         STATUS    VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
cassandra-data-cassandra-0   Bound     cassandra-pv-1   1Gi        RWO                          39m
cassandra-data-cassandra-1   Bound     cassandra-pv-2   1Gi        RWO                          39m
cassandra-data-cassandra-2   Bound     cassandra-pv-3   1Gi        RWO                          39m
cassandra-logs-cassandra-0   Bound     cassandra-pv-4   1Gi        RWO                          39m
cassandra-logs-cassandra-1   Bound     cassandra-pv-5   1Gi        RWO                          39m
cassandra-logs-cassandra-2   Bound     cassandra-pv-6   1Gi        RWO                          39m
edge                         Bound     edge             1Gi        RWO                          3s

Attach the persistent volume claim to the deployment config as a directory called edge on the root of the filesystem:

$ oc volume dc/edge --add --name=edge --type=persistentVolumeClaim --claim-name=edge --mount-path=/edge
deploymentconfig "edge" updated

Once again, the change prompts a new deployment, and terminates the original edge pod once the new version is started up and running.

Wait until the second version of the pod reaches the running state. Then return to the browser and perform one or more flight searches. After that, return to the OpenShift environment and look at the log for the edge pod.

If the IP address received from your browser ends in an odd number, the JavaScript filters pricing calls and sends them to version B of the sales service instead. This will be clear in the edge log:

$ oc logs edge-2-67g55 | grep Rerouting
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25
Rerouting to B instance for IP Address 10.3.116.25

In this case, the logs from sales2 will show tickets being priced with a modified algorithm:

$ oc logs sales2-1-667gw | grep Priced
FINE: Priced ticket at 135
FINE: Priced ticket at 169
FINE: Priced ticket at 169
FINE: Priced ticket at 169
FINE: Priced ticket at 391
FINE: Priced ticket at 414
FINE: Priced ticket at 324
FINE: Priced ticket at 473
FINE: Priced ticket at 559
FINE: Priced ticket at 597
FINE: Priced ticket at 250
FINE: Priced ticket at 237
FINE: Priced ticket at 629
FINE: Priced ticket at 283

If that is not the case and your IP address ends in an even number, you will see other log statements depending on the configured verbosity. In this case, you can change the filter criteria to send IP addresses ending in an even digit to the new version of the pricing algorithm, instead of the odd ones.

$ cat /mnt/vertx/edge/routing.js

vertx.eventBus().consumer("routing.js", function (message) {
    var jsonObject = map( message.body() );
    message.reply(jsonObject);
});
console.log("Loaded routing.js");

function map(jsonObject) {
    if( jsonObject["host"].startsWith("sales") ) {
        var ipAddress = jsonObject["forwarded-for"];
        console.log( 'Got IP Address as ' + ipAddress );
        if( ipAddress ) {
            var lastDigit = ipAddress.substring( ipAddress.length - 1 );
            console.log( 'Got last digit as ' + lastDigit );
            if( lastDigit % 2 != 0 ) {
                console.log( 'Rerouting to B instance for IP Address ' + ipAddress );
                //Odd IP address, reroute for A/B testing:
                jsonObject["host"] = "sales2:8080";
            }
        }
    }
    return jsonObject;
}
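Flipping the parity check in the map function above is all that the change requires; the rerouting block then targets even-ending addresses instead:

if( lastDigit % 2 == 0 ) {
    console.log( 'Rerouting to B instance for IP Address ' + ipAddress );
    //Even IP address, reroute for A/B testing:
    jsonObject["host"] = "sales2:8080";
}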

This is a simple matter of editing the file and deploying a new version of the edge service to pick up the updated script:

$ oc rollout latest edge
deploymentconfig "edge" rolled out

Once the new pod is running, do a flight search again and check the logs. The calls to pricing should go to the sales2 service now, and logs should appear as previously described.


CHAPTER 5. DESIGN AND DEVELOPMENT

5.1. OVERVIEW

The source code for the Lambda Air application is made available in a public GitHub repository. This chapter briefly covers each microservice and its functionality while reviewing the pieces of the software stack used in the reference architecture.

5.2. MAVEN PROJECT MODEL

Each microservice project includes a Maven POM file, which in addition to declaring the module properties and dependencies, also includes a profile definition to use fabric8-maven-plugin to create and deploy an OpenShift image.

5.2.1. Supported Software Components

Software components used in this reference architecture application include both Red Hat supported software and community open-source software. The use of a Maven BOM file to declare library dependencies helps distinguish between the two.

5.2.1.1. Red Hat Supported Software

To use the supported Vert.x stack from Red Hat OpenShift Application Runtimes (RHOAR), configure the Red Hat repository for Maven:

<repositories>
    <repository>
        <id>redhat-ga-repository</id>
        <name>Redhat GA Repository</name>
        <url>https://maven.repository.redhat.com/ga/all/</url>
    </repository>
</repositories>
<pluginRepositories>
    <pluginRepository>
        <id>redhat-ga-repository</id>
        <name>Redhat GA Repository</name>
        <url>https://maven.repository.redhat.com/ga/all/</url>
    </pluginRepository>
</pluginRepositories>

Select a supported version of Vert.x:

<properties>
    <version.vertx>3.4.2.redhat-006</version.vertx>
...

Further down in the POM file, the dependency section references a BOM file in the Red Hat repository that maintains a list of supported versions and libraries for Vert.x:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.vertx</groupId>
            <artifactId>vertx-dependencies</artifactId>
            <version>${version.vertx}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

The POM file declares the base image containing the operating system and Java Development Kit (JDK) in the configuration section of the plugin. All the services in this application build on top of a Red Hat Enterprise Linux (RHEL) base image, containing a supported version of OpenJDK:

...
<from>
    registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift
</from>
...

This BOM file allows the project Maven files to reference declared packages without providing a version, and import the supported library versions:

<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-core</artifactId>
</dependency>
<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web</artifactId>
</dependency>
...

5.2.1.2. Community Open-Source Software

The reference architecture application also makes use of open-source libraries that are not currently supported by Red Hat. In such cases, the POM files do not make use of dependency management, and directly import the required version of each library:

<!-- Jaeger client for distributed tracing with opentracing-vertx-web -->
<dependency>
    <groupId>com.uber.jaeger</groupId>
    <artifactId>jaeger-core</artifactId>
    <version>0.24.0</version>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-vertx-web</artifactId>
    <version>0.1.0</version>
</dependency>

5.2.2. OpenShift Health Probes

To generate OpenShift health probes, an enricher element is inserted in the configuration section of the fabric8 maven plugin:

<enricher>
    <config>
        <vertx-health-check>
            <path>/health</path>
        </vertx-health-check>
    </config>
</enricher>

As a result, fabric8 generates default OpenShift health probes that call the pods to determine whether a service is running and ready to service requests:

livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 180
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1

To handle health checks, simply provide a handler for the Vert.x Web Router:

Router router = Router.router(vertx);
router.get("/health").handler(routingContext -> routingContext.response().end("OK"));

5.3. RESOURCE LIMITS

OpenShift allows administrators to set constraints to limit the number of objects or amount of compute resources that are used in each project. While these constraints apply to projects in the aggregate, each pod may also request minimum resources and/or be constrained with limits on its memory and CPU use.

The OpenShift template provided in the project repository for the Jaeger agent uses this capability to request that at least 20% of a CPU core and 200 megabytes of memory be made available to its container. Twice the processing power and the memory may be provided to the container, if necessary and available, but no more than that will be assigned.

resources:
  limits:
    cpu: "400m"
    memory: "400Mi"
  requests:
    cpu: "200m"
    memory: "200Mi"

When the fabric8 Maven plugin is used to create the image and direct edits to the deployment configuration are not convenient, resource fragments can be used to provide the desired snippets. This application provides deployment.yml files to leverage this capability and set resource requests and limits on the Vert.x projects:

spec:
  replicas: 1
  template:
    spec:
      containers:
        - resources:
            requests:
              cpu: '200m'
              memory: '400Mi'
            limits:
              cpu: '400m'
              memory: '800Mi'

Control over the memory and processing use of individual services is often critical. Proper configuration of these values, as specified above, is seamless to the deployment and administration process. However, it can be helpful to set up resource quotas in projects for the purpose of enforcing their inclusion in pod deployment configurations.

5.4. VERT.X REST SERVICE

5.4.1. Overview

The Airports service is the simplest microservice of the application, which makes it a good point of reference for building a basic Vert.x REST service.

5.4.2. Vert.x REST Service

To easily include the dependencies for a simple Vert.x application that provides a REST service, declare the following artifact:

<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web</artifactId>
</dependency>

To receive and process REST requests, create and start an HTTP server, along with a router to listen on listed paths for specific HTTP methods:

...
    Router router = Router.router(vertx);
    router.get("/health").handler(routingContext -> routingContext.response().end("OK"));
    router.get("/airports").handler(this::getAirports);
    router.get("/airports/:airport").handler(this::getAirport);
    setupTracing(router, configResult.result());
    HttpServer httpServer = vertx.createHttpServer();
    httpServer.requestHandler(router::accept);
    int port = config().getInteger("http.port", 8080);
    httpServer.listen(port, result -> {
        if (result.succeeded()) {
            startFuture.complete();
            logger.info("Running http server on port " + result.result().actualPort());
        } else {
            startFuture.fail(result.cause());
        }
    });
}

This is enough to create a REST service that listens on the default port of 8080 on the specified contexts. When the web listener is successfully started, there will be a log statement indicating it is listening on the configured port.

Each REST operation is implemented by a Java method, for example:

private void getAirports(RoutingContext routingContext) {
    getActiveSpan(routingContext).setTag("Operation", "Look Up Airports");
    HttpServerResponse response = routingContext.response();
    response.putHeader("Content-Type", "application/json; charset=utf-8");

    String filter = routingContext.request().getParam("filter");
    logger.fine("Filter is " + filter);
    Collection<Airport> airports;
    if (filter == null || filter.isEmpty()) {
        airports = AirportsService.getAirports();
    } else {
        airports = AirportsService.filter(filter);
    }
    response.end(Json.encode(airports));
}

5.4.3. Startup Initialization

The Airports service uses eager initialization to load airport data into memory at the time of startup. This is implemented in the verticle start method:

@Override
public void start(Future<Void> startFuture) {
    try {
        AirportsService.loadAirports();
        ...
    } catch (IOException e) {
        startFuture.fail(e);
    }
}

In case of an exception, the verticle initialization is interrupted and its startup fails.

5.5. VERT.X WEB CLIENT AND LOAD BALANCING

5.5.1. Overview

The Flights service has a similar structure to that of the Airports service, but relies on, and calls, the Airports service. As such, it makes use of the Vert.x Web Client and the generated OpenShift service for high availability.


5.5.2. Vert.x Web Client

The Vert.x Web Client is imported by declaring a dependency on its respective package:

<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web-client</artifactId>
</dependency>

To obtain a new instance of the web client with default options, simply pass a vertx instance to the factory method provided by the class:

webClient = WebClient.create(vertx);

The instance is used as a field to avoid recreating and closing an instance for each invocation.

The Presentation project, on the other hand, needs to further configure the web client, so it creates a WebClientOptions object:

WebClientOptions options = new WebClientOptions();
int connectionPoolSize = config().getInteger("pricing.pool.size");
logger.fine("Will price flights with a connection pool size of " + connectionPoolSize);
options.setMaxPoolSize(connectionPoolSize);
webClient = WebClient.create(vertx, options);

To make calls to a service using the client, create a request for a given URL and use one of the send methods to execute the request:

HttpRequest<Buffer> httpRequest = webClient.getAbs(config().getString("service.airports.baseUrl") + "/airports");
httpRequest.send(asyncResult -> {
    if (asyncResult.succeeded()) {
        ...
    } else {
        ...
    }
});

The AsyncResult is considered to have succeeded as long as an HTTP response is received. It is often necessary to check the HTTP status code to further determine how to handle the response:

logger.fine("Got code " + asyncResult.result().statusCode());

The content of the response can be retrieved in different formats, including a String:

... asyncResult.result().bodyAsString() ...

The JSON helper class can parse a valid response to an object:

Airport[] airportArray = Json.decodeValue(message.body(), Airport[].class);


The service address provided to the convenience method is resolved based on values provided in the configuration properties:

service:
  airports:
    baseUrl: http://edge:8080/airports

The service address is resolved, based on the service name, to a URL with the hostname of edge and port 8080. The Edge service uses the second part of the address, the root web context, to redirect the request through static or dynamic routing, as explained later in this document.

The provided hostname of edge is the OpenShift service name, and is resolved to the cluster IP address of the service, then routed to an internal OpenShift load balancer. The OpenShift service name is determined when a service is created using the oc tool; when deploying an image using the fabric8 Maven plugin, it is declared in the service yaml file.

To emphasize, it should be noted that all calls are being routed to an OpenShift internal load balancer, which is aware of replication and failure of service instances, and can redirect the request properly.
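
For illustration, the Service object behind the edge hostname is broadly of the following shape; the actual definition is generated as described above, so the field values here are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: edge            # resolvable as hostname "edge" within the project
spec:
  ports:
  - port: 8080          # port used by callers
    targetPort: 8080    # port exposed by the container
  selector:
    app: edge           # requests are balanced across all matching pods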

5.6. VERT.X WEB APPLICATION

5.6.1. Overview

The Presentation service uses the Vert.x web application capabilities to expose HTML and JavaScript and run a client-side application in the browser.

5.6.2. Context Disambiguation

The Presentation service listens for health probes on /health, REST calls on /gateway/airportCodes and /gateway/query, and reroutes other calls to static resources for the presentation layer:

Router router = Router.router(vertx);
router.get("/health").handler(routingContext -> routingContext.response().end("OK"));
router.get("/gateway/airportCodes").handler(this::getAirportCodes);
router.get("/gateway/query").handler(this::query);
router.get("/*").handler(StaticHandler.create("webapp"));
setupTracing(router);

5.6.3. Bower Package Manager

The Presentation service uses the Bower package manager to declare, download and update JavaScript libraries. Libraries, versions and components to download (or rather, those to ignore) are specified in a bower JSON file, as sketched below. Running bower install downloads the declared libraries to the bower_components directory, which can in turn be imported in the HTML application.
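
A minimal sketch of such a bower JSON file; the library names match those used later in this chapter, but the version ranges and ignore list are illustrative assumptions:

{
  "name": "presentation",
  "dependencies": {
    "patternfly": "~3.0",
    "jquery-ui": "~1.12",
    "bootstrap-table": "~1.11"
  },
  "ignore": [
    "node_modules",
    "bower_components"
  ]
}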

5.6.4. PatternFly

The HTML application developed for this reference architecture uses PatternFly to provide a consistent visual design and improved user experience.

PatternFly stylesheets are imported in the main html:

<!-- PatternFly Styles -->
<link href="bower_components/patternfly/dist/css/patternfly.min.css" rel="stylesheet"
      media="screen, print"/>
<link href="bower_components/patternfly/dist/css/patternfly-additions.min.css" rel="stylesheet"
      media="screen, print"/>

The associated JavaScript is also included in the header:

<!-- PatternFly -->
<script src="bower_components/patternfly/dist/js/patternfly.min.js"></script>

5.6.5. JavaScript

The presentation tier of this application is built in HTML5 and relies heavily on JavaScript. This includes using Ajax calls to the API gateway, as well as minor changes to HTML elements that are visible and displayed to the user.
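
As an illustration, a flight query from the browser might look like the following jQuery sketch; the query parameter names mirror those used by the gateway later in this chapter, while the element ID and table call are assumptions:

// Query the API gateway and load the returned itineraries into the results table
$.getJSON('gateway/query', {
    origin: 'BOS',
    destination: 'SFO',
    departureDate: '2018-03-01'
}, function (itineraries) {
    $('#results-table').bootstrapTable('load', itineraries);
});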

5.6.5.1. jQuery UI

Some features of the jQuery UI library, including autocomplete for airport fields, are utilized in the presentation layer.

5.6.5.2. jQuery Bootstrap Table

To display flight search results in a dynamic table with pagination, and the ability to expand each row to reveal more data, the jQuery Bootstrap Table library is included and utilized.

5.7. VERT.X CIRCUIT BREAKER

5.7.1. Overview

The Presentation service contains a Vert.x verticle that acts as an API gateway. The API gateway makes simple REST calls to the Airports service, similar to the previously discussed Flights service, but also calls the Sales service to get pricing information and uses a different pattern for this call. The Circuit Breaker is used to avoid a large number of hung connections and lengthy timeouts when the Sales service is down. It avoids flooding a failing service and makes the application fault tolerant by providing a fallback response. In this instance, flight information can be returned without providing a ticket price. The calls leverage the Observable reactive pattern to implement parallel processing.

5.7.2. Circuit Breaker

The Presentation service maintains a Vert.x Circuit Breaker as a field and configures it during initialization:

private void startWithConfig(Future<Void> startFuture, AsyncResult<JsonObject> configResult) {
    ...
    CircuitBreakerOptions circuitBreakerOptions = new CircuitBreakerOptions()
            .setMaxFailures(config().getInteger("pricing.failure.max", 3))
            .setTimeout(config().getInteger("pricing.failure.timeout", 1000))
            .setFallbackOnFailure(config().getBoolean("pricing.failure.fallback", false))
            .setResetTimeout(config().getInteger("pricing.failure.reset", 5000));
    circuitBreaker = CircuitBreaker.create("PricingCall", vertx, circuitBreakerOptions);

Configuration values for the circuit breaker are included in the service configuration file, which can be overridden in different environments:

pricing:
  pool:
    size: 20
  failure:
    max: 50
    timeout: 5000
    fallback: true
    reset: 30000

The circuit breaker can be used in multiple ways. The Presentation service provides a command and a fallback to the circuit breaker.

The command makes an HTTP request, returns a successful response as an Itinerary object and indicates a failure if the request fails:

Handler<Future<Itinerary>> priceFlightHandler = future -> {
    request.rxSendJson(flight).subscribe(httpResponse -> {
        future.complete(httpResponse.bodyAsJson(Itinerary.class));
    }, future::fail);
};

The fallback is to log the failure and create an Itinerary object based on the input, with no price attached:

Function<Throwable, Itinerary> priceFlightFailback = throwable -> {
    logger.log(Level.WARNING, "Fallback while obtaining price for " + flight, throwable);
    return new Itinerary(flight);
};

Given this command and fallback, you can make the call to get an Itinerary, with the fallback providing the response for errors within the scope of the circuit breaker:

Single<Itinerary> itinerarySingle = circuitBreaker.rxExecuteCommandWithFallback(priceFlightHandler,
        priceFlightFailback);

However, the Presentation service instead creates a list of Observable objects and uses the Zip pattern to process them in parallel:

private Observable<List<Itinerary>> priceFlights(RoutingContext routingContext, Flight[] flights) {
    HttpRequest<Buffer> request = webClient.postAbs(config().getString("service.sales.baseUrl")
            + "/price");
    traceOutgoingCall(routingContext, request);
    List<Observable<Itinerary>> itineraryObservables = new ArrayList<>();
    for (Flight flight : flights) {
        Handler<Future<Itinerary>> priceFlightHandler = future -> {
            request.rxSendJson(flight).subscribe(httpResponse -> {
                future.complete(httpResponse.bodyAsJson(Itinerary.class));
            }, future::fail);
        };
        Function<Throwable, Itinerary> priceFlightFailback = throwable -> {
            logger.log(Level.WARNING, "Fallback while obtaining price for " + flight, throwable);
            return new Itinerary(flight);
        };
        Observable<Itinerary> observable = circuitBreaker.rxExecuteCommandWithFallback(priceFlightHandler,
                priceFlightFailback).toObservable();
        itineraryObservables.add(observable);
    }
    return Observable.zip(itineraryObservables, objects -> {
        List<Itinerary> pricedItineraries = new ArrayList<>();
        for (Object object : objects) {
            pricedItineraries.add((Itinerary) object);
        }
        return pricedItineraries;
    });
}

5.8. OPENSHIFT CONFIGMAP

5.8.1. Overview

While considering the concurrent execution of pricing calls, the number of concurrent calls and other configuration values should be fine-tuned based on realistic expectations of load as well as the horizontal scaling of the environment. This configuration can be externalized by creating a ConfigMap for each OpenShift environment, with overriding values provided in a properties file that is then provided to all future pods.

5.8.2. Property File Mount

Refer to the steps in creating the environment for detailed instructions on how to create an external application properties file and mount it in the pod. The property file is placed in the application class path, and the provided values supersede those of the application.
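
The commands involved are broadly of the following shape; the ConfigMap name, properties file name and mount path are illustrative, and the environment chapter remains the authoritative reference:

$ oc create configmap presentation-config --from-file=app-config.yml
$ oc volume dc/presentation --add --name=config --type=configmap \
    --configmap-name=presentation-config --mount-path=/deployments/config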

5.9. VERT.X CONCURRENCY

Vert.x uses a variation of the Reactor pattern to handle load and deliver higher throughput. The multi-reactor pattern used in Vert.x involves the use of several event loops, where each can run on a CPU core without repeated and wasteful context switching.

In contrast with the Java EE threading model, the entry point of our Vert.x microservices is typically a single-threaded Verticle that quickly dispatches incoming requests as events. The event loops work on processing each request, but the reactive nature of Vert.x means that when the code is waiting for a response from another service, that event loop can process a different event without sitting idle or needlessly occupying an entire thread. For a better understanding of these concepts, refer to various available resources. For this discussion, a few of the patterns used in this application are reviewed and explained.


5.9.1. Verticles

Vert.x uses only a single instance of a verticle by default. In the simplest case of the Airports service, the verticle thread sets up an HTTP server and uses a router to delegate incoming requests to processing methods:

router.get("/airports").handler(this::getAirports); router.get("/airports/:airport").handler(this::getAirport);

Each request to either web context is treated as an event. The reactive pattern means that each request/event is made available to be picked up by the event loop, without the thread waiting for anything to complete. The event loop works on a request until its synchronous execution is completed, and then picks up the next event. Pending requests are queued, along with other events, and eventually processed.

The Airports service is very simple and lacks any I/O operations. As such, an entire request is handled as a single event.

5.9.2. Worker Verticles

A worker verticle is like a standard verticle, but is executed on the Vert.x worker thread pool, rather than an event loop. Worker verticles are designed for calling blocking code, as they won't block any event loops. Worker verticles still handle one event at a time, so multiple instances need to be deployed, with the worker pool size properly configured where required, to avoid introducing a bottleneck in the application. The Sales service, responsible for pricing tickets, is a very simple mock-up of a pricing algorithm, but treated as a complex blocking operation for demonstration purposes. The logic for calculating ticket prices is implemented in the SalesTicketingService class, itself modeled as a verticle:

public class SalesTicketingService extends AbstractVerticle {

On startup, the verticle registers itself as an Event Bus consumer, with its fully qualified class name as the address:

@Override
public void start(Future<Void> startFuture) {
    vertx.eventBus().consumer(SalesTicketingService.class.getName(),
            SalesTicketingService::price);

The entry point of the application is the Verticle specified in the Maven file. For this worker Verticle to be deployed, the following code snippet is added to the primary verticle startup:

DeploymentOptions deploymentOptions = new DeploymentOptions().setWorker(true);
vertx.deployVerticle(new SalesTicketingService(), deploymentOptions, deployResult ...);
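
Where the worker pool needs to be sized explicitly, or several instances deployed to avoid the bottleneck mentioned above, DeploymentOptions supports that as well; a sketch with illustrative values:

DeploymentOptions workerOptions = new DeploymentOptions()
        .setWorker(true)
        .setInstances(4)              // several instances to process events in parallel
        .setWorkerPoolName("pricing") // a dedicated pool, isolated from other workers
        .setWorkerPoolSize(20);       // number of threads available to this pool
vertx.deployVerticle(SalesTicketingService.class.getName(), workerOptions);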

The verticle then sends messages to the worker through the event bus, also registering a listener to handle the response:

vertx.eventBus().<String>send(SalesTicketingService.class.getName(), buffer.toString(),
        asyncResult -> respondPrice(routingContext, asyncResult));

Messages posted to the event bus for this address are routed to the price method:


private static void price(Message<String> message) {
    Flight flight = Json.decodeValue(message.body(), Flight.class);
    ...
    message.reply(Json.encode(itinerary));
}

The event bus response is then inspected and handled by the main verticle:

private void respondPrice(RoutingContext routingContext, AsyncResult<Message<String>> reply) {
    HttpServerResponse response = routingContext.response();
    if (reply.succeeded()) {
        response.putHeader("Content-Type", "application/json; charset=utf-8");
        response.end(reply.result().body());
    } else {
        Throwable exception = reply.cause();
        logger.log(Level.SEVERE, "Failed to price ticket", exception);
        response.setStatusCode(500);
        response.setStatusMessage(exception.getMessage());
        response.end();
    }
}

In this example, where we assumed blocking code in the pricing algorithm, the workers process event bus messages and reply to the main verticle, which returns the response in an event loop.

5.9.3. Combining Patterns

Applications often end up combining several patterns to get the desired effect. The Flights service uses a worker verticle called FlightSchedulingService. It also handles a number of events before and after the worker verticle does its processing:

String url = config().getString("service.airports.baseUrl") + "/airports";
HttpRequest<Buffer> httpRequest = webClient.getAbs(url);
Span childSpan = traceOutgoingCall(httpRequest, routingContext, HttpMethod.GET, url);
HttpServerResponse response = routingContext.response();
httpRequest.send(asyncResult -> {
    if (asyncResult.succeeded()) {
        closeTracingSpan(childSpan, asyncResult.result());
        logger.fine("Got code " + asyncResult.result().statusCode());
        DeliveryOptions options = new DeliveryOptions();
        options.addHeader("date", routingContext.request().getParam("date"));
        options.addHeader("origin", routingContext.request().getParam("origin"));
        options.addHeader("destination", routingContext.request().getParam("destination"));
        vertx.eventBus().<String>send(FlightSchedulingService.class.getName(),
                asyncResult.result().bodyAsString(), options, reply -> {
            if (reply.succeeded()) {
                response.putHeader("Content-Type", "application/json; charset=utf-8");
                response.end(reply.result().body());
            } else {
                handleExceptionResponse(response, reply.cause());
            }
        });
    } else {
        Throwable throwable = asyncResult.cause();
        closeTracingSpan(childSpan, throwable);
        handleExceptionResponse(response, throwable);
    }
});

The incoming web request is itself always an event. In this example, this event is processed by making a call to another microservice, in the first 5 lines of the above code snippet. After this event, the event loop may tend to other business while the response of the Airports service is pending or, in cases of high load, has been received and is queued. The second event involves combining that response with parts of the original request, stored as delivery option headers, and sending it as a message on the event bus. The next step in processing this business request is handled by the worker thread pool. The response from the event bus is a third and final event for this business request in this verticle, resulting in a response to the caller.

5.9.4. Reactive Patterns

The Presentation service makes extensive use of the Vert.x Reactive API. RxJava is a library for composing asynchronous and event-based programs using observable sequences for the Java Virtual Machine. The simplest example of this is the use of rxSend instead of send:

HttpRequest<Buffer> httpRequest = webClient.getAbs(url);
Span span = traceOutgoingCall(httpRequest, routingContext, HttpMethod.GET, url);
Single<HttpResponse<Buffer>> responseSingle = httpRequest.rxSend();

The rxSend() method returns a Single object, which can then be used to handle the response with the reactive API. To better adapt the response to the use case, the map operation is used to convert the response type to Airport[]:

Single<Airport[]> airportsSingle = httpRequest.rxSend().map(httpResponse -> {
    closeTracingSpan(span, httpResponse);
    if (httpResponse.statusCode() < 300) {
        return httpResponse.bodyAsJson(Airport[].class);
    } else {
        throw new RuntimeException("Airport call failed with " + httpResponse.statusCode()
                + ": " + httpResponse.statusMessage());
    }
});

The only addition here was the mapping code, which either throws an exception or maps the generic response to the concrete return type that is expected.

Finally, the getAirports() method goes one step further and converts the response to an Observable by calling the toObservable() method on the Single object:

private Observable<Airport[]> getAirports(RoutingContext routingContext) {
    String url = config().getString("service.airports.baseUrl") + "/airports";
    HttpRequest<Buffer> httpRequest = webClient.getAbs(url);
    Span span = traceOutgoingCall(httpRequest, routingContext, HttpMethod.GET, url);
    return httpRequest.rxSend().map(httpResponse -> {
        closeTracingSpan(span, httpResponse);
        if (httpResponse.statusCode() < 300) {
            return httpResponse.bodyAsJson(Airport[].class);
        } else {
            throw new RuntimeException("Airport call failed with " + httpResponse.statusCode()
                    + ": " + httpResponse.statusMessage());
        }
    }).toObservable();
}

5.9.5. Orchestrating Reactive Calls

To process a query request, the Presentation service needs to take the following steps:

- Get the list of airports
- Get the list of flights from origin to destination, given airport data
- Obtain pricing information for the above flights
- If a return flight is requested, get the list of flights in the reverse direction
- If a return flight is requested, obtain pricing for the above flights
- Combine the above itineraries, sort the results and return them to the caller

This is accomplished by the use of the Observer pattern, where each step is first implemented as an Observable. To get airports:

private Observable<Airport[]> getAirports(RoutingContext routingContext) {
    ...
    return httpRequest.rxSend().map(httpResponse -> {
        ...
    }).toObservable();

To get flights given airport information:

private Observable<Flight[]> getFlights(RoutingContext routingContext, Airport[] airports...
    ...
    return httpRequest.rxSend().map( ...
    }).toObservable();

Obtaining prices for flights is more complicated. To demonstrate the challenges and solutions involved, it is assumed that the service can only calculate the price of one flight at a time. Given multiple flights, the result is therefore a list of Observable objects:

private Observable<List<Itinerary>> priceFlights(RoutingContext routingContext, Flight[] flights) {
    String url = config().getString("service.sales.baseUrl") + "/price";
    List<Observable<Itinerary>> itineraryObservables = new ArrayList<>();
    for (Flight flight : flights) {
        Handler<Future<Itinerary>> priceFlightHandler = future -> {
            HttpRequest<Buffer> request = webClient.postAbs(url);
            request.rxSendJson(flight)...);
        };
        Function<Throwable, Itinerary> priceFlightFailback = ...
        Observable<Itinerary> observable = circuitBreaker.rxExecuteCommandWithFallback(priceFlightHandler,
                priceFlightFailback).toObservable();
        itineraryObservables.add(observable);
    }

The Zip pattern is used to indicate that when all results are available, they should be aggregated as a single result and returned:

return Observable.zip(itineraryObservables, objects -> {
    List<Itinerary> pricedItineraries = new ArrayList<>();
    for (Object object : objects) {
        pricedItineraries.add((Itinerary) object);
    }
    return pricedItineraries;
});

There are a number of Observable operators that can be very helpful in orchestrating these reactive calls into a sequence of dependencies. Given a list of airports to be retrieved, a list of flights to be retrieved can be defined through the flatMap operation:

Observable<Airport[]> airportsObservable = getAirports(routingContext);
Observable<Flight[]> flightsObservable = airportsObservable.flatMap(airports ->
        getFlights(routingContext, airports, "departureDate", "origin", "destination"));

Similarly, the eventual list of flights can be used to obtain eventual pricing for them:

Observable<List<Itinerary>> itinerariesObservable = flightsObservable.flatMap(flights ->
        priceFlights(routingContext, flights));

For a one-way flight, all that is left is to wait for the content of the above Observable to be retrieved, so that it can be sorted and returned to the caller. Things are a bit more complicated for a round-trip flight. Two similar steps need to take place to obtain eventual return itineraries:

Observable<Flight[]> returnFlightsObservable = airportsObservable.flatMap(airports ->
        getFlights(routingContext, airports, "returnDate", "destination", "origin"));
Observable<List<Itinerary>> returnItinerariesObservable = returnFlightsObservable.flatMap(flights ->
        priceFlights(routingContext, flights));

These then have to be matched against the departing itineraries to form complete tickets. Once again, the Zip operator can be used to combine and map the eventual results:

itinerariesObservable = itinerariesObservable.zipWith(returnItinerariesObservable,
        (departureItineraries, returnItineraries) -> {
    List<Itinerary> itineraries = new ArrayList<>();
    for (Itinerary departingItinerary : departureItineraries) {
        for (Itinerary returnItinerary : returnItineraries) {
            Itinerary itinerary = new Itinerary(departingItinerary.getFlights()[0],
                    returnItinerary.getFlights()[0]);
            itinerary.setPrice(departingItinerary.getPrice() + returnItinerary.getPrice());
            itineraries.add(itinerary);
        }
    }
    return itineraries;
});

Whether it is a one-way or a round-trip flight query, the final processing of the eventual list of itineraries is a simple matter of sorting and returning the results, or handling any errors that may have happened along the way and bubbled up:

itinerariesObservable.subscribe(itineraries -> {
    itineraries.sort(Itinerary.durationComparator);
    itineraries.sort(Itinerary.priceComparator);
    logger.fine("Returning " + itineraries.size() + " flights");
    routingContext.response().putHeader("Content-Type", "application/json; charset=utf-8");
    routingContext.response().end(Json.encode(itineraries));
}, throwable -> handleExceptionResponse(routingContext, throwable));

5.10. JAEGER

5.10.1. Overview

This reference architecture uses Jaeger and the OpenTracing API to collect and broadcast tracing data to the Jaeger back end, which is deployed as an OpenShift service and backed by persistent Cassandra database images. The tracing data can be queried from the Jaeger console, which is exposed through an OpenShift route.

5.10.2. Cassandra Database

5.10.2.1. Persistence

The Cassandra database configured by the default jaeger-production template is an ephemeral datastore with 3 replicas. This reference architecture adds persistence to Cassandra by configuring persistent volumes.

5.10.2.2. Persistent Volume Claims

This reference architecture uses volumeClaimTemplates to dynamically create the required number of persistent volume claims:

volumeClaimTemplates:
- metadata:
    name: cassandra-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
- metadata:
    name: cassandra-logs
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi

These two volume claim templates generate a total of 6 persistent volume claims for the three Cassandra replicas.

5.10.2.3. Persistent Volume

In most cloud environments, corresponding persistent volumes would be available or dynamically provisioned. The reference architecture lab creates and mounts a logical volume that is exposed through NFS. In total, 6 OpenShift persistent volumes serve to expose the storage to the image. Once the storage is set up and shared by the NFS server:

$ oc create -f Jaeger/jaeger-pv.yml
persistentvolume "cassandra-pv-1" created
persistentvolume "cassandra-pv-2" created
persistentvolume "cassandra-pv-3" created
persistentvolume "cassandra-pv-4" created
persistentvolume "cassandra-pv-5" created
persistentvolume "cassandra-pv-6" created
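
Each of the persistent volumes in Jaeger/jaeger-pv.yml is broadly of the following shape; the NFS server address and export path below are placeholders rather than the lab values:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-pv-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: nfs.example.com       # placeholder NFS server
    path: /exports/cassandra-1    # placeholder export path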

5.10.3. Jaeger Image

Other than configuring persistence, this reference architecture uses the Jaeger production template of the jaeger-openshift project as provided in the latest release at the time of writing. This results in the use of version 0.6 images from jaeger-tracing:

- description: The Jaeger image version to use
  displayName: Image version
  name: IMAGE_VERSION
  required: false
  value: "0.6"

5.10.4. Jaeger Tracing Client

While the Jaeger service allows distributed tracing data to be aggregated, persisted and used for reporting, this application also relies on the client-side Java implementation of the OpenTracing API by Jaeger to correlate calls and send data to the server.

Integration with Vert.x makes it very easy to use Jaeger in the application. Include the libraries by declaring a dependency in the project Maven file:

<!-- Jaeger client for distributed tracing with opentracing-vertx-web -->
<dependency>
    <groupId>com.uber.jaeger</groupId>
    <artifactId>jaeger-core</artifactId>
    <version>0.24.0</version>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-vertx-web</artifactId>
    <version>0.1.0</version>
</dependency>

Also specify, in the application properties, the percentage of requests that should be traced, as well as the connection information for the Jaeger server. Once again, we rely on the OpenShift service abstraction to reach the Jaeger service; jaeger-agent is the OpenShift service name:

JAEGER_AGENT_HOST: jaeger-agent
JAEGER_AGENT_PORT: 6831
JAEGER_SERVICE_NAME: presentation
JAEGER_REPORTER_LOG_SPANS: true
JAEGER_SAMPLER_TYPE: const
JAEGER_SAMPLER_PARAM: 1

The sampler type is set to const, indicating that the Constant Sampler should be used. The sampling rate of 1, meaning 100%, is therefore already implied, but is a required configuration that should be left in. The Jaeger service name affects how tracing data from this service is reported by the Jaeger console, and the agent host and port are used to reach the Jaeger agent through UDP.
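
In a production environment where tracing every request would be too costly, a probabilistic sampler could be configured instead; for example, the following sketch would trace roughly 10% of requests:

JAEGER_SAMPLER_TYPE: probabilistic
JAEGER_SAMPLER_PARAM: 0.1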

These properties are used by the application to create a Configuration object:

Configuration.SamplerConfiguration samplerConfiguration = new Configuration.SamplerConfiguration(
        config.getString("JAEGER_SAMPLER_TYPE"),
        config.getDouble("JAEGER_SAMPLER_PARAM"),
        config.getString("JAEGER_SAMPLER_MANAGER_HOST_PORT"));

Configuration.ReporterConfiguration reporterConfiguration = new Configuration.ReporterConfiguration(
        config.getBoolean("JAEGER_REPORTER_LOG_SPANS"),
        config.getString("JAEGER_AGENT_HOST"),
        config.getInteger("JAEGER_AGENT_PORT"),
        config.getInteger("JAEGER_REPORTER_FLUSH_INTERVAL"),
        config.getInteger("JAEGER_REPORTER_MAX_QUEUE_SIZE"));

Configuration configuration = new Configuration(
        config.getString("JAEGER_SERVICE_NAME"),
        samplerConfiguration,
        reporterConfiguration);

Obtain a Jaeger tracer based on this configuration and use it to create a TracingHandler that can be associated with the router:

TracingHandler tracingHandler = new TracingHandler(configuration.getTracer());
router.route().handler(tracingHandler).failureHandler(tracingHandler);

These steps are enough to collect tracing data for incoming HTTP calls, but the active tracing span can also be obtained for finer control:

private Span getActiveSpan(RoutingContext routingContext) {
    return routingContext.get(TracingHandler.CURRENT_SPAN);
}


For example, while every remote call can produce and store a trace by default, adding a tag can help to better understand traces:

getActiveSpan(routingContext).setTag("Operation", "Look Up Airports");

5.10.4.1. Tracing Outgoing Calls

With the version of Vert.x available at the time of writing and used in this reference architecture, tight coupling with Jaeger to seamlessly trace outgoing calls and correlate tracing data between services is not possible. Doing so requires a few lines of code. The Presentation service relies on a number of convenience methods to propagate tracing data on every outgoing web call:

Span span = traceOutgoingCall(httpRequest, routingContext, HttpMethod.GET, url);

This method creates a child span for the outgoing HTTP request and then uses the Jaeger tracer to obtain the required data from this span and inject equivalent headers into the HTTP request:

private Span traceOutgoingCall(HttpRequest<Buffer> httpRequest, RoutingContext routingContext,
        HttpMethod method, String url) {
    Tracer tracer = configuration.getTracer();
    Tracer.SpanBuilder spanBuilder = tracer.buildSpan("Outgoing HTTP request");
    Span activeSpan = getActiveSpan(routingContext);
    if (activeSpan != null) {
        spanBuilder = spanBuilder.asChildOf(activeSpan);
    }
    Span span = spanBuilder.start();
    Tags.SPAN_KIND.set(span, Tags.SPAN_KIND_CLIENT);
    Tags.HTTP_URL.set(span, url);
    Tags.HTTP_METHOD.set(span, method.name());

    Map<String, String> headerAdditions = new HashMap<>();
    tracer.inject(span.context(), Format.Builtin.HTTP_HEADERS,
            new TextMapInjectAdapter(headerAdditions));
    headerAdditions.forEach(httpRequest.headers()::add);

    return span;
}

After the call, when the HTTP response is received, the span can be closed by calling another convenience method:

closeTracingSpan(span, httpResponse);

This convenience method checks the HTTP response status code and records it on the span. If a response code of 400 or above is returned, an error tag is set on the span. Finally, the span is marked as finished:

private void closeTracingSpan(Span span, HttpResponse<Buffer> httpResponse) {
    int status = httpResponse.statusCode();
    Tags.HTTP_STATUS.set(span, status);
    if (status >= 400) {
        Tags.ERROR.set(span, true);
    }
    span.finish();
}

Another similar convenience method is provided to handle errors, where the exception is logged in the span along with a generic error tag:

private void closeTracingSpan(Span span, Throwable throwable) {
    Tags.ERROR.set(span, true);
    span.log(WebSpanDecorator.StandardTags.exceptionLogs(throwable));
    span.finish();
}

5.10.4.2. Baggage Data

While the Jaeger client library is primarily intended as a distributed tracing tool, its ability to correlate distributed calls can have other practical uses as well. Every created span allows the attachment of arbitrary data, called a baggage item, that will be automatically inserted into the HTTP header and seamlessly carried along with the business request from service to service, for the duration of the span. This application is interested in making the original caller's IP address available to every microservice. In an OpenShift environment, the calling IP address is stored in the HTTP header under a standard key. To retrieve and set this value on the span:

querySpan.setBaggageItem("forwarded-for", routingContext.request().getHeader("x-forwarded-for"));

This value will later be accessible from any service within the same call span under the header key of uberctx-forwarded-for. It is used by the Edge service in JavaScript to perform dynamic routing.

5.11. EDGE SERVICE

5.11.1. Overview

This reference architecture uses a reverse proxy for all calls between microservices. The reverse proxy is a custom implementation of an edge service, with support for both declarative static routing, as well as dynamic routing through simple JavaScript or other pluggable implementations.

5.11.2. Usage

By default, the Edge service uses static declarative routing as defined in its application properties:

edge:
  proxy:
    presentation:
      address: http://presentation:8080
    airports:
      address: http://airports:8080
    flights:
      address: http://flights:8080
    sales:
      address: http://sales:8080


It should be noted that declarative mapping is considered static only because routing is based on the service address, without regard for the content of the request or other contextual information. It is still possible to override the mapping properties through an OpenShift ConfigMap, as outlined in the section on configuring thread pools, to change the mapping rules for a given web context.

Dynamic routing through JavaScript is a simple matter of reassigning the host field:

...
if( jsonObject["host"].startsWith("sales") ) {
    var ipAddress = jsonObject["forwarded-for"];
    console.log( 'Got IP Address as ' + ipAddress );
    if( ipAddress ) {
        var lastDigit = ipAddress.substring( ipAddress.length - 1 );
        console.log( 'Got last digit as ' + lastDigit );
        if( lastDigit % 2 != 0 ) {
            console.log( 'Rerouting to B instance for IP Address ' + ipAddress );
            //Odd IP address, reroute for A/B testing:
            jsonObject["host"] = "sales2:8080";
        }
    }
}

Mapping through the application properties happens first, and if a script does not modify the value of hostAddress, the original mapping remains effective.

5.11.3. Implementation Details

This edge service uses standard vertx-web and vertx-web-client to receive calls and proxy them. For ease of use, a global error handler is defined and the BodyHandler is used to easily retrieve the body of each request:

router.route().handler(BodyHandler.create());
router.route("/*").handler(this::proxy);
router.route().failureHandler(rc -> {
    logger.log(Level.SEVERE, "Error proxying request", rc.failure());
    rc.response().setStatusCode(500).setStatusMessage(rc.failure().getMessage()).end();
});

The main logic of the edge verticle is provided in the proxy method. The service uses its routing rules to set the destination address by making use of the Observable provided by the getHostAddress() method:

private void proxy(RoutingContext routingContext) {
    HttpServerRequest request = routingContext.request();
    HttpServerResponse response = routingContext.response();
    getHostAddress(request).subscribe(proxiedURL -> {
        logger.fine("Will forward request to " + proxiedURL);


The getHostAddress() method supplies the required context information to the mappers in a JSON object, and chains together mappers that will return an updated JSON object pointing to the proxied address:

private Observable<String> getHostAddress(HttpServerRequest request) {
    JsonObject jsonObject = new JsonObject();
    jsonObject.put("scheme", request.scheme());
    jsonObject.put("host", request.host());
    jsonObject.put("path", request.path());
    jsonObject.put("query", request.query());
    jsonObject.put("forwarded-for", request.getHeader("uberctx-forwarded-for"));
    Observable<Message<JsonObject>> hostAddressObservable =
            getHostAddressObservable(PropertyMapper.class.getName(), jsonObject);
    for (String busAddress : customMappers.keySet()) {
        hostAddressObservable = hostAddressObservable.flatMap(message ->
                getHostAddressObservable(busAddress, message.body()));
    }
    return hostAddressObservable.map(mapMessage -> constructURL(mapMessage.body()));
}

The response is read back into a URL string:

private String constructURL(JsonObject jsonObject) {
    String scheme = jsonObject.getString("scheme");
    String host = jsonObject.getString("host");
    String path = jsonObject.getString("path");
    String query = jsonObject.getString("query");
    String url = scheme + "://" + host + path;
    if (query != null) {
        url += '?' + query;
    }
    return url;
}

Once the destination address is known, the actual proxy implementation is simple through the use of the Vert.x libraries:

Buffer body = routingContext.getBody();

HttpRequest<Buffer> serviceRequest = client.requestAbs(request.method(), proxiedURL);
serviceRequest.headers().addAll(request.headers());
serviceRequest.sendBuffer(body, asyncResult -> {
    if (asyncResult.succeeded()) {
        HttpResponse<Buffer> serviceResponse = asyncResult.result();
        response.setStatusCode(serviceResponse.statusCode())
                .setStatusMessage(serviceResponse.statusMessage());
        if (serviceResponse.statusCode() < 300) {
            response.headers().addAll(serviceResponse.headers());
        }
        response.end(serviceResponse.bodyAsBuffer());
    } else {
        response.setStatusCode(500).setStatusMessage(asyncResult.cause().getMessage()).end();
    }
});

Mapping is provided by a number of worker verticles consuming event bus messages. This starts with the PropertyMapper, which is automatically deployed:

deploymentObservables.add( deployVerticle(startFuture, PropertyMapper.class.getName(), options) );

...

private Observable<String> deployVerticle(Future<Void> future, String name, DeploymentOptions options) {
    return vertx.rxDeployVerticle(name, options).doOnError(future::fail).toObservable();
}

The PropertyMapper looks up the root web context in system properties, typically defined in an app-config.yml file, and uses the value as the destination address.
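
The PropertyMapper implementation is not reproduced here, but its mapping logic is broadly of the following shape; the property naming convention and the host rewrite are assumptions for illustration only:

public class PropertyMapper extends AbstractVerticle {

    @Override
    public void start() {
        vertx.eventBus().<JsonObject>consumer(PropertyMapper.class.getName(), message -> {
            JsonObject json = message.body();
            // Derive the root web context from the path, e.g. "/airports/BOS" -> "airports"
            String[] parts = json.getString("path").split("/");
            if (parts.length > 1) {
                // Look up a property such as edge.proxy.airports.address
                String address = System.getProperty("edge.proxy." + parts[1] + ".address");
                if (address != null) {
                    // Point the host at the configured service address, scheme stripped
                    json.put("host", address.replaceFirst("^https?://", ""));
                }
            }
            message.reply(json);
        });
    }
}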

The presence of the JavaScript file is detected, and it is added to the mapping chain:

private static final String JS_FILE_NAME = "/edge/routing.js";
private static final String JS_BUS_ADDRESS = "routing.js";

private WebClient client;
private Map<String, String> customMappers = getCustomMappers();

...

private Map<String, String> getCustomMappers() {
    Map<String, String> mappers = new HashMap<>();
    if (new File(JS_FILE_NAME).exists()) {
        mappers.put(JS_BUS_ADDRESS, JS_FILE_NAME);
    }
    return mappers;
}

...

customMappers.values().forEach(name -> deploymentObservables.add( deployVerticle(startFuture, name, options) ));

...

for (String busAddress : customMappers.keySet()) {
    hostAddressObservable = hostAddressObservable.flatMap(message ->
            getHostAddressObservable(busAddress, message.body()));
}

The JavaScript registers itself with the event bus when deployed:


vertx.eventBus().consumer("routing.js", function (message) {
    var jsonObject = map( message.body() );
    message.reply(jsonObject);
});
console.log("Loaded routing.js");

5.11.4. A/B Testing

To implement A/B testing, the Sales2 service introduces a minor change in the algorithm for calculating fares. Dynamic routing is provided by Edge through JavaScript.

Only calls to the Sales service are potentially filtered:

if( jsonObject["host"].startsWith("sales") ){
    ...
}

From those calls, requests that originate from an IP address ending in an odd digit are filtered, by modifying the host field of the JSON object:

var ipAddress = jsonObject["forwarded-for"];
console.log( 'Got IP Address as ' + ipAddress );
if( ipAddress ) {
    var lastDigit = ipAddress.substring( ipAddress.length - 1 );
    console.log( 'Got last digit as ' + lastDigit );
    if( lastDigit % 2 != 0 ) {
        console.log( 'Rerouting to B instance for IP Address ' + ipAddress );
        //Odd IP address, reroute for A/B testing:
        jsonObject["host"] = "sales2:8080";
    }
}

To enable dynamic routing without changing application code, shared storage is made available to the OpenShift nodes, and a persistent volume is created and claimed. With the volume set up and the JavaScript in place, the OpenShift deployment config can be adjusted administratively to mount a directory as a volume:

$ oc volume dc/edge --add --name=edge --type=persistentVolumeClaim --claim-name=edge --mount-path=/edge

This results in a lookup for a routing.js file under the edge directory. If found, it is deployed as a Verticle and included in the mapping chain by sending messages to its event bus address:

private Map<String, String> getCustomMappers() {
    Map<String, String> mappers = new HashMap<>();
    if (new File(JS_FILE_NAME).exists()) {
        mappers.put(JS_BUS_ADDRESS, JS_FILE_NAME);
    }
    return mappers;
}


CHAPTER 6. CONCLUSION

Vert.x provides an event-driven and non-blocking platform that allows for a higher degree of concurrency with a small number of kernel threads. The approach, along with the large and growing ecosystem, makes Vert.x well-suited to the development of microservices. Vert.x libraries provide powerful and easy integration with a large number of features. Services built on this platform are easily deployed on Red Hat® OpenShift Container Platform 3.

This paper and its accompanying technical implementation seek to serve as a useful reference for building an application in the microservice architectural style, while providing a proof of concept that can easily be replicated in a customer environment.


APPENDIX A. AUTHORSHIP HISTORY

Revision    Release Date    Author(s)

1.0         Mar 2018        Babak Mozaffari


APPENDIX B. CONTRIBUTORS

We would like to thank the following individuals for their time and patience as we collaborated on this process. This document would not have been possible without their many contributions.

Contributor Title Contribution

Pavol Loffay Software Engineer Jaeger Tracing

Clement Escoffier Principal Software Engineer Code Review

Thomas Segismont Senior Software Engineer Technical Content Review


APPENDIX C. REVISION HISTORY

Revision 1.0-0    March 2018    BM
