Service architecture

HeartAI services are the primary application-level software components that provide domain-relevant behaviour, including functionalities such as:

  • Data integration.
  • Data processing.
  • Data linkage.
  • Data aggregation.
  • Data brokering.
  • Reporting and analytics.

Services often implement reactive microservices architectures and follow concepts from The Reactive Manifesto. For HeartAI platform development, service design encourages high-performance and extensible architectures. Current HeartAI service design supports:

  • Natively cloud deployable and distributable services.
  • High-performance services, with support for real-time data streaming.
  • Well-defined service scope, with service context defined with a corresponding domain entity.
  • Mature support for backing service integration, such as PostgreSQL data server integration and Apache Kafka message bus integration.
  • Hardened security constructs including identity integration and rigorous logging, monitoring, and auditing capabilities.
  • Service development that allows iterative and well-managed development practices.

These capabilities are particularly important for the digital health ecosystem, where there are many data and application assets, and often service requirements are complex with a large variety of interface standards. To support health system care, HeartAI services have the capability to provide:

  • Broad support for data interface standards, including international, legacy, and proprietary standards, such as the HL7 health data standard.
  • High-performance data processing, with support for stream-native data interfacing and transmission. This allows interfacing with high-throughput data generation systems such as:
    • Patient observation machines.
    • Anaesthetic machines.
    • Ambulance GPS devices.
    • Wearable devices.
    • Bio-implantable devices.

HeartAI system services

HeartAI provides and develops a range of system services for end-user and client use. These include data services, linkage services, reporting services, and analytics services. Further information about HeartAI system services may be found with the following documentation.

Service architecture overview

HeartAI services implement reactive microservices architectures and follow concepts from The Reactive Manifesto. Many architectural concepts of the system may be considered as reactive design patterns and event-driven architectures. The overall composition of these services creates the service-level application software of HeartAI. Services often represent domain models and bounded contexts, for example the HIB interface service implements domain functionality specific to the interface to the SA Health Health Information Broker (HIB). From the perspective of software design, service boundaries are bounded contexts and the state of domain entities are represented as corresponding aggregate roots. From the perspective of computational resource usage a service often implements delimited consistency to define a logical computation boundary. The Lagom Framework provides service abstractions, the Play Framework provides web services, and Akka provides an actor-based concurrency model. These approaches allow for robust message passing and event-driven architectures, providing systems that are extensible, scalable, reactive, performant, secure, resilient, and tolerant to failure.

HeartAI services provide:

  • Data architectures with performant persistence mechanisms that support event sourcing and CQRS, with a decomposition of write-side and read-side responsibilities. Write-side persistence optimises for high-throughput and low-latency transactions. Read-side persistence optimises for dynamic and performant query operations.

  • Decoupled inter-service communication, often through implementing a publish-subscribe message bus paradigm of data communication. These approaches allow services to communicate by translating message bus streaming layers into natively supported Akka Streams software streaming layers. Through these approaches, HeartAI services support the Reactive Streams specification, including support for non-blocking backpressure propagation. The composition of these service functionalities allows the overall system to be elastic.

  • Application Programming Interface (API) layers that implement standard communication protocols. Current supported interfaces are: RESTful HTTP / HTTPS, WebSockets / WebSockets Secure, and Apache Kafka message bus endpoints. The current implementation primarily supports JSON and CBOR data encoding, however capability also exists for additional encoding formats.
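
To illustrate these API layers, the following hedged sketch shows how a streaming endpoint might be declared in a Lagom service interface; in Lagom, a service call whose request or response type is an Akka Streams Source is served over WebSockets rather than plain HTTP. The GreetingStreamServiceAPI trait, service name, and path shown here are illustrative assumptions rather than part of the HeartAI codebase.

import akka.NotUsed
import akka.stream.scaladsl.Source
import com.lightbend.lagom.scaladsl.api.Descriptor
import com.lightbend.lagom.scaladsl.api.Service
import com.lightbend.lagom.scaladsl.api.ServiceCall

trait GreetingStreamServiceAPI extends Service {

  // Streams greeting messages to the client over a WebSocket connection.
  def helloStream(id: String): ServiceCall[NotUsed, Source[String, NotUsed]]

  override def descriptor: Descriptor = {
    import Service._
    named("hello-stream")
      .withCalls(
        pathCall("/hello/api/public/hello_stream/:id", helloStream _))
      .withAutoAcl(true)
  }
}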

The following figure and table describe a typical HeartAI service architecture:

HAI-Service-Architecture.svg

Service component | Description
Service API | Service endpoint application-programming interface (API). Typically a web services endpoint with support for HTTP or WebSockets protocols. Provides a facade to the underlying implementation layer. Supports strong endpoint security and logging of endpoint interaction.
Service IMPL | Software-layer implementation for the service API. May provide service functionality directly, but often coordinates with service domain entity references through a service command, effectively functioning as a task scheduler. Also provides mechanisms for authentication/authorisation and subscription to the service-brokered message bus instances.
Entity cluster | Distributed entity cluster implemented with Akka Cluster. Provides primary domain behaviour through an event-sourcing paradigm. The service implementation layer communicates with this layer through domain commands, typically with asynchronous communication. Domain commands may generate domain events, which are persisted to the write-side database to guarantee eventual consistency of event acknowledgement. These events are also published to the software event stream to trigger downstream behaviour.
Write-side database | Write-side data persistence component of the service event-sourcing process. Optimised for high-throughput writes. Provides a guarantee of eventual consistency for the write-side component of the event-sourcing paradigm. Acknowledgements from the write-side database provide the base reference for successful communication with a corresponding domain entity.
Event stream | Software-layer event stream to trigger downstream event-driven behaviour. Publishes event-driven behaviour to (i) the service implementation layer, (ii) service-brokered message bus endpoints, and (iii) service read-side repositories.
Read-side database | Read-side data persistence component of the service event-sourcing process. Optimised for high-throughput reads. Through the service implementation, the read-side database subscribes to the software-layer event stream to process events into corresponding objects appropriate for structured persistence within the read-side data store. Persistence of these objects is eventually consistent with reference to the event stream, although consistency is typically achieved within seconds. Functions as a backing service to the service implementation for read-side database queries.
Message bus subscription | Message bus endpoints implemented with Apache Kafka. Enables service communication through a distributed publish-subscribe paradigm. Functions as a backing service to the service implementation for subscription to the message bus.
Message bus publication | Message bus endpoints implemented with Apache Kafka. Enables service communication through a distributed publish-subscribe paradigm. Functions as a backing service to the event stream for publication to the message bus.

Backing services

System services communicate with infrastructural components, such as persistent data systems, message buses, and logging utilities, as system backing services. These backing services are typically passed into the service through dependency injection, such as with the MacWire and Guice dependency injection frameworks. This allows dependent backing services to be built into the service implementation, decreasing reliance on specific backing service implementations by abstracting backing service functionality through an injectable unit of service-level software. Through inversion of control, these approaches increase service modularity and extensibility.
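
As a hedged sketch of this dependency injection approach, the following shows how a Lagom application loader might wire backing-service components into a service implementation with MacWire; the HelloWorldApplication and HelloWorldServiceIMPL names are assumptions for illustration rather than the production wiring.

import com.lightbend.lagom.scaladsl.broker.kafka.LagomKafkaComponents
import com.lightbend.lagom.scaladsl.server.LagomApplication
import com.lightbend.lagom.scaladsl.server.LagomApplicationContext
import com.lightbend.lagom.scaladsl.server.LagomServer
import com.softwaremill.macwire._
import play.api.libs.ws.ahc.AhcWSComponents

abstract class HelloWorldApplication(
  context: LagomApplicationContext)
  extends LagomApplication(context)
    with LagomKafkaComponents // Apache Kafka message bus backing service
    with AhcWSComponents {    // asynchronous HTTP client backing service

  // MacWire resolves the constructor dependencies of the implementation from the
  // components mixed in above and injects them as backing services.
  override lazy val lagomServer: LagomServer =
    serverFor[HelloWorldServiceAPI](wire[HelloWorldServiceIMPL])
}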

These backing services include:

Backing service | Description | Service-level management framework | Management framework functionality
Apache Kafka | Message-bus software | Alpakka Apache Kafka | Stream-native integration
PostgreSQL | Relational data system | Slick | JDBC protocol function-relational mapping
Apache Log4j | Logging utility | SLF4J | Facade layer

Lagom implementation

HeartAI services are developed with the Lagom microservices framework. Lagom provides libraries for the Scala and Java programming languages. HeartAI services are primarily developed with the Scala language. Lagom design choices provide a structured environment for developers to benefit from modern microservices software concepts, and many of the Lagom implementations are best practice for reactive microservices architectures.

Lagom provides best-practice constructs for reactive microservices development. Although Lagom is relatively opinionated as a framework, the native implementation of Akka and Play allows for considerable flexibility and extensibility.

Further information about the HeartAI Lagom implementation may be found with the following documentation:

Service endpoint application-programming interface architecture

HeartAI system service endpoints provide thin-layer application-programming interfaces (APIs), which are specified separately from corresponding service endpoint implementations. This allows API-specific components to be declared and managed independently of their corresponding implementation details.

Service interfaces

HeartAI system services provide abstractions for declaring service application-programming interfaces (APIs), which service clients may use as a manifest of service functionalities and communication protocols. Further information about interfacing with HeartAI system services may be found with the following documentation:

Example: Lagom service endpoint application-programming interface

A typical Lagom-designed API has declarations for service dependencies, service resources, service descriptors, and service class definitions. End-users and developers should consult the service API as a manifest of service functionalities and communication protocols, and design corresponding service clients with the service API as a contract for how to interface with the service.

Example: Lagom service endpoint application-programming interface - Service dependencies

The following example for a Lagom-designed service endpoint API shows the service dependencies for the HeartAI HelloWorldService service:

import akka.Done
import akka.NotUsed
import com.lightbend.lagom.scaladsl.api.broker.Topic
import com.lightbend.lagom.scaladsl.api.broker.kafka.KafkaProperties
import com.lightbend.lagom.scaladsl.api.broker.kafka.PartitionKeyStrategy
import com.lightbend.lagom.scaladsl.api.transport.Method
import com.lightbend.lagom.scaladsl.api.Descriptor
import com.lightbend.lagom.scaladsl.api.Service
import com.lightbend.lagom.scaladsl.api.ServiceCall
import com.typesafe.config.ConfigFactory
import net.heartai.core.PingServiceAPI
import play.api.libs.json.Format
import play.api.libs.json.Json
import java.util.UUID

The corresponding libraries are:

Library | Description | Reference
Akka | Akka actor system | https://akka.io
Lagom Scala DSL | Lagom implementation with Scala | https://www.lagomframework.com/documentation/latest/scala/Home.html
Typesafe Config | Typesafe JVM configuration library | https://github.com/lightbend/config
HeartAI PingService | Service functionality for ping endpoints |
Play JSON | Play package for JSON management | https://www.playframework.com/documentation/2.8.x/ScalaJson
Java Utils | Java utilities package | https://docs.oracle.com/javase/8/docs/api/java/util/package-summary.html

Example: Lagom service endpoint application-programming interface - Endpoint resources

Lagom provides ServiceCall and ServerServiceCall traits to implement HTTP-based service endpoint resources provided with the Play Framework.

Examples of ServiceCall for the HelloWorldService are shown with the following:

Service endpoint implementation resources

The service endpoint implementation resources corresponding to these example service endpoint API resources may be found at the following documentation section:

def helloPublic(
  id: String):
ServiceCall[NotUsed, Greeting]

def helloSecure(
  id: String):
ServiceCall[NotUsed, Greeting]

def updateGreetingMessage(
  id: String):
ServiceCall[GreetingMessage, Done]

These endpoint resources provide the following functionalities:

Service endpoint resource | Functionality
helloPublic() | Returns a Greeting message corresponding to the id of the resource. Each id has an individual Greeting message, with the default message being "Hello".
helloSecure() | Provides the same functionality as helloPublic(), but also requires a secure access token to be presented to the service endpoint.
updateGreetingMessage() | Allows the Greeting message corresponding to the id to be updated. Future invocations of helloPublic() or helloSecure() will return a Greeting with the updated message.

Example: Lagom service endpoint application-programming interface - Brokered message bus topics

Lagom also natively supports Topic traits to broker with a corresponding message bus, with integrated support for Apache Kafka.

Examples of Topic for the HelloWorldService are shown with the following:

def greetingUpdatedTopic():
Topic[Greeting]

These brokered topics provide the following functionalities:

Service brokered topic | Functionality
greetingUpdatedTopic() | Service broker to an Apache Kafka message bus endpoint, allowing a service-entity-generated GreetingMessageUpdatedEvent to be published to the message bus endpoint.

Example: Lagom service endpoint application-programming interface - Descriptor

Lagom provides abstractions for specifying service interface endpoints through the use of service descriptors. The Lagom Descriptor trait allows the specification of service endpoint resource pathing and provides automated methods for generating an access control list.

The Descriptor implementation for the example HelloWorldService is shown following:

override final def descriptor: Descriptor = {
  import Service._
  named("hello-world")
    .withCalls(
      restCall(Method.GET, "/hello/api/public/ping", pingService()),
      restCall(Method.POST, "/hello/api/public/ping", pingServiceByPOST()),
      restCall(Method.GET, "/hello/api/public/ping_ws_count", pingServiceByWebSocketCount),
      restCall(Method.GET, "/hello/api/public/ping_ws_echo", pingServiceByWebSocketEcho),
      restCall(Method.GET, "/hello/api/public/hello/:id", this.helloPublic _),
      restCall(Method.GET, "/hello/api/secure/hello/:id", this.helloSecure _),
      restCall(Method.POST, "/hello/api/secure/greeting/:id", this.updateGreetingMessage _))
    .withTopics(
      topic(HelloWorldServiceAPI.GREETING_MESSAGES_CHANGED_TOPIC, greetingUpdatedTopic _)
        .addProperty(
          KafkaProperties.partitionKeyStrategy,
          PartitionKeyStrategy[Greeting](_.id)))
    .withAutoAcl(true)
}

Example: Lagom service endpoint application-programming interface - Full declaration

The full declaration of the Lagom service endpoint application-programming interface for the HeartAI HelloWorldService may be found with the following documentation

Service endpoint implementation

Example: Lagom service endpoint implementation

The service implementation for a Lagom-designed service declares the primary implementation components of the service. The responsibilities of the service implementation typically include the internal declaration of service endpoint resources, approaches for coordinating with a corresponding service domain entity, and interface mechanisms for read-side repositories and service-brokered message buses.

Example: Lagom service endpoint implementation - Service dependencies

The following example shows the service dependencies for the HeartAI HelloWorldService service implementation:


import akka.Done
import akka.NotUsed
import akka.actor.ActorSystem
import akka.cluster.sharding.typed.scaladsl.ClusterSharding
import akka.cluster.sharding.typed.scaladsl.EntityRef
import akka.management.scaladsl.AkkaManagement
import akka.pattern.StatusReply
import akka.stream.Materializer
import akka.util.Timeout
import com.lightbend.lagom.scaladsl.api.ServiceCall
import com.lightbend.lagom.scaladsl.api.broker.Topic
import com.lightbend.lagom.scaladsl.api.transport.BadRequest
import com.lightbend.lagom.scaladsl.api.transport.ResponseHeader
import com.lightbend.lagom.scaladsl.broker.TopicProducer
import com.lightbend.lagom.scaladsl.persistence.EventStreamElement
import com.lightbend.lagom.scaladsl.persistence.PersistentEntityRegistry
import com.lightbend.lagom.scaladsl.persistence.ReadSide
import com.lightbend.lagom.scaladsl.server.ServerServiceCall
import com.typesafe.config.ConfigFactory
import net.heartai.core.PingServiceIMPL
import org.pac4j.core.authorization.authorizer.RequireAnyRoleAuthorizer.requireAnyRole
import org.pac4j.core.config.Config
import org.pac4j.core.profile.CommonProfile
import org.pac4j.lagom.scaladsl.SecuredService
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import slick.jdbc.JdbcBackend.Database
import scala.concurrent.ExecutionContext
import scala.concurrent.Future
import scala.concurrent.duration._

The corresponding libraries are:

Library | Description | Reference
Akka | Akka actor system | https://akka.io
Lagom Scala DSL | Lagom implementation with Scala | https://www.lagomframework.com/documentation/latest/scala/Home.html
Typesafe Config | Typesafe JVM configuration library | https://github.com/lightbend/config
HeartAI PingService | Service functionality for ping endpoints |
pac4j Lagom | Security library for Lagom | https://github.com/pac4j/lagom-pac4j
SLF4J | Logging facade for Java | http://www.slf4j.org/
Slick | Slick JDBC functional-relational mapping | https://scala-slick.org/
Scala concurrent | Scala standard library concurrency components | https://www.scala-lang.org/api/2.13.6/scala/concurrent/index.html

Example: Lagom service endpoint implementation - Service endpoint resources

Following the separation of resource implementation from the corresponding API declaration, Lagom service implementations define the methods of the endpoint resources. These service endpoint resources typically communicate with a corresponding service domain entity.

Service entity

The corresponding service entity to this example service implementation may be found at the following documentation section:

By message passing to these domain entities, service endpoint resources effectively function as task schedulers. Service endpoint resources are able to locate corresponding domain entities through the Lagom integrated instance of Akka Cluster. The corresponding domain entity is referenced by the Akka Cluster EntityRef. Akka Cluster provides internal name resolution for service entities across a distributed and sharded software-defined network (SDN), with message communication and serialisation that is managed at the SDN transport-level with Akka Artery Remoting.
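
As a minimal sketch of this entity resolution, the following shows how the implementation layer might obtain an EntityRef for the HelloWorld entity through Akka Cluster Sharding, assuming a clusterSharding instance injected into the service implementation and the HelloWorldState.typeKey shown later in this section. Asking the entity is asynchronous and is bounded by an implicit timeout.

import akka.cluster.sharding.typed.scaladsl.ClusterSharding
import akka.cluster.sharding.typed.scaladsl.EntityRef
import akka.util.Timeout
import scala.concurrent.duration._

// Resolves the sharded HelloWorld entity for the given identifier; Akka Cluster
// routes messages to whichever node currently hosts the entity.
private def entityRef(
  id: String): EntityRef[HelloWorldCommand] =
  clusterSharding.entityRefFor(HelloWorldState.typeKey, id)

// Bounds the asynchronous ask of the entity reference.
implicit val askTimeout: Timeout = Timeout(5.seconds)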

Service endpoint API resources

The corresponding service endpoint API resources to this example service implementation may be found at the following documentation section:

def askHello(
  id: String): NotUsed => Future[Greeting] = {
  (_: NotUsed) =>
    entityRef(id)
      .ask[StatusReply[GreetingIMPL]](
        replyTo => GreetingCommand(id, replyTo))
      .map(_.getValue.msg)
      .map(message =>
        Greeting(
          id = id,
          message = message))
}

override def helloPublic(
  id: String):
ServiceCall[NotUsed, Greeting] =
  ServiceCall {
    askHello(id)
  }

override def helloSecure(
  id: String):
ServiceCall[NotUsed, Greeting] =
  authorize(
    requireAnyRole[CommonProfile](keycloakAuthGroup), (_: CommonProfile) =>
      ServerServiceCall {
        (requestHeader, _: NotUsed) =>
          val response: Future[Greeting] =
            entityRef(id)
              .ask[StatusReply[GreetingIMPL]](
                replyTo => GreetingCommand(id, replyTo))
              .map(_.getValue.msg)
              .map(message =>
                Greeting(
                  id = id,
                  message = message))
          response
            .map(res =>
              (ResponseHeader.Ok, res))
      })

override def updateGreetingMessage(
  id: String):
ServiceCall[GreetingMessage, Done] =
  authorize(
    requireAnyRole[CommonProfile](keycloakAuthGroup), (_: CommonProfile) =>
      ServerServiceCall {
        request: GreetingMessage =>
          val ref = entityRef(id)
          ref
            .ask[StatusReply[Done]](
              replyTo =>
                UpdateGreetingMessageCommand(
                  request.message,
                  replyTo))
            .map(_.getValue)
      })

Example: Lagom service endpoint implementation - Brokered message bus topics

The following example shows the Lagom service implementation of brokering with a corresponding message bus for the HeartAI HelloWorldService:

override def greetingUpdatedTopic(): Topic[Greeting] =
  TopicProducer.taggedStreamWithOffset(HelloWorldEvent.Tag) {
    (tag, fromOffset) =>
      persistentEntityRegistry
        .eventStream(tag, fromOffset)
        .map(ev => (processEvent(ev), ev.offset))
  }

private def processEvent(
  helloWorldEvent: EventStreamElement[HelloWorldEvent]
): Greeting = {
  helloWorldEvent.event match {
    case _ =>
      Greeting(
        helloWorldEvent.entityId, "HelloWorld")
  }
}

Example: Lagom service endpoint implementation - Full declaration

The full declaration of the Lagom service endpoint implementation for the HeartAI HelloWorldService may be found with the following documentation

Service domain entity architecture

The states of system services are managed through the design concept of service domain entities. The state of a service domain entity is typically contained within a corresponding bounded context, often referenced by a corresponding aggregate root. Design of service domain entities also follows concepts and ideas from Domain-Driven Design.

State progression follows the principles of event sourcing. Through this approach, service behaviour is progressed within a domain entity by message passing a domain Command. This Command has the potential to generate domain Events. A domain Event itself has the potential to alter the domain State. Akka Persistence provides abstractions for managing these approaches and guaranteeing consistency of data persistence. PostgreSQL data servers provide write-side persistence of entity state as an event journal of all events generated within the service domain entity. This append-only process has high throughput and low latency, and generally provides useful advantages to the overall data architecture. Domain entities are internally instances of Akka actors and may themselves be passivated to, or rehydrated from, backing data server instances.

heartai-event-sourcing-process.svg

Example: Lagom service entity

Lagom provides implementations to define a service domain entity that represents the bounded context of the service, with Akka Cluster EntityRef instances corresponding to service entity aggregate roots. Lagom provides these capabilities by abstracting Akka Persistence support for persistent event-sourced behaviour.
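
As a hedged sketch of this event-sourced construction, assuming the HelloWorldState, commands, and events shown in the following examples, the entity behaviour might be assembled with Akka Persistence Typed as follows; event tagging and snapshotting configuration are omitted, and the behavior constructor name is illustrative.

import akka.cluster.sharding.typed.scaladsl.EntityContext
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.EventSourcedBehavior

def behavior(
  entityContext: EntityContext[HelloWorldCommand]):
EventSourcedBehavior[HelloWorldCommand, HelloWorldEvent, HelloWorldState] =
  EventSourcedBehavior
    .withEnforcedReplies[HelloWorldCommand, HelloWorldEvent, HelloWorldState](
      persistenceId = PersistenceId(
        entityContext.entityTypeKey.name,
        entityContext.entityId),
      emptyState = HelloWorldState.initial,
      // Commands may generate events, which are journalled to the write-side database.
      commandHandler = (state, cmd) => state.applyCommand(cmd),
      // Journalled events progress the entity state.
      eventHandler = (state, evt) => state.applyEvent(evt))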

External references

Further documentation about the Lagom implementation of service domain entity architecture may be found with the following external references:

Example: Lagom service entity - Entity commands

The following example shows the service entity Commands for the HeartAI HelloWorldService:

trait JSONSerialisable

trait CBORSerialisable

final case class GreetingIMPL(
  msg: String)

object GreetingIMPL {
  implicit val format: Format[GreetingIMPL] =
    Json.format
}

sealed trait HelloWorldCommand
  extends JSONSerialisable

case class GreetingCommand(
  name: String,
  replyTo: ActorRef[StatusReply[GreetingIMPL]])
  extends HelloWorldCommand

case class UpdateGreetingMessageCommand(
  msg: String,
  replyTo: ActorRef[StatusReply[Done]])
  extends HelloWorldCommand

These service entity Commands have the following functionality:

Service entity command | Functionality
GreetingCommand | Triggers the entity to respond with a GreetingIMPL, using the active greeting message of the entity.
UpdateGreetingMessageCommand | Triggers the entity to update its active greeting message. Future GreetingCommands will respond with the updated greeting message.

Example: Lagom service entity - Entity events

The following example shows the service entity Events for the HeartAI HelloWorldService:

sealed trait HelloWorldEvent
  extends AggregateEvent[HelloWorldEvent] {
  override def aggregateTag: AggregateEventTagger[HelloWorldEvent] =
    HelloWorldEvent.Tag
}

object HelloWorldEvent {
  val nShards:
    Int = 10
  val Tag: AggregateEventShards[HelloWorldEvent] =
    AggregateEventTag.sharded[HelloWorldEvent](
      numShards = nShards)
}

case class GreetingMessageUpdatedEvent(
  message: String)
  extends HelloWorldEvent

object GreetingMessageUpdatedEvent {
  implicit val format: Format[GreetingMessageUpdatedEvent] =
    Json.format
}

These service entity Events have the following functionality:

Service entity event | Functionality
GreetingMessageUpdatedEvent | Generated when the UpdateGreetingMessageCommand successfully updates the entity greeting message.

Example: Lagom service entity - Entity state

The following example shows the service entity State for the HeartAI HelloWorldService:

case class HelloWorldState(
  msg: String,
  timestamp: Instant) {

  def applyCommand(
    cmd: HelloWorldCommand):
  ReplyEffect[HelloWorldEvent, HelloWorldState] =
    cmd match {
      case cmd: GreetingCommand =>
        onGreetingCommand(cmd)
      case cmd: UpdateGreetingMessageCommand =>
        onUpdateGreetingMessageCommand(cmd)
    }

  private def onGreetingCommand(
    cmd: GreetingCommand):
  ReplyEffect[HelloWorldEvent, HelloWorldState] =
    Effect.reply(cmd.replyTo)(
      StatusReply.success(
        GreetingIMPL(s"$msg, ${cmd.name}!")))

  private def onUpdateGreetingMessageCommand(
    cmd: UpdateGreetingMessageCommand):
  ReplyEffect[HelloWorldEvent, HelloWorldState] =
    Effect
      .persist(
        GreetingMessageUpdatedEvent(
          cmd.msg))
      .thenReply(cmd.replyTo) {
        _ => StatusReply.Ack
      }

  def applyEvent(
    evt: HelloWorldEvent):
  HelloWorldState =
    evt match {
      case thisEvt: GreetingMessageUpdatedEvent =>
        onGreetingMessageUpdatedEvent(thisEvt)
      case _ =>
        this
    }

  private def onGreetingMessageUpdatedEvent(
    evt: GreetingMessageUpdatedEvent): HelloWorldState =
    copy(evt.message, Instant.now())
}

object HelloWorldState {

  val typeKey: EntityTypeKey[HelloWorldCommand] =
    EntityTypeKey[HelloWorldCommand]("HelloWorld")

  def initial: HelloWorldState =
    HelloWorldState(
      msg = "Hello",
      timestamp = Instant.now())

  implicit val format: Format[HelloWorldState] = Json.format
}

Example: Lagom service entity - Event-sourcing processes

The following example shows the GreetingCommand event-sourcing process for the HeartAI HelloWorldService:

heartai-hello-world-service-greeting-command-process.svg

The following example shows the UpdateGreetingMessageCommand event-sourcing process for the HeartAI HelloWorldService:

heartai-hello-world-service-update-greeting-message-command-process.svg

Example: Lagom service entity - Full declaration

The full declaration of the Lagom service entity for the HeartAI HelloWorldService may be found with the following documentation

Service event processing

System services publish persisted Events to intra-service EventStreams. The events of these event streams may modify general system behaviour in three ways:

  • Service events may be published to a network message bus that is coordinated with Apache Kafka. Subscribing clients (or other services) may then receive these events and trigger domain behaviour; an illustrative subscription sketch follows this list.
  • By processing these events, services may generate read-side data projections that are eventually persisted to read-side repositories, such as a PostgreSQL database. Query requests through service APIs are often directed to these read-side repositories, and the resulting data is serialised for transmission back to the user. These approaches are particularly suitable for high-throughput querying and analytics.
  • The event stream may also drive general service behaviour at the service implementation layer.
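
The following hedged sketch illustrates the first of these behaviours from the perspective of a consuming service, assuming a helloWorldService client for the HelloWorldService API has been injected; each Greeting published to the brokered topic is delivered with at-least-once semantics.

import akka.Done
import akka.stream.scaladsl.Flow

helloWorldService
  .greetingUpdatedTopic()
  .subscribe
  .atLeastOnce(
    Flow[Greeting].map { greeting =>
      // Trigger downstream domain behaviour here, for example by sending a
      // command to a local service entity.
      Done
    })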

The following figure shows the processing behaviour that is triggered by a service EventStream:

heartai-service-event-processing.svg

Example: Lagom service read-side processor - Class declaration

The following shows the Lagom read-side processor class declaration for the HeartAI HelloWorldService:

class HelloWorldReadSideProcessor(
  readSide: SlickReadSide,
  repository: HelloWorldReadSideRepository
) extends ReadSideProcessor[HelloWorldEvent] {

  override def buildHandler():
  ReadSideProcessor.ReadSideHandler[HelloWorldEvent] =
    readSide
      .builder[HelloWorldEvent]("hello_world")
      .setGlobalPrepare(repository.createTable())
      .setEventHandler[GreetingMessageUpdatedEvent] {
        evt =>
          repository.generateDatabaseEntry(
            Greeting(
              id = evt.entityId,
              message = evt.event.message))
      }
      .build()

  override def aggregateTags:
  Set[AggregateEventTag[HelloWorldEvent]] =
    HelloWorldEvent.Tag.allTags
}

This read-side processor triggers the following functionality:

Service entity event | Triggered read-side repository method | Functionality
GreetingMessageUpdatedEvent | generateDatabaseEntry() | Inserts or updates the corresponding read-side repository entry.

The following figure shows the read-side repository process for the HeartAI HelloWorldService. Note the event-driven behaviour following from the event-sourced creation of a GreetingMessageUpdatedEvent:

heartai-hello-world-service-read-side-repository-process.svg

Example: Lagom service read-side processor - Full declaration

The full declaration of the Lagom service read-side processor for the HeartAI HelloWorldService may be found with the following documentation

Service read-side repository

HeartAI system services interface with system backing services through service-level integration frameworks. For service read-side repositories, the corresponding backing persistent data store is implemented with the PostgreSQL relational data system. Integration with PostgreSQL as a backing service is provided by the Slick functional-relational mapping (FRM) framework. Slick takes an FRM approach to overcome limitations inherent in object-relational mapping, such as the object-relational impedance mismatch. The FRM approach implemented with Slick allows native mapping within Scala, with loose coupling, lightweight configuration, and guiding abstractions for working with backing relational data systems.

Example: Lagom service read-side repository - Table declaration

The following example shows a declaration of a Slick Table implementation for the HeartAI HelloWorldService. Note how the two columns in this example table, id and message, are mapped onto corresponding class methods of HelloWorldTable. In addition, the combined tuple of id and message is mapped onto the Greeting case class. These mappings are examples of functional projections of the backing relational data system onto the service-level type system.

class HelloWorldTable(
  tag: Tag
) extends Table[Greeting](tag, "hello_world") {

  def id:
  Rep[String] =
    column[String]("id", O.PrimaryKey)

  def message:
  Rep[String] =
    column[String]("string")

  def * :
  ProvenShape[Greeting] =
    (id, message) <>
      ((Greeting.apply _).tupled, Greeting.unapply)
}

Example: Lagom service read-side repository - Table query declaration

Following the declaration of a Slick Table, functional operations are callable through TableQuery class methods. The following example shows how a TableQuery class may be declared for the HeartAI HelloWorldService:

def mapTable:
TableQuery[HelloWorldTable] =
  TableQuery[HelloWorldTable]

Example: Lagom service read-side repository - Table operations

The following examples show how functional table operations may be declared through TableQuery class methods:

generateDatabaseEntry()

def generateDatabaseEntry(
  greeting: Greeting):
DBIO[Done] = {
  greeting.id match {
    case queryGreeting =>
      findByIDQuery(queryGreeting)
        .flatMap {
          case None =>
            mapTable.insertOrUpdate(greeting)
          case _ =>
            DBIO.successful(Done)
        }
        .map(_ => Done)
        .transactionally
  }
}

removeDatabaseEntry()

def removeDatabaseEntry(
  id: String):
DBIOAction[Done.type, NoStream, Effect] = {
  val action = mapTable
    .filter(_.id === id)
    .delete
  database.run(action)
  DBIO.successful(Done)
}

findByID()

def findByID(
  id: String):
Future[Option[Greeting]] =
  database.run(findByIDQuery(id))

private def findByIDQuery(
  id: String):
DBIO[Option[Greeting]] =
  mapTable
    .filter(_.id === id)
    .result
    .headOption

These table operations correspond to the following HelloWorldService service-level functionalities:

Table operation | Functionality
generateDatabaseEntry() | Inserts or updates the Greeting at the corresponding id index.
removeDatabaseEntry() | Removes the Greeting at the corresponding id index.
findByID() | Optionally finds the Greeting at the corresponding id index.

Example: Lagom service read-side repository - JDBC configuration

The following example shows a HeartAI HelloWorldService development environment Typesafe Config configuration of a Slick JDBC connection to PostgreSQL with HikariCP connection pooling:

# PostgreSQL
db.default {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost:5432/heartai"
  username = heartai
  password = heartai
}
hikaricp {
  minimumIdle = 5
  maximumPoolSize = 10
}
jdbc-defaults.slick {
  profile = "slick.jdbc.PostgresProfile$"
}

Example: Lagom service read-side repository - Full declaration

The full declaration of the Lagom service read-side repository for the HeartAI HelloWorldService may be found with the following documentation

Inter-service communication

Communication between system services occurs through:

Service interfaces

Interaction with HeartAI system services occurs through well-defined application programming interface (API) layers that act as software encapsulations of the underlying implementation (IMPL) layers. These APIs are accessible to internal and external networks through reverse-proxied service ingress endpoints.

System services support the following data communication protocols:

Service interfaces

Further information about the HeartAI service interfaces may be found with the following documentation section:

Service distribution and concurrency

Akka Cluster provides native and stateful serverless distribution of system services. The following figures describe the approach of entity distribution provided by Akka Cluster Sharding; the topology figure shows an example distribution with computational nodes in orange, shard regions in green, and entities in blue. This approach allows dynamic hydration and passivation of entity state.
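
As a minimal sketch, assuming an entity behaviour constructor such as the one sketched in the service domain entity section, registering the entity with Akka Cluster Sharding might look like the following; once initialised, entities are distributed across shard regions and may be rehydrated or passivated on demand.

import akka.cluster.sharding.typed.scaladsl.ClusterSharding
import akka.cluster.sharding.typed.scaladsl.Entity

// Registers the HelloWorld entity type with the cluster; each entityId maps to a
// sharded entity instance hosted on one of the cluster nodes.
clusterSharding.init(
  Entity(HelloWorldState.typeKey) { entityContext =>
    behavior(entityContext)
  })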

The Akka Cluster Sharding process

Akka_Entity_Cluster_Sharding.svg

Akka Cluster Sharding topology

Akka_Cluster_Sharding.png

HeartAI Hello World service initial server boot log

heartai-hello-world-log.png

HeartAI Hello World service cluster formation

heartai-hello-world-cluster-formation-log.png

Service configuration

HeartAI system services are configurable at compile time and at runtime. In cluster-based production environments, configuration parameters that are dynamic or sensitive are typically hosted in encrypted configuration stores and injected at runtime; Kubernetes Secrets provides the implementation for this approach. For example, the following configuration for Play allows injection of configuration secrets into the corresponding service environment at runtime:

Creating a secret with OpenShift

oc create secret generic hai-phocqus-pathology-secret --from-literal=secret="$(openssl rand -base64 48)"

Example deployment declaration excerpt

...
  - name: APPLICATION_SECRET
    valueFrom:
      secretKeyRef:
        name: hai-phocqus-pathology-secret
        key: secret
...

Example configuration excerpt

# Play configuration
http {
  address = ${?HTTP_BIND_ADDRESS}
  port = 14000
}

play {
  server {
    pidfile.path = "/dev/null"
  }
  http.secret.key = "${APPLICATION_SECRET}"
}

Example: Lagom service configuration

HeartAI implements service-level configuration for Lagom with Typesafe Config, a configuration library for JVM languages based on the HOCON format. Typesafe Config configuration files are loaded at the time of JVM initialisation, but where appropriate JVM properties may also be modified at runtime.

Typesafe Config configuration files follow the Maven Standard Directory Layout, and may usually be found at:

src/main/resources/
src/test/resources/
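
As a small illustration, configuration values declared in these files may be read at runtime through the Typesafe Config API; the keys below are taken from the development configuration shown in the next example.

import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory

// Loads application.conf (and any included files) from the classpath.
val config: Config = ConfigFactory.load()

val localMode: Boolean =
  config.getBoolean("heartai.local_mode")
val greetingTopic: String =
  config.getString("heartai.service_topic.greeting_messages_changed")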

Example: Lagom service configuration - Local-machine development environment

The following example shows a Typesafe config configuration file for HeartAI HelloWorldService local-machine development environments. In particular, note that the PostgreSQL driver configuration refers to a PostgreSQL instance that is present on localhost, which is provided as part of the HeartAI local-machine development environment.

# HeartAI
heartai.local_mode = true
heartai.service_topic.greeting_messages_changed="hello_world_greeting_messages_changed_topic_dev"

# Play
play.application.loader = net.heartai.hello_world.HelloWorldLoader

# Akka serialisation
akka.actor {
  serializers {
    jackson-json = "akka.serialization.jackson.JacksonJsonSerializer"
  }
  serialization-bindings {
    "net.heartai.hello_world.JSONSerialisable" = jackson-json
    "net.heartai.hello_world.CBORSerialisable" = jackson-cbor
  }
}

# Akka cluster
akka.cluster {
  shutdown-after-unsuccessful-join-seed-nodes = 60s
}
akka.remote {
  artery {
    transport = tcp
  }
}

# PostgreSQL
db.default {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost:5432/heartai"
  username = heartai
  password = heartai
}
hikaricp {
  minimumIdle = 5
  maximumPoolSize = 10
}
jdbc-defaults.slick {
  profile = "slick.jdbc.PostgresProfile$"
}

# Lagom
lagom.cluster {
  exit-jvm-when-system-terminated = on
}
lagom.persistence.jdbc {
  create-tables {
    auto = true
    timeout = 20s
    failure-exponential-backoff {
      min = 3s
      max = 30s
      random-factor = 0.2
    }
  }
}

# Keycloak
keycloak.service_group = "hello-world-service"

Example: Lagom service configuration - Cluster production environment

The following example shows a Typesafe config configuration file for HeartAI HelloWorldService cluster-based production environments. Note that this configuration includes the following declaration:

include "application.conf"

which injects the application.conf local-machine development environment configurations. In this instance, if a production environment configuration file declares the same configuration parameter, the production environment configuration takes precedence as it is declared later in the configuration file. This allows the production.conf configuration values to inherit application.conf configuration values where these are shared across local-machine development environments and cluster-based production environments.

Configuration declarations of the following type:

remote {
  artery {
    bind.hostname = ${HTTP_BIND_ADDRESS}
    bind.port = ${AKKA_REMOTING_PORT}
    canonical.port = ${AKKA_REMOTING_PORT}
  }
}

search the local process context for environment variables, in this case for the environment variables named HTTP_BIND_ADDRESS and AKKA_REMOTING_PORT. This allows the injection of configuration values through mechanisms such as Kubernetes / OpenShift Pod specifications, for example as described later in the documentation section Service deployment: OpenShift Deployment.

include "application.conf"

# HeartAI
heartai.local_mode = false
heartai.service_topic.greeting_messages_changed = ${SERVICE_TOPIC_GREETING_MESSAGES_CHANGED}

# Akka
akka.discovery {
  method = akka-dns
}
coordinated-shutdown.exit-jvm = on
cluster {
  shutdown-after-unsuccessful-join-seed-nodes = 60s
}

# Akka Remoting
remote {
  artery {
    bind.hostname = ${HTTP_BIND_ADDRESS}
    bind.port = ${AKKA_REMOTING_PORT}
    canonical.port = ${AKKA_REMOTING_PORT}
  }
}

# Akka Management
akka.management {
  cluster.bootstrap {
    contact-point-discovery {
      discovery-method = kubernetes-api
      service-name = "heartai-hello-world"
      required-contact-point-nr = ${REQUIRED_CONTACT_POINT_NR}
    }
  }

  http {
    bind-hostname = ${HTTP_BIND_ADDRESS}
    port = ${AKKA_MANAGEMENT_PORT}
    bind-port = ${AKKA_MANAGEMENT_PORT}
  }
}

# Play configuration
play {
  server {
    pidfile.path = "/dev/null"
    http.port = ${HTTP_PORT}
    https.port = ${HTTPS_PORT}
  }
  http.secret.key = "${APPLICATION_SECRET}"
  https.secret.key = "${APPLICATION_SECRET}"
}

# Lagom
lagom.broker.kafka {
  service-name = ""
  brokers = ${?KAFKA_BOOTSTRAP_SERVICE}
}

lagom.persistence.ask-timeout = 30s
lagom.persistence.call-timeout = 30s
lagom.persistence.jdbc.create-tables.auto = false

# PostgreSQL
db.default {
  driver = "org.postgresql.Driver"
  url = ${POSTGRESQL_CONTACT_POINT}
  username = ${POSTGRESQL_USERNAME}
  password = ${POSTGRESQL_PASSWORD}
}

Service management and packaging

The sbt software manager provides packaging and tooling of the HeartAI system. Implemented components of sbt include:
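
As a hedged illustration of this sbt-based packaging, a Lagom service project might be defined as follows; the project names, Scala version, and dependency selections are illustrative assumptions rather than the HeartAI build definition.

// build.sbt (illustrative sketch)
ThisBuild / organization := "net.heartai"
ThisBuild / scalaVersion := "2.13.6"

lazy val `hello-world-api` = (project in file("hello-world-api"))
  .settings(
    libraryDependencies += lagomScaladslApi)

lazy val `hello-world-impl` = (project in file("hello-world-impl"))
  .enablePlugins(LagomScala) // Lagom sbt plugin for Scala services
  .settings(
    libraryDependencies ++= Seq(
      lagomScaladslPersistenceJdbc, // write-side and read-side JDBC persistence
      lagomScaladslKafkaBroker,     // Apache Kafka message bus broker
      lagomScaladslTestKit % Test))
  .dependsOn(`hello-world-api`)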

Data interfaces

System services support data transmission across many common data interfaces. In addition to standard synchronous data transmission, asynchronous non-blocking data streaming is supported for many interfaces.

Example: Alpakka implementation

The Alpakka project natively supports many common data interfaces, with support also for legacy and proprietary interfaces. These interfaces implement reactive stream-native integration pipelines for Java and Scala. HeartAI services implement Akka Streams to provide functionality for reactive and stream-oriented programming. These approaches support the Reactive Streams specifications and the JDK 9+ java.util.concurrent.Flow implementations and allow internal and external system data transmission to support non-blocking reactive backpressure propagation.
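
As a hedged sketch of this stream-native interfacing, the following shows an Alpakka Kafka consumer exposed as an Akka Streams source, where downstream demand propagates upstream as non-blocking backpressure. The bootstrap server, consumer group, and use of string deserialisation are illustrative assumptions; the topic name is taken from the development configuration shown earlier.

import akka.actor.ActorSystem
import akka.kafka.ConsumerSettings
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import org.apache.kafka.common.serialization.StringDeserializer

implicit val system: ActorSystem = ActorSystem("alpakka-kafka-example")

val consumerSettings: ConsumerSettings[String, String] =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("hello-world-consumer")

// The Kafka consumer is polled only as fast as downstream stages signal demand,
// providing non-blocking backpressure propagation end-to-end.
Consumer
  .plainSource(
    consumerSettings,
    Subscriptions.topics("hello_world_greeting_messages_changed_topic_dev"))
  .map(record => record.value())
  .runForeach(value => println(value))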

Through these implementations, HeartAI services provide native support for the following data interfaces:

Function-relational mapping

HeartAI services implement Slick for function-relational mapping to instances of backing data systems. These interfaces implement JDBC-based data connections.
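
As a small sketch of this JDBC-based mapping, a Slick database handle might be obtained directly from configuration and used to run a query built from the HelloWorldTable mapping shown earlier. Note that within the Lagom services themselves this connection is normally managed by the framework's Slick persistence components, and the configuration path is assumed here to be compatible with Database.forConfig.

import scala.concurrent.Future
import slick.jdbc.PostgresProfile.api._

// Builds a database handle from the "db.default" configuration block.
val database = Database.forConfig("db.default")

// Runs a query over the functional-relational mapping defined by HelloWorldTable.
def findAll(): Future[Seq[Greeting]] =
  database.run(TableQuery[HelloWorldTable].result)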

Current support exists for the following data system:

Data encoding

The following service-level software provides functionality for transport serialisation:

Support exists for the following serialisation formats:

Specification references

HeartAI system services refer to the following specifications:

Hypertext Transfer Protocol Version 1.1 (HTTP/1.1)

Hypertext Transfer Protocol Version 2 (HTTP/2)

The WebSocket Protocol