eMagiz Documentation

Other

Name

Name of the flow.

Version

The version of the flow.

Uses the major.minor.micro format (e.g. 1.2.0).

Description

Description of this flow.

Name

Name of this bus component.

Name

Name of this bus component.

System

The system connected by the connector that this infra flow is part of.

Name

Name of this bus component.

Asynchronous

Whether this message flow is asynchronous ("send and forget") or synchronous ("request/response").

Inbox

The JMS queue the error process consumes messages from.

Outbox

The JMS queue the error process produces messages on.

Note that these are messages that are handled correctly by the error process, i.e. valid error messages.

Error

The JMS queue the error process places error messages on.

Note that these are messages that result in errors while being handled by the error process, i.e. invalid error messages.

Name

Name of this bus component.

Asynchronous

Whether this message flow is asynchronous ("send and forget") or synchronous ("request/response").

System

The system this offramp process sends messages to.

Message type

The type of messages this offramp process handles.

Inbox

The JMS queue this offramp process consumes messages from.

Outbox

The JMS queue this offramp process produces messages on.

Error

The JMS queue this offramp process places error messages on.

Name

Name of this bus component.

Is backup node

Whether this JMS server is a backup server or not.

Cluster name

Name of the cluster this JMS server is part of.

There are two situations where a JMS server is part of a cluster:

  • when using a cluster of JMS servers to share the messaging load
  • when using failover, the live server and all its backups are also placed in a cluster

Name

Name of this bus component.

Asynchronous

Whether this message flow is asynchronous ("send and forget") or synchronous ("request/response").

Inbox

The JMS queue this routing process consumes messages from.

Error

The JMS queue this routing process places error messages on.

Name

Name of this bus component.

Asynchronous

Whether this message flow is asynchronous ("send and forget") or synchronous ("request/response").

System

The system this exit connector sends messages to.

Message type

The type of messages that are sent from the message bus to the connected system by this connector.

Inbox

The JMS queue this exit connector consumes messages from.

Error

The JMS queue this exit connector places error messages on.

Exit/entry connector

Normally, if a system consumes and produces messages, separate entry and exit connector flows are used for connecting the system to the message bus.

In some cases, however, the consumption of a message also produces a message as the result. If these messages can be handled by asynchronous message flows but there is a need to somehow correlate them, use an exit/entry connector: this generates an exit connector that delivers messages to the consuming system and also acts as an entry connector by placing messages produced by that same system back onto the message bus.

Message type

The type of messages that are sent back to the message bus by this exit/entry connector.

Outbox

The JMS queue on which this exit/entry connector produces the messages that are sent back by the connected system.
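
As a concrete illustration of the correlation described above, the following minimal JMS sketch copies the message id of the consumed message onto the produced message as its correlation id, so both sides of the exchange can be matched later. It assumes a JMS 2.0 client such as Apache ActiveMQ Artemis; the broker URL and queue names are hypothetical.

import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.TextMessage;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Minimal sketch: correlate the message delivered to the connected system with
// the message that system sends back. Broker URL and queue names are hypothetical.
public class ExitEntryCorrelationSketch {

    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        try (JMSContext context = factory.createContext()) {
            // Exit side: consume the message that is delivered to the connected system.
            Message request = context
                    .createConsumer(context.createQueue("mybus.mysystem.exit.inbox"))
                    .receive();

            // ... hand the payload to the connected system and obtain its result ...
            TextMessage reply = context.createTextMessage("<Result/>");

            // Correlate the produced message with the consumed one.
            reply.setJMSCorrelationID(request.getJMSMessageID());

            // Entry side: place the result back onto the message bus.
            context.createProducer()
                    .send(context.createQueue("mybus.mysystem.entry.outbox"), reply);
        }
    }
}

The exit/entry connector generated by eMagiz handles this wiring itself; the sketch only shows the correlation principle.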

Name

Name of this bus component.

Asynchronous

Whether this message flow is asynchronous ("send and forget") or synchronous ("request/response").

System

The system this entry connector receives messages from.

Message type

The type of messages that are sent from the connected system to the message bus by this connector.

Outbox

The JMS queue this entry connector produces messages on.

Combined entry connector

Normally, one connector is generated per system that contains multiple entry flows (one for each message type). In some cases however, it is required that all these flows are combined into one big configuration, usually to be able to share resources.

For example, when exposing the connector as a SOAP web service, you'd probably want all different incoming message types to be hosted as different web service operations combined into a single web service (i.e., one HTTP server, one port that needs to be accessible and one WSDL).

Type

The type of combined entry connector. This determines how the flows are generated:

SOAP web service: This generates a fully configured Jetty web server hosting a single web service, containing one operation for each message type. The incoming messages are then extracted from the SOAP message and placed on the correct JMS queue.

Custom: This generates a fully configured flow endpoint for each message type, and lets the user define the starting points of these flows.

SOAP WS name

Name for the exposed web service as used in the web service URL.

The full URL of the WSDL looks as follows (replace host, port and ws-name with the actual values): http://host:port/ws/ws-name/ws-name.wsdl. For example, a web service named orders exposed on port 8080 of host localhost would publish its WSDL at http://localhost:8080/ws/orders/orders.wsdl.

SOAP WS namespace

The namespace for the exposed web service.

Name

Name of this bus component.

Asynchronous

Whether this message flow is asynchronous ("send and forget") or synchronous ("request/response").

System

The system this onramp process receives messages from.

Message type

The type of messages this onramp process handles.

Inbox

The JMS queue this onramp process consumes messages from.

Outbox

The JMS queue this onramp process produces messages on.

Error

The JMS queue this onramp process places error messages on.

Display name

Display name for this message bus.

This value is used for displaying in the GUI only.

Name

Abbreviated name of the message bus, with a maximum length of seven characters.

This value will be used when auto-generating names for bus components, and must therefore adhere to the naming conventions (use only lower case letters, digits or the '-' character).

Namespace URL

The first part of the namespace for this message bus. Usually the domain name of the company is used, e.g. http://www.mycompany.com/.

This value will be used when auto-generating namespaces for bus components, and must therefore adhere to the naming conventions (starting with http:// or https://, followed by a domain name and ending with a slash).

Enforce CDM best practices

Indicates if eMagiz best practices are applied to all the CDM messages from the message bus. These best practices are applied to the XML message definitions in the Create phase.

For example, eMagiz autogenerates a namespace based on the provided Namespace URL and we recommend that every element in the XML message use the target namespace. Therefore, the xs:elementFormDefault value is not editable. Additionally, the creation of xs:attribute or xs:simpleContent is not allowed. These XSD constructs are not needed for building CDM messages and using them only adds unnecessary complexity to your message.
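
To make these best practices concrete, the sketch below uses the standard javax.xml.validation API with a hypothetical, hand-written CDM-style message definition in which every element is qualified with the target namespace (elementFormDefault="qualified") and no xs:attribute or xs:simpleContent is used, and validates a sample message against it. Element names and the namespace are illustrative only.

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class CdmBestPracticesSketch {

    // Hypothetical CDM message definition: every element uses the target namespace,
    // and no xs:attribute or xs:simpleContent constructs appear.
    static final String DEFINITION = """
            <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                       targetNamespace="http://www.mycompany.com/mybus"
                       elementFormDefault="qualified">
              <xs:element name="Order">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="OrderNumber" type="xs:string"/>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:schema>
            """;

    // Sample message: the child element inherits the default (target) namespace.
    static final String MESSAGE = """
            <Order xmlns="http://www.mycompany.com/mybus">
              <OrderNumber>12345</OrderNumber>
            </Order>
            """;

    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new StreamSource(new StringReader(DEFINITION)));
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new StringReader(MESSAGE)));
        System.out.println("Message is valid against the CDM definition");
    }
}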

Nr. of process containers

The number of process containers this message bus uses to deploy and run processes.

Using more than one process container will add more processing capacity, redundancy and load balancing of the bus processes to the messaging solution, but also requires more hardware to deploy on.

Label

The choice of Infrastructure as a Service determines where the eMagiz runtimes will be deployed.

AWS: Cloud computing platform. Using failover with AWS means that the runtimes run on stand-by servers, offering a very fast backup facility. With failover, shared store is the default. Without failover there are no backup servers running, but new servers will be started and set up in a state from before the failure.

Root: Cloud computing platform. When failover is used, data replication is the only option.

On-premises: The software is installed locally, on the company's own servers. This option does not use the cloud offerings of eMagiz.

Failover

Whether this message bus uses backup JMS servers that can take over when a live JMS server fails.

Using failover will add high availability and redundancy of the JMS servers to the messaging solution, but also requires more hardware (this includes a SAN) to deploy on.

Note that in most cases using failover with only a single container is not very useful, as this makes the container a single point of failure.

Nr. of backup nodes per live node

The number of backup JMS servers per live JMS server. Normally one should be sufficient, but in some critical situations a "backup for the backup" can be useful.

Using more than one backup will add more redundancy to the messaging solution, but also requires more hardware to deploy on.

Failover type

The strategy to use for backing up a server.

Data replication: All data synchronization between the live and backup servers is done through network traffic. Therefore all (persistent) data traffic received by the live server will be duplicated to the backup.

Notice that upon startup the backup server will first need to synchronize all existing data from the live server before it is capable of replacing the live server should it fail. So unlike the shared store case, a replicating backup will not be a fully operational backup right after start, but only after it finishes synchronizing the data. The time this takes depends on the amount of data to be synchronized and the connection speed.

One issue to be aware of: after a successful failover, the backup's data will be newer than the data in the live server's storage. When the live server then restarts, it will synchronize its data with the backup's. If both servers are shut down, however, the administrator will have to determine which one has the latest data. Shared store doesn't have this issue, because at any moment there is just one copy of the data.

There is one more important distinction between data replication and shared store failover: with shared store, if the backup starts and does not find its live server, the server will just activate and start to serve client requests (this is possible because the shared store is always up to date). In the replication case, the backup just keeps waiting for a live server to pair with (the backup server does not know whether any data it might have is up to date, so it really cannot decide to activate automatically).

Shared store: Both the live and backup servers share the same entire data directory using a shared file system. When failover occurs and a backup server takes over, it will load the persistent storage from the shared file system and clients can connect to it.

This style of high availability differs from data replication in that it requires a shared file system which is accessible by both the live and backup nodes. Typically this will be some kind of high performance Storage Area Network (SAN). The use of Network Attached Storage (NAS), e.g. NFS mounts, to store the shared journal is not recommended (NFS is slow).

The advantage of shared store high availability is that no replication occurs between the live and backup nodes, which means it does not suffer any performance penalties due to the overhead of replication during normal operation. If you require the highest performance during normal operation and have access to a fast SAN, using shared store high availability is recommended.

Clustered

Whether this message bus uses a cluster of multiple JMS servers to host the message queues.

Using a cluster will add redundancy and load balancing of the JMS queues to the messaging solution, but also requires more hardware to deploy on.

Note that in most cases using a cluster with only a few process containers is not very useful, as the process containers are usually the bottleneck for the total message throughput long before the JMS servers are.

Also note that a cluster does not provide failover behaviour: if a server fails, clients will experience connection problems and messages might be (temporarily) lost. If you want a cluster with failover behaviour, select both use cluster and use failover. This will increase the hardware requirements significantly, however, because every JMS server in the cluster needs its own dedicated failover server(s).

Nr. of nodes in cluster

The number of JMS servers in the cluster. Every server in the cluster will share the messaging load and host the same message queues (a symmetrical cluster).

Connection type

The transport type used for creating connections between the JMS clients and JMS servers.

Plain TCP: Normal, unencrypted TCP connections that don't need any SSL certificates.

TCP with SSL: Encrypted TCP connections that need an SSL certificate on the server side. If the certificate is not signed by an official Certificate Authority you'll also need to (manually) add this certificate to the trust store on all clients.

TCP with 2-way SSL: Encrypted TCP connections that need a correct SSL certificate on both the server and the client side. Also, the server certificates need to be added to the trust store of all clients and the client certificates must be added to the trust store of all servers.

Self-signed certificate

When you indicate that the (server) certificate is not signed by an official Certificate Authority but is self-signed, properties for specifying the trust store path and password will be used by all clients.
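
As an illustration of the TCP with SSL connection type combined with a self-signed certificate, a JMS client could connect roughly as in the minimal sketch below. It assumes an Apache ActiveMQ Artemis JMS server; the host, port, credentials, trust store path and password are hypothetical and would normally come from the generated properties.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Minimal sketch: connect to the JMS server over SSL, trusting a self-signed
// server certificate that was imported into a local trust store beforehand
// (for example: keytool -importcert -alias jms -file server.crt -keystore truststore.jks).
public class SslConnectionSketch {

    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://jms.mycompany.com:61617"
                        + "?sslEnabled=true"
                        + "&trustStorePath=/opt/emagiz/truststore.jks"
                        + "&trustStorePassword=changeit");

        Connection connection = factory.createConnection("busUser", "busPassword");
        try {
            connection.start();
            // ... create sessions, producers and consumers here ...
        } finally {
            connection.close();
        }
    }
}

In an eMagiz deployment the trust store properties are filled in for you; the sketch only shows which pieces of information are involved.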

Display name

Display name for this message bus.

This value is used for displaying in the GUI only.

Name

Abbreviated name of the message bus, with a maximum length of seven characters.

This value will be used when auto-generating names for bus components, and must therefore adhere to the naming convensions (use only lower case letters, digits or the '-' character).

Namespace URL

The first part of the namespace for this message bus. Usually the domain name of the company is used, e.g. http://www.mycompany.com/.

This value will be used when auto-generating namespaces for bus components, and must therefore adhere to the naming convensions (starting with http:// or https://, followed by a domain name and ending with a slash).

Enforce CDM best practices

Indicates if eMagiz best practices are applied to all the CDM messages from the message bus. These best practices are applied to the XML message definitions in the Create phase.

For example, eMagiz autogenerates a namespace based on the provided Namespace URL and we recommend that every element in the XML message must use the target namespace. Therefore, the xs:elementFormDefault value is not editable. Additionally, the creation of xs:attribute or xs:simpleContent is not allowed. These XSD constructs are not needed for building CDM messages and using them only adds unnecessary complexity to your message.

Nr. of process containers

The number of process containers this message bus uses to deploy and run processes.

Using more then one process container will add more processing capacity, redundancy and load balancing of the bus processes to the messaging solution, but also requires more hardware to deploy on.

Label

A choice for the Infrastructure as a Service determines were the eMagiz runtimes will be deployed.

AWS Cloud computing platform. Using failover with AWS means that the runtimes are run on stand-by severs, offering a very fast back-up system. In the case of failover, shared store is the default. In the case of no failover there are no back-up servers running, but new servers will be started and set up in a state from before the failure.

Root Cloud computing platform. In the case of the use of failover, data replication is the only option.

On-premises The software is going to be installed locally, on the company's own servers. Does not use the cloud options eMagiz offers.

Failover

Whether this message bus uses backup JMS servers that can take over when a live JMS server fails.

Using failover will add high availability and redundancy of the JMS servers to the messaging solution, but also requires more hardware (this includes a SAN) to deploy on.

Note that in most cases using failover with only a single container is not very useful, as this makes the container a single point of failure.

Nr. of backup nodes per live node

The number of backup JMS servers per live JMS server. Normally one should be sufficient, but in some critical situations a "backup for the backup" can be useful.

Using more then one backup will add more redundancy to the messaging solution, but also requires more hardware to deploy on.

Failover type

The strategy to use for backing up a server.

Data replication    All data synchronization between live and the backup servers is done through network traffic. Therefore all (persistent) data traffic received by the live server will be duplicated to the backup.    Notice that upon startup the backup server will first need to synchronize all existing data from the live server, before becoming capable of replacing the live server should it fail. So unlike the shared store case, a replicating backup will not be a fully operational backup right after start, but only after it finishes synchronizing the data. The time it will take for this to happen will depend on the amount of data to be synchronized and the connection speed.    One issue to be aware of is: in case of a successful failover, the backup's data will be newer than the one at the live's storage. When the live server then restarts, it will synchronize its data with the backup's. If both servers are shutdown however, the administrator will have to determine which one has the lastest data. Shared store doesn't have this issue, because at any moment there is just one copy of the data.    There is one more important distinction between data replication and shared store failover: with shared store, if the backup starts and does not find its live server, the server will just activate and start to serve client requests (this is possible because the shared store is always up to date). In the replication case, the backup just keeps waiting for a live server to pair with (the backup server does not know whether any data it might have is up to date, so it really cannot decide to activate automatically).

Shared store    Both live and backup servers share the same entire data directory using a shared file system. When failover occurs and a backup server takes over, it will load the persistent storage from the shared file system and clients can connect to it.    This style of high availability differs from data replication in that it requires a shared file system which is accessible by both the live and backup nodes. Typically this will be some kind of high performance Storage Area Network (SAN). The use of Network Attached Storage (NAS), e.g. NFS mounts to store any shared journal (NFS is slow), is not recommended.    The advantage of shared store high availability is that no replication occurs between the live and backup nodes, this means it does not suffer any performance penalties due to the overhead of replication during normal operation. If you require the highest performance during normal operation and have access to a fast SAN, using shared store high availability is recommended.

Clustered

Whether this message bus uses a cluster of multiple JMS servers to host the message queues.

Using a cluster will add redundancy and load balancing of the JMS queues to the messaging solution, but also requires more hardware to deploy on.

Note that in most cases using a cluster with only a few process containers is not very useful, as the process containers are usually the bottle neck for the total message throughput long before the JMS servers are.

Also note that a cluster does not provide failover behaviour: if a server fails, clients will experience connection problems and messages might be (temporarily) lost. If you want a cluster with failover behaviour, select both use cluster and use failover. This will increase the hardware requirements significantly however, because every JMS server in the cluster needs its own dedicated failover server(s).

Nr. of nodes in cluster

The number of JMS servers in the cluster. Every server in the cluster will share the messaging load and host the same message queues (a symmetrical cluster).

Connection type

The transport type used for creating connections between the JMS clients and JMS servers.

Plain TCP Normal, unencrypted TCP connections that don't need any SSL certificates.

TCP with SSL Encrypted TCP connections that need an SSL certificate on the server side. If the certificate is not signed by an official Certificate Authority you'll also need to (manually) add this certificate to the trust store on all clients.

TCP with 2-way SSL Encrypted TCP connections that need a correct SSL certificate on both the server and the client side. Also, the server certificates need to be added to the trust store of all clients and the client certificates must be added to the trust store of all servers.

Self-signed certificate

By indicating that the (server) certificate is not signed by an official Certificate Authority but is self-signed, properties for specifying the trust store path and password will be used by all clients.

Display name

Display name for this message bus.

This value is used for displaying in the GUI only.

Name

Abbreviated name of the message bus, with a maximum length of seven characters.

This value will be used when auto-generating names for bus components, and must therefore adhere to the naming convensions (use only lower case letters, digits or the '-' character).

Namespace URL

The first part of the namespace for this message bus. Usually the domain name of the company is used, e.g. http://www.mycompany.com/.

This value will be used when auto-generating namespaces for bus components, and must therefore adhere to the naming convensions (starting with http:// or https://, followed by a domain name and ending with a slash).

Enforce CDM best practices

Indicates if eMagiz best practices are applied to all the CDM messages from the message bus. These best practices are applied to the XML message definitions in the Create phase.

For example, eMagiz autogenerates a namespace based on the provided Namespace URL and we recommend that every element in the XML message must use the target namespace. Therefore, the xs:elementFormDefault value is not editable. Additionally, the creation of xs:attribute or xs:simpleContent is not allowed. These XSD constructs are not needed for building CDM messages and using them only adds unnecessary complexity to your message.

Nr. of process containers

The number of process containers this message bus uses to deploy and run processes.

Using more then one process container will add more processing capacity, redundancy and load balancing of the bus processes to the messaging solution, but also requires more hardware to deploy on.

Label

A choice for the Infrastructure as a Service determines were the eMagiz runtimes will be deployed.

AWS Cloud computing platform. Using failover with AWS means that the runtimes are run on stand-by severs, offering a very fast back-up system. In the case of failover, shared store is the default. In the case of no failover there are no back-up servers running, but new servers will be started and set up in a state from before the failure.

Root Cloud computing platform. In the case of the use of failover, data replication is the only option.

On-premises The software is going to be installed locally, on the company's own servers. Does not use the cloud options eMagiz offers.

Failover

Whether this message bus uses backup JMS servers that can take over when a live JMS server fails.

Using failover will add high availability and redundancy of the JMS servers to the messaging solution, but also requires more hardware (this includes a SAN) to deploy on.

Note that in most cases using failover with only a single container is not very useful, as this makes the container a single point of failure.

Nr. of backup nodes per live node

The number of backup JMS servers per live JMS server. Normally one should be sufficient, but in some critical situations a "backup for the backup" can be useful.

Using more then one backup will add more redundancy to the messaging solution, but also requires more hardware to deploy on.

Failover type

The strategy to use for backing up a server.

Data replication    All data synchronization between live and the backup servers is done through network traffic. Therefore all (persistent) data traffic received by the live server will be duplicated to the backup.    Notice that upon startup the backup server will first need to synchronize all existing data from the live server, before becoming capable of replacing the live server should it fail. So unlike the shared store case, a replicating backup will not be a fully operational backup right after start, but only after it finishes synchronizing the data. The time it will take for this to happen will depend on the amount of data to be synchronized and the connection speed.    One issue to be aware of is: in case of a successful failover, the backup's data will be newer than the one at the live's storage. When the live server then restarts, it will synchronize its data with the backup's. If both servers are shutdown however, the administrator will have to determine which one has the lastest data. Shared store doesn't have this issue, because at any moment there is just one copy of the data.    There is one more important distinction between data replication and shared store failover: with shared store, if the backup starts and does not find its live server, the server will just activate and start to serve client requests (this is possible because the shared store is always up to date). In the replication case, the backup just keeps waiting for a live server to pair with (the backup server does not know whether any data it might have is up to date, so it really cannot decide to activate automatically).

Shared store    Both live and backup servers share the same entire data directory using a shared file system. When failover occurs and a backup server takes over, it will load the persistent storage from the shared file system and clients can connect to it.    This style of high availability differs from data replication in that it requires a shared file system which is accessible by both the live and backup nodes. Typically this will be some kind of high performance Storage Area Network (SAN). The use of Network Attached Storage (NAS), e.g. NFS mounts to store any shared journal (NFS is slow), is not recommended.    The advantage of shared store high availability is that no replication occurs between the live and backup nodes, this means it does not suffer any performance penalties due to the overhead of replication during normal operation. If you require the highest performance during normal operation and have access to a fast SAN, using shared store high availability is recommended.

Clustered

Whether this message bus uses a cluster of multiple JMS servers to host the message queues.

Using a cluster will add redundancy and load balancing of the JMS queues to the messaging solution, but also requires more hardware to deploy on.

Note that in most cases using a cluster with only a few process containers is not very useful, as the process containers are usually the bottle neck for the total message throughput long before the JMS servers are.

Also note that a cluster does not provide failover behaviour: if a server fails, clients will experience connection problems and messages might be (temporarily) lost. If you want a cluster with failover behaviour, select both use cluster and use failover. This will increase the hardware requirements significantly however, because every JMS server in the cluster needs its own dedicated failover server(s).

Nr. of nodes in cluster

The number of JMS servers in the cluster. Every server in the cluster will share the messaging load and host the same message queues (a symmetrical cluster).

Connection type

The transport type used for creating connections between the JMS clients and JMS servers.

Plain TCP Normal, unencrypted TCP connections that don't need any SSL certificates.

TCP with SSL Encrypted TCP connections that need an SSL certificate on the server side. If the certificate is not signed by an official Certificate Authority you'll also need to (manually) add this certificate to the trust store on all clients.

TCP with 2-way SSL Encrypted TCP connections that need a correct SSL certificate on both the server and the client side. Also, the server certificates need to be added to the trust store of all clients and the client certificates must be added to the trust store of all servers.

Self-signed certificate

By indicating that the (server) certificate is not signed by an official Certificate Authority but is self-signed, properties for specifying the trust store path and password will be used by all clients.

Display name

Display name for this message bus.

This value is used for displaying in the GUI only.

Name

Abbreviated name of the message bus, with a maximum length of seven characters.

This value will be used when auto-generating names for bus components, and must therefore adhere to the naming convensions (use only lower case letters, digits or the '-' character).

Namespace URL

The first part of the namespace for this message bus. Usually the domain name of the company is used, e.g. http://www.mycompany.com/.

This value will be used when auto-generating namespaces for bus components, and must therefore adhere to the naming convensions (starting with http:// or https://, followed by a domain name and ending with a slash).

Enforce CDM best practices

Indicates if eMagiz best practices are applied to all the CDM messages from the message bus. These best practices are applied to the XML message definitions in the Create phase.

For example, eMagiz autogenerates a namespace based on the provided Namespace URL and we recommend that every element in the XML message must use the target namespace. Therefore, the xs:elementFormDefault value is not editable. Additionally, the creation of xs:attribute or xs:simpleContent is not allowed. These XSD constructs are not needed for building CDM messages and using them only adds unnecessary complexity to your message.

Nr. of process containers

The number of process containers this message bus uses to deploy and run processes.

Using more then one process container will add more processing capacity, redundancy and load balancing of the bus processes to the messaging solution, but also requires more hardware to deploy on.

Label

A choice for the Infrastructure as a Service determines were the eMagiz runtimes will be deployed.

AWS Cloud computing platform. Using failover with AWS means that the runtimes are run on stand-by severs, offering a very fast back-up system. In the case of failover, shared store is the default. In the case of no failover there are no back-up servers running, but new servers will be started and set up in a state from before the failure.

Root Cloud computing platform. In the case of the use of failover, data replication is the only option.

On-premises The software is going to be installed locally, on the company's own servers. Does not use the cloud options eMagiz offers.

Failover

Whether this message bus uses backup JMS servers that can take over when a live JMS server fails.

Using failover will add high availability and redundancy of the JMS servers to the messaging solution, but also requires more hardware (this includes a SAN) to deploy on.

Note that in most cases using failover with only a single container is not very useful, as this makes the container a single point of failure.

Nr. of backup nodes per live node

The number of backup JMS servers per live JMS server. Normally one should be sufficient, but in some critical situations a "backup for the backup" can be useful.

Using more then one backup will add more redundancy to the messaging solution, but also requires more hardware to deploy on.

Failover type

The strategy to use for backing up a server.

Data replication    All data synchronization between live and the backup servers is done through network traffic. Therefore all (persistent) data traffic received by the live server will be duplicated to the backup.    Notice that upon startup the backup server will first need to synchronize all existing data from the live server, before becoming capable of replacing the live server should it fail. So unlike the shared store case, a replicating backup will not be a fully operational backup right after start, but only after it finishes synchronizing the data. The time it will take for this to happen will depend on the amount of data to be synchronized and the connection speed.    One issue to be aware of is: in case of a successful failover, the backup's data will be newer than the one at the live's storage. When the live server then restarts, it will synchronize its data with the backup's. If both servers are shutdown however, the administrator will have to determine which one has the lastest data. Shared store doesn't have this issue, because at any moment there is just one copy of the data.    There is one more important distinction between data replication and shared store failover: with shared store, if the backup starts and does not find its live server, the server will just activate and start to serve client requests (this is possible because the shared store is always up to date). In the replication case, the backup just keeps waiting for a live server to pair with (the backup server does not know whether any data it might have is up to date, so it really cannot decide to activate automatically).

Shared store    Both live and backup servers share the same entire data directory using a shared file system. When failover occurs and a backup server takes over, it will load the persistent storage from the shared file system and clients can connect to it.    This style of high availability differs from data replication in that it requires a shared file system which is accessible by both the live and backup nodes. Typically this will be some kind of high performance Storage Area Network (SAN). The use of Network Attached Storage (NAS), e.g. NFS mounts to store any shared journal (NFS is slow), is not recommended.    The advantage of shared store high availability is that no replication occurs between the live and backup nodes, this means it does not suffer any performance penalties due to the overhead of replication during normal operation. If you require the highest performance during normal operation and have access to a fast SAN, using shared store high availability is recommended.

Clustered

Whether this message bus uses a cluster of multiple JMS servers to host the message queues.

Using a cluster will add redundancy and load balancing of the JMS queues to the messaging solution, but also requires more hardware to deploy on.

Note that in most cases using a cluster with only a few process containers is not very useful, as the process containers are usually the bottle neck for the total message throughput long before the JMS servers are.

Also note that a cluster does not provide failover behaviour: if a server fails, clients will experience connection problems and messages might be (temporarily) lost. If you want a cluster with failover behaviour, select both use cluster and use failover. This will increase the hardware requirements significantly however, because every JMS server in the cluster needs its own dedicated failover server(s).

Nr. of nodes in cluster

The number of JMS servers in the cluster. Every server in the cluster will share the messaging load and host the same message queues (a symmetrical cluster).

Connection type

The transport type used for creating connections between the JMS clients and JMS servers.

Plain TCP Normal, unencrypted TCP connections that don't need any SSL certificates.

TCP with SSL Encrypted TCP connections that need an SSL certificate on the server side. If the certificate is not signed by an official Certificate Authority you'll also need to (manually) add this certificate to the trust store on all clients.

TCP with 2-way SSL Encrypted TCP connections that need a correct SSL certificate on both the server and the client side. Also, the server certificates need to be added to the trust store of all clients and the client certificates must be added to the trust store of all servers.

Self-signed certificate

By indicating that the (server) certificate is not signed by an official Certificate Authority but is self-signed, properties for specifying the trust store path and password will be used by all clients.

Display name

Display name for this message bus.

This value is used for displaying in the GUI only.

Name

Abbreviated name of the message bus, with a maximum length of seven characters.

This value will be used when auto-generating names for bus components, and must therefore adhere to the naming convensions (use only lower case letters, digits or the '-' character).

Namespace URL

The first part of the namespace for this message bus. Usually the domain name of the company is used, e.g. http://www.mycompany.com/.

This value will be used when auto-generating namespaces for bus components, and must therefore adhere to the naming convensions (starting with http:// or https://, followed by a domain name and ending with a slash).

Enforce CDM best practices

Indicates if eMagiz best practices are applied to all the CDM messages from the message bus. These best practices are applied to the XML message definitions in the Create phase.

For example, eMagiz autogenerates a namespace based on the provided Namespace URL and we recommend that every element in the XML message must use the target namespace. Therefore, the xs:elementFormDefault value is not editable. Additionally, the creation of xs:attribute or xs:simpleContent is not allowed. These XSD constructs are not needed for building CDM messages and using them only adds unnecessary complexity to your message.

Nr. of process containers

The number of process containers this message bus uses to deploy and run processes.

Using more then one process container will add more processing capacity, redundancy and load balancing of the bus processes to the messaging solution, but also requires more hardware to deploy on.

Label

A choice for the Infrastructure as a Service determines were the eMagiz runtimes will be deployed.

AWS Cloud computing platform. Using failover with AWS means that the runtimes are run on stand-by severs, offering a very fast back-up system. In the case of failover, shared store is the default. In the case of no failover there are no back-up servers running, but new servers will be started and set up in a state from before the failure.

Root Cloud computing platform. In the case of the use of failover, data replication is the only option.

On-premises The software is going to be installed locally, on the company's own servers. Does not use the cloud options eMagiz offers.

Failover

Whether this message bus uses backup JMS servers that can take over when a live JMS server fails.

Using failover will add high availability and redundancy of the JMS servers to the messaging solution, but also requires more hardware (this includes a SAN) to deploy on.

Note that in most cases using failover with only a single container is not very useful, as this makes the container a single point of failure.

Nr. of backup nodes per live node

The number of backup JMS servers per live JMS server. Normally one should be sufficient, but in some critical situations a "backup for the backup" can be useful.

Using more then one backup will add more redundancy to the messaging solution, but also requires more hardware to deploy on.

Failover type

The strategy to use for backing up a server.

Data replication    All data synchronization between live and the backup servers is done through network traffic. Therefore all (persistent) data traffic received by the live server will be duplicated to the backup.    Notice that upon startup the backup server will first need to synchronize all existing data from the live server, before becoming capable of replacing the live server should it fail. So unlike the shared store case, a replicating backup will not be a fully operational backup right after start, but only after it finishes synchronizing the data. The time it will take for this to happen will depend on the amount of data to be synchronized and the connection speed.    One issue to be aware of is: in case of a successful failover, the backup's data will be newer than the one at the live's storage. When the live server then restarts, it will synchronize its data with the backup's. If both servers are shutdown however, the administrator will have to determine which one has the lastest data. Shared store doesn't have this issue, because at any moment there is just one copy of the data.    There is one more important distinction between data replication and shared store failover: with shared store, if the backup starts and does not find its live server, the server will just activate and start to serve client requests (this is possible because the shared store is always up to date). In the replication case, the backup just keeps waiting for a live server to pair with (the backup server does not know whether any data it might have is up to date, so it really cannot decide to activate automatically).

Shared store    Both live and backup servers share the same entire data directory using a shared file system. When failover occurs and a backup server takes over, it will load the persistent storage from the shared file system and clients can connect to it.    This style of high availability differs from data replication in that it requires a shared file system which is accessible by both the live and backup nodes. Typically this will be some kind of high performance Storage Area Network (SAN). The use of Network Attached Storage (NAS), e.g. NFS mounts to store any shared journal (NFS is slow), is not recommended.    The advantage of shared store high availability is that no replication occurs between the live and backup nodes, this means it does not suffer any performance penalties due to the overhead of replication during normal operation. If you require the highest performance during normal operation and have access to a fast SAN, using shared store high availability is recommended.

Clustered

Whether this message bus uses a cluster of multiple JMS servers to host the message queues.

Using a cluster will add redundancy and load balancing of the JMS queues to the messaging solution, but also requires more hardware to deploy on.

Note that in most cases using a cluster with only a few process containers is not very useful, as the process containers are usually the bottle neck for the total message throughput long before the JMS servers are.

Also note that a cluster does not provide failover behaviour: if a server fails, clients will experience connection problems and messages might be (temporarily) lost. If you want a cluster with failover behaviour, select both use cluster and use failover. This will increase the hardware requirements significantly however, because every JMS server in the cluster needs its own dedicated failover server(s).

Nr. of nodes in cluster

The number of JMS servers in the cluster. Every server in the cluster will share the messaging load and host the same message queues (a symmetrical cluster).

Connection type

The transport type used for creating connections between the JMS clients and JMS servers.

Plain TCP: Normal, unencrypted TCP connections that don't need any SSL certificates.

TCP with SSL: Encrypted TCP connections that need an SSL certificate on the server side. If the certificate is not signed by an official Certificate Authority, you'll also need to (manually) add this certificate to the trust store on all clients.

TCP with 2-way SSL: Encrypted TCP connections that need a correct SSL certificate on both the server and the client side. In addition, the server certificates need to be added to the trust store of all clients, and the client certificates must be added to the trust store of all servers.
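With either SSL option the client JVM needs a trust store that contains (or leads to) the server certificate, and with 2-way SSL it also needs its own key store. A common, generic way to configure this is through the standard javax.net.ssl system properties, as in the hypothetical sketch below; whether your runtimes read these exact properties or use their own configuration depends on the environment, so treat the paths and passwords as placeholders.

```java
// Minimal sketch: point the JVM at the trust/key stores before any SSL (JMS)
// connections are created. The property names are the standard JSSE ones;
// the paths and passwords are placeholders.
public class SslStoreSetup {
    public static void main(String[] args) {
        // Trust store holding the (server) certificate chain.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        // Only needed for 2-way SSL: the client's own certificate and key.
        System.setProperty("javax.net.ssl.keyStore", "/path/to/client-keystore.jks");
        System.setProperty("javax.net.ssl.keyStorePassword", "changeit");

        // ... create the JMS connection factory and connections after this ...
    }
}
```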

Self-signed certificate

Indicates that the (server) certificate is not signed by an official Certificate Authority but is self-signed. When this is set, properties specifying the trust store path and password will be used by all clients.
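Because a self-signed certificate is not trusted by default, it has to be imported into the trust store of every client. The sketch below does this with the standard java.security APIs; the file names and password are placeholders, and the JDK's keytool utility can achieve the same result.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

// Minimal sketch: import a self-signed server certificate (server.crt) into a
// JKS trust store (truststore.jks) that the JMS clients can then be pointed at.
public class ImportSelfSignedCert {
    public static void main(String[] args) throws Exception {
        CertificateFactory certificateFactory = CertificateFactory.getInstance("X.509");
        Certificate serverCert;
        try (FileInputStream certIn = new FileInputStream("server.crt")) {
            serverCert = certificateFactory.generateCertificate(certIn);
        }

        KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(null, null); // start with an empty trust store
        trustStore.setCertificateEntry("emagiz-server", serverCert);

        try (FileOutputStream out = new FileOutputStream("truststore.jks")) {
            trustStore.store(out, "changeit".toCharArray());
        }
    }
}
```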
