...
(Optional, Duration) The timeout for reading from the server. Default is 3s.
Amazon S3
Uses an Amazon S3 bucket as storage:
storage:
  binary:
    provider: s3
    bucket: bucket
    region: us-east-1
    dataPath: ingestion
    credentials:
      accessKey: access-key
      secretKey: secret-key
    connection:
      connectTimeout: 30000
      socketTimeout: 30000
      maxConnections: 20
      awsThrottlingProperties:
        enabled: true
        retriesBeforeThrottling: 3
        maxRetries: 5
Configuration parameters
storage.binary.provider
(Required, String) Must be s3.
storage.binary.region
(Required, String) The AWS region where the bucket is located.
storage.binary.bucket
(Required, String) The AWS bucket name used for storage.
storage.binary.dataPath
(Optional, String) The path where the data will be stored within an S3 bucket. Default is binary.
storage.binary.credentials.accessKey
(Required, String) The access key provided by Amazon for connecting to the AWS S3 bucket.
storage.binary.credentials.secretKey
(Required, String) The secret key provided by Amazon for connecting to the AWS S3 bucket.
storage.binary.connection.connectTimeout
(Optional, Integer) Number of milliseconds before the connection times out. Defaults to 10000.
storage.binary.connection.socketTimeout
(Optional, Integer) Number of milliseconds before the socket times out. Defaults to 50000.
storage.binary.connection.maxConnections
(Optional, Integer) Maximum number of concurrent connections to AWS S3. Defaults to 50.
storage.binary.connection.awsThrottlingProperties.enabled
(Optional, Boolean) Whether throttled retries should be used. Defaults to false.
storage.binary.connection.awsThrottlingProperties.retriesBeforeThrottling
(Optional, Integer) Maximum number of consecutive failed retries that the client will permit before throttling all subsequent retries of failed requests.
storage.binary.connection.awsThrottlingProperties.maxRetries
(Optional, Integer) Maximum number of retry attempts for failed retryable requests.
Google Cloud Storage
Uses a Google Cloud Storage bucket for storage:
storage:
  binary:
    provider: gcs
    gcsBucket: bucket
    dataPath: ingestion
Configuration parameters
storage.binary.provider
(Required, String) Must be gcs.
storage.binary.gcsBucket
(Required, String) The GCS bucket name used for storage.
storage.binary.dataPath
(Optional, String) The path where the data will be stored within a GCS bucket. Default is binary.
Azure Cloud Storage
Uses an Azure Cloud Storage container for storage. The provider can authenticate either by setting the storageAccount and using the Default Azure Credential (recommended), or by setting the connectionString field to the value of the AZURE_STORAGE_CONNECTION_STRING variable.
storage:
  binary:
    provider: azure
    dataPath: ingestion
    azureContainer: bucket
    connectionString: connectionStringVariable
    storageAccount: storageAccountName
Configuration parameters
storage.binary.provider
(Required, String) Must be azure.
storage.binary.dataPath
(Optional, String) The path where the buckets will be created within a container. Default is binary.
storage.binary.azureContainer
(Required, String) The Azure Storage Container to connect to.
storage.binary.connectionString
(Optional, String) The connection string value. If present, it will have priority over the DefaultAzureCredentials.
storage.binary.storageAccount
(Optional, String) The storage account name to connect to. It is required when using default credentials, in which case connectionString must be null.
Cache Manager
The local cache is an extension to the one provided by Micronaut and implemented with Caffeine. It simplifies the process of caching entities with the use of annotations and configurations in the application.yml.
Configuration
storage:
  cache:
    local:
      myEntity:
        initialCapacity: 10
        maximumSize: 50
        maximumWeight: 100
        expireAfterWrite: 1h
        expireAfterAccess: 5m
        recordStats: true
        testMode: false
For a cache named myEntity with the following properties:
Property | Type | Default | Description |
---|---|---|---|
initialCapacity | Integer | 16 | The minimum size of the cache |
maximumSize | Long | | The maximum size of the cache. Cannot be combined with a weigher |
maximumWeight | Long | | The maximum weight to be allowed for an element in the cache (see Weigher section) |
expireAfterWrite | Duration | | The time to wait to expire an element after its creation |
expireAfterAccess | Duration | 5m | The time to wait to expire an element after the last time it was accessed |
recordStats | boolean | true | To record statistics about hit rates and evictions (see Cache Statistics section) |
testMode | boolean | false | To execute all cache operations in a single thread |
Each cache must have a unique name, which will be automatically normalized to kebab-case (e.g. myEntity becomes my-entity).
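The exact normalization is internal to the library, but the rule can be sketched in plain Java (KebabCase.toKebabCase is a hypothetical helper for illustration, not part of the core API):

```java
public class KebabCase {
    // Hypothetical helper illustrating the camelCase -> kebab-case rule:
    // insert a dash before each upper-case letter, then lower-case everything.
    public static String toKebabCase(String name) {
        return name.replaceAll("([a-z0-9])([A-Z])", "$1-$2").toLowerCase();
    }
}
```

For example, toKebabCase("myEntity") yields "my-entity", and an already-normalized name is left unchanged.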
A default configuration for a cache can be defined as a bean:
@Factory
@Requires(missingProperty = LocalCacheProperties.PREFIX + "." + MyEntityCacheConfig.CACHE_NAME)
public class MyEntityCacheConfig {
public static final String CACHE_NAME = "my-entity";
@Bean
@Singleton
@Named(CACHE_NAME)
LocalCacheProperties cacheProperties(ApplicationConfiguration applicationConfiguration) {
return LocalCacheProperties.builder()
.cacheName(CACHE_NAME)
.applicationConfiguration(applicationConfiguration)
.build();
}
}
Weigher
A weigher determines whether an element has become too heavy to remain in the cache. If a cache defines maximumWeight but no weigher named after the cache is registered, any other registered weigher will be selected. If no weigher is registered at all, a default weigher where every element has a weight of 1 will be used:
import com.github.benmanes.caffeine.cache.*;
@Singleton
@Named(MyEntityCacheConfig.CACHE_NAME)
public class MyEntityCacheWeigher implements Weigher<UUID, MyEntity> {
@Override
public @NonNegative int weigh(@NonNull UUID key, @NonNull MyEntity value) {
return 0;
}
}
Removal Listener
A listener that triggers every time an element is evicted:
import com.github.benmanes.caffeine.cache.*;
@Singleton
@Named(MyEntityCacheConfig.CACHE_NAME)
public class MyEntityCacheRemovalListener implements RemovalListener<UUID, MyEntity> {
@Override
public void onRemoval(@Nullable UUID key, @Nullable MyEntity value, @NonNull RemovalCause cause) {
// Do something with the event
}
}
Annotation-based caching
Any class (POJOs, Connections, Factories...) can be stored in the cache. For example, instances of the following entity type:
@Data
public class MyEntity implements CoreEntity<UUID> {
private UUID id;
private String name;
...
}
can be cached in any managed bean with the use of io.micronaut.cache.annotation
annotations:
@Cacheable
@CachePut
@CacheInvalidate
@Singleton
@CacheConfig(MyEntityCacheConfig.CACHE_NAME)
public class MyEntityService {
@Inject
protected MyEntityRepository myEntityRepository;
@Cacheable(keyGenerator = CoreEntityKeyGenerator.class)
public List<MyEntity> getAll() {
return myEntityRepository.getAll();
}
@Cacheable
public MyEntity getOne(UUID id) {
return myEntityRepository.getOne(id);
}
@CachePut(keyGenerator = CoreEntityKeyGenerator.class)
public void store(MyEntity myEntity) {
myEntityRepository.store(myEntity);
}
@CacheInvalidate
public MyEntity delete(UUID id) {
return myEntityRepository.delete(id);
}
}
The key for the cacheable object must implement equals() and hashCode().
Note that for the getAll()
and store(MyEntity)
methods, a custom key generator needs to be specified. This way the cache will calculate the appropriate key
for each entity. If no generator is defined, the DefaultCacheKeyGenerator
is used.
The CoreEntityKeyGenerator can be used with any entity that implements CoreEntity<T>.
Multiple caches can be configured in the same @CacheConfig
, in which case, the name of the used cache must be specified. Likewise, the key for the cached value can be a composite of multiple objects (internally wrapped as a ParametersKey
and generated by a DefaultCacheKeyGenerator
):
@Singleton
@CacheConfig({ "cacheA", "cacheB" })
public class MyMultiCacheService {
@Cacheable("cacheA")
public MyEntity getOneA(UUID id) {
...
}
@Cacheable("cacheB")
public MyEntity getOneB(UUID id, UUID parentId) {
...
}
}
Cache Statistics
If the cache statistics are enabled, they will be published as part of the application metrics:
- cache.eviction.weight - the sum of weights of evicted entries
- cache.evictions - the count of cache evictions
- cache.size - the estimated number of entries in the cache
- cache.gets - the number of times a cache-annotated method has returned an item (regardless of whether it was cached or not). This metric can be refined with the use of tags:
  - result:hit - the number of times cache lookup methods have returned a cached value
  - result:miss - the number of times cache lookup methods have returned an uncached value

If the application has multiple caches, the metrics can be filtered with the cache:my-entity tag.
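For intuition, the hit rate implied by the result:hit and result:miss tags is simply hits divided by total gets (CacheHitRate is an illustrative helper, not part of the library):

```java
public class CacheHitRate {
    // Illustrative: derive a hit rate from the result:hit and
    // result:miss counts of the cache.gets metric.
    public static double hitRate(long hits, long misses) {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }
}
```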
Collections
Flushable Collection
A data structure that asynchronously flushes its content any time a preconfigured criterion is met. It is backed by an ArrayList<T> and is guaranteed to be thread-safe.
Configuration
Basic Properties
Property | Type | Default | Description |
---|---|---|---|
maxCount | Integer | | The maximum number of elements before flushing. Triggers a Count flush |
maxDataSize | Long | | The maximum size of the collection elements before flushing. Triggers a Data Size flush |
flushAfter | Duration | | The duration before flushing. Triggers a Scheduled flush |
threads | Integer | 5 | The number of threads used to execute the flush event |
flushTimeout | Duration | 10m | The timeout for the flush event |
Properties Template
For a collection of type my-collection, a set of default properties can be defined as a bean or in the application.yml:
collections:
  flushable:
    myCollection:
      maxCount: 10
      maxDataSize: 1mb
      flushAfter: 5m
      threads: 10
      flushTimeout: 10m
The same configuration can be defined as:
@Factory
@Requires(missingProperty = FlushableCollectionProperties.PREFIX + "." + MyFlushableCollectionConfig.TYPE)
public class MyFlushableCollectionConfig {
public static final String TYPE = "my-collection";
@Bean
@Named(TYPE)
FlushableCollectionProperties collectionProperties() {
return FlushableCollectionProperties.builder()
.type(TYPE)
.maxCount(10)
.maxDataSize(DataSize.ofMegabytes(1).asBytes())
.flushAfter(Duration.ofMinutes(5))
.threads(10)
.flushTimeout(Duration.ofMinutes(10))
.build();
}
}
Each collection definition must have a unique name, which will be automatically normalized to kebab-case (e.g. myCollection becomes my-collection).
Action Handlers
Flush Handler
Consumer to be called when the flush event is triggered. For instance, a flush handler for a collection of integer elements can be defined as:
public class MyCollectionFlushHandler implements Consumer<FlushableCollection.Batch<Integer>> {
@Override
public void accept(FlushableCollection.Batch<Integer> batch) {
// Do something with the batch
}
}
By default, does nothing: builder.flushHandler(batch -> {})
Weigher
Function to determine the size of an element. Required to trigger a Data Size flush. For instance, a collection of integer elements can define its weigher as:
public class MyCollectionWeigher implements Function<Integer, Long> {
@Override
public Long apply(Integer element) {
return (long) element.toString().length();
}
}
By default, the weight is calculated after converting the element to String and counting its number of bytes using UTF-8 encoding.
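That default can be sketched as follows (DefaultWeigher.defaultWeight is an illustrative stand-in for the library's internal default, assuming the UTF-8 byte count described above):

```java
import java.nio.charset.StandardCharsets;

public class DefaultWeigher {
    // Sketch of the documented default: the weight of an element is the
    // number of UTF-8 bytes of its String representation.
    public static long defaultWeight(Object element) {
        return element.toString().getBytes(StandardCharsets.UTF_8).length;
    }
}
```

For example, defaultWeight(123) is 3, and each multi-byte character counts per encoded byte rather than per character.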
Success Handler
Consumer to be executed if the flush handler was successfully executed. For instance, a success handler for a collection of integer elements can be defined as:
public class MyCollectionSuccessHandler implements Consumer<FlushableCollection.Batch<Integer>> {
@Override
public void accept(FlushableCollection.Batch<Integer> batch) {
// Do something with the successful batch
}
}
By default, logs a debug message with the details of the processed batch: builder.successHandler(batch -> log.debug(...))
Failure Handler
BiConsumer to be executed if the flush handler failed. For instance, a failure handler for a collection of integer elements can be defined as:
public class MyCollectionFailureHandler implements BiConsumer<FlushableCollection.Batch<Integer>, Throwable> {
@Override
public void accept(FlushableCollection.Batch<Integer> batch, Throwable ex) {
// Do something with the failed batch
}
}
By default, logs an error message with the details of the batch and its exception: builder.failureHandler((batch, ex) -> log.error(...))
. If the flush event causes a timeout, the input Throwable
will be of type java.util.concurrent.TimeoutException
.
Usage
The collection must be created through the FlushableCollectionFactory
bean, by providing the expected type. If the application context finds a FlushableCollectionProperties
with the same name, it will be used as a template for the new collection. Otherwise, a new properties set with default values will be created. Note that any pre-defined property can be overridden during the build phase.
@Inject
protected FlushableCollectionFactory flushableCollectionFactory;
void submit() {
try (var collection = flushableCollectionFactory.<Integer>builder("my-collection")
.flushHandler(new MyCollectionFlushHandler())
.weigher(new MyCollectionWeigher())
.successHandler(new MyCollectionSuccessHandler())
.failureHandler(new MyCollectionFailureHandler())
.build()) {
for (int i = 0; i < 10; i++) {
collection.add(i);
}
}
}
Flush Events
- COUNT - Triggers if the collection contains more elements than the value defined in the maxCount property. If undefined, it will never be triggered.
- DATA_SIZE - Triggers if the size of the elements in the collection is greater than the value defined in the maxDataSize property. If undefined, it will never be triggered.
- SCHEDULED - Triggers based on the schedule defined with the flushAfter property. If undefined, it will never be triggered.
- MANUAL - Triggers whenever the collection.flush() method is called.
- CLOSE - Triggers whenever the collection is closed, either by using a try-with-resources block or by calling the collection.close() method.
Flush Metrics
Each collection will publish metrics about the duration of each flush event, its size, and the count of success/failures (see Metrics section).
This can be refined with the use of custom tags during the creation of the collection:
try (var collection = flushableCollectionFactory.<Integer>builder("my-collection")
.tag("key", "value")
.build()) {
// Do something with the collection
}
The metrics will then be available in:
GET /metrics/pdp.collections.flushable.[type]
- The count of successful and failed flush events.GET /metrics/pdp.collections.flushable.[type].duration
- The duration for the flush handler.GET /metrics/pdp.collections.flushable.[type].size
- The size of the flushed elements.
DSL
The PDP DSL is an abstract definition for a common language to be used in any PDP product. Its intention is to have a standardized way to express configurations that might be interpreted in a different way according to the needs of the product itself.
Filters
A filter is a criterion to be applied to a given object. In order to use one, the FilterAdapter interface needs to be implemented. The core supports the following concrete adapters:
- MapFilterAdapter - Converts the filter into a predicate used to evaluate Map<String, Object> structures. The expected field name is the key of the map.
- JsonPathFilterAdapter - Converts the filter into a predicate used to evaluate DocumentContext structures. The expected field name is a JSON Path to be found in the JSON document.
- JsonPointerFilterAdapter - Converts the filter into a predicate used to evaluate ObjectNode structures. The expected field name is a JSON Pointer to be found in the JSON document.
All filters have an optional source field that could be used by the concrete implementation to select among multiple data structures.
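For intuition, the predicate a MapFilterAdapter would produce for an "equals" filter behaves like this sketch (MapFilterSketch.equalsPredicate is hypothetical; the real adapters implement the FilterAdapter interface):

```java
import java.util.Map;
import java.util.function.Predicate;

public class MapFilterSketch {
    // Hypothetical helper: the value stored under `field` must equal `value`.
    public static Predicate<Map<String, Object>> equalsPredicate(String field, Object value) {
        return map -> value.equals(map.get(field));
    }
}
```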
"Equals" Filter
The value of the field must be exactly as the one provided.
var filter = EqualsFilter.builder().field("field").value("value").build();
{
  "equals": {
    "field": "field",
    "value": "value"
  }
}
"Greater Than" Filter
The value of the field must be greater than the one provided.
var filter = GreaterThanFilter.builder().field("field").value(1).build();
{
  "gt": {
    "field": "field",
    "value": 1
  }
}
"Greater Than or Equals" Filter
The value of the field must be greater than or equals to the one provided.
var filter = GreaterThanOrEqualsFilter.builder().field("field").value(1).build();
{
  "gte": {
    "field": "field",
    "value": 1
  }
}
"Less Than" Filter
The value of the field must be less than the one provided.
var filter = LessThanFilter.builder().field("field").value(1).build();
{
  "lt": {
    "field": "field",
    "value": 1
  }
}
"Less Than or Equals" Filter
The value of the field must be less than or equals to the one provided.
var filter = LessThanOrEqualsFilter.builder().field("field").value(1).build();
{
  "lte": {
    "field": "field",
    "value": 1
  }
}
"In" Filter
The value of the field must be one of the provided values.
var filter = InFilter.builder().field("field").value("valueA").value("valueB").build();
{
  "in": {
    "field": "field",
    "values": [
      "valueA",
      "valueB"
    ]
  }
}
"Empty" Filter
Checks if a field is empty:
- For a collection, true if its size is 0
- For a String, true if its length is 0
- For any other type, true if it is null
var filter = EmptyFilter.builder().field("field").build();
{
  "empty": {
    "field": "field"
  }
}
"Exists" Filter
Checks if a field exists.
var filter = ExistsFilter.builder().field("field").build();
{
  "exists": {
    "field": "field"
  }
}
"Not" Filter
Negates the inner clause.
var filter = NotFilter.builder()
    .clause(EqualsFilter.builder().field("field").value("value").build())
    .build();
{
  "not": {
    "equals": {
      "field": "field",
      "value": "value"
    }
  }
}
"Null" Filter
Checks if a field is null. Note that while the "exists" filter checks whether the field is present or not, the "null" filter expects the field to be present but with null value.
var filter = NullFilter.builder().field("field").build();
{
  "null": {
    "field": "field"
  }
}
Boolean Operators
"And" Filter
All conditions in the list must be evaluated to true.
var filter = AndFilter.builder()
    .clause(EqualsFilter.builder().field("fieldA").value("valueA").build())
    .clause(EqualsFilter.builder().field("fieldB").value("valueB").build())
    .build();
{
  "and": [
    {
      "equals": {
        "field": "fieldA",
        "value": "valueA"
      }
    },
    {
      "equals": {
        "field": "fieldB",
        "value": "valueB"
      }
    }
  ]
}
"Or" Filter
At least one condition in the list must be evaluated to true.
var filter = OrFilter.builder()
    .clause(EqualsFilter.builder().field("fieldA").value("valueA").build())
    .clause(EqualsFilter.builder().field("fieldB").value("valueB").build())
    .build();
{
  "or": [
    {
      "equals": {
        "field": "fieldA",
        "value": "valueA"
      }
    },
    {
      "equals": {
        "field": "fieldB",
        "value": "valueB"
      }
    }
  ]
}
EvalEx
EvalEx is a lightweight library capable of processing numeric, logical, conditional, String, array and structure-based expressions at runtime. It doesn’t need any external dependencies. Some minor functionalities such as support for hexadecimal and implicit multiplication are present. None of the functions are null safe due to the conversion of the parameters to EvaluationValue
during processing.
Example of basic expression evaluation without variables or configuration.
var input = "5 * 2";
var expression = new Expression(input);
var rawResult = expression.evaluate();
int finalResult = rawResult.getNumberValue().intValue();
If the expression has variables, a Map with the value of said variables must be provided.
var input = "x * y";
Map<String, Object> variables = new HashMap<>();
variables.put("x", "5");
variables.put("y", "2");
var expression = new Expression(input);
var rawResult = expression.withValues(variables).evaluate();
int finalResult = rawResult.getNumberValue().intValue();
Exceptions
The library provides two exceptions which must be handled. A ParseException is thrown when the expression could not be parsed. An EvaluationException is thrown when the expression could be parsed, but not evaluated. If possible, both exceptions report the following details:
- Start Position of the error (character position, starting with 1).
- End Position of the error (character position, starting with 1).
- The Token string, usually the operator, function, variable or literal causing the error.
- The error message.
Default Functions
Although the default operators and functions are not explicitly listed in the official documentation, they can be seen in the ExpressionConfiguration
class.
Function | Description | Example |
---|---|---|
ABS | Returns the absolute value of a value | ABS(-7) = 7 |
CEILING | Rounds a number towards positive infinity | CEILING(1.1) = 2 |
FACT | Returns the factorial of a number | FACT(5) = 120 |
FLOOR | Rounds a number towards negative infinity | FLOOR(1.9) = 1 |
IF | Conditional operation where if the first parameter is true, the second one is executed. If not, the third parameter is executed | IF(TRUE, 5+1, 6+2) = 8 |
LOG | Performs the logarithm with base e on a value | LOG(5) = 1.609 |
LOG10 | Performs the logarithm with base 10 on a value | LOG10(5) = 0.698 |
MAX | Returns the highest value from all the parameters provided | MAX(5, 55, 6, 102) = 102 |
MIN | Returns the lowest value from all the parameters provided | MIN(5, 55, 6, 102) = 5 |
NOT | Negates a boolean expression | NOT(True) = false |
RANDOM | Returns random number between 0 and 1 | RANDOM() = 0.1613... |
ROUND | Rounds a decimal number to a specified scale | ROUND(0.5652,2) = 0.57 |
SUM | Returns the sum of the parameters | SUM(0.5, 3, 1) = 4.5 |
SQRT | Returns the square root of the value provided | SQRT(4) = 2 |
ACOS | Returns the arc-cosine in degrees | ACOS(1) = 0 |
ACOSH | Returns the hyperbolic arc-cosine in degrees | ACOSH(1.5) = 0.96 |
ACOSR | Returns the arc-cosine in radians | ACOSR(0.5) = 1.04 |
ACOT | Returns the arc-co-tangent in degrees | ACOT(1) = 45 |
ACOTH | Returns the hyperbolic arc-co-tangent in degrees | ACOTH(1.003) = 3.141 |
ACOTR | Returns the arc-co-tangent in radians | ACOTR(1) = 0.785 |
ASIN | Returns the arc-sine in degrees | ASIN(1) = 90 |
ASINH | Returns the hyperbolic arc-sine in degrees | ASINH(6.76) = 2.61 |
ASINR | Returns the arc-sine in radians | ASINR(1) = 1.57 |
ATAN | Returns the arc-tangent in degrees | ATAN(1) = 45 |
ATAN2 | Returns the angle of arc-tangent2 in degrees | ATAN2(1, 0) = 90 |
ATAN2R | Returns the angle of arc-tangent2 in radians | ATAN2R(1, 0) = 1.57 |
ATANH | Returns the hyperbolic arc-tangent in degrees | ATANH(0.5) = 0.54 |
ATANR | Returns the arc-tangent in radians | ATANR(1) = 0.78 |
COS | Returns the cosine in degrees | COS(180) = -1 |
COSH | Returns the hyperbolic cosine in degrees | COSH(PI) = 11.591 |
COSR | Returns the cosine in radians | COSR(PI) = -1 |
COT | Returns the co-tangent in degrees | COT(45) = 1 |
COTH | Returns the hyperbolic co-tangent in degrees | COTH(PI) = 1.003 |
COTR | Returns the co-tangent in radians | COTR(0.785) = 1 |
CSC | Returns the co-secant in degrees | CSC(270) = -1 |
CSCH | Returns the hyperbolic co-secant in degrees | CSCH(3*PI/2) = 0.017 |
CSCR | Returns the co-secant in radians | CSCR(3*PI/2) = -1 |
DEG | Converts an angle from radians to degrees | DEG(0.785) = 45 |
RAD | Converts an angle from degrees to radians | RAD(45) = 0.785 |
SIN | Returns the sine in degrees | SIN(150) = 0.5 |
SINH | Returns the hyperbolic sine in degrees | SINH(2.61) = 6.762 |
SINR | Returns the sine in radians | SINR(2.61) = 0.5 |
SEC | Returns the secant in degrees | SEC(120) = -2 |
SECH | Returns the hyperbolic secant in degrees | SECH(2.09) = 0.243 |
SECR | Returns the secant in radians | SECR(2.09) = -2 |
TAN | Returns the tangent in degrees | TAN(360) = 0 |
TANH | Returns the hyperbolic tangent in degrees | TANH(2*PI) = 1 |
TANR | Returns the tangent in radians | TANR(2*PI) = 0 |
STR_CONTAINS | Returns true if string contains substring or false if not (case sensitive) | STR_CONTAINS("Hoy es", "Hoy") = true |
STR_LOWER | Converts all the characters of a string to upper case | STR_LOWER("HOY ES") = hoy es |
STR_UPPER | Converts all the characters of a string to upper case | STR_UPPER("hoy es") = HOY ES |
Expression Configuration
The expression evaluation can be configured to enable and disable specific features.
Feature | Description | Defaults |
---|---|---|
allowOverwriteConstants | Allows variables to have the name of a constant | True |
arraysAllowed | Allows array index functions | True |
dataAccessorSupplier | The Data Accessor is responsible for storing and retrieving variable values. You can define your own data access interface, by defining a class that implements the DataAccessorInterface | MapBasedDataAccessor |
decimalPlacesRounding | Specifies the amount of decimal places to round to in each operation or function | Disabled |
defaultConstants | Specifies the default constants that can be used in every expression | ExpressionConfiguration.StandardConstants |
functionDictionary | The function dictionary is used to look up the functions that are used in an expression. You can define your own function directory, by defining a class that implements the FunctionDictionaryIfc | MapBasedFunctionDictionary |
implicitMultiplicationAllowed | Allows for automatic multiplication without operators | True |
mathContext | Specifies the precision and rounding method | Precision: 68, Mode: HALF-EVEN |
operatorDictionary | The operator dictionary is used to look up the functions that are used in an expression. You can define your own operator directory, by defining a class that implements the OperatorDictionaryIfc | MapBasedOperatorDictionary |
powerOfPrecedence | Allows changes to the operation precedence | Lower precedence |
stripTrailingZeros | Allows the trailing decimal zeros in a number result to be stripped | True |
structuresAllowed | Specifies if the structure separator (‘.’) operator is allowed | True |
Custom Functions
...
.clause(EqualsFilter.builder().field("fieldA").value("valueA").build())
.clause(EqualsFilter.builder().field("fieldB").value("valueB").build())
.build();
{
"and": [
{
"equals": {
"field": "fieldA",
"value": "valueA"
}
}, {
"equals": {
"field": "fieldB",
"value": "valueB"
}
}
]
}
"Or" Filter
At least one condition in the list must be evaluated to true
.
var filter = OrFilter.builder()
.clause(EqualsFilter.builder().field("fieldA").value("valueA").build())
.clause(EqualsFilter.builder().field("fieldB").value("valueB").build())
.build();
{
"or": [
{
"equals": {
"field": "fieldA",
"value": "valueA"
}
}, {
"equals": {
"field": "fieldB",
"value": "valueB"
}
}
]
}
EvalEx
EvalEx is a lightweight library capable of processing numeric, logical, conditional, String, array and structure-based expressions at runtime. It doesn’t need any external dependencies. Some minor functionalities such as support for hexadecimal and implicit multiplication are present. None of the functions are null safe due to the conversion of the parameters to EvaluationValue
during processing.
Example of basic expression evaluation without variables or configuration.
var input = "5 * 2";
var expression = new Expression(input);
var rawResult = expression.evaluate();
int finalResult = rawResult.getNumberValue().intValue();
If the expression has variables, a Map with the value of said variables must be provided.
var input = "x * y";
Map<String, Object> variables = new HashMap<>();
variables.put("x", "5");
variables.put("y", "2");
var expression = new Expression(input);
var rawResult = expression.withValues(variables).evaluate();
int finalResult = rawResult.getNumberValue().intValue();
Exceptions
The library provides two new exceptions which must be handled. A ParseException
is thrown when the expression could not be parsed. An EvaluationException
is thrown when the expression could be parsed but not evaluated. If possible, both exceptions report the following details:
- Start Position of the error (character position, starting with 1).
- End Position of the error (character position, starting with 1).
- The Token string, usually the operator, function, variable or literal causing the error.
- The error message.
Default Functions
Although the default operators and functions are not explicitly listed in the official documentation, they can be seen in the ExpressionConfiguration
class.
Function | Description | Example |
---|---|---|
ABS | Returns the absolute value of a value | ABS(-7) = 7 |
CEILING | Rounds a number towards positive infinity | CEILING(1.1) = 2 |
FACT | Returns the factorial of a number | FACT(5) = 120 |
FLOOR | Rounds a number towards negative infinity | FLOOR(1.9) = 1 |
IF | Conditional operation where if the first parameter is true, the second one is executed. If not, the third parameter is executed | IF(TRUE, 5+1, 6+2) = 8 |
LOG | Performs the logarithm with base e on a value | LOG(5) = 1.609 |
LOG10 | Performs the logarithm with base 10 on a value | LOG10(5) = 0.698 |
MAX | Returns the highest value from all the parameters provided | MAX(5, 55, 6, 102) = 102 |
MIN | Returns the lowest value from all the parameters provided | MIN(5, 55, 6, 102) = 5 |
NOT | Negates a boolean expression | NOT(True) = false |
RANDOM | Returns random number between 0 and 1 | RANDOM() = 0.1613... |
ROUND | Rounds a decimal number to a specified scale | ROUND(0.5652,2) = 0.57 |
SUM | Returns the sum of the parameters | SUM(0.5, 3, 1) = 4.5 |
SQRT | Returns the square root of the value provided | SQRT(4) = 2 |
ACOS | Returns the arc-cosine in degrees | ACOS(1) = 0 |
ACOSH | Returns the hyperbolic arc-cosine in degrees | ACOSH(1.5) = 0.96 |
ACOSR | Returns the arc-cosine in radians | ACOSR(0.5) = 1.04 |
ACOT | Returns the arc-co-tangent in degrees | ACOT(1) = 45 |
ACOTH | Returns the hyperbolic arc-co-tangent in degrees | ACOTH(1.003) = 3.141 |
ACOTR | Returns the arc-co-tangent in radians | ACOTR(1) = 0.785 |
ASIN | Returns the arc-sine in degrees | ASIN(1) = 90 |
ASINH | Returns the hyperbolic arc-sine in degrees | ASINH(6.76) = 2.61 |
ASINR | Returns the arc-sine in radians | ASINR(1) = 1.57 |
ATAN | Returns the arc-tangent in degrees | ATAN(1) = 45 |
ATAN2 | Returns the angle of arc-tangent2 in degrees | ATAN2(1, 0) = 90 |
ATAN2R | Returns the angle of arc-tangent2 in radians | ATAN2R(1, 0) = 1.57 |
ATANH | Returns the hyperbolic arc-tangent in degrees | ATANH(0.5) = 0.54 |
ATANR | Returns the arc-tangent in radians | ATANR(1) = 0.78 |
COS | Returns the cosine in degrees | COS(180) = -1 |
COSH | Returns the hyperbolic cosine in degrees | COSH(PI) = 11.591 |
COSR | Returns the cosine in radians | COSR(PI) = -1 |
COT | Returns the co-tangent in degrees | COT(45) = 1 |
COTH | Returns the hyperbolic co-tangent in degrees | COTH(PI) = 1.003 |
COTR | Returns the co-tangent in radians | COTR(0.785) = 1 |
CSC | Returns the co-secant in degrees | CSC(270) = -1 |
CSCH | Returns the hyperbolic co-secant in degrees | CSCH(3*PI/2) = 0.017 |
CSCR | Returns the co-secant in radians | CSCR(3*PI/2) = -1 |
DEG | Converts an angle from radians to degrees | DEG(0.785) = 45 |
RAD | Converts an angle from degrees to radians | RAD(45) = 0.785 |
SIN | Returns the sine in degrees | SIN(150) = 0.5 |
SINH | Returns the hyperbolic sine in degrees | SINH(2.61) = 6.762 |
SINR | Returns the sine in radians | SINR(2.61) = 0.5 |
SEC | Returns the secant in degrees | SEC(120) = -2 |
SECH | Returns the hyperbolic secant in degrees | SECH(2.09) = 0.243 |
SECR | Returns the secant in radians | SECR(2.09) = -2 |
TAN | Returns the tangent in degrees | TAN(360) = 0 |
TANH | Returns the hyperbolic tangent in degrees | TANH(2*PI) = 1 |
TANR | Returns the tangent in radians | TANR(2*PI) = 0 |
STR_CONTAINS | Returns true if string contains substring or false if not (case sensitive) | STR_CONTAINS("Hoy es", "Hoy") = true |
STR_LOWER | Converts all the characters of a string to lower case | STR_LOWER("HOY ES") = hoy es |
STR_UPPER | Converts all the characters of a string to upper case | STR_UPPER("hoy es") = HOY ES |
Expression Configuration
The expression evaluation can be configured to enable and disable specific features.
Feature | Description | Defaults |
---|---|---|
allowOverwriteConstants | Allows variables to have the name of a constant | True |
arraysAllowed | Allows array index functions | True |
dataAccessorSupplier | The Data Accessor is responsible for storing and retrieving variable values. You can define your own data access interface, by defining a class that implements the DataAccessorInterface | MapBasedDataAccessor |
decimalPlacesRounding | Specifies the amount of decimal places to round to in each operation or function | Disabled |
defaultConstants | Specifies the default constants that can be used in every expression | ExpressionConfiguration.StandardConstants |
functionDictionary | The function dictionary is used to look up the functions that are used in an expression. You can define your own function dictionary, by defining a class that implements the FunctionDictionaryIfc | MapBasedFunctionDictionary |
implicitMultiplicationAllowed | Allows for automatic multiplication without operators | True |
mathContext | Specifies the precision and rounding method | Precision: 68, Mode: HALF-EVEN |
operatorDictionary | The operator dictionary is used to look up the operators that are used in an expression. You can define your own operator dictionary, by defining a class that implements the OperatorDictionaryIfc | MapBasedOperatorDictionary |
powerOfPrecedence | Allows changes to the operation precedence | Lower precedence |
stripTrailingZeros | Allows the trailing decimal zeros in a number result to be stripped | True |
structuresAllowed | Specifies if the structure separator (‘.’) operator is allowed | True |
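To make the mathContext default concrete, here is a plain-Java sketch of what precision 68 with HALF-EVEN rounding means. It uses only java.math, not EvalEx itself; the divide helper is illustrative:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class Main {
    // Documented default: precision 68, rounding mode HALF_EVEN
    static final MathContext DEFAULT = new MathContext(68, RoundingMode.HALF_EVEN);

    // Divides two decimal strings under the default context
    static BigDecimal divide(String a, String b) {
        return new BigDecimal(a).divide(new BigDecimal(b), DEFAULT);
    }

    public static void main(String[] args) {
        // 1/3 does not terminate, so it is rounded to 68 significant digits
        System.out.println(divide("1", "3").precision()); // 68
        // HALF_EVEN resolves ties toward the even neighbour: 2.5 rounds to 2
        System.out.println(new BigDecimal("2.5").round(new MathContext(1, RoundingMode.HALF_EVEN)));
    }
}
```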
Custom Functions
Custom functions can be added through the expression configuration. This requires a new class that extends AbstractFunction. @FunctionParameter
annotations must be added to the class to declare the parameters the function takes. Lastly, the evaluate method must be overridden to implement the custom function's logic. These functions can be called recursively.
This class is an example of a basic custom function which adds 5 to the parameter provided.
@FunctionParameter(name = "value")
public class AddFiveFunction extends AbstractFunction {
@Override
public EvaluationValue evaluate(Expression expression, Token functionToken, EvaluationValue... parameterValues) throws EvaluationException {
EvaluationValue value = parameterValues[0];
return new EvaluationValue(value.getNumberValue().doubleValue()+5.0);
}
}
The function is then added to the evaluated expression via an ExpressionConfiguration
object at evaluation time.
ExpressionConfiguration configuration = ExpressionConfiguration.
defaultConfiguration().withAdditionalFunctions(
Map.entry("ADDFIVE", new AddFiveFunction()));
var input = "ADDFIVE(5)";
var expression = new Expression(input, configuration);
var rawResult = expression.evaluate();
int finalResult = rawResult.getNumberValue().intValue();
Custom Operators
Much like functions, custom operators can be added. To do this, a new class that extends AbstractOperator
is needed. A tag must be added to specify if the operator must be used as prefix, postfix or infix. The tag also specifies the precedence using a value for comparison. The value of the other operators can be seen in the OperatorIfc
for reference. If no value is specified, the operator will have the highest precedence.
This class is an example of a basic custom prefix operator which multiplies the operand by 3.
@PrefixOperator(precedence = 1)
public class TimesThreeOperator extends AbstractOperator {
@Override
public EvaluationValue evaluate(Expression expression, Token operatorToken, EvaluationValue... operands) throws EvaluationException {
return new EvaluationValue(operands[0].getNumberValue().intValue()*3);
}
}
The operator is then added to the evaluated expression via an ExpressionConfiguration
object at evaluation time.
ExpressionConfiguration configuration = ExpressionConfiguration.
defaultConfiguration().withAdditionalOperators(
Map.entry("@", new TimesThreeOperator()));
var input = "@3";
var expression = new Expression(input, configuration);
var rawResult = expression.evaluate();
int finalResult = rawResult.getNumberValue().intValue();
EvalEx Utils
The EvalExUtils
class contains methods made to facilitate the processes that may use the Expression Language evaluator.
Array Converters
When an array is evaluated all of its contents are converted to an EvaluationValue
object. This renders the resulting array useless if not processed. To simplify this, 4 array converters were included to convert an EvaluationValue
array to an Integer, String, Boolean or plain Object array.
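One such converter can be sketched with plain Java; WrappedValue below is a hypothetical stand-in for EvaluationValue, not the library's class:

```java
import java.util.Arrays;

public class Main {
    // Hypothetical stand-in for EvalEx's EvaluationValue wrapper
    record WrappedValue(Object raw) {
        String asString() { return String.valueOf(raw); }
    }

    // Converter in the spirit described above: unwraps a wrapped
    // array into a plain String[]
    static String[] toStringArray(WrappedValue[] values) {
        return Arrays.stream(values)
                .map(WrappedValue::asString)
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        WrappedValue[] wrapped = { new WrappedValue(1), new WrappedValue("two") };
        System.out.println(String.join(",", toStringArray(wrapped))); // 1,two
    }
}
```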
Added Functions
System Function
SystemFunction
gets a System property or Environment Variable value given its key. The System property has precedence. If no System property or Environment variable is found, an EvaluationException
is thrown with the message "No such system parameter or environment variable". It is used with the SYSTEM
tag. E.g. SYSTEM(java.runtime.version) = 17.0.6+10 (result may vary).
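The property-over-environment precedence can be sketched with plain JDK calls; lookup is an illustrative helper, not the library's implementation:

```java
import java.util.Optional;

public class Main {
    // System property takes precedence over the environment variable of the same name
    static Optional<String> lookup(String key) {
        String property = System.getProperty(key);
        if (property != null) {
            return Optional.of(property);
        }
        return Optional.ofNullable(System.getenv(key));
    }

    public static void main(String[] args) {
        System.setProperty("my.demo.key", "from-property");
        System.out.println(lookup("my.demo.key").orElseThrow()); // from-property
        System.out.println(lookup("surely.not.defined.anywhere").isEmpty()); // true
    }
}
```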
Encrypt and Decrypt Functions
EncryptFunction
encrypts the given content using the encryption method in CryptoUtils
. It is used with the ENCRYPT
tag. E.g. ENCRYPT("Test") = (Encrypted string for "Test"). DecryptFunction
decrypts an encrypted content using the decryption method in CryptoUtils
. It is used with the DECRYPT
tag. E.g. DECRYPT(Encrypted string for "Test") = Test.
Regex Match Function
RegexMatchFunction
returns a boolean specifying if a string matches a given pattern. It is used with the REGEX_MATCH
tag. E.g. REGEX_MATCH("This is a Test", ".t") = true.
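Such a check can be sketched with java.util.regex; whether the real RegexMatchFunction uses find() or a full matches() is an assumption here:

```java
import java.util.regex.Pattern;

public class Main {
    // Hedged sketch of a regex-match helper: true when the pattern
    // occurs anywhere in the input (find semantics, assumed)
    static boolean regexMatch(String input, String pattern) {
        return Pattern.compile(pattern).matcher(input).find();
    }

    public static void main(String[] args) {
        System.out.println(regexMatch("This is a Test", ".t")); // true ("st" in "Test")
        System.out.println(regexMatch("AAAA", ".t"));           // false
    }
}
```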
Ends With and Starts with Functions
EndsWithFunction
verifies with a boolean if a string ends with a given substring. It is used with the STR_ENDSWITH
tag. E.g. STR_ENDSWITH("This is a test","test") = true. StartsWithFunction
verifies with a boolean if a string starts with a given substring. These functions are case-sensitive. It is used with the STR_STARTSWITH
tag. E.g. STR_STARTSWITH("This is a test","This") = true.
Is Empty and Is Blank Functions
IsBlankFunction
verifies with a boolean if a string is blank. It is used with the STR_ISBLANK
tag. E.g. STR_ISBLANK(" ") = true. IsEmptyFunction
verifies with a boolean if a string is empty. It is used with the STR_ISEMPTY
tag. E.g. STR_ISEMPTY("") = true.
Split Function
SplitFunction
splits a given string by a given token. The resulting array contains EvaluationValue
objects, therefore the substrings must be converted back to strings. It is used with the STR_SPLIT
tag. E.g. STR_SPLIT("This is a test", " ") = {"This", "is", "a", "test"}.
Concat Function
ConcatFunction
concatenates any given set of strings. It is used with the STR_CONCAT
tag. E.g. STR_CONCAT("This", "is", "a", "test") = "This is a Test".
Array Join Function
ArrayJoinFunction
concatenates two given arrays of the same type into one array. It is used with the ARR_JOIN
tag. The resulting array will be an Object[], therefore it must be processed with the arrayFactoryProcessor
. E.g. ARR_JOIN(list1, list2) = (1, 2, 3, 4, 5, 6), with list1 = (1, 2, 3) and list2 = (4, 5, 6) being variables.
Array Contains Function
ArrayContainsFunction
verifies if an array contains a given value. It is used with the ARR_CONTAINS
tag. E.g. ARR_CONTAINS(array, 1) = true, with array = (1, 2, 3) being a variable.
Get By Index Function
GetByIndexFunction
provides the value in the given index. It is used with the ARR_GET
tag. E.g. ARR_GET(array, 0) = "This", with array = ("This", "is", "a", "test") being a variable.
Array Is Empty Function
ArrayIsEmptyFunction
provides a boolean specifying if an array is empty. It is used with the ARR_ISEMPTY
tag. E.g. ARR_ISEMPTY(array) = true, with array = () being a variable.
Array Size Function
ArraySizeFunction
returns the size of an array. It is used with the ARR_SIZE
tag. E.g. ARR_SIZE(array) = 3, with array = (1, 2, 3) being a variable.
Get By Key Function
GetByKeyFunction
returns the value assigned to the key given by parameter. Because of the evaluation process, the key should be passed as a string even when the actual key is a number. It is used with the MAP_GET
tag. E.g. MAP_GET(map, "1") = "Test", where <Integer, String> map = (1, "Test") is a variable.
Map Contains Key Function
MapContainsKeyFunction
provides a boolean specifying if a map contains a given key. Because of the evaluation process, the key should be passed as a string even when the actual key is a number. It is used with the MAP_CONTAINS
tag. E.g. MAP_CONTAINS(map, "1") = true, where <Integer, String> map = (1, "Test") is a variable.
Map Is Empty Function
MapIsEmptyFunction
verifies with a boolean if a map is empty. It is used with the MAP_ISEMPTY
tag. E.g. MAP_ISEMPTY(map) = true, where map = () is a variable.
Map Size Function
MapSizeFunction
returns the size of a map. It is used with the MAP_SIZE
tag. E.g. MAP_SIZE(map) = 1, where <Integer, String> map = (1, "Test") is a variable.
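The string-key convention for the map functions can be pictured with a plain-Java sketch; getByKey is an illustrative helper, not the library's API:

```java
import java.util.Map;

public class Main {
    // The evaluator hands every parameter over as text, so a numeric key
    // arrives as a string and must be parsed back to the map's real key
    // type before the lookup (illustrative helper)
    static String getByKey(Map<Integer, String> map, String key) {
        return map.get(Integer.parseInt(key));
    }

    public static void main(String[] args) {
        Map<Integer, String> map = Map.of(1, "Test");
        System.out.println(getByKey(map, "1")); // Test
    }
}
```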
File Read Function
FileReadFunction
reads a file using the find utility in FileStorageService
. It is used with the FILE_FIND
tag.
HTTP Base Client
The HTTP Base Client is an abstraction of the OkHttpClient. It provides features such as retry handlers (see Retries) and automated paginated requests.
...
This configuration can be loaded as follows:
protected void myMethod(@Named("direct-consumer") ConsumerProperties consumerProperties) {
try {
var maxRequeues = consumerProperties.getMaxMessageRequeues();
...
}
}
Registering Consumers
To register a consumer do the following:
public class MyClass {
@Inject
MessageQueueProvider mqProvider;
...
protected void myMethod(@Named("direct-consumer") ConsumerProperties consumerProperties) {
try {
mqProvider.registerConsumer(queue, consumerProperties, Message.MessageType.DIRECT, this::consumeMessage);
...
}
}
The registerConsumer
method takes the following parameters:
Parameter | Type | Description |
---|---|---|
queue | String | Name of the queue to register the Consumer to. |
consumerProperties | ConsumerProperties | A ConsumerProperties instance for the consumer. |
messageType | Message.MessageType | The type of message this consumer is supposed to listen to. Can be Message.MessageType.DIRECT or Message.MessageType.BROADCAST. See Message Types. |
onMessageDelivery | Predicate<Message> | The function to execute when a message is received. Returns true if the message is successfully consumed, false otherwise. |
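The shape of an onMessageDelivery callback can be sketched as follows; Message here is a hypothetical minimal stand-in for the framework's class:

```java
import java.util.function.Predicate;

public class Main {
    // Hypothetical stand-in for the framework's Message type
    record Message(String body) {}

    // Shape expected for onMessageDelivery: return true when the
    // message was consumed successfully, false to signal failure
    static boolean consumeMessage(Message message) {
        return message.body() != null && !message.body().isBlank();
    }

    public static void main(String[] args) {
        Predicate<Message> onMessageDelivery = Main::consumeMessage;
        System.out.println(onMessageDelivery.test(new Message("payload"))); // true
        System.out.println(onMessageDelivery.test(new Message(" ")));       // false
    }
}
```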
If the consumer should execute an action when the maximum number of requeues has been exceeded use the following:
public class MyClass {
@Inject
MessageQueueProvider mqProvider;
...
protected void myMethod(@Named("direct-consumer") ConsumerProperties consumerProperties) {
try {
mqProvider.registerConsumer(queue, consumerProperties, Message.MessageType.DIRECT, this::consumeMessage, this::requeueConsumeMessage);
...
}
}
It adds the following parameter to the previous method:
Parameter | Type | Description |
---|---|---|
onMessageRequeueConsumer | Consumer<String> | The function to execute when the maximum requeue has been exceeded. The function's parameter is a String containing the failed message's body. |
Producers
Configuration
Property | Type | Default | Description |
---|---|---|---|
producer.threads | Integer | 5 | Number of threads available for producers to send messages asynchronously. |
producer.sendMessageTimeout | Duration | 30s | Maximum time to wait to get a confirmation that a message has been successfully sent. |
producer.retry.retries | Integer | 5 | Maximum number of times to try to send a message. |
producer.retry.delay | Duration | 1s | Time to wait before retrying. The time is multiplied by the number of retries to backoff between executions. |
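The retry delay rule can be sketched as follows; backoff is an illustrative helper based on the description above, not the library's code:

```java
import java.time.Duration;

public class Main {
    // Sketch of the documented rule: the base delay is multiplied by the
    // retry number to back off between executions (assumption drawn from
    // the producer.retry.delay description)
    static Duration backoff(Duration baseDelay, int retryNumber) {
        return baseDelay.multipliedBy(retryNumber);
    }

    public static void main(String[] args) {
        Duration base = Duration.ofSeconds(1); // producer.retry.delay default
        for (int retry = 1; retry <= 3; retry++) {
            System.out.println(backoff(base, retry).toSeconds()); // 1, then 2, then 3
        }
    }
}
```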
...
provider
(Optional, String) Repository provider to store and retrieve secrets. By default, the storage
provider will be used in a non-Kubernetes environment. In Kubernetes environments, k8s
will be the default.
Providers
Storage
Stores and retrieves secrets from the configured storage for PDP.
...
secretsService:
provider: storage
K8s
Stores and retrieves secrets from Kubernetes.
NOTE: Not available yet due to security concerns.
namespace
The Kubernetes namespace used to store secrets.
Configuration
secretsService:
provider: k8s
namespace: pdp
...