Class Processors

java.lang.Object
com.hazelcast.jet.core.processor.Processors

public final class Processors
extends Object
Static utility class with factory methods for Jet processors. These are meant to implement the internal vertices of the DAG; for other kinds of processors refer to the package-level documentation.

Many of the processors deal with an aggregating operation over stream items. Prior to aggregation, items may be grouped by an arbitrary key and/or by an event-timestamp-based window. There are two main aggregation setups: single-stage and two-stage.

Unless specified otherwise, all functions passed to member methods must be stateless.

Single-stage aggregation

This is the basic setup where all the aggregation steps happen in one vertex. The input must be properly partitioned and distributed. For non-aligned window aggregation (session-based, trigger-based, etc.) this is the only choice. In the case of aligned windows it is the best choice if the source is already partitioned by the grouping key, because the inbound edge will not have to be distributed. If the input stream needs repartitioning, this setup will incur heavier network traffic than the two-stage setup due to the need for a distributed-partitioned edge. On the other hand, it will use less memory because each member keeps track only of the keys belonging to its own partitions. This is the DAG outline for the case where upstream data is not localized by grouping key:
                 -----------------
                | upstream vertex |
                 -----------------
                         |
                         | partitioned-distributed
                         V
                    -----------
                   | aggregate |
                    -----------
 
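
A minimal sketch of this setup with the Core DAG API, counting occurrences of each distinct item. The vertex and IList names are illustrative, not part of the API:

    import static com.hazelcast.function.Functions.wholeItem;
    import static java.util.Collections.singletonList;

    import com.hazelcast.jet.Util;
    import com.hazelcast.jet.aggregate.AggregateOperations;
    import com.hazelcast.jet.core.DAG;
    import com.hazelcast.jet.core.Edge;
    import com.hazelcast.jet.core.Vertex;
    import com.hazelcast.jet.core.processor.Processors;
    import com.hazelcast.jet.core.processor.SinkProcessors;
    import com.hazelcast.jet.core.processor.SourceProcessors;

    DAG dag = new DAG();
    // upstream vertex: reads items from the IList named "input"
    Vertex source = dag.newVertex("source", SourceProcessors.readListP("input"));
    // single-stage aggregation: groups by the item itself and counts each group
    Vertex aggregate = dag.newVertex("aggregate", Processors.aggregateByKeyP(
            singletonList(wholeItem()),     // one key fn per inbound edge
            AggregateOperations.counting(),
            Util::entry                     // mapToOutputFn: (key, count) -> Map.Entry
    ));
    Vertex sink = dag.newVertex("sink", SinkProcessors.writeListP("output"));

    // distributed-partitioned inbound edge: all items with the same key
    // meet in the same processor instance
    dag.edge(Edge.between(source, aggregate).distributed().partitioned(wholeItem()))
       .edge(Edge.between(aggregate, sink));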

Two-stage aggregation

In two-stage aggregation, the first stage applies just the accumulate aggregation primitive and the second stage does combine and finish. The essential property of this setup is that the edge leading to the first stage is local, incurring no network traffic, and only the edge from the first to the second stage is distributed. There is only one item per group traveling on the distributed edge. Compared to the single-stage setup this can dramatically reduce network traffic, but it needs more memory to keep track of all keys on each cluster member. This is the outline of the DAG:
                -----------------
               | upstream vertex |
                -----------------
                        |
                        | partitioned-local
                        V
                  ------------
                 | accumulate |
                  ------------
                        |
                        | partitioned-distributed
                        V
                 ----------------
                | combine/finish |
                 ----------------
 
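
Continuing the sketch above (same imports and source vertex), the two-stage variant splits the work as the diagram shows:

    import static com.hazelcast.function.Functions.entryKey;

    // stage 1: local accumulation, emits one Map.Entry<K, A> per distinct key
    Vertex accumulate = dag.newVertex("accumulate", Processors.accumulateByKeyP(
            singletonList(wholeItem()), AggregateOperations.counting()));
    // stage 2: combines the partial accumulators and applies finish
    Vertex combine = dag.newVertex("combine", Processors.combineByKeyP(
            AggregateOperations.counting(), Util::entry));

    dag.edge(Edge.between(source, accumulate).partitioned(wholeItem()))  // local
       .edge(Edge.between(accumulate, combine)
                 .distributed().partitioned(entryKey()));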
The variants without a grouping key are equivalent to grouping by a single, global key. In that case the edge towards the final-stage vertex must be all-to-one and the local parallelism of the vertex must be one. Unless the volume of the aggregated data is small (e.g., some side branch off the main flow in the DAG), the best choice is this two-stage setup:
                -----------------
               | upstream vertex |
                -----------------
                        |
                        | local, non-partitioned
                        V
                  ------------
                 | accumulate |
                  ------------
                        |
                        | distributed, all-to-one
                        V
                 ----------------
                | combine/finish | localParallelism = 1
                 ----------------
 
This will parallelize and distribute most of the processing, and the second-stage processor will receive just a single item from each upstream processor, doing very little work.
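
A sketch of this global-aggregation variant, again continuing the earlier example. The argument to allToOne() can be any constant; it only determines which partition (and thus which member) receives all the items:

    Vertex accumulate = dag.newVertex("accumulate",
            Processors.accumulateP(AggregateOperations.counting()));
    Vertex combine = dag.newVertex("combine",
            Processors.combineP(AggregateOperations.counting()))
            .localParallelism(1);

    dag.edge(Edge.between(source, accumulate))            // local, non-partitioned
       .edge(Edge.between(accumulate, combine)
                 .distributed()
                 .allToOne("any-constant-key"));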

Overview of factory methods for aggregate operations

                              single-stage                 stage 1/2             stage 2/2
    batch, no grouping        aggregateP()                 accumulateP()         combineP()
    batch, group by key       aggregateByKeyP()            accumulateByKeyP()    combineByKeyP()
    batch, co-group by key    aggregateByKeyP()            accumulateByKeyP()    combineByKeyP()
    stream, group by key      aggregateToSlidingWindowP()  accumulateByFrameP()  combineToSlidingWindowP()
      and aligned window
    stream, co-group by key   aggregateToSlidingWindowP()  accumulateByFrameP()  combineToSlidingWindowP()
      and aligned window
    stream, group by key      aggregateToSessionWindowP()  N/A                   N/A
      and session window

Tumbling window is a special case of sliding window with sliding step = window size.
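
For example, a 1-minute tumbling window can be expressed either directly or as a sliding window whose step equals its size:

    import com.hazelcast.jet.core.SlidingWindowPolicy;

    SlidingWindowPolicy tumbling = SlidingWindowPolicy.tumblingWinPolicy(60_000);
    // equivalent: window size == sliding step
    SlidingWindowPolicy sliding = SlidingWindowPolicy.slidingWinPolicy(60_000, 60_000);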

Since:
3.0
  • Method Details

    • aggregateP

      @Nonnull public static <A,​ R> SupplierEx<Processor> aggregateP​(@Nonnull AggregateOperation<A,​R> aggrOp)
      Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives. After exhausting all its input it emits a single item of type R, the result of the aggregate operation.

      Since the input to this vertex must be bounded, its primary use case is batch jobs.

      This processor has state, but does not save it to snapshot. On job restart, the state will be lost.

      Type Parameters:
      A - type of accumulator returned from aggrOp.createAccumulatorFn()
      R - type of the finished result returned from aggrOp.finishAccumulationFn()
      Parameters:
      aggrOp - the aggregate operation to perform
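
      A minimal usage sketch, counting all items a bounded source emits (dag and vertex names as in the class-level examples):

        // emits a single Long: the total number of items received
        Vertex count = dag.newVertex("count",
                Processors.aggregateP(AggregateOperations.counting()));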
    • accumulateP

      @Nonnull public static <A,​ R> SupplierEx<Processor> accumulateP​(@Nonnull AggregateOperation<A,​R> aggrOp)
      Returns a supplier of processors for the first-stage vertex in a two-stage aggregation setup. The vertex performs the accumulate step of the provided aggregate operation on all the items it receives. After exhausting all its input it emits a single item of type A, the accumulator holding the partial result.

      Since the input to this vertex must be bounded, its primary use case is batch jobs.

      This processor has state, but does not save it to snapshot. On job restart, the state will be lost.

      Type Parameters:
      A - type of accumulator returned from aggrOp.createAccumulatorFn()
      R - type of the finished result returned from aggrOp.finishAccumulationFn()
      Parameters:
      aggrOp - the aggregate operation to perform
    • combineP

      @Nonnull public static <A,​ R> SupplierEx<Processor> combineP​(@Nonnull AggregateOperation<A,​R> aggrOp)
      Returns a supplier of processors for the second-stage vertex in a two-stage aggregation setup. The vertex applies the combine primitive to the accumulators received from the first-stage (accumulateP) vertex and, after exhausting all its input, applies the finish primitive and emits a single item of type R, the result of the aggregate operation.

      Since the input to this vertex must be bounded, its primary use case is batch jobs.

      This processor has state, but does not save it to snapshot. On job restart, the state will be lost.

      Type Parameters:
      A - type of accumulator returned from aggrOp.createAccumulatorFn()
      R - type of the finished result returned from aggrOp.finishAccumulationFn()
      Parameters:
      aggrOp - the aggregate operation to perform
    • aggregateByKeyP

      @Nonnull public static <K,​ A,​ R,​ OUT> SupplierEx<Processor> aggregateByKeyP​(@Nonnull List<FunctionEx<?,​? extends K>> keyFns, @Nonnull AggregateOperation<A,​R> aggrOp, @Nonnull BiFunctionEx<? super K,​? super R,​OUT> mapToOutputFn)
      Returns a supplier of processors for a vertex that groups items by key and performs the provided aggregate operation on each group. After exhausting all its input it emits one item per distinct key. It computes the item to emit by passing each (key, result) pair to mapToOutputFn.

      The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.

      This processor has state, but does not save it to snapshot. On job restart, the state will be lost.

      Type Parameters:
      K - type of key
      A - type of accumulator returned from aggrOp.createAccumulatorFn()
      R - type of the result returned from aggrOp.finishAccumulationFn()
      OUT - type of the item to emit
      Parameters:
      keyFns - functions that compute the grouping key
      aggrOp - the aggregate operation
      mapToOutputFn - function that takes the key and the aggregation result and returns the output item
    • accumulateByKeyP

      @Nonnull public static <K,​ A> SupplierEx<Processor> accumulateByKeyP​(@Nonnull List<FunctionEx<?,​? extends K>> getKeyFns, @Nonnull AggregateOperation<A,​?> aggrOp)
      Returns a supplier of processors for the first-stage vertex in a two-stage group-and-aggregate setup. The vertex groups items by the grouping key and applies the accumulate primitive to each group. After exhausting all its input it emits one Map.Entry<K, A> per distinct key.

      The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.

      This processor has state, but does not save it to snapshot. On job restart, the state will be lost.

      Type Parameters:
      K - type of key
      A - type of accumulator returned from aggrOp.createAccumulatorFn()
      Parameters:
      getKeyFns - functions that compute the grouping key
      aggrOp - the aggregate operation to perform
    • combineByKeyP

      @Nonnull public static <K,​ A,​ R,​ OUT> SupplierEx<Processor> combineByKeyP​(@Nonnull AggregateOperation<A,​R> aggrOp, @Nonnull BiFunctionEx<? super K,​? super R,​OUT> mapToOutputFn)
      Returns a supplier of processors for the second-stage vertex in a two-stage group-and-aggregate setup. Each processor applies the combine aggregation primitive to the entries received from several upstream instances of accumulateByKeyP(). After exhausting all its input it emits one item per distinct key. It computes the item to emit by passing each (key, result) pair to mapToOutputFn.

      Since the input to this vertex must be bounded, its primary use case is batch jobs.

      This processor has state, but does not save it to snapshot. On job restart, the state will be lost.

      Type Parameters:
      A - type of accumulator returned from aggrOp.createAccumulatorFn()
      R - type of the finished result returned from aggrOp.finishAccumulationFn()
      OUT - type of the item to emit
      Parameters:
      aggrOp - the aggregate operation to perform
      mapToOutputFn - function that takes the key and the aggregation result and returns the output item
    • aggregateToSlidingWindowP

      @Nonnull public static <K,​ A,​ R,​ OUT> SupplierEx<Processor> aggregateToSlidingWindowP​(@Nonnull List<FunctionEx<?,​? extends K>> keyFns, @Nonnull List<ToLongFunctionEx<?>> timestampFns, @Nonnull TimestampKind timestampKind, @Nonnull SlidingWindowPolicy winPolicy, long earlyResultsPeriod, @Nonnull AggregateOperation<A,​? extends R> aggrOp, @Nonnull KeyedWindowResultFunction<? super K,​? super R,​? extends OUT> mapToOutputFn)
      Returns a supplier of processors for a vertex that aggregates events into a sliding window in a single stage (see the class Javadoc for an explanation of aggregation stages). The vertex groups items by the grouping key (as obtained from the given key-extracting function) and by frame, which is a range of timestamps equal to the sliding step. It emits sliding window results labeled with the timestamp denoting the window's end time (the exclusive upper bound of the timestamps belonging to the window).

      The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.

      When the vertex receives a watermark with a given wmVal, it emits the result of aggregation for all the positions of the sliding window with windowTimestamp <= wmVal. It computes the window result by combining the partial results of the frames belonging to it and finally applying the finish aggregation primitive. After this it deletes from storage all the frames that trail behind the emitted windows. In the output there is one item per key per window position.

      Behavior on job restart
      This processor saves its state to snapshot. After restart, it can continue accumulating where it left off.

      After a restart in at-least-once mode, watermarks are allowed to go back in time. If such a watermark is received, some windows that were emitted in the previous execution will be re-emitted. These windows might miss events, as some of them had already been evicted before the snapshot was taken in the previous execution.
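
      A usage sketch, assuming a hypothetical Trade event type and assuming KeyedWindowResult's five-argument constructor matches the shape of KeyedWindowResultFunction:

        import static java.util.Collections.singletonList;

        import com.hazelcast.function.FunctionEx;
        import com.hazelcast.function.ToLongFunctionEx;
        import com.hazelcast.jet.aggregate.AggregateOperations;
        import com.hazelcast.jet.core.SlidingWindowPolicy;
        import com.hazelcast.jet.core.TimestampKind;
        import com.hazelcast.jet.core.Vertex;
        import com.hazelcast.jet.core.processor.Processors;
        import com.hazelcast.jet.datamodel.KeyedWindowResult;

        record Trade(String ticker, long time) { }  // hypothetical event type

        FunctionEx<Trade, String> keyFn = Trade::ticker;
        ToLongFunctionEx<Trade> timestampFn = Trade::time;
        // 1-minute window, sliding in 10-second steps
        SlidingWindowPolicy winPolicy = SlidingWindowPolicy.slidingWinPolicy(60_000, 10_000);

        Vertex aggregate = dag.newVertex("aggregate", Processors.aggregateToSlidingWindowP(
                singletonList(keyFn),
                singletonList(timestampFn),
                TimestampKind.EVENT,
                winPolicy,
                0L,                             // earlyResultsPeriod: 0 disables early results
                AggregateOperations.counting(), // trades per ticker per window
                KeyedWindowResult::new));       // mapToOutputFn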

    • accumulateByFrameP

      @Nonnull public static <K,​ A> SupplierEx<Processor> accumulateByFrameP​(@Nonnull List<FunctionEx<?,​? extends K>> keyFns, @Nonnull List<ToLongFunctionEx<?>> timestampFns, @Nonnull TimestampKind timestampKind, @Nonnull SlidingWindowPolicy winPolicy, @Nonnull AggregateOperation<A,​?> aggrOp)
      Returns a supplier of processors for the first-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). The vertex groups items by the grouping key (as obtained from the given key-extracting function) and by frame, which is a range of timestamps equal to the sliding step. It applies the accumulate aggregation primitive to each key-frame group.

      The frame is identified by the timestamp denoting its end time (equal to the exclusive upper bound of its timestamp range). SlidingWindowPolicy.higherFrameTs(long) maps the event timestamp to the timestamp of the frame it belongs to.

      The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.

      When the processor receives a watermark with a given wmVal, it emits the current accumulated state of all frames with timestamp <= wmVal and deletes these frames from its storage. In the output there is one item per key per frame.

      When a state snapshot is requested, the state is flushed to the second-stage processor and nothing is saved to the snapshot.

      Type Parameters:
      K - type of the grouping key
      A - type of accumulator returned from aggrOp.createAccumulatorFn()
    • combineToSlidingWindowP

      @Nonnull public static <K,​ A,​ R,​ OUT> SupplierEx<Processor> combineToSlidingWindowP​(@Nonnull SlidingWindowPolicy winPolicy, @Nonnull AggregateOperation<A,​? extends R> aggrOp, @Nonnull KeyedWindowResultFunction<? super K,​? super R,​? extends OUT> mapToOutputFn)
      Returns a supplier of processors for the second-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). Each processor applies the combine aggregation primitive to the frames received from several upstream instances of accumulateByFrameP().

      When the processor receives a watermark with a given wmVal, it emits the result of aggregation for all positions of the sliding window with windowTimestamp <= wmVal. It computes the window result by combining the partial results of the frames belonging to it and finally applying the finish aggregation primitive. After this it deletes from storage all the frames that trail behind the emitted windows. To compute the item to emit, it calls mapToOutputFn with the window's start and end timestamps, the key and the aggregation result. The window end time is the exclusive upper bound of the timestamps belonging to the window.

      Behavior on job restart
      This processor saves its state to snapshot. After restart, it can continue accumulating where it left off.

      After a restart in at-least-once mode, watermarks are allowed to go back in time. If such a watermark is received, some windows that were emitted in the previous execution will be re-emitted. These windows might miss events, as some of them had already been evicted before the snapshot was taken in the previous execution.

      Type Parameters:
      A - type of the accumulator
      R - type of the finished result returned from aggrOp.finishAccumulationFn()
      OUT - type of the item to emit
    • aggregateToSessionWindowP

      @Nonnull public static <K,​ A,​ R,​ OUT> SupplierEx<Processor> aggregateToSessionWindowP​(long sessionTimeout, long earlyResultsPeriod, @Nonnull List<ToLongFunctionEx<?>> timestampFns, @Nonnull List<FunctionEx<?,​? extends K>> keyFns, @Nonnull AggregateOperation<A,​? extends R> aggrOp, @Nonnull KeyedWindowResultFunction<? super K,​? super R,​? extends OUT> mapToOutputFn)
      Returns a supplier of processors for a vertex that aggregates events into session windows. Events and windows under different grouping keys are treated independently. Outputs objects of type WindowResult.

      The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.

      The functioning of this vertex is easiest to explain in terms of the event interval: the range [timestamp, timestamp + sessionTimeout). Initially an event causes a new session window to be created, covering exactly the event interval. A following event under the same key belongs to this window iff its interval overlaps it. The window is extended to cover the entire interval of the new event. The event may happen to belong to two existing windows if its interval bridges the gap between them; in that case they are combined into one.
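
      For example, with sessionTimeout = 10, an event at timestamp 5 covers the interval [5, 15) and opens a session window spanning exactly [5, 15). A second event at 12 covers [12, 22), which overlaps the window, so the window extends to [5, 22). An event at 40 doesn't overlap it and opens a new session.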

      Behavior on job restart
      This processor saves its state to snapshot. After restart, it can continue accumulating where it left off.

      After a restart in at-least-once mode, watermarks are allowed to go back in time. The processor evicts state based on the watermarks it has received. If it receives a duplicate watermark, it might emit sessions with missing events, because those events were already evicted. The sessions before and after the snapshot might overlap, which they normally don't.

      Type Parameters:
      K - type of the item's grouping key
      A - type of the container of the accumulated value
      R - type of the session window's result value
      Parameters:
      sessionTimeout - maximum gap between consecutive events in the same session window
      earlyResultsPeriod - period in milliseconds at which to emit preliminary results of windows that are still accumulating; 0 disables early results
      timestampFns - functions to extract the timestamp from the item
      keyFns - functions to extract the grouping key from the item
      aggrOp - the aggregate operation
      mapToOutputFn - function that takes the window's start and end timestamps, the key and the aggregation result, and returns the output item
    • insertWatermarksP

      @Nonnull public static <T> SupplierEx<Processor> insertWatermarksP​(@Nonnull EventTimePolicy<? super T> eventTimePolicy)
      Returns a supplier of processors for a vertex that inserts watermark items into the stream. The value of the watermark is determined by the supplied EventTimePolicy instance.

      This processor also drops late items: it never lets an event that is late with regard to an already emitted watermark pass.

      The processor saves the value of the last emitted watermark to the snapshot. Different instances of this processor can be at different watermarks at snapshot time, so after a restart all instances will start from the watermark of the most-behind instance before the restart.

      This might sound as if it could break the monotonicity requirement, but thanks to watermark coalescing, watermarks are delivered for downstream processing only after they have been received from all upstream processors. A side effect of this is that a late event which was dropped before the restart is not considered late after the restart.

      Type Parameters:
      T - the type of the stream item
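
      A sketch of building the policy, reusing the hypothetical Trade type from the sliding-window example and assuming the five-argument eventTimePolicy() factory. limitingLag(2000) keeps the watermark 2000 ms behind the top observed timestamp; the throttling settings emit at most one watermark per 1000 ms of event time:

        import com.hazelcast.jet.core.EventTimePolicy;
        import com.hazelcast.jet.core.Vertex;
        import com.hazelcast.jet.core.WatermarkPolicy;
        import com.hazelcast.jet.core.processor.Processors;

        EventTimePolicy<Trade> etp = EventTimePolicy.eventTimePolicy(
                Trade::time,                        // timestampFn
                WatermarkPolicy.limitingLag(2_000), // newWmPolicyFn
                1_000, 0,                           // throttling frame size and offset
                60_000                              // idle timeout, ms
        );
        Vertex insertWm = dag.newVertex("insert-wm", Processors.insertWatermarksP(etp));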
    • mapP

      @Nonnull public static <T,​ R> SupplierEx<Processor> mapP​(@Nonnull FunctionEx<? super T,​? extends R> mapFn)
      Returns a supplier of processors for a vertex which, for each received item, emits the result of applying the given mapping function to it. If the result is null, it emits nothing. Therefore this vertex can be used to implement filtering semantics as well.

      This processor is stateless.

      Type Parameters:
      T - type of received item
      R - type of emitted item
      Parameters:
      mapFn - a stateless mapping function
    • filterP

      @Nonnull public static <T> SupplierEx<Processor> filterP​(@Nonnull PredicateEx<? super T> filterFn)
      Returns a supplier of processors for a vertex that emits the same items it receives, but only those that pass the given predicate.

      This processor is stateless.

      Type Parameters:
      T - type of received item
      Parameters:
      filterFn - a stateless predicate to test each received item against
    • flatMapP

      @Nonnull public static <T,​ R> SupplierEx<Processor> flatMapP​(@Nonnull FunctionEx<? super T,​? extends Traverser<? extends R>> flatMapFn)
      Returns a supplier of processors for a vertex that applies the provided item-to-traverser mapping function to each received item and emits all the items from the resulting traverser. The traverser must be null-terminated.

      This processor is stateless.

      Type Parameters:
      T - received item type
      R - emitted item type
      Parameters:
      flatMapFn - a stateless function that maps the received item to a traverser over output items. It must not return a null traverser, but it can return an empty one.
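
      Minimal sketches of the three stateless vertices (vertex names illustrative):

        import com.hazelcast.jet.Traversers;
        import com.hazelcast.jet.core.Vertex;
        import com.hazelcast.jet.core.processor.Processors;

        Vertex toUpper = dag.newVertex("to-upper",
                Processors.mapP((String s) -> s.toUpperCase()));
        Vertex nonEmpty = dag.newVertex("non-empty",
                Processors.filterP((String s) -> !s.isEmpty()));
        // one input line becomes many words; traverseArray is null-terminated
        Vertex tokenize = dag.newVertex("tokenize", Processors.flatMapP(
                (String line) -> Traversers.traverseArray(line.split("\\W+"))));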
    • mapStatefulP

      @Nonnull public static <T,​ K,​ S,​ R> SupplierEx<Processor> mapStatefulP​(long ttl, @Nonnull FunctionEx<? super T,​? extends K> keyFn, @Nonnull ToLongFunctionEx<? super T> timestampFn, @Nonnull Supplier<? extends S> createFn, @Nonnull TriFunction<? super S,​? super K,​? super T,​? extends R> statefulMapFn, @Nullable TriFunction<? super S,​? super K,​? super Long,​? extends R> onEvictFn)
      Returns a supplier of processors for a vertex that performs a stateful mapping of its input. createFn returns the object that holds the state. The processor passes this object along with each input item to statefulMapFn, which can update the object's state. For each grouping key there's a separate state object. The state object will be included in the state snapshot, so it survives job restarts. For this reason the object must be serializable. If the mapping function maps an item to null, it has the effect of filtering out that item.

      If the given ttl is greater than zero, the processor will consider the state object stale if its time-to-live has expired. The time-to-live refers to the event time as kept by the watermark: each time it processes an event, the processor compares the state object's timestamp with the current watermark. If it is less than wm - ttl, it discards the state object. Otherwise it updates the timestamp with the current watermark.

      Type Parameters:
      T - type of the input item
      K - type of the key
      S - type of the state object
      R - type of the mapping function's result
      Parameters:
      ttl - state object's time to live
      keyFn - function to extract the key from an input item
      timestampFn - function to extract the timestamp from an input item
      createFn - supplier of the state object
      statefulMapFn - the stateful mapping function
      onEvictFn - optional function the processor calls when it evicts a state object, with the state object, the key and the current watermark; a non-null result is emitted
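
      A sketch of a per-key running count, using LongAccumulator as the state object (Event is a hypothetical input type):

        import com.hazelcast.function.FunctionEx;
        import com.hazelcast.function.ToLongFunctionEx;
        import com.hazelcast.jet.Util;
        import com.hazelcast.jet.accumulator.LongAccumulator;
        import com.hazelcast.jet.core.Vertex;
        import com.hazelcast.jet.core.processor.Processors;

        record Event(String key, long timestamp) { }  // hypothetical input type

        FunctionEx<Event, String> keyFn = Event::key;
        ToLongFunctionEx<Event> timestampFn = Event::timestamp;

        // emits (key, runningCount) for every item; ttl 0 means state never expires
        Vertex runningCount = dag.newVertex("running-count", Processors.mapStatefulP(
                0, keyFn, timestampFn,
                LongAccumulator::new,          // createFn: fresh state object per key
                (acc, key, event) -> {
                    acc.add(1);
                    return Util.entry(key, acc.get());
                },
                null));                        // onEvictFn: emit nothing on eviction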
    • flatMapStatefulP

      @Nonnull public static <T,​ K,​ S,​ R> SupplierEx<Processor> flatMapStatefulP​(long ttl, @Nonnull FunctionEx<? super T,​? extends K> keyFn, @Nonnull ToLongFunctionEx<? super T> timestampFn, @Nonnull Supplier<? extends S> createFn, @Nonnull TriFunction<? super S,​? super K,​? super T,​? extends Traverser<R>> statefulFlatMapFn, @Nullable TriFunction<? super S,​? super K,​? super Long,​? extends Traverser<R>> onEvictFn)
      Returns a supplier of processors for a vertex that performs a stateful flat-mapping of its input. createFn returns the object that holds the state. The processor passes this object along with each input item to statefulFlatMapFn, which can update the object's state. For each grouping key there's a separate state object. The state object will be included in the state snapshot, so it survives job restarts. For this reason the object must be serializable.

      If the given ttl is greater than zero, the processor will consider the state object stale if its time-to-live has expired. The time-to-live refers to the event time as kept by the watermark: each time it processes an event, the processor compares the state object's timestamp with the current watermark. If it is less than wm - ttl, it discards the state object. Otherwise it updates the timestamp with the current watermark.

      Type Parameters:
      T - type of the input item
      K - type of the key
      S - type of the state object
      R - type of the mapping function's result
      Parameters:
      ttl - state object's time to live
      keyFn - function to extract the key from an input item
      timestampFn - function to extract the timestamp from an input item
      createFn - supplier of the state object
      statefulFlatMapFn - the stateful flat-mapping function
      onEvictFn - optional function the processor calls when it evicts a state object, with the state object, the key and the current watermark; the items of its result traverser are emitted
    • mapUsingServiceP

      @Nonnull public static <C,​ S,​ T,​ R> ProcessorSupplier mapUsingServiceP​(@Nonnull ServiceFactory<C,​S> serviceFactory, @Nonnull BiFunctionEx<? super S,​? super T,​? extends R> mapFn)
      Returns a supplier of processors for a vertex which, for each received item, emits the result of applying the given mapping function to it. The mapping function receives another parameter, the service object which Jet will create using the supplied serviceFactory.

      If the mapping result is null, the vertex emits nothing. Therefore it can be used to implement filtering semantics as well.

      Unlike mapStatefulP(), which keeps a separate state object per grouping key, this method creates one service object per processor.

      While it's allowed to store some local state in the service object, it won't be saved to the snapshot and will misbehave in a fault-tolerant stream processing job.

      Type Parameters:
      C - type of context object
      S - type of service object
      T - type of received item
      R - type of emitted item
      Parameters:
      serviceFactory - the service factory
      mapFn - a stateless mapping function
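
      A sketch that enriches items using a lookup table created once per processor. loadCodeTable() is a hypothetical helper; ServiceFactories also offers a shared (per-member) variant:

        import java.util.Map;

        import com.hazelcast.jet.core.Vertex;
        import com.hazelcast.jet.core.processor.Processors;
        import com.hazelcast.jet.pipeline.ServiceFactories;
        import com.hazelcast.jet.pipeline.ServiceFactory;

        ServiceFactory<?, Map<String, String>> lookup =
                ServiceFactories.nonSharedService(ctx -> loadCodeTable());

        // maps each code to its description; a null lookup result drops the item
        Vertex enrich = dag.newVertex("enrich", Processors.mapUsingServiceP(
                lookup,
                (Map<String, String> table, String code) -> table.get(code)));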
    • mapUsingServiceAsyncP

      @Nonnull public static <C,​ S,​ T,​ K,​ R> ProcessorSupplier mapUsingServiceAsyncP​(@Nonnull ServiceFactory<C,​S> serviceFactory, int maxConcurrentOps, boolean preserveOrder, @Nonnull FunctionEx<T,​K> extractKeyFn, @Nonnull BiFunctionEx<? super S,​? super T,​CompletableFuture<R>> mapAsyncFn)
      Asynchronous version of mapUsingServiceP(): the mapAsyncFn returns a CompletableFuture<R> instead of just R.

      The function can return a null future and the future can return a null result: in both cases it will act just like a filter.

      The extractKeyFn is used to extract keys under which to save in-flight items to the snapshot. If the input to this processor is over a partitioned edge, you should use the same key. If it's a round-robin edge, you can use any key, for example Object::hashCode.

      Type Parameters:
      C - type of context object
      S - type of service object
      T - type of received item
      K - type of key
      R - type of result item
      Parameters:
      serviceFactory - the service factory
      maxConcurrentOps - maximum number of concurrent async operations per processor
      preserveOrder - whether to emit the results in the order the input items were received
      extractKeyFn - a function to extract snapshot keys
      mapAsyncFn - a stateless mapping function
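
      A sketch with a hypothetical non-blocking client; each processor keeps at most four calls in flight and emits results in input order:

        import java.util.concurrent.CompletableFuture;

        import com.hazelcast.jet.core.ProcessorSupplier;
        import com.hazelcast.jet.core.processor.Processors;
        import com.hazelcast.jet.pipeline.ServiceFactories;
        import com.hazelcast.jet.pipeline.ServiceFactory;

        class AsyncClient {  // hypothetical stand-in for a real async client
            CompletableFuture<String> lookupAsync(String code) {
                return CompletableFuture.completedFuture(code.toLowerCase());
            }
        }

        ServiceFactory<?, AsyncClient> clientFactory =
                ServiceFactories.nonSharedService(ctx -> new AsyncClient());

        ProcessorSupplier lookup = Processors.mapUsingServiceAsyncP(
                clientFactory,
                4,                     // maxConcurrentOps
                true,                  // preserveOrder
                (String code) -> code, // extractKeyFn: key for in-flight snapshot entries
                (client, code) -> client.lookupAsync(code));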
    • filterUsingServiceP

      @Nonnull public static <C,​ S,​ T> ProcessorSupplier filterUsingServiceP​(@Nonnull ServiceFactory<C,​S> serviceFactory, @Nonnull BiPredicateEx<? super S,​? super T> filterFn)
      Returns a supplier of processors for a vertex that emits the same items it receives, but only those that pass the given predicate. The predicate function receives another parameter, the service object which Jet will create using the supplied serviceFactory.

      While it's allowed to store some local state in the service object, it won't be saved to the snapshot and will misbehave in a fault-tolerant stream processing job.

      Type Parameters:
      C - type of context object
      S - type of service object
      T - type of received item
      Parameters:
      serviceFactory - the service factory
      filterFn - a stateless predicate to test each received item against
    • flatMapUsingServiceP

      @Nonnull public static <C,​ S,​ T,​ R> ProcessorSupplier flatMapUsingServiceP​(@Nonnull ServiceFactory<C,​S> serviceFactory, @Nonnull BiFunctionEx<? super S,​? super T,​? extends Traverser<R>> flatMapFn)
      Returns a supplier of processors for a vertex that applies the provided item-to-traverser mapping function to each received item and emits all the items from the resulting traverser. The traverser must be null-terminated. The mapping function receives another parameter, the service object which Jet will create using the supplied serviceFactory.

      While it's allowed to store some local state in the service object, it won't be saved to the snapshot and will misbehave in a fault-tolerant stream processing job.

      Type Parameters:
      C - type of context object
      S - type of service object
      T - type of input item
      R - type of result item
      Parameters:
      serviceFactory - the service factory
      flatMapFn - a stateless function that maps the received item to a traverser over the output items
    • noopP

      @Nonnull public static SupplierEx<Processor> noopP()
      Returns a supplier of a processor that swallows all its normal input (if any), does nothing with it, forwards the watermarks, produces no output and completes immediately. It also swallows any restored snapshot data.