T - the type of items coming out of this stage

public interface StreamStage<T> extends GeneralStage<T>

A stage in a distributed computation pipeline that will observe an unbounded amount of data (i.e., an event stream). It accepts input from its upstream stages (if any) and passes its output to its downstream stages.

Modifier and Type | Method and Description |
---|---|
<R> StreamStage<R> | customTransform(String stageName, ProcessorMetaSupplier procSupplier) - Attaches a stage with a custom transform based on the provided supplier of Core API Processors. |
default <R> StreamStage<R> | customTransform(String stageName, ProcessorSupplier procSupplier) - Attaches a stage with a custom transform based on the provided supplier of Core API Processors. |
default <R> StreamStage<R> | customTransform(String stageName, SupplierEx<Processor> procSupplier) - Attaches a stage with a custom transform based on the provided supplier of Core API Processors. |
StreamStage<T> | filter(PredicateEx<T> filterFn) - Attaches a filtering stage which applies the provided predicate function to each input item to decide whether to pass the item to the output or to discard it. |
<C> StreamStage<T> | filterUsingContext(ContextFactory<C> contextFactory, BiPredicateEx<? super C,? super T> filterFn) - Attaches a filtering stage which applies the provided predicate function to each input item to decide whether to pass the item to the output or to discard it. |
<C> StreamStage<T> | filterUsingContextAsync(ContextFactory<C> contextFactory, BiFunctionEx<? super C,? super T,? extends CompletableFuture<Boolean>> filterAsyncFn) - Asynchronous version of filterUsingContext: the filterAsyncFn returns a CompletableFuture<Boolean> instead of just a boolean. |
<R> StreamStage<R> | flatMap(FunctionEx<? super T,? extends Traverser<? extends R>> flatMapFn) - Attaches a flat-mapping stage which applies the supplied function to each input item independently and emits all the items from the Traverser it returns. |
<C,R> StreamStage<R> | flatMapUsingContext(ContextFactory<C> contextFactory, BiFunctionEx<? super C,? super T,? extends Traverser<R>> flatMapFn) - Attaches a flat-mapping stage which applies the supplied function to each input item independently and emits all items from the Traverser it returns as the output items. |
<C,R> StreamStage<R> | flatMapUsingContextAsync(ContextFactory<C> contextFactory, BiFunctionEx<? super C,? super T,? extends CompletableFuture<Traverser<R>>> flatMapAsyncFn) - Asynchronous version of flatMapUsingContext: the flatMapAsyncFn returns a CompletableFuture<Traverser<R>> instead of just Traverser<R>. |
<K> StreamStageWithKey<T,K> | groupingKey(FunctionEx<? super T,? extends K> keyFn) - Specifies the function that will extract a key from the items in the associated pipeline stage. |
<K,T1_IN,T1,R> StreamStage<R> | hashJoin(BatchStage<T1_IN> stage1, JoinClause<K,? super T,? super T1_IN,? extends T1> joinClause1, BiFunctionEx<T,T1,R> mapToOutputFn) - Attaches to both this and the supplied stage a hash-joining stage and returns it. |
<K1,K2,T1_IN,T2_IN,T1,T2,R> StreamStage<R> | hashJoin2(BatchStage<T1_IN> stage1, JoinClause<K1,? super T,? super T1_IN,? extends T1> joinClause1, BatchStage<T2_IN> stage2, JoinClause<K2,? super T,? super T2_IN,? extends T2> joinClause2, TriFunction<T,T1,T2,R> mapToOutputFn) - Attaches to this and the two supplied stages a hash-joining stage and returns it. |
default StreamHashJoinBuilder<T> | hashJoinBuilder() - Returns a fluent API builder object to construct a hash join operation with any number of contributing stages. |
<R> StreamStage<R> | map(FunctionEx<? super T,? extends R> mapFn) - Attaches a mapping stage which applies the given function to each input item independently and emits the function's result as the output item. |
<C,R> StreamStage<R> | mapUsingContext(ContextFactory<C> contextFactory, BiFunctionEx<? super C,? super T,? extends R> mapFn) - Attaches a mapping stage which applies the supplied function to each input item independently and emits the function's result as the output item. |
<C,R> StreamStage<R> | mapUsingContextAsync(ContextFactory<C> contextFactory, BiFunctionEx<? super C,? super T,? extends CompletableFuture<R>> mapAsyncFn) - Asynchronous version of mapUsingContext: the mapAsyncFn returns a CompletableFuture<R> instead of just R. |
default <K,V,R> StreamStage<R> | mapUsingIMap(IMap<K,V> iMap, FunctionEx<? super T,? extends K> lookupKeyFn, BiFunctionEx<? super T,? super V,? extends R> mapFn) - Attaches a mapping stage where for each item a lookup in the supplied IMap is performed and the result of the lookup is merged with the item and emitted. |
default <K,V,R> StreamStage<R> | mapUsingIMap(String mapName, FunctionEx<? super T,? extends K> lookupKeyFn, BiFunctionEx<? super T,? super V,? extends R> mapFn) - Attaches a mapping stage where for each item a lookup in the IMap with the supplied name is performed and the result of the lookup is merged with the item and emitted. |
default <K,V,R> StreamStage<R> | mapUsingReplicatedMap(ReplicatedMap<K,V> replicatedMap, FunctionEx<? super T,? extends K> lookupKeyFn, BiFunctionEx<? super T,? super V,? extends R> mapFn) - Attaches a mapping stage where for each item a lookup in the supplied ReplicatedMap is performed and the result of the lookup is merged with the item and emitted. |
default <K,V,R> StreamStage<R> | mapUsingReplicatedMap(String mapName, FunctionEx<? super T,? extends K> lookupKeyFn, BiFunctionEx<? super T,? super V,? extends R> mapFn) - Attaches a mapping stage where for each item a lookup in the ReplicatedMap with the supplied name is performed and the result of the lookup is merged with the item and emitted. |
StreamStage<T> | merge(StreamStage<? extends T> other) - Attaches a stage that emits all the items from this stage as well as all the items from the supplied stage. |
default StreamStage<T> | peek() - Adds a peeking layer to this compute stage which logs its output. |
default StreamStage<T> | peek(FunctionEx<? super T,? extends CharSequence> toStringFn) - Adds a peeking layer to this compute stage which logs its output. |
StreamStage<T> | peek(PredicateEx<? super T> shouldLogFn, FunctionEx<? super T,? extends CharSequence> toStringFn) - Attaches a peeking stage which logs this stage's output and passes it through without transformation. |
<R> StreamStage<R> | rollingAggregate(AggregateOperation1<? super T,?,? extends R> aggrOp) - Attaches a rolling aggregation stage. |
StreamStage<T> | setLocalParallelism(int localParallelism) - Sets the preferred local parallelism (number of processors per Jet cluster member) this stage will configure its DAG vertices with. |
StreamStage<T> | setName(String name) - Overrides the default name of the stage with the name you choose and returns the stage. |
StageWithWindow<T> | window(WindowDefinition wDef) - Adds the given window definition to this stage, as the first step in the construction of a pipeline stage that performs windowed aggregation. |
Methods inherited from interface GeneralStage: addTimestamps, drainTo
Methods inherited from interface Stage: getPipeline, name
@Nonnull StageWithWindow<T> window(WindowDefinition wDef)

Adds the given window definition to this stage, as the first step in the construction of a pipeline stage that performs windowed aggregation. See the factory methods in WindowDefinition.
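The tumbling-window assignment that a factory method such as WindowDefinition.tumbling(size) describes can be sketched in plain Java. This is a standalone illustration of the windowing arithmetic only, not Jet code; the method name countPerWindow is hypothetical:

```java
import java.util.Map;
import java.util.TreeMap;

public class TumblingWindowDemo {
    // Assigns each event timestamp to the tumbling window that contains it:
    // the window covering ts starts at ts - ts % size and spans [start, start + size).
    static Map<Long, Integer> countPerWindow(long[] timestamps, long size) {
        Map<Long, Integer> counts = new TreeMap<>();
        for (long ts : timestamps) {
            long windowStart = ts - ts % size;
            counts.merge(windowStart, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Events at t=1,4 fall in [0,10); t=12,13 in [10,20); t=27 in [20,30)
        System.out.println(countPerWindow(new long[]{1, 4, 12, 13, 27}, 10));
        // {0=2, 10=2, 20=1}
    }
}
```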
@Nonnull StreamStage<T> merge(@Nonnull StreamStage<? extends T> other)

Attaches a stage that emits all the items from this stage as well as all the items from the supplied stage.
other - the other stage whose data to merge into this one

@Nonnull <K> StreamStageWithKey<T,K> groupingKey(@Nonnull FunctionEx<? super T,? extends K> keyFn)
GeneralStage
Sample usage:
users.groupingKey(User::getId)
Note: make sure the extracted key is non-null; a null key would fail the job. Also make sure that it implements equals() and hashCode().
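The equals()/hashCode() requirement exists because grouping distributes items by hashing the extracted key. A rough standalone analogy using a HashMap (not Jet code; the class and method names here are hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupingKeyDemo {
    // keyFn analogue: extract a grouping key from each item and count per key.
    // A HashMap, like Jet's hash-based partitioning, relies on the key type's
    // equals() and hashCode() being implemented consistently.
    static Map<Character, Integer> countByInitial(List<String> names) {
        Map<Character, Integer> counts = new HashMap<>();
        for (String name : names) {
            char key = name.charAt(0);       // the extracted key (must not be null in Jet)
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countByInitial(List.of("ann", "bob", "amy")));
    }
}
```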
groupingKey in interface GeneralStage<T>
K - type of the key
keyFn - function that extracts the grouping key

@Nonnull <R> StreamStage<R> map(@Nonnull FunctionEx<? super T,? extends R> mapFn)
GeneralStage
Attaches a mapping stage which applies the given function to each input item independently and emits the function's result as the output item. If the result is null, it emits nothing. Therefore this stage can be used to implement filtering semantics as well.
Sample usage:
stage.map(name -> name.toLowerCase())
map in interface GeneralStage<T>
R - the result type of the mapping function
mapFn - a stateless mapping function

@Nonnull StreamStage<T> filter(@Nonnull PredicateEx<T> filterFn)
GeneralStage
Sample usage:
stage.filter(name -> !name.isEmpty())
filter in interface GeneralStage<T>
filterFn - a stateless filter predicate function

@Nonnull <R> StreamStage<R> flatMap(@Nonnull FunctionEx<? super T,? extends Traverser<? extends R>> flatMapFn)
GeneralStage
Attaches a flat-mapping stage which applies the supplied function to each input item independently and emits all the items from the Traverser it returns. The traverser must be null-terminated.
Sample usage:
stage.flatMap(sentence -> traverseArray(sentence.split("\\W+")))
flatMap in interface GeneralStage<T>
R - the type of items in the result's traversers
flatMapFn - a stateless flatmapping function, whose result type is Jet's Traverser
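The null-terminated Traverser contract can be mimicked in standalone Java. MiniTraverser below is a hypothetical stand-in for com.hazelcast.jet.Traverser, and traverseArray mimics the Traversers.traverseArray helper; this sketches the semantics only, it is not Jet code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical mimic of Jet's null-terminated Traverser contract.
interface MiniTraverser<T> {
    T next();  // returns the next item, or null once exhausted
}

public class TraverserDemo {
    // Analogous to Traversers.traverseArray: emits each element, then null forever
    static <T> MiniTraverser<T> traverseArray(T[] items) {
        Iterator<T> it = Arrays.asList(items).iterator();
        return () -> it.hasNext() ? it.next() : null;
    }

    public static void main(String[] args) {
        // flatMap drains the traverser until it hits the null terminator
        MiniTraverser<String> t = traverseArray("the quick fox".split("\\W+"));
        List<String> words = new ArrayList<>();
        for (String s; (s = t.next()) != null; ) {
            words.add(s);
        }
        System.out.println(words); // [the, quick, fox]
    }
}
```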
@Nonnull <C,R> StreamStage<R> mapUsingContext(@Nonnull ContextFactory<C> contextFactory, @Nonnull BiFunctionEx<? super C,? super T,? extends R> mapFn)
GeneralStage
Attaches a mapping stage which applies the supplied function to each input item independently and emits the function's result as the output item. The mapping function receives another parameter, the context object, which Jet will create using the supplied contextFactory.
If the mapping result is null, it emits nothing. Therefore this stage can be used to implement filtering semantics as well.
Sample usage:
stage.mapUsingContext(
ContextFactory.withCreateFn(jet -> new ItemDetailRegistry(jet)),
(reg, item) -> item.setDetail(reg.fetchDetail(item))
)
mapUsingContext in interface GeneralStage<T>
C - type of context object
R - the result type of the mapping function
contextFactory - the context factory
mapFn - a stateless mapping function

@Nonnull <C,R> StreamStage<R> mapUsingContextAsync(@Nonnull ContextFactory<C> contextFactory, @Nonnull BiFunctionEx<? super C,? super T,? extends CompletableFuture<R>> mapAsyncFn)
GeneralStage
Asynchronous version of GeneralStage.mapUsingContext(com.hazelcast.jet.pipeline.ContextFactory<C>, com.hazelcast.jet.function.BiFunctionEx<? super C, ? super T, ? extends R>): the mapAsyncFn returns a CompletableFuture<R> instead of just R.
The function can return a null future or the future can return a null result: in both cases it will act just like a filter.
The latency of the async call will add to the total latency of the output.
Sample usage:
stage.mapUsingContextAsync(
    ContextFactory.withCreateFn(jet -> new ItemDetailRegistry(jet)),
    (reg, item) -> reg.fetchDetailAsync(item)
                      .thenApply(detail -> item.setDetail(detail))
)
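The null-future and null-result filtering behavior can be shown with plain CompletableFuture. This standalone sketch (class and method names are hypothetical, not Jet API) blocks with join() for simplicity, whereas Jet completes the futures without blocking its worker threads:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncMapDemo {
    // Hypothetical mapAsyncFn analogue: returns a future with the mapped value,
    // or a null future to drop the item (acts like a filter).
    static CompletableFuture<String> mapAsync(String item) {
        if (item.isEmpty()) {
            return null;
        }
        return CompletableFuture.supplyAsync(item::toUpperCase);
    }

    // Drives mapAsync over the input, mimicking the stage's filtering semantics.
    static List<String> apply(List<String> items) {
        List<String> out = new ArrayList<>();
        for (String item : items) {
            CompletableFuture<String> f = mapAsync(item);
            if (f == null) {
                continue;              // null future: item filtered out
            }
            String result = f.join(); // blocking here only for the demo
            if (result != null) {
                out.add(result);       // a null result would also filter the item
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(apply(List.of("a", "", "b"))); // [A, B]
    }
}
```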
mapUsingContextAsync in interface GeneralStage<T>
C - type of context object
R - the future's result type of the mapping function
contextFactory - the context factory
mapAsyncFn - a stateless mapping function. Can map to null (return a null future)

@Nonnull <C> StreamStage<T> filterUsingContext(@Nonnull ContextFactory<C> contextFactory, @Nonnull BiPredicateEx<? super C,? super T> filterFn)
GeneralStage
Attaches a filtering stage which applies the provided predicate function to each input item to decide whether to pass the item to the output or to discard it. The predicate function receives another parameter, the context object, which Jet will create using the supplied contextFactory.
Sample usage:
photos.filterUsingContext(
ContextFactory.withCreateFn(jet -> new ImageClassifier(jet)),
(classifier, photo) -> classifier.classify(photo).equals("cat")
)
filterUsingContext in interface GeneralStage<T>
C - type of context object
contextFactory - the context factory
filterFn - a stateless filter predicate function

@Nonnull <C> StreamStage<T> filterUsingContextAsync(@Nonnull ContextFactory<C> contextFactory, @Nonnull BiFunctionEx<? super C,? super T,? extends CompletableFuture<Boolean>> filterAsyncFn)
GeneralStage
Asynchronous version of GeneralStage.filterUsingContext(com.hazelcast.jet.pipeline.ContextFactory<C>, com.hazelcast.jet.function.BiPredicateEx<? super C, ? super T>): the filterAsyncFn returns a CompletableFuture<Boolean> instead of just a boolean.
The function must not return a null future.
The latency of the async call will add to the total latency of the output.
Sample usage:
photos.filterUsingContextAsync(
    ContextFactory.withCreateFn(jet -> new ImageClassifier(jet)),
    (classifier, photo) -> classifier.classifyAsync(photo)
                                     .thenApply(it -> it.equals("cat"))
)
filterUsingContextAsync in interface GeneralStage<T>
C - type of context object
contextFactory - the context factory
filterAsyncFn - a stateless filtering function

@Nonnull <C,R> StreamStage<R> flatMapUsingContext(@Nonnull ContextFactory<C> contextFactory, @Nonnull BiFunctionEx<? super C,? super T,? extends Traverser<R>> flatMapFn)
GeneralStage
Attaches a flat-mapping stage which applies the supplied function to each input item independently and emits all items from the Traverser it returns as the output items. The traverser must be null-terminated. The mapping function receives another parameter, the context object, which Jet will create using the supplied contextFactory.
Sample usage:
StreamStage<Part> parts = products.flatMapUsingContext(
ContextFactory.withCreateFn(jet -> new PartRegistryCtx()),
(registry, product) -> Traversers.traverseIterable(
registry.fetchParts(product))
);
flatMapUsingContext in interface GeneralStage<T>
C - type of context object
R - the type of items in the result's traversers
contextFactory - the context factory
flatMapFn - a stateless flatmapping function, whose result type is Jet's Traverser

@Nonnull <C,R> StreamStage<R> flatMapUsingContextAsync(@Nonnull ContextFactory<C> contextFactory, @Nonnull BiFunctionEx<? super C,? super T,? extends CompletableFuture<Traverser<R>>> flatMapAsyncFn)
GeneralStage
Asynchronous version of GeneralStage.flatMapUsingContext(com.hazelcast.jet.pipeline.ContextFactory<C>, com.hazelcast.jet.function.BiFunctionEx<? super C, ? super T, ? extends com.hazelcast.jet.Traverser<R>>): the flatMapAsyncFn returns a CompletableFuture<Traverser<R>> instead of just Traverser<R>.
The function can return a null future or the future can return a null traverser: in both cases it will act just like a filter.
The latency of the async call will add to the total latency of the output.
Sample usage:
StreamStage<Part> parts = products.flatMapUsingContextAsync(
ContextFactory.withCreateFn(jet -> new PartRegistryCtx()),
(registry, product) -> registry
.fetchPartsAsync(product)
.thenApply(parts -> Traversers.traverseIterable(parts))
);
flatMapUsingContextAsync in interface GeneralStage<T>
C - type of context object
R - the type of the returned stage
contextFactory - the context factory
flatMapAsyncFn - a stateless flatmapping function. Can map to null (return a null future)

@Nonnull default <K,V,R> StreamStage<R> mapUsingReplicatedMap(@Nonnull String mapName, @Nonnull FunctionEx<? super T,? extends K> lookupKeyFn, @Nonnull BiFunctionEx<? super T,? super V,? extends R> mapFn)
GeneralStage
Attaches a mapping stage where for each item a lookup in the ReplicatedMap with the supplied name is performed and the result of the lookup is merged with the item and emitted.
If the result of the mapping is null, it emits nothing. Therefore this stage can be used to implement filtering semantics as well.
The mapping logic is equivalent to:
K key = lookupKeyFn.apply(item);
V value = replicatedMap.get(key);
return mapFn.apply(item, value);
Sample usage:
items.mapUsingReplicatedMap(
"enriching-map",
item -> item.getDetailId(),
(Item item, ItemDetail detail) -> item.setDetail(detail)
)
mapUsingReplicatedMap in interface GeneralStage<T>
K - type of the key in the ReplicatedMap
V - type of the value in the ReplicatedMap
R - type of the output item
mapName - name of the ReplicatedMap
lookupKeyFn - a function which returns the key to look up in the map. Must not return null
mapFn - the mapping function

@Nonnull default <K,V,R> StreamStage<R> mapUsingReplicatedMap(@Nonnull ReplicatedMap<K,V> replicatedMap, @Nonnull FunctionEx<? super T,? extends K> lookupKeyFn, @Nonnull BiFunctionEx<? super T,? super V,? extends R> mapFn)
GeneralStage
Attaches a mapping stage where for each item a lookup in the supplied ReplicatedMap is performed and the result of the lookup is merged with the item and emitted.
If the result of the mapping is null, it emits nothing. Therefore this stage can be used to implement filtering semantics as well.
The mapping logic is equivalent to:
K key = lookupKeyFn.apply(item);
V value = replicatedMap.get(key);
return mapFn.apply(item, value);
Sample usage:
items.mapUsingReplicatedMap(
enrichingMap,
item -> item.getDetailId(),
(item, detail) -> item.setDetail(detail)
)
mapUsingReplicatedMap in interface GeneralStage<T>
K - type of the key in the ReplicatedMap
V - type of the value in the ReplicatedMap
R - type of the output item
replicatedMap - the ReplicatedMap to look up from
lookupKeyFn - a function which returns the key to look up in the map. Must not return null
mapFn - the mapping function

@Nonnull default <K,V,R> StreamStage<R> mapUsingIMap(@Nonnull String mapName, @Nonnull FunctionEx<? super T,? extends K> lookupKeyFn, @Nonnull BiFunctionEx<? super T,? super V,? extends R> mapFn)
GeneralStage
Attaches a mapping stage where for each item a lookup in the IMap with the supplied name is performed and the result of the lookup is merged with the item and emitted.
If the result of the mapping is null, it emits nothing. Therefore this stage can be used to implement filtering semantics as well.
The mapping logic is equivalent to:
K key = lookupKeyFn.apply(item);
V value = map.get(key);
return mapFn.apply(item, value);
Sample usage:
items.mapUsingIMap(
"enriching-map",
item -> item.getDetailId(),
(Item item, ItemDetail detail) -> item.setDetail(detail)
)
See also GeneralStageWithKey.mapUsingIMap(java.lang.String, com.hazelcast.jet.function.BiFunctionEx<? super T, ? super V, ? extends R>) for a partitioned version of this operation.
mapUsingIMap in interface GeneralStage<T>
K - type of the key in the IMap
V - type of the value in the IMap
R - type of the output item
mapName - name of the IMap
lookupKeyFn - a function which returns the key to look up in the map. Must not return null
mapFn - the mapping function

@Nonnull default <K,V,R> StreamStage<R> mapUsingIMap(@Nonnull IMap<K,V> iMap, @Nonnull FunctionEx<? super T,? extends K> lookupKeyFn, @Nonnull BiFunctionEx<? super T,? super V,? extends R> mapFn)
GeneralStage
Attaches a mapping stage where for each item a lookup in the supplied IMap is performed and the result of the lookup is merged with the item and emitted.
If the result of the mapping is null, it emits nothing. Therefore this stage can be used to implement filtering semantics as well.
The mapping logic is equivalent to:
K key = lookupKeyFn.apply(item);
V value = map.get(key);
return mapFn.apply(item, value);
Sample usage:
items.mapUsingIMap(
enrichingMap,
item -> item.getDetailId(),
(item, detail) -> item.setDetail(detail)
)
See also GeneralStageWithKey.mapUsingIMap(java.lang.String, com.hazelcast.jet.function.BiFunctionEx<? super T, ? super V, ? extends R>) for a partitioned version of this operation.
mapUsingIMap in interface GeneralStage<T>
K - type of the key in the IMap
V - type of the value in the IMap
R - type of the output item
iMap - the IMap to look up from
lookupKeyFn - a function which returns the key to look up in the map. Must not return null
mapFn - the mapping function

@Nonnull <R> StreamStage<R> rollingAggregate(@Nonnull AggregateOperation1<? super T,?,? extends R> aggrOp)
GeneralStage
Attaches a rolling aggregation stage: it emits the current aggregation result after receiving each input item. For example, with a summing aggregate operation and the input {2, 7, 8, -5}, the output will be {2, 9, 17, 12}. The number of input and output items is equal.
Sample usage:
stage.rollingAggregate(AggregateOperations.counting())
This stage is fault-tolerant and saves its state to the snapshot.
NOTE 1: since the output for each item depends on all
the previous items, this operation cannot be parallelized. Jet will
perform it on a single member, single-threaded. Jet also supports
keyed rolling aggregation
which it can parallelize by partitioning.
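The rolling semantics described above can be sketched in standalone Java with a summing accumulator (a hypothetical stand-in for AggregateOperations.summingLong-style operations, not Jet code):

```java
import java.util.ArrayList;
import java.util.List;

public class RollingAggregateDemo {
    // Standalone sketch of rolling aggregation with a summing aggregate op:
    // add each item to the accumulator and emit the current result, so the
    // number of output items equals the number of input items.
    static List<Integer> rollingSum(int[] input) {
        List<Integer> out = new ArrayList<>();
        int acc = 0;
        for (int item : input) {
            acc += item;
            out.add(acc);   // one output per input item
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(rollingSum(new int[]{2, 7, 8, -5})); // [2, 9, 17, 12]
    }
}
```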
rollingAggregate in interface GeneralStage<T>
R - result type of the aggregate operation
aggrOp - the aggregate operation to do the aggregation

@Nonnull <K,T1_IN,T1,R> StreamStage<R> hashJoin(@Nonnull BatchStage<T1_IN> stage1, @Nonnull JoinClause<K,? super T,? super T1_IN,? extends T1> joinClause1, @Nonnull BiFunctionEx<T,T1,R> mapToOutputFn)
GeneralStage
Attaches to both this and the supplied stage a hash-joining stage and returns it. Please refer to the package javadoc for a detailed description of the hash-join transform.
Sample usage:
// Types of the input stages:
BatchStage<User> users;
BatchStage<Map.Entry<Long, Country>> idAndCountry;
users.hashJoin(
idAndCountry,
JoinClause.joinMapEntries(User::getCountryId),
(user, country) -> user.setCountry(country)
)
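The core of the hash-join transform can be sketched in standalone Java: the enriching side is held as a hash map keyed by the join key, and each primary-side item is looked up and combined, analogous to mapToOutputFn. This is an illustration only (the User class and join method are hypothetical, not Jet code); a missing match yields null, since the join is outer on the primary side:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashJoinDemo {
    static class User {
        final String name;
        final long countryId;
        User(String name, long countryId) { this.name = name; this.countryId = countryId; }
    }

    // The enriching (right) side sits fully in a hash map; the primary (left)
    // side streams through, each item looked up by its join key and combined.
    static List<String> join(List<User> users, Map<Long, String> idToCountry) {
        List<String> out = new ArrayList<>();
        for (User u : users) {
            String country = idToCountry.get(u.countryId); // null when unmatched
            out.add(u.name + "/" + country);               // mapToOutputFn analogue
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Long, String> countries = new HashMap<>();
        countries.put(1L, "FR");
        List<User> users = List.of(new User("ann", 1L), new User("bob", 2L));
        System.out.println(join(users, countries)); // [ann/FR, bob/null]
    }
}
```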
hashJoin in interface GeneralStage<T>
K - the type of the join key
T1_IN - the type of stage1 items
T1 - the result type of projection on stage1 items
R - the resulting output type
stage1 - the stage to hash-join with this one
joinClause1 - specifies how to join the two streams
mapToOutputFn - function to map the joined items to the output value

@Nonnull <K1,K2,T1_IN,T2_IN,T1,T2,R> StreamStage<R> hashJoin2(@Nonnull BatchStage<T1_IN> stage1, @Nonnull JoinClause<K1,? super T,? super T1_IN,? extends T1> joinClause1, @Nonnull BatchStage<T2_IN> stage2, @Nonnull JoinClause<K2,? super T,? super T2_IN,? extends T2> joinClause2, @Nonnull TriFunction<T,T1,T2,R> mapToOutputFn)
GeneralStage
Attaches to this and the two supplied stages a hash-joining stage and returns it. Please refer to the package javadoc for a detailed description of the hash-join transform.
Sample usage:
// Types of the input stages:
BatchStage<User> users;
BatchStage<Map.Entry<Long, Country>> idAndCountry;
BatchStage<Map.Entry<Long, Company>> idAndCompany;
users.hashJoin2(
idAndCountry, JoinClause.joinMapEntries(User::getCountryId),
idAndCompany, JoinClause.joinMapEntries(User::getCompanyId),
(user, country, company) -> user.setCountry(country).setCompany(company)
)
hashJoin2 in interface GeneralStage<T>
K1 - the type of key for stage1
K2 - the type of key for stage2
T1_IN - the type of stage1 items
T2_IN - the type of stage2 items
T1 - the result type of projection of stage1 items
T2 - the result type of projection of stage2 items
R - the resulting output type
stage1 - the first stage to join
joinClause1 - specifies how to join with stage1
stage2 - the second stage to join
joinClause2 - specifies how to join with stage2
mapToOutputFn - function to map the joined items to the output value

@Nonnull default StreamHashJoinBuilder<T> hashJoinBuilder()
GeneralStage
Returns a fluent API builder object to construct a hash join operation with any number of contributing stages. For one or two enriching stages, prefer the direct stage.hashJoinN(...) calls because they offer more static type safety.
Sample usage:
// Types of the input stages:
StreamStage<User> users;
BatchStage<Map.Entry<Long, Country>> idAndCountry;
BatchStage<Map.Entry<Long, Company>> idAndCompany;
StreamHashJoinBuilder<User> builder = users.hashJoinBuilder();
Tag<Country> tCountry = builder.add(idAndCountry, JoinClause.joinMapEntries(User::getCountryId));
Tag<Company> tCompany = builder.add(idAndCompany, JoinClause.joinMapEntries(User::getCompanyId));
StreamStage<User> joined = builder.build((user, itemsByTag) ->
user.setCountry(itemsByTag.get(tCountry)).setCompany(itemsByTag.get(tCompany)));
hashJoinBuilder in interface GeneralStage<T>
@Nonnull default StreamStage<T> peek()
GeneralStage
Adds a peeking layer to this compute stage which logs its output. For each output item the stage emits, it logs the result of its toString() method at the INFO level to the log category com.hazelcast.jet.impl.processor.PeekWrappedP.<vertexName>#<processorIndex>.
The stage logs each item on whichever cluster member it happens to
receive it. Its primary purpose is for development use, when running Jet
on a local machine.
peek in interface GeneralStage<T>
See also: GeneralStage.peek(PredicateEx, FunctionEx), GeneralStage.peek(FunctionEx)
@Nonnull StreamStage<T> peek(@Nonnull PredicateEx<? super T> shouldLogFn, @Nonnull FunctionEx<? super T,? extends CharSequence> toStringFn)
GeneralStage
Attaches a peeking stage which logs this stage's output and passes it through without transformation. For each item the stage emits, it uses the shouldLogFn predicate to see whether to log the item and, if yes, uses toStringFn to get the item's string representation, then logs it at the INFO level to the log category com.hazelcast.jet.impl.processor.PeekWrappedP.<vertexName>#<processorIndex>.
Sample usage:
users.peek(
user -> user.getName().length() > 100,
User::getName
)
peek in interface GeneralStage<T>
shouldLogFn - a function to filter the logged items. You can use alwaysTrue() as a pass-through filter when you don't need any filtering.
toStringFn - a function that returns a string representation of the item
See also: GeneralStage.peek(FunctionEx), GeneralStage.peek()
@Nonnull default StreamStage<T> peek(@Nonnull FunctionEx<? super T,? extends CharSequence> toStringFn)
GeneralStage
Adds a peeking layer to this compute stage which logs its output. For each item the stage emits, it uses toStringFn to get a string representation of the item, then logs it at the INFO level to the log category com.hazelcast.jet.impl.processor.PeekWrappedP.<vertexName>#<processorIndex>.
Sample usage:
users.peek(User::getName)
peek in interface GeneralStage<T>
toStringFn - a function that returns a string representation of the item
See also: GeneralStage.peek(PredicateEx, FunctionEx), GeneralStage.peek()
@Nonnull default <R> StreamStage<R> customTransform(@Nonnull String stageName, @Nonnull SupplierEx<Processor> procSupplier)
GeneralStage
Attaches a stage with a custom transform based on the provided supplier of Core API Processors.
Note that the type parameter of the returned stage is inferred from the call site and not propagated from the processor that will produce the result, so there is no actual type safety provided.
customTransform in interface GeneralStage<T>
R - the type of the output items
stageName - a human-readable name for the custom stage
procSupplier - the supplier of processors

@Nonnull default <R> StreamStage<R> customTransform(@Nonnull String stageName, @Nonnull ProcessorSupplier procSupplier)
GeneralStage
Attaches a stage with a custom transform based on the provided supplier of Core API Processors.
Note that the type parameter of the returned stage is inferred from the call site and not propagated from the processor that will produce the result, so there is no actual type safety provided.
customTransform in interface GeneralStage<T>
R - the type of the output items
stageName - a human-readable name for the custom stage
procSupplier - the supplier of processors

@Nonnull <R> StreamStage<R> customTransform(@Nonnull String stageName, @Nonnull ProcessorMetaSupplier procSupplier)
GeneralStage
Attaches a stage with a custom transform based on the provided supplier of Core API Processors.
Note that the type parameter of the returned stage is inferred from the call site and not propagated from the processor that will produce the result, so there is no actual type safety provided.
customTransform in interface GeneralStage<T>
R - the type of the output items
stageName - a human-readable name for the custom stage
procSupplier - the supplier of processors

@Nonnull StreamStage<T> setLocalParallelism(int localParallelism)
Stage
Sets the preferred local parallelism (number of processors per Jet cluster member) this stage will configure its DAG vertices with. While most stages are backed by 1 vertex, there are exceptions. If a stage uses two vertices, each of them will have the given local parallelism, so in total there will be twice as many processors per member.
The default value is -1, which signals Jet to figure out a default: Jet will determine the vertex's local parallelism during job initialization from the global default and the processor meta-supplier's preferred value.
setLocalParallelism in interface Stage
Copyright © 2019 Hazelcast, Inc. All rights reserved.