Chapter NINE
Streams


Exam Objectives

Use Java object and primitive Streams, including lambda expressions implementing functional interfaces, to create, filter, transform, process, and sort data.
Perform decomposition, concatenation, and reduction, and grouping and partitioning on sequential and parallel streams.

Chapter Content


The Optional Class

Most programming languages have a data type to represent the absence of a value, and it is known by many names:

NULL, nil, None, Nothing

The null reference was introduced in ALGOL W by Tony Hoare in 1965, and it’s considered one of the worst mistakes in computer science. In Tony Hoare’s words:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object-oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

Still, some may be wondering: what is the problem with null?

Well, if you’re a little worried by the problems this code might cause, you know the answer already:

String summary = 
  book.getChapter(10)
      .getSummary().toUpperCase();

The problem with that code is that if any of those methods return a null reference (for example, if the book doesn’t have a tenth chapter), a NullPointerException (the most common exception in Java) will be thrown at runtime, stopping the program.

What can we do to avoid this exception?

Perhaps, the easiest way is to check for null. Here’s one way to do it:

String summary = "";
if(book != null) {
    Chapter chapter = book.getChapter(10);
    if(chapter != null) {
        if(chapter.getSummary() != null) {
            summary = chapter.getSummary()
                             .toUpperCase();
        }
    }
}

You don’t know if any object in this hierarchy can be null, so you check every object for it. Obviously, this is not the best solution; it’s not very practical and damages readability.

There’s another issue. Is checking for null really desirable? I mean, what if those objects should never be null? By checking for null, we hide the error and don’t deal with it.

Of course, this is a design issue too. For example, if a chapter has no summary yet, what would be better to use as a default value? An empty string or null?

The class java.util.Optional<T> addresses this problem.

The job of this class is to encapsulate an optional value, which is an object that can be null.

Using the previous example, if we know that not all chapters have a summary, instead of modeling the class like this:

class Chapter {
    private String summary;
    // Other attributes and methods
}

We can use the Optional class:

class Chapter {
    private Optional<String> summary;
    // Other attributes and methods
}

So if there’s a value, the Optional class just wraps it. Otherwise, an empty value is represented by the method Optional.empty(), which returns a singleton instance of Optional.

By using this class instead of null, we explicitly declare that the summary attribute is optional. Then, we can avoid NullPointerExceptions while having the useful methods of Optional at our disposal, which we’ll review next.

First, let’s see how to create an instance of this class.

To get an empty Optional object, use:

Optional<String> summary = Optional.empty();

If you are sure that an object is not null, you can wrap it in an Optional object this way:

Optional<String> summary = Optional.of("A summary");

A NullPointerException will be thrown if the object is null. However, you can use:

Optional<String> summary = Optional.ofNullable("A summary");

That returns an Optional instance with the specified value if it is non-null. Otherwise, it returns an empty Optional.

If you want to know if an Optional contains a value, you can do it like this:

if (summary.isPresent()) {
    // Do something
}

Or in a more functional style:

summary.ifPresent(s -> System.out.println(s));
// Or summary.ifPresent(System.out::println);

The ifPresent() method takes a Consumer<T> as an argument that is executed only if the Optional contains a value.

To get the value of an Optional, use:

String s = summary.get();

However, this method will throw a java.util.NoSuchElementException if the Optional doesn’t contain a value, so it’s better to use the ifPresent() method.

Alternatively, if we want to return something when the Optional doesn’t contain a value, there are three other methods we can use:

String summaryOrDefault = summary.orElse("Default summary");

The orElse() method returns the argument (which must be of type T, in this case a String) when the Optional is empty. Otherwise, it returns the encapsulated value.

String summaryOrDefault = 
    summary.orElseGet(() -> "Default summary");

The orElseGet() method takes a Supplier<? extends T> as an argument that returns a value when the Optional is empty. Otherwise, it returns the encapsulated value.

String summaryOrException = 
    summary.orElseThrow(() -> new Exception());

The orElseThrow() method takes a Supplier<? extends X>, where X is the type of the exception to throw when the Optional is empty. Otherwise, it returns the encapsulated value.

There are versions of the Optional class to work with primitives, OptionalInt, OptionalLong, and OptionalDouble, so you can use OptionalInt instead of Optional<Integer>:

OptionalInt optionalInt = OptionalInt.of(1);
int i = optionalInt.getAsInt();

However, the use of these primitive versions is not encouraged, especially because they lack three useful methods of Optional: filter(), map(), and flatMap(). And since Optional just contains one value, the overhead of boxing/unboxing a primitive is not significant.

The filter() method returns the Optional if a value is present and matches the given predicate. Otherwise, an empty Optional is returned.

String summaryStr = 
    summary.filter(s -> s.length() > 10).orElse("Short summary");

The map() method is generally used to transform from one type to another. If the value is present, it applies the provided Function<? super T, ? extends U> to it. For example:

int summaryLength = summary.map(s -> s.length()).orElse(0);

The flatMap() method is similar to map(), but it takes an argument of type Function<? super T, Optional<U>> and if the value is present, it returns the Optional that results from applying the provided function. Otherwise, it returns an empty Optional.
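
For example, here’s a minimal sketch. It assumes the Chapter class modeled earlier exposes a getSummary() getter that returns its Optional<String> attribute (that getter is hypothetical, shown only for illustration):

Optional<Chapter> chapter = Optional.of(new Chapter());
// map() would wrap the already wrapped value: Optional<Optional<String>>
Optional<Optional<String>> nested = chapter.map(Chapter::getSummary);
// flatMap() removes the extra level of nesting
Optional<String> summary = chapter.flatMap(Chapter::getSummary);
System.out.println(summary.orElse("No summary"));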

Streams

Suppose you have a list of students and the requirements are to extract the students with a score of 90.0 or greater and sort them by score in ascending order.

One way to do it would be:

List<Student> studentsScore = new ArrayList<Student>();
for(Student s : students) {
   if(s.getScore() >= 90.0) {
       studentsScore.add(s);
   }
}
Collections.sort(studentsScore, new Comparator<Student>() {
   public int compare(Student s1, Student s2) {
       return Double.compare(s1.getScore(), s2.getScore());
   }
});

That’s very verbose compared to the implementation that uses streams:

List<Student> studentsScore = students
    .stream()
    .filter(s -> s.getScore() >= 90.0)
    .sorted(Comparator.comparing(Student::getScore))
    .collect(Collectors.toList());

Don’t worry if you don’t fully understand the code; we’ll see what it means later.

What Are Streams?

First of all, streams are NOT collections.

A simple definition is that streams are wrappers for collections or arrays. They wrap an existing collection (or another data source) to support operations expressed with lambdas, so you specify what you want to do, not how to do it. You already saw this in the previous example.

These are the characteristics of a stream:

  1. It doesn’t store its elements; it conveys them from a source such as a collection, an array, or a generator function.
  2. It doesn’t modify its data source.
  3. It’s lazily evaluated; no element is processed until a terminal operation is invoked.
  4. It can be traversed only once; after that, it’s consumed and you need to get a new stream from the source.
  5. It can be sequential or parallel.

One thing that allows this laziness is the way stream operations are designed. Most of them return a new stream, allowing operations to be chained to form a pipeline that enables this kind of optimization.

To set up this pipeline you:

  1. Create the stream.
  2. Apply zero or more intermediate operations to transform the initial stream into new streams.
  3. Apply a terminal operation to generate a result or a side-effect.

Here’s a diagram to help you visualize this pipeline:

┌─────────────┐   ┌───────────────────────────┐   ┌───────────────┐
│             │   │  Intermediate Ops         │   │               │
│   Source    │   │ ┌─────┐ ┌──────┐ ┌──────┐ │   │   Terminal    │
│ (Collection │ → │ │ map │→│filter│→│sorted│ │ → │  Operation    │
│  or Array)  │   │ └─────┘ └──────┘ └──────┘ │   │(e.g., collect)│
│             │   │                           │   │               │
└─────────────┘   └───────────────────────────┘   └───────────────┘
       ↑                       ↑                        ↑
       │                       │                        │
    Creation              Processing                 Result

Creating Streams

A stream is represented by the java.util.stream.Stream<T> interface. This works with objects only.

There are also specializations to work with primitive types, such as IntStream, LongStream, and DoubleStream.

There are many ways to create a stream. Let’s start with the most popular three.

The first one is creating a stream from a java.util.Collection implementation using the stream() method:

List<String> words = Arrays.asList("hello", "hola", "hallo", "ciao");
Stream<String> stream = words.stream();

The second one is creating a stream from individual values:

Stream<String> stream = Stream.of("hello","hola", "hallo", "ciao");

The third one is creating a stream from an array:

String[] words = {"hello", "hola", "hallo", "ciao"};
Stream<String> stream = Stream.of(words);

However, you have to be careful with this last method when working with primitives.

Here’s why. Assume an int array:

int[] nums = {1, 2, 3, 4, 5};

When we create a stream from this array like this:

Stream.of(nums)

We are not creating a stream of Integers (Stream<Integer>), but a stream of int arrays (Stream<int[]>). This means that instead of having a stream with five elements we have a stream of one element:

System.out.println(Stream.of(nums).count()); // It prints 1!

The reason is the signatures of the of method:

// returns a stream of one element
static <T> Stream<T> of(T t)
// returns a stream whose elements are the specified values
static <T> Stream<T> of(T... values)

Since an int is not an object but int[] is, the method chosen to create the stream is the first one (Stream.of(T t)), not the varargs version, so a stream of int[] is created; and since only one array is passed, the result is a stream of one element.

To solve this, we can force Java to choose the varargs version by creating an array of objects (with Integer):

Integer[] nums = {1, 2, 3, 4, 5};
// It prints 5!
System.out.println(Stream.of(nums).count());

Or use a fourth way to create a stream (which is in fact used inside Stream.of(T... values)):

int[] nums = {1, 2, 3, 4, 5};
// It also prints 5!
System.out.println(Arrays.stream(nums).count());

Or use the primitive version IntStream:

int[] nums = {1, 2, 3, 4, 5};
// It also prints 5!
System.out.println(IntStream.of(nums).count());

So don’t use Stream.of() when working with primitive arrays.

Here are other ways to create streams:

static <T> Stream<T> generate(Supplier<T> s)

This method returns an infinite stream where each element is generated by the provided Supplier, and is generally used with the method:

Stream<T> limit(long maxSize)

That truncates the stream so that it is no longer than maxSize in length.

For example:

Stream<Double> s = Stream.generate(new Supplier<Double>() {
   public Double get() {
       return Math.random();
   }
}).limit(5);

Or:

Stream<Double> s = Stream.generate(() -> Math.random()).limit(5);

Or just:

Stream<Double> s = Stream.generate(Math::random).limit(5);

Which generates a stream of five random doubles.

Then we have the iterate method:

static <T> Stream<T> iterate(T seed, UnaryOperator<T> f)

It returns an infinite stream produced by the iterative application of a function f to an initial element (seed). The first element (n = 0) in the stream will be the provided seed. For n > 0, the element at position n will be the result of applying the function f to the element at position n - 1. For example:

Stream<Integer> s = Stream.iterate(1, new UnaryOperator<Integer>() {
   @Override
   public Integer apply(Integer t) {
       return t * 2; }
}).limit(5);

Or just:

Stream<Integer> s = Stream.iterate(1, t -> t * 2).limit(5);

That generates the elements 1, 2, 4, 8, 16.

There’s a Stream.Builder<T> class (that follows the builder design pattern) with methods that add an element to the stream being built:

void accept(T t)
default Stream.Builder<T> add(T t)

For example:

Stream.Builder<String> builder = Stream.<String>builder()
    .add("h").add("e").add("l").add("l");
builder.accept("o");
Stream<String> s = builder.build();

IntStream and LongStream define the methods:

static IntStream range(int startInclusive, int endExclusive)
static IntStream rangeClosed(int startInclusive, int endInclusive)
static LongStream range(long startInclusive, long endExclusive)
static LongStream rangeClosed(long startInclusive, long endInclusive)

These return a sequential stream for the given range of int or long elements. For example:

// stream of 1, 2, 3
IntStream s = IntStream.range(1, 4);
// stream of 1, 2, 3, 4
IntStream s = IntStream.rangeClosed(1, 4);

Also, there are methods in the Java API that generate streams. For example:

IntStream s1 = new Random().ints(5, 1, 10);

Which returns an IntStream of five random ints from one (inclusive) to ten (exclusive).

Intermediate Operations

You can easily identify intermediate operations; they always return a new stream. This allows the operations to be connected.

For example:

Stream<String> s = Stream.of("m", "k", "c", "t")
    .sorted()
    .limit(3);

An important feature of intermediate operations is that they don’t process the elements until a terminal operation is invoked, meaning they’re lazy.

Intermediate operations can be either stateless or stateful.

Stateless operations retain no state from previous elements when processing a new element so each can be processed independently of operations on other elements.

Stateful operations, such as distinct and sorted, require processing the entire stream or keeping track of state from previously processed elements to produce a result.

The following table summarizes the methods of the Stream interface that represent intermediate operations.

Method Type Description
Stream<T> distinct() Stateful Returns a stream consisting of the distinct elements.
Stream<T> filter(Predicate<? super T> predicate) Stateless Returns a stream of elements that match the given predicate.
<R> Stream<R> flatMap(Function<? super T,? extends Stream<? extends R>> mapper) Stateless Returns a stream with the content produced by applying the provided mapping function to each element. There are versions for int, long and double also.
Stream<T> limit(long maxSize) Stateful Returns a stream truncated to be no longer than maxSize in length.
<R> Stream<R> map(Function<? super T,? extends R> mapper) Stateless Returns a stream consisting of the results of applying the given function to the elements of this stream. There are versions for int, long and double also.
Stream<T> peek(Consumer<? super T> action) Stateless Returns a stream with the elements of this stream, performing the provided action on each element.
Stream<T> skip(long n) Stateful Returns a stream with the remaining elements of this stream after discarding the first n elements.
Stream<T> sorted() Stateful Returns a stream sorted according to the natural order of its elements.
Stream<T> sorted(Comparator<? super T> comparator) Stateful Returns a stream sorted according to the provided Comparator.
Stream<T> parallel() N/A Returns an equivalent stream that is parallel.
Stream<T> sequential() N/A Returns an equivalent stream that is sequential.
Stream<T> unordered() N/A Returns an equivalent stream that is unordered.

Terminal Operations

You can also easily identify terminal operations: they always return something other than a stream.

After the terminal operation is performed, the stream pipeline is consumed, and can’t be used anymore. For example:

int[] digits = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
IntStream s = IntStream.of(digits);
long n = s.count();
System.out.println(s.findFirst()); // An exception is thrown

If you need to traverse the same stream again, you must return to the data source to get a new one. For example:

int[] digits = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
long n = IntStream.of(digits).count();
System.out.println(IntStream.of(digits).findFirst()); // OK

The following table summarizes the methods of the Stream interface that represent terminal operations.

Method Description
boolean allMatch(Predicate<? super T> predicate) Returns whether all elements of this stream match the provided predicate. If the stream is empty then true is returned and the predicate is not evaluated.
boolean anyMatch(Predicate<? super T> predicate) Returns whether any elements of this stream match the provided predicate. If the stream is empty then false is returned and the predicate is not evaluated.
boolean noneMatch(Predicate<? super T> predicate) Returns whether no elements of this stream match the provided predicate. If the stream is empty then true is returned and the predicate is not evaluated.
Optional<T> findAny() Returns an Optional describing some element of the stream.
Optional<T> findFirst() Returns an Optional describing the first element of this stream.
<R,A> R collect(Collector<? super T,A,R> collector) Performs a mutable reduction operation on the elements of this stream using a Collector.
long count() Returns the count of elements in this stream.
void forEach(Consumer<? super T> action) Performs an action for each element of this stream.
void forEachOrdered(Consumer<? super T> action) Performs an action for each element of this stream, in the encounter order of the stream if the stream has a defined encounter order.
Optional<T> max(Comparator<? super T> comparator) Returns the maximum element of this stream according to the provided Comparator.
Optional<T> min(Comparator<? super T> comparator) Returns the minimum element of this stream according to the provided Comparator.
T reduce(T identity, BinaryOperator<T> accumulator) Performs a reduction on the elements of this stream, using the provided identity value and an associative accumulation function, and returns the reduced value.
Object[] toArray() Returns an array containing the elements of this stream.
<A> A[] toArray(IntFunction<A[]> generator) Returns an array containing the elements of this stream, using the provided generator function to allocate the returned array.
Iterator<T> iterator() Returns an iterator for the elements of the stream.
Spliterator<T> spliterator() Returns a spliterator for the elements of the stream.

Lazy Operations

Intermediate operations are deferred until a terminal operation is invoked. The reason is that intermediate operations can usually be merged or optimized by a terminal operation.

Let’s take for example this stream pipeline:

Stream.of("sun", "pool", "beach", "kid", "island", "sea", "sand")
    .map(str -> str.length())
    .filter(i -> i > 3)
    .limit(2)
    .forEach(System.out::println);

Here’s what it does: it maps each string to its length, keeps only the lengths greater than 3, truncates the result to two elements, and prints them.

You may think the map operation is applied to all seven elements, then the filter operation again to all seven, then it picks the first two, and finally it prints the values.

But this is not how it works. If we modify the lambda expressions of map and filter to print a message:

Stream.of("sun", "pool", "beach", "kid", "island", "sea", "sand")
    .map(str -> {
        System.out.println("Mapping: " + str);
        return str.length();
    })
    .filter(i -> {
        System.out.println("Filtering: " + i);
        return i > 3;
    })
    .limit(2)
    .forEach(System.out::println);

The order of evaluation will be revealed:

Mapping: sun
Filtering: 3
Mapping: pool
Filtering: 4
4
Mapping: beach
Filtering: 5
5

From this example, we can see that the stream applied operations only until it found enough elements to return a result (due to the limit(2) operation). This is called short-circuiting.

Short-circuit operations cause intermediate operations to be processed until a result can be produced.

In such a way, because of lazy and short-circuit operations, streams don’t execute all operations on all their elements. Instead, the elements of the stream go through a pipeline of operations until the point a result can be deduced or generated.

You can see short-circuiting as a subclassification. There are only two short-circuit intermediate operations:

Stream<T> limit(long maxSize)
Stream<T> takeWhile(Predicate<? super T> predicate)

Because they don’t need to process all the elements of the stream to produce their result: limit() stops once it has passed along maxSize elements, and takeWhile() (covered later in this chapter) stops at the first element that fails the predicate.

The rest are terminal:

boolean anyMatch(Predicate<? super T> predicate)
boolean allMatch(Predicate<? super T> predicate)
boolean noneMatch(Predicate<? super T> predicate)
Optional<T> findFirst()
Optional<T> findAny()

Because as soon as a result can be determined (for example, the first matching element is found), there’s no need to continue processing the stream.
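
To see this in action, here’s a minimal sketch that calls anyMatch() on an infinite stream; it returns as soon as a match is found instead of processing elements forever:

// Stream.iterate() produces an infinite stream, but anyMatch()
// short-circuits as soon as it finds a matching element
boolean found = Stream.iterate(1, n -> n + 1)
        .anyMatch(n -> n > 100);
System.out.println(found); // true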

Primitive Streams

Most of the time, we’ll use a Stream<T> that contains objects as elements. However, there are also specialized streams for handling primitive types like int, long, and double that allow you to avoid the overhead of auto-boxing and auto-unboxing elements into their wrapper classes. These primitive streams are IntStream, LongStream, and DoubleStream.

Each primitive stream has methods analogous to the ones in the regular Stream class, like map() (transforms elements), filter() (selects elements based on a predicate), reduce() (aggregates elements), etc. But because they can only deal with their corresponding primitive types, there are also specialized methods to handle them. Let’s review the most important ones.

The average() method returns an OptionalDouble with the arithmetic mean of elements, or an empty OptionalDouble if the primitive stream is empty:

IntStream stream = IntStream.range(1, 10);
OptionalDouble ave = stream.average();
System.out.println(ave.getAsDouble());

This code prints the average of the numbers from 1 to 9 (not including 10):

5.0

If you need to convert a primitive stream to a regular object stream, use the boxed() method:

Stream<Double> boxed = DoubleStream.of(1.2, 2.4).boxed();

To find the maximum value in the primitive stream, use max():

IntStream stream = IntStream.of(1, 10, 2, 20);
OptionalInt max = stream.max();
System.out.println(max.getAsInt());

This prints:

20

Each primitive stream has its own max() method that returns its corresponding Optional type (OptionalInt, OptionalLong, OptionalDouble). The same goes for min().

One peculiarity of the IntStream and LongStream is that they have special methods range() and rangeClosed() to generate a sequence of numbers in a range.

range(int a, int b) creates an IntStream of values from a (inclusive) to b (exclusive). rangeClosed(int a, int b) does the same including b:

IntStream stream = IntStream.range(1, 5);
stream.forEach(System.out::println);

This prints:

1
2
3
4

While:

LongStream stream = LongStream.rangeClosed(1, 5);
stream.forEach(System.out::println);

Prints:

1
2
3
4
5

Note that there are no range() or rangeClosed() methods in DoubleStream.

The sum() method returns the sum of all the elements:

IntStream stream = IntStream.of(1, 10, 2, 20);
int sum = stream.sum();
System.out.println(sum);

This prints:

33

Again, each primitive stream has its own dedicated sum() method that returns the primitive type result (int, long, double).

Finally, each primitive stream has a summaryStatistics() method that returns a summary of stats of the elements. Let’s see an example using IntStream:

IntStream stream = IntStream.of(1, 10, 2, 20);
IntSummaryStatistics stats = stream.summaryStatistics();
System.out.println(stats);

This prints:

IntSummaryStatistics{count=4, sum=33, min=1, average=8.250000, max=20}

LongStream and DoubleStream have analogous LongSummaryStatistics and DoubleSummaryStatistics classes.

These summary statistics objects provide methods to obtain each stat separately (getCount(), getSum(), getMin(), getAverage(), getMax()).
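
For example:

IntStream stream = IntStream.of(1, 10, 2, 20);
IntSummaryStatistics stats = stream.summaryStatistics();
System.out.println(stats.getMin());     // 1
System.out.println(stats.getMax());     // 20
System.out.println(stats.getAverage()); // 8.25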

If you need these statistics from a regular object stream, you can get the same summary by calling collect() with the summarizingInt(), summarizingLong(), or summarizingDouble() collectors:

List<Integer> list = List.of(1, 10, 2, 20);
IntSummaryStatistics stats = list.stream()
        .collect(Collectors.summarizingInt(i -> i));
System.out.println(stats);  

This prints:

IntSummaryStatistics{count=4, sum=33, min=1, average=8.250000, max=20}

Now let’s talk in more detail about some of the most common stream operations.

Filtering Streams

Filtering is one of the most common operations when you work with streams in Java. It allows you to select only the elements that satisfy a given predicate, discarding the rest. The filter() method is used for this purpose:

Stream<T> filter(Predicate<? super T> predicate);

The filter() method takes a Predicate functional interface as argument. A Predicate is a function that takes an element and returns a boolean. Only the elements for which the predicate returns true will be included in the resulting stream.

Let’s review a simple example:

List<Integer> list = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
List<Integer> evenList = list.stream()
        .filter(i -> i % 2 == 0)
        .collect(Collectors.toList());
System.out.println(evenList);

This code filters the original list, keeping only the even numbers. It prints:

[2, 4, 6, 8, 10]

You can chain multiple filter() calls to apply several conditions:

List<Integer> list = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
List<Integer> filteredList = list.stream()
        .filter(i -> i > 3)
        .filter(i -> i < 8)
        .collect(Collectors.toList());
System.out.println(filteredList);

This selects the numbers greater than 3 and less than 8:

[4, 5, 6, 7]

The Predicate interface also has default methods that allow you to combine predicates using logical operations: and(), or(), and negate().

For example, to get the numbers greater than 3 and less than 8 you can also do:

List<Integer> list = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
List<Integer> filteredList = list.stream()
        .filter(i -> i > 3 && i < 8)
        .collect(Collectors.toList());

Or using Predicate methods:

Predicate<Integer> greaterThan3 = i -> i > 3;
Predicate<Integer> lessThan8 = i -> i < 8;
List<Integer> filteredList = list.stream()
        .filter(greaterThan3.and(lessThan8))
        .collect(Collectors.toList());

The filter() method is stateless, which means that the execution of the predicate for one element doesn’t affect the execution for another.

A very useful method related to filter() is distinct():

Stream<T> distinct();

This method returns a stream of unique elements, discarding the duplicates:

List<Integer> list = List.of(1, 2, 2, 3, 4, 4, 5);
List<Integer> distinctList = list.stream()
        .distinct()
        .collect(Collectors.toList());
System.out.println(distinctList);

This prints:

[1, 2, 3, 4, 5]

You can think of distinct() as a special filtering operation.

Finally, there are two other methods similar to filter() but with a different purpose:

takeWhile() returns a stream that contains the longest prefix of elements taken from the original stream that match the given predicate.

List<Integer> list = List.of(2, 4, 6, 7, 8, 10, 11);
List<Integer> prefixList = list.stream()
        .takeWhile(i -> i % 2 == 0)
        .collect(Collectors.toList());
System.out.println(prefixList);

This selects the even numbers from the beginning of the stream until it finds the first odd number (7):

[2, 4, 6]

The opposite is dropWhile(), which discards the longest prefix of elements that satisfy the predicate and returns a stream of the remaining elements:

List<Integer> list = List.of(2, 4, 6, 7, 8, 10, 11);
List<Integer> postfixList = list.stream()
        .dropWhile(i -> i % 2 == 0)
        .collect(Collectors.toList());
System.out.println(postfixList);

This discards the initial even numbers and returns the rest of the stream:

[7, 8, 10, 11]

It is important to note that the predicates used in takeWhile() and dropWhile() must be stateless. The execution for one element shouldn’t affect the execution for another, otherwise the results will be unpredictable.

Mapping Streams

When working with streams, we often need to transform the elements from one type to another or extract certain data from them. This is where the map() and flatMap() operations come into play.

The map() method applies a function to each element of the stream, transforming it into a new element. It’s like having a machine that takes in raw materials (the original elements) and outputs refined products (the transformed elements).

<R> Stream<R> map(Function<? super T, ? extends R> mapper);

The map() method takes a Function as argument, which is an interface that represents a function that accepts one argument and returns a result. In this case, it takes an element of type T and returns an element of type R.

Here’s an example:

List<String> list = List.of("1", "2", "3", "4", "5");
List<Integer> intList = list.stream()
        .map(Integer::parseInt)
        .collect(Collectors.toList());
System.out.println(intList);

This code converts a list of strings into a list of integers using the parseInt method of the Integer class. It prints:

[1, 2, 3, 4, 5]

You can chain multiple map() operations to perform successive transformations:

List<String> list = List.of("1", "2", "3", "4", "5");
List<Integer> doubledList = list.stream()
        .map(Integer::parseInt)
        .map(i -> i * 2)
        .collect(Collectors.toList());
System.out.println(doubledList);

This first converts the strings to integers and then multiplies each number by 2:

[2, 4, 6, 8, 10]

Now, what if instead of transforming each element, you want to extract multiple elements from each one? This is where flatMap() comes in.

flatMap() is like having a machine that takes in containers filled with raw materials, unpacks each container, processes the materials, and then outputs the refined products in a single stream.

<R> Stream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper);

The flatMap() method takes a function that returns a stream for each element. Then, it flattens all these streams into a single one.

A common use case is when you have a stream of lists and you want to process the elements of all the lists as a single stream:

List<List<Integer>> listOfLists = List.of(
        List.of(1, 2, 3),
        List.of(4, 5, 6),
        List.of(7, 8, 9)
);
List<Integer> flattenedList = listOfLists.stream()
        .flatMap(List::stream)
        .collect(Collectors.toList());
System.out.println(flattenedList);

This code flattens the list of lists into a single list:

[1, 2, 3, 4, 5, 6, 7, 8, 9]

When working with primitive streams, there are specialized mapping operations, mapToInt(), mapToLong(), and mapToDouble(), that avoid boxing and unboxing costs.

These methods take a ToIntFunction, ToLongFunction, and ToDoubleFunction respectively, and return an IntStream, LongStream, and DoubleStream.

List<String> list = List.of("1", "2", "3", "4", "5");
IntStream intStream = list.stream()
        .mapToInt(Integer::parseInt);
intStream.forEach(System.out::println);

This code converts the stream of strings to an IntStream and prints each element:

1
2
3
4
5

You can also map one primitive type to another:

IntStream intStream = IntStream.range(1, 6);
DoubleStream doubleStream = intStream.mapToDouble(i -> i / 2.0);
doubleStream.forEach(System.out::println);

This converts an IntStream to a DoubleStream, dividing each number by 2:

0.5
1.0
1.5
2.0
2.5

Decomposing Streams

When working with streams, sometimes we need to break them down into smaller parts, analyze their elements, or combine them in certain ways. This is what we call decomposing streams and there are several operations that allow us to do this.

First, let’s talk about skip() and limit(). These methods allow us to cut a stream into parts, discarding some elements and keeping others.

skip(long n) returns a stream that discards the first n elements of the original stream. It’s like cutting off the top part of a log.

Here’s an example:

List<Integer> list = List.of(1, 2, 3, 4, 5);
List<Integer> skippedList = list.stream()
        .skip(2)
        .collect(Collectors.toList());
System.out.println(skippedList);

This skips the first two elements and collects the rest into a new list:

[3, 4, 5]

On the other hand, limit(long maxSize) returns a stream that truncates the original stream to be no longer than maxSize. It’s like cutting off the bottom part of a log.

Here’s an example:

List<Integer> list = List.of(1, 2, 3, 4, 5);
List<Integer> limitedList = list.stream()
        .limit(3)
        .collect(Collectors.toList());
System.out.println(limitedList);

This keeps only the first three elements and discards the rest:

[1, 2, 3]

You can combine skip() and limit() to extract a substream:

List<Integer> list = List.of(1, 2, 3, 4, 5);
List<Integer> subList = list.stream()
        .skip(1)
        .limit(3)
        .collect(Collectors.toList());
System.out.println(subList);

This skips the first element and then takes the next three:

[2, 3, 4]

Now, let’s talk about forEach() and forEachOrdered(). These methods allow us to perform an action on each element of the stream.

forEach(Consumer action) performs the given action on each element. The order of processing is not guaranteed to be the encounter order if the stream is parallel.

List<Integer> list = List.of(1, 2, 3, 4, 5);
list.stream()
    .forEach(System.out::println);

This prints each element of the stream:

1
2
3
4
5

forEachOrdered(Consumer action) is similar, but it guarantees that the action is performed on the elements in the encounter order of the stream if it is a parallel stream:

List<Integer> list = List.of(1, 2, 3, 4, 5);
list.stream()
    .forEachOrdered(System.out::println);

This also prints each element, but ensuring the order:

1
2
3
4
5
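
The difference only becomes visible with a parallel stream. Here’s a minimal sketch (the output of the forEach() call may vary between runs):

List<Integer> list = List.of(1, 2, 3, 4, 5);
// With a parallel stream, forEach() may print the elements in any order
list.parallelStream().forEach(System.out::println);
// forEachOrdered() still respects the encounter order: 1 2 3 4 5
list.parallelStream().forEachOrdered(System.out::println);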

The allMatch(), anyMatch(), and noneMatch() methods allow us to check if certain conditions hold for the elements of the stream.

allMatch(Predicate predicate) returns true if all elements satisfy the predicate, false otherwise:

List<Integer> list = List.of(2, 4, 6, 8, 10);
boolean allEven = list.stream()
        .allMatch(i -> i % 2 == 0);
System.out.println(allEven);

The above example checks if all elements are even:

true

anyMatch(Predicate predicate) returns true if any element satisfies the predicate, false otherwise:

List<Integer> list = List.of(1, 2, 3, 4, 5);
boolean anyEven = list.stream()
        .anyMatch(i -> i % 2 == 0);
System.out.println(anyEven);

This checks if any element is even:

true

noneMatch(Predicate predicate) returns true if no element satisfies the predicate, false otherwise:

List<Integer> list = List.of(1, 3, 5, 7, 9);
boolean noneEven = list.stream()
        .noneMatch(i -> i % 2 == 0);
System.out.println(noneEven);

This checks if no element is even:

true

The findFirst() and findAny() methods return an element of the stream, if one exists.

findFirst() returns an Optional describing the first element of the stream, or an empty Optional if the stream is empty:

List<Integer> list = List.of(1, 2, 3, 4, 5);
Optional<Integer> firstElem = list.stream()
        .findFirst();
System.out.println(firstElem.get());

This finds and prints the first element:

1

findAny() returns an Optional describing some element of the stream, or an empty Optional if the stream is empty. In parallel streams, it’s useful when you don’t care about the specific element, just that one exists:

List<Integer> list = List.of(1, 2, 3, 4, 5);
Optional<Integer> anyElem = list.parallelStream()
        .findAny();
System.out.println(anyElem.get());

The above example finds and prints any element (the specific element is not guaranteed due to the parallel processing):

3

Concatenating Streams

Sometimes, when working with streams, we need to combine them, merging their elements into a single stream. This is what we call concatenating streams, and there are several ways to achieve this in Java.

The most straightforward way to concatenate streams is by using the concat() method. This static method takes two streams as input and returns a new stream that is the concatenation of the two input streams:

static <T> Stream<T> concat(Stream<? extends T> a, Stream<? extends T> b)

It’s like joining two pipes, letting the water (elements) flow from one to the other.

Let’s see an example:

Stream<Integer> stream1 = Stream.of(1, 2, 3);
Stream<Integer> stream2 = Stream.of(4, 5, 6);
Stream<Integer> concatenated = Stream.concat(stream1, stream2);
concatenated.forEach(System.out::println);

This concatenates stream1 and stream2, and prints the elements of the resulting stream:

1
2
3
4
5
6

It’s important to note that concat() is a static method and doesn’t modify the original streams. Instead, it creates a new stream that lazily pulls elements from the first stream and then the second stream when requested.

Also, keep in mind that you can only concatenate streams of the same type. If you try to concatenate streams of different types, you’ll get a compilation error.
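
Note that the primitive stream types provide their own static concat() methods. For example, with IntStream:

IntStream stream1 = IntStream.of(1, 2, 3);
IntStream stream2 = IntStream.of(4, 5, 6);
IntStream concatenated = IntStream.concat(stream1, stream2);
System.out.println(concatenated.sum()); // 21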

Another way to concatenate streams is by using the flatMap() method in conjunction with Stream.of().

Stream.of() creates a stream from a variable number of arguments. You can pass the streams you want to concatenate as arguments to Stream.of(), and then use flatMap() to flatten the resulting stream of streams into a single stream:

Stream<Integer> stream1 = Stream.of(1, 2, 3);
Stream<Integer> stream2 = Stream.of(4, 5, 6);
Stream<Integer> concatenated = Stream.of(stream1, stream2)
        .flatMap(stream -> stream);
concatenated.forEach(System.out::println);

This code does the same as the previous example, but using flatMap() and Stream.of().

This approach is more verbose than using concat() directly, but it can be handy when you have a collection of streams that you want to concatenate.

For example, let’s say you have a list of streams:

List<Stream<Integer>> listOfStreams = List.of(
        Stream.of(1, 2, 3),
        Stream.of(4, 5, 6),
        Stream.of(7, 8, 9)
);

You can concatenate all these streams into one using flatMap() and Stream.of():

Stream<Integer> concatenated = listOfStreams.stream()
        .flatMap(stream -> stream);
concatenated.forEach(System.out::println);

This prints:

1
2
3
4
5
6
7
8
9

Here, we first create a stream from the List of streams using the stream() method. Then, we use flatMap() to flatten this stream of streams into a single stream.

It’s like having a bunch of pipes and joining them all into one big pipe.

However, one thing to keep in mind when concatenating streams is the encounter order. The resulting stream will have the elements of the first stream followed by the elements of the second stream, and so on, in the order they were concatenated.

Reducing Streams

When working with streams, we often need to combine the elements in some way to produce a single result. This is what we call reducing a stream, and it’s one of the most powerful operations in the Java Streams API.

The reduce() operation allows us to perform a reduction on the elements of the stream, using an associative accumulation function. It’s like cooking a dish:

  1. You start with a bunch of raw ingredients (the elements of the stream).

  2. You apply a recipe (the accumulation function) to combine them.

  3. You end up with a single cooked dish (the result of the reduction).

The reduce() method has three forms:

Optional<T> reduce(BinaryOperator<T> accumulator)

T reduce(T identity, BinaryOperator<T> accumulator)

<U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner)

Let’s start with the first one. This form of reduce() takes a single parameter: the accumulation function. This is a BinaryOperator, which means it’s a function that takes two elements of the stream and combines them into one.

For example, let’s say we have a stream of integers, and we want to find their sum:

Stream<Integer> stream = Stream.of(1, 2, 3, 4, 5);
Optional<Integer> sum = stream.reduce((a, b) -> a + b);
System.out.println(sum.get());

This prints:

15

Here, the accumulation function (a, b) -> a + b takes two integers and returns their sum. The reduce() operation applies this function to the elements of the stream, two at a time, until all elements have been processed and a single result is obtained.

It’s important to note that this form of reduce() returns an Optional. This is because the stream might be empty, in which case there would be no elements to reduce, and therefore no result to return. The Optional allows us to handle this case gracefully.
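
For example, a quick sketch with an empty stream:

Stream<Integer> empty = Stream.empty();
Optional<Integer> result = empty.reduce((a, b) -> a + b);
System.out.println(result.isPresent()); // false
System.out.println(result.orElse(0));   // 0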

The second form of reduce() takes two parameters: an identity value and the accumulation function.

The identity value is the starting point of the reduction, and it’s also the value that will be returned if the stream is empty. It’s like the base ingredient in our cooking analogy.

Stream<Integer> stream = Stream.of(1, 2, 3, 4, 5);
Integer sum = stream.reduce(0, (a, b) -> a + b);
System.out.println(sum);

This also prints:

15

But in this case, we start the reduction with 0, and we get a plain Integer as a result. This is not an Optional because, even if the stream is empty, we can still return the identity value.

The third form of reduce() is a bit more complex. It takes three parameters: an identity value, an accumulation function, and a combiner function.

The identity value and the accumulation function serve the same purposes as they do in the second form. The combiner function is used to combine the results of the reduction when the stream is processed in parallel.

This form of reduce() is useful for parallel processing, ensuring that the reduction operation is performed correctly across multiple threads.

For example, let’s say we want to concatenate a stream of strings:

Stream<String> stream = Stream.of("a", "b", "c", "d", "e");
String concatenated = stream.reduce("", (a, b) -> a + b, (a, b) -> a + b);
System.out.println(concatenated);

This prints:

abcde

Here, the identity value is an empty string, the accumulation function concatenates two strings, and the combiner function also concatenates two strings.

In this case, the accumulator and the combiner do the same job, so the three-argument form doesn’t buy us much here. It becomes genuinely useful when the result type differs from the element type, because then the accumulator and the combiner can no longer be the same function.

For example, suppose we want to calculate the sum of the lengths of a list of strings, but we want to give extra weight to strings that start with a vowel by doubling their length:

boolean startsWithVowel(String str) {
    return str.matches("^[AEIOUaeiou].*");
}

// ...

Stream<String> stream = Stream.of("apple", "banana", "orange", "grape", "pear");

int sumOfLengths = stream.reduce(0, 
    (sum, str) -> sum + (startsWithVowel(str) ? str.length() * 2 : str.length()), 
    Integer::sum);

System.out.println(sumOfLengths);

This code prints:

37

Here, the identity value is 0, the accumulation function adds either the doubled length of a string (if it starts with a vowel) or its normal length to the running sum, and the combiner function sums two intermediate results.

In this example, the combiner function Integer::sum is important for correctly combining partial sums when the stream is processed in parallel, ensuring that the final result is accurate regardless of the order of processing.
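
As a sketch, the same reduction also produces 37 when run in parallel (assuming the startsWithVowel() helper defined above): each thread accumulates a partial sum starting from the identity value, and the combiner merges those partial sums.

int sumOfLengths = Stream.of("apple", "banana", "orange", "grape", "pear")
    .parallel()
    .reduce(0,
        (sum, str) -> sum + (startsWithVowel(str) ? str.length() * 2 : str.length()),
        Integer::sum);
System.out.println(sumOfLengths); // 37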

Collecting Results

After processing a stream, we often need to collect the results into a data structure for further use. This is where the collect() operation and the Collectors class come into play.

Using Basic Collectors

The collect() method is a terminal operation that allows us to accumulate the elements of a stream into a collection or other data structure. It takes a Collector, which specifies how the elements should be collected.

The Collectors class provides a wide variety of pre-defined collectors for common use cases. We’ve used Collectors.toList() in some of the previous examples, but let’s look at some of these collectors in more detail.

The most straightforward collectors are toList() and toSet(), which collect the elements of the stream into a List or Set, respectively:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
List<String> list = stream.collect(Collectors.toList());
System.out.println(list);

The above example prints:

[cat, dog, elephant, fox, giraffe]

If you need to collect into a specific type of collection, you can use toCollection() and provide a supplier for the collection:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
LinkedList<String> linkedList = stream.collect(Collectors.toCollection(LinkedList::new));
System.out.println(linkedList);

This collects the elements into a LinkedList.

The joining() collector allows you to concatenate the elements of a stream into a single string, optionally with a delimiter, prefix, and suffix:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
String joined = stream.collect(Collectors.joining(", "));
System.out.println(joined);

This prints:

cat, dog, elephant, fox, giraffe
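
For example, to add a prefix and a suffix as well:

Stream<String> stream = Stream.of("cat", "dog", "fox");
String joined = stream.collect(Collectors.joining(", ", "[", "]"));
System.out.println(joined); // [cat, dog, fox]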

There are also collectors for computing simple statistics over numeric values, such as counting(), summingInt(), averagingInt(), and summarizingInt():

Stream<Integer> stream1 = Stream.of(1, 2, 3, 4, 5);
long count = stream1.collect(Collectors.counting());
System.out.println(count);

Stream<Integer> stream2 = Stream.of(1, 2, 3, 4, 5);
double average = stream2.collect(Collectors.averagingInt(i -> i));
System.out.println(average);

Stream<Integer> stream3 = Stream.of(1, 2, 3, 4, 5);
int sum = stream3.collect(Collectors.summingInt(i -> i));
System.out.println(sum);

Stream<Integer> stream4 = Stream.of(1, 2, 3, 4, 5);
IntSummaryStatistics stats = stream4.collect(Collectors.summarizingInt(i -> i));
System.out.println(stats);

This is the output:

5
3.0
15
IntSummaryStatistics{count=5, sum=15, min=1, average=3.000000, max=5}

Except for counting(), these collectors come in three flavors for the three primitive types: int, long, and double.

The maxBy() and minBy() collectors allow you to find the maximum and minimum elements according to a given Comparator:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
Optional<String> max = stream.collect(Collectors.maxBy(Comparator.comparingInt(String::length)));
max.ifPresent(System.out::println);

This prints "elephant", the longest string in the stream.

Collecting into Maps

One of the most powerful features of the Collectors class is the ability to collect elements into a Map.

The simplest way to do this is with the toMap() collector, which takes two functions: one to extract the key from each element, and one to extract the value:

Stream<String> stream = Stream.of("elephant", "fox", "giraffe");
Map<Integer, String> map = stream.collect(Collectors.toMap(String::length, s -> s));
System.out.println(map);

This collects the strings into a map, using their length as the key:

{3=fox, 7=giraffe, 8=elephant}

If there are duplicate keys, the toMap() collector will throw an exception. To handle this, you can provide a merge function as a third argument:

Stream<String> stream = Stream.of("cat", "elephant", "fox", "giraffe");
Map<Integer, String> map = stream.collect(Collectors.toMap(String::length, s -> s, (s1, s2) -> s1 + "," + s2));
System.out.println(map);

Now, if multiple strings have the same length, they will be joined with a comma. This is the output of the above example:

{3=cat,fox, 7=giraffe, 8=elephant}
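
There’s also a four-argument overload of toMap() that accepts a Supplier for the Map implementation. For instance, collecting into a TreeMap keeps the keys sorted:

Stream<String> stream = Stream.of("cat", "elephant", "fox", "giraffe");
TreeMap<Integer, String> map = stream.collect(Collectors.toMap(
        String::length,
        s -> s,
        (s1, s2) -> s1 + "," + s2,
        TreeMap::new));
System.out.println(map); // {3=cat,fox, 7=giraffe, 8=elephant}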

Grouping, Partitioning, Mapping, and Teeing

The groupingBy() collector allows you to group the elements of a stream according to a classification function:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
Map<Integer, List<String>> map = stream.collect(Collectors.groupingBy(String::length));
System.out.println(map);

This groups the strings by their length:

{3=[cat, dog, fox], 7=[giraffe], 8=[elephant]}

You can also provide a downstream collector to specify how the groups should be collected:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
Map<Integer, Set<String>> map = stream.collect(Collectors.groupingBy(String::length, Collectors.toSet()));

This collects the groups into Sets instead of Lists.
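
Any collector can be used downstream. For example, to count how many strings fall into each group:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
Map<Integer, Long> map = stream.collect(
        Collectors.groupingBy(String::length, Collectors.counting()));
System.out.println(map); // {3=3, 7=1, 8=1}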

The partitioningBy() collector is a special case of groupingBy() that partitions the stream into two groups according to a predicate:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
Map<Boolean, List<String>> map = stream.collect(Collectors.partitioningBy(s -> s.length() > 5));
System.out.println(map);

This partitions the strings into those longer than 5 characters and those not longer than 5 characters. This is the result of the example above:

{false=[cat, dog, fox], true=[elephant, giraffe]}

The mapping() collector allows you to apply a function to each element before collecting the results:

Stream<String> stream = Stream.of("cat", "dog", "elephant", "fox", "giraffe");
List<Integer> list = stream.collect(Collectors.mapping(String::length, Collectors.toList()));
System.out.println(list);

This collects the lengths of the strings into a list. This is the result:

[3, 3, 8, 3, 7]

Finally, a powerful and less commonly known collector is the Collectors.teeing() collector. This collector allows you to perform two separate collection operations on a single stream and then combine their results using a merger function. This can be particularly useful when you need to perform two different operations on the same data set and then combine the outcomes in a meaningful way.

The general form of the teeing() method is as follows:

public static <T, R1, R2, R> Collector<T, ?, R> teeing(
    Collector<? super T, ?, R1> downstream1,
    Collector<? super T, ?, R2> downstream2,
    BiFunction<? super R1, ? super R2, R> merger
)

It takes three arguments:

  1. downstream1: The first collector to apply.
  2. downstream2: The second collector to apply.
  3. merger: A function that merges the results of the two collectors.

For example, let’s say we have a list of integers and we want to calculate both the sum and the count of the integers in one pass through the stream, and then combine these results into a single result.

Here’s how you can achieve this:

List<Integer> numbers = List.of(1, 2, 3, 4, 5);
var result = numbers.stream().collect(Collectors.teeing(
    Collectors.summingInt(Integer::intValue),  // First collector: Sum of the integers
    Collectors.counting(),                      // Second collector: Count of the integers
    (sum, count) -> String.format("Sum: %d, Count: %d", sum, count)  // Merger function
));

System.out.println(result);

This is the output:

Sum: 15, Count: 5

As you can see, this collector simplifies the code for complex aggregation tasks by eliminating the need for multiple passes over the stream. You can use any combination of collectors, and the merger function allows for flexible combination of the results.

Key Points

Practice Questions

1. Which of the following lines of code demonstrates the use of the Optional class to handle a potentially null value to avoid an exception?

import java.util.Optional;

public class Main {
    public static void main(String[] args) {
        String value = getValue();
        // Insert code here
    }
    
    public static String getValue() {
        return null; // This method may return null
    }
}

A) Optional<String> optional = new Optional<>(value);
B) Optional<String> optional = Optional.of(value);
C) Optional<String> optional = Optional.ofNullable(value);
D) Optional<String> optional = Optional.empty(value);
E) Optional<String> optional = Optional.nullable(value);

2. Which of the following lines of code correctly demonstrates the use of a terminal operation?

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Main {
    public static void main(String[] args) {
        List<String> list = List.of("apple", "banana", "cherry", "date");

        Stream<String> stream = list.stream()
                                    .filter(s -> s.length() > 5)
                                    .peek(System.out::println)
                                    .map(String::toUpperCase);

        // Insert terminal operation here
    }
}

A) stream.filter(s -> s.contains("A"));
B) stream.map(String::toLowerCase);
C) stream.distinct();
D) stream.limit(2);
E) stream.collect(Collectors.toList());

3. Which of the following lines of code correctly uses a primitive stream to calculate the sum of an array of integers?

import java.util.stream.IntStream;

public class Main {
    public static void main(String[] args) {
        int[] numbers = {1, 2, 3, 4, 5};

        // Insert code here to calculate sum
    }
}

A) int sum = numbers.stream().sum();
B) int sum = IntStream.range(0, numbers.length).sum();
C) int sum = IntStream.from(numbers).sum();
D) int sum = IntStream.of(numbers).sum();
E) int sum = IntStream.range(numbers).sum();

4. Which of the following lines of code correctly filters a stream to include only strings with a length greater than 3?

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Main {
    public static void main(String[] args) {
        List<String> list = List.of("one", "two", "three", "four");

        Stream<String> stream = list.stream();

        // Insert code here to filter the stream
    }
}

A) Stream<String> filteredStream = stream.filter(s -> s.length() > 3);
B) Stream<String> filteredStream = stream.map(s -> s.length() > 3);
C) Stream<String> filteredStream = stream.collect(Collectors.filtering(s -> s.length() > 3));
D) Stream<String> filteredStream = stream.filtering(s -> s.length() > 3);
E) Stream<String> filteredStream = stream.filterByLength(3);

5. Which of the following lines of code correctly maps a stream of strings to their lengths?

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Main {
    public static void main(String[] args) {
        List<String> list = List.of("apple", "banana", "cherry", "date");

        Stream<String> stream = list.stream();

        // Insert code here to map the stream
    }
}

A) Stream<String> lengthStream = stream.map(s -> s.length());
B) Stream<String> lengthStream = stream.mapToInt(s -> s.length());
C) Stream<Integer> lengthStream = stream.map(s -> s.length());
D) IntStream lengthStream = stream.map(s -> s.length());
E) Stream<String> lengthStream = stream.flatMap(s -> Stream.of(s.length()));

6. Which of the following lines of code correctly limits the stream to the first 3 elements after skipping the first 2 elements?

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Main {
    public static void main(String[] args) {
        List<String> list = List.of("one", "two", "three", "four", "five", "six");

        Stream<String> stream = list.stream();

        // Insert code here to skip and limit the stream
    }
}

A) Stream<String> resultStream = stream.skip(2).limit(3);
B) Stream<String> resultStream = stream.limit(3).skip(2);
C) Stream<String> resultStream = stream.skip(3).limit(2);
D) Stream<String> resultStream = stream.limit(2).skip(3);
E) Stream<String> resultStream = stream.slice(2, 5);

7. Which of the following lines of code correctly concatenates two streams?

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Main {
    public static void main(String[] args) {
        List<String> list1 = List.of("one", "two", "three");
        List<String> list2 = List.of("four", "five", "six");

        Stream<String> stream1 = list1.stream();
        Stream<String> stream2 = list2.stream();

        // Insert code here to concatenate the streams
    }
}

A) Stream<String> resultStream = Stream.concat(stream1, stream2.collect(Collectors.toList()));
B) Stream<String> resultStream = Stream.concat(stream1, stream2);
C) Stream<String> resultStream = stream1.concat(stream2);
D) Stream<String> resultStream = stream1.merge(stream2);
E) Stream<String> resultStream = Stream.of(stream1, stream2);

8. Which of the following lines of code uses the reduce method to correctly calculate the product of all elements in a stream of integers?

import java.util.List;
import java.util.stream.Stream;

public class Main {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);

        Stream<Integer> stream = numbers.stream();

        // Insert code here to calculate the product
    }
}

A) int product = stream.reduce(1, (a, b) -> a + b);
B) int product = stream.reduce((a, b) -> a * b);
C) int product = stream.reduce(0, (a, b) -> a * b);
D) Optional<Integer> product = stream.reduce(1, (a, b) -> a * b);
E) int product = stream.reduce(1, (a, b) -> a * b, (a, b) -> a * b);

9. Which of the following lines of code correctly collects the elements of a stream into a Set and also ensures that the original order of the elements is maintained?

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.LinkedHashSet;

public class Main {
    public static void main(String[] args) {
        List<String> list = List.of("apple", "banana", "cherry", "date");

        Stream<String> stream = list.stream();

        // Insert code here to collect the elements into a Set while maintaining order
    }
}

A) Set<String> resultSet = stream.collect(Collectors.toSet());
B) Set<String> resultSet = stream.collect(Collectors.toCollection(LinkedHashSet::new));
C) Set<String> resultSet = stream.collect(Collectors.toCollection(TreeSet::new));
D) Set<String> resultSet = stream.collect(Collectors.toList());
E) Set<String> resultSet = stream.collect(Collectors.toMap());
