This blog should clear up, at least to some extent, the confusion around Spark Streaming and Flink streaming.
In Spark Streaming, we configure a batch interval for near-realtime micro-batch processing.
Spark needs to schedule a new job for each micro-batch it processes. This scheduling overhead is quite high, so Spark cannot efficiently handle very low batch intervals such as 100 ms or 50 ms, and throughput drops for such small batches.
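As a minimal sketch of the micro-batch model, here is a Spark Streaming (DStream API) job in Scala. The batch interval passed to the `StreamingContext` is the knob discussed above: every interval, Spark schedules a fresh job over the records collected in that window. The socket source and port are hypothetical, chosen only for illustration.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MicroBatchSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MicroBatchSketch").setMaster("local[2]")

    // Batch interval of 1 second: a new Spark job is scheduled every second,
    // which is where the per-batch scheduling overhead comes from.
    val ssc = new StreamingContext(conf, Seconds(1))

    // Hypothetical source: lines of text arriving on a local socket.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.map(_.toUpperCase).print()

    ssc.start()
    ssc.awaitTermination() // runs batch after batch until stopped
  }
}
```

Shrinking `Seconds(1)` toward tens of milliseconds is exactly where the scheduling overhead starts to dominate.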
Flink is a true streaming system: it deploys the job only once at startup (and the job runs continuously until explicitly shut down by the user), so it can handle each individual input record with very little overhead and very low latency.
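For contrast, here is a sketch of the equivalent job in Flink's DataStream API (Scala). The job graph is deployed once at `execute()`; after that, operators process each record as it arrives, with no per-batch job scheduling. Again, the socket source and port are hypothetical.

```scala
import org.apache.flink.streaming.api.scala._

object ContinuousStreamingSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Hypothetical source: lines of text arriving on a local socket.
    val lines = env.socketTextStream("localhost", 9999)

    // Each record flows through the operator the moment it arrives.
    lines.map(_.toUpperCase).print()

    // Deploys the job graph once; it then runs until cancelled.
    env.execute("ContinuousStreamingSketch")
  }
}
```

Note there is no batch interval anywhere in the program: record-at-a-time processing is the default execution model.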
Furthermore, JVM main memory is not a limitation for Flink: it can use off-heap memory and also spill to disk when main memory is too small.
Recent Spark versions, with Project Tungsten, can also use off-heap memory and spill to disk to some extent, whereas older versions were largely limited to JVM heap memory.
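As a concrete illustration of the memory options mentioned above, these are the relevant configuration knobs on each side (the sizes are example values, not recommendations). For Spark, Tungsten's off-heap allocation is opt-in:

```properties
# spark-defaults.conf: enable Tungsten off-heap allocation for Spark.
spark.memory.offHeap.enabled   true
spark.memory.offHeap.size      2g
```

For Flink, managed memory (used for sorting, caching, and state backends) is allocated off-heap by default in recent versions:

```yaml
# flink-conf.yaml: size of Flink's managed (off-heap) memory per TaskManager.
taskmanager.memory.managed.size: 2gb
```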
Please share your views in the comments section.