Talk:Noop scheduler

Doubts about performance improvement on flash cards

I would like to raise my doubts about this scheduler being best for flash devices, since when a large number of small blocks is requested in random order, request merging is favored.

Latencies on linear reads are indeed small on flash drives, and latencies on random reads are below 2 ms, but latencies on random reads combined with writes can range from several tens to hundreds of ms.

QueueNut (talk) 17:35, 18 July 2008 (UTC) (author unknown)

Request Merging

An explanation of request merging would be of benefit to this article.

Secondly, can the above unknown author provide a reference for the performance behavior of flash drives? It would be useful to discuss this further from the same references.

QueueNut (talk) 17:35, 18 July 2008 (UTC)

nomerges

I realize it seems like a digression, but understanding the options available is important to understanding why there's still a "noop" scheduler. People need to know why nomerges exists and the functionality it does and doesn't provide so they can make an informed choice. Otherwise, the mention seems random and doesn't help the reader understand when they would want to use nomerges and when they would want to use noop. 24.211.228.93 (talk) 18:00, 14 December 2014 (UTC)
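For context, both knobs discussed here are plain files under sysfs. Below is a minimal sketch for inspecting them, assuming a hypothetical device named sda; the value meanings for nomerges come from the kernel's queue-sysfs documentation (0 = all merging enabled, 1 = only simple one-hit merges attempted, 2 = no merging attempted at all).

    # Minimal sketch: read the active I/O scheduler and merge policy for a
    # block device. "sda" is a placeholder; reading does not require root.
    from pathlib import Path

    queue = Path("/sys/block/sda/queue")

    # The active scheduler is shown in square brackets, e.g. "[noop] deadline cfq".
    print("scheduler:", (queue / "scheduler").read_text().strip())

    # nomerges: 0 = all merging enabled, 1 = only simple one-hit merges
    # attempted, 2 = no merging attempted at all.
    print("nomerges:", (queue / "nomerges").read_text().strip())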

Hello! On second thought, I agree that it should be explained further. Will clean it up and restore the reference you've deleted. — Dsimic (talk | contribs) 19:58, 14 December 2014 (UTC)
I'm doubtful of this claim. I fail to see how merging requests benefits, for example, VM guests or SAN volumes. The guest OS scanning for possible merges at best duplicates the effort of whatever system actually has physical access to the disks, and at worst harms performance by merging requests that are physically separated (such as in a LUN that has been expanded, where the given I/O happens to cross that boundary). I've also found a few reputable places online where nomerges is used in this way:
I'm not somewhere I can really test this at the moment to get numbers myself, but even if I did, it would probably be considered original research. I'm not sure how to get resolution on this. I've left that portion of the article intact until then, though. 152.16.10.234 (talk) 15:15, 15 December 2014 (UTC)
Right, Wikipedia is all about summing up reliable sources. I just had a look at the links you've provided above, and I'm not sure what the "Violin SSD devices" mentioned in the symantec.com link actually are. As for the rackspace.com link, it shows how SSDs are suggested to be configured, but it would have been much better with some performance benchmarks to support that configuration choice.
At the same time, neither source above disables request merging entirely, as they both set nomerges to 1; thus, it seems that some request merging does provide benefits even to SSDs. With that in mind, we could use those two sources to support the claim that the majority of I/O workloads and underlying devices benefit from a certain level of request merging, while mentioning that SSDs usually don't benefit from more complex I/O merging.
Here are a few more sources that we might use:
Thoughts? I'm more than happy to discuss this further :) so the article content ends up as accurate as possible. — Dsimic (talk | contribs) 16:31, 15 December 2014 (UTC)
Violin is just a brand of SSD. The article was probably written in response to a particular customer complaint, and since that customer happened to be using Violin it got included, even though from what I can tell it's irrelevant to the larger topic. I was just linking those two as reliable sources that turn off merging specifically to boost performance, which implies to me that Symantec and Rackspace have tested this procedure and had beneficial results. I suppose it's still possible to operate under the assumption that the bulk of I/O will be stored somewhat sequentially, so the lower layer in the storage stack won't need to reorder the I/O operations. That may be why they used a "1" rather than a "2" (so the scheduler would only do a quick pass through the queue). It's also valid to assume that, whatever the storage medium is, reducing the number of I/O operations you have to transmit to the lower layer will improve performance related to communication or processing delays. In light of this, I'd be willing to call it valid if we just changed it to say "some" request merging is usually beneficial, and suggested that either 0 or 1 be used for nomerges unless you're attempting to simulate something for benchmarks or performing diagnostics (as written in the original Red Hat link). 152.16.10.234 (talk) 19:27, 15 December 2014 (UTC)
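As a hedged illustration of the setting those sources use (a sketch only; "sda" is again a placeholder device, and writing to sysfs requires root):

    # Sketch: keep simple one-hit merging but skip the more expensive merge
    # lookups, as the sources above do. Requires root; "sda" is a placeholder.
    from pathlib import Path

    nomerges = Path("/sys/block/sda/queue/nomerges")
    nomerges.write_text("1")  # "2" would disable request merging entirely
    print("nomerges is now", nomerges.read_text().strip())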
Totally agreed, having fewer I/O operations is always better no matter what; it's just a question of where (or when) to stop optimizing/merging. It might be better not to specify or suggest any values for the nomerges parameter, as anyone interested in such details is going to look into the references anyway. Also, WP:NOTMANUAL might apply, but not necessarily; the trouble is that we have none of NOOP's configuration parameters covered yet, so covering nomerges in that way would somehow stick out. However, I might be splitting hairs there. :) Went ahead and touched up the article, please have a look. — Dsimic (talk | contribs) 23:23, 15 December 2014 (UTC)