Talk:Benchmark (computing)


Hardware vs Software Benchmark

I think there are two types of benchmarks in computer science: hardware benchmarks and algorithm benchmarks. The article mainly focuses on the first, but what about algorithm or software benchmarking? It does not focus on comparing hardware but on software performance. In academia, this is very important: [1] — Preceding unsigned comment added by Ct2034 (talkcontribs) 10:21, 12 August 2017 (UTC)

Workload

I am missing a precise definition of workload in terms of computing. The Wikipedia page for the word "workload" is not about computing, and this page uses the term without defining it first. —Preceding unsigned comment added by 129.88.43.111 (talk) 08:07, 21 April 2011 (UTC)

IBM's LTP benchmark

This is a pretty good page, but perhaps IBM's LTP benchmark suite should be mentioned in the open source list.

64.172.115.2 17:19, 13 June 2006 (UTC) Rich

This is Wikipedia, so you know what to do: be bold and edit the page to contain that information!
Atlant 17:37, 13 June 2006 (UTC)

Added a link to what appears to be a very comprehensive database of CPU benchmarks. I also added the company name and its benchmarking program to the list. I didn't add a link to the site because I wasn't sure whether that was proper.

Maybe some examples of each type of benchmark would make things clearer.

190.42.182.236 07:02, 22 September 2007 (UTC)

The examples for each type could go in the "Common benchmarks" section, by listing the mentioned benchmarks by type. It's a tedious job, I know; I'd do it if I knew more about the subject. 190.42.95.57 (talk) 04:11, 22 November 2007 (UTC)


This is missing some very relevant Linux kernel benchmarks (IMHO), like hackbench or starvation-free tests. They may be neither rigorously scientific nor cross-platform, but those kinds of tests are among the standard measures for recent kernel development. —Preceding unsigned comment added by 200.82.69.160 (talk) 06:44, 29 December 2007 (UTC)

Microbenchmarks

Microbenchmarks redirect here, but it is not clear that they are exactly the same thing. Should they have a page of their own, or at least something defining the difference? JBrusey (talk) 14:07, 17 March 2009 (UTC)
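For what it's worth, the usual distinction is one of scope: a microbenchmark times a single small operation in isolation, while a benchmark proper runs a representative workload. A minimal sketch in Python (the choice of operation and repeat count here is arbitrary, purely for illustration):

```python
import timeit

# Microbenchmark: time one small operation (here, a list append) in
# isolation, repeated many times to get a stable per-call figure.
repeats = 1_000_000
total = timeit.timeit(stmt="xs.append(1)", setup="xs = []", number=repeats)
print(f"{total / repeats * 1e9:.1f} ns per append")
```

Because a single call is far below timer resolution, the operation is repeated many times and the total is divided back down.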

Downgraded the article

A "B-class" article this large requires more than the three references currently in it, each cited only once in the entire article. There are whole sections with no references. § Music Sorter § (talk) 07:54, 14 July 2011 (UTC)

ioblazer

Google gives me 1,400 hits; essentially the only interest in this is from its developer. There are no independent reviews. It is unlikely to be notable in the near future. TEDickey (talk) 16:48, 6 April 2014 (UTC)

Benchmark Cheating

Looks like it's starting again for phones: https://www.xda-developers.com/benchmark-cheating-strikes-back-how-oneplus-and-others-got-caught-red-handed-and-what-theyve-done-about-it/ 70.49.130.192 (talk) 06:55, 2 February 2017 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Benchmark (computing). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 23:50, 17 July 2017 (UTC)

Missing aspect

The article is missing an important aspect of benchmarks. I use benchmarks to measure and improve algorithms. Some algorithms have no exactly defined output value, for example in image processing, so you have to measure not only performance but also the quality of the output (e.g. by checking how close the algorithm's output is to a human reference). The only part of the article that goes in that direction is under "challenges":

"Examples of unmeasured qualities of service include security, availability, reliability, execution integrity, serviceability, scalability"

But none of these qualities describes the challenge I mentioned. Is there a separate article on that topic, or am I missing something else?

pluckerwank (talk) 08:59, 9 November 2017 (UTC)
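To illustrate the point above: a benchmark for such algorithms has to report two numbers, runtime and output quality relative to a reference. A hypothetical Python sketch, where the identity "algorithm", the sample data, and mean squared error as the quality metric are all stand-ins for illustration, not anything from the article:

```python
import time

def mean_squared_error(output, reference):
    # Quality metric: lower is better; 0 means the output matches the reference.
    return sum((o - r) ** 2 for o, r in zip(output, reference)) / len(reference)

def benchmark(algorithm, data, reference):
    # Measure performance (wall-clock time) and output quality together.
    start = time.perf_counter()
    output = algorithm(data)
    elapsed = time.perf_counter() - start
    return elapsed, mean_squared_error(output, reference)

# Hypothetical run: a no-op "algorithm" scored against a human reference.
data = [0.1, 0.5, 0.9]
reference = [0.1, 0.4, 0.9]
elapsed, quality = benchmark(lambda xs: xs, data, reference)
print(f"time: {elapsed:.6f} s, MSE vs reference: {quality:.4f}")
```

The single score could also be a weighted combination of the two, but reporting them separately keeps the speed/quality trade-off visible.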

MIPS

Should this article mention that MIPS is sometimes used for actual instruction rates, but more often as a benchmark value? In the early days, especially with word-addressed machines, the amount of computation done by a single instruction was about the same between machines, so instruction rates made some sense. Later on, as ratios between machines, MIPS was used without the connection to actual instruction counts. Gah4 (talk) 23:51, 1 January 2020 (UTC)
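To make the arithmetic above concrete (the figures below are purely illustrative, not from any real machine): MIPS is just instructions executed divided by elapsed time, scaled to millions, which is why it compares machines meaningfully only when the work done per instruction is similar:

```python
def mips(instructions_executed, seconds):
    # Million Instructions Per Second: a raw instruction rate, with no
    # account of how much work each instruction performs.
    return instructions_executed / seconds / 1e6

# Illustrative figures only: 50 billion instructions retired in 20 s.
print(mips(50_000_000_000, 20.0))  # prints 2500.0
```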

Outdated sources

This article requires a major revision to bring it up to date, addressing the following issues. The provided definitions and discussion are rather outdated and inaccurate. The article assumes a very narrow scope of benchmarking, missing the developments of the last two decades. References are outdated; e.g., the cited definition of benchmarking (from 1986) is rather narrow, and the referenced textbook on the topic is from 1993, although up-to-date textbooks focused on the topic are available. The scope of benchmarking has in the meantime expanded to cover: (1) further system attributes (e.g., energy efficiency, reliability, resilience, or security) in addition to classical performance aspects, and (2) further application scenarios (rating tools, research benchmarks). The provided classification of benchmarks is incomplete, inaccurate, and misleading. The provided list of example common benchmarks is mostly deprecated. — Preceding unsigned comment added by 2A02:810D:ABC0:C9E0:501E:1F2A:CDEE:A3C5 (talk) 13:46, 9 March 2021 (UTC)

Canned benchmarks??

It may be that I misheard and it's not "canned" but something similar-sounding. It seems to refer to benchmarking with games or benchmarking games, specifically "[it's not a problem] scripting performance testing for games that include canned benchmarks". I found one mention of it in text online here. It seems quite important for automating benchmarking, so if somebody knows what that is about, it would be good to add it to the article. Setenzatsu.2 (talk) 07:10, 14 June 2023 (UTC)