Talk:Vector quantization

From Wikipedia, the free encyclopedia


"Some math"

An expression occurring in existential sentences. "For some x" is the same as "there exists x." Unlike in everyday language, it does not necessarily refer to a plurality of elements, and so might be more clearly represented in colloquial English as "for at least one." (Turkialjrees (talk) 16:44, 14 March 2015 (UTC)).

During some of my college courses I got some math that could be nice to have on this page, only I don't have enough mathematical background to prove the used maths.

The Math

Create a set of prototypes $M = \{m_1, \ldots, m_k\}$ and the data $X = \{x_1, \ldots, x_n\}$.

By using the squared Euclidean distance $d(x, m_j) = \lVert x - m_j \rVert^2$ we can determine the multidimensional distance between a prototype and a data point. Based on this we can find the closest prototype to a given data point and assign $x$ to prototype $m_{j^*}$, where $j^* = \arg\min_j d(x, m_j)$.

This way the winner takes it all, and the closest prototype should be moved using:

$m_{j^*} \leftarrow m_{j^*} + \alpha \, (x - m_{j^*})$

where $\alpha$ is the learning rate.

Spidfire (talk) 15:29, 31 January 2013 (UTC)
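The winner-take-all update described above can be sketched in Python. This is only an illustration, not an established reference implementation: the toy data, prototype starting points, learning rate, and epoch count below are all made up.

```python
import numpy as np

def nearest_prototype(x, prototypes):
    """Index of the prototype closest to x in squared Euclidean distance."""
    dists = np.sum((prototypes - x) ** 2, axis=1)
    return int(np.argmin(dists))

def train_vq(data, prototypes, learning_rate=0.1, epochs=10):
    """Winner-take-all training: only the closest prototype moves toward each point."""
    prototypes = prototypes.copy()
    for _ in range(epochs):
        for x in data:
            j = nearest_prototype(x, prototypes)
            # m_j* <- m_j* + alpha * (x - m_j*)
            prototypes[j] += learning_rate * (x - prototypes[j])
    return prototypes

# Two well-separated 2-D clusters; each prototype should drift to its cluster centre.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
protos = train_vq(data, np.array([[1.0, 1.0], [4.0, 4.0]]))
```

After training, the first prototype ends up near (0, 0) and the second near (5, 5), i.e. each settles close to the mean of the points it wins.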

Untitled

also want to see pictures —Preceding unsigned comment added by 138.246.7.74 (talk) 13:50, 15 July 2010 (UTC)

Damn. This article made me feel dumb. --NoPetrol 06:41, 24 Nov 2004 (UTC)

I have modified the article to give a clear explanation of what vector quantization is, together with some uses for it. It still needs tidying up and referencing. Pog 21:46, 1 August 2007 (UTC)

Unclear sentence

"Find the quantization vector centroid with the smallest <distance-sensitivity>"

What does "<distance-sensitivity>" mean? Does it mean sensitivity? Or does it mean distance minus sensitivity? -Pgan002 00:17, 18 August 2007 (UTC)

I expanded it as distance minus sensitivity. But I think this is not a very good algorithm, and it may have been original research. So I added citation-needed because we need an established algorithm from e.g. some book. — Preceding unsigned comment added by 213.16.80.50 (talk) 14:42, 8 November 2016 (UTC)

Spam

Why the hell is there a picture of an aeroplane on this page? —Preceding unsigned comment added by Criffer (talkcontribs) 16:24, 11 October 2007 (UTC)

Definition

Is there a kind of agreed definition of this term? At least [1] attempts to define it. Should Wikipedia adopt this definition? Are there alternative definitions somewhere? Arkadi kagan (talk) 21:11, 25 January 2010 (UTC)

Another option from [2]:

A data compression technique in which a finite sequence of values is presented as resembling the template (from among the choices available to a given codebook) that minimizes a distortion measure.

Arkadi kagan (talk) 08:38, 28 January 2010 (UTC)
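Read operationally, the definition quoted above amounts to nearest-template mapping under a distortion measure. A minimal sketch, assuming squared error as the distortion and an arbitrary made-up 1-D codebook:

```python
import numpy as np

# Hypothetical codebook of three templates; each sample is replaced
# by the template that minimizes squared-error distortion.
codebook = np.array([-1.0, 0.0, 1.0])

def quantize(samples, codebook):
    """Map each sample to the codebook entry with the smallest squared error."""
    idx = np.argmin((samples[:, None] - codebook[None, :]) ** 2, axis=1)
    return codebook[idx]

signal = np.array([-0.9, 0.2, 0.8, -0.1])
quantized = quantize(signal, codebook)  # nearest templates: [-1., 0., 1., 0.]
```

Compression comes from transmitting the indices (here, 2 bits per sample) instead of the raw values; the decoder looks the templates up in the same codebook.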

Use in data compression

"All possible combinations of the N-dimensional vector [y1,y2,...,yn] form the Gaurav."

What the hell is a Gaurav?

Secondly, even if there is a correct technical term for all possible combinations of an N-dimensional vector, it is completely out of context in that particular article. It should be removed, or corrected and given a context. —Preceding unsigned comment added by 198.151.130.16 (talk) 21:46, 1 April 2011 (UTC)

Where is a block diagram?

From the article: "Block Diagram: A simple vector quantizer is shown below". Huh? Where is it? Cuddlyable3 (talk) 09:15, 7 June 2011 (UTC)

Each cluster the same number of points?!

"It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them."

This is not true, is it? E.g. clustering 1-d normally distributed data (10k samples) with k-means (6 clusters) results in groups with very different numbers of points assigned to each group (700 to 2400). I would not call this difference "approximately the same". Or am I missing something?

VERY approximate

From my limited experience, it seems most groups will have similar numbers, but a few groups (clusters) will have very few or very many elements assigned to them. So most clusters (maybe 60~80%) will have a similar number of elements, but the remainder will have very few or very many. Hydradix (talk) 04:53, 13 October 2014 (UTC)
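The imbalance described in this thread is easy to check empirically. A small sketch with a hand-rolled Lloyd's k-means on 1-D Gaussian data (the sample size, k, seeds, and iteration count are arbitrary choices for illustration):

```python
import numpy as np

def kmeans_1d(data, k, iters=50, seed=0):
    """Plain Lloyd's k-means on 1-D data; returns centroids and cluster sizes."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(data, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean()
    # Final assignment after the last centroid update.
    labels = np.argmin(np.abs(data[:, None] - centroids[None, :]), axis=1)
    sizes = np.bincount(labels, minlength=k)
    return centroids, sizes

rng = np.random.default_rng(1)
samples = rng.normal(size=10_000)
centroids, sizes = kmeans_1d(samples, k=6)
# On Gaussian data the tail clusters end up noticeably smaller than the central ones.
```

This reproduces the effect the commenters describe: the sizes sum to 10,000 but are far from equal, because the density (not the count) of points drives where centroids settle.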

No mention of LBG or other methods

Article's "alternate training" method seems biased towards simulated annealing. No mention is made at all of the Linde–Buzo–Gray algorithm, which is a fundamental starting point for most VQ implementations and is the most widely cited paper in VQ work. No mention is made of PNN (Pairwise Nearest Neighbor) or other codebook generation methods either. --Trixter (talk) 19:49, 26 August 2013 (UTC)

Agreed! The LBG algorithm is fundamental for the topic, Vector Quantization. This, and other codebook generation methods, need to be referenced/linked. Although I have some experience with VQ, I am not an expert in VQ, so am not confident to update the page... Hydradix (talk) 07:43, 5 October 2014 (UTC)
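For readers following this thread, the splitting idea behind LBG can be sketched roughly as follows. This is a simplified illustration, not the algorithm from the Linde–Buzo–Gray paper: the perturbation factor, toy data, and iteration counts are made up, and the usual distortion-based stopping test is omitted.

```python
import numpy as np

def lloyd(data, codebook, iters=20):
    """Lloyd refinement: nearest-neighbour partition, then centroid update."""
    for _ in range(iters):
        d = np.sum((data[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
        labels = np.argmin(d, axis=1)
        for j in range(len(codebook)):
            if np.any(labels == j):
                codebook[j] = data[labels == j].mean(axis=0)
    return codebook

def lbg(data, size, eps=1e-2):
    """Grow the codebook by splitting every centroid, then refining with Lloyd."""
    codebook = data.mean(axis=0, keepdims=True)  # start from the global centroid
    while len(codebook) < size:
        # Split each centroid into a slightly perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        codebook = lloyd(data, codebook)
    return codebook

# Four tight 2-D clusters along the diagonal; LBG should find all four centres.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(c, 0.1, (100, 2)) for c in (0.0, 3.0, 6.0, 9.0)])
cb = lbg(data, size=4)
```

Doubling the codebook at each stage is why LBG codebooks typically have power-of-two sizes.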

update

I decided to be bold, and added in-page links to LBG and K-Means... I also added LBG to the References.... I tried/wanted to add Enhanced LBG to External References, but when I tried Wikipedia Preview the link would always fail (http://anale-informatica.tibiscus.ro/download/lucrari/2-1-02-balint.pdf) so ELBG was not referenced. — Preceding unsigned comment added by Hydradix (talkcontribs) 08:34, 5 October 2014 (UTC)

Article is too technical and abstract

I have no mathematical background. Despite my interest in signal processing, I didn't understand a word of the lede and used external information to add a sentence for the mortals among us. Once I gain a good understanding of the topic, I will update the article with more understandable information. --Holzklöppel (talk) 09:32, 11 October 2023 (UTC)