Ball covariance is a statistical measure that can be used to test the independence of two random variables defined on metric spaces.[1] The ball covariance is zero if and only if the two random variables are independent, making it a genuine measure of dependence. Its significance lies in providing an alternative measure of independence in metric spaces: previously, distance covariance in metric spaces[2] could characterize independence only for metrics of strong negative type, whereas ball covariance can detect dependence under any distance measure.
Ball covariance uses a permutation test to calculate the p-value. The ball covariance is first computed for the two sets of paired samples; the pairing is then destroyed by randomly permuting one sample many times, the statistic is recomputed for each permutation, and the p-value is the proportion of permuted statistics at least as large as the observed one.
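The permutation scheme just described can be sketched as follows. This is a minimal illustration, not the reference implementation: `statistic(x, y)` stands for any scalar dependence measure (for ball covariance it would be the sample statistic defined later in the article), and the function name and the add-one correction are illustrative choices.

```python
import numpy as np

def permutation_pvalue(statistic, x, y, num_perm=199, rng=None):
    """Permutation p-value for a scalar dependence statistic.

    statistic(x, y) can be any dependence measure for which larger
    values indicate stronger dependence; it is a placeholder here.
    """
    rng = np.random.default_rng(rng)
    observed = statistic(x, y)
    exceed = 0
    for _ in range(num_perm):
        # break the pairing by permuting one sample
        y_perm = y[rng.permutation(len(y))]
        if statistic(x, y_perm) >= observed:
            exceed += 1
    # add-one correction keeps the p-value strictly positive
    return (exceed + 1) / (num_perm + 1)
```

With 199 permutations the smallest attainable p-value is 1/200, so the number of permutations bounds the achievable significance level.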
Correlation, as a fundamental concept of dependence in statistics, has been extensively developed in Hilbert spaces, exemplified by the Pearson correlation coefficient[3], the Spearman correlation coefficient[4], and Hoeffding's dependence measure[5]. In recent years, however, many fields have required measuring the dependence or independence of complex objects, for example in medical imaging, computational biology, and computer vision. Examples of complex objects include Grassmann manifolds, planar shapes, tree-structured data, matrix Lie groups, deformation fields, symmetric positive definite (SPD) matrices, and shape representations of cortical and subcortical structures. These objects mostly live in non-Hilbert spaces and are inherently nonlinear and high-dimensional (or even infinite-dimensional), so traditional statistical techniques developed in Hilbert spaces may not apply to them directly. Analyzing objects that reside in non-Hilbert spaces therefore poses significant mathematical and computational challenges.
A groundbreaking earlier work on independence testing in metric spaces was the distance covariance in metric spaces proposed by Lyons (2013)[2]. That statistic equals zero if and only if the random variables are independent, provided the metric space is of strong negative type. Testing the independence of random variables in spaces that do not satisfy the strong negative type condition, however, required new tools.
Next, we will introduce ball covariance in detail, starting with the definition of a ball. Suppose two Banach spaces $(\mathcal{X}, \|\cdot\|_{\mathcal{X}})$ and $(\mathcal{Y}, \|\cdot\|_{\mathcal{Y}})$, where the norms $\|\cdot\|_{\mathcal{X}}$ and $\|\cdot\|_{\mathcal{Y}}$ also represent their induced distances. Let $\theta$ be a Borel probability measure on $\mathcal{X}\times\mathcal{Y}$, $\mu$ and $\nu$ be two Borel probability measures on $\mathcal{X}$ and $\mathcal{Y}$, and $(X,Y)$ be a $\mathcal{X}\times\mathcal{Y}$-valued random variable defined on a probability space $(\Omega,\mathcal{F},P)$ such that $(X,Y)\sim\theta$, $X\sim\mu$, and $Y\sim\nu$. Denote the closed ball with the center $x_1$ and the radius $\|x_1-x_2\|_{\mathcal{X}}$ in $\mathcal{X}$ as $\bar{B}(x_1,\|x_1-x_2\|_{\mathcal{X}})$ or $\bar{B}_{x_1,x_2}$, and the closed ball with the center $y_1$ and the radius $\|y_1-y_2\|_{\mathcal{Y}}$ in $\mathcal{Y}$ as $\bar{B}(y_1,\|y_1-y_2\|_{\mathcal{Y}})$ or $\bar{B}_{y_1,y_2}$. Let $\{(X_i,Y_i)\}_{i\ge 1}$ be an infinite sequence of iid samples of $(X,Y)$, and $\hat{\omega}$ be a positive weight function on the support set of $\theta$. Then, the population ball covariance can be defined as follows:

$$\mathbf{BCov}^2_{\omega}(X,Y)=\int \left[\theta-\mu\otimes\nu\right]^2\left(\bar{B}_{x_1,x_2}\times\bar{B}_{y_1,y_2}\right)\hat{\omega}(x_1,y_1,x_2,y_2)\,\theta(dx_1,dy_1)\,\theta(dx_2,dy_2),$$

where $\hat{\omega}(x_1,y_1,x_2,y_2)=\hat{\omega}_{\mathcal{X}}(x_1,x_2)\,\hat{\omega}_{\mathcal{Y}}(y_1,y_2)$ for $x_1,x_2\in\mathcal{X}$ and $y_1,y_2\in\mathcal{Y}$.
Next, we will introduce another form of the population ball covariance. Suppose $\delta^X_{12,3}=I\left(X_3\in\bar{B}_{X_1,X_2}\right)$, which indicates whether $X_3$ is located in the closed ball $\bar{B}_{X_1,X_2}$. Then, let $\delta^{X,Y}_{12,3}=\delta^X_{12,3}\,\delta^Y_{12,3}$, which indicates whether both $X_3$ and $Y_3$ are located in $\bar{B}_{X_1,X_2}$ and $\bar{B}_{Y_1,Y_2}$, where $\delta^Y_{12,3}=I\left(Y_3\in\bar{B}_{Y_1,Y_2}\right)$. The indicators $\delta^X_{12,k}$, $\delta^Y_{12,k}$, and $\delta^{X,Y}_{12,k}$ are defined analogously for $k=4,5,6$. Then, let $(X_1,Y_1),\dots,(X_6,Y_6)$ be iid samples from $\theta$. Another form of the population ball covariance can be shown as

$$\mathbf{BCov}^2_{\omega}(X,Y)=E\left[\left(\delta^{X,Y}_{12,3}-\delta^X_{12,3}\,\delta^Y_{12,4}\right)\left(\delta^{X,Y}_{12,5}-\delta^X_{12,5}\,\delta^Y_{12,6}\right)\hat{\omega}_{\mathcal{X}}(X_1,X_2)\,\hat{\omega}_{\mathcal{Y}}(Y_1,Y_2)\right].$$
Now, we can finally express the sample ball covariance. Consider the random sample $\{(X_i,Y_i)\}_{i=1}^n$, and write $\delta^X_{ij,k}=I\left(X_k\in\bar{B}_{X_i,X_j}\right)$ and $\delta^Y_{ij,k}=I\left(Y_k\in\bar{B}_{Y_i,Y_j}\right)$. Denote

$$\Delta^{X,Y}_{ij,n}=\frac{1}{n}\sum_{k=1}^n \delta^X_{ij,k}\,\delta^Y_{ij,k},\qquad \Delta^{X}_{ij,n}=\frac{1}{n}\sum_{k=1}^n \delta^X_{ij,k},\qquad \Delta^{Y}_{ij,n}=\frac{1}{n}\sum_{k=1}^n \delta^Y_{ij,k},$$

which are the empirical estimates of $\theta\left(\bar{B}_{X_i,X_j}\times\bar{B}_{Y_i,Y_j}\right)$, $\mu\left(\bar{B}_{X_i,X_j}\right)$, and $\nu\left(\bar{B}_{Y_i,Y_j}\right)$, respectively. The sample ball covariance is

$$\mathbf{BCov}^2_{\omega,n}(X,Y)=\frac{1}{n^2}\sum_{i,j=1}^n \left(\Delta^{X,Y}_{ij,n}-\Delta^{X}_{ij,n}\,\Delta^{Y}_{ij,n}\right)^2 \hat{\omega}_{\mathcal{X}}(X_i,X_j)\,\hat{\omega}_{\mathcal{Y}}(Y_i,Y_j).$$
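Under the constant-weight choice $\hat{\omega}\equiv 1$, the sample statistic can be computed directly from the two pairwise distance matrices. The following NumPy sketch (function and variable names are illustrative) follows the definition term by term:

```python
import numpy as np

def sample_ball_cov(dist_x, dist_y):
    """Sample ball covariance with constant weights (omega = 1).

    dist_x, dist_y: (n, n) pairwise distance matrices of the paired
    samples {X_i} and {Y_i}, computed in their respective metric spaces.
    """
    # delta_x[i, j, k] = 1 if X_k lies in the closed ball centred at
    # X_i with radius d(X_i, X_j); analogously for delta_y
    delta_x = (dist_x[:, None, :] <= dist_x[:, :, None]).astype(float)
    delta_y = (dist_y[:, None, :] <= dist_y[:, :, None]).astype(float)
    joint = (delta_x * delta_y).mean(axis=2)  # Delta^{X,Y}_{ij,n}
    marg_x = delta_x.mean(axis=2)             # Delta^X_{ij,n}
    marg_y = delta_y.mean(axis=2)             # Delta^Y_{ij,n}
    # average of (Delta^{X,Y} - Delta^X * Delta^Y)^2 over all pairs (i, j)
    return ((joint - marg_x * marg_y) ** 2).mean()
```

Because any distance matrices can be supplied, the same code applies to samples from arbitrary metric spaces; for dependent pairs the statistic is noticeably larger than for independent ones, which is what the permutation test exploits.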
Just like the relationship between the Pearson correlation coefficient and covariance, we can define the ball correlation coefficient through ball covariance. The ball correlation is defined as the square root of

$$\mathbf{BCor}^2_{\omega}(X,Y)=\frac{\mathbf{BCov}^2_{\omega}(X,Y)}{\sqrt{\mathbf{BCov}^2_{\omega}(X,X)\,\mathbf{BCov}^2_{\omega}(Y,Y)}},$$

where the ball variances $\mathbf{BCov}^2_{\omega}(X,X)$ and $\mathbf{BCov}^2_{\omega}(Y,Y)$ are obtained by applying the definition of ball covariance to the pairs $(X,X)$ and $(Y,Y)$, respectively.
The sample ball correlation is defined similarly,

$$\mathbf{BCor}^2_{\omega,n}(X,Y)=\frac{\mathbf{BCov}^2_{\omega,n}(X,Y)}{\sqrt{\mathbf{BCov}^2_{\omega,n}(X,X)\,\mathbf{BCov}^2_{\omega,n}(Y,Y)}},$$

where the sample ball variances $\mathbf{BCov}^2_{\omega,n}(X,X)$ and $\mathbf{BCov}^2_{\omega,n}(Y,Y)$ are obtained by applying the sample ball covariance to $\{(X_i,X_i)\}_{i=1}^n$ and $\{(Y_i,Y_i)\}_{i=1}^n$, respectively.
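The normalisation above can be sketched in a few lines, reusing the constant-weight sample ball covariance from its distance-matrix form (helper names are illustrative, and returning zero for a vanishing denominator is a convention chosen here, not prescribed by the definition):

```python
import numpy as np

def _bcov(dx, dy):
    # sample ball covariance from (n, n) distance matrices, constant weights
    ind_x = (dx[:, None, :] <= dx[:, :, None]).astype(float)
    ind_y = (dy[:, None, :] <= dy[:, :, None]).astype(float)
    return (((ind_x * ind_y).mean(2) - ind_x.mean(2) * ind_y.mean(2)) ** 2).mean()

def sample_ball_cor(dx, dy):
    """Sample ball correlation: ball covariance normalised by ball variances."""
    denom = np.sqrt(_bcov(dx, dx) * _bcov(dy, dy))
    # convention: define the correlation as 0 when a ball variance vanishes
    return _bcov(dx, dy) / denom if denom > 0 else 0.0
```

By construction, the sample ball correlation of a sample with itself equals one, mirroring the behaviour of the Pearson correlation coefficient.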
1. Independence-zero equivalence property: Let $\operatorname{supp}(\theta)$, $\operatorname{supp}(\mu)$, and $\operatorname{supp}(\nu)$ denote the support sets of $\theta$, $\mu$, and $\nu$, respectively. $\mathbf{BCov}^2_{\omega}(X,Y)=0$ implies that $X$ and $Y$ are independent if one of the following conditions holds:
(a) $\mathcal{X}\times\mathcal{Y}$ is a finite-dimensional Banach space with $\operatorname{supp}(\theta)=\operatorname{supp}(\mu)\times\operatorname{supp}(\nu)$.
(b) $\theta=\alpha\theta_1+\beta\theta_2$, where $\alpha$ and $\beta$ are positive constants, $\theta_1$ is a discrete measure, and $\theta_2$ is an absolutely continuous measure with a continuous Radon–Nikodym derivative with respect to the Gaussian measure.
2. Cauchy–Schwarz type inequality:

$$\left[\mathbf{BCov}^2_{\omega}(X,Y)\right]^2\le \mathbf{BCov}^2_{\omega}(X,X)\,\mathbf{BCov}^2_{\omega}(Y,Y).$$
3. Consistency: If $\hat{\omega}_{\mathcal{X},n}$ and $\hat{\omega}_{\mathcal{Y},n}$ uniformly converge to $\hat{\omega}_{\mathcal{X}}$ and $\hat{\omega}_{\mathcal{Y}}$, respectively, we have $\mathbf{BCov}^2_{\omega,n}(X,Y)\xrightarrow{a.s.}\mathbf{BCov}^2_{\omega}(X,Y)$ and $\mathbf{BCor}^2_{\omega,n}(X,Y)\xrightarrow{a.s.}\mathbf{BCor}^2_{\omega}(X,Y)$.
4. Asymptotics: If $\hat{\omega}_{\mathcal{X},n}$ and $\hat{\omega}_{\mathcal{Y},n}$ uniformly converge to $\hat{\omega}_{\mathcal{X}}$ and $\hat{\omega}_{\mathcal{Y}}$, respectively,
(a) under the null hypothesis of independence, we have $n\,\mathbf{BCov}^2_{\omega,n}(X,Y)\xrightarrow{d}\sum_{j=1}^{\infty}\lambda_j Z_j^2$, where $\{Z_j\}_{j\ge 1}$ are independent standard normal random variables and $\{\lambda_j\}_{j\ge 1}$ are nonnegative constants depending on the distribution of $(X,Y)$.