Ulam matrix

In mathematical set theory, an Ulam matrix is an array of subsets of a cardinal number with certain properties. Ulam matrices were introduced by Stanislaw Ulam in his 1930 work on measurable cardinals: they may be used, for example, to show that a real-valued measurable cardinal is weakly inaccessible.[1]

Definition

Suppose that κ and λ are cardinal numbers, and let F be a λ-complete filter on λ. An Ulam matrix is a collection of subsets A_{αβ} of λ, indexed by α ∈ κ, β ∈ λ, such that

  • If β ≠ γ then A_{αβ} and A_{αγ} are disjoint.
  • For each β, the union over α ∈ κ of the sets A_{αβ}, ⋃_{α∈κ} A_{αβ}, is in the filter F.
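As a concrete illustration (a sketch not spelled out in this article), the classical case is κ = ω, λ = ω₁, with F the co-countable filter on ω₁; the injections f_ξ below are the usual auxiliary choice in this construction:

```latex
% Classical (\omega, \omega_1) Ulam matrix over the co-countable filter on \omega_1.
% For each countable ordinal \xi < \omega_1, fix an injection
% f_\xi \colon \xi \to \omega (possible because \xi is countable), and set
\[
  A_{n\beta} \;=\; \{\, \xi < \omega_1 : \beta < \xi \text{ and } f_\xi(\beta) = n \,\},
  \qquad n \in \omega,\ \beta \in \omega_1 .
\]
% Disjointness: if \beta \neq \gamma and \xi \in A_{n\beta} \cap A_{n\gamma},
% then f_\xi(\beta) = n = f_\xi(\gamma), contradicting the injectivity of f_\xi.
% Largeness: for each fixed \beta,
\[
  \bigcup_{n \in \omega} A_{n\beta} \;=\; \{\, \xi < \omega_1 : \beta < \xi \,\},
\]
% whose complement \beta + 1 is countable, so the union lies in the
% co-countable filter F, which is \omega_1-complete.
```

Here the row index n ∈ ω plays the role of α and the column index β ∈ ω₁ plays the role of β in the definition above.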

References

  1. ^ Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (Third Millennium ed.), Berlin, New York: Springer-Verlag, p. 131, ISBN 978-3-540-44085-7, Zbl 1007.03002