Draft:Fiat-Naor Algorithm

From Wikipedia, the free encyclopedia

The Fiat-Naor algorithm[1][2] is an algorithm proposed by Amos Fiat and Moni Naor for black-box function inversion with preprocessing. In this problem, a computationally unbounded preprocessing algorithm P is given a function f : [N] → [N] and outputs an advice of S bits. Given the advice and black-box access to f, an online algorithm A is asked to invert f at some point y in time T. The goal is to obtain the best tradeoff between S and T. The Fiat-Naor algorithm works for every S and T satisfying TS³ = Ω̃(N³) (where the Ω̃-notation hides a poly-logarithmic factor). This is essentially the best known tradeoff for this problem so far.

Problem Description

The task of black-box function inversion with preprocessing is as follows. In the preprocessing stage, a computationally unbounded preprocessing algorithm P is given a function f : [N] → [N], where [N] denotes the set {1, 2, …, N}, and outputs an advice σ of S bits. In the online stage, an online algorithm A is given black-box access to f, the advice σ, and some point y in the image of f, and is asked to find an x such that f(x) = y in time T. Black-box access to f allows A to query any x ∈ [N] and obtain f(x).

Naive Solutions

There are two naive solutions.

  • P does nothing. A makes N different queries to f until it finds an x such that f(x) = y. This gives us S = 0 and T = N.
  • P outputs a look-up table that stores every y in the image of f together with one of its preimages. A just searches for y in the table. This gives us S = Õ(N) and T = O(1).

Combining the two solutions, we get an algorithm working for every S and T satisfying S + T = Ω̃(N) in general.
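The combination of the two naive solutions can be sketched as follows (a minimal illustration with our own helper names: the table stores one preimage for each image of the first s domain points, and the online algorithm brute-forces the rest):

```python
def preprocess(f, n, s):
    """Look-up table inverting every image of the first s domain points."""
    table = {}
    for x in range(s):
        table.setdefault(f(x), x)   # keep one preimage per image
    return table

def invert(f, n, s, y, table):
    if y in table:                  # covered by the advice: O(1) time
        return table[y]
    for x in range(s, n):           # otherwise brute force: n - s queries
        if f(x) == y:
            return x
    return None                     # y is not in the image of f
```

Storing the table takes S = Õ(s) bits and the online time is T = n − s in the worst case, which traces out the S + T ≈ N line.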

Inverting Random Functions

Martin Hellman[3] was the first to study inverting functions with preprocessing. His algorithm, later made rigorous by Fiat and Naor,[1] can invert a random function f with high probability over the choice of f, for every S and T satisfying TS² = Ω̃(N²). However, it does not work for arbitrary functions.

Lower Bound

Andrew Yao[4] proved a lower bound of ST = Ω̃(N) for function inversion.

Algorithm Description

The Fiat-Naor algorithm inverts an arbitrary function f at any image y with high probability over the randomness of the preprocessing algorithm P. The algorithm works as follows.

  • Preprocessing algorithm P: Uniformly choose x₁, …, x_S from [N] and store the pairs (f(x₁), x₁), …, (f(x_S), x_S) in a look-up table T₁. Define B = {f(x₁), …, f(x_S)} and A = f⁻¹(B) = {x : f(x) ∈ B}. Let m = Θ(N/T) and t = Θ(N/S). Choose g₁, …, gℓ from a hash function family G. Run subroutine P′ given gᵢ for i = 1, …, ℓ to obtain ℓ look-up tables T₂. The advice contains T₁, g₁, …, gℓ and the ℓ tables T₂.
    • Subroutine P′(g): Define the function u as u(y) = g(y, j), where j is the smallest positive integer such that g(y, j) ∉ A. Let the function h = u ∘ f. Uniformly choose z₁, …, z_m from [N] ∖ A. Store the pairs (hᵗ(z₁), z₁), …, (hᵗ(z_m), z_m) in a look-up table T₂. Here hᵗ(z) denotes the result of iteratively evaluating h for t times starting from z.
  • Online algorithm A(y): Search in the look-up table T₁ for an entry in the form of (y, x). If such an entry exists, output x. Otherwise, calculate u and h from gᵢ as in P′. Run subroutine A′ given gᵢ and y for i = 1, …, ℓ. If a solution is found in the subroutine, output it.
    • Subroutine A′(g, y): Define u and h as in P′. Note that evaluating u requires checking whether g(y, j) ∈ A. A′ can do so by checking if f(g(y, j)) is stored as an image in T₁. Starting from u(y), iteratively evaluate h for t steps. In each step, if reaching some hᵗ(zᵢ) stored in T₂, immediately jump to the corresponding zᵢ. If reaching u(y) again, output the immediate predecessor.
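The steps above can be sketched in the following toy implementation (a simplification under our own naming: the hash family is modeled by a seeded cryptographic hash, parameters are tiny, and the probabilistic guarantees of the real analysis are ignored):

```python
import hashlib
import random

def g(seed, y, j, n):
    """Stand-in for the hash family G: maps (y, j) pseudo-randomly into [0, n)."""
    d = hashlib.sha256(f"{seed}:{y}:{j}".encode()).digest()
    return int.from_bytes(d[:8], "big") % n

def make_u(f, T1, seed, n):
    """u(y) = g(y, j) for the smallest j with g(y, j) outside A = f^-1(B).
    Membership in A costs one query: x is in A iff f(x) is a key of T1."""
    def u(y):
        j = 0
        while True:
            x = g(seed, y, j, n)
            if f(x) not in T1:
                return x
            j += 1
    return u

def preprocess(f, n, S, m, t, ell):
    """Build T1 (random samples) and ell chain tables for the re-randomized h."""
    rng = random.Random(0)
    T1 = {}
    for _ in range(S):
        x = rng.randrange(n)
        T1[f(x)] = x
    tables = []
    for i in range(ell):
        u = make_u(f, T1, i, n)
        table = {}
        for _ in range(m):
            z0 = rng.randrange(n)
            if f(z0) in T1:        # chains start outside A
                continue
            z = z0
            for _ in range(t):
                z = u(f(z))        # one step of h = u . f
            table[z] = z0          # endpoint -> starting point
        tables.append(table)
    return T1, tables

def invert(f, n, y, advice, t, ell):
    T1, tables = advice
    if y in T1:                    # heavy images: answered by T1 directly
        return T1[y]
    for i in range(ell):
        u = make_u(f, T1, i, n)
        start = u(y)
        x = start
        for _ in range(2 * t):     # at most one jump, so 2t steps suffice
            if x in tables[i]:
                x = tables[i][x]   # reached an endpoint: jump to the start
            prev = x
            x = u(f(prev))         # one step of h
            if x == start and f(prev) == y:
                return prev        # h(prev) = u(y), and indeed f(prev) = y
    return None
```

Note the explicit check f(prev) = y before outputting: reaching u(y) again only guarantees u(f(prev)) = u(y), so a collision of u could otherwise produce a false positive.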

Parameters

The algorithm takes a space parameter S and a time parameter T, which will be different from the actual space S′ and the actual time T′ by some poly-logarithmic factor in N, respectively. S and T decide the other parameters as follows, where the Θ-notation hides constant factors.

  • t = Θ(N/S), the length of each chain;
  • m = Θ(N/T), the number of chains built in each call to P′;
  • ℓ = Θ̃(ST/N), the number of repetitions.

G is required to be a family of k-wise independent hash functions for a sufficiently large k, each function having a description short enough to fit into the advice. The functions g₁, …, gℓ are chosen in some way such that they are pairwise independent, and the evaluation of these functions takes Õ(1) amortized time. Fiat and Naor gave such a construction, in which these functions are not fully independent, to achieve the Õ(1) amortized time.

Explanation

Subroutines: Hellman's Algorithm

The subroutines in the Fiat-Naor algorithm are built on Hellman's algorithm for inverting random functions.[3]

The main idea of the preprocessing is to build chains of length t of the form x → f(x) → f(f(x)) → ⋯ and store the starting point and ending point of each chain in a look-up table as the advice. On input y, the online algorithm iteratively evaluates f starting from y. In each step, if it reaches the endpoint of some chain, it immediately jumps back to the starting point of that chain. If y is covered by some chain (not as a starting point), then the algorithm will reach y again within t steps and find the preimage, which is the immediate predecessor of y in this chain.
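A single table of this construction can be sketched as follows (a toy illustration with hypothetical helper names, not Hellman's exact parametrization):

```python
import random

def build_table(f, n, m, t, seed=0):
    """One Hellman table: m chains of length t over f; store endpoint -> start."""
    rng = random.Random(seed)
    table = {}
    for _ in range(m):
        start = rng.randrange(n)
        x = start
        for _ in range(t):
            x = f(x)
        table[x] = start
    return table

def invert_with_table(f, y, t, table):
    """Walk forward from y; jumping back at endpoints revisits y's chain,
    and the point just before y is a preimage."""
    x = y
    for _ in range(2 * t):          # at most one jump, so 2t steps suffice
        if x in table:
            x = table[x]            # reached an endpoint: jump to the start
        prev = x
        x = f(x)
        if x == y:
            return prev             # f(prev) == y by construction
    return None
```

Inversion succeeds only for points actually covered by some chain, which is why the full scheme repeats the construction with several re-randomized tables.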

Probability analysis shows that with mt² = Θ(N), we can ensure that there are few enough collisions between chains with randomly chosen starting points, so that these chains cover Ω(mt) distinct values. This allows inverting f at an Ω(mt/N)-fraction of points.

It remains to amplify the success probability. For this purpose, the above algorithm is not directly applied to f, but to f re-randomized by a random function g, namely g ∘ f. Then by repeating ℓ = Θ̃(N/(mt)) times with independently chosen g₁, …, gℓ, every possible input is covered at least once with high probability. Hellman's algorithm uses space S = Õ(ℓm) and time T = Õ(ℓt) (where the Õ-notation hides poly-logarithmic factors in N), so mt² = Θ(N) is sufficient to let TS² = Õ(N²).

Since a random function is unlikely to have a succinct description, Hellman suggested heuristically using some concrete function family G. Fiat and Naor made this argument rigorous with their construction of G. They also showed that in general, for every function f where every image has at most q preimages, Hellman's algorithm works if mt² = Θ(N/q), and thus for every S and T satisfying TS² = Ω̃(qN²).

Handling Images with Many Preimages

Hellman's algorithm does not obtain a good tradeoff when there are elements with many preimages, which makes q big. This case has to be handled in order to invert arbitrary functions.

The look-up table T₁ serves exactly this purpose. It contains S random elements and their images. These images can be inverted by searching in T₁. It remains to handle the images not included in T₁, that is, the function f over the remaining domain [N] ∖ A. Since x₁, …, x_S are chosen uniformly, with high probability every image of f with more than Õ(N/S) preimages is contained in T₁. Therefore, the remaining images have relatively few preimages and can be inverted using Hellman's algorithm.

To apply Hellman's algorithm over a partial domain, we use a function u to re-randomize f. Then the function h = u ∘ f becomes a function from [N] ∖ A to itself. Note that hashing to an arbitrary subset is difficult. For this reason, u is designed as u(y) = g(y, j), where j is the smallest positive integer such that g(y, j) ∉ A. The online algorithm has to query f to check whether g(y, j) ∈ A, so it makes Õ(1) more queries on average to evaluate h.
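The rejection-sampling definition of u can be sketched as follows (illustrative only; g is modeled by a seeded hash, and membership in A is tested with one query to f, assuming T₁ maps images to preimages):

```python
import hashlib

def g(seed, y, j, n):
    """Stand-in for a hash function from the family G."""
    d = hashlib.sha256(f"{seed}:{y}:{j}".encode()).digest()
    return int.from_bytes(d[:8], "big") % n

def make_u(f, T1, seed, n):
    """u(y) = g(y, j) for the smallest j with g(y, j) outside A = f^-1(B).
    Membership in A costs one query: x in A  iff  f(x) is an image in T1."""
    def u(y):
        j = 0
        while True:
            x = g(seed, y, j, n)
            if f(x) not in T1:     # accept: x lies outside A
                return x
            j += 1                 # reject: one wasted query to f
    return u
```

When A covers at most a constant fraction of the domain, each evaluation of u makes O(1) queries to f in expectation.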

Since every image of h has at most Õ(N/S) preimages, setting mt² = Θ̃(S) can guarantee that Ω(mt) elements of [N] ∖ A are covered in each subroutine. Thus ℓ = Θ̃(N/(mt)) repetitions can amplify the success probability for every image to 1 − o(1). The actual space and time are S′ = Õ(ℓm + S) and T′ = Õ(ℓt). By setting t = Θ(N/S) and m = Θ(N/T), it is guaranteed that S′ = Õ(S) and T′ = Õ(T) for every S and T satisfying TS³ = Ω̃(N³).
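The parameter algebra can be checked numerically at the balanced point S = T = N^(3/4) (a sanity check ignoring constant and poly-logarithmic factors; variable names follow the text):

```python
# Balanced point of the tradeoff: S = T = N^(3/4), so that T * S^3 = N^3.
N = 2 ** 32
S = T = round(N ** 0.75)

t = N // S               # chain length            t = Theta(N / S)
m = N // T               # chains per table        m = Theta(N / T)
l = (S * T) // N         # number of repetitions   l = Theta(S * T / N)

assert l * m == S        # total chain starting points stored: space budget
assert l * t == T        # total chain steps walked online: time budget
assert m * t * t == S    # collision constraint m * t^2 = Theta(S) is met
```

The last assertion is the condition mt² = Θ̃(S) from the analysis; since mt² = N³/(TS²), it holds on the whole curve TS³ = N³, not only at the balanced point.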

Applications

The line of research on function inversion has developed practical tools for cryptanalysis, such as rainbow tables.[5] Because of the generality of function inversion, the Fiat-Naor algorithm can be applied to other data structure problems such as string searching[6] and 3SUM-Indexing.[7][8] It has also been applied in computational complexity theory to beat some conjectured lower bounds.[9][10]

References

  1. ^ a b Fiat, Amos; Naor, Moni (1991-01-03). "Rigorous time/Space tradeoffs for inverting functions". Proceedings of the twenty-third annual ACM symposium on Theory of computing - STOC '91. New York, NY, USA: Association for Computing Machinery. pp. 534–541. doi:10.1145/103418.103473. ISBN 978-0-89791-397-3.
  2. ^ Fiat, Amos; Naor, Moni (January 2000). "Rigorous Time/Space Trade-offs for Inverting Functions". SIAM Journal on Computing. 29 (3): 790–803. doi:10.1137/S0097539795280512. ISSN 0097-5397.
  3. ^ a b Hellman, M. (July 1980). "A cryptanalytic time-memory trade-off". IEEE Transactions on Information Theory. 26 (4): 401–406. doi:10.1109/TIT.1980.1056220. ISSN 0018-9448.
  4. ^ Yao, A. C.-C. (1990-04-01). "Coherent functions and program checkers". Proceedings of the twenty-second annual ACM symposium on Theory of computing - STOC '90. New York, NY, USA: Association for Computing Machinery. pp. 84–94. doi:10.1145/100216.100226. ISBN 978-0-89791-361-4.
  5. ^ Oechslin, Philippe (2003). "Making a Faster Cryptanalytic Time-Memory Trade-Off". In Boneh, Dan (ed.). Advances in Cryptology - CRYPTO 2003. Lecture Notes in Computer Science. Vol. 2729. Berlin, Heidelberg: Springer. pp. 617–630. doi:10.1007/978-3-540-45146-4_36. ISBN 978-3-540-45146-4.
  6. ^ Corrigan-Gibbs, Henry; Kogan, Dmitry (2019). "The Function-Inversion Problem: Barriers and Opportunities". In Hofheinz, Dennis; Rosen, Alon (eds.). Theory of Cryptography. Lecture Notes in Computer Science. Vol. 11891. Cham: Springer International Publishing. pp. 393–421. doi:10.1007/978-3-030-36030-6_16. ISBN 978-3-030-36030-6.
  7. ^ Golovnev, Alexander; Guo, Siyao; Horel, Thibaut; Park, Sunoo; Vaikuntanathan, Vinod (2020-06-22). "Data structures meet cryptography: 3SUM with preprocessing". Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing. STOC 2020. New York, NY, USA: Association for Computing Machinery. pp. 294–307. doi:10.1145/3357713.3384342. hdl:1721.1/137337.3. ISBN 978-1-4503-6979-4.
  8. ^ Kopelowitz, Tsvi; Porat, Ely (2019-07-25), The Strong 3SUM-INDEXING Conjecture is False, arXiv:1907.11206
  9. ^ Mazor, Noam; Pass, Rafael (2024). "The Non-Uniform Perebor Conjecture for Time-Bounded Kolmogorov Complexity Is False". 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPICS.ITCS.2024.80. ISSN 1868-8969.
  10. ^ Hirahara, Shuichi; Ilango, Rahul; Williams, Ryan (2023-11-15), Beating Brute Force for Compression Problems, retrieved 2024-04-22