Shai Ben-David


Shai Ben-David is an Israeli-Canadian computer scientist and professor at the University of Waterloo. He is known for his research in theoretical machine learning.[1]

Biography

Shai Ben-David grew up in Jerusalem, Israel and received a Ph.D. in mathematics from the Hebrew University of Jerusalem,[2] where he was advised by Saharon Shelah.[3][2] He held postdoctoral positions in mathematics and computer science at the University of Toronto. He was a professor of computer science at the Technion and also held visiting positions at the Australian National University and Cornell University.[4]

He has been a professor of computer science at the University of Waterloo since 2004.

Selected publications and awards

Ben-David has written highly cited papers on learning theory and online algorithms.[5][6][7][8][9] He is a co-author, with Shai Shalev-Shwartz, of the book "Understanding Machine Learning: From Theory to Algorithms" (Cambridge University Press, 2014).[1]

He received the best paper award at NeurIPS 2018 for work on the sample complexity of distribution learning problems.[10][11] He was the President of the Association for Computational Learning from 2009 to 2011.[12]

Publications

  • Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.
  • Ben-David, Shai; Blitzer, John; Crammer, Koby; Kulesza, Alex; Pereira, Fernando; Vaughan, Jennifer Wortman (2010). "A theory of learning from different domains". Machine Learning. 79: 151–175. Springer US.
  • Ben-David, Shai; Blitzer, John; Crammer, Koby; Pereira, Fernando (2006). "Analysis of representations for domain adaptation". Advances in Neural Information Processing Systems. 19.
  • Kifer, Daniel; Ben-David, Shai; Gehrke, Johannes (2004). "Detecting change in data streams". VLDB. 4.

References

  1. ^ a b Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge: Cambridge University Press. ISBN 978-1-107-05713-5.
  2. ^ a b "Shai Ben-David at the Mathematics Genealogy Project".
  3. ^ "ACML 2018 Main/Speakers". www.acml-conf.org. Retrieved 2021-04-26.
  4. ^ "Shai Ben-David | Simons Institute for the Theory of Computing". simons.berkeley.edu. Retrieved 2021-04-10.
  5. ^ Ben-David, Shai; Blitzer, John; Crammer, Koby; Kulesza, Alex; Pereira, Fernando; Vaughan, Jennifer Wortman (2010-05-01). "A theory of learning from different domains". Machine Learning. 79 (1): 151–175. doi:10.1007/s10994-009-5152-4. ISSN 1573-0565.
  6. ^ Schölkopf, Bernhard; Platt, John; Hofmann, Thomas (2007). Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference. MIT Press. ISBN 978-0-262-19568-3.
  7. ^ VLDB (2004-10-08). Proceedings 2004 VLDB Conference: The 30th International Conference on Very Large Databases (VLDB). Elsevier. ISBN 978-0-08-053979-9.
  8. ^ Ben-David, S.; Borodin, A.; Karp, R.; Tardos, G.; Wigderson, A. (1994-01-01). "On the power of randomization in on-line algorithms". Algorithmica. 11 (1): 2–14. doi:10.1007/BF01294260. ISSN 1432-0541. S2CID 26771869.
  9. ^ Alon, Noga; Ben-David, Shai; Cesa-Bianchi, Nicolò; Haussler, David (1997-07-01). "Scale-sensitive dimensions, uniform convergence, and learnability". Journal of the ACM. 44 (4): 615–631. doi:10.1145/263867.263927. ISSN 0004-5411.
  10. ^ "Professor Shai Ben-David and colleagues win best paper award at NeurIPS 2018". Cheriton School of Computer Science. 2018-12-03. Retrieved 2021-04-10.
  11. ^ "Nearly Tight Sample Complexity Bounds for Learning Mixtures of Gaussians via Sample Compression Schemes" (PDF).
  12. ^ "Shai Ben-David". CIFAR. Retrieved 2021-04-10.
  13. ^ "Shai Ben-David". awards.acm.org. Retrieved 2024-01-26.