Upcoming Talks

The Technion Theory Lunch is a seminar run by the theory group at the Department of Computer Science, Technion. The seminar holds weekly meetings in which graduate students and faculty from the Technion and other universities in Israel and abroad present recent results or other interesting topics in theoretical computer science. The seminar is open to faculty and to graduate and undergraduate students interested in the theory of computing.

Time: Wednesdays, 13:00-14:00 (food starts at 12:45)
Place: Room 201 (note the return to the traditional room), Taub Building

If you are interested in giving a talk at the seminar, contact Omri Ben-Eliezer and/or David Wajc.

The seminar’s mailing list is used to announce our weekly meetings and to publicize events related to CS theory, such as workshops and seminars, in Israel and abroad.

To join or leave our mailing list, follow the subscription link.

Shuffling cards when you are of very little brain: Low memory generation of permutations
Wed, November 12
Boaz Menuhin

How can we generate a permutation of the numbers 1 through n so that it is hard to guess the next element given the history so far? The twist is that the generator of the permutation (the “Dealer”) has limited memory, while the “Guesser” has unlimited memory. With unbounded memory (in fact, n bits suffice), the Dealer can generate a truly random permutation, against which the expected number of correct guesses is ln n.
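
A minimal sketch of this baseline (Python; the helper name play_round is ours, and this illustrates only the unbounded-memory case, not the paper's low-memory Dealer): against a truly random permutation, any guess among the unseen cards is correct with probability 1/(cards remaining), so the expected number of correct guesses is H_n = 1 + 1/2 + ... + 1/n, which is ln n up to an additive constant.

import math
import random

def play_round(n):
    # Dealer with unbounded memory: deal a uniformly random permutation.
    deck = list(range(1, n + 1))
    random.shuffle(deck)
    remaining = set(deck)
    correct = 0
    for card in deck:
        # Guess the smallest unseen card; any fixed rule over the
        # unseen cards is correct with the same probability.
        guess = min(remaining)
        if guess == card:
            correct += 1
        remaining.remove(card)
    return correct

n, trials = 128, 2000
avg = sum(play_round(n) for _ in range(trials)) / trials
h_n = sum(1.0 / k for k in range(1, n + 1))
print(f"empirical {avg:.2f} vs H_n {h_n:.2f} (ln n = {math.log(n):.2f})")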

We show tight bounds on the relationship between the guessing probability and the memory m required to generate the permutation. We present a method for an m-bit Dealer that operates in constant time per turn and against which any Guesser correctly picks only O(n/m + log m) cards in expectation. The method is fully transparent, requiring no hidden information from the Dealer (i.e., it is “open book” or “whitebox”).

We show that this bound is the best possible, even with secret memory: for any m-bit Dealer, there is a (computationally powerful) Guesser that achieves Ω(n/m + log m) correct guesses in expectation. We point out that the assumption that the Guesser is computationally powerful is necessary: under cryptographic assumptions, there exists a low-memory Dealer that can fool any computationally bounded Guesser.

To appear in FOCS 2025.
https://arxiv.org/abs/2505.01287

The surprising graph behind k-nearest neighbors and contrastive learning
Wed, November 19
Orr Fischer (Bar-Ilan University)

Contrastive learning is a comparison-based learning paradigm in which labelers are given triplet queries (A,B,C) of data points and answer whether, in their opinion, point A is “closer” to B than to C, or vice versa. For example, given three pictures (A = Lion, B = Tiger, C = Dog), a labeler would say that a lion is closer to a tiger than to a dog. In this talk, we discuss the dimensionality of satisfying m contrastive learning constraints in an l_p metric. In particular, we answer the following question:

What is a sufficiently large dimension d such that, given any m (non-contradictory) constraints of the form “dist(A,B) < dist(A,C)” under the l_p distance, we can find an embedding into R^d satisfying all constraints? We show, for example, that for p = 2, d = Theta(sqrt(m)) is both necessary and sufficient.

Relatedly, we use our techniques to answer the following natural question for the k-nearest neighbor problem: what is a sufficiently large dimension d such that, given any metric D on n points, we can find an embedding into R^d (under the l_p distance) that preserves the k nearest neighbors of each point of D? We show that d = O~(poly(k)) is always sufficient.

Our methods combine combinatorial and algebraic techniques. At the heart of our algorithms is an analysis of a graph we refer to as the constraint graph, whose properties, in particular its arboricity (an important measure of density), govern the dimension bounds we obtain.
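
To make the constraint format concrete, here is a small Python sketch (a toy written for this summary; the helper names lp_dist and satisfied are ours, and it does not reproduce the talk's embedding construction or constraint-graph analysis). It checks which triplet constraints “dist(A,B) < dist(A,C)” a given embedding in R^d satisfies under the l_p distance.

import numpy as np

def lp_dist(x, y, p=2.0):
    # l_p distance between two points in R^d.
    return float(np.linalg.norm(x - y, ord=p))

def satisfied(points, triplets, p=2.0):
    # points: (n, d) array; each triplet (a, b, c) encodes the
    # constraint dist(a, b) < dist(a, c).
    return [lp_dist(points[a], points[b], p) < lp_dist(points[a], points[c], p)
            for (a, b, c) in triplets]

# Four points embedded in R^2 and two constraints.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0], [0.0, 2.0]])
cons = [(0, 1, 2),  # point 0 closer to point 1 than to point 2 -> True
        (0, 3, 1)]  # point 0 closer to point 3 than to point 1 -> False
print(satisfied(pts, cons, p=2.0))  # [True, False]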

Shay Sapir
Wed, November 26
Shay Sapir (Weizmann Institute)

Yaseen Abd-Elhaleem
Wed, December 3
Yaseen Abd-Elhaleem (University of Haifa)

Yuval Gil
Wed, December 24
Yuval Gil (Reykjavik University)

Ilan Doron-Arad
Wed, January 7
Ilan Doron-Arad (MIT)

Amit Levi
Wed, January 14
Amit Levi (University of Haifa)

Assaf Reiner
Wed, January 21
Assaf Reiner (Hebrew University)

Omrit Filtser
Wed, January 28
Omrit Filtser (Open University)