In classic complexity theory, coping with NP-hard optimization problems often translates to the following fundamental question: “What is the best approximate solution that we can get in polynomial time?” We will present basic techniques for the design and analysis of approximation algorithms.
This includes combinatorial techniques (such as local search and local ratio) and techniques based on linear programming. We will demonstrate the use of these techniques for solving packing and scheduling problems, as well as problems on graphs.
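To illustrate the flavor of results covered, here is a minimal sketch (not taken from the course materials) of the classic 2-approximation for Vertex Cover, obtained by greedily building a maximal matching:

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for Vertex Cover.

    Repeatedly pick an edge with both endpoints uncovered and add
    both endpoints to the cover. The picked edges form a matching,
    and any vertex cover must contain at least one endpoint of each
    matched edge, so our cover is at most twice the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: a 4-cycle with a chord.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
cover = vertex_cover_2approx(edges)
```

The analysis is a one-line charging argument, which is typical of the combinatorial techniques the course develops.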
This course covers a wide range of topics in complexity theory, such as space complexity, the polynomial hierarchy, probabilistic computation, interactive proofs, circuit complexity, and more.
We will discuss the question of whether it is possible to design responsible algorithms that process sensitive data while avoiding the risk of exposing personal and sensitive information. We will address this question from a theoretical perspective, focusing on the concept of differential privacy, which offers a formal mathematical definition of algorithmic privacy. Course Content Overview:
• A systematic development of the definition of differential privacy and its basic properties.
• Design and analysis of differentially-private algorithms.
• Can every algorithmic task be solved privately? Analysis of the cost of differential privacy in terms of computational and data efficiency.
• Additional selected topics as determined by the instructor’s discretion.
This is a theoretical course that demands mathematical maturity.
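As a small taste of the area (a sketch under my own assumptions, not course material), the canonical way to privately release a counting query is the Laplace mechanism: a count has sensitivity 1, so adding Laplace noise of scale 1/ε yields ε-differential privacy.

```python
import random

def private_count(true_count, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query changes by at most 1 when one individual's
    data changes (sensitivity 1), so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-DP. A Laplace(0, b) sample
    is the difference of two independent Exp(1/b) samples.
    """
    scale = 1.0 / epsilon
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise
```

Smaller ε means stronger privacy but noisier answers; the released value is unbiased, with standard deviation √2/ε.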
This course will introduce students to algorithmic foundations for handling large, high-dimensional datasets.
A sampling of topics and questions that will be covered includes:
Basic concepts: error correction, error detection and erasure correction. Linear error-correcting codes. Hamming codes. Introduction to finite fields. Reed-Solomon codes, BCH codes and alternant codes. Decoding algorithms. Bounds on the parameters of error-correcting codes.
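Since the course covers Hamming codes, here is a minimal (illustrative, not from the course) implementation of the Hamming (7,4) code, which encodes 4 data bits into 7 and corrects any single-bit error:

```python
def hamming_encode(data):
    """Encode 4 data bits into a 7-bit Hamming (7,4) codeword.

    Using 1-indexed positions, parity bits sit at the power-of-two
    positions 1, 2, 4 and data bits at positions 3, 5, 6, 7.
    """
    c = [0] * 8  # index 0 unused
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]  # covers positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]  # covers positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]  # covers positions with bit 2 set
    return c[1:]

def hamming_decode(word):
    """Correct up to one flipped bit and return the 4 data bits.

    The syndrome is the XOR of the (1-indexed) positions of the set
    bits; for a valid codeword it is 0, and a single flip at
    position p makes it exactly p.
    """
    c = [0] + list(word)
    syndrome = 0
    for i in range(1, 8):
        if c[i]:
            syndrome ^= i
    if syndrome:
        c[syndrome] ^= 1  # flip the erroneous bit back
    return [c[3], c[5], c[6], c[7]]
```

The Hamming (7,4) code has minimum distance 3, which is exactly what single-error correction requires.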
Probabilistic methods have become a cornerstone of combinatorial theory, while probabilistic algorithms have become central to computer science.
This course introduces an array of classical and more modern probabilistic methods, along with their applications to mathematics and computer science. The course is centered on proof techniques and problem solving. Topics covered include basic methods, concentration inequalities, martingales, the local lemma, and information entropy.
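A tiny illustration of the basic method (my own sketch, not course material): in any graph with m edges, a uniformly random 2-coloring of the vertices cuts each edge with probability 1/2, so the expected cut size is m/2, and therefore some coloring achieves at least m/2. Resampling finds such a coloring quickly:

```python
import random

def half_cut(edges, vertices):
    """Find a cut of size at least m/2 via the probabilistic method.

    Each edge is cut with probability 1/2 under a uniform random
    2-coloring, so E[cut] = m/2 and a cut of size >= m/2 must
    exist. Repeated sampling succeeds in O(1) expected rounds.
    """
    m = len(edges)
    while True:
        side = {v: random.random() < 0.5 for v in vertices}
        cut = sum(1 for u, v in edges if side[u] != side[v])
        if 2 * cut >= m:
            return cut
```

This turns an existence proof into a simple randomized algorithm, a recurring theme of the course.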
The topic of the seminar is Algorithms for Sustainable Resource Allocation.
We will review new variants of classic resource allocation problems that arise in modern computing paradigms aimed at sustainable use of resources, and the challenges they present from an algorithmic viewpoint. Several introductory lectures at the beginning of the seminar will present background material on approximation techniques for resource allocation problems that are known to be hard to solve.
The seminar deals with distributed computing in modern computing systems in the presence of different kinds of failures. Grades are based (mostly) on an in-depth presentation of a paper from the recent research literature.
The metric method is a powerful tool that has been used extensively in the last two decades in the design and analysis of algorithms.
This course will survey some of the basic techniques in the metric approach, as well as its applications to various topics such as: clustering and graph cuts, balanced graph partitioning, network routing, and online algorithms.
The topic of this course is uncertainty in algorithms. The traditional design and analysis of algorithms assumes that the input is fully known in advance. However, this is not the case in many applications, including resource management and allocation, data structures, and game dynamics; in these scenarios, making decisions in the face of uncertainty is of major importance. Take, for example, managing a cache memory: when a page miss occurs, the decision of which page to evict from the cache is made without knowing future page requests.
In the course we will cover advanced techniques developed in recent years for dealing with such settings, where information about the future is unavailable or limited. We will focus mainly on competitive analysis of online algorithms and on online learning, and study surprising connections between them. We will see how techniques from linear programming and convex optimization are useful for developing online algorithms.
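The cache eviction example above can be made concrete with the classic LRU policy, a standard first example in competitive analysis (a minimal sketch, not taken from the course materials):

```python
from collections import OrderedDict

def lru_misses(requests, k):
    """Simulate LRU eviction on a cache of size k and count misses.

    LRU is an online algorithm: each eviction decision is made
    without seeing future requests. A classical result shows LRU is
    k-competitive: on any request sequence it incurs at most k
    times the misses of the optimal offline policy, up to an
    additive constant.
    """
    cache = OrderedDict()  # insertion order tracks recency of use
    misses = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)  # mark as most recently used
        else:
            misses += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = True
    return misses
```

The optimal offline policy (Belady's rule: evict the page requested farthest in the future) needs full knowledge of the sequence, which is exactly what competitive analysis compares against.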