
Theory Seminar – MSR-Silicon Valley – RIP

A common feature of most natural communication is that it relies on a large context shared between speaker and listener. Knowledge of the language, technical terms, socio-political events, history, and so on all combine to form this shared context, and clearly this context is never perfectly shared by the two. The shared context helps compress communication (without it, this abstract would have to include English dictionaries and grammar books), but imperfect sharing can lead to misunderstanding and ambiguity. The challenge of reaping the benefits of a shared context without incurring new errors due to imperfect sharing leads to many new mathematical questions.

This talk focuses on one specific setting for this tension between shared context and imperfectness of sharing: the use of shared randomness in communication complexity. It is well known that shared randomness between sender and receiver can significantly reduce the communication complexity of certain communication tasks. What happens if this randomness is not perfectly shared? Say the sender has access to a uniformly random bit string and the receiver has access to a noisy version of the same string. Among other results, we show that any k-bit one-way communication protocol that uses perfectly shared randomness can be "simulated" using imperfectly shared randomness with 2^k bits of communication, and that this is essentially tight. We will explain the more general phenomena that these results point to.
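
As a toy illustration of the imperfectly-shared-randomness model described above (my own sketch of the setup, not the construction from the talk), the following code samples a uniformly random string for the sender and gives the receiver a copy in which each bit is independently flipped; the parameter names are illustrative.

```python
import random

def imperfectly_shared_randomness(n_bits, flip_prob, seed=None):
    """Sample the correlated randomness of the model: the sender sees r,
    the receiver sees a noisy copy r' in which each bit of r is flipped
    independently with probability flip_prob."""
    rng = random.Random(seed)
    r_sender = [rng.randint(0, 1) for _ in range(n_bits)]
    r_receiver = [b ^ (rng.random() < flip_prob) for b in r_sender]
    return r_sender, r_receiver

if __name__ == "__main__":
    r, r_noisy = imperfectly_shared_randomness(n_bits=20, flip_prob=0.1, seed=0)
    agreement = sum(a == b for a, b in zip(r, r_noisy)) / len(r)
    print("sender  :", r)
    print("receiver:", r_noisy)
    print("fraction of agreeing positions:", agreement)
```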

Based on joint work with Clément Canonne (Columbia University), Venkatesan Guruswami (CMU), and Raghu Meka (MSR).

2014/09/17 Tim Roughgarden (Stanford University)

Barriers to Near-Optimal Equilibria

We explain when and how communication and computational lower bounds for algorithms for an optimization problem translate into lower bounds on the worst-case quality of equilibria in games derived from that problem. The simplest application of this lower-bound framework is to use existing computational or communication lower bounds to derive lower bounds on the worst-case price of anarchy in the corresponding games.

We also use the framework to derive price-of-anarchy (POA) lower bounds for classes of games; this is a new approach to POA lower bounds, based on reductions rather than explicit constructions. These lower bounds are particularly relevant for game design problems, where the goal is to design games that have only near-optimal equilibria, ranging from the design of simple combinatorial auctions to the design of efficient tolls for routing networks.

2014/09/10 Dawn Woodard (Visiting Researcher, Cornell University)

Characterizing the Efficiency of Markov Chain Monte Carlo Methods

With the introduction of simulation-based computational methods, the field of Bayesian statistics has expanded exponentially, with applications in a wide range of fields from finance to information technology. However, the greatest challenge in adopting a Bayesian approach is the computational methods used to implement it. Our understanding of the errors associated with these methods has lagged far behind their use, and for some statistical models, all available methods take a long time to converge. In this talk, we will describe our work to evaluate the effectiveness of certain computational methods used for Bayesian statistical inference.

In particular, we will present recent results on the efficiency of approximate Bayesian computation (ABC), which has become a fundamental tool in population genetics and systems biology. ABC is used when evaluating the likelihood function is prohibitively expensive or impossible; instead, the likelihood is approximated by drawing pseudo-samples from the model. We treat both the rejection-sampling and the Markov chain Monte Carlo versions of ABC and show the surprising result that using multiple pseudo-samples does not improve the efficiency of the algorithm relative to using the high-variance estimator computed from a single pseudo-sample. This means that there is no need to tune the number of pseudo-samples, in contrast to particle MCMC methods, where multiple particles are often required to obtain sufficient accuracy.
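
For readers unfamiliar with ABC, here is a minimal sketch of the rejection-sampling variant mentioned above, for a toy model (estimating the mean of a Gaussian with known variance); the model, prior, summary statistic, and tolerance are my own illustrative choices, not taken from the talk.

```python
import random

def abc_rejection(observed, n_accept, tolerance, seed=0):
    """Approximate Bayesian computation by rejection sampling:
    draw theta from the prior, simulate pseudo-data from the model,
    and keep theta if a summary statistic of the pseudo-data lands
    within `tolerance` of the observed summary."""
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)            # summary statistic
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.gauss(0.0, 10.0)                    # prior: N(0, 10^2)
        pseudo = [rng.gauss(theta, 1.0) for _ in observed]  # model: N(theta, 1)
        if abs(sum(pseudo) / len(pseudo) - obs_mean) < tolerance:
            accepted.append(theta)
    return accepted

if __name__ == "__main__":
    rng = random.Random(1)
    data = [rng.gauss(2.0, 1.0) for _ in range(50)]     # true mean is 2
    draws = abc_rejection(data, n_accept=200, tolerance=0.2)
    print("approximate posterior mean:", sum(draws) / len(draws))
```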

2014/09/03 Omri Weinstein (Princeton University)

From Honest-but-Curious to Malicious

We introduce a novel technique that enables two players to maintain an online estimate of the internal information cost of their conversation without revealing much extra information. We use this construction to obtain new results on information-theoretically secure computation and interactive compression.

In particular, we show that our "information odometer" can be used to achieve information-theoretically secure computation between two untrusted parties. If the players' goal is to compute a function f(x, y), and f admits a protocol with information cost I and communication cost C, then our odometer can be used to produce a "robust" protocol that:

(i) assuming both players are honest, computes f correctly with high probability, and (ii) even if one player is malicious/adversarial, guarantees that the honest player reveals at most O(k·(I + log C)) bits of information to the other player, except with probability at most 2^{-k}.

We also describe an approach that uses our odometer as an alternative route to interactive compression.

In particular, we show that progress on interactive compression in the regime I = O(log C) would imply new compression results in all regimes, and would therefore lead to new direct sum theorems in communication complexity.

Joint work with Mark Braverman.

2014/08/27 Amit Daniely (Hebrew University)

From Average-Case Complexity to the Complexity of Improper Learning

It is currently poorly understood how to prove hardness of learning problems, and there are large gaps between the known upper and lower bounds in this field. The main barrier is that standard NP-hardness reductions do not seem to capture the hardness of learning; all known lower bounds rely on (unproven) cryptographic assumptions.

We introduce a new technique for this field, based on reductions from average-case hard problems. We use a natural generalization of Feige's assumption about the complexity of refuting random K-SAT instances. Under this assumption, we show the following:

1. Learning DNFs is hard. 2. Learning intersections of halfspaces is hard.

Furthermore, the same assumption implies the hardness of virtually all learning problems that were previously shown hard under cryptographic assumptions.

Joint work with Nati Linial and Shai Shalev-Shwartz.

2014/08/20 Robert Krauthgamer (Weizmann Institute of Science)

Adaptive Metric Dimensionality Reduction

I plan to discuss data-driven dimensionality reduction in the context of supervised learning in general metric spaces. We make two contributions. On the statistical side, we present a generalization bound for Lipschitz classifiers in metric spaces that are doubling, or nearly doubling. As a consequence, we provide a new theoretical explanation for empirical reports that preprocessing Euclidean data with PCA (principal component analysis) before constructing a linear classifier is beneficial.

On the algorithmic side, we introduce an analogue of PCA for general metric spaces: an efficient procedure that approximates the intrinsic dimension of the data. Our results thus exploit the two benefits of low dimensionality: (1) more efficient algorithms, e.g., for proximity search, and (2) more optimistic generalization bounds.
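
The empirical observation mentioned above (PCA preprocessing before a linear classifier) can be explored with a few lines of scikit-learn; this only illustrates that Euclidean pipeline, not the metric-space procedure from the talk, and the synthetic dataset and dimensions are arbitrary choices of mine.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic data: 500 ambient dimensions, but only 10 informative ones,
# so the data lies close to a low-dimensional subspace.
X, y = make_classification(n_samples=1000, n_features=500, n_informative=10,
                           n_redundant=490, random_state=0)

plain = LogisticRegression(max_iter=2000)
with_pca = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=2000))

print("linear classifier alone :", cross_val_score(plain, X, y, cv=5).mean())
print("PCA + linear classifier :", cross_val_score(with_pca, X, y, cv=5).mean())
```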

Joint work with Lee-Ad Gottlieb and Aryeh Kontorovich.

Physical Zero-Knowledge Proofs of Physical Properties

Can we prove that two DNA fingerprints match, or that they do not match, without revealing any further information about the fingerprints? Can we prove that two objects have the same design without revealing the design itself?

In the digital domain, zero-knowledge is a well-established concept: a party convinces another party of the validity of a statement without revealing any information beyond that validity. In the context of physical properties, however, zero-knowledge is far less developed.

In this talk, we are interested in protocols that prove physical properties of physical objects without revealing any further information. The literature lacks a unified formal framework for designing and analyzing such protocols. We formally define, model, and analyze physical zero-knowledge (PhysicalZK) protocols using the Universal Composability framework, and we demonstrate applications of physical zero-knowledge, for example to DNA profiling and to neutron radiography. Finally, we explore a public-coin analogue, which we call publicly observable proofs, within the PhysicalZK framework.

Joint work with Ben Fisch and Daniel Freund.

2014/08/06 Lyle Ramshaw

Stråhle's Equal Temperament

In 1743, the Swedish instrument maker Daniel P. Stråhle published an elegant geometric construction for determining the precise pitches of notes, such as the positions of the frets along the neck of a guitar. Stråhle chose 24/17 as the slope of a line in his construction; this ratio should be close to the square root of 2. But why did he choose 17 and 24? It turns out that Stråhle's choice has a musical advantage: the frets that deviate most from equal temperament lie high up on the fingerboard.
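
A quick calculation (my own illustration, not part of the talk) shows how close 24/17 is to the square root of 2, alongside the equal-tempered fret positions that such constructions are meant to approximate:

```python
import math

print("24/17        =", 24 / 17)
print("sqrt(2)      =", math.sqrt(2))
print("relative err =", abs(24 / 17 - math.sqrt(2)) / math.sqrt(2))

# Equal-tempered fret positions on a string of length 1:
# fret k sits at distance 1 - 2^(-k/12) from the nut.
for k in range(1, 13):
    print(f"fret {k:2d}: equal-temperament position = {1 - 2 ** (-k / 12):.4f}")
```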

2014/07/30 Ryan Williams (Stanford University)

Faster All-Pairs Shortest Paths via Circuit Complexity

The all-pairs shortest paths problem (APSP) on dense graphs with arbitrary edge weights has been known for 50 years to be solvable in O(n^3) time on the real RAM (by Floyd and Warshall). The fastest known algorithms (starting with Fredman in 1975) run in n^3/(log^c n) time for various constants c. A first step towards a positive answer to the question of whether substantially faster algorithms exist is to determine whether APSP can be solved faster than n^3/(log^c n) time for every constant c.

I will describe a new randomized method for computing the min-plus product (also known as the tropical product) of two n x n matrices, which yields a faster algorithm for APSP. On the real RAM, the algorithm runs in n^3 / 2^{Omega((log n)^{1/2})} time and works with high probability on all matrices. The algorithm applies tools from low-level circuit complexity.
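
For context, the min-plus (tropical) product that the algorithm accelerates is the following cubic-time operation; this straightforward implementation is only to fix notation and is not the algorithm from the talk.

```python
import math

def min_plus_product(A, B):
    """Min-plus (tropical) product: C[i][j] = min_k (A[i][k] + B[k][j]).
    Computing this for n x n matrices immediately gives all-pairs shortest
    paths by repeatedly squaring the weighted adjacency matrix."""
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = A[i][k]
            if a == math.inf:
                continue
            for j in range(n):
                if a + B[k][j] < C[i][j]:
                    C[i][j] = a + B[k][j]
    return C

if __name__ == "__main__":
    INF = math.inf
    D = [[0, 3, INF],
         [INF, 0, 1],
         [2, INF, 0]]                   # weighted adjacency matrix (self-distance 0)
    D2 = min_plus_product(D, D)         # shortest paths using at most 2 edges
    D4 = min_plus_product(D2, D2)       # at most 4 edges: all-pairs distances here
    print(D4)
```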

2014/07/23 Shachar Lovett (University of California, San Diego)

List Decoding Reed-Muller Codes over Small Fields

The list decoding problem for a code asks for the maximal radius up to which any ball of that radius contains only a constant number of codewords. Even for well-studied codes such as Reed-Solomon and Reed-Muller codes, the list decoding radius is not well understood. Fix a finite field $\F$. The Reed-Muller code $\RM_{\F}(n, d)$ is defined by $n$-variate degree-$d$ polynomials over $\F$. In this work, we study the list decoding radius of Reed-Muller codes over a fixed prime field $\F = \F_p$, for fixed degree $d$ and large $n$. We show that the list decoding radius is equal to the minimum distance of the code. That is, if the normalized minimum distance of $\RM_{\F}(n, d)$ is $\delta(d)$, then the number of codewords in any ball of radius $\delta(d) - \eps$ is bounded by $c = c(p, d, \eps)$, independent of $n$. This resolves a conjecture of Gopalan-Klivans-Zuckerman [STOC 2008], who, among other results, proved it for the special case $\F = \F_2$. It also extends the work of Gopalan [FOCS 2010], who proved the case $d = 2$.

We also analyze the number of codewords in balls whose radius exceeds the minimum distance of the code. We show that for $e \leq d$, the number of codewords of $\RM_{\F}(n, d)$ in a ball of radius $\delta(e) - \eps$ is bounded by $\exp(c \cdot n^{d-e})$, where $c = c(p, d, \eps)$ is independent of $n$; the dependence on $n$ is tight. This extends the work of Kaufman-Lovett-Porat [IEEE Trans. Inf. Theory 2012], who proved a similar bound for $\F_2$. The proof relies on several new ingredients, including an extension of the Frieze-Kannan weak regularity lemma to general function spaces, higher-order Fourier analysis, and an extension of the Schwartz-Zippel lemma to compositions of polynomials.
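
To make the objects concrete: over F_2 the codewords of RM(n, d) are the evaluation vectors of degree-at-most-d polynomials on F_2^n, and for tiny parameters one can enumerate them and check the minimum distance by brute force. This is only an illustration of the definitions, unrelated to the list-decoding proof.

```python
from itertools import combinations, product

def rm_codewords(n, d):
    """All codewords of the binary Reed-Muller code RM(n, d):
    evaluations of multilinear polynomials of degree <= d on F_2^n."""
    points = list(product([0, 1], repeat=n))
    # One monomial per subset of variables of size <= d (the empty set is the constant 1).
    monomials = [S for r in range(d + 1) for S in combinations(range(n), r)]
    basis = [[int(all(p[i] for i in S)) for p in points] for S in monomials]
    codewords = set()
    for coeffs in product([0, 1], repeat=len(basis)):
        word = [0] * len(points)
        for c, row in zip(coeffs, basis):
            if c:
                word = [a ^ b for a, b in zip(word, row)]
        codewords.add(tuple(word))
    return codewords

def min_distance(code):
    words = list(code)
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(words, 2))

if __name__ == "__main__":
    n, d = 4, 1
    code = rm_codewords(n, d)
    # For RM(n, d) over F_2 the minimum distance is 2^(n - d); here 2^3 = 8.
    print(len(code), "codewords, minimum distance", min_distance(code))
```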

Collaboration with Abhishek Bhowmick.

2014/07/16 Noam Nisan (Hebrew University of Jerusalem).

Economic efficiency requires interaction

We study the need for interaction between individuals to obtain approximately efficient allocations. The role of interaction in markets has been noted in economic thought, such as in Hayek's classic 1945 paper.

We consider this problem in the context of simultaneous communication complexity. We analyze the amount of simultaneous communication required to achieve a nearly efficient allocation. In particular, we consider two settings: combinatorial auctions with unit-demand bidders (bipartite matching) and combinatorial auctions with subadditive bidders. For both settings, we first show that non-interactive systems incur huge communication costs compared to interactive systems. On the other hand, we show that with limited interaction, a nearly efficient allocation can be found.

In collaboration with Shahar Dobzinski and Sigal Oren.

2014/07/09 Anupam Gupta (CMU)

The Power of Deferral: Maintaining a Constant-Competitive Steiner Tree Online (Note unusual time: talk starts at 3 pm)

In the online Steiner tree problem, we are given a metric space, vertices arrive one by one from this metric space, and we want to buy edges so as to maintain, at minimum cost, a tree that spans all the vertices that have arrived so far. It is known that the greedy algorithm maintains a tree that is O(log n)-competitive, and that this is optimal.

But suppose the decisions of the online algorithm are not irrevocable: when a new vertex arrives, in addition to adding an edge connecting the newly arrived vertex, the algorithm is allowed to swap a few previously purchased edges for other edges. Can it then maintain a better solution? We answer this question in the affirmative: if one edge swap is allowed after each arrival, the algorithm can maintain a constant-competitive tree.
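
As a point of reference for the discussion above, here is the simple greedy online algorithm (connect each arriving point to its nearest previously arrived point); this is the O(log n)-competitive baseline, not the constant-competitive swapping algorithm from the talk, and the Euclidean setting and arrival order are my own illustrative choices.

```python
import math

def greedy_online_steiner_tree(points):
    """Greedy online Steiner tree in the plane: when a point arrives,
    buy the edge to the closest point that has already arrived."""
    tree_edges = []
    cost = 0.0
    for t, p in enumerate(points):
        if t == 0:
            continue
        q = min(points[:t], key=lambda u: math.dist(u, p))  # closest earlier point
        tree_edges.append((q, p))
        cost += math.dist(q, p)
    return tree_edges, cost

if __name__ == "__main__":
    arrivals = [(0, 0), (10, 0), (5, 1), (5, -1), (2, 0.5)]
    edges, total = greedy_online_steiner_tree(arrivals)
    print("edges bought:", edges)
    print("total cost  :", round(total, 3))
```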

Joint work with Amit Kumar and Albert Gu.

2014/07/02 David Zuckerman (University of Texas at Austin)

Pseudorandomness from Shrinkage

A powerful theme in complexity theory and pseudorandomness over the past few decades has been the use of lower bounds to construct pseudorandom generators (PRGs). However, the general results following this hardness-vs-randomness paradigm suffer from a quantitative loss in parameters, and hence give nothing nontrivial for models where we know only fixed-polynomial lower bounds rather than super-polynomial ones. We show that when such lower bounds are proved using random restrictions, essentially optimal PRGs can be constructed without in turn improving the lower bounds.

More specifically, say that a family of circuits has shrinkage exponent Gamma if a random restriction leaving a p fraction of the variables unset shrinks the size of any circuit in the family by a factor of p^Gamma. Our PRG uses a seed of length roughly s^{1/(Gamma+1)} to fool circuits in the family of size s. By instantiating this generic construction, we obtain PRGs with polynomially small error for the following classes of circuits of size s, with the following seed lengths:

1. De Morgan formulas: seed length s^{1/3+o(1)}. 2. Formulas over an arbitrary basis: seed length s^{1/2+o(1)}. 3. Read-once de Morgan formulas: seed length s^{.234...}. 4. Branching programs of size s: seed length s^{1/2+o(1)}.

The previous best PRGs known for these classes used seeds of length greater than n/2 to output n bits, and worked only for size s = O(n).

Joint work with Russell Impagliazzo and Raghu Meka.

2014/6/25 Rocco Servedio (Columbia University)

A Complexity-Theoretic Perspective on Unsupervised Learning

We introduce and study a new type of learning problem for probability distributions over the n-dimensional Boolean hypercube. A learning problem in our framework is defined by a class C of Boolean functions over the hypercube. In our model, the learning algorithm is given uniform random satisfying assignments of an unknown function f in C, and its goal is to output a high-accuracy approximation of the uniform distribution over the satisfying assignments of f. We discuss the connection between having an efficient learning algorithm in this framework and avoiding the "curse of dimensionality" in more traditional density estimation problems.

Our main results are that linear threshold functions and DNF formulas, two classes of Boolean functions that are widely studied in computational learning theory, have efficient distribution learning algorithms in our model. Our algorithm for linear threshold functions runs in poly(n, 1/epsilon) time, and our algorithm for polynomial-size DNF runs in quasipolynomial time. On the negative side, under cryptographic assumptions we prove complementary hardness results, showing that learning monotone 2-CNFs, intersections of two halfspaces, and degree-2 PTFs are all hard. This suggests that our algorithms are close to the limits of efficient learnability in this model.

Joint work with Anindya De and Ilias Diakonikolas.

2014/6/18 Haim Kaplan (Tel Aviv University)

Adjacency Labeling Schemes and Induced-Universal Graphs

We describe a way of assigning labels to the vertices of any undirected graph on up to $n$ vertices, each label composed of $n/2 + O(1)$ bits, such that given the labels of two vertices, and no other information about the graph, it is possible to decide whether or not the vertices are adjacent in the graph. This is optimal up to an additive constant and constitutes the first improvement in roughly 50 years over the $n/2 + O(\log n)$ bound of Moon. As a consequence, we obtain an induced-universal graph for $n$-vertex graphs containing only $O(2^{n/2})$ vertices, which is optimal up to a multiplicative constant, solving an open problem dating back to 1968. Similar tight results are obtained for directed graphs, tournaments, and bipartite graphs.
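
To illustrate what an adjacency labeling scheme is, here is the trivial scheme using about n bits per label (far from the n/2 + O(1) bits achieved in the talk): each vertex's label is its index together with its row of the adjacency matrix, and adjacency is decided from two labels alone.

```python
def trivial_labels(adj_matrix):
    """Trivial adjacency labeling scheme with n + O(log n) bits per vertex:
    label(v) = (v, row v of the adjacency matrix). The talk's result shows
    that n/2 + O(1) bits suffice."""
    return {v: (v, tuple(row)) for v, row in enumerate(adj_matrix)}

def adjacent(label_u, label_v):
    """Decide adjacency from the two labels only (no access to the graph)."""
    u, row_u = label_u
    v, _ = label_v
    return bool(row_u[v])

if __name__ == "__main__":
    # 4-cycle 0-1-2-3-0
    A = [[0, 1, 0, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [1, 0, 1, 0]]
    labels = trivial_labels(A)
    print(adjacent(labels[0], labels[1]))  # True
    print(adjacent(labels[0], labels[2]))  # False
```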

June 17, 2014 Amit Sahai (UCLA)

Please note the unusual day, time, and location: Tuesday, 2:30-3:30 pm in Titan.

Can someone keep a secret if an adversary can see exactly how her brain behaves, even while she is thinking about that secret? The program obfuscation problem asks the analogous question for software: can software keep a secret embedded in its code hidden from an adversary who obtains the software's entire source code? The secret must remain hidden no matter how the adversary modifies or runs the code. For decades, achieving this goal for general programs seemed out of reach.

However, our recent work (FOCS 2013) gave the first candidate construction of a secure general-purpose program obfuscator. These new constructions are based on new mathematical ideas. But precisely because these ideas are new, we must ask: are they secure? In this talk, we will discuss recent projects addressing this question.

Simple and optimal interactive coding robust to noise

Interactive coding schemes add redundancy to an interactive protocol (a conversation) so that it can be run over a noisy channel that corrupts some fraction of the transmitted symbols.

The surprising fact that coding schemes tolerating a constant fraction of errors exist at all is due to Schulman. His coding scheme achieves this feat using tree codes, complicated structures whose existence Schulman proved, but for which no efficient construction or encoding/decoding procedures are known. Until the recent work of Kol and Raz [STOC '13], all such schemes sacrificed a large (unspecified) constant fraction of the communication rate. Kol and Raz showed that if the errors are random, the achievable rate approaches 1 as the error rate goes to 0, and they determined the optimal dependence on the error rate in that setting.

In this talk, we show that this picture persists even for adversarial errors. In particular, for random and oblivious channels we achieve a rate of 1 - Theta(sqrt(eps)), and for fully adversarial channels we give a coding scheme achieving a rate of 1 - Theta(sqrt(eps log log(1/eps))). We conjecture that these bounds are tight. The coding schemes are extremely natural and simple: essentially, the two parties carry out the original conversation (with no coding!) and intersperse it with short exchanges of hash values; whenever the hash values do not match, the parties backtrack.

This is an interactive talk with a whiteboard. Please join us!

2014/6/4 No theory seminar (STOC week)

2014/5/28 Paris Siminelakis (Stanford University)

Models of convex random graphs

Realistic random graph models are important both for design purposes (predicting the average performance of different protocols/algorithms) and for network inference (extracting latent group memberships, clustering, etc.). To date, there are thousands of papers defining various random graph models. I will present a principled framework for deriving random graph models by dramatically generalizing the Erdos-Renyi approach used to define the classical model G(n, m). Our central principle is to study uniform measures on symmetric sets of graphs, i.e., sets that are invariant under a set of transformations. Our main contribution is to derive natural sufficient conditions under which a uniform measure on a symmetric set of graphs (i) asymptotically converges to a distribution in which edges occur independently, and (ii) allows the probability of each edge to be computed from the defining properties via the solution of an optimization problem. If time permits, I will also present an application of this work to resolving the robustness of Kleinberg's augmentation scheme for navigable graphs.

Based on joint work with Dimitris Achlioptas.

May 21, 2014 Chen Avin (Ben-Gurion University; visiting Brown University and ICERM)

Homophily and the Glass Ceiling Effect in Social Networks

A glass ceiling can be defined as "an invisible but unbreakable wall that prevents minorities and women from climbing the corporate ladder, regardless of their qualifications and achievements". Although undesirable, it is well known that many societies and organizations exhibit a glass ceiling.

In this paper, we formally define and study the glass ceiling phenomenon in social networks and provide a natural mathematical model that (partially) explains it. We propose a biased preferential attachment model with two types of nodes, based on three well-known social phenomena: i) a female minority in the network, ii) the rich get richer (preferential attachment), and iii) homophily (affinity for similar others). We demonstrate that our model exhibits a strong glass ceiling effect and that all three conditions are necessary, i.e., removing any one of them causes the model to no longer exhibit a glass ceiling.
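
A minimal simulation combining the three ingredients above (a minority group, preferential attachment, and homophily) can reproduce the qualitative degree disparity; this is my own toy variant with parameter values of my choosing, not the exact model or measurements from the paper.

```python
import random

def biased_preferential_attachment(n, minority_fraction, rho, seed=0):
    """Toy biased preferential attachment: each new node gets a color
    (minority with prob. minority_fraction), picks an existing endpoint
    proportionally to degree, and rejects the edge with prob. (1 - rho)
    when the colors differ (homophily)."""
    rng = random.Random(seed)
    colors = ["red" if rng.random() < minority_fraction else "blue" for _ in range(n)]
    degree = [0] * n
    degree[0] = degree[1] = 1
    endpoints = [0, 1]                 # multiset of edge endpoints, for degree-proportional sampling
    for v in range(2, n):
        while True:
            u = rng.choice(endpoints)  # proportional to degree
            if colors[u] == colors[v] or rng.random() < rho:
                break                  # accept the edge
        degree[u] += 1
        degree[v] += 1
        endpoints.extend([u, v])
    return colors, degree

if __name__ == "__main__":
    colors, degree = biased_preferential_attachment(n=20000, minority_fraction=0.3, rho=0.5)
    ranked = sorted(zip(colors, degree), key=lambda t: -t[1])
    for c in ("red", "blue"):
        ds = [d for col, d in zip(colors, degree) if col == c]
        top_share = sum(col == c for col, _ in ranked[:100])
        print(c, "avg degree:", round(sum(ds) / len(ds), 2),
              " nodes among top-100 degrees:", top_share)
```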

Furthermore, we provide empirical evidence from a network of researchers (based on DBLP data) that exhibits all of the above properties: a female minority, preferential attachment, homophily, and a glass ceiling.

Joint work with Barbara Keller, Zvi Lotker, Claire Mathieu, David Peleg, and Yvonne-Anne Pignolet.

2014/5/14 Swastik Kopparty

Simultaneous Approximation of Constraint Satisfaction Problems

Given k collections of 2SAT clauses over the same set of variables V, can we find an assignment to the variables in V that satisfies a large fraction of the clauses in each collection? We consider such simultaneous constraint satisfaction problems and design the first nontrivial approximation algorithms for them.

Our main result is that for every CSP F, for k up to a fixed polylogarithmic bound in n, there is a polynomial-time constant-factor "Pareto" approximation algorithm for simultaneous Max-F-CSP instances. In contrast, for every nontrivial Boolean CSP, once k is polylogarithmically large in n, no nonzero approximation factor for simultaneous Max-F-CSP instances can be achieved in polynomial time (assuming the Exponential Time Hypothesis).

Joint work with Amey Bhangale and Sushant Sachdeva.

2014/5/7 Shubhangi Saraf (Rutgers University)

Lower bounds for bounded-depth arithmetic circuits

In recent years, many exciting results have been announced on depth reduction for arithmetic circuits and on lower bounds for bounded-depth arithmetic circuits. In this talk, I will survey these results and describe the main challenges and open directions in this area.

2014/4/23 Motty Perry (Hebrew University of Jerusalem)

Implementing the "Wisdom of the Crowd"

We study a novel mechanism design model in which agents arrive sequentially and each chooses an action from a set of actions with unknown rewards. The information revealed by the planner affects the agents' incentives to explore and generate new information. We characterize the optimal information disclosure policy of a planner whose goal is to maximize social welfare. One interpretation of our result is how to implement what is known as the "wisdom of the crowd". This problem has become increasingly relevant with the rapid expansion of the Internet over the past decade.

Joint work with Ilan Kremer and Yishay Mansour.

2014/4/16 Greg Valiant (Stanford University)

An Automatic Inequality Prover and Instance Optimal Identity Testing

This talk has two parts. In the first part, we consider the problem of verifying the identity of a distribution: given the description of a distribution P over discrete support, and i.i.d. samples from an unknown distribution Q, can we distinguish, with high probability, the case P = Q from the case that the total variation distance (L1 distance) between P and Q is at least eps? In joint work with Paul Valiant, we resolve this question up to constant factors, in terms of an expression depending on P: there is a tester that distinguishes the two cases with success probability 2/3 using f(P, eps) samples, while no tester can distinguish the case P = Q from the case of distance at least c*eps when given c'*f(P, eps) samples. The function f(P, eps) is somewhat complicated, but it is upper bounded by the 2/3-norm of P divided by eps^2. This result significantly generalizes and tightens previous results: since the 2/3-norm of any distribution supported on at most n elements is at most sqrt(n), it immediately implies that O(sqrt(n)/eps^2) samples suffice for such distributions.
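
To see why the 2/3-norm bound above can be much smaller than sqrt(n)/eps^2, one can compute the quantity ||P||_{2/3} / eps^2 for a couple of distributions; this is only a numerical illustration of the bound's statement (with distributions of my choosing), not of the tester itself.

```python
def norm_two_thirds(p):
    """The 2/3-(quasi)norm of a distribution p: (sum_i p_i^(2/3))^(3/2)."""
    return sum(pi ** (2 / 3) for pi in p) ** 1.5

def sample_bound(p, eps):
    """The upper bound from the talk on the identity-testing sample
    complexity: ||p||_{2/3} / eps^2 (up to constant factors)."""
    return norm_two_thirds(p) / eps ** 2

if __name__ == "__main__":
    n, eps = 10_000, 0.1
    uniform = [1 / n] * n
    weights = [1 / (i + 1) ** 2 for i in range(n)]   # heavy-tailed (Zipf-like)
    total = sum(weights)
    zipf = [w / total for w in weights]

    print("sqrt(n)/eps^2             :", n ** 0.5 / eps ** 2)
    print("uniform: ||p||_{2/3}/eps^2:", round(sample_bound(uniform, eps)))
    print("zipf   : ||p||_{2/3}/eps^2:", round(sample_bound(zipf, eps)))
```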

In the second part of the talk, I will focus on the main analysis tool used to obtain the testing results. The analysis of our simple testing algorithm involves some hairy inequalities. To enable this analysis, we give a complete characterization of a class of inequalities that generalizes the Cauchy-Schwarz and Hölder inequalities and the monotonicity of L_p norms. Our characterization is perhaps unusual in that it relies on a linear program whose output is a proof that would otherwise have to be discovered by hand through trial and error. Such a characterization did not previously appear in the literature, and the computer proved very useful in finding it.

2014/4/02 Thomas Vidick (Simons Institute)

Fully Device-Independent Quantum Key Distribution

Quantum cryptography promises a level of security that cannot be reproduced in the classical world. Can this security be guaranteed even when the quantum devices on which the protocol is based are untrusted?

This central challenge in quantum cryptography dates back to the early 1990s, when the task of achieving device-independent quantum key distribution (DIQKD) was first formulated.

In this talk, we provide a positive answer to this challenge by presenting a robust protocol for DIQKD and rigorously proving its security. The proof of security is based on a fundamental property of quantum entanglement called monogamy. The resulting protocol is robust: it achieves a linear key rate and tolerates a constant noise rate in the devices, while assuming only that the devices are modeled by the laws of quantum mechanics and are spatially isolated from each other and from the adversary's laboratory.

This talk will be presented in an introductory manner so that it can be understood without prior knowledge of quantum information or cryptography.

Based on joint work with Umesh Vazirani.

2014/3/26 No seminar

2014/3/19 No seminar

Fast Affine Template Matching

In this work, we consider approximately matching a template to a grayscale image under affine transformations. We give theoretical results and find that the resulting algorithm is surprisingly successful in practice.

Given a grayscale template M_1 of dimensions n_1 x n_1 and a grayscale image M_2 of dimensions n_2 x n_2, our goal is to find a transformation that maps pixels of M_1 to pixels of M_2 while minimizing the sum-of-absolute-differences error. We present a sublinear algorithm that approximates this problem, i.e., it examines as few pixels as possible from both images and outputs a transformation that comes close to minimizing the error.

Our main contribution is an algorithm for a natural family of images, which we call smooth images. For such images, we approximate the distance between the images up to an additive epsilon error, using a number of queries that depends only polynomially on 1/epsilon and n_2/n_1.
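
The core idea of estimating the sum-of-absolute-differences error from a small random sample of pixels, rather than from all of them, can be sketched as follows; this is a generic Monte Carlo illustration of my own, not the algorithm or query bound from the paper.

```python
import random

def exact_sad(img_a, img_b):
    """Sum of absolute differences between two equal-size grayscale images."""
    return sum(abs(a - b) for row_a, row_b in zip(img_a, img_b)
                          for a, b in zip(row_a, row_b))

def sampled_sad(img_a, img_b, n_samples, seed=0):
    """Estimate the SAD by querying only n_samples random pixel pairs and
    rescaling; the additive error shrinks as n_samples grows."""
    rng = random.Random(seed)
    h, w = len(img_a), len(img_a[0])
    total = 0
    for _ in range(n_samples):
        i, j = rng.randrange(h), rng.randrange(w)
        total += abs(img_a[i][j] - img_b[i][j])
    return total * (h * w) / n_samples

if __name__ == "__main__":
    rng = random.Random(42)
    a = [[rng.randint(0, 255) for _ in range(200)] for _ in range(200)]
    b = [[min(255, v + rng.randint(0, 10)) for v in row] for row in a]
    print("exact SAD  :", exact_sad(a, b))
    print("sampled SAD:", round(sampled_sad(a, b, n_samples=2000)))
```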

An implementation of this sublinear algorithm works surprisingly well in practice. We performed several experiments on three different datasets and obtained very good results, showing robustness to noise and the ability to match real-world patterns in images.

Collaboration with Simon Korman (TAU), Daniel Reichman (Weizmann), and Shai Avidan (TAU).

2014/3/5 No seminar

2014/2/26 No seminar

2014/2/21 Anup Rao (University of Washington) Note Unusual day and time (Thursday, 10:30am)

Circuits with large fan-ins

We consider Boolean circuits in which each gate may compute an arbitrary Boolean function of k other gates, for a parameter k. We give an explicit function f: {0,1}^n -> {0,1} that requires at least Omega(log^2 n) non-input gates in this model when k = 2n/3. When circuits are restricted to depth 2, we prove a stronger lower bound of n^{Omega(1)}, and when restricted to formulas, our lower bound strengthens to Omega(n^2 / (k log n)) gates.

Our model is related to several well-known approaches to proving lower bounds in complexity theory. Optimal lower bounds for the number-on-forehead model in communication complexity, for bounded-depth circuits, or for extractors for varieties over small fields would imply strong lower bounds in our model. On the other hand, new lower bounds for our model would yield new time-space tradeoffs for branching programs and impossibility results for circuits with linear size and logarithmic depth (fan-in 2). In particular, our lower bound gives an alternative proof of the best-known time-space tradeoff for oblivious branching programs.

Collaboration with Pavel Hrubes.

Candidate Indistinguishability Obfuscation and Functional Encryption for All Circuits

In this work, we study indistinguishability obfuscation and functional encryption for general circuits:

Indistinguishability obfuscation requires that, given any two equivalent circuits $C_0$ and $C_1$ of similar size, the obfuscations of $C_0$ and $C_1$ be computationally indistinguishable.

In functional encryption, ciphertexts encrypt inputs $x$, and keys are issued for circuits $C$. Using the key $SK_C$ to decrypt a ciphertext $CT_x = \mathrm{Enc}(x)$ reveals the value $C(x)$ but reveals nothing else about $x$. Furthermore, no collusion of secret-key holders should be able to learn anything more than the union of what they can each learn individually.

We give constructions of indistinguishability obfuscation and functional encryption that support all polynomial-size circuits. We accomplish this goal in three steps:

We describe a candidate construction for indistinguishability obfuscation for $NC^1$ circuits. The security of this construction is based on a new algebraic hardness assumption. Our candidate and assumption use a simplified variant of multilinear maps, which we call Multilinear Jigsaw Puzzles.

We show how to use indistinguishability obfuscation for $NC^1$ circuits, together with fully homomorphic encryption (with decryption in $NC^1$), to achieve indistinguishability obfuscation for all circuits.

Finally, we show how to use indistinguishability obfuscation for circuits, together with public-key encryption and non-interactive zero-knowledge proofs, to achieve functional encryption for all circuits. The functional encryption scheme we construct also enjoys succinct ciphertexts, which enables several other applications.

Joint work with Sanjam Garg, Craig Gentry, Shai Halevi, Amit Sahai, and Brent Waters.

January 22, 2014 George Varghese (Microsoft Research)

Reconciling Differences

If you and I each have a million songs and most of them are the same, how can we efficiently communicate which ones differ? In this talk, I will describe a novel and practical algorithm (joint work with D. Eppstein, M. Goodrich, and F. Uyeda) that computes the set difference using communication proportional to the size of the difference, linear computation, and low latency. A key ingredient is a new estimator for the size of the set difference that outperforms the standard min-wise sketch estimator when the set difference is small. The resemblance to the "peeling algorithms" used in Tornado codes is not surprising, since there is a reduction from set reconciliation to coding. In the second part, I will describe generalizations to sequence reconciliation under edit distance (joint work with J. Ullman) and to reconciliation on graphs (a generalization of the well-known rumor-spreading problem; joint work with Kannan). The "Steiner" version of the graph problem suggests a new network coding problem. If time permits, I will describe a simple connection showing that the underlying data structure can also be used for graph encoding.
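
As a rough sketch of the difference-digest idea (each side builds an invertible-Bloom-filter-style table, the tables are subtracted, and the symmetric difference is recovered by peeling), here is a toy implementation; the hash functions, table size, and failure handling are simplified choices of mine and not the paper's tuned construction.

```python
import hashlib

NUM_HASHES = 3

def _cells(key, table_size):
    """Indices of the cells a key is mapped to."""
    return [int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % table_size
            for i in range(NUM_HASHES)]

def _checksum(key):
    return int(hashlib.sha256(f"chk:{key}".encode()).hexdigest(), 16) & 0xFFFFFFFF

def build_digest(items, table_size):
    """Each cell stores [count, XOR of keys, XOR of key checksums]."""
    table = [[0, 0, 0] for _ in range(table_size)]
    for key in items:
        for c in _cells(key, table_size):
            table[c][0] += 1
            table[c][1] ^= key
            table[c][2] ^= _checksum(key)
    return table

def subtract(ta, tb):
    return [[a0 - b0, a1 ^ b1, a2 ^ b2]
            for (a0, a1, a2), (b0, b1, b2) in zip(ta, tb)]

def peel(table):
    """Recover the symmetric difference from a subtracted digest by repeatedly
    peeling 'pure' cells (count +-1 whose checksum matches the stored key).
    May return partial sets if the table is too small for the difference."""
    only_a, only_b = set(), set()
    size, progress = len(table), True
    while progress:
        progress = False
        for count, key_xor, chk_xor in table:
            if count in (1, -1) and _checksum(key_xor) == chk_xor:
                (only_a if count == 1 else only_b).add(key_xor)
                for c in _cells(key_xor, size):
                    table[c][0] -= count
                    table[c][1] ^= key_xor
                    table[c][2] ^= _checksum(key_xor)
                progress = True
    return only_a, only_b

if __name__ == "__main__":
    A = set(range(1000))
    B = (A - {3, 57, 999}) | {2000, 2001}
    size = 60   # a small multiple of the expected difference size
    diff = subtract(build_digest(A, size), build_digest(B, size))
    print(peel(diff))   # elements only in A, elements only in B
```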

2014/1/15 Iftach Haitner (Tel Aviv University)

Coin Flipping of Any Constant Bias Implies One-Way Functions

We show that the existence of a coin-flipping protocol that is secure against any non-trivial constant bias (e.g., .499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias at most 0.207. In contrast to the result of Haitner and Omri, our result also holds for weak coin-flipping protocols.

Joint work with Itay Berman and Aris Tentes.

*** Winter vacation from November 21, 2013 to January 14, 2014.

November 20, 2013 Daniel Reichman (Weizmann Institute of Science, Israel)

Smoothed Analysis of Connected Graphs

The main paradigm of smoothed analysis on graphs is as follows: take an arbitrary large graph G from a given class of graphs and perturb it randomly (typically by adding a few random edges to G). The resulting graph often has much nicer properties than a worst-case graph from the class.

In this talk, we discuss smoothed analysis of trees, or equivalently, of connected graphs. A connected graph G on n vertices can have a very large diameter, a very long mixing time, and no long paths. The situation changes dramatically when εn random edges are added to G; with high probability the perturbed graph has:

- edge expansion at least c/log n;

- diameter O(log n);

- vertex expansion at least c/log n;

- a path of linear length;

- mixing time O(log^2 n).

(The last three results assume that the maximum degree of the base graph G is bounded.) All of the above estimates are asymptotically tight. Joint work with coauthors from Tel Aviv and Cambridge.
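
The diameter claim above is easy to check experimentally with networkx (a quick empirical illustration with parameters of my choosing, not a proof); starting from a path, adding roughly εn random edges collapses the diameter from n-1 to something logarithmic.

```python
import random
import networkx as nx

def perturb(G, eps, seed=0):
    """Add roughly eps * n uniformly random edges to the graph G."""
    rng = random.Random(seed)
    H = G.copy()
    nodes = list(G.nodes())
    for _ in range(int(eps * len(nodes))):
        u, v = rng.sample(nodes, 2)
        H.add_edge(u, v)
    return H

if __name__ == "__main__":
    n = 2000
    path = nx.path_graph(n)           # a connected graph with huge diameter
    print("diameter before perturbation:", nx.diameter(path))   # n - 1
    smoothed = perturb(path, eps=0.1)
    print("diameter after adding ~0.1*n random edges:", nx.diameter(smoothed))
```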

November 13, 2013 Ankit Sharma (CMU)

Multiway Cut

The multiway cut problem generalizes the min-cut problem to two or more terminals. Formally, given a graph and a set of terminals, the goal is to assign each vertex to one of the terminals while minimizing the number of "cut" edges, i.e., edges whose endpoints are assigned to different terminals. The special case of two terminals is the classic max-flow/min-cut problem. With three or more terminals, the problem becomes NP-hard.

This problem has a rich history of approximation algorithms, beginning with a 2-approximation by Dahlhaus et al. in 1994. A significant improvement came in a paper by Calinescu et al., which introduced a geometric relaxation of the problem and gave a 1.5-approximation. It was subsequently shown that, assuming the Unique Games Conjecture, the integrality gap of this relaxation is the best achievable approximation factor. Since then, the rounding schemes for this relaxation have been improved; in recent work, Buchbinder et al. [STOC 2013] introduced a new rounding scheme that gives a 1.32388-approximation. In joint work with Jan Vondrak, we first analyze the best combination of the rounding schemes used by Buchbinder et al. and show that it cannot achieve a factor better than 1.30902 (= (3+sqrt(5))/4). We then introduce a new rounding scheme and show that the new combination of rounding schemes achieves an approximation factor of approximately 1.297. Under UGC, it is hard to approximate below 1.14.

This is a collaboration with Jan Vondrak.

Bio: Ankit Sharma is a graduate student at Carnegie Mellon University. He is advised by Avrim Blum and Anupam Gupta. His research interests are approximation algorithms and algorithmic game theory.

Bandits with Knapsacks

The multi-armed bandit problem is the predominant theoretical model for exploration-exploitation tradeoffs in machine learning, with countless applications ranging from medical trials to communication networks, web search, advertising, and dynamic pricing. In many of these applications, the learner may be constrained by one or more supply (or budget) limits in addition to the usual time-horizon constraint. The literature lacks a general model that encompasses these kinds of problems. We propose such a model, called "bandits with knapsacks", which combines aspects of stochastic integer programming with online learning. A distinctive feature of our problem, compared to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems.

We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel "balanced exploration" paradigm, while the other is a primal-dual algorithm with multiplicative updates. Furthermore, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors.

Joint work with Robert Kleinberg and Alex Slivkins. Published in FOCS 2013.

10/30/2013 No seminar.

10/25/2013 Adam Smith (Penn State) Note the unusual location (Telstar) and time (1:30 pm).

Coupled-Worlds Privacy: Exploiting Adversarial Uncertainty in Statistical Data Privacy

In this talk, we present a new framework for defining privacy in statistical databases that allows us to reason about, and exploit, the adversary's uncertainty about the data. Roughly, our framework requires indistinguishability between a real world, in which the mechanism is computed over the real dataset, and an ideal world, in which a simulator outputs some function of a "scrubbed" version of the dataset (e.g., one in which an individual's data has been removed). In both worlds, the underlying dataset is drawn from the same distribution in some class (specified as part of the definition), which models the adversary's uncertainty about the dataset.

We argue that our framework provides meaningful guarantees in a broader range of settings compared to previous attempts to model privacy in the presence of adversarial uncertainty. We present several natural, "noiseless" mechanisms that satisfy our definitional framework under realistic assumptions about the distribution of the underlying data.

To be published in FOCS 2013, in collaboration with Raef Bassily, Adam Groce, and Jonathan Katz.

October 24, 2013 Omri Weinstein (Princeton University) Note the unusual place (Luna) and time (10:30 am)

Information Complexity and Applications

Over the past 30 years, communication complexity has found applications in almost every field of computer science and is one of the few known methods for proving unconditional lower bounds. Therefore, the development of tools in communication complexity is a promising approach for making progress on other computational models, such as circuit complexity, streaming, data structures, and privacy.

A remarkable example of such a tool is information theory, introduced by Shannon in the late 1940s in the context of the one-way data transmission problem. Shannon's work showed that the amortized cost of transmitting a random message is equal to the amount of information it contains. However, this compression theory does not readily extend to interactive settings, in which two (or more) parties must engage in a multi-round conversation to complete a task.

The goal of an ongoing line of research is to extend this theory, develop the right tools, and understand how information behaves in interactive setups such as the model of communication complexity. In this introductory talk, I will give an overview of information complexity, the interactive analogue of Shannon's theory, and describe some of the interesting applications we have found in this emerging field, including an essentially exact bound (approximately 0.48n) on the communication complexity of the set disjointness function and limits on parallel computation.
October 23, 2013 Jonathan Ullman (Harvard) Note the unusual location (Luna) (time unchanged)
Fingerprinting Codes and the Price of Approximate Differential Privacy
We show new lower bounds on the sample complexity of (eps, delta)-differentially private algorithms that accurately answer large sets of counting queries. A counting query on a database D in ({0,1}^d)^n has the form "What fraction of the individual records in the database satisfy the property q?" We show that in order to answer an arbitrary set Q of >> nd counting queries on D to within error ±alpha, it is necessary that n ≥ Ω̃(sqrt(d) log|Q| / (alpha^2 eps)). This lower bound is optimal up to polylogarithmic factors, as demonstrated by the Private Multiplicative Weights algorithm of Hardt and Rothblum (FOCS '10). In particular, our lower bound is the first to show that the sample complexity required for both accuracy and (eps, delta)-differential privacy is asymptotically larger than what is required merely for accuracy, which is O(log|Q| / alpha^2). In addition, we show that our lower bound holds for the specific case of k-way marginal queries (where |Q| ≈ (2d)^k) when alpha is a constant.

Our results rely on the existence of short fingerprinting codes (Boneh-Shaw, CRYPTO '95), which we show are closely connected to the sample complexity of differentially private data release. We also give a new method for combining certain types of sample-complexity lower bounds into stronger lower bounds.

In collaboration with Mark Bun and Salil Vadhan.

October 16, 2013 Justin Thaler (Simons Institute, UC Berkeley)

Time-Optimal Interactive Proofs for Circuit Evaluation

Recently, considerable attention has been paid to the development of protocols for verifiable computation. These protocols allow a verifier with weak computational power to offload computations to a stronger but untrusted peer, while at the same time ensuring that the prover performed the computation correctly. Despite considerable progress, existing implementations have not yet reached full practical utility. The main bottleneck is usually the additional effort required by the prover to return a response with guaranteed correctness.

This talk describes recent work that addresses this bottleneck by drastically reducing the runtime of Prover using a powerful interactive proof protocol developed by Goldwasser, Kalai and Rothblum (GKR) and improved and implemented by Cormode, Mitzenmacher and Thaler.

This talk provides a detailed technical overview of the GKR protocol and the algorithmic techniques underlying its efficient implementation.
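
One of the algorithmic workhorses inside the GKR protocol is the sum-check protocol. The following self-contained sketch (my own simplified version, over a prime field, with the polynomial given as a black-box evaluator) shows the prover/verifier interaction; it is meant to convey the idea, not the optimized implementation discussed in the talk.

```python
import random
from itertools import product

P = 2**61 - 1          # prime modulus; all arithmetic is over the field F_P

def cube(k):
    """All points of the boolean cube {0,1}^k."""
    return list(product([0, 1], repeat=k))

def interp_eval(ys, x):
    """Evaluate at x the unique polynomial of degree < len(ys) that takes
    the value ys[i] at the point i (Lagrange interpolation over F_P)."""
    total = 0
    for i, yi in enumerate(ys):
        num, den = 1, 1
        for j in range(len(ys)):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def sumcheck(f, n_vars, deg):
    """Sum-check protocol for the claim S = sum_{x in {0,1}^n} f(x), where f
    is an oracle for an n-variate polynomial of degree <= deg in each
    variable. Prover and verifier are interleaved here for brevity; the
    verifier does only O(n * deg) field operations plus one oracle call."""
    claimed = sum(f(*x) for x in cube(n_vars)) % P       # prover's claim
    r, current = [], claimed
    for i in range(n_vars):
        # Round i: prover sends g_i(t) = sum over boolean suffixes of
        # f(r_1, ..., r_{i-1}, t, suffix), as evaluations at t = 0, ..., deg.
        g = [sum(f(*r, t, *rest) for rest in cube(n_vars - i - 1)) % P
             for t in range(deg + 1)]
        # Verifier checks g_i(0) + g_i(1) equals the running claim ...
        if (interp_eval(g, 0) + interp_eval(g, 1)) % P != current:
            return claimed, False
        # ... then sends fresh randomness r_i and updates the claim to g_i(r_i).
        r_i = random.randrange(P)
        r.append(r_i)
        current = interp_eval(g, r_i)
    # Final check: a single evaluation of f at the random point r.
    return claimed, f(*r) % P == current

if __name__ == "__main__":
    # f(x1, x2, x3) = x1*x2 + 2*x3 + x1*x3, degree <= 1 in each variable.
    f = lambda x1, x2, x3: (x1 * x2 + 2 * x3 + x1 * x3) % P
    print(sumcheck(f, n_vars=3, deg=1))   # (12, True)
```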

October 9, 2013 Nikhil Srivastava (MSR India)

(This talk will be 2 hours long with a 15-minute break. The first half of the talk will cover the main content.)

Interlacing Families, Ramanujan Graphs, and the Kadison-Singer Problem

We introduce a new existence argument based on random polynomials and use it to prove the following two results.

(1) Expander graphs are very sparse graphs that are nonetheless very well connected, in the sense that the adjacency matrix has a large spectral gap. For d-regular graphs there is a limit to how large this gap can be, and graphs that achieve this limit are called Ramanujan graphs. An elegant number-theoretic construction by Lubotzky-Phillips-Sarnak and Margulis shows that infinite families of Ramanujan graphs exist for every d = p + 1, where p is prime. We prove that there exist infinite families of bipartite Ramanujan graphs of every degree greater than 2. We do this by proving a variant of a conjecture of Bilu and Linial that every graph has a good 2-lift.

(2) The Kadison-Singer problem is a problem in operator theory that arose in an attempt to make mathematically rigorous certain claims in Dirac's formulation of quantum mechanics. Over the course of several decades, this problem turned out to be equivalent to many discrepancy-type conjectures about finite matrices, with applications in signal processing, harmonic analysis, computer science, and more. We prove a strong variant of a conjecture of Nik Weaver that any set of vectors satisfying certain mild conditions can be partitioned into two sets, each of which spectrally approximates the whole set.

Both proofs rely on two key ingredients: a new existence argument that reduces the existence of the desired object to bounding the roots of the expected characteristic polynomial of a certain random matrix, and a systematic technique for proving sharp bounds on the roots of such polynomials. The technique is largely elementary, drawing on tools from the theory of real stable polynomials.

In collaboration with Adam Marcus and Dan Spielman.

October 2, 2013 Eli Gafni (UCLA)

Adaptive Register Allocation with a Linear Number of Registers

We give an adaptive algorithm in which a process, using multi-writer multi-reader registers, acquires exclusive write access to its own single-writer multi-reader register. This is the first such algorithm that uses a number of registers linear in the number of participating processes. Previous adaptive algorithms required a superlinear number of registers.

Collaboration with Carole Delporte-Gallet (Paris 7), Hugues Fauconnier (Paris 7) and Leslie Lamport (MSR-SVC).

1 October 2013 Milan Vojnovic (MSR Cambridge). Note: Unusual day (Tuesday) and time (10:30am)

Cooperation and efficiency in utility maximization games

We consider a framework for studying the effect of cooperation on the quality of outcomes in utility maximization games. This class of games includes, as a special case, games in which individuals strategically invest effort in a set of projects. An important feature of such settings is the effect that deviations by coalitions of strategic players have on the value generated. In this talk, we discuss how the recently developed smoothness framework can be used to derive price of anarchy bounds for utility games. In particular, we discuss a new notion of coalitional smoothness and show how it yields bounds on the strong price of anarchy in utility games.

This talk is based on joint work with Yoram Bachrach, Vasilis Syrgkanis, and Eva Tardos.

2013/9/25 Vijay V. Vazirani (Georgia Institute of Technology)

Dichotomies in equilibrium computation: markets provide a surprise

Equilibrium computation has been one of the most important additions to algorithms and complexity theory over the past decade, and its complexity-theoretic properties are quite different from those of optimization problems.

Our contribution to this developing theory can be summarized in the following thesis: natural equilibrium computation problems tend to exhibit striking dichotomies. The dichotomy for Nash equilibrium has been known for some time, namely the qualitative difference between 2-Nash and k-Nash for k > 2. We establish a dichotomy for market equilibria.

Along the way, we need to define the notion of Leontief-free functions, which help capture the joint utility of a set of goods that are substitutes, such as bread and bagels. When goods are complements, such as bread and butter, the classical Leontief function does a fine job. Surprisingly, for the former case, utility functions had previously been defined only for special cases in economics, e.g., CES utility functions. A new min-max relation supports the claim that our notion is well founded.

We were led to these ideas by the algorithmic approach to market equilibria.

Joint work with Jugal Garg and Ruta Mehta.

2013/9/18 Jelani Nelson (Harvard University)

OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings

An oblivious subspace embedding (OSE) is a distribution over matrices S such that, for any low-dimensional subspace V, with high probability over the choice of S, ||Sx||_2 approximates ||x||_2 (up to 1+eps multiplicative error) simultaneously for all x in V. The use of OSEs to speed up algorithms for several numerical linear algebra problems was pioneered by Sarlós in 2006. Problems that benefit from OSEs include approximate least squares regression, low-rank approximation, l_p regression, approximating leverage scores, and constructing good preconditioners.

We give a class of OSE distributions we call "oblivious sparse norm-approximating projections" (OSNAP) that yields matrices S which are both very sparse and have few rows, improving upon recent work in this area by Clarkson and Woodruff (STOC '13). In particular, S can be taken to have just one non-zero entry per column, or, alternatively, a small polylogarithmic number of non-zero entries per column in exchange for fewer rows. For example, for least squares regression with an n x d matrix A where n >> d, minimizing ||Ax - b||_2 up to a constant factor, we obtain an algorithm running in time O(nnz(A) + d^ω), where nnz(A) is the number of non-zero entries of A and ω is the exponent of square matrix multiplication.

Our main technical result is essentially a Bai-Yin-type theorem in random matrix theory, which seems to be of independent interest: for any fixed U in R^{n x d} with orthonormal columns, and for a random sparse S with appropriately chosen entries and a sufficient number of rows, all singular values of SU lie in the interval [1-eps, 1+eps] with good probability.
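A minimal numerical illustration of a statement of this flavor, with made-up parameter choices rather than those from the paper: build a sparse sketch S with a single random +/-1 entry per column and check that the singular values of SU cluster around 1 for a random orthonormal U.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10000, 10, 2000   # ambient dimension, subspace dimension, sketch rows (illustrative)

# U: n x d with orthonormal columns, spanning a random d-dimensional subspace of R^n.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))

# Apply a sparse sketch S (one random +/-1 per column) to U without materializing S:
# row i of U is added, with a random sign, to a uniformly random row of SU.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
SU = np.zeros((m, d))
np.add.at(SU, rows, signs[:, None] * U)

sv = np.linalg.svd(SU, compute_uv=False)
print(sv.min(), sv.max())   # all singular values of SU are close to 1
```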

Joint work with Huy L. Nguyen (Princeton University).

September 11, 2013 Graham Cormode

Small summaries for big data

When dealing with big data, it is often necessary to look at a small summary to get the big picture. Over recent years, many new techniques have been developed that allow important properties of large data distributions to be extracted from compact and easy-to-build summaries. This talk gives examples of different types of summaries, including samples, sketches, and special-purpose summaries. We conclude by outlining directions for further development and adoption of such summaries.
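As one concrete, standard example of such a summary (the Count-Min sketch of Cormode and Muthukrishnan; the parameters below are arbitrary illustrative choices), here is a small sketch supporting approximate frequency queries over a stream:

```python
import random

class CountMinSketch:
    """Approximate frequency counts; with properly chosen pairwise-independent hashes,
    each estimate overcounts by at most (2/width)*stream_length with probability
    at least 1 - 2^(-depth). Python's built-in hash is used here as a stand-in."""
    def __init__(self, width=272, depth=5, seed=0):
        self.width, self.depth = width, depth
        rnd = random.Random(seed)
        self.salts = [rnd.random() for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, item)) % self.width

    def add(self, item, count=1):
        for row, col in self._cells(item):
            self.table[row][col] += count

    def estimate(self, item):
        return min(self.table[row][col] for row, col in self._cells(item))

cms = CountMinSketch()
for word in ["a", "b", "a", "c", "a", "b"]:
    cms.add(word)
print(cms.estimate("a"))  # >= 3, and equal to 3 unless there are hash collisions
```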

September 4, 2013 Aleksander Madry (EPFL)

Navigating central path with electrical flows: from flows to matchings, and back

We describe a new method for solving the maximum flow and minimum s-t cut problems that uses electrical flow computations. The approach is based on path-following interior-point methods (IPMs), a powerful tool in convex optimization, and exploits a certain interplay between maximum flows and bipartite matchings.

As a result, we obtain improvements over some long-standing running time bounds for the maximum flow problem, the minimum s-t cut problem, and the closely related bipartite matching problem. Furthermore, we establish a connection between the structure of electrical flows in the graph and the convergence behavior of IPMs when applied to flow problems. This connection allows us to overcome the infamous Omega(sqrt(m))-iteration convergence barrier shared by all known interior-point methods.
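To make "electrical flow computation" concrete (a generic background illustration, not the algorithm from the talk): the electrical flow routing one unit of current from s to t is obtained by solving a Laplacian linear system for vertex potentials and reading off potential differences across edges.

```python
import numpy as np

# A small undirected graph given as an edge list with unit resistances.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
s, t = 0, 3

# Signed incidence matrix B (edges x vertices) and Laplacian L = B^T B.
B = np.zeros((len(edges), n))
for k, (u, v) in enumerate(edges):
    B[k, u], B[k, v] = 1.0, -1.0
L = B.T @ B

# Demand vector: one unit injected at s and extracted at t.
b = np.zeros(n)
b[s], b[t] = 1.0, -1.0

# Potentials via the Laplacian pseudoinverse; edge flows are potential differences.
phi = np.linalg.pinv(L) @ b
flow = B @ phi
print(dict(zip(edges, np.round(flow, 3))))  # e.g. edge (2, 3) carries the full unit of flow
```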

August 28, 2013 Shai Vardi (Tel Aviv University)

Local calculation algorithm and local mechanism design

Overview: the talk is divided into two parts. The first part introduces local computation algorithms (LCAs), which provide query access to parts of a solution of a large computational problem using polylogarithmic time and space. We also explain how to construct LCAs via reductions from online algorithms.

The second part describes local mechanism design, that is, how to design truthful mechanisms that run in polylogarithmic time and space. We focus on local scheduling algorithms.

The talk is based on joint projects with Noga Alon, Yishay Mansour, Ronitt Rubinfeld, Aviad Rubinstein, and Ning Xie.

2013/8/26 Kai-Min Chung (Institute of Information Science, Academia Sinica, Taiwan)

Interactive coding, revisited

How can a communication protocol between two parties be encoded so as to withstand adversarial errors on the communication channel? This question dates back to the seminal works of Shannon and Hamming in the 1940s, which initiated the study of error-correcting codes (ECC). However, even if every message of a communication protocol is encoded with a "good" ECC, the resulting protocol tolerates only a poor error rate (namely O(1/m), where m is the number of communication rounds). To address this problem, Schulman (FOCS'92, STOC'93) introduced the notion of interactive coding. We argue that, whereas encoding each message individually with an ECC guarantees that the encoded protocol reveals no more information than the original protocol, this is no longer the case for interactive coding schemes. In particular, the encoded protocol may completely leak a player's private input, even if it remained secret in the original protocol. To address this problem, we introduce the notion of knowledge-preserving interactive coding, which requires an interactive coding scheme to preserve the "knowledge" transmitted in the original protocol. Our main results are as follows: applying an ECC to each message separately is essentially optimal, in that no knowledge-preserving interactive coding scheme can tolerate an error rate above O(1/m).

When restricting to computationally bounded (polynomial-time) adversaries, and assuming the existence of one-way functions (respectively, subexponentially hard one-way functions), there exists a knowledge-preserving interactive coding scheme with constant error rate and information rate n^{-eps} (respectively, 1/polylog(n)) for every eps > 0. Furthermore, one-way functions are necessary for achieving an error rate better than 1/m.

Finally, even when restricting to computationally bounded adversaries, any knowledge-preserving interactive coding scheme with constant error rate has information rate at most O(1/log n). These results also hold for non-uniform interactive coding schemes.

In collaboration with Rafael Pass and Sidharth Telang.

August 21, 2013 Thomas Steinke (Harvard University)

Pseudorandomness for regular branching programs via Fourier analysis

We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is $O(\log^2 n)$, where $n$ is the length of the branching program. The previous best seed length known for this model was $n^{1/2+o(1)}$, which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives seed length $s^{1/2+o(1)}$ for arbitrary branching programs of size $s$). Our techniques also give seed length $n^{1/2+o(1)}$ for general oblivious, read-once branching programs of width $2^{n^{o(1)}}$, which is incomparable to the results of Impagliazzo et al.

Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is completely different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width $w$ has Fourier mass at most $(2w^2)^k$ at level $k$, independent of the length of the program.

Joint work with Omer Reingold and Salil Vadhan. See http://eccc.hpi-web.de/report/2013/086/.

2013/8/14 Renato Paes Leme (MSR SV)

Efficiency guarantees in auctions with budgets

In settings where players have limited access to liquidity, expressed in the form of budget constraints, efficiency maximization has proven to be a challenging goal. In particular, social welfare cannot be approximated to within a factor better than the number of players. The literature has therefore mainly resorted to Pareto efficiency as a notion of economic efficiency in such settings. While this has been achieved in some important scenarios, in many settings it is known that either exactly one incentive-compatible auction always outputs a Pareto-efficient solution, or that no truthful mechanism can always guarantee a Pareto-efficient outcome. Traditionally, impossibility results can be circumvented by considering approximations; however, Pareto efficiency is a binary property (it is either satisfied or not), which does not admit approximation. In this work we propose a new notion of efficiency, called liquid welfare, defined as the maximum amount of revenue an omniscient seller could extract from a given instance. We explain the intuition behind this objective function and show that it can be approximated by two different auctions. Furthermore, we show that no truthful algorithm can guarantee an approximation factor better than 4/3 with respect to the liquid welfare.
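For readers unfamiliar with the objective, a common formalization of liquid welfare (stated here as background; the talk's exact definition may differ in details) caps each player's contribution to welfare at her budget:

$\mathrm{LW}(S_1,\dots,S_n) = \sum_{i=1}^{n} \min\{\,v_i(S_i),\; B_i\,\}$,

where $v_i$ is player $i$'s valuation, $B_i$ her budget, and $S_i$ the bundle allocated to her; no seller, however well informed, can extract more revenue than this quantity.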

2013/8/7 Abhradeep Guha Thakurta (MSR SV, Stanford University)

(Nearly) dimension-independent differentially private learning

This talk presents recent developments in differentially private machine learning. In particular, we present results that exponentially improve (in terms of the dependence on dimensionality) the error guarantees of existing differentially private learning algorithms (output perturbation, objective perturbation, and private follow-the-perturbed-leader). In fact, some of these algorithms achieve error bounds with no explicit dependence on the dimensionality.

We also provide experimental results that support these error bounds.

Work in collaboration with Prateek Jain, Microsoft Research India.

2013/7/31 Amitabh Trehan (Technion)

Self-healing networks

Consider the following game played on a graph: first the red player removes a node (and its adjacent edges), then the blue player adds edges between the remaining nodes. Over the course of the game, which edges should blue add so that the network stays connected, no node gains too many new edges, and the distance between any pair of nodes (i.e., the network stretch) does not grow too much? Now imagine that the nodes of the graph are computers and the graph is a distributed network: the nodes themselves are the blue player, but no node knows the network beyond the neighbors with which it shares an edge. Solving such problems is the essence of self-healing distributed networks.

We introduce a distributed self-healing model that is particularly applicable to reconfigurable networks such as peer-to-peer and wireless networks, and present fully distributed algorithms that can "heal" certain global topological properties using only local information. ForgivingTree [PODC 2008] and Forgiving Graph [PODC 2009; DC 2012] use a "virtual graph" approach to preserve connectivity, low degree increase, and distances between nodes (e.g., diameter and stretch). Xheal [PODC 2011: Xheal, localized self-healing using expanders] further preserves the expansion and spectral properties of the network. We present a fully distributed implementation in the LOCAL message-passing model. We are also working on ideas that allow even more efficient implementations and stronger guarantees.

Collaboration with Thomas P Hayes, Jared Saia, Navin Rustagi, and Gopal Pandurangan.

July 24, 2013 Haim Kaplan (Tel Aviv University)

Submatrix maximum queries in Monge matrices and partial Monge matrices, with applications

We present a data structure for submatrix maximum queries in Monge matrices and partial Monge matrices. For an n x n Monge matrix, the structure requires O(n log n) space and O(n log^2 n) preprocessing time, and answers queries in O(log^2 n) time. For partial Monge matrices, the space and preprocessing grow by a factor of alpha(n) (the inverse Ackermann function), while queries remain O(log^2 n). Our scheme exploits an interpretation of the column maxima of a Monge matrix (respectively, a partial Monge matrix) as an upper envelope of pseudo-lines (respectively, pseudo-segments).
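For concreteness, the defining property of the input (a background definition, not the data structure itself): a matrix M is Monge if M[i][j] + M[i'][j'] <= M[i][j'] + M[i'][j] whenever i < i' and j < j'. Below is a naive check of the condition together with a brute-force submatrix maximum query, the operation that the data structure above answers in polylogarithmic time.

```python
import numpy as np

def is_monge(M):
    """Check the Monge condition on adjacent pairs:
    M[i][j] + M[i+1][j+1] <= M[i][j+1] + M[i+1][j] for all i, j
    (this local condition implies the condition for all i < i', j < j')."""
    M = np.asarray(M, dtype=float)
    return bool(np.all(M[:-1, :-1] + M[1:, 1:] <= M[:-1, 1:] + M[1:, :-1] + 1e-12))

def submatrix_max(M, rows, cols):
    """Naive submatrix maximum query in O(|rows| * |cols|) time."""
    M = np.asarray(M)
    return M[np.ix_(rows, cols)].max()

# Toy example: M[i][j] = i*j satisfies the *inverse* Monge condition, so -M is Monge.
M = np.array([[i * j for j in range(5)] for i in range(5)])
print(is_monge(-M))                                    # True
print(submatrix_max(M, rows=[1, 2, 3], cols=[0, 4]))   # 12
```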

This data structure has already found several applications: dynamic distance oracles for planar graphs, maximum flow in planar graphs, and a geometric problem involving empty rectangles.

July 17, 2013 Moni Naor (Weizmann Institute of Science)

Cryptography and data structures: a match made in heaven

Advances in cryptography and in complexity often go hand in hand. In this talk we explore the relationship between cryptography and another area of computer science: data structures. There are quite a few cases where developments in one field have been applied fruitfully in the other. An early example is Hellman's time/space tradeoffs, published in 1980.

July 10, 2013 Toniann Pitassi (University of Toronto)

Average case lower bounds for monotone switching networks

(Joint work with Yuval Filmus, Robert Robere, and Stephen Cook)

2013/7/3 Andrew V. Goldberg (MSR SV)

The hub labeling algorithm

Given a weighted graph, a distance oracle takes as input a pair of vertices and returns the distance between them. The labeling approach to distance oracle design is to precompute a label for each vertex so that the distance between any two vertices can be computed from their labels alone, without looking at the graph. We survey results on hub labeling (HL), a labeling algorithm that has recently received considerable attention.
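A minimal sketch of how a hub-labeling query is answered from the labels alone (the standard HL query rule; the label entries below are made up for illustration): each vertex stores (hub, distance) pairs, and dist(u, v) is the minimum of d(u, h) + d(h, v) over hubs h common to both labels.

```python
def hl_distance(label_u, label_v):
    """Hub-labeling distance query: labels are dicts mapping hub -> distance.
    Correctness requires the 'cover property': for every pair (u, v), some hub
    on a shortest u-v path appears in both labels."""
    best = float("inf")
    # Iterate over the smaller label and intersect with the larger one.
    small, large = sorted((label_u, label_v), key=len)
    for hub, d_small in small.items():
        if hub in large:
            best = min(best, d_small + large[hub])
    return best

# Hypothetical labels for two vertices (hub ids are arbitrary).
label_a = {"h1": 2.0, "h2": 5.0, "a": 0.0}
label_b = {"h2": 1.0, "h3": 4.0, "b": 0.0}
print(hl_distance(label_a, label_b))  # 6.0, via hub "h2"
```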

The query time and memory requirements of HL depend on the label size. While some graphs admit small labels, one can prove that other graphs require large labels. Computing optimal hub labels is hard, but they can be approximated to within a factor of O(log n) in polynomial time. This holds for the total label size (i.e., the memory needed to store the labels), the maximum label size (which determines the worst-case query time), and, more generally, for the l_p norm of the vector of label sizes. One can also approximate the l_p and l_q norms simultaneously.

Hierarchical labels are a special class of HL. For graphs with small highway dimension, small hierarchical labels can be computed in polynomial time. On the other hand, there are graphs whose hierarchical labels are significantly larger than general ones. Heuristics for computing hierarchical labels lead to fast implementations of point-to-point distance queries on road networks. Label compression allows time to be traded for space, making the algorithm practical for a wider range of applications. Experimental results show that heuristic hierarchical labels work well on road networks and on several other classes of graphs. We also discuss efficient implementations of the provable approximation algorithms and present experimental results.

Finally, we show that the labels can be stored in a database and HL queries can be implemented in SQL, making the algorithm accessible to SQL developers.

June 26, 2013 David Woodruff (IBM Almaden)

Low-rank approximation and regression in input sparsity time

We improve the running times of algorithms for least squares regression and low-rank approximation to account for the sparsity of the input matrix. Namely, if nnz(A) denotes the number of non-zero entries of an input matrix A, we show how to solve approximate least squares (l_2) regression for an n x d matrix A in nnz(A) + poly(d log n) time, and how to find an approximate best rank-k approximation in nnz(A) + poly(k log n) time.

All approximation guarantees are relative error. Previous algorithms, based on fast Johnson-Lindenstrauss transforms, took at least nd log d or nnz(A)*k time.
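The flavor of such nnz-time algorithms can be illustrated with the basic sketch-and-solve recipe (a generic sketch with a CountSketch-style matrix and arbitrary parameters, not the precise algorithm or analysis from the talk): compress the regression problem with a sparse random matrix, applied in time proportional to nnz(A), then solve the small problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 20000, 10, 2000        # tall regression problem, sketch size m (illustrative)

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Sparse sketch S (one random +/-1 per column): applying S costs O(nnz(A)),
# since each row of A is hashed, with a sign, into a single row of the sketch.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
SA = np.zeros((m, d)); Sb = np.zeros(m)
np.add.at(SA, rows, signs[:, None] * A)
np.add.at(Sb, rows, signs * b)

x_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]
x_exact  = np.linalg.lstsq(A,  b,  rcond=None)[0]
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))  # close to 1
```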

Joint work with Ken Clarkson.

June 19, 2013 Siu On Chan (UC Berkeley)

Approximation resistance from pairwise independent subgroups

June 10, 2013 Prateek Jain (MSR)

1:30-2:30pm. Note the unusual day (Monday)!

Provable alternating minimization for low-rank matrix estimation problems

Alternating minimization is a widely applicable and empirically successful approach for finding the low-rank matrix that best fits the given data. For example, for the problem of low-rank matrix completion, this method is believed to be among the most accurate and efficient, and was an important component of the winning entry in the Netflix Challenge.

In alternating minimization, the low-rank target matrix is written in bilinear form, e.g., $X = UV^\dagger$; the algorithm then alternates between finding the optimal $U$ and the optimal $V$. Typically, each alternating step in isolation is convex and tractable. However, the overall problem is non-convex, and there has been little theoretical understanding of when this approach produces good results.
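A minimal sketch of the alternating scheme for matrix completion under a squared-error objective (illustrative only; the talk is about when and why such iterations provably converge, which this toy code does not address):

```python
import numpy as np

def als_complete(M_obs, mask, k, iters=50, lam=1e-3, seed=0):
    """Alternating least squares: fix V and solve for U row by row, then vice versa,
    fitting only the observed entries (mask == 1); returns the completed matrix U @ V.T."""
    rng = np.random.default_rng(seed)
    n, m = M_obs.shape
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((m, k))
    for _ in range(iters):
        for i in range(n):
            obs = mask[i] == 1
            Vi = V[obs]
            U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(k), Vi.T @ M_obs[i, obs])
        for j in range(m):
            obs = mask[:, j] == 1
            Uj = U[obs]
            V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(k), Uj.T @ M_obs[obs, j])
    return U @ V.T

# Toy test: recover a random rank-2 matrix from roughly 60% of its entries.
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = (rng.random(M.shape) < 0.6).astype(int)
M_hat = als_complete(M * mask, mask, k=2)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))  # small relative error
```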

In this talk, we present one of the first theoretical analyses of alternating minimization for several low-rank matrix estimation problems, such as matrix completion and matrix sensing. Celebrated recent results show that these problems become well posed when certain (now standard) conditions are imposed. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, we show that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.

Co-authored with Praneeth Netrapalli, Sujay Sanghavi, and Inderjit Dhillon.

2013/6/5 *No talk* (STOC@Stanford)

2013/5/21 Daniel Weitzner (MIT)

1:30-2:30pm. Note the unusual day (Tuesday)!

True Privacy: Context and Personal Control as a Path to True Privacy in the 21st Century

Daniel J. Weitzner, Director of the Decentralized Information Group, MIT Computer Science and Artificial Intelligence Laboratory

We hear "your privacy is important to us," but does anyone really understand what that means? As society's awareness of privacy grows, so too does the idea of ​​what privacy is. Whether we call privacy a fundamental human right, something that exists in the penumbra of other constitutional rights, or a consumer fairness issue, the 20th century application of privacy has been unsatisfying for individuals and burdensome for innovators. This is an argument for renewing the fundamentals of privacy: freedom of association, protection from discrimination, and limits on the tyranny of large organizations, public and private. If we go back to the basics of privacy, we find that true privacy relies very little on "notice" and places great value on respect for the environment and individual control. The pseudo-conventional concept of "choice" is not useful for users, and it is important to respect the context of human relationships. The formal misunderstanding of the difference between our and Europe's privacy frameworks is based on the simplistic view that Europe has "more" privacy and America has "less" privacy, and we believe that we can choose our privacy level on a linear scale. In reality, we need to solve more complex problems.

May 15, 2013 Anupam Gupta (CMU)

How to get your errands done (and get to dinner on time)

In the orienteering problem, we are given a metric space (where distances represent travel times between locations), a starting vertex ("home"), and a deadline B, and the goal is to visit as many points as possible using a tour of length at most B. Constant-factor approximation algorithms for this problem are known, going back to the work of Blum et al. in 2002.

However, suppose that visiting a node is not enough: having arrived at a location, we must wait some (random) amount of time there before collecting the reward, with each waiting time drawn from a known probability distribution. What should we do now? In this talk we discuss adaptive and non-adaptive approximation algorithms for this stochastic orienteering problem.

This is based on joint research with Ravi Krishnaswamy, Viswanath Nagarajan, and R. Ravi.

May 8, 2013 Vitaly Feldman (IBM Research, Almaden)

Statistical algorithms and a lower bound for detecting planted cliques

We introduce a framework for proving lower bounds on computational problems over distributions, based on a class of algorithms called statistical algorithms. For such algorithms, access to the input distribution is limited to obtaining an estimate of the expectation of any bounded function on a sample drawn randomly from the input distribution, rather than direct access to samples. Most natural algorithms of interest in theory and in practice, e.g., moment-based methods, local search, standard iterative methods for convex optimization, MCMC, and simulated annealing, have statistical counterparts. Our framework is inspired by, and generalizes, the statistical query model in learning theory.

Our main application is a nearly optimal lower bound on the complexity of any statistical algorithm for detecting planted bipartite clique distributions (or planted dense subgraph distributions) when the planted clique has size O(n^{1/2 - delta}) for any constant delta > 0. The assumed hardness of variants of these problems has been used to prove the hardness of several other problems and as the basis for cryptographic applications. Our lower bounds provide concrete evidence of hardness, thereby supporting these assumptions.

Joint work with Elena Grigorescu, Lev Reyzin, Santosh Vempala, and Ying Xiao.

May 1, 2013 Chandra Chekuri (University of Illinois at Urbana-Champaign)

Large-treewidth graph decompositions and applications

Treewidth is a graph parameter that plays a fundamental role in many structural and algorithmic results. We study the problem of decomposing a given graph $G$ into node-disjoint subgraphs, where each subgraph has sufficiently large treewidth. We prove two theorems on the tradeoff between the number of desired subgraphs $h$ and the desired lower bound $r$ on the treewidth of each subgraph. The theorems assert that, with $h, r$ as parameters, such a decomposition of a graph $G$ of treewidth $k$ can be computed whenever $hr^2 \le k/\mathrm{polylog}(k)$ or $h^3 r \le k/\mathrm{polylog}(k)$ holds.

The decomposition theorems were inspired by Chuzhoy's pioneering work on the maximum disjoint paths problem in undirected graphs, and by subsequent work that extended those ideas to node-disjoint paths. The goal of this talk is to describe the background for the theorems and their applications to routing, fixed-parameter tractability, and Erdős-Pósa-type results.

For the latter applications, the use of the well-known grid-minor theorem of Robertson and Seymour can be avoided. No prior knowledge of treewidth is assumed.

The decomposition theorems are from a joint paper with Julia Chuzhoy that appeared in STOC 2013, and they build on a previous paper on the maximum disjoint paths problem.

April 24, 2013 Robert Krauthgamer (Weizmann Institute of Science)

Cutting corners cheaply, or how to remove Steiner points

The main result I will present is that the Steiner Point Removal (SPR) problem can always be solved with polylogarithmic distortion, which resolves in the affirmative a question posed by Chan, Xia, Konjevod and Richa (2006). In particular, for every edge-weighted graph $G = (V, E, w)$ and subset of terminals $T \subset V$, there is a graph $G' = (T, E', w')$ on the terminals only, which is a minor of $G$, such that the shortest-path distance between any two terminals is approximately the same in $G'$ and in $G$, namely within a factor of $O(\log^6 |T|)$. The existence proof yields a randomized polynomial-time algorithm.

A key ingredient of our proof is a new variant of metric decomposition. It is well known that every finite metric space $(X, d)$ on $n$ points admits a separating decomposition with parameter $O(\log n)$: roughly speaking, a randomized partition of $X$ into bounded-diameter clusters in which any two points $x, y \in X$ are separated with probability bounded in terms of their distance. Our variant additionally bounds a certain random variable $Z_p$ associated with the partition.

Joint work with Lior Kamma and Huy L. Nguyen.

2013/4/17 Yaron Singer (Google/Harvard University)

Adaptive seeding in social networks

With the rapid adoption of social networking technologies over the past decade, a great deal of attention has been devoted to algorithmic and data mining techniques designed to maximize information cascades in social networks. Despite remarkable progress, access to the data and to the social network is often limited, and as a result applying state-of-the-art techniques can lead to poor performance.

In this talk we introduce a new framework called adaptive seeding. The framework is a two-stage model designed to dramatically increase the spread of information cascades by leveraging a phenomenon known as the "friendship paradox" in social networks. Our main result shows that constant-factor approximation guarantees are achievable for the most well-studied models of influence propagation in social networks. The results yield new techniques and concepts that may be of independent interest to those interested in stochastic optimization and machine learning.

Joint work with Lior Seeman.

April 10, 2013 Yoram Moses (Technion, currently visiting Stanford)

Knowledge as a window into distributed coordination

In this talk I will give an overview of the knowledge-based approach to distributed systems, present some basic connections between knowledge and multi-party coordination, and illustrate the insight this approach provides into the interplay of time and communication in enabling coordination. The talk is self-contained and intended for a general CS audience. Its last part is based on joint work with Ido Ben-Zvi.

March 13, 2013 Anupam Datta (CMU)

Naturally rehearsing passwords

We introduce quantitative usability and security models to guide the design of password management schemes (systematic strategies that help users create and remember multiple passwords). Just as security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a given rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and can be validated through empirical studies. Given rehearsal requirements and a user's visitation schedule for each account, the total number of rehearsals the user must do to remember all passwords serves as a measure of the usability of the password scheme. We also present a security model that captures the complexity of managing multiple accounts and the relevant threats, including online attacks, offline attacks, and plaintext password leaks.

We observe that current password management schemes are either insecure or unusable. We then present a new design in which the underlying secrets are strategically shared across accounts, guaranteeing that most rehearsal requirements are satisfied naturally while providing strong provable security. The construction uses the Chinese Remainder Theorem in a non-standard way to achieve these competing goals.

Joint work with Jeremiah Blocki and Manuel Blum (CMU).

2/20-3/6/2013: No seminars are currently planned for these dates.

2/13/2013 Sergiu Hart (Hebrew University)

Equilibria and dynamics

I will give an overview of a body of work on dynamical systems in multi-player environments. On the one hand, natural informational restrictions, namely that each participant does not know the payoff functions of the other participants ("uncoupledness"), severely limit the possibility of convergence to Nash equilibria. On the other hand, there are simple adaptive heuristics, such as "regret matching", that lead in the long run to correlated equilibria, a concept that embodies full rationality. I will also mention connections to behavioral economics, neuroscience, and engineering.
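As a concrete example of the kind of simple adaptive heuristic mentioned above, here is a toy implementation of regret matching for a single player in a repeated matrix game (a standard textbook sketch; the payoff matrix and opponent behavior are made up):

```python
import numpy as np

def regret_matching_play(payoff, opp_actions, seed=0):
    """One player repeatedly chooses among len(payoff) actions; at each step she plays
    each action with probability proportional to its positive cumulative regret."""
    rng = np.random.default_rng(seed)
    n_actions = payoff.shape[0]
    regrets = np.zeros(n_actions)
    plays = []
    for opp in opp_actions:
        positive = np.maximum(regrets, 0.0)
        probs = positive / positive.sum() if positive.sum() > 0 else np.full(n_actions, 1.0 / n_actions)
        a = rng.choice(n_actions, p=probs)
        plays.append(a)
        # Regret update: how much better each fixed action would have done this round.
        regrets += payoff[:, opp] - payoff[a, opp]
    return plays

# Toy game: matching-pennies payoffs for the row player, against a biased opponent.
payoff = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])
opp = np.random.default_rng(1).choice(2, size=2000, p=[0.7, 0.3])
plays = regret_matching_play(payoff, opp)
print(np.bincount(plays[-500:]) / 500.0)  # play concentrates on the better response (action 0)
```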

January 16, 2013 Konstantin Makarychev (MSR Redmond). Note: Telstar (usual building, SVC6)

Sorting noisy data with partial information

We discuss semi-random models for the Minimum Feedback Arc Set problem. In this problem, we are given a directed graph, and the goal is to delete as few edges as possible so as to make the graph acyclic. This is a classical optimization problem. The best known approximation algorithm, due to Seymour, gives an O(log n log log n) approximation in the worst case. We ask whether one can do better than the worst case on "real-life" instances. To this end, we introduce two models that attempt to capture "real-life" instances of Minimum Feedback Arc Set, and consider one of them in more detail. We present an approximation algorithm that finds a solution of cost at most (1 + eps)OPT + n polylog(n), where OPT is the cost of the optimal solution.

Joint work with Yury Makarychev (TTIC) and Aravindan Vijayaraghavan (CMU).

January 9, 2013 Shafi Goldwasser (MIT and Weizmann Institute of Science)

Pseudo-deterministic algorithms

We introduce a new type of probabilistic algorithm, which we call pseudo-deterministic: a probabilistic polynomial-time algorithm that cannot be distinguished from a deterministic algorithm by a polynomial-time observer with black-box access to it.

We show necessary and sufficient conditions for the existence of such algorithms, along with several examples of pseudo-deterministic (Bellagio) algorithms that improve on the best known deterministic solutions.

The notion of pseudo-deterministic computation extends beyond polynomial-time algorithms to other settings in which randomization is essential, such as distributed algorithms and sublinear-time algorithms. I will describe these extensions.

Fast algorithms for maximizing submodular functions

Recently there have been many improvements in approximation algorithms for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this work we develop general techniques for obtaining very fast approximation algorithms for maximizing submodular functions under various constraints, including a speedup of the continuous greedy algorithm and a new randomized local search procedure for handling multiple constraints.

(Based on joint work with Jan Vondrák)

November 28, 2012 Ilan Lobel (NYU)

Intertemporal price discrimination: structure and computation of optimal policies

We consider the question of how a firm should optimally set a sequence of prices so as to maximize its long-run average revenue when facing a continuous flow of strategic customers. Specifically, customers arrive over time, are strategic in timing their purchases, and are heterogeneous along two dimensions: their valuation of the firm's product and their willingness to wait before purchasing or leaving.

Customers' patience and valuations may be correlated in an arbitrary fashion. For this general formulation, we show that the firm may restrict attention to short cyclic pricing policies, with cycles at most twice as long as the customers' maximum willingness to wait. Furthermore, we establish results on the suboptimality of monotone pricing policies in general and describe the structure of optimal policies. Typical optimal policies are characterized by nested sales, in which the firm offers partial discounts throughout each cycle, more significant discounts halfway through the cycle, and the largest discount at the end of the cycle. Finally, we establish an equivalence between the problem of pricing for a stream of heterogeneous strategic customers and that of pricing for a pool of heterogeneous customers who may stockpile units of the product. Joint work with Omar Besbes.

2012/11/9 Elchanan Mossel (UC Berkeley)

New proof of Gaussian stability and discrete noise stability

I will discuss new proofs of Borell's result on Gaussian noise stability and of the "Majority is Stablest" theorem, as well as new applications of these proofs to hardness of approximation and to social choice theory.

November 2, 2012 Aravind Srinivasan (University of Maryland)

The Lovász Local Lemma: constructive and non-constructive aspects

Overview: the Lovász Local Lemma is a powerful probabilistic tool. I will first survey the constructive version of the lemma due to Moser and Tardos, its connections to other areas, and its extension by Haeupler, Saha, and the speaker. I will then describe a recent extension due to David Harris and the speaker.

2012/10/24 No seminar (FOCS)

2012/10/17 Sanjeev Arora (Princeton University)

Is machine learning possible? Three vignettes

Note the unusual time (1-2pm, in Titana)

Many tasks in machine learning (especially in unsupervised learning) appear intractable: NP-hard or worse. Despite this, researchers have developed heuristic algorithms that try to solve these tasks in practice. In most cases these algorithms are heuristics that come with no guarantees on their running time or on the quality of the solutions they return. Can this state of affairs be changed?

In this talk, we suggest that the answer is yes, and present three of our recent works as examples: (a) a new algorithm for learning topic models, which applies to Blei et al.'s Latent Dirichlet Allocation as well as to more general topic models; it works provably under certain reasonable assumptions, is in practice up to 50 times faster than existing software such as Mallet, and is based on a new procedure for non-negative matrix factorization; (b) (SVMs, decision trees, etc.); (c) provable ICA with unknown Gaussian noise, and an algorithm that provably learns "manifolds" described by a small number of parameters but with exponentially many "regions of interest".

(Based on collaboration with Rong Ge, Ravi Kannan, Ankur Moitra, Sushant Sachdeva).

October 10, 2012 Grigory Yaroslavtsev (Pennsylvania State University)

Learning and testing submodular functions

Submodular functions capture the law of diminishing returns and can be viewed as a discrete analogue of convexity for functions on the Boolean cube. Such functions arise in a variety of fields, including combinatorial optimization, machine learning, and economics. In this talk, we focus on learning such functions from examples and on testing whether a given function is submodular using a small number of queries.
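To fix the definition being learned and tested (background only): a set function f is submodular if its marginal gains are diminishing, i.e., f(S + x) - f(S) >= f(T + x) - f(T) whenever S is a subset of T and x is not in T. A brute-force check on a tiny ground set:

```python
from itertools import combinations

def is_submodular(f, ground):
    """Brute-force check of diminishing returns over every pair S <= T and x not in T."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for x in ground - T:
                gain_S = f(S | {x}) - f(S)
                gain_T = f(T | {x}) - f(T)
                if gain_S < gain_T - 1e-12:
                    return False
    return True

ground = frozenset(range(4))
coverage = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {4, 5}}
f_cov = lambda S: len(set().union(*(coverage[i] for i in S))) if S else 0
print(is_submodular(f_cov, ground))                   # True: coverage functions are submodular
print(is_submodular(lambda S: len(S) ** 2, ground))   # False: this one is strictly supermodular
```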

We present structural results for the class of submodular functions taking values in a discrete integer range of size r, giving a concise representation for this class: every such function can be written as a maximum over a collection of threshold functions, represented by a pseudo-Boolean r-DNF formula. This leads to efficient PAC learning algorithms for this class, as well as testing algorithms whose running time is independent of the size of the domain.

In collaboration with Sofya Raskhodnikova and Rocco Servedio.

October 5, 2012, 11:00am. Fred Cate (Indiana University Maurer School of Law)

Big Data in Healthcare: The Future of Healthcare Innovation and the Regulation that Kills It

Note the unusual day (Friday) and time (11:00am)

Personal health information is increasingly recognized as a critical resource for treatment, research, and healthcare management. While healthcare providers generate a large amount of information through their interactions with patients, more and more health information, including genetic and behavioral data, is now generated directly by individuals, by home and mobile devices, and through interactions with social media sites and personal health records. Such data are essential for transforming healthcare and for the evolution of truly personalized medicine, yet privacy laws inadvertently impose restrictive and inconsistent limits on their use. For the past three years, an NIH-funded blue-ribbon team of physicians, researchers, ethicists, lawyers, technologists, and privacy experts has been working to craft an alternative approach to privacy that advances medical research while addressing the threats to health and privacy today and to medical innovation tomorrow.

Fred H. Cate is a Distinguished Professor and the C. Ben Dutton Professor of Law at the Indiana University Maurer School of Law, where he directs the Center for Law, Ethics and Applied Research in Health Information and the Center for Applied Cybersecurity Research (designated a National Center of Academic Excellence in both Information Assurance Research and Information Assurance Education). He is a member of Microsoft's Trustworthy Computing Academic Advisory Board and of numerous government and industry advisory and oversight boards, and he testifies frequently before Congress. He is the author of more than 150 articles and books, the founding editor of the Oxford University Press journal International Data Privacy Law, and a principal investigator on the NIH grant "Protecting Privacy in Health Research".

2012/9/19 Zvika Brakerski (Stanford University)

Efficient interactive coding against adversarial noise

We study the problem of constructing interactive protocols that are robust to noise. This problem was originally addressed in Schulman's seminal work (FOCS '92, STOC '93) and has recently seen a resurgence in popularity. Robust interactive communication is the interactive analogue of error-correcting codes. Given an interactive protocol designed to run on an error-free channel, we construct a protocol that evaluates the same operations over a noisy channel (or, more generally, simulates the execution of the original protocol). As with (non-interactive) error-correcting codes, the noise can be either probabilistic, i. e., coming from some distribution, or adversarial, i. e., arbitrary, subject only to a general restriction on the number of errors.

We show how to efficiently simulate any interactive protocol in the presence of adversarial constant-rate noise, while incurring only a constant blow-up in the communication complexity ($CC$). Our simulator is randomized and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(CC)}$. Previous work could not achieve efficient simulation in the adversarial case.

In collaboration with Yael Tauman Kalai.

2012/9/12 Moritz Hardt (IBM Almaden)

Incoherence and privacy in spectral data analysis

Matrix incoherence is a property frequently observed in large real-world matrices. Intuitively, a matrix has low coherence if its singular vectors bear little resemblance to the individual rows of the matrix. We show that this property is very useful for designing differentially private algorithms for singular vector approximation and low-rank approximation.

Under low-coherence assumptions, our algorithms for these problems turn out to be much more accurate than the worst case suggested by known lower bounds. Although not easy to analyze, our algorithms are very efficient and easy to implement. We complement our theoretical results with several experiments on real and synthetic data.
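
For concreteness, the sketch below computes one common notion of matrix coherence (the scaled maximum row leverage score with respect to the top-k singular vectors) on a synthetic matrix. It is meant only to illustrate the quantity assumed to be small; it is not part of the private algorithms from the talk.

    # Hedged sketch of one common definition of matrix coherence: the (scaled)
    # largest leverage score of the rows with respect to the top-k singular
    # vectors. The matrix is synthetic; this only illustrates the quantity.
    import numpy as np

    def coherence(A, k):
        """Return (n/k) * max_i ||U_k[i,:]||^2 for the top-k left singular vectors U_k."""
        n = A.shape[0]
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        Uk = U[:, :k]
        leverage = np.sum(Uk ** 2, axis=1)        # row leverage scores
        return (n / k) * leverage.max()           # in [1, n/k]; small means incoherent

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))            # random matrices are typically incoherent
    print(coherence(A, k=5))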

Based on collaboration with Aaron Roth

2012/9/5 Brendan Lucier (MSR-NE)

Fixed Prices and Partitions

In a combinatorial auction, a seller has a set of items to sell and buyers have preferences over subsets of the items. In general, such auctions are complex: specifying bids and computing optimal allocations can have complexity exponential in the numbers of buyers and items. In some special cases (for example, under "gross substitutes" valuations), there is a way to set item prices so that when each buyer takes his most preferred set at those prices, the resulting allocation is socially efficient. Such a wonderful pricing outcome is known as a Walrasian equilibrium, but unfortunately it does not always exist.

In this talk, based on joint work with Michal Feldman and Nick Gravin, I will present new results on a relaxation of the notion of price equilibrium in combinatorial auctions. The main feature of this relaxation is that the seller is allowed to pre-pack items into indivisible bundles before the sale. I will describe some properties of this notion, the algorithmic problems that arise, and the solutions we provide. In particular, we give a black-box reduction that converts an arbitrary allocation into such a "bundle-price equilibrium" outcome while losing only a modest fraction of the social welfare.

2012/8/29 Noam Nisan (MSR SV)

Does the auction have to be complicated?

We consider the menu size of an auction as a measure of its complexity, and ask whether simple auctions suffice to generate high revenue. For a single item and i.i.d. bidders, Myerson showed that the answer is "yes". Even for just two items, however, complex auctions can extract unboundedly more revenue than simple ones, even from bidders with additive valuations. Yet if the bidder's values for the two items are distributed independently, the answer is "more or less yes": simply selling each of the two items separately yields at least half of the revenue of any auction.

Joint work with Sergiu Hart.

2012/8/22 SHIRI CHECHIK (MSR SV)

Fully dynamic distance oracles for planar graphs via forbidden-set distance labels

A distance oracle is a data structure that provides fast answers to distance queries. Recently, the problem of designing forbidden-set distance oracles, that is, oracles that can estimate distances in the subgraph obtained by avoiding a set of forbidden (failed) vertices, has attracted attention. In this talk we consider forbidden-set distance oracles for planar graphs, and present an efficient and compact distance oracle that is capable of handling any number of failures.
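
As a point of reference (a brute-force baseline rather than the compact oracle of the talk), a forbidden-set distance query can always be answered by re-running BFS on the graph with the failed vertices removed; the point of the oracle is to avoid exactly this linear-time recomputation. A minimal sketch:

    # Naive baseline for a forbidden-set distance query: BFS that avoids a set of
    # failed vertices. The oracle from the talk answers such queries much faster
    # from a compact precomputed structure; this is only the brute-force reference.
    from collections import deque

    def distance_avoiding(adj, s, t, forbidden):
        """Unweighted s-t distance in the graph `adj` (dict: vertex -> neighbors),
        avoiding all vertices in `forbidden`; returns None if disconnected."""
        if s in forbidden or t in forbidden:
            return None
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                return dist[u]
            for v in adj[u]:
                if v not in dist and v not in forbidden:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return None

    adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
    print(distance_avoiding(adj, 1, 4, forbidden={2}))   # 2, via vertex 3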

In addition, we consider the closely related notion of a fully dynamic distance oracle. In the dynamic setting, rather than receiving the failures at query time, we must handle an arbitrary online sequence of update and query operations. Each query operation involves two vertices s and t whose distance needs to be estimated; each update operation inserts or deletes a vertex or edge of the graph.

We show that a modification of our forbidden-set distance oracle yields a fully dynamic distance oracle with improved bounds over the previously known fully dynamic distance oracles for planar graphs.

Based on collaboration with Itai Abraham and Cyril Gavoille

2012/8/15 Raghu Meka (IAS)

Constructive discrepancy minimization by walking on the edges

Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones of the field is Spencer's famous "six standard deviations" theorem (AMS 1985): for any system of n sets over a universe of size n, there exists a coloring achieving discrepancy at most 6√n. Spencer's original proof was existential and did not provide an efficient algorithm for finding such a coloring. Recently, an important breakthrough of Bansal (FOCS 2010) gave an efficient algorithm for finding such a coloring. His algorithm is based on an SDP relaxation of the discrepancy problem and a clever rounding procedure.

In this work, we give a new randomized algorithm for finding a coloring as in Spencer's result, based on a restricted random walk that we call the "edge walk". Our algorithm and its analysis are "truly" constructive in that they use only basic linear algebra and do not appeal to existential arguments, giving new proofs of Spencer's theorem and of the partial coloring lemma.
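
The following is a simplified random-walk sketch inspired by the edge-walk idea (it is not the Lovett-Meka algorithm itself and carries no guarantees; the step size and thresholds are ad hoc): take small Gaussian steps, projected away from coordinates that have already reached plus or minus one and from set constraints that have become tight.

    # Simplified random-walk sketch *inspired by* the edge-walk approach to
    # discrepancy minimization; not the exact algorithm and with no guarantees.
    import numpy as np

    def partial_coloring(sets, n, steps=4000, gamma=0.02, seed=0):
        """sets: list of 0/1 incidence vectors (length n). Returns x in [-1,1]^n
        in which many coordinates are (nearly) frozen at +-1 while every set's
        discrepancy stays moderate."""
        rng = np.random.default_rng(seed)
        A = np.array(sets, dtype=float)
        x = np.zeros(n)
        for _ in range(steps):
            frozen = np.abs(x) >= 1 - 1e-3                # coordinates stuck at +-1
            tight = np.abs(A @ x) >= 2.0 * np.sqrt(n)     # constraints we refuse to worsen
            constraints = [np.eye(n)[i] for i in np.where(frozen)[0]]
            constraints += [A[j] for j in np.where(tight)[0]]
            g = rng.standard_normal(n)
            if constraints:
                C = np.array(constraints)
                # Project g onto the orthogonal complement of the constraint rows.
                g = g - C.T @ np.linalg.lstsq(C.T, g, rcond=None)[0]
            x = np.clip(x + gamma * g, -1.0, 1.0)
        return x

    n = 64
    rng = np.random.default_rng(1)
    sets = [rng.integers(0, 2, n) for _ in range(n)]
    x = partial_coloring(sets, n)
    print("frozen coords:", int(np.sum(np.abs(x) > 0.99)),
          "max |discrepancy|:", float(np.max(np.abs(np.array(sets) @ x))))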

Collaboration with Shachar Lovett.

August 8, 2012 Rocco Servedio (Columbia University)

Inverse problems for power indices in weighted voting games

Suppose we need to design a weighted voting scheme by which n voters choose between two candidates; for example, the voters may represent states with different populations, or shareholders who hold different numbers of shares in a company. How can we design a weighted voting scheme in which each voter has a prescribed amount of influence?

Of course, to answer such a question one first needs a precise definition of a voter's influence in a weighted voting scheme. The voting theory literature offers many such measures of influence, often called "power indices". In this talk we consider the two most popular power indices: the "Banzhaf indices" (known in theoretical computer science as the "Chow parameters") and the "Shapley-Shubik indices". These are two quite different natural ways of quantifying the influence of each voter in a given weighted voting scheme.
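
As a hedged illustration of the forward direction only (estimating indices for given weights, not the inverse problem studied in the talk), Banzhaf indices can be approximated by Monte Carlo sampling; the weights and quota below are made up.

    # Monte Carlo estimate of Banzhaf indices for a given weighted voting scheme
    # (the *forward* problem). The talk is about the much harder *inverse*
    # problem of designing weights to hit target indices.
    import random

    def banzhaf_monte_carlo(weights, quota, samples=100_000, seed=0):
        """Estimate, for each voter i, the probability that i is pivotal when the
        other voters vote independently and uniformly at random."""
        rng = random.Random(seed)
        n = len(weights)
        pivotal = [0] * n
        for _ in range(samples):
            votes = [rng.random() < 0.5 for _ in range(n)]
            total = sum(w for w, v in zip(weights, votes) if v)
            for i, w in enumerate(weights):
                rest = total - (w if votes[i] else 0)
                # Voter i is pivotal if the outcome flips with i's vote.
                if rest < quota <= rest + w:
                    pivotal[i] += 1
        return [p / samples for p in pivotal]

    print(banzhaf_monte_carlo(weights=[4, 3, 2, 1], quota=6))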

Our main results are algorithms that solve the inverse problem of designing a weighted voting scheme for each of these power indices. In particular:

(1) Given a target vector of Banzhaf indices for n voters, our first algorithm efficiently constructs a weighted voting scheme whose Banzhaf indices are very close to the target indices (provided such a weighted voting scheme exists). This gives an essentially doubly exponential improvement in running time (in terms of the closeness parameter) over the only previously known provably correct solution.

(2) Given a target vector of Shapley-Shubik indices, our second algorithm efficiently constructs a weighted voting scheme whose Shapley-Shubik indices are very close to the desired indices (again, provided such a weighted voting scheme exists). This is the first algorithm for this problem that runs in time poly(n) rather than exp(n).

The two results follow a common algorithmic outline, but the structural results needed to prove correctness are quite different for the two indices. The Banzhaf result relies on structural properties of linear threshold functions and on geometric and linear-algebraic arguments about how hyperplanes interact with the Boolean hypercube. The Shapley-Shubik result relies on anti-concentration bounds for sums of non-independent random variables.

No background in voting theory is required for this talk.

Based on joint research with Anindya De, Ilias Diakonikolas, and Vitaly Feldman.

August 1, 2012 Avi Wigderson (IAS, Princeton)

Restriction access, population recovery, and partial identification

We study several natural problems in which an unknown distribution over unknown vectors must be recovered from partial and noisy samples. Such problems arise naturally in a variety of contexts, including learning, clustering, statistics, data mining, and database privacy. We give reasonably efficient algorithms for recovering the data even when the loss and noise are close to the information-theoretic threshold, that is, when almost all of the original data is obliterated.

At the heart of our algorithms is a new structure we call a partial identification (PID) graph. Whereas a standard identifier is a subset of features (coordinates of the vectors) that uniquely identifies an individual in the population, a partial identifier may be much smaller, since it is allowed to be ambiguous and also "point to" impostors. The PID graph captures this impostor structure. PID graphs reduce the dimensionality of the recovery problem and provide a strategy for piecing together these local statistical fragments into a global picture. The core of this work is a proof that every population admits identifiers with "cheap" PID graphs (and hence allows efficient recovery), together with an efficient algorithm for finding such near-optimal PIDs.

Time permitting, I will also describe a new learning model we call "restriction access", which was the original motivation for studying the recovery problems above. This model aims to generalize the usual "black-box" access used when trying to learn a "device" performing a computation (for example, a circuit, a decision tree, a polynomial, ...). We propose a "gray-box" access that provides partial views of the device, obtained from random restrictions. Using the recovery algorithms above, we obtain analogues of the PAC-learning model in which efficient learning is possible for devices, such as decision trees and DNFs, that are beyond reach in the standard "black-box" version of PAC learning.

Based on joint work with Zeev Dvir, Anup Rao, and Amir Yehudayoff.

2012/7/25 Abhradeep Guha Thakurta (University of Pennsylvania)

Differentially private empirical risk minimization and high-dimensional regression

Recently, there have been several high-profile privacy violations in machine learning-based systems that satisfy various ad-hoc notions of privacy, such as the attack on Amazon's recommendation system by Calandrino et al. in 2011 and the attack on Facebook's advertising system by Korolova in 2011. In the presence of such violations, an obvious question arises: "How can we design learning algorithms with strict privacy guarantees?"

In this talk, I focus on designing convex empirical risk minimization (ERM) algorithms (a special class of learning algorithms) with differential privacy guarantees. In recent years, differential privacy has emerged as one of the most frequently used notions of strict privacy.

My discussion is logically divided into two parts:

Part A) Private ERM on offline datasets: In this part, I describe various approaches to differentially private ERM when the complete dataset is available up front (as opposed to the online setting described in the next part). One of the main focuses of this part is our two new approaches to private ERM in the offline setting: i) an improved objective perturbation algorithm (building on Chaudhuri et al., 2008 and 2011), and ii) an algorithm based on online convex programming (OCP). In addition, we discuss the first private ERM algorithms for the high-dimensional setting (i.e., when the number of data items is much smaller than the dimension of the underlying model parameters).

Part B) Private online learning: In online learning, data arrive in real time, and the trained model and its predictions change continuously. We study this problem in the context of online convex programming (OCP). We provide a general framework that converts a given OCP algorithm into a private variant, as long as the OCP algorithm satisfies two criteria: 1) linearly decreasing sensitivity, i.e., the effect of a new data point on the trained model decreases linearly over time, and 2) sublinear regret. We instantiate our framework with two commonly used OCP algorithms: i) Generalized Infinitesimal Gradient Ascent (GIGA) and ii) Implicit Gradient Descent (IGD).
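
As a generic illustration of the "add calibrated noise" idea behind differentially private learning (this is schematic and is not the objective-perturbation or OCP-based algorithms of the talk; the noise scale is a placeholder with no formal privacy guarantee as written):

    # Schematic noisy gradient descent for regularized logistic-loss ERM. Generic
    # illustration only; NOT the algorithms from the talk, and the noise scale
    # below is not calibrated for any formal privacy guarantee.
    import numpy as np

    def noisy_gd_logistic(X, y, noise_scale=0.1, lam=0.1, steps=200, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(steps):
            margins = y * (X @ w)
            sig = 1.0 / (1.0 + np.exp(margins))            # sigma(-margin)
            grad = -(X * (y * sig)[:, None]).mean(axis=0)  # logistic-loss gradient
            grad += lam * w                                # L2 regularization
            grad += noise_scale * rng.standard_normal(d) / n   # per-step Gaussian noise
            w -= lr * grad
        return w

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 5))
    w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(500))
    print(noisy_gd_logistic(X, y))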

Joint work with Prateek Jain (MSR India), Daniel Kifer (Pennsylvania State University), Pravesh Kothari (University of Texas at Austin), and Adam Smith (Pennsylvania State University).

2012/7/18 Alex Samorodnitsky (Hebrew University of Jerusalem)

Discrete curvature and bounds for linear codes

Let C be a linear subspace of the Hamming cube H, and let C' be its dual code. Following Friedman and Tillich, we compute the growth rate of metric balls in the discrete "torus" T = H/C' and use it to upper-bound the cardinality of C', and eventually of C.

The notion of curvature of metric spaces defined by Ollivier turns out to be useful when C' has local structure (for instance, when C' is locally correctable).

This approach also gives easy proofs of the known upper bounds.

Joint work with Eran Iceland.

July 11, 2012 Aviv Zohar (MSR SVC)

A critical look at game theory

Can game theorists successfully predict social and political outcomes? Should Israel listen to a game theorist who argues that Iran must be attacked to keep it from obtaining nuclear weapons? How useful is game theory for designing interactions? Among computers? In this talk I would like to explore criticisms of the practical applications of game theory. Knowledge of the basic concepts of game theory is assumed.

2012/7/4 Seminar closed (holiday)

2012/6/27 The seminar is closed.

2012/6/20 Piotr Indyk (MIT)

Faster algorithms for the sparse Fourier transform

The Fast Fourier Transform (FFT) is one of the most fundamental numerical algorithms: it computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time, and it plays an important role in many fields.

In many applications (audio, image, and video compression, etc.), most of the Fourier coefficients of the signal are "small" or equal to zero, i.e., the output of the transform is (almost) sparse. In this case, there are algorithms that can compute the nonzero coefficients faster than the FFT. In practice, however, the exponents in the running times of these algorithms and their complex structure have limited their applicability to very sparse signals.

In this talk, we describe new algorithms for the sparse Fourier transform. Their main feature is their simplicity, which leads to low overhead and efficient running times in both theory and practice. One of these algorithms achieves a running time of O(k log n), where k is the number of nonzero Fourier coefficients of the signal. This improves on the running time of the FFT for any k = o(n).
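
To illustrate output sparsity (using the ordinary O(n log n) FFT, not the O(k log n) algorithm of the talk), the snippet below plants k tones in a length-n signal and recovers them by keeping the largest Fourier coefficients.

    # Illustration of output sparsity via the ordinary FFT: a few frequencies
    # dominate and can be recovered by keeping the top coefficients. This is NOT
    # the sparse Fourier transform algorithm described in the talk.
    import numpy as np

    n, k = 1024, 4
    rng = np.random.default_rng(0)
    freqs = rng.choice(np.arange(1, n // 2), size=k, replace=False)
    t = np.arange(n)
    signal = sum(np.cos(2 * np.pi * f * t / n) for f in freqs)
    signal += 0.01 * rng.standard_normal(n)          # small noise

    spectrum = np.fft.fft(signal)
    top = np.argsort(np.abs(spectrum))[-2 * k:]      # each real tone appears at f and n-f
    print("planted:  ", sorted(int(f) for f in freqs))
    print("recovered:", sorted(set(int(min(f, n - f)) for f in top)))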

Collaboration with Haitham Hassanieh, Dina Katabi, and Eric Price.

June 13, 2012 Guy Rothblum (MSR SVC)

How to compute in the presence of leaks

We face the following problem: how to run an arbitrary algorithm in the presence of an attacker who observes some information about the internal state of the computation during its execution. This general problem has been tackled from various angles in recent years. It is important not only for running cryptographic algorithms in the presence of side-channel attacks, but also for running non-cryptographic algorithms, such as proprietary search algorithms or games, on cloud servers where parts of the internal execution are observable.

In this work, we consider algorithms that run on a leaky CPU: for each (sub)computation performed on the CPU, the adversary observes its inputs, its outputs, and the output of an arbitrary, adaptively chosen leakage function applied to that computation.

Our main result is a generic compiler that converts any algorithm into one that is secure against this family of partial-observation attacks (while preserving the algorithm's functionality). This result is unconditional and does not rely on any assumptions about secure hardware or cryptography.

In collaboration with Shafi Goldwasser

Jasmine Fisher (MSR Cambridge) June 6, 2012

From coding the genome to algorithms decoding life

The dramatic progress in genomics in the decade since the human genome was sequenced has led to major medical advances, but it has also revealed how complex human biology is and how much of it remains to be understood. Biology is an extremely complicated puzzle: we may know some of its pieces, but we have little idea how they fit together to play the symphony of life. Recent efforts to build executable models of complex biological phenomena (an approach we call executable biology) hold great promise for shedding new light on this puzzle and enabling new scientific breakthroughs. At the same time, this new wave is pushing computer science to make major leaps, in ways hardly imaginable before, in order to cope with the enormous complexity found in biology. In this talk, I will focus on recent successes in using formal methods to model cell-fate decisions in development and cancer, and on our ongoing efforts to develop dedicated tools that allow biologists to model cellular processes visually.

2012/5/29 Madhu Sudan (MSR New England)

TBA. Note the unusual day and time (Tuesday, 11:00 am to 12:00 pm).

2012/5/23 There is no seminar.

5/16/2012 Edith Cohen (AT&T).

Title: Maximizing the value of sampled data

Random sampling is an important tool for retaining the ability to query data under resource constraints. It is used to summarize data that is too large to store or manipulate, and to meet resource constraints on bandwidth or battery power. Estimators applied to the sample provide fast approximate answers to queries posed over the original data, and the value of the sample hinges on the quality of these estimators.

We are interested in queries that span multiple data points, such as maximum and range. Sums of such queries over selected keys correspond to quantities used for planning and for change/anomaly detection over traffic logs and measurement data. Unbiased estimation is very effective in this setting: the estimate for each individual key query inevitably has high variance, but aggregation reduces the relative error.

A sample, however, may yield the exact queried value, no information about it, or only partial information. The Horvitz-Thompson estimator, known to minimize variance, only applies to "all or nothing" outcomes (either the exact queried value is recovered from the sample or nothing is known about it), and is therefore either suboptimal or not applicable here.
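
For reference, a minimal Horvitz-Thompson sketch for subset-sum queries under independent per-key (Poisson) sampling is shown below; the data and sampling probability are synthetic, and this is exactly the "all or nothing" baseline the talk aims to go beyond for queries such as max and range.

    # Minimal Horvitz-Thompson estimator for subset-sum queries under Poisson
    # (independent per-key) sampling: each sampled value is inverse-probability
    # weighted. Classical baseline only; data here is synthetic.
    import random

    random.seed(0)
    data = {f"key{i}": random.randint(1, 100) for i in range(1000)}

    p = 0.1                                    # inclusion probability of each key
    sample = {k: v for k, v in data.items() if random.random() < p}

    def ht_subset_sum(sample, p, keys):
        """Unbiased estimate of sum(data[k] for k in keys) from the sample."""
        return sum(sample[k] / p for k in keys if k in sample)

    query = [f"key{i}" for i in range(0, 1000, 3)]
    print("true:", sum(data[k] for k in query),
          "HT estimate:", round(ht_subset_sum(sample, p, query), 1))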

We aim to present a general, principled methodology for deriving optimal (Pareto-optimal) nonnegative estimators for sampled data and for understanding their potential. We demonstrate significant improvements in estimation accuracy.

This is joint work with Haim Kaplan (Tel Aviv University).

May 9, 2012 Andrew Drucker (Massachusetts Institute of Technology)

New limits on instance compression for hard problems

Given an instance of a decision problem that is too difficult to solve outright, we may pursue the more limited goal of compressing it into a smaller instance of the same or a different problem. The study of the power and limits of instance compression involves an interesting interplay between computational and information-theoretic ideas.

As a representative problem, suppose we are given a Boolean formula psi of size m >> n over n variables and want to determine whether psi is satisfiable. Can this problem be efficiently reduced to an equivalent problem instance of size poly(n), independent of m? Harnik and Naor (FOCS '06) and Bodlaender et al. (ICALP '08) showed that this question is important in cryptography and in the theory of fixed-parameter tractability. Fortnow and Santhanam (STOC '08) gave a negative answer for deterministic compression, assuming NP is not contained in coNP/poly.

We describe new and improved evidence against efficient compression schemes. Our method applies to probabilistic compression of SAT and gives the first evidence against deterministic compression for a variety of other problems. To prove our results, we exploit the information bottleneck of instance compression schemes, using a new method to "disguise" information being fed into a compressive mapping.

Optimal multi-dimensional mechanism design: reducing revenue to welfare maximization

We provide a reduction from revenue maximization in multi-dimensional combinatorial auctions to welfare maximization.

This appropriately extends Myerson's single-dimensional result [Myerson81] to our setting: it shows that every feasible Bayesian auction can be implemented via a virtual VCG allocation rule. A virtual VCG allocation rule has the following simple form: every bidder's bid vector v_i is transformed into a virtual bid vector f_i(v_i) via a bidder-specific function, and then the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black-box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule.

In settings with arbitrary feasibility and demand constraints, our reduction from revenue to welfare optimization is supported by two algorithmic results on reduced-form auctions. First, we provide a separation oracle for determining the feasibility of a reduced-form auction. Second, we provide an algorithm that decomposes any feasible reduced form into a distribution over virtual VCG allocation rules. Moreover, given only black-box access to an implementation of the VCG allocation rule, we show how to execute both algorithms in a computationally efficient, approximate manner, obtaining two fully polynomial-time randomized approximation schemes (FPRAS): with high probability, the separation oracle is correct on every point that is eps-far (in infinity norm) from the boundary of the set of feasible reduced forms, and the decomposition algorithm, given a feasible reduced form that is eps-far (in infinity norm) inside the boundary, returns a distribution over virtual VCG allocation rules that implements it within eps (in infinity norm).

Our mechanisms run in time polynomial not in the size of the type profile space but in the number of bidder types. This running time is always polynomial in the number of bidders, and scales with the size of the support of each bidder's value distribution. In item-symmetric settings, it can be improved to polynomial in both the number of bidders and the number of items using the results of [Daskalakis-Weinberg 12].

Joint work with Yang Cai and Costis Daskalakis.

2012/4/25 Or Meir.

Combinatorial construction of locally testable codes

An error-correcting code is said to be locally testable if there is a test that can check whether a given string is a codeword, or rather far from the code, by reading only a constant number of symbols of the string.

The best known constructions of locally testable codes (LTCs) achieve very efficient parameters, but they rely heavily on algebraic tools and on PCP machinery. We present a new and arguably simpler construction of LTCs that matches the best known parameters without relying heavily on algebra or PCP machinery. Our construction is, however, probabilistic.

April 18, 2012 Shayan Oveis Gharan (Stanford University).

Spectral multiway partitioning and higher-order Cheeger inequalities.

A basic fact of algebraic graph theory is that the number of connected components of an undirected graph equals the multiplicity of the eigenvalue zero of the graph's Laplacian matrix. In particular, the graph is disconnected if and only if there are at least two eigenvalues equal to zero. Cheeger's inequality and its variants provide an approximate version of the latter fact: the graph has a sparse cut if and only if there are at least two eigenvalues that are close to zero.

We show an analogous characterization for higher eigenvalues: there are k eigenvalues close to zero if and only if the vertex set can be partitioned into k disjoint sets, each of which defines a sparse cut. Our results also use the bottom k eigenvectors to embed the vertices into R^k, giving a theoretical justification for clustering algorithms that apply geometric considerations to this embedding. Our techniques also yield a trade-off between the number of parts k (and hence the typical part size n/k) and the guarantee obtained in terms of lambda_k, the k-th smallest eigenvalue of the normalized Laplacian matrix.
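
As a hedged sketch of the pipeline this result justifies (not the quantitative guarantees of the talk), one can embed the vertices using the bottom k eigenvectors of the normalized Laplacian and then cluster the embedded points geometrically; the graph and the tiny k-means routine below are synthetic illustrations.

    # Spectral embedding with the bottom-k eigenvectors of the normalized
    # Laplacian, followed by a tiny k-means. Illustration only.
    import numpy as np

    def normalized_laplacian(A):
        d = A.sum(axis=1)
        d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
        return np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

    def spectral_embed(A, k):
        vals, vecs = np.linalg.eigh(normalized_laplacian(A))
        return vecs[:, :k]                     # bottom-k eigenvectors, one row per vertex

    def kmeans(points, k, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels

    # Two dense blocks joined by a single edge.
    A = np.zeros((20, 20))
    for block in (range(0, 10), range(10, 20)):
        for i in block:
            for j in block:
                if i != j:
                    A[i, j] = 1
    A[9, 10] = A[10, 9] = 1
    print(kmeans(spectral_embed(A, 2), 2))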

Based on joint work with James R. Lee and Luca Trevisan.

2012/4/11 JAN VONDRAK (IBM Almaden).

Hardness of randomized truthful mechanisms for combinatorial auctions.

The combinatorial auction problem is one of the central problems of algorithmic mechanism design: how should we allocate m objects among agents with private valuations for different combinations of the objects, so that the allocation is (approximately) optimal from the point of view of social welfare, while the agents have an incentive to reveal their true valuations? Approximation algorithms are known for several non-trivial classes of valuations, but they typically do not motivate agents to report truthfully. The classical VCG mechanism is truthful, but it is not computationally efficient. The main question, therefore, is whether the requirements of truthfulness and computational efficiency can be combined, or whether they are incompatible.

We identify a class of explicit (succinctly represented) submodular valuations for which combinatorial auctions without the truthfulness requirement admit a (1-1/e)-approximation. However, we prove that no truthful mechanism achieves an approximation better than some polynomial factor in the number of agents, unless NP is contained in P/poly (this rules out both universally truthful and truthful-in-expectation mechanisms).

Joint work with Shaddin Dughmi and Shahar Dobzinski.

2012/4/4 There is no seminar.

2012/3/28 1:30-2:30 Justin Thaler (Harvard University)

Practical verified computation with streaming interactive proofs

A problem that arises when outsourcing computation to commercial cloud computing services is trust. Consider, for example, computing a certain property of a large graph, or evaluating a complex query over a large database. We may not want (or be able) to compute the result ourselves, and we may not even be able to store all the data locally. This leads to new problems in the streaming model: we consider a streaming algorithm (modeling a user with limited memory and computational resources) that is assisted by a powerful helper (the service provider). The service provider's goal is not only to supply the user with the answer, but also to convince the user that the answer is correct.

In this talk, I will survey recent work exploring the application of proof systems to problems of this streaming flavor. In these protocols, an honest service provider can always convince the data owner that the answer is correct, while a dishonest prover is caught with high probability. The protocols I will discuss build on powerful ideas from communication complexity and the theory of interactive proofs, and most of them are extremely practical, achieving millions of updates per second while requiring very little space and communication.

Joint work with Amit Chakrabarti, Graham Cormode, Andrew McGregor, Michael Mitzenmacher, and Ke Yi.

2012/3/21 1:30-2:30 Mehrdad Nojoumian (University of Waterloo)

Secret sharing based on the social behavior of players

We first provide a mathematical model of "trust computation" in social networks. We then introduce the notion of a "social secret sharing scheme", in which shares are allocated based on each player's reputation and on the way he or she interacts with the other parties. In other words, the scheme renews the shares in each cycle without changing the secret, so that more reliable parties gradually gain more power.
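
As background only (a toy weighted variant of Shamir's classical scheme over a small prime field, not the social secret sharing scheme of the talk), one simple way to translate reputation into power is to give a more reputable player more shares:

    # Toy weighted Shamir secret sharing over a small prime field: a player's
    # reputation determines how many shares it holds, so reputable players can
    # reconstruct with fewer partners. Standard background, not the talk's scheme;
    # all parameters below are made up.
    import random

    P = 2_147_483_647                           # a prime modulus
    random.seed(0)

    def make_shares(secret, threshold, counts):
        """counts[name] = number of shares that player receives."""
        coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
        poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares, x = {}, 1
        for name, c in counts.items():
            shares[name] = [(x + i, poly(x + i)) for i in range(c)]
            x += c
        return shares

    def reconstruct(points):
        """Lagrange interpolation at 0 from >= threshold distinct points."""
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(secret=42, threshold=3, counts={"alice": 2, "bob": 1, "carol": 1})
    print(reconstruct(shares["alice"] + shares["bob"]))   # 42: alice + bob reach the threshold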

Finally, we propose a new scheme called "socio-rational secret sharing", in which rational, forward-looking players engage in a long-term interaction in a social context. As motivation, consider a repeatedly played game such as a sealed-bid auction. Assuming each party carries a reputation value, a selfish (or selfless) player can be punished (or rewarded) in each round of the game. With this social reinforcement, players are incentivized to cooperate.

2012/3/14 1:30-2:30 Venkatesan Guruswami (Carnegie Mellon University)

Lasserre hierarchy, higher eigenvalues, and graph partitioning

Partitioning the vertices of a graph into two (roughly) equal parts while minimizing the weight of the edges cut is a fundamental optimization problem that arises in diverse applications. Despite intensive research, there is a wide gap in our understanding of the approximability of these problems: the best algorithms achieve super-constant approximation factors, while even a factor-1.1 approximation is not known to be NP-hard.

We describe approximation schemes for various graph partitioning problems such as sparsest cut, minimum bisection, and small-set expansion. In particular, for every r we give an algorithm, running in time polynomial in n for any fixed r and epsilon, with approximation ratio (1+epsilon)/min(1, lambda_r), where lambda_r is the r-th smallest eigenvalue of the normalized graph Laplacian. This perhaps explains why even very weak hardness results for these problems have remained elusive.

Our algorithm is based on a rounding procedure for semidefinite programs from a strong class of relaxations known as the Lasserre hierarchy. The analysis uses bounds on low-rank approximations of a matrix in Frobenius norm using columns of the matrix.

Our methods apply more generally to optimizing quadratic integer programming problems with positive semidefinite objective functions and global linear constraints. This framework captures other notorious problems such as Unique Games, which we show to be easy when the normalized Laplacian does not have too many small eigenvalues.

Joint work with Ali Kemal Sinop.

2012/3/7 - Cancelled (TechFest)

2012/2/29 1:30-2:30 Parikshit Gopalan (MSR-SVC)

The short code

The long code plays a central role in our understanding of the hardness of approximating NP-hard problems. However, as its name suggests, it is long. We construct a shorter code that enjoys many of the desirable properties of the long code and can replace it in some scenarios.

Our short code comes from an explicit construction of small-set expander graphs with many large eigenvalues, which answers a question raised by Arora, Barak, and Steurer. We also give a general recipe for constructing small-set expanders from locally testable codes.

Joint work with Boaz Barak, Johan Hastad, Raghu Meka, Prasad Raghavendra, and David Steurer.

2/22/2012 1:30-2:30 Aleksander Madry (MSR New England)

Online algorithms and the k-server conjecture

Traditionally, optimization deals with problems in which the entire input is available before a solution must be produced. In many real-world scenarios, however, the input is revealed gradually, and we must make irrevocable decisions along the way based only on partial information about the whole input. This motivates the development of models that can cope with such scenarios.

This talk will explore one of the most common approaches to dealing with uncertainty in optimization: the online computation model and competitive analysis. We will focus on a central problem in the field, the k-server problem. This problem captures many online scenarios, in particular the widely studied caching problem, and is considered by many to be the "holy grail" problem of the field.
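
To make the online setting concrete, here is the classical LRU rule for the caching (paging) special case mentioned above; LRU is k-competitive for paging, and it is unrelated to the new k-server algorithm presented in the talk.

    # Classical LRU rule for the paging problem (a special case of k-server).
    # Shown only to make the online model concrete; not the talk's algorithm.
    from collections import OrderedDict

    def lru_faults(requests, k):
        cache = OrderedDict()                  # keys = pages, ordered by recency
        faults = 0
        for page in requests:
            if page in cache:
                cache.move_to_end(page)        # hit: refresh recency
            else:
                faults += 1                    # miss: fetch the page
                if len(cache) >= k:
                    cache.popitem(last=False)  # evict the least recently used page
                cache[page] = True
        return faults

    print(lru_faults([1, 2, 3, 1, 4, 1, 2, 5, 1, 2], k=3))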

This talk will present a new randomized algorithm for the k-server problem, the first online algorithm for this problem that achieves polylogarithmic competitiveness.

Based on collaboration with Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor.

2/15/2012 1:30-2:30 Dorothea Wagner (Karlsruhe Institute of Technology)

Algorithm Engineering for Graph Clustering

Graph clustering has a wide range of applications, from the social sciences to biology to the growing field of complex systems, and has become a central tool for network analysis in general. A common goal of graph clustering is to identify dense groups in a network. There are countless formalizations of this goal, among which the modularity measure is widely used. However, most algorithms for graph clustering are based on heuristics for, e.g., NP-hard optimization problems, and do not provide structural guarantees on their output. Moreover, most real-world networks are not static but evolve over time, as do their group structures.

This talk focuses on the algorithmic aspects of graph clustering, in particular on quality measures and on algorithms based on the intuition of identifying dense subgraphs that are only loosely connected to one another. The talk discusses various quality measures, in particular the modularity index, and presents algorithm engineering approaches to modularity maximization and related problems.

2012/2/9 10:30-11:30 Michael Kapralov (Stanford University)

Algorithms for bipartite matching problems with connections to sparsification and streaming

The need to process massive modern datasets forces us to reconsider several classic algorithmic solutions from the viewpoint of modern data-processing architectures. In recent years, sparsification has emerged as an important primitive in the graph algorithmist's toolbox, enabling small-space representations of a graph that preserve some of its useful properties. This talk focuses on two topics: first, new algorithms for bipartite matching problems that use sparsification and random walks in novel ways; and second, efficient algorithms for constructing sparsifiers on modern computational platforms.

In the first part of the talk, we consider the problem of finding a perfect matching in a regular bipartite graph, a classic problem with applications to edge coloring, routing, and scheduling. A long series of improvements over the years culminated in a linear-time algorithm. We use both sparsification and random walks to obtain efficient sublinear-time algorithms for this problem. In particular, we give an algorithm that finds a perfect matching in O(n log n) time, where n is the number of vertices, when the graph is given in adjacency-array representation. This running time is within O(log n) of the output complexity, essentially settling the problem. Our approach also yields a very efficient and easy-to-implement algorithm for computing the Birkhoff-von Neumann decomposition of a doubly stochastic matrix.
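
For the decomposition mentioned at the end of the paragraph, here is a simple greedy Birkhoff-von Neumann sketch (assuming SciPy is available; a straightforward baseline, not the fast algorithm of the talk): repeatedly extract a perfect matching supported on the positive entries and peel off the corresponding permutation.

    # Greedy Birkhoff-von Neumann decomposition of a doubly stochastic matrix.
    # Baseline illustration only; not the algorithm described in the talk.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def bvn_decompose(M, tol=1e-9):
        M = M.astype(float).copy()
        parts = []
        while M.max() > tol:
            # A min-cost assignment on -log weights stays on the support of M.
            cost = np.where(M > tol, -np.log(np.maximum(M, tol)), 1e9)
            rows, cols = linear_sum_assignment(cost)
            coeff = M[rows, cols].min()
            P = np.zeros_like(M)
            P[rows, cols] = 1.0
            parts.append((coeff, P))
            M -= coeff * P
        return parts

    M = np.array([[0.5, 0.5, 0.0],
                  [0.25, 0.25, 0.5],
                  [0.25, 0.25, 0.5]])
    for coeff, P in bvn_decompose(M):
        print(round(coeff, 3), P.astype(int).tolist())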

The second part of the talk describes efficient algorithms for constructing graph sparsifiers in recent distributed stream-processing systems, such as the recently introduced Twitter Storm. We also introduce a new approach to obtaining spectral sparsifiers, based on a new notion of connectivity of graph nodes related to shortest-path distances in random samples of the graph.

Finally, we introduce a notion of sparsification relevant to matching problems in general graphs and show an application to the problem of approximating the maximum matching in a single pass in the streaming model.

2012/1/26 10: 30-11: 30 Gregory Valiant (UC Berkeley)

Algorithmic solutions to statistical problems

In this talk, I will propose new approaches to three classic statistical problems, offering insights into the fundamental structure of these tasks from a computational perspective and addressing the ever-increasing size of real-world datasets.

The first problem is recovering the parameters of a mixture of Gaussian distributions. Given data from a single Gaussian, the sample mean and sample variance of the data converge quickly to the true mean and variance of the distribution. But if some of the data points are drawn from one Gaussian and the remaining points from another, how can one recover the parameters of each Gaussian component? This problem was first posed by Pearson in the 1890s and has been revisited by computer scientists over the last decade. In two papers with Adam Kalai and Ankur Moitra, we showed that both the sample complexity and the computational complexity of this problem are polynomial in the relevant parameters (the inverse of the desired accuracy).

The second problem, studied in a series of papers with Paul Valiant, is the task of estimating a broad class of statistical properties, including entropy, distances between pairs of distributions, and support sizes. Among other consequences, our results settle the sample complexity of the "distinct elements problem": given a data matrix with n rows, how many rows must be sampled in order to accurately estimate the number of distinct rows? We show that n/log n rows are necessary and sufficient (up to constant factors), greatly improving both the upper and lower bounds for this problem.
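
To make the estimation tasks concrete, here is the naive plug-in (empirical) estimator for entropy and support size, which simply uses the observed frequencies; it is exactly the kind of estimator whose sample requirements the results above improve on. The example distribution and sample size are illustrative assumptions.

    import math
    import random
    from collections import Counter

    def plugin_entropy(samples):
        """Empirical (plug-in) entropy estimate in nats; biased downward for small samples."""
        counts = Counter(samples)
        n = len(samples)
        return -sum((c / n) * math.log(c / n) for c in counts.values())

    def plugin_support_size(samples):
        """Naive support-size estimate: count only the distinct symbols actually seen."""
        return len(set(samples))

    # Uniform distribution over 1000 symbols, observed through only 500 samples:
    random.seed(0)
    samples = [random.randrange(1000) for _ in range(500)]
    print(plugin_entropy(samples), "vs true entropy", math.log(1000))
    print(plugin_support_size(samples), "vs true support size", 1000)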

Finally, we describe new bounds for the problem of learning in the presence of noise. Roughly speaking, this is the task of identifying the "relevant" variables: for example, given a large table whose columns represent the expression levels of many different genes, with a final column indicating the presence of some medical condition, how can one find the (presumably small) set of genes that is relevant to the condition?
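
A naive baseline for this search problem, shown only to fix ideas, is to test every small subset of variables for correlation with the label, which takes time growing roughly like (number of variables)^k; the results alluded to above concern doing substantially better. The parity-with-noise setup in the example is an illustrative assumption, not necessarily the exact model from the talk.

    import itertools
    import numpy as np

    def best_correlated_subset(X, y, k):
        """Brute-force baseline: score every size-k subset of columns by how far the
        XOR (parity) of those columns is from being independent of the label."""
        n_samples, n_vars = X.shape
        best, best_score = None, -1.0
        for subset in itertools.combinations(range(n_vars), k):
            parity = np.bitwise_xor.reduce(X[:, list(subset)], axis=1)
            score = abs(np.mean(parity == y) - 0.5)
            if score > best_score:
                best, best_score = subset, score
        return best, best_score

    # Synthetic example: y is the parity of columns 3 and 7, flipped with 10% noise.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(2000, 10))
    noise = rng.random(2000) < 0.1
    y = (X[:, 3] ^ X[:, 7]) ^ noise.astype(int)
    print(best_correlated_subset(X, y, k=2))   # expect (3, 7) with score near 0.4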

2012/1/25 1:30-2:30 Virginia Vassilevska Williams (UC Berkeley)

Multiplying Matrices Faster than Coppersmith-Winograd

In 1987, Coppersmith and Winograd published an algorithm to multiply two n×n matrices in O(n^2.376) arithmetic operations. This algorithm remained the theoretically fastest approach to matrix multiplication for 24 years. Recently, we have been able to design an algorithm that multiplies n×n matrices using at most O(n^2.3727) arithmetic operations, improving on the running time of Coppersmith-Winograd.

The improvement is based on a recursive application of the original Coppersmith-Winograd construction and a general theorem that reduces the running time analysis of the algorithm to solving a nonlinear constraint program. The final analysis is done by solving this program numerically. To fully optimize the running time, we use ideas from an independent study by Stothers, who claimed a running time of O(n^2.3737) in his doctoral thesis.
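
For readers who want a feel for how recursive bilinear constructions beat the naive cubic bound, the simplest example is Strassen's 1969 identity, which uses 7 half-size multiplications instead of 8 and already gives an O(n^2.81) algorithm; the Coppersmith-Winograd line of work, including the improvement described here, relies on far more intricate constructions and analyses. A standard sketch, assuming for simplicity that the dimension is a power of two:

    import numpy as np

    def strassen(A, B, cutoff=64):
        """Strassen's recursion: 7 multiplications of half-size blocks instead of 8,
        giving an O(n^log2(7)) ~= O(n^2.81) algorithm. Assumes n is a power of two."""
        n = A.shape[0]
        if n <= cutoff:
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C

    # Sanity check on random 128x128 matrices.
    A = np.random.rand(128, 128)
    B = np.random.rand(128, 128)
    print(np.allclose(strassen(A, B), A @ B))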

The purpose of this talk is to provide some intuition and highlight the main new ideas necessary to achieve the improvement.

2012/1/24 10:30-11:30 Roy Schwartz (Technion-Israel Institute of Technology)

Submodular Maximization

Combinatorial problems with submodular objective functions have attracted attention in recent years, due in part to their importance in economics, algorithmic game theory, and combinatorial optimization. In addition to the ubiquity of submodular utility functions in economics and algorithmic game theory, such functions also play important roles in combinatorics, graph theory, and combinatorial optimization. Some well-known problems that can be cast as submodular maximization include Max-Cut, Max-DiCut, Max-k-Cover, Generalized Assignment, several variants of Max-SAT, and several welfare and scheduling problems.

The classical work on submodular maximization problems is mostly combinatorial.

Recently, however, many results based on continuous algorithms have emerged. The main bottleneck in the continuous approach is how to approximately solve a non-convex relaxation of the submodular problem at hand. A simple and elegant technique called "continuous greedy" handles this problem well for monotone submodular objective functions, but only more complicated techniques are known for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm that finds approximate fractional solutions in both the non-monotone and monotone cases and improves the approximation ratio in various applications. Some notable immediate consequences are information-theoretically tight approximations for Submodular Max-SAT and Submodular Welfare with k players, for any number of players k, and an improved (1/e)-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1) knapsack constraints.
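
As background, the classical combinatorial counterpart of these continuous methods is the greedy algorithm for monotone submodular maximization under a cardinality constraint, which achieves a (1 - 1/e)-approximation; the continuous relaxations discussed above extend such guarantees to richer constraints and to non-monotone objectives. A sketch instantiated for Max-k-Cover (the instance is an illustrative assumption):

    def greedy_max_coverage(universe_sets, k):
        """Classical greedy for monotone submodular maximization under a cardinality
        constraint, instantiated for Max-k-Cover: repeatedly pick the set with the
        largest marginal coverage. Achieves a (1 - 1/e)-approximation."""
        covered = set()
        chosen = []
        for _ in range(k):
            best = max(range(len(universe_sets)),
                       key=lambda i: len(universe_sets[i] - covered))
            if len(universe_sets[best] - covered) == 0:
                break
            chosen.append(best)
            covered |= universe_sets[best]
        return chosen, covered

    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
    print(greedy_max_coverage(sets, k=2))   # picks {1,2,3} then {4,5,6}, covering 6 elements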

We show that the continuous technique can be used to obtain improved results in other settings as well. Perhaps the most basic submodular maximization problem is unconstrained submodular maximization, which captures such well-studied problems as Max-Cut, Max-DiCut, maximum facility location, and several variants of Max-SAT. Exploiting the symmetry of this problem, we propose an information-theoretically tight (1/2)-approximation algorithm. Unlike previously known algorithms, this algorithm maintains a fractional internal state. It can be further simplified into a purely combinatorial algorithm that requires only linear time.
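
The linear-time combinatorial algorithm mentioned above appears in the literature as a randomized "double greedy" procedure that processes the elements once, deciding for each whether to add it to a growing solution or delete it from a shrinking one. The sketch below is a reconstruction in that style rather than the talk's exact algorithm, instantiated on a small cut function (a canonical non-monotone submodular objective); the example graph is an illustrative assumption.

    import random

    def double_greedy(elements, f, seed=0):
        """Randomized double greedy for unconstrained (non-negative) submodular
        maximization: a (1/2)-approximation in expectation, using one pair of
        marginal-value queries per element."""
        rng = random.Random(seed)
        X, Y = set(), set(elements)
        for e in elements:
            a = f(X | {e}) - f(X)          # gain of adding e to the growing set
            b = f(Y - {e}) - f(Y)          # gain of removing e from the shrinking set
            a, b = max(a, 0.0), max(b, 0.0)
            if a + b == 0 or rng.random() < a / (a + b):
                X.add(e)
            else:
                Y.remove(e)
        return X                            # at this point X == Y

    # Example objective: the cut function of a small graph.
    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
    def cut(S):
        return sum(1 for u, v in edges if (u in S) != (v in S))

    S = double_greedy(range(5), cut)
    print(S, cut(S))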

2012/1/18 1:30-2:30 Paul Ohm (University of Colorado School of Law)

Computer Science and Law

Today, computer science shapes the law as never before, and the law in turn shapes computer science. Computer scientists build new computing and networking platforms and applications that enable new forms of collaboration and conflict, creating opportunities for lawyers and legislators to respond. The relationship also runs in the opposite direction, as lawyers and legislators constrain (or, less often, expand) what computer scientists are permitted to do. This interdisciplinary interaction has played out in many specific disputes, from network neutrality to the crypto wars, from struggles over copyright to information privacy, but it is worth examining more generally by looking at the essential relationships between computer science and law (and policy).

In this discussion, Paul Ohm, a professor at the University of Colorado Law School, will consider the relationship between computer science and law and policy. Drawing on his background as a professional system administrator, a prosecutor at the Department of Justice, and a scholar of cyberlaw and information privacy who majored in computer science as an undergraduate, he will discuss whether computer science influences law and policy, and whether the reverse also holds. The discussion will be grounded in specific cases, including anonymization, domain-name seizures, and deep packet inspection. The session is intended to be dynamic and interactive, with the audience helping to shape the direction of the discussion.

2012/1/12 10:30-11:30 Shiri Chechik (Weizmann Institute of Science)
