Chronological list of talks in the 2002-2003 academic year:

• September 16, 2002, 12:30pm

Speaker/Topic: No meeting---Yom Kippur.

• September 30, 2002, 12:30pm

Speaker: Christian Glasser, U. Wuerzburg
Topic: Error-Bounded Probabilistic Computations between MA and AM

There are two crucial points when one looks at the definition of BPP:

• a probability limit (i.e., an input is accepted if and only if the probability of acceptance, that is, the fraction of accepting computation paths, is greater than 1/2),
• and a probability gap (i.e., our machine promises that the probability of accepting paths is never within some fixed interval).

What happens if we keep the probability gap but lower the probability limit? It is easy to see that nothing changes if the probability limit is decreased by only a polynomial factor. However, if we decrease it by an exponential factor, we obtain a new class, denoted SBP (small bounded-error probability).
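The two definitions can be written side by side; a sketch of one standard formulation (notation mine; the constant epsilon is the gap, and the constants can be amplified by repetition):

```latex
% BPP: limit 1/2, constant gap \varepsilon > 0
x \in L \;\Rightarrow\; \Pr[M(x)\text{ accepts}] \ge \tfrac12 + \varepsilon,
\qquad
x \notin L \;\Rightarrow\; \Pr[M(x)\text{ accepts}] \le \tfrac12 - \varepsilon.

% SBP: the limit is lowered to an exponentially small threshold
% 2^{-p(|x|)} for some polynomial p, but the multiplicative gap is kept
x \in L \;\Rightarrow\; \Pr[M(x)\text{ accepts}] \ge (1+\varepsilon)\,2^{-p(|x|)},
\qquad
x \notin L \;\Rightarrow\; \Pr[M(x)\text{ accepts}] \le (1-\varepsilon)\,2^{-p(|x|)}.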

We study this class and show that it is between MA and AM on the one hand, and between BPP and BPPpath on the other hand. Moreover, we provide evidence that SBP does not coincide with these and other known complexity classes. In particular, we construct a relativized world where SBP is not contained in Sigma(2). This provides an oracle under which BPPpath is not in Sigma(2).

Joint work with Elmar Boehler and Daniel Meister.

• October 7, 2002, 12:30pm

Speaker/Topic: No meeting---UR Fall Break.

• October 21, 2002, 12:30pm

Speaker: Rahul Tripathi
Topic: Graph Isomorphism is in SPP

With the discovery of a polynomial-time algorithm for primality testing by Agrawal et al. [AKS02], the Graph Isomorphism problem has become perhaps the most widely studied natural problem whose complexity has not been classified as complete for some natural class. The Graph Isomorphism problem consists of deciding whether two given graphs are isomorphic, i.e., whether there is a bijective mapping (permutation) from the vertices of one graph to the vertices of the other that preserves the edge relations.
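The definition translates directly into a brute-force check; a minimal sketch (graph encoding and names are my own, exponential time, for illustration only):

```python
from itertools import permutations

def are_isomorphic(n, edges1, edges2):
    """Brute-force isomorphism test for two n-vertex graphs given as
    edge lists. Tries every bijection (permutation) of the vertices
    and checks that it maps edges to edges."""
    if len(edges1) != len(edges2):
        return False
    e2 = {frozenset(e) for e in edges2}
    for pi in permutations(range(n)):
        if all(frozenset((pi[u], pi[v])) in e2 for (u, v) in edges1):
            return True
    return False

# A 4-cycle and a differently labeled 4-cycle are isomorphic; a path is not.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
other = [(0, 2), (2, 1), (1, 3), (3, 0)]  # same cycle, relabeled
path  = [(0, 1), (1, 2), (2, 3)]
print(are_isomorphic(4, cycle, other))  # True
print(are_isomorphic(4, cycle, path))   # False
```

No subexponential worst-case bound is known for the general problem, which is what makes its exact classification interesting.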

To date, there is no efficient (polynomial-time) algorithm for this problem. On the other hand, there is evidence suggesting that the Graph Isomorphism problem is not NP-complete. For instance, it is known that if Graph Isomorphism is NP-complete then the polynomial-time hierarchy collapses.

Another approach to studying the complexity of Graph Isomorphism is to classify the problem in a class that is low for most of the gap-definable counting classes (e.g., PP, Parity-P, C_{=}P, etc.). The class SPP is known to be low for any gap-definable class. Since it is unlikely that the class NP is low for these gap-definable counting classes, the classification of Graph Isomorphism in SPP is evidence that the problem might not be hard for NP.

In this talk, I will present a recent paper entitled "Graph Isomorphism is in SPP" by Arvind and Kurur [ECCC-037:2002]. This is a major result since the classification of Graph Isomorphism in SPP was an open issue for almost a decade. The proof technique relies on results from both permutation group theory and complexity theory.

• October 28, 2002, 12:30pm (note special date)

Speaker: Mitsunori Ogihara
Topic: Primality is in P

In this talk I will present the recent work of Agrawal, Kayal, and Saxena showing that PRIMALITY---the problem of testing whether a given positive integer is a prime number---is in P. PRIMALITY is a fundamental problem in number theory. Ever since the ancient Greek Eratosthenes, who discovered the sieve method for finding primes, the properties of primes have been well studied. In computer science, the focus has been on the computational complexity of PRIMALITY. In the mid-70's, Pratt showed that PRIMALITY is in NP, and Solovay and Strassen showed that PRIMALITY is in coRP. These observations raised the question of whether the problem is in P and, if not, whether it is in RP.

A further driving force behind these investigations is modern cryptology. In response to Diffie and Hellman's 1976 proposal of public-key cryptography, a number of public-key cryptosystems were proposed. Many of these proposals, typified by those of RSA, Rabin, and Blum, are based on the difficulty of factoring composite numbers and use the product of large prime numbers as a key. Since fast PRIMALITY algorithms are essential for implementing such cryptosystems, much greater attention was given to the complexity of PRIMALITY. In the mid-80's the use of elliptic curves over a finite field/ring became a trend in the complexity theory of number-theoretic problems, and the question of whether PRIMALITY is in RP was resolved positively in '87 by Adleman and Huang using this technique. As for the deterministic time complexity of PRIMALITY, the best known algorithm has been that of Adleman, Pomerance, and Rumely, published in 1983, with running time (log N)^{O(log log log N)}, where N is the number whose primality is tested. The algorithm is an extension of work by Cohen and Lenstra, which uses cyclotomic rings. The expectation had been that more complicated mathematics would be needed to settle the question of whether PRIMALITY is in P.

However, the Agrawal, Kayal, and Saxena method does not use highly complicated mathematics. To verify the correctness of the proof one needs little beyond elementary number theory (except for one lemma due to someone else). The goal of my talk is to go over the proof of this exciting result.
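The proof starts from an elementary characterization: n > 1 is prime if and only if (x+1)^n = x^n + 1 (mod n), equivalently, n divides the binomial coefficient C(n,k) for every 0 < k < n. A direct check of this characterization, as a sketch (exponential time; the AKS algorithm itself evaluates such congruences modulo x^r - 1 for a small r to achieve polynomial time):

```python
from math import comb

def is_prime_via_binomials(n):
    """n > 1 is prime iff (x+1)^n == x^n + 1 (mod n), i.e., iff n divides
    C(n, k) for every 0 < k < n. This direct check is exponential in the
    size of n; AKS reduces the polynomial congruence mod x^r - 1 to make
    it feasible."""
    if n < 2:
        return False
    return all(comb(n, k) % n == 0 for k in range(1, n))

print([p for p in range(2, 30) if is_prime_via_binomials(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

For composite n with prime factor q, the coefficient C(n, q) is not divisible by n, which is why the congruence fails.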

• November 4, 2002, 12:30pm (note special length and location)

Rochester Complexity Theory Workshop in Honor of Manuel Blum's Visit
Special Information: Time = 1230-350pm. Location: First two sessions in CSB 601 and third session in CSB 703.

Workshop Program:

• Session I (Location = CSB 601. Session Chair = Lane A. Hemaspaandra.)
• 1230-100:
The Minimization Problem for Boolean Formulas.
Edith Hemaspaandra and Gerd Wechsung.
• 100-130:
The Complexity of Counting the Number of Self-Avoiding Walks on Graphs Embedded in the Two-Dimensional Grid.
Maciej Liskiewicz, Mitsunori Ogihara, and Seinosuke Toda.

• Break: 130-140.

• Session II (Location = CSB 601. Session Chair = Mitsunori Ogihara.)
• 140-210:
The Complexity of Computing the Size of an Interval.
Lane A. Hemaspaandra, Christopher M. Homan, Sven Kosub, and Klaus W. Wagner.
• 210-240:
Lower Bounds and the Hardness of Counting Properties.
Lane A. Hemaspaandra and Mayur Thakur.

• Break: 240-250.

• Session III. (Location = CSB 703. Session Chair = Joel Seiferas.)
• 250-320:
One-Way Permutations and Self-Witnessing Languages.
Christopher M. Homan and Mayur Thakur.
• 320-350:
The Enumerability of the Determinant and the Rank.
Alina Beygelzimer and Mitsunori Ogihara.

• November 18, 2002, 12:30pm *No Seminar Today! Use the time to prove theorems!!*

• December 2, 2002, 12:30pm

Speaker: Ming Zhong
Topic: Electronic Cash and Its Key Technologies

Intuitively, E-cash can be viewed as the digital equivalent of paper cash. In this talk, I will explain the motivations for E-cash research and the essential characteristics of E-cash systems. I would like to share with you the interesting achievements of the 15-year history of E-cash research since David Chaum proposed the first off-line E-cash system. Specifically, I will talk about four milestone E-cash systems, each representing a major improvement in the efficiency and functionality of E-cash. My explanation will be interwoven with an introduction to the underlying technologies on which E-cash systems are based. Finally, I will talk about my previous research, my ideas for improving E-cash efficiency, and my personal impressions of this field.

• January 27, 2003, 12:30pm (note special date)

Speaker: Alina Beygelzimer
Topic: Accuracy-efficiency trade-offs in approximating joint probability distributions

I will consider the question of approximating joint probability distributions by distributions describable by simple graphical models that allow efficient probabilistic inference (i.e., efficient propagation of probabilities). I will define the notion of "effective width" of a distribution as the size of the largest "significant" dependency among variables, such that no distribution that uses only information about dependencies on smaller subsets of nodes can serve as a good approximation. I will consider two parameterized properties of random graphical models: the property of "being efficient" (i.e., having small inference complexity) and the property of "being accurate" (i.e., being close to the target distribution). Both properties are monotone and exhibit threshold behavior. The relative position of these thresholds captures the effective width of the distribution and motivates the definition of the best accuracy-efficiency trade-off, i.e., a balance between the complexity and the accuracy of an approximation. We give a sampling algorithm for estimating the trade-off curve and provide controlled experimental results to support the proposed definitions. Joint work with Irina Rish, IBM Watson.

• February 3, 2003, 12:30pm

Speaker: Mayur Thakur
Topic: Towards Rice's Theorem in Complexity Theory

• February 10, 2003, 12:30pm (note special date)

Speaker: S. S. Ravi, Department of Computer Science, SUNY Albany
Topic: Computational Aspects of Sequential Dynamical Systems

Sequential Dynamical Systems (SDSs) are a special type of automaton that can be used to model simulation systems. We focus on the computational complexity of testing several phase-space properties of SDSs. Our main result is a dichotomy between classes of SDSs whose behavior is easy to predict and those whose behavior is hard to predict. Specifically, we show the following.

(a) Several state reachability problems for SDSs are PSPACE-complete, even when restricted to SDSs whose underlying graphs are d-regular for some constant d and the function associated with each node is symmetric. Moreover, this result holds even when each node computes the same symmetric function.

(b) In contrast, the reachability problems are solvable in polynomial time for SDSs when the Boolean function associated with each node is symmetric and monotone.

We will also briefly discuss our results for other problems (e.g. predecessor and fixed point existence) for SDSs.
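As a toy illustration of the model (the encoding and example are my own, not the paper's construction): an SDS holds a Boolean state at each node of a graph and updates the nodes one at a time in a fixed order, each node applying its local function to its own state and its neighbors' current states:

```python
def sds_step(states, neighbors, local_fn, order):
    """One pass of a sequential dynamical system: nodes update in the
    given order, each applying local_fn to the list of states in its
    closed neighborhood (itself plus its graph neighbors). Because
    updates are sequential, later nodes see earlier nodes' new states."""
    states = list(states)
    for v in order:
        neighborhood = [states[v]] + [states[u] for u in neighbors[v]]
        states[v] = local_fn(neighborhood)
    return states

# Example: NOR (symmetric but not monotone) on a triangle.
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
nor = lambda xs: int(not any(xs))
print(sds_step([0, 0, 0], nbrs, nor, order=[0, 1, 2]))  # [1, 0, 0]
```

NOR here is symmetric but not monotone, which is consistent with the dichotomy above: symmetry alone leaves reachability hard, while symmetry plus monotonicity makes it tractable.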

This research was carried out jointly with Chris Barrett and Madhav Marathe (both from Los Alamos National Laboratories), Harry Hunt III, Dan Rosenkrantz and Richard Stearns (all from the University at Albany).

• February 17, 2003, 12:30pm

Speaker: Christopher Homan
Topic: Minimizing Redundancy in Worst-Case Cryptography

Redundancy is a basic feature of many computational settings. In this talk, I will present techniques for constructing complexity-theoretic one-way functions in which the amount of ambiguity (i.e., redundancy in the preimage) is minimized while other important properties are maintained. I will highlight a technique for constructing total one-way permutations from partial one-way permutations. This construction proves the equivalence of a number of distinct complexity-theoretic hypotheses that were, prior to its discovery, not known to be equivalent.

• March 3, 2003, 12:30pm

Speaker: Yin-He Cheng
Topic: Multiclass Classification Methods based on Binary Classification

Although there is much work on binary classification, many successful binary classification methods, such as SVM and boosting, are hard to extend directly to multiclass classification. One solution is to reduce multiclass classification problems to binary classification problems. Many approaches based on this strategy have been studied. The most commonly used are one-vs-all and all-pairs. A composing approach called Error Correcting Output Codes (ECOC) was proposed by Dietterich et al. ECOC cleverly applies error-correcting codes from information theory to solve multiclass classification problems. "Loss-based decoding", an improvement on ECOC proposed by Allwein et al., focuses on cases in which the binary learning algorithm is based on the margin of a training sample.

In this talk, I will introduce these composing methods for multiclass classification, along with some analysis of their training and generalization errors.
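A minimal sketch of ECOC-style (Hamming) decoding, with a made-up code matrix for illustration: each class gets a binary codeword, each column trains one binary classifier, and a test point is assigned the class whose codeword is closest to the vector of binary predictions:

```python
import numpy as np

def ecoc_decode(code_matrix, binary_preds):
    """code_matrix: (n_classes, n_bits) array of {0,1} codewords, one row
    per class. binary_preds: length-n_bits vector of {0,1} outputs, one
    per binary classifier. Returns the class whose codeword is nearest
    in Hamming distance (Dietterich-Bakiri-style decoding)."""
    dists = np.abs(code_matrix - np.asarray(binary_preds)).sum(axis=1)
    return int(np.argmin(dists))

# Toy 4-class code with 6 bits; one flipped classifier bit is corrected.
M = np.array([[0, 0, 0, 0, 0, 0],
              [0, 1, 1, 1, 1, 0],
              [1, 0, 1, 1, 0, 1],
              [1, 1, 0, 0, 1, 1]])
print(ecoc_decode(M, [0, 1, 1, 1, 1, 1]))  # nearest to row 1 -> class 1
```

Loss-based decoding replaces the Hamming distance with a sum of margin losses over the columns, using the real-valued classifier outputs instead of thresholded bits.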

• March 17, 2003, 12:30pm

Speaker: Chengliang Zhang
Topic: The Storage and Retrieval Mechanism in Multidimensional Databases

Traditional transactional databases are not well suited to today's increasingly complex analytical tasks. The data warehouse, which is subject-oriented, integrated, time-variant, and nonvolatile, was developed specifically for those tasks. I focus on how to organize an effective storage mechanism to store and retrieve huge amounts of mostly sparse multidimensional data. To speed up query and analysis response, we need to precompute the multidimensional data. However, constrained by finite storage and the combinatorial explosion of multidimensional data, we must choose which views to materialize and devise a plan for materializing the selected views. Indexing is another way to speed up queries. Because the data are usually very sparse in multidimensional space, we also need efficient compression techniques. Finally, I will introduce some retrieval methods for multidimensional data.

• March 31, 2003, 12:30pm

Speaker: Tao Li
Topic: Random Projection for Dimension Reduction: Theory and Practice

Dimension reduction is a fundamental yet difficult problem in machine learning and data mining. Formally, the problem can be abstracted as representing a set of n points in d-dimensional space by a set of points in k-dimensional space with minimum distortion in the distance between any pair of points (where k is strictly less than d). In this talk, we will discuss the theory and practice of using random projection for dimension reduction. First, the Johnson-Lindenstrauss Lemma, which is the fundamental result for random projection, will be reviewed in detail. Specifically, we will see an elementary probabilistic proof given by Dasgupta and Gupta. Second, we will look at some real applications of random projection. Finally, several natural extensions will be discussed.
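A minimal sketch of the Gaussian construction behind the Johnson-Lindenstrauss Lemma (names are my own): project the d-dimensional points with a d x k matrix of i.i.d. standard normal entries scaled by 1/sqrt(k); pairwise distances are then preserved up to a factor 1 +/- eps with high probability once k is on the order of log(n)/eps^2:

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project the rows of X (n points in d dimensions) to k dimensions
    with a random Gaussian matrix. The 1/sqrt(k) scaling makes squared
    distances correct in expectation; concentration (the JL Lemma) makes
    them close to correct with high probability."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R

# Distances between random points in 1000 dims survive projection to 200.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 1000))
Y = random_projection(X, 200)
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
# proj / orig is close to 1 with high probability
```

Note that the projection is oblivious to the data: the matrix R is drawn without looking at X, which is what makes the method so widely applicable.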

• April 7, 2003, 12:30pm

Speaker: Holger Spakowski, U. Duesseldorf
Topic: Exact Complexity of the Winner Problem for Young Elections

In 1977 Young proposed a voting scheme that extends the Condorcet Principle based on the fewest possible number of voters whose removal yields a Condorcet winner. We prove that both the winner and the ranking problem for Young elections are complete for P_{||}^NP, the class of problems solvable in polynomial time by parallel access to NP. Analogous results for Lewis Carroll's 1876 voting scheme were recently established by E. Hemaspaandra, L. Hemaspaandra, and J. Rothe. In contrast, we prove that the winner and ranking problems in Fishburn's homogeneous variant of Carroll's voting scheme can be solved efficiently by linear programming.
Joint work with Joerg Rothe and Joerg Vogel.
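For reference, a Condorcet winner is a candidate who beats every rival in a strict pairwise majority; a minimal sketch of checking for one (the ballot format is my own):

```python
def condorcet_winner(ballots, candidates):
    """Each ballot ranks the candidates from most to least preferred.
    Returns the candidate who beats every other candidate head-to-head
    by a strict majority, or None if no such candidate exists."""
    def beats(a, b):
        wins = sum(1 for r in ballots if r.index(a) < r.index(b))
        return wins > len(ballots) - wins
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None

# A Condorcet cycle: a > b > c > a, so no winner exists.
ballots = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
print(condorcet_winner(ballots, ["a", "b", "c"]))  # None
```

Young's scheme asks how few voters must be removed before some candidate becomes a Condorcet winner, and it is this optimization over voter subsets that drives the P_{||}^NP-hardness.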

• April 21, 2003, 12:30pm ****SPECIAL ROOM: CSB 601****

Speaker: Alina Beygelzimer
Topic: Alternative Notions of Approximation and Space-Bounded Computations. Ph.D. Thesis Defense Talk

We investigate alternative notions of approximation for problems inside P. We show that for some natural problems, a barely non-trivial advantage over guessing is as hard to obtain as the solution itself. For example, we prove that if one could eliminate even a single possibility for the value of an arithmetic circuit on a given input, then this would imply that the class P has fast (polylogarithmic-time) parallel algorithms. In other words, this would constitute a proof that there are no inherently sequential problems in P, which is quite implausible. The result is robust with respect to eliminating procedures that are allowed to err (by excluding the correct value) with small probability.

We also show that several fundamental linear algebra problems are hard in this sense. It turns out to be as hard to substantially reduce the number of possible values for matrix rank and matrix determinant as to compute them exactly. This notion of approximation has been studied previously. This is the first time, however, that it has been investigated for functions inside P.

Back to the main page