UB Theory Weekly – Week 4

Hi all,

The fourth Theory Weekly Meeting will happen this Friday between 4:00 and 5:00 pm at 203 Davis Hall (Theory Lab). The schedule will be as follows:

4:00 – 4:40 pm – A talk by Sai Vikneshwar (me :)). The plan for the talk is as follows –

I will start with the problems that modern distributed storage systems face and the existing techniques for solving them. I will then introduce “Locally Repairable Codes”, which strike a good balance between storage overhead and recovery speed. These codes saved Microsoft hundreds of millions of dollars when deployed in Windows Azure Storage.
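To give a feel for the local-repair idea ahead of the talk, here is a toy XOR sketch of my own (emphatically not the actual Azure LRC construction): data blocks are split into small groups, each with its own parity block, so a single lost block is rebuilt from its group alone rather than from all blocks.

```python
from functools import reduce

def xor(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, bs) for bs in zip(*blocks))

data = [b'AAAA', b'BBBB', b'CCCC', b'DDDD']
groups = [data[:2], data[2:]]          # two local groups of two blocks each
parities = [xor(g) for g in groups]    # one local parity block per group

# Lose data[1]; repair reads only its own group (data[0] and parities[0]),
# not the other group -- this locality is the point of LRCs.
repaired = xor([data[0], parities[0]])
assert repaired == b'BBBB'
```

A real LRC adds global parities on top of the local ones to tolerate multiple failures; the sketch only shows the single-failure local-repair path.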

4:40 – 5:00 pm  – Socializing with the Theory community. 

Hoping to see you all!

Ken Clarkson’s Talk at FWCG 2015

I was extremely fortunate to attend Ken Clarkson’s excellent talk on “Input Sparsity and Hardness for Robust Subspace Approximation” at FWCG this year. Excerpts from the talk –

Let’s assume that we are given an input of n points a_1, a_2, ... , a_n in d dimensions. These points are arranged as rows of a matrix A \in \mathbb{R}^{n \times d}.

The goal is to find a k-dimensional subspace L minimizing cost(A,L) \equiv \sum_{i\in [n]} dist(a_i,L)^2 among all k-dimensional subspaces.

The immediate solution is through Singular Value Decomposition (SVD hereon): there is a best rank-k approximation A_k = U_k\Sigma_k V_k^\top with cost(A,L^*) = \|A - A_k\|_F^2 = \sum_{i\in[n]}dist(a_i, a_iV_kV_k^\top)^2, where L^* is the span of the top k right singular vectors in V_k^\top. The best known approximation algorithms can find L' such that dim(L') = k and cost(A, L') \le (1 + \epsilon)cost(A, L^*) in O(nnz(A) + (n + d)poly(k\epsilon^{-1})) time. It is also important to note that SVD minimizes a sum of squared distances, so a single point (an outlier) can carry a lot of weight and heavily influence the resulting subspace. This leads to the conclusion that SVD is not a very robust technique.
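As a quick numerical sanity check of the SVD formula above (my own numpy sketch; the variable names are just illustrative), the residual cost of the top-k subspace equals the sum of the squared tail singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 10, 3

# Rows of A are the n points in R^d.
A = rng.standard_normal((n, d))

# Best rank-k subspace L* = span of the top-k right singular vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Vk = Vt[:k].T                       # d x k, orthonormal columns

# cost(A, L*) = sum_i dist(a_i, a_i V_k V_k^T)^2 = ||A - A V_k V_k^T||_F^2
residual = A - A @ Vk @ Vk.T
cost = np.linalg.norm(residual, 'fro') ** 2

# Equivalently, the sum of the squared tail singular values.
assert np.isclose(cost, np.sum(s[k:] ** 2))
```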

The alternative solution is to use M-estimators, familiar from regression. Let the r_i's be the residuals, i.e., the differences between the i-th observations and the fitted values. The traditional least-squares method minimizes \sum_i r_i^2, which runs into trouble when there are outliers in the data, as we saw in the earlier case. M-estimators instead solve \min_x\sum_i M(r_i) for a suitable function M, and are proven to reduce the effect of outliers.
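A small sketch of this robustness difference, using the Huber loss as one standard choice of M (the data, the grid search, and the threshold delta are my own illustration, not from the talk):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber M-estimator loss: quadratic near 0, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

# 1-D location estimation: minimize sum_i M(x_i - c) over c, by grid search.
x = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 100.0])   # one gross outlier

grid = np.linspace(-5, 105, 100001)
ls_cost = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)
hub_cost = huber(x[:, None] - grid[None, :]).sum(axis=0)

ls_est = grid[ls_cost.argmin()]    # least squares -> the mean, dragged to ~17.5
hub_est = grid[hub_cost.argmin()]  # Huber estimate stays near the inliers
```

The squared loss lets the single outlier pull the estimate far from the cluster at 1, while the Huber loss caps the outlier's influence.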

Now that a more robust technique is established, we need to see how it can be applied to low-rank approximation. Low-rank approximation also calls for rotational invariance: we want the cost to be preserved when \mathbb{R}^d is rotated by an orthogonal matrix W. For example, entrywise cost functions of the form \sum_i \|a_i - b_i\|_1 can never be used, since even when B has rank zero the rotational invariance property fails for most rotations. The workaround is to use cost functions of the form \sum_{i \in [n]}M(dist(a_i, L)) for M(z) = z^p. These functions are rotationally invariant and more robust for smaller values of p. For any orthonormal basis V of L, we have cost(A,L) = cost(A,V) \equiv \sum_{i\in [n]} M(dist(a_i, a_i V V^\top)). A subspace L' is a (1 + \epsilon)-approximation if dim(L') = k and cost(A, L') \le (1 + \epsilon)\min_{dim(L) = k}cost(A, L).
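The rotational-invariance claim for M(z) = z^p is easy to check numerically (my own numpy sketch; a random orthogonal matrix W stands in for the rotation, and rotating the points means rotating the subspace basis along with them):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, p = 30, 5, 2, 1.0
A = rng.standard_normal((n, d))

# Orthonormal basis V of a random k-dim subspace, and a random rotation W.
V, _ = np.linalg.qr(rng.standard_normal((d, k)))
W, _ = np.linalg.qr(rng.standard_normal((d, d)))

def cost(A, V, p):
    """sum_i M(dist(a_i, a_i V V^T)) with M(z) = z^p."""
    dists = np.linalg.norm(A - A @ V @ V.T, axis=1)
    return np.sum(dists ** p)

# Rotating the points (rows a_i -> a_i W) and the basis (V -> W^T V)
# leaves the cost unchanged: the residual rows are just rotated.
assert np.isclose(cost(A, V, p), cost(A @ W, W.T @ V, p))
```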

Some nice results on this front –

  1. Hardness – It is NP-hard to find a (1 + 1/d)-approximation for p \in [1,2).
  2. For p \in [1, 2), there is an algorithm that returns a (1 + \epsilon)-approximation in O(nnz(A) + (n+d)poly(k\epsilon^{-1}) + exp(poly(k\epsilon^{-1}))) time with constant probability.
  3. Properties of “nice” M-estimators –
    1. All such functions are even – M(z) = M(-z)
    2. Monotonic – |a| > |b| \Rightarrow M(a) > M(b)
    3. Bounded growth – There is a C_M such that for a \ge b > 0, C_Ma/b \le M(a)/M(b) \le a^2/b^2.
    4. Root sub-additivity property – \sqrt{M(a + b)} \le \sqrt{M(a)} + \sqrt{M(b)}.
  4. For any vector x, \sqrt{\sum_{i \in [n]}M(x_i)} is almost a norm, lacking only a few properties such as scale invariance.

There are some open questions too –

  1. If \epsilon is fixed, what is the hardness of approximation?

He also went on to describe some algorithms for the nice M-estimators, as well as row- and column-reduction techniques that preserve the approximation. Overall, I would say it was a wonderful talk which helped me understand low-rank approximation.

Note – I have only sketched the basic ideas here. I strongly recommend the following papers for further reading –

1. Sparse Subspace Embeddings

2. Sketching for M-estimators.

Acknowledgements – I would like to thank Ken Clarkson for his support and encouragement (and slides :)), which played a major role in this blog post.

UB Theory Weekly – Week 3

Hi all,

This week we have FWCG happening here at UB. I would like to urge you all to attend. There are some interesting talks –

Piotr Indyk (MIT)
Talk title: Beyond Locality Sensitive Hashing
Time: 11:00-12:00, Oct. 23
Location: 106 O’Brian Hall

Kenneth L. Clarkson (IBM Almaden)
Talk title: Input Sparsity and Hardness for Robust Subspace Approximation
Time: 11:00-12:00, Oct. 24
Location: 101 Davis Hall

Kenneth W. Regan (UB)
Talk title: Using the Shape of Space for Shortcuts: Speeding up regression on millions of chess positions
Time: 2:45-3:45 pm, Oct. 23
Location: 106 O’Brian Hall

UB Theory Weekly – Week 2

Hi all,

The second Theory Weekly Meeting will happen this Friday between 4:00 and 5:00 pm at 203 Davis Hall (Theory Lab). The schedule will be as follows:

4:00 – 4:50 pm – A talk by Prof. Roger (Xin) He.

The abstract in his own words:

Two “graph drawing” problems –

Given a graph G, the general idea of “graph drawing” is to draw G on the plane so that certain conditions/optimization measures are met. Depending on these conditions, the problems can be fairly practical or very theoretical. The tools involved are often graph-theoretic, and sometimes there is a geometric flavor to them. (Since we are talking about drawings on the plane, most of the interesting results concern planar graphs.)

I’ll be discussing two such problems (toward the theoretical end) that my students and I have studied or are currently working on.

Definition 1: Given a planar graph G and a metric function D(*,*), a “greedy drawing” of G satisfies:
1. Each vertex of G is drawn as a point on the plane.
2. Each edge of G is drawn as a straight line connecting its two end vertices.
3. The “greedy drawing” condition: For any two vertices u, w in G, there exists a neighbor v of u such that D(v,w) < D(u,w). (In other words, v is closer to w than u is.)

Condition 3 is equivalent to the following: For any two vertices u, w in G, there is a path P_{uw} such that, when traveling from u to w, the distance D(x,w) from the intermediate vertices x of P_{uw} to w strictly decreases.

The problem is motivated by a network routing problem. If G has a greedy drawing, we get a very simple routing protocol: if u wants to send a message to the destination w, u simply sends it to a neighbor v that is closer to w than u itself. (Then v follows the same protocol; the greedy property guarantees the message will reach the destination w.)
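The protocol can be sketched in a few lines (my own illustration; the graph, positions, and function names are hypothetical, and Euclidean distance plays the role of D):

```python
import math

def greedy_route(adj, pos, u, w):
    """Follow the greedy protocol: at each step, move to a neighbor
    strictly closer (in Euclidean distance) to the destination w.
    This succeeds for every (u, w) pair exactly when `pos` is a
    greedy drawing of the graph."""
    dist = lambda a, b: math.dist(pos[a], pos[b])
    path = [u]
    while u != w:
        closer = [v for v in adj[u] if dist(v, w) < dist(u, w)]
        if not closer:
            return None                     # stuck: not a greedy drawing
        u = min(closer, key=lambda v: dist(v, w))
        path.append(u)
    return path

# A triangle drawn in general position is a greedy drawing.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 1.0)}
assert greedy_route(adj, pos, 0, 2) == [0, 2]
```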

The questions are: Which classes of graphs have such drawings? And if a drawing exists, what is the smallest size of the drawing?

A (moderately famous) related conjecture (now a theorem) is (Papadimitriou, 2003): Every 3-connected plane graph G has a greedy drawing with respect to Euclidean distance D(*,*). The conjecture was proved by Tom Leighton et al. (2008). However, there are drawbacks and open problems, which we will discuss.

Another problem:

Def 2: Same as Def 1, except that condition 3 is replaced by the “monotone condition”: For any two vertices u, w, there is a path P_{uw} from u to w and a line (or direction) l_{uw} such that P_{uw} is “monotone with respect to l_{uw}” (namely, the order of the projections of the vertices of P_{uw} onto the direction l_{uw} is the same as the order of the vertices along P_{uw}).
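The projection condition is easy to check directly (my own sketch; the staircase path below is a made-up example, and I read "same order" as strictly increasing projections):

```python
def is_monotone(path_pos, direction):
    """A path is monotone w.r.t. a direction l if the projections of its
    vertices onto l appear in strictly increasing path order."""
    dx, dy = direction
    proj = [x * dx + y * dy for (x, y) in path_pos]
    return all(a < b for a, b in zip(proj, proj[1:]))

# A staircase path is monotone w.r.t. the diagonal direction (1, 1)...
stair = [(0, 0), (1, 0), (1, 1), (2, 1)]
assert is_monotone(stair, (1, 1))
# ...but not w.r.t. the vertical direction (0, 1), where it stalls.
assert not is_monotone(stair, (0, 1))
```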

This drawing style was introduced about three years ago, partly to address the drawbacks of “greedy drawing”. The questions are the same: Which classes of graphs have such drawings? And if so, what is the smallest size of the drawing?
I’ll discuss partial results, and related problems.

4:50 – 5:00 pm – Socializing with the Theory community.

All are welcome!

UB Theory Weekly – Week 1

Hi all,

The first Theory Weekly Meeting will happen this Friday between 4:00 and 5:00 pm at 203 Davis Hall (Theory Lab). The schedule will be as follows:

4:00 – 4:30 pm – A talk by Prof. Shi Li on “Approximating capacitated k-median with (1+eps)k open facilities”.

The abstract of the talk in his own words – In the capacitated k-median (CKM) problem, we are given a set F of facilities, each facility i \in F with a capacity u_i, a set C of clients, a metric d over F \cup C, and an integer k. The goal is to open k facilities in F and connect the clients in C to the open facilities such that each facility i is connected by at most u_i clients, so as to minimize the total connection cost.
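On a toy instance the CKM objective can be evaluated by brute force, which may help fix the definitions before the talk (my own illustrative sketch; the instance, a metric on the line, is made up and is not from the paper):

```python
import itertools

# Tiny capacitated k-median instance.
F = ['f1', 'f2', 'f3']           # facilities
C = ['c1', 'c2', 'c3', 'c4']     # clients
u = {'f1': 2, 'f2': 2, 'f3': 4}  # capacities
k = 2

# A metric d over F ∪ C, here induced by points on a line.
pt = {'f1': 0.0, 'f2': 5.0, 'f3': 10.0,
      'c1': 1.0, 'c2': 2.0, 'c3': 6.0, 'c4': 9.0}
d = lambda a, b: abs(pt[a] - pt[b])

def best_solution(F, C, u, k):
    """Brute force: try every set of k open facilities and every
    capacity-respecting assignment of clients to open facilities;
    return the minimum total connection cost."""
    best = float('inf')
    for opened in itertools.combinations(F, k):
        for assign in itertools.product(opened, repeat=len(C)):
            if all(assign.count(f) <= u[f] for f in opened):
                best = min(best, sum(d(c, f) for c, f in zip(C, assign)))
    return best
```

Brute force is of course exponential; the point of the paper is achieving a constant approximation efficiently while opening only (1+eps)k facilities.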

In this paper, we give the first constant approximation for CKM, that only violates the cardinality constraint by a factor of 1+eps. The natural LP relaxation for the problem, which almost all previous algorithms for CKM are based on, has unbounded integrality gap even if (2-eps)k facilities can be opened.  We introduce a novel configuration LP for the problem, that overcomes this integrality gap.  Our algorithm does not need to solve the configuration LP to optimality. Instead it is based on a rounding algorithm with the following property: given a fractional solution x, which may or may not be a valid solution to the LP, the rounding algorithm either successfully returns a good feasible solution, or reports a constraint of the LP that x violates.

4:30 – 4:45 pm – Q and A based on the talk.

4:45 – 5:00 pm – Socializing with the Theory community.

All are welcome!