## GlobalSIP 2013 Symposium on:

# Low-Dimensional Models and Optimization in Signal Processing


Sparsity, low rank, and other low-dimensional geometric models have long been studied and exploited in machine learning, signal processing, and computer science. For instance, sparsity has made an impact on the problems of compression, linear regression, subset selection, graphical model learning, and compressive sensing. Similarly, low-rank models lie at the heart of a variety of dimensionality reduction and data interpolation techniques. Additionally, manifold models can succinctly characterize complex signal classes that exhibit few degrees of freedom.

The goal of this symposium is to showcase recent developments in the formulation of: (i) new low-dimensional models for high-dimensional data that are concise enough to be informative in signal recovery and information extraction, with examples including sparsity, low-rank matrices, and manifold models; (ii) new signal recovery and information extraction algorithms, in particular optimization-based approaches, that promote solutions well matched to the aforementioned signal models, with examples including greedy and iterative algorithms, optimization solvers, and heuristic methods; and (iii) new applications where low-dimensional signal models outperform existing signal processing, signal estimation, and data recovery approaches in computation, fidelity, or measurement requirements.

Submissions of at most 1 page in two-column IEEE format are welcome on topics including:

- Dimensionality Reduction
- Algorithms for Signal Processing
- Signal Models
- Signal Processing
- Compressive Sensing

### Keynote Speakers

Piotr Indyk, *Massachusetts Institute of Technology*, **Approximation-Tolerant Model-Based Compressive Sensing**

The goal of sparse recovery is to recover a k-sparse signal x from (possibly noisy) linear measurements of the form y = A x, where A describes the measurement process. Standard results in compressive sensing show that it is possible to recover the signal x from m = O(k log (n/k)) measurements, and that this bound is tight. The framework of model-based compressive sensing (introduced by Baraniuk et al.) overcomes the lower bound and reduces the number of measurements further to O(k) by limiting the supports of x to a subset M of all possible supports. This has led to many measurement-efficient algorithms for a wide variety of signal models, including block-sparsity and tree-sparsity.
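The recovery algorithms alluded to above typically alternate a gradient step with a projection onto the model. The following is a minimal NumPy sketch of model-based iterative hard thresholding for the block-sparsity model; the problem sizes, step size, and iteration count are illustrative choices, not taken from the talk:

```python
import numpy as np

def block_projection(x, block_size, num_blocks):
    """Model projection for block sparsity: keep the num_blocks blocks
    of length block_size with the largest energy, zero out the rest."""
    blocks = x.reshape(-1, block_size)
    energy = np.sum(blocks ** 2, axis=1)
    keep = np.argsort(energy)[-num_blocks:]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

def model_iht(y, A, block_size, num_blocks, iters=300):
    """Model-based iterative hard thresholding: a gradient step on
    ||y - Ax||^2 followed by projection onto the block-sparsity model."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = block_projection(x + step * A.T @ (y - A @ x),
                             block_size, num_blocks)
    return x

rng = np.random.default_rng(0)
n, m, bs, nb = 64, 40, 4, 2                  # illustrative sizes
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[8:12] = rng.standard_normal(bs)       # first active block
x_true[40:44] = rng.standard_normal(bs)      # second active block
y = A @ x_true                               # noiseless measurements
x_hat = model_iht(y, A, bs, nb)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```

Note that `block_projection` here is an exact model projection, computable in polynomial time; the obstacle discussed next arises for richer models where no such exact projection is known.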

However, extending the framework to more general models has been stymied by a key obstacle: for the framework to apply, one needs an algorithm that, given a signal x, finds the *best* approximation to x that has its support in M (this procedure is often called the "model projection procedure"). An *approximation* algorithm for this optimization task is not sufficient. Since many problems of this form are not known to have exact polynomial-time algorithms, this requirement poses a fundamental obstacle to extending the framework to a richer class of models. Generalizing the model-based framework to approximate model projections has been the subject of a large body of research in recent years.

In this talk, we show how to remove this obstacle and show how to extend the model-based compressive sensing framework so that it requires only approximate solutions to the aforementioned optimization problems. Interestingly, our extension requires the existence of *two* approximation algorithms, one for the maximization and one for the minimization variants of the optimization problem. We then show how this framework leads to improved model-based compressive sensing algorithms for some well-studied sparsity models.

Joint work with Chinmay Hegde and Ludwig Schmidt, to appear in SODA'14.

**Piotr Indyk** is a Professor of Electrical Engineering and Computer Science at MIT. He joined MIT in 2000, after earning his PhD from Stanford University. Earlier, he received his Magister degree from Uniwersytet Warszawski in 1995. Piotr's research interests lie in the design and analysis of efficient algorithms. Specific interests include high-dimensional computational geometry, sketching and streaming algorithms, sparse recovery, and compressive sensing. He has received the Sloan Fellowship (2003), the Packard Fellowship (2003), and the MIT Faculty Research Innovation Fellowship (2012). His work on sparse Fourier sampling was named to the Technology Review "TR10" list in 2012, and his work on locality-sensitive hashing received the 2012 Kanellakis Theory and Practice Award.

Benjamin Recht, *University of Wisconsin-Madison*, **Going Off the Grid**

We often model signals from the physical world with continuous parameterizations. Unfortunately, continuous models pose problems for the tools of sparse approximation, and popular discrete approximations are fraught with theoretical and algorithmic issues. In this talk, I will propose a general convex-optimization framework, called atomic-norm denoising, that obviates discretization and gridding by generalizing sparse approximation to continuous dictionaries. As an extended example that highlights the salient features of the atomic-norm framework, I will discuss the problem of estimating the frequencies and phases of a mixture of complex exponentials from noisy, incomplete data. I will demonstrate that atomic-norm denoising outperforms state-of-the-art spectral and sparse-approximation methods in both theory and practice. I will then close with a discussion of additional applications of the atomic-norm framework, including deconvolution, deblurring, and system identification. This is joint work with Badri Bhaskar, Parikshit Shah, and Gongguo Tang.
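The gridding issue motivating the talk can be seen in a toy NumPy experiment (parameters are illustrative, and this is not the atomic-norm method itself): a complex exponential whose frequency falls between DFT grid points leaks its energy across many bins, so a signal that is one-sparse in a continuous dictionary is far from sparse on the grid.

```python
import numpy as np

n = 64
t = np.arange(n)
fracs = {}
for name, f in [("on-grid", 8 / n), ("off-grid", 8.5 / n)]:
    x = np.exp(2j * np.pi * f * t)   # a single complex exponential
    X = np.fft.fft(x)
    # fraction of signal energy captured by the single largest DFT coefficient
    fracs[name] = np.max(np.abs(X)) ** 2 / np.sum(np.abs(X) ** 2)
print(fracs)
```

The on-grid frequency puts all of its energy in one bin, while the half-bin offset leaves well under half of the energy in the largest bin, which is the basis-mismatch problem that off-grid methods avoid.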

**Benjamin Recht** is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison and holds courtesy appointments in Electrical and Computer Engineering, Mathematics, and Statistics. He is a PI in the Wisconsin Institute for Discovery (WID), a newly founded center for research at the convergence of information technology, biotechnology, and nanotechnology. Ben received his B.S. in Mathematics from the University of Chicago, and an M.S. and PhD from the MIT Media Laboratory. After completing his doctoral work, he was a postdoctoral fellow in the Center for the Mathematics of Information at Caltech. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, and the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization.

Rebecca Willett, *University of Wisconsin-Madison*, **Efficient Tracking in Dynamic Environments**

Modern sensors are collecting data at unprecedented rates, often from platforms with limited processing power and bandwidth for data transmission. To cope with this data deluge, we must develop robust methods for efficiently extracting information from large-scale streaming data. This task is most tractable when the information of interest exhibits low-dimensional spatio-temporal structure. However, in practical scenarios ranging from motion imagery to network analysis, the environment is nonstationary, so that static models yield both poor empirical performance and weak theoretical guarantees. I will describe a novel "dynamic mirror descent" method which addresses this challenge by learning and tracking low-dimensional models underlying the data. The associated regret bounds, in contrast to competing online learning regret bounds, reflect this underlying spatio-temporal structure. These concepts are demonstrated empirically in the context of sequential compressive observations of a dynamic scene and tracking influences within a dynamic network.
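As a rough illustration of the tracking setup (this is a simplified sketch, not Willett's algorithm), dynamic mirror descent with a Euclidean mirror map reduces to online gradient descent on the instantaneous loss, interleaved with a dynamics model; the scalar example below uses an identity dynamics model and invented parameters:

```python
import numpy as np

def dynamic_tracking(y_stream, step=0.5):
    """Simplified sketch: online gradient descent on the instantaneous
    loss l_t(theta) = 0.5 * (theta - y_t)^2, with a trivial (identity)
    dynamics model applied between rounds."""
    theta, estimates = 0.0, []
    for y in y_stream:
        theta = theta - step * (theta - y)   # mirror-descent step (Euclidean)
        # a nontrivial dynamics model Phi(theta) would be applied here
        estimates.append(theta)
    return np.array(estimates)

rng = np.random.default_rng(1)
T = 500
truth = np.sin(2 * np.pi * np.arange(T) / 200)   # slowly drifting target
y = truth + 0.1 * rng.standard_normal(T)         # noisy observation stream
est = dynamic_tracking(y)
mse = np.mean((est - truth) ** 2)
print(mse)                                       # average tracking error
```

The point of the dynamic regret analysis in the talk is precisely that bounds for such procedures can be stated against a drifting comparator like `truth`, rather than against a single fixed parameter.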

**Rebecca Willett** is an associate professor in the Electrical and Computer Engineering Department at the University of Wisconsin-Madison. She completed her PhD in Electrical and Computer Engineering at Rice University in 2005 and was an assistant, then associate, professor of Electrical and Computer Engineering at Duke University from 2005 to 2013. Prof. Willett received the National Science Foundation CAREER Award in 2007, is a member of the DARPA Computer Science Study Group, and received an Air Force Office of Scientific Research Young Investigator Program award in 2010. Prof. Willett has also held visiting researcher positions at the Institute for Pure and Applied Mathematics at UCLA in 2004, the University of Wisconsin-Madison from 2003 to 2005, the French National Institute for Research in Computer Science and Control (INRIA) in 2003, and the Applied Science Research and Development Laboratory at GE Healthcare in 2002. Her research interests include network and imaging science with applications in medical imaging, neural coding, astronomy, and social networks. Additional information, including publications and software, is available online at http://willett.ece.wisc.edu.

### Paper Submission

Submit papers of at most 1 page in two-column IEEE format through the GlobalSIP website at http://www.ieeeglobalsip.org/Papers.asp. All papers (contributed and invited) will be presented as posters.

### Important Dates

- Paper Submission Deadline: June 15, 2013
- Review Results Announced: July 30, 2013
- Camera-Ready Papers Due: September 7, 2013