Industrial Engineering and Operations Research
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html
Upcoming Events

Wenpin Tang - Optimal Surviving Strategy for the Up the River Problem, Feb 5
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115023&date=2018-02-05
Abstract:

There are more and more people living on the planet, but the available resources are limited. An interesting question, then, is how to allocate limited resources so as to maximize welfare.

In this talk, I will present the "Up the River problem," a dynamic allocation model in a random environment with limited resources. The model was introduced by David Aldous in 2000, along with a couple of open problems. A unit drift is distributed among a finite collection of Brownian particles on R+, which are annihilated once they reach the origin. Starting K particles at x = 1, we prove Aldous' conjecture that the push-the-laggard strategy of distributing the drift asymptotically (as K → ∞) maximizes the total number of surviving particles, with approximately √(4/π)·K^(1/2) surviving particles. The solution relies on stochastic calculus and partial differential equations: the hydrodynamic limit of the particle density satisfies a two-phase partial differential equation, one phase with a fixed boundary and the other with a free boundary.
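As an illustration only (not code from the talk), the push-the-laggard strategy described above can be sketched with a crude Euler discretization: at each time step the entire unit drift goes to the lowest surviving particle. The function name, step size, and horizon are all illustrative choices.

```python
import random

def surviving_particles(k, dt=1e-2, horizon=1.0, seed=0):
    """Euler discretization of the push-the-laggard strategy: K Brownian
    particles start at x = 1; in each step the entire unit drift goes to
    the lowest surviving particle, and particles hitting 0 are removed.
    Returns the number of particles still alive at the horizon."""
    rng = random.Random(seed)
    particles = [1.0] * k
    sqdt = dt ** 0.5
    for _ in range(int(horizon / dt)):
        if not particles:
            break
        lag = min(range(len(particles)), key=particles.__getitem__)
        alive = []
        for i, x in enumerate(particles):
            # drift dt only for the laggard, plus a Brownian increment
            x += (dt if i == lag else 0.0) + sqdt * rng.gauss(0.0, 1.0)
            if x > 0.0:
                alive.append(x)
        particles = alive
    return len(particles)
```

For large K and a long horizon, the survivor count should be on the order of √K, in line with the √(4/π)·K^(1/2) asymptotics; a short finite-horizon run like this only gives a rough picture.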

At the end of the talk, I will discuss recent progress on a continuous-time information search model initiated by Ke and Villas-Boas. The techniques used to solve the "Up the River problem" can also be applied to these search models. The talk is based on joint work with Li-Cheng Tsai (Up the River problem), and with Tony Ke and J. Miguel Villas-Boas.

Bio: Wenpin Tang is an Assistant Professor in the Department of Mathematics at UCLA. He obtained his Ph.D. from the Department of Statistics at UC Berkeley, where his advisor was Jim Pitman. Before coming to Berkeley, he obtained an engineering diploma (Diplôme d'ingénieur) from Ecole Polytechnique, France.

Research: Tang's research interests include probability theory and its applications. He has worked on path embeddings in Brownian motion (with Jim Pitman) and path intersections of Brownian motion (with Steve Evans and Jim Pitman). He also solved a conjecture of David Aldous on the Up the River problem (with Li-Cheng Tsai).

More generally, he is interested in topics such as random combinatorial objects, stochastic differential equations, and interacting particle systems.

More information can be found on his website: http://www.math.ucla.edu/~wenpintang/

Georgina Hall - LP, SOCP, and Optimization-Free Approaches to Sum of Squares Optimization, Feb 12
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115024&date=2018-02-12
The problem of optimizing over the cone of nonnegative polynomials is a fundamental problem in computational mathematics, with applications to polynomial optimization, control, machine learning, game theory, and combinatorics, among others. A number of breakthrough papers in the early 2000s showed that this problem, long thought to be intractable, could be solved using sum of squares programming. This technique, however, has proved expensive for large-scale problems, as it involves solving large semidefinite programs (SDPs).

In the first part of this talk, we present two methods for approximately solving large-scale sum of squares programs that dispense altogether with semidefinite programming and involve only solving a sequence of linear or second-order cone programs generated in an adaptive fashion. In the second part of the talk, we focus on finding tight lower bounds on polynomial optimization problems (POPs), a fundamental task in this area that is most commonly handled through SDP-based sum of squares hierarchies (e.g., due to Lasserre and Parrilo). In contrast to previous approaches, we provide the first theoretical framework for efficiently constructing converging hierarchies of lower bounds on POPs whose computation requires no optimization, but simply the ability to multiply certain fixed polynomials together and check nonnegativity of the coefficients of the product.
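The last idea, certifying nonnegativity by multiplying by fixed polynomials and inspecting coefficients, can be shown in miniature. The sketch below is not the hierarchy from the talk; it is a classical Polya-style certificate for univariate polynomials on x ≥ 0, with invented function names: multiply p(x) by powers of (1 + x) until every coefficient of the product is nonnegative.

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def certify_nonneg_on_halfline(p, max_r=20):
    """Try to certify p(x) >= 0 for all x >= 0 by finding the smallest r
    such that (1 + x)^r * p(x) has only nonnegative coefficients: a
    Polya-style certificate that needs no optimization, only polynomial
    multiplication and a sign check. Sufficient, not necessary."""
    mult = [1.0]                            # (1 + x)^0
    for r in range(max_r + 1):
        if all(c >= 0 for c in poly_mul(mult, p)):
            return r
        mult = poly_mul(mult, [1.0, 1.0])   # multiply by (1 + x)
    return None                             # no certificate up to max_r
```

For example, p(x) = 1 − x + x² is nonnegative on [0, ∞) but has a negative coefficient; multiplying by (1 + x) gives 1 + x³, so r = 1 certifies it.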

Bio: Georgina Hall is a final-year graduate student and a Gordon Wu fellow in the Department of Operations Research and Financial Engineering at Princeton University, where she is advised by Professor Amir Ali Ahmadi. She was the valedictorian of Ecole Centrale, Paris, where she obtained a B.S. and an M.S. in 2011 and 2013, respectively. Her interests lie in convex relaxations of NP-hard problems, particularly those arising in polynomial optimization. Georgina is the recipient of the Médaille de l’Ecole Centrale from the French Académie des Sciences and the Princeton School of Engineering and Applied Sciences Award for Excellence. She was also chosen for the 2017 Rising Stars in EECS workshop at Stanford and the 2017 Young Researchers Workshop at Cornell University. Her paper “DC decomposition of nonconvex polynomials using algebraic techniques” received the 2016 INFORMS Computing Society Prize for best student paper. She has also received a number of teaching awards, including Princeton University's Engineering Council Teaching Award, the university-wide Excellence in Teaching Award of the Princeton Graduate School, and the 2017 Excellence in Teaching of Operations Research Award of the Institute of Industrial and Systems Engineers.

Eric Friedlander - Mean-Field Methods in Large Stochastic Networks, Feb 14
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115453&date=2018-02-14
Abstract: Analysis of large-scale communication networks (e.g., ad hoc wireless networks, cloud computing systems, server networks) is of great practical interest. The massive size of such networks frequently makes direct analysis intractable. Asymptotic approximations using hydrodynamic and diffusion scaling limits provide useful methods for approaching such problems. In this talk, we study two examples of such an analysis. In the first, the technique is applied to a model for load balancing in a large, cloud-based storage system. In the second, we present an asymptotic method for solving control problems in such networks.
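The abstract does not specify which load-balancing policy is studied. As a generic, hypothetical illustration of the kind of system mean-field methods target, the toy discrete-step caricature below simulates the classical power-of-d-choices policy; all names and parameters are invented for this sketch, and the fixed arrival probability is a simplification of the true continuous-time dynamics.

```python
import random

def simulate_power_of_d(n=100, d=2, lam=0.7, steps=20_000, seed=0):
    """Toy caricature of the power-of-d-choices policy: each arriving job
    samples d servers uniformly at random and joins the one with the
    shortest queue; departures remove one job from a uniformly chosen
    busy server. lam < 1 keeps the system stable."""
    rng = random.Random(seed)
    q = [0] * n
    for _ in range(steps):
        if rng.random() < lam / (lam + 1.0):  # arrival event
            chosen = min(rng.sample(range(n), d), key=q.__getitem__)
            q[chosen] += 1
        else:  # service completion at a random busy server
            busy = [i for i in range(n) if q[i] > 0]
            if busy:
                q[rng.choice(busy)] -= 1
    # empirical tail: fraction of servers with at least k jobs, k = 0..4
    return [sum(1 for x in q if x >= k) / n for k in range(5)]
```

Mean-field analysis of such systems shows that, as n → ∞, the empirical queue-length distribution concentrates around a deterministic limit, which is the kind of approximation the talk exploits.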

Bio: Eric Friedlander's research focuses on the modeling and analysis of large-scale systems arising in communication networks and biology. As more and more business is conducted online, massive cloud-based storage systems and marketplaces have created the need for models and methodology applicable at a scale that makes direct analysis impracticable. In his research, he studies these types of systems and develops tractable methods for approaching the problems that arise. In addition, he is interested in data assimilation methods for complex high-dimensional systems. Data assimilation is the science of incorporating data into complex deterministic dynamical models (normally described through a system of ODEs/PDEs). This process is often complicated by the massive size of such systems, and various methods have been developed to cope with this challenge.

Zeyu Zheng - Top-Down Statistical Modeling, Feb 16
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115492&date=2018-02-16
Abstract: In this talk, we will argue that data-driven service systems engineering should take a statistical perspective that is guided by the decisions and performance measures that are critical from a managerial standpoint. We further take the view that the statistical models will often be used as inputs to simulations that drive either capacity decisions or real-time decisions (such as dynamic staffing levels). We start by discussing Poisson arrival modeling in the context of systems in which time-of-day effects play a significant role. We will discuss several new statistical tools we have developed that significantly improve the quality of the performance predictions made by the simulation models. In the second part of the talk, we show that in dealing with high-intensity arrival streams (as in call center and ride-sharing contexts), the key statistical features of the traffic that must be captured for good performance prediction lie at much longer time scales than the inter-arrival times that are the usual focus of conventional statistical analysis. This observation is consistent with the extensive limit theory available for many-server systems. Our “top-down” approach focuses on data collected at these longer time scales and on building statistical models that capture the key data features at this scale. In particular, we will discuss the use of Poisson auto-regressive processes as a basic tool in such “top-down” modeling, and the statistical framework we are creating to build effective simulation-based decision tools from real-world data.
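A minimal sketch of the Poisson auto-regressive idea, assuming a simple INGARCH-type recursion λ_t = ω + α·N_{t−1} (the talk's actual model may differ; the parameter names are illustrative):

```python
import random

def poisson_ar_counts(omega, alpha, steps, seed=0):
    """Simulate an INGARCH(1,0)-style Poisson autoregression:
    N_t ~ Poisson(lambda_t) with lambda_t = omega + alpha * N_{t-1}.
    For 0 <= alpha < 1 the stationary mean is omega / (1 - alpha)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplicative method; adequate for moderate lam
        threshold, k, p = 2.718281828459045 ** (-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    counts, prev = [], 0
    for _ in range(steps):
        prev = poisson(omega + alpha * prev)
        counts.append(prev)
    return counts
```

Fitting ω and α to counts aggregated over long time bins (rather than to raw inter-arrival times) is in the spirit of the “top-down” modeling described above.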

Short Bio: Zeyu Zheng is a PhD candidate in the Department of Management Science and Engineering at Stanford University. His research lies at the interface of operations research, data science, and decision making. Zeyu has done research on simulation, data-driven decision making, stochastic modeling, machine learning, and over-the-counter markets. He holds a PhD minor in Statistics and an MA in Economics from Stanford University. Before coming to Stanford, he graduated from Peking University with a BS in Mathematics.

Weina Wang - Delay Bounds and Asymptotics in Cloud Computing Systems, Feb 21
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115025&date=2018-02-21
Abstract:
With the emergence of big-data technologies, cloud computing systems are growing rapidly in size and becoming more and more complex, making it costly to conduct experiments and simulations. Therefore, modeling computing systems and characterizing their performance analytically are more critical than ever in identifying bottlenecks, informing system design, and facilitating provisioning. In this talk, I will illustrate how we study delay performance in cloud computing systems from different modeling perspectives. First, I will focus on the delay of jobs that consist of multiple tasks, where the tasks can be processed in parallel on different servers and a job is completed only when all its tasks are completed. Such jobs with parallel tasks are prevalent in today’s cloud computing systems. While the delay of individual tasks has been extensively studied, job delay is not well understood, even though it is the metric of greatest interest to end users. In our work, we establish a stochastic upper bound on job delay using properties of associated random variables, and show its tightness in an asymptotic regime where the number of servers in the system and the number of tasks in a job both become large. I will also briefly summarize our results on delay characterization for data-processing tasks where data locality needs to be considered, and for data transfer in large-scale datacenter networks.
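To see why job delay behaves differently from task delay, here is a toy calculation assuming i.i.d. exponential task delays; this is a simplification for illustration only, whereas the work above allows dependence and bounds job delay via associated random variables.

```python
import random

def mean_job_delay(num_tasks, num_jobs=2000, seed=0):
    """A job with k parallel tasks finishes only when its slowest task
    does, so job delay is the maximum of the k task delays. With i.i.d.
    Exp(1) task delays the expected job delay equals the harmonic number
    H_k = 1 + 1/2 + ... + 1/k, which grows like log k, while the mean
    task delay stays at 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_jobs):
        total += max(rng.expovariate(1.0) for _ in range(num_tasks))
    return total / num_jobs
```

With 10 tasks per job the estimate comes out near H_10 ≈ 2.93, roughly triple the mean task delay, which is why bounding job delay requires more than per-task analysis.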

Bio:
Weina Wang is a joint postdoctoral research associate in the Coordinated Science Lab at the University of Illinois at Urbana-Champaign and in the School of ECEE at Arizona State University. She received her B.E. from Tsinghua University and her Ph.D. from Arizona State University, both in Electrical Engineering. Her research lies in the broad area of applied probability and stochastic systems, with applications in cloud computing, data centers, and privacy-preserving data analytics. Her dissertation received the Dean’s Dissertation Award in the Ira A. Fulton Schools of Engineering at Arizona State University in 2016. She received the Kenneth C. Sevcik Outstanding Student Paper Award at ACM SIGMETRICS 2016.

Barna Saha - Efficient Fine-Grained Algorithms, Feb 26
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115452&date=2018-02-26
Abstract: One of the greatest successes of computational complexity theory is the classification of countless fundamental computational problems into polynomial-time and NP-hard ones, two classes often referred to as tractable and intractable, respectively. However, this crude distinction of algorithmic efficiency is clearly insufficient when handling today's scale of data. We need a finer-grained design and analysis of algorithms that pinpoints the exact exponent of polynomial running time, and a better understanding of when a speed-up is not possible. Over the years, many polynomial-time approximation algorithms were devised as a way to bypass the NP-hardness of discrete optimization problems. This area has developed into a rich field producing many algorithmic ideas and has led to major advances in computational complexity. So far, however, a similar theory for problems with high polynomial running times, one that explains the trade-off between solution quality and running time, is largely lacking.

In this presentation, I will give an overview of the growing field of fine-grained algorithms and complexity, and my contributions to it. This will include fundamental problems such as edit distance computation, all-pairs shortest paths, parsing, and matrix multiplication, with applications ranging from genomics to statistical natural language processing, machine learning, and optimization. I will show how, as a natural byproduct of improved time complexity, one may design algorithms that are highly parallel, as well as streaming algorithms with sublinear space complexity. Finally, motivated by core machine learning applications, I will discuss alternative measures of efficiency that may be as relevant as time complexity.
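Edit distance, mentioned above, is a concrete example of the "exact exponent" question: the textbook dynamic program below runs in quadratic time, and fine-grained complexity results (conditional on the Strong Exponential Time Hypothesis) indicate that no strongly subquadratic exact algorithm is likely.

```python
def edit_distance(a, b):
    """Textbook O(len(a) * len(b)) dynamic program for edit distance,
    using two rolling rows to keep memory linear."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # distances from "" to prefixes of b
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                          # delete a[i-1]
                         cur[j - 1] + 1,                       # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1])) # substitute
        prev = cur
    return prev[n]
```

For example, edit_distance("kitten", "sitting") returns 3 (substitute k→s, substitute e→i, insert g).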

Bio: Barna Saha is currently an Assistant Professor in the College of Information & Computer Sciences at the University of Massachusetts Amherst. She is also a permanent member of the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) at Rutgers. Before joining UMass in 2014, she was a Research Scientist at AT&T Shannon Laboratories, New Jersey. She spent four wonderful years (2007-2011) at the University of Maryland, College Park, where she received her Ph.D. in Computer Science. In Fall 2015, she was at the University of California, Berkeley as a Visiting Scholar and a fellow of the Simons Institute. Her research interests include theoretical computer science, probabilistic methods and randomized algorithms, and large-scale data analytics. She is the recipient of an NSF CAREER Award (2017), a Google Faculty Award (2016), a Yahoo Academic Career Enhancement Award (2015), a Simons-Berkeley Research Fellowship (2015), an NSF CRII Award (2015), and a Dean's Dissertation Fellowship (2011). She received the best paper award at the Very Large Data Bases Conference (VLDB) 2009 for her work on probabilistic databases, and was a finalist for best paper at the IEEE Conference on Data Engineering (ICDE) 2012 for developing new techniques to handle low-quality data.