Industrial Engineering and Operations Research
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html
Upcoming Events
Hoda Bidkhori - Analyzing Process Flexibility Using Robust Optimization, Jan 26
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=114634&date=2018-01-26
Abstract: Process flexibility has been widely applied in many industries as a competitive strategy to improve responsiveness to demand uncertainty. The first part of the talk addresses the problem of managing process flexibility in a fairly general manufacturing system. In our model, each plant might have a different cost for adding flexibility or extra capacity. We model this problem as an adaptive optimization problem and discuss different approaches to solve it efficiently. <br />
In the second part of the talk, we discuss a distribution-free model to evaluate the performance of process flexibility structures when only the mean and partial expectation of the demand are known. We characterize the worst-case demand distribution under general concave objective functions and apply it to derive tight lower bounds on the performance of chaining structures in balanced systems. In the third part of the talk, we examine the worst-case performance of flexibility designs under supply and demand uncertainties, where supply uncertainty can take the form of either plant or arc disruptions.<br />
<br />
Bio: Dr. Hoda Bidkhori has been an Assistant Professor in the Department of Industrial Engineering at the University of Pittsburgh since Fall 2015. Prior to this, she worked as a Lecturer and as a Postdoctoral Researcher at the Massachusetts Institute of Technology. She holds a Ph.D. in Applied Mathematics from MIT. Her current research centers on data-driven decision-making, decision-making under uncertainty, and the development and implementation of computationally tractable data-driven solutions for problems arising in healthcare, transportation, and inventory management.
Laurent El Ghaoui - Lifted Neural Nets: Beyond The Grip Of Stochastic Gradients In Deep Learning, Jan 29
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=114846&date=2018-01-29
Abstract: We describe a novel family of models of multi-layer feedforward neural networks, where the activation functions are encoded via penalties in the training problem. The new framework allows for algorithms such as block-coordinate descent methods to be applied, in which each step is composed of simple (no hidden layer) supervised learning problems that are parallelizable across layers, or data points, or both. Although the training problem has many more variables than that of a standard network, preliminary experiments seem to indicate that the proposed models provide excellent initial guesses for standard networks, and could become competitive with state-of-the-art neural networks, both in terms of performance and speed. In addition, the lifted models provide avenues for interesting extensions such as network topology optimization, input matrix completion, and robustness against noisy inputs.
Wenpin Tang - Optimal Surviving Strategy For The Up The River Problem, Feb 5
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115023&date=2018-02-05
Abstract:<br />
<br />
As the number of people living on the planet grows while the available resources remain limited, an interesting question is how to allocate those limited resources to maximize our welfare.<br />
<br />
In this talk, I will present the "Up the River problem," a dynamic allocation model in a random environment with limited resources. The model was introduced by David Aldous in 2000, along with a couple of open problems. A unit drift is distributed among a finite collection of Brownian particles on R+, which are annihilated once they reach the origin. Starting K particles at x = 1, we prove Aldous' conjecture that the push-the-laggard strategy of distributing the drift asymptotically (as K → ∞) maximizes the total number of surviving particles, with approximately √(4/π) K^{1/2} survivors. The solution to the problem relies on stochastic calculus and partial differential equations: the hydrodynamic limit of the particle density satisfies a two-phase partial differential equation, one phase with a fixed boundary and the other with a free boundary.<br />
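As a rough illustration of the push-the-laggard strategy described above, here is a hedged Monte Carlo sketch using an Euler discretization; the time horizon, step size, and particle count are illustrative assumptions, not values from the talk:

```python
import numpy as np

def simulate_push_the_laggard(K=200, T=5.0, dt=1e-3, seed=0):
    """Euler-discretized sketch: K Brownian particles on R+ started at x = 1,
    absorbed at the origin; at each step the entire unit drift is given to the
    lowest surviving particle (the laggard)."""
    rng = np.random.default_rng(seed)
    x = np.ones(K)
    alive = np.ones(K, dtype=bool)
    for _ in range(int(T / dt)):
        if not alive.any():
            break
        idx = np.flatnonzero(alive)
        x[idx] += np.sqrt(dt) * rng.standard_normal(idx.size)  # Brownian steps
        laggard = idx[np.argmin(x[idx])]
        x[laggard] += dt  # push the laggard with the full unit drift
        alive[idx[x[idx] <= 0.0]] = False  # annihilate at the origin
    return int(alive.sum())

survivors = simulate_push_the_laggard()
print(survivors, "of 200 particles survive to the (finite) horizon T")
```

Note that the theorem concerns the long-time limit; this finite-horizon simulation only illustrates the dynamics.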
<br />
At the end of the talk, I will discuss recent progress on a continuous-time information search model initiated by Ke and Villas-Boas. The techniques used to solve the "Up the River problem" can also be applied to these search models. The talk is based on joint work with Li-Cheng Tsai (Up the River problem) and with Tony Ke and J. Miguel Villas-Boas (information search).<br />
<br />
Bio: Wenpin Tang is an Assistant Professor in the Department of Mathematics at UCLA. He obtained his Ph.D. from the Department of Statistics at UC Berkeley, where his advisor was Jim Pitman. Before coming to Berkeley, he obtained an engineering diploma (Diplôme d'ingénieur) from Ecole Polytechnique, France.<br />
<br />
Research: Tang's research interests include probability theory and its applications. He has worked on path embeddings in Brownian motion (with Jim Pitman) and path intersections of Brownian motion (with Steve Evans and Jim Pitman). He also resolved a conjecture of David Aldous on the Up the River problem (with Li-Cheng Tsai).<br />
<br />
More generally, he is interested in topics such as random combinatorial objects, stochastic differential equations, and interacting particle systems.<br />
<br />
More information can be found on his website here: http://www.math.ucla.edu/~wenpintang/
Georgina Hall - LP, SOCP, and optimization-free approaches to sum of squares optimization, Feb 12
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115024&date=2018-02-12
The problem of optimizing over the cone of nonnegative polynomials is a fundamental problem in computational mathematics, with applications to polynomial optimization, control, machine learning, game theory, and combinatorics, among others. A number of breakthrough papers in the early 2000s showed that this problem, long thought to be intractable, could be solved by using sum of squares programming. This technique, however, has proved expensive for large-scale problems, as it involves solving large semidefinite programs (SDPs).<br />
<br />
In the first part of this talk, we present two methods for approximately solving large-scale sum of squares programs that dispense altogether with semidefinite programming and only involve solving a sequence of linear or second-order cone programs generated in an adaptive fashion. In the second part of the talk, we focus on the problem of finding tight lower bounds on polynomial optimization problems (POPs), a fundamental task in this area that is most commonly handled through SDP-based sum of squares hierarchies (e.g., due to Lasserre and Parrilo). In contrast to previous approaches, we provide the first theoretical framework for efficiently constructing converging hierarchies of lower bounds on POPs whose computation requires no optimization, but simply the ability to multiply certain fixed polynomials together and check nonnegativity of the coefficients of the product.<br />
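To illustrate the flavor of a multiply-and-check-coefficients certificate, here is a toy univariate sketch based on the classical Pólya-type test: if (1+x)^r · p(x) has only nonnegative coefficients for some r, then p(x) ≥ 0 for all x ≥ 0. This is a simplified illustration, not the converging hierarchy from the talk:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def polya_certificate(p_coeffs, max_r=50):
    """Search for r such that (1+x)^r * p(x) has only nonnegative
    coefficients; any such r certifies p(x) >= 0 for all x >= 0.
    Coefficients are in ascending order (constant term first)."""
    q = np.asarray(p_coeffs, dtype=float)
    for r in range(max_r + 1):
        if np.all(q >= 0):
            return r
        q = P.polymul(q, [1.0, 1.0])  # multiply by (1 + x)
    return None  # no certificate found up to max_r

# p(x) = 1 - x + x^2 is positive for all x but has a negative coefficient;
# one multiplication by (1 + x) yields 1 + x^3, whose coefficients are nonnegative.
r = polya_certificate([1.0, -1.0, 1.0])
print("certified with r =", r)  # r = 1
```

The only computations involved are polynomial multiplication and a sign check on coefficients, which is the "optimization-free" character the abstract refers to.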
<br />
Bio: Georgina Hall is a final-year graduate student and a Gordon Wu fellow in the department of Operations Research and Financial Engineering at Princeton University, where she is advised by Professor Amir Ali Ahmadi. She was the valedictorian of Ecole Centrale, Paris, where she obtained a B.S. and an M.S. in 2011 and 2013, respectively. Her interests lie in convex relaxations of NP-hard problems, particularly those arising in polynomial optimization. Georgina is the recipient of the Médaille de l'Ecole Centrale from the French Académie des Sciences and the Princeton School of Engineering and Applied Sciences Award for Excellence. She was also chosen for the 2017 Rising Stars in EECS workshop at Stanford and the 2017 Young Researchers Workshop at Cornell University. Her paper "DC decomposition of nonconvex polynomials using algebraic techniques" is the recent recipient of the 2016 INFORMS Computing Society Prize for Best Student Paper. She has also received a number of teaching awards, including Princeton University's Engineering Council Teaching Award, the university-wide Excellence in Teaching Award of the Princeton Graduate School, and the 2017 Excellence in Teaching of Operations Research Award of the Institute for Industrial and Systems Engineers.
Eric Friedlander - Mean-Field Methods In Large Stochastic Networks, Feb 14
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115453&date=2018-02-14
Abstract: Analysis of large-scale communication networks (e.g., ad hoc wireless networks, cloud computing systems, and server networks) is of great practical interest. The massive size of such networks frequently makes direct analysis intractable. Asymptotic approximations using hydrodynamic and diffusion scaling limits provide useful methods for approaching such problems. In this talk, we study two examples of such an analysis. In the first, the technique is applied to a model for load balancing in a large, cloud-based storage system. In the second, we present an asymptotic method for solving control problems in such networks.<br />
<br />
Bio: Eric Friedlander's research focuses on the modeling and analysis of large-scale systems arising in communication networks and biological systems. As more and more business is conducted online, massive cloud-based storage systems and marketplaces have created the need for models and methodology that work at scales where direct analysis is impracticable. In his research, he studies these types of systems and develops tractable methods for approaching the problems that arise. In addition, he is interested in data assimilation methods for complex high-dimensional systems. Data assimilation is the science of incorporating data into complex deterministic dynamical models (normally described through a system of ODEs/PDEs). This process is often complicated by the massive size of such systems, and various methods have been developed to cope with this challenge.
Zeyu Zheng - Top-Down Statistical Modeling, Feb 16
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115492&date=2018-02-16
Abstract: In this talk, we will argue that data-driven service systems engineering should take a statistical perspective that is guided by the decisions and performance measures that are critical from a managerial perspective. We further take the view that the statistical models will often be used as inputs to simulations that will be used to drive either capacity decisions or real-time decisions (such as dynamic staffing levels). We start by discussing Poisson arrival modeling in the context of systems in which time-of-day effects play a significant role. We will discuss several new statistical tools that we have developed that significantly improve the quality of the performance predictions made by the simulation models. In the second part of our talk, we show that in dealing with high-intensity arrival streams (such as in the call center and the ride-sharing contexts), the key statistical features of the traffic that must be captured for good performance prediction lie at much longer time scales than the inter-arrival times that are the usual focus of conventional statistical analysis for such problems. This observation is consistent with the extensive limit theory available for many-server systems. Our “top-down” approach focuses on data collected at these longer time scales, and on building statistical models that capture the key data features at this scale. In particular, we will discuss the use of Poisson auto-regressive processes as a basic tool in such “top-down” modeling, and the statistical framework we are creating to build effective simulation-based decision tools based on real-world data.<br />
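As a minimal sketch of the kind of count model mentioned above, the following simulates an INGARCH(1,1)-style Poisson autoregression, in which the conditional arrival rate feeds back on past counts so that busy periods persist across longer time scales; all parameter values here are illustrative assumptions:

```python
import numpy as np

def simulate_poisson_ar(n=500, omega=2.0, alpha=0.4, beta=0.3, seed=0):
    """Simulate an INGARCH(1,1)-style Poisson autoregression: counts Y_t are
    Poisson with conditional mean lam_t = omega + alpha*Y_{t-1} + beta*lam_{t-1},
    so high-count periods persist over time (over-dispersed, autocorrelated traffic)."""
    rng = np.random.default_rng(seed)
    lam = omega / (1.0 - alpha - beta)  # start at the stationary mean
    counts = np.empty(n, dtype=int)
    for t in range(n):
        counts[t] = rng.poisson(lam)
        lam = omega + alpha * counts[t] + beta * lam  # feedback update
    return counts

y = simulate_poisson_ar()
print("mean count per period:", y.mean())
```

Requiring alpha + beta < 1 keeps the process stationary; the resulting counts are more variable than an ordinary Poisson process with the same mean, which is the feature such top-down models aim to capture.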
<br />
Short Bio: Zeyu Zheng is a PhD candidate in the Department of Management Science and Engineering at Stanford University. His research lies at the interface of operations research, data science, and decision making. Zeyu has done research on simulation, data-driven decision making, stochastic modeling, machine learning, and over-the-counter markets, and he holds a PhD minor in Statistics and an MA in Economics from Stanford University. Before coming to Stanford, he graduated from Peking University with a BS in Mathematics.
Weina Wang - Delay Bounds And Asymptotics In Cloud Computing Systems, Feb 21
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115025&date=2018-02-21
Abstract<br />
With the emergence of big-data technologies, cloud computing systems are growing rapidly in size and becoming more and more complex, making it costly to conduct experiments and simulations. Therefore, modeling computing systems and characterizing their performance analytically are more critical than ever in identifying bottlenecks, informing system design, and facilitating provisioning. In this talk, I will illustrate how we study the delay performance in cloud computing systems from different modeling perspectives. First, I will focus on the delay of jobs that consist of multiple tasks, where the tasks can be processed in parallel on different servers, and a job is completed only when all its tasks are completed. Such jobs with parallel tasks are prevalent in today’s cloud computing systems. While the delay of individual tasks has been extensively studied, job delay has not been well-understood, even though job delay is the most important metric of interest to end users. In our work, we establish a stochastic upper bound on job delay using properties of associated random variables, and show its tightness in an asymptotic regime where the number of servers in the system and the number of tasks in a job both become large. After this, I will also briefly summarize our results on delay characterization for data-processing tasks where the locality of data needs to be considered, and for data transfer in large-scale datacenter networks.<br />
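The job-delay setup above (a job completes only when all its tasks do) can be illustrated with a small Monte Carlo sketch. The exponential task-delay model and every parameter below are hypothetical, chosen only to show the comparison between correlated task delays and their independent copies that underlies the association-based bound:

```python
import numpy as np

def job_delay_comparison(num_servers=50, tasks_per_job=20, jobs=20000, seed=0):
    """Monte Carlo sketch: a job's delay is the max of its tasks' delays.
    Tasks that land on the same server share that server's load, so their
    delays are positively associated; we compare against the max of fully
    independent copies of the task delays."""
    rng = np.random.default_rng(seed)
    # Hypothetical model: each task's delay equals its server's load (Exp(1)),
    # drawn once per job, so co-located tasks have identical (correlated) delays.
    loads = rng.exponential(1.0, size=(jobs, num_servers))
    assign = rng.integers(0, num_servers, size=(jobs, tasks_per_job))
    actual = np.take_along_axis(loads, assign, axis=1).max(axis=1)
    # Independent-copy comparison: a fresh delay for every task.
    indep = rng.exponential(1.0, size=(jobs, tasks_per_job)).max(axis=1)
    return actual.mean(), indep.mean()

actual_mean, indep_mean = job_delay_comparison()
print(f"mean job delay: {actual_mean:.3f}   independent-copy bound: {indep_mean:.3f}")
```

Positive association makes the true job delay stochastically smaller than the max of independent copies, so the independent-copy quantity serves as an upper bound, in the spirit (though not the detail) of the result described in the talk.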
<br />
Bio<br />
Weina Wang is a joint postdoctoral research associate in the Coordinated Science Lab at the University of Illinois at Urbana-Champaign, and in the School of ECEE at Arizona State University. She received her B.E. from Tsinghua University and her Ph.D. from Arizona State University, both in Electrical Engineering. Her research lies in the broad area of applied probability and stochastic systems, with applications in cloud computing, data centers, and privacy-preserving data analytics. Her dissertation received the Dean’s Dissertation Award in the Ira A. Fulton Schools of Engineering at Arizona State University in 2016. She received the Kenneth C. Sevcik Outstanding Student Paper Award at ACM SIGMETRICS 2016.
Barna Saha - Efficient Fine-Grained Algorithms, Feb 26
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115452&date=2018-02-26
Abstract: One of the greatest successes of computational complexity theory is the classification of countless fundamental computational problems into polynomial-time and NP-hard ones, two classes that are often referred to as tractable and intractable, respectively. However, this crude distinction of algorithmic efficiency is clearly insufficient when handling today's large scale of data. We need a finer-grained design and analysis of algorithms that pinpoints the exact exponent of the polynomial running time, and a better understanding of when a speed-up is not possible. Over the years, many polynomial-time approximation algorithms have been devised as an approach to bypass the NP-hardness obstacle of many discrete optimization problems. This area has developed into a rich field producing many algorithmic ideas and has led to major advances in computational complexity. So far, however, a similar theory for problems with high polynomial running times, one that explains the trade-off between quality and running time, is largely missing.<br />
<br />
In this presentation, I will give an overview of the newly growing field of fine-grained algorithms and complexity, and my contributions therein. This will include fundamental problems such as edit distance computation, all-pairs shortest paths, parsing, and matrix multiplication. These problems have applications ranging from genomics to statistical natural language processing, machine learning, and optimization. I will show how, as a natural byproduct of improved time complexity, one can design algorithms that are highly parallel, as well as streaming algorithms with sublinear space complexity. Finally, motivated by core machine learning applications, I will discuss alternative measures of efficiency that may be equally relevant as time complexity.<br />
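For concreteness, the classic quadratic-time dynamic program for edit distance, one of the problems mentioned above, looks like this (the standard textbook algorithm, not code from the talk):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(n*m) dynamic program for Levenshtein edit distance -- the
    quadratic algorithm whose running time fine-grained hardness results
    show cannot be improved to strongly subquadratic unless SETH fails."""
    n, m = len(a), len(b)
    prev = list(range(m + 1))  # distances from the empty prefix of a
    for i in range(1, n + 1):
        cur = [i] + [0] * m
        for j in range(1, m + 1):
            cur[j] = min(prev[j] + 1,                            # delete a[i-1]
                         cur[j - 1] + 1,                         # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))   # substitute
        prev = cur
    return prev[m]

print(edit_distance("kitten", "sitting"))  # 3
```

The DP keeps only two rows, so it uses O(m) space; the fine-grained question is whether the n·m time term itself can be beaten.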
<br />
Bio: Barna Saha is currently an Assistant Professor in the College of Information & Computer Science at the University of Massachusetts Amherst. She is also a Permanent Member of the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) at Rutgers. Before joining UMass in 2014, she was a Research Scientist at AT&T Shannon Laboratories, New Jersey. She spent four wonderful years (2007-2011) at the University of Maryland, College Park, where she received her Ph.D. in Computer Science. In Fall 2015, she was at the University of California, Berkeley as a Visiting Scholar and a fellow of the Simons Institute. Her research interests include theoretical computer science, the probabilistic method and randomized algorithms, and large-scale data analytics. She is the recipient of an NSF CAREER award (2017), a Google Faculty Award (2016), a Yahoo Academic Career Enhancement Award (2015), a Simons-Berkeley Research Fellowship (2015), an NSF CRII Award (2015), and a Dean's Dissertation Fellowship (2011). She received the best paper award at the Very Large Data Bases Conference (VLDB) 2009 for her work on probabilistic databases and was a finalist for the best paper award at the IEEE Conference on Data Engineering (ICDE) 2012 for developing new techniques to handle low-quality data.
Agostino Capponi - Columbia University, Apr 2
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115026&date=2018-04-02
Agostino Capponi joined Columbia University's IEOR Department in August 2014, where he is also a member of the Institute for Data Science and Engineering.<br />
<br />
His main research interests are in the area of networks, with a special focus on systemic risk, contagion, and control. In the context of financial networks, his research contributes to a better understanding of risk management practices and to assessing the impact of regulatory policies aimed at controlling financial markets. He has been awarded a grant from the Institute for New Economic Thinking for his research on dynamic contagion mechanisms.<br />
<br />
His research has been published in top-tier journals in Operations Research, Mathematical Finance, and Financial Economics, including Operations Research, Mathematics of Operations Research, Management Science, Review of Asset Pricing Studies, and Mathematical Finance. His work has also appeared in leading practitioner journals and invited book chapters. Agostino is a frequently invited speaker at major conferences in the area of systemic risk. He has ongoing collaborations with several governmental institutions tasked with the analysis of financial network data, in particular the US Commodity Futures Trading Commission and the Office of Financial Research. Agostino holds a world patent for a target tracking methodology in military networks.<br />
<br />
Agostino received his Master's and Ph.D. degrees in Computer Science and in Applied and Computational Mathematics from the California Institute of Technology, in 2006 and 2009 respectively.
Robots on the Edge: Intelligent Machines, Industry 4.0 and Fog Robotics, Apr 5
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=116353&date=2018-04-05
Please join us for the CITRIS Silicon Valley Forum, a new monthly series from CITRIS and the Banatao Institute. Our second panel of the Spring 2018 series invites Ken Goldberg, Professor of Industrial Engineering and Operations Research, and Juan Aparicio, Head of the Advanced Manufacturing Automation research group at Siemens, to discuss Robots on the Edge: Intelligent Machines, Industry 4.0, and Fog Robotics on April 5, 2018, from 11:30am to 1:30pm. The talk will be hosted at the UC Santa Cruz Silicon Valley campus in Santa Clara, CA. Lunch will be provided.
Rahul Jain - Reinforcement Learning without Reinforcement, Apr 16
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=116947&date=2018-04-16
Abstract: Reinforcement Learning (RL) is concerned with solving sequential decision-making problems in the presence of uncertainty. RL really comprises two problems. The first is the "Bellman problem": finding the optimal policy given the model, which may involve large state spaces. Various approximate dynamic programming and RL schemes have been developed, but they either come with no guarantees, are not universal, or are rather slow. In fact, most RL algorithms have become synonymous with stochastic approximation schemes that are known to be rather slow. The problem is even more difficult for MDPs with continuous state (and action) spaces. We present a class of RL algorithms for the continuous-state-space problem based on "empirical" ideas, which are simple, effective, and yet universal, with probabilistic guarantees. The idea combines randomized kernel-based function fitting with "empirical" updates. The key is a "probabilistic contraction analysis" method we have developed for the analysis of stochastic iterative algorithms, wherein we show convergence to a probabilistic fixed point of a sequence of random operators via a stochastic dominance argument.<br />
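A heavily simplified finite-state sketch of the "empirical" idea is given below; the talk concerns continuous state spaces with kernel-based function fitting, and this toy MDP and every parameter here are assumptions for illustration only:

```python
import numpy as np

def empirical_value_iteration(P, R, gamma=0.9, samples=100, iters=200, seed=0):
    """Toy finite-state sketch of 'empirical' value iteration: the Bellman
    expectation over next states is replaced by an average over sampled
    transitions, so each iteration applies a random operator of the kind
    the probabilistic-contraction analysis covers."""
    rng = np.random.default_rng(seed)
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                nxt = rng.choice(S, size=samples, p=P[s, a])  # sampled transitions
                Q[s, a] = R[s, a] + gamma * V[nxt].mean()     # empirical Bellman backup
        V = Q.max(axis=1)
    return V

# Hypothetical 2-state, 2-action MDP, for illustration only.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V = empirical_value_iteration(P, R)
print("empirical value estimates:", V)
```

The iterate converges only to a probabilistic fixed point (it keeps fluctuating with the sampling noise), which is exactly the behavior the stochastic dominance argument is designed to control.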
<br />
The second RL problem is the "online learning (or Lai-Robbins) problem," which arises when the model itself is unknown. We propose a simple posterior-sampling-based regret-minimization reinforcement learning algorithm for MDPs. It achieves O(√T) regret, which is order-optimal. It not only optimally manages the "exploration versus exploitation" tradeoff but also obviates the need for expensive exploration computations. The algorithm differs from classical adaptive control in its focus on non-asymptotic regret optimality as opposed to asymptotic stability. This appears to resolve a long-standing open problem in Reinforcement Learning.<br />
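The posterior-sampling principle can be sketched in its simplest setting, a Bernoulli multi-armed bandit (a special case for illustration, not the MDP algorithm from the talk):

```python
import numpy as np

def thompson_bernoulli(true_means, horizon=5000, seed=0):
    """Posterior (Thompson) sampling on a Bernoulli bandit: each round,
    sample a mean from every arm's Beta posterior and pull the argmax.
    The posterior itself drives exploration -- no explicit bonus term."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    wins = np.ones(k)    # Beta(1, 1) priors
    losses = np.ones(k)
    pulls = np.zeros(k, dtype=int)
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(wins, losses)))  # posterior sample per arm
        reward = rng.random() < true_means[arm]       # Bernoulli reward
        wins[arm] += reward
        losses[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bernoulli([0.3, 0.5, 0.7])
print("pull counts per arm:", pulls)  # the 0.7 arm should dominate
```

As the posterior for a suboptimal arm concentrates below the best arm's, it is sampled as the argmax less and less often, which is how posterior sampling manages exploration versus exploitation without extra computation.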
<br />
Biography: Rahul Jain is the K. C. Dahlberg Early Career Chair and Associate Professor of Electrical Engineering, Computer Science* and ISE* (*by courtesy) at the University of Southern California (USC). He received a B.Tech from IIT Kanpur, and an MA in Statistics and a PhD in EECS from the University of California, Berkeley. Prior to joining USC, he was at the IBM T. J. Watson Research Center, Yorktown Heights, NY. He has received numerous awards, including the NSF CAREER award, the ONR Young Investigator award, an IBM Faculty award, and the James H. Zumberge Faculty Research and Innovation Award, and he is currently a US Fulbright Specialist Scholar. His interests span reinforcement learning, stochastic control, statistical learning, stochastic networks, and game theory, with applications to power systems and healthcare. The talk is based on work with a number of outstanding students and postdocs who are now faculty members themselves at top places.
Avraham Shtub - Technion, Apr 30
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115031&date=2018-04-30
Professor Avraham Shtub holds the Stephen and Sharon Seiden Chair in Project Management. He has a B.Sc. in Electrical Engineering from the Technion - Israel Institute of Technology (1974), an MBA from Tel Aviv University (1978), and a Ph.D. in Management Science and Industrial Engineering from the University of Washington (1982). He is a certified Project Management Professional (PMP) and a member of the Project Management Institute (PMI-USA). <br />
<br />
Professor Shtub is the recipient of the Institute of Industrial Engineers' 1995 "Book of the Year Award" for his book "Project Management: Engineering, Technology and Implementation" (co-authored with Jonathan Bard and Shlomo Globerson), Prentice Hall, 1994. He is the recipient of the Production and Operations Management Society's Wick Skinner Teaching Innovation Achievements Award for his book "Enterprise Resource Planning (ERP): The Dynamics of Operations Management". His books on project management have been published in English, Hebrew, Greek, and Chinese. He is the recipient of the 2008 Project Management Institute Professional Development Product of the Year Award for the training simulator "Project Team Builder - PTB". Professor Shtub was a Department Editor for IIE Transactions, and he has served on the editorial boards of the Project Management Journal, the International Journal of Project Management, IIE Transactions, and the International Journal of Production Research. <br />
<br />
He was a faculty member of the Department of Industrial Engineering at Tel Aviv University from 1984 to 1998, where he also served as chairman of the department (1993-1996). He joined the Technion in 1998 and has served as Associate Dean and head of the MBA program. He has been a consultant to industry in the areas of project management, training by simulators, and the design of production-operation systems. He has been invited to speak at special seminars on project management and operations in Europe, the Far East, North America, South America, and Australia. Professor Shtub has visited and taught at Vanderbilt University, the University of Pennsylvania, the Korean Institute of Technology, Bilkent University in Turkey, the University of Otago in New Zealand, Yale University, the Universidad Politécnica de Valencia, and the University of Bergamo in Italy.