Industrial Engineering and Operations Research
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html
Upcoming Events

Bin Yu — Three principles of data science: predictability, stability, and computability, Aug 28
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=110934&date=2017-08-28
In this talk, I'd like to discuss the importance of, and connections among, the three principles of data science in the title as they relate to data-driven decisions. With prediction as its central task and computation at its core, machine learning has enabled wide-ranging data-driven successes. Prediction is a useful way to check with reality. Good prediction implicitly assumes stability between past and future. Stability (relative to data and model perturbations) is also a minimum requirement for interpretability and reproducibility of data-driven results (cf. Yu, 2013). It is closely related to uncertainty assessment. Obviously, neither the prediction nor the stability principle can be employed without feasible computational algorithms, hence the importance of computability. <br />
<br />
The three principles will be demonstrated in the context of two neuroscience projects and through analytical connections. In particular, the first project adds stability to predictive modeling used for reconstruction of movies from fMRI brain signals to gain interpretability of the predictive model. The second project uses predictive transfer learning that combines AlexNet, GoogLeNet, and VGG with single V4 neuron data for state-of-the-art prediction performance. It provides a stable functional characterization of neurons in the challenging primate visual cortex V4 via (manifold) deep dream images derived from the predictive models. Our V4 results lend support, to a certain extent, to the resemblance of these CNNs to a primate brain.<br />
<br />
Bin Yu<br />
Departments of Statistics and EECS, UC Berkeley<br />
statistics.berkeley.edu/~binyu

Yong Zeng — NSF Funding Opportunities Related to Data Science, Sep 11
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=111268&date=2017-09-11
Abstract: This presentation will provide an overview of the funding opportunities related to data science at the National Science Foundation. These include opportunities in the directorates of Computer & Information Science & Engineering (CISE), Engineering (ENG), and Mathematical and Physical Sciences (MPS), with a focus on those supported by the Division of Mathematical Sciences (DMS) in MPS.<br />
<br />
Bio: Yong Zeng serves as a program director in the Division of Mathematical Sciences at the National Science Foundation. He is also a professor in the Department of Mathematics and Statistics at the University of Missouri - Kansas City.

Matteo Basei - The coordination of centralised and distributed generation, Sep 18
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=111306&date=2017-09-18
Abstract. This paper analyses the interaction between centralised carbon-emissive technologies and distributed intermittent non-emissive technologies. In our model, a representative consumer can satisfy her electricity demand by investing in distributed generation (solar panels) and by buying power from a centralised firm at a price the firm sets. Distributed generation is intermittent and induces an externality cost to the consumer. The firm provides non-random electricity generation subject to a carbon tax and to transmission costs. The objective of the consumer is to satisfy her demand while minimising investment costs, payments to the firm, and intermittency costs. The objective of the firm is to satisfy the consumer's residual demand while minimising investment costs and demand deviation costs, and maximising the payments from the consumer. We formulate the investment decisions as McKean-Vlasov control problems with stochastic coefficients. We provide explicit, price model-free solutions to the optimal decision problems faced by each player, the solution of the Pareto optimum, and the Stackelberg equilibrium where the firm is the leader. Joint work with René Aïd (Paris Dauphine University) and Huyên Pham (Paris Diderot University).

Adam Elmachtoub - The value of opaque products, Sep 25
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=111270&date=2017-09-25
Abstract: A product is said to be opaque if one or more of its attributes are not revealed until after the product has been sold. Opaque products have historically been used in the travel industry where airline and hotel brands might be hidden to the customer, in exchange for a discount. More recently, online retailers have also used opaque products, where customers can sacrifice their choice of color for a better price. The value of opaque products stems from their ability to i) price discriminate and ii) balance inventory. In this talk, we provide a set of tools for pricing and managing inventory with opaque products. We also explicitly characterize and quantify the price discrimination and inventory balancing effects. Finally, we describe how opaque products can be used as an alternative for discriminatory and dynamic pricing strategies. This is based on joint works with Michael Hamilton (Columbia), Yehua Wei (Boston College), and Yeqing Zhou (Columbia). <br />
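The inventory-balancing effect described above can be illustrated with a toy simulation (the stock levels, demand process, and assignment policy below are hypothetical illustrations, not the speaker's model): opaque customers accept the retailer's choice of color, so the retailer can always assign them the overstocked color.

```python
import random

def simulate(opaque_fraction, n_customers=1000, stock=(600, 400), seed=0):
    """Toy two-color retailer: opaque customers are assigned whichever
    color has more remaining stock, which balances inventory."""
    rng = random.Random(seed)
    s = list(stock)
    for _ in range(n_customers):
        if sum(s) == 0:
            break
        if rng.random() < opaque_fraction:
            i = 0 if s[0] >= s[1] else 1      # opaque sale: retailer's choice
        else:
            i = rng.choice([0, 1])            # transparent sale: customer's choice
            if s[i] == 0:
                continue                      # lost sale: preferred color gone
        if s[i] > 0:
            s[i] -= 1
    return s

print(simulate(0.0))   # no opaque sales: the initial imbalance persists
print(simulate(0.5))   # opaque sales drain the overstocked color first
```

With no opaque sales the popular color sells out while the other is left over; directing opaque demand to the larger stock keeps the two levels close together.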
<br />
Bio: Adam Elmachtoub is an Assistant Professor of Industrial Engineering and Operations Research at Columbia University, where he is also a member of the Data Science Institute. In 2014-2015, he spent one year at the IBM T.J. Watson Research Center working in the area of Smarter Commerce. His research currently focuses on designing new approaches for supply chain and revenue management, especially where the two areas collide. More broadly, he is also interested in leveraging data to make informed decisions in industries such as retail, logistics, and travel. He previously received his B.S. degree from Cornell in 2009, and his Ph.D. from MIT in 2014. In 2016, he received an IBM Faculty Award and was named to the Forbes 30 Under 30 in Science.

Nick Sahinidis — ALAMO: Machine learning from data and first principles, Oct 2
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=111022&date=2017-10-02
We have developed the ALAMO methodology with the aim of producing a tool capable of using data to learn algebraic models that are accurate and as simple as possible. ALAMO relies on (a) integer nonlinear optimization to build low-complexity models from input-output data, (b) derivative-free optimization to collect additional data points that can be used to improve tentative models, and (c) global optimization to enforce physical constraints on the mathematical structure of the model. We present computational results and comparisons between ALAMO and a variety of learning techniques, including Latin hypercube sampling, simple least-squares regression, and the lasso. We also describe results from applications in CO2 capture that motivated the development of ALAMO.<br />
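The "low-complexity models from input-output data" idea can be sketched in a few lines. The data, basis set, and exhaustive subset search below are hypothetical stand-ins (ALAMO itself uses integer nonlinear optimization, not brute force): fit every small subset of candidate basis functions by least squares and keep the best one.

```python
import itertools
import numpy as np

# Synthetic input-output data: y = 3 x^2 + 2/x plus small noise.
rng = np.random.default_rng(0)
x = rng.uniform(1, 2, size=40)
y = 3.0 * x**2 + 2.0 / x + rng.normal(0, 0.01, size=40)

# Candidate basis functions (a hypothetical library).
basis = {"1": np.ones_like(x), "x": x, "x^2": x**2, "1/x": 1 / x,
         "exp(x)": np.exp(x), "log(x)": np.log(x)}

def fit(names):
    """Least-squares fit on a subset of basis functions; return (RSS, coefs)."""
    A = np.column_stack([basis[n] for n in names])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((A @ coef - y) ** 2))
    return rss, coef

# Exhaustive search over all 1- and 2-term models (stand-in for the
# integer-programming step that picks the simplest accurate model).
best = min(
    ((rss, names, coef)
     for k in (1, 2)
     for names in itertools.combinations(basis, k)
     for rss, coef in [fit(names)]),
    key=lambda t: t[0],
)
print(best[1], np.round(best[2], 2))  # should recover the x^2 and 1/x terms
```

The search balances accuracy against complexity by restricting the subset size, mirroring ALAMO's goal of models that are "accurate and as simple as possible."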
<br />
Nick Sahinidis<br />
Department of Chemical Engineering<br />
Carnegie Mellon University<br />
http://archimedes.cheme.cmu.edu<br />
Sahinidis@cmu.edu

Peter Bartlett - Representation, optimization and generalization in deep learning, Oct 9
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=112266&date=2017-10-09
Deep neural networks have improved state-of-the-art performance for prediction problems across an impressive range of application areas, and they have become a central ingredient in AI systems. This talk considers factors that affect their performance, describing some recent results in two directions. First, we investigate the impact of depth on representation and optimization properties of these networks. We focus on deep residual networks, which have been widely adopted for computer vision applications because they exhibit fast training, even for very deep networks. We show that as the depth of these networks increases, they are able to represent a smooth invertible map with a simpler representation at each layer, and that this implies a desirable property of the functional optimization landscape that arises from regression with deep function compositions: stationary points are global optima. Second, we consider the generalization behavior of deep networks, that is, how their performance on training data compares to predictive accuracy. In particular, we aim to understand how to measure the complexity of functions computed by these networks. For multiclass classification problems, we present a margin-based generalization bound that scales with a certain margin-normalized "spectral complexity," involving the product of the spectral norms of the weight matrices in the network. We show how the bound gives insight into the observed performance of these networks in practical problems.<br />
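The "spectral complexity" term in the margin bound involves, among other factors, the product of the spectral norms of the layers' weight matrices. A minimal sketch of computing that quantity for a stack of (randomly generated, purely illustrative) weight matrices:

```python
import numpy as np

# Three hypothetical layer weight matrices (128 -> 64 -> 64 -> 10).
rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 128)),
          rng.normal(size=(64, 64)),
          rng.normal(size=(10, 64))]

# Product of spectral norms (largest singular value of each layer).
spectral_product = 1.0
for W in layers:
    spectral_product *= np.linalg.norm(W, ord=2)

print(f"product of spectral norms: {spectral_product:.3e}")
```

One reason this product is a natural complexity measure: rescaling any single layer by a constant c rescales the product by |c|, so normalizing classification margins by it makes networks trained at different weight scales comparable.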
<br />
Joint work with Steve Evans and Phil Long, and with Matus Telgarsky and Dylan Foster.

Philip Protter - Issues of Incomplete Markets, Oct 16
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=112204&date=2017-10-16
Abstract: In a complete market, there is a unique risk neutral measure, prices are unique, and all contingent claims can (at least theoretically) be perfectly hedged. In an incomplete market, in contrast, there are infinitely many risk neutral measures, a continuum of “fair” prices, and contingent claims in general cannot be perfectly hedged, even theoretically. Unfortunately, there seems to be plenty of evidence that actual markets are incomplete.<br />
<br />
We are interested in trying to understand this a priori confusing situation. To make things concrete, we address the following question: Suppose a sequence of incomplete markets “converges” to a complete market in an appropriate sense (to be defined), do the major objects also converge? Mostly, this is false: the ranges of option prices do not converge, for example. We work out some simple examples that illustrate some of the issues, and we indicate when one might have some kind of reasonable convergence of the markets, and what such a convergence might be.<br />
<br />
The talk is based on joint work with Jean Jacod. <br />
<br />
Light refreshments will be served.

Bob Oliver — Fishy Predictions or Fish Stew, Nov 6
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=112999&date=2017-11-06
Because of current federal laws on endangered fish species, water exports to California aqueducts in the San Joaquin Valley and Southern California are restricted by a combination of low Delta Smelt counts and densities in the Bay Delta and judgments by experts. This seminar suggests some ways in which Bayes’ Factors and combinations of forests of Information Odds Scores can help us improve our design of models that analyze large volumes of messy data, develop insights, and better understand the major (“causal”) factors that are relevant in predicting future growth or decline of the species. Many different agencies, objectives, and prior beliefs are involved.<br />
<br />
Bob Oliver is professor emeritus in the Department of Industrial Engineering & Operations Research at UC Berkeley.

Fatma Kilinc-Karzan — Online First-Order Framework for Robust Convex Optimization, Nov 20
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=113253&date=2017-11-20
Robust optimization (RO) has emerged as one of the leading paradigms for efficiently modeling parameter uncertainty. Recent connections between RO and problems in statistics and machine learning demand solving RO problems at ever larger scales. However, the traditional approaches for solving RO formulations, based on building and solving robust counterparts or on iterative approaches utilizing nominal feasibility oracles, can be prohibitively expensive and thus significantly hinder the scalability of the RO paradigm. <br />
<br />
We present a general and flexible iterative framework for solving robust convex optimization problems that is built on a fully online first-order paradigm. In contrast to the existing literature, a key distinguishing feature of our approach is that it requires access only to first-order oracles for the original problem, which are remarkably cheaper than pessimization or nominal feasibility oracles, while maintaining the same convergence rates. This, in particular, makes our approach much more scalable and hence preferable in large-scale applications. In the case of strongly convex functions, we also establish a new improved iteration complexity, addressing an open question in the literature. Motivated by a robust portfolio optimization problem, we demonstrate our approach on robust quadratic programs with ellipsoidal uncertainty.<br />
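The flavor of the first-order, pessimization-free idea can be conveyed on a toy problem (this is a generic saddle-point sketch, not the speakers' algorithm): treat the robust problem min over x of max over u of f(x, u) as a game, and run cheap gradient steps for both players instead of solving the inner maximization (pessimization) exactly at each iteration.

```python
# Toy convex-concave robust problem:  min_x max_u [ x*u - u**2 / 2 ].
# The exact inner maximum is x**2 / 2, so the robust optimum is x* = 0.
# We never solve the inner problem: both players take cheap gradient steps.
x, u, eta = 1.5, 0.3, 0.05
avg_x = 0.0
for t in range(1, 2001):
    g_x = u                     # first-order oracle: df/dx at (x, u)
    g_u = x - u                 # first-order oracle: df/du at (x, u)
    x, u = x - eta * g_x, u + eta * g_u   # simultaneous descent / ascent
    avg_x += (x - avg_x) / t    # averaged iterate, as in online methods

print(round(avg_x, 4), round(x, 6))  # both approach the robust optimum x* = 0
```

Each iteration costs two gradient evaluations, which is the point: no pessimization oracle (exact inner maximization over the uncertainty set) is ever invoked.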
<br />
If time permits, we will discuss connections and applicability of our online framework in the context of joint estimation and optimization problems.<br />
<br />
This is joint work with Nam Ho-Nguyen (Carnegie Mellon University, USA).<br />
<br />
Fatma is an Associate Professor of Operations Research at the Tepper School of Business, Carnegie Mellon University. She is also affiliated with the Algorithms, Combinatorics, and Optimization (ACO) PhD Program.

Agostino Capponi - Bail-Ins And Bail-Outs: Incentives, Connectivity, And Systemic Stability, Nov 27
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=112939&date=2017-11-27
We develop a framework to analyze the consequences of alternative designs for interbank networks, in which the failure of one bank may lead to the failure of others. Earlier work had suggested that, provided shocks were not too large (or too correlated), denser networks were preferable to more sparsely connected networks because they were better able to absorb shocks. With large shocks, however, especially when systems are non-conservative, the likelihood of costly bankruptcy cascades increases in dense networks. Governments, worried about the cost of bailouts, have proposed bail-ins, where creditors of defaulting banks voluntarily contribute to rescue their debtors.<br />
However, credibility and enforceability were again at issue. How can the government enforce private sector involvement when the private sector knows that in the absence of its cooperation, the government would nonetheless proceed with a bailout? While a few bail-ins have been observed in practice, it has not been understood until recently under which conditions they exist. <br />
<br />
We answer in the affirmative the key question of whether credible bail-in strategies exist, showing that this strongly depends on the network structure. We model the coordination of a rescue consortium between a social planner and the banks as a sequential game. We show that the equilibrium welfare losses are generically unique, and characterize the subgame perfect equilibrium of the game yielding the optimal intervention plan.<br />
<br />
Our findings reverse the presumptions in earlier works and promote sparsely connected networks over densely connected ones because (i) the no-intervention threat exhibits a phase transition and becomes more credible for large shocks and (ii) banks' contributions to the coordinated bail-in plan are larger. <br />
<br />
This is joint work with Benjamin Bernard and Joseph Stiglitz.

Jon Lee — Comparing relaxations via volume for nonconvex optimization, Nov 27
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=113259&date=2017-11-27
Abstract: Practical exact methods for global optimization of mixed-integer nonlinear optimization formulations rely on convex relaxation. Then, one way or another (via refinement and/or disjunction), global optimality is sought. Success of this paradigm depends on balancing tightness and lightness of relaxations. We will investigate this from a mathematical viewpoint, comparing polyhedral relaxations via their volumes. Specifically, I will present some results concerning: fixed charge problems, vertex packing in graphs, boolean quadratic formulations, and convexification of monomials in the context of "spatial branch-and-bound" for factorable formulations. Our results can be employed by users (at the modeling level) and by algorithm designers/implementers alike. <br />
<br />
Short bio: Jon Lee is the G. Lawton and Louise G. Johnson Professor of Engineering at the University of Michigan. He received his Ph.D. from Cornell University. Jon has previously been a faculty member at Yale University and the University of Kentucky, and an adjunct professor at New York University. He was a Research Staff member at the IBM T.J. Watson Research Center, where he managed the mathematical programming group. Jon is author of ~120 papers, the text "A First Course in Combinatorial Optimization" (Cambridge University Press), and the open-source book "A First Course in Linear Optimization" (Reex Press). He was the founding Managing Editor of the journal Discrete Optimization (2004-06), he is currently co-Editor of the journal Mathematical Programming, and he is on the editorial boards of the journals Optimization and Engineering, and Discrete Applied Mathematics. Jon was Chair of the Executive Committee of the Mathematical Optimization Society (2008-10), and Chair of the INFORMS Optimization Society (2010-12). He was awarded the INFORMS Computing Society Prize (2010), and he is a Fellow of INFORMS (since 2013).

Hoda Bidkhori - Analyzing Process Flexibility Using Robust Optimization, Jan 26
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=114634&date=2018-01-26
Abstract: Process flexibility has been widely applied in many industries as a competitive strategy to improve responsiveness to demand uncertainty. The first part of the talk addresses the problem of managing process flexibility in a fairly general manufacturing system. In our model, each plant might have a different cost for adding flexibility or extra capacity. We model this problem as an adaptive optimization problem and discuss different approaches to solve it efficiently. <br />
In the second part of the talk, we discuss a distribution-free model to evaluate the performance of process flexibility structures when only the mean and partial expectation of the demand are known. We characterize the worst-case demand distribution under general concave objective functions and apply it to derive tight lower bounds for the performance of chaining structures in balanced systems. In the third part of the talk, we examine the worst-case performance of flexibility designs under supply and demand uncertainties, where supply uncertainty can take the form of either plant or arc disruptions.<br />
<br />
Bio: Dr. Hoda Bidkhori has been an Assistant Professor in the Department of Industrial Engineering at the University of Pittsburgh since Fall 2015. Prior to this, Dr. Bidkhori worked as a Lecturer and as a Postdoctoral Researcher at the Massachusetts Institute of Technology. She holds a Ph.D. in Applied Mathematics from MIT. Her current research centers on data-driven decision-making, decision-making under uncertainty, and the development and implementation of data-driven, computationally tractable solutions for problems arising in healthcare, transportation, and inventory management.

Laurent El Ghaoui - Lifted Neural Nets: Beyond The Grip Of Stochastic Gradients In Deep Learning, Jan 29
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=114846&date=2018-01-29
Abstract: We describe a novel family of models of multi-layer feedforward neural networks, where the activation functions are encoded via penalties in the training problem. The new framework allows for algorithms such as block-coordinate descent methods to be applied, in which each step is composed of simple (no hidden layer) supervised learning problems that are parallelizable across layers, or data points, or both. Although the training problem has many more variables than that of a standard network, preliminary experiments seem to indicate that the proposed models provide excellent initial guesses for standard networks, and could become competitive with state-of-the-art neural networks, both in terms of performance and speed. In addition, the lifted models provide avenues for interesting extensions such as network topology optimization, input matrix completion, and robustness against noisy inputs.

Wenpin Tang - Optimal Surviving Strategy For The Up The River Problem, Feb 5
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115023&date=2018-02-05
Abstract:<br />
<br />
There are more and more people living on the planet, but the available resources are limited. An interesting question, then, is how to allocate limited resources so as to maximize our welfare.<br />
<br />
In this talk, I will present the "Up the River problem", a dynamic allocation model in a random environment with limited resources. The model was introduced by David Aldous in 2000, along with a couple of open problems. A unit drift is distributed among a finite collection of Brownian particles on R+, which are annihilated once they reach the origin. Starting with K particles at x = 1, we prove Aldous' conjecture that the push-the-laggard strategy of distributing the drift asymptotically (as K → ∞) maximizes the total number of surviving particles, with approximately √(4/π) K^{1/2} survivors. The solution to the problem relies on stochastic calculus and partial differential equations: the hydrodynamic limit of the particle density satisfies a two-phase partial differential equation, one phase with a fixed boundary and the other with a free boundary.<br />
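The push-the-laggard strategy is easy to explore numerically. Below is a hedged Monte Carlo sketch (Euler-Maruyama discretization, with hypothetical values of K, the horizon T, and the step size): the whole unit drift is given to the lowest surviving particle, and particles are killed at the origin. For comparison we run the same noise with no drift at all, which the strategy dominates pathwise.

```python
import numpy as np

def survivors(strategy, K=100, T=50.0, dt=0.01, seed=7):
    """Count particles still above the origin after time T."""
    rng = np.random.default_rng(seed)
    x = np.ones(K)                     # K particles started at x = 1
    alive = np.ones(K, dtype=bool)
    for _ in range(int(T / dt)):
        # Draw noise for all K particles so both strategies see the same paths.
        noise = np.sqrt(dt) * rng.normal(size=K)
        drift = np.zeros(K)
        idx = np.flatnonzero(alive)
        if idx.size and strategy == "push-the-laggard":
            drift[idx[np.argmin(x[idx])]] = 1.0   # whole unit drift to laggard
        x = np.where(alive, x + drift * dt + noise, x)
        alive &= x > 0                 # absorb at the origin
    return int(alive.sum())

s_push = survivors("push-the-laggard")
s_none = survivors("no-drift")
print(s_push, s_none)   # pushing the laggard saves at least as many particles
```

Since the drift is nonnegative and the noise realizations are shared, every particle's path under push-the-laggard lies above its no-drift path, so the survivor count can only increase; the asymptotic √(4/π) K^{1/2} count emerges only for much larger K and long horizons.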
<br />
At the end of the talk, I will discuss recent progress on a continuous-time information search model initiated by Ke and Villas-Boas. The techniques used to solve the "Up the River problem" can also be applied to these search models. The talk is based on joint work with Li-Cheng Tsai (Up the River problem), Tony Ke, and J. Miguel Villas-Boas. <br />
<br />
Bio: Wenpin Tang is an Assistant Professor in the Department of Mathematics, UCLA. He obtained his Ph.D. from the Department of Statistics, UC Berkeley, where his advisor was Jim Pitman. Before coming to Berkeley, he obtained an engineering diploma (Diplôme d'ingénieur) from Ecole Polytechnique, France.<br />
<br />
Research: Tang's research interests include probability theory and its applications. He has been working on path embeddings in Brownian motion (with Jim Pitman) and path intersections of Brownian motion (with Steve Evans and Jim Pitman). He also solved a conjecture of David Aldous on the Up the River problem (with Li-Cheng Tsai). <br />
<br />
More generally, he is interested in topics such as random combinatorial objects, stochastic differential equations, and interacting particle systems.<br />
<br />
More information can be found on his website here: http://www.math.ucla.edu/~wenpintang/

Georgina Hall - LP, SOCP, and optimization-free approaches to sum of squares optimization, Feb 12
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115024&date=2018-02-12
The problem of optimizing over the cone of nonnegative polynomials is a fundamental problem in computational mathematics, with applications to polynomial optimization, control, machine learning, game theory, and combinatorics, among others. A number of breakthrough papers in the early 2000s showed that this problem, long thought to be intractable, could be solved by using sum of squares programming. This technique however has proved to be expensive for large-scale problems, as it involves solving large semidefinite programs (SDPs).<br />
<br />
In the first part of this talk, we present two methods for approximately solving large-scale sum of squares programs that dispense altogether with semidefinite programming and only involve solving a sequence of linear or second order cone programs generated in an adaptive fashion. In the second part of the talk, we focus on the problem of finding tight lower bounds on polynomial optimization problems (POPs), a fundamental task in this area that is most commonly handled through the use of SDP-based sum of squares hierarchies (e.g., due to Lasserre and Parrilo). In contrast to previous approaches, we provide the first theoretical framework for efficiently constructing converging hierarchies of lower bounds on POPs whose computation does not require any optimization, but simply the ability to multiply certain fixed polynomials together and check nonnegativity of the coefficients of the product.<br />
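The "optimization-free" idea, multiplying by fixed polynomials and checking coefficient signs, can be illustrated with a Pólya-type scheme in the same spirit (this univariate example is an assumption for illustration, not necessarily the talk's exact hierarchy): to certify p(x) > 0 on [0, ∞), multiply p by (1 + x)^r and check whether all coefficients of the product become nonnegative as r grows. No solver is involved, only polynomial multiplication.

```python
import numpy as np

def certify_level(p_coeffs, r_max=100):
    """Smallest r such that (1 + x)^r * p has all nonnegative coefficients,
    or None if no r up to r_max works. Coefficients: highest degree first."""
    prod = np.array(p_coeffs, dtype=float)
    for r in range(r_max + 1):
        if np.all(prod >= 0):
            return r
        prod = np.polymul(prod, [1.0, 1.0])  # multiply by (1 + x)
    return None

# p(x) = x^2 - x + 1 is positive on [0, inf) despite a negative coefficient:
# (1 + x) * p = x^3 + 1, whose coefficients are nonnegative, so r = 1 certifies.
print(certify_level([1.0, -1.0, 1.0]))
```

Polynomials whose complex roots approach the positive real axis need larger r, which is the hierarchy flavor: each level is cheap, and the levels converge.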
<br />
Bio: Georgina Hall is a final-year graduate student and a Gordon Wu fellow in the Department of Operations Research and Financial Engineering at Princeton University, where she is advised by Professor Amir Ali Ahmadi. She was the valedictorian of Ecole Centrale, Paris, where she obtained a B.S. and an M.S. in 2011 and 2013, respectively. Her interests lie in convex relaxations of NP-hard problems, particularly those arising in polynomial optimization. Georgina is the recipient of the Médaille de l’Ecole Centrale from the French Académie des Sciences and the Princeton School of Engineering and Applied Sciences Award for Excellence. She was also chosen for the 2017 Rising Stars in EECS workshop at Stanford and the 2017 Young Researchers Workshop at Cornell University. Her paper “DC decomposition of nonconvex polynomials using algebraic techniques” is the recent recipient of the 2016 INFORMS Computing Society Prize for Best Student Paper. She has also received a number of teaching awards, including Princeton University's Engineering Council Teaching Award, the university-wide Excellence in Teaching Award of the Princeton Graduate School, and the 2017 Excellence in Teaching of Operations Research Award of the Institute for Industrial and Systems Engineers.

Eric Friedlander - Mean-Field Methods In Large Stochastic Networks, Feb 14
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115453&date=2018-02-14
Abstract: Analysis of large-scale communication networks (e.g. ad hoc wireless networks, cloud computing systems, server networks etc.) is of great practical interest. The massive size of such networks frequently makes direct analysis intractable. Asymptotic approximations using hydrodynamic and diffusion scaling limits provide useful methods for approaching such problems. In this talk, we study two examples of such an analysis. In the first, the technique is applied to a model for load balancing in a large, cloud-based, storage system. In the second, we present an asymptotic method of solving control problems in such networks.<br />
<br />
Bio: Eric Friedlander's research is focused on the modeling and analysis of large-scale systems arising from communication networks and biological systems. As more and more business is conducted online, massive cloud-based storage systems and marketplaces have created the need for models and methodology applicable at scales that were previously impractical. In his research, he studies these types of systems and develops tractable methods for approaching the problems that arise. In addition, he is interested in data assimilation methods for complex high-dimensional systems. Data assimilation is the science of incorporating data into complex deterministic dynamical models (normally described through a system of ODEs/PDEs). This process is often complicated by the massive size of such systems, and various methods have been developed to cope with this challenge.

Zeyu Zheng - Top-Down Statistical Modeling, Feb 16
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115492&date=2018-02-16
Abstract: In this talk, we will argue that data-driven service systems engineering should take a statistical perspective that is guided by the decisions and performance measures that are critical from a managerial perspective. We further take the view that the statistical models will often be used as inputs to simulations that will be used to drive either capacity decisions or real-time decisions (such as dynamic staffing levels). We start by discussing Poisson arrival modeling in the context of systems in which time-of-day effects play a significant role. We will discuss several new statistical tools that we have developed that significantly improve the quality of the performance predictions made by the simulation models. In the second part of our talk, we show that in dealing with high-intensity arrival streams (such as in the call center and the ride-sharing contexts), the key statistical features of the traffic that must be captured for good performance prediction lie at much longer time scales than the inter-arrival times that are the usual focus of conventional statistical analysis for such problems. This observation is consistent with the extensive limit theory available for many-server systems. Our “top-down” approach focuses on data collected at these longer time scales, and on building statistical models that capture the key data features at this scale. In particular, we will discuss the use of Poisson auto-regressive processes as a basic tool in such “top-down” modeling, and on the statistical framework we are creating to build effective simulation-based decision tools based on real-world data.<br />
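A Poisson auto-regressive process of the kind mentioned above is simple to simulate (the parameter values below are hypothetical illustrations): the arrival count in each period is Poisson with a rate that feeds back on the previous period's count, so period-level counts are positively correlated and over-dispersed relative to an i.i.d. Poisson stream, capturing burstiness at the longer time scales the "top-down" approach targets.

```python
import numpy as np

def poisson_ar(n_periods=20000, omega=2.0, alpha=0.6, seed=0):
    """Simulate N_t ~ Poisson(lambda_t) with lambda_t = omega + alpha * N_{t-1}."""
    rng = np.random.default_rng(seed)
    counts = np.empty(n_periods)
    prev = 0
    for t in range(n_periods):
        lam = omega + alpha * prev        # rate feeds back on the last count
        prev = rng.poisson(lam)
        counts[t] = prev
    return counts

c = poisson_ar()
# A plain Poisson stream has variance/mean = 1; feedback inflates it.
print(f"mean {c.mean():.2f}, variance/mean {c.var() / c.mean():.2f}")
```

For this recursion the stationary mean is omega / (1 - alpha) = 5 and the variance-to-mean ratio is 1 / (1 - alpha^2) ≈ 1.56, so the simulated dispersion should sit well above the Poisson value of 1.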
<br />
<br />
<br />
Short Bio: Zeyu Zheng is a PhD candidate in the Department of Management Science and Engineering at Stanford University. His research lies at the interface of operations research, data science, and decision making. Zeyu has done research on simulation, data-driven decision making, stochastic modeling, machine learning, and over-the-counter markets, and he has a PhD minor in Statistics and an MA in Economics from Stanford University. Before coming to Stanford, he graduated from Peking University with a BS in mathematics.

Weina Wang - Delay Bounds And Asymptotics In Cloud Computing Systems, Feb 21
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115025&date=2018-02-21
Abstract<br />
With the emergence of big-data technologies, cloud computing systems are growing rapidly in size and becoming more and more complex, making it costly to conduct experiments and simulations. Therefore, modeling computing systems and characterizing their performance analytically are more critical than ever in identifying bottlenecks, informing system design, and facilitating provisioning. In this talk, I will illustrate how we study the delay performance in cloud computing systems from different modeling perspectives. First, I will focus on the delay of jobs that consist of multiple tasks, where the tasks can be processed in parallel on different servers, and a job is completed only when all its tasks are completed. Such jobs with parallel tasks are prevalent in today’s cloud computing systems. While the delay of individual tasks has been extensively studied, job delay has not been well-understood, even though job delay is the most important metric of interest to end users. In our work, we establish a stochastic upper bound on job delay using properties of associated random variables, and show its tightness in an asymptotic regime where the number of servers in the system and the number of tasks in a job both become large. After this, I will also briefly summarize our results on delay characterization for data-processing tasks where the locality of data needs to be considered, and for data transfer in large-scale datacenter networks.<br />
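The role of association in the job-delay bound can be illustrated with a toy model (the exponential delay distributions and the shared-congestion construction below are hypothetical, not the paper's system): a job finishes when its slowest task does, and positively associated task delays make that maximum stochastically smaller than if the tasks were independent with the same marginal delay distribution, which is why independence yields an upper bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100_000, 20                 # simulated jobs, tasks per job

# Associated task delays: a shared congestion term plus task-specific work.
common = rng.exponential(1.0, size=(n, 1))
own = rng.exponential(1.0, size=(n, m))
dep_job = (common + own).max(axis=1)          # job delay = slowest task

# Independent tasks with the same marginal (sum of two exponentials).
indep = rng.exponential(1.0, size=(n, m)) + rng.exponential(1.0, size=(n, m))
indep_job = indep.max(axis=1)

print(f"mean job delay, associated tasks: {dep_job.mean():.2f}; "
      f"independent upper bound: {indep_job.mean():.2f}")
```

Intuitively, when one task is slow because of shared congestion, its siblings tend to be slow too, so the tasks "agree" more and the maximum does not stretch as far as it does for independent draws.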
<br />
Bio<br />
Weina Wang is a joint postdoctoral research associate in the Coordinated Science Lab at the University of Illinois at Urbana-Champaign, and in the School of ECEE at Arizona State University. She received her B.E. from Tsinghua University and her Ph.D. from Arizona State University, both in Electrical Engineering. Her research lies in the broad area of applied probability and stochastic systems, with applications in cloud computing, data centers, and privacy-preserving data analytics. Her dissertation received the Dean’s Dissertation Award in the Ira A. Fulton Schools of Engineering at Arizona State University in 2016. She received the Kenneth C. Sevcik Outstanding Student Paper Award at ACM SIGMETRICS 2016.
Barna Saha - Efficient Fine-Grained Algorithms, Feb 26
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115452&date=2018-02-26
Abstract: One of the greatest successes of computational complexity theory is the classification of countless fundamental computational problems into polynomial-time and NP-hard ones, two classes that are often referred to as tractable and intractable, respectively. However, this crude distinction of algorithmic efficiency is clearly insufficient when handling today's large-scale data. We need a finer-grained design and analysis of algorithms that pinpoints the exact exponent of polynomial running time, and a better understanding of when a speed-up is not possible. Over the years, many polynomial-time approximation algorithms were devised as an approach to bypass the NP-hardness obstacle of many discrete optimization problems. This area has developed into a rich field producing many algorithmic ideas and has led to major advances in computational complexity. So far, however, a similar theory for problems with high polynomial running times, one that captures the trade-off between solution quality and running time, is largely lacking. <br />
<br />
In this presentation, I will give an overview of the newly growing field of fine-grained algorithms and complexity, and my contributions therein. This will include fundamental problems such as edit distance computation, all-pairs shortest paths, parsing, and matrix multiplication, which have applications ranging from genomics to statistical natural language processing, machine learning, and optimization. I will show how, as a natural byproduct of improved time complexity, one may design algorithms that are highly parallel, as well as streaming algorithms with sublinear space complexity. Finally, motivated by core machine learning applications, I will discuss alternative measures of efficiency that may be as relevant as time complexity.<br />
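Edit distance, one of the problems named above, is a good example of a fine-grained target: the textbook dynamic program runs in quadratic time, and fine-grained complexity asks whether that exponent can be beaten. The following is a standard Wagner-Fischer sketch, not code from the speaker's work:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic program with rolling rows."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # distances from a[:0] to every prefix of b
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete a[i-1]
                          curr[j - 1] + 1,     # insert b[j-1]
                          prev[j - 1] + cost)  # substitute (or match)
        prev = curr
    return prev[n]

print(edit_distance("kitten", "sitting"))  # → 3
```

For strings of length n this takes Θ(n²) time; a central fine-grained result is that a truly subquadratic algorithm would refute the Strong Exponential Time Hypothesis.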
<br />
Bio: Barna Saha is currently an Assistant Professor in the College of Information & Computer Science at the University of Massachusetts Amherst. She is also a Permanent Member of the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) at Rutgers. Before joining UMass in 2014, she was a Research Scientist at AT&T Shannon Laboratories, New Jersey. She spent four wonderful years (2007-2011) at the University of Maryland, College Park, where she received her Ph.D. in Computer Science. In Fall 2015, she was at the University of California, Berkeley as a Visiting Scholar and a fellow of the Simons Institute. Her research interests include theoretical computer science, probabilistic methods and randomized algorithms, and large-scale data analytics. She is the recipient of the NSF CAREER award (2017), a Google Faculty Award (2016), the Yahoo Academic Career Enhancement Award (2015), a Simons-Berkeley Research Fellowship (2015), an NSF CRII Award (2015), and a Dean's Dissertation Fellowship (2011). She received the best paper award at the Very Large Data Bases Conference (VLDB) 2009 for her work on probabilistic databases and was chosen as a finalist for best paper at the IEEE Conference on Data Engineering (ICDE) 2012 for developing new techniques to handle low-quality data.
Agostino Capponi - Columbia University, Apr 2
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115026&date=2018-04-02
Agostino Capponi joined Columbia University's IEOR Department in August 2014, where he is also a member of the Institute for Data Science and Engineering.<br />
<br />
His main research interests are in the area of networks, with a special focus on systemic risk, contagion, and control. In the context of financial networks, his research contributes to a better understanding of risk management practices and helps assess the impact of regulatory policies aimed at controlling financial markets. He has been awarded a grant from the Institute for New Economic Thinking for his research on dynamic contagion mechanisms.<br />
<br />
His research has been published in top-tier journals in operations research, mathematical finance, and financial economics, including Operations Research, Mathematics of Operations Research, Management Science, Review of Asset Pricing Studies, and Mathematical Finance. His work has also appeared in leading practitioner journals and invited book chapters. Agostino is a frequently invited speaker at major conferences in the area of systemic risk. He has ongoing collaborations with several governmental institutions tasked with the analysis of financial network data, in particular the US Commodity Futures Trading Commission and the Office of Financial Research. Agostino holds a world patent for a target-tracking methodology in military networks.<br />
<br />
Agostino received his Master's degree in Computer Science and his Ph.D. in Applied and Computational Mathematics from the California Institute of Technology, in 2006 and 2009, respectively.
Robots on the Edge: Intelligent Machines, Industry 4.0 and Fog Robotics, Apr 5
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=116353&date=2018-04-05
Please join us for the CITRIS Silicon Valley Forum, a new monthly series from CITRIS and the Banatao Institute. Our second panel of the Spring 2018 series invites Ken Goldberg, Professor of Industrial Engineering and Operations Research, and Juan Aparicio, Head of Research Group Advanced Manufacturing Automation at Siemens, to discuss Robots on the Edge: Intelligent Machines, Industry 4.0, and Fog Robotics on April 5, 2018, from 11:30am to 1:30pm. The talk will be hosted at the UC Santa Cruz Silicon Valley campus in Santa Clara, CA. Lunch will be provided.
Avraham Shtub - Technion, Apr 30
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115031&date=2018-04-30
Professor Avraham Shtub holds the Stephen and Sharon Seiden Chair in Project Management. He has a B.Sc. in Electrical Engineering from the Technion - Israel Institute of Technology (1974), an MBA from Tel Aviv University (1978), and a Ph.D. in Management Science and Industrial Engineering from the University of Washington (1982). He is a certified Project Management Professional (PMP) and a member of the Project Management Institute (PMI-USA). <br />
<br />
Professor Shtub is the recipient of the Institute of Industrial Engineers' 1995 "Book of the Year Award" for his book "Project Management: Engineering, Technology and Implementation" (co-authored with Jonathan Bard and Shlomo Globerson), Prentice Hall, 1994. He is the recipient of the Production and Operations Management Society's Wick Skinner Teaching Innovation Achievements Award for his book "Enterprise Resource Planning (ERP): The Dynamics of Operations Management". His books on project management have been published in English, Hebrew, Greek, and Chinese. He is the recipient of the 2008 Project Management Institute Professional Development Product of the Year Award for the training simulator "Project Team Builder – PTB". Prof. Shtub was a Department Editor for IIE Transactions and served on the Editorial Boards of the Project Management Journal, the International Journal of Project Management, IIE Transactions, and the International Journal of Production Research. <br />
<br />
He was a faculty member of the Department of Industrial Engineering at Tel Aviv University from 1984 to 1998, where he also served as chairman of the department (1993-1996). He joined the Technion in 1998 and was the Associate Dean and head of the MBA program. He has been a consultant to industry in the areas of project management, training by simulators, and the design of production-operation systems. He has been invited to speak at special seminars on project management and operations in Europe, the Far East, North America, South America, and Australia. Professor Shtub has visited and taught at Vanderbilt University, the University of Pennsylvania, the Korean Institute of Technology, Bilkent University in Turkey, Otago University in New Zealand, Yale University, the Universidad Politécnica de Valencia, and the University of Bergamo in Italy.