Industrial Engineering and Operations Research
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html
Upcoming Events

Hoda Bidkhori - Analyzing Process Flexibility Using Robust Optimization, Jan 26
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=114634&date=2018-01-26
Abstract: Process flexibility has been widely applied in many industries as a competitive strategy to improve responsiveness to demand uncertainty. The first part of the talk addresses the problem of managing process flexibility in a fairly general manufacturing system. In our model, each plant may have a different cost for adding flexibility or extra capacity. We model this problem as an adaptive optimization problem and discuss different approaches to solving it efficiently.

In the second part of the talk, we discuss a distribution-free model to evaluate the performance of process flexibility structures when only the mean and partial expectation of the demand are known. We characterize the worst-case demand distribution under general concave objective functions and apply it to derive tight lower bounds on the performance of chaining structures in balanced systems. In the third part of the talk, we examine the worst-case performance of flexibility designs under supply and demand uncertainties, where supply uncertainty can take the form of either plant or arc disruptions.
Bio: Dr. Hoda Bidkhori has been an Assistant Professor in the Department of Industrial Engineering at the University of Pittsburgh since Fall 2015. Prior to this, Dr. Bidkhori worked as a Lecturer and as a Postdoctoral Researcher at the Massachusetts Institute of Technology. She holds a Ph.D. in Applied Mathematics from MIT. Her current research centers on data-driven decision-making, decision-making under uncertainty, and the development and implementation of data-driven, computationally tractable solutions for problems arising in healthcare, transportation, and inventory management.

Laurent El Ghaoui - Lifted Neural Nets: Beyond The Grip Of Stochastic Gradients In Deep Learning, Jan 29
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=114846&date=2018-01-29
Abstract: We describe a novel family of models of multi-layer feedforward neural networks, where the activation functions are encoded via penalties in the training problem. The new framework allows for algorithms such as block-coordinate descent methods to be applied, in which each step is composed of simple (no hidden layer) supervised learning problems that are parallelizable across layers, or data points, or both. Although the training problem has many more variables than that of a standard network, preliminary experiments seem to indicate that the proposed models provide excellent initial guesses for standard networks, and could become competitive with state-of-the-art neural networks, both in terms of performance and speed. In addition, the lifted models provide avenues for interesting extensions such as network topology optimization, input matrix completion, and robustness against noisy inputs.

Wenpin Tang - Optimal Surviving Strategy For The Up The River Problem, Feb 5
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115023&date=2018-02-05
Abstract:

Nowadays there are more and more people living on the planet, but the available resources are limited. An interesting question is therefore how to allocate limited resources so as to maximize our welfare.

In this talk, I will present the "Up the River problem," a dynamic allocation model in a random environment with limited resources. The model was introduced by David Aldous in 2000, along with a couple of open problems. A unit drift is distributed among a finite collection of Brownian particles on R+, which are annihilated once they reach the origin. Starting K particles at x = 1, we prove Aldous' conjecture that the push-the-laggard strategy of distributing the drift asymptotically (as K → ∞) maximizes the total number of surviving particles, with approximately √(4/π)·K^(1/2) surviving particles. The solution relies on stochastic calculus and partial differential equations: the hydrodynamic limit of the particle density satisfies a two-phase partial differential equation, one with a fixed boundary and the other with a free boundary.
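The push-the-laggard strategy is easy to experiment with numerically. The following is a crude Euler-scheme sketch (not from the talk; the step size, horizon, and discretization of the unit-drift allocation are illustrative assumptions):

```python
import numpy as np

def up_the_river(K=100, T=10.0, dt=1e-3, seed=0):
    """Crude Euler scheme for the Up the River problem: K Brownian
    particles start at x = 1; the whole unit drift budget is given to
    the lowest surviving particle (push-the-laggard); particles are
    absorbed once they reach the origin.  Returns the survivor count."""
    rng = np.random.default_rng(seed)
    x = np.ones(K)
    alive = np.ones(K, dtype=bool)
    for _ in range(int(T / dt)):
        if not alive.any():
            break
        idx = np.where(alive)[0]
        drift = np.zeros(K)
        drift[idx[np.argmin(x[idx])]] = 1.0   # all drift to the laggard
        x[alive] += drift[alive] * dt + np.sqrt(dt) * rng.standard_normal(idx.size)
        alive &= x > 0.0
    return int(alive.sum())

survivors = up_the_river(K=100)
# Compare with the abstract's asymptotic prediction ~ sqrt(4/pi) * sqrt(K).
print(survivors)
```

The finite horizon and coarse time step make this a qualitative check only; the theorem concerns the K → ∞ limit.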

At the end of the talk, I will discuss recent progress on a continuous-time information search model, initiated by Ke and Villas-Boas. The techniques used to solve the "Up the River problem" can also be applied to these search models. The talk is based on joint work with Li-Cheng Tsai (the Up the River problem), and with Tony Ke and J. Miguel Villas-Boas (information search).

Bio: Wenpin Tang is an Assistant Professor in the Department of Mathematics at UCLA. He obtained his Ph.D. from the Department of Statistics at UC Berkeley, where his advisor was Jim Pitman. Before coming to Berkeley, he obtained an engineering diploma (Diplôme d'ingénieur) from École Polytechnique, France.

Research: Tang's research interests include probability theory and its applications. He has worked on path embeddings in Brownian motion (with Jim Pitman) and path intersections of Brownian motions (with Steve Evans and Jim Pitman). He also resolved a conjecture of David Aldous on the Up the River problem (with Li-Cheng Tsai).

More generally, he is interested in topics such as random combinatorial objects, stochastic differential equations, and interacting particle systems.

More information can be found on his website: http://www.math.ucla.edu/~wenpintang/

Georgina Hall - LP, SOCP, and optimization-free approaches to sum of squares optimization, Feb 12
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115024&date=2018-02-12
The problem of optimizing over the cone of nonnegative polynomials is a fundamental problem in computational mathematics, with applications to polynomial optimization, control, machine learning, game theory, and combinatorics, among others. A number of breakthrough papers in the early 2000s showed that this problem, long thought to be intractable, could be solved using sum of squares programming. This technique, however, has proved expensive for large-scale problems, as it involves solving large semidefinite programs (SDPs).

In the first part of this talk, we present two methods for approximately solving large-scale sum of squares programs that dispense altogether with semidefinite programming and only involve solving a sequence of linear or second-order cone programs generated in an adaptive fashion. In the second part of the talk, we focus on the problem of finding tight lower bounds on polynomial optimization problems (POPs), a fundamental task in this area that is most commonly handled through SDP-based sum of squares hierarchies (e.g., due to Lasserre and Parrilo). In contrast to previous approaches, we provide the first theoretical framework for efficiently constructing converging hierarchies of lower bounds on POPs whose computation does not require any optimization, but simply the ability to multiply certain fixed polynomials together and check nonnegativity of the coefficients of the product.
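To illustrate the flavor of the LP-based alternative: one inner approximation of the PSD cone (the "dsos" line of work by Ahmadi and collaborators, related to but not identical to the methods of this talk) replaces semidefiniteness with diagonal dominance, which can be checked, and optimized over, with linear constraints alone. A minimal sketch for quadratic forms:

```python
import numpy as np

def is_diagonally_dominant_psd(Q):
    """A symmetric matrix with nonnegative diagonal that is diagonally
    dominant is PSD (Gershgorin), so x'Qx is then a nonnegative form.
    Checking this is a set of linear inequalities -- no SDP needed."""
    Q = np.asarray(Q, dtype=float)
    d = np.diag(Q)
    off = np.abs(Q).sum(axis=1) - np.abs(d)   # row sums of off-diagonal entries
    return bool(np.all(d >= 0) and np.all(d >= off))

# x1^2 + x1*x2 + x2^2 has Gram matrix [[1, .5], [.5, 1]]: dominant, hence >= 0.
ok = is_diagonally_dominant_psd([[1.0, 0.5], [0.5, 1.0]])
# [[1, 2], [2, 1]] fails dominance (and is in fact indefinite).
bad = is_diagonally_dominant_psd([[1.0, 2.0], [2.0, 1.0]])
print(ok, bad)  # True False
```

The test is sufficient but not necessary for PSD-ness, which is exactly why such LP relaxations are cheaper but weaker than the SDP.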

Bio: Georgina Hall is a final-year graduate student and a Gordon Wu Fellow in the Department of Operations Research and Financial Engineering at Princeton University, where she is advised by Professor Amir Ali Ahmadi. She was the valedictorian of École Centrale Paris, where she obtained a B.S. and an M.S. in 2011 and 2013, respectively. Her interests lie in convex relaxations of NP-hard problems, particularly those arising in polynomial optimization. Georgina is the recipient of the Médaille de l'École Centrale from the French Académie des Sciences and the Princeton School of Engineering and Applied Sciences Award for Excellence. She was also chosen for the 2017 Rising Stars in EECS workshop at Stanford and the 2017 Young Researchers Workshop at Cornell University. Her paper "DC decomposition of nonconvex polynomials using algebraic techniques" received the 2016 INFORMS Computing Society Prize for Best Student Paper. She has also received a number of teaching awards, including Princeton University's Engineering Council Teaching Award, the university-wide Excellence in Teaching Award of the Princeton Graduate School, and the 2017 Excellence in Teaching of Operations Research Award of the Institute of Industrial and Systems Engineers.

Eric Friedlander - Mean-Field Methods In Large Stochastic Networks, Feb 14
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115453&date=2018-02-14
Abstract: Analysis of large-scale communication networks (e.g., ad hoc wireless networks, cloud computing systems, server networks) is of great practical interest. The massive size of such networks frequently makes direct analysis intractable. Asymptotic approximations using hydrodynamic and diffusion scaling limits provide useful methods for approaching such problems. In this talk, we study two examples of such an analysis. In the first, the technique is applied to a model for load balancing in a large, cloud-based storage system. In the second, we present an asymptotic method for solving control problems in such networks.

Bio: Eric Friedlander's research focuses on the modeling and analysis of large-scale systems arising from communication networks and biological systems. As more and more business is conducted online, massive cloud-based storage systems and marketplaces have created the need for models and methodology applicable at scales where direct analysis is impractical. In his research, he studies these types of systems and develops tractable methods for approaching the problems that arise. In addition, he is interested in data assimilation methods for complex high-dimensional systems. Data assimilation is the science of incorporating data into complex deterministic dynamical models (normally described through a system of ODEs/PDEs). This process is often complicated by the massive size of such systems, and various methods have been developed to cope with this challenge.

Zeyu Zheng - Top-Down Statistical Modeling, Feb 16
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115492&date=2018-02-16
Abstract: In this talk, we will argue that data-driven service systems engineering should take a statistical perspective that is guided by the decisions and performance measures that are critical from a managerial perspective. We further take the view that the statistical models will often be used as inputs to simulations that drive either capacity decisions or real-time decisions (such as dynamic staffing levels). We start by discussing Poisson arrival modeling in the context of systems in which time-of-day effects play a significant role. We will discuss several new statistical tools that we have developed that significantly improve the quality of the performance predictions made by the simulation models. In the second part of the talk, we show that in dealing with high-intensity arrival streams (such as in call center and ride-sharing contexts), the key statistical features of the traffic that must be captured for good performance prediction lie at much longer time scales than the inter-arrival times that are the usual focus of conventional statistical analysis for such problems. This observation is consistent with the extensive limit theory available for many-server systems. Our "top-down" approach focuses on data collected at these longer time scales, and on building statistical models that capture the key data features at this scale. In particular, we will discuss the use of Poisson auto-regressive processes as a basic tool in such "top-down" modeling, and the statistical framework we are creating to build effective simulation-based decision tools from real-world data.
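Time-of-day effects are typically modeled with a nonhomogeneous Poisson process, and the standard way to simulate one is Lewis-Shedler thinning. A short sketch with a hypothetical sinusoidal rate (the rate function and all numbers are illustrative, not from the talk):

```python
import math
import random

def nhpp_thinning(rate, rate_max, horizon, seed=0):
    """Lewis-Shedler thinning: simulate a nonhomogeneous Poisson process
    with intensity rate(t) <= rate_max on [0, horizon]."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)          # candidate from a homogeneous PP
        if t > horizon:
            return arrivals
        if rng.random() < rate(t) / rate_max:   # keep with prob rate(t)/rate_max
            arrivals.append(t)

# Hypothetical sinusoidal time-of-day effect over a 24-hour horizon.
rate = lambda t: 10.0 + 8.0 * math.sin(2 * math.pi * t / 24.0)
arrivals = nhpp_thinning(rate, rate_max=18.0, horizon=24.0)
print(len(arrivals))  # close to the integrated rate, here 240 on average
```

The fitting problem the talk addresses is the inverse of this: inferring rate(t) from arrival data at the right time scale.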

Short Bio: Zeyu Zheng is a Ph.D. candidate in the Department of Management Science and Engineering at Stanford University. His research lies at the interface of operations research, data science, and decision-making. Zeyu has done research on simulation, data-driven decision-making, stochastic modeling, machine learning, and over-the-counter markets. He holds a Ph.D. minor in Statistics and an M.A. in Economics from Stanford University. Before coming to Stanford, he graduated from Peking University with a B.S. in Mathematics.

Weina Wang - Delay Bounds And Asymptotics In Cloud Computing Systems, Feb 21
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115025&date=2018-02-21
Abstract:

With the emergence of big-data technologies, cloud computing systems are growing rapidly in size and becoming more and more complex, making it costly to conduct experiments and simulations. Therefore, modeling computing systems and characterizing their performance analytically are more critical than ever in identifying bottlenecks, informing system design, and facilitating provisioning. In this talk, I will illustrate how we study delay performance in cloud computing systems from different modeling perspectives. First, I will focus on the delay of jobs that consist of multiple tasks, where the tasks can be processed in parallel on different servers and a job is completed only when all its tasks are completed. Such jobs with parallel tasks are prevalent in today's cloud computing systems. While the delay of individual tasks has been extensively studied, job delay has not been well understood, even though job delay is the metric of greatest interest to end users. In our work, we establish a stochastic upper bound on job delay using properties of associated random variables, and show its tightness in an asymptotic regime where the number of servers in the system and the number of tasks in a job both become large. After this, I will also briefly summarize our results on delay characterization for data-processing tasks where the locality of data needs to be considered, and for data transfer in large-scale datacenter networks.
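The "job delay = slowest task" structure is easy to see in a toy model. The sketch below uses i.i.d. exponential task delays, an illustrative assumption only (the talk's bound handles dependence via association), and compares the mean job delay to the harmonic-number formula for the maximum of exponentials:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_jobs = 50, 10_000

# Hypothetical i.i.d. Exp(1) task delays: a job finishes only when its
# slowest task does, so job delay is the maximum over its tasks.
task_delays = rng.exponential(1.0, size=(n_jobs, n_tasks))
job_delays = task_delays.max(axis=1)

# E[max of n i.i.d. Exp(1)] is the n-th harmonic number, far above the
# mean task delay of 1.
harmonic = sum(1.0 / k for k in range(1, n_tasks + 1))
print(round(job_delays.mean(), 2), round(harmonic, 2))  # both near 4.5
```

This gap between task delay (mean 1) and job delay (mean ≈ 4.5 for 50 tasks) is exactly why task-level analysis does not directly answer the job-level question.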

Bio:

Weina Wang is a joint postdoctoral research associate in the Coordinated Science Lab at the University of Illinois at Urbana-Champaign and in the School of ECEE at Arizona State University. She received her B.E. from Tsinghua University and her Ph.D. from Arizona State University, both in Electrical Engineering. Her research lies in the broad area of applied probability and stochastic systems, with applications in cloud computing, data centers, and privacy-preserving data analytics. Her dissertation received the Dean's Dissertation Award in the Ira A. Fulton Schools of Engineering at Arizona State University in 2016. She received the Kenneth C. Sevcik Outstanding Student Paper Award at ACM SIGMETRICS 2016.

Barna Saha - Efficient Fine-Grained Algorithms, Feb 26
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115452&date=2018-02-26
Abstract: One of the greatest successes of computational complexity theory is the classification of countless fundamental computational problems into polynomial-time and NP-hard ones, two classes often referred to as tractable and intractable, respectively. However, this crude distinction of algorithmic efficiency is clearly insufficient when handling today's large scale of data. We need a finer-grained design and analysis of algorithms that pinpoints the exact exponent of the polynomial running time, and a better understanding of when a speed-up is not possible. Over the years, many polynomial-time approximation algorithms were devised as an approach to bypass the NP-hardness obstacle of many discrete optimization problems. This area has developed into a rich field producing many algorithmic ideas and has led to major advances in computational complexity. So far, however, a similar theory for problems with high polynomial running times, one that explains the trade-off between quality and running time, is largely lacking.

In this presentation, I will give an overview of the newly growing field of fine-grained algorithms and complexity, and my contributions therein. This will include fundamental problems such as edit distance computation, all-pairs shortest paths, parsing, and matrix multiplication, which have applications ranging from genomics to statistical natural language processing, machine learning, and optimization. I will show how, as a natural byproduct of improved time complexity, one may design algorithms that are highly parallel, as well as streaming algorithms with sublinear space complexity. Finally, motivated by core machine learning applications, I will discuss alternative measures of efficiency that may be as relevant as time complexity.
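Edit distance is the canonical fine-grained example: the textbook dynamic program below runs in O(nm) time, and a celebrated fine-grained result (Backurs and Indyk, 2015) shows that a strongly subquadratic algorithm would refute the Strong Exponential Time Hypothesis.

```python
def edit_distance(a: str, b: str) -> int:
    """Textbook O(len(a) * len(b)) dynamic program for Levenshtein
    distance, using two rows of the DP table for O(len(b)) space."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                            # deletion
                         cur[j - 1] + 1,                         # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))   # substitution
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```

Pinpointing this quadratic exponent, rather than merely noting "polynomial time," is precisely the fine-grained perspective the talk describes.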

Bio: Barna Saha is currently an Assistant Professor in the College of Information & Computer Sciences at the University of Massachusetts Amherst. She is also a Permanent Member of the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) at Rutgers. Before joining UMass in 2014, she was a Research Scientist at AT&T Shannon Laboratories, New Jersey. She spent four wonderful years (2007-2011) at the University of Maryland, College Park, where she received her Ph.D. in Computer Science. In Fall 2015, she was at the University of California, Berkeley as a Visiting Scholar and a fellow of the Simons Institute. Her research interests include theoretical computer science, probabilistic methods and randomized algorithms, and large-scale data analytics. She is the recipient of an NSF CAREER award (2017), a Google Faculty Award (2016), a Yahoo Academic Career Enhancement Award (2015), a Simons-Berkeley Research Fellowship (2015), an NSF CRII Award (2015), and a Dean's Dissertation Fellowship (2011). She received the best paper award at the Very Large Data Bases Conference (VLDB) 2009 for her work on probabilistic databases, and was a finalist for the best paper award at the IEEE Conference on Data Engineering (ICDE) 2012 for developing new techniques to handle low-quality data.

Agostino Capponi - Columbia University, Apr 2
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115026&date=2018-04-02
Agostino Capponi joined Columbia University's IEOR Department in August 2014, where he is also a member of the Institute for Data Science and Engineering.

His main research interests are in the area of networks, with a special focus on systemic risk, contagion, and control. In the context of financial networks, his research contributes to a better understanding of risk management practices and to assessing the impact of regulatory policies aimed at controlling financial markets. He has been awarded a grant from the Institute for New Economic Thinking for his research on dynamic contagion mechanisms.

His research has been published in top-tier journals of operations research, mathematical finance, and financial economics, including Operations Research, Mathematics of Operations Research, Management Science, Review of Asset Pricing Studies, and Mathematical Finance. His work has also appeared in leading practitioner journals and invited book chapters. Agostino is a frequently invited speaker at major conferences in the area of systemic risk. He has ongoing collaborations with several governmental institutions tasked with the analysis of financial network data, in particular the US Commodity Futures Trading Commission and the Office of Financial Research. Agostino holds a world patent for a target-tracking methodology in military networks.

Agostino received his Master's and Ph.D. degrees in Computer Science and in Applied and Computational Mathematics from the California Institute of Technology, in 2006 and 2009, respectively.

Robots on the Edge: Intelligent Machines, Industry 4.0 and Fog Robotics, Apr 5
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=116353&date=2018-04-05
Please join us for the CITRIS Silicon Valley Forum, a new monthly series from CITRIS and the Banatao Institute. Our second panel of the Spring 2018 series invites Ken Goldberg, Professor of Industrial Engineering and Operations Research, and Juan Aparicio, Head of the Advanced Manufacturing Automation research group at Siemens, to discuss Robots on the Edge: Intelligent Machines, Industry 4.0, and Fog Robotics on April 5, 2018, from 11:30am to 1:30pm. The talk will be hosted at the UC Santa Cruz Silicon Valley campus in Santa Clara, CA. Lunch will be provided.

Rahul Jain — Reinforcement Learning without Reinforcement, Apr 16
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=116947&date=2018-04-16
Abstract: Reinforcement Learning (RL) is concerned with solving sequential decision-making problems in the presence of uncertainty. RL is really about two problems together. The first is the "Bellman problem": finding the optimal policy given the model, which may involve large state spaces. Various approximate dynamic programming and RL schemes have been developed, but they either lack guarantees, are not universal, or are rather slow. In fact, most RL algorithms have become synonymous with stochastic approximation schemes that are known to be rather slow. The problem is even more difficult for MDPs with continuous state (and action) spaces. We present a class of RL algorithms for the continuous-state-space problem based on "empirical" ideas, which are simple, effective, and yet universal, with probabilistic guarantees. The idea involves randomized kernel-based function fitting combined with "empirical" updates. The key is a "probabilistic contraction analysis" method we have developed for the analysis of stochastic iterative algorithms, wherein we show convergence to a probabilistic fixed point of a sequence of random operators via a stochastic dominance argument.

The second RL problem is the "online learning (or Lai-Robbins) problem," which arises when the model itself is unknown. We propose a simple posterior-sampling-based regret-minimization reinforcement learning algorithm for MDPs. It achieves O(√T) regret, which is order-optimal. It not only optimally manages the exploration-versus-exploitation tradeoff but also obviates the need for expensive computation for exploration. The algorithm differs from classical adaptive control in its focus on non-asymptotic regret optimality as opposed to asymptotic stability. This appears to resolve a long-standing open problem in reinforcement learning.
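The posterior-sampling idea can be sketched for a tabular MDP as follows. This is illustrative only: the MDP is hypothetical, rewards are assumed known, and this is a generic posterior-sampling loop, not the speaker's exact algorithm or its regret analysis.

```python
import numpy as np

def psrl(P_true, R, n_episodes=200, horizon=10, seed=0):
    """Posterior-sampling RL sketch: each episode, sample a transition
    model from a Dirichlet posterior, compute its optimal finite-horizon
    policy by backward induction, act in the true MDP, and update the
    posterior with the observed transitions.  Rewards R[s, a] known."""
    rng = np.random.default_rng(seed)
    S, A = R.shape
    counts = np.ones((S, A, S))                  # Dirichlet(1, ..., 1) prior
    total = 0.0
    for _ in range(n_episodes):
        P = np.apply_along_axis(rng.dirichlet, 2, counts)  # posterior sample
        V = np.zeros(S)
        policy = np.zeros((horizon, S), dtype=int)
        for h in reversed(range(horizon)):       # backward induction
            Q = R + P @ V
            policy[h] = Q.argmax(axis=1)
            V = Q.max(axis=1)
        s = 0
        for h in range(horizon):                 # roll out in the true MDP
            a = policy[h, s]
            s_next = rng.choice(S, p=P_true[s, a])
            total += R[s, a]
            counts[s, a, s_next] += 1
            s = s_next
    return total

# Hypothetical 2-state, 2-action MDP: action 1 tends to reach the
# rewarding state 1 from either state.
P_true = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.9, 0.1], [0.2, 0.8]]])
R = np.array([[0.0, 0.0], [1.0, 1.0]])
total = psrl(P_true, R)
print(total > 0)  # True
```

Note that exploration here costs nothing extra: randomness in the posterior sample itself drives exploration, which is the "without reinforcement" appeal of the approach.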

Biography: Rahul Jain is the K. C. Dahlberg Early Career Chair and Associate Professor of Electrical Engineering, Computer Science* and ISE* (*by courtesy) at the University of Southern California (USC). He received a B.Tech. from IIT Kanpur, and an M.A. in Statistics and a Ph.D. in EECS from the University of California, Berkeley. Prior to joining USC, he was at the IBM T. J. Watson Research Center, Yorktown Heights, NY. He has received numerous awards, including the NSF CAREER award, the ONR Young Investigator award, an IBM Faculty award, and the James H. Zumberge Faculty Research and Innovation Award, and he is currently a US Fulbright Specialist Scholar. His interests span reinforcement learning, stochastic control, statistical learning, stochastic networks, and game theory, with power systems and healthcare on the applications side. The talk is based on work with a number of outstanding students and postdocs who are now faculty members themselves at top places.

Avraham Shtub - Technion, Apr 30
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=115031&date=2018-04-30
Professor Avraham Shtub holds the Stephen and Sharon Seiden Chair in Project Management. He has a B.Sc. in Electrical Engineering from the Technion - Israel Institute of Technology (1974), an MBA from Tel Aviv University (1978), and a Ph.D. in Management Science and Industrial Engineering from the University of Washington (1982). He is a certified Project Management Professional (PMP) and a member of the Project Management Institute (PMI-USA).

Professor Shtub is the recipient of the Institute of Industrial Engineers' 1995 "Book of the Year Award" for his book "Project Management: Engineering, Technology and Implementation" (co-authored with Jonathan Bard and Shlomo Globerson), Prentice Hall, 1994. He is the recipient of the Production and Operations Management Society's Wick Skinner Teaching Innovation Achievements Award for his book "Enterprise Resource Planning (ERP): The Dynamics of Operations Management". His books on project management have been published in English, Hebrew, Greek, and Chinese. He is the recipient of the 2008 Project Management Institute Professional Development Product of the Year Award for the training simulator "Project Team Builder - PTB". Prof. Shtub was a Department Editor for IIE Transactions and has served on the editorial boards of the Project Management Journal, the International Journal of Project Management, IIE Transactions, and the International Journal of Production Research.

He was a faculty member of the Department of Industrial Engineering at Tel Aviv University from 1984 to 1998, where he also served as chairman of the department (1993-1996). He joined the Technion in 1998 and has been the Associate Dean and head of the MBA program. He has been a consultant to industry in the areas of project management, training by simulators, and the design of production-operation systems. He has been invited to speak at special seminars on project management and operations in Europe, the Far East, North America, South America, and Australia. Professor Shtub has visited and taught at Vanderbilt University, the University of Pennsylvania, the Korean Institute of Technology, Bilkent University in Turkey, the University of Otago in New Zealand, Yale University, the Universidad Politécnica de Valencia, and the University of Bergamo in Italy.

BLISS Seminar: Learning with Low Approximate Regret with Partial Feedback, May 7
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=117308&date=2018-05-07
We consider the adversarial multi-armed bandit problem with partial feedback, minimizing a non-negative loss function using the graph-based feedback framework introduced by Mannor and Shamir in 2011. We offer algorithms that attain small-loss bounds, as well as low approximate regret against a shifting comparator.

Classical learning algorithms add a low level of uniform noise to the algorithm's choice to limit the variance of the loss estimator used in importance sampling, which also helps the algorithm shift to a new arm fast enough when the comparator changes. However, such an approach poses significant hurdles to proving small-loss or low-approximate-regret bounds. We show that a different general technique of freezing arms, rather than adding random noise, does much better in this setting. The idea of freezing arms was proposed by Allenberg et al. in 2006 in the context of bandit learning with multiplicative weights. We show the broad applicability of this technique by extending it to partial-information feedback (via a novel dual freezing thresholding technique) and to shifting comparators. This is joint work with Thodoris Lykouris and Karthik Sridharan.

Let's be Flexible: Soft Haptics and Soft Robotics, Jun 15
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=117719&date=2018-06-15
While traditional robotic manipulators are constructed from rigid links and localized joints, a new generation of robotic devices are soft, using flexible, deformable materials. In this talk, I will describe several new systems that leverage softness to achieve novel shape control, provide a compliant interface to the human body, and access hard-to-reach locations. First, soft haptic devices change their shape and mechanical properties to allow medical simulation and new paradigms for human-computer interfaces. They can be made wearable by people, or by objects in the environment, as needed to assist human users. Second, superelastic materials and 3D-printed soft plastics enable surgical robots that can steer within the human body to reach targets inaccessible via the straight-line paths of traditional instruments. These surgical robots are designed on a patient- and procedure-specific basis, to minimize invasiveness and facilitate low-cost interventions in special patient populations. Third, everting pneumatic tubes are used to create robots that can grow hundreds of times in length, steer around obstacles, and squeeze through tight spaces. These plant-inspired growing robots can achieve simple remote manipulation tasks, deliver payloads such as water or sensors in search-and-rescue scenarios, and shape themselves into useful structures.

Biosketch: Allison M. Okamura received the B.S. degree from the University of California at Berkeley in 1994, and the M.S. and Ph.D. degrees from Stanford University in 1996 and 2000, respectively, all in mechanical engineering. She is currently a Professor in the Mechanical Engineering Department at Stanford University, with a courtesy appointment in Computer Science. She is the Editor-in-Chief of the journal IEEE Robotics and Automation Letters. Her awards include the 2016 Duca Family University Fellow in Undergraduate Education, the 2009 IEEE Technical Committee on Haptics Early Career Award, the 2005 IEEE Robotics and Automation Society Early Academic Career Award, and a 2004 NSF CAREER Award. She is an IEEE Fellow. Her academic interests include haptics, teleoperation, virtual environments and simulators, medical robotics, neuromechanics and rehabilitation, prosthetics, and engineering education. Outside academia, she enjoys spending time with her husband and two children, running, and playing ice hockey. For more information about her research, please see the Collaborative Haptics and Robotics in Medicine (CHARM) Laboratory website: http://charm.stanford.edu.

Seminar 217, Risk Management: A Term Structure Model for Dividends and Interest Rates, Jul 31
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118257&date=2018-07-31
Over the last decade, dividends have become a standalone asset class instead of a mere side product of an equity investment. We introduce a framework based on polynomial jump-diffusions to jointly price the term structures of dividends and interest rates. Prices for dividend futures, bonds, and the dividend-paying stock are given in closed form. We present an efficient moment-based approximation method for option pricing. In a calibration exercise we show that a parsimonious model specification fits well to Euribor interest rate swaps and swaptions, Euro Stoxx 50 index dividend futures and dividend futures options, and Euro Stoxx 50 index options.

Seminar 217, Risk Management: Is motor insurance ratemaking going to change with telematics and semi-autonomous vehicles?, Aug 28
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118738&date=2018-08-28
Many automobile insurance companies offer the possibility to monitor driving habits and distance driven by means of telematics devices installed in the vehicles. This provides a novel source of data that can be analysed to calculate personalised tariffs. For instance, drivers who accumulate a lot of miles should be charged more for their insurance coverage than those who make little use of their car. However, it can also be argued that drivers with more miles have better driving skills than those who hardly use their vehicle, meaning that the price per mile should decrease with distance driven. A statistical analysis of a real data set by means of machine learning techniques shows a gaining-experience effect for large values of distance travelled: longer driving should result in a higher premium, but drivers who accumulate longer distances over time should receive a discount, due to the increased proportion of zero claims. We confirm that speed limit violations and driving in urban areas increase the expected number of accident claims. We discuss how telematics information can be used to design better insurance products and to improve traffic safety. Predictive models provide benchmarks of the impact of semi-autonomous vehicles on insurance rates.
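The pricing logic described above can be sketched with a toy claim-frequency model. All constants and the concave functional form below are illustrative assumptions, not the speaker's fitted model: if expected claims grow with distance at a decreasing rate, the premium rises with mileage while the price per mile falls, which is the gaining-experience discount in miniature.

```python
# Toy per-mile pricing sketch. The concave exponent b < 1 and all constants
# are illustrative assumptions, not estimates from the talk's data.

def expected_claims(miles: float, a: float = 0.002, b: float = 0.7) -> float:
    """Expected annual claim count: lambda(d) = a * d**b, concave in distance."""
    return a * miles ** b

def pure_premium(miles: float, cost_per_claim: float = 3000.0) -> float:
    """Pure premium = expected claim count times average claim cost."""
    return expected_claims(miles) * cost_per_claim

for d in (2000, 5000, 10000, 20000):
    print(f"{d:>6} miles: premium {pure_premium(d):8.2f}, "
          f"per mile {pure_premium(d) / d:.4f}")
```

With b < 1 the premium is increasing in total distance while the per-mile rate is decreasing, matching the qualitative pattern described in the abstract.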

This talk will cover the award-winning paper on semi-autonomous vehicle insurance presented at the International Congress of Actuaries in Berlin in June 2018, which is under revision at Accident Analysis and Prevention. It will also include the contents of a paper entitled “The use of telematics devices to improve automobile insurance rates”, accepted in Risk Analysis.

Seminar 217, Risk Management: On Optimal Options Book Execution Strategies with Market Impact, Sep 4
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118739&date=2018-09-04
We consider the optimal execution of a book of options when market impact is a driver of the option price. We aim to minimize a mean-variance risk criterion for a given market impact function. First, we develop a framework to justify the choice of our market impact function. Our model is inspired by Leland’s option replication with transaction costs, where the market impact is directly part of the implied volatility function. The option price is then expressed through a Black–Scholes-like PDE with a modified implied volatility that depends directly on the market impact. We set up a stochastic control framework and solve a Hamilton–Jacobi–Bellman equation using finite difference methods. The expected-cost problem suggests that the optimal execution strategy is characterized by a convex, increasing trading speed, in contrast to the equity case, where the optimal execution strategy results in a roughly constant trading speed. However, in such a mean-valuation framework, the underlying spot price does not seem to affect the agent’s decision. By taking the agent’s risk aversion into account through a mean-variance approach, the strategy becomes more sensitive to the underlying price evolution, urging the agent to trade faster at the beginning of the strategy.

Seminar 217, Risk Management: Capacity constraints in learning, and asset prices before earnings announcements, Sep 11
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118096&date=2018-09-11
This paper proposes an asset pricing model with endogenous allocation of constrained learning capacity, which provides an explanation for abnormal returns before the scheduled release of information about firms, such as quarterly earnings announcements. In equilibrium, investors endogenously focus their learning capacity and acquire information about stocks with upcoming announcements, resulting in excess price movements during this period. I show cross-sectional heterogeneity in stock returns and in institutional investors' information demand before quarterly earnings announcements that is consistent with the model. The results suggest that limited information acquisition capacity, and investors' optimal allocation response to it, can play a significant role in asset price movements before firms' scheduled announcements.

Science Lecture - Artificial Intelligence and the long-term future of humanity, Sep 15
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=119492&date=2018-09-15
The news media in recent years have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking and Elon Musk. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, Professor Russell will argue instead that a fundamental reorientation of the field is required to avoid the existential risks that AI might otherwise create. Other risks, such as progressive enfeeblement, seem harder to address.

Seminar 217, Risk Management: Nonstandard Analysis and its Application to Markov Processes, Sep 18
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118740&date=2018-09-18
Nonstandard analysis, a powerful machinery derived from mathematical logic, has had many applications in probability theory and stochastic processes. Nonstandard analysis allows the construction of a single object, a hyperfinite probability space, which satisfies all the first-order logical properties of a finite probability space but which can simultaneously be viewed as a measure-theoretic probability space via the Loeb construction. As a consequence, the hyperfinite/measure duality has proven particularly useful in porting discrete results into continuous settings.
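A loose finite analogue of the hyperfinite idea described above (illustrative only; in the actual construction the grid size N is taken to be an infinite hypernatural, making the approximation error infinitesimal): a uniform measure on an N-point grid reproduces the Lebesgue measure of an interval up to an error of at most 1/N.

```python
from fractions import Fraction

def grid_measure(a: Fraction, b: Fraction, n: int) -> Fraction:
    """Mass that the uniform measure on {k/n : 0 <= k < n} gives to [a, b)."""
    count = sum(1 for k in range(n) if a <= Fraction(k, n) < b)
    return Fraction(count, n)

# The grid measure of [1/3, 2/3) differs from its Lebesgue measure 1/3 by at
# most 1/n; for a hyperfinite n the difference would be infinitesimal.
n = 10_000
err = abs(grid_measure(Fraction(1, 3), Fraction(2, 3), n) - Fraction(1, 3))
print(err <= Fraction(1, n))  # prints True
```

The grid space is a bona fide finite probability space, yet pushing n to a hyperfinite value recovers continuous Lebesgue measure via the Loeb construction; this is the discrete-to-continuous transfer the duality exploits.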

In this talk, for every general-state-space continuous-time Markov process satisfying appropriate conditions, we construct a hyperfinite Markov process to represent it. Hyperfinite Markov processes have all the first-order logical properties of finite Markov processes. We establish the ergodicity of a large class of general-state-space continuous-time Markov processes by studying their hyperfinite counterparts. We also establish the asymptotic equivalence between mixing times, hitting times, and average mixing times for discrete-time general-state-space Markov processes satisfying moderate conditions. Finally, we show that our result is applicable to a large class of Gibbs samplers and a large class of Metropolis–Hastings algorithms.

Seminar 217, Risk Management: A Deep Learning Investigation of One-Month Momentum, Sep 25
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118741&date=2018-09-25
The one-month return reversal in equity prices was first documented by Jegadeesh (1990), who found a highly significant negative serial correlation in the monthly return series of stocks. This is in contrast to the positive serial correlation of annual stock returns. Explanations for this effect differ, but the general consensus has been that the trailing one-month return includes a component of overreaction by investors. Since 1990, the one-month return reversal effect has decayed substantially, which has led others to refine it. Asness, Frazzini, Gormsen, and Pedersen (2017) refine this idea by adjusting MAX5 (the average of the five highest daily returns over the trailing month) for trailing volatility. They define a measure SMAX (scaled MAX5), which is MAX5 divided by the trailing month's daily return volatility. SMAX is designed to capture lottery demand in excess of volatility. They show that SMAX exhibits an even stronger one-month return reversal than the trailing month return.
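The SMAX construction just described can be sketched in a few lines. The simulated data and parameter choices here are assumptions for illustration, not the authors' exact methodology:

```python
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.02, size=21)  # one simulated trading month

# MAX5: average of the five highest daily returns over the trailing month.
max5 = np.sort(daily_returns)[-5:].mean()

# SMAX: MAX5 scaled by the trailing month's daily return volatility, so it
# captures lottery-like payoffs in excess of plain volatility.
vol = daily_returns.std(ddof=1)
smax = max5 / vol

print(f"MAX5 = {max5:.4f}, vol = {vol:.4f}, SMAX = {smax:.4f}")
```

In a cross-sectional application one would compute SMAX per stock per month and sort stocks into portfolios on it, but that bookkeeping is omitted here.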

In this talk, I first replicate the results of Jegadeesh and of Asness et al. as benchmark models. I confirm that SMAX outperforms simple return reversal over the test period 1993-2017, although its effectiveness declines substantially over that period. Using an enhanced combination of return statistics, I improve upon SMAX, and I improve upon it further by applying neural networks to trailing daily active returns. Note that all of these signals decay substantially in effectiveness over the common test period 1998-2017.

Seminar 217, Risk Management: Predicting Portfolio Return Volatility at Medium Horizons, Oct 2
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118742&date=2018-10-02
Commercially available factor models provide good predictions of short-horizon (e.g. one-day or one-week) portfolio volatility, based on estimated portfolio factor loadings and responsive estimates of factor volatility. These predictions are of significant value to certain short-term investors, such as hedge funds. However, they provide limited guidance to long-term investors, such as defined benefit pension plans, individual owners of defined contribution pension plans, and insurance companies. Because return volatility is variable and mean-reverting, the square root rule for extrapolating short-term volatility predictions to medium-horizon (one- to five-year) risk predictions systematically overstates (understates) medium-horizon risk when short-term volatility is high (low). In this paper, we propose a computationally feasible method for extrapolating to medium-horizon risk predictions in one-factor models that substantially outperforms the square root rule.

Seminar 217, Risk Management: Topic Forthcoming, Oct 9
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118749&date=2018-10-09

Seminar 217, Risk Management: Topic Forthcoming, Oct 16
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118748&date=2018-10-16

Seminar 217, Risk Management: Topic Forthcoming, Oct 23
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118743&date=2018-10-23

Seminar 217, Risk Management: Topic Forthcoming, Nov 13
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118744&date=2018-11-13

Seminar 217, Risk Management: Topic Forthcoming, Nov 27
http://events.berkeley.edu/index.php/calendar/sn/IEOR.html?event_ID=118746&date=2018-11-27