Workshop Overview
- Place: The workshop was held in the Shi Dai Building, School of Management, Fudan University, from Monday, July 23, to Wednesday, July 25, 2012.
- Program Chairs: Jeff L. Hong (HKUST), Jian-Qiang Hu (Fudan University), Seong-Hee Kim (Georgia Tech).
- Program: For a full program schedule, click here.
- Participants: There were 34 participants in total. To see the full list of participants, click here.
Presentation
- Click the title of a presentation to view its slides.
Monday, July 23, 2012, Morning
- Response Surface Estimation (Peter Glynn, Stanford University)
In many settings, it is of interest to understand the global structure of the "response" of a stochastic system to changes in model input parameters. Such calculations tend to be computationally intensive because of the need to simulate the system at many points in the parameter space. In this talk, we will discuss recent developments in this area, including the incorporation of shape and smoothness constraints, and the use of Gaussian random field ideas. This work is joint with Eunji Lim and Mohammad Mousavi.
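As a concrete starting point for the shape-constraint idea mentioned in the abstract, the sketch below fits a monotone response surface to noisy simulation output using the pool-adjacent-violators algorithm. The M/M/1 example and the implementation are illustrative assumptions, not the method presented in the talk.

```python
import numpy as np

def simulate_mm1_wait(rho, n_customers=5000, rng=None):
    """Estimate mean waiting time of an M/M/1 queue at traffic intensity
    rho via the Lindley recursion (an illustrative 'stochastic system')."""
    rng = rng or np.random.default_rng()
    inter = rng.exponential(1.0, n_customers)      # arrival rate 1
    service = rng.exponential(rho, n_customers)    # mean service time rho
    w = np.zeros(n_customers)
    for i in range(1, n_customers):
        w[i] = max(0.0, w[i - 1] + service[i - 1] - inter[i])
    return w.mean()

def pava(y):
    """Pool-adjacent-violators: least-squares fit under a nondecreasing
    shape constraint."""
    blocks = [[float(v), 1] for v in y]            # (block mean, weight)
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:        # violation: pool blocks
            m = blocks[i][0]*blocks[i][1] + blocks[i+1][0]*blocks[i+1][1]
            w = blocks[i][1] + blocks[i + 1][1]
            blocks[i:i + 2] = [[m / w, w]]
            i = max(i - 1, 0)
        else:
            i += 1
    fit = []
    for m, w in blocks:
        fit.extend([m] * w)
    return np.array(fit)

rng = np.random.default_rng(0)
rhos = np.linspace(0.1, 0.9, 9)
noisy = np.array([simulate_mm1_wait(r, rng=rng) for r in rhos])
print("raw estimates:     ", np.round(noisy, 3))
print("monotone estimate: ", np.round(pava(noisy), 3))
```

The monotone fit pools information across design points, which is one way a known response shape can reduce the simulation effort the abstract alludes to.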
- Good Path Simulation Methods for Quasi-Monte Carlo Pricing and Hedging of Financial Derivatives with Discontinuities (Xiaoqun Wang, Tsinghua University)
Quasi-Monte Carlo (QMC) methods are important numerical tools in the pricing and hedging of complex financial derivatives. When the payoff functions are discontinuous, QMC methods combined with traditional path simulation methods (such as the Brownian bridge and principal component analysis) may give poor results. This paper develops new path simulation methods that take the discontinuity structure into account. The new path simulation methods may "localize" the discontinuities and reduce the effective dimension, making the functions well suited to QMC. Numerical experiments demonstrate that the proposed methods bring a dramatic variance reduction in QMC for pricing exotic options and for estimating sensitivities.
If interested, see the article "Pricing and Hedging with Discontinuous Functions: Quasi-Monte Carlo Methods and Dimension Reduction," Management Science, Vol. 59 (2013), pp. 376-389.
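For context, the Brownian bridge construction that the abstract uses as a baseline can be sketched as follows. This is the standard construction, not the paper's new discontinuity-aware method, and it assumes SciPy 1.7+ for the Sobol generator.

```python
import numpy as np
from scipy.stats import qmc, norm

def brownian_bridge_paths(n_paths, n_steps, T=1.0, seed=0):
    """Build Brownian paths from Sobol points via the Brownian bridge:
    the first coordinate fixes the terminal value, later coordinates fill
    in midpoints, concentrating variance in the low QMC dimensions."""
    assert n_steps & (n_steps - 1) == 0, "use a power of two for a clean bridge"
    sobol = qmc.Sobol(d=n_steps, scramble=True, seed=seed)
    z = norm.ppf(sobol.random(n_paths))          # standard normals
    t = np.linspace(0.0, T, n_steps + 1)
    w = np.zeros((n_paths, n_steps + 1))
    w[:, -1] = np.sqrt(T) * z[:, 0]              # terminal value first
    dim, h = 1, n_steps
    while h > 1:                                 # bisect each interval
        for left in range(0, n_steps, h):
            right, mid = left + h, left + h // 2
            mean = ((t[right] - t[mid]) * w[:, left]
                    + (t[mid] - t[left]) * w[:, right]) / (t[right] - t[left])
            var = (t[mid] - t[left]) * (t[right] - t[mid]) / (t[right] - t[left])
            w[:, mid] = mean + np.sqrt(var) * z[:, dim]
            dim += 1
        h //= 2
    return t, w

t, w = brownian_bridge_paths(n_paths=8, n_steps=16)
print(w.shape)   # (8, 17): eight paths on a 16-step grid
```

The paper's point is that for discontinuous payoffs this dimension-reduction ordering can backfire, motivating path constructions matched to the discontinuity structure.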
- A Change-of-Variable Approach to Simulating Conditional Expectations (Guangwu Liu, City University of Hong Kong)
Crude Monte Carlo estimators typically do not perform well when estimating conditional expectations, which may be expectations conditioned on a probability-zero event. In this talk we introduce a change-of-variable approach to simulating conditional expectations. A key element of the proposed approach is the construction of a one-to-one mapping such that a conditional expectation can be represented as an ordinary expectation via a change of variables. This new representation leads to an efficient estimator of the conditional expectation. Applications to Greek estimation for financial options will be discussed.
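The talk's one-to-one-mapping construction is not reproduced here, but the toy sketch below shows the general idea of rewriting a probability-zero conditioning as a ratio of ordinary expectations, using a standard density-weighted estimator for a Gaussian model; the model and weighting scheme are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma, x0, n = 0.5, 1.0, 200_000

# Toy model: Y ~ N(0,1) observed through X = Y + sigma * eps.
y = rng.standard_normal(n)

# Weight each draw by the conditional density f_{X|Y}(x0 | y):
#   E[Y | X = x0] = E[Y f_{X|Y}(x0|Y)] / E[f_{X|Y}(x0|Y)],
# an ordinary-expectation representation of a measure-zero conditioning.
w = norm.pdf(x0, loc=y, scale=sigma)
estimate = np.sum(w * y) / np.sum(w)

exact = x0 / (1.0 + sigma**2)   # closed form for this Gaussian toy model
print(f"estimate = {estimate:.4f}, exact = {exact:.4f}")
```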
Monday, July 23, 2012, Afternoon
- Distributed Simulation Challenges in Sensor Networks and the Cloud (Richard Fujimoto, Georgia Institute of Technology)
The emergence of sensor networks and cloud computing has created new opportunities and challenges for distributed simulation. Wireless sensor networks are now routinely deployed for a variety of applications, ranging from monitoring the environment and other natural phenomena to managing critical infrastructures such as transportation networks and the electric power grid. Simulation provides a means to project possible future system states based on the data collected from the sensor network. In some cases, such as when tight control loops are required, it is preferable to embed the simulations within the sensor network itself so that they can be closely coupled with incoming data streams. In other situations it is preferable to execute the simulations remotely on cloud computing platforms. In both cases, distributed simulation provides a means to exploit parallel and/or distributed computing platforms. Each of these environments presents its own unique constraints and requirements. In this presentation I will discuss the opportunities and technical challenges that arise in realizing distributed simulations in these new, emerging computing environments, along with examples of new distributed simulation approaches designed to address these challenges.
- Accelerating Simulation of Conditional Value-at-Risk via Parallelism on Graphic Processing Units (Hai Lan, Shanghai Jiaotong University)
Simulation of Conditional Value-at-Risk (CVaR) sometimes requires nested simulation, which is time-consuming. Chip-level parallelism in the microprocessor industry has brought multicore Central Processing Units (CPUs) and many-core Graphics Processing Units (GPUs) into the mass market. In this talk, the parallelization of a nested simulation with screening algorithm for measuring CVaR is discussed. A "divide-and-conquer" scheme is proposed to dispatch simulation tasks to different processing units while reducing data communication to a minimum. Experimental results demonstrate a significant speedup over the serial implementation when GPU memory access is optimized.
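A serial sketch of the nested CVaR estimator being parallelized may help fix ideas; the portfolio model below is a made-up toy, and the talk's screening step and GPU dispatch are not shown. Note that the outer scenarios are mutually independent, which is exactly what makes divide-and-conquer dispatch to GPU cores natural.

```python
import numpy as np

def nested_cvar(alpha=0.95, n_outer=2000, n_inner=500, seed=0):
    """Two-level (nested) CVaR estimation for a toy portfolio: the outer
    loop draws risk-factor scenarios, the inner loop revalues the
    portfolio by simulation within each scenario."""
    rng = np.random.default_rng(seed)
    losses = np.empty(n_outer)
    for i in range(n_outer):
        z = rng.standard_normal()                 # outer scenario
        # inner simulation: scenario loss under a made-up loss model
        inner = rng.standard_normal(n_inner) * 0.3 + 0.1 * z**2 - 0.05
        losses[i] = inner.mean()                  # scenario loss estimate
    var = np.quantile(losses, alpha)              # Value-at-Risk
    cvar = losses[losses >= var].mean()           # mean loss beyond VaR
    return var, cvar

var, cvar = nested_cvar()
print(f"VaR ~ {var:.4f}, CVaR ~ {cvar:.4f}")
```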
- Throughput Regulation in Multicore Processors via IPA (Yorai Wardi, Georgia Institute of Technology)
Work done with N. Almoosa, W. Song, and S. Yalamanchili. This talk describes a control law for throughput regulation of instruction sequences in multicore processors using dynamic voltage-frequency scaling. The regulation algorithm is based on an integral controller whose gain is adjusted online using the derivative of the frequency-throughput relationship. This relationship is modeled as a queueing network, and the derivative is computed via infinitesimal perturbation analysis (IPA). The queueing network is multiclass, and hence the IPA derivatives are biased. However, it has recently been suggested that as long as the relative bias is limited in certain ways, optimization and control algorithms based on IPA still converge to minimum values. We demonstrate this point on the aforementioned throughput regulation problem using an industry-grade multiprocessor simulator.
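A minimal sketch of the two ingredients, IPA differentiation through the Lindley recursion and an integral controller rescaled by the IPA derivative, is given below. The single-server queue and all constants are illustrative assumptions, far simpler than the multiclass network and simulator used in the talk.

```python
import numpy as np

def throughput_and_ipa(freq, work, arrivals):
    """Simulate a single-server queue where service time = work / freq
    (freq plays the role of the clock frequency) and propagate the IPA
    derivative of the final departure time with respect to freq."""
    d = dd = 0.0                        # departure time, d(departure)/d(freq)
    for a, x in zip(arrivals, work):
        s, ds = x / freq, -x / freq**2  # service time and its derivative
        if d > a:                       # server busy: derivative accumulates
            d, dd = d + s, dd + ds
        else:                           # server idle: derivative resets
            d, dd = a + s, ds
    n = len(work)
    return n / d, -n / d**2 * dd        # throughput, d(throughput)/d(freq)

rng = np.random.default_rng(2)
work = rng.exponential(1.0, 10_000)            # work units per job
arrivals = np.cumsum(rng.exponential(1.5, 10_000))

# Integral controller whose step is rescaled by the IPA derivative,
# in the spirit of (but much simpler than) the algorithm in the talk.
target, freq = 0.6, 1.0
for _ in range(15):
    thr, dthr = throughput_and_ipa(freq, work, arrivals)
    step = (target - thr) / max(dthr, 1e-3)
    freq += float(np.clip(step, -0.2, 0.2))    # guard against overshoot
print(f"freq = {freq:.3f}, throughput = {thr:.3f}")
```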
Tuesday, July 24, 2012, Morning
- Simulation Optimization and Optimal Sampling (Raghu Pasupathy, Virginia Tech)
The Simulation Optimization (SO) problem is an optimization problem where the objective and constraint function(s) are observable only through a Monte Carlo simulation. SO has recently proven useful in a variety of application settings where the functions involved in an optimization context cannot be specified analytically, but are instead expressed conveniently and implicitly using a simulation. Owing to its flexibility, SO has generated great interest, and the last decade has seen rapid strides in solving various SO flavors. In this talk, I will first give an overview of some of the key SO flavors that have emerged, providing (in brief) my sense of the "state of the art" for each. Then, I will consider the specific context of stochastically constrained SO problems on large finite sets, a topic about which much is still unknown. Within this setting, I will consider the question "can we characterize the nature of simulation budgeting plans for an 'optimally evolving' solution algorithm?" In other words, are there sampling laws that should hold if a solution algorithm is to exhibit the fastest possible convergence rate? I answer this question in the affirmative and show that when the feasible region is "large" and countable, the optimal sampling characterization depends on a single estimable quantity called the "score." The score, in essence, is a measure of the suboptimality and the infeasibility of a solution, measured in the correct units. The implications of this result for algorithm design seem immediate, at least in the countable context. For instance, the idea of a score leads to a simple sequential algorithm that is asymptotically optimal. Preliminary numerical experience on SO problems having many thousands of solutions is encouraging and will be discussed. If time permits, I will also discuss ongoing extensions to the continuous context.
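The sketch below illustrates only the sequential-budgeting structure on a small finite set; the "score" it uses (estimated optimality gap in standard-error units) is a stand-in invented for this illustration, not the quantity defined in the talk.

```python
import numpy as np

def sequential_allocation(true_means, budget=4000, n0=10, seed=3):
    """Toy sequential budgeting on a finite solution set (minimization).
    Each step samples the incumbent best plus the challenger whose
    'score' -- here, estimated optimality gap in standard-error units,
    a stand-in for the talk's quantity -- is smallest."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    samples = [list(rng.normal(m, 1.0, n0)) for m in true_means]
    spent = k * n0
    while spent < budget:
        xbar = np.array([np.mean(s) for s in samples])
        se = np.array([np.std(s) / np.sqrt(len(s)) for s in samples])
        best = int(np.argmin(xbar))
        score = (xbar - xbar[best]) / np.maximum(se, 1e-12)
        score[best] = np.inf                   # pick the closest challenger
        challenger = int(np.argmin(score))
        for i in (best, challenger):
            samples[i].append(rng.normal(true_means[i], 1.0))
        spent += 2
    return int(np.argmin([np.mean(s) for s in samples]))

true_means = np.array([0.0, 0.2, 0.5, 1.0, 1.5])
print("selected solution:", sequential_allocation(true_means))  # expect 0
```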
- Model-based Annealing Random Search with Stochastic Averaging (Jiaqiao Hu, SUNY at Stony Brook)
We propose a new algorithm for black-box optimization. The algorithm is based on the recent Model-based Annealing Random Search (MARS) method for global optimization, but aims to improve the computational efficiency of MARS by using a stochastic averaging scheme. We show that the algorithm converges to the global optimizer even when the sample size per iteration is fixed at a constant value. This is in contrast with MARS, which requires a computational effort that grows polynomially to guarantee convergence.
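A generic member of this algorithm family can be sketched as follows: sample candidates from a parameterized distribution, fit the model to the elite set, and blend the old and new parameters with a decreasing weight (the stochastic averaging step). The Gaussian sampling model and all constants are illustrative, not the paper's exact scheme.

```python
import numpy as np

def model_based_search(f, n_iters=200, pop=20, elite_frac=0.2, seed=4):
    """Model-based random search sketch with stochastic averaging:
    the sample size per iteration stays fixed, and the sampling
    distribution's parameters are updated by a decreasing-step blend."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 5.0
    n_elite = max(1, int(pop * elite_frac))
    for k in range(1, n_iters + 1):
        x = rng.normal(mu, sigma, pop)          # fixed sample size per iter
        elite = x[np.argsort(f(x))[:n_elite]]   # best candidates (minimize)
        alpha = 1.0 / (k + 1)                   # decreasing averaging weight
        mu = (1 - alpha) * mu + alpha * elite.mean()
        sigma = (1 - alpha) * sigma + alpha * (elite.std() + 1e-3)
    return mu

f = lambda x: (x - 2.0) ** 2 + np.sin(5 * x)    # multimodal test function
print(f"located a minimum near x = {model_based_search(f):.3f}")
```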
- Multi-Objective COMPASS for Discrete Optimization via Simulation (Loo Hay Lee, National University of Singapore)
Owing to its wide application in many industries, discrete optimization via simulation (DOvS) has recently attracted growing research interest. As industrial systems become more complex, more efficient search algorithms for DOvS are needed. In this work, we combine the ideas of single-objective COMPASS with the concept of Pareto optimality to propose MO-COMPASS for solving DOvS problems with two or more objectives, and we apply the technique to an aircraft spare part allocation problem. Numerical experiments illustrate the algorithm's ability to achieve high efficiency.
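The Pareto-optimality bookkeeping at the core of any multi-objective DOvS method can be sketched in a few lines; MO-COMPASS's most-promising-area machinery is not reproduced here, and the toy data are invented.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return np.all(a <= b) and np.any(a < b)

def update_pareto_set(pareto, candidate):
    """Insert a newly evaluated design into the current Pareto set:
    the bookkeeping step a multi-objective search performs each time
    a design's objectives are (re-)estimated."""
    if any(dominates(p, candidate) for p in pareto):
        return pareto                            # candidate is dominated
    pareto = [p for p in pareto if not dominates(candidate, p)]
    return pareto + [candidate]

# Toy usage: noisy two-objective evaluations of five designs.
rng = np.random.default_rng(5)
pareto = []
for design in range(5):
    obj = np.array([design, 4 - design], dtype=float) + rng.normal(0, 0.1, 2)
    pareto = update_pareto_set(pareto, obj)
print("non-dominated designs:", [np.round(p, 2) for p in pareto])
```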
Tuesday, July 24, 2012, Afternoon
- Dynamic, Risk-Based Aviation Security Screening Policy Performance Analysis Using Simulation (Sheldon H. Jacobson, University of Illinois at Urbana-Champaign)
Passenger screening at aviation security checkpoints is a critical component in protecting airports and aircraft from terrorist threats. Recent developments in screening device technology have increased the ability to detect these threats; however, the average amount of time it takes to screen a passenger remains a concern. This paper uses simulation models to capture the queueing process for a multi-level airport checkpoint security system, where multiple security classes are formed from subsets of specialized screening devices. This model formulation allows an optimal static assignment policy to be obtained, which minimizes the steady-state expected amount of time a passenger spends in the security system. An optimal dynamic assignment policy is also obtained through a transient analysis that balances the expected number of true alarms with the expected amount of time a passenger spends in the security system. Performance of a two-class system is compared to that of a selective security system containing primary and secondary levels of screening. The key contribution of this research is that, by embedding simulation results within an optimization framework, optimal assignment policies can be obtained that enhance security and increase passenger throughput by efficiently and effectively utilizing available screening resources.
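A toy version of the static-assignment question can be sketched as two parallel single-server queues with probabilistic routing; all rates and the routing rule below are illustrative assumptions, far simpler than the paper's multi-level model and its optimization embedding.

```python
import numpy as np

def checkpoint_sim(p_enhanced, lam=1.0, mu_std=1.4, mu_enh=0.5,
                   n=50_000, seed=6):
    """Toy two-class checkpoint: each passenger is routed to enhanced
    screening with probability p_enhanced, otherwise to standard
    screening; each class is a single-server queue. Returns the mean
    time a passenger spends in the system."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1.0 / lam, n))
    enhanced = rng.random(n) < p_enhanced
    depart = [0.0, 0.0]                  # last departure time per class
    total = 0.0
    for a, e in zip(arrivals, enhanced):
        k = int(e)
        mu = mu_enh if e else mu_std
        start = max(a, depart[k])        # wait if the class server is busy
        depart[k] = start + rng.exponential(1.0 / mu)
        total += depart[k] - a
    return total / n

# Sweep the static routing fraction (all rates are illustrative):
for p in (0.1, 0.2, 0.3):
    print(f"p_enhanced = {p:.1f}: mean time in system ~ {checkpoint_sim(p):.2f}")
```

Embedding such simulation estimates in an outer search over the routing fraction is the simplest instance of the simulation-within-optimization structure the abstract describes.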
- A Simulation Model for Retail Forecasting During the Threat of a Hurricane (Douglas Morrice, The University of Texas at Austin)
We consider a forecasting and replenishment problem faced by a retailer during the threat of a hurricane. This talk will focus on a state-space forecasting model that was derived from the retailer's decision-making process during hurricane operations, its historic demand data, and hurricane weather forecast data from the U.S. National Weather Service and National Hurricane Center. The resultant forecasting model is a simulation that will be used to help assess the efficacy of different hurricane inventory allocation policies. Joint work with John C. Butler and Paul Cronin of The University of Texas at Austin, and Fehmi Tanrisever of Eindhoven University of Technology (TUE).
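A generic local-level state-space model with an exogenous threat regressor, filtered by a scalar Kalman recursion, gives the flavor of such a forecasting model; the structure and every parameter below are illustrative assumptions, not the retailer's actual model.

```python
import numpy as np

def kalman_forecast(demand, threat, beta=8.0, q=0.5, r=2.0):
    """One-step demand forecasts from a local-level state-space model
    with a hurricane-threat regressor:
        level_t  = level_{t-1} + w_t,              w_t ~ N(0, q)
        demand_t = level_t + beta*threat_t + v_t,  v_t ~ N(0, r)
    filtered by a scalar Kalman recursion."""
    level, var = demand[0], 1.0
    forecasts = []
    for d, h in zip(demand, threat):
        forecasts.append(level + beta * h)   # one-step-ahead forecast
        var += q                             # predict step
        k = var / (var + r)                  # Kalman gain
        level += k * (d - level - beta * h)  # update step
        var *= (1 - k)
    return np.array(forecasts)

# Toy data: baseline demand with a spike while the threat is active.
rng = np.random.default_rng(7)
threat = np.array([0, 0, 0, .3, .7, 1., 1., .5, 0, 0])
demand = 20 + 8 * threat + rng.normal(0, 1.5, len(threat))
print(np.round(kalman_forecast(demand, threat), 1))
```

Simulating demand paths from such a fitted model is one way to stress-test inventory allocation policies before a storm arrives.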
- A Discrete-Event Simulation Study of the Re-emergence of S. Vulgaris in Horse Farms Adopting Selective Therapy (Jie Xu, George Mason University)
The large strongyle Strongylus vulgaris is the most pathogenic of the strongyle nematodes that are ubiquitous in grazing horses. Decades of frequent anthelmintic treatment have reduced the prevalence of this parasite dramatically. As concern over increased anthelmintic resistance in cyathostomin parasites has led to the implementation of selective therapy to reduce the frequency of anthelmintic treatment, it has been conjectured that S. vulgaris could re-emerge. Past statistical studies have shown that the prevalence of S. vulgaris at both the horse and farm levels is significantly higher in farms using selective therapy than in farms not using it. Lacking a valid biological model to explain this phenomenon, we propose a model-based discrete-event simulation approach to study the effect of selective therapy on the re-emergence of S. vulgaris.
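A minimal event-scheduling skeleton suggests how such a model might be organized; every rate, threshold, and update rule below is a placeholder chosen for illustration, not validated parasitology.

```python
import heapq
import numpy as np

def farm_simulation(horizon=365 * 5, n_horses=30, threshold=500, seed=8):
    """Minimal discrete-event skeleton for the kind of model described:
    each horse carries a parasite burden that drifts upward between
    exams; under selective therapy, only horses whose egg count exceeds
    a threshold are treated. All dynamics are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    burden = rng.exponential(200.0, n_horses)        # per-horse egg counts
    events = [(90.0, "exam")]                        # (time, event type)
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "exam":
            treated = burden > threshold             # selective therapy rule
            burden[treated] *= 0.05                  # anthelmintic efficacy
            burden *= rng.lognormal(0.15, 0.1, n_horses)  # reinfection drift
            heapq.heappush(events, (t + 90.0, "exam"))
    return np.mean(burden > threshold)               # high-burden prevalence

print(f"end-of-horizon high-burden prevalence ~ {farm_simulation():.2f}")
```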
Interest Group Discussions
- Four topics were chosen: Simulation Optimization, Modeling Error, Applications, and New Technology.
- Click here to see Group Assignments.
- Interest Group Discussion Summary:
- Simulation Optimization
- Modeling Error
- Applications
- New Technology
Last updated: August 8, 2013