My research spans from probability to optimization, from pure math to astronomy. I am particularly interested in designing optical systems for high-contrast imaging with the hope of using such systems to one day take direct images, and eventually spectra, of Earth-like exoplanets.


Machine Learning

Revisiting Compressed Sensing: Exploiting the Efficiency of Simplex and Sparsification Methods. Math. Prog. Comp., 2016. We propose two approaches to solve large-scale compressed sensing problems. The first approach uses the parametric simplex method to recover very sparse signals by taking a small number of simplex pivots, while the second approach reformulates the problem using Kronecker products to achieve faster computation via a sparser problem formulation. In particular, we focus on the computational aspects of these methods in compressed sensing.
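The Kronecker sparsification can be illustrated with a toy computation (a sketch with made-up dimensions, not the paper's formulation): vectorizing a two-dimensional operator U = A X B^T as a single Kronecker system gives a dense constraint matrix, while splitting it into two staged systems introduces intermediate variables but far fewer nonzeros.

```python
import numpy as np

# Toy illustration (not the paper's code): a 2-D linear operator
# U = A @ X @ B.T can be written as one dense Kronecker system or split
# into two staged, much sparser systems.
rng = np.random.default_rng(0)
p, q = 20, 30
A = rng.standard_normal((p, p))
B = rng.standard_normal((q, q))
X = rng.standard_normal((p, q))

# One-shot form: vec(A X B^T) = (B kron A) vec(X), with column-major vec.
U_vec = np.kron(B, A) @ X.flatten(order="F")
U = (A @ X @ B.T).flatten(order="F")
assert np.allclose(U_vec, U)

# Nonzero counts: the dense Kronecker system versus two staged systems
# (Y = A X, then U = Y B^T), whose blocks are I_q kron A and B kron I_p.
dense_nnz = (p * q) ** 2
staged_nnz = q * p**2 + p * q**2
assert staged_nnz < dense_nnz
```

The staged form trades extra variables for sparsity, which is the effect exploited for faster computation.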

Parametric Simplex Method for Sparse Learning. NIPS, 2017. Various problems, especially in the field of machine learning, involve an unknown regularization parameter. Examples include Dantzig selector estimation, LAD-Lasso, and constrained l1 minimization estimation. For such problems, we show that the parametric simplex method can compute solutions for all values of the regularization parameter (a continuum of optimization problems) in the same time that other variants of the simplex method need to solve just one instance.

The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R. J. Machine Learning Research, 15(1), 489-493, 2014. The parametric simplex method is applied to solve sparse precision matrix estimation problems very efficiently.

Grade Inflation

A Regression Approach to Fairer Grading. SIAM Review, 56, 337-352, May 2014. We describe a statistical procedure to account for differences in grading practices from one course to another. The goal is to define a course "inflatedness" and a student "aptitude" that best capture one's intuitive notions of these concepts.
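A minimal sketch of the idea, assuming a least-squares variant of the model with made-up data (the paper's actual formulation, weighting, and data differ): each observed grade is modeled as student aptitude plus course inflatedness, with one normalization equation to pin down the split between the two.

```python
import numpy as np

# Hypothetical records: (student, course, grade) triples -- illustrative only.
records = [
    (0, 0, 3.9), (0, 1, 3.2), (1, 0, 3.5),
    (1, 1, 2.8), (2, 1, 3.6), (2, 0, 4.0),
]
n_students, n_courses = 3, 2

# Model sketch: grade ~ aptitude[student] + inflatedness[course].
rows, grades = [], []
for s, c, g in records:
    row = np.zeros(n_students + n_courses)
    row[s] = 1.0                 # aptitude coefficient
    row[n_students + c] = 1.0    # inflatedness coefficient
    rows.append(row)
    grades.append(g)
A = np.array(rows)
b = np.array(grades)

# The split is only determined up to an additive constant, so append one
# equation pinning the mean inflatedness to zero.
norm = np.zeros(n_students + n_courses)
norm[n_students:] = 1.0
A = np.vstack([A, norm])
b = np.append(b, 0.0)

theta, *_ = np.linalg.lstsq(A, b, rcond=None)
aptitude, inflatedness = theta[:n_students], theta[n_students:]
```

With these toy numbers, course 0 (higher average grades) comes out more inflated than course 1, matching the intuition the paper formalizes.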

Climate Change

Local Warming. SIAM Review, 54, 1-11, Sep. 2012. Using 55 years of daily average temperatures from a local weather station, I fit a least-absolute-deviations (LAD) regression model that accounts for three effects: seasonal variations, the 11-year solar cycle, and a linear trend. The model was formulated as a linear programming problem and solved using widely available optimization software. The solution indicates that temperatures went up by about 2°F over the 55 years covered by the data.
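The reduction of LAD regression to a linear program can be sketched as follows, on synthetic data and with only the trend and seasonal terms (the real model also includes solar-cycle terms; the data and variable names here are mine):

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic stand-in for the daily temperature record.
days = np.arange(730)
rng = np.random.default_rng(1)
temps = (50 + 0.002 * days + 12 * np.cos(2 * np.pi * days / 365.25)
         + rng.normal(0, 3, days.size))

# Basis: intercept, linear trend, one seasonal harmonic.
X = np.column_stack([
    np.ones(days.size),
    days.astype(float),
    np.cos(2 * np.pi * days / 365.25),
    np.sin(2 * np.pi * days / 365.25),
])

# LAD as an LP: minimize sum(t) subject to -t <= temps - X @ beta <= t.
n, k = X.shape
c = np.concatenate([np.zeros(k), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([temps, -temps])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * k + [(0, None)] * n)
beta = res.x[:k]   # beta[1]: estimated warming trend per day
```

The auxiliary variables t bound the absolute residuals, so minimizing their sum is exactly the LAD criterion; any LP solver then recovers the trend.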

High-Contrast Imaging with Shaped Pupil Masks

Shaped Pupil Lyot Coronagraphs: High-Contrast Solutions for Restricted Focal Planes. To appear in J. of Astronomical Telescopes, Instruments, and Systems, 2016. Coronagraphs of the Apodized Pupil and Shaped Pupil varieties use the Fraunhofer diffraction properties of amplitude masks to create regions of high contrast in the vicinity of a target star. Here we investigate a hybrid coronagraph architecture in which a binary, hard-edged Shaped Pupil mask replaces the gray, smooth apodizer of the Apodized Pupil Lyot Coronagraph (APLC). We describe Shaped Pupil Lyot Coronagraph (SPLC) designs for the baseline architecture of the WFIRST-AFTA Coronagraph. These SPLCs help to enable two scientific objectives of the WFIRST-AFTA mission: (i) broadband spectroscopy to characterize exoplanet atmospheres in reflected starlight and (ii) debris disk imaging.

Optimal Pupil Apodizations for Arbitrary Apertures. Optics Express, 19(27), 26796-26809, Oct. 2011. In the context of exoplanet direct detection and characterization, where high-contrast imaging is mandatory, we present fully optimized two-dimensional pupil apodizations for which no specific geometric constraints are put on the pupil plane apodization, apart from the shape of the aperture itself. Masks for circular and segmented apertures are displayed, with and without a central obstruction and spiders. We can now optimize apodizers for any aperture shape, and examples of optimal masks are shown for the Subaru telescope, the Space Infrared Telescope for Cosmology and Astrophysics (SPICA) and the James Webb Space Telescope (JWST).

Fast Computation of Lyot-Style Coronagraph Propagation. Optics Express. 15(24):15935-15951, 2007. We present a new method for numerical propagations through Lyot-style coronagraphs using finite-size occulting masks. Standard methods for coronagraphic simulations involve Fast Fourier Transforms (FFT) of very large arrays, and computing power is an issue for the design and tolerancing of coronagraphs on Extremely Large Telescopes (ELT) in order to handle both the speed and memory requirements. Our method combines a semi-analytical approach with non-FFT based Fourier transform algorithms. It enables both fast and memory-efficient computations without introducing any additional approximations. Typical speed improvements based on computation costs are factors of twenty to fifty for propagations from pupil to Lyot plane, with thirty to sixty times less memory needed. Our method makes it possible to perform numerical coronagraphic studies even in the case of ELTs using a contemporary commercial laptop computer, or any standard commercial workstation computer.

Extreme Optics and the Search for Earth-Like Planets. Mathematical Programming Series B, 2007. In this paper I describe a new and exciting application of optimization technology. The problem is to design a space telescope capable of imaging Earth-like planets around nearby stars. Because of limitations inherent in the wave nature of light, the design problem is one of diffraction control so as to provide the extremely high contrast needed to image a faint planet positioned very close to its much brighter star. The mathematics behind the diffraction control problem are described.

Shaped Pupil Coronagraphy. C.R. Physique, 8:312-322, 2007. We summarize the design procedure for shaped pupils and review the various families of designs we have found. We describe the manufacturing processes we have used to make free standing shaped pupil masks and review our most recent laboratory results with and without wavefront control. We also discuss the factors limiting high contrast in the laboratory and our plans for mitigating them.

The shaped pupil coronagraph for planet finding coronagraphy: optimization, sensitivity, and laboratory testing. Microwave and Terahertz Photonics. Proceedings of the SPIE, 5487:1312-1321, 2004. This paper summarizes our work designing optimal shaped pupils for high-contrast imaging. We show how any effective apodization can be created using shaped pupils and present a variety of both one-dimensional and azimuthally symmetric pupil shapes. Each pupil has its own performance advantage and we discuss the tradeoffs among various designs. Optimizations are typically performed by maximizing a measure of system throughput under constraints on contrast and inner working angle. We also discuss the question of sensitivity to aberrations. Controlling aberrations will be critical for any implementation of a planet-finding coronagraph. Finally, we present our first laboratory results testing a shaped pupil coronagraph.

Checkerboard-Mask Coronagraphs for High-Contrast Imaging. Astrophysical Journal, 615(1):555, 2004. We present yet another new family of masks for high-contrast imaging as required for the to-be-built terrestrial planet finder space telescope. We call these masks checkerboard masks. They consist of two barcode masks, one rotated 90 degrees with respect to the other. Each barcode mask provides contrast to the 10^-5 level. The checkerboard mask then achieves a 10^-10 level of contrast everywhere except along the two axes of symmetry, where the contrast remains at the 10^-5 level. We show that by combining a Lyot-plane checkerboard mask with an image-plane occulter we can achieve even tighter inner working angles, although as with occulting designs in general, pointing error and stellar size become nontrivial issues. Checkerboard masks can be thought of as the binary-mask analogue of Nisenson's apodized square aperture concept.
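The reason two 10^-5 barcode masks combine into a 10^-10 checkerboard is the separability of the two-dimensional Fourier transform: for a mask that is an outer product of two one-dimensional patterns, the image-plane field factors, so the intensities multiply. A few lines of numpy verify the factorization on a toy 0/1 mask:

```python
import numpy as np

# A separable mask (two crossed "barcodes") has a 2-D Fourier transform
# equal to the outer product of the 1-D transforms, so dark-zone
# intensities multiply: 1e-5 x 1e-5 = 1e-10 off the symmetry axes.
rng = np.random.default_rng(0)
a = (rng.random(64) > 0.5).astype(float)   # toy 0/1 barcode pattern
b = (rng.random(64) > 0.5).astype(float)
mask = np.outer(a, b)

F2 = np.fft.fft2(mask)
F1 = np.outer(np.fft.fft(a), np.fft.fft(b))
assert np.allclose(F2, F1)
```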

New pupil masks for high-contrast imaging. In Proceedings of SPIE Conference on Astronomical Telescopes and Instrumentation, 5170(07), 2003. Motivated by the desire to image exosolar planets, recent work by us and others has shown that high-contrast imaging can be achieved using specially shaped pupil masks. To date, our masks have been symmetric with respect to a cartesian coordinate system but were not rotationally invariant, thus requiring that one take multiple images at different angles of rotation about the central point in order to obtain high-contrast in all directions. In this talk, we present two new classes of masks that have rotational symmetry and provide high-contrast in all directions with just one image. These masks provide the required 10^-10 level of contrast to within 4λ/D of the central point. They are also well-suited for use on ground-based telescopes, and perhaps NGST as well, since they can accommodate central obstructions and associated support spiders.

An optical/UV space coronagraph concept for the terrestrial planet finder. Advances in Space Research, 34(3):625-630, 2004. NASA's current strategic plan calls for the launching of a space observatory, The Terrestrial Planet Finder (TPF), by the middle of the next decade; it will search for terrestrial planets in the habitable zone of roughly 150 nearby stars and characterize them for the potential to harbor life. This paper describes a visible light concept for TPF developed by the Ball Aerospace led TPF study team. This concept consists of a 4 meter by 10 meter coronagraphic telescope in a deep space orbit.

Circularly Symmetric Apodization via Starshaped Masks. Astrophysical Journal, 599:686-694, 2003. In a recent paper, we introduced a class of shaped pupil masks, called spider-web masks, that produce point spread functions having annular dark zones. With such masks, a single image can be used to probe a star for extrasolar planets. In this paper, we introduce a new class of shaped pupil masks that also provide annular dark zones. We call these masks starshaped masks. Given any circularly symmetric apodization function, we show how to construct a corresponding starshaped mask that has the same point-spread function as obtained by the apodization.

Spiderweb Masks for High Contrast Imaging. Astrophysical Journal, 590:593-603, 2003. Motivated by the desire to image exosolar planets, recent work by us and others has shown that high-contrast imaging can be achieved using specially shaped pupil masks. To date, the masks we have designed have been symmetric with respect to a cartesian coordinate system but were not rotationally invariant, thus requiring that one take multiple images at different angles of rotation about the central point in order to obtain high-contrast in all directions. In this paper, we present a new class of masks that have rotational symmetry and provide high-contrast in all directions with just one image. These masks provide the required 10^-10 level of contrast to within 4 λ/D, and in some cases 3 λ/D, of the central point, which is deemed necessary for exosolar planet finding/imaging. They are also well-suited for use on ground-based telescopes, and perhaps NGST too, since they can accommodate central obstructions and associated support spiders.

Extrasolar Planet Finding via Optimal Apodized and Shaped Pupil Coronagraphs. Astrophysical Journal, 582:1147-1161, 2003. In this paper we examine several different apodization approaches to achieving high-contrast imaging of extrasolar planets and compare different designs on a selection of performance metrics. These approaches are characterized by their use of the pupil's transmission function to focus the starlight rather than by masking the star in the image plane as in a classical coronagraph. There are two broad classes of pupil coronagraphs examined in this paper: apodized pupils with spatially varying transmission functions and shaped pupils, whose transmission values are either 0 or 1. The latter are much easier to manufacture to the needed tolerances. In addition to comparing existing approaches, numerical optimization is used to design new pupil shapes. These new designs can achieve nearly as high a throughput as the best apodized pupils and perform significantly better than the apodized square aperture design. The new shaped pupils enable searches of 50%-100% of the detectable region, suppress the star's light to below 10^-10 of its peak value and have inner working distances as small as 2.8 λ/D. Pupils are shown for terrestrial planet discovery using square, rectangular, circular, and elliptical apertures. A mask is also presented targeted at Jovian planet discovery, where contrast is given up to yield greater throughput.

High-Contrast Imaging via Occulters

Diffraction-based Sensitivity Analysis for an External Occulter Laboratory Demonstration. Applied Optics. OSA, 2016. An external flower-shaped occulter flying in formation with a space telescope can theoretically provide sufficient starlight suppression to enable direct imaging of an Earth-like planet. Occulter shapes are scaled to enable experimental validation of their performance at laboratory dimensions. Previous experimental results have shown promising performance but have not realized the full theoretical potential of occulter designs. Here, we develop a two-dimensional diffraction model for optical propagations for occulters incorporating experimental errors. We perform a sensitivity analysis, and comparison with experimental results from a scaled-occulter testbed validates the optical model to the 10^-10 contrast level. The manufacturing accuracy along the edge of the occulter shape is identified as the limiting factor to achieving the theoretical potential of the occulter design. This hypothesis is experimentally validated using a second occulter mask manufactured with increased edge feature accuracy, resulting in a measured contrast level approaching 10^-12, a better-than-one-order-of-magnitude improvement in performance.

Eliminating Poisson's Spot with Linear Programming. Operations Research and Cyber-Infrastructure. Springer, 2009. A leading design concept for NASA's upcoming planet-finding space telescope involves placing an occulter 72,000 km in front of a 4-m telescope. The purpose of the occulter is to block the bright starlight thereby enabling the telescope to take pictures of planets orbiting the blocked star. Unfortunately, diffraction effects prevent a simple circular occulter from providing a sufficiently dark shadow—a specially shaped occulter is required. In this paper, I explain how to reduce this shape-optimization problem to a large-scale linear programming problem that can be solved with modern LP tools.

Optimal Occulter Design for Finding Extrasolar Planets. Astrophysical Journal, 665:794-798, 2007. One method for finding terrestrial planets around nearby stars is to use two spacecraft---a telescope and an occulter whose shape is specially designed to prevent all but a tiny fraction of the starlight from diffracting into the telescope. As the cost and observing cadence for such a mission will be driven largely by the separation between the two spacecraft, it is critically important to design an occulter that can meet the observing goals while flying as close to the telescope as possible. In this paper, we explore this tradeoff between separation and occulter diameter. More specifically, we present a method for designing the shape of the outer edge of an occulter that is as small as possible and gives a shadow that is deep enough and large enough for a 4-m telescope to survey the habitable zones of many stars for Earth-like planets. In particular, we show that detecting, in broadband visible light, a planet 0.06 arcseconds from a star shining 10^10 times brighter than the planet requires a specially shaped occulter 50 m in diameter positioned about 72,000 km in front of a 4-m telescope.

High-Contrast Imaging via Pupil Mapping

Design of PIAA Coronagraphs Over Square Apertures. Astrophysical Journal, 195(2): 25, 2011. We present the results of a theoretical study pertaining to the feasibility of PIAA units using deformable mirrors. We begin by reviewing the general derivation of the design equations driving PIAA. We then show how to solve these equations for square apertures and show the performance of pure PIAA systems in the ray-optics regime. We tie these design equations into the study of edge diffraction effects and provide a general expression for the field after a full propagation through a PIAA coronagraph. Next, we illustrate how a combination of pre- and post-apodizers yields a contrast of 10^-10 even in the presence of diffractive effects. Finally, we present novel PIAA configurations over square apertures that circumvent the constraints on the manufacturing of PIAA optics by inducing the apodization with two square deformable mirrors (DMs).

Diffraction-Based Sensitivity Analysis of Apodized Pupil Mapping Systems. Astrophysical Journal, 652:833, 2006. Pupil mapping is a promising and unconventional new method for high contrast imaging being considered for terrestrial exoplanet searches. It employs two (or more) specially designed aspheric mirrors to create a high-contrast amplitude profile across the telescope pupil that does not appreciably attenuate amplitude. As such, it reaps significant benefits in light collecting efficiency and inner working angle, both critical parameters for terrestrial planet detection. While much has been published on various aspects of pupil mapping systems, the problem of sensitivity to wavefront aberrations remains an open question. In this paper, we present an efficient method for computing the sensitivity of a pupil mapped system to Zernike aberrations. We then use this method to study the sensitivity of a particular pupil mapping system and compare it to the concentric-ring shaped pupil coronagraph. In particular, we quantify how contrast and inner working angle degrade with increasing Zernike order and rms amplitude. These results have obvious ramifications for the stability requirements and overall design of a planet-finding observatory.

Diffraction Analysis of 2-D Pupil Mapping for High-Contrast Imaging. Astrophysical Journal, 636:528, 2006. Pupil-mapping is a technique whereby a uniformly-illuminated input pupil, such as from starlight, can be mapped into a non-uniformly illuminated exit pupil, such that the image formed from this pupil will have suppressed sidelobes, many orders of magnitude weaker than classical Airy ring intensities. Pupil mapping is therefore a candidate technique for coronagraphic imaging of extrasolar planets around nearby stars. Unlike most other high-contrast imaging techniques, pupil mapping is lossless and preserves the full angular resolution of the collecting telescope. So, it could possibly give the highest signal-to-noise ratio of any proposed single-telescope system for detecting extrasolar planets. Prior analyses based on pupil-to-pupil ray-tracing indicate that a planet fainter than 10^-10 times its parent star, and as close as about 2λ/D, should be detectable. In this paper, we describe the results of careful diffraction analysis of pupil mapping systems. These results reveal a serious unresolved issue. Namely, high-contrast pupil mappings distribute light from very near the edge of the first pupil to a broad area of the second pupil and this dramatically amplifies diffraction-based edge effects resulting in a limiting attainable contrast of about 10^-5. We hope that by identifying this problem others will provide a solution.

Pupil Mapping in 2-D for High-Contrast Imaging. Astrophysical Journal, 626:1079-1090, 2005. Pupil-mapping is a technique whereby a uniformly-illuminated input pupil, such as from starlight, can be mapped into a non-uniformly illuminated exit pupil, such that the image formed from this pupil will have suppressed sidelobes, many orders of magnitude weaker than classical Airy ring intensities. Pupil mapping is therefore a candidate technique for coronagraphic imaging of extrasolar planets around nearby stars. The pupil mapping technique is lossless, and preserves the full angular resolution of the collecting telescope, so it could possibly give the highest signal-to-noise ratio of any proposed single-telescope system for detecting extrasolar planets. We derive the 2-dimensional equations of pupil mapping for both 2-mirror and 2-lens systems. We give examples for both cases. We derive analytical estimates of aberration in a 2-mirror system, and show that the aberrations are essentially corrected with an added reversed set of mirrors.

Two-Mirror Apodization for High-Contrast Imaging. Astrophysical Journal, 599:695-701, 2003. Direct detection of extrasolar planets will require imaging systems capable of unprecedented contrast. Apodized pupils provide an attractive way to achieve such contrast but they are difficult, perhaps impossible, to manufacture to the required tolerance and they absorb about 90% of the light in order to create the apodization, which of course lengthens the exposure times needed for planet detection. A recently proposed alternative is to use two mirrors to accomplish the apodization. With such a system, no light is lost. In this paper, we provide a careful mathematical analysis, using one-dimensional mirrors, of the on-axis and off-axis performance of such a two-mirror apodization system. There appear to be advantages and disadvantages to this approach. In addition to not losing any light, we show that the nonuniformity of the apodization implies an extra magnification of off-axis sources and thereby makes it possible to build a real system with about half the aperture that one would otherwise require or, equivalently, resolve planets at about half the angular separation as one can achieve with standard apodization. More specifically, ignoring pointing error and stellar disk size, a planet at 1.7λ/D ought to be at the edge of detectability. However, we show that the non-zero size of a stellar disk pushes the threshold for high contrast so that a planet must be at least 2.5λ/D from its star to be detectable. The off-axis analysis of two-dimensional mirrors is left for future study.

Celestial Mechanics and the n-Body Problem

The 2-Body Problem. Tech. Report. A simple analysis of the famous 2-body problem.

Comment on "Making of the Moon". In Physics Today, April, 2015.

Linear Stability of Ring Systems Around Oblate Central Masses. Advances in Space Research, 42:1370-1377, 2008. We consider the stability of a ring of bodies of equal mass uniformly distributed around a large oblate central mass. The purpose of this and previous papers is to shed light on the stability of Saturn's rings. Previous papers have been limited by the assumptions that (1) all ring bodies are at the same distance from the central body, (2) the central body acts like a point mass (i.e., is a perfect sphere), and (3) the ring bodies all have the same mass and are evenly spaced around the ring. The third limitation is probably the least important; as long as there are a large number of masses and the mass distribution is approximately uniform then the system should behave as a system of equally-spaced, equal-mass bodies. The main objective of this paper is to remove the second limitation. But, the paper also aims to address limitation (1).

A Simple Approximate Analysis of the Linear Stability of Ring Systems. In New Trends in Astrodynamics and Applications III, American Institute of Physics, 886:169-174, 2007. We give a simple linear stability analysis of a system of n equal mass bodies in circular orbit about a single more massive body. A full analysis requires the possibility of perturbing all bodies. If the massive body is sufficiently dominant, then one can ignore perturbations to it. In this paper, we give a linear stability analysis based on perturbations to just one of the small ring bodies. Such an analysis could be justified by assuming that this one body has mass zero. But, we do not make this assumption. Therefore, it is surprising that the result we obtain agrees to within a factor of 2 with the result one obtains by considering perturbations to all ring bodies. We also give a simple back-of-the-envelope computation that shows that our stability mass threshold is consistent with the observed optical density of Saturn's rings.

Linear Stability of Ring Systems. Astronomical Journal, 133:656-664, 2007. We give a self-contained modern linear stability analysis of a system of n equal mass bodies in circular orbit about a single more massive body. Starting with the mathematical description of the dynamics of the system, we form the linear approximation, compute all of the eigenvalues of the linear stability matrix, and finally derive inequalities that guarantee that none of these eigenvalues have positive real part. In the end, we rederive the result that J.C. Maxwell found for large n in his seminal paper on the nature and stability of Saturn's rings, which was published 150 years ago. In addition, we identify the exact matrix that defines the linearized system even when n is not large. This matrix is then investigated numerically (by computer) to find stability inequalities. Furthermore, using properties of circulant matrices, the eigenvalues of the large 4n x 4n matrix can be computed by solving n quartic equations, which further facilitates the investigation of stability. Finally, we have implemented an n-body simulator and we verify that the threshold mass ratios that we derived mathematically or numerically do indeed identify the threshold between stability and instability. Throughout the paper we consider only the planar n-body problem so that the analysis can be carried out purely in complex notation, which makes the equations and derivations more compact, more elegant and therefore, we hope, more transparent. The result is a fresh analysis that shows that these systems are always unstable for 2 ≤ n ≤ 6 and for n > 6 they are stable provided that the central mass is massive enough. We give an explicit formula for this mass-ratio threshold.
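The circulant-matrix property used here is that the eigenvalues of a circulant matrix are the discrete Fourier transform of its first column, which is what lets the large stability matrix decouple into small per-mode problems. It is easy to check numerically (toy numbers, not the actual 4n x 4n stability matrix):

```python
import numpy as np
from scipy.linalg import circulant

# Eigenvalues of a circulant matrix equal the DFT of its first column.
# A symmetric choice of first column keeps the spectrum real.
c = np.array([4.0, -1.0, 0.5, 0.5, -1.0])
C = circulant(c)

eigs = np.sort(np.linalg.eigvalsh(C))
dft = np.sort(np.fft.fft(c).real)
assert np.allclose(eigs, dft)
```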

Lagrange Points for Eccentric Planar 3-Body Systems. Technical report, Department of Operations Research and Financial Engineering, Princeton University, 2006.

Lagrange Points L1, L2 and L3. Technical report, Department of Operations Research and Financial Engineering, Princeton University, 2018.

Linear Stability of Lagrange Points: Complex Variable Notation. Technical report, Department of Operations Research and Financial Engineering, Princeton University, 2006.

Horsing Around on Saturn. In New Trends in Astrodynamics and Applications, volume 1065, pages 336-345. NY Academy of Sciences, 2005. Based on simple statistical mechanical models, the prevailing view of Saturn's rings is that they are unstable and must therefore have been formed rather recently. In this paper, we argue that Saturn's rings and inner moons are in much more stable orbits than previously thought and therefore that they likely formed together as part of the initial formation of the solar system. To make this argument, we give a detailed description of so-called horseshoe orbits and show that this horseshoeing phenomenon greatly stabilizes the rings of Saturn. This paper is part of a collaborative effort with E. Belbruno and J.R. Gott III. For a description of their part of the work, see their papers in these proceedings.

New Orbits for the n-Body Problem. In Proceedings of the Conference on New Trends in Astrodynamics, 2003. In this paper, we consider minimizing the action functional as a method for numerically discovering periodic solutions to the n-body problem. With this method, we can find a large number of choreographies and other more general solutions. We show that most of the solutions found, including all but one of the choreographies, are unstable. It appears to be much easier to find unstable solutions to the n-body problem than stable ones. Simpler solutions are more likely to be stable than exotic ones.


The Falling Slinky. Am. Math. Monthly, 2017. The counterintuitive dynamics of a falling slinky are carefully analyzed.


Discrete-Time Pricing and Optimal Exercise of American Perpetual Warrants. Applied Mathematics and Optimization, 67:97-122, 2013. This paper is similar to the one below except here the simple random walk is replaced with a geometric random walk. Again, using duality, we are able to derive an explicit solution.

Pricing American Perpetual Warrants by Linear Programming. SIAM Review, 51(4):767-782, 2009. A warrant is an option that entitles the holder to purchase shares of a common stock at some prespecified price during a specified interval. The problem of pricing a perpetual warrant (with no specified interval) of the American type (that can be exercised any time) is one of the earliest contingent claim pricing problems in mathematical economics. The problem was first solved by Samuelson and McKean in 1965 under the assumption of a Geometric Brownian Motion of the stock price process. It is a well-documented exercise in stochastic processes and continuous-time finance curricula. The present paper offers a solution to this time-honored problem from an optimization point of view using linear programming duality under a simple random walk assumption for the stock price process, thus enabling a classroom exposition of the problem in graduate courses on linear programming without assuming a background on stochastic processes.
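A sketch of the primal linear program on a truncated price lattice (the paper works on the infinite lattice and obtains an explicit solution via duality; the numbers below are hypothetical): the warrant value is the smallest function that dominates the payoff and is excessive for the discounted random walk.

```python
import numpy as np
from scipy.optimize import linprog

# Truncated-lattice sketch: v(s) is the smallest function satisfying
# v(s) >= payoff(s) and v(s) >= beta*(p*v(s+1) + q*v(s-1)) at interior s.
K, M = 10, 60                      # strike and truncation level (hypothetical)
p, q, beta = 0.5, 0.5, 1 / 1.01    # up/down probabilities, discount factor
S = np.arange(M + 1)
payoff = np.maximum(S - K, 0.0)

c = np.ones(M + 1)                 # minimize sum of v(s)
A_ub, b_ub = [], []
for s in range(1, M):              # excessivity constraints
    row = np.zeros(M + 1)
    row[s] = -1.0
    row[s + 1] = beta * p
    row[s - 1] = beta * q
    A_ub.append(row)
    b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(lo, None) for lo in payoff])
v = res.x
# Optimal exercise region: in-the-money states where value equals payoff.
exercise = S[np.isclose(v, payoff) & (payoff > 0)]
```

The dual of this LP is what yields the explicit threshold solution in the paper; here the exercise region simply falls out of where the lower bound is active.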

Frontiers of Stochastically Nondominated Portfolios. Econometrica, 71(4):1287-1297, 2003. We consider the problem of constructing a portfolio of finitely many assets whose returns are described by a discrete joint distribution. We propose mean-risk models which are solvable by linear programming and generate portfolios whose returns are nondominated in the sense of second-order stochastic dominance. Next, we develop a specialized parametric method for recovering the entire mean-risk efficient frontiers of these models and we illustrate its operation on a large data set involving thousands of assets and realizations.

A Martingale System Theorem for Stock Investments. Operations Research Letters, 9:155-159, 1990. A proof is given that the dollar-cost-averaging investment strategy yields no advantage over any other non-clairvoyant strategy by showing that the difference between any two strategies is a mean-zero martingale.


A Parametric Simplex Algorithm for Linear Vector Optimization Problems. Tech Report, 2015. In this paper, a parametric simplex algorithm for solving linear vector optimization problems (LVOPs) is presented.

Fast Fourier Optimization

Fast Fourier Optimization. Math Programming Computation, 4(1):53-69, Jan. 2012. We explain the main idea behind the fast Fourier transform and show how to adapt it in such a manner as to make it encodable as constraints in an optimization problem. We demonstrate with a real-world problem from the field of high-contrast imaging. On this problem, the dramatic improvements translate into the ability to solve problems with a much finer grid of discretized points. We show that the "fast Fourier" version of the optimization constraints produces a larger but sparser constraint matrix, and therefore one can think of the fast Fourier transform as a method of sparsifying the constraints in an optimization problem, which is usually a good thing.
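The idea can be sketched numerically: a length-N DFT evaluated as two stages of smaller DFTs plus twiddle factors gives the same answer, and counting nonzeros in the corresponding constraint blocks shows the sparsification (toy sizes, not the paper's imaging problem):

```python
import numpy as np

# Two-stage (Cooley-Tukey) evaluation of a length-N DFT, N = N1*N2,
# indexing n = N2*n1 + n2 and k = k1 + N1*k2.
N1, N2 = 8, 16
N = N1 * N2
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

A = x.reshape(N1, N2)                       # rows indexed by n1
B = np.fft.fft(A, axis=0)                   # small DFTs over n1
k1 = np.arange(N1)[:, None]
n2 = np.arange(N2)[None, :]
B = B * np.exp(-2j * np.pi * k1 * n2 / N)   # twiddle factors
C = np.fft.fft(B, axis=1)                   # small DFTs over n2
X = C.flatten(order="F")                    # k = k1 + N1*k2
assert np.allclose(X, np.fft.fft(x))

# Constraint-matrix nonzeros: one dense N x N DFT block has N**2 entries,
# while the staged form uses N2 blocks of size N1 x N1, N twiddles, and
# N1 blocks of size N2 x N2 -- N*(N1 + N2 + 1) nonzeros in total.
dense = N ** 2
factored = N * (N1 + N2 + 1)
assert factored < dense
```

Recursing on the factors drives the nonzero count toward the familiar N log N, which is precisely the larger-but-sparser constraint matrix described above.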

Semidefinite Optimization

Solving Problems with Semidefinite and Related Constraints Using Interior-Point Methods for Nonlinear Programming. Mathematical Programming, 95:279-302, 2003. Consider the diagonal entries d_j, j = 1,2,...,n, of the matrix D in an LDL^T factorization of an n x n matrix X. As a function of X, each d_j is well-defined on the closed domain of positive semidefinite matrices. We show that these functions are twice continuously differentiable and concave throughout the interior of this domain. Using these facts, we show how to formulate semidefinite programming problems as standard convex optimization problems that can be solved using an interior-point method for nonlinear programming.
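The concavity claim is easy to probe numerically. For positive definite X, the d_j are the squared diagonal entries of the Cholesky factor, so a midpoint-concavity check takes a few lines (random test matrices, not a proof):

```python
import numpy as np

def ldl_diag(X):
    # For symmetric positive definite X = L D L^T with unit-lower-triangular L,
    # the diagonal of D is the square of the Cholesky factor's diagonal.
    return np.diag(np.linalg.cholesky(X)) ** 2

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
X = A @ A.T + 5 * np.eye(5)   # random positive definite matrices
Y = B @ B.T + 5 * np.eye(5)

# Midpoint concavity: d_j((X+Y)/2) >= (d_j(X) + d_j(Y))/2 for every j.
mid = ldl_diag((X + Y) / 2)
avg = (ldl_diag(X) + ldl_diag(Y)) / 2
assert np.all(mid >= avg - 1e-12)
```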

The Gauss-Newton Direction in Semidefinite Programming. Optimization Methods and Software, 15(1):1-27, 2001.

Primal-Dual Affine-Scaling Algorithms Fail for Semidefinite Programming. Mathematics of Operations Research, 24(1):149-175, 1999. We give an example of a semidefinite programming problem in which primal-dual affine-scaling algorithms using the HRVW/KSH/M, MT, and AHO directions fail. We prove that each of these algorithms can generate a sequence converging to a non-optimal solution, and that, for the AHO direction, even its associated continuous trajectory can converge to a non-optimal point. In contrast with these directions, we show that the primal-dual affine-scaling algorithm using the NT direction for the same semidefinite programming problem always generates a sequence converging to the optimal solution. Both primal and dual problems have interior feasible solutions, unique optimal solutions which satisfy strict complementarity, and are nondegenerate everywhere.

An Interior-Point Method for Semidefinite Programming. SIAM Journal on Optimization, 6:342-361, 1996. We propose a new interior-point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices. We present a theoretical convergence analysis and show that the approach is very efficient for graph bisection problems, such as max-cut.

The Simplest Semidefinite Programs Are Trivial. Mathematics of Operations Research, 20:590-596, 1995. We show that if the linear constraints in a semidefinite program have the special form MXMT= B, then the optimization problem is trivial in the sense that an explicit solution exists.
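In the simplest case, M square and invertible, the explicit solution is just a congruence, as the following hypothetical illustration of my own (not the paper's code, which treats the general structure) shows.

```python
import numpy as np

# Hypothetical illustration (mine, simplest case only): when the linear
# constraint is M X M^T = B with M square and invertible, X is pinned down
# explicitly as X = M^{-1} B M^{-T}, so no optimization is actually needed.

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n)) + n * np.eye(n)   # generically invertible
S = rng.standard_normal((n, n))
B = S @ S.T                                       # symmetric PSD right-hand side

Minv = np.linalg.inv(M)
X = Minv @ B @ Minv.T                             # the explicit solution

# X inherits symmetry and positive semidefiniteness from B by congruence.
```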

Second-Order Cone Optimization

Using LOQO to Solve Second-Order Cone Programming Problems. Technical Report SOR-98-09, Statistics and Operations Research, Princeton University, 1998. Submitted to Optimization and Engineering.

Interior-Point Methods for Nonlinear Optimization

Global convergence of a primal-dual interior-point method for nonlinear programming. Algorithmic Operations Research. 3(1):12-19, 2008.

Interior-Point Algorithms, Penalty Methods and Equilibrium Problems. Computational Optimization and Applications. 34:155-182, 2006. We consider the question of solving equilibrium problems, formulated as complementarity problems and, more generally, mathematical programs with equilibrium constraints (MPECs), as nonlinear programs using an interior-point approach. These problems pose theoretical difficulties for nonlinear solvers, including interior-point methods. We examine the use of penalty methods to get around these difficulties and provide numerical results. We go on to show that penalty methods can resolve some problems that interior-point algorithms encounter in general.

Convergence analysis of a primal-dual interior-point method for nonlinear programming. Optimization Online, 2004.

Interior-Point Methods for Nonconvex Nonlinear Programming: Jamming and Numerical Testing. Mathematical Programming, 99(1):35-48, 2004. The paper considers an example due to Wachter and Biegler on which the standard primal-dual interior-point method for nonlinear programming fails to converge. Three possible remedies are derived and discussed, including shifting slack variables, a trust region method, and a modified barrier method. The paper then compares LOQO, a line-search interior-point code, with SNOPT, a sequential-quadratic-programming code, and KNITRO, a trust-region interior-point code, on a large test set of nonlinear programming problems. Specific types of problems which can cause LOQO to fail are identified.

Interior-Point Methods for Nonconvex Nonlinear Programming: Filter Methods and Merit Functions. Computational Optimization and Applications, 23:257-272, 2002. Recently, Fletcher and Leyffer proposed using filter methods instead of a merit function to control step lengths in a sequential quadratic programming algorithm. In this paper, we analyze possible ways to implement a filter-based approach in an interior-point algorithm. Extensive numerical testing shows that such an approach is more efficient than using a merit function alone.

A Comparative Study of Large-Scale Nonlinear Optimization Algorithms. Technical Report ORFE 01-04, Department of Operations Research and Financial Engineering, Princeton University, 2001.

Interior-Point Methods for Nonconvex Nonlinear Programming: Orderings and Higher-Order Methods. Mathematical Programming, 87(2):303-316, 2000. The topics covered in this paper include primal versus dual orderings and higher order methods, which attempt to use each factorization of the Hessian matrix more than once to improve computational efficiency. Results show that unlike linear and convex quadratic programming, higher order corrections to the central trajectory are not useful for nonconvex nonlinear programming, but that a variant of Mehrotra's predictor-corrector algorithm can definitely improve performance.

An Interior-Point Algorithm for Nonconvex Nonlinear Programming. Computational Optimization and Applications, 13:231-252, 1999. The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient, and has the promise of greatly reducing solution times on at least some classes of models.

LOQO: An Interior-Point Code for Quadratic Programming. Optimization Methods and Software, 11:451-484, 1999. This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. The paper focuses mainly on the algorithm as it applies to linear and quadratic programming with only brief mention of the extensions to convex and general nonlinear programming. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that the industry standard MPS format can be nicely formulated in such a way to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.

LOQO User's Manual--Version 3.10. Optimization Methods and Software, 12:485-514, 1999. LOQO is a system for solving smooth constrained optimization problems. The problems can be linear or nonlinear, convex or nonconvex, constrained or unconstrained. The only real restriction is that the functions defining the problem be smooth (at the points evaluated by the algorithm). If the problem is convex, LOQO finds a globally optimal solution. Otherwise, it finds a locally optimal solution near a given starting point. This manual describes: (1) how to install LOQO on your hardware, (2) how to use AMPL together with LOQO to solve general convex optimization problems, (3) how to use the subroutine library to formulate and solve convex optimization problems, and (4) how to formulate and solve linear and quadratic programs in MPS format.

Interior-Point Methods for Linear Optimization

Solving Multistage Stochastic Programs Using Tree Dissection. SIAM Journal on Optimization, 1996. To appear.

Affine-Scaling Trajectories Associated with a Semi-Infinite Linear Program. Math. of OR, 20:163-174, 1995. The affine-scaling algorithm is extended to problems having an infinite number of constraints. An example is given for which the algorithm converges to the wrong boundary point no matter how small a step length is chosen.

Symmetric Quasi-Definite Matrices. SIAM Journal on Optimization, 5(1):100-113, 1995. We say that a symmetric matrix K is quasi-definite if it can be written in 2x2 block-matrix form where the upper left block is symmetric and negative definite and the lower right block is symmetric and positive definite. Although such matrices are indefinite, we show that any symmetric permutation of a quasi-definite matrix yields an LDLT factorization. We apply this result to obtain a new approach for solving the symmetric indefinite systems arising in interior-point methods for linear and quadratic programming.
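A small demonstration of the main fact, with assumptions of my own (dense matrices, a random permutation, a naive textbook LDL^T routine rather than anything from the paper):

```python
import numpy as np

# Demonstration (my own construction, not the paper's code): every symmetric
# permutation of a quasi-definite matrix admits an LDL^T factorization with
# no pivoting -- the naive factorization below never hits a zero pivot.

def ldlt(A):
    """Plain LDL^T with no pivoting; would break down if A required pivots."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - np.sum(L[j, :j] ** 2 * d[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    return L, d

rng = np.random.default_rng(3)
m, k = 4, 3
E = rng.standard_normal((m, m)); E = E @ E.T + m * np.eye(m)   # SPD
F = rng.standard_normal((k, k)); F = F @ F.T + k * np.eye(k)   # SPD
A = rng.standard_normal((k, m))
# Quasi-definite: upper-left block negative definite, lower-right positive definite.
K = np.block([[-E, A.T], [A, F]])

p = rng.permutation(m + k)
Kp = K[np.ix_(p, p)]                   # an arbitrary symmetric permutation
L, d = ldlt(Kp)
```

The practical payoff, as the abstract notes, is that the fill-reducing ordering can be chosen freely, without interleaving it with numerical pivoting.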

Symmetric Indefinite Systems for Interior Point Methods. Mathematical Programming, 58:1-32, 1993. In this paper, we illustrate the advantages of solving the reduced KKT system of equations directly rather than further reducing the system to the so-called normal equations and solving that smaller, but denser, system.

Two-Thirds is Sharp for Affine Scaling. OR Letters, 13:197-201, 1993. In this paper, we give an example of an LP for which the affine-scaling algorithm fails to converge for all step lengths that are greater than 2/3 of the distance to the nearest constraint.

ALPO: Another linear program optimizer. ORSA J. on Computing, 5:134-146, 1993. This paper describes an implementation of the one-phase primal-dual path-following algorithm for solving linear programming problems.

Prior Reduced Fill-In in Solving Equations in Interior-Point Algorithms. Operations Research Letters, 11:195-198, 1992. The relative computational efficiency of computing step directions by solving the primal normal equations versus solving the dual normal equations is investigated.

Splitting Dense Columns in Sparse Linear Systems. Linear Algebra and Applications, 152:107-117, 1991. The idea of splitting dense columns as a technique for sparsifying the normal equations is introduced and compared with the alternative technique based on the Sherman-Morrison-Woodbury formula.

Dikin's Convergence Result for the Affine-Scaling Algorithm. Contemporary Mathematics, 114:109-119, 1990. The affine-scaling algorithm is an analogue of Karmarkar's linear programming algorithm. It uses affine transformations instead of projective transformations. In the late 1980's it came to the attention of the western mathematical programming community that a Soviet mathematician, Dikin, proposed the basic affine-scaling algorithm in 1967 and published a proof of convergence in 1974. Dikin's convergence proof assumes only primal nondegeneracy, while all other known proofs require both primal and dual nondegeneracy. The aim of this paper is to give a clear presentation of Dikin's ideas.
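The basic iteration is short enough to sketch in full. The toy LP, step fraction, and stopping rule below are illustrative choices of my own, not taken from the paper.

```python
import numpy as np

# A minimal sketch of the basic (primal) affine-scaling iteration for
# min c'x  s.t.  Ax = b, x >= 0.  Each step rescales by D = diag(x) and
# moves along the projected steepest-descent direction, staying strictly
# inside the positive orthant.

def affine_scaling(A, b, c, x, gamma=0.9, iters=100):
    for _ in range(iters):
        D2 = np.diag(x ** 2)
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
        r = c - A.T @ w                                  # reduced costs
        d = -(x ** 2) * r                                # direction; note A d = 0
        if np.all(d >= -1e-12):
            break                                        # no descent direction left
        alpha = gamma * np.min(-x[d < 0] / d[d < 0])     # stay interior
        x = x + alpha * d
    return x

# Toy LP: min x1 + 2 x2 + 3 x3  s.t.  x1 + x2 + x3 = 1, x >= 0.
# The optimum is x = (1, 0, 0) with value 1.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])
x = affine_scaling(A, b, c, np.array([1 / 3, 1 / 3, 1 / 3]))
```

The step fraction gamma = 0.9 here exceeds the 2/3 threshold that is sharp for worst-case examples; on this nondegenerate toy problem the iterates nevertheless converge to the optimum.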

The AT&T KORBX® System. AT&T Technical Journal, 68:7-19, 1989. This paper describes the AT&T KORBX system, which implements certain variants of the Karmarkar algorithm on a computer that has multiple processors, each capable of performing vector arithmetic. An overview of the KORBX system is given that covers its optimization algorithms, the hardware architecture, and software implementation techniques. Performance is characterized for a variety of applications found in industry, government, and academia.

An Implementation of the Minimum-Degree Algorithm Using Simple Data Structures. Technical Report, 1990. In this paper we describe a new implementation of the minimum-degree algorithm based on the so-called explicit model.

Affine-Scaling for Linear Programs with Free Variables. Math. Prog., 43:31-44, 1989. The affine-scaling modification of Karmarkar's algorithm is extended to solve problems with free variables. This extended primal algorithm is used to prove two important results. First, the geometrically elegant feasibility algorithm proposed by Chandru and Kochar is the same algorithm as the one obtained by appending a single column of residuals to the constraint matrix. Second, the dual algorithm described by Adler et al. is the same as the extended primal algorithm applied to the dual.

A Modification of Karmarkar's Linear Programming Algorithm. Algorithmica, 1:395-407, 1986. We present a modification of Karmarkar's linear programming algorithm. Our algorithm uses a recentered projected gradient approach thereby obviating a priori knowledge of the optimal objective function value. Assuming primal and dual nondegeneracy, we prove that our algorithm converges. We present computational comparisons between our algorithm and the revised simplex method. For small, dense constraint matrices we saw little difference between the two methods.

Case Studies in Optimization

Case Studies in Trajectory Optimization: Catenary Problem. Optimization and Engineering, 6:463-482, 2005. This is the second paper in a series presenting case studies in modern large-scale constrained optimization [9], the purpose of which is to illustrate how recent advances in algorithms and modeling languages have made it easy to solve difficult problems using modern optimization software. In this paper, we consider the shape of a hanging chain, which, in equilibrium, minimizes the potential energy of the chain. In addition to the tutorial aspects of this paper, we also emphasize the importance of certain modeling issues such as convex vs. nonconvex formulations of a given problem. We present several models of the problem and demonstrate differences in the number of iterations and solution time.

Nonlinear Programming and Engineering Applications. In H.J. Greenberg, editor, Tutorials on Emerging Methodologies and Applications in Operations Research. Springer-Verlag, 2004. The last decade has seen dramatic strides in one's ability to solve nonlinear programming problems. In this chapter, we review a few applications of nonlinear programming to interesting, and in some cases important, engineering problems.

A Case Study in Trajectory Optimization: Putting on an Uneven Green. SIAG/OPT Views-and-News, 12(1):6-14, 2001. We study in this paper the problem of how to putt a golf ball on an uneven green so that it will arrive at the hole with minimal final speed. This problem serves as a good case study for trajectory optimization as it illustrates many of the issues that arise in trajectory optimization problems. This putting example is just one of a collection of case studies that is submitted to Optimization and Engineering under the title "Case Studies in Trajectory Optimization: Trains, Planes, and Other Pastimes". The purpose of these studies is to illustrate how recent advances in algorithms and modeling languages have made it easy to solve difficult optimization problems using off-the-shelf software.

Case Studies in Trajectory Optimization: Trains, Planes, and Other Pastimes. Optimization and Engineering, 2:215-243, 2001. This is the first in a series of papers presenting case studies in modern large scale constrained optimization, the purpose of which is to illustrate how recent advances in algorithms and modeling languages have made it easy to solve difficult optimization problems using off-the-shelf software. In this first paper, we consider four trajectory optimization problems: (a) how to operate a train efficiently, (b) how to putt a golf ball on an uneven green so that it arrives at the cup with minimal speed, (c) how to fly a hang glider so as to maximize or minimize the range of the glide, and (d) how to design a slide to make a toboggan go from beginning to end as quickly as possible. In addition to the tutorial aspects of this paper, we also present evidence suggesting that the widely used trapezoidal discretization method is inferior in several ways to the simpler midpoint discretization method.

Random-Process Formulation of Computationally Efficient Performance Measures for Wideband Arrays in the Far Field. In The 1999 Midwest Symposium on Circuits and Systems, 1999. Random-process autocorrelations and crosscorrelations in three dimensions here provide computationally efficient mean-square-error and average-gain measures to control main beam and sidelobe characteristics in convex optimization of far-field beamforming coefficients for large wide-band arrays. An example optimized with the ultra-efficient interior-point code LOQO illustrates the approach.

Robust Optimization

Robust Optimization of Large-Scale Systems. Operations Research 43(2), 264-281, 1995. Mathematical programming models with noisy, erroneous, or incomplete data are common in operations research applications. Difficulties with such data are typically dealt with reactively, through sensitivity analysis, or proactively, through stochastic programming formulations. In this paper, we characterize the desirable properties of a solution to models when the problem data are described by a set of scenarios for their value, instead of using point estimates. A solution to an optimization model is defined as: solution robust if it remains "close" to optimal for all scenarios of the input data, and model robust if it remains "almost" feasible for all data scenarios. We then develop a general model formulation, called robust optimization (RO), that explicitly incorporates the conflicting objectives of solution and model robustness. Robust optimization is compared with the traditional approaches of sensitivity analysis and stochastic linear programming. The classical diet problem illustrates the issues. Robust optimization models are then developed for several real-world applications: power capacity expansion; matrix balancing and image reconstruction; air-force airline scheduling; scenario immunization for financial planning; and minimum weight structural design. We also comment on the suitability of parallel and distributed computer architectures for the solution of robust optimization models.

Global Optimization

Extension of Piyavskii's Algorithm to Continuous Global Optimization. J. Global Opt., 14:205-216, 1999. We use the simple, but little-known, result that a uniformly continuous function on a convex set is ε-Lipschitz (as defined in the paper) to extend Piyavskii's algorithm for Lipschitz global optimization to the larger domain of continuous (not necessarily Lipschitz) global optimization.
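For context, here is a bare-bones sketch of Piyavskii's classical algorithm for the Lipschitz case, the method the paper extends; the test function and Lipschitz constant are my own choices, not taken from the paper.

```python
import numpy as np

# Piyavskii's algorithm for global minimization of an L-Lipschitz f on [a,b]:
# repeatedly evaluate f where the piecewise saw-tooth lower bound
# max_i (f(x_i) - L |x - x_i|) is deepest.

def piyavskii(f, a, b, L, iters=60):
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(iters):
        order = np.argsort(xs)
        x_s = np.array(xs)[order]
        f_s = np.array(fs)[order]
        # Between neighbors x_i < x_j, the cone bound dips to
        # (f_i + f_j)/2 - L (x_j - x_i)/2  at
        # x* = (x_i + x_j)/2 + (f_i - f_j)/(2L); refine the deepest dip.
        bounds = (f_s[:-1] + f_s[1:]) / 2 - L * (x_s[1:] - x_s[:-1]) / 2
        k = int(np.argmin(bounds))
        x_new = (x_s[k] + x_s[k + 1]) / 2 + (f_s[k] - f_s[k + 1]) / (2 * L)
        xs.append(x_new)
        fs.append(f(x_new))
    return min(fs)

# Minimize (x - 0.3)^2 on [0, 1]; L = 2 is a valid Lipschitz constant there.
best = piyavskii(lambda x: (x - 0.3) ** 2, 0.0, 1.0, L=2.0)
```

The paper's extension replaces the single constant L with the epsilon-Lipschitz bounds available for any uniformly continuous objective.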

Probabilistic Potential Theory

A Probabilistic Formula for the Concave Hull of a Function. Ann. Prob., 23, 2014-2021, 1995.

Discrete-Space Markov Processes. Large Deviations for Performance Analysis: Appendix B, 1995.

Probabilistic Solution of the Dirichlet Problem for Biharmonic Functions in Discrete Space. Ann. Prob., 12(2), 311-324, 1984. Considering difference equations in discrete space instead of differential equations in Euclidean space, we investigate a probabilistic formula for the solution of the Dirichlet problem for biharmonic functions. This formula involves the expectation of a weighted sum of the pay-offs at the successive times at which the Markov chain is in the complement of the domain. To make the infinite sum converge, we use Borel's summability method. This is interpreted probabilistically by imbedding the Markov chain into a continuous time, discrete space Markov process.

Stochastic Waves. Transactions of the AMS, 275, 771-779, 1983. Let \(\phi\) be a real valued function defined on the state space of a Markov process \(x_t\). Let \(\tau_t\) be the first time \(x_t\) gets to a level set of \(\phi\) that is \(t\) units higher than the one on which it started. We call the time-changed process \(\tilde{x}_t = x_{\tau_t}\) a stochastic wave. We give conditions under which this process is Markovian and we evaluate its infinitesimal generator.

Toward a Stochastic Calculus for Several Markov Processes. Adv. Appl. Math., 4(2), 125-144, 1983. This paper, my PhD thesis, investigates a class of generalized harmonic functions associated with a tensor product of a pair of strong Markov processes. In the case where both processes are Brownian motions, a smooth function \(\sf \small f \) is ``harmonic'' if \(\sf \small \Delta_{x_1} \Delta_{x_2} f(x_1,x_2) = 0 \). For these harmonic functions we investigate a boundary value problem that is analogous to the Dirichlet problem associated with a single process.

Random Polynomials

The Complex Zeros of Random Sums. Tech. Report, 2015. In this paper, we derive explicit formulas for the distribution of the complex roots of random sums of the form \( P_n(z) = \sum_{j=0}^{n} \eta_j f_j(z), \) where the functions \(f_j\) are given analytic functions that are real-valued on the real line. We consider some special cases including Weyl polynomials, Taylor polynomials, and truncated Fourier series.

The Complex Zeros of Random Polynomials. Transactions of the AMS, 347(11):4365-4384, 1995. Mark Kac gave an explicit formula for the expectation of the number of zeros of a random polynomial in any measurable subset of the reals, where a random polynomial is assumed to have coefficients that are independent standard normal random variables. In fact, for each n > 1, he obtained an explicit intensity function for which the expected number of zeros in a domain is the integral of the density over that domain. Here, we extend this formula to obtain an explicit formula for the expected number of zeros in any measurable subset of the complex plane. We show that the expected number of zeros has a density on the real line and a density in the complex plane. We give explicit formulae for these density functions. We also study the asymptotics showing that for large n the complex density's mass lies close to, and is uniformly distributed around, the unit circle.
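The asymptotic concentration near the unit circle is easy to see empirically. The quick experiment below is my own, not the paper's; the degree and annulus are arbitrary choices.

```python
import numpy as np

# Empirical look at the asymptotics: for Kac polynomials (i.i.d. standard
# normal coefficients), the complex zeros concentrate near the unit circle
# as the degree grows.

rng = np.random.default_rng(4)
n = 200                                  # degree of the random polynomial
coeffs = rng.standard_normal(n + 1)      # i.i.d. N(0,1) coefficients
radii = np.abs(np.roots(coeffs))

# Fraction of the n zeros lying in a modest annulus around |z| = 1.
frac_near = np.mean((radii > 0.8) & (radii < 1.25))
```

For large n the bulk of the mass sits within distance O(1/n) of the circle, so this fraction is typically very close to 1.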

Optimal Stopping/Switching/Choice

The Curse of Dimensionality or the Blessing of Markovianity. Tech. Report, 2014. The beauty (and the blessing) of Markov processes is that they allow one to do computations on the state space \(\sf \small E \) of the Markov process as opposed to working in the sample space of this process, which is at least as large as \(\sf \small E^{\mathbb N} \) and therefore huge relative to \(\sf \small E \). Obviously, the tractability of a problem formulated on \(\sf \small E \) depends on how big this space is. When \(\sf \small E \) is a subset of a medium to high dimensional space, problems quickly become intractable. This fact is often referred to as the curse of dimensionality. But, even in high dimensions, \(\sf \small E \) is small compared to \(\sf \small E^{\mathbb N} \). Hence, it is surprising that some recent papers have claimed to resolve the curse of dimensionality by working instead in the sample space. In this paper, we will review one such example.

The Postdoc Variant of the Secretary Problem. Tech. Report, 2012. The classical secretary problem involves sequentially interviewing a pool of n applicants with the aim of hiring exactly the best one in the pool---nothing less is good enough. The optimal decision strategy is easy to describe and the probability of success is 1/e. In this paper, we consider a minor variant of this classical problem. We wish to pick not the best but the second best (the best is going to Harvard). In this case, an explicit solution can be given both for the optimal strategy and the associated optimal success probability. As n goes to infinity, the probability of success tends to 1/4. Apparently, it is easier to pick the best than the second best.
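The classical 1/e benchmark quoted above is simple to confirm by simulation. The Monte Carlo below is my own sketch of the classical problem (not the postdoc variant, whose optimal rule is derived in the paper); n and the trial count are arbitrary.

```python
import random

# Monte Carlo check of the classical secretary problem: the optimal
# "observe about n/e applicants, then take the first record" rule hires the
# single best applicant with probability tending to 1/e ~ 0.368.

def classical_secretary(n, rng):
    ranks = list(range(n))             # 0 denotes the best applicant
    rng.shuffle(ranks)                 # random arrival order
    cutoff = round(n / 2.718281828)    # observe the first ~n/e applicants
    best_seen = min(ranks[:cutoff])
    for r in ranks[cutoff:]:
        if r < best_seen:              # first applicant beating all observed
            return r == 0              # success iff it is the overall best
    return False                       # the best was among those observed

rng = random.Random(5)
trials = 20000
wins = sum(classical_secretary(100, rng) for _ in range(trials))
success_rate = wins / trials           # close to 1/e
```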

Brownian Bandits. Dynkin Festschrift, AMS, 1995. We study the multi-armed bandit problem in which each arm's reward process evolves according to a Brownian motion. We derive the formula for the Gittins indices associated with such problems.

Three Bewitching Paradoxes. Topics in Contemporary Probability and Applications, 1994. We discuss three popular paradoxes in probability, including the "Let's Make a Deal" paradox.

Optimal Switching Among Several Brownian Motions. SIAM J. on Control and Optimization, 30, 1150-1162, 1992.

Optimal Switching Between a Pair of Brownian Motions. Ann. Prob., 18(3), 1010-1033, 1990. Consider two Brownian motions each taking values on a given interval with absorption at the endpoints. The time evolution of the two processes can be controlled separately: i.e., we can alternate between letting the first process run while freezing the second and letting the second run while freezing the first. This results in a switched process that evolves in the rectangle like a horizontal Brownian motion when the second process freezes and like a vertical Brownian motion when the first freezes. Given a nonnegative continuous payoff function defined on the boundary of the domain, a control consists of a switching strategy and a stopping time. We study the problem of finding an optimal control which maximizes the expected payoff obtained at stopping time (stopping in the interior results in zero reward). In the interior of the rectangle, the optimal switching strategy is determined by a partition into three sets: a horizontal control set, a vertical control set and an indifference set. We give an explicit characterization of these sets in the case when the payoff function is either linear or strongly concave on each face.

Markov Strategies for Optimal Control Problems Indexed by a Partially Ordered Set. Ann. Prob., 11(3), 642-647, 1983.

Optimal Stopping and Supermartingales over Partially Ordered Sets. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 57, 253-264, 1981.

The Optimal Choice of a Subset of a Population. Mathematics of Operations Research, 5(4), 481-486, 1980. We observe a ranked population in random order. At each step we either accept or reject the observed sample, thereby partitioning the population into two parts: accepted and rejected. We are interested only in the case when every accepted sample is better than every rejected one. Under this condition, we maximize the probability that the accepted part has a given fixed size k.


Tomography

The discrete Radon transform and its approximate inversion via linear programming. Discrete Applied Mathematics. 75:39-61, 1997.

New Insights Into Emission Tomography Via Linear Programming. Proc. Conf. on Formation, Processing and Evaluation of Medical Images, 1988.

Purple America

On Graphical Representation of Voting Results. We describe and illustrate various ways to represent election results graphically. The advantages and disadvantages of the various methods are discussed. While there is no one perfect way to fairly represent the outcomes, it is easy to come up with methods that are superior to those used in recent elections.


Astronomy

Blink and You'll See It. Sky and Telescope, June 2019.

The Retrograde Motion of Mars with a Sextant. Submitted to Sky and Telescope, Feb. 2017.

Venus Transit Parallax Measurement. Tech. Report, Sept. 2015.

Measuring The Parallax of Barnard's Star. Tech. Report, Nov. 2014.

Sequencing the Stars. Sky and Telescope, Dec. 2010.

Measuring Earth's Diameter from a Sunset Photo. Optics and Photonics News, 19(11), 34-39, Nov. 2008.

Measuring the Astronomical Unit. Sky and Telescope, 113(1), 91-94, 2007.


Miscellaneous

Why Sunset Reflections on Water Smear Vertically. Tech Report, 2018.

Which Young Tableaux can represent an outer sum? Journal of Integer Sequences, 2015.

A Speciation Solver for Cement Paste Modeling and the Semismooth Newton Method. Cement and Concrete Research, 68, 139-147, 2015. A speciation solver based on a semismooth Newton method adapted to the thermodynamic modeling of cement paste is proposed. The strong theoretical properties associated with these methods offer important practical advantages. Numerical experiments indicate that the algorithm is reliable, robust, and efficient.

The Subterranean Brachistochrone. Technical Report, Princeton University, 2013. Inspired by Elon Musk's proposal to build a subterranean train from San Francisco to LA, I worked out the math for the tunnel shape that best exploits Earth's gravity.

Two Professors Retake the SAT. Chronicle of Higher Education, 55(39), A30-31, 2009.

Symmetrization of Binary Random Variables. Bernoulli, 5(6):1013-1020, 1999. A random variable Y is called an independent symmetrizer of a given random variable X if (a) it is independent of X and (b) the distribution of X + Y is symmetric about 0. In cases where the distribution of X is symmetric about its mean, it is easy to see that the constant random variable Y = -E X is a minimum-variance independent symmetrizer. Taking Y to have the same distribution as -X clearly produces a symmetric sum but it may not be of minimum variance. We say that a random variable X is symmetry resistant if the variance of any symmetrizer, Y, is never smaller than the variance of X. Let X be a binary random variable: P{X = a} = p and P{X = b} = q where a and b are distinct, 0 < p < 1, and q = 1 - p. We prove that such a binary random variable is symmetry resistant if (and only if) \(\sf \small p \ne 1/2 \). Note that the minimum variance as a function of p is discontinuous at p = 1/2. Dropping the independence assumption, we show that the minimum variance reduces to pq - min(p,q)/2, which is a continuous function of p.

The Kruskal Count. In The Mathematics of Preference, Choice, and Order: Essays in Honor of Peter C. Fishburn. Springer-Verlag, 2009. The Kruskal Count is a card trick in which a magician "guesses" a card selected by a subject according to a certain counting procedure. With high probability the magician can correctly "guess" the card. The success of the trick is based on a mathematical principle related to coupling models for Markov chains. This paper analyzes in detail two simplified variants of the trick and estimates the probability of success. The theoretical probabilities are compared with Monte Carlo simulation results for several variants of the actual trick.
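The coupling principle behind the trick can be seen in a simplified Monte Carlo model. The parameters below (sequence length, value range, starting window) are my own illustrative choices, not the variants analyzed in the paper.

```python
import random

# Simplified model of the Kruskal Count: both magician and subject pick a
# start in a sequence of random step sizes and repeatedly jump forward by
# the value they land on.  Once their paths meet they coincide forever, so
# with high probability they end on the same final position.

def final_position(values, start):
    i = start
    while i + values[i] < len(values):
        i += values[i]                 # jump forward by the value landed on
    return i

rng = random.Random(6)
trials = 2000
couplings = 0
for _ in range(trials):
    values = [rng.randint(1, 10) for _ in range(80)]
    a = final_position(values, rng.randrange(10))   # subject's secret start
    b = final_position(values, rng.randrange(10))   # magician's start
    couplings += (a == b)

coupling_rate = couplings / trials     # well above chance
```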

A Probabilistic Model for the Time to Unravel a Strand of DNA. Stochastic Models 4:299-314, 1988. A common model for the time \(\sf \sigma_L\) (sec) taken by a DNA strand of length \(\sf L \) (cm) to unravel is to assume that new points of unraveling occur along the strand as a Poisson process of rate \(\sf \lambda \) 1/(cm x sec) in space-time and that the unraveling propagates at speed \(\sf \nu/2 \) (cm/sec) in each direction until time \(\sf \sigma_L.\) We derive a formula for the distribution of \(\sf \sigma_L.\)

Rejected Papers (but worthy nonetheless)

Using LOQO to Solve Second-Order Cone Programming Problems. Technical Report SOR-98-09, Statistics and Operations Research, Princeton University, 1998.

Uniform Continuity is Almost Lipschitz. Technical Report SOR, 1997. We prove that a function is uniformly continuous if and only if it is within \(\sf \small \varepsilon\) of being Lipschitz continuous for all \(\sf \small \varepsilon>0\).

An EM approach to OD matrix estimation. Technical Report SOR 94-04, Princeton University, 1994.

Constructing Strong Markov Processes. Technical Report, University of Illinois, 1984. Markov processes that are not "strong Markov" can be made strongly Markov by enlarging the state space. In the classical theory, this is accomplished by assuming that the state space has a topological structure and then taking the compactification of that topological space. Such a process entails the unnatural assumption that the process lives in a topological space to start with. Here, we make no such assumption. Rather, we show how to introduce a natural topology associated with the process. We then complete the space with respect to this topology and show that the resulting Markov process has the strong Markov property in this enlarged space. Several examples illustrate the idea.