ABSTRACT: The theory of braids illustrates the interplay of two disciplines of pure mathematics: topology, used in the definition of braids, and the theory of groups, used in their treatment. We will show that the system of all braids of order n is a group, and will discuss the equality of two braids of the same order.
ABSTRACT: In spite of increasing computing power, many statistical problems give rise to Markov chain Monte Carlo algorithms that mix too slowly to be useful. Most often, this poor mixing is due to high posterior correlations between parameters. In the illustrative special case of Gaussian linear models with known variances, we characterize which functions of parameters mix poorly and demonstrate a practical way of removing all autocorrelations from the algorithms by adding Gibbs steps in suitable directions. These additional steps are also very effective in practical problems with nonnormal posteriors, and are particularly easy to implement.
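As a minimal illustration of the idea (a two-parameter toy target with an assumed correlation, not the paper's models), the sketch below augments a standard coordinate-wise Gibbs sampler for a highly correlated bivariate normal with an extra Gibbs step along the slow direction d = (1,1)/sqrt(2); sampling the exact conditional along that line removes the autocorrelation that the coordinate updates leave behind.

```python
import numpy as np

# Hedged sketch: Gibbs sampling for a bivariate normal target N(0, Sigma)
# with correlation rho, plus an extra Gibbs step in a suitable direction.
rng = np.random.default_rng(0)
rho = 0.99
Sigma_inv = np.linalg.inv(np.array([[1.0, rho], [rho, 1.0]]))
d = np.array([1.0, 1.0]) / np.sqrt(2.0)   # the "slow" direction for this toy target

x = np.zeros(2)
samples = []
for _ in range(5000):
    # standard coordinate-wise Gibbs updates
    x[0] = rho * x[1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    x[1] = rho * x[0] + np.sqrt(1 - rho**2) * rng.standard_normal()
    # extra Gibbs step: exact conditional of the target along the line x + t*d,
    # which is N(-b/a, 1/a) with the coefficients below
    a = d @ Sigma_inv @ d
    b = d @ Sigma_inv @ x
    t = rng.normal(-b / a, 1 / np.sqrt(a))
    x = x + t * d
    samples.append(x.copy())
```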
ABSTRACT: The following topics will be discussed: Delayed response in stock markets; Stochastic delay differential equations; Black-Scholes model for (B,S)-securities markets; Option pricing in the (B,S)-securities markets; GARCH(1,1) model for volatility; Numerical solution of the Black-Scholes problem; Maximum likelihood method for parameter estimation; A proposed model of (B,S)-securities market with delayed response.
ABSTRACT: Financial institutions must forecast loan losses for the purpose of provisioning; a loss provision is an expense set aside for bad loans. This presentation shows that losses can be tied to a Bayesian network structure involving repayment and delinquency, and how this structure can be used to forecast losses.
ABSTRACT: Emerging infectious diseases such as SARS and West Nile virus pose an important threat to our public health system. SARS was documented in over 8,400 people globally, with cases in Asia, Europe, and North America. When, where, and how it will reappear, if at all, is an open question. West Nile virus, which first emerged in North America in 1999 as an outbreak in New York City, has now spread across Canada and the United States. Current challenges include rapid and accurate diagnosis, risk assessment, disease containment, and vaccine development. In this presentation, research activities of the Canadian SARS Research Network addressing these challenges will be presented, with a particular focus on risk factors for SARS among healthcare workers. Results of the first Canadian serosurvey study of West Nile virus will be presented, and a cohort study of patients with severe West Nile virus infection will be discussed. The theme of changing our traditional way of doing research for these infections will be emphasized.
ABSTRACT: On this 10th anniversary of Wiles' announcement of a proof of Fermat's Last Theorem, it is perhaps not inappropriate to commemorate the grand event by recounting attempts to prove the theorem, starting with Fermat, describing some of the ideas and drama leading to Wiles' proof, and noting some of what lies ahead.
ABSTRACT: Fomin, Fulton, Li, and Poon defined an operation on pairs of partitions. They conjectured that a certain symmetric function associated to this operation is Schur-positive. We obtain a combinatorial description of this operation that allows us to generalize the conjecture, to prove many instances of it, and to show that it holds asymptotically.
This is joint work with Francois Bergeron and Riccardo Biagioli.
ABSTRACT: We describe optimal detection techniques and how they have been applied to understanding climate change in the 20th century. Using fingerprints of climate change estimated from simulations conducted with coupled ocean-atmosphere climate models at the Canadian Centre for Climate Modelling and Analysis and the Hadley Centre for Climate Prediction and Research, we estimate the contributions, and their associated uncertainties, of greenhouse gases and sulfate aerosols. Results show that the increase of greenhouse gases explains most of the observed warming, at spatial scales from global to (sub)continental, in the second half of the 20th century.
Fernando Hernandez-Hernandez will speak on "Topologies on the First Uncountable Ordinal and Guessing Sequences" at 2:00p.m. in N638 Ross.
ABSTRACT: We present a method to define topologies on the first uncountable ordinal, by defining weak neighbourhood bases on the points from the elements of a guessing sequence. A guessing sequence is a family of subsets indexed by limit ordinals such that each element of the family is cofinal in its index. With these topologies we give, for example, the construction of a new Dowker space and the construction of a perfectly normal nonrealcompact space of cardinality aleph_1 which is consistent with Martin's Axiom plus the negation of the Continuum Hypothesis, answering a question posed by R.L. Blair in the 70's.
ABSTRACT: We define two important objects of Combinatorial Group Theory: graphs of groups, and their fundamental groups. We show how graphs of groups can be constructed from group actions on graphs. Conversely, starting with a graph of groups G, one can construct a graph called the universal cover of G. We present the two structure theorems of the Bass-Serre theory: the first one states that the universal cover of a graph of groups is a tree; the second structure theorem shows that a group acting on a tree is isomorphic to the fundamental group of the graph of groups obtained from the action. Applications of the theorems include an easy proof of the Kurosh subgroup theorem for free products of groups.
ABSTRACT: In this talk, we find the optimal investment, consumption, and annuitization policies of a utility-maximizing retiree facing a random time of death. We use techniques from optimal control to discover that the individual will buy annuities in order to keep wealth to one side of a barrier in wealth-annuity space--a type of barrier-control. In the region of no-annuity purchasing, we obtain a partial differential equation that determines the maximum utility. In the time-homogeneous case, we obtain an explicit solution to the problem and present a numerical example.
ABSTRACT: I will report on some joint work with B. Allison and A. Pianzola in which we investigate iterated loop algebras and obtain a general ``permanence of base'' result. Using this we show that the ``type'' and ``number of steps'' of an iterated loop algebra of a split simple Lie algebra are isomorphism invariants.
ABSTRACT: Game theory is the mathematics of making moves in a game to optimize the payoff for yourself while the other player(s) do the same for themselves. A "discrete" game, such as tic-tac-toe, presents each player with a finite number of moves to choose from, whereas a game like Super Mario Brothers or Castle Wolfenstein offers a choice of continuous variables--which direction to move in, for how long, when to press the fire button. These are "differential games". I'm going to talk about some interesting differential games with simple solutions that made differential game theory popular with the American military-industrial complex: bombing a target, intercepting an aircraft, pursuit and evasion, attack and attrition (as time permits).
ABSTRACT: We find that a Fock space which is the basic module for the affine general linear Lie algebra $\hat{gl}_N$ admits actions of both the quantum affine algebra $U_q(\hat{gl}_N)$ and the quantum toroidal algebra via the vertex operator construction. As a byproduct, we enhance a theorem of Lusztig on quantum affine algebras.
ABSTRACT: We consider unbiased estimation following a group sequential test for distributions in a one-parameter exponential family. We show that, for an estimable parameter function, there exists a unique unbiased estimator depending on the sufficient statistic and based on the truncation-adaptation criterion (Liu and Hall, Biometrika, 1999); moreover, this estimator is identical to one based on the Rao-Blackwell theorem. When completeness fails, we show that the uniformly minimum-variance unbiased estimator may not exist or may otherwise have undesirable performance, thus establishing the optimality of the Rao-Blackwell estimator regardless of the completeness of the sufficient statistics. The results have potential application in clinical trials and other fields.
ABSTRACT: There are several connections between permutation statistics on the symmetric group and the representation theory of the symmetric group S_n. After giving a brief survey of the known results for S_n, I will show how to generalize them to the even-signed permutation group D_n. In particular I will define a major index and a descent number on D_n that allow me to give an explicit formula for the Hilbert series for the invariants of the diagonal action of D_n on the polynomial ring. Moreover I will give a monomial basis for the coinvariant algebra R(D). This new basis leads to the definition of a new family of D_n modules that decompose R(D). An explicit decomposition of these representations into irreducible components is obtained by extending the major index on particular standard Young bitableaux.
This is joint work with Fabrizio Caselli.
ABSTRACT: Fathom is a statistical software package that was designed for teaching statistical concepts. Its key feature is that it is "dynamic": changes made to the data in one of its representations are immediately made in every representation, encouraging exploration and "what if" questions. Last year Fathom was licensed by the Ministry of Education for use in all Ontario public schools.
I will give an informal demonstration of how Fathom works and show the types of experiences with data our students may now have, with explorations appropriate for students from grade 9 to grade 12. I will also demonstrate how I have recently used Fathom in my third-year course in regression analysis.
ABSTRACT: The singularly perturbed problem \[ \varepsilon \frac{d^{2}u}{dx^{2}}+Q(u)=0 \] with various boundary conditions at $x=\pm 1$, where $\varepsilon$ is a small parameter, is studied from a rigorous point of view. Known solutions obtained from the method of matched asymptotics are shown to approximate true solutions with an exponentially small error estimate. The so-called spurious solutions turn out to be approximations of true solutions, when the locations of their internal layers, ``spikes'' or ``shocks'', are properly assigned. An estimate is also given for the maximum number of ``spikes'' or ``shocks'' that these solutions can have.
ABSTRACT: I will give examples from the history of mathematics in order to explore the question in the title.
A pizza lunch will follow in N537 Ross.
ABSTRACT: Erdos and Renyi first considered the evolution of a random
graph, in which n vertices begin life as isolated points and then edges are
thrown in randomly one by one. This evolving random graph undergoes a phase
transition when the number of edges is around n/2: a "giant" component
suddenly appears. We give a result on the joint distribution of three
parameters of the giant component in the phase after it appears: the number of
vertices in the 2-core (the largest subgraph of minimum degree 2 or more); the
excess (#edges - #vertices) of the 2-core; and the number of vertices not in
the 2-core. This uses a combination of combinatorial and probabilistic tools.
It is joint work with B. Pittel.
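The phase transition itself is easy to observe numerically. The sketch below (a plain union-find simulation with illustrative parameters, not the authors' method) tracks the largest component as random edges are added, and shows it jumping once the number of edges passes about n/2.

```python
import random

# Hedged sketch: watch the giant component emerge in an evolving random
# graph on n vertices, using union-find to track component sizes.
def largest_component_history(n, m, seed=0):
    random.seed(seed)
    parent = list(range(n))
    size = [1] * n
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    biggest, history = 1, []
    for edges in range(1, m + 1):
        u, v = find(random.randrange(n)), find(random.randrange(n))
        if u != v:
            if size[u] < size[v]:
                u, v = v, u
            parent[v] = u                    # union by size
            size[u] += size[v]
            biggest = max(biggest, size[u])
        history.append((edges, biggest))
    return history

# largest component stays small until ~n/2 edges, then grows rapidly
for edges, big in largest_component_history(10000, 8000)[::2000]:
    print(edges, big)
```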
Upcoming seminars: Nov 10. Benedek Valko
(Technical Univ. of Budapest) www.math.toronto.edu/probability.
ABSTRACT: In this talk, a general class of models called "perturbation models" is introduced. These models are described by an underlying "null" model that accounts for most of the structure in the data, while a perturbation accounts for possible small localized departures. The theory and inferential methods for fitting perturbation models are discussed. Two important statistical problems for which perturbation models are applicable are finite mixture models and spatial scan analysis. In a finite mixture model, the null model represents a mixture with m components and the perturbation represents additional components. In the spatial scan analysis context, the null density models the background or noise, whereas the perturbation searches for an unusual region such as a tumorous tissue in mammography or a target in automatic target recognition. The theory based on the Hotelling-Weyl tube formula provides an elegant approach to solving classes of statistical problems for which an exact solution is intractable and the regularity conditions required for central limit theorem type results are not satisfied. In this talk, (1) a new test statistic for detecting the presence of a perturbation is described, (2) a general theory for the derivation of the limiting distribution of the test statistic is illustrated using the results of the Hotelling-Weyl tube formula, (3) computational issues associated with the implementation of the procedures are addressed, and (4) the resulting theory is applied to the problem of testing for the number of terms in mixture models. Application of the resulting theory to image analysis will be discussed. This is joint work with Catherine Loader, Department of Statistics, Case Western Reserve University.
ABSTRACT: Every uncountable abelian Polish group has polishable subgroups of arbitrarily high Borel rank. I will present this recent result of Greg Hjorth that answers a question of Slawek Solecki and myself. (Slawek and I proved that every uncountable Polish group has Borel subgroups of arbitrarily high Borel rank.)
ABSTRACT: We will give some details of recent results concerning the space of symmetric functions in non-commuting variables (referred to as NCSF). This algebra was recently considered by Rosas and Sagan, who defined and developed bases analogous to those which exist for the (commutative) symmetric functions. We will try to make clear in this exposition the relationship between the space NCSF and the space of non-commutative symmetric functions of Gelfand-Krob-Lascoux-Leclerc-Retakh-Thibon et al. We will also introduce the notion of a fundamental basis for NCSF, which in some sense is analogous to the Schur functions on the level of symmetric functions.
This is joint work with Nantel Bergeron.
ABSTRACT: A classical way of modeling phase transition phenomena (e.g., in spin systems, binary materials, superconductors, etc.) is to consider an appropriate Helmholtz free energy functional and its local minimizers. This type of approach dates back to van der Waals, and its popularity is growing, as manifested by this year's Nobel Prize in Physics, one third of which was awarded to V.L. Ginzburg. In this talk, I will discuss the original van der Waals fully nonlocal functional, and show that it admits a variety of periodic L^2 local minimizers. Depending on the absolute temperature of the system, these minimizers can have discontinuous interfaces. This is in sharp contrast to the local approximations of the fully nonlocal functional, which are often known as Ginzburg-Landau or Cahn-Hilliard models. In other words, this result apparently establishes a middle ground between the ambiguous local phase-field and free boundary models. Finally, and perhaps surprisingly, it is not clear how to recover the above result numerically.
ABSTRACT: Studies of suspended particles in the atmospheric boundary layer, on Earth and Mars.
Pizza will be served in N537 Ross after the talk.
ABSTRACT: In the talk, I will explain a construction of bosonic negative-energy representations of the central extension of the Lie algebra of differential operators on the circle, the Lie algebra ${\mathcal W}_{1+\infty}$, as well as of the Lie algebra ${\mathcal W}_{1+\infty}(gl_N)$. In particular, when restricted to the Virasoro subalgebra of the Lie algebra ${\mathcal W}_{1+\infty}$, we obtain a bosonic realization of the Virasoro algebra with central charge $c=2$ and negative energy, which is completely reducible.
Note: This is joint work with Dong Liu.
ABSTRACT: Lilavati is one of the most famous textbooks in Indian history. It was written by a great poet and mathematician, Bhaskaracharya (b. 1114), in Sanskrit verses. He exemplified the remark of Weierstrass that "A mathematician who is not also something of a poet can never be a complete mathematician". In this talk I'll try to explain the Indian approach to mathematics and philosophy with special reference to the above book, which contains a lot of surprising material that can be derived only by calculus. It contains a formula for an approximate value of the length of an arc of a circle in terms of the chord, which I have not seen elsewhere.
ABSTRACT: I will give necessary and sufficient conditions for a sigma-ideal I of Borel sets to be the ideal of phi-null sets for a Maharam submeasure phi. As an application, I will prove that every quotient of a separable submeasure algebra is a submeasure algebra. In the case of (arbitrary) measure algebras this was proved by Solovay, who defined a strictly positive measure on the quotient. In the case of submeasure algebras, the indirect approach seems to be necessary.
ABSTRACT: The enumeration of lattice animals is (perhaps) one of the most famous problems in combinatorics, and as a lattice model of polymers it is also of considerable importance in statistical physics and theoretical chemistry. Though lattice animals have been studied since the 40's, it is surprising how few rigorous results exist.
In this talk I will explore one possible reason for this--that the model is not solvable in terms of the most common functions of mathematical physics, "differentiably finite" functions.
The proof of this relies on (almost) purely combinatorial and graphical techniques which demonstrate a direct relationship between the types of graph structures within the bond animals and the singularities of the solution.
ABSTRACT: I will present a new algorithm for solving a score equation for the maximum likelihood estimate in certain problems of practical interest. The method circumvents the need to compute second order derivatives of the full likelihood function. It exploits the structure of certain models that yield a natural decomposition of a very complicated likelihood function. In this decomposition, the first part is a log likelihood from a simply analyzed model and the second part is used to update estimates from the first. Convergence properties of this fixed point algorithm are examined and asymptotics are derived for estimators obtained by using only a finite number of steps. Illustrative examples considered in the paper include bivariate and multivariate Gaussian copula models, nonnormal random effects and state space models. Properties of the algorithm and of estimators are evaluated in simulation studies on a bivariate copula model and a nonnormal linear random effects model.
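As a schematic illustration of the general idea only (the update map F and the toy score equation below are hypothetical, not the paper's models), a fixed-point iteration solves a score equation g(theta) = 0 rewritten as theta = F(theta), with no second-order derivatives required:

```python
import numpy as np

# Hedged sketch: generic fixed-point iteration for a score equation.
def fixed_point(F, theta0, tol=1e-10, max_iter=200):
    theta = theta0
    for k in range(max_iter):
        theta_new = F(theta)
        if abs(theta_new - theta) < tol:
            return theta_new, k + 1
        theta = theta_new
    return theta, max_iter

# toy score equation g(theta) = mean(x) - theta + 0.1*sin(theta) = 0,
# rewritten as theta = F(theta); F is a contraction, so iteration converges
x = np.random.default_rng(1).normal(2.0, 1.0, size=100)
F = lambda t: x.mean() + 0.1 * np.sin(t)   # hypothetical update map
theta_hat, iters = fixed_point(F, theta0=0.0)
print(theta_hat, iters)
```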
ABSTRACT: In this talk we discuss robust regression estimates for censored data. The extension of the classical Least Squares estimates to the cases of censored responses was first proposed by Miller (1976) and Buckley and James (1979). More recently Ritov (1990) and Lai and Ying (1994) studied M-estimates for censored responses. Unfortunately, these estimates require a monotone estimating equation and hence are robust only to low-leverage outliers. We propose an extension of high-breakdown regression estimates to the case of censored response variables. In particular, our approach extends the classes of LMS estimates [Rousseeuw, 1984], S-estimates [Rousseeuw and Yohai, 1984], MM-estimates [Yohai, 1987], tau-estimates [Yohai and Zamar, 1988], P-estimates [Maronna and Yohai, 1993] and maximum depth estimates [Rousseeuw and Hubert, 1999]. Simulation studies show that these estimates have good finite sample properties. Examples and an algorithm to compute these estimators will also be discussed.
ABSTRACT: Biomedical signals are typically finite duration, dynamic and non-stationary processes whose frequency characteristics vary over time or space. This often requires algorithms capable of locally analyzing and processing signals. The recently developed S-transform (ST) combines the time-frequency representation of the windowed Fourier transform with the multi-scale analysis of the wavelet transforms. Applying this transform to a temporal signal reveals information on what and when frequency events occur. In addition, its multi-scale analysis allows more accurate detection of subtle signal changes while interpretation in a time-frequency domain is easy to understand. Based on the ST, a series of adaptive time-frequency analysis techniques can be derived, which may provide valuable information for disease diagnosis and treatment. In this talk, we overview the theory of the ST and illustrate its effectiveness in de-noising and analyzing magnetic resonance imaging data.
ABSTRACT: Classical works on aero-elasticity assume linear models for dynamics, aerodynamics, and structures. However, structural non-linearities arise from worn hinges of control surfaces, loose control linkages, material behavior and various other sources. Aging aircraft and combat aircraft that carry heavy external stores are more likely to be influenced by effects associated with nonlinear structures. An understanding of the nonlinear behavior of the system is crucial to the efficient and safe design of aircraft wings and control surfaces.
There are three types of structural non-linearities: cubic spring, free-play and hysteresis. The principal interest for the aero-elastician is the asymptotic motion behavior (convergence, divergence, limit cycle oscillation) and the amplitude and frequency of the limit cycle oscillations.
For a two-degree-of-freedom aero-elastic airfoil motion placed in an incompressible flow, using analytical techniques--center manifold theory, normal forms, the perturbation method, and the point transformation method--we accurately predict the nonlinear response. Various types of nonlinear response--damped, period-one, period-one with harmonics, period-two, period-two with harmonics, and chaotic motions--are detected, and the amplitudes and frequencies of limit cycle oscillations are predicted for velocities beyond the linear flutter speed. In particular, a secondary bifurcation after the primary Hopf (flutter) bifurcation is detected for a cubic hard spring in the pitch degree of freedom. Furthermore, there is a hysteresis in the secondary bifurcation: starting from different initial conditions the motion may jump from one limit cycle to another at different flow velocities. A higher-order harmonic balance method is employed to investigate the possible bifurcation branches.
A more up-to-date model, a three-degree-of-freedom aero-elastic airfoil with control surface free-play, will be introduced. The recent theoretical/experimental study will be briefly discussed, and future topics will be presented.
ABSTRACT: We will introduce the notion of characters for graded Hopf algebras and derive some interesting properties of the character group. In particular we will give a canonical factorization of a character into an even and an odd character. We will discuss some canonical characters related to combinatorial Hopf algebras.
ABSTRACT: Fixed annuities are fairly illiquid financial instruments. Because of this illiquidity, fixed annuities often offer a liquidity premium to compensate for the redemption restrictions. We describe the numerical implementation for the calculation of yield when the instantaneous risk-free rate of return follows the Vasicek model. In particular we discuss the benefits and challenges of using Gauss-Hermite quadrature for the calculation of two-dimensional integrals.
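As a sketch of the quadrature step only (the talk's actual integrand comes from the Vasicek yield calculation), two-dimensional Gauss-Hermite quadrature approximates a bivariate normal expectation by a double sum over the tensor product of the one-dimensional nodes. Note that numpy's hermgauss integrates against exp(-x^2), so a change of variables x = sqrt(2) z and a factor 1/pi are needed for standard normal expectations.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Hedged sketch: E[f(X, Y)] for independent standard normals X, Y
# via two-dimensional Gauss-Hermite quadrature.
nodes, weights = hermgauss(20)

def gh_expectation_2d(f):
    total = 0.0
    for xi, wi in zip(nodes, weights):
        for yj, wj in zip(nodes, weights):
            total += wi * wj * f(np.sqrt(2) * xi, np.sqrt(2) * yj)
    return total / np.pi

# sanity check: E[X^2 + Y^2] = 2
print(gh_expectation_2d(lambda x, y: x**2 + y**2))
```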
ABSTRACT: It is widely believed that thousands of genes in a given living organism function in a complicated way that creates the mystery of life. However, using traditional methods in molecular biology, which work on a 'one gene in one experiment' basis, it is hard to obtain the 'whole picture' of gene function. In the past several years, DNA microarray technology has attracted tremendous interest, since it allows for monitoring of the interactions among thousands of genes simultaneously.
Statistical considerations are frequently to the fore in the analysis of microarray data, as researchers sift through massive amounts of data and adjust for various sources of variability in order to identify the important genes amongst the many that are measured.
This talk is an attempt to provide an overview of the stages of a microarray experiment, to the extent needed to outline the sources of the major challenges a data analyst may encounter. In addition, a brief summary of the statistical methods involved will be followed by a concise discussion of several issues concerning the step from the statistical results to their biological application.
ABSTRACT: We prove a theorem due to Reidemeister and Serre, about groups acting freely on trees. This is the beginning of Bass-Serre theory, which uses group actions on graphs to obtain results on the structure of groups. The theorem leads to an easy proof of the Nielsen-Schreier subgroup theorem, and provides a Schreier basis for subgroups of free groups. The theorem is also used to obtain the Schreier index formula.
ABSTRACT: The Navier-Stokes equations are a system of nonlinear, partial differential equations that describe the motion of a viscous, incompressible fluid. It is of importance to understand the long time behavior of solutions of these equations. The numerical treatment of high-Reynolds-number viscous flows is of considerable interest for applications, because the predominant part of practically important flows take place either at large scales and high speeds or with small viscosity. My research has primarily focused on the problem of identifying the long-time behavior of solutions, as well as their asymptotics as the coefficient 1/Re of the highest-order derivatives approaches zero. In this talk I am going to present a new method for solving the incompressible Navier-Stokes equations and advection-diffusion equations. The main advantage of the method is that, owing to the economy of the computer time and memory required, it is very efficient for solving multidimensional problems. Finally, I shall discuss my future research plans.
Refreshments will be served in N620 Ross at 2:00p.m.
ABSTRACT: How new species emerge and evolve is a fundamental question in evolutionary ecology. A nice theory explaining how inter-specific competition may play a vital role in speciation has emerged recently. It is called evolutionary branching: a species splits into two, driven by the competition between the species and its mutants. The mathematics behind it is called adaptive dynamical systems: the dynamical systems governing the evolution of traits. These ideas have been tossed around by J. Metz, U. Dieckmann, S. Geritz et al., and have received wide interest and acceptance.
This theory is very successful in explaining the branching of a single species with a single trait. Unfortunately, the branching of a system with multiple species and multiple traits is still poorly understood. The main goal of this talk is to explain the branching of such systems. The branching is dominated by a double-dimensioned adaptive dynamical system. The branching condition comprises a coexistence condition for mutants and their parents, and a saddle condition for the "doubled" adaptive system. An obvious application is to explain how a species can speciate and adapt to either a resource gradient or a range of discrete resources, where resource competition is the main selection force. This sheds some light on the observed bio-diversity in resource adaptation.
ABSTRACT: We'll talk about planar graphs and the Hanani-Tutte theorem about such graphs. We'll define what a geometric graph is as well as some of their properties, including the relation between planar and geometric graphs.
ABSTRACT: A compact space $X$ is Valdivia compact if for some $\kappa$ there is an embedding $i : X\to [0,1]^\kappa$ such that $i[X]\cap \Sigma(\kappa)$ is dense in $i[X]$, where $\Sigma(\kappa)$ denotes the subset of $[0,1]^\kappa$ consisting of all functions with countable support.
I will present some properties of Valdivia compact spaces and their characterization in terms of inverse systems.
ABSTRACT: We look at the relationship between a finite set of points and its ideal. One way of studying this is by the use of Hilbert functions; a Hilbert function is a sequence of numbers that tells us the number of independent forms of each degree that pass through the given points. A special kind of ideal, known as a lex-segment ideal, satisfies many extremal properties among all ideals with the same Hilbert function. As a result, many people have looked into ways of generalizing lex-segment ideals so that one is still left with some of the extremal properties. Lex plus powers ideals are one such generalization. The ideal is split into two parts: the lex part and the "powers" part. The powers part defines a maximal length regular sequence that the ideal contains, and it is conjectured that lex plus powers ideals satisfy extremal properties among all ideals containing a regular sequence in the same degrees and having the same Hilbert function. In this talk, I will provide some evidence to believe this conjecture.
ABSTRACT: Genetic linkage studies look for regions of the genome that are shared, in excess of what is expected under the null hypothesis of no linkage, by affected relatives. The excess sharing is evaluated assuming a known pedigree that determines the relationships among the affected individuals. Unidentified pedigree errors can have serious consequences for linkage studies, resulting in either reduced power or false positive evidence for linkage. Genome-screen data collected for linkage studies can be used to detect misspecified relationships.
Mathematical models for the underlying segregation and transmission of the chromosomes will be described. Under these models, all the crossover processes in a pedigree can be viewed jointly as a continuous-time Markov random walk on the vertices of a hypercube. In practice, only limited information on this Markov process can be observed and the dimension of the hypercube is generally large. To circumvent the computational difficulties, we construct augmented Markov processes that have substantially reduced numbers of states, and we use a hidden Markov method to calculate the likelihood of observed genotype data for specific pairs of individuals. This allows us to perform hypothesis tests for detection of misspecified relationships and to construct confidence sets of relationships compatible with the data. For complex pedigrees, the likelihood calculations become infeasible. As an alternative, we propose some new statistics that are computationally simpler, yet result in powerful hypothesis tests for detection of pedigree errors. In the case when the null relationship for a pair is rejected, we propose a simple method using the EM algorithm to infer the likely true relationship for the pair.
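The hidden Markov likelihood computation can be sketched generically as a scaled forward algorithm (illustrative only, with a made-up two-state example; the pedigree application uses much larger, structured state spaces and the augmented processes described above):

```python
import numpy as np

# Hedged sketch of the forward algorithm with scaling.
# pi0: initial state distribution, A: transition matrix,
# emis[t, s] = P(observation at time t | state s).
def forward_loglik(pi0, A, emis):
    alpha = pi0 * emis[0]
    c = alpha.sum()
    alpha /= c
    loglik = np.log(c)
    for t in range(1, len(emis)):
        alpha = (alpha @ A) * emis[t]   # propagate, then weight by emission
        c = alpha.sum()
        alpha /= c                      # rescale to avoid underflow
        loglik += np.log(c)
    return loglik

# toy two-state example with made-up numbers
pi0 = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
emis = np.array([[0.7, 0.1], [0.6, 0.2], [0.1, 0.9]])
print(forward_loglik(pi0, A, emis))
```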
We will also discuss the implementation of the methods, with applications to several data sets collected for linkage studies. The software, PREST, is freely available at http://utstat.utoronto.ca/sun/Software/Prest.
This is joint work with Mary Sara McPeek.
ABSTRACT: Stochastic differential equations with finite delay have been intensively studied in recent years, and fundamental results on the behaviour of their solutions have been derived. But although deterministic equations with infinite delay are often encountered in applications, e.g. viscoelasticity and population dynamics, only a little work has so far been devoted to stochastic differential equations with infinite delay.
In this talk we introduce affine stochastic differential equations with both finite and infinite delay. After explaining some differences between the solutions of the underlying deterministic differential equations with finite and infinite delay, we present consequences of these differences for stochastic equations with infinite delay.
Treating equations with infinite delay often requires more sophisticated methods and techniques than the finite delay case. On the other hand, there exists a subclass of equations with infinite delay which can be reduced to ordinary differential equations without delay. We consider in detail the stochastic equations in this subclass. Moreover, we establish that various linear hereditary models can be described by equations in this subclass.
ABSTRACT: Many health conditions, including cancer and psychiatric disorders, are believed to have a complex genetic basis, and genes and environmental factors are likely to interact with one another in determining the presence and severity of these conditions. Assessing familial aggregation and heritability of disease is a classic topic of genetic epidemiology, commonly referred to as segregation analysis. While it is routine now to conduct such analyses for quantitative and dichotomous traits, there do not exist methods and software that accommodate ordinal traits. I will introduce a latent variable model for conducting genetic epidemiologic analyses of ordinal traits. The advantage of this latent variable model lies in its flexibility to include environmental factors (usually represented by covariates) and its potential to allow gene-environment interactions. The model building employs the EM algorithm for maximization and a peeling algorithm for computational efficiency. Asymptotic theory is provided for statistical inference, and simulation studies are conducted to confirm that the asymptotic theory is adequate in practical applications. I will also demonstrate how to apply this latent variable model to examine the familial transmission of alcoholism, which is categorized into three ordinal levels: normal control, alcohol abuse, and alcohol dependence. Not only does this analysis confirm that alcoholism is familial, but it also suggests that the transmission may have a major gene component, which is not revealed if alcoholism is dichotomized.
This is joint work with Rui Feng and Hongtu Zhu.
ABSTRACT: We develop a compartmental mathematical model to address the role of hospitals in SARS transmission dynamics, which partially explains the heterogeneity of the epidemic. Comparison of the effects of two major policies, strict hospital infection control procedures and community-wide quarantine measures, implemented in Toronto two weeks into the initial outbreak, shows that their combination is the key to short-term containment and that quarantine is the key to long-term containment.
ABSTRACT: We present a fast algorithm for solving electromagnetic scattering from a rectangular open cavity embedded in an infinite ground plane. The medium inside the cavity is assumed to be (vertically) layered. By introducing a transparent (artificial) boundary condition, the problem in the open cavity is reduced to a bounded domain problem. A simple finite difference method is then applied to solve the model Helmholtz equation. The fast algorithm is designed for solving the resulting discrete system in terms of the discrete {\it Fourier transform} and a preconditioned conjugate gradient (PCG) method with a complex diagonal preconditioner for the indefinite interface system. The existence and uniqueness of the finite difference solution is proved for arbitrary wave numbers. Our numerical experiments for large numbers of mesh points, up to 16 million unknowns, and for large wave numbers, {\em e.g.}, between 100 and 200 wavelengths, show that the algorithm is extremely efficient. The cost for calculating the Radar Cross Section, which is of significant interest in practice, is $O(M^2)$ for an $M \times M$ mesh. The proposed algorithm may be extended easily to solve discrete systems from other discretization methods of the model problem.
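A one-dimensional analogue conveys the flavour of the transform-based step (this sketch is far simpler than the paper's cavity solver and is only an illustration): the finite-difference Helmholtz operator with Dirichlet boundaries is diagonalized by the discrete sine transform, so the discrete system, indefinite or not, is solved directly in O(M log M) operations for M unknowns.

```python
import numpy as np
from scipy.fft import dst, idst

# Hedged sketch: solve -u'' - k^2 u = f on (0,1), u(0)=u(1)=0, on a
# uniform grid by diagonalizing the finite-difference Laplacian with
# the type-I discrete sine transform.
def helmholtz_1d(f, k):
    n = len(f)                 # number of interior grid points
    h = 1.0 / (n + 1)
    m = np.arange(1, n + 1)
    lam = (2.0 - 2.0 * np.cos(m * np.pi * h)) / h**2   # Laplacian eigenvalues
    fhat = dst(f, type=1)      # expand f in the sine eigenbasis
    return idst(fhat / (lam - k**2), type=1)           # divide and invert

x = np.linspace(0, 1, 202)[1:-1]          # 200 interior points
u = helmholtz_1d(np.sin(3 * np.pi * x), k=10.0)
```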
ABSTRACT: Management Board Secretariat's Ministry Salary and Benefits Projection Model will be discussed, which addresses a variety of issues including: annual increases for different groups of staff at different points throughout the fiscal year, staff transitions from one position to another, differing benefit rates for classified versus unclassified staff, etc. Additionally, Risk Management practices in Management Board Submissions and in the government as a whole will be discussed.
ABSTRACT: We will discuss various products being sold currently. We will look at how they are priced and how we test to make sure the pricing model is properly implemented. In particular we shall look at interest rate derivatives such as GICs with embedded options.
ABSTRACT: We will examine the representation theory of finite groups over the complex field paying particular attention to the associated character theory. If time permits, we will discuss representations of the symmetric group S_n using Specht modules and a recent extension of this approach to the representations of the Rook Monoid.
ABSTRACT: Different mathematical models for influenza epidemics, deterministic and stochastic, are presented. Most of the models concern the spread of influenza. An optimization model, a simulation model, and a model with cocirculating influenza strains are also presented.
ABSTRACT: We first briefly introduce the model for the dynamics of price adjustment in a single commodity market developed by J. Belair and M.C. Mackey. The model, with nonlinearities in both the supply and demand functions, was discussed in their research. Delays due to production lags and the storage policies involved in the supply schedule play the key role in the model development. We find that the interpretation of the delay, especially the storage time, is not reasonable in their discussion, and hence probably produces some problems in the model formulation. As our main work, we introduce penalty functions to improve their method and conclude that under some constraint conditions the storage time can be completely determined by the current price. Meanwhile, conditions for the local stability of the equilibrium price are given.
ABSTRACT: One of the most important and difficult problems in option pricing theory is to evaluate and optimally exercise American-style options on multiple assets. A basket option is an option whose payoff is linked to a portfolio or "basket" of several underlying assets. With growing diversification in investors' portfolios, basket options on such portfolios are in increasing demand.
In this seminar, we will present a Least Squares Monte Carlo (LSM) approach to pricing American-style basket options. While the Monte Carlo method is applied to simulate trajectories of the asset prices, least-squares regression is used to estimate the continuation values of the basket options, which makes this approach readily applicable in path-dependent and multifactor situations where traditional techniques cannot be used. Simulation examples for spread options, dual options and portfolio options will be given to illustrate the algorithm's performance, and detailed numerical analyses will also be provided.
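For a flavour of the LSM recursion, here is a compact sketch for a single-asset American put (a deliberate simplification of the basket setting, with hypothetical parameters): continuation values are estimated by regressing discounted future values on polynomial functions of the current price, and exercise occurs where the immediate payoff exceeds the estimated continuation value.

```python
import numpy as np

# Hedged Longstaff-Schwartz-style sketch for an American put.
rng = np.random.default_rng(42)
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
dt = T / steps
disc = np.exp(-r * dt)

# simulate geometric Brownian motion paths
Z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)
V = payoff(S[:, -1])                        # option value at maturity
for t in range(steps - 2, -1, -1):
    V *= disc                               # discount one step back
    itm = payoff(S[:, t]) > 0               # regress on in-the-money paths only
    if itm.any():
        coef = np.polyfit(S[itm, t], V[itm], 3)
        cont = np.polyval(coef, S[itm, t])  # estimated continuation value
        exercise = payoff(S[itm, t]) > cont
        V[itm] = np.where(exercise, payoff(S[itm, t]), V[itm])
print("LSM price:", disc * V.mean())
```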
ABSTRACT: The topological theory of connectedness and total disconnectedness has been developed categorically in various ways since the 60s. In this talk we introduce the theory of connectedness by studying two different approaches to the notion of connectedness. The first approach was introduced by R. E. Hoffmann and is used to extend many properties of the category of topological spaces to certain other categories. We present a new approach to the notion of connectedness and give a further discussion of its theory. This second approach involves a class F of morphisms in a given category, and it coincides with the first one for a particular class F. We also introduce the categorical concept of total disconnectedness and study its theory so that it and the connectedness of the aforementioned approach have a nice connection. Moreover, by taking advantage of the notion of F-openness, which was studied by M. M. Clementino, E. Giuli and W. Tholen, we give a definition of F-local connectedness and derive its properties.
ABSTRACT: The Florida State Pension plan gives participants the option to switch between Defined Contribution and Defined Benefit Pension plans. M.A. Milevsky and S.D. Promislow have recently obtained analytical expressions for the optimal switching time to maximize the expected wealth on retirement. When expected wealth is replaced by expected utility of wealth it is no longer possible to derive closed form solutions. This seminar will report on several results for power utility functions obtained by simulation.
This is in partial fulfillment of his seminar requirement for the Master's degree, and all MA students are expected to attend.
August 11, 2003 to August 16, 2003
ABSTRACT: There exist instruments and techniques that produce instantaneous values of CO2 partial pressure in the lung (PACO2) and in associated arterial (PaCO2) and mixed venous blood vessels (PvCO2). However, these techniques are invasive and difficult to carry out. These measurements are very useful in predicting important cardio-respiratory function, as well as in predicting sufficient volumes of an anesthetic to be administered to patients.
I will review the model presented by Chilton and Stacy in 1953. This early model was one of the first to present a broad theoretical attack on the dynamics of carbon dioxide in the lungs during respiration under conditions of metabolic equilibrium.
I will construct a continuous dynamic model of the periodic oscillation of PACO2, PaCO2, and PvCO2 as a function of respiratory frequency, cardiac output, fractional residual capacity, and metabolic CO2 production. My formulas will be derived from a mass balance approach to the different compartments of the respiration cycle. I will present simulations of this model under various conditions. I will analyze my results and compare them with results from Chilton and Stacy, and from Benallal et al. (2000).
Survey paper requirement for Masters students.
Reminder: Master's
Mathematics students are expected to attend the talk.
ABSTRACT: A semi-analytical approach for computing the temperature distribution and thermal stress inside an InSb crystal grown with the Czochralski technique is described. An analysis of the growing conditions indicates that the crystal growth occurs on the conductive time scale. A perturbation method for the temperature field is developed using the Biot number as a (small) expansion parameter whose zeroth order solution is one-dimensional (in the axial direction) and is obtained for a cylindrical and a conical crystal. For the growth conditions of InSb a parabolic temperature profile in the radial direction is shown to arise naturally as the first order correction. Both the quasi-steady and unsteady cases are solved for the crystal/melt system and various crucible profiles. Some issues relevant to growth conditions are also discussed. This is joint work with I. Frigaard, H. Huang, and S. Liang.
ABSTRACT: A securities market model will first be developed in discrete time. The definitions of arbitrage and equivalent martingale measure in our market model will be given, and it will then be shown that the market has no arbitrage opportunities if and only if there exists an equivalent martingale measure. Furthermore, the equivalent martingale measure is unique if and only if the market is complete, that is, if every security is attainable by some admissible trading strategy. Next the model will be formulated in continuous time and the analogous condition--no arbitrage if and only if there exists an equivalent martingale measure--will be proven.
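A standard one-period binomial example (not taken from the talk itself) makes the correspondence concrete. If the bond grows by the factor $1+r$ and the stock moves from $S_{0}$ to $uS_{0}$ or $dS_{0}$ with $d<1+r<u$, the unique equivalent martingale measure assigns the up-move probability \[ q=\frac{(1+r)-d}{u-d},\qquad \pi(X)=\frac{qX_{u}+(1-q)X_{d}}{1+r}, \] where $\pi(X)$ is the arbitrage-free price of a claim paying $X_{u}$ or $X_{d}$. For instance, $u=1.2$, $d=0.9$, $r=0.05$ give $q=0.5$; absence of arbitrage corresponds exactly to $0<q<1$, and completeness to the uniqueness of $q$.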
Seminar requirement for Masters students.
Reminder: Master's
mathematics students are expected to attend the talks of other students.
Documented evidence at 6 such talks is expected. Attendance sheets can be
picked up from N519 Ross.
ABSTRACT: First exit times and their dependence on variations of parameters are discussed for diffusion processes with non-stationary coefficients. We present new a priori estimates of energy type for solutions of parabolic Kolmogorov equations with infinite horizon. These estimates are used to obtain estimates of $L_p$-distances and some other distances between two exit times.
ABSTRACT: In this talk we will first go over the classical risk model in actuarial mathematics. A well known result on the distribution of its ruin time, namely Seal's formula, will be introduced. Then we will focus on the classical model perturbed by a Brownian motion (the perturbed model). Using Ito's formula one can show that, in such a model, the ruin probability as a function of initial reserve and time is the unique solution of a certain partial integro-differential equation. The Laplace transforms of the ruin time can be derived from this equation. Perturbed models with exponential claims will be considered later in this talk. The Laplace transforms can be inverted numerically in this special case. We will show how this can be carried out. Discussion and related plots will be given.
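For comparison with the transform-inversion approach, the finite-time ruin probability of the perturbed model can also be estimated by crude Monte Carlo. The sketch below uses a simple time discretization and illustrative parameters; it is not the talk's method, and the discretization can slightly miss ruin events between grid points.

```python
import numpy as np

# Hedged sketch: perturbed surplus U(t) = u + c*t - S(t) + sigma*W(t),
# with compound Poisson claims S(t) (rate lam, exponential sizes, mean mu).
rng = np.random.default_rng(7)
u0, c, lam, mu, sigma = 2.0, 1.5, 1.0, 1.0, 0.5
T, dt, n_paths = 10.0, 0.01, 1000
steps = int(T / dt)

ruined = 0
for _ in range(n_paths):
    U = u0
    for _ in range(steps):
        claims = rng.exponential(mu, rng.poisson(lam * dt)).sum()
        U += c * dt - claims + sigma * np.sqrt(dt) * rng.standard_normal()
        if U < 0:          # ruin: surplus drops below zero
            ruined += 1
            break
print("estimated finite-time ruin probability:", ruined / n_paths)
```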
ABSTRACT: Proteins carry out the majority of the functions in the human body. In order to understand the function of proteins, one needs to understand protein structures first. Given the large number of known protein structures in the Protein Data Bank, protein classification methods are needed to help in understanding protein structure-function interactions. In order to classify proteins effectively, sequence and 3D structure comparison algorithms must be developed first.
This paper focuses on protein 3D structure comparison methods and can be divided into two parts. In the first part, I present a brief introduction to protein 3D structure, which is necessary for understanding the later content. A review of existing algorithms is given and the working mechanisms of several popular algorithms are described.
The second part discusses two structure comparison algorithms, DALI and VAST. The statistical methods applied in these two algorithms, such as the Metropolis algorithm and the Gibbs sampler, are introduced. The DALI algorithm was implemented as an example to show how statistical methods can be applied to protein structure comparison.
Seminar requirement for Masters students
Reminder: Master's Mathematics
students are expected to attend the talks of other students. Documented
evidence at 6 such talks is expected. Attendance sheets can be picked up from
N519 Ross.
ABSTRACT: The discrete Fourier transform has applications in physics, statistics, error-correcting codes, and theoretical problems. We will introduce the definitions and the algebra associated to the DFT and attempt to give a "picture" of this operation on the set of functions on the group Z/nZ.
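A minimal sketch of the definition for a small example with n = 8 (checked against numpy's FFT, which uses the same sign convention):

```python
import numpy as np

# Hedged sketch: the DFT of a function f on Z/nZ, straight from the
# definition F(k) = sum_x f(x) * exp(-2*pi*i*k*x/n).
n = 8
f = np.arange(n, dtype=complex)       # a function on Z/nZ
omega = np.exp(-2j * np.pi / n)       # primitive n-th root of unity
F = np.array([sum(f[x] * omega**(k * x) for x in range(n))
              for k in range(n)])
assert np.allclose(F, np.fft.fft(f))  # agrees with the fast algorithm
```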
Seminar requirement for Masters students.
Reminder: Master's Mathematics students are expected to attend the talks of other students. Documented evidence at 6 such talks is expected. Attendance sheets can be picked up from N519 Ross.
ABSTRACT: Proof of the Generalized Fundamental Theorem of Galois Theory.
Generalized Fundamental Theorem: If F is an algebraic Galois extension field of K, then there is a one-to-one correspondence between the set of all intermediate fields of the extension and the set of all closed subgroups of the Galois group (given by E |--> Aut_E F) such that: F is Galois over every intermediate field E, but E is Galois over K if and only if the corresponding subgroup is normal in G = Aut_K F; in this case G/Aut_E F is (isomorphic to) the Galois group Aut_K E of E over K.
Seminar requirement for masters students.
Reminder: Master's
Mathematics students are expected to attend the talks of other students.
Documented evidence at 6 such talks is expected. Attendance sheets can be
picked up from N519 Ross.
ABSTRACT: The fundamental decision faced by an investor is how to invest among various assets over time. This problem is known as dynamic portfolio selection. Investment decisions share two important characteristics in varying degrees: uncertainty over the future rewards from the investment, and the timing of the investment. In this seminar, the dynamic portfolio selection problem with fixed and/or proportional transaction costs is presented. The portfolio consists of a risk-free asset, and a risky asset whose price dynamics is generated by geometric Brownian motion. The objective is to find the stochastic controls (amounts invested in the risky and risk-free assets) that maximize the expected value of the discounted utility of the terminal wealth. In contrast to the existing formulations by singular stochastic optimal control theory, the dynamic optimization problem is formulated as a non-singular stochastic optimal control problem so that optimal trading strategies can be obtained explicitly. In the limiting case of zero transaction costs, the optimal control problem in the new formulation is solved analytically when the portfolio consists of a risk-free asset, and many risky assets whose price dynamics are governed by correlated geometric Brownian motions.
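For the zero-transaction-cost limit mentioned above, a standard point of reference (stated here for an assumed power utility; the seminar's exact formulation may differ) is Merton's classical closed-form solution: with utility $U(w)=w^{1-\gamma}/(1-\gamma)$, risk-free rate $r$, and a single risky asset with drift $\mu$ and volatility $\sigma$, the optimal policy keeps the constant fraction \[ \pi^{*}=\frac{\mu - r}{\gamma\sigma^{2}} \] of wealth invested in the risky asset, with the remainder in the risk-free asset.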
ABSTRACT: Based on Grassmann's masterpiece "Ausdehnungslehre", Ramshaw's recent work "On Multiplying Points: The Paired Algebras of Forms and Sites", and umbral calculus, a new approach to Bézier curves and surfaces is given in the first chapter of the dissertation. Under the new approach a Bézier curve/surface of degree n can be denoted simply as a power of points. Using this new approach, many known results on Bézier curves and surfaces, both the statements and the proofs, can be simplified, and many new results can be obtained relatively easily.
Some classical problems are studied using this new theory in the second chapter. The geometric Hermite interpolation with given tangent direction, curvature vector and torsion is studied in detail. A solution using a Bézier curve of degree 5 with optimal approximation order is given. A general characterization of singular points, inflection points and torsion-vanishing points is given for both Bézier curves and rational Bézier curves.
In chapter 3, we first discuss in detail the general case of conversion between triangular and rectangular Bézier surfaces. Under our new theory, this conversion, which is a very important problem in CAGD, becomes much easier and much clearer. Secondly, we are able to prove that under certain restrictions on the control points, described by a matrix, the condition of geometric continuity between two triangular Bézier surface patches can be greatly simplified. The matrix itself, we believe, will play an important role in characterizing the control points of a Bézier patch. Finally, the vertex enclosure continuity of n triangular Bézier patches is studied.
ABSTRACT: This talk focuses on the development of stochastic models for high frequency time series data and related statistical inference procedures. In particular, two classes of models are proposed to analyse stock data. The first is a class of inter-temporal stochastic conditional duration models that are useful for modelling the duration process. A duration represents the time difference between consecutive trading times, an important financial variable reflecting the activeness of a stock. The statistical inference is developed under the framework of state space models with non-normal marginal errors, in which the Monte Carlo maximum likelihood estimation proposed by Durbin and Koopman (1997) is implemented. Specifically, we consider two heavy tailed distributions, log-Weibull and log-gamma, in our analysis of an IBM stock data set. The second is a class of time deformation return models for the return process. The return process is the series of differences of two adjacent log-prices of a stock over a certain period of time. The return variable is one of the most interesting and most important financial measurements for both researchers and investors. For the return models, we adopt an inferential strategy based on the simulated method of moments (SMM) for parameter estimation, in which simulation studies are conducted to validate a certain choice of the number of moments required in the formation of the estimating equations. Our numerical results in both simulation studies and data analyses indicate that this simulation-based inferential approach works very well for parameter estimation in the return models. The study presented in the thesis is largely motivated by the analysis of the high frequency IBM stock data.
ABSTRACT: In drug discovery, high throughput screening (HTS) is used to assay large numbers of compounds against a biological target. A research pharmaceutical company might have of the order of 2 million compounds available, and the human genome project is generating many new targets. Hence, there is a need for a more efficient strategy: smart screening.
In smart screening, a representative sample (experimental design) of compounds is selected from a collection and assayed against a target. A model is built relating activity to explanatory variables describing compound structure. The model is used to predict activity in the remainder of the compound collection and only the more promising compounds are screened.
Previous work has demonstrated the effectiveness of local methods (e.g., k-nearest neighbours, decision trees). This talk will concentrate on averaging strategies, including bagging, boosting, and some new methods based on subsets of explanatory variables. I will show that conventional concepts from statistical inference such as hypothesis testing need to be modified for effective classification in the drug discovery context.
Refreshments will be served in N620 Ross at 10:30a.m.
ABSTRACT: In this seminar we will present models of microparasitic infections that reproduce within their hosts and transfer from one host to another. The host population is divided into three groups: susceptibles S, infectives I, and removed R who can no longer contract the disease due to immunity or isolation. In the SIR model members of the host population proceed from the S class to the I class and then to the R class. The dynamics of the movement is described by three differential equations in S, I, and R. The SIRS model has the additional feature that members of the host population can move from the R class back to the S class through loss of immunity. The system of differential equations corresponding to this model will be discussed.
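A minimal numerical sketch of the SIR system just described (with hypothetical parameter values: beta is the transmission rate and gamma the removal rate):

```python
import numpy as np
from scipy.integrate import odeint

# Hedged sketch: the standard SIR differential equations,
# integrated from an initial condition with 1% infective.
def sir(y, t, beta, gamma):
    S, I, R = y
    dS = -beta * S * I            # susceptibles become infective
    dI = beta * S * I - gamma * I # infectives accumulate, then are removed
    dR = gamma * I                # removals by immunity or isolation
    return [dS, dI, dR]

t = np.linspace(0, 100, 1000)
S, I, R = odeint(sir, y0=[0.99, 0.01, 0.0], t=t, args=(0.5, 0.1)).T
print("peak infective fraction:", I.max())
```

The SIRS variant adds a loss-of-immunity term moving individuals from R back to S at some rate, which amounts to adding a term proportional to R in dS and subtracting it in dR.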
Seminar requirement for masters students.
Reminder: Master's
Mathematics students are expected to attend the talks of other students.
Documented evidence at 6 such talks is expected. Attendance sheets can be
picked up from N519 Ross.
ABSTRACT: A general methodology in the characterisation of three interest rate models will be presented. Closed-form solutions to zero-coupon bond prices and forward rates are obtained. The first model postulates that the short rate process is a function of a continuous time finite state Markov chain representing the "state of the world". The characterisation is extended to the situation when the Markov chain is time varying. This gives rise to the second model. Finally, a mean reverting model is considered where the mean reversion parameter switches between a finite number of states/regimes of an economy. Empirical studies demonstrating the plausibility of the models will also be given.
Refreshments will be served at 3:30p.m. in N620 Ross.
ABSTRACT: We present the background for some algorithms for fast, combinatorial analysis of which sections of proteins (or other molecules) are rigid and which are flexible. We will also briefly present some illustrations from work on proteins, docking of ligands, etc. The algorithms are implemented in the program FIRST for protein analysis, available free (with an academic license agreement) on the web from Michigan State University.
The algorithms are based on ideas of 'generic rigidity of graphs' developed over several decades of mathematical work on the geometry and combinatorics of rigid structures. We will describe some of this mathematical background and a pair of key unsolved mathematical problems which we call the Molecular Conjectures.
Details of the algorithms and current work will be presented in a second seminar in the near future.
The work is part of a joint NIH funded grant with Leslie Kuhn (Biochemistry) and Michael Thorpe (BioPhysics) at Michigan State University.
ABSTRACT: In life history studies involving patients with chronic diseases it is often of interest to study the relationship between a marker process and a more clinically relevant response process. This interest may arise from a desire to gain a better understanding of the underlying pathophysiology, a need to evaluate the utility of the marker as a potential surrogate outcome, or a plan to conduct inferences based on joint models. We consider data from a trial of breast cancer patients with bone metastases. In this setting, the marker process is a point process which records the onset times and cumulative number of bone lesions, reflecting the extent of metastatic bone involvement. The response is also a point process, which records the times patients experience skeletal complications resulting from these bone lesions. Interest lies in assessing how the development of new bone lesions affects the incidence of skeletal complications. By considering the marker as an internal time-dependent covariate in the point process model for skeletal complications, we develop and apply methods which allow one to express the association via regression. A complicating feature of this study is that new bone lesions are only detected upon periodic radiographic examination, which makes the marker processes subject to interval censoring. A modified EM algorithm is used to deal with this incomplete data problem.
Refreshments will be served in N620 Ross at 10:30a.m.
ABSTRACT: Standard interpretations of Frege's logicism take at face value a drastically oversimplified picture of nineteenth century mathematics. Against this background, Frege can easily seem to be outside the mathematical mainstream, and many commentators have recently concluded exactly this. This paper digs into the historical background to show that Frege (and nineteenth century foundations more generally) was more profoundly engaged with ongoing mathematics than has been realized. Among other things that are relevant to assessing the mathematical thrust of Frege's work are a contrast between the Riemann-inspired "conceptual" style of Gottingen and the arithmetical style of Weierstrass and Berlin, differences between Riemann and Weierstrass on definitional practices, and the early applications in number theory of (what is now called in English) the theory of cyclotomic extensions. This historical background is not just interesting in its own right, but it also prompts a revised assessment of what Frege was trying to do in Grundlagen, and in turn suggests a reevaluation of the proper relation between the philosophy of mathematics and mathematical practice.
ABSTRACT: We will present new techniques in Lie theory arising from problems in complex analysis and complex differential geometry. Geometric aspects will be emphasized in an elementary approach.
ABSTRACT: A bounded linear operator A on a Hilbert space H is said to be of finite type if there is a finite-dimensional subspace K of H such that K contains the range of the commutator AB-BA and is invariant with respect to B, where B is the adjoint of A. In this talk, the analytic model, the trace formula for the commutator and the formula for the eigenvectors of some classes of operators of finite type will be given.
ABSTRACT: A wealth of biological genotyping data has emerged from human complex trait mapping studies. The inherent probabilistic properties of genetic data have generated many open questions. In this talk, I will address two problems arising from gene mapping studies and the methods we have developed to solve them.
The first problem is multilocus large pedigree analysis. The proposed method incorporates a hidden Markov model into a Gibbs sampling scheme. The second problem is nonparametric hypothesis testing for unbalanced designs to detect gene interactions. A new nonparametric method is devised and the concept of composite linear rank statistics is established.
Refreshments will be served in N620 Ross at 3:30p.m.
ABSTRACT: Problems of recognition occur in practically every field of human activity. The talk focuses on the entropic algorithm for pattern recognition. We will give a brief introduction to Shannon's entropy and its properties, then emphasize the entropic pattern-recognition criterion. A detailed example will be given to illustrate the entropic algorithm for recognition.
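For a discrete distribution $(p_1,\ldots,p_n)$, Shannon's entropy is
\[
H(p_1,\ldots,p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i,
\]
which vanishes for a degenerate distribution and is maximised by the uniform one; entropic recognition criteria are built from quantities of this kind.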
Seminar requirement for Masters students.
Reminder: Master's Mathematics students are expected to attend the talks of other students. Documented evidence of attendance at 6 such talks is expected. Attendance sheets can be picked up from N519 Ross.
ABSTRACT: Tin electroplated coatings are widely used as corrosion protection layers in consumer packaging applications. The desirable smoothness and brightness of the coatings are a function of the grain-size and their micro- and nano-structures. Earlier studies demonstrated that analysis of the electrodeposited metal surface on a micrometer scale offers the potential to provide new insights into the metal deposition mechanism and provides a way of improving the quality of the metal deposits. In this work, atomic force microscopy (AFM) is used to characterize the morphology of the tin electrodeposits grown under various plating conditions. We are interested in the effects of plating solution composition, current density and plating temperature on the morphology and structure of these films. The surface roughness of the tin films grown under different conditions is evaluated by applying scaling analysis to the AFM images. The results show that the tin film surface displays self-affinity, and normal scaling behaviour is observed only when the brightener and stabilizer are both present in the plating electrolyte. This presentation will outline the methodology employed to perform the dynamic scaling analysis and how chemists can use fractal concepts to understand metal growth on surfaces. This is joint work with T. Jiang, N. Hall, and S. Morin.
ABSTRACT: We establish the theory of localization operators with two admissible wavelets on locally compact Hausdorff groups. The analogue of the resolution of the identity formula, the boundedness of localization operators with symbols in $L^p$, the trace formula and the $S_p$ properties of localization operators are discussed. Moreover, we give trace class norm inequalities for wavelet multipliers; for wavelet multipliers with two admissible wavelets, we give trace class norm inequalities and the trace formula.
ABSTRACT: This work is mainly focused on Hausdorff topological groups. Motivated by the Kuratowski-Mrowka Theorem, a topological group G is called c-compact if, for every topological group H, the projection of G x H onto H maps closed subgroups to closed subgroups. The problem of whether every c-compact topological group is compact has been an open question for over fifteen years.
We obtain three main results on c-compact topological groups. The first is that every c-compact group that admits a continuous monomorphism into a compact group is actually compact. (As obvious as this result may appear, its proof is fairly non-trivial.) The second states that every c-compact group is compact if every c-compact group of a smaller class admits a continuous monomorphism into a compact group, and the third says that every c-compact locally compact group of a certain class is compact if and only if every countable discrete c-compact group admits a continuous monomorphism into a compact group. While the first result is of an affirmative nature, the latter two are reduction theorems.
In the course of attempts to solve the problem of c-compactness we construct some dual adjunctions, which in a way generalize several known dualities. Cartesian closedness of the underlying categories of topological spaces turns out to play a crucial role in establishing these dualities. However, the most well-known cartesian closed category of topological spaces (consisting of the Hausdorff k-spaces) is not the most convenient one in the context of topological groups. As a resolution we investigate the categorical properties of Tychonoff spaces with the property that every function on them with Tychonoff codomain and continuous restrictions to compact subsets is continuous. Such spaces have been known for over thirty years, and their topological properties have been thoroughly studied, but their category does not seem to have drawn any attention in the past. We prove that this category is cartesian closed, and show that it is equivalent to an epireflective subcategory of the category of Hausdorff k-spaces.
In a category with notions of image and closed subobject, an object is called h-closed if its image under every morphism is closed in the codomain. This concept generalizes the notion of h-completeness of topological groups, which is somewhat weaker than the notion of c-compactness. As an addendum we obtain a categorical characterization of C^*-algebras within some larger categories of *-algebras containing them. We show that C^*-algebras are precisely the h-closed objects in these categories.
ABSTRACT: We investigate the OSI Network-layer and develop an abstraction suitable for the study of network collective dynamics and various network performance indicators (throughput, average number of packets in transit, average packet delay, average packet speed,...). We investigate how the network collective dynamics and the network performance indicators are affected by various routing algorithms and by the addition of randomly generated links into a regular network connection topology. Data for our study was gathered using Netzwerk-1, a C++ simulation tool developed for our abstraction.
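As a toy illustration of the topology question (an assumed example using the standard Watts-Strogatz construction, not the Netzwerk-1 tool itself):

    import networkx as nx

    # Adding random links to a regular ring lattice (rewiring probability p)
    # sharply reduces the average hop count between nodes, one of the
    # performance indicators mentioned above.
    for p in (0.0, 0.05, 0.2):
        g = nx.connected_watts_strogatz_graph(n=200, k=4, p=p, seed=1)
        print(p, nx.average_shortest_path_length(g))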
ABSTRACT: In this paper, we provide a definition of Pareto equilibrium in terms of risk measures, and present necessary and sufficient conditions for equilibrium in a market with finitely many traders (whom we call "banks") who trade with each other in a financial market. Each bank has a preference relation on random payoffs which is monotonic, complete, transitive, convex and continuous; we show that this, together with the current position of the bank, leads to a family of valuation measures for the bank. We show that a market is in Pareto equilibrium if and only if there exists a (possibly signed) measure which, for each bank, agrees with a positive convex combination of all valuation measures used by that bank on the securities traded by that bank.
Dr Ku is an applicant for the financial or actuarial mathematics position.
ABSTRACT: The famous 15th-century cardinal appears on at least one list of "great" mathematicians; on the other hand his contemporary Regiomontanus dismissed his efforts in mathematics as "ridiculous". But whatever his technical competence, it is certain that Nicholas's perception of mathematics shaped and illumined his views on such issues as the limits of human knowledge and the relation of man to God. I shall try to sketch from both perspectives -- the technical and the philosophical -- the place of mathematics in the world-view of this fascinating figure.
ABSTRACT: We characterize the class of finite solvable groups by two-variable identities in a way similar to characterization of finite nilpotent groups by Engel identities. More precisely, a sequence of words $u_1,\ldots,u_n,\ldots $ is called correct if $u_k\equiv 1$ in a group $G$ implies $u_m\equiv 1$ in a group $G$ for all $m>k$.
We are looking for an explicit correct sequence of words $u_1(x,y),\ldots,u_n(x,y),\ldots$ such that a group $G$ is solvable if and only if for some $n$ the word $u_n$ is an identity in $G$.
Let $u_1=x^{-2}y^{-1} x$, and $u_{n+1} = [xu_nx^{-1},yu_ny^{-1}]$.
The main result states that a finite group $G$ is solvable if and only if for some $n$ the identity $u_n(x,y)\equiv 1$ holds in $G$.
From the point of view of profinite groups this result implies that the provariety of prosolvable groups is determined by a single explicit proidentity in two variables.
The proof of the main theorem relies on reduction to Thompson's list of minimal non-solvable simple groups, on extensive use of arithmetic geometry (Lang-Weil bounds, Deligne's machinery, estimates of Betti numbers, etc.) and on applications of computer algebra and geometry (SINGULAR, MAGMA).
Joint work with T. Bandman, G.-M. Greuel, F. Grunewald, B. Kunyavskii, and G. Pfister.
ABSTRACT: A law u(x_1,...,x_n)=v(x_1,...,x_n) is called positive if the words u and v do not contain inverses of variables, e.g. xy=yx.
A group G is called locally graded if every finitely generated subgroup in G has a nontrivial finite image.
We show equivalence of some known problems and give an affirmative answer in the class of locally graded groups.
ABSTRACT: To achieve more realistic image synthesis, sophisticated physically-based models have recently been used in computer graphics for simulating physical phenomena. For instance, in water animation, the two-phase incompressible Navier-Stokes model was used to simulate water motion. Consequently, one needs to solve Poisson equations with many complex interfaces, often known as Stefan problems. Moreover, the PDE coefficients often have large jumps across the interfaces. The large number and highly irregular shape of the interfaces pose a great challenge to the accurate and efficient numerical solution of these problems. In this talk, we present a fast multigrid preconditioning technique for solving highly irregular interface problems. Our approach takes advantage of the knowledge of the interface location and the jump conditions, which one often knows in practice. Specifically, our interpolation captures the boundary conditions at the interface. Numerical results in 2D and 3D show that the resulting multigrid method is more efficient than other robust multigrid methods, and is independent of both the mesh size and the size of the jump.
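In standard notation (assumed here for concreteness), the model interface problem is: find $u$ such that
\[
\nabla\cdot(\beta\nabla u) = f \quad \text{in } \Omega\setminus\Gamma, \qquad [u]_\Gamma = g, \qquad [\beta u_n]_\Gamma = h,
\]
where $\Gamma$ is the interface, $[\cdot]_\Gamma$ denotes the jump across it, and the coefficient $\beta$ may jump by orders of magnitude between the two sides.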
ABSTRACT: Consider the algebra of formal power series in countably many noncommuting variables over the rationals. The subalgebra \Pi(x) of symmetric functions in noncommuting variables consists of all elements invariant under permutations of the variables and of bounded degree. We develop a theory of such functions analogous to the ordinary theory of symmetric functions. In particular, we define analogs of the monomial, power sum, elementary, complete homogeneous, and Schur symmetric functions, and investigate their properties.
A preprint can be downloaded at: www.ma.usb.ve/~mrosas/articles/articulos.html.
These Prof Talks are organized by Club Infinity and are intended to expose undergraduate students to different and, perhaps, new areas of mathematics. Each talk is about 30 minutes in length, and of course, after the talk there will be pizza and refreshments.
Alexander Tovbis, University of Central Florida, will give a talk on "
ABSTRACT: Mathematical models are used to study the disease transmission dynamics of TB (tuberculosis). The models incorporate many important features of the disease including the long and variable latency period of TB and the emergence of antibiotic resistance due to incomplete treatment of drug-sensitive TB. Bifurcation analyses of the models provide threshold conditions which can be used to determine the role of key parameters in the coexistence and coevolution of competing strains of TB -- regular TB and drug-resistant TB strains. Optimal control strategies of the disease will also be discussed.
ABSTRACT: We consider a Markov chain Monte Carlo (MCMC) algorithm for building an additive model with a sum of trees. The trees themselves are ``treed models'', with a separate linear regression model in each terminal node. To adopt a Bayesian approach, we put prior distributions on parameters within each tree and on the sum of trees. The MCMC algorithm used to train a single tree is extended to the additive framework. The key component of this extension is a step in which a single random tree is drawn conditional on all others in the sum. The extension is straightforward yet powerful, enabling a more flexible set of models.
The model and associated training algorithm have some interesting similarities to Boosting and backfitting. If the priors are set so as to heavily regularize individual trees, we see Boosting-like behaviour with a large number of weak learners, each contributing a small amount to the overall model. Since a treed regression is anything but weak, careful attention must be paid to the choice of prior parameters. If instead we relax the regularization, then a smaller number of additive trees will contribute to the model. The iterated drawing of each tree conditional on the others is similar to a Bayesian version of backfitting.
The Bayesian framework and MCMC training algorithm yield a posterior distribution, which can be used to assess uncertainty. For example, posteriors for the number of weak learners and for predictions are easily available.
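A minimal sketch of the backfitting structure described above, with a greedy tree refit standing in for the conditional posterior draw of the actual MCMC (all names here are hypothetical):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Each sweep revisits every tree in the sum and refits it against the
    # partial residuals left by the other trees; in the Bayesian algorithm
    # this refit would be a draw from the tree's conditional posterior.
    def backfit_sum_of_trees(X, y, n_trees=10, n_sweeps=20, max_depth=2):
        trees = [DecisionTreeRegressor(max_depth=max_depth).fit(X, y / n_trees)
                 for _ in range(n_trees)]
        fits = np.column_stack([t.predict(X) for t in trees])
        for _ in range(n_sweeps):
            for j in range(n_trees):
                partial_residual = y - fits.sum(axis=1) + fits[:, j]
                trees[j] = DecisionTreeRegressor(max_depth=max_depth)
                trees[j].fit(X, partial_residual)
                fits[:, j] = trees[j].predict(X)
        return trees

The shallow max_depth plays the role of the heavy regularization discussed above: each tree is a weak learner contributing a small part of the overall fit.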
ABSTRACT: An n-category is an n-dimensional geometric object carrying a certain kind of algebraic structure, most notably composition operations. (A 1-category is just an ordinary category.) Several definitions of the notion of n-category have been proposed. The limit case (n tending to infinity) can also be considered; the resulting notion is the most general. This talk chiefly concerns the definition of infinity-categories put forth by Michael Makkai. According to it, an infinity-category is what Makkai calls a multitopic set (a structure built from special oriented polytopes called multitopes; here Makkai's definition is rather involved), and composition is declared by an infinite-dimensional universal property, making additional operations or axioms unnecessary.
After treating the two-dimensional case for motivation, I shall give an improved version of Makkai's definition. The main change lies in the definition of multitopic sets, which in my case is entirely geometric. I call these structures dendrotopic sets. I shall go on to present a different axiomatization of universality in dendrotopic sets and show that it leads to a notion stronger than Makkai's. The crucial part of my proof is a construction that makes use of a rather surprising combinatorial fact concerning dendrotopes.
ABSTRACT: Exponential dichotomies have played and continue to play a significant role in the study of the asymptotic behavior of various types of differential equations. After reviewing the basic ideas behind exponential dichotomy we will focus on its application in showing the existence of periodic solutions of ordinary differential equations, parabolic partial differential equations and functional differential equations. We will establish the criteria necessary to further extend this method to other types of equations.
ABSTRACT: The cardinality of a set of certain directed bipartite graphs is equal to that of a set of permutations that satisfy certain criteria. In this talk we describe both the set of graphs and the set of permutations, what their cardinality has to do with the chromatic polynomial, and whether we can find a natural bijection between the sets.
This is joint work with Richard Ehrenborg.
ABSTRACT: A macromolecular structure is calculated from a combined set of common chemical topologies and experimental data. In this seminar, I will guide the listener through a structure calculation and discuss the problems one encounters along the way.
ABSTRACT: Clustering is the process of partitioning a set of objects into subsets based on some measure of similarity (or dissimilarity) between pairs of the objects. It has wide applications in data mining and bioinformatics. I will give an overview of existing clustering algorithms. A new non-parametric clustering algorithm will be presented in this talk. Its advantages will be demonstrated using simulated and real data sets.
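For readers new to the area, a minimal sketch of one classical method, k-means (illustrative background only, not the new non-parametric algorithm of the talk):

    import numpy as np

    # Alternate between assigning each point to its nearest centre and
    # moving each centre to the mean of its assigned points.
    def kmeans(X, k, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
            new_centers = np.array(
                [X[labels == j].mean(axis=0) if np.any(labels == j)
                 else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return labels, centers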
These Professor Talks are organized by Club Infinity and are intended to expose undergraduate students to different and, perhaps, new areas of mathematics. Each talk is about 30 minutes in length, and of course, after the talk there will be pizza and refreshments.
ABSTRACT: The talk presents possible consequences of web systems usage for e-procurement at the level of both buyers and buying organizations. The analysis is based on a model that integrates the forces of electronic interconnections and Transaction Cost Economics. A cluster analysis performed on data collected from 110 corporate buyers from over 100 organizations was used to test the validity of the model. Findings helped identify two basic groups of buyers with differences on relevant variables supporting the model presented. Results show that web systems usage helps (1) buyers reduce their search costs, which gives them the flexibility to increase their base of suppliers when such increase fits the interest of the buying organization and (2) buying organizations reduce their coordination costs which gives them the flexibility to outsource more activities when such move fits their interest. The implications of this study for research and practice are discussed.
ABSTRACT: Algebraic number theory is the study of number theory using the tools of abstract algebra. I will discuss the rise of the subject, focusing on the contributions of Euler, Gauss, Kummer, and Dedekind. I will also touch on more recent developments in algebraic number theory.
ABSTRACT: In this talk we associate to a simplicial complex a square-free monomial ideal called its "facet ideal". We use the combinatorial structure of a simplicial complex to study algebraic properties of its facet ideal. We explore dualities with Stanley-Reisner theory, and describe effective ways to compare the Stanley-Reisner complex to the facet complex for a given monomial ideal. As a result, we re-interpret some existing Cohen-Macaulay criteria, and produce some new ones.
ABSTRACT: Experimental observations indicate that high molecular weight polymers in very dilute solution undergo a phase transition from an expanded open coil state to a compact state as the solvent quality or temperature is decreased. This transition is known as the collapse transition. The existence of a collapse transition has not been proved for the standard models of linear and ring polymers, that is, the self-avoiding walk and self-avoiding polygon models; however, numerical evidence strongly supports its existence. The numerical evidence is also consistent with the conclusion that the collapse transition of linear and ring polymers occurs at the same critical temperature.
In this talk, the known rigorous and numerical results related to the collapse phase transition of self-avoiding polygons and walks will be reviewed. Recent rigorous results will also be discussed. In particular, it will be shown that self-avoiding polygons in $\mathbb{Z}^2$, the square lattice, collapse at the same critical point as a wider class of lattice subgraphs, namely closed trails with a fixed number of vertices of degree 4. This is proved by establishing combinatorial bounds on closed trails in terms of self-avoiding polygons. A similar approach can be used to relate open trails with a fixed number of vertices of degree 4 and self-avoiding walks.
Refreshments will be served at 3:30p.m. in N620 Ross.
ABSTRACT: The identification of monotone-in-time quantities underpins some of the basic insights into the long-time behavior of nonlinear Schrodinger evolutions. For example, in the focusing setting, the variance identity reveals a monotone behavior implying the existence of blow-up solutions. In the defocusing case on R^3, the Morawetz identity of Lin-Strauss provides space-time norm bounds implying scattering behavior. This talk describes a unified approach to obtaining monotone-in-time quantities for NLS evolutions, generalizing these two classic examples. A scattering result for the R^3 cubic defocusing case will also be discussed. This talk describes joint work with M. Keel, G. Staffilani, H. Takaoka and T. Tao.
ABSTRACT: The coefficients in a q-binomial coefficient mod (q^n-1) are shown to be equal to the number of double cosets C_n\S_n/S_k \times S_{n-k} whose size is a certain multiple. This extends an observation of Chapoton. The result is generalized to finite Coxeter groups. q-analogues are conjectured which generalize results of Andrews and Haiman. A related Schur function result is given.
ABSTRACT: In the talk, we present a time deformation return model for the return process, that is, the series of differences of two adjacent log-prices of a stock. In particular, the trading duration process is used as the directing process to deform calendar time. For this model, we develop a procedure based on the method of generalized moments for parameter estimation, in which the number of moments is chosen through simulations. We illustrate the proposed model and inferential procedure with an analysis of IBM stock data.
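A generic sketch of this kind of simulation-based moment matching (an assumed illustration; simulate stands for any user-supplied model simulator, and the three moments used here are an arbitrary choice):

    import numpy as np
    from scipy.optimize import minimize

    def smm_estimate(data, simulate, theta0, n_sim=20, seed=0):
        # Match sample moments of the data to the average of the same
        # moments computed on series simulated from the model.
        def sample_moments(x):
            x = np.asarray(x)
            return np.array([x.mean(), x.var(), ((x - x.mean()) ** 3).mean()])
        target = sample_moments(data)
        def objective(theta):
            rng = np.random.default_rng(seed)  # common random numbers
            sims = [sample_moments(simulate(theta, len(data), rng))
                    for _ in range(n_sim)]
            return float(((np.mean(sims, axis=0) - target) ** 2).sum())
        return minimize(objective, theta0, method="Nelder-Mead").x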
ABSTRACT: A technique for modeling contaminant transport based on Markov Process theory is developed. Transport is quantified by summing the first two moments of independent random displacements and applying the Central Limit Theorem (CLT) to obtain solute distributions of a Gaussian nature. For non-uniform flow fields the CLT is applied in a streamfunction / equi-travel time space and transforms are used to give concentrations in Cartesian co-ordinates. Simulations in uniform, radially converging and circular flow fields show the method to be two to three orders of magnitude faster than modelling with the advection-dispersion equation, using a control volume technique.
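A small numerical check of the moment-summation idea (an assumed illustration): by the CLT, the position after many independent displacements is close to Gaussian, with mean and variance equal to the sums of the step means and variances.

    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, n_particles = 200, 10000
    # Skewed individual displacements: Exponential(0.5) has mean 0.5, var 0.25.
    steps = rng.exponential(scale=0.5, size=(n_particles, n_steps))
    positions = steps.sum(axis=1)
    mu, var = n_steps * 0.5, n_steps * 0.25   # CLT prediction
    print(mu, positions.mean())               # ~100.0 vs empirical mean
    print(var, positions.var())               # ~50.0 vs empirical variance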
ABSTRACT: We consider, on the space of polynomials in $2n$ variables $X=x_1,\ldots,x_n$ and $Y=y_1,\ldots,y_n$, the usual action of the group $S_n\times S_n$. Using a classical result of Steinberg, this space $Q[X,Y]$ can be viewed as an $n!^2$-dimensional module over the invariants of the group. This is to say that polynomials in $X$ and $Y$ can be uniquely decomposed as linear expressions in covariants, with coefficients that are invariants. We use these results, together with restriction to $S_n$ (considered as a diagonal subgroup), to decompose diagonal alternants. In particular, we give an explicit basis for diagonal alternants, modulo the ideal generated by products of symmetric polynomials in $X$ and $Y$. The construction of this basis involves a very nice classification of configurations of $n$ points in $R^2$.
ABSTRACT: There is a rich and highly-developed combinatorial theory for Schur functions (Young tableaux, the Littlewood-Richardson Rule, etc.), but one can argue that it suffers from a few too many seemingly arbitrary choices and miracles.
On the other hand, Kashiwara's theory of crystal bases for quantum groups comes close to subsuming this theory, and at the same time (a) is canonical and (b) has a much greater range of applicability (namely, to the representations of semisimple Lie groups and algebras and their quantum analogues).
The main goal of our talk will be to explain that Kashiwara's theory can be developed at a purely combinatorial level, and need not rely on any of the representation theory of quantum groups. Even in type A, this leads to a more natural understanding of the combinatorics of Schur functions. Along the way, we hope to mention an open problem or two.
ABSTRACT: I introduce a family of prior distributions over multivariate distributions, based on the use of a `Dirichlet diffusion tree' to generate exchangeable data sets. These priors can be viewed as generalizations of Dirichlet processes and of Dirichlet process mixtures, but unlike simple mixtures, they can capture the hierarchical structure present in many distributions, by means of the latent diffusion tree underlying the data. This latent tree also provides a hierarchical clustering of the data, which, unlike ad hoc clustering methods, comes with probabilistic indications of uncertainty. The relevance of each variable to the clustering can also be determined. Although Dirichlet diffusion trees are defined in terms of a continuous-time process, posterior inference involves only finite-dimensional quantities, allowing computation to be performed by reasonably efficient Markov chain Monte Carlo methods. The methods are demonstrated on problems of modeling a two-dimensional density and of clustering gene expression data.
ABSTRACT: As a fascinating field of growing interest on the borderline between functional analysis, operator theory, differential equations and mathematical physics, nonlinear spectral theory has been extensively studied by many authors. Three European mathematicians, Furi, Martelli and Vignoli, made a major contribution by introducing a notion of spectrum for continuous maps. Later, a new spectrum that, as in the linear case, contains the eigenvalues of the operator was introduced by the speaker. Both theories have wide applications.
We shall discuss the application of the new theory to the study of some nonlinear integral operators such as Hammerstein integral operators. The results then will be used to prove existence of a solution for a second order differential equation under three-point boundary conditions. A generalization of the Borsuk-Ulam theorem will also be given.
ABSTRACT: I will discuss two recent results about the complex of injective words, both of which are joint work with Phil Hanlon. The first is a Hodge-type decomposition for the S_n-module structure for the (top) homology, refining a recent formula of Reiner and Webb. A key ingredient was to show that the Eulerian idempotents interact in a nice fashion with the simplicial boundary map on the complex of injective words. The second result deals with a recent conjecture of Uyemura-Reyes, namely that the random-to-random shuffle operator has integral spectrum. We prove that this conjecture would imply that the Laplacian on each chain group in the complex of injective words also has integral spectrum.
ABSTRACT: Advances in protein expression, instrumentation and computation have made protein structure determination a process that can be measured in weeks/months rather than years. As a result, researchers can now focus more on biological problems than the act of structure determination itself. Increases in throughput are also crucial to drug screening and proteomics initiatives. My seminar will discuss how material (protein/DNA) is manufactured, how data is collected, and how structural calculations are performed.
ABSTRACT: The generalized symmetric group G(n,m) may be viewed as the group of square matrices of size n having one non-zero entry in each row and column, this entry being an m-th root of unity. We will talk about two actions of the (generalized) symmetric group on polynomials (the symmetrizing and quasi-symmetrizing actions) and will study their invariants and covariants (quotient by invariants). We will make important use of Grobner bases.
ABSTRACT: HFC (hydrogen fuel cells) are a very active research area due to their capability to produce pollution-free electromotive energy from a chemical reaction.
I will give an overview of the model equations that govern HFC fluid dynamics. I will present a FEM for 3D HFC fluid dynamics computations, some numerical results, and other issues related to HFC computations. I will discuss the boundary conditions on liquid-solid interfaces and on the free-air/porous-domain interface. In terms of geometry optimization of HFC performance, I will present some shape optimization techniques for the optimal design of the porous domain.
ABSTRACT: Large-scale software systems typically involve a large number of actors playing different roles, interacting with each other to achieve personal and common goals. As agent-based software technologies advance, systematic methods are needed to support the development of large-scale multi-agent systems. As with other kinds of software systems, successful system development relies on in-depth understanding of stakeholder needs and wants, and their effective translation into system requirements, and eventually into executable software. This paper presents a requirements engineering methodology based on agent concepts at the requirements modeling level. The strategic actor is used as the central organizing construct during requirements elicitation and analysis. In considering alternative arrangements of work processes and system interactions, strategic actors seek to exploit opportunities and avoid vulnerabilities. The methodology starts by building a lexicon as a preliminary step. The relevant actors are then identified. A breadth coverage step produces a first-cut model of the domain and the social relationships within it. The models are then developed depth-wise to capture organizational and individual goals and to explore alternatives. The methodology complements and extends the i* modeling framework. By taking into account agent characteristics such as autonomy, intentionality, and sociality starting from the requirements level, the methodology leads naturally into the development of large-scale systems that employ multi-agent software technologies. An example from the healthcare domain is used to illustrate the methodology.
ABSTRACT: The Clifford (or Weyl) algebras have natural representations on the exterior (or symmetric) algebras of polynomials in half of the generators. These representations are also important in quantum and statistical mechanics, where the generators are interpreted as operators which create or annihilate particles and satisfy Fermi (or Bose) statistics. In this talk, I will present a model for the extended affine Lie algebra over quantum tori via the fermionic (or bosonic) construction.
ABSTRACT: High Throughput Screening (HTS) is used in drug discovery to screen large numbers of compounds against a biological target. Data on activity against the target are collected for a representative sample (experimental design) of compounds selected from a collection. The explanatory variables are chemical descriptors of compound structure. Some previous work shows that local methods, namely K-nearest neighbours (KNN) and classification and regression trees (CART), perform very well. Some adaptations to KNN and CART including averaging over subsets of explanatory variables, bagging, and boosting, have also been considered. After briefly reviewing and comparing these techniques, I will focus on estimating activity and error rates for assessing model performance. This will shed some light on how various models handle large random or systematic errors in drug screening data.
ABSTRACT: What is the goal of education? Surely it is to connect students into the larger community -- a community that extends over space and time -- as well as prepare them for the demands of career and citizenship. In mathematics, this means more than covering the basic techniques of arithmetic, algebra and geometry. We need to take account of the fact that mathematics has a past and a cultural imprint and see our role as providing one more way in which a person reaching maturity can direct his interests and enthusiasms. I will argue that in doing this, we actually touch more strongly on important mathematical issues that will help even the students who will need a strong mathematical background for their careers and for higher education.
ABSTRACT: An essential goal in neuroscience is to develop links between the multiple levels of neural organization that include the molecular, cellular, multi-cellular, circuit and system levels. The complexity of brain networks makes their output impossible to understand using purely experimental techniques. Mathematical models with clear links to experimental data are uniquely poised to forge links between different neural levels. However, the development and the computational and mathematical analyses of appropriate physiological models are challenging. In large part, this is due to the specificity and the interdisciplinarity of the work.
Oscillatory output produced by networks of inhibitory neurons plays critical roles in brain function, including in the hippocampus. In this talk, I will present some of our modelling work, which shows that it is possible to link changes in inhibitory kinetics associated with anesthetics to specific alterations in inhibitory network patterns. This suggests that the effects of different anesthetic drugs might lie in the different ways in which these drugs modulate inhibitory network patterns. Future and ongoing work will also be discussed.
ABSTRACT: We present examples of how set theoretic methods are used to approach topological problems. We introduce two forcing notions, one is a modification of the well known Mathias forcing which helped us to obtain a result on the remainder of the Stone-Cech compactification of the reals; the other helped us to solve a question concerning the existence of perfectly normal nonrealcompact spaces.
The syllabus of the exam is available for perusal in N519 Ross.
ABSTRACT: Information theory is a branch of probability theory originated by Claude Shannon, who proposed a quantitative measure, now known as Shannon's entropy, of the amount of information supplied by a probabilistic experiment. Here we discuss the basic properties of the entropy and the proof of the uniqueness theorem.
Seminar requirement for Masters students.
Reminder: Master's Mathematics students are expected to attend the talks of other students. Documented evidence of attendance at 6 such talks is expected. Attendance sheets can be picked up from N519 Ross.
ABSTRACT: Several of the past talks in our seminar were concerned with descent algebras, the algebra of quasi-symmetric functions, and the algebra of peak functions.
We will discuss the links among these algebras; in particular, we will see that the peak functions algebra is closely related to the descent algebra of type B.
ABSTRACT: Euclid's systematization of mathematics was the crowning achievement of ancient science. His ordering of virtually all of the known propositions of mathematics into a coherent logically entailed system built upon reasonable axioms was seen as the path to certain knowledge. When Isaac Newton chose the same format for physics in the Principia Mathematica, the axiomatic system was established as the template for a scientific theory, regardless of the subject matter. The 18th and 19th centuries saw the creation of dozens of systematic formulations that purported to be scientific analyses of varieties of human endeavors that would have the same reliability and certainty as did Euclid's Elements. The only problem was getting true axioms. This talk will review some of those efforts to make social studies "Euclidean".