
Instituto de Ingeniería Matemática y Computacional

Facultad de Matemáticas - Escuela de Ingeniería

Activities

 

Víctor Verdugo, Instituto de Ingeniería Matemática y Computacional (IMC) y Departamento de Ingeniería Industrial y de Sistemas, Pontificia Universidad Católica de Chile.

Wednesday, October 9, 2024, 1:40 PM (in person, Auditorio Edificio San Agustín; Zoom link available upon request by email)

ABSTRACT

Prophet inequalities are a cornerstone of optimal stopping and online decision-making. Traditionally, they involve the sequential observation of n non-negative independent random variables and irrevocable accept-or-reject choices. The goal is to design policies that achieve a good approximation ratio against the optimal offline solution, which can access all the values upfront -- the so-called prophet value. In the prophet inequality over time problem, the decision-maker can commit to an accepted value for T units of time, during which no new values can be accepted. This creates a trade-off between the duration of commitment and the opportunity to capture potentially higher future values.

In this work we provide optimal approximation guarantees for the IID setting. We present a single-threshold algorithm that achieves an approximation ratio of (1+1/e^2)/2 ≈ 0.567, and we prove that no single-threshold algorithm can surpass this guarantee. Then, for each n, we show that the tight worst-case approximation ratio of the optimal dynamic programming policy on instances with n values can be computed by solving a convex optimization program. A limit analysis of the first-order optimality conditions yields a nonlinear differential equation showing that the asymptotic worst-case approximation ratio of the optimal dynamic programming policy is ≈0.618. Joint work with Sebastian Pérez-Salazar (Rice University, USA).
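A minimal Monte Carlo sketch of the single-threshold idea described above: accept the first value at or above a fixed threshold, earn it for T periods while committed, then become free again. The Exp(1) value distribution, the threshold, and the crude prophet benchmark (the maximum value held every period) are illustrative assumptions for this sketch, not the paper's construction.

```python
import random

def threshold_policy_reward(values, tau, T):
    """Total reward of a single-threshold policy on one sample path:
    accept the first value >= tau, hold it (earning it each period)
    for T periods, then become free to accept again."""
    reward, i, n = 0.0, 0, len(values)
    while i < n:
        if values[i] >= tau:
            held = min(T, n - i)       # commitment truncated at the horizon
            reward += values[i] * held
            i += held
        else:
            i += 1                     # reject and move on, earning nothing
    return reward

def crude_prophet_bound(values):
    # loose upper bound: a prophet holding the maximum value every period
    return max(values) * len(values)

random.seed(0)
n, T, tau, trials = 200, 10, 1.0, 2000
ratios = []
for _ in range(trials):
    vals = [random.expovariate(1.0) for _ in range(n)]   # IID Exp(1) values
    ratios.append(threshold_policy_reward(vals, tau, T) / crude_prophet_bound(vals))
print(f"empirical ratio vs. crude prophet bound: {sum(ratios) / trials:.3f}")
```

Because the benchmark here is looser than the true offline optimum, the empirical ratio printed is a pessimistic estimate, useful only to see the policy in action.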

 

Leonardo Zepeda-Núñez, Google Research.

Thursday, September 12, 2024, 1:40 PM (in person, Auditorio Edificio San Agustín; Zoom link available upon request by email)

ABSTRACT

The advent of generative AI has turbocharged the development of a myriad of commercial applications, and it has slowly started to permeate into scientific computing. In this talk we discuss how recasting old and new problems within a probabilistic framework opens the door to leveraging and tailoring state-of-the-art generative AI tools. We review recent advances in probabilistic SciML – including computational fluid dynamics, inverse problems, and particularly climate science, with an emphasis on statistical downscaling.

Statistical downscaling is a crucial tool for analyzing the regional effects of climate change under different climate models: it seeks to transform low-resolution data produced by a (potentially biased) but computationally inexpensive coarse-grained numerical scheme into high-resolution data consistent with high-fidelity models.

We recast this problem in a two-stage probabilistic framework using unpaired data by combining two transformations: a debiasing step performed by an optimal transport map, followed by an upsampling step achieved through a probabilistic conditional diffusion model. Our approach characterizes the conditional distribution without requiring paired data and faithfully recovers relevant physical statistics, even from biased samples.
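In one dimension the optimal transport map between two distributions is the monotone rearrangement, so the debiasing step can be sketched as empirical quantile mapping. The Gaussian "biased" and "reference" samples below are stand-ins for coarse-model and high-fidelity data; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def quantile_map(biased, reference):
    """1-D empirical optimal transport (monotone rearrangement):
    send each biased sample to the reference quantile of its own rank."""
    ranks = np.argsort(np.argsort(biased))     # rank of each sample in [0, n)
    q = (ranks + 0.5) / len(biased)            # empirical CDF values in (0, 1)
    return np.quantile(reference, q)           # pull back through reference quantiles

rng = np.random.default_rng(0)
biased = rng.normal(loc=1.0, scale=2.0, size=5000)      # "coarse model" output, biased
reference = rng.normal(loc=0.0, scale=1.0, size=5000)   # "high-fidelity" statistics
debiased = quantile_map(biased, reference)
print(f"debiased mean = {debiased.mean():.2f}, std = {debiased.std():.2f}")  # ≈ 0, 1
```

Because the map only uses ranks, it needs no pairing between the two sample sets, mirroring the unpaired setting of the abstract.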

We show that our method generates statistically correct high-resolution outputs from low-resolution inputs for different chaotic systems, including well-known climate models and weather data. The framework can upsample by factors of up to 300x while accurately matching the statistics of relevant physical quantities – even when the low-frequency content of the inputs and outputs differs, a crucial yet challenging requirement that existing state-of-the-art methods usually struggle with.

 

Matthew Jacobs, Department of Mathematics, University of California, Santa Barbara.

Friday, August 23, 2024, 1:40 PM (in person, Sala de Usos Múltiples, first floor, Edificio Felipe Villanueva, Facultad de Matemáticas)

ABSTRACT

A particularly prominent area of machine learning is generative modeling, in which one seeks to artificially generate new data from a given input; for instance, generating images from a chosen text prompt. Mathematically, this can be posed as finding a map that pushes a simple probability distribution, such as a standard Gaussian, to a complicated probability distribution. New data can then be created by sampling random points from the simple distribution and sending them through the constructed map.
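A toy illustration of this push-forward view, with a hand-crafted (not learned) map sending a standard Gaussian to a bimodal distribution:

```python
import random

def pushforward(z):
    """Hand-crafted map T pushing N(0,1) to a bimodal mixture:
    shift samples left or right depending on sign (illustrative only)."""
    return z + (3.0 if z > 0 else -3.0)

random.seed(0)
# sample the simple distribution, then send each point through the map
samples = [pushforward(random.gauss(0.0, 1.0)) for _ in range(10000)]

# the pushed-forward samples concentrate near the two modes at -3 and +3
near_modes = sum(1 for s in samples if abs(abs(s) - 3.0) < 2.0) / len(samples)
print(f"fraction of samples near the two modes: {near_modes:.2f}")
```

In a real generative model the map is a neural network fit to data rather than a formula, but the sampling recipe is exactly this: draw from the simple distribution, apply the map.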

Currently, one of the most popular paradigms used in generative modeling is diffusion modeling, where the map is created by observing the behavior of a diffusion equation applied to the given data. In this talk, I will give an introduction to diffusion modeling, discuss some of my work approaching the problem from a deterministic PDE perspective (as opposed to the more typical stochastic approach), and mention some interesting open questions in this area.
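The forward half of this picture can be sketched deterministically: evolving a data density under the 1-D heat equation smooths it toward a featureless profile, which is the noising direction a diffusion model learns to reverse. The grid, time step, and bimodal initial density below are illustrative choices, not taken from the talk.

```python
import numpy as np

# Forward "diffusion" of a data density via the 1-D heat equation u_t = u_xx,
# explicit finite differences with periodic boundaries (illustrative only).
nx, dx, dt, steps = 200, 0.05, 0.001, 500        # dt/dx^2 = 0.4 <= 0.5: stable
x = (np.arange(nx) - nx / 2) * dx
u = np.exp(-20 * (x - 2) ** 2) + np.exp(-20 * (x + 2) ** 2)   # bimodal "data"
u /= u.sum() * dx                                 # normalize to unit mass

for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2    # periodic Laplacian
    u = u + dt * lap                               # one explicit Euler step

# mass is conserved while the two sharp modes blur together
print(f"mass = {u.sum() * dx:.3f}, peak height = {u.max():.3f}")
```

Running the dynamics backward in time is ill-posed for the PDE itself; generative diffusion models sidestep this by learning a score (or, in deterministic formulations, a velocity field) from data.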

 

Eduardo Cerpa, Dean of the Facultad de Matemáticas UC.

Wednesday, June 12, 2024, 1:40 PM (in person, Auditorio Edificio San Agustín; Zoom link available upon request by email)

ABSTRACT

Electrical stimulation therapies are used to treat symptoms of various disorders of the nervous system. In this context, the use of high-frequency signals has received much attention due to its effects on tissues and cells. In this talk we will see how mathematical methods are useful for addressing some relevant questions when the neuron is described by a FitzHugh-Nagumo model. Here the stimulation enters through a source term in the ordinary differential equation, and the activation level of the neuron is associated with the existence of action potentials, which are solutions with a specific behavior. A first question concerns the effectiveness of a recent technique called interferential currents, which combines two kilohertz-frequency signals with the goal of achieving deep activation. A second question is how to avoid the unwanted onset activation that arises when stimulation with these signals begins. We will present theoretical and computational results using methods such as averaging, Lyapunov analysis, quasi-static deformation, and others.
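A minimal sketch of the model class discussed above: the FitzHugh-Nagumo system driven by a source term, integrated with forward Euler. The parameters are standard textbook values and the constant sub/suprathreshold drives are illustrative; the interferential-current setting and the averaging analysis of the talk are not reproduced here.

```python
# FitzHugh-Nagumo neuron with an external source term I(t):
#   v' = v - v^3/3 - w + I(t),   w' = eps * (v + a - b*w)
# (textbook parameters, not those of the talk)
A, B, EPS = 0.7, 0.8, 0.08

def peak_voltage(stim, t_end=100.0, dt=1e-3):
    """Integrate with forward Euler and return the maximum of v,
    a crude proxy for whether an action potential fired."""
    v, w = -1.2, -0.625            # start near the I = 0 resting state
    v_max, t = v, 0.0
    while t < t_end:
        dv = v - v**3 / 3 - w + stim(t)
        dw = EPS * (v + A - B * w)
        v, w = v + dt * dv, w + dt * dw
        v_max = max(v_max, v)
        t += dt
    return v_max

print(f"no stimulation:        peak v = {peak_voltage(lambda t: 0.0):.2f}")
print(f"suprathreshold source: peak v = {peak_voltage(lambda t: 0.5):.2f}")
```

Without stimulation the trajectory stays near rest (v below zero throughout), while a sufficiently strong source pushes the system onto a spiking limit cycle with peaks around v ≈ 2; probing how oscillatory kilohertz sources interact with this threshold behavior is the subject of the talk.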

 

Nicolás Barnafi, Instituto de Ingeniería Matemática y Computacional (IMC) y Facultad de Biología, Pontificia Universidad Católica de Chile.

Wednesday, August 28, 2024, 1:40 PM (in person, Auditorio Edificio San Agustín; Zoom link available upon request by email)

ABSTRACT

The equations of nonlinear poroelasticity describe the flow of one or more fluid phases through a complex network within a solid undergoing large deformations. One iconic example of such materials is soft tissue, but many other applications are relevant, such as hydrogels, CO2 sequestration, and salt filters. We have identified one important source of difficulty, termed primal inconsistency, which has given rise to novel formulations that are yet to be analyzed. Still, there is no general framework within which these equations can be analyzed; the most successful one so far is that of generalized gradient flows. This formalism has given deep insight into the origin of classical fixed-point iterations used to solve linear models, such as the fixed-stress and undrained splits. In this talk, I will give an overview of these topics and hope to spark some curiosity to tackle these open challenges.
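The flavor of a fixed-stress-type splitting can be sketched on a toy linear block system: add a stabilization term to the flow block and alternate flow and mechanics solves until the iterates stop moving. The random SPD matrices and the choice of stabilization below are stand-ins, not a discretized Biot model.

```python
import numpy as np

# Toy linear Biot-like block system  [K  -B; B^T  S] [u; p] = [f; g]:
# u plays the role of displacement, p of pressure, B of the coupling.
rng = np.random.default_rng(1)
n = 5
K = np.eye(n) * 4 + rng.random((n, n)) * 0.1
K = (K + K.T) / 2                         # symmetrize the "stiffness" block
S = np.eye(n) * 2                         # "storage" block
B = rng.random((n, n)) * 0.5              # coupling block
f, g = rng.random(n), rng.random(n)

L = 0.5 * np.eye(n)                       # fixed-stress-style stabilization
u, p = np.zeros(n), np.zeros(n)
for it in range(100):
    # flow step with stabilization, then mechanics step with the new pressure
    p_new = np.linalg.solve(S + L, g - B.T @ u + L @ p)
    u_new = np.linalg.solve(K, f + B @ p_new)
    done = max(np.abs(u_new - u).max(), np.abs(p_new - p).max()) < 1e-10
    u, p = u_new, p_new
    if done:
        break

# verify the splitting against the monolithic solve
A = np.block([[K, -B], [B.T, S]])
sol = np.linalg.solve(A, np.concatenate([f, g]))
ok = np.allclose(np.concatenate([u, p]), sol, atol=1e-8)
print(f"splitting matches monolithic solve: {ok} (after {it + 1} iterations)")
```

The stabilization L is what distinguishes a convergent fixed-stress-type iteration from naive block Gauss-Seidel; the gradient-flow viewpoint mentioned in the abstract explains where such terms come from in the PDE setting.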