Special Session 132: Advances in Nonlinear PDE-based Models for Artificial Intelligence and Computer Vision

Active Contour-based Image Segmentation Framework using a Nonlinear Second-order Diffusion-based Model

Tudor Barbu
Institute of Computer Science of the Romanian Academy
Romania
Co-Author(s):    
Abstract:
Active contour models are well-known computer vision techniques for image segmentation. They are divided into parametric and geodesic active contours (GAC). In this work we introduce a novel PDE-based segmentation approach inspired by GAC models and the level-set method. The proposed segmentation scheme evolves level-set based active contours toward the boundaries of certain objects in the analyzed image. A second-order nonlinear anisotropic diffusion model is introduced here for this task. Its curve evolution equation is based on a level-set function u, representing the evolving function, and the image function v. It uses a properly chosen stopping function whose arguments are based on v, and a positive, monotonically decreasing diffusivity (conductance) function receiving combinations of gradients and Laplacians of u as arguments. A rigorous mathematical treatment of this nonlinear parabolic PDE model is performed and its validity is investigated. We demonstrate that it admits a unique weak solution under certain assumptions. The model is then solved numerically by applying a finite difference-based approximation algorithm that we developed, which provides successful results when applied to image objects. The discrete u is initialized as a square contour covering almost the entire image and evolves to the objects' edges in a few iterations (fewer than 100). The proposed active contour-based segmentation solution can be applied successfully to important computer vision tasks, such as object detection and tracking.
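The evolution described above can be illustrated with a minimal sketch: a level-set function u initialized as a square contour is diffused under an edge-stopping function g(v) that is small near strong gradients of the image v. This is a simplified explicit scheme (pure Laplacian diffusion with periodic boundaries, stopping function g = 1/(1 + |∇v|²)) for illustration only; all function names and the particular g are assumptions, not the scheme from the talk.

```python
import numpy as np

def stopping_function(v):
    # Edge-stopping function g = 1 / (1 + |grad v|^2); near zero at strong edges,
    # so diffusion of u slows down where the image v has object boundaries.
    gy, gx = np.gradient(v)
    return 1.0 / (1.0 + gx**2 + gy**2)

def evolve_level_set(v, n_iter=100, dt=0.1):
    """Evolve a square initial contour toward object edges.

    Simplified explicit scheme: u_t = g(v) * Laplacian(u), with the zero
    level set of u playing the role of the active contour.
    """
    h, w = v.shape
    # Initialize u as a square contour covering almost the entire image:
    # +1 outside the square, -1 inside (a crude signed indicator).
    u = np.ones((h, w))
    m = max(2, min(h, w) // 10)
    u[m:h - m, m:w - m] = -1.0
    g = stopping_function(v)
    for _ in range(n_iter):
        # 5-point Laplacian with periodic boundary conditions (np.roll).
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + dt * g * lap
    return u
```

On a synthetic image containing a single bright square, the zero level set of the returned u contracts from the image border and slows near the square's edges, mimicking the qualitative behavior of the contour evolution.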

Blow-up, Stability, and Decay Bounds in Anisotropic Diffusion for Image Processing

Cristian ENACHE
American University of Sharjah
United Arab Emirates
Co-Author(s):    Cristian Enache, Eylem Ozturk
Abstract:
This talk explores anisotropic diffusion problems governed by the Finsler Laplacian, with applications in image processing. We establish precise conditions for finite-time blow-up, which can model sharp transitions in tasks such as image segmentation. Additionally, we demonstrate global existence for appropriate data, ensuring the stability of diffusion-based methods. Finally, we derive explicit exponential decay bounds, providing insight into the long-term behavior of these processes, which is relevant for efficient image filtering and enhancement. These results offer a rigorous foundation for nonlinear PDE models used not only in computer vision and AI, but also in other areas of science and engineering.
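For readers less familiar with the operator, a commonly studied form of the anisotropic (Finsler) Laplacian is the following; here H denotes a Finsler norm on $\mathbb{R}^n$ (this notation is standard background, not taken from the talk):

```latex
\Delta_H u \;=\; \operatorname{div}\!\bigl( H(\nabla u)\, \nabla_{\xi} H(\nabla u) \bigr),
```

which reduces to the ordinary Laplacian when $H(\xi) = |\xi|$; the anisotropy of $H$ is what allows direction-dependent diffusion in the image-processing applications mentioned above.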

The optical flow problem: an optimal control approach

Gabriela Marinoschi
Gheorghe Mihoc-Caius Iacob Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy
Romania
Co-Author(s):    Gabriela Marinoschi
Abstract:
The optical flow problem consists in determining the motion, or more precisely the velocity field, of an object function representing the brightness pattern in an image. The optical flow problem is reduced to an optimal control problem governed by a linear parabolic equation having the unknown velocity field (the optical flow) as the drift term. This model is derived from a new assumption, namely that the brightness intensity is conserved on a moving pattern driven by a Gaussian stochastic process. The optimality conditions are deduced by a passage-to-the-limit technique in an approximating optimal control problem introduced for regularization purposes. Finally, the uniqueness of the controller is addressed. This optical flow estimation solution can be applied successfully in AI- and CV-based domains such as video object detection and tracking. This is a joint work with V. Barbu.
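As background (notation assumed here, not from the abstract): the classical brightness constancy assumption states that the brightness $v$ is conserved along deterministic motion, $\partial_t v + w \cdot \nabla v = 0$, with $w$ the optical flow. When the moving pattern is instead driven by a Gaussian stochastic process, conservation of brightness in the mean introduces, via Itô's formula, an additional diffusion term, yielding a linear parabolic state equation with $w$ as drift, of the type

```latex
\partial_t y \;-\; \frac{\sigma^{2}}{2}\,\Delta y \;+\; w \cdot \nabla y \;=\; 0,
```

where $\sigma$ is the (assumed) noise intensity; the optimal control problem then seeks the drift $w$ matching the observed brightness data.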

Deep Ridgelet Transform: Harmonic Analysis for Deep Neural Network

Sho Sonoda
RIKEN
Japan
Co-Author(s):    
Abstract:
The ridgelet transform has been developed to study neural network parameters, and it can describe the distribution of the parameters. Mathematically, it is defined as a pseudo-inverse operator of neural networks. Namely, given a function $f$ and a network $NN[\gamma]$ with parameter $\gamma$, the ridgelet transform $R[f]$ for the network $NN$ satisfies the reconstruction formula $NN[R[f]]=f$. For depth-2 fully-connected networks on a Euclidean space, the ridgelet transform is known in closed form, so we can describe how the parameters are distributed. However, for many modern neural network architectures, no closed-form expression has been known. In this talk, I will introduce a systematic method for inducing generalized neural networks and their corresponding ridgelet transforms from group-equivariant functions, and present an application to deep neural networks.
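For the depth-2 fully-connected case mentioned above, the classical closed-form objects can be sketched as follows (a standard formulation, with notation assumed here: $\sigma$ is the activation and $\rho$ an admissible ridgelet function paired with it):

```latex
R[f](a,b) \;=\; \int_{\mathbb{R}^{m}} f(x)\,\overline{\rho(a \cdot x - b)}\,\mathrm{d}x,
\qquad
NN[\gamma](x) \;=\; \int \gamma(a,b)\,\sigma(a \cdot x - b)\,\mathrm{d}a\,\mathrm{d}b,
```

so that $NN[R[f]] = f$ holds (up to a constant factor) under an admissibility condition on the pair $(\sigma, \rho)$; the weight $R[f](a,b)$ thus describes how the parameters $(a,b)$ are distributed.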