<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Research Survey on Nam Le</title><link>http://lnhutnam.github.io/en/categories/research-survey/</link><description>Recent content in Research Survey on Nam Le</description><generator>Hugo</generator><language>en-US</language><lastBuildDate>Mon, 30 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="http://lnhutnam.github.io/en/categories/research-survey/index.xml" rel="self" type="application/rss+xml"/><item><title>Recent Advances in KAN-Based Numerical PDE Solvers</title><link>http://lnhutnam.github.io/en/posts/kan-pde-solvers/</link><pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate><guid>http://lnhutnam.github.io/en/posts/kan-pde-solvers/</guid><description>&lt;p>Kolmogorov-Arnold Networks (KANs), introduced in 2024, have rapidly become one of the most active frontiers in scientific machine learning for solving partial differential equations (PDEs) (Liu et al., 2024). Unlike Multi-Layer Perceptrons (MLPs), which apply fixed activation functions at nodes, KANs place &lt;strong>learnable univariate activation functions on edges&lt;/strong>, grounded in the Kolmogorov-Arnold representation theorem: every continuous multivariate function can be expressed as a composition of univariate functions and summations. This structural difference gives KANs two key properties relevant to PDE numerics — &lt;strong>higher interpretability&lt;/strong> and &lt;strong>parameter efficiency&lt;/strong> — making them an appealing successor to MLP-based Physics-Informed Neural Networks (PINNs).&lt;/p>
&lt;p>From 2024 through early 2026, researchers have published dozens of frameworks combining KANs with classical numerical concepts — spectral methods, energy-stable time-stepping, and neural operator learning — targeting problems ranging from single PDEs to high-dimensional systems with hundreds of variables.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="overview">
 Overview&lt;span class="heading__anchor"> &lt;a href="#overview">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>The KAN-for-PDEs landscape organises into several interrelated research threads:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Physics-Informed KAN Frameworks (PIKANs / KINN)&lt;/strong> — direct replacements of MLP layers in PINNs with KAN layers, using strong, energy, and inverse PDE formulations.&lt;/li>
&lt;li>&lt;strong>Spectral-Basis and Wavelet-Enriched KANs&lt;/strong> — embedding orthogonal polynomial or wavelet bases to combat spectral bias.&lt;/li>
&lt;li>&lt;strong>KAN-Based Neural Operators&lt;/strong> — KAN sub-networks inside DeepONet, FNO, and pseudo-differential operator frameworks for learning PDE solution maps.&lt;/li>
&lt;li>&lt;strong>Time-Dependent and Evolutionary KANs&lt;/strong> — energy-stable schemes, KAN-ODEs, and moving-boundary solvers.&lt;/li>
&lt;li>&lt;strong>Discontinuities, Shock Waves, and Turbulence&lt;/strong> — specialised architectures for sharp transitions.&lt;/li>
&lt;li>&lt;strong>High-Dimensional PDEs&lt;/strong> — separable and tensor-product KAN surrogates scaling to hundreds of dimensions.&lt;/li>
&lt;li>&lt;strong>Data-Driven Discovery and Inverse Problems&lt;/strong> — interpretability-driven model identification.&lt;/li>
&lt;/ol>
&lt;div class="table-wrapper">&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Architecture&lt;/th>
 &lt;th>Key Strength&lt;/th>
 &lt;th>Representative Work&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>KINN&lt;/td>
 &lt;td>Forward/inverse problems, strong/energy/inverse forms&lt;/td>
 &lt;td>Wang et al., 2024&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>ChebPIKAN&lt;/td>
 &lt;td>Fluid mechanics PDEs, orthogonal basis&lt;/td>
 &lt;td>Cui et al., 2024&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KANO&lt;/td>
 &lt;td>Symbolic operator recovery, variable-coefficient PDEs&lt;/td>
 &lt;td>arXiv:2509.16825&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>EvoKAN&lt;/td>
 &lt;td>Long-horizon time evolution, energy stability&lt;/td>
 &lt;td>arXiv:2503.01618&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Anant-KAN&lt;/td>
 &lt;td>High-dimensional PDEs (up to 300D)&lt;/td>
 &lt;td>arXiv:2505.03595&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>DPINN&lt;/td>
 &lt;td>Shock waves and discontinuities&lt;/td>
 &lt;td>arXiv:2507.08338&lt;/td>
 &lt;/tr>
 &lt;/tbody>
 &lt;/table>
&lt;/div>&lt;hr>
&lt;h2 class="heading" id="background">
 Background&lt;span class="heading__anchor"> &lt;a href="#background">#&lt;/a>&lt;/span>
&lt;/h2>&lt;h3 class="heading" id="the-kolmogorov-arnold-representation-theorem">
 The Kolmogorov-Arnold Representation Theorem&lt;span class="heading__anchor"> &lt;a href="#the-kolmogorov-arnold-representation-theorem">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The theoretical foundation of KANs is the Kolmogorov-Arnold theorem: any continuous function $f: [0,1]^n \to \mathbb{R}$ can be written as&lt;/p>
&lt;p>$$f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right),$$&lt;/p>
&lt;p>where $\phi_{q,p}: [0,1] \to \mathbb{R}$ and $\Phi_q: \mathbb{R} \to \mathbb{R}$ are univariate continuous functions. In contrast to MLPs — where activations are fixed and weights are learned — KANs &lt;strong>parameterise the activation functions themselves&lt;/strong> (typically as B-splines or orthogonal polynomials) on each edge of the network graph.&lt;/p>
&lt;h3 class="heading" id="physics-informed-neural-networks-pinns--the-starting-point">
 Physics-Informed Neural Networks (PINNs) — The Starting Point&lt;span class="heading__anchor"> &lt;a href="#physics-informed-neural-networks-pinns--the-starting-point">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>PINNs (Raissi, Perdikaris, &amp;amp; Karniadakis, 2019) embed physical laws directly into the neural network loss function. For a PDE $\mathcal{N}[u] = f$ on domain $\Omega$ with boundary condition $\mathcal{B}[u] = g$ on $\partial\Omega$, the PINN loss is&lt;/p>
&lt;p>$$\mathcal{L} = \underbrace{\frac{1}{N_r}\sum_{i=1}^{N_r}\left|\mathcal{N}[u_\theta](x_i) - f(x_i)\right|^2}_{\text{PDE residual}} + \underbrace{\frac{1}{N_b}\sum_{j=1}^{N_b}\left|\mathcal{B}[u_\theta](x_j) - g(x_j)\right|^2}_{\text{boundary condition}}.$$&lt;/p>
&lt;p>Substituting KAN layers for the MLP in this framework is the basic idea behind all PIKAN architectures.&lt;/p>
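&lt;p>The loss above can be sketched for a 1-D Poisson problem $u'' = f$ on $(0,1)$ with homogeneous Dirichlet conditions; here $u_\theta$ is a tiny MLP (swapping in a KAN layer yields the basic PIKAN), and the manufactured source is our choice:&lt;/p>

```python
import torch

# PINN loss sketch: PDE residual at random collocation points plus a
# Dirichlet boundary penalty, trained with Adam.
torch.manual_seed(0)
u_theta = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)  # exact u = sin(pi x)

def pinn_loss(n_r=64):
    x = torch.rand(n_r, 1, requires_grad=True)            # collocation points
    u = u_theta(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = ((d2u - f(x)) ** 2).mean()                 # PDE residual term
    xb = torch.tensor([[0.0], [1.0]])
    boundary = (u_theta(xb) ** 2).mean()                  # boundary term
    return residual + boundary

opt = torch.optim.Adam(u_theta.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = pinn_loss()
    loss.backward()
    opt.step()
```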
&lt;hr>
&lt;h2 class="heading" id="recent-developments">
 Recent Developments&lt;span class="heading__anchor"> &lt;a href="#recent-developments">#&lt;/a>&lt;/span>
&lt;/h2>&lt;h3 class="heading" id="1-physics-informed-kan-frameworks">
 1. Physics-Informed KAN Frameworks&lt;span class="heading__anchor"> &lt;a href="#1-physics-informed-kan-frameworks">#&lt;/a>&lt;/span>
&lt;/h3>&lt;h4 class="heading" id="kinn--the-foundational-framework">
 KINN — The Foundational Framework&lt;span class="heading__anchor"> &lt;a href="#kinn--the-foundational-framework">#&lt;/a>&lt;/span>
&lt;/h4>&lt;p>The &lt;strong>Kolmogorov-Arnold-Informed Neural Network (KINN)&lt;/strong> is the primary physics-informed framework replacing MLP layers in PINNs with KAN layers (Wang et al., 2024). KINN supports three PDE formulations: the &lt;strong>strong form&lt;/strong> (collocating the PDE residual directly), the &lt;strong>energy form&lt;/strong> (minimising a variational energy functional), and the &lt;strong>inverse form&lt;/strong> (recovering unknown parameters from observations).&lt;/p>
&lt;p>Systematic benchmarks demonstrate that KINN significantly outperforms MLP-based PINNs in accuracy and convergence speed for multi-scale problems, stress concentration, singularities, nonlinear hyperelasticity, and heterogeneous materials. The one domain where MLPs remain competitive is problems on complex geometries. Published in &lt;em>Computer Methods in Applied Mechanics and Engineering&lt;/em> (2024), KINN has become the canonical reference for subsequent KAN-PDE research.&lt;/p>
&lt;h4 class="heading" id="chebyshev-and-polynomial-basis-pikans">
 Chebyshev and Polynomial Basis PIKANs&lt;span class="heading__anchor"> &lt;a href="#chebyshev-and-polynomial-basis-pikans">#&lt;/a>&lt;/span>
&lt;/h4>&lt;p>A major architectural refinement has been substituting B-spline basis functions with &lt;strong>orthogonal polynomial bases&lt;/strong>. The &lt;strong>ChebPIKAN&lt;/strong> model leverages orthogonality of Chebyshev polynomials and integrates physics-informed loss functions for fluid-mechanics PDEs including the Allen-Cahn, Burgers, Helmholtz, Kovasznay flow, cylinder wake flow, and cavity flow equations (Cui et al., 2024). ChebPIKAN significantly outperforms vanilla KAN by embedding essential physical information and alleviating overfitting.&lt;/p>
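&lt;p>The core of a Chebyshev-basis KAN edge is easy to state: inputs are squashed into $[-1,1]$ with $\tanh$, expanded in Chebyshev polynomials $T_k$ via the three-term recurrence, and combined with learnable per-edge coefficients. A hedged sketch (class and parameter names are ours, not the paper's):&lt;/p>

```python
import torch

# Chebyshev-basis KAN layer sketch, the core idea behind ChebPIKAN.
class ChebKANLayer(torch.nn.Module):
    def __init__(self, n_in, n_out, degree=5):
        super().__init__()
        self.degree = degree
        self.coef = torch.nn.Parameter(
            0.1 * torch.randn(n_out, n_in, degree + 1))

    def forward(self, x):                         # x: (batch, n_in)
        t = torch.tanh(x)                         # map into T_k's domain
        T = [torch.ones_like(t), t]               # T_0, T_1
        for _ in range(self.degree - 1):
            T.append(2 * t * T[-1] - T[-2])       # T_{k+1} = 2x T_k - T_{k-1}
        T = torch.stack(T, dim=-1)                # (batch, n_in, degree + 1)
        return torch.einsum('bik,oik->bo', T, self.coef)

y = ChebKANLayer(3, 4)(torch.randn(8, 3))
assert y.shape == (8, 4)
```

&lt;p>Orthogonality of the $T_k$ conditions the optimisation better than overlapping spline bases, which is the refinement the text describes.&lt;/p>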
&lt;p>The &lt;strong>AC-PKAN&lt;/strong> (Attention-Enhanced Chebyshev PKAN) further addresses the &lt;em>rank collapse&lt;/em> problem in Chebyshev-based KANs by integrating wavelet-activated MLPs with an internal attention mechanism, provably preserving a full-rank Jacobian and approximating PDEs of arbitrary order (arXiv:2505.08687). An external &lt;strong>Residual Gradient Attention (RGA)&lt;/strong> mechanism dynamically re-weights individual loss terms based on gradient norms, stabilising training of stiff PDE systems.&lt;/p>
&lt;p>The &lt;strong>Legendre-KAN&lt;/strong> method applies Legendre polynomial orthogonality to solve the fully nonlinear Monge-Ampère equation with Dirichlet boundary conditions, demonstrating effectiveness on both smooth and singular solutions across various dimensions and in the optimal transport problem.&lt;/p>
&lt;h4 class="heading" id="hybrid-kanmlp-and-augmented-lagrangian-approaches">
 Hybrid KAN–MLP and Augmented Lagrangian Approaches&lt;span class="heading__anchor"> &lt;a href="#hybrid-kanmlp-and-augmented-lagrangian-approaches">#&lt;/a>&lt;/span>
&lt;/h4>&lt;p>The &lt;strong>AL-PKAN&lt;/strong> introduces a hybrid encoder-decoder architecture where the decoder maps hidden variable features from high-dimensional latent space into trainable univariate activation functions via KAN (Zhang et al., 2025). An augmented Lagrangian function treats penalty factors and Lagrangian multipliers as learnable parameters to dynamically balance constraint terms. This approach typically improves prediction accuracy by &lt;strong>one to two orders of magnitude&lt;/strong> compared to traditional neural networks.&lt;/p>
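&lt;p>The augmented-Lagrangian ingredient can be sketched in a few lines: the constraint violation enters the loss through a Lagrange multiplier and a penalty factor, both treated as trainable, as in AL-PKAN (this is our illustrative simplification of the paper's scheme, and the variable names are ours):&lt;/p>

```python
import torch

# Augmented-Lagrangian loss sketch with trainable multiplier and penalty.
lam = torch.nn.Parameter(torch.zeros(1))      # Lagrange multiplier
log_mu = torch.nn.Parameter(torch.zeros(1))   # log-penalty keeps mu positive

def augmented_loss(pde_residual, constraint):
    mu = torch.exp(log_mu)
    # L = residual + lambda * c + (mu / 2) * c^2
    return pde_residual + lam * constraint + 0.5 * mu * constraint ** 2

loss = augmented_loss(torch.tensor(0.5), torch.tensor(0.1))
```

&lt;p>In practice the multiplier is driven by gradient ascent (maximising in $\lambda$ while minimising in $\theta$), which is what dynamically balances the constraint terms.&lt;/p>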
&lt;p>The &lt;strong>HPKM-PINN&lt;/strong> combines MLP and KAN branches with a trainable convex mixing parameter to blend features optimally across subdomains, especially effective for multi-scale problems.&lt;/p>
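&lt;p>The trainable convex mixing in HPKM-PINN amounts to a single learnable scalar $\alpha \in (0,1)$ blending the two branch outputs; a minimal sketch with placeholder branch modules:&lt;/p>

```python
import torch

# Convex blend of an MLP branch and a KAN branch via a sigmoid-squashed
# trainable scalar; alpha starts at 0.5.
class HybridBlend(torch.nn.Module):
    def __init__(self, mlp_branch, kan_branch):
        super().__init__()
        self.mlp, self.kan = mlp_branch, kan_branch
        self.logit = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x):
        a = torch.sigmoid(self.logit)                 # convex weight in (0, 1)
        return a * self.mlp(x) + (1 - a) * self.kan(x)

# placeholder branches stand in for the real MLP and KAN sub-networks
blend = HybridBlend(torch.nn.Linear(1, 1), torch.nn.Linear(1, 1))
out = blend(torch.randn(5, 1))
assert out.shape == (5, 1)
```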
&lt;h3 class="heading" id="2-spectral-basis-and-wavelet-enriched-kans">
 2. Spectral-Basis and Wavelet-Enriched KANs&lt;span class="heading__anchor"> &lt;a href="#2-spectral-basis-and-wavelet-enriched-kans">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>&lt;strong>Wav-KAN&lt;/strong> incorporates wavelet functions into the KAN structure, capturing both high-frequency and low-frequency components via continuous dyadic wavelet transforms for multiresolution analysis. This directly addresses the &lt;em>spectral bias&lt;/em> problem inherent in standard neural networks, which struggle to resolve high-frequency features in PDE solutions.&lt;/p>
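&lt;p>A Wav-KAN-style edge activation can be sketched with a Mexican-hat (Ricker) wavelet carrying learnable scale and translation per edge, so each edge can tune which frequency band it responds to — a simplification of the paper's multiresolution construction, with names of our choosing:&lt;/p>

```python
import torch

# Wavelet edge activation sketch: per-edge Ricker wavelets with learnable
# scale, shift, and amplitude.
class WaveletEdge(torch.nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.scale = torch.nn.Parameter(torch.ones(n_out, n_in))
        self.shift = torch.nn.Parameter(torch.zeros(n_out, n_in))
        self.weight = torch.nn.Parameter(0.1 * torch.randn(n_out, n_in))

    def forward(self, x):                            # x: (batch, n_in)
        z = (x[:, None, :] - self.shift) / self.scale
        psi = (1 - z ** 2) * torch.exp(-0.5 * z ** 2)   # Ricker wavelet
        return (self.weight * psi).sum(dim=-1)          # (batch, n_out)

out = WaveletEdge(2, 3)(torch.randn(4, 2))
assert out.shape == (4, 3)
```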
&lt;p>PIKANs have been extended to &lt;strong>multi-resolution spectral hybridisations (HWF-PIKAN)&lt;/strong>, combining wavelet and Fourier features to explicitly counteract spectral bias and accelerate convergence for advection-dominated and kinetic equations.&lt;/p>
&lt;p>A unified benchmark published in February 2026 provides a &lt;strong>systematic, controlled comparison between MLP-based PINNs and KAN-based PIKANs&lt;/strong> across a representative collection of ODEs and PDEs (arXiv:2602.15068). The results show that PIKANs consistently achieve more accurate solutions, converge in fewer iterations, and yield superior gradient estimates.&lt;/p>
&lt;h3 class="heading" id="3-kan-based-neural-operators">
 3. KAN-Based Neural Operators&lt;span class="heading__anchor"> &lt;a href="#3-kan-based-neural-operators">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Neural operators learn mappings between infinite-dimensional function spaces, enabling generalisation across families of PDEs. KANs are increasingly embedded in operator architectures.&lt;/p>
&lt;p>&lt;strong>DeepOKAN&lt;/strong> replaces MLP sub-networks in the Deep Operator Network (DeepONet) framework with KAN sub-networks using Gaussian Radial Basis Functions (Abueidda et al., 2024). The branch and trunk networks of DeepONet are re-implemented as RBF-KAN layers. Evaluated on 1D sinusoidal waves, 2D orthotropic elasticity, and transient Poisson problems, DeepOKAN consistently achieves lower training losses and more accurate predictions compared to standard DeepONet.&lt;/p>
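&lt;p>The branch–trunk contraction that DeepOKAN retains from DeepONet is $G(u)(y) \approx \sum_k b_k(u)\, t_k(y)$: the branch net encodes the input function sampled at $m$ sensors, the trunk net encodes the query point, and the output is their inner product. A sketch with plain MLP sub-networks standing in for the RBF-KAN layers:&lt;/p>

```python
import torch

# DeepONet branch-trunk sketch; DeepOKAN swaps these MLPs for RBF-KAN
# sub-networks but keeps the same contraction.
m, p = 50, 32                                   # sensors, latent width
branch = torch.nn.Sequential(torch.nn.Linear(m, 64), torch.nn.Tanh(),
                             torch.nn.Linear(64, p))
trunk = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, p))

def deeponet(u_sensors, y):                     # (batch, m), (batch, 1)
    # inner product over the latent index k
    return (branch(u_sensors) * trunk(y)).sum(-1, keepdim=True)

out = deeponet(torch.randn(4, m), torch.rand(4, 1))
assert out.shape == (4, 1)
```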
&lt;p>&lt;strong>PO-CKAN&lt;/strong> (Physics-informed Deep Operator KAN with Chunk Rational Structure) integrates PDE residual loss into a DeepONet-style branch–trunk architecture using Chunkwise Rational KAN sub-networks (arXiv:2510.08795). On Burgers&amp;rsquo; equation with viscosity $\nu = 0.01$, PO-CKAN reduces mean relative $L^2$ error by approximately &lt;strong>48%&lt;/strong> compared to PI-DeepONet.&lt;/p>
&lt;p>&lt;strong>KANO&lt;/strong> (Kolmogorov-Arnold Neural Operator) is the most theoretically ambitious framework, jointly parameterising operators in both &lt;strong>spectral and spatial bases&lt;/strong> within a pseudo-differential operator framework (arXiv:2509.16825). KANO overcomes the pure-spectral bottleneck of Fourier Neural Operators (FNO): while FNO remains practical only for spectrally sparse operators, KANO remains expressive over generic variable-coefficient PDEs. Crucially, KANO achieves &lt;strong>symbolic recovery of the learned operator&lt;/strong>, enabling closed-form extraction of governing equations. On the quantum Hamiltonian learning benchmark, KANO attains state infidelity $\approx 6 \times 10^{-6}$ compared to FNO&amp;rsquo;s $\approx 1.5 \times 10^{-2}$.&lt;/p>
&lt;p>&lt;strong>KAN-ONets&lt;/strong> embeds adaptive, learnable B-spline activations from KAN into FNO (yielding FNO-KAN for uniform grids) and into the attention-based GNOT (yielding GNOT-KAN for arbitrary grids). Across seven challenging PDE benchmarks, KAN-ONets achieves &lt;strong>MSE reductions of 10.2–30.2%&lt;/strong> compared to existing models.&lt;/p>
&lt;h3 class="heading" id="4-time-dependent-and-evolutionary-kans">
 4. Time-Dependent and Evolutionary KANs&lt;span class="heading__anchor"> &lt;a href="#4-time-dependent-and-evolutionary-kans">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>&lt;strong>EvoKAN&lt;/strong> (Evolutionary Kolmogorov-Arnold Network, March 2025) introduces a novel paradigm: rather than retraining repeatedly, EvoKAN &lt;strong>encodes only the PDE&amp;rsquo;s initial state&lt;/strong> during an initial learning phase, then evolves the network parameters numerically, governed by the same PDE (arXiv:2503.01618). KAN weights are treated as time-dependent functions updated through time steps, enabling prediction over arbitrarily long time horizons.&lt;/p>
&lt;p>EvoKAN integrates the &lt;strong>Scalar Auxiliary Variable (SAV) method&lt;/strong> to guarantee unconditional energy stability: at each time step, SAV requires only solving decoupled linear systems with constant coefficients. EvoKAN has been validated on the 1D and 2D Allen-Cahn equations (phase-field phenomena with sharp interfaces) and the 2D Navier-Stokes equations (turbulent flows), closely matching analytical references.&lt;/p>
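&lt;p>The property the text highlights — each SAV step solves only constant-coefficient linear systems — is visible in a plain 1-D Allen-Cahn discretisation, $u_t = \varepsilon^2 u_{xx} - F'(u)$ on a periodic grid, where the implicit solve is diagonalised by the FFT. EvoKAN's coupling of this scheme to evolving KAN weights is more involved; everything below is our illustrative simplification:&lt;/p>

```python
import numpy as np

# First-order SAV step for 1-D Allen-Cahn with periodic boundary conditions.
N, L, eps, dt, C = 256, 2 * np.pi, 0.1, 1e-2, 1.0
x = np.linspace(0, L, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # spectral wavenumbers
lam = -(eps ** 2) * k ** 2                    # symbol of eps^2 d_xx

F = lambda u: 0.25 * (u ** 2 - 1.0) ** 2      # double-well potential
dF = lambda u: u ** 3 - u

u = 0.1 * np.cos(x)                           # initial state
r = np.sqrt(np.sum(F(u)) * dx + C)            # scalar auxiliary variable

def sav_step(u, r):
    b = dF(u) / np.sqrt(np.sum(F(u)) * dx + C)
    # (I - dt * eps^2 * d_xx)^{-1} via FFT: a constant-coefficient solve
    solve = lambda v: np.real(np.fft.ifft(np.fft.fft(v) / (1.0 - dt * lam)))
    g0, g1 = solve(u), solve(b)
    # closed-form scalar update for r^{n+1}, then the field update
    r_new = (r + 0.5 * np.sum(b * (g0 - u)) * dx) / \
            (1.0 + 0.5 * dt * np.sum(b * g1) * dx)
    return g0 - dt * r_new * g1, r_new

for _ in range(100):
    u, r = sav_step(u, r)
```

&lt;p>The scalar $r$ tracks $\sqrt{\int F(u)\,dx + C}$, and the modified energy $\frac{\varepsilon^2}{2}\int |u_x|^2\,dx + r^2$ is what the scheme dissipates unconditionally.&lt;/p>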
&lt;p>&lt;strong>KAN-ODEs&lt;/strong> apply KANs as the backbone of neural ordinary differential equation (ODE) frameworks, enabling data-driven discovery of governing dynamics with greater interpretability compared to MLP-based neural ODEs (arXiv:2407.04192).&lt;/p>
&lt;p>&lt;strong>Shallow-KAN&lt;/strong> addresses Stefan-type moving boundary problems (melting, solidification) by approximating the temperature distribution and moving interface while enforcing governing PDEs, phase equilibrium, and the Stefan condition through physics-informed residuals (arXiv:2601.09818). A key finding is that &lt;strong>two hidden layers with tens of learnable parameters&lt;/strong> suffice — far fewer than the nearly one million parameters required by standard MLP-based PINNs for the same problem.&lt;/p>
&lt;h3 class="heading" id="5-discontinuities-shock-waves-and-turbulence">
 5. Discontinuities, Shock Waves, and Turbulence&lt;span class="heading__anchor"> &lt;a href="#5-discontinuities-shock-waves-and-turbulence">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>A known weakness of smooth neural networks is difficulty resolving &lt;strong>sharp spatial transitions and discontinuities&lt;/strong> such as shock waves. Two specialised frameworks address this:&lt;/p>
&lt;p>&lt;strong>DPINN&lt;/strong> (Discontinuity-aware PINN) incorporates a discontinuity-aware KAN for modelling shock-wave properties, combined with an adaptive Fourier-feature embedding layer to mitigate spectral bias, mesh transformation for complex geometries, and learnable local artificial viscosity to stabilise the algorithm near discontinuities (arXiv:2507.08338). Numerical experiments on the inviscid Burgers&amp;rsquo; equation and transonic/supersonic airfoil flows demonstrate superior accuracy over existing methods.&lt;/p>
&lt;p>A &lt;strong>Physics-Infused KAN for Turbulence&lt;/strong> (2026) targets turbulent flow prediction integrated with CFD, applying KAN within the Reynolds-Averaged Navier-Stokes (RANS) framework. It addresses the &lt;em>information bottleneck&lt;/em> phenomenon in multi-output KANs and proposes pruning-based network optimisation, achieving high prediction accuracy for Navier-Stokes equations.&lt;/p>
&lt;h3 class="heading" id="6-high-dimensional-pdes-and-the-curse-of-dimensionality">
 6. High-Dimensional PDEs and the Curse of Dimensionality&lt;span class="heading__anchor"> &lt;a href="#6-high-dimensional-pdes-and-the-curse-of-dimensionality">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>High-dimensional PDEs (tens to hundreds of dimensions) are where mesh-based numerical methods become intractable, since their cost grows exponentially with dimension. KAN has shown early promise here.&lt;/p>
&lt;p>&lt;strong>Anant-Net&lt;/strong> (2025) is a scalable neural surrogate employing a tensor product formulation with dimension-wise sweeps and selective automatic differentiation (arXiv:2505.03595). Benchmarked on the Poisson, Sine-Gordon, Allen-Cahn, and transient heat equations, Anant-Net &lt;strong>solves PDEs in up to 300 dimensions on a single GPU within a few hours&lt;/strong>. The framework includes &lt;strong>Anant-KAN&lt;/strong>, an interpretable KAN-based variant offering deeper insights into the learned solution structure.&lt;/p>
&lt;p>&lt;strong>Separable PIKANs (SPIKANs)&lt;/strong> decompose the PDE solution into products of one-dimensional KAN networks, drastically reducing computational complexity for high-dimensional problems while retaining accuracy and interpretability.&lt;/p>
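&lt;p>The separable ansatz is a low-rank sum of products of one-dimensional factors, $u(x, y) \approx \sum_{r=1}^{R} f_r(x)\, g_r(y)$, evaluated on a full grid with only 1-D forward passes. A sketch with plain MLP factors standing in for the 1-D KANs:&lt;/p>

```python
import torch

# Separable (rank-R) representation of a 2-D field from 1-D networks.
R = 4
fx = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                         torch.nn.Linear(32, R))
gy = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                         torch.nn.Linear(32, R))

def u(x, y):                       # x: (nx, 1), y: (ny, 1) -> (nx, ny) grid
    return fx(x) @ gy(y).T         # contract the rank index

vals = u(torch.rand(10, 1), torch.rand(7, 1))
assert vals.shape == (10, 7)
```

&lt;p>Evaluating on an $n_x \times n_y$ grid costs $O((n_x + n_y) R)$ network evaluations rather than $O(n_x n_y)$, which is the source of the drastic complexity reduction in higher dimensions.&lt;/p>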
&lt;h3 class="heading" id="7-data-driven-discovery-and-inverse-problems">
 7. Data-Driven Discovery and Inverse Problems&lt;span class="heading__anchor"> &lt;a href="#7-data-driven-discovery-and-inverse-problems">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>KANs are especially powerful for &lt;strong>scientific discovery tasks&lt;/strong> where interpretability of the learned function is critical.&lt;/p>
&lt;p>Data-driven model discovery with KANs has been demonstrated on complex dynamical systems — including the Ikeda map and optical-cavity systems — where sparse optimisation methods fail due to non-sparse governing equations (arXiv:2409.15167). KAN captures complex behaviour while offering interpretability through its edge-wise univariate functions, providing insight into governing dynamics inaccessible in black-box MLPs.&lt;/p>
&lt;p>&lt;strong>PI-KAN-PointNet&lt;/strong> extends PIKAN to simultaneously solve inverse problems over multiple irregular geometries within a single training run, demonstrated on natural convection over 135 geometries with sparse data. &lt;strong>KINN for Inverse Problems&lt;/strong> enables identification of unknown material parameters in heterogeneous or hyperelastic materials from partial observations. &lt;strong>KANHedge&lt;/strong> applies KANs to high-dimensional BSDE solvers for option pricing, demonstrating improved hedging performance over MLP-based deep BSDE solvers (arXiv:2601.11097).&lt;/p>
&lt;h3 class="heading" id="8-comparative-analysis-kan-vs-mlp-for-pdes">
 8. Comparative Analysis: KAN vs. MLP for PDEs&lt;span class="heading__anchor"> &lt;a href="#8-comparative-analysis-kan-vs-mlp-for-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>A comprehensive comparison between MLP and KAN representations for differential equations establishes nuanced findings (arXiv:2406.02917):&lt;/p>
&lt;div class="table-wrapper">&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Architecture&lt;/th>
 &lt;th>Shallow Networks&lt;/th>
 &lt;th>Deep Networks&lt;/th>
 &lt;th>Robustness&lt;/th>
 &lt;th>Interpretability&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>KAN (B-spline)&lt;/td>
 &lt;td>Superior accuracy&lt;/td>
 &lt;td>Comparable to MLP&lt;/td>
 &lt;td>Lower (may diverge with different seeds)&lt;/td>
 &lt;td>High — symbolic extraction possible&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KAN (Chebyshev/Legendre)&lt;/td>
 &lt;td>High accuracy&lt;/td>
 &lt;td>Competitive&lt;/td>
 &lt;td>Moderate — rank collapse risk&lt;/td>
 &lt;td>High&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>MLP/PINN&lt;/td>
 &lt;td>Moderate accuracy&lt;/td>
 &lt;td>Robust&lt;/td>
 &lt;td>High&lt;/td>
 &lt;td>Low&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>PIKAN (optimised)&lt;/td>
 &lt;td>Superior&lt;/td>
 &lt;td>Superior or comparable&lt;/td>
 &lt;td>Moderate&lt;/td>
 &lt;td>High&lt;/td>
 &lt;/tr>
 &lt;/tbody>
 &lt;/table>
&lt;/div>&lt;p>Key findings: KANs in &lt;strong>shallow settings significantly outperform MLPs&lt;/strong>, leveraging per-edge nonlinear expressiveness. In deep settings, KANs do not consistently outperform MLPs, but when properly optimised (e.g., with L-BFGS or Self-Scaled Broyden second-order optimisers), they achieve superior accuracy. &lt;strong>JAX-based PIKAN implementations&lt;/strong> have achieved up to 84× training speedup over original NumPy/PyTorch KANs.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="open-problems">
 Open Problems&lt;span class="heading__anchor"> &lt;a href="#open-problems">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>Despite rapid progress, several challenges remain:&lt;/p>
&lt;p>&lt;strong>Computational cost.&lt;/strong> Evaluating spline bases requires recursive computation, making KANs significantly slower per parameter than MLPs. Variants like PowerMLP propose more efficient formulations (arXiv:2412.13571), but a satisfactory solution to raw training speed at scale is still outstanding.&lt;/p>
&lt;p>&lt;strong>Scalability to complex geometries.&lt;/strong> KINN and standard PIKANs underperform MLPs on irregular geometry problems. This remains a practical bottleneck for engineering applications involving complex domains.&lt;/p>
&lt;p>&lt;strong>Gradient instability in deep KANs.&lt;/strong> Deep PIKANs face vanishing/exploding gradient challenges, motivating Glorot-like initialisation strategies and residual-gated architectures.&lt;/p>
&lt;p>&lt;strong>Theoretical guarantees.&lt;/strong> Generalisation bounds for KANs trained on PDE collocation have been studied — bounds scale with $\ell_1$ norms of spline coefficients — but practical understanding of how architecture choices affect convergence and generalisation remains incomplete (arXiv:2410.08026).&lt;/p>
&lt;p>&lt;strong>Operator learning completeness.&lt;/strong> While KANO achieves symbolic operator recovery, the theoretical relationship between KAN architecture depth/width and approximation of PDE solution operators is still under active development.&lt;/p>
&lt;p>The trajectory is clear: KAN-based PDE solvers are moving from proof-of-concept demonstrations on canonical benchmarks toward &lt;strong>production-ready frameworks&lt;/strong> for engineering simulation, turbulence modelling, inverse problems, and high-dimensional scientific computing. The combination of interpretability, parameter efficiency, and growing theoretical foundations positions KANs as a genuinely transformative architecture for numerical PDEs.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="references">
 References&lt;span class="heading__anchor"> &lt;a href="#references">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>Abueidda, D. W., Pantidis, P., &amp;amp; Mobasher, M. E. (2024). &lt;em>DeepOKAN: Deep operator network based on Kolmogorov Arnold networks for mechanics problems&lt;/em>. arXiv:2405.19143. &lt;a href="https://www.alphaxiv.org/overview/2405.19143v3">https://www.alphaxiv.org/overview/2405.19143v3&lt;/a>&lt;/p>
&lt;p>Cui, Z., et al. (2024). Physics-informed Kolmogorov–Arnold network with Chebyshev polynomials for fluid mechanics. &lt;em>Physics of Fluids, 37&lt;/em>(9), 095120. &lt;a href="https://pubs.aip.org/aip/pof/article-abstract/37/9/095120/3361431">https://pubs.aip.org/aip/pof/article-abstract/37/9/095120/3361431&lt;/a>&lt;/p>
&lt;p>Knottenbelt, W., et al. (2026). &lt;em>KANHedge: Efficient hedging of high-dimensional options using Kolmogorov-Arnold network-based BSDE solver&lt;/em>. arXiv:2601.11097. &lt;a href="https://arxiv.org/abs/2601.11097">https://arxiv.org/abs/2601.11097&lt;/a>&lt;/p>
&lt;p>Kovachki, N., et al. (2023). Neural operator: Learning maps between function spaces with applications to PDEs. &lt;em>Journal of Machine Learning Research, 24&lt;/em>(89), 1–97.&lt;/p>
&lt;p>Li, Z., et al. (2025). &lt;em>Discontinuity-aware KAN-based physics-informed neural networks&lt;/em>. arXiv:2507.08338. &lt;a href="https://arxiv.org/html/2507.08338v1">https://arxiv.org/html/2507.08338v1&lt;/a>&lt;/p>
&lt;p>Liu, Z., et al. (2024). &lt;em>KAN: Kolmogorov–Arnold Networks&lt;/em>. arXiv:2404.19756. &lt;a href="https://storage.prod.researchhub.com/uploads/papers/2024/05/04/2404.19756.pdf">https://storage.prod.researchhub.com/uploads/papers/2024/05/04/2404.19756.pdf&lt;/a>&lt;/p>
&lt;p>Liu, Z., et al. (2024). &lt;em>A comprehensive and FAIR comparison between MLP and KAN representations for differential equations and operator networks&lt;/em>. arXiv:2406.02917. &lt;a href="https://arxiv.org/abs/2406.02917">https://arxiv.org/abs/2406.02917&lt;/a>&lt;/p>
&lt;p>Liu, Z., et al. (2026). &lt;em>A unified benchmark of physics-informed neural networks and Kolmogorov-Arnold networks&lt;/em>. arXiv:2602.15068. &lt;a href="https://arxiv.org/html/2602.15068v1">https://arxiv.org/html/2602.15068v1&lt;/a>&lt;/p>
&lt;p>Peng, W., et al. (2025). &lt;em>KANO: Kolmogorov-Arnold Neural Operator&lt;/em>. arXiv:2509.16825. &lt;a href="https://arxiv.org/abs/2509.16825">https://arxiv.org/abs/2509.16825&lt;/a>&lt;/p>
&lt;p>Shukla, K., et al. (2025). &lt;em>Anant-Net: Breaking the curse of dimensionality with scalable and interpretable neural surrogates for high-dimensional PDEs&lt;/em>. arXiv:2505.03595. &lt;a href="https://arxiv.org/html/2505.03595v3">https://arxiv.org/html/2505.03595v3&lt;/a>&lt;/p>
&lt;p>Tang, K., et al. (2025). &lt;em>AC-PKAN: Attention-enhanced and Chebyshev polynomial-based Kolmogorov-Arnold networks&lt;/em>. arXiv:2505.08687. &lt;a href="https://arxiv.org/html/2505.08687v2">https://arxiv.org/html/2505.08687v2&lt;/a>&lt;/p>
&lt;p>Wang, Z., et al. (2025). &lt;em>EvoKAN: Energy-dissipative evolutionary Kolmogorov-Arnold networks for complex PDE systems&lt;/em>. arXiv:2503.01618. &lt;a href="https://arxiv.org/abs/2503.01618">https://arxiv.org/abs/2503.01618&lt;/a>&lt;/p>
&lt;p>Wang, Z., et al. (2024). Kolmogorov–Arnold-Informed neural network: A physics-informed deep learning framework for solving forward and inverse problems based on Kolmogorov–Arnold Networks. &lt;em>Computer Methods in Applied Mechanics and Engineering&lt;/em>. arXiv:2406.11045. &lt;a href="https://www.sciencedirect.com/science/article/abs/pii/S0045782524007722">https://www.sciencedirect.com/science/article/abs/pii/S0045782524007722&lt;/a>&lt;/p>
&lt;p>Xu, Y., et al. (2026). &lt;em>Shallow-KAN based solution of moving boundary PDEs&lt;/em>. arXiv:2601.09818. &lt;a href="https://arxiv.org/html/2601.09818v1">https://arxiv.org/html/2601.09818v1&lt;/a>&lt;/p>
&lt;p>Yang, L., et al. (2025). &lt;em>KAN-ODEs: Kolmogorov-Arnold network ordinary differential equations for learning dynamical systems and hidden physics&lt;/em>. arXiv:2407.04192. &lt;a href="https://arxiv.org/html/2407.04192v1">https://arxiv.org/html/2407.04192v1&lt;/a>&lt;/p>
&lt;p>Zhang, Z., et al. (2025). Physics-informed neural networks with hybrid Kolmogorov-Arnold networks. &lt;em>PMC&lt;/em>. &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11950322/">https://pmc.ncbi.nlm.nih.gov/articles/PMC11950322/&lt;/a>&lt;/p>
&lt;p>Zuo, Q., et al. (2025). &lt;em>Data-driven model discovery with Kolmogorov-Arnold networks&lt;/em>. arXiv:2409.15167. &lt;a href="https://arxiv.org/abs/2409.15167">https://arxiv.org/abs/2409.15167&lt;/a>&lt;/p></description></item><item><title>Recent Advances in Numerical PDEs</title><link>http://lnhutnam.github.io/en/posts/recent-numerical-pde/</link><pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate><guid>http://lnhutnam.github.io/en/posts/recent-numerical-pde/</guid><description>&lt;p>Numerical methods for partial differential equations (PDEs) have entered a period of rapid transformation, driven by two converging forces: deep learning&amp;rsquo;s maturation as a tool for high-dimensional function approximation, and the resurgence of classical methods augmented by machine learning. The field broadly divides into &lt;em>physics-informed machine learning&lt;/em>, &lt;em>neural operator learning&lt;/em>, &lt;em>foundation models for PDEs&lt;/em>, and the continuing evolution of &lt;em>classical high-order&lt;/em>, &lt;em>structure-preserving&lt;/em>, and &lt;em>data-driven discovery&lt;/em> methods. Quantum computing and laser-based hardware solvers are also beginning to enter the landscape. This survey organises the most active research fronts, highlights landmark and recent key papers, and identifies open problems as of early 2026.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="overview">
 Overview&lt;span class="heading__anchor"> &lt;a href="#overview">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>The table below summarises the major approaches covered in this survey, their representative key papers, and their current status.&lt;/p>
&lt;div class="table-wrapper">&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Approach&lt;/th>
 &lt;th>Representative Key Papers&lt;/th>
 &lt;th>Status&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>PINNs (adaptive/staged training)&lt;/td>
 &lt;td>Raissi et al. (2019); IEEE 2025 staged training; PhysicsNeMo/Modulus&lt;/td>
 &lt;td>Production-ready&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KANs for PDEs&lt;/td>
 &lt;td>Liu et al. (2024, ICLR 2025); KINN; PI-KAN; HRKANs&lt;/td>
 &lt;td>Active frontier&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Fourier Neural Operators&lt;/td>
 &lt;td>Li et al. (2020); O-FNO (2025); ReBA accelerator&lt;/td>
 &lt;td>Widely adopted&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>DeepONet variants&lt;/td>
 &lt;td>Lu et al. (2019); L-DeepONet; Hybrid KAN-DeepONet; Quantum DeepONet&lt;/td>
 &lt;td>Mature + expanding&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>PDE Foundation Models&lt;/td>
 &lt;td>Poseidon; OmniArch; PDEformer; Geo-NeW&lt;/td>
 &lt;td>Emerging (2024–2026)&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Deep BSDE &amp;amp; high-dimensional&lt;/td>
 &lt;td>Han, Jentzen, &amp;amp; E (PNAS 2018); Deep Shotgun; DRDM; Heun-BSDE&lt;/td>
 &lt;td>Active&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Data-driven PDE discovery&lt;/td>
 &lt;td>SINDy (Brunton et al.); GN-SINDy; Evo-SINDy; Bayesian-SINDy&lt;/td>
 &lt;td>Active&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Structure-preserving methods&lt;/td>
 &lt;td>Hairer et al. (2006); Stochastic multisymplectic; Geo-NeW&lt;/td>
 &lt;td>Maturing&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>High-order FEM/DG&lt;/td>
 &lt;td>hp-DGFEM Boltzmann; ML-accelerated FEM; FEX-PG&lt;/td>
 &lt;td>Mature + augmented&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Fractional PDEs&lt;/td>
 &lt;td>Review (2024); O-FNO for fractional Poisson; Fractional Laplacian meshfree&lt;/td>
 &lt;td>Active&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Hamilton–Jacobi PDEs&lt;/td>
 &lt;td>Review arXiv:2502.20833; Actor-critic NN; Deep BSDE for HJB&lt;/td>
 &lt;td>Active&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Multiscale / ROM&lt;/td>
 &lt;td>MLP-based multiscale; POD-DL-ROM; Multi-fidelity ROM&lt;/td>
 &lt;td>Active&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Uncertainty quantification&lt;/td>
 &lt;td>QMC/RQMC; PDE-DKL&lt;/td>
 &lt;td>Active&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Quantum computing&lt;/td>
 &lt;td>Schrödingerisation; H-DES (ColibriTD); Quantum DeepONet&lt;/td>
 &lt;td>Early-stage&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Photonic/analog solvers&lt;/td>
 &lt;td>LightSolver LPU&lt;/td>
 &lt;td>Very early-stage&lt;/td>
 &lt;/tr>
 &lt;/tbody>
 &lt;/table>
&lt;/div>&lt;hr>
&lt;h2 class="heading" id="background">
 Background&lt;span class="heading__anchor"> &lt;a href="#background">#&lt;/a>&lt;/span>
&lt;/h2>&lt;h3 class="heading" id="the-classical-pde-problem">
 The Classical PDE Problem&lt;span class="heading__anchor"> &lt;a href="#the-classical-pde-problem">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>A general PDE on a domain $\Omega \subseteq \mathbb{R}^d$ takes the form&lt;/p>
&lt;p>$$\mathcal{N} [u] (x) = f(x), \quad x \in \Omega, \qquad \mathcal{B} [u] (x) = g(x), \quad x \in \partial \Omega,$$&lt;/p>
&lt;p>where $\mathcal{N}$ is a (possibly nonlinear) differential operator, $\mathcal{B}$ encodes boundary or initial conditions, and $u: \Omega \to \mathbb{R}$ is the unknown. Classical mesh-based methods — finite element (FEM), finite difference (FDM), finite volume (FVM), and spectral methods — discretise $\Omega$ into $N$ degrees of freedom and solve a resulting algebraic system. Their complexity typically scales as $O(N^\alpha)$ for some $\alpha \geq 1$, and in $d$ dimensions $N \sim h^{-d}$ for mesh spacing $h$, leading to exponential cost as $d$ grows.&lt;/p>
&lt;h3 class="heading" id="the-deep-learning-turn">
 The Deep Learning Turn&lt;span class="heading__anchor"> &lt;a href="#the-deep-learning-turn">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The 2019 PINN paper by Raissi, Perdikaris, and Karniadakis, and the 2020 FNO paper by Li et al., triggered an explosion of mesh-free and operator-learning approaches. Rather than discretising $\Omega$, these methods parameterise $u$ (or the solution operator $\mathcal{N}^{-1}$) as a neural network and minimise a physics-informed or data-driven loss. The key advantages are mesh-free flexibility, natural handling of inverse problems, and — in the operator-learning setting — the ability to generalise across PDE instances.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="recent-developments">
 Recent Developments&lt;span class="heading__anchor"> &lt;a href="#recent-developments">#&lt;/a>&lt;/span>
&lt;/h2>&lt;h3 class="heading" id="1-physics-informed-neural-networks-pinns-and-variants">
 1. Physics-Informed Neural Networks (PINNs) and Variants&lt;span class="heading__anchor"> &lt;a href="#1-physics-informed-neural-networks-pinns-and-variants">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>PINNs, introduced by Raissi, Perdikaris, and Karniadakis (2019), embed physical laws directly into the neural network loss function as residual terms of the form $\mathcal{L}_{\text{phys}} = \|\mathcal{N}[\hat{u}] - f\|^2$, supplemented by data, boundary, and initial condition constraints. Their appeal lies in a mesh-free design that handles irregular geometries and inverse problems naturally. Yet PINN training is notoriously fragile — subject to spectral bias, loss imbalance, and stiffness — motivating a rich line of training improvements.&lt;/p>
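&lt;p>As a concrete illustration, the sketch below assembles a PINN-style loss for the 1D Poisson problem $-u''(x) = f(x)$ with $u(0) = u(1) = 0$. It is a toy with untrained random weights, and it substitutes central finite differences for the automatic differentiation a real PINN would use:&lt;/p>

```python
import numpy as np

# Toy PINN loss for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
# Assumptions: untrained random MLP weights; second derivative approximated
# by central finite differences purely for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def u_hat(x):
    """Tiny tanh-MLP ansatz for the PDE solution (untrained)."""
    h = np.tanh(x[:, None] * W1.T + b1)   # (n,) -> (n, 16)
    return (h @ W2.T + b2).ravel()        # -> (n,)

def pinn_loss(x_int, x_bnd, f, eps=1e-4):
    # physics residual -u'' - f at interior collocation points
    u_pp = (u_hat(x_int + eps) - 2 * u_hat(x_int) + u_hat(x_int - eps)) / eps**2
    phys = np.mean((-u_pp - f(x_int)) ** 2)
    bnd = np.mean(u_hat(x_bnd) ** 2)      # penalise u != 0 on the boundary
    return phys + bnd

x_int = rng.uniform(0.05, 0.95, size=64)  # interior collocation points
loss = pinn_loss(x_int, np.array([0.0, 1.0]),
                 lambda x: np.pi**2 * np.sin(np.pi * x))
```

&lt;p>Training would then minimise this scalar loss over the network weights; the pathologies discussed above (spectral bias, loss imbalance) arise precisely in that optimisation.&lt;/p>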
&lt;p>&lt;strong>Staged training strategies.&lt;/strong> A 2025 IEEE paper proposes a two-stage process: a short-time pretraining phase followed by extension to the full time domain, combined with uncertainty-guided sampling. This significantly improves accuracy and efficiency for time-dependent PDEs compared to standard PINNs (IEEE, 2025).&lt;/p>
&lt;p>&lt;strong>Evolutionary optimisation of PINNs.&lt;/strong> A 2025 arXiv paper introduces evolutionary optimisation to tune PINN architectures, improving robustness in data-scarce regimes by enforcing physical laws through the training loss (arXiv:2501.06572).&lt;/p>
&lt;p>&lt;strong>Automatic structure discovery via knowledge distillation.&lt;/strong> A 2025 &lt;em>Nature Communications&lt;/em> paper proposes a physics-informed distillation framework that decouples physical and parameter regularisation in teacher–student networks, then uses clustering and parameter reconstruction to embed physically meaningful structures. Experiments on Laplace, Burgers, Poisson, and fluid mechanics equations show improved accuracy, training efficiency, and transferability (Zeng et al., 2025).&lt;/p>
&lt;p>Production-ready frameworks include &lt;em>PhysicsNeMo/Modulus&lt;/em> (CUDA-optimised kernels with 4× speedups) and &lt;em>DeepXDE&lt;/em>, which support adaptive weighting schemes, curriculum learning, intelligent residual point sampling, and domain decomposition for stiff problems.&lt;/p>
&lt;h3 class="heading" id="2-kolmogorovarnold-networks-kans-for-pdes">
 2. Kolmogorov–Arnold Networks (KANs) for PDEs&lt;span class="heading__anchor"> &lt;a href="#2-kolmogorovarnold-networks-kans-for-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Proposed by Liu, Wang, Vaidya et al. (2024, accepted ICLR 2025), &lt;strong>KANs&lt;/strong> replace fixed activation functions at MLP nodes with learnable spline-parameterised functions on each edge. This change — inspired by the Kolmogorov-Arnold representation theorem — provides faster neural scaling laws, improved interpretability, and comparable or better accuracy with far fewer parameters, especially for scientific AI tasks. The major PINN-KAN hybrid architectures are as follows:&lt;/p>
&lt;div class="table-wrapper">&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Architecture&lt;/th>
 &lt;th>PDE focus&lt;/th>
 &lt;th>Key claim&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>&lt;strong>KINN&lt;/strong>&lt;/td>
 &lt;td>Solid mechanics, multi-scale, singularities&lt;/td>
 &lt;td>Significantly outperforms MLP-PINNs in accuracy and convergence speed&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&lt;strong>PI-KAN&lt;/strong>&lt;/td>
 &lt;td>Navier–Stokes (forward)&lt;/td>
 &lt;td>High prediction accuracy; addresses information bottleneck&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&lt;strong>HRKANs&lt;/strong>&lt;/td>
 &lt;td>Poisson, Burgers&lt;/td>
 &lt;td>Highest fitting accuracy, lowest training time vs. KAN and ReLU-KAN&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&lt;strong>PIKANs&lt;/strong> (adaptive grid)&lt;/td>
 &lt;td>Forward PDE problems&lt;/td>
 &lt;td>Up to 84× faster training; adaptive state transition reduces $L^2$ error by 43%&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&lt;strong>EvoKAN&lt;/strong>&lt;/td>
 &lt;td>Complex PDE systems&lt;/td>
 &lt;td>Energy-dissipative; encodes only the initial state, avoiding retraining&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&lt;strong>KAN-ODEs&lt;/strong>&lt;/td>
 &lt;td>Schrödinger, Allen–Cahn, dynamical systems&lt;/td>
 &lt;td>Improved performance over Neural ODEs in discovering hidden physics&lt;/td>
 &lt;/tr>
 &lt;/tbody>
 &lt;/table>
&lt;/div>&lt;p>KANs are also being used inside &lt;strong>DeepONet branch/trunk networks&lt;/strong> for hybrid neural operator surrogates in porous media flows, including Darcy flow and 2D/3D multiphase problems (arXiv:2511.02962). For a deeper treatment of KAN architectures for PDEs, see the companion post in this series.&lt;/p>
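&lt;p>The core KAN mechanism, a learnable univariate function on each edge, can be sketched with a simplified basis. The snippet below uses piecewise-linear hat functions in place of the B-splines (plus residual base activation) of the original architecture, purely to show how an edge function is parameterised:&lt;/p>

```python
import numpy as np

# Sketch of one KAN edge: a learnable univariate function phi(x) = sum_k c_k B_k(x).
# Assumption: piecewise-linear hat bases stand in for the B-spline-plus-base
# parameterisation of the original KAN paper; this is illustrative only.

def hat_basis(x, grid):
    """Evaluate piecewise-linear hat functions centred at each grid point."""
    h = grid[1] - grid[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - grid[None, :]) / h)

def edge_function(x, coeffs, grid):
    """The learnable activation carried by one edge of the network."""
    return hat_basis(x, grid) @ coeffs

grid = np.linspace(-1.0, 1.0, 9)
coeffs = np.sin(np.pi * grid)        # pretend these coefficients were learned
x = np.array([-1.0, 0.0, 0.5, 1.0])
y = edge_function(x, coeffs, grid)   # interpolates sin(pi*x) at the grid nodes
```

&lt;p>A full KAN layer sums many such edge functions into each output node; training adjusts the coefficients (and, in the original design, the spline grids) rather than weights of fixed activations.&lt;/p>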
&lt;h3 class="heading" id="3-neural-operator-learning">
 3. Neural Operator Learning&lt;span class="heading__anchor"> &lt;a href="#3-neural-operator-learning">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Neural operators learn mappings between infinite-dimensional function spaces — enabling resolution-invariant, discretisation-agnostic PDE solvers. The two dominant architectures are the &lt;strong>Fourier Neural Operator (FNO)&lt;/strong> and &lt;strong>Deep Operator Networks (DeepONet)&lt;/strong>.&lt;/p>
&lt;p>&lt;strong>FNO&lt;/strong> applies global convolution in Fourier space, giving resolution invariance and fast inference. The 2025 &lt;em>Optimised FNO (O-FNO)&lt;/em> integrates residual connections and enhanced spectral resolution for the 2D fractional Poisson equation, achieving over 98% test accuracy and outperforming both base FNO and DeepONet. A hardware/algorithm co-design chip, &lt;strong>ReBA&lt;/strong>, implements the Galerkin Transformer achieving 34.57× speedup over CPUs and up to 51.26× over prior accelerators (IEEE, 2025).&lt;/p>
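&lt;p>The essence of a Fourier layer can be sketched in a few lines: transform to frequency space, multiply a truncated set of modes by learnable complex weights, and transform back. The toy below is 1D and real-valued, and omits the channel mixing, residual path, and nonlinearity of a full FNO block:&lt;/p>

```python
import numpy as np

# Sketch of a single 1D Fourier layer in the FNO style. With identity weights
# it acts as a low-pass filter; in a trained FNO the weights are learned.

def fourier_layer(u, weights, modes):
    """u: (n,) samples on a uniform grid; weights: (modes,) complex multipliers."""
    u_hat = np.fft.rfft(u)                       # frequency coefficients
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights    # learned spectral multiplier
    return np.fft.irfft(out_hat, n=len(u))

n, modes = 64, 8
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.3 * np.sin(10 * 2 * np.pi * x)
identity = np.ones(modes, dtype=complex)
v = fourier_layer(u, identity, modes)  # keeps frequency 1, removes frequency 10
```

&lt;p>Because the weights act on Fourier coefficients rather than grid values, the same layer can be evaluated at any resolution, which is the source of FNO&amp;rsquo;s resolution invariance.&lt;/p>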
&lt;p>&lt;strong>DeepONet&amp;rsquo;s&lt;/strong> branch-trunk architecture excels under noise and complex geometries where FNO degrades. Recent extensions include multi-fidelity physics-guided DeepONet (2025), Fusion DeepONet for hypersonic flow predictions on arbitrary grids (arXiv:2501.01934), and &lt;strong>Latent-space DeepONet (L-DeepONet)&lt;/strong> (&lt;em>Nature Communications&lt;/em>, 2024), which, using small latent dimensions ($d \leq 100$), outperforms competing neural operators on the reported benchmarks and enables real-time high-dimensional predictions. Ensemble and Mixture-of-Experts DeepONets achieve 2–4× lower relative $\ell_2$ errors through basis enrichment and spatial locality (arXiv:2405.11907). &lt;strong>Taylor Mode Neural Operators&lt;/strong> provide an order-of-magnitude speed-up for DeepONet and 8× for FNO in computing high-order derivatives via Taylor-mode automatic differentiation.&lt;/p>
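&lt;p>The branch-trunk ansatz $G(f)(y) \approx \sum_k b_k(f)\, t_k(y)$ underlying DeepONet can be sketched as follows, with random untrained weights standing in for the learned branch and trunk networks:&lt;/p>

```python
import numpy as np

# Sketch of the DeepONet forward pass. Assumptions: single-layer tanh maps
# with random untrained weights stand in for the trained branch/trunk nets.
rng = np.random.default_rng(1)
p, m = 8, 32                              # latent width, number of sensors
Wb = rng.normal(size=(p, m)) / np.sqrt(m)
Wt = rng.normal(size=(p, 1))

def branch(f_sensors):                    # input function sampled at m fixed sensors
    return np.tanh(Wb @ f_sensors)        # -> (p,)

def trunk(y):                             # query coordinates
    return np.tanh(y[:, None] * Wt.T)     # -> (len(y), p)

def deeponet(f_sensors, y):
    return trunk(y) @ branch(f_sensors)   # inner product over the latent index

sensors = np.linspace(0.0, 1.0, m)
f = np.sin(np.pi * sensors)
y = np.linspace(0.0, 1.0, 5)
out = deeponet(f, y)                      # one prediction per query point
```

&lt;p>The separation between branch (function encoding) and trunk (query encoding) is what lets trained DeepONets evaluate at arbitrary query points, and is the slot into which the KAN and latent-space variants above substitute their own networks.&lt;/p>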
&lt;p>&lt;strong>Graph Neural Operator Methods.&lt;/strong> The &lt;strong>GOLA framework&lt;/strong> (2025) addresses the limitation of regular-grid assumptions by constructing graphs from irregularly sampled spatial points with a Fourier-based encoder for learnable complex-coefficient embeddings, outperforming baselines in data-scarce regimes across 2D Darcy, Advection, Eikonal, and Nonlinear Diffusion problems (arXiv:2505.18923).&lt;/p>
&lt;h3 class="heading" id="4-foundation-models-for-pdes">
 4. Foundation Models for PDEs&lt;span class="heading__anchor"> &lt;a href="#4-foundation-models-for-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Inspired by the success of LLMs, PDE foundation models represent a paradigm shift: large transformers pre-trained on diverse physical systems that can be fine-tuned for downstream tasks with minimal data.&lt;/p>
&lt;p>&lt;strong>Poseidon&lt;/strong> (ETH Zurich, 2024) is a multiscale operator transformer with time-conditioned layer norms, enabling continuous-in-time evaluation. Pre-trained on diverse physical systems, it exploits the semigroup property of time-dependent PDEs for significant data scaling (arXiv:2405.19101).&lt;/p>
&lt;p>&lt;strong>OmniArch&lt;/strong> (ICML 2025) is the first multi-scale and multi-physics scientific computing foundation model, featuring a Fourier encoder-decoder and transformer backbone with a &lt;em>PDE-Aligner&lt;/em> for physics-informed fine-tuning. It achieves unified 1D-2D-3D pre-training on PDEBench and demonstrates zero-shot learning on new physics.&lt;/p>
&lt;p>&lt;strong>PDEformer&lt;/strong> (2025) represents PDEs as computational graphs integrating symbolic and numerical information; a graph transformer with implicit neural representation enables mesh-free predictions with zero-shot accuracy comparable to specialist models (arXiv:2402.12652).&lt;/p>
&lt;p>&lt;strong>Multimodal PDE Foundation Model&lt;/strong> (UCLA, 2025) integrates both numerical inputs (equation parameters, initial conditions) and text descriptions. It achieves average relative error below 3.3% in-distribution and generates interpretable scientific text — bridging NLP and scientific computing (arXiv:2502.06026).&lt;/p>
&lt;p>&lt;strong>Physics-informed fine-tuning&lt;/strong> (arXiv:2603.15431, 2026) establishes that hybrid fine-tuning (combining physics-informed and data-driven objectives) achieves superior extrapolation to downstream tasks and enables data-free learning of unseen PDE families.&lt;/p>
&lt;p>&lt;strong>Geo-NeW&lt;/strong> (arXiv:2602.02788, Feb 2026) — General-Geometry Neural Whitney Forms — is a data-driven finite element method jointly learning differential operators and compatible finite element spaces on the geometry. It exactly preserves physical conservation laws via Finite Element Exterior Calculus, with state-of-the-art performance on out-of-distribution geometries.&lt;/p>
&lt;h3 class="heading" id="5-deep-learning-for-high-dimensional-pdes">
 5. Deep Learning for High-Dimensional PDEs&lt;span class="heading__anchor"> &lt;a href="#5-deep-learning-for-high-dimensional-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Classical mesh-based methods suffer exponential complexity growth in dimension $d$. Three principal deep learning paradigms address this.&lt;/p>
&lt;p>The &lt;strong>Deep BSDE method&lt;/strong> (Han, Jentzen, &amp;amp; E, &lt;em>PNAS&lt;/em>, 2018) reformulates semilinear parabolic PDEs using backward stochastic differential equations (BSDEs) and learns the gradient of the solution with neural networks, enabling solution of PDEs in hundreds to thousands of dimensions. A 2025 review by the original authors traces subsequent advances. Key recent improvements include:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Deep Shotgun Method&lt;/strong> (&lt;em>J. Sci. Comput.&lt;/em>, 2025): avoids full trajectory simulation, using only data distribution, achieving results up to dimension 10,000 (Springer, 2025).&lt;/li>
&lt;li>&lt;strong>XNet-enhanced Deep BSDE&lt;/strong> (2025): a new network architecture with fewer parameters, significantly improving computational efficiency and accuracy (arXiv:2502.06238).&lt;/li>
&lt;li>&lt;strong>Deep Random Difference Method (DRDM)&lt;/strong> (2025): approximates the convection-diffusion operator using only first-order differences, avoiding Hessian computations, with proven first-order accuracy in the time step $h$ (arXiv:2506.20308).&lt;/li>
&lt;li>&lt;strong>Stratonovich-based BSDE with Heun integration&lt;/strong> (2025): identifies that Euler-Maruyama discretisation bias is the root cause of BSDE underperformance relative to PINNs; Heun integration eliminates this bias and achieves competitive results across high-dimensional benchmarks (arXiv:2505.01078).&lt;/li>
&lt;/ul>
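&lt;p>The common structure of these methods (simulate the forward SDE, propagate $Y$ by the BSDE update, penalise the terminal mismatch) can be sketched as follows. In the real method $Y_0$ and the per-step gradient maps $Z_n$ are trainable networks; here they are fixed linear stand-ins, purely to show the rollout:&lt;/p>

```python
import numpy as np

# Sketch of one Deep BSDE forward pass for a semilinear parabolic PDE with
# terminal condition u(T, x) = g(x). Y approximates u(t, X_t); Z approximates
# its gradient. Assumptions: unit diffusion, fixed (non-trainable) Z maps.

def deep_bsde_rollout(x0, y0, z_weights, g, f, T=1.0, steps=20, seed=0):
    rng = np.random.default_rng(seed)
    d, dt = len(x0), T / steps
    x, y = x0.copy(), y0
    for n in range(steps):
        z = z_weights[n] @ x                  # stand-in for Z_n = grad u(t_n, x)
        dw = rng.normal(scale=np.sqrt(dt), size=d)
        y = y - f(y) * dt + z @ dw            # BSDE update for Y
        x = x + dw                            # Euler-Maruyama step for X
    return (y - g(x)) ** 2                    # terminal mismatch = training loss

d, steps = 10, 20
loss = deep_bsde_rollout(
    x0=np.zeros(d), y0=0.5,
    z_weights=np.zeros((steps, d, d)),        # "gradient" nets, zeroed out here
    g=lambda x: 0.0, f=lambda y: 0.0,
)
```

&lt;p>Training minimises this terminal mismatch over $Y_0$ and the $Z_n$ networks; the Heun-BSDE work above replaces the Euler-Maruyama step with Heun integration to remove discretisation bias.&lt;/p>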
&lt;p>The &lt;strong>Deep Ritz method&lt;/strong> (E &amp;amp; Yu, 2018) minimises energy functionals using neural networks. Extensions to multiscale problems leverage scale convergence theory to derive $\Gamma$-limits of oscillatory energy functionals.&lt;/p>
&lt;p>The &lt;strong>Full History Recursive Multilevel Picard (MLP)&lt;/strong> methodology — combining Picard iterations with multilevel Monte Carlo — was the first method proven to overcome the curse of dimensionality for semilinear parabolic PDEs and remains one of very few methods with such proven guarantees.&lt;/p>
&lt;p>&lt;strong>PDE-DKL&lt;/strong> (2025) combines deep learning for low-dimensional latent representations with Gaussian Processes for kernel regression under explicit PDE constraints, providing both high accuracy and principled uncertainty quantification in limited-data regimes (arXiv:2501.18258).&lt;/p>
&lt;h3 class="heading" id="6-classical-high-order-methods-fem-dg-and-spectral">
 6. Classical High-Order Methods: FEM, DG, and Spectral&lt;span class="heading__anchor"> &lt;a href="#6-classical-high-order-methods-fem-dg-and-spectral">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Despite the deep learning surge, classical methods continue to mature, particularly in rigorous error analysis and efficiency.&lt;/p>
&lt;p>The &lt;strong>hp-version DG finite element method for the Boltzmann transport problem&lt;/strong> (&lt;em>J. Sci. Comput.&lt;/em>, 2024) achieves arbitrary-order convergence rates and handles polytopic elements, enabling efficient parallel implementation within existing multigroup discrete ordinates software. High-order DG methods for unsteady compressible flows — targeting acoustic waves, turbulence, and magnetohydrodynamics — benefit from block-diagonal mass matrices allowing efficient explicit time-stepping.&lt;/p>
&lt;p>A systematic 2024 approach uses neural networks to learn the element-wise solution map of PDEs, accelerating finite element-type methods in an &amp;ldquo;element neural network&amp;rdquo; paradigm that generalises across element geometries. Machine learning-based spectral methods combine orthogonal function expansions (Fourier, Legendre) with deep neural operator learning for highly accurate solutions with fewer grid points.&lt;/p>
&lt;p>&lt;strong>FEX-PG&lt;/strong> (2024) solves high-dimensional partial integro-differential equations using parameter grouping to reduce coefficient count and Taylor series approximation for integral terms, achieving relative errors on the order of single-precision machine epsilon while providing &lt;em>interpretable, explicit&lt;/em> solution formulas absent from most DL methods (arXiv:2410.00835).&lt;/p>
&lt;h3 class="heading" id="7-structure-preserving-numerical-methods">
 7. Structure-Preserving Numerical Methods&lt;span class="heading__anchor"> &lt;a href="#7-structure-preserving-numerical-methods">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Structure-preserving methods retain intrinsic properties of the continuous system — symplecticity, energy conservation, divergence-free constraints — at the discrete level. They enhance numerical stability and long-term accuracy, ensuring computed solutions respect the underlying mathematical structure.&lt;/p>
&lt;p>Recent research encompasses geometric integrators and mimetic discretisations for conservative finite element, difference, and volume schemes; stochastic multisymplectic PDEs and their structure-preserving discretisations (&lt;em>Studies in Applied Mathematics&lt;/em>, 2025); and structure-preserving learning via the Geo-NeW model, which exactly preserves physical conservation laws through Finite Element Exterior Calculus. A 2024 University of Maryland workshop identified integration of structure-preserving methods with uncertainty quantification as a key open problem.&lt;/p>
&lt;h3 class="heading" id="8-data-driven-pde-discovery">
 8. Data-Driven PDE Discovery&lt;span class="heading__anchor"> &lt;a href="#8-data-driven-pde-discovery">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>&lt;strong>SINDy&lt;/strong> and its extensions use sparse regression over a dictionary of candidate functions. &lt;strong>GN-SINDy&lt;/strong> (2024–2026) addresses high dimensionality and large datasets by combining Q-DEIM greedy sampling, differentiable surrogate modelling, and sparse regression, showing robustness on Burgers, Allen–Cahn, and KdV equations. &lt;strong>Evo-SINDy&lt;/strong> (ACM, 2025) uses multi-population co-evolutionary algorithms for universal PDE identification. &lt;strong>Bayesian-SINDy&lt;/strong> quantifies parameter uncertainty robustly (arXiv:2402.15357).&lt;/p>
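&lt;p>The sparse-regression core shared by these SINDy variants is sequentially thresholded least squares (STLSQ). The minimal sketch below recovers $\dot{x} = -2x + x^2$ from noiseless derivative data over a small candidate library; real usage estimates derivatives from data and employs far richer libraries:&lt;/p>

```python
import numpy as np

# Minimal STLSQ sketch of the SINDy idea. Assumptions: noiseless derivatives
# and a four-term polynomial library, chosen so the recovery is exact.

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Sparse regression: solve least squares, zero small coefficients, refit."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        keep = np.abs(xi) >= threshold    # active terms survive the threshold
        xi[~keep] = 0.0
        if keep.any():
            xi[keep] = np.linalg.lstsq(theta[:, keep], dxdt, rcond=None)[0]
    return xi

x = np.linspace(-1.0, 1.0, 50)
library = np.column_stack([np.ones_like(x), x, x**2, x**3])  # [1, x, x^2, x^3]
dxdt = -2.0 * x + x**2
xi = stlsq(library, dxdt)   # sparse coefficients, ideally [0, -2, 1, 0]
```

&lt;p>The extensions above keep this thresholded-regression backbone while changing how the library is built, how derivatives are estimated, or how uncertainty over $\xi$ is quantified.&lt;/p>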
&lt;p>On the neural-symbolic front, &lt;strong>Mechanistic PDE Networks&lt;/strong> (arXiv:2502.18377, 2025) represent spatiotemporal data as space-time dependent linear PDEs within neural network hidden representations, then solve and decode for specific tasks. &lt;strong>MORL4PDEs&lt;/strong> (&lt;em>Chaos Solitons Fractals&lt;/em>, 2024) uses reinforcement learning and genetic algorithms for symbolic PDE regression without pre-specified candidate libraries. The &lt;strong>Physics-Informed Information Criterion (PIC)&lt;/strong> (&lt;em>Research&lt;/em>, 2022) selects the most appropriate PDE from candidates by incorporating symmetry constraints.&lt;/p>
&lt;h3 class="heading" id="9-hamiltonjacobi-pdes">
 9. Hamilton–Jacobi PDEs&lt;span class="heading__anchor"> &lt;a href="#9-hamiltonjacobi-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Hamilton–Jacobi (HJ) PDEs govern optimal control, level-set methods, and front propagation. A comprehensive 2025 review (arXiv:2502.20833) covers grid-based methods, representation formula methods, Monte Carlo via Laplace&amp;rsquo;s method, and deep learning approaches. Key deep learning advances include actor-critic neural network frameworks for static HJ equations (convergence analysed in 2024), and variational methods that solve HJ PDEs up to 100 dimensions with relative errors of 1–5%. Deep BSDE methods naturally apply to Hamilton-Jacobi-Bellman (HJB) equations arising in stochastic optimal control.&lt;/p>
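&lt;p>A minimal example of the representation-formula approach is the Hopf–Lax formula for $u_t + \tfrac{1}{2}|\nabla u|^2 = 0$, $u(0, x) = g(x)$, which gives $u(t, x) = \min_y \big[ g(y) + |x - y|^2 / (2t) \big]$ and reduces the PDE solve to a pointwise minimisation. In the sketch below a coarse grid search stands in for a proper optimiser:&lt;/p>

```python
# Hopf-Lax evaluation for u_t + |grad u|^2 / 2 = 0 in 1D.
# Assumption: a dense grid search over y replaces a real optimiser.

def hopf_lax(g, x, t, y_grid):
    """Evaluate the Hopf-Lax minimum over a candidate grid of y values."""
    return min(g(y) + (x - y) ** 2 / (2.0 * t) for y in y_grid)

g = lambda y: abs(y)                           # initial condition
ys = [i / 1000.0 for i in range(-3000, 3001)]  # y in [-3, 3], step 1e-3
u = hopf_lax(g, x=2.0, t=1.0, y_grid=ys)       # exact value here is 1.5
```

&lt;p>The deep learning methods surveyed above replace this per-point minimisation with learned value functions when the dimension makes grid search infeasible.&lt;/p>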
&lt;h3 class="heading" id="10-fractional-and-non-local-pdes">
 10. Fractional and Non-Local PDEs&lt;span class="heading__anchor"> &lt;a href="#10-fractional-and-non-local-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Fractional-order derivatives model anomalous diffusion, viscoelastic behaviour, and memory effects that integer-order PDEs cannot capture. Recent advances include semi-analytical methods (Adomian Decomposition, Variational Iteration) applied to 3D time-fractional diffusion, telegraph, and wave equations; a 2024 comprehensive review of fractional stochastic PDEs covering the latest numerical methods and practical implementations; the Optimised FNO (O-FNO, 2025) achieving 98%+ test accuracy for fractional Poisson equations; and a 2025 meshfree finite difference scheme for the fractional Laplacian on arbitrary bounded domains.&lt;/p>
&lt;h3 class="heading" id="11-multiscale-methods-and-model-order-reduction">
 11. Multiscale Methods and Model Order Reduction&lt;span class="heading__anchor"> &lt;a href="#11-multiscale-methods-and-model-order-reduction">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The 2024 &lt;em>Numerical Multiscale Methods&lt;/em> dissertation establishes an equivalence between time averaging and space homogenisation, and extends Deep Ritz to multiscale problems via scale convergence theory. &lt;strong>Multi-fidelity reduced order models&lt;/strong> for PDE-constrained optimisation (arXiv:2503.21252, 2025) use a hierarchical trust region algorithm with active learning, constructing a full/reduced/ML model hierarchy on-the-fly. &lt;strong>POD-DL-ROMs&lt;/strong> (Politecnico di Milano, 2024) combine proper orthogonal decomposition with autoencoder architectures for nonlinear parametric PDEs, providing a mathematically rigorous framework enhancing accuracy of reduced models.&lt;/p>
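&lt;p>The POD step behind such reduced order models admits a compact sketch: stack solution snapshots into a matrix and truncate its SVD to the modes carrying almost all of the energy. The autoencoder stage of POD-DL-ROM is not reproduced here, and the snapshot family below is synthetic:&lt;/p>

```python
import numpy as np

# POD sketch: truncated SVD of a snapshot matrix gives a low-dimensional basis.
# Assumption: a synthetic parametric family u(x; mu) stands in for PDE solves.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
mus = rng.uniform(1.0, 3.0, size=50)          # 50 sampled parameter values
snapshots = np.column_stack([np.sin(mu * np.pi * x) * np.exp(-mu) for mu in mus])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)       # cumulative energy of the modes
r = int(np.searchsorted(energy, 0.9999)) + 1  # modes for 99.99% of the energy
basis = U[:, :r]                              # POD reduced basis
```

&lt;p>Projecting the governing equations (or, in POD-DL-ROM, a learned nonlinear map) onto this $r$-dimensional basis is what yields the fast reduced model.&lt;/p>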
&lt;h3 class="heading" id="12-uncertainty-quantification-and-stochastic-pdes">
 12. Uncertainty Quantification and Stochastic PDEs&lt;span class="heading__anchor"> &lt;a href="#12-uncertainty-quantification-and-stochastic-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>&lt;strong>Quasi-Monte Carlo (QMC) methods&lt;/strong> achieve faster convergence than Monte Carlo for smooth integrands. A 2024 paper analyses QMC with generalised Gaussian random variables and Gevrey regular inputs — relaxing the standard uniformly bounded assumption — analysing dimension truncation, FEM, and QMC errors jointly for randomly shifted rank-1 lattice rules (arXiv:2411.03793). &lt;strong>Randomised QMC (RQMC)&lt;/strong> with scrambled Sobol&amp;rsquo; sequences achieves smaller bias and RMSE than Monte Carlo for risk-averse optimisation (arXiv:2408.02842). A 2024 ICERM semester at Brown University (&amp;ldquo;Numerical PDEs: Analysis, Algorithms, and Data Challenges&amp;rdquo;) served as a major gathering point for researchers integrating uncertainty quantification with PDE methods.&lt;/p>
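&lt;p>The even space-filling that drives QMC convergence can be illustrated with a 2D Halton sequence built from radical inverses, a deterministic low-discrepancy construction; the scrambled Sobol&amp;rsquo; sequences used in RQMC add randomisation on top and are not reproduced here:&lt;/p>

```python
# Pure-Python 2D Halton sequence (bases 2 and 3) via the van der Corput
# radical inverse, used to estimate a smooth integral over the unit square.

def radical_inverse(n: int, base: int) -> float:
    """Van der Corput radical inverse of n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton_2d(count: int):
    """First `count` points of the 2D Halton sequence (bases 2 and 3)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, count + 1)]

# QMC estimate of the integral of x*y over the unit square (exact value 0.25)
pts = halton_2d(1024)
qmc_estimate = sum(x * y for x, y in pts) / len(pts)
```

&lt;p>For smooth integrands such low-discrepancy point sets converge near $O(N^{-1})$ rather than the $O(N^{-1/2})$ of plain Monte Carlo, which is the advantage the lattice-rule and Sobol&amp;rsquo; analyses above quantify rigorously.&lt;/p>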
&lt;h3 class="heading" id="13-quantum-and-photonic-computing-for-pdes">
 13. Quantum and Photonic Computing for PDEs&lt;span class="heading__anchor"> &lt;a href="#13-quantum-and-photonic-computing-for-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>&lt;strong>Schrödingerisation&lt;/strong> techniques convert general linear PDEs into Schrödinger-type equations via the &amp;ldquo;warped transformation,&amp;rdquo; enabling direct quantum Hamiltonian simulation. A 2024 &lt;em>Quantum&lt;/em> journal paper provides explicit quantum circuit implementations for the heat and advection equations with complexity analysis demonstrating quantum advantage in high dimensions. &lt;strong>ColibriTD&amp;rsquo;s H-DES&lt;/strong> (March 2025) was reported as the first real-hardware solution of a PDE via variational quantum algorithm, executing on IBM&amp;rsquo;s 156-qubit Heron R2 processor for the inviscid Burgers&amp;rsquo; equation.&lt;/p>
&lt;p>&lt;strong>LightSolver&amp;rsquo;s Laser Processing Unit (LPU)&lt;/strong> (announced September 2025) can now directly map and solve PDEs, with constant-time iteration steps independent of problem size, claiming up to 100× speed gains over GPU solvers and partnerships with Ansys for engineering integration.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="open-problems">
 Open Problems&lt;span class="heading__anchor"> &lt;a href="#open-problems">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>&lt;strong>PINN training stability.&lt;/strong> Despite many improvements, PINN training remains fragile for stiff and multi-scale problems. A general theory of loss landscape conditioning and principled hyperparameter selection is lacking.&lt;/p>
&lt;p>&lt;strong>Neural operator generalisation theory.&lt;/strong> While FNO and DeepONet generalise empirically across PDE instances, rigorous approximation-theoretic guarantees relating operator-learning error to network width, depth, and training data remain incomplete.&lt;/p>
&lt;p>&lt;strong>Foundation model reliability and extrapolation.&lt;/strong> PDE foundation models show impressive zero-shot accuracy within their pre-training distribution, but their failure modes on out-of-distribution physics — and the extent to which physics-informed fine-tuning can compensate — are not yet well understood.&lt;/p>
&lt;p>&lt;strong>High-dimensional solvers beyond parabolic PDEs.&lt;/strong> The Deep BSDE method and MLP method primarily address semilinear parabolic PDEs. Extending their curse-of-dimensionality guarantees to elliptic, hyperbolic, or fully nonlinear PDEs remains largely open.&lt;/p>
&lt;p>&lt;strong>Structure-preserving deep learning.&lt;/strong> Integrating conservation laws and geometric structure (symplecticity, divergence-free constraints) into neural PDE solvers at scale — beyond the Geo-NeW approach for specific exterior calculus structures — is an active and unresolved challenge.&lt;/p>
&lt;p>&lt;strong>Quantum hardware advantage.&lt;/strong> Near-term quantum devices face noise and connectivity limitations that restrict their practical advantage over classical HPC for PDE solving. Demonstrating genuine quantum speedup for industrially relevant PDEs on real hardware remains an open goal.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="references">
 References&lt;span class="heading__anchor"> &lt;a href="#references">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>Brunton, S. L., Proctor, J. L., &amp;amp; Kutz, J. N. (2016). Discovering governing equations from data by sparse identification of nonlinear dynamical systems. &lt;em>PNAS, 113&lt;/em>(15), 3932–3937.&lt;/p>
&lt;p>ColibriTD. (2025, March). &lt;em>H-DES: First real-hardware PDE solver via variational quantum algorithm&lt;/em>. The Quantum Insider. &lt;a href="https://thequantuminsider.com/2025/03/25/colibritd-announces-h-des-pde-solver-as-a-step-toward-accessible-quantum-simulation-in-engineering/">https://thequantuminsider.com/2025/03/25/colibritd-announces-h-des-pde-solver-as-a-step-toward-accessible-quantum-simulation-in-engineering/&lt;/a>&lt;/p>
&lt;p>E, W., &amp;amp; Yu, B. (2018). The deep Ritz method: A deep learning-based numerical algorithm for solving variational problems. &lt;em>Communications in Mathematics and Statistics, 6&lt;/em>(1), 1–12.&lt;/p>
&lt;p>E, W., Han, J., &amp;amp; Jentzen, A. (2022). Algorithms for solving high dimensional PDEs: From nonlinear Monte Carlo to machine learning. &lt;em>Nonlinearity, 35&lt;/em>(1), 278.&lt;/p>
&lt;p>Han, J., Jentzen, A., &amp;amp; E, W. (2018). Solving high-dimensional partial differential equations using deep learning. &lt;em>PNAS, 115&lt;/em>(34), 8505–8510. &lt;a href="https://www.pnas.org/doi/10.1073/pnas.1718942115">https://www.pnas.org/doi/10.1073/pnas.1718942115&lt;/a>&lt;/p>
&lt;p>Han, J. (2025). &lt;em>A brief review of the Deep BSDE method for solving high-dimensional partial differential equations&lt;/em>. arXiv:2505.17032. &lt;a href="https://arxiv.org/abs/2505.17032">https://arxiv.org/abs/2505.17032&lt;/a>&lt;/p>
&lt;p>Hu, J., Jin, S., Liu, N., &amp;amp; Zhang, L. (2024). Quantum circuits for partial differential equations via Schrödingerisation. &lt;em>Quantum, 8&lt;/em>, 1563. &lt;a href="https://quantum-journal.org/papers/q-2024-12-12-1563/">https://quantum-journal.org/papers/q-2024-12-12-1563/&lt;/a>&lt;/p>
&lt;p>IEEE. (2025). &lt;em>A staged training approach for physics-informed neural networks in solving partial differential equations&lt;/em>. &lt;a href="https://ieeexplore.ieee.org/document/11172661/">https://ieeexplore.ieee.org/document/11172661/&lt;/a>&lt;/p>
&lt;p>IEEE. (2025). &lt;em>Higher-order-ReLU-KANs (HRKANs) for solving physics-informed neural networks more accurately, robustly and faster&lt;/em>. &lt;a href="https://ieeexplore.ieee.org/document/11105234/">https://ieeexplore.ieee.org/document/11105234/&lt;/a>&lt;/p>
&lt;p>IEEE. (2025). &lt;em>ReBA: A hybrid sparse reconfigurable butterfly accelerator for solving PDEs via hardware and algorithm co-design&lt;/em>. &lt;a href="https://ieeexplore.ieee.org/document/11044078/">https://ieeexplore.ieee.org/document/11044078/&lt;/a>&lt;/p>
&lt;p>IEEE. (2025). &lt;em>An optimized Fourier neural operator for the 2D fractional Poisson equation&lt;/em>. &lt;a href="https://ieeexplore.ieee.org/document/11405135/">https://ieeexplore.ieee.org/document/11405135/&lt;/a>&lt;/p>
&lt;p>Li, Z., et al. (2020). &lt;em>Fourier neural operator for parametric partial differential equations&lt;/em>. arXiv:2010.08895.&lt;/p>
&lt;p>LightSolver. (2025, September). &lt;em>LightSolver announces advance in physical modeling on the LPU&lt;/em>. The Quantum Insider. &lt;a href="https://thequantuminsider.com/2025/09/16/lightsolver-announces-advance-in-physical-modeling-on-the-lpu-and-new-roadmap-for-optical-analog-pde-solving/">https://thequantuminsider.com/2025/09/16/lightsolver-announces-advance-in-physical-modeling-on-the-lpu-and-new-roadmap-for-optical-analog-pde-solving/&lt;/a>&lt;/p>
&lt;p>Liu, Z., et al. (2024). &lt;em>KAN: Kolmogorov-Arnold Networks&lt;/em>. arXiv:2404.19756. ICLR 2025. &lt;a href="https://arxiv.org/abs/2404.19756">https://arxiv.org/abs/2404.19756&lt;/a>&lt;/p>
&lt;p>Lu, L., Jin, P., Pang, G., Zhang, Z., &amp;amp; Karniadakis, G. E. (2021). Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. &lt;em>Nature Machine Intelligence, 3&lt;/em>, 218–229.&lt;/p>
&lt;p>Kontolati, K., et al. (2024). Learning nonlinear operators in latent spaces for real-time predictions of complex dynamics in physical systems. &lt;em>Nature Communications&lt;/em>. &lt;a href="https://www.nature.com/articles/s41467-024-49411-w">https://www.nature.com/articles/s41467-024-49411-w&lt;/a>&lt;/p>
&lt;p>Herde, M., et al. (2024). &lt;em>Poseidon: Efficient foundation models for PDEs&lt;/em>. arXiv:2405.19101. &lt;a href="https://arxiv.org/html/2405.19101v2">https://arxiv.org/html/2405.19101v2&lt;/a>&lt;/p>
&lt;p>Peng, W., et al. (2025). &lt;em>OmniArch: Building foundation model for scientific computing&lt;/em>. ICML 2025. &lt;a href="https://icml.cc/virtual/2025/poster/45099">https://icml.cc/virtual/2025/poster/45099&lt;/a>&lt;/p>
&lt;p>Raissi, M., Perdikaris, P., &amp;amp; Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. &lt;em>Journal of Computational Physics, 378&lt;/em>, 686–707.&lt;/p>
&lt;p>Shi, Z., et al. (2025). &lt;em>Physics-informed fine-tuning of foundation models for partial differential equations&lt;/em>. arXiv:2603.15431. &lt;a href="https://arxiv.org/html/2603.15431v1">https://arxiv.org/html/2603.15431v1&lt;/a>&lt;/p>
&lt;p>Wang, S., et al. (2025). &lt;em>Geo-NeW: Structure-preserving learning improves geometry generalization in PDEs&lt;/em>. arXiv:2602.02788. &lt;a href="https://arxiv.org/abs/2602.02788">https://arxiv.org/abs/2602.02788&lt;/a>&lt;/p>
&lt;p>Wang, Z., et al. (2024). Kolmogorov–Arnold-Informed neural network: A physics-informed deep learning framework for solving forward and inverse problems. &lt;em>Computer Methods in Applied Mechanics and Engineering&lt;/em>. &lt;a href="https://linkinghub.elsevier.com/retrieve/pii/S0045782524007722">https://linkinghub.elsevier.com/retrieve/pii/S0045782524007722&lt;/a>&lt;/p>
&lt;p>Xiao, P., et al. (2025). Quantum DeepONet: Neural operators accelerated by quantum computing. &lt;em>Quantum, 9&lt;/em>, 1761. &lt;a href="https://quantum-journal.org/papers/q-2025-06-04-1761/">https://quantum-journal.org/papers/q-2025-06-04-1761/&lt;/a>&lt;/p>
&lt;p>Xie, Z., et al. (2025). &lt;em>Anant-Net: Breaking the curse of dimensionality with scalable and interpretable neural surrogates&lt;/em>. arXiv:2505.03595. &lt;a href="https://arxiv.org/html/2505.03595v3">https://arxiv.org/html/2505.03595v3&lt;/a>&lt;/p>
&lt;p>Xie, Z., et al. (2025). A deep shotgun method for solving high-dimensional parabolic partial differential equations. &lt;em>Journal of Scientific Computing&lt;/em>. &lt;a href="https://link.springer.com/10.1007/s10915-025-02983-1">https://link.springer.com/10.1007/s10915-025-02983-1&lt;/a>&lt;/p>
&lt;p>Xu, K., &amp;amp; Darve, E. (2025). &lt;em>Integration matters for learning PDEs with backwards SDEs&lt;/em>. arXiv:2505.01078. &lt;a href="https://arxiv.org/abs/2505.01078">https://arxiv.org/abs/2505.01078&lt;/a>&lt;/p>
&lt;p>Zeng, Q., et al. (2025). Automatic network structure discovery of physics informed neural networks via knowledge distillation. &lt;em>Nature Communications&lt;/em>. &lt;a href="https://www.nature.com/articles/s41467-025-64624-3">https://www.nature.com/articles/s41467-025-64624-3&lt;/a>&lt;/p>
&lt;p>Zhang, Y., et al. (2024). &lt;em>PDEformer: Towards a foundation model for one-dimensional partial differential equations&lt;/em>. arXiv:2402.12652. &lt;a href="http://arxiv.org/pdf/2402.12652.pdf">http://arxiv.org/pdf/2402.12652.pdf&lt;/a>&lt;/p>
&lt;p>Zhang, Y., et al. (2025). &lt;em>A multimodal PDE foundation model for prediction and scientific text descriptions&lt;/em>. arXiv:2502.06026. &lt;a href="https://arxiv.org/abs/2502.06026">https://arxiv.org/abs/2502.06026&lt;/a>&lt;/p></description></item><item><title>Recent Advances in Steady States of Navier-Stokes Equations</title><link>http://lnhutnam.github.io/en/posts/ss-nse/</link><pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate><guid>http://lnhutnam.github.io/en/posts/ss-nse/</guid><description>&lt;p>The study of steady-state and self-similar solutions of the incompressible Navier-Stokes equations (NSE) has undergone remarkable progress in the 2020s. This post surveys landmark results from 2024–2026 touching on existence, uniqueness, classification, and stability of such solutions. The stationary (steady) NSE in $\mathbb{R}^3$ reads:&lt;/p>
&lt;p>$$-\nu \Delta u + (u \cdot \nabla) u + \nabla p = 0, \quad \operatorname{div} u = 0.$$&lt;/p>
&lt;p>A central object of the self-similar theory is the class of &lt;strong>$(-1)$-homogeneous&lt;/strong> (scale-invariant) solutions: a function $u$ is $(-1)$-homogeneous if $u(\lambda x) = \lambda^{-1} u(x)$ for all $\lambda &amp;gt; 0$. These are precisely the profiles of forward self-similar solutions $u(x,t) = t^{-1/2} U(x/\sqrt{t})$ of the time-dependent NSE.&lt;/p>
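&lt;p>As a quick sanity check (illustrative only; not taken from any of the surveyed papers), the following sympy snippet verifies symbolically that the ansatz $u(x,t) = t^{-1/2} U(x/\sqrt{t})$ satisfies the parabolic scaling symmetry $u(\lambda x, \lambda^2 t) = \lambda^{-1} u(x,t)$ for an &lt;em>arbitrary&lt;/em> profile $U$, which is why scale-invariant initial data naturally lead to forward self-similar solutions:&lt;/p>

```python
# Symbolic check: the forward self-similar ansatz
#   u(x, t) = t^{-1/2} U(x / sqrt(t))
# automatically satisfies the parabolic scaling symmetry
#   u(lam*x, lam^2*t) = u(x, t) / lam
# of the Navier-Stokes equations, for any profile U.
import sympy as sp

lam, t, x = sp.symbols('lambda t x', positive=True)
U = sp.Function('U')  # unspecified self-similar profile

def u(X, T):
    return U(X / sp.sqrt(T)) / sp.sqrt(T)

diff = sp.simplify(u(lam * x, lam**2 * t) - u(x, t) / lam)
print(diff)  # 0
```

&lt;p>The identity holds for every $U$; what is hard, and what the cited works establish, is that the profile equation itself admits (possibly several) solutions for large data.&lt;/p>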
&lt;hr>
&lt;h2 class="heading" id="overview">
 Overview&lt;span class="heading__anchor"> &lt;a href="#overview">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>Five landmark results define the frontier of this area in 2024–2026:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Non-uniqueness of Leray–Hopf solutions&lt;/strong> via a computer-assisted proof in the self-similar framework (Hou, Wang, &amp;amp; Yang, 2025).&lt;/li>
&lt;li>&lt;strong>Forward self-similar solutions in 2D&lt;/strong> for arbitrarily large initial data (Albritton, Guillod, Korobkov, &amp;amp; Ren, 2026).&lt;/li>
&lt;li>&lt;strong>Existence of self-similar solutions in high dimensions&lt;/strong> ($4 \leq n \leq 16$) without smallness conditions (Bang, Gui, Liu, Wang, &amp;amp; Xie, 2025).&lt;/li>
&lt;li>Sharp &lt;strong>removable singularity results&lt;/strong> for $(-1)$-homogeneous solutions with singular rays (Li, Li, &amp;amp; Yan, 2024).&lt;/li>
&lt;li>&lt;strong>Steady NSE in junction domains&lt;/strong> with large, non-small fluxes (Gazzola, Korobkov, Ren, &amp;amp; Sperone, 2025).&lt;/li>
&lt;/ol>
&lt;div class="table-wrapper">&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Paper&lt;/th>
 &lt;th>Authors&lt;/th>
 &lt;th>Contribution&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>arXiv:2410.11170&lt;/td>
 &lt;td>Li, Li, Yan&lt;/td>
 &lt;td>Optimal removable singularity for $(-1)$-homogeneous solutions&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>arXiv:2412.07283&lt;/td>
 &lt;td>Bang, Gui, Liu, Wang, Xie&lt;/td>
 &lt;td>Self-similar solutions in 2D sector: existence/non-uniqueness&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>arXiv:2505.14642&lt;/td>
 &lt;td>Gazzola, Korobkov, Ren, Sperone&lt;/td>
 &lt;td>Steady NSE in junction channels, non-small fluxes&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>arXiv:2509.25116&lt;/td>
 &lt;td>Hou, Wang, Yang&lt;/td>
 &lt;td>&lt;strong>First rigorous non-uniqueness of Leray–Hopf&lt;/strong>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>arXiv:2510.10488&lt;/td>
 &lt;td>Bang, Gui, Liu, Wang, Xie&lt;/td>
 &lt;td>$(-1)$-homogeneous solutions, dimensions $4 \leq n \leq 16$&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>arXiv:2601.03161&lt;/td>
 &lt;td>Albritton, Guillod, Korobkov, Ren&lt;/td>
 &lt;td>Forward self-similar solutions, 2D, large data&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>arXiv:2601.03833&lt;/td>
 &lt;td>Gui, Liu, Xie&lt;/td>
 &lt;td>Global existence of 2D forward self-similar solutions&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>arXiv:2602.19846&lt;/td>
 &lt;td>Fujii&lt;/td>
 &lt;td>Sharp uniqueness/non-uniqueness in critical Besov spaces&lt;/td>
 &lt;/tr>
 &lt;/tbody>
 &lt;/table>
&lt;/div>&lt;hr>
&lt;h2 class="heading" id="background">
 Background&lt;span class="heading__anchor"> &lt;a href="#background">#&lt;/a>&lt;/span>
&lt;/h2>&lt;h3 class="heading" id="landau-solutions-and-šveráks-classification">
 Landau Solutions and Šverák&amp;rsquo;s Classification&lt;span class="heading__anchor"> &lt;a href="#landau-solutions-and-%c5%a1ver%c3%a1ks-classification">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>In 1944, Landau discovered a three-parameter explicit family of $(-1)$-homogeneous axisymmetric no-swirl solutions of the 3D stationary NSE. Known as &lt;strong>Landau solutions&lt;/strong>, they are parameterized by vectors $b \in \mathbb{R}^3$ and represent fluid jets emanating from the origin. A seminal result of Šverák (2006) established that all $(-1)$-homogeneous solutions smooth on $\mathbb{S}^2$ must be Landau solutions — the only scale-invariant flows without singularities on the sphere.&lt;/p>
&lt;h3 class="heading" id="forward-self-similar-solutions">
 Forward Self-Similar Solutions&lt;span class="heading__anchor"> &lt;a href="#forward-self-similar-solutions">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>A &lt;strong>forward self-similar solution&lt;/strong> takes the form&lt;/p>
&lt;p>$$u(x, t) = \frac{1}{\sqrt{t}} U\!\left(\frac{x}{\sqrt{t}}\right),$$&lt;/p>
&lt;p>where the self-similar profile $U$ solves the stationary rescaled NSE. The seminal work of Jia and Šverák (2014) showed that for any $(-1)$-homogeneous initial data smooth away from the origin, at least one global forward self-similar solution exists, with &lt;strong>no smallness restriction&lt;/strong> on the data. Existence is proved via the Leray–Schauder continuation theorem rather than a fixed-point contraction (see also Jia &amp;amp; Šverák, 2015).&lt;/p>
&lt;p>&lt;strong>Discretely self-similar&lt;/strong> (DSS) solutions, where $u(\lambda x, \lambda^2 t) = \lambda^{-1} u(x,t)$ for a specific $\lambda &amp;gt; 1$, were constructed for large data by Tsai (2014).&lt;/p>
&lt;h3 class="heading" id="classification-of--1-homogeneous-solutions">
 Classification of $(-1)$-Homogeneous Solutions&lt;span class="heading__anchor"> &lt;a href="#classification-of--1-homogeneous-solutions">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Tian and Xin (1998) proved that all $(-1)$-homogeneous axisymmetric solutions with exactly one singularity must be Landau solutions. A key series of papers by Li, Li, and Yan (2016–2023) classified all $(-1)$-homogeneous axisymmetric no-swirl solutions with singularities at both the north and south poles of $\mathbb{S}^2$, parameterizing them as a four-dimensional surface with boundary. They also constructed the first &lt;strong>non-axisymmetric&lt;/strong> $(-1)$-homogeneous solutions with swirl using the Weierstrass representation of minimal surfaces.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="recent-developments">
 Recent Developments&lt;span class="heading__anchor"> &lt;a href="#recent-developments">#&lt;/a>&lt;/span>
&lt;/h2>&lt;h3 class="heading" id="1-removable-singularity-theorem-li-li--yan-2024">
 1. Removable Singularity Theorem (Li, Li, &amp;amp; Yan, 2024)&lt;span class="heading__anchor"> &lt;a href="#1-removable-singularity-theorem-li-li--yan-2024">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>One of the sharpest results of 2024 is the &lt;strong>removable singularity theorem&lt;/strong> proved by Li, Li, and Yan (arXiv:2410.11170, to appear in &lt;em>Trans. Amer. Math. Soc.&lt;/em>): any local $(-1)$-homogeneous solution $u$ near a potential singular ray through $P \in \mathbb{S}^2$ extends smoothly across $P$, &lt;strong>provided&lt;/strong> $|u(x)| = o(|\ln \operatorname{dist}(x, P)|)$ as $x \to P$ on $\mathbb{S}^2$.&lt;/p>
&lt;p>The result is &lt;strong>sharp&lt;/strong>: for any $\alpha &amp;gt; 0$, there exist local solutions with $|u(x)| / \ln \operatorname{dist}(x, P) \to -\alpha$ as $x \to P$, showing that logarithmic growth is exactly the threshold for smooth extension. The paper also establishes existence of solutions with any finite number of singularities located arbitrarily on $\mathbb{S}^2$. A companion survey by Li and Yan (arXiv:2509.07243, Sep 2025) provides a state-of-the-art exposition of this topic.&lt;/p>
&lt;h3 class="heading" id="2-self-similar-solutions-in-high-dimensions-bang-et-al-2025">
 2. Self-Similar Solutions in High Dimensions (Bang et al., 2025)&lt;span class="heading__anchor"> &lt;a href="#2-self-similar-solutions-in-high-dimensions-bang-et-al-2025">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Bang, Gui, Liu, Wang, and Xie (arXiv:2510.10488, Oct 2025) proved existence of $(-1)$-homogeneous solutions to the steady NSE in &lt;strong>high spatial dimensions&lt;/strong>:&lt;/p>
&lt;blockquote>
&lt;p>For any $(-3)$-homogeneous, locally Lipschitz external force on $\mathbb{R}^n \setminus \{0\}$ with $4 \leq n \leq 16$, the steady NSE admit at least one $(-1)$-homogeneous solution that is scale-invariant and regular away from the origin.&lt;/p>&lt;/blockquote>
&lt;p>&lt;strong>Global uniqueness&lt;/strong> holds when the external force is small. The key novelty is a &lt;strong>dimension-reduction effect&lt;/strong> from self-similarity: integral estimates of the positive part of the total head pressure enable energy estimates even in the supercritical dimension regime. For forces with only a nonnegative radial component, existence extends to &lt;strong>all $n \geq 4$&lt;/strong>.&lt;/p>
&lt;p>The same group (arXiv:2412.07283, Dec 2024) also established existence, uniqueness, and non-uniqueness of self-similar solutions to the steady NSE in &lt;strong>2D sectors&lt;/strong> with no-slip boundary conditions, providing rigorous corrections to classical Rosenhead (1940) calculations.&lt;/p>
&lt;h3 class="heading" id="3-forward-self-similar-solutions-in-2d-for-large-data-2026">
 3. Forward Self-Similar Solutions in 2D for Large Data (2026)&lt;span class="heading__anchor"> &lt;a href="#3-forward-self-similar-solutions-in-2d-for-large-data-2026">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Two independent papers in January 2026 addressed the 2D problem, where classical local energy estimates break down because the initial $(-1)$-homogeneous vorticity is not locally integrable:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Gui, Liu, and Xie&lt;/strong> (arXiv:2601.03833) established global existence of forward self-similar solutions for any divergence-free, $(-1)$-homogeneous, locally Hölder continuous initial velocity, with &lt;strong>no smallness assumption&lt;/strong>.&lt;/li>
&lt;li>&lt;strong>Albritton, Guillod, Korobkov, and Ren&lt;/strong> (arXiv:2601.03161) independently constructed such solutions from &lt;strong>arbitrarily large&lt;/strong> initial data and provided &lt;strong>numerical evidence for non-uniqueness&lt;/strong> — the first construction and validation of non-uniqueness for the 2D self-similar problem.&lt;/li>
&lt;/ul>
&lt;h3 class="heading" id="4-non-uniqueness-of-lerayhopf-solutions-hou-wang--yang-2025">
 4. Non-Uniqueness of Leray–Hopf Solutions (Hou, Wang, &amp;amp; Yang, 2025)&lt;span class="heading__anchor"> &lt;a href="#4-non-uniqueness-of-lerayhopf-solutions-hou-wang--yang-2025">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The most dramatic recent development is the &lt;strong>first rigorous computer-assisted proof of non-uniqueness of Leray–Hopf solutions&lt;/strong> to the unforced 3D NSE by Hou, Wang, and Yang (arXiv:2509.25116, Sep 2025, revised Mar 2026):&lt;/p>
&lt;blockquote>
&lt;p>There exist &lt;strong>infinitely many distinct suitable Leray–Hopf solutions&lt;/strong> to the 3D NSE on $\mathbb{R}^3 \times [0,1]$ with the same compactly supported, divergence-free initial condition $u_{in} \in L^q$ for any $q &amp;lt; 3$.&lt;/p>&lt;/blockquote>
&lt;p>The proof executes the &lt;strong>Jia–Šverák program&lt;/strong> (Jia &amp;amp; Šverák, 2015), which requires finding a large forward self-similar background flow whose linearized operator has an &lt;strong>unstable eigenvalue&lt;/strong> (positive real part), then bifurcating to produce infinitely many Leray–Hopf solutions. The key steps are:&lt;/p>
&lt;ol>
&lt;li>A finite-element + spectral-basis numerical method computes a highly precise candidate profile $\tilde{U}$.&lt;/li>
&lt;li>The linearized operator $L_{\tilde{U}}$ is decomposed into a coercive part plus a finite-rank perturbation, whose invertibility is certified by &lt;strong>computer-assisted interval arithmetic&lt;/strong>.&lt;/li>
&lt;li>This certifies an unstable eigenpair $(\tilde{v}, \tilde{\lambda})$ with $\operatorname{Re}(\tilde{\lambda}) &amp;gt; 0$, yielding the second (and infinitely many) solutions via Riesz projection and Duhamel analysis.&lt;/li>
&lt;/ol>
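&lt;p>The certification idea in step 2 can be illustrated with a toy example (purely schematic; the actual proof certifies invertibility of a linearized operator, not positivity of a scalar function, and the names below are illustrative). Using mpmath&amp;rsquo;s interval arithmetic, one can rigorously verify strict positivity of a function on a whole interval by adaptive bisection, because every interval evaluation encloses all true values on that interval:&lt;/p>

```python
# Toy sketch of interval-arithmetic certification (illustrative only):
# rigorously verify that f(x) = (x-1)^2 + 1 never vanishes on [-10, 10]
# by bisecting until each interval enclosure of f is strictly positive.
from mpmath import iv

iv.dps = 30  # working precision for the interval endpoints

def f(X):
    # scalar stand-in for a coercivity/invertibility bound
    return X**2 - 2*X + 2

def certify_positive(a, b, depth=0):
    """Return True if interval arithmetic proves f > 0 on [a, b]."""
    Y = f(iv.mpf([a, b]))
    if Y.a > 0:      # lower endpoint of the enclosure is positive: certified
        return True
    if depth > 40:   # give up: cannot certify at this depth
        return False
    m = (a + b) / 2  # bisect to shrink the interval-dependency error
    return certify_positive(a, m, depth + 1) and certify_positive(m, b, depth + 1)

print(certify_positive(-10.0, 10.0))  # True
```

&lt;p>Interval enclosures over-estimate ranges (the dependency problem), so bisection is needed until each sub-enclosure is strictly positive; the same enclosure-plus-refinement principle underlies certified spectral bounds in computer-assisted proofs.&lt;/p>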
&lt;p>These solutions just miss the Prodi–Serrin condition that guarantees uniqueness. Guillod and Šverák (2017) had provided strong numerical evidence that such unstable profiles exist, but the rigorous proof remained elusive until Hou et al.&lt;/p>
&lt;h3 class="heading" id="5-sharp-non-uniqueness-for-weak-solutions-via-convex-integration-20222026">
 5. Sharp Non-Uniqueness for Weak Solutions via Convex Integration (2022–2026)&lt;span class="heading__anchor"> &lt;a href="#5-sharp-non-uniqueness-for-weak-solutions-via-convex-integration-20222026">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>A parallel program uses convex integration to prove non-uniqueness of weak solutions. Cheskidov and Luo (&lt;em>Invent. Math.&lt;/em>, 2022) proved sharp non-uniqueness in $L^p_t L^\infty$ for any $p &amp;lt; 2$ in the periodic setting. Miao, Nie, and Ye (arXiv:2412.09637, Dec 2024) extended this to $\mathbb{R}^3$. Fujii (arXiv:2602.19846, Feb 2026) completed a sharp classification in critical Besov spaces $C([0,T); \dot{B}^{n/p-1}_{p,q}(\mathbb{R}^n))$, finding that large-time asymptotics of non-unique solutions are governed by non-trivial &lt;strong>stationary flows&lt;/strong> — a first in the critical regularity setting.&lt;/p>
&lt;div class="table-wrapper">&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Result&lt;/th>
 &lt;th>Authors&lt;/th>
 &lt;th>Year&lt;/th>
 &lt;th>Setting&lt;/th>
 &lt;th style="text-align: center">Self-similar?&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>Non-uniqueness, $L^p_t L^\infty$, torus&lt;/td>
 &lt;td>Cheskidov &amp;amp; Luo&lt;/td>
 &lt;td>2022&lt;/td>
 &lt;td>3D periodic&lt;/td>
 &lt;td style="text-align: center">No&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Non-uniqueness, $L^p_t L^\infty$, $\mathbb{R}^3$&lt;/td>
 &lt;td>Miao, Nie &amp;amp; Ye&lt;/td>
 &lt;td>2024&lt;/td>
 &lt;td>3D whole space&lt;/td>
 &lt;td style="text-align: center">No&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Non-uniqueness of Leray–Hopf, 3D&lt;/td>
 &lt;td>Hou, Wang &amp;amp; Yang&lt;/td>
 &lt;td>2025&lt;/td>
 &lt;td>3D whole space&lt;/td>
 &lt;td style="text-align: center">&lt;strong>Yes&lt;/strong>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Forward self-similar, 2D, large data&lt;/td>
 &lt;td>Albritton et al.&lt;/td>
 &lt;td>2026&lt;/td>
 &lt;td>2D whole space&lt;/td>
 &lt;td style="text-align: center">&lt;strong>Yes&lt;/strong>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Steady NSE in 2D sector&lt;/td>
 &lt;td>Bang et al.&lt;/td>
 &lt;td>2024&lt;/td>
 &lt;td>2D sector&lt;/td>
 &lt;td style="text-align: center">&lt;strong>Yes&lt;/strong>&lt;/td>
 &lt;/tr>
 &lt;/tbody>
 &lt;/table>
&lt;/div>&lt;h3 class="heading" id="6-liouville-theorems-and-stability-of-landau-solutions">
 6. Liouville Theorems and Stability of Landau Solutions&lt;span class="heading__anchor"> &lt;a href="#6-liouville-theorems-and-stability-of-landau-solutions">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>&lt;strong>Tan&lt;/strong> (arXiv:2501.03609, Jan 2025) proved new Liouville theorems for the stationary NSE (including the fractional case) under growth conditions in Lebesgue spaces. &lt;strong>Ding and Tan&lt;/strong> (arXiv:2501.03615, Jan 2025) proved a Liouville theorem for the stationary &lt;strong>inhomogeneous&lt;/strong> NSE via frequency localization of the Dirichlet energy near the origin.&lt;/p>
&lt;p>The asymptotic stability of small Landau solutions in $L^3$ was sharpened by &lt;strong>Bradshaw and Wang&lt;/strong> (arXiv:2409.12918, Sep 2024): $L^3$-asymptotic stability holds in Lorentz spaces $L^{3,q}$ for $q &amp;lt; \infty$, but &lt;strong>fails&lt;/strong> in $L^{3,\infty}$ (weak-$L^3$), marking the precise boundary of stability.&lt;/p>
&lt;h3 class="heading" id="7-steady-nse-in-bounded-and-unbounded-domains">
 7. Steady NSE in Bounded and Unbounded Domains&lt;span class="heading__anchor"> &lt;a href="#7-steady-nse-in-bounded-and-unbounded-domains">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>A major reference work by Korobkov, Pileckas, and Russo (&lt;em>Springer/Birkhäuser&lt;/em>, March 2024) provides the first comprehensive book treatment of &lt;strong>Leray&amp;rsquo;s problem&lt;/strong>: existence of a solution in bounded domains under only the condition of zero total flux — without smallness on the boundary data.&lt;/p>
&lt;p>Gazzola, Korobkov, Ren, and Sperone (arXiv:2505.14642, May 2025) studied steady NSE in a &lt;strong>junction of unbounded channels&lt;/strong> with sources and sinks, under inhomogeneous Dirichlet boundary conditions and without smallness of fluxes. They prove existence of a solution with uniformly bounded Dirichlet integral in every compact subset via Leray&amp;rsquo;s &lt;em>reductio ad absurdum&lt;/em> argument using Morse–Sard-type theorems in Sobolev spaces.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="open-problems">
 Open Problems&lt;span class="heading__anchor"> &lt;a href="#open-problems">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>Several central questions remain unresolved or only partially answered:&lt;/p>
&lt;p>&lt;strong>The Clay Millennium Prize Problem.&lt;/strong> Whether 3D NSE solutions from smooth initial data can blow up in finite time is not resolved. The Hou et al. non-uniqueness result concerns Leray–Hopf solutions from &lt;em>singular&lt;/em> $L^q$ ($q &amp;lt; 3$) initial data, not smooth data.&lt;/p>
&lt;p>&lt;strong>Complete classification of $(-1)$-homogeneous solutions in 3D.&lt;/strong> The axisymmetric no-swirl case is fully classified, and swirl solutions are well-studied, but a complete classification for all $(-1)$-homogeneous solutions with arbitrarily many singular rays and all possible swirl configurations is not yet achieved.&lt;/p>
&lt;p>&lt;strong>Rigorous non-uniqueness of forward self-similar solutions in 3D.&lt;/strong> The Jia–Šverák program produced numerical evidence (Guillod &amp;amp; Šverák, 2017), but a fully rigorous, non-computer-assisted proof of non-uniqueness for the forward (not backward) self-similar 3D problem remains open.&lt;/p>
&lt;p>&lt;strong>Asymptotic stability of large Landau solutions.&lt;/strong> While small Landau solutions are asymptotically stable in $L^3$, stability for large-parameter Landau solutions is not fully understood.&lt;/p>
&lt;p>&lt;strong>The Leray problem in non-axisymmetric 3D exterior domains without flux restrictions.&lt;/strong> The axisymmetric case was solved by Korobkov, Pileckas, and Russo, but the general 3D exterior domain problem under large flux remains open.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="references">
 References&lt;span class="heading__anchor"> &lt;a href="#references">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>Albritton, D., Guillod, J., Korobkov, M., &amp;amp; Ren, X. (2026). &lt;em>Forward self-similar solutions to the 2D Navier-Stokes equations from large data&lt;/em>. arXiv:2601.03161. &lt;a href="https://arxiv.org/abs/2601.03161">https://arxiv.org/abs/2601.03161&lt;/a>&lt;/p>
&lt;p>Bang, J., Gui, C., Liu, Y., Wang, C., &amp;amp; Xie, C. (2024). &lt;em>Self-similar solutions to the steady Navier-Stokes equations in 2D sectors&lt;/em>. arXiv:2412.07283. &lt;a href="https://arxiv.org/abs/2412.07283">https://arxiv.org/abs/2412.07283&lt;/a>&lt;/p>
&lt;p>Bang, J., Gui, C., Liu, Y., Wang, C., &amp;amp; Xie, C. (2025). &lt;em>On the existence of self-similar solutions to the steady Navier-Stokes equations in high dimensions&lt;/em>. arXiv:2510.10488. &lt;a href="https://arxiv.org/abs/2510.10488">https://arxiv.org/abs/2510.10488&lt;/a>&lt;/p>
&lt;p>Bradshaw, Z., &amp;amp; Wang, X. (2024). &lt;em>Asymptotic stability of Landau solutions in Lorentz spaces&lt;/em>. arXiv:2409.12918. &lt;a href="https://arxiv.org/pdf/2409.12918.pdf">https://arxiv.org/pdf/2409.12918.pdf&lt;/a>&lt;/p>
&lt;p>Cheskidov, A., &amp;amp; Luo, X. (2022). Sharp nonuniqueness for the Navier-Stokes equations. &lt;em>Inventiones Mathematicae&lt;/em>. arXiv:2009.06596. &lt;a href="https://arxiv.org/abs/2009.06596">https://arxiv.org/abs/2009.06596&lt;/a>&lt;/p>
&lt;p>Ding, M., &amp;amp; Tan, W. (2025). &lt;em>Liouville-type theorem for the stationary inhomogeneous Navier-Stokes equations&lt;/em>. arXiv:2501.03615. &lt;a href="https://arxiv.org/abs/2501.03615">https://arxiv.org/abs/2501.03615&lt;/a>&lt;/p>
&lt;p>Fujii, M. (2026). &lt;em>Sharp non-uniqueness for the Navier-Stokes equations in critical Besov spaces&lt;/em>. arXiv:2602.19846. &lt;a href="https://arxiv.org/html/2602.19846v1">https://arxiv.org/html/2602.19846v1&lt;/a>&lt;/p>
&lt;p>Gazzola, F., Korobkov, M., Ren, X., &amp;amp; Sperone, G. (2025). &lt;em>The steady Navier-Stokes equations in a system of unbounded channels with sources and sinks&lt;/em>. arXiv:2505.14642. &lt;a href="https://arxiv.org/abs/2505.14642">https://arxiv.org/abs/2505.14642&lt;/a>&lt;/p>
&lt;p>Gui, C., Liu, Y., &amp;amp; Xie, C. (2026). &lt;em>On the forward self-similar solutions to the two-dimensional Navier-Stokes equations&lt;/em>. arXiv:2601.03833. &lt;a href="https://arxiv.org/html/2601.03833v2">https://arxiv.org/html/2601.03833v2&lt;/a>&lt;/p>
&lt;p>Hou, T., Wang, Y., &amp;amp; Yang, C. (2025). &lt;em>Nonuniqueness of Leray-Hopf solutions to the unforced incompressible 3D Navier-Stokes equations&lt;/em>. arXiv:2509.25116. &lt;a href="https://arxiv.org/abs/2509.25116">https://arxiv.org/abs/2509.25116&lt;/a>&lt;/p>
&lt;p>Jia, H., &amp;amp; Šverák, V. (2015). Are the incompressible 3d Navier–Stokes equations locally ill-posed in the natural energy space? &lt;em>Journal of Functional Analysis, 268&lt;/em>(12), 3734–3766. &lt;a href="https://www.sciencedirect.com/science/article/pii/S002212361500138X">https://www.sciencedirect.com/science/article/pii/S002212361500138X&lt;/a>&lt;/p>
&lt;p>Korobkov, M., Pileckas, K., &amp;amp; Russo, R. (2024). &lt;em>The Steady Navier-Stokes System: Basics of the Theory and the Leray Problem&lt;/em>. Springer/Birkhäuser. &lt;a href="https://books.google.com/books/about/The_Steady_Navier_Stokes_System.html?id=GOf8EAAAQBAJ">https://books.google.com/books/about/The_Steady_Navier_Stokes_System.html?id=GOf8EAAAQBAJ&lt;/a>&lt;/p>
&lt;p>Korobkov, M., &amp;amp; Ren, X. (2024). &lt;em>On basic velocity estimates for the plane steady-state Navier-Stokes equations in convex domains&lt;/em>. arXiv:2405.17884. &lt;a href="https://arxiv.org/abs/2405.17884">https://arxiv.org/abs/2405.17884&lt;/a>&lt;/p>
&lt;p>Li, L., Li, Y., &amp;amp; Yan, Y. (2024). Removable singularity of $(-1)$-homogeneous solutions of stationary Navier-Stokes equations. &lt;em>Transactions of the American Mathematical Society&lt;/em>. arXiv:2410.11170. &lt;a href="https://arxiv.org/abs/2410.11170">https://arxiv.org/abs/2410.11170&lt;/a>&lt;/p>
&lt;p>Li, Y., &amp;amp; Yan, Y. (2025). &lt;em>Recent research on $(-1)$-homogeneous solutions of stationary Navier-Stokes equations&lt;/em>. arXiv:2509.07243. &lt;a href="https://arxiv.org/abs/2509.07243">https://arxiv.org/abs/2509.07243&lt;/a>&lt;/p>
&lt;p>Miao, C., Nie, Y., &amp;amp; Ye, W. (2024). &lt;em>Sharp non-uniqueness for the Navier-Stokes equations in the whole space&lt;/em>. arXiv:2412.09637. &lt;a href="https://arxiv.org/abs/2412.09637">https://arxiv.org/abs/2412.09637&lt;/a>&lt;/p>
&lt;p>Tan, W. (2025). &lt;em>New Liouville type theorems for the stationary Navier-Stokes equations&lt;/em>. arXiv:2501.03609. &lt;a href="https://arxiv.org/pdf/2501.03609.pdf">https://arxiv.org/pdf/2501.03609.pdf&lt;/a>&lt;/p>
&lt;p>Tsai, T.-P. (2014). &lt;em>Forward discretely self-similar solutions of the Navier-Stokes equations&lt;/em>. arXiv:1210.2783. &lt;a href="https://arxiv.org/abs/1210.2783">https://arxiv.org/abs/1210.2783&lt;/a>&lt;/p></description></item><item><title>Recent Research Directions in Analysis of PDEs 2021–2026</title><link>http://lnhutnam.github.io/en/posts/recent-pde-2126/</link><pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate><guid>http://lnhutnam.github.io/en/posts/recent-pde-2126/</guid><description>&lt;p>The arXiv section of Analysis of Partial Differential Equations is one of the most prolific areas of pure mathematics, producing over 400 preprints per month as of early 2026. The period 2021–2026 has witnessed landmark breakthroughs — including a computer-assisted proof of finite-time singularity in the 3D Euler equations, the resolution of Hilbert&amp;rsquo;s Sixth Problem via kinetic theory, and the emergence of probabilistic and nonlocal operator methods as dominant paradigms. This survey identifies, categorises, and profiles the key research directions and landmark papers in math.AP during this era.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="overview">
 Overview&lt;span class="heading__anchor"> &lt;a href="#overview">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>The landscape of math.AP in 2021–2026 organises into several major research directions:&lt;/p>
&lt;div class="table-wrapper">&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Direction&lt;/th>
 &lt;th>Landmark Papers&lt;/th>
 &lt;th>Landmark Results&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>Fluid singularity (Euler)&lt;/td>
 &lt;td>Chen &amp;amp; Hou (2022–2023)&lt;/td>
 &lt;td>Finite-time blowup for 3D Euler/2D Boussinesq, smooth data (&lt;em>PNAS&lt;/em> 2025)&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>NS non-uniqueness&lt;/td>
 &lt;td>Albritton, Brué &amp;amp; Colombo (2021)&lt;/td>
 &lt;td>Non-unique Leray–Hopf solutions for forced NS&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Hilbert&amp;rsquo;s 6th Problem&lt;/td>
 &lt;td>Deng, Hani &amp;amp; Ma (2024–2025)&lt;/td>
 &lt;td>Long-time Boltzmann derivation; fluid equations from Newton&amp;rsquo;s laws&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Wave kinetic equation&lt;/td>
 &lt;td>Deng &amp;amp; Hani (2021)&lt;/td>
 &lt;td>Rigorous WKE derivation from cubic NLS&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Mixed local-nonlocal operators&lt;/td>
 &lt;td>Biagi, Dipierro, Valdinoci et al. (2020–2022)&lt;/td>
 &lt;td>Regularity, max. principles, Faber-Krahn inequalities&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Double phase functionals&lt;/td>
 &lt;td>De Filippis &amp;amp; Mingione (2022–2023)&lt;/td>
 &lt;td>Gradient regularity in mixed/double phase settings&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Normalized Schrödinger&lt;/td>
 &lt;td>Wei &amp;amp; Wu (2021); Jeanjean &amp;amp; Le (2020)&lt;/td>
 &lt;td>Critical mass constraints, ground states, NLS&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>MFG inverse problems&lt;/td>
 &lt;td>Imanuvilov, Liu &amp;amp; Yamamoto (2023)&lt;/td>
 &lt;td>Lipschitz stability, Carleman estimates for MFG&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Keller-Segel chemotaxis&lt;/td>
 &lt;td>Li &amp;amp; Winkler (2022); Lyu &amp;amp; Wang (2021)&lt;/td>
 &lt;td>Signal-dependent motility, global regularity&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Stefan/free boundary&lt;/td>
 &lt;td>Ferrari et al. (2024); Arya, Jeon &amp;amp; Julin (2026)&lt;/td>
 &lt;td>$C^{1,\alpha}$ regularity, supercooled Stefan&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Stochastic PDEs&lt;/td>
 &lt;td>Bailleul &amp;amp; Bruned (2021); Bailleul &amp;amp; Hoshino (2025)&lt;/td>
 &lt;td>Renormalisation, regularity structures&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Calderón inverse problem&lt;/td>
 &lt;td>Cârstea, Uhlmann et al. (2021); Krupchyk (2025)&lt;/td>
 &lt;td>Nonlinear and fractional settings&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Dispersive PDEs&lt;/td>
 &lt;td>Deng, Nahmod &amp;amp; Yue (2020); Gubinelli et al. (2025)&lt;/td>
 &lt;td>Random tensors, modulated dispersive equations&lt;/td>
 &lt;/tr>
 &lt;/tbody>
 &lt;/table>
&lt;/div>&lt;hr>
&lt;h2 class="heading" id="background">
 Background&lt;span class="heading__anchor"> &lt;a href="#background">#&lt;/a>&lt;/span>
&lt;/h2>&lt;h3 class="heading" id="the-mathap-landscape">
 The math.AP Landscape&lt;span class="heading__anchor"> &lt;a href="#the-mathap-landscape">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Analysis of PDEs is the mathematical study of equations involving unknown functions and their partial derivatives, arising in physics, geometry, probability, and engineering. The arXiv math.AP category encompasses everything from regularity theory for elliptic and parabolic equations to global well-posedness for dispersive equations, from geometric flows to inverse problems, and from kinetic theory to stochastic PDEs. With roughly 300–400 papers per month (408 in February 2026 alone), it is one of the most active and interconnected areas of pure mathematics.&lt;/p>
&lt;p>The period 2021–2026 is characterised by three broad trends. First, &lt;strong>grand-challenge resolutions&lt;/strong>: several longstanding open problems — including Hilbert&amp;rsquo;s Sixth Problem and the existence of finite-time singularities for 3D Euler equations with smooth data — were settled using novel combinations of rigorous analysis, Feynman-diagram combinatorics, and computer-assisted numerics. Second, &lt;strong>new paradigm emergence&lt;/strong>: mixed local-nonlocal operators, double phase functionals, and normalised solutions have matured from isolated curiosities into systematic research programmes with their own regularity theories. Third, &lt;strong>interdisciplinary expansion&lt;/strong>: MFG systems, optimal transport, SPDEs, and AI-assisted methods have become structural parts of the math.AP ecosystem.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="recent-developments">
 Recent Developments&lt;span class="heading__anchor"> &lt;a href="#recent-developments">#&lt;/a>&lt;/span>
&lt;/h2>&lt;h3 class="heading" id="1-mathematical-fluid-dynamics-singularity-non-uniqueness-and-stability">
 1. Mathematical Fluid Dynamics: Singularity, Non-Uniqueness, and Stability&lt;span class="heading__anchor"> &lt;a href="#1-mathematical-fluid-dynamics-singularity-non-uniqueness-and-stability">#&lt;/a>&lt;/span>
&lt;/h3>&lt;h4 class="heading" id="finite-time-blowup-of-the-3d-euler-equations">
 Finite-Time Blowup of the 3D Euler Equations&lt;span class="heading__anchor"> &lt;a href="#finite-time-blowup-of-the-3d-euler-equations">#&lt;/a>&lt;/span>
&lt;/h4>&lt;p>The question of whether the 3D incompressible Euler equations&lt;/p>
&lt;p>$$\partial_t u + (u \cdot \nabla) u + \nabla p = 0, \qquad \operatorname{div} u = 0,$$&lt;/p>
&lt;p>can develop a singularity from smooth initial data — open since Euler introduced the equations in 1757 — saw a decisive resolution in a bounded-domain setting through a landmark two-part series by &lt;strong>Jiajie Chen and Thomas Y. Hou&lt;/strong> (arXiv:2210.07191, arXiv:2305.05660, &lt;em>PNAS&lt;/em> 2025). Their work proves finite-time, nearly self-similar blowup of both the &lt;strong>2D Boussinesq&lt;/strong> and &lt;strong>3D axisymmetric Euler&lt;/strong> equations with smooth initial data and finite energy in the presence of a solid boundary. The proof employs weighted $L^\infty$ and $C^{1/2}$ norms, sharp functional inequalities inspired by optimal transport, and computer-assisted rigorous numerics to verify nonlinear stability constants. The result was praised as one of the most significant advances in mathematical fluid mechanics in decades.&lt;/p>
&lt;p>Prior to Chen–Hou, &lt;strong>Tarek Elgindi&lt;/strong> (2021) showed finite-time singularity for the 3D axisymmetric Euler equations without swirl from $C^{1,\alpha}$ initial vorticity. Chen and Hou (2021) had earlier proved asymptotically self-similar blowup from smooth data for the Hou-Luo (HL) model. Concurrently, Hou and collaborators presented numerical evidence for singularity in 3D Navier-Stokes achieving a $10^7$-fold increase in maximum vorticity, and DeepMind (2025) used AI-assisted methods to discover families of unstable singularities in the Incompressible Porous Media and Boussinesq equations.&lt;/p>
&lt;h4 class="heading" id="non-uniqueness-of-lerayhopf-solutions-for-navier-stokes">
 Non-Uniqueness of Leray–Hopf Solutions for Navier-Stokes&lt;span class="heading__anchor"> &lt;a href="#non-uniqueness-of-lerayhopf-solutions-for-navier-stokes">#&lt;/a>&lt;/span>
&lt;/h4>&lt;p>A 2021 breakthrough by &lt;strong>Dallas Albritton, Elia Brué, and Maria Colombo&lt;/strong> proved non-uniqueness of Leray–Hopf solutions to the &lt;em>forced&lt;/em> 3D Navier-Stokes equations: they exhibited two distinct Leray solutions with zero initial velocity and identical body force, exploiting the extreme instability of a self-similar background solution. Recognised as the most influential 2021 math.AP paper on arXiv by Paper Digest, the result was subsequently extended to bounded domains via gluing methods (arXiv:2209.03530) and to stochastic settings (&lt;em>Electronic Journal of Probability&lt;/em>, 2024).&lt;/p>
&lt;h4 class="heading" id="stability-of-shear-flows-and-kinetic-theory">
 Stability of Shear Flows and Kinetic Theory&lt;span class="heading__anchor"> &lt;a href="#stability-of-shear-flows-and-kinetic-theory">#&lt;/a>&lt;/span>
&lt;/h4>&lt;p>Parallel to the singularity programme, sharp asymptotic stability results for &lt;strong>2D monotone shear flows&lt;/strong> with no-slip boundary conditions, and extensive work on &lt;strong>inviscid damping&lt;/strong> and enhanced dissipation near shear flows, have appeared throughout 2025–2026.&lt;/p>
&lt;p>Arguably the most monumental result in kinetic PDE theory during this period is due to &lt;strong>Yu Deng, Zaher Hani, and Xiao Ma&lt;/strong>, who provided a rigorous long-time derivation of the Boltzmann equation from hard-sphere dynamics (arXiv:2408.07818, 2024), extending Lanford&amp;rsquo;s 1975 short-time theorem to all times within the lifespan of the Boltzmann solution. In a companion paper (arXiv:2503.01800, 2025), they completed the derivation of the &lt;strong>compressible Euler&lt;/strong> and &lt;strong>incompressible Navier-Stokes-Fourier&lt;/strong> equations from Newton&amp;rsquo;s laws — effectively resolving &lt;strong>Hilbert&amp;rsquo;s Sixth Problem&lt;/strong> for rarefied hard-sphere gases. The proof uses cumulant ansätze, Feynman-diagram combinatorics, and a molecule-reduction algorithm. This followed Deng and Hani&amp;rsquo;s 2021 derivation of the &lt;strong>wave kinetic equation&lt;/strong> from the cubic NLS.&lt;/p>
&lt;h3 class="heading" id="2-nonlocal-and-fractional-pdes-mixed-local-nonlocal-operators">
 2. Nonlocal and Fractional PDEs: Mixed Local-Nonlocal Operators&lt;span class="heading__anchor"> &lt;a href="#2-nonlocal-and-fractional-pdes-mixed-local-nonlocal-operators">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>One of the dominant new paradigms of the 2020s is the study of operators of the form&lt;/p>
&lt;p>$$\mathcal{L} u = -\Delta u + (-\Delta)^s u, \quad s \in (0,1),$$&lt;/p>
&lt;p>which superpose a classical Laplacian with a fractional (nonlocal) Laplacian. These arise naturally in models combining Brownian and Lévy diffusion processes. The foundational paper by &lt;strong>Biagi, Dipierro, Valdinoci, and Vecchi&lt;/strong> (2020/2021) initiated a systematic theory of regularity and maximum principles for such operators.&lt;/p>
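&lt;p>As a concrete illustration of the operator above (a minimal numerical sketch, not taken from any of the cited papers): on a periodic grid, $-\Delta + (-\Delta)^s$ acts in Fourier space by multiplication with the symbol $|k|^2 + |k|^{2s}$, so it can be applied with two FFTs. The grid size and the choice $s = 0.5$ below are illustrative assumptions.&lt;/p>

```python
import numpy as np

def mixed_operator(u, dx, s):
    # Apply L u = -Laplacian u + (-Laplacian)^s u on a periodic grid
    # via the Fourier symbol k^2 + |k|^(2s).
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    symbol = k**2 + np.abs(k) ** (2.0 * s)
    return np.real(np.fft.ifft(symbol * np.fft.fft(u)))

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x)                                   # pure mode with wavenumber 1
Lu = mixed_operator(u, x[1] - x[0], s=0.5)
# For wavenumber 1 the symbol equals 1 + 1 = 2, so Lu = 2 sin(x).
```

&lt;p>Eigenfunctions of the Laplacian are also eigenfunctions of the mixed operator, which is what the single-mode check above exploits.&lt;/p>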
&lt;p>Between 2021 and 2026 an explosion of activity produced: gradient regularity for mixed local-nonlocal problems by De Filippis and Mingione (2022, minimisers of mixed functionals are locally $C^{1,\beta}$-regular); Hölder regularity for mixed local-nonlocal degenerate elliptic equations (Garain &amp;amp; Lindgren, 2022); the Wiener criterion for nonlocal Dirichlet problems (Kim, Lee &amp;amp; Lee, 2022); and a Faber-Krahn inequality for mixed operators (Biagi, Dipierro, Valdinoci &amp;amp; Vecchi, 2021). &lt;strong>Serena Dipierro&lt;/strong> and &lt;strong>Enrico Valdinoci&lt;/strong> were among the most prolific contributors, publishing on nonlocal logistic equations with Neumann conditions, ecological niches for mixed dispersal, and Sobolev inequalities for mixed operators.&lt;/p>
&lt;p>&lt;strong>Giovanni Leoni&amp;rsquo;s&lt;/strong> 2023 treatise &lt;em>A First Course in Fractional Sobolev Spaces&lt;/em> provided a self-contained reference covering definitions, embeddings, Hardy inequalities, and interpolation inequalities, and ranked among the most-cited arXiv math.AP papers of 2023. Concurrently, a 2025 paper established well-posedness and regularity theory for time-fractional stochastic PDEs involving Caputo derivatives and general nonlocal operators driven by Gaussian and Lévy noise (arXiv:2512.03754).&lt;/p>
&lt;h3 class="heading" id="3-double-phase-operators-and-nonstandard-growth">
 3. Double Phase Operators and Nonstandard Growth&lt;span class="heading__anchor"> &lt;a href="#3-double-phase-operators-and-nonstandard-growth">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The &lt;strong>double phase functional&lt;/strong>&lt;/p>
&lt;p>$$\mathcal{H}(u) := \int_\Omega \bigl(|Du|^p + a(x)|Du|^q\bigr)\,dx, \quad q &amp;gt; p &amp;gt; 1,\ a(x) \geq 0,$$&lt;/p>
&lt;p>introduced by Colombo and Mingione, generated a remarkable surge of activity throughout 2021–2026.&lt;/p>
&lt;div class="table-wrapper">&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Year&lt;/th>
 &lt;th>Paper&lt;/th>
 &lt;th>Authors&lt;/th>
 &lt;th>Key Contribution&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>2021&lt;/td>
 &lt;td>A new class of double phase variable exponent problems&lt;/td>
 &lt;td>Crespo-Blanco, Gasiński, Harjulehto, Winkert&lt;/td>
 &lt;td>Existence/uniqueness for new double phase with variable exponents&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>2021&lt;/td>
 &lt;td>Double phase implicit obstacle problems&lt;/td>
 &lt;td>Zeng, Rădulescu, Winkert&lt;/td>
 &lt;td>Mixed BVPs with convection and multivalued conditions&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>2022&lt;/td>
 &lt;td>Nonuniformly elliptic Schauder theory&lt;/td>
 &lt;td>De Filippis, Mingione&lt;/td>
 &lt;td>Schauder estimates in nonuniform elliptic settings&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>2022&lt;/td>
 &lt;td>New embedding results for double phase problems&lt;/td>
 &lt;td>Ho, Winkert&lt;/td>
 &lt;td>Musielak-Orlicz Sobolev spaces with variable exponent&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>2023&lt;/td>
 &lt;td>Regularity at nearly linear growth&lt;/td>
 &lt;td>De Filippis, Mingione&lt;/td>
 &lt;td>Hölder gradient regularity for log-type functionals&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>2025&lt;/td>
 &lt;td>Partial regularity for parabolic double phase systems&lt;/td>
 &lt;td>Ok, Scilla, Stroffolini&lt;/td>
 &lt;td>Partial Hölder regularity for parabolic systems&lt;/td>
 &lt;/tr>
 &lt;/tbody>
 &lt;/table>
&lt;/div>&lt;p>The work of &lt;strong>Cristiana De Filippis&lt;/strong> and &lt;strong>Giuseppe Mingione&lt;/strong> is particularly prominent throughout, providing a comprehensive regularity theory for double phase and nonuniformly elliptic functionals (arXiv:2308.10222).&lt;/p>
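&lt;p>For intuition about the functional $\mathcal{H}$, here is a hedged numerical sketch (not from the cited papers) that evaluates the double phase energy of a given function on a 1D grid. The choices $u(x) = x$, $a(x) = x$, $p = 2$, $q = 3$ are illustrative; the weight $a$ vanishing at $x = 0$ mimics the degenerate region where the functional exhibits only $p$-growth.&lt;/p>

```python
import numpy as np

def double_phase_energy(u, a, dx, p, q):
    # Numerically evaluate the double phase functional
    #   H(u) = integral of |Du|^p + a(x) |Du|^q
    # on a uniform 1D grid, using one-sided differences for Du.
    du = np.diff(u) / dx
    integrand = np.abs(du) ** p + a[:-1] * np.abs(du) ** q
    return np.sum(integrand) * dx

n = 1001
x = np.linspace(0.0, 1.0, n)
u = x.copy()          # |Du| = 1 everywhere
a = x.copy()          # weight a(x) = x, degenerate at x = 0
dx = x[1] - x[0]
E = double_phase_energy(u, a, dx, p=2, q=3)
# Continuum value: integral of 1 + x over [0, 1], i.e. 1.5.
```

&lt;p>The discrete energy converges to the continuum value as the grid is refined; here the error is of order $dx$.&lt;/p>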
&lt;h3 class="heading" id="4-normalized-solutions-and-variational-methods-for-schrödinger-equations">
 4. Normalized Solutions and Variational Methods for Schrödinger Equations&lt;span class="heading__anchor"> &lt;a href="#4-normalized-solutions-and-variational-methods-for-schr%c3%b6dinger-equations">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The problem of finding solutions $u \in H^1(\mathbb{R}^N)$ with prescribed $L^2$-norm — the &lt;em>mass constraint&lt;/em>&lt;/p>
&lt;p>$$\int_{\mathbb{R}^N} |u|^2\,dx = c$$&lt;/p>
&lt;p>— has become a central theme in the study of nonlinear Schrödinger equations. The influential papers by &lt;strong>Louis Jeanjean and Thanh Trung Le&lt;/strong> on multiple normalized solutions for Sobolev critical equations (2020–2021) and by &lt;strong>Juncheng Wei and Yuanze Wu&lt;/strong> on normalized solutions with critical Sobolev exponent and mixed nonlinearities (2021) launched a wave of activity. Key directions include: normalized ground states for NLS with potential (Bartsch, Molle, Rizzi &amp;amp; Verzini); normalized solutions for Schrödinger-Poisson-Slater equations; and standing waves and stability for &lt;strong>Choquard equations&lt;/strong>. The March 2026 arXiv listings confirm that sharp exponents, existence and asymptotics for Choquard equations, and boosted ground states for pseudo-relativistic Schrödinger equations remain highly active.&lt;/p>
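&lt;p>Numerically, normalized solutions are often approximated by a projected gradient flow: descend the energy, then rescale the iterate back onto the mass sphere after each step. The following toy 1D periodic sketch (all parameters illustrative, focusing power nonlinearity with $p = 4$) is a caricature of that heuristic, not an implementation from the cited papers.&lt;/p>

```python
import numpy as np

def normalized_gradient_flow(n=128, c=1.0, p=4, dt=1e-4, steps=2000):
    # Toy projected gradient flow for the NLS energy
    #   E(u) = 0.5 * int |u_x|^2 - (1/p) * int |u|^p
    # on a periodic interval: one explicit descent step, then rescale u
    # so the mass constraint int u^2 dx = c holds exactly after each step.
    L = 2.0 * np.pi
    dx = L / n
    x = np.linspace(0.0, L, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    u = 1.0 + 0.1 * np.cos(x)                          # rough initial guess
    for _ in range(steps):
        lap = np.real(np.fft.ifft(-(k**2) * np.fft.fft(u)))
        u = u + dt * (lap + np.abs(u) ** (p - 2) * u)  # descend the energy
        u = u * np.sqrt(c / (np.sum(u**2) * dx))       # restore the L2 mass
    return x, u, dx

x, u, dx = normalized_gradient_flow()
# The iterate carries the prescribed mass: sum(u**2) * dx equals c
# up to floating point rounding.
```

&lt;p>The explicit step size must resolve the stiffest Fourier mode; an implicit or semi-implicit treatment of the Laplacian would allow much larger steps.&lt;/p>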
&lt;p>Parallel work on eigenvalue problems addresses &lt;strong>Steklov eigenvalues&lt;/strong> (monotonicity for regular $N$-gons, sharp geometric bounds), eigenvalues of &lt;strong>Pucci&amp;rsquo;s extremal operator&lt;/strong> in 3D, and &lt;strong>biharmonic Steklov problems&lt;/strong> on thin sets.&lt;/p>
&lt;h3 class="heading" id="5-mean-field-games-and-aggregation-diffusion-pdes">
 5. Mean Field Games and Aggregation-Diffusion PDEs&lt;span class="heading__anchor"> &lt;a href="#5-mean-field-games-and-aggregation-diffusion-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>&lt;strong>Mean field game theory&lt;/strong> generated a prolific suite of PDE questions between 2021 and 2026. Highlights include: Imanuvilov, Liu, and Yamamoto (2023) proving Lipschitz stability for determining states and inverse sources in MFG equations using Carleman estimates; Klibanov, Li, and Liu (2023) on Hölder stability via Carleman estimates; the inverse boundary problem for first-order master equations (Liu &amp;amp; Zhang, 2022); and Bresch, Jabin, and Soler (2022) introducing a novel probabilistic derivation of the mean-field limit applicable to Vlasov-Poisson-Fokker-Planck in 2D. By 2025–2026, nonlocal MFG models with spatial interactions and new work on &lt;strong>Wasserstein gradient flows of kernel mean discrepancies&lt;/strong> with connections to machine learning appeared on arXiv (arXiv:2506.01200).&lt;/p>
&lt;p>&lt;strong>Optimal transport&lt;/strong> has deeply influenced aggregation-diffusion equations and gradient flows. The March 2026 arXiv listings include a major 73-page paper by &lt;strong>Carrillo, Gwiazda, and Skrzeczkowski&lt;/strong> presenting a new formula for the Wasserstein distance between solutions to nonlinear continuity equations.&lt;/p>
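&lt;p>In one space dimension the Wasserstein distance admits a closed quantile formula, which makes a small self-contained check possible. The sketch below (illustrative only, unrelated to the Carrillo-Gwiazda-Skrzeczkowski formula) computes $W_1$ between two empirical measures by sorting samples.&lt;/p>

```python
import numpy as np

def w1_distance_1d(a, b):
    # In 1D, the L1 Wasserstein distance between two empirical measures
    # with equally many atoms is the mean absolute difference of the
    # sorted samples (the quantile-coupling formula).
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10000)
b = rng.normal(2.0, 1.0, 10000)
w = w1_distance_1d(a, b)
# w is close to 2.0, the translation distance between the two Gaussians.
```

&lt;p>For translated copies of the same distribution, $W_1$ equals the size of the shift, which the empirical estimate recovers up to sampling error.&lt;/p>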
&lt;h3 class="heading" id="6-chemotaxis-and-reaction-diffusion-systems">
 6. Chemotaxis and Reaction-Diffusion Systems&lt;span class="heading__anchor"> &lt;a href="#6-chemotaxis-and-reaction-diffusion-systems">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Chemotaxis systems — in particular Keller-Segel models with &lt;strong>signal-dependent motility&lt;/strong> (density-suppressed diffusion) — generated intense activity. Key papers include logistic damping effects and global classical solutions for reaction-diffusion systems with density-suppressed motility (Lyu &amp;amp; Wang, 2021), refined regularity analysis for Keller-Segel-consumption systems (Li &amp;amp; Winkler, 2022), and global existence with uniform boundedness under signal-dependent motility (Jiang &amp;amp; Laurençot, 2021). In 2024, a construction of smooth finite-time blowup solutions for the &lt;strong>3D Keller-Segel-Navier-Stokes&lt;/strong> (chemotaxis-fluid) system with buoyancy appeared, using a quantitative method that directly constructs the singular solution (arXiv:2404.17228).&lt;/p>
&lt;p>In parallel, &lt;strong>free boundary reaction-diffusion models&lt;/strong> for species spreading and SIS epidemic models — including 2026 work on asymmetric kernels in advective periodic environments — continue to produce threshold and long-time dynamics results.&lt;/p>
&lt;h3 class="heading" id="7-free-boundary-problems">
 7. Free Boundary Problems&lt;span class="heading__anchor"> &lt;a href="#7-free-boundary-problems">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The Stefan problem (modelling solidification and melting) remained highly active throughout 2021–2026. Key results include $C^{1,\alpha}$ regularity of flat free boundaries for the &lt;strong>inhomogeneous one-phase Stefan problem&lt;/strong> (Ferrari, Forcillo, Giovagnoli &amp;amp; Jesus, 2024; arXiv:2404.07535); regularity of the free boundary for the &lt;strong>supercooled Stefan problem&lt;/strong> in arbitrary dimensions (2025; arXiv:2512.10136), where the free boundary decomposes into regular, singular, and jump parts with the singular part having controlled parabolic dimension; and well-posedness and regularity of physical solutions for the supercooled Stefan problem assuming only integrable initial temperature, with explicit classification of free boundary points (2025; arXiv:2506.18741). These results use obstacle problem techniques, non-degeneracy estimates, and sharp free boundary classification arguments.&lt;/p>
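&lt;p>Since several of these results rest on obstacle problem techniques, a minimal discrete example may help: projected Gauss-Seidel for the 1D obstacle problem, where the contact set of the computed solution plays the role of the free boundary. The parabolic obstacle below is a hypothetical choice for illustration only.&lt;/p>

```python
import numpy as np

def obstacle_psor(psi, sweeps=5000):
    # Projected Gauss-Seidel for the discrete 1D obstacle problem:
    # minimise the Dirichlet energy over functions above the obstacle psi
    # with zero boundary values.  Each node takes the unconstrained
    # harmonic average, then is projected onto the constraint u >= psi.
    u = np.zeros_like(psi)
    for _ in range(sweeps):
        for i in range(1, psi.size - 1):
            u[i] = max(psi[i], 0.5 * (u[i - 1] + u[i + 1]))
    return u

n = 101
x = np.linspace(0.0, 1.0, n)
psi = 0.3 - 2.0 * (x - 0.5) ** 2   # hypothetical parabolic obstacle
u = obstacle_psor(psi)
# u agrees with psi on a contact set in the middle of the interval; the
# points separating contact from non-contact are the discrete free boundary.
```

&lt;p>The projection step enforces the constraint at every sweep, so the iterate is admissible throughout; convergence of the sweeps then yields the discrete complementarity conditions.&lt;/p>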
&lt;p>Shape optimisation for &lt;strong>principal eigenvalues of Pucci operators&lt;/strong> and $\Gamma$-convergence of convolution-type functionals for free discontinuity problems are active related directions in 2026.&lt;/p>
&lt;h3 class="heading" id="8-stochastic-pdes-and-regularity-structures">
 8. Stochastic PDEs and Regularity Structures&lt;span class="heading__anchor"> &lt;a href="#8-stochastic-pdes-and-regularity-structures">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>Martin Hairer&amp;rsquo;s theory of regularity structures generated deep ongoing activity. The period 2021–2026 saw Bailleul and Bruned (2021) extending the algebraic renormalisation framework of regularity structures to a broader class of singular SPDEs (arXiv:2101.11949); the publication of &lt;strong>&amp;ldquo;A tourist&amp;rsquo;s guide to regularity structures&amp;rdquo;&lt;/strong> by Bailleul and Hoshino (2025/2026) in &lt;em>EMS Surveys&lt;/em> as an essentially self-contained treatment; applications to stochastic quantisation ($\Phi^4_3$), the &lt;strong>KPZ equation&lt;/strong>, and stochastic geometric flows (Hairer, 2021); and variance renormalisation in regularity structures for the 2D generalised Parabolic Anderson Model (Gerencsér &amp;amp; Hsu, 2026).&lt;/p>
&lt;p>On the fluid side, global unique solvability for &lt;strong>stochastic Navier-Stokes-Korteweg&lt;/strong> equations and &lt;strong>stochastic Allen-Cahn-Navier-Stokes&lt;/strong> systems with ergodic invariant measures appeared in 2025, and non-uniqueness of Leray-Hopf solutions was extended to the stochastic forced setting.&lt;/p>
&lt;h3 class="heading" id="9-dispersive-pdes-wave-turbulence-well-posedness-and-blowup">
 9. Dispersive PDEs: Wave Turbulence, Well-Posedness, and Blowup&lt;span class="heading__anchor"> &lt;a href="#9-dispersive-pdes-wave-turbulence-well-posedness-and-blowup">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The &lt;strong>full derivation of the wave kinetic equation&lt;/strong> from the cubic NLS by Deng and Hani (arXiv:1912.09518, 2021) was the most impactful dispersive result of the era. Their analysis relies on absolutely convergent Feynman-diagram (paired-tree) expansions and identifies favourable scaling laws $\alpha \sim L^{-\varepsilon}$ for the kinetic limit.&lt;/p>
&lt;p>Ongoing work includes polynomial growth of Sobolev norms for the fractional NLS on $\mathbb{T}^d$ (Wang, 2026); low-regularity global well-posedness for generalised Zakharov-Kuznetsov equations (Nowicki-Koth, 2026); &lt;strong>modulated dispersive equations&lt;/strong> (modulated KdV with normal form reduction; Gubinelli, Li, Li &amp;amp; Oh, 2025; arXiv:2505.24270); and probabilistic well-posedness of dispersive PDEs beyond variance blowup (2025; arXiv:2509.02344). Scattering results for the quintic generalised Benjamin-Bona-Mahony equation and the 3D Zakharov-Kuznetsov equation, and long-time asymptotics via Riemann-Hilbert and inverse scattering methods for integrable equations, appear in the March 2026 listings.&lt;/p>
&lt;h3 class="heading" id="10-geometric-pdes">
 10. Geometric PDEs&lt;span class="heading__anchor"> &lt;a href="#10-geometric-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>&lt;strong>Ricci flow&lt;/strong> uniqueness in the non-compact setting (Lee, 2025; arXiv:2503.20292) and a new non-Kähler expanding Ricci soliton construction with Kähler tangent cone at infinity (Bamler, Chen &amp;amp; Conlon, 2026) reflect the continued health of geometric flows. Regularity for the &lt;strong>volume-preserving mean curvature flow&lt;/strong> in dimensions 2 and 3 appeared in March 2026 (Arya, Jeon &amp;amp; Julin).&lt;/p>
&lt;p>Regularity theory for &lt;strong>Monge-Ampère equations&lt;/strong> received major contributions via a geometric approach: Brendle, Léger, McCann, and Rankin (2023; arXiv:2311.10208) derived the Pogorelov second-derivative bound using Kim-McCann-Warren&amp;rsquo;s pseudo-Riemannian geometry, providing a new approach to $C^1$ estimates for optimal transport maps. Liouville theorems and sharp solvability for the &lt;strong>parabolic Monge-Ampère equation&lt;/strong> with periodic data appeared in March 2026.&lt;/p>
&lt;h3 class="heading" id="11-inverse-problems-for-pdes">
 11. Inverse Problems for PDEs&lt;span class="heading__anchor"> &lt;a href="#11-inverse-problems-for-pdes">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>The &lt;strong>Calderón problem&lt;/strong> — recovering a coefficient from boundary Dirichlet-to-Neumann data — attracted major advances: the quasilinear setting (Cârstea, Feizmohammadi, Kian, Krupchyk &amp;amp; Uhlmann, 2021), inverse problems for fractional semilinear elliptic equations (Lai &amp;amp; Lin, 2020), the Calderón problem via Vekua theory (Clifford analysis framework, 2026; arXiv:2601.17313), and the convex lifting approach (Alberti, Petit &amp;amp; Sanna, 2025; arXiv:2507.00645). The &lt;strong>anisotropic Calderón problem&lt;/strong> for fractional Schrödinger operators on closed Riemannian manifolds (Krupchyk, 2025) was an important further advance.&lt;/p>
&lt;p>&lt;strong>Inverse moving source problems for parabolic equations&lt;/strong> (Zhao, 2023), reconstruction of scalar parameters in subdiffusion, and inverse problems for &lt;strong>multi-term time-fractional diffusion&lt;/strong> with Caputo derivatives are active in 2025–2026.&lt;/p>
&lt;h3 class="heading" id="12-semi-classical-analysis-spectral-theory-and-nonlinear-elliptic-theory">
 12. Semi-Classical Analysis, Spectral Theory, and Nonlinear Elliptic Theory&lt;span class="heading__anchor"> &lt;a href="#12-semi-classical-analysis-spectral-theory-and-nonlinear-elliptic-theory">#&lt;/a>&lt;/span>
&lt;/h3>&lt;p>A 2024 arXiv survey on &lt;strong>semi-classical analysis&lt;/strong> introducing three representative topics, which Paper Digest ranked the top 2024 math.AP paper, and a 2026 paper celebrating the &lt;strong>100th anniversary of the WKB papers&lt;/strong> (Vũ Ngọc), indicate that semi-classical methods remain foundational.&lt;/p>
&lt;p>In nonlinear elliptic and parabolic theory, major contributions include: &lt;em>Regularity Theory for Elliptic PDEs&lt;/em> by Fernández-Real and Ros-Oton (2023), a comprehensive self-contained reference; Fujita-type results for degenerate parabolic equations on &lt;strong>Heisenberg groups&lt;/strong> (Fino, Ruzhansky &amp;amp; Torebek, 2023), ranked the highest-impact 2023 math.AP paper; and singularity formation for nonlinear heat equations on infinite graphs (Punko &amp;amp; Zucchero, 2026).&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="emerging-and-cross-cutting-themes-20252026">
 Emerging and Cross-Cutting Themes (2025–2026)&lt;span class="heading__anchor"> &lt;a href="#emerging-and-cross-cutting-themes-20252026">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>&lt;strong>Computer-assisted proofs and rigorous numerics.&lt;/strong> The Chen–Hou Euler blowup proof and related work on the Constantin-Lax-Majda (CLM) model (Hou-Wang, 2026) demonstrate that computer-assisted methods with rigorous error control are becoming standard for complex nonlinear stability analyses. These methods combine spectral Galerkin approximations with interval arithmetic and weighted norm frameworks to certify nonlinear stability constants — a methodology likely to expand further.&lt;/p>
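&lt;p>The flavour of such certification can be conveyed by a toy example (entirely illustrative, far simpler than the weighted norm frameworks in the actual proofs): bound a hypothetical stability constant $K = 0.3 + \sqrt{0.2}$ strictly below 1 using exact rational arithmetic, so that no floating point rounding can invalidate the inequality.&lt;/p>

```python
from fractions import Fraction

# Toy certification in the spirit of computer-assisted blowup proofs:
# show the hypothetical "nonlinear stability constant" K = 0.3 + sqrt(0.2)
# is strictly below 1, with every step verified in exact arithmetic.
r = Fraction(4473, 10000)          # candidate upper bound for sqrt(1/5)
assert r * r >= Fraction(1, 5)     # proves sqrt(1/5) is at most r
K_upper = Fraction(3, 10) + r      # rigorous upper bound for K
margin = 1 - K_upper               # exact rational: 2527/10000
# margin is positive, so K is certified strictly below 1.
```

&lt;p>Production-grade proofs replace the exact rationals with outward-rounded interval arithmetic, which scales to the thousands of coefficient bounds arising from spectral discretisations.&lt;/p>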
&lt;p>&lt;strong>AI and machine learning for PDEs.&lt;/strong> The 2026 workshop &lt;em>MLPDES26&lt;/em> and the NSF/AMS report on AI for the mathematical sciences signal growing interplay between pure math.AP and deep learning. Neural PDE networks for equation discovery (arXiv:2502.18377), geometric operator learning via optimal transport (arXiv:2507.20065), and AI-assisted singularity discovery (DeepMind, 2025) represent this interdisciplinary frontier.&lt;/p>
&lt;p>&lt;strong>PDE methods in geometry and probability.&lt;/strong> The intersection of math.AP with differential geometry, probability (SPDEs), and mathematical physics remains extremely active. The March 2026 listings span general relativity (tensorial wave equations), Kähler geometry (Ricci solitons), and stochastic PDEs — confirming that math.AP functions as a hub connecting multiple mathematical disciplines.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="open-problems">
 Open Problems&lt;span class="heading__anchor"> &lt;a href="#open-problems">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>&lt;strong>Smooth-data Euler regularity beyond bounded domains.&lt;/strong> The Chen–Hou result proves blowup in a bounded domain. Whether finite-time singularity occurs for the 3D Euler equations in all of $\mathbb{R}^3$ from smooth, rapidly decaying initial data — the original Euler problem — remains open.&lt;/p>
&lt;p>&lt;strong>Navier-Stokes uniqueness from smooth initial data.&lt;/strong> The Albritton-Brué-Colombo result proves non-uniqueness for &lt;em>forced&lt;/em> NS from zero initial velocity. Non-uniqueness (or uniqueness) of Leray–Hopf solutions for the &lt;em>unforced&lt;/em> equations from smooth $H^1$ initial data is unresolved (see the companion survey on self-similar solutions).&lt;/p>
&lt;p>&lt;strong>Optimal regularity theory for double phase problems.&lt;/strong> Despite the comprehensive work of De Filippis and Mingione, optimal Schauder estimates for parabolic double phase systems at the boundary and under critical growth conditions are not fully established.&lt;/p>
&lt;p>&lt;strong>Complete derivation programme for Hilbert&amp;rsquo;s Sixth Problem.&lt;/strong> Deng-Hani-Ma resolved the case of hard-sphere gases in the Boltzmann regime. The derivation of hydrodynamic equations from particle dynamics in other regimes — dense gases, quantum systems, plasma — remains largely open.&lt;/p>
&lt;p>&lt;strong>Global well-posedness for energy-critical NLS in high dimensions.&lt;/strong> Despite progress on wave kinetic theory and probabilistic well-posedness, the deterministic global well-posedness theory for energy-critical and supercritical dispersive equations in dimensions $d \geq 5$ has significant gaps.&lt;/p>
&lt;p>&lt;strong>Quantum and numerical computation in pure math.AP.&lt;/strong> The growing use of computer-assisted proofs raises methodological questions about standards of verification, reproducibility, and the scope of problems accessible to these techniques.&lt;/p>
&lt;hr>
&lt;h2 class="heading" id="references">
 References&lt;span class="heading__anchor"> &lt;a href="#references">#&lt;/a>&lt;/span>
&lt;/h2>&lt;p>Albritton, D., Brué, E., &amp;amp; Colombo, M. (2021). &lt;em>Non-uniqueness of Leray solutions of the forced Navier-Stokes equations&lt;/em>. &lt;a href="https://cvgmt.sns.it/media/doc/paper/5405/main.pdf">https://cvgmt.sns.it/media/doc/paper/5405/main.pdf&lt;/a>&lt;/p>
&lt;p>Bailleul, I., &amp;amp; Bruned, Y. (2021). &lt;em>Renormalised singular stochastic PDEs&lt;/em>. arXiv:2101.11949. &lt;a href="https://www.pure.ed.ac.uk/ws/portalfiles/portal/194767736/2101.11949.pdf">https://www.pure.ed.ac.uk/ws/portalfiles/portal/194767736/2101.11949.pdf&lt;/a>&lt;/p>
&lt;p>Bailleul, I., &amp;amp; Hoshino, M. (2025). A tourist&amp;rsquo;s guide to regularity structures and singular stochastic PDEs. &lt;em>EMS Surveys in Mathematical Sciences&lt;/em>. &lt;a href="https://ems.press/journals/emss/articles/14298505">https://ems.press/journals/emss/articles/14298505&lt;/a>&lt;/p>
&lt;p>Brendle, S., Léger, F., McCann, R. J., &amp;amp; Rankin, C. (2023). &lt;em>A geometric approach to a priori estimates for optimal transport maps&lt;/em>. arXiv:2311.10208. &lt;a href="https://arxiv.org/abs/2311.10208">https://arxiv.org/abs/2311.10208&lt;/a>&lt;/p>
&lt;p>Chen, J., &amp;amp; Hou, T. Y. (2022). &lt;em>Stable nearly self-similar blowup of the 2D Boussinesq and 3D Euler equations with smooth data I: Analysis&lt;/em>. arXiv:2210.07191. &lt;a href="https://arxiv.org/abs/2210.07191">https://arxiv.org/abs/2210.07191&lt;/a>&lt;/p>
&lt;p>Chen, J., &amp;amp; Hou, T. Y. (2023). &lt;em>Stable nearly self-similar blowup of the 2D Boussinesq and 3D Euler equations with smooth data II: Rigorous numerics&lt;/em>. arXiv:2305.05660. &lt;a href="https://arxiv.org/abs/2305.05660">https://arxiv.org/abs/2305.05660&lt;/a>&lt;/p>
&lt;p>Chen, J., &amp;amp; Hou, T. Y. (2025). Singularity formation in 3D Euler equations with smooth initial data. &lt;em>PNAS, 122&lt;/em>(28). &lt;a href="https://www.pnas.org/doi/10.1073/pnas.2500940122">https://www.pnas.org/doi/10.1073/pnas.2500940122&lt;/a>&lt;/p>
&lt;p>De Filippis, C., &amp;amp; Mingione, G. (2023). &lt;em>Regularity for double phase problems at nearly linear growth&lt;/em>. arXiv:2308.10222. &lt;a href="https://arxiv.org/abs/2308.10222">https://arxiv.org/abs/2308.10222&lt;/a>&lt;/p>
&lt;p>DeepMind. (2025). &lt;em>Discovering new solutions to century-old problems in fluid dynamics&lt;/em>. &lt;a href="https://deepmind.google/blog/discovering-new-solutions-to-century-old-problems-in-fluid-dynamics/">https://deepmind.google/blog/discovering-new-solutions-to-century-old-problems-in-fluid-dynamics/&lt;/a>&lt;/p>
&lt;p>Deng, Y., &amp;amp; Hani, Z. (2021). &lt;em>On the derivation of the wave kinetic equation for NLS&lt;/em>. arXiv:1912.09518. &lt;a href="http://arxiv.org/pdf/1912.09518.pdf">http://arxiv.org/pdf/1912.09518.pdf&lt;/a>&lt;/p>
&lt;p>Deng, Y., Hani, Z., &amp;amp; Ma, X. (2024). &lt;em>Long time derivation of the Boltzmann equation from hard sphere dynamics&lt;/em>. arXiv:2408.07818. &lt;a href="https://www.semanticscholar.org/paper/91b67412a6058c1ace054a32fbf36fa2d2998d3d">https://www.semanticscholar.org/paper/91b67412a6058c1ace054a32fbf36fa2d2998d3d&lt;/a>&lt;/p>
&lt;p>Deng, Y., Hani, Z., &amp;amp; Ma, X. (2025). &lt;em>Hilbert&amp;rsquo;s sixth problem: Derivation of fluid equations via Boltzmann&amp;rsquo;s kinetic theory&lt;/em>. arXiv:2503.01800. &lt;a href="https://www.semanticscholar.org/paper/01d8f11b5d31f7037fb4914797e938db11d76ec5">https://www.semanticscholar.org/paper/01d8f11b5d31f7037fb4914797e938db11d76ec5&lt;/a>&lt;/p>
&lt;p>Ferrari, F., Forcillo, N., Giovagnoli, D., &amp;amp; Jesus, B. (2024). &lt;em>Free boundary regularity for the inhomogeneous one-phase Stefan problem&lt;/em>. arXiv:2404.07535. &lt;a href="https://arxiv.org/abs/2404.07535">https://arxiv.org/abs/2404.07535&lt;/a>&lt;/p>
&lt;p>Gubinelli, M., Li, J., Li, T., &amp;amp; Oh, T. (2025). &lt;em>Nonlinear PDEs with modulated dispersion IV: Normal form reduction for modulated KdV&lt;/em>. arXiv:2505.24270. &lt;a href="https://arxiv.org/pdf/2505.24270.pdf">https://arxiv.org/pdf/2505.24270.pdf&lt;/a>&lt;/p>
&lt;p>Hou, T. Y. (2021). &lt;em>The potentially singular behavior of the 3D Navier-Stokes equations&lt;/em>. arXiv:2107.06509. &lt;a href="https://arxiv.org/abs/2107.06509">https://arxiv.org/abs/2107.06509&lt;/a>&lt;/p>
&lt;p>Hu, J., Jin, S., Liu, N., &amp;amp; Zhang, L. (2024). Quantum circuits for partial differential equations via Schrödingerisation. &lt;em>Quantum, 8&lt;/em>, 1563.&lt;/p>
&lt;p>Imanuvilov, O. Y., Liu, Y., &amp;amp; Yamamoto, M. (2023). Lipschitz stability for determining states and inverse sources in MFG equations. &lt;em>[Journal of Mathematical Analysis]&lt;/em>.&lt;/p>
&lt;p>Ok, J., Scilla, G., &amp;amp; Stroffolini, B. (2025). &lt;em>Partial regularity for parabolic systems of double phase type&lt;/em>. arXiv:2510.03849. &lt;a href="https://arxiv.org/pdf/2510.03849.pdf">https://arxiv.org/pdf/2510.03849.pdf&lt;/a>&lt;/p>
&lt;p>Paper Digest. (2025, March). &lt;em>Most influential arXiv (Analysis of PDEs) papers — 2025-03 version&lt;/em>. &lt;a href="https://www.paperdigest.org/2025/03/most-influential-arxiv-analysis-of-pdes-papers-2025-03-version/">https://www.paperdigest.org/2025/03/most-influential-arxiv-analysis-of-pdes-papers-2025-03-version/&lt;/a>&lt;/p>
&lt;p>Segata, J., &amp;amp; Chen, M. (2026). &lt;em>Scattering for the 3D Zakharov-Kuznetsov equation&lt;/em> [arXiv preprint]. arXiv math.AP March 2026.&lt;/p>
&lt;p>arXiv math.AP listings. (2026, February–March). &lt;a href="https://arxiv.org/list/math.AP/2026-03">https://arxiv.org/list/math.AP/2026-03&lt;/a>&lt;/p></description></item></channel></rss>