Nam Le

Recent Advances in Numerical PDEs


Numerical methods for partial differential equations (PDEs) have entered a period of rapid transformation, driven by two converging forces: deep learning’s maturation as a tool for high-dimensional function approximation, and the resurgence of classical methods augmented by machine learning. The field broadly divides into physics-informed machine learning, neural operator learning, foundation models for PDEs, and the continuing evolution of classical high-order, structure-preserving, and data-driven discovery methods. Quantum computing and laser-based hardware solvers are also beginning to enter the landscape. This survey organises the most active research fronts, highlights landmark and recent key papers, and identifies open problems as of early 2026.


Overview #

The table below summarises the major approaches covered in this survey, their representative key papers, and their current status.

| Approach | Representative Key Papers | Status |
|---|---|---|
| PINNs (adaptive/staged training) | Raissi et al. (2019); IEEE 2025 staged training; PhysicsNeMo/Modulus | Production-ready |
| KANs for PDEs | Liu et al. (2024, ICLR 2025); KINN; PI-KAN; HRKANs | Active frontier |
| Fourier Neural Operators | Li et al. (2020); O-FNO (2025); ReBA accelerator | Widely adopted |
| DeepONet variants | Lu et al. (2019); L-DeepONet; Hybrid KAN-DeepONet; Quantum DeepONet | Mature + expanding |
| PDE Foundation Models | Poseidon; OmniArch; PDEformer; Geo-NeW | Emerging (2024–2026) |
| Deep BSDE & high-dimensional | Han, Jentzen, & E (PNAS 2018); Deep Shotgun; DRDM; Heun-BSDE | Active |
| Data-driven PDE discovery | SINDy (Brunton et al.); GN-SINDy; Evo-SINDy; Bayesian-SINDy | Active |
| Structure-preserving methods | Hairer et al. (2006); Stochastic multisymplectic; Geo-NeW | Maturing |
| High-order FEM/DG | hp-DGFEM Boltzmann; ML-accelerated FEM; FEX-PG | Mature + augmented |
| Fractional PDEs | Review (2024); O-FNO for fractional Poisson; Fractional Laplacian meshfree | Active |
| Hamilton–Jacobi PDEs | Review arXiv:2502.20833; Actor-critic NN; Deep BSDE for HJB | Active |
| Multiscale / ROM | MLP-based multiscale; POD-DL-ROM; Multi-fidelity ROM | Active |
| Uncertainty quantification | QMC/RQMC; PDE-DKL | Active |
| Quantum computing | Schrödingerisation; H-DES (ColibriTD); Quantum DeepONet | Early-stage |
| Photonic/analog solvers | LightSolver LPU | Very early-stage |

Background #

The Classical PDE Problem #

A general PDE on a domain $\Omega \subseteq \mathbb{R}^d$ takes the form

$$\mathcal{N} [u] (x) = f(x), \quad x \in \Omega, \qquad \mathcal{B} [u] (x) = g(x), \quad x \in \partial \Omega,$$

where $\mathcal{N}$ is a (possibly nonlinear) differential operator, $\mathcal{B}$ encodes boundary or initial conditions, and $u: \Omega \to \mathbb{R}$ is the unknown. Classical mesh-based methods — finite element (FEM), finite difference (FDM), finite volume (FVM), and spectral methods — discretise $\Omega$ into $N$ degrees of freedom and solve a resulting algebraic system. Their complexity typically scales as $O(N^\alpha)$ for some $\alpha \geq 1$, and in $d$ dimensions $N \sim h^{-d}$ for mesh spacing $h$, leading to exponential cost as $d$ grows.
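To make the $N \sim h^{-d}$ scaling concrete, a few lines of Python (illustrative only; real solvers exploit sparsity, but the degree-of-freedom count alone grows exponentially in $d$):

```python
# Degrees of freedom for a uniform tensor-product mesh with spacing h on
# [0,1]^d: N = (1/h)^d, so halving h multiplies N by 2^d.
def dofs(h: float, d: int) -> float:
    return (1.0 / h) ** d

for d in (1, 2, 3, 6):
    print(f"d={d}: h=0.01 gives N = {dofs(0.01, d):.1e}")
```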

The Deep Learning Turn #

The 2019 PINN paper by Raissi, Perdikaris, and Karniadakis, and the 2020 FNO paper by Li et al., triggered an explosion of mesh-free and operator-learning approaches. Rather than discretising $\Omega$, these methods parameterise $u$ (or the solution operator $\mathcal{N}^{-1}$) as a neural network and minimise a physics-informed or data-driven loss. The key advantages are mesh-free flexibility, natural handling of inverse problems, and — in the operator-learning setting — the ability to generalise across PDE instances.


Recent Developments #

1. Physics-Informed Neural Networks (PINNs) and Variants #

PINNs, introduced by Raissi, Perdikaris, and Karniadakis (2019), embed physical laws directly into the neural network loss function as residual terms of the form $\mathcal{L}_{\text{phys}} = \|\mathcal{N}[\hat{u}] - f\|^2$, supplemented by data, boundary, and initial condition constraints. Their appeal lies in a mesh-free design that handles irregular geometries and inverse problems naturally. Yet PINN training is notoriously fragile — subject to spectral bias, loss imbalance, and stiffness — motivating a rich line of training improvements.
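The residual-plus-boundary loss structure can be sketched in a few lines. Here a polynomial ansatz stands in for the neural network, so minimising the physics loss reduces to linear least squares — a toy model of the loss, not the PINN training procedure itself:

```python
import numpy as np

# Toy "physics-informed" fit for u'' = f on (0,1), u(0) = u(1) = 0, with
# f(x) = -pi^2 sin(pi x) (exact solution u = sin(pi x)). A polynomial
# ansatz u(x) = sum_k theta_k x^k stands in for the neural network; the
# residual is linear in theta, so least squares minimises the loss exactly.
K = 10                                  # polynomial degree
xs = np.linspace(0.0, 1.0, 64)          # collocation (residual) points
f = -(np.pi ** 2) * np.sin(np.pi * xs)

# Physics rows: d^2/dx^2 x^k = k (k-1) x^(k-2)  (zero for k = 0, 1)
A_phys = np.stack([k * (k - 1) * xs ** max(k - 2, 0) for k in range(K + 1)],
                  axis=1)
# Boundary rows: u(0) = 0 and u(1) = 0, with a penalty weight w -- the same
# loss-balancing knob that makes full PINN training delicate.
A_bc = np.array([[0.0 ** k for k in range(K + 1)], [1.0] * (K + 1)])
w = 10.0
A = np.vstack([A_phys, w * A_bc])
b = np.concatenate([f, np.zeros(2)])

theta, *_ = np.linalg.lstsq(A, b, rcond=None)
u_hat = sum(theta[k] * xs ** k for k in range(K + 1))
err = np.max(np.abs(u_hat - np.sin(np.pi * xs)))
print(f"max error vs exact solution: {err:.2e}")
```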

Staged training strategies. A 2025 IEEE paper proposes a two-stage process: a short-time pretraining phase followed by extension to the full time domain, combined with uncertainty-guided sampling. This significantly improves accuracy and efficiency for time-dependent PDEs compared to standard PINNs (IEEE, 2025).

Evolutionary optimisation of PINNs. A 2025 arXiv paper introduces evolutionary optimisation to tune PINN architectures, improving robustness when data are scarce by enforcing physical laws through the training loss (arXiv:2501.06572).

Automatic structure discovery via knowledge distillation. A 2025 Nature Communications paper proposes a physics-informed distillation framework that decouples physical and parameter regularisation in teacher–student networks, then uses clustering and parameter reconstruction to embed physically meaningful structures. Experiments on Laplace, Burgers, Poisson, and fluid mechanics equations show improved accuracy, training efficiency, and transferability (arXiv:2502.06026).

Production-ready frameworks include PhysicsNeMo/Modulus (CUDA-optimised kernels with 4× speedups) and DeepXDE, which support adaptive weighting schemes, curriculum learning, intelligent residual point sampling, and domain decomposition for stiff problems.

2. Kolmogorov–Arnold Networks (KANs) for PDEs #

Proposed by Liu, Wang, Vaidya et al. (2024, accepted ICLR 2025), KANs replace fixed activation functions at MLP nodes with learnable spline-parameterised functions on each edge. This change — inspired by the Kolmogorov–Arnold representation theorem — provides faster neural scaling laws, improved interpretability, and comparable or better accuracy with far fewer parameters, especially for scientific AI tasks. The major PINN-KAN hybrid architectures are as follows:

| Architecture | PDE focus | Key claim |
|---|---|---|
| KINN | Solid mechanics, multi-scale, singularities | Significantly outperforms MLP-PINNs in accuracy and convergence speed |
| PI-KAN | Navier–Stokes (forward) | High prediction accuracy; addresses information bottleneck |
| HRKANs | Poisson, Burgers | Highest fitting accuracy, lowest training time vs. KAN and ReLU-KAN |
| PIKANs (adaptive grid) | Forward PDE problems | Up to 84× faster training; adaptive state transition reduces $L^2$ error by 43% |
| EvoKAN | Complex PDE systems | Energy-dissipative; encodes only the initial state, avoiding retraining |
| KAN-ODEs | Schrödinger, Allen–Cahn, dynamical systems | Improved performance over Neural ODEs in discovering hidden physics |

KANs are also being used inside DeepONet branch/trunk networks for hybrid neural operator surrogates in porous media flows, including Darcy flow and 2D/3D multiphase problems (arXiv:2511.02962). For a deeper treatment of KAN architectures for PDEs, see the companion post in this series.
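A minimal sketch of the core KAN ingredient — a learnable univariate edge function expressed in a fixed basis — using piecewise-linear hat functions in place of the cubic B-splines of Liu et al.:

```python
import numpy as np

# One KAN "edge function": phi(x) = sum_j c_j B_j(x), where the B_j are
# fixed basis functions on a knot grid and the coefficients c_j are the
# trainable parameters. Hat functions stand in for B-splines here, and the
# coefficients are fit by least squares rather than gradient descent.
grid = np.linspace(-1.0, 1.0, 11)            # knot grid

def hat_basis(x, grid):
    """Evaluate all hat functions at points x -> matrix (len(x), len(grid))."""
    h = grid[1] - grid[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - grid[None, :]) / h)

x = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * x)                   # function this edge should learn
B = hat_basis(x, grid)
c, *_ = np.linalg.lstsq(B, target, rcond=None)
phi = B @ c
print(f"max error: {np.max(np.abs(phi - target)):.3f}")
```

Refining the knot grid sharpens the fit without changing the network topology, which is the mechanism behind KANs' grid-extension training.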

3. Neural Operator Learning #

Neural operators learn mappings between infinite-dimensional function spaces — enabling resolution-invariant, discretisation-agnostic PDE solvers. The two dominant architectures are the Fourier Neural Operator (FNO) and Deep Operator Networks (DeepONet).

FNO applies global convolution in Fourier space, giving resolution invariance and fast inference. The 2025 Optimised FNO (O-FNO) integrates residual connections and enhanced spectral resolution for the 2D fractional Poisson equation, achieving over 98% test accuracy and outperforming both base FNO and DeepONet. A hardware/algorithm co-design chip, ReBA, implements the Galerkin Transformer achieving 34.57× speedup over CPUs and up to 51.26× over prior accelerators (IEEE, 2025).
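The heart of FNO is the spectral convolution layer, sketched below in 1D with random placeholder weights (untrained; the full layer also carries channel mixing and a pointwise path):

```python
import numpy as np

# One FNO spectral-convolution layer in 1D: FFT, multiply the lowest
# `modes` frequencies by learnable complex weights, zero the rest,
# inverse FFT. Weight values here are random placeholders, not trained.
rng = np.random.default_rng(0)

def spectral_conv_1d(v, weights, modes):
    """v: (n,) real signal; weights: (modes,) complex; returns (n,) real."""
    v_hat = np.fft.rfft(v)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = v_hat[:modes] * weights   # learnable per-mode multipliers
    return np.fft.irfft(out_hat, n=len(v))

n, modes = 128, 16
w = rng.normal(size=modes) + 1j * rng.normal(size=modes)
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
out = spectral_conv_1d(np.sin(3 * x), w, modes)
print(out.shape)
```

Because the weights live in frequency space, the same layer applies unchanged to inputs of any resolution with at least `2 * modes` samples — the source of FNO's resolution invariance.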

DeepONet’s branch-trunk architecture excels under noise and complex geometries where FNO degrades. Recent extensions include multi-fidelity physics-guided DeepONet (2025), Fusion DeepONet for hypersonic flow predictions on arbitrary grids (arXiv:2501.01934), and Latent-space DeepONet (L-DeepONet) (Nature Communications, 2024), which outperforms all other neural operators with small latent dimensions ($d \leq 100$), enabling real-time high-dimensional predictions. Ensemble and Mixture-of-Experts DeepONets achieve 2–4× lower relative $\ell_2$ errors through basis enrichment and spatial locality (arXiv:2405.11907). Taylor Mode Neural Operators provide an order-of-magnitude speed-up for DeepONet and 8× for FNO in computing high-order derivatives via Taylor-mode automatic differentiation.
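The branch-trunk contraction at the core of DeepONet can be sketched with random linear maps standing in for the trained networks:

```python
import numpy as np

# DeepONet skeleton: the branch net encodes the input function u, sampled
# at m sensor locations, into p coefficients b; the trunk net encodes the
# query point y into p basis values t; the output is the dot product b . t,
# approximating G(u)(y). Random weights stand in for trained networks.
rng = np.random.default_rng(1)
m, p = 50, 20
W_branch = rng.normal(size=(p, m))           # placeholder branch weights
W_trunk = rng.normal(size=(p, 1))            # placeholder trunk weights

def deeponet(u_sensors, y):
    b = np.tanh(W_branch @ u_sensors)        # branch: function -> coefficients
    t = np.tanh(W_trunk @ np.atleast_1d(y))  # trunk: query point -> basis
    return float(b @ t)                      # G(u)(y) ~ sum_k b_k t_k

sensors = np.linspace(0, 1, m)
u = np.sin(2 * np.pi * sensors)              # an input function, sampled
print(deeponet(u, 0.3))
```

The trunk can be queried at any point $y$, independent of the sensor grid, which is why DeepONet tolerates complex geometries and scattered data.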

Graph Neural Operator Methods. The GOLA framework (2025) addresses the limitation of regular-grid assumptions by constructing graphs from irregularly sampled spatial points with a Fourier-based encoder for learnable complex-coefficient embeddings, outperforming baselines in data-scarce regimes across 2D Darcy, Advection, Eikonal, and Nonlinear Diffusion problems (arXiv:2505.18923).

4. Foundation Models for PDEs #

Inspired by the success of LLMs, PDE foundation models represent a paradigm shift: large transformers pre-trained on diverse physical systems that can be fine-tuned for downstream tasks with minimal data.

Poseidon (ETH Zurich, 2024) is a multiscale operator transformer with time-conditioned layer norms, enabling continuous-in-time evaluation. Pre-trained on diverse physical systems, it exploits the semigroup property of time-dependent PDEs for significant data scaling (arXiv:2405.19101).

OmniArch (ICML 2025) is the first multi-scale and multi-physics scientific computing foundation model, featuring a Fourier encoder-decoder and transformer backbone with a PDE-Aligner for physics-informed fine-tuning. It achieves unified 1D-2D-3D pre-training on PDEBench and demonstrates zero-shot learning on new physics.

PDEformer (2025) represents PDEs as computational graphs integrating symbolic and numerical information; a graph transformer with implicit neural representation enables mesh-free predictions with zero-shot accuracy comparable to specialist models (arXiv:2402.12652).

Multimodal PDE Foundation Model (UCLA, 2025) integrates both numerical inputs (equation parameters, initial conditions) and text descriptions. It achieves average relative error below 3.3% in-distribution and generates interpretable scientific text — bridging NLP and scientific computing (arXiv:2502.06026).

Physics-informed fine-tuning (arXiv:2603.15431, 2026) establishes that hybrid fine-tuning (combining physics-informed and data-driven objectives) achieves superior extrapolation to downstream tasks and enables data-free learning of unseen PDE families.

Geo-NeW (arXiv:2602.02788, Feb 2026) — General-Geometry Neural Whitney Forms — is a data-driven finite element method jointly learning differential operators and compatible finite element spaces on the geometry. It exactly preserves physical conservation laws via Finite Element Exterior Calculus, with state-of-the-art performance on out-of-distribution geometries.

5. Deep Learning for High-Dimensional PDEs #

Classical mesh-based methods suffer exponential complexity growth in dimension $d$. Three principal deep learning paradigms address this.

The Deep BSDE method (Han, Jentzen, & E, PNAS, 2018) reformulates semilinear parabolic PDEs using backward stochastic differential equations (BSDEs) and learns the gradient of the solution with neural networks, enabling solution of PDEs in hundreds to thousands of dimensions. A 2025 review by the original authors (Han, 2025) traces subsequent advances, including the Deep Shotgun, DRDM, and Heun-BSDE variants.
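The probabilistic reformulation is easiest to see in its linear special case, the Feynman–Kac formula, where plain Monte Carlo already evades the mesh entirely:

```python
import numpy as np

# Feynman-Kac, the linear special case underlying Deep BSDE:
# u(t,x) = E[g(x + W_{T-t})] solves u_t + (1/2) Lap u = 0 with u(T,.) = g.
# The cost involves no mesh, so d = 100 is no harder than d = 1.
rng = np.random.default_rng(0)
d, T, t = 100, 1.0, 0.0
x = np.zeros(d)
g = lambda y: np.sum(y ** 2, axis=-1)        # terminal condition g(y) = |y|^2

Z = rng.standard_normal((200_000, d))
u_mc = np.mean(g(x + np.sqrt(T - t) * Z))    # Monte Carlo estimate of u(t,x)
u_exact = g(x) + d * (T - t)                 # analytic value: |x|^2 + d(T - t)
print(u_mc, u_exact)
```

Deep BSDE extends this to semilinear equations, where the expectation no longer has closed form, by learning the gradient process with neural networks.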

The Deep Ritz method (E & Yu, 2018) minimises energy functionals using neural networks. Extensions to multiscale problems leverage scale convergence theory to derive $\Gamma$-limits of oscillatory energy functionals.

The Full History Recursive Multilevel Picard (MLP) methodology — combining Picard iterations with multilevel Monte Carlo — was the first method proven to overcome the curse of dimensionality for semilinear parabolic PDEs and remains one of very few methods with such proven guarantees.

PDE-DKL (2025) combines deep learning for low-dimensional latent representations with Gaussian Processes for kernel regression under explicit PDE constraints, providing both high accuracy and principled uncertainty quantification in limited-data regimes (arXiv:2501.18258).

6. Classical High-Order Methods: FEM, DG, and Spectral #

Despite the deep learning surge, classical methods continue to mature, particularly in rigorous error analysis and efficiency.

The hp-version DG finite element method for the Boltzmann transport problem (J. Sci. Comput., 2024) achieves arbitrary-order convergence rates and handles polytopic elements, enabling efficient parallel implementation within existing multigroup discrete ordinates software. High-order DG methods for unsteady compressible flows — targeting acoustic waves, turbulence, and magnetohydrodynamics — benefit from block-diagonal mass matrices allowing efficient explicit time-stepping.

A systematic 2024 approach uses neural networks to learn the element-wise solution map of PDEs, accelerating finite element-type methods in an “element neural network” paradigm that generalises across element geometries. Machine learning-based spectral methods combine orthogonal function expansions (Fourier, Legendre) with deep neural operator learning for highly accurate solutions with fewer grid points.

FEX-PG (2024) solves high-dimensional partial integro-differential equations using parameter grouping to reduce coefficient count and Taylor series approximation for integral terms, achieving relative errors on the order of single-precision machine epsilon while providing interpretable, explicit solution formulas absent from most DL methods (arXiv:2410.00835).

7. Structure-Preserving Numerical Methods #

Structure-preserving methods retain intrinsic properties of the continuous system — symplecticity, energy conservation, divergence-free constraints — at the discrete level. They enhance numerical stability and long-term accuracy, ensuring computed solutions respect the underlying mathematical structure.

Recent research encompasses geometric integrators and mimetic discretisations for conservative finite element, difference, and volume schemes; stochastic multisymplectic PDEs and their structure-preserving discretisations (Studies in Applied Mathematics, 2025); and structure-preserving learning via the Geo-NeW model, which exactly preserves physical conservation laws through Finite Element Exterior Calculus. A 2024 University of Maryland workshop identified integration of structure-preserving methods with uncertainty quantification as a key open problem.
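A textbook illustration of why discrete structure preservation matters — symplectic versus explicit Euler on the harmonic oscillator $H = (p^2 + q^2)/2$:

```python
# Symplectic Euler keeps the energy of the harmonic oscillator bounded over
# long times; explicit Euler drifts without bound at the same step size.
def step_explicit(q, p, h):
    return q + h * p, p - h * q

def step_symplectic(q, p, h):
    p = p - h * q          # kick with the old q
    q = q + h * p          # drift with the new p
    return q, p

h, n = 0.1, 10_000
qe, pe = 1.0, 0.0
qs, ps = 1.0, 0.0
for _ in range(n):
    qe, pe = step_explicit(qe, pe, h)
    qs, ps = step_symplectic(qs, ps, h)

H = lambda q, p: 0.5 * (q * q + p * p)
print(f"explicit Euler:   H = {H(qe, pe):.3e}")   # blows up
print(f"symplectic Euler: H = {H(qs, ps):.3e}")   # stays near 0.5
```

The symplectic scheme conserves a modified energy exactly, which is the mechanism behind the long-term accuracy of geometric integrators.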

8. Data-Driven PDE Discovery #

SINDy and its extensions use sparse regression over a dictionary of candidate functions. GN-SINDy (2024–2026) addresses high dimensionality and large datasets by combining Q-DEIM greedy sampling, differentiable surrogate modelling, and sparse regression, showing robustness on Burgers, Allen-Cahn, and KdV equations. Evo-SINDy (ACM, 2025) uses multi-population co-evolutionary algorithms for universal PDE identification. Bayesian-SINDy quantifies parameter uncertainty robustly (arXiv:2402.15357).
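The sequentially thresholded least squares (STLSQ) at the heart of SINDy fits in a few lines. The example below uses synthetic derivative data from a known right-hand side; in practice, derivatives are estimated from noisy measurements:

```python
import numpy as np

# STLSQ sketch: regress du/dt onto a dictionary of candidate terms, then
# alternately prune small coefficients and refit. Data generated from
# du/dt = -2 u + 0.5 u^3.
u = np.linspace(-2.0, 2.0, 100)
dudt = -2.0 * u + 0.5 * u ** 3

Theta = np.stack([np.ones_like(u), u, u ** 2, u ** 3, np.sin(u)], axis=1)
names = ["1", "u", "u^2", "u^3", "sin(u)"]

xi, *_ = np.linalg.lstsq(Theta, dudt, rcond=None)
for _ in range(10):                          # STLSQ iterations
    small = np.abs(xi) < 0.1                 # threshold lambda = 0.1
    xi[small] = 0.0
    big = ~small
    if big.any():                            # refit surviving terms only
        xi[big], *_ = np.linalg.lstsq(Theta[:, big], dudt, rcond=None)

print({n: round(float(c), 3) for n, c in zip(names, xi) if c != 0.0})
```

The recovered model keeps only the `u` and `u^3` terms, matching the generating equation; the threshold plays the role of the sparsity prior that Bayesian-SINDy treats probabilistically.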

On the neural-symbolic front, Mechanistic PDE Networks (arXiv:2502.18377, 2025) represent spatiotemporal data as space-time dependent linear PDEs within neural network hidden representations, then solve and decode for specific tasks. MORL4PDEs (Chaos Solitons Fractals, 2024) uses reinforcement learning and genetic algorithms for symbolic PDE regression without pre-specified candidate libraries. The Physics-Informed Information Criterion (PIC) (Research, 2022) selects the most appropriate PDE from candidates by incorporating symmetry constraints.

9. Hamilton–Jacobi PDEs #

Hamilton–Jacobi (HJ) PDEs govern optimal control, level-set methods, and front propagation. A comprehensive 2025 review (arXiv:2502.20833) covers grid-based methods, representation formula methods, Monte Carlo via Laplace’s method, and deep learning approaches. Key deep learning advances include actor-critic neural network frameworks for static HJ equations (convergence analysed in 2024), and variational methods that solve HJ PDEs up to 100 dimensions with relative errors of 1–5%. Deep BSDE methods naturally apply to Hamilton-Jacobi-Bellman (HJB) equations arising in stochastic optimal control.
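For first-order HJ equations with convex Hamiltonian, the Hopf–Lax formula gives a representation that can be evaluated directly in low dimensions. The grid minimisation below is exponential in $d$ — precisely the cost that the deep learning methods aim to avoid:

```python
import numpy as np

# Hopf-Lax formula for u_t + |grad u|^2 / 2 = 0 with u(0,.) = g:
#   u(t, x) = min_y [ g(y) + |x - y|^2 / (2 t) ].
# Evaluated here by brute-force minimisation over a 1D grid of y values.
g = lambda y: np.abs(y)                      # initial data
ys = np.linspace(-5.0, 5.0, 2001)

def hopf_lax(t, x):
    return float(np.min(g(ys) + (x - ys) ** 2 / (2.0 * t)))

print(hopf_lax(1.0, 2.0))   # analytic value: minimiser y = 1, u = 1.5
```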

10. Fractional and Non-Local PDEs #

Fractional-order derivatives model anomalous diffusion, viscoelastic behaviour, and memory effects that integer-order PDEs cannot capture. Recent advances include semi-analytical methods (Adomian Decomposition, Variational Iteration) applied to 3D time-fractional diffusion, telegraph, and wave equations; a 2024 comprehensive review of fractional stochastic PDEs covering the latest numerical methods and practical implementations; the Optimised FNO (O-FNO, 2025) achieving 98%+ test accuracy for fractional Poisson equations; and a 2025 meshfree finite difference scheme for the fractional Laplacian on arbitrary bounded domains.

11. Multiscale Methods and Model Order Reduction #

The 2024 Numerical Multiscale Methods dissertation establishes an equivalence between time averaging and space homogenisation, and extends Deep Ritz to multiscale problems via scale convergence theory. Multi-fidelity reduced order models for PDE-constrained optimisation (arXiv:2503.21252, 2025) use a hierarchical trust region algorithm with active learning, constructing a full/reduced/ML model hierarchy on-the-fly. POD-DL-ROMs (Politecnico di Milano, 2024) combine proper orthogonal decomposition with autoencoder architectures for nonlinear parametric PDEs, providing a mathematically rigorous framework enhancing accuracy of reduced models.

12. Uncertainty Quantification and Stochastic PDEs #

Quasi-Monte Carlo (QMC) methods achieve faster convergence than Monte Carlo for smooth integrands. A 2024 paper analyses QMC with generalised Gaussian random variables and Gevrey regular inputs — relaxing the standard uniformly bounded assumption — analysing dimension truncation, FEM, and QMC errors jointly for randomly shifted rank-1 lattice rules (arXiv:2411.03793). Randomised QMC (RQMC) with scrambled Sobol’ sequences achieves smaller bias and RMSE than Monte Carlo for risk-averse optimisation (arXiv:2408.02842). A 2024 ICERM semester at Brown University (“Numerical PDEs: Analysis, Algorithms, and Data Challenges”) served as a major gathering point for researchers integrating uncertainty quantification with PDE methods.
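A randomly shifted rank-1 lattice rule — the QMC construction analysed in the work above — is a few lines of code. The generating vector below is an arbitrary illustrative choice, not an optimised one:

```python
import numpy as np

# Randomly shifted rank-1 lattice rule: points x_i = frac(i z / n + shift)
# for i = 0..n-1, a generating vector z, and a uniform random shift.
rng = np.random.default_rng(0)
d, n = 4, 2 ** 12
z = np.array([1, 433461, 315689, 441789]) % n    # illustrative vector only
shift = rng.random(d)
i = np.arange(n)[:, None]
x = np.mod(i * z[None, :] / n + shift, 1.0)      # n lattice points in [0,1)^d

# Smooth product test integrand with exact integral 1 over [0,1]^d.
f = lambda pts: np.prod(1.0 + 0.5 * (pts - 0.5), axis=1)
q_qmc = f(x).mean()
q_mc = f(rng.random((n, d))).mean()
print(abs(q_qmc - 1.0), abs(q_mc - 1.0))
```

For smooth integrands, well-chosen lattice rules achieve error decay close to $O(n^{-1})$, versus the $O(n^{-1/2})$ of plain Monte Carlo, and the random shift restores an unbiased error estimate.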

13. Quantum and Photonic Computing for PDEs #

Schrödingerisation techniques convert general linear PDEs into Schrödinger-type equations via the “warped transformation,” enabling direct quantum Hamiltonian simulation. A 2024 Quantum journal paper provides explicit quantum circuit implementations for the heat and advection equations with complexity analysis demonstrating quantum advantage in high dimensions. ColibriTD’s H-DES (March 2025) was reported as the first real-hardware solution of a PDE via variational quantum algorithm, executing on IBM’s 156-qubit Heron R2 processor for the inviscid Burgers’ equation.

LightSolver’s Laser Processing Unit (LPU) (announced September 2025) can now directly map and solve PDEs, with constant-time iteration steps independent of problem size, claiming up to 100× speed gains over GPU solvers and partnerships with Ansys for engineering integration.


Open Problems #

PINN training stability. Despite many improvements, PINN training remains fragile for stiff and multi-scale problems. A general theory of loss landscape conditioning and principled hyperparameter selection is lacking.

Neural operator generalisation theory. While FNO and DeepONet generalise empirically across PDE instances, rigorous approximation-theoretic guarantees relating operator-learning error to network width, depth, and training data remain incomplete.

Foundation model reliability and extrapolation. PDE foundation models show impressive zero-shot accuracy within their pre-training distribution, but their failure modes on out-of-distribution physics — and the extent to which physics-informed fine-tuning can compensate — are not yet well understood.

High-dimensional solvers beyond parabolic PDEs. The Deep BSDE method and MLP method primarily address semilinear parabolic PDEs. Extending their curse-of-dimensionality guarantees to elliptic, hyperbolic, or fully nonlinear PDEs remains largely open.

Structure-preserving deep learning. Integrating conservation laws and geometric structure (symplecticity, divergence-free constraints) into neural PDE solvers at scale — beyond the Geo-NeW approach for specific exterior calculus structures — is an active and unresolved challenge.

Quantum hardware advantage. Near-term quantum devices face noise and connectivity limitations that restrict their practical advantage over classical HPC for PDE solving. Demonstrating genuine quantum speedup for industrially relevant PDEs on real hardware remains an open goal.


References #

Brunton, S. L., Proctor, J. L., & Kutz, J. N. (2016). Discovering governing equations from data by sparse identification of nonlinear dynamical systems. PNAS, 113(15), 3932–3937.

ColibriTD. (2025, March). H-DES: First real-hardware PDE solver via variational quantum algorithm. The Quantum Insider. https://thequantuminsider.com/2025/03/25/colibritd-announces-h-des-pde-solver-as-a-step-toward-accessible-quantum-simulation-in-engineering/

E, W., & Yu, B. (2018). The deep Ritz method: A deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 6(1), 1–12.

E, W., Han, J., & Jentzen, A. (2022). Algorithms for solving high dimensional PDEs: From nonlinear Monte Carlo to machine learning. Nonlinearity, 35(1), 278.

Han, J., Jentzen, A., & E, W. (2018). Solving high-dimensional partial differential equations using deep learning. PNAS, 115(34), 8505–8510. https://www.pnas.org/doi/10.1073/pnas.1718942115

Han, J. (2025). A brief review of the Deep BSDE method for solving high-dimensional partial differential equations. arXiv:2505.17032. https://arxiv.org/abs/2505.17032

Hu, J., Jin, S., Liu, N., & Zhang, L. (2024). Quantum circuits for partial differential equations via Schrödingerisation. Quantum, 8, 1563. https://quantum-journal.org/papers/q-2024-12-12-1563/

IEEE. (2025). A staged training approach for physics-informed neural networks in solving partial differential equations. https://ieeexplore.ieee.org/document/11172661/

IEEE. (2025). Higher-order-ReLU-KANs (HRKANs) for solving physics-informed neural networks more accurately, robustly and faster. https://ieeexplore.ieee.org/document/11105234/

IEEE. (2025). ReBA: A hybrid sparse reconfigurable butterfly accelerator for solving PDEs via hardware and algorithm co-design. https://ieeexplore.ieee.org/document/11044078/

IEEE. (2025). An optimized Fourier neural operator for the 2D fractional Poisson equation. https://ieeexplore.ieee.org/document/11405135/

Li, Z., et al. (2020). Fourier neural operator for parametric partial differential equations. arXiv:2010.08895.

LightSolver. (2025, September). LightSolver announces advance in physical modeling on the LPU. The Quantum Insider. https://thequantuminsider.com/2025/09/16/lightsolver-announces-advance-in-physical-modeling-on-the-lpu-and-new-roadmap-for-optical-analog-pde-solving/

Liu, Z., et al. (2024). KAN: Kolmogorov-Arnold Networks. arXiv:2404.19756. ICLR 2025. https://arxiv.org/abs/2404.19756

Lu, L., Jin, P., Pang, G., Zhang, Z., & Karniadakis, G. E. (2021). Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3, 218–229.

Lu, L., et al. (2024). Learning nonlinear operators in latent spaces for real-time predictions of complex dynamics in physical systems. Nature Communications. https://www.nature.com/articles/s41467-024-49411-w

McCabe, M., et al. (2025). Poseidon: Efficient foundation models for PDEs. arXiv:2405.19101. https://arxiv.org/html/2405.19101v2

Peng, W., et al. (2025). OmniArch: Building foundation model for scientific computing. ICML 2025. https://icml.cc/virtual/2025/poster/45099

Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686–707.

Shi, Z., et al. (2025). Physics-informed fine-tuning of foundation models for partial differential equations. arXiv:2603.15431. https://arxiv.org/html/2603.15431v1

Wang, S., et al. (2025). Geo-NeW: Structure-preserving learning improves geometry generalization in PDEs. arXiv:2602.02788. https://arxiv.org/abs/2602.02788

Wang, Z., et al. (2024). Kolmogorov–Arnold-Informed neural network: A physics-informed deep learning framework for solving forward and inverse problems. Computer Methods in Applied Mechanics and Engineering. https://linkinghub.elsevier.com/retrieve/pii/S0045782524007722

Xiao, P., et al. (2025). Quantum DeepONet: Neural operators accelerated by quantum computing. Quantum, 9, 1761. https://quantum-journal.org/papers/q-2025-06-04-1761/

Xie, Z., et al. (2025). Anant-Net: Breaking the curse of dimensionality with scalable and interpretable neural surrogates. arXiv:2505.03595. https://arxiv.org/html/2505.03595v3

Xie, Z., et al. (2025). A deep shotgun method for solving high-dimensional parabolic partial differential equations. Journal of Scientific Computing. https://link.springer.com/10.1007/s10915-025-02983-1

Xu, K., & Darve, E. (2025). Integration matters for learning PDEs with backwards SDEs. arXiv:2505.01078. https://arxiv.org/abs/2505.01078

Zeng, Q., et al. (2025). Automatic network structure discovery of physics informed neural networks via knowledge distillation. Nature Communications. https://www.nature.com/articles/s41467-025-64624-3

Zhang, Y., et al. (2024). PDEformer: Towards a foundation model for one-dimensional partial differential equations. arXiv:2402.12652. http://arxiv.org/pdf/2402.12652.pdf

Zhang, Y., et al. (2025). A multimodal PDE foundation model for prediction and scientific text descriptions. arXiv:2502.06026. https://arxiv.org/abs/2502.06026
