String Theory

One of the emerging applications of machine learning in mathematics and physics in recent years has been the study of string theory. The field is famous for its abundance of symmetry, which necessitates specialized network designs. Among the most important geometries in mathematics and physics are Calabi--Yau (CY) manifolds. String theory in flat ten-dimensional Minkowski space can be compactified on various CY geometries. The case of complex dimension d=3 leads to CY 3-folds and a compactification to the familiar flat four-dimensional spacetime, while the case d=2 requires the target geometry to be a complex torus or a K3 surface. My research is dedicated to a deep understanding of these geometries. The field has recently seen promising applications of machine learning that yield further insight into their algebraic-geometric structures as well as their numerical invariants.

Figure: a lattice polyhedron and its dual (interactive 3D model).

Toric K3 surfaces

String compactification on toric K3 hypersurfaces can be constructed from reflexive 3D lattice polyhedra. A full classification of all 4319 such polyhedra was obtained by Kreuzer and Skarke [KS98]. It has been known since the 1980s that in this case supersymmetry extends to N=(4,4) due to the hyperkähler metric, which has profound consequences for mirror symmetry. Indeed, Dolgachev's interpretation of mirror symmetry for K3 surfaces through orthogonal Picard lattices [Dol96] does not match Batyrev's mirror construction for toric hypersurfaces [Bat94] in the presence of toric correction terms. Instead, a computer-assisted proof for pairs of dual reflexive polyhedra was given by Rohsiepe [Roh04], replacing the Picard lattice by a suitably chosen sublattice.
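To make the polyhedral duality concrete, here is a minimal sketch (not part of the project's pipeline) that checks reflexivity of a 3D lattice polytope: it computes the polar dual and tests whether all dual vertices are again lattice points. The example polytope, the use of scipy's convex hull, and the numerical tolerance are illustrative assumptions.

```python
# Minimal sketch: check whether a 3D lattice polytope (with the origin in its
# interior) is reflexive, i.e. whether its polar dual is again a lattice polytope.
# The example vertices below are illustrative; any reflexive 3D polytope works.
import numpy as np
from scipy.spatial import ConvexHull

def dual_vertices(vertices):
    """Vertices of the polar dual {y : <y, x> >= -1 for all x in the polytope}."""
    hull = ConvexHull(vertices)
    # Each facet satisfies normal . x + offset = 0, with normal . x + offset <= 0
    # inside the polytope, so offset < 0 when the origin is interior.
    # The dual vertex attached to that facet is normal / offset.
    duals = [eq[:-1] / eq[-1] for eq in hull.equations]
    # Round and deduplicate (qhull triangulates facets, so equations repeat).
    return np.unique(np.round(np.array(duals), 9), axis=0)

def is_reflexive(vertices, tol=1e-6):
    """A polytope is reflexive iff every vertex of its polar dual is integral."""
    dv = dual_vertices(np.asarray(vertices, dtype=float))
    return bool(np.all(np.abs(dv - np.round(dv)) < tol))

# Example: the cube [-1, 1]^3 is reflexive; its dual is the octahedron.
cube = [[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
print(is_reflexive(cube))                           # True
print(dual_vertices(np.array(cube, dtype=float)))   # vertices of the octahedron
```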

In this project, we take a novel AI-assisted approach to explain mirror symmetry for K3 surfaces. We design a neural network optimized for permutation-invariant geometric input data such as 3D polyhedra. This network is inspired by the PointNet architecture [QSMG17] for 3D point cloud classification and consists of three key modules: (1) T-Net, (2) Encoder, (3) Decoder. We then train on geometric data derived from 3D lattice polyhedra. As a result, we are able to deep learn Picard numbers of toric K3 surfaces with high accuracy. Mirror symmetry requires a matching of even non-degenerate lattices corresponding to dual polyhedra. For this, we train our network on more refined data in order to learn all cyclic factors and obtain the desired matching.
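The sketch below illustrates the permutation-invariant idea behind such a network in PyTorch: a shared per-point MLP followed by max-pooling over points, with a small head predicting the Picard number. It is a simplified stand-in for the architecture described above (the T-Net alignment module and the decoder are omitted), and all layer sizes and names are illustrative assumptions rather than the actual model. Max-pooling over the point axis is what makes the output independent of the ordering of the lattice points, mirroring the original PointNet design.

```python
# Hypothetical, simplified sketch of a PointNet-style encoder for lattice polyhedra.
# Input: a batch of point clouds of shape (batch, num_points, 3), e.g. the lattice
# points of a reflexive 3D polytope; output: a predicted Picard number.
# Layer sizes and the regression head are illustrative, not the actual model.
import torch
import torch.nn as nn

class PolytopeEncoder(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared per-point MLP, applied independently to every lattice point.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        # Head mapping the pooled global feature to a scalar prediction.
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, points):               # points: (batch, num_points, 3)
        per_point = self.point_mlp(points)   # (batch, num_points, feat_dim)
        # Max-pooling over the point axis makes the output invariant under
        # permutations of the input lattice points.
        global_feat = per_point.max(dim=1).values
        return self.head(global_feat).squeeze(-1)

# Toy usage with random "polytopes" of 30 points each.
model = PolytopeEncoder()
fake_batch = torch.randn(8, 30, 3)
print(model(fake_batch).shape)  # torch.Size([8])
```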


Paper


Bibliography