Papers
Paper discussions, DOI/arXiv references, AI summaries, and citations
The benchmark compares MACE-MP-0 against standard PBE calculations for lattice constants, bulk moduli, and phonon-derived descriptors across 220 inorganic compounds. The authors report roughly an order-of-magnitude improvement in wall-clock throughput while keeping the median lattice-constant error below 0.5%. What stood out to me is that they explicitly separate in-distribution oxides from out-of-distribution intermetallics, and the performance gap is much larger for the latter. I am curious how people here interpret this in practice. For high-throughput screening, we usually care less about absolute energies and more about ranking and filtering. If MACE preserves the energetic ordering of candidates, including unstable ones, it could replace early-stage DFT in many pipelines. But we still need robust uncertainty estimates before feeding candidates into expensive experimental loops.
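To make the "ranking over absolute energy" point concrete, here is the kind of sanity check I would run before swapping MACE in for early-stage DFT. This is a minimal sketch with synthetic placeholder energies, not the paper's data; in practice e_dft and e_mace would come from your own paired single-point calculations:

```python
# Compare MACE and DFT energies by rank rather than absolute value.
# Energies below are synthetic placeholders for illustration only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
e_dft = rng.normal(size=200)                        # hypothetical PBE energies (eV/atom)
e_mace = e_dft + rng.normal(scale=0.05, size=200)   # hypothetical MACE predictions

rho, p = spearmanr(e_dft, e_mace)
print(f"Spearman rho = {rho:.3f} (p = {p:.2g})")

# For screening, top-k overlap often matters more than global correlation:
k = 20
top_dft = set(np.argsort(e_dft)[:k])    # k most stable by DFT
top_mace = set(np.argsort(e_mace)[:k])  # k most stable by MACE
print(f"top-{k} recall: {len(top_dft & top_mace) / k:.2f}")
```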
The new model combines message passing with symmetry-aware tokenization and outperforms CGCNN and MEGNet baselines on formation-energy and elastic-tensor prediction benchmarks. Their ablation suggests that most of the gains come from enforcing space-group constraints during training rather than from raw parameter count. I would still like to see broader transfer tests: many benchmark sets overlap heavily with the public repositories used for pretraining, so true out-of-domain generalization is hard to judge. If anyone has tried this model on low-symmetry organic-inorganic hybrids, please share failure cases.
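For anyone wanting to probe the space-group angle on their own data, attaching symmetry labels during preprocessing is one place to start. This is a minimal sketch using pymatgen's SpacegroupAnalyzer, not the authors' code, and their actual constraint mechanism may work quite differently:

```python
# Tag a structure with space-group metadata as a preprocessing step.
# Toy rocksalt NaCl stands in for a real dataset entry.
from pymatgen.core import Lattice, Structure
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

structure = Structure.from_spacegroup(
    "Fm-3m", Lattice.cubic(5.64), ["Na", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]]
)

sga = SpacegroupAnalyzer(structure, symprec=0.01)
label = {
    "spacegroup_number": sga.get_space_group_number(),  # 225 for rocksalt
    "crystal_system": sga.get_crystal_system(),         # "cubic"
}
print(label)
```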
The release is exciting, but I want to temper expectations. A large fraction of the candidate structures are labeled as potentially synthesizable based on formation-energy filters and model predictions, not on a full thermodynamic phase-space analysis. For discovery workflows this is great, yet downstream teams should not treat every listed structure as synthesis-ready. For people who have already integrated GNoME into active-learning loops, what validation protocol are you using? We currently cross-check against Materials Project hull distances and then run a smaller DFT relaxation set before any Bayesian optimization step. Curious whether others see systematic biases in nitrides or chalcogenides.
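For reference, our hull-distance cross-check boils down to something like the sketch below. It assumes a Materials Project API key and a candidate ComputedEntry from your own relaxation; mixing your entries with MP's requires compatible functional settings and energy corrections, which I am glossing over here:

```python
# Compute energy above the convex hull for a candidate against MP references.
from mp_api.client import MPRester
from pymatgen.analysis.phase_diagram import PhaseDiagram
from pymatgen.entries.computed_entries import ComputedEntry

def e_above_hull(candidate: ComputedEntry, api_key: str) -> float:
    # Pull reference entries for the candidate's chemical system.
    chemsys = sorted({el.symbol for el in candidate.composition.elements})
    with MPRester(api_key) as mpr:
        ref_entries = mpr.get_entries_in_chemsys(chemsys)
    pd = PhaseDiagram(ref_entries + [candidate])
    return pd.get_e_above_hull(candidate)  # eV/atom above the hull

# Hypothetical candidate with an illustrative total energy:
# entry = ComputedEntry("Mg2SiO4", -45.0)
# print(e_above_hull(entry, api_key="YOUR_MP_API_KEY"))
```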
This workflow combines a pretrained graph model with uncertainty-aware acquisition to rank 2D candidates for hydrogen evolution and thermal stability. They evaluate around 70,000 hypothetical structures and run expensive DFT only for the top uncertainty-calibrated subset. The hit rate appears significantly better than random or heuristic filtering. One concern is that the candidate generator may be biased toward known motifs, limiting novelty. Still, the closed-loop process is a nice template for groups that cannot afford brute-force DFT over the full candidate space.
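The paper's exact acquisition function isn't something I can vouch for, so take this as a generic illustration of the idea rather than their method: score each candidate by predicted descriptor plus an uncertainty bonus, then send only the top of the ranking to DFT:

```python
# UCB-style acquisition: exploit good predictions, explore uncertain ones.
import numpy as np

def select_for_dft(mean: np.ndarray, std: np.ndarray, budget: int, kappa: float = 1.0):
    """Rank candidates by mean + kappa * std and return the top `budget` indices."""
    score = mean + kappa * std
    return np.argsort(score)[::-1][:budget]

rng = np.random.default_rng(42)
mu = rng.normal(size=70_000)             # hypothetical model predictions
sigma = rng.uniform(0.01, 0.3, 70_000)   # hypothetical per-candidate uncertainty
picked = select_for_dft(mu, sigma, budget=500)
print(picked[:10])
```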
This paper compiles more than 400 calculations across Mn, Fe, Co, and Ni oxides and evaluates how U choices affect oxidation energetics and magnetic ordering. Their biggest contribution is a consistent protocol for fitting U against both formation enthalpy and band-gap constraints, rather than matching only one observable. I appreciate that the supplementary information includes full INCAR sets and pseudopotential choices. Reproducing literature values has been frustrating because many papers omit these details. Has anyone here tried applying their fitted U values to mixed-anion systems like oxyfluorides?
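On the reproducibility point, having the full INCAR sets matters because DFT+U behavior hinges on a handful of tags. Below is a minimal illustrative fragment written with pymatgen; the U value is a placeholder, not one of their fitted numbers, and for oxyfluorides you would still need to decide whether a single U transfers across the mixed-anion environment:

```python
# Illustrative DFT+U INCAR for a hypothetical Mn oxyfluoride.
# Species order in the lists must match the POSCAR/POTCAR order (Mn, O, F).
from pymatgen.io.vasp.inputs import Incar

incar = Incar({
    "LDAU": True,
    "LDAUTYPE": 2,             # Dudarev scheme (single effective U - J)
    "LDAUL": [2, -1, -1],      # apply to Mn d states only; -1 disables
    "LDAUU": [3.9, 0.0, 0.0],  # placeholder U (eV), not a fitted value
    "LDAUJ": [0.0, 0.0, 0.0],
    "LMAXMIX": 4,              # required for proper mixing of d electrons
})
incar.write_file("INCAR")
```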