Abstract

This lecture addresses the fundamental paradigm shift in software construction introduced by Large Language Models (LLMs). While AI produces merely "probable" solutions through statistical inference, mission-critical software demands the "provable" guarantees of formal verification. The presentation explores the risks of "hallucination loops" in AI-assisted programming and Automatic Program Repair (APR), as identified in 2025 empirical studies. It proposes a rigorous framework in which AI acts as a generator of candidate programs that are then strictly validated through Design by Contract (DbC) and formal methods. The goal is to move from the "stochastic parrot" model of coding to a hybrid approach of "AI-assisted formal software engineering."

About this Lecture

Number of Slides:  30
Duration:  minutes
Languages Available:  English, French, German, Italian, Russian
Last Updated:  27/02/2026

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.