Are Your Random Generators Really Random? Testing Samplers from Theory to the Real World
Randomness plays a central role in modern computing, especially in today's AI systems. Artificial intelligence and machine learning algorithms, probabilistic reasoning, and many other core technologies rely on sampling: they repeatedly generate random examples according to carefully designed probabilities. But this raises an important question: how do we know these sampling programs are actually working correctly? If a program is supposed to produce results according to specific probabilities, even small errors can lead to misleading conclusions or unreliable systems. Surprisingly, testing whether a "random" program is correct is much harder than it sounds. Traditional methods require collecting huge numbers of samples, which quickly becomes impractical.

In this talk, the speaker will present a new way to test sampling algorithms more efficiently. Instead of only asking for random outputs, we allow ourselves to ask more targeted questions, such as requesting a random output restricted to a particular subset of possibilities. This additional flexibility dramatically reduces the number of samples needed, making testing feasible even for large and complex systems. Beyond the technical results, the talk will highlight real-world challenges, especially those arising in AI and probabilistic verification.
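The "more targeted questions" mentioned above can be thought of as a conditional-sampling oracle: instead of drawing from the full distribution, the tester asks for a draw restricted to a chosen subset of outcomes. A minimal Python sketch of this idea follows; the function names and the example uniform distribution over eight outcomes are illustrative assumptions, not part of the talk's actual construction.

```python
import random

def sample(dist):
    """Draw one outcome from a distribution given as {outcome: probability}."""
    outcomes = list(dist)
    weights = [dist[o] for o in outcomes]
    return random.choices(outcomes, weights=weights, k=1)[0]

def conditional_sample(dist, subset):
    """Draw from dist restricted (and renormalized) to the given subset.

    This models the conditional-sampling oracle: the tester names a
    subset of possibilities and receives a sample only from within it.
    """
    restricted = {o: p for o, p in dist.items() if o in subset}
    total = sum(restricted.values())
    if total == 0:
        raise ValueError("subset has zero probability under dist")
    return sample({o: p / total for o, p in restricted.items()})

# A hypothetical sampler claimed to be uniform over {0, ..., 7}.
claimed = {i: 1 / 8 for i in range(8)}

# A tester can probe just two outcomes at a time: under the uniform
# claim, each should appear about half the time within the pair.
draw = conditional_sample(claimed, {2, 5})
assert draw in {2, 5}
```

The design point is that pairwise (or small-subset) comparisons like this let a tester detect deviations between two outcomes directly, rather than waiting for enough unrestricted samples to cover a huge outcome space.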
Sourav Chakraborty is a Professor in the Advanced Computing and Microelectronics Unit (ACMU) within the Computer and Communication Sciences Division (CCSD) at the Indian Statistical Institute (ISI), Kolkata, India. He previously served as a faculty member at Chennai Mathematical Institute, India. He has also held postdoctoral positions in the Algorithms and Complexity group at Centrum Wiskunde & Informatica (CWI) in Amsterdam, Netherlands, and in the Computer Science Department at Technion, Israel. He earned his PhD in Computer Science from the University of Chicago in 2005.
Event Details
Mode: In-Person
