Katharine Turner --
Australian National University
Representing Vineyard Modules
10am, Sorbonne University, Room 2402 (Zoom Link)
Abstract: Time-series of persistence diagrams, more commonly known as vineyards, are a useful way to capture how multi-scale topological features vary over time. However, because the persistent homology is calculated and considered at each time step independently, we lose significant information about how the individual persistent homology classes evolve over time. A natural algebraic version of vineyards is a time series of persistence modules equipped with interleaving maps between the persistence modules at different time stamps. Let’s call this a vineyard module. I will set up the framework for representing a vineyard module via an indexed set of vines alongside a collection of matrices. Furthermore, I will outline an algorithmic way to transform the bases of the persistence modules at each time step within the vineyard module to make the matrices within this representation as simple as possible. With some reasonable assumptions (analogous to those in Cerf theory) on the vineyard modules, this simplified representation can be completely described (up to isomorphism) by the underlying vineyard and a vector of finite length. While this vector representation is not in general guaranteed to be unique, we can prove that it will always be zero when the vineyard module is isomorphic to a direct sum of vine modules. (paper)
Bio: Katharine Turner is a Senior Lecturer in the Mathematical Sciences Institute at the Australian National University (ANU). After completing her PhD at the University of Chicago in 2015, she did a postdoc at EPFL in a joint position between the Mathematical Statistics group and the Laboratory for Topology and Neuroscience, and in 2017 moved to ANU. Her research focuses on the theory and applications of algebraic topology for use in data science. In 2020 Katharine was awarded a prestigious Australian Research Council Discovery Early Career Fellowship for a project on “Statistical shape analysis using persistent homology”.
Nina Otter --
Queen Mary University of London
On the effectiveness of persistent homology
2pm, Sorbonne University, Room 2402 (Zoom Link)
Abstract: Persistent homology (PH) is, arguably, the most widely used method in Topological Data Analysis. In recent decades it has been successfully applied in a variety of settings, from predicting biomolecular properties and discriminating breast-cancer subtypes to classifying fingerprints and studying the morphology of leaves. At the same time, the reasons behind these successes are not yet well understood. We believe that for PH to remain relevant, there is a need to better understand why it is so successful. In this talk I will discuss recent work that tries to take a first step in this direction. The talk is based on joint work with Renata Turkes and Guido Montúfar. (paper)
Bio: Since July 2021, Nina Otter is a Lecturer (Assistant Professor) in Mathematics of Data Science at Queen Mary University of London. Before that, she was a CAM adjunct assistant professor at UCLA, and a postdoctoral fellow at the Max Planck Institute for Mathematics in the Sciences in Leipzig. She obtained a DPhil (PhD) in Mathematics from the University of Oxford in June 2018, and an MSc and BSc in Mathematics from ETH Zurich. In her research, she is interested in using methods from algebra, geometry and topology to study data. She is currently particularly interested in advancing our understanding of weather regimes using methods from topology.
Joshua Levine --
University of Arizona
Neural Representations for Volume Visualization
10am, Sorbonne University, Room 2402 (Zoom Link)
Abstract: In this talk, I will describe two projects, both joint work with collaborators at Vanderbilt University. The first project studies how generative neural models can be used to model the process of volume rendering scalar fields. We construct a generative adversarial network that learns the mapping from volume rendering parameters, such as viewpoint and transfer function, to the rendered image. In doing so, we can analyze the volume itself and provide new mechanisms for guiding the user in transfer function editing and exploring the space of possible images that can be volume rendered. Both our training process and applications are available on the web at https://github.com/matthewberger/tfgan. In the second part of my talk, I will explore a recent neural modeling approach for building compressive representations of volume data. This approach represents volumetric scalar fields as learned implicit functions, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressive function approximation. Combined with carefully quantizing network weights, we show that this approach yields highly compact representations that outperform state-of-the-art volume compression approaches. We study the impact of network design choices on compression performance, highlighting how conceptually simple network architectures are beneficial for a broad range of volumes. Our compression approach is hosted at https://github.com/matthewberger/neurcomp
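The compressive function approximation idea in the abstract can be illustrated with a deliberately simplified sketch: a tiny numpy MLP (not the architectures or training setup used in the neurcomp paper) fit to a 1D scalar field, using fewer network weights than field samples. All names and sizes here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an implicit neural representation for compression:
# fit f(x) -> scalar to a sampled field, with far fewer weights than samples.
rng = np.random.default_rng(0)
xs = np.linspace(-1, 1, 256).reshape(-1, 1)  # 256 "voxel" samples
field = np.sin(2 * xs)                       # the scalar field to compress

# Network 1 -> 8 -> 1 with tanh: 8 + 8 + 8 + 1 = 25 weights,
# roughly a 10x reduction versus storing the 256 samples directly.
W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(xs @ W1 + b1)                # forward pass
    pred = h @ W2 + b2
    err = pred - field
    # full-batch gradient descent on mean squared error
    gW2 = h.T @ err / len(xs); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = xs.T @ dh / len(xs); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(xs @ W1 + b1) @ W2 + b2 - field) ** 2).mean())
```

The decoder is the weight vector itself: evaluating the trained network at any point in the domain reconstructs the field, which is what makes quantizing the weights (as in the abstract) a compression scheme.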
Bio: Joshua A. Levine is an associate professor in the Department of Computer Science at University of Arizona. Prior to starting at Arizona in 2016, he was an assistant professor at Clemson University from 2012 to 2016, and before that a postdoctoral research associate at the University of Utah’s SCI Institute from 2009 to 2012. He is a recipient of the 2018 DOE Early Career award. He received his PhD in Computer Science from The Ohio State University in 2009 after completing BS degrees in Computer Engineering and Mathematics in 2003 and an MS in Computer Science in 2004 from Case Western Reserve University. His research interests include visualization, geometric modeling, topological analysis, mesh generation, vector fields, performance analysis, and computer graphics.
Henry Adams --
Colorado State University
The unreasonably effective interaction between math and applications:
A case study on persistence images
10am, Sorbonne University, Room 2300 (Zoom Link)
Abstract: Wigner described the unreasonable effectiveness of mathematics in the natural sciences: ideas from mathematics are unreasonably effective in advancing applications, and ideas from applications are unreasonably effective in advancing mathematics. We describe a case study on persistence images, a stable vector representation of persistent homology. If you combine topology with data, you get persistent homology. If you combine persistent homology with machine learning, you might get persistence landscapes or persistence images or a host of other options. The first attempt at persistence images was not stable (i.e. continuous), but by making them stable, their machine learning performance improves, as we will describe with examples ranging from materials science to biology. This ping-ponging behavior of injecting ideas from mathematics, then injecting ideas from applications, and so on, leads to robust applied tools and new mathematical questions. Joint work with Sofya Chepushtanova, Tegan Emerson, Eric Hanson, Michael Kirby, Francis Motta, Rachel Neville, Chris Peterson, Patrick Shipman, and Lori Ziegelmeier.
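A minimal sketch of the construction discussed in the talk, under simplified assumptions (a fixed grid and Gaussian kernel; parameter names are illustrative, not the reference implementation): each diagram point (birth, death) is mapped to (birth, persistence), weighted by a function that vanishes at zero persistence — this weighting is what restores stability, since points near the diagonal are exactly the unstable ones — and the weighted Gaussians are summed and sampled on a grid to produce a fixed-length vector for machine learning.

```python
import numpy as np

def persistence_image(diagram, res=20, sigma=0.1, extent=(0, 1, 0, 1)):
    """Sketch of a persistence image: (birth, death) pairs -> fixed-length vector.

    Points are moved to (birth, persistence) coordinates and weighted
    linearly by persistence, so near-diagonal (unstable) points contribute
    little; weighted Gaussians are then sampled on a res x res grid.
    """
    x0, x1, y0, y1 = extent
    X, Y = np.meshgrid(np.linspace(x0, x1, res), np.linspace(y0, y1, res))
    img = np.zeros((res, res))
    for birth, death in diagram:
        pers = death - birth
        w = pers / (y1 - y0)  # linear weight, zero on the diagonal
        img += w * np.exp(-((X - birth) ** 2 + (Y - pers) ** 2) / (2 * sigma ** 2))
    return img.ravel()        # same length for every diagram

# A long-lived feature dominates a near-diagonal one:
vec = persistence_image([(0.2, 0.9), (0.5, 0.55)])
```

Because every diagram maps to a vector of the same fixed length, the output can be fed directly to standard classifiers, which is what enables the materials-science and biology examples mentioned in the abstract.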
Bio: Henry Adams is an Associate Professor of Mathematics at Colorado State University. His research interests are in computational topology and geometry, quantitative topology, combinatorial topology, and topological data analysis. He has applied topology to problems arising in machine learning, computer vision, minimal sensing, collective motion models, and chemical energy landscapes. Professor Adams is the Executive Director of the Applied Algebraic Topology Research Network (AATRN).
Brian Summa --
Is Bigger Data Always Better?
4pm, Online (Zoom Link)
Abstract: Scientific datasets have continually grown in size, driven by the perceived need for higher fidelities to model or measure complex phenomena correctly. This trend comes at a high cost. It requires significant effort and resources to acquire, process, and store this ever-increasing collection of produced data. In this talk, I’ll discuss our ongoing efforts to reduce this cost through novel image acquisition, records and analyses of user behavior during exploration, and concise descriptors of data features.
Bio: Brian Summa is an Assistant Professor of Computer Science at Tulane University, where he is the head of the visualization and graphics lab. His research interests focus on large-scale imaging and data analysis.
Michael Aupetit --
Qatar Computing Research Institute
Visual Analytics, Machine Learning and Topological Models to support multidimensional data analysis
2pm, Sorbonne University, Room 24-25/405 (Zoom Link)
Abstract: I will give an overview of my work to support the analyst exploring and discovering patterns in multidimensional data. Topological Data Analysis (TDA) (including clustering) and Multidimensional Projection (MDP) techniques are core computational approaches for summarizing multidimensional (MD) data. But visualization is a crucial component linking the summarized data to the end-user who generates knowledge. It relies on finding the most efficient graphical encodings and interactions to support the analytical tasks, and on accounting for the perceptual and cognitive bottlenecks at the scale of the MD data. I will first present generative models for TDA, used to extract a summary of the MD data before visualizing it. Then, I will present MDP techniques and their distortions, which a new supervised MDP exploits to separate classes only if they do not overlap in the MD space. I will move on to visualization techniques, showing how visual enrichment can resolve some of the MDP distortions, and then show how the analytic process can start from scratch with interactive Voronoi treemaps. Finally, I will show how to scale the process with perceptual-data-driven visual quality measures, and discuss future research tracks.
Bio: Dr. Michaël Aupetit has worked at the Qatar Computing Research Institute since 2014. He is a Senior Scientist with the Social Computing group. Before joining QCRI, Michaël worked for ten years as a research scientist and senior expert in data mining and visual analytics at CEA LIST in Paris Saclay. He designed decision support systems to solve complex industrial problems in the health and security domains. Michaël initiated and co-organized five international workshops, including the first workshop on Topology Learning at NIPS 2007, and created and chaired the first Visualization and Computer-Human Interaction conference (VisCHI 2019) in the Middle East. He has been a PC member of the leading visualization conferences IEEE VIS and EuroVis. He has also reviewed hundreds of papers for top-tier journals and conferences, doing regular reviews for IEEE TVCG, Computer Graphics Forum, and Neurocomputing. He has contributed more than a hundred publications and holds 3 WO, 2 US, and 1 EP patents. He obtained the Habilitation for Research Supervision (HDR) in Computer Science from Paris 11 Orsay University in 2012, and the PhD degree in Industrial Engineering from Grenoble National Polytechnic Institute (INPG) in 2001.