Computer vision has seen major success in learning to recognize objects from massive “disembodied” Web photo collections labeled by human annotators. Yet cognitive science tells us that perception develops in the context of acting in the world---and without intensive supervision. Meanwhile, many realistic vision tasks require not only categorizing a well-composed human-taken photo, but also actively deciding where to look in the first place. In the context of these challenges, we are exploring how machine perception benefits from anticipating the sights and sounds an agent will experience as a function of its own actions. Based on this premise, we introduce methods for learning to look around intelligently in novel environments, learning from video how to interact with objects, and perceiving audio-visual streams for both semantic and spatial context. Together, these are steps towards first-person perception, where interaction with the world is itself a supervisory signal.
Kristen Grauman is a Professor in the Department of Computer Science at the University of Texas at Austin and a Research Scientist at Facebook AI Research. Her research in computer vision and machine learning focuses on visual recognition and search. Before joining UT Austin in 2007, she received her Ph.D. at MIT. She is an AAAI Fellow, a Sloan Fellow, and a recipient of the NSF CAREER award, the ONR YIP award, the PECASE, the PAMI Young Researcher Award, and the 2013 IJCAI Computers and Thought Award. She and her collaborators were recognized with best paper awards at CVPR 2008, ICCV 2011, and ACCV 2016, and with a 2017 Helmholtz Prize “test of time” award. She served as a Program Chair of the Conference on Computer Vision and Pattern Recognition (CVPR) in 2015 and of Neural Information Processing Systems (NeurIPS) in 2018, and she currently serves as Associate Editor-in-Chief for the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
The topics, in order of the speakers, will include: Interfacing MATLAB and FEniCS---a few desperate attempts; Using the Kokkos library to write code that runs on both CPUs and GPUs; and Fast Gaussian elimination for sparse matrices.
Tendons are soft connective tissues that connect the muscular system to the skeleton. Tendons are abundant in the human body, and their primary function is to enable the transmission of mechanical power. These tissues are prone to overuse and disease. To understand the relationships between tendon function and disease, one needs a clear understanding of tendon's mechanical behaviors in a physiological context. Despite decades of studies on tendon, a comprehensive framework for studying tendon mechanics that addresses its inelastic mechanical response in relationship to its structure is missing.
The objective of this research was to analyze and model the inelastic mechanical behaviors of tendon, namely viscoelasticity, damage, and plastic deformation. We pursued this through four main aims: (1) developing a comprehensive and unifying modeling framework, reactive inelasticity (RIE), for modeling the major inelastic mechanical behaviors of tendon using the kinetics of molecular bonds; (2) evaluating damage and plastic deformation as the potential mechanisms of tendon softening behavior during axial loading, by using micro-mechanical experiments on tail tendon and RIE modeling; (3) evaluating the poroelastic parameters of tendon, particularly hydraulic permeability, using lateral osmotic loading and biphasic mixture finite element modeling; and (4) visualizing and quantifying the 3D microstructure of tendon using serial block-face SEM (SBF-SEM), which revealed several helical fibrillar structures, and providing a load-transfer model for interfibrillar load transfer.
This study is innovative in comprehensively addressing the mechanisms of inelasticity in tendon by separately identifying and modeling inelastic behaviors, providing a unifying theoretical explanation for the underlying mechanisms, elucidating novel structure-mechanics relationships, and calculating the inelastic mechanical properties. The outcomes of this study provide a novel understanding of tendon mechanics and its relationships to tendon's multiscale structure.
Babak Naghizadehsafa Safa received his PhD from the University of Delaware in 2019. He received his BS in Mechanical Engineering at Sharif University of Technology in Tehran, Iran. He developed the software ReactiveBond, a MATLAB implementation of the reactive inelasticity model for uniaxial deformations (https://github.com/BabakNSafa/ReactiveBond). His current research interests include: Experimental Tissue Mechanics, Tissue Damage Mechanics, Multi-scale Mechanics, Collagen Mechanics, Theoretical and Computational Inelasticity, Mixture Theory, and 3D Electron Microscopy.
Bayes’ rule provides an optimal framework to update the probability distribution of parameters given observation data and a parameter-to-observable map. The central tasks of Bayesian inversion are to sample from the posterior distribution and to compute statistics of quantities of interest. However, critical challenges arise when the parameter dimension is high and the parameter-to-observable map is expensive to evaluate, e.g., involving large-scale PDE solves. In this talk, I will introduce recent advances in computational methods that tackle these challenges by exploiting the geometry, smoothness, sparsity, intrinsic low-dimensionality, and low-rank properties of the posterior, and which are shown to be scalable with respect to the parameter dimension and the complexity of the map approximation.
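In the setting of the abstract, the Bayesian update can be written out explicitly. The symbols below (parameter m, data d, parameter-to-observable map F, covariances Γ) are chosen here for illustration and are not taken from the talk itself:

```latex
% Bayes' rule for the inverse problem: posterior ∝ likelihood × prior
\pi_{\mathrm{post}}(m \mid d)
  = \frac{\pi_{\mathrm{like}}(d \mid m)\,\pi_{\mathrm{prior}}(m)}
         {\int \pi_{\mathrm{like}}(d \mid m')\,\pi_{\mathrm{prior}}(m')\,dm'}
  \;\propto\; \pi_{\mathrm{like}}(d \mid m)\,\pi_{\mathrm{prior}}(m).

% With additive Gaussian noise d = F(m) + \eta, \eta \sim \mathcal{N}(0,\Gamma_{\mathrm{noise}}),
% and a Gaussian prior \mathcal{N}(m_{\mathrm{pr}},\Gamma_{\mathrm{pr}}),
% the negative log-posterior is a regularized data-misfit functional:
-\log \pi_{\mathrm{post}}(m \mid d)
  = \tfrac12 \,\| F(m) - d \|^{2}_{\Gamma_{\mathrm{noise}}^{-1}}
  + \tfrac12 \,\| m - m_{\mathrm{pr}} \|^{2}_{\Gamma_{\mathrm{pr}}^{-1}}
  + \mathrm{const}.
```

Each posterior sample or statistic requires evaluating F(m), i.e., a PDE solve, which is what makes the high-dimensional, expensive-map regime challenging.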
Based on ab initio calculations, the main features of a molecule with N atoms in its interaction with a bioreceptor (electronic structure, topology, and reactivity) are represented by an NxN matrix (for each populated conformation) -- the electron-conformational (EC) matrix of congruity (ECMC), whose matrix elements encode donor-acceptor properties, bond orders, and interatomic distances. Then, by comparison of the ECMCs of a series of compounds with a given biological activity (the training set), a smaller number of matrix elements common to all the active compounds (within tolerances) is revealed: the submatrix of activity (ECSA), or the numerical pharmacophore.
The presence of the ECSA in the ECMC of at least one of a molecule's populated conformations is a fully reliable necessary condition for the biological activity under consideration (within the completeness of the training set and the chosen limits of activity). Its determination is straightforward, fully computerized, and very fast. The application of this EC method shows that it works well in revealing possible biological (drug, odorant, toxic) activity of molecular systems, and it even allowed us to discover that in some cases the activity takes place in dimers. In comparison with existing statistical methods of pharmacophore identification, the EC method has none of their limitations and uses no a priori chosen arbitrary descriptors, which can lead to chance correlations and artifacts.
However, the presence of the ECSA, being a necessary condition of activity, is not a sufficient condition. To reveal the latter, we developed a parametrized procedure that takes into account the presence of anti-pharmacophore shielding and other groups that compete with the pharmacophore in the interaction with the bioreceptor. Together with handling the conformations and the dependence of the activity on tolerances and substrate-enzyme bonding energies, we obtained a general formula for predicting numerical activities in reasonable agreement with experimental data [for some details, see, e.g., I. B. Bersuker, J. Comput.-Aided Mol. Des. 22 (2008) 423-430; DOI 10.1007/s10822-008-9191-x].
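The core computational step of the abstract -- testing whether the ECSA appears, within tolerances, inside the ECMC of a conformation -- can be sketched as a tolerance-based submatrix search. The function below is an illustrative reconstruction, not the authors' code; the name `has_ecsa`, the brute-force search over atom selections, and the dense-matrix layout are assumptions:

```python
import itertools
import numpy as np

def has_ecsa(ecmc, ecsa, tol):
    """Check whether the activity submatrix (ECSA) occurs inside an
    electron-conformational matrix of congruity (ECMC).

    ecmc : (N, N) array for one populated conformation
    ecsa : (k, k) template of matrix elements common to active compounds
    tol  : (k, k) array of tolerances for each template element

    Tries every ordered selection of k atoms out of N and accepts if all
    k*k elements of the induced submatrix match the template within its
    tolerances (brute force, which is adequate for the small k typical
    of a pharmacophore).
    """
    n, k = ecmc.shape[0], ecsa.shape[0]
    for atoms in itertools.permutations(range(n), k):
        sub = ecmc[np.ix_(atoms, atoms)]  # submatrix on the chosen atoms
        if np.all(np.abs(sub - ecsa) <= tol):
            return True
    return False
```

Per the abstract, a `True` result is only the necessary condition; sufficiency then requires the parametrized procedure accounting for anti-pharmacophore groups.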
In this talk, we review some recent advances in the analysis and design of algebraic flux correction (AFC) schemes for hyperbolic problems. In contrast to most variational stabilization techniques, AFC approaches modify the standard Galerkin discretization in a way that provably guarantees the validity of discrete maximum principles for scalar conservation laws and invariant domain preservation for hyperbolic systems. The corresponding inequality constraints are enforced by adding diffusive fluxes, and bound-preserving antidiffusive corrections are performed to obtain nonlinear high-order approximations. After introducing the AFC methodology and the underlying theoretical framework in the context of continuous piecewise-linear finite element discretizations, we present some of the limiting techniques used in high-resolution AFC schemes. As an alternative to flux-corrected transport (FCT) algorithms, which apply limited antidiffusive corrections to bound-preserving low-order solutions, we propose a new limiting strategy based on the representation of these solutions as convex combinations of "bar states" satisfying physical and numerical admissibility conditions. Each antidiffusive flux is limited so as to guarantee that the associated bar state remains in the convex invariant set and preserves the appropriate local bounds. There is no free parameter, and the nonlinear discrete problem is well defined even in the steady-state limit. In a case study for the Euler equations of gas dynamics, we enforce local maximum principles for the density, velocity, and specific total energy, in addition to positivity preservation for the density and pressure. The results of numerical studies for standard test problems illustrate the ability of the methods under investigation to resolve steep gradients without generating spurious oscillations. In the last part of this talk, we discuss the design of AFC schemes for high-order finite elements.
The approaches to be explored are based on the use of Bernstein basis functions and partitioned finite element spaces.
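For a scalar model problem, the bar-state limiting idea described in the abstract can be sketched in a few lines. The step below is an illustrative reduction to 1D linear advection on a periodic mesh with lumped mass, not the speaker's general formulation; the function name, the choice of local bounds, and the simplified limiter are assumptions:

```python
import numpy as np

def afc_bar_state_step(u, nu):
    """One forward-Euler step for 1D linear advection u_t + a*u_x = 0
    (a > 0, periodic mesh, nu = a*dt/dx <= 1/2), sketching bar-state
    limiting for this scalar model problem.

    The low-order upwind update is a convex combination of nodal values
    and interface "bar states"; the antidiffusive flux that would recover
    the central (Galerkin) flux shifts the two bar states at each
    interface by +s and -s, and s is limited so both stay inside the
    local bounds shared by the two endpoint nodes.
    """
    um, up = np.roll(u, 1), np.roll(u, -1)   # u_{i-1}, u_{i+1}

    # Local bounds u_i^min, u_i^max over the stencil {i-1, i, i+1}.
    lo = np.minimum(np.minimum(um, u), up)
    hi = np.maximum(np.maximum(um, u), up)

    # Interface (i, i+1): admissible range = intersection of the bounds
    # of its two endpoint nodes.
    blo = np.maximum(lo, np.roll(lo, -1))
    bhi = np.minimum(hi, np.roll(hi, -1))

    # Raw antidiffusive shift toward the central flux, then limiting:
    # both corrected bar states u_i + s and u_i - s must lie in [blo, bhi].
    s_raw = -0.5 * (up - u)
    s = np.clip(s_raw, np.maximum(blo - u, u - bhi),
                       np.minimum(bhi - u, u - blo))

    # Limited update: upwind part plus limited antidiffusive fluxes.
    # Equivalently (1 - 2*nu)*u_i + nu*(u_i + s_i) + nu*(u_{i-1} - s_{i-1}),
    # a convex combination of values inside the local bounds.
    return (1.0 - nu) * u + nu * um + nu * (s - np.roll(s, 1))
```

Because the update is a convex combination of limited bar states, each new nodal value stays within its local bounds without any tunable parameter, mirroring (in miniature) the parameter-free property claimed in the abstract.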