Sparse tensor computations are central to important applications including computer-assisted drug design, fraud detection, and national security. Timely execution of these applications improves user productivity and reduces the energy consumed by each run. Sparse computations are characterized by inputs in which many or most values are zero. To avoid the inefficiency of storing and computing on zero-valued data, applications store only the nonzeros, along with auxiliary data structures that recover their locations. As a result, sparse tensor computations exhibit unpredictable memory-access patterns that involve indirection through these auxiliary data structures. Consequently, on today's computer architectures, the performance of sparse tensor computations is dominated by the movement of data through the memory system and across nodes. Data movement is expensive in both execution time and energy. Optimizing the data movement of sparse tensor computations as high-performance architectures grow increasingly diverse (conventional parallel architectures, graphics processors used as parallel accelerators, and complex memory systems) creates a performance and productivity challenge for software developers, who end up writing low-level, architecture-specific code for each platform. The proposed approach simultaneously optimizes how data is organized in memory, how the computation is structured to access that data in a way that reduces data movement, and how the computation and data movement exploit features of the hardware architecture. Since the nonzero structure of the data is unknown until program execution, the approach also incorporates runtime information into its decisions. The resulting co-optimization strategy enables a cohesive approach for iteratively making scheduling and data-representation transformation decisions for a wide range of sparse computations and for incorporating runtime adaptations.

This project is developing a programming framework that permits high-level specification of a sparse computation and optimizes it to reduce data movement. The framework composes data representations, data layouts and storage mappings, and parallel schedules for sparse computations. It employs data dependences, runtime information, and architecture features to fully bind the final generated code. This approach is intended to enable handling sparse tensor computations with dependences, such as sparse triangular solve and many other solvers for systems of linear equations; applying reorderings such as Morton ordering to sparse tensors; and late binding of sparse tensor data representations. The novel and most significant aspects of the research include: (1) composable schedule and data transformations, including data layout transformations and storage mapping; (2) inspector synthesis for runtime data transformations between data representations, layouts, and storage mappings, which are composed with external functions; (3) support for data-dependent tensor computations; and (4) framework abstractions deployed in the MLIR/LLVM compiler.

The researchers are strongly committed to broadening participation in computing and have comprehensive plans to engage underrepresented groups. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
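
As an illustration of the indirection described above, the sketch below shows sparse matrix-vector multiply over the common compressed sparse row (CSR) representation; CSR and the function name spmv_csr are illustrative choices, not formats or interfaces prescribed by the project. The auxiliary arrays row_ptr and col_idx recover the location of each stored nonzero, and the read x[col_idx[k]] is the data-dependent indirect access that makes the memory traffic of such kernels hard to predict.

#include <cstddef>
#include <vector>

// y = A * x for a sparse matrix A held in CSR form. Only the nonzero
// values are stored (vals); row_ptr and col_idx are the auxiliary data
// structures that recover each nonzero's row and column.
void spmv_csr(const std::vector<std::size_t>& row_ptr,
              const std::vector<std::size_t>& col_idx,
              const std::vector<double>& vals,
              const std::vector<double>& x,
              std::vector<double>& y) {
    for (std::size_t i = 0; i + 1 < row_ptr.size(); ++i) {
        double sum = 0.0;
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += vals[k] * x[col_idx[k]];  // indirection through col_idx
        y[i] = sum;
    }
}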
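
Sparse triangular solve, cited above as a computation with dependences, also illustrates why runtime inspection is needed: which rows can be solved concurrently depends on the nonzero structure, which is unknown until execution. The hand-written level-set inspector below is a minimal sketch of that idea, assuming a lower-triangular matrix in CSR form; the project's inspectors are synthesized by the compiler framework rather than written by hand like this.

#include <algorithm>
#include <cstddef>
#include <vector>

// Inspector: assign each row of a lower-triangular CSR matrix to a level
// such that a row depends only on rows in earlier levels. Rows within one
// level can then be solved in parallel by an executor.
std::vector<std::vector<std::size_t>> level_schedule(
        const std::vector<std::size_t>& row_ptr,
        const std::vector<std::size_t>& col_idx) {
    const std::size_t n = row_ptr.size() - 1;
    std::vector<std::size_t> level(n, 0);
    std::size_t max_level = 0;
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
            const std::size_t j = col_idx[k];
            if (j < i)  // dependence of row i on earlier row j
                level[i] = std::max(level[i], level[j] + 1);
        }
        max_level = std::max(max_level, level[i]);
    }
    std::vector<std::vector<std::size_t>> levels(max_level + 1);
    for (std::size_t i = 0; i < n; ++i)
        levels[level[i]].push_back(i);
    return levels;
}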
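
Finally, Morton ordering, mentioned above as one example reordering, places nonzero coordinates along a Z-shaped space-filling curve so that entries that are close in the index space are also stored close together. The bit-interleaving key below is a standard way to compute such an ordering; the helper name morton2d is illustrative and not part of the project's interface.

#include <cstdint>

// Interleave the bits of (row, col) to form a 2-D Morton (Z-order) key.
// Sorting nonzero coordinates by this key groups spatially nearby entries.
std::uint64_t morton2d(std::uint32_t row, std::uint32_t col) {
    auto spread = [](std::uint64_t v) {
        v &= 0xFFFFFFFFull;
        v = (v | (v << 16)) & 0x0000FFFF0000FFFFull;
        v = (v | (v << 8))  & 0x00FF00FF00FF00FFull;
        v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0Full;
        v = (v | (v << 2))  & 0x3333333333333333ull;
        v = (v | (v << 1))  & 0x5555555555555555ull;
        return v;
    };
    return (spread(row) << 1) | spread(col);
}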