The computational science and engineering community is awash with excitement over the prospect of using machine learning to learn low-dimensional models from data. Indeed, the field of model reduction has long sought to derive approximate low-dimensional representations of an underlying high-fidelity model. Model reduction has clear connections to machine learning, yet the two fields differ markedly in perspective: model reduction methods have grown from the computational science community, with a focus on reducing high-dimensional models that arise from physics-based modeling, whereas machine learning has grown from the computer science community, with a focus on creating low-dimensional models from black-box data streams. A large class of model reduction methods is projection-based; that is, they derive the low-dimensional approximation by projecting the original large-scale model onto a low-dimensional subspace, typically defined by a set of global basis vectors. In doing so, they draw on the foundational theories and numerical analysis tools of computational mechanics. This talk will discuss our approaches that blend the two perspectives---the rigor of a projection-based model reduction framework together with the convenience of a data-driven learning approach. In particular, we will present an approach that (1) analyzes the governing partial differential equations to identify variable transformations that reveal system structure, (2) postulates a projection-based reduced model in the transformed variables, and (3) learns the reduced model operators directly from simulation data using least-squares optimization. The method is demonstrated for nonlinear systems of partial differential equations arising in various aerospace engineering applications. Joint work with Boris Kramer, Alexandre Marques, Benjamin Peherstorfer, and Elizabeth Qian.