This presentation investigates the application of manifold learning approaches to acoustic modeling in automatic speech recognition (ASR). Acoustic models in ASR are defined over high dimensional feature vectors, which can be represented by a graph whose nodes correspond to the feature vectors and whose edge weights describe the local relationships between them. This representation underlies manifold learning approaches, which assume that high dimensional feature representations lie on a low dimensional embedded manifold. A manifold based regularization framework is presented for deep neural network (DNN) training of tandem bottle-neck feature extraction networks for ASR. It is argued that this framework has the effect of preserving the underlying low dimensional manifold based relationships that exist among speech feature vectors within the hidden layers of the DNN. This is achieved by imposing manifold based locality preserving constraints on the outputs of the network. The ASR word error rates obtained using these networks are evaluated on speech in noise tasks and compared to those obtained using DNN bottle-neck networks trained without manifold constraints.
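To make the idea of manifold based locality preserving constraints concrete, the following is a minimal sketch of one possible training objective: the usual classification loss on the network output is augmented with a graph based penalty that pulls bottleneck activations of neighboring feature vectors together. All names here (BottleneckDNN, manifold_penalty, the layer sizes, and the weighting term lam) are illustrative assumptions for exposition, not the authors' actual implementation.

    # Illustrative sketch only: a locality-preserving regularizer added to a
    # cross-entropy loss for a tandem bottleneck network. Not the authors' code.
    import torch
    import torch.nn as nn

    class BottleneckDNN(nn.Module):
        """Tandem-style network with a low dimensional bottleneck layer."""
        def __init__(self, in_dim=440, hid_dim=1024, bn_dim=40, n_classes=2000):
            super().__init__()
            self.front = nn.Sequential(
                nn.Linear(in_dim, hid_dim), nn.Sigmoid(),
                nn.Linear(hid_dim, hid_dim), nn.Sigmoid(),
                nn.Linear(hid_dim, bn_dim),      # bottleneck features
            )
            self.back = nn.Sequential(nn.Sigmoid(), nn.Linear(bn_dim, n_classes))

        def forward(self, x):
            z = self.front(x)                    # bottleneck outputs
            return self.back(z), z

    def manifold_penalty(z, affinity):
        """Sum_{i,j} w_ij * ||z_i - z_j||^2 over a mini-batch, where w_ij
        encodes the local neighborhood graph built on the input features."""
        sq_dist = torch.cdist(z, z).pow(2)
        return (affinity * sq_dist).sum()

    def training_loss(model, x, targets, affinity, lam=1e-3):
        logits, z = model(x)
        ce = nn.functional.cross_entropy(logits, targets)
        return ce + lam * manifold_penalty(z, affinity)

In this sketch, affinity is a precomputed matrix of graph weights between the feature vectors in the mini-batch; the penalty term discourages the bottleneck layer from separating points that are close on the underlying manifold, while the cross-entropy term drives the usual discriminative training.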