Decision making from sequential sampling, especially when more than two alternative choices are possible, requires appropriate stopping criteria to maximize accuracy under time constraints. Optimal stopping conditions have previously been investigated in models of human decision making. In this work, we show how the k-nearest neighbor classification algorithm from machine learning can be used as a mathematical framework for deriving a variety of novel sequential sampling models. We interpret these nearest neighbor models in the context of the diffusion decision model (DDM), and we compare them to exemplar-based models and to accumulator models such as race models and the leaky competing accumulator (LCA). Computational experiments show that the new models achieve significantly higher accuracy than these existing models under equivalent time constraints.
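To make the core idea concrete, the following is a minimal sketch, not the paper's actual model: sequential sampling in which each incoming observation casts a nearest-neighbor vote against a set of labeled prototypes, and sampling stops once one alternative accumulates a threshold number of votes (a race-style stopping rule). The prototype locations, the 1-D stimulus, and the `threshold` parameter are all illustrative assumptions.

```python
import random

# Hypothetical illustration (not the paper's exact model): a sequential
# sampler that labels each noisy observation by its nearest prototype
# and stops when one alternative reaches a vote threshold.

PROTOTYPES = {"A": 0.0, "B": 1.0, "C": 2.0}  # assumed 1-D class centers

def nn_vote(x):
    """Label of the prototype nearest to observation x (1-NN vote)."""
    return min(PROTOTYPES, key=lambda lbl: abs(x - PROTOTYPES[lbl]))

def sequential_knn_decide(sample, threshold=5, max_samples=1000):
    """Draw observations from `sample()` until one alternative has
    `threshold` nearest-neighbor votes; return (choice, n_samples)."""
    counts = {lbl: 0 for lbl in PROTOTYPES}
    for n in range(1, max_samples + 1):
        counts[nn_vote(sample())] += 1
        leader = max(counts, key=counts.get)
        if counts[leader] >= threshold:
            return leader, n
    # Time limit reached: return the current leader.
    return max(counts, key=counts.get), max_samples

rng = random.Random(0)
choice, n_samples = sequential_knn_decide(
    lambda: rng.gauss(0.0, 0.5),  # noisy evidence centered on class "A"
    threshold=5,
)
```

Raising `threshold` trades longer sampling for higher accuracy, which is the speed-accuracy trade-off that the stopping criteria in the paper are designed to optimize.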