The phenomenon of syntactic priming is well studied, but the mechanisms behind it are still under debate. In this study, we trained English-speaking participants on artificial language sequences containing dependencies that were either adjacent or non-adjacent. The participants then wrote completions for relative clause (RC) fragments. We found that participants who learned non-adjacent dependencies in the artificial language exhibited a bias toward high-attachment (non-adjacent) RC continuations, compared to participants in a control condition, who exhibited a low-attachment (adjacent) bias. Implications for theories of syntactic priming and its relation to implicit learning are discussed.