Cherokee is an extremely low-resource language, meaning there is little parallel Cherokee-English data to train on. This makes it difficult to find data-efficient models and useful data augmentation methods for performant machine translation. We propose transfer learning for neural machine translation (NMT), using Inuktitut, a language with properties similar to Cherokee, as the parent language to improve BLEU scores. We attempt to improve a baseline NMT system by comparing subword-level and character-level embeddings, tuning the vocabulary size and the number of iterations per epoch of the parent model in transfer learning, and augmenting the data with copied monolingual data. In aggregate, we find a total improvement of 0.96 BLEU. Examining the model's performance across different sentence lengths, we find no relationship between sentence length and BLEU score.
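The copied-monolingual augmentation mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function name and the placeholder sentences are hypothetical, and it assumes the standard scheme of duplicating each target-language sentence onto the source side to form extra pseudo-parallel pairs.

```python
def augment_with_copies(parallel_pairs, monolingual_target):
    """Copied monolingual augmentation: each target-language sentence is
    duplicated onto the source side, producing (target, target) pseudo-parallel
    pairs that are appended to the true parallel data."""
    copied = [(sentence, sentence) for sentence in monolingual_target]
    return parallel_pairs + copied

# Tiny illustration with placeholder sentences (not real training data).
parallel = [("ᏣᎳᎩ ᎦᏬᏂᎯᏍᏗ", "Cherokee language")]
mono_en = ["Extra English sentence one.", "Extra English sentence two."]

augmented = augment_with_copies(parallel, mono_en)
print(len(augmented))  # → 3
```

The augmented corpus keeps the original parallel pairs and adds one identical-source-and-target pair per monolingual sentence, which exposes the decoder to more target-side text without requiring new translations.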