Pseudocode-to-code translation is an open field of research, with applications across a variety of disciplines. We approach the task of pseudocode-to-C++ translation with transformers and compare our results against earlier published results obtained with LSTMs. We use human-annotated C++ programs and their corresponding pseudocode, made available by previous work, which provide pairs of pseudocode and gold-code line translations. We frame our research problem as line-by-line pseudocode-to-code translation, decomposing whole-program translation into smaller pieces and allowing program synthesis to be treated as a search problem over candidate line translations. We experimented with different architectures, tokenizers, and input types. While a BERT-to-BERT encoder-decoder model trained under our time constraints was unable to produce syntactically correct translations, a pretrained BART model reached state-of-the-art results once fine-tuned on our dataset. Furthermore, we observed an additional benefit from feeding the model not only the pseudocode of the current line but also (1) the pseudocode of the N preceding lines and (2) the code of the N preceding lines, for N = 5 and 10. This leverages the transformers' ability to learn long-term dependencies and supports the hypothesis that cross-line context is relevant for the task at hand.
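
To make the input construction concrete, the sketch below shows one way the cross-line context could be assembled and passed to a pretrained BART model with the Hugging Face transformers library. The checkpoint name (facebook/bart-base), the separator string, and the helper function are illustrative assumptions, not the exact configuration used in our experiments.

```python
# Minimal sketch (assumptions: facebook/bart-base checkpoint, " <sep> " separator,
# and the N preceding pseudocode/code lines as context; not the exact experimental setup).
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def build_source(prev_pseudo, prev_code, cur_pseudo, n=5, sep=" <sep> "):
    """Join the N preceding pseudocode lines, the N preceding code lines,
    and the current pseudocode line into a single source string."""
    context = prev_pseudo[-n:] + prev_code[-n:]
    return sep.join(context + [cur_pseudo])

# Toy example: translate one line given two preceding lines of context.
prev_pseudo = ["read integer n", "create array a of size n"]
prev_code = ["int n; cin >> n;", "vector<int> a(n);"]
cur_pseudo = "read n integers into a"
gold_code = "for (int i = 0; i < n; i++) cin >> a[i];"

source = build_source(prev_pseudo, prev_code, cur_pseudo)
inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(gold_code, return_tensors="pt", truncation=True, max_length=128).input_ids

# Fine-tuning step: the standard seq2seq cross-entropy loss on the gold code line.
loss = model(**inputs, labels=labels).loss

# Inference: beam search yields candidate line translations for the downstream search.
candidates = model.generate(inputs.input_ids, num_beams=5,
                            num_return_sequences=5, max_length=128)
for cand in candidates:
    print(tokenizer.decode(cand, skip_special_tokens=True))
```

Returning several beam-search candidates per line matches the framing above, where whole-program synthesis is treated as a search over candidate line translations.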