Implementations of R-NET and Character-level Embeddings on SQuAD
While there have been many exciting developments in solving the SQuAD challenge in recent years, I decided to focus on the fundamentals in my final project. What better way to practice and reinforce classical deep learning concepts such as recurrent neural networks, convolutional networks, and self-attention than implementing R-NET with added character-level word embeddings? My experiments show that character-level embeddings enrich the model's understanding of word components and improve key evaluation metrics. My implementation of R-NET also provides an additional lift in performance on SQuAD 2.0. However, the experiments also highlight R-NET's limitations: it struggles to identify unanswerable questions, especially when similar phrases appear in both the question and the passage.
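To make the character-level embedding idea concrete, here is a minimal NumPy sketch (not the project's actual implementation) of the common character-CNN approach: each character is looked up in a small embedding table, a 1-D convolution slides over the character sequence, and max-pooling produces a fixed-size word vector that can capture sub-word patterns such as prefixes and suffixes. All names, dimensions, and the random weights are illustrative assumptions.

```python
import numpy as np

# Illustrative character-CNN word embedding (untrained random weights).
rng = np.random.default_rng(0)

CHARS = "abcdefghijklmnopqrstuvwxyz"
char_dim, num_filters, kernel = 8, 16, 3          # hypothetical sizes
char_emb = rng.standard_normal((len(CHARS), char_dim))    # char lookup table
filters = rng.standard_normal((num_filters, kernel, char_dim))

def char_word_embedding(word: str) -> np.ndarray:
    """Embed a word from its characters via 1-D conv + max-pool."""
    ids = [CHARS.index(c) for c in word.lower() if c in CHARS]
    seq = char_emb[ids]                           # (word_len, char_dim)
    if len(seq) < kernel:                         # pad short words
        seq = np.vstack([seq, np.zeros((kernel - len(seq), char_dim))])
    # all sliding windows of length `kernel`
    windows = np.stack([seq[i:i + kernel] for i in range(len(seq) - kernel + 1)])
    conv = np.einsum("wkd,fkd->wf", windows, filters)     # (num_windows, num_filters)
    return conv.max(axis=0)                       # max-pool over positions

vec = char_word_embedding("unanswerable")
print(vec.shape)  # (16,)
```

In R-NET-style models, a vector like this is typically concatenated with a pretrained word embedding (e.g. GloVe) before being fed to the encoder, which is what lets the model handle rare or misspelled words whose word-level embeddings are poor.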