Fine-tuning an LM pre-trained on 1BLM (the 1 Billion Word Benchmark) has performed well on QA and other NLP tasks. Why don’t we construct a gigantic text corpus drawn from novels, textbooks, news, webpages, and conversations, so that an LM trained on this dataset can generate an output (e.g. multiple sentences) conditioned on input sentences? Note that we don’t do fine-tuning here; we train the model on one giant dataset (not Seq2Seq, just an LM), and that’s it. 1BLM has no inter-sentence dependency, yet here each sample of a minibatch is a randomly sampled run of consecutive sentences. The model wouldn’t only do QA; it would also give an appropriate output according to whatever task the input describes. The samples are noisy, and the samples in question-and-answer format may be a much smaller fraction of the entire dataset. However, I believe the sheer size of the dataset would let the model generalize past these issues. Any feedback? A rough sketch of the training setup I have in mind is below.
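To make the setup concrete, here is a minimal, hedged sketch (not a claim about any particular model): all text is concatenated into one token stream, each minibatch example is a randomly chosen run of consecutive tokens, training is plain next-token prediction with no fine-tuning or Seq2Seq, and "conditioning on input sentences" is just prompting the trained LM. The toy corpus, whitespace tokenizer, and the names `TinyLM`, `get_batch`, and `generate` are all illustrative assumptions, and the tiny LSTM stands in for whatever large LM one would actually use.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "giant corpus": in practice this would be novels, news, webpages, dialogue, ...
corpus = "question : what is the capital of france ? answer : paris . " * 200
tokens = corpus.split()
vocab = sorted(set(tokens))
stoi = {w: i for i, w in enumerate(vocab)}
itos = {i: w for w, i in stoi.items()}
data = torch.tensor([stoi[w] for w in tokens], dtype=torch.long)

seq_len, batch_size = 32, 16

def get_batch():
    # Each minibatch example is a randomly sampled run of consecutive tokens,
    # so the model sees dependencies that cross sentence boundaries.
    starts = torch.randint(0, len(data) - seq_len - 1, (batch_size,)).tolist()
    x = torch.stack([data[s : s + seq_len] for s in starts])
    y = torch.stack([data[s + 1 : s + seq_len + 1] for s in starts])
    return x, y

class TinyLM(nn.Module):
    def __init__(self, vocab_size, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.LSTM(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # next-token logits at every position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Plain LM training: next-token prediction on random contiguous windows, nothing else.
for step in range(200):
    x, y = get_batch()
    logits = model(x)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Conditioned on input sentences" just means prompting the trained LM.
@torch.no_grad()
def generate(prompt, n_new=8):
    ids = [stoi[w] for w in prompt.split() if w in stoi]
    for _ in range(n_new):
        x = torch.tensor(ids[-seq_len:], dtype=torch.long).unsqueeze(0)
        next_id = model(x)[0, -1].argmax().item()  # greedy decoding for simplicity
        ids.append(next_id)
    return " ".join(itos[i] for i in ids)

print(generate("question : what is the capital of france ? answer :"))
```

On a real corpus the same loop would use a subword tokenizer, a much larger model, and sampling instead of greedy decoding, but the training objective itself would not change.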



Source: https://www.reddit.com/r//comments/8zkm91/d__lm_with_a__dataset_used_for_qa/
