I am working on a project that is a "fill in the blanks" model: you give it a sentence containing a blank, and it predicts the missing words.
So far I think a seq2seq model with attention should be able to achieve this. Can a pretrained LSTM be attached to the seq2seq model as the encoder and then fine-tuned on my data?
Also, I would greatly appreciate a critique of this approach: is seq2seq the right way to solve the "fill in the blanks" problem?
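For concreteness, here is roughly the architecture I have in mind — a minimal PyTorch sketch (the class name, dimensions, and checkpoint filename are my own assumptions, not from any library), where the encoder LSTM's weights could be loaded from a pretrained model and then fine-tuned:

```python
import torch
import torch.nn as nn

class Seq2SeqFillIn(nn.Module):
    """Sketch of a seq2seq model for fill-in-the-blank with dot-product attention."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim * 2, vocab_size)

    def forward(self, src, tgt):
        # Encode the full sentence containing the <blank> token.
        enc_out, state = self.encoder(self.embed(src))
        # Decode the tokens that fill the blank, starting from the encoder state.
        dec_out, _ = self.decoder(self.embed(tgt), state)
        # Dot-product attention over all encoder states: (B, T_dec, T_enc).
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))
        weights = torch.softmax(scores, dim=-1)
        context = torch.bmm(weights, enc_out)
        # Predict each blank token from decoder state + attended context.
        return self.out(torch.cat([dec_out, context], dim=-1))

model = Seq2SeqFillIn(vocab_size=100)
# To reuse a pretrained LSTM as the encoder (hypothetical checkpoint name),
# one would load its weights before fine-tuning end to end:
# model.encoder.load_state_dict(torch.load("pretrained_lstm.pt"))
```

The idea is that `src` is the sentence with the blank and `tgt` is the (teacher-forced) fill text; the attention step lets the decoder look back at the words around the blank when predicting each missing token.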
Thank you.