Call for Papers | IJCNN 2019 Special Section: Submissions Now Open

Published by PaperWeekly on 2018-12-07



Special Section: Transferable neural models for language understanding


Language understanding, which deals with machine reading comprehension in forms such as question answering, machine translation, and spoken dialog, has long been an aspiration of the artificial intelligence community, but had only limited success until recently. The success of deep neural networks has sparked a resurgence of interest in applying them to language understanding. The most recent research aims to build deep neural network models that can serve various language understanding tasks, such as paraphrasing, question answering, machine translation, spoken dialog, and text categorization. However, these models are (1) data hungry, requiring large amounts of training data, and (2) task specific, making it hard to generalize a model built for one task to other related tasks. To address these problems, transfer learning has recently been applied to language understanding. Transfer learning is a learning paradigm that aims to apply knowledge gained while solving one problem to a different but related problem. It builds a neural model for one language understanding task with large training data, and then retrains that model for another task with small training data.
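The pretrain-then-retrain recipe described above can be illustrated with a toy sketch. This is not any particular published method: it uses plain NumPy logistic regression in place of a deep network, and all data and names here are synthetic and illustrative. A model is first trained on a large source task, and its weights are then used to initialize fine-tuning on a small, related target task.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, lr=0.5, steps=300):
    """Plain gradient-descent logistic regression, starting from w."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Source task: a large labelled dataset (2000 examples).
n_feat = 10
w_true = rng.normal(size=n_feat)
X_src = rng.normal(size=(2000, n_feat))
y_src = (X_src @ w_true > 0).astype(float)

# Step 1: pretrain on the data-rich source task.
w_src = train(X_src, y_src, np.zeros(n_feat))

# Target task: related to the source (slightly perturbed decision
# boundary) but with only 20 labelled examples.
X_tgt = rng.normal(size=(20, n_feat))
w_tgt_true = w_true + 0.1 * rng.normal(size=n_feat)
y_tgt = (X_tgt @ w_tgt_true > 0).astype(float)

# Step 2 (transfer): initialize from the pretrained weights and
# fine-tune briefly on the small target set.
w_transfer = train(X_tgt, y_tgt, w_src.copy(), steps=20)

# Baseline: the same brief training from scratch on the small set.
w_scratch = train(X_tgt, y_tgt, np.zeros(n_feat), steps=20)

# Evaluate both on held-out data from the source distribution.
X_test = rng.normal(size=(500, n_feat))
y_test = (X_test @ w_true > 0).astype(float)
acc = lambda w: ((sigmoid(X_test @ w) > 0.5) == y_test).mean()
print(f"transfer: {acc(w_transfer):.2f}  scratch: {acc(w_scratch):.2f}")
```

The point of the sketch is the initialization: starting fine-tuning from the pretrained weights lets the target model benefit from the source task's large dataset, which a model trained from scratch on 20 examples cannot.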


The IJCNN 2019 paper submission deadline is December 15, 2018, and notifications of acceptance will be sent by January 30, 2019.


Topics of interest include, but are not limited to: natural language understanding, reasoning and generation, deep learning, transfer learning, active learning, self-learning, domain adaptation, sequence-to-sequence learning, machine translation, paraphrasing, question answering, and information extraction.


Submit online:


https://ieee-cis.org/conferences/ijcnn2019/upload.php


Please select "S33: Transferable neural models for language understanding" as the Main research topic.


For further information, please contact Dr. Zhiwei Lin (z.lin@ulster.ac.uk).



You can now also find us on Zhihu

Search for "PaperWeekly" on the Zhihu home page

Click "Follow" to subscribe to our column



About PaperWeekly


PaperWeekly is an academic platform that recommends, interprets, discusses, and reports on cutting-edge AI papers. If you research or work in the AI field, tap "交流群" in the official account's menu and our assistant will add you to a PaperWeekly discussion group.


