Google AI chief on 2020 machine learning trends: big breakthroughs ahead in multitask and multimodal learning

Published by 曼孚科技 on 2019-12-16

At the NeurIPS conference held last week in Vancouver, Canada, machine learning took center stage.

Roughly 13,000 researchers from around the world gathered to discuss topics including neuroscience, how to interpret the outputs of neural networks, and how artificial intelligence can help solve major real-world problems.

During the conference, Google AI lead Jeff Dean sat down for an interview with VentureBeat and shared his views on machine learning trends for 2020. In Dean's view:

In 2020, the field will see major breakthroughs in multitask learning and multimodal learning, and newly emerging devices will let machine learning models work more effectively.

Excerpts from the English transcript of the interview follow:


1. On AI chips

VentureBeat: What do you think are some of the things that in a post-Moore’s Law world people are going to have to keep in mind?

Jeff Dean: Well I think one thing that’s been shown to be pretty effective is specialization of chips to do certain kinds of computation that you want to do that are not completely general purpose, like a general-purpose CPU. So we’ve seen a lot of benefit from more restricted computational models, like GPUs or even TPUs, which are more restricted but really designed around what ML computations need to do. And that actually gets you a fair amount of performance advantage, relative to general-purpose CPUs. And so you’re then not getting the great increases we used to get in sort of the general fabrication process improving your year-over-year substantially. But we are getting significant architectural advantages by specialization.



2. On machine learning

VentureBeat: You also got a little into the use of machine learning for the creation of machine learning hardware. Can you talk more about that?

Jeff Dean: Basically, right now in the design process you have design tools that can help do some layout, but you have human placement and routing experts work with those design tools to kind of iterate many, many times over. It’s a multi-week process to actually go from the design you want to actually having it physically laid out on a chip with the right constraints in area and power and wire length and meeting all the design rules of whatever fabrication process you’re doing.

So it turns out that we have early evidence in some of our work that we can use machine learning to do much more automated placement and routing. And we can essentially have a machine learning model that learns to play the game of ASIC placement for a particular chip.

Dean added that this approach has already produced promising results on some of the chips Google has been experimenting with internally.
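Dean's "game of ASIC placement" can be pictured as a sequential decision problem: place one cell per move, and score the finished layout by wirelength. The toy below sketches only that environment loop, with a greedy rule standing in where a learned policy would go; the grid size, cell names, and nets are invented for illustration and have nothing to do with Google's actual system.

```python
import itertools
import random

def wirelength(placement, nets):
    """Half-perimeter wirelength (HPWL) summed over all fully placed nets."""
    total = 0
    for net in nets:
        if not all(c in placement for c in net):
            continue  # skip nets whose cells are not all placed yet
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def greedy_place(cells, nets, grid=4, episodes=200, seed=0):
    """Play the placement 'game': drop cells onto a grid one at a time,
    each move taking the free slot that minimizes wirelength so far.
    In an RL formulation, this greedy rule is what a learned policy
    would replace."""
    rng = random.Random(seed)
    slots = list(itertools.product(range(grid), range(grid)))
    best_cost, best_placement = None, None
    for _ in range(episodes):
        order = rng.sample(cells, len(cells))  # random move order per episode
        placement, free = {}, set(slots)
        for cell in order:
            def score(slot, cell=cell):
                trial = dict(placement)
                trial[cell] = slot
                return wirelength(trial, nets)
            chosen = min(free, key=score)
            placement[cell] = chosen
            free.remove(chosen)
        cost = wirelength(placement, nets)
        if best_cost is None or cost < best_cost:
            best_cost, best_placement = cost, placement
    return best_cost, best_placement

# Four cells connected in a ring; the optimum is a 2x2 square (HPWL 4).
cells = ["a", "b", "c", "d"]
nets = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
cost, placement = greedy_place(cells, nets)
```

Random restarts over move orders play the role of exploration here; a real system would instead train a policy (and handle routing, congestion, power, and timing, none of which this toy models).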


3. On challenges for Google

VentureBeat: What do you feel are some of the technical or ethical challenges for Google in the year ahead?

Jeff Dean: In terms of AI or ML, we’ve done a pretty reasonable job of getting a process in place by which we look at how we’re using machine learning in different product applications and areas consistent with the AI principles. That process has gotten better-tuned and oiled with things like model cards and things like that. I’m really happy to see those kinds of things. So I think those are good and emblematic of what we should be doing as a community.

And then I think in the areas of many of the principles, there [are] real open research directions. Like, we have kind of the best known practices for helping with fairness and bias and machine learning models or safety or privacy. But those are by no means solved problems, so we need to continue to do longer-term research in these areas to progress the state of the art while we currently apply the best known state-of-the-art techniques to what we do in an applied setting.

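The "model cards" Dean mentions are structured summaries published alongside a model. As a rough sketch, a card can be represented as plain structured data plus a check for unfilled sections; the section names loosely follow the published model-card proposal, while the model, metrics, and datasets below are invented placeholders.

```python
# A minimal model card as structured data. Every concrete value here
# (model name, numbers, dataset descriptions) is a made-up example.
model_card = {
    "model_details": {
        "name": "toy-sentiment-classifier",  # hypothetical model
        "version": "1.0",
        "type": "logistic regression over bag-of-words",
    },
    "intended_use": {
        "primary_uses": ["sentiment tagging of product reviews"],
        "out_of_scope": ["medical or legal decision making"],
    },
    "factors": ["review language", "product category"],
    "metrics": {"accuracy": 0.91, "false_positive_rate": 0.06},
    "evaluation_data": "held-out reviews, disaggregated by language",
    "ethical_considerations": "labels may encode annotator bias",
    "caveats_and_recommendations": "re-evaluate before new domains",
}

def missing_sections(card, required=("model_details", "intended_use",
                                     "metrics", "ethical_considerations")):
    """Report required sections the card fails to fill in."""
    return [s for s in required if not card.get(s)]
```

Making the card machine-readable like this lets a release process enforce that the documentation exists before a model ships, which is the "better-tuned and oiled" process angle.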


4. On AI trends

VentureBeat: What are some of the trends you expect to emerge, or milestones you think may be surpassed in 2020 in AI?

Jeff Dean: I think we’ll see much more multitask learning and multimodal learning, of sort of larger scales than has been previously tackled. I think that’ll be pretty interesting.

And I think there’s going to be a continued trend to getting more interesting on-device models — or sort of consumer devices, like phones or whatever — to work more effectively.

I think obviously AI-related principles-related work is going to be important. We’re a big enough research organization that we actually have lots of different thrusts we’re doing, so it’s hard to call out just one. But I think in general [we’ll be] progressing the state of the art, doing basic fundamental research to advance our capabilities in lots of important areas we’re looking at, like NLP or language models or vision or multimodal things. But also then collaborating with our colleagues and product teams to get some of the research that is ready for product application to allow them to build interesting features and products. And [we’ll be] doing kind of new things that Google doesn’t currently have products in but are sort of interesting applications of ML, like the chip design work we’ve been doing.

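The structural core of the multitask learning Dean predicts is a single shared representation feeding several task-specific heads. The dependency-free sketch below shows only that shape; the dimensions, weights, and two tasks are invented for illustration, not any real Google model.

```python
import math
import random

# One shared encoder feeding two task heads: one forward pass through
# the shared trunk serves both tasks.
random.seed(0)
D_IN, D_SHARED = 4, 3

shared_w = [[random.uniform(-1, 1) for _ in range(D_IN)]
            for _ in range(D_SHARED)]
head_reg = [random.uniform(-1, 1) for _ in range(D_SHARED)]  # task A: regression
head_cls = [random.uniform(-1, 1) for _ in range(D_SHARED)]  # task B: binary

def encode(x):
    """Shared representation, reused by every task head."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)))
            for row in shared_w]

def predict(x):
    h = encode(x)  # computed once through the shared trunk...
    reg = sum(w * hi for w, hi in zip(head_reg, h))    # ...used by task A
    logit = sum(w * hi for w, hi in zip(head_cls, h))  # ...and by task B
    return reg, 1.0 / (1.0 + math.exp(-logit))         # sigmoid for task B

reg_out, cls_prob = predict([1.0, 0.5, -0.5, 2.0])
```

Training would backpropagate both task losses into the shared trunk, which is where the transfer between tasks (and, with per-modality encoders, between modalities) comes from.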

Link to the original English interview:


Source: ITPUB blog, http://blog.itpub.net/69956378/viewspace-2668883/. Please credit the source when reprinting; unauthorized reproduction may be pursued legally.
