Original paper: https://cseweb.ucsd.edu/classes/fa17/cse29...
Deep Neural Networks for YouTube Recommendations
Paul Covington, Jay Adams, Emre Sargin
Google
Mountain View, CA
{pcovington, jka, msargin}@google.com
ABSTRACT
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
Keywords
recommender system; deep learning; scalability
1. INTRODUCTION
YouTube is the world's largest platform for creating, sharing and discovering video content. YouTube recommendations are responsible for helping more than a billion users discover personalized content from an ever-growing corpus of videos. In this paper we will focus on the immense impact deep learning has recently had on the YouTube video recommendations system.
Figure 1 illustrates the recommendations on the YouTube mobile app home.
Recommending YouTube videos is extremely challenging from three major perspectives:
Scale: Many existing recommendation algorithms proven to work well on small problems fail to operate on our scale. Highly specialized distributed learning algorithms and efficient serving systems are essential for handling YouTube's massive user base and corpus.
Freshness: YouTube has a very dynamic corpus with many hours of video uploaded per second. The recommendation system should be responsive enough to model newly uploaded content as well as the latest actions taken by the user. Balancing new content with well-established videos can be understood from an exploration/exploitation perspective.
Noise: Historical user behavior on YouTube is inherently difficult to predict due to sparsity and a variety of unobservable external factors. We rarely obtain the ground truth of user satisfaction and instead model noisy implicit feedback signals. Furthermore, metadata associated with content is poorly structured without a well defined ontology. Our algorithms need to be robust to these particular characteristics of our training data.
In conjunction with other product areas across Google, YouTube has undergone a fundamental paradigm shift towards using deep learning as a general-purpose solution for nearly all learning problems. Our system is built on Google Brain [4] which was recently open sourced as TensorFlow [1]. TensorFlow provides a flexible framework for experimenting with various deep neural network architectures using large-scale distributed training. Our models learn approximately one billion parameters and are trained on hundreds of billions of examples.
In contrast to the vast amount of research in matrix factorization methods [19], there is relatively little work using deep neural networks for recommendation systems. Neural networks are used for recommending news in [17], citations in [8] and review ratings in [20]. Collaborative filtering is formulated as a deep neural network in [22] and autoencoders in [18]. Elkahky et al. used deep learning for cross domain user modeling [5]. In a content-based setting, Burges et al. used deep neural networks for music recommendation [21].
The paper is organized as follows: A brief system overview is presented in Section 2. Section 3 describes the candidate generation model in more detail, including how it is trained and used to serve recommendations. Experimental results will show how the model benefits from deep layers of hidden units and additional heterogeneous signals. Section 4 details the ranking model, including how classic logistic regression is modified to train a model predicting expected watch time (rather than click probability). Experimental results will show that hidden layer depth is helpful as well in this situation. Finally, Section 5 presents our conclusions and lessons learned.
2. SYSTEM OVERVIEW
The overall structure of our recommendation system is illustrated in Figure 2. The system is comprised of two neural networks: one for candidate generation and one for ranking.
Figure 2: Recommendation system architecture demonstrating the "funnel" where candidate videos are retrieved and ranked before presenting only a few to the user.
The candidate generation network takes events from the user's YouTube activity history as input and retrieves a small subset (hundreds) of videos from a large corpus. These candidates are intended to be generally relevant to the user with high precision. The candidate generation network only provides broad personalization via collaborative filtering. The similarity between users is expressed in terms of coarse features such as IDs of video watches, search query tokens and demographics.
Presenting a few "best" recommendations in a list requires a fine-level representation to distinguish relative importance among candidates with high recall. The ranking network accomplishes this task by assigning a score to each video according to a desired objective function using a rich set of features describing the video and user.
The highest scoring videos are presented to the user, ranked by their score. The two-stage approach to recommendation allows us to make recommendations from a very large corpus (millions) of videos while still being certain that the small number of videos appearing on the device are personalized and engaging for the user. Furthermore, this design enables blending candidates generated by other sources, such as those described in an earlier work [3].
During development, we make extensive use of offline metrics (precision, recall, ranking loss, etc.) to guide iterative improvements to our system. However, for the final determination of the effectiveness of an algorithm or model, we rely on A/B testing via live experiments. In a live experiment, we can measure subtle changes in click-through rate, watch time, and many other metrics that measure user engagement. This is important because live A/B results are not always correlated with offline experiments.
3. CANDIDATE GENERATION
During candidate generation, the enormous YouTube corpus is winnowed down to hundreds of videos that may be relevant to the user. The predecessor to the recommender described here was a matrix factorization approach trained under rank loss [23]. Early iterations of our neural network model mimicked this factorization behavior with shallow networks that only embedded the user's previous watches. From this perspective, our approach can be viewed as a non-linear generalization of factorization techniques.
3.1 Recommendation as Classification
We pose recommendation as extreme multiclass classification where the prediction problem becomes accurately classifying a specific video watch $w_t$ at time $t$ among millions of videos $i$ (classes) from a corpus $V$ based on a user $U$ and context $C$,

$$P(w_t = i \mid U, C) = \frac{e^{v_i u}}{\sum_{j \in V} e^{v_j u}}$$
where $u \in \mathbb{R}^N$ represents a high-dimensional "embedding" of the user, context pair and the $v_j \in \mathbb{R}^N$ represent embeddings of each candidate video. In this setting, an embedding is simply a mapping of sparse entities (individual videos, users etc.) into a dense vector in $\mathbb{R}^N$.
The task of the deep neural network is to learn user embeddings $u$ as a function of the user's history and context that are useful for discriminating among videos with a softmax classifier. Although explicit feedback mechanisms exist on YouTube (thumbs up/down, in-product surveys, etc.), we use the implicit feedback [16] of watches to train the model, where a user completing a video is a positive example. This choice is based on the orders of magnitude more implicit user history available, allowing us to produce recommendations deep in the tail where explicit feedback is extremely sparse.
Efficient Extreme Multiclass
To efficiently train such a model with millions of classes, we rely on a technique to sample negative classes from the background distribution ("candidate sampling") and then correct for this sampling via importance weighting [10]. For each example the cross-entropy loss is minimized for the true label and the sampled negative classes. In practice several thousand negatives are sampled, corresponding to more than 100 times speedup over traditional softmax. A popular alternative approach is hierarchical softmax [15], but we weren't able to achieve comparable accuracy. In hierarchical softmax, traversing each node in the tree involves discriminating between sets of classes that are often unrelated, making the classification problem much more difficult and degrading performance.
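To make the training objective concrete, here is a minimal sketch using TensorFlow's built-in sampled softmax, which performs candidate sampling with an importance-weighted correction. The vocabulary size, embedding dimension and negative sample count are illustrative assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf

# Illustrative sizes, not the paper's exact settings.
NUM_VIDEOS = 1_000_000   # output vocabulary (classes)
EMBED_DIM = 256          # dimension of the user embedding u
NUM_SAMPLED = 5_000      # "several thousand" sampled negatives

# Output video embeddings v_j double as the softmax weights.
output_embeddings = tf.Variable(
    tf.random.truncated_normal([NUM_VIDEOS, EMBED_DIM], stddev=0.05))
output_biases = tf.Variable(tf.zeros([NUM_VIDEOS]))

def candidate_sampling_loss(user_embeddings, watched_video_ids):
    """Cross-entropy over the true label plus sampled negative classes.

    user_embeddings: [batch, EMBED_DIM], output of the last ReLU layer.
    watched_video_ids: [batch, 1] int64 IDs of the positive examples.
    """
    return tf.reduce_mean(tf.nn.sampled_softmax_loss(
        weights=output_embeddings,
        biases=output_biases,
        labels=watched_video_ids,
        inputs=user_embeddings,
        num_sampled=NUM_SAMPLED,
        num_classes=NUM_VIDEOS))
```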
At serving time we need to compute the most likely N classes (videos) in order to choose the top N to present to the user. Scoring millions of items under a strict serving latency of tens of milliseconds requires an approximate scoring scheme sublinear in the number of classes. Previous systems at YouTube relied on hashing [24] and the classifier described here uses a similar approach. Since calibrated likelihoods from the softmax output layer are not needed at serving time, the scoring problem reduces to a nearest neighbor search in the dot product space for which general purpose libraries can be used [12]. We found that A/B results were not particularly sensitive to the choice of nearest neighbor search algorithm.
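Conceptually, serving reduces to the dot-product search below. This brute-force numpy version is a sketch for clarity only; a production system would replace the exact scan with a sublinear approximate nearest-neighbor library, as noted above.

```python
import numpy as np

def top_n_videos(user_embedding, video_embeddings, n=100):
    """Return indices of the N videos with the highest dot product.

    user_embedding: [dim] vector u for this (user, context) pair.
    video_embeddings: [num_videos, dim] matrix whose rows are v_j.
    """
    scores = video_embeddings @ user_embedding        # [num_videos]
    top = np.argpartition(-scores, n)[:n]             # unordered top N
    return top[np.argsort(-scores[top])]              # sorted by score
```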
3.2 Model Architecture
Inspired by continuous bag of words language models [14], we learn high dimensional embeddings for each video in a fixed vocabulary and feed these embeddings into a feedforward neural network. A user's watch history is represented by a variable-length sequence of sparse video IDs which is mapped to a dense vector representation via the embeddings. The network requires fixed-sized dense inputs and simply averaging the embeddings performed best among several strategies (sum, component-wise max, etc.). Importantly, the embeddings are learned jointly with all other model parameters through normal gradient descent back-propagation updates. Features are concatenated into a wide first layer, followed by several layers of fully connected Rectified Linear Units (ReLU) [6]. Figure 3 shows the general network architecture with additional non-video watch features described below.
Figure 3: Deep candidate generation model architecture showing embedded sparse features concatenated with dense features. Embeddings are averaged before concatenation to transform variable sized bags of sparse IDs into fixed-width vectors suitable for input to the hidden layers. All hidden layers are fully connected. In training, a cross-entropy loss is minimized with gradient descent on the output of the sampled softmax. At serving, an approximate nearest neighbor lookup is performed to generate hundreds of candidate video recommendations.
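The Keras sketch below mirrors the averaging-then-concatenation pattern of Figure 3 under assumed feature widths; it illustrates the shape of the tower, not the production model.

```python
import tensorflow as tf

# Assumed sizes; the paper reports 1M-video/1M-token vocabularies
# with 256-float embeddings and bags of 50 recent watches/searches.
VOCAB_VIDEOS, VOCAB_TOKENS, EMBED_DIM, BAG = 1_000_000, 1_000_000, 256, 50

watches = tf.keras.Input(shape=(BAG,), dtype="int32")   # recent watch IDs
searches = tf.keras.Input(shape=(BAG,), dtype="int32")  # recent query tokens
dense = tf.keras.Input(shape=(8,))  # example age, geo, gender, ... (assumed width)

# Variable-sized bags are averaged into fixed-width vectors
# (padding handling is omitted for brevity).
watch_avg = tf.reduce_mean(
    tf.keras.layers.Embedding(VOCAB_VIDEOS, EMBED_DIM)(watches), axis=1)
search_avg = tf.reduce_mean(
    tf.keras.layers.Embedding(VOCAB_TOKENS, EMBED_DIM)(searches), axis=1)

# Wide first layer of concatenated features, then a ReLU tower.
x = tf.keras.layers.Concatenate()([watch_avg, search_avg, dense])
for units in (1024, 512, 256):
    x = tf.keras.layers.Dense(units, activation="relu")(x)

# The final 256-wide activation is the user embedding u, fed to the
# sampled softmax in training and to the ANN lookup at serving.
user_tower = tf.keras.Model([watches, searches, dense], x)
```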
3.3 Heterogeneous Signals
A key advantage of using deep neural networks as a generalization of matrix factorization is that arbitrary continuous and categorical features can be easily added to the model. Search history is treated similarly to watch history - each query is tokenized into unigrams and bigrams and each token is embedded. Once averaged, the user's tokenized, embedded queries represent a summarized dense search history. Demographic features are important for providing priors so that the recommendations behave reasonably for new users. The user's geographic region and device are embedded and concatenated. Simple binary and continuous features such as the user's gender, logged-in state and age are input directly into the network as real values normalized to [0,1].
“Example Age” Feature
Many hours worth of videos are uploaded each second to YouTube. Recommending this recently uploaded ("fresh") content is extremely important for YouTube as a product. We consistently observe that users prefer fresh content, though not at the expense of relevance. In addition to the first-order effect of simply recommending new videos that users want to watch, there is a critical secondary phenomenon of bootstrapping and propagating viral content [11].
Machine learning systems often exhibit an implicit bias towards the past because they are trained to predict future behavior from historical examples. The distribution of video popularity is highly non-stationary but the multinomial distribution over the corpus produced by our recommender will reflect the average watch likelihood in the training window of several weeks. To correct for this, we feed the age of the training example as a feature during training. At serving time, this feature is set to zero (or slightly negative) to reflect that the model is making predictions at the very end of the training window.

Figure 4 demonstrates the efficacy of this approach on an arbitrarily chosen video [26].
Figure 4: For a given video [26], the model trained with example age as a feature is able to accurately represent the upload time and time-dependent popularity observed in the data. Without the feature, the model would predict approximately the average likelihood over the training window.
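A minimal sketch of the "example age" feature follows; the field name and window endpoint are assumptions. The idea is that each training example carries its age relative to the end of the training window, while at serving the feature is zeroed so the model predicts as if at the very end of that window.

```python
# Illustrative endpoint of the training window (unix seconds).
TRAINING_WINDOW_END = 1_700_000_000

def example_age(event_timestamp_secs, serving=False):
    if serving:
        return 0.0  # or slightly negative, per the paper
    return float(TRAINING_WINDOW_END - event_timestamp_secs)
```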
3.4 Label and Context Selection
It is important to emphasize that recommendation often involves solving a surrogate problem and transferring the result to a particular context. A classic example is the assumption that accurately predicting ratings leads to effective movie recommendations [2]. We have found that the choice of this surrogate learning problem has an outsized importance on performance in A/B testing but is very difficult to measure with offline experiments.
Training examples are generated from all YouTube watches (even those embedded on other sites) rather than just watches on the recommendations we produce. Otherwise, it would be very difficult for new content to surface and the recommender would be overly biased towards exploitation. If users are discovering videos through means other than our recommendations, we want to be able to quickly propagate this discovery to others via collaborative filtering. Another key insight that improved live metrics was to generate a fixed number of training examples per user, effectively weighting our users equally in the loss function. This prevented a small cohort of highly active users from dominating the loss.
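A sketch of that equal per-user weighting, assuming a hypothetical cap value: every user contributes at most the same number of examples, so highly active users cannot dominate the loss.

```python
import random
from collections import defaultdict

MAX_EXAMPLES_PER_USER = 256  # assumed cap, not from the paper

def cap_per_user(examples):
    """examples: iterable of (user_id, example) pairs."""
    by_user = defaultdict(list)
    for user_id, ex in examples:
        by_user[user_id].append(ex)
    capped = []
    for exs in by_user.values():
        random.shuffle(exs)                      # unbiased subsample
        capped.extend(exs[:MAX_EXAMPLES_PER_USER])
    return capped
```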
Somewhat counter-intuitively, great care must be taken to withhold information from the classifier in order to prevent the model from exploiting the structure of the site and overfitting the surrogate problem. Consider as an example a case in which the user has just issued a search query for "taylor swift". Since our problem is posed as predicting the next watched video, a classifier given this information will predict that the most likely videos to be watched are those which appear on the corresponding search results page for "taylor swift". Unsurprisingly, reproducing the user's last search page as homepage recommendations performs very poorly. By discarding sequence information and representing search queries with an unordered bag of tokens, the classifier is no longer directly aware of the origin of the label.
Natural consumption patterns of videos typically lead to very asymmetric co-watch probabilities. Episodic series are usually watched sequentially and users often discover artists in a genre beginning with the most broadly popular before focusing on smaller niches. We therefore found much better performance predicting the user's next watch, rather than predicting a randomly held-out watch (Figure 5). Many collaborative filtering systems implicitly choose the labels and context by holding out a random item and predicting it from other items in the user's history (5a). This leaks future information and ignores any asymmetric consumption patterns. In contrast, we "rollback" a user's history by choosing a random watch and only input actions the user took before the held-out label watch (5b).
Figure 5: Choosing labels and input context to the model is challenging to evaluate offline but has a large impact on live performance. Here, solid events (•) are input features to the network while hollow events (◦) are excluded. We found predicting a future watch (5b) performed better in A/B testing. In (5b), the example age is expressed as $t_{max} - t_N$ where $t_{max}$ is the maximum observed time in the training data.
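A sketch of that rollback-style label selection (Figure 5b), under the assumption that a watch history is simply a chronological list of video IDs:

```python
import random

def rollback_example(watch_history):
    """Build one (context, label) pair by "rolling back" a history.

    watch_history: list of video IDs in chronological order. A random
    watch becomes the held-out label and only the actions that
    happened before it are kept as input.
    """
    if len(watch_history) < 2:
        return None
    label_index = random.randrange(1, len(watch_history))
    context = watch_history[:label_index]  # strictly past actions
    label = watch_history[label_index]     # the future watch to predict
    return context, label
```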
3.5 Experiments with Features and Depth
Adding features and depth significantly improves precision on holdout data as shown in Figure 6. In these experiments, a vocabulary of 1M videos and 1M search tokens were embedded with 256 floats each in a maximum bag size of 50 recent watches and 50 recent searches. The softmax layer outputs a multinomial distribution over the same 1M video classes with a dimension of 256 (which can be thought of as a separate output video embedding). These models were trained until convergence over all YouTube users, corresponding to several epochs over the data. Network structure followed a common "tower" pattern in which the bottom of the network is widest and each successive hidden layer halves the number of units (similar to Figure 3). The depth zero network is effectively a linear factorization scheme which performed very similarly to the predecessor system. Width and depth were added until the incremental benefit diminished and convergence became difficult; the configurations, sketched in code after the list below, were:
- Depth 0: A linear layer simply transforms the concatenation layer to match the softmax dimension of 256
- Depth 1: 256 ReLU
- Depth 2: 512 ReLU→256 ReLU
- Depth 3: 1024 ReLU→512 ReLU→256 ReLU
- Depth 4: 2048 ReLU→1024 ReLU→512 ReLU→256 ReLU
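The ladder above can be expressed as a small builder function; the input width is an assumption for illustration.

```python
import tensorflow as tf

def build_tower(depth, input_dim=1024):
    """Build the ReLU tower for a given depth from the list above.

    Each added layer doubles the width at the bottom of the tower;
    depth 0 is just a linear projection to the 256-wide softmax input.
    """
    widths = [2048, 1024, 512, 256][4 - depth:] if depth > 0 else []
    inputs = tf.keras.Input(shape=(input_dim,))
    x = inputs
    for w in widths:
        x = tf.keras.layers.Dense(w, activation="relu")(x)
    if depth == 0:
        x = tf.keras.layers.Dense(256)(x)  # linear, no activation
    return tf.keras.Model(inputs, x)
```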
Figure 6: Features beyond video embeddings improve holdout Mean Average Precision (MAP) and layers of depth add expressiveness so that the model can effectively use these additional features by modeling their interaction.
4. RANKING
The primary role of ranking is to use impression data to specialize and calibrate candidate predictions for the particular user interface. For example, a user may watch a given video with high probability generally but is unlikely to click on the specific homepage impression due to the choice of thumbnail image. During ranking, we have access to many more features describing the video and the user's relationship to the video because only a few hundred videos are being scored rather than the millions scored in candidate generation. Ranking is also crucial for ensembling different candidate sources whose scores are not directly comparable.
We use a deep neural network with similar architecture as candidate generation to assign an independent score to each video impression using logistic regression (Figure 7). The list of videos is then sorted by this score and returned to the user. Our final ranking objective is constantly being tuned based on live A/B testing results but is generally a simple function of expected watch time per impression. Ranking by click-through rate often promotes deceptive videos that the user does not complete ("clickbait") whereas watch time better captures engagement [13, 25].
Figure 7: Deep ranking network architecture depicting embedded categorical features (both univalent and multivalent) with shared embeddings and powers of normalized continuous features. All layers are fully connected. In practice, hundreds of features are fed into the network.
4.1 Feature Representation
Our features are segregated with the traditional taxonomy of categorical and continuous/ordinal features. The categorical features we use vary widely in their cardinality - some are binary (e.g. whether the user is logged-in) while others have millions of possible values (e.g. the user's last search query). Features are further split according to whether they contribute only a single value ("univalent") or a set of values ("multivalent"). An example of a univalent categorical feature is the video ID of the impression being scored, while a corresponding multivalent feature might be a bag of the last N video IDs the user has watched. We also classify features according to whether they describe properties of the item ("impression") or properties of the user/context ("query"). Query features are computed once per request while impression features are computed for each item scored.
Feature Engineering
We typically use hundreds of features in our ranking models, roughly split evenly between categorical and continuous. Despite the promise of deep learning to alleviate the burden of engineering features by hand, the nature of our raw data does not easily lend itself to be input directly into feedforward neural networks. We still expend considerable engineering resources transforming user and video data into useful features. The main challenge is in representing a temporal sequence of user actions and how these actions relate to the video impression being scored.
We observe that the most important signals are those that describe a user's previous interaction with the item itself and other similar items, matching others' experience in ranking ads [7]. As an example, consider the user's past history with the channel that uploaded the video being scored - how many videos has the user watched from this channel? When was the last time the user watched a video on this topic? These continuous features describing past user actions on related items are particularly powerful because they generalize well across disparate items. We have also found it crucial to propagate information from candidate generation into ranking in the form of features, e.g. which sources nominated this video candidate? What scores did they assign?
Features describing the frequency of past video impressions are also critical for introducing "churn" in recommendations (successive requests do not return identical lists). If a user was recently recommended a video but did not watch it then the model will naturally demote this impression on the next page load. Serving up-to-the-second impression and watch history is an engineering feat onto itself outside the scope of this paper, but is vital for producing responsive recommendations.
Embedding Categorical Features
Similar to candidate generation, we use embeddings to map sparse categorical features to dense representations suitable for neural networks. Each unique ID space ("vocabulary") has a separate learned embedding with dimension that increases approximately proportional to the logarithm of the number of unique values. These vocabularies are simple look-up tables built by passing over the data once before training. Very large cardinality ID spaces (e.g. video IDs or search query terms) are truncated by including only the top N after sorting based on their frequency in clicked impressions. Out-of-vocabulary values are simply mapped to the zero embedding. As in candidate generation, multivalent categorical feature embeddings are averaged before being fed in to the network.
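A sketch of vocabulary construction for one ID space under these rules; the log-scale factor and initialization are assumptions.

```python
import math
import numpy as np

def build_vocab(id_counts, top_n):
    """Keep the top-N IDs by clicked-impression frequency.

    id_counts: dict of raw ID -> frequency. Index 0 is reserved
    for out-of-vocabulary values.
    """
    kept = sorted(id_counts, key=id_counts.get, reverse=True)[:top_n]
    return {raw_id: i + 1 for i, raw_id in enumerate(kept)}

def embedding_dim(vocab_size, scale=8):
    # Dimension grows roughly with the logarithm of the cardinality;
    # the scale factor here is an assumption.
    return int(scale * math.log2(vocab_size))

def make_embedding_table(vocab_size, dim):
    rng = np.random.default_rng(0)
    table = rng.normal(0.0, 0.05, size=(vocab_size + 1, dim))
    table[0] = 0.0  # out-of-vocabulary values map to the zero embedding
    return table
```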
Importantly, categorical features in the same ID space also share underlying embeddings. For example, there exists a single global embedding of video IDs that many distinct features use (video ID of the impression, last video ID watched by the user, video ID that "seeded" the recommendation, etc.). Despite the shared embedding, each feature is fed separately into the network so that the layers above can learn specialized representations per feature. Sharing embeddings is important for improving generalization, speeding up training and reducing memory requirements. The overwhelming majority of model parameters are in these high-cardinality embedding spaces - for example, one million IDs embedded in a 32 dimensional space have 7 times more parameters than fully connected layers 2048 units wide.
Normalizing Continuous Features
Neural networks are notoriously sensitive to the scaling and distribution of their inputs [9] whereas alternative approaches such as ensembles of decision trees are invariant to scaling of individual features. We found that proper normalization of continuous features was critical for convergence. A continuous feature $x$ with distribution $f$ is transformed to $\tilde{x}$ by scaling the values such that the feature is equally distributed in $[0,1)$ using the cumulative distribution, $\tilde{x} = \int_{-\infty}^{x} df$. This integral is approximated with linear interpolation on the quantiles of the feature values computed in a single pass over the data before training begins.
In addition to the raw normalized feature $\tilde{x}$, we also input powers $\tilde{x}^2$ and $\sqrt{\tilde{x}}$, giving the network more expressive power by allowing it to easily form super- and sub-linear functions of the feature. Feeding powers of continuous features was found to improve offline accuracy.
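A minimal sketch of this quantile normalization and the power features, assuming the quantiles are fit once over the training data:

```python
import numpy as np

def fit_quantiles(values, num_quantiles=1000):
    """One pass over the data before training: store the quantiles."""
    return np.quantile(values, np.linspace(0.0, 1.0, num_quantiles))

def normalize(x, quantiles):
    """Approximate x~ = cumulative distribution of x via linear
    interpolation on the stored quantiles, mapping into [0, 1)."""
    return np.interp(x, quantiles, np.linspace(0.0, 1.0, len(quantiles)))

def continuous_inputs(x, quantiles):
    """Feed the normalized value together with its square and root."""
    x_tilde = normalize(x, quantiles)
    return np.stack([x_tilde, x_tilde ** 2, np.sqrt(x_tilde)], axis=-1)
```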
4.2 Modeling Expected Watch Time
Our goal is to predict expected watch time given training examples that are either positive (the video impression was clicked) or negative (the impression was not clicked). Positive examples are annotated with the amount of time the user spent watching the video. To predict expected watch time we use the technique of weighted logistic regression, which was developed for this purpose.
The model is trained with logistic regression under cross-entropy loss (Figure 7). However, the positive (clicked) impressions are weighted by the observed watch time on the video. Negative (unclicked) impressions all receive unit weight. In this way, the odds learned by the logistic regression are $\frac{\sum T_i}{N - k}$, where $N$ is the number of training examples, $k$ is the number of positive impressions, and $T_i$ is the watch time of the $i$-th impression. Assuming the fraction of positive impressions is small (which is true in our case), the learned odds are approximately $E[T](1 + P)$, where $P$ is the click probability and $E[T]$ is the expected watch time of the impression. Since $P$ is small, this product is close to $E[T]$. For inference we use the exponential function $e^x$ as the final activation function to produce these odds that closely estimate expected watch time.
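A sketch of this weighted logistic objective and the serving-time activation, using standard TensorFlow primitives (the tensor layout is assumed):

```python
import tensorflow as tf

def watch_time_weighted_loss(logits, clicked, watch_time):
    """Weighted logistic regression under cross-entropy loss.

    logits: [batch] raw model outputs; clicked: [batch] 0/1 floats;
    watch_time: [batch] seconds watched (meaningful for clicks only).
    Positive impressions are weighted by watch time, negatives by 1,
    so the learned odds approximate expected watch time.
    """
    weights = tf.where(clicked > 0.5, watch_time, tf.ones_like(watch_time))
    per_example = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=clicked, logits=logits)
    return tf.reduce_mean(weights * per_example)

def predicted_watch_time(logits):
    # Serving-time activation e^x emits the learned odds, which
    # closely estimate E[T] when the click probability is small.
    return tf.exp(logits)
```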
4.3 Experiments with Hidden Layers
Table 1 shows the results we obtained on next-day holdout data with different hidden layer configurations. The value shown for each configuration ("weighted, per-user loss") was obtained by considering both positive (clicked) and negative (unclicked) impressions shown to a user on a single page. We first score these two impressions with our model. If the negative impression receives a higher score than the positive impression, then we consider the positive impression's watch time to be mispredicted watch time. Weighted, per-user loss is then the total amount of mispredicted watch time as a fraction of total watch time over held-out impression pairs.
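A sketch of that evaluation metric, under the assumption that each held-out pair carries the two scores and the positive impression's watch time:

```python
def weighted_per_user_loss(pairs):
    """Evaluate held-out (positive, negative) impression pairs.

    pairs: list of (pos_score, neg_score, pos_watch_time) tuples.
    Watch time counts as mispredicted whenever the unclicked
    impression outscores the clicked one.
    """
    mispredicted = sum(t for pos, neg, t in pairs if neg >= pos)
    total = sum(t for _, _, t in pairs)
    return mispredicted / total if total else 0.0
```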
Table 1: Effects of wider and deeper hidden ReLU layers on watch time-weighted pairwise loss computed on next-day holdout data.
These results show that increasing the width of hidden layers improves results, as does increasing their depth. The trade-off, however, is server CPU time needed for inference. The configuration of a 1024-wide ReLU followed by a 512-wide ReLU followed by a 256-wide ReLU gave us the best results while enabling us to stay within our serving CPU budget.
For the 1024→512→256 model we tried only feeding the normalized continuous features without their powers, which increased loss by 0.2%. With the same hidden layer configuration, we also trained a model where positive and negative examples are weighted equally. Unsurprisingly, this increased the watch time-weighted loss by a dramatic 4.1%.
5. CONCLUSIONS
We have described our deep neural network architecture for recommending YouTube videos, split into two distinct problems: candidate generation and ranking.
Our deep collaborative filtering model is able to effectively assimilate many signals and model their interaction with layers of depth, outperforming previous matrix factorization approaches used at YouTube [23]. There is more art than science in selecting the surrogate problem for recommendations and we found classifying a future watch to perform well on live metrics by capturing asymmetric co-watch behavior and preventing leakage of future information. Withholding discriminative signals from the classifier was also essential to achieving good results - otherwise the model would overfit the surrogate problem and not transfer well to the homepage.
We demonstrated that using the age of the training example as an input feature removes an inherent bias towards the past and allows the model to represent the time-dependent behavior of popular videos. This improved offline holdout precision results and increased the watch time dramatically on recently uploaded videos in A/B testing.
Ranking is a more classical machine learning problem yet our deep learning approach outperformed previous linear and tree-based methods for watch time prediction. Recommendation systems in particular benefit from specialized features describing past user behavior with items. Deep neural networks require special representations of categorical and continuous features which we transform with embeddings and quantile normalization, respectively. Layers of depth were shown to effectively model non-linear interactions between hundreds of features.
Logistic regression was modified by weighting training examples with watch time for positive examples and unity for negative examples, allowing us to learn odds that closely model expected watch time. This approach performed much better on watch-time weighted ranking evaluation metrics compared to predicting click-through rate directly.
6. ACKNOWLEDGMENTS
The authors would like to thank Jim McFadden and Pranav Khaitan for valuable guidance and support. Sujeet Bansal, Shripad Thite and Radek Vingralek implemented key components of the training and serving infrastructure. Chris Berg and Trevor Walker contributed thoughtful discussion and detailed feedback.
7. REFERENCES
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] X. Amatriain. Building industrial-scale real-world recommender systems. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys '12, pages 7–8, New York, NY, USA, 2012. ACM.
[3] J. Davidson, B. Liebald, J. Liu, P. Nandy, T. Van Vleet, U. Gargi, S. Gupta, Y. He, M. Lambert, B. Livingston, and D. Sampath. The YouTube video recommendation system. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys '10, pages 293–296, New York, NY, USA, 2010. ACM.
[4] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. In NIPS, 2012.
[5] A. M. Elkahky, Y. Song, and X. He. A multi-view deep learning approach for cross domain user modeling in recommendation systems. In Proceedings of the 24th International Conference on World Wide Web, WWW '15, pages 278–288, New York, NY, USA, 2015. ACM.
[6] X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In G. J. Gordon and D. B. Dunson, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), volume 15, pages 315–323. Journal of Machine Learning Research - Workshop and Conference Proceedings, 2011.
[7] X. He, J. Pan, O. Jin, T. Xu, B. Liu, T. Xu, Y. Shi, A. Atallah, R. Herbrich, S. Bowers, and J. Q. Candela. Practical lessons from predicting clicks on ads at Facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising, ADKDD '14, pages 5:1–5:9, New York, NY, USA, 2014. ACM.
[8] W. Huang, Z. Wu, L. Chen, P. Mitra, and C. L. Giles. A neural probabilistic model for context based citation recommendation. In AAAI, pages 2404–2410, 2015.
[9] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
[10] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On using very large target vocabulary for neural machine translation. CoRR, abs/1412.2007, 2014.
[11] L. Jiang, Y. Miao, Y. Yang, Z. Lan, and A. G. Hauptmann. Viral video style: A closer look at viral videos on YouTube. In Proceedings of International Conference on Multimedia Retrieval, ICMR '14, pages 193:193–193:200, New York, NY, USA, 2014. ACM.
[12] T. Liu, A. W. Moore, A. Gray, and K. Yang. An investigation of practical approximate nearest neighbor algorithms. Pages 825–832. MIT Press, 2004.
[13] E. Meyerson. YouTube now: Why we focus on watch time. http://youtubecreator.blogspot.com/2012/08... Accessed: 2016-04-20.
[14] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013.
[15] F. Morin and Y. Bengio. Hierarchical probabilistic neural network language model. In AISTATS '05, pages 246–252, 2005.
[16] D. Oard and J. Kim. Implicit feedback for recommender systems. In Proceedings of the AAAI Workshop on Recommender Systems, pages 81–83, 1998.
[17] K. J. Oh, W. J. Lee, C. G. Lim, and H. J. Choi. Personalized news recommendation using classified keywords to capture user preference. In 16th International Conference on Advanced Communication Technology, pages 1283–1287, Feb 2014.
[18] S. Sedhain, A. K. Menon, S. Sanner, and L. Xie. AutoRec: Autoencoders meet collaborative filtering. In Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion, pages 111–112, New York, NY, USA, 2015. ACM.
[19] X. Su and T. M. Khoshgoftaar. A survey of collaborative filtering techniques. Advances in Artificial Intelligence, 2009:4, 2009.
[20] D. Tang, B. Qin, T. Liu, and Y. Yang. User modeling with neural network for review rating prediction. In Proc. IJCAI, pages 1340–1346, 2015.
[21] A. van den Oord, S. Dieleman, and B. Schrauwen. Deep content-based music recommendation. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2643–2651. Curran Associates, Inc., 2013.
[22] H. Wang, N. Wang, and D.-Y. Yeung. Collaborative deep learning for recommender systems. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, pages 1235–1244, New York, NY, USA, 2015. ACM.
[23] J. Weston, S. Bengio, and N. Usunier. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, 2011.
[24] J. Weston, A. Makadia, and H. Yee. Label partitioning for sublinear ranking. In S. Dasgupta and D. McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 181–189. JMLR Workshop and Conference Proceedings, May 2013.
[25] X. Yi, L. Hong, E. Zhong, N. N. Liu, and S. Rajan. Beyond clicks: Dwell time for personalization. In Proceedings of the 8th ACM Conference on Recommender Systems, RecSys '14, pages 113–120, New York, NY, USA, 2014. ACM.
[26] Zayn. Pillowtalk. https://www.youtube.com/watch?v=C3d6GntKbk.