Your Prediction Is Only As Good As Your Data
I have often seen software engineers and data scientists assume that they can keep increasing their prediction accuracy simply by improving their machine learning algorithm. Here, I want to approach the classification problem from a different angle: I suggest that data scientists first analyze the distribution of their data to measure how much information it actually contains. This gives an upper bound on how far the accuracy of a predictive algorithm can be improved and ensures that our optimization efforts are not wasted.
Information and Entropy
In information theory, mathematicians have developed useful measures such as entropy to quantify the amount of information in data. Let's think of a random coin with a head probability of 1%. If one flips this coin, she will collect more information when she sees a head (i.e. the rare event) than when she sees a tail (i.e. the more likely event). One can formulate the information content of a random event as the negative logarithm of the event's probability: I(x) = -log₂ P(x).
This captures the intuition described above. Mathematicians also formulated another measure, called entropy, which captures the average information in a random process, measured in bits. For a discrete random variable X, the entropy is: H(X) = -Σ P(x) log₂ P(x), where the sum runs over all possible outcomes x.
For the first example, let's assume we have a coin with P(H)=0% and P(T)=100%. We can compute the entropy of this coin as follows: H = -0 · log₂(0) - 1 · log₂(1) = 0 bits (using the standard convention that 0 · log₂(0) = 0).
For the second example, let's consider a coin where P(H)=1% and P(T)=1-P(H)=99%. Plugging in the numbers, one finds that the entropy of such a coin is: H = -0.01 · log₂(0.01) - 0.99 · log₂(0.99) ≈ 0.08 bits.
Finally, if the coin has P(H) = P(T) = 0.5 (i.e. a fair coin), its entropy is: H = -0.5 · log₂(0.5) - 0.5 · log₂(0.5) = 1 bit.
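To make these numbers concrete, here is a minimal Python sketch (standard library only; the helper name `coin_entropy` is ours, not from the original post) that reproduces the three entropy values above:

```python
import math

def coin_entropy(p_head):
    """Shannon entropy (in bits) of a coin with head probability p_head."""
    total = 0.0
    for p in (p_head, 1.0 - p_head):
        if p > 0:  # convention: 0 * log2(0) = 0
            total -= p * math.log2(p)
    return total

print(coin_entropy(0.0))   # 0.0   -> perfectly predictable coin
print(coin_entropy(0.01))  # ~0.08 -> nearly predictable coin
print(coin_entropy(0.5))   # 1.0   -> fair coin, maximum entropy
```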
Entropy and Predictability
So, what do these examples tell us? If we have a coin whose head probability is zero, the coin's entropy is zero, meaning that the average information in the coin is zero. This makes sense because flipping such a coin always comes up tails, so the prediction accuracy is 100%. In other words, when the entropy is zero, we have maximum predictability.
In the second example, the head probability is not zero but still very close to it, which again makes the coin very predictable and gives it a low entropy.
Finally, in the last example we have a 50/50 chance of seeing heads or tails, which maximizes the entropy and consequently minimizes the predictability. In other words, one can show that a fair coin has the maximum entropy of 1 bit, making any prediction no better than a random guess.
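One way to see this inverse relationship numerically is the small sketch below. It assumes (our assumption, not a claim from the original post) that the best possible predictor of a biased coin simply always guesses the more likely side, so its accuracy is max(p, 1 - p):

```python
import math

def coin_entropy(p):
    """Shannon entropy (in bits) of a coin with head probability p."""
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

def best_accuracy(p):
    """Accuracy of always predicting the more likely side of the coin."""
    return max(p, 1 - p)

for p in (0.0, 0.01, 0.25, 0.5):
    print(f"P(H)={p:4.2f}  entropy={coin_entropy(p):.2f} bits  "
          f"best accuracy={best_accuracy(p):.2f}")
```

As the entropy climbs from 0 to 1 bit, the best achievable accuracy falls from 100% to 50%, which is exactly the inverse relationship described above.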
Kullback–Leibler Divergence
As a last example, we show how ideas from information theory can be used to measure the distance between two probability distributions. Let's assume we are modeling two random processes by their pmfs, P(.) and Q(.). One can build on the entropy measure to compute the distance between the two pmfs as follows: D_KL(P || Q) = Σ P(x) log₂( P(x) / Q(x) ), where the sum runs over all outcomes x.
The distance function above is known as the KL divergence; it measures how far the distribution Q is from P (note that it is not symmetric, so it is not a true distance metric). The KL divergence can be very useful in various applications, such as NLP problems where we want to measure the distance between the distributions of two documents (e.g. modeled as bags of words).
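As an illustration, here is a short Python sketch (standard library only; the toy documents and helper names such as `kl_divergence` and `bag_of_words` are our own for this example) that computes the KL divergence between two small bag-of-words distributions:

```python
import math
from collections import Counter

def kl_divergence(p, q):
    """D_KL(P || Q) in bits for two {outcome: probability} dicts."""
    return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

def bag_of_words(text):
    """Turn a document into a normalized word-frequency distribution."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

doc_p = bag_of_words("data science is about data")
doc_q = bag_of_words("data science is about models and more data")

# Smooth Q so every word in the vocabulary has nonzero probability;
# otherwise a word seen only in P would make the divergence infinite.
vocab = set(doc_p) | set(doc_q)
eps = 1e-6
q_smoothed = {w: doc_q.get(w, 0.0) + eps for w in vocab}
norm = sum(q_smoothed.values())
q_smoothed = {w: v / norm for w, v in q_smoothed.items()}

print(kl_divergence(doc_p, q_smoothed))
```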
Wrap-up
In this post, we showed that entropy from information theory provides a way to measure how much information exists in a given dataset. We also highlighted the inverse relationship between entropy and predictability. This means one can use entropy to estimate an upper bound on the accuracy achievable for the prediction problem at hand.
Source: http://www.aioptify.com/informationbound.php