The Twitter commons have a credibility problem, and, in the age of “big data,” all problems require an elegant, algorithmic solution. Last week, a group of researchers at the Qatar Computing Research Institute (Q.C.R.I.) and the Indraprastha Institute of Information Technology (I.I.I.T.), in Delhi, India, released what could be a partial fix. Tweetcred, a new extension for the Chrome browser, bills itself as a “real-time, web-based system to assess credibility of content on Twitter.” When you install Tweetcred, it appends a “credibility ranking” to all of the tweets in your feed, when viewed on twitter.com. Each tweet’s rating, from one to seven, is represented by little blue starbursts next to the user’s name, almost like a Yelp rating. The program learns over time, and users can give tweets their own ratings to help it become more accurate.
Tweetcred is built on insights that researchers have gained from studying massive databases of tweets surrounding major news events. In 2012, the I.I.I.T. researcher Aditi Gupta analyzed more than thirty-five million tweets from fourteen major news events during 2011, ranging from the U.K. riots to Steve Jobs’s resignation to the uprising in Libya. Gupta wanted to see if she could use certain characteristics of a tweet to predict its credibility. Human analysts ranked the credibility of sample tweets in the database. Gupta then correlated the tweets’ scores with a number of variables to see what made a Credible Tweet: tweet length, whether the tweet included a U.R.L., the number of followers of the user who tweeted it, and so on.
She found, for example, that longer tweets were more credible, whereas tweets with swear words were, unsurprisingly, less credible. Tweets with pronouns were less credible because, Gupta writes, “Tweets that contain information or are reporting facts about the event are impersonal in nature.” From these results, Gupta developed an algorithm that could be used to automatically determine a tweet’s credibility, much as Google’s PageRank judges a Web site’s relative importance. Tweetcred pairs Gupta’s research with findings from a team that conducted a similar study of tweets surrounding the spread of rumors following the 2010 Chile earthquake. The Tweetcred algorithm uses forty-five different characteristics to calculate its credibility score.
- More Here
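To make the idea concrete, here is a minimal sketch, in Python, of a feature-based credibility scorer of the general kind described above. It is not the actual Tweetcred algorithm: the feature set, weights, thresholds, and word lists are invented for illustration, whereas the real system learns from human-labelled tweets and uses forty-five features.

```python
# Illustrative sketch only: hand-picked weights stand in for a model that would
# be trained on human-labelled tweets. Not the real Tweetcred algorithm.
import re

SWEAR_WORDS = {"damn", "hell", "crap"}          # placeholder list, purely illustrative
PRONOUNS = {"i", "me", "my", "you", "your", "we", "us", "our"}

def extract_features(text: str, follower_count: int) -> dict:
    """Compute a handful of the cues the studies correlate with credibility."""
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "length": len(text),
        "has_url": bool(re.search(r"https?://\S+", text)),
        "swear_count": sum(w in SWEAR_WORDS for w in words),
        "pronoun_count": sum(w in PRONOUNS for w in words),
        "follower_count": follower_count,
    }

def credibility_score(text: str, follower_count: int) -> int:
    """Map the features to a 1-7 rating with hand-picked, illustrative weights."""
    f = extract_features(text, follower_count)
    raw = (
        0.01 * min(f["length"], 140)                      # longer tweets scored as more credible
        + 1.5 * f["has_url"]                              # linking to a source helps
        - 1.0 * f["swear_count"]                          # swearing hurts
        - 0.5 * f["pronoun_count"]                        # personal tweets scored as less credible
        + 0.3 * min(f["follower_count"], 10_000) ** 0.5 / 100
    )
    return max(1, min(7, round(1 + raw)))                 # clamp to the 1-7 starburst scale

if __name__ == "__main__":
    print(credibility_score("Magnitude 6.1 quake near Santiago, details: http://example.com", 5200))
    print(credibility_score("omg i think my house is shaking?!", 80))
```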