What To Ask Jack?
A Rundown on Issues Surrounding Twitter

2018 was the year social media executives took the hot seat. Executives from Facebook, Twitter, and Google appeared in front of the US Congress for gruelling sessions. They were held answerable for a number of issues surrounding social media, including Russian election interference, political bias and user privacy concerns.
2019 is not offering any respite.
On February 5, the Indian Parliamentary Standing Committee on Information Technology, led by BJP MP Anurag Thakur, summoned Twitter CEO Jack Dorsey for a hearing scheduled for February 11. This came after two weeks of strong protests by right-wing activists accusing the company of political bias. Executives from the company’s Silicon Valley headquarters did not appear, citing short notice, and instead published a separate blog post addressing the issue. This did not sit well with the committee, and the hearing has been rescheduled to February 25.
The issues plaguing Twitter in India are no different from those it faces in the US, or any other country for that matter. Here are the most pressing ones, along with the measures Twitter has taken to address them and suggestions for the Standing Committee on what to ask Jack. While this article focuses on Twitter, most of these issues apply to all social media platforms. It is surprising that the Standing Committee did not summon executives from Facebook, which also owns Instagram and WhatsApp and has a far greater number of users.
My rundown is based on Jack’s Congressional appearances in September 2018, his interviews with NYU professor Jay Rosen and Concordia University professor Gad Saad, his podcasts with Joe Rogan and Sam Harris, and his Twitter exchange with business technology journalist Kara Swisher.
Political Bias
The first issue that the Standing Committee will address is political bias. The accusations against Twitter are that tweets by right-wing users are censored or hidden, a practice known as shadow banning, and that right-wing users are more likely to have their accounts suspended.
There are reasons why right-wing users might feel this way. Twitter does rank tweets and replies, but not based on political factors as some conservatives have come to believe. One important measure Twitter took to combat spamming is down-ranking replies from accounts it deems suspicious. Twitter’s algorithms detect whether the same user is tweeting across multiple accounts, is repeatedly blocked or reported for harassment, tweets at high velocity, or engages in a similar undesirable pattern. The algorithm is not designed to judge the content of a tweet. Rather, it focuses on the conduct of the tweeter and decides how far down the tweet should appear among replies. Down-ranking also does not affect tweets on the timeline, because those come from people a user chose to follow.
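To make the idea concrete, here is a minimal sketch of conduct-based down-ranking. The signals and weights are entirely hypothetical (Twitter has never published its feature set); the point is only that the inputs describe an account’s behaviour, never the words in a tweet:

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Hypothetical conduct signals; Twitter's real features are not public."""
    handle: str
    report_count: int = 0          # times reported for harassment
    block_count: int = 0           # times blocked by other users
    tweets_per_hour: float = 0.0   # posting velocity
    cross_account_posting: bool = False  # same content across multiple accounts

def conduct_score(acct: Account) -> float:
    # Higher score = more suspicious conduct. Note: no tweet *content* is read.
    score = 2.0 * acct.report_count + 1.5 * acct.block_count
    if acct.tweets_per_hour > 30:      # arbitrary high-velocity threshold
        score += 5.0
    if acct.cross_account_posting:
        score += 5.0
    return score

def rank_replies(replies):
    """Sort (account, reply_text) pairs so well-behaved accounts float up."""
    return sorted(replies, key=lambda pair: conduct_score(pair[0]))
```

A reply from a heavily reported, high-velocity account would sort below one from an account with a clean record, regardless of what either reply says.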
Mike Rothschild, a researcher on conspiracy theories, carried out a hands-on study that explains why down-ranking affects users on the right side of the political spectrum more often than those on the left. To test this, Rothschild created a fake far-right account and started following as many right-wing accounts on Twitter as allowed, tweeting and retweeting information about President Trump. He found that his account gained followers quickly because of something called a “follow train”: a long string of tweets containing just usernames, which encourages users to follow every account on the train. While this did give him more followers, his tweets received little traction because Twitter gives more weight to original tweets and older accounts. More disturbingly, Rothschild found that a large amount of fake news was being circulated within this network.
There are certainly instances where genuine supporters with no malicious intent get grouped into this network. In fact, Twitter has acknowledged this in the past and has tweaked its algorithm accordingly. But there is also the concern that the algorithm is only as objective as its engineers. Jack has previously admitted that Twitter employees tend to lean towards the left side of the political spectrum. The implications of this are more serious when it comes to account suspension.
Account suspensions are carried out manually when someone is reported. The enforcers look at both the overall conduct of the reported user and the specific contents of the tweet. Conservatives tend to believe that their accounts are more likely to be suspended than those of liberals. While the enforcers follow a politically unbiased rulebook to decide whether an account has violated the Terms of Service, their implicit bias might creep in. To reduce the chances of this, Twitter has a global enforcement team. Jack acknowledged that there is still a chance of human error in this process, but said the company rectifies any errors promptly.
Jack and Twitter have repeatedly and unequivocally stated that they do not treat people differently on the basis of their political affiliation. Of course, their word alone is not enough to put the issue to rest. The Standing Committee will probably get the same explanation outlined here, but it can press Jack on why the human error rate is high and why the appeals process to get a suspension revoked is far from optimal. It can also request that the company increase the transparency behind the decisions it makes.
Election Interference By Malicious Foreign Actors
The issue that received the most attention in the last couple of years, and for good reason, appears to be the area in which Twitter has made the most progress. The 2016 Russian interference in the US election showed the world how social media could be used to manipulate voters and influence election results. Jack’s September testimony to the US Congress and his replies to the Senate Committee’s questions address the issue in depth and provide insightful statistics: nearly 50,000 automated accounts were Russian-linked, and an estimated $1.9 million was spent on promoted tweets by Russia Today, which ran two of the most active accounts tweeting election-related content. Despite the volume, these tweets had a relatively low reach in terms of impressions. Regardless, Twitter has taken a number of steps to prevent a recurrence:
- identifying suspicious accounts by tracking exceptionally high-volume tweeting with the same hashtag, or mentions of the same @ handle without a reply from the account being addressed
- increasing the use of challenges meant to identify whether a user is human or a bot
- developing a political conversations dashboard that flags sudden shifts in sentiment indicating potential coordinated malpractice
- verifying candidates running for office with a blue badge and adding labels below their names to give users more information on the candidate
- making it mandatory for people running political ads to self-identify and be present in the US
- launching an Ads Transparency Center that provides information on the ads run by a specific account and the targeting criteria used
- restricting the APIs to prevent them from being misused for data abuse
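The first detection heuristic in the list above can be sketched in a few lines. Everything here is illustrative: the thresholds are invented, the tweet representation is simplified, and the real "unreplied mention" signal is approximated by counting non-reply tweets that mention the same handle:

```python
from collections import Counter

def flag_suspicious(tweets, hashtag_threshold=50, mention_threshold=50):
    """Flag accounts that tweet the same hashtag, or mention the same handle
    outside of a reply thread, at exceptionally high volume.

    tweets: list of dicts like
      {"author": str, "hashtags": [str], "mentions": [str], "is_reply": bool}
    Thresholds are arbitrary; Twitter's actual rules are not public.
    """
    hashtag_counts = Counter()   # (author, hashtag) -> count
    mention_counts = Counter()   # (author, handle)  -> non-reply mention count
    for t in tweets:
        for h in t["hashtags"]:
            hashtag_counts[(t["author"], h)] += 1
        if not t["is_reply"]:
            for m in t["mentions"]:
                mention_counts[(t["author"], m)] += 1

    flagged = set()
    for (author, _), n in hashtag_counts.items():
        if n >= hashtag_threshold:
            flagged.add(author)
    for (author, _), n in mention_counts.items():
        if n >= mention_threshold:
            flagged.add(author)
    return flagged
```

An account pushing the same hashtag hundreds of times gets flagged, while an ordinary account using many different hashtags does not, which is why volume-per-hashtag rather than total volume is the signal.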
These steps have led to a 214% year-over-year increase in accounts removed for producing malicious and spam content. Despite these improvements, Russian- and Iranian-linked accounts have continued to emerge in significant numbers after 2016, tweeting on divisive social and political issues. While Twitter identified and suspended these accounts, the fact that these malicious actors can adapt their methods and find a way onto the platform, even briefly, is concerning. I qualified my analysis by saying Twitter “appears” to have made good progress because we do not have enough data or insights from any election after 2016 to be certain. When the Senate Intelligence Committee pointed out that malicious Russian activity occurred once again during the Catalan independence referendum in Spain, a year after the US elections, Jack replied that a number of “spammy” accounts were acted upon, but provided no further details. More recently, Jack said that the improvements were tested heavily in the Mexican elections and will be tested further in the upcoming Indian elections.
These statements do not indicate the extent to which the measures taken by Twitter worked. Many of the steps above were specific to the US, including candidate verification and the political ads policy. In less than two months, the Lok Sabha elections, the largest democratic exercise in the world, will take place. The Standing Committee’s agenda focuses on safeguarding citizens’ rights, so it is unclear whether this issue is within the scope of this hearing. Nevertheless, it is important that the government appoint a committee to address this issue at the earliest and get clarification on what we can expect from Twitter, and other social media giants, during the elections.
Harassment, Hate Speech, Threats, Doxxing and Spread of False Information
These are the most serious and complex issues Twitter has to grapple with. Unfortunately, they also happen to be the issues the company has made the least progress on. Jack likens Twitter to a public square where people with different opinions can come and express themselves. But some people have misused this openness to harass or threaten others, and the platform’s tools can readily amplify such abuse.
The algorithms discussed earlier prevent some of these issues to an extent, because of their ability to detect accounts that frequently abuse others or carry out coordinated harassment campaigns. But beyond this, Twitter largely relies on reactive measures to make the platform a healthy place. Any tweet deemed to violate the company’s policies and rules is removed only if reported. This is impractical when someone’s tweet has a barrage of abusive replies, and in cases of doxxing or the spreading of false information, reactive measures may come too late. Once reported, tweets that involve direct threats to a person’s life are easy to judge, but for others, the company has to decide whether freedom of speech protects them, a decision that even the highest courts of the land struggle to make.
There are cases where, even if someone violates the rules, a controversial clause allows tweets of public interest and newsworthiness to remain. This has protected prominent public figures, including US President Donald Trump, creating a double standard. Jack defended the clause, saying such tweets are usually allowed if there is some precedent for them or if the same content has appeared in other media.
Most of the abuse occurs in shared spaces, like tweet replies and mentions, where anyone can inject themselves even if they are not welcome. The company is considering allowing the original tweeter (the host) to control whose replies are shown, but people can then filter out any replies with a different viewpoint, contributing to the problem of echo chambers. Echo chambers already exist because people mostly choose to follow those who share similar views. Twitter is thinking about allowing people to follow topics instead, where the likelihood of diverse opinions showing up is higher.
In September last year, when speaking with Jay Rosen, Jack mentioned that the company is developing tools to measure the health of a conversation. He compared this tool to a thermometer which measures body temperature to judge the health of a person. The four parameters of the tool he discussed are shared attention, shared reality, receptivity, and variety of perspective. This year, Jack repeated the same idea when talking with podcast host Sam Harris.
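Jack named the four parameters but never defined how they would be measured, so any implementation is speculation. As a toy illustration only, one of them, variety of perspective, could be proxied by the entropy of stance labels in a conversation (the labels and the entropy proxy are my assumptions, not Twitter’s):

```python
import math
from collections import Counter

def perspective_variety(stances):
    """Toy proxy for the 'variety of perspective' health parameter:
    normalized Shannon entropy of stance labels in a conversation.
    Returns 0.0 for an echo chamber (one stance) and 1.0 when all
    observed stances are equally represented. Illustrative only; the
    real metrics were never publicly specified."""
    counts = Counter(stances)
    total = len(stances)
    if total == 0 or len(counts) == 1:
        return 0.0
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return entropy / math.log2(len(counts))  # normalize to [0, 1]
```

A thread where every reply takes the same side scores 0.0, while an evenly split debate scores 1.0, which is the kind of thermometer-style reading the analogy suggests.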
As you can see, most of the proposed solutions are yet to be implemented, and we do not know which stage of implementation they are in. In most of his recent interviews, Jack has openly addressed a variety of topics, but this is one area where he has shied away from specifics. The Standing Committee should press for these details.
Twitter has always been a platform shaped by user feedback. The @ sign, hashtag and retweet were not features by design, but they evolved because people started using them in a particular way. Likewise, if users seek a healthier place to converse, I believe that the company will do everything it can to deliver.
As for the Standing Committee, taking the first step to hold tech giants accountable is admirable, but the discussions should focus on the wider issues outlined here rather than on partisan grievances. It would be ideal to have these discussions in the open, instead of behind closed doors. I am looking forward to hearing more on this topic on February 25, if Jack, or any senior executive, shows up this time.
Thank you for reading and feel free to get in touch with me to provide any feedback. Thanks to Aditi Kumar for proofreading and putting the commas in the right places.