They aren’t evil, they’re just confused. This is a complex issue, and no one there seems sure how to fix it.
They are waiting on an AI to clean up the fraudulent accounts. Maybe they’re working on a super-secret technology that will be able to quickly and accurately identify the bad guys. However, this is a clear case of coevolution—the bad guys will figure out new unscrupulous techniques to employ faster than the AI can evolve. I’m not counting on AI to save the day here.
They’re bolstering their team of security humans, but not fast enough. Today, they have a team of 10,000 security humans working to identify and fix issues like this, and that team is supposed to grow to 20,000.
They know it’s a problem, but nobody is talking about it. They think they can coast for a while longer.
They know it’s a problem, but it’s not as big a problem as some of the other complicated and serious issues that have been in the spotlight, such as fake news, online bullying, or election manipulation. They’re spending time and resources to tackle these big ones first.
I am not the first to spot all the gaping holes in Instagram’s armor, but I wanted answers, not just theories. So I went to Instagram to ask them myself what they’re doing to stop widespread user fraud and to prevent companies from getting scammed while doing business using these platforms.
Over the years, I’ve had repeated discussions with Instagram and Facebook about these problems (remember that Facebook bought Instagram for $1 billion in 2012). I knew one of the founders of Instagram, Mike Krieger, before he started the company. I also spent the day with Mark Zuckerberg some time ago, when he invited me to Facebook Headquarters.
Hi, Mark! Here’s a picture of Zuck holding one of my prints from many years ago, when we met for a few hours at Facebook’s headquarters. Later, at their hackathon, I snapped a shot of him that he used as his FB profile photo for ages, so I thought that was pretty cool. I hope he agrees with my points in this book and some possible solutions, and that we’re still kinda friends after he reads this whole thing!
When I brought up these issues of Instagram fraud with Krieger, he did not have anything specific to say. He instead routed me back to Instagram public relations and security teams. Look, I don’t blame him. What could he publicly say that their PR teams could not say?
One of the PR team’s jobs is to deflect negative questions from the media. Those poor people. I didn’t want to subject them to more deflection, but I got in line anyway and waited my turn to ask the Instagram PR team some hard questions. It’s a hard job and they are just stuck in a rigid system that doesn’t allow them to give satisfactory answers, even to plain questions. They really have no authority to write honest, interesting responses.
In this case, the PR team tried to explain this fraud away as a minor issue, and they pointed to automated services they have that clean up the fake accounts. It is clear to me that these services do a lousy job of identifying and addressing the fake accounts and activity. As a reminder, no automated Instagram service has yet flagged @genttravel as problematic.
I have also talked to many other senior people I know at these companies. They absolutely know, and admit, that the fake engagement economy is a problem on these services. They all spoke to me off the record, of course, and I’m not here to name names. Well, except for @miss.everywhere.
Throughout the book, I believe I make a strong case that it is indeed in Instagram’s long-term best interest to clean up their platform. Otherwise, eventually, no one will trust the platform. When people stop believing in the veracity of followers, likes, and comments, this entire Instagram economy could collapse. Why would advertisers continue to spend money on Instagram Influencers when they can’t believe the numbers being reported back? It’s a looming issue that is only getting worse.
In this chapter, we take a look at what Instagram and Facebook are doing to combat these fraudulent engagement issues. I share some conversations I’ve had with folks inside these organizations and share some of my thoughts about their responses.
Instagram Public Relations and Instagram Security Respond
As mentioned, the conversation began when I reached out to Mike Krieger, one of the founders of Instagram. I knew him pre-Instagram and reach out to him from time to time, although we had never spoken about this topic. He pointed me to the Instagram Communications team. Here’s what happened when I spoke, on the record, with a couple of folks on Instagram’s Communications team.
As part of my research for this book, I went out and looked for some fraudulent accounts. In less than 48 hours I found more than 200 accounts I suspected to be cheating the system. It was remarkably easy. When I talked with Instagram, I also asked if I could send 10 of these suspicious accounts over for Instagram’s security team to review. They said yes and warned me that any account sent to Instagram security found to have fraudulent numbers could be terminated.
Besides emailing over the ten suspicious accounts, I wrote down a few questions I wanted to ask on the phone, which I detail below. They gave me answers on the phone but asked that I wait for a follow-up email for an on-the-record response. A while later, an Instagram spokesperson sent an email back to me. There are some parts of that email I can quote, which I have. They’ve asked me to summarize other parts.
To start, they responded with this very official-sounding statement.
We take spam, inauthentic and other abusive behavior very seriously.
We consider services that automate or sell likes or follows to be spam, and we aggressively remove them from the platform. When we find “spammy” activity, we work to counter and prevent it, including blocking accounts and removing violating content all at once. We review suspicious activity closely and take the time to understand how to help prevent similar activity in the future.
Our internal estimates show that spam accounts make up a small fraction of Instagram’s monthly active user base.
Below are the questions I asked, their summarized responses, and my thoughts on those responses.
1. How do the bots that use automated follow, unfollow, like, and comment functionality work?
The Instagram Communications team actually provided a pretty good answer to this one. I’ve summarized what they’ve said about each method below:
Programs and scripts that run locally, on a user’s computer: These sorts of scripts search for a given hashtag, and, when they find it, send out likes, follows, and comments on posts that match the hashtag in hopes of receiving a like or follow in return. This engagement model relies on reciprocity to succeed (just like we’ve discussed in this book). These programs and scripts are generally paid for, but can be free and/or open source.
Purchased services where the service provider runs automated activity on behalf of the user: The user provides their credentials to the service. The service provider logs in and performs similar actions as in the previous example. This type of service requires no technical skill and is highly automated.
Buying likes and follows from a farm of pre-existing fake accounts: This is less prevalent. Instagram said they are often able to police these accounts and shut them down en masse. Their response didn’t answer the question of how this particular approach works, however.
Engagement pods: These pods often use browser plugins that will automatically like posts or comments from a desktop or laptop computer. Instagram said this behavior is some of the hardest to detect, since it does a good job of mimicking a real action from a real user’s browser.
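The first method above, the local script, can be sketched as a toy simulation. To be clear, nothing here touches Instagram or any real API; the post data, function names, and reciprocity rate are all hypothetical, purely to illustrate the hashtag-search-then-reciprocate loop these scripts rely on.

```python
# Toy simulation of a local "engagement bot" loop (no real accounts or APIs).
# All names and numbers here are illustrative assumptions.

import random

def find_posts_by_hashtag(posts, hashtag):
    """The bot's discovery step: collect posts tagged with the target hashtag."""
    return [p for p in posts if hashtag in p["hashtags"]]

def run_engagement_bot(posts, hashtag, reciprocity_rate=0.3, seed=42):
    """Like every matching post; some authors follow back out of reciprocity."""
    random.seed(seed)
    liked = 0
    followers_gained = 0
    for post in find_posts_by_hashtag(posts, hashtag):
        liked += 1                      # automated like on the matching post
        if random.random() < reciprocity_rate:
            followers_gained += 1       # simulated reciprocal follow-back
    return liked, followers_gained

# Fake dataset: 100 posts, half tagged #travel, half tagged #food.
posts = [{"author": f"user{i}",
          "hashtags": ["#travel"] if i % 2 == 0 else ["#food"]}
         for i in range(100)]

liked, gained = run_engagement_bot(posts, "#travel")
print(liked, gained)
```

The point of the sketch is how little the bot has to do: search, like, and wait for reciprocity. That is exactly why the pattern (bursts of likes on hashtag-matched posts from strangers) is also detectable.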
2. How many bots use the web interface and use scripts and scrape info?
The Instagram Communications team did not give me an answer to this one. They did say that some bots will engage in scraping behavior as part of their audience discovery process—the hashtag searching method mentioned above—but that the primary goal of these bots is to attract reciprocal engagement behavior.
3. What are you doing in general to mitigate spam, inauthentic, or abusive behavior?
Not surprisingly, the Instagram Communications team had an excessively long response to this one. In summary, they said they have a combination of automated and manual systems to combat fraud like this. The automated systems are largely based on machine learning algorithms that try to detect suspicious non-human behaviors. The manual systems rely on a team of 10,000 folks working on safety and security topics. They told me that this team will be growing to 20,000 individuals in 2019.
The Instagram Communications team pointed to a variety of legal-ese policies, guidelines, and terms of use, which I’ve summarized below. There are also links in case you want to check it out (warning—these are kind of long and boring to read. Instagram should already know, based on all their analyses about user engagement, that nobody is going to read all these policies).
Platform Policies #18: “Don’t participate in any ‘like’, ‘share’, ‘comment’ or ‘follower’ exchange programs.”
https://www.instagram.com/about/legal/terms/api/
Community Guidelines Bullet 3: “Help us stay spam-free by not artificially collecting likes, followers, or shares, posting repetitive comments or content, or repeatedly contacting people for commercial purposes without their consent.”
https://help.instagram.com/477434105621119/
Terms of Use: #3: “You are responsible for any activity that occurs through your account and you agree you will not sell, transfer, license or assign your account, followers, username, or any account rights.”
https://help.instagram.com/478745558852511
Terms of Use #10: “You must not access Instagram’s private API by means other than those permitted by Instagram.” Use of Instagram’s API is subject to a separate set of terms:
http://instagram.com/about/legal/terms/api/
Since we emailed them, the Instagram Communications team has also published a blog post on how they are approaching the issue of fraudulent engagement. They have launched some machine learning tools to identify fraudulently procured engagement. In this post, they said:
Starting today, we will begin removing inauthentic likes, follows and comments from accounts that use third-party apps to boost their popularity. We’ve built machine learning tools to help identify accounts that use these services and remove the inauthentic activity. Accounts we identify using these services will receive an in-app message alerting them that we have removed the inauthentic likes, follows and comments given by their account to others.
You can read the whole thing here: https://instagram-press.com/blog/2018/11/19/reducing-inauthentic-activity-on-instagram/
As I mention further down, whatever system they’ve implemented is still not doing a great job. Most of the ten fraudulent accounts I identified in my original email to them are still very much alive and kicking.
4. Why don’t you sue those Follower/Like/Comment companies for being injurious and get them to turn over their client list?
No response on this one. I still think it’s a good idea.
5. Why would Instagram provide an API that lets a third party follow/comment/like on their behalf? I can’t think of a useful use-case for that. All following should be done while I am logged into the app on my phone, right? If you are disabling that, why was it allowed in the first place? Why did it take so long to remove?
Same as above: no response here, although I still think these are fair questions.
6. What if you only allowed Instagram access via the app instead of the web. Would this eliminate some of the bad activity?
The Instagram Communications team said that it is unlikely that restricting activity to the app would effectively deter bad actors on the platform, because bots typically mimic iOS or Android behavior in an attempt to circumvent Instagram policies and “blend in” with real humans.
7. If you do remove access to the API that allows follows/likes/comments, then what will you do about the hundreds of millions of ill-gotten gains? Do you have the ability to backtrack and purge?
No answer on this one.
8. In terms of pods: do you think megapods with 1000+ people inside are a problem? Can you detect it? Why did it take Buzzfeed writing this article to make you go out and do something about it? Some of those group names like “Daily Instagram Engagement” are incredibly obvious. What about groups not on Facebook?
The Instagram Communications team said they recently took action on a number of groups on the platform promoting podding behavior. I personally don’t think they’ve done enough to combat this behavior, as it’s still very easy to join pods and podding groups.
9. I found over 200 bad actors in 48 hours, and I barely tried. Why doesn’t Instagram have people doing the same thing? The patterns are obvious, so why isn’t there a process, manual or automatic, for clearing these accounts or deleting the ill-gotten gains?
For this one, they did investigate the 10 accounts I sent over and found suspicious activity on many of them (although they weren’t able to comment on which ones specifically). They admitted they could identify that some of these accounts had bought likes and followers from inauthentic and automated accounts.
I wanted to see for myself if they had taken action against any of the accounts I had flagged. Let’s go back to the 10 suspicious accounts I initially sent the Instagram Security team. Here are the accounts, which I anonymized:
In column 1, you see their follower count in April of 2018, when I first sent the email to Instagram Communications. The next column shows December of 2018, eight months later. You can see quite clearly that despite finding “evidence of suspicious activity”:
Instagram security did not ban their accounts
Followers were not culled, and even if some were, the culling was effectively useless, because most of our suspects saw tremendous follower growth over the following eight months
This is an example of one of the accounts (Suspect #1) that I sent to Instagram security. It appears this guy purchased 150,000 followers that were delivered in 5 days. Source: Socialblade.com
Of those that did not grow dramatically, it appears they bought a big following at the start of our tracking, and then perhaps stopped buying because they were already in the top 1% of the most followed users.
I don’t think the 10,000 people in Instagram security are incompetent. Maybe they’re just too busy and overworked. Maybe they are busy fixing other problems that they deem more important. I don’t know what to think.
What follows are some ideas on how to fix many of the problems in the existing system.
Possible Solution #1: Flip the Business Model
Facebook has been very prominent in the news lately. There are some chinks in the Facebook armor causing anguish both inside and outside of the company. Part of this is due to the nature of their business model. Today, Facebook and Instagram have very similar business models. They acquire as many users as possible and then sell their data and attention to advertisers and third parties, primarily through advertisements.
So, what do I mean by “flip the business model”? Well, instead of generating all its revenue from advertising, Instagram (and therefore, Facebook) could begin to charge users to use their service. This model would look more like what Netflix, Pandora, and Spotify do today, with tiered pricing systems for users and a lowered (or completely removed) reliance on advertisements. This switch has several aspects I’ll talk through here, but I contend it would ultimately please more stakeholders than the current model does.
Flickr has recently switched to a similar model. While there has been backlash, many serious photographers and artists stayed on because of the new benefits of the paid model.
If Instagram began charging users a few dollars a month, there could be many immediate advantages for users:
Users who pay would no longer be subjected to advertising.
Users could have the option of returning their feed to chronological order, since the algorithm’s goal would no longer be to maximize screen time in order to maximize advertising income.
People inside Facebook and Instagram who currently spend their time maximizing the advertising code and algorithms would be freed up to build lots of new features for users instead.
User data would be less likely to be sold to third parties, as advertisers would play less of a role in the revenue ecosystem.
This model would keep the bots at bay. Bot farms wouldn’t pay millions of dollars a month to keep their bot army going. For example, Netflix doesn’t have hordes of bot activity ramping up view counts or thumbs-upping shows to create fake metrics.
Lastly, Instagram and Facebook would have everyone’s credit card number stored, which would let them offer all sorts of additional services through the platform. Facebook has already introduced the option to buy some products and services (like event tickets) directly on the platform. Having a higher percentage of users with credit card information already stored on Instagram would decrease microtransaction friction. What does that mean? When a service like iTunes already has your credit card number, it’s much easier to make purchases, because buying becomes a one-step process.
Under the Influence- How to Fake Your Way Into Getting Rich on Instagram Page 12