What are recommendations on Instagram?
We make recommendations to the people who use our services to help them discover new communities and content. Both Facebook and Instagram may recommend content, accounts, and entities that people do not already follow. Some examples of our recommendation experiences include Instagram Explore, Accounts You May Like, and IGTV Discover. Our goal is to make recommendations that are relevant and valuable to each person who sees them. We do this by personalizing recommendations, which means making unique recommendations for each person. For example, if you interact with restaurants and bookstores on Instagram, we may recommend content about food, recipes, books, or reading.
What baseline standards does Instagram maintain for its recommendations?
At Instagram, we have guidelines that govern what content we recommend to people. Through those guidelines, we work to avoid making recommendations that could be low-quality, objectionable, or sensitive, and we also avoid making recommendations that may be inappropriate for younger viewers. Our Recommendations Guidelines are designed to maintain a higher standard than our Community Guidelines, because recommended content and connections are from accounts you haven't chosen to follow. As always, content that goes against our Community Guidelines will be removed from Instagram.
By publishing these guidelines, we want to give people more information about the types of content and accounts we try to avoid recommending, both to keep our community informed and to give content creators guidance about how recommendations work.
In developing these guidelines, we sought input from 50 leading experts specializing in recommender systems, expression, safety, and digital rights. These consultations are part of our ongoing effort to improve the guidelines and to give people a safe and positive experience when they receive recommendations on our platform.
There are five categories of content that are allowed on our platforms, but that may not be eligible for recommendations. These categories are listed below, as are some illustrative examples of content within each category.
Content that impedes our ability to foster a safe community, such as:
- Content that discusses self-harm, suicide, or eating disorders. (We remove content that encourages suicide or self-injury, as well as any graphic imagery of it.)
- Content that may depict violence, such as people fighting. (We remove graphically violent content.)
- Content that may be sexually explicit or suggestive, such as pictures of people in see-through clothing. (We remove content that contains adult nudity or sexual activity.)
- Content that promotes the use of certain regulated products, such as tobacco or vaping products, adult products and services, or pharmaceutical drugs. (We remove content that attempts to sell or trade most regulated goods.)
- Content shared by any non-recommendable account.
Sensitive or low-quality content about Health or Finance, such as:
- Content that promotes or depicts cosmetic procedures.
- Content containing exaggerated health claims, such as “miracle cures.”
- Content attempting to sell products or services based on health-related claims, such as promoting a supplement to help a person lose weight.
- Content that promotes misleading or deceptive business models, such as payday loans or “risk-free” investments.
Content that users broadly tell us they dislike, such as:
- Content that includes clickbait.
- Content that includes engagement bait.
- Content that promotes a contest or giveaway.
Content that is associated with low-quality publishing, such as:
- Unoriginal content that is largely repurposed from another source without adding material value.
- Content from websites that get a disproportionate number of clicks from Instagram versus other places on the web.
- News content that does not include transparent information about authorship or the publisher’s editorial staff.
False or misleading content, such as:
- Content including claims that have been found false by independent fact-checkers or certain expert organizations. (We remove misinformation that could cause physical harm or suppress voting.)
- Vaccine-related misinformation that has been widely debunked by leading global health organizations.
- Content that promotes the use of fraudulent documents, such as someone sharing a post about using a fake ID. (We remove content attempting to sell fraudulent documents, like medical prescriptions.)
We also try not to recommend accounts that:
- Recently violated Instagram’s Community Guidelines. (This does not include accounts that we remove from our platforms entirely for violating Instagram’s Community Guidelines.)
- Repeatedly and/or recently shared content we try not to recommend.
- Repeatedly posted vaccine-related misinformation that has been widely debunked by leading global health organizations.
- Repeatedly engaged in misleading practices to build followings, such as purchasing “likes.”
- Have been banned from running ads on our platforms.
- Recently and repeatedly posted false information as determined by independent third-party fact-checkers or certain expert organizations.
- Are associated with offline movements or organizations that are tied to violence.
A similar set of these guidelines applies to recommendations on Facebook. Those guidelines can be found in the Facebook Help Center.