Recommendations on Instagram
We make recommendations to the people who use our services to help them discover new communities and content. Both Facebook and Instagram may recommend content, accounts, and entities that people do not already follow. Some examples of our recommendation experiences include Instagram Explore, Accounts You May Like, and the Reels tab.
Our goal is to make recommendations that are relevant and valuable to each person who sees them. We do this by personalizing recommendations, which means making unique recommendations for each person. For example, if you interact with restaurants and bookstores on Instagram, we may recommend content about food, recipes, books, or reading.
What baseline standards does Instagram maintain for its recommendations?
At Instagram, we have guidelines that govern what content we recommend to people. Through those guidelines, we work to avoid making recommendations that could be low-quality, objectionable, or sensitive, as well as recommendations that may be inappropriate for younger viewers. Our Recommendations Guidelines are designed to maintain a higher standard than our Community Guidelines, because recommended content and connections come from accounts you haven't chosen to follow. We use technology to detect content and accounts that don’t meet these Recommendations Guidelines and to help us avoid recommending them. As always, content that goes against our Community Guidelines will be removed from Instagram.
By publishing these guidelines, we want to give people more information about the types of content and accounts we try to avoid recommending, both to keep our community informed and to offer content creators guidance about recommendations.
In developing these guidelines, we sought input from 50 leading experts specializing in recommender systems, expression, safety, and digital rights. Those consultations are part of our ongoing efforts to improve these guidelines and provide people with a safe and positive experience when they receive recommendations on our platform.
Content Recommendations
There are five categories of content that are allowed on our platforms, but that may not be eligible for recommendations. These categories are listed below, as are some illustrative examples of content within each category.
Content that impedes our ability to foster a safe community, such as:
  1. Content that discusses self-harm, suicide, or eating disorders, as well as content that depicts or trivializes themes around death or depression. (We remove content that encourages suicide or self-injury, or any graphic imagery.) We use technology to try to avoid showing certain types of content that discusses self-harm, suicide, and eating disorders to people under 16 years old, even if they follow the account sharing it. We do allow content that provides support, recovery, or resources on these topics for everyone.
  2. Content that may depict violence, such as people fighting. (We remove graphically violent content.)
  3. Content that may be sexually explicit or suggestive, such as pictures of people in see-through clothing. (We remove content that contains adult nudity or sexual activity.) We use technology to try to avoid showing sexually explicit or suggestive content to people under 16 years old.
  4. Content that promotes the use of certain regulated products, such as tobacco or vaping products, adult products and services, or pharmaceutical drugs. (We remove content that attempts to sell or trade most regulated goods.) We use technology to try to avoid showing people under 16 years old content that promotes non-medical drugs and marijuana, even if they follow the account sharing it.
  5. Content shared by any non-recommendable account.
Note: You may be able to control how much of this content you see on Instagram using the Sensitive Content Control.
Sensitive or low-quality content about health or finance, such as:
  1. Content that promotes or depicts cosmetic procedures.
  2. Content containing exaggerated health claims, such as “miracle cures.”
  3. Content attempting to sell products or services based on health-related claims, such as promoting a supplement to help a person lose weight.
  4. Content that promotes misleading or deceptive business models, such as payday loans or “risk-free” investments.
Content that users broadly tell us they dislike, such as:
  1. Content that includes clickbait.
  2. Content that includes engagement bait.
  3. Content that promotes a contest or giveaway.
Content that is associated with low-quality publishing, such as:
  1. Unoriginal content that is largely repurposed from another source without adding material value.
  2. Content from websites that get a disproportionate number of clicks from Instagram versus other places on the web.
  3. News content that does not include transparent information about authorship or the publisher’s editorial staff.
False or misleading content, such as:
  1. Content including claims that have been found to be false by independent fact-checkers or certain expert organizations. (We remove misinformation that could cause physical harm or suppress voting.)
  2. Vaccine-related misinformation that has been widely debunked by leading global health organizations.
  3. Content that promotes the use of fraudulent documents, such as someone sharing a post about using a fake ID. (We remove content attempting to sell fraudulent documents, like medical prescriptions.)
Account Recommendations
We also try not to recommend accounts that:
  1. Recently violated Instagram’s Community Guidelines. (This does not include accounts that we remove from our platforms for violating Instagram’s Community Guidelines.)
  2. Repeatedly and/or recently shared content we try not to recommend.
  3. Repeatedly posted vaccine-related misinformation that has been widely debunked by leading global health organizations.
  4. Repeatedly engaged in misleading practices to build followings, such as purchasing “likes.”
  5. Have been banned from running ads on our platforms.
  6. Recently and repeatedly posted false information as determined by independent third-party fact-checkers or certain expert organizations.
  7. Are associated with offline movements or organizations that are tied to violence.
  8. Discuss or depict suicide and self-harm in the account name, username, profile photo, or bio (with the exception of accounts focused on providing support, raising awareness, and recovery).
A similar set of these guidelines applies to recommendations on Facebook. Those guidelines can be found in the Facebook Help Center.
Visit the Meta Transparency Center for information on how an artificial intelligence (AI) system selects, ranks, and delivers the content you see on Instagram.