Meta, the company that owns Facebook, Instagram, and WhatsApp, has announced a set of new security tools for creators on Instagram. The tools are meant to make it easier to protect accounts from unwanted interactions and to provide more transparency about content that may affect an account's status.
Community engagement is one of a content creator's biggest priorities, yet deleting spam comments and removing fake accounts can be exhausting work. More time spent moderating means less time spent interacting positively with followers and creating new content.
In addition to spam detection, Instagram has updated its transparency features: the app can now flag potentially violating content, so creators can anticipate and correct problems quickly and better understand the social network's content rules.
Accounts suspected of being spam or bots will automatically be filtered into a separate inbox for review. If a legitimate account is mistakenly flagged, it can be restored to the follower list from this tool. If, on the other hand, you are sure that all of the flagged accounts are spam or bots, you can select "Delete All"; the accounts in question are not notified of the removal.
Instagram is also testing hiding Stories views from accounts that its AI tools flag as spam, another new feature aimed at reducing unwanted interactions with content creators.