Instagram will start warning users when it detects potentially bullying language in their captions before they are posted.
The app said it will use artificial intelligence to spot possibly harmful language and give users the opportunity to reconsider it before sharing.
A similar feature, introduced earlier this year, warned users if it thought their comments on other people’s posts contained bullying or offensive language.
If the AI spots something, a message will appear telling the user the caption looks similar to others that have previously been reported, giving them the option to edit it, learn why it was flagged, or share it anyway.
It comes amid criticism that Facebook-owned Instagram has not acted quickly enough to remove abusive and potentially dangerous content from the platform.
Politicians and campaigners have called for greater regulation to enable better policing of social media and to hold sites to account for failing to protect users.
Instagram said it was committed to developing new technology and features aimed at mitigating online bullying.
Dan Raisbeck, co-founder of anti-cyberbullying charity Cybersmile, said: “We should all consider the impact of our words, especially online where comments can be easily misinterpreted.
“Tools like Instagram’s Comment and Caption Warning are a useful way to encourage that behaviour before something is posted, rather than relying on reactive action to remove a hurtful comment after it’s been seen by others.”