Instagram has launched a new initiative to combat DM spam. The move is part of a broader effort to protect users, especially women, from the unsolicited and inappropriate content that has long plagued private message inboxes. Beyond restricting unwanted messages, the update underscores Instagram’s commitment to strengthening its anti-abuse measures, particularly in response to growing pressure from politicians and critics.
Instagram’s battle against DM spam began in late June, when the platform started testing stricter DM request policies. Under the new rules, a sender is limited to a single message if the recipient doesn’t follow them, and the recipient must accept the chat request before any further messages can come through. The policies also restrict DM requests to text only, barring photos, videos, and voice messages.
The introduction of these measures marks a significant step towards curbing the harassment and annoyance caused by spammers and creeps on the platform. While Instagram already had some tools in place to combat spam, such as the “Hidden Words” tool that filters messages with objectionable content, the latest update goes further in providing a more comprehensive solution to tackle this persistent issue.
A key motivation behind Instagram’s efforts to combat DM spam is to provide enhanced safety, especially for women, who often receive inappropriate messages, including unsolicited nudes. By limiting DM requests and restricting the media types that can be sent, Instagram aims to curb this disturbing practice.
However, it’s worth acknowledging that while these measures should significantly reduce unwanted photos and videos, harassers can still send crude text messages. Instagram recognizes this limitation and says it continues to work on its anti-harassment policies to safeguard users from all forms of abuse and intimidation.
Instagram’s parent company, Meta, is facing increasing pressure from politicians and critics to strengthen its anti-abuse measures, particularly concerning the protection of teenage users. The company has already introduced some initiatives, like the “Messenger Kids” app and safety filters, to ensure a safer online environment for younger audiences. Nonetheless, lawmakers and activists have demanded more stringent safeguards.
Recently, a Senate bill was proposed that would require parental consent for teens who wish to use social media apps. In addition, Arkansas enacted a law mandating age verification for online platforms. These legislative actions reflect growing concern that social media platforms, including Instagram, need more robust measures to protect young users from harm and exploitation.
While Instagram’s recent DM restrictions and Meta’s broader safety efforts are commendable, critics have long questioned the efficacy of the platform’s existing anti-harassment policies, and some demographics have reportedly received inadequate protection from abuse and harassment.