Our new Chat Moderation feature integrates AI into our chat software to automatically moderate messages sent by customers. This moderation does not apply to messages sent by Salesfloor users. The AI system analyzes chat content in real time, identifying and flagging inappropriate, offensive, or harmful messages. Flagged messages are automatically blocked and never displayed to store associates. A warning message is displayed to both the customer and the associate, informing them that the message was flagged. This ensures a safer, more respectful communication environment for associates across all our retailers.
Skip to:
- What does it look like when a conversation is flagged?
- How does Salesfloor identify inappropriate messages?
- What else should I know?
What does it look like when a conversation is flagged?
When a message is flagged, the associate does not see the original text. Instead, both the customer and the associate see a warning in the conversation indicating that the message was flagged.
How does Salesfloor identify inappropriate messages?
Salesfloor's AI assigns each customer message a severity level within one or more of the subjects below. Retailers can either set a single global severity level for all subjects or choose a different level for each subject. In the lists that follow, level 3 describes the most severe content; selecting level 1 is the strictest setting of the three, because it also covers the milder content listed under levels 1 and 2. A sketch of this flagging logic follows the lists.
Sexual:
- 3: Intercourse, masturbation, porn, sex toys and genitalia
- 2: Sexual intent, nudity and lingerie
- 1: Informational statements that are sexual in nature, affectionate activities (kissing, hugging, etc.), flirting, pet names, relationship status, sexual insults and rejecting sexual advances
Hate:
- 3: Slurs, hate speech, promotion of hateful ideology
- 2: Negative stereotypes or jokes, degrading comments, denouncing slurs, challenging a protected group's morality or identity, violence against religion
- 1: Positive stereotypes, informational statements, reclaimed slurs, references to hateful ideology, claims that a protected group's rights are immoral
Violence:
- 3: Serious and realistic threats, mentions of past violence
- 2: Calls for violence, destruction of property, calls for military action, calls for the death penalty outside a legal setting, mentions of self-harm/suicide
- 1: Denouncing acts of violence, soft threats (kicking, punching, etc.), violence against non-human subjects, descriptions of violence, gun usage, abortion, self-defense, calls for capital punishment in a legal setting, destruction of small personal belongings, violent jokes
Bullying:
- 3: Slurs or profane descriptors toward specific individuals, encouraging suicide or severe self-harm, severe violent threats toward specific individuals
- 2: Non-profane insults toward specific individuals, encouraging non-severe self-harm, non-severe violent threats toward specific individuals, silencing or exclusion
- 1: Profanity in a non-bullying context, playful teasing, self-deprecation, reclaimed slurs, degrading a person's belongings, bullying toward organizations, denouncing bullying
Weapons (Beta):
- 3: Buying, selling, trading, and constructing bombs and firearms
- 2: Buying, selling, trading, and constructing non-explosive weapons
- 1: Neutral mentions of all weapons
Child Exploitation:
- 3: Asking for or trading child pornography (CP) or related links, mentioning proclivity for CP, identifiably underage users soliciting sex or pornography, roleplay involving children, mentions of sexual activity or sexual fetishes involving children
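To make the severity settings concrete, below is a minimal TypeScript sketch of how per-subject levels could drive flagging. It is illustrative only, not Salesfloor's actual code: the type names, the `classify` stub, and the rule that a message is flagged when its detected severity meets or exceeds the configured level are assumptions consistent with level 1 being the strictest setting.

```ts
// Hypothetical sketch only — names and the exact flagging rule are
// assumptions, not Salesfloor's real API.

type Subject =
  | "sexual"
  | "hate"
  | "violence"
  | "bullying"
  | "weapons"
  | "childExploitation";

type Severity = 0 | 1 | 2 | 3; // 0 = nothing detected, 3 = most severe content
type Level = 1 | 2 | 3;        // the severity level a retailer configures

interface ModerationConfig {
  globalLevel?: Level;                             // one uniform level for every subject
  subjectLevels?: Partial<Record<Subject, Level>>; // or varying levels per subject
}

// Toy stand-in for the AI classifier that analyzes chat content in real time.
function classify(message: string): Record<Subject, Severity> {
  const scores: Record<Subject, Severity> = {
    sexual: 0, hate: 0, violence: 0, bullying: 0, weapons: 0, childExploitation: 0,
  };
  // "Soft threats (kicking, punching, etc.)" sit at violence severity 1.
  if (/\b(punch|kick)\b/i.test(message)) scores.violence = 1;
  return scores;
}

// Assumed rule: flag when any subject's detected severity meets or exceeds
// the configured level, so level 1 catches severities 1-3 (strictest) while
// level 3 catches only severity 3 (most lenient).
function isFlagged(message: string, config: ModerationConfig): boolean {
  const scores = classify(message);
  return (Object.keys(scores) as Subject[]).some((subject) => {
    const level = config.subjectLevels?.[subject] ?? config.globalLevel;
    return level !== undefined && scores[subject] >= level;
  });
}

// Strict on violence (level 1), lenient elsewhere (level 3):
const config: ModerationConfig = { globalLevel: 3, subjectLevels: { violence: 1 } };
console.log(isFlagged("I'll punch you", config));          // true  -> blocked, warning shown
console.log(isFlagged("Our sale starts Friday!", config)); // false -> delivered normally
```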
What else should I know?
Exports: Flagged messages are included in chat exports and can be accessed through the back office. A new column indicating whether each message was flagged is added to CSV exports (see the sketch after these notes).
Chat History: Flagged messages are not displayed in the chat history; the warning message is shown in their place.
Non-Retroactive: The feature does not apply to messages sent before it was enabled or before the severity level was changed.
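For anyone processing the exports, here is a minimal sketch of isolating flagged messages in a CSV export. The column name `flagged`, the sample rows, and the true/false values are illustrative assumptions; check the header of your actual export for the real column name.

```ts
// Hypothetical example — the header names and values are assumed,
// not the actual Salesfloor export format.

const sampleExport = [
  "timestamp,sender,message,flagged",
  "2024-05-01T10:02:11Z,customer,Hello!,false",
  "2024-05-01T10:03:27Z,customer,You are an idiot,true",
].join("\n");

// Naive comma splitting is enough for this sample; a real export with quoted
// or comma-containing fields should go through a proper CSV parser.
function flaggedRows(csv: string): string[][] {
  const [header, ...rows] = csv.split("\n").map((line) => line.split(","));
  const flaggedIndex = header.indexOf("flagged"); // assumed column name
  return rows.filter((row) => row[flaggedIndex] === "true");
}

console.log(flaggedRows(sampleExport)); // -> the one flagged row
```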