Content Moderation Policy
Last updated: 20 September 2025
This Content Moderation Policy describes the rules and processes used by Alerten to review, approve, or remove user-submitted alerts and reports. Our goal is to provide timely, accurate, and safe information to the community while minimizing misuse, misinformation, and harm.
1. Allowed Content
Users may submit reports that are factual, timely, and relevant to public safety, including but not limited to:
- Crimes (theft, robbery, assault) — factual descriptions of incidents or suspicious activity.
- Accidents (road collisions, fires) — incident details and location/time where known.
- Public-safety hazards (power outages, water disruptions, severe weather).
- Official advisories from verified authorities (police, fire brigade, municipal services).
2. Prohibited Content
We do not allow content that could harm individuals or the public, or that violates privacy or legal standards. Examples include:
- False or deliberately misleading reports: Fabricated incidents or hoaxes intended to deceive or cause panic.
- Personal data and doxxing: Posting private information such as home addresses, phone numbers, identification numbers, or images that identify private individuals.
- Hate speech & threats: Content that promotes violence, harassment, or discrimination against protected groups or individuals.
- Incitement or vigilante calls: Encouraging users to take vigilante action or to target individuals or groups.
- Graphic or explicit media: Excessively graphic photos or videos that are not necessary to convey public-safety information.
3. Moderation Workflow
We use a combination of automated tools and human moderators to evaluate reports before they are published; a simplified illustration of this flow follows the list below.
- Automated screening: All submissions pass through automated filters for spam, profanity, and known malicious patterns.
- Manual review: Reports flagged by the system or the community are reviewed by trained moderators for accuracy and safety.
- Priority handling: Reports that indicate imminent danger, ongoing violent incidents, or verified authority notices receive expedited review.
- Publication: Only reports that meet our criteria for relevance and safety are published to the public feed. Others are held for verification or rejected.
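The exact tooling we use is internal to Alerten. As a rough illustration only, the sketch below shows how a single submission might move through automated screening, priority handling, and review queues; all names, keywords, and categories are assumptions made for this example, not our actual filters or code.

```python
# Illustrative sketch only: a simplified triage flow for a submitted report.
# The patterns, keywords, and queue names below are hypothetical examples.
from dataclasses import dataclass, field

SPAM_PATTERNS = {"free crypto", "click here"}    # assumed spam/malicious patterns
URGENT_KEYWORDS = {"fire", "shooting", "flood"}  # assumed imminent-danger keywords

@dataclass
class Report:
    text: str
    from_verified_authority: bool = False   # e.g. police or municipal account
    flags: list[str] = field(default_factory=list)  # community flags, if any

def screen(report: Report) -> bool:
    """Automated screening: reject submissions matching known bad patterns."""
    text = report.text.lower()
    return not any(pattern in text for pattern in SPAM_PATTERNS)

def triage(report: Report) -> str:
    """Route a report to a queue: rejected, expedited, manual_review, or published."""
    if not screen(report):
        return "rejected"        # fails automated filters
    text = report.text.lower()
    if report.from_verified_authority or any(k in text for k in URGENT_KEYWORDS):
        return "expedited"       # imminent danger or verified authority notice
    if report.flags:
        return "manual_review"   # community-flagged, needs a trained moderator
    return "published"           # meets relevance and safety criteria

if __name__ == "__main__":
    print(triage(Report("House fire on Main St, avoid the area")))  # expedited
    print(triage(Report("Click here for free crypto")))             # rejected
```

In practice, automated filters are only a first pass: expedited and flagged items are still reviewed by trained moderators as described above.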
4. User Reporting & Flags
Community moderation helps us maintain quality; an illustrative sketch of how a flag might be structured follows this list. Users can:
- Flag content: Mark reports as false, abusive, or irrelevant.
- Provide evidence: Attach photos, timestamps, or eyewitness details to support a report or an appeal.
- Report abuse: Report accounts that repeatedly post harmful or false content.
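For illustration only, a community flag could be represented as structured data like the sketch below; the field names, categories, and values are assumptions for this example, not Alerten's actual schema.

```python
# Illustrative sketch only: one possible shape of a community flag record.
from dataclasses import dataclass, field
from enum import Enum

class FlagReason(Enum):
    FALSE_REPORT = "false"     # report appears fabricated or inaccurate
    ABUSIVE = "abusive"        # report contains abusive or prohibited content
    IRRELEVANT = "irrelevant"  # report is off-topic for public safety

@dataclass
class Flag:
    report_id: str                 # hypothetical identifier of the flagged report
    reason: FlagReason
    evidence_urls: list[str] = field(default_factory=list)  # photos, timestamps, etc.
    comment: str = ""              # optional eyewitness details

# Example: flagging a report as false with a supporting photo (values are made up).
flag = Flag(
    report_id="r-1042",
    reason=FlagReason.FALSE_REPORT,
    evidence_urls=["https://example.com/photo.jpg"],
    comment="No incident at this location at the stated time.",
)
print(flag.reason.value)  # "false"
```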
5. Enforcement Actions
When content or accounts violate this policy, we may take one or more of the following actions:
- Remove or redact the offending report from public view.
- Issue warnings to the account owner.
- Temporarily suspend account posting privileges.
- Permanently ban repeat or severe offenders.
- Refer criminal or dangerous conduct to law enforcement when appropriate.
6. Appeals
If your content has been removed or your account suspended, you may appeal by contacting our moderation team at support@alerten.org. Please include a link to or copy of the report in question, a clear explanation of why you believe the decision was incorrect, and any supporting evidence. Our team aims to respond within 24–72 hours.
7. Privacy & Data Handling
We minimize the collection of personally identifiable information (PII) in reports. Any personal data included in a report will be handled according to our Privacy Policy. Moderators may redact identifying information before publishing.
8. Updates to this Policy
We may update this moderation policy periodically. Substantive changes will be posted here with a new "Last updated" date. Continued use after changes constitutes acceptance.