At Veon3.com, we are committed to maintaining a safe and responsible platform for AI video generation. This Content Moderation Policy explains how we moderate content, what types of content are prohibited, and how we enforce our community standards.
We strive to:
- Promote Safety: Prevent harmful, illegal, or abusive content from being generated or shared
- Respect Rights: Protect intellectual property rights and privacy of individuals
- Foster Creativity: Enable legitimate creative expression within legal and ethical boundaries
- Maintain Trust: Build a platform that users and partners can trust
We employ a comprehensive, multi-stage content moderation system:

1. Pre-Generation Filtering
   - Automated analysis of prompts before video generation
   - Detection of prohibited keywords and phrases
   - Blocking of requests that violate our policies

2. Generation-Time Monitoring
   - Real-time analysis during video creation
   - Detection of policy-violating visual content
   - Automatic termination of prohibited generation attempts

3. Post-Generation Review
   - Automated scanning of completed videos
   - Flagging of potentially violating content for review
   - Storage of content metadata for safety purposes

4. User Reporting
   - Easy-to-use reporting mechanism for community members
   - Priority review of user-reported content
   - Feedback to reporters on actions taken

5. Human Review
   - Manual review by a trained moderation team
   - Appeals process for moderation decisions
   - Continuous improvement of automated systems
The following content categories are strictly prohibited:
- Pornography, sexually explicit material, or nudity
- Sexual acts or sexually suggestive content
- Content sexualizing or exploiting minors
- Graphic violence, gore, or torture
- Content promoting self-harm or suicide
- Depictions of child abuse or animal cruelty
- Content depicting or promoting illegal activities
- Drug manufacturing or distribution
- Weapons trafficking or illegal sales
- Content targeting individuals or groups based on protected characteristics
- Harassment, bullying, or intimidation
- Promotion of hate groups or ideologies
- Deepfakes intended to deceive or defraud
- Misleading content about public figures without disclosure
- Disinformation or fake news campaigns
- Non-consensual use of personal images or information
- Content violating data protection regulations
- Unauthorized surveillance or stalking content
- Unauthorized use of copyrighted material
- Trademark violations
- Plagiarism or passing off others' work as original
When we detect or receive reports of policy violations, we may take the following actions:
- Content Removal: Immediate deletion of violating content
- Generation Blocking: Prevention of similar content generation
- Watermarking: Additional markers on borderline content
- Warning: Notification of policy violation
- Feature Restriction: Temporary limitation of certain features
- Account Suspension: Temporary ban from the platform (7-30 days)
- Account Termination: Permanent ban for serious or repeated violations
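The graduated ladder above can be sketched as a simple decision function. This is a hypothetical illustration: the function name, severity labels, and violation-count thresholds are invented for the example and do not reflect Veon3's actual enforcement logic.

```python
# Hypothetical sketch of graduated enforcement. Thresholds and labels
# are illustrative only, not actual Veon3 policy.
def enforcement_action(severity: str, prior_violations: int) -> str:
    """Map a violation's severity and the user's history to an action."""
    if severity == "severe":
        return "account_termination"      # permanent ban for serious violations
    if prior_violations >= 3:
        return "account_suspension"       # temporary ban, e.g. 7-30 days
    if prior_violations >= 1:
        return "feature_restriction"      # temporary limitation of features
    return "warning"                      # first-time, lower-severity violation
```

The point of the sketch is that the same violation escalates with history: a first offense draws a warning, while repeated offenses draw suspension.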
We consider the following factors when determining enforcement actions:
- Severity of the violation
- Intent (accidental vs. deliberate)
- User history and past violations
- Potential harm caused
- Legal requirements
If you encounter content that violates our policies, you can report it through any of the following channels:

1. In-Platform Reporting
   - Use the "Report" button on any content
   - Select the violation category
   - Provide additional context if needed

2. Email Reporting
   - Send details to [email protected]
   - Include the content URL or a description
   - Explain the nature of the violation

3. Urgent Safety Issues
   - Email [email protected] for immediate threats or illegal content
   - Urgent reports are prioritized
Once a report is submitted, it is handled as follows:
- Acknowledgment: You'll receive confirmation within 24 hours
- Review: Our team reviews the report within 48 hours
- Action: Appropriate enforcement action is taken if a violation is confirmed
- Notification: Reporters are informed of the outcome where privacy permits
If you believe content was incorrectly moderated or your account was unfairly sanctioned, you can appeal:
- Email [email protected]
- Include your account information
- Explain why you believe the decision was incorrect
- Provide any relevant evidence or context

Appeals are handled as follows:
- Each appeal is reviewed by a different moderator than the one who made the original decision
- You will receive a response within 5-7 business days
- Decisions are based on policy and evidence
- Final decisions are communicated via email
We commit to transparency in our moderation practices:
- Total content moderation actions taken
- Breakdown by violation category
- Appeal statistics and outcomes
- System improvements implemented
- Clear explanations for moderation decisions
- Educational resources on policy compliance
- Regular updates on policy changes
We recognize that some content may have educational, artistic, or documentary value:
- Context Matters: Content is evaluated based on context and intent
- Proper Labeling: Educational or artistic content should be appropriately labeled
- Age Restrictions: Some content may be restricted to adult accounts
- Community Standards: Content must still comply with our core safety standards
Content related to news reporting or documentary purposes:
- Journalistic Standards: Should follow ethical journalism principles
- Factual Accuracy: Should be truthful and properly sourced
- Public Interest: Must serve a legitimate public interest purpose
All users are expected to:
- Familiarize themselves with our content policies
- Report policy violations they encounter
- Respect moderation decisions
- Use the platform responsibly and ethically
We handle data generated by moderation processes responsibly:
- Storage: Metadata of moderated content is retained for safety purposes
- Privacy: User privacy is protected throughout moderation processes
- Security: Moderation data is stored securely and access-controlled
- Retention: Data retention follows our Privacy Policy guidelines
We operate globally and comply with:
- Local laws and regulations in jurisdictions we serve
- International human rights standards
- Cultural sensitivities while maintaining core safety standards
We continuously improve our moderation systems through:
- Regular policy reviews and updates
- User feedback and community input
- Technology advancements in AI safety
- Industry best practices and standards
For questions about content moderation, contact us at [email protected], or visit our contact page for more options.
Last Updated: January 8, 2025