Quick Start Guide
Get up and running with Outharm in 5 minutes
This guide walks you through setting up your first content moderation project, from creating your account to making your first API call.
📋 Before You Start
Account Access
You'll need an Outharm account to access the Console
Content Ready
Sample text or images to test moderation
🚀 Step-by-Step Setup
Create Your Project
Sign in to the Console and create your first project. Projects help you organize different applications or environments.
Console Steps:
- Navigate to Console → Projects
- Click "Create Project"
- Enter a project name (e.g., "My App Production")
- Save your project
Generate API Token
Create an API token to authenticate your requests. Keep this token secure as it provides access to your project.
Console Steps:
- Go to Console → Access Tokens
- Click "Generate Token"
- Give it a descriptive name
- Copy and securely store the token
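One common way to "securely store" the token is to keep it out of your source code entirely and read it from an environment variable at runtime. A minimal Python sketch (the variable name `OUTHARM_API_TOKEN` is our own choice for illustration, not an Outharm convention):

```python
import os

# Hypothetical environment variable name; use whatever matches your
# deployment configuration. The fallback is only a placeholder.
api_token = os.environ.get("OUTHARM_API_TOKEN", "your-api-token")
```

Set the variable in your shell or secrets manager before starting your application, so the token never appears in version control.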
Define Your Schema
Create a Schema that describes your content structure. This ensures the API knows what fields to expect in your moderation requests.
Example Schema:
A "Blog Post" schema with three fields: title (text), content (text), and images (a list of images). These are the fields used in the API call below.
Configure Categories
Simply enable or disable the categories you want to detect based on your platform's needs.
Simple Setup:
- Toggle categories on/off (sexual content, violence, hate speech, etc.)
- Choose what you consider harmful for your platform
Make Your First API Call
Now you're ready to moderate content! Choose between automated AI moderation and manual human review.
🔌 Your First API Call
Automated Moderation
Using the schema example above, here's how to make your first moderation request:
POST https://api.outharm.com/moderation/automated
Authorization: Bearer your-api-token
Content-Type: application/json

{
  "schema_id": "your-blog-post-schema-id",
  "content": {
    "title": ["Check out this awesome content!"],
    "content": ["This post contains some inappropriate material..."],
    "images": [
      "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQ...",
      "https://example.com/suspicious-image.jpg"
    ]
  }
}
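The raw HTTP request above can be issued from any language. As an illustration, here is a minimal Python sketch using only the standard library; the endpoint, headers, and body come from the example above, while the token and schema ID are placeholders you must replace with your own values:

```python
import json
import urllib.request

API_TOKEN = "your-api-token"  # from Console → Access Tokens
SCHEMA_ID = "your-blog-post-schema-id"  # from your Schema in the Console

# Request body matching the "Blog Post" schema fields.
payload = {
    "schema_id": SCHEMA_ID,
    "content": {
        "title": ["Check out this awesome content!"],
        "content": ["This post contains some inappropriate material..."],
        # Per the example above, images may be URLs or base64 data URIs.
        "images": ["https://example.com/suspicious-image.jpg"],
    },
}

request = urllib.request.Request(
    "https://api.outharm.com/moderation/automated",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request (requires a valid token):
# with urllib.request.urlopen(request) as response:
#     result = json.load(response)
#     print(result["is_harmful"])
```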
Example Response (Harmful Content Detected):
{
  "submission_id": "123e4567-e89b-12d3-a456-426614174000",
  "schema_id": "your-blog-post-schema-id",
  "is_harmful": true,
  "results": {
    "title": {
      "is_harmful": false,
      "detailed": [
        { "is_harmful": false }
      ]
    },
    "content": {
      "is_harmful": true,
      "detailed": [
        {
          "is_harmful": true,
          "categories": ["sexual", "violence"]
        }
      ]
    },
    "images": {
      "is_harmful": true,
      "detailed": [
        { "is_harmful": false },
        {
          "is_harmful": true,
          "categories": ["sexual"]
        }
      ]
    }
  }
}
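The response nests a per-field verdict under `results`, with a `detailed` entry for each item submitted in that field (note how the second image is flagged but the first is not). As a sketch of how a client might consume this, here is the example response inlined as a Python dict, with a small loop that collects which entries were flagged and why:

```python
# The example response from the docs, inlined for illustration.
result = {
    "submission_id": "123e4567-e89b-12d3-a456-426614174000",
    "schema_id": "your-blog-post-schema-id",
    "is_harmful": True,
    "results": {
        "title": {"is_harmful": False, "detailed": [{"is_harmful": False}]},
        "content": {
            "is_harmful": True,
            "detailed": [{"is_harmful": True, "categories": ["sexual", "violence"]}],
        },
        "images": {
            "is_harmful": True,
            "detailed": [
                {"is_harmful": False},
                {"is_harmful": True, "categories": ["sexual"]},
            ],
        },
    },
}

# Collect, per field, which entries were flagged and their categories.
flagged = {}
for field, field_result in result["results"].items():
    for index, item in enumerate(field_result["detailed"]):
        if item["is_harmful"]:
            flagged.setdefault(field, []).append(
                {"index": index, "categories": item.get("categories", [])}
            )

print(flagged)
```

The `index` in each entry maps back to the position of the item in the array you submitted, so you know exactly which image or text block to reject.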
🚀 Ready to Get Started?
Congratulations! You've successfully set up content moderation. Here are some recommended next steps to get the most out of Outharm:
Related Documentation
- Platform Overview - Understand the platform basics
- Categories - What content types can be detected
- Schemas & Components - How to structure content for analysis
- Console Walkthrough - Complete console guide