# Video
Nima allows you to automatically detect harmful videos using powerful AI detection providers.

Example payload:

```json
{
  "priority": "severe",
  "channel": "automated detection",
  "reason_for_request": "Lorem abuse reason for request",
  "customerSpecific": {},
  "content": {
    "content_id": "098765434567",
    "title": "sniper video",
    "body": "forest walk holding weapon",
    "url": "https://videos.pexels.com/video-files/6201055/6201055-uhd_3840_2160_25fps.mp4",
    "token": "",
    "type": {
      "id": "VIDEO_STATIC"
    },
    "status": "published",
    "customerSpecific": {}
  },
  "reportee": {
    "id": "123",
    "name": "Reportee John",
    "lang": "en",
    "phoneNumber": "070 564123",
    "emailAddress": "reportee@tremau.net",
    "customerSpecific": {}
  },
  "label": ["weapon"]
}
```

## Request payload requirements

For video moderation, ensure your request payload includes the following content-type-specific requirements:

| Attribute | Required | Description |
| --- | --- | --- |
| `content.url` | Yes | For video-based moderation, the URL should point to the video itself. If the URL provided is intended to display an entire page, please see "Detecting harmful pages". |
| `content.type.id` | Yes | Must be set to `"VIDEO_STATIC"`. |

Nima acts as a central hub and demultiplexer for all your proactive detection needs, providing single-point connectivity to a wide range of third-party AI detection providers. By using Nima for integration, you eliminate the need to develop multiple complex API integrations and future-proof your Trust & Safety operations. Upon sending an item to Nima, the platform relays it to the AI detection provider(s) of your choice and facilitates automated actions based on the received responses.

## Supported content types

Nima supports proactive detection for the following content types. Click on each type for a detailed implementation guide:

- Text moderation docid\ fm6yrxolw76sv39vrnuh9
- Image moderation docid\ gtnrroo9oo b8utlom3a4
- Video moderation docid\ pcdyktouo36ksedccdhwq
- Audio moderation docid\ kmjfkvvy ejtwtb0afeos
- Emoji moderation docid\ kva42 yagjoanlqznramw
- Livestream moderation docid\ vrfdpi7e4p3 1mldyqb13
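Before wiring anything up, the two video-specific requirements from the table above can be sanity-checked client-side. The helper below is purely illustrative and not part of the Nima API:

```python
def check_video_payload(payload: dict) -> list[str]:
    """Return any violations of the video payload requirements.

    Hypothetical pre-submission check, not part of the Nima API itself.
    """
    problems = []
    content = payload.get("content", {})
    # content.url must point at the video itself, not a page embedding it
    if not content.get("url"):
        problems.append("content.url is required for video-based moderation")
    # content.type.id must be the static-video enum value
    if content.get("type", {}).get("id") != "VIDEO_STATIC":
        problems.append('content.type.id must be "VIDEO_STATIC"')
    return problems
```

An empty return value means the payload satisfies both video-specific requirements; each string describes one violation.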
## Setting up proactive detection

### Prerequisites

To use AI detection in your workflows, the following must be in place:

- The desired AI detection provider(s) must be configured in the AI Marketplace with valid credentials.
- You must have defined policies and moderation queues to be used for automated enforcement.

The primary endpoint to submit cases for proactive detection is `queues/process_file`.

Proactive detection is enabled through the rule engine. Navigate to **Admin Settings > Workflow Configuration** to set up a rule and ensure your payloads will be received in Nima.

### Conditions

To route any case to an AI detection provider, the rule must contain these two mandatory conditions:

- **Channel** must be set to "Proactive Detection".
- **Content type** must be set to the type of content being submitted (e.g., "text", "image", "video", "audio", or "video stream").

Once these are set, the list of available AI detection providers for that content type will be accessible via the **Choose provider** button. It is possible to configure multiple models (up to 3) per rule using the AND and OR operators; if you do, the multiple models will be displayed in the moderation console, the API response, and the backend.

For content-type-specific implementation details and requirements, refer to the content-specific pages of the documentation.

## How thresholds and actions work

When a case is processed by an AI detection provider, the provider returns a confidence score. Nima uses your pre-configured thresholds to determine the final action:

- 🔴 **Red threshold**: if the confidence score falls within this range, the pre-defined moderation action (e.g., removal, suspension) is immediately and automatically invoked.
- 🟠 **Orange threshold**: the case is redirected to a specific moderation queue for manual review.
- 🟢 **Green threshold**: the case is processed out of the system and not actioned.

## Sending requests

### Request payloads

The structure of the request payload made to `queues/process_file` is very similar to that for user reports, with one difference: the `channel` must equal "automated detection".
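A video submission to `queues/process_file` can be sketched with the standard library. The base URL and bearer-token authentication below are placeholders, not confirmed by this page; use the endpoint and credentials from your own Nima configuration:

```python
import json
import urllib.request


def build_video_payload(content_id: str, title: str, body: str, url: str) -> dict:
    """Assemble a minimal proactive-detection payload for a static video."""
    return {
        "priority": "severe",
        "channel": "automated detection",    # required for proactive detection
        "reason_for_request": "Automated video scan",
        "content": {
            "content_id": content_id,
            "title": title,
            "body": body,
            "url": url,                      # direct link to the video file
            "type": {"id": "VIDEO_STATIC"},  # required for video moderation
            "status": "published",
            "customerSpecific": {},
        },
        "reportee": {
            "id": "123",
            "name": "Reportee John",
            "lang": "en",
            "emailAddress": "reportee@tremau.net",
            "customerSpecific": {},
        },
        "label": ["weapon"],
    }


def submit(payload: dict, base_url: str, token: str) -> dict:
    """POST the payload to queues/process_file.

    base_url and the Bearer auth scheme are assumptions for illustration.
    """
    req = urllib.request.Request(
        f"{base_url}/queues/process_file",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Any HTTP client works equally well; the only essentials are the JSON body, the `channel` value, and the `VIDEO_STATIC` content type.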
### Global parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `reason_for_request` | string | Yes | Explanation of why the content or profile is being reported. |
| `priority` | string | No | Priority of the incoming request: `['severe', 'high', 'medium', 'low']`. |
| `channel` | string | Yes | To use proactive detection features, `channel` must be set to `'automated detection'`. |
| `customerSpecific` | object | No | Global customer-specific attributes. AI detection providers will inject moderation results here before the case is placed into Nima. |
| `label` | string[] | No | Main reason and policy infringement the content is being reported for (for example "hateful content", "terrorism", "child abuse", etc.). |

### Content

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `content.title` | string | Yes | A title for the piece of content. |
| `content.content_id` | string | Yes | ID corresponding to the content that has been uploaded and reported by a user. |
| `content.url` | string | Yes | Publicly available URL of the reported content. This may also soon be dependent on content type. |
| `content.type` | object | Yes | The content type sits as an object, e.g. `"type": {"id": "TXT"}`. See the list of values below. |
| `content.rawText` | string | No | Raw text is used when the content type is `TXT`, for automated text moderation via AI detection providers. If `rawText` is supplied, then `language` is required. |
| `content.language` | string | No | Required if `rawText` is supplied. |
| `content.token` | string | No | Used for livestreaming. |
| `content.status` | string | No | Status of the content on the client platform. |
| `content.customerSpecific` | object | No | Customer-specific key-value attributes that will be passed back in webhook responses upon receipt of a moderation action being applied specifically to the content. |

The content type can have the following values, where the `id` to use is the enum name in caps:

```js
const VIDEO_STREAM = "video_stream";
const VIDEO_STATIC = "video_static";
const IMG = "image";
const AUDIO = "audio";
const USER_PROFILE = "user_profile";
const CHAT = "chat";
const TXT = "text";
const IFRAME = "iframe";
const PDF = "pdf";
```
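The coupling between `rawText` and `language` in the table above can be enforced with a small helper. This is a hypothetical client-side builder, not part of the Nima API; field names follow the table:

```python
from typing import Optional


def text_content(content_id: str, title: str, url: str,
                 raw_text: Optional[str] = None,
                 language: Optional[str] = None) -> dict:
    """Build a content object for TXT moderation.

    Enforces the documented rule that language is required whenever
    rawText is supplied. Hypothetical client-side helper only.
    """
    if raw_text is not None and not language:
        raise ValueError("language is required when rawText is supplied")
    content = {
        "content_id": content_id,
        "title": title,
        "url": url,
        "type": {"id": "TXT"},   # the id is the enum name in caps
    }
    if raw_text is not None:
        content["rawText"] = raw_text
        content["language"] = language
    return content
```

Catching the missing-`language` case before submission avoids a round trip that the server would otherwise reject.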
### Reportee

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `name` | string | Yes | The name of the individual who is being reported. |
| `id` | string | Yes | User ID of the reportee (content uploader) as it appears in your database. Falls back to the reportee `emailAddress` when `id` equals "0", "anonymous", or "unknown". |
| `lang` | string | No | The language of the individual being reported. |
| `phoneNumber` | string | No | Phone number of the reportee. |
| `emailAddress` | string | Yes | User email of the reportee, for communication purposes (statement of reasons sent by email, appeals, etc.). |
| `customerSpecific` | object | No | Reportee customer-specific attributes. |

### Location

The location attribute is not essential, and any one of the following attributes may be passed in:

| Parameter | Type | Required | Description / examples |
| --- | --- | --- | --- |
| `postalCode` | string | No | "PT12 ZPE" |
| `city` | string | No | "London" |
| `region` | string | No | "Middlesex" |
| `countryCode` | string (use ISO codes, e.g. GB) | No | "GB" |
| `ipAddress` | string | No | "192.168.2.1" |
| `customerSpecific` | object | No | `{"key": "value"}` |

### A note on customerSpecific attributes

The global, content, reportee, and reporter attributes all possess an optional field which accepts an object. `customerSpecific` key-value pairs will always be accepted; however, to observe these attributes in the metadata on the moderation dashboard, the client must configure them in the **Admin Settings > Custom Attributes** configuration page. More info on custom attributes docid 2fmg8gdwrlps9p7lslnuf

## Response

Upon processing, the API responds with a `flagData` attribute. This attribute mirrors the input JSON payload but is decorated with additional data retrieved from the AI detection provider. The global `customerSpecific` attribute will contain the raw data retrieved from the AI detection provider:

```json
{
  "shouldQueueFlagCreate": true,
  "flagData": {
    "channel": "automated detection",
    "reason_for_request": "Content moderated using AI",
    "customerSpecific": {
      "summary": {
        "id": "0cbab320-a50f-11f0-863a-3390509fdd90",
        "code": 200,
        "project_id": 1109668119,
        "user_id": 4762,
        "created_on": "2025-10-09T12:54:11.518Z",
        "status": [
          {
            "status": { "code": "0", "message": "success" },
            "response": {
              "input": {
                "charge": 0.00001,
                "created_on": "2025-10-09T12:54:10.514Z",
                "model": "mod64_4_new_dense_extra_ens",
                "id": "0cbab320-a50f-11f0-863a-3390509fdd90",
                "media": {
                  "url": null,
                  "filename": null,
                  "type": "photo",
                  "mime_type": "jpeg",
                  "mimetype": "image/jpeg",
                  "width": 400,
                  "height": 400,
                  "num_frames": 1,
                  "duration": 0
                },
                "model_version": 1,
                "model_type": "classification",
                "user_id": 4762,
                "project_id": 1109668119,
                "config_version": 1,
                "config_tag": "default"
              },
              "output": [
                {
                  "time": 0,
                  "classes": [
                    { "score": 1, "class": "general_not_nsfw_not_suggestive" },
                    { "score": 9.44821110238081e-10, "class": "general_nsfw" },
                    { "score": 3.1891640439596362e-9, "class": "general_suggestive" },
                    { "score": 1, "class": "no_female_underwear" },
                    { "score": 4.775388418032378e-10, "class": "yes_female_underwear" },
                    { "score": 1, "class": "no_male_underwear" },
                    { "score": 6.144879605152197e-10, "class": "yes_male_underwear" },
                    { "score": 1, "class": "no_sex_toy" },
                    { "score": 1.682822992066235e-9, "class": "yes_sex_toy" },
                    { "score": 1, "class": "no_female_nudity" },
                    { "score": 3.7282382736592007e-10, "class": "yes_female_nudity" },
                    { "score": 1, "class": "no_male_nudity" },
                    { "score": 4.351427662641072e-9, "class": "yes_male_nudity" },
                    { "score": 1, "class": "no_female_swimwear" },
                    { "score": 8.637842524583661e-11, "class": "yes_female_swimwear" },
                    { "score": 1, "class": "no_male_shirtless" },
                    { "score": 6.832969190462278e-10, "class": "yes_male_shirtless" },
                    { "score": 0.9990317821502686, "class": "no_text" },
                    { "score": 0.0009682858944870532, "class": "text" },
                    { "score": 0.00004784707925864495, "class": "animated" },
                    { "score": 0.0000030272485673776828, "class": "hybrid" },
                    { "score": 0.9999492168426514, "class": "natural" },
                    { "score": 1.5425227761767244e-9, "class": "animated_gun" },
                    { "score": 3.8822928871695694e-8, "class": "gun_in_hand" },
                    { "score": 6.655429984903094e-8, "class": "gun_not_in_hand" },
                    { "score": 0.9999998807907104, "class": "no_gun" },
                    { "score": 6.558474874296394e-10, "class": "culinary_knife_in_hand" },
                    { "score": 1.5358057936509795e-8, "class": "culinary_knife_not_in_hand" },
                    { "score": 3.359654110468568e-9, "class": "knife_in_hand" },
                    { "score": 8.217730282922275e-7, "class": "knife_not_in_hand" },
                    { "score": 0.9999991655349731, "class": "no_knife" },
                    { "score": 9.284041313506464e-10, "class": "a_little_bloody" },
                    { "score": 1, "class": "no_blood" },
                    { "score": 8.553271868549928e-9, "class": "other_blood" },
                    { "score": 6.558339982198902e-11, "class": "very_bloody" },
                    { "score": 1, "class": "no_fight" },
                    { "score": 3.38575834035737e-11, "class": "yes_fight" }
                    /* truncated for brevity */
                  ]
                }
              ]
            }
          }
        ],
        "from_cache": false,
        "hsl_options": {}
      },
      "probs": {
        "general_not_nsfw_not_suggestive": 1,
        "general_nsfw": 0,
        "general_suggestive": 0,
        "no_female_underwear": 1,
        "yes_female_underwear": 0
        /* truncated for brevity */
      },
      "detectedBy": "hive",
      "outcome": {
        "status": "green",
        "matchedRule": null
      }
    },
    "content": {
      "content_id": "f747c6ad-7771-4be6-a4e0-8ffbf7c4401c",
      "title": "profile pic",
      "body": "profile pic",
      "url": "https://picsum.photos/400/400",
      "type": { "id": "IMG" },
      "customerSpecific": { "platform_content_type": "profile picture" }
    },
    "reportee": {
      "id": "12345",
      "name": "Test User",
      "lang": "en-US",
      "phoneNumber": "+1234567890",
      "emailAddress": "placeholder_12345@tremau.fr",
      "customerSpecific": { "reportee_player_level": "VIP" }
    }
  },
  "message": "success",
  "ok": true
}
```

If any configured rule is matched in the system, the payload will bear an `outcome` JSON node:

```json
"outcome": {
  "status": "green",
  "matchedRule": null
}
```

If the status is green, the case will not appear on the moderation console. If the status is orange or red, the corresponding rule will appear in the JSON response, and the corresponding case will be visible in the queues (for orange) or in the report list (for red).
> **Note:** Proactive detection features are only available if the credentials for the AI detection providers have been provided in the configuration.

## Mapping content types to AI detection providers

Nima ensures that the correct AI detection provider is used for the correct content type. The input request payload must have a content type that matches the content type supported by the provider configured in the rule (e.g., if you set up a rule using a text-only AI detection provider, the payload must specify content type `TXT`).

Note: for emoji moderation, use content type `TXT`, as emojis are processed as text. See emoji moderation docid\ kva42 yagjoanlqznramw for details.

## Configuring detection rules

To create a rule for automated video detection, navigate to **Admin Settings > Workflow Configuration** and define the following conditions.

Required conditions:

- **Channel** must be set to "Proactive Detection".
- **Content type** must be set to `VIDEO_STATIC`.

Once these conditions are set, the list of available AI detection providers will be accessible via the **Choose provider** button.

## Audio profanity detection for video

Sightengine supports audio profanity detection for video content (`VIDEO_STATIC` type). This allows you to moderate both the visual and audio components of a video in a single request.

When configuring a rule for video content with audio profanity detection:

1. Set the content type to `VIDEO_STATIC`.
2. Select Sightengine as the provider.
3. Choose the audio profanity class.

Detection runs synchronously and returns any profanity matches found in the video's soundtrack, along with a transcription. Videos without sound are handled gracefully: the system processes the content normally, without the audio profanity component.
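The content-type-to-provider mapping described above can be pictured as a capability table. The provider entries below are illustrative only, since the real list comes from your AI Marketplace configuration; Sightengine's `VIDEO_STATIC` support is the one pairing stated on this page, and `some_text_provider` is entirely hypothetical:

```python
# Illustrative capability table; real entries come from the AI Marketplace
# configuration, and most provider/type pairings here are assumptions.
PROVIDER_CONTENT_TYPES = {
    "sightengine": {"IMG", "VIDEO_STATIC"},  # VIDEO_STATIC support is documented above
    "hive": {"IMG"},                         # hive appears in the image response example
    "some_text_provider": {"TXT"},           # hypothetical text-only provider
}


def providers_for(type_id: str) -> list[str]:
    """List providers whose supported content types include type_id.

    There is no EMOJI enum: emojis are submitted as TXT and therefore
    route to text-capable providers.
    """
    if type_id == "EMOJI":
        type_id = "TXT"
    return sorted(name for name, types in PROVIDER_CONTENT_TYPES.items()
                  if type_id in types)
```

A payload whose content type yields an empty provider list would not match any rule, which is exactly the mismatch the mapping section warns against.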