API Documentation
Proactive Detection
Image
Nima allows you to automatically detect harmful images using powerful AI detection providers.

Example payload:

```json
{
    "channel": "Automated Detection",
    "reason_for_request": "Lorem abuse reason for request",
    "customerspecific": {
        "customer_specific_key_1": "customer_specific_value_1",
        "customer_specific_key_2": "customer_specific_value_2",
        "customer_specific_key_3": "customer_specific_value_3"
    },
    "content": {
        "content_id": "098765434567",
        "title": "weapon",
        "body": "Lorem ipsum",
        "url": "https://images.unsplash.com/photo-1585589266882-2cb137ba7db6?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=725&q=80",
        "token": "",
        "type": {
            "id": "img"
        },
        "status": "published",
        "customerspecific": {}
    },
    "reportee": {
        "id": "123",
        "name": "Reportee John",
        "lang": "en",
        "phonenumber": "070 564123",
        "emailaddress": "reportee@tremau.net",
        "customerspecific": {}
    },
    "location": {
        "city": "Stockholm",
        "postalcode": "111 57",
        "region": "Stockholm",
        "countrycode": "SE",
        "ipaddress": "192.168.0.1",
        "customerspecific": {}
    }
}
```

Request payload requirements

For image moderation, ensure your request payload meets the content-type-specific requirements; in particular, `content.type.id` must be set to `"img"` and `content.url` must point to the image to be analyzed, as shown in the example payload above. For all other payload parameters, please refer to the docid: fhlq7h pq8ibh7ocl2dza documentation.

Configuring detection rules

To create a rule for proactive image detection, navigate to Admin Settings > Workflow Configuration and define the following conditions.

Required conditions:

- Channel must be set to "Proactive Detection"
- Content type must be set to "Image"

Once these conditions are set, the list of available AI detection providers will be accessible via the Choose Provider button.

The screenshot above shows an example rule configured to detect sensitive/disturbing depictions in images using the Gore Detection model from Sightengine.
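As a minimal sketch, the example payload above can be assembled programmatically before submission. The helper below is illustrative only: the function name, default values, and the idea of a single build step are assumptions, and the field names follow the example payload rather than a formal schema. The actual submission endpoint and authentication scheme are not shown in this section, so the request itself is left as a comment.

```python
import json


def build_image_detection_payload(content_id: str, image_url: str,
                                  title: str = "", body: str = "") -> dict:
    """Assemble a proactive image-detection payload.

    Field names mirror the example payload in this document; the helper
    itself (name, parameters, defaults) is hypothetical.
    """
    return {
        "channel": "Automated Detection",
        "reason_for_request": "Automated image scan",
        "customerspecific": {},
        "content": {
            "content_id": content_id,
            "title": title,
            "body": body,
            "url": image_url,        # must point to the image to be analyzed
            "token": "",
            "type": {"id": "img"},   # "img" is required for image moderation
            "status": "published",
            "customerspecific": {},
        },
    }


payload = build_image_detection_payload(
    "098765434567",
    "https://example.com/image.jpg",  # placeholder image URL
    title="weapon",
)

# Submission would then be an authenticated HTTP POST of this JSON body,
# e.g. with the `requests` library (endpoint and auth are assumptions):
# requests.post("https://<your-nima-host>/api/tickets",
#               json=payload, headers={"Authorization": "Bearer <token>"})
print(json.dumps(payload, indent=4))
```

Because `channel` is fixed to "Automated Detection" and `content.type.id` to `"img"`, payloads built this way match the rule conditions described under "Configuring detection rules".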