# Being a Moderator
As a moderator, when you log in to Nima, you land on the moderation dashboard: the central hub for your work, designed to make human review as efficient and as accurate as possible. Nima offers a customizable and flexible experience to meet the specific needs of you and your team.

## Optimizing your workflow

### Managing content queues

The console offers two distinct modes for managing content queues.

#### Dashboard overview

When you log in as a moderator, you'll see a dashboard with all the queues you're assigned to. Team assignments to queues are managed in docid\ azsiick6eehflzwytql v settings.

The dashboard has two tabs, both organised by queue:

* **Reports**: reports waiting to be reviewed by moderators for the first time.
* **docid\ hadsvd38gpyc7canb4oyi**: reports that users have appealed after the first review, pending a moderator's second review.

You can see how many reports are pending and how long they've been waiting (less than 24 hours, less than 48 hours, or more than 48 hours). The dashboard shows when it was last updated and automatically refreshes to keep the information current.

#### Queue viewing modes

You can view queues in two different ways, each suited to different tasks.

**List view** displays reports in a table format, giving you a comprehensive overview of the queue. This mode works well for bulk moderation and manual prioritization:

* Cases appear as rows that are easy to scan.
* You can manually select specific reports to prioritize or skip.
* Customize which columns appear to see the details that matter to you.
* Handle specific cases out of the normal order when needed.

**docid\ angnoc xkytjfghpyhxmz** shows cases one at a time when you click the play button, using a case-centric approach that consolidates all reports on a given piece of content into a single view. It provides a full-screen view with comprehensive context for each case, so nothing gets overlooked. This mode is ideal for high-volume queues where you need to process cases quickly and sequentially:

* Cases appear in a continuous sequence, moving automatically to the next case after you make a decision.
* Each case view includes a summary of all reports, a customisable reported account section, and the full audit trail.
* The streamlined workflow keeps you focused on making decisions without having to select each case manually.

See docid\ angnoc xkytjfghpyhxmz for a detailed walkthrough of the interface.

#### How cases appear and are ordered

Which cases appear in which queues depends on the rules you've configured: both docid\ hfx29ae9o5ktdj2 dokxb and docid\ hfx29ae9o5ktdj2 dokxb control where cases go.

The order of cases within each queue is based on their severity: Nima automatically puts the most critical content first, so you can address urgent issues quickly. You can customize how cases are prioritized in queue prioritisation settings to match your platform's specific needs.
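Conceptually, this ordering behaves like a severity-first sort. The sketch below illustrates the idea in TypeScript; the `QueueCase` shape, the numeric severity scale, and the oldest-first tie-breaker are assumptions for illustration, not Nima's actual scheme.

```typescript
// Hypothetical sketch of severity-first queue ordering; the case shape,
// severity scale, and tie-breaker are assumptions, not Nima's actual schema.
interface QueueCase {
  id: string;
  severity: number; // higher = more critical (assumed scale)
  reportedAt: Date; // used as a tie-breaker: oldest waiting case first
}

function orderQueue(cases: QueueCase[]): QueueCase[] {
  return [...cases].sort(
    (a, b) =>
      b.severity - a.severity ||                      // most critical first
      a.reportedAt.getTime() - b.reportedAt.getTime() // then oldest first
  );
}
```

A custom prioritisation setting would then amount to swapping in a different comparator.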
## The moderation console

The moderation console is built around a case-centric approach, consolidating all reports on a given piece of content (from AI proactive detection or user reports) into a single view. This gives moderators a comprehensive summary of reports and fuller context around each case, while individual reports can still be accessed and reviewed independently within the case view.

The interface is structured around three panels, with navigation and controls at the top. On the top bar, you can see:

* **Navigation breadcrumb**: shows where you are (Reports / queue name / case number).
* **Undo button**: reverts your last action.
* **Timer**: shows how long you've been reviewing the current case.

### Left panel: report details

This panel lists all the reports associated with the case, along with their audit trails. Each report card contains:

* **Source**: the reporting channel (user report, proactive detection, etc.).
* **Input labels**: tags applied to categorize the report, either by the AI detection provider or by the users reporting a piece of content.
* **Reason for report**: what users may have added when reporting a piece of content.
* **Confidence score**: when the report comes from proactive AI detection, the confidence score returned by the detection engine is displayed.
* **Audit trails**: the history of actions, decisions, and changes related to that report.

Reports can be filtered to help moderators focus on what they need most.
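For illustration, a report card as described above could be modeled roughly like this; the type and field names are assumptions, not Nima's actual API.

```typescript
// Minimal sketch of the fields a report card exposes, per the list above.
// All names and types here are illustrative assumptions.
type ReportSource = 'user_report' | 'proactive_detection';

interface AuditEntry {
  timestamp: Date;
  actor: string; // moderator, system, or detection engine
  event: string; // e.g. "decision_applied", "re_routed"
}

interface ReportCard {
  source: ReportSource;     // the reporting channel
  inputLabels: string[];    // tags from the AI provider or reporting users
  reasonForReport?: string; // free text a user may have added
  confidenceScore?: number; // present only for proactive AI detection
  auditTrail: AuditEntry[]; // actions, decisions, and changes on the report
}
```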
### Center panel: content, attributes, and reported account

This panel shows the reported content, its details, and the reported account section.

**Content preview**

* View the actual content (image, video, text, etc.), or the multiple pieces of content in the case of a collection.
* If a preview isn't available, click the download link to inspect the original file.
* Toggle blur/unblur for sensitive content.

**Content details**

* **Preview content**: text description or transcript, when available.
* **Content ID**: unique identifier for this piece of content.
* **Content type**: format (image, video, text, listing, etc.).
* **Country/locale**: location or language information.
* **Custom attributes**: all the additional metadata, like prior reviewer signals or system-detected risks, as configured in docid 2fmg8gdwrlps9p7lslnuf.

**Reported account**

This section displays key insights about the reported account, such as report counts and policy violations. The reported account section is fully customisable: teams can configure which custom attributes are displayed, allowing you to surface the information most relevant to your review workflow. By clicking on the account, you will get a full history of all the cases ingested in Nima for that account.

### Right panel: actions and policies

This is where you make your decision and apply the outcome.

* Use docid\ o4mxk fvhcwa88dmpctva to select whether the content violates a policy. Some policies have sub-options to let you specify the exact type of violation.
* docid\ lsfkyoqsi365x alowit1
* You can choose only one policy or workflow action per report.
* Both the actions and the policies displayed for each queue are configurable in docid\ azsiick6eehflzwytql v.
* When reviewing docid\ hadsvd38gpyc7canb4oyi, only moderation actions are displayed instead of policies.

Additional actions:

* **Re-route flag**: send the case to another queue for specialized review.
* **Escalate flag**: send the case to senior moderators or escalation queues.
* **Request information**: ask the reported account for more context.

Once a report has been moderated, if it is re-opened from the report list, you will find here the details of the policy and action applied; by selecting a different policy, you change the decision.
## Additional capabilities

For challenging or ambiguous flags, the console provides built-in tools to assist in the decision-making process.

### Revert actions

When a case has already been processed, you can revert the action that was applied. This is useful for correcting mistakes identified during quality control, or when new information changes the original decision.

To revert an action:

1. Open a processed case in the moderation console.
2. Click the "Revert action" button.
3. Select a revert action from the list (only actions marked as revert actions are shown).

When you apply a revert action:

* The original enforcement is undone on your platform via the configured webhook (see the sketch after this list).
* The affected user receives a notification (if a template is configured in user notification) explaining the reversal.
* The revert action appears in the audit trail and in the flag list view.

Revert actions are configured in Settings > Actions, like any other action.
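On the platform side, handling the revert webhook might look roughly like the following sketch. The endpoint path, the payload fields (`actionType`, `contentId`, `accountId`), and the helper functions are all hypothetical; refer to your own webhook configuration for the real contract.

```typescript
// Sketch of a platform-side webhook handler that undoes the original
// enforcement when a revert action is applied. Payload shape is assumed.
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhooks/moderation', (req, res) => {
  const { actionType, contentId, accountId } = req.body; // assumed fields

  if (actionType === 'revert') {
    // Undo whatever the original enforcement did on our side.
    restoreContent(contentId);     // e.g. republish a removed post
    liftAccountPenalty(accountId); // e.g. clear a strike or unban
  }

  res.sendStatus(200); // acknowledge so the delivery is not retried
});

// Hypothetical platform-side helpers.
function restoreContent(contentId: string): void { /* ... */ }
function liftAccountPenalty(accountId: string): void { /* ... */ }

app.listen(3000);
```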
### Blur functionality

To protect moderator wellbeing, you can blur sensitive images. Click the unblur button next to any blurred image to view the original, or click blur to reapply the effect. You can configure blur and greyscale intensity in your docid\ aa7jy1u8cvlirssikv i, accessible by clicking on your user icon at the top left.

### Translating content

When reviewing text in other languages, use the built-in translation feature. Select your target language from the dropdown and click Translate to instantly view the content in that language.

### Escalating reports

For complex or sensitive reports that need expert review, click the escalate flag button. Select the appropriate escalation queue from the dropdown and confirm to move the case. The report is immediately removed from your queue and sent to the specialized team. Escalation queues are created [here] (link to be added). Cases can also be escalated from the list view.

### Request information from users

When you need additional context before making a decision, use the request information feature to engage directly with the reported account. This allows you to ask clarifying questions without leaving the moderation console.

How it works:

1. Click Request information in the right panel.
2. Compose your question or request for clarification.
3. The user receives a formal email with a call-to-action button linking to a Nima form.
4. The user submits their response through the form.
5. Their response appears as a chat-like thread directly on the case in your moderation console.
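The chat-like thread from the last step could be modeled roughly as follows; this is an illustrative sketch, and the names and fields are assumptions rather than Nima's actual schema.

```typescript
// Illustrative model of a request-information exchange on a case.
interface InfoRequestMessage {
  author: 'moderator' | 'reported_account';
  body: string;
  sentAt: Date;
  via: 'console' | 'email_form'; // moderators write in the console;
                                 // users reply through the emailed form
}

interface InfoRequestThread {
  caseId: string;
  messages: InfoRequestMessage[]; // rendered chronologically on the case
}
```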
## Moderator wellbeing

Blur functionality protects moderator wellbeing by reducing exposure to explicit or disturbing content. This feature automatically applies blur and greyscale filters to sensitive images in the moderation interface.

### Configuring blur effects

The blur effects configuration allows you to adjust how images are displayed within your moderation interface, applying blur and greyscale effects to mitigate exposure to sensitive content.

* **Enable filters**: choose whether the blur effect is applied to images automatically, or disable it entirely.
  * Toggle on: the blur effect is applied to images based on the set configuration.
  * Toggle off: no blur is applied to the images, and they are displayed normally.
* **Blur percentage (0% to 100%)**: controls the intensity of the blur effect on images.
  * 0%: no blur effect (the image is displayed in its original form).
  * 100%: maximum blur effect (the image is heavily blurred).
* **Greyscale percentage (0% to 100%)**: applies a black-and-white filter to the image.
  * 0%: the image is displayed in full color.
  * 100%: the image is fully greyscale.

Once you save these settings, all images in the moderation interface are automatically blurred according to your configuration.
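As a rough illustration, the saved percentages could map onto standard CSS filters as in the sketch below; the linear mapping and the 20px maximum blur radius are assumptions, not Nima's actual implementation.

```typescript
// Sketch: translate the configured percentages into a CSS filter string
// applied to preview images. Mapping and constants are assumptions.
function sensitiveImageFilter(
  enabled: boolean,
  blurPercent: number,      // 0-100, from the blur effects configuration
  greyscalePercent: number, // 0-100
  maxBlurPx = 20            // assumed pixel radius at 100% blur
): string {
  if (!enabled) return 'none'; // toggle off: images display normally
  const blurPx = (blurPercent / 100) * maxBlurPx;
  return `blur(${blurPx}px) grayscale(${greyscalePercent}%)`;
}

// e.g. sensitiveImageFilter(true, 50, 100) === 'blur(10px) grayscale(100%)'
```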
### Viewing and blurring images

When you need to see the original image to make a decision, you can easily control the blur:

* **Unblur**: the unblur button sits next to the blurred image; clicking it immediately displays the unblurred version of the image at full resolution.
* **Blur again**: click the blur button to reapply the effect and hide the image.