Glossary
This page is your go-to reference for clear, consistent terminology. It defines the core concepts and terms we use throughout Nima and this documentation.

Account action: Any action applied directly to an account (e.g., warn, strike, suspend).

Account history: The historical record of all cases related to a specific account, including past reports, violations, account-level enforcement, current and expired strikes, and appeals. This can also include custom fields relevant to each platform, such as verified status.

Account strike system: Nima enables platforms to configure their own automated strike system, building from individual content violations all the way to account bans, taking into account a range of severities and actions. The system includes configurable strike expiration times.

Actions: What is applied after a moderation decision, such as removing content, restricting visibility, suspending an account, or issuing a warning.

Action configuration: The configuration mapping policies and sub-policies to the actions triggered when violations occur.

Appeal: A request to reconsider a moderation decision.

Appeal date: The date when the appeal was submitted by the appellant (either reporter or reportee).

Appeal ID: The unique identifier assigned to each individual appeal submitted in Nima.

Audit logs: The full collection of all audit log entries over time.

Automated enforcement: The threshold of the rule engine within which a policy is automatically enforced and the associated action is therefore triggered.

Case: A record created in Nima for potentially violative content reported one or several times. Multiple reports on the same content are grouped under the same case. A case corresponds to a single parent content ID; for a collection of files, a case has a single parent content ID and multiple child content IDs that do not vary across different reports.

Case handling time: The time between a case being presented to a moderator and that case being effectively processed.

Case response time: The time from the moment a case enters Nima until it is resolved. This measure is usually not communicated externally.

Confidence score: The probability returned by AI detection engines when reviewing a piece of content.

Content: The user-generated item under review (e.g., text, image, video, listing, message). A single piece of content can be subject to multiple reports; therefore, multiple cases and case IDs can pertain to a single piece of content.

Content action: Actions that specifically affect content items (e.g., remove, blur, limit visibility).

Content history: The versioned record of all changes, actions, and states of a content item.

Content ID: The unique identifier assigned to a piece of content. Multiple reports on the same content correspond to the same content ID.

Content type: The format of the content being moderated, such as text, image, static video, or video stream.

Criteria: The conditions used in the rule engine when setting up rules.

Custom attributes: Configurable metadata fields attached to cases, reporters, reportees, or content for workflow and analytics purposes.

Escalation: Routing a case to a higher-level queue or specialized team due to sensitivity or severity.

Escalation queue: A queue dedicated to escalated cases requiring experienced or specialized review.

Event logs: A time-stamped record of everything that happens to a particular flagged case.

Post-appeal action: The action applied at the end of a case or appeal review, which may differ from the initial action.

Illegal content user report: A user report alleging that content violates the law, submitted through the Nima Illegal Content Reporting Form.

Infringement categories: Categories set by the EU Commission and matched to policies in Nima to automatically compile the DSA transparency report.

Initial action: In the appeal list, the action originally applied before the appeal was submitted.

Initial policy violation: In the appeal list, the policy or sub-policy selected during the first moderation decision, before any potential appeal or re-evaluation.

Input label: The category or descriptor assigned to the alleged violation, either by a reporter when completing a form or by an AI detection provider through automated detection.

Max handling time in queue: The maximum amount of time a case may remain in a queue before breaching SLA.

Move to queue: Moderators working on a report in a queue can manually move that report to another standard queue.

Overturn: A review outcome in appeals in which the original moderation decision is reversed.

Policy configuration: The configuration of policies, sub-policies, descriptions, and their mapping to actions or tiers.

Policy description: The policy or sub-policy rationale or summary, describing the type of violative content or behavior that is covered.

Policy violation: A confirmed violation of a given policy or sub-policy.

Policy: A high-level content moderation guideline defined by your platform that specifies what types of content or behavior are prohibited. Policies represent broad categories of violations (e.g., "hate speech", "violence", "spam") and are the foundation of enforcement in Nima, either applied through proactive detection rules or selected by moderators when reviewing content.

Sub-policy: Sub-policies provide detailed breakdowns of policies (e.g., under a "hate speech" policy, sub-policies might include "racial slurs", "religious intolerance", or "gender-based harassment"). If a policy has sub-policies, these are applied through proactive detection rules or selected by moderators when reviewing content.

Priority score: Based on configurable criteria, a priority score is calculated for each report to prioritize it in the content review queue.

Proactive detection: Nima's automated detection uses AI/ML systems to identify potential violations without requiring initial human review. When integrated as an ingestion channel, these systems trigger automated workflows via the rule engine. This engine features a centralized AI marketplace, allowing users to easily select and manage various detection providers.

Proactive detection provider: A third-party or internal AI detection engine that detects content proactively and triages it based on the rules set in the rule engine.

Proactive detection rule: A rule that automatically performs moderation actions or routing based on defined conditions.

Quality control: Nima's quality control system helps maintain consistency and accuracy in moderation decisions by providing data analytics, performance metrics, and audit capabilities for both human moderation and automated enforcement. It enables teams to review moderation decisions and continuously improve them.

Queues: Queues contain reports pending review and can be manually configured. Queues can have different prioritization models based on the severity of the content and can be assigned to specific groups of moderators.

Queue routing rule: A rule defining how cases ingested into Nima through different channels (excluding automated detection) are redirected to queues.

Reply status: In an appeal assessment, moderators may message the appellant for clarification or additional information. Reply status records two things: whether the appellant has replied to that initial moderator message, and whether the appellant's message has been marked as read.

Report: A flagged instance of potentially violative content submitted to Nima for review. Reports can originate from multiple sources: user reports (via your platform's reporting flows or the Nima Illegal Content Reporting Form), proactive detection (AI/ML automated detection), trusted flaggers, or law enforcement agencies. Each report indicates a potential policy violation and creates or contributes to a case for moderation review.

Reported account: The user whose content or behavior is being reported.

Reported account custom attributes: Custom metadata relating to the reportee or their content.

Reporting account: The person or entity submitting a report.

Reporting account custom attributes: Custom attributes relating to the reporting account (e.g., trusted flagger status).

Roles: Permission profiles determining what capabilities a user can access within Nima.

Service level agreement (SLA): The target maximum time allowed for resolving a case, set up for each standard or escalation queue.

Signal: The outcome (e.g., a probability score) sent by an internal detection tool to Nima.

Source: The channel of a report or detection (e.g., automated detection, user report, LEA form, illegal content report form).

Standard queue: A standard queue receives incoming reports from users or through automated detection. Reports in a standard queue can be manually moved by a moderator to another standard queue, or automatically re-routed to an escalation queue.

Strike expiration time: The time after which previous violations expire in a tier under the strike system.

Team: A defined group of moderators or reviewers assigned to queues.

Threshold: When setting up rules, users can set and edit thresholds for automatically actioning content, sending it to human review, or taking no action.

Tier: When configuring the account strike system, each tier corresponds to the severity of the policy violation. For example, a single tier 1 violation may lead to the permanent ban of a user from the platform. Users match tiers to policies in policy configuration.

Trails: A report trail contains all logs related to a report, such as actions taken, appeals, etc.

Trusted flagger report: A report submitted by an accredited or designated entity recognized for high-quality reporting, through the Nima Illegal Content Reporting Form.

Uphold: When reviewing a user appeal, the decision to uphold the original action means that the original moderation decision was correct.

User notifications: Communications sent to users regarding actions taken on their content or account, or to reporters following the action on their reports. These include DSA statements of reasons but can be sent for any action taken in Nima.

User reports: A report submitted by users, either through the platform's own reporting flows or through Nima's Illegal Content Reporting Form. User reports can be configured to be routed to specific queues.

Workflow action: An action that is not linked to a policy or policy violation, such as no action, unlock content, or unlock user.
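To illustrate how several of these terms fit together (confidence score, threshold, automated enforcement, human review), here is a minimal Python sketch of the triage logic a rule-engine threshold describes. All names and numeric values are illustrative assumptions, not Nima's actual API or configuration format.

```python
from dataclasses import dataclass

# Hypothetical threshold pair for one policy; values are examples only.
@dataclass
class Threshold:
    auto_action: float   # at or above this confidence: automated enforcement
    human_review: float  # at or above this confidence: route to a queue

def triage(confidence: float, t: Threshold) -> str:
    """Map a detection confidence score to an outcome."""
    if confidence >= t.auto_action:
        return "auto_action"   # policy enforced automatically
    if confidence >= t.human_review:
        return "human_review"  # report sent to a standard queue
    return "no_action"

# Example: a policy configured with a 0.95 auto-action threshold
# and a 0.70 human-review threshold.
hate_speech = Threshold(auto_action=0.95, human_review=0.70)
print(triage(0.97, hate_speech))  # auto_action
print(triage(0.80, hate_speech))  # human_review
print(triage(0.40, hate_speech))  # no_action
```

In this sketch, only the band between the two thresholds reaches a moderator; everything above the upper threshold falls under automated enforcement, and everything below the lower one is ignored.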