Summary
This Description of Methodology (DoM) describes the processes that deliver Verity – GumGum’s content-level contextual analysis and brand safety solution.
...
dataAvailable | States whether the classification request has already been processed. If it has, Verity returns the results from the database. If not, Verity starts a new processing request. |
status | The current processing status of the analysis request. |
pageUrl | The URL of the page or video analyzed by Verity, as applicable. |
languageCode | The standard ISO 639-1 code for the language of the content. Verity currently supports content in:
Verity video analysis currently supports English only. Note: If Verity detects an unsupported language, a status of NOT_SUPPORTED is returned. |
iab v1 | The IAB v1.0 categories identified for the page.
IAB v1.0 categories are widely adopted in programmatic and Real-Time Bidding (RTB) ad marketplaces. IAB v1.0 categories are organized into the following tiers:
Refer to the Verity Taxonomy document for a listing of IAB v1 categories.
Verity video analysis does not support IAB v1.0 categories. |
iab v2 | The IAB v2.0 categories identified for the content. The IAB defined a more granular content taxonomy in IAB Tech Lab Content Taxonomy v2.0 (released in 2017). IAB v2.0 defines additional content classifications and restructures existing IAB v1.0 classifications. Each IAB v2.0 category has a unique three-digit ID, and is structured into a tiered hierarchy with up to 4 tiers of categories. Refer to the Verity Taxonomy for a listing of IAB v2 categories. |
keywords | The top Keywords identified for the content, listed in order of prominence. |
safe | The final aggregated Brand Safety summary result for the content. If any threat classifications are identified with a high-risk level, the safe value is false and the content is considered unsafe. If no (or low-risk) threat classifications are identified, the safe value is true, and the content is considered safe. |
threats | Threat categories are part of GumGum’s brand safety taxonomy. GumGum classifies content into nine threat categories. For a complete list of Threat category IDs and Names, refer to Threat Categories in the Verity Taxonomy document. To detect possible threats, Verity analyzes and scores all the extracted content. Verity then correlates the scores to determine a per-category threat risk-level for the content. Possible threat category risk-levels are:
|
events | The Events classifier identifies seasonal and recurring events such as the Olympics (i.e. annual, biennial, or quadrennial events) for the purposes of contextual ad targeting. Verity lists up to five Event categories, in order of prominence. For a complete list of Event category IDs and Names, refer to Event Categories in the Verity Taxonomy document.
Verity video analysis does not support Events. |
sentiments | Identifies and extracts opinions within digital content, evaluating the levels of positive, neutral, and negative sentiment expressed in the content. For contextual targeting purposes, a sentiment level of neutral or positive is generally recommended. |
processedAt | The date and time of the classification. |
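The relationship between the threats and safe fields described above can be sketched in a few lines. This is an illustrative sketch only: the risk-level strings ("HIGH", "LOW", "NONE") and the `aggregate_safe` helper are hypothetical placeholders, not Verity's actual enum values or API.

```python
# Illustrative sketch: deriving the aggregated `safe` flag from per-category
# threat risk levels, as described in the field table above. The risk-level
# strings ("HIGH", "LOW", "NONE") are assumed placeholders for illustration.

def aggregate_safe(threats: dict[str, str]) -> bool:
    """Return False if any threat category is high-risk, else True."""
    return not any(level == "HIGH" for level in threats.values())

# One high-risk category makes the content unsafe.
print(aggregate_safe({"GGT1": "LOW", "GGT4": "HIGH"}))  # False
# Only low-risk (or no) threats: the content is considered safe.
print(aggregate_safe({"GGT1": "LOW", "GGT2": "NONE"}))  # True
```

The sketch mirrors the rule stated for the safe field: any high-risk threat classification forces safe to false; otherwise safe is true.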
...
Verity
...
Classification and
...
Scoring
Verity analyzes threat, contextual-category, keyword, and sentiment results in different ways. The data models Verity implements vary for different purposes and are fine-tuned and optimized on an ongoing basis.
IAB Content Categories | Content classifiers predict the likelihood that the given content belongs to one or more IAB categories. |
---|---|
Threats | Machine learning predicts threat categories by applying data models trained on collections of various kinds of threatening content. |
Events | Machine learning predicts event categories by applying data models trained on large-scale collections of event-related content pages. |
Keywords | A set of rules derives, scores, and ranks the most important keywords from content based on prominence and term frequency–inverse document frequency (TF-IDF) scores. |
Sentiments | Machine learning predicts the sentiment of each sentence within the content by applying models trained on content with varying tones of voice. Verity returns an aggregated breakdown of the proportion of sentences in the content that are positive, neutral, or negative (referred to as Document-Level Sentiment Analysis). |
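As a rough illustration of the TF-IDF keyword scoring mentioned in the table above, the following sketch ranks a document's terms with a textbook TF-IDF formula. The stop-word list, smoothing, tokenization, and toy corpus are all illustrative assumptions, not Verity's actual rule set or weights.

```python
# Minimal textbook TF-IDF keyword ranking (a sketch, not Verity's rules).
import math
from collections import Counter

STOPWORDS = {"the", "with", "as", "through"}  # assumed toy stop-word list

def tfidf_keywords(doc: str, corpus: list[str], top_n: int = 3) -> list[str]:
    """Rank terms in `doc` by term frequency x inverse document frequency."""
    tokens = [t for t in doc.lower().split() if t not in STOPWORDS]
    tf = Counter(tokens)
    n_docs = len(corpus)
    scores = {}
    for term, count in tf.items():
        # Document frequency: how many corpus documents contain the term.
        df = sum(1 for d in corpus if term in d.lower().split())
        idf = math.log((1 + n_docs) / (1 + df)) + 1  # smoothed IDF
        scores[term] = (count / len(tokens)) * idf
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

corpus = [
    "the olympics open with the torch relay",
    "markets rally as the fed holds rates",
    "the torch relay passes through paris",
]
print(tfidf_keywords(corpus[0], corpus))  # ['olympics', 'open', 'torch']
```

Terms that appear in fewer corpus documents get a higher IDF weight, which is why "olympics" outranks "torch" (the latter also occurs in the third document).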
...
Brand safety is a threat-detection problem, so in this case Verity favors recall over precision. Data scientists use precision-recall curves to maximize recall with minimal loss in precision, thereby maximizing the number of potential threats classified.
Content classification is used for targeting purposes. In this case, GumGum favors precision over recall. Data scientists use precision-recall curves to maximize precision with minimal loss in recall, thereby maximizing the accuracy of the classified targets.
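The trade-off described above can be made concrete with a toy threshold sweep: lowering a classifier's decision threshold raises recall at the expense of precision, and vice versa. The scores, labels, and thresholds below are fabricated for illustration; Verity's actual tuning is not described in this document.

```python
# Toy precision/recall sweep over a decision threshold (illustrative only).

def precision_recall(scores, labels, threshold):
    """Compute precision and recall for predictions at a given threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical classifier scores and ground-truth labels (1 = true threat).
scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    1,    0,    0]

for t in (0.90, 0.70, 0.50, 0.35):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```

On this toy data, a brand-safety tuning would pick the low threshold (recall 1.00 at precision 0.80, so no threat is missed), while a targeting tuning would pick the high threshold (precision 1.00 at recall 0.50).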
Verity and the 4A’s Brand Safety Floor
The 4A’s, the leading trade organization for marketing communications agencies, defines the Advertising Assurance Brand Safety Floor and Brand Suitability Framework (revised in May 2020). The following table details the mapping between the 4A’s Brand Safety Floor and GumGum’s threat categories.
4A’s Floor | GumGum’s Verity brand safety categories | ||
Category | Definition | ID | Name |
1 Adult & Explicit Sexual Content | Illegal sale, distribution, and consumption of child pornography; explicit or gratuitous depiction of sexual acts, and/or display of genitals, real or animated | GGT4 | Sexual; sexually charged |
2 Arms & Ammunition | Promotion and advocacy of: sale of illegal arms, rifles, and handguns; instructive content on how to obtain, make, distribute, or use illegal arms; glamorization of illegal arms for the purpose of harm to others; use of illegal arms in unregulated environments | GGT1 | Violence and gore |
GGT2 | Illegal/criminal | ||
3 Crime & Harmful acts to individuals and Society and Human Rights Violations | Graphic promotion, advocacy, and depiction of willful harm and actual unlawful criminal activity; explicit violations/demeaning offenses of human rights (e.g. human trafficking, slavery, self-harm, animal cruelty, etc.) | GGT1 | Violence and gore |
GGT2 | Illegal/criminal | ||
4 Death, Injury or Military Conflict | Promotion or advocacy of death or injury; incendiary content provoking, enticing, or evoking military aggression; live-action footage/photos of military actions, genocide, or other war crimes | GGT1 | Violence and gore |
GGT9 | Illness/medical | ||
5 Online piracy | Pirating, Copyright infringement, & Counterfeiting. | GGT8 | Malware |
Note: GumGum Verity classifies content that covers the topics of piracy, copyright infringement, or counterfeiting. Verity does not consider whether the content itself was pirated, counterfeited, or infringes on copyright. | |||
6 Hate speech & acts of aggression | Unlawful acts of aggression based on race, nationality, ethnicity, religious affiliation, gender, or sexual image or preference. Behavior or commentary that incites such hateful acts, including bullying. | GGT6 | Hate; hate speech, harassment and cyberbullying |
7 Obscenity and Profanity, including language, gestures, and explicitly gory, graphic or repulsive content intended to shock and disgust | Excessive use of profane language or gestures and other repulsive actions with the intent to shock, offend, or insult. | GGT5 | Obscene; profanity/vulgarity |
8 Illegal Drugs/Tobacco/ | Promotion or sale of illegal drug use – including abuse of prescription drugs. Federal jurisdiction applies, but allowable where legal local jurisdiction can be effectively managed.
Promotion and advocacy of tobacco and eCigarette (Vaping) & Alcohol use to minors. | GGT3 | Drugs and alcohol |
9 Spam or Harmful Content | Malware/Phishing. | GGT8 | Malware and phishing |
10 Terrorism | Promotion and advocacy of graphic terrorist activity involving defamation, physical and/or emotional harm of individuals, communities, and society. | GGT1 | Violence and gore (both text and image) |
11 Debated Sensitive Social Issue/Violations of Human Rights | Insensitive, irresponsible and harmful treatment of debated social issues and related acts intended to demean a particular group or incite greater conflict. | GGT6 | Hate; hate speech, harassment and cyberbullying. |
GGT2 | Illegal/criminal | ||
The 4A’s floor categories do not map to this GumGum Threat category. | GGT7 | Disasters |
...