Solution FAQ

How does GumGum Contextual understand content?

Powered by GumGum’s AI technology, GumGum Contextual scans web content – pages, images, and videos (including audio). Going beyond simple strategies like identifying keywords and page tags in the URL string or metadata, GumGum Contextual applies sophisticated machine learning techniques to provide complete content-level analysis.

How does GumGum Contextual's technical solution differ from other contextual solutions?

In addition to being the only contextual solution that provides a content-level understanding of pages, images, and videos (including audio), GumGum Contextual is the only solution that applies machine learning techniques to both page text and images. 

How does GumGum Contextual machine learning classify pages?

The objective of a supervised learning model is to predict the correct classification for newly presented page content, based on prior training data. 
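To make the supervised learning idea concrete, here is a minimal, self-contained sketch of training a classifier on labeled page text and predicting a category for a new page. This is a toy multinomial Naive Bayes model with made-up pages and categories, for illustration only; it is not GumGum's actual model or taxonomy.

```python
from collections import Counter, defaultdict
import math

# Toy training data: page text paired with a ground-truth category.
# Pages and category names are hypothetical.
TRAINING = [
    ("final score playoffs basketball team wins", "Sports"),
    ("quarterback touchdown season league game", "Sports"),
    ("stock market shares earnings investors rally", "Business"),
    ("merger revenue quarterly profit forecast", "Business"),
]

def train(examples):
    """Fit a multinomial Naive Bayes model from (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of pages
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Predict the most likely label for newly presented page text."""
    vocab = {w for counter in word_counts.values() for w in counter}
    best_label, best_score = None, float("-inf")
    for label, n_pages in label_counts.items():
        # log prior + log likelihood with add-one smoothing
        score = math.log(n_pages / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            score += math.log(
                (word_counts[label][word] + 1) / (total + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING)
print(classify("team wins the basketball game", word_counts, label_counts))
```

The key property the FAQ describes is visible here: the model generalizes from prior training data ("ground truth") to classify content it has never seen, rather than matching a fixed keyword list.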

What machine learning models and algorithms does GumGum Contextual use?

GumGum Contextual uses multiple machine learning models and algorithms, including neural networks and Convolutional Neural Networks (CNN), which are frequently used for image classification.
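The core operation of a CNN is convolution: sliding a small kernel over an image to detect local patterns such as edges. The pure-Python sketch below shows that single operation on a tiny grayscale grid; real CNNs stack many learned kernels with nonlinearities, and this example is illustrative only.

```python
def convolve(image, kernel):
    """Apply a kernel to a grayscale image (lists of lists), 'valid' mode."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + ki][j + kj] * kernel[ki][kj]
                for ki in range(kh)
                for kj in range(kw)
            )
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge kernel applied to an image with a sharp left/right split;
# large output values mark where the edge is.
image = [
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
]
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(convolve(image, edge_kernel))
```

In a trained CNN the kernel weights are learned from annotated images rather than hand-written, which is what lets the model pick up visual features far richer than a simple edge.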

How are pages annotated?

Human annotators prepare pages and images for machine learning. GumGum gives annotators a taxonomy (such as IAB v2), and each annotator uses a tool to pick the categories that best match each page (referred to as ground truth). GumGum also leverages publicly available corpora of labeled data for model training.

Why was a page classified in a certain way?

People sometimes want to know a specific reason for a page classification. For example, was the content category “Infectious Diseases” listed because the word “Cold” appeared on the page? 

Is there a way to force a classification?

Currently, GumGum Contextual does not support classification overrides; however, clients may provide GumGum with appropriately annotated pages to be fed into the model's training data.

How do I appeal an assigned brand safety or contextual classification?

Clients may appeal an assigned brand safety or contextual classification.

What IAB Category tiers does GumGum Contextual return?

GumGum Contextual returns all IAB hierarchy tiers for versions 1, 2, and 3 of the taxonomy.
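To illustrate what "all hierarchy tiers" means, the sketch below expands a full category path into every tier it implies, from most general to most specific. The category path and separator are hypothetical examples, not taken from the actual taxonomy files.

```python
def tiers(category_path, sep=" > "):
    """Return every hierarchy tier implied by a full category path."""
    parts = category_path.split(sep)
    return [sep.join(parts[: i + 1]) for i in range(len(parts))]

# Hypothetical IAB-style path with three tiers:
print(tiers("Sports > College Sports > College Basketball"))
```

A tier-1 buyer can thus target the broad "Sports" classification while a tier-3 buyer targets "College Basketball", both derived from the same page analysis.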

What are GumGum Contextual's brand safety capabilities?

GumGum Contextual detects brand safety threats for multiple categories, which align with the 4As Advertising Assurance Brand Safety Floor. Clients can set a unique threshold or tolerance level for each threat category.

How should GumGum Contextual Threat confidence levels be interpreted?

GumGum Contextual analyzes content for threat categories. If a threat category is detected, the threat category is listed in the analysis results along with a confidence level.
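One common way to act on those results is to compare each detected threat category's confidence against the client's per-category tolerance. The sketch below assumes hypothetical category names, scores, and a simple "any threat over threshold blocks the page" rule; actual integration details will vary.

```python
def brand_safe(detected_threats, client_thresholds, default_threshold=0.5):
    """Flag a page unsafe if any detected threat category's confidence
    meets or exceeds the client's tolerance for that category."""
    for category, confidence in detected_threats.items():
        threshold = client_thresholds.get(category, default_threshold)
        if confidence >= threshold:
            return False
    return True

# Hypothetical analysis results and client settings:
analysis = {"Arms & Ammunition": 0.92, "Crime": 0.30}
thresholds = {"Arms & Ammunition": 0.7, "Crime": 0.8}
print(brand_safe(analysis, thresholds))  # blocked by Arms & Ammunition
```

Because thresholds are per category, a client can be strict about one threat (low threshold) while tolerating borderline detections in another (high threshold).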

How does GumGum Contextual detect threatening images?

GumGum Contextual’s sophisticated Computer Vision machine learning can identify threatening scenes, such as natural disasters or accidents. Object detection picks out potentially threatening objects within an image, such as weapons, exposed skin, or drinks.

How does sentiment analysis work?

Machine learning predicts the sentiment of each sentence on the page by applying data models trained on extensive corpora of annotated data. GumGum Contextual returns an aggregated breakdown of the proportion of sentences on the page that are positive, neutral, or negative.
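The aggregation step can be sketched simply: count the per-sentence labels and divide by the number of sentences. The per-sentence labels below are hypothetical model output, not real results.

```python
from collections import Counter

def sentiment_breakdown(sentence_sentiments):
    """Aggregate per-sentence sentiment labels into page-level proportions."""
    counts = Counter(sentence_sentiments)
    total = len(sentence_sentiments)
    return {
        label: counts[label] / total
        for label in ("positive", "neutral", "negative")
    }

# Hypothetical per-sentence model output for a ten-sentence page:
labels = ["neutral"] * 6 + ["positive"] * 3 + ["negative"]
print(sentiment_breakdown(labels))
```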

How should sentiment results be used for targeting?

Sentiment thresholds are entirely up to the Publisher to set. Across the web, “neutral” is the most common primary sentiment classification. To aim for articles with a favorable tone of voice, the GumGum Contextual team suggests targeting “neutral” and “positive” sentiment results.
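A targeting rule following that suggestion might accept pages where neutral plus positive sentences make up most of the content. The page names, breakdowns, and the 80% cutoff below are hypothetical; each publisher sets its own threshold.

```python
def favorable(breakdown, min_share=0.8):
    """Target pages where neutral plus positive sentences dominate."""
    share = breakdown.get("neutral", 0) + breakdown.get("positive", 0)
    return share >= min_share

# Hypothetical page-level sentiment breakdowns:
pages = {
    "recipe-roundup":  {"positive": 0.4, "neutral": 0.55, "negative": 0.05},
    "disaster-report": {"positive": 0.1, "neutral": 0.45, "negative": 0.45},
}
print([name for name, b in pages.items() if favorable(b)])
```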

Which contextual providers use keyword-based algorithms vs machine learning?

According to GumGum, many providers in the contextual space do not actually use machine learning but instead rely on keyword-based algorithms.

How does GumGum Contextual work for audio?

GumGum Contextual processes standalone audio tracks (or audio tracks extracted from video) by transcribing the audio to text. The speech-to-text output is enriched with any available metadata (such as title and description) then sent to GumGum Contextual’s Natural Language Processing (NLP) machine learning models for classification.
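The audio pipeline described above can be sketched as three steps: transcribe, enrich with metadata, classify. Both helper functions below are stand-in stubs (hypothetical transcript and a trivial keyword rule), not GumGum's speech-to-text engine or NLP models; only the pipeline shape is the point.

```python
def transcribe(audio_path):
    """Stand-in for a speech-to-text engine (hypothetical output)."""
    return "coach discusses the championship game strategy"

def classify_text(text):
    """Stand-in for the NLP classification models (hypothetical rule)."""
    return ["Sports"] if "game" in text else ["Uncategorized"]

def classify_audio(audio_path, metadata=None):
    """Transcribe the audio, enrich with available metadata, then classify."""
    transcript = transcribe(audio_path)
    metadata = metadata or {}
    enriched = " ".join(
        part for part in
        (metadata.get("title"), metadata.get("description"), transcript)
        if part
    )
    return classify_text(enriched)

print(classify_audio("episode-42.mp3", {"title": "Post-game interview"}))
```

The enrichment step matters because titles and descriptions often carry context (names, topics) that the transcript alone may miss.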

How does GumGum Contextual work for video?

GumGum Contextual processes videos by applying Computer Vision analysis to sampled image frames and Natural Language Processing (NLP) analysis to text transcribed from the video’s audio track.
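The "sampled image frames" part of that pipeline amounts to choosing which timestamps to extract for Computer Vision analysis. The sketch below picks evenly spaced timestamps; the two-second interval is a hypothetical choice, not GumGum's actual sampling rate.

```python
def sample_timestamps(duration_s, interval_s=2.0):
    """Pick evenly spaced frame timestamps for Computer Vision analysis."""
    t, stamps = 0.0, []
    while t < duration_s:
        stamps.append(round(t, 3))
        t += interval_s
    return stamps

# For a 10-second clip sampled every 2 seconds:
print(sample_timestamps(10.0, 2.0))
```

Each sampled frame would then go through image classification, while the transcript of the audio track goes through NLP, and the two signals are combined for the final classification.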

Is GumGum the first "independent ad provider" to receive content-level MRC accreditation?

Yes, among independent providers: GumGum is the first “independent ad tech provider” to receive content-level accreditation from the Media Rating Council (MRC). YouTube, which operates as a first-party platform rather than an independent provider, received the first-ever MRC content-level brand safety accreditation.

What does "content-level" accreditation mean?

Property-level accreditation requires the consideration of text elements only. By contrast, content-level accreditation requires the consideration of image, audio and video elements as well as text.

What is the difference between GumGum's MRC accreditation and YouTube's?

One important distinction between the GumGum and YouTube accreditations is that GumGum is an independent ad tech provider, while YouTube operates as a first-party platform. Another important distinction is that YouTube’s accreditation is more limited: YouTube has content-level accreditation for brand safety only, whereas GumGum Contextual has content-level accreditation for brand safety, brand suitability, and contextual analysis.

What are the Enhanced Brand Safety Guidelines?

The Enhanced Brand Safety Guidelines are a supplement to the Ad Verification Guidelines, published by the Media Rating Council (MRC) in 2018. They were issued as a direct response to brand safety violations identified on YouTube in 2017.

How many humans are feeding GumGum's machine learning with data?

Machine learning model development involves contributions from many people. It's hard to provide a precise static number, as the GumGum Contextual team runs multiple data collection and data labeling jobs at any given time, each with different requirements. For model development and QA, we have a full-time in-house team of data scientists specializing in image science (a.k.a. Computer Vision) and linguistics (a.k.a. Natural Language Processing).

Is there a Description of Methodology for GumGum Contextual?

Yes, the Description of Methodology provides an overview of GumGum Contextual’s capabilities and functions.

What languages does GumGum Contextual currently support?

GumGum Contextual supports multiple languages, outlined in the GumGum Contextual Language Support Grid.


The contents of these documents and any attachments contain GumGum, Inc. confidential information and are legally protected from disclosure.