Verity™ Description of Methodology

This Description of Methodology (DoM) describes the processes that deliver Verity – GumGum’s content-level contextual analysis and brand safety solution. 

Powered by GumGum’s AI technology, Verity applies sophisticated machine learning techniques to analyze digital content, including web pages, images, and videos (along with their audio). 

Verity returns a detailed report featuring brand safety scores for the content, along with contextual targeting categories, prominent keywords, and sentiment categories. 

Verity supports the contextual targeting categories defined in the Interactive Advertising Bureau (IAB) Content Taxonomy v1.0, 2.0, and 3.0.

Media Ratings Council (MRC) Content-Level Accreditation

Verity is the first independent third-party solution to achieve MRC accreditation for content-level brand safety.

This recognition by the MRC validates that GumGum’s proprietary contextual intelligence solution is able to consider all available signals (text, image, audio, and video) needed to give a true contextual reading.

Verity is officially accredited for content-level Contextual Analysis, Brand Safety and Brand Suitability for English-language text/image, video image, and audio classification (Desktop, Mobile Web, CTV).

Primary Users and Use Cases

Verity serves agencies, advertisers, DSPs, and publishers as a third-party content-level contextual analysis and brand safety data solution. 

Verity operates as a fee-based third-party service in the cloud. Publishers can integrate Verity into content management systems (CMS) or data management platforms (DMPs) to analyze and optimize media content. 

Supply-side and demand-side platforms (SSPs and DSPs) can implement the Verity service on their own technology platforms, ad exchanges, and ad servers. There are two primary product use cases: 

  1. Increased Brand Safety —  Advertisers can deploy Verity to detect objectionable content and avoid serving their advertising messaging adjacent to or embedded within that content. Publishers can use Verity to identify and assess potentially objectionable digital content prior to publication.

  2. Optimum Contextual Targeting — Advertisers and Publishers can access the Verity service to locate content that is highly relevant, enabling contextually aligned advertising to be served.

Verity’s core technology remains unchanged for each implementation. Integrations are accomplished via the Verity API.

As of October 2023, GumGum's Verity service processes 1-2 billion unique monthly requests for content and brand safety classification globally. 

Verity Platform Functions

Verity’s function is to provide data to clients who explicitly request and pay for analysis information about specific digital content. The clients are interested in establishing brand suitability and contextual classification for specific content, to drive their own content creation or ad serving. 

Verity applies natural language processing (NLP) and computer vision (CV) based machine learning techniques to analyze digital content. Multiple kinds of content can be analyzed, such as desktop and mobile web pages, images, and Online Video platforms (OLV) and connected TV (CTV) videos (including audio). 

Web Page Analysis Functions

Going beyond simple strategies like identifying keywords on the page or in the URL string or metadata, Verity works by scanning the full text and prominent imagery of a web page. Verity’s NLP processes analyze the core page content, while CV processes analyze the imagery.

Verity provides what the Media Ratings Council (MRC) refers to as content-level reporting defined as “more granular context and brand safety measurement and reporting for video and display content within a domain, site, platform, mobile application or URL”.

Note the following details about Verity web page content-level processing:

  • Verity does not apply content-level analysis to code or objects (including third-party code or objects) that appear outside, adjacent to, or embedded within the core text on a page. 

  • Verity does not download or analyze the CSS, JavaScript, navigation, footer, sidebars, and other areas extraneous to the core textual content on the page. For example, on a typical Blog page Verity extracts and analyzes the central content of the page, but not the surrounding elements such as third-party advertising or related content.

  • Verity also does not provide analysis of continually changing dynamically loaded user-generated content within publisher pages (e.g., reviews sections, comments sections, social media plug-ins) or social media environments. 

  • Verity applies logic to identify the prominent image on a web page for analysis. Additional images on the page may be subject to image extraction limitations.

  • GumGum informs clients that Verity analyzes the web page (not the surrounding material) specifying that the analysis includes the core textual content and prominent imagery but nothing else – not graphics, sidebar content, or third-party insertions such as paid advertising. 

  • Verity acknowledges that surrounding, adjacent, or embedded content on a web page (which may be provided by JavaScript executions or non-textual content) can affect the context of a page as presented to users and may be a consideration for advertisers. 

  • Other key platform functions such as ad serving, detection of ad fraud, identification of invalid traffic (IVT/SIVT), measurement of viewability, measurement of audiences, and other cookie implementations are not handled by Verity or its technology.

Video Analysis Functions

Verity analyzes video content by applying powerful classifiers to the video’s transcribed audio track and image data from sampled video frames. 

Video analysis leverages GumGum’s industry-leading NLP text analysis and CV image analysis processes, plus fast and accurate audio transcription services.

Verity Machine Learning Technology

Verity is the only solution that applies machine learning techniques to provide content-level brand safety and contextual analysis. Alternative solutions may rely only on keyword methodologies that consider the text and are limited to page-level analysis, Allowlists and Blocklists, or URL-level analysis. These cruder contextual approaches often eliminate safe and relevant inventory. They also miss relevant content (e.g., keywords that are spelled differently), overlook related content, and mistakenly target irrelevant content (e.g., keywords with multiple meanings).

Verity’s supervised machine learning works by first training a machine learning model on training data comprising thousands of pieces of example content (i.e., pages, images, and videos) for each category, paired with correctly labeled outputs. For example, to learn to classify content into the GumGum threat category “Drugs and alcohol”, a human must first hand-annotate thousands of pieces of content that relate to drugs or alcohol. 

The supervised learning algorithm searches for patterns in the data that correlate with the desired outputs. After training, the supervised learning algorithm can process new unseen pages and label them with a classification based on the prior training data. For example, the model could predict whether digital content references drugs or alcohol and classify it accordingly for the purposes of brand safety.
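The supervised-learning pattern described above can be illustrated with a deliberately tiny sketch. The training examples, category names, and word-count scoring below are invented for illustration only; Verity’s production models are far more sophisticated than this.

```python
# Toy illustration of supervised classification: train on labeled
# examples, then label new, unseen text. All data here is invented.
from collections import Counter

# Hypothetical hand-annotated training examples: text paired with a label.
TRAINING = [
    ("beer wine vodka bar drinking", "drugs_and_alcohol"),
    ("cocktail brewery lager pint", "drugs_and_alcohol"),
    ("stadium goal striker football", "safe"),
    ("recipe flour oven baking", "safe"),
]

def train(examples):
    """Count how often each word appears under each label."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def classify(model, text):
    """Label new, unseen text with the best-matching category."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(TRAINING)
print(classify(model, "craft lager and vodka tasting"))  # drugs_and_alcohol
```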

Architecture and Flow

Customers use Verity to analyze specific digital content and determine the eligibility of the content for ads. Verity does not crawl the internet for content; instead, a client application calls Verity (via their integration with the Verity API) specifying the URLs of specific content they’d like to analyze. 

GumGum's Verity service exists entirely within a secure Cloud infrastructure. Verity’s Cloud-based architecture is massively scalable and currently processes approximately 1 billion unique requests per month for content and brand safety classification.

Access for Verity User Agents

If a requested URL blocks a Verity browser, Verity cannot process the content and returns an error. Verity customers are therefore requested to configure their domain access permissions to enable Verity to access their site in order to extract and process content. 

Page Analysis Process

The Verity page analysis process involves the following core components:


  1. Verity API Gateway: The Verity API Gateway receives a page URL request, authenticates the client request and passes the URL to the Verity API.

  2. Verity API: The Verity API initiates the request and then orchestrates the Content Extractor, Text and Image analyses systems to extract the page data and perform the analyses. 

  3. Content Extractor: The Content Extractor accepts page requests sent by the Verity API from a queue. The Content Extractor loads the page URL, downloads the page title, metadata, and HTML and saves it as a text string in the database. If a prominent image is identified for the page, the Content Extractor downloads and saves the image to the database with identification information for the associated page. The Content Extractor passes the Page URL and image information on for text and image analysis.

  4. Text Analysis: The Text Analysis engine applies Natural Language Processing (NLP) for text classification (e.g. IAB and Threat categories) and information extraction (e.g. Keywords). 

  5. Image Analysis: The Image Analysis engine houses GumGum’s core Computer Vision capabilities in a modular architecture. The Image Analysis component passes images through multiple data models to determine their classification information.

  6. Verity Report: The Verity API retrieves the text and image classification results, applies weighting and merging logic to the results, and returns the final Verity page report to the client.
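As a rough sketch of step 6, weighted merging of text and image scores might look like the following. The weights, score values, and category names are hypothetical assumptions; Verity’s actual weighting and merging logic is proprietary.

```python
# Hypothetical merge of per-category scores from text and image analysis.
# Weights and categories are illustrative assumptions, not Verity's logic.
def merge_results(text_result, image_result, text_weight=0.7, image_weight=0.3):
    """Combine per-category scores into a single weighted report."""
    categories = set(text_result) | set(image_result)
    return {
        cat: round(text_weight * text_result.get(cat, 0.0)
                   + image_weight * image_result.get(cat, 0.0), 3)
        for cat in categories
    }

merged = merge_results(
    {"Sports": 0.9, "Violence": 0.1},   # scores from text analysis
    {"Sports": 0.6, "Violence": 0.4},   # scores from image analysis
)
# e.g. merged["Sports"] == 0.81
```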

Video Analysis

Verity analyzes videos for the purposes of content-level contextual targeting and brand safety.

Verity works by applying machine learning techniques to the video audio track, sampled video frames, and video metadata (where available) and assigning contextual categories, detecting keywords, and calculating a brand safety score.

Verity Video Analysis leverages the following systems:

  • Transcribe Service – Applies automatic speech recognition (ASR) to convert speech to text.

  • OCR Service –  Performs Optical Character Recognition (OCR) to detect text in video and convert the detected text into machine-readable text. 

  • Verity Text Processing – Applies machine learning models to the video metadata, title, transcription text, and OCR text and provides a brand safety and contextual classification report.

  • Verity Image Processing – Applies machine learning models to sampled video frames and provides a brand safety report.

Video Analysis Process

The Verity video analysis process involves the following core components:


  1. Verity API Gateway: The Verity API Gateway receives a video URL request, authenticates the client request and passes the URL to the Verity API.

  2. Verity API: The Verity API passes the request to the Video Service to orchestrate video analysis. 

  3. Video Service: The Video Service downloads the video and audio into separate files.

  4. Audio Transcribe: The audio file is sent for transcription.

  5. Optical Character Recognition (OCR): Verity API verifies if the audio transcription results contain a sufficient sample of at least 50 words. If not, Verity API initiates an OCR job to detect text in the video file and convert the detected text into machine-readable text.

  6. Prism Video Frame Threat Classifier: Video is sent to the Video Threat Classifier for brand safety analysis of video frames.

  7. Verity Text Processing: Verity API passes concatenated text results (comprising transcription, OCR if available, Client metadata title and description) to Verity Tapas Text Processing. The Text Processing engine processes the video transcription, OCR, client metadata title and description by applying Natural Language Processing (NLP) for text classification (e.g. IAB Content Categories v2.0 and Threat categories) and information extraction (e.g. Keywords). 

  8. Verity Report: The Verity API accepts the text analysis results, applies result weighting and merging logic, then returns the final video analysis Verity Report to the client.
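The OCR-fallback decision in steps 5 and 7 above can be sketched as follows. The function names are illustrative assumptions; the 50-word threshold and the concatenated text sources are taken from this document.

```python
# Sketch of the OCR fallback: OCR runs only when the audio transcription
# yields fewer than 50 words. Function names are illustrative.
MIN_TRANSCRIPT_WORDS = 50

def needs_ocr(transcript: str) -> bool:
    """True when the transcription sample is too small on its own."""
    return len(transcript.split()) < MIN_TRANSCRIPT_WORDS

def collect_text(transcript, ocr_text, title, description):
    """Concatenate the text sources passed on to text processing."""
    parts = [transcript, title, description]
    if needs_ocr(transcript) and ocr_text:
        parts.insert(1, ocr_text)   # include OCR text only as a fallback
    return " ".join(p for p in parts if p)
```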

Brand Safety

Verity machine learning predicts threat categories by applying data models trained on collections of various kinds of threatening content. Verity’s sophisticated Computer Vision machine learning can identify threatening scenes, such as natural disasters or accidents. Object detection picks out potentially threatening objects within an image, such as weapons, exposed skin or drinks. 

Verity detects brand safety threats for each of the following categories.

  • Violence and gore

  • Criminal

  • Drugs and alcohol

  • Sexually charged

  • Profanity and vulgarity

  • Hate speech, harassment, and cyberbullying

  • Disasters

  • Malware and phishing

  • Medical

These categories align with GARM’s Brand Safety Floor and Brand Suitability Framework.

Clients can set a unique threshold or risk-tolerance level for each threat category. For example, a healthcare provider may choose to set no threshold for the “Medical” threat category, yet higher thresholds for categories that are less suitable for ad placement (e.g., “Hate”, “Violence”, or “Obscene”).
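A client-side view of per-category risk tolerances might look like the sketch below. The category names come from the taxonomy above; the tolerance comparison logic is an assumption for illustration, not Verity’s implementation.

```python
# Hypothetical per-category risk-tolerance check. Category names are from
# this document; the comparison scheme is an illustrative assumption.
RISK_ORDER = ["LOW", "MEDIUM", "HIGH"]

def exceeds_tolerance(detected: str, tolerance: str) -> bool:
    """True when detected risk is above the client's tolerance level."""
    return RISK_ORDER.index(detected) > RISK_ORDER.index(tolerance)

# A healthcare advertiser tolerant of "Medical" but strict elsewhere:
client_tolerance = {"Medical": "HIGH", "Hate speech": "LOW", "Violence": "LOW"}

def blocked_categories(page_risks, tolerance):
    """List categories whose detected risk exceeds the client's settings."""
    return [cat for cat, risk in page_risks.items()
            if exceeds_tolerance(risk, tolerance.get(cat, "LOW"))]

risks = {"Medical": "HIGH", "Violence": "MEDIUM"}
print(blocked_categories(risks, client_tolerance))  # ['Violence']
```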

Content Classification 

Verity works by applying machine learning techniques to relevant content to assign contextual categories.

IAB Categories

The Interactive Advertising Bureau (IAB) defines a Content Taxonomy to provide publishers with a consistent and easy way to organize their website content, and enable advertisers to target standard content categories. Verity returns all IAB hierarchy tiers for versions 1.0, 2.0 and 3.0 of the taxonomy:

  • IAB V1 – 2 tiers, 372 categories

  • IAB V2 – 4 tiers, 698 categories

For example, Verity analysis of an article on “The Rise of Alternative Venture Capital” identifies IAB v1.0 categories in 2 tiers, and IAB v2.0 and v3.0 categories in 4 tiers.



Keywords

Keywords are derived from content, metadata, and headlines. Verity ranks keywords according to frequency of use and prominence. Objects detected in an image may be included in the list of keywords.


Sentiment

Verity predicts the sentiment of each sentence within content (referred to as Document Level Sentiment Analysis), and returns an aggregated breakdown of the proportion of sentences within content that are positive, neutral or negative. Sentiment thresholds are entirely up to the publisher to set. Across the web, “neutral” is the most common primary sentiment classification.

Verity Classification and Brand Safety Report

The Verity report includes complete brand safety, keyword, and categorization analysis data for the requested content. Each report contains the following analysis results:


Cached Result

States whether the classification request has already been processed. If processed data exists, Verity returns the results from the database. If not, Verity starts a new processing request.


Processing Status

The current processing status of the analysis request.


Content URL

The URL of the page, video, image, or text analyzed by Verity, as applicable.


Request ID

A unique identifier generated for the classification request.


Language

The standard ISO 639-1 code for the language of the content. Refer to the Language Support Grid for the latest supported languages.

Note: If Verity detects an unsupported language, a status of NOT_SUPPORTED is returned.


IAB Categories

IAB contextual categories are defined in the IAB Content Taxonomy and are widely adopted in programmatic and Real-Time-Bidding (RTB) ad marketplaces.

Verity supports current versions of the IAB Content Taxonomy. The Verity team keeps track of new taxonomy releases and implements updates in a timely fashion.

Refer to the Verity Taxonomy document for a listing of IAB contextual categories.


Keywords

The top Keywords identified for the content, listed in order of prominence.


Brand Safety Summary

The final aggregated Brand Safety summary result for the content.  

If any threat classifications are identified with a risk level of HIGH, the safe value is false and the content is considered unsafe.

If no (or low-risk) threat classifications are identified, the safe value is true, and the content is considered safe.
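The aggregation rule above can be sketched directly; the report structure is illustrative.

```python
# Direct sketch of the stated rule: any HIGH-risk threat classification
# makes the overall safe value false. The dict shape is illustrative.
def is_safe(threat_risks: dict) -> bool:
    """Return the aggregated Brand Safety 'safe' value for the content."""
    return not any(risk == "HIGH" for risk in threat_risks.values())

assert is_safe({"Violence and gore": "LOW", "Medical": "MEDIUM"}) is True
assert is_safe({"Violence and gore": "HIGH", "Medical": "LOW"}) is False
```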


Threat Categories

Threat categories are part of GumGum’s brand safety taxonomy. GumGum classifies content into nine threat categories. For a complete list of Threat category IDs and Names, refer to Threat Categories in the Verity Taxonomy document.

To detect possible threats, Verity analyzes and scores all the extracted content. Verity then correlates the scores to determine a per-category threat risk-level for the content.

Possible threat category risk-levels are:

  • HIGH

  • MEDIUM

  • LOW


Sentiment

Identifies and extracts opinions within digital content. 

The positive, neutral, and negative levels of sentiment expressed in the content are evaluated. For contextual targeting purposes, a sentiment level of neutral or positive is generally recommended.


Classification Date

The date and time of the classification. 


Classification Approaches

Verity analyses threat, contextual category, keyword, and sentiment results in different ways. The data models Verity implements vary for different purposes and are fine-tuned and optimized on an ongoing basis.  

Partners should be aware that, as with any machine learning technology, performance is highly dependent on the specific data set being analyzed; consequently, no single error rate or range exists. Verity handles proprietary data sets and cannot disclose proprietary partner result data.

Verity calculates and measures error rates in the form of Precision, Recall, F1, and F2 for each machine learning model. As part of this process, GumGum:

  • Engages data annotation leveraging human-annotators to establish Ground Truth for various data sets.

  • Works with third-party vendors and research consultants to conduct relevancy testing.

Note: If a Verity data set that has been delivered to a partner is deemed erroneous or incomplete, GumGum will follow the Verity Data Reissuance Policy.

The following sections outline the data models and scoring used for Brand Safety and Contextual Classification in Verity, and points to a relevant third-party study.

Brand Safety Classification and Scoring

Verity’s brand safety classification relies on GumGum’s threat data model. The threat model is trained on collections of various kinds of threatening content.

As brand safety and content classification serve different purposes, Verity considers different approaches for scoring brand safety versus content classification models. Both approaches use Recall scoring (e.g. out of all the images of weapons in a dataset, how many weapons were identified) and Precision scoring (e.g. the number of times an image identified as a weapon was actually a weapon).

Brand safety is a threat detection algorithm, so in this case Verity favors Recall over Precision. Data Scientists use Precision-Recall curves to maximize Recall with minimum loss in Precision, thereby maximizing the number of potential threats classified. 
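The Precision, Recall, and F-score metrics discussed above are standard measures and can be computed as follows; the sample counts below are illustrative.

```python
# Standard metric definitions from counts of true positives (tp),
# false positives (fp), and false negatives (fn). Counts are illustrative.
def precision(tp: int, fp: int) -> float:
    # Of everything flagged as a threat, how much really was one?
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Of all real threats present, how many were flagged?
    return tp / (tp + fn)

def f_beta(p: float, r: float, beta: float) -> float:
    # F1 weighs precision and recall equally; F2 weighs recall higher,
    # matching the brand-safety preference described above.
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

p, r = precision(tp=80, fp=20), recall(tp=80, fn=10)
f1, f2 = f_beta(p, r, 1), f_beta(p, r, 2)   # here f2 > f1 because r > p
```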

Verity results comprise risk and confidence levels for each Threat category.

The risk level represents the risk potential of unsafe content within a page, video, image, or text string. Possible risk levels are LOW, MEDIUM and HIGH.

In traditional statistical measures, confidence in observed results may be assessed according to the number of samples involved in a test. Larger scale sampling leads to a higher confidence score. However, Verity confidence levels are not related to the quantity of sample data. For example:

  • A threat category result of “confidence”: “VERY_LOW” should be interpreted as Verity identifying, with a high level of confidence, a very low risk for that category within the content. 

  • A threat category result of “confidence”: “VERY_HIGH” should be interpreted as Verity identifying, with a high level of confidence, a very high risk for that category within the content. 

Contextual Classification and Scoring

Verity analyses contextual categories, keywords, and sentiment results using various methods and data models, outlined below:

IAB Content Categories 

Content classifier predicts the likelihood that the given content belongs to one or more IAB categories.


Keywords

A set of rules derives, scores, and ranks the most important keywords.


Sentiment

Machine learning predicts the sentiment of each sentence within content by applying models trained on content with varying tones of voice. Verity returns an aggregated breakdown of the proportion of sentences in the content that are positive, neutral or negative (referred to as Document Level Sentiment Analysis). There are inherent accuracy limitations for sentiment reporting, as this varies by data set, largely due to the subjective nature of the classification task. Our studies have shown that Neutral is typically the highest scoring sentiment value for documents analyzed.
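Document Level Sentiment Analysis aggregation, as described above, can be sketched as follows. The per-sentence labels are assumed inputs here; producing them is the job of the trained sentiment models.

```python
# Sketch of rolling per-sentence sentiment labels up into document-level
# proportions. Labels come from the document; arithmetic is illustrative.
from collections import Counter

def aggregate_sentiment(sentence_labels):
    """Return the share of positive/neutral/negative sentences."""
    counts = Counter(sentence_labels)
    total = len(sentence_labels)
    return {label: counts.get(label, 0) / total
            for label in ("positive", "neutral", "negative")}

breakdown = aggregate_sentiment(
    ["neutral", "neutral", "positive", "negative", "neutral"])
# breakdown["neutral"] == 0.6  -- neutral is the primary sentiment
```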

Content classification is used for targeting purposes so Verity favors Precision over Recall. Data Scientists use Precision-Recall curves to maximize Precision with minimum loss in Recall, thereby maximizing the accuracy of the classified targets.

Contextual Intelligence Relevancy Study

GumGum participates in publicly available third-party media studies, such as the Comparison of Contextual Intelligence Vendors and Behavioral Targeting undertaken with the Dentsu Aegis Network in 2020. The study report found that:

“GumGum Verity™ had the highest percentage of relevant pages across all four Contextual Intelligence vendors.”

Partners may review the complete report, available from this link Understanding Contextual Relevance and Efficiency.

Verity and the GARM Brand Safety Floor


Integration Methods

Verity integration clients include publishers who can sell ad space directly to advertisers, using Verity data to place ads with contextually targeted content, or to avoid brand-unsafe content. 

Verity client integrations also include video implementations, such as a Contextual Video Marketplace where brands and advertisers can access Verity’s contextual and brand-safety data for the marketplace publishers’ video inventory.

Clients leverage Verity data via RESTful API or Page Tag integration. In both cases, Verity analysis results are returned in a JSON response body.

API Integration

Verity offers separate APIs for Page and Video Analysis via server-to-server (S2S) connections. In either case a user or client application calls the Verity API, specifying the URL of content to be analyzed. Clients implement webhooks to listen for the JSON response body results on a Verity callback URL. 
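An S2S integration might look like the following sketch. The endpoint path, payload fields, and response shape shown here are hypothetical; consult the Verity API reference for the actual contract. The request is constructed but not sent.

```python
# Hypothetical S2S integration sketch. Endpoint, payload fields, and
# response keys are assumptions for illustration, not the real contract.
import json
import urllib.request

def build_request(api_url, api_key, content_url, callback_url):
    """Prepare an authenticated POST asking Verity to analyze a URL."""
    payload = json.dumps({"url": content_url, "callbackUrl": callback_url})
    return urllib.request.Request(
        api_url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

def handle_webhook(body: str) -> bool:
    """Parse the JSON result delivered to the client's callback URL."""
    report = json.loads(body)
    return report.get("brandSafety", {}).get("safe", False)

req = build_request("https://verity.example/api/page", "API_KEY",
                    "https://publisher.example/article",
                    "https://client.example/webhook")
```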

Page Tags

In this case, publishers implement a page tag that automatically calls Verity to analyze a page whenever a user visits the page. 

For example, a publisher could set up a page tag to fetch new ads for the page based on the keywords identified by Verity. Initial ad loading is disabled until Verity returns the keyword data. A callback publishes targeting keywords using the Verity data, then fetches new ads via Google Publisher Tag (GPT) refresh functionality.

Processing Time

Once a request is sent, Verity takes less than a second to return an initial response, indicating whether or not data is already available for the URL.

If data is available (i.e. the content has been processed recently and results are in the database) the Verity response is returned immediately.

If the request is for new digital content, Verity initiates an asynchronous process to analyze the content and correlate the results into a Verity response. It may take a few minutes to complete processing for new media.
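On the client side, the asynchronous case might be handled with a polling loop such as the sketch below. The status values, intervals, and timeout are illustrative assumptions; clients may equally rely on the webhook callback mechanism.

```python
# Hypothetical client-side polling loop for the asynchronous case.
# Status values, interval, and timeout are illustrative assumptions.
import time

def await_report(fetch, poll_interval=5.0, timeout=300.0, sleep=time.sleep):
    """Call fetch() until the report is ready or the timeout elapses."""
    waited = 0.0
    while waited <= timeout:
        report = fetch()
        if report.get("status") == "DONE":
            return report          # cached or newly processed result
        sleep(poll_interval)       # new media may take a few minutes
        waited += poll_interval
    raise TimeoutError("Verity processing did not finish in time")
```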

Machine Learning Model Development

The Verity team carefully selects and trains machine learning models for each contextual and brand-safety classification. As part of the normal Verity lifecycle, existing models are continually enhanced or seamlessly replaced with higher-performing models. 

GumGum develops machine learning models and also works with technology partners in various ways. GumGum:

  • Engages data annotation companies to provide human annotators and crowdsourced data annotation platforms.

  • Adopts machine learning models from technology partners or open source frameworks. 

When working with a technology partner, GumGum:

  • Verifies that the technology partner is a good fit for the Verity service.

  • Vets the technology partner for quality of service (e.g. by completing a pilot implementation with GumGum as a proof of concept).

  • Validates that the technology partner business is legitimate and appropriately licensed and that the relationship does not pose any undue risk to GumGum.

  • Contractually obligates the technology partner to comply with all applicable laws and regulations (international, federal and state).

GumGum’s legal and business teams carefully monitor all technology partner relationships on an ongoing basis.

Classification Quality Maintenance

The Verity team constantly runs A/B testing to evaluate alternative data models and competitor results. On a quarterly basis, Verity also maintains a Rolling KPI quality check where URLs are collected randomly from publisher domains and added to a Gold Standard Data Set. 

The URLs are human-annotated for threat and contextual classifications using both individual annotators and data annotation platforms.  The Verity team runs classification processes, checks the results, and determines remediation or enhancement steps. 

Page Minimum Reporting Requirements

Web pages must meet certain minimum requirements in order for Verity to successfully process the page content:

  • The URL specified in the page request must be valid and meet these requirements:

    • Start with http:// or https://.

    • Have a properly URL-encoded address.

    • Any request parameter values must be properly URL-encoded.

  • Verity must be able to download HTML from the page URL.

Verity will attempt to extract content for analysis from pages that meet the above requirements.
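A minimal validity check mirroring the URL requirements above might look like this; the exact checks Verity performs are not published, so this is only a sketch.

```python
# Sketch of the minimum URL requirements listed above. The specific
# checks are illustrative; Verity's real validation is not published.
from urllib.parse import urlsplit

def url_meets_requirements(url: str) -> bool:
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return False          # must start with http:// or https://
    if " " in url:
        return False          # spaces indicate the URL is not URL-encoded
    return bool(parts.netloc)

assert url_meets_requirements("https://example.com/article?id=42")
assert not url_meets_requirements("ftp://example.com/file")
assert not url_meets_requirements("https://example.com/my article")
```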

Verity’s content extraction function can successfully process a wide range of web page designs, HTML markup, and image formats; however, some known issues exist that may impede the extraction of usable web page content.

Review the limitations detailed in the following sections.

Content Extraction Limitations

The following sections summarize some of the known issues Verity may encounter when downloading and extracting pages for analysis.

Maximum characters per page

Verity processes only the first 20,000 characters on any page in any supported language. Note that, according to the Verity team’s research, the majority of web pages are under 7,500 characters. Few pages exceed the 20,000-character limit.

Insufficient Content

Where Verity’s content extraction processes cannot extract sufficient relevant content from a page (typically 50 text characters or fewer), Verity is unable to adequately perform text classification tasks. An INSUFFICIENT_CONTENT error message is returned. The benefit of excluding insufficient content from Verity analysis is that classifications are made only on meaningful amounts of data, enabling increased accuracy across all classes. 
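The two content-size rules above (the 20,000-character processing cap and the INSUFFICIENT_CONTENT error) can be sketched as follows; the thresholds are from this document, while the function shape is an illustrative assumption.

```python
# Sketch of the stated content-size rules: truncate long pages at 20,000
# characters and reject pages with too little text. Shape is illustrative.
MAX_CHARS = 20_000
MIN_CHARS = 50

def prepare_text(extracted: str):
    """Truncate long pages; reject pages without enough usable text."""
    if len(extracted) <= MIN_CHARS:
        return {"error": "INSUFFICIENT_CONTENT"}
    return {"text": extracted[:MAX_CHARS]}

assert prepare_text("too short") == {"error": "INSUFFICIENT_CONTENT"}
assert len(prepare_text("x" * 30_000)["text"]) == 20_000
```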

Infinite scrolling pages

Infinite scrolling enables users to keep scrolling through information on a web page without clicking a “Load More” or “Next Page” option. Many platforms have implemented infinite scrolling because information loads quickly and maintains user engagement. In many infinite scrolling environments, each component page of the infinite scroll has its own URL, and the URL changes as content is loaded. Because Verity has a 20,000-character maximum limit and only processes page URLs that are specifically requested by the partner, Verity typically does not process the complete content of an infinite scrolling page.

Dynamically rendered pages

Dynamic web pages contain content that is generated automatically from a web server via JavaScript, instead of being hard-coded on the page. The content of the page may change based on multiple variables, for example, new data on the web server or user selection. The content of these pages can only be reliably discovered by rendering the page. Verity therefore does not attempt to classify dynamically rendered pages.

Home pages

Home pages for a site may have more complicated layouts than the main corpus of the site content and often contain text passages quoted from other pages on the site. Verity’s contextual categorization of home page content may therefore be less useful than the classification of other pages on the site.

Intricate page layouts

Some sites may implement complex HTML and CSS schemes that may require rendering to reveal the main body text of the pages. These design practices are not typically employed by established publishers and therefore rarely impede Verity content extraction.

User Generated Content (UGC)

Verity does not process or analyze UGC, such as Comments or Social Media posts. UGC is constantly changing, therefore Verity does not attempt to provide a UGC content classification that could immediately become outdated.

Embedded video content

Verity video classification requires direct access to the video asset to perform content-level analysis. As such, video content embedded within a webpage (or hosted video player) is not considered in Page Classification reporting. The page classification may therefore differ, in whole or in part, from the classification of a video within which a video ad may be served. Verity page classification reporting should be used to support page-level ad targeting or avoidance; Verity video classification reporting should be used to support video-level ad targeting or avoidance.

Site Access Limitations

Partner restrictions on website access may limit Verity’s ability to download content. Typically, Verity partners configure their Allow lists to enable Verity user agents to access their content for extraction and processing.

Websites with login required

Some websites may require user login before any content is displayed. In these cases, Verity will return an error and will not attempt to classify the content. However, in most cases partners add Verity user agents to their Allow list so this issue does not arise.

Geographic content

Content is often tailored to a specific geographic market, for example for News, Sports, or Streaming sites. The site may be designed to effectively serve a local market, or to conform to region-specific regulations such as GDPR.

Websites may automatically detect a user’s geographic location based on their IP address and dynamically serve content targeted to their region. Verity user agents run in the U.S.A. and may be served content targeted to that market from these websites.

However, most multi-national publishers run websites with country-specific domains for each nation they serve. Verity will classify the content of the country-specific page URL requested.


Paywalls

Many Publisher websites are protected by a paywall, and limit access to their content in various ways, such as:

  • Limiting the number of pages a user can read without logging in and subscribing.

  • Displaying the opening sentences or paragraphs of a page, but concealing the rest of the content until a reader selects a subscription option or logs in.

Verity can often extract enough content from these pages to successfully perform a classification; however, in most cases the Publisher has added Verity user agents to their Allow list, so the paywall does not impact Verity.

Rate limits

Web properties may want to reduce their exposure to DoS (Denial of Service) or bot attacks. Multiple requests within a short time span may trigger the website to block subsequent requests from Verity. In this case, Verity is unable to extract page content until the block is lifted.


Robots.txt Restrictions

A Robots.txt file may limit access to a site or parts of a site. The site may also limit the number of pages that can be downloaded (for example, only 10 pages per month). This may limit Verity’s ability to download content from the site.

Fake Page Content

In theory, a publisher could set up a page to return different content for a page URL in order to manipulate Verity’s classification results. A publisher that intentionally misrepresents page content to avoid or circumvent Verity’s brand safety measures would be considered nefarious. To GumGum’s knowledge, Verity has not encountered an issue of this kind.

Image Formats Analyzed

Verity applies logic to identify the prominent image on a web page for analysis. Additional images on the page may be subject to image extraction limitations. Supported image formats are:

  • BMP

  • EPS

  • ICNS

  • ICO

  • IM

  • JPEG

  • JPEG 2000

  • MSP

  • PCX

  • PNG

  • PPM

  • SGI


  • TIFF

  • WebP

  • XBM

Video Data Analyzed

The Verity Video analysis pipeline processes and analyzes video content and metadata, specifically:

  • Audio
    Transcription of the video's audio track. The maximum transcription length supported is 4 hours (14400 seconds).

  • OCR
    Text and cursive text detected in the video frames. OCR is included in the process when the video transcription yields fewer than 50 words. 

  • Metadata and title
    Page title and metadata.

  • Video frames
    Sampling is performed at a rate of 1 frame per second.

  • Video formats
    Supported formats are MPEG-4, MOV, MP3, FLAC, and M3U8.


User Information is Not Analyzed

Verity does not process or store user information (such as cookies or browsing history). Verity analysis is based solely on the content of media analyzed.

The contents of these documents and any attachments contain GumGum, Inc. confidential information and are legally protected from disclosure.