  • How do we define brand suitability compared to brand safety? 

  • -- Brand Safety is a "table-stakes" minimum; we align with GARM's Brand Safety Floor definitions for this. Brand Suitability is non-binary and accounts for the differing levels of comfort that advertisers and brands have with various kinds of content, on a campaign-by-campaign basis. For Brand Suitability, we align with GARM's Brand Suitability Framework definitions for Low-, Medium-, and High-risk situations, which also aim to take into consideration the "purpose" or "intent" of the content. For example, a review of a fictional film may mention violence or harmful content, but because this is a fictional representation of violence for the purposes of entertainment, it may be considered a suitable environment for some brands; it is not a direct depiction of a real-life act of violence or harm, such as may be posted elsewhere.
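The distinction above can be sketched in code. This is a hypothetical illustration, not Verity's actual logic: the `ContentSignal` fields, tier names, and rules are invented to show how a binary safety floor differs from a non-binary suitability tier that weighs intent.

```python
# Hypothetical sketch: binary Brand Safety floor vs. tiered Brand Suitability.
# All names and rules here are illustrative, not Verity's implementation.
from dataclasses import dataclass

@dataclass
class ContentSignal:
    category: str        # e.g. "violence"
    is_fictional: bool   # intent/purpose signal (e.g. a fictional film review)
    explicit: bool       # graphic, real-world depiction

def safety_floor_violation(sig: ContentSignal) -> bool:
    # Brand Safety: a binary floor -- explicit real-world harm is never servable.
    return sig.explicit and not sig.is_fictional

def suitability_tier(sig: ContentSignal) -> str:
    # Brand Suitability: non-binary; fictional intent lowers the risk tier,
    # so brands can opt in or out per campaign.
    if safety_floor_violation(sig):
        return "floor"   # blocked for all brands
    if sig.is_fictional:
        return "low"     # e.g. violence discussed in a fictional film review
    return "high" if sig.explicit else "medium"
```

Under this sketch, the film-review example from the answer lands in the "low" tier rather than being blocked outright.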

  • "Many providers in the contextual space do not actually use machine learning but make use of keyword-based algorithms" - do we know who?

    • -- To my knowledge, IAS, DoubleVerify, Oracle, and Peer39 all use keyword-based algorithms to define their contextual and brand safety segments and do not leverage machine learning, let alone deep learning, models for classification.
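A toy example makes the limitation of the keyword approach concrete. This is a hypothetical sketch (the keyword list and function are invented, not any provider's actual algorithm): a bare keyword match fires on any page containing a listed term, with no sense of context or intent.

```python
# Hypothetical keyword-based classifier, as described for some providers.
# The block list and logic are illustrative only.
BLOCK_KEYWORDS = {"shooting", "attack", "bomb"}

def keyword_classifier(text: str) -> bool:
    # Flags any page containing a listed term, regardless of context:
    # "fashion photo shooting" is flagged exactly like "mass shooting".
    tokens = set(text.lower().split())
    return bool(tokens & BLOCK_KEYWORDS)
```

A machine learning model, by contrast, scores the passage as a whole, so benign uses of a risky term need not be blocked.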

  • When will Verity be in other languages, such as French?

  • -- Verity is currently available in English, Japanese, Spanish, French, and German. Support for Italian and Portuguese is under development and should be available before the end of 2021.

  • Can you give me the elevator explanation of how Verity works for audio and video?

  • -- Verity leverages audio transcription to convert audio files (standalone or extracted from video) from speech to text. The output of this process is then sent to Verity's NLP machine learning models for classification. For video analysis, Verity leverages audio transcription for speech-to-text as well as frame sampling from the video content itself, sending the captured frames to Verity's computer vision models for classification. In parallel, the speech-to-text output is enriched with any available video metadata (title, description) and sent to Verity's NLP machine learning models for classification. The outputs of these parallel tracks are then merged by our proprietary merging logic, and a comprehensive analysis considering both the auditory and visual elements of the video is returned.
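The parallel tracks described above can be sketched as a small pipeline. Everything here is a placeholder: the function names, inputs, and scores are invented stand-ins for Verity's actual transcription, sampling, model, and merging components.

```python
# Hypothetical sketch of the parallel video pipeline described above.
# All functions and scores below are placeholders, not Verity's API.

def transcribe(audio: dict) -> str:          # speech-to-text (placeholder)
    return audio.get("speech", "")

def sample_frames(video: dict) -> list:      # frame sampling (placeholder)
    return video.get("frames", [])

def nlp_model(text: str) -> dict:            # NLP classification (toy score)
    return {"text_risk": 0.1 if "fictional" in text else 0.5}

def cv_model(frame: dict) -> dict:           # computer vision (toy score)
    return {"visual_risk": 0.2}

def analyze_video(video: dict, metadata: dict) -> dict:
    # Track 1: transcript, enriched with title/description, sent to NLP.
    transcript = transcribe(video.get("audio", {}))
    text_input = " ".join([metadata.get("title", ""),
                           metadata.get("description", ""),
                           transcript])
    nlp_scores = nlp_model(text_input)

    # Track 2: sampled frames classified by computer vision.
    cv_scores = [cv_model(f) for f in sample_frames(video)]

    # Merge step: combine textual/auditory and visual signals into
    # one comprehensive result (here, a trivial max over frames).
    visual = max((s["visual_risk"] for s in cv_scores), default=0.0)
    return {**nlp_scores, "visual_risk": visual}
```

For standalone audio, only Track 1 runs: transcription followed by NLP classification.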

  • First "independent ad provider" - is that because YouTube has certification?

  • -- Yes, that is exactly correct. YouTube received the first content-level accreditation from the MRC for Brand Safety. They are regarded as a "1st party platform" rather than an "independent ad tech provider" since they own the platform/network and operate as a walled garden. Twitter, Facebook, Google, Amazon, and TikTok would fall into that same "1st party platform" bucket, while IAS, DV, and Oracle would be independent ad tech providers, as we are.

  • How we differ from YouTube - they only run on their own network?

  • -- The primary distinction is that we are an independent ad tech provider, while they operate as a 1st party platform. Another important distinction is that the scope of YouTube's content-level accreditation is limited to Brand Safety, while Verity's content-level accreditation covers Brand Safety, Brand Suitability, and Contextual Analysis.

  • "Property" = text only?

    • -- Yes, the property-level distinction only requires consideration of text elements. Content-level requires consideration of text, imagery, audio, and video.

  • Enhanced Brand Safety Guidelines - beyond just text? When was this implemented?

  • -- The Enhanced Brand Safety Guidelines (a supplement to the Ad Verification Guidelines) were published by the MRC in 2018, as a direct response to Brand Safety violations that were identified on YouTube in 2017. Here is an article that chronicles some of these issues.

  • How many humans are feeding our machine with data?

  • -- It's hard to put an exact number on this, as we always have data collection and data labeling jobs running at any given time, each of which may have differing requirements. For model development and QA, we have a full-time in-house team of data scientists with specializations in computer vision (image sciences) and natural language processing (linguistics). For data collection and annotation, we leverage both a dedicated in-house team and a globally distributed workforce of human annotators.

  • Link to description of methodology

...

Excerpt

Verity supports multiple languages, outlined in the Language Support Grid.