IAS Principles for Responsible Use of Artificial Intelligence
Our Perspective on the Use of AI
Artificial Intelligence (AI) is a core pillar of IAS’s long-term strategy, as it enables us to bring new insights to our customers and partners by analyzing data at a scale not previously possible. IAS has developed a digital advertising platform powered by AI and machine learning (ML) that captures up to 280 billion interactions daily from around the globe. IAS leverages AI for prediction, decisioning, protection, and targeting, and infuses the power of AI and ML into all aspects of our products. We operate by a set of guiding principles to ensure we use AI responsibly and do not create unintended impacts on our business processes or the general public.
Principles
01
Transparency
We use AI only for purposes that enable us to provide outstanding products and services to our clients, and we make these purposes fully known. In addition, we open our information systems to independent, third-party audits from multiple certification and accreditation bodies.
02
Fairness
We actively seek to keep our AI systems free from bias and mitigate disproportionate impacts on individuals or groups.
03
Compliance
We build and operate our AI systems to comply with the leading AI standards, regulations, and governance frameworks. We document our AI governance efforts and evaluate and adjust them as necessary to keep pace with evolving laws, societal expectations, and technology.
04
Security
We design our AI systems to be secure and incorporate our approach to AI into our regularly tested information security program, which aligns with industry standards to keep our data secure. We maintain a formal, enterprise-wide, risk-based assessment program that evaluates security risk exposures, including those related to our use of AI and ML.
05
Human Oversight
We have world-class data scientists who scrutinize the design, implementation, and outputs of our AI systems to verify that they are functioning as intended. In addition, our AI and ML systems operate under human observation in order to minimize bias and avoid unexpected outcomes.
What Kind of AI Does IAS Use?
There are various technologies that are collectively referred to as “AI” in the marketplace. IAS’s Total Media Quality, Quality Attention, and Fraud solutions, for example, all use ML and human-assisted techniques that have been industry standard for a number of years in applications such as spam filters, fraud protection, and pattern recognition. These ML techniques are trusted, safe, and common in systems used to detect fraud, identify patterns, or analyze user behavior.
IAS does not use Generative AI in any of its products, although IAS does use trusted Generative AI platforms in other aspects of its business.
Safe & Responsible AI
IAS is committed to the transparent and responsible use of AI, which is why we chose to be early adopters of TrustArc’s Responsible AI Certification, which is built upon AI governance frameworks including the EU AI Act, NIST AI Risk Management Framework, OECD AI Principles, and ISO 42001. Our partnership with TrustArc will enable IAS to ensure alignment with evolving AI governance and privacy frameworks and standards on an ongoing basis, mitigating compliance risks.
We also have internal policies (e.g., our AI Policy) and training in place to educate our employees and guide them toward making informed decisions regarding the use of AI, keeping our AI systems secure and their uses aligned with our values.
Additionally, safe and responsible use of AI at IAS includes:
Rigorous Testing
Our IT change management process is compliant with global standards, and we routinely test our AI systems before and after deployment to ensure they are functioning as intended.
Human Control
While automation is a fundamental benefit of AI, our systems do not function independently of human oversight. Our Data Scientists continually monitor input data quality and system outputs and have the authority and responsibility to intervene if necessary.
Data Correction
In addition to our internal quality assurance processes, we work with trusted partners who help us detect errors or anomalies in the data that necessitate human review.
No Harm
The ML models and the processes by which we perform data analytics using AI are internal to IAS; our AI system is not used to interact with the general public, nor does the system have the capability to make decisions with the potential to impact individual persons.
We continually look for ways to enhance our AI capabilities. We are also committed to using AI and ML in a way that benefits us all and, above all, does not cause harm. Because of this, IAS will not pursue objectives using AI that are likely to harm individuals or groups or that violate international law, human rights, or globally accepted standards of practice.