Strategic Risk Management: Approximate Domain Unlearning (ADU) introduces a novel approach to managing AI model risks, particularly for organizations deploying Vision-Language Models (VLMs) in safety-critical applications like autonomous driving. Boards should consider integrating ADU into AI governance frameworks to mitigate risks associated with unintended domain generalization.
Regulatory Compliance: ADU addresses concerns about data privacy and information leakage, aligning with emerging regulations on AI transparency and accountability. Boards must ensure compliance by adopting technologies that enable selective forgetting of sensitive or irrelevant domains.
Operational Efficiency: ADU reduces computational overhead by focusing model retention on relevant domains, optimizing resource allocation and improving model performance in targeted applications.
Innovation Leadership: Early adoption of ADU can position organizations as leaders in responsible AI, fostering trust among stakeholders and customers.
Defining the Problem
Issue: Pre-trained Vision-Language Models (VLMs) like CLIP exhibit strong domain generalization, enabling them to recognize objects across diverse domains (e.g., real photos, illustrations, sketches). While this capability is powerful, it introduces risks:
Safety Risks: In autonomous driving, VLMs may misclassify illustrated objects (e.g., cars in advertisements) as real, leading to hazardous decisions.
Privacy and Security Risks: Retaining unnecessary domain knowledge increases the potential for information leakage and model inversion attacks.
Computational Inefficiency: Processing irrelevant domains consumes excessive resources, impacting scalability and operational costs.
Current Solutions and Gaps: Existing “class unlearning” methods focus on forgetting specific object classes but fail to address domain-level risks. These methods are inadequate for applications requiring fine-grained control over domain-specific knowledge.
Finding the Cause
Root Causes:
Domain Entanglement: VLMs are pre-trained on diverse datasets, causing feature representations of different domains to overlap in the latent space. This entanglement makes it difficult to isolate and remove knowledge from specific domains without affecting others.
Lack of Domain-Specific Tools: Traditional unlearning techniques (e.g., class unlearning) do not account for the unique challenges of domain-level disentanglement, such as varying styles within a domain (e.g., realistic vs. stylized illustrations).
Generalization vs. Specialization Trade-off: VLMs prioritize broad generalization, which conflicts with the need for domain-specific forgetting in practical applications.
Identifying Context and Consequences
Context:
Autonomous Systems: Misclassification of illustrated objects as real can lead to critical failures in autonomous vehicles or industrial robots.
Data Privacy: Retaining unnecessary domain knowledge may violate data protection regulations (e.g., GDPR) and expose organizations to legal risks.
Resource Allocation: Processing irrelevant domains increases computational costs, reducing efficiency in large-scale deployments.
Consequences of Inaction:
Safety Incidents: Failure to distinguish between real and illustrated objects could result in accidents or operational failures.
Regulatory Penalties: Non-compliance with data privacy laws may lead to fines, reputational damage, and loss of stakeholder trust.
Competitive Disadvantage: Organizations lagging in responsible AI adoption may lose market share to competitors leveraging advanced unlearning techniques.
Generating Solutions
Proposed Solution: Approximate Domain Unlearning (ADU)
ADU is a framework designed to:
Disentangle Domain Features: Uses Domain Disentangling Loss (DDL) to separate feature distributions of different domains in the latent space, enabling selective forgetting.
Adapt to Instance-Level Variations: Employs an Instance-wise Prompt Generator (InstaPG) to dynamically adjust prompts based on individual image characteristics, addressing intra-domain diversity.
Balance Memorization and Forgetting: Combines cross-entropy and Maximum Mean Discrepancy (MMD) losses to optimize the trade-off between retaining knowledge in target domains and forgetting irrelevant ones.
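To make the MMD term above concrete, here is a minimal NumPy sketch of a Gaussian-kernel Maximum Mean Discrepancy, the statistic used to measure how distinguishable two feature distributions are. The `combined_loss` helper and its `mmd_weight` parameter are illustrative assumptions, not the paper's exact formulation; ADU's actual objective also includes the Domain Disentangling Loss and prompt-generator terms described above.

```python
import numpy as np

def gaussian_mmd(X, Y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between sample sets X and Y
    under a Gaussian (RBF) kernel:
    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]."""
    def kernel(A, B):
        # Pairwise squared Euclidean distances via broadcasting.
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * sigma ** 2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()

def combined_loss(ce_retained, feats_forget, feats_reference, mmd_weight=0.5):
    """Hypothetical combined objective: keep cross-entropy low on retained
    domains while pushing forgotten-domain features toward a non-informative
    reference distribution (low MMD means the two are indistinguishable)."""
    return ce_retained + mmd_weight * gaussian_mmd(feats_forget, feats_reference)
```

Identical distributions yield an MMD near zero, while well-separated distributions yield a clearly positive value, which is what makes the statistic usable as a differentiable "forgetting" signal.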
Alternative Solutions:
Full Model Retraining: Retrain VLMs from scratch on data that excludes the unwanted domains; this is computationally expensive and impractical for large models.
Domain-Specific Fine-Tuning: Fine-tune models on the target domains only; this may not effectively remove knowledge of the forgotten domains.
Black-Box Unlearning: Apply unlearning to proprietary models without access to internal parameters; this limits both control and effectiveness.
Methodology Evaluation
Evaluation Criteria:
Effectiveness: Ability to reduce recognition accuracy in forgotten domains while preserving accuracy in retained domains.
Efficiency: Computational overhead and training time compared to baselines.
Robustness: Performance under challenging conditions (e.g., domain imbalance, partial labels, or overlapping classes).
Generalizability: Applicability to different VLMs and datasets.
Results:
Superior Performance: ADU outperforms baselines (e.g., LP++, CLIPFit, BBF) by over 20% in harmonic mean (H) and forgetting accuracy (For) across datasets such as Office-Home and Mini DomainNet.
Robustness: Maintains stable performance under domain imbalance, partial labels, and varying numbers of training samples.
Efficiency: Incurs minimal computational overhead (e.g., 10.7 GB memory, 550 seconds training time on Office-Home) compared to alternatives.
Visual Evidence: t-SNE visualizations confirm effective domain separation in the feature space, while attention maps show reduced focus on objects in forgotten domains.
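The H metric reported above can be sketched as follows. This assumes H is the standard harmonic mean of retained-domain accuracy and forgetting accuracy; the paper's exact operationalization of each term may differ.

```python
def harmonic_mean(acc_retained, acc_forgetting):
    """Harmonic mean H of retained-domain accuracy and forgetting accuracy.
    Both inputs are fractions in [0, 1]. The harmonic mean rewards models
    that do well on BOTH objectives; excelling at one while failing the
    other drives H toward zero."""
    if acc_retained + acc_forgetting == 0.0:
        return 0.0
    return 2.0 * acc_retained * acc_forgetting / (acc_retained + acc_forgetting)

# A model that retains well but barely forgets scores low:
# harmonic_mean(0.95, 0.05) ≈ 0.095
# while a balanced model scores high:
# harmonic_mean(0.85, 0.80) ≈ 0.824
```

This is why H is a stricter yardstick than a simple average: a model cannot compensate for poor forgetting with strong retention, or vice versa.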
Choosing the Best Solution
Recommendation: ADU is the most effective and practical solution for domain-specific unlearning in VLMs. It addresses the root causes of domain entanglement and instance-level diversity while balancing computational efficiency and robustness. Organizations should:
Pilot ADU in High-Risk Applications: Implement ADU in autonomous systems or privacy-sensitive domains to evaluate its impact on safety and compliance.
Integrate with AI Governance: Incorporate ADU into AI ethics and risk management frameworks to ensure responsible deployment.
Monitor and Iterate: Continuously assess ADU’s performance and adapt to evolving regulatory and operational requirements.
Implementation Roadmap:
Phase 1: Collaborate with AI research teams to deploy ADU in controlled environments.
Phase 2: Scale ADU across high-priority applications, monitoring for safety and efficiency improvements.
Phase 3: Advocate for industry standards on domain unlearning to drive broader adoption and regulatory alignment.
References
Kawamura, Kodai, Yuta Goto, Rintaro Yanagi, Hirokatsu Kataoka, and Go Irie. 2025. “Approximate Domain Unlearning for Vision-Language Models.” arXiv preprint arXiv:2510.08132. https://kodaikawamura.github.io/Domain_Unlearning/.
Radford, Alec, Jong Wook Kim, Chris Hallacy, et al. 2021. “Learning Transferable Visual Models From Natural Language Supervision.” International Conference on Machine Learning (ICML).
Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. “Membership Inference Attacks Against Machine Learning Models.” IEEE Symposium on Security and Privacy (SP).
All Founding Subscribers receive a full Enterprise License to Risk Anchor included with their subscription.
That includes:
The full local package (HTML/JS/CSS) to run on your own infrastructure.
Unlimited assessments: Use it across as many models, business units, or portfolio companies as you need.
Ongoing upgrades: New modules (for example, Business Continuity, Drift Monitoring, or sector-specific controls) are included as they are released.
For more ideas consider purchasing “Shaping the Decade: Governance, Sustainability, and AI 2026–2036,” a guide for boards at the crossroads of governance, technology, and stakeholder capitalism. Available Here.
Tanya Matanda is a governance strategist bridging institutional oversight, AI governance, and fiduciary resilience. Her work supports boards, LPs, and regulators in designing governance systems fit for the AI era.
Copyright © 2025 Matanda Advisory Services
Research and Audio Supported by AI Systems