Anastasia Khomyakova

Mihai Maruseac: Model transparency for AI/ML security



Mihai Maruseac is a member of the Google Open Source Security Team (GOSST), where he has worked on supply chain security for AI/ML applications since the summer of 2023. Before that, he worked on GUAC, a system for understanding complex software supply chains. Prior to joining GOSST, Mihai created the TensorFlow Security team, having joined Google from a startup to incorporate Differential Privacy (DP) into Machine Learning (ML) algorithms. Mihai holds a PhD in Differential Privacy from UMass Boston.


Model transparency for AI/ML security


AI models (especially LLMs) are now being released at an unprecedented pace. At the same time, supply chain attacks are growing by more than 700% year over year. Taken together, these two facts reveal a troubling prospect: it is entirely possible for bad actors to infect unsuspecting hosts that want to benefit from the AI explosion. And this is not merely theoretical, as we have already seen examples of malicious LLMs being released.


Looking at the traditional software development life cycle and its associated supply chain risks, we see that solutions already exist (e.g., artifact signing, generating provenance, generating software bills of materials) that enforce transparency and thereby reduce supply chain compromises. We can do the same for ML models, across different ML frameworks and model hubs, by drawing analogies between training AI models and building traditional software artifacts. The result is a model transparency suite of tools, which represents one pillar of Google's Secure AI Framework (SAIF).
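
To make the artifact-signing analogy concrete, the sketch below shows the basic flow of signing and verifying a model file: hash the serialized weights, sign the digest at release time, and verify the signature before loading. This is a minimal illustration in Python using the cryptography package, not the API of Google's model transparency tooling; the file name and key handling here are hypothetical, and a real deployment would use keyless signing (e.g., via Sigstore) rather than a locally generated key.

import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def digest_model(path: Path) -> bytes:
    # Hash the model file in chunks so large checkpoints need not fit in memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Stand-in model artifact so the example is self-contained (hypothetical name).
model_path = Path("model.safetensors")
model_path.write_bytes(b"fake model weights")

# Producer side: sign the model digest when the model is released.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(digest_model(model_path))

# Consumer side: verify the downloaded model against the published signature.
public_key = private_key.public_key()
try:
    public_key.verify(signature, digest_model(model_path))
    print("model signature verified")
except InvalidSignature:
    print("model was modified after signing")

The same pattern extends to provenance: instead of signing only the weights, the producer can sign a statement describing how the model was trained (datasets, training code, hyperparameters), giving consumers a verifiable link between the artifact and its build process.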


