April 11, 2024

Introducing the GitLab AI Transparency Center

This new initiative will help our community understand how we uphold governance and transparency in our AI products.


GitLab is dedicated to responsibly integrating artificial intelligence (AI) throughout our comprehensive DevSecOps platform. We offer GitLab Duo, a full suite of AI capabilities across the GitLab platform, so that our customers can ship better, more secure software faster. GitLab Duo follows a privacy- and transparency-first approach to help customers confidently adopt AI while keeping their valuable assets protected.

Generative AI is moving quickly, and we know it raises a host of novel questions about the privacy and safety of this technology. In GitLab's 2023 State of AI in Software Development report, more than 75% of respondents expressed concern about AI tools having access to private information or intellectual property.

Transparency is a core value at GitLab, and we take a transparency- and privacy-first approach to building our AI features to help ensure that our customers’ valuable intellectual property is protected. Accordingly, we’ve launched our AI Transparency Center to help GitLab’s customers, community, and team members better understand the ways in which GitLab upholds ethics and transparency in our AI-powered features.

The AI Transparency Center includes GitLab’s AI Ethics Principles for Product Development, AI Continuity Plan, and our AI features documentation.

The AI Ethics Principles for Product Development explained

We believe ethics play an important role in building AI features. For this reason, we’ve launched GitLab’s AI Ethics Principles for Product Development to address what we consider to be the best practices in responsible AI development. These Principles will help guide GitLab as we continue to build and evolve our AI functionality.

The Principles specifically address five key areas of concern that GitLab monitors so that we can continue to responsibly integrate AI into our customers’ workflows:

  • Avoiding unfair bias. Diversity, Inclusion, and Belonging is also one of GitLab’s core values. It is a critical consideration when building features powered by AI systems, as there is evidence that AI systems may perpetuate human and societal biases. GitLab will continue to prioritize Diversity, Inclusion, and Belonging when building AI features.

  • Safeguarding against security risks. GitLab is a DevSecOps platform, which means we integrate security throughout our entire product, including in our AI features. While AI brings many potential security benefits, it can also create security risks if not deployed correctly. As we do with all of our features, our goal is to mitigate these security risks in GitLab’s AI features.

  • Preventing potentially harmful uses. We strive to build AI features responsibly. We try to carefully consider the potential consequences of our AI features in order to refrain from launching features that are likely to cause, or allow others to cause, overall harm.

  • Considering what data our AI features use and how they use it. We will continue to carefully evaluate the data that our AI features use, the purposes for which we’re using this data, and who owns the intellectual property and other rights to the data, just as we do with all of GitLab’s features.

  • Holding ourselves accountable. GitLab’s mission is to make it so that everyone can contribute, and we welcome feedback from the GitLab community about our AI features. We will in turn aim to share our AI ethics-related findings with others in the industry where possible. We also know that AI systems, and the risk mitigations we need to employ with them, will change over time, so we are committed to continuously reviewing and iterating on our AI features and these Principles.

The AI Continuity Plan explained

Unlike other DevSecOps platforms, GitLab is not tied to a single AI model provider. Instead, our AI features are powered by a diverse set of models, which helps us support a wide range of use cases and gives our customers flexibility.

We carefully select our third-party AI vendors, requiring a commitment from each vendor that they will not use GitLab's or GitLab customers' content to develop, train, or fine-tune their models.

Our new AI Continuity Plan lays out GitLab’s processes when reviewing and selecting new third-party AI vendors, and when these AI vendors materially change their practices with respect to customer data.

AI features documentation

In keeping with GitLab’s core Transparency value, our AI features documentation clearly outlines our AI features’ purposes, underlying models, statuses, and privacy practices.

Visit the AI Transparency Center

The AI Transparency Center is publicly available in keeping with our Transparency value and to encourage others in the AI industry and the GitLab community to take safety, privacy, and ethics into account when building their own AI-powered functionality.

We’re excited about the opportunities that responsible AI will bring, and will continue to build our AI features with ethics, privacy, and transparency in mind.

We want to hear from you

Enjoyed reading this blog post or have questions or feedback? Share your thoughts by creating a new topic in the GitLab community forum.
