In the Linux kernel, the person who submits a patch will bear full legal responsibility for 'all lines of code generated by AI, and any bugs or security flaws resulting therefrom.'

After months of discussion, the Linux kernel development community has formalized a project-wide policy on contributing AI-generated code. The policy, agreed upon by Linus Torvalds and the kernel maintainers, takes a pragmatic approach: it treats AI as merely a tool, rejects a blanket ban as unenforceable, and instead sets guidelines that assume humans will use the tools responsibly.
linux/Documentation/process/coding-assistants.rst at master · torvalds/linux
https://github.com/torvalds/linux/blob/master/Documentation/process/coding-assistants.rst
Linux lays down the law on AI-generated code, says yes to Copilot, no to AI slop, and humans take the fall for mistakes — after months of fierce debate, Torvalds and maintainers come to an agreement | Tom's Hardware
https://www.tomshardware.com/software/linux/linux-lays-down-the-law-on-ai-generated-code-yes-to-copilot-no-to-ai-slop-and-humans-take-the-fall-for-mistakes-after-months-of-fierce-debate-torvalds-and-maintainers-come-to-an-agreement
A key point of the new rules is that they strictly prohibit AI agents themselves from adding the 'Signed-off-by' tag.
'Signed-off-by' is a mechanism by which developers legally declare that they wrote the code they are submitting, or that they have the right to redistribute it under an appropriate open-source license. The tag signifies agreement to a set of terms called the Developer Certificate of Origin (DCO), and it serves as a crucial legal step to clarify the origin of the code and prevent future copyright disputes.
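In practice, developers attach this tag with git's built-in sign-off support, which appends a trailer using the committer's configured name and email (the name and email below are placeholders):

```shell
# The -s / --signoff flag appends a Signed-off-by trailer to the
# commit message, certifying agreement with the DCO.
git commit -s -m "mm: fix example"

# The resulting commit message ends with a trailer of the form:
#   Signed-off-by: Jane Developer <jane@example.com>
git log -1 --format=%B
```

By design, the trailer is generated from the human committer's identity, which is why the new policy can require that only a legally responsible person, never an AI agent, adds it.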

The new regulations stipulate that only a person with legal responsibility can certify the Developer Certificate of Origin, and they strictly prohibit AI agents from applying the 'Signed-off-by' tag themselves. Even for contributions that include AI-generated code, the submitter must fully review the content, verify compliance with license requirements, and take full responsibility with their own 'Signed-off-by' tag.
Furthermore, as an operational rule to maintain transparency, a new 'Assisted-by' tag must be used whenever an AI tool is involved in development. The tag follows a specific format that records the name of the agent used, the model version, and the specific tool, making it possible to accurately track the role of AI in the development process.
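Based on the article's description, a contribution involving an AI tool might carry trailers along these lines. This is purely illustrative: the agent name, model version, tool string, and developer identity are placeholders, and the exact trailer syntax is defined in the kernel's own documentation linked above:

```
mm: fix off-by-one in page range check

Assisted-by: ExampleAgent (model: example-model-v2, tool: example-cli 1.0)
Signed-off-by: Jane Developer <jane@example.com>
```

Note that the two trailers play different roles: 'Assisted-by' discloses the tool, while 'Signed-off-by' remains the human submitter's legal certification.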

This stems from concern that AI-generated code, trained on data with unclear copyright status, carries legal risks that could fundamentally undermine the framework of open-source licensing and the Developer Certificate of Origin.
Another major factor is that maintainers' review burden has reached its limit under a flood of low-quality submissions, known as 'AI slop,' containing fabricated content and buggy patches. In the past, code submitted while concealing its AI origin has caused performance regressions, provoking strong backlash from the community.
Technology news site Tom's Hardware commented, 'In short, if the code is good, that's all that matters. If it's shoddy AI code that breaks the kernel, then the person who submitted it will be held accountable by Linus Torvalds. In the open-source world, you can't ask for a greater deterrent.'