Linus Torvalds explains the reason for using a SHA-1 hash in his git presentation at Google (by the way, I recommend watching it in full to anyone who wants to understand what git is all about):

(video | transcript)

Having a good hash is good for being able to trust your data. It happens to have some other good features, too: it means that when we hash objects, we know the hash is well distributed, and we do not have to worry about certain distribution issues. Internally, from the implementation standpoint, it means we can trust that the hash is so good that we can use hashing algorithms and know there are no bad cases. So there are some reasons to like the cryptographic side too, but it's really about the ability to trust your data. I guarantee you, if you put your data in git, you can trust the fact that five years later, after it was converted from your hard disk to DVD to whatever new technology and you copied it along, five years later you can verify that the data you get back out is the exact same data you put in. And that is something you really should look for in a source code management system.


One of the reasons I care is that we actually had a break-in on one of the BitKeeper sites for the kernel, where people tried to corrupt the kernel source code repository, and BitKeeper actually caught it. BitKeeper did not have a really fancy hash at all, I think it was only a 16-bit CRC, something like that. But it was good enough that you could actually see a clumsy attempt; it was not cryptographically secure, but it was hard enough to overcome in practice that it was caught immediately. But when that happens to you once, when you got burned once, you do not ever want to get burned again. Maybe your projects aren't that important; my projects, they are important. There is a reason I care.

[...]

So maybe I am a cuckoo, maybe I am a bit crazy, and I care about security more than most people do. But the whole notion that I would give the master copy of source code that I trust and care about so much to a third party is ludicrous. Not even Google. No way in Hell would I do that. I allow Google to have a copy of it, but I want to have something that I know nobody touched. By the way, I am not a great MIS person, so the disk corruption issue is definitely a case I might worry about, because I do not do backups. So it's OK if I can then download it again from multiple trusted parties: I can verify them against each other, that part is really easy, and I can verify them against those 20 bytes that I really, really cared about, which hopefully I have in a few places. 20 bytes are easier to track than 180 MB, and corruption is less likely to hit those 20 bytes. If I have those 20 bytes, I can download a git repository from a completely untrusted source and I can guarantee that they did not do anything bad to it. That's a huge thing, and when you do hosted repositories for other people, if you use Subversion, you are just not doing it right. You are not allowing them to sleep well at night. Of course, if you do it for 70... how many, 75,000 projects? Most of them are pretty small and not that important, so it's OK. That should make people feel better.
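
To make the "20 bytes" concrete: git names every object by the SHA-1 hash of a small header plus its content, and a SHA-1 digest is exactly 160 bits, i.e. 20 bytes, so that one small value transitively vouches for the whole history. A minimal sketch of that computation for a blob, mirroring what `git hash-object` does:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the object ID git assigns to a blob.

    Git hashes the header "blob <size>\\0" followed by the raw
    content; the 20-byte SHA-1 digest is the object's name.
    """
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches git itself on the same content:
#   $ printf 'hello\n' | git hash-object --stdin
#   ce013625030ba8dba906f756967f9e9ca394464a
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Trees hash the IDs of the blobs they contain, and commits hash their tree and parent commits, which is why changing any byte anywhere in the history changes the commit ID at the tip.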

When you host your repository on a third-party site, you can use the cryptographic hash of every revision to make sure that nobody tampered with it. If they change even a single byte, the hashes will no longer match. That means writing down the hash of your HEAD revision from time to time protects you from malicious manipulation of your codebase, even when you don't host your code yourself.
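
As a small illustration, noting down the HEAD hash and re-checking a fresh download against it only takes the standard `git rev-parse` and `git fsck` commands. The sketch below assumes a local clone path and a previously recorded hash (both names are illustrative):

```python
import subprocess

def head_hash(repo: str) -> str:
    """Return the commit hash that HEAD points to in the given repo."""
    return subprocess.run(
        ["git", "-C", repo, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def verify(repo: str, trusted_hash: str) -> bool:
    """Check a freshly downloaded repository against a hash recorded earlier.

    `git fsck` re-hashes the stored objects, so a matching HEAD plus a
    clean fsck means the history is bit-for-bit what the hash vouches for.
    """
    fsck = subprocess.run(["git", "-C", repo, "fsck", "--full"])
    return head_hash(repo) == trusted_hash and fsck.returncode == 0

# Write down head_hash(".") somewhere safe; after re-downloading the
# repository from any mirror, verify("clone_path", noted_hash) tells you
# whether anyone tampered with it.
```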
