I’ve always wondered why people put 255 as the length of a varchar, not 200, 250 or 300. I know it comes from 2^8, but why use 2^8? One of the reasons people argue is “to match the binary nature of the data store”, so that we don’t waste space.
SQL Server 2005, 2008, 2012 and 2014 all have the storage size as the length of the data entered + 2 bytes. So whether we create the column as varchar(255) or varchar(258), when the column contains “406 Middleton Street” it uses 22 bytes, and when it contains “10 High Street” it uses 16 bytes. It is that simple: bytes = length + 2.
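The rule above can be sketched in a few lines. This is only an illustration of the documented formula (bytes = length of data entered + 2), not SQL Server itself; the function name is mine:

```python
# Illustrative sketch of SQL Server's varchar storage rule:
# storage = length of the data entered + 2 bytes of overhead.
# Note that the declared column size (255, 258, ...) does not
# appear in the formula at all.

def varchar_storage_bytes(value: str) -> int:
    """Storage used by a varchar value: its length plus 2 bytes."""
    return len(value) + 2

for address in ["406 Middleton Street", "10 High Street"]:
    print(f"{address!r}: {varchar_storage_bytes(address)} bytes")
# '406 Middleton Street': 22 bytes
# '10 High Street': 16 bytes
```

The point of the sketch is that the declared length never enters the calculation: only the actual data length matters.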
Now that I’m taking a role as a data architect, I feel obliged to know this. I need to know which is the right data type to use: varchar(200) or varchar(255)? And the answer is very simple: use varchar(200). Or 100, or 300. Just go with what is required; we don’t need to pay attention to 255 (or 511, or 64, or 131). These 2^n-1 numbers play no role in the storage of a SQL Server database. Just ignore them.
But where does this 255 convention come from? Yes, it is from history, but what is the historical reason? Yes, it is 2^8-1, but why? What happened in the past, and when, and where?
Proposal #2: because in SQL Server 6.5 the maximum length for varchar was 255 (link, same page as above)
Bingo! This makes sense. And it is true (link). In SQL Server up to 6.5 the max length was 255, both for char and varchar.
#2 is also why we don’t see people using the other 2^n-1 numbers like 131 and 511. It makes sense.
I know it is an old story, but it is good to remember that up to version 6.5, the SQL Server code base was still Sybase (link), i.e. Microsoft SQL Server 1.0 = Sybase SQL Server 3.0. The brand was Microsoft but the code was Sybase. It was in Microsoft SQL Server 7.0 that this Sybase code base was replaced (link).