Having recently started my first job as a software developer, I was a little thrown to be told that I did not have to follow any naming conventions in my code. Teams working on other, larger projects followed naming conventions, but since I was brought in to write a new, stand-alone application, the feeling was that it did not especially matter. It was the least of my worries, so I just took the existing convention and ran with it:
fSum += fWeight * fValue
But is it actually worthwhile? I find it hard to judge the net effect that following this kind of naming convention has on comprehension and detection of errors, but, visually, it just looks kind of ugly. Plus, having every class and file in the project called cSomething seems pretty asinine.
I'm not under the illusion that it's a remotely big deal when compared to things that make an obvious difference, like the algorithms and architectures that you employ. But any convention that affects every line of code that I write seems worth getting right.
What do you find the most elegant and effective naming convention, if one need be used at all? Does it denote type and/or scope?
Joel Spolsky wrote an article about why Hungarian notation exists and what it was intended for that may help answer your question.
Prefixes like tbl for a database table, int for an integer, etc. are generally not useful: it is trivial to work out what is what from context or from your development tools in those cases. Something like imp for imperial measurements and met for metric makes a lot more sense, because otherwise all you can see is that they are floating-point numbers.
area = width * height
looks perfectly fine while
impArea = metWidth * impHeight
shows you straight off that something is wrong.
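As a minimal Java sketch of that idea (the met/imp prefixes and the conversion factor are illustrative, not from the original):

```java
// Semantic ("Apps Hungarian") prefixes: met = metric metres, imp = imperial feet.
class AreaExample {
    static final double FEET_PER_METRE = 3.28084;

    // Every conversion is spelled out, so a line such as
    // "impArea = metWidth * impHeight" would read as wrong on sight.
    static double impArea(double metWidth, double impHeight) {
        double impWidth = metWidth * FEET_PER_METRE; // convert before mixing units
        return impWidth * impHeight;
    }
}
```

The compiler sees only doubles either way; the prefix is what carries the unit information to the reviewer.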
Personally I just use descriptive variable names. $number_of_items is obviously an integer count. $input_file_handle, $is_active and $encrypted_password have obvious types both in terms of language data type and semantic type.
What you're describing is called Hungarian notation. It was once considered best practice, but is generally frowned upon now.
The Wikipedia article contains a section on its pros and cons.
Yes, prefixes can be useful, but I have a couple of suggestions:
If your entire team is using the same conventions, they become much more useful. Using them on your own is less helpful.
In strongly, statically typed languages, don't just copy the type of a variable. E.g. "bSubscribed" is a bad name for a boolean variable in C# or Java, because your IDE already knows what type it is. In C, on the other hand, which (before C99) lacks a built-in boolean type, this would be useful information.
In C# and Java, you might consider a prefix to show that an object may be null. Or that a string has been HTML-escaped. Or that it represents a regular expression, or a SQL statement. Or that an array has been sorted. Use your imagination.
Basically it's a question of asking yourself what you'd like a variable name to tell you, and that depends a lot on the domain and the language you're working in.
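As a hedged Java sketch of one such semantic prefix (the raw/esc naming is my own invention, not a standard):

```java
// "raw" = untrusted text, "esc" = HTML-escaped; the prefix travels with the value.
class EscapeExample {
    static String escape(String rawText) {
        return rawText.replace("&", "&amp;")
                      .replace("<", "&lt;")
                      .replace(">", "&gt;");
    }
}
```

A line like `page.append(rawComment)` then looks suspicious at a glance, while `page.append(escComment)` does not, even though both are just Strings to the compiler.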
There are a few different "styles" of naming convention out there, and most of them have some value in making code understandable.
What is FAR more important is to use descriptive names for variables and functions. In your example with "sum", "weight" and "value", you might want to give them more meaningful names: "totalCost", "lumberWeight", "lumberValuePerOunce" (I'm making some assumptions here).
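For instance, the weighted-sum line from the question might become (assuming the lumber-pricing domain guessed at above):

```java
class LumberPricing {
    // Descriptive names replacing "fSum += fWeight * fValue":
    static double totalCost(double lumberWeightOunces, double lumberValuePerOunce) {
        return lumberWeightOunces * lumberValuePerOunce;
    }
}
```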
I find conventions like prepending variable names with a character signifying the type to be quite distracting in most modern languages.
Most .NET developers follow Microsoft's Design Guidelines ( http://msdn.microsoft.com/en-us/library/ms229042.aspx), which are pretty similar to Java's (the main difference being that Microsoft favors PascalCase for member names, while Java favors camelCase).
Aside from that, I'd say your code sample is far less readable because of the extra noise you've added.
I find that, for the most part, Google's coding style guide is pretty good. Since you were told that you can use any style you like, I think you should start by looking this over, and picking and choosing from it.
Prefixing variable names with data types (especially primitive data types) increases visual noise, as well as the risk that an otherwise small change will become a big-bang renaming.
Re the first point, is "intStudentCount" really any clearer than e.g. "numberOfStudents"? Wouldn't "invoiceLineItems" be at least as informative as "aobjItems"? (The data type should be about the meaning of the data in the problem domain, not the low-level representation.)
As for the second point, what happens when e.g. a premature selection of int is replaced by long or double? Even worse, what happens when a concrete class is refactored into an interface with multiple implementing classes? Any practice which increases the burden of realistic maintenance scenarios seems questionable to me.
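A small Java sketch of that maintenance point (the names are hypothetical): a representation-free name survives a widening from int to long untouched, whereas "intStudentCount" would demand a rename.

```java
class Enrolment {
    // Originally an int; widened to long when enrolment outgrew it.
    // The name describes what the number means, not how it is stored,
    // so the widening required no rename at any call site.
    static long numberOfStudents(long[] studentsPerCampus) {
        long total = 0;
        for (long count : studentsPerCampus) {
            total += count;
        }
        return total;
    }
}
```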
It might also depend on why you are prefixing the name, as opposed to just what you prefix it with.
As an example, I tend to use a 1-2 letter prefix for the names of controls on a form. It isn't because I don't know that the compiler will easily find the right class for the button (as an example), but I tend to design large forms first, then write most of the code afterwards.
Having a prefix of bt for the buttons makes it easy to find the right button afterwards, instead of having lots of names jumbled together.
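A hedged sketch of that layout (plain classes stand in for a real widget toolkit's types):

```java
// Minimal stand-in for a real toolkit's button class.
class Button {
    final String label;
    Button(String label) { this.label = label; }
}

class OrderForm {
    // The "bt" prefix groups every button together, so typing "bt"
    // in the IDE's autocomplete lists them all, instead of leaving
    // them scattered among the form's other fields.
    Button btSave   = new Button("Save");
    Button btCancel = new Button("Cancel");
    Button btPrint  = new Button("Print");
}
```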
I don't use prefixes for naming variables though, neither for the type (which is generally not that useful anyway), nor for the meaning, context or unit (which is the original idea behind Hungarian Notation).
In my view, it depends on the language and the size of the project. I've never actually gone so far as to use type prefixes on all of my variables, but you do want to name them in a clear manner.
In a statically typed language, like the one you're using, the more comfortable I feel with the type system, the less important Hungarian Notation becomes. So in Java or C# and especially in Haskell I wouldn't even think about adding those prefixes, because your tools can tell you the type of any given expression, and will catch most mistakes resulting from misunderstanding a type.
Prefixes often make sense for objects; for instance, on a form where you might have 20 text boxes, calling them all tbSomething makes sense.
However, mostly I don't think it's worthwhile, especially for value types.
For instance, suppose you had:
short shortValue = 0; //stuff happens
Months later you find you need to change it - a short isn't big enough. Now you have:
int shortValue = 0; //stuff happens
Unless you also rename the variable (which, in this case, carries more risk of breaking code than changing the type), you now have confusing code.
You're better off having a name that describes what it holds:
int loopCounter = 0; //stuff happens
If later that needs to change to a long: no problem.
There's maybe more argument for these conventions in dynamically typed languages or those without an IDE.
I've always been prone to using a 2 to 4 character abbreviation for the type in front of the variable itself. At times it seems tedious, but when you're working with complex data types or situations, it becomes beneficial. I think it falls into the lower camel case category.
Looking at your example above, it would be slightly retooled to be:
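Under that convention, the question's weighted-sum line might read something like this (the dbl prefix for double is my assumption):

```java
class WeightedSum {
    // "dbl" marks each double-typed variable.
    static double dblTotal(double dblWeight, double dblValue) {
        double dblSum = 0.0;
        dblSum += dblWeight * dblValue;
        return dblSum;
    }
}
```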
Arrays always have an a in front of the type to designate the array. This also allows me to group similar variable instances together. For instance, I can use...
...which indicates that my Store TableAdapter is filling into the Store DataTable.
I've always tried to adhere to the basic principle that prefixes and/or suffixes should be added only when they make the code more readable (as in plain English).
The less cryptic, the better...
Why have a method like this:
public boolean connect( String h, int p );
When you can have something like this:
public boolean connect( String hostName, int port );
Moreover, today's IDEs (especially for Java) have really powerful tools for refactoring variables, method names, classes, etc. The idea of cramming the maximum information into the fewest characters is just old-fashioned.