To my way of looking at things, Hungarian Notation is a kludge to get around an insufficiently powerful type system. In languages that let you define your own types, it's relatively trivial to create a new type that encodes the behavior you're expecting. In his rant on Hungarian Notation, Joel Spolsky gives an example of using it to flag possible XSS attacks by marking each variable or function as either unsafe (us) or safe (s), but that still relies on the programmer checking by eye. If you instead have an extensible type system, you can just create two new types, UnsafeString and SafeString, and use them as appropriate. As a bonus, the type of encode becomes:
SafeString encode(UnsafeString)
and, short of reaching into the internals of UnsafeString or abusing some other conversion function, that becomes the only way to get from an UnsafeString to a SafeString. If all your output functions then accept only SafeString instances, it becomes impossible to output an unescaped string [barring shenanigans with conversions such as StringToSafeString(someUnsafeString.ToString())].
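To make that concrete, here's a minimal sketch of what the two types might look like in C++. The UnsafeString and SafeString names come from above; everything else here (the raw() accessor, the write() function, the exact escaping rules) is invented for illustration:

```cpp
#include <iostream>
#include <string>

// Wraps raw, untrusted input. Nothing here escapes anything; it just
// brands the string as not-yet-safe at the type level.
class UnsafeString {
public:
    explicit UnsafeString(std::string s) : value_(std::move(s)) {}
    const std::string& raw() const { return value_; }
private:
    std::string value_;
};

// Wraps a string that has already been HTML-escaped. The constructor
// is private: only encode() can mint one, so holding a SafeString is
// proof that escaping actually happened.
class SafeString {
public:
    const std::string& str() const { return value_; }
private:
    explicit SafeString(std::string s) : value_(std::move(s)) {}
    std::string value_;
    friend SafeString encode(const UnsafeString&);
};

// The single sanctioned conversion: escape the HTML metacharacters.
SafeString encode(const UnsafeString& in) {
    std::string out;
    for (char c : in.raw()) {
        switch (c) {
            case '&': out += "&amp;";  break;
            case '<': out += "&lt;";   break;
            case '>': out += "&gt;";   break;
            case '"': out += "&quot;"; break;
            default:  out += c;
        }
    }
    return SafeString(std::move(out));
}

// Output functions take only SafeString, so writing un-encoded input
// is a compile error rather than a code-review item.
void write(const SafeString& s) { std::cout << s.str(); }

int main() {
    UnsafeString input("<script>alert(1)</script>");
    write(encode(input)); // fine
    // write(input);      // compile error: no UnsafeString -> SafeString
}
```

The key design choice is the private constructor on SafeString: the only way to mint one is through encode(), so the "did you escape this?" question moves from code review into the compiler.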
It should be obvious why letting the type system sanity-check your code is superior to trying to do it by hand, or by eye in this case.
In a language such as C, of course, you're screwed in that an int is an int is an int, and there's not much you can do about that. You could always play games with structs, as sketched below, but it's debatable whether that's an improvement.
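Here's a rough sketch of those struct games. The Meters/Feet names are made up for illustration, and the snippet sticks to features C and C++ share:

```cpp
/* Two distinct struct types force the compiler to tell apart two
   kinds of int that would otherwise be interchangeable. */
typedef struct { int value; } Meters;
typedef struct { int value; } Feet;

Meters add_meters(Meters a, Meters b) {
    Meters r = { a.value + b.value };
    return r;
}

int main(void) {
    Meters m = { 10 };
    Feet   f = { 30 };
    Meters sum = add_meters(m, m); /* fine */
    /* add_meters(m, f); */        /* compile error: Feet is not Meters */
    (void)sum; (void)f;
    return 0;
}
```

The debatable part is the ergonomics: every arithmetic operation now has to unwrap .value by hand, which may cost more than the type check buys you.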
As for the other interpretation of Hungarian Notation, i.e. prefixing a variable's name with its storage type, that's just plain stupid and encourages lazy practices like naming a variable uivxwFoo instead of something meaningful like countOfPeople.