I would suggest not worrying about supporting both an ASCII and a Unicode build (a la TCHAR) and going straight to Unicode. That way you get to use more of the platform-independent functions (wcscpy, wcsstr, etc.) instead of relying on the TCHAR functions, which are Microsoft-specific.
You can use std::wstring instead of std::string and replace all chars with wchar_ts. With a massive change like this, I found it best to start with one thing and let the compiler guide you to the next.
One thing that will not be caught until run time is a string allocated with malloc without using the sizeof operator on the underlying type. So watch out for things like char * p = (char*)malloc(11) - 10 characters plus the terminating NUL. After the conversion this buffer holds only half as many wchar_ts as intended. It should become wchar_t * p = (wchar_t*)malloc(11*sizeof(wchar_t)).
Oh, and the whole point of TCHAR is to support choosing between ASCII and Unicode strings at compile time. It's defined something like this:
#define _T(x) L ## x
#define _T(x) x
So that in the Unicode configuration _T("blah") becomes L"blah", and in the ASCII configuration it stays "blah".
Thanks for your useful answer. I have no real need to support both ASCII and Unicode, so it's full steam ahead into Unicode then :-)
-1: "this string will be half the size it's supposed to be in UNICODE" is false. With wchar_t, characters may be up to 4 bytes, and it depends on the actual content.
That's an edge case in the UTF-16 encoding that will not apply to text that used to be ASCII. The point I was making was about converting code that assumed 1 byte = 1 character. To get that code working under UCS-2, the assumption that 2 bytes = 1 character is 100% correct.