Erik Wikström
10/1/2008 5:30:00 PM
On 2008-10-01 18:57, Ioannis Vranos wrote:
> REH wrote:
>> On Oct 1, 5:59 am, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
>> wrote:
>>> Hi, I am currently learning Qt, a portable C++ framework which comes
>>> with both a commercial and a GPL license, and which provides conversions
>>> between its various types and standard C++ types.
>>>
>>> For example, its QString type provides a toStdWString() that returns a
>>> std::wstring with its Unicode contents.
>>>
>>> So, since wstring supports the largest character set, why do we need
>>> explicit Unicode types in C++?
>>>
>>> I think what is needed is a "unicode" locale or at the most, some
>>> unicode locales.
>>>
>>> I don't consider being compatible with C99 as an excuse.
>>
>> If I understand what you are asking...
>>
>> wstring in the standard defines neither the character set, nor the
>> encoding. Given that Unicode is currently a 21-bit standard, how can
>> wstring support the largest character set on a system where wchar_t is
>> 16-bits (assuming a one-character-per-element encoding)? You could
>> only support the BMP (which is exactly what most systems and languages
>> that "claim" Unicode support are really capable of).
>
>
> I do not know much about encodings, only the stuff necessary for me, but
> the question does not sound reasonable to me.
>
> If that system supports Unicode as a system-specific type, why can't
> wchar_t be made wide enough as that system-specific Unicode type, in
> that system?
Because it has been too narrow for 5 to 10 years, and the compiler vendor
does not want to take any chances with backward compatibility. And since
we will be getting dedicated Unicode types, it is a good idea to keep
wchar_t for encodings that are not the same size as the Unicode types.
--
Erik Wikström