How to print a wstring on Linux / OS X?
How can I print a string like this: €áa¢cée£
on the console/screen? I tried this:
#include <iostream>
#include <string>

using namespace std;

wstring wStr = L"€áa¢cée£";

int main (void)
{
    wcout << wStr << " : " << wStr.length() << endl;
    return 0;
}
which is not working. Even more confusing: if I remove € from the string, the printout comes out like this: ?a?c?e? : 7, but with € in the string, nothing gets printed after the € character.
If I write the same code in python:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
wStr = u"€áa¢cée£"
print u"%s" % wStr
it prints out the string correctly on the very same console. What am I missing in C++ (well, I'm just a noob)? Cheers!!
Update 1:
#include <iostream>
#include <string>

using namespace std;

string wStr = "€áa¢cée£";
char *pStr = 0;

int main (void)
{
    cout << wStr << " : " << wStr.length() << endl;
    pStr = &wStr[0];
    for (unsigned int i = 0; i < wStr.length(); i++) {
        cout << "char " << i+1 << " # " << *pStr << " => " << pStr << endl;
        pStr++;
    }
    return 0;
}
First of all, it reports 14 as the length of the string: €áa¢cée£ : 14. Is it because it's counting 2 bytes per character?
And all I get is this:
char 1 # ? => €áa¢cée£
char 2 # ? => ??áa¢cée£
char 3 # ? => ?áa¢cée£
char 4 # ? => áa¢cée£
char 5 # ? => ?a¢cée£
char 6 # a => a¢cée£
char 7 # ? => ¢cée£
char 8 # ? => ?cée£
char 9 # c => cée£
char 10 # ? => ée£
char 11 # ? => ?e£
char 12 # e => e£
char 13 # ? => £
char 14 # ? => ?
as the last cout output. So the actual problem still remains, I believe. Cheers!!
Update 2: based on n.m.'s second suggestion
#include <clocale>
#include <iostream>
#include <string>

using namespace std;

wchar_t wStr[] = L"€áa¢cée£";
int iStr = sizeof(wStr) / sizeof(wStr[0]); // length of the string
wchar_t *pStr = 0;

int main (void)
{
    setlocale(LC_ALL, "");
    wcout << wStr << " : " << iStr << endl;
    pStr = &wStr[0];
    for (int i = 0; i < iStr; i++) {
        wcout << *pStr << " => " << static_cast<void*>(pStr) << " => " << pStr << endl;
        pStr++;
    }
    return 0;
}
and here is the result I get:
€áa¢cée£ : 9
€ => 0x1000010e8 => €áa¢cée£
á => 0x1000010ec => áa¢cée£
a => 0x1000010f0 => a¢cée£
¢ => 0x1000010f4 => ¢cée£
c => 0x1000010f8 => cée£
é => 0x1000010fc => ée£
e => 0x100001100 => e£
£ => 0x100001104 => £
=> 0x100001108 =>
Why is it reported as 9 rather than 8? Or is this what I should expect? Cheers!!
Drop the L before the string literal. Use std::string, not std::wstring.
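A minimal sketch of that first suggestion, assuming the source file and the terminal are both UTF-8: the narrow literal is stored as raw UTF-8 bytes and cout passes them through untouched.

#include <iostream>
#include <string>

int main ()
{
    // Narrow string literal: gcc stores it as the UTF-8 bytes of the source file.
    std::string str = "€áa¢cée£";
    // cout writes the bytes unchanged; a UTF-8 terminal renders them correctly.
    std::cout << str << std::endl;
    return 0;
}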
UPD: There's a better (correct) solution: keep wchar_t, wstring and the L, and call setlocale(LC_ALL, "") at the beginning of your program.
You should call setlocale(LC_ALL, "") at the beginning of your program anyway. This instructs your program to work with your environment's locale, instead of the default "C" locale. Your environment has a UTF-8 one, so everything should work.
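For instance, a quick illustrative sketch that shows which locale setlocale(LC_ALL, "") actually picks up from the environment (the LANG / LC_* variables):

#include <clocale>
#include <cstdio>

int main ()
{
    // With an empty name, setlocale selects the environment's locale and
    // returns its name, or NULL if it could not be set.
    const char *loc = std::setlocale(LC_ALL, "");
    std::printf("active locale: %s\n", loc ? loc : "(failed)");
    return 0;
}

On a typical Linux or OS X setup this prints something like en_US.UTF-8.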
Without calling setlocale(LC_ALL, ""), the program works with UTF-8 sequences without "realizing" that they are UTF-8. If a correct UTF-8 sequence is printed on the terminal, it will be interpreted as UTF-8 and everything will look fine. That's what happens if you use string and char: gcc uses UTF-8 as the default encoding for strings, and the ostream happily prints them without applying any conversion; it thinks it has a sequence of ASCII characters.
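That also explains the 14 from Update 1: std::string::length() counts bytes, and in UTF-8 € takes 3 bytes while á, ¢, é and £ take 2 bytes each, so 3 + 2 + 1 + 2 + 1 + 2 + 1 + 2 = 14. A small sketch to illustrate, assuming a UTF-8 source file:

#include <cstddef>
#include <iostream>
#include <string>

int main ()
{
    std::string s = "€áa¢cée£";            // stored as UTF-8 bytes
    std::cout << s.length() << std::endl;  // 14: bytes, not characters

    // Counting only the bytes that start a UTF-8 sequence (i.e. skipping
    // continuation bytes of the form 10xxxxxx) gives the character count.
    std::size_t chars = 0;
    for (unsigned char c : s)
        if ((c & 0xC0) != 0x80)
            ++chars;
    std::cout << chars << std::endl;       // 8
    return 0;
}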
But when you use wchar_t, everything breaks: gcc uses UTF-32, the correct re-encoding is not applied (because the locale is "C") and the output is garbage.
When you call setlocale(LC_ALL, ""), the program knows it should recode UTF-32 to UTF-8, and everything is fine and dandy again.
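Putting it together, a minimal working sketch of the wide-character version (essentially what Update 2 already does): with the environment's locale active, wcout recodes the UTF-32 wchar_t data to UTF-8 for the terminal, and length() now reports 8.

#include <clocale>
#include <iostream>
#include <string>

int main ()
{
    // Use the environment's locale (e.g. en_US.UTF-8) instead of the default "C".
    std::setlocale(LC_ALL, "");

    std::wstring wStr = L"€áa¢cée£";
    // wcout converts the wide (UTF-32 on gcc) characters to the locale's
    // encoding on output.
    std::wcout << wStr << L" : " << wStr.length() << std::endl;  // €áa¢cée£ : 8
    return 0;
}

(The 9 in Update 2 simply comes from sizeof counting the terminating L'\0' along with the 8 characters.)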
This all assumes that we only ever want to work with UTF-8. Using arbitrary locales and encodings is beyond the scope of this answer.