[HarfBuzz] Question regarding the use of HB_SCRIPT_KATAKANA for "regular" Japanese
behdad at behdad.org
Sun Dec 22 15:35:59 PST 2013
On 13-12-22 06:17 PM, Ariel Malka wrote:
>> As it happens, those three scripts are all considered "simple", so the shaping
>> logic in HarfBuzz is the same for all three.
> Good to know. For the record, there's a function for checking if a script is
> complex in the recent HarfBuzz-flavored Android OS: http://goo.gl/KL1KUi
Please NEVER use something like that. It's broken by design. It exists in
Android for legacy reasons, and will eventually be removed.
>> Where it does make a difference
>> is if the font has ligatures, kerning, etc for those. OpenType organizes
>> those features by script, and if you request the wrong script you will miss
>> out on the features.
> Makes sense to me for Hebrew, Arabic, Thai, etc., but I was a bit surprised to
> find out that LATN was also a complex script.
LATN uses the "generic" shaper, so it's not complex, no.
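Where it does matter is that each run gets shaped with the script (and, if you
have it, the language) it's actually in, because that's what selects the
OpenType features. Roughly, per run, something like this -- just a sketch,
with font setup and glyph readback left out:

  #include <hb.h>

  /* Sketch only: shape one run with its own script and language so the
   * font's features for that script ('liga', 'kern', ...) are picked up.
   * A Latin run shaped as HEBR would miss the features the font registers
   * under the 'latn' script. */
  static void
  shape_run (hb_font_t *font,
             const char *utf8, int len,
             hb_script_t script, const char *lang)
  {
    hb_buffer_t *buf = hb_buffer_create ();
    hb_buffer_add_utf8 (buf, utf8, len, 0, len);
    hb_buffer_set_script (buf, script);
    hb_buffer_set_language (buf, hb_language_from_string (lang, -1));
    hb_buffer_set_direction (buf, hb_script_get_horizontal_direction (script));
    hb_shape (font, buf, NULL, 0);
    /* ... hb_buffer_get_glyph_infos() / hb_buffer_get_glyph_positions() ... */
    hb_buffer_destroy (buf);
  }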
> So for instance, if I shaped some text containing Hebrew and English
> solely using the HEBR script, I would probably lose kerning and ffi-like
> ligatures for the English part
> (this is what I'm actually doing currently in
> my "simple" BIDI implementation...)
Then fix it. BIDI and script itemization are two separate issues.
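Splitting text into script runs doesn't have to involve ICU; here's a rough
sketch using HarfBuzz's own Unicode functions. (A real itemizer, like ICU's
scrptrun, also merges COMMON/INHERITED code points -- spaces, punctuation,
combining marks -- into the surrounding run and handles paired brackets;
that's glossed over here.)

  #include <hb.h>

  /* Sketch: split a UTF-32 string into runs, one script per run, and hand
   * each run to shaping with its own script. */
  static void
  itemize_and_shape (hb_font_t *font, const hb_codepoint_t *text, unsigned len)
  {
    hb_unicode_funcs_t *ufuncs = hb_unicode_funcs_get_default ();
    unsigned start = 0;
    hb_script_t run_script = len ? hb_unicode_script (ufuncs, text[0])
                                 : HB_SCRIPT_UNKNOWN;
    for (unsigned i = 1; i <= len; i++)
    {
      hb_script_t s = i < len ? hb_unicode_script (ufuncs, text[i])
                              : HB_SCRIPT_UNKNOWN;
      if (i == len || s != run_script)
      {
        /* shape text[start..i) here with run_script, e.g. by converting
         * it back to UTF-8 and calling a shape_run() like the one above */
        start = i;
        run_script = s;
      }
    }
  }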
>> How you do font selection and what script you pass to HarfBuzz are two
>> completely separate issues. Font fallback stack should be per-language.
> I understand that the best scenario will always be to make decisions based on
> "language" rather than solely on "script", but it creates a problem:
> Say you work on an API for Unicode text rendering: you can't promise your
> users a solution where they can pass arbitrary text without providing
> language context per span.
These are very good questions. And we have answers to all of them. Unfortunately
there's no single location with all this information. I'm working on
documenting them, but it looks like replying to you and letting you document it
may well be faster.
What Pango does is: it takes an input list of languages (through $LANGUAGE for
example), and whenever there's an item of text with script X, it assigns a
language to the item in this manner (a rough code sketch follows the table
notes below):
- If a language L is set on the item (through xml:lang, or whatever else the
user can use to set a language), and script X may be used to write language L,
then resolve to language L and return,
- For each language L in the list of default languages $LANGUAGE, if script
X may be used to write language L, then resolve to language L and return,
- If there's a predominant language L that is likely for script X, resolve
to language L and return,
- Assign no language.
This algorithm needs two tables of data:
- List of scripts a language tag may possibly use. This is for example
available in pango-script-lang-table.h. It's generated from fontconfig orth
files using pango/tools/gen-script-for-lang.c. Feel free to copy it.
- List of the most likely language for each script. This is available in CLDR;
Pango has its own manually compiled list in pango-language.c.
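Put together, the resolution order above looks roughly like this (a sketch
only; the two lookup helpers are hypothetical stand-ins for the data tables
just mentioned):

  #include <hb.h>

  /* script_may_write_language() stands in for pango-script-lang-table.h,
   * likely_language_for_script() for the CLDR / pango-language.c data. */
  extern hb_bool_t      script_may_write_language  (hb_script_t s, hb_language_t l);
  extern hb_language_t  likely_language_for_script (hb_script_t s);

  static hb_language_t
  resolve_language (hb_script_t          script,       /* script of the item */
                    hb_language_t        item_lang,    /* from xml:lang etc., may be NULL */
                    const hb_language_t *user_langs,   /* from $LANGUAGE, in order */
                    unsigned             n_user_langs)
  {
    if (item_lang && script_may_write_language (script, item_lang))
      return item_lang;
    for (unsigned i = 0; i < n_user_langs; i++)
      if (script_may_write_language (script, user_langs[i]))
        return user_langs[i];
    /* may return NULL, i.e. no language assigned */
    return likely_language_for_script (script);
  }

The resolved language is then also what you'd key the font fallback stack on.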
Again, all these are on my plate for the next library I'm going to design. It
will take a while though...
> Or, to come back to the origin of the message: solutions like ICU's "scrptrun"
> which do script detection are not appropriate (because they won't help
> you find the right font due to the lack of language context...)
> I guess the problem is even more generic, like with UTF-8-encoded HTML pages
> rendered in modern browsers, as demonstrated by the creator of liblinebreak:
> On Sun, Dec 22, 2013 at 10:47 PM, Behdad Esfahbod <behdad at behdad.org> wrote:
> On 13-12-22 10:10 AM, Ariel Malka wrote:
> > I'm trying to render "regular" (i.e. modern, horizontal) Japanese with HarfBuzz.
> > So far, I have been using HB_SCRIPT_KATAKANA and it looks similar to what is
> > rendered via browsers.
> > But after examining other rendering solutions I can see that "automatic
> > detection" can often take place.
> > For instance, the Mapnik project is using ICU's "scrptrun", which, given the
> > following sentence:
> > ユニコードは、すべての文字に固有の番号を付与します
> > would detect a mix of Katakana, Hiragana and Han scripts.
> > But for instance, it would not change anything if I rendered the sentence by
> > mixing the 3 different scripts (i.e. instead of using only HB_SCRIPT_KATAKANA).
> > Or are there situations where it would make a difference?
> As it happens, those three scripts are all considered "simple", so the shaping
> logic in HarfBuzz is the same for all three. Where it does make a difference
> is if the font has ligatures, kerning, etc for those. OpenType organizes
> those features by script, and if you request the wrong script you will miss
> out on the features.
> > I'm asking that because I suspect a catch-22 situation here. For example, the
> > word "diameter" in Japanese is 直径 which, given to "scrptrun", would be
> > detected as Han script.
> > As far as I understand, it could be a problem on systems where
> > DroidSansFallback.ttf is used, because the word would look like it does in
> > Chinese.
> > Now, if we were using MTLmr3m.ttf, which is preferred for Japanese, the word
> > would be rendered as intended.
> How you do font selection and what script you pass to HarfBuzz are two
> completely separate issues. Font fallback stack should be per-language.
> > Reference: https://code.google.com/p/chromium/issues/detail?id=183830
> > Any feedback would be appreciated. Note that the wisdom accumulated here will
> > be translated into tangible info and code samples (see
> > https://github.com/arielm/Unicode).
> > Thanks!
> > Ariel
> > _______________________________________________
> > HarfBuzz mailing list
> > HarfBuzz at lists.freedesktop.org
> > http://lists.freedesktop.org/mailman/listinfo/harfbuzz