<html>
<head>
<base href="https://bugs.documentfoundation.org/">
</head>
<body><span class="vcard"><a class="email" href="mailto:newbie-02@gmx.de" title="b. <newbie-02@gmx.de>"> <span class="fn">b.</span></a>
</span> changed
<a class="bz_bug_link
bz_status_NEW "
title="NEW - Implement a Kahan summation algorithm for reduce the numerical error in the total"
href="https://bugs.documentfoundation.org/show_bug.cgi?id=137679">bug 137679</a>
<br>
<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>What</th>
<th>Removed</th>
<th>Added</th>
</tr>
<tr>
<td style="text-align:right;">Ever confirmed</td>
<td>
</td>
<td>1
</td>
</tr>
<tr>
<td style="text-align:right;">CC</td>
<td>
</td>
<td>newbie-02@gmx.de
</td>
</tr>
<tr>
<td style="text-align:right;">Status</td>
<td>UNCONFIRMED
</td>
<td>NEW
</td>
</tr></table>
<div>
<b><a class="bz_bug_link
bz_status_NEW "
title="NEW - Implement a Kahan summation algorithm for reduce the numerical error in the total"
href="https://bugs.documentfoundation.org/show_bug.cgi?id=137679#c3">Comment # 3</a>
on <a class="bz_bug_link
bz_status_NEW "
title="NEW - Implement a Kahan summation algorithm for reduce the numerical error in the total"
href="https://bugs.documentfoundation.org/show_bug.cgi?id=137679">bug 137679</a>
from <span class="vcard"><a class="email" href="mailto:newbie-02@gmx.de" title="b. <newbie-02@gmx.de>"> <span class="fn">b.</span></a>
</span></b>
<pre>hello @Roman,

full support, yes: Calc (and other spreadsheets) needs better precision, fewer
errors, fewer complaints.

At first glance I'd say 'Kahan' is good for some special (but common) cases:
chained summation of many small numbers into one bigger value, which suffers
from truncation / rounding errors because only part of the mantissa of the
small values makes it into the running total.

But it fails for plenty of other tasks, and it requires an 'intelligent
overview of the task' which - I assume - spreadsheets normally don't have, as
well as some effort to reorganize the calculations so that the atomic
additions are combined into one compensated pass; thus 'performance impact'
and 'error prone'. And I'd expect it to avoid 'summation artefacts', but not
the errors users commonly notice, which come from decimal -&gt; binary
conversion; the same holds for 'argument sorting'.
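
For illustration, a minimal sketch of the compensated loop (textbook Kahan
summation in plain C, not Calc's actual code):

    #include <stdio.h>

    /* Kahan (compensated) summation: a second variable carries the
       low-order bits that each plain addition would otherwise drop. */
    double kahan_sum(const double *v, size_t n)
    {
        double sum = 0.0, c = 0.0;      /* c = running compensation */
        for (size_t i = 0; i < n; i++) {
            double y = v[i] - c;        /* apply the stored correction */
            double t = sum + y;         /* low bits of y are lost here ... */
            c = (t - sum) - y;          /* ... and recovered here */
            sum = t;
        }
        return sum;
    }

    int main(void)
    {
        double v[10000];
        for (int i = 0; i < 10000; i++) v[i] = 0.1;
        /* a naive loop drifts to about 1000.00000000016 here;
           the compensated loop prints exactly 1000 */
        printf("%.17g\n", kahan_sum(v, 10000));
        return 0;
    }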
IMHO better proposals:

1.) IEEE 754-2008 decimals: could someone with knowledge have a look at
whether gcc doesn't already have that implemented? I'd read something about a
decimal64 data type / library (a small sketch follows below).
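
As far as I know, gcc does provide decimal floating types (_Decimal32/64/128
from ISO/IEC TS 18661-2) on some targets such as x86-64 Linux; a minimal
sketch assuming that support (display goes through a binary double only
because glibc's printf has no conversion for decimal types):

    #include <stdio.h>

    int main(void)
    {
        _Decimal64 a = 0.1DD;           /* DD suffix = decimal64 literal */
        _Decimal64 b = 0.2DD;
        _Decimal64 s = a + b;           /* radix 10: 0.1 and 0.2 are exact */

        printf("%d\n", s == 0.3DD);     /* 1: the sum is exactly 0.3 */
        printf("%.15g\n", (double)s);   /* 0.3 (conversion for display only) */
        return 0;
    }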
2.) smart rounding according to the input (strings, or the decimal-value
representation, not the 'fp' value); those reflect the precision submitted and
expected by the user. This would require storing a 'precision' property
alongside the values and, after each operation, rounding according to the
inputs and the operation. I tried it with 'user macros': slow, but it works;
it should be faster in code.

Difficulties: either a changed data representation is necessary to track the
precision, or the input (cell string) has to be fetched for each operation and
the precision calculated from that. Both have a bad impact, but better than
the user having to wrap every calculation in 'round'.

Advantage of 1.) and 2.) over Kahan: they can also correct / suppress errors
in other calculations such as MOD, DATE, dragfill etc. (a toy sketch of the
rounding idea follows below).
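
A toy model of that idea, with my own (assumed, not worked-out) rule that an
addition keeps the wider of the two operands' decimal places:

    /* build: gcc smartround.c -lm */
    #include <stdio.h>
    #include <math.h>

    /* every value carries the decimal places its source string had */
    typedef struct { double val; int places; } tracked;

    static double round_places(double x, int p)
    {
        double s = pow(10.0, p);
        return round(x * s) / s;
    }

    static tracked add(tracked a, tracked b)
    {
        int p = a.places > b.places ? a.places : b.places;
        tracked r = { round_places(a.val + b.val, p), p };
        return r;
    }

    int main(void)
    {
        tracked a = { 0.1, 1 };         /* user typed "0.1": 1 decimal place */
        tracked b = { 0.2, 1 };
        /* prints 0.3, not 0.30000000000000004 */
        printf("%.15g\n", add(a, b).val);
        return 0;
    }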
3.) a spontaneous idea: round the fp representation of decimals to 48 (+1)
mantissa bits, and take the last four bits (which in most cases are rounded
away by Calc's 'round to 15 digits' anyway) to store the 'decimal precision'
there (0000 for "no rounding!", 0001 to 1111 for 1 to 15 decimal digits).
That way you can reliably 'round away' most fp-conversion and fp-calculation
artefacts for typical user data (e.g. finance sheets), with fewer
'user-irritating' errors becoming visible.

E.g. calculating 0.1 + 0.2 would become 1.0000000000000009E-1 +
2.0000000000000018E-1, thus 3.0000000000000027E-1, but with(!) the info
'rounding to one decimal digit is(!) correct', and would thus become exactly
dec 0.3, which is much better than having 3.0000000000000004E-1 and an
arbitrary 'round or not' decision.
The calculated value would be:
0b0_01111111101_(1)0011001100110011001100110011001100110011001100111000
rounded for 48/49-bit storage to
0b0_01111111101_(1)001100110011001100110011001100110011001100110100(0001), thus
3.000000000000007E-1,
while a calculation from dec 0.3 would give in binary:
0b0_01111111101_(1)0011001100110011001100110011001100110011001100110011
rounded for 48/49-bit storage to
0b0_01111111101_(1)001100110011001100110011001100110011001100110011(0001), thus
2.999999999999998E-1.
That's a little more 'off' than necessary, but ... reliable values! Both are
for sure "0.3" and thus for sure equal! And for comparisons Calc can spot the
'low precision' and apply a 'wide range' (a bit-level sketch follows below).
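
A bit-level sketch of the tagging part (the layout and the rounding rule are
my assumptions, not a finished design):

    /* build: gcc tagbits.c -lm */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    static uint64_t bits_of(double d) { uint64_t u; memcpy(&u, &d, 8); return u; }
    static double double_of(uint64_t u) { double d; memcpy(&d, &u, 8); return d; }

    /* overwrite the low 4 mantissa bits with a 'decimal digits' tag */
    static double tag(double v, unsigned digits)
    {
        return double_of((bits_of(v) & ~0xFULL) | (digits & 0xF));
    }

    /* decode: strip the tag, then round to that many decimal digits */
    static double untag(double v)
    {
        unsigned digits = bits_of(v) & 0xF;
        double s = pow(10.0, (int)digits);
        return round(double_of(bits_of(v) & ~0xFULL) * s) / s;
    }

    int main(void)
    {
        double a = tag(0.1, 1), b = tag(0.2, 1);
        double sum = tag(a + b, 1);         /* the sum keeps the 1-digit tag */
        printf("%.15g\n", untag(sum));      /* 0.3 */
        printf("%d\n", untag(sum) == untag(tag(0.3, 1)));   /* 1: equal */
        return 0;
    }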
Just a half-baked idea from a rainy evening that I'd like to throw into the
discussion; don't be annoyed if I suggest creative nonsense. There was time
enough for reasonable people to solve the problem better ... dragging it along
forever is worse than my funny suggestions ... ???

Me as a soothsayer ... it won't be long before someone classifies this bug as
a duplicate of one of the 40++ fp-precision bugs whose resolved-duplicate
chains are gradually biting each other's tails, and it falls into oblivion
over time ...</pre>
</div>
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are the assignee for the bug.</li>
</ul>
</body>
</html>