# Re: st: Decimal precision, again

From: "SJ Friederich, Economics" <[email protected]>
To: [email protected]
Subject: Re: st: Decimal precision, again
Date: Fri, 25 Jul 2003 20:24:55 +0100

Thanks very much to everyone for their comments. I tried to send a short and to-the-point message to the list, but ended up neither giving enough information nor explaining myself very clearly. Allow me to elaborate a little.

The variable I am considering is a share price. It will not take on very large values, and can have no more than two-digit decimal precision. Referring back to Bill Gould's comments, I think this situation is what he had in mind: I do know what the variable should look like and in particular that it should obey a minimum increment of 1 penny.

```
. list price in 1/6

417.8
418.68
418.9
419.28
425.35
426.55
```

As doubles, these values will be stored exactly as above, but mostly not when they are mistakenly insheeted as floats. If I:

```
. g float fprice = price
```

Then, although the Editor displays them as above, clicking on those cells shows that Stata really holds them as:

```
417.7999
418.67999
418.89999
419.28
425.35001
426.54999
```

Again, in both cases, Stata's Editor will display them in the same (correct) way, and I would presume Stata will -outsheet- them exactly as it displays them; that is, as I want them.
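To make the float behaviour concrete: Stata's float is IEEE 754 single precision, so none of these two-decimal prices can be stored exactly. A small sketch of this in Python (used here purely to show the binary arithmetic involved, not Stata itself):

```python
import struct

def as_float32(x):
    # Round-trip a number through IEEE 754 single precision,
    # i.e. what a Stata float variable actually stores.
    return struct.unpack('<f', struct.pack('<f', x))[0]

prices = [417.8, 418.68, 418.9, 419.28, 425.35, 426.55]
for p in prices:
    # e.g. 417.8 is stored as 417.79998779296875
    print(p, '->', repr(as_float32(p)))
```

Even 419.28, which happens to display unchanged in the listing above, differs from its stored single-precision value once you look at enough digits.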

To promote that variable to double, Bill suggested thinking along the lines of:

```
. gen double fixed = round(old*10,1)/10
```

I'll do it this way. For the sake of completeness, would my original intuition (coarse though it may have been) of outsheeting and re-insheeting the dataset right away with the "double" option have worked?
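One caveat: Bill's line is written for his one-decimal example. For prices that obey a minimum increment of one penny (two decimal places), the multiplier would presumably become 100, i.e. `round(old*100,1)/100`. The same arithmetic sketched in Python (illustrative only; `round` here is Python's builtin, not Stata's function):

```python
import struct

def as_float32(x):
    # What a Stata float variable actually stores (IEEE 754 single precision)
    return struct.unpack('<f', struct.pack('<f', x))[0]

def promote_two_decimals(x):
    # Analogue of Stata's: gen double fixed = round(old*100,1)/100
    # Valid only because we know the true values have two decimal places.
    return round(x * 100) / 100

damaged = as_float32(417.8)          # 417.79998779296875
fixed = promote_two_decimals(damaged)
print(fixed)                         # 417.8
```

The single-precision error is on the order of 1e-5 for prices of this magnitude, far smaller than half a penny, so the rounding recovers every price exactly.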

Many thanks again.

Sylvain

--On 25 July 2003 12:58 -0500 "William Gould, Stata" <[email protected]> wrote:

```
Sylvain Friederich <[email protected]> asks about getting back
-double- precision when the data was read using only float:

    [...]  I made a mistake in -insheet-ing some data (or, ahem, just because
    the "double" option of -insheet- didn't work well until recently) and I
    think a particular variable appearing as a float in my data should
    really be there with double precision.

    Re-processing this data from scratch would represent a tremendous drag.
    Would outsheeting the Stata dataset and re-insheeting it using the
    "double" option fix this unambiguously?

Many people have already responded on the list that one cannot get back
what has been lost.  As Michael Blasnik <[email protected]> put
it, "When a variable is stored as a float, the precision beyond float is
lost."

Right they are, unless ... unless you know something about how the
original number should look.  For instance, pretend I have decimal
numbers with one digit to the right of the decimal place such as

    100.1
    42.4
    103894.3

If I store these numbers as float, I end up with

    100.0999984741211
    42.40000152587891
    103894.296875

Knowing that there is just one digit to the right of the decimal,
however, I can promote back to double:

    . gen double fixed = round(old*10,1)/10

The answer is that one cannot get back the original precision unless, in
the reduced precision number, there is enough information so that one can
know what the rest of the numbers would have been.  That always requires
the addition of outside information, but you may have that.

As another example, if someone writes down 3.1415927, I would bet the
original number is even closer to 3.141592653589793.

-- Bill
[email protected]
*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
```
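[Bill's closing remark about 3.1415927 can be checked numerically: the single-precision neighbour of pi prints as exactly that string at eight significant digits, which is why the written-down number points back to pi. Again a Python illustration, not part of the original thread:]

```python
import math
import struct

# The IEEE 754 single-precision value nearest to pi ...
pi32 = struct.unpack('<f', struct.pack('<f', math.pi))[0]

# ... prints as 3.1415927 at eight significant digits.
print(f"{pi32:.8g}")   # 3.1415927
```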
----------------------
Sylvain Friederich
Department of Economics, University of Bristol
Bristol BS8 1TN
Tel +44(0)117 928-8425
Fax +44(0)117 928-8577