
Re: st: too long of string in mata?


From   Pedro Nakashima <[email protected]>
To   [email protected]
Subject   Re: st: too long of string in mata?
Date   Wed, 26 Oct 2011 17:45:08 -0200

Just sharing the answer from Stata technical support:

"
I don't know why -tokens()- is limiting tokens to a maximum of 503 characters,
so I have submitted a bug report to the developers. I recommend using the more
advanced -tokengetall()- programming functions because they have no problem
parsing very long tokens.
"

Pedro.

2011/10/26 Pedro Nakashima <[email protected]>:
> Ok, thanks.
>
> I've just sent.
>
> 2011/10/26 Nick Cox <[email protected]>:
>> No comment from StataCorp that I recall. Best to send it to [email protected] to get their confirmation that it's a bug; if we're fortunate, it's just our misunderstanding. If it is a bug, it would join the list of things to be fixed.
>>
>> Nick
>> [email protected]
>>
>> Pedro Nakashima
>>
>> Bad news...
>>
>> Nick, is there any way to confirm it?
>>
>> Pedro.
>>
>> 2011/10/8 Nick Cox <[email protected]>:
>>
>>> I get that result too with the -tokens()- function (not a command).
>>> Seems that you are being bitten by some limit in Mata, perhaps even a
>>> bug.
>>>
>>> Nick
>>>
>>> On Sat, Oct 8, 2011 at 6:52 PM, Ozimek, Adam <[email protected]> wrote:
>>>> Stata users,
>>>>
>>>> I'm trying to read a dataset with a very long text field (a continuation of a problem I've asked multiple questions about on Statalist; thanks for all your help and patience so far). I'm finding that Mata splits the text into two fields when the text is over 503 characters long. Here is an example where I've created an arbitrary text with markers at each 100-character point:
>>>>
>>>>
>>>> . type long.txt
>>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx100xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx200xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx300xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>>>> xxxxxxxx400xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx500xxxxxxxxxxxxxxxxxx
>>>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx600
>>>>
>>>> . file open testname using long.txt, read text
>>>>
>>>> . file read testname test
>>>>
>>>> . mata
>>>> ------------------------------------------------- mata (type end to exit) ---------------------------------------------------------
>>>>
>>>> : fields = tokens(st_local("test"),char(9))
>>>>
>>>> : fields
>>>>  [1, 1] = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx100xxxxxxxxxxxxxxxxxxxx
>>>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx200xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx300xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>>>> xxxxxxxxxxxxxxxxxxx400xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx500xxx
>>>>  [1, 2] = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx600
>>>>
>>>>
>>>> Is there some way to tell the tokens command not to split the text at 503 characters?
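
For reference, a quick length check in the same Mata session should show where the split falls; the 503 and 97 below are implied by the 600-character example above rather than copied from original output:

: strlen(fields)                   // 1 x 2 vector of token lengths; 503 and 97 expected for this line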
>>>
>>
>

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

