Subject:
From:
Reply To:
Date: Wed, 4 Nov 2009 17:48:41 -0800
Content-Type: text/plain
That is exactly what I was trying earlier, but I kept getting tr usage
errors ("tr [-C ..." etc.), so I gave up and tried something else.
do shell script "tr '\\r' '\\n' < " & quoted form of posix path of myFile & " | " & ....
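For reference, here is the corrected pipeline in one piece. A sketch only: choose file stands in for however myFile is really obtained, and the awk program is just a placeholder for the real command.

set myFile to choose file
-- Doubling the backslashes makes the shell, not AppleScript, receive
-- the two-character escape sequences \r and \n that tr expects.
do shell script "tr '\\r' '\\n' < " & quoted form of POSIX path of myFile & " | awk '{ print $1 }'"

Since the data never passes through an AppleScript variable, the size of the file is no longer an issue.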
On 11/04/09 5:16 PM, "Mark J. Reed" wrote:
> sorry, that should be
>
> do shell script "tr '\\r' '\\n' <"
>
> since AS uses the backslash as an escape character too.
>
> On Wed, Nov 4, 2009 at 8:16 PM, Mark J. Reed <[log in to unmask]> wrote:
>> If you're using do shell script, you can pipe the script through tr
>> before piping it to awk or cut or whatever.
>>
>> do shell script "tr '\r' '\n' <" & quoted form of posix path of myFile
>> & " | awk ..."
>>
>> (and leave off the quoted form of posix path of myFile at the end of
>> the awk command). That will do the conversion one line at a time,
>> without having to read the whole file into memory at once.
>>
>>
>> On Wed, Nov 4, 2009 at 7:44 PM, [log in to unmask] <[log in to unmask]> wrote:
>>> I think I tracked it down, thanks Mark.
>>>
>>> Here's where the error occurs:
>>>
>>> set fileText to read thisFile
>>> tell application "TextCommands" to set fileText to ¬
>>>   convert linebreaks fileText to Unix format
>>>
>>> Because the file is so large, the "fileText" variable comes back empty, and
>>> when the script tries to write the converted text back to the file it
>>> generates a trapped error that leaves the original file untouched.
>>>
>>> So I think what I really need is a way to convert line breaks on a huge
>>> file without having to read it into memory.
>>>
>>>
>>>
>>> ES
>>>
>>> On Nov 4, 2009, at 8:41am, Mark J. Reed wrote:
>>>
>>>> Ed: was this a repeat from a different thread? Did you make sure the
>>>> file has UNIX line endings?
>>>
>>
>>
>>
>> --
>> Mark J. Reed <[log in to unmask]>
>>
>
>
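On the last point quoted above, converting the line breaks of a huge file without reading it into memory: the rewrite can also be done entirely in the shell through a temporary file. A sketch, assuming it is acceptable to overwrite the original file; the .tmp name is just an illustrative scratch path.

set myFile to choose file
set srcPath to quoted form of POSIX path of myFile
set tmpPath to quoted form of (POSIX path of myFile & ".tmp")
-- Stream the conversion through the temp file, then swap it into place;
-- the file's text never passes through an AppleScript variable.
do shell script "tr '\\r' '\\n' < " & srcPath & " > " & tmpPath & " && mv " & tmpPath & " " & srcPath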