Date: Wed, 5 May 2010 17:35:09 -0600
Content-Type: text/plain
On May 5, 2010, at 2:16 PM, Steve Cassidy wrote:
> Sue
>
> I think you have a range of possibilities as regards speeding things
> up. Geoff has already covered a couple. But I think all are going to
> revolve around getting more of your data stored – either as stored
> calculations or as non-calculated data.
>
> You haven't mentioned anything about how your base data gets
> changed, except to say that it grows daily. Is someone inputting
> records? Are they imported from somewhere?
Yes, a lot of new data is imported daily, and more comes in on a
monthly basis. I am considering adding a lookup or Set Field script
to the end of the import script.
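As a rough sketch, the tail end of that import script might look
something like this (the Detail and Parent table names and the
Amount/TotalAmount fields are hypothetical, and the newly imported
records are assumed to be the found set once the import finishes):

   # push each imported amount up to its parent's stored total
   Go to Record/Request/Page [ First ]
   Loop
      Set Field [ Parent::TotalAmount ;
                  Parent::TotalAmount + Detail::Amount ]
      Go to Record/Request/Page [ Next ; Exit after last ]
   End Loop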
> It seems to me that you could script either process such that your
> summed results are cascaded up the hierarchy using Set Field steps.
> This is going to make the input process marginally slower for each
> record – in essence you have to pay the price somewhere! But at
> least your clicking down through the data would be quicker.
>
> You also haven't mentioned how up-to-date your data needs to be. If
> records are being added continually during the day, but you don't
> really need those to reflect in your summarized data until the next
> day, you could run a daily (nightly?) script that loops through and
> sets all your summary fields.
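A minimal sketch of such an update script, assuming a hypothetical
Categories layout whose records sit one level above related Details
records:

   # recalculate and store the total on every category record
   Go to Layout [ "Categories" ]
   Show All Records
   Go to Record/Request/Page [ First ]
   Loop
      Set Field [ Categories::StoredTotal ; Sum ( Details::Amount ) ]
      Go to Record/Request/Page [ Next ; Exit after last ]
   End Loop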
I need to learn how to set up a script to run automatically at a
certain time each day. Any suggestions on where I could see an
example of how to do that?
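For reference, two common options: if the file is hosted on FileMaker
Server, the Admin Console lets you schedule a script to run at a set
time each night; in a standalone file, FileMaker Pro 10 and later can
use the Install OnTimer Script step to re-run a script at a fixed
interval while a window stays open (the script name below is
hypothetical):

   # re-run the update script every 24 hours (86400 seconds)
   Install OnTimer Script [ "Update Stored Totals" ; Interval: 86400 ]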
>
> Or if you do your data crunching in 'sessions' where you do multiple
> drill-downs, you might just run an update script in advance of a
> session. You know – set the script in motion, go have your coffee
> break, then come back and run your analysis.
>
> Anyway, as I say, you really need to get those numbers stored
> somehow! You could quite easily put in place the scripts and fields
> needed for a parallel 'stored' version of your process while
> continuing to use the current method. So I think you have a fair
> opportunity to test this on a small part of your data to see what
> the gain will be.
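One simple way to run that parallel test (field names hypothetical):
keep the existing unstored calculation, add a plain number field such
as Total_stored that the new scripts populate, and define a small
check calculation that flags any disagreement between the two:

   // empty when the stored and unstored totals agree
   If ( Total = Total_stored ; "" ; "MISMATCH" )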
Starting from the bottom up seems like the best idea.
> You'll probably like it, especially as you bear in mind that the
> current method really is going to get slower and slower until you
> archive some old data.
That is not a pleasant thought!
Thanks for the suggestions. Hopefully I can get some significant
improvements in speed without too much of an overhaul.
Sue
>
> Just a few thoughts...
>
> Steve
>
> On May 5, 2010, at 8:55 PM, Sue wrote:
>
>> Thanks, Geoff.
>>
>> I was afraid that was the answer re: stored calculations. Your
>> suggestion to tackle the problem just at the second level sounds
>> like it would be well worth some consideration.
>>
>> I am definitely not excited about the option of exporting and
>> importing data between the levels, nor do I like the lookup or
>> auto-calc option. Too easy to end up looking at incorrect info
>> based on old data.
>>
>> I appreciate your suggestions. Thanks again.
>>
>> Sue