FMPRO-L Archives

May 2010, Week 1

FMPRO-L@LISTSERV.DARTMOUTH.EDU

Subject:
From:
Reply To: FileMaker Pro Discussions <[log in to unmask]>
Date: Wed, 5 May 2010 11:56:01 -0600
Here is what you end up with when a self-taught user puts together a  
complex database:

I have put together a database that does what I want it to do, which
is compile data into various clusters, then compile those clusters
into larger clusters, then those into still larger clusters, and so
on. All the links between levels are simple, based only on one- or
two-digit numeric matches, and all calculations are numeric. Data is
portaled from each level to the next, and summary calculations are
made at each level.

My base data source currently has 250,000 records and is growing daily.

The next level compiles the base data into 100-plus different daily
buckets.

The next level compiles the daily buckets into 100-plus different
monthly buckets.

The next level compiles the monthly buckets into 100 yearly buckets.

The next level combines the yearly buckets into 7 buckets.

The final level combines those 7 buckets into 1 bucket.

There are summary calculations at each level beyond the first, as well  
as links that bring over data from prior years and calculate  
comparisons.
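
In case the shape of this is easier to see in code than in prose,
here is a rough Python sketch of the roll-up pattern. All the field
names and the grouping rule for the 7 buckets are invented for
illustration; this is not my actual FileMaker schema, just the idea
of each level summing the level directly below it:

    from collections import defaultdict

    # Base level: raw records, tagged with the one- or two-digit
    # codes used as match keys (names invented for illustration).
    records = [
        {"bucket": 7,  "date": "2010-05-01", "amount": 12.5},
        {"bucket": 7,  "date": "2010-05-02", "amount": 3.0},
        {"bucket": 12, "date": "2010-05-01", "amount": 8.25},
    ]

    # Level 2: compile the raw records into daily buckets.
    daily = defaultdict(float)
    for r in records:
        daily[(r["bucket"], r["date"])] += r["amount"]

    # Level 3: compile the daily buckets into monthly buckets.
    monthly = defaultdict(float)
    for (bucket, date), total in daily.items():
        monthly[(bucket, date[:7])] += total      # "YYYY-MM"

    # Level 4: compile the monthly buckets into yearly buckets.
    yearly = defaultdict(float)
    for (bucket, month), total in monthly.items():
        yearly[(bucket, month[:4])] += total      # "YYYY"

    # Level 5: combine the yearly buckets into 7 buckets
    # (this grouping rule is made up).
    groups = defaultdict(float)
    for (bucket, year), total in yearly.items():
        groups[bucket % 7] += total

    # Final level: everything in 1 bucket.
    grand_total = sum(groups.values())

The point of the sketch is that each level reads only the level
directly below it, which is also why the totals agree from level to
level by construction.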

It all works, and it is very useful, because at any level I can click
down through the portals to see the details of the contributing data.
But the file (FileMaker Pro 10, accessed over the network from
FileMaker Server 10) is extremely slow at the upper levels.

I am wondering what steps I could take within FileMaker to speed the
file up, such as storing the calculation results at each level (none
are stored at present, which I suspect is the problem). Is there
anything I need to consider before changing all of my calculations to
stored? Can I assume the fields will recalculate as necessary
whenever additional data is added at the bottom level, or not?
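
In non-FileMaker terms, what I understand "stored" to mean is roughly
the difference sketched below: a cached total versus one recomputed
on every access. This is only an analogy, again in Python with
invented names, not how FileMaker actually implements stored
calculations, but it is the staleness question I am asking about:

    # Unstored: recomputed from the raw records on every access
    # (always current, always slow).
    def daily_total_unstored(records, bucket, date):
        return sum(r["amount"] for r in records
                   if r["bucket"] == bucket and r["date"] == date)

    # "Stored": computed once and cached; the cache has to be
    # invalidated when matching data arrives, or it silently goes
    # stale.
    cache = {}

    def daily_total_stored(records, bucket, date):
        key = (bucket, date)
        if key not in cache:
            cache[key] = daily_total_unstored(records, bucket, date)
        return cache[key]

    def add_record(records, record):
        records.append(record)
        cache.pop((record["bucket"], record["date"]), None)  # invalidate

    records = [{"bucket": 7, "date": "2010-05-01", "amount": 12.5}]
    print(daily_total_stored(records, 7, "2010-05-01"))  # computes 12.5
    add_record(records, {"bucket": 7, "date": "2010-05-01", "amount": 3.0})
    print(daily_total_stored(records, 7, "2010-05-01"))  # recomputes 15.5

Whether FileMaker does the equivalent of that invalidation
automatically when new records arrive at the bottom level is exactly
what I am unsure of.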

The other alternative I can think of is to create the necessary match
fields at the bottom level, then have each level's calculations reach
back to the raw data and compute the summaries from that, rather than
(as now) compiling the results of the calculations at the lower
levels. I am not positive this would be faster, and besides being a
lot of work it has a disadvantage: whenever a classification or
compilation problem occurs, I would be less able to detect it, or to
see at which level it is occurring, since with each level calculated
independently the summaries at one level might no longer equal the
sum of the data at the previous level.
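
To make that trade-off concrete, here is a small sketch of the two
approaches side by side, again in Python with invented names: a
monthly total compiled from the daily level (as now) versus the same
total computed straight from the raw records:

    from collections import defaultdict

    records = [
        {"bucket": 7, "date": "2010-05-01", "amount": 12.5},
        {"bucket": 7, "date": "2010-05-02", "amount": 3.0},
    ]

    # Current design: the monthly level compiles the daily level.
    daily = defaultdict(float)
    for r in records:
        daily[(r["bucket"], r["date"])] += r["amount"]

    def monthly_from_daily(bucket, month):
        return sum(t for (b, d), t in daily.items()
                   if b == bucket and d[:7] == month)

    # Alternative: the monthly level reaches back to the raw records.
    def monthly_direct(bucket, month):
        return sum(r["amount"] for r in records
                   if r["bucket"] == bucket and r["date"][:7] == month)

    # In the current design this equality holds by construction; with
    # every level calculated directly, it becomes a cross-check that
    # can fail without pointing at the level where the error crept in.
    assert monthly_from_daily(7, "2010-05") == monthly_direct(7, "2010-05")

That loss of level-by-level accountability is what worries me about
the direct approach, quite apart from the rebuild effort.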

At the moment I am just really, really tired of the spinning ball.   
Suggestions welcome.

Sue
