Recommendations on pagination?

So, I just finished reading Kevin C.'s excellent article at:
http://glu.ttono.us/articles/2006/08/30/guide-things-you-shouldnt-be-doing-in-rails

I was feeling pretty smug (although I did have a find_all
to get rid of) until I got to the pagination part. It's great
information about the pagination helper's performance problems, but I'm
not sure what to do now.

I did find paginate_by_sql, which seems like a good approach to me:
http://thebogles.com/blog/2006/06/paginate_by_sql-for-rails-a-more-general-approach/
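For context, a paginate_by_sql-style helper ultimately just appends a LIMIT/OFFSET clause to your query and does a bit of page arithmetic. A minimal plain-Ruby sketch of that math (helper names are mine, not from the plugin):

```ruby
# Sketch of the pagination arithmetic behind paginate_by_sql-style
# helpers (plain Ruby, no Rails; method names are hypothetical).
def paginate_clause(page, per_page)
  page = [page.to_i, 1].max            # clamp to the first page
  offset = (page - 1) * per_page
  "LIMIT #{per_page} OFFSET #{offset}"
end

def page_count(total_items, per_page)
  (total_items + per_page - 1) / per_page  # ceiling division
end

puts paginate_clause(3, 20)  # => "LIMIT 20 OFFSET 40"
puts page_count(101, 20)     # => 6
```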

What is the best practice in this area?

If it works and your data set is small enough that it’s happy and
quick, then stick with it. If you’re having speed problems, then think
about writing something custom.

Definitely don’t just rewrite something because that’s what all the
cool kids are doing! :slight_smile:

I'm with Tom: I'll hold off until I actually start seeing a performance
impact. I currently have tens of thousands of rows, not millions.

Joe

“Joe R.” wrote:

I'm with Tom: I'll hold off until I actually start seeing a performance
impact. I currently have tens of thousands of rows, not millions.

If I recall, the documentation says to perform an SQL row count instead
of fetching the rows when computing items_count. I wonder whether people
who used this approach are still seeing performance issues with large
row sets.

Should I be concerned…?

Long

Can anyone confirm the behavior of ac.save? That is, what happens when a
model is retrieved, a subset of its attributes is modified, and it is
then saved?

I have the impression that every field in the row is updated regardless,
even if the field has not changed. This seems inefficient and can cause
a dirty write in some situations.

Does ac.save support field-specific updates? If so, how?

Thanks,

Long

On Wed, 2006-08-30 at 19:54 +0000, Tom T. wrote:

If it works and your data set is small enough that it’s happy and
quick, then stick with it. If you’re having speed problems, then think
about writing something custom.

Definitely don’t just rewrite something because that’s what all the
cool kids are doing! :slight_smile:


I'm looking at scaling issues along the width dimension rather than the
depth dimension. I have a table with 143 columns split across 7 data
input screens, and I'm thinking it would be faster and less
memory-intensive to retrieve only the specific fields needed for each
particular input screen (and, of course, for the list view).
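For what it's worth, the idea boils down to keeping a per-screen column list and selecting only those columns instead of SELECT *. A plain-Ruby sketch (table and column names are made up; in Rails this would feed the :select option to find):

```ruby
# Hypothetical per-screen column lists for a wide table, used to build a
# narrow SELECT instead of SELECT * (names are illustrative only).
SCREEN_COLUMNS = {
  :demographics => %w[id first_name last_name dob],
  :list_view    => %w[id last_name updated_at],
}

def select_for(screen, table)
  "SELECT #{SCREEN_COLUMNS.fetch(screen).join(', ')} FROM #{table}"
end

puts select_for(:list_view, "patients")
# => "SELECT id, last_name, updated_at FROM patients"
```

In ActiveRecord the equivalent would be something like `Patient.find(:all, :select => "id, last_name, updated_at")`, with the caveat that the resulting objects only have those attributes loaded.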

Does anyone do this?

Craig

On 8/30/06, Long [email protected] wrote:

Can anyone confirm the behavior of ac.save? That is, what happens when a
model is retrieved, a subset of its attributes is modified, and it is
then saved?

I have the impression that every field in the row is updated regardless,
even if the field has not changed. This seems inefficient and can cause
a dirty write in some situations.

Does ac.save support field-specific updates? If so, how?

The entire record is always saved. Add a lock_version column to enable
optimistic locking.

jeremy

Long wrote the following on 31.08.2006 00:39 :

Can anyone confirm the behavior of ac.save? That is, what happens when a
model is retrieved, a subset of its attributes is modified, and it is
then saved?

I have the impression that every field in the row is updated regardless,
even if the field has not changed. This seems inefficient and can cause
a dirty write in some situations.

Does ac.save support field-specific updates? If so, how?

You'll break validation that way… You could override update_attribute
(which doesn't run validations anyway).
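For illustration, a field-specific update ultimately boils down to emitting an UPDATE that names only the changed columns. A plain-Ruby sketch (hypothetical helper, no SQL quoting or escaping, and, as noted, it would bypass validations):

```ruby
# Hypothetical sketch of the SQL a field-specific update would emit.
# Real code must use bind parameters / proper quoting; this does not.
def update_sql(table, id, changed)
  sets = changed.map { |col, val| "#{col} = '#{val}'" }.join(", ")
  "UPDATE #{table} SET #{sets} WHERE id = #{id}"
end

puts update_sql("users", 5, "name" => "Long")
# => "UPDATE users SET name = 'Long' WHERE id = 5"
```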

Lionel.

Long wrote:

“Joe R.” wrote:

I’m with Tom. Until I actually start seeing a performance impact. I
currently have tens of thousands of rows, not millions.

If I recall, the documentation says to perform an SQL row count instead
of fetching the rows when computing items_count. I wonder whether people
who used this approach are still seeing performance issues with large
row sets.

Should I be concerned…?

Long

Getting a count of ALL rows shouldn't be expensive - MySQL caches it,
and I think PostgreSQL has also started caching it. Counts WITH
condition clauses can be expensive if there are a lot of rows.

Joe