1. #1
    Sencha - Ext JS Dev Team dongryphon

    Grid buffered/infinite scrolling in 4.1


    Since the changes described below have started appearing in the nightly builds, I figured it was a good time to start this thread. The good parts of this post were most likely written by Animal (Nige).

    NOTE: The following applies to the next generally available build, but not to Beta 3. Some of this was already discussed in the recent performance blog post (http://www.sencha.com/blog/optimizin...d-applications).

    Buffered rendering of grids, commonly known as "infinite scrolling", has improved significantly in 4.1. The major improvements are in the "prefetch buffer" management. The prefetch buffer used to be just a MixedCollection of records keyed by their ordinal position in the global dataset, which greatly complicated cache lookup and eviction. It has now become a true page cache which maintains a set of page-sized blocks of records, each keyed by page number.

    This means that fetching a range of records from the cache is as quick as can be. It's a matter of calculating the page range which encompasses that record range and extracting the set of records. This is done as efficiently as possible using Array.slice where the range does not coincide with the beginning or end of a page.

    Simpler API
    You no longer have to know about the methods which perform all this magic. In 4.1, you can use the regular Store API.

    All you need to do is configure your Store like this:

    Code:
        buffered: true,
        pageSize: 50, // or whatever works best given your network and DB latency
        autoLoad: true
    The autoLoad config does what it has always done: it starts at page 1! The 4.0 way of initializing via the guaranteeRange method still works, but it should be replaced with autoLoad or the new loadPage method. Calling guaranteeRange disables certain internal optimizations in order to maintain compatibility.
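
    For reference, a fuller buffered Store setup might look roughly like the sketch below. The 'ForumThread' model, the 'data.php' URL and the reader fields are illustrative assumptions, not part of the changes described above.

    Code:
    // Minimal sketch of a buffered store (model and proxy details are assumed).
    Ext.define('ForumThread', {
        extend: 'Ext.data.Model',
        fields: ['title', 'author', 'lastpost']
    });

    var store = Ext.create('Ext.data.Store', {
        model: 'ForumThread',
        buffered: true,     // use the page cache and buffered rendering
        pageSize: 50,       // tune for your network and DB latency
        autoLoad: true,     // kicks off a load of page 1
        proxy: {
            type: 'ajax',
            url: 'data.php',
            reader: {
                type: 'json',
                root: 'topics',
                totalProperty: 'total'
            }
        }
    });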

    How it works
    The grid now calculates how large the rendered table should be from the configuration of the PagingScroller, which is the object that monitors scroll position. When scrolling downwards, the relevant settings are:

    * trailingBufferZone - The number of records to keep rendered above the visible area.
    * leadingBufferZone - The number of records to keep rendered below the visible area.
    * numFromEdge - How close the edge of the table should come to the visible area before the table is refreshed further down.

    The rendered table needs to contain enough rows to fill the height of the view plus the trailing buffer size plus leading buffer size plus (numFromEdge * 2) to create some scrollable overflow.

    As the resulting table scrolls, it is monitored, and when the end of the table comes within numFromEdge rows of coming into view, the table is re-rendered using a block of data further down in the dataset. It is then positioned so that the visual position of the rows does not change.

    In the best case scenario, the rows required for that re-rendering are already available in the page cache, and this operation is instantaneous and visually undetectable.

    To configure these values, configure your grid with a verticalScroller:

    Code:
    {
        xtype: 'gridpanel',
        verticalScroller: {
            numFromEdge: 5,
            trailingBufferZone: 10,
            leadingBufferZone: 20
        }
    }
    This means that 40 rows will overflow the visible area of the grid to provide smooth scrolling, and that re-rendering will kick in as soon as the edge of the table comes within 5 rows of being visible.
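
    Putting the pieces together, a grid wired to a buffered store might be configured roughly as below. The store variable and column names are carried over from the earlier sketch and are assumptions, not framework requirements.

    Code:
    // Sketch: a grid bound to the buffered store above, with the scroller zones tuned.
    Ext.create('Ext.grid.Panel', {
        renderTo: Ext.getBody(),
        width: 600,
        height: 400,
        store: store,                // the buffered store from the earlier sketch
        loadMask: true,
        verticalScroller: {
            numFromEdge: 5,          // refresh when the table edge is within 5 rows of view
            trailingBufferZone: 10,  // rows kept rendered above the visible area
            leadingBufferZone: 20    // rows kept rendered below the visible area
        },
        columns: [
            { text: 'Title',  dataIndex: 'title',  flex: 1 },
            { text: 'Author', dataIndex: 'author', width: 150 }
        ]
    });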

    Keeping the pipeline full
    Keeping the page cache primed to be ready with data for future scrolling is the job of the Store. The Store also has a trailingBufferZone and a leadingBufferZone.

    Whenever rows are requested for a table re-render, the Store first returns the requested rows and then ensures that the range encompassed by those two zones around the requested data is in the cache, requesting any missing pages from the server.

    Those two zones have quite a large default value, but can be tuned by the developer to keep fewer or more pages in the pipeline.
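
    For reference, tuning these Store-side zones is just more configuration on the buffered Store; the values below are illustrative examples only:

    Code:
    buffered: true,
    pageSize: 50,
    trailingBufferZone: 25,   // records kept cached behind the requested range (illustrative)
    leadingBufferZone: 200,   // records kept cached ahead of the requested range (illustrative)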

    Cache Misses
    If "teleporting" way down into the dataset to a part for which there are definitely no cached pages, then there will be a load mask and a delay because data will need to be requested from the server. However this case has been optimized too.

    The page which contains the range required to create the visible area is requested first, and the table will be re-rendered as soon as it arrives. The surrounding pages covering the trailingBufferZone and leadingBufferZone are requested after the data that is really needed ASAP by the UI.

    Pruning the cache
    By default, the cache has a calculated maximum size, beyond which it will discard the least recently used pages. This size is the number of pages spanned by the scroller's leadingBufferZone, plus the visible size, plus the trailingBufferZone, plus the Store's configured purgePageCount. Increasing the purgePageCount means that once a page has been accessed, you are much more likely to be able to return to it quickly later without triggering a server request.

    A purgePageCount value of zero means that the cache may grow without being pruned, and it may eventually grow to contain the whole dataset. This can actually be a very useful option when the dataset is not ridiculously large. Remember that humans cannot comprehend too much data, so grids of many thousands of rows are not actually that useful - users scrolling through them probably just got their filter conditions wrong and will need to re-query.

    Pull the whole dataset client side!
    One option if the dataset is not astronomical is to cache the entire dataset in the page map.

    You can experiment with this option in the "Infinite Grid Tuner" which is in your SDK examples directory under examples/grid/infinite-scroll-grid-tuner.html.

    If you set the "Store leadingBufferZone" to 50,000 and the purgePageCount to zero, this will have the desired effect.

    The leadingBufferZone determines how far ahead the Store tries to keep the pipeline full. 50,000 means keep it very full!

    A purgePageCount of zero means that the page map may grow without limit.

    So when you then kick off the "Reload", you can see the first, visually needed page being requested, and then rendered.

    Then you can see the Store diligently trying to fulfil that huge leadingBufferZone. Pretty soon, the whole dataset will be cached, and data access anywhere in the scrollable area will be instant.
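
    Outside of the tuner, the same experiment can be expressed directly as Store configuration, roughly as below (the pageSize is just an example; the zone and purge values come from the description above):

    Code:
    buffered: true,
    pageSize: 100,
    leadingBufferZone: 50000,  // keep the pipeline "very full": effectively prefetch everything
    purgePageCount: 0,         // never prune, so the page map can grow to hold the whole dataset
    autoLoad: true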

    Compatibility with 4.0
    There are new APIs in 4.1 related to these changes. Unlike in previous beta releases, the guaranteeRange method should work now. Even so, as noted above, its use is discouraged because (for compatibility) when you use it you are also specifying the size of the rendered table. Since the minimum size is actually dynamic, handling it this way can be hazardous. The new "zones" configurations are designed to let you adjust how much rendering you want beyond that minimum.
    Don Griffin
    Ext JS Development Team Lead

    Check the docs. Learn how to (properly) report a framework issue and a Sencha Cmd issue

    "Use the source, Luke!"

  2. #2
    Sencha - Support Team slemmon


    Thank you for writing this up. Very comprehensive.

  3. #3
    Ext JS Premium Member westy


    Sounds good; I look forward to trying it.
    Product Architect
    Altus Ltd.

  4. #4
    Sencha User Teemac
    Effect on dom?


    In the event of getting the entire record set down on the client and keeping purgePageCount at 0, what will that do to the DOM? Will it still only render a small portion of the records, clearing out ones that are scrolled away from? Or will it just continually add more records to the bottom, potentially getting very large?

  5. #5
    Sencha - Ext JS Dev Team Animal


    It only renders a few more rows than you can actually see.

  6. #6
    Sencha User MD


    This is such great news -- really made my day -- thanks dongryphon and Animal! There seemed to be a lot of uncertainty and issues with grids and virtual/infinite scrolling since 4.0 and throughout the 4.1 betas, but this should certainly ease many of those concerns. Don, just for clarification -- will these changes appear in the RCs, or only in the final GA release?

  7. #7
    Ext JS Premium Member cabal

    Filtering problem


    Hello

    There is still a small filtering problem.
    I need remote filtering for buffered grid data.
    So I'm setting new extraParams on the proxy, then clearing the pageMap and loading page 1 (see the sketch below).
    When the filtered result set is at least as long as the page size (so the scroller is needed), the scroller is adjusted to the new result set size. But when the result set is smaller, even a single record, the scroller keeps the size of the unfiltered dataset.
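
    In code, what I'm doing is roughly this (the nameFilter parameter and searchValue are just examples from my app; clearing store.data assumes it is the buffered page map in the current nightly):

    Code:
    // Rough sketch of the remote-filter reload described above.
    var store = grid.getStore();
    store.getProxy().extraParams = { nameFilter: searchValue }; // new remote filter params (example name)
    store.data.clear();                                         // clear the page map / prefetch cache
    store.loadPage(1);                                          // reload from page 1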

    The reason is in the scroller: when the result set size is smaller than the page size, the scroller is disabled regardless of its previous state; see Ext.grid.PagingScroller.onViewRefresh.

    It was tested on today's nightly.

  8. #8
    Sencha - Ext JS Dev Team Animal


    Thanks. We'll have to implement a remote filter somewhere.

  9. #9
    Ext JS Premium Member scancubus

    scroller bouncing UP while scrolling


    Has anyone noticed in the infinite grid that the scroller moves back up as you mousewheel down? It seems to happen as soon as the grid prefetches more data. I also CANNOT look at the last records in the grid; the scroller doesn't like them or something.

  10. #10
    Sencha Premium Member skirtle


    @scancubus. Which build are you testing with?