View Full Version : ExtJS Performance

13 Feb 2010, 9:26 PM
I am currently developing a large application using ExtJS and ASP.NET MVC. I developed a few screens and it all went well. I have a border layout with tree navigation in the left pane and a tabbed interface in the centre region to load different modules. Everything went well until I noticed the system was not performing well when running the website.

Whenever I load a module into a tab, memory usage increases by about 6000K (because of the data load, I agree), but even when I close the tab, memory consumption stays the same; it never drops back down, which is scary. I am using Firefox as the browser with Ext version 3.1.1. I am thinking of going back to a plain ASP.NET MVC model, loading one screen at a time while still using ExtJS for the UI.

Did anyone face this problem before or am I doing something silly?

15 Feb 2010, 6:35 AM
Are you loading a ton of data via AJAX or something?

If so, try loading the tree incrementally. You want to use a TreeLoader object:

var myTreeLoader = new Ext.tree.TreeLoader({
    url: 'PriceList.jsp',
    paramOrder: ['pricelist', 'pricelistdetail', 'action']
});

myTreeLoader.on("beforeload", function(treeLoader, node) {
    myTreeLoader.baseParams = {};
    myTreeLoader.baseParams.listid = internal_price_list;          // 1st level of hierarchy
    myTreeLoader.baseParams.detailid = internal_price_list_detail; // 2nd level of hierarchy
    myTreeLoader.baseParams.action = 'get_tree';
}, this);

myTreeLoader.getParams = function(node) {
    var buf = [], bp = this.baseParams;
    for (var key in bp) {
        if (typeof bp[key] != 'function') {
            buf.push(encodeURIComponent(key), '=', encodeURIComponent(bp[key]), '&');
        }
    }
    buf.push('path=', (node.attributes.path) ? encodeURIComponent(node.attributes.path) : '/');
    return buf.join('');
};
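Stripped of the Ext-specific pieces, getParams() is just query-string building. A standalone, plain-JavaScript sketch of the same logic (function and parameter names are hypothetical, not from the thread):

```javascript
// Plain-JS version of the getParams() serialization above, for illustration.
// Builds a query string from a params object plus a node path.
function buildTreeParams(baseParams, nodePath) {
    var buf = [];
    for (var key in baseParams) {
        if (typeof baseParams[key] != 'function') {
            buf.push(encodeURIComponent(key), '=',
                     encodeURIComponent(baseParams[key]), '&');
        }
    }
    buf.push('path=', nodePath ? encodeURIComponent(nodePath) : '/');
    return buf.join('');
}

// Roughly what the beforeload handler above would produce:
console.log(buildTreeParams(
    { listid: 42, detailid: 7, action: 'get_tree' },
    '/pricelists/42'
));
// "listid=42&detailid=7&action=get_tree&path=%2Fpricelists%2F42"
```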

You'd want to copy .getParams() as-is, I think. Then change my code in the 'beforeload' handler to set whatever parameters you need to load the tree. Then perhaps something like this to create the tree:

var tree = new Ext.tree.TreePanel({
    useArrows: true,
    autoScroll: true,
    animate: false,
    enableDD: false,
    containerScroll: true,
    border: false,
    loader: myTreeLoader,
    root: new Ext.tree.AsyncTreeNode({
        text: 'My Title Tree',
        draggable: false,
        id: 'src'
    })
});

The end result would force your tree to load only the top level of the hierarchy rather than the entire dataset. Only when a node is expanded would it load that node's child records.

It is always best to try to load as little data as possible -- JavaScript is not magic; it's limited by client memory.
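For reference, an Ext 3 TreeLoader expects the server to answer each expansion request with a JSON array describing just the expanded node's direct children (nodes marked leaf: true get no expand arrow). A sketch of such a response, with made-up ids and titles:

```javascript
// Example of the JSON a server might return for ONE node expansion.
// Only the expanded node's direct children are sent, never the whole tree.
// (The ids and text values here are illustrative.)
var response = JSON.parse(
    '[' +
    ' {"id": "pl-1", "text": "Internal Price List", "leaf": false},' +
    ' {"id": "pl-2", "text": "Retail Price List",   "leaf": false},' +
    ' {"id": "pl-2-note", "text": "Notes",          "leaf": true}' +
    ']'
);
console.log(response.length);   // 3 child nodes for this expansion
```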

16 Feb 2010, 2:53 PM
Thanks for the reply. I was already loading the tree on-demand. The problem was that I didn't use the autoDestroy option on the parent components. It seems to be fixed, as memory usage is gradually going down, even though it doesn't happen instantly.

16 Feb 2010, 3:27 PM
Hi extshrek, how do you use the autoDestroy option on the parent components? Could you paste a little code, please?


17 Feb 2010, 1:56 AM
Hi extshrek, how do you use the autoDestroy option on the parent components? Could you paste a little code, please?


Every Panel object has an autoDestroy config option; if you set it to true, all child components will be destroyed when you destroy that panel. Hope this helps.
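A minimal sketch of that config in Ext 3.x, applied to the tab layout described earlier in the thread (the titles and layout here are illustrative, not the OP's actual code):

```javascript
// Sketch (Ext 3.x): a TabPanel that destroys closed tabs and their children,
// so their DOM and component references can be garbage-collected.
var tabs = new Ext.TabPanel({
    region: 'center',
    autoDestroy: true,          // destroy removed children rather than keep them around
    items: [{
        title: 'Module A',
        closable: true,
        closeAction: 'close',   // 'close' destroys the panel on close; 'hide' only hides it
        autoDestroy: true       // this panel destroys its own children too
    }]
});
```

This is a configuration fragment and assumes the surrounding Viewport/border layout from the original post; the key points are autoDestroy on each container and closeAction: 'close' on the closable tabs.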

Mike Robinson
17 Feb 2010, 7:36 AM
If you look at the "memory consumption" of an app from the outside, e.g. with Task Manager, you only see part of the picture. JavaScript has a "lazy" storage manager that takes deallocated objects and holds onto them, itself, for recycling. (It's trying to avoid "giving the storage back to the OS, just to have to request it again a few milliseconds later," because anything involving the OS memory-manager is more expensive. It also wants to avoid "touching" memory areas unnecessarily, which might trigger a page-fault.)

The OS is "lazy" too. It won't sweep through storage if it's not "short on storage," because there is no material advantage in taking the time to do so. It won't even take the pages out of working-sets... which means that storage consumption is now over-stated (i.e. "but who cares?"). Saves fuel for the garbage-trucks, you know...

Only when a moderate "squeeze" is placed on the storage subsystem will you start seeing these various mechanisms kick in. On a (typically big and copious) developer's machine, you usually have to induce storage-stress to get an accurate picture... and there are, of course, tools that will do just that.

And then you need to judge for yourself how realistic that scenario is, or isn't. One of the first things to go is recently-used programs and libraries, which can make the system feel much more sluggish in actual practice and might not at all be something your software would actually encounter "in the field." You really need to know, if you can know, just how big "a typical user's" machine is: how much physical RAM, how fast, and so on. Lots of machines these days are actually very big, because hey, "chips are cheap." Take advantage of that.