
View Full Version : Store with persistence, transactions, retrieving from other stores



deitch
13 Dec 2007, 12:15 PM
Per the previous threads in Help and extensions in Ext 1.x, I have cleaned and packaged up the extension to Ext.data.Store. All classes are extensions in Ext.ux of existing classes in Ext.data. Main features are as follows:

Ext.ux.HttpWriteProxy, which can handle writing in addition to reading.
Ext.ux.JsonWriterReader, which can convert Records into JSON in addition to its usual job of reading JSON into Records.
Ext.ux.ObjectReader, which can convert a POJSO (plain old JavaScript object) into a Record and vice versa.
Ext.ux.WriteStore, which has full transactions (see the README).
Ext.ux.WriteStore also has full support for writing back to the source.

I also intend to write Ext.ux.XmlWriterReader, but have not gotten to it yet.

Setting up the server side is the responsibility of the user.
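For orientation, here is a rough sketch of a minimal client-side setup. It is only a sketch: the URLs and the record type are placeholders, and the exact config options should be checked against the README (proxy, updateProxy, replaceWrite and reader are the names used later in this thread).

// rough sketch only -- URLs and myRecordType are placeholders; see the README for the real options
var clientStore = new Ext.ux.WriteStore({
    // read from one URL, write the journal back to another
    proxy: new Ext.ux.HttpWriteProxy({url: 'data/read'}),
    updateProxy: new Ext.ux.HttpWriteProxy({url: 'data/write'}),
    // JsonWriterReader turns the server's JSON into Records and Records back into JSON
    reader: new Ext.ux.JsonWriterReader({root: 'rows'}, myRecordType)
});
clientStore.load();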

------
The entire set of libraries, along with the GPL license and other non-Ext-related libraries, has been released to its own Website at jsorm.com.

J.C. Bize
13 Dec 2007, 12:37 PM
Excellent work... I can't wait to play with this.

Thanks for the thorough documentation.

Cheers,
JC

trbs
13 Dec 2007, 1:11 PM
me too, thanks!

tofsteel
15 Dec 2007, 1:46 AM
really good job deitcher...

congratulations...

Note: line 395, code: o.request.callback.call(o.request.scope, result, o.request.arg, true)

What is the result variable? Firebug reports an error on this line: result is not defined.

deitch
15 Dec 2007, 3:48 PM
Doh! Clearly, something slipped through. I didn't remotely believe it was possible I released something bug-free, so all I can say is, thank you for catching it!

I will work this through over the next few days and re-release the library.

ElliotS
16 Dec 2007, 3:47 PM
Great work!

deitch
17 Dec 2007, 6:07 AM
OK, this is fixed. I am just reworking my ant build file and will have a fixed version posted by the end of the day.

deitch
17 Dec 2007, 6:08 AM
Thank you, ElliotS.

deitch
17 Dec 2007, 7:59 AM
I cleaned up that bug that you kindly found. Feedback is always appreciated.

Avi

shprota
13 Jan 2008, 6:56 PM
This is just what I was looking for. Will play with it today.

deitch
13 Jan 2008, 6:57 PM
Enjoy. Let me know.

Jonathan.Hart
14 Jan 2008, 6:07 AM
Great stuff! :)
I am VERY interested in your XmlReader modification as I have need of such an updater for Xml sources.

Any ideas as to when you might have something on this so I do not re-invent the wheel?

pic_Nick
18 Jan 2008, 2:01 AM
Hi deitch. Very good extension!
But I want to suggest something: in the rejectChanges method it would be better to rewrite the rollback logic like this (the changed lines are the two superclass calls):

rejectChanges: function(){
    // back out the changes in reverse order
    var m = this.journal.slice(0).reverse();
    this.journal = [];
    for (var i = 0, len = m.length; i < len; i++) {
        var jType = m[i].type;
        if (jType == this.types.change) {
            // reject the change
            m[i].record.reject();
        }
        else if (jType == this.types.add) {
            // undo the add
            Ext.ux.WriteStore.superclass.remove.call(this, m[i].record);
        }
        else if (jType == this.types.remove) {
            // put it back
            Ext.ux.WriteStore.superclass.insert.call(this, m[i].index, m[i].record);
        }
    }
}
so that the grid displaying the store's data will be notified about added/deleted rows.

Marte
25 Jan 2008, 6:32 AM
Hi, Deitch

Really good extension, congratulations!

I just want to ask about the need for the commit in commitChanges(). If m[i].record.commit() is invoked, the record discards its internal knowledge of modified members, making them impossible to reject. I realized this after debugging intentionally buggy server code: the write operation was deliberately aborted so I could test the UI, and I ended up with un-rejected changes in all modified records.

Is that a bug, or am I simply misunderstanding the usage of commit/reject? My logic is as follows:

a) The user edits an existing record.
b) The record is processed this way:
- Located in the store.
- invoked beginEdit() on it.
- updated its data via several calls to record.set(key, val)
- invoked endEdit()
c) The store's update event is captured for EDIT operation, then store.commitChanges() is invoked.
d) The operation is rejected in server-side.
e) The UI detects a non-valid update, and store.rejectChanges() is invoked. At this point, the records have forgotten the original data (due to the call to store.commitChanges()) and changes are not rejected.

Cheers!

lostAtSea
31 Jan 2008, 3:32 AM
Hello Deitch,

I'm having the same issues as Marte, trying to rollback record updates after server side rejection.

Is there any solution to this?

lostAtSea

cimperia
4 Feb 2008, 6:57 AM
Yes there is:

1)
In commitChanges(), remove the commit statement:

m[i].record.commit()

and whatever code is involved in committing the records to the grid, i.e. the code that marks records as clean.

2)
Add in that very same function the following code:


this.on('writesuccessful', function() {
    // commit the changes and clean out
    var m = this.journal.slice(0);
    for (var i = 0, len = m.length; i < len; i++) {
        if (m[i].type == this.types.change) {
            // should check whether it's dirty or not
            if (m[i].record.dirty) {
                m[i].record.commit();
            }
        }
    }

    this.journal = [];
}, this, {
    single: true
});
3) Finally in the callback function where you test for success/failure of the JSON call, add this:


this.fireEvent('writesuccessful',this);

Hopefully it will work for you. I have modified the code quite a bit so that it behaves as advertised (real multiple undo, which it does not currently support, etc.).

deitch
4 Feb 2008, 4:43 PM
All,

My apologies for the delay in responses. I have been relying (unwisely) on the forum notification mechanism to let me know about new messages. Clearly, I was in error.

I will respond to all of these in the next few days.
Thanks.
Avi

deitch
7 Feb 2008, 11:23 AM
Good point, pic_Nick. Using the superclass allows events to flow through properly. This will be included in the new version, to be released shortly.

deitch
7 Feb 2008, 12:19 PM
lostAtSea and marte,

You have raised an interesting issue. What we really have right now is two data stores: the one in Ext's memory (WriteStore) and whatever is backing it on the server side. We have the local store completing its commit, then sending the changes to the server. But what if the server rejects those changes?

In theory, there are two possibilities:

We keep the change and retry later
We reject the change


You clearly prefer the latter, and I agree. We need a provision to provisionally accept, transmit, and then, once accepted, fully accept. I will look at cimperia's solution and some others, and get back to you.

deitch
7 Feb 2008, 1:04 PM
cimperia,

In principle I like your idea. The issue is determining the state of the write. When you POST to a web server, you can have one of three outcomes:

POST fails (network error, server error, etc.)
POST succeeds, processing fails (e.g. the case above)
POST succeeds, processing succeeds (best outcome)


The trick is to differentiate between the three. In both of the last two cases, the HTTP response will be a straight 200, and Ext.Ajax.request() will return success to its callback. Thus, we need to process the content of the results inside the library (i.e. not the application) to determine whether the processing succeeded or failed.

Unless there is a better idea out there in the group, we will do exactly that. What it means is that the response from the server will have to follow a precise protocol. But, of course, it already does that for the most part.

deitch
7 Feb 2008, 1:39 PM
The update is in the works. Here is how it will work.

When you commitChanges(), it will *first* post them to the server. If the POST fails *or* if the POST succeeds but the first text characters back from the server are not "SUCCESS" (without the quotes), a writeexception event will be sent. If they are SUCCESS, then the commit will be completed locally and a write event will be sent. This closely follows the ext model of read/readexception, etc.
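In application terms the flow would look roughly like this (a sketch only; the exact event argument lists may differ from the released code):

// sketch: the store posts the journal, then checks the response text itself
store.on('write', function(s, response) {
    // POST succeeded and the response text began with SUCCESS;
    // the local commit has already been completed
});
store.on('writeexception', function(s, response) {
    // POST failed, or the response text did not begin with SUCCESS
    Ext.Msg.alert('Save failed', 'The server did not accept the changes.');
});
store.commitChanges();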

pic_Nick
8 Feb 2008, 9:24 AM
I think it would be better to expect a JSON object as the response text, with a success property set to true, rather than simply looking at the first characters of the response text.

deitch
8 Feb 2008, 9:51 AM
pic_Nick,

I thought of that. The problem is that you are then getting into the data itself, where we want to be able to provide success/failure before we reach that level. The only response from a data update activity should be a status message. That is cleanest in text.

deitch
8 Feb 2008, 9:54 AM
Thank you all for your patience and feedback.

Attached is write-store v0.5 (for the curious, the previous version was 0.4). I have fixed all that has been raised here. Additionally, I have extended the documentation to reflect the various responses from a POST, as well as the sample page to show the various results and give sample code.

The only issue that has not been addressed is XmlReaderWriter. I expect that to come soon.

Enjoy!

pic_Nick
9 Feb 2008, 4:04 AM
pic_Nick,

I thought of that. The problem is that you are then getting into the data itself, where we want to be able to provide success/failure before we reach that level. The only response from a data update activity should be a status message. That is cleanest in text.

It is simply the way it is done in the Ext.form.BasicForm submit mechanism. Moreover, a status message is not the only possible response from a data update; there may be extra data, like the actual number of records affected in the back-end storage, or something else. Furthermore, the responseText is itself data, and it is our responsibility to interpret its contents. Whether it is a string beginning with 'SUCCESS' or a JSON object containing a success property doesn't really matter, but a JSON object is more flexible and convenient.
Ufff! That was just my opinion :">

deitch
9 Feb 2008, 3:33 PM
I see your point. The problem is that we need three layers here: network success/failure, processing success/failure, and results. But we only have two to work with: the network and the data content. Personally, I would prefer to see it at another layer. An HTTP header is a possibility, but that gets into the HTTP protocol, which is a little messier.

Any other opinions?

pic_Nick
10 Feb 2008, 1:35 AM
You are right. I just propose combining the last two layers, processing success/failure and result data, into the one data content layer you mentioned. That's all :)

deitch
10 Feb 2008, 2:12 PM
I am not completely sure I get what you are proposing. Can you outline it a bit further?

pic_Nick
10 Feb 2008, 11:20 PM
Sorry, maybe it's my bad English :">
You have mentioned three layers: network s/f, which is the HTTP response status; processing s/f, which is some custom status code in the responseText; and some result data (i.e. the number of processed records). I'm just proposing to wrap the last two layers in a single JSON object, so that processing the responseText will be more convenient.

And I would like to suggest another improvement:

writeRecords: function(options){
    var data = [];
    var tmp;
    // we take only the journal, unless we have explicitly asked to replace all

    // sure, we could use a ternary operator, but this may get more complex in the future,
    // and if-then-else is cleaner to understand

    // to be supported later
    if ((options != null && options.replace == true) || this.replaceWrite == true) {
        // get the actual data in the record
        var rs = this.data.getRange(0);
        for (var i = 0; i < rs.length; i++) {
            var elm = rs[i];
            if (elm != null) {
                data[i] = elm.data;
            }
        }
    } else {
        // filter out redundant journal data
        // back up journal data, so we can roll back changes correctly
        var fj = this.journal.slice(0);
        // traverse journal in reverse order
        for (var i = fj.length - 1; i > 0; i--)
            if (fj[i]) {
                switch (fj[i].type) {
                    // if we've encountered a 'remove' action...
                    case this.types.remove:
                        // then delete all preceding actions with this record
                        for (var j = i - 1; j >= 0; j--)
                            if (fj[j] && (fj[j].record == fj[i].record) && ((fj[j].type == this.types.change) || (fj[j].type == this.types.add))) {
                                // if there was an 'add' action then delete the 'remove' action itself
                                if (fj[j].type == this.types.add)
                                    fj.splice(i--, 1);
                                fj.splice(j, 1);
                                i--;
                            }
                        break;
                    // if we've encountered a 'change' action...
                    case this.types.change:
                        // then delete all preceding 'change' actions with this record
                        for (var j = i - 1; j >= 0; j--)
                            if (fj[j] && (fj[j].record == fj[i].record) && (fj[j].type == this.types.change)) {
                                fj.splice(j, 1);
                                i--;
                            }
                        break;
                }
            }
        // get the actual data in the record
        for (var i = 0; i < fj.length; i++) {
            if (fj[i] != null) {
                data[i] = {
                    type: fj[i].type,
                    data: fj[i].record.data
                };
            }
        }
    }
    return data;
},

The code above prevents unnecessary journal entries from being sent to the server. For example, if we have changed two fields of one record, we will have two entries in our journal, but we do not need to send them both, because the second one already contains the changes from the first.

Marte
15 Feb 2008, 6:23 AM
Hi, guys. I was away for the last few days and didn't see your proposals. I modified the commitChanges() method a bit and it worked for me: I moved the commit into the write event, this way:



// now write the changes to persistent storage
this.on('write', function(){
    // commit the changes and clean out
    var m = this.journal.slice(0);
    // only adds and changes need commitment
    for (var i = 0, len = m.length; i < len; i++) {
        if (m[i].type == this.types.change || m[i].type == this.types.add) {
            m[i].record.commit();
        }
    }
    this.journal = [];
}, this);


This way, the commit occurs only on successful updates. I think I did a bit more, but I don't remember now... I will come back if I remember. :-)

Cheers!

deitch
15 Feb 2008, 6:31 AM
I actually moved it into the writeHandler, even before the write event gets launched. Take a look at the latest code up above.

deitch
15 Feb 2008, 7:09 AM
pic_Nick,

I disagree with the improved-efficiency changes. The reason is that what we are doing client side is really only a temporary reflection. Once we close the browser window, all the changes are gone. We rely on the server side to be the permanent store.

If we are journalling the changes, we cannot assume that the server-side only cares about the net results of each atomic transaction. Some applications will do that. Others will follow the RDBMS paradigm of recording every part of a transaction in a journal, even those that effectively null each other out (e.g. add a record then do some stuff then remove the same record). They may do it for future debugging reasons; efficiency; security; compliance; other stuff; who knows? The point is, the client library should not be in the position of making policy decisions about what parts of an atomic transaction to keep and what not. Send the whole thing back.

I am still thinking through JSON vs. straight text vs. XML etc. for the processing status. I think that since the store is wholly representative of the server side, the contract should remain between WriteStore and the server side rather than being punted to the application, so that part we are doing correctly. The larger question is how to report it.

FYI, we still can handle processing data beyond the SUCCESS or lack thereof. The WriteStore only checks if the responseText starts with SUCCESS. It ignores everything else, and passes the entire responseText to the callback handler. So, there is plenty of room to work with it.

GraemeBryce
17 Feb 2008, 1:04 PM
Can I suggest that, given the debate over JSON or text or XML responses from the server, there are merits in each and there is no "perfect" way. The answer would appear to be to code for at least two ways on each object and to make the expected response configurable. Thus, for the reader/writer working with JSON data, a config should allow the developer to expect either JSON, text, or a simple 200 OK from the server, whilst with the XML object the choice of XML or text would be reasonable.

For some Ext developers the problem will be that the server response is a given and outwith their individual control.

deitch
17 Feb 2008, 2:41 PM
Graeme,

I was actually thinking of getting out of this predicament entirely by having an HTTP header include the success/failure. The application, which gets the response object in its entirety, can then choose if it wants to look at response.responseText, responseXml, or something else entirely.

The problem is that it then depends on an HTTP header, which is native to the HttpProxy. It also creates complications on the server side. What if the processing application has no access to the HTTP headers? In CGI modes (or fastCGI) this is sometimes the case, as well as within certain other constrained environments.

On the other hand, sticking with what there is means that if something new comes along - post-XML, post-JSON, new world whatever, we need to retrofit it. I really prefer to keep it independent of that.

So are HTTP headers a better way? That, of course, means that other Proxy implementations (e.g. StoreProxy) have to have some equivalent mechanism, which complicates matters, and the processing application must have access to the headers.

No clean answer, is there?

GraemeBryce
18 Feb 2008, 1:29 AM
Deitch

Correct that there is no clean answer.

I would say that the HTTP header is a good way to confirm success or failure of the server-side transaction; it would simply add detail to the 200 OK header that confirms the POST reached the server.

However, there is no real scope to return meaningful data in the HTTP headers, so there is no way for the client to "explain" to the user what went wrong, such as which parts of a transaction failed or which values failed validation.

It is also fair to say that many developers will not have experience of creating HTTP headers server-side, so you increase the barrier to use.

For maximum adoption in a library such as Ext you really need to offer "all things to all men" and then let individuals choose for themselves the best way forward.

GB

cimperia
18 Feb 2008, 6:22 AM
IMO, the application knows best, and it should be up to it to decide when to commit or discard changes, without forcing the server response to comply with some straitjacket.

Marte
19 Feb 2008, 5:12 AM
cimperia,

I agree with you. Actually my app is behaving much like yours.

deitch
26 Feb 2008, 9:01 AM
OK, so let's roll this together. Rather than having the application commit to a certain style of output, and rather than working with HTTP headers (which, I agree, I dislike, as it mixes too many layers rather than keeping a clean separation), how do you want this to work?

In principle, when an app calls commitChanges(), and there is an ability to write to the server, two things should happen:

The changes should be sent to the server
The changes should be committed to the local Store


Of course, 2 only occurs if 1 was successful. WriteStore needs to know if 1 was successful. Yet, with the above, only the app knows if 1 was successful. How do we get around this?

Options that I see:

What we do now, which is require the response to begin with some pre-defined string
Use HTTP headers
Make it a 2-step: first you call write(), then you call commitChanges() if you like the response of write(). If not, you do not.
Use callbacks. We change event 'write' to occur after writing is complete but before committing locally. We only commit if we get a true response to the callback, the same way as the beforewrite event works.


We all agree that #2 is no good. Consensus here seems to be that #1 is too constricting. That leaves #3 and #4 (unless someone has better ideas, which I would love). Both have the advantage that they involve the application in the decision of whether the server-side commit (at the application layer) was good. #3 has the disadvantage that what was a one-step process becomes two-step, but that is not an awful price to pay. The other advantage of #4 over #3 is that #3 is ambiguous if commitChanges() is called without calling write() first. #4 is very clear: you call commitChanges(); if you have a 'write' handler registered, we check it to see whether the response was good, and if you do not have one, we just go ahead.

Thoughts?

cimperia
27 Feb 2008, 6:51 AM
Deitch, I am still using your original version of WriteStore (though modified to suit my needs) and therefore I don

deitch
27 Feb 2008, 7:00 AM
Cimperia,

What you are listing looks a lot like #4 above. I would do it a little differently, though:

Do the write
Launch the write event
If the write event returns false, do not do the commit
If the write event returns true, do the commit, then launch the commit event

The application just needs to add writeStore.on('write', function(){...}) and be sure to return true/false.
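On the application side that would look roughly like this (a sketch; the handler's argument order is illustrative, and how you inspect the response depends entirely on your server's format):

writeStore.on('write', function(s, options, response) {
    // the response can be text, JSON, XML, whatever your server sends;
    // here we assume JSON and that the raw Ajax response object is passed through
    var result = Ext.decode(response.responseText);
    // return false to cancel the local commit, true to let it proceed
    return result && result.success === true;
});
writeStore.commitChanges();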

cimperia
27 Feb 2008, 7:13 AM
Yes, I think you're right. I like your solution; it's clean and leaves the developer in control, which is my main concern.

deitch
27 Feb 2008, 7:25 AM
Agreed. It will be done.

GraemeBryce
3 Mar 2008, 4:35 AM
Deitch

Can you confirm that the Zip attached to the first post in this thread is the latest version of the code?

It would be valuable to add a small release/change control section to the text of the first post to indicate when it was last updated - this appears to be a convention in this section of the forums.

Regards: Graeme

deitch
3 Mar 2008, 6:25 AM
Graeme,

I can confirm that it is not. I am working the last changes in, will post it on my own Web site, and put a link there. The change control will definitely be included in the release, as well as the nomenclature (write-store-version.zip or the like).

I will attach the most recent version (but not the final one, as that is being worked on with the write-response-commit model from above) to the first post.

Thanks for suggesting it.
Avi

GraemeBryce
3 Mar 2008, 7:32 AM
Thanks for that,

I am experimenting with the code today, in particular for use as a client-side store for UI wizards that include many forms, some of which can be iterated many times. The result should be a single nested store that can be committed to the server in a single step.

I am also looking into the possibility of combining this store with the buffered store elsewhere in this forum, thus further increasing the scope. This second investigation raises the question of the differences between a plugin and an extension, and the common problem that two different branches of the same object cannot easily be made into one again.

I will keep you posted.

Regards: Graeme

deitch
4 Mar 2008, 8:13 AM
My pleasure.

I also have a series of other libraries (non-ext-related) that I am almost ready to release on their own Website. The latest downloads for write-store will be there, along with a wiki describing how to use it and a sample page.

As soon as the site is ready (target is end of next week), I will post it here.

GraemeBryce
5 Mar 2008, 2:34 AM
Deitch

I have a situation where I would like to use code such as



clientStore = new Ext.ux.WriteStore({
    proxy: new Ext.ux.StoreProxy({store: jobStore, field: 'Clients'}),
    updateProxy: this.proxy,
    replaceWrite: true,
    reader: new Ext.ux.ObjectReader({root: 'client'}, clientRecord)
});


Important in this example is the use of the term {root:'client'} passed as a parameter to the Object reader.

My data would then appear as


{"job":[
{
"JobID": "8719"
,"ClientNames": "Geoff Jones and Graeme Bryce and Susan Smith"
,"ClientSalutations": "Geoff and Graeme and Laura"
,"JobDescription": "Sale of someaddress And Purchase of another address"
,"PurchaseAgent": "AF & CD Smith"
,"PurchaseAskingPrice": "209800"

,"Clients":
{"client" :[
{"EntityID": "146695", "FirstName": "Geoff", "JobID": "8719", "LastName": "Jones"},
{"EntityID": "146693", "FirstName": "Graeme", "JobID": "8719", "LastName": "Bryce"},
{"EntityID": "146694", "FirstName": "Susan", "JobID": "8719", "LastName": "Smith"}
]
}

,"Parties": [
{"EntityID": "145628", "JobID": "8719", "Name": "Allingham & Co.", "PartyName": "Agent", "PrimaryAddressLine": "134 Marchmont Road\u000d\u000aEdinburgh\u000d\u000aEH9 1AQ" },
{"EntityID": "145329", "JobID": "8719", "Name": "Halifax plc", "PartyName": "Lender", "PrimaryAddressLine": "PO Box 60\u000d\u000aTrinity Road\u000d\u000aHalifax\u000d\u000aHX1 2RG" }
]
}
]
}

So the field 'Clients' in the top-level record contains an object 'client' that in turn contains records. My expectation was that providing the root parameter to the ObjectReader would cause it to iterate the 'client' collection, but it appears this is not the case.

I welcome your thoughts

Regards: G.

deitch
6 Mar 2008, 12:06 PM
Graeme,

I will look through your example and let you know.

Avi

deitch
6 Mar 2008, 1:15 PM
Graeme,

The assumption in StoreProxy is that your data looks as follows:


{"job":[
{
"JobID": "8719"
,"ClientNames": "Geoff Jones and Graeme Bryce and Susan Smith"
,"ClientSalutations": "Geoff and Graeme and Laura"
,"JobDescription": "Sale of someaddress And Purchase of another address"
,"PurchaseAgent": "AF & CD Smith"
,"PurchaseAskingPrice": "209800"

,"Clients":
[
{"EntityID": "146695", "FirstName": "Geoff", "JobID": "8719", "LastName": "Jones"},
{"EntityID": "146693", "FirstName": "Graeme", "JobID": "8719", "LastName": "Bryce"},
{"EntityID": "146694", "FirstName": "Susan", "JobID": "8719", "LastName": "Smith"}
]

,"Parties": [
{"EntityID": "145628", "JobID": "8719", "Name": "Allingham & Co.", "PartyName": "Agent", "PrimaryAddressLine": "134 Marchmont Road\u000d\u000aEdinburgh\u000d\u000aEH9 1AQ" },
{"EntityID": "145329", "JobID": "8719", "Name": "Halifax plc", "PartyName": "Lender", "PrimaryAddressLine": "PO Box 60\u000d\u000aTrinity Road\u000d\u000aHalifax\u000d\u000aHX1 2RG" }
]
}
]
}

In other words, the field element passed to StoreProxy simply contains the object to be passed, without any sub-elements. What you are looking for is multi-layered. StoreProxy isolates the correct record (by ID) and the field within that record. However, you are proposing that the value for the field itself might be multi-layered, and thus have a root, as in your example above.

I am sorely tempted to say, "that is too complex, why not just use it the way the data is above." Of course, if you cannot control the server side, or have good reasons to have it that way, it wouldn't work. But the biggest argument against my saying it is that JsonReader works precisely the way you described, by passing a root argument to the Reader.

I will work it into the next release.
Avi



GraemeBryce
6 Mar 2008, 1:38 PM
Deitch

In this case I do have control over the server side and can remove the additional layer; however, many others will be unable to do so.

For those working with XML it is common (although not necessary) to create a container node such as <clients> that then contains many item nodes such as <client>. The JSON in my example was a server-side conversion from XML using a standard conversion routine.

I think it is great to have the flexibility and to support the root: parameter as with other items in Ext, and I look forward to providing further feedback on the release when it is available.

Regards: G.

GraemeBryce
2 Apr 2008, 2:18 AM
Deitch

Just wondering if you are still engaged with this project/thread or if other things are requiring all of your time?

Did you ever resolve some of the discussions earlier in the thread and do you intend on posting a revised version?

Your audience is still listening :-)

deitch
2 Apr 2008, 3:25 AM
Hi Graeme,

Absolutely engaged! Thank you for checking in. I have the required changes in place and tested, sample files including a live one, a separate set of internationalization libraries (not related to Ext JS), and I have also set up a full Website to support them - wiki descriptions, license terms, etc. There were some last-minute administrative issues to work out (property ownership, legal-type stuff), but I am about a week away from publishing the site.

I apologize for the delay. The library need not be perfect, but the supporting materials need to work correctly.

Thanks for checking in!
Avi

GraemeBryce
2 Apr 2008, 3:31 AM
Cool

I will look forward to the release.

deitch
8 Apr 2008, 6:49 PM
Graeme,

I set up a separate Website - forum, wiki, downloads, GPL and commercial licenses - at the site http://jsorm.com. The forum is new, so not much there, but the Wiki is well-populated.

There is another set of libraries there as well (non-Ext-related), and more to come. I look forward to your feedback.

Avi

pic_Nick
14 Apr 2008, 12:28 AM
Hello, deitch.
I've checked your new release. It is very good and is almost exactly what I was thinking of... with one exception :). I do not understand why you are so reluctant to rely on the response text being a JSON string. You are using JsonWriterReader as the underlying data processing object (like JsonStore uses JsonReader in native Ext), and you are sending changes to the server as a JSON string. So why not restrict the response text to a JSON string as well? IMHO it would be clearer and more convenient (maybe even change the name 'WriteStore' to 'JsonWriteStore' ;)).
Anyway, thank you for your work!

deitch
14 Apr 2008, 10:42 AM
Hi Nick,

Thank you, I hope you find the wiki entries / documentation equally useful.

As for your suggestion, I had thought this approach would satisfy everyone. This way, write-store does not restrict your response text to be anything at all. It can be text, JSON, XML, or even some new encoding that you made up (NER = Nick's Encoding Regime?). You control it via the callback on the write event.

Did I misunderstand something?
Avi



pic_Nick
14 Apr 2008, 10:19 PM
Yes, I've seen that the response is all controlled via the callback :). Those were just my humble thoughts... It is your project and you are doing it well. It has been very helpful to me.

deitch
15 Apr 2008, 8:18 AM
Nick,

If you can suggest in a little more depth how the detail would work, maybe I can bundle it as an option, if it makes sense?

Avi

mjlecomte
15 Apr 2008, 9:51 AM
I set up a separate Website - forum, wiki, downloads, GPL and commercial licenses - at the site http://jsorm.com. The forum is new, so not much there, but the Wiki is well-populated.
...
I look forward to your feedback.


This is the first time I've visited this thread; I wanted to check into it once my familiarity with Ext grew a bit. I have only looked at your wiki briefly and have to read further, but you've done a nice job.

I have one preliminary comment/suggestion, and hopefully it is not too inappropriate given my cursory review of the wiki thus far. I was looking for a bullet item that might describe "When should I not use this store?". You have a nice explanation supporting what this store does that the standard one will not. For example, if I'm loading a store for a ComboBox with 4 options, this store may be inappropriate because X, Y, Z, etc. I am just looking for a summary of when to use, and when NOT to use, this store in relation to the standard stores.

pic_Nick
15 Apr 2008, 10:36 AM
Nick,

If you can suggest a little more in-depth how the detail would work, maybe I can bundle it as an option if it makes sense?

Avi

Well, if you are really interested, the changes are quite simple (the changed part is the block that decodes the response and checks its success property):

// handle the results
writeHandler: function(o, success, response) {
    // if the POST worked, i.e. we reached the server and found the processing URL,
    // which handled the processing and responded, AND the processing itself succeeded,
    // then success, else exception

    // the expectation for success is that the responseText represents a JSON string with a top-level
    // 'success' property set to true, along with other application-specific data
    var vSuccess = success;
    var vRspObj = null;
    if (vSuccess) {
        try {
            vRspObj = Ext.decode(response);
            vSuccess = vRspObj.success === true;
        } catch (e) {
            vSuccess = false;
        }
    }
    if (vSuccess) {
        // commit the changes and clean out
        var m = this.journal.slice(0);
        // only changes need commitment
        for (var i = 0, len = m.length; i < len; i++) {
            if (m[i].type == this.types.change) {
                m[i].record.commit();
            }
        }
        this.journal = [];
        this.fireEvent("write", this, o, vRspObj);
    } else {
        this.fireEvent("writeexception", this, o, response);
    }
}

mjlecomte
22 Apr 2008, 4:33 PM
I just grabbed the zip file from your website to explore further. One thing right off the bat: you might want to bundle the sample-data file with the distribution. I grabbed it separately from the site easily enough, though.

Is the web forum admin on holiday?

deitch
22 Apr 2008, 5:44 PM
MJ,

I wanted to bundle it, but that requires files server-side and client-side, and deployment of Ext JS, which isn't really appropriate for me to do. I can bundle it, though.

The forum admin is definitely not on holiday (although I could really use one!). The forum requires email confirmation, and then moderator approval. I will reach out to you privately.

deitch
23 Apr 2008, 9:14 AM
Nick,

Now I see what you want to do. The early versions of write-store took the response, assumed it was text, and looked for the string SUCCESS at the beginning of the response. I then replaced that with a version that assumes nothing, but fires a 'write' event: if no handler returns false from the write event, the commit proceeds, followed by a 'commit' event. What you are proposing is to return to the earlier method, but instead of assuming text with the string SUCCESS, treat the response as JSON and look for the success property.

The problem with this is the same as the problem with assuming text. Some people work with constraints that do not conform to text-with-SUCCESS or JSON, but may use XML or some other format entirely. Users have expressed that on this forum and privately to me.

I think flexibility is preferred here.

To do what you want, why not pull it into a separate handler for the write event? You already have the code right there.




deitch
24 Apr 2008, 5:18 AM
Adding to the above, with much appreciation for all the people who have helped: I have included changelog.txt with the zip distribution on the jsorm.com Web site. I have used recommenders' publicly available forum IDs as shown here, rather than actual names or emails.

1) If I have missed anyone, do not hesitate to tell me.
2) If someone prefers their real name or email rather than forum ID, again do not hesitate to tell me.

pic_Nick
24 Apr 2008, 9:59 AM
deitch, I agree with you that this is the more flexible way, so be it.
BTW, why don't you use addEvents() to add your events to the WriteStore? I don't know exactly what it does, but in Ext's code it is always there.

deitch
30 Apr 2008, 9:07 AM
You will laugh, but mostly "legacy" code. I had started it the other way, and changing that was always the right thing to do, but never a priority.

I am working on RC3 right now (chriswa found a nice little logic bug that gets tripped if you update a record then delete it then commit, all within one transaction), so maybe I will do it now if I can.

Blackhand
26 May 2008, 10:19 PM
I've been over the wiki a couple of times, and I'm probably just blind, but is there any way to check whether there are any uncommitted changes in the transaction journal? I haven't been able to find one.

getModifiedRecords() would normally work, except that in the case of write-store, adding/removing records does not append to or remove from the modifiedRecords array (understandably).

I simply need to check if there are any pending changes that need to be committed.

Thanks.

Blackhand
26 May 2008, 10:26 PM
Ok, well, literally 5 seconds after I made the post I opened up the source for write-store, went down to the WriteStore class, and saw that it has a journal property.

After testing around a bit, checking whether journal.length != 0 seems to give me the desired result.

deitch
27 May 2008, 4:58 AM
Blackhand,

Apologies for not getting back sooner... but there was no way I was going to beat you to the punch at 02:19am! :-)

This is a good point - you can go to the internal journal structure, but requiring you to do so is not the best way. I should add a boolean method that returns whether the store is in a dirty state, i.e. has uncommitted changes. The question is, is it better as:

isDirty()
isCommitted()
isModified()

Thoughts as to the best boolean to use? I could also add getModified() / getDirty() to get an integer count of how many atomic elements of a transaction are uncommitted, but I don't see how that has much meaning.

Thoughts?



cimperia
27 May 2008, 7:09 AM
I added the following function to the library:


isDirty: function() {
    return (this.journal.length);
}

deitch
27 May 2008, 4:14 PM
Wouldn't that surprise a user? Most people expect a boolean from is*() functions, rather than an integer.



pic_Nick
28 May 2008, 2:12 AM
I already added such a function to this class for myself

isDirty: function(){
    return this.journal.length > 0;
}

cimperia
28 May 2008, 5:23 AM
Wouldn't that surprise a user? Most people expect boolean for is_() functions, rather than an integer.

Perhaps, but the idea is that you can use it as a boolean function, as in:

if (isDirty()) ....

but you can also capture the number of rows in the journal (I use that to find the commit 'charge'):

if ( rows = isDirty() ) ...
if ( rows > 10) ....

deitch
28 May 2008, 5:25 AM
I think this is one of these cases where there is no reason not to make everyone happy.

isDirty() will return boolean
getModifiedCount() will return integer

How is that?



cimperia
28 May 2008, 5:30 AM
I must admit that I like my 'short-cut' but yes, your solution is cleaner, though the long name for the get method is maybe cumbersome.

deitch
28 May 2008, 5:50 AM
Yeah, I agree. I actually spent time trying to come up with a shorter name before posting, but failed. If you can think of a shorter one....




deitch
11 Jun 2008, 1:14 PM
1.0 RC4 has been released, and includes:

A slew of bug fixes (we don't call those "features"!)
Better journalling, so every detail of a change, including updates within a record, is kept and tracked
Sequential intra-transaction rollback


The last feature, inspired by cimperia, is particularly exciting. It allows you to call writeStore.rejectChanges(count), where count is the number of changes within the transaction to rollback. Thus, if you make 50 changes, and want to undo just the last 5, you can easily do so. Beforehand, you had to either commit the whole transaction and then make the 5 changes, or reject the whole thing and redo the first 45.

Additionally, we have added boolean isDirty() and integer getModifiedCount(), which will tell you if there are uncommitted changes and how many there are, respectively.
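In use, the new calls look roughly like this (writeStore is just whatever WriteStore instance you have):

if (writeStore.isDirty()) {
    var pending = writeStore.getModifiedCount(); // number of uncommitted changes
    if (pending > 5) {
        writeStore.rejectChanges(5);   // undo only the last 5 changes of the transaction
    } else {
        writeStore.commitChanges();    // push the whole transaction to the server
    }
}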

A great example of the new features is available from the write-store wiki page.

Download it all or check out the wiki from http://jsorm.com. If you like it, spread the word. And do try out the i18n library.

Blackhand
11 Jun 2008, 3:05 PM
Hi deitch, great work on the new release and I really like the changes.

I hate to come so soon with a bug report though =(

On the jsorm wiki writestore sample, the first test case I tried for the new undo feature didn't quite work correctly.

You can replicate this yourself by doing the following:

Double click a cell to edit, append "1". Stop editing. Edit the same cell again, append "2", stop editing. Edit the same cell again, append "3", stop editing.

Now press undo. The "3" is removed from the cell, but the grid cell is no longer marked as dirty (I think this could be the first bug, but it's not the main issue). Press undo again and the "2" is removed; this is correct. Press undo one last time and the save/reject/undo bar becomes disabled (I assume because the store's isDirty() is now false), but the "1" we appended first does not get removed.

Other test cases I tried across different cells and such seemed to work fine; it seems there's just a problem rolling back multiple modifications to the same cell.

I'm going to be using this update in production tomorrow, I'll hold out on the undo feature for now.

Thanks for your hard work.

deitch
11 Jun 2008, 4:02 PM
Hi Blackhand,

Thanks for the comments and compliments. No apologies needed for reporting bugs; I would much rather they come now than in 6 months when people are using it in production.

I will replicate it, figure it out, and then fix it. RC5 will be on its way....

deitch
12 Jun 2008, 6:23 AM
Blackhand,

This is fixed. It had to do with how the grid determines whether or not a field is dirty: it relies on the cruder modified array of Ext.data.Record. WriteRecord, which wraps Record, uses a more subtle change-by-change journal, but needs to correctly update modified so that GridView knows what to do. It updates correctly now.

I violated standard procedure for this and just rereleased it as RC4. It is on the Web page http://jsorm.com.

Nalfein
14 Jun 2008, 7:32 AM
Firstly, nice extension, keep up the good work.

I have found one issue: on lines 554 and 559 you are calling:

Ext.data.Store.superclass.remove.call(this, m[i].record);

This gives me the error: "Ext.data.Store.superclass.remove has no properties".

It would be nice if commitChanges() accepted "failure" and "success" options containing handlers to be called when that particular transaction failed or succeeded, like form.submit() does. It would also be nice if the store understood responses like "{success: false}" and called the "failure" handler.

Some questions:
1. In "write" handler i get "undefined" as the second parameter, named "o". What is it for?
2. Is possibly to easily update client data from data received from server? I need to set ID, default values fields.

deitch
14 Jun 2008, 6:18 PM
Firstly, nice extension, keep up the good work.

I have found one issue, in lines 554 and 559 you are calling:

Ext.data.Store.superclass.remove.call(this, m[i].record);
This gives me an error: "Ext.data.Store.superclass.remove has no properties"

Doh! You caught us in a nice sloppy one there. I just had a discussion with one of the frequent contributors about my adopting the Eric Raymond / Linus Torvalds "release early, release often" philosophy, in that our users are much smarter than we are. You just proved it yet again. This will be fixed in 1.0 before the final release.


It would be nice if commitChanges() accepts "failure" and "success" options containing handlers to be called if that particular transaction failed/success, like form.submit() does. If would be also nice if the store understands responses like "{success: false}" and will call the "failure" handler.

This one has gone through a lot of iteration. In the end, the problem is that the server can send back in *any* syntax it wants. Some will send JSON like what you did; some will do XML; some will do plain-text; etc. etc. The options are infinite. Sure, the store can know if the network submission succeeded, and if the response was 200 or 404 or whatever. But it cannot know which option your server chose for showing that the write itself succeeded. That is why there is a write event, which gets passed the data, and lets your handler decide if this is success or failure.



Some questions:
1. In "write" handler i get "undefined" as the second parameter, named "o". What is it for?
2. Is possibly to easily update client data from data received from server? I need to set ID, default values fields.
#1 relates to the scope of the call, and depends on the signature of the Ext Ajax call.
#2 I do not understand. Could you explain further?

Thanks

Nalfein
14 Jun 2008, 11:48 PM
(...) "release early, release often" philosophy, in that our users are much smarter than we are. You just proved it yet again.

I've recently become a fan of test-driven development. Bugs like that are caught before they reach the user. I know that JavaScript is not Java, and running in a browser brings some limitations, but I believe this method should be applied here too, somehow, especially because the Store is not a GUI component and regular unit testing would do the job.


This one has gone through a lot of iteration. In the end, the problem is that the server can send back in *any* syntax it wants. (...) That is why there is a write event, which gets passed the data, and lets your handler decide if this is success or failure.

Ok, this is a good reason, but it makes me subclass your WriteStore, because I need general 'success' and 'failure' events. It would be sufficient if 'success' were fired when no 'write' handler returns false, and 'failure' were fired otherwise. I also need to set these 'success' and 'failure' handlers per transaction (like in BasicForm.submit()), because my state object is used in multiple modules, and when a module (an edit form, for example) needs to save data it needs to set up events only for that single transaction (in order to show error messages in that particular form). After the transaction the form is usually disposed.
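Something along these lines would be enough for me; it is just a rough sketch on top of the current events, using single-shot listeners (the helper name and arguments are mine):

// rough sketch: per-transaction callbacks built on the existing events
function saveOnce(store, onSuccess, onFailure, scope) {
    store.on('write', onSuccess, scope, {single: true});
    store.on('writeexception', onFailure, scope, {single: true});
    // note: whichever listener does not fire should really be removed with un() afterwards
    store.commitChanges();
}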




2. Is possibly to easily update client data from data received from server? I need to set ID, default values fields.
2 I do not understand. Could you explain further?

In my case, record unique identifiers are generated on the server. My JavaScript application creates a record without setting the ID, and this incomplete data goes to the server. It would be great if the server could return complete records (with IDs) and I could easily update the instances in the Store.

deitch
15 Jun 2008, 7:50 AM
I've become recently a fan of test-driven development. Bugs like that are caught before they reach the user. I know that JavaScript is not Java, and running in browser brings some limitations, but I believe that this method should be applied here too, somehow, especially because Store is not a GUI component and regular unit testing would do the job.

I am, too. On the jsorm site, I have another library, i18n, which went through extensive testing, mainly through jsunit. I tried a different approach here, mainly because it grew up on its own (i.e. was designed by input from this and other fora). I also have not figured out how to fully unit test something like this with both ends.



Ok, this is a good reason, but makes me to subclass your WriteStore, because I need events like general 'success' and 'failure'. It would be sufficient if 'success' is invoked if 'write' handlers don't return 'false'. The 'failure' event would be fired otherwise. I also need to set this 'success' and 'failure' handlers per transaction (like in BasicForm.submit()), because my state object is used in multiple modules and when a module (an edit form, for example) needs to save data it needs to set up events only for this single transaction (in order to show error messages in this particular form). After the transaction the form is usually disposed.
Interesting thought, but I don't understand why it is any cleaner than just having the beforewrite, write and commit events. Right now, if the write handlers do not return false, it just goes on to the commit, which is then fired. Isn't that the same, but cleaner?


In my case, record unique identifiers are generated on the server. My JavaScript application creates a record without setting the ID and this incomplete data goes to the server. It would be great if the server can return complete records (with ID) and I can easily update instances in the Store.
So, if the server sends 3 records with IDs 10, 11 and 12, these records are modified on the client, and commitChanges() is called, you want to ensure that the sent records include the IDs 10, 11 and 12 in the modified records. Is that correct?

If so, I understand. I had a similar request from Johughes on the jsorm forum. Please take a look and tell me if this is the same: http://jsorm.com/forum/showthread.php?t=5

Nalfein
15 Jun 2008, 8:16 AM
Isn't that the same, but clean?

Nothing is called when 'write' returns false. I had to add another event that merges this one scenario and 'writeexception'.


So, if the server sends 3 records, each of ID 10,11,12. These records are modified on the client, and commitChanges() is called. You want to ensure that the sent records include the IDs 10,11,12 in the modified records. Is that correct?No, I create a new record on the client and fill only some fields, particularly I do not fill ID, because it is generated on the server. I call commitChanges() and this incomplete record goes to the server. The server computes ID, fills missing fields and returns a complete record. Then I would like to update the record on the client using data returned by the server.

deitch
16 Jun 2008, 5:28 AM
Nothing is called when 'write' returns false. I had to add another event that merges this one scenario and 'writeexception'.
OK, now I see what you want: if any of the write event handlers returns false, fire a failure event. Added to the 1.1 list.


No, I create a new record on the client and fill only some fields, particularly I do not fill ID, because it is generated on the server. I call commitChanges() and this incomplete record goes to the server. The server computes ID, fills missing fields and returns a complete record. Then I would like to update the record on the client using data returned by the server.
Interesting. So we create a new record on the client. The client submits it to the server. The server, being the centralized manager it is intended to be, is the only true authority for the real ID of each record, and thus may actually return the IDs for the records. Now, you want the Store to read that response and populate the missing fields, particularly the ID field, so that the client and server are in sync with respect to ID numbers. Is that correct?

It creates an interesting problem. Right now, the client sends either the journal or the entire data set back to the server. The server sends a response - any response in any format it so chooses, digital pig latin, for all the client cares. That response is passed to the application by the WriteStore via the 'write' event handler.

At this point, the data within the response is in the hand of the application through its handler. The WriteStore does not have any idea of the structure of the response, nor does it want to... because then it would get into the nitty-gritty of each server having its own format for success/failure (text, xml, json, pig latin, etc.). What you are saying is that the response will *also* contain updated record info, that we now want the Store (and its reader, of course) to process to update records. But the response can include lots of other information, too.

The simplest way - but least elegant - is to have the response *not* contain the updated records, but rather after the commit event is sent, ask the Store to reload. The server has all the data by now. This creates barely any additional network traffic, and would work right now, as is. Admittedly, though, this is inelegant.

A second option is for the commit event handler to take the correct information out of the response text and call loadData(), which will get passed to the reader. This requires zero additional network traffic, but is even less elegant.

I am struggling to find a truly elegant solution to this. Perhaps an automatic version of #1 above, i.e. an option to commitChanges() that causes the store to reload after the send? But that could get messy. What if you only update 3 records out of 500? You don't want to reload all 500. There are ways to restrict the reload to just a few records, but they require an understanding of the query to the server.
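For illustration, #1 could be as simple as this (purely a sketch; it assumes the 'commit' event fires once the server has accepted the changes):

writeStore.on('commit', function(store) {
    // the server is now authoritative for IDs and defaulted fields,
    // so re-sync by reloading with the last-used load options
    store.reload();
});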

Thoughts?

cimperia
16 Jun 2008, 8:08 AM
I have implemented a solution to this. It's only partial as far as Nalfein's wishes are concerned, because it does not populate any default fields, only the row id. This is the typical situation where the table primary key is maintained (usually auto-incremented) on the database server.

I am away from home at the moment so I cannot post the code, but in a nutshell the implementation follows these lines:

WriteStore sends a JSON array to the server; the response from the server has the same structure as the one sent from the client, but the database id has been populated for new records, and only newly created records are sent back from the server.

WriteStore has been modified so that a new field has been added to the JSON sent to the server:

The data structure has been changed to:


data[i] = {
    rid: record_id,
    type: entry.type,
    data: details
}

rid stores the Ext record id, not the database (or row) record id; therefore even if the database key is null, as it will be on creation of a new record, Ext will provide a record id. (There's an Ext config that allows setting the record id to the record's primary key when it exists, i.e. for loaded records, but that's not an issue.)

Here’s a JSON sample for a new record sent to the server:


data "users":[{"rid":1010,"type":"c","data":{"id":null,"user_name":"john",….

The table primary key is

Blackhand
16 Jun 2008, 9:59 PM
I'm supposed to be subscribed to this thread, but I'm not getting updates O_o. Anyway, thanks for the fix deitch.

On to the debate of handling new record IDs.

When working in grids and adding new rows, I had this problem where I would add a record into the store with a set of default values. Then, naturally, the user would want to edit some of these defaults, and this would create a bunch of updates in the store's journal.

Back at the server, obviously only the create will succeed because there's no way to tell which record the updates should apply to.

My way around this was to make a few modifications to the write store code, to condense updates/creates/deletes into one journal entry per record only if necessary.

For instance, if a user adds a record, then deletes it and hits save, nothing gets sent to the server, because effectively no change was made. If a user creates a record, then updates it and hits save, only a create journal entry is sent, but the values on that journal entry will be those in the latest update.
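For what it's worth, the condensing boils down to something like this (a simplified sketch of the idea rather than my actual code; it assumes each journal entry carries a type and its record, and the type names here are just illustrative):

function condenseJournal(journal) {
    var byRecord = {}, result = [], id, i, entry, prev;
    for (i = 0; i < journal.length; i++) {
        entry = journal[i];
        id = entry.record.id;
        prev = byRecord[id];
        if (!prev) {
            byRecord[id] = entry;                 // first entry seen for this record
        } else if (prev.type === 'create' && entry.type === 'delete') {
            delete byRecord[id];                  // added then deleted: send nothing at all
        } else if (prev.type === 'create') {
            // added then updated: keep the create; the record already holds the latest values
        } else {
            byRecord[id] = entry;                 // otherwise the latest change wins
        }
    }
    for (id in byRecord) {
        result.push(byRecord[id]);
    }
    return result;
}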

As for reloading the store when the server responds, I always thought this was pretty standard behavior. I would think it a bit risky to always trust that the client data is remaining in sync with the server-side data, particularly in multi-user environments. Paged data helps with the problem of having to reload 500+ records. Success or failure, I reload the store, which keeps the client-side data fresh.

Edit: Also deitch, I have been using a custom "AspNetAjaxProxy" as my proxy all over my projects. This proxy allows you to use ASP.Net web services and page methods exposed to javascript as your proxy. eg.


proxy: new Ext.ux.AspNetAjaxProxy({
    proxyObject: PageMethods,
    proxyMethod: PageMethods.GetItems,
    errorHandler: this.errorHandler
}),

I normally add more "dynamic" parameters on before load events like such:


this.writeStore.proxy.methodParams = {
publishDate: this.getPublishDate()
};

When implementing your store, I obviously had the problem that the proxy did not have the additional update function. I have since added it and thought this proxy may be useful to ASP.Net developers who wish to use your store.

Should I attach it here? Send it to you so you can add it along with the other proxies? Or create a new thread for it?

deitch
17 Jun 2008, 3:51 PM
I'm supposed to be subscribed to this thread, but I'm not getting updates O_o. Anyway, thanks for the fix deitch.

I am supposed to be as well, but it is intermittent, and definitely not in my spam filter. I didn't get your post from early today, just happened to check my subscriptions now.

Let me read through your post and cimperia's and nalfein's, and come up with a good solution that takes the best of all. No matter how you play it, we have:


Original Ext.data.Store which just loaded read-only from the server
WriteStore, which allows you to push to the server (journal or entire data set)
The next iteration, which is the desire to keep data more in sync between clients and server, including all of the posts from you, cimperia, nalfein, etc.


The third item on the list is the crux of what I want to see in 1.1. I am just awaiting some confirmations of bugs to complete the 1.0 release, and I can get started on 1.1.

deitch
17 Jun 2008, 3:58 PM
Edit: Also deitch, I have been using a custom "AspNetAjaxProxy" as my proxy all over my projects. This proxy allows you to use ASP.Net web services and page page methods exposed to javascript as your proxy. eg.


proxy: new Ext.ux.AspNetAjaxProxy({
proxyObject: PageMethods,
proxyMethod: PageMethods.GetItems
errorHandler: this.errorHandler
}),

I normally add more "dynamic" parameters on before load events like such:


this.writeStore.proxy.methodParams = {
publishDate: this.getPublishDate()
};

When implementing your store, I obviously had the problem that the proxy did not have the additional update function. I have since added it and thought this proxy may be useful to ASP.Net developers who wish to use your store.

Should I attach it here? Send it to you so you can add it along with the other proxies? Or create a new thread for it?

Tough call. It primarily has meaning as another proxy for use with WriteStore. I am happy to host it on the domain I have, where I publish write-store and i18n, and even set up a separate thread for it in the fora there. You just need to be very explicit on the post how it is licensed. write-store and i18n are cross-licensed: GPL and commercial. Anyone can download and use either as long as they abide by the GPL, or buy a commercial license.

I would be more than happy to integrate it directly into write-store, which is not a problem for the GPL, but creates issues if someone buys a commercial license. After all, this is not my code, and I do not have the right to sell your code.

In the meantime, I did create a user extensions forum under write-store on jsorm.com

deitch
20 Jun 2008, 9:38 AM
May I ask people here to take a look at http://extjs.com/forum/showthread.php?p=184775

deitch
30 Jun 2008, 12:00 PM
There have been some requests for condensed write mode, wherein multiple changes to a single record in a single transaction will create a single change record when transmitted to the server.

Would people kindly take a look at http://jsorm.com/forum/showthread.php?p=20#post20 to see if it is still relevant?

Thanks.

deitch
1 Jul 2008, 6:45 PM
I have been hard at work getting 1.1 in place, including many of the things people have asked for on this forum and on http://jsorm.com/forum. The only remaining piece is the update of information from the server after a commitChanges().

Here is the 1.1 method.

Call commitChanges()
Data is sent to the server in one of update, replace or condensed modes (yes, I have condensed working). This data includes cimperia's recommended rid field (if in update or condensed mode), as a flag indicating which record on the client a particular entry refers to. The data also includes the appropriate ID for the record, if id was included in metaData.
Data is sent back from the server. As now, the data is passed through the 'write' event handlers to see if the results are OK, i.e. application-level validation of success/failure.
Assuming all 'write' handlers are good, commit is completed client-side and 'commit' event is sent.


Additionally, per-commit success/failure handlers are supported.

What I would like to add is the following. A new flag exists: updateResponse on the store (as a default) or options.update (as a per-commit override). This can be set to one of the following (a usage sketch follows the list):

none: no updates occur
update: the response data is passed through the reader again, the records are extracted, and any record fields that differ are updated on the client
replace: the response data is passed through the reader again, and all the records in the store are replaced by the set from the server. The rid is used as a reference. Any record from the server that has no rid is added as new.
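As a usage sketch (proposed API only; the option names may still change before release):

// per-store default (proposed)
var store = new Ext.ux.WriteStore({
    // ...reader, proxy, updateProxy as usual...
    updateResponse: 'update'    // one of 'none', 'update', 'replace'
});

// or a per-commit override (proposed)
store.commitChanges({ update: 'replace' });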


The weakness here is that it is hard to remove records based on data from the server. Add and update are easy, but remove is hard. I guess any record that has all fields as null, or no fields, can be considered a delete.

An alternative is to have the server send a journal of changes to make, but that requires us to assume the server knows the state of the client for every client, a very dangerous assumption to make.

I look forward to feedback.


wiznia
2 Jul 2008, 6:38 AM
Hi deitch!
I can't believe it; just yesterday I started using the write-store in an application I'm developing and ran into the problems you fix here (I wrote a mail through the contact-us link on your page, but it bounced back...)
First of all, excellent changes: the new rid (I had actually added it myself yesterday) and the new condensed mode. I thought the previous version had a bug, so I post-processed the data on the server to condense it.
The question is, where can I download the latest version??? I can't find it anywhere!
I have to keep developing, but if the new version with these changes is ready (or almost), I would appreciate it if you posted it somewhere.
Thank you!

Ionatan Wiznia

deitch
2 Jul 2008, 7:01 AM
Ionatan,

Glad to hear it is helping you. I am concerned by the bounced mail. I will contact you offline to find out what happened.

I just finished putting the changes on the http://jsorm.com Web page. The new version is available as 1.1 alpha, and the wiki and sample pages have been changed to reflect the new features.

I am still working to get my head around the rid usage. If you added it, would you mind sharing what you did?



wiznia
2 Jul 2008, 7:42 AM
Thank you very much!
My change to the rid was pretty much the same as the one that cimperia proposed, and the one that is in the new version, to add the entry.record.id in the data variable of the writeRecords function.
The main problem was that I needed the id on the server to know what record I had to update or delete (of course create isn't a problem since it's new).

I'll give the new version a try and post again later.

deitch
2 Jul 2008, 8:19 AM
My pleasure.

So the rid is similar to what I am working with, and what cimperia proposed, which makes sense. The only challenge is in getting it processed. I do not want write-store to process the reply, since it could be in json or xml, etc., so I want to pass it back to the original reader, which knows how to process it. The issue is that the native reader does not understand anything other than the metadata and the root element given in the metadata. Thus, I cannot (as of yet) extract the rid to know which record to update, or how.

Still thinking...



wiznia
2 Jul 2008, 8:59 AM
Hi again! I'm using the new version, but I think there's a problem in condensed mode (I don't know if it happens in other modes too).
The problem is the index property of an entry in the journal of the store. You use the index to identify the record, but that index can get changed all the time.
A simple example, a store with 2 records and you do:
store.remove(store.getAt(0));
store.getAt(0).set("name", "newName");

Both calls end up with an index of 0, but they should be journal entries for different records, so you only receive one of them on the server.
Why are you using the index? Why not compare the record id, or better, the object directly?

Besides, there's an issue with store.data.items: when you have a filtered store, the filtered-out records don't appear in that array (only the unfiltered ones do, of course), so an index into that array is not a good idea...
It seems that it's used only for grouping changes together with the condensed mode, right?
I tried replacing entry.index with entry.record.id and it seems to work, but it needs more testing; besides, I think it's better to compare the record object than the record id.
What do you think?
I'm going on a vacation tomorrow, I come back in a week, so we'll talk then.

Oh, and about handling the responses, I was about to add some post processing of the response from the server in the writeStore, but you are right, it would be nice that the reader could handle that. I'll give it a thought.

Bye!

deitch
2 Jul 2008, 12:57 PM
Do you mean that the calls to store.getAt(0) below are a problem? If so, then there really isn't a problem, as that is how Ext.data.Store is supposed to work. If you mean how condensed mode condenses the journal to send back to the server, that is a good point. It appears within WriteStore as follows:


// we have no entry yet, so this is a new one, record it
recs[entry.index] = i;


In this case, using the index from the journal entry may not work entirely, because it can change over time. We still need the index to use in the journal, as we need to know where in the set it is for quick access.

I will change it to use the record ID, which is unique.
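In other words, the bookkeeping becomes roughly (sketch only):

// sketch: key the condensed entries by the unique record id,
// not by the journal index, which can shift as records come and go
recs[entry.record.id] = i;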


I think there's a problem in condensed mode (I don't know if it happens in other modes too).
The problem is the index property of an entry in the journal of the store. You use the index to identify the record, but that index can get changed all the time.
A simple example, a store with 2 records and you do:
store.remove(store.getAt(0));
store.getAt(0).set("name", "newName");

Both calls end up with an index of 0, but should be journal entries for different records, so then you only receive one of them.
Why are you using the index? Why not compare the record id or better the object directly?
Besides there's a thing with store.data.items, when you have a filtered store, the records don't appear in that array (only tha un-filtered, of course) so the index in that array is not a good idea...
It seems that it's used only for grouping changes together with the condensed mode, right?
I tried replacing entry.index with entry.record.id and it seems to work, but it needs more testing, besides I think it's better to compare the record object than the record id.
What do you think?
I'm going on a vacation tomorrow, I come back in a week, so we'll talk then.


Yes, right now you can update the store from the commit call, but that creates an issue. You are now changing the records again, which leads to a possible commit or reject, etc. From within the store, we can easily do it and make it clean. It is also a pity to have to put that code in the application when it would be easier if the store did the right thing. The difficulty I still face is in relying on the reader to get the records with enough information for us to associate them with the right one.



Oh, and about handling the responses, I was about to add some post processing of the response from the server in the writeStore, but you are right, it would be nice that the reader could handle that. I'll give it a thought.

Bye!

wiznia
10 Jul 2008, 5:27 AM
I meant that the way condensed mode condenses the data to send back to the server is the problem. The entry.index is not an ID for the record, since I can add several records to the store at index 0, but they are all different records. The way it's handled now, the journal entry would be overwritten every time, sending just one record to the server. I made some tests, and by changing entry.index to entry.record.id it works.

Regarding updating the store with data from the server, it's tricky: you would have to send back from the server the original rid; that way you can do a store.getById(rid) and then process the changes in the record. What I did is have the server send back exactly the same records, but with one field more, called oldID. I added it to the write event, but it should be added inside the WriteStore. Here it is:




this.on("write", function(store, o, response) {
var response = this.reader.read(response);
if(response.success) {
for(var i=0; i < response.records.length; i++)
{
var rec = this.getById(response.records[i].json.oldID);
if (rec) {
rec.id = response.records[i].data[this.reader.meta.id];
for (var j in response.records[i].data) {
rec.data[j] = response.records[i].data[j];
}
}
else {
this.add(response.records[i]);
}
}
return true;
}
});
The thing is that the reader doesn't know the oldID field, so I get it from the json property. Besides, this only handles updates and inserts, not deletes. I wasn't sure how to handle deletes, but it shouldn't be that hard to add that functionality.
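For deletes, one rough option (untested, and the all-null convention is just the one suggested earlier in the thread) would be a small helper like this:

// hypothetical: a record echoed back with every data field null is treated as a delete
function isDeleteMarker(record) {
    for (var f in record.data) {
        if (record.data[f] !== null) {
            return false;
        }
    }
    return true;
}
// inside the loop above: if (isDeleteMarker(result.records[i])) { this.remove(rec); }
// note that remove() would itself journal the change, so these server-driven
// removals would need to be kept out of (or cleared from) the journal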

Well, that's it, what do you think? If I want to add it to the write store, where is the best place to put it? In the writeHandler method, inside the if(o.update || this.updateResponse)?
Oh, and another thing: in that method, everywhere you use "o.update" or "o.success" or "o.failure", you have to change it to "o && o.xxx", because if you do a commitChanges() without any parameters it throws an error.

Thank you!

deitch
11 Jul 2008, 3:41 AM
Wiznia,

Yes, agree wholeheartedly. The 1.1B1, which I released yesterday, already has this change.

As for the rest, working on it, and feedback to follow herein...





wiznia
14 Jul 2008, 11:40 PM
Wow! Deitch, I keep amazing myself when I see that we write almost the same code, but separately. I just took a look at the new version; here is the part that I wrote, it goes at line 618 of the latest version:

var newRecords = this.reader.read(response);
if (newRecords.success) {
    for (var i = 0; i < newRecords.records.length; i++) {
        // TODO: this only works for json and xml readers, is there a better way?
        var oldID;
        if (newRecords.records[i].json) {
            oldID = newRecords.records[i].json.oldID;
        }
        else {
            oldID = Ext.DomQuery.select("oldID", newRecords.records[i].node);
            oldID = oldID[0].childNodes[0].nodeValue;
        }

        var rec = this.getById(oldID);
        if (rec) {
            rec.id = newRecords.records[i].data[this.reader.meta.id];

            // If I don't re-key the map, then getById doesn't work afterwards...
            delete this.data.map[oldID];
            this.data.map[rec.id] = rec;

            for (var j in newRecords.records[i].data) {
                rec.data[j] = newRecords.records[i].data[j];
            }
        }
        else {
            this.add(newRecords.records[i]);
        }
    }
}


As you can see, it's pretty much the same as what you did (with some minor differences). The only two big differences are:
1) I tried to get the old rid from the reader (on the server I return an extra field called oldID). The problem is that the reader doesn't read fields that aren't in the definition of the record (in this case, oldID). So I tried a workaround (not pretty, and it only works for json or xml readers): getting the data from the json property (for json readers) or using a DomQuery on the node property (for xml readers).
2) I had problems when updating the ids of records, so I had to add the lines that modify the map object.

For the rest, it's pretty much the same; you delete the record in case everything is null, which is ok.
Oh, by the way, the line 746 that you added solves the problem I mentioned in my last message (an error when you do a commitChanges() without any parameters), right?

What do you think about the solution for getting the rid?? Any better ideas?

deitch
15 Jul 2008, 6:40 PM
Hi Wiznia,

Yes, that change solves the commitChanges() without parameters error.

In terms of getting the old record ID, I have not come up with anything earth-shattering yet. The key here is not to change the reader itself, since someone may use a completely different reader, not one we have provided, yet we still have to be flexible enough to do it. Still thinking and open to better ideas....



wiznia
16 Jul 2008, 3:47 AM
Hi Deitch!
You are right about the reader. I made another solution; it's still not earth-shattering enough, but....
What I did is clone the recordType the reader uses (not quite simple), add a field (oldID), read the records, and change the recordType back to the original.
Look at the attachment and tell me what you think (line 624).
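Roughly, the trick looks like this (a simplified sketch of what the attachment does, shown for the json reader; the reader caches its extractor functions in ef, so that has to be reset too):

// sketch: temporarily widen the reader's recordType so it picks up oldID
var origRecordType = reader.recordType;
var origEf = reader.ef;
var fields = [];
origRecordType.prototype.fields.each(function(f) { fields.push(f); });
fields.push({name: 'oldID'});
reader.recordType = Ext.data.Record.create(fields);
delete reader.ef;                        // force the extractors to be rebuilt

var result = reader.read(response);      // each record now carries data.oldID

reader.recordType = origRecordType;      // restore the original reader state
reader.ef = origEf;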

deitch
26 Jul 2008, 6:56 PM
Hi Wiznia,

I should be able to take a look at it sometime this week.





cdomigan
10 Aug 2008, 8:57 PM
Hi Deitch, great extension :-)

Is it possible to send parameters along with the write data? How would I, for instance, get my baseParams to be included on writes?

Cheers,

Chris

deitch
11 Aug 2008, 3:22 AM
Hi Chris,

Thank you kindly. I hope you are located in the EU or Asia, as this post was written at 12:57am my time (East Coast of America).

I had thought they were already sent, but apparently not. I will do a little firebugging today and determine what is missing, get it into 1.1. As it is, I am behind in getting 1.1 beyond beta, as I have been struggling with the issues raised previously, i.e. modifying data based on responses to writes.

My apologies on the delay in approving you on the other forum; the amount of spam registrations has been astonishing.

Avi



deitch
11 Aug 2008, 4:47 AM
Chris,

What you are asking for is fairly straightforward; I am working it into 1.1. However, there are two parts to it:


Call-by-call params. This is fairly easy. Right now, the two params sent with the write requests are 'data', with the actual data sent, and 'mode', with the write mode (update, replace, etc.). It is straightforward to add arbitrary options to these. When calling commitChanges(), there is an options parameter to the call. I am adding a 'params' member to those options, which should be an object with key-value pairs. Each key will be treated as a new param, each value as the value of that param. Any key that overrides a privileged one (i.e. data and mode) will be ignored.
baseParams. This is a little trickier. I am hesitant to include baseParams on every call, as the load() call does, since many people may not want those baseParams on the update() calls, just the load() calls. It is fairly trivial on each call to do options.params = store.baseParams, although it is inelegant. As an alternative, I propose creating a store option parallel to baseParams called updateParams: same idea, but it applies only to updates. If you want them the same, you only need to set store.updateParams = store.baseParams once (a brief sketch follows).
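In sketch form (proposed names again; the values are made up):

// per-commit params (proposed for 1.1)
store.commitChanges({
    params: { publishDate: '2008-08-11' }   // arbitrary key/value pairs; 'data' and 'mode' stay reserved
});

// write-only equivalent of baseParams (proposed)
store.updateParams = store.baseParams;      // or any separate object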




deitch
11 Aug 2008, 5:13 AM
There are two open threads here:

optional params for the updates: this is complete in 1.1B2, available on the Web site http://jsorm.com. This is the last feature I am adding to 1.1, and am looking only for bug fixes at this point.
updating data in the store based on responses to the update. I am moving this to 1.2, and am opening a more detailed discussion on the jsorm.com forum. I want to figure out how to do this before the month is out.


Thanks.

deitch
11 Aug 2008, 6:37 AM
Wiznia,

What you have done is rather interesting. If I understand correctly, the entire change is lines 624 to 633. You have modified the reader to temporarily have a new recordType that includes a field "oldID", thus tricking it into processing the oldID (what we were calling the rid). You are right, it is not implemented in the most elegant (messing around with reader.ef) or efficient (doing it with each update) fashion, but this is the basis of a nice solution. I will modify it and put it into 1.2 Alpha. I still have some concerns. Please check them out at http://jsorm.com/forum/showthread.php?t=6

I want to hear people's thoughts on these concerns.



cdomigan
12 Aug 2008, 2:34 PM
Hi Deitch

Yes I'm in New Zealand - hence the late post :-)

Having a separate updateParams sounds like the way to go, agreed.

Great work!! :)

Chris

deitch
12 Aug 2008, 3:39 PM
Grazie, Merci, Arigato, Todah,

Now, if I can get some better adoption on the i18n library.... I have been meaning to add several other calendars in addition to Gregorian, but not until I see higher adoption rates. I have also sold some commercial licenses on both, which is nice.

Take a look at 1.1B2.



wiznia
14 Aug 2008, 2:34 AM
Yes, that's why I said it's not earth-shattering enough: it's not "nice" to play with the ef, but it does the trick. I agree it can be optimized, e.g. by creating the new record type with the rid in the constructor just once.
I can add this next week if you want, and post it here again. I will also add a configuration parameter to set the name of the field (what you've been calling rid and I've been calling oldID).
When I have it ready I'll merge the differences with your new version, add this, and post it.
The problem you mention in your post on the jsorm page, of server and client getting completely out of sync for other records that weren't in the update, is, I think, outside the concerns of the store. You can implement something on the server to keep them in sync, and then use the store to pass the updates. But I think this is out of the scope of this extension.
Like you said:

If the client has records 1-10, and sends updates only for 2,3,4, and the server updates them, what if 7,8,9 are now different server-side? If the store can receive the updated records, it's up to the server to decide which records to send.

cdomigan
29 Sep 2008, 1:44 PM
I'm having an issue where the store's journal is *not* being reset upon creating a new instance. This is very strange.

I am instantiating a new WriteStore - the store itself is empty, but upon inspection by Firebug, the journal is not flushed and is remembering previous commits etc.

What would I be doing wrong here?

Chris

deitch
7 Oct 2008, 6:31 AM
Chris,

I will take a look at it. Can you give me some code to replicate it?

deitch
19 Oct 2008, 6:20 PM
Chris,

I still haven't seen an update post. Can you post some sample code where you see the problem? I would like to figure it out and fix it if it is an issue, but I need the sample.

Thanks.



cdomigan
3 Nov 2008, 5:29 PM
I'm seeing this every time:

Under store->data->items[0] there will be a journal array that starts off empty, but every time I create a new store and do some processing, the old store's journal seems to point to the same place, so they share journal entries.

It's as if all the journal entries are being held in a static or global var that every subsequent new instance of the store is pointing to.

I can create a brand new WriteStore (with new Readers, Proxies etc), then immediately inspect it under FireBug and it will show a journal with entries already in it!

My store code looks like:

var relationStore = new Ext.ux.WriteStore({
proxy: new Ext.data.HttpProxy({
url: 'sources/listRelationships'
}),
updateProxy: new Ext.ux.HttpWriteProxy({
url: 'sources/setRelationships'
}),
reader: new Ext.ux.JsonWriterReader({
root: 'relationships',
id: 'id',
fields: [
{name: 'id', type: 'int'},
{name: 'relation_source_id'},
{name: 'relation_record_id'},
{name: 'relation_table_display'},
{name: 'relation_record_display'},
{name: 'type'}
]
}),
writeMode: Ext.ux.WriteStore.modes.condensed
});

Then all I'm doing is relationStore.add(newRecord).

But, and this may be where the issue is, I'm using var newRecord = new store.recordType({}) to create a new record based on the record field definition of what is already in the store. From looking at the code, it seems each record has its own journal data, so maybe this is where that information is being copied across?

Chris

wiznia
5 Nov 2008, 1:55 AM
Hi cdomigan, I had the same problem. I think it was when I upgraded to Ext 2.2 or Firefox 3, or something.
I'm still not sure why it isn't working, but the problem is the initialization of objects and arrays. For example, in the WriteRecord, the initialization of journal: [] and modCount: {} sits inside the extend call but outside any method. Move this into the constructor function and it should work. Do this for every array or object that you see initialized this way.
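In other words, assuming the class is declared roughly like this, the shared state comes from the prototype (sketch):

// problem: an array or object initialised on the prototype is shared by every instance
Ext.ux.WriteRecord = Ext.extend(Ext.data.Record, {
    journal: [],     // one array, shared across all records
    modCount: {}
});

// fix: give each instance its own copies in the constructor
Ext.ux.WriteRecord = Ext.extend(Ext.data.Record, {
    constructor: function() {
        this.journal = [];
        this.modCount = {};
        Ext.ux.WriteRecord.superclass.constructor.apply(this, arguments);
    }
});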
I used this class in one project and made many modifications, when I have a little more time I will clean it up and upload it here.
I hope this helps you.

Ionatan

Eric24
5 Nov 2008, 9:07 AM
Great extension deitch! (I'm playing with it now, and if it works as expected, you'll sell another commercial license).

A question: For a "true" REST interface, doing a "replace" using an HTTP PUT seems the most appropriate way of updating an object. Do you support doing a PUT of the JSON data in addition to the existing POST?

--Eric

Eric24
5 Nov 2008, 9:51 AM
Also, what about support for GroupingStore?

PS - I went ahead and created Ext.ux.GroupingWriteStore by extending WriteStore using the exact same code as the standard Ext GroupingStore. If there's a better way, please let me know.

Eric24
5 Nov 2008, 10:17 AM
It's working like a charm (haven't tried PUT yet)!

One more question: Is there a way to control what record fields are returned to the server on a commit? For example, I have certain fields in my records that are irrelevant on an update, and I'd like to eliminate them from the update, possibly by specifying an array of field names to include or omit as part of the commitChanges() call?

deitch
5 Nov 2008, 3:12 PM
Chris,

There should be no problem using this.recordType({}) to create a record. Although it is not listed in the ExtJS API at all, recordType does, indeed, store the output of Ext.data.Record.create(config), and should be similarly usable.

Can you give me a bit more info? When you first create the one store you showed me, is there already journal data? Or is it that you create one store, do some work, then create a second, and the second has journal info from the first? I am still not 100% clear on how to recreate the issue. When I create a store and add a new record, I see no issues. There is one journal entry in the store, specifically the creation of the new record (which is what should be there). Starting from scratch, how do I get the history that should not be there?

Thanks.

deitch
5 Nov 2008, 4:14 PM
Eric,




Great extension deitch! (I'm playing with it now, and if it works as expected, you'll sell another commercial license).

Thank you. Always nice to see people benefit from one's work (and I look forward to the commercial license).


A question: For a "true" REST interface, doing a "replace" using an HTTP PUT seems the most appropriate way of updating an object. Do you support doing a PUT of the JSON data in addition to the existing POST?

Yes, in a true restful interface, POST would update the existing data, while PUT would overwrite/replace data. Thus, in replace mode, it would seem to make sense to use PUT.

Question for the forum: how do we implement this? Options:

Make all replace modes PUT, others POST
Leave all replace modes POST, have an override config. This will be part of the creation of the updateProxy, i.e. HttpWriteProxy, something like replaceMode: PUT or the like

My hesitation with #1 is that it forces everyone to use PUT. Nice and RESTful, but it will break existing installations, and not everyone knows how to work with PUT or has the flexibility to change the server-side.

My hesitation with #2 is that it is one more config param (but not so terrible).
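If option #2 wins, the proxy side could be as small as this (purely illustrative, not the current HttpWriteProxy code; replaceMode is a made-up config name):

// hypothetical sketch: choose the HTTP method from a replaceMode config
function sendWrite(proxy, mode, jsonData, callback) {
    Ext.Ajax.request({
        url: proxy.url,
        method: (mode === 'replace' && proxy.replaceMode) ? proxy.replaceMode : 'POST',
        params: {mode: mode, data: jsonData},
        callback: callback
    });
}

// e.g. sendWrite({url: 'sources/setRelationships', replaceMode: 'PUT'}, 'replace', json, fn);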

Looking forward to feedback.

wiznia
6 Nov 2008, 1:37 AM
I thought of adding PUT to the class, but the problem I saw is that the store can have any type of changes (delete, create, update) in the same server call... So, what method do you use?

deitch
6 Nov 2008, 6:12 AM
Ionatan,

You have a good point. Essentially, what we are doing is not REST at all. If it were REST, then each record would be updated with its own request. Since we are bundling the updates together, whatever the format - replace, update or condensed - this is not RESTful.

So, since this is not really RESTful, is there any real interest in having a PUT option? Or am I missing something here about how it could be considered RESTful?

Avi



deitch
6 Nov 2008, 6:48 AM
I need to think this one through, and would appreciate the help of people on the forum.

When you load a record, e.g. using JsonReader, the only data loaded in the record is that which was specified when the record was created. For example, if you specify


myRec = Ext.data.Record.create([
{name: 'firstname'},{name: 'lastname'}
]);


But your json sent is


{recs:
[
{firstname: 'Eric', lastname: '24', middlename: 'I do not know'}
]
}


The 'middlename' field will be ignored. Does it make sense to do it in reverse, i.e. to only transmit those fields that are in the record definition?
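For illustration, filtering a record down to its declared fields before transmission is simple enough (a sketch only, not something the library does today):

// sketch: keep only the fields declared in the record definition
function declaredData(record) {
    var out = {};
    record.fields.each(function(field) {
        out[field.name] = record.get(field.name);
    });
    return out;
}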

The issues are:
1) Does this apply only in replace mode, or also in update mode, where the journal is transmitted?
2) Does this break older stuff?
3) Does this really make sense? Isn't it always easier to ignore data transmitted than to not have it? This is the philosophy behind the Reader: send me everything, I will ignore what I do not want to have.

I strongly lean towards #3, but looking for input.




deitch
6 Nov 2008, 6:59 AM
Also, what about support for GroupingStore?

PS - I went ahead and created Ext.ux.GroupingWriteStore by extending WriteStore using the exact same code as the standard Ext GroupingStore. If there's a better way, please let me know.

I have no objection to it in principle. The issue is that Ext.data.Store now has three children - SimpleStore, JsonStore, GroupingStore. I am sure more are on the way. How do we avoid having branches and children for all of these?

I had an interesting different idea. One of the issues in WriteStore was the Record itself. As several people noted, Ext.data.Record has a journal, but not a great one, and it became difficult to do atomic roll-backs. For example, if you changed a field and then changed it again, you could not go back two steps within a single transaction rollback. For obvious reasons, I did not want to recreate Ext.data.Record entirely, nor to require something non-standard for it, yet I needed a different object. Enter Ext.data.WriteRecord. However, rather than require its usage, I used the decorator pattern, which is really well-suited to JavaScript. Every time an Ext.data.Record is added to WriteStore, it is decorated with an Ext.data.WriteRecord. It is transparent to everyone else, yet adds all the features needed.
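In sketch form, the decoration amounts to something like this (heavily simplified; the real WriteRecord tracks considerably more than single field changes):

// sketch: decorate a plain Ext.data.Record with a per-record journal
function decorateRecord(record) {
    if (record.journal) {
        return record;                       // already decorated
    }
    record.journal = [];
    var originalSet = record.set;
    record.set = function(name, value) {
        // remember the previous value so a multi-step rollback becomes possible
        this.journal.push({field: name, oldValue: this.get(name)});
        return originalSet.call(this, name, value);
    };
    return record;
}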

Would the decorator pattern work here? Rather than using


myStore = new Ext.ux.WriteStore(config);

(although you are welcome to continue doing that), we would have a decorator that could be applied to anything that is an Ext.data.Store:

myStore = new Ext.data.Store(config); // or Ext.data.GroupingStore or JsonStore or...
writeStore = Ext.ux.WriteStore.create(myStore, writeConfig);

This would be a two-step process, for those who chose it, but would greatly simplify the usage of any underlying store.

I started a thread on this at the jsorm.com site http://jsorm.com/forum/showthread.php?t=7

If people would kindly comment there, it would be appreciated.
Thanks.

eccehomo
29 Dec 2008, 7:57 AM
I have no objection to it in principle. The issue is that Ext.data.Store now has three children - SimpleStore, JsonStore, GroupingStore. I am sure more will come on the way. How do we avoid having branches and children to all of these?


Why not:


new Ext.data.Store({
plugins : [
Ext.ux.WriteStore({})
]
...
})


Seems more natural and consistent with the Ext.Component family, or with the majority of Ext classes for that matter. The "problem" you encountered, of other variants/descendants of Store not being able to inherit all the features of your extension, could be "solved" by the plugin/composition method. This is what we have been doing all along for the Ext.Component family. It just seemed sensible to me to do exactly the same thing with Ext.data.

I really hope plugins for store are natively supported in future releases.

deitch
30 Dec 2008, 7:36 PM
Is this supported? How does the data Store invoke the WriteStore and pass it enough information?

I was looking at the reverse, a decorator pattern wherein when you call a WriteStore, you pass it a Store class that it decorates.



eccehomo
31 Dec 2008, 2:34 AM
Hi. I forgot to mention that your extension is cool. I was working on a very similar thing when I found yours, but I had a different solution in mind. Anyway, I was considering using your extension, but I was really "into" something already. I'll be monitoring this extension though.

As for the plugin method, of course, it is not natively supported in Ext 2.x.

But because of Ext's extensibility, the limitation could be overcome by a simple patch:
http://www.extjs.com/forum/showthread.php?p=267145 (http://www.extjs.com/forum/showthread.php?p=267145#post267145)

It's a simple patch for the Store, so it could have plugins installed to it. Let me know what you think.

(P.S I am thinking of posting that code here in the extensions section, but I do not know if that patch is really helpful/useful for the community).

deitch
5 Jan 2009, 6:56 AM
Hi Eccehomo,

Thank you kindly for the compliment. I would love to see what you are working on, as well.

This is very similar to the decorator pattern; is that not easier? The other path I was looking at (and have started work on) is to remove any dependence on Ext.data.Store. WriteStore has in itself grown into a self-sufficient database, with journals, logging, all the usual semantics. All it is missing is a query language. It seems to make more sense to go down that path, and keep hooks into Ext.data.Store, so that WriteStore becomes JsormDB, where JsormDB has hooks to adapt to any Ext.data.Store; that store will use JsormDB as a backing store, and it will look transparent to the Ext.data.Store user. This is very similar to how ExtJS grew up, as an adapter to YahooUI... and the irony is not lost on me!





eccehomo
6 Jan 2009, 10:20 AM
Well, to me the decorator pattern has some disadvantages. First, you lose the ability to initialise or install listeners on the Store until the Store's constructor finishes execution (i.e., only after the Store's instantiation could your extension work). In my case, I do not want this to happen. For example, if autoLoad is set to 'true', the Store will load while still in the constructor; thus you lose the chance to install a listener on the store's first load.

However, it seems that you have some grand plans for your extension. :D From what I understand, you plan to lose some tight coupling from Ext.

The plugin/composition method might still be applicable in your case, though. If you plan to make this current 'extension' as independent from Ext as possible (in the near future), then a plugin for Ext.data.Store to "adapt" your class (i.e., adapter (or bridge?) pattern) isn't a bad idea at all.

deitch
7 Jan 2009, 12:16 PM
"Grand plans"? Heh, hardly. It is just that it has taken on a life of its own, with some developers asking to use it outside of ExtJS entirely, and others plugging it into other frameworks. More request-driven than my own grand visions.

I am still trying to get my head around the plugin as opposed to the decorator. With a decorator, you could decorate the class, not necessarily the object, and thus everything that gets passed to the decorator's constructor gets passed to the decoratee's (English?) constructor when instantiated. But perhaps I am misunderstanding you. Could you give me some examples of something that works under the current extension pattern but would not under the decorator pattern?



eccehomo
7 Jan 2009, 9:52 PM
Ok. The original problem was that your extension was originally meant for Store. Thus you extended store.


Ext.ux.WriteStore = Ext.extend(Ext.data.Store, {
...
});

The problem you encountered, which you posted on another forum site, was that there are many variants/descendants of Store (e.g. GroupingStore, SimpleStore, JsonStore). This, of course, would greatly limit your extension.

Suppose I used your extension. Then after a week, the client suggests that he wants to see a grouped grid, which of course would require use of GroupingStore. But since your extension descends from Store, I can no longer use your extension. What this means is that I would have to do the same thing:


Ext.ux.WriteGroupingStore = Ext.extend(Ext.data.GroupingStore, {
<THE CODE OF YOUR EXTENSION>
});

This means, I have to duplicate the code of your extension to all possible subclasses of Store. Of course, this, pardon my term, sucks. :D

With that kind of problem/limitation detected you suggested this (decorator pattern):


var store = new Ext.data.Store({});
var writeStore = Ext.ux.WriteGroupingStore(store, {<CONFIG FOR YOUR EXTENSION});
//or
var store = new Ext.data.JsonStore({});
var writeStore = Ext.ux.WriteGroupingStore(store, {<CONFIG FOR YOUR EXTENSION});
//or
var store = new Ext.data.GroupingStore({});
var writeStore = Ext.ux.WriteGroupingStore(store, {<CONFIG FOR YOUR EXTENSION});
//or
var store = new Store();//this is a hypothetical class of other frameworks/libraries
var writeStore = Ext.ux.WriteGroupingStore(store, {<CONFIG FOR YOUR EXTENSION});


That solves the problem of losing the functionality of your extension for descendant classes of store. This is elegant enough, of course.

However, there is a problem I noticed with this pattern. Note that "problem" is relative to the requirements/goals you wish to satisfy.:D


/**
if autoLoad is set to true, Store will load automatically. The load method is executed at the constructor!
*/
var store = new Ext.data.JsonStore({
autoLoad : true //this will fire the 'beforeload' event while on the constructor
});
var writeStore = Ext.ux.WriteGroupingStore(store, {<CONFIG FOR YOUR EXTENSION});

What this means is that your extension could not listen to the FIRST beforeload event. Of course, if that is unimportant to you, then you don't have a problem. Otherwise, you have a BIG problem. Which is exactly my case - I need to listen to the first beforeload event.


Of course you might say or suggest this:


var store = new Ext.data.JsonStore({
listeners : {
beforeload : {
single : true, fn : function(){...}
}
}
});
var writeStore = Ext.ux.WriteGroupingStore(store, {<CONFIG FOR YOUR EXTENSION});

That solves the problem of listening to the first beforeload. However, to me, that is an 'ad hoc' method; I have to define those listeners on every instantiation of Store. Those listeners, ideally, must be re-factored in a separate "class" or "scope". What I want to achieve is a refactored version: a set of fixed enhancements (listeners, delegates, interceptors) to the Store which are encapsulated in a separate class.

Now comes the plugin method. There is basically nothing peculiar to this method apart from the fact that it is applied to a Store. Plugins are meant, at least in Ext 2.x, for Components. To emphasize, this pattern is not currently natively supported by Ext 2.x.

I will continue this on a next post...

eccehomo
7 Jan 2009, 10:04 PM
continued....


What I want to achieve is basically what we are trying to do with Components. For instance if we want to add a feature for Panels (GridPanel, Panel, TabPanel, TreePanel), we write a plugin. We do not extend because we want our 'added feature' to be applicable to a certain group of classes, which in this case are Panels. Thus we proceed like this:


Ext.ux.PanelPlugin = function(config){
Ext.apply(this, config);
}

Ext.ux.PanelPlugin.prototype = {
init : function(aPanel){
//...do something cool with all possible Panels (e.g, GridPanel, TreePanel, etc.);
}
}

//to use the extension/plugin

new Ext.Panel({
plugins : [new Ext.ux.PanelPlugin()]
});
//or
new Ext.TabPanel({
plugins : [new Ext.ux.PanelPlugin()]
});
//or
new Ext.grid.GridPanel({
plugins : [new Ext.ux.PanelPlugin()]
});


So you see, there is nothing I see unreasonable (or irrational) if we apply the same pattern for Stores (Ext.data):


Ext.ux.StorePlugin = function(config){
Ext.apply(this, config);
}

Ext.ux.StorePlugin.prototype = {
init : function(aStore){
//...do something cool with all possible Stores (e.g. Store, GroupingStore, JsonStore, etc.);
//attach preconfigured listeners, interceptors, delegates, sequences, etc, etc.
}
}

//to use the extension/plugin
new Ext.data.Store({
plugins : [new Ext.ux.StorePlugin()]
});
//or
new Ext.data.GroupingStore({
plugins : [new Ext.ux.StorePlugin()]
});
//or
new Ext.data.JsonStore({
plugins : [new Ext.ux.StorePlugin()]
});

What are the benefits? We can write classes that are plugged into Stores. These classes are, just like their Component plugin counterparts, enhancements (fixed listeners/manipulators/responders/delegates) for various Store-related issues (bugs, added features, etc). Though this could also be achieved using the decorator pattern, with plugins we can do various things to the Store (such as attaching listeners) while the Store is still being instantiated (while we are still in the Store's constructor). To me, and I will emphasise this: there is a world of difference between being able to "have a class" (i.e., manipulate it) (1) while it is being constructed and (2) only after its construction. Again, if you believe that you do not need this benefit, or that the difference is superficial, then of course you are fine with the decorator pattern.

Next is the code to implement this...

eccehomo
7 Jan 2009, 10:28 PM
...continued

The first problem is: how do we achieve the plugin method (composition method) for Stores as achieved by Components (i.e, GridPanel, TabPanel, Toolbar, TextField, etc..)?

So my first impulse was to inspect Component's constructor, and found this (at line 197):


this.initComponent();

if(this.plugins){
if(Ext.isArray(this.plugins)){
for(var i = 0, len = this.plugins.length; i < len; i++){
this.plugins[i] = this.initPlugin(this.plugins[i]);
}
}else{
this.plugins = this.initPlugin(this.plugins);
}
}


So, this is what I want for my Store. Alas, I cannot "override" the constructor unless I extend Store. If I extend Store, I'd be back to my original problem. What is called for is a hack. :D (Note: As suggested, this is only a hack. It is not proven bug-free or "safe". I would appreciate comments or suggestions, though, on how to achieve a better method.)


Looking at Store's constructor, we see this (line 155):


if(this.storeId || this.id){
/*
hmmm. this seems to be the only way to "inject"
my code while still on the constructor
*/
Ext.StoreMgr.register(this);
}


Facts:
1.) Store gets registered on the StoreMgr if there is an id or storeId
2.) A Store gets registered while still on the constructor.

So, the second fact is a chance to inject my code while Store is "still on the constructor". The first fact is a requirement so that I can inject my code. How do we do this?

Strategy:
1.) define an interceptor for a Store's constructor. Its goal is to define an id if none is found. This way, we are assured that a Store will be registered in StoreMgr
2.) define an interceptor for the 'register' method on StoreMgr. That interceptor will in turn initialise the plugins defined for Store



Ext.ns("Ext.ux.data.patches");

Ext.ux.data.patches.StorePluginsPatch = function(){

return {

idSeed : 0,

id : function(id, prefix){
prefix = prefix || "apex-new-store-";
var id = prefix + (++this.idSeed);
return id;
},

//private
onStoreConstruct : function(config){
//provide an id if non is supplied, so that it'd
//be registered in the StoreMgr
config = config || {};
if( !config.id ){
config.id = this.id();
}
},

//private
onStoreRegister : function(){
var initPlugin = function(plugin, store){
if( plugin.xtype && typeof plugin.init !== 'function' ){
plugin = Ext.ComponentMgr.create(plugin);
}
var p = plugin.init(store);
return p;
}

for(var i = 0, len = arguments.length; i < len; i++){
var store = arguments[i];
if(store.plugins){
if( Ext.isArray(store.plugins) ){
Ext.each(store.plugins, function(plugin, idx){
store.plugins[idx] = initPlugin(plugin, store);
}, this);
}
else{
store.plugins = initPlugin(store.plugins, store);
}
}
}
},

installPatch : function(){

//"install listeners" by adding function interceptors
Ext.data.Store.prototype.constructor = Ext.data.Store.prototype.constructor.createInterceptor(this.onStoreConstruct, this);
Ext.StoreMgr.register = Ext.StoreMgr.register.createInterceptor(this.onStoreRegister, this);
}
}
}();
Ext.ux.data.patches.StorePluginsPatch.installPatch();


Now we can do this:


/*
this class is enhancements for Store. First goal is to smartly detect a need for metaData. Second, change some important param names to ones my server will understand
*/
Ext.ns("Ext.ux.data.plugins");

// constructor (needed for the usage below); same pattern as the plugins above
Ext.ux.data.plugins.ApexEnhancements = function(config){
Ext.apply(this, config);
}

Ext.ux.data.plugins.ApexEnhancements.prototype = {

requestMetaKey : 'REQUEST_META',

sortingDirectionKey : "sorttype",

sortedFieldKey : "sortfields",

init : function(store){

this.store = store;
this.store.on("beforeload", this.onStoreBeforeLoad, this);

this.store.requestMetaKey = this.requestMetaKey;

//just add a utility method to store
this.store.reloadMeta = this.reloadMeta.createDelegate(this);

//change some param names
Ext.apply(this.store.paramNames, {
dir : this.sortingDirectionKey
,sort : this.sortedFieldKey
});
},

onStoreBeforeLoad : function(store, options){
if( !store.reader.recordType || options.reloadMeta ){
var o = {}
o[this.requestMetaKey] = true;
options.params = Ext.applyIf(options.params || {}, o);
}
else{
if( options.params && options.params[this.requestMetaKey] ){
delete options.params[this.requestMetaKey];
}
}
},

//untested
reloadMeta : function(){
var o = {};
o.reloadMeta = true;
this.store.reload(o);
}
}

//now I can proceed quite simply:
var myStore = new Ext.data.JsonStore({
plugins : [new Ext.ux.data.plugins.ApexEnhancements({})]
});

//or
var myStore = new Ext.data.Store({
plugins : [new Ext.ux.data.plugins.ApexEnhancements({})]
});


As I suggested in a separate thread, native support for Plugins in the Ext.data package is needed if we want to achieve better refactored code on the "data layer" aspect of Ext applications.

Comments?

eccehomo
7 Jan 2009, 10:41 PM
@deitch

It seems that your extension is a success, as there are a lot of requests coming in (which only shows confidence in your work and ability). If you found my previous suggestions sensible, then perhaps I could further suggest this:


Ext.ux.WriteStore //-->your class which you plan to be independent of any libraries.

Ext.ux.BridgeToStore //-->a special class which should be plugged into Store to utilise your class

Ext.data.Store //a class which could be used in conjunction with your class.

//Sample Usage:
Ext.ux.BridgeToStore.prototype = {
init : function(extStore){
this.extStore = extStore;
new Ext.ux.WriteStore(this.extStore);
}
}

//case 1:
new Ext.data.Store({
plugins : [new Ext.ux.BridgeToStore]
});


Of course, there is an infinite list of possible use cases. That would depend on users of your class really.

deitch
12 Jan 2009, 1:55 PM
Seriously, awesome feedback. I am reading your posts right now, and will absorb them and then get back to you here. The only downside to your great feedback, especially its detail, is that I need some time to absorb it. Didn't want you to think I wasn't reading it, though.

deitch
12 Jan 2009, 2:23 PM
Not to obsess too much about the decorator, but what if we decorate the class, rather than the instance?



var WriteStoreClass = Ext.ux.WriteStoreDecorate(Ext.data.Store);
var myStore = new WriteStoreClass({/* config info for both */});


In other words, we still have a single constructor, which allows us to grab events at various times, but we dynamically extend the class. As long as the argument to WriteStoreDecorate is an Ext.data.Store (i.e. including one of its children), the generated class will work.

Does this make it easier, or more complicated?

deitch
12 Jan 2009, 2:32 PM
That is precisely my plan, except, of course, that it will not be an Ext.ux.WriteStore, but rather SomethingElse.db. But the idea still holds. I have started work on it.


eccehomo
12 Jan 2009, 8:53 PM
I think (correct my thoughts if you find I misinterpreted you) that code is not really meant to be a decorator but rather a factory. As I see it, the factory method would provide some fixed listeners and delegates.

Suppose this:


Ext.data.Store.A = function(config){
//extend store, define "fixed" listeners
return new Ext.data.Store(config);
}

//or (example)
Ext.data.Store.WriteStoreFactory = function(storeConfig){
//instantiate store and decorate, then return the instantiated store
}

//use case
var mystore = Ext.data.Store.WriteStoreFactory({...});


IMHO, it might be a sensible alternative. However, if we "extend" Store on every instantiation... do you think it would be necessary to do so? To me, it would be unnecessary.

But then again, suppose we implement the Factory pattern: what if I wanted to install listeners while the store is still in its constructor, and defining those listeners as config options would prove redundant, ad hoc, and unscalable? With the Factory pattern we are "too bound" to the exposed method which creates the class; unless we define another factory method - but that might defeat the purpose of the Factory idea. It really depends on your goal, though.

I suggested the plugin/composition method as implemented on Components because it has a generally useful and controllable feature when implemented.

deitch
14 Jan 2009, 12:40 PM
I think (correct my thoughts if you find I misinterpreted you) that code is not really meant to be a decorator but rather a factory. As I see it, the factory method would provide some fixed listeners and delegates.


I think you are right. It is more of a factory, but the generated instance is a decorated class. Something of a hybrid.



Actually, I meant something different. Right now, you have an Ext.ux.WriteStore which can be instantiated with the normal new() call. It is great, because it is a direct descendant of Ext.data.Store, so you can pass it config options for both the parent (Ext.data.Store) and child (Ext.ux.WriteStore). It is limited, because you cannot use a different parent which is also an Ext.data.Store, e.g. Ext.data.GroupingStore. This is a classic problem of classical inheritance (pun mostly intended).

However, we are in JavaScript, not Java or C++ or whatever. We can just apply the necessary functionality directly to a prototype or an object. We want to be able to capture it, great. So we do something like this.



var Factory = Ext.ux.WriteStoreF(Ext.data.Store);
var myStore = new Factory(config);


The Factory is not instantiating an Ext.data.Store, passing it config, and then giving you a WriteStore decorated on top of that. Rather, Ext.ux.WriteStoreF() is dynamically extending Ext.data.Store (or Ext.data.GroupingStore or anything else), creating an Ext.ux.WriteStore on the fly. When you run new Factory(config), it is as if you ran new Ext.ux.WriteStore(config) right now, except that the class was extended on the fly. It is as if I took all the "Ext.extend..." code I have right now and ran it at run-time.

This is one of the beauties of JavaScript as a prototypal language. I can do exactly that: dynamically create an Ext.ux.WriteStore that has as its prototype an extension of Ext.data.Store, except that one time the parent is Ext.data.Store, another time Ext.data.GroupingStore, another time MY.foo.bar.Store.
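
To make that concrete, here is a minimal sketch of what such a run-time decorator might look like. Ext.ux.WriteStoreF and the writeMembers object are illustrative names only, not the actual write-store code; the point is simply that Ext.extend can be run against any parent class at run-time:


// minimal sketch only: dynamically extend whichever Store class is passed in
Ext.ux.WriteStoreF = function(parentClass){
    // the members that make up the write/transaction behaviour would live here
    var writeMembers = {
        commitChanges : function(){ /* write the journal back through the proxy */ },
        rejectChanges : function(){ /* roll the journal back */ }
    };
    // build a brand-new subclass of whatever parent was supplied
    return Ext.extend(parentClass, writeMembers);
};

// the same factory works for Store, GroupingStore, or any other descendant
var PlainWriteStore = Ext.ux.WriteStoreF(Ext.data.Store);
var GroupedWriteStore = Ext.ux.WriteStoreF(Ext.data.GroupingStore);
var myStore = new GroupedWriteStore({autoLoad: true /* , ... */});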

deitch
3 Mar 2009, 9:41 AM
One major milestone is complete. jsormdb is out the door. It is very similar to write-store, except that it is completely independent of ExtJS. This has the advantage of not requiring the entire ExtJS framework where it is unnecessary, so it works with other frameworks while still working with ExtJS, providing additional features to ExtJS.

It has the disadvantage that some services that are built into ExtJS need to be reinvented, or taken from other places. For example, the event management system in ExtJS is based on Douglas Crockford's eventuate(), which is under a broad open license, and this is too.

Looking forward to feedback at http://jsorm.com.

sumo123
25 Mar 2009, 5:43 AM
I need to transform the JSON data in the update proxy before it is sent to the server. My server needs the data in a different structure; for example, the recordState values (c, u, d) become 0, 1 and 2 and are properties on each row - see below.

What is the best way to do this? Thanks


{ Rows:
[
{id: 1, firstName: 'John', lastName: 'Smith', address: [{number: '123', street: 'Main St'},{number: '456', street: 'Elm St', RecordState: 0}]},
{id: 2, firstName: 'Jill', lastName: 'Stein', address: [{number: '789', street: 'Park St'},{number: '012', street: 'Birch St', RecordState: 1}]}
]
}

deitch
25 Mar 2009, 2:23 PM
There are no explicit hooks for this. I see two options:

Subclass the Writer to do what you want. This is, in scientific terms, "ugly".
Use the "beforeupdate" event of the WriteProxy. It gets called immediately before writing, and passes the params object by reference, so you can easily modify it. The element params.data is the (normally String) result of writer.write().

I would recommend the second; a rough sketch follows below.

Customizable data format is an interesting idea; maybe I will hook it into the next major release of jsormdb.




sumo123
25 Mar 2009, 3:14 PM
Thanks Deitch for this superb extension and your great support.

Is there a temporary way to get the WriteStore working with a GroupingStore until support for decorators comes out in v1.2? I am currently making an exact copy of your WriteStore code but extending from GroupingStore instead of Store. However, I want to get away from duplicating code to minimise JS size.

Would appreciate any advice. Thanks again

deitch
27 Mar 2009, 1:59 PM
Hmm, I need to think about that. In jsormdb - like write-store, but independent of ExtJS - I use a separate extension mechanism, which is quite suitable to this, based directly on Douglas Crockford's extends. In this case, I need to think it through. Will do so over the coming days.



deitch
30 Mar 2009, 6:21 PM
A beta release of 1.2 is available. This release includes only the decorator support. It also does a much better job of encapsulating private objects. It cannot go as far as it needs to, because it depends so heavily on ExtJS (unlike jsormdb), but it is definitely improved.

The decorator pattern is fairly simple to use. You can invoke an Ext.ux.WriteStore in one of two ways:


Traditional: Works exactly the same as the old way, no change. It will be an extension of Ext.data.Store

var w = new Ext.ux.WriteStore({autoLoad: true, option: 1, ...});


Decorator: The constructor accepts two arguments. The first is the class to subclass. The second is the usual config, including options to both the parent and the Ext.ux.WriteStore.

var w;
// child of a regular Ext.data.Store
w = new Ext.ux.WriteStore(Ext.data.Store, {autoLoad: true, option: 1, ...});
// child of an Ext.data.GroupingStore
w = new Ext.ux.WriteStore(Ext.data.GroupingStore, {autoLoad: true, groupOption: 1, ...});


Looking forward to feedback.

sumo123
31 Mar 2009, 1:53 PM
That's brilliant!!! Works great. Thanks

deitch
31 Mar 2009, 1:54 PM
Pleasure. You should post future comments on the jsorm bulletin board. I am checking ExtJS today, but I am much more responsive there.



deitch
30 Aug 2009, 12:45 AM
Open question to the community of users: Should write-store continue its existence or retire? I have been in the process of making sure it functions properly under ExtJS 3.0. However, 3.0 already includes the two basic components of write-store, specifically writing back to the server and transactions. Given that these are now in the basic ExtJS, should it continue as a user extension, or be retired?

Looking for feedback and reasons.
Thanks.
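
For anyone weighing the two, the built-in 3.0 approach looks roughly like the following. This is a hedged sketch from memory of the 3.0 data package; the URLs and field names are placeholders and the option names should be checked against the 3.0 docs:


// rough sketch of the Ext 3.0 built-in write support
var store = new Ext.data.Store({
    proxy: new Ext.data.HttpProxy({
        // separate URLs per CRUD action
        api: {
            read: '/records/load',
            create: '/records/create',
            update: '/records/update',
            destroy: '/records/destroy'
        }
    }),
    reader: new Ext.data.JsonReader({root: 'rows', idProperty: 'id'}, ['id', 'name']),
    writer: new Ext.data.JsonWriter({encode: true}),
    autoSave: false // queue edits, then call store.save() to push them as a batch
});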

cdomigan
3 Sep 2009, 5:15 PM
Your store seems to be superior in a couple of ways:


Ability to save a manifest of all changes to a single url
Proper callbacks for writing, as opposed to the DataWriter where you have to listen to a "write" event (this is annoying for so many reasons)


At least, I haven't seen how the new Ext 3.0 can do this. Any pointers?

Chris

deitch
3 Sep 2009, 7:52 PM
Thanks. I have not used the new stuff extensively yet, so I don't actually know. I can finish updating WriteStore to be 3.x-compatible; it is just a question of whether it is worthwhile.

cdomigan
10 Sep 2009, 5:22 PM
I for one would certainly use it :)

ajit.mankottil
5 Sep 2014, 1:18 AM
Do you have it for Extjs 4.2?

deitch
5 Sep 2014, 2:44 AM
I haven't touched it in years. Loved extjs in its day, but the latest incarnation as Sencha became too corporate; I found myself putting in a lot of unpaid labour for a few other people's benefit. If I find time, I will try to adapt it and re-release for 4.x.

I had thought they had built in their own store as well? I have memories of them copying other work?