I'm new to Sencha Touch, so I'll apologise in advance if the answers to these questions are obvious...
Suppose I wanted to allow a user to add new components and containers to a main panel at runtime using a UI. I can imagine a clunky way of doing this that would map easily onto the ST APIs, but it would be very tedious to use, especially on something like a tablet. So has anyone implemented something like this that makes good use of the multitouch interface, e.g. dragging a component over another component to put them both in a new container, using a rotate gesture to switch from an HBox layout to a VBox layout, and so on? And if so, is there anything publicly visible that demonstrates this working in practice?
Related to this question, suppose I had a mechanism that allowed a user to build a panel of components at runtime. They might naturally want to save such a layout and reuse it later. What would be the best way of doing this? I presume I could save the DOM tree representing the panel and load it back in again, but could I then build an ST Ext.Container from that tree? Or would I have to serialize and save the Ext.Container itself (presumably risking the layouts breaking if a new version of ST came along)?
Look at the TouchStyle example for different layouts based on device and orientation.
For dynamic components at runtime you could store the JSON config for each in a database. When your app starts, fetch each component's JSON config and pass it to Ext.create() (or add it straight to a container). What's nice about Sencha is you can use xtype in the configuration instead of creating individual objects.
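To make that concrete, here is a minimal sketch of the round trip, assuming a user-built layout is stored as a plain, xtype-based config tree (the `savedLayout` name and the example configs are hypothetical):

```javascript
// Hypothetical example: a user-built layout stored as an xtype-based config
// tree. Because it is pure data (no class instances), it survives
// JSON.stringify/JSON.parse and is tolerant of framework upgrades.
var savedLayout = JSON.stringify({
  xtype: 'container',
  layout: 'hbox',
  items: [
    { xtype: 'button', text: 'Circle' },
    { xtype: 'button', text: 'Square' }
  ]
});

// On app startup: parse the stored string back into a config object...
var config = JSON.parse(savedLayout);

// ...and hand it to the framework, which instantiates the component tree
// from the xtypes. In a real app: Ext.create('Ext.Container', config)
```

The key point is to persist the declarative config rather than the live Ext.Container instance, so a newer version of Sencha Touch simply re-instantiates from the same data.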
But the TouchStyle example seems to be an example of essentially programmatic changes to the layout in response to orientation and form factor. That's not really what I'm looking for. I want the user to be able to start with a blank panel and, through user interaction, add new components and containers to it, along with the layout requirements, in a natural fashion. I.e. I want to construct a simple GUI builder that runs at runtime and is controlled by the user, as opposed to building a UI programmatically at "compile" time. I'm not planning on making the whole of the UI configurable by the user, just some of it, in the same way that users can often configure an IDE by rearranging panels, docking them etc. But making good use of multitouch if possible.
Sencha Touch can most definitely handle what you're looking to do.
Let's say you're creating a simple drawing program with Sencha Touch. The basic flow would be something like this...
- create a new touch project (Sencha Designer makes everything below so simple... )
- add a toolbar up top with element buttons (circle, square, paint bucket, etc..)
- add a panel below the toolbar for drawing on. call it surfacePanel
- add a controller
- in your controller, listen for surfacePanel touch events
- when the panel is touched, the controller calls its addElement method
- addElement looks to see which element was selected (circle, square etc.)
- addElement then adds a new child element to the surfacePanel at the location where it was touched
- addElement then saves this information (i.e. touch x,y, element added etc.) to local storage, or back to your server with a JSON call, to persist the session.
- do the same for other elements as they're added, dragged, moved, edited etc.; the same procedure just gets more involved.
You could/should break things out further into circle classes, square classes etc., plus additional controllers.
Upon loading the app, grab the user's last saved state from the server or local storage and repopulate the surfacePanel.
Of course, I glossed over all the details, but this should give you a place to start.
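The addElement flow above can be sketched in framework-agnostic JavaScript (all the names here are hypothetical; in a real app these would be controller methods and the state would go to localStorage or your server):

```javascript
// Hypothetical sketch of the controller logic from the steps above, kept
// framework-agnostic so the flow is easy to see.
var selectedTool = null;   // set by the toolbar buttons (circle, square, ...)
var elements = [];         // everything added to surfacePanel so far

function selectTool(tool) {
  selectedTool = tool;
}

// Called from the surfacePanel touch listener with the touch coordinates.
function addElement(x, y) {
  if (!selectedTool) { return null; }
  var el = { type: selectedTool, x: x, y: y };
  elements.push(el);       // in a real app, also: surfacePanel.add({...})
  return el;
}

// Persist the session, e.g. localStorage.setItem('session', saveSession()),
// or POST the string to your server.
function saveSession() {
  return JSON.stringify(elements);
}

selectTool('circle');
addElement(10, 20);
selectTool('square');
addElement(30, 40);
```

Dragging, moving and editing would follow the same pattern: update the entry in `elements`, then re-save the session.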
Thanks John. What you describe makes sense if we just have a flat (unstructured) tree of simple objects like circles, squares etc. But suppose the elements are things like graphs, tables etc. At that point you'd really want to lay them out just as you would if you were building them at design time, e.g. nesting them in containers, setting layouts/alignments on the containers etc. That's where it becomes interesting, because we then have to decide when these containers get created, what gestures control the layout constraints and so on. It's one of those areas where we could make the programming simple at the expense of a cumbersome UI, or we might be able to make it very natural to manipulate from the user's perspective with a suitable set of gestures. So my initial question was really whether anyone has already done this, what gestures they used to create and destroy auxiliary containers, specify layout constraints etc., and whether it actually worked well in practice.
I'm not saying it would be easy, but I see no reason this can't be done. I'm sure that within a few hours to a day you could prototype something that lets you drag objects around a surface with precision and drop them into other containers (again, with precision, using padding, margins etc.). Your surfacePanel class would handle tick marks, grid positioning, freezing and rehydrating state etc., or each element (circle, square, table, graph) class could manage its own state. Performance will depend on how efficient your code is and how well the client's browser performs.
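The freeze/rehydrate idea can be sketched per element as well (hypothetical names; a real rehydrate would call Ext.create for each config and add the result to the surfacePanel):

```javascript
// Hypothetical sketch: each element freezes itself down to a plain config,
// and rehydrate rebuilds the collection from that data.
function freeze(surfaceElements) {
  return JSON.stringify(surfaceElements.map(function (el) {
    return { type: el.type, x: el.x, y: el.y };
  }));
}

function rehydrate(json) {
  return JSON.parse(json).map(function (cfg) {
    // Real version: Ext.create(classForType(cfg.type), cfg), then
    // surfacePanel.add(...) — here we just rebuild the plain objects.
    return { type: cfg.type, x: cfg.x, y: cfg.y };
  });
}

var frozen = freeze([
  { type: 'table', x: 5, y: 5 },
  { type: 'graph', x: 50, y: 5 }
]);
var restored = rehydrate(frozen);
```

Nested containers fit the same scheme: a container's frozen form would simply carry an `items` array of its children's frozen forms.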
I'd start off with a proof of concept then go from there. Everything you need is baked into the framework. Sounds like a fun project.