Other topics on tools
Tools are used by the user to interact with the drawing: first to draw objects, and later to edit them. Tools are organized around a2dToolContr, the base class for tool controllers. The main implementation at the moment is a2dStToolContr. The controller holds a tool stack, and tools can be pushed onto and popped from it. The stack becomes important when one wants to zoom while drawing. In such cases a drawing tool is pushed first, and a zoom tool is temporarily pushed on top of it to zoom in on parts where accurate drawing is required, followed by a zoom out to continue drawing the rest of the object. Drawing a new object can likewise be followed by editing that same object: the editing tool is pushed on the stack while the drawing tool is still active, and as soon as the editing is finished, the editing tool is popped and the drawing tool becomes active again. This way one can draw and edit several objects in a row. Finally, the tool stack becomes even more important when editing objects nested within other objects. Here a new edit tool is pushed for every nested object. Finishing the edit of a nested object automatically pops its tool, bringing you up one level in the nesting. A simple application of this feature is editing text properties, which are children of drawn objects. A more complex use is adding and editing curves, and even markers on those curves, in a curve group object. It makes it possible to edit level by level into a complex nested tree of objects.
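The push/pop mechanism can be sketched with a minimal self-contained model. The `Tool` and `ToolController` names here are illustrative stand-ins, not the actual a2dStToolContr API:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Illustrative model of a tool controller with a tool stack.
struct Tool {
    std::string name;
    explicit Tool(std::string n) : name(std::move(n)) {}
};

class ToolController {
public:
    void PushTool(std::unique_ptr<Tool> t) { m_stack.push_back(std::move(t)); }
    void PopTool() { if (!m_stack.empty()) m_stack.pop_back(); }

    // The active tool is the one on top of the stack; tools below it keep
    // their state and become active again when the top tool is popped.
    const Tool* ActiveTool() const {
        return m_stack.empty() ? nullptr : m_stack.back().get();
    }

private:
    std::vector<std::unique_ptr<Tool>> m_stack;
};
```

Pushing a zoom tool while a drawing tool is active makes the zoom tool receive events; popping it brings the drawing tool back, with its state intact.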
Tools and views and documents
An a2dCanvasDocument can contain a hierarchy of objects: several objects form parts of a drawing, and the total drawing is the top of the document. Each view on the document can display a different object in the hierarchy. When a lower-level object is changed in one view, this automatically leads to an update of the other views. Even if a view displays the total drawing, a change in a sub-drawing will directly show up in that view too. Tools that edit views are directly connected to an a2dCanvasView, so tools work on a certain document and on a certain level in that document, as set on the view. The tool controller a2dStToolContr is set to an a2dCanvasView, and the tools in the controller's tool stack receive events from that view. A tool controller is always used for one view; other views have their own controllers. If one view has several tools on its stack, and is, say, currently busy editing an object, clearly one cannot use this controller for a second view at the same time. In general each view (on the same or a different document) has its own tool controller set to it. Reusing a tool controller on a different document (with or without its view) is possible, but that requires stopping all tools and resetting the controller. It is better to just have a controller per view.
The normal case is that each document that is opened or created opens one or more views on that document by default. The application's a2dViewConnector derived class takes care of connecting the new views into its (new) windows or frames.
Sometimes you may want to reuse a view plus its controller for all documents. When a document is closed, the document sends the proper events to disconnect its views. A view connector will also receive a pre-add-document event. The connector closes the current document, which generates the disconnect-view event, and that is the moment to connect the new document to the just disconnected view of the old document.
How tools communicate
Often one wants to take over settings from the current tool into a newly pushed tool, or into an existing tool when the current tool is popped. Similarly, when a modeless style dialog changes the centrally stored style settings, the active tool may want to take over the new style: a drawing tool, for instance, wants to use it, while a zoom tool will always draw its zooming rectangle in the same style. To avoid having to derive each and every tool in order to adapt it to what an application needs, all tools share an extra event handler, next to being event handlers themselves. This extra event handler intercepts events when the tool itself does not handle a tool event. It makes it possible to concentrate many customizations in one central place.
a2dStToolEvtHandler, which takes over style from and to tools.
a2dStToolFixedToolStyleEvtHandler, to customize tools with a fixed style.
The mechanism is to have the a2dStToolEvtHandler intercept a2dComEvent and a2dCommandProcessorEvent, and that way find the commands that were issued. Based on that, you can change the tool settings. For instance, the a2dObjectEditTool sends an a2dComEvent with id sm_toolStartEditObject when it starts to edit a canvas object. In your a2dStToolEvtHandler you can detect that event, and decide to give the edited object a new style, or to set the current style of the object being edited as the centrally stored style in a2dGetCmdh(). A modeless open style dialog will take that style as the current one, and therefore will show the style of the object that is being edited right now. A similar event is a2dStTool::sm_toolComEventAddObject, which is sent when a new object is added by a drawing tool. A new object will often take over the central style. Again, that is taken care of by a2dStToolEvtHandler for a whole set of tools. In case you want a different set of tools to never change their style, you set a handler like a2dStToolFixedToolStyleEvtHandler on all those tools. A second technique used in the a2dToolEvtHandler derived classes is to update some tools in idle time. If, for instance, the central style changes, and independent of how and why it was changed you just want the active tools to take that same style, you check the central style in idle time, and if it differs from the tool style, you change the tool's style.
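The idle-time synchronization can be sketched as follows. The types and the `SyncToolStyleOnIdle` hook are hypothetical simplifications, not the actual a2dStToolEvtHandler code:

```cpp
#include <cassert>

// Illustrative sketch of idle-time style synchronization between the
// centrally stored style and the active tool's style.
struct Style {
    int strokeColour = 0;
    int fillColour = 0;
    bool operator==(const Style& o) const {
        return strokeColour == o.strokeColour && fillColour == o.fillColour;
    }
};

struct Tool { Style style; };

// Stands in for the central style as stored e.g. behind a2dGetCmdh().
struct CentralStyle { Style current; };

// Called in idle time: if the central style changed, no matter how or
// why, the tool simply takes it over.
void SyncToolStyleOnIdle(Tool& tool, const CentralStyle& central) {
    if (!(tool.style == central.current))
        tool.style = central.current;
}
```

The advantage of polling in idle time is that the tool does not need to know which dialog or command changed the central style; any difference is picked up on the next idle event.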
Tools active in a hierarchy of objects
A tool can work on any level in an a2dCanvasDocument. Often this is the ShowObject() of the a2dCanvasView in which the tool is active, but it can also be a deeper nested child object of this ShowObject(). The parent object in the tool indicates the level, and children of this object are added, edited, etc. When editing an object, an edit copy of the object is created and added temporarily as a tool object to the parent object. So it really becomes part of the document, at that level in the document hierarchy. Take as an example editing a curve on a curve group inside a plot object: here the edit copy is stored as part of the curve group as long as the curve is being edited. The tool's objects are rendered as part of the document, and therefore layers and object hierarchy are all handled properly. Next to the edit copy, the tools can add more objects to decorate the editing features. In case of editing, the most important of these are a2dHandle objects, which are added as children of the edit copy. The tool makes changes on the edit copy via the a2dHandle objects, and the changes on the edit copy are transferred to the original object when appropriate. For example, when dragging a handle to enlarge the edit copy, only at the end of the drag is the change transferred to the original; during the drag the original is not changed. The changes are not made directly by calling member functions on the original, but via commands to the command processor of the document. This way all changes are recorded and placed on the undo stack. Every change in state can be undone by reversing the action of the command. The command records enough information to be able to undo its own actions. In other words, a command brings the document from one state into the next, and can also bring it back into the previous state.
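The undoable-command idea can be illustrated with a minimal sketch. `SetWidthCommand` and `CanvasObject` are hypothetical names for illustration, not the a2dCommand or a2dCommandProcessor API:

```cpp
#include <cassert>

// Stand-in for an object in the document.
struct CanvasObject { double width = 10.0; };

// A command records enough state to reverse its own action: it stores
// both the new value and the old value it overwrites.
class SetWidthCommand {
public:
    SetWidthCommand(CanvasObject& obj, double w)
        : m_obj(obj), m_new(w), m_old(obj.width) {}

    void Do()   { m_obj.width = m_new; } // document -> next state
    void Undo() { m_obj.width = m_old; } // document -> previous state

private:
    CanvasObject& m_obj;
    double m_new, m_old;
};
```

A command processor would execute Do() and push the command onto the undo stack; Undo() replays the stack in reverse to walk the document back through its earlier states.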
Sub-editing is a feature implemented via tools, which makes it possible to edit nested objects in a recursive manner. The first click on the top object brings that object into edit mode, and makes it possible to edit nested objects, and so on. The next topic, CorridorPath, explains the mechanism for redirecting events to the right object. A new edit tool is pushed for each deeper nested object being edited, and one edit tool is popped again when editing of a nested object finishes. Each edit tool has its own a2dCorridor, and when one is popped, the corridor of the next is restored. Editing works by making an edit copy of the original object: events modify the edit copy, and from there commands send the changes to the original. A hit on a sub-object in the original starts a sub-edit. The pushed edit tool for the sub-object makes a new edit copy of the sub-object, and the current tool is brought into a halted state, meaning that it still knows it was editing an object, but its edit copy is removed. As soon as a sub-edit tool finishes, the edit copy is regenerated for the halted tool. Normally tools themselves decide when an action is finished, but in the case of sub-editing the object itself needs to decide: each object can have different ways of being editable or sub-editable, and a central edit tool cannot know this. Sub-editing is for this reason always a one-shot action, meaning that when the object decides that editing on it is finished, this automatically results in popping the tool that did the editing, making the halted tool below it on the stack active again. In case of several sub-editing actions in a row, the tool stack will contain several one-shot object editing tools; only the last one pushed is not halted. The tools are popped one by one, to finally arrive back at the ShowObject of the view, where one can start editing other objects again.
Only the first tool, which starts editing a direct child of the ShowObject, is special, since it is able to jump from one object to another; here there is no parent object that started the editing. While all sub-editing tools are one-shot, this top one does not have to be.
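The halt-and-regenerate behaviour of the sub-edit stack can be modelled in a few lines. `SubEditStack` and `EditTool` are illustrative names, not wxArt2D classes:

```cpp
#include <cassert>
#include <string>
#include <vector>

// One entry per nesting level that is being edited.
struct EditTool {
    std::string target;  // the object this level is editing
    bool halted;         // edit copy removed while a deeper sub-edit runs
    bool oneShot;        // sub-edit tools pop themselves when done
};

class SubEditStack {
public:
    // Starting a deeper edit halts the current tool and pushes a new one.
    void StartEdit(const std::string& target, bool oneShot) {
        if (!m_tools.empty()) m_tools.back().halted = true;
        m_tools.push_back(EditTool{target, false, oneShot});
    }

    // Finishing a (one-shot) sub-edit pops its tool and reactivates the
    // halted tool below it, whose edit copy is regenerated.
    void FinishEdit() {
        m_tools.pop_back();
        if (!m_tools.empty()) m_tools.back().halted = false;
    }

    const EditTool* Active() const {
        return m_tools.empty() ? nullptr : &m_tools.back();
    }

private:
    std::vector<EditTool> m_tools;
};
```

In the real library the corridor of the reactivated tool is restored at the same moment its edit copy is regenerated.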
Tools are in general for one specific task, like deleting objects, dragging objects, copying objects, etc. This is very flexible, since one can write programs using exactly the tools one needs. It makes no sense to have complete editing of objects, with selection and such, if all the program needs to do is drag objects around. So for a library this is a good approach. But what if you want an editing application, where all the different tool actions need to come together, such that editing becomes intuitive? For that there are so-called MasterTools. A master tool is pushed as the top tool on the stack, meaning that if there are no tools on the stack, this one is pushed again. Indirectly the master tool pushes other tools. The trick is that the master tool defines the strategy for working with several tools: it intercepts mouse and key events, and based on those pushes the right tool. One can dynamically change the master tool to completely change the way an application interacts with the user. Some predefined master tools are available in wxArt2D:
a2dMasterDrawZoomFirst Tool with first priority on zooming, changing to selecting with the Ctrl and Alt keys. Drawing tools are pushed from the menu, and when they end this master tool becomes active again.
a2dMasterDrawSelectFirst Tool with first priority on selecting objects, changing to other tools with the Ctrl and Alt keys. Drawing tools are pushed from the menu, and when they end this master tool becomes active again.
a2dGraphicsMasterTool Tool to edit graph structures, which have objects connected with wires. Drawing tools are pushed from the menu, and when they end this master tool becomes active again.
If you still do not like the way they behave, you can derive from them or write your own master tool; the base class a2dMasterDrawBase can help you with that. Master tools in general push other tools as one-shot. This means that after performing one action, the tool removes itself from the stack and gives control back to the master tool.
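The master-tool strategy of pushing one-shot tools can be sketched as follows. The event names and tool names here are hypothetical; a real master tool inspects actual mouse and key events:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Invented event kinds, standing in for real mouse/key events.
enum class Event { ClickOnObject, DragOnEmptySpace };

// The master tool stays at the bottom of the stack, inspects incoming
// events, and pushes the wanted tool as a one-shot.
class MasterController {
public:
    MasterController() { m_stack.push_back("master"); }

    // The master tool defines the strategy: which tool fits which event.
    void OnEvent(Event e) {
        m_stack.push_back(e == Event::ClickOnObject ? "select" : "zoom");
    }

    // A one-shot tool removes itself after its single action, giving
    // control back to the master tool.
    void OneShotDone() { m_stack.pop_back(); }

    const std::string& Active() const { return m_stack.back(); }

private:
    std::vector<std::string> m_stack;
};
```

Swapping in a different MasterController subclass with another OnEvent strategy changes the whole interaction style of the application without touching the individual tools.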
Editing Canvas Objects
Mouse events arrive via the a2dCanvas into the a2dCanvasView, and from there are redirected to the a2dCanvasObject that is hit. The event travels through the document's hierarchy to test for mouse leave and enter events.
The question is: when should the canvas object handle mouse events? If the application works with tools, how the canvas object should react often depends on the tool in action. For instance, when connecting objects with a wire tool, the objects to connect should deliver feedback telling whether they can be connected, and pins on the object should feed back whether one can connect to that pin or not. But in case of editing that same object, one wants different feedback: e.g. the cursor should change depending on what can be edited on a vertex or segment of a polygon. To conclude, default mouse handling is only useful when the object should change its behaviour on mouse events while no tool is active. That could be showing properties or animation effects.
When editing objects using the edit tool, the object is copied, creating the so-called edit copy, which is used for editing the original object. In this edit copy it makes perfect sense to do mouse handling inside the canvas object. That way one can implement editing behaviour specific to each object: editing a polygon is different from editing a circle. Because of this, default mouse handling is done only when the object is an edit copy, meaning the editing tool is busy editing that object. One could say that when the object is in editing mode, the mouse events are handled in a certain way. One could create various modes for different tools, like a feedback mode when a wire tool wants to connect to the object.
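A mode flag gating the default mouse handling might look like this minimal sketch; the `Mode` values and return strings are invented for illustration, not taken from the library:

```cpp
#include <cassert>
#include <string>

// Invented modes a tool could set on an object before sending it events.
enum class Mode { None, Editing, WireFeedback };

struct CanvasObject {
    Mode mode = Mode::None;

    // Default mouse handling reacts differently depending on the mode
    // the active tool has set on the object.
    std::string OnMouseEvent() const {
        switch (mode) {
            case Mode::Editing:      return "edit-vertex";   // edit copy: edit behaviour
            case Mode::WireFeedback: return "highlight-pin"; // wire tool: connect feedback
            default:                 return "ignored";       // no tool-specific behaviour
        }
    }
};
```

The edit tool would set Mode::Editing on the edit copy only, so the original object keeps ignoring mouse events while editing is in progress.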
For handles and pins, being children of a canvas object, the mouse handling also depends heavily on the tool in action. Again, handling of mouse events and rendering of the pin can depend on the tool that is active. This can be controlled by a mode flag.
In general one can say that mouse handling inside an object is good, but should depend on the tool that is active. The same holds for rendering the object differently depending on which tool is in action. A tool that is not specific to one class of objects, but handles all objects at once, often requires a unique implementation of rendering and mouse handling inside a specific object; editing an object is the best example of such a tool. To achieve this, it is best to set objects or rendering into a specific mode, depending on the tool in action.
The select tool will set the object's selection flag, resulting in the select rendering stage displaying the object as selected. The edit tool will set an object in edit mode. Other tools will set modes on pins and objects, and ask for feedback inside objects, which will differ depending on the mode set.