Thoughts on HyperLively - Oct. 21st
The idea here is to be as immediate as possible. We should reach for an iPad with Lively as freely as for a notebook, when trying to develop a thought.
Start with Drawing
As with a notebook, the only thing you do other than draw is to go to the next page, or back to a previous page.
The command bar could be always present, or appear only when some command button is pressed.
Objects
Ways we would like to be the same or better.
The underlying principle is that projecting thoughts to images helps us to think: it gives form to the elements in our idea and, in so doing, leverages our visual cortex to contemplate more elements and relationships than we can do in the abstract.
The most immediate affordance of our application must be drawing, simple and fast.
Next to drawing, we use simple text as a powerful symbolic extension to our thoughts in images. It is naturally tempting to include some sort of text support in this primitive phase of thought development - character recognizer invoked by command key, or the like. As Astrid points out, this introduces many complications (mode change, appearance change, etc), and may actually interfere with the consistent graphical style of the drawing being made. Keep it simple: let the text be drawn simply as part of the drawing, and leave the recognition if any to a later abstraction phase in the development.
Just as one might want to interpret text in a drawing, it is tempting to interpret which parts of the drawing may be separate elements for one reason or another. Here again, we feel, it is probably best to begin with no interpretation.
Gestures
Navigation
What "frills" might we want to offer for basic drawing?
- controls for pen width and color; maybe texture too

How to offer such choices without introducing too much cognitive interference?
- cmd-key overlay of options
We will want some well-defined gesture vocabulary, which can then be further interpreted in various contexts. I'm imagining...
- short click
- long click
- stroke (drag)

and possibly...
- tap (very short click)
- double click or double tap

...with any of these modifiable by a command key being pressed.
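A minimal sketch of how such a vocabulary might be classified from raw pointer events. The thresholds and names below are illustrative assumptions, not part of the design above; double click/tap is omitted since it needs memory of the previous gesture.

```javascript
// Illustrative thresholds (assumptions, tuned by feel in practice).
const TAP_MS = 150;    // held less than this: a "tap" (very short click)
const LONG_MS = 600;   // held at least this long: a "long click"
const DRAG_PX = 8;     // moved farther than this: a "stroke"

// Classify one down/up pair into the gesture vocabulary.
function classifyGesture({ downTime, upTime, distance, cmdKey = false }) {
  const held = upTime - downTime;
  let kind;
  if (distance > DRAG_PX) kind = "stroke";
  else if (held < TAP_MS) kind = "tap";
  else if (held < LONG_MS) kind = "short click";
  else kind = "long click";
  // Any of these may be modified by a command key being pressed.
  return cmdKey ? "cmd-" + kind : kind;
}
```

Each gesture would then be interpreted per context (drawing, selection, navigation) by looking up the returned kind.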
Multiuser access
Ideally, any page can be synchronized with other users in the style of SyncMorphs and Clojure/worlds. In this manner the "slide sorter" becomes an iconic channel selection interface for choosing worlds to share, much as our current LivelyToLively session list.
Post-recognition
It's easy to imagine an abstraction tool that might examine the time-sequence and locations of strokes in a drawing and produce a grouping of strokes into higher-level drawing elements. Similarly certain shapes could be recognized as, e.g., rectangles or ellipses, and sequences of particular shapes could be reasonably inferred to be text.
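A sketch of the time-sequence part of such an abstraction tool, under the assumption that strokes carry start/end timestamps: strokes separated by a long enough pause are taken to belong to different drawing elements. The gap threshold and the stroke shape are assumptions for illustration.

```javascript
// Group a time-ordered list of strokes into higher-level drawing
// elements wherever the pause between strokes exceeds gapMs.
// Stroke shape {start, end} (timestamps in ms) is an assumption.
function groupStrokes(strokes, gapMs = 1000) {
  const groups = [];
  let current = [];
  let lastEnd = -Infinity;
  for (const s of strokes) {
    if (current.length && s.start - lastEnd > gapMs) {
      groups.push(current);   // pause detected: close the current element
      current = [];
    }
    current.push(s);
    lastEnd = s.end;
  }
  if (current.length) groups.push(current);
  return groups;
}
```

A fuller tool would combine this temporal grouping with spatial proximity, and then run shape and text recognition over each group.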
Pages - the first level of structure
Relationship to Hypercard
As described above, it is desirable to support some transition from mere pixels to real objects. Whether this is accomplished by hand or somewhat automatically, we must then deal with worlds (pages) that are no longer unstructured canvases.
The obvious way to navigate a simple notebook is with a command bar offering...
- delete this page
- << jump to the beginning
- < go to the previous page
- <> use some sort of scrolling to jump to any page
- > go to the next page
- >> jump to the end
- + add a page following this one
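The model behind such a command bar can be sketched as below; the class and member names are assumptions for illustration, and a new page defaults to blank, per the notebook analogy discussed next.

```javascript
// Minimal sketch of the notebook model behind the command bar above.
class Notebook {
  constructor() {
    this.pages = [{ strokes: [] }];  // start with one blank page
    this.index = 0;                  // the page currently shown
  }
  get current() { return this.pages[this.index]; }
  first() { this.index = 0; }                                      // <<
  prev()  { this.index = Math.max(0, this.index - 1); }            // <
  next()  { this.index = Math.min(this.pages.length - 1,
                                  this.index + 1); }               // >
  last()  { this.index = this.pages.length - 1; }                  // >>
  addPage() {                                                      // +
    this.pages.splice(this.index + 1, 0, { strokes: [] });
    this.index += 1;                 // move to the new blank page
  }
  deletePage() {
    if (this.pages.length === 1) return;  // always keep one page
    this.pages.splice(this.index, 1);
    this.index = Math.min(this.index, this.pages.length - 1);
  }
}
```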
But here there are some choices, related to later inclusion as part of a presentation. For instance: should the next page begin as a blank page, as a copy of this page, as a new background for this page, or as a new blank page with this page as a background? Presumably, by analogy to a notebook, the default should be a new blank page, and the structural topics of backgrounds, foregrounds, etc. can be asserted later, after the creative burst has been satisfied and the user has had a chance to explore the tool more.
A more iconic form of navigation would be to present thumbnail images of all pages from beginning to end, along with some ability to add and remove pages. This seems much more powerful, since the same view could allow for reordering the material, and "muting" pages to be skipped in a presentation.
If there are objects on the page, then we need some agreements regarding what it means to make pen gestures in various places on the page. It will probably serve us to have a notion of objects being selected, and possibly another notion of objects being "open or closed" to changes.
Command keys
Lively is a system that integrates the process of composition with that of deployment, and the greatest challenge is to provide a rich enough set of gestures to support both modes of operation without unduly complicating things. From the outset I am assuming that we will make use of command keys, whether on a keyboard or overlaid on the display. My inclination is to use these only as modifiers, or fleeting modes, and that there will be no other modes in the UI that are not visibly manifest.
Event history
Every event in a Lively world should be recorded in a stream, and these events should be undoable, redoable, and sharable. If a stroke becomes, e.g., a graphical object, then that object should retain its original source event stream, allowing it, e.g., to be re-rendered using a different brush shape or texture.
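One way the recorded stream could support undo and redo is a journal with a cursor, sketched below; the class and method names are assumptions, not an existing Lively API. Because state is a replay of events up to the cursor, a stroke that later becomes an object can keep its source events and be re-rendered with a different brush.

```javascript
// Sketch of an undoable, replayable event journal.
class EventJournal {
  constructor() {
    this.events = [];  // every recorded event, in order
    this.cursor = 0;   // events before the cursor are "live"
  }
  record(event) {
    this.events.length = this.cursor;  // recording discards the redo tail
    this.events.push(event);
    this.cursor = this.events.length;
  }
  undo() { if (this.cursor > 0) this.cursor -= 1; }
  redo() { if (this.cursor < this.events.length) this.cursor += 1; }
  // Render the live prefix of the stream, e.g. with a chosen brush.
  replay(render) { return this.events.slice(0, this.cursor).map(render); }
}
```

Sharing then amounts to transmitting recorded events to other participants' journals, in the spirit of the multiuser synchronization described above.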
Ways we would like to be different or better.