The space of all possible gestures, between touching the screen / pressing the button, moving along an arbitrary path (or not, in the case of a tap), and lifting your finger / releasing the button. It gets a lot more complex with multi-touch gestures, but it’s the same basic idea: just multiple gestures in parallel.
OLPC Sugar Discussion about Pie Menus
Excerpts from the discussion on the OLPC Sugar developer discussion list about pie menus for PyGTK and OLPC Sugar.
Excerpt About Gesture Space
I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
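The angle-between-clicks selection rule described above can be sketched in a few lines. This is a minimal illustration, not code from any of the implementations discussed; the clockwise-from-top item layout, the function name, and the inactive-center radius are all assumptions:

```python
import math

def pie_menu_selection(center, cursor, n_items, inactive_radius=8.0):
    """Map a cursor position to a pie menu item index, or None.

    Selection depends only on the angle from the menu center to the
    cursor, never on the path taken, so every position in gesture
    space has a well defined meaning and re-selection is free.
    Item 0 is centered at the top (12 o'clock); items proceed clockwise.
    """
    dx = cursor[0] - center[0]
    dy = cursor[1] - center[1]
    # Inside the inactive center region: no selection (cancel).
    if math.hypot(dx, dy) < inactive_radius:
        return None
    # Screen coordinates have y increasing downward; measure the
    # angle clockwise from straight up.
    angle = math.atan2(dx, -dy) % (2 * math.pi)
    slice_width = 2 * math.pi / n_items
    # Offset by half a slice so item 0's wedge straddles straight up.
    return int(((angle + slice_width / 2) // slice_width) % n_items)
```

Because only the current angle matters, calling this on every mouse-motion event gives browsing and re-selection for free: the path taken so far never constrains the result.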
Multitouch Tracking Example
One interesting example is multitouch tracking for zooming/scaling/rotating a map.
A lot of iPhone apps just code it up by hand and get it wrong (or at least not as nicely as Google Maps gets it).
For example, two fingers enable you to pan, zoom and rotate the map, all at the same time.
The ideal user model is that during the time one or two fingers are touching the map, there is a correspondence between the locations of the fingers on the screen, and the locations of the map where they first touched. That constraint should be maintained by panning, zooming and rotating the map as necessary.
The Google Maps app on the iPhone does not support rotating, so it has to throw away one dimension and project the space of all possible gestures onto the lower-dimensional space of strict scaling and panning, without any rotation.
So the user model for two-finger dragging and scaling without rotation is different, because it’s possible for the map to slide out from under your fingers as they rotate. The app effectively tracks the point in between your fingers, whose dragging causes panning, and the distance between your fingers, whose pinching causes zooming. Any finger rotation around the center point is simply ignored. That’s a more complicated, less direct model than panning and scaling with rotation.
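Both constraint models can be solved in closed form from the two finger positions. Treating screen points as complex numbers, the pan/zoom/rotate model is exactly the similarity transform that pins both touch-down anchor points under the fingers, while the pan/zoom-only model tracks just the midpoint and the finger separation. This is a sketch of the math only, not code from any actual map app; the function names are assumptions:

```python
def solve_similarity(a1, a2, p1, p2):
    """Pan/zoom/rotate model: find the similarity transform taking the
    two touch-down anchor points (a1, a2) onto the current finger
    positions (p1, p2), so the map stays pinned under both fingers.
    Points are complex numbers (x + y*1j); returns (z, t) such that
    transform(w) = z*w + t.  abs(z) is the scale, its phase the rotation.
    """
    z = (p2 - p1) / (a2 - a1)
    t = p1 - z * a1
    return z, t

def solve_pan_zoom(a1, a2, p1, p2):
    """Google-Maps-style model: no rotation.  Track the midpoint for
    panning and the finger separation for zooming; any rotation of the
    fingers about the midpoint is ignored, so the map can slide out
    from under the individual fingers.
    """
    s = abs(p2 - p1) / abs(a2 - a1)        # real scale factor, no rotation
    t = (p1 + p2) / 2 - s * (a1 + a2) / 2  # midpoint stays under midpoint
    return s, t
```

Note how the first model maintains the constraint directly (two point correspondences fully determine a similarity transform), while the second has to project the gesture onto a lower-dimensional space.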
But some other iPhone apps haphazardly let you zoom or pan, but not both at once: once you start zooming or panning, you are locked into that gesture and can’t combine or switch between them. Perhaps this was a conscious decision on the part of the programmer, or perhaps they never realized it should be possible to do both at once, because they were using a poorly designed API, or thinking in terms of “interpreting mouse gestures” instead of “maintaining constraints”.
Apple has some gesture recognizers for things like tap, pinch, rotation, swipe, pan and long press. But they’re not easily composable into the kind of integrated tracker you’d need to support panning/zooming/rotating a map all at once. So most well written apps have to write their own special purpose multitouch tracking code (which is pretty complicated stuff, and hard to get right).
Article about “Visualizing Fitts’s Law”:
Particletree " Visualizing Fitts's Law
Hacker News discussion:
Visualizing Fitts's Law (2007) | Hacker News
DonHopkins on March 19, 2018 | on: Visualizing Fitts’s Law (2007)
Pie menus benefit from Fitts’ Law by minimizing the target distance to a small constant (the radius of the inactive region in the menu center where the cursor starts) and maximizing the target area of each item (a wedge shaped slice that extends to the edge of the screen).
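In the Shannon formulation of Fitts’s law, movement time grows linearly with the index of difficulty ID = log2(D/W + 1), so shrinking the distance D to a small constant and growing the target width W both pay off directly. The pixel figures below are purely illustrative assumptions, not measurements from any study:

```python
import math

def fitts_index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's law: ID = log2(D/W + 1), in bits.
    Movement time is modeled as a + b*ID, so smaller distance and
    larger width both make a target faster to hit.
    """
    return math.log2(distance / width + 1)

# Assumed geometry for illustration: a mid-list linear menu item is a
# 20-pixel-tall row roughly 120 pixels from the cursor, while a pie
# slice begins just past a 10-pixel inactive center and presents a
# wedge roughly 60 pixels wide at the selection distance.
linear_id = fitts_index_of_difficulty(distance=120, width=20)  # ~2.81 bits
pie_id = fitts_index_of_difficulty(distance=10, width=60)      # ~0.22 bits
```

Under these assumed numbers the pie target costs an order of magnitude fewer bits, which matches the intuition above: distance is clamped to the inactive-center radius, and the wedge width grows as you move outward.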
They also have the advantage that you don’t need to focus your visual attention on hitting the target (which linear menus require), because you can move in any direction into a big slice without looking at the screen (while parking the cursor in a little rectangle requires visual feedback), and you can learn to use them with muscle memory, with quick “mouse ahead” gestures.
An Empirical Comparison of Pie vs. Linear Menus
Jack Callahan, Don Hopkins, Mark Weiser (+) and Ben Shneiderman. Computer Science Department University of Maryland College Park, Maryland 20742 (+) Computer Science Laboratory, Xerox PARC, Palo Alto, Calif. 94303. Presented at ACM CHI’88 Conference, Washington DC, 1988.
Menus are largely formatted in a linear fashion listing items from the top to bottom of the screen or window. Pull down menus are a common example of this format. Bitmapped computer displays, however, allow greater freedom in the placement, font, and general presentation of menus. A pie menu is a format where the items are placed along the circumference of a circle at equal radial distances from the center. Pie menus gain over traditional linear menus by reducing target seek time, lowering error rates by fixing the distance factor and increasing the target size in Fitts’s Law, minimizing the drift distance after target selection, and are, in general, subjectively equivalent to the linear style.
The Design and Implementation of Pie Menus — Dr. Dobb’s Journal, Dec. 1991
They’re Fast, Easy, and Self-Revealing.
Copyright © 1991 by Don Hopkins.
Originally published in Dr. Dobb’s Journal, Dec. 1991, lead cover story, user interface issue.
Although the computer screen is two-dimensional, today most users of windowing environments control their systems with a one-dimensional list of choices — the standard pull-down or drop-down menus such as those found on Microsoft Windows, Presentation Manager, or the Macintosh.
This article describes an alternative user-interface technique I call “pie” menus, which is two-dimensional, circular, and in many ways easier to use and faster than conventional linear menus. Pie menus also work well with alternative pointing devices such as those found in stylus or pen-based systems. I developed pie menus at the University of Maryland in 1986 and have been studying and improving them over the last five years.
During that time, pie menus have been implemented by myself and my colleagues on four different platforms: X10 with the uwm window manager, SunView, NeWS with the Lite Toolkit, and OpenWindows with the NeWS Toolkit. Fellow researchers have conducted both comparison tests between pie menus and linear menus, and also tests with different kinds of pointing devices, including mice, pens, and trackballs.
Included with this article are relevant code excerpts from the most recent NeWS implementation, written in Sun’s object-oriented PostScript dialect.
Demo of Pie Menus in SimCity for X11. Ported to Unix and demonstrated by Don Hopkins.
Pet Rock Remote Control: Pie menu remote control touch screen interface for sending commands to pet rocks.
MediaGraph Music Navigation with Pie Menus Prototype developed for Will Wright’s Stupid Fun Club: This is a demo of a user interface research prototype that I developed for Will Wright at the Stupid Fun Club. It includes pie menus, an editable map of music interconnected with roads, and cellular automata.
The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo: This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that I developed for The Sims with Will Wright at Maxis and Electronic Arts.
Sylos on March 19, 2018
Well, radial menus typically are displayed around your mouse cursor, so the proximity aspect is there. They also fill out the space, well, radially, so you can really just fling your cursor in a direction and have the full width of the menu item to hit along the way.
With touch screens, there’s two major differences compared to the desktop:
1) You don’t have screen edges that you can fling your cursor against, so placing UI elements at the edge does not make them easier to hit.
2) Users are generally quicker to traverse the screen and hit something, but are much worse at hitting something that’s small, so you often want to make UI elements bigger (which does result in them being more spaced out) and then put the UI elements on several screens instead.
DonHopkins on March 19, 2018
The other problem with touch screens is that your finger isn’t transparent, so you can’t see what you’re pointing at the same way you can on a screen with a mouse. So you have to come up with different strategies for displaying menu items and feedback, like showing the selected item’s title at the top of the screen where your hand isn’t covering it.
walterbell on March 19, 2018
Thanks for the references!
Any idea why these are not often used with touchscreen mobile interfaces, e.g. press for contextual pie menu? Even without OS support, they could be implemented within apps.
DonHopkins on March 19, 2018
There have been various implementations of pie menus for Android and iOS. And of course there was the Momenta pen computer in 1991, and I developed a Palm app called ConnectedTV in 2001 with “Finger Pies” (cf. Penny Lane ;). But Apple has lost their way when it comes to user interface design, and iOS isn’t open enough that a third party could add pie menus to the system the way they’ve done with Android. But you could still implement them in individual apps, just not system wide.
Also see my comment above about the problem of non-transparent fingers.
Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing”, because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
They also provide the ability of “Reselection”, which means that as you’re making a gesture, you can change it in flight and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
Compare that with typical gesture recognition systems, like Palm’s Graffiti for example. Think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and the recognizer only accepts well formed gestures.
There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
But with pie menus, only the direction between the touch and the release matters, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There’s a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), which gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, or move around to correct and change the selection.
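As an event loop, this tracking re-derives the selection from the current direction alone on every event, so browsing, correction, and cancellation fall out naturally, and only the release commits. The event tuple format, function name, and dead-zone radius here are assumptions for illustration:

```python
import math

def track_pie_gesture(events, n_items, center=(0, 0), dead_zone=8.0):
    """Sketch of pie menu input tracking.

    `events` is a sequence of ('move' | 'up', (x, y)) tuples.  Each
    event replaces the selection based only on the current direction
    from the menu center, so the user can browse, correct, or cancel
    in flight; the 'up' event (release) commits the final selection.
    Items are laid out clockwise with item 0 straight up.
    """
    selected = None
    for kind, (x, y) in events:
        dx, dy = x - center[0], y - center[1]
        if math.hypot(dx, dy) < dead_zone:
            selected = None  # back inside the center: cancel
        else:
            # Angle clockwise from straight up (screen y grows downward).
            angle = math.atan2(dx, -dy) % (2 * math.pi)
            step = 2 * math.pi / n_items
            selected = int(((angle + step / 2) // step) % n_items)
        if kind == 'up':
            return selected  # release commits (or cancels, if None)
    return None
```

Note that earlier events never constrain later ones: a wrong initial direction is fully recoverable right up until the release, which is exactly the reselection property that path-based gesture recognizers lack.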
Pie menus also support “Rehearsal” — the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it’s not rehearsal.
Pie menu users tend to learn them in three stages: 1) a novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) an intermediate remembers the direction of the item they want, pops up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select it. 3) an expert knows which direction the item they want is in, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
I wrote some more stuff about pie menus in the previous discussion of Fitts’ Law. 
Android Pie Menus:
iOS Pie Menus:
Momenta Pen Pie Menus:
Palm ConnectedTV Finger Pie Menus:
Self-revealing gestures are a philosophy for design of gestural interfaces that posits that the only way to see a behavior in your users is to induce it (afford it, for the Gibsonians among us). Users are presented with an interface to which their response is gestural input. This approach contradicts some designers’ apparent assumption that a gesture is some kind of “shortcut” that is performed in some ephemeral layer hovering above the user interface. In reality, a successful development of a gestural system requires the development of a gestural user interface. Objects are shown on the screen to which the user reacts, instead of somehow intuiting their performance. The trick, of course, is to not overload the user with UI “chrome” that overly complicates the UI, but rather to afford as many suitable gestures as possible with a minimum of extra on-screen graphics. To the user, she is simply operating your UI, when in reality, she is learning a gesture language.
In general, subjects used approximately straight strokes. No alternate strategies such as always starting at the top item and then moving to the correct item were observed. However, there was evidence of reselection from time to time, where subjects would begin a straight stroke and then change stroke direction in order to select something different.
Surprisingly, we observed reselection even in the hidden menu groups. This was especially unexpected in the Marking group since we felt the affordances of marking do not naturally suggest the possibility of reselection. It was clear though, that training the subjects in the hidden groups on exposed menus first made the option of reselection apparent. Clearly many of the subjects in the Marking group were not thinking of the task as making marks per se, but of making selections from menus that they had to imagine. This brings into question our a priori assumption that the Marking group was using a marking metaphor, while the Hidden group was using a menu selection metaphor. This may explain why very few behavioral differences were found between the two groups.
Reselection in the hidden groups most likely occurred when subjects began a selection in error but detected and corrected the error before confirming the selection. This was even observed in the “easy” 4-slice menu, which supports the assumption that many of these reselections are due to detected mental slips as opposed to problems in articulation. There was also evidence of fine tuning in the hidden cases, where subjects first moved directly to an approximate area of the screen, and then appeared to adjust between two adjacent sectors.
Requirement: Novices need to find out what commands are available and how to invoke the commands. Design feature: pop-up menu.
Requirement: Experts desire fast invocation. Once the user is aware of the available commands, speed of invocation becomes a priority. Design feature: easy to draw marks.
Requirement: A user’s expertise varies over time and therefore a user must be able to seamlessly switch between novice and expert behavior. Design feature: menuing and marking are not mutually exclusive modes. Switching between the two can be accomplished in the same interaction by pressing-and-waiting or not waiting.
Our model of user behavior with marking menus is that users start off using menus but with practice gravitate towards using marks and using a mark is significantly faster than using a menu. Furthermore, even users that are expert (i.e., primarily use marks) will occasionally return to using the menu to remind themselves of the available commands or menu item/mark associations.
TLDR: bla bla bla pie menus bla bla bla. ;)