Considerations for a Collaborative Table-top Simulation System

Control Permissions

JCATS workstations can be set up as either a controller or a client. A controller is able to control multiple forces, whereas a client is generally responsible for only one force. If a collaborative table environment is used, then how access rights for controlling a force are managed must be considered.

With the current iteration of the equisFTIR table-top computer it is not possible to distinguish between users based on their fingers, so control of a force cannot be tied to a particular person in this way. This raises related questions: was the limitation of controlling only one force imposed because the clients were working on isolated computers, and would it still be necessary in a more collaborative paradigm where all of the individuals at the table can see what the others are doing and are in more direct, face-to-face communication? The answer could depend on other considerations, such as how many people can appropriately work at a single table-top and how individuals are grouped together. Are there currently groups of individuals performing the simulation who communicate frequently and could logically be grouped together at a single table?
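
As a rough illustration of this consideration, the sketch below assigns control of a force to a table rather than to an individual, since the surface cannot tell whose finger is touching it. The FORCE_OWNERS map, the may_control function, and all table and force names are hypothetical examples and do not correspond to any actual JCATS interface.

    # Illustrative sketch only: a hypothetical access-rights map in which a force is
    # assigned to a table (a shared workstation) rather than to an individual client.
    FORCE_OWNERS = {
        "blue_infantry": "table_1",
        "blue_armour": "table_1",
        "red_recon": "table_2",
    }

    def may_control(table_id, force_id):
        """Return True if anyone seated at the given table may control the force."""
        return FORCE_OWNERS.get(force_id) == table_id

    print(may_control("table_1", "blue_armour"))  # True
    print(may_control("table_2", "blue_armour"))  # False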

JCATS has a built-in messaging system to simulate e-mail and chat communications. Currently this allows a client computer to send messages to other clients or to the primary training audience. Since it is not possible to detect who is currently using the table, identifying whom a message was sent from becomes a problem. Is it necessary for messages to be sent from individuals, or would it be appropriate for them to be sent from the table as a whole?
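
One possible compromise, sketched below under assumed names (SimMessage, sender_person, sender_table are hypothetical, not part of JCATS), is to attribute a message to an individual when one can be identified and to the table as a whole otherwise.

    # Illustrative sketch only: a hypothetical chat message whose sender attribution
    # falls back to the table's name when the person at the surface cannot be identified.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SimMessage:
        body: str
        recipient: str                          # e.g. another client or the primary training audience
        sender_person: Optional[str] = None     # None when the toucher is unknown
        sender_table: str = "table_1"

        def sender(self):
            # Attribute the message to the person if known, otherwise to the table as a whole.
            return self.sender_person or self.sender_table

    msg = SimMessage(body="Request resupply at grid 1234", recipient="primary training audience")
    print(msg.sender())   # "table_1"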

Display Controls

The JCATS display shows a map of the terrain and the locations of entities on that map. JCATS allows the user to customize many preferences affecting the display, such as whether unit strength (health), shoot ranges with line of sight, or environment data are shown. If several individuals are using the same table, must these options be applied globally across the table, or can they be controlled in some other way? If they are set globally, will information displayed for one user be a hindrance to those to whom it does not pertain?

Different views, such as zoom level and location, can be stored for quick recall later. Panning and zooming can also be done on the fly during the simulation. If one individual zooms in, must the entire map view on the table zoom in, or is some other system needed for controlling which parts of the screen are zoomed in?
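
Whatever is decided about shared zooming, stored views amount to little more than a named centre coordinate and zoom level, as the sketch below suggests. The saved_views dictionary, the save_view and recall_view functions, and the coordinate values are all assumed for illustration.

    # Illustrative sketch only: hypothetical named view bookmarks (centre coordinate
    # plus zoom level) that a table could save and recall during the simulation.
    saved_views = {}

    def save_view(name, centre_x, centre_y, zoom):
        saved_views[name] = (centre_x, centre_y, zoom)

    def recall_view(name):
        return saved_views[name]

    save_view("objective_alpha", 51234.0, 48020.0, 4.0)
    print(recall_view("objective_alpha"))  # (51234.0, 48020.0, 4.0)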

Screen Real Estate

Currently the military uses a dual-display setup where the map display is shown on one screen and the control windows pertinent to the task are displayed on the second. Examples of these control windows are movement controls and direct-fire controls. Movement controls are responsible for controlling the paths that a force takes across the map. Direct-fire controls are used to manage targets, the weapons and munitions used, and when to fire weapons. If several users are sharing a single large display, how is the display partitioned, if at all, between areas used for controls and those used for displaying map data? Is it possible for control windows common to all individuals to be shared, such as a single movement control, or must every user have their own window to maintain efficiency? Is it necessary to maintain the current setup where all of the windows are shown at all times, or would it be more efficient to show them only when they are needed in the current context?

Another consideration is how large a display is feasible. Is there a minimum size sufficient to properly keep track of the simulation? Is there a maximum size beyond which the simulators cease to work collaboratively and return to more individualized control?

Voice Communication

During the simulation, the simulators (the individuals carrying out the simulation) can communicate with each other via headsets. This is currently done on an individual-to-individual basis. In the new collaborative paradigm of the table-top, is this still appropriate, or would communication occur on a table-to-table basis? Does a simulator use voice communication with only a small number of other people, or is there a wide range of people with whom they need to communicate? Would voice communication over headsets still be necessary if collaborating groups of individuals were stationed at the same table?

Priority Queues

Many missions in JCATS are given priority types, such as "as soon as possible", which moves the mission to the top of the priority queue. If many users are setting priorities concurrently, is there an issue with how priorities are assigned among the individuals at the table? For example, are the missions of a certain user more critical than those of another? If so, how will a user be prevented from pushing their less critical mission to the top of the queue?
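
One conceivable answer, sketched below using Python's standard heapq module, is to cap the priority each user may request. The PRIORITY and MAX_PRIORITY tables, the add_mission function, and the user and mission names are assumptions made for illustration, not part of JCATS.

    # Illustrative sketch only: a mission queue with a hypothetical per-user cap so that
    # a user cannot push a less critical mission above their allowed priority level.
    import heapq
    import itertools

    PRIORITY = {"asap": 0, "routine": 2}           # lower number = served sooner
    MAX_PRIORITY = {"user_a": 0, "user_b": 1}      # hypothetical cap per user

    queue = []
    counter = itertools.count()                    # tie-breaker preserving insertion order

    def add_mission(user, mission, requested):
        # Clamp the requested priority to the highest level this user is allowed.
        priority = max(PRIORITY[requested], MAX_PRIORITY[user])
        heapq.heappush(queue, (priority, next(counter), user, mission))

    add_mission("user_b", "resupply convoy", "asap")   # clamped to priority 1
    add_mission("user_a", "clear route red", "asap")   # keeps priority 0
    print(heapq.heappop(queue)[3])                     # "clear route red"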

Text Entry

Text entry is commonly used for assigning identifiers to missions, aggregates (groups of entities) and views so that they can easily be recalled later. JCATS currently allows free-form text entry; in other words, any string of characters is allowed as a name. This raises the question of whether a software or a hardware keyboard would be preferable. If the software approach is taken, can the possible inputs be condensed to a list of commonly used names, or do the names entered differ significantly between simulations?
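
If common names do recur between simulations, a software keyboard could offer prefix completion over such a list, as in the sketch below. The COMMON_NAMES list and the suggest function are hypothetical and the example names are invented.

    # Illustrative sketch only: prefix completion over an assumed list of commonly
    # used identifiers, as one way a software keyboard could shorten name entry.
    COMMON_NAMES = ["alpha company", "bravo company", "objective alpha", "route red"]

    def suggest(prefix, limit=3):
        prefix = prefix.lower()
        return [name for name in COMMON_NAMES if name.startswith(prefix)][:limit]

    print(suggest("al"))   # ['alpha company']
    print(suggest("ro"))   # ['route red']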

Number Entry

Number entry is used in JCATS for specifying altitudes, coordinates, lengths, and so on. These numbers are freely entered and can take on virtually any value. Is it possible to present the user with a set of commonly used values instead of asking for arbitrary values?
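
A middle ground would be to offer presets but still accept arbitrary values, as in the sketch below. The PRESET_ALTITUDES_M list, the pick_altitude function, and the 10 m snapping tolerance are assumptions for illustration only.

    # Illustrative sketch only: offering an assumed set of commonly used altitude
    # values while still allowing arbitrary numeric entry.
    PRESET_ALTITUDES_M = [0, 50, 100, 500, 1000, 3000]

    def pick_altitude(requested_m):
        # Snap to a preset when the request is close to one; otherwise keep the exact value.
        nearest = min(PRESET_ALTITUDES_M, key=lambda p: abs(p - requested_m))
        return nearest if abs(nearest - requested_m) <= 10 else requested_m

    print(pick_altitude(95))    # 100 (snapped to a preset)
    print(pick_altitude(730))   # 730 (kept as entered)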

Multi-touch

If multiple users are performing multi-touch gestures concurrently, then a problem arises in how touches are grouped together to represent one person. For example, with a two-handed gesture, how are the two touches grouped together to represent one gesture rather than two different people each performing a one-handed gesture?
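
One simple heuristic, sketched below, is to group simultaneous touches by distance, on the assumption that one person's two hands land closer together than touches from different people. The group_touches function, the max_gap threshold, and the touch coordinates are all assumptions made for illustration.

    # Illustrative sketch only: greedy grouping of simultaneous touch points by
    # distance; a touch joins a group if it is within max_gap of any member.
    import math

    def group_touches(points, max_gap=0.25):
        groups = []
        for p in points:
            for g in groups:
                if any(math.dist(p, q) <= max_gap for q in g):
                    g.append(p)
                    break
            else:
                groups.append([p])
        return groups

    # Two hands of one user near (0.2, 0.5) and another user near (0.8, 0.5).
    touches = [(0.18, 0.50), (0.30, 0.52), (0.80, 0.48)]
    print(len(group_touches(touches)))   # 2 groups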