Life is a Table: Midterm Progress Report

Written by Joey Pape

Introduction

This report summarizes the progress made on the Life is a Table project so far, as well as the plan for the work remaining in the term. The report assumes that the reader is familiar with the general aim of the project.

Original Proposal

The proposal from the beginning of the term laid out the following three steps for the project:

  • Step 1: Implement an input handler for Life is a Village (LIAV) to receive touch, release and drag events from EquisFTIR. Use these events to implement movement using multitouch controls. This step was presented as the opportunity to evaluate the performance of LIAV running with EquisFTIR, as well as to make decisions regarding camera angle and movement controls.
  • Step 2: Implement simple gesture recognition into the input handler. Use these gestures to zoom the camera, and to replace whatever keyboard controls are still present from the original version of LIAV.
  • Step 3: Expand on LIAV. This step was kept deliberately vague, so that the decision of how the final game would play could be made as the project progressed.

Progress so Far

The following describes how each of these steps has progressed since the beginning of the term:

  • Step 1: Movement controls have been implemented as they will appear in the tabletop version of the game; however, the current version simulates touches with mouse events. An input handler for EquisFTIR has not been written yet, so performance has not been tested. The following tasks were completed to implement movement with the mouse:
    • The camera position has been changed from behind the robot to directly above it.
    • The camera rotation is fixed relative to the world, as opposed to relative to the robot (effectively maintaining that "up is always north").
    • The position of the mouse is rendered as a 3D cube mesh. This was an important step, as rendering the mouse in OGRE is non-trivial, and this allows the controls to be tested on a desktop.
    • Clicking causes the robot to walk towards the mouse.
    • The gear changes between 0 and 5 based on how far from the robot the player clicks (gear 0 has been redefined to mean 'not moving'); a sketch of this mapping appears after this list.
  • Step 2: Step 2 has been removed from the project. This decision was made because, after completing step 1, implementing the rest of the controls from the original LIAV would not have translated into progress on step 3. The project will continue directly to step 3 in the hope of producing a more impressive final product.
  • Step 3: The gameplay of the final game has been designed. The game will involve using the touch movement controls to walk the robot to a goal somewhere on the map. An arrow will point the robot in the direction of the goal. While attempting this task, the robot will be attacked by ninjas: NPCs that constantly try to walk toward the robot. If a ninja reaches the robot, the player loses. The robot can shoot a laser at the ninjas using some touch control (discussed in more detail later).
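
The following is a minimal sketch of the distance-to-gear mapping used by the movement controls. The threshold constants are hypothetical placeholders; the actual values are tuned by hand.

  #include <cmath>

  // Returns a gear from 0 (not moving) to 5 (fastest) based on how far
  // from the robot the touch or click landed, in world units.
  int GearForDistance(float dx, float dz)
  {
      const float distance = std::sqrt(dx * dx + dz * dz);

      const float kDeadZone  = 1.0f; // hypothetical: inside this radius means gear 0
      const float kRingWidth = 2.0f; // hypothetical: width of each gear ring

      if (distance < kDeadZone)
          return 0;

      const int gear = 1 + static_cast<int>((distance - kDeadZone) / kRingWidth);
      return gear > 5 ? 5 : gear;
  }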

Future Work

The remainder of the report summarizes the work that needs to be done between now and the end of the project. My hope is to complete at least steps 0-3 by the end of the term; regardless, the completion of each step results in a playable demo.

Step 0: Write the EquisFTIR Input Handler

The resources appear to be in place to interface EquisFTIR with a C++ project, so hopefully I will be able to write the code necessary to handle touch events in LIAV by the end of the week. I do not anticipate much more resistance in this area of the project, but it is worth noting that, should some insurmountable barrier arise, the game would still be playable as a PC game. I do not intend to resort to that, though.
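
As a rough sketch of the handler's shape, the following assumes EquisFTIR delivers touch, release and drag events through some listener interface; the TouchEvent struct and the callback names are hypothetical stand-ins for whatever EquisFTIR actually exposes. The point is only that each event gets routed through the same path the mouse-based prototype already uses.

  // Hypothetical event record; EquisFTIR's real interface may differ.
  struct TouchEvent
  {
      int   id;   // finger/contact id
      float x, y; // screen-space position
  };

  class LIAVTouchHandler
  {
  public:
      // A new contact appears on the table: treat it like a mouse click.
      void OnTouch(const TouchEvent& e)   { HandlePress(e.x, e.y); }

      // A contact moves: treat it like a click at the new position.
      void OnDrag(const TouchEvent& e)    { HandlePress(e.x, e.y); }

      // A contact lifts off the table.
      void OnRelease(const TouchEvent&)   { /* stop steering */ }

  private:
      void HandlePress(float x, float y)
      {
          // Project (x, y) into the world, set the robot's walk target,
          // and pick a gear -- exactly what the mouse handler does today.
      }
  };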

Step 1: Create the Goal for the Robot

This step will simply involve designating somewhere on the map (maybe randomly) as the goal for the robot to reach. It would likely be displayed as a special tree, similar to the harvesting trees in LIAV. When the robot gets within a certain distance of the tree, the player has won.

This step also requires an arrow which points the player in the right direction. This shouldn't be too difficult to implement once the tree is in place. However, if it becomes more important to move on to the next step, the arrow might simply be rendered as a cube, without worrying about rotating it in the right direction.
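
A minimal sketch of the win check and of the arrow's orientation, assuming the robot and goal positions are available as OGRE vectors; kWinDistance is a hypothetical threshold, and the axis convention in ArrowYaw would need to match LIAV's.

  #include <OgreVector3.h>
  #include <OgreMath.h>

  const float kWinDistance = 5.0f; // hypothetical "close enough" radius

  bool HasReachedGoal(const Ogre::Vector3& robot, const Ogre::Vector3& goal)
  {
      return robot.distance(goal) < kWinDistance;
  }

  // Yaw about the vertical axis that would point the arrow from the
  // robot toward the goal, once the arrow gets a real orientation.
  Ogre::Radian ArrowYaw(const Ogre::Vector3& robot, const Ogre::Vector3& goal)
  {
      const Ogre::Vector3 d = goal - robot;
      return Ogre::Math::ATan2(d.x, d.z);
  }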

Step 2: Introduce Ninjas Into the Game

This is likely going to be the most significant step remaining. New architecture needs to be introduced into LIAV to handle ninjas. This task will largely involve replicating the architecture already in place to handle the villagers in LIAV, with some significant differences:

  • The mesh will be replaced with the ninja mesh included in OGRE. The mesh also includes animations such as walking, attacking and dying which can be used in the game.
  • The AI would be greatly simplified from that of the villagers. The states of the ninja would be 'Pursue,' 'Attack' and 'Die.' The ninja would be in constant pursuit of the robot, using the A* path-finding algorithm already written into LIAV. Upon coming within a certain distance, the ninja would enter the attack state, and the player would lose. Upon being shot by the robot, the ninja would enter the die state and disappear (after performing the die animation).
  • An object which randomly drops ninjas into the world will be implemented. Each ninja dropped into the world will start at a position off screen. This will be accomplished by always dropping ninjas at a constant distance from the robot, but at a random angle, effectively creating a ring centered on the robot which surrounds the viewing area without entering it, on which ninjas are always generated (a sketch of this scheme appears after this list). The frequency at which ninjas are generated will increase (within reason) as the robot nears the goal; before the goal is implemented, the frequency could either be constant or depend on elapsed game time.
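
A sketch of the ninja states and of the off-screen spawn ring described above, in plain C++; the spawn radius is a hypothetical placeholder for whatever distance keeps ninjas just outside the viewing area.

  #include <cmath>
  #include <random>

  enum class NinjaState { Pursue, Attack, Die };

  // Picks a spawn position at a constant distance from the robot, at a
  // random angle, so the ninja always starts just off screen.
  void SpawnPosition(float robotX, float robotZ, float& outX, float& outZ)
  {
      static std::mt19937 rng{std::random_device{}()};
      std::uniform_real_distribution<float> angle(0.0f, 6.2831853f); // [0, 2*pi)

      const float kSpawnRadius = 60.0f; // hypothetical: just outside the view
      const float theta = angle(rng);

      outX = robotX + kSpawnRadius * std::cos(theta);
      outZ = robotZ + kSpawnRadius * std::sin(theta);
  }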

Step 3: Implement the Robot's Laser

This step will be broken up into the following smaller tasks.

  • Implement the Laser Firing: This may seem synonymous with the larger step, but this task has nothing to do with the user input which triggers the laser being fired. In fact, it will likely be tested initially by having the laser fire constantly for the entire game. First the laser needs to be displayed. This will likely look like a red line extending from the robot to off the screen (this would be a convenient time to reuse the distance at which ninjas appear as the length of the laser). When the laser is on, it will search the existing ninjas to see if any of them fall inside some angle of the laser's trajectory (a sketch of this test appears at the end of this step). If they do, they enter the die state.
  • Implement Laser Controls: A control scheme needs to be decided on for firing the laser. Right now I am leaning towards having any touch that lands close enough to the robot to mean 'gear 0' in the movement controls fire the laser in the direction the robot is currently facing. This has the advantage of easily handling shooting while moving ("running and gunning," if you will) as well as shooting while stationary. Depending on how this control scheme performs, a virtual button on the edge of the screen which fires the laser when pressed might also be included.

These are the minimum requirements for a player-controlled laser. Step 4 would make the laser a little more interesting. Both steps will likely be done mostly within the CAXAvatar and CAXAvatarData objects.
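
As a sketch of the angular hit test mentioned under laser firing, assuming positions on the ground plane and a facing angle measured the same way as atan2; the half-angle tolerance is a hypothetical value that would need tuning, and a caller would also check that the ninja is within the laser's length.

  #include <cmath>

  bool LaserHits(float robotX, float robotZ, float facingRadians,
                 float ninjaX, float ninjaZ)
  {
      const float kHalfAngle = 0.1f; // hypothetical: roughly a 6 degree half-width

      // Angle from the robot to the ninja, measured like the facing angle.
      const float toNinja = std::atan2(ninjaX - robotX, ninjaZ - robotZ);

      // Smallest signed difference between the two angles, wrapped into (-pi, pi].
      float diff = toNinja - facingRadians;
      while (diff >  3.14159265f) diff -= 6.2831853f;
      while (diff < -3.14159265f) diff += 6.2831853f;

      return std::fabs(diff) <= kHalfAngle;
  }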

Step 4: Implement a 'Laser Overheating' Mechanic

This step shouldn't be too difficult, assuming that step 3 is completed. There would essentially be a heat indicator somewhere on the screen. Whenever the robot is firing the laser, the heat indicator increases gradually. If the laser is not being fired, the heat indicator gradually decreases. If the indicator maxes out, the laser cannot be fired until the indicator decreases back to zero. This mechanic is inspired by similar mechanics in games such as Halo and Mass Effect. It would be implemented by displaying a bar on the screen (representing heat) and adjusting its length as described. When the laser is overheating, the bar could be highlighted by surrounding it with a red box. Time permitting, the heat could also be reflected in the color of the robot.
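
A minimal sketch of the overheating mechanic, updated once per frame; the heating and cooling rates are hypothetical numbers, and drawing the bar would happen elsewhere.

  struct LaserHeat
  {
      float heat       = 0.0f;  // 0.0 (cool) .. 1.0 (maxed out)
      bool  overheated = false; // locked out until heat returns to zero

      // Call once per frame with the frame time in seconds and whether
      // the player is trying to fire.
      void Update(float dt, bool firing)
      {
          const float kHeatRate = 0.5f;  // hypothetical: ~2 s of fire to overheat
          const float kCoolRate = 0.25f; // hypothetical: ~4 s to cool from full

          if (firing && !overheated)
              heat += kHeatRate * dt;
          else
              heat -= kCoolRate * dt;

          if (heat >= 1.0f) { heat = 1.0f; overheated = true; }
          if (heat <= 0.0f) { heat = 0.0f; overheated = false; }
      }

      // The laser may fire only while not locked out.
      bool CanFire() const { return !overheated; }
  };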