ARP – Video of it in action

Watch the video at full resolution via the following link: https://vimeo.com/40966014
Or see a smaller version here…

CONCLUSION:

Well, we presented the project earlier today and it went really well. With a working prototype in front of them, the lecturers really seemed to understand the project and the amount of work we have put into it.

In terms of problems encountered whilst presenting, we hit one really big issue: there was no power going through to the Kinect or the Arduino. After about 20 minutes of panicking and stalling the lecturers, we realised this was down to the power sockets in the newly refurbished room. Once we had worked out we were plugged into a broken socket, we switched to another one and presented our project.

Moving away from the presentation to the project overall, it went really well. Initially we decided to work separately on the different parts of the project: the hardware, the software and the graphics. However, after a couple of weeks of development work we realised we needed to work more closely together, because the Kinect detection and collision detection depended heavily on how and where the hardware was set up.

If we were to work on the project again, we would set aside planning days early on so that each member of the group knew exactly what the project was and how it worked at every stage.

Overall though, we are very happy with the end product and really proud of what we have achieved with the technology available. We might work on this project a little more in the future, tweaking the collision detection and the facial import, although due to time constraints and the availability of the technology this might be difficult.

Creating an Immersive Experience

At the moment, we have created a system that is meant to give the feeling of immersion in a new world: a world in which you recognise the objects blocking your way. These objects are your own, and the level has been created by you. Every annoying obstacle stopping you from defusing the bombs has been placed there by you.

This feeling of immersion is the backbone of our game, and to further help with this experience we have decided to add another small tweak to the project. Not only does the game scan your level, it now also scans you! Using a webcam, the game scans your face and the expressions you make and adds them to the game.

At the beginning of the ARP experience you, as the user, are asked to take three photos: one where you are just looking normal, used when running; one where you are angry, used when trying to defuse a bomb or falling from a high platform; and finally one where you are looking happy, used after you defuse a bomb and celebrate your achievement.
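
For anyone wondering how the snapshots can be grabbed inside Flash, below is a minimal sketch using the standard Camera, Video and BitmapData classes. The captureFace function, the faces object and the 320×240 size are illustrative only, not our actual code.

import flash.display.BitmapData;
import flash.media.Camera;
import flash.media.Video;

var camera:Camera = Camera.getCamera();
var video:Video = new Video(320, 240);
video.attachCamera(camera);
addChild(video);   // live preview so the user can pose for each photo

// Called once per expression: "normal", "angry" and "happy".
function captureFace(label:String, faces:Object):void {
    var snapshot:BitmapData = new BitmapData(320, 240);
    snapshot.draw(video);      // copy the current webcam frame
    faces[label] = snapshot;   // keep the snapshot for use in-game
}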

My default face...

Why so sad?

Happy, Happy, Happy

This extra feature really helps us sell the ARP as an immersive experience and really helps transform the real world into a virtual one.

Fine Tuning

Character size tweaked, character speed tweaked, level size tweaked. Yet the game is still not looking correct.

This one isn’t to do with the code: the main hardware is so sensitive to how it is positioned that everything needs to be carefully considered.

To get the best results from the scanning, it’s important to make sure the Kinect and the rest of the hardware are the correct distance away from the obstacles. If the Kinect is an inch too close and picks up the floor as an obstacle, the whole project breaks.

So after much fine tuning, we have determined that the following measurements must be kept at all times whilst playing the ARP game.

Distance from the Kinect to the front of the obstacles: 32″

Distance from the Kinect to the floor: 38″

Distance from the Kinect to the table: 7″

If these distances are kept, then the ARP should run smoothly.
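
To make them easy to reference (and tweak) in the code, these measurements can live in one place. The constant names below are just for illustration; the values are the inches listed above.

// Rig measurements, in inches, as determined above.
const KINECT_TO_OBSTACLES:Number = 32; // Kinect to the front of the obstacles
const KINECT_TO_FLOOR:Number     = 38; // Kinect to the floor
const KINECT_TO_TABLE:Number     = 7;  // Kinect to the table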

Testing the Setup

We have finally been able to match up the early version of the game with the main Kinect device, including the 8ft pole attached to it.

It’s great to see all of these aspects of this project combining, and with the graphics being added constantly it’s really starting to take shape.

The Kinect device sitting in its perch.

Setting up the game and the main hardware.

Our next step was to integrate the code that controls the Arduino with the code that controls the way the game is played. This actually proved to be a really smooth link and didn’t give us any problems. We were a bit surprised!

Once the project was set up, we decided to play around with the game alongside the hardware. Straight away we realised some tweaks were needed, the main ones being the speed at which the character moves across the level and the length of the level itself.

Dropping Bombs

To make the game more fun, and to add an element of danger to playing it, we have decided to add a hazard that the user will need to reach within a certain time.

This hazard is going to be a bomb.

Each bomb is dropped at a random location inside the artificial sprite I mentioned earlier; the user then has to get across the level to the bomb and defuse it before it explodes. To defuse the bomb the user must press the ‘D’ key and shoot a special ray from the character’s chest! If the user successfully defuses the bomb, he/she gains 100 points.

"Arpman" defusing a bomb!

If the user doesn’t get to the bomb in time, the bomb starts to flash and then explodes, taking a life from the gamer and awarding no points. The user starts with 3 lives; when all of them have been used the game ends and the user is presented with a final score.
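
For anyone interested in the mechanics, the sketch below shows roughly how that bomb logic can be wired up in ActionScript, written as a frame script where stage is directly available. The fuse length, the defuse distance and the stand-in Sprites are example values and placeholders rather than our real classes; only the ‘D’ key, the 100-point reward and the 3 lives come from the game itself.

import flash.display.Sprite;
import flash.events.KeyboardEvent;
import flash.events.TimerEvent;
import flash.utils.Timer;

var worldSprite:Sprite = new Sprite(); // stand-in for the long artificial stage
var player:Sprite = new Sprite();      // stand-in for Arpman
var score:int = 0;
var lives:int = 3;

function dropBomb():void {
    var bomb:Sprite = new Sprite();        // stand-in for the bomb graphic
    bomb.x = Math.random() * 4000;         // random spot along the ~4000px stage
    worldSprite.addChild(bomb);

    var fuse:Timer = new Timer(10000, 1);  // example fuse length: 10 seconds
    fuse.addEventListener(TimerEvent.TIMER_COMPLETE, function(e:TimerEvent):void {
        worldSprite.removeChild(bomb);     // too late: the bomb explodes
        lives--;                           // lose a life, gain no points
    });
    fuse.start();

    stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
        // 'D' (keyCode 68) defuses the bomb, but only while the fuse is still
        // running and the character is close enough to it
        if (e.keyCode == 68 && fuse.running && Math.abs(player.x - bomb.x) < 50) {
            fuse.stop();
            worldSprite.removeChild(bomb);
            score += 100;                  // a successful defuse is worth 100 points
        }
    });
}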

The final score screen.

Collision Detection

The most important aspect of the game is the Kinect integration.

Using the Kinect, the ARP scans the user’s level in real time and creates obstacles that the user will need to get across in order to defuse the bombs that are landing.

This is done by pulling the Kinect feed into Flash and grabbing the depth scan, which takes place every frame.

The game then reads the character’s position in relation to the scanned background and determines whether there is an obstacle nearby. If there isn’t, the user can continue; if there is, the user is stopped in their tracks.

This also works if you try to jump onto an obstacle: the game stops the user from falling and creates a platform for them to stand on.

Below is a sample scan from the game. Normally the user would be shown the video feed from the Kinect, but in this case they are shown the depth scan.

The white sections are the obstacles; notice that the floor is also white. This is something that needed to be addressed to ensure the game works properly.

In the image above you can see the depth scan as it comes through from the game. The main problem here is that the floor also shows up as an obstacle (white); this needed to be adjusted so that the floor does not become a hit object, only the correct objects to the right and left. The object on the right is actually a Buzz Lightyear doll.
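
A simplified version of that check is sketched below: it treats the depth scan as a greyscale BitmapData in which bright pixels are obstacles, and simply refuses to count anything at or below a floor cutoff row. The function name, the 10px look-ahead, the cutoff and the brightness threshold are illustrative values, not the ones used in the final game.

import flash.display.BitmapData;

// depthScan is assumed to already contain the latest Kinect depth frame,
// drawn so that close objects (the obstacles) come out white.
function isBlocked(depthScan:BitmapData, charX:int, charY:int, floorCutoffY:int):Boolean {
    // Anything at or below the cutoff row is the floor, so never treat it as a hit object.
    if (charY >= floorCutoffY) return false;

    // Sample the pixel just ahead of the character; bright means an obstacle is there.
    var pixel:uint = depthScan.getPixel(charX + 10, charY);
    var brightness:uint = pixel & 0xFF;   // any channel works on a greyscale scan
    return brightness > 200;              // threshold picked by trial and error
}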

Development – Handling the visuals

To ensure that the final product does not get bogged down by the amount of visuals being used, it was important to make sure the graphical assets load in properly. As the actual stage size is 1024×768, it was very important to bring in the assets separately from the main loading.

To do this, we have created a small swf which houses all of the required graphics.

The craziness that is the swf. Each object has to have its own MovieClip and a unique name.

This means that with a small swf being loaded in, we are not adding to the initial load times for the main game, which ensures the users are not left waiting for something to happen.
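
Loading a library swf like this only takes a few lines of ActionScript. The sketch below shows the general pattern using Loader and the loaded swf’s application domain; the file name "assets.swf" and the linkage name "BombGraphic" are examples rather than the real names in our library.

import flash.display.Loader;
import flash.display.MovieClip;
import flash.events.Event;
import flash.net.URLRequest;

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onAssetsLoaded);
loader.load(new URLRequest("assets.swf"));  // the small swf holding the graphics

function onAssetsLoaded(e:Event):void {
    // Pull a MovieClip class out of the loaded swf by its linkage name...
    var BombClass:Class = Class(loader.contentLoaderInfo.applicationDomain.getDefinition("BombGraphic"));
    // ...so the main game can create instances without ever embedding the artwork itself.
    var bomb:MovieClip = new BombClass();
    addChild(bomb);
}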

Our competition = Microsoft

This may be a little strange, but at the moment our competition for the ARP is actually Microsoft. I do think they have a massive advantage over us, as they did create the device. However, I think that with a strong final push we can create something on a par with this:

Full article found here.

Developing the game side of the project – Part 1

So for the next couple of posts, we’re going to explain how the game side of the project was built and how we have used past practices to help us reach our objective.

This whole project has been a huge learning curve for each of us, and the majority of the work we have done has been created from scratch. There has been one part of the game where we have been able to use our past knowledge to help us: the collision detection. As we had already worked with the Kinect before – https://arp301.wordpress.com/2012/02/29/previous-version/ – we were able to lift some of the collision detection from that work to help us here. This was particularly handy given that the Kinect is a relatively new piece of hardware and there isn’t much support online from other users.

So, with all our previous projects in this module examined, we set about creating something that would immerse the user and really have them feeling they were inside a game. The first major challenge we faced was with the Kinect device itself: to use the Kinect’s depth perception to its maximum potential, we had to make sure it could scan as much as possible. To achieve this we needed either to place the Kinect far enough away that it could capture the whole gaming stage, or to move the Kinect as the user played the game. We decided to go with the latter.

This meant that the game itself needed an artificial stage. Normally when creating a game you would build the background as one of the earliest steps; however, as our background was being created by the moving Kinect, we had to ensure the game area was wide enough for users to move along the gaming platform. This was achieved by creating an artificial Sprite, roughly 4000 pixels long, to which all gaming objects were added and which moved along as the user moved the camera.
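
Stripped right down, the artificial stage is just one very wide Sprite that everything gets added to, with the whole container nudged left or right as the camera moves. The sketch below illustrates the idea; the backdrop fill, the scroll speed and the stage height are example values only.

import flash.display.Sprite;

var worldSprite:Sprite = new Sprite();
worldSprite.graphics.beginFill(0x222222);        // placeholder backdrop
worldSprite.graphics.drawRect(0, 0, 4000, 768);  // roughly 4000px wide, one stage tall
worldSprite.graphics.endFill();
addChild(worldSprite);   // obstacles, bombs, Arpman etc. all go inside worldSprite

function scrollWorld(direction:int):void {
    var scrollSpeed:Number = 8;   // example value
    // Sliding the whole container the opposite way makes the player appear to advance.
    worldSprite.x -= direction * scrollSpeed;
}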

Arduino side of the project… completed

After some more test runs I can say it’s all working pretty damn well, if I do say so myself. I’ve now connected the Arduino to the motor, and the Arduino is talking and listening to Flash, allowing complete control: holding the left and right keyboard keys drives the motorised movement, and releasing the left/right key simply stops the motor, as planned.

See the video below; there’s not much more to say about it really, it just works great!
Oh, and if you turn the sound up you can hear the motor going when I’m holding the left and right keys down… to prove that it’s not some sort of visual hoax 😉

There’s only one notable issue at the moment that’s really bothering me and that I can’t seem to change for the better: to get the full signal required through the Arduino to turn the motor on, you have to kind of double-tap the left or right key, as you can see in the video where I’m always double-tapping. This isn’t the end of the world for demo purposes, but it’s certainly not ideal and something I’ll hopefully look into and fix. I know why it’s doing it, but no matter what I do to the code it only seems to destabilise the double tap, which currently works fine, and make it a bit random… it’s hard to explain without actually doing it and seeing it with the LEDs plugged in instead of the motor. I used the LEDs a lot for testing: as long as they lit up in the correct fashion, I knew the same signal would power the motor fine.
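
For context, here is roughly what the Flash side of that left/right control can look like. Flash can’t open a serial port directly, so this sketch assumes a serial-to-TCP proxy (something like serproxy) sitting between Flash and the Arduino; the host, port and the single-character ‘L’/‘R’/‘S’ commands are illustrative rather than our actual protocol, and it is written as a frame script where stage is available.

import flash.events.KeyboardEvent;
import flash.net.Socket;
import flash.ui.Keyboard;

var arduino:Socket = new Socket();
arduino.connect("127.0.0.1", 5331);   // example proxy address and port

stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
    if (e.keyCode == Keyboard.LEFT)  send("L");   // spin the motor one way
    if (e.keyCode == Keyboard.RIGHT) send("R");   // spin it the other way
});
stage.addEventListener(KeyboardEvent.KEY_UP, function(e:KeyboardEvent):void {
    send("S");                                    // releasing the key stops the motor
});

function send(cmd:String):void {
    if (arduino.connected) {
        arduino.writeUTFBytes(cmd);
        arduino.flush();
    }
}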

Let the game development and testing begin!