100 Days of VR Game Dev
Several months ago, having never programmed before, I spent two weeks teaching myself how to program in C# and use Unity so that I would be able to better understand VR development tools. What surprised me was just how much you could learn in so short an amount of time. For months now I’ve been telling people that if you want to make games, you can achieve a professional level of skill in as little as 3 months of dedicated time, thanks to tools like Unity and a plethora of online learning resources.
I had avoided diving in until now because I wanted to focus on design and writing, but have since realized that being able to directly prototype will be invaluable. Not to mention, with the developer shortage we currently have, I could pick up freelance work to bootstrap my own VR games.
So, I’ve decided to take my own advice, and prove that I’m not full of shit. For the next 100 days I will be focused on teaching myself C# game development with Unity. My plan is to document every resource I take in, so that other aspiring developers can have a potential roadmap (I’ve found that one of the biggest hurdles for people learning online is not knowing what to look for or how to look for it). I’ll be publishing daily updates here, with some commentary, so that those interested can follow along. Also, a large part of learning is making games, so if anyone is interested in collaborating, you can reach me at email@example.com.
My development knowledge on Day 1 was 2 weeks of learning C# and the Unity editor (which I did 9 months ago). Because of this, if you are a total beginner to programming, you may find it useful to first complete the Microsoft Academy C# course. The book I start with on Day 1 also introduces beginners to programming and Unity (I’m using it for review).
^I needed to optimize audio for my game, so did some research + testing and shared my findings
Whelp, just like that, 100 days of full-time game development has come and gone. Today I finished optimizing all of Cytopia’s scripts (a first pass anyway), and did some bug fixing. The game’s first level is very close to completion; all that’s left is to find voice actors, refine the audio, and fix bugs. Some animation and effect touch-up wouldn’t hurt either, but it’s less of a priority.
I’m going to make a post summarizing some of the things I learned when I get a chance. My #1 priority right now is getting this game refined enough to show Oculus, with the hope that they’ll at the very least be able to supply early hardware for testing, and at best give me enough out of their indie fund to ensure I can work on Cytopia for the rest of the year (which would only require about 10–15k). Having a finished product I’m proud to show will also allow me to start building interest, since it’s going to take a lot of momentum for a Kickstarter to be successful with so few people owning headsets. Private funding would be ideal to take this to the next level, but as I’ve learned through trial and error, raising money is not easy, especially for games. But I’ll just focus on making something awesome that no one has seen before, and hopefully the rest will follow.
Also, someone mentioned to me they were starting to learn and looking for people to learn with, and I realized it might be a fun idea to have an online meetup for game learners and devs. I’ll be posting details at the top of this post once I settle on a time and format.
Woot woot, here’s some rigidbody shenanigans to close out:
^how to apply force to a ragdoll
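In case it helps, here’s the general shape of what that resource covers: switching a character from animation to ragdoll physics and shoving it with the hit force. This is a minimal sketch with illustrative names (RagdollDeath, hitForce), not my actual script:

```csharp
using UnityEngine;

public class RagdollDeath : MonoBehaviour
{
    public void Die(Vector3 hitForce, Rigidbody hitPart)
    {
        // Disable the animator so physics takes over the limbs
        GetComponent<Animator>().enabled = false;

        // Un-freeze every rigidbody in the ragdoll hierarchy
        foreach (Rigidbody rb in GetComponentsInChildren<Rigidbody>())
            rb.isKinematic = false;

        // Shove the specific limb that was hit
        hitPart.AddForce(hitForce, ForceMode.Impulse);
    }
}
```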
Making a good object pooling script turned out to be a little more difficult than I thought it would be, but once I understood that the tutorial examples I was seeing were limited, and switched tracks to “how would I hard code this”, an elegant answer came to me.
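The core of the pooling approach, stripped down to a sketch (class and field names are illustrative, not from my project): pre-spawn a batch of disabled copies up front, then hand out the first inactive one instead of calling Instantiate/Destroy at runtime.

```csharp
using UnityEngine;
using System.Collections.Generic;

public class SimplePool : MonoBehaviour
{
    public GameObject prefab;
    public int size = 20;
    private readonly List<GameObject> pool = new List<GameObject>();

    void Start()
    {
        // Pay the instantiation cost once, at load time
        for (int i = 0; i < size; i++)
        {
            GameObject obj = Instantiate(prefab);
            obj.SetActive(false);
            pool.Add(obj);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        // Hand out the first object that isn't currently in use
        foreach (GameObject obj in pool)
        {
            if (!obj.activeInHierarchy)
            {
                obj.transform.position = position;
                obj.SetActive(true);
                return obj;
            }
        }
        return null; // pool exhausted; you could grow it here instead
    }
}
```

"Despawning" is then just `obj.SetActive(false)`, which returns the object to the pool.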
^refresher on accessing children gameobjects
^texture atlas research
My CPU speed in the profiler was randomly slower than it had been the other day by a factor of 10, but I hadn’t changed anything that should have caused that. After some troubleshooting I noticed my hard drive was almost full, which meant that my Unity GI cache hadn’t been cleared in a while. Clearing it freed up 10 GB of space, and fixed the problem.
Today I set certain parts of the level to spawn and despawn at different times so that I’d never have more than 200 or so draw calls in any one frame. I also did some level redesign for an encounter that wasn’t giving the effect I wanted.
Using the Unity Profiler, I noticed there were massive spikes (63ms!) happening whenever I instantiated a sound that plays on awake. I need to optimize the sound files themselves, and come up with a better way of playing them (probably through object pooling).
^sound optimization tips
My major bottleneck is very clearly draw calls right now. The consistent draw-call count at spawn is 665, about 465 higher than the max I want to be at (Gear VR apps are recommended to stay under 100 draw calls, so I figure I can get away with double that on a dedicated 970 or better graphics card). The first step I took was to disable each large GameObject grouping (mostly organized by scene) one by one with the profiler on, and record how many draw calls each section took. By doing this, I was able to identify which sections were causing the largest draw-call overhead (so that I can focus my efforts on the big offenders). I’m glad I did this, because out of 21 sections examined, 3 account for 468 draw calls. One surprising offender was my grouping of colliders, which was taking up 166 draw calls. I soon realized that Occlusion Culling was not working correctly.
^interesting draw call thread
^fixed timestep, movement physics
^FixedUpdate (camera should never be moved in it)
^combine draw calls by combining identical meshes
^more optimization tips; OnBecameInvisible can be used to disable scripts when an object is no longer visible to any camera
^more optimization tips
^how to do Occlusion Culling. I figured out that mine was not functioning, despite being enabled, because there is additional setup you have to do
^a way to batch meshes
Debugging! I went through my game’s code and fixed all the known bugs revealed by playtesting (and some caused by changes I made from debugging). Theoretically Cytopia’s first level should now be playable end-to-end without any bugs (although there’s definitely more playtesting needed to say that with more certainty). Other than detail polish and sound, the last large obstacle is optimization. When I first did some preliminary checking yesterday, I was at 1200 draw calls (which is a ridiculous number). Switching the lighting to baked halved that, but I still need to bring that figure much lower. On top of that bottleneck, there are almost certainly other areas of scripting that need optimization (if only to make garbage collection more reliable). The journey continues!
^ways to increase performance in Unity
^VR optimization tricks
^Unity mobile VR optimization
Today I added special effects to the game with help from the Unity Asset Store.
^pack I bought
^cool VR game dev article
^mix animations (legacy)
^detach child objects
I finished putting in animations (and adding guns/needles to hands). When I have time, money, or some help I’ll want to clean them up a bit, but they’re good enough for now.
I’m continuing to apply the new animations to my characters, as well as refactor the old code that used to trigger them. In case you’re curious, the workflow for this has been to: make the character model humanoid (Unity has this option to allow easy retargeting), export the animation from Perception Neuron’s software as FBX binary, import the animation, convert the animation to humanoid, create a new animator controller, fill the new controller with the imported animations, edit the start/end points of each animation to the take I want so they blend with the animations before/after, set the transitions, then go through the AI code making necessary changes. I’m using a lot more animation events to trigger things now, rather than timing things manually in code as before (which was a huge pain in the ass).
^best way to have part of an animation clip loop
Using the mocap data I captured yesterday, I started to retarget the data for my character models (which have slightly different skeletons), as well as adjust old character code. Not a ton was accomplished since I was up late the night before finishing the mocap, and was fairly sore from doing all the movements (although my recovery from surgery is going well, performing 21 different movements 4+ times each is not something my muscles are used to yet).
Using my newly acquired Perception Neuron motion capture suit, I captured multiple takes of the 21 different animations I plan to use in Cytopia’s first level. The quality of the motion capture varies from take to take, with “more perfect” calibration resulting in better data. I’ll definitely do a full, very detailed review of the product when I’ve used it a bit more and have some spare time, but suffice it to say that the kit gets the job done and is amazing value at $1,500, with the next closest comparable products carrying price tags around $10,000. As they improve the technology and scale up manufacturing, it’s going to be a game-changer for low-cost animation, and hopefully eventually full-body VR capture.
I successfully added UI hints! Luckily the way I constructed my “death class” makes it really easy to set triggers after a certain number of failures. The most annoying part is taking the HMD on and off to get things positioned where they look good.
^how to play video on a quad in Unity
^$10 controller models for Unity (used to show what buttons to press)
I’m now doing UI work to ensure people know how to play without me there. VR UI is definitely a tricky beast. A lot of experimentation is happening in the area, and I unfortunately don’t have the time to invest significant attention to it. Through college I dabbled in and considered focusing on UX design, so I’m well aware that people can spend their entire full-time job figuring out how to place UI in a way that works best for people. However, since my game is UI-less except for helpful hints (and an aiming laser), I’ll start simple but functional, allowing upgrades when more time allows.
^how to translate Photoshop transparency into Unity
^how to show transparency on textures via materials
Yesterday and today have mostly been spent planning the next major things I need to do to get Cytopia ready: namely, making a list of all the animations I need to record with my Perception Neuron (21!) and planning the UI elements I’m going to add so players know the controls without me telling them. Also, you wouldn’t believe how hard it is to find a 3D controller model. Good ones are $50, and there’s a low-poly one I’m looking at for $10. I think I remember Oculus saying they would provide developers with a 3D model of an Xbox controller sometime in the future, so I’m going to go with the low-poly model for now and upgrade later.
I entered a contest where 20 winners will get a Vive Pre and a ticket to see third-party Vive games in development:
If you win, let me know! I might be there with you.
Today I created a “blink” effect that can be called very easily from any script (via static variables), as well as created an effect for when you get injected with something.
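The “callable from any script” part is the interesting bit. A minimal sketch of the pattern, assuming a singleton-style static reference (BlinkEffect and its members are illustrative names, not my actual class):

```csharp
using UnityEngine;

public class BlinkEffect : MonoBehaviour
{
    // Set once in Awake, then reachable from anywhere without a
    // GetComponent or Find call
    public static BlinkEffect instance;

    public float blinkDuration = 0.3f;

    void Awake() { instance = this; }

    public void Blink()
    {
        // e.g. start a coroutine here that closes and reopens
        // "eyelid" quads over blinkDuration seconds
    }
}
```

Any other script can then trigger it with a single line: `BlinkEffect.instance.Blink();`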
Today was a bit frustrating, but another learning lesson. I spent the whole day debugging and trying to fix places my game was unexpectedly crashing, then finally discovered that the crashes were caused by bugs introduced by the newest version of Unity. After downloading the available patches, everything worked fine. This is the first time I’ve run into Unity-side bugs, so although it was frustrating, now I know that when something behaves illogically (and spits out errors that are completely undescriptive), it’s probably a Unity bug and I should look at what patches are available.
^how to enable/disable particle systems
^Unity bug in newest release makes disabling particles impossible without halving framerate in editor
^patch releases for bugs
^calling a function from another script (I had been using .SendMessage, but this way is much much faster)
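The difference in a nutshell: SendMessage looks the method up by string at runtime, while a cached component reference calls it directly. A sketch with illustrative names (Shooter, Health are stand-ins, not my actual scripts):

```csharp
using UnityEngine;

public class Health : MonoBehaviour
{
    public void TakeDamage(int amount) { /* reduce HP, etc. */ }
}

public class Shooter : MonoBehaviour
{
    public GameObject target;
    private Health targetHealth;

    void Start()
    {
        // Cache the reference once instead of resolving it every call
        targetHealth = target.GetComponent<Health>();
    }

    void Hit()
    {
        // Slow, string-based lookup:
        // target.SendMessage("TakeDamage", 10);

        // Fast, direct call:
        targetHealth.TakeDamage(10);
    }
}
```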
Do you know what’s fun? Game crashes that don’t have immediately obvious causes. And by fun of course I mean a world of hurt. Luckily in today’s case the crash was repeatable, meaning I could isolate it to a handful of scripts. Crashes that happen randomly at different times are a programmer’s worst nightmare. So, long story short: using deprecated Unity libraries is dangerous, and having bool checks in your initialization is a good way to protect yourself from enable/disable doing things you don’t want it to. Honestly, even though the bug I encountered was frustrating, I had been wondering how to use Start() for initialization while not having stuff re-initialize every time the script was re-enabled. Lesson learned.
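The initialize-once guard looks roughly like this. Note that in Unity it’s OnEnable (not Start) that re-runs each time a script is re-enabled, which is why the bool matters; this is a sketch with illustrative names:

```csharp
using UnityEngine;

public class Enemy : MonoBehaviour
{
    private bool initialized;

    void OnEnable()
    {
        // OnEnable fires on every re-enable, so gate the one-time setup
        if (!initialized)
        {
            // one-time setup: cache references, set starting state, ...
            initialized = true;
        }

        // per-enable work (reset timers, restart behaviors, ...) goes here
    }
}
```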
A note about work environment: make sure you’re comfortable or your productivity will plummet. Since arriving at my new location I’ve been working out of the most uncomfortable chair in existence. It made sitting down to work literally painful, and over several days has caused quite a bit of back pain. Finally I said ENOUGH, went to Staples, tried every office chair they had, and bought the most comfortable one. It’s worth spending a little bit of money to be comfortable for the 8+ hours I sit at my computer every day.
Because of this jaunt and residual back pain, I only managed to re-incorporate the new psychic scripts into the scene (I’m using a separate script rather than virtual/override methods, which means I’ll have to do more refactoring later, but because I’ve kept dependencies fairly low and well organized, it’s not a big deal).
So it looks like Oculus is doing a soft launch, as even pre-orders made on opening day have expected ship dates as far out as May. Granted, they’ve said there were a bunch of fraudulent orders screwing up people’s ship dates (why captcha wasn’t used is beyond me), but the fact remains that ordering after Day 1 of pre-orders is going to get you an Oculus in the May/June timeframe. Although not the end of the world, this certainly is a bit of a pain for game developers, as the consumer base will take several months to receive their hardware. I was planning to do a Kickstarter soon after launch, but now I’m unsure of when to do it, since I need a substantial gamer audience capable of trying out the demo. Regardless, although the price was a bit higher than most of us expected (which will certainly hurt adoption numbers in the short term), as Palmer has stated repeatedly, for $599 you’re getting a very good deal. Having tried the CV1, I can say with certainty it’s an amazing device (and makes me look at my DK2 with slight distaste).
Programming-wise, I was able to implement new code for Cytopia’s telekinesis that makes it much easier to use (and thus feel much better). Additionally I implemented new movement code that bases your movement on where your head is looking (as opposed to before using absolute orientation). These two changes were the most obviously needed from my first batch of user testing (indeed they’d always been placeholder), and I think they’ll substantially improve the experience.
Oh, and just as a warning about raycasts: if a raycast script is behaving strangely, make sure you don’t have unexpected code running when the raycast hits nothing. I was having an issue where looking at the ceiling suddenly dropped my held objects, and it took a while to realize the room had no ceiling collider, so my if statement was cancelling code it shouldn’t have just because nothing was being hit (the missing collider was easy to overlook, since I was testing in a room that doesn’t allow powers in the actual game).
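The pitfall in sketch form (illustrative names): the else-branch of a raycast check runs whenever the ray hits nothing at all, not only when it hits the "wrong" thing.

```csharp
using UnityEngine;

public class TelekinesisAim : MonoBehaviour
{
    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit))
        {
            // Hit something: aiming/grabbing logic goes here
        }
        else
        {
            // Careful: this branch also runs when looking at open sky
            // or at an un-collidered ceiling. Dropping held objects
            // here is exactly the bug described above.
        }
    }
}
```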
Somehow I didn’t understand coroutines until needing them for a certain piece of functionality late at night on New Year’s Eve. I was able to get Cytopia with environment art running at framerate around 2am on New Year’s, and subsequently playtested with 2 people who were over partying with my brothers. Still tons of work to do to get this game polished, but reaction to the art has so far been good.
^coroutines are just functions which allow you to perform operations over multiple frames
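A minimal coroutine sketch (illustrative names): `yield return null` hands control back to Unity until the next frame, so the loop body runs a little each frame instead of all at once.

```csharp
using UnityEngine;
using System.Collections;

public class FadeExample : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(FadeOut(2f));
    }

    IEnumerator FadeOut(float duration)
    {
        float elapsed = 0f;
        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            // update a material's alpha, move an object, etc.,
            // a small step each frame
            yield return null; // resume here on the next frame
        }
    }
}
```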
I’m going to be traveling to a new base of operations, so will be taking a short break.
^some good game optimization tips
Environment collider placement is a pretty draining task, since it’s just lining up geometry. With the environment now (mostly) finished, I should be able to get a decent build working by the end of tomorrow. Hitting framerate will be key, and surprisingly it’s pretty much there after today’s collider replacement. The fewer colliders you have in a scene the better, since one huge collider is more efficient than a bunch of small ones. In case you weren’t aware, colliders ordered from least to most efficient are mesh < box < capsule < sphere. Just a few more optimizations like scenes despawning when not needed, and we’ll be doing pretty well.
Environment art COMPLETE(ish). Today I finally finished placing the first pass of environment art. I’ll likely be making scaling and prop placement adjustments as I conduct playtesting, but for now the game finally looks like you’re in a sci-fi facility. Of course, the only hitch is that in its current form it’s not even close to hitting framerate (which was expected). Tomorrow I’ll need to delete all the art-attached colliders and make simple box colliders to keep the player/objects on the level. Then I’ll need to implement a spawning/despawning system so that only 2 scenes worth of art exist at any one time. Fingers crossed that optimization alone will be fast enough (if not I’ll need to dig in for more specified optimizations).
Today my environment art placement continued. I also learned that 2048x2048 textures that have text, when placed on a Quad actually look pretty decent if the text is big enough. It’s definitely more difficult to read than directly on a normal computer screen, but the CV1 resolution and framerate should improve that substantially.
Phew, my 1 week holiday has officially ended. With the Oculus announcement that they’ll ship CV1 hardware to devs with demos they deem good enough, I need to get Cytopia into an impressive state as soon as possible. I figure if I work straight from now through New Years without stopping, I’ll be at or close to that level of polish. So I basically have 1–2 weeks to get the first level up to snuff in order to obtain Oculus and Vive hardware (which I desperately need to be able to test on for launch). I was hoping for a month, but we’ll see what I can do.
Most of today was spent placing environment art for 3 more scenes, as well as designing a new scene to solve a control-teaching-design problem I’ve been struggling with. I also watched this awesome Star Wars documentary in my breaks: https://www.youtube.com/watch?v=coPi6fvskF4
Today I squeaked out some time to perform bug fixes revealed by my first usertest. Holiday season is entering full force, so I want to have a stable playable build to test people with.
I started and finished crafting the environment art for the first room of Cytopia.
I spent the day looking at my art assets and planning out what I want to complete in the next sprint. The number of estimated hours went way up, but that may be because I’m now able to more accurately predict my rate of productivity. Only time will tell. The holiday season makes it a bit messy, since I probably will be missing several work days throughout, but nevertheless this planning process will help me stay productive.
The first end-to-end playable build of Cytopia’s first level is now complete! Finishing one day late (according to my sprint schedule), I can now start user testing to observe people’s behavior and make alterations based on it. With the bulk of the coding work out of the way, I’ll be able to focus on art and polish for the next sprint (although there will definitely be several coding tasks to attend to, especially bug fixing and optimization). Just from my first playtest with a friend, I was able to reveal 3 bugs and learn about 2 control-based behaviors that may prove problematic. Refinement time is fun because you learn a lot.
^implement rigidbodies when an enemy is hit
After struggling to get the ragdoll death behavior I wanted, the above resource made me realize I should be implementing enemy death via a script on the projectile, rather than on the enemy receiver.
^how to apply force to the hit rigidbody
^collision properties in Unity
^how to use StateMachineBehaviors in a way that accesses Scene gameObjects
I’ve completed what can best be described as a cinematic, and it actually turned out pretty well with the mocapped animations. They’ll need to be replaced by better/cleaner ones at some point, but for the time being they portray the emotional experience I was going for. Polish can come later, when I don’t struggle to do 2-pound physio exercises (donning the Perception Neuron and performing animation actions is currently a bit laborious). Mixamo has been a very helpful tool, both for making good-looking character models and for getting some pre-made animations (like walking).
Re-mapping of Perception Neuron mocap data to Mixamo characters isn’t too bad. However, it could be even better, so I’m going to see if I can use their Unity SDK to capture data directly in Unity (aka capture the data in the correct skeleton rather than capturing the data then re-targeting it).
Today my Perception Neuron arrived, which was an exhilarating surprise considering I was expecting it in 2 weeks. The timing is perfect, since I need it for the second-to-last scene of Cytopia’s first level.
^Started reading Game Programming Patterns
-be vigilant about unsubscribing listeners that are destroyed
-prototyping pattern can allow you to lessen redundancy by defining different enemies in relation to prior enemies
-it’s a long trend, not fast moving
-5x slower, 12x growth potential
-the gatekeepers are the reason for slowness
-certificates are becoming the main revenue source
-free content with paid certification
^Unity VR example projects and tutorials
^Switch statements might be better / more efficient than my charState if statements (currently I have a variable charState that changes depending on what I want my characters to be doing. In the update function, it checks “if charState = 1, do something, if charState = 2, do something different, etc.)
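The if-chain described above, rewritten as a switch. This is a sketch: charState and the behavior methods are illustrative stand-ins for my actual AI code.

```csharp
using UnityEngine;

public class CharacterAI : MonoBehaviour
{
    public int charState = 1;

    void Update()
    {
        // One jump instead of testing each condition in sequence
        switch (charState)
        {
            case 1: Patrol(); break;
            case 2: Chase();  break;
            case 3: Attack(); break;
            default: Idle();  break;
        }
    }

    void Patrol() { /* ... */ }
    void Chase()  { /* ... */ }
    void Attack() { /* ... */ }
    void Idle()   { /* ... */ }
}
```

Replacing the bare ints with an enum (e.g. `enum CharState { Patrol, Chase, Attack }`) would also make the states self-documenting.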
Slush Conference Videos (and some notes I took):
-common misconceptions are that hits are there today gone tomorrow, that “sure they made one hit game, but won’t repeat” (many companies prove that this is wrong), and that games are a gamble (not a complete gamble, having a process can have high success rate)
-affinity investing is triple bottom line, groups of people (p2p) investing in areas of impact they are passionate about
-I should reach out to Berkeley Angel Network and other relevant Affinity groups
^Equity Crowdfunding (SF Angels Group founder)
-send a scheduleonce.com link to people you email/message through Linkedin if you want to video meet with them (he talks about in context of potential investors through Linkedin pro)
-what investors want to hear (according to him):
1. The story. (how you came to be working on this)
2. Show traction.
3. Business model (how are you going to make money)
4. WIFM (“what’s in it for me”, i.e. exit strategy)
5. What you want the investor to do
^View to Play
-ads should not be in a failure moment (like showing a video after you lose)
-designing ads into the game, rather than as a necessary evil afterthought
-listening to him, I got the idea that showing movie trailers before/after the game could be a way to make a game free or reduced-cost, but at the same time, ehhhh. Offering two versions could be a way to go, but Steam doesn’t currently allow that, and it would fragment downloads and reviews anyway
-needs to be additive, since video watching alone will not generate enough revenue. still need in game purchases and/or sales
^Social gaming in Latin America
-they get 1 million questions a day, then users proof the questions to separate the good questions from the bad
-the users all think the company is from the country they are in, because the content is regionalized
-problem: job search is not fun or enjoyable
-better job matching tech will make the economy stronger by keeping people employed in the jobs they fit
^gamers working on game movies will give us good game movies
^Collaborative game development (Maxplay)
-86% of game teams are distributed
-working on real time VR game creation (like google docs for games)
Service design and living design.
-We are now measuring impact, not just monetary.
-the way we find work will change. He says will be a challenge.
-triangle of issues: the person with the problem can’t afford help, the entrepreneur with the solution can’t get funding, and the person with money can’t fix it; they are looking at how to fix that
^UNICEF guy, impact portfolio
-You can’t just impose solutions, need to ask the people living there
-kids immediately can answer a text (like, over a million kids) in Liberia and 17 other countries (1.7 million current users total)
^VR Head of River
^CEO Logitech Reinventing the corporation
-logitech stalled when mobile hit
-they are interested in games! A possible funding source…
-trees, plants, and seeds. Basically, most older companies are using their tree (aka the declining profits on their old business that they dominated the market in) to invest in several “plants” (aka new growth areas), and in addition they have small 1-person startups that try new things
I’m almost done with the first audio pass. I went and wrote + recorded a bunch of lines of dialogue, but after hearing the first piece of dialogue actually in game, I decided to throw out all of it except 2–3 lines. Luckily this was exactly the kind of thing I anticipated: the lines were just draft recordings by me, nothing even close to final audio polish. I wasted a bit of time, but much less than I could have. This is the whole point of the framework: iron out design kinks before investing in polish.
In case you’re wondering how I get sound effects, I usually just google “X sound” or “X sound effect” then listen to a bunch of sounds before choosing one I like. For this project I’m making sure the license allows free commercial use.
On a side note, I’m starting to feel very iffy about 3D spatialized audio via the Oculus Audio SDK. It’s pretty performance heavy, and at times I’m finding the actual accuracy of it questionable, although that might be because I haven’t spent enough time tuning it (which is a problem in itself). Thinking more about it, I can’t help but feel 3D spatialized audio is being overhyped right now. People are saying things like “it’s almost as important for presence as visuals,” but that’s really not true. Sure, having accurate sound spatialization can add to presence. However, I think most consumers aren’t going to particularly notice it. We’ve been conditioned to listen to audio via headphones, and even simple left-right loud-soft attenuation can create an understanding of sound in 3D space.
VR development right now is HARD. We have to hit 90fps on systems that are just starting to be able to do so with a graphic fidelity we’ve grown accustomed to. 3D audio is cool, and I think will grow to become a must have, but in these early days the gains may be outweighed by the cost in performance and time.
^audio tricks for the Oculus Audio SDK
^always render certain objects on top of other objects
^set up the new Oculus Audio SDK
^L voice via Audacity
^Unity VR UI
^how to detect collision velocity (can’t believe I never have used this before now)
Finished the tricky encounter; I’m now doing a first audio pass (putting in temporary sounds and setting their proper triggers).
I’ve spent the past two days working on the tricky encounter I mentioned on Day 58 (I would be more specific, but, well, spoilers). I also took out some time to apply to BoostVC’s accelerator, since it’d be pretty cool to be surrounded by fellow VR entrepreneurs everyday (money for art and technology is always nice too). I’m meeting with a 3D artist tomorrow for advice and to see if he knows anyone interested in a project like mine, so we’ll see how that goes.
^how to do Animation events
Having your animations call functions attached to the same GameObject is pretty useful.
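In sketch form: you add an event marker on a frame of the animation clip, point it at a public method name, and Unity calls that method on the same GameObject when playback reaches the marker. Names here are illustrative:

```csharp
using UnityEngine;

public class AttackEvents : MonoBehaviour
{
    // Add an animation event at the right frame of the clip (e.g. the
    // moment a needle touches skin) and set its function name to
    // "OnInject"; Unity invokes it at exactly that point in playback.
    public void OnInject()
    {
        // trigger the injection effect, sound, damage, etc.
    }
}
```

This is what replaces the old approach of timing everything manually with counters in code.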
After a 2 day break for Thanksgiving and my birthday, I dove back into an old prototype of Cytopia I paid a freelancer to build for me 11 months ago. My goal was to figure out how he did an eyelid blinking effect, with the hope of just re-using the code. However, looking at what he had written, I realized that I could write something much more concise. Looking back on that old code, I was reminded how important it is to organize your scripts well. It was a mess. Once I accomplished the blink effect, I wrote out a code plan for the second trickiest to implement encounter in the game, which I will start on tomorrow.
^destruction tool ($60)
^how to use destruction tool
Using this purchasable Unity tool, I’m now able to easily make objects destructible.
^how to make a laser sight
^gravity gun tutorial and script
Today I got Cytopia’s telekinesis working. I decided to have it aimed via a blue laser (a raycast plus LineRenderer) that shoots from your left upper forehead. I then recycled code from a prior attempt at telekinesis and, applying the new aiming method, have a result I’m happy with for now. The code has two parts: one script applied to all telekinetic objects, and one script applied to the player. I may change the technique to one like the gravity gun tutorial above (rather than its current form, which levitates selected objects and lets you apply force in any direction via the right thumbstick), but the telekinesis is going to be the most playtest-adjusted part of this project, so for prototype purposes it does its job.
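Here’s roughly what the object-side half of that split looks like, as a sketch (Telekinetic and its members are illustrative names, not my actual script): held objects hover by cancelling gravity and damping drift, and the player script pushes them with an impulse.

```csharp
using UnityEngine;

public class Telekinetic : MonoBehaviour // attached to every liftable object
{
    private Rigidbody rb;
    public bool held; // set by the player script when this object is selected

    void Start() { rb = GetComponent<Rigidbody>(); }

    void FixedUpdate()
    {
        if (held)
        {
            rb.useGravity = false;
            // damp drift each physics step so the object hovers in place
            rb.velocity *= 0.9f;
        }
        else
        {
            rb.useGravity = true;
        }
    }

    // Called by the player script with the thumbstick direction
    public void Push(Vector3 direction, float power)
    {
        rb.AddForce(direction * power, ForceMode.Impulse);
    }
}
```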
Finding myself with some extra time, I created an art sandbox with the sci-fi environment pack I bought. Basically, I laid out every single piece of art in a line so that I can easily visualize what my options are when I get to filling in the level with art later.
Today I implemented a “glass breaking” effect. In Cytopia you start in a cryotank, so trying to move forwards causes the glass of the tank to break into many shards and scatter in front of you. First I placed shards of glass over where the environment art glass was, then disabled the hodgepodge sheet at start. A piece of code I wrote activates when the player tries to move forwards, enabling the pieces of glass, applying a force and torque affected by Random.Range, and replacing the unbroken glass art with broken glass art at the edges of the tank.
^by hiding the audio icon attached to each piece of glass, I was able to more easily construct a sheet of it
^how to instantiate from a list (I ended up using an array with foreach loop applying force to each piece of glass)
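The shatter logic boils down to something like this sketch (all names illustrative): swap the intact sheet for the pre-placed shards, then loop over the shard array applying a randomized force and torque to each.

```csharp
using UnityEngine;

public class CryotankGlass : MonoBehaviour
{
    public GameObject intactGlass;  // the unbroken sheet, visible at start
    public GameObject[] shards;     // pre-placed shard pieces, disabled at start
    public float minForce = 2f, maxForce = 6f;

    // Call when the player first tries to move forward
    public void Shatter(Vector3 outwardDir)
    {
        intactGlass.SetActive(false);

        foreach (GameObject shard in shards)
        {
            shard.SetActive(true);
            Rigidbody rb = shard.GetComponent<Rigidbody>();

            // Random.Range keeps each shard's scatter slightly different
            rb.AddForce(outwardDir * Random.Range(minForce, maxForce),
                        ForceMode.Impulse);
            rb.AddTorque(Random.insideUnitSphere * Random.Range(minForce, maxForce),
                         ForceMode.Impulse);
        }
    }
}
```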
Today I focused on getting a basic player death screen effect, as well as finishing up the respawn code to make sure everything resets as it should. Below is a collection of links I found helpful in that endeavor. Ultimately I used a quad in front of a second camera that enables on death, then fades (lerps) its color from transparent red to opaque black.
^I decided to avoid the UI system but this got me thinking about alternatives
^this approach seemed better for me (using a second camera that renders in front of the main camera)
^an approach to screen fading
^how to gradually fade a color from one to another
^I couldn’t remember how to set an object’s color through code
^Color lerp Unity documentation was poor. From digging through these threads I realized the “t” in the lerp equation had to be manipulated outside of the Lerp code. My final code for the color fade ended up being (inside the update function):
deathColor = Color.Lerp(originalDeathColor, Color.black, lerpTime / 2f); // "t" ramps from 0 to 1 over 2 seconds; Lerp clamps anything past 1
deathEffectQuad.GetComponent<Renderer>().material.color = deathColor; // caching this Renderer once in Start would avoid a per-frame GetComponent
lerpTime += Time.deltaTime;
^Oculus SDK screen fading, which I’m shying away from for now
^preliminary research on texture atlasing, an advanced technique for reducing draw calls (will almost definitely be necessary for VR optimization)
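Pulled together into one script, the whole death-fade effect might look something like this (variable names guessed; a sketch, not my file verbatim):

```csharp
using UnityEngine;

// Fades a quad in front of a second camera from transparent red to opaque black.
public class DeathFade : MonoBehaviour
{
    public GameObject deathEffectQuad;  // quad rendered by the death camera
    public Color originalDeathColor = new Color(1f, 0f, 0f, 0f); // transparent red
    public float fadeSeconds = 2f;

    Renderer quadRenderer;
    float lerpTime;
    bool dead;

    void Start()
    {
        quadRenderer = deathEffectQuad.GetComponent<Renderer>(); // cache once
        deathEffectQuad.SetActive(false);
    }

    public void OnPlayerDeath()     // hook this to whatever detects death
    {
        lerpTime = 0f;
        dead = true;
        deathEffectQuad.SetActive(true);
    }

    void Update()
    {
        if (!dead) return;
        // t runs 0 -> 1 over fadeSeconds; Color.Lerp clamps anything past 1
        quadRenderer.material.color =
            Color.Lerp(originalDeathColor, Color.black, lerpTime / fadeSeconds);
        lerpTime += Time.deltaTime;
    }
}
```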
I set it up so that when bullets hit my player, all the necessary things in the level are reset. Further in the future I’ll probably try to clean this code up to be more efficient and less “hard coded”, but for now I’m willing to take on technical debt to get a functioning prototype.
Day 52 (RESUMED). And so the Sprint begins :)
Aaaaand we’re back. I still have a few surgery related distractions that slow me down a bit (mainly an annoying rash and some soreness in the mornings), but I’m itching (no pun intended) to get back to game development.
I’ve decided to commit to a 30-day sprint using the Scrum methodology, in which I will complete a full-playthrough framework for Cytopia’s first level. If you’re following along, I encourage you to pick a project of your own to complete in 30 days. If you’re not familiar with Scrum, it basically lets you plan out software development based on time estimates, so you can stay organized and know each day whether you are on track. I’m using a Google Docs burndown chart, and Trello for organizing Product Backlog Items (aka things to do, broken into time-estimated pieces).
The first step is to plan all the features you want completed by the end of the 30 days to feel like this iteration of your game is complete. In this case, the goal is to create a framework, not a polished game, so the emphasis should be on coding implementation, not fancy effects, art, or polish. I’m working off of a detailed Game Design Doc, so if you don’t already have a fleshed out game design in mind, you might want to take some time to create one (remember, the goal is to have all the core functionality in 30 days of programming).
So you can have an idea of what a game design doc looks like, here are the Cytopia design documents, which took about a month to create, not including many months spent world building and prototyping VR design. Since these are New Dimension VR internal documents, please don’t widely share them. Also, they obviously contain spoilers for Cytopia, so if you’re trying to avoid that, you’ve been warned.
Also, here’s a picture example of my Trello after finishing my Sprint Planning meeting: http://imgur.com/VSD3Qwa
Don’t feel that your planning documents have to be this in depth for your project. I’ve been working on the Cytopia world for over 2 years and the game design for about a year with the intention of a successful commercial release, so if you’re just building a hobby project or the skillsets for future projects, the designs can be much more bare bones. If you’re betting your entire career on the game you’re about to make, then you might want to put this much effort into the game design :p
Lastly, here are some links to stuff I looked at today while figuring out how to get enemies shooting at the player (in the end I settled on a simple projectile-based method rather than using line renderers).
^trying to figure out how to make enemies miss
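One simple way to make enemies miss is to jitter the aim point before firing the projectile. A sketch of that idea (hypothetical names, not my exact implementation):

```csharp
using UnityEngine;

// Sketch: fire a projectile at the player with random aim error so enemies can miss.
public class EnemyShooter : MonoBehaviour
{
    public Transform player;
    public Rigidbody projectilePrefab;
    public float projectileSpeed = 20f;
    public float missRadius = 1.5f;  // bigger radius = worse aim

    public void Fire()
    {
        // Aim at a random point in a sphere around the player instead of dead center
        Vector3 target = player.position + Random.insideUnitSphere * missRadius;
        Vector3 dir = (target - transform.position).normalized;
        Rigidbody shot = Instantiate(projectilePrefab, transform.position,
                                     Quaternion.LookRotation(dir));
        shot.velocity = dir * projectileSpeed;
    }
}
```

Tuning `missRadius` per difficulty level is an easy way to make early waves forgiving and later waves deadly.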
Road to Recovery:
Hi all, just wanted to let you know my surgery went well and I’m now at home recovering. For the meantime I’m going to take it easy, but I will resume when the pain is less distracting.
Unfortunately I need to postpone this blog for a little longer. On Oct 15, 2015 I’m going to be getting corrective surgery on my chest via the Nuss Bar procedure. I’ve had pectus excavatum my entire life (aka an indented chest), but in the last year or so it has started to cause some aches and pains, so I’ve decided now is the best time to take care of it to prevent problems in the future. Although the procedure is relatively non-invasive (they cut slits in my chest and put metal bars in to pop the ribs out to their proper location), I’m going to be on opiate pain killers for about 2 weeks following the surgery. I should be well enough to continue working about a month after the surgery.
Although it’s always difficult to publicly talk about your health, I thought I owed you guys an explanation for my absence. Moving forward from my recovery, my primary focus is going to be on building the first level of Cytopia in time for Oculus launch, so expect a mix of learning links and project based insight. In the meantime, don’t wait up. Complete tutorials, build projects, and keep learning! If you complete a small game, don’t hesitate to send it to me (firstname.lastname@example.org). Thanks for following, see you soon.
Day 51 (Resumed Oct 2)
1. Cooking with Unity: FPS Part 2:
2. Started reading Institute for the Future’s reports on work:
Continued reading “Reinventing Organizations”
With Oculus Connect coming up I won’t have access to my PC for a week, and the next couple of days I’m taking it easy so I’m healthy for the trip (I’ve had a cold the last few days). Because of this, I’m going to temporarily take a break from posting updates here until October 1. Halfway to 100!
Had to run some errands today and visited one of my brothers, so didn’t have any work time unfortunately.
1. Read “Reinventing Organizations” by Frederic Laloux
This book is amazing: very well researched, and it provides potential blueprints for any self-organizing company.
1. Read the Telekommunist Manifesto
I’ve been learning a lot from my readings about other people’s forays into the same problem I’m trying to solve (namely, creating socially progressive companies that are more democratic and value-oriented instead of profit-oriented). The TKManifesto definitely made me think about things I otherwise would not have, so it was well worth the time. However, I don’t think it gives capitalism the credit it deserves for getting us to this point, instead trying to vilify it.
I’ll be doing some programming stuff through the week, but my primary focus until Oculus Connect is figuring out a workable solution I can bring people together around.
1. Reading about P2P governance
-The Political Economy of Peer Production: http://www.ctheory.net/articles.aspx?id=499
-Protocollary Power: http://p2pfoundation.net/Protocollary_Power
-Peer-to-Peer Relationality: http://blog.p2pfoundation.net/essay-of-the-day-peer-to-peer-relationality/2012/03/04
-From the Communism of Capital to Capital for the Commons: http://www.triple-c.at/index.php/tripleC/article/view/561
The VR game I made over the course of 10 days, Super Bunny Monster Defense, is now available. It’s pretty fun. Can you beat all 8 levels?
1. Some info about int vs float optimization. Basic summary is that it’s not really worth worrying about: http://stackoverflow.com/questions/2550281/floating-point-vs-integer-calculations-on-modern-hardware
2. Learning how to SphereCast (a raycast with thickness):
3. Cooking with Unity FPS tutorial Part 1:
4. Learned how to use GitHub and got an account:
5. Finished reading 0 Marginal Cost Society
6. Got telekinesis working in Cytopia
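The SphereCast from item 2 looks just like a Raycast with an extra radius parameter; a minimal example:

```csharp
using UnityEngine;

public class ThickRayExample : MonoBehaviour
{
    void Update()
    {
        RaycastHit hit;
        // Sweep a sphere of radius 0.5 forward instead of an infinitely thin ray,
        // so near-misses still register as hits.
        if (Physics.SphereCast(transform.position, 0.5f, transform.forward,
                               out hit, 100f))
        {
            Debug.Log("Thick ray hit: " + hit.collider.name);
        }
    }
}
```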
1. Finished grayboxing Cytopia’s first level. Here’s a picture (MINOR SPOILER WARNING): http://imgur.com/EeWxyr6
2. Figured out how to display frames per second while game is running:
3. Made some minor improvements to Super Bunny Monster Defense (made it so the Aimer couldn’t leave the map, made it impossible to knock back the Boss without it damaging you, fixed a bug where a level was finishing one kill early, and did the thing I learned yesterday where you make the walls of the environment not interact with each other by using layers).
4. Continued to read 0 Marginal Cost Society
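The FPS display from item 2 only takes a few lines. Here is one common approach (not necessarily the exact one I used):

```csharp
using UnityEngine;

// Simple smoothed frames-per-second display.
public class FPSDisplay : MonoBehaviour
{
    float deltaTime;

    void Update()
    {
        // Exponential smoothing keeps the number from flickering every frame
        deltaTime += (Time.unscaledDeltaTime - deltaTime) * 0.1f;
    }

    void OnGUI()
    {
        float fps = 1f / deltaTime;
        GUI.Label(new Rect(10, 10, 200, 30), fps.ToString("F1") + " FPS");
    }
}
```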
1. Textures consideration:
2. Some VR best practices:
3. Started using the Oculus forums:
4. Performance optimization (for GearVR, but still relevant):
5. Started grayboxing Cytopia’s first level (placing simple gray-textured geometry to get the play spaces sized optimally for VR, which provides a frame that makes it easier to fill in the level’s art later and lets you program interactions without having to deal with making the level pretty)
6. Implemented simple first-person player movement to test the environment frame
Funny story: I was trying to figure out what the performance hit of overlapping colliders was, since if it was high, I would need to make sure the colliders in my walls came exactly up against each other (which would be a HUGE pain in the ass). Of course, nothing really came up regarding that topic, so I dove into any search related to it. After some deep diving someone mentioned, in a fairly unrelated topic (how to make it so Rigidbodies don’t bounce randomly when inside each other), that Unity had added the Layers system. I thought to myself, ha, that person doesn’t know about layers. Kept browsing for answers. Then it hit me. Layers. Put all my environment geometry on one layer, and make it so the layer can’t interact with itself. Well, now I feel like an idiot, but that’s OK, because at least I learned what I set out to learn.
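Besides unchecking the box in the Physics collision matrix under Project Settings, the same fix can be applied from code, assuming a layer named “Environment” (the layer name here is hypothetical):

```csharp
using UnityEngine;

public class EnvironmentLayerSetup : MonoBehaviour
{
    void Awake()
    {
        // Stop the "Environment" layer from colliding with itself, so overlapping
        // wall colliders never have to be lined up exactly edge to edge.
        int env = LayerMask.NameToLayer("Environment");
        Physics.IgnoreLayerCollision(env, env, true);
    }
}
```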
1. Object Pooling:
2. Continued reading 0 Marginal Cost Society
3. Started doing simple-geometry level building for Cytopia
Today I resumed work on Cytopia, a project I’ve been ushering along for about a year now. Since the 1 on 1 Unity consultation offered at Oculus Connect is too good an opportunity to pass up, I want to get a basic prototype of the game ready for then (which is in 2 weeks). I’ll still be balancing this with Business/Community Commons learning and furthering my own programming skills, but much of my time will be absorbed in creating this new demo (I’ve had 2 other prototypes made by people I’ve worked with based on my designs, but this is the first time I’ll be doing the actual development for the project).
Today I read half of the book The Zero Marginal Cost Society in my quest to understand what alternatives exist to purely profit-motivated companies. My plan is to balance programming learning and this quest for the time being. I also found some MOOC software engineering courses that I would like to get to at some point http://www.saylor.org/pathways/career-related-courses/.
Day 40 (ish)
This weekend I finished up Super Bunny Monster Defense, which has 8 levels and an impossible 9th level. The game is submitted to Oculus Share and awaiting approval. The game took about 9–10 days to complete. I’ve also taken the weekend to reevaluate and see what I want to work on in the next two weeks to prepare for Oculus Connect.
1. For Room Defense (maybe soon to be Super Bunny Defense) I now have 6 levels polished and am happy with the powerups available. I was going to publish today, but the people at Oculus probably aren’t going to look at it to put up until Monday anyways, so I might as well add a few more levels tomorrow. This is technically the first finished piece of software I’ve solo developed, so I’d like for people to have a long enough game to enjoy it.
Working on the level design has only further reminded me how much game design is its own skillset that requires time-consuming work. Balancing waves of enemies to be not too difficult for noobs and not too easy for veterans is hard. My design philosophy for this game is to make the first 4 levels very easy, so that 99% of players can get to level 5, have level 5 be a little bit challenging, then start ramping up the difficulty for level 6 and beyond. When playing level 6 for the first time, I took 6 damage (20 total damage means game over), which means there is going to be a huge dropoff at that point as 75% of players get overwhelmed (although some, if they like the game, will try again with new tactics and be more prepared). The level 7 design scares me a little bit right now, so I’m going to playtest it and see what happens. I’ll probably make level 8 the final boss since I don’t want to spend all day on this tomorrow. Then of course add an impossible level 9 just for the fun of it ;)
1. I published the second edition of my book Virtual Reality Insider.
2. Finally got the soundscape for Room Defense done.
Towards the end of the night I was trying to create a new enemy type from the existing one, but since the model I’m using is legacy (i.e. outdated), I ended up deleting all instances of the model’s shader by accident. Luckily the other data on the model was preserved, but it’s a good lesson that I need to start using version control; otherwise, instead of having to redo 5 minutes of work, one wrong step could cost me days.
Anyways, tomorrow I’m finishing Room Defense in whatever state it’s in so that I can move on. I’ll have the entire day to focus on level balance, new enemies, and additional powerups.
I just published the 2nd edition of my book, and am giving it away for free today: https://www.reddit.com/r/oculus/comments/3je3e4/vr_insiders_2nd_edition_is_now_available_and_free/
1. After doing some playtesting, I realized that having movement based on the room’s orientation was very clunky, since even turning your head 90 degrees made control almost impossible (your forward would become left or right in that situation). I decided to base the player movement on the Oculus camera’s orientation, and this article helped a lot with that:
^I also got help from a post about how to change rotations of an object via specific variables (important in my case because I only wanted one axis for turning): http://answers.unity3d.com/questions/725869/copy-rotation-values-in-between-objects.html
2. Sound effects downloaded
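The camera-orientation movement from item 1 boils down to “forward follows the headset’s yaw but not its pitch.” A sketch of that idea (variable names are mine):

```csharp
using UnityEngine;

// Move relative to where the VR camera faces, but only around the Y axis,
// so looking up or down doesn't tilt your movement direction.
public class HeadYawMovement : MonoBehaviour
{
    public Transform vrCamera;  // the Oculus camera transform
    public float speed = 3f;

    void Update()
    {
        // Flatten the camera's forward vector onto the horizontal plane
        Vector3 forward = vrCamera.forward;
        forward.y = 0f;
        forward.Normalize();
        // Right-hand direction on that same plane
        Vector3 right = Vector3.Cross(Vector3.up, forward);

        Vector3 move = forward * Input.GetAxis("Vertical")
                     + right * Input.GetAxis("Horizontal");
        transform.position += move * speed * Time.deltaTime;
    }
}
```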
I decided to extend working on this project a little longer since it’s so close to being fun, and the work I had to do getting my book together got in the way of finishing it. If I complete it tomorrow then I can truthfully say I spent one week full time on it. However, tomorrow is my brother’s last day in town, so I may extend into Thursday depending on where the game is at. With the new movement implementation, I’m hyped to build the levels, just need to power through creating a good soundscape.
1. Room Defense, levels now load properly, and I implemented a way for the player to get stunned (not be able to shoot) if they run into an enemy.
2. Interesting article about education (it might be useful to start making knowledge trees):
I finally have the Room Defense game in a state where I can just focus on adding levels. Tomorrow is the deadline, so I’ll try to get it into as fun of a state by the end of the day as possible.
I spent a good portion of today getting the 2nd Edition of Virtual Reality Insider finished for publication. I should be releasing it officially soon. The ebook of the new edition will be free on that day.
1. Oculus Audio SDK for 3D spatial audio (read through documentation):
2. Screen fade:
I spent all evening trying to make it so that when a new level loads, the screen doesn’t freeze for a second. In trying to do so I made my code messier and didn’t actually solve the problem, eventually deciding to use a screen-fade when the level loads so that people don’t get sick from the lag (it’s extremely jarring to have the screen freeze in VR). For future projects, I definitely want to use some sort of version control, and will probably avoid having more than one Unity level, instead activating stuff manually through code.
I need to get the second edition of my book out (it’s been sitting around 98% finished), which may slow my work tomorrow, but my goal is to publish this game to Oculus Share by the end of Tuesday no matter what condition it’s in, so that I can move on (and have a benchmark of what I could do in a week at this time).
1. Realized there was very discomforting lag in between levels, and so started optimizing by having persistent gameObjects not destroyed.
2. Got towers to aim and shoot at enemies in range.
3. Got ice patches to slow enemies on enter, and stop slowing them on exit.
4. Created an explosive bullet that does AOE damage on every third shot (yay powerups)
5. Starting to look into sound in Unity:
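The ice-patch slowdown from item 3 maps naturally onto Unity’s trigger callbacks. A minimal sketch, assuming enemies expose a speed field (the `Enemy` script here is a hypothetical stand-in):

```csharp
using UnityEngine;

// Hypothetical enemy stub exposing a movement speed.
public class Enemy : MonoBehaviour { public float speed = 2f; }

// Slow enemies while they stand on the patch, restore speed when they leave.
// The patch needs a collider with "Is Trigger" checked.
public class IcePatch : MonoBehaviour
{
    public float slowFactor = 0.5f;

    void OnTriggerEnter(Collider other)
    {
        Enemy enemy = other.GetComponent<Enemy>();
        if (enemy != null) enemy.speed *= slowFactor;
    }

    void OnTriggerExit(Collider other)
    {
        Enemy enemy = other.GetComponent<Enemy>();
        if (enemy != null) enemy.speed /= slowFactor;
    }
}
```

Multiplying on enter and dividing on exit means overlapping patches stack and unwind cleanly.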
1. Working on UI for Room Defense, looking into optimization:
2. Got first playable version of 3 levels of Room Defense (super rough balance-wise, but playable)
1. Worked on Room Defense (VR Tower Defense game)
2. UI in world space:
3. Example of googling for answers while developing:
4. More work on Room Defense
1. Unity Animation and Mecanim tutorials (first 3 of “Animating”, first 2 of “Controlling Animation”):
2. More animation setup tutorial:
3. Simple walking implementation through script with Mecanim: https://www.youtube.com/watch?v=HsKtxPmtvbY
5. Set up characters and animation for “Room Defense”.
The online tutorials I found for Mecanim were generally not great (except for #3, which made me finally understand how to influence states through scripting). Once I set up a state machine and influenced it for my own character (a rabbit I found for free on the Unity Asset Store) and an enemy (FT Caveworm: https://www.assetstore.unity3d.com/en/#!/content/3317), I was able to get the hang of using animation via scripting.
In my project Room Defense, I now have a VR room setup with a character who can move around and aim (with help from some code from our last project Source Fighter), and that plays a walking animation while moving. I also implemented an enemy that walks towards you, jumps at you and plays an attack animation within a certain range, and can be hit back to its spawn point with the right trigger. I found it very helpful to set up an int that keeps track of the enemy state, allowing me to affect the enemy’s behavior depending on what state it is in. I’m excited to get to the meat of the gameplay tomorrow.
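The int-as-state-tracker pattern looks roughly like this (a sketch of the idea, with hypothetical names, not the actual Room Defense code):

```csharp
using UnityEngine;

// Drive enemy behavior and animation from a single state int.
public class CavewormAI : MonoBehaviour
{
    const int Walking = 0, Attacking = 1, KnockedBack = 2;

    public Transform player;
    public float attackRange = 2f;
    public float speed = 1.5f;
    int state = Walking;
    Animator animator;

    void Start() { animator = GetComponent<Animator>(); }

    void Update()
    {
        switch (state)
        {
            case Walking:
                transform.position = Vector3.MoveTowards(
                    transform.position, player.position, speed * Time.deltaTime);
                if (Vector3.Distance(transform.position, player.position) < attackRange)
                {
                    state = Attacking;
                    animator.SetTrigger("Attack"); // Animator parameter name is hypothetical
                }
                break;
            case Attacking:
                // attack logic; drop back to Walking when the animation finishes
                break;
            case KnockedBack:
                // set by the player's trigger hit; walk again from the spawn point
                break;
        }
    }
}
```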
1. Tower Defense Tutorial Parts 3–5:
2. Set up and downloaded assets for a new Unity project (Room Defense). Assets downloaded were all free (Chairs and Sofas pack, Classroom Stuff, FreeFurnitureSet, plants, and Wooden floor textures). Go to the Unity asset store and find what you want for making your own tower defense game.
1. Tower Defense Tutorial Part 2: https://www.youtube.com/watch?v=x8dhQd-qZuM&list=PLlHjNcdoyw6UK30xrTUhjM-usQOOE5jhN&index=9
I spent a good part of the day cleaning my room, and wasn’t feeling well (I think I picked up some virus or bacteria), so didn’t get much accomplished in terms of programming today. Hopefully I feel better tomorrow.
1. Enabled Matchmaking (https://www.youtube.com/watch?v=kAIWWC6b6Po)
2. Tried to figure out custom UI for matchmaking
3. Cooking With Unity Tower Defense Tutorial: https://www.youtube.com/watch?v=8yrVLTyDcu0&list=PLlHjNcdoyw6UK30xrTUhjM-usQOOE5jhN&index=8
So I realized that to get matchmaking working on the Oculus, I’d have to create a custom UI menu that places the text as a gameobject in the scene, rather than using the built-in UI. The only resources I could find that would help in this regard were the example projects that Unity provides. However, I didn’t feel like spending the whole day figuring out how to do UI from that code, so decided to move on from this project. I want to soon start focusing on narrative based games, but will start out next week with tutorials on Tower Defense games, followed either by more tutorials or making a VR tower defense game.
Source Fighter postmortem:
Networking is still an area I’m going to avoid until more learning materials become available. Trial-and-error learning is very difficult and time consuming. That being said, now that I’ve attempted this, I feel like anything singleplayer is much easier, since you’re only dealing with one game state rather than several. While working on this project, I randomly had a few insights related to programming that will be very helpful moving forward, such as how to pass variables between different scripts and how to pass variables into functions.
1. Unet Tutorials 15–17 (https://www.youtube.com/watch?v=3OKvGiGzdx8)
2. I ended up staying up till 4am last night (so, 4am this morning) trying to fix the same problem with the attack balls not going towards the aimer on the host. With a total of 8 hours put in trying to solve that one problem, I decided to implement the ball-firing in a different way, which was successful.
3. I tried to get the attacks to propel the hit player in the direction the ball was moving, but could only get the behavior working properly when the client shot the host, and not vice versa (it’s ultimately the same problem I spent 8 hours trying to solve with aiming the balls). After about 3–4 hours trying to implement this, I have decided to finish up the game tomorrow with player death handled via attacks subtracting health, rather than the Super Smash-style additive damage I originally wanted.
4. I placed a large gameobject below the game’s platform that upon collision moves the player’s position to the top of the map at a random location (so that when players fall off the map, they appear to respawn).
5. Attended a Convrge meetup (a VR social app) with an animator from Industrial Light and Magic, which I spent 3 hours at.
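The respawn catcher from item 4 is just a big trigger volume; something like the following sketch (names and numbers are illustrative):

```csharp
using UnityEngine;

// A large trigger placed below the platform that teleports falling players
// to a random spot above the map, so falling off looks like a respawn.
public class FallCatcher : MonoBehaviour
{
    public float respawnHeight = 10f;
    public float mapRadius = 8f;

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        // Pick a random point within the map's radius, up in the air
        Vector2 spot = Random.insideUnitCircle * mapRadius;
        other.transform.position = new Vector3(spot.x, respawnHeight, spot.y);
    }
}
```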
Tomorrow is the last day I’ll be working on this game. I’m feeling ready to move on, since getting crushed by online syncing isn’t exactly the most fun thing in the world, but I want to at least get something playable out on Oculus share if possible. This experience has confirmed that multiplayer games are still too difficult for me to attempt, but I’m also excited about how close we are to having multiplayer games creatable by anyone. Seriously, once there are a few more tutorial series out there and maybe a book specifically focusing on multiplayer C# UNET, it’s going to be crazy.
However, that time will not be prior to VR launch, so it is with renewed vigor that we’ll start Tower Defense tutorials Monday. In the meantime, I’ll need to figure out Matchmaking, implement player death upon health falling to 0, and create a few more attacks. If I have time, a way to keep track of players’ scores would be nice, but that’s a stretch goal I’ll focus on once 4 attacks are working.
1. UNET Tutorials 10–15
Day 25 (Source Fighter Day 4)
Got health working again, and toyed with getting bullets to actually knock people back in the correct direction, but took most of the day off as my “weekend” time.
Day 24 (Source Fighter Day 3)
I started out the day by implementing a script that would apply damage when a player collided with the attack ball, and would display that damage to the player’s client. Simple enough. However, I then spent the rest of the day trying to get both players’ damage to show up for both players, and was not able to do so. Tomorrow I’m going to reinstate the original health script (currently it’s a mess from my desperate attempts), and will have to decide between taking a different approach, or giving up on a public scoreboard for the time being.
I now have direct experience of why multiplayer is considered hard, although honestly if there were more learning resources for it available, it wouldn’t be a problem. In fact, I expect resources around UNET to increase substantially in the next few years, given Unity’s popularity as an engine and a growing awareness that making games is much easier than before. Still, for the moment, it can be frustrating. That being said, I’m okay with how things have gone, since I’m definitely outside my comfort zone and learning.
Day 23 (Source Fighter Day 2)
1. UNET health tutorial: https://www.youtube.com/watch?v=R14oxS3IY3A
2. Tried to get power shooting to work the same on both clients, only got it working on host.
Today I spent the entire day trying to fix one bug where I can’t get the position of the Aimer on the non-host (apart from a brief respite watching Dragon Ball Z: Battle of Gods for inspiration). It’s always draining to have a day without any progress, but I’ve submitted a question to Unity answers, and am going to start tomorrow just moving on with the hope we’ll be able to fix the problem later.
Source Fighter Unity project file:
Project planning document:
Day 22 (Source Fighter Day 1)
Today was very productive. I was able to do basic setup of the scene, write scripts for player movement and player aiming, and get our first attack halfway done (all while getting it to work in multiplayer). At this rate, I feel confident that we will be able to get the game working with 4 powers and multiplayer by the end of the week (famous last words lol). If anyone knows how to stop jumping from visually glitching so badly in multiplayer, let me know.
Source Fighter Unity project file:
Project planning document:
1. Rotation Syncing UNET tutorial:
2. Xbox Controller setup:
^Watched the first 2 minutes of the video and paused on the Xbox controller mapping screen. Then used a test project to see if I could convert movement commands on a cube to the Xbox controller (first left, then right stick). It was very easy; I just had to create custom inputs for the right stick. After this I tried to get a second player object working in a multiplayer scene, but could not figure out how to do so, so I moved on to more tutorials. People often complain about the way Unity handles controller input, but it seems to me that if you set the prefs correctly, it’s extremely simple (at least for Windows-only applications). Example of forum posts I was browsing to try to figure out the two-player-objects problem: http://stackoverflow.com/questions/31666024/unity-unet-calling-command-outside-of-player-object
3. Optimization: https://www.youtube.com/watch?v=61KVmlwGDJY
4. Watched first 10 minutes (will come back to learn lag simulation if needed): https://www.youtube.com/watch?v=CdbzZSHPN4E
5. Unique Identities for UNET hit detection:
6. Shooting in UNET:
Tomorrow we start our multiplayer VR fighting game (think a mix of Super Smash with Couch Knights). We still have some multiplayer learning to do to get the game to work, as well as some other tutorials, but I think taking on this project is just the right difficulty to push our comfort zone, maximizing the amount we’ll learn in the next week. My hope is that what we build can act as a platform upon which the VR community can make an open source multiplayer VR fighting game. I will make the Unity project files of what I’m working on available via dropbox, so if you’re following along, you can try to work out solutions yourself then check my code for solutions. Also, if I get stuck on something but someone else finds a solution, don’t hesitate to contact me (email@example.com or @VRInsider).
^Ugh, got partway through this tutorial and realized it also has stuff that’s out of date
^direct documentation seemed to be the way to go… but I’m abandoning ship for Unity’s networking solution
^I watched the first 5 minutes, but realized I’d be better off just following the Unity manual
^Read all the Unity Networking documentation, and took notes of what I thought was important for our fighting game here: https://docs.google.com/document/d/1x8nCKzHWoClavZ-SzqP7oJ4lXumuC-JhwuH3NX_gzR4/edit?usp=sharing
^Huzzah, an up-to-date UNET tutorial!
Overall, networking is more difficult than I originally anticipated. It seemed obvious to me that the Unity or Photon tools would be as simple as drag and drop, but it’s not quite there yet (although Unity’s system seems to be getting close). Really the biggest deficit is in learning material available online. Luckily it seems like converting a single player game to multiplayer with the Unity system isn’t too hard, so worst case scenario we’ll end up making a single-player fighting game with the possibility of making it multiplayer later. I still think it’s possible to do multiplayer, however.
2. Static class review: http://unity3d.com/learn/tutorials/modules/intermediate/scripting/statics?playlist=17117
^I still don’t fully understand Coroutines, but am not going to focus on it right now. I’ll have to learn about them in more depth further down the line.
^OK, from this I now fully understand Invoke; CancelInvoke was the missing piece.
6. Hover Junkers: https://www.youtube.com/watch?v=kYDSqRzOVKE
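Since CancelInvoke was the missing piece for me, here’s the Invoke/CancelInvoke pattern for anyone else stuck on the same point:

```csharp
using UnityEngine;

public class InvokeExample : MonoBehaviour
{
    void Start()
    {
        // Call Fire() after 2 seconds, then every 0.5 seconds thereafter
        InvokeRepeating("Fire", 2f, 0.5f);
        // Without CancelInvoke, the repeating calls would never stop
        Invoke("StopFiring", 10f);
    }

    void Fire() { Debug.Log("bang"); }

    void StopFiring() { CancelInvoke("Fire"); }
}
```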
Out of nowhere this got some exposure on Reddit today, which was a nice surprise. If you’re seeing this now, know that the plan is to spend all 7 days of next week making a VR fighting game, which will be made open source.
Started to do a Photon multiplayer tutorial, but one of the functions in the first line of code was outdated (tutorial was a year old). Am slightly burnt out from yesterday, so am going to read “Black Code” today instead and watch some media I’ve been meaning to get to. I’ve lined up 4 multiplayer tutorials for later (I’m thinking that I’ll complete 2 tomorrow, then do some Cooking with Unity tutorials). My goal is to spend all of next week creating a VR multiplayer fighting game, which will be made open source. If you want to get ahead:
- Dual Stick Shooter Tutorial Parts 2–5: (Part 4, other parts are linked in youtube sidebar) https://www.youtube.com/watch?v=iKKly1pZ6fo&list=PLlHjNcdoyw6UK30xrTUhjM-usQOOE5jhN&index=27
1. Tutorial about Random function: https://www.youtube.com/watch?v=fOOUpQ2_YKc&index=13&list=PLlHjNcdoyw6UK30xrTUhjM-usQOOE5jhN
^stopped watching at 35 minutes; probably not worth watching since it’s a bit outdated with the changes in Unity 5. Main takeaway was how to create pseudo mesh colliders out of many box colliders
3. Tutorial about Singletons: https://www.youtube.com/watch?v=KHTv8DbY_3w&index=23&list=PLlHjNcdoyw6UK30xrTUhjM-usQOOE5jhN
4. Dual Stick Shooter tutorial (Part 1): https://www.youtube.com/watch?v=iy-81lHEXvo&index=24&list=PLlHjNcdoyw6UK30xrTUhjM-usQOOE5jhN
The Cooking With Unity series is pretty awesome for picking up gamedev tools and vocabulary (i.e. useful functions and types of interaction). I had the fortune of meeting the creator several months back (he’s a cool dude), and now that I’m in the trenches of game programming, I’m glad he’s taken so much time to put out these tutorials.
1. Cooking with Unity Vectors and Transforms: https://www.youtube.com/watch?v=pqwvoo3iiZs&index=6&list=PLlHjNcdoyw6UK30xrTUhjM-usQOOE5jhN
^not programming, but including this as a note that I’m consuming tons of news media daily (mostly r/oculus and r/futurology)
I’ve been resetting my caffeine tolerance the last few days: I switched from a cup of coffee every day to green tea, and will soon switch to no caffeinated drinks whatsoever. This is causing a temporary drop in productivity, but it will be made up for when I return to drinking coffee. I might write a blog post about caffeine optimization at some point in the future, but for now, I thought I’d mention this. If you don’t reset caffeine tolerance, you can find yourself stuck needing more and more caffeine just to achieve normalcy, and ultimately miss out on the mental benefits. To avoid extremely painful withdrawal, it’s best to wean around the 1–2 cup a day mark. I’ve found that weaning from one cup a day is bad enough, usually resulting in about 3 days of headache/fever-like symptoms if you cut cold turkey without easing off via tea.
1. Continued Prospector Solitaire Tutorial (“Introduction to Game Design, Prototyping, and Development”)
2. Continued reading “Decoded” by Mai Jia (fiction)
1. Video about C# dictionaries:
2. List, Enum, Dictionaries basics review:
3. Started Prospector Solitaire Tutorial (“Introduction to Game Design, Prototyping, and Development”). Got cards to spawn correctly in deck.
4. Reviewed get/set: https://www.youtube.com/watch?v=-1eLIDSWiLk
1. Ended up getting distracted by a new book (“Decoded”) and so didn’t get any work done today. I think I’m going to just move back to tutorials tomorrow to ensure I don’t procrastinate. It’s a well-established principle of the brain that if you work on a problem for a while and don’t get the dopamine reward of solving it, you can lose a lot of motivation. Better to just keep moving :)
1. Worked on getting body pieces to spawn into a list, then having each piece in the list follow the piece in front of it with a delay (so as to allow traditional snake-like following). Was not able to accomplish this. Told my brother about the issue and he suggested using two layers of Dictionaries.
Hitting a wall in programming is definitely frustrating. However, luckily tools like the Unity forums and Stack Overflow exist, allowing you to get help online. My plan is to try getting dictionaries to work tomorrow (I have to figure out how dictionaries actually function, which will be a good learning experience), and if I can’t figure it out, will post on the Unity answers forums. Regardless of whether the question is answered, tomorrow will be my last day working on 3D Snake (for now), since I want to move on to more tutorials.
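For anyone stuck on the same problem: one common approach (not necessarily the nested-Dictionary idea my brother suggested, and not what I ended up writing) is to record the head’s past positions in a list and have each body piece snap to the position some number of steps behind. A hypothetical sketch:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class SnakeBody : MonoBehaviour
{
    public Transform head;                                   // the snake's head
    public List<Transform> bodyPieces = new List<Transform>();
    public int gap = 10;                                     // history steps between pieces (assumed value)

    private List<Vector3> positionHistory = new List<Vector3>();

    void FixedUpdate()
    {
        // Record where the head has been, newest first.
        positionHistory.Insert(0, head.position);

        // Each body piece takes the position the head occupied
        // (i + 1) * gap steps ago, giving the classic delayed follow.
        for (int i = 0; i < bodyPieces.Count; i++)
        {
            int index = Mathf.Min((i + 1) * gap, positionHistory.Count - 1);
            bodyPieces[i].position = positionHistory[index];
        }

        // Trim history entries no piece will ever need again.
        int maxNeeded = (bodyPieces.Count + 1) * gap;
        if (positionHistory.Count > maxNeeded)
            positionHistory.RemoveRange(maxNeeded, positionHistory.Count - maxNeeded);
    }
}
```

Growing the snake is then just adding a new Transform to `bodyPieces`; the history list handles the rest.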
1. Got player movement working for 3D snake
2. Got food destroy/respawning working, as well as increasing speed from each successful block eaten
I was surprised at how difficult just getting movement to work turned out to be. Following along with tutorials and setting out to write code yourself are definitely two different things. When I stopped looking only at other people’s snake code in forum posts and really thought about the type of movement I wanted to accomplish (namely, always moving in one of 6 directions depending on the last key pressed), I was able to figure out a solution. Taking breaks for food, etc. helped me reach breakthroughs (such as how to ensure you couldn’t switch your direction directly backwards, killing yourself). The last obstacle I need to figure out is how to spawn body pieces into a list in a way that makes them move to where the piece in front was (aka follow the piece in front like traditional snake).
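The movement scheme described above (always moving along one of 6 axis directions, never allowed to reverse straight into yourself) could be sketched roughly like this — the key bindings and speed value are my assumptions for illustration, not the exact prototype code:

```csharp
using UnityEngine;

public class SnakeHead : MonoBehaviour
{
    public float speed = 2f;                    // assumed units per second
    private Vector3 direction = Vector3.forward;

    void Update()
    {
        // Pick a new axis-aligned direction from the last key pressed,
        // ignoring it if it would reverse the snake into itself.
        if (Input.GetKeyDown(KeyCode.W)) TrySetDirection(Vector3.forward);
        if (Input.GetKeyDown(KeyCode.S)) TrySetDirection(Vector3.back);
        if (Input.GetKeyDown(KeyCode.A)) TrySetDirection(Vector3.left);
        if (Input.GetKeyDown(KeyCode.D)) TrySetDirection(Vector3.right);
        if (Input.GetKeyDown(KeyCode.Q)) TrySetDirection(Vector3.up);
        if (Input.GetKeyDown(KeyCode.E)) TrySetDirection(Vector3.down);

        // The head never stops; it always slides along the current direction.
        transform.position += direction * speed * Time.deltaTime;
    }

    void TrySetDirection(Vector3 newDir)
    {
        // Disallow a direct 180-degree reversal.
        if (newDir != -direction) direction = newDir;
    }
}
```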
If you’re following along, you’ll notice that for 3D Snake I haven’t posted links to what I’ve been using to figure stuff out. This is because this project is meant to force you to figure stuff out on your own rather than just follow along with someone else’s code. Today I made tons of Google searches trying to find answers to the questions I had, from “c# unity snake movement” to “how to get vector3 of gameObject”.
1. Read through Boss code of Space SHMUP tutorial (couldn’t get the bounds code to work)
2. Started first MIDTERM project: create a game in 2 days. For today I’m going to focus on design and planning out the code construction, with some implementation, while tomorrow will be focused on execution. The game prototype I am making is 3D VR Snake. My goal is to create a functioning prototype that tests the mechanics of having a 3D snake game in VR, to see if it is fun. If the mechanics are fun, I want to be able to expand on the game at a later time, turning it into a full VR title. If you are following along, feel free to create your own game, or also make a 3D Snake (just pick something you think you can complete in 2 days from both your knowledge so far and the knowledge you can gain quickly from researching how to do specific things).
3. Got Oculus camera working in Unity (just had to download the SDK and click a checkbox: http://docs.unity3d.com/Manual/VROverview.html)
4. Put basic objects for 3D snake into scene, successfully set out programming challenges needed to be completed for prototype, and got cubes (food) to randomly spawn within the play area via a callable function
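The callable spawn function from item 4 could look something like this sketch (the prefab reference and play-area size are placeholder assumptions):

```csharp
using UnityEngine;

public class FoodSpawner : MonoBehaviour
{
    public GameObject foodPrefab;                              // the cube to spawn
    public Vector3 playAreaSize = new Vector3(10f, 10f, 10f);  // assumed bounds, centered on origin

    // Callable from anywhere, e.g. right after the previous food is eaten.
    public void SpawnFood()
    {
        Vector3 pos = new Vector3(
            Random.Range(-playAreaSize.x / 2f, playAreaSize.x / 2f),
            Random.Range(-playAreaSize.y / 2f, playAreaSize.y / 2f),
            Random.Range(-playAreaSize.z / 2f, playAreaSize.z / 2f));
        Instantiate(foodPrefab, pos, Quaternion.identity);
    }
}
```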
1. Continued Space SHMUP tutorial (only have Boss code and Scene decoration left)
^Couldn’t get Enemy_1 class to work properly (would get stuck at top of screen), so skipped over to Boss class. I don’t really understand the Bounds code very thoroughly, so I’d rather just move on and not waste time (the class will be fairly useless in VR). Personally I feel like the implementation of this demo could have been less complex and more streamlined, but I assume the author is trying to do things in a way so as to teach you new stuff.
- Continued Space SHMUP tutorial
^creating the Bounds class got a bit tedious, but it was good that it took me outside my comfort zone as well. It’s crazy how much coding can go into something as simple as “detect when objects go off screen or reach the edge of screen”. The reusability of this code, however, will make the time put in worth it. I guess that’s the secret to good coding: put in extra time making good architecture, and your future use of it will end up saving you a ton of time overall
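The book builds its own reusable Bounds class; purely to illustrate the underlying idea, here’s a much simpler point-based off-screen check using Unity’s viewport coordinates — a sketch, not the book’s implementation:

```csharp
using UnityEngine;

public class ScreenBoundsCheck : MonoBehaviour
{
    void Update()
    {
        // Convert the world position to viewport space: x and y run
        // from 0 to 1 while the object is on screen.
        Vector3 vp = Camera.main.WorldToViewportPoint(transform.position);

        bool offScreen = vp.x < 0f || vp.x > 1f || vp.y < 0f || vp.y > 1f;
        if (offScreen)
        {
            Debug.Log(gameObject.name + " left the screen");
        }
    }
}
```

This treats the object as a single point; the book’s Bounds class is more involved precisely because it accounts for the object’s full extents (detecting when any part touches or crosses the edge).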
1. Finished Catapult game tutorial from “Introduction to Game Design, Prototyping, and Development” book
^couldn’t figure out how to get number of balls shot in a round to display, so gave up and moved on after 20 minutes of troubleshooting, since I have no intention of continuing this particular prototype. UI seems to be a bit tricky overall since you’re dealing with mixing ints and strings
2. Started Space SHMUP prototype from “Introduction to Game Design, Prototyping, and Development”
- Started physics catapult game tutorial from “Introduction to Game Design, Prototyping, and Development”
^Took a break to hang out with my brother while he was in town.
1. “Introduction to Game Design, Prototyping, and Development”: Object-oriented thinking. Created a flock-of-birds (boids) program. REMEMBER TO IMPORT UNITY TOOLS INTO NEW PROJECTS, or Visual Studio won’t open scripts (took me about two minutes to troubleshoot). Finished Part 2 of the book (skimmed over the Scrum section, since I’ve already written about it, and did not make burndown charts; earmarked for later)
2. Completed Apple Picker tutorial from DGPD book
^got stuck dealing with the new UI (it has changed in Unity since the book was written). Learned that you need to use the UnityEngine.UI namespace at the top of your script if you want to use the new “Text” class (http://answers.unity3d.com/questions/872864/how-do-you-reference-a-text-on-a-canvas-unity-46-g.html). Also had to manually convert an int to a string in HighScore, which the book implied wasn’t necessary.
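To illustrate the fix, a minimal sketch of referencing the new UI Text component and converting an int for display (the class and field names here are hypothetical, not the book’s):

```csharp
using UnityEngine;
using UnityEngine.UI; // required for the Text class in the Unity 4.6+ UI

public class HighScoreDisplay : MonoBehaviour
{
    public Text scoreText;    // drag the Canvas Text object in via the Inspector
    private int highScore = 0;

    public void SetScore(int score)
    {
        highScore = score;
        // Text.text expects a string, so the int must be converted explicitly.
        scoreText.text = "High Score: " + highScore.ToString();
    }
}
```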
- Continued Part 2 of “Introduction to Game Design, Prototyping, and Development”
^(Chapter 24) learned how to debug
2. Downloaded Visual Studio Tools for Unity so that I could use VS for debugging instead of MonoDevelop (once you install the tools, you must import the package into your project, which I found out handily online via a question someone on Reddit asked)
3. Watched the Hover Junker Devblogs #0–5 : https://www.youtube.com/watch?v=A7jNGu8PbWQ
^seeing a small game studio working on a VR project gave me a bunch of insights about the process, as well as some VR tips for what is good with the Vive
4. “Introduction to Game Design, Prototyping, and Development” Chapter 25: Learned (reviewed) how to make and use classes
- Continued reading Part 2 of “Introduction to Game Design, Prototyping, and Development”
^reviewed and solidified understanding of lists and arrays, as well as if, else statements, and functions
2. Briefly looked at http://docs.unity3d.com/Manual/VROverview.html to figure out how to get the Oculus camera into a scene, but did not try it yet
Day 1 (July 27, 2015)
1. Downloaded Unity 5.1
2. Downloaded Visual Studio Community (2015)
^so much better than MonoDevelop, makes coding extremely fluid
3. Googled Unity documentation on Visual Studio, learned I could set it as the default script editor in Unity’s preferences
4. Started reading “Introduction to Game Design, Prototyping, and Development” book PART 2
5. Found https://www.reddit.com/r/unity3d for future help/resources
6. Downloaded many free Unity assets to play around with (customizable female character model, space assets, medieval buildings, rocks, zombie with animations)
7. Messed around with shaders and prefabs and put some into the scene (as well as on the character model)