Full stack geospatial web developer.



Setting up a quick Drupal 7 development environment with Docker

Category : Web Development · Dec 20th, 2016

For several years my colleagues and I have used Vagrant Drupal Development (VDD) to set up local development instances of the various Drupal websites we manage. It worked flawlessly for a couple of those years, but lately it has been coming apart at the seams. Every time we use it to spin up a fresh development instance we discover yet another bug requiring yet another shameful hack. It’s been over two years since its last official release so this doesn’t come as a complete shock.

The time seemed ripe to try out Docker. Docker “is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.” I don’t know much about Docker yet, truth be told. But I know it uses LXC (Linux Containers), a relatively new technology, to run processes in a kernel-level virtual environment. As a developer this means I can spin up servers in lightweight virtual containers without creating full-fledged virtual machines with their own gigabyte-sized operating systems.

What my colleagues and I liked about VDD is that it provided an automated means to set up a fresh copy of Drupal 7 with its files mounted from the host OS. We would then copy over Drupal’s “sites” subdirectory and import a MySQL database dump from the production server. Any changes made to the Drupal files mounted from the host OS would be reflected immediately at http://localhost:8080/.

I was able to do the same thing with Docker with the following commands:

# Assume we have a full copy of our Drupal website's "sites"
# subdirectory in ~/docker/sites and a MySQL database dump in
# ~/docker/dump.sql

# Download/overwrite settings.php file with a modified
# version of the stock default.settings.php from Drupal core.
# This version of the file adds a database connection that
# gets its MySQL host address and root password from the
# environment variables provided by the linked MySQL container.
cd ~/docker/sites/default
curl -O 'https://raw.githubusercontent.com/ua-snap/docker-drupal-settings/master/settings.php'

# Set up persistent MySQL database container.
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=root \
  -d mysql:latest

# Wait 10-15 seconds for the MySQL container to start up,
# then spawn a temporary MySQL container to use as a client
# connecting to the persistent MySQL container. MySQL server
# info is passed as environment variables to the client
# container when it is linked. Use this info to connect to
# the MySQL server and create "drupal7" database.
docker run -it --link some-mysql:mysql --rm mysql \
  sh -c 'exec mysql -h "$MYSQL_PORT_3306_TCP_ADDR" \
  -P "$MYSQL_PORT_3306_TCP_PORT" \
  -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" \
  -e "CREATE DATABASE drupal7;"'

# Spawn another temporary MySQL container that links to
# the persistent MySQL container and imports database dump.
docker run -i --link some-mysql:mysql --rm mysql \
  sh -c 'exec mysql -h "$MYSQL_PORT_3306_TCP_ADDR" \
  -P "$MYSQL_PORT_3306_TCP_PORT" \
  -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" drupal7' \
  < ~/docker/dump.sql

# Set up persistent Drupal container that links to MySQL
# container, mounting the "sites" directory from the host OS.
docker run --name some-drupal -p 8080:80 \
  --link some-mysql:mysql \
  -v ~/docker/sites:/var/www/html/sites -d drupal:7

And lightweight is right! Here are some interesting figures from my VDD vs. Docker development environments:

VDD storage size: 8.2 GB
Docker (Drupal + MySQL) storage size: 816 MB

VDD memory footprint: 2.13 GB
Docker (Drupal + MySQL) memory footprint: 272 MB

VDD spin-up time: ~13.5 minutes
Docker (Drupal + MySQL) spin-up time: ~30 seconds


Domesticating a wild algorithm

Category : Game Development · Nov 21st, 2016

In my previous post, I came up with a recursive algorithm that determines where a unit can move in my PICO-8 turn-based tactical RPG. But it’s not perfect. It’s not even adequate. Watch the animation below to see if you can spot why:

Unoptimized Algorithm

This algorithm is calculating the possible moves for an Archer unit, which is able to move 4 steps in any cardinal direction. As shown in my previous post, I decided to use recursion to calculate each step independently so the algorithm is intelligent enough to wrap around obstacles. Indeed, this part is working. But if it’s finding 34 accessible spaces, why is the function called 193 times? What’s happening in the extra 159 function calls?

It’s clear that those 159 function calls are doing work that’s already been done. Retrospect is 20/20, but it’s not hard to figure out why. Each square is looking in each of its 4 cardinal directions for an available space to move into. But only the very first square in the center needs to check all 4 directions. The rest of the squares should already “know” that our Archer unit can access the square in the direction from which it arrived. They should only be checking the other 3 directions at most.

Worse yet, the algorithm is prone to taking leisurely strolls through large swaths of squares it has already marked as accessible, from every possible path in every possible direction. What else could explain numbers like this (taken without obstacles, unlike the animation above):

Before optimization:
0 steps: 1 function call
1 steps: 5 function calls
2 steps: 17 function calls
3 steps: 65 function calls
4 steps: 237 function calls
5 steps: 925 function calls
6 steps: 3,501 function calls

We have a big problem here.

What to do? Couldn’t we simply prevent the algorithm from moving into a space already marked as accessible? It seems sensible at first blush. But, alas, it’s possible for the algorithm to arrive at the same square with more available steps than the last time it visited. It’s highly probable that we’d end up cutting its journey short and occasionally failing to reach every available square. Here’s a better idea: If an adjacent square has already been visited before, why not check to see if we can arrive into the square with more remaining steps? If not, turn back. But if so, move into the square and record its new “high step score.” Like this:

Better Move Range Algorithm
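To make the idea concrete, here is a runnable sketch of the “high step score” approach in JavaScript (the game itself is written in PICO-8’s Lua; the names and grid representation here are my own illustration). Each square records the most remaining steps we’ve ever had upon entering it, and we only step into a neighbor if we can arrive with more steps than any previous visit:

```javascript
// best maps "x,y" to the most remaining steps recorded for that
// square; a neighbor is only entered if we can arrive there with
// more remaining steps than any previous visit.
function range(grid, x, y, steps, best, calls) {
  calls.n++;
  best.set(x + ',' + y, steps);
  if (steps === 0) return;
  for (const [dx, dy] of [[0, -1], [0, 1], [-1, 0], [1, 0]]) {
    const nx = x + dx, ny = y + dy;
    const open = grid[ny] !== undefined && grid[ny][nx] === 0; // 0 = open, 1 = obstacle
    const prev = best.get(nx + ',' + ny);
    if (open && (prev === undefined || prev < steps - 1)) {
      range(grid, nx, ny, steps - 1, best, calls);
    }
  }
}

// 11x11 open grid, unit in the center with 4 steps.
const grid = Array.from({ length: 11 }, () => Array(11).fill(0));
const best = new Map();
const calls = { n: 0 };
range(grid, 5, 5, 4, best, calls);
console.log(best.size); // 41 accessible squares
```

Because every call strictly improves a square’s recorded step score, the number of calls is bounded by (number of squares) × (steps + 1) instead of growing exponentially.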

Let’s take this for a test drive:

Optimized Algorithm

Nice.

After optimization:
0 steps: 1 function call
1 steps: 5 function calls
2 steps: 13 function calls
3 steps: 25 function calls
4 steps: 41 function calls
5 steps: 61 function calls
6 steps: 85 function calls

Algorithm Optimization Graph

I don’t think I’m going to let any of my units move more than 10 spaces at a time. But I can rest easy knowing that I now have an algorithm that can calculate ~12 step ranges without batting an eye, whereas the unoptimized version caused black smoke to billow out of PICO-8 at just 6 steps.


Better movement through recursion

Category : Game Development · Nov 20th, 2016

Now that I have a few graphics for my little PICO-8 game, it’s time to saunter back toward more familiar terrain — coding.

I need to find a way to determine where each unit can move on the map’s grid. Each unit has a certain number of steps they are allowed to move per turn. My first thought was, “Why not just keep a list of coordinates indicating where each unit can move from their current position, or maybe just calculate it on-the-fly geometrically to save more of my 8,192 PICO-8 coding tokens for later?” But there are problems with both of these approaches.

XCOM 2 Movement

Movement range lines in XCOM 2.

Units should not be allowed to move into a square already occupied by another unit or an environmental obstacle like a mountain or river. Units will need to walk around other units, cross bridges, etc. Wrapping around obstacles and walking through narrow choke points can alter a unit’s movement pattern dramatically. Hence, pre-calculating a unit’s available steps and simply marking some as invalid is not going to work. That can’t possibly be how XCOM pulls off its unpredictable, twisting range-of-motion lines, after all.

Recursion strikes me as the only sane solution to this problem. It’s simple enough. A unit can check to see whether it can move into each of its four adjacent spaces. It then subtracts 1 from its number of available steps and performs the same check in each of those four spaces, and so forth. Nothing will be off limits to a unit so long as it has enough steps to wind its way around any obstacles in the way. The (simplified) Lua code looks like this:

function range(x, y, steps)
  valid[x][y] = true

  if steps == 0 then
    return
  end

  if obstacle[x][y - 1] == false then
    range(x, y - 1, steps - 1)
  end

  if obstacle[x][y + 1] == false then
    range(x, y + 1, steps - 1)
  end

  if obstacle[x - 1][y] == false then
    range(x - 1, y, steps - 1)
  end

  if obstacle[x + 1][y] == false then
    range(x + 1, y, steps - 1)
  end
end

This produces the characteristic “move diamond” that’s so familiar to fans of the genre:

Move Range Algorithm
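For anyone who wants to poke at the algorithm outside PICO-8, here is a direct port of the Lua function to JavaScript (the boolean obstacle grid and the names are my own illustration):

```javascript
// Direct JavaScript port of the Lua range() function above.
// obstacle is a 2D array of booleans; valid collects reachable
// "x,y" squares.
function range(obstacle, valid, x, y, steps) {
  valid.add(x + ',' + y);
  if (steps === 0) return;
  // Same order as the Lua version: up, down, left, right.
  for (const [dx, dy] of [[0, -1], [0, 1], [-1, 0], [1, 0]]) {
    const nx = x + dx, ny = y + dy;
    if (obstacle[ny] !== undefined && obstacle[ny][nx] === false) {
      range(obstacle, valid, nx, ny, steps - 1);
    }
  }
}

// 9x9 map with no obstacles; unit in the center with 3 steps.
const obstacle = Array.from({ length: 9 }, () => Array(9).fill(false));
const valid = new Set();
range(obstacle, valid, 4, 4, 3);
console.log(valid.size); // 25 squares: the classic "move diamond"
```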

Tune in next time to find out why this isn’t good enough.


tactics.p8, or how I learned to stop worrying and steal ideas from Fire Emblem and XCOM

Category : Game Development · Nov 16th, 2016
Fire Emblem

Fire Emblem (Game Boy Advance)

I’m very much into turn-based strategy games these days. I’m relatively new to the genre but I can’t get enough of it. I recently played my very first game of Civilization, and I’ve wasted whole weekends plowing through XCOM: Enemy Unknown and XCOM 2. I’ve also pushed myself through the unexpectedly grueling campaign of Fire Emblem for Game Boy Advance. I will gladly play any of these games repeatedly and endlessly, but I think Fire Emblem is my favorite so far. If Santa pulls through for me this year, I might just be playing Fire Emblem Fates for the Nintendo 3DS by December.

So! Obviously I want to make a turn-based tactical RPG. The more it plays like Fire Emblem or XCOM the better, but I appreciate how the strict limitations of PICO-8 force me to distill these games down to their essence. For example, I don’t have the skills to mimic the beautifully animated sprites of Fire Emblem, but I’m not even sure how one would represent people carrying swords, axes, spears, and bows with just 8×8-pixel sprites. My solution, as seen in my previous blog post, was to draw just the weapons themselves. Like chess pieces, they’re easy to identify and simple to take in at a glance on the battlefield.

But unlike a typical black and white chess set, I have colors to deal with. I want the player and computer opponent to share the same unit types, to have green and red versions of the same sprites. Unfortunately and predictably, the shades of green and red available in PICO-8’s 16-color palette are too subtle for my color blind eyes. This is definitely not something you want to mix up in the heat of battle! What to do? Point the red units to the left!

Tactics Unit Cursor


Eight bits of empowerment with PICO-8

Category : Game Development · Nov 16th, 2016

Time to dust off the ol’ blog and get excited again. I just discovered PICO-8.

PICO-8 is “a fantasy console for making, sharing and playing tiny games and other computer programs.” Sounds rather abstract, doesn’t it? Think of it as a tiny emulator that replicates the retro look of games from the Game Boy Color era (or thereabouts). But instead of playing commercial games lifted from the shady recesses of the Internet, it has its own ecosystem with thousands of free homebrew games. Even better, it provides all the tools needed for aspiring indie developers to make their own games, and you can use these tools to modify or extend any of the existing games.

More importantly, it’s deliberately built with, in their own words, “harsh limitations.” Cartridges are 32kb, which gives you limited room for code, sprites, and sound. And herein lies the genius of PICO-8. It’s a platform that gives teams-of-one like myself a fighting chance to develop a finished game. A crude, blocky game, sure, but one that can stand proudly with the others — a game that I can take from concept to distribution using what talents I already have while taking baby steps in other areas, like art and music.

All of my previous attempts to make a game in Unity have ended in a despondent prowl through the hodgepodge offerings of the Unity Asset Store. But when your graphics are 8×8 pixel art, why not take a stab at making them yourself? Behold!

PICO-8 Tactics Sprites


LoopBack, a flexible off-the-shelf API solution without the fuss

Category : Web Development · Jun 3rd, 2016

I’ve been dabbling in LoopBack lately. LoopBack is a “highly-extensible, open-source Node.js framework.” More specifically, it allows you to “quickly create dynamic end-to-end APIs.” I’ve only been working with it a few days, but I can already see it seriously delivers on that promise.

Fresh out of the box, LoopBack will guide you through the process of creating a model. A model can be as simple as a collection of fields you want to be able to manipulate over the API. It’s just as easy to define relations between the models. The custom models you create inherit from LoopBack’s base models that implement basic CRUD operations. Everything you configure interactively through its model/relation generators is recorded into JSON and JavaScript files, so your configuration can be checked into a Git repo and modified later.
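For example, a generated model definition might look something like this (a hypothetical “Note” model; the field names are my own illustration):

```json
{
  "name": "Note",
  "base": "PersistedModel",
  "idInjection": true,
  "properties": {
    "title": { "type": "string", "required": true },
    "content": { "type": "string" }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {}
}
```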

By default, LoopBack stores any data received through CRUD operations in its own internal memory, which means your data will get wiped whenever LoopBack shuts down. But it allows you to specify a JSON file to use for data persistence, buying you time to figure out the what & where of your real database. Once you settle on a production database, simply use one of LoopBack’s off-the-shelf connectors to interface with PostgreSQL, MongoDB, or whatever.

So basically you can spin up an API that’s good enough for basic development within minutes and worry about infrastructure details later. This is fantastic.


Preserving URLs after sending a Drupal 6 website into retirement using HTTrack + Apache

Category : Web Development · May 10th, 2016

Support for Drupal 6 ended on February 24, 2016. That means any Drupal 6 website still accessible to the public is at risk of getting hacked and is growing more susceptible by the day. Your options are:

  • Upgrade the website to a currently supported version of Drupal.
  • Retire the website as a static HTML archive.

My organization chose the latter, and huge kudos to Karen for her article. It saved us a lot of time and/or money. Even better, while implementing her strategy I discovered I could use a few Apache tricks to keep HTTrack’s static URLs backwards compatible with the old Drupal 6 website. You too can ditch that unsightly “.html” at the end of your URLs and serve them up as the same Drupal paths you know and love, and with no money down!

Please read Karen’s article first or this won’t make much sense. Consider this article an addendum to hers. Also, this article assumes your Drupal 6 website uses Clean URLs. These instructions will not be able to preserve your Drupal website’s URLs if they contain “?q=”.

Here is the HTTrack command I ended up using to mirror our website, which is a slight modification of Karen’s:

httrack http://website -O . -N "%p/%n%[page].%t" -wqQ%v --robots=0

Using this command, HTTrack stores parent nodes as directories. For example, assuming both “/about” and “/about/history” are valid pages, they are stored as an “about” directory containing the following files:

  • index.html (the “about” page’s content, a.k.a. the parent node)
  • history.html (the “history” page’s content, a.k.a. the child node)

There’s one small catch, however. HTTrack stores the website’s root index as an index-2.html file. The index.html is a meta-page generated by HTTrack itself that lists all of the websites it has mirrored. If HTTrack mirrors just one website, index.html automatically redirects to index-2.html, the actual root index for the website, but not before an “HTTrack” splash screen shows up for a split second in the browser.

I wasn’t able to find a way to disable HTTrack’s index.html meta-page feature. Fortunately, we can bypass it altogether by adding index-2.html to Apache’s DirectoryIndex and giving it precedence over index.html:

DirectoryIndex index-2.html index.html

With that out of the way, the front page and all parent node URLs act like the old Drupal site. Using just HTTrack + Apache’s DirectoryIndex, Drupal’s root path “/” can still be accessed as “/”, and its “/about” path can still be accessed as “/about”.

This is a step in the right direction, but remember, the HTML files generated by HTTrack are not aware of our Apache tricks. Internal links generated by HTTrack will still go to the explicitly *.html versions of URLs. Clicking just one link will send the user back to the *.html Stone Age.

We can get around this for the front page by using Apache’s mod_rewrite module to redirect both index.html and index-2.html URLs to just “/”:

RewriteCond %{REQUEST_URI} ^/(index-2|index)(\.html)?$
RewriteRule ^.+$ / [R=301]

Using HTTP’s 301 “Moved Permanently” status code also ensures that “/” is the only version of the URL that will be picked up for search engine listings. Now wouldn’t it be nice if we could do the same for child nodes? We can, in fact! But only after we dial up the sorcery a notch.

The following mod_rewrite rule strips the “.html” off of child node paths:

RewriteCond %{REQUEST_URI} ^.+\.html$
RewriteRule ^(.+)\.html$ $1 [R=301]

And the following rule reconstructs child node URLs back into their *.html versions behind the scenes, invisible to the browser, whether they came from our previous Apache rule, a Google link, or were typed directly into the address bar:

RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME}\.html -f
RewriteRule ^(.+)$ $1.html

So now even child node URLs are the same as they were on the old Drupal website and will continue to be listed that way in search engines. For example, “/about/history” on the old site will still be “/about/history” on the new site. This covers everything, generally, but there are exceptions. If any of your Drupal modules are fancy enough to use GET parameters for paging (e.g., calendars), their URLs will not be preserved.

Putting it all together, it looks like this:

# Modify your existing DirectoryIndex settings.
DirectoryIndex index-2.html index.html

<IfModule mod_rewrite.c>
  RewriteEngine on

  RewriteCond %{REQUEST_URI} ^/(index-2|index)(\.html)?$
  RewriteRule ^.+$ / [R=301]

  RewriteCond %{REQUEST_URI} ^.+\.html$
  RewriteRule ^(.+)\.html$ $1 [R=301]

  RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME}\.html -f
  RewriteRule ^(.+)$ $1.html
</IfModule>

Promises and promise chaining, your best defense against hadouken code

Category : Web Development · May 5th, 2016

There’s no shortage of buzzwords floating around the JavaScript community these days, but “promise” is certainly not one of them. Promises are the lynchpin of clean, asynchronous JavaScript code. They were included in the ECMAScript 6 standard and they are not going away. If, like me, you have been skirting around the concept by cobbling together examples from Stack Overflow, now is the time to stop. Because sooner or later you’re going to end up writing hadouken code:

Hadouken Code

Like so many other coding concepts, promises are not trivial but not especially difficult either. The problem is that, owing to their very nature, the English language seems to buckle under the weight of formal attempts to explain them. Check out this description of the resolve method from MDN for example:

Promise.resolve(value) – Returns a Promise object that is resolved with the given value. If the value is a thenable (i.e. has a then method), the returned promise will “follow” that thenable, adopting its eventual state; otherwise the returned promise will be fulfilled with the value.

But thanks to the umpteen blogs and tutorials on the web paraphrasing promises in their own way, I finally have a pretty decent grasp on the concept. Now allow me to pay it forward with my own take.

You are given an empty box. Through some kind of magic (e.g., a wizard named Asynchronism), you expect that something will eventually materialize in this box. In fact, Asynchronism promised this to you. Some of your work depends on whatever shows up in the box, but you can take care of other things while you wait. Just keep an eye on the box.

Something just materialized in the box! What is it? Another empty box, with another heartfelt promise from the rascally wizard Asynchronism that something will eventually materialize inside. So you keep an eye on the new box for a while and something materializes in that one too. What is it? It’s the item you needed to finish the rest of your work!

That’s about as far as this analogy goes, but it captures the spirit of both promises (an empty box to be filled) and promise chaining (a new box materializing inside of the box).

Consider the following Node.js code:

var request = require('request');
 
request('/end/point/1', function (error, response, body) {
  request('/end/point/2', function (error, response, body) {
    request('/end/point/3', function (error, response, body) {
      // Do stuff.
    });
  });
});

Here we have three anonymous callback functions nested inside one another. This is one way to ensure asynchronous requests run in strict sequence, but it’s a sloppy relic of pre-promise JavaScript. It’s probably also a symptom of poor back-end API design, but things like this can happen in the real world. Imagine if we had to add two or three more steps to this sequence. Hadouken!

We can avoid this by using the promise-enabled version of the request module, request-promise, which returns a promise like this:

var request = require('request-promise');

var promise = request('/end/point');
promise.then(function(response) {
  // Do stuff.
});

Which means we can refactor our asynchronous call sequence like this:

var request = require('request-promise');

request('/end/point/1').then(function (body) {
  return request('/end/point/2')
}).then(function (body) {
  return request('/end/point/3')
}).then(function (body) {
  // Do stuff.
});

In this example, the request() function returns a promise (an empty box) and .then() tells the program to keep an eye on the promise and invoke the provided anonymous function when something materializes. Inside the anonymous function, we call request() again, which returns another promise. But rather than dealing with this promise immediately inside of the anonymous function (which would lead to hadouken code), we return the return value, passing it outside of the anonymous function. Via the magic of promises, the “inner promise” is passed up the chain and becomes the “outer promise”. This is promise chaining.
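The same mechanics can be seen without any HTTP libraries at all. Here’s a self-contained sketch where each “request” is faked with setTimeout (fakeRequest is a stand-in of my own, not a real module):

```javascript
// A dependency-free demo of promise chaining: each "request" is
// faked with setTimeout, but the chaining works exactly as above.
function fakeRequest(value) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(value); }, 10);
  });
}

fakeRequest(1).then(function (body) {
  return fakeRequest(body + 1); // the inner promise becomes the outer promise
}).then(function (body) {
  return fakeRequest(body + 1);
}).then(function (body) {
  console.log(body); // 3
});
```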


Using an asynchronous server asynchronously, a small step towards Node.js enlightenment

Category : Web Development · Apr 27th, 2016

I’m a bit of a Node.js novice. So far my only practical experience has been writing an Express-based API for the Historical Sea Ice Atlas that queries a PostGIS database, performs some post-processing, and outputs the results as JSON. This was not enough to appreciate why Node.js was better, faster, or more scalable than other solutions, but I knew there must be something behind all the hype.

Okay, I understood in a very theoretical way that because Node.js handles I/O asynchronously, it doesn’t lock up entire threads from a finite thread pool for the entire duration of a request, à la PHP/Apache. For this reason, any web app that does a lot of server-side I/O will generally stay more responsive in Node.js as its traffic increases. You get this benefit for free.

But how does Node.js help us in the trenches of code development? If, like me, you’ve spent a decade thinking of JavaScript in strictly front-end UX terms, it can be difficult to grasp. But it’s not, really. We’re all comfortable with asynchronous calls in the browser, right? Why should servers be any different?

I encountered a situation today that assured me that back-end JavaScript is, in fact, just JavaScript. The confusion had been in my head all along.

The situation:

  1. Use the request module to download and time the response of a web page.
  2. POST this value to a REST endpoint using the restler module.

In a browser, treating asynchronous code procedurally can create race conditions:

// Browser script. Assume jQuery is loaded.
var foo;

$.get('/source.html', function(data) {
 // Do stuff.
 foo = value;
});

// Race condition: "foo" may or may not be defined.
$.post('/api/destination', {
 'bar': foo
});

Why would it be any different on an asynchronous JavaScript server? It isn’t. The same situation holds true in Node.js. This creates a race condition:

// Node.js script.
var request = require('request');
var rest = require('restler');
var foo;

request('/source.html', function (error, response, body) {
  // Do stuff.
  foo = value;
});

// Race condition: "foo" may or may not be defined.
rest.post('/api/destination', {
  data: {
    'bar': foo
  }
});

Instead, embrace asynchronism:

// Node.js script.
var request = require('request');
var rest = require('restler');

request('/source.html', function (error, response, body) {
  // Do stuff.
  var foo = value;
  rest.post('/api/destination', {
    data: {
      'bar': foo
    }
  });
});

Of course we’d be better off using promises and chaining. Look at that! An idea for my next post.


Inadvertently Minecraft’ing with Unity and ProCore

Category : Game Development · Apr 23rd, 2015

I have some ray casting and I have some ProCore. What can I do with these things? Quite a bit, as it happens. In a previous post, I showed a very simple example of how ray casting can be used by a First Person Controller to interact with objects around it. While turning objects blue is probably not the most common use case, it’s easy to see how the code could be adapted to destroy objects instead using the Destroy function. From there it doesn’t take much imagination to see this as a simple gun implementation.

So that covers modifying and destroying objects. What else could we do? Create objects! In fact, Unity makes it almost as easy to create things as destroy them. And with ProCore, we can even add some structure to what we are creating. Take a gander at the animated GIF above.

To do this, I:

  1. Built a 1×1 cube using ProBuilder.
  2. Created a new Unity prefab inside my Resources folder named Block.
  3. Added the 1×1 ProBuilder cube to my Block prefab.
  4. Wrote the following script for my First Person Controller:
using UnityEngine;
using System.Collections;

public class Player : MonoBehaviour {
  Transform playerCamera;
  Vector3 position;
  Vector3 direction;

  void Start () {
    playerCamera = transform.Find("Main Camera");
  }

  void Update () {
    position = playerCamera.position;
    direction = playerCamera.forward;

    RaycastHit hit = new RaycastHit();
    if(Input.GetMouseButtonDown(0)
       && Physics.Raycast(position, direction, out hit)) {
      Vector3 cellPos = getCellPosition(hit);
      Instantiate(Resources.Load("Block"), cellPos, Quaternion.identity);
    }
  }

  // Given a ray cast hit, this function determines the grid
  // position on which to build a block.
  Vector3 getCellPosition(RaycastHit hit) {
    Vector3 position = new Vector3();

    // Keep the position along the ray cast normal vector axis,
    // but round it to the nearest whole number for precise placement.
    // Round the positions on the other two axes down to nearest
    // whole number to align them with ProGrids' coordinate system.
    if(Mathf.Approximately(Mathf.Abs(hit.normal.x), 1.0f)) {
      position.x = Mathf.Round(hit.point.x);
      position.y = Mathf.Floor(hit.point.y);
      position.z = Mathf.Floor(hit.point.z);
    } else if(Mathf.Approximately(Mathf.Abs(hit.normal.y), 1.0f)) {
      position.x = Mathf.Floor(hit.point.x);
      position.y = Mathf.Round(hit.point.y);
      position.z = Mathf.Floor(hit.point.z);
    } else if(Mathf.Approximately(Mathf.Abs(hit.normal.z), 1.0f)) {
      position.x = Mathf.Floor(hit.point.x);
      position.y = Mathf.Floor(hit.point.y);
      position.z = Mathf.Round(hit.point.z);
    }

    // If we hit the back side of a cube along any axis, place our
    // next cube one grid cell back along that axis.
    for(int i = 0; i < 3; i++) {
      if(Mathf.Approximately(hit.normal[i], -1.0f)) {
        position[i] -= 1.0f;
      }
    }

    // ProGrids builds cubes backwards on the z-axis, so add one to
    // z-axis to account for this.
    position.z += 1.0f;

    return position;
  }
}

You could do all of this without ProCore, it’s true. But building an interactive environment in Unity for use with this script, without ProGrids snapping objects to whole number intervals, would be a nightmare.
