The Social Life of Public/Private Urban Spaces:

Simulating Human Interaction of Public Space in a Private Sphere

Jack Lynch
Data Mining the City
15 min read · Dec 13, 2018


by: Jack Lynch, jwl2175

Thesis // Abstract:

This project proposes a simulation methodology to better understand how the public population of an urban environment might inhabit, occupy, or flock towards open, public spaces despite those spaces' apparent containment within an outwardly private sector of the city. Using a potential large-scale housing development occupying exactly one (1) city block in a middle-class borough as precedent, flocking- and collision-based simulations can spawn agents representing members of the public and analyze their movement to and from defined regions. Within the simulations, these regions articulate the passage of people from the public street corridors into public spaces that read as private by virtue of being removed from the street. This relationship is stressed through different plan iterations that compare and contrast internal public spaces of various sizes, flows, and entry methods. Moreover, the simulations allow for expansion, as attractors can be coded in to capture the more nuanced conditions that affect human flocking behavior.

I. Introduction

Public spaces have long commanded the attention, awe, and study of designers, civic engineers, and the general populace. From ancient civilizations, to the refined Greek Agora, to the squares and plazas cities know today, the role of the public space has been deemed invaluable. It was arguably best studied in its modern social climate by William H. Whyte, beginning in 1971 with The Street Life Project and its subsequent documentary and accompanying book, The Social Life of Small Urban Spaces (Figure 01). Whyte used the then-new medium of film to study the way people interacted with one another and with the contextual space of a public plaza in an urban environment.

Figures 01–02

The footage tracked the way individuals moved towards and away from one another: where they sat, why they sat there, who they sat with, and why. Whyte, in his own way, pioneered the idea that people in a public place are, at their core, predictable, and he sought to prove it with recorded facts. He tracked their movement and graphed their data (Figure 02), much the way simulations do today. In fact, by using data recorded for The Social Life of Small Urban Spaces as pre-determined assumptions for human behavior in public spaces, we can program an interface that analyzes how these human populations interact with new plan iterations for urban spaces that might be built today.

II. Learning Agent-Based Computing

To begin, a set of criteria needed to be developed and made certain. First, the simulation would need to represent multiple people as objects within the code and interface. These objects should mimic a person's ability in a social setting to move and operate individually or in groups. After all, as Whyte himself said,

“People are attracted to, well, other people.”

The answer lies in human-flow algorithms or, more specifically, flocking behaviors. In the natural world, organisms exhibit characteristic behaviors when traveling in groups, a phenomenon known as flocking. Using Python, these patterns can be simulated from a handful of simple per-agent rules; the group-level pattern that results is known as emergent behavior, and it can be reproduced in bounded spaces with defined edges.
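As a minimal illustration of those simple rules, here is a plain-Python sketch (independent of the Processing code later in this article; the function and parameter names are illustrative) of the three classic flocking rules: cohesion, alignment, and separation.

```python
import math

def flock_step(agents, closeness=50.0, too_close=10.0, weight=0.05):
    """One update step of the three classic flocking rules.

    Each agent is a dict with 'pos' [x, y] and 'vel' [vx, vy].
    """
    new_agents = []
    for a in agents:
        others = [b for b in agents if b is not a]
        vx, vy = a['vel']
        near = [b for b in others if math.dist(a['pos'], b['pos']) < closeness]
        if near:
            # Cohesion: steer towards the average position of nearby agents.
            cx = sum(b['pos'][0] for b in near) / len(near)
            cy = sum(b['pos'][1] for b in near) / len(near)
            vx += (cx - a['pos'][0]) * weight
            vy += (cy - a['pos'][1]) * weight
            # Alignment: match the average velocity of nearby agents.
            vx += (sum(b['vel'][0] for b in near) / len(near) - a['vel'][0]) * weight
            vy += (sum(b['vel'][1] for b in near) / len(near) - a['vel'][1]) * weight
        # Separation: move away from any neighbor that is too close.
        for b in others:
            if math.dist(a['pos'], b['pos']) < too_close:
                vx -= (b['pos'][0] - a['pos'][0]) * weight
                vy -= (b['pos'][1] - a['pos'][1]) * weight
        new_agents.append({'pos': [a['pos'][0] + vx, a['pos'][1] + vy],
                           'vel': [vx, vy]})
    return new_agents
```

Run repeatedly, these three rules alone produce the group-level clustering described above: two agents placed within the cohesion radius drift toward each other with no global coordinator.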

Therefore, by rewriting the rules of behavior to resemble human flow in a public-space setting (Figure 03), we can simulate how people begin to cluster in groups as they interact with one another. That, in turn, lets us review whether a specific public space might be successful in tandem with its adjacent public spheres of influence.

Figure 03

Our initial attempt began with a flocking algorithm written in Java for Processing (Figures 04–05). The script relied on the interaction between boid objects and a method of calculation that applies new accelerations and steering vector forces from boid to boid, so that each object begins to exert a weight of influence on the movement of its neighbors. Ultimately, this flocking simulation was too devoid of human independence and less indicative of the nuanced choices and attractions Whyte had studied.

Figures 04–05

Preliminary Java Code:

Flock flock;

void setup() {
  size(640, 360);
  flock = new Flock();
  // Add an initial set of boids into the system
  for (int i = 0; i < 150; i++) {
    flock.addBoid(new Boid(width/2, height/2));
  }
}

void draw() {
  background(5);
  flock.run();
}

// Add a new boid into the system on mouse press
void mousePressed() {
  flock.addBoid(new Boid(mouseX, mouseY));
}

class Flock {
  ArrayList<Boid> boids = new ArrayList<Boid>();

  void run() {
    for (Boid b : boids) {
      b.run(boids); // Passing the entire list of boids to each boid individually
    }
  }

  void addBoid(Boid b) {
    boids.add(b);
  }
}
// The Boid class would follow hereafter.....

Alternatively, we used as a reference a different flocking algorithm, developed by A.W. Martin, that simulates fish schooling behavior. This Python-native script opted to create a population with pre-prescribed attributes and behaviors that could easily be modified, adapted, or added to in order to better simulate human flocking mechanisms in a public space.

First, a class of agent-based people had to be developed. Each agent is additionally coded with a randomly generated seed that assigns it one of three conditional identities. In theory, each identity within this class of agents could be scripted separately, allowing further alterations in flocking behavior per sub-class. Agents defined as 0 belong to the sub-class generation x, agents defined as 1 to the sub-class generation y, and agents defined as 2 to the sub-class generation z. As the agent-based behaviors in the accompanying Python code develop and evolve, these generations would take on different assumed values for attraction.
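Stripped of the Processing context, the generation assignment is a cumulative-threshold draw: a single uniform random number is compared against stacked probability bands. A standalone sketch of that logic (the `high` and `low` defaults here are assumed spawn weights for illustration, not values from the project):

```python
def assign_generation(chance, high=0.3, low=0.2):
    """Map a uniform draw in [0, 1) to one of three generation ids.

    [0, high)            -> 1 (sub-class generation y)
    [high, high + low)   -> 2 (sub-class generation z)
    [high + low, 1)      -> 0 (sub-class generation x)
    """
    if chance < high:
        return 1  # generation y
    elif chance < low + high:
        return 2  # generation z
    return 0      # generation x

# In the sketch itself: identity = assign_generation(random.random())
```

Because the bands are checked in order, the probabilities need not sum to 1; whatever mass remains above `high + low` falls through to generation x.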

Creating Agents:

class person(object):
    type = ('genx', 'geny', 'genz')
    personcolors = (
        color(251, 46, 1),
        color(111, 203, 159),
        color(0, 103, 125)
    )

    def __init__(self):
        # Need the baseImage to position the person on a white pixel!
        x = random.randrange(0, width)
        y = random.randrange(0, 800)
        br = brightness(baseImage.get(x, y))
        while br < 200:
            x = random.randrange(0, width)
            y = random.randrange(0, 800)
            br = brightness(baseImage.get(x, y))

        self.position = [x, y]
        self.speed = 20
        self.direction = random.random() * 2.0 * math.pi - math.pi
        self.turnrate = 0

        # Assign a generation from the global spawn probabilities.
        chance = random.random()
        if chance < high:
            identity = 1  # generation y
        elif chance < low + high:
            identity = 2  # generation z
        else:
            identity = 0  # generation x

        self.personcolor = person.personcolors[identity]
        self.type = person.type[identity]
        self.id = identity

    def move(self):
        # TODO Globals... Yuck.
        global allpersones, behaviors
        state = {}  # TODO Make this more efficient.
        # Loop variable renamed to 'otherperson' so the class name isn't shadowed.
        for otherperson in allpersones:
            for behavior in behaviors:
                behavior.setup(self, otherperson, state)
        for behavior in behaviors:
            behavior.apply(self, state)
            # behavior.draw(self, state)

    def draw(self):
        pushMatrix()
        translate(*self.position)
        rotate(self.direction)
        stroke(self.personcolor)
        strokeWeight(1)
        fill(self.personcolor)
        rect(0, 0, 2, 5)
        fill(255)
        ellipseMode(CENTER)
        ellipse(1, 2.5, 3, 4)
        popMatrix()

Then the behaviors of each class of agents can be modified. Behaviors are defined as their own classes with modifiable parameters, pooled into a single list, and then applied to each individual agent. Existing behaviors include attraction towards neighboring agents; repulsion from neighboring agents that fall within a proximity threshold; directional awareness relative to other agents and to the environment (covered later); the acceleration and movement of the agents; and restrictions keeping agents within the extents of the simulation.

Flocking Behavior Classes:

import math

class Behavior(object):
    def __init__(self, **parameters):
        self.parameters = parameters

    def setup(self, person, otherperson, state):
        pass

    def apply(self, person, state):
        pass

    def draw(self, person, state):
        pass

# For wifi- and collision-based behaviors, see the codes below.

class MoveTowardsCenterOfNearbyperson(Behavior):
    def setup(self, person, otherperson, state):
        if person is otherperson:
            return
        if 'closecount' not in state:
            state['closecount'] = 0.0
        if 'center' not in state:
            state['center'] = [0.0, 0.0]
        closeness = self.parameters['closeness']
        distance_to_otherperson = dist(
            otherperson.position[0], otherperson.position[1],
            person.position[0], person.position[1]
        )
        if distance_to_otherperson < closeness:
            if state['closecount'] == 0:
                # Copy the list so we never mutate the other agent's position.
                state['center'] = list(otherperson.position)
                state['closecount'] += 1.0
            else:
                # Maintain a running average of nearby agents' positions.
                state['center'][0] *= state['closecount']
                state['center'][1] *= state['closecount']
                state['center'] = [
                    state['center'][0] + otherperson.position[0],
                    state['center'][1] + otherperson.position[1]
                ]
                state['closecount'] += 1.0
                state['center'][0] /= state['closecount']
                state['center'][1] /= state['closecount']

    def apply(self, person, state):
        if state['closecount'] == 0:
            return
        center = state['center']
        distance_to_center = dist(
            center[0], center[1],
            person.position[0], person.position[1]
        )
        if distance_to_center > self.parameters['threshold']:
            angle_to_center = math.atan2(
                person.position[1] - center[1],
                person.position[0] - center[0]
            )
            person.turnrate += (angle_to_center - person.direction) / self.parameters['weight']
            person.speed += distance_to_center / self.parameters['speedfactor']

    def draw(self, person, state):
        closeness = self.parameters['closeness']
        stroke(200, 200, 255)
        noFill()
        ellipse(person.position[0], person.position[1], closeness * 2, closeness * 2)

class TurnAwayFromClosestperson(Behavior):
    def setup(self, person, otherperson, state):
        if person is otherperson:
            return
        if 'closest_person' not in state:
            state['closest_person'] = None
        if 'distance_to_closest_person' not in state:
            state['distance_to_closest_person'] = 1000000
        distance_to_otherperson = dist(
            otherperson.position[0], otherperson.position[1],
            person.position[0], person.position[1]
        )
        if distance_to_otherperson < state['distance_to_closest_person']:
            state['distance_to_closest_person'] = distance_to_otherperson
            state['closest_person'] = otherperson

    def apply(self, person, state):
        closest_person = state['closest_person']
        if closest_person is None:
            return
        distance_to_closest_person = state['distance_to_closest_person']
        if distance_to_closest_person < self.parameters['threshold']:
            angle_to_closest_person = math.atan2(
                person.position[1] - closest_person.position[1],
                person.position[0] - closest_person.position[0]
            )
            person.turnrate -= (angle_to_closest_person - person.direction) / self.parameters['weight']
            person.speed += self.parameters['speedfactor'] / distance_to_closest_person

    def draw(self, person, state):
        stroke(100, 255, 100)
        closest = state['closest_person']
        line(person.position[0], person.position[1], closest.position[0], closest.position[1])

class TurnToAverageDirection(Behavior):
    def setup(self, person, otherperson, state):
        if person is otherperson:
            return
        if 'average_direction' not in state:
            state['average_direction'] = 0.0
        if 'closecount_for_avg' not in state:
            state['closecount_for_avg'] = 0.0
        distance_to_otherperson = dist(
            otherperson.position[0], otherperson.position[1],
            person.position[0], person.position[1]
        )
        closeness = self.parameters['closeness']
        if distance_to_otherperson < closeness:
            if state['closecount_for_avg'] == 0:
                state['average_direction'] = otherperson.direction
                state['closecount_for_avg'] += 1.0
            else:
                # Running average of neighbors' headings.
                state['average_direction'] *= state['closecount_for_avg']
                state['average_direction'] += otherperson.direction
                state['closecount_for_avg'] += 1.0
                state['average_direction'] /= state['closecount_for_avg']

    def apply(self, person, state):
        if state['closecount_for_avg'] == 0:
            return
        average_direction = state['average_direction']
        person.turnrate += (average_direction - person.direction) / self.parameters['weight']

class move(Behavior):
    def setup(self, person, otherperson, state):
        person.speed = 1
        person.turnrate = 0

    def apply(self, person, state):
        # Move forward, but not too fast.
        if person.speed > self.parameters['speedlimit']:
            person.speed = self.parameters['speedlimit']
        person.position[0] -= math.cos(person.direction) * person.speed
        person.position[1] -= math.sin(person.direction) * person.speed
        # Turn, but not too fast.
        if person.turnrate > self.parameters['turnratelimit']:
            person.turnrate = self.parameters['turnratelimit']
        if person.turnrate < -self.parameters['turnratelimit']:
            person.turnrate = -self.parameters['turnratelimit']
        person.direction += person.turnrate
        # Keep the angle within [-pi, pi].
        if person.direction > math.pi:
            person.direction -= 2 * math.pi
        if person.direction < -math.pi:
            person.direction += 2 * math.pi

class WrapAroundWindowEdges(Behavior):
    def apply(self, person, state):
        if person.position[0] > width:
            person.position[0] = 0
        if person.position[0] < 0:
            person.position[0] = width
        if person.position[1] > height:
            person.position[1] = 0
        if person.position[1] < 0:
            person.position[1] = height


#### END BEHAVIOR
########################################
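The center-of-neighbors bookkeeping in MoveTowardsCenterOfNearbyperson is an incremental (running) average: each new neighbor's position is folded in without storing the whole list. Isolated from the Processing sketch, the update is a three-step undo-add-redivide (a plain-Python sketch for verification):

```python
def fold_into_center(center, closecount, new_point):
    """Fold one more 2D point into a running average.

    center:     current average [x, y]
    closecount: how many points are already averaged in
    Returns the updated (center, closecount).
    """
    if closecount == 0:
        return list(new_point), 1.0
    # Undo the division, add the new point, re-divide by the new count.
    x = center[0] * closecount + new_point[0]
    y = center[1] * closecount + new_point[1]
    closecount += 1.0
    return [x / closecount, y / closecount], closecount
```

Folding in the points (2, 0), (4, 0), and (6, 0) one at a time yields the same center, (4, 0), as averaging them all at once, which is exactly why the behavior classes can accumulate neighbors one setup() call at a time.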

III. Learning Behavior Classes

In addition to the flocking mechanisms, understanding the agents' interactivity with any drawn environment required the addition of collision-based modeling. Additional behaviors were therefore added that attach sensors to each agent. Collisions with drawn plans are interpreted via live-updating brightness scans of the agents over the drawn background image. Inaccessible areas in these plans are drawn with black poché and therefore register no brightness. The result is a set of agents with behaviors to turn away from all inaccessible areas, and to spawn only on pre-prescribed regions of a select brightness. The range of a plan's brightness allows for region-based cataloguing, as agents begin to occupy some regions in denser clusters than others. During the testing period, the plans of the Rockefeller Center Plaza were used for visual reference and precedent (Figures 06–07).
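The sensing idea can be sketched outside of Processing: sample the plan one step ahead of the agent, and turn if the sampled cell is dark. In this illustrative sketch the "plan" is a small grid of brightness values rather than a drawn image (an assumption for testability; the real script samples the background image's pixels):

```python
import math

def sense_and_turn(plan, x, y, direction, step=1.0, dark=20):
    """Return a new heading: keep going if the cell ahead is bright,
    turn 90 degrees if it is dark (inaccessible poche).

    plan is a 2D list of brightness values (0-255), indexed [row][col].
    """
    ahead_x = int(x + step * math.cos(direction))
    ahead_y = int(y + step * math.sin(direction))
    if plan[ahead_y][ahead_x] < dark:
        return direction + math.pi / 2  # wall ahead: turn away
    return direction
```

The full behavior below extends this with three antennae (straight ahead and 15 degrees to either side) so the agent knows which way to turn, not just that a wall is coming.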

Figures 06–07

Collision-Based Modeling Script:

class TurnAwayFromWall_1(Behavior):
    def setup(self, person, otherperson, state):
        img = self.parameters['BaseImage']

        # Define the state of the antennae (1 = clear, 0 = wall).
        if 'antenna' not in state:
            state['antenna'] = [1, 1, 1]

        x = person.position[0]
        y = person.position[1]

        direction = person.direction
        direction1 = direction + math.pi / 12.0
        direction2 = direction - math.pi / 12.0
        # Because of a motion bug in the original example, the agents are
        # actually moving backwards. Compensate with a negative antenna length.
        distance = -20

        # Project three antennae (ahead, 15 degrees left, 15 degrees right),
        # wrapping coordinates around the window edges.
        xs = []
        ys = []
        for d in (direction, direction1, direction2):
            ax = floor(x + distance * cos(d)) % width
            ay = floor(y + distance * sin(d)) % height
            xs.append(ax)
            ys.append(ay)

        # Decide whether each antenna collides with a wall.
        for i in xrange(3):
            # Draw the antenna for debugging
            stroke(150)
            point(xs[i], ys[i])

            # Extract the pixel color underneath the antenna
            px = img.get(xs[i], ys[i])
            b = brightness(px)

            if b < 20:
                state['antenna'][i] = 0
            else:
                state['antenna'][i] = 1

    # Turn away from whichever side registers a wall.
    def apply(self, person, state):
        if 'antenna' not in state:
            self.setup(person, None, state)

        if state['antenna'][0] == 0:
            if state['antenna'][1] == 0 and state['antenna'][2] == 1:
                person.turnrate -= math.pi / 2
            elif state['antenna'][1] == 1 and state['antenna'][2] == 0:
                person.turnrate += math.pi / 2
            else:
                # Head-on collision: default to turning one way.
                person.turnrate += math.pi / 2
########## ANTENNA ENDS ##########

Spawning Behavior Attribute:

def __init__(self):
    # Need the baseImage to position the person on a white pixel!
    x = random.randrange(0, width)
    y = random.randrange(0, 800)
    br = brightness(baseImage.get(x, y))
    # Re-roll until the agent lands on a bright (accessible) pixel.
    while br < 200:
        x = random.randrange(0, width)
        y = random.randrange(0, 800)
        br = brightness(baseImage.get(x, y))
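This spawn loop is rejection sampling: draw positions uniformly, and keep only draws that land on a bright (publicly accessible) pixel. The same idea, standalone and testable, with the plan as a brightness grid instead of an image (names here are illustrative):

```python
import random

def spawn_on_bright(plan, rng, min_brightness=200):
    """Rejection-sample a cell index until it lands on a bright pixel.

    plan is a 2D list of brightness values (0-255), indexed [row][col].
    rng is a random.Random instance, so results can be reproduced.
    """
    rows, cols = len(plan), len(plan[0])
    while True:
        x = rng.randrange(0, cols)
        y = rng.randrange(0, rows)
        if plan[y][x] >= min_brightness:
            return x, y
```

Because dark draws are simply discarded, spawn density automatically follows the geometry of the drawn plan; note the loop never terminates if the plan has no bright cell at all.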

IV. Developing Spaces for Simulations

With a general basis for a simplified human-flow simulation established, a series of plans for public spaces could be developed to understand which organization of spaces yields better results. Using an urban block in the South Bronx, a suitably middle-class, high-population neighborhood with strong public civic institutions, as the test site, the ground-floor plans aimed at public occupation beneath the private housing program above serve as an ideal testing ground for the agent populations.

Below are numerous iterations of floor plans developed over the course of the design process. In accordance with some of William H. Whyte's earlier understandings of what exactly makes a public space successful, the plan iterations aimed at testing four broad criteria:

1) overly large spaces do not serve people well; they read as empty rather than inviting

2) small spaces gather people once they enter and allow for better human-to-human interaction

3) borders within public spaces give people places to sit, and people are attracted to places they can sit

4) the area where the street and plaza meet is key: too broad and there is no transition; too small and it is too daunting. Too many obstructions and people congest at the threshold; too few and people won't be drawn into the space (i.e. Figure 08)

The following ground-floor plans (Figures 09–14) for public spaces were simulated both with agents spawning inside said spaces (an environment with existing persons; in these scenarios, the occupants of the housing development) and with agents spawning solely outside of the spaces, analyzing how they flock to fill the greyed-out regions.

Figure 09
Figure 10
Figure 11
Figure 12
Figure 13
Figure 14

V. Conclusions

Based on the plans drawn and the simulations run, when the agent spawns were restricted to the exclusively public sectors, the density of persons entering and occupying public spaces within the private sphere was relatively similar between large and small spaces, and between enclosed and open ones. The reason for these density similarities is most likely the lack of additional environmental agents and factors that appeal to or attract human interest. For example, Whyte spends a whole section of his research discussing a person's tendency to follow landscaping from one zone to another; landscaping, as an environmental feature, coaxes select agents across sector transitions.

However, these similarities are still important. They show that while some designers might opt for larger, open spaces with broad transitions, smaller avenues and pockets (i.e. pocket parks) can be equally successful. The fundamental essence of what makes public spaces work relies on a much larger set of intricacies that would require deeper programming and higher-functioning simulations. Again, in the words of Whyte,

“It is difficult to design a space that will not attract people. What is remarkable is how often this has been accomplished.”

In its fundamental core, however, the mere implication of flocking behavior supports Whyte’s ultimate thesis.

“People are attracted to other people.”

And this is borne out by the simulations without mid-range brightness regions. In these simulations, there is an assumed internal agent population: those who live in the units supposedly built above the ground-level public spaces. With agents spawning in these regions, we see a greater population flock "inside."

What is interesting to note, however, is the tendency of the simulated agents to gravitate towards the borders, harkening back to the notion that a greater perimeter of borders within a space, such as potential seating, offers attraction value for agents.

VI. Struggles, Coding Failures, + Potential Developments

The code used for these simulations could easily benefit from further development. To start, more behaviors could be developed and applied to the different classes of agents. Below is my attempt to introduce a new class of agents particularly pertinent to the design of public spaces today: a WIFI internet attractor.

First attempt to insert wifi portion via new behavior and class:

class FlockTowardsWifi(Behavior):

    def setup(self, person, wifi, state):
        if 'closecount' not in state:
            state['closecount'] = 0.0
        if 'center' not in state:
            state['center'] = [0.0, 0.0]

        closeness = self.parameters['closeness']
        distance_to_wifi = dist(
            wifi.position[0], wifi.position[1],
            person.position[0], person.position[1]
        )
        if distance_to_wifi < closeness:
            if state['closecount'] == 0:
                # Copy the list so we never mutate the wifi node's position.
                state['center'] = list(wifi.position)
                state['closecount'] += 1.0
            else:
                # Running average of nearby wifi positions.
                state['center'][0] *= state['closecount']
                state['center'][1] *= state['closecount']
                state['center'] = [
                    state['center'][0] + wifi.position[0],
                    state['center'][1] + wifi.position[1]
                ]
                state['closecount'] += 1.0
                state['center'][0] /= state['closecount']
                state['center'][1] /= state['closecount']

    def apply(self, person, state):
        if state['closecount'] == 0:
            return
        center = state['center']
        distance_to_center = dist(
            center[0], center[1],
            person.position[0], person.position[1]
        )
        if distance_to_center > self.parameters['threshold']:
            angle_to_center = math.atan2(
                person.position[1] - center[1],
                person.position[0] - center[0]
            )
            person.turnrate += (angle_to_center - person.direction) / self.parameters['weight']
            person.speed += distance_to_center / self.parameters['speedfactor']

    def draw(self, person, state):
        closeness = self.parameters['closeness']
        stroke(200, 200, 255)
        noFill()
        ellipse(person.position[0], person.position[1], closeness * 2, closeness * 2)

My first attempt was to establish a behavior class, FlockTowardsWifi, relating directly to a new agent class, WIFI. However, I had difficulty understanding how to intertwine a behavior written for the original agent (person) with an entirely new agent. Previously, all agents had interacted only with instances of their own class, passed in as otherperson alongside self.
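One way around this (a sketch of my own, not the project's resolved code) is to let the attractor behavior hold a list of static attractor points itself, rather than receive a second agent class through the per-person setup loop. The behavior then only ever sees `person`, and wifi points stay plain coordinates:

```python
import math

class FlockTowardsAttractors(object):
    """Steer an agent towards the nearest static attractor (e.g. a wifi point).

    Attractors are plain (x, y) tuples, so no second agent class is needed.
    The person is modeled as a dict for this standalone sketch.
    """
    def __init__(self, attractors, weight=8.0, closeness=100.0):
        self.attractors = attractors
        self.weight = weight
        self.closeness = closeness

    def apply(self, person):
        if not self.attractors:
            return
        # Find the nearest attractor and turn towards it if within range.
        ax, ay = min(self.attractors,
                     key=lambda p: math.dist(p, person['position']))
        if math.dist((ax, ay), person['position']) < self.closeness:
            angle = math.atan2(ay - person['position'][1],
                               ax - person['position'][0])
            person['turnrate'] += (angle - person['direction']) / self.weight
```

Because the attractor list lives on the behavior, adding a BENCH or CAFÉ attractor is just another list of points with its own weight, with no change to the person/otherperson loop.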

My second attempt was to introduce WIFI, along with a menu of other elements (say, BENCH, TREE, TABLE, COMPUTER, CAFÉ, etc.), via an interactive menu and a drag_program batch of code. The idea was to introduce a drag-program class with mouse and keyboard interactivity, allowing the simulation to become more expressive and, more fundamentally, user-friendly.

Second attempt at wifi and additional agent introduction via a drag_program:

# drag program [text label, image file]
wifi = ["WIFI", "wifi.png"]
bench = ["BENCH", "bench.png"]

# drag program class
class DragProgram(object):
    def __init__(self, xpos, ypos):
        self.xpos = xpos
        self.ypos = ypos
        self.box_size = 100
        global overBox, lock, drag, xOffset, yOffset
        overBox = False
        lock = False
        drag = False
        xOffset = 0
        yOffset = 0

    def setup(self):
        self.position = [(self.xpos + self.box_size/2), (self.ypos + self.box_size/2)]

    def display(self, xText, program):
        global overBox, lock, drag, xOffset, yOffset

        self.program = program
        self.xText = xText
        self.programText = program[0]
        self.programImg = loadImage(program[1])

        # Highlight the box when the mouse hovers over it.
        if (mouseX > self.xpos
                and mouseX < self.xpos + self.box_size
                and mouseY > self.ypos
                and mouseY < self.ypos + self.box_size):
            overBox = True
            tint(200)
        else:
            overBox = False
            noTint()

        # While pressed over the box, re-center it on the mouse.
        if mousePressed == True and overBox == True:
            xOffset = mouseX - self.xpos
            yOffset = mouseY - self.ypos
            tint(100)
            self.xpos = mouseX - (self.box_size/2)
            self.ypos = mouseY - (self.box_size/2)

        image(self.programImg, self.xpos, self.ypos, self.box_size, self.box_size)
        fill(0)
        textAlign(CENTER)
        textSize(12)
        text(self.programText, self.xText + self.box_size/2, 880)
        noTint()

        self.position = [(self.xpos + self.box_size/2), (self.ypos + self.box_size/2)]

Unfortunately, this proved the largest crossroad in the simulation's advancement, as I couldn't decipher the reason for the script's refusal to recognize the global relationship of the individual classes with program.display(location, png icon ref).
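What can at least be isolated and verified outside of Processing is the hit-testing and drag logic at the heart of the class. A plain-Python sketch (the box is a dict here rather than a class, so it carries none of the globals that caused trouble above):

```python
def over_box(box, mx, my):
    """True if the mouse (mx, my) falls inside the box."""
    return (box['x'] < mx < box['x'] + box['size'] and
            box['y'] < my < box['y'] + box['size'])

def drag_box(box, mx, my):
    """Re-center the box on the mouse while dragging, as display() does."""
    if over_box(box, mx, my):
        box['x'] = mx - box['size'] / 2
        box['y'] = my - box['size'] / 2
    return box
```

Keeping this logic free of globals, and passing the box state explicitly, is one plausible route to the user-friendly drag menu described above.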

I truly believe these simulations would strongly benefit from these additions. They would make the program more user-friendly, cater more towards designers, and begin to analyze the intricacies of human behavior that Whyte researched in the 1970s.

fin.
