# ChiPy Mentorship pt 3

Nov 17, 2017 · 4 min read

Hey everyone! I’m back again for the third and final installment of the ChiPy mentorship blog posts. You can check out my last post here.

Check out my GitHub here: https://github.com/aapatni/chipy2017

In my last update I discussed my plan of action for the rest of the project, and for the last month I have been executing on that plan. So far I have completed three of those four steps:

1. Finish creating 3D environment
2. Map 3D world points to 2D camera view
3. Train a Regression model that will take any 2D points and turn them into (3D) way points that the robot can follow
4. Integrate with current robot code

# Steps 1 and 2

In the notebook below, I created a 3D virtual environment where I can generate “fake targets”. The simulation views these targets much as my robot’s camera would, projecting them onto a 2D screen. From each target the environment extracts ten values — the 2D center, width, and height of the projection, plus the 3D positions of the left and right panels — that will go into training my machine learning model.

Video of Simulation: (targets are randomly generated throughout the screen)

Here’s the code:

```python
import csv
import random
from operator import itemgetter

import numpy as np
import pygame
from pygame.locals import *
from OpenGL.GL import *
from OpenGL.GLU import *

# Edges connecting the four corners of each rectangle
edges = ((0, 1), (1, 2), (2, 3), (3, 0))
fov_y = 45
width = 320
height = 240


def TransformVec3(vecA, mat44):
    """Apply a 4x4 column-major matrix to a 3D point and perspective-divide."""
    vecB = [0, 0, 0, 0]
    for i0 in range(4):
        vecB[i0] = (vecA[0] * mat44[0 * 4 + i0]
                    + vecA[1] * mat44[1 * 4 + i0]
                    + vecA[2] * mat44[2 * 4 + i0]
                    + mat44[3 * 4 + i0])
    return [vecB[0] / vecB[3], vecB[1] / vecB[3], vecB[2] / vecB[3]]


def TestRec(prjMat, ll):
    """Project a 3D point to 2D pixel coordinates."""
    ll_ndc = TransformVec3(ll, prjMat)
    return [width * (ll_ndc[0] + 1.0) / 2.0, height * (1.0 - ll_ndc[1]) / 2.0]


def getCenter(right, left):
    x = (sum(p[0] for p in right) + sum(p[0] for p in left)) / 8.0
    y = (sum(p[1] for p in right) + sum(p[1] for p in left)) / 8.0
    return (x, y)


def getWidth(right, left):
    return right[-1][0] - left[0][0]


def getHeight(right, left):
    return ((right[-1][1] + left[-1][1]) - (right[0][1] + left[0][1])) / 2.0


def getData(prjMat, right, left):
    """Return the 2D center, width, and height of the projected target pair."""
    leftCoordinates = [TestRec(prjMat, p) for p in left]
    rightCoordinates = [TestRec(prjMat, p) for p in right]
    center = getCenter(rightCoordinates, leftCoordinates)
    h = getHeight(sorted(rightCoordinates, key=itemgetter(1)),
                  sorted(leftCoordinates, key=itemgetter(1)))
    w = getWidth(sorted(rightCoordinates, key=itemgetter(0)),
                 sorted(leftCoordinates, key=itemgetter(0)))
    return [center, w, h]


def Target(rightRect, leftRect):
    """Draw the edges of both rectangles."""
    glBegin(GL_LINES)
    for edge in edges:
        for vertex in edge:
            glVertex3fv(leftRect[vertex])
    glEnd()
    glBegin(GL_LINES)
    for edge in edges:
        for vertex in edge:
            glVertex3fv(rightRect[vertex])
    glEnd()


def addToRow(right, left, index, adjust):
    """Shift both rectangles by `adjust` along the given axis."""
    for i in right:
        i[index] += adjust
    for i in left:
        i[index] += adjust
    return (right, left)


def main():
    leftRect = [[-5.125, 0, -20], [-3.125, 0, -20], [-3.125, 5, -20], [-5.125, 5, -20]]
    rightRect = [[3.125, 0, -20], [5.125, 0, -20], [5.125, 5, -20], [3.125, 5, -20]]
    try:
        pygame.init()
        display = (width, height)
        pygame.display.set_mode(display, DOUBLEBUF | OPENGL)
        glMatrixMode(GL_PROJECTION)
        gluPerspective(fov_y, display[0] / display[1], .1, 1000)
        glMatrixMode(GL_MODELVIEW)
        counter = 0
        with open('dataGL.csv', 'a', newline='') as csvFile:
            writer = csv.writer(csvFile, delimiter=',')
            # writer.writerow(["X","Y","W","H","LPX","LPY","LPZ","RPX","RPY","RPZ"])
            while counter < 1000000:
                # Check for quit events
                for event in pygame.event.get():
                    if event.type == pygame.QUIT:
                        pygame.quit()
                        quit()
                Target(rightRect, leftRect)
                prjMat = (GLfloat * 16)()
                glGetFloatv(GL_PROJECTION_MATRIX, prjMat)
                data = getData(prjMat, rightRect, leftRect)
                leftPoint = np.array(leftRect).mean(axis=0)
                rightPoint = np.array(rightRect).mean(axis=0)
                toCSV = [data[0][0], data[0][1], data[1], data[2],
                         leftPoint[0], leftPoint[1], leftPoint[2],
                         rightPoint[0], rightPoint[1], rightPoint[2]]
                writer.writerow(toCSV)
                # Reset the target when it drifts off screen or out of depth range
                if (data[0][0] > 320 or data[0][0] < 0
                        or data[0][1] > 240 or data[0][1] < 0
                        or leftPoint[2] > -5 or leftPoint[2] < -100):
                    leftRect = [[-5.125, 0, -20], [-3.125, 0, -20],
                                [-3.125, 5, -20], [-5.125, 5, -20]]
                    rightRect = [[3.125, 0, -20], [5.125, 0, -20],
                                 [5.125, 5, -20], [3.125, 5, -20]]
                # Nudge the target by a small random amount along each axis
                xAdjust = float(random.randint(-50, 50)) or 1.0  # avoid division by zero
                yAdjust = float(random.randint(-50, 50)) or 1.0
                zAdjust = float(random.randint(-50, 50)) or 1.0
                rightRect, leftRect = addToRow(rightRect, leftRect, 0, 1.0 / xAdjust)
                rightRect, leftRect = addToRow(rightRect, leftRect, 1, 1.0 / yAdjust)
                rightRect, leftRect = addToRow(rightRect, leftRect, 2, 1.0 / zAdjust)
                pygame.display.flip()
                pygame.time.wait(10)
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
                counter += 1
    except Exception as e:
        print(e)


main()
```

# Step 3

After generating a CSV dataset of around 1,000,000 (one million) different target placements, I then had to build a machine learning model to map the 2D screen coordinates back to 3D world coordinates. Imagine the robot looking at an image and determining its position from that image alone.
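The ten columns the simulation writes (X, Y, W, H for the 2D view, then the left and right panel centers LPX…RPZ) split naturally into model inputs and outputs. Here is a minimal loading sketch; the sample rows below are made-up stand-ins for the real dataGL.csv:

```python
from io import StringIO
import numpy as np

# A few made-up rows in the same 10-column layout the simulation writes:
# X, Y, W, H, LPX, LPY, LPZ, RPX, RPY, RPZ
sample = StringIO(
    "160.0,120.0,33.1,16.5,-4.125,2.5,-20.0,4.125,2.5,-20.0\n"
    "150.2,118.7,30.9,15.4,-4.6,2.1,-21.3,3.65,2.1,-21.3\n"
    "171.8,125.3,35.5,17.8,-3.7,2.9,-18.9,4.55,2.9,-18.9\n"
)

data = np.loadtxt(sample, delimiter=",")
X = data[:, :4]   # 2D screen features: center x/y, width, height
y = data[:, 4:]   # 3D targets: left and right panel centers

print(X.shape, y.shape)  # → (3, 4) (3, 6)
```

For the real file, `np.loadtxt("dataGL.csv", delimiter=",")` works the same way.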

To do that, I have so far built a linear regression model and a neural network model to recover these coordinates. The input layer takes (x coordinate, y coordinate, width, height) and the output layer produces the target’s 3D coordinates.

The linear regression scored around 95% accuracy on the test data, and the neural network around 99%. These values may be slightly skewed because I have not yet accounted for error in my calculations. My next step with these models is to make them more robust so they can handle real-world data.
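To illustrate the linear-regression half of this step without pulling in scikit-learn, here is a sketch that fits the same shape of mapping (four screen features to six world coordinates) by ordinary least squares. The data is synthetic, so the near-perfect fit is a property of the toy setup, not of the project’s real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 screen features -> 6 world coordinates,
# generated from a known linear map plus a little noise.
n = 500
X = rng.uniform(-1, 1, size=(n, 4))
true_W = rng.normal(size=(5, 6))          # includes a bias row
Xb = np.hstack([X, np.ones((n, 1))])      # append a bias column
Y = Xb @ true_W + 0.01 * rng.normal(size=(n, 6))

# Fit the linear regression by ordinary least squares
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

# R^2 score on the training data (a held-out split would be fairer)
resid = Y - Xb @ W
r2 = 1 - resid.var() / Y.var()
print(round(r2, 4))
```

scikit-learn’s `LinearRegression` would replace the `lstsq` call with a `fit`/`score` pair; the neural network version swaps in a small multilayer model over the same four inputs and six outputs.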

# Step 4

Integrating my project with the robot will by far be the hardest task I encounter during this mentorship. I will need to:

a.) Determine the best paths to success

b.) Create a system to control and track the robot’s movement

c.) Account for all the error in the system
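For part (b), one simple starting point is a proportional controller that repeatedly steps the robot toward the model’s predicted 3D waypoint. Everything here — the gain, the waypoint value, the `step_toward` helper — is a hypothetical sketch, not the actual robot code:

```python
import math

def step_toward(pos, waypoint, gain=0.2):
    """Move `pos` a fraction of the way toward `waypoint` (proportional control)."""
    return tuple(p + gain * (w - p) for p, w in zip(pos, waypoint))

# Hypothetical example: robot at the origin driving toward a predicted waypoint
pos = (0.0, 0.0, 0.0)
waypoint = (4.125, 2.5, -20.0)   # e.g. a panel center predicted by the model
dist0 = math.dist(pos, waypoint)
for _ in range(30):
    pos = step_toward(pos, waypoint)
print(round(math.dist(pos, waypoint), 3))
```

The real system would also need odometry feedback (part b) and an error model for noisy predictions (part c) layered on top of something like this.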

Thank you so much for reading this blog series! Once my project is completed, I will post a final update with a video of a successful mission.
