Srishti Local Guide (SLG)

Rashi Balachandran
Published in Design with code
10 min read · Aug 19, 2018

Group: Janaki Syam, Rashi Balachandran, Arsh Kumar

Abstract

SLG, or the Srishti Local Guide, is an application that aims to provide personalized navigation and assistance to students and faculty who have difficulty finding their way around campus.

Context

Srishti has six large campuses, each with a different layout and purpose, and each housing hundreds of students. For our assignment, we decided to focus specifically on the N5 campus. In a three-floor campus with multiple classrooms, screening rooms, offices, and faculty spaces, navigation can prove to be an issue. For a student, getting around a new campus can be an inconvenient, daunting experience, especially in the beginning, when there are 300 other students who are all equally confused and the only way to figure out where to go is to ask faculty or other staff for directions. Added to this is the discomfort of being in an unfamiliar space and the uncertainty that comes from a lack of knowledge and direction.

To deal with the issues of navigation, uncertainty, and the feeling of being lost during the first few months of college, we decided to create something that would solve these problems while also adding a familiar voice the student could trust during this time.

Every student in Srishti has their own personal, unique timetable, so no two students follow the same route every day. In fact, the average student visits three different classrooms every week, as well as other campus spaces like the canteen and the reception. Creating a common map for everyone is therefore tricky, as not all information is relevant to every student. Currently, even to know the room number your class is in, you have to log in to the Student Portal, where this information is listed. Logging in and memorizing room numbers is inconvenient enough that most people simply ask their classmates instead. And just knowing the classroom number isn't of much help, due to the irregular classroom placement in N5, the multiple staircases, and the shared campus. On top of that, as a new student it's hard to approach people for directions, especially when everyone is a stranger to you.

Initial Ideation & Experimentation

Being new to the N5 campus ourselves, we drew these ideas from our experience in the first few weeks of college. Our initial concept was a conversational agent that acted as a geo-specific, personalized campus map (in the form of an app) and would help students find their way around. We planned on making it campus-specific, so that the Guide would only work if you were on campus (using a sensor system), and each Srishti campus would have its own Guide. This was done to eliminate security issues: the location features wouldn't work outside campus, so there was no risk of your location being misused and no possibility of being tracked off campus.

The conversational agent would be installed on each student's phone and would accept both speech and text inputs. We wanted speech input so that figuring out a class location would be less time consuming. In return, the Guide would give you text responses with information on class locations, facilitator names, attendance, timing issues, and so on. When asked to guide you to a location on campus, classroom or otherwise, it would give you a visual depiction of the route through a video the student could follow. This video would show the way from where you were at that moment to your destination, pointing out all the landmarks along the way. We wanted it to be visual because that is more accurate, easily depicted, and easily understood.

The Guide would be linked to every student's Student Portal, so it would already have their class and timing information and could answer questions better. This is also how it would give attendance, assignment, and timing alerts, as well as facilitator information. The conversational agent would behave the same way for faculty and others, and would additionally tell faculty how many of their students were on campus during their class hours.

Following our first exhibition, when we were coding this conversational agent, we came across a few loopholes and issues:

  • There are thousands of possible routes a student could take, and creating a video for each one is virtually impossible.
  • While we aim to eventually link the agent to the Student Portal, at the moment we were unable to do so due to security restrictions, so it wasn't possible to access a student's actual class records.
  • Due to limited location capabilities, the conversational agent is currently unable to detect whether the student is on campus, or the student's real-time location.

Feedback

During our exhibition, we presented a film demonstrating our prototype conversational agent to an audience seeing our work for the first time, which prompted a lot of constructive criticism. Among all the feedback we received, some points kept recurring; we gave these the most weight and decided to keep them in mind while moving on to Assignment 2 and building our semi-finished conversational prototype, since we felt they would have the biggest impact on our next assignment.

The feedback we received mostly concerned usability, along with privacy concerns, since the agent tracks your location on campus. The point that struck us most was that labelling it a "Navigation Assistant" would be a problem: it had so many features beyond navigation, and the navigation feature itself would become redundant after a few months once students learned their way around campus.

On the other hand, others said that if we could perfect the navigational features until they were foolproof, then even if the Guide were useful for only a month, its purpose would be served. This gave us a new perspective: we had been so focused on functionality and on how many problems we could solve that we had completely forgotten the crux of our idea and the logic behind it.

Over the course of the exhibition we gained insight into what people thought our project did well and what it lacked. Visitors discussed scalability, the future of the agent, how we could modify it to fit the specific context of Srishti rather than just any college, and the technological constraints that would limit some of our aims. One crucial piece of feedback concerning navigation was that there were thousands of routes to cover, given the number of possible combinations of starting points and classes. We also learned that machine learning is far more complex and advanced than we had assumed, and was currently beyond our capabilities.

Aside from feedback, we also got a lot of help from the students and faculty who came to the exhibition. One suggestion was to show the route to a room using a series of photos of landmarks along the way, together with a blueprint-style map that would simplify the layout of N5. The reason we hadn't considered the technicalities of this approach was that we had become too fixated on the video time-lapse idea and on the non-navigation aspects of the software.

After reflecting on all of this, we thought of ways to improve our conversational agent: focusing on how the navigation might work, how we would fit S.L.G. into the existing context of Srishti, and how we would close all the possible loopholes in the system regarding privacy and security.

Developing the Prototype

After developing our ideas through the feedback from our A1 assignment, we started building the prototype in Python. The language took all of us a little time to get the hang of, since none of us had any previous experience with it. Early on, we knew for sure that we needed loops to achieve the conversational aspect; beyond that, the question was how to integrate navigation into our code.
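To give a sense of what that looked like, here is a minimal sketch of the kind of conversation loop we were working towards; the prompts and responses are illustrative placeholders rather than our actual code.

```python
# Minimal sketch of a conversation loop (illustrative, not our actual code).
def run_guide():
    print("Hi! I'm the Srishti Local Guide. Ask me something, or type 'bye' to exit.")
    while True:  # the loop keeps the conversation going until the student says bye
        question = input("> ").strip().lower()
        if question == "bye":
            print("See you around campus!")
            break
        elif "class" in question:
            print("Let me check your timetable...")
        else:
            print("Sorry, I didn't get that. Could you rephrase?")

run_guide()
```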

We started by coding the easier aspects, such as inputting your name and password. Once we had figured out the outline of our code, we decided to tackle navigation. Although we had previously decided to use videos to show routes, we realised that adding video was not only complicated but also limited how much conversation we could include, and would make it difficult to cover every possible route. To allow more conversation, we made the navigation textual and coded accordingly.

Because of this change, we spent a lot of time planning what conversation to add where, and the flow of the conversation between the user and the device. Adding human-like touches that would engage the user, such as funny comments or contextually appropriate phrases, also made a difference to how conversational the Guide felt. We achieved this through random functions.
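For example, a random function like Python's random.choice can pick one of several pre-written phrases each time, so the Guide doesn't repeat itself; the phrases below are made up for illustration.

```python
import random

# Pick a random "human-like" phrase so the Guide's replies feel less repetitive.
# These example phrases are placeholders, not the ones in our code.
sign_offs = [
    "Good luck finding your class!",
    "N5 stairs again? You've got this.",
    "Almost there, keep walking!",
]
print(random.choice(sign_offs))
```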

Coding was initially difficult because the code was very long and hard to understand. As beginner coders, we all struggled with debugging and kept finding errors while running the code. The loops also wouldn't work the way we wanted, which we only figured out by running the code against different possibilities. While working on the initial prototype, we realised that we still needed to work out how to add our other features.

The feedback on our initial prototype submission was to structure the code, which we hadn't done before, so organising the code became our number one task. We used functions to shorten the repetitive parts of the code. This made a huge difference: it not only shortened the code but also made it much easier to correct errors. We also used a class to define colours and other attributes that we added to make the output look neat. Since we could not access Blackboard, we had to improvise and used files instead. By opening and reading files, we were able to make the app specific to each student: on entering your name, the student's personal details about their class, CCA, and assignments can be accessed.
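A rough sketch of this structure is shown below: repeated steps pulled into functions, a small class holding colour codes for neater output, and a file standing in for the student records. The file name and its comma-separated format are assumptions made for this example.

```python
# Sketch of the restructured approach: functions for repeated steps, a class for
# colour codes, and a file standing in for Blackboard. File name/format are assumed.

class Colours:
    GREEN = "\033[92m"
    BLUE = "\033[94m"
    END = "\033[0m"

def load_student(name, path="students.txt"):
    """Look up a student's record by name from a comma-separated file."""
    with open(path) as f:
        for line in f:
            record = line.strip().split(",")  # e.g. name,class,cca,assignment
            if record and record[0].lower() == name.lower():
                return record
    return None

def greet(name):
    record = load_student(name)
    if record:
        print(Colours.GREEN + f"Welcome back, {record[0]}!" + Colours.END)
        print(f"Your class today: {record[1]}")
    else:
        print(Colours.BLUE + "I couldn't find you in the student file." + Colours.END)

greet(input("Enter your name: "))
```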

Since the conversation's purpose was to help the student navigate, we needed a way to send automated messages, including messages that required the app to wait for the student to reach a particular point. For this we used the time.sleep function, which let us time the messages according to how long it would take the student to reach the desired spot.
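In practice this looks something like the sketch below; the delay values here are placeholders, whereas ours were chosen to roughly match how long each stretch of the walk takes.

```python
import time

# Pause between messages so the Guide waits for the student to walk each stretch.
# The delays below are placeholder values for illustration.
print("Walk past the canteen and take the staircase on your left.")
time.sleep(30)   # roughly how long it takes to reach the staircase
print("You should be at the staircase now. Go up one floor.")
time.sleep(20)   # time to climb one floor
print("Your classroom is the second door on the right.")
```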

Future Applications:

In the future, it would be ideal for the Guide to make use of machine learning to adapt to the needs of each individual user. It would learn every user's personal timetable, work habits, lunch and CCA timings, the time they arrive at college, and the time they choose to leave, and would give automated responses accordingly. The user would also be able to customize the Guide to match their needs and could choose to switch off or alter responses.

We also aim to eventually make the navigation more visual through photos and a blueprint-style map, like a very zoomed-in Google Maps. Another feature we aim to add is speech input, to ease communication.

Eventually, as students learn their way around the campus, the navigation features will lose importance. To prevent the Guide from becoming completely redundant, we want it to shift priority over time to its other features, such as attendance alerts, time notifications, and automated responses for lunch and CCA, so that it takes on more assistant-like duties and remains useful to the student.

Summary

Keeping all these loopholes in mind, we came up with our final idea for the Srishti Local Guide. It's a text-based conversational agent that helps you navigate around the N5 campus. After students log in using their full name (as registered in the student database) and their Cyberoam ID, they can access the app and navigate. The point of using the Cyberoam ID as the password is that the Guide uses the N5 Wi-Fi to function and is thus localized to the campus; the Cyberoam ID allows the Guide to access the Wi-Fi, and therefore to work only on campus.

The navigation works by asking which floor you're currently on, and then guiding you from landmark to landmark: it points the landmarks out in order and gives directions for getting from one to the next until you reach your destination. This is achieved with a series of while loops and if/else statements, which increase the Guide's ability to converse with the user by increasing the possible outcomes of the conversation, keeping the questions open ended, and narrowing down the possibilities until the Guide can lead you to your destination.
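A simplified sketch of this narrowing-down logic is shown below; the floors and landmarks are examples, not the actual N5 layout encoded in our prototype.

```python
# Simplified sketch of the narrowing-down logic: ask for the floor first, then
# step through landmarks. Floor names and landmarks are examples only.
floor = input("Which floor are you on right now? (ground/first/second) ").strip().lower()

if floor == "ground":
    print("Head to the reception, then take the main staircase up to the first floor.")
    reached = input("Are you at the first-floor landing yet? (yes/no) ").strip().lower()
    while reached != "yes":   # keep nudging until the student confirms
        print("Keep going up the main staircase next to the reception.")
        reached = input("Are you at the first-floor landing yet? (yes/no) ").strip().lower()
    print("Great, your classroom is the third door on your left.")
elif floor == "first":
    print("You're already on the right floor. Walk past the screening room; your class is the third door on the left.")
else:
    print("Come down one floor using the staircase near the faculty space, then ask me again.")
```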

The questions are asked in a conversational manner, and the Guide gives apt responses, which include your facilitators' names, whether you're late for class, whether your attendance is low, and so on. The student needn't ask for directions to a specific class; they can instead just ask, "Where is my class today?" Since the Guide reads data from files containing individual students' timetables, it knows which class the student has at that time and can guide them there. The responses are therefore personalized and specific to the student using the Guide.
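As a sketch of how such a question could be answered, the Guide can match the student's name and the current day against a timetable file; the file name, column layout, and names below are assumptions for illustration.

```python
import datetime

# Sketch of answering "where is my class today?" from a timetable file.
# The file name and comma-separated columns (name, day, room, facilitator) are assumed.
def todays_class(student, path="timetables.txt"):
    today = datetime.date.today().strftime("%A")  # e.g. "Monday"
    with open(path) as f:
        for line in f:
            name, day, room, facilitator = line.strip().split(",")
            if name.lower() == student.lower() and day == today:
                return room, facilitator
    return None

result = todays_class("Rashi")  # hypothetical student name
if result:
    room, facilitator = result
    print(f"Your class today is in {room}, with {facilitator}.")
else:
    print("I couldn't find a class for you today.")
```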

Given below are links to videos demonstrating the testing of the Srishti Local Guide prototype, in two different stages of development.

Wizard of Oz Demo: https://www.youtube.com/watch?v=mxegGsR2XPU&t=29s

Final Coded Prototype Demo: https://youtu.be/7_8cDIZM804
