Creative Coding Weekly Report

WinnieTheWho
36 min readJan 24, 2024

--

(24/05/08) Final Project — Final Presentation

In the second play test, I was able to run the hand gesture recognition model on webcam stream using MediaPipe and OpenCV and display the corresponding emojis on screen. The pipeline now can recognize 5 hands at max. To make this project more engaging, as people may not know to trigger the effect with their hands when they first enter the experience, I’d like to enable facial gesture recognition and emoji display as well. This way, visitors would immediately see the visual effects they triggered and would be more willing to play with the system more.

MediaPipe provides a face landmark detection model which can be used somewhat similarly to the hand landmark detection model. Unfortunately, they don’t have a facial gesture model at the moment. Initially, I thought about creating a heuristic-based approach to define the facial gestures. However, since there are 478 landmarks, it would be labor-intensive to define such a heuristics and it won’t end up accurate anyways. Thus, I researched online and found someone built a facial gesture recognition model based on the MediaPipe face landmarks. In their project, they provided the .tflite model and an inference pipeline to run this model given a set of face landmarks. I decided to give it a try.

For the facial gesture recognition model, the output will be one of these five categories:

  • Happy 🙂
  • Neutral 😐
  • Sad 😢
  • Angry 😠
  • Surprised 😲

Similar to what I did for hand gestures, I downloaded emojis for the corresponding facial emotions and over them to the video stream. However, different from the hand gesture recognition, for facial gesture recognition, I’ll need to load the model and inference class, prepare the input to the model, and run the model inference as an additional step after the face landmarks are obtained.

Using class to encapsulate important information and functionality has make the development much easier. When I start the program, I just initialize all the recognizers. For each frame in the video stream, I process it through both the hand gesture recognizer, and face landmarker + facial gesture recognizer, and overlay the result on top of the video frame.

Based on the class feedback, because the system can track multiple people’s faces and hands, instead of randomly displaying the emojis, it’ll be more effective to display the emojis associated with the identified identity. To do this, I change the code to calculate the bounding boxes of the hands and faces, and scale the emoji size and overlay on top of the detected faces and hands. There were several issues that I encountered:

  • To make sure that closer hand or face is larger, and further hand or face is smaller, scaling the emoji size according to the bounding box is a possible solution. This would also change the aspect ratio of the emoji image.
  • When the face or hand goes partially off the screen, the scaled size will go out of bound of the screen size, I have to set a limit to make sure the code won’t break.
  • For hand gestures, because there’s the handedness difference, I decide to flip the emoji image to align with the actual hand’s handedness. There’s a handedness field output from the gesture, which I can use to decide the handedness info.

To reflect back on this project, if I have more resources, I would likely to train a facial gesture recognizer myself, which will be able to recognize more expressions and be more accurate. This means I would need to gather some kind of diverse training data of different people and facial expressions, which will take some time and resources. Also training a model will need some time and computing resources as well.

The final code is here:

import itertools
import mediapipe as mp
import copy
import cv2
import emoji
import numpy as np

from mediapipe import solutions
from mediapipe.framework.formats import landmark_pb2
from PIL import Image, ImageFont, ImageDraw

from emotion_classifier import EmotionClassifier


BaseOptions = mp.tasks.BaseOptions

HandLandmarker = mp.tasks.vision.HandLandmarker
HandLandmarkerOptions = mp.tasks.vision.HandLandmarkerOptions
HandLandmarkerResult = mp.tasks.vision.HandLandmarkerResult

GestureRecognizer = mp.tasks.vision.GestureRecognizer
GestureRecognizerOptions = mp.tasks.vision.GestureRecognizerOptions
GestureRecognizerResult = mp.tasks.vision.GestureRecognizerResult

FaceLandmarker = mp.tasks.vision.FaceLandmarker
FaceLandmarkerOptions = mp.tasks.vision.FaceLandmarkerOptions
FaceLandmarkerResult = mp.tasks.vision.FaceLandmarkerResult

VisionRunningMode = mp.tasks.vision.RunningMode


class EmojiRecognition:
def __init__(self,
emojis: dict,
emotion_classifier: EmotionClassifier,
emotion_classifier_labels: list):
"""
Initializes the EmojiRecognition instance.

Args:
emojis: a mapping from emoji_name (str) to emoji_img (matrix)
emotion_classifier: a ML inference class to run an emotion c
lassification tflite model from face landmarks
emotion_classifier_labels: a list of facial emotions containing
["Angry", "Happy", "Neutral", "Sad", "Surprise"]
"""
self.hand_landmarker_results = None
self.face_landmarker_results = None
self.emojis = emojis
self.emotion_classifier = emotion_classifier
self.emotion_classifier_labels = emotion_classifier_labels


def visualizeHandResults(self,
rgb_image: np.array,
detection_result: HandLandmarkerResult):
"""
Visualizes the face landmarkers, emotion, and corresponding emoji.

Args:
rgb_image: a video frame.
detection_result: HandLandmarkerResult for the current frame.

Returns:
The video frame annotated with hand landmarks and emojis.
"""
annotated_image = np.copy(rgb_image)
hand_landmarks_list = detection_result.hand_landmarks
hand_bbxs = []

# Loop through the detected poses to visualize.
for idx in range(len(hand_landmarks_list)):
hand_landmarks = hand_landmarks_list[idx]

# Draw the hand landmarks.
hand_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
hand_landmarks_proto.landmark.extend([
landmark_pb2.NormalizedLandmark(x=landmark.x,
y=landmark.y,
z=landmark.z) for landmark in hand_landmarks
])

# Bounding box calculation
hand_bbxs.append(self.CalculateHandBoundingBox(annotated_image, hand_landmarks_proto))

solutions.drawing_utils.draw_landmarks(
annotated_image,
hand_landmarks_proto,
solutions.hands.HAND_CONNECTIONS,
solutions.drawing_styles.get_default_hand_landmarks_style(),
solutions.drawing_styles.get_default_hand_connections_style())

# add emoji
annotated_image = self.overlayHandEmojis(annotated_image,
detection_result, hand_bbxs)
return annotated_image

def CalculateBoundingBox(self,
image: np.array,
face_landmarks: landmark_pb2.NormalizedLandmarkList):
"""
Calculates the bounding box for the recognized face.

Args:
image: a video frame.
face_landmarks: a list of normalized face landmarks.

Returns:
A list of four numbers to locate the bounding box.
"""
image_width, image_height = image.shape[1], image.shape[0]
landmark_array = np.empty((0, 2), int)

for _, landmark in enumerate(face_landmarks.landmark):
landmark_x = min(int(landmark.x * image_width), image_width - 1)
landmark_y = min(int(landmark.y * image_height), image_height - 1)
landmark_point = [np.array((landmark_x, landmark_y))]
landmark_array = np.append(landmark_array, landmark_point, axis=0)

x, y, w, h = cv2.boundingRect(landmark_array)

return [x, y, x + w, y + h]

def CalculateHandBoundingBox(self,
image: np.array,
hand_landmarks: landmark_pb2.NormalizedLandmarkList):
"""
Calculates the bounding box for the recognized hand.

Args:
image: a video frame.
hand_landmarks: a list of normalized hand landmarks.

Returns:
A list of four numbers to locate the bounding box.
"""
h, w = image.shape[:2]
landmark_array = np.empty((0, 2), int)

for _, landmark in enumerate(hand_landmarks.landmark):
x = min(int(landmark.x * w), w - 1)
y = min(int(landmark.y * h), h - 1)
landmark_point = [np.array((x, y))]
landmark_array = np.append(landmark_array, landmark_point, axis=0)

x, y, w, h = cv2.boundingRect(landmark_array)

return [x, y, x+w, y+h]


def CalculateLandmarkPoints(self,
image: np.array,
face_landmarks: landmark_pb2.NormalizedLandmarkList):
"""
Calculates the landmark coordinates in the image.

Args:
image: a video frame.
face_landmarks: a list of normalized face landmarks.

Returns:
A list of face landmark points in the image space.
"""
h, w = image.shape[:2]
points = []

for _, landmark in enumerate(face_landmarks.landmark):
x = min(int(landmark.x * w), w - 1)
y = min(int(landmark.y * h), h - 1)
points.append([x, y])

return points


def preprocessLandmarks(self, landmark_list):
temp_landmark_list = copy.deepcopy(landmark_list)

# Convert to relative coordinates
x, y = 0, 0
for index, landmark_point in enumerate(temp_landmark_list):
if index == 0:
x, y = landmark_point[:2]

temp_landmark_list[index][0] = temp_landmark_list[index][0] - x
temp_landmark_list[index][1] = temp_landmark_list[index][1] - y

# Convert to a one-dimensional list
temp_landmark_list = list(
itertools.chain.from_iterable(temp_landmark_list))

# Normalization
max_value = max(list(map(abs, temp_landmark_list)))

def normalize_(n):
return n / max_value

temp_landmark_list = list(map(normalize_, temp_landmark_list))

return temp_landmark_list


def annotateBoundingBox(self, rgb_image, bbx):
# Outer rectangle
image = np.copy(rgb_image)
cv2.rectangle(image,
(bbx[0], bbx[1]),
(bbx[2], bbx[3]),
(0, 0, 0),
1)

return image


def annotateEmotionDetection(self, rgb_image, bbx, facial_text):
image = np.copy(rgb_image)
cv2.rectangle(image,
(bbx[0], bbx[1]),
(bbx[2], bbx[1] - 22),
(0, 0, 0),
-1)
if facial_text != "":
info_text = 'Emotion :' + facial_text
cv2.putText(image,
info_text,
(bbx[0] + 5, bbx[1] - 4),
cv2.FONT_HERSHEY_SIMPLEX,
0.6,
(255, 255, 255),
1,
cv2.LINE_AA)

return image


def detectEmotions(self, annotated_image, face_landmarks):
"""
Detects the emotion in the image based on the recognized face landmarks.

Args:
annotated_image: a video frame.
face_landmarks: a list of normalized face landmarks.

Returns:
A string of the facial emotion.
"""
# Bounding box calculation
bbx = self.CalculateBoundingBox(annotated_image, face_landmarks)

# Landmark calculation
landmark_list = self.CalculateLandmarkPoints(annotated_image, face_landmarks)

# Conversion to relative coordinates / normalized coordinates
pre_processed_landmark_list = self.preprocessLandmarks(
landmark_list)

#emotion classification
facial_emotion_id = self.emotion_classifier(pre_processed_landmark_list)
facial_emotion = self.emotion_classifier_labels[facial_emotion_id]
# Drawing part
annotated_image = self.annotateBoundingBox(annotated_image, bbx)
annotated_image = self.annotateEmotionDetection(annotated_image,
bbx,
facial_emotion)

return facial_emotion, bbx


def visualizeFaceResults(self,
annotated_image: np.array,
detection_result: FaceLandmarkerResult):
"""
Visualizes the face landmarkers, emotion, and corresponding emoji.

Args:
annotated_image: a video frame.
detection_result: FaceLandmarkerResult for the current frame.
"""
face_landmarks_list = detection_result.face_landmarks
facial_emotions = []
facial_bbxs = []

# Loop through the detected faces to visualize
for idx in range(len(face_landmarks_list)):
face_landmarks = face_landmarks_list[idx]

# Draw the face landmarks
face_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
face_landmarks_proto.landmark.extend([
landmark_pb2.NormalizedLandmark(x=landmark.x,
y=landmark.y,
z=landmark.z) for landmark in face_landmarks
])

# Run emotion recognizer
emotion, bbx = self.detectEmotions(annotated_image,
face_landmarks_proto)
facial_emotions.append(emotion)
facial_bbxs.append(bbx)

solutions.drawing_utils.draw_landmarks(
image=annotated_image,
landmark_list=face_landmarks_proto,
connections=mp.solutions.face_mesh.FACEMESH_TESSELATION,
landmark_drawing_spec=None,
connection_drawing_spec=mp.solutions.drawing_styles
.get_default_face_mesh_tesselation_style())
solutions.drawing_utils.draw_landmarks(
image=annotated_image,
landmark_list=face_landmarks_proto,
connections=mp.solutions.face_mesh.FACEMESH_CONTOURS,
landmark_drawing_spec=None,
connection_drawing_spec=mp.solutions.drawing_styles
.get_default_face_mesh_contours_style())
solutions.drawing_utils.draw_landmarks(
image=annotated_image,
landmark_list=face_landmarks_proto,
connections=mp.solutions.face_mesh.FACEMESH_IRISES,
landmark_drawing_spec=None,
connection_drawing_spec=mp.solutions.drawing_styles
.get_default_face_mesh_iris_connections_style())

# add emoji
print(facial_emotions)
annotated_image = self.overlayFaceEmojis(annotated_image, facial_emotions, facial_bbxs)

return annotated_image

def recordHandLandmarkerResult(self,
result: HandLandmarkerResult,
output_image: mp.Image,
timestamp_ms: int):
"""A Call back function to Update the hand_landmarker_results state."""
#print('pose landmarker result: {}'.format(result))
self.hand_landmarker_results = result


def recordFaceLandmarkerResult(self,
result: FaceLandmarkerResult,
output_image: mp.Image,
timestamp_ms: int):
"""A Call back function to Update the face_landmarker_results state."""
# print('face landmarker result: {}'.format(result))
self.face_landmarker_results = result


def addEmoji(self, annotated_image, emoji, bbx=None):
"""
Overlays the emoji onto annotated_image.

Args:
annotated_image: a video frame.
emoji: an emoji image.
bbx: four numbers to locate the bounding box.
"""
if bbx:
print(emoji.shape)
hstart = max(0, bbx[1])
wstart = max(0, bbx[0])
h = bbx[3]-bbx[1]
w = bbx[2]-bbx[0]
emoji = cv2.resize(emoji, (w, h), interpolation= cv2.INTER_LINEAR)
else:
h, w = emoji.shape[:2]
hstart = np.random.randint(0, annotated_image.shape[0] - h)
wstart = np.random.randint(0, annotated_image.shape[1] - w)

# Separate the alpha channel from the color channels
alpha = emoji[:, :, 3] / 255 # convert from 0-255 to 0.0-1.0
overlay_colors = emoji[:, :, :3]
alpha_mask = np.dstack((alpha, alpha, alpha))
background = annotated_image[hstart:hstart+h, wstart:wstart+w]
# Combine the background with the overlay image weighted by alpha
composite = background * (1 - alpha_mask) + overlay_colors * alpha_mask
# Overwrite the section of the background image that has been updated
annotated_image[hstart:hstart+h, wstart:wstart+w] = composite

return annotated_image


def overlayHandEmojis(self,
annotated_image: np.array,
detection_result: HandLandmarkerResult,
hand_bbxs: list):
"""
Overlays the corresponding emoji image on top of the video stream.

Args:
annotated_image: a video frame.
detection_result: HandLandmarkerResult for the current frame.
hand_bbxs: A list of bounding boxes for the recognized hand.
"""
emojis_to_display = []
for i, result in enumerate(detection_result.gestures):
for j, gesture in enumerate(result):
if gesture.category_name == "Closed_Fist":
if detection_result.handedness[i][j].category_name == "Left":
emojis_to_display.append((cv2.flip(emojis["closed_fist"], 1), hand_bbxs[i]))
else:
emojis_to_display.append((emojis["closed_fist"], hand_bbxs[i]))
elif gesture.category_name == "Open_Palm":
if detection_result.handedness[i][j].category_name == "Left":
emojis_to_display.append((cv2.flip(emojis["open_palm"], 1), hand_bbxs[i]))
else:
emojis_to_display.append((emojis["open_palm"], hand_bbxs[i]))
elif gesture.category_name == "Pointing_Up":
if detection_result.handedness[i][j].category_name == "Left":
emojis_to_display.append((cv2.flip(emojis["pointing_up"], 1), hand_bbxs[i]))
else:
emojis_to_display.append((emojis["pointing_up"], hand_bbxs[i]))
elif gesture.category_name == "Thumb_Down":
if detection_result.handedness[i][j].category_name == "Left":
emojis_to_display.append((cv2.flip(emojis["thumb_down"], 1), hand_bbxs[i]))
else:
emojis_to_display.append((emojis["thumb_down"], hand_bbxs[i]))
elif gesture.category_name == "Thumb_Up":
if detection_result.handedness[i][j].category_name == "Left":
emojis_to_display.append((cv2.flip(emojis["thumb_up"], 1), hand_bbxs[i]))
else:
emojis_to_display.append((emojis["thumb_up"], hand_bbxs[i]))
elif gesture.category_name == "Victory":
if detection_result.handedness[i][j].category_name == "Left":
emojis_to_display.append((cv2.flip(emojis["victory"], 1), hand_bbxs[i]))
else:
emojis_to_display.append((emojis["victory"], hand_bbxs[i]))
elif gesture.category_name == "ILoveYou":
if detection_result.handedness[i][j].category_name == "Left":
emojis_to_display.append((cv2.flip(emojis["love"], 1), hand_bbxs[i]))
else:
emojis_to_display.append((emojis["love"], hand_bbxs[i]))

if len(emojis_to_display):
for emoji, bbx in emojis_to_display:
annotated_image = self.addEmoji(annotated_image, emoji, bbx)

return annotated_image


def overlayFaceEmojis(self,
annotated_image: np.array,
facial_emotions: list,
facial_bbxs: list):
"""
Overlays the corresponding emoji image on top of the video stream.

Args:
annotated_image: a video frame.
facial_emotions: Facial emotions recognized for the current frame.
facial_bbxs: A list of bounding boxes for the recognized face.
"""
emojis_to_display = []
for i, emotion in enumerate(facial_emotions):
if emotion == "Angry":
emojis_to_display.append((emojis["angry"], facial_bbxs[i]))
elif emotion == "Happy":
emojis_to_display.append((emojis["happy"], facial_bbxs[i]))
elif emotion == "Sad":
emojis_to_display.append((emojis["sad"], facial_bbxs[i]))
elif emotion == "Neutral":
emojis_to_display.append((emojis["neutral"], facial_bbxs[i]))
elif emotion == "Surprise":
emojis_to_display.append((emojis["surprised"], facial_bbxs[i]))

if len(emojis_to_display):
for emoji, bbx in emojis_to_display:
annotated_image = self.addEmoji(annotated_image, emoji, bbx)

return annotated_image


def main(self):
"""The main loop to run the models and process video frames."""
hand_gesture_options = GestureRecognizerOptions(
base_options=BaseOptions(model_asset_path='gesture_recognizer.task'),
running_mode=VisionRunningMode.LIVE_STREAM,
num_hands=5,
result_callback=self.recordHandLandmarkerResult)

face_landmarker_options = FaceLandmarkerOptions(
base_options=BaseOptions(model_asset_path='face_landmarker.task'),
running_mode=VisionRunningMode.LIVE_STREAM,
num_faces=5,
result_callback=self.recordFaceLandmarkerResult)

video = cv2.VideoCapture(1)
timestamp = 0

with GestureRecognizer.create_from_options(hand_gesture_options) as hand_gesture_recognizer, \
FaceLandmarker.create_from_options(face_landmarker_options) as face_landmarker:
# The hand gesture recognizer and face landmarker are initialized.
while video.isOpened():
# Capture frame-by-frame
ret, frame = video.read()

if not ret:
print("Ignoring empty frame")
break

timestamp += 1
mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=frame)
hand_gesture_recognizer.recognize_async(mp_image, timestamp)
face_landmarker.detect_async(mp_image, timestamp)

if not self.hand_landmarker_results and not self.face_landmarker_results:
cv2.imshow('Show', frame)
else:
annotated_image = mp_image.numpy_view()
if self.hand_landmarker_results:
annotated_image = self.visualizeHandResults(
annotated_image,
self.hand_landmarker_results)
if self.face_landmarker_results:
annotated_image = self.visualizeFaceResults(
annotated_image,
self.face_landmarker_results)

cv2.imshow('Show',annotated_image)
print("showing detected image")
# print(self.hand_landmarker_results.gestures)

if cv2.waitKey(5) & 0xFF == ord('q'):
print("Closing Camera Stream")
break

video.release()
cv2.destroyAllWindows()


if __name__ == "__main__":
# Load emoji images
emojis = {}
emojis["closed_fist"] = cv2.imread("./emojis/closed_fist.png", cv2.IMREAD_UNCHANGED)
emojis["love"] = cv2.imread("./emojis/love.png", cv2.IMREAD_UNCHANGED)
emojis["open_palm"] = cv2.imread("./emojis/open_palm.png", cv2.IMREAD_UNCHANGED)
emojis["pointing_up"] = cv2.imread("./emojis/pointing_up.png", cv2.IMREAD_UNCHANGED)
emojis["thumb_down"] = cv2.imread("./emojis/thumb_down.png", cv2.IMREAD_UNCHANGED)
emojis["thumb_up"] = cv2.imread("./emojis/thumb_up.png", cv2.IMREAD_UNCHANGED)
emojis["victory"] = cv2.imread("./emojis/victory.png", cv2.IMREAD_UNCHANGED)

emojis["happy"] = cv2.imread("./emojis/happy.png", cv2.IMREAD_UNCHANGED)
emojis["neutral"] = cv2.imread("./emojis/neutral.png", cv2.IMREAD_UNCHANGED)
emojis["sad"] = cv2.imread("./emojis/sad.png", cv2.IMREAD_UNCHANGED)
emojis["surprised"] = cv2.imread("./emojis/surprised.png", cv2.IMREAD_UNCHANGED)
emojis["angry"] = cv2.imread("./emojis/angry.png", cv2.IMREAD_UNCHANGED)

# Init facial landmark -> facial emotion model
emotion_classifier = EmotionClassifier()
emotion_classifier_labels = ["Angry", "Happy", "Neutral", "Sad", "Surprise"]

# Init pipeline
recognizer = EmojiRecognition(emojis,
emotion_classifier,
emotion_classifier_labels)
recognizer.main()

The final video recording is here:

https://drive.google.com/file/d/1o-TMmeXvNhCh1R-mTDth1bK-vNXBvl-b/view?usp=sharing

The final presentation is here:

https://docs.google.com/presentation/d/102SMDv-iIOM3E-sE6RGol1hxRextD1EXkdM-Niqpd9Y/edit#slide=id.g2bd759fab9e_0_58

(24/04/24) Final Project — Play Test 2

In the last week, I’ve made significant progress on the final project. Since my schedule is busy during the final weeks, I’d like to complete the project as much as I can ahead of time.

To continue what I had since the last play test, which is to run the hand landmarker model that MediaPipe provides, I wanted to recognize hand gestures based on the landmark. As I browse through the MediaPipe library, I recognize that they have a hand gesture recognition model open sourced to recognize from landmarks to gestures. What I did was to modify the current OpenCV pipeline to call the hand gesture recognition model instead. I took advantage of this colab that Google provided, on how to use this library and visualize the hand landmarks.

During the first play test, I got the feedback that if the prototype is able to recognize multiple people’s hands, it would be more engaging. So I explored their library and they provide a parameter ‘num_hands’ which I can set to enable more hand recognitions.

The next step was to visualize the hand gestures as emojis and display the emojis on screen. The MediaPipe library allows the recognition of the following gestures:

  • Closed fist ✊
  • Open palm ✋
  • Pointing up 👆
  • Thumb down 👎
  • Thumb up 👍
  • Victory ✌️
  • I love you 🤟

What I did was to find the corresponding emoji images with transparent background from https://emojipedia.org/ and download them to the working directory. I used OpenCV library to overlay the emoji image on top of the video stream. To make it more exciting, I randomized the location of display of the emojis images on the screen, so that it seems more eye catching. This is different from what I initially proposed as an “emoji shower” going from top to bottom.

For the second play test, I asked my friends to try it out. I would like to know how they felt about the interactivity of this experience and how collaborative they felt. A feedback I got from them is that this is super fun to participate with your friends and has a pop culture aspect of it. They can imagine that people would be making all kinds of crazy gestures and facial expressions and take selfies!

I’ve also made enough progress close to what I proposed at the beginning to enable facial gesture recognition and emoji visualization. (I’ll talk about it in the final week’s blog).

The result looks like this:

https://drive.google.com/file/d/18l6etZFzF8TAfKyDpCL8YutI-HM-rZ80/view?usp=sharing

(24/04/24) OOP

class VendingMachine:
def __init__(self, candies=6, chips=10):
self.candies = candies
self.chips = chips
self.candyPrice = 0.75
self.chipsPrice = 2.75
self.revenue = 0.

def buyCandy(self):
if self.candies == 0:
print("Candy is sold out!")
return

self.candies -= 1
self.revenue += self.candyPrice

def buyChips(self):
if self.chips == 0:
print("Chips are sold out!")
return

self.chips -= 1
self.revenue += self.chipsPrice

def printStatus(self):
print(f"There are {self.candies} candies left and {self.chips} chips " \
f"left. The vending machine has earned {self.revenue} dollars " \
f"since last refill.")

def main():
vm = VendingMachine()
while True:
item = int(input("Hello! What would you like to buy? 1. candy 2. chips"))
if item == 1:
vm.buyCandy()
vm.printStatus()
elif item == 2:
vm.buyChips()
vm.printStatus()
else:
print("Sorry! Please choose between 1 and 2.")

cont = input("Do you want to buy more? Y/N")
if cont == 'N' or cont == 'n':
break


if __name__ == "__main__":
main()

(24/04/17) Final Project — Play test 1

This week, I was able to write a program with OpenCV and MediaPipe that takes video streams and runs the hand landmark model. I actually tried 2 models that MediaPipe provides:

  • Hand landmark
  • Pose/body landmark

There was some difficulties to install the OpenCV and MediaPipe library and versioning to make it work.

Here’s a recording:

https://drive.google.com/file/d/1k23Is7pfk4A-AyVNhQZrHGc8eErJoaDA/view?usp=drive_link

Assumptions I’m testing:

  • First, see if participants would know it’s tracking their hand landmarks without explicitly saying so (I’m also considering adding facial gesture recognition, which participants will be able to identify immediately)
  • Second, people would be able to try different hand gestures once they know it’s tracking the hand gesture.

Questions to testers:

  • What type of hand gestures should I include (since I will need to codify the rules from landmarks to gestures)?
  • Should I make it a target game (users are asked to perform certain hand gestures) or an emoji shower (users can do what hand gestures they want and the display will reflect/enhance their actions)?

(24/04/10) Final Project— Prototype proposal

4/10 Proposal and research

  • Specify the hand gestures and emojis to be recognized and generated. Expend to facial gesture recognition if time allows.
  • Determine the hardware requirements for capturing video input
  • Study OpenCV and MediaPipe documentation, focusing on hand detection and gesture recognition modules.
  • Install OpenCV and MediaPipe libraries along with their dependencies.
    Configure any necessary development tools for smooth workflow.

4/18: First play test in class

  • Implement a basic prototype to capture video input using OpenCV.
  • Integrate hand detection using MediaPipe to recognize basic hand gestures.
  • Develop a mechanism to generate emojis based on recognized hand gestures. Start with simple shapes or placeholders for emojis.
  • Design a minimalistic UI for displaying captured video and generated emojis.

4/25 Second play test in class

  • Integrate facial expression recognition using MediaPipe or similar libraries.
  • Define a set of facial expressions to be recognized and corresponding emojis.
  • Extend the system to combine hand gestures and facial expressions for more expressive emojis.

5/9 Final presentation

  • Implement logic to blend hand gestures and facial expressions seamlessly.
  • Improve the visuals to be interactive with the human movements.
  • Prepare a presentation or demo showcasing the project’s features and functionality.

(24/04/10) Week 11 — Webhook and n8n

For this assignment, I used n8n to automate the message sending to Slack. I set up a webhook node connected to a Slack node in n8n. To trigger the webhook, I run the local program which asks to press the key “Y” and will send a GET request to the n8n endpoint. The local program looks like the following:

import requests


def main():
while True:
print("Do you want to send a message? Y/N")
userInput = input()
if userInput == 'Y' or userInput == 'y':
url = "https://n8n.doubletake.design/webhook-test/c4a3d9ef-583d-488d-9150-f45bc5a2f03b/:winnie"
payload = {}
headers = {}

response = requests.request("GET", url, headers=headers, data=payload)
print("sent!")


if __name__=="__main__":
main()

(24/04/03) Midterm proposal

Idea 1 — sensor input, visual output using py5

The first idea is to extend my midterm project of generative art by taking sensor inputs. I would like to use the proximity detection and gesture detection functionality to let the user interact with the generative art. Specifically, when the user move their hand around the sensor board, they should be able to control where the strokes of the generative art go. The visuals will be more aesthetically involved compared to the midterm one. I’m mostly excited about this concept.

Idea 2 — sensor input, TouchDesigner

The second idea is to extend the board with kinect sensor which would enable the full body capturing of the user when visualizing the generative art. I can potentially explore the use of TouchDesigner to generate visual effects instead of py5.

Idea 3 —

Instead of using the person’s gesture, I’m thinking of using the sound volume to control the visuals.

(24/04/03) Week 10 — Communication Protocols

For this assigned, I chose to experiment with the serial connection option. I first ran the following command on my computer in order to install the pyserial library:

pip3 install pyserial

Then, I needed to get the port name for my sensor board by running the following code:

# in serial_listports.py

import serial.tools.list_ports
ports = serial.tools.list_ports.comports()

for port, desc, hwid in sorted(ports):
print("{}: {} [{}]".format(port, desc, hwid))

with the command:

python3 serial_listports.py

This gives me the device info: /dev/cu.usbmodem84722E2DAC0B1: COLUMBIA-DSL-SENSOR-BOARD-V1 [USB VID:PID=303A:81D0 SER=84722E2DAC0B LOCATION=0–1]. I’m able to get the port name: /dev/cu.usbmodem84722E2DAC0B1

I’d like to utilize the two buttons on the sensor board (on pin 16 and pin 15). So I defined output value to be 1 when D16 is pressed, and value to bo 2 when D15 is pressed. The logic on the sensor board looks like this:

# in code.py running on sensor board

import time
import board
from digitalio import DigitalInOut, Direction, Pull

# use board.D16 for Button1 or board.D15 for Button2
button1 = DigitalInOut(board.D16) # stores pin16 into the variable button
button1.direction = Direction.INPUT # make the button an input
button1.pull = Pull.DOWN # set the pull direction for the button

button2 = DigitalInOut(board.D15)
button2.direction = Direction.INPUT
button2.pull = Pull.DOWN

last_button_state1 = False
last_button_state2 = False

while True:
if button1.value is True and button1.value is not last_button_state1:
value = 1
print(value)

last_button_state1 = button1.value # saving the state of the button so we don't have multiple key presses

if button2.value is True and button2.value is not last_button_state2:
value = 2
print(value)

last_button_state2 = button2.value

time.sleep(0.01) # small amount of time for debounce

Finally, the logic running on my computer would parse the values sent from the sensor board. I used an if-else statement to say that if the value is 1, turn on the light; if the value is 2, play sound. This is a pseudo code without actually manipulate the light or the sound but this logic should reside on the computer/server if implemented. The logic running on the computer looks like this:

# in serial_readline.py

import serial
import time

sensor_board = serial.Serial(port="/dev/cu.usbmodem84722E2DAC0B1", baudrate=115200, timeout=.1)

def read():
time.sleep(0.05)
data = sensor_board.readline()
return str(data, 'ascii').split()

while True:
value = read()
if value != []:
if value[0] == '1':
print("Button 1 pressed. Turn on the light!")
elif value[0] == '2':
print("Button 2 pressed. Play sound!")

Here is the a video recording of the interaction:

https://drive.google.com/file/d/15mDRnInWfKDNNLpVylAN8CVKRIIbzfSr/view?usp=sharing

(24/03/27) Week 9 — IoTs

For this assignment, I’m partnered with Tak to create a wireless connection between our sensor boards. We are utilizing 2 different sensors:

  • The microphone and the potentiometer

and 2 different actuators:

  • The LED light and the neon light

The specific interaction looks like the following:

If I rotates the potentiometer from low to high, it will change the LED light from short pulse to long pulse on Tak’s board.

  • Here, my program on the board is publishing the potentiometer value to the “tak/winnie/winnie-data” topic (publisher)
  • My program on the board is listening to the mic volume on the “tak/winnie/tak-data” topic (subscriber)

On the other hand, If Tak speak to his board from low to high volume, it will change the color of my neon light on the board from green (quiet) to orange to red (loud).

  • Here, Tak’s program is publishing the mic volume to the “tak/winnie/tak-data”topic (publisher)
  • My program is listening to the potentiometer value on the “tak/winnie/winnie-data” topic (subscriber)

One challenge that we faced is that because each of the program acts as both the publisher and subscriber, we have to make it clear what data is published to which topic and which topic the program is subscribed to. It can get a bit confusing.

Additionally, the sensor input and the corresponding actuator effect are implemented separately. For example, the logic of taking the potentiometer value is implemented on my board, but the effect of pulse change on LED light is implemented on Tak’s board. And vice versa. Using the message() function helped to isolate the logic by taking the topic message only without caring too much about what is happening on the other side. This message value has to be parsed into int() in order to do if-else statements for the actuator.

See the video recording below:

https://drive.google.com/file/d/1zRTQH2PWe-o6xwvgcWWbo5zdN1PLILVr/view?usp=sharing

See the code below:

# on my board
import analogio
import board
import neopixel
import socketpool
import ssl
import time
import wifi

import adafruit_minimqtt.adafruit_minimqtt as MQTT

from analogio import AnalogIn
from digitalio import DigitalInOut, Direction


# Setup the topics
mqtt_topic_pub = "tak-winnie/winnie-data"
mqtt_topic_sub = "tak-winnie/tak-data"

# Setup the sensors

# publish potentiometer data to "tak-winnie/winnie-data"
analog_in = AnalogIn(board.A8) # set the pin to be an analog input

# control neon light color from subscribed "tak-winnie/tak-data" (i.e. mic volume)
pixel_pin = board.D42
pwr = DigitalInOut(board.NEOPIXEL_POWER)
pwr.direction = Direction.OUTPUT
pwr.value = True

pixels = neopixel.NeoPixel(pixel_pin, 1, brightness=0.3, auto_write=False)

GREEN = (0, 255, 0)
ORANGE = (255, 165, 0)
RED = (255, 0, 0)


# define callback funcs which are called when events occur
def connected(client, userdata, flags, rc):
print(f"Connected to MQTT broker! Listening to {mqtt_topic_sub}.")
client.subscribe(mqtt_topic_sub)

# This method is called when the client is disconnected
def disconnected(client, userdata, rc):
print("Disconnected from MQTT broker!")

# This method is called when a topic the client is subscribed to
# has a new message.
# It will change the neon light color based on the mic volume.
def message(client, topic, message):
print(f"New message on topic {topic}: {message}.")
if int(message) < 33000:
print("Received low volume")
pixels.fill(GREEN)
pixels.show()
elif int(message) < 35000:
print("Received mid volume")
pixels.fill(ORANGE)
pixels.show()
else:
print("Received high volume")
pixels.fill(RED)
pixels.show()

# connect to wifi
print("Connecting to WiFi")
wifi.radio.connect("WIFI_NAME", "WIFI_PASSWORD")
print("Connected to WiFi")

pool = socketpool.SocketPool(wifi.radio)
ssl_context = ssl.create_default_context()

# create our MQTT client and set up all the callbacks
mqtt_client = MQTT.MQTT(
broker="mqtt.doubletake.design",
username="columbia",
password="PASSWORD",
socket_pool=pool,
ssl_context=ssl_context,
)

mqtt_client.on_connect = connected
mqtt_client.on_disconnect = disconnected
mqtt_client.on_message = message

print("Connecting to MQTT broker...")
mqtt_client.connect()

# begin a loop that constantly checks for new messages
while True:
# Poll the message queue
mqtt_client.loop(timeout=1)

# Send a new message
potentiometerValue = analog_in.value
mqtt_client.publish(mqtt_topic_pub, potentiometerValue)
print("Sent! Potentiometer value: ", potentiometerValue)

time.sleep(0.01)
# on Tak's board
import analogio
import board
import pwmio
import socketpool
import ssl
import time
import wifi

import adafruit_minimqtt.adafruit_minimqtt as MQTT

from analogio import AnalogIn


# Setup the topics
mqtt_topic_pub = "tak-winnie/tak-data"
mqtt_topic_sub = "tak-winnie/winnie-data"

# Setup the sensors

# publish mic data to "tak-winnie/tak-data"
mic = AnalogIn(board.A9)

# control led pulse from subscribed "tak-winnie/winnie-data" (i.e. potentiometer value)
led = pwmio.PWMOut(board.D13)


# Define callback funcs which are called when events occur
def connected(client, userdata, flags, rc):
print(f"Connected to MQTT broker! Listening to {mqtt_topic_sub}.")
client.subscribe(mqtt_topic_sub)

# This method is called when the client is disconnected
def disconnected(client, userdata, rc):
print("Disconnected from MQTT broker!")

# This method is called when a topic the client is subscribed to
# has a new message.
# It will change the led pulse frequency based on the potentiometer value.
def message(client, topic, message):
print(f"New message on topic {topic}: {message}.")
for cycle in range(0, int(message)):
led.duty_cycle = cycle
for cycle in range(int(message), 0, -1):
led.duty_cycle = cycle


# connect to wifi
print("Connecting to WiFi")
wifi.radio.connect("Columbia University")
print("Connected to WiFi")

pool = socketpool.SocketPool(wifi.radio)
ssl_context = ssl.create_default_context()

# create our MQTT client and set up all the callbacks
mqtt_client = MQTT.MQTT(
broker="mqtt.doubletake.design",
username="columbia",
password="PASSWORD",
socket_pool=pool,
ssl_context=ssl_context,
)

mqtt_client.on_connect = connected
mqtt_client.on_disconnect = disconnected
mqtt_client.on_message = message

print("Connecting to MQTT broker...")
mqtt_client.connect()

# begin a loop that constantly checks for new messages
while True:
# Poll the message queue
mqtt_client.loop(timeout=1)

# Send a new message
micVolume = mic.value
mqtt_client.publish(mqtt_topic_pub, micVolume)
print("Sent! Mic volume: ", micVolume)

time.sleep(0.01)

(24/03/20) Week 8 — Sensors

The first sensor I’m interested in is the Adafruit TMP117 ±0.1°C High Accuracy I2C Temperature Sensor. It operates by measuring the voltage across a temperature-sensitive diode and converting it into a digital temperature reading. It provides high-resolution temperature data with up to ±0.3°C accuracy across a wide range of temperatures. Creative applications could include integrating the sensor into smart thermostats for precise temperature control, incorporating it into wearable devices to monitor body temperature, or using it in environmental monitoring systems to ensure optimal conditions for plants or animals.

The second sensor I researched on is the Adafruit AMG8833 IR Thermal Camera Breakout. It employs an 8x8 array of IR thermal sensors to detect infrared radiation emitted by objects. By measuring the intensity of this radiation, it generates 64 individual temperature readings via I2C communication. This data offers insights into temperature distribution across surfaces, facilitating applications such as occupancy detection in smart buildings, thermal imaging for robotics or drones, and even non-contact temperature monitoring in medical devices. Its compact size and ease of integration make it ideal for various projects requiring thermal sensing capabilities.

The next sensor I looked into is the Adafruit Ultimate GPS Logger Shield. It utilizes the MTK3333 chipset, which integrates GPS and GLONASS modules for accurate location tracking. It receives signals from satellites, determining latitude, longitude, and altitude. This data is relayed to Arduino-compatible boards via SPI and UART communication. Creative applications include geocaching projects, real-time location tracking for vehicles or drones, navigation systems for outdoor activities, and location-based music players that adjust tunes based on the user’s position within a city.

(24/03/06) Week 7 — CircuitPython

In this assignment, I tried using different values for sleep() to create different LED flash patterns. I’d like to try

  • when both buttons are released, the light flashes at a constant speed
  • when button 1 is pressed, the light flashes slower, at a constant speed
  • when button 2 is pressed, the light flashes even slower.

See the video below:

https://drive.google.com/file/d/1_bS_w7Er2YM8xckGIYYNSnEsH_6TdXYX/view?usp=sharing

See the code:

import board

from digitalio import DigitalInOut, Direction, Pull
import time


led = DigitalInOut(board.D13)
led.direction = Direction.OUTPUT

button1 = DigitalInOut(board.D15) # D15, D16 for buttons
button1.direction = Direction.INPUT
button1.pull = Pull.DOWN

button2 = DigitalInOut(board.D16)
button2.direction = Direction.INPUT
button2.pull = Pull.DOWN

while True:
# led.value = False
# time.sleep(1) # in seconds
# led.value = True
# time.sleep(1)
if button1.value is True:
led.value = True
time.sleep(1)
led.value = False
time.sleep(0.5)
elif button2.value is True:
led.value = True
time.sleep(0.5)
led.value = False
time.sleep(0.25)
else:
led.value = True
time.sleep(0.3)
led.value = False
time.sleep(0.15)

time.sleep(0.01) # debounce

Reflections on designing for interactivity with physical computing:

  • I like the point the author made that interactive work should motivate participants to press the button or walk into the room. I’ve seen that visitors are often intimidated by the installations works and would not touch the work. How to invite the audience to interact with the work could be challenging. The author mentioned later the article to use evocative materials, like the soft, to invite people to touch. One thing that I find often used is to use sensors, like cameras, sounds or lights to let the visitors trigger some effects, such as using TouchDesigner, without actually physically touching the surface. The author also referred to this type of interaction as implicit interaction later in the article.
  • The author mentioned that giving the audience the reason to collaborate or compete could be great ways to let people engaged. They would find more purpose and joy when initiating the interactions.
  • Physical computing as a great tool used in interactive art and participatory art has transformed the traditional art function as a way of the expression of the artist themselves. Visitors become the creators or artists and the piece cannot be complete without their contribution. In a way, creativity is no longer defined by the artist who author the work, but the participants as well. It’s a collective effort.

(24/02/28) Midterm Project

For the midterm project, I would like to explore algorithmically generated art inspired by abstract/minimalist paintings. Since this type of paintings are characterized by the primitive colors and simple geometric shapes, I would like to explore how these can be accomplished by code.

To get a random color palette, I explored a few APIs:

The last one seems to be the only one that is working. It requires a seed color and number of colors needed, and will return a list of colors.

# in random_generators.py

def generateColorPalette(seed_color: str, num_colors: int) -> []:
"""Sends a GET request to color palette generator API with a seed color."""
url = f"https://www.thecolorapi.com/scheme?hex={seed_color}&count={num_colors}"
response = requests.request("GET", url, headers={}, data={})
results = response.json()
colors = []
for i in results["colors"]:
colors.append(i["hex"]["value"])
print(colors)
return colors

I would like to randomly initialize the geometric shape to squares, circles, and triangles. I did this by generating a random number between 0 and 1. Given the uniform distribution of probabilities, if the number falls between 0 and 0.33, I draw a square; 0.33 to 0.66, I draw a circle; 0.66 to 1, I draw a triangle. Since I’d like to mimic the organic strokes of an actual painting, I would also initialize particles and visualize their travel trajectories with Perlin noise.

def initGeometry(particles, colors):
"""Intializes the geometries and particles."""
for j in range(N_SHAPES):
x = py5.random() * 500
y = py5.random() * 500
scale = int(py5.random() * 200)
half_scale = scale / 2
color = getColor(colors)
py5.no_stroke()
py5.fill(color)

seed = py5.random()
if seed < 0.33:
# draw squares
for i in range(-scale//2, scale//2):
particles.append(Particle(x + i, y - half_scale, color))
particles.append(Particle(x + i, y + half_scale, color))
particles.append(Particle(x - half_scale, y + i, color))
particles.append(Particle(x + half_scale, y + i, color))

py5.square(x, y, scale)

elif seed < 0.66:
# draw circles
for x in range(360):
a = py5.TWO_PI / 360 * x
particles.append(Particle(x + half_scale * py5.cos(a), y + half_scale * py5.sin(a), color))

py5.circle(x, y, scale)

else:
# draw triangles
for b in range(scale):
particles.append(Particle(x + b, y, color))
particles.append(Particle(x + b/2, y - (b/2) * py5.sqrt(2), color))
particles.append(Particle(x + scale - b/2, y - (b/2) * py5.sqrt(2), color))

py5.triangle(x, y, x + scale, y, x + half_scale, y - half_scale * py5.sqrt(3))

For each particle, I define its properties and movements. Since I would like each particle to have a non-linear trajectory, I used py5.noise() method to generate a noise-based delta for each movement, along with some angles so that it’s not entirely random, but mimicking the organic growth shape. The lifespan parameter will define how long the movement lasts.

class Particle:
"""A class defining properties and movements of a particle."""
def __init__(self, x, y, color):
"""Initializes the particle with properties."""
self.pos = py5.Py5Vector(x, y)
self.step = 2
self.angle = py5.random() * 10
self.lifeSpan = 40
self.noiseScale = 4
self.noiseStrength = 50
self.color = color

def show(self):
"""Displays the particle."""
py5.no_stroke()
py5.fill(self.color)
py5.circle(self.pos.x, self.pos.y, 2)

def move(self):
"""Moves the particle with noise."""
self.angle = py5.noise(self.pos.x / self.noiseScale, self.pos.y / self.noiseScale) * self.noiseStrength
self.pos.x += py5.cos(self.angle) * self.step
self.pos.y += py5.sin(self.angle) * self.step
self.lifeSpan -= 0.1

def isDead(self):
"""Whether the current particle has exhausted its lifespan."""
return self.lifeSpan < 0.0

def run(self):
"""Displays and moves the particle."""
self.show()
self.move()

Even though there are many randomizations going on in this project, I would like to be able to regenerate the same result given a fixed seed. An API that I found is https://www.randomnumberapi.com/ which returns random numbers given the range and number of numbers needed through a GET request. With this seed, I’m able to set the random_seed and noise_seed for py5, so that every time the randomization will give the same result given the same seed. Additionally, the seed color can be reproduced by getting the r, g, b from the same seed.

# in random_generators.py

def generateRandomSeed() -> int:
"""Sends a GET request to random number generator API."""
min = 0
max = 2**31 - 1
url = f"http://www.randomnumberapi.com/api/v1.0/random?min={min}&max={max}&count=1"
response = requests.request("GET", url, headers={}, data={})
results = response.json()
print(results)
return results[0]
# in sketch.py
...

# Sets seed
seed = None
print("Welcome to my sketch!")
print("Do you have a seed for a previous generation? Y/N")
choice = input()
if choice == "Y" or choice == "y":
print("Please provide the seed: ")
seed = int(input())
elif choice == "N" or choice == "n":
print("No worries! We'll generate one randomly now!")
seed = generateRandomSeed()
print(f"Your seed is: {seed}")
else:
print("Please answer Y or N.")
exit()
py5.random_seed(seed)
py5.noise_seed(seed)

# Sets color palette
r = py5.random_int(0, 255)
g = py5.random_int(0, 255)
b = py5.random_int(0, 255)
c = py5.color(r, g, b)
hex_c = py5.hex_color(c)
print(f"The hexcode for the color seed is: {hex_c}") # "#rrggbbaa"
colors = generateColorPalette(seed_color= hex_c[1:7], num_colors=16)

...

The slides for this project:

(24/02/21) Midterm Proposal / Research

For my midterm project, I intend to delve into the realm of algorithmically generated abstract art using the py5 library. My approach involves leveraging the Colormind API to dynamically generate unique color palettes for each iteration.

Drawing inspiration from the works of Kandinsky and Rothko, I aim to explore the harmonious interplay of color and shape combinations. The RGB color combinations obtained from the Colormind API will serve as the foundation for the generative painting. Employing the random() function in py5, I will determine the starting points, as well as the size, of each shape or stroke.

In the creative process, I will traverse the color list to assign distinct hues to each shape or stroke. Additionally, I plan to incorporate the noise() function in py5 within a while loop in draw() function to introduce Perlin noise. This will emulate natural perturbations in the trajectories of lines and variations in color density, imparting a sense of organic complexity to the artwork.

Kandinsky’s masterpiece, Several Circles, was completed in 1926. This could be a source of inspiration of what I’m aiming for in the final visual output.

The ultimate objective is to produce a singular, algorithmically generated painting with each run — a composition that visually approximates the essence of a traditional painting. Through this exploration, I aim to capture the aesthetic nuances of renowned artists while infusing a unique digital signature into each artwork.

(24/02/21) Week 4&5 — Creating a Chatbot

For this two week assignment, Janice and I brainstormed a few ideas while browsing through the web APIs listed. We would like to create a chatbot that can provide some useful insights for the user, for example, answering the weather predictions, or giving restaurant recommendations. We came across with the Open Brewery API that we can interact with by giving geolocation information, such as City, or State, and getting brewery recommendations, or simply getting a random recommendation.

Since endpoints will allow different search patterns, we think it would make sense to give the user the options to choose which route they would like to pursuit at the beginning, similar to a call center. For example, the user should press “1” if they would like to get a random recommendation, “2” if they’d like to get a recommendation by postal code, etc. Based on the user choice, we will use an if-else statement to send requests and parse the answer. We finalized the options we would like to handle as follow:

1: get a random recommendation

  • This is a GET request that hits the following endpoint: https://api.openbrewerydb.org/v1/breweries/random

2: get a recommendation by post code

  • This is a GET request that hits the following endpoint: https://api.openbrewerydb.org/v1/breweries?by_postal={POSTCODE} by replacing POSTCODE with user input

3: get a recommendation by city

  • This is a GET request that hits the following endpoint: https://api.openbrewerydb.org/v1/breweries?by_city={CITY} by replacing CITY with user input

4: get a recommendation by a search term

  • This is a GET request that hits the following endpoint: https://api.openbrewerydb.org/v1/breweries/search?query={TERM} by replacing TERM with user input

Finally, because there are some repeated logic, for example, for sending request and parsing the response, and print the final recommendation, we cleaned up the code by extracting them to functions.

The final code looks like this:

# API: https://www.openbrewerydb.org/documentation#single-brewery

import requests

########## Winnie's code starts here #########################
def getListOfResultsFromResponse(url: str, payload: dict={}, headers: dict={}) -> []:
"""
Returns a list of parsed results from response from sending the GET request.
"""
response = requests.request("GET", url, headers=headers, data=payload)
results = response.json()
return results

def printResult(brewery: {}):
print("Name: ", brewery['name'])
print("Location: ", brewery['address_1'], brewery['city'], brewery['state_province'], brewery['postal_code'])


def main():
print('Welcome to the Brewery Suggestion Bot!')

print(
"If you'd like a random suggestion, press 1. \n"
"If you'd like a suggestion by post code, press 2. \n"
"If you'd like a suggestion by city, press 3. \n"
"If you'd like to search brewies based on a term, press 4.")
choice = int(input())

if choice == 1:
getRandomBreweryUrl = "https://api.openbrewerydb.org/v1/breweries/random"
getRandomBreweryResult = getListOfResultsFromResponse(getRandomBreweryUrl)[0]
# print(getRandomBreweryResult)
print("Great! Here is one recommended brewery: ")
printResult(getRandomBreweryResult)

elif choice == 2:
print("Please enter a post code: ")
postCode = int(input())

getBreweryByPostCodeUrl = f"https://api.openbrewerydb.org/v1/breweries?by_postal={postCode}"
getBreweryByPostCodeList = getListOfResultsFromResponse(getBreweryByPostCodeUrl)
if len(getBreweryByPostCodeList) == 0:
print("Sorry! Unabled to find a brewery at this post code.")
return 0

getBreweryByPostCodeResult = getBreweryByPostCodeList[0]
# print(getBreweryByPostCodeResult)
print("Great! Here is one recommended brewery: ")
printResult(getBreweryByPostCodeResult)

elif choice == 3:
print("Please enter a city: ")
city = input()

getBreweryByCityUrl = f"https://api.openbrewerydb.org/v1/breweries?by_city={city}"
getBreweryByCityList = getListOfResultsFromResponse(getBreweryByCityUrl)
if len(getBreweryByCityList) == 0:
print("Sorry! Unabled to find a brewery in this city.")
return 0

getBreweryByCityResult = getBreweryByCityList[0]
# print(getBreweryByPostCodeResult)
print("Great! Here is one recommended brewery: ")
printResult(getBreweryByCityResult)

########## Winnie's code ends here #########################

########## Janice's code starts here #########################
elif choice == 4:
print("Please enter a term: ")
term = input()

getBreweryByTermUrl = f"https://api.openbrewerydb.org/v1/breweries/search?query={term}"
getBreweryByTermList = getListOfResultsFromResponse(getBreweryByTermUrl)

if len(getBreweryByTermList) == 0:
print("Sorry! Unabled to find a brewery with this term.")
return 0

getBreweryByTermResult = getBreweryByTermList[0]
# print(getBreweryByPostCodeResult)
print("Great! Here is one recommended brewery: ")
printResult(getBreweryByTermResult)

########## Janice's code ends here #########################


if __name__=="__main__":
main()

(24/02/07) Week 3 — API Explorations

I’m interested in machine learning and decided to explore OpenAI’s APIs. I checked the “Made for Developers” section, which has for major sections: Chat (to use GPT-3 to build interactive chatbots and virtual agents), Embeddings (to use GPT-3 to generate text embeddings for classification, search, and clustering), Analysis (to use GPT-3 for text summarization, synthesis, and AQ) and Fine-tuning (to fine-tune the GPT-3 model with customized datasets). The website also provides code snippets to get started easily.

I decided to explore the Chat functionality. It requires log in credentials and the website redirected to a playground where I can send requests. This page details how to hit the Assistants API in the playground environment and this page details how to use the Assistants API directly. Since this is a beta function, I need to include the special header:

OpenAI-Beta: assistants=v1

I first call the ListModels API to get a sense of available models. This would require you to provide your API key in the headers. The API keys can be generated here.

import requests

url = "https://api.openai.com/v1/models"

listModelsPayload = {}
listModelsHeaders = {
"Authorization": "Bearer $OPENAI_API_KEY"
}

listModelsResponse = requests.request("GET", url, headers = listModelsHeaders, data = listModelsPayload)

listModelsStatusCode = listModelsResponse.status_code
listModelsResult = listModelsResponse.json()
print(listModelsStatusCode)
print(listModelsResult)

The response I got is as follow and I parsed it as Json:

200
{'object': 'list', 'data': [{'id': 'dall-e-3', 'object': 'model', 'created': 1698785189, 'owned_by': 'system'}, {'id': 'dall-e-2', 'object': 'model', 'created': 1698798177, 'owned_by': 'system'}, {'id': 'gpt-3.5-turbo-0125', 'object': 'model', 'created': 1706048358, 'owned_by': 'system'}, {'id': 'text-embedding-ada-002', 'object': 'model', 'created': 1671217299, 'owned_by': 'openai-internal'}, {'id': 'tts-1-hd-1106', 'object': 'model', 'created': 1699053533, 'owned_by': 'system'}, {'id': 'text-embedding-3-small', 'object': 'model', 'created': 1705948997, 'owned_by': 'system'}, {'id': 'tts-1-hd', 'object': 'model', 'created': 1699046015, 'owned_by': 'system'}, {'id': 'davinci-002', 'object': 'model', 'created': 1692634301, 'owned_by': 'system'}, {'id': 'babbage-002', 'object': 'model', 'created': 1692634615, 'owned_by': 'system'}, {'id': 'text-embedding-3-large', 'object': 'model', 'created': 1705953180, 'owned_by': 'system'}, {'id': 'whisper-1', 'object': 'model', 'created': 1677532384, 'owned_by': 'openai-internal'}, {'id': 'gpt-3.5-turbo-16k-0613', 'object': 'model', 'created': 1685474247, 'owned_by': 'openai'}, {'id': 'gpt-3.5-turbo-16k', 'object': 'model', 'created': 1683758102, 'owned_by': 'openai-internal'}, {'id': 'gpt-3.5-turbo', 'object': 'model', 'created': 1677610602, 'owned_by': 'openai'}, {'id': 'gpt-3.5-turbo-0613', 'object': 'model', 'created': 1686587434, 'owned_by': 'openai'}, {'id': 'gpt-3.5-turbo-1106', 'object': 'model', 'created': 1698959748, 'owned_by': 'system'}, {'id': 'gpt-3.5-turbo-0301', 'object': 'model', 'created': 1677649963, 'owned_by': 'openai'}, {'id': 'tts-1-1106', 'object': 'model', 'created': 1699053241, 'owned_by': 'system'}, {'id': 'gpt-3.5-turbo-instruct', 'object': 'model', 'created': 1692901427, 'owned_by': 'system'}, {'id': 'tts-1', 'object': 'model', 'created': 1681940951, 'owned_by': 'openai-internal'}, {'id': 'gpt-3.5-turbo-instruct-0914', 'object': 'model', 'created': 1694122472, 'owned_by': 'system'}]}

I’d like to use the GPT-3.5-turbo version. I then invoked the CreateChatCompletion API through a POST request.

createChatCompletionUrl = "https://api.openai.com/v1/chat/completions"
createChatCompletionHeaders = {
    "Authorization": "Bearer $OPENAI_API_KEY",
    "Content-Type": "application/json",
}
createChatCompletionPayload = """{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello!"
        }
    ]
}"""

createChatCompletionResponse = requests.request("POST", createChatCompletionUrl, headers=createChatCompletionHeaders, data=createChatCompletionPayload)

createChatCompletionStatusCode = createChatCompletionResponse.status_code
createChatCompletionResult = createChatCompletionResponse.json()
print(createChatCompletionStatusCode)
print(createChatCompletionResult)
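
On a successful call, the assistant’s reply sits inside the choices array of the response, so it could be pulled out with something like the following (guarded on the status code, since my own request ended up failing, as shown below):

# Only read the reply if the request actually succeeded.
if createChatCompletionStatusCode == 200:
    reply = createChatCompletionResult["choices"][0]["message"]["content"]
    print(reply)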

I think I sent too many requests during testing, reached the quota, and received the following response:

429
{'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
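
One way to soften this kind of 429 response is to wrap the request in a simple exponential backoff; a minimal sketch is below (the retry count and delays are arbitrary). It only helps with temporary rate limits, though, not a fully exhausted quota like mine.

import time

# Retry the chat completion a few times, backing off after each 429.
# Note: this helps with transient rate limits, but an exhausted quota
# will keep returning 429 until the billing details are updated.
for attempt in range(3):
    retryResponse = requests.request(
        "POST",
        createChatCompletionUrl,
        headers=createChatCompletionHeaders,
        data=createChatCompletionPayload,
    )
    if retryResponse.status_code != 429:
        break
    time.sleep(2 ** attempt)  # wait 1s, then 2s between attempts

print(retryResponse.status_code)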

So I decided to explore another API that does not require credentials and is free to use: emojihub. I first tested getting a random emoji by hitting the GetRandom endpoint, which does not need any header or payload.

getRandomEmojiUrl = "https://emojihub.yurace.pro/api/random"
getRandomEmojiPayload = {}
getRandomEmojiHeaders = {}

getRandomEmojiResponse = requests.request("GET", getRandomEmojiUrl, headers=getRandomEmojiHeaders, data=getRandomEmojiPayload)

getRandomEmojiStatusCode = getRandomEmojiResponse.status_code
getRandomEmojiResult = getRandomEmojiResponse.json()
print(getRandomEmojiStatusCode)
print(getRandomEmojiResult)

In the parsed JSON result, I saw that I got a persevering face.

200
{'name': 'persevering face', 'category': 'smileys and people', 'group': 'face neutral', 'htmlCode': ['&#128547;'], 'unicode': ['U+1F623']}
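
The unicode field can be turned back into the actual character in Python, which is handy for displaying the emoji itself rather than its metadata (a small sketch reusing getRandomEmojiResult):

# Convert code points like 'U+1F623' into the actual emoji character.
emoji = "".join(
    chr(int(codePoint[2:], 16)) for codePoint in getRandomEmojiResult["unicode"]
)
print(emoji)  # prints 😣 for 'persevering face'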

In order to see all emojis, I sent the following request to the All endpoint:

getAllEmojisUrl = "https://emojihub.yurace.pro/api/all"
getAllEmojisPayload = {}
getAllEmojisHeaders = {}

getAllEmojisResponse = requests.request("GET", getAllEmojisUrl, headers=getAllEmojisHeaders, data=getAllEmojisPayload)

getAllEmojisStatusCode = getAllEmojisResponse.status_code
getAllEmojisResult = getAllEmojisResponse.json()
print(getAllEmojisStatusCode)
print(getAllEmojisResult)

It returns a huge array of all emojis.

200
[{'name': 'grinning face', 'category': 'smileys and people', 'group': 'face positive', 'htmlCode': ['&#128512;'], 'unicode': ['U+1F600']}, {'name': 'grinning face with smiling eyes', 'category': 'smileys and people', 'group': 'face positive', 'htmlCode': ['&#128513;'], 'unicode': ['U+1F601']}, {'name': 'face with tears of joy', 'category': 'smileys and people', 'group': 'face positive', 'htmlCode': ['&#128514;'], 'unicode': ['U+1F602']},...]
# not showing the whole list here, since it's too long and would crash the blog
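
Instead of dumping the whole array, a quick way to get an overview is to count how many emojis fall into each category (a small sketch over getAllEmojisResult):

from collections import Counter

# Summarize the full list by counting the emojis in each category.
categoryCounts = Counter(emoji["category"] for emoji in getAllEmojisResult)
for category, count in categoryCounts.most_common():
    print(f"{category}: {count}")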

It also allows exploring by categories and groups. I wanted to see the group “face-positive” by hitting the GetRandom endpoint:

getFacePositiveUrl = "https://emojihub.yurace.pro/api/random/group/face-positive"
getFacePositivePayload = {}
getFacePositiveHeaders = {}

getFacePositiveResponse = requests.request("GET", getFacePositiveUrl, headers=getFacePositiveHeaders, data=getFacePositivePayload)

getFacePositiveStatusCode = getFacePositiveResponse.status_code
getFacePositiveResult = getFacePositiveResponse.json()
print(getFacePositiveStatusCode)
print(getFacePositiveResult)

I got this:

200
{'name': 'smiling face with smiling eyes', 'category': 'smileys and people', 'group': 'face positive', 'htmlCode': ['&#128522;'], 'unicode': ['U+1F60A']}

Key takeaways: beta features might not be stable. Additionally, for APIs that require a paid plan, sending too many requests during testing can exhaust the quota for the current billing cycle.

(24/01/31) Week 2 — Code Sketches

My passion lies in abstract and minimal art, particularly in paintings exemplified by artists like Rothko and Kline. I am intrigued by the prospect of employing code to emulate diverse color palettes, shapes, and textures through the use of randomness and shaders. Inspired by the aesthetic principles of these influential artists, I aim to experiment with the synthesis of digital art that captures the essence of abstract and minimalistic forms using computational creativity, e.g. p5js.

Sketch 1

My self-portrait using ASCII art:

********************#*######################%%%%%%%%%%%%%%%%%%%%
+************************####################%#%%%%%%%%%%%%%%%%%
*+*********************##*##############%%%%%%%%%%%%%%%%%%%%%%%%
+*************************###########%%#%%%%%%%%%%%%%%%%%%%%%%%%
++**+******************############%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+************************####%%@@@%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
++++++++****************##%@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%%%%%%
++++++**++*************#@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%%%%
+++++*++++************@@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%%%
+++++++++*++*+******#@@@@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%%
+++++++++**********#@@@@@@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%
++++++*+++********#@@@@@@@@%@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%
+++++++++*+++*****%@@@@@@@@@@@@%%@@@%@@@@@@@@@@@%%%%%%%%%%%%%%#%
++++++++++++++***#@@@@@@@@#*#%**#@@+#@@@@@@@@@@@%%%%%%%%%%%%%%##
+++++++++++++****%@@@@@@@#--=-::...:-+*#%%@@@@@@%%%%%%%%%%%%%%#%
++=++++++++++***#@@@@@@@@*=-=*=:. :-:=**==%@@@@@%%%%%%%%%%%%####
++=+++++++++****%@@@@@@@%-. ..... :-:-==**%@@@@@@%%%%%%%%%######
==+=+++++++++**#%@@@@@@@%: ..:-:.::-=#@@@@@@%%%%%%%%#######
=+++++++++++***#%@@@@@@@%- .. .:-:..::=%@@@@@@%%%%%%%%#######
===+++++++++***#%@@@@@@@%=. .:.:=:...:-#@@@@@@@%%%%%%%%#######
====++++++++***#%@@@@@@@@*:.. ..:::::.:-*@@@@@@@@%%%%%%%########
====+==+++++**##%@@@@@@@@@*--=+++**=::=+@@@@@@@@@%%%%%##########
======++++++**##%%@@@@@@@@%+-..:---::=#@@@@@@@@@%%%%%%##########
=====++++++++*####%%@@%###*-. .::-+@@@@@@@@@@@%%%%############
========+++++*####%%%%+::.:::::::-==*@@@@@@@@@@%%%%%%###########
========++++**####%%*=:.. ..:::-:-+@@@@@@@%%%%%%%%%###########
======+=+++++**##%%+::.. ...::::-+#@@@@%%%%%%%%%############
=====++++++++*#@@@@%-:.. . ....:::-=+@@@@@@%%%%%%%%%##########
=======++*%@@@@@@@@@@%+=::.::.:..:-=:-*@@@@@@@@@%%%%%%%#########
====+=---=#%%@@@@@@@@@@@*=-::::::::-+#@@@@@@@@@@@#+##%%%%%######
===+:.....:*%%@@@@@@@@@@@@@@@@%%%%@@@@@@@@@@@@@@@@=---#%%%######
=++: :*%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@*::..*%%######
=+- =%%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#:.. -%%%#####
+= -#%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-... +%%%####
+: :##@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@+:.. #%%####

I first used this tool to convert a JPEG image into ASCII art. In this process, I configured the output dimensions to be 65 characters for both width and height. The final converted ASCII art is copied into the print statement. The code looks like this:

# self_portrait.py
print("""
********************#*######################%%%%%%%%%%%%%%%%%%%%
+************************####################%#%%%%%%%%%%%%%%%%%
*+*********************##*##############%%%%%%%%%%%%%%%%%%%%%%%%
+*************************###########%%#%%%%%%%%%%%%%%%%%%%%%%%%
++**+******************############%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+************************####%%@@@%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
++++++++****************##%@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%%%%%%
++++++**++*************#@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%%%%
+++++*++++************@@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%%%
+++++++++*++*+******#@@@@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%%
+++++++++**********#@@@@@@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%%
++++++*+++********#@@@@@@@@%@@@@@@@@@@@@@@@@@@@%%%%%%%%%%%%%%%%%
+++++++++*+++*****%@@@@@@@@@@@@%%@@@%@@@@@@@@@@@%%%%%%%%%%%%%%#%
++++++++++++++***#@@@@@@@@#*#%**#@@+#@@@@@@@@@@@%%%%%%%%%%%%%%##
+++++++++++++****%@@@@@@@#--=-::...:-+*#%%@@@@@@%%%%%%%%%%%%%%#%
++=++++++++++***#@@@@@@@@*=-=*=:. :-:=**==%@@@@@%%%%%%%%%%%%####
++=+++++++++****%@@@@@@@%-. ..... :-:-==**%@@@@@@%%%%%%%%%######
==+=+++++++++**#%@@@@@@@%: ..:-:.::-=#@@@@@@%%%%%%%%#######
=+++++++++++***#%@@@@@@@%- .. .:-:..::=%@@@@@@%%%%%%%%#######
===+++++++++***#%@@@@@@@%=. .:.:=:...:-#@@@@@@@%%%%%%%%#######
====++++++++***#%@@@@@@@@*:.. ..:::::.:-*@@@@@@@@%%%%%%%########
====+==+++++**##%@@@@@@@@@*--=+++**=::=+@@@@@@@@@%%%%%##########
======++++++**##%%@@@@@@@@%+-..:---::=#@@@@@@@@@%%%%%%##########
=====++++++++*####%%@@%###*-. .::-+@@@@@@@@@@@%%%%############
========+++++*####%%%%+::.:::::::-==*@@@@@@@@@@%%%%%%###########
========++++**####%%*=:.. ..:::-:-+@@@@@@@%%%%%%%%%###########
======+=+++++**##%%+::.. ...::::-+#@@@@%%%%%%%%%############
=====++++++++*#@@@@%-:.. . ....:::-=+@@@@@@%%%%%%%%%##########
=======++*%@@@@@@@@@@%+=::.::.:..:-=:-*@@@@@@@@@%%%%%%%#########
====+=---=#%%@@@@@@@@@@@*=-::::::::-+#@@@@@@@@@@@#+##%%%%%######
===+:.....:*%%@@@@@@@@@@@@@@@@%%%%@@@@@@@@@@@@@@@@=---#%%%######
=++: :*%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@*::..*%%######
=+- =%%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#:.. -%%%#####
+= -#%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-... +%%%####
+: :##@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@+:.. #%%####
""")

Sketch 2

The mad lib I created calculates the user’s zodiac sign. It asks the user to provide the month and day of their birthday and determines their zodiac sign following this chart. An example output:

➡️  Which month were you born in? (Enter a number between 1 and 12):  5
➡️ Which day of the month were you born? (Enter a number between 1 and 31): 8
So, your zodiac sign is Taurus ♉️.

The code looks like this:

# mad_lib.py
def getZodiac(month: int, day: int) -> str:
    if (month == 3 and 21 <= day <= 31) or (month == 4 and 1 <= day <= 19):
        return "Aries ♈️"
    elif (month == 4 and 20 <= day <= 30) or (month == 5 and 1 <= day <= 20):
        return "Taurus ♉️"
    elif (month == 5 and 21 <= day <= 31) or (month == 6 and 1 <= day <= 20):
        return "Gemini ♊️"
    elif (month == 6 and 21 <= day <= 30) or (month == 7 and 1 <= day <= 22):
        return "Cancer ♋️"
    elif (month == 7 and 23 <= day <= 31) or (month == 8 and 1 <= day <= 22):
        return "Leo ♌️"
    elif (month == 8 and 23 <= day <= 31) or (month == 9 and 1 <= day <= 22):
        return "Virgo ♍️"
    elif (month == 9 and 23 <= day <= 30) or (month == 10 and 1 <= day <= 22):
        return "Libra ♎️"
    elif (month == 10 and 23 <= day <= 31) or (month == 11 and 1 <= day <= 21):
        return "Scorpio ♏️"
    elif (month == 11 and 22 <= day <= 30) or (month == 12 and 1 <= day <= 21):
        return "Sagittarius ♐️"
    elif (month == 12 and 22 <= day <= 31) or (month == 1 and 1 <= day <= 19):
        return "Capricorn ♑️"
    elif (month == 1 and 20 <= day <= 31) or (month == 2 and 1 <= day <= 18):
        return "Aquarius ♒️"
    else:
        return "Pisces ♓️"


print("➡️ Which month were you born in? (Enter a number between 1 and 12): ", end=" ")
month = int(input())
if month < 1 or month > 12:
    raise ValueError("Invalid month provided!")

print("➡️ Which day of the month were you born? (Enter a number between 1 and 31): ", end=" ")
day = int(input())
if (
    (month in (1, 3, 5, 7, 8, 10, 12) and (day < 1 or day > 31))
    or (month == 2 and (day < 1 or day > 29))
    or (month in (4, 6, 9, 11) and (day < 1 or day > 30))
):
    raise ValueError("Invalid day in month provided!")

zodiac = getZodiac(month, day)

print(f"So, your zodiac sign is {zodiac}.")

Some thoughts on the Future 100 trends

While perusing the 100 trends, a recurrent theme caught my attention — the emphasis on physical space or objects and their capacity to evoke positive emotions in people. These trends span innovative travel experiences, rejuvenating spa encounters, enticing beauty products, and adventurous culinary explorations. It appears that individuals are increasingly drawn to the allure of uncommon experiences, seeking excitement and novelty.

Another significant trend is the widespread yearning for community and connection, particularly in the post-pandemic era. In response to the challenges posed by recent times, individuals are actively seeking new avenues to connect with others, nature, and themselves. Notably, the design of collective experiences has gained prominence, encompassing events like music festivals and participatory art, whether in physical or virtual spaces. This emerging trend reflects a deep-seated desire for people to forge connections, find fulfillment, and express their creativity through shared experiences. There is a shift towards intentional and communal activities that foster a sense of belonging and shared purpose in a world seeking reconnection.

Additionally, I was struck by the optimism inherent in people’s perspectives on their lives, seemingly in contrast to the prevailing rise in mental health issues. Despite facing various challenges in reality, there seems to be a collective optimism about the future. This optimism may stem from the belief that technology can pave the way to a better future or offer a temporary escape from problems through immersive experiences. The juxtaposition of awareness and concern about real-world issues with an optimistic outlook emphasizes the complex interplay between societal trends and individual perceptions.

(24/01/24) Week 1 — An Outlook on Creative Coding

I’ve consistently found myself captivated by innovative mediums employed for artistic expression. Creative coding, in particular, holds a thrilling potential for bridging the realms of technology and art. Having a background in both fine art and computer engineering, I’ve shifted from traditional artistic mediums to the dynamic landscape of the digital realm. My ongoing exploration and practice of creative coding have fueled my passion for pushing boundaries and exploring new possibilities. Now, I am eager to immerse myself further into the vibrant and collaborative community of creative coding.

By day, I specialize in developing machine learning models and integrating them into products. Beyond my professional pursuits, I’ve delved into the space of 3D generative art with Houdini and contributed to the NFT community by creating generative art using p5js. Notably inspired by the captivating works of artists such as zancan and teaboswell, I find myself drawn to the idea of using code for visual expressions. This exploration has sparked a strong interest in potentially pursuing an artistic trajectory, driven by the dynamic possibilities and creativity inherent in generative art.

In this course, my aspirations encompass two key objectives. Firstly, I aim to generate compelling visual artifacts, and I am hopeful that the class can offer the necessary time, space, and resources to immerse myself in the process of creating artwork. Secondly, I am eager to broaden my expertise in physical computing. While my background primarily lies in frontend and backend development, my exposure to interacting with physical devices has been limited. Recognizing the growing influence of physical computing in the immersive art sphere, I am keen on leveraging this opportunity to delve into the realm of physical computing and acquire a deeper understanding of this exciting space.
