Hackathon Practice and Algorithms: breaking down some intimidating parts of coding
This week I spent some time doing a “practice” hackathon and puzzling my way through algorithms. Both of these buzzwords were very intimidating to me, so they were a challenge I knew I would have to confront but was dreading.
As always in life, it ended up being a little anticlimactic in some ways. I had built up the idea of hackathons and algorithms as impossibly difficult tasks that only elite programmers, or those somehow born with brains that spit out brilliant mathematical statements, could take on early in their coding journeys. Nope, that is simply not true. Well, those individuals probably can and do, but us mere mortals and coding students can as well!
For my hackathon practice, I teamed up with some other code newbies to tackle some complex problems together within a set time limit. Unlike in a true hackathon, our problems would be quick for a senior developer to solve, likely solo and in about 15 minutes rather than our one-hour time frame. But to us they were more complicated and required us to communicate, and during this experience I felt the most confident I ever have. That was a very, very unexpected result.
It solidified some of the “soft skills” that are absolutely essential for a successful career in tech, regardless of job title. Chief among them: setting up a code plan, and convincing teammates to follow one, during a hackathon or any collaborative programming setting where you need to solve a complex problem.
This was not the first time, nor will it be the last, that I end up leaning on my background in project management and academic teaching. I was the one to keep the big picture in focus, pushing my team to really talk through and verbalize the problem to break it down BEFORE we started coding. This was not easy, because we were all excited to get to the solving part, but it is impossible to really solve a problem as a team unless we all understand what the problem means and what steps we want to take to solve it. By verbalizing it out loud, with some dictation, we ensured we all had a mutual understanding of what everyone was thinking. The project manager in me also felt the need to make sure everyone felt heard. This made the actual coding more efficient, and our divide-and-conquer problems were not difficult, since we had a plan for how to divide and conquer them. And when there were bugs, we were better able to discuss them as a team because we knew why the code was written.
So, here are my takeaways for future hackathons and a breakdown of the structure we used (with some reminders of the big picture sprinkled in for good measure).
1) Everyone has to be on the same page about the goal. It is always surprising how the same prompt or problem, or even the written steps, can be interpreted in different ways. In a timed, collaborative setting, it is critical to have everyone aligned at the start.
2) Code planning, or even just verbalizing the logic behind the goal first, is important. This is the opportunity to ensure that everyone using different phrases really means the same thing, and that everyone understands each other.
3) Then write out all the steps so the problem is broken down into increments. It’s like setting up a series of prompts or a to-do list: a series of simple problems that work together to solve the complex one.
4) Then tackle the steps one by one, in business-logic order or whatever order the team agreed upon.
5) Test increments as you go (unit tests or console.log checks).
6) Stay in continual communication throughout. One person, or the whole team, can be researching, and then everyone discusses which solution to try first.
7) All of this makes the process more efficient, ensures everyone is heard and contributes, and gets us to the solution in a way where everyone benefits (it turns into a learning experience).
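Step 5’s “test increments as you go” can be as simple as a console.log check after each small piece before moving on. A minimal sketch, where the problem (“sum the even numbers in a list”) and the function names are made up purely for illustration:

```javascript
// Hypothetical mini-problem, broken into two increments that are
// each checked before moving on to the next.

// Increment 1: keep only the even numbers.
function keepEvens(nums) {
  return nums.filter((n) => n % 2 === 0);
}
console.log(keepEvens([1, 2, 3, 4])); // check increment 1: [2, 4]

// Increment 2: add them up, reusing the tested piece above.
function sumEvens(nums) {
  return keepEvens(nums).reduce((total, n) => total + n, 0);
}
console.log(sumEvens([1, 2, 3, 4])); // check increment 2: 6
```

Testing each increment this way means that when something looks wrong, you already know which small piece to look at.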
Now onto some algorithms and concepts of object oriented programming that I’ve been working through!
Immutability: What is it and what are the pros and cons?
Immutability is a core principle in functional programming and has lots to offer to object-oriented programs as well. A mutable object is an object whose state can be modified after it is created. An immutable object is an object whose state cannot be modified after it is created.
Pros:
- Programs with immutable objects are less complicated to think about, since you do not need to worry about how an object may evolve over time. That makes the code more maintainable and easier to read across different developers on the same or different teams.
- One copy of an object is just as good as another, so you can cache objects or reuse the same object multiple times.
Cons:
- Allocating many small new objects rather than modifying existing ones can have a performance impact.
- Cyclical data structures such as graphs are difficult to build. If you have two objects which cannot be modified after initialization, how can you get them to point to each other?
That’s great, but how can we actually achieve immutability in our own code?
Use const declarations (and Object.freeze()) when creating objects. Then, to “mutate” objects, use the spread operator, Array.concat(), and similar methods to create new objects instead of modifying the original.
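A minimal sketch of those techniques in action (the object and values here are just illustrative):

```javascript
// Freeze the original so accidental mutation throws (in strict mode)
// or silently fails, rather than changing state.
const user = Object.freeze({ name: "Ada", points: 10 });

// "Mutate" by spreading into a brand-new object with the changed field.
const promoted = { ...user, points: 20 };

const scores = [1, 2, 3];
// concat returns a NEW array; the original stays untouched.
const moreScores = scores.concat(4);

console.log(user.points);       // 10 — original unchanged
console.log(promoted.points);   // 20
console.log(scores.length);     // 3
console.log(moreScores.length); // 4
```

Note that `const` only prevents reassigning the variable; it is `Object.freeze()` plus the habit of copying (spread, concat) that keeps the data itself immutable.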
What are Divide and Conquer algorithms? How do they work?
Well, we worked through some divide-and-conquer problems in our hackathon, and they seem relatively common in both coding interviews and real-world programming. A divide-and-conquer algorithm can be broken into the following three parts:
Divide: break the problem into sub-problems.
Conquer: work through the sub-problems, typically by calling the algorithm recursively, until each sub-problem is solved.
Combine: once the sub-problems are solved, combine their results into the solution of the original problem.
Some quick examples of divide-and-conquer algorithm problems include:
- Binary Search
- Quick Sort
- Merge Sort
- Integer Multiplication
- Matrix Multiplication (Strassen’s algorithm)
- Maximal Subsequence
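Binary search, the first item on that list, shows all three parts in just a few lines. Here is one common recursive way to write it:

```javascript
// Binary search over a sorted array.
// Divide: pick the middle and discard half of the range each call.
// Conquer: recurse on the remaining half until the base case.
// Combine: trivial here — the answer of the sub-problem IS the answer.
function binarySearch(sorted, target, lo = 0, hi = sorted.length - 1) {
  if (lo > hi) return -1; // base case: empty range, not found
  const mid = Math.floor((lo + hi) / 2);
  if (sorted[mid] === target) return mid;
  if (sorted[mid] < target) {
    return binarySearch(sorted, target, mid + 1, hi); // right half
  }
  return binarySearch(sorted, target, lo, mid - 1);   // left half
}

console.log(binarySearch([1, 3, 5, 7, 9], 7)); // 3
console.log(binarySearch([1, 3, 5, 7, 9], 4)); // -1
```

Because each call halves the search range, the whole thing finishes in O(log n) comparisons.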
Some algorithm problems: How do insertion sort, heap sort, quick sort, and merge sort work?
First, note that “algorithm” is, yes, a mathematical term, but it’s much easier to think of algorithms as recipes. They’re blocks of code that work together to create something. When you really think about it, you’ve likely been writing, reading, and working through algorithms long before stumbling onto the actual word “algorithm” in your coding journey. Hence my earlier statement that they are conquerable by us mere coding mortals. But they can be difficult to recall in a pressure situation, and they can also be incredibly complicated. Here are some simpler ones to get us started!
Insertion sort is a comparison algorithm that builds a final sorted array one element at a time.
It iterates through an input array, taking one element per iteration, finding the place that element belongs in the sorted portion, and placing it there.
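That description translates into a short loop. A sketch (copying the input so the original stays unmodified, in keeping with the immutability idea above):

```javascript
// Insertion sort: for each element, shift larger elements in the
// already-sorted prefix one slot right, then drop it into the gap.
function insertionSort(arr) {
  const a = [...arr]; // work on a copy
  for (let i = 1; i < a.length; i++) {
    const current = a[i];
    let j = i - 1;
    while (j >= 0 && a[j] > current) {
      a[j + 1] = a[j]; // shift right to open a gap
      j--;
    }
    a[j + 1] = current; // place the element where it belongs
  }
  return a;
}

console.log(insertionSort([5, 2, 4, 1, 3])); // [1, 2, 3, 4, 5]
```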
Quick sort is a comparison algorithm that uses divide and conquer to sort an array.
The algorithm picks a pivot element, A[x], and then rearranges the array into two subarrays:
Sub-array 1) A[p . . . x-1], in which all elements are less than A[x]
Sub-array 2) A[x+1 . . . r], in which all elements are greater than or equal to A[x]
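A simple sketch of that idea, using the first element as the pivot. Note this version builds new arrays for the two sides for readability, rather than partitioning the array in place between indices p and r as the textbook description does:

```javascript
// Quick sort: everything less than the pivot goes left, everything
// greater than or equal to it goes right, then sort each side.
function quickSort(arr) {
  if (arr.length <= 1) return arr; // base case
  const [pivot, ...rest] = arr;
  const left = rest.filter((n) => n < pivot);
  const right = rest.filter((n) => n >= pivot);
  return [...quickSort(left), pivot, ...quickSort(right)];
}

console.log(quickSort([3, 6, 1, 8, 2, 9])); // [1, 2, 3, 6, 8, 9]
```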
Heap sort is another comparison algorithm which uses a binary heap data structure to sort elements.
It divides its input into a sorted and an unsorted region. It then iteratively shrinks the unsorted region by removing the largest element and moving that into the sorted region.
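A sketch of that sorted/unsorted-region process, using an array-backed max-heap:

```javascript
// Heap sort: build a max-heap, then repeatedly swap the largest
// element (the root) to the end of the shrinking unsorted region.
function heapSort(arr) {
  const a = [...arr];

  // Sift the value at index i down until the heap property holds
  // within the first `size` elements.
  function siftDown(i, size) {
    while (true) {
      const left = 2 * i + 1;
      const right = 2 * i + 2;
      let largest = i;
      if (left < size && a[left] > a[largest]) largest = left;
      if (right < size && a[right] > a[largest]) largest = right;
      if (largest === i) return;
      [a[i], a[largest]] = [a[largest], a[i]];
      i = largest;
    }
  }

  // Build the max-heap from the bottom up.
  for (let i = Math.floor(a.length / 2) - 1; i >= 0; i--) {
    siftDown(i, a.length);
  }

  // Move the max to the end; the sorted region grows from the right.
  for (let end = a.length - 1; end > 0; end--) {
    [a[0], a[end]] = [a[end], a[0]];
    siftDown(0, end);
  }
  return a;
}

console.log(heapSort([4, 10, 3, 5, 1])); // [1, 3, 4, 5, 10]
```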
Merge sort is a comparison algorithm built on the observation that merging two pre-sorted arrays produces an array that is also sorted.
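That observation gives the whole algorithm: split the array in half, sort each half recursively, and merge the two sorted halves. A sketch:

```javascript
// Merge two already-sorted arrays into one sorted array.
function merge(left, right) {
  const out = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  // One side is exhausted; append whatever remains of the other.
  return out.concat(left.slice(i), right.slice(j));
}

function mergeSort(arr) {
  if (arr.length <= 1) return arr; // base case
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

console.log(mergeSort([8, 3, 5, 1, 9, 2])); // [1, 2, 3, 5, 8, 9]
```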
What are the three laws of algorithm recursion?
Law 1) A recursive algorithm must have a base case.
Law 2) A recursive algorithm must change its state and move toward the base case.
Law 3) A recursive algorithm must call itself, recursively.
The base case means a problem that is small enough to solve directly.
Changing state and moving toward the base case means that some of the data is modified in a way that gets us closer to that directly solvable problem.
And the algorithm calling itself is, essentially, what recursion is.
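All three laws are easy to spot in a tiny example like factorial:

```javascript
// Factorial of n, the classic recursion example.
function factorial(n) {
  if (n <= 1) return 1;        // Law 1: base case, solvable directly
  return n * factorial(n - 1); // Law 2: n - 1 moves toward the base case
}                              // Law 3: the function calls itself

console.log(factorial(5)); // 120
```

If any of the three laws were broken (no base case, or state that never shrinks), this function would recurse forever and overflow the call stack.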
I hope this helps anyone who heard the words “hackathon” and “algorithms” feel a little less intimidated, and realize that yes, they can be scary and complicated, but once we break them down, these two concepts are the same as every other part of our coding journey: a scary concept that we learn about and experience (sometimes push through), until it ultimately becomes less intimidating and perhaps even fun?! Well, at least fun outside of coding interviews.
Your friend in code,