BlogPost_206

Devina Mendez
Nov 3 · 4 min read
  1. Tell us about something you learned this week.
  2. What are the pros and cons of immutability?

Pros:

  • Programs with immutable objects are less complicated to think about, since you don’t need to worry about how an object may evolve over time.
  • You don’t need to make defensive copies of immutable objects when returning or passing to other functions, since there is no possibility an immutable object will be modified behind your back.
  • One copy of an object is just as good as another, so you can cache objects or re-use the same object multiple times.
  • Immutable objects are good for sharing information between threads in a multi-threaded environment since they don’t need to be synchronized.
  • Operations on immutable objects return new immutable objects while operations that cause side-effects on mutable objects usually return void. This means several operations can be chained together.
  • In languages where functions are first class values, operations like map, reduce, filter, etc. are basic operations on collections. These can be combined in many ways, and can replace most loops in a program.
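The last two points above can be seen directly in JavaScript, where arrays have map, filter, and reduce built in. A minimal sketch: each method returns a new value instead of mutating, so operations chain naturally and the original array is untouched.

```javascript
// map/filter/reduce return new values instead of mutating,
// so operations chain without intermediate variables.
const prices = [5, 12, 8, 30];

// Double each price, keep only those under 25, then sum them.
const total = prices
  .map(p => p * 2)                 // [10, 24, 16, 60]
  .filter(p => p < 25)             // [10, 24, 16]
  .reduce((sum, p) => sum + p, 0); // 50

// The original array is untouched.
console.log(prices); // [5, 12, 8, 30]
console.log(total);  // 50
```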

Cons:

  • Cyclic data structures such as graphs are difficult to build. If you have two objects which can’t be modified after initialization, how can you get them to point to each other?
  • Allocating lots and lots of small objects rather than modifying ones you already have can have a performance impact. Usually the complexity of either the allocator or the garbage collector depends on the number of objects on the heap.
  • Naive implementations of immutable data structures can result in extremely poor performance. For instance, concatenating many immutable strings (as in Java) is O(n²) when the best algorithm is O(n).

3. How can you achieve immutability in your own code?

In JavaScript, no string methods change the string they operate on; they all return new strings. The reason is that strings are immutable: they cannot change, so we can only ever make new strings. Numbers are immutable too.
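For your own objects, one common approach (a sketch, not the only option) is `Object.freeze`, which makes an object shallowly immutable:

```javascript
// Strings are immutable: methods return new strings.
const name = "austin";
const upper = name.toUpperCase();
console.log(name);  // "austin" (unchanged)
console.log(upper); // "AUSTIN"

// Object.freeze makes an object (shallowly) immutable.
const point = Object.freeze({ x: 1, y: 2 });
try {
  point.x = 99; // ignored in sloppy mode, throws in strict mode
} catch (e) {
  // TypeError under "use strict" / ES modules
}
console.log(point.x);                // still 1
console.log(Object.isFrozen(point)); // true
```

Note that freezing is shallow: nested objects inside a frozen object can still be mutated unless they are frozen as well.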

4. What are Divide and Conquer algorithms? Describe how they work. Can you give any common examples of the types of problems where this approach might be used?

Divide and conquer is an algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.

This divide-and-conquer technique is the basis of efficient algorithms for all kinds of problems, such as sorting (e.g., quicksort, merge sort), multiplying large numbers, finding the closest pair of points, and syntactic analysis.

A real-world analogy is routing mail: sort it by state, then by city, then by street. Binary search is another classic algorithmic example.
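Binary search shows the idea in miniature: each step discards half of a sorted input, solving a smaller instance of the same problem. A minimal sketch:

```javascript
// Binary search: divide the sorted array in half each step.
function binarySearch(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] === target) return mid; // found
    if (arr[mid] < target) lo = mid + 1; // search the right half
    else hi = mid - 1;                   // search the left half
  }
  return -1; // not found
}

console.log(binarySearch([2, 5, 8, 12, 16], 12)); // 3
```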

5. How do Insertion sort, Heapsort, Quicksort, and Merge sort work?

Insertion sort: works through the pile one item at a time, comparing each new item against the already-sorted items and shifting larger ones over until the new item sits in its correct location. The sorted region grows by one item with each pass.
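A minimal JavaScript sketch of insertion sort (copying the input rather than mutating it, to stay in the spirit of the immutability discussion above):

```javascript
// Insertion sort: grow a sorted prefix, inserting each new
// element into its correct position within it.
function insertionSort(arr) {
  const a = arr.slice(); // copy: don't mutate the input
  for (let i = 1; i < a.length; i++) {
    const current = a[i];
    let j = i - 1;
    // Shift larger elements right to make room for `current`.
    while (j >= 0 && a[j] > current) {
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = current;
  }
  return a;
}

console.log(insertionSort([5, 2, 4, 1])); // [1, 2, 4, 5]
```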

Heapsort: divides its input into a sorted and an unsorted region, and iteratively shrinks the unsorted region by extracting the largest element and moving it to the sorted region. It improves on selection sort by using a heap data structure, rather than a linear-time search, to find the maximum.
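The two phases, building the heap and then repeatedly extracting the maximum, can be sketched as:

```javascript
// Heapsort sketch: build a max-heap, then repeatedly swap the
// root (largest) into the sorted region at the end.
function heapsort(arr) {
  const a = arr.slice();
  const siftDown = (start, end) => {
    let root = start;
    while (2 * root + 1 <= end) {
      let child = 2 * root + 1; // left child
      if (child + 1 <= end && a[child + 1] > a[child]) child++; // larger child
      if (a[root] >= a[child]) return;
      [a[root], a[child]] = [a[child], a[root]];
      root = child;
    }
  };
  // Phase 1: heapify the whole array.
  for (let i = Math.floor(a.length / 2) - 1; i >= 0; i--) {
    siftDown(i, a.length - 1);
  }
  // Phase 2: shrink the unsorted region one element at a time.
  for (let end = a.length - 1; end > 0; end--) {
    [a[0], a[end]] = [a[end], a[0]]; // move max to sorted region
    siftDown(0, end - 1);
  }
  return a;
}

console.log(heapsort([4, 10, 3, 5, 1])); // [1, 3, 4, 5, 10]
```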

Quicksort: first picks a pivot element, then partitions the array into two smaller sub-arrays: the elements lower than the pivot and the elements higher than it. Quicksort then recursively sorts the sub-arrays.
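A short sketch using the first element as the pivot (real implementations partition in place and choose pivots more carefully, but this version keeps the divide-and-conquer structure visible):

```javascript
// Quicksort sketch: pick a pivot, partition into low/high,
// recursively sort each side. (Not in-place, for clarity.)
function quicksort(arr) {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr;
  const low = rest.filter(x => x < pivot);   // elements below the pivot
  const high = rest.filter(x => x >= pivot); // elements at/above the pivot
  return [...quicksort(low), pivot, ...quicksort(high)];
}

console.log(quicksort([9, 3, 7, 1, 8])); // [1, 3, 7, 8, 9]
```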

Merge sort: takes an array and splits it into two halves (a left array and a right array), recursively sorting each half. The two sorted halves are then passed to a merge() function, which repeatedly compares the values at the front of each array and appends the smaller one to a new array. When one half is exhausted, the remainder of the other is appended, and the combined, now-sorted array is returned.
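The split-then-merge structure described above can be sketched as:

```javascript
// Merge two already-sorted arrays by repeatedly taking the
// smaller front element.
function merge(left, right) {
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  // Append whatever remains of the exhausted side's partner.
  return out.concat(left.slice(i), right.slice(j));
}

// Merge sort: split in half, recurse, then merge the halves.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

console.log(mergeSort([6, 2, 9, 4])); // [2, 4, 6, 9]
```

Using `<=` in the merge comparison is what makes this sort stable: equal elements keep their original relative order.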

6. What are the key advantages of Insertion Sort, Quicksort, Heapsort and Mergesort? Discuss best, average, and worst case time and memory complexity.

Insertion Sort: for nearly sorted arrays, performance is very near O(n) with a very low constant; average and worst cases are O(n²). It is stable, in-place (O(1) extra memory), and is used as a finishing run for many sorting algorithms.

Quicksort: high performance in practice (O(n log n) best and average case, but O(n²) in the worst case with bad pivot choices), easy implementation, and it combines well with caching and virtual-memory mechanisms. It sorts in place, using O(log n) stack space on average.

Heapsort: efficiency (guaranteed O(n log n) time in the best, average, and worst cases), minimal memory usage (in-place, O(1) extra), simplicity, and consistency. It is not stable.

Mergesort: O(n log n) asymptotic complexity in the best, average, and worst cases; can be used for external sorting; is highly parallelisable; and can be used to implement a stable sort. Its main cost is the O(n) extra memory needed for the merge step.

7. Explain the difference between mutable and immutable objects.

Mutable object — You can change the states and fields after the object is created.

Immutable object — You cannot change anything after the object is created.

8. What is an example of an immutable object in JavaScript?

  • Numbers and strings (in fact, all primitive values are immutable).

Examples of mutable objects: objects, arrays, functions, classes, sets, and maps.
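The difference is easiest to see with two names bound to the same value. A short sketch:

```javascript
// Primitives are immutable: "changing" a string makes a new one.
let s = "hi";
const t = s;
s = s + "!";    // rebinds s; the original "hi" is unchanged
console.log(t); // "hi"
console.log(s); // "hi!"

// Objects and arrays are mutable: both names see the change.
const a = [1, 2];
const b = a;
b.push(3);
console.log(a); // [1, 2, 3] — mutated through the other reference
```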

Austin Coding Academy
