
Useful Shell Tools/Tips for Increasing Productivity

Dec 22, 2019


Now that 2019 is coming to a close, I thought I would share some of the shell tools/tips I found really useful in 2019, particularly for searching.

The Silver Searcher

The Silver Searcher is a very useful tool for searching the contents of files. Many people prefer to use their IDE to search code bases, and that is sufficient for most use cases. Sometimes it is not, though, especially if you are comfortable with bash and want to scope your search down to a narrower set of files (which is not always easy in an IDE), or if you want to filter the results further with grep.

Faced with this, many developers fall back on conventional bash tools like find, grep, awk, and the like. For example, you could execute the following command:

grep -rnw '/path/to/somewhere/' -e 'pattern'

But I am sure that, like me, many of you struggle to remember the arguments and end up googling the command pretty much every time you want to use it. An alternative solution is to use the powerful silver searcher. Inside a certain directory, you simply execute the following command to search for a pattern inside all the files in that directory’s hierarchy:

ag <pattern>

How simple is that! The command is even just two letters (the chemical symbol of silver, if you are curious), making it very easy to remember. You might argue that you could simply create an alias for the grep-based command above to do the same thing, but the silver searcher comes with multiple advantages:

1. It is extremely fast, which comes in handy when you have a huge repository.

2. It comes with multiple options. For example, to scope your search down to Java files, you simply add the --java option:

ag --java <pattern>

3. It automatically ignores file patterns from your .gitignore and .hgignore.

4. “If there are files in your source repository you don’t want to search, just add their patterns to a .ignore file. (cough *.min.js cough)” (quoted from the silver searcher GitHub page)
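For illustration, a .ignore file is just a list of patterns, one per line, placed at the root of your repository; the particular patterns below are only examples, not from any specific project:

# .ignore
*.min.js
build/
vendor/

The silver searcher will then skip anything matching those patterns, just as it skips whatever is listed in .gitignore.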

Go ahead and install it and I hope you will like it.

Notice that the silver searcher is not the only tool of its kind; there are multiple similar tools. In particular, ripgrep looks interesting. It is written in the Rust programming language, and some benchmarks seem to suggest it is even faster than the silver searcher.
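If you want to give it a try, basic ripgrep usage looks much the same; the main difference for the Java example above is that ripgrep uses a generic --type flag instead of per-language flags (a quick sketch, not an exhaustive comparison):

rg <pattern>
rg --type java <pattern>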

Fuzzy Finder (fzf)

Fuzzy finder is an amazing tool for executing, well, fuzzy searches. If you have ever struggled to find something in your code base just because you forgot exactly how it was written, then check out fzf.

The basic idea behind fzf is extremely simple: it lets you execute fuzzy searches over a list of lines. Two things make it interesting, though. First, the list can be any list: files in a folder hierarchy, your command history (think of the last time you struggled to find a command with Ctrl-R even though you were pretty sure it was there), processes, the lines making up the contents of a set of files (i.e. searching inside files), etc. Second, the search is fuzzy, i.e. all you need to remember is a few characters of what you want to find. For example, by typing “openamazon”, I was able to retrieve this command from my history (you will see below how to use fzf to search command history):

openssl s_client -connect amazon.com 2>/dev/null | openssl x509 -noout -text | grep "DNS"

You see, I didn’t have to worry about spaces or about writing out the full openssl. Furthermore, fzf lets you use the arrow keys to move through the results, as opposed to the default Ctrl-R, which shows only one result at a time.
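To make the “any list” point concrete, here are a few pipelines you could try; fzf simply reads newline-separated entries from stdin and prints whatever you select (these are generic examples, not from the original article):

find . -type f | fzf       # fuzzy-pick a file anywhere under the current directory
ps aux | fzf               # fuzzy-search running processes
cat ~/.bash_history | fzf  # a poor man's history search (the Ctrl-R binding below is nicer)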

There are many ways this can be useful, and you can find some suggestions on the fzf GitHub page. Here are a few use cases:

1. Instead of Bash’s limited command-line history search, you can rebind Ctrl-R to run an interactive fuzzy search over your command-line history with fzf (see the screenshot below). If you are a dedicated bash user, this will save you hours, I promise. Additionally, now that you have a powerful search for your command history, you should also add the following lines to your .bash_profile/.bashrc so you don't lose any command you have executed in the past. If you are a dedicated bash user, this will save you hours, I promise again:

# Increase history size
HISTSIZE=1000000
HISTFILESIZE=1000000
Searching command-line history with fzf. This shows up when I press Ctrl-R. Notice that, unlike Bash’s default command-line history search, I can use the keyboard arrows here to navigate the results.
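If you are wondering how to get this Ctrl-R behaviour, fzf’s install script offers to set up the key bindings for you. Assuming you installed fzf via its git-based install script, it generates a ~/.fzf.bash file that you source from your .bashrc:

[ -f ~/.fzf.bash ] && source ~/.fzf.bash   # enables fzf's Ctrl-R and Ctrl-T bindings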

2. Have you ever struggled to find a file in your huge workspace just because you don’t remember its exact name? You guessed it: you can use fzf to interactively search the hierarchy of the current folder. I simply press Ctrl-T and an interactive fuzzy search shows up.

3. How about the times you struggled to find a line in your code base because you couldn’t remember that particular unique set of characters that used to take you right to the line you wanted, and your IDE cannot execute such fuzzy searches? Regular expressions might have helped in the past, but fzf is definitely easier. I write the following line in bash:

grep --line-buffered --color=never -r "" * | fzf

and it starts an interactive session to search your code base in a fuzzy way! The grep command is used to pipe the contents of all files to fzf. Below, you will see how to use the silver searcher instead of grep.

I don’t want to list more examples. Just go ahead and install fzf, be creative about integrating your recent tedious search tasks with it, and you won’t regret it. With a bit of creativity, you can come up with use cases you never imagined.

bind

This is not technically a tool, but a built-in shell command for binding keys to certain commands. Let’s go back to fuzzy finder. We could use the following command to execute a fuzzy search over the contents of the files in the current directory:

ag --nobreak --nonumbers --noheading . | fzf

This command lists all the lines of all the files in the current directory hierarchy and pipes the result to fzf, allowing you to execute a fuzzy search. This can be really useful (trust me, it can be way more powerful than your IDE’s search functionality), so let’s bind it to Ctrl-F:

bind -x '"\C-f": "ag --nobreak --noheading . | fzf"'

All you need to do now is cd into your project directory and press Ctrl-F to quickly find stuff in your code base.
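If you want the binding to survive new shell sessions, you could put the same line in your .bashrc (assuming an interactive bash setup):

# in ~/.bashrc
bind -x '"\C-f": "ag --nobreak --noheading . | fzf"'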

jq

With JSON becoming ubiquitous, it is highly unlikely you are not dealing with a lot of it in your daily programming life. You have probably often wanted to extract the values of a certain field from a JSON array and ended up using grep followed by some text processing. If your JSON is not formatted, e.g. a response from a JSON API you are debugging, you even have to reformat it before grepping it. Wouldn't it be cool if grep could understand JSON? Well, you don't need that when you have jq. jq allows you to extract a field (or multiple fields), apply filters, and much more. This is best explained by example, so I will refer you to jq's tutorial; check the manual for more information on what you can do with this exciting tool.
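To give you a taste, here is a small, hypothetical example (the users.json file and its fields are made up purely for illustration):

echo '[{"name": "alice", "id": 1}, {"name": "bob", "id": 2}]' > users.json

jq '.[].name' users.json
# "alice"
# "bob"

jq -r '.[] | select(.id > 1) | .name' users.json
# bob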

fswatch/rsync

For those of you who use cloud machines a lot (and there are many of you these days), you probably frequently have to copy files from your local machine over to the cloud to do some testing. This is very repetitive and hurts your productivity, especially when you spend a long time trying to figure out why your code change is not working, only to discover that you forgot to sync your code. An alternative is to use a file monitor that automatically executes a certain command whenever any of the files being monitored is modified. One such tool is fswatch. Here is how I use it:

fswatch -o `pwd` | xargs -n1 -I{} ./sync.sh

This command watches the current directory (my working directory) for any file changes. Whenever a change happens, it executes the sync.sh script, which can be something like this:

echo Syncing files with <myserver>
rsync -azP --exclude=build --exclude='**/*.swp' ./ <myserver>:/home/rafidka/Workspace/AWSPlayground/src/AWSPlayground
echo Sync complete...

So I can go ahead and make any change to my AWSPlayground project source code and it automatically gets synced to my cloud machine. Notice the use of the rsync command, which is really useful for syncing a local folder with a remote machine (it only copies modified files, so it is very efficient).

timeout command

Have you ever had a script take a long time because a command it uses takes a very long time before it times out, yet that command doesn’t support a timeout option? If so, the timeout command can definitely come in handy:

timeout 60 echo Hello, World

This executes whatever command follows timeout and, if it doesn’t finish within 60 seconds, kills it. Obviously, the echo returns immediately, but you get the idea: you can put any shell command after it.
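One detail worth knowing (assuming the GNU coreutils version of timeout): when the command is actually cut short, timeout exits with status 124, which you can check in a script:

timeout 5 sleep 300
if [ $? -eq 124 ]; then
    echo "the command timed out"
fi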

time command

If you want to measure the execution time of a certain command, you can use the time command, for example:

time sleep 5

real    0m5.020s
user    0m0.005s
sys     0m0.012s

It displays the real time, along with the user and sys time.

pyenv/rbenv

For Python users, you might need to install multiple versions of Python. One reason is Python 2/3 compatibility. Another is testing on a clean Python installation to make sure your code installs the required packages. Or maybe you simply want to install Python 3 or upgrade your existing installation. For this, you can use pyenv. It is very easy to install, and after that you simply execute the following command to install a new Python version (3.6.5 in this case):

pyenv install 3.6.5
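Once a version is installed, pyenv also lets you choose which one is active, either globally or per project (a quick sketch of the relevant subcommands):

pyenv versions       # list installed versions
pyenv global 3.6.5   # make 3.6.5 the default everywhere
pyenv local 3.6.5    # or pin it for the current project only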

For Ruby enthusiasts, there is a similar tool called rbenv.

GNU Parallel

If you use Bash a lot in your daily job, you have likely been hit in the past by a script that would have benefited from some parallelization, rather than waiting for its commands to finish sequentially, which considerably increases its execution time. However, parallelizing a bash script is no joke, and you will spend hours tweaking your script to ensure things like: the script correctly reports the exit code if a sub-process fails, the script doesn’t stop if a sub-process fails, etc. Unless you are a super bash user, chances are you will spend a huge amount of time getting this to work properly. Instead, you can use GNU parallel. You don’t need to rewrite your script to use the parallel command; you can simply apply it over your existing commands, as in the sketch below.
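As an illustration (these are generic GNU parallel invocations, not taken from any specific script):

parallel -j 4 gzip ::: *.log    # compress every .log file, four jobs at a time
parallel < commands.txt         # run each line of commands.txt as a job; the exit
                                # status is non-zero if any of the jobs failed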

pushd/popd

Have you ever had to cd into multiple different directories and then wished you could go back to the previous folders just like a stack? If yes, you can use the pushd and popd commands. They act exactly like the cd command, except that they push to/pop from the directory stack. Try it out!
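A minimal example of the stack behaviour:

pushd /tmp       # cd to /tmp, pushing the old directory onto the stack
pushd /var/log   # cd to /var/log; /tmp is now below it on the stack
popd             # back to /tmp
popd             # back to where you started
dirs             # shows the directory stack at any point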

Awesome

Last, but definitely not least, and probably the most important part of this article, is the awesome-shell GitHub repository. It is simply a README file containing a list of useful shell stuff. It is part of the “awesome” series of GitHub repositories, which collect useful resources for different IT areas, e.g. programming languages, gaming, editors, security, etc. If you haven't heard of these repositories before, make sure to check them out; you will most probably find things you wish you had known about at the beginning of 2019. It is not too late: you can definitely build a list of useful things to learn in 2020 to make your work much more productive.


Written by Rafid Al-Humaimidi

Software Dev with passion. Working @ Amazon in Vancouver. Doing Machine Learning in my free time. I enjoy writing about different topics, including psychology.
