How I Evaluate Software to Avoid the Shiny Object Syndrome

James A. Manning
5 min read · Mar 13, 2023


I design training and instructional materials across different modes of teaching: online, in-person, and blended. I specialize in creating internal sales training, one-day seminars, and presentations for trade shows.

My go-to tool for creating these materials is Adobe Creative Cloud, but I also like to use other software like Camtasia, 3D Vista, LearnDash, and good old-fashioned PowerPoint. Today’s technology moves at lightning speed, and I’m always looking for the latest and greatest tools. I’m confident experimenting with new software; my motto is “move fast and break stuff.”

But here’s the thing: I want to avoid falling into the trap of chasing every shiny new object that pops up. That’s a surefire way to waste hours learning a new tool only to find out it doesn’t work. To avoid this, I have a set of criteria that software must meet before I even consider trying it out, and another set of standards it must meet before I fully implement it into my workflow.

This approach helps me stay productive and efficient in my work. It also ensures that the tools I use align with my needs and the needs of my projects. Of course, I keep up to date with the latest trends and recommendations from my peers.

My philosophy is to stay open to new technology trends while being practical about the tools I use. This way, I can create effective and engaging training materials that meet the needs of my clients and ensure their success. In this blog post, I’m presenting a high-level overview of how I research new software applications.

The Trial
When considering a new product, I have two essential requirements. First, it must offer a free trial period or a free tier. I prefer a 30-day trial period, but I’m willing to accept 14 days. Having some level of free access to the software is essential to allow for a risk-free evaluation of its capabilities. If the user interface (UI) or user experience (UX) is subpar, I will quickly abandon it.

Minimum Viable Product
Secondly, the software must meet the minimum viable product (MVP) threshold to solve my specific problem. For instance, I recently needed a product that would allow me to showcase a 3D model of my company’s products. Initially, I tried embedding a page from Sketchfab, but the models were too large and took too long to download, which was not feasible for a salesperson’s quick pitch.

After trying out different applications, I settled on 3D Vista. It’s an app designed for creating virtual tours of homes, but it was the only one flexible enough to build more than virtual tours. Best of all, it was only $500, which was well within my budget.

The 20-Hour Rule
When I reach the final evaluation stage, I apply the 20-hour rule coined by Josh Kaufman. This rule states that it takes at least 20 hours of practice to overcome the “frustration barrier” when learning something new. By practicing for at least 20 hours, you reach a point where you can self-correct and better understand the concept. After the 20-hour trial, I decide whether to continue using the product or give up on it.

To give myself ample time to evaluate the MVP and practice for at least 20 hours, I make sure the software I’m considering offers both a free trial period and a monthly subscription. That way I can complete the evaluation, first at no risk and then at low cost, within about 45 days: a 30-day free trial plus a couple of weeks on a monthly plan.
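To make this funnel concrete, here’s a minimal sketch of the gates as a checklist in code. Everything in it, the type, the field names, and the helper function, is hypothetical and invented purely for illustration; it isn’t part of any tool mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical record describing a tool under evaluation."""
    name: str
    trial_days: int           # length of the free trial, 0 if none
    has_free_tier: bool
    monthly_plan: bool        # can I pay month-to-month after the trial?
    solves_mvp_problem: bool  # does it clear my MVP threshold?
    practice_hours: float     # hours logged toward the 20-hour rule

def passes_evaluation(tool: Candidate) -> bool:
    # Gate 1: risk-free access, meaning a 14+ day trial or a free tier.
    if tool.trial_days < 14 and not tool.has_free_tier:
        return False
    # Gate 2: it must actually solve the specific problem at hand (MVP).
    if not tool.solves_mvp_problem:
        return False
    # Gate 3: a monthly plan keeps the 20-hour stage low cost.
    if not tool.monthly_plan:
        return False
    # Gate 4: the 20-hour rule, enough practice to judge the tool fairly.
    return tool.practice_hours >= 20

# Example: a 30-day trial, a solved problem, a monthly plan, and 25
# logged hours clear every gate.
tool = Candidate("example-tool", trial_days=30, has_free_tier=False,
                 monthly_plan=True, solves_mvp_problem=True,
                 practice_hours=25)
print(passes_evaluation(tool))  # True
```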

The Hard Pass
After a rigorous evaluation process, only about 3% of software products make it to the final phase. At this stage, I question my assumptions about the software and challenge myself to determine whether I need it. I ask myself if the software can significantly improve my workflow or enable me to do something new that enhances my efficiency.

In the past, most tools I evaluated were Adobe Creative Cloud competitors. I’ve tried many and have yet to stray from Creative Cloud, thanks to its range of tools and the dynamic linking between applications. Recently, however, I’ve expanded my workflow to include 3D Vista and Blender, which sit outside the Adobe ecosystem, because they allow me to create more dynamic and engaging content.

What audience benefits from the software, and how?

According to Dr. Will Thalheimer, every presentation should support Engagement, Learning, Remembering, and Action. When evaluating new software, I ensure that it addresses at least one of these aspects for the three key audiences I serve: the sales team, learners, and buyers. For the sales team and buyers, the software must improve engagement; for learners, it must improve both engagement and learning outcomes. No matter how great the tool may be, I will shelve it if it fails to meet these criteria.

I also look for additional pain points the software may address. A tool can often do more than I realize, and it is always a win when you accidentally solve a problem you didn’t know you had.

Next, I look for my blind spots and question my assumptions. To help with this, I bring in another set of eyes before making a final decision. It’s easy to get fixated on the “coolness” of a product and overlook which essential features are or aren’t there. That second person is there to say, “yeah, but,” and consider all angles to ensure the tool meets the needs of all three audiences.

What level of friction is there with implementation?

Another critical question I consider when evaluating software is how much friction implementation will create. This requires me to view the tool from a different point of view: that of the end users. For instance, when implementing VR and AR for the sales team, I could have tethered the VR unit to a laptop for better visuals. While that would have improved the quality of the experience, I knew the sales team wouldn’t want to lug around a VR headset and connect it to a laptop every time a customer wanted to see it.

What impact will AI have on my model?

The rapid advancement of artificial intelligence has led to the development of many different tools for various business roles. As these tools become more prevalent, it’s essential to have strict guidelines for evaluating which ones make it into the workflow and which do not.

Using my wireframe, I evaluated several AI tools and ultimately selected ChatGPT and ElevenLabs as the only two ready for implementation into my workflow.

ElevenLabs is an AI voice tool that includes voice cloning. I cloned the voices of coworkers who do voiceover work for our videos, so instead of setting aside a day to film, I can pair a cloned voice with a script and generate the voiceover. This has significantly improved our video production capacity.
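For context, here’s a minimal sketch of that generation step using ElevenLabs’ public text-to-speech REST endpoint. Treat it as an outline rather than my production setup: the API key, voice ID, script, and output file name are placeholders, and the exact request options may differ from what the current API accepts.

```python
import requests

# Placeholders: substitute a real API key and a cloned voice's ID.
API_KEY = "your-elevenlabs-api-key"
VOICE_ID = "your-cloned-voice-id"

def generate_voiceover(script: str, out_path: str = "voiceover.mp3") -> None:
    """Send a script to the text-to-speech endpoint and save the audio."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    headers = {
        "xi-api-key": API_KEY,
        "Content-Type": "application/json",
        "Accept": "audio/mpeg",
    }
    response = requests.post(url, headers=headers, json={"text": script})
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # the response body is the rendered audio

generate_voiceover("Welcome to today's product walkthrough.")
```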

Final Take
Creating and adhering to a system for evaluating software is crucial for content creators looking to streamline their workflow. It allows us to focus on the tools that truly meet our needs and helps us save time and money on the ones that don’t.

By creating a system to evaluate software, we can measure its effectiveness and improve our workflow accordingly. An evaluation system saves us time and money and allows us to consistently produce high-quality content that meets the needs of our audiences.
