3+7 high-potential product ideas

The ‘Smart Handshake’, which seamlessly identifies the participants, plus 9 ideas for apps and startups from my personal backlog

1. The ‘Smart Handshake’

Have you ever had trouble remembering names after a business or social handshake? If so, check out this seamless solution!

image: pixabay

Imagine this scenario: two users shake hands in a business setting and, with no further interaction or activity, both receive a notification about each other, with name, photo, location, context, and a link to the other person’s LinkedIn or another social profile!

The core: an app that uses the accelerometer, gyroscope, and motion-detection capabilities to identify the ‘universal’ pattern of a handshake.
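As a rough illustration of that detection step, here is a minimal Python sketch (not part of the original idea’s spec): it treats a window of vertical-axis accelerometer readings as a handshake when the signal oscillates at a plausible ‘pump’ rate for a plausible duration. All thresholds are illustrative guesses, not tuned values.

```python
from statistics import mean

def looks_like_handshake(samples, sample_rate_hz=50):
    """Heuristic handshake detector over a window of accelerometer readings.

    `samples` is a list of vertical-axis acceleration values (m/s^2).
    All thresholds below are illustrative assumptions.
    """
    duration_s = len(samples) / sample_rate_hz
    if not 0.5 <= duration_s <= 3.0:      # a typical shake lasts ~1-2 s
        return False

    baseline = mean(samples)
    detrended = [s - baseline for s in samples]

    # Count zero crossings: each up-down pump crosses the baseline twice.
    crossings = sum(1 for a, b in zip(detrended, detrended[1:]) if a * b < 0)
    pumps_per_second = crossings / 2 / duration_s

    # A handshake pumps the hand roughly 2-6 times per second, with
    # noticeable amplitude (again, an illustrative threshold).
    amplitude = max(detrended) - min(detrended)
    return 2.0 <= pumps_per_second <= 6.0 and amplitude > 3.0
```

A production detector would use a trained classifier over all three axes plus the gyroscope, as the article suggests; this window-based heuristic just shows the shape of the problem.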

A business handshake between two people who have installed the app on their smartphone or wearable triggers the following sequence of events:

  1. The app uses the accelerometer and related sensors to identify the ‘universal’ movement of hands during a typical business handshake: based on patterns extracted from a large volume of recorded handshake events (movement data), the system can recognize the gesture from its acceleration, speed, duration, and hand trajectory.
  2. When the handshake is identified with confidence, it is logged in the data store — independently for each of the users. The handshake event also contains the timestamp and location information.
  3. The system searches for other handshakes that happened at the same time and location; the closest match in time and location identifies the other party of the handshake event.
  4. At this point, both persons involved in the handshake are identified. The app can look up their social network profile links — for instance, LinkedIn or Facebook. Each user receives information about the other person’s identity and social media profile.
  5. As users continue to meet people, the app maintains a history of the new people met (names, photos, social network profiles) and — with sufficient permissions — can automatically invite or follow the other user on the default social network.
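Steps 2–4 above can be sketched as a simple matching query. In this hypothetical Python sketch (event shape, function names, and the 5-second/10-meter thresholds are all my assumptions), a logged handshake is paired with the closest event from a different user in time and space:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class HandshakeEvent:
    user_id: str
    timestamp: float      # Unix seconds
    lat: float
    lon: float

def distance_m(a, b):
    """Haversine distance in meters between two events."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def find_counterpart(event, log, max_dt=5.0, max_dist=10.0):
    """Return the other party's event: the closest logged handshake from a
    different user within `max_dt` seconds and `max_dist` meters.
    Thresholds are illustrative assumptions."""
    candidates = [
        e for e in log
        if e.user_id != event.user_id
        and abs(e.timestamp - event.timestamp) <= max_dt
        and distance_m(e, event) <= max_dist
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda e: (abs(e.timestamp - event.timestamp),
                                          distance_m(e, event)))
```

At scale the data store would index events by time bucket and geohash rather than scanning a list, but the matching criterion stays the same.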

2. DIGITAL annotations on PHYSICAL books

You are reading a physical book; it might be a novel, a technical book, or a textbook. You reach an important point or paragraph where you need help, or want to add a comment, a question, or an explanation.

image: pixabay

Imagine the following scenario:

  1. You use your smartphone to scan the paragraph or phrase of interest in the physical book
  2. The app performs OCR to extract the text from the paragraph/phrase/page you just scanned
  3. The app triggers a full-text search against a large database of books — this could be a service call, such as the Google Books API or similar.
  4. The app receives the response from the API — including the identifier of the book and the positioning: a reference to the paragraph and page.
  5. The app retrieves user-generated content and metadata about the specific paragraph/phrase/page of the identified book.
  6. The summarized user-generated content is then presented to the user via the app — possibly in an Augmented Reality mode and/or with voice support.
  7. The user can use voice, via the app, to append his/her own private or public comments to the identified paragraph of the book.

The full history is maintained for the user and the book, and is also available via a classic search experience.
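The lookup pipeline (steps 2–5) can be sketched as follows. This Python snippet is a toy stand-in: an in-memory page scan plays the role of the full-text search service (e.g. the Google Books API mentioned above), just to show the contract of snippet in, book-and-page out.

```python
def find_passage(snippet, library):
    """Locate an OCR'd snippet in a toy in-memory 'library'.

    `library` maps book_id -> list of page texts. In a real app this
    lookup would be a remote full-text search call; the linear scan
    here only illustrates the interface, not the implementation.
    """
    # OCR output is noisy, so normalize whitespace and case first.
    normalized = " ".join(snippet.split()).lower()
    for book_id, pages in library.items():
        for page_no, text in enumerate(pages, start=1):
            if normalized in " ".join(text.split()).lower():
                return {"book_id": book_id, "page": page_no}
    return None
```

A real implementation would also need fuzzy matching to tolerate OCR errors, since an exact substring test fails on a single misread character.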

3. A self-organizing ‘Do Not Disturb’ mode

Ever been in a theater, cinema, or another noise-sensitive social situation where sounds from mobile notifications can spoil the moment?

Common sense in such a situation is to set the phone to silent or ‘Do Not Disturb’ mode. Obvious as this is, not everybody does it: there are always those few who, either by mistake or out of disrespect, skip it.

What if there was a way for the audience to seamlessly self-organize?

‘The system’ could identify the situation as requiring ‘silent mode’ and notify the members of the audience to silence their phones (those who haven’t already); or, in a more aggressive scenario, it could automatically set the phones to ‘Do Not Disturb’ mode.

Mobile devices automatically enter silent mode when users join special social gatherings (a concert, a lecture, etc.). This could happen seamlessly, with no controlling system or special rules. Assume a number of people are at a particular place — within a specific radius, possibly around a known location. Each time a user sets a mobile device to ‘silent mode’, an event is triggered which sends location and mode data to a centralized data store; this database allows the identification of ‘concurrent’ transitions to ‘silent mode’ within the same radius.

Multiple human-originated transitions to ‘silent mode’ that are time-aligned and within the same radius indicate self-organizing behavior (people set their mobile phones to ‘silent mode’ at the same time, and probably for the same reason).

If this behavior is significant (as a percentage of the audience: more than x% of the people identified within the same radius and time frame), there is a clear signal that the particular situation (people arrangement + point in time + location) requires mobile devices to be in silent mode. Assuming this behavior follows particular patterns — specific days of the week, months, time slots within the day, audience size, time-frame length, etc. — the system can safely identify this location-and-time arrangement as ‘sensitive to noise’.
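The detection logic above can be sketched as a burst detector over silent-mode events. In this hypothetical Python sketch, a location is flagged as ‘sensitive to noise’ when a large enough fraction of nearby devices goes silent within a short window; the 120-second window and 30% fraction are illustrative assumptions (the article’s ‘x%’).

```python
def detect_quiet_zone(events, devices_in_radius, window_s=120, min_fraction=0.3):
    """Flag a (place, time) as 'sensitive to noise' when enough devices
    switch to silent mode together.

    `events` is a list of (device_id, timestamp) silent-mode transitions,
    already filtered to one radius/location; `devices_in_radius` is the
    total number of devices observed there. Thresholds are illustrative.
    """
    if not events or devices_in_radius == 0:
        return False
    # Slide a window over the sorted transition times and find the
    # largest burst of distinct devices going silent together.
    times = sorted(events, key=lambda e: e[1])
    best = 0
    for i, (_, start) in enumerate(times):
        devices = {d for d, t in times[i:] if t - start <= window_s}
        best = max(best, len(devices))
    return best / devices_in_radius >= min_fraction
```

Run over the historical store, the same check per (location, weekday, time slot) would yield the recurring ‘sensitive to noise’ patterns the idea describes.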

Images: pixabay



The community of Innovators and Inventors. We welcome people who are passionate about technology as the means of solving big problems. We believe in ideas and the power of online communities. Follow the Innovation Machine to discover problems worth solving and big ideas.
