BlueTap — The Ultimate Virtual-Reality(VR) Keyboard

This is one of my recently published IBM patents on a VR keyboard that does not require users to carry a keyboard device and lets you input information even while in motion. Forget Google Glass; even the experience of Apple’s much-rumored Apple-VR could improve multi-fold with this mother-of-all VR keyboard system and approach.

Back in 2014, while preparing slides for one of my earlier seminar talks on accessibility, Rediscovering Accessibility for Future Tech (you can read the linked post here https://www.linkedin.com/pulse/20140917025440-9377042-rediscovering-ccessibility-for-future-tech-everyone-is-affected), I hit upon the challenges and available solutions for data input systems. The concept of mobility in devices has transcended the scope of telephony and computing and has taken shape in the form of wearable devices. However, every new technological invention also gives rise to new challenges. In smart wearables, one such challenge is the input of information into the associated system.

One such challenge is that modern technologies enable people to interact with systems in distracted environments, while in motion and while multitasking, giving rise to a set of problems that were unknown in the PC era. With the rise of smart wearable devices, mobile computing and frequent access to information from the cloud while on the move, input of information becomes harder as systems shrink in size and tend towards smaller displays. Recent trends in the mobility domain indicate growth in smart wearable devices. We are witnessing a time when every technology company is trying its best to own its part of innovation in smart wearables such as smart glasses and smart watches, and this field aligns with one of IBM’s currently prioritized strategies: mobility.

In most existing smart eyewear devices (e.g. Google Glass), the input mechanism is typically voice. While this is great for giving commands to the device, it is still not great for entering longer text. Voice-based input mechanisms are also difficult to use in noisy and distracted environments, and language localisation and accent issues add to the list of problems in using voice effectively. Moreover, when it comes to productive work like drafting emails or writing a piece of programming code, voice input is not as effective as a standard keyboard-based input mechanism (found in PCs, laptops and mobiles, but missing in smart eyewear).

Using a physical keyboard or a secondary Bluetooth-enabled keyboard is possible, but it requires the user to carry a keyboard everywhere. Even if the keyboard is carried, there is no guarantee the user will have a flat surface to place it on, or that it can be used comfortably while on the move. Imagine a person waiting at a busy bus stop who wants to reply to an office email.

A set of new hardware is available in the market (e.g. Fin, Ring) that acts as a supplementary wearable to trigger pre-defined commands and provide limited input of information via finger movements and hand gestures in the air. However, none of them is effective enough to provide a keyboard solution that can be used for easy input of textual information into a wearable device like a smart eyewear or smart glass.

Also, when it comes to inputting longer texts on a smart glass/eyewear, absolutely no reliable method or system exists as of today that works for users on the move and in distracted environments.

So with these problem statements in mind, I made a list of dimensions of goodness:

  1. Effective even for typing long text into a smart eyewear system, helping the user be more productive. The user can input long emails or simply use the smart eyewear to write a program or code, even on the beach.
  2. No cognitive load on the user to remember pre-defined commands, key names, gestures etc. (unlike the wearable-ring-based command trigger systems detailed in the prior art section)
  3. Can be used effectively while on the move, distracted or multitasking (standing in a queue, at a bus station, while having dinner, while driving, walking, or at home while watching TV)
  4. No extra or supplementary hardware required along with the smart eyewear; no need for the user to carry separate input devices.
  5. A method that uses the age-old, natural human habit of processing information through the fingers
  6. Explores a new way to have a device free of any physical input accessories

With these pointers on goodness, I iterated over a conceptual design of a virtual-reality-enabled keyboard for effective data input (named BlueTap) with many innovative approaches and a pattern of gestures that can be used in an augmented virtual space, and it was filed as a patent by IBM. BlueTap is basically an input keyboard system that uses both finger-tips and finger-joints as a keyboard for smart eyewear, along with a set of gestures and methods to control the human-system interaction in real-life 3D virtual space using this keyboard system.

BlueTap uses natural hand gestures derived from the age-old human cognitive approach of counting on fingers. It also focuses on the idea that there should be no need for the user to carry separate input devices for typing long texts into the device. This lets the device act as an independent device rather than a supplement to some PC, tablet or mobile handset. The user can input long texts anywhere, anytime: on the road, while walking, or at home while watching TV. This approach explores a new way to have a device free of any physical input accessories.

The idea builds on known techniques: overlaying graphics/icons on a real camera stream on the screen, and recognizing fingers, finger tips and finger joints, which is technically feasible using OpenCV or Kinect-type technology. BlueTap maps the alphabet to the finger tips and joints of both hands and renders the characters as visible graphics over the glass screen in real time. It analyzes finger movements to infer “tap”, “double tap” and “swipe” gestures, and includes a mechanism to sequence a string of inferred gestures to provide a typical typing-style input experience.
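The gesture-sequencing step can be sketched in a few lines of code. This is a minimal illustration only: the event names, the upstream tracker and the caps-lock handling are my assumptions, not the patent's actual implementation. An upstream detector is assumed to have already mapped each tap to the character rendered at that finger tip or joint.

```python
# Minimal sketch (assumed event format) of sequencing inferred finger
# gestures into typed text. An upstream tracker is assumed to supply
# (gesture, char) events; "double_tap" toggles the caps-lock state.

def sequence_gestures(events):
    """events: iterable of (gesture, char) tuples from the tracker."""
    caps = False
    text = []
    for gesture, char in events:
        if gesture == "double_tap":
            caps = not caps          # toggle caps-lock state
        elif gesture == "tap":
            text.append(char.upper() if caps else char.lower())
        # "swipe" and other gestures would switch views; ignored here
    return "".join(text)

print(sequence_gestures([
    ("double_tap", ""), ("tap", "h"), ("tap", "i"), ("double_tap", ""),
    ("tap", "a"), ("tap", "l"), ("tap", "l"),
]))  # → HIall
```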

BlueTap proposes an effective and scalable interaction paradigm that solves the issue of text input into smart eyewear even when the user is on the move, or in distracted or noisy environments, thereby making the user more productive by allowing the use of finger tips and joints as keys to input long texts effectively, anywhere and anytime, without carrying a supplementary device (a keyboard, or a phone/tablet with a virtual keyboard on it). The invention can be used in digital smart eyewear devices (e.g. Google Glass, “Jet” by Recon, Evena’s Eyes-On Glasses), mostly in the consumer, enterprise and healthcare domains. It can be refined to integrate with any future smart eyewear product to provide a quick and comfortable input mechanism.

The interaction paradigm in BlueTap allows run-time dynamic switching between different views (a view can represent, but is not limited to, any language and type of selected keyboard layout) and between operation modes: double-handed as well as single-handed keyboard operation, in case the user is busy doing something else (e.g. holding a coffee cup, having dinner, driving, writing) or has the disability of not having one hand.

BlueTap is more than just a keyboard system, as it allows the user to interact with the eyewear system in new ways: interacting with an application menu to open any app, making a call to any phone number via a dial pad, or doing calculations via a calculator interface displayed by BlueTap.

A set of unique gestures is also part of this system, allowing the user to control the different modes/views of BlueTap, namely:

  1. Fist Gesture – Toggle among keyboard views such as alphabetic, numeric or symbol keyboard views.
  2. Palm Flip & Back Gesture – To toggle between “keyboard” and “app menu” views
  3. Thumb Down & Up Gesture – Toggle between “enable” or “disable” state of BlueTap
  4. Thumb slide across Palm gesture – To slide between single handed and double handed keyboard modes.
  5. Double Tap on Finger Gesture – Toggle the caps-lock state of keys.
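The five mode-switching gestures above can be modelled as a tiny state machine. The sketch below is illustrative only; the gesture names and state fields are my assumptions, not taken from the patent claims.

```python
# Illustrative state machine for the five BlueTap control gestures.
# Gesture identifiers and state fields are assumed, not from the patent.

class BlueTapState:
    def __init__(self):
        self.enabled = True
        self.caps = False
        self.view = "alphabet"   # alphabet / numeric / app_menu
        self.hands = "double"    # double / single

    def on_gesture(self, g):
        if g == "thumb_down_up":          # enable/disable BlueTap
            self.enabled = not self.enabled
        elif not self.enabled:
            return                        # ignore gestures while disabled
        elif g == "fist":                 # toggle keyboard views
            self.view = "numeric" if self.view == "alphabet" else "alphabet"
        elif g == "palm_flip":            # keyboard <-> app menu
            self.view = "app_menu" if self.view != "app_menu" else "alphabet"
        elif g == "thumb_slide":          # single <-> double handed
            self.hands = "single" if self.hands == "double" else "double"
        elif g == "double_tap_finger":    # caps-lock toggle
            self.caps = not self.caps

s = BlueTapState()
s.on_gesture("fist")
print(s.view)   # → numeric
s.on_gesture("thumb_slide")
print(s.hands)  # → single
```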

BlueTap implements a novel way to superimpose characters/symbols over finger-tips and finger-joints in the eyewear viewport and renders them in real time (through virtual-reality technologies) to give them a natural, tattoo-like look and feel. The whole method of using finger joints as keys, where key characters are visually rendered to create virtual visible keys, with tapping and double-tapping (to change caps lock), is novel in itself.

As far as implementation is concerned, BlueTap supports different technologies for finger detection, including but not limited to infrared sensors, thermal-imaging-based detection, body magnetism, camera-based visual image processing, additional wearable-based solutions, and/or any other technology that can detect finger tips and joints in 3D space.

BlueTap supports using both hands or a single hand to enter information, and the user can easily swap between the single- and double-hand modes. If the person needs to type with one hand, he can move from double- to single-hand mode.

This is also beneficial for physically challenged persons by providing an accessible way for data entry.

Different operational modes of the BlueTap are :

  1. Double Hand State (Default)
  2. Single Hand State

(FIG: The double-hand mode of the keyboard, before the user makes the gesture to toggle into the single-handed operational mode.)

BlueTap also supports views like an application menu or home screen, where the user can trigger different applications on the eyewear. In this mode, application icons appear instead of the alphabet, and the user can easily pick and select the app to run.

The keyboard layout is flexible and dynamic, switching between various views; for example, the user can open a numeric view or even a dialing interface.

Specific new sets of palm and hand gestures are part of the BlueTap keyboard, allowing the user to quickly move between different views as well as typing modes.

BlueTap comprises the following key components involved in generating the virtual keyboard over the human palm:

1. Activation/Deactivation of BlueTap:

A physical switch/button, voice control, or the launch of an app can trigger the activation or deactivation of BlueTap. This simply switches the BlueTap system ON or OFF.

2. Enable/Disable of BlueTap:

If the system is active, the user can disable the hand and finger monitoring by the BlueTap system. This is similar to enabling/disabling the virtual keyboard on typical mobile devices.

Different Modes of BlueTap:

1. Keyboard Mode – This is the default mode of the BlueTap system. It renders a keyboard over the user’s palm.

2. App Menu Mode – This mode renders the different app icons of the eyewear device on the palm of the user, so that the user can tap to trigger any app.

Note: Optional modes that might be introduced, for example:

3. Call Dialer Mode – A call-dialer pad layout is rendered on the palm, allowing the user to dial a number to make a call or perform a similar activity.

4. Calculator Mode – A calculator interface is rendered so that the user can easily carry out calculations.

Different Keyboard views available are:

  1. Alphabets View
  2. Numeric/Special characters view
  3. Typing Email keyboard view
  4. Coding in HTML keyboard view
  5. Chat keyboard view with smileys
  6. Different keyboard views for different languages.

The BlueTap keyboard patent details:

Publication number: US 20170090747, Filed: Sep 24, 2015, Publication Date: Mar 30, 2017, Inventor: Samir K. Dash (Bangalore) , Application Number: 14/863,832, Assignee – International Business Machines Corporation (IBM)

Samir Dash is working as Associate Design Director at IBM based in Bengaluru, India. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.

 

The Smallest Virtual Keyboard That Can Fit Into Any Wearable

The story of the smallest virtual keyboard that will make gadgets like smart eyewear and smart watches independent in data input, without the need for the user to carry a secondary data-input mechanism.

With the evolution of semiconductor technology, digital devices have become smaller in size. Keyboard layouts have improved over time to cater to the needs of new-age devices with ever-shrinking displays. But none of the keyboard solutions/concepts is useful for smart devices like wrist wearables, fitness wearables and watches, mostly due to the lack of sufficient real estate in these slimmer, real-estate-constrained devices.

(FIG: With the evolution of semiconductor technology, digital devices have become smaller in size.)

The major challenge in designing UI interaction for screen-real-estate-constrained devices is that touch-enabled UIs are operated via our finger tips and pads, which requires a minimum size for the UI elements/buttons expected to be tapped or touched to trigger actions. For touch-enabled devices, a standard recommended minimum touch area is defined to ensure the UI is usable.

For example, Apple recommends a minimum target size of 44 pixels wide × 44 pixels tall on a 3.5-inch display at 164 ppi. Similarly, in the Windows Phone UI Design and Interaction Guide (PDF), Microsoft goes further and suggests: a recommended touch target size of 9 mm/34 px; a minimum touch target size of 7 mm/26 px; a minimum spacing between elements of 2 mm/8 px; and a visual size of a UI control at 60-100% of the touch target size. Nokia’s developer resources suggest that touchable interface elements should be no smaller than the smallest average finger pad: no smaller than 1 cm (0.4″) in diameter, or a 1 cm × 1 cm square.

So, on average, the minimum size for a usable UI control is around 44 points (pixel-free units), approximately a 7 mm × 7 mm area. When a keyboard layout is designed, this minimum touchable surface area matters the most, thereby restricting us from using a keyboard-based input system on small or slim wearables like smart watches, wrist wearables or any other device with limited real estate.
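These guideline numbers are easy to sanity-check with unit arithmetic. The snippet below converts between millimetres and pixels for a given pixel density; the 96 ppi figure used to check the Microsoft pair is my assumption, chosen to make 9 mm come out at 34 px.

```python
# Sanity-checking the touch-target figures above: converting a physical
# size in millimetres to pixels for a given pixel density (ppi).

MM_PER_INCH = 25.4

def mm_to_px(mm, ppi):
    return mm * ppi / MM_PER_INCH

def px_to_mm(px, ppi):
    return px / ppi * MM_PER_INCH

# Apple's 44 px at 164 ppi is roughly 6.8 mm, close to the ~7 mm
# "smallest finger pad" figure quoted above.
print(round(px_to_mm(44, 164), 1))   # → 6.8

# Microsoft's 9 mm / 34 px pair is consistent with an assumed ~96 ppi.
print(round(mm_to_px(9, 96)))        # → 34
```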

During the past few years, many cell-phone makers came up with multiple approaches to deal with small UI areas while designing keyboards for smaller devices. One example is the T9 keyboard.

(FIG: The T9 keyboard is an example of using a unique interaction method for a keyboard on small real estate.)

When the iPhone attempted to provide a QWERTY-type keyboard, it used multiple keyboard views to accommodate all required keys. This was followed by Android and almost all touch-enabled phones.

But the evolution of devices resulted in even smaller devices with the most minimal touch-enabled displays/panels; examples include smart watches, wrist bands, medical equipment and many other smaller, slimmer devices.

This gave rise to a new problem. Now even T9 or other such keyboards do not have enough screen area to fit into these devices. The touch-enabled displays of these devices come in different physical dimensions and types: some are slim, some oval or round. For example, the main display of the Samsung Fit (slim) is 1.84 inch at 128 × 432 px. Similarly, the iWatch is around 2.5 inch.

When I initially tried to explore the existing solutions available, I bumped into Minuum, which needs at least 1.63 inches (almost the same display area as the Samsung Gear) due to its implementation, in which sub-panels appear to provide selection of a character based on the earlier selected character. So it was not useful on slim wearables, or on any touch surface below 1.63 inches.

So practically, no significant keyboard was used in wearable devices with constrained real estate. Rather, most of them relied on alternative input methods such as voice, or on a secondary, bigger device like a smartphone.

Most smart devices use voice as the major input system due to the lack of real estate to accommodate a keyboard. However, voice-based input systems have their own problems: (i) in noisy environments (e.g. outdoors, or in a crowd) it is really difficult to enter text via voice commands in an error-free way; (ii) due to variations in the speaker's accent and tone, voice-based input may not be totally effective and gives rise to scope for error. Yet new-age smart devices are mostly wearables used outdoors, frequently operated in noisy and distracted environments. Also, the limited processing power of small devices makes it a rule of thumb to process voice in the cloud rather than on the device itself, for which the device needs to be connected to the network.

(FIG: Due to lack of real estate, the primary input system of many wearable devices is mostly voice-based.)

Using voice as an input system has its own problems:

1. Voice input systems are not 100% error-free and accurate. As voices differ from person to person in pitch, tone and cultural influence, there are significant chances that such voice-recognition systems will fail at times.

2. A full-fledged voice-recognition system is resource-heavy, consumes a lot of CPU and requires heavy processing. So, practically, all of these wearable devices nowadays depend on cloud-based voice-recognition systems. This means that to use voice input you need to be connected to the internet, or you will not be able to input data into your system. Staying connected to the cloud also comes with additional issues, like high battery consumption and data charges. Power is a particular issue with smart watches and similar wearables, so this becomes critical for the user. Many companies like Apple and Google are still fighting the challenges of reducing power consumption and improving the battery life of wearable devices.

3. The third issue with voice is that it is prone to error in distracted and noisy environments. As wearable devices are expected to be used in motion and outdoors, this becomes a critical issue for effective data input.

So all of this reminds us of the good old keyboard, where data entry is a lot easier and error-free.

(FIG: Some wearable-devices use alternative approach for text inputs – use of a mobile phone as the main input system.)

Some wearable devices use an alternative approach for text input: a mobile phone as the main input system. In such scenarios, the user uses the mobile phone as the primary device and enters text using the phone keyboard. Many popular smart watches use this approach, as it is more convenient for the user than voice. Samsung Gear, Microsoft Band, Apple iWatch and Moto 360 are examples of devices packaged as secondary devices to Samsung and Windows phones.

The problem with this approach is that the smart wearable never plays the role of a standalone device; it always acts as an auxiliary device. This strictly limits the device's functionality and usage. The user is also forced to carry additional hardware, a mobile phone, to control the device and enter text.

In such cases, the smaller wearables mostly act only as read-only devices. For example, the user can read a “Shopping list” that was compiled on a phone, and can check and un-check items from the list, but he won't be able to alter the list by adding new items on the wearable itself. He needs the phone or additional hardware to make changes to the list. This kind of usage severely limits the functionality of the wearable.

So the dimension of goodness one would aspire to here looks forward to a future of human-machine interaction in which wearables and super-small display-enabled or display-less smart devices play an important role; it is highly important that we fix these major limitations by providing an easy-to-use, implementable text-input method for such systems.

Other dimensions of goodness should also take care of the following:

1. We need an effective keyboard that can work with severely real-estate-constrained devices, especially smart watches, wrist wearables and the like, for effective data entry.

2. And the solution must be: (i) compatible with different real-estate-constrained display sizes; (ii) able to work without relying on voice or the cloud; (iii) able to work standalone, without any additional hardware or secondary device like a phone; (iv) flexible enough to accommodate different languages; (v) scalable to meet different needs and work in various environments; and (vi) able to work on touch-enabled displays or panels.

So here it is, the answer to this problem: the BlueSlide keyboard, a patent assigned to IBM for a keyboard that works effectively on real-estate-constrained devices. It is also the smallest keyboard, as it can be enabled on a touch surface of a single square millimeter.

The core idea behind the “BlueSlide” keyboard is the principle that when one or more fingers are swiped across the touch-enabled area, the system records the direction and the number of fingers involved in the swipe/slide action. Based on a predefined key mapping of finger count and swipe direction, the system concludes which text-input character was intended and records it.
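In code, that core principle reduces to a lookup keyed by (finger count, direction). The mapping below is an invented placeholder layout, purely to illustrate the mechanism; the actual key map in the patent may differ.

```python
# Sketch of the BlueSlide principle: the number of fingers in a swipe
# plus its direction selects a character from a predefined key mapping.
# This KEYMAP is an illustrative placeholder, not the patented layout.

KEYMAP = {
    (1, "left"): "a", (1, "right"): "b", (1, "up"): "c", (1, "down"): "d",
    (2, "left"): "e", (2, "right"): "f", (2, "up"): "g", (2, "down"): "h",
    (3, "left"): "i", (3, "right"): "j",
    # ...further finger-count/direction pairs cover the rest of the set
}

def decode_swipe(finger_count, direction):
    """Return the character mapped to this swipe, or None if unmapped."""
    return KEYMAP.get((finger_count, direction))

print(decode_swipe(2, "up"))    # → g
print(decode_swipe(1, "down"))  # → d
```

A second keyboard view (numeric, symbols) would simply swap in a different KEYMAP, which is how multiple views can share the same tiny touch surface.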

Ergonomically, a swipe gesture is a lot easier than point-and-tap, as pointing requires attention and focus during operation and adds cognitive load on the user. Persons with shaky fingers, the elderly, and people in distracted environments or in motion (where most wearables are expected to be used) have difficulty tapping, especially in small display areas. Swiping requires less focus and accuracy than a point-and-tap element, and is easier to handle even in motion.

When I initially conceived the idea, I tried to implement it to test the concept and see if it was really effective. To build the prototype, I took a paper printout of a wearable with the display area cut out and placed it on a Samsung Note 2 phone display, so that the interaction area on the display was limited to the cut-out area: the actual area we would get on a similar wearable watch. I then ran an app implementing the concept and used my fingers to type in some characters, changing keyboard views through touch interactions like double tap. Just to note: the video shows the basic prototype, which tries to showcase how the basic principles of the invention were intended to be put to practical use. As per the final concept and the patented invention, the layout and UI elements might change based on the needs of the case where it is implemented.

When I tested the results for accuracy and speed, it turned out well on a similar amount of touch-surface real estate. There was no accuracy issue, as all characters are mapped to different finger counts and directions, which results in a fairly good amount of error-free operation.


(VIDEO: Showing the earlier version of a quick working prototype of the invention.)

The “BlueSlide” keyboard concept uses combinations of multiple fingers sliding across the touch-enabled display or panel to trigger key input to the system based on predefined key mappings. The proposed concept uses different combinations, from a minimum of one finger to any number of fingers, sliding over the touch panel. The minimum touch-enabled surface dimension can be the width of one finger-tip area, or even less.

So how does the thin touch panel count the number of fingers sliding across within a duration?

(FIG: How the thin touch panel counts the number of fingers sliding across within a duration.)

Each finger slides across the touch panel and the system records the count. There are intervals between each finger sliding consecutively across the thin touch panel within a duration.

(FIG: Using any standard smart watch dial interface (e.g. Moto 360 or iWatch), similar embodiment might look like this.)

(FIG: The image shows an impression of how this embodiment might look like on a slim-wear unit.)

This new keyboard layout solves challenges in many areas:

1. It solves the problem of text input on real-estate-constrained devices like wearables (watches, wrist wearables) or mobile devices, with or without touch-enabled displays.

2. Its implementation is simpler: it does not need to identify and track individual fingers, nor track hand orientation. As fewer parameters need to be processed by the system to map the characters, it is lightweight and effective in its implementation and can be used in smaller, less resource-consuming devices.

3. It can work on a touch panel that is only single-touch sensitive: it uses a sequence of consecutive touch inputs and their directions to simulate a multi-finger touch, mapping to a wider number of characters.

4. It is a completely unique text-input keyboard embodiment that uses directional swipe gestures of one or more fingers and does not rely on the conventional virtual-keyboard approach of tapping on characters to input text.

5. The complete keyboard solution can work on the smallest display/touch area, which can be as small as just one tap area.

6. The invention proposes a complete text-input method based on swipe gestures (using any single or multiple fingers, without requiring the identification of each finger) and an interaction paradigm to cover all character sets of any language, including special characters.

7. The embodiment suggests the use of multiple keyboard views to accommodate any number of characters, along with the interaction approach for switching between these views.

Alternate implementation of BlueSlide: using super-minimal real estate on a non-display thin touch panel.

(FIG: This is useful when in an hardware, we just want to provide a display-less touch-panel to reduce cost. )

This is useful when, in a piece of hardware, we just want to provide a display-less touch panel to reduce cost. A non-touchable display might show the key mapping, whereas the user uses a touch-panel strip (which has no display, and can deploy pressure sensitivity or any other technology to detect finger count and directions).

This implementation, even though not optimal, can be practical in super-slim devices or in devices where a touch-enabled display is not possible due to cost or other constraints.

(FIG: Simulating multi-touch using single touch supported sensor.)

The count of consecutive finger slides (and the gap between each finger) is used to determine the gesture: e.g. three fingers sliding consecutively across, in the same direction, are determined to be a 3-finger swipe in that direction.
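That grouping logic can be sketched as follows. The 250 ms gap threshold and the event format are my assumptions; the point is only that consecutive same-direction single-touch slides arriving within a short window are counted as one multi-finger swipe.

```python
# Sketch of simulating multi-finger swipes on a single-touch panel:
# consecutive same-direction slides within a short gap are grouped and
# counted as one multi-finger gesture. Thresholds are illustrative.

GAP_MS = 250  # assumed maximum gap between consecutive finger slides

def group_swipes(events):
    """events: list of (timestamp_ms, direction) single-finger slides,
    sorted by time. Returns a list of (finger_count, direction)."""
    gestures = []  # each entry: (count, direction, last_timestamp)
    for t, d in events:
        if gestures and d == gestures[-1][1] and t - gestures[-1][2] <= GAP_MS:
            count, _, _ = gestures[-1]
            gestures[-1] = (count + 1, d, t)   # extend the current group
        else:
            gestures.append((1, d, t))         # start a new group
    return [(c, d) for c, d, _ in gestures]

print(group_swipes([(0, "left"), (100, "left"), (180, "left"), (900, "up")]))
# → [(3, 'left'), (1, 'up')]
```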

BlueSlide can be used beyond smart watches and smart wrist devices. It can also be used with smart eyewear (e.g. Google Glass), where the touch panel is on the rim of the eyewear and the display is the smart eyewear's projected image. This is a new, improved addition, as in such scenarios the user typically does not directly see the touch panel of the device; rather, he focuses on the UI being displayed/projected to him.

The touch panel is situated outside the display area. While wearing the eyewear, the user can type text without needing to concentrate on any keyboard.

(FIG: The rim of the eyewear holds the touch panel, and the user can use one or multiple fingers to type as described in the invention.)


(FIG: In virtual-reality gaming devices like “Oculus”, BlueSlide can be implemented to provide an easy text-input mechanism.)

(FIG: In eyewear there can be two touch panels, so both hands can be used, reducing the number of fingers each needs.)

Another non-optimal, special type of BlueSlide implementation follows, where, to free up real estate for other UI elements like more input fields and information, the keyboard touch area is reduced even further, to around 7 mm × 7 mm (i.e. the touch area of a single finger tip) on a screen-area-constrained device. The following image shows this example, where only single-finger swipes, with an increased number of keyboard views, are used to input data into the system. Depending on the implementation, this can be reduced further to one square millimeter of touch surface.

(FIG: Alternative implementation of the keyboard that uses only one finger.)

Similarly, any number of fingers can be put to use to create alternative embodiments of the BlueSlide keyboard that work across devices of different dimensions and natures.

Read the complete Patent/invention (US 2017/0003872 A1 – Touch Encoded Keyboard) here: http://patents.justia.com/patent/20170003872

Read it in YourStory at: https://yourstory.com/read/5f95c7528f-the-smallest-virtual-keyboard-that-can-fit-into-any-wearable-

Disclaimer: Samir Dash is working as Associate Design Director at IBM based in Bengaluru, India. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.

How I Created a Software That In-turn Helped Me Creating My Graphic Book

It’s the story of Rangoli, my own custom-developed cloud-based image-editor software that helped churn out the print-ready images for a graphic book, with specific effects to set the desired nostalgic tone.

If I look back in time today, I find many memories from Rourkela, a small township around India’s first public-sector steel plant. Like any other Rourkelite, I have been a true enthusiast about my town. This passion led me to spend some time over the last few years collecting smaller yet evocative details of the events that shaped the place and the life within it. Then a time came when I gave serious thought to giving them the shape of a book. Last year I published it, titled Rourkela – The Illustrated Journey Into The Life Of The City.

As the book was supposed to be a graphic novel, I had multiple challenges. The first and foremost was getting images/photographs of the town across different times in its history. Being a small town, this was a really difficult task, as not much of the facts and history of the place had been recorded across the different aspects of life. Before the Rourkela Steel Plant was built in the 1950s, the place was a little-known tribal area within the dense forest across the hills. Even though I had collected high-level details from secondary research, they were not enough to come up with a photographic book with truthful and accurate particulars. The problem with photographs is that they are too real – there is little room for imagination when you see a bright, sharp, high-definition photograph. The less detail (including colour) an image has, the more there is to imagine; no doubt that is why a painting with less realism is always more effective at igniting emotion in the viewer than the same subject in a photograph.

(VIDEO: RANGOLI: Creative Studio – Samir’s Experiment in Image Processing over Cloud – https://www.youtube.com/watch?v=GY27CzxwPp8)

(FIG: The images in the published book – the line textured engraving effect gives the feeling of vintage nostalgia.)

As the book’s subject was the history of the town with reference to different locations within it, and especially the town’s inception in the 1950s and before, I thought it would be great to use illustration-type graphics to match the old times, especially the 50s and 60s of the town. The added challenge was that, with very few photographs or recorded details of the place available, it was difficult to build a graphical history of its locations through photographs. Secondly, due to the unavailability of details about the lifestyle and locations of the area (Rourkela was a mostly unknown remote hilly area filled with jungle and aboriginal tribes before the steel plant came up in the 50s, just a few years after India’s independence), leaving something to the readers’ imagination in the images was ideal for evoking certain emotions.

When I started looking for a theme for a graphic book on the history of a place, I explored many font types and graphic styles along with sketch patterns. I wanted the book to bring back nostalgia for a past that no longer exists. So the natural tendency was to select vintage designs of the kind popular in periodicals, books and magazines printed using older approaches like wood-cut, linocut and engraving, from a time before digital print, which never maintained the crispness of today’s printing methods and had their own printing imperfections. So I selected wood-cut style, old-style text and textured backgrounds for the background images, to give the reader the feeling of vintage nostalgia.

The next step was to look for options to produce such an effect. After a search I noticed there are some plugins for Photoshop and similar photo editors which provide similar effects. But as I am biased towards open source and use GIMP most of the time for my graphical needs, I wanted a good plugin that would provide the desired effect. There were some, yet they didn’t match the exact way I wanted to represent the images: I wanted something closely matching the local magazines of my childhood, which were printed as rather crude-looking printouts. So one of the best choices was to develop a program that would quickly produce the exact effect I wanted.

Initially I thought of learning to develop a plugin for GIMP, but that meant additional work to learn a new language and the methods to build the plugin, and I did not want to depend on simple actions either. I was also not interested in depending on, and sticking to, a commercial software package, and was not sure how long it would take me to learn my way to a solution that would in turn help me complete my book project. I was at a crossroads, where the other option was to look around for an approach that would involve less work. I realized I knew a decent level of PHP and some open-source image-processing libraries; in the earlier days of my career as a developer, I had played a lot with such libraries, driving them through shell scripts and via PHP through execs. So I decided to try this approach and build my own image-editor application that would let me upload any image and convert it to the effect I desired. Thus my pet project Rangoli came into being, to help me finish my graphic book project.

After spending two months of weekends, I quickly managed to build a working version with a feature to convert my images into the desired effect. As I used PHP, I was able to host it locally and use it across the different laptops I use at home over wifi, without needing to install anything on the machine I was using for the book project. Later I added a few more variations of the effect and built a template that allows more effects to be added to ‘Rangoli’ like plugins. One such addition was to remove either the white or the black from the image and make it semi-transparent (the same effect you get in Photoshop by setting the blending mode to ‘Multiply’).
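The white-removal effect boils down to a simple per-pixel rule. The sketch below is my reconstruction of the idea, not Rangoli’s actual PHP code: the closer a pixel is to pure white, the lower the alpha it is given, which visually matches ‘Multiply’ blending once the result is composited over another layer.

```python
def white_to_alpha(r, g, b):
    """Map an RGB pixel (0-255 per channel) to RGBA, deriving alpha from darkness.

    Pure white becomes fully transparent; pure black stays fully opaque.
    """
    alpha = 255 - min(r, g, b)  # whiteness of the pixel drives transparency
    return (r, g, b, alpha)
```

Applied over every pixel of a scanned engraving, this leaves only the dark line-work visible, ready to overlay on a textured background.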

(FIG: Rangoli Interface showing the list of effects available and the preview in center pane. Multiple effects can be queued and batch processed.)

Later I added the feature to process multiple effects one after another from a queue — much like a batch process, but through the GUI and using human-friendly language rather than the geek’s favourite commands. This helped me build a system that was scalable and cross-platform by the time I finished the book.
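The queued-effects idea can be sketched as a tiny pipeline. The structure below is assumed for illustration (the real Rangoli plugin template is not published): effects are plain functions registered under human-friendly names, and a queue of names is applied to a pixel one after another, batch-style.

```python
# Hypothetical effect registry: each effect maps an RGB tuple to an RGB tuple.
EFFECTS = {
    "invert": lambda px: tuple(255 - c for c in px),
    "darken": lambda px: tuple(c // 2 for c in px),
}

def apply_queue(pixel, queue):
    """Run the named effects in queue order, feeding each output to the next."""
    for name in queue:
        pixel = EFFECTS[name](pixel)
    return pixel
```

New effects plug in by adding one entry to the registry, which is what makes the batch GUI extensible.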

(FIG: Rangoli user-interface showing the rendered vector image with transparency, ready to be used in the book.)

Using this, I then started creating visuals for my book. I merged multiple public-domain images with photographs I had taken around the town some time back to create the desired frame, which I then processed in Rangoli to generate a print-ready vector with the old engraved, vintage-type look.

(FIG: From Left to right, different stages the source images were merged and then fed into Rangoli to get the old print type shown in right.)

One more good thing happened – in my book I was able to present dramatic scenes from the past by combining some modern-day photographs and processing them through Rangoli.

(FIG: Blurring the realism – the left images are the originals, whereas the right ones are the final processed outputs from Rangoli.)

Finally I was able to complete the book, which was described as “‘Mini India’ Rourkela coming alive through flickers of images”. Technology empowered me to meet a goal that would otherwise have been impossible, and I learned a new trick — sometimes we need to spend time building components that will help us in the long run. Even if the immediate or direct benefit is not in sight, they may act as the foundation stone of the bigger thing we desire to achieve.

You can get the book from here: http://amzn.in/9pobnWq 

(VIDEO: Experiment on the Book Form – History-cum-Travel in Comic Book Format (Book on Rourkela) – https://www.youtube.com/watch?v=zs34MxEXTmo)

What the failure of Google Glass teaches about UX


In mid-January I saw headlines making the official announcement of the death of Google Glass. I was not surprised. I knew a lot of issues had to be addressed before Glass could live up to expectations. Many of them are issues related to UX, and all of them relate to a grey area of the UX space that was never given prime consideration when designing a seminal product like Glass and many other legends.
Back in 2013, I had written a few posts on usability in the context of the social aspect of Google Glass, which was being ignored. Now, when I read articles saying “privacy concerns” were one of the many reasons for the failure, it certainly amplifies many of the design-approach concerns I had raised.

Google Glass is not an evil product, everyone agrees, and all agree that it has immense potential. However, it certainly needs a facelift from a product-design point of view, and thereby from a UX point of view.

We saw the rise and fall of Google Glass follow a pattern: with the emergence of Glass, topics related to devices infringing on personal privacy became hot cakes for tech debates. Many social scientists and human-rights activists started to see the ‘Glass’ as an evil that reminded them of George Orwell’s ‘1984’. The fear of a ‘Google Big Brother’ controlling the major share of the information world was seen as an intruder into the private aspects of ‘the public’. The “Glass Hole” incarnation of the Glass is as seminal as the product “Glass” itself, because it brought out topics like “user privacy”, “social context” and certainly what I call the “Context of the Other”.

It is not the case that Google did not spend money on user research and usability before going ahead with the concept of people wearing a glass that may change the way we interact with systems in our daily lives. Usability-wise, it is definitely a super gadget with the potential to catapult the device industry into the next century. But the new features and interaction methods were implemented in the device in a manner that is actually a decade-old approach, fit only for human-computer interaction (HCI) on smart phones and tablets, which have less tendency to hurt the sentiments of people who do not directly interact with the device while the user performs actions in a certain socio-cultural context. These sentiments can result in fear of losing privacy, cultural distrust and humiliation among the second-hand users of the device: those impacted indirectly by the device’s actions in that context.

Historically, the product-design process, while following the checks and balances of heuristics and usability models, has never given prime importance to the user’s relationship with the ‘Other’ in his environment. This is the missing piece that needs to be rediscovered and fitted into the standard usability matrix if Google is to give “Glass” a facelift and bring it back in a new incarnation that is friendlier, less intrusive on the user’s privacy, and compatible with the SX model (Socio-cultural Usability Model) which I had proposed earlier.

Socio-Cultural User Experience (SX) – the missing piece in UX


‘Socio-Cultural User Experience’ (SX) represents the aspect of Usability Design or User Experience (UX) that deals with the usability of products/software in a social context. This is the same “Context of the Other”.

Considering the ‘Others’ in the User’s Social circle:

The existing UX model does not analyze needs beyond the current user and his ‘type’ when doing a usability test: it never considers how the app/system impacts the other members of society while the target user set is using it.
For example, using a car horn is a safety measure, but using it near a hospital or school is considered antisocial and disturbing. There are many social checkpoints that bar users of a system from using it in special sociological contexts.

Criteria of a Good ‘SX’ Compatible System

Criteria of a sound usability design of an app in a socio-cultural context:

1. Universal – has design elements that are universal.
2. Ethical – follows principles and approaches that have positive ethical value.
3. Non-racial – non-biased and non-provocative attitude towards the user’s race and beliefs.
4. Respectful – towards the user’s culture, social beliefs and ethnicity.
5. Safe – its social impact is safe for the user.
6. Non-abusive – must not exploit the user or the environment he is in.
7. Common Sense – geared towards common sense; behaves and reacts to the user in a sensible way.
8. Privacy-protecting – the app’s features and interactions must protect the privacy of the user and of the other humans in his social circle.

Let’s take the case of Google Glass.

Google Glass is designed in a way that makes it more personal than a mobile handset: being a spectacle, it can become an indispensable accessory once the user gets addicted to it and replaces his conventional glasses with it.
But the camera for taking pictures can pose a problem for the user entering private areas, industrial areas, secure zones and offices where cameras are not allowed. In some parts of the world, cultural restrictions ban cameras in certain places; most of the temples in India do not allow cameras inside. Now imagine: if the user has replaced his traditional spectacles with it, he may find it difficult to manage without it in these scenarios.
So, following the SX approach in usability design, the Glass would need a “detachable camera”, so that the user can detach the camera (powering it off) while continuing to use the Glass as a conventional spectacle.
This may be just one of many features Google Glass might have, but it is enough to illustrate the approach.

Points to Focus on While Designing an SxD-Compatible System

1. Provide multiple alternative interaction methods to control the same functionality in different socio-cultural contexts.
2. The user should have total control over enabling/disabling interaction methods for different scenarios.
3. The default interaction method must follow the ‘SX’ approach.
4. Provide options for the user to switch between interaction methods as and when needed.
5. Alternative mechanisms should be provided for physically challenged users. Rethink the use of gestures and other interaction methods in the Section 508 context, as new devices with unpredictable (not necessarily negative!) interaction methods and features appear every day.
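Points 1–4 above amount to a per-context switchboard for interaction methods. The sketch below is a hypothetical illustration of that idea (the method and context names are invented): every method can be enabled or disabled per socio-cultural context, and gesture-style methods default to disabled so the user opts in deliberately.

```python
class InteractionSettings:
    """Per-context enable/disable registry for interaction methods."""

    def __init__(self, methods):
        # method name -> {context name -> enabled}; everything starts disabled,
        # matching the SX principle of a conservative default.
        self.state = {m: {} for m in methods}

    def enable(self, method, context):
        self.state[method][context] = True

    def disable(self, method, context):
        self.state[method][context] = False

    def is_enabled(self, method, context):
        return self.state.get(method, {}).get(context, False)
```

A device could consult `is_enabled("wink_to_photo", current_context)` before acting on a facial gesture, leaving the user in control of where each method is active.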

Gesture and Other Interaction Media of SxD:

The ‘Social Interaction Design’ approach has the following major facets of system interaction towards the user in a socio-usability context:
1. Facial Gestures – The selection of human-triggered facial gestures (e.g. wink, smile etc.) to activate the system or trigger any action must be judged against the canonical meaning of those gestures in the social and cultural context in which the user is going to use them. For example, in the case of Google Glass, the feature of “winking” at someone to take a photo (the gesture developed by Google Glass developer Mike DiGiovanni http://news.cnet.com/8301-1023_3-57582500-93/google-glass-code-lets-you-snap-a-photo-with-a-wink/ ) can pose a problem if the user is in India or a Middle Eastern country. Even in the western world, winking at a lady or a group of ladies (even if unintentional and without any kind of abasement) can be taken as a negative action (e.g. weakness of character) and evoke anger and misunderstanding. So even if wink-to-capture is a ‘cool feature’, in a social context SxD will suggest that the usability/interaction engineer rethink it: keep it disabled by default and give the user total freedom to use his own judgment to enable the feature in any given socio-cultural context.

(FIG: The ‘wink’ gesture developed by Google Glass developer Mike DiGiovanni allows the user to take a snap of the surroundings with just a wink of an eye.)

2. Sound Gestures – The selection of sound gestures (the use of voice or sound patterns to control the system) should be examined across different user environments. For example, blowing a whistle to activate playback on a portable music player, or to open an SMS on a cell phone, can be an interesting feature, but it becomes useless in a busy street or in a meeting room where a discussion is going on.
3. Touch-based Gestures – Touch, swipe and pinch are popular nowadays, as most tablets and smartphones offer them as user-friendly interaction methods. More devices are appearing that have no physical buttons at all; a few multi-touch gestures are enough to fully control them. However, ‘SxD’ stresses that devices must be designed and developed with interaction methods that allow alternatives to the available touch-triggered mechanisms. For example, while developing a digital medical instrument with a touch-sensitive display, the interaction methods should be carefully planned so that a surgeon can use the system without touching it, avoiding infection through contact while conducting mission-critical surgery.
4. Hand/Finger-based 3D Gestures – ‘SxD’ encourages a social analysis of the hand/finger-based gestures planned for a system: gestures should be selected or invented by carefully studying the cultural context, avoiding common daily-life gestures that are considered abusive to others. In addition, practical usage arising from the user’s environment and work culture must be considered. For example, the middle-finger gesture, commonly used by youths as a humiliating jibe, should not be used in any app expected to be popular among users from a similar demographic. But note that considering demography alone is not enough to decide on gestures.
5. Mouse/Keyboard Control – As with gesture, voice and related interaction methods, mouse, keyboard, joystick and other typical input-device-based methods should be considered within the context in which they will be used. As this group of interaction methods is very old, many standard guidelines are already in practice. However, we need to rethink them and make sure they are up to date with the ever-changing human-computer-interaction domain.

Our world needs products that are not only usable but also socially safe to use. It is high time we considered the “Other” in our social context to improve our products, and thereby our future.


Rediscovering Accessibility for Future Tech!

This is a rediscovery of “Accessibility” in the world of touch-screens and other natural interfaces. With new technological innovation, the lines between accessibility technology and technology for the masses are getting blurred. What used to be a special need is becoming a general need for mass use. Situational-disability use-cases are defining the new age of devices, wearables & smart interfaces.

It is high time we rediscovered the “accessibility” we think we have already discovered!

Linearity Matters: Rethinking eCommerce UI

“Linearity” plays a strong role in the usability of any e-commerce checkout. Many theories supporting this concept have been proven by numerous statistics, and UX sites that discuss best practices for designing the checkout process always advocate maintaining linearity. It makes sense: multiple principles in human factors indicate that most of the time, when users are “walking the path” of a multi-step process, they want to move forward. But designing only the checkout process is not enough. From the viewpoint of typical goal-oriented design, the whole shopping experience starts with the user’s objective to “find something that might influence him enough to buy”. The whole experience is a flow state that maps to the user’s mental model, in which “finding” and “buying” are the major components of buying. With the former being the “cause” and the latter the “effect”, the design of the experience should always be linear, to avoid situations where the user is distracted by something else and that flow state is broken.

If users think of your multi-step process as a straight path, then the sequence of your views must be linear; otherwise you will break people’s expectations, resulting in a bad experience and poor usability.

Traversing from user needs towards the task flow

“I need” –> “I buy”–> checkout

is equivalent to

“I need” –> “I find it ” –> “I buy”–> checkout

is equivalent to

“I need” –> “I browse for it ” –> “I search for it ” –> “I buy” –> checkout

is equivalent to

“I need” –> “I browse for it ” –> “I search for it ” –> I compare –> “I buy” –> checkout

There are two major task clusters now:

1. “I need” –> “I browse for it ” –> “I search for it ” –> I compare –> “I buy”

2. “checkout”

Note that the goal “I buy” is the logical point represented by the user’s behaviour of “adding to basket/cart”.

Meanwhile, the act of comparing products can span what is in the browsable and searchable views and what already exists in the cart (items the user has added through a previous loop in this task category). For example, you might have added an “Old Spice” deodorant to the cart and suddenly decided to go for an “Axe” that offers 10% extra at the same price (note that the user’s mind wanders about 30% of the time). So it helps to let the user stay in the loop within the first task group and then jump to checkout, while making the transition to checkout seamless. The more the mental model matches the conceptual one, the better we can indicate the user’s state in the flow and encourage him through “progression” along the linear path.
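The two task clusters and the forward-only rule can be sketched as a small state map. The step names below mirror the flows listed above; the function itself is an illustration of the linearity principle, not part of the referenced project.

```python
# Pre-checkout cluster: the user may loop freely among these steps
# (browse -> compare -> browse again ...) before committing to checkout.
PRE_CHECKOUT = ["browse", "search", "compare", "buy"]

def next_allowed(current):
    """Steps reachable from `current` without breaking linearity."""
    if current == "checkout":
        return []  # checkout is terminal: no silent backward jumps
    if current in PRE_CHECKOUT:
        # free movement within the cluster, plus the forward step to checkout
        return PRE_CHECKOUT + ["checkout"]
    return list(PRE_CHECKOUT)  # entry point: start anywhere in the cluster
```

The key property is that "checkout" is reachable from every pre-checkout step but never loops back, which is exactly what keeps the user's perceived path straight.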

Here is a sample flow that takes the benefit of the linearity as a part of the process for the experience that covers the pre-checkout and checkout process to complete the flow state.

The target of the solution is primarily a tablet, which acts as a catalyst: being a touch-enabled, swipe-gesture-controlled device, it lets the user move effortlessly between “Browse/Search” <–> Cart <–> Checkout once he has reached the entry point of the system.

Explore the complete project at
https://www.behance.net/gallery/19044315/Flip-the-Cart-Reimagining-Social-Commerce

(c) 2014, Samir Dash. All rights reserved.

Flip the Cart! : Reimagining Next in eCommerce

eCommerce generates some of the strongest buzz we come across nowadays. Especially after the success of Amazon and Flipkart, the domain is seen as a goldmine that can bring disruptive business models to increase profit and business. So what is next in eCommerce? I tried to conceptualise and re-imagine it as a socio-integrated experience. Following are some insights and sample slides that present those attempts:

 

Find the complete idea here:

https://www.behance.net/gallery/19044315/Flip-the-Cart-Reimagining-Social-Commerce
MakingSense: Reimagining the Next-generation Retail and e-Commerce Analytics Platform

In today’s ever-changing world, if we need to reimagine the next-generation analytics platform for the retail and e-commerce market, what will it look like? Well, I gave it a go. Following is a conceptual framework (I call it “MakingSense”) and a business case for an IoT-based real-time analytics framework for the retail and FMCG market, presented as a real-life business case:

Overview:

“MakingSense” is an IoT-based market-analytics platform that connects goods and products (especially non-digital goods) that behave like fast-moving goods to cloud analytics, to deliver real-time market insights and enable the required course corrections in market strategy and business decisions for product manufacturers and retail chains. This enables a direct feedback loop between consumers and the producers and sellers.

 

The problem it solves:

As per current market trends, even high-tech goods like mobiles and digital accessories are behaving like FMCG in the market. Rapid change is evident in consumer behaviour due to the influence of technology, the economy and the changing buying power of the consumer. The speedy ‘go-to-market’ approach of players has increased competition, and many local and niche competitors are posing tough challenges to bigger players in all segments, especially in emerging markets. For global players, the typical market strategy is not working as expected, and the traditional approach of collecting market insights is not sufficient to apply the necessary checks and balances to market plans in real time.

 

The four core needs that MakingSense will address are:

1. Manufacturers need market insights in real time.

2. Consumer behaviour towards tech goods also needs to be tracked in real time, to establish how closely it follows FMCG market behaviour.

3. In the retail and non-tech sectors, real-time insights are needed for goods that are not digitally connected to an analytics eco-system.

4. There is a need to look beyond traditional field immersions and surveys to get the micro-insights required for course corrections in strategy.

 

The solution and the business model:

The “MakingSense” platform will help product manufacturers and retailers gather real-time customer insights even for non-digital goods, and helps adjust customer-retention dynamics accordingly.

The platform has a self-sustaining business model that will make it grow by benefiting product manufacturers, retailers and consumers alike. The final envisioned eco-system will have a big-data-enabled management module in the cloud; an easy-to-use dashboard for product manufacturers, retailers and consumers; mobile apps; easy-to-integrate public APIs; and an SMS-enabled gateway.
Producers and sellers can register their products to generate codes specific to their product categories and, if required, use the APIs to map their existing bar-code systems. Consumers can register themselves to earn reward points, which can be used directly at registered retail chains or redeemed for gifts from the dashboard itself.

The eco-system will allow product companies and sellers to get real-time analytics by creating data points directly.

Triple advantage benefits for consumers:
Incentivising and rewarding consumers for their feedback is what will make the platform successful:
Whether or not the product a consumer gives feedback on was purchased from a registered retailer, the consumer always gets reward points or freebies from the site.
If the product was purchased from a registered retailer, the consumer gets additional discounts or reward points.
On top of that, if the product manufacturer is registered, the consumer gets further discounts!

It’s a triple advantage for the customer.

Now, this is a new strategy where the customer is prompted to buy a product from a specific manufacturer, and from a specific retailer, to get these advantages.
While the whole aim is to keep feedback and analytics running, this model also induces new competition among product manufacturers to offer discounts, bringing down the final competitive price to retain customers while the customer also benefits.

This model is disruptive in nature, and everyone benefits:

Manufacturers and retailers: reduced market-research spending, real-time analytics (the mood of the segment), customer-retention strategy formulation, and a way to attract customers to their outlets.
Consumers: a reduced home budget, plus incentives and rewards for their feedback.

:: Idea ::

 

The “MakingSense” platform will help product manufacturers and retailers gather real-time customer insights even for non-digital goods. It brings real-time insights to the retail and non-tech sectors, where goods are not digitally connected to any analytics eco-system, and looks beyond traditional field immersions and surveys to deliver micro-insights for course corrections in strategy.

The platform has a self-sustaining business model that will make it grow by benefiting product manufacturers, retailers and consumers alike. The final envisioned eco-system will have a big-data-enabled management module in the cloud; an easy-to-use dashboard for product manufacturers, retailers and consumers; mobile apps; easy-to-integrate public APIs; and an SMS-enabled gateway.

Producers and sellers can register their products to generate codes specific to their product categories and, if required, use the APIs to map their existing bar-code systems. Consumers can register themselves to earn reward points, which can be used directly at registered retail chains or redeemed for gifts from the dashboard itself.

The eco-system will allow product companies and sellers to get real-time analytics by creating data points directly.

:: Architecture ::

The idea is to build an eco-system, in multiple phases, that will have three components:

1. “MakingSense SmartCloud”: a cloud-based server that stores consumer-related data and hosts the analytics platform. It will have the following major components:

i. A big-data analytics engine that can do the necessary data mining to understand trends and formulate recommendations.

ii. Open REST APIs providing an easy way to integrate third-party systems such as retail-management systems and third-party analytics and business-process tools/apps.

iii. User-management modules with different levels of access for different roles.

iv. Reward-points management and coupon-code management systems.

v. Payment gateway, subscription and vendor management.

2. “MakingSense Portal”: a web-based portal/thin-client solution that allows consumers and product sellers/retailers to register and access their respective dashboards.

3. “MakingSense Client”: primarily mobile/device clients and consumer-facing service gateways (and optionally hardware) that consumers can use to submit their feedback.
In the multi-phased development roadmap, mobile apps will initially be the primary representative of this component. Later phases will introduce SMS-enabled gateways and custom “MakingSense” hardware, which will be cheap yet provide an easier way to collect data from the consumer.
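As an illustrative sketch of component (ii), the snippet below shows what one feedback submission to the Open REST APIs might look like. The route and field names are my assumptions for illustration only; the original design does not specify them.

```python
import json

# Assumed route on the "MakingSense SmartCloud"; purely illustrative.
FEEDBACK_ENDPOINT = "/api/v1/feedback"

def build_feedback(consumer_id, barcode, rating, comment=""):
    """Assemble the JSON body for one feedback submission.
    rating runs from 1 (bad) to 5 (great)."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    return {
        "consumer_id": consumer_id,
        "product_barcode": barcode,  # maps onto the seller's bar-code system
        "rating": rating,
        "comment": comment,
    }

body = json.dumps(build_feedback("c-1001", "8901030510397", 4, "liked it"))
print(body)
```

A mobile or SMS client would POST such a body to the endpoint; the same payload shape could back the third-party integrations mentioned above.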

 

:: User Journeys ::

For Sellers (Retailers/Producers)-

1. Registers an account at the ‘MakingSense Portal’.
2. Gets API access to connect their card scanners, billing machines, user databases, inventory and product catalogues.
3. Gets products mapped to reward points, discounts or freebies.
4. Product manufacturers can offer discounts on their products through the API by registering the products.
5. Product manufacturers can register their products to get insights about them across the globe.
6. Retailers get insights for their retail chains: anything sold through their systems.
7. Retailers can buy insights (not customer details) for other regions/segments.
8. Product manufacturers can buy insights (not customer details) for competing brands and similar products.

For Consumer –

1. Registers at the Portal; if the consumer is already in a retailer’s database, the account can be mapped to this system.
2. Connecting a social-network account (e.g. Facebook) is encouraged, with some additional reward for it.
3. Downloads the free “MakingSense” mobile app and starts using it.
4. Scans the barcode of any product purchased and rates the product.
5. For each piece of feedback on a different product (or at least a different batch of the same product within a specific time range), the consumer gets reward points, discount codes or freebie coupon codes.
6. If a retailer or product manufacturer has sponsored rewards, accumulated points can be redeemed at their store; otherwise they can be redeemed on the “MakingSense” dashboard at the portal.
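The reward mechanics behind this journey can be sketched as a small accrual rule. The point values below are made-up placeholders, since the original does not fix them; only the triple-advantage structure comes from the text.

```python
def reward_points(retailer_registered=False, manufacturer_registered=False,
                  base=10, retailer_bonus=15, manufacturer_bonus=25):
    """Accrue points for one piece of feedback (triple advantage):
    every feedback earns the base amount, a registered retailer adds
    a sponsored bonus, and a registered manufacturer adds a further one."""
    points = base
    if retailer_registered:
        points += retailer_bonus
    if manufacturer_registered:
        points += manufacturer_bonus
    return points

print(reward_points())                          # feedback alone: 10
print(reward_points(retailer_registered=True))  # feedback + retailer: 25
print(reward_points(True, True))                # full triple advantage: 50
```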

:: Benefit ::

For Sellers (Retailers/Producers)-

Emerging markets are the next potential, and even high-tech/high-end goods are behaving like FMCG goods. Micro-insights are required to plan market-strategy adjustments, and conventional large-scale surveys will not help much because of their high cost and the effort and time of field immersion. Go-to-market is accelerating, so time has value: Micromax makes a phone in 70 days.

A new approach is therefore required to gather data in real time and to connect even goods that are not digital; IoT will help in accessing this data rapidly.

Sellers get the following insights from the solution:

Customer insight
————————
who bought
how many times they bought
in which part of the year they bought the most

Consumer insight
————————
Did they like it: great, good or bad?
What else do they like in the same product line?
What similar products do they use?
Which similar products do they like most? Which brands?

Collective consumer insights
————————
How many such buyers in the region are potential buyers?
What is the collective preference?

Predictive market forecast
————————
In which part of the year is consumption of the product going to increase? (Manufacture more and ensure smooth shelf-space management.)
Who are the competitor brands?
How is a product behaving in a segment?
What is the value curve, and what should the status be?

If the consumer connects a social-network account (Facebook) or a retail shopper’s card, you also get location, age, gender, type, etc.: a new gold mine in which everyone will want to invest.
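Most of the customer-level insights above are simple aggregations over the purchase data points the platform collects. A toy version, with made-up records, could look like this:

```python
from collections import Counter

# Toy purchase log as (consumer_id, product, month); the data is illustrative.
purchases = [
    ("c1", "soap-x", "Jan"), ("c2", "soap-x", "Jan"),
    ("c1", "soap-x", "Jun"), ("c3", "soap-x", "Jan"),
]

buyers = {c for c, _, _ in purchases}            # who bought
frequency = Counter(c for c, _, _ in purchases)  # how many times each bought
# in which part of the year they bought the most:
peak_month = Counter(m for _, _, m in purchases).most_common(1)[0][0]

print(sorted(buyers), frequency["c1"], peak_month)
```

In the real platform the same aggregations would run in the big-data analytics engine over barcode-scan feedback rather than an in-memory list.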

For Consumer –

1. The consumer is rewarded for any feedback shared on any product used.
2. Apart from regular discounts and reward points, the customer can get additional discounts on purchases and freebies, which brings customer delight leading to more involvement and engagement (so it does not matter if the consumer does not buy from registered retailers or registered products).

:: Business Model ::

The business model is mostly subscription-based access to competitor analytics. Along with it, access to premium data and value-added services (customisable reports, goal alignment, market strategy, etc.) can be a major source of revenue.

Retailers get a 100% discount to access their own data, all in real time.
Retailers get a 20-30% discount when viewing their competitors’ data in the same region or segment.

Retailers not listed in the program have to pay in full to get data for a segment.
Retailers not listed in the program cannot view their own data.

Consumers get 2% bonus discounts or points for submitting feedback on every item they use.
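A minimal sketch of these access rules as a pricing function. The 25% competitor discount sits inside the stated 20-30% band, and the list price is hypothetical:

```python
def insight_price(list_price, listed, own_data, competitor_discount=0.25):
    """Price one data-access request under the rules above."""
    if not listed:
        # Unlisted retailers cannot view their own data, and pay the
        # full list price for any segment data they buy.
        return None if own_data else list_price
    if own_data:
        return 0.0  # 100% discount on a listed retailer's own real-time data
    return list_price * (1 - competitor_discount)  # 20-30% off competitor views

print(insight_price(100.0, listed=True, own_data=True))    # 0.0
print(insight_price(100.0, listed=True, own_data=False))   # 75.0
print(insight_price(100.0, listed=False, own_data=False))  # 100.0
```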

:: Market Size ::

As per the ESOMAR Global Market Research study conducted in 2011:

Global spending on market research is 32 Billion USD.
Of this, the emerging-market share is 24% ≈ 8 Billion USD.
Of this, Asia-Pacific spending alone is about 5 Billion USD.

India and China hold the major share of that 5 Billion USD:
India ≈ 40% of 5 Billion USD = 2 Billion USD.

Based on this, the first-year target is five Tier-1 cities:
assuming 30% of that spending happens in these 5 cities ≈ 0.6 Billion USD;
assuming we capture 30% of this share in the 1st year ≈ 0.2 Billion USD.

Of this addressable spend, the projected revenue is around 100 Cr INR in the 1st year.
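The sizing chain can be checked step by step (figures in Billion USD; the rounding explains the 8 and 0.2 quoted above):

```python
global_spend = 32.0             # ESOMAR 2011: global market-research spend
emerging = global_spend * 0.24  # emerging-market share: 7.68, quoted as ~8
apac = 5.0                      # Asia-Pacific spend alone
india = apac * 0.40             # India's share: 2.0 Billion USD
tier1 = india * 0.30            # five Tier-1 cities: 0.6 Billion USD
year1 = tier1 * 0.30            # first-year share: 0.18, quoted as ~0.2

print(round(emerging, 2), india, round(tier1, 2), round(year1, 2))
```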

:: Potential Competitors & Competitive Advantage ::


Amazon Dash

Dash is a product by Amazon that lets consumers order new products from the Amazon store.

Dash:

Hardware-based; the Amazon portal is available for the consumer to buy from.
Limited to the Amazon portal only.
Not a white-labelled system: Amazon uses it only for its own purposes.
It is NOT a feedback-based model; the big data only shows which segment is purchasing which product.

 

Making Sense:

A special marketing-insight platform, unique and the first of its kind.
Specially designed to handle multiple vendors, retailers and consumers, along with a reward-points/incentive-management model.
Unique analytics with predictive strategy formation.
Works across platforms, outlets and multi-channel commerce platforms, both counter-based and online.

 

SWOT Analysis

Strengths:

Cross-platform: mobile, tablet, PC, kiosk and custom hardware.
Works across platforms, outlets and multi-channel commerce platforms, both counter-based and online.
Can appeal to consumers, retailers and manufacturers.
Can be scaled from FMCG to the insurance, banking and finance sectors.

Weaknesses:

A new concept, new to the market: a disruptive business model for market research where customers are incentivised directly for their feedback.
Requires funds to maintain the incentives/rewards for end-consumers.
It remains to be proven that large-scale implementation brings meaningful results.

Opportunities:

A new, unique business model.
Regional markets are driven by fragmented retail/distribution channels; the FMCG market especially is highly unorganised.
Penetration of supermarkets (e.g. Big Bazaar) and online selling is still low, especially in India.

Threats:

Dash refocusing its platform on market research.
Market-research companies replicating this model with alternative offerings.
Retail chains building their own platforms; they would get a limited view of only their own customers, and would still need to spend more on their marketing agencies.

:: Why It’s a Killer Idea ::

Incentivising and rewarding consumers for their feedback is what will make the platform successful:
Whether or not the product a consumer gives feedback on was purchased from a registered retailer, the consumer always gets reward points or freebies from the site.
If the product was purchased from a registered retailer, the consumer gets additional discounts or reward points.
On top of that, if the product manufacturer is registered, the consumer gets further discounts!

It’s a triple advantage for the customer.

Now, this is a new strategy where the customer is prompted to buy a product from a specific manufacturer, and from a specific retailer, to get these advantages.
While the whole aim is to keep feedback and analytics running, this model also induces new competition among product manufacturers to offer discounts, bringing down the final competitive price to retain customers while the customer also benefits.

This model is disruptive in nature, and everyone benefits:

Manufacturers and retailers: reduced market-research spending, real-time analytics (the mood of the segment), customer-retention strategy formulation, and a way to attract customers to their outlets.
Consumers: a reduced home budget, plus incentives and rewards for their feedback.

As per Tim Ambler of London Business School, “marketing productivity” is measured through the following 5 points:

1. Do you routinely research consumer behaviour?

2. Do you routinely report research with financial metrics?

3. Do you compare results with those previously forecast in business plans?

4. Do you compare with the level achieved by your competitors, using the same indicators?

5. Do you adjust short-term performance accordingly?

 

All of these are taken care of in the blueprint of “MakingSense”.

Indoor Mapping in the Consumer and Retail Verticals and its Impact on UX

Mapping and localization for indoor spaces is gaining momentum in the consumer verticals of the IT and software-services industry, enabling new-age technological solutions and process re-engineering services that provide value-added features to the verticals’ consumers. Consumer data mining and data analytics are also gaining new dimensions with indoor-mapping technologies, providing new insights into the consumer’s and end-user’s psyche. This is ultimately opening doors to a better, more delightful experience for the user while shopping.

 

Technology Used for Indoor Maps

 

Although the location determination of mobile users within a building has attracted much attention lately, owing to its many applications in mobile networking (including network-intrusion detection), it remains challenging because of the complexities of indoor radio-propagation characteristics, exacerbated by the mobility of the user. Global navigation satellite systems (GPS or GNSS), which act as the benchmark for standard map-related application development, are generally not suitable for establishing indoor locations, since microwaves are attenuated and scattered by roofs, walls and other objects. Due to the signal attenuation caused by construction materials, the satellite-based Global Positioning System (GPS) loses significant power indoors, affecting the coverage that receivers require from at least four satellites. In addition, multiple reflections at surfaces cause multi-path propagation, introducing uncontrollable errors. The most popular technologies employed for indoor-mapping scenarios are therefore wireless technologies such as Wi-Fi and RFID.

Radio-frequency identification (RFID) systems use the simple concept of location indexing and presence reporting for tagged objects, which acts only as object identification. Typically, RFIDs do not report signal strengths or the various distances of single tags (or of a bulk of tags), and do not update any previously known location coordinates of the sensor or the current location of any tag. Operating such an approach requires a narrow passage to prevent tags from passing by out of range. In grid concepts, a dense network of low-range receivers may be arranged, e.g. in a grid pattern for economy, throughout the space being observed. Because of the low range, a tagged entity will be identified by only a few close, networked receivers. An identified tag must be within range of the identifying reader, allowing a rough approximation of the tag’s location. Advanced systems combine visual coverage from a camera grid with the wireless coverage used for this rough localization.
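The “rough approximation” from a reader grid can be sketched as the centroid of the readers currently detecting the tag. The grid coordinates below are illustrative; real deployments would use the surveyed reader positions.

```python
def approximate_tag_position(readers_in_range):
    """Proximity-only estimate: centroid of the low-range grid readers
    that currently identify the tag (no signal strength is used)."""
    n = len(readers_in_range)
    x = sum(p[0] for p in readers_in_range) / n
    y = sum(p[1] for p in readers_in_range) / n
    return (x, y)

# A tag heard by three neighbouring readers of the grid:
print(approximate_tag_position([(0, 0), (2, 0), (1, 3)]))  # (1.0, 1.0)
```

The error of such an estimate is bounded by the reader range, which is why dense grids (and camera assistance) are used.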

 

Using an enhanced Wi-Fi infrastructure to provide location information supplies the missing piece that RFID alone cannot. Wi-Fi infrastructure helps establish more accurate and stable landmarks, which serve to anchor the various partial trajectories. This approach uses the received signal strength indication (RSSI), a measurement of the power level received by the sensor. Because radio waves propagate according to the inverse-square law, distance can be approximated from the relationship between transmitted and received signal strength (the transmission strength being a constant for the equipment in use), as long as no other errors contribute to faulty results.
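The RSSI-to-distance relationship can be sketched with the standard log-distance path-loss model; the exponent n = 2 corresponds to the free-space inverse-square law mentioned above, and the reference power at 1 m is an equipment-specific calibration constant whose value here is assumed.

```python
def rssi_to_distance(rssi_dbm, ref_power_dbm=-40.0, n=2.0):
    """Log-distance path-loss model: estimated distance in metres.
    ref_power_dbm is the RSSI measured at the 1 m reference distance;
    n = 2 is the free-space (inverse-square) path-loss exponent, and
    indoor environments typically need a larger, empirically fitted n."""
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10.0 * n))

print(rssi_to_distance(-40.0))  # at the reference power: 1.0 m
print(rssi_to_distance(-60.0))  # 20 dB weaker: 10.0 m
```

Combining such distance estimates from several access points (e.g. by trilateration) yields the indoor position fix.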

 

Once the user and/or the tracked objects are located and their movement is tracked, the resulting data is mapped onto a pre-built indoor map to yield meaningful observations about the user’s location in a particular section of the indoor space; based on this, the user’s shopping experience can be enhanced.
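Mapping a tracked position onto a pre-built indoor map then reduces to a zone lookup. The section layout below is invented purely for illustration:

```python
# Named store sections as axis-aligned rectangles (x0, y0, x1, y1), in metres.
SECTIONS = {
    "entrance":  (0, 0, 5, 5),
    "groceries": (5, 0, 15, 10),
    "checkout":  (0, 5, 5, 10),
}

def locate_section(x, y):
    """Return the map section containing the tracked position, else None."""
    for name, (x0, y0, x1, y1) in SECTIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

print(locate_section(7.5, 3.0))  # groceries
print(locate_section(2.0, 7.0))  # checkout
```

A stream of such section hits over time is what the analytics layer consumes to trigger section-specific offers.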

 

Indoor Maps in Consumer or Retail Verticals

During the last few decades, research on localization for exploration and navigation in indoor environments has made significant progress. However, this technology was not accessible to consumers until Google declared “indoor maps” the future of consumer-facing verticals, with successful attempts at utilizing the technology in shopping malls, museums and related public places, where real-time user analytics based on the user’s location inside the shop helped formulate customized offerings that make the experience easier and more delightful.

IKEA, one of the world’s leading home-furnishings companies, uses Google indoor maps to improve the customer’s experience of navigating stores that are “typically a two level building that ranges in size from 200,000 sq ft to 450,000 sq ft–the average size is approximately 320,000 sq ft”, a scale that can work against “IKEA’s goal … to make the customer feel comfortable and in control of their shopping experience”: “People can have a hard time navigating the store. There have been stories of people saying that they feel like we are purposely keeping them in. We want to make their shopping experience as easy as possible” (Google).

A typical customer’s experience in a large mall or shopping store can be frustrating when he “want[s] the option to quickly find their way to a particular product or throughout the store” (Ibidem, 1), which is mostly what the consumer sees as the product he “needs”. The shop or store owner’s intention, in most cases, is to “encourage customers to find items they didn’t know they needed”, which in fact conflicts with the mindset of the customer, who is more inclined towards self-gratification through finding the items he needs.

The common set of expectations lying among the conflicts of interest between the shop/store owner and the shopper provides the foundation that sustains the shopper’s user experience in such an eco-system. These expectations mostly revolve around getting (for the shopper) and providing (for the shop owner) the best possible experience. This common mission, when equipped with technology such as indoor maps, sets the momentum for better usability and at the same time offers avenues for more cash flow for the store owner.

Most of the malls and stores that have implemented indoor-mapping technology have profited by capitalizing “on the growing population of smartphone users” who can use the technology through their handsets. As of March 2012, over 106 million people in the U.S. owned a smartphone, with Apple and Google having market shares of 30% and 51% respectively, which shows that a significant mass of consumers also depends on mobility as a medium for consuming technology-aided services. This fact itself acts as a catalyst propelling the usage of indoor maps in the consumer sector (comScore, 2012).

 

Keeping the user in touch throughout the experience

 

One of the key features of a successful indoor-mapping eco-system is keeping the user informed at every step of the experience and maintaining a communication thread between the user and the system. A sample flow is shown below, illustrating the two-way communication.

 

 

The illustration above highlights how a simple two-way communication channel is established between the user (through the app on his mobile) and the indoor-mapping backend, which analyses the user’s location data to execute productive actions that meet the user’s goals and improve his overall shopping experience in the store.

(c) 2013-14, Samir Dash

UX Simplified: Models & Methodologies: Digital Edition [Kindle Edition] ISBN: 978-1-3115-9110-4

My recent title is available on Kindle for download. This book covers the basic models and methodologies that revolve around User Experience (UX). The topics discussed include user experience, information architecture, user interfaces, usability models, User-Centered Design (UCD), User-Centered Software Design (UCSD), different software development lifecycles (SDLC) and how usability models fit into SDLC models.

The details of the book are as follows:

UX Simplified: Models & Methodologies: Digital Edition
by Samir Dash
ISBN: 978-1-3115-9110-4
ASIN: B00LPQ22O0

Kindle Price (US$):$1.87
Kindle Price (INR):Rs. 119.00 includes free international wireless delivery via Amazon Whispernet

http://www.amazon.com/dp/B00LPQ22O0/ref=r_soa_w_d


Major topics included in this book are :

• Why “UX: Simplified”?
o The Diverse Disciplines: The ABCs of UX
o User Experience(UX)
o Information Architecture(IA)
o Interaction Design (IxD)
o User Interface Design (UI)
• Usability and Mental Models: Foundations of UX
o What is Usability?
o System Models
o What is a “Mental Model” exactly?
o Most-likely Mental Model
o Conceptual Model
o Challenges in Usability Measurement and Metrics
o A List of Factors for Generic and Consolidated Usability Model
o Heuristics: Measuring Usability
• Engineering and Design Processes: Usability and User Centric Approach
o Usability Engineering
o User-Centered Systems Design (UCSD)
o Usability Design
o User-Centered Design (UCD)
o Don’t get Confused: UCD vs UCSD
o UCD Models and Process
• Software Development Life Cycle (SDLC): Where and How User Experience Models fit in?
o Software Development Life Cycle (SDLC)
o Waterfall model
o Spiral model
o Iterative Development Model
o Agile development
o Challenges in UX Integration to Different SDLC Models
o Usability Designing Process
o How Usability Design Process Fits in Different Phases of SDLC?
o How UX Fits in Different Models of SDLC?
o Challenges with Agile model of SDLC to implement UX
o Lean UX and Agile Model
• Agile in Usability Design: Without Reference to SDLC
o Usability Designing Process
• Lean UX: Another Agile UX?
o The Beauty of Lean UX: Everything is familiar
o Foundation Stones of Lean UX:
o Lean Startup method: The concept of “Build-Measure-Learn”
o Minimum Viable Product (MVP) – Prototyping at its best in the Lean Startup Method
o Principles of Lean UX

  • File Size: 1435 KB
  • Print Length: 86 pages
  • Simultaneous Device Usage: Unlimited
  • Sold by: Amazon Digital Services, Inc.
  • Language: English
  • ASIN: B00LPQ22O0