The Evolution of Technology in the Context of the Software Development & Design Process: Takeaways from the PatternFly Conference

[This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.]

Last Sunday I returned home to India after attending a series of collaborative sessions at the UX Summit and the PatternFly Conference in Raleigh, NC, with many designers and developers from across Red Hat and the open-source community. The whole experience was inspiring, informative, and thought provoking, with many takeaways. The most interesting for me was that, taken together, the inspiring talks from the conference speakers implicitly pointed toward one clue: how our attempts to solve existing technical problems also impact the existing work process, and thereby demand a rethink of the process blocks we use.

Being part of the Red Hat UX Design team is a constant exploration of bringing the core principle of “being open” to design, much as openness is seen as an integral value of development.

“We approach user experience design in the same way we develop our products. It’s all about being open.”   – Jim Whitehurst, CEO & President, Red Hat

So, what is so special about design being open? Is it something different from how we design in a traditional enterprise organization? It is natural to have such questions when we face the open paradigm of design challenges; I had similar questions myself. One thing to note here is that designing for open source has its own challenges, which require a fresh look at existing design processes and frameworks, especially in software development scenarios that have been evolving over the last two decades. Those processes focused on bringing a user-centric view to existing Software Development Life Cycles (SDLCs); they were mostly about advocating human factors within the perspective through which we were used to seeing the SDLC.

But as major enterprises increasingly go open source to collaborate more closely with the community in developing software, the open-source movement itself is maturing, evolving gradually through different seasons as technology grows and changing market needs reshape SDLCs. For example, with the rise of startups in the last few years, the adoption of MVP-driven Lean development models instead of traditional SDLCs has increased, indicating the evolving dynamics of how such processes are adopted.

In the traditional enterprise, the design process is more or less influenced by standard UX approaches that have evolved over the last couple of decades, taking into account an SDLC defined around the organization's end goals and its user-experience benchmarks. Many organizations take components, tools, and methodologies from Design Thinking and similar HCI process models, align them with the SDLC, be it agile, waterfall, or hybrid, and try to achieve user-centered software development (UCSD) from an engineering point of view.

Other organizations start from user-needs benchmarks and align the existing SDLC components to them, whether through a User-Centered Design (UCD) approach blended with parts of iterative, agile, or Lean SDLC models, or simply by following a ready-made Lean UX model. While everyone is attempting to solve the UX equation and come up with the best possible framework for advocating user needs, one question remains unanswered: what UX approach works best in a diversified, collaborative open-source culture?

At the very first PatternFly Conference, held on the 8th of this month, Michael Tiemann, VP of Open Source Affairs at Red Hat, referred in his keynote to the continuous evolution of the software development domain in the context of design. The views he expressed resonated with the belief that a sustainable design solution can evolve from an organic design process in which reusability plays an important role.

(Michael Tiemann, Vice President of Open Source Affairs at Red Hat, delivering the keynote at the very first PatternFly Conference at Red Hat Annex, Raleigh on 8th June 2017)

The PatternFly community project, supported by Red Hat, is an attempt in that direction, promoting “design commonality and improved user experience”. Dana's blog post, Are “Web Components” in the future for PatternFly?, is an interesting read in this context from a technical angle; he addresses how to build a UI framework when there are so many choices and so many strong feelings about those choices. Every organization is still trying to solve the challenges of UI development for a better experience, such as how to structure and organize CSS code for reusability in an ever-growing, complex world of UI framework choices like Bootstrap, Angular, and React. It is also important not to forget that each adoption of a UI framework changes how we implement experiences in our products, and thereby affects the process blocks that run the design, development, and testing phases of the SDLC.

This leads us to the hypothesis that, by adopting newer technologies, we introduce change catalysts into our development process that influence how we conceptualize and deliver the best experience for any product we are building. As SDLCs are reshaped by needs such as faster time to market and more sustainable solutions for business needs, so are the associated stages of design, development, and testing. These stages are also maturing through the focus on Continuous Integration (CI) and Continuous Delivery (CD), bringing a sea change to the way software was traditionally planned, designed, developed, and tested. When we add the “open” aspect to this model, it demands improved process dynamics to ensure that end-to-end software delivery remains sustainable in the coming years. Red Hat's recent open initiative in announcing OpenShift.io is an attempt to address all of these aspects; at the very least it is a start, a point where we begin rethinking openly about a solution we thought we already had.

OpenShift.io, as an end-to-end development platform, tries to address multiple process blocks such as planning, analysis, and creation, and to answer one question: what is the best way to deliver a software product in the most effective, seamless, and consistent way, with the best possible value and experience, in a diversified work culture? It will be interesting to see how this platform evolves in the coming years, with the support and collaborative effort of the open-source community, to address the process-related change catalysts that are making their way into our work practices due to the ever-growing complexity of technology change.

In forthcoming posts, I will explore this context with respect to existing UX models and methodologies and try to reflect on “UX in the Open”. Stay tuned.

BlueTap — The Ultimate Virtual-Reality(VR) Keyboard

This is one of my recently published IBM patents: a VR keyboard that does not require the user to carry a keyboard device and that allows information input even while in motion. Forget Google Glass; even the experience of Apple's much-rumored VR device could improve many times over with this mother-of-all VR keyboard system and approach.

Back in 2014, while preparing slides for one of my earlier seminar talks on accessibility, Rediscovering Accessibility for Future Tech (you can read the linked post here: https://www.linkedin.com/pulse/20140917025440-9377042-rediscovering-ccessibility-for-future-tech-everyone-is-affected), I hit upon the challenges and available solutions for data input systems. The concept of mobility in devices has transcended telephony and computing and has taken shape in the form of wearable devices. However, every new technological invention in a field also gives rise to new challenges; in smart wearables, one such challenge is inputting information into the associated system.

One such challenge is that modern technologies enable people to interact with systems in distracted environments, while in motion, and while multitasking, giving rise to a set of problems that were unknown in the PC era. With the rise of smart wearable devices, mobile computing, and frequent access to information from the cloud while on the move, inputting information into the associated system becomes difficult, as the devices are getting smaller and tending toward smaller displays. Recent trends in the mobility domain indicate growth in smart wearable devices: we are witnessing a time when every technology company is trying its best to own its part of the innovation in smart wearables such as smart glasses and smart watches, and this field aligns with one of IBM's currently prioritized strategies, namely mobility.

In most existing smart eyewear devices (e.g. Google Glass), the input mechanism is typically voice. While this is great for giving commands to the device, it is still not great for entering longer texts. Voice-based input is also difficult to use in noisy and distracting environments, and language localization and accent issues add to the list of problems in using voice effectively. Moreover, when it comes to productive work like drafting emails or writing a piece of code, voice input is not as effective as a standard keyboard-based mechanism (found on PCs, laptops, and mobiles, but missing in smart eyewear).

Using a physical keyboard or a secondary Bluetooth-enabled keyboard is possible, but it requires the user to carry a keyboard wherever they go. Even if the keyboard is carried, there is no guarantee the user will have a flat surface to place it on, or that they can use it comfortably while on the move. Imagine a person waiting at a busy bus stop who wants to reply to an office email.

A set of new hardware devices is available in the market (e.g. Fin, Ring) that act as supplementary wearables to trigger predefined commands and provide limited input via finger movements and in-air hand gestures. However, none of them is effective enough to provide a keyboard solution for easy input of textual information into a wearable device like smart eyewear or smart glasses.

Also, when it comes to entering longer texts on smart glasses or eyewear, no reliable method or system exists today that works for users on the move and in distracted environments.

So, with these problem statements in mind, I made a list of dimensions of goodness:

  1. Effective even for typing long text into a smart eyewear system, helping the user be more productive. The user can input long emails or simply use the smart eyewear to write a program or code, even on the beach.
  2. No cognitive load on the user to remember predefined commands, key names, gestures, etc. (unlike the wearable-ring-based command trigger systems detailed in the prior-art section).
  3. Can be used effectively while on the move, distracted, or multitasking (standing in a queue, at a bus station, having dinner, driving, walking, or at home while watching TV).
  4. No extra or supplementary hardware required along with the smart eyewear; no need for the user to carry separate input devices.
  5. A method that uses the age-old, natural human habit of processing information through the fingers.
  6. Explores a new way to have a device free of any physical input accessories.

With these pointers on goodness, I iterated on the conceptual design of a virtual-reality-enabled keyboard for effective data input (I named it BlueTap), with many innovative approaches to a pattern of gestures that can be used in augmented virtual space; it was filed as a patent by IBM. BlueTap is basically an input keyboard system with a set of gestures and methods that uses both fingertips and finger joints as a keyboard for smart eyewear, along with a set of gestures to control human-system interaction in real-life 3D virtual space using this keyboard system.

BlueTap uses natural hand gestures derived from the age-old human cognitive approach of counting on the fingers. It also focuses on the idea that the user should not need to carry separate input devices for typing long texts, which lets the eyewear act as an independent device rather than a supplement to a PC, tablet, or mobile handset. The user can input long texts anywhere, any time: on the road, while walking, or at home while watching TV. It is an approach that explores a new way to have a device free of any physical input accessories.

The idea builds on known techniques: overlaying graphics or icons on a live camera stream on screen is well established, and recognizing a hand, its fingertips, and its finger joints is technically feasible using OpenCV or Kinect-type technology. BlueTap maps the alphabet onto the fingertips and joints of both hands and renders the characters as graphics over the glass screen in real time, analyzes finger movements to infer “tap”, “double tap”, and “swipe” gestures, and sequences the inferred gestures to provide a typical typing-style input experience.
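To make that pipeline concrete, here is a minimal sketch of the gesture-to-text step, assuming a hand tracker that already supplies per-landmark 3D positions. The landmark indices, tap threshold, and class names are my own illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: map characters to hand landmarks and turn inferred
# "tap" gestures into typed text. All indices and thresholds are assumptions.
from dataclasses import dataclass, field

# Assumed layout: one character per fingertip/finger-joint landmark index.
KEY_LAYOUT = {4: 'a', 8: 'b', 12: 'c', 16: 'd', 20: 'e',
              3: 'f', 7: 'g', 11: 'h', 15: 'i', 19: 'j'}

@dataclass
class GestureTyper:
    tap_threshold: float = 0.02        # normalized depth change treated as a tap
    text: str = ""
    _last_depth: dict = field(default_factory=dict)

    def update(self, landmarks: dict) -> None:
        """landmarks: landmark index -> (x, y, z) from any hand tracker."""
        for idx, (_, _, z) in landmarks.items():
            prev = self._last_depth.get(idx, z)
            # A quick inward movement of a fingertip/joint is read as a "tap".
            if prev - z > self.tap_threshold and idx in KEY_LAYOUT:
                self.text += KEY_LAYOUT[idx]
            self._last_depth[idx] = z

typer = GestureTyper()
typer.update({8: (0.5, 0.5, 0.00)})     # index fingertip at rest
typer.update({8: (0.5, 0.5, -0.05)})    # fingertip pushed "in" -> types 'b'
print(typer.text)                        # -> "b"
```

A real system would of course debounce taps, distinguish double taps and swipes over a time window, and feed the emitted characters into the eyewear's text field.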

BlueTap proposes an effective and scalable interaction paradigm that solves the problem of text input to smart eyewear, even when the user is on the move or in distracted or noisy environments, thereby making the user more productive by letting them use their fingertips and finger joints as keys to input long texts effectively, anywhere and any time, without carrying a supplementary device (a keyboard, or a phone/tablet with a virtual keyboard on it). The invention can be used in digital smart eyewear devices (e.g. Google Glass, “Jet” by Recon, Evena's Eyes-On Glasses), mostly in the consumer, enterprise, and healthcare domains, and can be refined to integrate with any future smart eyewear product to provide a quick and comfortable input mechanism for the device.

The interaction paradigm in BlueTap allows dynamic run-time switching between different views (a view can represent, but is not limited to, any language and style of selected keyboard layout) and between double-handed and single-handed keyboard operation, for cases where the user is busy doing something (e.g. holding a coffee cup, having dinner, driving, or writing) or has a disability and cannot use one hand.

BlueTap is more than just a keyboard system: it allows the user to interact with the eyewear in new ways, such as browsing the system's application menu to open any app, making a call to any phone number by enabling a dial pad, or doing calculations via a calculator interface displayed by BlueTap.

A set of unique gestures is also part of this system, allowing the user to control the different modes and views of BlueTap (a sketch of how these gestures might drive mode switching follows the list), namely:

  1. Fist Gesture – Toggles among keyboard views, such as the alphabet and numeric/symbol keyboard views.
  2. Palm Flip & Back Gesture – Toggles between the “keyboard” and “app menu” views.
  3. Thumb Down & Up Gesture – Toggles between the “enabled” and “disabled” states of BlueTap.
  4. Thumb Slide Across Palm Gesture – Slides between single-handed and double-handed keyboard modes.
  5. Double Tap on Finger Gesture – Toggles the Caps Lock state of the keys.
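As a hedged sketch of how these gestures could drive mode and view switching (the state names, gesture labels, and transitions below are my own assumptions for illustration, not taken from the patent):

```python
# Hypothetical sketch of BlueTap's gesture-driven mode/view switching.
from enum import Enum, auto

class View(Enum):
    ALPHABET = auto()
    NUMERIC_SYMBOL = auto()
    APP_MENU = auto()

class BlueTapState:
    def __init__(self) -> None:
        self.enabled = True          # thumb down & up toggles this
        self.single_handed = False   # thumb slide across palm toggles this
        self.caps_lock = False       # double tap on a finger toggles this
        self.view = View.ALPHABET

    def on_gesture(self, gesture: str) -> None:
        if gesture == "thumb_down_up":
            self.enabled = not self.enabled
        elif not self.enabled:
            return                   # ignore all other gestures while disabled
        elif gesture == "fist":
            # Toggle between alphabet and numeric/symbol keyboard views.
            self.view = (View.NUMERIC_SYMBOL if self.view is View.ALPHABET
                         else View.ALPHABET)
        elif gesture == "palm_flip_and_back":
            # Toggle between keyboard and app-menu views.
            self.view = (View.APP_MENU if self.view is not View.APP_MENU
                         else View.ALPHABET)
        elif gesture == "thumb_slide_across_palm":
            self.single_handed = not self.single_handed
        elif gesture == "double_tap_on_finger":
            self.caps_lock = not self.caps_lock

state = BlueTapState()
state.on_gesture("fist")                 # -> numeric/symbol view
state.on_gesture("palm_flip_and_back")   # -> app menu view
```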

BlueTap implements a novel way to superimpose characters and symbols over the fingertips and finger joints in the eyewear viewport, rendering them in real time (through virtual-reality technologies) so that they have a natural, tattoo-like look and feel. The whole method of using finger joints as keys, where the key characters are visually rendered to create visible virtual keys, along with the tapping and double-tapping (to change Caps Lock), is novel in itself.

As far as implementation is concerned, BlueTap supports different technologies for finger detection, including but not limited to infrared sensors, thermal-imaging-based detection, body magnetism, camera-based visual image processing, additional wearable-based solutions, and/or any other technology that can detect fingertips and joints in 3D space.
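For the camera-based path, a minimal sketch could use an off-the-shelf hand-landmark tracker; below, OpenCV handles the video stream and MediaPipe Hands supplies fingertip and joint landmarks. MediaPipe is my choice for illustration and is not named in the patent, and the drawn circles merely stand in for where BlueTap would render key glyphs.

```python
# Hypothetical camera-based fingertip/joint detection sketch (not the patent's
# implementation). Requires: pip install opencv-python mediapipe
import cv2
import mediapipe as mp

FINGERTIPS = [4, 8, 12, 16, 20]   # MediaPipe landmark indices for fingertips
JOINTS = [3, 6, 10, 14, 18]       # one joint below each fingertip (illustrative)

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=2,
                              min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            h, w, _ = frame.shape
            for hand in results.multi_hand_landmarks:
                for idx in FINGERTIPS + JOINTS:
                    lm = hand.landmark[idx]
                    # Placeholder for rendering a key character at this landmark.
                    cv2.circle(frame, (int(lm.x * w), int(lm.y * h)),
                               8, (0, 255, 0), 2)
        cv2.imshow("BlueTap landmark sketch", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```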

BlueTap supports entering information with both hands or with either single hand, and the user can easily swap between single- and double-handed modes. For example, if a person needs to type with only the left hand, they can move from double- to single-handed mode.

This is also beneficial for physically challenged users, providing an accessible way to enter data.

The different operational modes of BlueTap are:

  1. Double Hand State (Default)
  2. Single Hand State

The keyboard in double-hand mode, before the user makes the gesture to toggle into single-handed operation.

BlueTap also supports views like an application menu or home screen, from which the user can launch different applications on the eyewear. In this mode, application icons appear instead of alphabet characters, and the user can easily pick and select the app to run.

The keyboard layout is flexible and dynamic enough to switch between various views; for example, the user can open a numeric view or even a dialing interface.

Specific new sets of palm and hand gestures are part of the BlueTap keyboard, allowing the user to move quickly between different views as well as typing modes.

BlueTap comprises the following key components, which are involved in generating the virtual keyboard over the human palm:

1. Activation/Deactivation of BlueTap:

A physical switch or button, voice control, or the launch of an app can trigger the activation or deactivation of BlueTap. This is simply switching the BlueTap system ON or OFF.

2. Enable/Disable of BlueTap:

If the system is active, the user can disable hand and finger monitoring by BlueTap. This is similar to enabling or disabling the virtual keyboard on typical mobile devices.

Different Modes of BlueTap:

1. Keyboard Mode – This is the default mode of the BlueTap system. It renders a keyboard over the user's palm.

2. App Menu Mode – This mode renders the eyewear device's app icons on the user's palm so that the user can tap to launch any app.

Note: Optional modes that might be introduced, for example:

3. Call Dialer Mode – A call dialer pad layout is rendered on the palm, allowing the user to dial any number to make a call or perform a similar activity.

4. Calculator Mode – A calculator interface is rendered so that calculations can easily be carried out by the user.

The different keyboard views available are listed below (a rough sketch of how such views might be registered follows the list):

  1. Alphabet view
  2. Numeric/special characters view
  3. Email-typing keyboard view
  4. HTML-coding keyboard view
  5. Chat keyboard view with smileys
  6. Different keyboard views for different languages
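Purely as an illustrative assumption (none of these names or layouts come from the patent), such views could be registered as a simple mapping from view name to the key labels rendered over the fingertips and joints:

```python
# Hypothetical registry of BlueTap keyboard views; contents are illustrative.
KEYBOARD_VIEWS = {
    "alphabet": list("abcdefghijklmnopqrst"),                # 20 landmark slots
    "numeric":  list("0123456789") + list("+-*/=.,@"),
    "email":    list("abcdefghij") + ["@", ".com", "_", "-", ".", "send"],
    "html":     ["<", ">", "/", "=", '"', "div", "span", "class", "id", "href"],
    "chat":     list("abcdefghij") + [":)", ":(", ":D", "<3", ";)", "send"],
}

def layout_for(view: str, slots: int = 20) -> list:
    """Return the key labels to superimpose, padded or trimmed to the slot count."""
    keys = KEYBOARD_VIEWS.get(view, KEYBOARD_VIEWS["alphabet"])
    return (keys + [""] * slots)[:slots]

print(layout_for("numeric"))
```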

The BlueTap keyboard patent details:

Publication number: US 20170090747; Filed: Sep 24, 2015; Publication date: Mar 30, 2017; Inventor: Samir K. Dash (Bangalore); Application number: 14/863,832; Assignee: International Business Machines Corporation (IBM)

Samir Dash works as an Associate Design Director at IBM, based in Bengaluru, India. The above article is personal and does not necessarily represent IBM's positions, strategies, or opinions.