Omnipresent Operating System (OS): Re-imagining the Next Killer Experience for the Future of the OS

(Figure: Omnipresent OS concept illustration)

Reincarnation in Hinduism is one of the fascinating philosophical concepts: it says that a soul or spirit, after biological death, is capable of leading a new life in a new body. Being a Hindu, I too believe in the transfer of one's soul after death into another body, producing a continuous cycle of birth, life, death and rebirth through one's many lifetimes in Samsara.

Though this is an analogy from the mythologies and scriptures of Hinduism and other religious traditions, the concept of the same soul passing through different bodies can inspire the experience for the next-gen Operating System (OS): an ecosystem where the OS is "omnipresent" like a soul and allows the user to move through different "systems" and "devices" (we can even extend this to IoT contexts!).

A user story might capture this user's perspective as:

“I should be able to use my application uninterruptedly and seamlessly across various devices/systems as I move through different devices due to certain needs.”

It is like a soul transfer: moving consciousness from one body to another. In the movie The Matrix (1999), the Agents could transfer themselves into different people's bodies. One interesting thing to note is that, after being transferred to a new body, they retained their goals, their memories and so on.

(Fig: In the movie, The Matrix (1999), the agents can transfer themselves into other bodies at will.)

The idea I have termed the Omnipresent Operating System is to share an application session (along with all its state/session data) from one system to another, so that the application continues on the latter system in exactly the same state and with the same session. The user does not have to start the application from the beginning or open a different application session on the latter system, yet can still use all the benefits available there. This helps achieve a unified, seamless, omnipresent experience across different systems.

Use-Cases:

The following are five use-cases, represented in the images below:

In each one, we can see a daily-life scenario where the experience for the user is greatly simplified by the proposed solution.

Historically, the OS concept has progressed through many evolutions. Typically, an OS is defined as the system software that manages the hardware resources of a system to provide a base for its users' programmatic computing needs. Throughout history, operating systems have addressed multiple dimensions such as performance, multitasking, usability, portability and mobility. We are now witnessing the OS evolve at the highest rate possible, driven by the diversification of software technology and hardware and by the evolution of new-age ecosystems and new paradigms of digital devices. This is the age of IoT (the Internet of Things) and the cloud, where any device can be part of a bigger ecosystem and be an extension of a cloud system.

So the question is: what is the future of the operating system? Will the cloud be the ultimate operating system? It might seem so, at least looking at the Chromebook, which defines thin-client access to all the computational needs that stay in the cloud. But the diversification of operating systems still prompts us to pause and think that something is missing. The missing piece is convergence.

We have many derivatives of different types of OS. Android, iOS, Linux, Windows and OS X are a few of the variations that continue the OS war. But users are limited by this. Imagine an application running on one OS: can we use the same application on another? No. Also imagine a situation where you are reading an email on one device and want to continue editing that same email on your PC: can you do it? No.

Looking at today's trends toward the future, it is all about microservices and serverless architectures, similar to torrent (peer-to-peer) or blockchain implementations: decentralized and distributed ecosystems where systems communicate with each other. In such scenarios, the next killer experience is the ability to be omnipresent across these distributed and decentralized ecosystems.

Solution Implementation:

The solution can be implemented in two ways. To illustrate, assume there are two devices: the user initially starts using some application on the first one, then moves to the second device and continues with the same session and input data there.

Approach 1. Both devices (i.e. Device 1 and Device 2) run the same Omnipresent Operating System (in real-life scenarios they might, in some cases, run different versions of the same OS), which has the capability to transfer the application and user data, including session/environment variables etc., along with the components necessary to render it on the other device.

Approach 2. The two devices may not run the same Omnipresent Operating System; they may run two different OSes (including third-party OSes such as Android and Windows). In such cases, both devices can have a Runtime Component installed that gives them the capability to transfer the application and user data, including session/environment variables etc., along with the components necessary to render it on the receiving device.
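To make the idea more concrete, here is a minimal sketch of what such a hand-off payload could look like. The post does not prescribe any format, so all class, field and function names below are illustrative assumptions; the fields simply mirror the data components described in the diagram that follows.

```python
# Illustrative sketch only: the payload format and transport are assumptions,
# not a specification of the Omnipresent OS concept.
import json
import socket
from dataclasses import dataclass, field, asdict

@dataclass
class AppStatePayload:
    app_id: str                                       # identifies the application (or an equivalent app)
    session: dict = field(default_factory=dict)       # session data
    auth: dict = field(default_factory=dict)          # user authentication and related data
    environment: dict = field(default_factory=dict)   # browser/OS versions, history, system variables
    metadata: dict = field(default_factory=dict)      # any other metadata
    app_data: dict = field(default_factory=dict)      # user input, cookies, local storage, app variables

def send_state(payload: AppStatePayload, host: str, port: int) -> None:
    """Serialize the application state and push it to the peer device's runtime."""
    raw = json.dumps(asdict(payload)).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(len(raw).to_bytes(8, "big"))  # simple length-prefixed framing
        conn.sendall(raw)
```

In Approach 1 this logic would live inside the OS itself; in Approach 2 it would sit in the Runtime Component installed on each device.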

The following diagram illustrates how the fundamental components are placed in the architecture of the ecosystem in order to make it work.

Description of the diagram:

100 and 200 are two devices connected over the same network via Wi-Fi/LAN/Bluetooth etc. (e.g. 150 and 250).
110 is an application running on 100. 120 is the Runtime Component that helps achieve the desired outcome proposed in this concept (it might be part of the OS).
100 is connected to the internet via port 140.
111 is the session data, 112 is the user authentication and related data, 113 is the environment-related data (e.g. browser version, OS version, history, some system variables etc.), and 114 is any other metadata that might be associated with any component of the environment.
115 is the application data (e.g. client session, user input data, cookies, local storage, application-related variables etc.).
210 is an equivalent application running on 200.
220 is the Runtime Component that helps achieve the desired outcome proposed in this concept (it might be part of the OS).
200 is connected to the internet via port 240. 150 is a means of connection (e.g. Wi-Fi, LAN, Bluetooth, NFC, IR etc.) between 100 and any other device, in this case 200.
250 is the corresponding means of connection for 200.
100 and 200 are connected to each other, for which 100 might use another port 130 and 200 might likewise use 230.
Now, when the user runs device 100 and application 110, the data related to that state is collected and transferred to the other device 200 over the connection. This transfer is depicted as 160.
This data is stored by the runtime 220 as 111, 112, 113, 114 and 115.

Now the runtime selects the compatible application 210, which may have its own session 211 and other related data.
Runtime 220 then uses 111, 112, 113, 114 and 115 to determine what can be populated in 210, which may include the text or any information the user had entered on device 100; this allows application 210 to run with the user's data.
The user now uses device 200 to continue editing or working on his data using app 210. Once he finishes, runtime 220 collects the modified data from app 210 and sends it back to device 100 (depicted as 260), so that on 100 the runtime 120 can use it to populate app 110 within the same session 111. If some server on the internet needs to be updated, this can be done using the same session and thread via the same network port 140.

Note: The above example shows only a generic approach and describes the overall components. Depending on the scenario, the usage might vary slightly. For example, if the user does not need to use the same session and port to connect to the server, the flow might differ: once the user has updated his information in 210 on device 200, instead of sending the updated data back to device 100 (as depicted in 260), runtime 220 can submit it to the server or backend via 240 over a different session.
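Continuing the same hypothetical sketch, the receiving runtime (220) could apply the payload to a compatible application and, when the user finishes, either hand the modified state back to the first device (260) or submit it directly to the backend over its own connection (240), matching the two variants described in the note above. The application interface (restore, collect_modified_data, submit_to_backend) is an assumed abstraction.

```python
# Hypothetical continuation of the earlier sketch; all names are assumptions.
def handle_incoming_state(payload: AppStatePayload, app_registry: dict) -> object:
    """Runtime 220: pick an equivalent application (210) and populate it with the user's data."""
    app = app_registry.get(payload.app_id)            # find a compatible local application
    if app is None:
        raise LookupError(f"no compatible application for {payload.app_id}")
    app.restore(session=payload.session,
                auth=payload.auth,
                environment=payload.environment,
                data=payload.app_data)                # user continues exactly where they left off
    return app

def finish_editing(app, payload: AppStatePayload, return_to_origin: bool) -> None:
    """When the user is done on device 200, hand the modified data back or submit it directly."""
    payload.app_data = app.collect_modified_data()
    if return_to_origin:
        # Variant 1: send the updated state back to device 100 (transfer 260),
        # so that runtime 120 can repopulate app 110 within the original session (111).
        send_state(payload, host="device-100.local", port=9000)   # host/port are placeholders
    else:
        # Variant 2: submit directly to the server/backend over this device's own
        # connection (port 240) using a different session.
        app.submit_to_backend(payload.app_data)
```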

Why is Omnipresence the future of the OS?

Interestingly, when the Cloud OS concept came into existence, the driving force behind it was the thought that a 'good' OS gets out of the way and lets the user get straight to what they want. In Microsoft's approach, from the perspective of the user, they are no longer using a program on a machine but consuming a service that lives in an arbitrary place. Because the service runs on an API common to all machines, it becomes easier to scale and fail over. This is the concept that drives Platform as a Service (Azure's tour de force). [Source]

Big players like Microsoft also believe that the "perfect future [of the OS] would mean that […] software is totally portable between desktop and mobile devices." Such players are attempting to converge the OS of mobile devices and desktops.

Though many in the past attempted to evaluate the thin-client OS as the future (e.g. http://www.totalnetworx.com/computers-technology/google-chrome-os-operating-system-future/), people gradually started to foresee the future of the OS as something that cannot be seen in the light of a client-server architecture alone; rather, it would be defined by something more organic and sustainable. Articles like "Desktop 2.0 and the future of the networked operating system" interestingly predicted, long ago, the conclusion we carry forward today: the Chrome OS version of the future, where all we need is a browser, is wrong.

So how is the Omnipresent OS concept different from the thin-client (and from the mainframes some old geeks point to for the secret sauce of the future)? Here are the basic differentiators:

The Beginning:

As I mentioned, in today's OS evolution it is the diversification of operating systems that prompts us to pause and think about the missing bridge to the future. If we ponder closely, it appears that the missing piece is convergence. The Omnipresent OS concept is a thought in that direction. It is actually a beginning that prepares us for tomorrow's decentralized networked world, where the age-old philosophies will show us the way!

(c) 2017-18 Samir Dash, Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0)

 

https://www.linkedin.com/pulse/omnipresent-operating-system-os-re-imagining-next-killer-samir-dash

 

Re-imagining Beta Testing in the Ever-Changing World of Automation.

Based on the paper on Beta Testing presented at Red Hat QE CampX, Bangalore, on 7 December 2017.

Beta testing is fundamentally about 'a test of a product performed by real users in the real environment'. Of course, there are many tags we use to refer to testing with similar characteristics: User Acceptance Testing (UAT), Customer Acceptance Testing (CAT), Customer Validation, Field Testing (more popular in Europe) etc. Whatever the tag, the basic components are more or less the same: user testing along with front-end UI and UX-related testing to find potential issues and rectify them. Also, this always happens in the iteration of the SDLC where the idea has been transformed into a design and has passed through the development phases, in which unit testing and integration testing have mostly already happened.

Basically, the Beta stage of Product Life-cycle Management (PLM) is the perfect opportunity to hear from the target market and plan for the road ahead. When we zoom into this phase of testing, it has a broad spectrum that ranges from front-end or UI-related testing (UI functionality, cosmetics, UI-level interaction, visual look and feel) at one end to User Experience (UX) at the other, including user testing involving A/B (split) testing, hypotheses, user-behaviour tracking and analysis, heatmaps, user flows and segment-preference studies, exploratory testing and feedback loops.

Popular beliefs about what makes Beta Testing 'good' shape the typical tools that enable users to carry out such tests: shortening beta cycles, reducing time investment, increasing tester participation, improving the feedback loop and visibility, etc. However, if we dive deep into the factors behind these tools from different angles, we will find two major reasons that advocate the importance of Beta Testing.

1. The left-right brain analogy, which points to the overlap of human and technology.

The typical belief is that the left-hand side of the brain mostly processes logical thinking, whereas the right side is more about the emotional parts of our thoughts. Based on this popular analogy, when we map the different aspects involved in the different stages of the SDLC for a digital product along a straight line from left to right (refer to the diagram below), we notice that the logical and the more human-centered aspects are divided by an imaginary line at the center. We also notice a gradual progression of the emotional index of the components from left to right.

And when we map these to the beta testing phase, we notice that these right-hand components are predominant in such testing. As users, as the humans of the products, we are more emotionally connected to the aspects of the product that are validated or verified in Beta Testing, making Beta Testing one of the most important testing phases in any SDLC.

Another interesting point to note is that when we look at the traditional software approach to defining "criticality", the areas tested during UAT/Beta mostly fall into Class 3 and 4 criticality. But because they touch the core human aspects, they become more important.

To illustrate this, here is a nice video, https://www.youtube.com/watch?v=-jHDyhJdDb8, that reflects the importance of technology touching human emotions. This YouTube video was posted by a popular brand that offers glasses for color-blind people, which can correct the color-blind vision issue in real time for the end user. Interestingly, this is about "Accessibility", one of the aspects typically covered during Beta Testing. Just by looking at this aspect in the context of the video, the question naturally comes: "What can we do for this father and son, as a tester, a developer or a designer?" And when we look at the stats, we find that the number of people accessibility impacts is huge: every one in five persons is challenged by some kind of disability. Unfortunately, some reports indicate that more than 90% of websites in 2011 were not conformant to the W3C's accessibility guidelines.

This itself shows the human angle that advocates why Beta Testing is important: to ensure these aspects are validated and verified so that the target users' needs are fulfilled and do not go unattended.

2. From the standards perspective: evaluating along the ISO/IEC 9126-4 (2001) dimension, which frames the difference between usability and quality as a matter of perspective and context.

The International Organization for Standardization (ISO) has evolved the standards around quality vs. usability over time. In 1998, ISO identified efficiency, effectiveness and satisfaction as major attributes of usability. Just a year after that, in 1999, the proposed quality model involved an approach to measure quality in the light of internal software quality and external factors. In 2001, the ISO/IEC 9126-4 standard suggested that the difference between usability and quality in use is a matter of context of use. The ISO/IEC 9126-4 (2001) standard also distinguished external versus internal quality and defined related metrics; metrics for external quality can be obtained only by executing the software product in the system environment for which the product is intended. This shows that without usability/HCI in the right context, the quality process is incomplete. An interesting point to notice here is that the "context" referred to is exactly what is fundamental to beta testing, i.e. "real users in a real environment", thereby making the case for Beta Testing stronger.

Now that we know why Beta testing is important, let's explore the challenges involved in the Beta stage. If we look at any of the defined standards, including ISO/IEC 9126, most of them are static: none of these models really describes the relationship between phases in the product development cycle and the appropriate usability measures at specific project milestones. We should note that the standards also give relatively few guidelines on how to interpret scores from specific usability metrics. And specific to usability as a quality factor, it is worth noting that usability is the aspect of quality where the metrics have to be interpreted.

In this light, when we look at the popular beta-testing tools of today, we notice that the top tools leave interpretation to the customer's or end user's discretion. This brings us to the number one challenge in Beta Testing: how to filter out pure perception from actual, valid issues and concerns? As most of the issues are related to user testing, split testing and front-end testing, there is no optimized single-window solution that is smart enough to handle this effectively. Real users in a real environment, unless empowered, struggle to comprehend all the aspects of beta testing and react. Also, much of it is perception, and not all of it can be validated against real data from benchmarks or standards.

The 2015-16 edition of the World Quality Report indicated that expectations from Beta testing are changing dramatically. It hinted that customers are looking for more product insights through a reliable way to test quality and functionality, along with the regular usability and user testing, in a real customer-facing environment.

It is not only Beta Testing: in the overall testing scenario, user demand is also being impacted by rising complexities and challenges, which keep increasing due to accelerated changes in technology and in development and delivery mechanisms and approaches. The 2017-18 World Quality Report states that the test environment and test data continue to be the Achilles heel for QA and testing, and that the challenges of testing in agile development are increasing. There is now a demand for automation, mobility and ubiquity, along with smartness, to be implemented in software quality testing. Many believe that analytics-based automation solutions would be the first step in transforming to smarter QA and smarter test automation.

While this is true for QA and testing overall, it is also true for Beta Testing, even though this testing, unlike unit testing, system testing etc., deals less with the purely functional aspects of the product.

Let's see where we stand today against this benchmark. If we explore the popular beta-testing solutions, we find a big vacuum in the area where users' needs for the more functional aspects are mapped alongside the usability and user-testing aspects. You can also notice in the diagram that there is ample space to play with smart testing scenarios using cognitive technology, automation and machine learning.

(Note: The above figure shows my subjective analysis of the competitive scenario.)

Basically, we lack “Smartness” and proper “automation” in Beta Testing Solutions.

Apart from all this, there are more challenges that we notice if we start evaluating user needs from the corresponding personas' viewpoints. For example, even when the functional aspect is what needs to be validated, the end user or the customer may be unable to recognize it. The product owner, customer or end user, the "real users in a real environment", belong to a user segment that may not be aware of the nuts and bolts of the technology involved in the product they are testing in order to sign it off. It is like the classic example of a customer who is about to buy a second-hand car and inspects the vehicle before making the payment: in most cases, he pays the money without being able to recognize what is under the bonnet. This is the ultimate use-case that advocates for "empowering the customer".

Now, how do we empower the end user or the customer? The tools should support them in a way that gives the user peace of mind while validating the product. Unfortunately, the many small tools that try to solve some of these little issues to empower the user (for example, a Google Chrome extension that helps analyze CSS and create a report, or an on-screen ruler the user can use to check alignment) are scattered. The ground reality is that there is no single-window, extension/widget-based solution available. Also, not all widgets are connected, and of those that are available, not all are comprehensible to the customer/end user, as almost all of them are developer- or tester-centric. They are not meant for a customer without special skills!

 

The automation solutions in testing that are part of Continuous Integration (CI) and Continuous Delivery (CD) are engaged and effective mostly in the "pre-beta" stages of the SDLC, and they require specific skills to run. With the focus on DevOps, in many cases the CI-CD solutions are being developed and integrated with new-age solutions to address the rising complexity of technology stacks, languages, frameworks etc. But most of them are meant for skilled dev or test specialists to execute, and this does not translate well to Beta testing, where the end user, the customer, the "real user in a real environment", is involved.

Apart from all this, even assuming we could enable all these automation features in Beta, there is still another limitation in the existing solutions. The employment of automation brings its own challenge of "information explosion": the end user has to deal with a higher volume of data from automation, and with so much information the user will struggle to get a consolidated and meaningful insight into the specific product context. So what do we need to solve these challenges? Here is my view: we need a smart, connected, single-window beta-testing solution with automation that is comprehensible to end users in a real environment, without the help of the geeks.

For the last few years I have been exploring these aspects of the ideal Beta Testing solution and working on a model and a proof of concept called "Beta Studio", representing the ideal beta-testing solution, which should have all of the following: beta testing that utilizes data from all stages of the SDLC and PLM, along with standards, specs and user-testing data, to provide more meaningful insights to the customer; testing of the real application in a real environment by real users; a customer- as well as end-user-centric approach; testing of the soft aspects of the application (usability, accessibility, cosmetics etc.); and enough smartness to compare and analyze these soft aspects against functional testing data.

Use machine-learning & cognitive to make the more meaningful recommendation and not just dump info about bugs and potential issues.

Here is an indicative vision of Beta Studio:

 

This vision of the ideal beta-testing solution touches upon most of the aspects we just discussed. It also covers the interaction points of the different personas (e.g. customer, end user, developer, tester, product owner, project manager, support engineer) across the whole product life cycle, and utilizes automation along with machine-learning components such as Computer Vision (CV) and Natural Language Processing (NLP) to gather information that is then processed by the cognitive component to generate the desired insights about the product, together with recommendations. During this process, the system involves data from standards and specs, along with the design benchmark generated from the inputs at the design phase of the SDLC, so that meaningful insights can be generated.

What do we need in order to translate this vision into reality? The following diagram hints at exactly that:

Basically, the first step involves creating the design benchmark from the information available at the design stage, which can then be used to compare the product features against metrics based on this benchmark.

The second step is automated and manual tracking of the product at runtime, in real time, and categorizing and collating that data. The third step involves creating features to support the user-feedback cycle and user-testing aspects (exploratory testing, split-testing capabilities).

The fourth step would be to collect all the standards and specifications on the different aspects, e.g. the Web Content Accessibility Guidelines (WCAG), Section 508, the Web Accessibility Initiative's ARIA specs, design principles, W3C compliance, JS standards, CSS standards and grids, code-optimization metrics, error codes and specs, device-specific guidelines (e.g. the Apple Human Interface Guidelines) etc.

The fifth is about building the critical components such as Computer Vision and Natural Language Processing units which would process all the data collected in all these stages and generate the desired metrics and inferences.

The sixth step involves building the unit that generates the model to map the data and compare it against the metrics. The final, seventh step is to build the cognitive unit that can compare the data, apply the correct models and metrics to filter the data, and generate insights that can be shared as actionable output with the end user/customer.
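To make these seven steps a little more concrete, here is a highly simplified skeleton of how such a pipeline might be wired together. Every class and method name is an assumption made for illustration; it is not the actual Beta Studio implementation.

```python
# Illustrative skeleton only; the real Beta Studio architecture may differ entirely.
class BetaInsightPipeline:
    def __init__(self, design_benchmark, standards, cv_unit, nlp_unit, cognitive_unit):
        self.design_benchmark = design_benchmark   # step 1: metrics derived from the design stage
        self.standards = standards                 # step 4: WCAG, ARIA, CSS/JS standards, etc.
        self.cv_unit = cv_unit                     # step 5: computer-vision analysis of screens
        self.nlp_unit = nlp_unit                   # step 5: NLP over user feedback and logs
        self.cognitive_unit = cognitive_unit       # step 7: filtering and recommendation logic

    def run(self, runtime_data, user_feedback):
        # Steps 2 and 3: runtime tracking data and user-testing feedback arrive as inputs.
        visual_metrics = self.cv_unit.analyze(runtime_data.screenshots)
        feedback_topics = self.nlp_unit.extract_topics(user_feedback)

        # Step 6: map the observed metrics against the design benchmark and the standards.
        deviations = self.design_benchmark.compare(visual_metrics)
        violations = self.standards.check(runtime_data.dom_snapshots)

        # Step 7: the cognitive unit turns raw findings into a short list of actionable
        # insights instead of dumping every bug and metric on the customer.
        return self.cognitive_unit.prioritize(deviations, violations, feedback_topics)
```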

While experimenting for BetaStudio, I have explored different aspects and built some bare-bones POCs. For example, Specstra is a component that can help create a design benchmark from design files.

When I was exploring Specstra, I was trying to address the issues related to the cosmetic aspect, or visual look and feel. Typically, more than 30%, about one-third, of issues are non-functional and mostly cosmetic, yet there is no reliable solution that helps benchmark this kind of issue against specific standards. At a minimum, one-third of the issues found during the beta/UAT stages of testing are cosmetic and alignment issues, including category 3 and 4 types. And these happen mostly because the two personas involved, the developer and the designer, have their own boundaries defined by a mythical fine line.

Looking at developers, roughly 45% of them are not aware of all the design principles employed or the UX heuristics to be employed. Similarly, half of the designers are not aware of more than half of the evolving technological solutions around design.

And in roughly three-fourths of projects, we do not get detailed design specs to benchmark against. Detailing out a design spec comes with a cost and requires skills. In more than two-thirds of development cases there is no detailed design with specs; where designs do exist, many are not standardized, and most do not have clear and detailed specs. Also, because design is carried out in different tools, it is not always easy to have a centralized place where all the design information is available for benchmarking.

To solve this, Specstra comes in handy: it is an automation POC that works like a cloud-based visual design style guide generator operating on third-party design source files. This addresses the case where the user would like to continue using his existing design tools such as Photoshop, Sketch, Illustrator, PDF etc.
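As an illustration of the kind of design benchmark such a tool could produce, here is a toy extractor that builds a minimal style guide from a flat export of design tokens. This is not Specstra's actual code; parsing real Photoshop, Sketch or Illustrator files is far more involved, and the JSON export format assumed here is hypothetical.

```python
# Toy sketch: assumes the design tool can export a flat JSON list of layer styles.
# Real design formats (PSD, Sketch, AI, PDF) need dedicated parsers.
import json
from collections import Counter

def build_style_guide(design_export_path: str) -> dict:
    """Summarize the colors and type styles used in a design export into a simple benchmark."""
    with open(design_export_path, encoding="utf-8") as f:
        layers = json.load(f)   # e.g. [{"fill": "#1A73E8", "font": "Roboto", "font_size": 14}, ...]

    colors = Counter(layer.get("fill") for layer in layers if layer.get("fill"))
    fonts = Counter((layer.get("font"), layer.get("font_size"))
                    for layer in layers if layer.get("font"))

    return {
        "palette": [color for color, _ in colors.most_common()],
        "type_styles": [{"font": f, "size": s} for (f, s), _ in fonts.most_common()],
    }

# The resulting benchmark can later be compared against values observed at runtime
# (e.g. computed CSS) to flag cosmetic deviations from the intended design.
```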

You can view the video of the demo here

 

https://youtu.be/_o4taIXCvck or read more here https://medium.com/design-ibm/specstra-experimenting-with-automation-in-design-industry-4641c0b4244d or here https://yourstory.com/read/71b0d89e20-specstra-experimenting-with-automation-in-design-industry

Similarly, this POC explores a single-window solution for tracking and getting real-time data on accessibility, visual design, CSS, JavaScript, environment etc. from the product. View a video of the Beta Studio POC here: https://youtu.be/kItqD5wc4_4.
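As a tiny example of the kind of check such a single-window tracker could automate, counting images that ship without alternative text is one of the simplest accessibility signals to collect. This snippet is my own illustration and is not taken from the POC; mature tooling covers far more of WCAG than this.

```python
# Minimal illustrative check using only the Python standard library.
from html.parser import HTMLParser

class MissingAltCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.total_images = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total_images += 1
            alt = (dict(attrs).get("alt") or "").strip()
            if not alt:
                self.missing_alt += 1

def audit_alt_text(html: str) -> dict:
    parser = MissingAltCounter()
    parser.feed(html)
    return {"images": parser.total_images, "missing_alt": parser.missing_alt}

# Example: audit_alt_text('<img src="a.png"><img src="b.png" alt="logo">')
# returns {'images': 2, 'missing_alt': 1}
```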

 

I know that reaching the goal of an ideal beta-testing solution will take effort and time, and the concept will also evolve over time. But the journey has surely started for all of us to connect and explore how to make it a reality.

Feel free to ping me in the comment section of this article.

To explore the open-source project of the BetaStudio POC, follow the link here: https://github.com/betaStudio-online (I will be uploading all the code to these repos over time).

What does the failure of Google Glass teach us about UX?

In mid-January I saw the headlines making the official announcement of the death of Google Glass. I was not surprised. I knew a lot of issues had to be addressed before Glass could live up to expectations. Many of them are issues related to UX, and all of them relate to a grey area of the UX space that was never given prime consideration when designing a seminal product like Glass and many other legends.
Back in 2013, I wrote a few posts on usability in the context of the social aspect of Google Glass that was being ignored. Now, reading articles that cite "privacy concerns" as one of the many reasons for its failure, many of the design-approach concerns I had raised only sound louder.

Google Glass is not an evil product, everyone agrees; everyone even agrees that it has immense potential. However, it certainly needs a facelift from the product-design point of view, and thereby from the UX point of view.

We saw the rise and fall of Google Glass follow a pattern: with the emergence of Google Glass, topics related to devices infringing on personal privacy became hot cakes for tech debates. Many social scientists and human-rights activists had started to see the 'Glass' as an evil that reminded them of George Orwell's '1984'. The fear of a 'Google Big Brother' controlling the major share of the information world was seen as an intruder into the private aspects of 'the public'. The "Glasshole" incarnation of the Glass is as seminal as the product "Glass" itself, for bringing out topics like "user privacy", "social context" and certainly what I call the "Context of the Other".

It is not the case that Google did not spend money on user research and usability before going ahead with the concept of people wearing a glass that may change the way we interact with systems in our daily life. Usability-wise, it is definitely a super gadget that has the potential to catapult the device industry into the next century. But the new features and interaction methods were implemented in the device following what is actually a decade-old approach, one fit only for human-computer interaction (HCI) on smartphones and tablets, which have less tendency to hurt the sentiments of those who do not directly interact with the device while the user performs actions in a certain socio-cultural context. These sentiments can result in a fear of losing privacy, cultural distrust and humiliation among the second-hand users of the device, who are impacted indirectly in some way by the device's actions in that context.

Historically, the product design process, while following its checks and balances with heuristics and usability models, has never given prime importance to the user's relationship with the 'Other' in his environment. And this is the missing piece that needs to be rediscovered and fitted into the standard usability matrix when Google gives "Glass" a facelift to bring it back in a new incarnation that is friendlier, less intrusive to the user's privacy, and compatible with the SX model (Socio-cultural Usability Model) that I proposed earlier.

Socio-Cultural User Experience (SX) – the missing piece in UX


I use the term 'Socio-Cultural User Experience' (SX) to represent the aspect of Usability Design or User Experience (UX) that deals with the usability of products/software in a social context. This is the same "Context of the Other".

Considering the ‘Others’ in the User’s Social circle:

The existing UX model does not analyze needs beyond the current user and his 'type' when doing a usability test; it never considers how the app/system impacts the other members of society while the target user set is using it.
For example, using a car horn is a safety measure, but using it near a hospital or school is considered antisocial and disturbing. There are many social checkpoints that bar users of a system from using it in special sociological contexts.

Criteria of a Good ‘SX’ Compatible System

Criteria for a sound usability design of an app in a socio-cultural context:

1. Universal: has design elements that are universal.
2. Ethical: follows principles and approaches that have positive ethical value.
3. Non-racial: has a non-biased, non-provocative attitude towards the user's race and beliefs.
4. Respectful: towards the user's culture, social beliefs and ethnicity.
5. Safe: has a social impact that is safe for the user.
6. Non-abusive: must not exploit the user or the environment he is in.
7. Common sense: geared towards common sense; behaves and reacts to the user in a sensible way.
8. Privacy-protecting: the app's features and interactions must protect the privacy of the user and of the other humans in his social circle.

Let’s take the case of Google Glass.

Google Glass is designed in a way that can be even more personal than a mobile handset: it is a spectacle, and it can become an indispensable accessory once the user gets used to it and replaces his conventional glasses with it.
But the support for a camera to take pictures can pose a problem for the user when entering private areas, industrial areas, secure zones and offices where cameras are not allowed. In some places on earth, cultural restrictions ban cameras in certain locations; most temples in India do not allow cameras inside. Now imagine that the user has replaced his traditional spectacles with it: he may find it difficult to manage without it in these scenarios.
So, following the SX approach in usability design, the glass would require a "detachable camera" so that the user can detach the camera (powering it off) and at the same time keep using the glass as a conventional spectacle.
This example covers just one of the many features Google Glass has, but it is enough to illustrate the approach.

Points to Focus on while Designing an SxD-Compatible System

1. Provide multiple alternative interaction methods to control the same functionality in different socio-cultural contexts.
2. The user should have total control over enabling/disabling interaction methods for different scenarios (see the configuration sketch after this list).
3. The default interaction method must follow the 'SX' approach.
4. Provide options for the user to switch between interaction methods as and when needed.
5. Alternative mechanisms should be provided for physically challenged users. Rethink the use of gestures and other interaction methods in the Section 508 context, as new devices with unpredictable (not necessarily negative!) interaction methods and features appear every day.
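Here is a small sketch of what points 2 and 3 above could mean in practice: a per-context settings model where socially sensitive interaction methods stay disabled by default and the user explicitly opts in, while certain contexts can still suppress them. All names are illustrative assumptions, not a real device API.

```python
# Illustrative sketch of 'SX'-style interaction settings; not from any real device API.
from dataclasses import dataclass, field

@dataclass
class InteractionPolicy:
    # Socially sensitive methods are off by default (the 'SX' default of point 3);
    # the user holds full control to enable them per scenario (point 2).
    enabled: dict = field(default_factory=lambda: {
        "touch": True,
        "voice": False,
        "wink_gesture": False,
        "hand_gesture": False,
    })

    def allow(self, method: str, context: str, restricted_contexts: dict) -> bool:
        """A method is usable only if the user enabled it and the current context permits it."""
        if not self.enabled.get(method, False):
            return False
        return method not in restricted_contexts.get(context, set())

# Example: even after the user enables the wink gesture (discussed in the next section),
# a 'meeting' or 'temple' context can still keep it suppressed.
restricted = {"meeting": {"voice", "wink_gesture"}, "temple": {"wink_gesture", "hand_gesture"}}
policy = InteractionPolicy()
policy.enabled["wink_gesture"] = True
print(policy.allow("wink_gesture", "meeting", restricted))  # False
print(policy.allow("wink_gesture", "street", restricted))   # True
```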

Gestures and other Interaction Media of SxD:

The 'Social Interaction Design' approach has the following major facets of system interaction towards the user in a socio-usability context:
1. Facial gestures: The selection of human-triggered facial gestures (e.g. wink, smile etc.) to activate the system or trigger any action must be judged by the canonical meaning of those gestures in the social and cultural context in which the user is going to use them. For example, in the case of Google Glass, the feature of "winking" at someone to take a photo (the gesture developed by Google Glass developer Mike DiGiovanni, http://news.cnet.com/8301-1023_3-57582500-93/google-glass-code-lets-you-snap-a-photo-with-a-wink/) can pose a problem if the user is in India or in Middle Eastern countries. Even in the western world, winking at a lady or a group of ladies (even if unintentional and without any intent of abasement) can be taken as a negative action (e.g. a weakness of character) and evoke anger and misunderstanding. So even if winking to take a photo is a 'cool feature', in a social context SxD would suggest that the usability/interaction engineer rethink it and implement options to keep it disabled by default, giving the user total freedom to use his own judgment to enable and use the feature in any given socio-cultural context. Fig 5: The 'wink' gesture developed by Google Glass developer Mike DiGiovanni allows the user to take a snap of the surroundings with just a wink of an eye.

2. Sound gestures: The selection of sound gestures, the use of voice or sound patterns to control the system, should be examined for different user environments. For example, blowing a whistle to activate play on a portable music player, or to open an SMS on a cell phone, can be an interesting feature, but it becomes useless on a busy street or in a meeting room where a discussion is going on.
3. Touch-based gestures: Touch, swipe and pinch are popular nowadays, as most tablets and smartphones offer them as user-friendly interaction methods. More devices are appearing that do not have any physical buttons; a few multi-touch gestures are enough to fully control them. However, 'SxD' stresses that devices must be designed and developed with interaction methods that provide alternatives to the available touch-triggered mechanisms. For example, while developing a digital medical instrument with a touch-sensitive display, the interaction methods should be carefully planned so that a surgeon can use the system without touching it, to avoid infections through contact while conducting mission-critical surgery.
4. Hand/finger-based 3D gestures: The 'SxD' approach encourages a social analysis of the hand/finger-based gestures planned for a system. Gestures should be selected or invented by carefully studying the cultural context, avoiding common gestures used in daily life that are considered abusive to others. In addition, practical usage arising from the user's environment and work culture must be given consideration. For example, the middle-finger gesture commonly used by youths as a humiliating jibe at others should not be used in any app that is expected to be popular among users from a similar demography. Note, though, that considering the demography alone is not enough to decide on the gestures.
5. Mouse/keyboard control: Similar to gesture, voice and the related interaction methods, mouse, keyboard, joystick and other typical input-device-based methods should be considered within the context in which they are going to be used. As this group of interaction methods is very old, many standard guidelines are already in practice. However, we need to rethink them and make sure they are up to date with the ever-changing human-computer-interaction domain.

Our world needs products that are not only usable but also socially safe to use. It is high time we considered the "Other" in our social context to improve our products, and thereby our future.

Rediscovering Accessibility for Future Tech!

This is a rediscovery of "Accessibility" in the world of touch-screens and other natural interfaces. With new technology innovation, the lines between accessibility technology and technology for the masses are getting blurred. What used to be a special need is becoming a general need for mass use. Situational-disability use-cases are defining the new-age devices, wearables and smart interfaces.

It is high time we rediscovered the "accessibility" we think we have already discovered!

Flip the Cart!: Reimagining the Next in eCommerce

eCommerce is one of the strongest buzzwords we come across nowadays. Especially after the success of Amazon and Flipkart, the domain is seen as a goldmine that can bring disruptive business models to increase profit and business. So what is next in eCommerce? I tried to conceptualise and re-imagine it as a socially integrated experience. Following are some insights and sample slides that present those attempts:

 

Find the complete idea here:

https://www.behance.net/gallery/19044315/Flip-the-Cart-Reimagining-Social-Commerce

Engineering and Design Processes: Usability Engineering vs. Usability Design

Usability Engineering

Usability Engineering began to emerge as a distinct professional practice in the mid- to late 1980s. The majority of its practitioners came from varied backgrounds such as Computer Science or sub-fields of Psychology such as Perception, Cognition or Human Factors. Today the field is also being populated from newer disciplines such as Cognitive Science and Human-Computer Interaction.

Usability engineering is defined by Preece as

 ‘an approach to system design in which levels of usability are specified and defined quantitatively in advance, and the system is engineered towards these measures, which are known as metrics.’

The whole concept of Usability Engineering focuses on the “metrics for measuring usability”.

As the emphasis on usability metrics through "analysis and evaluation" is mostly the sole focus of this process, there is not enough focus on the actual design process. Usability is sought through "engineering and quantifiable methods and techniques" rather than by "designing the way to usability".

Also, "usability engineering" focuses only on providing a range of techniques to analyze users, specify usability goals and evaluate designs, but it does not address the whole development process. It has more of a focus on assessing and making recommendations to improve usability than on design, though usability engineers may still engage in design to some extent, particularly the design of wire-frames or other prototypes.

Usability engineering is mostly seen as a separate activity that, from a process-oriented perspective, can be plugged into different SDLC models as a distinct set of activities.

Usability Engineering conducts evaluations through the following tools and methodologies:

  1. Usability testing
  2. Interviews
  3. Focus groups
  4. Questionnaires/surveys
  5. Cognitive walkthroughs
  6. Heuristic evaluations
  7. RITE method
  8. Cognitive task analysis
  9. Contextual inquiry
  10. Think-aloud protocol

User-Centered Systems Design (UCSD)

User-Centered Systems Design (UCSD) is a set of "usability design" processes focusing on usability throughout "the entire development process and further throughout the system life cycle". It is based on the following key principles:

  1. User focus: The goals of the activity, the work domain or context of use, the users' goals, tasks and needs should control the development.
  2. Active user involvement: Representative users should actively participate, early and continuously throughout the entire development process and throughout the system life cycle.
  3. Evolutionary systems development: The systems development should be both iterative and incremental.
  4. Simple design representations: The design must be represented in such ways that it can be easily understood by users and all other stakeholders.
  5. Prototyping: Early and continuously, prototypes should be used to visualize and evaluate ideas and design solutions in cooperation with the users.
  6. Evaluate use in context: Baseline usability goals and design criteria should control the development.
  7. Explicit and conscious design activities: The development process should contain dedicated design activities.
  8. A professional attitude: The development process should be conducted by effective multidisciplinary teams.
  9. Usability champion: Usability experts should be involved from the start of the project to the very end.
  10. Holistic design: All aspects that influence the future use situation should be developed in parallel.
  11. Process customization: The UCSD process must be specified, adapted and implemented locally in each organization. Usability cannot be achieved without a user-centered process. There is, however, no one-size-fits-all process.
  12. A user-centered attitude must be established: UCSD requires a user-centered attitude throughout the project team, the development organization and the client organization.

 

(Figure: The UCSD process cycle)

The typical process flow of UCSD can be visualized as the following steps (based on ISO/TR 18529:2000):

  1. Pre-study and business analysis: can be anything from a comprehensive analysis of work procedures, business processes, etc., to a brief statement or vision.
  2. Planning the user-centered systems design process: includes setting up the project with resources, activities, roles, methods, etc.
  3. Do iterative UCSD / usability design activities: this corresponds roughly to the usability design process described below.
  4. Formal summative evaluation: covers the usability of the resulting system, as opposed to the formative evaluations used in the usability design process to learn about details of the design.
  5. Introduce and operate the system: includes installation, change management, user training, evaluating long-term effects and so forth.

 

The focus of UCSD is all about "changing the attitude among all professionals involved in the software development process", and this set of principles is key for the "user-centered systems design process", which helps give "equal weight to interaction design, analysis and evaluation, combining interaction design, and usability engineering".

 

 

Usability Design

Usability Design is roughly a subset of the UCSD process, matching its "Do iterative UCSD" step.

(Figure: The usability design process)

The usability design outlines the steps in the development process involving usability design aspects. The process can be divided into three main phases:

  1. Requirements analysis: This step is synonymous with the planning and analysis phase of a typical software development life cycle (SDLC).
  2. Growing software with iterative design: This covers the design, testing and development phases of a typical SDLC.
  3. Deployment: This is the same as the deployment phase of a typical SDLC.

 

https://samirshomepage.wordpress.com/2013/09/23/dont-get-confused-ucd-vs-ucsd/ 

 

(c) 2012-14, Samir  Dash

 

 

Don’t get Confused: UCD vs UCSD

In my last posts I discussed Usability Design, User-Centered Design (UCD) and User-Centered Systems Design (UCSD). But many confuse UCD and UCSD, so in the following I try to differentiate the two:

UCD vs UCSD

UCD differs from UCSD in the following areas:

  1. Goal: The goal of UCSD is more about the process than the user, so as to make the final product/system more usable. UCD focuses more on the "users" of the product and not on the design process; more effort is spent on understanding the users and their needs.
  2. Process vs. set of techniques: UCSD is about system development, whereas UCD is mostly a set of techniques and process sets to be used within UCSD.
  3. Perception: The DNA of UCSD is about changing the mindset of the professionals in the development process so that the design aspect of usability can be put into practice freely and with higher priority. The UCD process is not about changing the perception of the priority of design in the whole process.
  4. Breadth: UCSD covers the whole process, including areas that are not even part of "designing", whereas UCD can be seen as a subset of UCSD focusing on the "design process sets".

UCD Models and Process

There are 3 different models that support UCD in varying degrees and follow the ISO standard on Human-Centred Design for interactive systems:

  1. Cooperative Design: This involves designers and users on an equal footing.
  2. Participatory Design (PD): Inspired by Cooperative Design, focusing on the participation of users
  3. Contextual Design:  “Customer-Centered Design” in the actual context, including some ideas from Participatory Design.

All these UCD models involve, more or less, a set of activities grouped into the steps mentioned below:

  1. Planning: In this stage the UCD process is planned and, if needed, customized. It involves understanding the business needs and setting up the goals and objectives of the UX activities. Forming the right team for the UX needs, and hiring specialists if needed, also falls into this step.
  2. User data collection and analysis: This step involves data collection through the applicable methodologies, such as user interviews, developing personas, conducting scenario, use-case and user-story analysis, and setting up measurable goals.
  3. Designing and prototyping: This involves activities like card sorting, conducting IA, wireframing and developing prototypes.
  4. Content writing: This involves content refinement, writing for the web and similar activities.
  5. Usability testing: This involves a set of activities for conducting tests and heuristic evaluations and reporting, to allow refinement of the product. Usability testing can have its own set of steps involving similar activities such as planning, team forming, testing, review, data analysis and reporting.

All these are similar to most of the steps that fall under Usability Design, as UCD can be seen as a subset of processes within Usability Design.

So many processes: What is where?

After going through the multiple related models in all the processes and sub-processes discussed in this post and the previous ones, it might be a little confusing to visualize all the overlapping and dependent process sets. So here is a simple representation diagram that roughly shows the overlapping relations:

(Figure: Relations between the overlapping processes)

Usability Design and User-Centered Design (UCD)

Usability Design is roughly a subset of the UCSD process, matching its "Do iterative UCSD" step.

(Figure: The usability design process)

The usability design outlines the steps in the development process involving usability design aspects. The process can be divided into three main phases:

  1. Requirements analysis: This step is synonymous with the planning and analysis phase of a typical software development life cycle (SDLC).
  2. Growing software with iterative design: This covers the design, testing and development phases of a typical SDLC.
  3. Deployment: This is the same as the deployment phase of a typical SDLC.

 

User-centered design (UCD) is a set of design processes in which "the needs, wants, and limitations of end users of a product are given extensive attention at each stage". It is characterized as a multi-stage problem-solving process involving designers who take the lead responsibility for foreseeing and solving the usability problems the users are likely to face while interacting with or using the system/product. UCD focuses on understanding the behavioral aspects of the user interacting with the product for the first time, so that the user's learning curve can be evaluated in order to optimize and reduce it. The user-centered design philosophy emphasizes optimizing the product around "how users can, want, or need to use the product, rather than forcing the users to change their behavior to accommodate the product".

Constantine and Lockwood define UCD as:

'. . . loose collection of human-factors techniques united under a philosophy of understanding users and involving them in design'. . . 'Although helpful, none of these techniques can replace good design. User studies can easily confuse what users want with what they truly need. Rapid iterative prototyping can often be a sloppy substitute for thoughtful and systematic design. Most importantly, usability testing is a relatively inefficient way to find problems you can avoid through proper design'.

Putting it straightforwardly, UCD is all about 4 factors, which are mostly related to the end user:

  1. Needs of users
  2. Limitations of  users
  3. Preferences of users
  4. Business objectives of the product.

This helps in achieving the following benefits:

  1. User satisfaction through a more user-friendly product experience
  2. Increase in customer/user loyalty
  3. Making the product more relevant and valuable for the user
  4. More value added to the product/system for its users

 

 

—–

(c)2012-13 : Samir Dash. All rights reserved.

 

 

SIMPLE: Digital Content Distribution from Cloud (Proof of Concept)

This is one of my proof-of-concept application demos on how to prepare an ecosystem for delivering protected digital content, especially in e-learning and m-learning contexts.

SIMPLE is an ecosystem that comprises tools such as:

SimpleAuthor: allows content authoring from common formats such as MS Office documents, PDFs and CBRs, along with options to author content using a cloud-based WYSIWYG editor.

SimplePublisher: allows you to import content such as sets of HTML files, ePubs, PDFs, text files, sets of images, CBR files and the output from SimpleAuthor.

SimplePlayer: a native course packaged with the required runtime that the end user will use to view the content. It supports content activation and offline content tracking. In this demo, the Player is an EXE file generated by SimplePublisher; in the Publisher POC, each course is generated as an EXE file containing both the content and the required runtime.

SimpleStore: in this POC it acts as an online store that showcases the different titles published using SimplePublisher. Basically, it manages the catalog and tracks the activation of each offline course.
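As a rough sketch of how the activation handshake between SimplePlayer and SimpleStore could work: the endpoint, key format and response fields below are my assumptions for illustration, since the POC's actual mechanism is not detailed here.

```python
# Hypothetical activation check; the URL and payload shape are illustrative assumptions.
import json
import urllib.request

STORE_URL = "https://simplestore.example.com/api/activate"   # placeholder endpoint

def activate_course(course_id: str, activation_key: str, device_id: str) -> bool:
    """Ask the store to validate an activation key so the offline course can be unlocked."""
    payload = json.dumps({
        "course_id": course_id,
        "activation_key": activation_key,
        "device_id": device_id,        # lets the store track activations of each offline copy
    }).encode("utf-8")
    request = urllib.request.Request(STORE_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read().decode("utf-8"))
    return bool(result.get("activated", False))

# The player could call this once when the user first opens the packaged course,
# then cache the result locally for offline use and periodic re-validation.
```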

Note: the SIMPLE ecosystem can be seen as an experiment based on the CUPID (Common Unified Programs for Instruction Delivery) guidelines. Check CUPID-related info at:

http://cupid.mobilewish.com/

Socio-cultural User Experience (SX) and Social Interaction Design (SxD): The Socio-cultural Context in UX and Usability Design

Summary

This paper introduces the ‘socio-cultural’ dimension of User Experience (UX) and Interaction Design (IxD) with reference to emerging devices and related eco-systems.

Background

With the emergence of Google Glass, topics related to devices infringing on personal privacy became hot subjects of tech debate. Many social scientists and human rights activists have started to see the ‘Glass’ as an evil reminiscent of George Orwell’s ‘1984’. The fear of a ‘Google Big Brother’ controlling the major share of the information world is seen as an intrusion into the private lives of ‘the public’.

Fig1: The backlash against Google Glass has become one of the major discussions in the recent tech world.
Example link: http://blogs.computerworld.com/privacy/22144/hey-dont-wink-me-glasshole-itbwcw

It is not that Google skipped user research and usability work before going ahead with the concept of people wearing a glass that may change the way we interact with systems in daily life. Usability-wise, it is definitely a super gadget with the potential to catapult the device industry into the next century. But the new features and interaction methods are implemented in the device in what is essentially a decade-old approach, one fit only for human-computer interaction (HCI) with smartphones and tablets, which are less likely to hurt the sentiments of those who do not directly interact with the device while the user performs actions in a certain socio-cultural context. These sentiments can result in fear of losing privacy, cultural distrust and humiliation among the second-hand users of the device, who are impacted indirectly by the device’s actions in that context.

There is an interesting article on the web titled “7 Innocent Gestures That Can Get You Killed Overseas” that gives nice examples of how innocent everyday gestures can actually get a person into trouble in a different socio-cultural context: http://www.cracked.com/article_16335_7-innocent-gestures-that-can-get-you-killed-overseas.html.

Therefore it is high time to rethink the UX domain and discover the missing pieces that we assume we have already found.

Socio-Cultural User Experience (SX) – the missing piece in UX

I have coined the term ‘SX’, aka ‘Socio-Cultural User Experience’, to represent the aspect of usability design or User Experience (UX) that deals with the usability of products/software in a social context.


Fig2: The UX model is incomplete without its ‘social’ context.
To make usability design meaningful the model must follow the social context

 

The social and cultural aspect of UX can be interpreted as a new component of UX – it is the missing piece that makes UX meaningful in a social context.

The traditional UX model is centered around the human user. But the critical factor that differentiates human beings from other beings is the ability to ‘evolve and follow social standards’. The history of human growth is actually more clearly represented in the “history of human civilization”. So any product design must adhere to the social factor of design, and this is what SX is all about.

Considering the ‘Others’ in the User’s Social circle

The existing UX model does not analyze needs beyond the current user and his ‘type’ when conducting a usability test; it never considers how the app/system impacts the other members of society while the target users are using it.

For example, using a car horn is a safety measure, but using it near a hospital or school is considered antisocial and disturbing. There are many social checkpoints that bar users of a system from using it in particular sociological contexts.

 

Fig3: The ‘no horn’, ‘no camera’, ‘no mobile phone’ and similar signposts can be seen as a few of the social guidelines on the use of different systems, represented graphically.

So implementing restrictive design based on social context, even though it sounds new, is actually an age-old tradition.

Criteria of a Good ‘SX’ Compatible System

Criteria of a sound usability design of an app in a socio-cultural context (a simple audit sketch follows the list below):

  1. Universal – has design elements that are universal.
  2. Ethical – follows principles and approaches that have positive ethical value.
  3. Non-racial – non-biased and non-provocative towards the user’s race and beliefs.
  4. Respectful – towards the user’s culture, social beliefs and ethnicity.
  5. Safety – its social impact is safe for the user.
  6. Non-abusive – must not exploit the user or the environment he is in.
  7. Common Sense – geared towards common sense; behaves and reacts to the user in a sensible way.
  8. Protect Privacy – the app’s features and interactions must protect the privacy of the user and of the other humans in his social circle.
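As a rough illustration of how this checklist could be applied in practice, the following Python sketch scores a feature against the eight criteria above. The scoring scale, function names and example ratings are assumptions for illustration, not part of any existing SX toolkit.

```python
# Hypothetical sketch: auditing a feature against the eight SX criteria above.
# The criterion names mirror the list; the 0-5 scoring scheme is an assumption.

SX_CRITERIA = [
    "universal", "ethical", "non_racial", "respectful",
    "safety", "non_abusive", "common_sense", "privacy",
]

def sx_audit(feature_name: str, ratings: dict) -> None:
    """Print a simple report; a rating below 3 (out of 5) flags a criterion for review."""
    print(f"SX audit for: {feature_name}")
    for criterion in SX_CRITERIA:
        score = ratings.get(criterion, 0)
        status = "OK" if score >= 3 else "REVIEW"
        print(f"  {criterion:<13} {score}/5  {status}")

# Example: auditing a hypothetical always-on camera feature.
sx_audit("always-on camera", {
    "universal": 4, "ethical": 3, "non_racial": 5, "respectful": 2,
    "safety": 3, "non_abusive": 3, "common_sense": 2, "privacy": 1,
})
```

The point of such an audit is not the numbers themselves but forcing the design team to discuss each criterion explicitly before a feature ships.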

Example of a Possible Enhancement to a Real-Life Product via ‘SX’:

Fig4: Google Glass has an inbuilt camera that can take pictures.

Let’s take the case of Google Glass.

Google Glass is designed in a way that makes it even more personal than a mobile handset: it is a spectacle, and it can become an indispensable accessory once the user replaces his conventional glasses with it.

But the built-in camera can pose a problem for the user when entering private areas, industrial areas, secure zones and offices where cameras are not allowed. In some parts of the world, cultural restrictions ban cameras in certain places; most temples in India, for example, do not allow cameras inside. Now imagine the user has replaced his traditional spectacles with the Glass: he may find it difficult to manage without it in these scenarios.

So, following the SX approach in usability design, the Glass would need a detachable camera, so that the user can remove the camera (powering it off) while continuing to use the Glass as a conventional spectacle.

This may be just one of the many features Google Glass has, but it is enough to illustrate the approach.

Social Interaction Design (SxD) – Helping IxD to Focus on Context and Environment of the User

I am using the term ‘SxD’, aka ‘Social Interaction Design’, for the aspect of Human-Computer Interaction (HCI) and Interaction Design (IxD) that deals with the ‘social aspect’ of usability design: how the user interacts with the app in a specific socio-cultural context.

Points to Focus on while designing a SxD Compatible System

  1. Provide multiple alternative interaction methods to control the same functionality in different socio-cultural contexts.
  2. The user should have total control over enabling/disabling interaction methods for different scenarios.
  3. The default interaction method must follow the ‘SX’ approach.
  4. Provide options for the user to switch between interaction methods as and when needed (a minimal configuration sketch follows this list).
  5. Alternative mechanisms should be provided for physically challenged users. Rethink the use of gestures and other interaction methods in the Section 508 context, as new devices with unpredictable (not necessarily negative!) interaction methods and features appear every day.
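As promised above, here is a minimal Python sketch of one way points 1–4 could be realized as a per-context interaction-method registry. The class, context and method names and the default policy are all assumptions for illustration, not an existing API.

```python
# Hypothetical sketch: an interaction-method registry that lets the user
# enable/disable input methods per socio-cultural context (points 1-4 above).

from typing import Dict, Set

class InteractionRegistry:
    def __init__(self) -> None:
        # Conservative 'SX' default: only widely accepted methods are on (assumed policy).
        self.defaults: Set[str] = {"touch", "keyboard"}
        self.overrides: Dict[str, Set[str]] = {}   # context name -> enabled methods

    def set_context(self, context: str, enabled: Set[str]) -> None:
        """Let the user choose which interaction methods are active in a given context."""
        self.overrides[context] = set(enabled)

    def enabled_methods(self, context: str) -> Set[str]:
        """Methods available right now: a user override wins over the SX defaults."""
        return self.overrides.get(context, self.defaults)

# Example: the user allows wink/voice gestures at 'home' but disables them in a 'temple'.
registry = InteractionRegistry()
registry.set_context("home", {"touch", "keyboard", "wink", "voice"})
registry.set_context("temple", {"touch"})
print(registry.enabled_methods("temple"))   # {'touch'}
print(registry.enabled_methods("office"))   # unknown context falls back to SX defaults
```

The design choice worth noting is that potentially sensitive methods (such as wink or voice) are never on by default; the user must opt in per context.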

Gestures and other Interaction Media of SxD:

The ‘Social Interaction Design’ approach covers the following major facets of system-to-user interaction in a socio-usability context:

  1. Facial Gestures – The selection of human-triggered facial gestures (e.g. wink, smile etc.) to activate the system or trigger any action must be judged against the canonical meaning of those gestures in the social and cultural context where the user is going to use them. For example, in the case of Google Glass, the feature of “winking” at someone to take a photo (the gesture developed by Google Glass developer Mike DiGiovanni http://news.cnet.com/8301-1023_3-57582500-93/google-glass-code-lets-you-snap-a-photo-with-a-wink/ ) can pose a problem if the user is in India or a Middle Eastern country. Even in the western world, winking at a lady or a group of ladies (even if no abasement is intended) can be taken as a negative action (e.g. weakness of character) and evoke anger and misunderstanding. So even if winking to take a photo is a ‘cool feature’, in a social context SxD suggests that the usability/interaction engineer rethink it, keep it disabled by default, and give the user total freedom to use his judgment to enable the feature in any given socio-cultural context.
     Fig5: The ‘wink’ gesture developed by Google Glass developer Mike DiGiovanni allows the user to take a snap of the surroundings with just a wink of an eye.

     

  2. Sound Gestures – The selection of sound gestures, i.e. the use of voice or sound patterns to control the system, should be examined for different user environments. For example, blowing a whistle to activate play functionality on a portable music player, or to open an SMS on a cell phone, can be an interesting feature, but it becomes useless on a busy street or in a meeting room where a discussion is going on.
  3. Touch-based Gestures – Touch, swipe and pinch are popular nowadays, as most tablets and smartphones offer them as user-friendly interaction methods. More devices are coming up that have no physical buttons at all; a few multi-touch gestures are enough to fully control them. However, ‘SxD’ stresses that devices must be designed and developed with interaction methods that provide alternatives to the available touch-triggered mechanisms. For example, while developing a digital medical instrument with a touch-sensitive display, the interaction methods should be carefully planned so that a surgeon can use the system without touching it, to avoid infections through contact while conducting any mission-critical surgery.
  4. Hand/Finger-based 3D Gestures – ‘SxD’ encourages a social analysis of the hand/finger-based gestures planned for a system: the gestures should be selected or invented by carefully studying the cultural context, avoiding common gestures used in daily life that are considered abusive to others. In addition, practical usage arising from the user’s environment and work culture must be given consideration. For example, the middle-finger gesture commonly used by youths as a crude, humiliating pun on others should not be used in any app expected to be popular among users from a similar demography. But note that considering the demography alone is not enough to decide on the gestures.
  5. Mouse/Keyboard Control – Similar to the gesture, voice and related interaction methods, mouse, keyboard, joystick and other typical input-device-based methods should be considered within the context in which they are going to be used. As this group of interaction methods is very old, many standard guidelines are already in practice. However, we need to rethink them and make sure they are up to date with the ever-changing human-computer interaction domain.

Going Forward

Understanding usability is the first step in developing a successful system/software. However, we also need to understand the social and cultural implications of the usability decisions we believe are good for the user, in order to make our software truly complete and meaningful in every aspect. ‘SX’ and ‘SxD’ are the two missing pieces of this puzzle, which we need to discover in order to rediscover UX.

 

 

Download the paper from Slideshare:

http://www.slideshare.net/MobileWish/sx-overview