DesOps: the Next Wave in Design (Part 2): The Design Systems

By definition, a Design System is a series of components that can be reused in different combinations to manage design at scale. In the context of the software development lifecycle of digital products such as apps and websites, a design system can contain design guidelines, visual style guidelines, assets, components, widget definitions, form controls, interaction paradigm definitions and so on, at different maturity levels, touching upon different roles from the interaction designer and visual designer to the prototyper and the developer.

Many organizations have pattern libraries, clubbed with best practices and guidelines, that serve as the Design System for the organization and the community. One such example is Red Hat’s PatternFly. When you visit the PatternFly website, you find a typical design system with different components including interaction paradigms, widgets, working code, visual style guides and assets, touching upon many roles involved in the delivery thread. The same is true of many other popular design systems. Among the most notable Design Systems as of today are Google’s Material Design, BBC’s GEL, Salesforce’s Lightning, Airbnb’s Design Language, IBM’s Northstar and so on.

Style guides, pattern libraries and design systems play a critical role in normalising practices and design patterns around a common visual language. Design systems can vary in maturity. I have mentioned the case of “different maturity” with purpose: variations of the design-system concept are already in place, and each variation has a slightly different efficiency in solving the common goal of designing at scale. At the bare minimum level, a design system can simply start with naming objects, colours, text styles, components and conventions, or even best practices. This can scale up to the highest maturity level, where it defines the finer details of the experience and can even end up in generative solutions in themselves, e.g. frameworks which may dynamically provide or define the experience details required for delivery.
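The bare-minimum maturity level described above, naming colours, text styles and components, can be sketched as a token dictionary that generates a shared artefact such as CSS custom properties. This is only an illustrative sketch; all token names and values here are hypothetical, not from any real design system:

```python
# Hypothetical design tokens: the "named objects, colours, texts" of a
# minimal design system, kept in one place as a single reference.
DESIGN_TOKENS = {
    "color-brand-primary": "#0066cc",
    "color-text-default": "#1a1a1a",
    "font-size-body": "16px",
    "spacing-unit": "8px",
}

def tokens_to_css(tokens, selector=":root"):
    """Render a token dictionary as CSS custom properties."""
    lines = [f"{selector} {{"]
    for name, value in sorted(tokens.items()):
        lines.append(f"  --{name}: {value};")
    lines.append("}")
    return "\n".join(lines)

print(tokens_to_css(DESIGN_TOKENS))
```

Because both designers and developers read from the same token source, renaming or retinting a colour happens in one place, which is exactly the "designing at scale" benefit a design system promises.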

Design Systems have become so popular that they are arguably a default need of any product organisation of even limited scale. Introducing one usually becomes the very first push for an organisation to look at its design operations with fresh eyes, as a design system brings with it a whole set of new problems that most digital product design organisations were previously ill-equipped to deal with on their own. The success of a design system’s implementation rides on the support it gets operationally.

No doubt, Design Systems are a primary need for the organisation, even on a limited scale and budget. However, most design systems are developed in an operational vacuum. You can have the best-designed components, but if you can’t create a workflow for their upkeep and deployment, not just through your product offerings but through your design production system itself, the design system will invariably cause more friction than the previous system did.

When you start with the pain points of a Design System, you are starting your operations with a solution to a specific problem. This lens answers the question: “How do we scale design quality through the enterprise?”

Why Do We Need Design Systems?

By nature, a Design System helps teams take decisions faster and reduces wastage in the delivery track. This frees designers to spend more time defining workflows. And because it enables working in fast-track mode, it also lets designers explore multiple concepts in the same amount of time.

Having a Design System means having a “single source of truth” that can be referred to by all the different roles, including engineering, testing as well as design, throughout the delivery lifecycle. This helps in articulating as well as implementing consistency across different modules of the application, or even across the product line. However, to ensure the Design System acts as a “single source of truth”, certain criteria need to be fulfilled, and this is also the crossroads where a Design System that acts as the “single source of truth” paves the way for DesOps.

Design System as “Single-Source-of-Truth”

Let’s first understand what we really mean when we refer to a Design System as a “Single Source of Truth”. In the enterprise, any design ultimately has one major alignment: to align itself with the business goals of the enterprise, which also drive the product strategy as well as the delivery, and impact many associated aspects such as the delivery lifecycle with its typical Design, Development and Quality tracks. In most cases the associated operational systems, made up of processes, practices and tools, are development-centric. The traditional SDLC models were designed from a process perspective biased towards the development side of delivery. So when a Design System is established in an organisation, the normal tendency is to create one that fills the gaps between these different entities and their associated process blocks. A Design System that touches all these entities and process blocks in a way that helps optimize the delivery of the product or improve its quality, by becoming the single driving force of the design aspect (in the creative problem-solving sense), can be regarded as a “Single Source of Truth”. But in reality the different roles, the Designer, the Developers and the Quality Assurance team members, each play within the boundaries of age-old process definitions. The tools, technologies and processes they use are so diverse, and the workflows among them so disjointed, that on the ground having a Design System as the “Single Source of Truth” has not been an easy thing to achieve.

Designer Tools vs Developer Tools

One key aspect is that in modern times the use of Agile, along with design-driven practices like Design Thinking and User-Centred Design, has tried to reduce the tilt toward engineering, and has certainly helped many organisations deliver products with qualitative design more effectively. However, the Design System developed in many organisations still involves a set of tools and processes that suffer from that bias: the selected tools and technology-driven workflows remain developer-centric. For example, tools like TextEdit or Notepad and command-line terminal windows are highly preferred by developers, and the workflows defined around such tools make developers more productive and help them improve their craft, which is coding. When the processes and tools are themselves technology-driven and developer-driven, and such a system is integrated with the standard components of a design system like style guides, code, widgets etc., the generation and maintenance of those components by designers suffers, because designers come from a different world, with their own way of working and their own GUI-driven tools, from mind mapping to wireframing to creating visual assets. This is what brings disjointed experiences to design operations, preventing a seamless full circle of design-development operations. The result of such a design system is always a patchwork that does not introduce the fundamental process changes needed to meet the goals of DesOps, most of which centre around consistency, collaboration, continuous integration and continuous design. Achieving this is actually a goal that aligns with the core vision of DesOps.

We will continue exploring Design Systems in the context of DesOps, and touch upon topics like Design System maturity, the Living Design System and the concept of generative systems, in part 3 of this DesOps article series. So stay tuned, and do feel free to share the word!

© 2018, Samir Dash. All rights reserved.


Re-imagining Beta Testing in the Ever-Changing World of Automation.

Based on the paper on Beta Testing presented at Red Hat QE CampX, Bangalore on 7 Dec 2017

Beta Testing is fundamentally ‘a test of a product performed by real users in the real environment’. Of course, there are many tags we use for testing of similar characteristics: User Acceptance Testing (UAT), Customer Acceptance Testing (CAT), Customer Validation, Field Testing (more popular in Europe) etc. Whatever tag we use for these testing cases, the basic components are more or less the same: they involve user testing and front-end UI as well as UX-related testing to find potential issues and rectify them. Also, this always happens in the iteration of the SDLC where the idea has been transformed into a design and has passed the development phases, by which point the unit testing and integration testing have mostly already happened.

Basically, the Beta stage of Product Life-cycle Management (PLM) is the perfect opportunity to hear from the target market and plan for the road ahead. When we zoom into this phase of testing, it has a broad spectrum that ranges from front-end or UI-related testing (UI functionality, cosmetics, UI-level interaction and visual look and feel) at one end, to User Experience (UX) at the other, including user testing with A/B (split) testing, hypotheses, user behaviour tracking and analysis, heatmaps, user flows and segment preference studies, exploratory testing and feedback loops.
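One of the user-testing techniques mentioned above, A/B (split) testing, ultimately reduces to comparing the conversion proportions of two variants. As a minimal sketch (not any particular tool's implementation, and with made-up numbers), a two-proportion z-test gives a first-pass significance check:

```python
import math

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a simple A/B (split) test.

    conv_* = number of conversions, n_* = number of users shown each variant.
    Returns the z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant B converted 165/2400 vs A's 120/2400.
z, p = ab_test_significance(120, 2400, 165, 2400)
```

With these illustrative figures the difference comes out statistically significant at the usual 5% level; a real beta-testing tool would of course layer segmenting, sequential-testing corrections and reporting on top of this.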

Popular beliefs about the goodness of Beta Testing shape the typical tools that enable users to carry out such tests: shortening beta cycles, reducing time investment, increasing tester participation, improving the feedback loop and visibility, etc. However, if we dive deeper into the factors behind the existence of these tools, we will find two major reasons that advocate the importance of Beta Testing.

1. The left-right brain analogy that points to the overlap of human and technology.

The typical belief is that the left-hand side of the brain mostly processes logical thinking, whereas the right side handles the more emotional aspects of thought. Based on this popular analogy, when we map the different aspects involved in the different stages of the SDLC for a digital product across a straight line from left to right (refer to the diagram below), we notice that the logical and the more human-centred aspects are divided by an imaginary line at the centre. We also notice a gradual progression of the emotional index of the components from left to right.

And when we map these to the beta testing phase, we notice that the right-hand components are predominant in such testing. As users, as humans, we are more emotionally connected to those aspects of the product which are validated or verified in Beta Testing, making Beta Testing one of the most important testing phases in any SDLC.

Another interesting point to note is that when we look at the traditional software approach to defining “criticality”, the areas tested during UAT / Beta mostly fall into Class 3 and 4 criticality. But because they touch core human aspects, they become more important.

To illustrate this, here is a nice video (https://www.youtube.com/watch?v=-jHDyhJdDb8) that reflects the importance of technology touching human emotions. The YouTube video was posted by a popular brand that offers glasses for colour-blind people which can correct colour-blind vision in real time for the end user. Interestingly, this is about “Accessibility”, one of the aspects typically covered during Beta Testing. Looking at accessibility in the context of the video, the question naturally comes: “What can we do for this father and son, as a tester or a developer or a designer?” And when we look at the stats, we find the number of people accessibility impacts is huge: every one in five persons is challenged by some kind of disability. Unfortunately, some reports indicate that more than 90% of websites in 2011 were not conformant to the W3C’s accessibility guidelines.
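As a small, concrete example of the kind of accessibility check a beta-testing tool can automate, here is the colour-contrast computation defined by WCAG 2.x (relative luminance and the (L1 + 0.05) / (L2 + 0.05) contrast ratio); this is a sketch of the published formula, not any specific tool's code:

```python
def relative_luminance(hex_color):
    """Relative luminance of an #rrggbb colour, per the WCAG 2.x definition."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# WCAG 2.x level AA asks for at least 4.5:1 for normal-size body text.
ratio = contrast_ratio("#777777", "#ffffff")
passes_aa = ratio >= 4.5
```

Black on white yields the maximum ratio of 21:1; automating checks like this is exactly the sort of “empower the real user” feature discussed later in this article.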

This itself shows the human angle and advocates why Beta Testing is important: to ensure these aspects are validated and verified so that the target users’ needs are fulfilled and do not go unattended.

2. From the standards perspective: evaluating from the ISO/IEC 9126-4 (2001) dimension that defines the difference between usability and quality as a perceptive line.

The International Organization for Standardization (ISO) has evolved its standards around quality vs. usability over time. In 1998, ISO identified efficiency, effectiveness and satisfaction as the major attributes of usability. Just a year later, in 1999, the proposed quality model involved an approach to measuring quality in the light of internal software quality and external factors. In 2001 the ISO/IEC 9126-4 standard suggested that the difference between usability and quality in use is a matter of context of use. The same standard also distinguished external versus internal quality and defined related metrics; metrics for external quality can be obtained only by executing the software product in the system environment for which it is intended. This shows that without usability / HCI in the right context, the quality process is incomplete. One interesting point to notice is that the “context” referred to here is exactly what is fundamental to beta testing, i.e. “real users in a real environment”, making the case for Beta Testing stronger.

Now that we know why Beta testing is important, let’s explore the challenges involved with the Beta stage. If we look at any of the defined standards, including ISO/IEC 9126, most of them are static: none of these models really describes the relationship between phases in the product development cycle and appropriate usability measures at specific project milestones. We should also note that the standards give relatively few guidelines about how to interpret scores from specific usability metrics. And specific to usability as a quality factor, it is worth noting that usability is the aspect of quality where the metrics have to be interpreted.

In this light, when we look at the popular Beta Testing tools of today, we notice that the top tools leave the interpretation to the customer’s or end user’s discretion. This brings us to our number one challenge in Beta Testing: how do we filter out pure perception from actual, valid issues and concerns? As most of the issues are related to user testing, split testing and front-end testing, there is no optimized single-window solution that is smart enough to handle this effectively. Real users in a real environment, unless empowered, are handicapped in comprehending all the aspects of beta testing and reacting to them. Also, much of it is perspective, and not all of it can be validated against real data from benchmarks or standards.

The 2015-16 edition of the World Quality Report indicated that expectations from Beta testing are changing dramatically. It hinted that customers are looking for more product insights through a reliable way to test quality and functionality, along with the regular usability and user testing, in a real customer-facing environment.

It is not only Beta Testing: in the overall testing scenario, user demand is also being impacted by rising complexities and challenges, which are increasing due to accelerated changes in technology, development and delivery mechanisms and approaches. The 2017-18 World Quality Report notes that the test environment and test data continue to be the Achilles’ heel for QA and Testing, and that the challenges of testing in agile development are increasing. There is now a demand for automation, mobility and ubiquity, along with smartness, in software quality testing. Many believe that analytics-based automation solutions will be the first step in transforming to smarter QA and smarter test automation.

While this is true for overall QA and testing, it is also true for Beta Testing, even though this testing, unlike unit testing, system testing etc., does not deal only with the functional aspects of the product.

Let’s see where we stand today against this benchmark. If we explore the popular beta testing solutions, we find a big vacuum in the area where the user’s need for more functional aspects is mapped along with the usability and user-testing aspects. You can also notice in the diagram that there is ample space to play around with smart testing scenarios using cognitive computing, automation and machine learning.

(Note: Above figure shows my subjective analysis of the competitive scenario.)

Basically, we lack “Smartness” and proper “automation” in Beta Testing Solutions.

Apart from all these, there are more challenges we can notice if we start evaluating user needs from the corresponding persona’s viewpoint. For example, even when the functional aspect is to be validated, the end user or the customer may be unable to recognize it. The product owner, customer or end user, the “real users in a real environment”, belong to a user segment that may not be aware of the nuts and bolts of the technology involved in developing the product they are testing to sign off. It is like the classic example of a customer who is about to buy a second-hand car and inspects the vehicle before making the payment: in most cases, he is paying the money without being able to recognize “what’s inside the bonnet!” This is the ultimate use case advocating that we “empower the customer”.

Now, how do we empower the end user or the customer? The tools should support this in a way that gives the user peace of mind while validating the product. Unfortunately, the many small tools that try to solve some of these little issues to empower the user (for example, a Google Chrome extension that helps analyze CSS and create a report, or an on-screen ruler the user can use to check alignment) are scattered. The ground reality is that there is no single-window, extension/widget-based solution available. Not all widgets are connected, and of those that are available, not all are comprehensible to the customer or end user, as almost all of them are developer- or tester-centric. They are not meant for a customer without special skills!

 

When we look at the automation solutions in testing, as part of Continuous Integration (CI) and Continuous Delivery (CD), they are engaged and effective mostly in the “pre-beta” stages of the SDLC. They also require specific skills to run. With the focus on DevOps, in many cases CI-CD solutions are being developed and integrated with new-age solutions to cope with the rising complexity of technology stacks, languages, frameworks etc. But most of them are meant for skilled dev or test specialists to use and execute. And this does not translate well to Beta testing, where the end user, the customer, the “real user in a real environment”, is involved.

Apart from all this, even assuming we can enable all these automation features in Beta, there is still another limitation in the existing solutions. The employment of automation brings its own challenge of “information explosion”: the end user has to deal with a higher volume of data from automation, and with so much information, the user will struggle to get a consolidated and meaningful insight into the specific product context. So what do we need to solve these challenges? Here is my view: we need a smart, connected, single-window beta testing solution with automation that is comprehensible to end users in a real environment, without the help of the geeks.

For some time over the last few years, I have been exploring these aspects of the ideal Beta Testing solution and working on a model and a proof of concept called “Beta Studio”. The ideal beta testing solution should have all of these: utilize data from all stages of the SDLC and PLM, along with standards, specs and user-testing data, to provide more meaningful insights to the customer; test the real application in a real environment with real users; be customer- as well as end-user-centric; test the soft aspects of the application (usability, accessibility, cosmetics etc.); and be smart enough to compare and analyze these soft aspects against functional testing data.

It should use machine learning and cognitive computing to make more meaningful recommendations, and not just dump information about bugs and potential issues.

Here is an indicative vision of Beta Studio:

 

Broadly, this vision of the ideal beta testing solution touches upon all the aspects we just discussed. It also touches all the interaction points of the different personas, e.g. customer, end user, developer, tester, product owner, project manager, support engineer etc., across the whole product lifecycle, and utilizes automation along with machine-learning components such as Computer Vision (CV) and Natural Language Processing (NLP) to gather information, which is then processed by the cognitive layer to generate the desired insights about the product and recommendations. During this process, the system involves data from standards and specs, along with the design benchmark generated from inputs at the design phase of the SDLC, so that meaningful insights can be generated.

In order to translate this vision into reality, what do we need? The following diagram hints at exactly that:

Basically, the first step should involve creating a design benchmark from the information available at the design stage, which can then be used to compare the product’s features against metrics based on this benchmark.
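As a minimal sketch of this first step, a design benchmark can be a flat map of design properties captured from the design files, against which values measured from the running product are compared. Every property name and value below is hypothetical, purely to illustrate the shape of the comparison:

```python
# Hypothetical benchmark captured from the design stage (e.g. via a tool
# like Specstra extracting values from the design files).
DESIGN_BENCHMARK = {
    "button.primary.background": "#0066cc",
    "button.primary.font-size": "16px",
    "body.line-height": "1.5",
}

def compare_to_benchmark(measured, benchmark=DESIGN_BENCHMARK):
    """Return (property, expected, actual) tuples where the product deviates
    from the design benchmark; an empty list means full conformance."""
    issues = []
    for prop, expected in benchmark.items():
        actual = measured.get(prop)
        if actual != expected:
            issues.append((prop, expected, actual))
    return issues

# Values as measured from the running product at beta time.
measured = {
    "button.primary.background": "#0066cc",
    "button.primary.font-size": "14px",   # drifted from the spec
    "body.line-height": "1.5",
}
issues = compare_to_benchmark(measured)
```

A real system would of course need tolerances (e.g. rounding of computed pixel values) and a richer model than flat key-value pairs, but the metric idea, deviation of the measured product from the design benchmark, stays the same.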

The second step is automated and manual tracking of the product during runtime, in real time, so that these data can be categorized and collated. The third step involves creating features to support the user feedback cycle and the user-testing aspects (exploratory testing, split-testing capabilities).

The fourth step would be to collect all the standards and specifications on different aspects, e.g. the Web Content Accessibility Guidelines (WCAG), Section 508, the Web Accessibility Initiative’s ARIA specs, design principles, W3C compliance, JS standards, CSS standards and grids, code-optimization metrics, error codes and specs, device-specific guidelines (e.g. the Apple Human Interface Guidelines) etc.
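To make this step concrete, one of the simplest checks derivable from those collected standards is WCAG's text-alternatives requirement: every meaningful image needs an `alt` attribute. A tiny sketch of such an audit, using only Python's standard library (the sample HTML is invented for illustration):

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Count <img> tags that lack an alt attribute entirely.

    Note: alt="" is deliberately treated as present, since an empty alt
    is the valid way to mark a purely decorative image under WCAG.
    """
    def __init__(self):
        super().__init__()
        self.total = 0
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            if "alt" not in dict(attrs):
                self.missing += 1

audit = AltTextAudit()
audit.feed('<p><img src="a.png" alt="Sales chart"><img src="b.png"></p>')
```

A full standards-driven audit would aggregate many such rules (contrast, ARIA roles, heading order, CSS validity) into the benchmark data the later steps consume.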

The fifth step is about building the critical components, such as the Computer Vision and Natural Language Processing units, which will process all the data collected in these stages and generate the desired metrics and inferences.

The sixth step involves building the unit that generates the model to map the data and compare it against the metrics. The final, seventh step is to build the cognitive unit that can compare the data, apply the correct models and metrics to filter it, and generate insights that can be shared as actionable output with the end user or customer.

While experimenting for BetaStudio, I explored different aspects and built some bare-bones POCs. For example, Specstra is a component that can help create a design benchmark from design files.

With Specstra, I was trying to address issues related to the cosmetic aspect, the visual look and feel. Typically, more than 30%, or one-third, of issues are non-functional and mostly cosmetic, yet there is no reliable solution that helps benchmark such issues against specific standards. At a minimum, one-third of the issues found during the beta / UAT stages of testing are cosmetic and alignment issues, including category 3 and 4 types. And these happen mostly because the two personas involved, the developer and the designer, have their own boundaries defined by a mythical fine line.

When a developer is in focus, roughly 45% of them are not aware of all the design principles or the UX heuristics to be employed. Similarly, half of designers are not aware of more than half of the evolving technological solutions around design.

And in roughly three-fourths of projects, we do not get detailed design specs to benchmark against. Detailing out a design spec comes with a cost and requires skills; in more than two-thirds of development cases there is no detailed design with specs. Where designs do exist, many are not standardized, and most do not have clear, detailed specs. Also, since design is carried out in different tools, it is not always easy to have a centralized place where all the design info is available for benchmarking.

To solve this, Specstra comes in handy. It is an automation POC, essentially a cloud-based visual design style guide generator that works from third-party design source files; this covers the case where the user would like to continue using existing design tools like Photoshop, Sketch, Illustrator, PDF etc.

You can view the video of the demo here

 

https://youtu.be/_o4taIXCvck or read more here https://medium.com/design-ibm/specstra-experimenting-with-automation-in-design-industry-4641c0b4244d or here https://yourstory.com/read/71b0d89e20-specstra-experimenting-with-automation-in-design-industry

Similarly, the single-window solution for tracking and getting real-time data on accessibility, visual design, CSS, JavaScript, environment etc. from the product is explored in this POC. View a video of the Beta Studio POC here: https://youtu.be/kItqD5wc4_4.

 

I know that reaching the goal of an ideal beta testing solution will need effort and time, and the concept will also evolve. But for sure the journey has started for all of us to connect and explore how to make it a reality.

Feel free to ping me in the comment section of this article.

To explore the open-source BetaStudio POC, follow the link here: https://github.com/betaStudio-online (I will be uploading all the code to these repos in the coming times).

Specstra: My Experiments in UI Automation to Extract CSS, Assets from Design Files

This is a proof-of-concept / experimental exploratory project I spent my weekends on during 2013-15, to come up with a cloud-based tool that lets designers and developers quickly get assets, CSS and element details (position, dimension, shapes, raster, text-formatting info etc.) from a design file or screen mockup, without requiring them to have Adobe software suites like Photoshop installed.

The major challenges I faced were in reading Adobe’s proprietary file formats like PSD to extract separate information on design elements, especially fonts and formatting information, without using any of the available Adobe technology. I invented many workarounds and implemented them in an assembly-line kind of architecture (i.e. the chain-of-responsibility design pattern) to ensure that multiple design-file uploads (each with file sizes from a few hundred MB to 1 GB) were processed successfully without crashing or overburdening the cloud system. The implementation also required image-processing tasks to achieve goals like the creation and export of assets at specific resolutions and the rendering of red-lines on the fly.
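The assembly-line (chain-of-responsibility) idea mentioned above can be sketched in a few lines: each handler performs one stage of processing on an uploaded design-file job and passes it on to the next. This is a generic illustration of the pattern, not Specstra's actual code, and the stage names are invented:

```python
class Handler:
    """One stage in the assembly line; forwards the job to the next stage."""
    def __init__(self, name, work, nxt=None):
        self.name, self.work, self.nxt = name, work, nxt

    def handle(self, job):
        self.work(job)                 # do this stage's processing
        if self.nxt:
            self.nxt.handle(job)       # pass the job down the chain
        return job

def build_pipeline(stages):
    """Chain (name, work) stages so each hands the job to the next."""
    head = None
    for name, work in reversed(stages):
        head = Handler(name, work, head)
    return head

# Illustrative stages standing in for PSD parsing, asset extraction
# and red-line rendering.
job = {"file": "mockup.psd", "log": []}
pipeline = build_pipeline([
    ("parse",   lambda j: j["log"].append("parsed layers")),
    ("extract", lambda j: j["log"].append("extracted assets")),
    ("render",  lambda j: j["log"].append("rendered redlines")),
])
pipeline.handle(job)
```

In a production setting each stage would run as an independent worker pulling jobs from a queue, which is what keeps one multi-hundred-MB upload from blocking or crashing the whole system.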

The technologies used were: PHP, MySQL, Python, Perl, Ruby, Shell Scripting, HTML5, CSS3, JavaScript, Canvas & Node.

FEATURES:

Extract design info from a PSD comp for use with HTML and native app projects, like mobile and desktop designs, with Specstra.

Cloud-based: a single dashboard to manage all design files.

Export design elements / assets, e.g. raster (PNG, JPG), vectors (SVG) and CSS.

Dynamic selection of design elements/assets from the file.

Detects nested vector shapes, text objects and images/raster from the design file.

Detects the colour palette from the design file.

Specstra