In Search of Truth: At the Crossroad of Critical Theory and Technology in the DesOps World

[This is the paper I delivered at the Department of English, Baba Bhairabananda Autonomous Mahavidyalaya, Jajpur, Odisha (http://bbmchandikhole.org/) on 16th July]

Abstract:

This UGC seminar session was an attempt to understand, through a non-traditional lens, the relevance of critical theory in the context of today’s ever-changing technology space. As that space moves towards Automation, Artificial Intelligence, Machine Learning and Distributed Computing, critical theory becomes more important than ever, because it deals with the softer aspects of human identity and the socio-cultural dimension expressed through communication and human expression.

This interactive session will have two major areas of focus:

1. A brief overview of critical theories through a diachronic lens, helping the students grasp the fundamentals in a socio-cultural context.

2. A cross-discipline comparison with modern design-driven practices in the software industry, helping the students understand the potential and the real-world opportunities where these theories apply.]


© 2018, Samir Dash. All rights reserved. Creative Commons Attribution-Share Alike 4.0 International License

Download Link: 

https://www.slideshare.net/MobileWish/in-search-of-truth-at-the-crossroad-of-critical-theory-and-technology-in-desops-world-16-july-2018-bbautonomus-college-chandikhol

More info:

http://desops.io/2018/07/17/download-the-paper-in-search-of-truth-at-the-crossroad-of-critical-theory-and-technology-in-desops-world/


DesOps: the Next Wave in Design (Part 7) The 10 Practices

A method, procedure, process, or rule used in a particular field is typically defined as a practice for that field. In the last article, we looked at the 10 guiding principles that drive DesOps. Unsurprisingly, the practices involved in DesOps echo the same principles to the core. Note that we are still discussing the culture driven by DesOps / DesignOps, which is typically fuelled by these practices.

Here are the broad practices that drive the DesOps philosophies:

1. Design Thinking

This practice ensures that we employ creative problem solving using the typical methodologies and tools of Design Thinking. Here are some quick notes and a list of the methodologies and tools used in the IBM version of it, which follows the fundamental principles of Design Thinking: https://medium.com/eunoia-i-o/quick-guide-notes-on-the-ibm-design-thinking-78490d7433dd.

The typical tools of Design Thinking, such as Stakeholder maps, Experience maps (As-Is and To-Be), Personas, Roadmaps, MVP, Kano modelling, Storyboarding, Priority grids etc., are coupled with the continuous practices described below to implement the continuous loop of the Design Thinking practice that underpins the DesOps philosophies.

2. User-centred Design (UCD) and Usability Design

Users (both the typical user / persona from the UX angle and the segment from the marketing/business context) are at the core of DesOps. Any design solution generated is fundamentally an advocacy of user needs and tries to direct the business goals to build upon them. In such cases the business goals are also market specific and are based on the pulse of the segments driven by user needs. You can find a quick note on UCD and usability design here: https://www.linkedin.com/pulse/20140702070557-9377042-usability-design-and-user-centered-design-ucd/. As UCD or usability design focuses on the iterative design approach of the User Centred System Design (UCSD) process, it fundamentally contributes to DesOps goals.

(Fig – Source: UX Simplified: Models & Methodologies, 2014, ISBN 1500499587 )

UCD supports growing the product through iterative design, fuelled by three models of design that also contribute to typical Design Thinking as well as Lean UX, namely:

  • Cooperative Design: This involves designers and users on an equal footing.
  • Participatory Design (PD): Inspired by Cooperative Design, focusing on the participation of users
  • Contextual Design: “Customer-Centered Design” in the actual context, including some ideas from Participatory Design.

Irrespective of which of the above models we follow, all these UCD models involve more or less the same set of activities, grouped into the following steps:

  • STEP 1 – Planning: In this stage the UCD process is planned and, if needed, customized. It involves understanding the business needs and setting up the goals and objectives of the UX activities. Forming the right team for the UX needs and, if needed, hiring specialists also fall into this step.
  • STEP 2 – User data collection and analysis: This step involves data collection through different applicable methodologies such as user interviews, developing personas, analysing scenarios, use cases and user stories, and setting up measurable goals.
  • STEP 3 – Designing and Prototyping: This involves activities like card sorting, working out the IA (information architecture), wireframing and developing prototypes.
  • STEP 4 – Content writing: This involves content refinement, writing for the web and similar activities.
  • STEP 5 – Usability testing: This involves a set of activities covering conducting tests, heuristic evaluations and reporting, to allow refinement of the product. Usability testing can have its own set of steps involving similar activities, such as planning, team forming, testing, review, data analysis and reporting.

And you will see that all of these naturally fall into place when DesOps is implemented.

3. Hypothesis-Driven Design/Development (HDD) & Data-driven Decision Making

The DesOps story remains incomplete without referring to one of its key practices, Hypothesis-Driven Development (HDD), which certainly contributes to a service design approach like DesOps and brings possibilities of change that inculcate design thinking, innovation and organisational change. It also promotes lifecycle methods and adjusts them to ensure an integrated workflow and work culture that can make the best use of data-driven decision making, by running multiple early-stage experiments (synonymous with what we are trying to achieve through the continuous feedback loop and prototyping) and gathering insights from their outcomes (and not just output!). Another interesting thing is that HDD advocates the use of UCD approaches, as it focuses on making an assumption, running experiments to validate it with measurable data, and then taking action based on the result.
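To make this concrete, here is a minimal, purely illustrative sketch (in TypeScript; the names and thresholds are hypothetical, not part of any specific HDD toolkit) of how a hypothesis can be captured as a falsifiable statement with a measurable signal and a decision rule:

// A hypothesis is only useful when it names a measurable signal and a decision rule.
interface Hypothesis {
  belief: string;            // what we assume to be true
  experiment: string;        // the cheapest test that could prove us wrong
  metric: string;            // the measurable signal we will watch
  successThreshold: number;  // the value the metric must reach to validate the belief
}

interface ExperimentResult {
  observedValue: number;     // e.g. task completion rate observed in a prototype test
  sampleSize: number;
}

// Decide the next action from the outcome (insight), not from the output (features shipped).
function decide(h: Hypothesis, r: ExperimentResult): "persevere" | "pivot" | "rerun" {
  if (r.sampleSize < 5) return "rerun";  // too little data to learn from
  return r.observedValue >= h.successThreshold ? "persevere" : "pivot";
}

// Example: validating a design assumption with an early low-fidelity prototype.
const onboardingHypothesis: Hypothesis = {
  belief: "New users can complete sign-up without help",
  experiment: "5-user hallway test on the clickable prototype",
  metric: "unassisted completion rate",
  successThreshold: 0.8,
};

console.log(decide(onboardingHypothesis, { observedValue: 0.6, sampleSize: 5 })); // "pivot"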

We will elaborate on HDD-driven methodologies in the context of DesOps as we move on in this series of articles.

4. Agile / Iterative Life Cycle

There are several challenges in integrating UX design and related activities into a typical agile software development lifecycle. The most common problem is “finding a balance between up-front interaction design and integrating interaction design with iterative coding with the aim of delivering working software instead of early design concepts”. This happens mostly because typical pure SDLC approaches primarily aim at “efficient coding tactics together with project management and team organization instead of usability engineering”. That Agile is more “a way of thinking about creating software products” rather than a specific process or methodology hints at the challenges of UX integration, since integrating user research and UX design with agile is itself an “agile antipattern”. The very idea of an SDLC, as a process for developing software, traditionally never kept the “user” in focus, nor did it keep any scope for methodologies that try to bring in components not considered native ingredients of the process of creating a software product. The focus was always the “cost, scope, and schedule” that drive any traditional SDLC model, including Agile. Sure enough, this typically gives rise to challenges for UX integration into any SDLC, as project managers never try to upset the balance of these three by reducing costs, tightening deadlines, and adding features to the specification. To know more about the typical challenges we face while implementing design / UX in an Agile SDLC, read my earlier article here: https://www.linkedin.com/pulse/20140706143027-9377042-challenges-in-ux-integration-with-different-sdlc-models/

(Fig – Source: UX Simplified: Models & Methodologies, 2014, ISBN 1500499587 )

However, there are ways to mend the gaps in process-driven lifecycle models such as Agile; one of them is to implement a usability design model (discussed above as a practice of DesOps). The usability process supplements a software development lifecycle at various stages; it is not a complete product development process, as it does not output the final product at the end of its cycle. One such solution is reflected in the diagram below:

(Fig – Source: UX Simplified: Models & Methodologies, 2014, ISBN 1500499587 )

And it is interesting to see that each cycle in such a solution actually contributes to a continuous cycle of Conceptualize – Design – Build – Test that aligns nicely with DesOps principles and the other practices.

5. Lean UX Approach

The Lean UX practice is built on the Lean philosophies, which focus primarily on reducing waste in the process and provide ways to simplify and expedite delivery while keeping quality intact or enhancing it. Interestingly, Lean UX is based on three foundations that are also part of the DesOps practices list we are discussing:

  • Design Thinking: This foundation upholds the concept that “every aspect of a business can be approached with design methods” and gives “designers permission and precedent to work beyond their typical boundaries”.
  • Agile Software Development: Core values of Agile are the key to Lean UX.
  • Lean Startup method: Lean Startup uses a feedback loop called “build-measure-learn” to minimize project risk and get teams building quickly and learning quickly.

No practice used in Lean UX is new. Rather, it is “built from well-understood UX practices”. Many techniques used over time in various UX processes, and which have practical usability even today, have been packaged properly in Lean UX. So the following pillars also support DesOps, as inherited from this practice:

  • Cross-Functional Teams: Specialists from various disciplines come together to form a cross-functional team to create the product. Such a team typically consists of Software engineering, product management, interaction design, visual design, content strategy, marketing, and quality assurance (QA).
  • Small, Dedicated, Collocated: Keep your teams small, no more than 10 core people. The benefit of small teams comes down to three words: communication, focus, and camaraderie. A smaller team is also easier to manage in terms of status reporting, change management and learning.
  • Progress = Outcomes, Not Output: The focus should be on business goals, which are typically the “outcomes”, rather than on the output product, system or service.
  • Problem-Focused Teams: “A problem-focused team is one that has been assigned a business problem to solve, as opposed to a set of features to implement”.
  • Removing Waste: This is one of the key ingredients of Lean UX which is focused on “removal of anything that doesn’t lead to the ultimate goal” so that the team resource can be utilized properly.
  • Small Batch Size: Lean UX focuses on “notion to keep inventory low and quality high”.
  • Continuous Discovery: “Regular customer conversations provide frequent opportunities for validating new product ideas”
  • GOOB: The New User-Centricity: GOOB stands for “getting out of the building” — meeting-room debates about user needs won’t be settled conclusively within your office. Instead, the answers lie out in the marketplace, outside of your building.
  • Shared Understanding: The more a team collectively understands what it’s doing and why, the less it has to depend on secondhand reports and detailed documents to continue its work.
  • Anti-Pattern: Rock-stars, Gurus, and Ninjas: Team cohesion breaks down when you add individuals with large egos who are determined to stand out and be stars. So more effort should go into team collaboration.
  • Externalizing Your Work: “Externalizing gets ideas out of teammates’ heads and on to the wall, allowing everyone to see where the team stands”.
  • Making over Analysis: “There is more value in creating the first version of an idea than spending half a day debating its merits in a conference room”.
  • Learning over Growth: “Lean UX favours a focus on learning first and scaling second”.
  • Permission to Fail: “Lean UX teams need to experiment with ideas. Most of these ideas will fail.The team must be safe to fail if they are to be successful”.
  • Getting Out of the Deliverables Business: “The team’s focus should be on learning which features have the biggest impact on their customers. The artefacts the team uses to gain that knowledge are irrelevant.”

You can read more in one of my articles here: https://www.linkedin.com/pulse/20140710010240-9377042-lean-ux-another-agile-ux/?

6. Fail-Fast through Prototyping

Typically, Fail-Fast is about immediately reporting any condition that is likely to indicate a failure. Fail-Fast also allows gathering early-stage feedback that serves as an input to the continuous UCD model, which helps bring up solutions to design issues using that input and thereby minimises the risk of product failure in the hands of users or in the market. This is also a philosophy that aligns with the Lean Startup methodology and accelerates innovation, as it encourages taking early-stage risk. Startup cultures typically undertake bold experiments to determine the long-term viability of a product or strategy, rather than proceeding cautiously and investing years in a doomed approach. In service design, this helps improve processes to make use of systems that support Lean methodologies and models. The great part is that in DesOps, combined with UCD processes, this provides options to run short and quick UCD iterative cycles of a Think – Make – Break kind of model.

(Fig – Source: UX Simplified: Models & Methodologies, 2014, ISBN 1500499587 )

Prototypes play a crucial role in UCD models in achieving Fail-Fast, thereby ensuring that early feedback on the design is received and can contribute to the evolution of the product. Prototypes of different fidelities are used to ensure that the target experience can be tested.

7. Continuous Discovery

Continuous Discovery is primarily involved with the conceptualization stage of the product lifecycle. This practice is mostly driven by factors like:

  • User focus: The goals of the activity, the work domain or context of use, the users’ goals, tasks and needs should control the development.
  • Active user involvement: Representative users should actively participate, early and continuously throughout the entire development process and throughout the system life cycle.
  • Simple design representations: The design must be represented in such ways that it can be easily understood by users and all other stakeholders.
  • Explicit and conscious design activities: The development process should contain dedicated design activities.

This practice, however, is not limited to the conceptualization stage; it is organically part of the evaluation and build stages as well, since from these stages of the lifecycle it gets the feedback and evaluation results that aid the discovery of the solution through the design process.

8. Continuous builds and delivery

This practice focuses on continuous design delivery, ensuring that DesOps sustains the lifecycle and supports iterative UCD cycles. It involves a process that supports the design of the solution and thereby contributes to system development that is iterative and incremental. The early part of a lifecycle involving this practice typically gains life from prototyping, which is used to visualize and evaluate ideas and design solutions in cooperation with users.

So the factors in this practice are:

  • Evolutionary systems development: The systems development should be both iterative and incremental.
  • Prototyping: Early and continuously, prototypes should be used to visualize and evaluate ideas and design solutions in cooperation with the users.

9. Integrated & incremental testing

Evaluation and feedback from all stages of the lifecycle are key to any DesOps implementation; therefore integrated testing (including usability testing) in an incremental fashion plays the strongest role among all the practices. This actually draws from UCD models running a User Centred System Design (UCSD) approach. UCD experts help in benchmarking usability tests, popularly known as “summative evaluations”, which evaluate the performance of the developed system/product on several grounds. The metrics of such a test are typically based on the “error rate for users as they use the system”, the “time it takes to attain proficiency performing a task”, and the “time it takes to perform a task once proficiency has been attained”. So the factor that drives the practice is:

  1. Evaluate use in context: Baseline usability goals and design criteria should control the development.

Note that the key here is that all testing should support evaluation in context. Getting data in the real context of use is what makes this effective, thereby making DesOps more fruitful.

(Fig – Source: Re-imagining Beta Testing in the Ever-Changing World of Automation, https://medium.com/eunoia-i-o/re-imagining-beta-testing-in-the-ever-changing-world-of-automation-3579ac418007 )

The ISO standard also defines a quality process where context plays the major role. Interestingly, usability testing and HCI aspects are all driven by context. Read one of my articles on how context plays a critical role in testing and usability, which also describes an experiment named BetaStudio, here: https://medium.com/eunoia-i-o/re-imagining-beta-testing-in-the-ever-changing-world-of-automation-3579ac418007 .
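As a minimal sketch of how the summative-evaluation metrics mentioned above could be computed from raw test sessions (the data shape here, in TypeScript, is an assumption for illustration rather than any standard format):

// One recorded attempt at a task during a usability test session.
interface TaskAttempt {
  participant: string;
  taskId: string;
  durationSeconds: number;
  errors: number;       // mistakes made while performing the task
  proficient: boolean;  // whether the participant had attained proficiency before this attempt
}

// "Error rate for users as they use the system": average errors per attempt.
function errorRate(attempts: TaskAttempt[]): number {
  const totalErrors = attempts.reduce((sum, a) => sum + a.errors, 0);
  return totalErrors / attempts.length;
}

// "Time it takes to perform a task once proficiency has been attained":
// average duration over attempts made after proficiency.
function timeOnTaskWhenProficient(attempts: TaskAttempt[]): number {
  const proficientAttempts = attempts.filter((a) => a.proficient);
  const total = proficientAttempts.reduce((sum, a) => sum + a.durationSeconds, 0);
  return proficientAttempts.length > 0 ? total / proficientAttempts.length : NaN;
}

const sessionData: TaskAttempt[] = [
  { participant: "P1", taskId: "create-report", durationSeconds: 210, errors: 3, proficient: false },
  { participant: "P1", taskId: "create-report", durationSeconds: 95, errors: 0, proficient: true },
  { participant: "P2", taskId: "create-report", durationSeconds: 180, errors: 2, proficient: false },
];

console.log(errorRate(sessionData));                 // ~1.67 errors per attempt
console.log(timeOnTaskWhenProficient(sessionData));  // 95 seconds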

10. Service Design on an Integrated Feedback-Loop Model

The integrated feedback loop is about more than getting testing reports. This practice ensures that feedback flows from any point to any point as needed, be it from stakeholders to designers, end users to developers, testers to designers, or along any path that runs from one persona to another. It also includes the service design that helps implement DesOps, ensuring that information as well as feedback flows seamlessly, including from and to the systems and the different roles. This certainly uses service design employing recent technologies like Automation, Machine Learning, Artificial Intelligence etc.

Hope this article was helpful. Keep tuning in for the next parts of this article series. Before moving on to the next points on the culture of DesOps, we will look into a Business Model Canvas and see how DesOps fits in. Do share the word.

(c) 2018, Samir Dash. All rights reserved.

DesOps: the Next Wave in Design (Part 3) The Maturity of Design Systems

To understand where an organisation’s Design System stands in the context of implementing DesOps, the first step is to evaluate the existing Design System that is in place and contributes to the organisation’s design process. (We will explore the process aspect in detail in later articles in this series.) To evaluate any Design System in a broad way, we can easily form a metric that takes care of the following two perspectives on the system.

Design System Types

Typically, Design Systems can be broadly categorised into three types, namely Static, Dynamic and Generative:

Static: Most of the attributes and elements that make up this kind of system are static in nature. For example, in a Static Design System the style guide may be a pre-defined, print-ready reference defining basic colour standards, typography etc. The user has to read through it and manually refer to it to decide the related attributes in his work. This kind of system mainly prescribes guidelines, rules and principles which are not automatically changed or created in a dynamic way, either at the stage of creation or at implementation by developers. Typical organisational style guides, or UI pattern documentation where the system describes how and where to use the patterns with some sample code to refer to, fall under this category.

Dynamic: In this kind of Design System the content as well as the principles of implementation are designed and developed in a way that can be used directly in code. The creation and implementation of the content are dynamic and mostly geared towards actual elements that can be used directly in the code. This kind of Design System is more than a reference system for the developer; it is part of the actual build of the products developed with it. The most easily noticeable trait of this kind of Design System is that some special-purpose frameworks and code libraries are part of it, which integrate into the actual builds of the products.

Generative: Generative Design Systems are the ones which generate actual build-ready outputs that can go directly into the build of the product. For example, instead of a static style guide, a generative Design System can have a tool that generates dev-ready HTML, CSS and JavaScript output from the designer’s/developer’s inputs. The output of such a system takes care of the context for which the design output is needed. Say the developer needs to build a cross-platform hybrid app; such a system can generate code that takes care of the interaction and behaviour scenarios for all target device resolutions and screen densities, as well as the behaviour for native wrappers and in-browser functionalities and restrictions. We will journey into the details of Generative Design Systems again shortly.
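As a toy sketch of the generative idea (in TypeScript; purely hypothetical and not modelled on any real tooling), a generator might take a small component spec plus a target context and emit build-ready, context-aware CSS:

// A minimal component spec a designer might feed into a generative system.
interface ButtonSpec {
  name: string;
  baseColor: string;
  borderRadiusPx: number;
}

// The context the generator must account for: platform and screen density.
interface TargetContext {
  platform: "web" | "hybrid-android" | "hybrid-ios";
  pixelDensity: 1 | 2 | 3;
}

// Emit dev-ready CSS whose values are adapted to the target context,
// so the designer never hand-tunes per-platform variants.
function generateButtonCss(spec: ButtonSpec, ctx: TargetContext): string {
  const minHeight = ctx.platform === "web" ? 32 : 44;              // larger touch target on hybrid wrappers
  const iconAsset = `icons/${spec.name}@${ctx.pixelDensity}x.png`; // pick the density-matched raster asset
  return [
    `.${spec.name} {`,
    `  background-color: ${spec.baseColor};`,
    `  background-image: url("${iconAsset}");`,
    `  min-height: ${minHeight}px;`,
    `  border-radius: ${spec.borderRadiusPx}px;`,
    `}`,
  ].join("\n");
}

console.log(generateButtonCss(
  { name: "btn-primary", baseColor: "#0066cc", borderRadiusPx: 4 },
  { platform: "hybrid-android", pixelDensity: 2 }
));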

Design System Maturity

The other angle from which to look at Design Systems is a maturity scale that measures how much the system has evolved. One of the most important aspects of any Design System is understanding its maturity, as this helps to establish where it sits on the overall DesOps roadmap. Irrespective of the varied and complex categorisations available, we can simply name the maturity levels Low, Medium and High for quick and easy comprehension. And when we map the maturity, it takes care of the categorisation aspect as well.

Low Maturity: A Design System with low maturity mostly depends on the static attributes discussed above. The creation and maintenance of the different attributes are mostly the result of manual effort, and the most interesting point is that designers and developers carry the cognitive load of referring to it, comprehending it and deciding what to use and not to use in the specific context of their work or product. There may also be some dynamic attributes, but these are mostly the result of the transition the design system is going through as it evolves.

Medium Maturity: In Design Systems with medium maturity, most elements and attributes are dynamic in nature. These systems mostly depend on frameworks, libraries etc. There may be some overlap with static as well as generative attributes.

High Maturity: Similarly, a high-maturity Design System, apart from mostly containing generative attributes, involves aspects of automation and computer vision, and may deploy artificial intelligence (AI) to provide continuous pipelines that aspire to remove human intervention from the process blocks. In reality it may still require human intervention to feed in the creative juices or the decision power that affects critical human needs or contexts.

When we map these two perspectives horizontally and vertically, we get the right insight into the Design System’s position on the graph, which allows us to clearly understand where the gaps are and along which dimensions the Design System needs to evolve.

Note that the metrics that govern the success of a DesOps implementation are almost synonymous with the metrics we have explored for Design Systems. The factors that add to this metric include measuring how impactful the Design System is in touching the different design-process lifecycle blocks to which each role, be it an Information Architect, an Interaction Designer, a Visual Designer or even the Developer, is attached in the delivery track. This aspect is more figuratively termed a Living Design System.

The Living Design System

The scaling of design is a classic issue. Moreover, with the explosive growth of technology in recent times across different devices, platforms and ecosystems, it has become an ever-growing monster that every designer faces sooner or later. Native (Windows, Android, iOS, Linux etc.), Web (HTML, HTML5, CSS, CSS3, JavaScript and frameworks etc.) and their combination, Hybrid, make scaling a design language an unending challenge.

The Salesforce design team tried to solve the challenges of applying similar designs across cross-platform product families by introducing a dynamically configurable design asset system which viewed the individual entities of any design system as design tokens.

Technically it was a single JSON file, the “Single Source of Truth”, containing a set of name-value pairs that defined the properties and their relationships under different categories like text colours, backgrounds, border sizes, font sizes, etc. This JSON was consumed by the framework (i.e. the Lightning Design System, link: https://www.lightningdesignsystem.com/downloads/), developed and templatized for different target platforms, devices, operating systems etc. The Lightning Design System framework generated differently formatted outputs for CSS via common CSS preprocessors like Sass, Less and Stylus. There was also output in XML format, which is supported on Android, and in JSON for iOS-specific development. The Salesforce design tokens are open-sourced at https://github.com/salesforce-ux/design-system.
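To make the idea concrete, here is a rough sketch (in TypeScript, not Salesforce’s actual token schema or build pipeline) of such a token file and a naive generator that flattens it into CSS custom properties; equivalent emitters could target Sass variables, Android XML resources or an iOS property list:

// A simplified design-token tree: categories of name-value pairs,
// in the spirit of the single JSON "source of truth" described above.
const tokens: Record<string, Record<string, string>> = {
  color: {
    "text-default": "#16325c",
    "background-alt": "#f4f6f9",
  },
  sizing: {
    "border-radius-medium": "0.25rem",
    "font-size-medium": "1rem",
  },
};

// Flatten the token tree into CSS custom properties under :root.
function toCssVariables(tree: Record<string, Record<string, string>>): string {
  const lines: string[] = [":root {"];
  for (const [category, entries] of Object.entries(tree)) {
    for (const [name, value] of Object.entries(entries)) {
      lines.push(`  --${category}-${name}: ${value};`);
    }
  }
  lines.push("}");
  return lines.join("\n");
}

console.log(toCssVariables(tokens));
// :root {
//   --color-text-default: #16325c;
//   --color-background-alt: #f4f6f9;
//   --sizing-border-radius-medium: 0.25rem;
//   --sizing-font-size-medium: 1rem;
// }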

The second interesting aspect was the use of GitHub to host the design system. In a traditional organisation the design system is hosted in downloadable form (even where version control like Git is used), and has to be either translated into the desired formats for the target platform or hosted separately alongside the code. Here, however, the design tokens representing the style definitions and properties, hosted on GitHub, were directly integrated into the build process, contributing to a Continuous Integration and Continuous Delivery (CI/CD) approach to development. In this sense it acted more as a Living System, a single source of truth from which the required branch is pulled and made part of the build.

Many other pattern libraries and design systems, like Red Hat’s PatternFly (http://www.patternfly.org), are also available on GitHub in a similar way (i.e. at https://github.com/patternfly), matching this second aspect. But the idea of making the style guide available as a single source of truth, in combination with this second aspect, is what makes the Salesforce design system unique among similar attempts at delivering a consistent design across different platforms.

(c) 2018, Samir Dash. All rights reserved.

DesOps: the Next Wave in Design (Part 2) The Design Systems

By definition, a Design System is a series of components that can be reused in different combinations to manage design at scale. In the context of the software development lifecycle for digital products, e.g. apps, applications and websites, the design system can contain design guidelines, visual style guidelines, assets, components, widget definitions, form controls, interaction paradigm definitions etc., at different maturity levels, touching upon different roles from the interaction designer and visual designer to the prototyper as well as the developer.

Many organizations have pattern libraries, clubbed with some best practices and guidelines, used as a Design System for the organization and the community; one such example is Red Hat’s PatternFly. When you visit the PatternFly website, you find a typical design system with different components including interaction paradigms, widgets, working code, visual style guides and assets, touching upon many roles involved in the delivery thread. The case is similar with many other popular design systems. Among popular Design Systems today, the most notable are Google’s Material Design, BBC’s GEL, Salesforce’s Lightning, Airbnb’s Design Language, IBM’s NorthStar and so on.

Style guides, pattern libraries and design systems play a critical role in normalising practices and design patterns around a common visual language. Design systems can vary in maturity. I have mentioned “different maturity” on purpose: it means that variations of the design system concept are already in place, and each of those variations has a slightly different efficiency in solving the common goal of designing at scale. At the bare minimum level it can simply start with naming objects, colours, text styles, components and conventions, or even best practices. This can scale up to the highest maturity level, where it can define the finer details of the experience and can even end up in generative solutions, e.g. frameworks which dynamically provide or define the experience details required for delivery.

Design Systems have become so popular that they are arguably a default need of any product organisation, even at limited scale. This usually becomes the very first push for an organisation to look at its design operations with fresh eyes, as introducing a design system brings with it a whole set of new problems that most digital product design organisations were previously ill-equipped to deal with on their own. The success of a design system’s implementation rides on the support it gets operationally.

No doubt, Design Systems are a primary need for an organisation even on a limited scale and budget. However, design systems are mostly developed within operational vacuums. You may have the best-designed components, but if you can’t create a workflow for their upkeep and deployment, not just through your product offerings but through your design production system itself, they will invariably cause more friction than the previous system did.

When you start with the pain points of a Design System, you are starting your operations with a solution to a specific problem. This lens answers the question: “How do we scale design quality through the enterprise?”

Why Do We Need Design Systems?

By nature, a Design System helps in taking decisions faster and reduces wastage in the delivery track. This in turn lets designers spend more time defining workflows. And as it helps teams work in a fast-track mode, it also lets designers explore multiple concepts in the same amount of time.

Having a Design System means having a “single source of truth” which can be referred to by all the different roles, including engineering and testing as well as designers, throughout the delivery lifecycle. This helps in articulating as well as implementing consistency across different modules of the application or even across the product line. However, to ensure the Design System acts as a “single source of truth”, certain criteria need to be fulfilled, and this is also the crossroads where a Design System that acts as the “single source of truth” paves the way for DesOps.

Design System as “Single-Source-of-Truth”

Let’s first understand what we really mean when we refer to a Design System as a “Single Source of Truth”. In the enterprise, any design ultimately has one major alignment: to align itself with the business goals of the enterprise, which also drive the product strategy as well as the delivery and affect many associated aspects, such as the delivery lifecycle with its typical Design, Development and Quality tracks. In most cases the associated operational systems, made up of processes, practices and tools, are largely development-centric. Traditional SDLC models were designed from a process perspective biased towards the development aspect of delivery. So when a Design System is established in an organisation, the normal tendency is to create one that fills the gaps between these different entities and their associated process blocks. A Design System that touches all these associated entities and process blocks in a way that helps optimise the delivery of the product or improve its quality, by becoming the single driving force of the design aspect (from the creative problem-solving angle), can be regarded as a “Single Source of Truth”. But in reality, the different roles, designers, developers and quality assurance team members, each play within the boundaries of age-old process definitions. The tools, technologies and processes used by them are so diverse, and the workflow among them so disjointed, that on the ground having a Design System as a “Single Source of Truth” has not been an easy thing to achieve.

Designer Tools vs Developer Tools

One key aspect is that in modern times the use of Agile, along with design-driven practices like Design Thinking and User Centred Design models, has tried to reduce the tilt towards engineering and has certainly helped many organisations become more effective at delivering products with qualitative design. However, the Design System developed in many organisations by nature involves a set of tools and processes that suffer from that bias, and a selection of tools and technology-driven workflows that are more developer-centric. For example, tools like TextEdit or Notepad and command-line terminal windows are highly preferred by developers, and the workflows are defined around such tools to make developers more productive and help them improve their craft, which is coding. When the processes and tools are themselves technology-driven and developer-driven, and when such a system is integrated with the standard components of a design system like style guides, code, widgets etc., the generation and maintainability of those components by designers is affected, as designers come from a different world with their own way of working and their own world of GUI-driven tools, from mind mapping to wireframing and creating visual assets. This is what brings disjointed experiences to design operations, preventing a seamless full circle of design-development operations. The result of such a design system is always a patchwork that may not introduce the fundamental changes in the process needed to meet the goals of DesOps, most of which are centred around consistency, collaboration, continuous integration and continuous design. Achieving this is actually a goal that aligns with the core vision of DesOps.

We will continue exploring Design Systems in the context of DesOps and touch upon topics like Design System maturity and the Living Design System, as well as the concept of generative systems, in Part 3 of this DesOps article series. So stay tuned and do feel free to share the word!

© 2018, Samir Dash. All rights reserved.

DesOps: the Next Wave in Design (Part 1)

DesignOps or DesOps is an approach to design inspired by the culture of DevOps. In this article series, we will touch upon practical approaches to preparing for this next wave in design, which complements DevOps in the concepts of cultural shift, collaboration and automation. We will also see what solutions are available today that contribute to bringing the full circle of design into the context of the software development lifecycle.

In today’s world, design as a discipline is getting more and more recognition across the entrepreneurial world, and many industry efforts like IBM Design Thinking and similar frameworks are trying to create a synergy between the Agile approach to the SDLC and Design Thinking. It is an interesting crossroads in time, where the next big thing in product delivery is to bring scalability as well as automation to the creative process. In the context of the software industry, I have always seen “Design” as an intersection between creativity and technology, where each shapes the other with the help of user needs, and their blending results in successful products. Any typical software product delivered involves many complex as well as divergent technologies, processes, people and visions. Though software delivery mostly happens with the team members divided into two major groups, developers and designers, ultimately the best outcome always depends on how these two teams communicate with each other and how efficiently thoughts and ideas are shared, propagated and translated.

When it comes to product development, the amount of complexity and the variety of aspects that go into it, from diversified thinking to technology, tools and processes, is significant. Attempts have been made over time to improve various aspects of it and ensure the delivery process can be optimised to scale up to ever-expanding needs. In the software and IT infrastructure industry, one recent such phenomenon was DevOps, which focused on rethinking development and operations to improve productivity and efficiency. DevOps started in the industry towards the last part of the first decade post-millennium. Back in 2008, there was a clear separation between the roles who wrote code and the roles who deployed it. Basically, the coders or programmers were responsible for code generation, while the infrastructure people looked after the process of deploying it.

Due to the rise of the Agile process, code generation and deployment as part of delivery became more frequent and continuous, unlike in the age-old waterfall model where it used to happen every six months or a year. Across major software services companies it was common to have fixed calendar dates in the year that represented releases or deployments; the 2-3 week sprints of the Agile approach made that obsolete in many of them. As continuous delivery became the de facto standard, it narrowed the gap between the development team and the infrastructure team. This also gave rise to the need for multidisciplinary roles or individuals who could bridge the gap between the production environment and the development server, allowing code to be deployed more efficiently and faster. As DevOps took shape, the practices around it grew from a few talented hackers into a profession with a culture of its own, involving its own set of tools, practices, technologies and workflows, which has become the norm in the industry today.

Most of DevOps today focuses on process blocks that mostly affect the engineering or technical aspects of the product rather than the design aspect. To bridge that gap, many attempts are being made in recent times to define a consistent approach called DesOps. DesOps or DesignOps is a relatively new term; to comprehend it better, many refer to DevOps, which has prominently similar underlying philosophies and goals. Design operations (aka DesignOps), though a relatively new concept, is a growing area of concern for design teams seeking to increase the value they produce for their host organizations and those organizations’ customers. The term and the practices have existed inconsistently in many attempts made by different organisations over the years.

Even when we try to implement a DevOps-geared process to run a design-driven process model, the actual challenges, the gaps between the design and development blocks or the design and testing blocks, are still not fixed. So without implementing DesOps or DesignOps to fix the design and other endpoint blocks in the process, the implementation of DevOps will never yield the desired outcome and will not be able to sustain the core philosophies behind it.

DesOps and DevOps are complementary to each other. Design delivery process improvements try to optimise the overall delivery process and thereby contribute to DevOps. For example, testing of the product that involves design aspects, usability, accessibility etc. needs a benchmark to refer to in the testing phase, and that benchmark can only come from a process where DesOps has been implemented, which outputs and feeds the benchmark to the DevOps phase where the testing block can use it. In addition, when we are in Agile or iterative and continuous process models, the end-to-end flow is executed in each sprint cycle, thereby making Continuous Integration (CI) and Continuous Delivery (CD) truly meaningful.

DesOps was born primarily out of the need to design at scale. The factors that shaped it are similar in nature to those that shaped DevOps. With new-age software delivery in recent times, with the Agile process and continuous integration and deployment of code, the DevOps approach provided a faster highway to ensure faster delivery with lower risk. So the earlier SDLC model was redefined over time, with Agile and then DevOps, into its current shape.

However, as design is an integral part of any product delivered, it became necessary to ensure that the gaps between the traditional design lifecycle and the fast track of a development lifecycle using DevOps were bridged. Tighter integration between the design team and the engineering team became a necessity in order to design at scale. During the last two to three years, the heavy investments of the top five big companies in this area have paved the way for other organisations and design communities to be more explorative here. The implications of DesOps are reflected in the outcome, where the silos among teams and disciplines are reduced. Along with this, it improves collaboration across cross-functional teams and working practices, which contributes to minimising wastage in the delivery process.

Keep tuned in… next we will start our explorations with the basic building blocks of DesOps, i.e. the Design Systems.

(c) 2018, Samir Dash. All rights reserved.

Re-imagining beta testing in the ever-changing world of automation (Article at Opensource.com)


Check the article at: https://opensource.com/article/18/1/beta-testing-automation

Today’s beta test solutions aren’t working to develop products that meet users’ needs. It’s time for a new vision.

If you ask most people why beta testing and related tools are important, they’ll name things such as shortening of beta cycles, reduced time investment, increased tester participation, improved feedback loops and visibility. However, all the reasons why beta testing is so critical can be narrowed down to two major issues, both of which are predominant in the beta testing phase of the SDLC:

  1. The intersection of humans and technology
  2. Usability and quality standards

Read complete article here: https://opensource.com/article/18/1/beta-testing-automation


Omnipresent Operating System (OS): Re-imagining the Next Killer Experience for the future of OS.


Reincarnation in Hinduism is one of those fascinating philosophical concepts which says that a soul or spirit, after meeting a biological death, is capable of leading a new life in a new body. Being a Hindu, I also believe in the transfer of one’s soul after death into another body, which produces a continuous cycle of birth, life, death and rebirth through one’s many lifetimes in Samsara.

Though this is an analogy from the mythologies and scriptures of Hinduism and other religious texts, the concept of the same soul passing through different bodies can somewhat inspire the experience of the next-gen Operating System (OS): an ecosystem where the OS is “omnipresent”, like a soul, and allows the user to move through different “systems” and “devices” (we can even extend this to IoT contexts!).

A user story might record this user’s perspective as something like:

“I should be able to use my application uninterruptedly and seamlessly across various devices/systems as I move through different devices due to certain needs”.

It is like soul transfer: transferring consciousness from one body to another. In the movie The Matrix (1999), the Agents could transfer themselves into different people’s bodies. One interesting thing to note here is that, on being transferred to a new body, they retained their goals, their memories etc.

(Fig: In the movie, The Matrix (1999), the agents can transfer themselves into other bodies at will.)

The idea that I have termed the Omnipresent Operating System is to share application sessions (along with all the state/session data etc.) from one system to another, allowing the application to continue running on the latter system in exactly the same state and using the same session, without the user having to start the application from the beginning or use a different application session on the latter system, while still enjoying all the benefits available there. This helps achieve a unified, seamless, omnipresent experience across different systems.

Use-Cases:

The following are five use cases, represented in images:

In each one, we can see a daily-life usage where the experience for the user is extremely simplified through the proposed solution.

Historically, the OS concept has progressed through many evolutions. Typically, an OS is defined as a system of software which manages the hardware resources of the system to provide a base for its users’ programmatic computing needs. Throughout history, multiple dimensions have been addressed for operating systems, such as performance, multitasking, usability, portability, mobility etc. We are witnessing a period in which the evolution of the OS is transforming at the highest rate possible, due to the diversification of software technology and hardware and the evolution of new-age ecosystems and new paradigms of digital devices. This is an age in which we are witnessing the coming of IoT (the Internet of Things) and the cloud, where any device can be part of a bigger ecosystem and an extension of a cloud system.

So the question is: what is the future of the operating system? Will the cloud be the ultimate operating system? It would seem so, at least looking at the Chromebook, which defines thin-client based access to all the computational needs that stay in the cloud. But still, the diversification of operating systems prompts us to pause and think: something is missing. The missing piece is convergence.

We have many derivatives of different types of OS; Android, iOS, Linux, Windows and OS X are a few of the variations that continue the OS war. But users are limited by this. Imagine an application running on one OS. Can we use the same application on another one? No. Also imagine a situation where you are reading an email on one device and want to continue updating it (the same email) on your PC… can you do it? No.

If we look at today’s trends towards the future, it’s all about microservices and serverless architectures, similar to torrent (peer-to-peer) or blockchain implementations: decentralized and distributed ecosystems where the systems communicate with each other. In such scenarios, the next killer experience is to have the ability of omnipresence across these distributed and decentralized ecosystems.

Solution Implementation:

The solution can be implemented in two ways. To illustrate this, assume that there are two devices and the user initially starts using some application on the first one, then moves to the second device and uses the same session and input data there.

Approach 1. Both devices (i.e. Device 1 and Device 2) run the same Omnipresent Operating System (in real-life scenarios they might, in some cases, be different versions of the same OS), which has the capability to transfer the application and user data, including session/environment variables etc., along with the necessary components that allow it to be rendered on the other device.

Approach 2. The devices may not run the same Omnipresent Operating System; they may be running two different OSes (including third-party OSes like Android, Windows etc.). In such cases both devices can have a Runtime Component installed to give them the capability to transfer the application and user data, including session/environment variables etc., along with the necessary components that allow it to be rendered on the receiving device on receipt of such data.

The following diagram illustrates how the different fundamental components are to be placed in the architecture of the ecosystem in order to make it work.

Description of the diagram:

100 and 200 are two devices connected over the same network via wifi/Lan/Bluetooth etc. (e.g. 150 & 250). 
110 is an application running in 100. 120 is the Runtime Component that helps to achieve the desired outcome proposed in the invention (might be part of the OS). 
100 is connected to the internet via port 140. 
111 is the session data, 112 is user authentication and related data, 113 is environment related data (e.g. browser version, OS version, history, some system variable etc. ), 114 is any other metadata that might be associated with any of the components of the environment. 
115 is the application data (e.g. client session, user input data, cookies, local storage, application related variables etc. ). 
210 is an equivalent application running in 200.  
220 is the Runtime Component that helps to achieve the desired outcome proposed in the invention (might be part of the OS). 
200 is connected to the internet via port 240. 150 is a means of connection (e.g. wifi, LAN, Bluetooth, NFC, IR etc.) of 100 with any other device, in this case 200. 
250 is the similar means of connection on 200. 
Both 100 and 200 are connected to each other, for which 100 might be using another port 130 and similarly 200 might be using 230. 
Now when the user runs device 100 and runs application 110, his data related to that state is collected and transferred to the other device 200 over the connection. This transfer is depicted as 160. 
This data is stored by the runtime 220 as 111, 112, 113, 114, 115. 

Now the runtime selects the compatible application 210, which may have its own session 211 and other related data. 
Now runtime 220 uses 111, 112, 113, 114, 115 to determine what can be populated in 210, which may include the text or any info the user had entered on device 100; this allows application 210 to run with the user's data. 
Now the user uses device 200 to continue editing or working on his data using app 210, and once he finishes, runtime 220 collects the modified data from app 210 and sends it back to device 100 (depicted as 260) so that on 100 the runtime 120 uses it to populate app 110 within the same session 111; and if some server on the internet needs to be updated, it can do so using the same session and thread via the same network port 140. 


Note: The above example shows only a generic approach and describes the overall components. Depending on the scenario the usage might vary slightly — for example if the user does not need to use the same session and the port to connect to server, the flow might be different where once the user has updated his info in 210 on device 200, instead of sending back the updated data to device 100 as depicted in 260, the 220 runtimes can submit it to server or backend using 240 over a different session.
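As a hedged sketch of what the transferred bundle in the description above might look like (in TypeScript; the field names map loosely onto the numbered components 111-115, but the shape is purely illustrative):

// The bundle that runtime 120 collects on device 100 and sends to runtime 220 (transfer 160).
interface SessionTransferPayload {
  sessionData: Record<string, string>;          // 111: server session identifiers
  userAuth: { userId: string; token: string };  // 112: user authentication and related data
  environment: {                                // 113: environment-related data
    osVersion: string;
    browserVersion?: string;
    locale: string;
  };
  metadata: Record<string, unknown>;            // 114: any other associated metadata
  applicationData: {                            // 115: client session, user input, cookies, local storage
    appId: string;
    formState: Record<string, string>;
    cookies: Record<string, string>;
  };
}

// On the receiving device, the runtime picks a compatible app (210) and rehydrates it.
function rehydrate(payload: SessionTransferPayload): void {
  console.log(`Resuming ${payload.applicationData.appId} for user ${payload.userAuth.userId}`);
  for (const [field, value] of Object.entries(payload.applicationData.formState)) {
    console.log(`restoring field "${field}" -> "${value}"`); // e.g. a half-written email body
  }
}

rehydrate({
  sessionData: { serverSessionId: "abc123" },
  userAuth: { userId: "samir", token: "<opaque>" },
  environment: { osVersion: "OmniOS 1.0", locale: "en-IN" },
  metadata: {},
  applicationData: {
    appId: "mail-client",
    formState: { draftBody: "Hello team, ..." },
    cookies: {},
  },
});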

Why is Omnipresence the future of the OS?

Interestingly, when the Cloud OS concept came into existence, the driving force behind it was the thought that a ‘good’ OS gets out of the way and lets the user get straight to what they want. In Microsoft’s approach, from the perspective of the user, they’re no longer using a program on a machine but consuming a service that lives in an arbitrary place. Because the service runs on an API common to all machines, it becomes easier to scale and fail over. This is the concept that drives Platform as a Service (Azure’s tour de force). [Source]

Big players like Microsoft also believe that the “perfect future [of OS] would mean that […] software is totally portable between desktop and mobile devices”. The kinds of attempts being made by such players aim to converge the OS for mobile as well as desktop.

Though many in the past tried to position the thin-client OS as the future (e.g. http://www.totalnetworx.com/computers-technology/google-chrome-os-operating-system-future/), they also gradually started to foresee the future of the OS as something that cannot be seen in the light of a client-server architecture alone; rather, it would be defined by something more organic and sustainable. Articles like “Desktop 2.0 and the future of the networked operating system” interestingly predicted, long ago, the conclusion that we carry forward today: that the Chrome OS version of the future, where all we need is a browser, is wrong.

So how is the Omnipresent OS concept different from the thin client (and even from the mainframes that some old geeks refer to for the secret sauce of the future)? Here are the basic differentiators:

The Beginning:

As I mentioned, in today’s OS evolution it is the diversification of operating systems that prompts us to pause and think about the missing bridge to the future. If we ponder closely, it appears that the missing piece is convergence. The Omnipresent OS concept is a thought in that direction. It’s actually a beginning, preparing us for tomorrow’s decentralized, networked world where the age-old philosophies will show us the way!

(c) 2017-18 Samir Dash, Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0)


https://www.linkedin.com/pulse/omnipresent-operating-system-os-re-imagining-next-killer-samir-dash