DesOps : the Next Wave in Design (Part 4) The 3C’s

The Living Design System is mostly perceived as being all about modular design. The patterns, referred to as the "molecules" or "organisms" of the "atomic design" process, are made to serve as parts of a larger structure or group. This view of the living design system brings its two major aspects into focus. The first is, of course, the creation and maintenance of patterns. The second is about coming up with a process and ensuring that these patterns fit into a workflow that touches both designers and developers and makes the connections among their workflows. However, this latter aspect has remained challenging even for experienced teams across the industry.

Historically and interestingly, DesOps at its beginning, before it had a formal name, focused on the creation, maintenance and sharing of modular design systems. In the earliest phases, over the last couple of years, it was more about organisations having design systems and socialising them. Primarily these design systems consisted of visual design languages, components and widgets. They defined basic goals, principles, branding (for a specific organisational identity) and a visual language that helped maintain consistency in the creation of design artefacts and assets. Along with that, the UI patterns and widget libraries included in them helped bring consistency to interactions across a wide range of interfaces within the organisation or product portfolio.

Ensuring this became part of the strategic remit of any UX or Design team in the organisations that were responsible for driving it. Mostly this task fell to the Design Directors, Leads and Principals in the organisation, as part of their goal of bringing the right maturity to their design team and practices.

This was definitely a low-hanging fruit in terms of what DesOps as a principle is geared towards. The return from this low-hanging fruit was helpful in many ways. Apart from consistency, it actually helped reduce friction among teams regarding design. It also removed some operational inefficiencies from the design workflow, reducing waste and thereby helping teams deliver at a faster rate.

However, design work practices, unlike those of the development domain, are more diverse, and because design is the area with the most creative energy in the whole lifecycle, ensuring the smooth amalgamation of these design systems into the process blocks of the workflow was not easy. The fact is, even at the time of writing this passage, it is still a challenge to fit the existing tools into design workflows and then align them with the whole delivery track fuelled by the DevOps paradigm.

Recently the design team at Airbnb came up with React Sketch.app. Last year at the RedHat UX team Summit meetup, as part of a design challenge initiative, I presented a concept called Ditto that was supposed to redefine the way design can be integrated into a DevOps-supported environment. I will share the details of Ditto in coming articles of this series. Clearleft also recently came up with Fractal, which tries to reduce and even remove the distance between the design and development teams. Note that both DevOps and DesOps are born out of similar drivers; however, the practices concerned with the two are very different.

From the example of Salesforce's approach to design, the takeaway is that the technological approach of using a "single source of truth" can be a good starting point towards building a practical DesOps culture in an organisation. As the soul of DesOps is the cultural shift and the practices working towards Continuous Integration (CI) and Continuous Delivery (CD), it makes sense to use the living design system as the foundation of the overarching concept of DesOps.

To understand the overarching structure of DesOps, we need to explore the various dimensions that give the concept its shape. From a framework point of view, the typical 3 pillars of any framework also fit here —

1. Consistency — in the context of DesOps, consistency plays the major role, both in approach and workflows as well as from the design perspective.

2. Continuity — mostly it fuels the continuous design aspect that provides agility to the design process.

3. Complementary — no doubt, as DesOps completes the full circle, it complements the vision of DevOps.

In the next article, we will dive into the different models associated with the DesOps framework.


(c) 2018, Samir Dash. All rights reserved.


DesOps : the Next Wave in Design (Part 3) The Maturity of Design Systems

To understand where the Design System of an organisation stands in the context of implementing DesOps, the first step is to evaluate the existing Design System that is in place and contributes to the organisation's design process. (We will explore the process aspect in detail in later articles in this series.) To evaluate any Design System in a broad way, we can easily form a metric that takes care of the following two perspectives on the system.

Design System Types

Typically, Design Systems can be broadly categorised into 3 types, namely Static, Dynamic and Generative:

Static: Most of the attributes and elements that make up this system are static in nature. For example, in a Static Design System the style guide may be a pre-defined, print-ready reference defining basic colour standards, typography etc. The user has to read through it and manually refer to it to decide those related attributes in his work. This kind of system mainly prescribes guidelines, rules and principles that are not automatically changed or created in a dynamic way, either at the stage of creation or at implementation by developers. Typical organisational style guides, or UI pattern documentation where the system describes how and where to use the patterns with some sample code to refer to, fall under this category.

Dynamic: In this kind of Design System, the content as well as the principles of implementation are designed and developed in a way that can be directly used in the code. The creation and implementation of the content are dynamic and mostly geared towards actual elements that can be used directly in the code. This kind of Design System is more than a reference system for the developer; rather, it is part of the actual build of the products developed with it. The most easily noticeable trait of this kind of Design System is that special-purpose frameworks and code libraries are part of it, which integrate into the actual builds of the products.

Generative: Generative Design Systems are the ones which generate actual build-ready outputs that can go directly into the build of the product. For example, instead of a static style guide, a generative Design System can have a tool that will generate dev-ready HTML, CSS and JavaScript output from the designer/developer inputs. The output of such a system takes care of the context for which the design output is needed. Let's say the developer needs to build a cross-platform hybrid app; then such a system can generate code that takes care of the interaction and behaviour scenarios for all target device resolutions and screen densities, as well as the behaviour for native wrappers and in-browser functionalities and restrictions. We will journey into the details of Generative Design Systems again shortly.
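To make the generative idea concrete, here is a minimal sketch (in TypeScript) of what such a generation step could look like: hypothetical designer inputs (tokens plus a target context) go in, and dev-ready CSS comes out, adjusted for the target platform and screen density. The token names, context fields and sizing rules below are illustrative assumptions, not taken from any specific design system.

```typescript
// Minimal sketch of a "generative" design-system step: designer/developer
// inputs go in, dev-ready CSS comes out, adjusted for the target context.
// Token names, context fields and sizing rules are illustrative only.

interface DesignTokens {
  brandColor: string;
  fontFamily: string;
  baseFontSizePx: number;
}

interface TargetContext {
  platform: "web" | "hybrid";
  pixelDensity: 1 | 2 | 3; // device-pixel-ratio-like scaling factor
}

function generateButtonCss(tokens: DesignTokens, ctx: TargetContext): string {
  // Scale type for the target density; enlarge touch targets for hybrid apps.
  const fontSize = tokens.baseFontSizePx * ctx.pixelDensity;
  const minTouchTargetPx = ctx.platform === "hybrid" ? 48 : 32; // hypothetical rule

  return [
    ".btn-primary {",
    `  background: ${tokens.brandColor};`,
    `  font-family: ${tokens.fontFamily};`,
    `  font-size: ${fontSize}px;`,
    `  min-height: ${minTouchTargetPx}px;`,
    "}",
  ].join("\n");
}

// The same inputs yield context-appropriate output for each target.
console.log(
  generateButtonCss(
    { brandColor: "#0066cc", fontFamily: "sans-serif", baseFontSizePx: 14 },
    { platform: "hybrid", pixelDensity: 2 }
  )
);
```

The point of the sketch is the shift in responsibility: the context-specific decisions (type scaling, touch-target sizing) live in the system itself rather than in the designer's or developer's head.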

Design System Maturity

The other angle from which to look at Design Systems is a maturity scale measuring how much the system has evolved. One of the most important aspects of any Design System is understanding its maturity, as this helps us see where it is on the overall DesOps roadmap. Irrespective of more varied and complex categorisations, we can simply name the maturity levels Low, Medium and High for quick and easy comprehension. And when we map the maturity, it takes care of the categorisation aspect.

Low Maturity: When a Design System has low maturity, it mostly depends on the static attributes we discussed above. The creation and maintenance of the different attributes are mostly the result of manual effort, and the most interesting point is that designers and developers carry the cognitive load of referring, comprehending and deciding what to use and not use in the specific context of their work or product. It is also worth noting that there may be some dynamic attributes, but these are mostly part of the transition the design system is going through as it evolves.

Medium Maturity: In Design Systems with medium maturity, most elements and attributes are dynamic in nature. These systems mostly depend on frameworks, libraries etc. There may be some overlap with static as well as generative attributes.

High Maturity: Similarly, at higher maturity, apart from the fact that the system mostly contains generative attributes, it involves aspects of automation and computer vision and may deploy artificial intelligence (AI) to provide continuous pipelines that aspire to remove human intervention from the process blocks. In reality it might still require human intervention to feed in the creative juices or the decision power that impacts critical human needs or contexts.

When we map these two perspectives horizontally and vertically, we get the right insight into the Design System's position on the graph, which allows us to clearly understand where the gaps are and along which dimensions the Design System needs to evolve.

Note that the metrics that govern the success of a DesOps implementation are almost synonymous with the metrics we just explored for Design Systems. The factors that add to this metric include measuring how impactful the Design System is in touching the different blocks of the design process lifecycle to which each role (an Information Architect, an Interaction Designer, a Visual Designer or even a Developer) is attached in the delivery track. This aspect is more figuratively termed a Living Design System.

The Living Design System

The scaling of design is a classic issue. Moreover, with the explosive growth of technology in recent times across different devices, platforms and ecosystems, it has become an ever-growing monster that every designer faces sooner or later. Native (Windows, Android, iOS, Linux etc.), Web (HTML, HTML5, CSS, CSS3, JavaScript and frameworks etc.) and the combination of the two, the Hybrid, make the scaling of a design language an unending challenge.

The Salesforce design team tried to solve the challenges of applying similar designs across cross-platform product families by introducing a dynamically configurable design asset system which viewed the individual entities of any design system as design tokens.

Technically it was a single JSON file, the "Single Source of Truth", containing a set of name-value pairs that defined the properties and their relationships under different categories like text colours, backgrounds, border sizes, font sizes, etc. This JSON was consumed by the framework (i.e. the Lightning Design System, link: https://www.lightningdesignsystem.com/downloads/) developed and templatized for different target platforms, devices, operating systems etc. The Lightning Design System framework generated differently formatted outputs for CSS via common CSS preprocessors like Sass, Less and Stylus. There was also output in XML format for Android and in JSON for iOS-specific development. The Salesforce Design Tokens are open-sourced at https://github.com/salesforce-ux/design-system.
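As a rough illustration of the token idea (the token names and output shapes below are hypothetical, not the actual Salesforce schema), a single set of name-value pairs can be emitted in per-platform formats, e.g. Sass variables for web builds and XML resources for Android:

```typescript
// Illustrative "single source of truth": one set of name-value design
// tokens, emitted in per-platform formats. The token names and output
// shapes are hypothetical, not the actual Salesforce schema.

const tokens: Record<string, string> = {
  "color-text-default": "#16325c",
  "color-background": "#f4f6f9",
  "font-size-medium": "1rem",
  "border-radius-small": "0.125rem",
};

// Sass variables for web builds.
function toScss(t: Record<string, string>): string {
  return Object.entries(t)
    .map(([name, value]) => `$${name}: ${value};`)
    .join("\n");
}

// XML resources for Android builds.
function toAndroidXml(t: Record<string, string>): string {
  const items = Object.entries(t)
    .map(([name, value]) => `  <item name="${name}">${value}</item>`)
    .join("\n");
  return `<resources>\n${items}\n</resources>`;
}

console.log(toScss(tokens));
console.log(toAndroidXml(tokens));
```

Whatever the exact schema, the design decision (the value) is recorded once, and every platform-specific artefact is derived from it rather than maintained by hand.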

The second interesting aspect of this was the use of GitHub to host the design system. Unlike the design system of a traditional organisation, where the design system is hosted in downloadable form (even in cases where version control like Git is used) and has to be either translated into the desired formats for the target platform or hosted separately alongside the code, here the design tokens representing the style definitions and properties, as hosted on GitHub, were directly integrated into the build process, contributing to the Continuous Integration and Delivery approach of development. In this sense, it was more of a Living System acting as a single source of truth, from which the required branch is pulled and made part of the build.

Many other pattern libraries and/or design systems, like RedHat's PatternFly (http://www.patternfly.org), are also available on GitHub (i.e. at https://github.com/patternfly) in a similar fashion to this second aspect we just discussed. But the idea of making the style guide available as a single source of truth, in combination with this second aspect, is what makes the Salesforce design system unique among similar attempts at delivering a consistent design across different platforms.

(c) 2018, Samir Dash. All rights reserved.

DesOps : the Next Wave in Design (Part 2) The Design Systems

By definition, a Design System is a series of components that can be reused in different combinations to manage design at scale. In the context of the software development lifecycle for digital products (e.g. apps, applications and websites), a design system can contain design guidelines, visual style guidelines, assets, components, widget definitions, form controls, interaction paradigm definitions etc., at different maturity levels, touching upon different roles from the interaction designer and visual designer to the prototyper as well as the developer.

Many organizations have pattern libraries clubbed with some best practices and guidelines that are used as a Design System for the organization and the community; one such example is RedHat's PatternFly. When you visit the PatternFly website, you find a typical design system that has different components including interaction paradigms, widgets, working code, visual style guides and assets, touching upon many roles involved in the delivery thread. Similar is the case with a lot of other popular design systems. Among the popular Design Systems as of today, the most notable are Google's Material Design, BBC's GEL, Salesforce's Lightning, Airbnb's Design Language, IBM's NorthStar and so on.

To normalise practices and design patterns around a common visual language, style guides, pattern libraries and design systems play a critical role. Design systems can vary in maturity. I have mentioned the case of "different maturity" on purpose: it means that variations of the design system concept are already in place, and each of those variations has a slightly different efficiency in solving the common goal of designing at scale. At the bare minimum level, it can simply start with naming objects, colours, texts, components and conventions, or even best practices. This can scale up to the highest maturity level, where it can define the finer details of the experience and can even end up in generative solutions — e.g. frameworks which may dynamically provide or define the experience details required for delivery.

Design Systems have become so popular that they are arguably a default need of any product organisation of even limited scale. This usually becomes the very first push for an organisation to begin looking at its design operations with fresh eyes, as introducing a design system brings with it a whole set of new problems that most digital product design organisations were previously ill-equipped to deal with on their own. The success of the implementation of a design system rides on the support it gets operationally.

No doubt, Design Systems are a primary need for organisations even on a limited scale and budget. However, design systems are mostly developed within operational vacuums. You may have the best-designed components, but if you can't create a workflow for their upkeep and deployment, not just through your product offerings but through your design production system itself, they will invariably cause more friction than the previous system did.

When you start with the pain points of a Design System, you are starting your operations with a solution to a specific problem. This lens answers the question: "How do we scale design quality through the enterprise?"

Why Do We Need Design Systems?

By nature, a Design System helps in taking decisions faster and reduces wastage in the delivery track. This in turn helps designers spend more time on defining workflows. As it enables working in fast-track mode, it also allows designers to explore multiple concepts in the same amount of time.

Having a Design System means having a "single-source-of-truth", which can be referred to by all the different roles, including engineering, testing as well as design, throughout the delivery lifecycle. This helps in articulating as well as implementing consistency across different modules of the application, or even across the product line. However, to ensure the Design System acts as a "single-source-of-truth", certain criteria need to be fulfilled — and this is also the crossroads where a Design System that acts as the "single-source-of-truth" paves the way for DesOps.

Design System as “Single-Source-of-Truth”

Let's first understand what we really mean when we refer to a Design System as a "Single-Source-of-Truth". In the enterprise, any design ultimately has one major alignment: to align itself with the business goals of the enterprise, which also drive the product strategy as well as the delivery, and impact many associated aspects, such as the delivery lifecycle with its typical Design, Development and Quality tracks. In most cases, the associated operational systems, made up of processes, practices and tools, are development-centric. The traditional SDLC models were designed from a process perspective biased towards the development aspect of delivery. So when a Design System is established in an organisation, the normal tendency is to create one that fills the gaps between these different entities and their associated process blocks.

A Design System that touches all these associated entities and process blocks in a way that helps optimize the delivery of the product or improve its quality, by becoming the single driving force of the design aspect (from the creative problem-solving perspective), can be regarded as a "Single-Source-of-Truth". But in reality, the different roles, the Designers, Developers and Quality Assurance team members, each play within the boundaries of age-old process definitions. The tools, technologies and processes they use are so diverse, and the workflow among them so disjointed, that on the ground having a Design System as the "Single-Source-of-Truth" has not been easy to achieve.

Designer Tools vs Developer Tools

One key aspect is that in modern times the use of Agile along with design-driven practices like Design Thinking and User-Centred Design models has tried to reduce the tilt towards engineering, and has certainly helped many organisations be more effective in delivering products with qualitative design. However, the Design System developed in many organisations by nature involves a set of tools and processes that suffer from such bias, and a selection of tools and technology-driven workflows that are more developer-centric. For example, tools like TextEdit or Notepad and command-line terminal windows are highly preferred by developers. The workflows defined around such tools make developers more productive and help them improve their craft, which is coding. When the processes and tools are themselves technology-driven and developer-driven, and when such a system is integrated with the standard components of a design system like style guides, code, widgets etc., the generation and maintainability of these components by designers gets affected, as designers come from a different world with their own way of working and their own universe of GUI-driven tools, from mind mapping to wireframing and creating visual assets. This is what brings disjointed experiences to design operations, thereby preventing a seamless full circle of design-development operations. The result of such a design system is always a patchwork that does not introduce the fundamental changes in the process needed to meet the goals of DesOps, most of which are centred around consistency, collaboration, continuous integration and continuous design. Achieving this is a goal that aligns with the core vision of DesOps.

We will continue exploring Design Systems in the context of DesOps and touch upon topics like Design System maturity and the Living Design System, as well as the concept of generative systems, in part 3 of this DesOps article series. So keep tuned in, and do feel free to share the word!

© 2018, Samir Dash. All rights reserved.

DesOps : the Next Wave in Design (Part 1)

DesignOps or DesOps is an approach to design inspired by the culture of DevOps. In this article series, we will be touching upon practical approaches for how to prepare for this next wave in design that complements DevOps in the concepts of a cultural shift, collaboration and automation. We will also see what solutions available today contribute to bringing the full circle of design into the context of the software development lifecycle.

In today's world, design as a discipline is getting more and more recognition across the entrepreneurial world, and many industry efforts like IBM Design Thinking and similar frameworks are trying to create a synergy between the Agile approach to the SDLC and Design Thinking. It is an interesting crossroads in time, where the next big thing in product delivery is to bring scalability as well as automation to the creative process. In the context of the software industry, I always see "Design" as an intersection between creativity and technology, where each shapes the other with help from user needs, and the blending of these results in successful products. Any typical software product delivery involves many complex and divergent technologies, processes, people and visions. Though software delivery mostly happens with team members divided into two major groups, developers and designers, the best outcome ultimately depends on how these two teams communicate with each other and how efficiently thoughts and ideas are shared, propagated and translated.

When it comes to product development, the amount of complexity and the variety of aspects that go into it, from diversified thinking to technology, tools and processes, is significant. Attempts have been made over time to improve various aspects of it and ensure the delivery process can be optimised to scale up to ever-expanding needs. In the software and IT infrastructure industry, one recent such phenomenon was DevOps, which focused on rethinking development and operations to improve productivity and efficiency. DevOps started in the industry towards the last part of the first decade post-millennium. Back in 2008, there was a fine separation between the roles who wrote code and the roles who deployed it. Basically, the coders or programmers were responsible for producing code, while the infrastructure people looked after the process of deploying it.

Due to the rise of the Agile process, code generation and deployment as a part of delivery became more frequent and continuous, unlike under the age-old waterfall model, when it used to happen every six months or a year. Across the software services industry, it was common to have fixed calendar dates in the year for releases or deployments; the 2-3 week sprints of the Agile approach made that obsolete in many organisations. As continuous delivery became the de facto standard, it narrowed the gap between the development team and the infrastructure team. This also gave rise to the need for multidisciplinary roles, individuals who could bridge the gap between the production environment and the development server, allowing code to be deployed faster and more efficiently. As DevOps took shape, the practices around it grew from a few talented hackers into a profession with a culture of its own, involving its own set of tools, practices, technologies and workflows, which became the norm in the industry today.

Most of DevOps today focuses on the process blocks impacting the engineering or technical aspects of the product rather than the design aspect. To bridge that gap, many attempts have been made in recent times to define a consistent approach called DesOps. DesOps or DesignOps is a relatively new term; many, to comprehend it better, refer to DevOps, which has prominently similar underlying philosophies and goals. Design operations (aka DesignOps), though a relatively new concept, is a growing area of concern for design teams seeking to increase the value they produce for their host organizations and those organizations' customers. The term and the practices have existed inconsistently in attempts made by different organisations over many years.

Even when we try to implement a DevOps-geared process to run a design-driven process model, the actual challenges, the gaps between the design and development blocks or the design and testing blocks, are not fixed. So without implementing DesOps or DesignOps to fix the design and other endpoint blocks in the process, the implementation of DevOps will never yield the desired outcome and will not be able to sustain the core philosophies behind it.

DesOps and DevOps are complementary to each other. Design delivery process improvements try to optimise the overall delivery process, thereby contributing to DevOps. Consider, for example, the testing of product aspects involving design, usability, accessibility etc.: the testing phase needs some benchmark to refer to, and that can only come from a process where DesOps has been implemented, producing output that feeds the benchmark to the DevOps phase where the testing block can use it. In addition, when we are in Agile or iterative and continuous process models, the end-to-end flow is executed in each sprint cycle, thereby making Continuous Integration (CI) and Continuous Delivery (CD) truly meaningful.
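As a hedged sketch of what "feeding the benchmark" could look like in practice, consider a design-owned benchmark that a testing block in the pipeline consumes. The benchmark fields, values and checks below are illustrative assumptions, not a prescribed format:

```typescript
// Sketch: a design-owned benchmark consumed by a testing block in the
// delivery pipeline. Fields, values and checks are hypothetical.

interface DesignBenchmark {
  minContrastRatio: number;  // accessibility target, e.g. WCAG AA text = 4.5
  maxTapTargetGapPx: number; // usability target for touch interfaces
}

// In a real pipeline this would be pulled from the design system repo;
// it is inlined here to keep the sketch self-contained.
const benchmark: DesignBenchmark = {
  minContrastRatio: 4.5,
  maxTapTargetGapPx: 8,
};

function meetsContrast(measuredRatio: number, b: DesignBenchmark): boolean {
  return measuredRatio >= b.minContrastRatio;
}

// The testing block fails the build against a design-owned number instead
// of a hard-coded engineering guess. `measured` would come from an audit
// tool run against the rendered UI.
const measured = 4.8;
if (!meetsContrast(measured, benchmark)) {
  throw new Error(
    `Contrast ${measured} is below the design benchmark ${benchmark.minContrastRatio}`
  );
}
```

The design choice here is the ownership boundary: the design process produces the numbers, and the DevOps testing block merely enforces them on every sprint cycle.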

DesOps was primarily born out of the need to design at scale. The factors that shaped it are similar in nature to those that shaped DevOps. With new-age software delivery, the Agile process and Continuous Integration and Deployment of code, the DevOps approach provided a faster highway to ensure quicker delivery with lower risk. So the earlier SDLC model got redefined over time, with Agile and then DevOps, into its current shape.

However, as design is an integral part of any product delivered, it became necessary to bridge the gaps between the traditional design lifecycle and the fast track of a development lifecycle using DevOps. Tighter integration between the design team and the engineering team became a necessity for designing at scale. In the recent two to three years, heavy investment in this area by the top five big companies has paved the way for other organisations and design communities to be more explorative here. The implications of DesOps are reflected in the outcome: the silos among teams and disciplines get reduced. Along with this, it improves collaboration among cross-functional teams and working practices, which contributes to minimising wastage in the delivery process.

Keep tuned in… next we will start our explorations with the basic building block of DesOps – i.e. the Design Systems.

(c) 2018, Samir Dash. All rights reserved.

Re-imagining beta testing in the ever-changing world of automation (Article at Opensource.com)


Check the article at: https://opensource.com/article/18/1/beta-testing-automation

Today’s beta test solutions aren’t working to develop products that meet users’ needs. It’s time for a new vision.

If you ask most people why beta testing and related tools are important, they’ll name things such as shortening of beta cycles, reduced time investment, increased tester participation, improved feedback loops and visibility. However, all the reasons why beta testing is so critical can be narrowed down to two major issues, both of which are predominant in the beta testing phase of the SDLC:

  1. The intersection of humans and technology
  2. Usability and quality standards

Read complete article here: https://opensource.com/article/18/1/beta-testing-automation


Omnipresent Operating System (OS) : Re-imagining the Next Killer Experience for the Future of OS


Reincarnation in Hinduism is one of the fascinating philosophical concepts, saying that a soul or spirit, after meeting biological death, is capable of leading a new life in a new body. Being a Hindu, I also believe in the transfer of one's soul after death into another body, producing a continuous cycle of birth, life, death and rebirth through one's many lifetimes in Samsara.

Though this is an analogy from the mythologies and scriptures of Hinduism and other religious texts, the concept of the same soul passing through different bodies can somewhat inspire the experience for the next-gen Operating System (OS): an ecosystem where the OS is "omnipresent" like a soul and allows the user to move through different "systems" and "devices" (we can even extend this to IoT contexts!).

A user story might record this user's perspective as something like —

"I should be able to use my application uninterruptedly and seamlessly across various devices/systems as I move among them due to certain needs."

It is like soul transfer: transferring consciousness from one body to another. In the movie The Matrix (1999), the Agents transferred themselves into different people's bodies. One interesting thing to note here is that, on being transferred to a new body, they remember their goals, their memory etc.

(Fig: In the movie, The Matrix (1999), the agents can transfer themselves into other bodies at will.)

The idea that I term the Omnipresent Operating System is to share application sessions (along with all the state/session data etc.) from one system to another, allowing the application to continue running on the latter system in exactly the same state, using the same session, without the user having to start the application from the beginning or use a different application session on the latter system to enjoy all the benefits available there. This helps achieve a unified, seamless, omnipresent experience across different systems.

Use-Cases:

Following are five use-cases, represented in images:

In each one, we can see a daily-life usage where the experience for the user is extremely simplified through the proposed solution.

Historically, the OS concept has progressed through many evolutions. Typically an OS is defined as a system of software which manages the hardware resources of the system to provide a base for its users' programmatic computing needs. Throughout history, multiple dimensions have been addressed for operating systems, such as performance, multitasking, usability, portability, mobility etc. We are witnessing a period in which this transformation, the evolution of the OS, is happening at the highest rate possible. This is due to the diversification of software technology and hardware, and the evolution of new-age ecosystems and new paradigms of digital devices. This is an age when we are witnessing the coming of IoT (i.e. the Internet of Things) and the cloud, where any device can be part of a bigger ecosystem and be an extension of a cloud system.

So the question is: what is the future of the operating system? Is it that the cloud will be the ultimate operating system? It might seem so, at least looking at the Chromebook, which defines thin-client-based access to all the computational needs that stay in the cloud. But still, the diversification of operating systems prompts us to pause and think: something is missing. The missing piece is convergence.

We have many derivatives of different types of OS. Android, iOS, Linux, Windows and OSX are a few of the variations that continue the OS war. But users are limited by this. Imagine an application running in one OS. Can we use the same application in another one? No. Also imagine a situation where you are reading an email on one device and want to update it (the same email) on your PC… can you do it? No.

If we look at the trends pointing towards the future, it's all about microservices and serverless architecture, similar to torrent (peer-to-peer) or blockchain implementations: decentralized and distributed ecosystems where the systems communicate with each other. In such scenarios, the next killer experience is to have the ability of omnipresence across these distributed and decentralized ecosystems.

Solution Implementation:

The solution can be implemented in two ways. To illustrate this, assume that there are two devices; the user initially starts using some application on the first one and then moves to the second device, using the same session and input data there.

Approach 1. Both the devices (i.e. Device 1 and Device 2) run the same Omnipresent Operating System (in real-life scenarios they might be running different versions of the same OS), which has the capability to transfer the application and user data, including session/environment variables etc., along with the necessary components that allow rendering it on the other device.

Approach 2. The devices may not run the same Omnipresent Operating System; they may be running two different OSes (including third-party OSes like Android and Windows etc.). In such cases both devices can have a Runtime Component installed that gives them the capability to transfer the application and user data, including session/environment variables etc., along with the necessary components that allow rendering it on the receiving device.

The following diagram illustrates the different fundamental components that are to be placed in the architecture of the ecosystem in order to make it work.

Description of the diagram:

100 and 200 are two devices connected over the same network via wifi/LAN/Bluetooth etc. (e.g. 150 & 250).
110 is an application running on 100. 120 is the Runtime Component that helps achieve the desired outcome proposed in the invention (it might be part of the OS).
100 is connected to the internet via port 140.
111 is the session data, 112 is user authentication and related data, 113 is environment-related data (e.g. browser version, OS version, history, some system variables etc.), and 114 is any other metadata that might be associated with any of the components of the environment.
115 is the application data (e.g. client session, user input data, cookies, local storage, application-related variables etc.).
210 is an equivalent application running on 200.
220 is the Runtime Component that helps achieve the desired outcome proposed in the invention (it might be part of the OS).
200 is connected to the internet via port 240. 150 is a means of connection (e.g. wifi, LAN, Bluetooth, NFC, IR etc.) of 100 with any other device, in this case 200.
250 is the similar means of connection on 200.
100 and 200 are connected to each other, for which 100 might be using another port 130, and likewise 200 might be using 230.
Now when the user runs device 100 and application 110, the data related to that state is collected and transferred to the other device 200 over the connection. This transfer is depicted as 160.
This data is stored by the runtime 220 as 111, 112, 113, 114, 115.

Now the runtime selects the compatible application 210, which may have its own session 211 and other related data.
Runtime 220 then uses 111, 112, 113, 114 and 115 to determine what can be populated in 210, which may include the text or any info the user had entered on device 100; this allows application 210 to run with the user's data.
Now the user uses device 200 to continue editing or working on his data using app 210. Once he finishes, runtime 220 collects the modified data from app 210 and sends it back to device 100 (depicted as 260), so that on 100 the runtime 120 can use it to populate app 110 within the same session 111, and if the need is to update some server over the internet, it can do so using the same session and thread via the same network port 140.

Note: The above example shows only a generic approach and describes the overall components. Depending on the scenario the usage might vary slightly — for example, if the user does not need to use the same session and port to connect to the server, the flow might be different: once the user has updated his info in 210 on device 200, instead of sending the updated data back to device 100 as depicted in 260, the runtime 220 can submit it to the server or backend using 240 over a different session.
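For illustration only, the numbered components 111-115 above could be modelled as a transfer payload like the following sketch (in TypeScript). The field names and shapes are my assumptions for this sketch, not part of any existing runtime or API:

```typescript
// Illustrative model of the transfer payload: the numbered components
// (111-115) from the diagram description, mapped to fields. All names
// and shapes here are assumptions for the sketch, not an existing API.

interface TransferPayload {
  session: Record<string, string>;          // 111: session data
  auth: { userId: string; token: string };  // 112: user authentication data
  environment: {                            // 113: environment-related data
    osVersion: string;
    browserVersion?: string;
  };
  metadata?: Record<string, unknown>;       // 114: other associated metadata
  appData: {                                // 115: application data
    inputs: Record<string, string>;         //      user-entered values
    localStorage?: Record<string, string>;
  };
}

// The runtime on device 100 serializes and sends this over the local
// link (the transfer depicted as 160)...
function serializeForTransfer(payload: TransferPayload): string {
  return JSON.stringify(payload);
}

// ...and the runtime on device 200 restores it to populate app 210.
function restoreOnPeer(raw: string): TransferPayload {
  return JSON.parse(raw) as TransferPayload;
}
```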

Why is Omnipresence the Future of the OS?

Interestingly, when the Cloud OS concept came into existence, the driving force behind it was the thought that a 'good' OS gets out of the way and lets the user get straight to what they want. In Microsoft's approach, from the perspective of the user, they're no longer using a program on a machine but consuming a service that lives in an arbitrary place. Because the service runs on an API common to all machines, it becomes easier to scale and fail over. This is the concept that drives Platform as a Service (Azure's tour de force). [Source]

Big players like Microsoft also believe that the "perfect future [of OS] would mean that […] software is totally portable between desktop and mobile devices." Attempts by such players aim to converge the OS of mobile as well as desktop.

Though many attempted in the past to evaluate a thin-client OS as the future (e.g. http://www.totalnetworx.com/computers-technology/google-chrome-os-operating-system-future/), people also gradually started to foresee that the future of the OS cannot be seen in the light of only a server-client architecture; rather, it would be defined as something more organic and sustainable. Articles like "Desktop 2.0 and the future of the networked operating system", interestingly, long ago predicted the conclusion that we carry forward today: the Chrome OS version of the future, where all we need is a browser, is wrong.

So how is the Omnipresent OS concept different from the thin client (and even from the mainframes some old geeks point to for the secret sauce of the future)? Here are the basic differentiators:

The Beginning:

As I mentioned, in today's OS evolution it is the diversification of operating systems that prompts us to pause and think about the missing bridge to the future. If we ponder closely, it appears that the missing piece is convergence. The Omnipresent OS concept is a thought in that direction. It's actually a beginning, preparing us for tomorrow's decentralized networked world, where the age-old philosophies will show us the way!

(c) 2017-18 Samir Dash, Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0)

 

https://www.linkedin.com/pulse/omnipresent-operating-system-os-re-imagining-next-killer-samir-dash

 

Beta Testing in the Ever-Changing World of Automation

Beta testing is fundamentally all about the testing of a product performed by real users in a real environment…Beta testing relies on the popular belief that goodness will prevail, which defines the typical tools that carry out such tests. Examples are the shortening of beta cycles, reducing the time investment, increasing tester participation, improving the feedback loop and visibility, etc…we lack “smartness” and proper “automation” in Beta Testing Solutions…

Check out the complete article at RedHat Developers Blog:

https://developers.redhat.com/blog/2018/01/05/beta-testing-automation/