UX Simplified: Models & Methodologies: Digital Edition [Kindle Edition] ISBN: 978-1-3115-9110-4

My recent title is now available for download on Kindle. This book covers the basic models and methodologies that revolve around User Experience (UX). The topics discussed include User Experience, Information Architecture, User Interface, usability models, User-Centered Design (UCD), User-Centered Systems Design (UCSD), different software development lifecycles (SDLC) and how usability models fit into SDLC models.

The details of the book are as follows:

UX Simplified: Models & Methodologies: Digital Edition
by Samir Dash
ISBN: 978-1-3115-9110-4

Kindle Price (US$):$1.87
Kindle Price (INR):Rs. 119.00 includes free international wireless delivery via Amazon Whispernet




Major topics included in this book are:

• Why “UX: Simplified”?
o The Diverse Disciplines: The ABCs of UX
o User Experience (UX)
o Information Architecture (IA)
o Interaction Design (IxD)
o User Interface Design (UI)
• Usability and Mental Models: Foundations of UX
o What is Usability?
o System Models
o What is a “Mental Model” exactly?
o Most-likely Mental Model
o Conceptual Model
o Challenges in Usability Measurement and Metrics
o A List of Factors for Generic and Consolidated Usability Model
o Heuristics: Measuring Usability
• Engineering and Design Processes: Usability and User Centric Approach
o Usability Engineering
o User-Centered Systems Design (UCSD)
o Usability Design
o User-Centered Design (UCD)
o Don’t get Confused: UCD vs UCSD
o UCD Models and Process
• Software Development Life Cycle (SDLC): Where and How User Experience Models fit in?
o Software Development Life Cycle (SDLC)
o Waterfall model
o Spiral model
o Iterative Development Model
o Agile development
o Challenges in UX Integration to Different SDLC Models
o Usability Designing Process
o How Usability Design Process Fits in Different Phases of SDLC?
o How UX Fits in Different Models of SDLC?
o Challenges with Agile model of SDLC to implement UX
o Lean UX and Agile Model
• Agile in Usability Design: Without Reference to SDLC
o Usability Designing Process
• Lean UX: Another Agile UX?
o The Beauty of Lean UX: Everything is familiar
o Foundation Stones of Lean UX:
o Lean Startup method: The concept of “Build-Measure-Learn”
o Minimum Viable Product (MVP) – Prototyping at its best in the Lean Startup Method
o Principles of Lean UX

  • File Size: 1435 KB
  • Print Length: 86 pages
  • Simultaneous Device Usage: Unlimited
  • Sold by: Amazon Digital Services, Inc.
  • Language: English
  • ASIN: B00LPQ22O0



CUPID: Common Unified Programs for Instruction Delivery



(Project for Venture-Lab Class: “Designing a New Learning Environment” by Professor Paul Kim, Stanford University)


Project Team:

CUPID http://venture-lab.org/education/teams/17372

Project Team Members:

Samir Dash (Team Lead)
Maryam Seyedjavadi
Opemipo Adebanjo
Saurabh Kale
Manoj Mahali
Sangeeta Dash

Stanford University


Project team URL:


Public site of the project:


GIT Repository address:



CUPID (Common Unified Programs for Instruction Delivery) is a framework for a new way of delivering instruction for self-paced learning modules/courses.

The importance of “self-paced learning”, even in today’s new age of MOOCs and social-networking-based “learning for life” concepts, still stands. “In remote areas and villages, self-paced learning is the standard when it comes to digital delivery of content. Mostly CBTs (Computer Based Training) and downloadable learning materials on CDs/DVDs get the most attention. We can’t expect good bandwidth in such places, where the user could take part in online interactive classes. So after ILTs, the next big thing is self-paced learning through books, magazines, newspapers etc. When it comes to the digital part of such an approach, downloadable standalone digital learning materials win the case. So the fact is that “self-paced learning” is a reality even in the age of social-networking-based MOOCs.” One of the focus areas of the CUPID team is therefore to explore possibilities in the education sectors of developing countries by exploring new ways to improve the “self-paced learning” domain.

And this is one of many reasons why CUPID focuses on “self-paced learning”. In addition, the focus on finding new ways to provide a richer “meaningful experience” is what differentiates CUPID from the existing approaches of conventional CBTs/WBTs.

We are not talking about replacing CBTs/WBTs with something new, but rather improving the existing “self-paced learning” approaches with new ways to improve user engagement and bring some reliability to them, so that certification courses can open up to including “self-paced learning” as a part of their certifications.

So it’s all about the “new and meaningful experience” that we are planning to bring into “self paced learning” through CUPID.



The proposed learning environment/framework has the following major goals:

    1. Find new ways to improve the self-learning experience.
    1. Explore the possibility of getting the user engaged with the course in self-paced learning so that the training experience is not just one-way learning. And of course we need to think beyond the basic interactivity that typical CBT/WBT courses provide through multiple-choice or drag-and-drop quizzes.
    1. Find a solution to make self-assessment certificates a reliable source for meeting organisational goals (where the organisation has directed the employee to take self-paced learning to improve some skill sets).
The basic aspects of CUPID are given below:

[Mind map: basic aspects of CUPID]

For a clear view of the mind map please follow this link:


The Concept

The aim of the CUPID project is to conceptualize a new framework-cum-instructional-design approach to define the best possible solution for a self-paced e-learning experience.

The major goals of the framework are:

    1. Find new ways to improve the self-learning experience.
    1. Add built-in navigational components to support physically challenged/disabled persons.
    1. Explore the possibility of getting the user engaged with the course in self-paced learning so that the training experience is not just one-way learning. And of course we need to think beyond the basic interactivity that typical CBT/WBT courses provide through multiple-choice or drag-and-drop quizzes.
    1. Find a solution to make self-assessment certificates a reliable source for meeting organisational goals (where the organisation has directed the employee to take self-paced learning to improve some skill sets).

Based on the goals stated above, there are 14 focus points which make the CUPID framework unique and help in defining its components. These focus points are described in the following section.

Major Focus Points that make CUPID unique

The proposed learning environment/framework focuses on:

Focus #1. Improving usability of the content and the interaction delivered through it.

Focus #2. Add built-in navigational components to support physically challenged/disabled persons.

Focus #3. Provide the user the ability to go through the course in both online and offline modes, so that “anywhere, anytime learning” can be achieved.

Focus #4. Provide DRM for the course content so that it can be used for Intellectual Property (IP) protection and can be useful in commercial e-learning channels and related business models.

Focus #5. Provide features to make two-way communication happen between the course and the user, even in offline mode.

Focus #6. Add a reliable way to make sure that the user has actually gone through the course.

Focus #7. Add a reliable, simple certificate generator that will print the certificate, making offline certification-related courses possible.

Focus #8. Make sure the content is compatible with well-known e-learning specifications like SCORM/AICC etc.

Focus #9. Optional support for delivery of well-known content package formats such as ePub, PDF and SCORM packages.

Focus #10. Support a custom markup language/XML with a custom DTD for building content.

Focus #11. Provide components that can support course content compatibility with Bloom’s taxonomy.

Focus #12. Introduce a new-age assessment engine that will help the user self-assess and review the level of skill he has obtained.

Focus #13. Provide easy interaction between the course and the user for other features like bookmarking, highlighting, annotation, free-hand annotations, and text size and font type customization based on the user’s preferences.

Focus #14. A component-based framework, so that different components/sub-components can work in plug-and-play mode or can be enabled/disabled.


The Design

The basic design of the CUPID framework is based on a component architecture. The different functionalities of the framework come from its different components. A new feature can be introduced by extending the framework: creating a new plug-in and adding it to the framework.

So, basically, the different plug-ins function in sync to provide the whole user experience and feature set.

These details are discussed in the next section while detailing the innovative framework components.
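The plug-in idea above can be sketched as a small registry. This is a minimal sketch; the class and method names (`Component`, `FrameworkRegistry`, `PdfReader`) are illustrative, not part of any published CUPID API.

```python
# Minimal sketch of a plug-in registry for a component-based framework.
# Names here are illustrative assumptions, not CUPID's real API.

class Component:
    """Base class every plug-in derives from."""
    name = "base"

    def render(self, content):
        raise NotImplementedError


class FrameworkRegistry:
    """Holds plug-ins; features are added by registering new components,
    and individual components can be enabled or disabled."""

    def __init__(self):
        self._components = {}

    def register(self, component, enabled=True):
        self._components[component.name] = (component, enabled)

    def set_enabled(self, name, enabled):
        component, _ = self._components[name]
        self._components[name] = (component, enabled)

    def get(self, name):
        component, enabled = self._components[name]
        return component if enabled else None


class PdfReader(Component):
    """Example plug-in: a (stubbed) PDF reader component."""
    name = "pdf_reader"

    def render(self, content):
        return f"rendered PDF: {content}"


registry = FrameworkRegistry()
registry.register(PdfReader())
print(registry.get("pdf_reader").render("lesson1.pdf"))
registry.set_enabled("pdf_reader", False)
print(registry.get("pdf_reader"))  # disabled components resolve to None
```

New functionality is added by subclassing `Component` and registering it, which mirrors the "plug and play or enabled/disabled" behaviour of Focus #14.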


Framework Components

To fulfil these focus points, the concept learning framework has been designed with a component-driven architecture.


The different components of the framework (based on the functionalities/feature) are:

Human Approach for Device Specific Navigation Approaches (HADSNA):

This is in line with Focus #1 discussed above. The purpose of the HADSNA component is to provide a user-friendly and simple approach to course/content-level navigation and interaction maps: device-specific gestures with multi-touch support for touch-enabled devices, and mouse-driven UI-level approaches for PCs and desktops.

The unique features which give CUPID’s navigational system a fresh approach are:

    1. Automatic event binding depending on the environment: All inbuilt components of the framework use common event-driven APIs for navigation and UI-level interaction. For example, a content author can add a simple API call such as onClick() and CUPID will automatically sniff the environment to assign the correct event handler, such as Mouse.click() for mouse environments or touch events such as onTouchstart() and onTouchend() for touch environments.
    1. Multiple ways to trigger the same navigational event: Navigation of the UI is super simple and can be addressed using TAB-indexing-based navigation in an environment supporting only a keyboard, whereas in multi-touch gesture environments basic gestures will be used to navigate the same content. Mouse-driven environments will in any case support UI buttons and links for such needs. An example is shown in the diagram below:

[diagram]

    1. Contextual help: Contextual help will be available in each state/page and section of the different UIs of the content displayed in CUPID, and can be accessed through a single gesture, mouse click or key press. The contextual help will provide context-specific information, along with what is applicable in general, including hints on the navigational options available. This will help even first-timers find their position in the whole system.
    1. Built-in navigational features for visually challenged persons and persons who are hard of hearing: Another important aspect of CUPID’s navigational system is the built-in Offline Text to Speech Engine (OTTSE) along with the Media Caption Manager (MCM). These help disabled persons easily navigate through the content of the learning material. More information on this is provided under the next focus point in the following section.
    1. Built-in Pseudo Inference Engine (PIE): This is another innovative approach through which the course interacts with the user to help him set his learning path, based on which navigation happens in the course. In simple words, PIE acts as a guide who helps the user decide which chapter or content section to move to next and which media to play in order to quickly meet the learning objectives and complete the learning path successfully. More details are given in the section dedicated to the solution prescribed for Focus #5.
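The second point above, several raw input events mapping to one logical navigation action, can be sketched as a small dispatch table. The event and action names here are made up for illustration; CUPID's actual bindings are not specified.

```python
# Sketch: many raw input events trigger the same navigational action, so
# keyboard-only, touch and mouse environments all stay usable.
# Event/action names are illustrative placeholders, not CUPID's real ones.

NAV_BINDINGS = {
    "next_page": {"key:Tab", "touch:swipe_left", "mouse:click_next_button"},
    "prev_page": {"key:Shift+Tab", "touch:swipe_right", "mouse:click_prev_button"},
}

def resolve_action(event):
    """Return the navigation action bound to a raw input event, if any."""
    for action, events in NAV_BINDINGS.items():
        if event in events:
            return action
    return None

# The same logical action fires regardless of input modality:
print(resolve_action("touch:swipe_left"))   # next_page
print(resolve_action("key:Tab"))            # next_page
```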

Built in Offline Text to Speech Engine (OTTSE) along with Media Caption Manager (MCM):

alt text

This is in line with Focus #2 discussed above. These components will allow built-in audio and caption displays for media elements like video, audio, podcast and Flash items.

The basic flow for enabling features for disabled learners is given below:

[diagram]

The basic structure of this component is shown below:

[diagram]

The Offline Text To Speech Engine (OTTSE) will be helpful for persons with visual disabilities. Whenever the learner runs the course on a system that does not have screen readers installed or enabled, he can easily use the OTTSE feature to navigate through the course, as well as listen to the TTS-generated voice-overs of the provided text materials.

The OTTSE has 2 subcomponents:

    1. An offline real-time text-to-speech generator for navigational needs, along with predefined audio clips. These will help in providing navigational hints to the end user on mouse clicks or Tab key presses, and on multi-touch gestures on tablets or mobile devices.
    1. An offline real-time text-to-speech generator for course content having text. In the case of interactive media components like an interactive Flash/HTML5 simulation or game, it will extract the text content from labels and process it for audio generation.

The Media Caption Manager (MCM) has two sub-components:
1. Authoring-side sub-component: This will allow content authors to add textual captions for their media elements. Plus, it will have an optional component for auto-generation of captions from interactive Flash files, video and audio components.
2. User-facing sub-component and rendering engine: This will be part of the different media players in the framework, such as video, Flash items, HTML5 interactive objects, and audio & podcast players.

The MCM will be helpful for persons with hearing disabilities. While they watch a video, they can at the same time read the captions. Similarly, in the case of audio and podcasts, they can read the captions when they run those media components.
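The user-facing side of the MCM boils down to finding the caption cue active at the current playback time. A minimal sketch, with made-up cue data:

```python
import bisect

# Sketch of the rendering side of a caption manager: given cue start
# times (in seconds) and texts, find the caption active at playback
# time t. The cue data is invented for illustration.

cues = [(0.0, "Welcome to the lesson."),
        (4.5, "First, preheat the oven."),
        (9.0, "Now mix the batter.")]

starts = [start for start, _ in cues]

def caption_at(t):
    """Return the caption whose cue most recently started at time t."""
    i = bisect.bisect_right(starts, t) - 1
    return cues[i][1] if i >= 0 else ""

print(caption_at(5.2))  # First, preheat the oven.
```

A real player would also carry cue end-times and styling, but the binary search over start times is the core of keeping captions in sync with the media clock.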

Offline Content Sync Manager (OCSM):


This is in line with Focus #3 discussed above. This will have sub-components such as:

    1. Content Downloader & Reader, and
    1. Content Sync Manager.

This will help the content sync with the online store and catalogue of courses, and sync the titles available to the user as per their profile-level access. The content downloader component will download the offline content or course packages from the store and copy them to the local drive of the user’s machine. The reader sub-component will render the content on screen and provide content-level navigation. When the user comes online, the content sync manager will check if any update is available and will sync the user’s local storage/drive with the online content store/catalogue assigned to the user’s profile. The first time content is accessed online, it will be automatically cached for future offline access.
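The sync step described above can be sketched as a comparison between the local cache and the catalogue assigned to the user's profile. The data shapes (course id mapped to a version number) are an assumption for illustration:

```python
# Sketch of the OCSM sync decision: compare the user's local cache
# against the catalogue assigned to their profile and decide what to
# download and what to drop. Data shapes are illustrative assumptions.

def plan_sync(local, assigned):
    """local / assigned: dicts of course id -> version number."""
    to_download = {c for c, v in assigned.items()
                   if local.get(c) != v}               # new or outdated
    to_remove = {c for c in local if c not in assigned}  # access revoked
    return to_download, to_remove

local_cache = {"baking-101": 2, "old-course": 1}
assigned = {"baking-101": 3, "knife-skills": 1}

downloads, removals = plan_sync(local_cache, assigned)
print(sorted(downloads))  # ['baking-101', 'knife-skills']
print(sorted(removals))   # ['old-course']
```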


DRM Manager:


This is in line with Focus #4 discussed above. Digital Rights Management (DRM) is one of the important aspects of the framework. The DRM Manager component will ensure proper encryption and decryption of the content at both the server and client level. The component will also take care of preventing unauthorized copying or printing of the content, and will provide content-level controls for limited printing and copying of the course content.

There are various stages where the content is protected through different DRM methodologies:

    1. At the point of downloading content: when the user downloads a CUPID course, the course content is downloaded separately into it from the server in an AES-encrypted form. That content is decrypted on demand when the user runs the course and wishes to go through a lesson.
    1. At the point of user interaction with the content: the user cannot copy-paste content from the course, nor print it, except for those lessons or sections that the content author has enabled for copying and printing. Along with this, the user will not be able to access options such as “view source”, as the right-click menu will be disabled.
    1. Course lifetime: course content can be made to expire based on a timestamp and/or a number of uses.
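The course-lifetime rule in point 3 can be sketched as a simple licence check. The field names (`expires_at`, `max_uses`) are illustrative, not part of a defined CUPID licence format:

```python
from datetime import datetime, timezone

# Sketch of the "course lifetime" check: content expires on a timestamp
# and/or after a number of uses. Field names are illustrative.

def is_usable(licence, now=None, uses_so_far=0):
    now = now or datetime.now(timezone.utc)
    if licence.get("expires_at") and now > licence["expires_at"]:
        return False
    if licence.get("max_uses") and uses_so_far >= licence["max_uses"]:
        return False
    return True

licence = {
    "expires_at": datetime(2026, 1, 1, tzinfo=timezone.utc),
    "max_uses": 3,
}
print(is_usable(licence, now=datetime(2025, 6, 1, tzinfo=timezone.utc),
                uses_so_far=1))   # True
print(is_usable(licence, now=datetime(2026, 2, 1, tzinfo=timezone.utc),
                uses_so_far=1))   # False
```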

Pseudo Inference Engine (PIE):


This is in line with Focus #5 discussed above. The purpose of the PIE component is to provide interaction-based navigation, content searching and learning-path decision making. This component will work in both offline and online modes. In the former mode, it will depend on an XML tree-structured interaction map to determine what kind of response it gives the end-user, in order to continue the two-way interaction between the user and the course.

In the UI, PIE is represented as a simple terminal-like bot, guide or assistant to help the user. It is also rather like a wizard-type component that can show textual information and links, trigger other CUPID components and events, search the content, and enable or disable course settings for the user depending on the interaction.

PIE is actually one of the most innovative and unique components shipped with CUPID. Through it, the course interacts with the user to help him set his learning path, based on which navigation happens in the course. In simple words, PIE acts as a guide who helps the user decide which chapter or content section to move to next and which media to play in order to quickly meet the learning objectives and complete the learning path successfully.

An example interaction detailing how a user converses with PIE in a cooking-related course developed in CUPID (PIE asks questions and infers; the user answers and gets help and info):

CUPID PIE: Hi, how can I help you?

User: Not sure which lesson to start with.

CUPID PIE: OK. Are you looking for a quick solution to a particular problem, or do you want to understand the whole concept and get a certificate?

User: Quick solution only.

CUPID PIE: Hmm.. please type your problem.

User: Really?

CUPID PIE: Yes. Please type your problem.

User: Hmm.. OK. I want to know how to create a cake for my cooking class.

CUPID PIE: Searching… please wait. Found 2 results.. please see if you got the answer.. I am waiting.. will try to help you if you are not satisfied… Search results: Link 1: Make plum cake.. Link 2: Make chocolate cake

User: No, I want a vanilla cake recipe.

CUPID PIE: Searching for vanilla-related recipes… it seems there are some videos you might be interested in. Video 1: How to add vanilla to your cakes?.. Video 2: Make pies without vanilla… Video 3: Indian vanilla flavors. It seems video 2 is the most similar.

User: Hmm.. right, thank you. Please play this.

CUPID PIE: Sure. [PLAYS the video]

This is just a very basic interaction shown in the example. PIE can gather the learner’s history and personal details to understand the most probable need of the user, and based on that can determine the sequence of results/options it shows.

A PIE for very basic and focused needs can be built with the XML tree-structured interaction map, so that the offline version can work without the user needing to be connected to the net. A sample XML-based interaction map can be as follows:




    <interactionmap>
      <greetings>
        <response q="good morning [USERNAME]"/>
        <response q="hi [USERNAME]"/>
      </greetings>
      <prompts>
        <response q="how can I help you?"/>
        <response q="You need some help?"/>
        <response q="Welcome back!"/>
      </prompts>
    </interactionmap>
The interaction maps can be developed using CUPID authoring modules.
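Assuming a schema along the lines of the sample above (the tag names `<interactionmap>`, `<greetings>`, `<response q="...">` are hypothetical reconstructions, since the original markup was garbled), a minimal offline reader for such a map could look like this:

```python
import random
import xml.etree.ElementTree as ET

# Sketch of loading an offline interaction map and picking a response.
# The XML schema here is a hypothetical reconstruction, not CUPID's
# actual (unpublished) format.

INTERACTION_MAP = """
<interactionmap>
  <greetings>
    <response q="good morning [USERNAME]"/>
    <response q="hi [USERNAME]"/>
  </greetings>
  <prompts>
    <response q="how can I help you?"/>
    <response q="Welcome back!"/>
  </prompts>
</interactionmap>
"""

def pick_response(xml_text, section, username):
    """Pick one canned response from a section, filling in the username."""
    root = ET.fromstring(xml_text)
    options = [r.get("q") for r in root.find(section)]
    return random.choice(options).replace("[USERNAME]", username)

print(pick_response(INTERACTION_MAP, "greetings", "Samir"))
```

Because the whole map ships with the course as a static XML tree, this lookup needs no network connection, which is the point of the offline mode.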

Learning Path Status (LPS) and Progress Indicator (PI):

This is in line with Focus #6 discussed above. This component will provide reliable ways to ensure the actual progress of the user along his learning path.

LPS is an important feature of CUPID, given the varying scenarios of users’ learning needs. Typically, some users just want to refer to the course. Some just want to study the theory to attend some exam. Some may want an in-depth study and to generate a certificate. As all of these needs are different, a single predefined sequence of lessons is not useful in all these scenarios. In such cases LPS helps in finding out which learning path best fits the user.

It will track how much the user has actually gone through of both static and interactive/media content. Based on the user’s progress, the component will ask in-between questions to keep the user engaged. It will also provide optional controls and APIs for content authors to determine whether progress-status-based navigation locking is enabled for the target content in any course.

Learning paths are one of the most important aspects of CUPID. The “learning path” is the sequence of lessons or sections the user needs to go through in any CUPID-enabled course that is most effective for his needs/learning.

LPS is the component that interacts with the user (on his first run of the course by default, and on subsequent runs on demand) to help him determine the best-fit “learning path” for his requirements. It helps customise the way content is presented to him.

Typically the PI will have predefined instructions/criteria to determine the user’s progress in each type of course content/section/media. For example, one sample instruction can be: if the user has viewed at least 80% of a video and has spent more than 5 minutes playing it (dynamically calculated based on the video length), then it should be marked with “complete” status. This kind of instruction helps the PI determine intelligently whether the user has really viewed the video.

Now, for the above example, if the user has completed 80% of the video but has really spent only 2 minutes watching it, CUPID can know that the user has actually skipped the video, post an alert through the PI and mark that video as “incomplete”. Additionally, this information can be used in the learning path to give a score or points to the user based on some calculation in which the time spent by the user is taken into account. Some assessment sections can even be disabled until the user completely watches the video, textual or interactive content.

This way the PI will actually help in:

    1. improving the reliability of self-learning modules in the e-learning domain, so that in organisations self-learning modules can be trusted for providing training; and
    1. making sure that the user actually goes through each and every recommended lesson of the course before taking the self-assessment and generating the certificate.
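The 80%-viewed-plus-minimum-time rule from the example above can be sketched as follows. The exact thresholds are illustrative, not prescribed by CUPID:

```python
# Sketch of the PI progress rule from the example above: a video counts
# as "complete" only if enough of it was viewed AND enough real time was
# spent watching, so skipping ahead is detected. Thresholds are
# illustrative assumptions.

def video_status(length_s, fraction_viewed, seconds_watched,
                 min_fraction=0.8, min_time_ratio=0.5):
    min_seconds = length_s * min_time_ratio  # dynamic, based on length
    if fraction_viewed >= min_fraction and seconds_watched >= min_seconds:
        return "complete"
    return "incomplete"

# 10-minute video, 80% "viewed", but only 2 minutes actually spent:
# the user skipped ahead, so the PI marks it incomplete.
print(video_status(600, 0.8, 120))   # incomplete
print(video_status(600, 0.9, 420))   # complete
```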

There will be some additional sub-components, such as a face-tracking camera that can be enabled (if available in the user’s system as a webcam) to keep an eye on the user and check whether he is actually present viewing the running course content/media/video. If the user is not present while a media/video is running, it can pause it and stop all activities of the course in progress until the user returns to his seat and takes some action to resume the course.

However, this special sub-component should be made mandatory only in case the user needs a certification from the current self-learning course. More details of this aspect are discussed in the next section.

Offline Self-certification Manager (OSM):


This is in line with Focus #7 discussed above. This component will have three sub-components.

a. Self-certificate Assessment Mode Manager: The course will have a “Self Certificate Assessment Mode” which will enable the learner to take the self-assessment to generate a certificate from the learning framework for a target score. To use this mode the user has to have a minimum set of predefined hardware, such as a webcam. This sub-component will monitor all activities and interactions of the user with the course, and will take random photographs when needed, to make sure that the user has actually gone through the course and has himself attended the assessment.

b. Certificate Key Encryption: This component will take care of generating reliable “self-learning completion” certificates from the offline/online course. It uses well-known cryptographic algorithms like SHA and Blowfish to process the data, generate a certificate information string and print it in the certificate document, which the learner can use as a reliable certificate of his learning status.

c. Online Central Certificate Tracking: Whenever the user comes online, the certificate data is synced to a central/home server of the framework and a key is assigned, so that at any point of time the certificate can be tracked.

Inbuilt SCORM/AICC Content Package Reader Components (CPRC):


This is in line with Focus #8 discussed above. To make sure the content is compatible with well-known e-learning specifications like SCORM/AICC, the framework will have reader components which will allow SCORM 1.2/2004 and AICC packages to run in offline mode. These components will provide the related adaptors and APIs to track the user’s scores and learning status from the target learning framework.

There are 2 ways of implementation, depending on the need:

    1. Make SCORM/AICC packages compatible with CUPID, to run them as standalone offline courses.
    1. Package CUPID packages as SCORM/AICC packages so that they can be run inside an LMS compatible with SCORM/AICC.

The CPRC component helps in communicating with the CUPID reader APIs through a wrapper layer that integrates the SCORM component with the framework, so that other, non-SCORM features of CUPID are also available in the same course.

The CPRC also uses the SCORM/AICC APIs to track all the user data and saves it locally, so that if needed at a later point of time that data can be exported to other standard LMSs which are compatible with SCORM/AICC.
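The local-tracking idea can be sketched as a stand-in for the SCORM 1.2 run-time API. The `LMS*` calls and `cmi.*` element names below are from the SCORM 1.2 specification; the local-storage behaviour is an assumption about how CUPID's adaptor might work:

```python
# Sketch of a local stand-in for the SCORM 1.2 run-time API: the course
# calls the usual LMS* functions, and the wrapper records the cmi data
# locally so it can be exported to a real LMS later. The persistence
# strategy is an illustrative assumption.

class LocalScormApi:
    def __init__(self):
        self.cmi = {}
        self.initialized = False

    def LMSInitialize(self, _=""):
        self.initialized = True
        return "true"

    def LMSSetValue(self, element, value):
        self.cmi[element] = value
        return "true"

    def LMSGetValue(self, element):
        return self.cmi.get(element, "")

    def LMSCommit(self, _=""):
        return "true"   # a real adaptor would persist to disk here

    def LMSFinish(self, _=""):
        self.initialized = False
        return "true"

api = LocalScormApi()
api.LMSInitialize()
api.LMSSetValue("cmi.core.lesson_status", "completed")
api.LMSSetValue("cmi.core.score.raw", "85")
api.LMSCommit()
print(api.LMSGetValue("cmi.core.lesson_status"))  # completed
```

When the user comes online, the accumulated `cmi` dictionary could be replayed against a real LMS's API, which is what "exported to other standard LMSs" would amount to.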

e-Pub and PDF Reader components:


This is in line with Focus #9 discussed above. These components will allow rendering of well-known content package and file formats such as ePub (2 and 3) and PDF. The offline player will extract the ePub packages and, after validating the spine, render the pages. Similarly, the course will have an independent PDF reader to render PDF documents and integrate them into the whole learning framework. This will allow easy import/migration of such content into the framework and will be a boon for content authors.
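The spine-validation step mentioned above can be sketched against a minimal EPUB 2 OPF document: every `itemref` in the spine must point at an `item` declared in the manifest. The OPF snippet is a made-up minimal example:

```python
import xml.etree.ElementTree as ET

# Sketch of the EPUB spine check: each spine itemref must reference an
# item declared in the manifest. The OPF snippet is a minimal example.

OPF = """
<package xmlns="http://www.idpf.org/2007/opf" version="2.0">
  <manifest>
    <item id="ch1" href="ch1.xhtml" media-type="application/xhtml+xml"/>
    <item id="ch2" href="ch2.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
    <itemref idref="ch2"/>
  </spine>
</package>
"""

NS = {"opf": "http://www.idpf.org/2007/opf"}

def spine_is_valid(opf_text):
    root = ET.fromstring(opf_text)
    manifest_ids = {i.get("id")
                    for i in root.findall("opf:manifest/opf:item", NS)}
    spine_refs = [r.get("idref")
                  for r in root.findall("opf:spine/opf:itemref", NS)]
    return all(ref in manifest_ids for ref in spine_refs)

print(spine_is_valid(OPF))  # True
```

Once the spine checks out, the reader can walk the `itemref`s in order to render the pages, which is exactly the reading order the spine defines.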


Custom Course Authoring Markup Language Rendering Engine:


This is in line with Focus #10 discussed above. Apart from importing course content from PDFs and ePubs, the framework supports a custom markup language named “CupidML” (with the file extension “.cml”). Using this, authors can easily create content pages with built-in features such as video players, media renderers, tooltips, assessments, and tabbed and simple pages. The CML tags are self-explanatory and content-author-friendly, allowing authors to master the language within a few hours of effort. By using the language, authors do not need to learn complex JavaScript and/or other programming skills to enable complex interactivity within the course. At a later point of time, course authoring tools can be developed that will use this language to build courses. The CupidML markup tags are simple and meaningful for content authors.


The .cml document starts and ends with a root node, and inside this node the CML-specific tags meant for pages, tabs, menus, media components etc. are added to easily create the new pages, chapters and sections of the course and attach centralized event listeners, without the content authors requiring knowledge of complex programming languages. The only thing a content author needs to know is which tag is needed for which kind of element of the course. There is a tag for everything the content author needs.

For example, individual pages can be created by just adding a simple page tag, and a heading can be added to the page with a heading tag. Adding a complex item like a podcast through its tag will add a full-fledged podcast component with all the captioning etc., where the tag’s value (e.g. “1”) represents the URL index number in the podcast array defined in the data/configuration file.

A typical page in .cml will be much simpler than the actual code that is generated at runtime by CUPID to make things happen and provide all the interaction, event propagation and API calls.
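A sketch of how such a runtime expansion could work. The tag names (`<cml>`, `<page>`, `<heading>`, `<podcast>`) follow the examples in the text, but the real schema is not shown, so treat them and the podcast URL array as hypothetical:

```python
import xml.etree.ElementTree as ET

# Sketch of expanding a CupidML-style page at runtime. The schema and
# the podcast URL array are hypothetical illustrations.

CML = """
<cml>
  <page title="Lesson 1">
    <heading>Baking basics</heading>
    <podcast>1</podcast>
  </page>
</cml>
"""

PODCAST_URLS = ["https://example.com/intro.mp3",
                "https://example.com/ep2.mp3"]

def render(cml_text):
    root = ET.fromstring(cml_text)
    out = []
    for page in root.findall("page"):
        out.append(f"== {page.get('title')} ==")
        for node in page:
            if node.tag == "heading":
                out.append(f"<h1>{node.text}</h1>")
            elif node.tag == "podcast":
                # the tag's value indexes into the podcast array
                # from the data/configuration file
                idx = int(node.text)
                out.append(f"<audio src='{PODCAST_URLS[idx]}' controls>")
    return "\n".join(out)

print(render(CML))
```

The author writes only the three CML tags; the runtime generates the full player markup, event wiring and caption hooks, which is the gap between the "simple page" and "actual generated code" described above.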

Instructional Design Compatible Components (IDCC):

This is in line with Focus #11 discussed above. There will be built-in components that can support course content compatibility with Bloom’s taxonomy. For example, there will be default learning-path creation components which will allow content authors to add “surprise” elements, in-between questions, “case studies” and scenario-based tasks. The content authors can select predefined “instructional design” templates and then fill in the placeholders to quickly create courses compatible with Bloom’s taxonomy.

Different instructional models fulfil the “creation”, “discovery”, “analysis”, “remembering”, “understanding” and “applying” phases in various degrees.

IDCC provides options for:

Scenario-based training: where different scenarios are combined with assessments and guided tasks so that the user can actually practice what has been learned.

Case studies: In this model a sample case is presented through real-life scenes, and based on that case, related video, audio, interactive media and assessments can be grouped together.

Guided tasks: In a guided task, the major focus is on giving the user a task to complete, with the user assisted by a character/bot or simple alert-based instructions. Whenever the user makes a mistake, the system alerts the user and provides hints on how to solve or proceed with the task.

By combining these sub-types, most courses can be built to fit the needs of any type of self-learning module, from soft-skills training and software simulation training to educational course materials.

So all lessons in side the course is grouped into 2 categories:

Textual Study Mode: In this mode the content types like text, hyperlinks, internal anchors, popup tool tips, quiz, assessments etc. will be available for the target course.

Interactive Media Mode: In this mode the content types like video, audio, podcasts, animations, interactive , game, simulation, guided task etc. will be available for the target course.

These two modes are interchangeable, and the user can toggle between them at any point of his learning. For any particular section/lesson, both modes are available, so the user can decide for himself which one to use at any point of time.
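The toggle described above can be pictured as a tiny state machine; the following JavaScript sketch is purely illustrative (the `Lesson` class and mode names as identifiers are assumptions):

```javascript
// Hypothetical sketch: a lesson that can be toggled between its two
// delivery modes at any time. The class itself is an assumption.
class Lesson {
  constructor() {
    this.mode = "textual"; // "textual" or "interactiveMedia"
  }
  toggleMode() {
    // Switch to the other mode and report the new one.
    this.mode = this.mode === "textual" ? "interactiveMedia" : "textual";
    return this.mode;
  }
}
```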


Innovative Assessment Engines (IAE):


This is in line with Focus #12 we discussed. The aim of this new-age assessment engine is to help users self-assess and review the level of skill they have obtained. It will include new kinds of interactive assessment types along with the typically used “multiple choice”, “true-false”, “drag-n-drop” and “hot-spot” ones. The new kinds will include:

(1) Guided tasks
(2) Interactive Scenario based question sets
(3) Basic simulators for software related trainings
(4) The “in-between” hit and click type question with special scoring.

The Guided Task is a kind of assessment where the user can self-assess, if needed, in a bot/wizard-assisted simulated environment. Combined with simulation, it gives a hands-on kind of experience. Basically, a wizard, guide or assistant first assigns a task to the learner based on what he has learned, and then helps him complete it along the way by providing hints and task-related tips.

The Interactive Scenario-based question sets are provided as part of any lesson or logical learning chunk: the user is presented with a case scenario through an interactive, video, animation or slides, is then asked to view a screencast or interactive simulated environment, and is finally given some interactive questions.

The Basic simulators for software-related trainings are sets where the user works with software-training UI screens in a simulated environment, clicking different UI elements and providing input to complete the tasks. This gives the user a first-hand feel of the target software on which he is being trained.

The “in-between” hit-and-click type question with special scoring is a kind where the user is asked some questions at random in the middle of a lesson. These questions relate to the lesson the learner is going through, which makes sure the user is actually paying attention. Moreover, as these questions carry special scoring that is calculated and added to the final certification generated, the user cannot ignore them.
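One way the special scoring could fold into the final certification score is a weighted sum; the sketch below is hypothetical (the weighting factor and the `finalScore` function are assumptions, not the engine's actual formula):

```javascript
// Hypothetical sketch: fold the specially scored "in-between" question
// results into the final certification score. Weight value is assumed.
function finalScore(assessmentScore, inBetweenScores, weight = 0.25) {
  // Sum the in-between question scores and add them with a reduced weight,
  // so skipping them always costs the learner something.
  const inBetweenTotal = inBetweenScores.reduce((a, b) => a + b, 0);
  return assessmentScore + weight * inBetweenTotal;
}
```

For example, a learner scoring 80 on the main assessment and 10 on each of two in-between questions would end up with 85 under this assumed weighting.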


User Frequent Course Interaction Manager (UFCIM):


This is in line with Focus #13, where the user needs to bookmark any point of the course (a page or a tab), select and highlight content, and add notes or annotations. Free-hand whiteboard-type annotation of the course and customisation of font size, font type and theme are also taken care of by this component.

The user is provided with a standard UI with controls in the lesson area, so that any of these functionalities is available within a single click.
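The per-user record that UFCIM would need to keep can be sketched as a plain data structure; the field names below are assumptions for illustration, not the component's real schema:

```javascript
// Hypothetical sketch of the per-user interaction state UFCIM might keep:
// bookmarks, highlights with notes, and display preferences.
function createInteractionState() {
  return {
    bookmarks: [],   // each entry: { lessonId, page }
    highlights: [],  // each entry: { lessonId, selection, note }
    preferences: { fontSize: 14, fontType: "serif", theme: "light" },
  };
}

function addBookmark(state, lessonId, page) {
  state.bookmarks.push({ lessonId: lessonId, page: page });
  return state;
}

function addHighlight(state, lessonId, selection, note) {
  state.highlights.push({ lessonId: lessonId, selection: selection, note: note });
  return state;
}
```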


Component Based Architecture and Plug n’ Play Rendering Engine (PnPRE):


This is in line with Focus #14 we discussed and relates to the technical aspect of the framework. The whole framework is a combination of multiple “Plug n’ Play”-capable components. This component is responsible for allowing the integration of new components with the core framework engine. It is also responsible for exposing APIs that developers can use to create and add new Plug n’ Play components for the framework.


The diagram above details the process flow and how the core APIs relate to the different plugin or extension components. Any code chunk or package compiled as a component or plugin of the CUPID framework is consumed by PnPRE, which then processes that code to add it to the core CUPID framework in real time, providing the new plugin-related functionality. This makes it easy for any developer to extend and/or customise the CUPID framework.
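The plug-n-play idea above boils down to a runtime registry that accepts components and hands them the core API. The sketch below is a generic illustration of that pattern; the `PluginRegistry` class and the `init()` hook are assumptions, not the real CUPID/PnPRE API:

```javascript
// Hypothetical sketch of a plug-n-play registry: components register
// against the core engine at runtime and receive the core API on init.
class PluginRegistry {
  constructor() {
    this.plugins = new Map();
  }
  register(name, plugin) {
    // Require a minimal contract so broken plugins fail fast.
    if (typeof plugin.init !== "function") {
      throw new Error("Plugin must expose an init() hook");
    }
    this.plugins.set(name, plugin);
    plugin.init(this); // hand the core API to the plugin
  }
  has(name) {
    return this.plugins.has(name);
  }
}
```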

The Usability and UI Aspect

The CUPID framework targets all the standard digital devices, from personal computers and desktops to netbooks, small-screen mobile devices, standard multi-touch tablets and the related hybrids.

That is the reason the usability focus is always device-specific, based on the context and environment in which the user is interacting.

The major flow that CUPID follows to determine the kind of usability experience to provide is:

    1. Determine the host environment through browser and OS sniffing. Also sniff for the available real estate on the screen, resolution, colour depth etc. to provide a suitable UI layout.
    2. Sniff for available hardware like camera, speaker, mic etc. on the device so that the usability aspect can be determined.
    3. Sniff for input devices like a multi-touch screen, mouse or keyboard, so that the user interaction types can be determined.
    4. Determine if the user/learner needs special assistance due to any visual or hearing disability. This helps in deciding whether the built-in offline voice-over features or the captioning system should be enabled.

Depending on the above factors, suitable user navigation, context menus, navigational links, media types, layouts and customisation thresholds are determined, and the best possible experience is delivered to the learner.
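The four sniffing steps above feed a decision about which UI profile to serve. The pure function below sketches that decision; the field names of the sniffed-environment object and the profile names are assumptions for illustration:

```javascript
// Hypothetical sketch of the capability-driven decision flow: given an
// already sniffed environment, pick a UI profile. Field names assumed.
function pickUiProfile(env) {
  // Accessibility needs take priority over device form factor.
  if (env.needsAssistance) return "voiceOverOrCaptions";
  if (env.hasTouch && env.screenWidth < 800) return "mobileTouch";
  if (env.hasTouch) return "tabletTouch";
  return "desktopPointer";
}
```

In a browser, the `env` object would be populated from user-agent, screen and input-device sniffing; keeping the decision itself a pure function makes it straightforward to test.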

The Technology Used

The technology stack used in CUPID is based mostly on open technologies, so that the framework:

can be easily maintained using open source (MIT-licensed only) code and plugins
will be economical to develop, maintain and extend in the future
will make it easy and economical to get support from the online communities/forums around open technologies
will be low on budget
will get more contributors from the community.

Based on these, the key technology stack used in CUPID is:

Client Side:
Flash ActionScript (using Flasm/Ming and the free Adobe Flash Builder SDK)
HTML5/CSS3/JavaScript frameworks – includes open source frameworks like jQuery, Dojo, YUI, Twitter Bootstrap, Adobe’s Edge Runtime (it’s free, unlike the builder), Sencha SDK, CoffeeScript, JSON
HTML4/CSS2 – for older browsers and low-end systems
XML, XSLT – data and style rendering
JavaScript – also CoffeeScript

Host-browser specific (e.g. Safari specific, Google Play widget specific, Windows 8 app specific)
Platform/native specific: Objective-C, Android, C, C++, PhoneGap SDK, Sencha SDKs etc.

Server Side:
PHP (Scripting)
MySQL (Database)
XML and JSON (API format)

Specifications :
TinCan API
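The Tin Can API (also known as the Experience API, or xAPI) tracks learning activity as actor–verb–object statements. A minimal statement of the kind CUPID could emit might look like the following; the mailbox and activity URLs are placeholders, while the verb ID is from the standard ADL verb vocabulary:

```javascript
// Minimal Tin Can (xAPI) style statement: actor–verb–object.
// The learner mailbox and activity IDs below are placeholders.
const statement = {
  actor: {
    mbox: "mailto:learner@example.com",
    name: "Example Learner",
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed", // standard ADL verb
    display: { "en-US": "completed" },
  },
  object: {
    id: "http://example.com/courses/cupid-demo/lesson-1",
    definition: { name: { "en-US": "Lesson 1" } },
  },
};
```

Such statements would normally be POSTed as JSON to a Learning Record Store (LRS) for later reporting.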

Video Demos


License and Business Models

The decision on the business model of CUPID takes into account the following goals:

The project should sustain itself in the currently competing commercial and open source worlds. The framework should be available FREE of cost to content authors who are underprivileged or facing financial challenges.

It should be available as an open source initiative, while at the same time allowing the framework to be used commercially.
Any commercial firm can use the framework for custom development, developing or enhancing it for commercial purposes by paying a licensing fee.

To make sure that all of the above applies to CUPID project, the project licensing is decided as follows:

The project will be MIT licensed, so that commercial versions are not bound to open-source any modifications made to the framework, unlike with the GPL. Any company/organisation can pay a licence fee, based on the volume of usage their commercial version will have, in order to develop/extend it as a commercial version. Content authors, teachers, developers, organisations and NGOs doing social service in education who would like to extend and use the framework non-commercially are free to do so without paying any fees, and their extensions need not be open-sourced if they so wish.

For support on the technical or behavioural aspects of CUPID, there will be an annual support fee, which needs to be renewed each year for as long as support is needed.

How this framework “re-innovates” the Self Learning e-learning domain


CUPID is a framework that actually attempts to rediscover the “self-learning” domain of e-learning, which we think we have already discovered! CUPID implements some fresh concepts and unique ways to improve the learning experience for the user, and at the same time makes it easy for content authors to quickly develop and deliver highly interactive and rich content without having to worry about the technicalities.

CUPID also addresses the needs of a continuously changing world that is moving at a fast pace through the digital revolutions, e.g. from PCs to laptops to mobiles and tablets. It provides a unique way to create once and deploy on many platforms.

CUPID also goes beyond the standard limits of Instructional Design and provides new blends that offer fresh ways to keep the user engaged with the course.

CUPID also bridges the gap between the MOOC-based e-learning trend and the “self-paced” one by adding ground-breaking “course-user interactions” through assistants, bots and the PIE component, so that the user always feels assisted and guided along his learning curve.

CUPID helps the user find the correct learning path, making his learning curve more effective and shorter.

Last but not the least, CUPID has built-in features that support visually challenged and hard-of-hearing persons, providing features that work even when no screen reader is present. It also provides unique voice-assisted navigation, plus caption-based (interactive text label) navigation and content media guidance for the hard of hearing.

In a single line CUPID is all about re-innovating the “self-learning modules” for the next decade.

How CUPID can change the world

The approach of CUPID follows a vision that focuses on every aspect that a learning environment depends on.

It focuses on:

    • The user: his taste, his learning requirements; CUPID assists him all through his learning curve for the self-learning period on the topic of the target module.
    • The author and his needs: making it easy to integrate content, providing quick, super-easy ways to add content media, making content compatible with standard Instructional Design needs (the Bloom factors), and helping protect content (e.g. DRM).
    • Certification needs: so that the user can go through self-learning and at the same time generate a certificate that is reliable and meaningful for authorities/organisations requiring certificates from the learner (an employee).
    • The commercial aspect/developers: provides options for extending the learning framework into a successful commercial product, as well as for content-producer companies who can reliably protect their premium content in the self-learning modules through the DRM feature implemented.
    • Open source communities: CUPID is a great contribution to the open source communities, as its SDK is MIT licensed. Plus it uses open technologies to support the “open source movement”.
    • Financially backward sections of the world: CUPID is FREE for financially backward sections/organisations/authors to develop and extend for their non-commercial use.
    • Free institutions: all NGOs, educational institutions, universities and non-profit institutions are free to extend it and may keep their extensions non-open-sourced if they want, so that they can use it for exam needs etc.; as long as they are not using it for commercial purposes, they can enjoy this freedom.
    • The physically challenged: visually challenged and hard-of-hearing persons can use CUPID without any extra screen reader or hardware/software installed on their system.


The MIT License (MIT)

Wikipedia entry on MIT license

Trends of Innovation in Education :

Technology, Content, Pedagogy, and Value


Video URL: http://youtu.be/LfjlUX_lLUA
Slides URL: http://www.slideshare.net/MobileWish/cupid-common-unified-programs-for-instruction-delivery

Project Presentation

Sample Code for a lip sync animation using LipSyncAS3 v1

A few years back I was using the FlashAmp tool to create amplitude-based lip sync for Flash characters, for use in mini games and e-learning projects. Basically it was a tool that let you get an array of amplitude values from MP3 voice-over files; using this array I could sync the movieclip containing the character’s lip to the MP3 voice being played. But this was never dynamic: if you changed the MP3 file, you had to pre-process the audio file and update the array again. Now things have changed.

With LipSyncAS3, similar effects can be achieved with no pre-processing required. LipSyncAS3 is an easy-to-use development kit for integrating audio-amplitude-based runtime lip sync into your Flash characters/animations. LipSyncAS3 does not require you to preprocess your audio MP3 files with any tool; instead, the amplitude-based lip sync is generated in real time. LipSyncAS3 does not even require a server to render the lips, so any animation/interactive solution using it can be delivered via CD/DVD for playing locally on a PC, which makes it a good tool for e-learning and CBT/WBTs.

Note that, as the lip sync is not based on a viseme or phoneme set, it is not accurate in detail. However, it is good enough for small interactive projects, web animations and e-learning projects like SCORM/CBT/WBT where interactive lip sync is required for an MP3 voice.

The lip sync engine gives designers/animators the freedom to design their own lip images with different positions, in reference to the provided template, which is rendered in their Flash movies without any need to pre-process the voice audio.

Many projects like the talking animals/birds at www.mgreetings.com use this LipSyncAS3 to do the lipsync parts.

Following is exactly what you need to do for a basic lip sync project using this kit:

The first step is to get the kit (trial version) from http://patterngraphic.com/detailspage.php?product=PG001. This is a full working version, with a trial-version message appearing.

System Requirements

LipSyncAS3 is a purely Flash AS3-based solution. It requires your Flash movies to be published for at least Adobe Flash Player 9, and they must be compiled with ActionScript 3. To use the LipSyncAS3 kit you need the Adobe Flash CS3 (or above) IDE to integrate it into your Flash movies. You can also integrate your lip sync movies into Flex/Flash Builder applications.

The audio files used as voice must be MP3 files, as Flash only supports loading MP3 files dynamically.

To run a Flash movie using LipSyncAS3 locally on a desktop, it needs to be compiled as a projector file (.exe on Windows). To run it from the web, you can use the SWF version with the publish setting set to networking privilege.

The LipSyncAS3 Development Kit consists of the following basic files and folders:

Lip Engine.swf: this is the lip engine that has to be loaded into your movie. The engine has no visual parts visible on the stage; rather, it helps run whatever lip pattern or lip graphic you have designed, rendering it in real time.

LipPattern.swf: this is the graphic representation of the lip to suit your animated character, which is rendered on the screen. You can create your own custom lip set based on a template FLA provided. A lip pattern can have a maximum of 10 lip positions: the lip image set at frame 1 of the timeline represents the lip in the silent position; from frame 2 onward, the lip images must show progressively more open positions based on amplitude (e.g. frame 3 has the lip more open than frame 2), so that frame 10 has the widest/longest open lip position.

Note that depending on the mp3 audio quality, voice type, you may need to fine tune your lip design to match the needs.

The final output folder may contain files similar to those shown in the image below:

The available methods in LipSyncAS3 are as follows:

SetLipPattern(lipPatternSource:String, lipLicenseEmail:String, lipLicenseSerial:String, onLipPatternLoaded:Function) :void

This method is called at the beginning to initiate the lip engine. It does the following:
• validates the provided serial number
• loads the lip engine
• loads your custom lip pattern.

lipPatternSource:String is the path to your lip pattern SWF file;

lipLicenseEmail:String is your registered email id while purchasing a license of this product.

lipLicenseSerial:String is your purchased license/serial number.

onLipPatternLoaded:Function is the method which you can use to trigger anything when LipPattern finishes loading

In case you want to try the product, use “TRIAL” (case sensitive) as your lipLicenseSerial, and for the registered email id use any of your email ids. Running a trial version will not affect any functionality; it will only add a message to your Flash movie mentioning that this is a trial version. Note that the trial version is only for the purpose of reviewing the product.

SetVoicePath(Mp3FilePath:String, onVoiceLoadComplete:Function, onVoicePlayComplete:Function, onID3Load:Function, onIOError:Function, onVoiceLoadProgress:Function ) :void

This method sets the mp3 file path which contains the voice for lip sync.

This method accepts a number of parameters, all but the first of which are optional.

Mp3FilePath:String is the MP3 file path

onVoiceLoadComplete:Function is a function where you define what to happen when MP3 load is complete.

onVoicePlayComplete:Function is a function where you define what to happen when voice playing is complete.

onID3Load:Function is a function where you define what to happen when voice ID3 details are loaded from the mp3 file. For example you can then call the GetID3Info():ID3Info to get the ID3 details like artist name, album name, year and so on.

onIOError:Function is a function where you define what to happen when mp3 file could not be loaded.

onVoiceLoadProgress:Function is a function where you define what to happen while loading of mp3 continues. For example you can check for total loaded bytes to show a loading progress.

PlayLip():void This method starts playing the already loaded MP3 and along with it starts rendering the lip. If the MP3 was paused using PauseLip(), it resumes playback from the last paused position.

This method stops the playback of the MP3 and stops the lip movement.

This method starts playing the already loaded MP3 from the beginning.

SetBufferTime (seconds:Number) :void
The default buffer time for the sound object loading the MP3 is set to 1000 milliseconds. You can change this using this method.

This moves the play head to a particular position (in milliseconds) in the MP3. This can be useful for creating a progress bar for the playback, to control where to start playing.

This returns an ID3Info object for the playing MP3 file (if the ID3 details are present).

Example:

var tempID3InfoObj:ID3Info = MovieClip(LipLoader.content).GetID3Info();
trace("ID3 Loaded");
trace("artist : "+tempID3InfoObj.artist);
trace("track : "+tempID3InfoObj.track);
trace("comment : "+tempID3InfoObj.comment);
trace("songName : "+tempID3InfoObj.songName);
trace("album : "+tempID3InfoObj.album);
trace("genre : "+tempID3InfoObj.genre);
trace("year : "+tempID3InfoObj.year);

This returns the number of bytes loaded for the mp3

This returns the total bytes (file size) for the mp3 being loaded.

This sets the volume of the target voice object for a particular MP3. Each time a new MP3 is loaded the volume resets to the default value, in which case you can set the volume to your desired level using this method.

This sets the pan value for the target voice object.

This returns the volume value.

This returns the pan value.

You can download the zip archive with the sample files from http://www.patterngraphic.com/samplefilesdownload.php?product=PG001.

The sample zip archive contains one sample template FLA file with example usage of most of the available public methods. In addition, a sample Podcast Reader application FLA file is included in the archive, in which a character’s lip sync is rendered as per the podcast voice. There are also 2 sample lip pattern FLA files, which you can use to create your own lips.

If you open the sample files archive, you will see the following code, which is self-explanatory:




(c)2009, www.PatternGraphic.com, All rights reserved.


This template provides most of the available public methods of the LipSyncAS3 engine.

You can use this sample template to create your own lip sync enabled movies.

You need to publish your movie for ActionScript 3 and Flash Player 9 or above.


import flash.media.ID3Info;


var lipPatternSource:String ="LipPattern/lip_pattern_puppet.swf";

var lipEngineSource:String = "LipEngine/lipsyncas3v1_lip_engine_network.swf";

var lipLicenseEmail:String = "support@patterngraphic.com";//Use your registered email id here

var lipLicenseSerial:String = "TRIAL"; //Use your purchased license serial number here.

//Create a Loader Movie clip to load the lip

var LipLoader:Loader = new Loader();

//Now attach the lip loader to stage (you can attach it to your character's face movie clip too)

addChild(LipLoader);

//Now position it where ever you want

LipLoader.x = 25;

LipLoader.y = 120;

//Now load the lip engine swf file into your loader

var bgURL:URLRequest = new URLRequest(lipEngineSource);

LipLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, lipLoaded);

LipLoader.load(bgURL);

function onLipPatternLoaded():void {

//this method triggers something when the lip pattern has finished loading. Helpful when the lip pattern is a big file.

trace("Lip Pattern is finished loading");

}

function lipLoaded(event:Event):void {

//the following command loads the specific lip pattern

MovieClip(LipLoader.content).SetLipPattern(lipPatternSource, lipLicenseEmail, lipLicenseSerial, onLipPatternLoaded);

}

//Now to control the lip sync add some buttons (you can use your own method to control it)

loadBtn.addEventListener(flash.events.MouseEvent.CLICK, LoadVoice);

loadBtn2.addEventListener(flash.events.MouseEvent.CLICK, LoadVoice2);

function LoadVoice(e:MouseEvent):void {

//the following command loads the mp3 containing the voice to the lip

resetProgressBar ();

MovieClip(LipLoader.content).SetVoicePath("mp3/demo.mp3", onVoiceLoadComplete, onVoicePlayComplete, onID3Load, onIOError, onVoiceLoadProgress);

}

function LoadVoice2(e:MouseEvent):void {

//the following command loads the mp3 containing the voice to the lip

MovieClip(LipLoader.content).SetVoicePath("mp3/0.mp3", onVoiceLoadComplete, onVoicePlayComplete, onID3Load, onIOError, onVoiceLoadProgress);

}

//Now to control the lip sync add some buttons (you can use your own method to control it)

playBtn.addEventListener(flash.events.MouseEvent.CLICK, PlayVoice);

function PlayVoice(e:MouseEvent):void {

//the following command plays the sound and starts the lip sync activity

MovieClip(LipLoader.content).PlayLip();

}

pauseBtn.addEventListener(flash.events.MouseEvent.CLICK, PauseVoice);

function PauseVoice(e:MouseEvent):void {

//the following command pauses the lip

MovieClip(LipLoader.content).PauseLip();

}

restartBtn.addEventListener(flash.events.MouseEvent.CLICK, RestartVoice);

function RestartVoice(e:MouseEvent):void {

//the restart command goes here: it plays the loaded mp3 from the beginning

}

function onVoiceLoadProgress():void {

var bytestotal:Number = MovieClip(LipLoader.content).GetBytesTotal();

var bytesloaded:Number = MovieClip(LipLoader.content).GetBytesLoaded();

if(bytestotal > 0){

updateProgressBar(bytestotal, bytesloaded, 0, 0);

}

}

function onVoiceLoadComplete():void {

trace("Loading of Voice Completed")

//GetVoiceLength():Number will return the length  of the loaded audio

var voiceLength:Number = MovieClip(LipLoader.content).GetVoiceLength();


//auto play from the beginning of the newly loaded audio when the load completes

MovieClip(LipLoader.content).PlayLip();

}

function onVoicePlayComplete():void {

trace("the voice playing completed!");

}

function onID3Load():void {

//GetID3Info():ID3Info  returns the ID3 object of the loaded voice

var tempID3InfoObj:ID3Info = MovieClip(LipLoader.content).GetID3Info();

trace("ID3 Loaded");

trace("artist : "+tempID3InfoObj.artist);

trace("track  : "+tempID3InfoObj.track);

trace("comment  : "+tempID3InfoObj.comment);

trace("songName  : "+tempID3InfoObj.songName);

trace("album  : "+tempID3InfoObj.album);

trace("genre  : "+tempID3InfoObj.genre);

trace("year  : "+tempID3InfoObj.year);

}

function onIOError():void {

trace("Some error occurred in loading voice!");

}


positionBtn.addEventListener(flash.events.MouseEvent.CLICK, GetPlaybackProgress);

function GetPlaybackProgress(e:MouseEvent):void {

//GetChennelPosition() returns the current channel position of the playing audio

var tempPosition:Number = MovieClip(LipLoader.content).GetChennelPosition();

}


function GetBufferTime():void {

MovieClip(LipLoader.content).GetBufferTime();

}

function SetBufferTime():void {

//Note: default buffer time is set to 1000;

MovieClip(LipLoader.content).SetBufferTime(8000);

}

//----------------------PROGRESS BAR-----------------------------

this.addEventListener(Event.ENTER_FRAME, checkPlaybackProgress);

function checkPlaybackProgress(e:Event){

if(MovieClip(LipLoader.content) != null){

var tempPosition: Number = MovieClip(LipLoader.content).GetChennelPosition();

var voicelength:Number = MovieClip(LipLoader.content).GetVoiceLength();

var bytestotal:Number = MovieClip(LipLoader.content).GetBytesTotal();

var bytesloaded:Number = MovieClip(LipLoader.content).GetBytesLoaded();

updateProgressBar(bytestotal, bytesloaded, tempPosition, voicelength);

}

}


function resetProgressBar ():void {

progressBar.progressBarPlayed.width = 0;

progressBar.progressBarLoaded.width = 0;

}

function updateProgressBar(BytesTotal:Number , BytesLoaded:Number, voicePosition:Number, voicelength:Number):void{

if(BytesTotal>0 ){

progressBar.progressBarPlayed.width = voicePosition/voicelength*progressBar.progressBarBG.width;

progressBar.progressBarLoaded.width = BytesLoaded*(progressBar.progressBarBG.width/BytesTotal);

} else{

resetProgressBar();

}

}



//---------------------------------------ADDITIONAL METHODS -------------------------------------

seekBtn.addEventListener(flash.events.MouseEvent.CLICK, Seek);

function Seek(e:MouseEvent):void {

//the seek command goes here: it moves the play head to a position (in milliseconds) in the mp3

}

bytesLoadedBtn.addEventListener(flash.events.MouseEvent.CLICK, bytesLoaded);

function bytesLoaded(e:MouseEvent):void {

trace(MovieClip(LipLoader.content).GetBytesLoaded());

}

totalBytesBtn.addEventListener(flash.events.MouseEvent.CLICK, totalBytes);

function totalBytes(e:MouseEvent):void {

trace(MovieClip(LipLoader.content).GetBytesTotal());

}

setVolBtn.addEventListener(flash.events.MouseEvent.CLICK, setVol);

function setVol(e:MouseEvent):void {

MovieClip(LipLoader.content).SetVoiceVolume(5);

}

setPanBtn.addEventListener(flash.events.MouseEvent.CLICK, setPan);

function setPan(e:MouseEvent):void {

MovieClip(LipLoader.content).SetVoicePan(-2);

}

getVolBtn.addEventListener(flash.events.MouseEvent.CLICK, getVol);

function getVol(e:MouseEvent):void {

trace(MovieClip(LipLoader.content).GetVoiceVolume());

}

getPanBtn.addEventListener(flash.events.MouseEvent.CLICK, getPan);

function getPan(e:MouseEvent):void {

trace(MovieClip(LipLoader.content).GetVoicePan());

}

UI2MVC for AS3: Saving Effort in the Beginning Stages of Project Development

Everyone knows that developing a PureMVC application in AS3 requires a good amount of effort in the beginning weeks. That was my experience with several applications I developed in the past for different clients who wanted the perfect PureMVC structure implemented, and that too within a limited budget and time. It was a pain for the team to deliver a project in PureMVC when the budget was crushed. So I worked on a small toolkit that helps reduce the team’s effort in the beginning stage of projects that use the PureMVC framework for AS3.

In any project implementing PureMVC in AS3 in Flash, approximately 15%–30% of the total development effort is spent writing the UI-level views, notifications, commands, mediators and proxies. In the beginning weeks, the initial effort goes into preparing the basic workflow based on the business flow defined in the requirement docs. Once the basic flow is ready, the rest of the effort goes into implementing the business logic on the base framework prepared. Another very interesting point: the more you split your application into smaller, loosely coupled modules/logical units, the more flexible your application becomes, but at the same time it requires adding more .AS files representing the split-out logical units, commands, notifications, views etc. But the client is never happy with the flexibility implemented, nor with requests to add more budget for more lines of code; rather, he wants the application to be endlessly flexible and modular without allowing any additional effort hours. A typical output in the first weeks of development, with the basic flow ready, can look something like the following image.

Now, what if this could be done in a day or two instead of spending one or two weeks? Well, that’s what UI2MVC is all about. Currently it takes UI-level inputs, label names and file names in the form of a specially formatted XML (I call it UIXML) and outputs the basic flow of a PureMVC-ready application that can be compiled without any graphical views. Though the tool is at a very primitive stage, I have successfully used it to deliver two projects, saving roughly 2–3 weeks of time on each. I am now trying to create a GUI version so that it will be more user-friendly and can speed up the workflow. Also, it is currently meant for certain types of AS3 PureMVC application development; for example, it is limited to a single common navigation controller. Note: it also does not currently support Flex/Flash Builder projects.
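The core idea of generating PureMVC boilerplate names from a declarative UI description can be sketched as follows. This is a hypothetical illustration: the input shape, the `scaffold` function and the naming scheme are all assumptions, and the real tool consumes the author's UIXML format rather than a JavaScript object:

```javascript
// Hypothetical sketch of the UI2MVC idea: derive PureMVC-style
// boilerplate names (view class, mediator class, notification constant)
// from a declarative list of views. Names and shapes are assumed.
function scaffold(uiSpec) {
  return uiSpec.views.map(function (view) {
    return {
      viewClass: view + "View",           // e.g. LoginView.as
      mediatorClass: view + "Mediator",   // e.g. LoginMediator.as
      showNote: "SHOW_" + view.toUpperCase(), // notification constant
    };
  });
}

const parts = scaffold({ views: ["Login", "Dashboard"] });
```

A real generator would then emit .AS file stubs from these names, which is where the 2–3 weeks of boilerplate effort gets saved.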

MobileWish Pocket Travel Assistant has been shortlisted as a Finalist for MBillionth Congress & Awards 2010

Great news! My travel application Pocket Travel Assistant has been shortlisted (by the Grand Jury held in Colombo during June 10–13, 2010) as a finalist in the MBillionth Congress & Awards Gala to be held at New Delhi on 23rd July 2010. The project found its place among 160 “Successful Quality Nominations” received from countries like India, Pakistan, Sri Lanka, Bangladesh, Nepal and Maldives. The MBillionth Awards South Asia Mobile Congress is organised by the Digital Empowerment Foundation, WSA MobileContent and the Department of Information Technology, Govt of India, with partners like Nokia Siemens Network, MobileMonday etc.

Pocket Travel Assistant was nominated under the category m-Travel & Tourism, which focused on “use of Mobile and complementing applications and embedded services for enriching the information society by aggregating digital content pertaining to travel and tourism thus help creating knowledge rich information society; offer of travel and tourism related information and services like real time travel bookings, location and transport information, including through GPS and GIS.”

There were other categories as well, such as m-Environment, m-News & Journalism, m-Inclusion, m-Health, m-Education & Learning, m-Entertainment, m-Governance, m-Culture & Heritage, and m-Business & Commerce/Banking.

Following are some details from the MBillionth website:

The M-Billionth Mobile Congress is going to be held at New Delhi on 23rd July 2010:

An exciting lineup of speakers from the mobile industry, as well as from other industries and sectors touched by the mobile market, including entertainment and education. The conference will be held in multiple parallel thematic sessions across all 10 categories, with panels of speakers, presenters and participants from the mobile world of applications, service delivery and content. The recommendations from the sessions will be used to analyse the growth of mobile applications in the respective categories.


The m-Billionth Award is a dedicated platform to explore the latent potential of the mobile sector across South Asia. These media phones and handy mobile gadgets can now do anything from allowing physicians to review patient records on the go, to alerting drivers in any city of nearby parking spaces while looking for a spot, in addition to the texting, surfing and calling on many of today’s phones.

The m-Billionth Award South Asia 2010 is the first of its kind in the region, recognising and felicitating mobile innovations, applications and content service delivery. It honours excellence in mobile communications across South Asia, spread over 9 core categories. The m-Billionth Award is designed as South Asia’s leading annual mobile content award platform, building towards a larger regional Mobile Congress in media and policy advocacy.

What is m-Billionth Award?

The 21st century world could be called a cellular age, going by the phenomenal increase in cell phones worldwide. The cellular age is fast accelerating, with cell phones magnifying our larger identity in both individual and collective domains and in various social, cultural and economic contexts. This tremendous growth in mobile penetration can be witnessed worldwide, especially in developing countries with larger population concentrations. Thus, the power of cellular technology is evident. And South Asia is not, and cannot be, left behind.

The picture is amazing: India has reached 600 million mobile connections, and other South Asian countries like Bangladesh and Sri Lanka are also showing strong growth in mobile users. Mobile penetration as a percentage of population in the region stands at 46.37 for India, 31.11 for Bangladesh, 51 for Sri Lanka and 59 for Pakistan, and the trend is moving upward each day. South Asia is bearing witness to the power of the mobile device to empower millions with information, content and services. Meeting knowledge, social and economic deficits through mobile remains a challenge in these countries; at the same time, the power of mobile innovations and applications in content and service delivery is amazing and rising exponentially.

It is time that the nations and governments of South Asia, as well as its service providers, give a thrust to mobile for delivering meaningful services, especially those that could empower and enable efficient day-to-day life for the larger masses. It is imperative now to create a situation where the best applications, content and services are gathered as examples to drive government and industry to create meaningful opportunities in last-mile mobile content and services. Governance and development are bound to improve as a result.

Towards this objective, in order to make content and services deliverable through mobile as a medium and means, the Digital Empowerment Foundation has launched the m-Billionth Award SOUTH ASIA ON MOBILE CONTENT & APPLICATIONS FOR MASSES. The award aims to:

  1. Recognise mobile innovations and creativity.
  2. Provide a platform for turning these innovations into benchmarks for the mobile industry.
  3. Provide a wider forum for strategic networking, alliances and partnership building.
  4. Provide a South Asia Congress to exchange ideas, practices, policy strengths and mobile advocacy.
  5. Facilitate the building of a South Asia network for campaigning and advocacy in mobile for mass empowerment and inclusive growth.

mBillionth Award Partners

For registration visit :  http://mbillionth.in/reg/

MobileWish NewsBox : News App for N97 with a 3d cube navigation

I have uploaded the MobileWish NewsBox application at the following link:


MobileWish NewsBox is an m-News application developed in Flash Lite 3.1, with 3D navigation for touch-enabled handsets like the Nokia N97, that reads out various news feeds, Twitter updates, NASDAQ stock updates and blog posts. The idea behind MobileWish NewsBox was to develop a talking news app with a very user-friendly, easy-to-use, touch-enabled 3D navigation system that is easy to personalize.

You can see the test demo running on the N97 at the Nokia RDA site via the following links:







To download visit the following link: http://mobilewish.com/newsbox/

See the slides at 

MobileWish News Box: MobileWish NewsBox is a unique m-News application with a 3D Navigation for touch enabled… http://www.slideshare.net/MobileWish/mobilewish-news-box

Send Talking Dogs and Singing Cats to your friend’s mobile: mgreetings.com is Live!

Yes, www.mgreetings.com, a very cool Flash (rather, Flex AS3) based greetings website, is live now! Yvo and the team worked hard on the project, and it took 18 months to get from idea to go-live. Next to AmericanGreetings and Hallmark (both billion-dollar companies), this is the only site in the world that can deliver mobile video greeting cards. And the best part of the news is that mgreetings.com is the only company that can deliver talking cards to a mobile phone! The site offers a choice of different characters who speak or sing aloud the text you write, and once personalised you can send the card to your friend’s mobile device. The generated cards are supported on most popular handset brands and operating systems, including iPhone, Symbian and Windows. As long as your friend’s mobile supports video, they can enjoy the card you send.

There are many FREE cards which you can send during a FREE one-month trial period.

So why wait? Visit the site and sign up at www.mgreetings.com.

The team has lots of new ideas and features on the roadmap, so don’t forget to give feedback at the site.

Note: I have a limited number of promotional codes which you can use to get 20 PREMIUM cards free. If you need one, send me an email ASAP so you can get it before anyone else does.