Multimodality will provide a more natural way of using mobile devices, enabling users to make calls, request data, or send messages by speech, by touch (using a keypad or stylus), or by both speech and touch simultaneously.
Users will also be able to receive information via text or voice, regardless of how the message was originally created. They will be able to change their method of interaction (visual, speech or touch) at any time without having to end the conversation or data session. And it will be possible to speak at the same time as writing text, reviewing a presentation, or sending a message.
Users will have the freedom to choose the interaction mode (voice, visual or touch) that is most comfortable, convenient, and intuitive at any moment, and to switch spontaneously from mode to mode as circumstances change.
The following are some of the most commonly asked questions:
* What impact will it have on existing services such as SMS?
Multimodality will introduce interactions that previously did not exist. Today, SMS users can only message other SMS users. Multimodality will let them send a message to any destination, knowing that the recipient will receive it in their own preferred form - be it voice, text or video.
* The ability to do text and speech at the same time is tied into GPRS as a standard. What is different about Comverse's multimodality concepts?
Using text and speech at the same time, or switching quickly between voice and text, is certainly an attribute of 2.5G networks. However, to exploit the full potential of this capability, a network server is needed to perform what we call 'session management': switching modes in the right context, according to a predefined profile or system set-up.
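The session-management idea can be sketched roughly as follows. This is an illustrative sketch only, not Comverse's actual server design; the `Session` and `Modality` names, the profile structure, and the "blocked modes" rule are all invented for the example. The key point it demonstrates is that the conversation context survives a modality switch.

```python
from enum import Enum

class Modality(Enum):
    VOICE = "voice"
    TEXT = "text"
    VIDEO = "video"

class Session:
    """Tracks one conversation across modality switches, so that the
    context (e.g. a half-written message) survives each switch."""

    def __init__(self, user_profile):
        self.profile = user_profile           # predefined preferences
        self.modality = user_profile["default"]
        self.context = {}                     # survives every switch

    def switch(self, new_modality):
        # Honour the profile: a user's set-up may disallow a mode
        # (e.g. no voice while in a "meeting" context).
        if new_modality in self.profile.get("blocked", []):
            raise ValueError(f"{new_modality} disallowed by profile")
        self.modality = new_modality
        return self.modality

# A subscriber starts drafting a message by voice, then finishes by text:
profile = {"default": Modality.VOICE, "blocked": []}
s = Session(profile)
s.context["draft"] = "Running late, see you at"
s.switch(Modality.TEXT)                       # mode changes...
s.context["draft"] += " eight."               # ...but the draft survives
```

The design choice the sketch highlights is that switching is a server-side state transition within one session, rather than ending one bearer session and starting another.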
* What is the billing model for multimodal services?
The whole billing model issue, which relates to data services, is still unclear and the integration of voice and data makes it even more complicated. We suggest starting with a simple billing model allowing for a subscription-based fee, plus an airtime fee.
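The simple model suggested above amounts to a flat fee plus usage. As a minimal sketch (the fee and rate figures are placeholders, not proposed prices; amounts are in integer cents to keep the arithmetic exact):

```python
def monthly_bill(subscription_cents, airtime_minutes, per_minute_cents):
    """Subscription-plus-airtime model: flat monthly fee plus a
    per-minute charge for airtime used. Amounts in integer cents."""
    return subscription_cents + airtime_minutes * per_minute_cents

# e.g. a 5.00 monthly fee plus 30 minutes at 0.10/minute:
bill = monthly_bill(500, 30, 10)   # -> 800, i.e. 8.00
```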
* Will all the software reside on the network or will part of it be on the handset? Will users need to wait for new handsets before they can benefit from multimodal services?
Comverse will use whatever standard capability exists in the handset itself, or use the available network technology. In some cases we will cooperate and partner with handset manufacturers to promote technical solutions. Even in 2G networks, and for subscribers using today's handsets, Comverse allows a subscriber to choose the medium in which they prefer to receive information and to switch from one medium to another within the same session, such as hearing a text e-mail message and replying by voice.
* How easy is it for the users to adopt a multimodal solution?
Ease of use is the key objective of multimodal solutions. Users will decide on the communication mode: whether they are more comfortable listening to information, viewing text or pictures on screen, responding via touch-tone, or responding via voice control or key-click navigation.
* How much extra will it cost to the user? Why will the user want to pay extra for this service?
The benefits include convenience, ease of use and ease of communication. There will be a cost factor, but service providers will ensure that the services are affordable, otherwise cost will become a barrier limiting the adoption of the technology.
* Do you foresee network operators reaching a new audience through multimodality?
Because of this flexibility in the user interface, multimodality will draw in users who were not using existing services and encourage them to adopt the new ones.
* Do you see business users taking up multimodal services?
Yes. In terms of sending pictures and videos, the applications are numerous.
* What about standards?
Eight of the world's leading mobile vendors announced at Cannes at the beginning of March 2002 the formation of a multimedia messaging service (MMS) Interoperability Group to help smooth the path for MMS deployment by mobile service providers worldwide.
Ericsson, Nokia, Motorola, Siemens, Logica, CMG, Comverse, and handset joint venture Sony Ericsson said the group, 'defined and approved' by the Third Generation Partnership Project (3GPP) and the WAP Forum, will work to develop systems that will allow interoperability between MMS-enabled handsets and servers from different manufacturers.
The group's key activities include the facilitation and co-ordination of MMS interoperability testing, MMS interoperability problem solving, and the channelling of technical information related to such testing, to help ensure that MMS applications are compatible and so speed up the introduction of such services. The results of this work will be released to the 3GPP and the WAP Forum for inclusion in the open standards defined by these bodies.
* Can you describe the MMS Conformance document in more detail?
The MMS Conformance paper aims to identify the issues that need to be addressed to ensure the interoperability of MMS functionalities between terminals and network elements produced by different manufacturers. A minimum set of requirements has been defined at four levels to achieve interoperability: content of the message, allowed elements and attributes of the presentation language, media content format, and lower level capabilities.
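The four-level structure described above lends itself to layered checking: a candidate message either satisfies the minimum set at each level or fails that level. The sketch below is illustrative only; the actual minimum sets are defined in the MMS Conformance document itself, and the allowed values shown here are invented placeholders.

```python
# Placeholder minimum sets for each of the four conformance levels
# named in the article; NOT the real values from the Conformance paper.
REQUIREMENTS = {
    "message_content": {"text", "image", "audio"},
    "presentation_elements": {"par", "img", "text"},   # SMIL-like
    "media_formats": {"text/plain", "image/jpeg", "audio/amr"},
    "lower_level": {"wap_push", "http_get"},
}

def conformance_report(message):
    """Check a candidate message against each level's minimum set and
    report, per level, anything it uses that falls outside that set."""
    failures = {}
    for level, allowed in REQUIREMENTS.items():
        used = set(message.get(level, []))
        extra = used - allowed
        if extra:
            failures[level] = extra
    return failures

msg = {"message_content": ["text"], "media_formats": ["image/png"]}
print(conformance_report(msg))   # -> {'media_formats': {'image/png'}}
```

A message that passes all four levels yields an empty report, which is the interoperability goal: any conformant terminal or server can handle it.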
* What is the area that requires most work in MMS interoperability?
The main interface to be tested in MMS is the one between the phone and the network server. In addition, compatibility in terminal presentation, i.e. how messages look on phone displays, has to be tested and verified. These two areas are likely to present the main interoperability challenges for the group to solve.
* Why are no operators involved?
This initiative focuses on testing the technical interoperability of MMS equipment. It is a technical exercise to ensure that the goals set by the industry for MMS standards are met, and is therefore a task for the vendors to solve before the equipment is delivered to the operators. The operators will perform their own commercial interoperability and verification tests based on their agreements with their vendors.
Multimodality has real potential to improve people's everyday lives. End-users generally grasp the concept, are impressed by it, and are eager to discover more. Key perceived benefits are improved time management, greater connectivity, more choice, freedom and fun. Some concerns were expressed, specifically cost, reliability, ease of use, and fear of new technology (intrusiveness). But these initial fears are often overcome as users appreciate that multimodality creates more (not less) user choice and freedom.
Participants envisioned a wide range of benefits. Users were generally most excited by the parallel modes of interaction and responded favourably to the richer user experience that multimodality produces. Participants also demonstrated an impressive level of creativity, generating several new scenarios in which multimodal interaction would benefit users. Their own definitions of multimodality, elicited at the end of each session, demonstrated a clear grasp of the concept and its advantages.
Multimodality is a valuable competitive differentiator. If positioned and marketed correctly, multimodality can deliver real competitive advantage and brand loyalty. As an operator's ability to charge for multimodality will be subject to normal user resistance to price increases, multimodality should be positioned as an enhancement to existing applications rather than as something entirely new. And while implementation must not disappoint in terms of key user concerns (price, reliability, ease of use), overall user responses indicate that multimodality will generate additional airtime and ARPU for operators. It is expected that the first solution will be launched mid-2002.