Category Archives: Accessibility

Autonomous vehicles and their unexpected consequences: an accessibility Mexican stand-off

I recently joined an Institution of Engineering and Technology (IET) workshop looking into some of the unexpected consequences of autonomous vehicles. The IET will be producing a report shortly, but in the meantime I'm here to promote the accessibility design agenda.

The technical capabilities of these vehicles are mind-boggling, with on-board compute power for navigation, vehicle management and entertainment. To be totally autonomous they need to be able to operate independently of any external input, yet they will also benefit from the explosion of external data sources. The impact on city planning, traffic, parking and public services overall will be life-changing for everyone.

Doubtless the vehicles themselves, becoming ever more sentient, will be very grateful for the increased levels of communication, new information sources and peer-to-peer information flowing between fellow road users, as well as input from the streets and other city constituents. Making these information flows bi-directional, linking on-board vehicle systems to city information sources such as traffic and building data, is also essential: it is what brings a wheelchair user to the right door, lets a robot or drone deliver the package, and finds the right assistance for a vision-impaired person trying to find their way into the mall.

However, what really needs careful consideration is the user interface for these vehicles and their associated services. The most important issue is enabling a wide variety of users, with different levels of IT skill and different accessibility needs, to initiate the autonomous vehicle and any assisted travel through their preferred means. And, of course, if someone else orders or initiates the service, the passenger must still be communicated with in their preferred manner.

Designing the interface from scratch with inclusivity in mind will avoid the painful and often ill-fated fall-back of bolting on development for different disabilities afterwards.

The good news is that the work being done around omni-channel customer experience, with its multiple options for communicating with people and machines, addresses many of the issues. Add to this an upfront design that embraces the accessibility features of smartphones, tablets, personal assistants and home automation systems, and we have the beginnings of an inclusive design.

The justification for this is not just about including all disabled people in the digital economy. Inclusive design actually makes a service easier for everyone to use. Some people like talking to their app, some like interacting via a touch screen, and some might even prefer the old QWERTY keyboard approach.

Artificial intelligence and machine learning will also contribute to the smooth incorporation of all users into the emerging scenarios. For example, once an individual with particular needs is identified as having ordered an autonomous vehicle service, the system can route a specific vehicle, possibly specially adapted, to the desired location.
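As a purely illustrative sketch (the names, data structures and dispatch rule here are my own invention, not any real operator's system), that kind of matching might look something like this:

```python
# Hypothetical sketch: match a ride request to a suitably adapted vehicle.
# Vehicle features and the dispatch rule are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Optional, Set, List

@dataclass
class Vehicle:
    vehicle_id: str
    features: Set[str] = field(default_factory=set)  # e.g. {"wheelchair_ramp", "audio_guidance"}
    minutes_away: int = 0

def dispatch(passenger_needs: Set[str], fleet: List[Vehicle]) -> Optional[Vehicle]:
    """Return the nearest vehicle whose features cover all of the passenger's needs."""
    suitable = [v for v in fleet if passenger_needs <= v.features]
    return min(suitable, key=lambda v: v.minutes_away) if suitable else None

fleet = [
    Vehicle("AV-01", {"audio_guidance"}, minutes_away=3),
    Vehicle("AV-02", {"wheelchair_ramp", "audio_guidance"}, minutes_away=7),
]
print(dispatch({"wheelchair_ramp"}, fleet))  # AV-02 is chosen despite being further away
```

The point is simply that the accessibility profile becomes part of the dispatch decision rather than an afterthought.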

Autonomous vehicles can then contribute further by helping local authorities streamline services for people requiring home visits and social care and, of course, by linking them into healthcare systems. Ambulances will be redefined: with autonomous driving, the paramedics can concentrate on looking after the patient.

So, we should also be excited about the impact of autonomous cars on groups previously precluded from driving. New ways of spending time while travelling from one place to another, entertainment, work, rest, will appeal to all customers, but we must pay attention to the human-machine interface and offer it in a variety of ways that suit all users under all circumstances.

One final thought: what happens when the autonomous vehicle senses a person with a guide dog and the guide dog senses an approaching vehicle? A Mexican stand-off – go find the algorithm for that one!

Keep an eye out for the full IET report coming soon.


The Hyper-connected individual meets the Healthcare system

At the Great Telco Debate last year, one of the biggest laughs came when my co-host Graham Wilde was teased for buying his wife a Fitbit, the implication being that she needed to lose weight! The success of these so-called health-tracking devices, and their associated apps, is an indication of how wearables, combined with smartphones and tablets, are beginning to change our behaviour and our lives.

Outside the healthcare industry, these devices with their life-changing outputs are seen as wondrous. However, inside the healthcare sector, they are often dismissed as being toys providing inaccurate and misleading information.

The consumer electronics industry, with its dynamic, gadget-crazy geeks, coming up against the established healthcare profession, with its hospitals and insurance organisations, represents a key battleground for us all. Regulation in the medical arena is extensive, and so it should be. Consumer electronics is a considerably more liberal environment. So we have the challenge of making money and identifying new markets on the one hand, whilst accurately treating people with illnesses and disabilities on the other.

In previous articles I have considered the world's billion disabled people and the opportunities for assistive technology in the form of regular smartphones, wearables, apps and the Internet of Things (IoT). I now think it is worth expanding the discussion to include the broader healthcare industry. The simple reason is that if we get it right for the healthcare sector as a whole, the solutions will include everybody, whether suffering from a short-term illness or a long-term disability.

We all have experiences which involve the healthcare system at some point in our lives. As with many industries, the Internet and availability of smart devices of all types gives us an insight into a world that was previously shrouded in mystique.

From home: remote diagnosis

Before we even enter a doctor's surgery or hospital, we are armed with information from our web searches and data from our mobile health and lifestyle apps. Exercise and diet, alongside our essential measurements, are tracked to give us an indication of how we are doing. A blip in how we feel, or in the data, might trigger an Internet search, often leading to inaccurate self-diagnosis and unnecessary alarm.

Today we are seeing the beginnings of new fee-based services where medical professionals provide consultations via online chat or even video. In many circumstances, such interaction will be sufficient to satisfy the 'customer' that everything is fine, or can generate sufficient advice, or even a prescription, to address the issue. If not, escalation to a more formal, traditional consultation will be necessary. This virtual triage could be useful for the industry in reducing the number of people unnecessarily entering the 'real-world' healthcare system.

The traditional medical system

In surgeries and hospitals, medical professionals can call on the costly devices and services necessary to diagnose and treat the individual. These devices are increasingly connected through multiple channels, allowing even remote specialists to access patient records and produce a diagnosis. Furthermore, scans which were previously too big to circulate are now whizzed across the network infrastructure for everyone to share on their multiple self-provided or hospital-provided devices.

Nor should we lose sight of the fact that a specialist surgeon could perform an operation via a robot, and indeed with a virtual scalpel, given the right connectivity, video and local support.

And, of course, should the condition be acute, an ambulance, also completely connected to the medical facilities, can be dispatched, with diagnostics and treatment carried out by the paramedics. Perhaps we could think of this as mobile triage!

Following medical intervention, physicians and nursing staff are increasingly armed with sophisticated bedside monitoring equipment, once again feeding into central patient records. This is now being complemented by smartphone-based offerings. That's not to say the clipboard at the foot of the bed will disappear, but this 'analogue service' will be complemented by electronic versions with analysis and alarms to notify staff.

Developments in devices, sensors, applications and medical add-ons are all helping to change the dynamic of treating conditions:

  • Self-administered blood-glucose monitoring is radically changing the treatment of diabetes and dramatically lowering the levels of insulin required
  • Pseudo off-the-shelf 3D-printed artificial limbs are accelerating limb replacements
  • Sensors, cameras and microphones are allowing sensory enhancement or indeed replacement.

Smartphones are the unifying element today, but this doesn't have to be the case. We will doubtless end up with many separate connections and data flows from our bodies to our carers and physicians, or indeed to our own smart devices.

Aftercare

On leaving the formal hospital environment, there are now many new opportunities to reduce the frequency of return visits and the cost of supporting the patient. Aftercare hitherto confined to follow-up consultations at the hospital can increasingly be delivered via video-based services and, ideally, more dispersed facilities in the local community. After all, a consultation on how well a hip replacement or skin condition has improved can just as easily be done over a Skype link. As with many industry transformations, this requires an organisational, process, financial and cultural shift. If the follow-up consultation is carried out perfectly well by video link, why should it command only a fraction of the fee usually assigned to an in-person or in-hospital consultation?

Social care

Follow-up social care, whether in the person's home or in a social care unit, can also benefit from this ultra-connected world. The vastly expensive scanners obviously cannot be dispersed out into the community, but care staff and patient associations can use much simpler, slimmed-down technology, more consumer-electronics-like devices and the myriad of available apps to give both the patient and the carer a better-informed understanding of activities. Scheduling appointments is an obvious first step to make better use of care staff. But simple data gathering, whether by questioning the patient or using medical devices operated by the carer, will add value on top of consumer-grade devices such as blood-pressure and heart-rate monitors and motion detectors.

And, if a doctor is required to visit the patient, then portable devices such as ECG machines can feed data back into the patient's records over cellular or WiFi networks.

Long-term care is a major focus for the industry today. With an increase in chronic conditions and the subsequent drain on resources, anything which can reduce the total cost of this service, whilst improving the quality of care given to individuals, is a 'no-brainer'.

The journey

So, the entire journey, from initial Internet search, through formal medical intervention, to aftercare, can benefit from a better-connected environment. There is, of course, the issue of who pays, public or private, which depends to a large extent on the individual country. No doubt the best use of fixed broadband and mobile technology, smartphones and tablets, wearables and IoT, consumer and industry-approved apps, and a willingness of all parties to adapt to the new environment will pay massive dividends.

Who is to say that the future daily routine of a nurse or doctor won't consist of a couple of hours on face-to-face duty, followed by a couple of hours online?

It is vital when designing devices and apps in this area that simplicity and accessibility for all levels of technical ability are built in from scratch. In many countries, for the foreseeable future, we have an ageing population that is not smartphone-literate. This could be one way of bringing many of them into the touch-screen world, as long as we don't confuse the issue with overly complex solutions. The technology exists to hide the complexity behind a simple interface, or behind different accessible features depending on whether a person has limited vision, dexterity or mobility. After all, if we can build a button to simplify the ordering of a pizza, we can build an app button to address the key requirement of a particular patient and their needs.
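To make the 'hide the complexity' point concrete, here is a minimal, purely hypothetical sketch of how an app might pick sensible defaults from a simple accessibility profile; the profile fields and settings are my own invention, not any particular product's:

```python
# Hypothetical sketch: choose interface defaults from a simple accessibility profile.
def choose_interface(profile: dict) -> dict:
    """Map a user's stated limitations to sensible interaction defaults."""
    settings = {"output": "standard screen", "input": "touch", "text_size": "normal"}
    if profile.get("limited_vision"):
        settings.update(output="speech plus large text", text_size="extra large")
    if profile.get("limited_dexterity"):
        settings.update(input="voice commands")
    if profile.get("limited_mobility"):
        settings["confirmations"] = "single tap or single utterance"
    return settings

print(choose_interface({"limited_vision": True, "limited_dexterity": True}))
```

The complexity sits inside the app; the patient only ever sees the one button that suits them.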

What is clear is that the lines of demarcation between home, formal medical facilities and aftercare are blurring. The volume of data and information flowing between all participants is increasing with personal and medical devices. Centralised patient records fed from all points are vital. Technology has to be embedded into simpler processes in order to underpin the new healthcare regimes.

We are all part of this particular journey. Let's persuade all parties, patients, medical staff, administrators and, perhaps most importantly, politicians, that this is one way technology can literally help us all to a better life. Many industries have been disrupted dramatically through devices, apps and connectivity. We could see a restructuring of the healthcare sector. Who knows, it might lead to more local services and a move away from the previous trend towards bigger and bigger hospitals.

The Six Million Dollar Man 40 years on. Wearables, Smartphones, 3D printing. Cost to you <$100k!

As a teenager in the 1970s I loved Steve Austin, the astronaut who crashed and was rebuilt. Remember the tag line: "Gentlemen, we can rebuild him. We have the technology, the capability to make the world's first Bionic man"! It captured my imagination and has come to the front of my thinking now as I consider the possibilities of using technology to compensate for the different disabilities affecting people today. Perhaps we can't replicate the eagle-eye zoom or the leopard-like speed of Lee Majors' character, but we can certainly assemble functional replacements and complementary devices and applications that bring the astronaut-specific rebuild of the 70s down to a very affordable level today.

The price of electronic components is continually falling, fuelling the consumer electronics boom. Smartphones, and their associated explosion of applications, leverage the mobile network and the cloud computing phenomenon to deliver a wealth of apps, both mainstream and specific to certain conditions, often free or at minimal charge. On top of this comes the wearables revolution: watches, bracelets, eyewear, hearing devices, patches and exoskeleton limbs. 3D printing also means that the manufacture of specialist devices is literally at the push of a button and can be taken to the most remote parts of the world, delivering prosthetics to Africa at an affordable price point.

So how does the $6 million (not even allowing for inflation) look today? What gadgets, software and services could we pluck from consumer electronics retail outlets, apps stores and the medical community to build our modern-day bionic person?

  • Smartphone – $500
  • Exoskeleton bionic hand – $20,000
  • Exoskeleton legs with muscle stimulated control – $30,000
  • Sight (glasses) – $2,500
  • Hearing – $2,500
  • Bracelet with haptic feedback – $500
  • Smart watch – $500
  • Skin patches – $50

Total = <$100,000
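For the curious, a quick tally of the indicative prices above (rough assumed figures, not quotes) shows just how much headroom sits under that $100,000 ceiling:

```python
# Rough tally of the indicative prices listed above (assumed figures, not quotes).
kit = {
    "Smartphone": 500,
    "Exoskeleton bionic hand": 20_000,
    "Exoskeleton legs": 30_000,
    "Sight (glasses)": 2_500,
    "Hearing": 2_500,
    "Bracelet with haptic feedback": 500,
    "Smart watch": 500,
    "Skin patches": 50,
}
total = sum(kit.values())
print(f"Total: ${total:,}")               # Total: $56,550
print(f"Under $100k: {total < 100_000}")  # Under $100k: True
```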

There are, of course, some very expensive options, such as retinal implants, which still cost hundreds of thousands of dollars and require complex surgery. However, most of the shopping list is literally off-the-shelf, or even off-the-printer.

The individual will need a subscription to a mobile provider, and probably also a link to their home WiFi, to enjoy the luxury of controlling the various household devices and services in the smarter-home environment; and, outside the home, to link with the smart village, town and city services that complement these items with navigation, access to public services and links to the broader business community.

The e-health perspective also needs to be built into the thinking. Some of the devices will link into social and e-health services. Some of the information loops could run to doctors and carers without the individual even needing to be involved. In effect, multiple information loops will feed to and from the individual, whilst improved monitoring and the reduced cost of maintaining contact will help fund installations where required.

Steve Austin was funded by the US government in the TV show. In the case of a person being disabled through an accident, insurance would doubtless be involved, as would the medical authorities. Most of the items here are off-the-shelf and affordable for a vast swathe of society, not just limited to astronauts!

The world's billion disabled people (source: WHO) will have an increasing chance of joining the digital revolution at home, at work and in society as a whole if we all help bring it to their attention. We also need to educate other relevant parties – family and friends, doctors and governments, to name a few.

In the year that we all went Back to the Future, it just goes to show that time spent watching TV as a teenager wasn’t wasted. In Thunderbirds and Joe 90, Gerry Anderson predicted video mobile phones, Telepresence and brainwave transplants, and don’t forget the crew of the Starship Enterprise had mobile devices. If you want to predict the future, keep an eye on the TV!


What tech is out there for disabled people?

A recent interview with Telefonica about the accessibility technology available today and in the future. I'm not the one in the bow tie…

Lewis Insight Interview with Telefonica


You’re blind: How do you ‘read’, join in social media and find your way around, let alone run a business?

Picture the scene: a blind man walking down the street, moving his white stick to and fro. He is muttering to himself while clicking a small black thing in his left hand. What is he doing? Actually, he is running his business: doing email, messaging, reading documents, checking in for his flight and working out the best bus and tube route to the airport. The black device is a mini keyboard controlling the iPhone in his pocket, which is talking to him via his in-ear Bluetooth device…

Having been registered blind for over 30 years, I am accustomed to the regular question: how the hell do you run a business? I thought it worthwhile to put this down in writing, both as a record of how things stand in 2015 and as evidence of how my world has changed since the days of cumbersome magnifiers, papers being sent off to be recorded, and very clunky interfaces with early PCs.

Equipment & technology

  1. iPhone 6: This is my main means of consuming content and keeping up to date, using the built-in VoiceOver feature (not Siri) as a screen reader that describes to me what is on the screen. Added to this larger-than-necessary device (the screen size is irrelevant to me) are a small Bluetooth mini keyboard, the RiVo, which I use as a remote control for the iPhone (leaving the phone in my pocket or bag), and a Plantronics Bluetooth earpiece.
  2. Lenovo laptop with Window-Eyes and ZoomText: I still use a laptop for main content creation, such as this blog. This is now simply because I like the feel of a full, old-fashioned keyboard and a large, magnified screen that makes me feel I am still working properly! There are no specific built-in accessibility applications on the laptop beyond this add-on assistive technology. Updates to Window-Eyes and ZoomText can often cause problems, because their interworking with either the Lenovo hardware or the Windows operating system is a continuous struggle.
  3. Standard TV: On the main TV in the house I do insist on Audio Description being turned on, so that I can better follow those tricky dialogue-light films and programmes. The verbal description woven in between the actual dialogue often enhances the programme for all the family members; try it for yourself sometime!
  4. Victor Reader Stream from HumanWare: This is the one specialist device I use. This no-screen device has very tactile buttons and long battery life, and stores my talking books from Audible along with podcasts, while giving access to live streamed radio and some Internet content.

Apps

On the iPhone I have a mix of regular and specialist apps. The regular apps I use most often are:

  • BBC Sport
  • BBC News: simple interface and straightforward despite the picture content
  • Podcasts: annoying interface but great to have access to all that content; perhaps I'll publish a list of my favourites at a later date
  • BBC Weather: simple and really useful when travelling around, although not always accurate!
  • British Airways: for managing flights and getting mobile boarding cards; however, the latest version has lost some of its accessibility features and says 'button' an awful lot of the time!
  • Google Maps: still struggling to get the most out of it, but it is good
  • Virgin Media TV Anywhere: to manage my set-top box and record programmes
  • BBC iPlayer: to give me access to my favourite radio stations and podcasts
  • Twitter: pretty straightforward with VoiceOver
  • Google Docs: to access and manage documents on my Google Drive; really useful when out and about
  • LinkedIn: somewhat easier to navigate than LinkedIn on the laptop/web, but still clumsy
  • Hailo & Uber: for taxis; both work well once you have struggled through what needs to be input, and when!

In terms of specialist apps, I mainly use:

  • BlindSquare: for finding restaurants, previewing menus and finding numbers to call for directions in case the map app fails
  • Be My Eyes: for identifying things via a video link to a volunteer when nobody sighted is around to help
  • TapTapSee: ditto
  • RNIB Navigator: for finding my way around and checking that cab drivers are not taking the mickey
  • RNIB Overdrive: for access to the library of talking books and magazines!
  • Lire: not really a specialist app, but a simple RSS reader that scans the web for news feeds from your favourite sources
  • MovieReading: a beta version of an app that downloads audio description and synchronises it with the cinema film, TV programme or DVD
  • CamCard: a business card scanning app that uses the phone camera to scan cards and turn the content into entries for your contacts

Using the RiVo mini keypad does make navigating the iPhone a lot easier. It also makes typing easier. My preference is to use it in the old T9 format, the one you would have used for texting on your old Nokia phone. It also has a small QWERTY setting, but I haven't gone there.
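For readers who never owned a Nokia: with T9 each number key carries three or four letters, you press one key per letter, and the software matches the digit sequence against a dictionary. A toy illustration of the idea (not the RiVo's actual implementation):

```python
# Toy illustration of T9-style predictive text (not the RiVo's actual implementation).
KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_DIGIT = {letter: digit for digit, letters in KEYS.items() for letter in letters}

def encode(word: str) -> str:
    """The digit sequence you would type for a given word."""
    return "".join(LETTER_TO_DIGIT[c] for c in word.lower())

def candidates(digits: str, dictionary: list) -> list:
    """All dictionary words that a typed digit sequence could mean."""
    return [w for w in dictionary if encode(w) == digits]

words = ["home", "good", "gone", "hood", "hone"]
print(encode("home"))             # 4663
print(candidates("4663", words))  # ['home', 'good', 'gone', 'hood', 'hone']
```

One digit sequence can match several words, which is why those old phones offered a 'next word' key to cycle through the candidates.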

Using the iPhone with keyboard and earpiece means that I can carry on doing email and listening to content while walking along carrying my white stick. I suspect this is a little like people using their phones while driving, but it does make my travel time, whether walking, being driven, flown or sailed, a lot more productive and interesting.

As you may gather, I am close to dispensing with the services of a laptop altogether if I can get a high-quality full QWERTY keyboard that fits my ageing fingers and suits my typing style! I would still plug it into a big screen in the office to give me the option of magnifying as and when necessary.

With most content now available digitally, via the web or an app, I can consume and create content almost as readily as a sighted peer. Spreadsheets do pose a problem, as does PowerPoint. So, as with the apps world, I draw on some human sighted assistance when these pose a problem.

The good news is that barriers are coming down: the more digital society gets, the more I should be able to join in on an equal footing.

I will keep you posted as things change.


Big Screen, Small Screen, No Screen – Assistive Technology for the Visually Impaired

At the recent Vision UK 2020 conference, stakeholders in the eye-care sector pulled together a Tech Table to demonstrate the breadth and depth of current and emerging technology available to the visually impaired (VI). In the mainstream telecoms market, people talk about Big Screen (television) to Small Screen (smartphone) as a continuum of devices through which people consume their digital lifestyles. We demonstrated that these are equally relevant to the visually impaired, and extended the continuum with a few 'No Screen' devices specially developed for the VI sector.

The Big Screen is represented by an increasing number of accessible televisions on the market. Samsung has now released accessible sets whose on-screen menus can be turned into spoken prompts for the visually impaired. They don't yet work with the set-top boxes provided by Sky and Virgin, but apps are increasingly available on smartphones to step in and provide this element of accessibility.

In between the Big and Small screens come tablets, which are increasingly people's preferred devices. Android and Apple provide a range of accessible tablets that can help the visually impaired through both screen magnification and screen readers. They are affordable, light, and can act as people's access point into the digital world. Looking a little further forward, they can also be the hub for controlling many aspects of the household, as well as a link for e-health services.

The now traditional laptop fits in here as well. It is unfortunate that screen readers are not designed as part of the operating system, except in Apple's case. This tends to leave many of the required apps less than fully accessible. However, as a tool for creating documents, and for those of us who grew up believing a computer needs a physical keyboard, laptops still play a vital role. Hopefully, as Microsoft releases Windows 10, we will have a capable built-in screen reader, whilst also benefiting from much cheaper braille displays.

The Small Screen is represented by the ubiquitous smartphone. Apple, Samsung and Microsoft, amongst others, provide a very accessible platform for people at home and out and about. The trend in this market is for larger screens, creeping up towards tablet size. However, the availability of high-quality Bluetooth earpieces and mini Bluetooth keyboards means that the chunky smartphone can stay in the pocket, and all its wonderful apps can be used, even while walking along carrying a white cane!

There is an even smaller screen in the form of the smart watch. Apple recently brought out its Watch, which certainly helps seed the market. It has accessibility built in, both in terms of speech output and haptic feedback. However, it still needs a smartphone to operate, and the screen is only useful to a small minority of VI people. A nice piece of fashion but, for now, it seems to sit in the luxury category rather than the essential one.

No Screen is, for the time being, something specific to the VI community. HumanWare's Victor Reader and the Sonata Plus from the British Wireless for the Blind Fund are two great examples of devices made specifically for the blind and partially sighted community. Simple, tactile, button-operated devices linking the VI person to both downloaded and streamed content, including radio, podcasts and those lovable audio books, make life a lot easier. And, with no screen, the Victor Reader's battery lasts a lot longer than your average smartphone's. The Sonata has a very simple set of buttons to operate and also offers remote assistance to help users set up and manage their digital content, blending a little human help with the wealth of technology available.

From a mainstream technology point of view, the assistance provided by Siri and Cortana on smart devices is now being joined by solutions such as Amazon Echo, a general-purpose screenless device for the home providing voice-activated information and entertainment.

It is also worth mentioning a non-smart device at this point. The Bradley timepiece is a great example of watch design, with magnetic ball bearings providing tactile minute and hour information. It is a fine example of beautiful design and absolutely practical function that complements all of the computer-based technology on the continuum.

Across all these platforms exists a series of apps drawn from both the open market and the VI market. An accessible device allows us to do the online shopping, banking and web browsing that our sighted peers have enjoyed for some time. In addition, our daily lives can be helped by specialist apps that address our lack of eyesight, using computing power, image-recognition software and volunteers providing human sighted assistance. Perhaps more importantly, an accessible smartphone will also increasingly link to household devices, cinemas and the outside world, in the form of city information and services, to facilitate a lot more accessibility.

In the digital accessibility stream, we also heard from Oxford University and Microsoft (working with RNIB and Guide Dogs respectively) about forward-looking projects which will enhance our lives through incredibly powerful camera and recognition software, as well as the ability to navigate our streets and buildings with smart technology. Combine this with the range of peripheral devices (some might call them wearables) becoming available, and we can leverage bone-conduction headsets with CD-quality 360-degree sound to complement all the other senses at play.

In short, there is a wealth of technology available today. It is increasingly accessible and relatively simple to use. Some people need hand-holding to get started, but nobody should fear technology when sight is a problem. Spread the word: technology as a life-enhancing tool is here to stay, and it is there to be adapted to the visually impaired, not vice versa.


Accessibility At The Top Table At Mobile World Congress 2015

At Mobile World Congress in Barcelona last week, accessibility took to one of the main stages. IBM, Microsoft, Google and the Mobile Manufacturers Forum (MMF) joined me to present perspectives on how accessibility is going mainstream.

I introduced the session with some of the key findings from the second Telefonica accessibility report, "Digitising the Billion Disabled: Accessibility Gets Personal". In summary, the billion disabled people represent a major spending group, combining earnings of some $2.3 trillion and state support of $1.3 trillion. Disabled people on average earn only 60% of what their able-bodied peers earn and, of course, many disabled people don't get the opportunity to work at all. 4% of children and 10% of the working population are disabled but, perhaps most strikingly, so are over three-quarters of the elderly. Combine this dynamic with Douglas Adams' theory that adopting new technology gets harder as we get older and you can see the ticking time bomb of disability and age.

The good news is that the technology required to assist the Billion is getting more mainstream, affordable and accessible. Mobile sits at the centre of this change. As devices arrive with built-in accessibility, the emphasis shifts to the applications and web content being correctly labelled to trigger the necessary assistive input and output.

The flow of the session was as follows:

  • Frances West, Chief Accessibility Officer at IBM, told us how Big Blue has been dealing with accessibility for over a hundred years! She talked about 'Millions of Markets of One', a mobile accessibility app checker and a move to hyper-personalisation in the broader context of smart cities. Accessibility is more than just accommodating disabled people; it is about inclusive innovation.
  • Rob Sinclair, Chief Accessibility Officer at Microsoft, took us through the way in which they are "Rethinking Interaction and Design", educating their engineers about disability. The pentagrams he used to illustrate a reduction in people's senses are a powerful way of raising awareness of a lack of vision, hearing, touch and so on. Rob interacted with the audience to identify examples of temporary or situational disability, adding instances such as being under water or wearing gloves in the frozen North, as well as the often-cited driving example.
  • Eve Andersson, Manager of Accessibility Engineering at Google, introduced the accessibility features of Android, including TalkBack, BrailleBack, magnification, switch access, captioning and Android Wear – an open platform for everyone to embrace.
  • Michael Milligan, Secretary General of the Mobile Manufacturers Forum, described the way in which the Global Accessibility Reporting Initiative (GARI) has compiled over eleven hundred mobile devices, along with their accessibility features, into a single database. Mobile operators can draw upon this database to highlight accessibility features to people visiting their websites or retail outlets anywhere in the world.
  • Henry Evans, adaptive technology pioneer, finished the session by presenting his perspective as a quadriplegic, with only slight head and thumb movement available to him to produce his presentation, and demonstrating how he can use remote-controlled robots to virtually visit museums around the world and fetch things from his fridge.

I can honestly say that being registered blind and interacting with Henry and his wife Jane via a letter board and the Beam robot on stage was the strangest but most rewarding experience of over 25 years of running industry conferences.

As well as getting accessibility onto the main MWC agenda, it was also important to hear consistent messaging from the main vendors on stage. We all agreed that the worlds of accessibility and mainstream technology are converging. Most importantly, however, we need education and training at every step of the value chain and channel to market. Disabled people themselves need to be better informed about the possibilities for complementing or replacing particular sensory experiences. And the people training and educating the billion need better information and training themselves. Like so many other ecosystems, it is a matter of taking the holistic view and identifying the individual actions that will help the overall flow. And this is a matter for everyone to consider; it is about those who are disabled and those not yet disabled. The ticking time bomb of age, combined with the temporary disabilities we all experience in daily life, means that this is becoming a more mainstream topic. Onwards and upwards everyone!

View the panel session here.
