Researchers at California State University-Northridge and UCLA Olive View Medical Center are developing a powerful new machine that uses brain-interface technology to help those with physical disabilities.
The product, a motorized wheelchair that can be navigated with a user’s brainwaves, is not yet market-ready. The technology, however, has the potential to provide a new degree of freedom to disabled users.
Brain-computer interface, or BCI, technology is already used for a variety of purposes. Gamers have long been familiar with BCI headsets, for example, which allow them to navigate virtual universes.
Dr. C.T. Lin, a professor of mechanical engineering at CSUN, wanted to see if BCI could be used to assist paraplegics or other users who had lost the use of their limbs.
“I thought, ‘Well, why not take the existing available device and explore the possibility of whether that device can be used for any engineering application?’” he says.
The project began in earnest in August 2010. Lin and his team, which includes undergraduates as well as graduate students, developed a wheelchair equipped with a laser sensor, a laptop computer and a headset. Electrodes on the headset pick up brainwaves, which are then translated into “motion commands” such as left, right, forward or backward.
The wheelchair can operate in an autonomous mode in which the computer makes navigating decisions or in a hybrid mode where the user gives commands.
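The two modes can be pictured as a simple dispatch rule: in autonomous mode the computer's path planner decides, while in hybrid mode a clearly decoded user command takes priority. The sketch below is purely illustrative; the class and function names are hypothetical and not taken from the CSUN team's actual software.

```python
# Hypothetical sketch of the two navigation modes, NOT the CSUN system:
# autonomous mode follows the computer's planner, hybrid mode follows
# the user's decoded intent when one is available.
from enum import Enum

class Command(Enum):
    LEFT = "left"
    RIGHT = "right"
    FORWARD = "forward"
    BACKWARD = "backward"

def choose_command(mode, user_intent, planner_suggestion):
    """Pick the next motion command.

    mode: "autonomous" (the computer navigates) or "hybrid" (the user commands).
    user_intent: a Command decoded from the headset, or None if no clear signal.
    planner_suggestion: a Command proposed by the laser-sensor path planner.
    """
    if mode == "autonomous":
        return planner_suggestion
    # Hybrid mode: follow the user when a clear intent is decoded;
    # otherwise fall back to the planner rather than guessing.
    return user_intent if user_intent is not None else planner_suggestion
```

For example, `choose_command("hybrid", Command.LEFT, Command.FORWARD)` would yield `Command.LEFT`, while the same call in autonomous mode would defer to the planner's `FORWARD`.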
Lin says that the real challenge going forward will be trying to marry human behavior, which is often erratic and unpredictable, with the neat and often unyielding precision of computers.
“The difficulty really comes with the fact that the human is involved,” says Lin. “And humans are not consistent.”
BCI technology, by its very nature, requires the brain and the computer to interact, and the two can clash, producing widely divergent results.
“If you are turning a corner,” explains Lin, “and you generate the thoughts too early or too late, you will have a turn that is going to become awkward.”
Certain users with cognitive disabilities also may have difficulty maneuvering the device, he says. Users will have to be trained to use the wheelchair.
“The role of technology in higher education for students with disabilities has had a phenomenally positive impact on the inclusion and participation of students in higher education,” says John Bennett, director of disability resources and services at Temple University.
He calls the new wheelchair a potentially “phenomenal” development, but he also expresses concerns about its practicality.
“What needs to be really considered is the human interface with this technology,” he says. “How usable and affordable is it?”
Bennett says that, as the technology develops, researchers must consider that most people, regardless of physical disability, have what he calls “a very average” technology competence.
“When we take, for example, voice recognition [software],” he says, “it was an extremely expensive piece of technology.
“And few people had access to it. And today it’s a very widely used piece of technology with low cost and high impact.”
Lin says that Lesson No. 1 of product marketing is that anything too complicated will be rejected by users.
“Whenever you’re trying to design a medical device, you have to really include the user in the design cycle before you finalize your design decision,” he says.
The prototype was tested with a CSUN student and a faculty member who both have physical impairments.
The project is being funded by an assistive technology grant, the Ethel Louis Foundation Endowment, which awards up to $20,000 for faculty-student collaborative projects at CSUN.
In coming months, Lin expects the wheelchair to undergo an initial round of assessments. It will be tested for safety, practicality, and ease of use. A model is expected to be ready for manufacture in about two years, and Lin hopes that its cost will be kept between $5,000 and $8,000.
Lin says his goal is to develop a product that is both sophisticated and simple.
“It’s like a black box,” he says. “The user doesn’t need to know what’s going on inside the box. They only need to interact with what is really transparent to the user in terms of generating commands and expecting an outcome.”