Microsoft reinvents the mouse for people with disabilities

By Mark Wilson: For Complete Post, Click Here…

A new wireless button design will let anyone make Microsoft’s hardware their own.

The mouse looks tiny. It’s a small square, reminiscent of one of those novelty mice sold to people who refuse to use the trackpad on their computer. It has two buttons and a scroll wheel, and when Microsoft claims that this new mouse has been designed alongside people with disabilities, I’m skeptical.

And then Gabi Michel, director of accessible accessories at Microsoft, demonstrates how this tiny mouse has been designed to slide perfectly into its “tail”—a large ergonomic grip that your hand can grasp almost like a ball, switching from left- to right-handed configurations with a simple squeeze. Then she pops it out and slides it into a different tail (this one with large finger indentations for people with tremors) and then another (with a longer and even more pronounced set of finger indentations).

“No two people are going to be the same,” Michel says. “Everyone needs a different solution.”

The mouse is part of Microsoft’s new Adaptive Accessories line, a kit of tools meant to provide an easier mouse and keyboard experience for people with disabilities—on a PC or phone. The entire kit consists of three main components: the mouse, a button that has programmable pressure sensors underneath, and a hub to connect them all. The hub can be used with the mouse, a keyboard, plus any other assistive devices one might have, whether they’re made by Microsoft or not (wirelessly via Bluetooth or through the hub’s many ports). Everything will be available later this year for an as-yet-undisclosed price.

Be the Light: Elizabeth Bonker’s 2022 Valedictorian Speech at Rollins College Commencement

From Rollins College: For Complete Post, Click Here…

Rollins College’s 2022 Valedictorian Elizabeth Bonker, who is affected by non-speaking autism and communicates solely by typing, urges her fellow graduates in her valedictory address to use their voices, serve others, and see the value in everyone they meet. Read more about Elizabeth’s story: https://bit.ly/3P4rsdP

Apple Promoting New Accessibility Features For iOS Users

From Apple: For Complete Post, Click Here…

Apple this week celebrated Global Accessibility Awareness Day by announcing new accessibility features that will be available later this year with iOS 16 and other software updates. However, while we wait for those updates, the company has been promoting accessibility tips that anyone can take advantage of.

For instance, Apple shows how users can increase text size, play calming background sounds, magnify objects near you using the Magnifier app, and even identify certain ambient sounds like alarms and sirens with Sound Recognition. Other tips include how to listen to what’s on the screen, control a device using custom voice commands, and lock your iPhone or iPad to stay in a single app.

One of the new accessibility features teased by Apple this week is called “Door Detection,” and it uses the LiDAR scanner on supported iPhone and iPad models to help users understand how far away they are from a door. It can also read signs and symbols around the door.

For Apple Watch users, a new option will mirror the watch’s screen on the iPhone so that people with physical and motor disabilities can interact with features such as ECG, Blood Oxygen, and Heart Rate. Also, live captions are finally coming to FaceTime on iPhone, iPad, and Mac.

Android 13 Beta 3 will bring native support for braille displays

By Kishan Vyas: For Complete Post, Click Here…

Over the years, Google has added many features to Android to make the operating system more accessible to people with special needs. For instance, Android 10 brought Live Caption and Live Transcribe, while Android 12 added a new feature called Camera Switches, allowing users to control their phones with facial expressions. As part of this continued push, Google will be adding native support for braille displays in Android 13.

In a blog post on Thursday, Google announced that the upcoming Android 13 beta release would bring out-of-the-box support for braille displays. For the unaware, a refreshable braille display is an electro-mechanical device that surfaces information by raising round-tipped pins through holes in a flat surface. It enables blind and deafblind users (who can’t use a screen reader) to access smartphones or computers. It has always been possible to use a braille display on Android using the Talkback app. But now, Google is baking the Talkback features right into Android so users won’t need to download a separate app.

Apple previews innovative accessibility features combining the power of hardware, software, and machine learning

From Apple Newsroom: For Complete Post, Click Here…

Software features coming later this year offer users with disabilities new tools for navigation, health, communication, and more.

Apple today previewed innovative software features that introduce new ways for users with disabilities to navigate, connect, and get the most out of Apple products. These powerful updates combine the company’s latest technologies to deliver unique and customizable tools for users, and build on Apple’s long-standing commitment to making products that work for everyone.

Using advancements across hardware, software, and machine learning, people who are blind or low vision can use their iPhone and iPad to navigate the last few feet to their destination with Door Detection; users with physical and motor disabilities who may rely on assistive features like Voice Control and Switch Control can fully control Apple Watch from their iPhone with Apple Watch Mirroring; and the Deaf and hard of hearing community can follow Live Captions on iPhone, iPad, and Mac. Apple is also expanding support for its industry-leading screen reader VoiceOver with over 20 new languages and locales. These features will be available later this year with software updates across Apple platforms.

“Apple embeds accessibility into every aspect of our work, and we are committed to designing the best products and services for everyone,” said Sarah Herrlinger, Apple’s senior director of Accessibility Policy and Initiatives. “We’re excited to introduce these new features, which combine innovation and creativity from teams across Apple to give users more options to use our products in ways that best suit their needs and lives.”

Feds Warn Employers Against Discriminatory Hiring Algorithms

By Khari Johnson: For Complete Post, Click Here…

As AI invades the interview process, the DOJ and EEOC have provided guidance to protect people with disabilities from bias.

As companies increasingly involve AI in their hiring processes, advocates, lawyers, and researchers have continued to sound the alarm. Algorithms have been found to automatically assign job candidates different scores based on arbitrary criteria like whether they wear glasses or a headscarf or have a bookshelf in the background. Hiring algorithms can penalize applicants for having a Black-sounding name, mentioning a women’s college, and even submitting their résumé using certain file types. They can disadvantage people who stutter or have a physical disability that limits their ability to interact with a keyboard.

All of this has gone largely unchecked. But now, the US Department of Justice and the Equal Employment Opportunity Commission have offered guidance on what businesses and government agencies must do to ensure their use of AI in hiring complies with the Americans with Disabilities Act.

“We cannot let these tools become a high-tech pathway to discrimination,” said EEOC chair Charlotte Burrows in a briefing with reporters on Thursday. The EEOC instructs employers to disclose to applicants not only when algorithmic tools are being used to evaluate them but what traits those algorithms assess.

“Today we are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” assistant attorney general for civil rights Kristen Clarke told reporters in the same press conference. “Today we are making clear that we must do more to eliminate the barriers faced by people with disabilities, and no doubt: The use of AI is compounding the long-standing discrimination that job seekers with disabilities face.”

The Federal Trade Commission gave broad guidance on how businesses can use algorithms in 2020 and again in 2021, and a White House agency is working on an AI Bill of Rights, but this new guidance signals how the two agencies will handle violations of federal civil rights law involving the use of algorithms. It also carries the credible threat of enforcement: The Department of Justice can bring lawsuits against businesses, and the EEOC receives discrimination complaints from job seekers and employees that can result in fines or lawsuits.

Grads With Down Syndrome Change the Narrative at GMU

By Cory Smith: For Complete Post, Click Here…

Charlotte Woodward and Madison Essig are just the fifth and sixth students born with Down syndrome to earn a bachelor’s degree in the United States.

Their achievements were recognized during the commencement ceremony, and they’re both honored to hold that distinction.

But it’s also bittersweet because they both believe that number should be much higher.

“Our society seems to have low expectations of what we can and can’t do,” Woodward said.

“I want other people with Down syndrome and other disabilities to have high expectations for themselves and do what they love, do what they want to do with their education,” Essig said.

The pair changed GMU. Thanks to Essig, students with intellectual and developmental disabilities in George Mason’s LIFE program can enjoy a full college experience and do things like participate in Greek life and student government. She was the first student with a developmental disability to join a sorority and she was also an elected student senator.

DeafBlind Communities May Be Creating a New Language of Touch

By Andrew Leland: For Complete Post, Click Here…

Protactile began as a movement for autonomy and a system of tactile communication. Now, some linguists argue, it is becoming a language of its own.

When John Lee Clark was five years old, in 1983, he entered a small Deaf program within a public school near his home in Eden Prairie, Minnesota. Clark was a dreamy kid who dressed in tucked-in button-downs and pressed slacks. He came from a large Deaf family—his father and brother are DeafBlind, his mother and sister are Deaf and sighted—and the family had communicated in American Sign Language (or A.S.L.) for generations. On Clark’s first day of kindergarten, his mother, worried, followed his school bus in her car. When she surprised him at school to ask if he was O.K., Clark said that he was fine but that the bus driver had forgotten how to speak. His mother laughed and reminded him that the driver didn’t know how to speak: she was hearing! “This is a common story among Deaf families,” Clark told me recently. “The gradual dawning that all those mutes could actually talk with one another, but in a very different way.”

In third grade, Clark began a bilingual Deaf program. Instruction was in A.S.L., but students were grouped on the basis of their ability to read English, a second language that Clark accessed only in print. “My literacy was abysmal,” he said. He still has a workbook from that time, in which he answered questions—“What is your favorite sport?” “Who are the members of your family?”—with drawings instead of in English. But he was gifted in A.S.L., and teachers would ask him for help with tricky words. He sometimes pranked them by inventing ostentatiously elaborate versions. The word “heaven” is difficult for A.S.L. learners, involving a precise looping of the hands; Clark added several gratuitous loops.

At twelve, Clark began attending a residential Deaf school, many of whose students came from Deaf families. But, around this time, he began to go blind. Hundreds of thousands of people in the U.S. have some combined hearing and vision loss, but most are older adults and have spent the bulk of their lives hearing and sighted. A much smaller group—about ten thousand, according to some estimates—become DeafBlind earlier in life; a leading genetic cause is Usher syndrome. Clark, his father, and his brother have Usher, which can cause a person to be born deaf and to gradually go blind. At fourteen, Clark started to lose track of A.S.L. conversations. “I was this boy who always said, ‘Say again?,’ who might collide into you,” Clark told me. “So pathetic.” He began reading in Braille, which his father had encouraged him to learn as a child, and started walking with a white cane.

In high school, Clark stopped trying to follow A.S.L. visually and began using tactile reception, feeling words with his hands. This helped, but miscommunication was common. A.S.L. is a fundamentally visual language. The dominant-hand gestures for the words “stamp” and “fun,” for instance, look very similar, except that “stamp” begins near the mouth, whereas “fun” starts at the nose. Yes-or-no questions are signified with raised eyebrows, and sentences can be negated with a shake of the head. When Clark would reply in A.S.L., he’d have no idea how the person was responding, or whether she was still paying attention at all; he said that it was like “talking to a wall.” He attended Gallaudet, a Deaf university in Washington, D.C., with his future partner, Adrean, a sighted-Deaf artist. “It was really when I got married that I noticed more serious problems,” he told me. He would come home from the store without the items that Adrean had requested, and misunderstood the timing of their appointments: “It’d blow up on me, how that information in ASL had failed to register.”

USPSTF Guidance Misses the Mark on Youth Suicide Risk Screening

By Christine Yu Moutier, MD: For Complete Post, Click Here…

We can all play a role in identifying at-risk kids.

With increasing rates of suicide and mental health issues among U.S. youth, and with suicide as the second leading cause of death among people ages 10 to 34, the state of youth mental health has reached crisis proportions. For this reason, I’m gravely concerned about the impact of the recent draft recommendations of the U.S. Preventive Services Task Force (USPSTF) that found insufficient evidence for implementing screening for suicide risk among youth.

The USPSTF’s methodology may be mismatched with the real-world implementation science and the scope of the problem concerning youth suicide. Clinicians and mental health professionals must have a clear understanding of the USPSTF’s guidelines for reviewing evidence and arriving at a recommendation, because the task force’s approach is out of touch with recent expert recommendations on screening for youth suicide risk. The USPSTF findings may cast doubt among healthcare providers on the importance of suicide screening and preventive care.

In collaboration with our partner organization on the Blueprint for Youth Suicide Prevention, the American Academy of Pediatrics, and experts from the National Institute of Mental Health, we have identified three key weaknesses of the USPSTF draft report.

Suicide Screening Can Be Done Safely

The USPSTF calls for screening asymptomatic adolescents ages 12 to 18 years for major depressive disorder, and youth between the ages of 8 and 18 for anxiety, saying there would be a moderate benefit to each. We support this recommendation and believe the benefits would be more than moderate. On suicide risk, the USPSTF concluded there is insufficient evidence to weigh the benefits and harms of screening asymptomatic children and adolescents. However, with regard to suicide risk, the report excluded or overlooked a number of key research studies that validate universal suicide screening in pediatric medical settings with high sensitivity (97%) and specificity (91%), demonstrate its feasibility and accessibility for youth, parents, and clinicians, and, importantly, show no evidence of harm.

Google brings transcripts and auto-translated captions to YouTube on mobile

By J. Conditt: For Complete Post, Click Here…

Google is rolling out auto-translated video captions for YouTube on mobile devices, with support for 16 languages, the company announced during its I/O 2022 keynote today. The feature is live now. Additionally, YouTube video transcripts are now available to all Android and iOS users.

This is all part of Google’s work to make YouTube videos easier to navigate and search, building on existing features like auto-generated chapters. Google plans to increase the number of YouTube videos with auto-generated chapters from 8 million to 80 million by the end of the year.