Image generated by Waifu Diffusion; the prompt was: 1girl, fairy, in wheelchair, flying, shrine, looking at viewer, night, watercolor

If you're at all a fan of this blog, you will know that I use all sorts of retro computing and ultramodern technologies to achieve a computing environment that is retro-futuristic in both style and substance. For example, my desktop environment most closely resembles 2007-era Linux, while my voice control system is the latest and greatest AI-assisted kit that the open source and charityware community has to offer, combined with a terminal-mode word processor from 1998. What most people do not understand, however, is the rationale behind these eclectic, and some might say eccentric, computing decisions. This is not just some weird and appalling hobby of mine, except in the most accidental of senses. I use technology in this way because it is the best technology that meets my needs as an autistic programmer, without making me dependent on the Disability Industrial Complex.

Getting a complex

Most people, even the technically inclined, operate under a misconception about assistive technology: they think it's a solved problem, with readily available solutions that are cheap and easy. When I reveal to people that I use Linux and open source software to achieve my accessibility nirvana, two reactions are common. The first is "that's so cool, how can I make my own?" The other is to insist that I am just making work for myself. Such people think I should get a Mac, or try some big-brand Windows accessibility software, which they insist would be better than my hodgepodge solution.

I've even been fired from jobs and booted from college courses due to my technical choices, which employers and professors assumed were at fault for the challenges I encountered with their disability accommodation and accessibility arrangements. Nothing could be further from the truth, however. Accessible computing is not a solved problem; it is neither cheap nor easy. The software and tools are held hostage by an industry more interested in making large sales to institutional buyers for the sake of legal compliance than in actually serving the needs of disabled people.

I was offered grants at the beginning of my college career to have a full technology assessment done and software purchased. This brand-new software quickly broke down, and it proved impossible to maintain, particularly with college and work deadlines looming.

A shocking discovery

What I quickly discovered was that assistive technology exists in what's called a captive market: the users must use it, so companies can charge any amount of money they want. Worse yet, the companies don't price or market their software to individuals or families; they usually market toward government agencies, schools, and institutions. As an example, one of the cheapest products for helping people with dyslexia use a computer is TextHelp Read and Write Gold. It is sometimes given out free, if your institution contracts with the company and you are officially diagnosed with A Qualifying Condition. Otherwise, this piece of software runs you $213 annually, or about $550 for a three-year subscription.

Good luck if you have a more complex disability, such as low vision combined with dyslexia, or cortical visual impairment. That software comes from a completely different company and can easily cost half a grand. And the disabled person had better have the hardware on which to run it, as most government and charitable agencies no longer offer money toward the cost of a computer because of "Previous Fraud".

Greg be not amused
<Greg> Yes, you read that right folks: when state-funded disability programs get defrauded, even once, it often comes at the cost of the entire program. In New York State, for instance, you used to be able to get a new computer every 3-5 years or so, until parents of children receiving services were caught selling their children's equipment on eBay. And most charities that provided this service no longer do, for similar reasons, allegedly.

Persons with disabilities can often require a high-end gaming tower just to get basic work done.

It can sometimes cost upwards of $10,000 for a multiply disabled person to get the equipment and software they need to live life to its fullest. And we're only talking initial investment, not ongoing maintenance or upgrade costs. Is it any wonder that people are still using Classic Mac based accessibility software originally issued to them in the nineties?

The Complex and Me

We in the disability activism community call these interlocking systems of oppression the Disability Industrial Complex. The acronym is intentional. I realized this dynamic at the age of seventeen, and I knew that if I ever wanted to work as a programmer, I would have to fight to overcome it; that is, to overcome the ableism, not the disability.

Originally I had grand visions of becoming the definitive open source programmer working in this space. I wanted to dunk on the DIC, to write a Free Software solution to every common disability need, and to found a nonprofit that would not only do this work but also educate the next generation of disabled programmers to do the work ourselves.

As it turns out, college with a disability is hell on earth, and when you add mental illness it becomes an almost insurmountable challenge, filled with pain the likes of which I would not wish on my worst enemy. Not only did I fail to accomplish any of my grand vision; it was a struggle just to meet my own daily needs. But after years of toiling away in relative obscurity, I can finally say that, with the help of the Linux community, Mission One, meeting my own needs, has mostly been accomplished. The remainder of this post is dedicated to how I achieved my current build, as well as giving pointers on how you can achieve your own Accessibility Isekai.

Me Explained

I will start with a description of my disability as I see it, then move on to the problems she imposes (yes, my disability has a gender, get over it). Then I will move on to how I solved each problem with freely available software on Linux. So strap in folks, this is going to be a long one.

My official diagnosis has always been a matter of some debate. Every clinician seems to agree that I have some form of cerebral palsy, but as to which form and how severe it is, there is no consensus. Some clinicians also believe that I have some form of autism and/or ADHD. However, I am sure that even if there were unanimity among the experts, a mere diagnostic label wouldn't tell you much, so I will attempt to convey how I experience the world.

The easiest way I can think of to describe my experience of the world is to make an analogy to early computer-generated movies or three-dimensional games. I have trouble moving nearly all my limbs and appendages; moving my eyes is particularly difficult. So imagine me as a low-polygon-count Buzz Lightyear, with someone drunk in the motion capture suit. Laugh all you want, but this is genuinely the closest I've ever come to describing what life is actually like. Add to this the usual sensory differences that you see with autism.

Greg
<Greg> He had a particular problem with fluorescent lights when he was younger. He has phases of selective eating worse than any toddler you've ever experienced, to the point where he can go weeks eating just chicken nuggets.

I have no peripheral vision, and blind spots in the lower part of my eyes. The last thing I would note is that I have trouble translating thoughts into movements.

An example

At this point, I think an example of how all this comes together would be helpful for the reader.

Right now, even though I can't see anything below my nose without moving my head, I can feel that there is a Pokémon plushie on the floor right next to my foot. If I were to shift my foot over and put a slight bit of pressure on whatever stuffed animal it is, I could tell you precisely which one; the sensation in my feet is just that good.

In order to bend over and pick it up, it would take me about ten seconds just to think about how my body needs to move to accomplish that simple task, another seven seconds to initiate the so-called "Motor Plan", and a whole 4.5 seconds to complete the movement. And then the toy is only in my hand; there needs to be an entirely new motor planning and reaction cycle to get Sylvian to a safe place. Did I mention that I have ADHD as well?

Greg be not amused
<Greg> In fact, it took three tries, while narrating this process simultaneously, to get Sylvian from the floor to the top of the computer tower.

This describes my disability in practical terms, but what challenges does it impose while using a computer?

Computing while Disabled

I am glad you asked. The first challenge is with reading text onscreen. The usual black text on a white background just does not have a high enough contrast ratio for me to be comfortable using it. In other words, I was a dark mode aficionado about fifteen years before it was fashionable. While dark mode helps, higher contrast ratios are better; I generally prefer reading at a 6.4:1 ratio or higher. For reference, the Web Content Accessibility Guidelines version 2.1 specifies that a 7:1 ratio between body text and background is required to achieve a AAA rating.

Greg be not amused
<Greg> For those of you who have smart enough rear ends to realize that black text on a white background is a solid 21 on the contrast calculator: you're right, congratulations. However, the pure luminance of the white background makes everything else fuzzy; a dark mode has the same contrast ratio, but everything appears sharper.
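For the curious, the arithmetic behind these ratios is easy to check yourself. Here is a quick sketch of the WCAG 2.x formula: compute the relative luminance of each color, then take (L1 + 0.05) / (L2 + 0.05).

```python
def relative_luminance(rgb):
    # WCAG 2.x: linearize each sRGB channel, then weight for the eye
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black-on-white and white-on-black both come out to 21:1, which is
# exactly Greg's point: identical ratio, very different glare.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))   # 21.0
print(contrast_ratio((255, 255, 255), (0, 0, 0)))   # 21.0
```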

Eyes Off

The other problem I have is eye fatigue. If I read on a computer screen for too long without looking away, my eyes start to feel like they're burned into their sockets, and I start getting migraine-like headaches that no painkiller will fix. This is why, every so often, I turn away from the screen and just use voice control, or rest my eyes some other way.

This also makes reading quite slow. A neurotypical human being can read at something approaching three hundred words a minute; I am lucky if I get to half that speed. The last time I had it professionally measured, I topped out at around one hundred and twenty. I think I have improved since then, because my reading software, as you'll see in a minute, has an incidental feature that is kind of therapeutic for this condition, but I'm not paying to be measured again. And that's only half the battle: as stated above, my eyes fatigue rather quickly, so the frequent breaks I need to take slow things down even further.

Speeding things up

For this reason, some sort of text-to-speech system has always been required for me to use a computer, or even a book, effectively. I have used nearly every text-to-speech product on the market, from high-end products such as Read and Write Gold and the Kurzweil Reader, to low-end freeware projects such as ReadPlease 2003 and DocTalker. Ironically, the low-end projects do a better job.

This text-to-speech system was naturally the first thing I looked for on Linux. In my early days (around 2005 or so) there was a KDE utility called K-speak, which replicated the feature set of a low-end text-to-speech freeware program on Linux. I switched to Linux full time as soon as I had this figured out and had a video card capable of running Compiz Fusion with all its accessibility features turned on.

Then the trouble started: the APIs that software relied on were changed in 2007 without any consideration for backwards compatibility, and when I upgraded past Ubuntu Feisty, the speech synthesizer stopped working.

I eventually managed, through the use of an old Windows ME machine I had lying around at the time, to cobble together a shell script that replicated the functions of the missing program. It is this shell script, or more accurately its great-grandson, that I still use for orchestrating my speech synthesis needs. And to this day, there is no better accessibility tool for low vision users than Compiz Fusion.

Practical Details

Weston and GNOME 3 are getting there, but Compiz has a feature that lets you invert the color palette of a single window, which for me minimizes eye fatigue and strain, and which no known Wayland compositor implements. I have tried on multiple occasions to write one myself, but the project has always failed for one reason or another. If any Wayland expert is reading this post, I would welcome a voice chat for a Wayland 101 tutorial.

As I have alluded to but not mentioned explicitly, minimizing eye fatigue and strain is critical to a successful computing experience, so some form of text-to-speech system is a must-have. Unfortunately, while there are many text-to-speech backends on Linux and Unix in general (svox, festival, mimic3, espeak, and more), and while we have a good screen reader for blind and ultra-low-vision people in Orca, we lack a robust system that uses text-to-speech to help with reading for people with what the disability industrial complex calls "Print Related Disabilities".
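Any of these backends can be driven from a script, which is what makes this kind of DIY orchestration possible at all. As a trivial illustration (not part of my actual setup), a few lines of Python are enough to read a string aloud with espeak-ng:

```python
import subprocess

def speak(text: str, rate: int = 190) -> None:
    # espeak-ng takes the text as an argument;
    # -s sets the speaking rate in words per minute
    subprocess.run(["espeak-ng", "-s", str(rate), text], check=True)

speak("Testing the speech backend.")
```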

The needs of these users are completely different from those of, say, a blind person. Where a blind or low vision person needs the whole screen read to them, a person with a PRD may only need selected parts of the screen read aloud, or only require different screen fonts or high contrast modes, or anything in between. I use a hybrid of high contrast and selective screen reading. So far as I'm aware, my own program, VSSS, is the only program which orchestrates a speech synthesizer for the purpose of assisting with Print Related Disabilities on Linux. However, it is a polyglot program that has evolved over ten years of constant use to meet my specific needs.

Greg be not amused
<Greg> I had another program designed for a more general audience, which I created back in 2010, but the special project grant through my university ended in the fall of 2011 and I had to lay it aside. I have always wanted to rewrite that particular program, the one for the general audience, using my vastly improved experience with both programming and accessibility, but due to my poor financial circumstances I would need about three thousand dollars in grant money to make the project feasible. I know that sounds mercenary for an open source developer, but this is not something I can do in my spare time on ten-year-old equipment. I want to give this project the time and attention it deserves, and that requires money. But that's enough of that tangent.

VSSS, then, is a speech orchestration system for my specific print related disability. But what does it do, specifically? Well, to quote from the source:

A brief code snippet

```sh
VOX="Callie"
rate=190
PIPE_COLOR="1;33;45m"
export PIPE_COLOR
audio_bckend="padsp"
spkedit="pluma"
PATH=/home/matt/pkg/bin:$PATH
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/matt/pkg/lib64

speak_bckend() {
    # Bail out if another speech job already holds the lock file
    if [ -f /tmp/vsss.lock ]
    then
        echo "Speech output is currently in use"
        return 0
    else
        touch /tmp/vsss.lock
    fi

    # Run the text through the emoji filter preprocessor, then
    # overwrite the original file with the cleaned version
    cat $1 | ./emojifilter.py > $1.new
    cat $1.new > $1

    # Speak the file with the Cepstral swift synthesizer; swift's
    # normalization output is piped through pcg for colorizing
    $audio_bckend swift -n $VOX -p "speech/rate=$rate" -f $1 -m text -t | ./pcg

    rm /tmp/vsss.lock
    return 1
}
```

VSSS Explained

Now, this requires some explanation. This function takes a file name as a parameter; the file could theoretically be anything on the system, but it is most often generated by a DBUS service, which we will get to in a minute. The function then runs the file through a shell pipeline, beginning with the so-called emoji filter, which is really misnamed because it is a generalized preprocessor that also does regular expression replacements, and ending with the invocation of swift, a proprietary speech synthesizer originally meant for telephony applications. Swift outputs the audio and simultaneously prints the output of the so-called normalization process to standard out, which then goes to a program called pcg, short for "pretty color graphics", which is just there to make the normalization output easier to read onscreen. The final output looks something like this:

I can't give you an audio file of how it sounds, due to copyright restrictions, but here's a YouTube video of Cepstral Callie saying something. There are a few other bells and whistles that allow me to jump around in the document and edit it before it is read. And, as I said, there is a whole preprocessor that replaces emojis with their text descriptions and does standard regular expression replacements for various things. This is the first component of my accessibility system.
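The real emojifilter.py has grown its replacement tables over years of use; purely as an illustration of the idea, a stripped-down sketch of such a preprocessor (the tables here are made up, not mine) would be:

```python
#!/usr/bin/env python3
# Sketch of an emojifilter-style preprocessor: read text on stdin,
# swap emoji for spoken descriptions, apply regex fixups for things
# the synthesizer misreads, and write the result to stdout.
import re
import sys

# Hypothetical example entries; the real tables are much larger
EMOJI_DESCRIPTIONS = {
    "🙂": " slightly smiling face ",
    "🎉": " party popper ",
}

REGEX_RULES = [
    # Expand abbreviations the synthesizer mangles
    (re.compile(r"\be\.g\.", re.IGNORECASE), "for example"),
    # Read "3-5" as "3 to 5" rather than "3 minus 5"
    (re.compile(r"(\d+)-(\d+)"), r"\1 to \2"),
]

def preprocess(text: str) -> str:
    for emoji, description in EMOJI_DESCRIPTIONS.items():
        text = text.replace(emoji, description)
    for pattern, replacement in REGEX_RULES:
        text = pattern.sub(replacement, text)
    return text

if __name__ == "__main__":
    sys.stdout.write(preprocess(sys.stdin.read()))
```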

MasterText

The next component, which I wrote myself, is called MasterText. This requires a little bit of context before I explain what it does. In 2020-2021 I was accepted into graduate school for Information Technology Management at Empire State College, our state university system's remote learning college.

While the disability services there were far from the worst I'd ever seen, the program was basically a torture test for my disability, not helped by the fact that, due to how my grants worked, I didn't get the money for my textbooks until six weeks into the semester.

I eventually had to drop out. Both during and after, I contemplated ways to augment my accessibility system to cope with the workload.

Out of this came MasterText, which quite simply replaces the API that fetches text from the screen into the reader, so that every piece of text run through the reader also ends up in a persistent database which is both full-text searchable and content addressable.

Bestowing Superpowers on myself

Every piece of text has a unique address which can be referenced if I need to read it again, and I can search through anything I've read since 2019. It also has a web interface with some features of a wiki, meaning I can annotate texts, cross-reference them, and generally do whatever shenanigans I feel are necessary when researching a topic, writing a paper, and so forth. If I had thought of this idea before I entered grad school, I would have my master's by now.
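MasterText's actual schema is more involved, but the core trick, hashing each chunk of text for a stable address and indexing it for full-text search, fits in a few lines. A toy sketch using SQLite's FTS5 (the table layout and helper names are illustrative only):

```python
import hashlib
import sqlite3

conn = sqlite3.connect("mastertext.db")
# FTS5 virtual table: body is indexed for full-text search,
# addr is the content hash used as a stable reference
conn.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS texts USING fts5(addr, body)"
)

def store(text: str) -> str:
    # Content addressing: the address is derived from the text itself,
    # so the same passage always resolves to the same address
    addr = hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]
    conn.execute("INSERT INTO texts (addr, body) VALUES (?, ?)", (addr, text))
    conn.commit()
    return addr

def search(query: str):
    # Full-text search, returning a short snippet of the matching region
    return conn.execute(
        "SELECT addr, snippet(texts, 1, '[', ']', '...', 8) "
        "FROM texts WHERE texts MATCH ? ORDER BY rank",
        (query,),
    ).fetchall()

addr = store("Everything the reader speaks also lands here.")
print(addr, search("reader"))
```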

This tool allows me to use my auditory memory, which is one of my most advanced skills, as a very finely honed instrument in any sort of writing endeavor.

BlueProxy

The final tool that I coded for myself is a more recent addition, and it simply leverages Mozilla's Readability API to simplify the contents of websites so that the speech system produces more coherent output. I could always do this in the browser, but having it as a web proxy and API frees me to use simpler browsers such as Otter, Midori, and others, and will also eventually lead to integrations with other tools that I am currently working on, so stay tuned, as they say.
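BlueProxy's own code isn't shown here, but to illustrate the shape of such a service, here is a sketch using the readability-lxml port of Readability behind a tiny Flask endpoint (the route and port are arbitrary choices for the example):

```python
# pip install flask requests readability-lxml
from flask import Flask, request
import requests
from readability import Document

app = Flask(__name__)

@app.route("/simplify")
def simplify():
    # Fetch the page, then strip it down to the main article content
    url = request.args["url"]
    html = requests.get(url, timeout=10).text
    doc = Document(html)
    # summary() returns cleaned HTML containing just the readable body
    return doc.summary()

if __name__ == "__main__":
    app.run(port=8199)
```

A simple browser can then be pointed at http://localhost:8199/simplify?url=... to get a stripped-down page that a speech pipeline can read linearly.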

The Rest

On to the parts of the system that I didn't write myself. As I stated at the beginning of the article, I use Compiz Fusion with the MATE desktop to provide the missing accessibility features in X11, along with a terminal-based word processor, WordPerfect for Linux, about which I've written previously. I have also recently replaced my dictation and voice control system, which used a Windows XP based virtual machine and several horrible kludges, with the much more functional and elegant Talon voice control system, which has made life so much easier in the past few weeks.

Conclusion

Now, if the disability I described sounds something like yours, and you want to set up a similar system, I would recommend starting with the long-term support release of Ubuntu MATE. Though I am a Fedora user and proud of it, the release model of Fedora can make it susceptible to sudden and dramatic changes which the user has to work around. For example, recently, during the transition to PipeWire, the Fedora devs removed the backward compatibility shim that allowed OSS binaries to run as PulseAudio clients, which completely broke my entire text-to-speech system. They did this even though there was no reason to remove the component: PipeWire implements the same API as PulseAudio, so the shim is compatible with PipeWire as well. I had to find a way to reinstate that missing feature, all without the ability to read.

Suffice to say, Fedora is an adventure, and if you're just starting out building your own accessible computer system without the disability industrial complex, I recommend starting with something stable, even if new features are slower to reach you.

If you want to get VSSS or MasterText running, shoot me a DM over on tilde.chat, or an email: I am matt -at- piusbird dot space. I would be very interested in what sort of documentation a new user would need to start using the system.

Finally, if you know where I can get grant money to pursue, first, building a speech system for print disabled users and, second, making a distribution derivative that more tightly integrates some of these accessibility components and makes them easier to install and use, I would welcome your feedback as well.