For the few geeky people among the visitors to this website who
wonder about certain aspects of how the ADP website is maintained, this
page is for you! This page was last updated on September 23, 2021.
- My involvement with this website began in 2003 when I took over
as webmaster of the new Audio Description International website.
The American Council of the Blind offered me and the website a new
home in May 2010 when it was renamed the Audio Description Project
and moved to the ACB website.
- With the exception of two pages, all of the content of this
website is maintained by one person: me, Fred Brack, the
webmaster. (The exceptions are the Tech page and the Daily TV listing.)
- While I used to use the free (and unsupported) product
Microsoft Expression Web 4 to organize and publish the pages, in mid-2021 I had to switch to Adobe's Dreamweaver to support new security protocols for publishing.
- The "header" and "footer" of each page on the website are common
and are maintained via what is called a Template, part of the Dreamweaver product.
- None of the pages are dynamically built via technologies such as
ASP, nor is any database product like SQL used. Every page is a straight HTML file. However, JavaScript is used to support
mobile devices, accessibility options, navigation options, and, more recently, the dynamic build of the pages which sort streaming service titles by year, genre, rating, etc.
- Most of the "Titles with AD" pages are programmatically
produced; that is, the web page is built by a program integrating
current title data with a base HTML template. (How do I do this? See below.)
Each program is similar but unique, typically over 1000 lines of
code per service.
- The programming language I use to build those pages and provide
a lot of website analysis and support is IBM's Rexx
language, which I learned many years ago when I worked for IBM.
While Rexx runs on many platforms, the specific version I use is
Rexx for Windows. Why Rexx? Because I learned and used the
language at IBM, relearning it was less challenging than learning a new
language like Python, and Rexx has a very flexible syntax.
- There are currently about 200 pages of HTML comprising this
website and well over 10,000 lines of Rexx code.
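The programs that build these pages are written in Rexx; as a rough illustration of the approach (in Python, with a hypothetical placeholder, field names, and sample titles rather than the site's actual conventions), merging current title data into a base HTML template might look like this:

```python
# Sketch of building a "Titles with AD" page by merging current title
# data into a base HTML template. The placeholder comment, field
# names, and sample titles are hypothetical illustrations.

def build_page(template: str, titles: list[dict]) -> str:
    """Replace a placeholder comment with one <li> per title."""
    rows = "\n".join(
        f'<li>{t["title"]} ({t["year"]}) - {t["genre"]}</li>'
        for t in sorted(titles, key=lambda t: t["title"].lower())
    )
    return template.replace("<!--TITLES-->", rows)

template = "<html><body><ul>\n<!--TITLES-->\n</ul></body></html>"
titles = [
    {"title": "Soul", "year": 2020, "genre": "Drama"},
    {"title": "Luca", "year": 2021, "genre": "Comedy"},
]
print(build_page(template, titles))
```

Keeping the template separate from the data is what lets a single "push a button" run regenerate a page whenever a new data feed arrives.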
How the 'Titles With AD' Pages Are Maintained
Arguably, the 15+ pages which comprise "Titles with AD" are the heart
of the website. The number of pages continues to grow each year
with the introduction of new streaming services. The individual
pages are maintained in a variety of ways, as noted below.
- The current year DVDs page is the oldest one,
the one-and-only in its day. Currently, I consult the
Release Dates website every week and copy the names of the
coming week's expected DVD releases into a spreadsheet for tracking.
Then I look up each title on Amazon and record the studio name.
If it is Sony, they will list whether or not the title has an AD
track under Product Details. It is rare that anyone else does
this. For the others, I look for an image of the back cover on
Amazon (also rare, but thank you Universal). If present, I
enlarge it and look for "English Audio
Description" or "DVS" as a language; then I mark the
title as having AD. If not present, I have to look for the
physical DVD in a local store each Tuesday (DVD release day).
If I can't find it in a store, I consult
www.worldcat.org, but this
generally only works after the day of release because the library
system has to obtain their copy first. For
those titles determined to have an AD track, I use a program I wrote
to capture the key data about the title from Amazon and IMDb (for
the short description) via manual entry
and update a database. The same program helps me determine if the title is also available on
iTunes, Netflix, Prime Video, or Hulu, which I note in the
listing. From that data, my program
builds the HTML to make a new entry in the current year's DVD
database, adds it to the file, re-indexes all the DVDs in the file,
and also updates the Alphabetic List of DVD Titles.
I then list information about any new titles on the main page of the
website each week.
- The TV by Network page is now built
programmatically from a database which is updated with new titles I glean from a program which analyzes
our weekly TV by Days listing
(a Google Doc to which the ADP site links). That page is
updated each week by my associate Timothy Wynn via a manual search of
available online listings.
- The Cinema Page is a listing of movies
currently playing in local cinemas with audio description available.
I create the page programmatically based on an email that Vicki Vogt
sends out each Friday from the Perkins School for the Blind in Watertown, Massachusetts.
- The Other Described Videos page is a catch-all
recently started to list non-mainstream videos for free or fee that
you won't find in theaters or on major streaming services. Titles
are manually added.
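The cross-service check mentioned for the DVDs page (noting whether a title is also on iTunes, Netflix, Prime Video, or Hulu) can be sketched as follows; this is a Python illustration with made-up service lists, not the actual Rexx program or databases:

```python
# Sketch of noting which streaming services also carry a new DVD
# title. The per-service title sets here are illustrative only.

STREAMING = {
    "iTunes": {"Coco", "Soul"},
    "Netflix": {"Soul"},
    "Prime Video": {"Coco"},
    "Hulu": set(),
}

def also_on(title: str) -> str:
    """Return a note listing the services that carry this title."""
    services = [name for name, titles in STREAMING.items() if title in titles]
    return " (also on " + ", ".join(services) + ")" if services else ""

print("Soul" + also_on("Soul"))  # -> Soul (also on iTunes, Netflix)
```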
Ideally these pages would all be created programmatically from data supplied
to me by each streaming service, and that remains my objective! For now, I only get some of them that way.
When I get the data from the service, it typically arrives as an Excel
spreadsheet; my program reads in all the data (saved as a .csv file), alphabetizes it, and
extracts the key information like genre and rating to create the
alphabetized index you see on the website. Please note my
discussion of title discrepancies below!
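That read-alphabetize-index step is done in Rexx; a minimal Python sketch of the same idea, with hypothetical column names and sample rows, might be:

```python
import csv
import io

# Sketch of ingesting a service's title feed (an Excel export saved
# as CSV), alphabetizing it, and grouping titles by first letter to
# build an A-Z index. Column names and rows are hypothetical.

def load_index(csv_text: str) -> dict[str, list[str]]:
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: r["title"].lower())
    index: dict[str, list[str]] = {}
    for r in rows:
        letter = r["title"][0].upper()
        index.setdefault(letter, []).append(
            f'{r["title"]} [{r["genre"]}, {r["rating"]}]'
        )
    return index

feed = """title,genre,rating
Coco,Drama,PG
Aladdin,Fantasy,G
Cars,Comedy,G
"""
```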
- Apple TV+ is relatively new and has a small number of
described videos. I use a program I wrote to extract title information directly
from a weekly spreadsheet that Apple sends me, which fully meets my objective of "push a button" to get an updated HTML file in less than a minute! Thank you Apple.
- Disney+ is also relatively new but has hundreds
of described titles.
Disney gives me access to a Google Doc spreadsheet containing the
fields I requested, from which I programmatically build the
listing. Since they offer me future release dates, I can
share them for "coming soon" titles. I have also been given
access to a list of titles available in the USA with foreign
language description, so I include that information at the bottom of the page.
- Google Play Store lists titles for rent in a manner similar to iTunes. I capture all the titles from their "Movies with Audio Description" page as a text file and turn it into a page of HTML titles via software.
- HBO Max began operation in late March 2021. At the present time, title extraction is semi-automated by downloading their pages of audio described titles to create a database of title information which is then used to generate the HTML page of titles.
- Hulu began offering AD in the spring of 2019
under the terms of a "settlement agreement" with the ACB.
Because lawyers are still involved, I am not allowed to have a
contact at Hulu, and I have to manually gather the information on a
best-effort basis. I use a program to compare the list of titles on
Hulu's AD Hub with my own list and update a database accordingly. Another program accesses
the database to rebuild the web page.
- iTunes is a dream to work with! They send
me a spreadsheet every Tuesday from which I am able to directly
generate a new page of HTML of their titles with rare manual intervention.
- Netflix is a little more difficult at present,
though I have wonderful support from the company. The methodology for capturing titles has changed over the years, but now I go to the
Netflix page listing titles with AD and manually scroll down over two
dozen times to get to the bottom of the page. The reason for
this is that there are so many titles (with an image for each one) that
the Netflix page uses dynamic loading, repeatedly fetching more
titles as you page down. I have to scroll all the way
to the bottom so that all the titles are in the screen buffer at the same
time. Then I save the page as a text file (which is basically just titles) and post-process it to find the week's new titles (see
discussion below). I then look up each title manually on
Netflix (because the screen capture only has titles) and choose information to enter into a database I maintain
via another program. Then I run a third program to build my
Netflix AD web page from my database! See below for more.
(I am still working with Netflix on getting a weekly
spreadsheet feed to make this process easier, though it will cut
down updates from twice a week to once a week and drop a few details that I add manually to some titles.)
- Paramount+ (the former CBS All Access) is
currently maintained in a semi-automated manner, as we do not yet
have a data feed from them, though they are working on it!
- Peacock is the newest streaming service. At the present time, data collection is semi-automated to a database and then to the HTML file. The exact process is complicated, so I won't go into details here!
- Prime Video is also programmatically produced,
but it has been the most challenging, with changing formats and
information over time and the difficulty of dealing with the same
title and different video modes (like 4K) or options (like "With
Bonus Material"). Recently I've been able to extract and include
listings for Free-With-Prime titles and Free-With-IMDb TV titles
because my contact at Amazon flags them for me in the spreadsheet.
- Spectrum Access is new as of May 2020. I
rebuild the page periodically from an updated list of titles they
send me. Since I only list titles for Spectrum (no other
details), this is an easy page to maintain.
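Several of the semi-automated services above, Hulu in particular, rely on comparing a freshly captured title list against my database. A minimal sketch of that comparison step, in Python with made-up titles (my actual programs are Rexx), might look like:

```python
# Sketch of comparing the titles currently visible on a service's AD
# hub against the titles already in my database. All names here are
# illustrative.

def diff_titles(on_service: set[str], in_database: set[str]):
    """Return (titles to add, titles the service has pulled)."""
    added = sorted(on_service - in_database)
    pulled = sorted(in_database - on_service)
    return added, pulled

hub = {"Fargo", "Shrill", "Ramy"}
mine = {"Fargo", "Shrill", "Castle Rock"}
added, pulled = diff_titles(hub, mine)
# added  -> ["Ramy"]        (new on the service)
# pulled -> ["Castle Rock"] (no longer listed)
```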
More on Netflix: Every two weeks I get a spreadsheet from
Netflix that I post-process for three reasons:
- It serves to validate that I have correctly extracted all
described titles from the website and removed pulled ones.
- I periodically run a program to compare the Netflix-chosen genre
for each title against my choice from those offered when I do the
manual lookup of each title during the week. I may or may not
make changes based on this comparison, but I usually do.
- Netflix includes information about all the titles they have
which contain description tracks in languages other than English.
I use this data to build my
USA Netflix Shows
Audio Described in Languages Other Than English page. This
is the only such listing in the world.
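The genre cross-check in the second point above can be sketched as follows (Python for illustration; the titles and genres are made up):

```python
# Sketch of the periodic genre cross-check: the service's genre for
# each title versus the genre I chose during manual lookup. Titles
# and genre assignments are invented for illustration.

def genre_mismatches(service: dict[str, str], mine: dict[str, str]):
    """List (title, my genre, service genre) wherever the two differ."""
    return sorted(
        (title, mine[title], service[title])
        for title in mine
        if title in service and service[title] != mine[title]
    )

netflix_genres = {"Roma": "Drama", "Okja": "Sci-Fi", "Klaus": "Family"}
my_genres = {"Roma": "Drama", "Okja": "Adventure", "Klaus": "Family"}
# genre_mismatches(netflix_genres, my_genres)
# -> [("Okja", "Adventure", "Sci-Fi")]
```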
There are two types of complications that I have to deal with when
processing data from vendors programmatically.
The first is title, rating, and genre changes. Sometimes I get
a title which is incorrect in some manner. Examples:
- For iTunes, for reasons unknown, a few of the titles are
as-released in other countries, such as "Mums' Night Out" instead of
"Moms' Night Out."
- For Prime Video, for reasons unknown, ratings are sometimes
different for versions in different video formats (such as 4K) and
have to be overridden, or they use non-standard terminology, such as
ages_13_and_older or all_ages, which needs to be converted.
- I ask for and receive a single genre for each title from each
service, and sometimes their choice seems odd, so I need to override
it. For instance, a title will come in as "Historical
Fiction," where a better alternate choice to combine with other
titles would be "Drama" (which the service will list as an alternate genre).
- Sometimes there are several versions of the same exact
title, and I need to force a distinction so users know which is
which. The most blatant example is Robin Hood, for which at
one time I was carrying FIVE versions! Another
example is Aladdin. In these cases, I have to catch them and
suffix them with more information, such as a year. In the case
of Robin Hood, I include the vendor, as in "Robin Hood [Disney]".
- The situation really becomes complicated when I try to
consolidate titles for the Master AD List.
Minor title variations between providers create multiple entries,
when there should only be one. I have a list of over 125 title
changes (consolidations) that my program consults for this purpose.
For example: "Tyler Perry's Single Moms Club" has to be
changed to "Tyler Perry's the Single Moms Club";
and 14+ Disneynature films have to be prefixed with "Disneynature:"
to match other services.
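The consolidation lookup works roughly as sketched below (Python for illustration; the first mapping and the Disneynature prefix rule come from the examples above, the Disneynature subset shown is illustrative, and the real Rexx table holds 125+ entries):

```python
# Sketch of the title-consolidation lookup for the Master AD List:
# map each provider's spelling of a title to one canonical form.
# The Disneynature set here is an illustrative subset.

CANONICAL = {
    "Tyler Perry's Single Moms Club": "Tyler Perry's the Single Moms Club",
}
DISNEYNATURE = {"Bears", "Monkey Kingdom", "Penguins"}

def canonical_title(title: str) -> str:
    """Return the consolidated form of a provider-supplied title."""
    if title in CANONICAL:
        return CANONICAL[title]
    if title in DISNEYNATURE:
        return "Disneynature: " + title
    return title
```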
The second complication is self-imposed: I like to do my users
a favor and list ADDITIONS since the last listing where it is convenient
to do so. This means I
have to track the date-of-first-entry for each title, then compare the
current list against the previous one, looking not only for additions,
but how long ago the addition occurred so I can mark it as ADDED in the
listing for a couple of weeks. Turns out that's a lot of code!
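The ADDED tracking described above can be sketched as follows (Python for illustration; the two-week window matches the "couple of weeks" mentioned, and the titles and dates are hypothetical):

```python
from datetime import date, timedelta

# Sketch of the ADDED marker: each title carries its date of first
# entry, and any title first seen within the last two weeks gets
# flagged in the listing. Sample titles and dates are hypothetical.

def added_flag(first_seen: date, today: date, weeks: int = 2) -> str:
    """Return the ADDED suffix for recently added titles."""
    return " (ADDED)" if today - first_seen <= timedelta(weeks=weeks) else ""

today = date(2021, 9, 21)
listing = {
    "Dune": date(2021, 9, 14),       # first seen last week -> flagged
    "Casablanca": date(2020, 1, 7),  # long-standing        -> unflagged
}
for title, first_seen in listing.items():
    print(title + added_flag(first_seen, today))
```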
There are numerous other programs I have written to support the
website. For instance, to create the Master AD list, I dynamically extract all the current titles with AD from all the HTML
pages for each service, then combine them into a consolidated listing.
Also, I periodically run a program to compare all the listings of DVDs
over the years with the Alphabetic listing and the Children's Described
DVDs listing to make sure I haven't missed any or placed them out of
alphabetical sequence. In the second half of 2020, I implemented the
ability to sort streaming service titles (where available) by year, and I created a Christmas-Themed Videos listing.
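The alphabetical-sequence check mentioned above can be sketched as follows (Python for illustration; the real program is Rexx and runs against the HTML listings):

```python
# Sketch of the sequence check run against a listing: report any
# title that appears before a title it should follow alphabetically.

def out_of_sequence(titles: list[str]) -> list[str]:
    """Return each title that sorts before its predecessor."""
    return [
        b for a, b in zip(titles, titles[1:])
        if b.lower() < a.lower()
    ]

# out_of_sequence(["Aladdin", "Coco", "Brave", "Dumbo"]) -> ["Brave"]
```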
The Other Pages
There is not a lot to say about the other 100+ pages on the site, as
they are all hand-coded content. The most dynamic page is the main
page, of course, which I update every Tuesday. (It's
Tuesday because that's the day of the week new DVDs are released.) I
review AD-related information that I receive from ADP Committee members,
users of the website, and stuff I find on my own to determine new
articles. I list any new DVDs and/or new TV shows with AD.
Periodically I update support material in the Reference section of
the main page (such as the "Audio Description: Where and How" report) or
other support material elsewhere.
Mobile and Accessibility Support, Plus How-To ...
- Accessibility support is discussed on a separate page.
- Mobile support is implemented via the method I discuss in an
article on my own home page:
Making a Website Mobile Friendly.
- Additional mobile technology is discussed in my article:
Automatic Resizing of Images for Mobile Devices.
- And while we are covering how-to stuff, you can also see my
articles How to Build a Web Page Programmatically and
CSS for Images and Tables to Comply with HTML5 Standards.
If you have any questions about how the site is put together, drop me
a line via the Webmaster link below.