Btw, re: my opinion that computers are not gonna be able to translate sign languages in our lifetime, it's not that sign languages are necessarily More complicated than spoken/written languages (I truly don't know how you'd measure that, but I'd assume they're equally complicated). But video is, in terms of sheer data, much bigger and presumably harder to process than audio. I cannot imagine this happening without *astounding* computational resources, which would take far more energy, water, and money than a human interpreter (and, more importantly, wouldn't work as well, at least for the foreseeable future). I assume the computation would happen off-site in most cases if it did work, meaning the Internet connection is gonna need to be phenomenal (there is already widespread dissatisfaction with the human VRS interpreters used in medical settings, because half the time the connection drops). Speech-to-text, with all the issues it still has, seems like a breeze in comparison to 'understanding' a video.
I also cannot wrap my mind around how a machine would handle depictions. Like, with some practice behind me, my human mind is now able to understand (some) depictions I've never seen before (thank goodness, because there will ALWAYS be new depictions I haven't seen before, bc Deaf people are resourceful and creative), but I don't see how a machine would. That's pure sci-fi to me. I also wouldn't expect a machine to do a good job translating stuff it's never heard before in a spoken language (e.g. wordplay, or the way you can sometimes tell the meaning of a new slang word from context, or even an uncommon name), but the thing is, I think depiction is a much bigger part of daily life than wordplay is?