#I have been able to extract basically all the music unedited. so now I can listen to fog rolls in with no reverb! it's very interesting!
Text
Actually, I've done something really epic gamer right now. Turns out, the photos were available in high quality. Just not from that archive! Apparently these pictures are still on Tumblr, all you need is the link! So, copying and pasting the links from the archive post lets us view the files! But they're still kinda low quality... However, when you hover over the pictures in the archive you can see that they say: "tumblr_njuardbVUb1s6muneo(1-6,8)_1280.jpg"... and these lower quality versions of the files have 500 or 250 instead of 1280. So, if you change the numbers at the end to 1280...
YOU GET TO DOWNLOAD THEM IN THEIR ORIGINAL QUALITY. (Or at least... close to the originals. The Sonic Is My Boyfriend art is still slightly compressed... leading me to believe that there might still be a higher quality version than 1280. Here's the link to the art if any of you wanna try and find it... And here are the pictures on Tumblr: 1, 2, 3, 4, 5, 6, 8. Also... judging from the naming convention, I'm led to believe that picture 7 is missing...) Do with these and this information what you will, but I am incredibly delighted to have found these!!!
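(If you'd rather let a script do the link-editing and downloading, here's a rough little Python sketch of the trick. The URLs below are just stand-ins for whatever links you actually copy out of the archive post, and it assumes Tumblr still serves a _1280 version of each file; it'll just 404 if it doesn't.)

```python
import re
import urllib.request

# Stand-in links -- paste in the real low-res URLs copied from the archived post.
low_res_links = [
    "https://media.tumblr.com/tumblr_njuardbVUb1s6muneo1_500.jpg",
    "https://media.tumblr.com/tumblr_njuardbVUb1s6muneo2_250.jpg",
]

for url in low_res_links:
    # Swap whatever size suffix the filename ends with (_250, _500, ...) for _1280.
    hi_res = re.sub(r"_\d+(\.\w+)$", r"_1280\1", url)
    filename = hi_res.rsplit("/", 1)[-1]
    print("grabbing", hi_res)
    urllib.request.urlretrieve(hi_res, filename)  # saves the file next to the script
```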
Whoever archived this post by GlitchCityLA on the Wayback Machine is an absolute legend!!!! Thank you so much!!! We have some official photos of the event now!
Unfortunately, the pictures are low quality, but low quality is better than nothing! I'm gonna be downloading all of these just to keep them safe. Also, a bonus thing:
#sonic dreams collection#sonic is my boyfriend#investigation#GlitchCityLA#You have no idea how happy I am.#Holy shit#I'm super hacker zone now!!!!11!#mwhahaha!!#Sonic is my boyfriend cannot hide from me for long!!!#I will find as much as possible!!!!#I will "decompile" the game and rebuild it from scratch!!! (Okay maybe that's a stretch but could you imagine if i did that?)#That'd be awesome.#also#fun fact#Sonic Movie Maker's repository can be found here: https://github.com/mcteapot/SMovieMaker On GitHub.#I have downloaded it and I will probably explore it sometime later.#Also also#I believe that Sonic Dreams Collection was made in Unity 4 beta. Because when i dug around the files with AssetStudio#I saw the unity engine Splash Screen pictures in the files while looking for the game's audio files. and it said made with unity 4 beta#Speaking of which#I have been able to extract basically all the music unedited. so now I can listen to fog rolls in with no reverb! it's very interesting!#maybe I'll make a separate post later idk.#Can you tell I like this game too much?!?!#HAHAHAHAHAHAHAHA I'm so normal about this game!#sorry I love Sonic Dreams Collection a lot.#Arcane Kids is a very fascinating group#Bubsy 3d is pretty cool#Perfect stride and Zenith look really interesting.#sorry. I'll stop.#For now. hehehehahaha
Text
WellSaid aims to make natural-sounding synthetic speech a credible alternative to real humans
Many things are better said than read, but the best voice tech out there seems to be reserved for virtual assistants, not screen readers or automatically generated audiobooks. WellSaid wants to enable any creator to use quality synthetic speech instead of a human voice – perhaps even a synthetic version of themselves.
There's been a series of major advances in voice synthesis over the last couple years as neural network technology improves on the old highly manual approach. But Google, Apple, and Amazon seem unwilling to make their great voice tech available for anything but chirps from your phone or home hub.
As soon as I heard about WaveNet, and later Tacotron, I tried to contact the team at Google to ask when they'd get to work producing natural-sounding audiobooks for everything on Google Books, or as a part of AMP, or make it an accessibility service, and so on. Never heard back. I considered this a lost opportunity, since there are many out there who need such a service.
So I was pleased to hear that WellSaid is taking on this market, after a fashion anyway. The company is the first to launch from the Allen Institute for AI (AI2) incubator program announced back in 2017. They do take their time!
Allen-backed AI2 incubator aims to connect AI startups with world-class talent
Talk the talk
I talked with the co-founders, CEO Matt Hocking and CTO Michael Petrochuk, who explained why they went about creating a whole new system for voice synthesis. The basic problem, they said, is that existing systems not only rely on a lot of human annotation to sound right, but they "sound right" the exact same way every time. You can't just feed them a few hours of audio and hope they figure out how to inflect questions or pause between list items – much of this stuff has to be spelled out for them. The end result, however, is highly efficient.
"Their goal is to make a small model for cheap [i.e. computationally] that pronounces things the same way every time. It's this one perfect voice," said Petrochuk. "We took research like Tacotron and pushed it even further – but we're not trying to control speech and enforce this arbitrary structure on it."
"When you think about the human voice, what makes [it] natural, kind of, is the inconsistencies," said Hocking.
And where better to find inconsistencies than in humans? The team worked with a handful of voice actors to record dozens of hours of audio to feed to the system. There's no need to annotate the text with "speech markup language" to designate parts of sentences and so on, Petrochuk said: "We discovered how to train off of raw audiobook data, without having to do anything on top of that."
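For contrast, here's roughly what that "spell it out for them" annotation looks like with a conventional cloud engine: the pauses and emphasis are hand-written in SSML rather than learned from the recordings. This is a sketch against Google's Cloud Text-to-Speech Python client; the SSML tags are standard, but the voice name and timings are just illustrative, and you'd need your own API credentials for it to run.

```python
from google.cloud import texttospeech

# Hand-annotated input: every pause and emphasis is spelled out in SSML,
# the kind of markup WellSaid says its training does without.
ssml = """
<speak>
  Here are three options:
  <break time="400ms"/> the first,
  <break time="400ms"/> the second,
  <break time="600ms"/> and <emphasis level="moderate">the third</emphasis>.
</speak>
"""

client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(ssml=ssml),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US", name="en-US-Wavenet-D"  # illustrative voice choice
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("annotated_sample.mp3", "wb") as f:
    f.write(response.audio_content)  # synthesized speech with the markup applied
```

WellSaid's pitch is that the recordings plus plain text are enough to learn those pauses on their own.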
So WellSaid's model will often pronounce the same word differently, not because a carefully manicured manual model of language suggested it do so, but because the person whose vocal fingerprint it is imitating did so.
And how does that work, exactly? That question seems to dip into WellSaid's secret sauce. Their model, like any deep learning system, is taking innumerable inputs into account and producing an output, but it is larger and more far-reaching than other voice synthesis systems. Things like cadence and pronunciation aren't specified by its overseers but extracted from the audio and modeled in real time. Sounds a bit like magic, but that's often the case when it comes to bleeding-edge AI research.
It runs on a CPU in real time, not on a GPU cluster somewhere, so it can be done offline as well. This is a feat in itself, since many voice synthesis algorithms are quite resource-heavy.
What matters is that the voice produced can speak any text in a very natural sounding way. Here's the first bit of an article – alas, not one of mine, which would have employed more mellifluous circumlocutions – read by Google's WaveNet, then by two of WellSaid's voices.
[embedded YouTube video: the same passage read by Google's WaveNet and two WellSaid voices]
The latter two are definitely more natural sounding than the first. On some phrases the voices may be nearly indistinguishable from their originals, but in most cases I feel sure I could pick out the synthetic voice in a few words.
That it's even close, however, is an accomplishment. And I can certainly say that if I was going to have an article read to me by one of these voices, it would be WellSaid's. Naturally it can also be tweaked and iterated, or effects applied to further manipulate the sound, as with any voice performance. You didn't think those interviews you hear on NPR are unedited, did you?
The goal at first is to find the creatives whose work would be improved or eased by adding this tool to their toolbox.
"There are a lot of people who have this need," explained Hocking. "A video producer who doesn't have the budget to hire a voice actor; someone with a large volume of content that has to be iterated on rapidly; if English is a second language, this opens up a lot of doors; and some people just don't have a voice for radio."
It would be nice to be able to add voice with a click rather than just have block text and royalty-free music over a social ad (think the admen):
[embedded YouTube video]
I asked about the reception among voice actors, who of course are essentially being asked to train their own replacements. They said that the actors were actually positive about it, thinking of it as something like stock photography for voice; get a premade product for cheap, and if you like it, pay the creator for the real thing. Although they didn't want to prematurely lock themselves into future business models, they did acknowledge that revenue share with voice actors was a possibility. Payment for virtual representations is something of a new and evolving field.
A closed beta launches today, which you can sign up for at the company's site. They're going to be launching with five voices to start, with more voices and options to come as WellSaid's place in the market becomes clear. Part of that process will almost certainly be inclusion in tools used by the blind or otherwise disabled, as I have been hoping for years.
Sounds familiar
And what comes after that? Making synthetic versions of users' voices, of course. No brainer! But the two founders cautioned that's a ways off for several reasons, even though it's very much a possibility.
"Right now we're using about 20 hours of data per person, but we see a future where we can get it down to 1 or 2 hours while maintaining a premium lifelike quality to the voice," said Petrochuk.
"And we can build off existing datasets, like where someone has a back catalog of content," added Hocking.
The trouble is that the content may not be exactly right for training the deep learning model, which, advanced as it is, can no doubt be finicky. There are dials and knobs to tweak, of course, but they said that fine-tuning a voice is more a matter of adding corrective speech, perhaps having the voice actor read a specific script that props up the sounds or cadences that need a boost.
They compared it with directing such an actor rather than adjusting code. You don't, after all, tell an actor to increase the pauses after commas by 8 percent or 15 milliseconds, whichever is longer. It's more efficient to demonstrate for them: "say it like this."
[embedded YouTube video]
Even so, getting the quality just right with limited and imperfect training data is a challenge that will take some serious work if and when the team decides to take it on.
But as some of you may have noticed, there are also some parallels to the unsavory world of "deepfakes." Download a dozen podcasts or speeches and you've got enough material to make a passable replica of someone's voice, perhaps a public figure. This of course has a worrying synergy with the existing ability to fake video and other imagery.
This is not news to Hocking and Petrochuk. If you work in AI, this kind of thing is sort of inevitable.
"This is a super important question and we've considered it a lot," said Petrochuk. "We come from AI2, where the motto is 'AI for the common good.' That's something we really subscribe to, and that differentiates us from our competitors who made Barack Obama voices before they even had an MVP [minimum viable product]. We're going to watch closely to make sure this isn't being used negatively, and we're not launching with the ability to make a custom voice, because that would let anyone create a voice from anyone."
Active monitoring is just about all anyone with a potentially troubling AI technology can be expected to do – though they are looking at mitigation techniques that could help identify synthetic voices.
With the ongoing emphasis on multimedia presentation of content and advertising rather than written copy, WellSaid seems poised to make an early play in a growing market. As the product evolves and improves, it's easy to picture it moving into new, more constrained spaces, like time-shifting apps (instant podcast with 5 voices to choose from!) and even taking over territory currently claimed by voice assistants. Sounds good to me.
source: Devin Coldewey, https://techcrunch.com/2019/03/07/wellsaid-aims-to-make-natural-sounding-synthetic-speech-a-credible-alternative-to-real-humans/