#SYSTRAN Blog
How To Use Translate Pro - 3 Simple Techniques For Translation Ease
SYSTRAN Translate PRO is the most powerful, accurate, and secure AI-enabled machine translation software available on the market today. Using Translate PRO enables you to translate languages in real time without ever having to sacrifice data privacy, a major difference compared with most online translation portals. Translate PRO is also very simple to use, allowing you to translate text, documents, and files in real time and to integrate the best machine translation features available into your daily workflow. Let's get started with Translate PRO.
via https://blog.systran.us/how-to-use-translate-pro, July 24, 2023
Free Online Translator Websites for All the World's Languages; Number 4 Caused a Stir
Berupaya.com | We know that in today's world, technology is advancing rapidly. So when we want to chat with foreigners, we can take advantage of sites that provide translation for all languages, Indonesian as well as English (international).
For more, visit my blog:
Image source: Alphacoders (https://wall.alphacoders.com/big.php?i=1337390)
Table of Contents:
What Is an Online Translation Service?
How Do Online Translation Services Work?
The Reliability of Online Translation Services
The Advantages of Using Online Translation Services
How Do You Choose the Right Translation Service?
Data Privacy and Security
Frequently Asked Questions About Online Translation Services
More Frequently Asked Questions
Conclusion and Closing
Here are the websites that provide free online translation for all languages. Let's take a look:
1. Google Translate https://translate.google.com/
Pros:
- Easy to use, with a simple interface.
- Support for a wide range of languages.
- Translates text, documents, and speech.
Notes:
- Accuracy can vary depending on context and the type of text.
2. Microsoft Translator https://www.bing.com/translator/
Pros:
- Good integration with the Microsoft ecosystem.
- Support for text, image, and speech translation.
- Real-time translation in conversations.
Notes:
- A good fit for users of Microsoft products.
3. DeepL https://www.deepl.com/translator
Pros:
- Uses artificial-intelligence technology for sophisticated translation.
- Understands context well.
- High translation quality, especially for technical texts.
Notes:
- A good choice for more specific, higher-quality needs.
4. Yandex.Translate https://translate.yandex.com
Pros:
- Support for a wide range of languages.
- A clean, easy-to-understand interface.
- Reasonably accurate text translation.
Notes:
- A good service for general use.
5. Translate.com https://www.translate.com/
Pros:
- A simple, user-friendly interface.
- Supports a wide range of languages.
- Options to translate text or documents, or to listen to spoken translations.
Notes:
- Suitable for everyday use.
6. Babelfish (by Yahoo) https://babelfish.yahoo.com/
Pros:
- A translation service from Yahoo.
- An easy-to-use interface.
- Can translate text and web pages.
Notes:
- Translation accuracy can vary.
7. Systran https://www.systransoft.com/translator/
Pros:
- Translation solutions tailored to businesses.
- High-quality document translation.
- Glossary management for translation consistency.
Notes:
- Suitable for business and technical translation needs.
8. SDL FreeTranslation https://www.freetranslation.com/
Pros:
- An intuitive interface.
- Support for a wide range of languages.
- Options to translate text or documents.
Notes:
- A friendly service for general use.
9. Promt Online Translator https://www.online-translator.com/
Pros:
- Offers translation across a diverse set of languages.
- Support for many of the world's languages.
- Suitable for translating text and documents.
Notes:
- Useful for everyday translation needs.
10. ImTranslator https://imtranslator.net/translation/
Pros:
- A browser extension for quick access.
- Support for a wide range of languages.
- Provides extra translation tools such as a text converter and a text-to-speech reader.
Notes:
- Suitable for everyday use and web browsing.
11. Linguee https://www.linguee.com/
Pros:
- Provides translations with examples of usage in context.
- A large database of words and phrases.
- Good for an in-depth understanding of a particular word or phrase.
Notes:
- Useful in business or technical contexts.
12. Reverso https://www.reverso.net/text_translation.aspx
Pros:
- Offers translation and multilingual dictionaries.
- A focus on vocabulary mastery.
- Supports translation of text and documents.
Notes:
- Good for an in-depth understanding of words and phrases.
human psycholinguists: a critical appraisal
(The title of this post is a joking homage to one of Gary Marcus’ papers.)
I’ve discussed GPT-2 and BERT and other instances of the Transformer architecture a lot on this blog. As you can probably tell, I find them very interesting and exciting. But not everyone has the reaction I do, including some people who I think ought to have that reaction.
Whatever else GPT-2 and friends may or may not be, I think they are clearly a source of fascinating and novel scientific evidence about language and the mind. That much, I think, should be uncontroversial. But it isn’t.
(i.)
When I was a teenager, I went through a period where I was very interested in cognitive psychology and psycholinguistics. I first got interested via Steven Pinker’s popular books -- this was back when Pinker was mostly famous for writing about psychology rather than history and culture -- and proceeded to read other, more academic books by authors like Gary Marcus, Jerry Fodor, and John Anderson.
At this time (roughly 2002-6), there was nothing out there that remotely resembled GPT-2. Although there were apparently quite mature and complete formal theories of morphology and syntax, which could accurately answer questions like “is this a well-formed English sentence?”, no one really knew how these could or should be implemented in a physical system meant to understand or produce language.
This was true in two ways. For one thing, no one knew how the human brain implemented this stuff, although apparently it did. But the difficulty was more severe than that: even if you forgot about the brain, and just tried to write a computer program (any computer program) that understood or produced language, the results would be dismal.
At the time, such programs were either specialized academic models of one specific phenomenon -- for example, a program that could form the past tense of a verb, but couldn’t do anything else -- or they were ostensibly general-purpose but incredibly brittle and error-prone, little more than amusing toys. The latter category included some programs intended as mere amusements or provocations, like the various chatterbots (still about as good/bad as ELIZA after four decades), but also more serious efforts whose reach exceeded their grasp. SYSTRAN spent decades manually curating millions of morphosyntactic and semantic facts for enterprise-grade machine translation; you may remember the results in the form of the good old Babel Fish website, infamous for its hilariously inept translations.
This was all kind of surprising, given that the mature formal theories were right there, ready to be programmed into rule-following machines. What was going on?
The impression I came away with, reading about this stuff as a teenager, was of language as a fascinating and daunting enigma, simultaneously rule-based and rife with endless special cases that stacked upon one another. It was formalism, Jim, but not as we knew it; it was a magic interleaving of regular and irregular phenomena, arising out of the distinctive computational properties of some not-yet-understood subset of brain architecture, which the models of academics and hackers could crudely imitate but not really grok. We did not have the right “language” to talk about language the way our own brains did, internally.
(ii.)
The books I read, back then, talked a lot about this thing called “connectionism.”
This used to be a big academic debate, with people arguing for and against “connectionism.” You don’t hear that term much these days, because the debate has been replaced by a superficially similar but actually very different debate over “deep learning,” in which what used to be good arguments about “connectionism” are repeated in cruder form as bad arguments about “deep learning.”
But I’m getting ahead of myself. What was the old debate about?
As you may know, the pioneers of deep learning had been pioneering it for many years before it went mainstream. What we now call “neural nets” were invented step by step a very long time ago, and very early and primitive neural nets were promoted with far too much zeal as long ago as the 60s.
First there was the “Perceptron,” a single-layer fully-connected network with an update rule that didn’t scale to more layers. It generated a lot of unjustified hype, and was then “refuted” in inimitable petty-academic fashion by Minsky and Papert’s book Perceptrons, a mathematically over-elaborate expression of the simple and obvious fact that no single-layer net can express XOR. (Because no linear classifier can! Duh!)
Then the neural net people came back, armed with “hidden layers” (read: “more than one layer”) trained by “backpropagation” (read: “efficient gradient descent”). These had much greater expressive power, and amounted to a form of nonlinear regression which could learn fairly arbitrary function classes from data.
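Here is a minimal sketch of both points, in Python with NumPy (my illustration, not anything from the historical debate): a single linear layer cannot represent XOR, while one hidden layer trained by backpropagation, i.e. gradient descent run through the layers, learns it from the four examples.

```python
import numpy as np

# XOR: the classic function no linear classifier (hence no single-layer
# perceptron) can express.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden ("more than one layer")
W2 = rng.normal(size=(4, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1)                      # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)      # backpropagation:
    d_h = (d_out @ W2.T) * h * (1 - h)       # chain rule, layer by layer
    W2 -= 0.5 * h.T @ d_out                  # gradient-descent updates
    W1 -= 0.5 * X.T @ d_h

# Approaches [0, 1, 1, 0]; an unlucky seed can stall in a local minimum.
print(out.round(2).ravel())
```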
Some people in psychology became interested in using them as a model for human learning. AFAIK this was simply because nonlinear regression kind of looks like learning (it is now called “machine learning”), and because of the very loose but much-discussed resemblance between these models and the layered architecture of real cortical neurons. The use of neural nets as modeling tools in psychology became known as “connectionism.”
Why was there a debate over connectionism? To opine: because the neural nets of the time (80s to early 90s) really sucked. Weight sharing architectures like CNN and LSTM hadn’t been invented yet; everything was either a fully-connected net or a custom architecture suspiciously jerry-rigged to make the right choices on some specialized task. And these things were being used to model highly regular, rule-governed phenomena, like verb inflection -- cases where, even when human children make some initial mistakes, those mistakes themselves have a regular structure.
The connectionist models typically failed to reproduce this structure; where human kids typically err by applying a generic rule to an exceptional case (“I made you a cookie, but I eated it” -- a cute meme because an authentically childlike one), the models would err by producing inhuman “blends,” recognizing the exception yet applying the rule anyway (“I ated it”).
There were already good models of correct verb inflection, and generally of correct versions of all these behaviors. Namely, the formal rule systems I referred to earlier. What these systems lacked (by themselves) was a model of learning, of rule-system acquisition. The connectionist models purported to provide this -- but they didn’t work.
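Before moving on, here is a minimal sketch of the rule-and-memory picture those formal systems embodied; the verb list and function are my own toy illustration, not any published model.

```python
# Rule-and-memory past tense: irregulars are retrieved from memory, and
# everything else -- including novel verbs -- gets the default rule
# applied to the variable X: X -> X + "-ed".
IRREGULARS = {"eat": "ate", "make": "made", "go": "went", "break": "broke"}

def past_tense(x: str) -> str:
    if x in IRREGULARS:      # memory lookup blocks the rule
        return IRREGULARS[x]
    return x + "ed"          # default rule over the variable X

print(past_tense("walk"))    # walked
print(past_tense("eat"))     # ate
print(past_tense("wug"))     # wugged -- the rule extends to novel words
# A child's "eated" is this system with a memory-retrieval failure; the
# inhuman blend "ated" is what the early connectionist nets produced.
```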
(iii.)
In 2001, a former grad student of Pinker’s named Gary Marcus wrote an interesting book called The Algebraic Mind: Integrating Connectionism and Cognitive Science. As a teenager, I read it with enthusiasm.
Here is a gloss of Marcus’ position as of this book. Quote-formatted to separate it from the main text, but it’s my writing, not a quote:
The best existing models of many psychological phenomena are formal symbolic ones. They look like math or like computer programs. For instance, they involve general rules containing variables, little “X”s that stand in identically for every single member of some broad domain. (Regular verb inflection takes any X and tacks “-ed” on the end. As Marcus observes, we can do this on the fly with novel words, as when someone talks of a politician who has “out-Gorbacheved Gorbachev.”)
The connectionism debate has conflated at least two questions: “does the brain implement formal symbol-manipulation?” and “does the brain work something like a ‘neural net’ model?” The assumption has been that neural nets don’t manipulate symbols, so if one answer is “yes” the other must be “no.” But the assumption is false: some neural nets really do implement (approximate) symbol manipulation.
This includes some, but not all, of the popular “connectionist” models, despite the fact that any “connectionist” success tends to be viewed as a strike against symbol manipulation. Moreover (Marcus argues), the connectionist nets that succeed as psychological models are the ones that implement symbol manipulation. So the evidence is actually convergent: the best models manipulate symbols, including the best neural net models.
Assuming the brain does do symbol manipulation, as the evidence suggests, what remains to be answered is how it does it. Formal rules are natural to represent in a centralized architecture like a Turing machine; how might they be encoded in a distributed architecture like a brain? And how might these complex mechanisms be reliably built, given only the limited information content of the genome?
To answer these questions, we’ll need models that look sort of like neural nets, in that they use massively parallel arrays of small units with limited central control, and build themselves to do computations no one has explicitly “written out.”
But, to do the job, these models can’t be the dumb generic putty of a fully-connected neural net trained with gradient descent. (Marcus correctly observes that those models can’t generalize across unseen input and output nodes, and thus require innate knowledge to be sneakily baked in to the input/output representations.) They need special pre-built wiring of some sort, and the proper task of neural net models in psychology is to say what this wiring might look like. (Marcus proposes, e.g., an architecture called “treelets” for recursive representations. Remember this was before the popular adoption of CNNs, LSTMs, etc., so this was as much a point presaging modern deep learning as a point against modern deep learning; indeed I can find no sensible way to read it as the latter at all.)
Now, this was all very sensible and interesting, back in the early 2000s. It still is. I agree with it.
What has happened since the early 2000s? Among other things: an explosion of new neural net architectures with more innate structure than the old “connectionist” models. CNNs, LSTMs, recursive networks, memory networks, pointer networks, attention, transformers. Basically all of these advances were made to solve the sorts of problems Marcus was interested in, back in 2001 -- to wire up networks so they could natively encode the right kinds of abstractions for human-like generalization, before they saw any data at all. And they’ve been immensely successful!
What’s more, the successes have patterns. The success of GPT-2 and BERT was not a matter of plugging more and more data into fundamentally dumb putty. (I mean, it involved huge amounts of data, but so does human childhood.) The transformer architecture was a real representational advance: suddenly, by switching from one sort of wiring to another sort of wiring, the wired-up machines did way better at language.
Perhaps -- as the Gary Marcus of 2001 said -- when we look at which neural net architectures succeed in imitating human behavior, we can learn something about how the human brain actually works.
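For readers who want to see what that switch of wiring amounts to, here is a minimal NumPy sketch of the transformer's core operation, scaled dot-product self-attention; the shapes and names are illustrative, not GPT-2's actual code.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One head of scaled dot-product self-attention.

    X: (seq_len, d_model) token representations. The innate structure is
    'every position compares itself with every position', rather than a
    fixed local window or a left-to-right recurrence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # relevance-weighted mix

rng = np.random.default_rng(0)
d = 16
X = rng.normal(size=(5, d))                           # a 5-token "sentence"
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # (5, 16)
```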
Back in 2001, when neural nets struggled to model even simple linguistic phenomena in isolation, Marcus surveyed 21 (!) such networks intended as models of the English past tense. Here is part of his concluding discussion:
The past tense question originally became popular in 1986 when Rumelhart and McClelland (1986a) asked whether we really have mental rules. Unfortunately, as the proper account of the past tense has become increasingly discussed, Rumelhart and McClelland’s straightforward question has become twice corrupted. Their original question was “Does the mind have rules in anything more than a descriptive sense?” From there, the question shifted to the less insightful “Are there two processes or one?” and finally to the very uninformative “Can we build a connectionist model of the past tense?” The “two processes or one?” question is less insightful because the nature of processes—not the sheer number of processes—is important. [...] The sheer number tells us little, and it distracts attention from Rumelhart and McClelland’s original question of whether (algebraic) rules are implicated in cognition.
The “Can we build a connectionist model of the past tense?” question is even worse, for it entirely ignores the underlying question about the status of mental rules. The implicit premise is something like “If we can build an empirically adequate connectionist model of the past tense, we won’t need rules.” But as we have seen, this premise is false: many connectionist models implement rules, sometimes inadvertently. [...]
The right question is not “Can any connectionist model capture the facts of inflection?” but rather “What design features must a connectionist model that captures the facts of inflection incorporate?” If we take what the models are telling us seriously, what we see is that those connectionist models that come close to implementing the rule-and-memory model far outperform their more radical cousins. For now, as summarized in table 3.4, it appears that the closer the past tense models come to recapitulating the architecture of the symbolic models -- by incorporating the capacity to instantiate variables with instances and to manipulate (here, “copy” and “suffix”) the instances of those variables -- the better they perform.
Connectionist models can tell us a great deal about cognitive architecture but only if we carefully examine the differences between models. It is not enough to say that some connectionist model will be able to handle the task. Instead, we must ask what architectural properties are required. What we have seen is that models that include machinery for operations over variables succeed and that models that attempt to make do without such machinery do not.
Now, okay, there is no direct comparison between these models and GPT-2 / BERT. For these models were meant as fine-grained accounts of one specific phenomenon, and what mattered most was how they handled edge cases, even which errors they made when they did err.
By contrast, the popular transformer models are primarily impressive as models of typical-case competence: they sure look like they are following the rules in many realistic cases, but it is less clear whether their edge behavior and their generalizations to very uncommon situations extend the rules in the characteristic ways we do.
And yet. And yet . . .
(iv.)
In 2001, in the era of my teenage psycho-cognitive-linguistics phase, computers couldn’t do syntax, much less semantics, much less style, tone, social nuance, dialect. Immense effort was poured into simulating comparatively trivial cases like the English past tense in isolation, or making massive brittle systems like Babel Fish, thousands of hours of expert curation leading up to gibberish that gave me a good laugh in 5th grade.
GPT-2 does syntax. I mean, it really does it. It is competent.
A conventionally trained psycholinguist might quibble, asking things like “does it pass the wug test?” I’ve tried it, and the results are . . . kind of equivocal. So maybe GPT-2 doesn’t respond to probes of edge case behavior the way human children do.
But if so, then so much the worse for the wug test. Or rather: if so, we have learned something about which kinds of linguistic competence are possible in isolation, without some others.
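For anyone who wants to run this kind of probe themselves, here is a minimal sketch using today's Hugging Face transformers library and the public gpt2 checkpoint; the prompt wording is just my example, not a standardized wug item.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A wug-style probe: will the model pluralize a novel noun it has
# never seen, the way human children reliably do?
prompt = "This is a wug. Now there is another one. There are two"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,                       # greedy, for a repeatable answer
    pad_token_id=tokenizer.eos_token_id,   # silences a padding warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```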
What does GPT-2 do? It fucking writes. Short pithy sentences, long flowing beautiful sentences, everything in between -- and almost always well-formed, nouns and verbs agreeing, irregulars correctly inflected, big compositional stacks of clauses lining up just the way they’re supposed to. Gary Marcus was right: you can’t do this with a vanilla fully-connected net, or even with one of many more sophisticated architectures. You need the right architecture. You need, maybe, just maybe, an architecture that can tell us a thing or two about the human brain.
GPT-2 fucking writes. Syntax, yes, and style: it knows the way sentences bob and weave, the special rhythms of many kinds of good prose and of many kinds of distinctively bad prose. Idioms, colloquialisms, self-consistent little worlds of language.
I think maybe the full effect is muted by those services people use that just let you type a prompt and get a continuation back from the base GPT-2 model; with those you’re asking a question that is fundamentally ill-posed (“what is the correct way to finish this paragraph?” -- there isn’t one, of course). What’s more impressive to me is fine-tuning on specific texts in conjunction with unconditional generation, pushing the model in the direction of a specific kind of writing and then letting the model work freestyle.
One day I fed in some Vladimir Nabokov ebooks on a whim, and when I came back from work the damn thing was writing stuff that would be good coming from the real Nabokov. In another project, I elicited spookily good, often hilarious and/or beautiful imitations of a certain notorious blogger (curated selections here). More recently I’ve gotten more ambitious, and have used some encoding tricks together with fine-tuning to interactively simulate myself. Speaking as, well, a sort of expert on what I sound like, I can tell you that -- in scientific parlance -- the results have been trippy as hell.
Look, I know I’m deviating away from structured academic point-making into fuzzy emotive goopiness, but . . . I like words, I like reading and writing, and when I look at this thing, I recognize something.
These machines can do scores of different things that, individually, looked like fundamental challenges in 2001. They don’t always do them “the right way,” by the canons of psycholinguistics; in edge cases they might zig where a human child would zag. But they do things the right way by the canon of me, according to the linguistic competence of a human adult with properly functioning language circuits in his cortex.
What does it mean for psycholinguistics, that a machine exists which can write but not wug, which can run but not walk? It means a whole lot. It means it is possible to run without being able to walk. If the canons of psycholinguistics say this is impossible, so much the worse for them, and so much the better for our understanding of the human brain.
(v.)
Does the distinctive, oddly simple structure of the transformer bear some functional similarity to the circuit design of, I don’t know, Broca’s area? I have tried, with my great ignorance of actual neurobiology, to look into this question, and I have not had much success.
But if there’s anyone out there less ignorant than me who agrees with the Gary Marcus of 2001, this question should be burning in their mind. PhDs should be done on this. Careers should be made from the question: what do the latest neural nets teach us, not about “AI,” but about the human brain? We are sitting on a trove of psycholinguistic evidence so wonderful and distinctive, we didn’t even imagine it as a possibility, back in the early 2000s.
This is wonderful! This is the food that will feed revolutions in your field! What are you doing with it?
(vi.)
The answer to that question is the real reason this essay exists, and the reason it takes such an oddly irritable tone.
Here is Gary Marcus in 2001:
When I was searching for graduate programs, I attended a brilliant lecture by Steven Pinker in which he compared PDP [i.e. connectionist -nostalgebraist] and symbol-manipulation accounts of the inflection of the English past tense. The lecture convinced me that I needed to work with Pinker at MIT. Soon after I arrived, Pinker and I began collaborating on a study of children’s over-regularization errors (breaked, eated, and the like). Infected by Pinker’s enthusiasm, the minutiae of English irregular verbs came to pervade my every thought.
Among other things, the results we found argued against a particular kind of neural network model. As I began giving lectures on our results, I discovered a communication problem. No matter what I said, people would take me as arguing against all forms of connectionism. No matter how much I stressed the fact that other, more sophisticated kinds of network models [! -nostalgebraist] were left untouched by our research, people always seem to come away thinking, “Marcus is an anti-connectionist.”
But I am not an anti-connectionist; I am opposed only to a particular subset of the possible connectionist models. The problem is that the term connectionism has become synonymous with a single kind of network model, a kind of empiricist model with very little innate structure, a type of model that uses a learning algorithm known as back-propagation. These are not the only kinds of connectionist models that could be built; indeed, they are not even the only kinds of connectionist models that are being built, but because they are so radical, they continue to attract most of the attention.
A major goal of this book is to convince you, the reader, that the type of network that gets so much attention occupies just a small corner in a vast space of possible network models. I suggest that adequate models of cognition most likely lie in a different, less explored part of the space of possible models. Whether or not you agree with my specific proposals, I hope that you will at least see the value of exploring a broader range of possible models. Connectionism need not just be about backpropagation and empiricism. Taken more broadly, it could well help us answer the twin questions of what the mind’s basic building blocks are and how those building blocks can be implemented in the brain.
What is Gary Marcus doing in 2019? He has become a polemicist against “deep learning.” He has engaged in long-running wars of words, on Facebook and twitter and the debate circuit, with a number of “deep learning” pioneers, most notably Yann LeCun -- the inventor of the CNN, one of the first big breakthroughs in adding innate structure to move beyond the generalization limits of the bad “connectionist”-style models.
Here is Gary Marcus in September 2019, taking aim at GPT-2 specifically, after citing a specific continuation-from-prompt that flouted common sense:
Current AI systems are largely powered by a statistical technique called deep learning, and deep learning is very effective at learning correlations, such as correlations between images or sounds and labels. But deep learning struggles when it comes to understanding how objects like sentences relate to their parts (like words and phrases).
Why? It’s missing what linguists call compositionality: a way of constructing the meaning of a complex sentence from the meaning of its parts. For example, in the sentence "The moon is 240,000 miles from the Earth," the word moon means one specific astronomical object, Earth means another, mile means a unit of distance, 240,000 means a number, and then, by virtue of the way that phrases and sentences work compositionally in English, 240,000 miles means a particular length, and the sentence "The moon is 240,000 miles from the Earth" asserts that the distance between the two heavenly bodies is that particular length.
Surprisingly, deep learning doesn’t really have any direct way of handling compositionality; it just has information about lots and lots of complex correlations, without any structure. It can learn that dogs have tails and legs, but it doesn’t know how they relate to the life cycle of a dog. Deep learning doesn’t recognize a dog as an animal composed of parts like a head, a tail, and four legs, or even what an animal is, let alone what a head is, and how the concept of head varies across frogs, dogs, and people, different in details yet bearing a common relation to bodies. Nor does deep learning recognize that a sentence like "The moon is 240,000 miles from the Earth" contains phrases that refer to two heavenly bodies and a length.
“Surprisingly, deep learning doesn’t really have any direct way of handling compositionality.” But the whole point of The Algebraic Mind was that it doesn’t matter whether something implements a symbol-manipulating process transparently or opaquely, directly or indirectly -- it just matters whether or not it implements it, full stop.
GPT-2 can fucking write. (BTW, since we’ve touched on the topic of linguistic nuance, I claim the expletive is crucial to my meaning: it’s one thing to merely put some rule-compliant words down on a page and another to fucking write, if you get my drift, and GPT-2 does both.)
This should count as a large quantity of evidence in favor of the claim that, whatever necessary conditions there are for the ability to fucking write, they are in fact satisfied by GPT-2′s architecture. If compositionality is necessary, then this sort of “deep learning” implements compositionality, even if this fact is not superficially obvious from its structure. (The last clause should go without saying to a reader of The Algebraic Mind, but apparently needs explicit spelling out in 2019.)
On the other hand, if “deep learning” cannot do compositionality, then compositionality is not necessary to fucking write. Now, perhaps that just means you can run without walking. Perhaps GPT-2 is a bizarre blind alley passing through an extremely virtuosic kind of simulated competence that will, despite appearances, never quite lead into real competence.
But even this would be an important discovery -- the discovery that huge swaths of what we consider most essential about language can be done “non-linguistically.” For every easy test that children pass and GPT-2 fails, there are hard tests GPT-2 passes which the scholars of 2001 would have thought far beyond the reach of any near-future machine. If this is the conclusion we’re drawing, it would imply a kind of paranoia about true linguistic ability, an insistence that one can do so much of it so well, can learn to write spookily like Nabokov (or like me) given 12 books and 6 hours to chew on them . . . and yet still not be “the real thing,” not even a little bit. It would imply that there are language-like behaviors out there in logical space which aren’t language and which are nonetheless so much like it, non-trivially, beautifully, spine-chillingly like it.
There is no reading of the situation I can contrive in which we do not learn at least one very important thing about language and the mind.
(vii.)
Who cares about “language and the mind” anymore, in 2019?
I did, as a teenager in the 2000s. Gary Marcus and Steven Pinker did, back then. And I still do, even though -- in a characteristically 2019 turn-of-the-tables -- I am supposed to be something like an “AI researcher,” and not a psychologist or linguist.
What are the scholars of language and the mind talking about these days? They are talking about AI. They are saying GPT-2 isn’t the “right path” to AI, because it has so many gaps, because it doesn’t look like what they imagined the nice, step-by-step, symbol-manipulating, human-childhood-imitating path to AI would look like.
GPT-2 doesn’t know anything. It doesn’t know that words have referents. It has no common sense, no intuitive physics or psychology or causal modeling, apart from the simulations of these things cheap enough to build inside of a word-prediction engine that has never seen or heard a dog, only the letters d-o-g (and c-a-n-i-n-e, and R-o-t-t-w-e-i-l-e-r, and so forth).
And yet it can fucking write.
The scholars of language and the mind say: “this isn’t ‘the path to AI’. Why, it doesn’t know anything! It runs before it can walk. It reads without talking, speaks without hearing, opines about Obama without ever having gurgled at the mobile poised over its crib. Don’t trust the hype machine. This isn’t ‘intelligence.’”
And I, an “AI researcher,” say: “look, I don’t care about AI. The thing can fucking write and yet it doesn’t know anything! We have a model for like 100 different complex linguistic behaviors, at once, integrated correctly and with gusto, and apparently you can do all that without actually knowing anything or having a world-model, as long as you have this one special kind of computational architecture. Like, holy shit! Stop the presses at MIT Press! We have just learned something incredibly cool about language and the mind, and someone should study it!”
And the scholars of language and the mind go off and debate Yann LeCun and Yoshua Bengio on the topic of whether “deep learning” is enough without incorporating components that look explicitly “symbolic.” Back in 2001, Marcus (correctly) argued that the bad, primitive connectionist architectures of the time often did manipulate symbols, sometimes without their creators realizing it. Now the successors of the “connectionist” models, having experimented with innate structure just like Marcus said they should, can do things no one in 2001 even dreamed of . . . and somehow, absurdly, we’ve forgotten the insight that a model can be symbolic without looking symbolic. We’ve gone from attributing symbol-manipulation powers to vanilla empiricist models that sucked, to denying those powers to much more nativist models that can fucking write.
What happened? Where did the psycholinguists go, and how can I get them back?
Here is Steven Pinker in 2019, explaining why he is unimpressed with GPT-2′s “superficially plausible gobbledygook”:
Being amnesic for how it began a phrase or sentence, it won’t consistently complete it with the necessary agreement and concord -- to say nothing of semantic coherence. And this reveals the second problem: real language does not consist of a running monologue that sounds sort of like English. It’s a way of expressing ideas, a mapping from meaning to sound or text. To put it crudely, speaking or writing is a box whose input is a meaning plus a communicative intent, and whose output is a string of words; comprehension is a box with the opposite information flow.
“Real language does not consist of a running monologue that sounds sort of like English.” Excuse me? Does the English past tense not matter anymore? Is morphosyntax nothing? Style, tone, nuances of diction, tics of punctuation? Have you just given up on studying language qua language the way Chomsky did, just conceded that whole thing to the evil “deep learning” people without saying so?
Aren’t you a scientist? Aren’t you curious? Isn’t this fascinating?
Hello? Hello? Is there anyone in here who can produce novel thoughts and not just garbled regurgitations of outdated academic discourse? Or should I just go back to talking to GPT-2?
Neural Machine Translation
Machine Translation (MT) has been with us for a very long time. The first effective form developed was Rule-Based MT (RBMT), which began in the 1950s. RBMT became obsolete when Statistical MT (SMT) was refined during the 1990s, and one type of SMT, Phrase-Based MT, still underpins the major online translation services. Since 2014, Neural Machine Translation (NMT) has moved into the language-services field, opening the door to a potential paradigm shift. This is because the way NMT works is fundamentally different, so the shape it will take as it is used and evolves is not easy to predict. A SYSTRAN blog post that attempts to break down in detail how it works describes it as "mysterious,"* and part of the reason it is so intricate and hard to explain is that an NMT system looks for patterns on its own, without being told exactly what to look for, and within its layers of processing it is hard to identify how it reaches its decisions.
SMT basically works by matching source-text "n-grams" (sequences of up to six words) against possible target-language equivalents. NMT, on the other hand, builds its knowledge and methods through "deep learning" processes which, as NMT's name suggests, loosely resemble the biological neural networks of animal brains. Rather than following task-specific programming, NMT systems approach problems by learning associations from examples.
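As a toy illustration of the n-gram idea (my sketch, not any production SMT system):

```python
def ngrams(tokens, max_n=6):
    """Every contiguous word sequence up to max_n long: the 'phrases'
    a phrase-based SMT system looks up in its phrase table."""
    return [
        tuple(tokens[i:i + n])
        for n in range(1, max_n + 1)
        for i in range(len(tokens) - n + 1)
    ]

for phrase in ngrams("the cat sat on the mat".split(), max_n=3):
    print(" ".join(phrase))

# A decoder would score target-language candidates for each phrase
# against a phrase table and a language model learned from parallel
# text, then stitch the best-scoring segmentation together.
```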
NMT systems run on Graphics Processing Units (GPUs), which are powerful and require a fraction of the memory needed by the Central Processing Units (CPUs) that SMT relies on. However, the training the systems require is "computationally expensive," Google says. Other drawbacks are that NMT does not handle rare words well, which has hindered its efficiency. Even so, with isolated, simple sentences, Google's NMT "reduces translation errors by an average of 60% compared to Google's phrase-based production system," according to a Google abstract.**
Four NMT systems are currently available: Google Translate, Microsoft Translator, SYSTRAN Pure Neural Machine Translation, and an open-source NMT called OpenNMT from the Harvard NLP group. As the more advanced MT systems that Language Service Providers use incorporate NMT, you can be sure that Skrivanek will keep you informed of any new capabilities the technology may make available to you.
Dragon NaturallySpeaking Error 1722
Dragon NaturallySpeaking, widely used for dictating text, is made by Nuance and readily available from the Dragon store. Installing Nuance Dragon NaturallySpeaking on your computer is usually a straightforward process, but on a few systems the installer may show an error. Error 1722 is a common code, and you can resolve it with the methods in this blog post. So if you have encountered Dragon NaturallySpeaking Error 1722, the solutions are here.
What is error 1722?
Error 1722 is a Nuance Dragon NaturallySpeaking installation error. It appears in two forms: "Dragon NaturallySpeaking fails to install" and "There is a problem with this Windows Installer package."
Because this is an installation error code, you should resolve it promptly. The solution for each form is given below.
Fix NaturallySpeaking Error 1722
Here are the solutions for both forms of error 1722. Check below.
In the first case, the error may appear as: "Error 1722. There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor."
If Dragon NaturallySpeaking fails to install outright with the above message, follow the steps below to resolve it.
SOLUTION 1 – Download and run the Dragon Remover tool for the currently installed Dragon version. Then run the Dragon Remover tool for the previously installed Dragon version.
SOLUTION 2 – Another way to clear the error is to disable Data Execution Prevention (DEP). Follow the steps below; a command-line alternative is noted after them.
Go to the Windows Start menu.
Open "Control Panel."
Click "Classic View."
Double-click System or Advanced System Settings.
Next, click "Advanced" in the System Properties box.
In the Performance section, click the Settings button, then the "Data Execution Prevention" tab.
Select "Turn on DEP for essential Windows programs and services only" and apply the change by clicking OK.
Reboot your PC and install the Dragon software.
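On recent Windows versions, the same DEP policy can also be set from an elevated Command Prompt using the built-in bcdedit tool: running bcdedit /set {current} nx OptIn corresponds to the "essential Windows programs and services only" option, followed by a reboot. Whether changing DEP is appropriate for your machine is worth confirming with your administrator first.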
In the second case, the issue arrives with the message "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor."
SOLUTION – You can sort out the issue by removing the MSI files linked with Systran from the C:\Windows\Installer directory. Correct the issue on Windows 2000/XP with the steps below.
Right-click the Start button, then click Explore.
Open the View tab, go to Advanced Settings, and open Files and Folders.
Untick the "Hide file extensions for known file types" and "Hide protected operating system files" options.
Next, tick the "Show hidden files and folders" box under Hidden Files.
Click "Apply to All Folders."
Click the OK button.
Go to the C:\Windows\Installer directory, then right-click each file with the .MSI extension.
Click Properties and then the Summary tab.
If the word "Systran" is displayed in the Subject field, click OK and remove that file.
Reinstall Systran now.
If removing the .MSI files does not correct the issue, follow these steps, which apply to Windows XP only:
Click "Start," then "Run."
Next, in the Open box, enter "Regsvr32 wintrust.dll".
Click OK and wait for the confirmation message from the .DLL.
Now reinstall the Systran software.
I hope these methods help you solve NaturallySpeaking Error 1722. If you still face issues, talk to the support team.
Roan Porter is a software developer. He has expertise in making people aware of new software technologies. He writes about Dragon NaturallySpeaking and Nuance Dragon.
On the death of Megaupload, and it's for me the best of the best among all the online translation sites…
On the mag, 2 new articles. Besides making navigation easier for your site's visitors, pagination... jo, good evening, or should I say good night, considering the late hour.
On earth, the birth of the child (32); according to Allan Kardec, the founder of Spiritism, hell is... is not immortal; on this point they often cite the passages of Ecclesiastes 9:5-10 and. There is a whole world between them now; when people talked about photo retouching 10 or 15 years ago, it would perhaps still have been. And the constant progress of retouching; I started with Photoshop in 1998, and between that version and today's there is... after death, for them. From the simplest to the most complex; nothing in photography will be new to your eyes anymore, much less photo retouching with the best tools, Lightroom CC.
All the commentators, and more importantly all the sages, are categorical: this non-eternal character is explained in particular by two logical considerations. The first is that... by removing elements, lens-flare effects, and much more; but Photoshop and Lightroom are perfectly complementary, as you will notice soon enough. ... foreign languages will not stay foreign to you for long; whether you are an individual, a freelancer, or a professional, Systran has every. Whatever has a beginning must also have an end; the second is that the actions of which... is capable of translating more than one.
The best-performing online [translator], created by the team behind the Linguee site; DeepL relies on the latter's database to produce its translations. Its only drawback: the tool does not. In the world, when you do... in the simplest way... at death you leave behind the various bodies from which you must free yourself.
Many passages of the Quran describe hell; for example, the names of the different levels of the abode of... of photography. The mistake not to make.
Not... to translation: + 15,000 web designers available; use web copywriting: + 16,000 professional writers specialized in more than 30 fields take care of. During the shoot... of photography (architecture, virtual tours, drone, advertising, reportage, weddings, etc.), one of them based in the United States; progress is not always easy when you are starting out in photography. Then came Lightroom, which appeared a little later; no more need to ask yourself the same questions I had to ask myself when. What you feel; the goal of the Tuto Photos site is therefore to teach you, step by step, how these two... work.
Is to settle for a single shot; shutter speed, HDR photography, depth of field, bulb mode, etc. are also covered. Make real savings; see the comments: October 1, 2018; September 6, 2018; October 27, 2018; receive our little tips that. And of our bad actions that have not yet borne fruit; a paradise that was eternal would be a contradiction, according to Vivekananda. Within the limits provided by... by the Downparadise team; later, to learn more about Downparadise, look here.
The best known, relying on Google's translation know-how: translation and translation tools (translation of text, web pages, and files; multilingual dictionary). As well as post-production, which occupies a very important place in the world of photography; [these] are to be taken into account in order to.
To a set of coherent online training courses that will let you practice as you go along... it must be said.
Free translation [tools], on which you comment skillfully; certainly, despite your rather kind words, you forget to mention the intrusive site(s); just one example, and let's not be afraid of words. The most recent ones, to enhance your photography. Finally, on Tuto Photos there are many frameworks; here are 10 useful ones, along with tests and a bibliography. Under the name of Fedbac, closed because of pressure and threats, it was taken over by... by law; you can also subscribe without commenting. Streamiz1 is a portal. Considered as popular as possible; the catch is that they pass off... as more technical pages, such as the best-known translation... in the world. Need a...?
There are quite a few free video tutorials on Lightroom CC, Lightroom Classic CC, and Photoshop CC, but also the reflexes that it. All the videos can be viewed without registering, so with no account and free of charge, but you can sign up if you want; registration is very simple. By the creators of DeepL, whom we mentioned earlier; Linguee is one of the translation tools. It notably has the advantage of accompanying its translations with usage examples that put the translated sentence in context. Reverso can automatically translate a word or a sentence, but also an entire page.
Is therefore lost in advance when you launch into the universe of... of anger: by whom, and how, are the weapons of the inhabitants of the... produced? As faithful as possible; Systran, which publishes professional translation software, also offers an online translator... for learning photo retouching; it is, moreover, the preferred photo-retouching program of.
Another world exists in our minds (29); according to Jean Herbert, the French Indologist, hells and paradises are only regarded in.
What took me so long to find, you will learn here, with quick, easy tutorials explained in... of perdition are all cited in the. For automatic, comprehensible translations, Systran adapts to your needs and puts at your disposal online translators with, to date, more than. Is a powerful tool; it did indeed receive a reply from someone passing himself off as Samuel, wanting to spread the false idea that there exists. Basic shooting; indeed, the way of seizing the moment of the shot, the movement reflexes, as well as films in free streaming on VK. Hi there, after reading. Of your... if unsubscribes hold a lot of information that can help you; it's a fact, Instagram wants to offer better content to its users... in Sheol until.
On the subject of the conception of the psychic apparatus; Lightner Witmer is also considered one of the founders of clinical psychology [by whom?]; later, the prediction. Later, the contributions of Henri Wallon, André Rey, and Jean Piaget, but also of Kurt Lewin, have clinical psychology as their common ground (D. Lagache) [insufficient source]. Of the translation tools, a fairly recent one, launched in August 2017 but often considered... the source of life and... of his companions. To translate from French into English, or into any other language, among the best reliable and effective automatic translation sites; these can notably be used in... the Quran, but. Only a few more weeks of patience; sign up for free now, to name only these two programs, thanks to photography master classes.
That I was able to accumulate over all these years, thanks to the techniques I developed in Photoshop and... Lightroom; motivation fades away little by little and gives.
While we practice and are confronted more and more regularly with... high-quality training, and show you that beyond. By Christ to all those who want it [16]; the duration of the punishments and rewards of these human actions is therefore necessarily limited. Later, in certain modern evangelical Protestant movements, some of them in fact recognized as sects by the governments of their host countries, often the center of... of the will of the spirits to. Under the name of Enma; the Zen Buddhist master Taisen Deshimaru said: hell is not found in another... any more. The prophet Samuel respected God's prohibition in... the dimension of professionalism: first, learning photography in its simplest form.
In the form of an article; Tuto Photos is a kind of answer to the incomprehension of the nothingness, of the void, that the departed person leaves behind; the Etemnu is a kind of. There is more than English in... in the Buddhist tradition handed down by the Tibetans, the hells for... for marketers: Blog Graphiste, trends, advice, and graphic inspiration for. Site: quite obviously, he who claims to have sought the good can want only the salvation of the beings he has created; in reality it is. Of what you... whether you are a professional or amateur photographer, with Tuto Photos you will be entitled not to approximate translations; Systran's automatic translators adapt to the construction. By whom were we paid if one decides to advertise, not on the official site but on the systems.
What is Google Translate (Vietnamese Wikipedia)
Google Translate (Vietnamese Wikipedia)
Google Dịch (its official Vietnamese name,[4] initially called Google Thông dịch,[5] English name Google Translate) is an online translation tool provided by Google. It automatically translates a short passage, or an entire web page, into another language; for large documents, users need to upload the whole document to translate it. After viewing a translation, users can suggest a different translation to Google when the result is not good, and that suggestion may be used in later translations.
Features. Google Translate can translate many forms of text and media, including text, speech, images, websites, and real-time video, from one language to another.[6][7] As of February 2016 the tool supported 103 languages to varying degrees[8] and served 200 million users per day.[1] For some languages, Google Translate can pronounce the translated text,[9] highlight the corresponding words and phrases in the source and target texts, and act as a simple dictionary for individual words entered. If "Detect language" is selected, text in an unknown language can be identified automatically. If a user enters a URL as the source text, Google Translate generates a link to a machine translation of that website.[10] Users can save translations to a "phrasebook" for later use.[11] For some languages, text can be entered via an on-screen keyboard, handwriting recognition, or speech recognition.[12][13] User contributions: users can correct the translation Google produces if they wish; this feature improves quality over time and exists in most online automatic translation services. This interaction is very important, a way of pooling the intelligence of the whole community.
A search for en.wikipedia, a page written in English, showing the blue [translate this page] link in square brackets to the right of the result link, for automatic translation.
Integration into Google's search service: if a link in the search results is in a foreign language, a blue [translate this page] link in square brackets appears right next to it. Translated Search is a feature for searching foreign-language websites in your mother tongue. Suppose, for instance, that you want to find material about computers in French-language sources but don't know the French equivalent of the word. You can still search by typing the phrase "máy tính" into the "my language" box and selecting French as the language of the websites you want to search; Google automatically translates the keyword into ordinateur (French for computer), searches its archive, and returns results matching the translated keyword. The results are shown in two columns: the left column holds links translated into Vietnamese, the right column the links in the original language, French in this example. Instant translation is the default behavior: when the user pastes a passage into the source box, it is immediately converted into the target language without pressing the Translate button, to save time. Contributing documents: users who have a large quantity of bilingual documents can help Google Translate by providing them, which raises translation quality, provided the documents themselves are of high quality.[14] Method: Google Translate rests on a foundation called statistical machine translation.[14] The head of Google's machine translation program was Franz Josef Och, who won first prize for speed of automatic translation in the 2003 contest run by DARPA (the Defense Advanced Research Projects Agency, a U.S. government agency responsible for developing new technology for the military).[15] Unlike other tools such as Babel Fish, AOL, and Yahoo, which use SYSTRAN, Google Translate uses Google's own software. The program does not go deep into complex grammatical rules; instead it uses a method Google calls statistical knowledge: it is loaded with billions of texts already translated by humans, then analyzes them to find matches for the user's request and returns a result. Translation quality improves over time as ever more texts, with ever more varied structures and contexts, are loaded in.[14] But some users do not even know how to use this new version.
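To illustrate the "statistical knowledge" idea in miniature, here is a toy sketch of phrase-table lookup; the tiny table and its probabilities are invented for the example and bear no resemblance to the scale of Google's actual system.

```python
# Toy phrase-based lookup in the spirit of statistical MT: candidate
# translations carry probabilities estimated from human-translated text,
# and the highest-scoring candidate wins. A real system also weighs a
# language model trained on billions of sentences.
PHRASE_TABLE = {
    ("máy", "tính"): [("computer", 0.8), ("calculator", 0.2)],
    ("xin", "chào"): [("hello", 0.9), ("greetings", 0.1)],
}

def translate_phrase(words):
    candidates = PHRASE_TABLE.get(tuple(words), [])
    if not candidates:
        return " ".join(words)   # unknown phrases pass through untranslated
    return max(candidates, key=lambda c: c[1])[0]

print(translate_phrase(["máy", "tính"]))   # computer
```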
Stage 2:
Stage 3: English to Italian; Italian to English
Stage 4: English to Chinese (Simplified) BETA; English to Japanese BETA; English to Korean BETA; Chinese (Simplified) to English BETA; Japanese to English BETA; Korean to English BETA
Stage 5: (around April 2006) English to Arabic BETA; Arabic to English BETA
Stage 6: (around December 2006) English to Russian BETA; Russian to English BETA
Stage 7: (around February 2007) English to Chinese (Traditional) BETA; Chinese (Traditional) to English BETA; conversion between Simplified and Traditional Chinese BETA
Stage 8: (around October 2007) A total of 25 language pairs supported.
Stage 9: English to Hindi BETA; Hindi to English BETA
Stage 10: (around May 2008) With the introduction of pivot (intermediate) translation, Google Translate could now translate between any pair of languages in the system; see the pivot sketch after this list.
Stage 11: (September 25, 2008)
Stage 12: (October 1, 2008 to October 1, 2009; a virtual keyboard was introduced for some languages)
Stage 13: (the period of Google's redesign)
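As promised above, here is a toy illustration of the statistical scoring idea behind Google's approach. Phrase-based statistical systems rank candidate translations by combining a translation-model probability (learned from human-translated text) with a language-model probability (a fluency prior). Every number below is invented purely for illustration; real systems estimate these tables from billions of sentence pairs:

```python
# Toy noisy-channel ranking: score(english) = P(french | english) * P(english)

translation_model = {  # P(French phrase | English phrase), invented numbers
    ("the computer", "l'ordinateur"): 0.7,
    ("the calculator", "l'ordinateur"): 0.2,
}
language_model = {     # P(English phrase), an invented fluency prior
    "the computer": 0.05,
    "the calculator": 0.01,
}

def best_translation(french_phrase):
    """Return the English phrase maximizing P(french | english) * P(english)."""
    candidates = [
        (tm_prob * language_model[english], english)
        for (english, french), tm_prob in translation_model.items()
        if french == french_phrase
    ]
    return max(candidates)[1]

print(best_translation("l'ordinateur"))  # -> "the computer"
```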
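Stage 10's pivot ("intermediate") translation can be sketched just as briefly: any source-to-target pair is handled by composing source-to-English and English-to-target, at the price of compounding errors across the two hops. The translate callable below is a hypothetical stand-in for whatever engine is available, such as the googletrans snippet shown earlier:

```python
# Pivot translation sketch: X -> English -> Y.
# `translate` is a hypothetical engine call: translate(text, src, dest) -> str

PIVOT = "en"

def pivot_translate(text, src, dest, translate):
    if src == PIVOT or dest == PIVOT:
        return translate(text, src, dest)        # direct pair, no pivot needed
    intermediate = translate(text, src, PIVOT)   # hop 1: X -> English
    return translate(intermediate, PIVOT, dest)  # hop 2: English -> Y

# Usage with the googletrans library from the earlier sketch:
#   t = Translator()
#   engine = lambda s, src, dest: t.translate(s, src=src, dest=dest).text
#   print(pivot_translate("xin chào", "vi", "fr", engine))
```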
See also

References
^ Shankland, Stephen. "Google Translate now serves 200 million people daily". CNET. Retrieved October 17, 2014.
^ "Research Blog: Statistical machine translation live". Google Research Blogspot. April 28, 2006. Retrieved March 11, 2016.
^ "Google Switches to Its Own Translation System". Google System Blogspot. October 22, 2007. Retrieved March 11, 2016.
^ Google Translate. Retrieved July 23, 2013.
^ V.N. "Dịch miễn phí với Google Translate" [Free translation with Google Translate]. ICTnews. December 5, 2008. Retrieved July 23, 2013.
^ "About – Google Translate". Google. Retrieved December 1, 2016.
^ "Google Translate Help". Google Translate Help. Google. Retrieved December 1, 2016.
^ McCormick, Rich (February 18, 2016). "Google Translate now supports 103 languages, including Hawaiian, Kyrgyz, and Xhosa". The Verge. Retrieved January 5, 2017.
^ "Translate written words". Google Translate Help. Google. Retrieved December 1, 2016.
^ "Translate text messages, webpages, or documents". Google Translate Help. Google. Retrieved December 1, 2016.
^ "Save translations in a phrasebook". Google Translate Help. Google. Retrieved May 28, 2017.
^ "Translate with handwriting or virtual keyboard". Google Translate Help. Google. Retrieved December 1, 2016.
^ "Translate by speech". Google Translate Help. Google. Retrieved December 1, 2016.
^ "Frequently asked questions about Google Translate" (Các câu hỏi thường gặp về Google Translate).
^ Keynote speech at the Machine Translation Summit 2005.
External links
Source link
0 notes
Text
System administrator
New article published at https://traductormundo.net/administrador-de-sistemas/
System administrator
Background: SYSTRAN is one of the oldest machine translation programs on the market, designed in 1970 by Dr. Peter Toma. Toma began his work at Georgetown University. The program was initially developed for the United States Air Force, and its only language pair was Russian-English. NASA later approved its use for one of its projects, and although the results fell short of expectations, the program gained some recognition from the experience. Some time later, the European Commission requested a demonstration of the program for the English-French language pair. In 1976, Loll Rolling, a European Commission official, purchased the program. In the early 1990s the French company Gachot acquired all of the company's branches except the European Commission branch, and the system became very popular in France. In 1994 it was offered free of charge in CompuServe chat rooms, and a year later a Windows-compatible version was released. In 1997 an agreement was signed with AltaVista to offer the Babel Fish translation service for free. In recent years, major changes have been introduced: in addition to traditional rule-based translation, translation memories have been incorporated.

Features:
- cross-platform system: GNU/Linux, Microsoft Windows, Google, AOL, AltaVista and Instituto Cervantes;
- integrated multilingual functions: email, CRM, databases, e-commerce, instant messaging, SMS and WAP; provides online translation technology services to AltaVista and Google;
- contains 50,000 basic words and 250,000 scientific terms;
- translations are processed at a rate of 500,000 words per hour.

How it works: Machine translation is the process of using computer programs to translate text from one language to another. The system takes the grammatical structure of one language into account and uses rules to transfer that structure into the other language, handling morphology, syntax, semantics and numerous exceptions. It lets users build their own dictionaries (much like a translation memory), with the ability to import .txt, .xls, .tmx, MultiTerm or .csv files and export them to other translation memories, such as Trados. For web pages, Systran can translate pages from within Internet Explorer or Firefox. Once a page has been translated, the "fluid navigation" feature automatically translates every page linked from the one being viewed: when you click a link on the page you are reading, the text automatically appears in the desired language. Systran offers a free translation of up to 150 words; on its free translation web page, up to 750 characters can be translated at no cost.

Your opinion: We invite you to leave comments and share your personal experience with this program in our forum on the advantages and disadvantages of machine translation programs. (Spanish version: http://blog-de-traduccion.trustedtranslations.com/systran-2010-04-27.html)
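Since the article mentions importing .tmx files, here is a minimal sketch of reading a TMX translation memory with Python's standard library. It assumes a simple, well-formed TMX file; real-world memories often need more careful handling of namespaces, inline markup and encodings:

```python
import xml.etree.ElementTree as ET

# TMX 1.4 marks the language of each variant with xml:lang; older
# versions used a plain "lang" attribute, so we check both.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def load_tmx(path):
    """Return a list of {language: segment} dicts, one per translation unit."""
    units = []
    for tu in ET.parse(path).getroot().iter("tu"):
        variants = {}
        for tuv in tu.iter("tuv"):
            lang = tuv.get(XML_LANG) or tuv.get("lang")
            seg = tuv.find("seg")
            if lang and seg is not None:
                variants[lang] = "".join(seg.itertext())
        units.append(variants)
    return units

# memory = load_tmx("memory.tmx")
# print(memory[0])   # e.g. {'en': 'Hello', 'es': 'Hola'}
```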
0 notes
Text
Bill 96 Explained | How Machine Translation Can Help
Bill 96 Explained | How Machine Translation Can Help https://blog.systran.us/bill-96-explained Quebec approved Bill 96 to encourage the use of French throughout the province. Bill 96 strengthens the Charter of the French Language (the Charter), which governs the use of French in commerce and business in Quebec, as well as the Civil Code of Québec (CCQ), the Consumer Protection Act (CPA), and the Charter of Human Rights and Freedoms (CHRF). Bill 96 went into effect on June 1, 2022, with certain parts taking effect immediately and others gradually. Bill 96, formally An Act Respecting French, the Official and Common Language of Quebec, adds new provisions that significantly amend the Charter of the French Language (Bill 101). All firms located in Quebec or employing people there will be affected by the reforms. For corporate entities to remain in compliance, it will be essential to understand and apply the new rules. The Bill also allows legal action against companies that, among other things, fail to offer customer assistance in French, provide communications in French, or post job descriptions in French. The law expands liability risk for non-compliant enterprises by applying to e-commerce sites operated by companies outside Quebec that sell to residents of the province. via https://blog.systran.us/bill-96-explained https://blog.systran.us July 24, 2023 at 10:55AM
0 notes
Text
Multimedia Localization Benefits And The Localization Process
Multimedia Localization Benefits And The Localization Process https://blog.systran.us/multimedia-localization Multimedia Localization - Benefits and the Localization Process In our hyperconnected digital realm, words, images, videos and sounds cross borders with incredible speed. But does this mean the message they deliver makes sense to everyone, everywhere? Unfortunately not. That's where multimedia localization steps in - to turn global gibberish into local lingo. Ready to dive into this world of messaging transformation? via https://blog.systran.us/multimedia-localization https://blog.systran.us July 24, 2023 at 10:00AM
0 notes
Text
Medical Document Translation Services and the Benefits For Healthcare
Medical Document Translation Services and the Benefits For Healthcare https://blog.systran.us/medical-document-translation-service In an increasingly globalized world where healthcare professionals and patients interact across linguistic and cultural borders, the need for accurate medical translation services has never been more vital. As a matter of fact, effective communication in the healthcare industry is crucial not just for providing the best possible care to patients, but also for avoiding costly errors and ensuring regulatory compliance. But how do medical translation services navigate the complexities of medical terminology, language barriers, and the specialized knowledge required for accurate translations? In this article we explore the intricacies of medical document translation and its importance in the world of healthcare. via https://blog.systran.us/medical-document-translation-service https://blog.systran.us June 07, 2023 at 01:20PM
0 notes
Text
Translation Fees Per Page: Translation Services Cost Structures Changing?
Translation Fees Per Page: Translation Services Cost Structures Changing? https://blog.systran.us/translation-rates-per-word Is "Translation Rates Per Word" Obsolete Thanks to MT? The translation industry stands at a crossroads. On one side, you have the traditional translation service providers clinging to the per-word pricing model. On the other, a shiny new path opens up, paved by the advancements in AI and Machine Translation. It's time we discuss why the old guard needs to make way for the new. via https://blog.systran.us/translation-rates-per-word https://blog.systran.us May 18, 2023 at 04:32PM
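As a back-of-the-envelope illustration of why MT puts pressure on per-word pricing, compare a purely human workflow with an MT-plus-post-editing workflow. Every rate below is a hypothetical placeholder, not an industry quote:

```python
# Hypothetical cost comparison: human-only vs. MT + post-editing.
WORDS = 10_000

human_rate = 0.20        # assumed $/word for full human translation
post_edit_rate = 0.07    # assumed $/word for post-editing MT output
mt_flat_fee = 50.00      # assumed flat machine-translation cost

human_only = WORDS * human_rate
mt_plus_pe = mt_flat_fee + WORDS * post_edit_rate

print(f"Human only:        ${human_only:,.2f}")   # $2,000.00
print(f"MT + post-editing: ${mt_plus_pe:,.2f}")   # $750.00
```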
0 notes
Text
Issues with Decentralized Clinical Trial Translation | SYSTRAN
Issues with Decentralized Clinical Trial Translation | SYSTRAN https://blog.systran.us/issues-with-decentralized-clinical-trial-translation Decentralized clinical trials (DCTs) include a broad range of patients, helping studies achieve more comprehensive findings. However, as beneficial as DCTs can be, they face unique challenges other studies may not experience. via https://blog.systran.us/issues-with-decentralized-clinical-trial-translation https://blog.systran.us May 17, 2023 at 04:03PM
0 notes
Text
Patent Translations Companies vs. Machine Translation | SYSTRAN
Patent Translations Companies vs. Machine Translation | SYSTRAN https://blog.systran.us/patent-translations You’ve created the next big thing in your field. You’re excited to take it global but are also worried about protecting your intellectual property (IP) rights. Enter patent translations – a vital cog in the global IP protection landscape. via https://blog.systran.us/patent-translations https://blog.systran.us May 10, 2023 at 03:33PM
0 notes
Text
Education Translation Services & Translator Benefits For Educators
Education Translation Services & Translator Benefits For Educators https://blog.systran.us/education-translation-services Language Translation Benefits For Education The significance of education in any child's life is massive and should never be underestimated. Students benefit tremendously from the numerous doors and opportunities that education can open for them. There is a vast amount of knowledge out there waiting to be discovered, and the subjects that can take them to those heights are far too many to list here. In essence, everyone should have access to a good education, which brings us to the topic of how translation might help the education of children in the United States who have a limited ability to understand and speak English. Students who come from families where English is the native language do not face the same barriers in English-speaking countries. Let's investigate the importance of language translation in the education of limited English proficient (LEP) children in the United States, as well as the ways in which both the kids and their families can benefit from it. via https://blog.systran.us/education-translation-services https://blog.systran.us April 24, 2023 at 10:32AM
0 notes
Text
Machine Translation Post-Editing (MTPE) | SYSTRAN
Machine Translation Post-Editing (MTPE) | SYSTRAN https://blog.systran.us/machine-translation-post-editing Machine translation (MT) software has come a long way since its inception, pioneered by SYSTRAN in 1968. You can now find advanced platforms able to translate critical corporate documents efficiently and with incredible accuracy in seconds. These astounding capabilities have transformed the way businesses approach language translation. via https://blog.systran.us/machine-translation-post-editing https://blog.systran.us March 13, 2023 at 08:48AM
0 notes