#soundstream
Text
Soundstream - Come To The Ghetto (1981) [FLAC]
- Contact me for digital exchange!
#Rare Afro Exchange#Funk#Disco#Reggae#Boogie#Highlife#Music#West Africa#South Africa#Nigeria#Ghana#Ivory Coast#Togo#Benin#Cameroon#Antilles#Soundstream - Come To The Ghetto (1981) [FLAC]
Text
While I wait for Living Presence....
As I have this rather good audio system, I should listen to it. Xmas is coming and I needed something other than another loop of Michael Bublé Xmas songs.
So Fleetwood Mac. Oh, not just any album. TUSK. It is idiosyncratic. I recall being disappointed when I got it in the late 70s, or was it the early 80s. Whatever, the disks (2) are over 40 years old. And I enjoyed this listen. It has been years since the last, and I did not recall how good the recording was. It is good.
Very few notes on the production beyond it being the most expensive album ever made to date. Apparently an early Soundstream digital recording, so low-noise, high-dynamic-range stuff. For the first time I heard the little things they hid in the mix. I think I last listened three preamps ago. I was pulling stuff out of the vinyl that surprised me. Weird guitar licks and Mr. Buckingham pretending he was a genius songwriter. Sorry, the girls did that job, but he could play.
But the songs were all better fully resolved. Nothing hiding. Those little sotto voce bits from Mr. Fleetwood, "real savage like." Apparently the group founder Peter Green played on one song, and he is different. Damn, what would it have been like if he hadn't done so much acid? So many little secrets.
So even though it is not an audiophile album, it is worth giving it a close listen. I enjoyed reacquainting myself with it. Oh, there are some semi-hit songs, but hey, retrospective is a way of looking at things too.
Text
Ray Juss Carnival Mix
Tracklisting
Artist - Title
TY - Wait A Minute
Divorce From New York - What's Jazz
Crackazat - Its All Different
Angie Lee - Whats Your Name? (MJ Cole Master Mix)
Sunshine Bros Vs MC Ward - I Need Someone (Just Like You)
Alan Brax - Intro
Kyle Hall - Crushed!
4Hero - Starchaser
Sbtrkt - Jamlock
Very Rich - Line Weight
Galcher Lustwerk - I Neva Seen
Lovebirds - Want You In My Soul
Wheelup - Copacetic
Soundstream - Inferno
Restless Soul - Namby It Aint
Kyle Hall - Postcard To Another Planet
Dub Syndicate Productions - All This Love
Trilogy Inc - Awakening
Solid Groove - Flookin'
Sunship - City Life
#london#vinyl#music#beats#records#dj#house#soundcloud#grime#food#brokenbeat#uk garage#youtube#SoundCloud
Text
GAUTE GRANLI & LIONEL FERNANDEZ + PAULA SANCHEZ + SISSY-MARICÓN TRICOT DE TÊTE + NOISETTE
Le Non_Jazz
TUESDAY 16.01
GAUTE GRANLI & LIONEL FERNANDEZ no / fr PAULA SANCHEZ / ar SISSY-MARICÓN TRICOT DE TÊTE au-tas / fr NOISETTE / fr
Les Nautes 1 Quai des Célestins 75004 M° Sully-Morland
20:00 doors
20:45 action!
GAUTE GRANLI & LIONEL FERNANDEZ no / fr
Two of Non_Jazz's favourite (very rare) guitar heroes will cross the necks of their totemic instruments, to our great delight. The two men had already crossed paths here and there, on one tour or another - one with his solo project, the other with a band (pell-mell, and I'll let you untangle it, with / in RUMP STATE / GG / FIRMAET FORVOKSEN / CONTUMACE / SISTER IODINE / IBIZA DEATH...) - and evidently came to appreciate each other, and each other's respective "sound worlds", enough to try joining forces in this brand-new duo for the present tour.
https://label-cave12.bandcamp.com/.../gaute-granli-contumace
PAULA SANCHEZ / ar
Cellist working at the intersection of free improvisation / experimental music / art / performance. Composition / decomposition + destruction / construction. To "augment" her instrument, she uses so-called extended techniques (e.g. bits of plastic / glass / plant matter...), and does not shy away from adding electronics or even... voice.
Numerous projects and collaborations (with Fred Frith, Mariana Carvalho, Violeta Garcia, Kevin Sommer...)
"Since 2018, Paula has been active in contemporary music and improvisation circles in Europe, participating in numerous concerts and developing her work as a composer and performer. She stands out for her use of extended techniques and unconventional ways of playing the cello, working with materials like plastic, glass and elements from nature, combined with voice and electronics. Hers is the pure presence of an embodied sound which invents its relationships as it makes its way into nothingness."
SISSY-MARICÓN TRICOT DE TÊTE au-tas / fr
A brand-new duo? Or: a more or less brand-new duo of two restless devotees of the off-kilter, Julia Drouhin and Blenno dWB.
Dr Julia Drouhin is a French Australian artist and curator based in Tasmania-lutruwita. She explores embodiment of invisible soundstreams that reveal friction in sociality and shift usual modes of transmission through radioscapes, installations and collaborative performances. Her work using field recordings, electromagnetic frequencies as well as textiles, edible and found objects has been presented in Europe, Hong Kong, Brazil, South Africa and Australia, as well as broadcast on terrestrial airwaves and online radios*.
Blenno die WurstBrücke = viral information. Almost music. Transmission... but chaos, but garbled information. An Arte Povera chaos, or non-art.
An army of home-made instruments (megaphones, magnetic tape, toys...) pushed to the far end of randomness. Sounds and volumes turned into collages. Sounds and shapes merge into a performance that is noise-based but not brutalist: tinkered-with, unnerving, iconoclastic music. A sabbath of the aleatory, lasagnes of totally uncontrollable collages.
NOISETTE / fr
Across his various projects (this one, but also Glass Nest, Harmoni, Desktop Tunes...), Régis Lemberthe, a sound artist based in Berlin, uses a range of approaches and techniques to "sculpt sound" (electroaquatic experimental / coded sound-images / "bureaucratic noise" / ultramaximalist phygital explorations, "etc."). Noisette is one facet of this, built around his work with a no-input setup.
"I'll arrive with two mixers and the irresistible urge to make them do indecent things: noise, no-input, minimal, textured, and sparkling nicely. It's a project that has been with me for a few years, and that I developed by progressively simplifying and de-digitizing my earlier installations. It toured the Philippines and Indonesia this spring..."
Fly - Jo L'Indien
Text
What is VideoPoet? Unleash Your Creative Potential with Free Text-to-Video AI!
In the ever-evolving world of technology, Google's VideoPoet emerges as a game-changer in the realm of video generation. As a sophisticated Large Language Model (LLM), VideoPoet is not just a tool; it's a harbinger of a new era in visual storytelling.

The Innovation of VideoPoet

VideoPoet harnesses the power of LLMs to transform various inputs, such as text, images, and video clips, into high-quality videos. What sets it apart is its zero-shot learning capability, allowing it to produce dynamic, high-motion videos without extensive specialized training.

Understanding VideoPoet's Mechanism

At its core, VideoPoet relies on multiple tokenizers to process different modalities - video, image, audio, and text. Each tokenizer, such as MAGVIT V2 for video and SoundStream for audio, plays a crucial role in converting these signals into a language the model understands. This intricate process enables VideoPoet to blend various content forms seamlessly.
VideoPoet's Versatile Applications

From animating still images to applying unique styles to videos, VideoPoet's applications are vast. It can create videos that fill in missing elements or extend beyond their original scope, offering innovative solutions for content creation.

The Future of Visual Storytelling

VideoPoet is not just a technological marvel; it's a canvas for creativity. It opens up new avenues in fields like advertising, filmmaking, and digital content creation, where the boundaries of imagination are constantly being pushed.

The Technical Breakthrough of VideoPoet

Understanding VideoPoet's advanced mechanics offers a glimpse into its extraordinary capabilities. The platform utilizes state-of-the-art tokenizers for each modality it processes. For instance, the MAGVIT V2 tokenizer intricately handles video and images, capturing both spatial and temporal information. This precision is crucial in creating fluid, lifelike videos from static inputs. Similarly, the SoundStream tokenizer revolutionizes audio processing with its nuanced understanding of sound patterns, making the audio-video synchronization in VideoPoet remarkably realistic.

Expanding Creative Horizons

VideoPoet is not just a tool for creating content; it's a catalyst for creative exploration. Its ability to animate images, style videos, and even repair or expand existing videos opens up a world of possibilities for content creators. Imagine transforming a simple sketch into a full-fledged animated story or restyling a classic film scene into a modern art piece. VideoPoet makes these imaginative scenarios possible.

Empowering Content Creators and Marketers

In the realm of marketing and content creation, VideoPoet is a game-changer. It offers brands and creators a powerful way to convey their messages more engagingly and memorably. Whether it's for creating compelling advertisements, enhancing social media content, or producing educational materials, VideoPoet provides a platform that amplifies creativity and effectiveness.
Examples that would blow your mind
Text to video
Text prompt: Two pandas playing cards
Image to video with text prompts
Text prompts accompanying the images (from left):
1. A ship navigating the rough seas, thunderstorm and lightning, animated oil on canvas
2. Flying through a nebula with many twinkling stars
3. A wanderer on a cliff with a cane looking down at the swirling sea fog below on a windy day
Image (left) and video generated (immediate right)
Credit: Google

Zero-shot video stylization

VideoPoet can also alter an existing video, using text prompts. In the examples below, the left video is the original and the one right next to it is the stylized video. From left: Wombat wearing sunglasses holding a beach ball on a sunny beach; teddy bears ice skating on a crystal clear frozen lake; a metal lion roaring in the light of a forge.
Credit: Google

Video to audio

The researchers first generated 2-second video clips, and VideoPoet predicted the audio without any help from text prompts. VideoPoet can also create a short film by compiling several short clips. First, the researchers asked Bard, Google's ChatGPT rival, to write a short screenplay with prompts. They then generated video from the prompts and put everything together to produce the short film.

Longer videos, editing and camera motion

Google said VideoPoet can overcome the problem of generating longer videos by conditioning on the last second of a video to predict the next second. "By chaining this repeatedly, we show that the model can not only extend the video well but also faithfully preserve the appearance of all objects even over several iterations," they wrote. VideoPoet can also take existing videos and change how the objects in them move. For example, a video of the Mona Lisa is prompted to yawn.
Credit: Google

Text prompts can also be used to change camera angles in existing images. For example, this prompt created the first image: Adventure game concept art of a sunrise over a snowy mountain by a crystal clear river. Then the following prompts were added, from left to right: Zoom out, Dolly zoom, Pan left, Arc shot, Crane shot, and FPV drone shot.
Ethical and Societal Implications

As with any advanced technology, VideoPoet comes with its set of ethical considerations. The ease of creating realistic videos raises questions about authenticity and the potential for misuse. It's crucial for users and developers alike to navigate these challenges responsibly, ensuring that this powerful tool is used for positive and ethical purposes.

Looking to the Future

VideoPoet is not just a current marvel; it's a stepping stone to the future of digital storytelling. As AI continues to evolve, we can expect even more sophisticated and intuitive tools that further blur the lines between reality and digital creation. VideoPoet is leading the way, showing us a glimpse of the potential that AI holds in transforming how we see, interpret, and create our narratives.

In conclusion, Google's VideoPoet stands as a testament to the incredible advancements in AI and machine learning. It's a tool that not only enhances the way we produce and consume video content but also challenges us to rethink the boundaries of creativity and technology. As we move forward, VideoPoet will undoubtedly continue to inspire and revolutionize the landscape of visual storytelling.
Text
VideoPoet: A large language model for zero-shot video generation
New Post has been published on https://thedigitalinsider.com/videopoet-a-large-language-model-for-zero-shot-video-generation/
Posted by Dan Kondratyuk and David Ross, Software Engineers, Google Research
A recent wave of video generation models has burst onto the scene, in many cases showcasing stunning picturesque quality. One of the current bottlenecks in video generation is in the ability to produce coherent large motions. In many cases, even the current leading models either generate small motion or, when producing larger motions, exhibit noticeable artifacts.
To explore the application of language models in video generation, we introduce VideoPoet, a large language model (LLM) that is capable of a wide variety of video generation tasks, including text-to-video, image-to-video, video stylization, video inpainting and outpainting, and video-to-audio. One notable observation is that the leading video generation models are almost exclusively diffusion-based (for one example, see Imagen Video). On the other hand, LLMs are widely recognized as the de facto standard due to their exceptional learning capabilities across various modalities, including language, code, and audio (e.g., AudioPaLM). In contrast to alternative models in this space, our approach seamlessly integrates many video generation capabilities within a single LLM, rather than relying on separately trained components that specialize on each task.
To view more examples in original quality, see the website demo.
Overview
The diagram below illustrates VideoPoet’s capabilities. Input images can be animated to produce motion, and (optionally cropped or masked) video can be edited for inpainting or outpainting. For stylization, the model takes in a video representing the depth and optical flow, which represent the motion, and paints contents on top to produce the text-guided style.
An overview of VideoPoet, capable of multitasking on a variety of video-centric inputs and outputs. The LLM can optionally take text as input to guide generation for text-to-video, image-to-video, video-to-audio, stylization, and outpainting tasks. Resources used: Wikimedia Commons and DAVIS.
Language models as video generators
One key advantage of using LLMs for training is that one can reuse many of the scalable efficiency improvements that have been introduced in existing LLM training infrastructure. However, LLMs operate on discrete tokens, which can make video generation challenging. Fortunately, there exist video and audio tokenizers, which serve to encode video and audio clips as sequences of discrete tokens (i.e., integer indices), and which can also be converted back into the original representation.
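As a toy illustration of that encode/decode round trip, here is a nearest-neighbor vector quantizer with a random codebook. This is a stand-in only; the real MAGVIT V2 and SoundStream codecs learn their codebooks jointly with neural encoders and decoders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook: 256 entries of 8-dimensional feature vectors.
# A real neural codec learns these rather than drawing them at random.
codebook = rng.normal(size=(256, 8))

def tokenize(frames):
    """Map each frame vector to the index of its nearest codebook entry."""
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)  # a sequence of discrete integer tokens

def detokenize(tokens):
    """Look tokens back up in the codebook for an approximate reconstruction."""
    return codebook[tokens]

frames = rng.normal(size=(16, 8))  # 16 "frames" of continuous features
tokens = tokenize(frames)          # integers an LLM can model
recon = detokenize(tokens)         # back to a viewable representation
```

The point is only the shape of the interface: continuous signals in, integer indices out, and a decoder that maps the indices back.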
VideoPoet trains an autoregressive language model to learn across video, image, audio, and text modalities through the use of multiple tokenizers (MAGVIT V2 for video and image and SoundStream for audio). Once the model generates tokens conditioned on some context, these can be converted back into a viewable representation with the tokenizer decoders.
A detailed look at the VideoPoet task design, showing the training and inference inputs and outputs of various tasks. Modalities are converted to and from tokens using tokenizer encoder and decoders. Each modality is surrounded by boundary tokens, and a task token indicates the type of task to perform.
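The layout that the caption describes can be mocked up with made-up special-token IDs (these particular IDs, names, and the helper below are illustrative, not VideoPoet's actual vocabulary):

```python
# Hypothetical special tokens; the model's real vocabulary is internal.
TASK_TEXT_TO_VIDEO = 1
BOS_TEXT, EOS_TEXT = 2, 3
BOS_VIDEO, EOS_VIDEO = 4, 5

def build_sequence(task_token, text_tokens, video_tokens):
    """Concatenate a task token with boundary-wrapped modality segments."""
    return ([task_token]
            + [BOS_TEXT] + list(text_tokens) + [EOS_TEXT]
            + [BOS_VIDEO] + list(video_tokens) + [EOS_VIDEO])

seq = build_sequence(TASK_TEXT_TO_VIDEO, [101, 102], [900, 901, 902])
# seq -> [1, 2, 101, 102, 3, 4, 900, 901, 902, 5]
```

The task token up front tells the model which mapping to perform; the boundary tokens tell it where one modality ends and the next begins.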
Examples generated by VideoPoet
Some examples generated by our model are shown below.
Videos generated by VideoPoet from various text prompts. For specific text prompts refer to the website.
For text-to-video, video outputs are variable length and can apply a range of motions and styles depending on the text content. To ensure responsible practices, we reference artworks and styles in the public domain e.g., Van Gogh’s “Starry Night”.
Text Input (from left):
“A Raccoon dancing in Times Square”
“A horse galloping through Van-Gogh’s ‘Starry Night’”
“Two pandas playing cards”
“A large blob of exploding splashing rainbow paint, with an apple emerging, 8k”
Video Output: shown on the website.
For image-to-video, VideoPoet can take the input image and animate it with a prompt.
An example of image-to-video with text prompts to guide the motion. Each video is paired with an image to its left. Left: “A ship navigating the rough seas, thunderstorm and lightning, animated oil on canvas”. Middle: “Flying through a nebula with many twinkling stars”. Right: “A wanderer on a cliff with a cane looking down at the swirling sea fog below on a windy day”. Reference: Wikimedia Commons, public domain**.
For video stylization, we predict the optical flow and depth information before feeding into VideoPoet with some additional input text.
Examples of video stylization on top of VideoPoet text-to-video generated videos with text prompts, depth, and optical flow used as conditioning. The left video in each pair is the input video, the right is the stylized output. Left: “Wombat wearing sunglasses holding a beach ball on a sunny beach.” Middle: “Teddy bears ice skating on a crystal clear frozen lake.” Right: “A metal lion roaring in the light of a forge.”
VideoPoet is also capable of generating audio. Here we first generate 2-second clips from the model and then try to predict the audio without any text guidance. This enables generation of video and audio from a single model.
An example of video-to-audio, generating audio from a video example without any text input.
By default, the VideoPoet model generates videos in portrait orientation to tailor its output towards short-form content. To showcase its capabilities, we have produced a brief movie composed of many short clips generated by VideoPoet. For the script, we asked Bard to write a short story about a traveling raccoon with a scene-by-scene breakdown and a list of accompanying prompts. We then generated video clips for each prompt, and stitched together all resulting clips to produce the final video below.
When we developed VideoPoet, we noticed some nice properties of the model’s capabilities, which we highlight below.
Long video
We are able to generate longer videos simply by conditioning on the last 1 second of video and predicting the next 1 second. By chaining this repeatedly, we show that the model can not only extend the video well but also faithfully preserve the appearance of all objects even over several iterations.
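That chaining procedure is essentially a loop: take the last second of tokens, predict the next second, append, repeat. A sketch with a stub callable in place of the real model (the actual model call is not a public API):

```python
def extend_video(video_tokens, steps, tokens_per_second, next_second):
    """Grow a token sequence one 'second' at a time.

    next_second: callable mapping the last second of tokens to the
    predicted next second (stands in for the actual model).
    """
    tokens = list(video_tokens)
    for _ in range(steps):
        context = tokens[-tokens_per_second:]  # condition on the last 1 second
        tokens.extend(next_second(context))
    return tokens

# Stub "model": predicts each next token as the previous token + 1.
stub = lambda ctx: [t + 1 for t in ctx]
out = extend_video([0, 1, 2, 3], steps=3, tokens_per_second=4, next_second=stub)
assert len(out) == 4 + 3 * 4  # each step appends one second of tokens
```

Because each step sees only the most recent second, the context length stays constant no matter how long the video grows.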
Here are two examples of VideoPoet generating long video from text input:
Text Input (from left):
“An astronaut starts dancing on Mars. Colorful fireworks then explode in the background.”
“FPV footage of a very sharp elven city of stone in the jungle with a brilliant blue river, waterfall, and large steep vertical cliff faces.”
Video Output: shown on the website.
It is also possible to interactively edit existing video clips generated by VideoPoet. If we supply an input video, we can change the motion of objects to perform different actions. The object manipulation can be centered at the first frame or the middle frames, which allows for a high degree of editing control.
For example, we can randomly generate some clips from the input video and select the desired next clip.
An input video on the left is used as conditioning to generate four choices given the initial prompt: “Closeup of an adorable rusty broken-down steampunk robot covered in moss moist and budding vegetation, surrounded by tall grass”. For the first three outputs we show what would happen for unprompted motions. For the last video in the list below, we add to the prompt, “powering up with smoke in the background” to guide the action.
Image to video control
Similarly, we can apply motion to an input image to edit its contents towards the desired state, conditioned on a text prompt.
Animating a painting with different prompts. Left: “A woman turning to look at the camera.” Right: “A woman yawning.” **
Camera motion
We can also accurately control camera movements by appending the type of desired camera motion to the text prompt. As an example, we generated an image by our model with the prompt, “Adventure game concept art of a sunrise over a snowy mountain by a crystal clear river”. The examples below append the given text suffix to apply the desired motion.
Prompts from left to right: “Zoom out”, “Dolly zoom”, “Pan left”, “Arc shot”, “Crane shot”, “FPV drone shot”.
Evaluation results
We evaluate VideoPoet on text-to-video generation with a variety of benchmarks to compare the results to other approaches. To ensure a neutral evaluation, we ran all models on a wide variation of prompts without cherry-picking examples and asked people to rate their preferences. The figure below highlights the percentage of the time VideoPoet was chosen as the preferred option in green for the following questions.
Text fidelity
User preference ratings for text fidelity, i.e., what percentage of videos are preferred in terms of accurately following a prompt.
Motion interestingness
User preference ratings for motion interestingness, i.e., what percentage of videos are preferred in terms of producing interesting motion.
Based on the above, on average people selected 24–35% of examples from VideoPoet as following prompts better than a competing model vs. 8–11% for competing models. Raters also preferred 41–54% of examples from VideoPoet for more interesting motion than 11–21% for other models.
Conclusion
Through VideoPoet, we have demonstrated LLMs’ highly-competitive video generation quality across a wide variety of tasks, especially in producing interesting and high quality motions within videos. Our results suggest the promising potential of LLMs in the field of video generation. For future directions, our framework should be able to support “any-to-any” generation, e.g., extending to text-to-audio, audio-to-video, and video captioning should be possible, among many others.
To view more examples in original quality, see the website demo.
Acknowledgements
This research has been supported by a large body of contributors, including Dan Kondratyuk, Lijun Yu, Xiuye Gu, José Lezama, Jonathan Huang, Rachel Hornung, Hartwig Adam, Hassan Akbari, Yair Alon, Vighnesh Birodkar, Yong Cheng, Ming-Chang Chiu, Josh Dillon, Irfan Essa, Agrim Gupta, Meera Hahn, Anja Hauth, David Hendon, Alonso Martinez, David Minnen, David Ross, Grant Schindler, Mikhail Sirotenko, Kihyuk Sohn, Krishna Somandepalli, Huisheng Wang, Jimmy Yan, Ming-Hsuan Yang, Xuan Yang, Bryan Seybold, and Lu Jiang.
We give special thanks to Alex Siegman and Victor Gomes for managing computing resources. We also give thanks to Aren Jansen, Marco Tagliasacchi, Neil Zeghidour, John Hershey for audio tokenization and processing, Angad Singh for storyboarding in “Rookie the Raccoon”, Cordelia Schmid for research discussions, Alonso Martinez for graphic design, David Salesin, Tomas Izo, and Rahul Sukthankar for their support, and Jay Yagnik as architect of the initial concept.
** (a) The Storm on the Sea of Galilee, by Rembrandt 1633, public domain. (b) Pillars of Creation, by NASA 2014, public domain. (c) Wanderer above the Sea of Fog, by Caspar David Friedrich, 1818, public domain (d) Mona Lisa, by Leonardo Da Vinci, 1503, public domain.
#8K#apple#approach#arc#Art#audio#background#bard#benchmarks#Blue#canvas#change#code#computing#crystal#Design#diffusion#drone#Editing#efficiency#engineers#form#framework#Future#game#generators#Google#Graphic design#green#hand
Text
Art Director of «Обложка» Sofia Tsoi: How a Cover Gets Designed (2023)
"The art director's profession combines two absolutely incompatible things." Sofia Tsoi on how to combine creativity with a cool head.
Author: Кочан капусты за авторский лист (Darya Budantseva). Platforms: Yandex Music, VK, Google Podcasts, Castbox, Spotify, YouTube, Zvuk, Mave, Pocket Casts, SoundStream. Publication type: podcast
LISTEN
Quote
By representing audio as a sequence of discrete tokens, audio generation can be performed with Transformer-based sequence-to-sequence models — this has unlocked rapid progress in speech continuation (e.g., with AudioLM), text-to-speech (e.g., with SPEAR-TTS), and general audio and music generation (e.g., AudioGen and MusicLM). Many generative audio models, including AudioLM, rely on auto-regressive decoding, which produces tokens one by one. While this method achieves high acoustic quality, inference (i.e., calculating an output) can be slow, especially when decoding long sequences.

To address this issue, in “SoundStorm: Efficient Parallel Audio Generation”, we propose a new method for efficient and high-quality audio generation. SoundStorm addresses the problem of generating long audio token sequences by relying on two novel elements: 1) an architecture adapted to the specific nature of audio tokens as produced by the SoundStream neural codec, and 2) a decoding scheme inspired by MaskGIT, a recently proposed method for image generation, which is tailored to operate on audio tokens.

Compared to the autoregressive decoding approach of AudioLM, SoundStorm is able to generate tokens in parallel, thus decreasing the inference time by 100x for long sequences, and produces audio of the same quality and with higher consistency in voice and acoustic conditions. Moreover, we show that SoundStorm, coupled with the text-to-semantic modeling stage of SPEAR-TTS, can synthesize high-quality, natural dialogues, allowing one to control the spoken content (via transcripts), speaker voices (via short voice prompts) and speaker turns (via transcript annotations), as demonstrated by the examples below:
SoundStorm: Efficient parallel audio generation – Google AI Blog
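The speedup described in the quote comes from decoding many positions per step instead of one. A toy version of MaskGIT-style parallel decoding, with a random scorer standing in for the real network (real SoundStorm also handles SoundStream's multiple codebook levels; this sketch shows only the fill-in schedule):

```python
import numpy as np

rng = np.random.default_rng(0)
MASK = -1  # placeholder for not-yet-decoded positions

def parallel_decode(length, n_iters, predict):
    """Iteratively predict all masked positions at once and commit
    the most confident fraction each round (MaskGIT-style)."""
    tokens = np.full(length, MASK)
    for it in range(n_iters):
        masked = np.flatnonzero(tokens == MASK)
        if masked.size == 0:
            break
        preds, conf = predict(tokens, masked)
        # Commit enough tokens per round to finish within n_iters rounds.
        k = int(np.ceil(masked.size / (n_iters - it)))
        top = np.argsort(conf)[-k:]       # indices into `masked`
        tokens[masked[top]] = preds[top]
    return tokens

# Toy predictor: random token guesses with random confidences.
def predict(tokens, masked):
    return rng.integers(0, 256, masked.size), rng.random(masked.size)

out = parallel_decode(length=64, n_iters=8, predict=predict)
assert (out != MASK).all()  # fully decoded in 8 rounds, not 64 steps
```

An autoregressive decoder would need one model call per token (64 here); the parallel schedule needs only `n_iters` calls, which is where the claimed 100x inference saving for long sequences comes from.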
Text
Soundstream VRCPAA-106
https://www.mooncarstereo.com/collections/stereos
Check out our latest offering, the Soundstream VRCPAA-106, from Moon Car Stereo at a discount. It comes packed with modern features.
Text
With the help of our experts in engineering and technology, the list of the Best Competition Car Amps was created. You may be interested in the most popular brands listed below: Skar Audio, Hifonics, Gogogo Sport Vpro, Rockville ARBrend, SoundXtreme, Rockville, Lanzar, Gravity Audio, Soundstream, TTZ Audio.
More info: https://www.facebook.com/petstjamesgoshen/ https://twitter.com/petstjames https://www.pinterest.com/petstjamesgoshen/ https://www.linkedin.com/in/petsstjamesgoshen/
Text
Old CDs
Specifically TELARC from the 1980s. Back when CDs were the future I climbed on that wagon. Telarc bragged about their all digital process which I think used the Soundstream system. They made very good recordings. Several sources claim that they were 16 bit 50 kHz. Why would you ever need more!? Oh and now it must be converted to a slower sampling rate for CDs. Oh dear! The math, the math!
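For the curious, the math isn't actually that bad: 50 kHz down to CD's 44.1 kHz is the rational ratio 441/500, so every 500 source samples become 441 output samples. A naive linear-interpolation resampler shows the bookkeeping (real mastering chains use proper band-limited polyphase filters, not this):

```python
import numpy as np

def resample_linear(x, sr_in, sr_out):
    """Naive resampling by linear interpolation (no anti-alias filtering)."""
    n_out = int(len(x) * sr_out / sr_in)
    positions = np.arange(n_out) * (sr_in / sr_out)  # in input-sample units
    return np.interp(positions, np.arange(len(x)), x)

# One second of a 440 Hz tone at the Soundstream-era 50 kHz rate...
tone = np.sin(2 * np.pi * 440 * np.arange(50_000) / 50_000)
# ...down to the CD rate: 500 samples in, 441 samples out.
cd = resample_linear(tone, 50_000, 44_100)
assert len(cd) == 44_100
```

Since 44,100 is not an integer divisor of 50,000, every output sample lands between two input samples and has to be interpolated, which is exactly why the conversion step mattered.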
I have a handful of Telarc CDs. My Carmina Burana is a Telarc. I have some others with Erich Kunzel and the Cincinnati Symphony. One I mentioned before was the Grand Canyon Suite with digital thunder. They went all in on that stuff, like the real cannons in the 1812 Overture.
Anyway, I am a bit puzzled, as I know I have heard the Telarc 1812; I clearly recall the other tracks on it, Capriccio Italien and Mazeppa. I do not have it anywhere I can find. I also remember a disc called "Ein Straussfest" which had a track with gunshots. Being Telarc, they used real guns. Again, not in my stash.
It may be that I borrowed them from, or lent them to a friend I just don't recall. My wife uses some CDs as weights to hold down papers when she is cutting out patterns for sewing. I should check there too.
Here comes the rabbit hole.
Of course I went to see if there were any of these out in the "verse". Of course there are. Discogs has lots. I found out that people have painstakingly documented every issue and pressing of most of the Telarc CDs. HUH?! Apparently there are worse and better pressings of polycarbonate just like for vinyl. Some were done in Japan by JVC or Matsushita or several others. Some were done in Europe by Polydor. Oh and some issues were clipped or poorly mastered. They did that with CDs? It should just be a FN digital file.
So rather than just slag them as CDs and stop there they curate the better and worse ones. This is a tribe I never knew existed. So now aside from just buying a clean disc you need to see if it is off a good batch. Life is so complicated!
Hey I just want the music. I love vinyl, but I like most of my CDs. Some of those are my favorites for a given recording. Many are my only recording of a piece. My Mercury Living Presence CDs are excellent as are some Telarcs.
And you know what, since CDs are obsolete they let me keep my Luddite spirit intact.
Photo
Size of #audio resolution . . . #digitalaudio #music #class #hifi #files #highendaudio #portableaudio #IDMusic #headphones #digitalaudioplayer #KB #headphone #MB #WAV #producer #hifiaudio #Sound #audioengineer #RubenErre #mastering #bit #Bytes #pulseaudio #audiomastering #formula #musiclife #stereo #soundstream #mono https://www.instagram.com/p/ClUaL-vujIw/?igshid=NGJjMDIxMWI=
Photo
New Arrivals! #caricari #parcels #johannesalbert #agnesobel #softcell #soundstream https://www.instagram.com/p/Bui73yLHfeg/?utm_source=ig_tumblr_share&igshid=nsbqgtz6npno