#ultraculture
gotenmoten88 · 3 years ago
Text
WAY OF LIFE
37 notes
lazyyogi · 5 years ago
Photo
A traveling altar. It may look like randomness to some and weirdness to others. The answer is yes. But it is also orienting. Today I moved into a new apartment that is mine for the next month. In my own home, I have an extensive altar. What to bring? Such a situation requires reflection on utility versus attachment. I didn’t think too much about it. I took what I felt would be helpful and trusted it would come together in its own way. Ta-da. #altar #invisibles #guru #yoga #tarot #ganesha #ultraculture #shivarudrabalayogi https://www.instagram.com/p/BzW05yInjWI/?igshid=1di4y8z46qghq
27 notes
magethepodcast · 6 years ago
Text
Jason Louv on Mage and John Dee and the Empire of Angels: Enochian Magick and the Occult Roots of the Modern World
On today’s show we’re joined by Jason Louv, who is a real-life magician and author. Jason’s most recent book is John Dee and the Empire of Angels: Enochian Magick and the Occult Roots of the Modern World. We discuss his first book, Generation Hex; Jason’s introduction to Mage: The Ascension; Mage artist Leif Jones; what Magick is; the relationship between Magick and postmodernism; clinical psychologist Jordan Peterson; the Frankfurt School; Enochian magic; John Dee and Edward Kelley; Francis Fukuyama and the end of history; President Trump’s use of Magick; Jason’s Ultraculture podcast; and Jason’s School for Magick, Magick.me.
Join us again next week. Chris Zac, moderator for the White Wolf RPGs Game Play and Media Facebook Group, will talk about plot hooks you can use in your next Chronicle.
Subscribe to Mage: the Podcast on iTunes, Google Play and TuneIn.
Follow our Mage Chronicle on Twitter #mtrpg.
Follow us on Twitter @magethepodcast
1 note
eyeconicvisions · 6 years ago
Link
King Faro - Guess Who
1 note
antibothis · 4 years ago
Photo
If the modern world is crumbling, then magic is what’s growing up between the cracks. In Generation Hex, editor Jason Louv assembles a collection of dispatches from the edge—a generation of young adults who are inventing and imagining radically new directions for spirituality and human evolution. Arising from the magical and occult underground of the early twenty-first century, the authors, artists, thinkers, and magicians assembled in Generation Hex collectively point the way to a future in which fanaticism and dogma have disappeared, in which human beings are free to realize their own destinies, and where the theory and practical applications of magic—the psychic ability of all human beings to engage and participate with the creative energy of the universe itself—saturate and regenerate this troubled planet.

Through critical essays and practical demonstrations of how a positive interaction with the occult, esoteric, and psychic undercurrents of human life can radically alter one’s existence, the young magicians collected here provide a collective snapshot of a dramatically new way forward for global culture as it emerges from the fringes and into the mainstream, from counterculture to ultraculture. Generation Hex offers the reader an excursion into the lives and practices of real-life Harry Potters, young men and women who practice real magic, here stripped of its sinister trappings and revealed to be what it truly is—the key to human evolution. Generation Hex provides a blueprint for escaping the suicidal rut of modern life and for the radical redesign of the very essence of what it means to be human.

Jason Louv is a New York-based writer and editor. He has spent the last six years researching and practicing magic, being initiated into various questionable secret societies, traveling around the Near East, and learning how to cloud minds. This is his first book.
1 note
bussamove · 7 years ago
Video
Video for “Fun” by @kingfaro_ is out now. Link in the Bio #pettytapes dropping Nov.3 be on the lookout #djbereal #bereal #dj #music #video #fun #ultra #ultraculture #lonelyheartsclub 📸 @eyeconicvisions @kellz55 (at Brooklyn NYC)
1 note
chaoselph · 8 years ago
Link
3 notes
naturecrusader-blog · 7 years ago
Link
Third and final post from Ultraculture (for now at least).
0 notes
strixa · 5 years ago
Photo
You’re welcome, TMA and TBTP fandoms.
over here sharing the content
140K notes
erspears · 8 years ago
Photo
Bay-Lor the Crusher Claims Another Victim (2017)
6 notes
dailytechnologynews · 5 years ago
Photo
The Coming Age of Imaginative Machines: If you aren't following the rise of synthetic media, the 2020s will hit you like a digital blitzkrieg
The faces on the left were created by a GAN in 2014; on the right are ones made in 2018.
Ian Goodfellow and his colleagues gave the world generative adversarial networks (GANs) five years ago, way back in 2014. They did so with fuzzy and ethereal black & white images of human faces, all generated by computers. This wasn't the start of synthetic media by far, but it did supercharge the field. Ever since, the realm of neural network-powered AI creativity has repeatedly kissed mainstream attention. Yet synthetic media is still largely unknown. Certain memetic-boosted applications such as deepfakes and This Person Does Not Exist notwithstanding, it's safe to assume the average person is unaware that contemporary artificial intelligence is capable of some fleeting level of "imagination."
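Since the post leans on GANs throughout, a toy sketch may help make the adversarial idea concrete. This is my own illustration, not code from the post or from Goodfellow's paper; the target distribution, learning rate, and step counts are all invented for the demo. A generator with two scalar parameters learns to mimic 1-D Gaussian data by fooling a logistic discriminator, with the gradients worked out by hand.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data the generator must learn to imitate: samples from N(4.0, 0.5).
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: x = wg*z + bg, with latent noise z ~ N(0, 1).
wg, bg = 1.0, 0.0
# Discriminator: D(x) = sigmoid(wd*x + bd), its estimate that x is real.
wd, bd = 0.0, 0.0
lr = 0.02

for step in range(5000):
    # Discriminator step: descend -log D(real) - log(1 - D(fake)).
    x_real = real_sample()
    d_real = sigmoid(wd * x_real + bd)
    x_fake = wg * random.gauss(0.0, 1.0) + bg
    d_fake = sigmoid(wd * x_fake + bd)
    wd -= lr * (-(1.0 - d_real) * x_real + d_fake * x_fake)
    bd -= lr * (-(1.0 - d_real) + d_fake)

    # Generator step: non-saturating loss -log D(fake), gradient through D.
    z = random.gauss(0.0, 1.0)
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    wg -= lr * (-(1.0 - d_fake) * wd * z)
    bg -= lr * (-(1.0 - d_fake) * wd)

# After training, generated samples drift toward the real mean.
fake_mean = sum(wg * random.gauss(0.0, 1.0) + bg for _ in range(2000)) / 2000.0
print(round(fake_mean, 2))
```

The same tug-of-war, scaled up to convolutional networks and millions of parameters, is what produced the jump in face quality from 2014 to 2018 described above.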
Media synthesis is an inevitable development in our progress towards artificial general intelligence, the first and truest sign of symbolic understanding in machines (though by far not the thing itself—rather the organization of proteins and sugars to create the rudimentary structure of what will someday become the cells of AGI). This is due to the rise of artificial neural networks (ANNs). A popular misconception holds that synthetic media presents nothing we haven’t had since the 1990s, yet what separates media synthesis from mere manipulation, retouching, and scripts is the modicum of intelligence required to accomplish these tasks. The difference between Photoshop and neural network-based deepfakes is equivalent to the difference between building a house with power tools and employing a utility robot to use those power tools to build the house for you.
Succinctly, media synthesis is the first tangible sign of automation that most people will experience.
Public perception of synthetic media will steadily grow and will likely sink to a nadir of acceptance as more people become aware of the power of these artificial neural networks without being offered realistic debate or solutions for how to deal with them. They’ve simply come too quickly for us to prepare for, hence the seemingly hasty reaction of certain groups like OpenAI with regard to releasing new AI models.
Already, we see frightened reactions to the likes of DeepNude, an app made solely to strip women in images down to their bare bodies without their consent. The potential for abuse (especially for pedophilic purposes) is self-evident. We are plunging headlong into a new era so quickly that we are unaware of just what we are getting ourselves into. But just what are we getting into?
Well, I have some thoughts.
I want to start with the field most people are at least somewhat aware of: deepfakes. We all have an idea of what deepfakes can do: the “purest” definition is taking one person’s face and replacing it with another’s, presumably in a video. The less exact definition is taking some aspect of a person in a video and editing it to be different. There are even deepfakes for audio, such as changing one’s voice or putting words in their mouth. Most famously, this was done to Joe Rogan.
I, like most others, first discovered deepfakes in late 2017, around the time I had an “epiphany” about media synthesis as a whole. Just in those two years, the entire field has seen extraordinary progress. I realized then that we were on the cusp of an extreme flourishing of art, except that art would be largely, or almost entirely, machine-generated. But along with it would come a flourishing of distrust, fake news, fake reality bubbles, and “ultracultural memes”. Ever since, I’ve felt the need to evangelize media synthesis, whether to tell others of a coming renaissance or to warn them to be wary of what they see.
This is because, over the past two years, I realized that many people’s idea of what media synthesis is really stops at deepfakes, or they only view new developments through the lens of deepfakes. The reason I came up with “media synthesis” is that I genuinely couldn’t pin down any one creative or data-based field AI wasn’t going to affect. It wasn’t just faces. It wasn’t just bodies. It wasn’t just voice. It wasn’t just pictures of ethereal swirling dogs. It wasn’t just transferring day to night. It wasn’t just turning a piano into a harpsichord. It wasn’t just generating short stories and fake news. It wasn’t just procedurally generated gameplay. It was all of the above and much more. And it’s coming so fast that I fear we aren’t prepared, both for the tech and for the consequences.
Indeed, in many discussions I’ve seen (and engaged in) since then, there are always several people who react virulently against the prospect that neural networks can do any of this at all, or at least that the technology will improve to the point of affecting artists, creators, and laborers, even though we’re already seeing the effects in the modeling industry alone.
Look at this gif. Looks like a bunch of models bleeding into and out of each other, right? Actually, no one here is real. They're all neural network-generated people.
Neural networks can generate full human figures, and altering their appearance and clothing is a matter of changing a few parameters or feeding an image into the data set. Changing the clothes of someone in a picture is as easy as clicking on the piece you wish to change and swapping it with any of your choice (or leaving the person wearing no clothes at all). A similar scenario applies to make-up. This is not like an old online dress-up flash game where the models must be meticulously crafted by an art designer or programmer— simply give the ANN something to work with, and it will figure out all the rest. You needn’t even show it every angle or every lighting condition, for it will use common sense to figure these out as well. Such has been possible since at least 2017, though only with recent GPU advancements has it become possible for someone to run such programs in real time.
The unfortunate side effect is that the amateur modeling industry will be vaporized. Extremely little will be left, and the few who remain will be promoted entirely because they are real, flesh-and-blood human beings. Professional models will survive for longer, but there will be little new blood joining their ranks. As such, it remains to be seen whether news and blogs will speak loudly of the sudden, unexpected automation of what was once seen as a safe and human-centric industry, or whether this goes ignored and under-reported— after all, the news used to speak of automation in terms of physical, humanoid robots taking the jobs of factory workers, fast-food burger flippers, and truck drivers, occupations that still exist en masse due to slower-than-expected rollouts of robotics and a continued lack of general AI.
We needn't have general AI to replace those jobs that can be replicated by disembodied digital agents. And the sudden decline & disappearance of models will be the first widespread sign of this.
Actually, I have a hypothesis for this: media synthesis is one of the first signs that we’re making progress towards artificial general intelligence.
Now don't misunderstand me. No neural network that can generate media is AGI or anything close. That's not what I'm saying. I'm saying that what we can see as being media synthesis is evidence that we've put ourselves on the right track. We never should've thought that we could get to AGI without also developing synthetic media technology.
What do you know about imagination?
As recently as five years ago, the concept of "creative machines" was cast off as impossible— or at the very least, improbable for decades. Indeed, the phrase remains an oxymoron in the minds of most. Perhaps they are right. Creativity implies agency and desire to create. All machines today lack their own agency. Yet we bear witness to the rise of computer programs that imagine and "dream" in ways not dissimilar to humankind.
Though lacking agency, this still meets the definition of imagination.
To reduce it to its most fundamental ingredients: Imagination = experience + abstraction + prediction. To get creativity, you need only add “drive”. Presuming that we fail to create artificial general intelligence in the next ten years (an easy thing to assume, because it’s unlikely we will achieve fully generalized AI even in the next thirty), we still possess computers capable of the first three ingredients.
Someone who lives on a flat island and who has never seen a mountain before can learn to picture what one might be by using what they know of rocks and cumulonimbus clouds, making an abstract guess to cross the two, and then predicting what such a "rock cloud" might look like. This is the root of imagination.
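That flat-island recipe, experience plus abstraction plus prediction, can be mimicked by even the crudest generative model. The sketch below is my own toy, not anything from the post: its “experience” is a five-word corpus, its “abstraction” is a table of letter-to-letter transition counts, and its “prediction” is sampling new letter sequences from that table.

```python
import random
from collections import defaultdict

random.seed(1)

# Experience: the handful of words the model has "seen".
corpus = ["mountain", "fountain", "curtain", "certain", "captain"]

# Abstraction: boil the corpus down to letter-transition statistics.
# "^" marks the start of a word and "$" the end.
transitions = defaultdict(list)
for word in corpus:
    chars = ["^"] + list(word) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

# Prediction: walk the transition table to "imagine" a new word,
# capped at 13 letters so a cycle can't run forever.
def imagine():
    out, cur = [], "^"
    while True:
        cur = random.choice(transitions[cur])
        if cur == "$" or len(out) > 12:
            break
        out.append(cur)
    return "".join(out)

words = [imagine() for _ in range(5)]
print(words)
```

Swap the transition table for a deep network and the five words for millions of images, and this same experience-abstraction-prediction loop is, in outline, what the networks discussed above are doing.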
As Hume observed, even the strongest of imagined sensations is duller than the dullest physical one, so this image in the person’s head is only clear to them in a fleeting way. Nevertheless, it’s still there. With great skill, the person can learn to express this mental image through artistic means. In all but the most skilled, it will not be a pure 1-to-1 realization due to the fuzziness of our minds, but in the case of expressive art, it doesn’t need to be.
Computers completely lack this fleeting ethereality of imagination. Once a machine creates something, it can hand you the output uncorrupted.
Right now, this makes for wonderful tools and apps that many play around with online and on our phones.
But extrapolating this to the near future brings us face to face with many heavy questions, and not just of the “can’t trust what you see” variety.
Because think about it.
If I'm a musical artist and I release an album, what if I accidentally recorded a song that's too close to an AI-generated track (all because AI generated literally every combination of notes?) Or, conversely, what if I have to watch as people take my music and alter it? I may feel strongly about it, but yet the music has its notes changed, its lyrics changed, my own voice changed, until it might as well be an entirely different artist making that music. Many won't mind, but many will.
I trust my mother’s voice, as many do. So imagine a phisher managing to steal her voice, running it through a speech synthesis network, and then calling me to ask for my social security number. Or maybe I work at a big corporation, and while we’re secure, we still recognize each other’s voices, only to learn that someone stole millions of dollars from us because they stole the CEO’s voice and used it to wire cash to a pirate’s account.
Imagine going online and at least 70% of the "people" you encounter are bots. They're extremely coherent, and they have profile images of what looks to be real people. And who knows, you may even forge an e-friendship with some of them because they seem to share your interests. Then it turns out they're just bundles of code.
Oh, and those bot-people are also infesting social media and forums in the millions, creating and destroying trends and memes without much human input. Even if the mainstream news sites don't latch on at first, bot-created and bot-run news sites will happily kick it off for them. The news is supposed to report on major events, global and local. Even if the news is honest and telling the truth, how can they truly verify something like this, especially when it seems to be gaining so much traction and humans inevitably do get involved? Remember "Bowsette" from last year? Imagine if that was actually pushed entirely by bots until humans saw what looked like a happenin' kind of meme and joined in? That could be every year or perhaps even every month in the 2020s onwards.
Likewise, imagine you're listening to a pop song in one country, but then you go to another country and it's the exact same song but most of the lyrics have changed to be more suitable for their culture. That sort of cultural spread could stop... or it could be supercharged if audiences don't take to it and pirate songs/change them and share them at their own leisure.
Or maybe it's a good time to mention how commissioned artists are screwed? Commission work boards are already a race to the bottom— if a job says it pays three cents per word to write an article, you'd better list your going rate as 2 cents per word, and then inevitably the asking rate in general becomes 2 cents per word, and so on and so forth. That whole business might be over within five to ten years if you aren't already extremely established. Because if machines can mimic any art style or writing style (and then exaggerate & alter it to find some better version people like more), you'd have to really be tech-illiterate or very pro-human to want non-machine commissions.
And to go back to deepfakes and deep nudes, imagine the prototypical creep who takes children and puts them into sexual situations, any sexual situation they desire, thanks to AI-generated images and video. It doesn’t matter who, and it doesn’t have to be real children either. It could even be themselves as a child if they still have the reference or use a de-aging algorithm on their face. It’s squicky and disgusting to think about, but it’s also inevitable and probably has already happened.
And my god, it just keeps going on and on. I can't do this justice, even with 40,000 characters to work with. The future we're about to enter is so wild, so extreme that I almost feel scared for humanity. It's not some far off date in the 22nd century. It's literally going to start happening within the next five years. We're going to see it emerge before our very eyes on this and other subreddits.
I'll end this post with some more examples.
Nvidia's new AI can turn any primitive sketch into a photorealistic masterpiece. You can even play with this yourself here.
Waifu Synthesis- real time generative anime, because obviously.
Few-Shot Adversarial Learning of Realistic Neural Talking Head Models | This GAN can animate any face GIF, supercharging deepfakes & media synthesis
Talk to Transformer | Feed a prompt into GPT-2 and receive some text. As of 9/29/2019, this uses the 774M parameter version of GPT-2, which is still weaker than the 1.5B parameter "full" version.
Text samples generated by Nvidia's Megatron-LM (GPT-2-8.3b). Vastly superior to what you see in Talk to Transformer, even if it had the "full" model.
Facebook's AI can convert one singer's voice into another | The team claims that their model was able to learn to convert between singers from just 5-30 minutes of their singing voices, thanks in part to an innovative training scheme and data augmentation technique. as a prototype for shifting vocalists or vocalist genders or anything of that sort.
TimbreTron for changing instrumentation in music. Here, you can see a neural network shift entire instruments and pitches of those new instruments. It might only be a couple more years until you could run The Beatles' "Here Comes The Sun" through, say, Slayer and get an actual song out of it.
AI generated album covers for when you want to give the result of that change its own album.
Neural Color Transfer Between Images [From 2017], showing how we might alter photographs to create entirely different moods and textures.
Scammer Successfully Deepfaked CEO's Voice To Fool Underling Into Transferring $243,000
"Experts: Spy used AI-generated face to connect with targets" [GAN faces for fake LinkedIn profiles]
This Marketing Blog Does Not Exist | This blog written entirely by AI is fully in the uncanny valley.
Chinese Gaming Giant NetEase Leverages AI to Create 3D Game Characters from Selfies | This method has already been used over one million times by Chinese gamers.
"Deep learning based super resolution, without using a GAN" [perceptual loss-based upscaling with transfer learning & progressive scaling], or in other words, "ENHANCE!"
Expert: AI-generated music is a "total legal clusterf*ck" | I've thought about this. Future music generation means that all IPs are open, any new music can be created from any old band no matter what those estates may want, and AI-generated music exists in a legal tesseract of answerless questions
And there's just a ridiculous amount more.
My subreddit, /r/MediaSynthesis, is filled with these sorts of stories going back to January of 2018. I’ve definitely heard of people coming away in shock, dazed and confused, after reading through it. And no wonder.
6 notes
johnathonmichael · 7 years ago
Photo
#israelregardie #ultraculture
0 notes
chetzar · 6 years ago
Photo
#Repost @magick.me with @get_repost ・・・ Genesis P-Orridge has been admitted to the hospital and faces a serious surgery to clear her lungs. She needs your help. LINK IN BIO. . As you likely know, Genesis P-Orridge has been battling leukemia for the last year. A few days ago, she was admitted to the hospital, as she was unable to breathe without oxygen. Gen's lungs have become blocked with fluid, which her doctors had previously attempted to remove with a suction needle. This failed, as the fluid has become too viscous and thick. Gen now needs to undergo full surgery to clean out her lungs—a serious and potentially life-threatening procedure. . Today, Gen made a request on Instagram for you to send compassionate and healing energy to her to assist in the success of her surgery, as well as visualizing her surgeons succeeding in cleaning out her lungs. She has also requested that you visualize her girlfriend Susana Atkins smiling and happy tonight, after the surgery. If you can, please take a few moments this morning and throughout the day to visualize the success of Gen's surgery and her return to health. . Gen currently lives on little to no money. Despite her incalculable contributions to culture, esoteric transgression does not pay a pension or provide a safety net. She faces mounting medical bills and is plowed under by day to day expenses. Outside of occasional DJ or band gigs (which she is no longer healthy enough to do), she has no income. This means that she is currently living only on donations. And this, to the great shame of our society and its values, is the same position that similar esoteric titans like Robert Anton Wilson and Sasha Shulgin were reduced to at the end of their lives. As our ever-growing occulture (or, as I hope it continually becomes, Ultraculture) matures, it is my great hope that we can do better by our elders. . 
Thank you for your caring and compassion (as Gen would say, Big L-ov-E), and I'm hoping to share news of Gen's speedy recovery with you all very soon. LINK IN BIO. #magick #occult #genesisporridge #topy #templeovpsychickyouth #industrialmusic #industrialmusicelectronics #industrialmusicforindustrialpeople https://www.instagram.com/chetzar/p/Buq_3ZRA5G7/?utm_source=ig_tumblr_share&igshid=w5l9xwui7le5
4 notes
elizatyrakowskaportfolio · 2 years ago
Video
tumblr
the title sequence for the film - I used my term "ultraculture" in both English and Polish and juxtaposed them
0 notes
bussamove · 7 years ago
Video
Another 1☝🏽!!! #studioafterdark #lonelyheartsclub #djbereal #dj #bereal #boombapbeats #boombap #guccigang #lilpump #hiphop #ultraculture #turntablelism #pioonerdj #trap (at New York, New York)
0 notes
jhavelikes · 2 years ago
Quote
With media properties and collaborations ranging across the digital, print, film and broadcast domains, Ultraculture is carving out a new kind of media empire by bringing together the best of traditional spirituality and cutting-edge technology.
Ultraculture Incorporated
0 notes