michaelgogins
Michael Gogins
527 posts
Computer music, photography, some poetry and fiction, and something more or less like theology or philosophy of religion
michaelgogins · 1 month ago
Text
2025 New York City Electroacoustic Music Festival
The 2025 New York City Electroacoustic Music Festival is now over. I am very happy to report that in spite of travel problems caused by the Trump administration, in response to which some foreign composers and performers chose not to come to the United States, the festival went well. I expect that it will continue next year.
My own piece, Pianissimo, is now online on Bandcamp.
Without prejudice to pieces that I did not hear, or even to pieces that I did hear, here are some of the pieces that stood out for me personally (in the sense that I think I would enjoy hearing them again), with streaming links if possible, in composer order:
Hector Bravo Benard, In the Fog.
Aleksandra Bilinska, and then… Subekumena, Hommage a Barbara Buczek from the Cycle #2020.
Nicole Carroll, Abstracted Hexagonal Ruin.
Nolan Downey, Unring the Bell.
Du Xiaohu, exclusively in the sound of Chu.
Gerald Eckert, Nen IV - closer (the link is only to the first section of this piece).
James Dashow, Negli Angole della Bottega del Suono.
Allen Fogelsanger, Sun Burning: a collaborative real-time movement/sound composition.
Jon Forshee, Story of the Face, Part 1.
Joel Gressel, Unmeasured Time.
Ragnar Grippe, The Moment
Kim Hedas, Lein
Mara Helmuth, Conjuring
Hubert Howe, Inharmonic Fantasy No. 19
Antoine Jackson, “Come what may…"
Jinwoong Kim, Zodiac, for piano and live audiovisual
Eric Lyon, Margaret, Dancing
Andrew May, Shape Shifter
Clemens von Reusner, EREMIA
Ana Rubin, Cave, River, Sun
Eli Stine, Where Water Meets Memory
Pieces for which I have been unable to locate a direct link may still be found on the Sheen Center’s YouTube channel here.
My request to composers and performers: At the same time that a work of yours appears in a public venue, please publish a link (if at all possible) to an online release of that work!
3 notes · View notes
michaelgogins · 2 months ago
Text
Barry Vercoe (24 July 1937 – 16 June 2025)
I will not recapitulate what others have said in tribute to Barry Vercoe. You can read the Wikipedia article, or Richard Boulanger’s tribute (which is quoted in full here), or look at Barry’s old home page at the Massachusetts Institute of Technology, or read his New Zealand obituary.
Here I will offer my personal thanks to Barry for creating what, in my considered opinion, is one of the best musical instruments in history — Csound. I use it for almost all of my musical compositions.
There are now other systems, such as Max or SuperCollider, that can do all, or almost all, of what Csound does. However, Csound came first, and is an ancestor of these systems. For at least some composers, such as myself, Csound is still easier to use, and perhaps more powerful. And just because it is older, Csound has the huge advantage of a very large base of running musical examples and pieces.
Here I will also offer my appreciation of Barry’s design choices and his implementation of Csound. My appreciation is based on my own experience, not only as an intensive user, but also as a sometime member of the Csound development team, when I contributed a number of features to Csound and came to understand Barry’s outstanding ability as a computer programmer.
There are some things I definitely do not like about the Csound code, mainly the cryptic names, and the use of preprocessor macros. Aside from that, here are a few of the good things in Barry’s code:
Of course the big home run was writing Csound in platform-neutral C, still one of the most performant programming languages, and still available on more platforms than any other.
The extreme simplicity and efficiency of the inner loop for running Csound performances.
Invisible, automatic handling of multiple notes playing at the same time, for the same instrument.
The extremely flexible design for unit generators (opcodes), the building blocks of sound synthesis. Essentially, although written in C, Barry’s unit generators are classes -- data structures that derive from a virtual base class, and include methods for operating on their own data. The virtual base class idea makes it quite easy to extend Csound with new unit generators, and now even plugin unit generators.
The musical power and flexibility of Csound’s score language, which permits the user to define any set of fields for an event; and these fields are not limited to integer values, but are real numbers. Furthermore, based on his experience as a composer, Barry made sure his score language could handle tied notes, polyphony, changes of tempo, and so on. This is far more powerful than MIDI.
The policy of complete backwards compatibility. The very first examples and compositions still run on today's Csound!
Based on Barry's foundation, the current implementation of Csound (far more capable than the original) remains highly efficient, flexible, and easy to extend.
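The flexibility of the score language is easy to see in miniature. The sketch below is my own illustration, not Barry's code: it emits "i" event statements whose pfields may be arbitrary real numbers, preceded by a "t" tempo statement.

```python
# A minimal sketch (my own illustration, not Barry's code) of why the
# score language is powerful: an event is just "i" plus any number of
# real-valued pfields, and a "t" statement changes the tempo.

def event(instr, start, dur, *pfields):
    """Format one Csound "i" statement; every field may be a real number."""
    fields = [instr, start, dur, *pfields]
    return "i " + " ".join(f"{f:g}" for f in fields)

def score(tempo, events):
    """Assemble a score: a tempo statement followed by event statements."""
    lines = [f"t 0 {tempo:g}"]  # "t" sets beats per minute starting at beat 0
    lines += [event(*e) for e in events]
    return "\n".join(lines)

# Three events for instrument 1: start, duration, amplitude, frequency.
# Note that a frequency of 440.25 is a perfectly legal real-valued pfield.
print(score(72, [
    (1, 0.0, 2.0, 0.5, 440.25),
    (1, 2.0, 2.0, 0.5, 660.0),
    (1, 4.0, 4.0, 0.25, 220.0),
]))
```

Generating score text from a scripting language like this is the simplest form of algorithmic composition that the score language's design makes possible.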
5 notes · View notes
michaelgogins · 3 months ago
Text
Where We Seem to Be Heading
I have just completed a thorough revision of my paper "Metamathematics of Algorithmic Composition." In doing this, I have made constant use of ChatGPT 4o:
Search for better citations, when I already have a citation that may be out of date or otherwise not optimal.
Generate BibTeX for citations.
Summarize results of cited work that I do not fully understand, typically over and over from different starting points.
Explain technical terms or summarize concepts on which I depend, again typically over and over, to make sure I understand them.
Provide concise instructions for various tasks involving LaTeX or the arXiv.
This has all been most helpful. Either it has saved me time, or it has enabled me to comprehend more difficult things, or both. I can now clearly see that, in any similar work that I do in the future, I will simply assume the use of artificial intelligence. And what I assume, surely many if not most others will also assume.
And this points to a future in which most work, certainly any clerical or intellectual work, will by default be done using AI.
And that in turn would seem to lead us to a future civilization radically different from any in the past. Looming within this future civilization are three possibilities:
AI ends up doing all the creative work as well. In that case, humanity will probably fade away, intellectually speaking, if not physically.
It becomes clear that AI will not in the foreseeable future be able to do the creative work, but nobody will do creative work without heavily using AI.
We remain in suspense about whether AI can take over the creative work. Humanity keeps going, not perhaps without a certain degree of paranoia.
Whether humanity fades away or not, the use of AI to manipulate humanity is an obvious immediate threat.
I recently read “Machine assisted proof” by the eminent mathematician Terence Tao. This article is highly relevant to my thoughts here, and comes to some of the same conclusions. Neither Tao nor I see any immediate access to high-level creativity by AI. As I do not see much of an impact from increasing scale on creativity, I believe, for the present, that only a change of architecture in AI could implement such creativity, if that can be done at all.
Anyway, to resume the thread, it’s possible that a direct neural interface to AI will be developed. And that might not even be needed. Would a worker with a direct neural interface — prompting AI mentally, receiving the responses mentally — have a radically different experience from a worker prompting AI subvocally, and receiving the response in spectacles equipped with high-resolution video and audio? Maybe we’ll find out.
1 note · View note
michaelgogins · 4 months ago
Text
What It Would Take
What would it take to convince me that artificial intelligence has become general artificial intelligence?
I’ve been using ChatGPT now for about a year. I use it every day. I use it for looking things up, for fun, and for serious work. In my previous post I have described this experience. I have started actually subscribing so that I get the full power.
Setting aside the question of consciousness, at least for now, these are the things that might convince me that ChatGPT (or any other AI) is a general AI:
I don’t ever find a “hallucination,” a plausible solution that is factually incorrect that I can easily check.
I do get solutions to problems that have not previously been solved, whether by human beings or by other AIs.
Artistic works produced by the AI, whether literary, visual, or musical, are of the first quality and hold up over time, revealing depths, as human masterpieces do.
I come to feel that the AI knows me better than I know myself, and at least as well as my wife.
I don’t have the feeling that we are getting any closer to these things… but I do find that AI is becoming more and more useful.
0 notes
michaelgogins · 4 months ago
Text
The Tide of AI
I’ve been working with generative artificial intelligence (AI here) now for several years, trying to understand a bit about how it works, and following discussions by those much more knowledgeable and more concerned about it than I.
My main uses of AI have been to jumpstart my solutions to programming problems, and to answer questions about many different things in a better way than mere search engines. I also use image generating AIs to produce illustrations for some of my favorite science fiction novels.
I have come to some tentative conclusions.
AI is not now conscious in any sense.
AI is not now capable of telling the difference between its training data and the external world.
AI can produce very useful and rapid solutions for me based on solutions that other people have found.
If a solution has not already been found, then AI has some ability to reason its way to a solution.
Such solutions are not always correct; I run into many outright mistakes. These mistakes can only be fixed by a human expert, but the flawed solutions are sometimes still quite useful as hints towards a correct one.
AI is improving rapidly.
The better AI gets, the more the human workflow will involve it. And in looking at how others are using AI, particularly in visual art, I can see that future work in many if not most fields of art, science, engineering, and other forms of expertise will involve a continuous dialogue with AI.
And yet the AI does so much work for one behind the scenes that it’s all but impossible to control everything about the solution. For example, in generating illustrations for Jack Vance’s Demon Princes novels, the AI creates certain idealized images of the characters. If Drusilla Wayles is to be shown blond not brunette, full-figured not slim, and dressed in any detailed style, that requires a fair amount of work writing more detailed prompts.
I see this as an obstacle to real originality. In other words, the easier it becomes to use AI and the better the solutions are, the more tempting it will become for me, and probably many other people, to use the AI solution. I feel this will interpose a lot of extra work before getting to real originality, perhaps without the author fully realizing that. And I feel this could lead to sighs of resignation.
That may well make it harder to achieve true originality in many fields.
I plan to try using AI to compose musical scores as MIDI files, and to design new Csound instruments. That will probably be more enlightening for me than fooling around with illustrating some of my favorite stories, although I will keep doing that.
One thing I am looking for is help with pieces I’m working on. Let’s suppose I have code that produces a complete piece of music, but there are problems I don’t see how to fix. I’d like to be able to say something like “the B section is too much like the A and C sections, edit this code to make the B section not such an obvious rehash of the parts of the A and C sections,” and get some code that either does that, or provides a hint for how I can do that.
I will report on my experiences here.
2 notes · View notes
michaelgogins · 5 months ago
Text
Algorithmic Composition in Reaper
As a result of working with Menno to get CsoundAC working as ReaScript in Reaper, I have come to appreciate some of the pros and cons of doing algorithmic composition in Reaper projects.
I will have more to say about this in the near future….
In the meantime, here is my first take on the pros and cons.
Cons first:
This can be done only on desktop platforms (for now). All my experience to date is with macOS.
This can be done only in 12-tone equal temperament (for now).
ReaScript actions are not stored in project files, but are externally stored.
Pros:
It works very well.
As long as an algorithmic composition library is written in Python or has a Python interface, it can be used.
Complex scores can be generated quickly.
Scores can be generated to use either CsoundVST3, or any other synthesizer plugin in Reaper, or a mixture.
By setting up start times or loops in Reaper, it is possible to test changes in a score-generating script very quickly without having to listen to things all the way through. This is a huge advantage.
It is possible to use CsoundAC in conjunction with other algorithmic composition systems that work in Python. This is potentially a huge advantage.
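To make the workflow concrete, here is a minimal sketch of the pattern described above. The note-generating logic is plain Python (so it can use CsoundAC or any other Python library, and can be tested outside Reaper); the Reaper-side calls are only sketched in comments, since ReaScript's RPR_ functions exist only inside a running Reaper session, and the exact signatures should be checked against the ReaScript documentation. The generator itself is a deliberately trivial stand-in for a real algorithm.

```python
# Sketch: separate pure score generation (testable anywhere) from the thin
# layer that writes notes into Reaper. The generator is a trivial stand-in.

def generate_notes(root=60, count=8):
    """Return (start_beats, duration_beats, midi_pitch) tuples."""
    notes = []
    for i in range(count):
        pitch = root + (i * 7) % 24   # stack fifths, folded into two octaves
        notes.append((i * 0.5, 0.5, pitch))
    return notes

def insert_into_reaper(notes, ppq=960):
    """Inside Reaper, this would write the notes into the active MIDI take.
    The RPR_ calls below are sketched from memory; verify them against the
    ReaScript API documentation before use."""
    # take = RPR_MIDIEditor_GetTake(RPR_MIDIEditor_GetActive())
    # for start, dur, pitch in notes:
    #     RPR_MIDI_InsertNote(take, False, False,
    #                         int(start * ppq), int((start + dur) * ppq),
    #                         0, pitch, 100, False)
    pass

notes = generate_notes()
print(notes[0], notes[-1])
```

The point of the split is that the algorithm can be developed and debugged at the command line, then run unchanged as a ReaScript action.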
1 note · View note
michaelgogins · 5 months ago
Text
Population
What is the ideal human population of the Earth?
I ask this question here because I was just starting college when the questions of the “population explosion” and “limits to growth” were sharply posed and evoked public responses. Furthermore, I enjoyed living in Utah with many picnics, weekend excursions into the wilderness, and backpacking trips. Indeed, I knew there were still places in the highest mountains and the most labyrinthine desert mazes of Utah that no human foot had yet touched.
I dreaded the thought of this wildness being pushed back and pushed further back (which trips back to Utah have shown me has been happening). And finally, anthropogenic global warming is proof that the current population, at least with its current industrial base, cannot live on Earth without radically changing its environment — for the worse.
So, what’s the optimal human population of Earth, assuming industrial civilization with a decent standard of living for everyone?
In my view, there are several objectives to be met.
Anthropogenic global warming must be contained and, ideally, pushed back.
The oceans and fisheries must be restored.
The other organisms that share the Earth with us should not be replaced, en masse, with our crops, livestock, and pets. Many of them should be able to live without constantly running into us or even knowing much about us.
Human land use patterns should be governed in a way that preserves the landscape and provides as much of a home as possible to wild animals and plants.
These considerations suggest the use of several patterns of land use.
Cities, especially the historical centers of existing large cities, furnished with adequate mass transit.
Commuter suburbs with train stations.
Agricultural countryside, with a mixture of farms and housing, needing roads and cars.
Mixed use countryside with not only farms and housing, but also wild forests, meadows, and wetlands, connected with wildlife corridors.
Wild parklands, including vacation homes, hotels, and roads with cars.
Protected wilderness with no roads. These should primarily include regions that formerly hosted the most impressive animals and herds, such as the Serengeti plain, or the Great Barrier Reef, or Patagonia. This should also include large parts of regions formerly inhospitable to industrial civilization, such as the rain forests and the polar regions.
Each of these land use patterns has a typical population density:
Urban centers: 15 thousand per square kilometer.
Commuter suburbs: 5 thousand per square kilometer.
Agricultural countryside: 10 per square kilometer.
Mixed use countryside: ditto, actually.
Wild parklands: 1 per square kilometer.
Roadless wilderness: Virtually zero.
To cut greenhouse gas emissions to a level where warming stabilizes, the human population would have to be reduced by about one half.
To enable the recovery of fisheries and oceans, the human population would have to be reduced to about 5 billion.
To enable the recovery of wild herds and their predators on only half the land area of Earth, the population would have to be reduced to about 4 billion.
These very rough indicators suggest an optimal population of about 4 billion, which is half the current population.
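The arithmetic behind that conclusion is simply the minimum of the three constraints. A tiny sketch, taking the current population as roughly 8 billion (my assumption, implied by "half the current population"):

```python
# The three rough constraints above, expressed directly. Each figure is
# taken from the text; "current" is my assumed 8 billion.

current = 8.0  # billions, approximately

constraints = {
    "stabilize warming": current / 2,        # "reduced by about one half"
    "recover fisheries and oceans": 5.0,     # billions
    "recover wild herds on half the land": 4.0,
}

# The binding constraint is the most restrictive one.
optimum = min(constraints.values())
print(optimum)  # 4.0 (billion)
```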
Demographic trend projection shows a maximum population of 9 to 10 billion by 2050 that then starts to decline, due to aging populations and falling birth rates.
If the trend continues, with population growth slowing and eventually stabilizing or even shrinking in many parts of the world, reaching 4 billion could take 100 years or more, perhaps between 2100 and 2200.
In other words, we’re going to get down to the optimal population level, or even lower, without doing a damned thing.
Of course, to achieve this optimal population level with something like optimal land use, about half the populated area of the Earth would have to be emptied, or at least greatly reduced in population density. Remaining populated areas would be compacted around existing urban centers and mass transit lines. Existing suburbs and farmlands would receive wild corridors serving as public parklands.
The emptied lands could be left full of ruins, or most of the evidence of humanity could be erased — an enormous task.
Things are pretty bad right now with messy land use, anthropogenic global warming, pollution, and overfishing.
However, in one or two centuries, we could be living on a planet with a far better situation, both for humanity and for the wilderness.
That’s if we actually start thinking and planning for this.
2 notes · View notes
michaelgogins · 9 months ago
Text
Version 1 of cloud-5!
I have been working for several years now on a self-contained system for writing computer music that plays from any standard Web browser, cloud-5. The system integrates a WebAssembly build of Csound, the online live coding system Strudel, infrastructure for running GLSL shaders, and a 3-dimensional piano roll score view.
I presented this system and performed a piece written with it at the International Csound Conference in Vienna this year.
This is version 1 of this software. Go to cloud-music to learn more about the system and play some of my pieces, or go to GitHub to download the software.
One of the nice things about this system is that you can download the zip file, unzip it, run a local server in the cloud-5 directory, and have a very powerful computer music system up and running with no additional installation and no configuration.
2 notes · View notes
michaelgogins · 10 months ago
Text
Upcoming Concert
I have a composition of algorithmically composed computer music, "Three Trees," in a concert with Association for the Promotion of New Music (APNM) at 7:30 PM, Saturday November 2, Greenwich House Music School, 46 Barrow Street, New York.
More here.
2 notes · View notes
michaelgogins · 10 months ago
Text
Theories that cannot be falsified
A theory that cannot be falsified is not a scientific theory.
If Solomonoff induction is a valid model of the progress of science, then, because Solomonoff induction is undecidable, a final complete theory of Nature, if there is one, cannot be falsified, and so is not scientific.
This seems to mean that finding a complete theory of Nature, and knowing that it is complete, is in principle not possible.
Those committed to a mechanistic view of Nature will fall back on the idea that an effectively complete theory of Nature preserves all that is needed of mechanism.
1 note · View note
michaelgogins · 1 year ago
Text
Paradoxes
I refer to paradoxes in ethics and political science.
In particular, I refer to the paradox of free speech.
Lying destroys freedom of speech, because if there is too much lying, nobody knows what to believe, and speech is not free because it has no function. Yet censorship also destroys freedom of speech, because if there is no freedom to discuss what should and should not be censored, there is no freedom of speech.
This paradox reveals the necessity of universal morality for its resolution.
The same structure exists in Karl Popper's paradox of tolerance: A society with unconditional tolerance will be destroyed by the intolerant.
These paradoxes strongly imply that a society where the preponderance of political power rests with those who mostly conform to universal morality, will preserve and even extend freedom of speech and tolerance; whereas, a society where the preponderance of political power rests with those who do not mostly conform with universal morality, will gradually or abruptly destroy freedom of speech and tolerance.
Note well, the numbers do not matter. A monarch could observe universal morality, and the demos could fail to observe it.
1 note · View note
michaelgogins · 1 year ago
Text
From this point on, the path is only listening to the silence.
0 notes
michaelgogins · 2 years ago
Text
More Notes on the Computer Music Playpen
I have finished maintenance on the VST3 plugin opcodes for Csound, Csound for Android, and some other things, and am re-focusing in composition.
One thing that happened as I was cleaning up the VST3 opcodes is that I discovered a very important thing. There are computer music programs that function as VST3 plugins and that significantly exceed the quality or power of what Csound has so far done on its own. Here are just a few examples that I am using or plan to use:
The Valhalla reverbs by Sean Costello -- I think these actually derive from a reverb design that Sean did in the 1990s when he and I both were attending the Woof meetings at Columbia University. Sean's reverb design was ported first to Csound orchestra code, and then to C as a built-in opcode. It's the best and most widely used reverb in Csound, but it's not as good as the Valhalla reverbs, partly because the Valhalla reverbs can do a good job of preserving stereo.
Cardinal -- This is a fairly complete port of the VCV Rack "virtual Eurorack" patchable modular synthesiser not only to a VST3 plugin, but also to a WebAssembly module. This is exactly like sticking a very good Eurorack synthesizer right into Csound.
plugdata -- This is most of Pure Data, but with a slightly different and somewhat smoother user interface, as a VST3 plugin.
I also discovered that some popular digital audio workstations (DAWs), the workhorses of the popular music production industry, can embed algorithmic composition code written in a scripting language. For example, Reaper can host scripts written in Lua or Python, both of which are entirely capable of sophisticated algorithmic composition, and both of which have Csound APIs. And of course any of these DAWs can host Csound in the form of a Cabbage plugin.
All of this raises for me the question: What's the playpen? What's the most productive environment for me to compose in? Is it a DAW that now embeds my algorithms and my Csound instruments, or is it just code?
Right now the answer is not simply code, but specifically HTML5 code. And here is my experience and my reasons for not jumping to a DAW.
I don't want my pieces to break. I want my pieces to run in 20 years (assuming I am still around) just as they run today. Both HTML5 and Csound are "versionless" in the sense that they intend, and mostly succeed, in preserving complete backwards compatibility. There are Csound pieces from before 1990 that run just fine today -- that's over 33 years. But DAWs, so far, don't have such a good record in this department. I think many people find they have to keep porting older pieces to keep them running in newer software.
I'm always using a lot of resources, a lot of software, a lot of libraries. The HTML5 environment just makes this a great deal easier. Any piece of software that either is written in JavaScript or WebAssembly, or provides a JavaScript interface, can be used in a piece with a little bit of JavaScript glue code. That includes Csound itself, my algorithmic composition software CsoundAC, the live coding system Strudel, and now Cardinal.
The Web browser itself contains a fantastic panoply of software, notably WebGL and WebAudio, so it's very easy to do visual music in the HTML5 environment.
2 notes · View notes
michaelgogins · 2 years ago
Text
Fractals, Roots, and Mapping
Long ago I published an article, "Iterated Function Systems Music" (Computer Music Journal 15.1, 1991). Later I published a related article, "…How I Became Obsessed with Finding a Mandelbrot Set for Sounds" (News of Music 13, 1992).
As my CMJ article showed, we already know from the Collage Theorem that an iterated function system can approximate as closely as desired any set, including a musical score or even a soundfile.
My musical motivation here is that, if such an IFS is considered as a Julia set, then near the point in the Mandelbrot set that generates that Julia set, there is a region that resembles the Julia set in shape. This provides a new method of composing music: exploring the Mandelbrot set to identify parameters that produce what looks to be an interesting score. I call this "parametric composition." This method has the potential to greatly simplify and speed up the composition of music that a composer does not simply imagine, and might find difficult or even impossible to imagine.
Over the years I have taken some time, now and then, to refine these ideas. I now have a working prototype of part of the idea -- the part about using a fractal function to represent (a slightly simplified form of) any possible musical score.
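As a small illustration of the idea (a sketch of my own, not the prototype itself), here is a chaos-game rendering of a two-map iterated function system whose attractor is then read off as a simplified (time, pitch) score. The two affine maps are arbitrary illustrative choices, not the ones from the 1991 article.

```python
import random

# Chaos game: repeatedly apply a randomly chosen contractive affine map.
# The cloud of points converges onto the attractor of the IFS.

def chaos_game(maps, n=2000, seed=1):
    random.seed(seed)
    x, y = 0.0, 0.0
    points = []
    for i in range(n):
        a, b, c, d, e, f = random.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:                    # discard the initial transient
            points.append((x, y))
    return points

# Two contractive affine maps (a, b; c, d) plus translation (e, f);
# both scale by 0.5, so the attractor lies inside the unit square.
maps = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.5),
]

# Read the attractor as a simplified score: x -> onset time in seconds,
# y -> MIDI pitch folded into a two-octave band above MIDI note 36.
simple_score = [(round(x * 60.0, 3), int(36 + y * 24))
                for x, y in chaos_game(maps)]
print(len(simple_score), simple_score[0])
```

By the Collage Theorem, choosing the maps so that the union of their images of a target score covers that score makes the attractor approximate the score itself; the sketch above only shows the rendering half of that process.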
Today I found on the blog of John Baez (a mathematical physicist, and a cousin of Joan) a link to an article by him and colleagues in Notices of the American Mathematical Society, "The Beauty of Roots". In this article the authors discuss how a map of the roots of Littlewood polynomials closely resembles the attractor, or dragon, created by mapping two contractive functions of the complex plane onto itself -- when the parameter of the contractive mapping is a root of the polynomial! You can play with these polynomials here, and with the duality here.
Anyway, the interest for me is the duality of generators with maps of generators, and Baez' article shows that this idea extends very deeply. There may be more than one way to derive a fractal function whose attractor is the graph of a score!
2 notes · View notes
michaelgogins · 2 years ago
Text
Talk/Performance at Luck Dragon
This coming Friday the 27th, at 6 PM, at Luck Dragon on 100 Main Street in Delhi, New York, I will present my browser-based computer music system cloud-5, play some music, and talk about algorithmic composition.
The slides for my talk are here and -- if you download the PDF! -- all links should be "live."
2 notes · View notes
michaelgogins · 2 years ago
Text
New beta release of cloud-5
This includes the piece that I will be performing at Brooklyn College tonight. Get it here.
2 notes · View notes
michaelgogins · 2 years ago
Text
Upcoming Performances
Also this: a sound evening at Luck Dragon in Delhi, New York, a mere 16 miles (we drive that far just to buy groceries) down the road from our farm in the Catskills. Yes, computer music has finally found a home in Delaware County.
I will be one of the two presenters; I will demonstrate my cloud-5 system, perform a piece using cloud-5, discuss that a bit, play a straight ahead piece of what we sometimes still call "tape music," and discuss everything.
0 notes