#cognitive architecture
frank-olivier · 9 days ago
Rethinking AI Research: The Paradigm Shift of OpenAI’s Model o1
The unveiling of OpenAI's model o1 marks a pivotal moment in the evolution of language models, showcasing unprecedented integration of reinforcement learning and Chain of Thought (CoT). This synergy enables the model to navigate complex problem-solving with human-like reasoning, generating intermediate steps towards solutions.
OpenAI's approach, inferred to leverage either a "guess and check" process or the more sophisticated "process rewards," marks a shift in how language models are used at inference time. By incorporating a verifier, likely a learned one, to check candidate solutions for accuracy, the approach addresses the longstanding challenge of intractable expectation computations in CoT models, potentially outperforming traditional ancestral sampling through rejection sampling and rollout techniques.
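To make the inferred "generate, then verify" loop concrete, here is a minimal, hypothetical sketch of best-of-N sampling with a verifier (one simple form of rejection sampling). The functions generate_cot and verifier_score are placeholders for illustration only; OpenAI has not published o1's actual mechanism.

```python
import random
from typing import Callable, List, Tuple

def best_of_n(
    prompt: str,
    generate_cot: Callable[[str], str],           # samples one chain-of-thought solution
    verifier_score: Callable[[str, str], float],  # scores how likely a solution is correct
    n: int = 16,
) -> Tuple[str, float]:
    """Sample n candidate reasoning traces and keep the one the verifier rates highest."""
    candidates: List[Tuple[str, float]] = []
    for _ in range(n):
        solution = generate_cot(prompt)
        candidates.append((solution, verifier_score(prompt, solution)))
    return max(candidates, key=lambda pair: pair[1])

# Toy usage with stub functions, purely to show the control flow.
if __name__ == "__main__":
    stub_generate = lambda p: f"candidate-{random.randint(0, 9)}"
    stub_verify = lambda p, s: random.random()
    best, score = best_of_n("What is 17 * 24?", stub_generate, stub_verify, n=8)
    print(best, round(score, 3))
```

Spending more samples (larger n) trades extra test-time compute for a better chance that at least one candidate passes the verifier, which is the core of the test-time scaling argument.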
The evolution of baseline approaches, from ancestral sampling to integrated generator-verifier models, highlights the community's pursuit of efficiency and accuracy. The speculated merging of generator and verifier in OpenAI's model invites exploration of unified, high-performance architectures. However, the precise mechanisms behind o1 remain unconfirmed and experimentally unvalidated, underscoring the need for collaborative, open-source efforts.
A shift in research focus, from architectural innovation to optimizing test-time compute, suggests that future gains will come as much from how models spend inference compute as from model scale. Community-driven replication and development of large-scale, RL-based systems would foster a collaborative ecosystem. Evaluation will also shift towards benchmarks that assess step-by-step solutions to complex problems, redefining what counts as superhuman AI capability.
Speculations on Test-Time Scaling (Sasha Rush, November 2024)
Friday, November 15, 2024
archiveofaffinities · 2 months ago
William J. Mitchell, The Logic of Architecture: Design, Computation, and Cognition, A Vocabulary of Stair Motifs (after Thiis-Evensen, 1988)
geometrymatters · 5 months ago
The geometry of the Borromean Rings
Borromean rings are a captivating geometric structure composed of three interlinked rings. What makes them unique is their interdependency; if any one ring is removed, the entire structure collapses. This fascinating property, known as "Brunnian" linkage, means that no two rings are directly linked, yet all three are inseparable as a group. This intricate dance of unity and fragility offers a profound insight into the nature of interconnected systems, both in mathematics and beyond.
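For readers who want to see the linkage for themselves, below is a minimal plotting sketch (assuming NumPy and a reasonably recent Matplotlib) of one standard realization of the Borromean rings: three congruent ellipses lying in mutually perpendicular coordinate planes. The unequal axes are essential, since it is a known result that three flat circles cannot form Borromean rings.

```python
import numpy as np
import matplotlib.pyplot as plt

a, b = 2.0, 1.0                         # semi-axes of each ellipse; a > b is required
t = np.linspace(0.0, 2.0 * np.pi, 400)  # parameter running once around each ring
zero = np.zeros_like(t)

# Three congruent ellipses, one per coordinate plane, cyclically permuted.
# No two of them touch or link pairwise, yet the three together cannot be pulled apart.
rings = [
    (a * np.cos(t), b * np.sin(t), zero),  # ring in the xy-plane
    (zero, a * np.cos(t), b * np.sin(t)),  # ring in the yz-plane
    (b * np.sin(t), zero, a * np.cos(t)),  # ring in the zx-plane
]

ax = plt.figure().add_subplot(projection="3d")
for x, y, z in rings:
    ax.plot(x, y, z)
ax.set_box_aspect((1, 1, 1))  # equal axis scaling so the linkage is not visually distorted
plt.show()
```

Removing any one of the three curves leaves the remaining two unlinked, which is exactly the Brunnian property described above.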
Borromean Rings and Mathematical Knots
Borromean rings also find a significant place in the study of mathematical knots, a field dedicated to understanding how loops and tangles can be organized and categorized. The intricate relationship among the rings provides a rich visual and conceptual tool for mathematicians. Knot theorists use these rings to explore properties of space, topology, and the ways in which complex systems can be both resilient and fragile. The visual representation of Borromean rings in knot theory not only aids in mathematical comprehension but also enhances our appreciation of their symmetrical beauty and profound interconnectedness.
Symbolism and Divinity in Borromean Rings
Throughout history, Borromean rings have been imbued with symbolic significance, often associated with divinity and the concept of the trinity. In Christianity, they serve as a powerful visual metaphor for the Holy Trinity – the Father, the Son, and the Holy Spirit – illustrating how three distinct entities can form a single, inseparable divine essence. This symbol is not confined to Christianity alone; many other cultures and religions see the interconnected rings as representations of unity, interdependence, and the intricate balance of the cosmos.
Borromean Rings as a Metaphor for Illusory Reality
Beyond their mathematical and symbolic significance, Borromean rings offer a profound metaphor for the nature of reality itself. They illustrate how interconnectedness can create the illusion of a solid, stable structure. This resonates with philosophical and spiritual notions that reality, as perceived, is a complex web of interdependent elements, each contributing to an overarching illusion of solidity and permanence. In this way, the Borromean rings challenge us to reconsider the nature of existence and the interconnectedness of all things.
russellmoreton · 2 months ago
Landscape, asperity/kinship/collage/photogram by Russell Moreton. Via Flickr: russellmoreton.blogspot.com/
The Unfolded Garment / Embracing Subjectivity / Pierced Assemblage on Photogram
What is Philosophy? by Gilles Deleuze and Felix Guattari. Their book is a profound and careful interrogation of what it might mean to be a 'friend of wisdom', but it is also a devastating attack on the sterility of what philosophy has become, when 'the only events are exhibitions and the only concepts are products which can be sold'. Philosophy, they insist, is not contemplation, reflection or communication, but the creation of concepts. www.amazon.co.uk/gp/product/0860916863/ref=pe_2724401_140...
fugly-jeans · 6 months ago
husb & i walked by our straight male friend who I have very much been at odds with lately on the way to our dinner date & he texted the gc to tell me/our friends that I “looked fire emoji in that dress” and my ego has never been more fed in my life lol
he is self-professed only into “cute girls” (his ex looked 10 years younger than she was and weighed about 110) and I am plus sized and very much consider myself a woman who looks her age - I would lowkey be horrified if he was actually *attracted* to me, but it’s nice to be told I’m hot (because I am!)
jcmarchi · 12 days ago
Do LLMs Remember Like Humans? Exploring the Parallels and Differences
New Post has been published on https://thedigitalinsider.com/do-llms-remember-like-humans-exploring-the-parallels-and-differences/
Memory is one of the most fascinating aspects of human cognition. It allows us to learn from experiences, recall past events, and manage the world’s complexities. As Artificial Intelligence (AI) advances, particularly with Large Language Models (LLMs), machines are demonstrating remarkable capabilities, processing and generating text that mimics human communication. This raises an important question: Do LLMs remember the same way humans do?
At the leading edge of Natural Language Processing (NLP), models like GPT-4 are trained on vast datasets. They understand and generate language with high accuracy. These models can engage in conversations, answer questions, and create coherent and relevant content. However, despite these abilities, how LLMs store and retrieve information differs significantly from human memory. Personal experiences, emotions, and biological processes shape human memory. In contrast, LLMs rely on static data patterns and mathematical algorithms. Therefore, understanding this distinction is essential for exploring the deeper complexities of how AI memory compares to that of humans.
How Does Human Memory Work?
Human memory is a complex and vital part of our lives, deeply connected to our emotions, experiences, and biology. At its core, it includes three main types: sensory memory, short-term memory, and long-term memory.
Sensory memory captures quick impressions from our surroundings, like the flash of a passing car or the sound of footsteps, but these fade almost instantly. Short-term memory, on the other hand, holds information briefly, allowing us to manage small details for immediate use. For instance, when one looks up a phone number and dials it immediately, that’s the short-term memory at work.
Long-term memory is where the richness of human experience lives. It holds our knowledge, skills, and emotional memories, often for a lifetime. This type of memory includes declarative memory, which covers facts and events, and procedural memory, which involves learned tasks and habits. Moving memories from short-term to long-term storage is a process called consolidation, and it depends on the brain’s biological systems, especially the hippocampus. This part of the brain helps strengthen and integrate memories over time. Human memory is also dynamic, as it can change and evolve based on new experiences and emotional significance.
But recalling memories is not always a perfect process. Many factors, like context, emotions, or personal biases, can affect our memory. This makes human memory incredibly adaptable, though occasionally unreliable. We often reconstruct memories rather than recalling them precisely as they happened. This adaptability, however, is essential for learning and growth. It helps us forget unnecessary details and focus on what matters. This flexibility is one of the main ways human memory differs from the more rigid systems used in AI.
How Do LLMs Process and Store Information?
LLMs, such as GPT-4 and BERT, operate on entirely different principles when processing and storing information. These models are trained on vast datasets comprising text from various sources, such as books, websites, articles, etc. During training, LLMs learn statistical patterns within language, identifying how words and phrases relate to one another. Rather than having a memory in the human sense, LLMs encode these patterns into billions of parameters, which are numerical values that dictate how the model predicts and generates responses based on input prompts.
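As a toy illustration of what "learning statistical patterns within language" means, here is a bigram counter that predicts the next word from co-occurrence counts. This is not how GPT-4 or BERT work internally (they learn such regularities implicitly in billions of parameters rather than an explicit table), but it conveys the same underlying idea of prediction from patterns in training text.

```python
# A toy stand-in (clearly not a real LLM) for "learning statistical patterns in
# language": count which word follows which in a tiny corpus, then predict the
# most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat sat on the rug .".split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # 'cat', the most frequent word after 'the' in this corpus
print(predict_next("sat"))  # 'on'
```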
LLMs do not have explicit memory storage like humans. When we ask an LLM a question, it does not remember a previous interaction or the specific data it was trained on. Instead, it generates a response by calculating the most likely sequence of words based on its training data. This process is driven by complex algorithms, particularly the transformer architecture, which allows the model to focus on relevant parts of the input text (attention mechanism) to produce coherent and contextually appropriate responses.
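The attention computation itself can be sketched in a few lines. Below is a generic NumPy implementation of scaled dot-product attention, the published building block of the transformer architecture; it illustrates the standard formula, not any specific model's code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays. Returns the attended output and the weight matrix."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights                     # blend the values by attention weight

# Toy usage: self-attention over 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
output, attn = scaled_dot_product_attention(x, x, x)
print(attn.round(2))  # row i shows how much token i attends to each token in the input
```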
In this way, LLMs’ memory is not an actual memory system but a byproduct of their training. They rely on patterns encoded during training to generate responses, and once training is complete, they do not learn or adapt in real time; their knowledge changes only if they are retrained on new data. This is a key distinction from human memory, which constantly evolves through lived experience.
Parallels Between Human Memory and LLMs
Despite the fundamental differences between how humans and LLMs handle information, some interesting parallels are worth noting. Both systems rely heavily on pattern recognition to process and make sense of data. In humans, pattern recognition is vital for learning—recognizing faces, understanding language, or recalling past experiences. LLMs, too, are experts in pattern recognition, using their training data to learn how language works, predict the next word in a sequence, and generate meaningful text.
Context also plays a critical role in both human memory and LLMs. In human memory, context helps us recall information more effectively. For example, being in the same environment where one learned something can trigger memories related to that place. Similarly, LLMs use the context provided by the input text to guide their responses. The transformer model enables LLMs to pay attention to specific tokens (words or phrases) within the input, ensuring the response aligns with the surrounding context.
Moreover, humans and LLMs show what can be likened to primacy and recency effects. Humans are more likely to remember items at the beginning and end of a list, known as the primacy and recency effects. In LLMs, this is mirrored by how the model weighs specific tokens more heavily depending on their position in the input sequence. The attention mechanisms in transformers often prioritize the most recent tokens, helping LLMs to generate responses that seem contextually appropriate, much like how humans rely on recent information to guide recall.
Key Differences Between Human Memory and LLMs
While the parallels between human memory and LLMs are interesting, the differences are far more profound. The first significant difference is the nature of memory formation. Human memory constantly evolves, shaped by new experiences, emotions, and context. Learning something new adds to our memory and can change how we perceive and recall memories. LLMs, on the other hand, are static after training. Once an LLM is trained on a dataset, its knowledge is fixed until it undergoes retraining. It does not adapt or update its memory in real time based on new experiences.
Another key difference is in how information is stored and retrieved. Human memory is selective—we tend to remember emotionally significant events, while trivial details fade over time. LLMs do not have this selectivity. They store information as patterns encoded in their parameters and retrieve it based on statistical likelihood, not relevance or emotional significance. This leads to one of the most apparent contrasts: “LLMs have no concept of importance or personal experience, while human memory is deeply personal and shaped by the emotional weight we assign to different experiences.”
One of the most critical differences lies in how forgetting functions. Human memory has an adaptive forgetting mechanism that prevents cognitive overload and helps prioritize important information. Forgetting is essential for maintaining focus and making space for new experiences. This flexibility lets us let go of outdated or irrelevant information, constantly updating our memory.
In contrast, LLMs do not forget in this adaptive way. Once an LLM is trained, it retains everything encoded from its training data, and that knowledge changes only if the model is retrained. However, in practice, LLMs can lose track of earlier information during long conversations due to token length limits, which can create the illusion of forgetting, though this is a technical limitation rather than a cognitive process.
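A rough sketch of that token-limit behavior is shown below; the 512-token budget and the four-characters-per-token estimate are illustrative assumptions, not a real tokenizer or any vendor's actual limit.

```python
# A minimal sketch of the "illusion of forgetting" caused by a fixed context window.
from typing import Dict, List

def rough_token_count(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic in place of a real tokenizer

def truncate_history(messages: List[Dict[str, str]], max_tokens: int = 512) -> List[Dict[str, str]]:
    """Keep only the most recent messages that fit within the context budget."""
    kept, used = [], 0
    for message in reversed(messages):        # walk backwards from the newest turn
        cost = rough_token_count(message["content"])
        if used + cost > max_tokens:
            break                             # everything older is silently dropped
        kept.append(message)
        used += cost
    return list(reversed(kept))

# Usage: earlier turns never reach the model once the budget is exhausted.
history = [{"role": "user", "content": f"turn {i}: " + "x" * 400} for i in range(20)]
print(len(truncate_history(history)))         # only the last few turns survive
```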
Finally, human memory is intertwined with consciousness and intent. We actively recall specific memories or suppress others, often guided by emotions and personal intentions. LLMs, by contrast, lack awareness, intent, or emotions. They generate responses based on statistical probabilities without understanding or deliberate focus behind their actions.
Implications and Applications
The differences and parallels between human memory and LLMs have important implications for both cognitive science and practical applications. By studying how LLMs process language and information, researchers can gain new insights into human cognition, particularly in areas like pattern recognition and contextual understanding. Conversely, understanding human memory can help refine LLM architecture, improving their ability to handle complex tasks and generate more contextually relevant responses.
Regarding practical applications, LLMs are already used in fields like education, healthcare, and customer service. Understanding how they process and store information can lead to better implementation in these areas. For example, in education, LLMs could be used to create personalized learning tools that adapt based on a student’s progress. In healthcare, they can assist in diagnostics by recognizing patterns in patient data. However, ethical issues must also be addressed, particularly regarding privacy, data security, and the potential misuse of AI in sensitive contexts.
The Bottom Line
The relationship between human memory and LLMs reveals exciting possibilities for AI development and our understanding of cognition. While LLMs are powerful tools capable of mimicking certain aspects of human memory, such as pattern recognition and contextual relevance, they lack the adaptability and emotional depth that defines human experience.
As AI advances, the question is not whether machines will replicate human memory but how we can employ their unique strengths to complement our abilities. The future lies in how these differences can drive innovation and discoveries.
isubhamdas · 3 months ago
How to Effectively Apply Behavioral Economics for Consumer Engagement?
I never thought behavioral economics would revolutionize my marketing strategy.
But here I am, telling you how it changed everything.
It all started when our company's engagement rates plummeted. We were losing customers faster than we could acquire them.
That's when I stumbled upon behavioral economics.
I began by implementing subtle changes. We reframed our pricing strategy using the decoy effect.
Suddenly, our premium package became more attractive. Sales increased by 15% in the first month.
Next, we tapped into loss aversion. Our email campaigns highlighted what customers might miss out on. Open rates soared from 22% to 37%.
But the real game-changer was social proof. We showcased user testimonials prominently on our website. Conversion rates jumped by 28%.
As we delved deeper, we encountered challenges. Some team members worried about ethical implications.
Were we manipulating consumers?
We addressed this by prioritizing transparency.
Every nudge we implemented was designed to benefit both the customer and our business.
This approach not only eased internal concerns but also built trust with our audience.
The results spoke for themselves. Overall engagement increased by 45% within six months. Customer retention improved by 30%.
But it wasn't just about numbers. We were creating meaningful connections. Customers felt understood and valued.
Looking back, I realize behavioral economics isn't about tricks or gimmicks. It's about understanding human behavior and using that knowledge to create win-win situations.
So, how can you improve your consumer engagement using behavioral economics?
Start by observing your customers' behaviors. What motivates them? What holds them back?
Use these insights to craft strategies that resonate.
Remember, the goal is to guide, not manipulate.
How are you applying behavioral economics in your business?
metadevo · 11 months ago
For the new MetaDevo blog on AI, robotics and other things, visit:
anokha-swad · 1 year ago
https://cognitivetype.com/fi-behaviorism-mythology/
Department Store "Abraham & Straus" (1950-51) in Hempstead, NY, USA, by Marcel Breuer
beakers-and-telescopes · 1 year ago
Slime Molds and Intelligence
[Image: the slime mold Physarum polycephalum]
Okay, despite going into a biology related field, I only just learned about slime molds, and hang on, because it gets WILD.
This guy in the picture is called Physarum polycephalum, one of the more commonly studied types of slime mold. It was originally thought to be a fungus, though we now know it to actually be a type of protist (a sort of catch-all group for any eukaryotic organism that isn't a plant, animal, or a fungus). As protists go, it's pretty smart. It is very good at finding the most efficient way to get to a food source, or multiple food sources. In fact, placing a slime mold on a map with food sources at all of the major cities can give a pretty good idea of an efficient transportation system. Here is a slime mold growing over a map of Tokyo compared to the actual Tokyo railway system:
[Image: a slime mold grown over a map of Tokyo, shown alongside the actual Tokyo railway network]
Pretty good, right? Though they don't have eyes, ears, or noses, the slime molds are able to sense objects at a distance kind of like a spider using tiny differences in tension and vibrations to sense a fly caught in its web. Instead of a spiderweb, though, this organism relies on proteins called TRP channels. The slime mold can then make decisions about where it wants to grow. In one experiment, a slime mold was put in a petri dish with one glass disk on one side and 3 glass disks on the other side. Even though the disks weren't a food source, the slime mold chose to grow towards and investigate the side with 3 disks over 70% of the time.
Even more impressive is that these organisms have some sense of time. If you blow cold air on them every hour on the hour, then after only three hours they'll start to shrink away in anticipation just before the air is due to hit.
Now, I hear you say, this is cool and all, but like, I can do all those things too. The slime mold isn't special...
To which I would like to point out that you have a significant advantage over the slime mold, seeing as you have a brain.
Yeah, these protists can accomplish all of the things I just talked about, and they just... don't have any sort of neural architecture whatsoever? They don't even have brain cells, let alone the structures that should allow them to process sensory information and make decisions because of it. Nothing that should give them a sense of time. Scientists literally have no idea how this thing is able to "think". But however it does, it is sure to be a form of cognition that is completely and utterly different from anything that we're familiar with.
mindblowingscience · 3 months ago
The longest genome of all the animals on Earth belongs not to a giant, or a cognitively advanced critter, but a writhing, water-dwelling creature seemingly frozen in time, right at the cusp of evolving into a beast that can live on land. These are the lungfish, a class of freshwater vertebrates whose peculiar characteristics are reflected in a colossal genetic code. Able to breathe both air and water, with limb-like fins and a well-developed skeletal architecture, these strange ancient creatures are thought to share a common ancestor with all four-limbed vertebrates, known as tetrapods.
Continue Reading.
reasonsforhope · 1 year ago
Humans are so cute. They think they can outsmart birds. They place nasty metal spikes on rooftops and ledges to prevent birds from nesting there.
It’s a classic human trick known in urban design as “evil architecture”: designing a place in a way that’s meant to deter others. Think of the city benches you see segmented by bars to stop homeless people sleeping there.
But birds are genius rebels. Not only are they undeterred by evil architecture, they actually use it to their advantage, according to a new Dutch study published in the journal Deinsea.
Crows and magpies, it turns out, are learning to rip strips of anti-bird spikes off of buildings and use them to build their nests. It’s an incredible addition to the growing body of evidence about the intelligence of birds, so wrongly maligned as stupid that “bird-brained” is still commonly used as an insult...
Magpies also use anti-bird spikes for their nests. In 2021, a hospital patient in Antwerp, Belgium, looked out the window and noticed a huge magpie’s nest in a tree in the courtyard. Biologist Auke-Florian Hiemstra of Leiden-based Naturalis Biodiversity Center, one of the study’s authors, went to collect the nest and found that it was made out of 50 meters of anti-bird strips, containing no fewer than 1,500 metal spikes.
Hiemstra describes the magpie nest as “an impregnable fortress.”
Pictured: A huge magpie nest made out of 1,500 metal spikes.
Magpies are known to build roofs over their nests to prevent other birds from stealing their eggs and young. Usually, they scrounge around in nature for thorny plants or spiky branches to form the roof. But city birds don’t need to search for the perfect branch — they can just use the anti-bird spikes that humans have so kindly put at their disposal.
“The magpies appear to be using the pins exactly the same way we do: to keep other birds away from their nest,” Hiemstra said.
Another urban magpie nest, this one from Scotland, really shows off the roof-building tactic:
Pictured: A nest from Scotland shows how urban magpies are using anti-bird spikes to construct a roof meant to protect their young and eggs from predators.
Birds had already been spotted using upward-pointing anti-bird spikes as foundations for nests. In 2016, the so-called Parkdale Pigeon became Twitter-famous for refusing to give up when humans removed her first nest and installed spikes on her chosen nesting site, the top of an LCD monitor on a subway platform in Melbourne. The avian architect rebelled and built an even better home there, using the spikes as a foundation to hold her nest more securely in place.
...Hiemstra’s study is the first to show that birds, adapting to city life, are learning to seek out and use our anti-bird spikes as their nesting material. Pretty badass, right?
The genius of birds — and other animals we underestimate
It’s a well-established fact that many bird species are highly intelligent. Members of the corvid family, which includes crows and magpies, are especially renowned for their smarts. Crows can solve complex puzzles, while magpies can pass the “mirror test” — the classic test that scientists use to determine if a species is self-aware.
Studies show that some birds have evolved cognitive skills similar to our own: They have amazing memories, remembering for months the thousands of different hiding places where they’ve stashed seeds, and they use their own experiences to predict the behavior of other birds, suggesting they’ve got some theory of mind.
And, as author Jennifer Ackerman details in The Genius of Birds, birds are brilliant at using tools. Black palm cockatoos use twigs as drumsticks, tapping out a beat on a tree trunk to get a female’s attention. Jays use sticks as spears to attack other birds...
Birds have also been known to use human tools to their advantage. When carrion crows want to crack a walnut, for example, they position the nut on a busy road, wait for a passing car to crush the shell, then swoop down to collect the nut and eat it. This behavior has been recorded several times in Japanese crows.
But what’s unique about Hiemstra’s study is that it shows birds using human tools, specifically designed to thwart birds’ plans, in order to thwart our plans instead. We humans try to keep birds away with spikes, and the birds — ingenious rebels that they are — retort: Thanks, humans!
-via Vox, July 26, 2023
archiveofaffinities · 2 months ago
William J. Mitchell, The Logic of Architecture: Design, Computation, and Cognition, Symmetrical harmonic proportions for rooms as recommended by Palladio
geometrymatters · 5 months ago
Buckminster Fuller: Synergetics and Systems
Synergetics
Synergetics, a concept introduced by Buckminster Fuller, is an interdisciplinary study of geometry, patterns, and spatial relationships that provides a method and a philosophy for understanding and solving complex problems. The term “synergetics” comes from the Greek word “synergos,” meaning “working together.” Fuller’s synergetics is a system of thinking that seeks to understand the cooperative interactions among parts of a whole, leading to outcomes that cannot be predicted from the behavior of the parts studied in isolation.
Fuller’s understanding of systems relied upon the concept of synergy: whole systems exhibit behaviors that cannot be predicted from the behaviors of their components alone. This perspective invites us to transcend the limitations of our immediate perception, to perceive larger systems, and to delve deeper to find the relevant systems within a situation. It beckons us to ‘tune in’ to the appropriate systems as we bring our awareness to a particular challenge or situation.
He perceived the Universe as an intricate construct of systems. He proposed that everything, from our thoughts to the cosmos, is a system. This perspective, now a cornerstone of modern thinking, suggests that the geometry of systems and their models are the keys to deciphering the behaviors and interactions we witness in the Universe.
In his “Synergetics: Explorations in the Geometry of Thinking” Fuller presents a profound exploration of geometric thinking, offering readers a transformative journey through a four-dimensional Universe. Fuller’s work combines geometric logic with metaphors drawn from human experience, resulting in a framework that elucidates concepts such as entropy, Einstein’s relativity equations, and the meaning of existence. Within this paradigm, abstract notions become lucid, understandable, and immediately engaging, propelling readers to delve into the depths of profound philosophical inquiry.
Fuller’s framework revolves around the principle of synergetics, which emphasizes the interconnectedness and harmony of geometric relationships. Drawing inspiration from nature, he illustrates that balance and equilibrium are akin to a stack of closely packed oranges in a grocery store, highlighting the delicate equilibrium present in the Universe. By intertwining concepts from visual geometry and technical design, Fuller’s work demonstrates his expertise in spatial understanding and mathematical prowess. The book challenges readers to expand their perspectives and grasp the intricate interplay between shapes, mathematics, and the dimensions of the human mind.
At its core, “Synergetics” presents a philosophical inquiry into the nature of existence and the human thought process. Fuller’s use of neologisms and expansive, thought-provoking ideas sparks profound contemplation. While some may find the book challenging due to its complexity, it is a testament to Fuller’s intellectual prowess and his ability to offer unique insights into the fundamental workings of the Universe, pushing the boundaries of human knowledge and transforming the fields of design, mathematics, and philosophy.
When applied to cognitive science, the concept of synergetics offers a holistic approach to understanding the human mind. It suggests that cognitive processes, rather than being separate functions, are interconnected parts of a whole system that work together synergistically. This perspective aligns with recent developments in cognitive science that view cognition as a complex, dynamic system. It suggests that our cognitive abilities emerge from the interaction of numerous mental processes, much like the complex patterns that emerge in physical and biological systems studied under synergetics.
In this context, geometry serves as a language to describe this cognitive architecture. Just as the geometric patterns in synergetic structures reveal the underlying principles of organization, the ‘geometric’ arrangement of cognitive processes could potentially reveal the principles that govern our cognitive abilities. This perspective extends Fuller’s belief in the power of geometry as a tool for understanding complex systems, from the physical structures he designed to the very architecture of our minds. It suggests that by studying the ‘geometry’ of cognition, we might gain insights into the principles of cognitive organization and the nature of human intelligence.
Systems
Fuller’s philosophy underscored that systems are distinct entities, each with a unique shape that sets them apart from their surroundings. He envisioned each system as a tetrahedron, a geometric form with an inside and an outside, connected by a minimum of four corners or nodes. These nodes, connected by what Fuller referred to as relations, serve as the sinews that hold the system together. These relations could manifest as flows, forces, or fields. Fuller’s philosophy also emphasized that systems are not isolated entities. At their boundaries, every node is linked to its surroundings, and all system corners are ‘leaky’, either brimming with extra energy or in need of energy.
Fuller attributed the properties and characteristics of systems to what he called generalized principles. These are laws of the Universe that hold true everywhere and at all times. For instance, everything we perceive is a specific configuration of energy or material, and the form of this configuration is determined by these universal principles.
Fuller’s philosophy also encompassed the idea that every situation is a dance of interacting systems. He encouraged us to explore the ways in which systems interact within and with each other. He saw each of us as part of the cosmic dance, continually coupling with other systems. This coupling could be as loose as the atoms of air in a room, or as flexible as molecules of water flowing.
We find that precession is completely regenerative one brings out the other. So I gave you the dropping the stone in the water, and the wave went out that way. And this way beget that way. And that way beget that way. And that’s why your circular wave emanates. Once you begin to get into “precession” you find yourself understanding phenomena that you’ve seen a stone falling in the water all of your life, and have never really known why the wave does just what it does.
Fuller’s concept of precession, or systems coupling, is a testament to his deep understanding of systems and their interactions. He described how we sometimes orbit a system, such as a political movement or an artistic method. Our orbit remains stable when the force that attracts us is dynamically balanced by the force that propels us away. This understanding of precession allows us to comprehend phenomena that we have observed all our lives, yet never truly understood why they behave as they do. Fuller’s teachings on systems and their inherent geometry continue to illuminate our understanding of the Universe and our place within it.
russellmoreton · 8 months ago
Temporal Self : Light Laboratory Canterbury by Russell Moreton
azdoine · 25 days ago
the way you have to pretend that the world isn't ending in order to be functional enough to care about the future but you have to remember that the world is ending in order to try to save it. extremely normal and functional cognitive architecture