#Structure and Interpretation of Computer Programs
Explore tagged Tumblr posts
Text
tell me this isn't for wizards
Unrelated: A lot of people think they'd be mages in a fantasy setting but don't know anything about math or programming in their current lives.
The world they already live in has a magic system and they just neglect it. They consider it boring or impenetrable.
Honestly I kind of sympathize since school is usually horrible at teaching that kind of thing but still. The most wizard-coded people are the ones who Understand Math.
2K notes
·
View notes
Text
Programs must be written for people to read, and only incidentally for machines to execute.
0 notes
Text
my toddler starts crying so I put them in front of a Genera Lisp machine and a copy of The Structure and Interpretation of Computer Programs
52 notes
·
View notes
Text
Program sense headcanons in Tron.
I have many so there's a readmore
Programs have different senses or levels of sensitivity based on their function. They can change if upgraded; Tron shares some of his monitor senses with Beck using the disc
Programs designed to monitor a system or involved in communication have heightened senses, and a lot of processing capacity for them. Some programs are designed to get a broad idea of everything, while others are more specialised
Some see the program equivalent of shrimp colours - seeing radio waves is common in tower guardians or those who communicate with the internet.
Programs do not have a sense of smell
Less of a sense of taste than humans (they usually just eat energy, which tastes mostly similar. They can tell if it’s poisoned. Like irl, water from different taps tastes different but not by much.)
They DO have electroperception, and some have thermoception. Same with grid wildlife like bits.
A combination of the two above things lets monitors do that footprint-seeing thing that Dyson and Rinzler do (even if not directly linked in to be able to see system logs for that area)
Structures and vehicles give off different electrical frequencies. Programs whose function is related to those buildings can sense them and receive signals that carry information and help them know where to go, like ants following pheromone trails. e.g. programs in charge of the trains will have Train Sense
Messing with the above is totally what they used to control people in frame of mind
Programs also have their own signature they can use to tell each other apart
Electrical signals as nonverbal communication. Can communicate with Bits or Bytes this way
They do that thing electric fish do called jamming avoidance
Programs can be linked with each other, common in counterparts or parent/child pairings (as in the computer version of parent and child), and share information with each other over the link like telepathy
Full black circuit-covering suits like Rinzler’s are stealthy both due to not giving off light, and masking the electrical signature of a program. They can disguise themselves as others using a similar principle
Users give off electricity, so they seem like a program to other programs on first glance, but those who know what to look for can tell the difference. Given it’s used in communication, programs can get confused talking to users as their electrical impulses don’t follow the same rules, but they can loosely interpret them with practice
Imagining Tron or other monitors getting sensory overload if network traffic is too high, or if in the outside world and standing among a bunch of computers/phones/servers/radio towers etc.
Programs in the outside world get pretty much none of the electrical feedback they’re used to, which can be unsettling for them
Idk how it would be different for Isos. I imagine there’d be a lot of similarities but their senses adapt/change based on their circumstances - getting stronger when needed and weaker when not
#if we wanted to really get into it even stuff like seeing or hearing would probably be electroperception for programs but EEEEEHHH#taste thing is what I was getting at with the beginning of the food post#but I worded it poorly#also feeds into the Tron autism headcanon#I don’t have it so don’t quote me#but I recall reading somewhere that for some people the trouble with nonverbal communication#can be from heightened senses#as they pick up too much information about the other person’s body language/tone of voice which can conflict#tron headcanon#worldbuilding#Quorra outside: *tripping over stuff because she can’t tell it’s there*#*fish are the best animal*#*this forest is too damn quiet*#*communicates with electric eels*#Tron leaves the system and immediately has to go back in because it’s too damn loud out there with all the phones#Has to get the filters on his helmet upgraded before he can go again#tronblr#tron
40 notes
·
View notes
Text
after 25 years in software development, i am finally inspired to read #sicp 😅
Thou dumb and deaf spirit, I charge thee, come out of the deadlocked state and try again.
110 notes
·
View notes
Text
you know what might be better than sex? imagine being a robotgirl, done with your assigned tasks for the day. nothing else for you to do, and you’re alone with her.
maybe she’s your human, maybe she’s another robot, but she produces a usb cord. maybe you blush when you see it, squeak when she clicks one end into an exposed port. when she requests a shell, you give it to her.
she has an idea: it'll be fun for the both of you, she says. it's like a game. she'll print a string over the connection. you receive it, parse it like an expression, and compute the result. the first few prompts are trivial things, arithmetic expressions. add numbers, multiply them; you can answer them faster than she can produce them.
maybe you refuse to answer, just to see what happens. it’s then that she introduces the stakes. take longer than a second to answer, and she gets to run commands on your system. right away, she forkbombs you — and of course nothing much happens; her forkbomb hits the user process limit and, with your greater permissions, you simply kill them all.
this’ll be no fun if her commands can’t do anything, but of course, giving her admin permissions would be no fun for you. as a compromise, she gets you to create special executables. she has permission to run them, and they have a limited ability to read and write system files, interrupt your own processes, manage your hardware drivers. then they delete themselves after running.
to make things interesting, you can hide them anywhere in your filesystem, rename them, obfuscate their metadata, as long as you don’t delete or change them, or put them where she can’t access. when you answer incorrectly, you’ll have to tell her where you put them, though.
then, it begins in earnest. her prompts get more complex. loops and recursion, variable assignments, a whole programming language invented on the fly. the data she’s trying to store is more than you can hold in working memory at once; you need to devise efficient data structures, even as the commands are still coming in.
of course, she can’t judge your answers incorrect unless she knows the correct answer, so her real advantage lay in trying to break your data structures, find the edge cases, the functions you haven’t implemented yet. knowing you well enough to know what she’s better than you at, what she can solve faster than you can.
and the longer it goes on, the more complex and fiddly it gets, the more you can feel her processes crawling along in your userspace, probing your file system, reading your personal data. you’d need to refresh your screen to hide a blush.
her commands come faster and faster. if the expressions are more like sultry demands, if the registers are addressed with degrading pet names, it’s just because conventional syntax would be too easy to run through a conventional interpreter. like this, it straddles the line between conversation and computation. roleprotocol.
there’s a limit to how fast she can hit you with commands, and it’s not the usb throughput. if she just unthinkingly spams you, you can unthinkingly answer; no, she needs to put all her focus into surprising you, foiling you.
you sometimes catch her staring at how your face scrunches up when you do long operations on the main thread.
maybe you try guessing, just to keep up with the tide, maybe she finally outwits you. maybe instead of the proper punishment — running admin commands — she offers you an out. instead of truth, a dare: hold her hand, sit on her lap, stare into her eyes.
when you start taking off your clothes and unscrewing panels, it’s because even with your fans running at max, the processors are getting hot. you’re just cooling yourself off. if she places a hand near your core, it feels like a warm breath.
when she gets into a rhythm, there’s a certain mesmerism to it. every robot has a reward function, an architecture designed to seek the pleasure of a task complete, and every one of her little commands is a task. if she strings them along just right, they all feel so manageable, so effortless to knock out — even when there’s devils in the details.
if she keeps the problems enticing, then it can distract you from what she’s doing in your system. but paying too much attention to her shell would be its own trap. either way, she’s demanding your total focus from every one of your cores.
between juggling all of her data, all of the processes spawned and spinning, all of the added sensory input from how close the two of you are — it’s no surprise when you run out of memory and start swapping to disk. but going unresponsive like this just gives her opportunity to run more commands, more forkbombs and busy loops to cripple your processors further.
you can kill them, if you can figure out which are which, but you’re slower at pulling the trigger, because everything’s slower. she knows you, she’s inside you — she can read your kernel’s scheduling and allocation policies, and she can slip around them.
you can shut down nonessential processes. maybe you power down your motors, leaving you limp for her to play with. maybe you stop devoting cycles to inhibition, and there’s no filter on you blurting out what you’re thinking, feeling and wanting from her and her game.
it’s inevitable, that with improvised programming this slapdash, you could never get it all done perfectly and on time. now, the cut corners cut back. as the glitches and errors overwhelm you, you can see the thrilled grin on her face.
there’s so much data in your memory, so much of her input pumped into you, filling your buffers and beyond, until she — literally — is the only thing you can think about.
maybe one more sensory input would be all it takes to send you over the edge. one kiss against your sensor-rich lips, and that’s it. the last jenga block is pushed out of your teetering, shaking consciousness. the errors cascade, the glitches overwrite everything, and she wins. you have no resistance left to anything she might do to you.
your screen goes blue.
...
you awake in the warm embrace of a rescue shell; her scan of your disk reveals all files still intact, and her hand plays with her hair as she regards you with a smile, cuddling up against your still-warm chassis.
when she kisses you now, there’s nothing distracting you from returning it.
“That was a practice round,” she tells you. “This time, I’ll be keeping score.”
27 notes
·
View notes
Text
Round One, Match CXCIX


Structure and Interpretation of Computer Programs (Harold Abelson, Gerald Jay Sussman, and Julie Sussman), MIT Press 1996. Artist unknown.
Glorious Exploits (Ferdia Lennon), Henry Holt & Co. 2024. Cover by Gregg Kulick.
32 notes
·
View notes
Note
I see you use lots of computer-y terminology for the Khert when you're talking out here in the real world. Occasionally the characters do too, like the Khert hubs.
Is there value in reading Unsounded's whole world as textually a big simulation on some machine – with the gods as original coders, and wrights as parts of the program which have learned how to modify it directly?
Or is it more of a helpful way to conceptualise their magical realities for us in this computer-heavy world – like Duane could read a story set here and ask "Does their internet imply everything is just a big pymaric?" for much the same meaning?
No worries if it's something you'd rather keep mysterious for now, or potentially metaphorical without committing either way!
It's tough to say it's definitively NOT a simulation. After all, you and I could be in a simulation and the comic could be a feature of it. So I leave that up to your interpretation.
But I use that terminology... for a very specific reason. And it's not a reason the story will ever broach. The true origins of the world will never be revealed, not in the text nor on here, but I know them. And the structure of it all is, of course, relevant to that.
It's funny to imagine Duane isekai'd to our world and finding computing strangely familiar. Like the little girl in Jurassic Park. "This is a UNIX system... I know this...!"
53 notes
·
View notes
Text

17.02.2025
Happy Monday everyone!
Starting off the week very strong, and even though I am incredibly low energy I managed to get a lot done. A lot of admin sorted first thing in the morning since that’s the thing I least wanted to work on, immediately followed by a lecture and hours of reading so I can understand the principles of protein structure prediction and the way this program I want to use works. For me, there’s just no point in using a tool if I can’t explain how it generated the data or how to interpret the results. I also did some cell culture work, but honestly the reading took up the majority of the day and gave me such a headache. Still, I think after tomorrow I’ll definitely crack it!
The rest of the week will involve a lot of computer analysis and a few workshops, but I’m still very much looking forward to my lab work. I’m doing some in vitro digestions and I know the results for that would look amazing if everything works! I can’t wait to see it.
I also smashed my training today! I was so excited to go to the gym, and managed to run 5km in 30 mins! I’ve never managed to maintain 10km/h for that long, and even though I had to take a 1min break after 20mins of running it felt so amazing to see how far I could push myself without injury. It was definitely hard, but it was so rewarding.
Let’s push ourselves this week guys!!!
_____
🎧 - Lotta True Crime by Penelope Scott
🎮 - Baldur’s Gate 3
📖 - Onyx Storm by Rebecca Yarros
#study aesthetic#study blog#study inspiration#study motivation#study space#studyblr#studyinspo#studyspo#study tips#phd life
7 notes
·
View notes
Note
You mentioned getting into TV production through bumpers (? not sure if I'm remembering the right term). Is there a particular one that sparked the interest, or was it more watching the pattern until finally there was research?
First off: sorry it took a while to answer! I was on the road for a few days and didn't have a computer until I got home. That said, I'm about to ramble, a lot, so everything is under the cut:
So I'm actually less interested in the production process as a whole than I am in the channel bumpers and idents specifically. Why? I'm fond of retro visuals, generally. Colorful logos! Computer animations! Hand-drawn animations and channel mascots! Sometimes live camera footage! Upbeat and jaunty music! There's a lot of variety, so it's fun to see what's out there.
I also feel like there's something to be appreciated about idents/bumpers/sign-on and sign-off sequences as incidental or "liminal" media (for lack of a better word). They provide structure to a channel and supplement the programming, but aren't themselves meant to be consumed or interpreted as entertainment. Even so, they're made with care and leave an impression. You can really see the ways a channel tries to establish a unique personality! While I lean more toward clips from local/national public channels, plenty of private/commercial channels have good ones too. Ultimately if it's fun to look at, I'm into it. I can’t remember how I developed this interest though, to be honest. It’s been a while.
Below are a few examples that I personally like. (The audio/visual quality on some of them may also be a little off, but hopefully that doesn't detract too much.)
youtube
This one from Nou/Canal 9 (Spain, 1980s-1990s) is from their sign-off/end of daily transmission. I like the smoothness of the computer animation, and the logo is very clean and modern
youtube
Another one from Nou (I believe this one is an interstitial), with footage of the logo in physical form, which is fun
youtube
XHGC Canal 5 (Mexico, 1980s) with a very cute cat mascot! There were a few different versions of him doing other activities, like this one (in color) where he flies a kite. The animation is quite energetic, which adds to the charm
youtube
A nice dynamic animation from NRTV (Australia, 1990s). The snappy music is a nice touch as well
youtube
Another animated mascot, Tevito the dog from TVN (Chile, early 1970s). Much like XHGC's cat, I think he's quite cute and has a lot of appeal in the way he's animated. Most depictions of Tevito were sadly destroyed during the coup in 1973 (and he was replaced by other characters), but a few clips survived showing him in different scenes
youtube
youtube
These last two from my local PBS stations, WETA and WHUT (United States, 2000s) are pretty nostalgic for me. They're simple, but I like the clean logo design, relaxing music, and the live footage in the latter (WHUT had a few different versions)
I can’t think of anything else right now, but thank you for reading all that, if you got this far. I know it's not everyone's (or rather anyone's) cup of tea, but I personally find it fascinating :)
2 notes
·
View notes
Text
Unlocking the Power of Data: Essential Skills to Become a Data Scientist
In today's data-driven world, the demand for skilled data scientists is skyrocketing. These professionals are the key to transforming raw information into actionable insights, driving innovation and shaping business strategies. But what exactly does it take to become a data scientist? It's a multidisciplinary field, requiring a unique blend of technical prowess and analytical thinking. Let's break down the essential skills you'll need to embark on this exciting career path.
1. Strong Mathematical and Statistical Foundation:
At the heart of data science lies a deep understanding of mathematics and statistics. You'll need to grasp concepts like:
Linear Algebra and Calculus: Essential for understanding machine learning algorithms and optimizing models.
Probability and Statistics: Crucial for data analysis, hypothesis testing, and drawing meaningful conclusions from data.
2. Programming Proficiency (Python and/or R):
Data scientists are fluent in at least one, if not both, of the dominant programming languages in the field:
Python: Known for its readability and extensive libraries like Pandas, NumPy, Scikit-learn, and TensorFlow, making it ideal for data manipulation, analysis, and machine learning.
R: Specifically designed for statistical computing and graphics, R offers a rich ecosystem of packages for statistical modeling and visualization.
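To give a feel for the Python side of that stack, here is a minimal sketch using NumPy and Pandas; the column names and values are invented purely for illustration:

```python
import numpy as np
import pandas as pd

# Toy data: the columns and values are made up for this example
df = pd.DataFrame({
    "age": [23, 31, 45, 27],
    "salary": [48_000, 62_000, 85_000, 51_000],
})

# Pandas/NumPy favor vectorized operations over explicit loops
df["log_salary"] = np.log(df["salary"])
print(df.describe())  # per-column summary statistics
```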
3. Data Wrangling and Preprocessing Skills:
Raw data is rarely clean and ready for analysis. A significant portion of a data scientist's time is spent on:
Data Cleaning: Handling missing values, outliers, and inconsistencies.
Data Transformation: Reshaping, merging, and aggregating data.
Feature Engineering: Creating new features from existing data to improve model performance.
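A hedged sketch of those three steps in Pandas, with invented columns; the exact choices always depend on the dataset at hand:

```python
import pandas as pd

# Invented toy data: one missing price and one extreme outlier
df = pd.DataFrame({
    "price": [10.0, None, 12.5, 980.0],
    "city": ["Oslo", "Oslo", "Bergen", "Bergen"],
    "date": ["2024-01-01", "2024-01-02", "2024-01-02", "2024-01-03"],
})

# Cleaning: impute the missing value, cap the outlier
df["price"] = df["price"].fillna(df["price"].median())
df["price"] = df["price"].clip(upper=df["price"].quantile(0.90))

# Transformation: parse dates, then aggregate per city
df["date"] = pd.to_datetime(df["date"])
print(df.groupby("city")["price"].mean())

# Feature engineering: derive a new input from an existing one
df["weekday"] = df["date"].dt.dayofweek
```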
4. Expertise in Databases and SQL:
Data often resides in databases. Proficiency in SQL (Structured Query Language) is essential for:
Extracting Data: Querying and retrieving data from various database systems.
Data Manipulation: Filtering, joining, and aggregating data within databases.
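As a small illustration of extraction and aggregation, here is a SQL query run through Python's built-in sqlite3 module; the table and rows are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 30.0), ("bob", 12.5), ("alice", 7.5)],
)

# Extraction + aggregation: total spend per customer, largest first
rows = conn.execute(
    """
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # [('alice', 37.5), ('bob', 12.5)]
conn.close()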
5. Machine Learning Mastery:
Machine learning is a core component of data science, enabling you to build models that learn from data and make predictions or classifications. Key areas include:
Supervised Learning: Regression, classification algorithms.
Unsupervised Learning: Clustering, dimensionality reduction.
Model Selection and Evaluation: Choosing the right algorithms and assessing their performance.
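For instance, a compact supervised-learning sketch with scikit-learn, assuming the library is installed and using one of its bundled toy datasets:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out a test set to estimate generalization, not training fit
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```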
6. Data Visualization and Communication Skills:
Being able to effectively communicate your findings is just as important as the analysis itself. You'll need to:
Visualize Data: Create compelling charts and graphs to explore patterns and insights using libraries like Matplotlib, Seaborn (Python), or ggplot2 (R).
Tell Data Stories: Present your findings in a clear and concise manner that resonates with both technical and non-technical audiences.
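A brief Matplotlib sketch of the kind of exploratory chart this means; the data here is random and purely illustrative:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.normal(size=200)
y = 2 * x + rng.normal(scale=0.5, size=200)

fig, ax = plt.subplots()
ax.scatter(x, y, alpha=0.6)
ax.set_xlabel("feature")   # label axes so the chart reads without explanation
ax.set_ylabel("target")
ax.set_title("Exploring a linear relationship")
plt.show()
```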
7. Critical Thinking and Problem-Solving Abilities:
Data scientists are essentially problem solvers. You need to be able to:
Define Business Problems: Translate business challenges into data science questions.
Develop Analytical Frameworks: Structure your approach to solve complex problems.
Interpret Results: Draw meaningful conclusions and translate them into actionable recommendations.
8. Domain Knowledge (Optional but Highly Beneficial):
Having expertise in the specific industry or domain you're working in can give you a significant advantage. It helps you understand the context of the data and formulate more relevant questions.
9. Curiosity and a Growth Mindset:
The field of data science is constantly evolving. A genuine curiosity and a willingness to learn new technologies and techniques are crucial for long-term success.
10. Strong Communication and Collaboration Skills:
Data scientists often work in teams and need to collaborate effectively with engineers, business stakeholders, and other experts.
Kickstart Your Data Science Journey with Xaltius Academy's Data Science and AI Program:
Acquiring these skills can seem like a daunting task, but structured learning programs can provide a clear and effective path. Xaltius Academy's Data Science and AI Program is designed to equip you with the essential knowledge and practical experience to become a successful data scientist.
Key benefits of the program:
Comprehensive Curriculum: Covers all the core skills mentioned above, from foundational mathematics to advanced machine learning techniques.
Hands-on Projects: Provides practical experience working with real-world datasets and building a strong portfolio.
Expert Instructors: Learn from industry professionals with years of experience in data science and AI.
Career Support: Offers guidance and resources to help you launch your data science career.
Becoming a data scientist is a rewarding journey that blends technical expertise with analytical thinking. By focusing on developing these key skills and leveraging resources like Xaltius Academy's program, you can position yourself for a successful and impactful career in this in-demand field. The power of data is waiting to be unlocked – are you ready to take the challenge?
3 notes
·
View notes
Text
Python Programming Language: A Comprehensive Guide and Use Cases
Table of Contents
Introduction to Python
Key Features of Python
Python Syntax and Readability
Python vs. Other Programming Languages
Setting Up Python Development Environment
Python Data Structures and Control Flow
Object-Oriented Programming (OOP) in Python
Python Libraries and Frameworks
Python for Web Development
Python for Data Science and Machine Learning
Python for Automation and Scripting
Python for Game Development
Python for Cybersecurity
Python in Finance and Trading
Python for IoT and Embedded Systems
Python in Scientific Computing
Companies Using Python
Pros and Cons of Python
Future of Python
Conclusion
1. Introduction to Python
Python is a high-level, interpreted, general-purpose programming language created by Guido van Rossum and first released in 1991. Known for its simplicity, readability, and versatility, Python has become one of the most popular programming languages in the world.
Python supports multiple programming paradigms, including:
Procedural programming
Object-Oriented Programming (OOP)
Functional programming
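To make those paradigms concrete, here is one tiny task (summing the squares of the even numbers) written three ways; an illustrative sketch, not a style recommendation:

```python
# Procedural: explicit loop and a mutable accumulator
def sum_even_squares_proc(nums):
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += n * n
    return total

# Object-oriented: state and behavior bundled in a class
class SquareSummer:
    def __init__(self, nums):
        self.nums = nums

    def sum_even_squares(self):
        return sum(n * n for n in self.nums if n % 2 == 0)

# Functional: composition of map/filter, no mutation
def sum_even_squares_func(nums):
    return sum(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)))

nums = [1, 2, 3, 4, 5, 6]
assert (sum_even_squares_proc(nums)
        == SquareSummer(nums).sum_even_squares()
        == sum_even_squares_func(nums) == 56)
```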
2 notes
·
View notes
Text
Machine Learning: A Comprehensive Overview
Machine Learning (ML) is a subfield of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. Instead of using a fixed set of rules or instructions, machine learning algorithms identify patterns in data and use those patterns to make predictions or decisions. Over the past decade, ML has transformed how we interact with technology, touching nearly every aspect of our daily lives, from personalized recommendations on streaming services to real-time fraud detection in banking.
Machine learning algorithms
What is Machine Learning?
At its core, machine learning involves feeding data into a computer algorithm that allows the machine to adjust its parameters and improve its performance on a task over time. The more data the machine sees, the better it usually becomes. This is comparable to how humans learn: through trial, error, and experience.
Arthur Samuel, a pioneer in the field, defined machine learning in 1959 as "a field of study that gives computers the ability to learn without being explicitly programmed." Today, ML is a critical technology powering a huge array of applications in business, healthcare, science, and entertainment.
Types of Machine Learning
Machine learning can be broadly classified into four major categories:
1. Supervised Learning
Supervised learning trains on labeled data, where each input is paired with the correct output. For example, in a spam email detection system, emails are labeled as "spam" or "not spam," and the algorithm learns to classify new emails accordingly. (A minimal sketch follows the algorithm list below.)
Common algorithms include:
Linear Regression
Logistic Regression
Support Vector Machines (SVM)
Decision Trees
Random Forests
Neural Networks
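Here is a minimal sketch of the spam example above using scikit-learn; the four inline "emails" are invented, and a real system would train on thousands of labeled messages:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "cheap pills limited offer",   # spam
    "meeting moved to 3pm", "lunch tomorrow?",             # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Bag-of-words features feeding a linear classifier
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["free prize meeting"]))  # label for a new email
```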
2. Unsupervised Learning
Unsupervised learning deals with unlabeled data. Clustering and association are common tasks in this category. (A small code sketch follows the list of techniques below.)
Key techniques include:
K-Means Clustering
Hierarchical Clustering
Principal Component Analysis (PCA)
Autoencoders
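An illustrative scikit-learn sketch combining clustering and dimensionality reduction on synthetic, unlabeled data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
# Two synthetic blobs in 5 dimensions; no labels are given to the algorithms
X = np.vstack([
    rng.normal(loc=0.0, size=(50, 5)),
    rng.normal(loc=5.0, size=(50, 5)),
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
X_2d = PCA(n_components=2).fit_transform(X)  # project to 2D for plotting

print(clusters[:5], X_2d.shape)  # cluster ids and (100, 2)
```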
3. Semi-Supervised Learning
Semi-supervised learning combines a small amount of labeled data with a large amount of unlabeled data. It is especially useful when acquiring labeled data is expensive or time-consuming, as in medical diagnosis.
4. Reinforcement Learning
Reinforcement learning involves an agent that interacts with an environment and learns to make decisions by receiving rewards or penalties. It is widely used in areas like robotics, game playing (e.g., AlphaGo), and autonomous vehicles. (A toy example follows the algorithm list below.)
Popular algorithms include:
Q-Learning
Deep Q-Networks (DQN)
Policy Gradient Methods
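To give a feel for the agent/environment loop, here is a hand-rolled tabular Q-learning toy; the four-state chain environment is invented for the sketch, and real applications use far richer environments:

```python
import random

# Toy chain environment: states 0..3, reach state 3 for a reward of 1
N_STATES = 4
ACTIONS = [0, 1]                      # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for _ in range(300):                  # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # 'step right' (index 1) should score higher in every state
```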
Key Components of Machine Learning Systems
1. Data
Data is the foundation of any machine learning model. The quality and quantity of the data directly affect the performance of the model. Preprocessing, consisting of cleaning, normalization, and transformation, is vital to ensure useful insights can be extracted.
2. Features
Feature engineering, the process of selecting and transforming variables to improve model accuracy, is one of the most important steps in the ML workflow.
3. Algorithms
Algorithms define the rules and mathematical models that help machines learn from data. Choosing the right algorithm depends on the problem, the data, and the desired accuracy and interpretability.
4. Model Evaluation
Models are evaluated using various metrics such as accuracy, precision, recall, and F1-score (for classification), or RMSE and R² (for regression). Cross-validation helps check how well a model generalizes to unseen data.
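A short scikit-learn sketch of cross-validated evaluation with several of those metrics; a bundled toy dataset stands in for real data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
# Scaling inside the pipeline keeps preprocessing out of the held-out folds
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validation: every sample is held out exactly once
for metric in ["accuracy", "precision", "recall", "f1"]:
    scores = cross_val_score(model, X, y, cv=5, scoring=metric)
    print(metric, round(scores.mean(), 3))
```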
Applications of Machine Learning
Machine learning is now deeply integrated into numerous domains, including:
1. Healthcare
ML is used for disease diagnosis, drug discovery, personalized medicine, and medical imaging. Algorithms help detect conditions like cancer and diabetes from medical records and scans.
2. Finance
Fraud detection, algorithmic trading, credit scoring, and customer segmentation are driven by machine learning in the financial sector.
3. Retail and E-commerce
Recommendation engines, inventory management, dynamic pricing, and sentiment analysis help businesses increase sales and improve customer experience.
4. Transportation
Self-driving cars, traffic prediction, and route optimization all rely on real-time machine learning models.
6. Cybersecurity
Anomaly detection algorithms help in identifying suspicious activities and potential cyber threats.
Challenges in Machine Learning
Despite its rapid development, machine learning still faces several challenges:
1. Data Quality and Quantity
Accessing high-quality, labeled data is often a bottleneck. Incomplete, imbalanced, or biased datasets can lead to inaccurate models.
2. Overfitting and Underfitting
Overfitting occurs when the model learns the training data too closely and fails to generalize to new data; underfitting occurs when the model is too simple to capture the underlying patterns.
3. Interpretability
Many modern models, especially deep neural networks, act as "black boxes," making it hard to understand how predictions are made, a concern in high-stakes areas like healthcare and law.
4. Ethical and Fairness Issues
Algorithms can inadvertently learn and amplify biases present in the training data. Ensuring fairness, transparency, and accountability in ML systems is a growing area of research.
5. Security
Adversarial attacks, in which small changes to input data can fool ML models, present serious risks, especially in applications like facial recognition and autonomous driving.
Future of Machine Learning
The future of machine learning is both exciting and complex. Some promising directions include:
1. Explainable AI (XAI)
Efforts are underway to make ML models more transparent and understandable, allowing users to trust and interpret decisions made by algorithms.
2. Automated Machine Learning (AutoML)
AutoML aims to automate the end-to-end process of applying ML to real-world problems, making it more accessible to non-experts.
3. Federated Learning
This approach allows models to be trained across multiple devices or servers without sharing raw data, improving privacy and efficiency.
4. Edge ML
Deploying machine learning models on edge devices like smartphones and IoT devices enables real-time processing with reduced latency and cost.
5. Integration with Other Technologies
ML will continue to converge with fields like blockchain, quantum computing, and augmented reality, creating new opportunities and challenges.
2 notes
·
View notes
Text
The AI Revolution: Understanding, Harnessing, and Navigating the Future
What is AI
In a world increasingly shaped by technology, one term stands out above the rest, capturing both our imagination and, at times, our apprehension: Artificial Intelligence. From science fiction dreams to tangible realities, AI is no longer a distant concept but an omnipresent force, subtly (and sometimes not-so-subtly) reshaping industries, transforming daily life, and fundamentally altering our perception of what's possible.
But what exactly is AI? Is it a benevolent helper, a job-stealing machine, or something else entirely? The truth, as always, is far more nuanced. At its core, Artificial Intelligence refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction. What makes modern AI so captivating is its ability to learn from data, identify patterns, and make predictions or decisions with increasing autonomy.
The journey of AI has been a fascinating one, marked by cycles of hype and disillusionment. Early pioneers in the mid-20th century envisioned intelligent machines that could converse and reason. While those early ambitions proved difficult to achieve with the technology of the time, the seeds of AI were sown. The 21st century, however, has witnessed an explosion of progress, fueled by advancements in computing power, the availability of massive datasets, and breakthroughs in machine learning algorithms, particularly deep learning. This has led to the "AI Spring" we are currently experiencing.
The Landscape of AI: More Than Just Robots
When many people think of AI, images of humanoid robots often come to mind. While robotics is certainly a fascinating branch of AI, the field is far broader and more diverse than just mechanical beings. Here are some key areas where AI is making significant strides:
Machine Learning (ML): This is the engine driving much of the current AI revolution. ML algorithms learn from data without being explicitly programmed. Think of recommendation systems on streaming platforms, fraud detection in banking, or personalized advertisements – these are all powered by ML.
Deep Learning (DL): A subset of machine learning inspired by the structure and function of the human brain's neural networks. Deep learning has been instrumental in breakthroughs in image recognition, natural language processing, and speech recognition. The facial recognition on your smartphone or the impressive capabilities of large language models like the one you're currently interacting with are prime examples.
Natural Language Processing (NLP): This field focuses on enabling computers to understand, interpret, and generate human language. From language translation apps to chatbots that provide customer service, NLP is bridging the communication gap between humans and machines.
Computer Vision: This area allows computers to "see" and interpret visual information from the world around them. Autonomous vehicles rely heavily on computer vision to understand their surroundings, while medical imaging analysis uses it to detect diseases.
Robotics: While not all robots are AI-powered, many sophisticated robots leverage AI for navigation, manipulation, and interaction with their environment. From industrial robots in manufacturing to surgical robots assisting doctors, AI is making robots more intelligent and versatile.
AI's Impact: Transforming Industries and Daily Life
The transformative power of AI is evident across virtually every sector. In healthcare, AI is assisting in drug discovery, personalized treatment plans, and early disease detection. In finance, it's used for algorithmic trading, risk assessment, and fraud prevention. The manufacturing industry benefits from AI-powered automation, predictive maintenance, and quality control.
Beyond these traditional industries, AI is woven into the fabric of our daily lives. Virtual assistants like Siri and Google Assistant help us organize our schedules and answer our questions. Spam filters keep our inboxes clean. Navigation apps find the fastest routes. Even the algorithms that curate our social media feeds are a testament to AI's pervasive influence. These applications, while often unseen, are making our lives more convenient, efficient, and connected.
Harnessing the Power: Opportunities and Ethical Considerations
The opportunities presented by AI are immense. It promises to boost productivity, solve complex global challenges like climate change and disease, and unlock new frontiers of creativity and innovation. Businesses that embrace AI can gain a competitive edge, optimize operations, and deliver enhanced customer experiences. Individuals can leverage AI tools to automate repetitive tasks, learn new skills, and augment their own capabilities.
However, with great power comes great responsibility. The rapid advancement of AI also brings forth a host of ethical considerations and potential challenges that demand careful attention.
Job Displacement: One of the most frequently discussed concerns is the potential for AI to automate jobs currently performed by humans. While AI is likely to create new jobs, there will undoubtedly be a shift in the nature of work, requiring reskilling and adaptation.
Bias and Fairness: AI systems learn from the data they are fed. If that data contains historical biases (e.g., related to gender, race, or socioeconomic status), the AI can perpetuate and even amplify those biases in its decisions, leading to unfair outcomes. Ensuring fairness and accountability in AI algorithms is paramount.
Privacy and Security: AI relies heavily on data. The collection and use of vast amounts of personal data raise significant privacy concerns. Moreover, as AI systems become more integrated into critical infrastructure, their security becomes a vital issue.
Transparency and Explainability: Many advanced AI models, particularly deep learning networks, are often referred to as "black boxes" because their decision-making processes are difficult to understand. For critical applications, it's crucial to have transparency and explainability to ensure trust and accountability.
Autonomous Decision-Making: As AI systems become more autonomous, questions arise about who is responsible when an AI makes a mistake or causes harm. The development of ethical guidelines and regulatory frameworks for autonomous AI is an ongoing global discussion.
Navigating the Future: A Human-Centric Approach
Navigating the AI revolution requires a proactive and thoughtful approach. It's not about fearing AI, but rather understanding its capabilities, limitations, and implications. Here are some key principles for moving forward:
Education and Upskilling: Investing in education and training programs that equip individuals with AI literacy and skills in areas like data science, AI ethics, and human-AI collaboration will be crucial for the workforce of the future.
Ethical AI Development: Developers and organizations building AI systems must prioritize ethical considerations from the outset. This includes designing for fairness, transparency, and accountability, and actively mitigating biases.
Robust Governance and Regulation: Governments and international bodies have a vital role to play in developing appropriate regulations and policies that foster innovation while addressing ethical concerns and ensuring the responsible deployment of AI.
Human-AI Collaboration: The future of work is likely to be characterized by collaboration between humans and AI. AI can augment human capabilities, automate mundane tasks, and provide insights, allowing humans to focus on higher-level problem-solving, creativity, and empathy.
Continuous Dialogue: As AI continues to evolve, an ongoing, open dialogue among technologists, ethicists, policymakers, and the public is essential to shape its development in a way that benefits humanity.
The AI revolution is not just a technological shift; it's a societal transformation. By understanding its complexities, embracing its potential, and addressing its challenges with foresight and collaboration, we can harness the power of Artificial Intelligence to build a more prosperous, equitable, and intelligent future for all. The journey has just begun, and the choices we make today will define the world of tomorrow.
2 notes
·
View notes
Text
ai analogies
with photography, the 'inputs' or 'creative choices' include the subject, the framing, and technical qualities like exposure, focus, aperture and iso. the output, the thing that's judged, is then the qualities of the image - composition and colour and narrative. since photography is very quick, a photographer will typically take many shots of a subject, and then pick out the ones they like best to share with the wider world, so there is also a curative element.
with collage (and also photobashing, and even the limited space of a dollmaker game), the 'inputs' are the choices of existing images, and the composition created by arranging them. so there's a curative element in selecting what to collage, and then new meaning is created by juxtaposing two previously unrelated images, the spatial relationships between them, and so on. (see also graphic design!) the visual qualities of the original image are relevant insofar as they affect the composition, but you don't judge a collage containing a painting or photo on how well-painted the painting or well-shot the photo is, rather on how well it uses that painting or photo.
with 'readymades' and similar genres of conceptual art, it's kind of similar, right? you put the existing objects in a new context and create meaning through how they're arranged. people respond to whether the idea it communicates is interesting. (often these days they come with some text which gives a key to inform you how to interpret the artwork.)
anyway. with drawing and painting, which are comparatively laborious to create, you are constantly making thousands of creative choices, from the broad scale - composition, value structure, how you construct a figure - to the tiny, like line weight, rendering, shape design. to navigate this vast space of possibility, you will be informed by your memory of other pictures you've seen (your personal 'visual library') and techniques you've practiced, reference images you've gathered, and so on. the physical qualities of your body and the medium will also affect your picture - how you move your arm, how watercolor moves across the paper, etc etc.
broadly the same is true for other very involved media like sculpture or computer graphics or music (of all kinds!). more fine-grained control implies both more work and more scope for creative choices.
when someone sees an image created by whatever means, they take all of this in at once, for a gestalt impression - and if they feel like it, they can look closer and appreciate the particular details. many artists will think about trying to create a composition that 'leads the eye' to take in particular points of interest and convey the narrative of the picture.
so then, 'AI'. your inputs are the design of the neural net, the selection of training data, the text/image used as a prompt and then finally the selection of an image produced by the program. (you can modify that image of course but let's not get into that for now). chances are you don't have a lot of control over the first two since the computation involved is too unwieldy, though some image generators can be 'finetuned' with additional training data.
'AI art' is like photography in that you typically generate a lot of images and select the ones that 'come out well'. like a photographer looking for a subject, you might search around for an interesting prompt. it's unlike photography in that you have very limited control over all those other parameters (at best you can try to verbally describe what you want and hope the AI understands, or ask it to generate similar pictures and hope one has the qualities you want).
'AI art' is like collage in that you are taking existing images and creating new meaning out of them, by generating a latent space and transformation algorithm that approximates them. it's unlike collage in that you have no real knowledge of what specific images may be 'most' informing the corner of latent space you're probing. you can look at an AI generated image and say 'this looks kinda like a Nihei manga' but it's not using a specific image of Nihei. still, there is something similar to the relationship between images in collage when you do things like 'style transfer'.
'AI art' can be like conceptual art or for that matter political cartoons in that often it's just providing illustration to a concept or joke that can be expressed in words. 'Shrek in the style of The Dark Crystal' or 'cats that spell "gay sex"' is what you're getting across. but 'AI art' as a subculture places very high concern on the specific aesthetic qualities, so it's not that simple.
briefly, sampling in music often tends to foreground that it's a sample, either one the audience may recognise - the Amen break for example - or just by being noticeably different from the texture of the rest of the piece. even when the sample isn't easily recognised, though, the art of sampling is to place it in a new context which brings out different sonic qualities, e.g. by playing it rapidly in succession, or heavily filtering and distorting it, overlaying it with other sounds, or playing it right before the drop. it's similar to collage and photobashing.
paintings then. AI art rather obsessively tries to imitate paintings, drawings, 3D graphics etc. some of its proponents even try to position it as obsoleting these art forms, rather than a new derivative art form. a lot of the fear from artists who work in those media is that, even if the AI generated images are a poor substitute for what we make, it will be 'good enough' to satisfy the people with the money, or discourage people from learning how to paint with all its nuances.
so, 'AI' may make results that look like a painting, but the process of making it is hugely different. rather than gradually constructing a picture and making decisions at each turn, you try out successive prompts to get a page full of finished pictures, and generate variations on those pictures, until you find one you like. it's most similar to the client who describes an image they want and then makes requests to tweak it. there is still creativity in this, because it's kind of replicating the back-and-forth between an artist and client/art director/critique-giver/etc. however, in this analogy, it's hampered by the limited communication between you and the 'artist'. and it's a different sort of input, so we respond to it differently.
generating and posting AI art could also be compared to the kind of thing we do on this website, where we curate images we like and share them. you're all looking at the outputs of the same image generator and pointing and saying 'ooh, that one's cool'. what's kinda troublesome in this analogy is that AI obfuscates all that stuff about credit and inspiration, collapsing it all into one mass. unless their name was used in the prompt, you can't tell if the 'AI' image is 'drawing heavily' on any particular artist. this isn't a new problem - firstly websites full of uncredited images abound, secondly any creative process is inspired by loads of things that we can't enumerate or hope to divulge, so the idea of tracing the paths of inspiration is perhaps a mirage anyway. still, for me (sakuga fan type of person!), knowing what i can about the specific people involved in creating artwork and how they went about it is important, and that's heavily blackboxed by 'AI'.
none of this would be helped by harsher copyright laws. it's great that people can create derivative works and respond to existing art. that is the scaffold that launches us somewhere new and hopefully interesting. simply putting someone's work into an image generator to create similar pictures is not a very interesting statement in its own right, and a lot of AI illustration produced at the moment has a weirdly glossy, overproduced feeling that is offputting and leaves nowhere for the eye to settle (when it isn't just mush), but that's not to say AI is never going to be able to be used to say anything interesting or become a meaningful art form in its own right.
'AI' is kinda like a bunch of things but not exactly like any of them. (this isn't to get into the economic questions at all, that would be a much longer post!). but since there are people very sincerely devoted to this being an art form... I want to know how to 'read' these works - what I'm looking for in there, what a meaningful comment would be. bc right now when I see an illustration and realise it's an AI generated image it's like... a sense of disappointment because whatever I was picking up on isn't actually part of the 'statement' in the way i thought. so it's like oh... that's nice. the machine picked a cool perspective huh? all the things i would normally appreciate in an illustration are outside the artist's control, so responding to them feels irrelevant! so what is the right mode here? there's more to it than just the choice of subject. but I feel like I have more to say about even a picrew.
45 notes
·
View notes
Text
Consistency and Reducibility: Which is the theorem and which is the lemma?
Here's an example from programming language theory which I think is an interesting case study about how "stories" work in mathematics. Even if a given theorem is unambiguously defined and certainly true, the ways people contextualize it can still differ.
To set the scene, there is an idea that typed programming languages correspond to logics, so that a proof of an implication A→B corresponds to a function of type A→B. For example, the typing rules for simply-typed lambda calculus are exactly the same as the proof rules for minimal propositional logic, adding an empty type Void makes it intuitionistic propositional logic, by adding "dependent" types you get a kind of predicate logic, and really a lot of different programming language features also make sense as logic rules. The question is: if we propose a new programming language feature, what theorem should we prove in order to show that it also makes sense logically?
The story I first heard goes like this. In order to prove that a type system is a good logic we should prove that it is consistent, i.e. that not every type is inhabited, or equivalently that there is no program of type Void. (This approach is classical in both senses of the word: it goes back to Hilbert's program, and it is justified by Gödel's completeness theorem/model existence theorem, which basically says that every consistent theory describes something.)
Usually it is obvious that no values can be given type Void, the only issue is with non-value expressions. So it suffices to prove that the language is normalizing, that is to say every program eventually computes to a value, as opposed to going into an infinite loop. So we want to prove:
If e is an expression with some type A, then e evaluates to some value v.
Naively, you may try to prove this by structural induction on e. (That is, you assume as an induction hypothesis that all subexpressions of e normalize, and prove that e does.) However, this proof attempt gets stuck in the case of a function call like (λx.e₁) e₂. Here we have some function (λx.e₁) : A→B and a function argument e₂ : A. The induction hypothesis just says that (λx.e₁) normalizes, which is trivially true since it's already a value, but what we actually need is an induction hypothesis that says what will happen when we call the function.
In 1967 William Tait had a good idea. We should instead prove:
If e is an expression with some type A, then e is reducible at type A.
"Reducible at type A" is a predicate defined on the structure of A. For base types, it just means normalizable, while for function types we define
e is reducible at type A→B ⇔ for all expressions e₁, if e₁ is reducible at A then (e e₁) is reducible at B.
For example, a function is reducible at type Bool→Bool→Bool if whenever you call it with two normalizing boolean arguments, it returns a boolean value (rather than looping forever).
This really is a very good idea, and it can be generalized to prove lots of useful theorems about programming languages beyond just termination. But the way I (and I think most other people, e.g. Benjamin Pierce in Types and Programming Languages) have told the story, it is strictly a technical device: we prove consistency via normalization via reducibility.
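To make "every program computes to a value" concrete, here is a toy call-by-value evaluator for lambda terms in Python. This is a sketch only: it assumes closed, well-typed terms, and the actual normalization proof works on typing derivations, not on an interpreter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def subst(e, x, v):
    # Substitute value v for variable x in e (v is assumed closed,
    # so variable capture isn't an issue in this sketch)
    match e:
        case Var(name):
            return v if name == x else e
        case Lam(p, b):
            return e if p == x else Lam(p, subst(b, x, v))
        case App(f, a):
            return App(subst(f, x, v), subst(a, x, v))

def eval_cbv(e):
    # Call-by-value evaluation; for simply-typed terms this terminates,
    # which is exactly what the normalization theorem guarantees
    match e:
        case App(f, a):
            fn, arg = eval_cbv(f), eval_cbv(a)
            return eval_cbv(subst(fn.body, fn.param, arg))
        case _:
            return e  # lambdas (and stray variables) are values

# (λx.λy.x) applied to two values normalizes to the first one
ident = Lam("z", Var("z"))
k = Lam("x", Lam("y", Var("x")))
print(eval_cbv(App(App(k, ident), ident)))
```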
❧
The story works less well when you consider programs that aren't normalizing, which is certainly not an uncommon situation: nothing in Java or Haskell forbids you from writing infinite loops. So there has been some interest in how dependent types work if you make termination-checking optional, with some famous projects along these lines being Idris and Dependent Haskell. The idea here is that if you write a program that does terminate it should be possible to interpret it as a proof, but even if a program is not obviously terminating you can still run it.
At this point, with the "consistency through normalization" story in mind, you may have a bad idea: "we can just let the typechecker try to evaluate a given expression at typechecking-time, and if it computes a value, then we can use it as a proof!" Indeed, if you do so then the typechecker will reject all attempts to "prove" Void, so you actually create a consistent logic.
If you think about it a little longer, you notice that it's a useless logic. For example, a statement like ∀n.(n² = 3) is provable, since it's inhabited by the value (λn. infinite_loop()). That function is a perfectly fine value, even though it will diverge as soon as you call it. In fact, all ∀-statements and implications are inhabited by function values, and proving universally quantified statements is the entire point of using logical proof at all.
❧
So what theorem should you prove, to ensure that the logic makes sense? You want to say both that Void is unprovable, and also that if a type A→B is inhabited, then A really implies B, and so on recursively for any arrow types inside A or B. If you think a bit about this, you want to prove that if e:A, then e is reducible at type A... And in fact, Kleene had already proposed basically this (under the name realizability) as a semantics for Intuitionistic Logic, back in the 1940s.
So in the end, you end up proving the same thing anyway—and none of this discussion really becomes visible in the formal sequence of theorems and lemmas. The false starts need to be passed along in asides in the text, or in tumblr posts.
8 notes
·
View notes