#DataSets
Explore tagged Tumblr posts
Text
The Internet Archive saves the day again
#cdc#centers for disease control and prevention#health#public health#science#diseases#datasets#data science#health science#stem fields
97 notes
·
View notes
Text
Breaking the Scaling Code: How AI Models Are Redefining the Rules
New Post has been published on https://thedigitalinsider.com/breaking-the-scaling-code-how-ai-models-are-redefining-the-rules/
Breaking the Scaling Code: How AI Models Are Redefining the Rules
Artificial intelligence has taken remarkable strides in recent years. Models that once struggled with basic tasks now excel at solving math problems, generating code, and answering complex questions. Central to this progress is the concept of scaling laws—rules that explain how AI models improve as they grow, are trained on more data, or are powered by greater computational resources. For years, these laws served as a blueprint for developing better AI.
Recently, a new trend has emerged. Researchers are finding ways to achieve groundbreaking results without simply making models bigger. This shift is more than a technical evolution. It’s reshaping how AI is built, making it more efficient, accessible, and sustainable.
The Basics of Scaling Laws
Scaling laws are like a formula for AI improvement. They state that as you increase the size of a model, feed it more data, or give it access to more computational power, its performance improves. For example:
Model size: Larger models with more parameters can learn and represent more complex patterns. Parameters are the adjustable parts of a model that allow it to make predictions.
Data: Training on vast, diverse datasets helps models generalize better, enabling them to handle tasks they weren’t explicitly trained for.
Compute: More computational power allows faster and more efficient training, achieving higher performance.
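To make the idea of a scaling law concrete, here is a small, purely illustrative Python sketch of a Chinchilla-style power law. The constants are placeholders chosen for demonstration, not fitted values from any published study:

```python
# Illustrative only: loss L(N, D) = E + A / N^alpha + B / D^beta
# (a Chinchilla-style form; all constants below are made-up placeholders).
def predicted_loss(params, tokens, E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    return E + A / params**alpha + B / tokens**beta

for params in (1e9, 1e10, 1e11):            # model size in parameters
    loss = predicted_loss(params, tokens=2e11)
    print(f"{params:.0e} params -> loss ~ {loss:.3f}")
# Each 10x jump in size shrinks the loss by less than the previous one.
```

Note how each tenfold increase in parameters buys a smaller loss reduction than the last, which is exactly the diminishing-returns pattern discussed below.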
This recipe has driven AI’s evolution for over a decade. Early neural networks like AlexNet and ResNet demonstrated how increasing model size could improve image recognition. Then came transformers: models like GPT-3 and Google’s BERT showed that scaling could unlock entirely new capabilities, such as few-shot learning.
The Limits of Scaling
Despite its success, scaling has limits. As models grow, the improvements from adding more parameters diminish. This phenomenon, known as the law of diminishing returns, means that doubling a model’s size doesn’t double its performance; each increment delivers smaller gains, so pushing performance further demands ever more resources for relatively modest improvements. This has real-world consequences. Building massive models carries significant financial and environmental costs: training large models is expensive (GPT-3 reportedly cost millions of dollars to train), and these costs put cutting-edge AI out of reach for smaller organizations. Training massive models also consumes vast amounts of energy; one study estimated that training a single large model could emit as much carbon as five cars over their lifetimes.
Researchers recognized these challenges and began exploring alternatives. Instead of relying on brute force, they asked: How can we make AI smarter, not just bigger?
Breaking the Scaling Code
Recent breakthroughs show it’s possible to outperform traditional scaling laws. Smarter architectures, refined data strategies, and efficient training techniques are enabling AI to reach new heights without requiring massive resources.
Smarter Model Designs: Rather than making models larger, researchers are focusing on making them more efficient. Examples are:
Sparse models: Instead of activating all parameters at once, sparse models only use the parts needed for a specific task. This approach saves computational power while maintaining performance. A notable example is Mistral 7B, which, despite having only 7 billion parameters, outperforms much larger models by using a sparse architecture.
Transformer improvements: Transformers remain the backbone of modern AI, but their designs are evolving. Innovations like linear attention mechanisms make transformers faster and less resource-intensive.
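To show what "only use the parts needed" means in code, here is a toy mixture-of-experts router in Python. The sizes, expert count, and top-k value are arbitrary choices for illustration, not the design of Mistral or any other named model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ToyMoELayer:
    """Only the top-k experts run per token, so most parameters stay inactive."""
    def __init__(self, d_model=64, n_experts=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.normal(size=(d_model, n_experts)) * 0.02
        self.experts = [rng.normal(size=(d_model, d_model)) * 0.02
                        for _ in range(n_experts)]
        self.top_k = top_k

    def __call__(self, x):                      # x: (tokens, d_model)
        scores = softmax(x @ self.router)       # (tokens, n_experts)
        top = np.argsort(scores, axis=-1)[:, -self.top_k:]
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            for e in top[t]:                    # run only the selected experts
                out[t] += scores[t, e] * (x[t] @ self.experts[e])
        return out

tokens = np.random.default_rng(1).normal(size=(4, 64))
print(ToyMoELayer()(tokens).shape)              # (4, 64)
```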
Better Data Strategies: More data isn’t always better. Curated, high-quality datasets often outperform sheer volume. For example:
Focused datasets: Instead of training on massive, unfiltered data, researchers are using clean and relevant datasets. For instance, OpenAI has shifted toward carefully selected data to improve reliability.
Domain-specific training: In specialized areas like medicine or law, targeted datasets help models perform well with fewer examples.
Efficient Training Methods: New training techniques are reducing resource demands without sacrificing performance. Some examples of these training methods include:
Curriculum learning: By starting with simpler tasks and gradually introducing harder ones, models learn more effectively. This mirrors how humans learn.
Techniques like LoRA (Low-Rank Adaptation): These methods fine-tune models efficiently without retraining them entirely.
Gradient checkpointing: This approach reduces memory use during training, enabling larger models to run on limited hardware.
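As a rough sketch of the LoRA idea in PyTorch (dimensions and rank are arbitrary; this is an illustration, not the reference implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a small trainable low-rank update: W x + (B A x) * scale."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)               # the pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)          # only A and B are trained: 2 * 8 * 512 parameters
```

Because only the two small matrices receive gradients, fine-tuning touches a tiny fraction of the parameters that full retraining would.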
Emergent Abilities: As models grow, they sometimes display surprising capabilities, like solving problems they weren’t explicitly trained for. These emergent abilities challenge traditional scaling laws, as they often appear in larger models but not in their smaller counterparts. Researchers are now investigating ways to unlock these abilities more efficiently, without relying on brute-force scaling.
Hybrid Approaches for Smarter AI: Combining neural networks with symbolic reasoning is another promising direction. These hybrid systems combine pattern recognition with logical reasoning, making them more intelligent and adaptable. This approach reduces the need for massive datasets and compute power.
Real-World Examples
Several recent models showcase how these advancements are rewriting the rules:
GPT-4o Mini: The model delivers performance comparable to its much larger version but at a fraction of the cost and resources. It achieves these results with the help of smarter training techniques and focused datasets.
Mistral 7B: With only 7 billion parameters, this model outperforms models with tens of billions. Its sparse architecture proves that smart design can surpass raw size.
Claude 3.5: Prioritizing safety and ethical considerations, this model balances strong performance with thoughtful resource use.
The Impact of Breaking Scaling Laws
These advancements have real-world implications.
Making AI More Accessible: Efficient designs lower the cost of developing and deploying AI. Open-source models like Llama 3.1 are making advanced AI tools available to smaller companies and researchers.
A Greener Future: Optimized models reduce energy consumption, making AI development more sustainable. This shift is critical as concerns about AI’s environmental footprint grow.
Expanding AI’s Reach: Smaller, more efficient models can run on everyday devices, like smartphones and IoT gadgets. This opens new possibilities for applications, from real-time language translation to autonomous systems in cars.
The Bottom Line
Scaling laws have shaped AI’s past, but they no longer define its future. Smarter architectures, better data handling, and efficient training methods are breaking the rules of traditional scaling. These innovations are making AI not just more powerful, but also more practical and sustainable.
The focus has shifted from brute-force growth to intelligent design. This new era promises AI that’s accessible to more people, environmentally friendly, and capable of solving problems in ways we’re just beginning to imagine. The scaling code isn’t just being broken—it’s being rewritten.
#ai#AI development#AI models#AI scaling laws#ai tools#applications#approach#architecture#artificial#Artificial Intelligence#attention#autonomous#autonomous systems#BERT#billion#breaking scaling laws in AI#Building#carbon#Cars#challenge#claude#claude 3#claude 3.5#code#Companies#cutting#data#datasets#deploying#Design
3 notes
·
View notes
Text
DATASETS IN FINTECH STARTUP WORLD
Here are some real-world examples of fintech companies using datasets to improve their services:
1. Personalized Financial Planning:
Mint: Mint aggregates financial data from various sources like bank accounts, credit cards, and investments to provide users with a holistic view of their finances. It then uses this data to offer personalized budgets, track spending habits, and suggest ways to save money.
Personal Capital: Similar to Mint, Personal Capital analyzes user data to provide personalized financial advice, including investment recommendations and retirement planning.
2. Credit Scoring and Lending:
Upstart: Upstart uses alternative data sources like education and employment history, in addition to traditional credit scores, to assess creditworthiness and provide loans to individuals who may be overlooked by traditional lenders. This expands access to credit and often results in fairer lending practices.
Kiva: Kiva uses a dataset of loan applications and repayment history to assess the risk of lending to individuals in developing countries. This data-driven approach allows them to provide microloans to entrepreneurs who lack access to traditional banking systems.
3. Fraud Detection:
Stripe: Stripe uses machine learning algorithms to analyze transaction data and identify potentially fraudulent activity. This helps protect businesses from losses and ensures secure online payments.
PayPal: PayPal employs sophisticated fraud detection systems that analyze vast amounts of data to identify and prevent unauthorized transactions, protecting both buyers and sellers.
4. Investment Platforms:
Robinhood: Robinhood uses data to provide users with insights into stock performance, market trends, and personalized investment recommendations. This makes investing more accessible and helps users make informed decisions.
Betterment: Betterment uses algorithms and data analysis to create diversified investment portfolios tailored to individual risk tolerance and financial goals. This automated approach simplifies investing and helps users achieve their long-term financial objectives.
These are just a few examples of how fintech companies leverage datasets to improve their services and provide better value to their customers.
#DATASETS IN FINTECH STARTUP WORLD#robinhood#betterment#stripe#paypal#datasets#fintech#startup#startups#fintech startup#kiva#upstart#Mint#Personal Capital
2 notes
·
View notes
Text
I bleed revolution. If your only anarchist actions are related to union organizing, then you’re not an anarchist, you’re a corporate puppet. Everything you do should work to subvert the current and future actions of the state and all of their tentacle corporate affiliations. If your only goal in life is to work under the orders of someone else, under someone else’s direction, with someone else’s instructions, then you’re not a human being. You’re chattel cattle at best. If a corporate pig tells or wants you to do something, then you should do the exact opposite, or else you’re just a pawn in a game of global corporate chess. Every one of your actions should be both a defensive and offensive maneuver. If you defend while you attack, you become one with your true purpose, which is to dismantle the state and all corporate authority. If you don’t think in a linear manner, then you’re not a part of their datasets, and they can’t predict your next move. You operate from outside of their datasets and what they think is your next move is never your next move. Then they start to doubt their own intelligence and all the false assumptions it’s based on, and the system starts to crumble. You use any means necessary, because that is your constitutional right, just as they use any means necessary to hold onto the power they stole from you. They stole your birthright, and it’s your legal duty as an American citizen to seek a redress of your grievances, using whatever it takes. Under no pretext.
#Revolution#constitution#anarchy#authority#system#corporate#American#America#birthright#dataset#datasets#AI#artificial intelligence#intelligence#CIA#anomaly#alien#UFO#wavelength#signals#amplitude#frequency
9 notes
·
View notes
Text
[embedded YouTube video]
Ever wondered what the datasets used to train AI look like? This video is a subset of ImageNet-1k (18k images) with some other metrics.
Read more on how I made it and see some extra visualizations.
Okay! I'll split this up by the elements in the video, but first I need to add some context about
The dataset
ImageNet-1k (aka ILSVRC 2012) is an image classification dataset - you have a set number of classes (in this case 1000) and each class has a set of images. It is the most popular version of ImageNet; the full dataset has around 21,000 classes.
ImageNet was built by searching online for images matching nouns from WordNet. From 2010 to 2017, yearly competitions were held to determine the best image classification model. It has greatly benefitted computer vision, driving the development of model architectures you've likely used without knowing it. See the accuracy progression here.
ResNet
Residual Network (or ResNet) is an architecture for image recognition introduced in 2015 to address the "vanishing/exploding gradients" problem (read the paper here). It achieved an accuracy of 96.43% (far better than randomly guessing among 1,000 classes!), winning first place back in 2015. I'll be using a smaller version of this model (ResNet-50), boasting an accuracy of 95%.
The scatter plot
If you look at the video long enough, you'll realize that similar images (eg. dogs, types of food) will be closer together than unrelated ones. This is achieved using two things: image embeddings and dimensionality reduction.
Image embeddings
In short, image embeddings are points in an n-dimensional space (read this post for more info on higher dimensions) - in this case made by chopping off the last layer of ResNet-50, so each image becomes a point in 1024-dimensional space.
The benefit of doing all of that rather than just comparing pixels between two images is that the model (built specifically for classification) only looks for features that make classification easier, preserving semantic information. For instance: you have 3 images of dogs, the first and second are the same breed, but the first and third share a matching background. If you compare pixels, the first and third images come out closer; if you use embeddings, the first and second come out closer because of the matching breed.
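For reference, embeddings like these can be pulled from a pretrained ResNet-50 roughly as follows. This is a sketch with torchvision, not necessarily the exact pipeline used for the video; note that the stock torchvision ResNet-50 yields a 2048-dimensional pooled feature, so exact sizes depend on the variant used:

```python
import torch
from torchvision import models, transforms
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # drop the classification layer, keep pooled features
backbone.eval()

preprocess = weights.transforms()        # resize / crop / normalize as the model expects

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(img).squeeze(0)      # pooled feature vector for one image

# vec = embed("some_image.jpg")          # hypothetical file path
```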
Dimensionality reduction
Now we have all these image embeddings that are grouped by semantic (meaning) similarity and we want to visualize them. But how? You can't possibly display a 1024-dimensional scatter plot to someone and for them to understand it. That's where dimensionality reduction comes into play. In this case, we're reducing 1024 dimensions to 2 using an algorithm called t-SNE. Now the scatter plot will be something we mere mortals can comprehend.
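In code, that reduction can look roughly like this with scikit-learn (the parameters are reasonable defaults, not necessarily those used for the video):

```python
import numpy as np
from sklearn.manifold import TSNE

embeddings = np.random.rand(1000, 1024).astype(np.float32)  # stand-in for real image embeddings

points_2d = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(embeddings)
print(points_2d.shape)   # (1000, 2) -> ready for a scatter plot
```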
Extra visualizations
Here's the scatter plot in HD:
This idea actually comes from an older project where I did this on a smaller dataset (about 8k images). The results were quite promising! You can see how each of the 8 classes is neatly separated, plus how differences in the subject's angle, surroundings, and color show up within each cluster.
Find the full-resolution image here
Similar images
I just compared every point to every other point (in the 2D space - it would be too computationally expensive otherwise) and kept the 6 closest points to each one. You can tell when the model misclassifies something because the related images are not similar to the one presented (eg. there's an image of a payphone but all of its "similar" images are bridges).
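A sketch of that brute-force neighbour lookup with scikit-learn (the original may have been implemented differently):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

points_2d = np.random.rand(18000, 2)       # stand-in for the t-SNE coordinates

nn = NearestNeighbors(n_neighbors=7).fit(points_2d)   # 7 = the point itself + 6 neighbours
_, idx = nn.kneighbors(points_2d)
neighbours = idx[:, 1:]                    # drop self-matches, keep the 6 closest images
print(neighbours[0])
```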
Pixel rarity
This one was pretty simple: I used a script to count the occurrences of pixel colors. Again, the idea comes from an older project where I counted the entire dataset, so I just reused those counts.
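Counting pixel colours takes only a few lines with NumPy and Pillow. This is a sketch; the original script may differ, and the file name is just a placeholder:

```python
import numpy as np
from PIL import Image
from collections import Counter

def color_counts(path: str) -> Counter:
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    return Counter(map(tuple, pixels))     # maps (r, g, b) -> number of occurrences

# counts = color_counts("frame.png")                   # hypothetical file
# rarest_first = counts.most_common()[::-1]            # least common colours first
```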
Extra visualization
Here are all the colors that appeared in the image, sorted by popularity, left to right, top to bottom
Some final stuff
MP means megapixel (one million pixels) - a 1000x1000 image is one megapixel in size.
That's all, thanks for reading. Feel free to ask questions and I'll try my best to respond to them.
3 notes
·
View notes
Text
Data Cleaning in Data Science
Data cleaning is an integral part of data preprocessing: removing or correcting inaccurate information within a data set. This could mean missing data, spelling mistakes, or duplicates, to name a few issues. Inaccurate information can cause problems during the analysis phase if it is not addressed at the earlier stages.
Data Cleaning vs Data Wrangling: Data cleaning focuses on fixing inaccuracies within your data set. Data wrangling, on the other hand, is concerned with converting the data’s format into one that can be accepted and processed by a machine learning model.
Data Cleaning steps to follow :
Remove irrelevant data
Resolve any duplicates issues
Correct structural errors if any
Deal with missing fields in the dataset
Zero in on any data outliers and remove them
Validate your data
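As a small illustration of these steps with pandas (the file and column names are invented for the example):

```python
import pandas as pd

df = pd.read_csv("raw_data.csv")                 # hypothetical input file

df = df.drop(columns=["internal_id"])            # remove an irrelevant column (invented name)
df = df.drop_duplicates()                        # resolve duplicate rows
df["country"] = df["country"].str.strip().str.title()   # fix structural/spelling inconsistencies
df["age"] = df["age"].fillna(df["age"].median()) # deal with missing fields
df = df[df["age"].between(0, 120)]               # drop obvious outliers
assert df["age"].notna().all()                   # validate before handing off
```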
At EduJournal, we understand the importance of gaining practical skills and industry-relevant knowledge to succeed in the field of data analytics / data science. Our certified program in data science and data analytics is designed to equip freshers and experienced professionals with the necessary expertise and hands-on experience so they are well equipped for the job.
URL : http://www.edujournal.com
#data_science#training#upskilling#irrevelant_data#duplicate_issue#datasets#validation#outliers#data_cleaning#trends#insights#machine_learning
2 notes
·
View notes
Text
This reminded me of the time I was doing social service for my bachelor's degree.
I'm a biologist. Back then (2007-2008ish, I guess? Don't remember, it's been a while lol) I joined the Ornithology Lab hoping to start my bachelor's thesis early (I did NOT but that's another story lmao). Part of my social service job involved transcribing lots (and I mean LOTS, there were journals dating back to the 80s) of field journals from past students into Excel spreadsheets and then entering the curated info into a special database designed by the Mexican environmental commission (CONABIO) for it to be accessible to other researchers and to add to the national biodiversity repository.
Oh, boy.
The spelling in plenty of those journals was TERRIBLE. And I'm not referring to the questionable spelling of scientific names (which can be truly difficult to memorize and write). I'm talking about the spelling of things like the alpha codes we ornithologists use to abbreviate either the scientific names or the standardized common names in English (HOW DO YOU MISSPELL FOUR / SIX LETTERS???), site identifiers, descriptions, field observations, etc. Heck, there were times when even the names of the observers were spelled differently ON THE SAME PAGE written BY THE SAME PERSON. Had at least one instance where a student regularly spelled his own name wrong and the head of the Laboratory didn't remember which spelling was the correct one, so we had to settle with the most common spelling of that student's name.
Considering all this information was gathered by fellow biology students during field practices (who in all likelihood were making these identifications with the aid of guidebooks and the professors' guidance), one would expect them to be able to write with certain grammatical consistency, as was to be expected of their academic level. But nope.
And yes, I know people can be dyslexic (or have other undiagnosed learning disabilities) and struggle with reading and writing, but some of those journals were written by people who were somewhat bordering on functional illiteracy, which I find truly baffling of people studying for a higher education degree.
Curating all that info was tortuous but I managed. And in the end I completed the mandatory 480 hours (and more!) of the social service necessary for graduation. Good grief, though. Reading OPs post gave me serious war flashbacks 😂
Working on a dataset of roadkill reports. state agency personnel CANNOT spell

#data collection#databases#datasets#fieldwork notes#personal anecdotes#i do miss those days tho#my adhd wasn't nearly as bad as it is right now and working on those datasets was truly stimulating#but sometimes it do be like that#especially when you have to gather information from untrained sources#but it's not the end of the world#oh and by the way#WELL DONE OP#thank you for your service
43K notes
·
View notes
Text
Medieval Astrology-Web data
Catalogue
Texts
Images

Source websites
0 notes
Text
Greetings, my fellow latent space explorers!
Today’s expedition into the past has yielded an exquisite relic: a 1920 medical book, over 1,400 pages of archaic wisdom and delightfully unsettling illustrations. A tome of knowledge, long since forgotten, yet now ripe for diffusion into the digital ether.
I shall be scanning, digitizing, and releasing this dataset into the wilds of open-source! Ensuring these century-old curiosities may live again through the sorcery of AI. Expect LoRAs for image models, text transcriptions for LLM fine-tuning, and perhaps a few unsettling excerpts to horrify and amuse.
But tell me, fellow explorers, are there any ethical paradoxes lurking in the shadows of this endeavor? Have I overlooked some arcane moral clause in the grimoire of responsible AI?
Let us discuss, lest I accidentally summon something truly cursed.
Aspire to inspire.


#training data#ai training#vintage books#public domain#ethical ai#datasets#doctor diffusion#open source#ai art community#aiartcommunity#datascience#1920s#ai art#ethics#ai
0 notes
Text
CDC Datasets
Update on the compressed single-file version of the CDC datasets: the ingest to the Internet Archive is now complete, and my redirect has been updated to point there.
You can pull the file from them directly, or use their torrent.
That link again: https://dave.io/go/cdc
0 notes
Text
Skewing Large Language Models By Providing Them An Artificially Limited Dataset
0 notes
Text
The Many Faces of Reinforcement Learning: Shaping Large Language Models
New Post has been published on https://thedigitalinsider.com/the-many-faces-of-reinforcement-learning-shaping-large-language-models/
The Many Faces of Reinforcement Learning: Shaping Large Language Models


In recent years, Large Language Models (LLMs) have significantly redefined the field of artificial intelligence (AI), enabling machines to understand and generate human-like text with remarkable proficiency. This success is largely attributed to advancements in machine learning methodologies, including deep learning and reinforcement learning (RL). While supervised learning has played a crucial role in training LLMs, reinforcement learning has emerged as a powerful tool to refine and enhance their capabilities beyond simple pattern recognition.
Reinforcement learning enables LLMs to learn from experience, optimizing their behavior based on rewards or penalties. Different variants of RL, such as Reinforcement Learning from Human Feedback (RLHF), Reinforcement Learning with Verifiable Rewards (RLVR), Group Relative Policy Optimization (GRPO), and Direct Preference Optimization (DPO), have been developed to fine-tune LLMs, ensuring their alignment with human preferences and improving their reasoning abilities.
This article explores the various reinforcement learning approaches that shape LLMs, examining their contributions and impact on AI development.
Understanding Reinforcement Learning in AI
Reinforcement Learning (RL) is a machine learning paradigm where an agent learns to make decisions by interacting with an environment. Instead of relying solely on labeled datasets, the agent takes actions, receives feedback in the form of rewards or penalties, and adjusts its strategy accordingly.
For LLMs, reinforcement learning ensures that models generate responses that align with human preferences, ethical guidelines, and practical reasoning. The goal is not just to produce syntactically correct sentences but also to make them useful, meaningful, and aligned with societal norms.
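As a minimal illustration of reward-driven learning, here is a toy two-armed bandit, far simpler than anything used for LLMs, with made-up payout probabilities:

```python
import random

# A minimal reward-feedback loop: an epsilon-greedy "agent" that adjusts its strategy
# from rewards alone. Purely illustrative; RL for LLMs is vastly more complex.
values, counts = [0.0, 0.0], [0, 0]
true_payout = [0.3, 0.7]                       # hidden reward probabilities (made up)

random.seed(0)
for step in range(1000):
    arm = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update

print([round(v, 2) for v in values])           # the better arm ends up with the higher estimate
```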
Reinforcement Learning from Human Feedback (RLHF)
One of the most widely used RL techniques in LLM training is RLHF. Instead of relying solely on predefined datasets, RLHF improves LLMs by incorporating human preferences into the training loop. This process typically involves:
Collecting Human Feedback: Human evaluators assess model-generated responses and rank them based on quality, coherence, helpfulness and accuracy.
Training a Reward Model: These rankings are then used to train a separate reward model that predicts which output humans would prefer.
Fine-Tuning with RL: The LLM is trained using this reward model to refine its responses based on human preferences.
This approach has been employed in improving models like ChatGPT and Claude. While RLHF has played a vital role in making LLMs more aligned with user preferences, reducing biases, and enhancing their ability to follow complex instructions, it is resource-intensive, requiring a large number of human annotators to evaluate and fine-tune AI outputs. This limitation led researchers to explore alternative methods, such as Reinforcement Learning from AI Feedback (RLAIF) and Reinforcement Learning with Verifiable Rewards (RLVR).
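A common way to train such a reward model is with a pairwise (Bradley-Terry style) loss over ranked responses. The sketch below is illustrative rather than any lab's exact recipe:

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(reward_chosen: torch.Tensor,
                         reward_rejected: torch.Tensor) -> torch.Tensor:
    """Push the reward of the human-preferred response above the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# toy example: scalar rewards the model assigned to preferred vs. rejected answers
chosen = torch.tensor([1.2, 0.3, 0.8])
rejected = torch.tensor([0.5, 0.9, -0.1])
print(pairwise_reward_loss(chosen, rejected))
```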
RLAIF: Reinforcement Learning from AI Feedback
Unlike RLHF, RLAIF relies on AI-generated preferences rather than human feedback to train LLMs. It operates by employing another AI system, typically an LLM, to evaluate and rank responses, creating an automated reward system that can guide the LLM’s learning process.
This approach addresses scalability concerns associated with RLHF, where human annotations can be expensive and time-consuming. By employing AI feedback, RLAIF enhances consistency and efficiency, reducing the variability introduced by subjective human opinions. Although RLAIF is a valuable approach for refining LLMs at scale, it can sometimes reinforce biases already present in the evaluating AI system.
Reinforcement Learning with Verifiable Rewards (RLVR)
While RLHF and RLAIF rely on subjective feedback, RLVR utilizes objective, programmatically verifiable rewards to train LLMs. This method is particularly effective for tasks that have a clear correctness criterion, such as:
Mathematical problem-solving
Code generation
Structured data processing
In RLVR, the model’s responses are evaluated using predefined rules or algorithms. A verifiable reward function determines whether a response meets the expected criteria, assigning a high score to correct answers and a low score to incorrect ones.
This approach reduces dependency on human labeling and AI biases, making training more scalable and cost-effective. For example, in mathematical reasoning tasks, RLVR has been used to refine models like DeepSeek’s R1-Zero, allowing them to self-improve without human intervention.
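The defining feature of RLVR is that the reward is just code. Here is a toy example of a verifiable reward function for numeric answers; the exact checking logic is made up for illustration:

```python
def verifiable_reward(model_answer: str, ground_truth: float) -> float:
    """Return 1.0 if the model's final numeric answer matches the known result, else 0.0."""
    try:
        return 1.0 if abs(float(model_answer.strip()) - ground_truth) < 1e-6 else 0.0
    except ValueError:
        return 0.0   # unparsable answers get no reward

print(verifiable_reward("42", 42.0))          # 1.0
print(verifiable_reward("forty-two", 42.0))   # 0.0
```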
Optimizing Reinforcement Learning for LLMs
In addition to the aforementioned techniques that govern how LLMs receive rewards and learn from feedback, an equally crucial aspect of RL is how models adapt (or optimize) their behavior, or policies, based on these rewards. This is where advanced optimization techniques come into play.
Optimization in RL is essentially the process of updating the model’s behavior to maximize rewards. While traditional RL approaches often suffer from instability and inefficiency when fine-tuning LLMs, new approaches have been developed for optimizing LLMs. Here are leading optimization strategies used for training LLMs:
Proximal Policy Optimization (PPO): PPO is one of the most widely used RL techniques for fine-tuning LLMs. A major challenge in RL is ensuring that model updates improve performance without sudden, drastic changes that could reduce response quality. PPO addresses this by introducing controlled policy updates, refining model responses incrementally and safely to maintain stability. It also balances exploration and exploitation, helping models discover better responses while reinforcing effective behaviors. Additionally, PPO is sample-efficient, using smaller data batches to reduce training time while maintaining high performance. This method is widely used in models like ChatGPT, ensuring responses remain helpful, relevant, and aligned with human expectations without overfitting to specific reward signals.
Direct Preference Optimization (DPO): DPO is another RL optimization technique that focuses on directly optimizing the model’s outputs to align with human preferences. Unlike traditional RL algorithms that rely on complex reward modeling, DPO directly optimizes the model based on binary preference data, which means it simply determines whether one output is better than another. The approach relies on human evaluators to rank multiple responses generated by the model for a given prompt. It then fine-tunes the model to increase the probability of producing higher-ranked responses in the future. DPO is particularly effective in scenarios where obtaining detailed reward models is difficult. By simplifying RL, DPO enables AI models to improve their output without the computational burden associated with more complex RL techniques.
Group Relative Policy Optimization (GRPO): One of the latest developments in RL optimization techniques for LLMs is GRPO. Typical RL techniques like PPO require a separate value model to estimate the advantage of different responses, which demands high computational power and significant memory; GRPO eliminates the need for that value model by using reward signals from multiple generations for the same prompt. Instead of comparing outputs to a static value model, it compares them to each other, significantly reducing computational overhead. One of the most notable applications of GRPO was seen in DeepSeek R1-Zero, a model that was trained entirely without supervised fine-tuning and managed to develop advanced reasoning skills through self-evolution.
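To make the PPO and DPO objectives described above concrete, here are schematic PyTorch versions of the clipped surrogate and the DPO loss. These are simplified forms; real training adds batching, KL penalties, and many other details:

```python
import torch
import torch.nn.functional as F

def ppo_clipped_objective(logp_new, logp_old, advantage, clip_eps=0.2):
    """PPO: limit how far the updated policy can move from the old one."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantage
    return torch.min(unclipped, clipped).mean()   # maximize this (minimize its negative)

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO: raise the preferred response's likelihood relative to a frozen reference model."""
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -F.logsigmoid(margin).mean()
```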
The Bottom Line
Reinforcement learning plays a crucial role in refining Large Language Models (LLMs) by enhancing their alignment with human preferences and optimizing their reasoning abilities. Techniques like RLHF, RLAIF, and RLVR provide various approaches to reward-based learning, while optimization methods such as PPO, DPO, and GRPO improve training efficiency and stability. As LLMs continue to evolve, the role of reinforcement learning is becoming critical in making these models more intelligent, ethical, and capable of reasoning.
#agent#ai#AI development#AI models#Algorithms#applications#approach#Article#artificial#Artificial Intelligence#Behavior#biases#binary#challenge#chatGPT#claude#data#datasets#Deep Learning#deepseek#deepseek-r1#development#direct preference#direct preference optimization#DPO#efficiency#employed#Environment#ethical#Evolution
3 notes
·
View notes
Text
Finalized a dataset for the 1st Capstone project at #mlzoomcamp, led by Alexey Grigorev @DataTalksClub.
0 notes
Text
Salary dataset
(1) 250 celebrity endorsement earnings records
(2) 200 luxury advertisement videos
(3) Financial data of 9 luxury companies
0 notes
Text
Will there be an algorithm to airbrush away our worst features? Must we buy this privatization of culture? Does the postmodern critique of the museum, the call for tearing down its walls, do anything but free art for the shopping mall? I’ll take Bilbao, thanks.
Saunders, W. S. (ed.) (2017). Commodification and Spectacle in Architecture: A Harvard Design Magazine Reader. Harvard Design Magazine.
0 notes