# Data Historian Market
phantomrose96 · 9 months ago
Text
If anyone wants to know why every tech company in the world right now is clamoring for AI like drowned rats scrabbling to board a ship, I decided to make a post to explain what's happening.
(Disclaimer to start: I'm a software engineer who's been employed full time since 2018. I am neither a historian nor an overconfident YouTube essayist, so this post is my working knowledge of what I see around me and the logical bridges between pieces.)
Okay anyway. The explanation starts further back than what's going on now. I'm gonna start with the year 2000. The Dot Com Bubble just spectacularly burst. The model of "we get the users first, we learn how to profit off them later" went out in a no-money-having bang (remember this, it will be relevant later). A lot of money was lost. A lot of people ended up out of a job. A lot of startup companies went under. Investors were left with a sour taste in their mouths and, in general, investment in the internet stayed pretty cool for that decade. This was, in my opinion, very good for the internet, as it was an era not suffocating under the grip of mega-corporation oligarchs and was, instead, filled with Club Penguin and I Can Has Cheezburger websites.
Then around the 2010-2012 years, a few things happened. Interest rates got low, and then lower. Facebook got huge. The iPhone took off. And suddenly there was a huge new potential market of internet users and phone-havers, and the cheap money was available to start backing new tech startup companies trying to hop on this opportunity. Companies like Uber, Netflix, and Amazon either started in this time, or hit their ramp-up in these years by shifting focus to the internet and apps.
Now, every start-up tech company dreaming of being the next big thing has one thing in common: they need to start off by getting themselves massively in debt. Because before you can turn a profit you need to first spend money on employees and spend money on equipment and spend money on data centers and spend money on advertising and spend money on scale and and and
But also, everyone wants to be on the ship for The Next Big Thing that takes off to the moon.
So there is a mutual interest between new tech companies, and venture capitalists who are willing to invest $$$ into said new tech companies. Because if the venture capitalists can identify a prize pig and get in early, that money could come back to them 100-fold or 1,000-fold. In fact it hardly matters if they invest in 10 or 20 total bust projects along the way to find that unicorn.
But also, becoming profitable takes time. And that might mean being in debt for a long, long time before that rocket ship takes off to make everyone onboard a gazillionaire.
But luckily for tech startup bros and venture capitalists, being in debt in the 2010s was cheap, and it only got cheaper between 2010 and 2020. If people could secure loans for ~3% or 4% annual interest, well then a $100,000 loan only really costs $3,000 of interest a year to keep afloat. And if inflation is higher than that, or at least similar, you're still beating the system.
So from 2010 through early 2022, times were good for tech companies. Startups could take off with massive growth, showing massive potential for something, and venture capitalists would throw infinite money at them in the hopes of pegging just one winner who will take off. And supporting the struggling investments or the long-haulers remained pretty cheap to keep funding.
You hear constantly about "Such and such app has 10-bazillion users gained over the last 10 years and has never once been profitable", yet the thing keeps chugging along because the investors backing it aren't stressed about the immediate future, and are still banking on that "eventually" when it learns how to really monetize its users and turn that profit.
The pandemic in 2020 had a magnifying-glass-in-the-sun effect on this, as EVERYTHING was forcibly moved online, which pumped a ton of money and workers into tech investment. Simultaneously, money got really REALLY cheap, bottoming out with historic lows for interest rates.
Then the tide changed with the massive inflation that struck in late 2021. This all-gas, no-brakes state of things was also contributing to off-the-rails inflation (along with your standard-fare greedflation and price gouging, given the extremely convenient excuses of pandemic hardships and supply chain issues). The Federal Reserve whipped out interest rate hikes to try to curb this huge inflation, which is like a fire extinguisher dousing and suffocating your really-cool, actively-on-fire party where everyone else is burning but you're in the pool. And then they did this more, and then more. And the financial climate followed suit. And suddenly money was not cheap anymore, and new loans became expensive, because loans that used to compound at 2% a year were now compounding at 7 or 8%, which, in the language of compounding, is a HUGE difference. A $100,000 loan at a 2% interest rate, if not repaid a single cent in 10 years, accrues to $121,899. A $100,000 loan at an 8% interest rate, if not repaid a single cent in 10 years, more than doubles to $215,892.
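(If you want to sanity-check those loan figures, here's a minimal sketch of the compound-interest math; the $100,000 principal, the 2% and 8% rates, and the 10-year term are all taken straight from the example above.)

```python
# Balance of a loan compounding annually with nothing repaid:
#   balance = principal * (1 + rate) ** years
def loan_balance(principal: float, annual_rate: float, years: int) -> float:
    """What the debt grows to if not a single cent is repaid."""
    return principal * (1 + annual_rate) ** years

print(f"${loan_balance(100_000, 0.02, 10):,.2f}")  # $121,899.44 (the post's $121,899)
print(f"${loan_balance(100_000, 0.08, 10):,.2f}")  # $215,892.50 (the post's $215,892)
```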
Now it is scary and risky to throw money at "could eventually be profitable" tech companies. Now investors are watching companies burn through their current funding and, when the companies come back asking for more, investors are tightening their coin purses instead. The bill is coming due. The free money is drying up and companies are under compounding pressure to produce a profit for their waiting investors who are now done waiting.
You get enshittification. You get quality going down and price going up. You get "now that you're a captive audience here, we're forcing ads or we're forcing subscriptions on you." Don't get me wrong, the plan was ALWAYS to monetize the users. It's just that it's come earlier than expected, with way more feet-to-the-fire than these companies were expecting. ESPECIALLY with Wall Street as the other factor in funding (public) companies, where Wall Street exhibits roughly the same temperament as a baby screaming, crying, upset that it's soiled its own diaper (maybe that's too mean a comparison to babies), and now companies are being put through the wringer for anything LESS than the infinite growth that Wall Street demands of them.
Internal to the tech industry, you get MASSIVE, widespread layoffs. You get an industry where it used to be easy to land multiple job offers shriveling up and leaving recent graduates in a desperately awful situation where no company is hiring and the market is flooded with laid-off workers trying to get back on their feet.
Because those coin-purse-clutching investors DO love virtue-signaling efforts from companies that say "See! We're not being frivolous with your money! We only spend on the essentials." And this is true even for MASSIVE, PROFITABLE companies, because those companies' value is based on the Rich Person Feeling Graph (their stock) rather than the literal profit money. A company making a genuine gazillion dollars a year still tears through layoffs and freezes hiring and removes the free batteries from the printer room (totally not speaking from experience, surely) because the investors LOVE when you cut costs and take away employee perks. The "beer on tap, ping pong table in the common area" era of tech is drying up. And we're still unionless.
Never mind that last part.
And then in early 2023, AI (more specifically, ChatGPT, which is OpenAI's Large Language Model creation) tears its way into the tech scene with a meteor's amount of momentum. Here's Microsoft's prize pig, which it invested heavily in and is gallivanting around the pig-show with, to the desperate jealousy and rapture of every other tech company and investor wishing it had that pig. And for the first time since the interest rate hikes, investors have dollar signs in their eyes, both venture capital and Wall Street alike. They're willing to restart the hose of money (even with the new risk) because this feels big enough for them to take the risk.
Now all these companies, who were in varying stages of sweating as their bill came due, or wringing their hands as their stock prices tanked, see a single glorious gold-plated rocket up out of here, the likes of which haven't been seen since the free money days. It's their ticket to buy time, and buy investors, and say "see THIS is what will wring money forth, finally, we promise, just let us show you."
To be clear, AI is NOT profitable yet. It's a money-sink. Perhaps a money-black-hole. But everyone in the space is so wowed by it that there is a widespread and powerful conviction that it will become profitable and earn its keep. (Let's be real, half of that profit "potential" is the promise of automating away jobs of pesky employees who peskily cost money.) It's a tech-space industrial revolution that will automate away skilled jobs, and getting in on the ground floor is the absolute best thing you can do to get your pie slice's worth.
It's the thing that will win investors back. It's the thing that will get the investment money coming in again (or get it secondhand, if the company can be the PROVIDER of something needed for AI, which other venture-backed companies will pay handsomely for). It's the thing companies are terrified of missing out on, lest it leave them utterly irrelevant in a future where not having AI-integration is like not having a mobile phone app for your company or not having a website.
So I guess, to reiterate my earlier point:
Drowned rats. Swimming to the one ship in sight.
35K notes
rjzimmerman · 7 months ago
Text
Frontier myth vilified the California grizzly. Science tells a new story. (Washington Post)
The grizzly, a subspecies of brown bear, has long held a place in mainstream American myth as a dangerous, even bloodthirsty creature. Its scientific name, Ursus arctos horribilis, means “the horrible bear.” But that image is being challenged by a new set of studies that combine modern biochemical analysis, historical research and Indigenous knowledge to bring the story of the California grizzly from fiction to fact.
In January, a team of experts led by University of California at Santa Barbara ecologist Alexis Mychajliw published a paper in the Proceedings of the Royal Society B about the diet of the California grizzly bear and how that influenced its extinction. The results challenge virtually every aspect of the bear’s established story.
“Pretty much everything that I thought I knew about these animals turned out to be wrong,” said Peter Alagona, an ecologist and historian at UCSB and co-author of the study.
Much of the grizzly bear’s long-standing narrative comes from stories, artwork and early photographs depicting California grizzlies as huge in size and aggressive in nature. Many of these reports, which found wide readership in newspapers elsewhere in the West and in the cities back East, were written by what Alagona calls the Californian influencers of their time.
“They were trying to get rich and famous by marketing themselves as these icons of the fading frontier,” Alagona said. “A lot of the historical sources that we have about grizzlies are actually not about grizzlies. They’re about this weird Victorian 19th-century celebrity culture.”
The team of ecologists, historians and archivists compared the image of California grizzlies from these frontier reports to harder data in the form of bear bones from museum collections all over the state.
The frontier myth had painted the California bears as larger than grizzlies elsewhere in the country, but the bone analysis revealed that they were the same size and weight, about 6 feet long and 440 pounds for the average adult.
In an even larger blow to the popular story of the vicious grizzly, the bones showed that before 1542, when the first Europeans arrived, the bears were only getting about 10 percent of their diet from preying on land animals. They were primarily herbivores, surviving on a varied diet of acorns, roots, berries, fish and occasionally larger prey such as deer.
As European-style farming and ranching began to dominate the landscape, grizzlies became more like the stories those frontier influencers were telling about them. The percentage of meat in their diet rose to about 25 percent, probably in large part because of the relative ease of catching a fenced-in cow or sheep compared to a wild elk.
Colonialism forced so many changes on the California landscape so quickly, affecting every species that the bears ate and interacted with, that the exact cause of this change will be difficult to ever fully understand.
Still, grizzlies were never as vicious or purely predatory as the stories made them out to be. The narrative of the huge killer bear instead fed a larger settler story of a landscape — and a people — that could not coexist with the settlers themselves. And that story became a disaster for more than just bears.
Although we will never have exact numbers, experts agree that hundreds of thousands of Indigenous people were living in what is now California before White settlers arrived. One frequently cited estimate puts the population at 340,000.
By 1900, that number had been slashed by more than 95 percent to around 16,000 surviving tribal members throughout the state. Eliminating the bear and the vast majority of California’s Indigenous people can be seen as parts of the same concerted effort to replace one landscape — and one set of stories — with another.
“The annihilation of the California grizzly bear was part of a much larger campaign of annihilation,” Alagona said. “I think it’s clear that what happened in California meets the legal definition of a genocide. But in a way, it was even more than that, because these were not just attempts to eliminate groups of people. These were attempts to destroy an entire world.”
183 notes
itsbansheebitch · 7 months ago
Text
More thoughts
I get both sides, but I feel a little confused that they couldn't find four people among their 25+ employees:
Data analyst (Are you seriously telling me you couldn't personally email or even just HIRE MatPat's team, who do data analytics as part of Theorist Media, to help??? The man would be overjoyed to help???)
Editor (Put the first $6 towards a can of coffee grounds, dude)
PR Team (Even, like, a single person, please, for the love of god)
Business Major (Or literally anyone that has taken a home ec/budgeting/personal finance class)
First, the Dish Granted series was started when gold leaf burgers were novel; now it's seen as tone deaf (for obvious reasons). It should have shifted to something like interviews with the people who make that kind of food, or local businesses (like parmesan cheese shops in Parma, Italy), or the history of food (like talking about the history of modern Native American slavery on Californian wine vineyards). Not to mention the untapped potential of Food Fraud topics. Either shift it, or scrap it. Any data analyst or chronically online person could tell you that.
Second, why did you keep "anyone can afford $6 a month" in? Are the editors asleep at the wheel? Are they overworked? What is going on? You know damn well not to make generalizations about what people can afford. That's NEVER a good idea, especially when you KNOW (because YT gives you analytics) that most of your viewers are young (16/18-30/35 range, I'd guess) and probably either 1, are still in school and aren't paid well/don't have jobs OR 2, aren't paid well and are tired of people's shit, like people who own businesses talking about "tough financial decisions." To them, Watcher isn't going to look different from the other people talking like that, because this was so sudden, with no input from fans, and in the video you hear shit like "anyone can afford [X]." To be frank, it wouldn't really matter what the amount is, because that generalization goes against the message they have stood by for years. THAT is a slap in the face.
Third, what are yall doing with the budgeting? Every artist has a right to make art that they are proud of. Every artist deserves to have their work seen if they so choose. Every artist deserves to make a living. HOWEVER, there are MANY options online when it comes to making money, especially on YT. You could get into marketing, data analysis, expanding your demographic, looking at what people are interested in right now VS what will stand the test of time (not gold leaf burgers), etc.
You have to either have these skills, develop these skills, or hire someone to do it for you. It's understandable that you would want a team behind the production, but I find 25+ employees to be WAY too many people, especially in LA. Bailey Sarian has a Dark History section on her YT (and Spotify podcast) where she has hired historians to help make sure her episodes are as accurate as possible. You've caught heat before from Puppet History's missing & incorrect info; you should do the same. She has about three (3) "intermissions" per episode for ad breaks. I never see anyone complain. People WOULD listen to yall talk for that long (1+ hour videos), tbh, though that's not necessary.
Why are yall out here with Teslas, expensive food, new gear, scripts (where there weren't scripts before; PH is different, that makes sense), and "better than TV" level sets??? I need to put your accountant on this week's church prayer list, what the actual hell??? Y'all, this video is literally the meme:
Guys help me budget:
LA Rent: 2K per month
Videos: 100K per vid
25+ Employees: God only knows
New stuff for videos: Don't get me started
Like, are you serious?
You have a right to do whatever you want with your art. You have a right to charge whatever you'd like for that art. You have a right to make a living from your art and you have a right to ask your fans for money.
Your fans have a right to be angry when they've been supporting yall for, what, almost 10 years? They have a right to choose when and where to spend their money even when you've made an impact. They have a right to feel betrayed, especially when there are better options (like Nebula or consulting with Theorist Media).
Fans DO NOT have a right to be racist to any members of Watcher, now that they have made a decision they do not agree with.
I personally think this is a really silly decision that could have been solved (haha, solved) with a simple YT poll, but apparently we had to get... this. I respect their decision, I just don't think it was a smart one. I wish them the best, and I hope they find a better solution. Any further comment from me will depend on what steps they take next.
53 notes
transmutationisms · 1 year ago
Note
How do you find the time to read all your book recs?? Also would you mind talking about your process for researching specific topics :)
i generally only make rec lists for things i have enough familiarity with to navigate the literature so, you have to keep in mind those lists are sometimes literally a decade+ of cumulative reading on my end. i do also sometimes include texts i haven't read in their entirety, or occasionally even ones i've only come across in footnotes but still think are foundational or relevant enough to warrant a rec.
as to my research process: there's no single answer here because the sort of research i do will depend on what questions i'm trying to answer. usually if i'm starting to look at a topic completely from scratch, i'll ask someone who publishes in that area what the major recent works are, then scan a few of them. i might 'snowball' those texts (read the works they cite in their footnotes) but, that strategy has limited utility because it only goes backward in time and sometimes a recent or uncited text can be incredibly valuable. so there's a fair amount of bumbling around in the secondary literature at this point. some academic journals maintain bibliographies for their subfields, which are not comprehensive but can be useful; i usually also do a certain amount of keyword fuckery in my library's database. sometimes i waste a lot of time at this point chasing leads that turn out to be irrelevant, or i discover that a question i was chasing is really better tackled from an entirely different direction. shit happens.
at some point i usually reach a stage where i need to look at some primary sources, because i'm oriented enough in the major issues to identify spots where previous researchers haven't made full use of historical records, or may be interpreting them in a way i disagree with. so, what exactly i'm looking for now really varies. sometimes i just want to read the primary texts that another historian is commenting on: for example, the last few months i was trawling through the french national library's archives to see what people were saying in print about a specific historical figure between about 1778 and 1862. other times i might want population data or land records: births, deaths, cholera infections, records of church property sales, &c. depending on, again, what sorts of questions you're asking, anything might have useful information to you: postmortem personal auction catalogues have given me some mileage, along with wills and personal correspondence. i have a committee member who collects and analyses postcards often being sold for pennies at flea markets out of people's grandparents' attics, and another who has an ongoing project looking at a zillion editions of a specific children's book printed in the late 19th century. along the way, as i look at primary sources, i will typically go back and forth to more secondary literature, as i find new topics that might be relevant or help me contextualise what i'm looking at. i can't ever really plan these things out systematically; i just follow what looks promising and interesting and see where it leads me.
another thing to consider is that the primary sources sometimes tell me useful information directly in their capacity as material objects. what type of paper is used, what personal or library stamps appear on the cover, who's the publisher, how many editions did it go through, are the print and typeset jobs sloppy, where was this copy found or preserved? these sorts of details tell me about how people reacted to the text, its author, and the ideas within, which can be a valuable part of whatever investigation i'm trying to conduct. sometimes i end up chasing down information on a publisher or the owner whose personal library a book or piece of ephemera came out of; there are people who research processes of preservation, printing, &c in themselves, which has yielded some fascinating studies in recent decades.
at some point, if it's a research project i'm trying to communicate to other people, i will switch to writing mode, where i try to organise ^^ all of that in my head, and form a coherent narrative or argument that i think is worth making. this might be revisionist in nature ('people have argued before that such and such was x way or historical actors thought about it like y, but what i have here indicates we should actually understand it in the context of z') or it might be more like, "hey, i found this thing i don't think anyone knows about!" or anything else. again, the way you put together a research project will vary so widely depending on what you're researching, and why, and why you think it matters and to whom.
also, i should emphasise that what i've written here isn't necessarily something that happens on a strict or compressed timeline. i'm working on a dissertation, so for that topic, i do have reasons i want to complete parts at certain times, unfortunately. but i also have research projects that i just chip away at for fun, that i've had on various backburners for literally years, that i might sometimes write about (eg, on here) without necessarily ever planning to subject them to the hegemon of academic publishing. i think knowledge dissemination is great and to that end i love to talk to people about what i'm researching and hear about their stuff as well. but, i also think research projects can be fun / rewarding / &c when they're completely for your own purposes, untimed, unpublished, &c &c. i guess i'm just saying, publishing and research conventions and rules sometimes have purposes (like "make it possible to publish this as a book in the next 5 years") but don't get so hung up on those rules that they prevent you from just researching something for any number of other reasons. there are so many ways to skin a cat 📝
47 notes
nicklloydnow · 8 months ago
Text
“‘The hope that political action will gradually humanise industrial society has given way to a determination to survive the general wreckage or, more modestly, to hold one's own life together in the face of mounting pressures.’ American historian and cultural critic Christopher Lasch's pessimistic prognosis of the shifting relationship of individuals to society and to each other in The Minimal Self was published 40 years ago. It might have been written yesterday.
From the late 1970s, Lasch published a series of books, most notably The Culture of Narcissism, The Minimal Self and The Revolt of the Elites, that prefigured many contemporary debates, about culture wars, the rise of a “liberal elite”, the corrosiveness of individualism, the encroachment of the market into social life, the creation of a celebrity culture, the rise of a “therapeutic” mindset.
(…)
For Lasch, the combination of consumer capitalism, competitive individualism and the abandonment by radicals of campaigns for material change in favour of demands for cultural transformation, had led to the emergence of a new narcissistic personality type. Lasch did not mean narcissism in the colloquial sense, such as a Trump-like figure, bursting with “self-centredness, boastfulness, feelings of entitlement and a need for admiration”, as one profile put it. Rather, drawing on psychoanalysis, Lasch was describing an individual who could not distinguish between themselves and the world beyond and so came to “see the world as a mirror, more particularly as a projection of one’s own fears and desires”.
It was a beleaguered self rather than an overbearing one. “The new Narcissus,” Lasch wrote, “gazes at his own reflection, not so much in admiration as in unremitting search of flaws, signs of fatigue, decay”, seeking “relief from the burden of selfhood”. He described people as increasingly yearning for contact and intimacy with others, yet fearful of the pain of engagement.
(…)
Perhaps the current theme that speaks most to Laschian fears is the growing concern about what a report last year from the US surgeon general called “an epidemic of loneliness”, an alarming rise in the social disconnectedness of people. In Britain, Tracey Crouch was in 2018 appointed as Britain’s first minister for loneliness following a report from the Jo Cox commission on loneliness.
Against this background came a study last week that compared perceptions of loneliness among middle-aged people in the US and 13 European nations, examining data over the past two decades from surveys. It found, perhaps unsurprisingly, that Americans seemed the loneliest, followed by Britons. It is a finding that fits in with the general perception of Britain and the US as societies with the greatest stress on individualism and therefore more likely to nurture a sense of loneliness.
(…)
There is a deeper issue, too: the tendency to individualise social issues, whether poverty or unemployment, to view them as psychological dispositions or even as moral failure. Loneliness, too, is frequently framed as a psychological condition, or mental health problem, the product of narcissism or self-obsession.
Forty years ago, Lasch was trying to show how social changes were distorting relationships, and to describe people’s attempts to negotiate a new world. But his psychoanalytical eye often overwhelmed his social vision and what many took from his work was less his social critique than his delineation of a new, narcissistic personality type. The end point of his analysis (the emergence of a public disconnected from one another and so more self-centred) became, instead, the starting point for explanation – that people’s narcissism and self-obsession explained their disconnectedness and the erosion of communal bonds.
This is even more true today. Too much of contemporary discussion about the impact of social and technological changes on people’s psychology – from the influence of social media on the wellbeing of the young to the effect of hyper-individualism on our sense of self – fetishises the psychology at the expense of social analysis. We look for loneliness inside our heads when its source lies all around us, in the destruction of collective life, the erosion of communal bonds, the ruin of civil society, the squeezing of public spaces. We could do with obsessing less about personality types than about the obsession with the psychological at the expense of the social.”
3 notes
fatehbaz · 2 years ago
Text
In other words, the planetary evokes what we call in French le vivant, which in English is something like “the living world.”
Le vivant is, for me, the planetary in its multiplicity [...]. It is true that a key driver of the process of planetarization is capitalism. [...] To some extent, the market has become a totality, or in any case our core moral experience. [...] Can we rely on infrastructures that have, to some extent, contributed to turning the world into a burning house? [...]
We need to begin by agreeing on what is at stake. From an African perspective, the core of the problem is the precariousness of life. [...] When I look at cosmologies of existence among the Dogon in Mali, or among the Yoruba in Nigeria or other communities in the Congo Basin, what strikes me is the central place these cultures give to the principle of animation — with the sharing of vital breath. Breath is a right that is universal, in the sense that we all breathe [...]. We also share the vital breath. [...] In that sense, we have here cosmogonies that are not at all convinced that there is a fundamental difference between the human subject and the world around it [...]. Everything is an effect of power, an agency that is shaped by all. [...]
---
[W]e are increasingly surrounded by multiple and expanding forms of calculation [...] . The integration of algorithms and big data analysis in the biological sphere is not only bringing with it a greater belief in techno-positivism, in modes of statistical thought, it’s also paving the way for regimes of assessment of the natural world, modes of prediction and analysis that are treating life itself as a computable object. This is [...] affecting not only our political imagination, but also the ways in which we understand what knowledge stands for, and what is it is all about. [...]
[W]e are experiencing a clash of temporalities: geological time, the deep time of those processes that fashioned our terrestrial home; historical time; and experiential time. All these times now fold in on one another. We are not used to thinking of time as simultaneous. We think of time as linear: past, present, future. So how do we begin to think about time in a way that takes these concatenations seriously? [...]
[T]hat’s what the Anthropocene shows us. As the historian Dipesh Chakrabarty has argued, there’s no longer a social history separate from natural history. That is over. Human history and Earth history are now indivisible. The epoch we have entered into is one of indivisibility, of entanglement, of concatenations. [...]
---
And also, speaking for the planet and listening to the planet are not exactly the same things. Maybe the first step is to listen. The question then becomes, how do we listen to the planet? Does the planet speak for itself? [...] [W]e have to get out of a certain epistemology that has been premised on the fact that humans are the only speaking entity, that what distinguishes us is that we mastered language and the others didn’t. But we now have studies showing that plants speak, that forests speak: a de-monopolization of the faculty of speech, of language.
When we look into the archives of the whole world, not just the archives of the West, broadly speaking we find knowledges of how other-than-humans speak — and how humans, or some humans, have learned to listen to those languages. This requires a radical decentering, premised on the capacity to know together, to generate knowledge together.
The French term for knowledge is connaissance, a word that literally means “being born together.”
---
Text by: Achille Mbembe. As interviewed by Nils Gilman. “How To Develop A Planetary Consciousness.” Noema (Noema Magazine). 11 January 2022. [Some paragraph breaks added by me.]
49 notes
markwateneymemorialcrater · 6 months ago
Text
If I had practically endless money, one of my pet projects would be to make a network of long-term libraries: develop long-term data storage technology, and then archive all sorts of information in these geologically secure facilities.
Specifically I would want to store information on our technology and tools. If possible, even store some tools and machinery in these libraries.
Sure, they would almost certainly become known and marketed as apocalypse libraries to help jump-start human civilization, and they very well could be used that way. But I think it would also just be a very useful archive in general, both for future historians and for current people, as you would now have an institution trying to preserve technical info that we are always losing.
3 notes
adamsvanrhijn · 1 year ago
Note
I'm sorry nobody knows what a tumblr job is
if you aren't missing a comma or a "but" there. thank u i love u
if i had unlimited characters and thought people would bother reading context associated with a poll (i am pretty sure they usually do not), and also had some foresight about some of the specific things people would interpret differently than i intended, here's what this would have looked like:
[begin post]
[begin poll]
which of these listed tumblr job types is yours?
retail, grocery, consumer-facing service and sales (excludes call centers and food service)
food service (includes restaurants, cafes, coffee shops, bars, cafeterias, food trucks, commercial kitchens, food kitchens, etc; cooking + serving + hosting)
teacher, professor, tutor, teaching assistant, childcare provider, but not educational administrator, if you do another kind of education read the rest of the options first because one of them might be a better fit
engineer, developer, technical designer, data or computer scientist, not including technical writers
parks and wildlife, outdoor research, animal care (including animal boarding and veterinary care), animal keeping (such as zoos or aquariums), animal husbandry, farming, ranching, environmental conservation, gardener or plant nursery worker (but not florist), field archaeologist, equine therapist, outdoor camp counselor if you work with and have knowledge around plants and/or animals
librarian, archivist, docent, tour guide, curator, patron-facing researcher, consultant historian or sociologist, community educator at a cultural or knowledge (probably nonprofit) institution (including but not limited to historical reenactors); reception or guest services related to any of the previous, excepting food service, which is the food service option
writer - content, technical, proposal, marketing, fiction, nonfiction, educational, research, including editing, but has to be actively working and receiving pay from a company, an organization, and/or clients/commissioners including publications/publishers, even if income is inconsistent or not received at regular intervals. does not include captioning or transcription. translators use your best judgment
artist, illustrator, graphic designer, but has to be actively working and receiving pay from a company, an organization, clients/commissioners including publications and galleries, may include visual marketing but does NOT include ad sales, also does not include video editing unless you do like, speed paints or show off your art in videos
patient healthcare, caregiving, and advocacy. does not include animal care, pharmaceutical research or dispensing, or medical coding/billing
you are not currently working as / do not currently have any of the Tumblr Jobs listed above
[end poll]
a "Tumblr Job" is decided based on the following criteria, which is based on my own dashboard/Tumblr experiences, and tumblr searches:
either many Tumblr users have or have had that job, OR, one to a few high-profile Tumblr users have or have had that job
posts are made about that job, either about the day-to-day work itself (positive or negative), as an appeal to authority ("librarian here!"), or about the concept and/or vibes of the job or having that job
posts are reblogged about that job, and reach an audience of people who do not have that job; posts go viral about that job, even
Tumblr users might aspire to have that job, or be studying toward it
it might require a high level of interest and/or personal investment with limited positions available, OR, it might be a very common job in the world
the job is NOT one that many Tumblr users would perhaps complain and/or rant and/or criticize about other people having
the job is NOT one commonly associated with things Tumblr tends to skew away from, like industries known to contribute to climate change or jobs that tend to be associated with ultra high earners with wealth inequality in the same industry (editing to plug my other poll that 6 & 7 apply to that nobody is taking)
a "Tumblr Job" is NOT just a job anybody on tumblr might have or even a job most people or many people on tumblr might have. it's not even a job that is necessarily common on tumblr (common on tumblr != common tumblr job), that's something i am interested in finding out via the poll.
just because tumblr users you know also have your job doesn't mean it Is a tumblr job. and just because your job type isn't listed doesn't mean it is Not a tumblr job. there are only 9 job hodgepodge types here. there are more jobs in the world than this and there are more tumblr jobs on tumblr than this. these are also not and are not intended to be standard industry classifications. they are grouped by tumblr vibes, not necessarily by job role or duties.
if you do not have a job at this time, you do not currently right now have one of the listed tumblr jobs. being unemployed or being a student, even if you once had one of these jobs or are studying to obtain one of these jobs, are not jobs for the purpose of this poll.
the purpose of the poll is to see how many respondents actually have and are doing these specific jobs that i have identified as "tumblr jobs". nothing more and nothing less!
[end post]
but nobody is gonna fucking read all that & tumblr polls have character limits. if the poll was as specific as i wanted it to be it would not have however many thousands of votes it currently has. curse of my autisms <3
13 notes
yournightowl · 11 months ago
Text
Your NightOwl #042
If you were to plot the prevalence of organized crime across history, what would that graph look like?
My first instinct is to say that it should go from bottom left to top right- people getting more crooked, more corrupt, everything going down the can as time goes on. (#`Д´)
But after giving it a little more thought (and after having eaten lunch), amidst my newfound positivity i thought, "Couldn't it be just the opposite?" ♡( ◡‿◡ )
Crime gets harder and harder to get away with as technology improves- and improvements in how fast goods and information can travel help make the global playing field more equal, if not less psychotically competitive.
And now, after having taken a nap
i realize neither are even close to correct. ( ̄□ ̄」)
Because crime, especially the large scale organized type, isn't influenced by technology or the progression of society- it's really only ever been dependent on economics.
The heyday, as it were, for organized crime came in the early 20th century- between wars and depressions and plagues and the stock market - we saw higher highs and lower lows than ever thought possible.
There were other blips on the radar after that of course- regime changes, speculative bubbles, empires falling and unions collapsing- but through it all, wherever the money went, crime followed soon after.
And it's no different in the 21st century.
Because even more important than the stock market or the price of grain in this conversation is the level of unemployment- which spiked massively in the 30's and 40's as early automated systems wiped out entire sectors.
The language models that passed for ear-tifical antelly-gents back then drove a spike between the haves and the have nots; paying one guy to scrape data from another guy's life's work, then training a bot on that data and firing the first two guys anyway. It was a shortsighted fool's goldrush where every middle manager on earth raced against each other to disprove their own relevance as fast as possible. (°ロ°) ! The global economy remodeled itself into a system based mostly on theft. So for your average unemployed joe shmoe, reselling tech that fell off the back of a truck must've seemed like the most honest work in the world. (^_<)〜☆
The inverted bubble didn't last. ヾ( ̄◇ ̄)ノ〃 llms were eventually recognized as the technological dead end they always were, and "automatically generated" went from impressive to trendy to tacky to gross in just a few short years-
But the damage was done- the social contract between "job creators" and real human beings now had a big gash torn through it- and ever since, mom and pop mafias and grassroots mobsters have fiercely guarded the niche of the economy they tore out for themselves.
If you don't believe organized crime is just as real as ever, just try and introduce androids to a "close-knit" industry like shipping. They'll send your free trial bots back to you in a compact, melted-down cube of slag before the first shift even ends.
Now i don't condone that kind of thing. It's violent and immature, and they should take their frustration out on the people trying to replace them, not the bots they bring in to do the replacing.
But as somewhat of a historian, i can see the appeal.
friend of the family, your nightowl
3 notes
ausetkmt · 2 years ago
Text
TIME: Racism Is Not a Mental Illness—But It's Complicated | Time
Everyone knows that Law & Order plotlines are often, as they say, ripped from the headlines.
But Dr. Alvin Poussaint, 88, knows this on another level: An emeritus professor of psychiatry at Harvard Medical School, he has had the unusual experience of seeing his ideas incorporated into a season 12 episode of the long-running show. In it, a white working stiff murders a Black CEO in a dispute over a New York City taxicab. When the trial begins, a respected Black psychiatrist takes the stand to present his idea that the defendant suffers from “extreme racism,” a mental illness. His lawyers argue that extreme racism has such a complete hold on the defendant that it mitigates their client’s legal responsibility for the murder. In the final moments, the audience is encouraged to feel that it’s a victory for justice, for law and order, when the jury rejects the psychiatrist’s ideas, Poussaint tells me with a tinge of disdain.
In the real world, Poussaint was that psychiatrist. Sort of.
While he never brought his ideas to the witness stand inside the New York City courthouse behind those massive stone steps that Law & Order made famous, in 1999 he shared his theories on the link between mental health and extreme forms of bigotry on the op-ed pages of the New York Times. In doing so, he helped set off a debate that ended with the American psychiatric establishment publicly rejecting the concept—partly on the grounds that so many people are racists.
But even now, after nearly a decade during which the number of hate crimes has steadily increased, the question of the relationship between bigotry and mental illness has never fully been resolved. In fact, recent high-profile incidents have made public perception of that dynamic perhaps as muddled as ever. The issue comes up in relation to everything from major mass shootings to pop-culture discourse. The racist attack at a Buffalo, N.Y., supermarket, for which the gunman pleaded guilty this week to state murder and domestic terrorism charges, prompted calls for the country to “get serious about mental health” as well as pleas not to talk about the shooting as a matter of psychiatric illness rather than a racist hate crime.
A memorial outside the Tops Friendly Market after a mass shooting in Buffalo, N.Y., is seen July 14, 2022.
Joshua Bessex—AP
Though the FBI typically releases data each fall detailing the prior year’s hate-crimes statistics, the agency has not yet done so in 2022. But social conditions are rife, experts say, for the increase to continue. (In 2020, the most recent year for which FBI data is available, the agency reported 8,263 incidents—an increase of nearly a thousand over the prior year, despite 452 fewer agencies reporting—and most experts believe the real number is higher.) Police have noted that 47 out of 100 people arrested for hate crimes in New York City in early 2022 “have prior documentation of an [emotionally disturbed person] incident.” So the stakes are high for the nation’s courtrooms to respond to the trauma unleashed by that dynamic—and for Americans to decide what constitutes a just outcome.
“It’s complicated,” says Sander Gilman, a historian at Emory University who researches the relationship between health, science, law, and society, and who has long taught a course on extremism. “But I’m going to start with two things that I call Gilman’s Law: not all racists are crazy, but crazy people can be racists.”
Thanks in part to the influence of pop culture—not least those TV police procedurals—many assume that insanity pleas are common. In reality, mental-health defenses are rare, and even more rarely lead to reduced punishment. Mounting that kind of defense requires time and significant resources to gather evidence and expert testimony, so in practice it is not an avenue available to all defendants.
In England and the U.S., courts began to reliably consider the mental health of defendants only in the 19th century. The M’Naghten rule, a standard for determining a defendant’s sanity at the time of a crime, was established in 1840s England. It holds that only the sane can be held responsible for their actions. As a result, the question of mental fitness—sufficient sanity to participate meaningfully in one’s own defense—is sometimes evaluated before a trial. (Whether the state has an obligation or right to treat such an illness in order for the person to then stand trial is, Gilman says, a question that goes back as far as the idea of such a defense.) Once a competency decision has been made, the accused who do go to trial have the right to a defense. In some cases, that may include an option for a jury to find the defendant not guilty by reason of mental disorder. In others, ideas about the mental health of the defendant may more informally shape what evidence is presented.
But so-called “mental disease or defect” defenses are used in only about 1% of cases, says Michael Boucai, a professor at the State University of New York at Buffalo School of Law and an expert on mental health and other social issues in courtrooms. Those defenses are successful in even fewer cases; in fact, such a tactic often backfires or results in a defendant confined for a longer term inside a hospital than that person would have spent in prison, sometimes with no end date. And even rarer—though not unheard of—is an attempt to use racism or other bigotry as an indicator of mental-health challenges, he says.
Tumblr media
A crowd prays outside the Emanuel AME Church after a memorial service for the nine people killed in a racist attack at the church in Charleston, S.C., June 19, 2015.
Stephen B. Morton—AP
Some fear that raising mental-health issues in court runs the risk of bolstering inaccurate myths about the mentally ill. In reality, mentally ill people are disproportionately more likely to become the victims of crime, and most do nothing to victimize others. And, as has been observed after so many headline-making crimes, suspects from privileged groups are more likely to have their actions described as illness in need of treatment instead of criminal evil meriting punishment. Some experts fear that shifting the conversation to questions of mental health can also draw attention away from hateful ideas embraced by the person accused of the crime—ideas that are today often shared by people, including public figures, whose mental health is not questioned. That's how important social problems that require the nation's attention are transformed into one individual's medical problems, says W. Carson Byrd, a sociologist at the University of Michigan.
That line of thinking is particularly problematic in a culture prone to dismiss the need for systemwide reforms to address inequality, Byrd says. It can foster an emphasis on quick fixes for the world’s long-standing problems with bigotry. (In 2012, for example, a team of British researchers announced that an existing heart and blood-pressure drug appeared to reduce implicit bias after a study involving just 36 subjects.)
“White supremacy is a very normal part of society,” says Byrd, who is also the faculty director of research initiatives at his university’s National Center for Institutional Diversity. It is not a good or productive part of society, he points out, but a deeply entrenched one. “One of the detriments of trying to look at racism as a form of psychopathology or mental illness is that it makes that [illness] abhorrent, as if everything else is working in a certain [non-racist] way.”
Research has also long shown that bigotry is not an inborn human trait, but rather something learned from our environments, Byrd says. While racism can influence one’s mental health, describing racism itself—even “extreme racism”—as a mental illness implies that bigotry exists beyond our individual and collective control.
“By medicalizing [extreme racism], making it something curable, a mental-health disorder, it pulls away from having those broader conversations about how society is impacting people,” he says. “We just try to figure out, ‘How do we fix this one person?’”
This problem, Boucai notes, can already be clearly seen in discussions about gun crime, and mass shootings in particular: “It’s very hard to understand these crimes and the discourse of insanity provides one way to do it,” says Boucai. “But I think we can see where that sort of language is irresponsible and potentially undermines a just result.”
In other words, if a mass shooter is simply insane, then wholesale gun-law reform can, to some, seem unnecessary, even unwise. When bigotry is involved, that supposed insanity might, some who oppose Poussaint’s ideas believe, undermine systemwide efforts toward equity—or at least toward greater safety for those most likely to be targeted.
In 1999, when Poussaint wrote his op-ed advocating for increased research into possible psychiatric treatments for extreme racism, he was the author of acclaimed books about the effects of racism on Black mental health and a veteran of public controversy. Years earlier he’d argued publicly that racial pride among Black Americans could be taken to an unhealthy extreme. By the 1990s, he may have been best known for his work with Bill Cosby, consulting on scripts for The Cosby Show in a massively popular effort to disrupt stereotypes.
Poussaint first published his ideas about extreme racism weeks after a man named Buford O. Furrow Jr. shot and seriously injured four children and an adult inside a Jewish Community Center in Los Angeles, then shot and killed a nearby Asian-American postal worker. When captured and ultimately convicted, Furrow told investigators his actions had been motivated by hate. Reporters unearthed information indicating that Furrow had close relationships with known white nationalists, and also that he had been evaluated by Washington State's mental-health system only months before the attack. He'd even told officials after a previous arrest for assault that he'd "fantasized" about mass murder. To Poussaint, this story signaled a growing threat posed by a failure to recognize that, while highlighting and combating systemic racism is important in preventing discrimination, so is identifying and helping individuals motivated by bigotry who might go so far as to injure or kill others.
Train passengers are treated on the platform after Colin Ferguson opened fire on the train as it arrived at Garden City, N.Y., on December 7, 1993.
Alan Raia—Newsday RM/Getty Images
“Extreme racism crosses the line and is out of control,” he explains. “Just like somebody can have a little bit of anxiety, but if they have anxiety to the point that it is immobilizing, then it is a mental disorder.” Mass shooters behind hate crimes are, as he sees it, in a similar state: “[They] weren’t functioning individuals. They were impaired by their mental pathology.”
When the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), the guide that mental-health professionals use to make their diagnoses, was published in 1980, clinicians like Poussaint considered racism—not extreme racism, but what he calls everyday racism—a potential symptom of several conditions, from paranoid personality disorder to generalized anxiety disorder. Racism alone is not sufficient to diagnose a patient with one of those conditions, but an extreme racist, Poussaint says, likely suffers from delusions. Such people live with multiple symptoms including paranoia; they are likely to project negative traits or outcomes onto entire groups and sustain those beliefs even in the face of strong countervailing evidence. Many embrace conspiracy theories. Some may grow violent. In fact, Poussaint argued in a 2002 Western Journal of Medicine article, the extreme racist often begins with “verbal expression of antagonism, progresses to avoidance of members of disliked groups, then to active discrimination against them, to physical attack, and finally to extermination (lynchings, massacres, genocide). That fifth point on the scale, the acting out of extermination fantasies, is readily classifiable as delusional behavior.”
Poussaint was not the first person to raise the idea that extreme racism is itself a mental illness. But he was among those leading the call for the American Psychiatric Association, publishers of the DSM, to consider putting it in subsequent editions of the manual, which has a long history of evolving, often slowly, in response to research and norms. When his op-ed ran, it seemed to Poussaint, people pounced.
Some of his fellow Black psychiatrists argued that such a diagnosis would unleash a wave of legal excuse-making, helping no one but the violent racists themselves. Poussaint counters that other health diagnoses have yet—nearly 200 years after psychological concerns officially entered American courtrooms—to produce rafts of acquittals. And some clinicians and researchers argued that there are other ways of attacking racism besides treating it as an illness—educational programs, diversity initiatives, policy changes. That’s a point Poussaint says he doesn’t oppose, at least when it comes to everyday racism. Those steps can help racists who embrace repugnant ideas while remaining functional parts of society. But those aren’t the people he’s talking about.
“Racism negatively impacts public health,” the American Psychiatric Association told TIME in a statement. “The American Psychiatric Association has been focusing on this in the DSM by identifying and addressing the impact of structural racism on the over- or underdiagnosis of mental disorders in certain ethno-racial groups. From time to time we have received proposals to create a diagnosis of extreme racism but they have not met the criteria identified for creating new disorders.”
To this day, Poussaint believes that extreme racism is very likely its own disorder in need of study, possible diagnostic criteria, and evidence-based treatment, he told me in September from his home in Massachusetts. But after retiring about 2½ years ago at the age of 86, he says he’s too far out of the professional mix to continue to push for an extreme racism diagnosis.
I ask Poussaint what he thinks might have happened if extreme racism had become its own diagnosable condition listed in the DSM. Extreme racism might have been a topic on talk shows and a more frequent topic of news coverage, he says. With research and public information campaigns, the need for intervention could have been as clear as it is for heart attacks; the steps to do so as well-known as CPR.
“We’d get away from treating it as if [extreme racism] is normative,” Poussaint says, “like a cultural difference because America is a racist country. We have made it normative by not calling it what it is. Even people in general society, friends and relatives and even the afflicted individual, would recognize it as a disorder and say, ‘I’m not alright.’ People who get swept up by anxiety and can’t function, they don’t think they are normal. They say, ‘This is taking over my life. I need some help.’ [A diagnosis] clues the family to say, ‘This person is really troubled and we have to get them some help.’”
hydralisk98 · 19 days ago
Text
Building locations for the next revision of Angora?
Sidestream follow-up to this article thread:
As a hobby historian in the manifestation space, I can't help but wonder about alternate outcomes and manifest whatever seems most interesting and useful to me. I've had a couple of successes as I discovered new and old topics of interest that fit my wishes, like "Pacific Cyber/Metrix", Glaxnimate, the DEC Alpha AXP 3000's TurboChannel as an open bus standard, and the shared FSF+Symbolics history... which is really interesting and sweet to unravel!
Now, while I want to get started VTubing soon, I have a couple of life habits & essentials to fix first and foremost. But expect me to get started soon (hopefully...).
So yes, finally onto rebuilding theology through 16^12 Angora worldbuilding! My goal is to suggest and iterate on (yet another) decent theological framework for biological and machine sophonts (aka androids), built from researching existing systems and adding in some novelties worth exploring.
Either way, GDD dump for Angora in Môsi fork (with Borksy hacks?):
Media dump:
List of rooms, levels & locations for Angora:
Shoshone scroll library (bronze age period)
Contemporary public library
Pub
Arcade
Inn
Tavern
Public plaza
Market
Institute
Hospital
Lumber Mills
Walls
Commune of Hatriah?
Museums
Altar
Temples
Computer Lab
Cybercafes
Roads
Railways
Suspended Monorail
Commute by Tramways, Buses & Subways
Residences
Businesses
Industries
Farmland
Public Schools
Hotel
Bastion
Senate
House of Commons
Experts Council
Archives
Manor
Manufacture
Factory
University
Research Center
Medical Lab
Military Base
Arsenal
Communal data center
Video rental stores
Bookstores
Forested park
Roller skate park
Youth community center
Nuclear Powerplant
[...]
Kate & Ava's household
"Pinegroove" cityscape
"Black Bear" suburbia
"?" metropolis
"Maskoch" township
"Maynard" township
Mesa Laboratories
EBM, ICL, Tsunami Electrics, Bull...
Pacific Cyber Metrix, Pflaumen Cooperative (DEC+ZuseKG), Utalics Cooperative (Symbolics), GLOSS Foundation (FSF / GNU), Luna Macrotronics (Sun Microsystems);
Macroware (Microsoft, NeXT, Apple, Google, Minitel...)
Rajah Palace and their "forbidden city" courtroom
Hanging Gardens
Temple of Artemis
Super Mall "Nitta" Complex
Distant future "Iron Dusk of Time" (lucid dreamscape vision quest)
Pflaumen Cooperative (4408-4545) := Free Software Foundation (GLOSS Foundation, for the Libreware licensing), Symbolics (commercial-grade/-tier LISP software & hardware accelerators), Digital Equipment Corporation (open industry-standard hardware in the PDP8, Rainbow 100 & early Alfa series), Pacific Cyber Metrix (later Alfa, Beta & Gama series with Compact, Mobile & Ultra variants)... ?
Progressives: Syndicalists, (Geocenter) Greens, Harmonists;
Neue TurboChannel standards (put together as a consortium of the GLOSS Foundation, Utalics & the Pflaumen Cooperative), and the LibreVast (Dreamcast+3DO+M2+Nuon...) home computer system with its own Nova modules (VMU+Sifteo... + Loopy with its Magical Shop addon?);
visual-sculptors · 2 months ago
Text
How Infographics Enhance Information Retention and Engagement
1. Why are infographics the best?
Infographics simplify complex information by combining data, visuals, and text in a visually appealing format. They capture attention, improve understanding, and convey key messages clearly and concisely, which makes them useful across many fields and industries. In today's digital age, as organizations compete for attention, infographics let them share knowledge in a way that engages diverse audiences and helps those audiences grasp and retain key points, making them an essential tool for any professional setting.
2. What is infographic history?
Infographic history presents complex historical information in a visually engaging way. By combining text with graphics, charts, and images, it builds visual narratives that showcase timelines, trends, and statistics, making historical content more accessible to students, scholars, and general audiences alike. Its storytelling approach promotes active learning and helps viewers remember and share historical knowledge, which makes it a valuable tool for educators, researchers, and historians, and an effective way to preserve our collective past in today's information-rich digital environment.
3. What is an infographic post?
An infographic post presents information visually in a way that is easy to understand and engaging, combining text, images, and graphics to simplify complex ideas. Businesses and marketers use them to communicate key messages, drive traffic to websites, boost engagement, and improve brand awareness.
Creating an effective infographic post requires selecting relevant data, using clear language, and designing visually appealing elements. Done well, it strengthens a business's online presence, builds credibility, and fosters deeper connections with its audience.
4. What is infographic data?
Infographic data is complex information rendered through graphical elements such as charts, graphs, icons, and images, turning complicated datasets into clear, engaging stories. Widely used in business, marketing, education, and journalism, this blend of text and visuals communicates statistics and trends concisely, helps audiences grasp and retain the content, and lets organizations present data insights to stakeholders in a visually appealing way that supports informed decision-making.
5. What is the goal of an infographic?
The goal of an infographic is to make complicated information accessible and memorable. By breaking down dense data into images, charts, and brief text, infographics highlight key messages and organize information in an appealing way, letting audiences grasp and remember important points quickly. They are widely used in business, marketing, education, and research, which makes them essential for professionals who want to communicate effectively in today's visually oriented society.
fatehbaz · 1 year ago
Text
Adam Sills’s well-written and beautifully produced Against the Map is in some ways a strange book to review [...] [from the disciplinary perspective of environmental studies]. Sills shows little interest in environmental history or ecocriticism, even in the “ecology without nature” mode [...]. His basic argument is that cartography, because of print capitalism, seeped into all sorts of facets of life on the British Isles during the late seventeenth and eighteenth centuries. It became something that playwrights, novelists, and creative nonfiction types, like Samuel Johnson, developed spaces of resistance to in their publications. Sills highlights the political nature and problematic historical genealogies of maps, an argument that has broader implications for [contemporary] environmental historians who use maps to convey [relatively more “objective” and/or “scientific” information] [...].
Sills begins by accepting the idea, derived from Ben Anderson’s comparative work, that “the history of the map and the history of the modern nation state are inextricably bound up with each other” (p. 1). He then cites two of the key analysts of this in relation to Britain: Richard Helgerson on the literary nationalism of the English Renaissance and John Brewer on the fiscal-military state of the eighteenth century, with its army of surveyors and excise tax collectors. In this historiography, the “surveyor emerges as an authorial figure,” key to the making of the modern state as distinct from traditional dynastic and ecclesiastical authority (p. 3). Combined with cheap printing, the result was what Mary Pedley has called a “democratization of the map” (p. 4). [...]
---
For John Bunyan, the “neighborhood” became a site of resistance (as it is for Denis Wood in his 2010 Everything Sings: Maps for a Narrative Atlas). [...]
For Aphra Behn, [...] the theater and “built environment” of the “fragmented, chameleonlike ... scenic stage” had the ability to challenge coherent representations of the Atlantic empire produced by maps like those of world atlas publisher and road mapper John Ogilby (p. 65).
From Dublin, Jonathan Swift directly satirized the cartographic and statistical impulses of the likes of William Petty, Henry Pratt, and Herman Moll, who all helped visualize London’s colonial relationship with Ireland [...].
From London, Daniel Defoe questioned efforts to define what precisely makes a market or market town through maps and travel itineraries, pointing toward the entropic aspects of the market (“its inherent instabilities and elusive nature”) that challenged and escaped efforts to stabilize such spaces through representations in print (p. 163).
Johnson’s travels to Scotland redefined surveying, resisting the model put forward by the fiscal-military state in the aftermath of the Jacobite Rebellion of 1745.
---
The final chapter and conclusion, “The Neighborhood Revisited,” looks at Jane Austen’s Mansfield Park (1814), a classic novel of the artificial environment of the estate garden. By the early nineteenth century, neighborhoods were more like gated communities and symptomatic of Burkean conservatism and nostalgia. But in Austen’s hands, their structures of affect also suggest the limits of the controversial map- and data-centric literary methodologies [...] and perhaps more broadly the digital humanities. “The principle of spatial difference and differentiation, the heterotopic conceit, always remains a formal possibility, not only at the margins of the empire but at its very center as well,... a possibility that the map cannot acknowledge or register in any fashion” (p. 234). For Sills, this is true of eighteenth-century mapping as well as the fashion for “graphs, maps, and trees” in the early twenty-first century.
---
Sills’s basic argument, that a certain canonical strain of English literature - from Bunyan to Austen - positioned itself “against the map,” seems quite solid. He makes this point most directly by appealing to the work of Mary Poovey on the modern “fact,” with the map as “a rhetorical mode ... that serves to legitimate private and state interests by displacing and, ultimately, effacing the political, religious, and economic impact of those interests” (p. 91).
Nevertheless, returning to a[n] [exclusively] canonical, Bunyan-centered, “small is beautiful” neighborhood approach [potentially ignoring planetary environmental systems, the global context, in cartography] seems limited and problematic from the perspective of Anthropocene [...]. The global maps and mathematics used by the likes of Edmund Halley and Isaac Newton, which were directly satirized by Swift in the Laputa section of Gulliver’s Travels (1726), did something different than Petty’s mapping of Ireland. High-flying as they may have been, such maps and diagrams were key to the development of [...] environmental thinking by Charles Darwin, Charles Lyell, Alexander von Humboldt, and others in the nineteenth century. More recently, global mapping [...], like the internet closely tied initially to the modern American fiscal-military state, have [also later then] been essential to identifying processes of climate change, ocean acidification, deforestation, dead zones, sea level rise, desertification, and a host of other processes that would otherwise be challenging to perceive. This is no mere “Vanity Fair.” Sills’s book would have benefited from engaging with Jason Pearl’s Utopian Geographies and the Early English Novel, published in 2014 [...]. Pearl also does close readings of Behn, Defoe, and Swift, choosing Margaret Cavendish instead of Bunyan and stopping in 1730, just before things became picturesque but just after they were financialized by the South Sea Bubble, Newton’s mint, and Robert Walpole. Pearl reproduces maps by Defoe of Robinson Crusoe’s global travels and of Crusoe’s island, Swift of Houyhnhnmland, Ambrosius Holbein of Thomas More’s Utopia [...].
What if rather than “against the map,” we are seeing struggles between radical and conservative cartography [...] engaged in a fight over the future (utopia)?
What if what [...] [some have] called “capitalist realism” [...], what might in the eighteenth century be called “nationalist realism,” is not the only thing happening with maps and the imagination?
---
Text above by: Robert Batchelor. “Review of Sills, Adam, Against the Map: The Politics of Geography in Eighteenth-Century Britain.” H-Environment, H-Net Reviews. May 2023. Published online at: h-net.org/reviews/showrev.php?id=58887. [Bold emphasis and some paragraph breaks/contractions added by me. In this post, all italicized text within brackets added by me.]
rushikeshmmr · 4 months ago
Text
The Data Historian Market was valued at US$1.17 Bn in 2023, and the global market size is estimated to grow at a CAGR of 7.1% over the forecast period.
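A figure like this is easy to sanity-check, since a CAGR is just compound growth: each year's value is the previous year's multiplied by (1 + rate). Below is a minimal Python sketch using the post's 2023 base value and 7.1% rate; the 2030 end year is an assumption for illustration, since the post does not state the forecast horizon.

```python
# Minimal sketch: projecting a market size forward from a base value and a CAGR.
# Base value and rate are from the post above; the end year is an assumption.

BASE_VALUE_BN = 1.17   # market size in 2023, US$ billions (from the post)
CAGR = 0.071           # compound annual growth rate, 7.1% (from the post)
BASE_YEAR = 2023
END_YEAR = 2030        # hypothetical forecast horizon, not stated in the post

for year in range(BASE_YEAR, END_YEAR + 1):
    # Compound growth: value_n = base * (1 + rate)^n years elapsed
    value = BASE_VALUE_BN * (1 + CAGR) ** (year - BASE_YEAR)
    print(f"{year}: US${value:.2f} Bn")
```

Under those assumptions the market would reach roughly US$1.89 Bn by 2030; change END_YEAR to match whatever forecast period the report actually uses.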