#Data Collection and Analysis
Explore tagged Tumblr posts
Text
Salisbury Autistic Care - The Sensory Haven: Future Homes for Autistic People
Salisbury Autistic Care is putting its best foot forward by designing homes suited to autistic individuals, with the goal of creating inclusive communities where people on the autism spectrum can thrive and feel at ease.
In this discussion, we'll explore how Salisbury's real estate sector is designing homes with the latest innovations that address the safety concerns of these individuals.
Let's discover together how the latest innovative homes are reshaping the landscape of inclusive living.
Smart Home Technology: Developers are equipping homes with smart devices that can be controlled remotely or automated to perform tasks autonomously. These include voice-activated assistants (such as Amazon Alexa or Google Home), smart thermostats, lighting systems, and security cameras, all of which can greatly improve the autonomy and comfort of individuals with autism. These technologies can be programmed to adjust environmental factors to the individual's preferences, providing a sense of control and reducing sensory overload.
Communication Apps and Devices: Many autistic people find communication difficult. Integrating communication apps and devices within the property can help residents convey their needs to their caregivers. These may include augmentative and alternative communication (AAC) apps, picture exchange communication systems (PECS), or specialized devices that support speech output.
Safety and Monitoring Solutions: Some autistic individuals are less aware of hazards in their surroundings and may unintentionally engage in behaviors that put their well-being at risk. Technology can play a crucial role in ensuring their safety: GPS tracking devices, door alarms, and wearable sensors can alert caregivers if a resident leaves the property or enters restricted areas, allowing for timely intervention. Additionally, smart locks and security systems can enhance overall safety within the property.
Sensory Regulation Tools: Many individuals with autism are sensitive to sensory stimuli, so developers should focus on designing calming sensory rooms with soft lighting, comfortable seating, tactile objects, soothing music or sounds, and visual projections. Interactive projections or immersive virtual reality experiences can provide engaging and customizable sensory experiences, allowing individuals with autism to explore different sensory inputs in a controlled and therapeutic environment.
Data Collection and Analysis: Homes fitted with smart sensors can track daily patterns such as sleep, activity levels, or emotional states, providing valuable insights about the individual. This information can be used to create personalized care plans and interventions.
Educational and Therapeutic Resources: Integrating educational and therapeutic resources within autism care properties empowers residents to engage in meaningful activities and skill-building exercises that support their development and enhance their quality of life. Smart home technology gives residents access to educational and therapeutic sessions that promote learning, growth, and self-confidence.
Conclusion
Through these advancements, Salisbury Autistic Care is not only addressing the unique needs and challenges faced by autistic individuals but also creating surroundings where they can feel safe and comfortable. By prioritizing safety, communication, sensory comfort, and personalized support, these homes are reshaping the landscape of inclusive living and setting a new standard for the integration of technology and compassion in real estate development.
#Educational and Therapeutic Resources#Data Collection and Analysis#Sensory Regulation Tools#Safety and Monitoring Solutions#Smart Home Technology#Future Homes#Sensory Haven#Salisbury Autistic Care
8 notes
Text
Algo trading, short for algorithmic trading, is a revolutionary approach to financial markets that leverages computer algorithms to execute trading strategies.
#Algo Trading#financial markets#Algorithm Development#Data Collection and Analysis#Automated Execution
0 notes
Text
Sketched out the final part of the growth spurt comic!
Already made up my mind, but out of curiosity. What do you think: Who will end up being the shortest turtle? :)
#Saw comments about both of them sooo… I just have to do some data collection before the last part drops!#And then we can compare twitter tumblr and instagram#some fun data analysis!
9 notes
Note
will you make a poll for most slay and ugly senti designs. i really think guiltrip deserves to be honored somehow. perhaps an honorable mention if reflekta wins either category.
LOVE the way you’re thinking. honestly this process has made me desperate to know the ml fandom’s opinions on everything
#akuma poll 2023#asks#i love data collection im tempted to start doing statistical analysis kwndkdnd#sentis is a good one i’m also curious about ppl’s opinions of the hero/villain designs#perhaps next year’s poll…#also yes the fact that reflekta got nominated in both categories is sooo. we love a divisive queen
77 notes
Text
currently working on the early stages (ie. user research) of a spotify user interface redesign as a personal portfolio project and i am ridiculously excited about it
#this is my current hyperfixation#i'm working on designing a survey and interview guide#with luck i will start conducting user interviews next week#my goal is to spend the next two weeks collecting data and then analyze it the following week#then it will be on to defining the problem statements and working on personas and user journeys and other deliverables#also thinking abt using tiktok to get the survey to (hopefully) reach a wider audience and document my process#lots of big things#this is what happens when my literal ux design job does not give me enough tasks to entertain me#antlerknives.txt
2 notes
Text
My Excel knowledge has grown so much in the past 2 months of working at my new job, so I am planning to revamp my Pokémon card spreadsheet. I'm going to start during the long weekend coming up. I am going to make pivot tables and charts up the wazoo. I am going to organize, analyze, and report. You ask me any question about my collection, and I will give you the answer. You wanna know how many stage 2 psychic Pokémon cards I have that are less than 5 years old? I'll tell you. You want a list of trainer item cards that start with the letter D? You got it. You want to know what percentage of basic energy cards are water type? You need only ask.
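(For a sense of how quick those lookups become once the collection lives in a proper table, here's a minimal pandas sketch of the same queries — the file and column names are invented for illustration; the post itself plans to do this with Excel pivot tables and filters.)

```python
import pandas as pd

# Hypothetical card inventory — file name and columns invented for illustration.
cards = pd.read_excel("pokemon_collection.xlsx")

# How many stage 2 psychic cards are less than 5 years old?
cutoff = pd.Timestamp.now().year - 5
recent_psychic = cards[
    (cards["stage"] == "Stage 2")
    & (cards["energy_type"] == "Psychic")
    & (cards["release_year"] > cutoff)
]
print(len(recent_psychic))

# What percentage of basic energy cards are water type?
basic_energy = cards[cards["card_type"] == "Basic Energy"]
pct_water = 100 * (basic_energy["energy_type"] == "Water").mean()
print(f"{pct_water:.1f}%")
```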
#pokemon#pokémon#pokemon tcg#autism#collection#collecting#excel#microsoft excel#data analysis#llbtspost
7 notes
Text
it’s crazy like actually insane that i can’t find historical demographic analyses of childlessness rates among women of any time period before the 1800s. that seems like it should be such an interesting illustrative statistic in and of itself but also as an indicator of broader social/economic/religious/political/environmental conditions. it’s a statistic that establishes something entirely distinct from what you get out of an average birth rate...
OK NVM post kind of cancelled i'm finding a couple articles. but nothing that's really what i'm looking for/what i'd be wanting answers to. w/e do i sound insane/is this already obvious
#like they're bringing it up as something that was discussed/understood in the period but not how frequent it was and what factors informed#its presence#like there are time periods in many locations prior to the late modern period that would theoretically have administrative#records/census data collection that you could use to extract this statistic/analysis. i wonder if my jstor/school#library searches just aren't properly describing what i'm thinking of.#sorry. im high#a
4 notes
Text
I dont want to do this dissertation crap anymore actually
#my god i wish i couldve just done primary data collection instead of driving myself mad with document analysis#but the time frame for primary data is so tough to manage
2 notes
Text
Precision Insights: Expert Quantitative Market Research Services
Our Quantitative Market Research Services help you quickly gather insights from our panellists and understand changing consumer behaviour. Using our comprehensive services, we find answers to most of your questions! Follow this link to learn more: https://insighttellers.com/services/quantitative-research-market
#Quantitative Market Research Services#Qualitative Research#Translation#Survey Programming#Data Collection & Analysis#Secondary Research#Panel Aggregation#Contracted Work
2 notes
Text
Exploratory Analysis of Google Search Trends for 'microclots' from 2015-2023: Part 1
Tracking a reported increase in Google searches for "microclots", part 1 of an exploratory analysis of public interest in COVID-19 over time
Tweet about this Analysis: "Tracking #Covid19: #Google searches for #microclots were up 23% for the last week of April 2023 👀 Interested? Learn more in this brief #article by Dr. Heather Sue M. Rosen discussing part 1 of her #ExploratoryAnalysis of search trends for microclots! #TeamClots"
View the Analysis on RPubs (by hsuemrosen).
#RStats#covid#COVID-19#data analysis#data collection#data manipulation#data science#data visualization#data viz#examples#Google Trends#GtrendsR#long covid#microclots#pandemic#post covid#R#R code#R Studio#time series
4 notes
Text
.
#I’m working on my second research project ever at UCLA lmfao#and holy shit I forgot how easy most research is#my advisor stuck me on such an impossible fucking project for my first one#and was like you can only do this before you move on#but conducting work with another professor I’m like holy shit I have the whole thing together and ready for data collection#in less than a week!!!#like I’m sorry but I’ve been absolutely fucked by my advisor lmfao#sweetest lady but she has no understanding of the technology we use or stat analysis#meaning she hands me ridiculously difficult projects and then I have to teach myself everything#I hate it lol#but doing other work I’m like oh shit this is fun#maybe there’s hope for me after all lmfao
2 notes
Text
Clocking in at 4200 words and 30 pages with figures and formatting stuff…….but really all that is done is the introduction and part I, which to be fair are the beefiest parts bc they have the most data and the most references and the most complicated method and discussion. Tomorrow I need to verify my math, rerun some simulations, look at the results, and then lock in and finish this. We are entering approximately 48 hours til it’s due but there’s a tiny bit of wiggle room………. I believe it can get finished. Presentation isn’t until a week from Wednesday so I’m taking a couple days off thesis stuff then making that. I also need to submit an embargo request cuz I don’t actually know who has the rights to all this data LOL. Mostly I’m just worried about journals requiring first dibs and not wanting it to be released anywhere else first bc some do that and we are trying to publish part I.
#but also genuinely not actually sure who owns this data bc I collected some of it as a paid contractor at two different institutions#did some follow up work for class credit under my institution#and redid all the calculations for free bc I had time during my other internship and I wanted to get a first author paper#so I’m gonna go with a one year embargo with the option to extend so we can get out the paper#embargoing it just seems so funny like since when have I made something worth embargoing#and also it’s just a sensitivity analysis with recommendations for future directions it’s not like we can get scooped it’s not an open#source model and nobody else w a model has the capability to do it#but again journal rights
3 notes
Text
3 AI use cases to elevate your strategy
New Post has been published on https://thedigitalinsider.com/3-ai-use-cases-to-elevate-your-strategy/
This article is based on Liza Adams’s brilliant talk at the Product Marketing Summit in Denver.
Product marketers and even CMOs rarely make it to the boardroom. In fact, only 41 members of Fortune 1000 boards are CMOs, and less than 3% of board members have managerial-level marketing experience.
Why?
Because marketing is often dismissed as tactical – beautiful ads, catchy campaigns, and glossy brochures – while the strategic work that underpins it goes unnoticed. This misconception limits opportunities for marketers to demonstrate the true impact of their expertise on business decisions.
But here’s the good news: AI is changing the game.
AI has the power to elevate product marketing from a tactical function to a strategic force. It enables us to align executives, refine priorities, and amplify results, making the work of product marketers more visible and valuable at the highest levels.
Yet mastering AI isn’t a race – it’s a journey. Whether you’re just starting to explore its possibilities or already using it to shape strategy, it’s important to embrace where you are and keep learning.
In this article, I’ll show how AI can help you step into a more strategic role by focusing on three key use cases:
Segmentation and targeting
Competitive analysis
Thought leadership
These examples will demonstrate how AI can go beyond creating content to drive strategic decision-making and deliver real impact.
Let’s dive in.
AI use case #1: Segmentation and targeting
Our first use case comes from a real scenario where I acted as a fractional CMO. The company was what I like to call a “COVID darling” – it experienced rapid growth during the pandemic; however, post-COVID, it struggled to sustain that growth.
The executive team’s instinct was to expand their market and target more segments. My response? Don’t go broad – go deep.
Instead of spreading resources thinly across multiple segments, I encouraged the team to focus on two or three key segments. The goal was to understand these customers so thoroughly that we could become the best fit for their unique needs. Broad, shallow targeting wouldn’t deliver the value these customers required.
Here’s where the challenge got interesting. Each executive had their own idea about which segment to prioritize:
The CEO wanted to target healthcare, citing its large market size.
The CFO pushed for manufacturing, pointing to its high growth rate.
The CPO advocated for retail, aligning with the product roadmap.
The truth is, they were all right – from their individual perspectives. So, the product marketing team and I developed a framework to align these viewpoints and make an informed decision.
We identified evaluation criteria for analyzing each segment, including factors like market size, growth potential, competitive intensity, number of reference customers, and partner strength. Then, we built a heatmap to visually compare how each segment performed against these criteria.
This heatmap became a game-changer. It allowed the executive team to see, at a glance, how each segment stacked up. This data-driven approach shifted the conversation from subjective opinions to objective insights, making it clear which segments offered the most strategic opportunity.
By narrowing the focus and targeting the right segments, the company could allocate resources effectively, align their teams, and maximize their market fit – rather than chasing opportunities that stretched them too thin.
The challenge of gathering data
Before I dive into how we used AI to create a market heatmap, it’s important to acknowledge the most challenging part of the process: data collection and curation.
While the conversation with ChatGPT took about three hours, gathering and organizing the necessary data took two to three weeks. This stage was critical because feeding AI accurate, well-structured data is the foundation for meaningful insights.
Here’s a breakdown of the types of data we gathered and the sources we used:
Market size and growth: Pulled from analyst reports, including Gartner, to estimate total addressable markets (TAMs) and growth trends.
Competitive intensity: Sourced from customer review platforms like G2 and Capterra to understand how competitors were performing in various categories.
Win rates: Derived from our CRM (in this case, HubSpot), including metrics on win-loss ratios.
Product roadmap alignment: Compiled in a Google Doc to compare customer needs across segments with our current and planned product offerings.
Partner strength: Extracted from a database tracking partner leads, conversions, and overall performance.
Customer references: Assessed from a reference database to evaluate the strength and quantity of reference customers in each segment.
This process involved pulling data from disparate systems, formatting it consistently, and redacting sensitive information to maintain confidentiality. Only after this groundwork was done did we begin leveraging AI.
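As a rough illustration of what that consolidation step looks like in practice, here is a minimal pandas sketch. The file names and columns are hypothetical stand-ins for the real exports (Gartner estimates, G2/Capterra summaries, HubSpot win rates, and the internal partner and reference databases):

```python
import pandas as pd

# Hypothetical CSV exports from the disparate systems described above.
market = pd.read_csv("analyst_tam_estimates.csv")    # market size & growth
reviews = pd.read_csv("g2_capterra_summary.csv")     # competitive intensity
crm = pd.read_csv("hubspot_win_rates.csv")           # win/loss by vertical

# Normalize to one row per vertical with consistent column names.
combined = market.merge(reviews, on="vertical").merge(crm, on="vertical")

# Redact sensitive fields (invented names) before anything is uploaded.
combined = combined.drop(columns=["customer_names", "deal_values"])
combined.to_excel("segment_inputs.xlsx", index=False)
```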
How we used ChatGPT to create our segment targeting heatmap
Once the data was ready, we uploaded it into ChatGPT in spreadsheet format and began prompting it for analysis. Here’s a simplified walkthrough of how we approached the first two rows of our heatmap – market size and growth – using AI:
Initial prompt: “You are an expert market researcher and analyst in the supply chain management space. Please review the attached Excel sheet, analyze it, and provide a summary of your key takeaways. I will provide further instructions after that.” ChatGPT’s initial response included basic insights, like identifying the verticals with the highest growth rates and highlighting steady growth areas.
Follow-up prompt: “Please create a table with two rows: one showing the 2025 market size and another showing the growth rate you calculated. Please order the verticals as manufacturing, healthcare, energy, food, and retail.” This prompt resulted in a clear, organized table, allowing us to visualize and compare the market data.
Heatmap creation: “Turn the table into a single heatmap reflecting forced rankings for market size and growth rate. Assign a score of 5 to the largest market size and highest growth rate, and a score of 1 to the smallest and lowest.” The output was a color-coded heatmap that visually represented each segment’s market size and growth potential, making it easy to prioritize opportunities.
By repeating this process for the remaining rows – competitive intensity, win rates, partner strength, and customer references – we built a comprehensive heatmap that showed the most valuable segments to target.
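If you'd rather reproduce the forced-ranking step outside of ChatGPT, here is a minimal pandas sketch of the same mechanics, using invented market figures rather than the figures from this engagement:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Invented 2025 market sizes (in $bn) and growth rates (%) per vertical.
data = pd.DataFrame(
    {
        "market_size_2025": [120, 95, 60, 45, 80],
        "growth_rate_pct": [6.5, 9.1, 4.2, 7.8, 5.0],
    },
    index=["manufacturing", "healthcare", "energy", "food", "retail"],
)

# Forced ranking: with five verticals, the largest value in each column
# scores 5 and the smallest scores 1, mirroring the ChatGPT prompt.
scores = data.rank(method="first").astype(int)

# Render the two criteria as rows of a color-coded heatmap.
sns.heatmap(scores.T, annot=True, cmap="RdYlGn", cbar=False)
plt.show()
```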
Presenting the analysis to the executive team
Next, it was time to present the findings to the executive team. It’s important to note that this analysis was just a starting point – a framework to guide discussions and foster a 360-degree view of the market opportunities.
Unlike previous conversations where each executive approached the problem from their one-dimensional perspective, this approach introduced eight dimensions of analysis, offering a more holistic view.
With the heatmap in hand, the executive team could now debate and refine the findings collaboratively. Some execs disagreed with certain rankings, so we made some on-the-fly adjustments to the data.
We also assigned different weights to certain criteria, recognizing that not all of them were equally important. For example, market growth might carry more weight than competitive intensity, depending on the company’s priorities.
This flexibility allowed us to fine-tune the analysis and reach a consensus. And, within a week, we validated the findings and identified the top two to three market segments to focus on.
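The weighting step is easy to reproduce as well. Here is a short sketch, with invented scores and weights, purely to show the mechanics of turning weighted criteria into a segment ranking:

```python
import pandas as pd

# Forced-ranking scores per criterion (rows) for three verticals —
# both the scores and the weights below are invented for illustration.
scores = pd.DataFrame(
    {
        "manufacturing": [5, 4, 3],
        "healthcare": [4, 5, 2],
        "retail": [3, 3, 5],
    },
    index=["market_size", "growth_rate", "roadmap_fit"],
)

# Not all criteria matter equally; the weights encode executive consensus.
weights = pd.Series({"market_size": 0.5, "growth_rate": 0.3, "roadmap_fit": 0.2})

weighted_totals = scores.mul(weights, axis=0).sum()
print(weighted_totals.sort_values(ascending=False))
```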
#2025#ai#ai use cases#Analysis#approach#Article#Artificial Intelligence#board#boards#Business#CEO#CFO#challenge#chatGPT#Color#comprehensive#content#covid#craft#crm#data#data collection#data-driven#Database#dimensions#easy#energy#evaluation#excel#excel sheet
0 notes
Text
#AI-Driven Analysis#Data Collection#Data Monetization#Elon Musk#facts#Global Connectivity#Global Surveillance#Internet Privacy#IoT Integration#life#Metadata#Podcast#Privacy Concerns#Satellite Internet#Satellite Technology#serious#Social Media Platforms#Space Missions#SpaceX#Starlink#straight forward#Telemetry Data#truth#upfront#website#X (formerly Twitter)
0 notes
Text
The Age of the Digital Precog: How AI Predicts Our Every Move
Just like the precogs could glimpse future events, AI sifts through enormous amounts of data to make informed guesses about what we’ll do next.
What's On My Mind Today? "Who Are You? How to passively gather personal information" is a fascinating look into how businesses, marketers, and tech giants employ techniques to connect your protected data to the information you freely share. Passive information gathering means collecting data about a person without actively seeking it or requiring direct interaction. This can happen through…
#AI data gathering#ChatGPT#data transparency#digital privacy#digital surveillance#IoT privacy#MidJourney#passive data collection#personal privacy#predictive AI#public records#social media analysis#synthetic data
0 notes
Text
Centreoftheselights just shared the new 2024 AO3 stats.
In less than two days their post has already received 16 MILLION views and 80 THOUSAND retweets/quote tweets, with every comment I've seen taking the data at face value and using it to draw conclusions, much to my horror.
While OP did change the title of the "new works" column to "works gained" so that they're at least not blatantly lying now (the bare minimum), the wording is still very misleading. More importantly, though, they continue to use the same extremely flawed methodology and continue to bury and obfuscate those flaws and what the data actually represents. Nowhere on the chart or in the details provided on the main page does it even say that only publicly available works are counted... and that's not even the biggest problem!
This data is, yet again, garbage and absolutely should not be used to determine the current size and popularity of a fandom (inarguably the main reason for it existing).
AO3 Ship Stats: Year In Bad Data
You may have seen this AO3 Year In Review.
It hasn’t crossed my tumblr dash but it sure is circulating on twitter with 3.5M views, 10K likes, 17K retweets and counting. Normally this would be great! I love data and charts and comparisons!
Except this data is GARBAGE and belongs in the TRASH.
I first noticed something fishy when I realized that Steve/Bucky – the 5th largest ship on AO3 by total fic count – wasn’t on this Top 100 list anywhere. I know Marvel’s popularity has fallen in recent years, but not that much. Especially considering some of the other ships that made it on the list. You mean to tell me a femslash HP ship (Mary MacDonald/Lily Potter) in which one half of the pairing was so minor I had to look up her name because she was only mentioned once in a single flashback scene beat fandom juggernaut Stucky? I call bullshit.
Now obviously jumping to conclusions based on gut instinct alone is horrible practice... but it is a good place to start. So let’s look at the actual numbers and discover why this entire dataset sits on a throne of lies.
Here are the results of filtering the Steve/Bucky tag for all works created between Jan 1, 2023 and Dec 31, 2023:
Not only would that place Steve/Bucky at #23 on this list, if the other counts are correct (hint: they're not), it’s also well above the 1520-new-work cutoff of the #100 spot. So how the fuck is it not on the list? Let’s check out the author’s FAQ to see if there’s some important factor we’re missing.
The first thing you’ll probably notice in the FAQ is that the data is being scraped from publicly available works. That means anything privated and only accessible to logged-in users isn’t counted. This is Sin #1. Already the data is inaccurate because we’re not actually counting all of the published fics, but the bots needed to do data collection on this scale can't easily scrape privated fics so I kinda get it. We’ll roll with this for now and see if it at least makes the numbers make more sense:
Nope. Logging out only reduced the total by a couple hundred. Even if one were to choose the most restrictive possible definition of "new works" and filter out all crossovers and incomplete fics, Steve/Bucky would still have a yearly total of 2,305. Yet the list claims their total is somewhere below 1,500? What the fuck is going on here?
Let’s look at another ship for comparison. This time one that’s very recent and popular enough to make it on the list so we have an actual reference value for comparison: Nick/Charlie (Heartstopper). According to the list, this ship sits at #34 this year with a total of 2630 new works. But what’s AO3 say?
Off by a hundred or so but the values are much closer at least!
If we dig further into the FAQ though we discover Sin #2 (and the most egregious): the counting method. The yearly fic counts are NOT determined by filtering for a certain time period, they’re determined by simply taking a snapshot of the total number of fics in a ship tag at the end of the year and subtracting the previous end-of-year total. For example, if you check a ship tag on Jan 1, 2023 and it has 10,000 fics and check it again on Jan 1, 2024 and it now has 12,000 fics, the difference (2,000) would be the number of "new works" on this chart.
At first glance this subtraction method might seem like a perfectly valid way to count fics, and it’s certainly the easiest way, but it can and did have major consequences to the point of making the entire dataset functionally meaningless. Why? If any older works are deleted or privated, every single one of those will be subtracted from the current year fic count. And to make the problem even worse, beginning at the end of last year there was a big scare about AI scraping fics from AO3, which caused hundreds, if not thousands, of users to lock down their fics or delete them.
The magnitude of this fuck up may not be immediately obvious so let’s look at an example to see how this works in practice.
Say we have two ships. Ship A is more than a decade old with a large fanbase. Ship B is only a couple years old but gaining traction. On Jan 1, 2023, Ship A had a catalog of 50,000 fics and ship B had 5,000. Both ships had 3,000 new works published in 2023. However, 4% of the older works in each fandom were either privated or deleted during that same time (this percentage was just chosen to make the math easy but it's close to reality).
Ship A: 50,000 x 4% = 2,000 removed works
Ship B: 5,000 x 4% = 200 removed works

Ship A: 3,000 - 2,000 = 1,000 "new" works
Ship B: 3,000 - 200 = 2,800 "new" works
This gives Ship A a net gain of 1,000 and Ship B a net gain of 2,800 despite both fandoms producing the exact same number of new works that year. And neither one of these reported counts is the actual new works count (3,000). THIS explains the drastic difference in ranking between a ship like Steve/Bucky and Nick/Charlie.
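If it helps to see the two counting methods side by side, here's a minimal sketch on toy data (the works list is invented; filtering by creation date, which AO3's own search supports, is what gives the correct count):

```python
# Each work is (created_year, deleted_year or None) — invented toy data.
works = [
    (2010, None), (2012, 2023), (2015, 2023),  # old works, two removed in 2023
    (2023, None), (2023, None), (2023, None),  # three genuinely new 2023 works
]

def visible_at_end_of(year):
    """Works publicly visible in a snapshot taken at the end of `year`."""
    return sum(1 for created, deleted in works
               if created <= year and (deleted is None or deleted > year))

# The chart's subtraction method: end-of-2023 snapshot minus end-of-2022.
net_gain = visible_at_end_of(2023) - visible_at_end_of(2022)

# The date-filter method: works actually created in 2023.
new_works = sum(1 for created, _ in works if created == 2023)

print(net_gain, new_works)  # 1 vs. 3 — deletions of old fics mask real output
```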
How is this a useful measure of anything? You can't draw any conclusions about the current size and popularity of a fandom based on this data.
With this system, not only is the reported "new works" count incorrect, the older, larger fandom will always be punished and its count disproportionately reduced simply for the sin of being an older, larger fandom. This example doesn't even take into account that people are going to be way more likely to delete an old fic they're no longer proud of in a fandom they no longer care about than a fic that was just written, so the deletion percentage for the older fandom should theoretically be even larger in comparison.
And if that wasn't bad enough, the author of this "study" KNEW the data was tainted and chose to present it as meaningful anyway. You will only find this if you click through to the FAQ and read about the author’s methodology, something 99.99% of people will NOT do (and even those who do may not understand the true significance of this problem):
The author may try to argue their post states that the tags "which had the greatest gain in total public fanworks" are shown on the chart, which makes it not a lie, but an error on the viewer's part in not interpreting their data correctly. This is bullshit. Their chart CLEARLY titles the fic count column "New Works" which it explicitly is NOT, by their own admission! It should be titled "Net Gain in Works" or something similar.
Even if it were correctly titled though, the general public would not understand the difference, would interpret the numbers as new works anyway (because net gain is functionally meaningless as we've just discovered), and would base conclusions on their incorrect assumptions. There’s no getting around that… other than doing the counts correctly in the first place. This would be a much larger task but I strongly believe you shouldn’t take on a project like this if you can’t do it right.
To sum up, just because someone put a lot of work into gathering data and making a nice color-coded chart, doesn’t mean the data is GOOD or VALUABLE.
#please keep spreading this post#and for the love of god please do not spread these 'studies'#every time someone trusts their data and uses it in any kind of fandom analysis I die a little bit inside#16 MILLION VIEWS!#it should be illegal to spread misinformation to that many people#I will not rest until OP learns how to collect and present data correctly#ao3#ao3 stats#fandom
2K notes