#data processing
Explore tagged Tumblr posts
Text
60s era Sperry Rand UNIVAC nameplate.
#the 60s#the 1960s#computing#vintage computers#vintage tech#vintage technology#technology#the digital age#vintage electronics#electronics#digital computers#digital computing#data entry#univac#sperry#sperry rand#the rand corporation#sperry univac#minicomputers#mainframe computers#data processing
40 notes
·
View notes
Text
Victoria Composite High School vocational classes: Data Processing, Edmonton, Alberta, 1966
21 notes
·
View notes
Text
It seems that EA is now accused of a major privacy violation, having allegedly used tracking tools in The Sims FreePlay app to secretly gather and transmit players’ personal information to Facebook for advertising purposes. This data potentially includes unique Facebook IDs, which can be used to match players’ in-game activities to their individual Facebook profiles. Attorneys suspect that these potential data-sharing practices may violate a federal privacy law and are now gathering players to take action.
So there are at least two class actions against EA, alleging that it collected data from players using the Meta Pixel software and sold it to Meta, the company that owns Instagram, Facebook and other social networks.
It would be interesting to learn whether these allegations are true and how they would be viewed in the eyes of the GDPR (European Regulation 2016/679), which allows the processing of personal data only with consent given by the data subjects, including in the context of (online) games.
Consent in the context of the GDPR must be understood as an unambiguous indication of an informed and freely given choice by the data subject, relating to specific processing activities. The burden of proof that these criteria are fulfilled falls upon the controller (i.e., the game developer).
Google Play lists EA’s privacy conditions for its games, including The Sims FreePlay. Basically, EA claims to use player data only to give them a "better game experience", which sounds vague but not necessarily illegitimate. The only less transparent thing I noticed is that the instructions for opting out of targeted marketing in in-game ads are in English and not in Italian: by downloading the game, players allow EA to share their account information with third-party partners to customize the advertising experience, which is basically what all app developers do, but it's odd that the opt-out instructions haven't been translated at all!
This is not the first time EA has been accused of, well, unethical commercial practices: EA has already been ordered to pay fines by Austrian (2023) and Belgian (2018) civil courts because its FIFA loot boxes violated local gambling laws.
Moreover, it's important to note that in January 2023 the European Parliament adopted a report calling for harmonized EU rules to achieve better player protection in the online video game sector.
The Parliament called for greater transparency from developers about in-game purchases: players should be aware of the type of content before starting to play and during the game. Players should also be informed of the probabilities in loot box mechanisms, including plain-language information about what the algorithms are designed to achieve.
The Parliament further stressed that the proposed legislation should assess whether an obligation to disable in-game payments and loot box mechanisms by default, or a ban on paid loot boxes, should be proposed to protect minors, avoid fragmentation of the single market and ensure that consumers benefit from the same level of protection regardless of their place of residence.
The Parliament highlighted problematic practices, including exploiting cognitive biases and vulnerabilities of consumers through deceptive design and marketing, using layers of virtual currencies to mask/distort real-world monetary costs, and targeting loot boxes and manipulative practices towards minors.
#vavuskapakage#ea#electronic arts#Ea sucks#the sims freeplay#the sims franchise#data breach#privacy violations#data privacy#data protection#data processing#gdpr#gdpr compliance#mobile games#fifa#Fifa 18#loot boxes#EA is trash#EA is evil#Ea is garbage
9 notes
·
View notes
Text
Folks, make sure to enable the option not to share data under the "Visibility" section in Settings ‼️‼️
#Tumblr#ai#ai generated#argie tumblr#español#artificial intelligence#consent#no sé q poner acá#cuidado#caution#data protection#data privacy#online privacy#internet privacy#invasion of privacy#data processing#anti ai#fuck ai
5 notes
·
View notes
Text
The Ultimate Data Collection Handbook: Exploring Methods, Types, and Advantages
Data collection is a fundamental part of any research, business strategy, or decision-making process. Whether you're a student, a professional, or just curious about how data is gathered and used, understanding the basics of data collection can be incredibly useful. In this guide, we'll explore the methods, types, and benefits of data collection in a way that’s easy to understand.
What is Data Collection?
Data collection is the process of gathering information to answer specific questions or to support decision-making. This information, or data, can come from various sources and can be used to make informed decisions, conduct research, or solve problems.
Methods of Data Collection
Surveys and Questionnaires
What Are They? Surveys and questionnaires are tools used to gather information from people. They can be distributed in person, by mail, or online.
How Do They Work? Respondents answer a series of questions that provide insights into their opinions, behaviors, or experiences.
When to Use Them? Use surveys and questionnaires when you need to gather opinions or experiences from a large group of people.
Interviews
What Are They? Interviews involve asking questions to individuals in a one-on-one setting or in a group discussion.
How Do They Work? The interviewer asks questions and records the responses, which can be either structured (with set questions) or unstructured (more conversational).
When to Use Them? Use interviews when you need detailed, qualitative insights or when you want to explore a topic in depth.
Observations
What Are They? Observations involve watching and recording behaviors or events as they happen.
How Do They Work? The observer notes what is happening without interfering or influencing the situation.
When to Use Them? Use observations when you need to see actual behavior or events in their natural setting.
Experiments
What Are They? Experiments involve manipulating variables to see how changes affect outcomes.
How Do They Work? Researchers control certain variables and observe the effects on other variables to establish cause-and-effect relationships.
When to Use Them? Use experiments when you need to test hypotheses and understand the relationships between variables.
Secondary Data Analysis
What Is It? This method involves analyzing data that has already been collected by someone else.
How Does It Work? Researchers use existing data from sources like government reports, research studies, or company records.
When to Use It? Use secondary data analysis when you need historical data or when primary data collection is not feasible.
Types of Data
Quantitative Data
What Is It? Quantitative data is numerical and can be measured or counted.
Examples: Age, income, number of products sold.
Use It When: You need to quantify information and perform statistical analysis.
Qualitative Data
What Is It? Qualitative data is descriptive and involves characteristics that can be observed but not measured numerically.
Examples: Customer feedback, interview responses, descriptions of behavior.
Use It When: You need to understand concepts, opinions, or experiences.
Benefits of Data Collection
Informed Decision-Making
Data provides insights that help individuals and organizations make informed decisions based on evidence rather than guesswork.
Identifying Trends and Patterns
Collecting data allows you to identify trends and patterns that can inform future actions or strategies.
Improving Services and Products
By understanding customer needs and preferences through data, businesses can improve their products and services to better meet those needs.
Supporting Research and Development
Data is crucial for researchers to test hypotheses, validate theories, and advance knowledge in various fields.
Enhancing Efficiency
Data helps in streamlining processes and improving operational efficiency by highlighting areas that need attention or improvement.
Conclusion
Understanding the methods, types, and benefits of data collection can greatly enhance your ability to gather useful information and make informed decisions. Whether you're conducting research, running a business, or just curious about the world around you, mastering data collection is a valuable skill. Use this guide to get started and explore the many ways data can help you achieve your goals.
To know more: A Guide to Data Collection: Methods, Types, and Benefits
Outsource Data Collection Services
5 notes
·
View notes
Text
Non-fiction books that explore AI's impact on society - AI News
New Post has been published on https://thedigitalinsider.com/non-fiction-books-that-explore-ais-impact-on-society-ai-news/
Artificial Intelligence (AI) refers to code or technologies that perform complex calculations, an area that encompasses simulations, data processing and analytics.
AI has grown increasingly important, becoming a game changer in many industries, including healthcare, education and finance. The use of AI has been shown to double levels of effectiveness, efficiency and accuracy in many processes, and to reduce costs in different market sectors.
AI’s impact is being felt across the globe, so it is important that we understand its effects on society and our daily lives.
A better understanding of AI, what it does and what it can mean, can be gained from well-researched AI books.
Books on AI provide insights into the use and applications of AI. They describe the advancement of AI since its inception and how it has shaped society so far. In this article, we will examine some of the best books on AI that focus on its societal implications.
For those who don’t have time to read entire books, book summary apps like Headway will be of help.
Book 1: “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom
Nick Bostrom is a Swedish philosopher with a background in computational neuroscience, logic and AI safety.
In his book, Superintelligence, he talks about how AI can surpass our current definitions of intelligence and the possibilities that might ensue.
Bostrom also talks about the possible risks to humanity if superintelligence is not managed properly, stating AI can easily become a threat to the entire human race if we exercise no control over the technology.
Bostrom offers strategies that might curb existential risks, discusses how AI can be aligned with human values to reduce those risks, and suggests teaching AI human values.
Superintelligence is recommended for anyone who is interested in knowing and understanding the implications of AI on humanity’s future.
Book 2: “AI Superpowers: China, Silicon Valley, and the New World Order” by Kai-Fu Lee
AI expert Kai-Fu Lee’s book, AI Superpowers: China, Silicon Valley, and the New World Order, examines the AI revolution and its impact so far, focusing on China and the USA.
He concentrates on the competition between these two countries in AI and the various contributions to the advancement of the technology made by each. He highlights China’s advantage, thanks in part to its larger population.
China’s significant investment so far in AI is discussed, and its chances of becoming a global leader in AI. Lee believes that cooperation between the countries will help shape the future of global power dynamics and therefore the economic development of the world.
In this book, Lee states that AI has the ability to transform economies by creating new job opportunities with massive impact across all sectors.
If you are interested in knowing the geo-political and economic impacts of AI, this is one of the best books out there.
Book 3: “Life 3.0: Being Human in the Age of Artificial Intelligence” by Max Tegmark
Max Tegmark’s Life 3.0 explores the concept of humans living in a world that is heavily influenced by AI. In the book, he talks about the concept of Life 3.0, a future where human existence and society will be shaped by AI. It focuses on many aspects of humanity including identity and creativity.
Tegmark envisions a time where AI has the ability to reshape human existence. He also emphasises the need to follow ethical principles to ensure the safety and preservation of human life.
Life 3.0 is a thought-provoking book that challenges readers to think deeply about the choices humanity may face as we progress into the AI era.
It’s one of the best books to read if you are interested in the ethical and philosophical discussions surrounding AI.
Book 4: “The Fourth Industrial Revolution” by Klaus Schwab
Klaus Martin Schwab is a German economist, mechanical engineer and founder of the World Economic Forum (WEF). He argues that machines are becoming smarter with every advance in technology and supports his arguments with evidence from previous revolutions in thinking and industry.
He explains that the current age, the fourth industrial revolution, is building on the third, with far-reaching consequences.
He states that the use of AI in technological advancement is crucial, and that cybernetics can be used by AI to change and shape the technological advances coming down the line towards us all.
This book is perfect if you are interested in AI-driven advancements in the fields of digital and technological growth. With this book, the role AI will play in the next phases of technological advancement will be better understood.
Book 5: “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” by Cathy O’Neil
Cathy O’Neil’s book emphasises the harm that defective mathematical algorithms cause when used to judge human behaviour and character. The continued use of such algorithms promotes harmful results and creates inequality.
An example given in the book is research showing that results from different search engines can bias voting choices.
A similar examination is given to research focused on Facebook, where political preferences could be affected by which newsfeeds were made to appear on users’ timelines.
This book is best suited for readers who want to venture into the darker sides of AI that aren’t regularly seen in mainstream news outlets.
Book 6: “The Age of Em: Work, Love, and Life when Robots Rule the Earth” by Robin Hanson
An associate professor of economics at George Mason University and a former researcher at the Future of Humanity Institute of Oxford University, Robin Hanson paints an imaginative picture of emulated human brains designed for robots. What if humans copied or “emulated” their brains and emotions and gave them to robots?
He argues that humans who become “Ems” (emulations) will become more dominant in the future workplace because of their higher productivity.
An intriguing book for fans of technology and those who love intelligent predictions of possible futures.
Book 7: “Architects of Intelligence: The truth about AI from the people building it” by Martin Ford
This book was drawn from interviews with AI experts and examines the struggles and possibilities of AI-driven industry.
If you want insights from people actively shaping the world, this book is right for you!
CONCLUSION
These books all have their unique perspectives, but they all point to one thing: today's AI will have significant societal and technological impact. These books will give the reader glimpses into possible futures, with the effects of AI becoming more apparent over time.
For better insight into all aspects of AI, these books are the boosts you need to expand your knowledge. AI is advancing quickly, and these authors are some of the most respected in the field. Learn from the best with these choice reads.
#2024#ai#ai news#ai safety#Algorithms#Analytics#applications#apps#Article#artificial#Artificial Intelligence#author#background#Bias#Big Data#book#Books#brains#Building#change#China#code#competition#creativity#data#data processing#Democracy#development#double#dynamics
2 notes
·
View notes
Text
Speechy Research Devlog: Some New Tools & New Discoveries
Hey everyone, so it is about 8:30pm and I am sure that by the time I finish writing this it will be nearly 9, but I wanted to update everyone who is following my Speechy research on here. I wrote two new programs today, a Prosodic Pitch Analyzer (PPA) and an RMS Energy Analyzer, using my handy-dandy new favorite library, librosa.
Prosodic Pitch Analyzer
The PPA calculates the fundamental frequency (F0) or pitch of an audio signal and visualizes it using a line plot. This is a useful tool for analyzing prosodic features of speech such as intonation, stress, and emphasis.
The code takes an audio file as input, processes it using the librosa library to extract the fundamental frequency / pitch, and then plots the pitch contour using matplotlib.
The output plot shows the pitch contour of the audio signal over time, with changes in pitch represented by changes in the vertical position of the line. The plot can be used to identify patterns in the pitch contour, such as rising or falling intonation, and to compare the pitch contours of different audio signals.
The prosodic pitch analyzer can be used to detect changes in pitch, which can be indicative of a neurological speech disorder. For example, a person with ataxic dysarthria, which is caused by damage to the cerebellum, may have difficulty controlling the pitch and loudness of their voice, resulting in variations in pitch that are not typical of normal speech. By analyzing changes in pitch using a tool like the prosodic pitch analyzer, it is possible to identify patterns that are indicative of certain neurological disorders. This information can be used by clinicians to diagnose and treat speech disorders, and to monitor progress in speech therapy.
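For anyone who wants to tinker along, here's a simplified sketch of the idea using librosa and matplotlib (not the exact PPA code; the filename and frequency bounds are just placeholders):

import librosa
import matplotlib.pyplot as plt

# Load the recording at its native sample rate ("speech_sample.wav" is a placeholder).
y, sr = librosa.load("speech_sample.wav", sr=None)

# Estimate the fundamental frequency (F0) frame by frame with the pYIN algorithm.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Plot the pitch contour over time.
times = librosa.times_like(f0, sr=sr)
plt.figure(figsize=(10, 4))
plt.plot(times, f0)
plt.xlabel("Time (s)")
plt.ylabel("F0 (Hz)")
plt.title("Prosodic pitch contour")
plt.tight_layout()
plt.show()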
RMS Energy Analyzer
The program that calculates the energy of a person's speech processes an audio file and calculates the energy of the signal at each time frame. This can be useful for analyzing changes in a person's speech over time, as well as for detecting changes in the intensity or loudness of the speech.
The program uses the librosa library to load and process the audio file, and then calculates the energy of each frame using the root-mean-square (RMS) energy of the signal. The energy values are then plotted over time using the matplotlib library, allowing you to visualize changes in the energy of the speech.
By analyzing changes in energy over time, you can gain insight into how the speech patterns of people with these disorders may differ from those without.
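The core of it looks roughly like this (again a simplified sketch rather than the real program, with the same placeholder filename):

import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("speech_sample.wav", sr=None)

# Root-mean-square energy per analysis frame (librosa returns a 1 x n_frames array).
rms = librosa.feature.rms(y=y)[0]
times = librosa.times_like(rms, sr=sr)

# Plot loudness over time; dips and flat stretches show breaths and fading energy.
plt.figure(figsize=(10, 4))
plt.plot(times, rms)
plt.xlabel("Time (s)")
plt.ylabel("RMS energy")
plt.title("Speech energy over time")
plt.tight_layout()
plt.show()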
Analysis with PPA
The research that I've been focused on today primarily looked at the speech recording of myself, the mid-stage HD patient with chorea, the late-stage HD patient (EOL), and a young girl with aphasia.
The patient with aphasia had slurred speech with varied rising and falling, much like an AD patient. Earlier I saw her ROS (rate of speech) and was surprised at the differences between my rate of speech and hers (aphasia vs. AD).
My rate of speech
The girl with aphasia's rate of speech
So I decided to compare our speech pitches as well and this is what ours looked like side-by-side.
Hers is on the left, mine on the right.
Her pitch seemed to start off higher (though unstable) like mine, but mine fell during my recording and wobbled for a while. She had some drastic pitch differences, but mine had around 16 peaks where hers had around 18-19. Her latter peaks weren't as high in frequency as mine, as my frequency peaks mostly ended up very high, around 1600 Hz or around 1000 Hz. There is quite a bit of instability in both our pitches though.
Her energy levels in the 15 seconds of speech started off at high-mid energy, dropped around 1 second in until almost 3 seconds, shot back up and varied between high and high-mid energy, then had several "dips" and higher moments of energy. At the end, around 13 seconds, she got a huge boost of "gusto" (well... energy). She had around 7 breaths (noted by the dips / flatlines).
This was mine. It seems like as the 15 seconds went on I started to run out of steam; I wasn't able to keep my energy up. Mine had around 11 breaths, so I was running out of breath, i.e. having a breathier voice more than she was.
Research Conclusion for Today
Although our speech energy and pitch have quite a bit in common, our rates of speaking don't. She used more syllables at a constant rate, which made it pretty obvious she had a lot of slurring / overshooting; mine had far fewer syllables, and my rate of speech was quite slow and varied more than hers. This illustrates my cognitive difficulties and use of placeholder words, along with slight slurring.
As far as pitch goes, it seems we had similar issues throughout the 15-second clips; mine spiked toward the end when I was getting "worn out" and hers spiked earlier when she had more energy.
Our energy levels differ because although she had moments of energy, I tuckered out pretty quickly.
I hope this helps shed some light on the speech of both aphasia patients and ataxic dysarthria / HD patients, and on some of the cognitive differences.
Will update again tomorrow when I am done with another day of programming and research!
#python#python developer#python development#data science#data scientist#data processing#data scraping#web scraping#speech pathology#speech disorder#ataxic dysarthria#ataxia#huntingtons disease#stroke#dementia#aphasia#data analyst#data analysis#medical research#medical technology#medical#biotechnology#artificial intelligence#sound processing#sound engineering#machine learning#ai#cognitive issues
12 notes
·
View notes
Text
Everything You Need to Know About Machine Learning
Ready to step into the world of possibilities with machine learning? Learn all about machine learning and its cutting-edge technology: what you need to learn before using it, where it is applicable, and what its types are. Join us as we reveal the secrets. Read along for everything you need to know about machine learning!
What is Machine Learning?
Machine Learning is a field of study within artificial intelligence (AI) that concentrates on creating algorithms and models which enable computers to learn from data and make predictions or decisions without being explicitly programmed. The process involves training a computer system using copious amounts of data to identify patterns, extract valuable information, and make precise predictions or decisions.
Fundamentally, machine learning relies on statistical techniques and algorithms to analyze data and discover patterns or connections. These algorithms utilize mathematical models to process and interpret data, revealing significant insights that can be applied across various applications and AI/ML services.
What do you need to know for Machine Learning?
You can explore the exciting world of machine learning without being an expert mathematician or computer scientist. However, a basic understanding of statistics, programming, and data manipulation will benefit you. Machine learning involves exploring patterns in data, making predictions, and automating tasks.
It has the potential to revolutionize industries. Moreover, it can improve healthcare and enhance our daily lives. Whether you are a beginner or a seasoned professional, embracing machine learning can unlock numerous opportunities and empower you to solve complex problems with intelligent algorithms.
Types of Machine Learning
Let’s learn all about machine learning and know about its types.
Supervised Learning
Supervised learning resembles having a wise mentor guiding you every step of the way. In this approach, a machine learning model is trained using labeled data wherein the desired outcome is already known.
The model gains knowledge from these provided examples and can accurately predict or classify new, unseen data. It serves as a highly effective tool for tasks such as detecting spam, analyzing sentiment, and recognizing images.
Unsupervised Learning
In the realm of unsupervised learning, machines are granted the autonomy to explore and unveil patterns independently. This methodology mainly operates with unlabeled data, where models strive to unearth concealed structures or relationships within the information.
It can be likened to solving a puzzle without prior knowledge of what the final image should depict. Unsupervised learning finds frequent application in diverse areas such as clustering, anomaly detection, and recommendation systems.
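As a small, hedged illustration of this idea, here is a minimal clustering sketch using scikit-learn with synthetic data; every value is made up for the example:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic, unlabeled 2-D points with three hidden groupings.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# KMeans is only told how many clusters to look for; it never sees labels.
model = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = model.fit_predict(X)

print(labels[:10])             # cluster assignment for the first few points
print(model.cluster_centers_)  # the structure the algorithm uncovered on its own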
Reinforcement Learning
Reinforcement learning draws inspiration from the way humans learn through trial and error. In this approach, a machine learning model interacts with an environment and acquires knowledge to make decisions based on positive or negative feedback, referred to as rewards.
It's akin to teaching a dog new tricks by rewarding good behavior. Reinforcement learning finds extensive applications in areas such as robotics, game playing, and autonomous vehicles.
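A toy, hedged sketch of the reinforcement idea: a three-armed bandit that learns which "slot machine" to pull purely from reward feedback (the payout probabilities are invented for illustration):

import random

true_payouts = [0.2, 0.5, 0.8]   # hidden from the agent (illustrative values)
estimates = [0.0, 0.0, 0.0]      # agent's running estimate of each arm
counts = [0, 0, 0]
epsilon = 0.1                    # exploration rate

for step in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)              # explore a random arm
    else:
        arm = estimates.index(max(estimates))  # exploit the best-known arm
    reward = 1 if random.random() < true_payouts[arm] else 0
    counts[arm] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("Estimated payouts:", [round(e, 2) for e in estimates])
print("Best arm found:", estimates.index(max(estimates)))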
Machine Learning Process
Now that the different types of machine learning have been explained, we can delve into understanding the encompassing process involved.
To begin with, one must gather and prepare the appropriate data. High-quality data is the foundation of any triumph in a machine learning project.
Afterward, one should proceed by selecting an appropriate algorithm or model that aligns with their specific task and data type. It is worth noting that the market offers a myriad of algorithms, each possessing unique strengths and weaknesses.
Next, the machine goes through the training phase. The model learns from labeled data by adjusting its internal parameters, which minimizes errors and improves its accuracy.
Evaluating the machine's performance is a significant step. It helps assess the model's ability to generalize to new, unseen data. Different metrics are used for the assessment, including accuracy, recall, precision, and other performance indicators.
The last step is to test the machine on real-world predictions and decision-making. This is where we see the return on our investment: it helps automate processes, make accurate forecasts, and offer valuable insights. In the same way, RedBixbite offers solutions like DOCBrains, Orionzi, SmileeBrains, and E-Governance for industries like agriculture, manufacturing, banking and finance, healthcare, public sector and government, travel, transportation and logistics, and retail and consumer goods.
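To make these steps concrete, here is a minimal, hedged sketch of the supervised workflow using scikit-learn; the dataset, algorithm, and metrics are illustrative choices, not the only options:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 1. Gather and prepare data (a built-in toy dataset stands in for real data).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 2. Select an algorithm suited to the task and the data type.
model = RandomForestClassifier(n_estimators=100, random_state=0)

# 3. Train: the model adjusts its internal parameters against the labeled data.
model.fit(X_train, y_train)

# 4. Evaluate on data the model has never seen.
pred = model.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, average="macro"))
print("recall   :", recall_score(y_test, pred, average="macro"))

# 5. Use the trained model for new predictions.
print("prediction for one new sample:", model.predict(X_test[:1]))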
Applications of Machine Learning
Do you want to know all about machine learning? Then you should know where it is applicable.
Natural Language Processing (NLP)- One area where machine learning significantly impacts is Natural Language Processing (NLP). It enables various applications like language translation, sentiment analysis, chatbots, and voice assistants. Using the prowess of machine learning, NLP systems can continuously learn and adapt to enhance their understanding of human language over time.
Computer Vision- Computer Vision presents an intriguing application of machine learning. It involves training computers to interpret and comprehend visual information, encompassing images and videos. By utilizing machine learning algorithms, computers gain the capability to identify objects, faces, and gestures, resulting in the development of applications like facial recognition, object detection, and autonomous vehicles.
Recommendation Systems- Recommendation systems have become an essential part of our everyday lives, with machine learning playing a crucial role in their development. These systems carefully analyze user preferences, behaviors, and patterns to offer personalized recommendations spanning various domains like movies, music, e-commerce products, and news articles.
Fraud Detection- Fraud detection poses a critical concern for businesses. In this realm, machine learning has emerged as a game-changer. By meticulously analyzing vast amounts of data and swiftly detecting anomalies, machine learning models can identify fraudulent activities in real-time.
Healthcare- Machine learning has also made great progress in the healthcare sector. It has helped doctors and healthcare professionals make precise and timely decisions by diagnosing diseases and predicting patient outcomes. Through the analysis of patient data, machine learning algorithms can detect patterns and anticipate possible health risks, ultimately resulting in early interventions and enhanced patient care.
In today's fast-paced technological landscape, the field of artificial intelligence (AI) has emerged as a groundbreaking force, revolutionizing various industries. As a specialized AI development company, our expertise lies in machine learning—a subset of AI that entails creating systems capable of learning and making predictions or decisions without explicit programming.
Machine learning's widespread applications across multiple domains have transformed businesses' operations and significantly enhanced overall efficiency.
#ai/ml#ai#artificial intelligence#machine learning#ai development#ai developers#data science#technology#data analytics#data scientist#data processing
3 notes
·
View notes
Link
This guide provides valuable insights into the benefits of having a portfolio and offers a range of significant projects that can be included to help you get started or accelerate your career in data science. Download Now.
#database#data mining#data warehousing#data#data science#data scientist#data analysis#data analyst#Big Data Analysis#data processing#data projects
5 notes
·
View notes
Text
python matching with ngrams
# https://pythonprogrammingsnippets.com

def get_ngrams(text, n):
    # split text into n-grams.
    ngrams = []
    for i in range(len(text) - n + 1):
        ngrams.append(text[i:i + n])
    return ngrams

def compare_strings_ngram_pct(string1, string2, n):
    # compare two strings based on the percentage of matching n-grams
    # Split strings into n-grams
    string1_ngrams = get_ngrams(string1, n)
    string2_ngrams = get_ngrams(string2, n)
    # Find the number of matching n-grams
    matching_ngrams = set(string1_ngrams) & set(string2_ngrams)
    # Calculate the percentage match
    percentage_match = (len(matching_ngrams) / len(string1_ngrams)) * 100
    return percentage_match

def compare_strings_ngram_max_size(string1, string2):
    # compare two strings based on the maximum matching n-gram size
    # Split strings into n-grams of varying lengths
    n = min(len(string1), len(string2))
    for i in range(n, 0, -1):
        string1_ngrams = set(get_ngrams(string1, i))
        string2_ngrams = set(get_ngrams(string2, i))
        # Find the number of matching n-grams
        matching_ngrams = string1_ngrams & string2_ngrams
        if len(matching_ngrams) > 0:
            # Return the maximum matching n-gram size and break out of the loop
            return i
    # If no matching n-grams are found, return 0
    return 0

string1 = "hello world"
string2 = "hello there"
n = 2  # n-gram size

# find how much of string 2 matches string 1 based on n-grams
percentage_match = compare_strings_ngram_pct(string1, string2, n)
print(f"The percentage match is: {percentage_match}%")

# find maximum ngram size of matching ngrams
max_match_size = compare_strings_ngram_max_size(string1, string2)
print(f"The maximum matching n-gram size is: {max_match_size}")
#python#ngrams#ngram#string comparison#strings#string#comparison#basic python#python programming#tutorial#snippets#nlp#natural language processing#ai processing#data#datascience#data processing#language#matching strings#string matching#matching#ai#text processing#text#processing#dev#developer#programmer#programming#source code
4 notes
·
View notes
Text
A page from Sperry UNIVAC’s computer brochure - 1976.
#the 70s#the 1970s#computing#vintage computers#vintage tech#vintage technology#technology#the digital age#vintage electronics#electronics#digital computers#digital computing#data entry#univac#sperry#sperry rand#the rand corporation#sperry univac#hey you start computing#minicomputers#mainframe computers#data processing
47 notes
·
View notes
Text
IoT Data Analytics Benefits
Explore how IoT data analytics can transform industries. Learn about key use cases and the benefits of leveraging IoT for smarter business decisions and greater efficiency. In this context, Internet of Things data analytics refers to the collection, transformation, and analysis of data from Internet of Things devices.
#IoT Data Analytics#Internet of Things data analytics#Data acquisition#data management#data processing#data presentation#Primathon
1 note
·
View note
Text
In today’s data-driven world, effective decision-making relies on different types of data processing to manage and analyze information efficiently. Batch processing is ideal for handling large data sets periodically, while real-time processing provides instant insights for critical decisions. OLTP ensures smooth transaction management, and OLAP allows businesses to uncover trends through multidimensional analysis. Emerging methods like augmented analytics and stream processing leverage AI to automate insights and handle continuous data flows, empowering organizations to stay ahead in a fast-paced environment.
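As a toy illustration of the batch versus stream distinction described above (a sketch only; the data and function names are invented):

from statistics import mean

readings = [12.1, 15.3, 9.8, 20.4, 18.7]   # made-up sensor values

# Batch: collect everything first, then process the whole set at once.
def batch_average(values):
    return mean(values)

# Stream: update the result incrementally as each record arrives.
def stream_average(value_iter):
    total, count = 0.0, 0
    for v in value_iter:
        total += v
        count += 1
        yield total / count   # running average after each new record

print("batch result  :", batch_average(readings))
print("stream results:", list(stream_average(iter(readings))))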
0 notes
Text
Enhance your research and project management skills with strategies, tools, and best practices. Learn how to streamline workflows, improve collaboration, and achieve project goals efficiently.
Project Management Services
survey programming services
2 notes
·
View notes
Text
Surface-based sonar system could rapidly map the ocean floor at high resolution
New Post has been published on https://thedigitalinsider.com/surface-based-sonar-system-could-rapidly-map-the-ocean-floor-at-high-resolution/
On June 18, 2023, the Titan submersible was about an hour-and-a-half into its two-hour descent to the Titanic wreckage at the bottom of the Atlantic Ocean when it lost contact with its support ship. This cease in communication set off a frantic search for the tourist submersible and five passengers onboard, located about two miles below the ocean’s surface.
Deep-ocean search and recovery is one of the many missions of military services like the U.S. Coast Guard Office of Search and Rescue and the U.S. Navy Supervisor of Salvage and Diving. For this mission, the longest delays come from transporting search-and-rescue equipment via ship to the area of interest and comprehensively surveying that area. A search operation on the scale of that for Titan — which was conducted 420 nautical miles from the nearest port and covered 13,000 square kilometers, an area roughly twice the size of Connecticut — could take weeks to complete. The search area for Titan is considered relatively small, focused on the immediate vicinity of the Titanic. When the area is less known, operations could take months. (A remotely operated underwater vehicle deployed by a Canadian vessel ended up finding the debris field of Titan on the seafloor, four days after the submersible had gone missing.)
A research team from MIT Lincoln Laboratory and the MIT Department of Mechanical Engineering‘s Ocean Science and Engineering lab is developing a surface-based sonar system that could accelerate the timeline for small- and large-scale search operations to days. Called the Autonomous Sparse-Aperture Multibeam Echo Sounder, the system scans at surface-ship rates while providing sufficient resolution to find objects and features in the deep ocean, without the time and expense of deploying underwater vehicles. The echo sounder — which features a large sonar array using a small set of autonomous surface vehicles (ASVs) that can be deployed via aircraft into the ocean — holds the potential to map the seabed at 50 times the coverage rate of an underwater vehicle and 100 times the resolution of a surface vessel.
Video: Autonomous Sparse-Aperture Multibeam Echo Sounder (MIT Lincoln Laboratory)
“Our array provides the best of both worlds: the high resolution of underwater vehicles and the high coverage rate of surface ships,” says co–principal investigator Andrew March, assistant leader of the laboratory’s Advanced Undersea Systems and Technology Group. “Though large surface-based sonar systems at low frequency have the potential to determine the materials and profiles of the seabed, they typically do so at the expense of resolution, particularly with increasing ocean depth. Our array can likely determine this information, too, but at significantly enhanced resolution in the deep ocean.”
Underwater unknown
Oceans cover 71 percent of Earth’s surface, yet more than 80 percent of this underwater realm remains undiscovered and unexplored. Humans know more about the surface of other planets and the moon than the bottom of our oceans. High-resolution seabed maps would not only be useful to find missing objects like ships or aircraft, but also to support a host of other scientific applications: understanding Earth’s geology, improving forecasting of ocean currents and corresponding weather and climate impacts, uncovering archaeological sites, monitoring marine ecosystems and habitats, and identifying locations containing natural resources such as mineral and oil deposits.
Scientists and governments worldwide recognize the importance of creating a high-resolution global map of the seafloor; the problem is that no existing technology can achieve meter-scale resolution from the ocean surface. The average depth of our oceans is approximately 3,700 meters. However, today’s technologies capable of finding human-made objects on the seabed or identifying person-sized natural features — these technologies include sonar, lidar, cameras, and gravitational field mapping — have a maximum range of less than 1,000 meters through water.
Ships with large sonar arrays mounted on their hull map the deep ocean by emitting low-frequency sound waves that bounce off the seafloor and return as echoes to the surface. Operation at low frequencies is necessary because water readily absorbs high-frequency sound waves, especially with increasing depth; however, such operation yields low-resolution images, with each image pixel representing a football field in size. Resolution is also restricted because sonar arrays installed on large mapping ships are already using all of the available hull space, thereby capping the sonar beam’s aperture size. By contrast, sonars on autonomous underwater vehicles (AUVs) that operate at higher frequencies within a few hundred meters of the seafloor generate maps with each pixel representing one square meter or less, resulting in 10,000 times more pixels in that same football field–sized area. However, this higher resolution comes with trade-offs: AUVs are time-consuming and expensive to deploy in the deep ocean, limiting the amount of seafloor that can be mapped; they have a maximum range of about 1,000 meters before their high-frequency sound gets absorbed; and they move at slow speeds to conserve power. The area-coverage rate of AUVs performing high-resolution mapping is about 8 square kilometers per hour; surface vessels map the deep ocean at more than 50 times that rate.
A solution surfaces
The Autonomous Sparse-Aperture Multibeam Echo Sounder could offer a cost-effective approach to high-resolution, rapid mapping of the deep seafloor from the ocean’s surface. A collaborative fleet of about 20 ASVs, each hosting a small sonar array, effectively forms a single sonar array 100 times the size of a large sonar array installed on a ship. The large aperture achieved by the array (hundreds of meters) produces a narrow beam, which enables sound to be precisely steered to generate high-resolution maps at low frequency. Because very few sonars are installed relative to the array’s overall size (i.e., a sparse aperture), the cost is tractable.
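For a back-of-the-envelope sense of why a larger aperture sharpens the beam, the diffraction-limited beamwidth is roughly λ/D. The frequency, aperture sizes, and depth in the sketch below are illustrative assumptions, not figures from the project:

# Rough beamwidth / seafloor-footprint estimate; every number is an assumption.
SOUND_SPEED = 1500.0   # m/s, nominal speed of sound in seawater
FREQ = 12_000.0        # Hz, an assumed low mapping frequency
DEPTH = 3700.0         # m, roughly the average ocean depth

wavelength = SOUND_SPEED / FREQ  # 0.125 m at the assumed frequency

for label, aperture_m in [("hull-mounted array", 10.0), ("sparse ASV array", 300.0)]:
    beamwidth_rad = wavelength / aperture_m  # diffraction-limited beamwidth
    footprint_m = beamwidth_rad * DEPTH      # beam footprint on the seafloor
    print(f"{label:18s} aperture = {aperture_m:5.0f} m -> footprint ~ {footprint_m:5.1f} m")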
However, this collaborative and sparse setup introduces some operational challenges. First, for coherent 3D imaging, the relative position of each ASV’s sonar subarray must be accurately tracked through dynamic ocean-induced motions. Second, because sonar elements are not placed directly next to each other without any gaps, the array suffers from a lower signal-to-noise ratio and is less able to reject noise coming from unintended or undesired directions. To mitigate these challenges, the team has been developing a low-cost precision-relative navigation system and leveraging acoustic signal processing tools and new ocean-field estimation algorithms. The MIT campus collaborators are developing algorithms for data processing and image formation, especially to estimate depth-integrated water-column parameters. These enabling technologies will help account for complex ocean physics, spanning physical properties like temperature, dynamic processes like currents and waves, and acoustic propagation factors like sound speed.
Processing for all required control and calculations could be completed either remotely or onboard the ASVs. For example, ASVs deployed from a ship or flying boat could be controlled and guided remotely from land via a satellite link or from a nearby support ship (with direct communications or a satellite link), and left to map the seabed for weeks or months at a time until maintenance is needed. Sonar-return health checks and coarse seabed mapping would be conducted on board, while full, high-resolution reconstruction of the seabed would require a supercomputing infrastructure on land or on a support ship.
“Deploying vehicles in an area and letting them map for extended periods of time without the need for a ship to return home to replenish supplies and rotate crews would significantly simplify logistics and operating costs,” says co–principal investigator Paul Ryu, a researcher in the Advanced Undersea Systems and Technology Group.
Since beginning their research in 2018, the team has turned their concept into a prototype. Initially, the scientists built a scale model of a sparse-aperture sonar array and tested it in a water tank at the laboratory’s Autonomous Systems Development Facility. Then, they prototyped an ASV-sized sonar subarray and demonstrated its functionality in Gloucester, Massachusetts. In follow-on sea tests in Boston Harbor, they deployed an 8-meter array containing multiple subarrays equivalent to 25 ASVs locked together; with this array, they generated 3D reconstructions of the seafloor and a shipwreck. Most recently, the team fabricated, in collaboration with Woods Hole Oceanographic Institution, a first-generation, 12-foot-long, all-electric ASV prototype carrying a sonar array underneath. With this prototype, they conducted preliminary relative navigation testing in Woods Hole, Massachusetts and Newport, Rhode Island. Their full deep-ocean concept calls for approximately 20 such ASVs of a similar size, likely powered by wave or solar energy.
This work was funded through Lincoln Laboratory’s internally administered R&D portfolio on autonomous systems. The team is now seeking external sponsorship to continue development of their ocean floor–mapping technology, which was recognized with a 2024 R&D 100 Award.
#000#2023#2024#3d#3d reconstructions#aircraft#Algorithms#amp#applications#approach#Arrays#Atlantic ocean#autonomous#autonomous systems#autonomous vehicles#beam#Best Of#board#Cameras#climate#climate change#Collaboration#collaborative#communication#communications#data#data processing#debris#deep ocean#deploying
0 notes
Text
The Rise of Automated Machine Learning
Automated machine learning (AutoML) refers to the use of machine learning techniques to automate or partially automate the machine learning process. With AutoML, much of the tedious human involvement required for data preparation, model selection, hyperparameter tuning, and evaluation can be automated. AutoML tools allow more resources to be devoted to understanding the problem domain and interpreting results, rather than to manual trial-and-error in the ML workflow.
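As one narrow, hedged illustration of the hyperparameter-tuning slice of that automation, here is a sketch using plain scikit-learn's GridSearchCV rather than a dedicated AutoML framework; the dataset and parameter grid are arbitrary choices:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {                     # arbitrary illustrative grid
    "n_estimators": [50, 100],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}

# The search, not the analyst, tries every candidate setting and keeps the best.
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("held-out score :", search.score(X_test, y_test))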
Automated machine learning is revolutionizing how organizations both large and small can benefit from artificial intelligence. As these technologies continue to evolve, they hold much promise for automating routine data science work and improving ML productivity in ways not feasible for human data scientists alone.
Get more insights on Automated Machine Learning.
#Coherent Market Insights#Integration with Existing IT Infrastructure#Retail & E-commerce#Data Processing#Model Selection
0 notes