#maluuba
jcmarchi · 3 months
The Sequence Chat: Justin D. Harris - About Building Microsoft Copilot
New Post has been published on https://thedigitalinsider.com/the-sequence-chat-justin-d-harris-about-building-microsoft-copilot/
Quick bio
This is your second interview at The Sequence. Please tell us a bit about yourself: your background, your current role, and how you got started in AI.
I grew up in the suburbs of Montreal and have always been passionate about mathematics. I left Montreal to study math and computer science at the University of Waterloo in Canada. I currently live in Toronto with my wonderful girlfriend and our little dog Skywalker, who enjoys kayaking with us around the beaches and bluffs. I am a Principal Software Engineer at Microsoft, where I have worked on various AI projects and have been a core contributor to the development of Microsoft Copilot. While my colleagues recognize me as a diligent engineer, only a few have had the opportunity to witness my prowess as a skier.
I have been dedicated to building AI applications since I was in university 15 years ago. During my studies, I joined Maluuba as one of the early engineers. We developed personal assistants for phones, TVs, and cars that handled a wide range of commands. We started with classical machine learning models such as SVMs, Naive Bayes, and CRFs before adapting to use deep learning. We sold Maluuba to Microsoft in 2017 to help Microsoft in its journey to incorporate AI into more products. I have worked on a few AI projects at Microsoft, including some research to put trainable models in Ethereum smart contracts, which we talked about in our last interview. Since 2020, I have been working on a chat system built into Bing which evolved into Microsoft Copilot. I am currently a Principal Software Engineer on the Copilot Platform team at Microsoft, where we’re focused on developing a generalized platform for copilots at Microsoft.
🛠 ML Work  
Your recent work includes Microsoft Copilot, which is a central piece of Microsoft’s AI vision. Tell us about the vision and core capabilities of the platform.
We have built a platform for copilots and apps that want to leverage large language models (LLMs) and easily take advantage of the latest developments in AI. Many products use our platform, such as Windows, Edge, Skype, Bing, SwiftKey, and many Office products. Their needs and customization points vary. It’s a fun engineering challenge to build a system that’s designed to work well for many different types of clients in different programming languages and that scales from simple LLM usage to more sophisticated integrations with plugins and custom middleware. Many teams benefit not only from the power of our customizable platform, but also from the many Responsible AI (RAI) and security safeguards built into our system.
Are copilots/agents the automation paradigm of the AI era? How would you compare copilots with previous automation trends, such as robotic process automation (RPA), middleware platforms, and others?
Copilots help us automate many types of tasks and get our work done more quickly in a breadth of scenarios, but right now, we still often need to review their work such as emails or code they write. Other types of automation might be hard for an individual to configure, but once it’s configured, it’s designed to run autonomously and be trusted because its scope is limited. Another big difference with using LLMs compared to previous automation trends is that we can now use the same model to help with many different types of tasks when given the right instructions and examples. When given the right instructions and grounding information, LLMs can often generalize to new scenarios.
There are capabilities such as planning/reasoning, memory, and action execution that are fundamental building blocks of copilots or agents. What are some of the most exciting research and technologies you have seen in these areas?
AutoGen is an interesting paradigm that’s adapting classical ideas like ensembling techniques from previous eras of AI for the new more generalized LLMs. AutoGen can use multiple different LLMs to collaborate and solve a task.
Semantic Kernel is a great tool for aiding in orchestrating LLMs and easily integrating plugins, RAG, and different models. It also works well with my favorite tool to easily run models locally: Ollama.
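Since Ollama comes up a few times in this conversation, a small illustration may help. The sketch below calls a locally served model through Ollama’s local REST endpoint; the model name ("phi3") and the prompt are only placeholders, and this is my minimal example rather than anything taken from the interview itself.

```typescript
// Minimal sketch: querying a local model served by Ollama (default port 11434).
// Assumes the model has already been pulled, e.g. `ollama pull phi3`.

interface GenerateResponse {
  response: string; // the model's completion text
  done: boolean;
}

async function askLocalModel(prompt: string, model = "phi3"): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false returns a single JSON object instead of a token stream
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = (await res.json()) as GenerateResponse;
  return data.response;
}

// Example usage:
askLocalModel("Explain retrieval-augmented generation in one sentence.")
  .then(console.log)
  .catch(console.error);
```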
Here’s a somewhat controversial question: copilots/agents are typically constructed as an LLM surrounded by features like RAG, action execution, memory, etc. How much of those capabilities do you foresee becoming part of the LLM (maybe fine-tuned) themselves versus what remains external? In other words, does the model become the copilot?
It’s really helpful to have features like RAG available as external integrations for brand new scenarios and to ensure that we cite the right data. When training models, we talk about the ‘cold start’ problem: how do we get data and examples for new use cases? Very large models can learn about certain desired knowledge, but it’s hard to foresee what will be required in this quickly changing space. Many teams using our Copilot Platform expect to use RAG and plugins to easily integrate their stored knowledge from various sources that update often, such as content from the web based on news, or documentation that changes daily. It would be outlandish to tell them to collect lots of training data, even if it’s unlabeled or unstructured data, and to fine-tune a model hourly or even more often as the world changes. We’re not ready for that yet. Citing the right data is also important. Without RAG, current models hallucinate too much and cannot yet be trusted to cite the original source of information. With RAG, we know what information is available for the model to cite at runtime and we include those links in the UI along with a model’s response, even if the model did not choose to cite them, because they’re helpful as references for us to learn more about a topic.
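To make the RAG pattern described here a bit more concrete, below is a toy sketch of the flow: retrieve a few documents, build a grounded prompt, and keep the source URLs so the UI can show them alongside the answer whether or not the model cites them. The keyword-overlap retrieval is purely illustrative (a real system would use a search index or embeddings), and none of this reflects Copilot’s actual implementation.

```typescript
// Toy RAG sketch: naive retrieval + grounded prompt + references for the UI.

interface Doc {
  title: string;
  url: string;
  text: string;
}

// Score documents by simple keyword overlap with the query (illustration only).
function retrieve(query: string, docs: Doc[], k = 2): Doc[] {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map(doc => ({
      doc,
      score: terms.filter(t => doc.text.toLowerCase().includes(t)).length,
    }))
    .filter(s => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(s => s.doc);
}

// Build a prompt grounded in the retrieved sources and return their links,
// so the UI can always display where the grounding data came from.
function buildGroundedPrompt(query: string, docs: Doc[]) {
  const sources = retrieve(query, docs);
  const context = sources
    .map((d, i) => `[${i + 1}] ${d.title}\n${d.text}`)
    .join("\n\n");
  const prompt =
    "Answer the question using only the sources below and cite them as [1], [2], ...\n\n" +
    `${context}\n\nQuestion: ${query}`;
  return { prompt, references: sources.map(d => d.url) };
}
```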
One of the biggest challenges with using models such as GPT-4 for domain-specific agents or copilots is that they are just too big and expensive. Microsoft has recently been pushing the small language model (SLM) trend with models like Orca or Phi. Are SLMs generally better suited for business copilot scenarios?
SLMs are very useful for specific use cases and can be more easily fine-tuned. The biggest caveat for SLMs is that many only work well in fewer languages than GPT-4, which knows many languages. I’ve had a great time playing around with them using Ollama. It’s easy to experiment and build an application with a local SLM, especially while you’re more focused on traditional engineering problems and designing parts of your project. Once you’re ready to scale to many languages and meet the full array of customer needs, a more powerful model might be more useful. I think the real answer will be hybrid systems that find ways to leverage small and large models.
How important are guardrails in Microsoft’s Copilots, and how do you think about that space?
We have many important guardrails for Responsible AI (RAI) built into our Copilot Platform from inspecting user input to verifying model output. These protections are one of the main reasons that many teams use our platform. The guardrails and shields that we set up for RAI are very important in our designs. RAI is a core part of every design review, and we standardize how RAI works for everything that goes into and comes out of our platform. We work with many teams across Microsoft to standardize what to validate and share knowledge. We also ensure that the long prompt with special instructions, examples, and formatting is treated securely, just like code, and not exposed outside of our platform.
Your team was very vocal about its work on the Copilot user experience, using technologies like SignalR. What makes the UX for copilots/agents different from previous paradigms?
We built new user experiences for our copilots to integrate them into existing products, and we wrote a blog to share some of our design choices, such as how we stream responses and designed the platform to work with many different types of clients in different programming languages. I also did a podcast to discuss some of the topics from the blog post in more depth. One of the biggest noticeable differences from previous assistants or agents is how an answer is streamed word by word, or token by token, as the response is generated. The largest and most powerful models can also be the slowest ones, and it can take many seconds or sometimes minutes to generate a full response with grounding data and references, so it’s important for us to start to show the user an answer as quickly as possible. We use SignalR to help us simplify streaming the answer to the client. SignalR automatically detects and chooses the best transport method among the various web standard protocols and techniques. WebSockets are used as the transport method by default for most of our applications, and we can gracefully fall back to Server-Sent Events or long polling. SignalR also simplifies bidirectional communication, such as when the application needs to send information to the service to interrupt the streaming of a response.
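For readers who have not used SignalR, here is a minimal client-side sketch using the @microsoft/signalr package. The hub URL and the method names ("ReceiveToken", "SendPrompt", "StopGeneration") are invented for illustration and are not Copilot’s actual contract; the transport negotiation and fallback behavior, however, are standard SignalR features.

```typescript
import * as signalR from "@microsoft/signalr";

// Hypothetical hub endpoint; SignalR negotiates the best transport for it
// (WebSockets where available, falling back to Server-Sent Events or long polling).
const connection = new signalR.HubConnectionBuilder()
  .withUrl("https://example.com/chathub")
  .withAutomaticReconnect()
  .configureLogging(signalR.LogLevel.Information)
  .build();

// Tokens arrive incrementally, so the UI can render the answer as it streams.
connection.on("ReceiveToken", (token: string) => {
  process.stdout.write(token);
});

async function main(): Promise<void> {
  await connection.start();

  // Ask the service to start generating a response for a prompt.
  await connection.invoke("SendPrompt", "Plan a weekend trip to Montreal.");

  // Bidirectional communication: the client could interrupt generation at any time.
  // await connection.invoke("StopGeneration");
}

main().catch(console.error);
```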
We use Adaptive Cards and Markdown to easily scale to displaying responses in multiple different applications or different programming languages. We use object-basin, a new library that we built, to generalize and simplify streaming components of JSON so that we can modify the JSON in Adaptive Cards that have already been streamed to the application. This gives the service a lot of control over what is displayed in the applications, and the application can easily tweak how the response is formatted, for example, by changing CSS.
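As a rough illustration of the streaming-JSON idea (this is only a conceptual sketch, not the object-basin API): the service sends small "set this path to this value" operations, and the client applies them to the card JSON it has already rendered.

```typescript
// Conceptual sketch: apply streamed patch operations to an already-rendered card.

type Json = Record<string, any>;

interface PatchOp {
  path: (string | number)[]; // e.g. ["body", 0, "text"]
  value: unknown;
}

function applyPatch(card: Json, op: PatchOp): void {
  let node: any = card;
  for (const key of op.path.slice(0, -1)) {
    node = node[key];
  }
  node[op.path[op.path.length - 1]] = op.value;
}

// Example: a TextBlock's text grows as tokens stream in from the service.
const card: Json = {
  type: "AdaptiveCard",
  body: [{ type: "TextBlock", text: "" }],
};
applyPatch(card, { path: ["body", 0, "text"], value: "Here is a draft" });
applyPatch(card, { path: ["body", 0, "text"], value: "Here is a draft of your email..." });
```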
💥 Miscellaneous – a set of rapid-fire questions 
What is your favorite area of research outside of generative AI?
Quantum Computing.
Is reasoning and planning the next big thing for LLMs and, consequently, copilots? What other research areas are you excited about?
Reasoning and planning are important for some complex scenarios beyond question answering where multiple steps are involved such as planning a vacation or determining the phases of a project. I’m also excited about ways that we can use smaller and simpler local models securely for simple scenarios.
How far can the LLM scaling laws take us? All the way to AGI?
I’m confident that we will get far with LLMs because we’ve seen them do awesome things already. My personal observation is that we tend to make giant leaps in AI every few years and then the progress is slower and more incremental in the years between the giant leaps. I think at least one more giant leap will be required before AGI is achieved, but I’m confident that LLMs will help us make that giant leap sooner by making us more productive. Language is just one part of intelligence. Models will need to understand the qualia associated with sensory experiences to become truly intelligent.
Describe a programmer’s world with copilots in five years.
Copilots will be integrated more into the development experience, but I hope they don’t eliminate coding completely. Copilots will help us even more with our tasks, and going back to not having a copilot already feels weird and lonely to me. I like coding and feeling like I built something, but I’m happy to let a copilot take over the more tedious tasks or help me discover different techniques. Copilots will help us get more done faster as they get more powerful and increase in context size to understand more of a project instead of just a couple of sections or files. Copilots will also need to become more proactive and less reactive, rather than responding only when prompted. We will have to be careful to build helpful systems that are not pestering.
Who are your favorite mathematicians and computer scientists, and why?
I don’t think I can pick a specific person that I fully admire, but right now, even though we wouldn’t typically call them mathematicians, Amos Tversky and Daniel Kahneman come to mind. People have been talking more about them lately because Daniel Kahneman passed away a few months ago. Having read "The Undoing Project" and "Thinking, Fast and Slow" a few years ago, I think about them, about System 1 vs. System 2 thinking, and about slowing down to apply logic, a deep kind of mathematics.
nlprocby · 7 years
Ekaterina Kruchinina: NLP is on the road
Hi Katja, let me ask about your professional experience: how did you find your interest in computational linguistics and develop it over time?
Hi, I started my university studies in Rostov-on-Don, Russia. My subject was German language and literature, with an emphasis on literature and translation studies. I studied for two years and then continued my studies at the University of Cologne in Germany. I started there from the first year, as the equivalent program in Germany starts two years later than in Russia. Additionally, I took another topic of interest, French, as I was keen to learn it. But after a couple of years, I realized I had a strong interest in linguistics, especially after taking university courses in modern linguistics and formal syntax. Honestly, though, I was not aware of computational linguistics at that point. One day I found that there was a study subject, ‘Linguistic data processing’, at the University of Cologne, and I joined the class after a talk with a professor. After a couple of years I started to work at the department, and of course, it was a good time to learn programming, which I really enjoyed. At that point, I learned much more about computer science. We studied Java as a first language, though many in the field now start with Python. I remember we programmed a search engine over a summer.
It reminds me of a conversation with Natalia Karlova-Burbonus. Natalia has a very similar story: going from an interest in the German language to computational linguistics in Germany.
My next question: do you remember your first project, or your last one from that time?
Yes, my first job was related to exploring self-organizing maps (so-called Kohonen nets). I don’t remember all the project details, but we worked on syntactic dependency structures and tried to represent them in Kohonen maps for the German language. After that, we tried different IE approaches and text classification, and ran other experiments. That was a great time for learning. I also did an internship during my studies. It was in Paris, at a software company called Arisem, so I could also practice my French. It was a B2B company focused on dedicated semantic search, including crawling. Then I came back to finish my master’s thesis.
What was your master’s thesis about?
It was about the numerical representation of text corpora, including how corpora can be represented for classification. I also tried LSA at that time, but the thesis was more of a meta-analysis of different approaches.
Then a Ph.D. happened to you.
Yes, at some point I decided to stay in academia and do a Ph.D. I went to the University of Jena, a big move from Cologne. But it was not only a Ph.D. position but a research assistant position in a European project, BootStrep. The focus was on biomedical text processing: text mining in biology and semantic search over published medical and biological research. There is a huge database, PubMed, which has millions of citations and continues to grow quickly. Obviously, a problem for a biologist is finding relevant information in such an enormous amount of data. So preprocessing of data, named entity recognition (NER), normalization of extracted entities, and relation extraction are of particular interest here. My personal focus was on relation extraction, e.g. how researchers describe gene expression processes.
Did you have a medical ontology for the named entities?
We had a couple of Ph.D. students who helped develop the ontology in our group, of course using terminology from established sources. That reminds me of what else was great about the project group: everybody had a specific skill set, and the tasks were assigned well, according to each person’s focus: somebody worked on NER and fast annotation using active learning, someone else on preprocessing, another person took care of the ontology and the search engines. I focused on the detection of events and relations. It was a great experience to have such a skilled team.
Do you remember the day when you realized that you needed to leave the project?
I continued working on the project during my Ph.D. I started later, and the main result, I would say, was my participation in the BioNLP 2009 shared task, where I took second place in the evaluation. After that, I elaborated on my topic. In 2012 I completed my Ph.D. and started to look for a new challenge. I could have stayed in the biomedical domain, but I was open to other topics as well, as I had studied a lot while reading about different areas, including dependency parsing and collecting data in general. Then I found an open position at Nuance; there were not many in Germany at that moment. So I became one of the first to join the NLU (Natural Language Understanding) team and moved to Aachen, which I also like as it’s close to Cologne.
How many people are in the Nuance NLU team now?
There are about 60 people in Automotive cloud NLU, which includes Aachen, Montreal, and Burlington offices and people working remotely. Company-wide there are more NLU people (100+).
NLU is a challenge by its very name. So, tell us, what do you do and how do you overcome the challenges?
First, our main application area is the automotive domain. Our team works at the moment on the classification of user intents and named entity recognition. So, you have a one- or two-step dialog, a one-shot query, which requires classification of the intent. I’d say that right now it’s for the navigation system and the office system in the car.
Well, actually, from my experience I remember participating around a year ago in a hackathon organized by Nuance around its NLU system. And if I recall correctly, for the NLU system you need to provide not only intents but also labels, concepts to train it, am I right?
Yes, you also need to provide concepts which need to be detected.
It would be nice if you could share an example use case.
Ok, the simple example is a question about the weather: “What will be the weather tomorrow in Trento?” So, we need to recognize the intent: weather, the date: tomorrow, and the location: Trento. Another example you could ask: will it be sunny tomorrow in Trento? So, we do have multiple steps, relying on statistical models and many features, like named entities and lexical information (keywords such as sun, weather, etc.). Both orders are possible: you can do intent classification first and then named entities, or the other way around.
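To make that weather example concrete, here is a toy sketch of the kind of structured result an NLU component produces (an intent plus concepts/slots). The keyword rules are purely illustrative and are in no way how Nuance’s statistical models work.

```typescript
// Toy intent + concept extraction for the weather example (illustration only).

interface NluResult {
  intent: string;
  concepts: Record<string, string>;
}

function parse(utterance: string): NluResult {
  const text = utterance.toLowerCase();
  const intent = /weather|sunny|rain/.test(text) ? "weather_query" : "unknown";

  const concepts: Record<string, string> = {};
  if (text.includes("tomorrow")) concepts.date = "tomorrow";
  const location = text.match(/in ([a-z]+)\s*\??$/);
  if (location) concepts.location = location[1];

  return { intent, concepts };
}

console.log(parse("What will be the weather tomorrow in Trento?"));
// -> { intent: "weather_query", concepts: { date: "tomorrow", location: "trento" } }
```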
As I remember from the mentioned hackathon, you have two interfaces: speech and text.
You are right, but that’s another project; you mean Nuance MIX. In our solution, we provide the ability to type, to use a speech interface, and to use handwriting.
You haven’t told us a lot of internal details yet ;) Ok, what languages do you support?
We support over 20 languages for Automotive cloud NLU. In addition to the major European languages we have Czech, Swedish, and Turkish, and for Asia, Japanese, Cantonese, Mandarin, and others.
That leads me to the question: do you reuse available models or develop everything yourselves?
We develop everything internally. For example, we have developers who graduated from Charles University in Prague working on Czech support.
That’s an interesting story about computational linguistics for Czech: though I wouldn’t call it as widely spoken as other European languages, Charles University has two or three groups developing Universal Dependencies for the language, while some more widely spoken languages have none.
Alright, what do you work on currently?
It’s mostly improving accuracy for automotive-related projects (for features like navigation, weather search, and more), which includes adding data. We are also working on embedding extensions, and in that case the main challenge is proper evaluation, which helps us avoid a degradation in quality. We have worked on a hybrid solution: embedded NLU and cloud NLU. As we have some overlap, we need to split the responsibilities in a clear way. We also need to work on confidence for predictions. We are facing AI challenges as well, I mean complex requests; e.g., a user could ask: find me a good restaurant and a parking spot nearby. So, a combination of intents brings an interesting challenge.
So, let’s come back quickly to language resources: do you plan to release language resources to the language developer community?
I have no insight regarding this from the business.
There is a company that was bought by Microsoft, Maluuba, which developed an evaluation dataset, NewsQA. So, releasing an evaluation dataset could be a good step for Nuance. Thank you for the talk, and I wish you good luck with the challenge of multiple intents.
Thanks, I was happy to share the knowledge and what we do.
Image 1 is published with the agreement of K. Kruchinina
Author: @yauhen_info for @nlprocby
mindblowingscience · 7 years
The notoriously tricky video game Ms. Pac-Man has proved no match for artificial intelligence software, with Microsoft's latest bots able to achieve the maximum high-score of 999,990 - something neither human nor machine has managed before.
Researchers developed a new learning technique to beat the game - using multiple AI bots instead of just one to tackle the different challenges that Ms. Pac-Man throws up.
According to the team from Microsoft-owned startup Maluuba, this approach is particularly suited to Ms. Pac-Man. Not only do gamers have to find their way around a maze, they also need to find bonus items and avoid (or eat) ghosts.
Each aspect of Ms. Pac-Man - avoiding a ghost, eating pellets that make ghosts edible, picking up point-boosting pieces of fruit - was assigned a certain weight as to its importance within the game, and then 163 bots known as "agents" used trial and error to work out the best approach for each element.
One agent might be tasked with finding a fruit, for instance, while another might have the job of avoiding a ghost.
A master agent then used all of the feedback from its subagents to plot the best course through the game. The researchers found that the subagents worked best when they focussed on their own goals, leaving the "senior manager" to see the big picture.
"There's this nice interplay between how [the agents] have to, on the one hand, cooperate based on the preferences of all the agents, but at the same time each agent cares only about one particular problem," says one of the team, Harm Van Seijen.
With so many agents out in the field, as it were, the AI could weigh up the best approach when choosing between avoiding a ghost or heading towards a fruit, or any other decision. It eventually worked out how to pick up maximum points.
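A rough sketch of the aggregation idea described above may help: each sub-agent scores the possible moves for its own objective, and the master agent combines the scores (here a weighted sum) and picks the best move. The weights and values below are invented, and this is a simplification of the published Hybrid Reward Architecture idea rather than Maluuba's actual code.

```typescript
// Simplified sketch of many single-purpose agents plus one master agent.

type Move = "up" | "down" | "left" | "right";
const MOVES: Move[] = ["up", "down", "left", "right"];

interface Agent {
  name: string;
  weight: number;                 // importance assigned to this agent's objective
  score: (move: Move) => number;  // learned value estimate in the real system
}

const agents: Agent[] = [
  { name: "eat-pellet",  weight: 1, score: m => (m === "right" ? 10 : 0) },
  { name: "reach-fruit", weight: 2, score: m => (m === "right" ? 5 : 0) },
  { name: "avoid-ghost", weight: 5, score: m => (m === "right" ? -20 : m === "left" ? 3 : 0) },
];

// The master agent sums the weighted preferences and picks the best move.
function masterDecision(team: Agent[]): Move {
  let best: Move = MOVES[0];
  let bestValue = -Infinity;
  for (const move of MOVES) {
    const value = team.reduce((sum, a) => sum + a.weight * a.score(move), 0);
    if (value > bestValue) {
      bestValue = value;
      best = move;
    }
  }
  return best;
}

console.log(masterDecision(agents)); // "left": the ghost penalty outweighs the fruit
```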
thetechportal · 7 years
Microsoft backed Maluuba's AI has just beaten MS Pac-Man
An artificial intelligence backed by Microsoft has just done the impossible: the AI has managed to beat Ms. Pac-Man. If you have played the game yourself, I don’t need to remind you of how badly you lost. This is, indeed, the first time ever that anyone, human or AI, has managed to best the notoriously difficult game.
Maluuba, which was acquired by Microsoft and has been a subsidiary of the latter since…
techholo · 7 years
Microsoft’s AI beats Ms. Pac-Man
As with so many things in the world, the key to cracking Ms. Pac-Man is teamwork and a bit of positive reinforcement. That… and access to funding from Microsoft and 150-plus artificial intelligence agents — as Maluuba can now attest. Last month, the Canadian deep learning company (a subsidiary of Microsoft as of January) became the first team of AI programmers to beat the 36-year-old…
newscultofficial · 7 years
And the First Person to Achieve a Perfect 'Ms. Pac-Man' Score is... Not a Person?
According to Microsoft, researchers from Maluuba, a Canadian deep learning startup acquired by Microsoft earlier this year, have created an artificial intelligence that achieved the maximum possible score of 999,990 points in Ms. Pac-Man.
“Anything you can do, I can do better,” – Microsoft’s AI to Abdner Ashman, the person who…
sosyalmedyablog · 8 years
New Post has been published on http://girisimciruhu.com/microsoft-yapay-zeka-girisimi-maluubayi-satin-aldi/
Microsoft has acquired AI startup Maluuba
Microsoft has acquired Maluuba, a Canada-based artificial intelligence and deep learning startup. According to the details of the deal announced on Microsoft’s blog, Maluuba will continue to operate within Microsoft.
Founded in 2011 by co-founders Sam Pasupalak and Kaheer Suleman, the company has done significant work in deep learning to date. One of Maluuba’s advisors is Yoshua Bengio, an academic regarded as one of the leading names in artificial intelligence. With the acquisition, Yoshua Bengio is also being appointed as an advisor to Microsoft, while the Maluuba team transfers to Microsoft.
With this acquisition, which is expected to contribute to Cortana and similar products Microsoft develops in the future, Microsoft’s products are expected to improve in human-like reasoning, question answering, and deep learning. It is not known how much Microsoft paid for Maluuba.
Maluuba received its first seed investment, $2 million from Samsung Ventures, in 2012. The company closed a $9 million Series A round in August 2015 and opened an additional office in Montreal.
brwworld-blog · 8 years
Microsoft acquires Canadian AI startup Maluuba to bolster its natural language processing abilities
Microsoft has acquired Canadian AI startup Maluuba for an undisclosed sum. Maluuba’s acquisition will bolster the company’s already strong capabilities in the fields of natural language processing and deep learning.
As Microsoft explains in its blog, “Maluuba’s vision is to advance toward a more general artificial intelligence by creating literate machines that can think, reason and communicate…
waaaiz · 4 years
Artificial Intelligence Market Research Report
This global study conducted by GMI Research provides an overview of the existing market trends, drivers, restrictions, and metrics and offers a viewpoint for different segments. The market report tracks the overall demand growth during the forecast period. A regional study of the market is also carried out in North America, Asia-Pacific, Europe, and the Rest of the World. The report indicates growth parameters in the regional markets, along with leading players that are dominating the industry.
Request Sample of this report:  https://www.gmiresearch.com/report/global-artificial-intelligence-ai-market/sample-request
Competitive Landscape:
The leading organizations covered in the Global Artificial Intelligence Market are:
·        Intel Corporation
·        NVIDIA Corporation
·        Samsung
·        Xilinx
·        Micron Technology, Inc.
·        IBM Corporation
·        Microsoft Corporation
·        Google
·        Amazon Web Services (AWS)
·        Facebook
Drivers of the Global Artificial Intelligence Market:
The global Artificial Intelligence (AI) market is expected to experience an upsurge in demand over the next six years. This is mainly due to the increasing adoption of cloud-based applications and services and the continuous growth of big data at the global level. Advanced AI technologies are improving the performance of economies, businesses, and various sectors. Additionally, the strong demand for intelligent virtual assistants across several industry verticals will further open new opportunities for the growth of the market in the coming years. The continuous integration of image recognition technology with optical character recognition, pattern matching, and face recognition in numerous end-use applications, which include self-driving cars, drones, and robotics, will drive up demand for AI.
The leading players are emphasizing mergers and acquisitions with emerging start-ups to gain a competitive advantage over other players in the market. For example, in 2017, Microsoft Corporation acquired Maluuba with the aim of advancing in the deep learning and AI landscape. Furthermore, the increasing adoption of AI technologies will create several growth opportunities in the market.
The strong demand for AI-powered industrial robots, particularly in developing economies such as China and Japan, will drive market growth. On the other end of the spectrum, the difficulty of data labeling, the limited application of AI in complex models, and the restricted number of AI experts are the major factors that will hinder the growth of the market over the forecast period.
Key Benefits offered by the Global Artificial Intelligence Market report:
·        This study represents the analytical depiction of the Global Artificial Intelligence Market along with the ongoing trends and future estimations to determine the imminent investment pockets.
·        The report provides detailed information related to market drivers, restraints, and growth opportunities, followed by the analysis of the Global Artificial Intelligence Market Share.
Segmentation of the Global Artificial Intelligence Market:
The Global Artificial Intelligence Market has been categorized into the following segments:
Breakup by Offering:
·        Hardware
o   Processors
§  MPU
§  GPU
§  FPGA
§  Others
o   Memory
o   Network
·        Software
o   AI Platforms
§  Application Program Interface (API)
§  Machine Learning Framework
o   AI Solutions
§  On Premises
§  Cloud
·        Services
o   Deployment & Integration
o   Support & Maintenance
Breakup by Technology:
·        Machine Learning
o   Deep Learning
o   Supervised Learning
o   Unsupervised Learning
o   Reinforcement Learning
o   Generative Adversarial Networks (GANS)
o   Others
·        Natural Language Processing (NLP)
·        Context-Aware Computing
·        Computer Vision
Breakup by Type:
·        Narrow/Week AI
·        General/Strong AI
Breakup by End-User:
·        Healthcare
o   Patient Data & Risk Analysis
o   Lifestyle Management & Monitoring
o   Precision Medicine
o   Inpatient Care & Hospital Management
o   Medical Imaging & Diagnostics
o   Drug Discovery
o   Virtual Assistant
o   Wearables
o   Research
·        Manufacturing
o   Material Movement
o   Predictive Maintenance and Machinery Inspection
o   Production Planning
o   Field Services
o   Reclamation
o   Quality Control
·        Automotive
o   Autonomous Driving
o   HMI
o   Semiautonomous Driving
·        Agriculture
o   Precision Farming
o   Livestock Monitoring
o   Drone Analytics
o   Agricultural Robots
o   Others
·        Retail
o   Product Recommendation and Planning
o   Customer Relationship Management
o   Visual Search
o   Virtual Assistant
o   Price Optimization
o   Payment Services Management
o   Supply Chain Management and Demand Planning
o   Others  
·        Security
o   Identity and Access Management
o   Risk and Compliance Management
o   Encryption
o   Data Loss Prevention
o   Unified Threat Management
o   Antivirus/Antimalware, Intrusion Detection/Prevention Systems
o   Others
·        Human Resources
o   Virtual Assistant
o   Applicant Tracking & Assessment
o   Personalized Learning and Development
o   HR Analytics
o   Job Recommendation
o   Sentiment Analysis
·        Marketing
o   Social Media Advertising
o   Search Advertising
o   Dynamic Pricing
o   Virtual Assistant
o   Content Curation
o   Sales & Marketing Automation
o   Analytics Platform
o   Others
·        Law
o   eDiscovery
o   Legal Research
o   Contract Analysis
o   Case Prediction
o   Compliance
o   Others
·        Fintech
o   Virtual Assistant
o   Business Analytics & Reporting
o   Customer Behaviour Analytics
o   Others
·        Construction
o   Project Management
o   Field Management
o   Risk Management
o   Schedule Management
·        Defence
o   Warfare Platform
o   Cyber Security
o   Target Recognition
o   Battlefield Healthcare
o   Simulation & Training
o   Planning & Allocation
o   Threat Monitoring & Situational Awareness
o   Information Processing
o   Other
·        Aerospace
o   Smart Maintenance
o   Flight Operations
o   Training, Surveillance
o   Virtual Assistants
o   Dynamic Pricing
o   Manufacturing
o   Others
·        Supply Chain
o   Fleet Management
o   Supply Chain Planning
o   Warehouse Management
o   Virtual Assistant
o   Risk Management
o   Freight Brokerage
o   Others
·        Building Automation
·        Consumer
·        Food & Beverage
·        Gaming
·        Media and Entertainment
·        Telecommunication
o   Predictive Maintenance & Machine Inspection
o   Customer Analytics
o   Network Security
o   Network Optimization
o   Self-Diagnostics
o   Virtual Assistance
o   Others
·        Oil & Gas
o   Predictive Maintenance & Machine Inspection
o   Material Movement
o   Production Planning
o   Field Services
o   Quality Control
o   Reclamation
Breakup by Region:
·        North America
o   United States of America
o   Canada
 ·        Asia Pacific
o   China
o   Japan
o   India
o   Rest of APAC
·        Europe
o   United Kingdom
o   Germany
o   France
o   Spain
o   Rest of Europe
·        RoW
o   Brazil
o   South Africa
o   Saudi Arabia
o   UAE
o   Rest of the world (remaining countries of the LAMEA region)
About GMI Research
GMI Research is a full-service market research and business-consulting organization that offers global as well as medium and small businesses bespoke industry and market analysis to help them make the toughest business decisions. We have a team of industry experts that provides unbiased insights in reports to best meet the specific needs of our clients. GMI Research’s analysts rely on secondary research to lay down the foundation of every report. We use not only internal but also external databases to cull out relevant information about the industry. Our main aim is to keep our clients abreast of the emerging opportunities and challenges in a wide range of industries. We provide step-by-step assistance to our clients, through strategic and consulting services, to reach managerial and actionable decisions. Featured in the ‘Top 20 Most Promising Market Research Consultants’ list of Silicon India Magazine in 2018, we at GMI Research are always looking forward to helping businesses stay ahead of the curve.
Contact Us
Company Name: GMI RESEARCH
Phone: +353 1 442 8820
Address: Level 1, The Chase Carmanhall Road, Sandyford Industrial Estate
City: Dublin
State: Dublin
Country: Ireland
Website: https://www.gmiresearch.com/
jcmarchi · 8 months
AI Acquisitions: Who’s Leading the Charge and Why?
New Post has been published on https://thedigitalinsider.com/ai-acquisitions-whos-leading-the-charge-and-why/
Artificial Intelligence (AI) has a significant impact on various sectors like healthcare, finance, education, and entertainment. This technology is reshaping business operations, demonstrating its undeniable potential to transform various industries. However, developing AI solutions is not without its challenges. It requires a unique combination of specialized skills, substantial resources, and vast data sets.
In response to these complexities, major tech players have strategically opted for a different approach. Rather than undertaking in-house development, they have chosen to acquire AI startups. This tactical shift not only expedites their entry into the competitive AI landscape but also positions them to exploit the innovative potential present within these specialized entities.
The AI Acquisition Paradigm
From 2010 to 2023, the AI acquisition landscape has witnessed significant evolution. There was a notable rise in acquisitions until 2021, peaking at 231, followed by a decline to 189 in 2023. Potential factors contributing to this drop include the economic disruptions caused by the COVID-19 pandemic, which may have slowed down investment activities. Furthermore, the AI market has matured and become saturated, with major tech companies having already absorbed many promising startups.
Leading this acquisition trend are the tech giants collectively known as FAMGA (Facebook, Apple, Microsoft, Google, and Amazon). They have consistently dominated the acquisition scene, accounting for a large share of acquisitions. In 2023, FAMGA was responsible for 76 out of the 189 acquisitions. Similarly, in 2021, they accounted for 76 out of 231 acquisitions. Among the FAMGA members, Apple leads with 29 acquisitions, followed by Google with 15, Microsoft with 13, Facebook with 12, and Amazon with 7. Their collective spending on AI acquisitions from 2010 to 2023 amounted to a substantial $19.7 billion.
The FAMGA members pursue distinct strategies when it comes to AI acquisitions. Apple prioritizes computer vision, natural language processing, voice recognition, and healthcare to enhance its products. Google focuses on expanding AI in search, advertising, cloud, healthcare, and education, with a particular emphasis on deep learning.
Likewise, Microsoft strengthens its cloud and enterprise software through acquisitions in natural language processing, computer vision, and cybersecurity. Facebook aims to improve social media through computer vision, natural language processing, and virtual reality. Similarly, Amazon diversifies in e-commerce, cloud, healthcare, and entertainment, with an emphasis on natural language processing, computer vision, and robotics.
Despite their unique objectives, FAMGA members share common interests in technologies such as natural language processing and computer vision, which drive their AI acquisition strategies.
The Advantages of Acquiring AI Startups
Acquiring AI startups offers significant benefits to tech giants. It allows them to adopt advanced technology and gain access to valuable talent, which in turn opens doors to new markets. For instance, Apple’s acquisition of Siri in 2010 enabled the integration of a voice assistant into the iPhone 4S.
Similarly, Google’s acquisition of DeepMind in 2014 improved services like search and recommendations. Microsoft’s 2021 acquisition of Nuance enhanced its cloud and enterprise software through improved speech recognition.
In addition to tech benefits, these acquisitions also provide access to talent that enhances AI capabilities. Microsoft, for example, hired the co-founders of Maluuba, while Facebook brought in the co-founder of Wit.ai for natural language and speech expertise.
Moreover, these acquisitions facilitate expansion into new markets and product lines. Intel’s acquisition of Nervana in 2016 strengthened its position in AI chip development, and Salesforce’s acquisition of MetaMind in 2016 resulted in the creation of the AI platform Einstein.
The Challenges of Acquiring AI Startups
Acquiring AI startups also poses challenges for acquirers. These challenges include issues related to data privacy, ethics, legal disputes, regulatory hurdles, and risk aversion. For example, Facebook’s acquisition of WhatsApp in 2014 raised concerns about data usage, resulting in a significant fine from the European Commission. To address ethical concerns, Google established an ethics board after acquiring DeepMind in 2014 to oversee sensitive research.
Moreover, some acquisitions have led to legal disputes and financial consequences. Uber’s acquisition of Otto in 2016, for instance, resulted in a lawsuit by Waymo. Additionally, regulatory approvals may be required, as seen in IBM’s acquisition of Promontory Financial Group in 2016, where regulatory clearance was needed to leverage expertise in training AI.
Acquiring AI startups can also face skepticism and implementation challenges. Amazon’s acquisition of Kiva Systems in 2012, for example, encountered resistance and a prolonged implementation process for warehouse robots.
The Impact of AI Startups Acquisition
The acquisition of AI startups by big tech companies has a significant influence on the startups themselves. The outcomes of these acquisitions vary based on factors such as the preservation or loss of autonomy, culture, and innovation within the acquired company. For instance, DeepMind, after being acquired by Google, has maintained its autonomy and continued to promote innovation, exemplifying a successful integration that values creativity.
On the other hand, Siri lost its autonomy and became Apple’s voice assistant. Likewise, cultural clashes, such as the case of WhatsApp with Facebook, have led to the departure of key personnel. However, some acquisitions have managed to preserve cultural alignment. Instagram, for example, remained culturally aligned with Facebook after its acquisition in 2012, and its co-founders continued to be involved until 2018.
In terms of product innovation, the outcomes of these acquisitions can vary. Some startups, like Zoox, which was acquired by Amazon in 2020, have flourished with increased resources, leading to the launch of a self-driving taxi service in 2021.
However, there are also instances where acquisitions have faced setbacks. Uber’s acquisition of Otto in 2016, for example, experienced challenges and ultimately discontinued its self-driving truck project in 2018 due to legal disputes with Waymo. These examples demonstrate the diverse outcomes and impacts of acquiring AI startups, including both successes and challenges for the involved companies.
Looking at the broader impact on innovation, competition, and regulation, these acquisitions shape the AI discipline. The influence on innovation is dependent upon the preservation of autonomy and culture. For example, Google’s acquisition of DeepMind in 2014 promoted innovation by maintaining cutting-edge research. In contrast, Uber’s acquisition of Otto in 2016 resulted in operational shutdowns and legal disputes, hindering innovation in autonomous vehicles.
The Future Outlook and Implications of AI Acquisitions
Looking ahead, the future of AI acquisitions holds significant promise. The AI market is projected to reach $733.7 billion by 2027, driven by a compound annual growth rate of 42.2%. This growth is fueled by factors such as the increasing adoption of cloud-based services, rising demand for intelligent solutions, and advancements in AI technologies and research. With over 40 AI segments, including computer vision, natural language processing, robotics, and healthcare, the landscape is constantly expanding through new startups and innovative applications.
In addition, global inclusivity is gaining prominence, with AI startups from various regions contributing to the market. The top 10 countries with the most AI startups in 2020 were the United States, China, India, the United Kingdom, Israel, Canada, Germany, France, Japan, and South Korea, collectively representing 77% of the total number of AI startups and 88% of the total funding raised. It is worth noting that startups from countries like Brazil, Nigeria, Singapore, and Australia are also making noteworthy contributions.
The Bottom Line
The AI acquisition landscape, led by major tech players like FAMGA, has experienced a surge in the last few years. Despite challenges, there are significant benefits for tech giants, including accelerated entry, talent acquisition, and market exploration. The future of the AI market appears promising due to global inclusivity, diverse segments and projected substantial growth. The success of AI startups is influenced by the complex dynamics of data, talent, capital, innovation, and competition, while acquisitions deeply impact innovation, competition, and regulation.
ads-group-blog · 5 years
The term "intelligence," as science defines it, is the ability of an entity to solve certain tasks. It also includes other areas of perceiving the surrounding world: memory, perception, sensations (tactile, visual, auditory), logical and abstract thinking, imagination, and intuition. The human psyche possesses all of these qualities. The mind is able to adapt to new situations, draw conclusions from acquired experience, and apply them in practice.
Humanity has long tried to "endow" machines with these qualities so that they could perform their functions as effectively as possible and so that their problem-solving methods would more closely resemble the way humans think. In the early 1980s, Barr and Feigenbaum, scientists working on the theory of computation, gave the following explanation of the nature of AI:
"It is a branch of computer science concerned with the development of intelligent computer programs and systems. Its product should be machines possessing capabilities traditionally attributed to the human mind: understanding language, the ability to keep learning, and the ability to draw conclusions and solve complex problems."
In essence, artificial intelligence is a set of technologies and software processes capable of rapid progress and development:
processing text in language that is natural and understandable to humans;
virtual recommendation systems;
chatbots;
training computers.
A bit of history
The development of artificial intelligence as a new scientific field began in the middle of the 20th century. At that time, philosophical theories about human nature and the mechanisms of understanding the world were popular. Psychology was examining advanced methods of influencing the human mind, and the practice of hypnosis and its use in psychotherapy was studied in more detail. Mathematicians were actively working on the theory of computation (algorithms). The first computer models were released.
The computational capabilities of the new machines were far greater than those of humans, and no mathematical genius could match their speed. So scientists asked themselves: how far can the capabilities of machines go, and can they reach the level of a human? In 1950, Alan Turing published a work describing a procedure for determining the moment when a computer can be compared to a human. It became known as the Turing Test.
There is no single approach to understanding the nature of AI. Every researcher and author of scientific works starts from different definitions and theories. In practice, several approaches have been used to study the concept:
Symbolic. This was the very first approach to understanding AI theory in the era of digital technology. After Lisp (the programming language created by John McCarthy), scientists had hope of applying this knowledge to the creation of a full-fledged AI.
Logical. This approach builds on the foundations of logic programming.
Hybrid. It is based on the idea that combining several neural concepts with symbolic models yields better results than applying them separately.
How is an AI system created?
Creating artificial intelligence means imitating the model of human thinking. One needs to build a system that, in response to input data, can produce output very similar to the results of human mental activity. Modern voice assistants on phones, such as Siri, are an example: you only need to say "Hi" and the program launches, able not only to carry out your commands but also to hold a conversation.
The development of artificial intelligence begins with creating functions that correctly perceive incoming information. Then systems are built that help the program learn. Finally, a dedicated data store is designed to hold all the information acquired during learning.
At the end, the developers create functions for imagination and logical reasoning. This allows the program to model situations using the data it already has and to add new information, processing it and joining it together. The resulting "knowledge" is then also sent to the data store.
Learning comes in two kinds:
inductive;
deductive.
In the first case, the system receives pairs of data: questions and logical answers to them. The program finds the connections between them on its own and produces correct results. The deductive method uses expert knowledge that is entered into a knowledge base. The information contains not only sets of ready-made data but also the basic rules used when searching for the solution to a task or the answers to questions.
The AI systems used in modern gadgets and computers use both approaches at once. They are already trained at the initial stages and continue the process in the course of their work. Developers do this so that the program can demonstrate strong results from the very beginning and only keeps improving. It is able to take into account the user's preferences and wishes.
AI can even be programmed for unpredictable situations, to make it more similar to the human mind.
How does artificial intelligence affect humanity?
Stephen Hawking said more than once that the use of artificial intelligence could lead to a global catastrophe and the destruction of humanity as a whole. In 2017, the scientist expressed the fear that intelligent machines could well replace humans and completely enslave their creators.
"Scientists are capable of creating an artificial intelligence so powerful that it will not only be trainable but will also set goals for itself. And if its ultimate goals do not coincide with human ones, our world will face major problems."
Even now, learning machines and modern robots are able to replace people in manufacturing. Let us consider the main reasons why machines prove more effective than humans:
a machine forgets nothing and can retrieve any previously acquired information from its database;
the ability to think through its actions several steps ahead and analyze the results;
the absence of emotions, which can be a source of mistakes;
practically unlimited resources.
According to analysts at Gartner, by 2020 artificial intelligence technologies will be present in virtually every software product or user-facing service. In addition, more than a third of large companies will invest in the development of AI.
Artificial intelligence today: where is it used?
Modern artificial intelligence covers many areas, industrial and otherwise. It helps solve global tasks and minimizes the human factor, which can be the cause of most errors at an enterprise, during military operations, and so on.
Here are some examples of genuinely interesting developments which, importantly, pose no threat to the future of humanity. For example, in August 2017, Google together with researchers from MIT (the Massachusetts Institute of Technology) presented an algorithm capable of retouching photos in real time. The technology makes it possible to process images faster and with higher quality, so that the result looks like a photo worked on by a professional retoucher.
During testing of the neural network, several images created with Adobe and MIT software were selected. Each of them was retouched by different photographers, and the results were then compared with one another.
The developers tested the new product on a smartphone (the model was not specified), and the algorithm was able to display a fully processed photograph at 1920×1080. According to the Google scientists, the program can be "trained" on a set of images in order to then imitate the style of particular retouchers.
Owners of the iPhone X have already been able to appreciate an application of artificial intelligence. The developers "placed" a neural network (one variety of AI) in the smartphone for face recognition. The system can analyze around 40,000 facial points at once and provide reliable identification of its owner. The whole process takes literally a second. Even if you are wearing a hat or glasses, the iPhone can "recognize" you. The same applies to a beard, since men, too, tend to change their style.
The training of artificial intelligence knows no bounds. A machine can even be the author of musical works. Last summer, the album "I AM AI" by singer Taryn Southern was presented, with the AI Amper as the performer and author of the music. The program was created jointly by a tandem of session musicians and programmers. The main goal of the project is to help people promote their musical works. The program created almost all of the chord structures on its own, and the musicians only tweaked them to make them sound more "human."
In 2017, the joint efforts of Microsoft Research and Maluuba taught an AI not just to play Ms. Pac-Man, but to win. The program even broke the world record. The results were achieved with the help of 150 agent programs. They solved the game's tasks, that is, they avoided ghosts, looked for the right moves, and collected rewards. The Atari 2600 version was used for the test, as it is less predictable, and the machine still demonstrated strong results.
2017 was marked by another event in the world of gamers. A special bot was created that was able to compete and win at poker. The technology was named Libratus, and it won $1.7 million in a 20-day tournament. The scientists have not revealed the secrets of how the program's algorithm works, but they promised to publish a journal article on the subject in the near future and answer the most common questions.
AI can also write texts and compose poems, as the Dewey project demonstrates. Books from a popular library were first loaded into its database, and scientific articles from Google Scholar were added to its memory as well. Then the basic criteria for writing a novel's plot were set, and the program completed the task! The plot was about lovers who had to part. The content was slightly unusual: for example, the characters had unnatural names, and there is a section whose contents resemble a line of code. But the developers believe this is only the very beginning!
And that is not the limit, for artificial intelligence has also been tested on its "understanding" of painting. In 2015, a team of specialists at Google "asked" a program to paint a picture on its own. As it turned out, the computer interprets the surrounding world rather strangely. For example, the request was to "draw dumbbells." In the resulting picture, the metal was attached to human hands. Most likely, this happened because the training images used for analysis contained photographs of hands holding sports equipment, and the machine's neural connections did not "understand" this quite correctly.
Summing up
So, for now the score between AI and humans stands at 1:1. A machine far surpasses the human mind at solving tasks that involve large amounts of data. A computer can also memorize enormous volumes of information and recall it when needed. However, a machine's creative abilities are practically nil.
Many scientists share the view of Stephen Hawking, who urged us not to endow soulless machines with human qualities. That is why it is very important not to cross the line, and to use the capabilities of robots for the good of humanity.
Source: https://adsgroup.com.ua/iskusstvennyj-intellekt
Artificial Intelligence Market Size To Reach $ 35,870 Million By 2025
The global artificial intelligence market is expected to reach USD 35,870.0 million by 2025 from its direct revenue sources, growing at a CAGR of 57.2% from 2017 to 2025, whereas it is expected to garner around USD 58,975.4 million by 2025 from its enabled revenue arenas, according to a new report by Grand View Research, Inc.
Artificial Intelligence (AI) is considered to be the next stupendous technological development, like past developments such as the industrial revolution, the computer era, and the emergence of smartphone technology. The North American region is expected to dominate the industry due to the availability of high government funding, the presence of leading players, and a strong technical base. Advances in image and voice recognition are driving the growth of the artificial intelligence market, as improved image recognition technology is critical to offering enhanced drones, self-driving cars, and robotics. The AI market can be categorized based on solutions, technologies, end use, and geography.
The two major factors enabling market growth are emerging AI technologies and the growing adoption of big data. The growing prominence of artificial intelligence is enabling new players to enter the market with niche, application-specific solutions. Companies are also pursuing industry consolidation in a bid to gain competitive advantages. For instance, in January 2017, Microsoft Corporation acquired Maluuba to advance its position in the deep learning and Artificial Intelligence (AI) landscape. Existing solution providers are expanding their product portfolios to include both hardware and software solutions.
The Asia Pacific regional market is expected to be the fastest-growing market, owing to the rapid improvements in information storage capacity, high computing power, and parallel processing, which have contributed to the swift uptake of artificial intelligence technology in end-use industries such as automotive and healthcare.
AI has the potential to reduce the burden on clinicians and provide the workforce with an efficient tool for accomplishing tasks more effectively. For instance, AI voice-enabled symptom checkers can better assess a situation and direct patients to the emergency department when immediate treatment is required. AI is expected to address around 20% of unmet healthcare demand.
To request a sample copy or view a summary of this report, click the link below: http://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market
Further key findings from the report suggest:
Growth in the volume of data being generated by different end-use industries is expected to accelerate technology adoption.
The increasing adoption of image and pattern recognition in the Asia-Pacific region is expected to provide new growth opportunities over the forecast period.
North America is one of the early adopters of advanced technologies, which enables organizations in the region to adopt artificial intelligence capabilities at a faster pace.
The increasing amount of digital data (speech, video, and images from sources such as social media, IoT devices, and consumer analytics) is driving the need for data mining and analytics. AI is extensively used by media and online services such as Google Assistant, the dialogue and voice recognition of Siri, Microsoft Cortana, image classification in Facebook and Google Photos, and Amazon's Alexa.
Key industry participants include Atomwise, Inc., Lifegraph, Sense.ly, Inc., Zebra Medical Vision, Inc., Baidu, Inc., H2O.ai, IBM Watson Health, NVIDIA, Enlitic, Inc., Google, Inc., Intel Corporation, and Microsoft Corporation.
See More Reports of This Category: https://www.grandviewresearch.com/industry/emerging-and-next-generation-technologies
About Grand View Research:
Grand View Research, Inc. is a U.S. based market research and consulting company, registered in the State of California and headquartered in San Francisco. The company provides syndicated research reports, customized research reports, and consulting services. To help clients make informed business decisions, we offer market intelligence studies ensuring relevant and fact-based research across a range of industries, from technology to chemicals, materials and healthcare.
kikuzu · 7 years
Quote
On June 14 (local time), Microsoft Research, Microsoft's research division, announced that an artificial intelligence based on reinforcement learning had achieved the maximum score of 999,990 in the 1980s game Ms. Pac-Man on the Atari 2600 home console.
That this score, reached on board 201, was indeed the maximum was confirmed when the counter rolled back over to zero.
Ms. Pac-Man was developed in 1981 as a clone of the hit game Pac-Man and later became an officially licensed Namco arcade title. It is harder than the original: the ghosts move randomly and the fruit also wanders through the maze.
The game was cleared by the team from Maluuba, the Canadian deep learning company Microsoft acquired in January.
The AI uses what the company calls a Hybrid Reward Architecture: more than 150 single-purpose agents plus a top agent that makes the overall decision based on their input. The subordinate agents work in parallel, each reporting the decision that is best for its own objective, and the top agent weighs those reports to choose an action. For example, even if many agents argue "the fruit is to the right, so go right," two agents arguing "there is a ghost to the right, so go left" can lead the top agent to choose left.
The Hybrid Reward Architecture relies on reinforcement learning, the same family of techniques behind DQN from Alphabet's DeepMind. Unlike supervised learning, reinforcement learning is not given the correct answers; it learns by having the answers it produces evaluated, which makes it useful for finding the best course of action in problems with no single right answer.
Microsoft says reinforcement-learning agents could be used to support decision-making in sophisticated, complex knowledge work.
Microsoft's AI fully clears Ms. Pac-Man - ITmedia NEWS
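To make the aggregation scheme described above concrete, here is a minimal, hypothetical sketch in Python. It is not Maluuba's implementation: real Hybrid Reward Architecture agents learn values with reinforcement learning, whereas the sub-agents below use hand-written heuristics purely to show how a top agent can combine many single-purpose preferences into one decision.

```python
from typing import Callable, Dict, List

Action = str
ACTIONS: List[Action] = ["up", "down", "left", "right"]

# Each sub-agent cares about a single objective (one pellet, one fruit, one
# ghost) and returns a per-action preference score for the current state.
SubAgent = Callable[[dict], Dict[Action, float]]

def fruit_agent(state: dict) -> Dict[Action, float]:
    # Mild preference for moving toward the fruit.
    return {a: 1.0 if a == state["fruit_direction"] else 0.0 for a in ACTIONS}

def ghost_agent(state: dict) -> Dict[Action, float]:
    # Strong penalty for moving toward a nearby ghost.
    return {a: -10.0 if a == state["ghost_direction"] else 0.0 for a in ACTIONS}

def top_agent(state: dict, sub_agents: List[SubAgent]) -> Action:
    # Aggregate: sum every sub-agent's scores and pick the best action.
    totals = {a: 0.0 for a in ACTIONS}
    for agent in sub_agents:
        for action, score in agent(state).items():
            totals[action] += score
    return max(totals, key=totals.get)

# Three agents vote "right" for the fruit, but one ghost agent vetoes it.
state = {"fruit_direction": "right", "ghost_direction": "right"}
print(top_agent(state, [fruit_agent, fruit_agent, fruit_agent, ghost_agent]))
# Prints a direction other than "right": the ghost penalty outweighs the votes.
```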
techholo · 7 years
Photo
New Post has been published on https://www.techholo.com/2017/06/16/microsofts-ai-beats-ms-pac-man/
Microsoft’s AI beats Ms. Pac-Man
As with so many things in the world, the key to cracking Ms. Pac-Man is teamwork and a bit of positive reinforcement. That… and access to funding from Microsoft and 150-plus artificial intelligence agents — as Maluuba can now attest.
Last month, the Canadian deep learning company (a subsidiary of Microsoft as of January) became the first team of AI programmers to beat the 36-year-old classic.
It was a fairly anticlimactic defeat. The score hit 999,990 before the odometer flipped back over to zero. But it was an impressive victory nonetheless, marking the first time anyone — human or machine — has achieved the feat. It’s been a white whale for the AI community for a while now.
Google’s DeepMind was able to beat nearly 50 Atari games back in 2015, but the complexity of Ms. Pac-Man, with its many boards and moving parts, has made the classic title an especially difficult target. Maluuba describes its approach as “divide and conquer,” taking on the Atari 2600…
Continue reading on TechCrunch… Source: TechCrunch
Disclaimer: All photos and content are the property of TechCrunch.
#Artificial_Intelligence, #Maluuba, #Microsoft
nonviable-hostage · 7 years
Link
via natural language processing - Google News
un-enfant-immature · 6 years
Text
Microsoft acquires Lobe, a drag-and-drop AI tool
Microsoft today announced that it has acquired Lobe, a startup that lets you build machine learning models with the help of a simple drag-and-drop interface. Microsoft plans to use Lobe, which only launched into beta earlier this year, to build on its own efforts to make building AI models easier, though for the time being, Lobe will operate as before.
“As part of Microsoft, Lobe will be able to leverage world-class AI research, global infrastructure, and decades of experience building developer tools,” the team writes. “We plan to continue developing Lobe as a standalone service, supporting open source standards and multiple platforms.”
Lobe was co-founded by Mike Matas, who previously worked on the iPhone and iPad, as well as Facebook’s Paper and Instant Articles products. The other co-founders are Adam Menges and Markus Beissinger.
In addition to Lobe, Microsoft also recently bought Bonsai.ai, a deep reinforcement learning platform, and Semantic Machines, a conversational AI platform. Last year, it acquired Disrupt Battlefield participant Maluuba. It’s no secret that machine learning talent is hard to come by, so it’s no surprise that all of the major tech firms are acquiring as much talent and technology as they can.
“In many ways though, we’re only just beginning to tap into the full potential AI can provide,” Microsoft’s EVP and CTO Kevin Scott writes in today’s announcement. “This in large part is because AI development and building deep learning models are slow and complex processes even for experienced data scientists and developers. To date, many people have been at a disadvantage when it comes to accessing AI, and we’re committed to changing that.”
It’s worth noting that Lobe’s approach complements Microsoft’s existing Azure ML Studio platform, which also offers a drag-and-drop interface for building machine learning models, though with a more utilitarian design than the slick interface that the Lobe team built. Both Lobe and Azure ML Studio aim to make machine learning easy to use for anybody, without having to know the ins and outs of TensorFlow, Keras or PyTorch. Those approaches always come with some limitations, but just like low-code tools, they do serve a purpose and work well enough for many use cases.
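To make concrete what these drag-and-drop tools hide, here is a minimal, hypothetical Keras image classifier of the kind a user would otherwise write by hand; the architecture and the data directory are illustrative assumptions, not what Lobe or Azure ML Studio actually generate.

```python
import tensorflow as tf

# Load labeled images from folders (one sub-folder per class). The path is a
# placeholder for whatever dataset the user would drag into a tool like Lobe.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)
num_classes = len(train_ds.class_names)

# A small convolutional classifier -- roughly the kind of model such tools
# assemble behind the scenes.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```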