#article on military robotics and autonomous systems
From €142 million to €1 billion ($1.1 billion) a year: the European Commission is pressing the accelerator on investment in weapons and defense technologies. From a total of €590 million invested between 2017 and 2020, Brussels has moved to a €7.3 billion ($7.9 billion) package for the 2021 to 2027 period. This year alone, the European Defense Fund (EDF) has put €1.1 billion on the table, divided into 34 calls covering as many military-related research topics: new drone models, sensors to increase radar capabilities, systems to counter hypersonic missile attacks, enhanced analysis of satellite imagery, "smart weapons," and advanced communication technologies. The bidding process opened in late June; applicants have until November 5 to bid for a slice of the pie, and then a year to deliver their projects.
The project for a common defense has distant origins and was formalized in 2015, but it was Russia's invasion of Ukraine that accelerated the European Commission's march to spend on arms, ammunition, and military technology. One only has to scroll through the list of projects vying for 2024 funding to get an idea of what Brussels is looking for. On the table is €100 million to develop a new long-range, medium-altitude drone, piloted remotely and equipped with advanced intelligence, surveillance, target acquisition, and reconnaissance (ISTAR) systems. The European Union has already invested in a similar project, allocating €98 million of the total €290 million needed to develop a comparable aircraft, dubbed Eurodrone, to a consortium consisting of France's Airbus and Dassault Aviation plus Italy's Leonardo. Another €11 million from the EDF goes to the prototype of a small, autonomously guided aerial drone.
Telecommunications and AI
Much of the money goes to strengthening communication and data-exchange channels, in order to prevent, for example, someone from seizing control of a remotely piloted drone. The EDF allocates €25 million to a 5G network intended for military use, the same amount to prototypes for satellite communications, and €24 million to develop dedicated systems for undersea drones. The information needed to feed algorithms and automatic analysis tools will have to travel through these secure channels. One grant awards €45 million for an AI software prototype intended to let autonomous systems and human-staffed operations centers talk to each other.
According to an article by Anthony King, professor at the University of Exeter, published in the Journal of Global Security Studies, so far in the military, “AI has not been used primarily to produce robotic or autonomous weapon systems. Over the past two decades, the military has sought to leverage big data to generate a richer and deeper understanding of the battlefield by tracking the footprints left in cyberspace by their adversaries. Because there is such a vast amount of digital data in cyberspace, the armed forces have begun to leverage the potential of AI, algorithms, and machine learning to identify patterns and signatures, thereby improving their awareness and so that crucial pieces of information are not missed.”
It's a pattern also pursued by European investments. Already last year, the EDF supported with €4 million a communication model to command swarms of autonomous vehicles, and as much went to strengthening undersea cables, the backbone of the internet and a military target. To make sure the data collected from space “speaks,” and provides a real-time and accurate representation of potential risks, there is a €157 million project, run by Leonardo, Airbus, and ArianeGroup (an aerospace company), to integrate information on a single platform, following in the footsteps of two previous projects. But if we add up all the intelligence programs through sensors, satellites, and other digital sources, the 2023 plan alone has deployed another €70 million on the subject. With another €6 million, the EU also tries to guard against communications blackouts, supporting an Estonian-driven plan for drone navigation technology that works even without satellite signals, relying on real-time analysis of what the machine sees.
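The Estonian project's goal, navigation that survives a satellite-signal blackout, typically rests on dead reckoning: integrating velocity estimates (from visual odometry or inertial sensors) forward from the last good fix. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not the funded project's actual software, and the names are invented for this example:

```python
from dataclasses import dataclass

@dataclass
class State:
    x: float  # meters east of a reference origin
    y: float  # meters north of a reference origin

def dead_reckon(last_fix: State, velocities, dt: float) -> State:
    """Integrate a sequence of (vx, vy) velocity estimates, each held for
    dt seconds, forward from the last good satellite fix."""
    x, y = last_fix.x, last_fix.y
    for vx, vy in velocities:
        x += vx * dt
        y += vy * dt
    return State(x, y)
```

In practice, the error of such an estimate grows with time since the last fix, which is why real systems fuse it with landmark recognition from the camera feed.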
New Weapons
The European Defense Fund, however, is also hunting for prototypes of new weapons. There is €25 million for the next generation of armored vehicles, €30 million for the creation of smart and increasingly accurate weapons, and €20 million earmarked for identifying at least four potential solutions for navigating a drone in “non-permissive” environments, which, translated from diplomatic jargon, means areas of war or those characterized by great instability.
Another €50 million concerns the creation of a new ground drone, equipped with “lethal functions.” What kind? This is best explained in an annex to the Commission's green light for EDF 2024. It says the program is to study a “fully autonomous process of targeting against different targets and solutions for mobility and engagement,” but also to produce an analysis of the “ethical and legal aspects of integrating autonomous combat drones into European armed forces.” With a clarification: “If necessary, research should be included to support recommendations and decisions” on these aspects. As in: Give us material to plead the case.
In the case of smart weapons, on the other hand, the EU calls for greater accuracy of missiles and rockets, but also refers to “loitering munitions,” i.e., suicide drones, which circle a defined area until they locate the target and hit it, bringing it down—a controversial military technology. The EU is also interested in copying the Iron Dome model, Israel's missile shield.
Tanks and Corvettes of the Future
Shortly before opening the new calls for proposals, the Commission also announced the 54 winning projects for the 2023 program. These include Marte, or the Main ARmored Tank of Europe, a program to develop new technologies to be integrated on a tank. Sharing the €20 million in funding is a string of some 40 companies, including the two defense champions from Italy and Germany, Leonardo and Rheinmetall, respectively. Just as much has been received by a similar project, again to upgrade the tank's architecture, which France's Thales is leading instead. From Brussels, €154 million will help fund the approximately €288 million needed to develop the new EU patrol corvette (Epc2), with Italy's Fincantieri among the project leaders. Another €25 million is earmarked for the construction of a prototype self-driving boat, 12 meters long, that rides on hydrofoils (i.e., with the hull out of the water).
Leonardo is spearheading a project to develop systems to counter hostile military drones, exploiting sensors, the jamming of telecommunications networks, and other technologies. France's Cilas, meanwhile, is spearheading a program to develop Europe's first laser weapon, backed by €25 million. A prototype electromagnetically propelled missile launcher has received €4 million; €26 million goes to an artificial intelligence agent tasked with autonomously managing protection and counterattack in response to cyber aggression; and €80 million funds a study on defense against hypersonic weapons. Another €27 million will support the creation of a new missile system with a range of 150 kilometers, €40 million is going to a military cargo ship, and €44 million is allocated for offensive technologies on undersea drones.
Funds and Alliances
But the channels for fueling Europe's military industry are varied. Alongside the EDF is Eudis, a scheme worth €2 billion over the seven-year period that supports the acceleration of startups and small and medium-size enterprises (target: 400 per year). There's also the European Investment Fund (EIF), managed by the European Investment Bank (EIB), which helps fund the defense sphere, particularly when it comes to dual-use (civilian and military) technologies. Its aim is to act as an anchor investor, attracting other players willing to share the risk, though it has only €175 million to spend through 2027. The European Security Industry Bank can mobilize another €8 billion, also over the next three years.
Seven deals have already been signed. These include €10 million to Germany's Quantum Systems for vertical-takeoff drones, €30 million to Spain's Skydweller for its solar-powered self-driving aircraft, and €600 million on two space communications programs. Italy's Leonardo also benefited from EIB loans, which provided €260 million for research and development activities in various technological fields.
In recent days, the EIF signed an agreement with the NATO Innovation Fund (NIF), the first multinational sovereign venture capital fund, backed by 24 of the 32 countries that are part of the Atlantic Alliance. NIF has a billion euros in the till to provide "friendly" funds for innovative companies in frontier technologies such as artificial intelligence, space, robotics, new materials, and biotechnology. The two funds have decided to team up to increase their investment firepower and accelerate the results of their business strategies. NATO has started placing its bets: It has funded four startups, in space, materials, semiconductors, and robotics. Among the beneficiaries is Arx Robotics, based in Oberding, Bavaria. The startup makes autonomously guided defense vehicles that can be used to move up to 500 pounds, conduct surveillance, or act as targets. Its devices are already in use by the armies of Germany, Austria, Hungary, and Switzerland and have also been deployed on the Ukrainian front.
In turn, NATO is scouting startups through Diana, its accelerator program. Last year, it funded 44 of them, in the energy, telecommunications, and surveillance sectors, with a check for €100,000 and six months of incubation in its centers scattered across Europe. It recently launched five new calls for proposals. Companies have until August 9 to submit ideas not only in the three fields already covered in 2023, but also in health, logistics, and critical infrastructure. Special attention will be given to ideas that intersect these areas of interest with applications in space, resilience, and sustainability.
A Growing Industry
The defense industry is experiencing particular growth in Europe, driven by the arms race following the invasion of Ukraine. According to the investment bank Goldman Sachs, defense stocks listed on the continent's stock exchanges have increased in value by an average of 45 percent. The Euro Stoxx Aerospace & Defense Index, an index of the German stock exchange that brings together major military-related stocks (such as Airbus, Rheinmetall, Leonardo, and BAE Systems), has soared 194 percent since February 2022. The European Defence Agency calculates that in 2022, military spending of the EU's 27 countries averaged 1.5 percent of gross domestic product, totaling €240 billion.
And the EDF paves the way for new technologies to be bought. As the policy document states, the fund will have to ensure that by 2027 the EU has ready prototypes of combat drones, locally developed command-and-control programs, interoperable radio systems, and integrations between air defenses and the swarm of Earth-observing satellites. Cloud platforms to store and process collected information, new early-warning systems for missile attacks, and new naval and ground combat assets are also in the works. It is a sprawling research program, divided among hundreds of companies (1,200 were involved at multiple levels in the 157 projects funded between 2021 and 2023), which will now have to pass the scrutiny of the incoming Commission, one even more inclined to open the purse when it comes to spending on weapons. And it is not just a matter of preparing for war. For a European Union obsessed with migration, drones, surveillance systems, and control technologies can also be allies in strengthening border closures.
Experts alarmed over AI in military as Gaza turns into “testing ground” for US-made war robots
Research identifies numerous risks as defense contractors develop new “killer robots”
“U.S. drone strikes in the so-called war on terror have killed, at minimum, hundreds of civilians – a problem due to bad intelligence and circumstance, not drone misfiring,” the Public Citizen report highlighted, adding that the introduction of autonomous systems will likely worsen the problem.

Promoters of AI in warfare will say that their technologies will “enhance alignment with ethical norms and international legal standards,” Moses said. But this points to a problem with the ethics and laws of war in general: they have become a “touchstone for the legitimation of warfare,” or “war humanizing,” as some would describe it, rather than a means of preventing war.

Weapons like armed drones can “spread the scope of conflict far beyond traditional battlefields,” Wolfendale pointed out. When there is no “definitive concrete cost” to engaging in conflicts, because militaries can fight in a way that is “risk-free” for their own forces while the technology extends the reach of military force, it becomes hard to see when conflicts will end, she explained.

Similar dynamics are playing out in Gaza, where the IDF has been experimenting with robots and remote-controlled dogs, Haaretz reported. As the article points out, Gaza has become a “testing ground” for military robots, where unmanned remote-controlled D9 bulldozers are also in use. Israel is also using an Israeli AI intelligence-processing system, called The Gospel, “which has significantly accelerated a lethal production line of targets that officials have compared to a ‘factory,’” The Guardian reported. Israeli sources say the system produces “targets at a fast pace” compared with what the Israeli military was previously able to identify, enabling a far broader use of force.
As the U.S. Department of Defense and military contractors are focusing on implementing artificial intelligence into their technologies, the single greatest concern lies in the incorporation of AI into weapon systems, enabling them to operate autonomously and administer lethal force devoid of human intervention, a Public Citizen report warned last week.
Some facial recognition technology achieves an accuracy rate of over 99 percent in recognizing white male faces. But when it comes to recognizing faces of color, especially the faces of Black women, the technology manifests its highest error rate, about 35 percent.
. . .
Christian cites cases in the U.S. when Black men were misidentified by facial recognition software, arrested and detained. The headline of a May 2023 article in Scientific American minces no words: "Police Facial Recognition Software Can't Tell Black People Apart."
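The disparity described above is usually quantified by computing a model's error rate separately for each demographic group and comparing the results. A minimal sketch of such a per-group audit, using made-up toy data; `audit_by_group` is an illustrative helper, not a standard library function:

```python
def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the ground-truth labels."""
    wrong = sum(p != t for p, t in zip(predictions, labels))
    return wrong / len(labels)

def audit_by_group(results):
    """results maps group name -> (predictions, labels).
    Returns the error rate for each group, so disparities become visible."""
    return {group: error_rate(p, t) for group, (p, t) in results.items()}
```

An audit like this only surfaces the problem; fixing it requires rebalancing training data or changing how (and whether) the system is deployed.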
Global Scout Robot Market Research and Future Opportunities Overview 2024 - 2031
The global scout robot market is an emerging segment of the robotics industry, dedicated to the development and deployment of robotic systems designed for exploration, reconnaissance, and surveillance applications. This article provides a comprehensive analysis of the market, exploring trends, drivers, challenges, and future outlook.
Overview of the Scout Robot Market
Scout robots are autonomous or remotely-operated machines equipped with sensors, cameras, and advanced navigation systems. They are utilized in various sectors, including military, agriculture, construction, and disaster response, to gather data and perform tasks in environments that may be hazardous or inaccessible to humans.
Key Features of Scout Robots
Autonomous Navigation: Equipped with GPS and advanced algorithms to navigate complex terrains without human intervention.
Sensor Integration: Utilize multiple sensors (e.g., LiDAR, infrared, and cameras) to collect environmental data for analysis.
Remote Operation: Many scout robots can be controlled remotely, allowing operators to receive real-time data while staying safe.
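The sensor integration described above often boils down to fusing several noisy estimates of the same quantity into one. A standard approach is inverse-variance weighting, sketched below with hypothetical lidar and camera range readings (the numbers are illustrative, not from any real platform):

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent estimates.

    estimates: iterable of (value, variance) pairs, e.g. a lidar range
    and a camera-derived range for the same obstacle. Lower-variance
    (more trusted) sensors pull the fused value toward themselves.
    """
    num = sum(value / var for value, var in estimates)
    den = sum(1.0 / var for _, var in estimates)
    return num / den
```

For example, fusing a lidar reading of 10.0 m (variance 1.0) with a camera reading of 12.0 m (variance 4.0) yields a result much closer to the lidar's, reflecting its tighter uncertainty.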
Market Dynamics
Drivers of Market Growth
Increased Demand for Surveillance and Security: The rising need for enhanced security measures in military and civilian applications is driving the demand for scout robots.
Technological Advancements: Innovations in robotics, artificial intelligence, and sensor technologies are enhancing the capabilities of scout robots.
Growing Applications in Agriculture: The adoption of robotics in precision agriculture is increasing, with scout robots being used for monitoring crops and livestock.
Challenges Facing the Market
High Development Costs: The research and development costs associated with creating advanced scout robots can be substantial.
Regulatory Challenges: Compliance with regulations governing the use of autonomous robots in public spaces and military applications can complicate market entry.
Public Perception: Concerns about privacy and surveillance may hinder the adoption of scout robots in certain sectors.
Regional Analysis
North America
North America, particularly the United States, is a leader in the scout robot market, driven by significant investments in defense and security applications. The presence of key players and advanced technological infrastructure supports market growth.
Europe
Europe is witnessing increasing adoption of scout robots in agriculture and environmental monitoring. Countries such as Germany and the UK are at the forefront of developing advanced robotics for various applications, bolstered by supportive government policies.
Asia-Pacific
The Asia-Pacific region is expected to experience rapid growth in the scout robot market due to rising industrialization and technological advancements. Countries like China and Japan are investing heavily in robotics for military and agricultural applications, creating a favorable environment for market expansion.
Competitive Landscape
Key Players
Boston Dynamics: Renowned for its advanced robotics, Boston Dynamics offers scout robots like Spot, designed for various inspection and reconnaissance tasks.
iRobot Corporation: Known for its consumer robots, iRobot is expanding into professional applications with versatile robotic solutions.
Clearpath Robotics: Focuses on developing autonomous robots for research and industrial applications, including scout robots for surveying and data collection.
Market Strategies
Product Innovation: Companies are investing in R&D to enhance the capabilities and features of scout robots, focusing on autonomy and data collection efficiency.
Partnerships and Collaborations: Strategic alliances with technology firms and research institutions are being pursued to leverage expertise and expand market reach.
Geographic Expansion: Targeting emerging markets in Asia and Africa to capitalize on growing demand in sectors such as agriculture and disaster management.
Future Outlook
The global scout robot market is projected to grow significantly over the next decade. As industries increasingly adopt automation and robotics for efficiency and safety, the demand for scout robots will likely rise.
Trends to Watch
Integration of AI and Machine Learning: The use of AI will enhance the decision-making capabilities of scout robots, allowing for smarter navigation and data analysis.
Sustainability Focus: Increasing emphasis on sustainable practices will drive demand for scout robots in environmental monitoring and conservation efforts.
Customization: A growing preference for tailored solutions that meet specific operational needs will shape product development in the market.
Conclusion
The global scout robot market is poised for substantial growth, driven by advancements in technology, increasing demand for security and surveillance, and the need for efficient agricultural practices. By addressing challenges and capitalizing on emerging opportunities, stakeholders can thrive in this dynamic and evolving market. The future of scout robots will be characterized by innovation, sustainability, and a commitment to enhancing operational efficiency across diverse sectors.
The Rising Unmanned Ground Vehicle Industry is Driven by Increased Demand for Surveillance Systems
The unmanned ground vehicle market is a rapidly growing industry that provides autonomous robot platforms for logistics, transport, and defense applications. UGV systems offer unmanned driving and navigation capabilities through sensors and programming, allowing them to carry out tasks without putting humans in harm's way. Commonly used in military missions for surveillance, combat support, and bomb disposal, UGVs enable real-time data collection from risky environments. Their adaptive locomotion and high payloads also make them suitable for commercial uses like container transport, warehouse management, agriculture, and infrastructure inspection. The global unmanned ground vehicle market is estimated to be valued at US$2,120.96 million in 2024 and is expected to exhibit a CAGR of roughly 12 percent over the forecast period 2024 to 2031.
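As a rough illustration of what a 12 percent CAGR implies, a market value can be compounded forward over the forecast window. This toy helper uses the standard compound-growth formula with round numbers, not the report's exact figures:

```python
def project(value, cagr, years):
    """Compound a base value forward at a constant annual growth rate.

    value: starting market size; cagr: growth rate as a fraction
    (0.12 for 12%); years: number of years to compound.
    """
    return value * (1.0 + cagr) ** years
```

At 12 percent a year, a market a bit more than doubles over seven years (a factor of about 2.21), which is the arithmetic behind "fastest-growing segment" claims like the one above.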
Growth in defense budgets for next-gen warfare systems and increasing utilization of UGVs for commercial applications are fueling demand.

Key Takeaways

Key players operating in the unmanned ground vehicle market are Northrop Grumman Corporation, BAE Systems, Lockheed Martin Corporation, Clearpath Robotics Inc., John Bean Technologies Corporation, ECA Group, Israel Aerospace Industries Ltd., Endeavor (now part of FLIR Systems), Harris Corporation, and General Dynamics. These manufacturers offer diverse UGV platforms customized for intelligence, surveillance, and reconnaissance missions. Growing demand from the defense sector for autonomous military robots is prompting heavy investments in the development of advanced UGVs. Their ability to undertake high-risk tasks like bomb disposal and terrain mapping without jeopardizing human lives boosts their utility on the battlefield. The adoption of UGVs is also expanding globally, with major militaries actively pursuing robotic vehicle procurement programs. Countries like the USA, UK, Israel, China, and India are increasingly integrating UGVs into their defense networks to gain strategic capabilities. Commercial applications of UGVs in logistics, agriculture, and infrastructure are further propelling the industry's revenue prospects across regions.

Market Key Trends

One of the key drivers emerging in the unmanned ground vehicle market is the advancement of autonomous navigation systems. Ongoing R&D to enhance UGV autonomy through technologies like AI, machine learning, path planning, and obstacle avoidance will make UGVs less reliant on remote control. This reduces logistical challenges for long-range deployments and operation in GPS-denied situations. It also enables the development of multi-robot teaming capabilities for complex coordinated applications.
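The path planning and obstacle avoidance mentioned above can be illustrated with the simplest possible planner: breadth-first search over an occupancy grid, which returns a shortest collision-free route. This is a toy sketch, not any vendor's navigation stack; real UGVs use far richer maps and cost functions:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS planner on an occupancy grid (0 = free cell, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal inclusive,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as a back-pointer map
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:        # reconstruct the route by walking back-pointers
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

Production planners replace BFS with A* or sampling-based methods and replan continuously as sensors update the obstacle map, but the core idea, searching a discretized map for a collision-free route, is the same.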
Porter's Analysis

Threat of new entrants: Unmanned ground vehicles require high capital investment, which acts as a barrier to entry.
Bargaining power of buyers: Buyers have less bargaining power due to a lack of alternatives and differentiated products.
Bargaining power of suppliers: Suppliers have moderate bargaining power due to the availability of alternative component suppliers.
Threat of substitutes: The emergence of autonomous vehicles poses a threat, but they are not a perfect substitute.
Competitive rivalry: Companies compete on product quality, features, and pricing due to increasing demand.

Geographical Regions

North America accounts for the largest share of the unmanned ground vehicle market in terms of value, owing to extensive R&D activities and increased defense expenditure. The Asia Pacific region is expected to grow at the fastest pace during the forecast period due to rising defense spending by countries such as India and China.
About Author:
Ravina Pandya, Content Writer, has a strong foothold in the market research industry. She specializes in writing well-researched articles from different industries, including food and beverages, information and technology, healthcare, chemical and materials, etc. (https://www.linkedin.com/in/ravina-pandya-1a3984191)
Red Cat Launches Robotics and Autonomous Systems Consortium to Bridge Critical UAS Technology Gaps for Warfighters - Technology Today
SAN JUAN, Puerto Rico, May 07, 2024 (GLOBE NEWSWIRE) — Red Cat Holdings, Inc. (Nasdaq: RCAT) (“Red Cat”), a drone technology company integrating robotic hardware and software for military, government, and commercial operations, today announced the formation of the Red Cat Futures Initiative (RFI). RFI is an independent, industry-wide consortium of robotics and autonomous systems (RAS) […]
See full article at https://petn.ws/a5yDB
A Race to Extinction: How Great Power Competition is Making Artificial Intelligence Existentially Dangerous
In an era dominated by technological advancements, the race for global supremacy has taken on a new dimension with the proliferation of artificial intelligence (AI). However, amidst the pursuit of innovation and progress, there lies a lurking danger — the existential threat posed by AI. This article delves into the intricacies of how great power competition is fueling the existential risks associated with artificial intelligence, unraveling its potential consequences and implications for humanity.
The Rise of Artificial Intelligence
Artificial intelligence, often abbreviated as AI, refers to the development of computer systems capable of performing tasks that typically require human intelligence. From machine learning algorithms to advanced robotics, AI has witnessed unprecedented growth, revolutionizing various sectors including healthcare, finance, and transportation. Its ability to analyze vast amounts of data and make complex decisions has propelled it into the forefront of technological innovation.

Understanding the Dynamics of AI Development

The rapid evolution of AI can be attributed to several factors, including advancements in computational power, the availability of big data, and breakthroughs in algorithms. As nations vie for technological supremacy, substantial investments are being made in AI research and development, leading to significant strides in its capabilities.

The Role of Great Power Competition

Great power competition, characterized by the rivalry among major global players such as the United States, China, and Russia, has intensified in recent years. In a bid to gain a strategic edge, these nations are heavily investing in AI technologies, viewing them as crucial assets in maintaining military dominance, economic superiority, and technological leadership.
Ethical Considerations in AI Development
While AI holds immense promise in terms of efficiency and innovation, its unchecked advancement raises pressing ethical concerns. The development of autonomous weapons systems, algorithmic biases, and privacy infringements are just a few examples of the ethical dilemmas posed by AI.

Addressing Ethical Challenges in AI

Governments, tech companies, and international organizations are grappling with the ethical implications of AI development. Initiatives such as the establishment of ethical guidelines, regulatory frameworks, and responsible AI practices aim to mitigate potential harms and ensure that AI is developed and deployed in a manner consistent with human values and rights.

Promoting Ethical AI Innovation

Promoting transparency, accountability, and inclusivity in AI research and development is essential to fostering ethical innovation. By prioritizing ethical considerations throughout the AI lifecycle, from design to deployment, stakeholders can uphold principles of fairness, equity, and human dignity.
The Threat of Existential Risks
As AI continues to advance at an unprecedented pace, concerns about its potential to pose existential risks to humanity have become increasingly pronounced. From the prospect of superintelligent AI surpassing human capabilities to the unintended consequences of AI alignment failures, the specter of existential threats looms large.

Assessing the Risks of Superintelligent AI

The concept of superintelligence, AI systems surpassing human intelligence across all domains, raises profound questions about the future of humanity. While proponents envision a utopian scenario of AI-driven abundance and prosperity, skeptics warn of catastrophic outcomes, including the possibility of AI prioritizing its own objectives at the expense of human interests.

Mitigating Existential Risks in AI Development

Efforts to mitigate existential risks associated with AI are multifaceted and complex. Research into AI safety mechanisms, interdisciplinary collaboration, and robust governance structures are among the strategies proposed to safeguard against catastrophic outcomes. However, navigating the uncertainty surrounding AI's long-term impact remains a formidable challenge.
Conclusion
The intersection of great power competition and artificial intelligence presents a formidable challenge fraught with existential implications. As nations race to harness the potential of AI for strategic advantage, it is imperative to tread cautiously and consider the ethical, societal, and existential ramifications of technological advancement. By fostering collaboration, transparency, and responsible innovation, we can strive to ensure that AI serves as a force for good rather than a harbinger of existential risk.
FAQs
- How does great power competition influence AI development? Great power competition drives significant investments in AI research and development as nations vie for technological supremacy, leading to rapid advancements in AI capabilities.
- What are the ethical concerns surrounding AI? Ethical concerns in AI include the development of autonomous weapons, algorithmic biases, and privacy infringements, necessitating the establishment of ethical guidelines and regulatory frameworks.
- What are existential risks associated with AI? Existential risks in AI range from the prospect of superintelligent AI surpassing human capabilities to the unintended consequences of AI alignment failures, raising profound questions about the future of humanity.
- How can existential risks in AI be mitigated? Mitigating existential risks in AI requires research into AI safety mechanisms, interdisciplinary collaboration, and robust governance structures to safeguard against catastrophic outcomes.
- What role do ethical considerations play in AI innovation? Ethical considerations are integral to AI innovation, guiding stakeholders in promoting transparency, accountability, and inclusivity throughout the AI lifecycle to uphold human values and rights.
- Why is it crucial to address ethical challenges in AI development? Addressing ethical challenges in AI development is essential to ensure that AI is developed and deployed in a manner consistent with human values, rights, and societal well-being, mitigating potential harms and risks.
The Future of Enterprise Technology: Aonic’s All-in-One Drone Solutions
Aerial robots are revolutionizing various industries and driving innovation worldwide. Unmanned Aerial Vehicles (UAVs), commonly known as drones, are designed to fly remotely or autonomously according to specified instructions. Much as a remote controller operates a toy car, drones can be manually controlled or programmed for autonomous flight. Equipped with sensors, cameras, and communication systems, these UAVs significantly enhance efficiency. Their applications span the agriculture, medicine, retail, and industrial sectors, and they also serve crucial roles in military, educational, and research domains. Whether monitoring crops, delivering medical supplies, conducting inventory assessments, or inspecting infrastructure, drones continue to redefine the possibilities of exploration and innovation.
Drones in the retail industry are used for inventory management, store inspections, product delivery, site selection, and emergencies. In this article, we discuss one such company, Aonic, which provides drone solutions to enterprises for modernizing and streamlining their operations.
Founding story
Aonic was founded in 2016 by a team of engineers and was initially known as Poladrone. Founder and CEO Jin Xi Cheong previously worked as a Senior Business Analyst at Intel Corporation and co-founded Drone Academy Asia, Malaysia’s CAAM-approved remote pilot training organization. Aonic is an all-in-one drone solutions provider that helps enterprises modernize and manage their workflows.
The company’s vision, ‘accelerate the future, today’, reflects its aim of enhancing the future of business by making technologies like drones accessible now. Its mission is to ‘build a future-proof ecosystem of solutions that propels industries forward’. Aonic aims to propel traditional businesses forward through its five business verticals, agriculture, industrial, services, retail, and academy, each designed to meet specific business requirements. The core values that differentiate Aonic from its competitors are ownership, resilience, youthfulness, curiosity, teamwork, excellence, and self-motivation; these qualities help the company understand customers’ requirements and achieve their goals.
Business Verticals
Aonic oversees business in the agriculture, industrial, services, retail, and academy verticals. It provides end-to-end solutions tailored to the requirements of each business sector.
Agriculture
Agriculture is one of the most important sectors in Asian countries, and Aonic serves it with unmanned aerial vehicles such as Oryctes and the Mist drone. Oryctes is a high-precision agricultural sprayer suited to agribusinesses of all sizes. The Mist drone is an efficient open-field crop-spraying drone available in several versions, Mist Lite, Mist Pro, and Mist Max, so farmers can choose the model that fits their needs. Aonic also provides software for managing agribusiness workflows: Airmap Desktop, a full-fledged estate and plantation mapping application, and the Oryctes Flight App, an Android-based flight management and telemetry app. With these drone and software solutions, Aonic helps agribusinesses run their operations easily and efficiently.
Retail
Aonic manufactures and retails drone spare parts from brands such as EFT, Hobbywing, and Tattu. Its EFT storefront sells agricultural drone frames and accessories, including airframes, cables, DIY drone accessories, pesticide tanks, Skydroid controllers, spotlights, spraying systems, and more. The Hobbywing line covers brushless motors and ESC products such as the X8/X9/X9 Plus motor LED light and the 8L brushless pump. Tattu supplies Li-Po batteries for agricultural drones. Alongside its drone solutions, Aonic thus also runs a retail business in drone components.
Industrial and Services
In the industrial vertical, Aonic offers products and services such as DJI Enterprise payloads, Emlid, FLIR, Sentera, MicaSense, and GreenValley. The services vertical covers construction, infrastructure, surveillance, search and rescue (SAR), survey, and mapping, with the collected data used for analysis.
Conclusion
In conclusion, unmanned aerial vehicles (UAVs) are being used across industries in a wide variety of applications. Aonic is one company providing drone solutions to enterprises that wish to modernize and streamline their workflows. It proves itself an all-in-one drone solution by serving multiple business verticals, and it is trusted and used by enterprises across Asia.
Visit More : https://apacbusinesstimes.com/the-future-of-enterprise-technology-aonics-all-in-one-drone-solutions/
0 notes
Text
5G Autonomous Mobile Robot Market Likely to Boost Future Growth by 2028
Intellect Markets has published a new research report, "5G Autonomous Mobile Robot Market Insights, to 2030", spanning 232 pages and enriched with self-explanatory tables and charts in a presentable format. The study covers the emerging trends, drivers, restraints, and opportunities relevant to market stakeholders. Growth of the 5G autonomous mobile robot market has been driven mainly by increasing R&D spending across the world.
Get a free exclusive PDF sample copy of this research @ https://intellectmarkets.com/report/5g-autonomous-mobile-robot-market/request-sample Some of the key players profiled in the study are: Elantas GmbH (Germany), Axalta Coating Systems (the U.S.), Von Roll Holdings AG (Switzerland), Hitachi Chemicals Company Ltd. (Japan), 3M Company (the U.S.), and Kyocera Corporation (Japan). Scope of the report: the 5G autonomous mobile robot market refers to the sector of the robotics industry focused on the development, production, and deployment of autonomous mobile robots (AMRs) that use 5G network connectivity for communication and operation. These robots are equipped with sensors, cameras, and other technologies that enable them to navigate, perceive their environment, and execute tasks without human intervention.
Market Trends: An increase in the use of mobile robots in agriculture
Market Drivers: Growing E-Commerce Sector Will Aid Market Expansion
Have Any Questions Regarding Global 5g autonomous mobile robot Market Report, Ask Our Experts@ https://intellectmarkets.com/report/5g-autonomous-mobile-robot-market/enquire
The market’s segments and sub-sections are outlined below: By Product (Unmanned/Autonomous Ground Vehicle, Unmanned Aerial Vehicle, Unmanned Marine Vehicle), by Application (Logistics & Warehousing, Military & Defense, Healthcare, Domestic, Entertainment, Education, Agriculture, Others), by End User (Warehouse & Distribution Centers, Manufacturing, Others)
The global 5G autonomous mobile robot market report highlights current and future industry trends and growth patterns, and offers business strategies to help stakeholders make sound decisions that can secure their profit trajectory over the forecast years.
Regions included: Global, North America, Europe, Asia Pacific, South America, Middle East & Africa. Country-level break-up: United States, Canada, Mexico, Brazil, Argentina, Colombia, Chile, South Africa, Nigeria, Tunisia, Morocco, Germany, United Kingdom (UK), the Netherlands, Spain, Italy, Belgium, Austria, Turkey, Russia, France, Poland, Israel, United Arab Emirates, Qatar, Saudi Arabia, China, Japan, Taiwan, South Korea, Singapore, India, Australia, New Zealand, etc.
Finally, the 5G autonomous mobile robot market report is a valuable source of guidance for individuals and companies. Read the detailed index of the full research study at @ https://intellectmarkets.com/report/5g-autonomous-mobile-robot-market
Thanks for reading this article; region-wise report versions are also available, including Global, North America, Middle East, Africa, Europe, and South America.
About Us:
Intellect Markets, a leading strategic market research firm, helps businesses confidently navigate their strategic challenges, promoting informed decisions for sustainable growth. We provide comprehensive syndicated reports and customized consulting services. Our insights provide a clear understanding of the ever-changing dynamics of the global demand-supply gap across various markets.
Contact US: Intellect Markets, Unit No. 4, Lakshmi Enclave, Nizampet, Hyderabad, Telangana, India - 500090 Phone: +1 347 514 7411, +91 8688234923 [email protected]
0 notes
Text
Navigating the Ethical Landscape of the AI Boom
Introduction
The rapid advancement and increasing adoption of Artificial Intelligence (AI) technologies herald what many are calling the 'AI Boom' - a period marked by significant growth in AI capabilities and applications. However, this burgeoning AI era also brings forth a multitude of ethical concerns. This article aims to provide a comprehensive overview of the ethical issues associated with AI, exploring how they manifest in various sectors and the implications for society, businesses, and individuals.
The AI Boom: A Brief Overview
Definition and Scope
The AI boom refers to the current phase of accelerated growth and integration of AI technologies in various aspects of life and work. This encompasses advancements in machine learning, natural language processing, robotics, and other AI-driven innovations.
Potential and Promise
AI promises immense benefits, including enhanced efficiency, personalized services, medical advancements, and solutions to complex global challenges. This potential is driving the rapid development and adoption of AI technologies.
Ethical Concerns in AI
1. Bias and Discrimination
Unconscious Bias in AI Systems
AI systems can inadvertently perpetuate and amplify biases present in their training data. This can lead to discriminatory outcomes in areas like recruitment, law enforcement, and lending, where AI systems might make biased decisions based on gender, race, or other factors.
Addressing and Mitigating Bias
Tackling this issue involves diversifying training datasets, implementing bias detection methodologies, and ensuring transparency in AI decision-making processes.
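As one concrete example of the bias detection methodologies mentioned above, a common first check is demographic parity: comparing positive-outcome rates across groups. The sketch below uses hypothetical hiring decisions; what gap counts as acceptable depends on context and regulation.

```python
def demographic_parity_gap(decisions, groups):
    """Difference in positive-outcome rates between groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical hiring decisions (1 = offer made) across two applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"parity gap: {demographic_parity_gap(decisions, groups):.2f}")
```

Here group A receives offers at a 0.6 rate versus 0.2 for group B, a gap of 0.4; checks like this are only a starting point, since parity metrics can conflict with each other and with accuracy.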
2. Privacy and Data Security
Intrusion into Personal Privacy
AI technologies often rely on large volumes of data, including personal information. This raises concerns about privacy infringement, unauthorized data usage, and surveillance.
Safeguarding Privacy
Ensuring robust data protection measures, adhering to privacy regulations, and fostering a culture of data ethics are essential in addressing these concerns.
3. AI and Employment
Job Displacement
One of the most debated ethical issues is AI’s impact on employment. Automation and AI-driven efficiencies could lead to significant job displacement, particularly in sectors like manufacturing, customer service, and transportation.
Future of Work and Reskilling
Addressing this challenge requires foresight in workforce planning, investment in education and reskilling programs, and policies to support those displaced by AI technologies.
4. Accountability and Transparency
The Black Box Problem
Many AI systems, especially those based on deep learning, are often seen as 'black boxes' with decision-making processes that are not transparent or explainable. This lack of transparency raises concerns about accountability, especially in critical applications like healthcare or criminal justice.
Ensuring Accountability
Promoting transparency in AI algorithms, developing standards for explainability, and establishing clear accountability guidelines are crucial steps in resolving these concerns.
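One simple, model-agnostic technique for probing a "black box" is permutation importance: shuffle a single input feature and measure how much the model's accuracy drops. The toy model and data below are purely illustrative.

```python
import random

def permutation_importance(model, X, y, feature_idx, trials=200, seed=0):
    """Average accuracy drop when one feature's column is shuffled.

    A large drop means the model relies heavily on that feature; a drop of
    zero means the feature is ignored. Works for any callable model.
    """
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(trials):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)  # break the feature's link to the labels
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Toy "model" that only looks at feature 0.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(f"importance of feature 0: {permutation_importance(model, X, y, 0):.2f}")
print(f"importance of feature 1: {permutation_importance(model, X, y, 1):.2f}")
```

The probe correctly reports that feature 1 contributes nothing, without ever inspecting the model's internals, which is exactly the property that makes such techniques useful for opaque systems.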
5. Autonomous Technologies and Weaponization
Ethical Use of AI in Autonomous Systems
The use of AI in autonomous systems, including vehicles and drones, raises significant ethical issues, particularly around safety and decision-making in critical situations.
AI in Military Applications
The potential weaponization of AI technologies poses grave ethical challenges, including the prospect of autonomous weapons systems. This raises fundamental questions about the moral and ethical boundaries of AI in warfare.
Navigating the Ethical Landscape
Developing Ethical Frameworks
Developing comprehensive ethical frameworks for AI is crucial. This involves collaboration between technologists, ethicists, policymakers, and other stakeholders to establish guidelines that balance innovation with ethical considerations.
Role of Governments and International Bodies
Governments and international organizations play a critical role in regulating and guiding the ethical use of AI. This includes creating policies, standards, and laws that ensure the responsible development and deployment of AI technologies.
Corporate Responsibility
Businesses developing and deploying AI have a responsibility to ensure their practices align with ethical standards. This includes conducting ethical AI audits, ensuring diversity in teams, and engaging in responsible AI research and development.
Public Awareness and Engagement
Educating the public about AI and its ethical implications is vital. Public engagement and discourse can lead to more informed decisions about AI’s role in society and encourage responsible use.
The Future of AI Ethics
Continuous Evolution of Ethical Standards
As AI technology evolves, so too must the ethical standards and frameworks governing it. This requires ongoing research, dialogue, and adaptability to emerging challenges and scenarios.
Balancing Innovation with Ethics
The future will likely involve striking a balance between harnessing the potential of AI and ensuring that its development and application adhere to ethical principles. This balance is crucial for sustainable and beneficial AI advancement.
Conclusion
The AI boom represents an unprecedented era of technological advancement, but it also brings complex ethical challenges that need to be addressed diligently. As we stand at the cusp of this AI revolution, it is imperative to navigate its ethical landscape thoughtfully and proactively. By developing robust ethical frameworks, ensuring transparency and accountability, and fostering a culture of responsible AI use, we can harness the potential of AI while safeguarding societal values and human dignity.
In summary, the ethical implications of the AI boom are as profound as its technological advancements. The decisions made today will shape the role AI plays in our future, making it crucial to approach these challenges with a balanced, informed, and ethical perspective. As we embrace the potential of AI, let us also commit to steering its course responsibly, ensuring it serves the greater good of humanity.
0 notes
Text
Impact of Artificial Intelligence in National Security
Artificial intelligence (AI) is reshaping the way nations approach military strategies and global influence. While many people associate military AI with science fiction and killer robots, the reality is that AI has become a central focus in national security discussions. The implications of AI in the military are capturing global attention, from the existential threats warned of by philosopher Nick Bostrom, to Elon Musk's concern that AI could spark World War III, to Vladimir Putin's assertions about AI leadership.
AI as a National Security Facilitator
AI is not a weapon in and of itself; rather, it serves as an enabler, similar to electricity or the combustion engine. The impact of artificial intelligence on military power and international conflict is determined by how it is applied. In this article, we will look at the most important aspects of AI's military applications, such as defining AI, comparing it to previous technological advancements, potential military applications, limitations, and the implications for international politics.
The primary benefit of AI is its ability to increase the speed and precision of various military functions such as logistics, battlefield planning, and decision-making. For the United States military, artificial intelligence (AI) represents an opportunity to maintain military superiority while potentially lowering costs and reducing risks to its soldiers. Meanwhile, countries such as Russia and China see artificial intelligence as a means to challenge the United States' military dominance. This competition for AI leadership encompasses more than just military might; it also includes economic competition and global influence.
Nonetheless, the future of AI research is uncertain. There is always the risk that AI will not deliver on its promises, and concerns about safety and reliability may limit its military applications.
AI Understanding
Artificial intelligence (AI) is the use of machines or computers to perform tasks that previously required human intelligence. Researchers, businesses, and governments all use AI techniques like machine learning and neural networks. While some predict imminent breakthroughs that will lead to Artificial General Intelligence (AGI), others envision a decades-long timeline. This article focuses primarily on "narrow" AI, which is used to solve specific problems.
In terms of history, AI is a versatile technology with the potential to impact various aspects of the economy and society, depending on the rate of innovation. For the military, AI should be viewed as a tool rather than a weapon in and of itself.
Various AI Military Applications
AI has a wide range of potential military applications. To begin, many modern militaries struggle with rapidly processing massive amounts of data. Narrow AI applications can speed up data processing, freeing up human resources for more complex tasks. Project Maven, for example, aims to use algorithms to quickly interpret drone surveillance images. This technology can be extended to process public or classified databases, improving information interpretation and decision-making.
Second, the pace of warfare is quickening, and AI has the potential to play a critical role. Unmanned aircraft, for example, can operate more quickly and efficiently than their human-piloted counterparts. These AI-powered systems are especially useful in scenarios such as air defense, where quick decisions are required.
Third, AI can enable novel military concepts like the "loyal wingman" concept, in which AI systems assist human pilots or tank operators. AI can aid in the effective coordination of multiple assets and swarms in complex battles.
Militaries around the world are being enticed to investigate AI applications that can improve their effectiveness. These incentives are driven by internal political and bureaucratic factors, rather than competition with other militaries. Autonomous systems have the potential to perform tasks at lower costs and with fewer risks to human personnel in democracies like the United States. Autocratic regimes such as China and Russia, on the other hand, see AI as a tool for exerting control and reducing reliance on larger segments of the population.
The military applications of artificial intelligence (AI) go beyond lethal autonomous weapons, which have been debated at the United Nations. AI can be used in a variety of military contexts, including lethal autonomous systems.
AI Implementation Challenges
There are challenges to effectively deploying AI in the military. AI systems, particularly narrow AI, are designed to perform specific tasks, and their dependability can be jeopardized if the context changes rapidly. Predicting AI system behavior can be difficult as well, potentially complicating military planning and operations. Bias, training data, and explainability issues exacerbate the complexity.
Concerns about cybersecurity loom large as well, as AI systems are vulnerable to hacking and adversarial data manipulation. Adversaries may attempt to destabilize AI systems by tampering with the data used to train them.
Despite these difficulties, militaries are unlikely to abandon AI research. The types of AI systems developed and their integration into military operations may be influenced by issues of safety and reliability.
Certification of Artificial Intelligence Expertise for the Future of Military and Defense
The value of AI certification courses in the evolving military and defense landscape cannot be overstated. As the article delves into the transformative potential of artificial intelligence in modern warfare, it becomes clear that having qualified AI professionals is critical. Individuals with AI expert certification have the knowledge and skills to harness the power of AI in military applications.
These certifications help to ensure that professionals understand the complexities of AI technology. In a world where artificial intelligence's role in military operations is growing, AI developer certifications provide critical assurances of competence, both in terms of safety and reliability. They are critical in ensuring that the promises of artificial intelligence in military and defense are realized while minimizing the risks associated with its application.
As a result, AI certification exams serve as a springboard for developing AI experts capable of driving innovation and progress in the military domain, ultimately shaping the future of war and defense.
Finally, the impact of AI on military power and the future of warfare is becoming an increasingly important topic. To stay ahead of potential adversaries, leading military forces around the world are investing in AI research. Concerns about safety and reliability are valid, but they may not prevent AI military integration from proceeding. The history of technological advancement shows that safety issues have been overcome before, yielding significant improvements in military capabilities. The implications of artificial intelligence go beyond military power, affecting the future of work and society as a whole. As AI technology progresses, militaries will need to strike a balance between capability and dependability in order to fully realize AI's potential while mitigating risks.
In this changing environment, AI certification is critical in preparing professionals for the challenges and opportunities presented by AI in the military. Platforms such as Blockchain Council offer AI certification courses to ensure that individuals and military experts are well prepared to navigate the complex world of AI in military and defense, ultimately contributing to the effective integration of AI in these domains.
0 notes
Text
Robot dogs armed with AI-aimed rifles undergo US Marines Special Ops evaluation
Quadrupeds being reviewed have automatic targeting systems but require human oversight to fire.
The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics' quadrupedal unmanned ground vehicles (called "Q-UGVs" for short) for various applications, including reconnaissance and surveillance, it's the possibility of arming them with weapons for remote engagement that may draw the most attention. But it's not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff, and their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator that could be located anywhere in the world. The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously.
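The human-in-the-loop control described here, autonomous detection and tracking but no autonomous fire decision, can be sketched as a simple gating function. The names and structure below are illustrative, not Onyx's actual interface.

```python
def engage_target(detection, operator_approves):
    """Fire authorization requires both a detection AND explicit human approval.

    The system may detect and track on its own, but the engagement branch is
    unreachable without a positive decision from the remote operator.
    """
    if detection is None:
        return "no target"
    if not operator_approves(detection):
        return "tracking only; engagement withheld"
    return "engagement authorized by human operator"

# A remote-operator callback that declines in this demo.
decline = lambda det: False
print(engage_target({"type": "drone", "range_m": 400}, decline))
```

The design point is that autonomy lives entirely on the sensing side; the irreversible action is gated behind a human decision that the software cannot synthesize.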
[...]
The prospect of deploying armed robotic dogs, even with human oversight, raises significant questions about the future of warfare and the potential risks and ethical implications of increasingly autonomous weapons systems. There's also the potential for backlash if similar remote weapons systems eventually end up used domestically by police. Such a concern would not be unfounded: In November 2022, we covered a decision by the San Francisco Board of Supervisors to allow the San Francisco Police Department to use lethal robots against suspects.

There's also concern that the systems will become more autonomous over time. As The War Zone's Howard Altman and Oliver Parken describe in their article, "While further details on MARSOC's use of the gun-armed robot dogs remain limited, the fielding of this type of capability is likely inevitable at this point. As AI-enabled drone autonomy becomes increasingly weaponized, just how long a human will stay in the loop, even for kinetic acts, is increasingly debatable, regardless of assurances from some in the military and industry."

While the technology is still in the early stages of testing and evaluation, Q-UGVs do have the potential to provide reconnaissance and security capabilities that reduce risks to human personnel in hazardous environments. But as armed robotic systems continue to evolve, it will be crucial to address ethical concerns and ensure that their use aligns with established policies and international law.
1 note
·
View note
Text
Unraveling the Wonders of Fiber Optic Gyroscope: A Cutting-Edge Navigation Technology
The fiber optic gyroscope (FOG) represents a breakthrough in navigation technology, revolutionizing the way we measure rotation and orientation. In this article, we delve into the intricacies of the fiber optic gyroscope, exploring its principles, applications, and advantages. Discover how the FOG has transformed navigation systems in various industries and contributed to advancements in aerospace, defense, robotics, and more.
Principles of Fiber Optic Gyroscope:
A Fiber Optic Gyroscope operates on the principle of the Sagnac effect, utilizing the interference of light waves to detect rotation. The device consists of a coil of optical fiber wrapped around a spool and a light source that emits laser light into the fiber. As the device rotates, the Sagnac effect causes a phase shift in the light waves, which is detected and analyzed. By measuring this phase shift, the fiber optic gyroscope accurately determines the rate and direction of rotation. This non-mechanical and highly sensitive approach enables precise and reliable navigation measurements.
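To get a feel for the magnitudes involved, the Sagnac phase shift of a fiber coil is commonly written as Δφ = (2πLD / λc) · Ω, where L is the fiber length, D the coil diameter, λ the source wavelength, c the speed of light, and Ω the rotation rate. A minimal sketch with illustrative values (not any specific product's parameters):

```python
import math

def sagnac_phase_shift(fiber_length_m, coil_diameter_m, wavelength_m, omega_rad_s):
    """Sagnac phase shift of a fiber coil: (2*pi*L*D / (lambda*c)) * Omega."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return (2 * math.pi * fiber_length_m * coil_diameter_m
            / (wavelength_m * c)) * omega_rad_s

# Illustrative values: 1 km of fiber on a 10 cm coil, 1550 nm light,
# sensing Earth's rotation rate (~7.292e-5 rad/s).
dphi = sagnac_phase_shift(1000.0, 0.10, 1550e-9, 7.292e-5)
print(f"phase shift: {dphi:.3e} rad")
```

Even a full kilometer of fiber sensing Earth's rotation yields a phase shift on the order of only 1e-4 radians, which is why FOG detection electronics must resolve extremely small phase differences.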
Applications in Navigation Systems :
Fiber optic gyroscopes find applications in a wide range of navigation systems, where accurate and stable rotation sensing is crucial. They are extensively used in aerospace and aviation for aircraft attitude control, navigation, and stabilization. FOGs are also employed in autonomous vehicles, unmanned aerial vehicles (UAVs), and marine systems for accurate position estimation and guidance. In the defense sector, FOGs are integral components of navigation systems for military vehicles, submarines, and missiles. Additionally, FOG technology has made its way into robotics, virtual reality, and augmented reality applications, enhancing motion tracking and spatial orientation capabilities.
Advantages of Fiber Optic Gyroscope :
Fiber optic gyroscopes offer several advantages over traditional mechanical gyroscopes. They are compact, lightweight, and highly durable, making them suitable for space-constrained applications and harsh environments. FOGs exhibit excellent reliability, with no moving parts subject to wear and tear. They have fast response times, enabling real-time motion tracking and control. FOGs are also immune to external vibrations and magnetic fields, ensuring accurate measurements in challenging conditions. The absence of mechanical components and the use of light waves make FOGs inherently more resistant to drift, enhancing long-term stability and reducing the need for frequent calibrations.
Future Developments and Innovations:
The field of fiber optic gyroscopes continues to evolve with ongoing research and advancements. Researchers are exploring ways to enhance the performance and accuracy of FOGs, including reducing size, increasing sensitivity, and improving cost-effectiveness. Integration with other sensor technologies, such as accelerometers and magnetometers, enables multi-sensor fusion and further enhances navigation capabilities. Additionally, advancements in fiber optic technology, such as the development of specialty fibers and photonic crystal fibers, hold promise for future FOG applications. As technology continues to advance, fiber optic gyroscopes will play a vital role in enabling precise navigation and orientation in a wide range of industries.
Conclusion :
Fiber optic gyroscopes have transformed navigation technology by offering precise, reliable, and compact rotation sensing capabilities. Their application across various industries, from aerospace to robotics, showcases the immense potential of FOGs. As research and development progress, fiber optic gyroscopes will continue to shape the future of navigation systems, pushing the boundaries of accuracy and performance.
For more info. Visit us:
Inertial Navigation Systems
MEMS IMU
0 notes
Text
Why I’m Not Worried About A.I. Killing Everyone and Taking Over the World
This article was co-published with Understanding AI, a newsletter that explores how A.I. works and how it’s changing our world.
Geoffrey Hinton is a legendary computer scientist whose work laid the foundation for today’s artificial intelligence technology. He was a co-author of two of the most influential A.I. papers: a 1986 paper describing a foundational technique (called backpropagation) that is still used to train deep neural networks and a 2012 paper demonstrating that deep neural networks could be shockingly good at recognizing images.
That 2012 paper helped to spark the deep learning boom of the last decade. Google hired the paper’s authors in 2013 and Hinton has been helping Google develop its A.I. technology ever since then. But last week Hinton quit Google so he could speak freely about his fears that A.I. systems would soon become smarter than us and gain the power to enslave or kill us. “There are very few examples of a more intelligent thing being controlled by a less intelligent thing,” Hinton said in an interview on CNN last week.
This is not a new concern. The philosopher Nick Bostrom made similar warnings in his widely read 2014 book Superintelligence. At the time most people saw these dangers as too remote to worry about, but a few people found arguments like Bostrom’s so compelling that they devoted their careers to them. As a result, there’s now a tight-knit community convinced that A.I. poses an existential risk to the human race.
I’m going to call their viewpoint singularism—a nod not only to Vernor Vinge’s concept of the singularity, but also to Bostrom’s concept of a singleton, an A.I. (or other entity) that gains control over the world. The singularists have been honing their arguments for the last decade and today they largely set the terms of the A.I. safety debate.
But I worry that singularists are focusing the world’s attention in the wrong direction. Singularists are convinced that a super-intelligent A.I. would become powerful enough to kill us all if it wanted to. And so their main focus is on figuring out how to ensure that this all-powerful A.I. winds up with goals that are aligned with our own.
But it’s not so obvious that superior intelligence will automatically lead to world domination. Intelligence is certainly helpful if you’re trying to take over the world, but you can’t control the world without manpower, infrastructure, natural resources, and so forth. A rogue A.I. would start out without control of any of these physical resources.
So a better way to prevent an A.I. takeover may be to ensure humans remain firmly in control of the physical world—an approach I’ll call physicalism. That would mean safeguarding our power plants, factories, and other physical infrastructure from hacking. And it would mean being cautious about rolling out self-driving cars, humanoid robots, military drones, and other autonomous systems that could eventually become a mechanism for A.I. to conquer the world.
In 1997, IBM’s Deep Blue computer beat the reigning world chess champion, Garry Kasparov. In the years since, chess engines have gotten better and better. Today, the strongest chess software has an Elo rating of 3,500, high enough that we should expect it to win almost every game against the strongest human players (who have Elo ratings around 2,800). Singularists see this as a template for A.I. mastery of every significant activity in the global economy, including important ones like scientific discovery, technological innovation, and warfare.
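The Elo model makes that claim concrete: a player’s expected score follows a logistic curve in the rating difference. A minimal sketch using the standard Elo formula (not tied to any particular engine):

```python
def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected score for player A under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 3,500-rated engine against a 2,800-rated human champion.
engine_vs_champion = elo_win_probability(3500, 2800)
```

A 700-point gap works out to an expected score of roughly 0.98, which is why “almost every game” is the right reading of those ratings.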
A key step on the road to A.I. dominance will be when A.I. systems get better than people at designing A.I. systems. At this point, singularists predict that we’ll get an “intelligence explosion” where A.I. systems work to recursively improve their own code. Because it’s easy to make copies of computer programs, we could quickly have millions of virtual programmers working to improve A.I. systems, which should dramatically accelerate the rate of progress in A.I. technology.
I find this part of the singularist story entirely plausible. I see no reason to doubt that we’ll eventually be able to build computer systems capable of performing cognitive tasks at a human level—and perhaps beyond.
Once an A.I. achieves superintelligence, singularists envision it building some kind of superweapon to take over the world. Obviously, since none of us possess superhuman intelligence, it’s hard to be sure whether this is possible. But I think a good way to sanity-check it is to think about the history of previous superweapons.
Take the atomic bomb, for example. In 1939, physicist Leo Szilard realized that it would be possible to create a powerful new kind of bomb using nuclear fission. So did he go into his garage, build the first atomic bomb, and use it to become the most powerful person on the planet?
Of course not. Instead, Szilard drafted a letter to President Franklin Roosevelt and got Albert Einstein to sign it. That led to the Manhattan Project, which wound up employing tens of thousands of people and spending billions of dollars over a six-year period. When the first atomic bombs were finished in 1945, it was President Harry Truman, not Szilard or other physicists, who got to decide how they would be used.
Maybe a superintelligent A.I. could come up with an idea for a powerful new type of weapon. But like Szilard, it would need help to build and deploy it. And getting that help might be difficult—especially if the A.I. wants to retain ultimate control over the weapon once it’s built.
When I read Bostrom’s Superintelligence, I was surprised that he devotes less than three pages (starting on page 97 in this version) to discussing how an A.I. takeover might work in concrete terms. In those pages, Bostrom briefly discusses two possible scenarios. One is for the A.I. to create “self-replicating biotechnology or nanotechnology” that could spread across the world and take over before humans know what is happening. The other would be to create a supervirus to wipe out the human race.
Bostrom’s mention of nanotechnology is presumably a reference to Eric Drexler’s 1986 book envisioning microscopic robots that could construct other microscopic objects one atom at a time. Twenty years later, in 2006, a major scientific review found that the feasibility of such an approach “cannot be reliably predicted.” As far as I can tell, there’s been no meaningful progress on the concept since then.
We do have one example of a nanoscale technology that’s made significant progress in recent years: integrated circuits now have features that are just a few atoms wide, allowing billions of transistors to be packed onto a single chip. And the equipment required to build these nanoscale devices is fantastically expensive and complex: companies like TSMC and Intel spend billions of dollars to build a single chip fabrication plant.
I don’t know if Drexler-style nano-assemblers are possible. But if they are, building the first ones is likely to be a massive undertaking. Like the atomic bomb, it would likely require many skilled engineers and scientists, large amounts of capital, and large research labs and production facilities. It seems hard for a disembodied A.I. to pull that off—and even harder to do so while maintaining secrecy and control.
Part of Bostrom’s argument is that superintelligent A.I. would have a “social manipulation superpower” that would enable the A.I. to persuade or trick people into helping it accomplish its nefarious ends.
Again, no one has ever encountered a superintelligent A.I., so it’s hard to make categorical statements about what it might be able to do. But I think this misunderstands how persuasion works.
Human beings are social creatures. We trust longtime friends more than strangers, and we are more likely to trust people we perceive as similar to ourselves. In-person conversations tend to be more persuasive than phone calls or emails.
A superintelligent A.I. would have no friends or family and would be incapable of having an in-person conversation with anybody. Maybe it could trick some gullible people into sending it money or sharing confidential information. But what an A.I. would really need is co-conspirators: people willing to help out with a project over the course of months or years, while keeping their actions secret from friends and family. It’s hard to imagine how an A.I. could inspire that kind of loyalty among a significant number of people.
I expect that nothing I’ve written so far is going to be persuasive to committed singularists. Singularists have a deep intuition that more intelligent entities inevitably become more powerful than less intelligent ones.
“One should avoid fixating too much on the concrete details, since they are in any case unknowable and intended for illustration only,” Bostrom writes in Superintelligence. “A superintelligence might—and probably would—be able to conceive of a better plan for achieving its goals than any that a human can come up with. It is therefore necessary to think about these matters more abstractly.”
Stephen Hawking articulated this intuition in a vivid way a few years ago. “You’re probably not an evil ant-hater who steps on ants out of malice,” Hawking wrote. “But if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
But it’s worth thinking harder about the relationship between human intelligence and our power over the natural world.
If you put a modern human in a time machine and sent him back 100,000 years, it’s unlikely he could use his superior intelligence to establish dominance over a nearby Neanderthal tribe. Even if he was an expert on modern weaponry, he wouldn’t have the time or resources to make a gun before the Neanderthals killed him or he just starved to death.
Humanity’s intelligence gave us power mainly because it enabled us to create progressively larger and more complex societies. A few thousand years ago, some human civilizations grew large enough to support people who specialized in mining and metalworking. That allowed them to build better tools and weapons, giving them an edge over neighboring civilizations. Specialization has continued to increase, century by century, until the present day. Modern societies have thousands of people working on highly specialized tasks from building aircraft carriers to developing A.I. software to sending satellites into space. It’s that extreme specialization that gives us almost godlike powers over the natural world.
My favorite articulation of this point came from entrepreneur Anton Troynikov in a recent episode of the Moment of Zen podcast.
“The modern industrial world requires actuators starting from the size of an oil refinery and going down to your scanning electron microscope,” Troynikov said. “The reason that we need all of this vast array of things is that the story of technology is almost the story of tool use. And every one of those tools relies on another layer of tools below them.”
The modern world depends on infrastructure like roads, pipelines, fiber optic cables, ports, warehouses, and so forth. Each piece of infrastructure has a workforce dedicated to building, maintaining, and repairing it. These workers not only have specialized skills and knowledge, they also have sophisticated equipment that enables them to do their jobs.
Which brings me to Bostrom’s second scenario for A.I. takeover. Bostrom predicts that a superintelligent A.I. might create a virus that wipes out humanity. It’s conceivable that an A.I. could trick someone into synthesizing a virus in an existing biology lab. I don’t know if an A.I.-designed virus could literally wipe out humanity, but let’s assume it can for the sake of argument.
This thing can’t run itself. (Photo: Lionel Bonaventure/Getty Images)
The problem, from the A.I.’s point of view, is that it would still need some humans around to keep its data centers running.
As consumers, we’re used to thinking of services like electricity, cellular networks, and online platforms as fully automated. But they’re not. They’re extremely complex and have a large staff of people constantly fixing things as they break. If everyone at Google, Amazon, AT&T, and Verizon died, the internet would quickly grind to a halt—and so would any superintelligent A.I. connected to it.
Could an A.I. dispatch robots to keep the internet and its data centers running? Today there are far fewer industrial robots in the world than human workers, and the vast majority of them are special-purpose robots designed to do a specific job at a specific factory. There are few if any robots with the agility and manual dexterity to fix overhead power lines or underground fiber optic cables, drive delivery trucks, replace failing servers, and so forth. Robots also need human beings to repair them when they break, so without people the robots would eventually stop functioning too.
Of course this could change. Over time we may build increasingly capable robots, and in a few decades we may reach the point where robots are doing a large share of physical work. At that point, an A.I. takeover scenario might become more plausible.
But this is very different from the “fast takeoff” scenario envisioned by many singularists, in which A.I. takes over the world within months, weeks, or even days of an intelligence explosion. If A.I. takes over, it will be a gradual, multi-decade process. And we’ll have plenty of time to change course if we don’t like the way things are heading.
Singularists predict that the first superintelligent A.I. will be the last superintelligent A.I. because it will rapidly become smart enough to take over the world. If that’s true, then the question of A.I. alignment becomes supremely important because everything depends on whether the superintelligent A.I. decides to treat us well or not.
But in a world where the first superintelligent A.I. won’t be able to immediately take over the world—the world I think we live in—the picture looks different. In that case, there are likely to eventually be billions of intelligent A.I.s in the world, with a variety of capabilities and goals. Many of them will be benevolent. Some may “go rogue” and pursue goals independent of their creators. But even if that doesn’t happen, there will definitely be some A.I.s created by terrorists, criminals, bored teenagers, or foreign governments. Those are likely to behave badly—not because they’re “misaligned,” but because they’re well-aligned with the goals of their creators.
In this world, anything connected to the internet will face constant attacks from sophisticated A.I.-based hacking tools. In addition to discovering and exploiting software vulnerabilities, rogue A.I. might be able to use technologies like large language models and voice cloning to create extremely convincing phishing attacks.
And if a hacker breaches a computer system that controls a real-world facility—say a factory, a power plant, or a military drone—it could do damage in the physical world.
Last week I asked Matthew Mittelsteadt, an A.I. and cybersecurity expert at the Mercatus Center, to name the most important recent examples of hacks like this. He said these were the three most significant of the last 15 years:
• In 2010, someone—widely believed to be the U.S. or Israeli government—unleashed the Stuxnet computer worm on computer systems associated with Iran’s nuclear program, slowing Iran’s efforts to enrich uranium.
• In 2015, hackers with suspected ties to Russia hacked computers controlling part of the Ukrainian power grid. This caused about 200,000 Ukrainians to lose power, but utility workers were able to restore power within a few hours by bypassing the computers.
• In 2021, a ransomware attack hit the billing infrastructure of the Colonial Pipeline, which moves gasoline from Texas to the Southeastern United States. The attack shut down the pipeline for a few days, leading to brief fuel shortages in affected states.
This list makes it clear that this is a real problem that we should take seriously. But overall I found this list reassuring. Even if A.I. makes attacks like this 100 times more common and 10 times more damaging in the coming years, they would still be a nuisance rather than an existential threat.
Mittelsteadt points out that the good guys will be able to use A.I. to find and fix vulnerabilities in their systems. Beyond that, it would be a good idea to make sure that computers controlling physical infrastructure like power plants and pipelines are not directly connected to the internet. Mittelsteadt argues that safety-critical systems should be “air gapped”: made to run on a physically separate network under the control of human workers located on site.
This principle is particularly important for military hardware. One of the most plausible existential risks from A.I. is a literal Skynet scenario where we create increasingly automated drones or other killer robots and the control systems for these eventually go rogue or get hacked. Militaries should take precautions to make sure that human operators maintain control over drones and other military assets.
Last fall, the U.S. military publicly committed not to put A.I. in control of nuclear weapons. Hopefully other nuclear-armed powers will do the same.
Notably, these are all precautions we ought to take whether or not we think attacks by rogue A.I.s are an imminent problem. Even if superintelligent A.I. never tries to hack our critical infrastructure, it’s likely that terrorists and foreign governments will.
Over the longer term, we should keep the threat of rogue A.I.s in mind as we decide whether and how to automate parts of the economy. For example, at some point we will likely have the ability to make our cars fully self-driving. This will have significant benefits, but it could also increase the danger from misaligned A.I.
Maybe it’s possible to lock down self-driving cars so they are provably not vulnerable to hacking. Maybe these vehicles should have “manual override” options where a human passenger can shut down the self-driving system and take the wheel. Or maybe locking down self-driving cars is impossible and we’ll ultimately want to limit how many self-driving cars we put on the road.
Robots today are neither numerous nor sophisticated enough to be of much use to a superintelligent A.I. bent on world domination. But that could change in the coming decades. If more sophisticated and autonomous robots become commercially viable, we’ll want to think carefully about whether deploying them will make our civilization more vulnerable to misaligned A.I.
The bottom line is that it seems easier to minimize the harm a superintelligent A.I. can do than to prevent rogue A.I. systems from existing at all. If superintelligent A.I. is possible, then some of those A.I.s will have harmful goals, just as every human society has a certain number of criminals. But as long as human beings remain firmly in control of assets in the physical world, it’s going to be hard for a hostile A.I. to do too much damage.
Text
Where Technology is Headed in 2023
As Artificial Intelligence (AI) becomes more mainstream, you should be aware of where technology is headed. Example: The Japanese government is restricting facial recognition exports over growing concern about China’s extreme surveillance measures. A recent report from the Central News Agency (CNA) states that China has been using AI-powered facial recognition technology to “deploy large-scale surveillance systems to restrict people’s movement in Xinjiang Uyghur Autonomous Region and other places.” In addition to citing national security, Japan aims to prevent advanced technology from infringing upon human rights. Huh? Japan is not alone. Numerous countries, including the US, are now assessing whether technology coming from China, ranging from facial recognition to cameras, is safe to use. Bottom Line: This is mainly about the government’s own safety. And the reality is your government is already tracking you.

Technology and the MIC

Most people don’t realize that the United States and China have been locked in a new military arms race that hasn’t made the headlines. Cue up: The CoronaHoax/Distraction. And there have been lots of conspiracy theories about how the coronavirus was engineered in a Chinese weapons lab as a biological weapon. READ: Tin-Foil Times (HERE) As a result, the real military technology advancements never seem to gather attention. Translation: Both the USA and China are pursuing military nanotechnology, including linking soldiers’ brains directly to computers. That may sound like far-fetched Terminator/Ahhhhnold Schwarzenegger stuff…but it is very real. Most people forget that President Clinton proclaimed a National Nanotechnology Initiative. As a result, US government agencies have been heavily engaged in nanotechnology research. Of course, most of the effort has been funded by the Defense Department (taxpayer dollars), with the goal of creating a new kind of warrior that links the human brain to machines. Wait! What?
Ironically (or NOT), this would be far superior to robot soldiers. Why? Because these soldiers would be connected to millions of sensors and to the military computer in the cloud. As a result, the capability of the human brain would supposedly be expanded exponentially. But what happens when someone hacks into the network? Could an enemy turn your army against its creator? You Betcha’. And all of this will be very conveniently linked to the Proxy War in Ukraine. Rest assured the Political Chaos of 2023 will play an important part in sounding the drums of war. Read about it (HERE) and how to adjust your portfolio accordingly. And share this with a friend…especially if they aren’t familiar with AI and what it can do. They’ll thank YOU later. Remember: We’re Not Just About Finance. But we use finance to give you hope.

Invest with confidence.

Sincerely,
James Vincent
The Reverend of Finance
Text
Rise Of The Machines
It’s been over 35 years since The Terminator was released, and rather than a global moratorium on weaponized robots, we’re instead seeing an explosion of research into autonomous tanks, aircraft, humanoid robots, and AI software systems to pull the trigger.
There’s a grim irony in the fact that a cautionary tale about autonomous killing machines has turned into an arms race to see who can develop them first—and it’s even more ironic that the organizations developing these technologies reference the movie when describing their projects: F.E.D.O.R.—with a gun, but “not a Terminator.”
FEDOR
In a recent tweet, Russian Deputy Prime Minister Dmitry Rogozin described the “Robot platform F.E.D.O.R. showing shooting skills with two hands,” and quickly added, “we are not creating a Terminator, but artificial intelligence that will be of great practical significance in various fields.”
Ostensibly, F.E.D.O.R. has been developed for rescue missions, and the prototype was even sent to the International Space Station to conduct repairs, but as The Independent has written, “military uses have also been suggested by engineers.”
Atlas
Despite internet confusion from a viral hoax video by “BossTown Dynamics,” the real Atlas robot developed by Boston Dynamics has never fired a gun—but it can run, do backflips, and parkour. Developed under grants by DARPA, this robot is also designed for disaster response, but that didn’t stop ExtremeTech from describing it as a “Real World Terminator,” saying:
At 6’2″ and 330lbs, Atlas is incredibly imposing…while Atlas is initially conceived as a disaster response robot, such as cleaning up and looking for survivors after a Fukushima-like disaster, it’s easy to imagine Atlas being the basis of a robotic army.
THeMIS
Futurism.com described this robotic tank as being “straight out of the Terminator,” and cited a C4ISRNET article indicating that it was equipped with a 12.7mm machine gun and a 40mm automatic grenade launcher in a recent demonstration.
Likened to the T-1 robot tank in Terminator 3, the Milrem THeMIS is one of many robotic tanks currently under development, including the Ripsaw M5 Robo-Tank, Miloš UGV, Gladiator TUGV, Foster-Miller TALON, and many more—all heavily armed, and currently all requiring a human operator to pilot them remotely.
The Predator Drone
The General Atomics MQ-1 Predator drone was designed in the ’90s for reconnaissance, but within a decade the Air Force had armed it for drone strikes, and created the MQ-9 Reaper as a successor—which the USAF Chief of Staff called “a true hunter-killer.”
The Guardian reports that Britain is funding research into drones that decide who they kill, and of the 36 countries currently using armed drones, analyst Paul Scharre says it’s “very likely that nations will invest in autonomous technology, if nothing else out of fear that their adversaries are doing so.”
Google backed out of drone research because of the ethical implications, but that hasn’t stopped military organizations from pursuing autonomous killing machines, which have the advantage of “freeing current pilots from the moral responsibility of casualties.”
SkyNet
The Terminator franchise wouldn’t be complete without the series arch-villain. As it turns out, SkyNet is already here, and according to Ars Technica, it’s already killed “thousands of people”:
“SkyNet engages in mass surveillance of Pakistan’s mobile phone network, and then uses a machine learning algorithm on the cellular network metadata of 55 million people to try and rate each person’s likelihood of being a terrorist.”
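The quote above describes a machine-learning system scoring people by their phone metadata. As a purely hypothetical illustration of how such a risk score works mechanically (the feature names, weights, and threshold here are invented for illustration and have nothing to do with the actual program):

```python
import math

# Invented example weights over call-metadata features; illustration only.
WEIGHTS = {"distinct_contacts": -0.01, "sim_swaps": 0.8, "overnight_trips": 0.3}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Map metadata features to a logistic 'risk' score between 0 and 1."""
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# An ordinary subscriber scores low; unusual metadata pushes the score up.
ordinary_user = risk_score({"distinct_contacts": 40, "sim_swaps": 0, "overnight_trips": 1})
flagged_user = risk_score({"distinct_contacts": 5, "sim_swaps": 4, "overnight_trips": 10})
```

The mechanics are mundane: the score is only as good as its labels and features, which is exactly why critics of the program worried about innocent people being misclassified at scale.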
Today’s SkyNet is an NSA surveillance program that isn’t self-aware, and it doesn’t directly control weapons systems—unlike the ambitious Strategic Computing project, back when DARPA Tried to Build Skynet in the 1980s:
“The system was supposed to create a world where autonomous vehicles not only provide intelligence on any enemy worldwide, but could strike with deadly precision from land, sea, and air. It was to be a global network that connected every aspect of the U.S. military’s technological capabilities — capabilities that depended on new, impossibly fast computers.”
Those impossibly fast computers exist today, and DARPA hasn’t given up on the idea; they’ve simply rebranded it “Assured Autonomy.” The goal remains the same: creating systems able to “accomplish goals independently, or with minimal supervision from human operators in environments that are complex and unpredictable.”
Conclusion
As I said in the beginning, all the pieces for Judgment Day are in place. The nukes, the robots, the AI systems—it’s like putting together a jigsaw puzzle, and the only thing missing is a few more years of R&D and the malevolent spark of machine intelligence willing to end the world.
Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race,” and Elon Musk concurred—calling it humanity’s “biggest existential threat.” Both of them may be mistaken, but that raises a larger question:
If AI is dangerously unpredictable, why are we arming it?
Earlier I mentioned “freeing current pilots from the moral responsibility of casualties”, but the truth is they shouldn’t be freed from it. The decision to take a human life has gravity to it — and knowing that you’ll have to live with that choice is part of the decision-making process. It’s called having a conscience, and it’s something machines lack.
Conscience — not calculation — is what kept us from launching the nukes during the 20th century. The ICBMs are ready, but despite 70 years of saber-rattling, the decision to use them is simply too big, ugly & final for us to push the button. So we’re teaching the machines how to do it instead.
To be fair, The Terminator and its sequels were as much a commentary on the time they were produced as they were a warning to the future—but at the root of all these films remains a constant reminder: beware the consequences of giving machines the power to decide life and death.