Striking a balance between data analytics and data privacy
At the beginning of the decade, the case was being made for organizations to adopt a data-driven business model to retain their competitive edge, because big data and analytics promised to transform business performance. As we stand at the threshold of the next decade, data has become even more important, as is evident from NewVantage Partners' 2019 Big Data and AI Executive Survey.
The survey's findings indicate that companies are acknowledging the need for a data-driven business model:
92% of respondents reported an acceleration in the pace of big data and AI initiatives
88% reported a greater urgency to invest in big data and AI
55% of companies reported big data investments in excess of $50 million
68% of organizations had a Chief Data Officer (CDO), compared with just 12% in 2012
However, these data-driven initiatives can run into data privacy problems if privacy is not addressed proactively. The ever-evolving data privacy regulations across the globe are forcing enterprises to look for ways to use key data without conflicting with privacy requirements.
The 2018 Cambridge Analytica scandal was a watershed moment in the history of data privacy when people realized what could happen to their personal data in the wrong hands. The scandal also brought to light the role of businesses in ensuring data security measures for storing and providing access to customer data. Records of 87 million Facebook users had been harvested by simply hosting a quiz.
The interesting point was that only a small fraction of those 87 million users had actually taken the quiz; Cambridge Analytica was nonetheless able to access the personal records of all the Facebook “friends” of the users who had. The systems needed to maintain data privacy simply were not in place.
In this article, we will discuss why a data-driven approach is essential for businesses and how they can strike a balance between data analytics and data privacy. Let’s dive straight in.
Why businesses need a data-driven approach
The data-driven approach is no longer a luxury; it is becoming a strategic necessity that can make or break a company’s chances of future growth. Data-driven businesses have access to meaningful operational data in all business-critical areas, which strengthens both decision making and process innovation, two capabilities crucial to business growth.
Collecting and analyzing data is nothing new for businesses: customer feedback forms, telephone and mail surveys, and customer interviews have been around for decades. But the Internet has taken every form of data collection to another level, and the Internet of Things churns out data every second, sometimes without the customer even being aware of it. Humans now generate roughly 2.5 quintillion bytes of data every day.
Enables data-driven decision making
Using data, and the insights it provides, to make an informed decision is called data-driven decision making. Past data can provide estimated projections for the future, which can be used to validate a decision before it is taken.
Businesses may collect this data through survey responses, user testing, or by launching a new product or service to a selected group of customers. Which data to include, and how, depends on the business goals and the data available.
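To make the idea concrete, here is a minimal, hypothetical sketch in Python: it fits a linear trend to twelve months of invented sales figures and projects the next quarter. The numbers are made up for illustration; real projections would use a business's own historical data and a model suited to it.

```python
# Minimal sketch: project future values from past data to sanity-check a decision.
# The sales figures below are invented for illustration.
import numpy as np

monthly_sales = np.array([120, 132, 128, 141, 150, 158, 163, 171, 176, 184, 190, 199])
months = np.arange(len(monthly_sales))

# Fit a simple linear trend (slope and intercept) to the historical series.
slope, intercept = np.polyfit(months, monthly_sales, deg=1)

# Project the next three months from the fitted trend.
future_months = np.arange(len(monthly_sales), len(monthly_sales) + 3)
projection = slope * future_months + intercept

print("Projected sales for the next quarter:", np.round(projection, 1))
```

In practice the projection would be compared against the outcome the decision assumes, which is the "validation" step the paragraph above describes.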
Only 41.4% of CDOs considered their chief responsibility to be managing and leveraging data as an enterprise asset. The sheer volume of data probably makes it difficult for them to turn it into real, actionable insights.
Generates more confidence
Traditionally, business decisions were made on the basis of gut instinct and past experience. This approach has two major flaws. First, gut instinct does not always prove right. Second, not every business has access to deep experience at all times; if a company relies only on experience, what happens when its most senior leader is no longer available to guide it?
Data-driven decision making can be adopted by any organization, irrespective of its age or expertise. Data can serve as the benchmark against which the efficacy of current processes is verified; it can also be used to fine-tune those processes and reliably predict how effective they will be. Data is also more logical and tangible than gut instinct or experience. By using data to make decisions, you effectively remove the subjective elements from them, which makes you more confident of the outcomes.
Ensures cost savings
Taking a data-driven approach to business decreases expenses, as a survey of Fortune 1000 executives found: 49% of the organizations claimed their data projects were successful and reported an actual decrease in expenses.
Is data privacy a hurdle to adopting a data-driven model?
Making decisions on the basis of hard evidence in the form of data requires a major cultural, technological, and mindset change. Established organizations face internal barriers, both cultural and practical (think legacy infrastructure and traditional ways of operating). But this does not mean they cannot transform their situation; if anything, organizations that are decades old have that much more data at their disposal.
Still, they struggle to exploit it because they are unable to transform data into usable, actionable insights. Forrester reports that between 60% and 73% of data within an enterprise goes unused for analytics. This could be due to siloed data, poor reliability, and a lack of analytical talent; the data may be spread in different formats across different systems and cannot be compared using legacy tools.
The primary requirement for established organizations is to link data to business-critical impact. Data, and the insights generated from it, must be easily accessible, interpretable, and actionable whenever required. This becomes possible when organizations employ the requisite technology and stop thinking in terms of “we have always done it this way.” And they must, because the true power of data lies in its use.
Once the cultural challenges are taken care of, two major operational challenges come up in creating a data-driven approach to business:
Evolving data privacy regulations
Increasing consumer awareness
Privacy can broadly be defined as an individual's control over access to their individually identifiable personal data.
The GDPR defines personal data as any information relating to an identified or identifiable natural person. It further clarifies that an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, or an online identifier, or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.
Data privacy compliance
Data can be misused online in multiple ways, from banking fraud to identity theft. Users are increasingly aware of the importance of keeping their private data safe, and they are also aware of their legal rights to data privacy. Enterprises collecting, storing, and processing data had it much easier initially, but as people became more concerned about who has access to their private data, governments woke up to the extent to which personal data can be misused in unscrupulous hands.
Data privacy and security guidelines and regulations are being drafted and made into law all over the world. The GDPR and the CCPA are the two most prominent, but other countries have their own laws; the Data Protection Act 2018 is the UK's way of making the organizations that collect private data, rather than the individuals who provide it, responsible for keeping it safe.
Enterprises need to comply with the privacy regulations of the territories where they operate as well as those where their customers reside. These compliance requirements not only pose operational challenges but increase costs as well.
Handling the privacy-aware customer
Customers are aware of the pitfalls of data breaches, so they prefer the products and services of organizations that are transparent about their data collection, storage, and analysis processes. Consumers also want to know who has access to the insights generated from their data.
Customers are just one click away from switching allegiance, so enterprises need to be transparent about their privacy policies to retain customer loyalty and acquire new customers.
Balancing the need for data analytics and privacy
Contrary to the widespread belief that data analytics and data privacy are contradictory, it is possible to strike a balance between them. Let us see how.
Holistic approach to data privacy compliance
A CGOC study found that just 57% of organizations train their staff on data protection compliance, and of those only 25% conduct regular training and audits. This indicates a disjointed approach to data privacy compliance. The study further reveals that 50% of respondents consider internal staff and practices the biggest threat to data security, compared with just 38% who consider external hackers the biggest threat.
Data privacy compliance can be achieved only if enterprises do away with siloed operations and ensure continuous collaboration among their IT, security, legal, and marketing departments. The focus must be on using the right tools and technologies to ensure data privacy without increasing business overheads.
People handling private data must be sensitized to the perils of data theft because one weak link in the security chain, from data collection and storage to analysis and distribution of insights, can prove dangerous.
Privacy-first design for data analytics projects
It is possible to protect the data used for data analytics projects. The most important and easiest way of doing this is anonymizing the data. Most data analytics projects are designed to identify patterns and trends in huge volumes of data, and it is not necessary to identify individuals from their personal data to get this analysis. Anonymizing data can therefore ensure privacy while the data remains useful for generating insights.
In fact, anonymization takes data outside the scope of protection regulations like the GDPR and CCPA, because these regulations apply only to data that can identify an individual. When no individual can be identified from the available data, the obligations of these privacy regulations do not apply.
This relieves an organization using anonymized data of most privacy compliance requirements. All it needs to ensure is that the data has been collected legally and remains anonymized during storage and sharing.
If the data is truly anonymized, it is not even subject to the rights of data subjects, who therefore cannot request access to their data, ask for it to be removed, or object to it being processed in a certain way. Using truly anonymized data is close to the ideal situation, in which the business potential of data can be unlocked without compromising the privacy of individuals.
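As a minimal sketch of what preparing a dataset for analytics could look like, the Python snippet below drops direct identifiers and replaces a customer ID with a salted hash before the data is handed to an analytics pipeline. The column names and salt are hypothetical, and note that hashing identifiers is strictly pseudonymization; genuinely anonymizing data under the GDPR also requires dealing with quasi-identifiers and re-identification risk.

```python
# Minimal sketch: strip direct identifiers before analytics, assuming a pandas
# DataFrame with the hypothetical columns below. Hashing with a secret salt is
# pseudonymization rather than full anonymization; quasi-identifiers (e.g. ZIP
# code plus birth date) may still allow re-identification and need separate care.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "email", "phone"]   # dropped entirely
PSEUDONYMIZE = ["customer_id"]                    # replaced by salted hashes
SALT = "replace-with-a-secret-salt"

def pseudonymize(value) -> str:
    return hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:16]

def prepare_for_analytics(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    for col in PSEUDONYMIZE:
        if col in out.columns:
            out[col] = out[col].map(pseudonymize)
    return out

# Example usage with invented data:
df = pd.DataFrame({
    "customer_id": [101, 102],
    "name": ["Ann", "Bob"],
    "email": ["ann@example.com", "bob@example.com"],
    "phone": ["555-0100", "555-0101"],
    "purchase_total": [42.5, 17.0],
})
print(prepare_for_analytics(df))
```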
Be honest about your data privacy policies
Brand trust has never been more important for businesses than it is now. Customers have always been particular about which products they choose and which services they use. Accenture recently conducted a study that quantified the impact of trust: a USD 30 billion retail company experiencing a material drop in trust stands to lose USD 4 billion in future revenue, and 54% of the companies surveyed experienced such a material drop in trust over the last two years. So it is not a matter of if but when a trust issue will crop up.
Being honest about your data privacy policies is an effective way of winning customers' trust. Building customer trust also improves your data quality, because people are more honest about their personal data with organizations they trust. Privacy can thus be used as a differentiator to establish brand loyalty and engage with customers on an equal footing.
How TechAhead helps in developing privacy compliant apps
TechAhead's team of app design and mobile application development experts approaches app design with data privacy first. The team has delivered 100+ standalone and integrated data analytics projects that are fully compliant with data privacy regulations.
Summary
A data-driven approach to business helps an organization retain its competitive edge, because it provides access to meaningful operational data across all functions. This data can be used to generate insights that support and strengthen decision making. Businesses have been collecting and analyzing data for ages with the help of feedback forms, telephone interviews, mail surveys, and customer interviews, but the Internet has made data collection far easier, faster, and cheaper on a global scale.
Data-driven decision making is informed decision making, which makes businesses more confident about their future. Taking a data-driven approach to business also decreases expenses, as executives claimed in a survey of Fortune 1000 companies.
However, these data-driven initiatives can run afoul of data privacy if the data collected is not shared, analyzed, and distributed responsibly. As people and governments become aware of what can happen to personal data in the wrong hands, data privacy regulations keep evolving across the globe; the GDPR and the CCPA are just two cases in point.
Besides evolving data privacy regulations, businesses also need to deftly handle increasingly aware consumers. Consumers know their legal rights to their personal data, even after a business has collected it: they can demand access to their data at any time and object to how it is processed to generate insights. All these constraints can restrict the amount of data that organizations are able to collect and analyze.
But data, and the insights it generates, is so critical to businesses that a middle path has to be found: a balance between maintaining data privacy and generating useful insights through data analytics. Any organization planning to collect user data and use it for insights must comply with all data privacy regulations wherever it operates and wherever its customers reside.
Another effective way of maintaining data privacy is anonymizing the data collected, so that individuals cannot be identified from the data used for analytics. If data is truly anonymized, privacy regulations such as the GDPR no longer apply, because they cover only data that can identify a person.
Source: https://www.techaheadcorp.com/blog/data-analytics-and-data-privacy/
To mitigate the costs of future pandemics, establish a common data space
Source: http://khalilhumam.com/to-mitigate-the-costs-of-future-pandemics-establish-a-common-data-space/
By Stephanie Chin, Caitlin Chin
The COVID-19 pandemic calls attention to the fact that, despite best intentions and efforts, real-time data emerging from global crises may be uncertain, rapidly evolving, incomplete, or even misleading. The lag between COVID-19 transmission and the onset of symptoms, as well as the lag between getting tested and receiving test results, can lead to outdated infection rate estimates and dynamically changing public health guidance, which in turn reduces public understanding and compliance. Governments and academic researchers must choose whether and how to update backlogged information or retroactively fix past statistics, which may lead to changing, reversing, or delayed policies.
During recent outbreaks, including the 2015 Ebola epidemic and the ongoing COVID-19 pandemic, sharing genomic sequencing data in public databases and data repositories such as GenBank and GISAID has proven extremely valuable, and is supported by international agreements such as the 1996 Bermuda Principles and the 2010 Nagoya Protocol. Meanwhile, academic interest in public health data sharing has led to innovative platforms to map disease occurrence, leverage open source and social media intelligence about public health, and even crowdsource data collection. However, these efforts are fragmented at both the global and national levels.
Building common data spaces to enhance information flows
To improve data sharing during global public health crises, it is time to explore the establishment of a common data space for highly infectious diseases. Common data spaces integrate multiple data sources, enabling a more comprehensive analysis of data based on greater volume, range, and access.
At its essence, a common data space is like a public library system, which has collections of different types of resources, from books to video games; processes to integrate new resources and to borrow resources from other libraries; a catalog system to organize, sort, and search through resources; a library card system to manage users and authorization; and even curated collections or displays that highlight themes among resources.
Even before the COVID-19 pandemic, there was significant momentum to make critical data more widely accessible. In the United States, Title II of the Foundations for Evidence-Based Policymaking Act of 2018, or the OPEN Government Data Act, requires federal agencies to publish their information online as open data, using standardized, machine-readable data formats. This information is now available on the federal data.gov catalog and includes 50 state- or regional-level data hubs and 47 city- or county-level data hubs. In Europe, the European Commission released a data strategy in February 2020 that calls for common data spaces in nine sectors, including healthcare, shared by EU businesses and governments.
Going further, a common data space could help identify outbreaks and accelerate the development of new treatments by compiling line list incidence data, epidemiological information and models, genome and protein sequencing, testing protocols, results of clinical trials, passive environmental monitoring data, and more. Moreover, it could foster a common understanding and consensus around the facts, a prerequisite to reach international buy-in on policies to address situations unique to COVID-19 or future pandemics, such as the distribution of medical equipment and PPE, disruption to the tourism industry and global supply chains, social distancing or quarantine, and mass closures of businesses.
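To illustrate the library analogy in code, here is a deliberately simplified Python sketch, with invented dataset names and roles, of the catalog and authorization pieces of a common data space; a real implementation would involve shared metadata standards, federated storage, and far richer access control.

```python
# Minimal sketch of the "library catalog" idea behind a common data space:
# datasets are registered with metadata, can be searched, and access is checked
# against a simple authorization list. All names and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    source: str                 # e.g. "national registry", "GISAID"
    tags: list
    authorized_roles: set = field(default_factory=lambda: {"researcher"})

class DataCatalog:
    def __init__(self):
        self._datasets = {}

    def register(self, ds: Dataset) -> None:
        self._datasets[ds.name] = ds

    def search(self, tag: str) -> list:
        return [d.name for d in self._datasets.values() if tag in d.tags]

    def can_access(self, name: str, role: str) -> bool:
        ds = self._datasets.get(name)
        return ds is not None and role in ds.authorized_roles

catalog = DataCatalog()
catalog.register(Dataset("covid19_sequences", "GISAID", ["genomics", "covid19"]))
catalog.register(Dataset("case_counts", "national registry", ["epidemiology", "covid19"]))
print(catalog.search("covid19"))                       # both dataset names
print(catalog.can_access("covid19_sequences", "public"))  # False
```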
Challenges of establishing a global common data space
Despite these potential advantages, setting up a common data space that is usable and secure is no simple task. Even with widespread consensus within academia on the importance of sharing public health data, there are real-world technical, geopolitical, and ethical barriers to implementation on a global scale.
First is the technical challenge of setting up a comprehensive, secure, and usable data space system. Integrating data from multiple data sources can be time-consuming and difficult, especially considering low data quality, disparate methods of data collection, lags in data reporting, and inherent uncertainties. Thus, it is important to regularly audit data in shared data spaces, flagging poor data quality and outdated information, to communicate levels of confidence or uncertainty in the data. In addition, novel “data space” approaches can help avoid the high upfront costs of cleaning, processing, and integrating data ex ante, while emerging AI and ML algorithms and data standards could automatically provide basic functionality, enabling researchers to focus their efforts on advanced integrations. The application of blockchain could improve the security and resiliency of systems against accidental or malicious data corruption.
On the geopolitical front, issues of data protectionism, national security, economic competition, lack of trust, and differing privacy regulations and values impede the development of an international common data space. Pre-print publication policies have helped incentivize data sharing and reduce academic concerns about IP, data ownership, and publication rights, yet there remains a gap in translating academic research to policymakers and the general public. In the past, the exchange of epidemiological prediction models, risk maps, and disaster planning simulations has helped researchers understand country-specific concerns while navigating future uncertainty and high levels of stakeholder complexity.
Underlying these geopolitical issues are ethical questions about data access, equity, and privacy. For instance, how can we ensure that the costs and benefits of a common data space are fairly distributed? It will become necessary to fill gaps in disease detection in under-resourced areas, while simultaneously ensuring fair and affordable access to resulting medicines and treatments among communities that contribute data. On the other hand, it is essential to consider how to address “free riding” nations, which can benefit from a common data space without sharing their own data. We must also question how to handle ownership and attribution when researchers share data, what ethical research and accountability standards are necessary, under which contexts to require informed consent from research participants, and how to offer widely accessible data during public health emergencies while maintaining the privacy of all individuals involved.
With these considerations in mind, the question then turns to whether a national or regional system of data sharing is the most realistic goal, or whether it is possible to achieve a truly global system of common data sharing. Paradoxically, a common data space may help increase international trust and cooperation during future pandemics, but it cannot be enacted without some baseline level of international trust and cooperation.
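On the point about protecting shared records from accidental or malicious corruption, blockchain is one option; the toy Python sketch below shows the simpler underlying idea of a tamper-evident hash chain over submitted case records. The record fields are invented, and this is an illustration of the concept rather than a production design.

```python
# Minimal sketch of tamper evidence for shared records: each entry stores the
# hash of the previous entry, so any later modification breaks verification.
import hashlib
import json

def entry_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "prev": prev, "hash": entry_hash(record, prev)})

def verify(chain: list) -> bool:
    prev = "genesis"
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != entry_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"country": "A", "date": "2020-04-01", "new_cases": 120})
append(chain, {"country": "A", "date": "2020-04-02", "new_cases": 134})
print(verify(chain))                       # True
chain[0]["record"]["new_cases"] = 90       # simulate tampering
print(verify(chain))                       # False
```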
Core technologies and tools for AI, big data, and cloud computing
Highlights and use cases from companies that are building the technologies needed to sustain their use of analytics and machine learning.
Profiles of IT executives suggest that many are planning to spend significantly on cloud computing and AI over the next year. This concurs with survey results we plan to release over the next few months. In a forthcoming survey, “Evolving Data Infrastructure,” we found strong interest in machine learning (ML) among respondents across geographic regions. Not only are companies interested in tools, technologies, and people who can advance the use of ML within their organizations, they are beginning to build the core foundational technologies needed to sustain their usage of analytics and ML. With that said, important challenges remain. In other surveys we ran, we found “lack of skilled people,” “lack of data,” and cultural and organizational challenges to be the leading obstacles cited as holding back the adoption of machine learning and AI.
In this post, I’ll describe some of the core technologies and tools companies are beginning to evaluate and build. Many companies are just beginning to address the interplay between their suite of AI, big data, and cloud technologies. I’ll also highlight some interesting use cases and applications of data, analytics, and machine learning. The resource examples I’ll cite will be drawn from the upcoming Strata Data conference in San Francisco, where leading companies and speakers will share their learnings on the topics covered in this post.
AI and machine learning in the enterprise
When asked what holds back the adoption of machine learning and AI, survey respondents for our upcoming report, “Evolving Data Infrastructure,” cited “company culture” and “difficulties in identifying appropriate business use cases” among the leading reasons. Attendees of the Strata Business Summit will have the opportunity to explore these issues through training sessions, tutorials, briefings, and real-world case studies from practitioners and companies. Recent improvements in tools and technologies have meant that techniques like deep learning are now being used to solve common problems, including forecasting, text mining and language understanding, and personalization. We’ve assembled sessions from leading companies, many of which will share case studies of applications of machine learning methods, including multiple presentations involving deep learning:
Strata Business Summit
AI and machine learning in the enterprise
Deep Learning
Temporal data and time-series analytics
“Forecasting Financial Time Series with Deep Learning on Azure”
Text and Language processing and analysis
Graph technologies and analytics
Foundational data technologies
Machine learning and AI require data, specifically labeled data for training models. There are many articles that point to the explosion of data, but for that data to be useful for analytics and ML, it has to be collected, transported, cleaned, stored, and combined with other data sources. Thus, our surveys have shown that companies tend to apply machine learning and AI in areas where they already had simpler use cases (business intelligence and analytics) that required data technologies to be in place. In our upcoming report, “Evolving Data Infrastructure,” respondents indicated they are beginning to build the essential components needed to sustain machine learning and AI within their organizations:
Take data lineage, an increasingly important consideration in an age when machine learning, AI, security, and privacy are critical for companies. At Strata Data San Francisco, Netflix, Intuit, and Lyft will describe internal systems designed to help users understand the evolution of available data resources. As companies ingest and use more data, there are many more users and consumers of that data within their organizations. Data lineage, data catalog, and data governance solutions can increase usage of data systems by enhancing trustworthiness of data. Moving forward, tracking data provenance is going to be important for security, compliance, and for auditing and debugging ML systems.
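As a rough, conceptual illustration of lineage tracking (a simplified sketch, not a description of Netflix's, Intuit's, or Lyft's internal systems), the Python snippet below records which inputs and transformations produced each derived dataset and prints its provenance tree; all dataset and transformation names are invented.

```python
# Minimal sketch of data lineage tracking: every derived dataset records which
# inputs and which transformation produced it, so its provenance can be traced
# back for auditing or debugging.
class LineageGraph:
    def __init__(self):
        self._parents = {}   # dataset -> (transformation, [input datasets])

    def record(self, output: str, transformation: str, inputs: list) -> None:
        self._parents[output] = (transformation, list(inputs))

    def provenance(self, dataset: str, depth: int = 0) -> None:
        transformation, inputs = self._parents.get(dataset, (None, []))
        label = f"  <- {transformation}" if transformation else "  (raw source)"
        print("  " * depth + dataset + label)
        for parent in inputs:
            self.provenance(parent, depth + 1)

lineage = LineageGraph()
lineage.record("clean_events", "deduplicate+normalize", ["raw_click_logs"])
lineage.record("training_set", "join+label", ["clean_events", "crm_export"])
lineage.provenance("training_set")
```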
Data Platforms
Data Integration and Data Pipelines
Model lifecycle management
Automation in data science and big data
Data preparation, data governance, and data lineage
Open Data, Data Generation and Data Networks
Companies are embracing AI and data technologies in the cloud
In the survey behind our upcoming report, “Evolving Data Infrastructure,” we found that 85% of respondents had data infrastructure in at least one of the seven cloud providers we listed, with nearly two-thirds (63%) using Amazon Web Services (AWS) for some portion of their data infrastructure. We found companies run a mix of open source technologies and managed services, and many respondents indicated they used more than one cloud provider.
This agrees with other surveys I’ve come across that indicated IT executives plan to invest a significant portion of their budgets in cloud computing resources and services.
AI and Data technologies in the cloud
“Building a Serverless Big Data Application on AWS”
Streaming and realtime analytics
“Architecture and Algorithms for End-to-End Streaming Data Processing”
“Running multidisciplinary big data workloads in the cloud”
Security and privacy
Regulations in Europe (GDPR) and California (Consumer Privacy Act) have placed concepts like “user control” and “privacy-by-design” at the forefront for companies wanting to deploy ML. With these new regulations in mind, the research community has stepped up, and new privacy-preserving tools and techniques, including differential privacy, are becoming available for both business intelligence and ML applications. Strata Data San Francisco will feature sessions on important topics including: data security and data privacy; the use of data, analytics, and ML in (cyber)security; privacy-preserving analytics; and secure machine learning.
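As a toy example of one privacy-preserving technique mentioned above, the sketch below answers a counting query with differential privacy by adding Laplace noise calibrated to an epsilon parameter. The data and epsilon value are invented, and production systems need much more care (privacy budgets, clamping, floating-point subtleties).

```python
# Minimal sketch of differentially private counting via the Laplace mechanism.
# A counting query has sensitivity 1, so Laplace noise with scale 1/epsilon
# gives epsilon-differential privacy for the released count.
import random
import numpy as np

def dp_count(records, predicate, epsilon: float = 0.5) -> float:
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many synthetic users opted in, released with noise.
users = [{"opted_in": random.random() < 0.3} for _ in range(1000)]
print(round(dp_count(users, lambda u: u["opted_in"]), 1))
```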
Security and Privacy
“Manage the Risks of ML - In Practice”
“Machine Learning on Encrypted Data: Challenges and Opportunities”
“Framework to quantitatively assess ML Safety – Technical Implementation & Best Practices”
Ethics
When it comes to ethics, it’s fair to say the data community (and the broader technology community) is very engaged. As I noted in an earlier post, the next generation of data scientists and data engineers is undergoing training and engaging in discussions pertaining to ethics. Many universities are offering courses; some, like UC Berkeley, have multiple courses. We’re at the point where companies are beginning to formulate and share best practices and processes. We are pleased to announce that we have a slate of tutorials and sessions, and a full day of presentations dedicated to ethics, at the upcoming Strata Data conference in San Francisco.
Ethics
Strata Data Ethics Summit - a day of presentations from leading experts and practitioners
“The Measure and Mismeasure of Fairness in Machine Learning”
Use cases and solutions
Data, machine learning, and AI are impacting companies across industries and geographic locations. Companies are beginning to build key components including solutions that address data lineage and data governance, as well as tools that can increase the productivity of their data scientists (“data science platforms”). Many technologies and techniques are general purpose and cut across domains and industries. However, there are tools and methods that are used more heavily in certain verticals, and more importantly, we all like learning what our industry peers have been building and thinking about. Here are some related talks from a few verticals:
Media, Marketing, Advertising
Retail and e-commerce
Transportation and Logistics
Health and Medicine
Financial services
Continue reading Core technologies and tools for AI, big data, and cloud computing.
https://oreil.ly/2SKShsU
What leading consultants say about the usage of AI in marketing?
30-second summary:
In Deloitte’s 2019 CMO Survey Report, 56% of participants anticipated using AI for content personalization. 33% of respondents foresee applying AI for improving marketing ROI by optimizing marketing content and timing.
McKinsey’s survey found that 44% of companies reduced operational costs and increased business revenue by using AI in marketing.
Bain and Company found that marketing insights gathered through new machine-learning capabilities helped one company achieve a 10x gain in marketing performance. The company also saw a 25% increase in additional revenue through cross-selling.
An EY study of CEOs and business leaders reveals that 62% of respondents believe artificial intelligence will have a significant impact on creating efficiencies at their company.
In a survey conducted by BCG, 90% of respondents agreed that artificial intelligence represents a business opportunity for their company. However, 70% of companies report minimal or no impact from AI so far.
Marketers have started using artificial intelligence to enhance every step of the customer journey. AI is used to forecast sales, deliver personalized website experience, offer 24/7 customer service via chatbots, and optimize ROI via programmatic ad targeting.
Leading consultants like Deloitte, McKinsey and Company, Bain and Company, EY LLP, and Boston Consulting Group have been researching to identify the role of artificial intelligence in marketing.
Let’s look at what these leading consultants say about the usage of AI in marketing.
Deloitte: “Deliver personalized experience at enterprise scale using AI”
In Deloitte’s 2019 CMO Survey Report, 56% of participants anticipated using AI for content personalization. 33% of respondents foresee applying AI for improving marketing ROI by optimizing marketing content and timing.
Marketers cannot process tons of data about all their customers at one time. That’s why they have been dividing customers into different segments (based on characteristics, browsing behavior, and purchase history).
However, that might not be enough. Customers now demand brands to act on their needs, just like they are treated by their favorite staff at their go-to restaurant.
It is challenging to offer this type of personalization at an enterprise level. That’s where artificial intelligence can help businesses deliver those experiences to millions of customers at a time.
56% of marketers believe AI can help yield better engagement with customers and prospects. Companies like Omniconvert are already using artificial intelligence to deploy more than 500 new automated experiments every 4 hours.
Deloitte says, “Artificial intelligence and machine learning can make decisions in the moment based on hundreds or thousands of data points far more than we humans can consider and do so at enterprise scale.”
This doesn’t mean relying entirely on AI for your business’s marketing. It’s about using the superpowers of artificial intelligence to make lightning-quick decisions and nurture millions of individual customers simultaneously.
To use AI effectively in marketing, you need to have an appropriate level of transparency and interpretability. Without interpretability, it would be difficult to determine how artificial intelligence is contributing to your business.
By proactively and strategically finding ways to integrate AI and expanding machine-human collaboration across your organization, you can build more-human bonds with millions of your customers.
McKinsey and Company: “Implement core practices to boost artificial intelligence results”
McKinsey’s survey found that 44% of companies reduced operational costs and increased business revenue by using AI in marketing.
Artificial intelligence is expected to have around $2.6 trillion worth of business impact in sales and marketing annually.
Some companies, across various sectors, are attaining outsize business results from AI. These high performers saw both higher revenue increases and more significant cost decreases than other companies that use AI.
It was found that 20% of organizations implemented AI in their company in 2019, and the number is expected to grow in 2020.
According to McKinsey, these companies are more likely to apply AI core practices to drive value and mitigate the risks associated with the technology.
The core practices applied by artificial intelligence high-performers include:
Investing in AI talent and training.
Ensuring business staff and technical staff have the necessary skills.
Aligning business, analytics, and IT leaders to work together on specific problems.
Adjusting AI tactics with their corporate strategy.
Having an AI strategy with a clear enterprise-level road map of use cases.
Creating well-defined governance processes for critical data-related decisions.
Updating AI models frequently.
Using AI insights in real-time to enable daily decision making.
Tracking a comprehensive set of distinct AI performance indicators.
Implementing these core practices can increase your chances of improving results generated via artificial intelligence in your marketing.
Bain and Company: “Leverage the power of AI to boost cross-selling”
Online retailers using machine learning algorithms to generate customer intelligence and detailed shopper profiles have seen average order values increase by 5% to 10% and have experienced ROIs of 6x to 7x.
Cross-selling can improve your average order value and bottom-line profit. But, what products to cross-sell and when is a challenge for many marketers.
In one case study, a company focused on cross-selling using artificial intelligence.
The company had everything it takes to generate better results: good-quality data, the right technology, and internal talent. However, its analytics team was not aligned with business-unit and functional experts.
The company created a new team of both coders and analytics experts. The new squad changed the old approach to analytics to produce meaningful results.
The team assessed the current cross-selling performance and explored the events that triggered the sale of additional products.
They then leveraged artificial intelligence algorithms to determine which product a customer was likely to buy next. The team trained the AI by integrating 20 databases into a system that contained a 10-year history of the client and external data.
They used Agile development methodologies to break the project into small parts that covered every core task, such as data prep and loading, test and implementation, and knowledge transfer.
They also conducted weekly meetings with the top management to address roadblocks affecting the cross-selling.
The marketing insights gathered through new machine-learning capabilities helped gain 10x in marketing performance. The company also saw a 25% increase in additional revenue through cross-selling.
Strategically implementing AI to understand customer behavior and recommend additional products can significantly boost your cross-selling capabilities.
EY (Ernst and Young) LLP: “Assess the risks and overcome the barriers of implementing artificial intelligence”
An EY study of CEOs and business leaders reveals that 62% of respondents believe artificial intelligence will have a significant impact on creating efficiencies at their company.
Another 62% say AI will have a substantial role in their company staying competitive. Besides, 60% of respondents believe artificial intelligence will help them gain a better understanding of customers.
However, the biggest barrier to implementing artificial intelligence is the lack of skilled professionals. In a survey, EY found some restrictions affecting AI adoption. These barriers include:
Lack of experts needed to implement AI.
Absence of trust in the quality of data.
Concerns about data privacy and use.
Lack of required infrastructure and interoperability.
The two factors to overcome these barriers (as cited by CEOs and business leaders) are:
Having a compelling business case for AI.
Having a strategic vision and commitment to AI from C-level executives.
For AI implementation, you can take either a top-down or a bottom-up approach; both can work.
A top-down approach begins with identifying a business problem and then assesses the technical feasibility. A bottom-up approach begins with identifying an AI technology and then determines what value it can provide.
The biggest risk artificial intelligence poses to businesses is bias in data. AI will amplify that bias unless you put accurate checks in place to prevent it. If you fail to recognize the bias, you can end up spending heavily on marketing campaigns and seeing no results.
Regulators’ limited understanding of AI could also cause issues. Therefore, it is essential for organizations to invest in learning and in developing internal controls so that they can make informed, data-driven decisions.
Boston Consulting Group: “Combine AI and human elements of the business”
BCG says, “Organizations that combine the capabilities of humans and machines will develop superior customer experiences and relationships, more productive operations, and dramatically increased rates of innovation.”
A global FMCG company used AI to optimize its process for allocating marketing spending. Advanced analytics modeling compared ROI across brands, markets, and media channels and created a dynamic model to analyze different potential scenarios for allocating that spending.
As a result, the company made better spending decisions and saw a 10% increase in marketing ROI during the first 12 months.
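BCG does not publish the details of such models, but as a purely illustrative sketch of the scenario-comparison idea, the Python snippet below scores a few hypothetical budget allocations against invented diminishing-returns response curves per channel; real work would fit these curves to a company's own historical spend and revenue data.

```python
# Purely illustrative sketch of comparing marketing-spend scenarios across
# channels. The response curves (diminishing returns) and all numbers are
# invented, not drawn from BCG's or any client's actual model.
import math

# Invented saturation parameters per channel: revenue index = a * ln(1 + spend/b)
CHANNELS = {"search": (90, 20), "social": (70, 15), "tv": (120, 60)}

def expected_revenue(allocation: dict) -> float:
    return sum(a * math.log1p(allocation.get(ch, 0) / b) for ch, (a, b) in CHANNELS.items())

scenarios = {
    "current":       {"search": 40, "social": 30, "tv": 30},
    "digital-heavy": {"search": 55, "social": 35, "tv": 10},
    "tv-heavy":      {"search": 20, "social": 20, "tv": 60},
}

for name, alloc in scenarios.items():
    print(f"{name:14s} expected revenue index: {expected_revenue(alloc):.1f}")
```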
But is every company that implements AI winning?
In a survey conducted by BCG, 90% of respondents agreed that artificial intelligence represents a business opportunity for their company. However, 70% of companies report minimal or no impact from AI so far.
For AI to deliver exceptional results, companies need to integrate it into the individual processes that power the core of their business. Companies need to develop the right feedback loops so that artificial intelligence can get better with time.
According to BCG, companies that combine the human elements (organization, talent, and ways of working) with technology (data and digital platforms) see better outcomes. These outcomes include:
Personalized customer experiences and relationships
Bionic operations
New offers, services, and business models
Conclusion
Businesses that use AI in marketing see higher growth than those that don’t, and research and studies by top consulting firms bear this out. However, to get the maximum benefit from AI, it is crucial to determine the problem you want to solve and to hire the best talent to manage the technology.
Joydeep Bhattacharya is a digital marketing evangelist and author of the SEO Sandwitch blog.
The post What leading consultants say about the usage of AI in marketing? appeared first on ClickZ.
source http://wikimakemoney.com/2020/06/13/what-leading-consultants-say-about-the-usage-of-ai-in-marketing/
6 Key Surveillance Industry Predictions: Axis Communications
The surveillance industry has seen tremendous growth in the past 10 to 15 years. Once it was needed only by businesses, but it has now expanded into personal security as well. IoT and smart city concepts have advanced the surveillance industry, and organizations are now looking to video analytics systems to monitor customer behavior, reduce crime, and support marketing. Johan Paulsson, Chief Technology Officer at Axis Communications, shared six predictions for the surveillance industry in 2019, covering AI, cybersecurity, cloud, and many other aspects.
Artificial Intelligence
For all the attention and discussion, you could be forgiven for thinking that artificial intelligence (AI) is fundamentally changing every industry and sector. Though for the more cynical (and with reference to the Gartner Hype Cycle) you may feel that a lack of demonstrable applications of AI (and the associated machine learning and deep learning) points to either a Peak of inflated expectations or even the Trough of disillusionment. In reality, of course, progress differs from industry to industry and from application to application. In some fields – notably healthcare and specifically cancer detection – AI is already having a significant positive impact. In other areas, progress is steadier. Video surveillance is one of them.

Video Analytics
In our industry today, machine or deep learning is mostly used for video analytics, but we expect the technology will be an important component in many different applications and products in the future. Over time it will become a common tool for software engineers and will be included in many different environments and devices. But, again, its application will be driven by the most compelling use cases, not by the technology itself. There is a temptation in the surveillance and security sector to over-promise in relation to new technologies. This has been true of AI in video analytics and, particularly, in some of the claims made around the current application of deep learning. With AI and deep learning, as with any new technology, we’re committed to making sure its implementation is robust, reliable and addresses real customer challenges.

Deep Learning Phases
Deep learning consists of two different phases: the training phase and the execution phase. The former requires a lot of processing power, data and time, so it will most likely be run on a server and/or in the cloud, while additional training (fine tuning) could be done at the edge (which is a neat link into our next trend). The execution phase – the phase that requires ‘trained’ data to work – can be done at any level within the system, purely dependent on how much processing power is required and how time-critical the application is. Research and progress will continue, steadily, and bring incremental improvements and benefits over the next year rather than radical change.
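To make the two phases concrete, here is a minimal sketch of the split described above: a model trained where compute is plentiful, then serialized and loaded for lightweight per-frame inference closer to the camera. The features, labels, and file path are hypothetical stand-ins, not Axis's actual pipeline.

# Minimal sketch of the training/execution split described above.
# The features, labels, and file path are hypothetical stand-ins.
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Training phase (server / cloud): compute-heavy, done once ---
X_train = np.random.rand(500, 8)             # e.g. features extracted from video frames
y_train = (X_train[:, 0] > 0.5).astype(int)  # e.g. "person present" labels
model = LogisticRegression().fit(X_train, y_train)
with open("model.pkl", "wb") as f:           # ship this artifact to the device
    pickle.dump(model, f)

# --- Execution phase (edge / camera): cheap, runs per frame ---
with open("model.pkl", "rb") as f:
    edge_model = pickle.load(f)
frame_features = np.random.rand(1, 8)
print("person detected:", bool(edge_model.predict(frame_features)[0]))

The point is only the division of labor: the expensive fitting step runs once in the cloud, while the cheap predict step can run on constrained edge hardware.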
Cloud and Edge Computing
If AI could still be said to be in the earlier stages of the Gartner Hype Cycle, it’s difficult to argue that cloud computing is anything other than firmly established and heading towards, if not already on, the Plateau of productivity. There can be few organizations in the private or public spheres that aren’t making use of cloud computing at some level, and many have moved their entire infrastructures to a cloud-based model.

Internet of Things
That said, cloud computing is based on centralized computing in one or more data centers, and as the proliferation of connected Internet of Things (IoT) devices grows exponentially, so does the amount of data produced. Even as more data centers with ever-increasing capacity are created, this tsunami of data could become overwhelming. This can be particularly critical in areas such as video surveillance, where despite the development of technologies designed to reduce storage and bandwidth needs, data demands are still significant.

Edge Computing
This is where the benefits of edge computing come to the fore. In simple terms, as its name suggests, edge computing puts more data processing at the ‘edge’ of the network, close to where the data is collected by the sensor and before transfer to the data center. One particular benefit in some sectors relates to the speed of processing and the ability to act upon the data captured. Take, for instance, an autonomous vehicle. Without edge computing – where both data capture and processing take place in the vehicle itself – the delay in communication with a cloud-based data center, even if only milliseconds, might be the difference between the vehicle avoiding an accident or otherwise.

Edge Computing In Surveillance
In our business, edge computing means processing data within the camera itself. While perhaps not as dramatic as avoiding road accidents, the benefits can still be significant. Firstly, initial processing of data within the camera can significantly reduce the bandwidth demands of both data transfer and storage. Additionally, data can be anonymized and encrypted before it is transferred, addressing security and privacy concerns. Ultimately, cloud and edge computing will not be an ‘either…or’ decision; the two will work in balance to the greatest benefit.
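As an illustration of the "anonymize and encrypt before transfer" point, here is a minimal sketch. The metadata fields are invented, and the Fernet recipe from the Python cryptography package stands in for whatever scheme a real camera would actually use.

# Minimal sketch: anonymize identifying fields and encrypt the payload at the edge
# before sending it upstream. Field names are invented; Fernet is just one
# symmetric-encryption recipe, not necessarily what a real camera uses.
import hashlib
import json
from cryptography.fernet import Fernet

def anonymize(event, salt="device-local-salt"):
    """Replace direct identifiers with salted hashes before data leaves the device."""
    out = dict(event)
    out["plate"] = hashlib.sha256((salt + event["plate"]).encode()).hexdigest()[:16]
    return out

key = Fernet.generate_key()          # would be provisioned per device in practice
cipher = Fernet(key)

event = {"camera_id": "cam-42", "plate": "ABC123", "speed_kmh": 54}
payload = cipher.encrypt(json.dumps(anonymize(event)).encode())
print(payload[:40], b"...")          # ciphertext ready for transfer to the data center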
Personalization vs Privacy
Concerns and Regulations
In years to come, 2018 might be considered the year when broad awareness of data privacy reached its highest point, particularly awareness associated with personal information. To those in the public and private sectors, the EU’s General Data Protection Regulation (GDPR) brought a higher level of scrutiny than ever before to how organizations collect, store, share and use personal information (including that captured by video surveillance). To the broader consumer, however, it is more likely to be issues relating to Facebook’s use of data which have heightened awareness and concern regarding what happens to the personal data given away online.

Degree of Personalization
Ultimately, we live in a world where we have been given valuable online services in exchange for knowingly or unconsciously handing over a significant amount of personal data. Indeed, this data is used by the likes of Facebook, Amazon, Google, and others to increase the value of these services through a high degree of personalization. To many, however, it feels like a line has been crossed between useful personalization and invasion of privacy, and the rumors that home voice assistants listen in to domestic conversations will only cause this unease to increase. Ultimately, the trust between an organization and its customers is becoming an increasingly important and tangible asset. Indeed, recent research from consulting firm Accenture has established a correlation between stakeholder trust and revenue. Concerns about a company’s approach to privacy and the use of personal data will be one of the most impactful aspects of trust in business moving forwards.
Cybersecurity
Can something continue to be a ‘trend’ when it appears every year, and is a constant concern? Whatever your answer to that question, it’s impossible to think about issues that will affect every sector this year without a mention of cybersecurity. Indeed, in relation to the previous point, the fastest way to damage trust between a company and its customers (and shareholders) is through a cybersecurity breach. Just ask British Airways. Cybersecurity will never be solved, because cybercriminals (and increasingly nation-states) will never stop trying to find and exploit vulnerabilities. These organizations are incredibly well-funded and organized and can innovate much more quickly than companies that need to adhere to industry regulations.

Cyber Attacks
Attacks are becoming more sophisticated at a time when the number of connected devices means that potential vulnerabilities and insecure network end-points are growing exponentially. One particular area of vulnerability that has been highlighted recently is the supply chain, where either a lack of good cybersecurity practice or even deliberately malicious actions can result in cybersecurity breaches being achieved through both software and hardware. The provenance of products is more critical than ever, with manufacturers needing to be confident that every link in their supply chain is as secure as it should be.
Smart Technology To Deliver Environmental Benefits
We’ve already seen how video analytics can be used as an operational planning tool by organizations looking to improve energy efficiency within offices, with the subsequent positive benefits for the environment. But new types of sensors can more accurately measure environmental impact across an organization’s sites, effectively acting as highly sensitive artificial ‘noses’ calibrated to different forms of output, and thermal imaging can be used to pinpoint areas of energy wastage.

Healthcare Benefits
For instance, one critical area where such sensors can heighten awareness and understanding and, increasingly, allow for remedial action is air quality. Whether inside buildings or in the external urban environment, the negative impacts on health and the associated costs are becoming an ever-greater issue. Smart sensors will have a central role to play in addressing the problem globally. Such applications add value to organizations through efficiencies and cost savings (and, hopefully, health benefits), but also help them reach their own environmental and sustainability goals.
Sensor Integration Driving Smart Actions
In themselves, individual sensors such as those described above can deliver significant benefits. But a final trend that we’re confident will be increasingly prevalent in 2019 will be combining and integrating sensors to prompt ‘smart’ actions.

Smart Alerts
For instance, in a smart city, a motion sensor connected to a barrier could trigger a camera which, in turn, would trigger an alert in the operations center, allowing for a rapid and appropriate response. Or an environmental sensor could trigger a video or thermal camera to quickly identify fires or spillages, again prompting alerts that create a more rapid and effective response. When the range of sensors is considered – from thermal to motion, from atmospheric to video – the ways in which they could be combined are endless, as are the potential benefits of doing so.
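A minimal sketch of the kind of rule that ties such sensors together might look like the following; the sensor names, thresholds, and alert messages are hypothetical.

# Minimal sketch of sensor integration: combine readings from different sensors
# and raise an alert when a simple rule fires. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str   # e.g. "motion", "thermal", "air_quality"
    value: float

def evaluate(readings):
    """Return alerts for combinations of sensor readings that warrant action."""
    by_sensor = {r.sensor: r.value for r in readings}
    alerts = []
    if by_sensor.get("motion", 0) > 0 and by_sensor.get("thermal", 0) > 60:
        alerts.append("possible fire near barrier: dispatch response")
    if by_sensor.get("air_quality", 0) > 150:
        alerts.append("poor air quality: notify facilities")
    return alerts

print(evaluate([Reading("motion", 1), Reading("thermal", 72), Reading("air_quality", 90)]))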
Conclusion
Technology continues to develop at a rapid and accelerating pace. While it can be easy to become distracted by the potential of every new trend or innovation, each must be considered in relation to the use cases that are going to deliver maximum positive impact and value to organizations and citizens. This remains the lens through which we view technology trends and their application, and 2019 promises to be another exciting year in bringing new technologies to market in increasingly useful ways. Read the full article
0 notes
Text
Deep Learning is not the AI future
Everyone now is learning, or claiming to learn, Deep Learning (DL), the only field of Artificial Intelligence (AI) that went viral. Paid and free DL courses count hundreds of thousands of students of all ages. Too many startups and products are named "deep-something" just as a buzzword; very few are really using DL. Most ignore that DL is only about 1% of the Machine Learning (ML) field, and that ML is only about 1% of the AI field. What's used in practice for most "AI" tasks is not DL. A "DL-only expert" is not a "whole AI expert". DL is not a synonym for AI! The most advertised AI tools from Google, Facebook, etc. are mainly or only DL, so the wider public thinks that all the new AI records are (and will be) set with DL only. This is not true. Gradient-boosted decision trees like XGBoost are not making headlines, but silently beat DL at many Kaggle tabular data competitions. The media implied that AlphaGo is DL-only, but it is Monte Carlo tree search plus DL, evidence that pure DL was not enough to win. Many reinforcement learning tasks are solved with Neuroevolution's NEAT, with no backpropagation. There is "deep misinformation" in AI. I am not saying that DL is not solving tasks: DL is impressive. Trees and other algorithms don't beat DL often, and there is no DL substitute for some tasks, but I expect non-DL systems to be (re)discovered in the future that beat DL. Perhaps they will also solve the legal nightmare of DL decisions, which, even when correct, can't be explained when legally questioned. I would also like to read in the press about DL issues like "catastrophic forgetting", the tendency to abruptly forget previously learned information upon learning new information, and about the daily fight against "overfitting". About "intelligence": DL will simply believe the training data it is given, without understanding what's true or false, real or imaginary, fair or unfair. Humans believe fake news too, but only up to a certain level, and even kids know that movies are fiction, not real. For more details, if you have time, read my longer article: AI (Deep Learning) explained simply. Twenty years ago everyone was learning HTML, the markup language for writing web pages by hand, considered enough at the time to become a dot-com billionaire. Like others, I learned each technology when it seemed useful: HTML, mobile apps, DL, and I invite everyone to keep learning new things throughout their lives. In fact, you don't simply learn one technology once in a lifetime! If you learn DL, you don't get a lifetime of AI know-how. The HTML of 1995 became outdated and not enough: CSS, JavaScript and server languages took over. In the same way, DL will become outdated and not enough too. Most popular mobile apps contain no HTML at all, so who knows whether future AI apps will contain DL or not? Really, DL is 1980s tech, older than HTML: trained with more data, the 1970s "neural networks with hidden layers" gave better results, and were then renamed DL and hyped. In 1992 I briefly checked some neural network source code, together with other things like fractals and cellular automata. Like almost everyone else, I dismissed DL at the time as an academic math puzzle with no practical uses. Instead, I focused on learning what gave immediate results: 3D for video games, then the internet, and so on. But we were all wrong: DL can do amazing things with big data! I got fascinated in 2015 by Deep Dream, then by GANs, etc. Still, DL is not the last, perfect AI science we can invent.
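Since the post singles out gradient-boosted trees as a quiet competitor to DL on tabular data, here is a minimal sketch of that kind of baseline on synthetic data; the parameters are illustrative, not tuned.

# Minimal sketch of the tabular-data baseline mentioned above: gradient-boosted
# trees via XGBoost on a synthetic dataset. Data and parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))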
The old DL has already been studied extensively and updated across decades to solve more tasks more accurately, but no DL variant (convolutional networks, RNNs, RNNs with LSTM, GANs, etc.) can explain its own decisions. While DL will surely solve more tasks and kill more jobs in the future, it is unlikely to solve them all, or to hold surprising updates capable of mounting a legally valid defense of the fairness of its own decisions.
(Image caption: philosophers Plato and Aristotle talking; DL is unable to understand them.) Future AI should explore other ways, new or old but overlooked, not DL only. A DL limit is that it considers true simply what it spots more frequently in the data, and false what is statistically rarer or the opposite of what is more frequent. DL's fairness comes not from DL itself, but from the humans selecting and preparing the DL data. A DL model can read texts and translate between texts, but not in a "human way". If a DL model is trained on 100 books, 40 explaining that hate, war, death and destruction are bad and 60 claiming that Hitler's Nazi ideas were correct, the DL will end up 100% Nazi! I think most humans can avoid becoming Nazis even when reading 60% Nazi propaganda. DL will never figure out on its own that killing Jews, gay and disabled people is bad if Nazism is the most popular opinion in the training data. No wonder DL will not explain its own decisions, except a naive "I've read most often that Nazism is right, so it should be right." DL will learn and mimic the most flawed logic without figuring out the flaws, including terrorism. Even small kids understand on their own who the bad guys in a movie are, but not DL, unless humans teach it explicitly first. DL-specific things like gradient descent with backpropagation are cool, as is custom DL hardware, but that's mostly statistics and geometry, so it probably will not be in the AI of 2037. For more on far-future AI side effects, read: Will AI kill us all after taking our jobs? For many tasks, Deep Learning AI is or will become illegal, not compliant. Anyone who collects data about citizens of the 28 European countries (including the UK, Brexit notwithstanding) should follow the General Data Protection Regulation (GDPR) by May 25, 2018. This is the date when DL will be abandoned for several apps in the EU, causing AI startups to quickly replace DL with whatever else or risk being fined. Fines for noncompliance are 4% of global revenue, including US revenue. If you have never heard of GDPR before, you are not alone: most EU businesses are unprepared, or do not even have a clue about it. GDPR, on automated decision-making, requires a right to an explanation and the prevention of discriminatory effects based on race, opinions, health, etc. Laws similar to GDPR exist or are planned worldwide; it's only a matter of time. The US Fair Credit Reporting Act requires disclosure of the factors that adversely affected a consumer's credit score, with a maximum of four factors allowed. DL factors are normally thousands or millions, not just four; how do you simplify them into four? AI, like bitcoin ICOs, started out ignoring regulation, but laws and fines always come. DL systems taking decisions more consequential than telling whether an image is a cat, or where to add bunny ears to selfies, will be replaced with non-DL systems. The AI will have to be accountable, and so different from DL, with outcomes you can explain to average judges and users in simple, legally valid words. DL complexity, which looks like "magic" to judges and users, is a legal risk, not a cool feature. DL will advise or alert humans, for example by detecting sickness in medical images to be verified by a medical doctor, but this is only partial automation and lacks details. What do you tell users rejected by the AI (denied a loan, a job, etc.) who ask for explanations? Laws now include the "right to an explanation", for example of why a job or a loan is denied.
DL gives results with no natural (legal) language explanations. Pages of DL variables are available, but they are not acceptable to judges or users, since not even the best mathematicians or other algorithms can figure out a DL model and simplify it into words. Even where humans take the final decisions, AI tools should give detailed reasons that humans can either recognize as wrong (and so override, reversing the AI decision) or quickly accept by simply copying, pasting and signing explanations prepared by the AI. No one knows how to modify DL to give simple human-like explanations, so DL can't be made compliant! This issue also affects several other AI and Machine Learning algorithms, but not all of them, and not as much as DL. Decision trees also become unexplainable if boosted or used in an ensemble. But in the future, new or rediscovered AIs that can defend their own decisions will be used for regulated decisions in place of both DL and humans. In the case of GDPR, only human staff can reject an application: the AI can automate the positive outcomes; otherwise, if the AI denies a loan, a job, etc., it should pass the task to human staff, who will handle those negative decisions that make users angry and inquisitive. But in the case of a denial, the human staff will have no help or explanation from a DL-based AI; they can't know whether the DL logic was right or wrong. They will have to check the data from scratch on their own, decide whether ultimately to reject or not, and write a reasonable cause for the decision. The risk is that the human staff, to save time and money, will make up fake explanations for AI rejections and blindly accept AI approvals. But judges called to decide on the fairness of AI rejections will also ask why the others were accepted, to compare. To be safe, you need solid reasons for accepting too, not only for rejecting, no matter what's in laws like GDPR. Non-DL AI systems providing human-readable explanations of all decisions to users, judges and support staff will ultimately be the only ones used, for both fully and partially automated decisions. Explainability was already a big issue before any specific laws and before DL. In antitrust cases, companies like Google are asked why one product rather than others is shown at the top of search results. This was before DL too: many other algorithms also mix data in crazy ways to get results, so no human can easily reconstruct the decision reasons. Judges are told that the engineers don't know exactly, and pages of linear algebra are given as evidence. This can't end well: billions of dollars in fines have been ruled in multiple cases, with warnings to change systems, even before a specific law existed. Class-action lawsuits by users automatically denied jobs, loans, refunds, etc., against the automated decision units of stores, banks, insurers, etc., will become the norm, and being unable to explain will mean "no defense", being fined, and a brand's public relations disaster. For most people, "AI" means the sci-fi movie AI that can give smart explanations, where humans can quickly decide whether they agree or not, making legal validation easy. Most people, including judges and those who write laws like GDPR, hearing that companies are "AI-first" or "adding AI", expect an "AI" like in the movies, one that would defend its own decisions if called into court, impressing users and judges. Instead, we got unexplainable "DL AI" that will not be used much, even for tasks it can solve, just because it lacks interpretability. DL will not save costs and will not kill jobs where sensitive automated decisions are needed.
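As one illustration of the "explainable alternative" the post argues for, here is a minimal sketch using a shallow decision tree whose fitted rules can be printed in plain words; the loan-style features, thresholds, and labels are invented.

# Minimal sketch of a model whose decision rules can be read out in plain words,
# the property the post says DL lacks. Features, labels and data are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))                       # columns: income, debt_ratio (scaled 0-1)
y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.6)).astype(int)  # toy "loan approved" label

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
# The fitted rules are short enough to quote directly in a rejection letter.
print(export_text(tree, feature_names=["income", "debt_ratio"]))

Whether such rules would satisfy a given regulator is a legal question, but at least the reasons exist in a form a human can read and contest.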
Even where humans must take the final decision anyway, tool AIs that explain their advice will be much preferable to tool AIs that give responses without causes or reasons. Explainable AIs, when (re)discovered, will be safer, legally compliant, cheaper and faster, and will replace both DL and humans. Since DL was invented in the 1960s-1980s and then rediscovered in the 2010s, the basis of explainable future AIs has probably already been described by some researchers somewhere, but since it is not DL, no one will care to check and develop these AI types for years, until they are rediscovered and hyped. GDPR, on automated decision-making, also requires the prevention of discriminatory effects based on race, opinions, health status, etc. But DL models trained on user-generated data like social media and news (rather than ground-truth data like medical or financial records) always implicitly contain evil biases. As noted before, DL can read a lot of texts and data and mimic their contents, but it will not critically understand them. DL will just believe whatever it spots most often, underline the patterns and trends found in the data, and so amplify society's biases and problems. If the data shows that black people are arrested more often than white people, the DL will simply suspect black people first if any crime is committed. If the data shows that more males than females are directors on corporate boards, the DL will simply prefer male candidates in job applications. DL decisions end up more discriminatory, racist and sexist than the average sample in the training data. This issue happens in all ML algorithms, but DL model bias is one of the hardest to test, detect, control and tune. It is so hard to fix that, rather than being patched, it has already caused the abrupt cancellation of many DL experiments, from chatbots that went Nazi and hateful to apps that whitened black faces in "beauty" filters.
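On the bias point, one simple check teams can run is to compare a model's positive-outcome rates across groups (the demographic parity gap). A minimal sketch on synthetic decisions:

# Minimal sketch of a simple bias check: compare a model's approval rates across
# two groups (demographic parity difference). Data and group labels are synthetic.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)
# Pretend these are the model's approve/deny decisions for each applicant.
decision = np.where(group == "A",
                    rng.random(1000) < 0.65,   # group A approved ~65% of the time
                    rng.random(1000) < 0.45)   # group B approved ~45% of the time

rate_a = decision[group == "A"].mean()
rate_b = decision[group == "B"].mean()
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, parity gap={abs(rate_a - rate_b):.2f}")

A gap this large would not explain a single decision, but it does flag that the model treats the two groups very differently and deserves scrutiny.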
(Image caption: DL is a neural network; you can't edit each answer's outcome separately.) You can't fix a discriminatory, racist or sexist DL model by trying to balance it with patches after the training. DL is a neural network, and unlike some other AI methods, you can't edit specific answers with local surgery; you must retrain everything with different, 100% balanced and fair data, which is rare in the real world. DL mimics what it finds in the data without understanding it: DL will not disagree with any data and will not figure out the injustices in society; it's all just "data to learn". You would have to hire dedicated human staff to create fake fair data of an ideal society where white people are arrested as often as black people, where 50% of directors are women, and so on. But the cost of creating vast amounts of de-biased data edited by human experts, just to train a DL model, makes it not worth replacing humans with AI in the first place! Further, even if you had trained a DL model that really is fair, you would have no evidence to convince a judge or a user of the fairness of any decision, since the DL will give no explanations. DL will be of secondary importance, used for non-business apps or games that pose no legal risks. When explainable AIs become popular, DL will not be abandoned like magnetic tapes or cathode-ray TVs. People losing games against bots are unlikely to convince a judge to fine the AI company because it can't explain how the AI won. People unhappy with how FaceApp edited their selfie into an older, younger, or opposite-sex version are unlikely to convince a judge to fine FaceApp because it can't explain how the AI decided the new looks (except for a "race change" filter, removed after massive protests, no judge needed). Detecting sickness in medical images is a safe DL use, as long as users ask for confirmation from human doctors before taking medication. The legally safe DL market is very limited: judges can fine in all cases where the decision outcome can make a financial or health difference or be discriminatory, and where DL will not help to understand whether and why the decision was fair. How about self-driving cars? DL seems a legal risk to use in anything that is more than art, games or good-taste jokes. Existing non-DL methods can replace DL where needed, and new methods will be (re)discovered, so AI progress will continue nicely, especially if everyone studies (and invests in) all the old and new algorithms of the whole AI and Machine Learning sciences, not only DL: the only way to become a "whole AI lifetime expert". Besides DL being "illegal" to use for many useful tasks it can solve, it is also unable to solve several tasks: those requiring abstract reasoning to figure out what's fair and unfair in the data seen, and to explain the logic of its own decisions. Even for tasks not requiring explanation where DL seems the best system, like image recognition, DL is not as safe as human eyes. You can fool DL with "adversarial examples": photos of something, like a cat, with invisible perturbations added can fool the DL into seeing something else, like a dog. All humans will still see a cat, but the DL will instead see a dog or whatever the hacker secretly embedded. This can be exploited in street signs to hack current self-driving cars. New AI systems resisting this hack will replace DL.
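To make the "invisible perturbation" idea concrete, here is a minimal sketch of the fast-gradient-sign trick on a toy linear classifier. Real attacks target deep image models, but the mechanics are the same: nudge each input feature slightly in the direction that pushes the score across the decision boundary. Everything below is synthetic.

# Minimal sketch of an adversarial perturbation (fast gradient sign style) on a
# toy linear classifier. Real attacks target deep image models; the mechanics
# (step each feature in the direction that flips the prediction) are the same.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=100)                    # weights of a toy "cat vs dog" linear model
x = rng.normal(size=100)                    # an input the model scores correctly
x = x if w @ x > 0 else -x                  # make sure the clean score is positive ("cat")

# Smallest uniform step (in the sign of the weights) that flips the prediction;
# it is a small fraction of a typical feature value, i.e. barely visible.
epsilon = 1.1 * (w @ x) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)

print("clean score:", round(float(w @ x), 2),
      "adversarial score:", round(float(w @ x_adv), 2),
      "epsilon:", round(float(epsilon), 3))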
The author of Keras, the most popular DL library, said in his post "The limitations of deep learning": "the only real success of DL has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data." These spaces have lots of dimensions, not just three; this is how DL can mimic Picasso's art styles, poker bluffs and some human creativity in many tasks. But in layman's terms, I would say this means: DL can be trained to recognize cat photos without understanding what a cat is, and to be racist without knowing it is being racist. DL can recognize cats, or be racist, or win at games, which is impressive and at times useful, but DL can't explain why a photo shows a cat, or whether a decision was racist. In "The future of deep learning" the Keras author describes his vision of a new system where DL is only in "geometric modules" that should interact with not-yet-existing "algorithmic modules" and "meta learners". This would increase the number and types of tasks solved, but it would still fail to explain the decisions, because of the DL modules. It's like when we can't explain, in words, certain feelings or images computed in our brain. Humans explain everything, but with mostly made-up, oversimplified excuses that everyone seems to believe are accurate. Machines, instead, are unfairly asked to be truly accurate. Other experts are drafting new AI systems that do not include DL at all, but they lack funds: everyone invests only in DL now, and the DL mania will continue for a while. No one knows what the next big AI thing will be, but it is unlikely to be DL 2.0. DL is hyped because only those who sell DL software and hardware, despite the conflict of interest, are interviewed in AI debates. Have you noticed any legitimate "natural intelligence" experts, like psychologists and philosophers, supporting DL? If you have neither AI urgency nor time to study, wait for the next AI system to be ready and study it directly, skipping DL 1.0. Otherwise, if you have AI urgency and/or time to study, be sure to cover the whole of AI and the many Machine Learning fields, not DL only.
#artificial intelligence#australia#business#businessworld#centiplex#entrepreneur#entrepreneurship#inc42#london#magazine#yourstory
0 notes