Trailer: Fear Not, The Robots Are Already Here
Coming November 12th: Uprising - A new original series featuring the latest in robotics news and tech.
We’ve been afraid of robots for a lot longer than they’ve even been a reality. In fact, we’ve been afraid of robots for far longer than we’ve even called them “robots.”
For many, the idea of a “robotic uprising” brings to mind images of the T-800 from the Terminator series - a chrome skeleton, red-eyed, crushing human skulls under its feet. But the Terminator series has roots in fiction that predate modern robotics by a considerable margin: namely the story of Frankenstein’s Monster, which was published in 1818.
Mary Shelley’s Frankenstein was a critique of technology run amok, of progress unchained from human emotion or sensitivity, and it was inspired at least in part by the ancient Hebrew myth of the Golem. Stories of Golems date back to Old Testament times and center on the idea of an artificial person, usually made of mud, given life through magic.
The series examines our fears of the latest robot tech, from friendly, emotional robots to autonomous weapons.
A soulless, shambling approximation of humanity, devoid of empathy or reason, run amok and wreaking havoc amongst good and decent people - this is the cultural template that we have applied to robots, a reputation they have not earned and do not deserve.
Even the story of the Terminator isn’t about the evil of machines, but the evil programmed into them by humans. So then, why are we so afraid of robots? We’re glad you asked.
Uprising is a Freethink original series that acknowledges and examines our fears, one by one. In each episode, you’ll explore the places and meet the people responsible for the latest advances in robotics research and technology.
From the "cobots" that could steal your job to the cuter, emotional robots already living in our homes, this series provides an in-depth look at what’s to come in the not-too-distant future of robotic technology.
Robot Made of Ice Can Repair and Rebuild Itself
The prototype IceBot isn’t quite ready, but someday could exhibit “self-reconfiguration, self-replication, and self-repair.”
A team of researchers wants to build robots out of ice and send them to space. The idea is that — lacking a local repair shop — the icy bots can use found materials to rebuild themselves.
Ice can be found all over the solar system, from the Moon to the distant rings of Saturn. So researchers from the University of Pennsylvania are trying to figure out how to tap into that nearly unlimited resource for robotics.
NASA wants to send the robot dog, Spot, to space. The canine-bot can do many tricks — from herding sheep to helping the NYPD in a hostage situation — but it likely won't be able to repair itself. Where could it find enough materials to do the job?
Introducing IceBot, a concept robot that could be the future of robotic space exploration — although the team says they’ve only just begun. The work is still very preliminary, reports IEEE Spectrum, but the goal is to design a robot that can exhibit “self-reconfiguration, self-replication, and self-repair.”
In a paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), the team outlined several ways to create the bots out of ice using additive and subtractive manufacturing processes.
Their first robot, a proof-of-concept Antarctic exploration robot, weighed 6.3 kilograms, could roll up a 2.5-degree incline, and could turn from side to side. One caveat: the design still includes conventional batteries and actuators. But the bulk of the robot, including its structural parts and wheels, was built out of ice.
Devin Carroll told IEEE Spectrum that he and his co-author Mark Yim wanted to build the robots from found materials "as a way to add robustness to robotic systems operating in remote or hostile environments."
"We ultimately settled on ice because of the design flexibility it affords us and the current interest in icy, remote environments. Climate change has many folks interested in the Antarctic and ice sheets while NASA and other space exploration groups are looking to the stars for ice and water," he said.
Carroll sees the ice robots working in teams, where an explorer bot collects materials and the other bot acts as the mechanic.
"We can envision the exploration class of robot returning to a centralized location with a request for a plow or some other augmentation and the manufacturing system will be able to attach the augmentation directly to the robot," he said, adding that one of the biggest challenges is minimizing the amount of energy required to repair the robots.
There is still a lot of work to do before IceBot is space-ready. But this proof-of-concept robot is the first step in demonstrating that a robot made of ice could perform different tasks. For now, there are other exciting space feats to look forward to this year.
In machine learning, each type of artificial neural network is tailored to certain tasks. This article will introduce two types of neural networks: convolutional neural networks (CNN) and recurrent neural networks (RNN). Using popular YouTube videos and visual aids, we will explain the difference between CNN and RNN and how they are used in computer vision and natural language processing.
What is the Difference Between CNN and RNN?
The main difference between CNN and RNN is the ability to process temporal information, or data that comes in sequences, such as a sentence. Moreover, convolutional neural networks and recurrent neural networks are used for completely different purposes, and there are differences in the structures of the neural networks themselves to fit those different use cases.
CNNs employ filters within convolutional layers to transform data, whereas RNNs reuse activations from other data points in the sequence to generate the next output in a series.
While it is a frequently asked question, once you look at the structure of both neural networks and understand what they are used for, the difference between CNN and RNN will become clear.
To begin, let’s take a look at CNNs and how they are used to interpret images.
What is a Convolutional Neural Network?
Within a convolutional layer, the input is transformed before being passed to the next layer. A CNN transforms the data by using filters.
What are Filters in Convolutional Neural Networks?
A filter in a CNN is simply a small matrix of randomized number values.
As the filter convolves across the image, the values in the filter line up with the pixel values beneath them, and the dot product of those values is taken. The filter then moves, or convolves, to the next 3 x 3 patch of pixels, repeating until the whole image has been covered. The resulting dot products become the input for the next layer.
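To make that sliding dot product concrete, here is a minimal sketch in plain NumPy. The 5 x 5 “image” and the 3 x 3 filter values below are made up purely for illustration; they are not taken from any real model.

```python
import numpy as np

# A tiny grayscale "image" (5 x 5 pixel intensities) -- made-up values.
image = np.array([
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

rng = np.random.default_rng(0)
filt = rng.standard_normal((3, 3))  # filters start out randomized

# Slide the filter over every 3 x 3 patch and take the dot product.
out_size = image.shape[0] - filt.shape[0] + 1  # 3 for a 5x5 image and 3x3 filter
feature_map = np.zeros((out_size, out_size))
for i in range(out_size):
    for j in range(out_size):
        patch = image[i:i + 3, j:j + 3]
        feature_map[i, j] = np.sum(patch * filt)  # element-wise product, then sum

print(feature_map)  # this 3 x 3 feature map is what gets passed to the next layer
```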
Initially, the values in the filter are randomized, so the first passes, or convolutions, don’t produce very useful output. After each training iteration, the CNN automatically adjusts those values using a loss function, and as training progresses the learned filters come to pick out edges, curves, textures, and other patterns and features of the image.
While this is an amazing feat, in order to implement loss functions, a CNN needs to be given examples of correct output in the form of labeled training data.
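For a sense of what “adjusting the filters with a loss function” looks like in code, here is a hypothetical toy setup in PyTorch. The layer sizes, random stand-in data, and hyperparameters are placeholders rather than a recommended architecture; the point is simply that the convolutional filters are parameters updated to reduce the loss on labeled examples.

```python
import torch
import torch.nn as nn

# Tiny CNN: one convolutional layer (whose filters are learned) plus a classifier.
model = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3),  # 8 learnable 3x3 filters
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 10),  # assumes 28x28 inputs and 10 classes
)

loss_fn = nn.CrossEntropyLoss()                       # needs labeled examples
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-in for a batch of labeled training data (e.g. 28x28 grayscale images).
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))

for _ in range(5):                  # a few training iterations
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                 # gradients say how to change the filter values
    optimizer.step()                # filters are nudged to reduce the loss
```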
Where CNNs Fall Short
CNNs are great at interpreting visual data and other data that doesn’t come in a sequence. They fall short, however, at interpreting temporal information such as videos (which are essentially sequences of individual images) and blocks of text.
Entity extraction in text is a great example of how data in different parts of a sequence can affect each other. With entities, the words that come before and after the entity in the sentence have a direct effect on how they are classified. In order to deal with temporal or sequential data, like sentences, we have to use algorithms that are designed to learn from past data and ‘future data’ in the sequence. Luckily, recurrent neural networks do just that.
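If you just want to see entity extraction in action before we get to RNNs, here is a minimal sketch using the spaCy library. spaCy is our choice purely for illustration (it isn’t part of the original discussion, and its pretrained pipeline is not necessarily an RNN under the hood), and the exact labels it prints depend on the model version you load. The sample sentence reappears in the next section.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # a small pretrained English pipeline
text = ("President Roosevelt was one of the most influential presidents in "
        "American history. However, Roosevelt Street in Manhattan was not "
        "named after him.")

doc = nlp(text)
for ent in doc.ents:
    # The same surface form ("Roosevelt") can receive different labels
    # depending on the words around it; exact output varies by model version.
    print(ent.text, ent.label_)
```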
What is a Recurrent Neural Network?
Recurrent neural networks are networks that are designed to interpret temporal or sequential information. RNNs use other data points in a sequence to make better predictions. They do this by taking in input and reusing the activations of previous nodes or later nodes in the sequence to influence the output. As mentioned previously, this is important in tasks like entity extraction. Take, for example, the following text:
President Roosevelt was one of the most influential presidents in American history. However, Roosevelt Street in Manhattan was not named after him.
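Whether “Roosevelt” names a person or a street can only be resolved from the surrounding words, and an RNN carries that surrounding context in a hidden state: at every step it combines the current input with the activations from the previous step. Here is a bare-bones sketch of one recurrent step in NumPy; the sizes and weight values are arbitrary and only illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 4, 3          # arbitrary sizes for illustration
W_xh = rng.standard_normal((hidden_size, input_size))   # input  -> hidden weights
W_hh = rng.standard_normal((hidden_size, hidden_size))  # hidden -> hidden weights
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One recurrent step: mix the current input with the previous activations."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)                         # initial hidden state
sequence = rng.standard_normal((5, input_size))   # a made-up 5-step input sequence
for x_t in sequence:
    h = rnn_step(x_t, h)   # the same weights are reused at every step in the sequence
print(h)                   # the final state summarizes the whole sequence
```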
RNNs for Autocorrect
To dive a little deeper into how RNNs work, let’s look at how they could be used for autocorrect.  At a basic level, autocorrect systems take the word you’ve typed as input. Using that input, the system makes a prediction as to whether the spelling is correct or incorrect. If the word doesn’t match any words in the database, or doesn’t fit in the context of the sentence, the system then predicts what the correct word might be. Let’s visualize how this process could work with an RNN:
The RNN would take in two sources of input. The first is the letter you’ve just typed. The second is the activations corresponding to the previous letters you typed. Let’s say you wanted to type “network,” but typed “networc” by mistake. The system takes in the activations built up from the previous letters “networ” and the current letter you’ve typed, “c”. It then spits out “k” as the correct output for the last letter.
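As a sketch of that data flow, here is a character-level loop in NumPy that feeds “networ” through a recurrence and reads a next-letter distribution off the final hidden state. The weights are random and untrained, so the printed “prediction” is meaningless; a trained model would be needed for “k” to actually win.

```python
import string
import numpy as np

rng = np.random.default_rng(0)
alphabet = string.ascii_lowercase               # 26-letter vocabulary
vocab = {ch: i for i, ch in enumerate(alphabet)}

hidden_size = 16                                # arbitrary size for illustration
W_xh = rng.standard_normal((hidden_size, len(alphabet))) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
W_hy = rng.standard_normal((len(alphabet), hidden_size)) * 0.1

def one_hot(ch):
    v = np.zeros(len(alphabet))
    v[vocab[ch]] = 1.0
    return v

h = np.zeros(hidden_size)
for ch in "networ":                             # previous letters feed the hidden state
    h = np.tanh(W_xh @ one_hot(ch) + W_hh @ h)

logits = W_hy @ h                               # score every possible next letter
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the alphabet
predicted = alphabet[int(np.argmax(probs))]
print(predicted)  # with trained weights, 'k' would dominate; here it is arbitrary
```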
This is just one simplified example of how RNNs could work for spelling correction. Today, data scientists use RNNs to do much more incredible things. From generating text and captions for images to creating music and predicting stock market fluctuations, RNNs have endless potential use cases.
Hopefully this brief introduction to CNNs and RNNs helped you understand the difference between the two neural networks. While the ability to process temporal or sequential data is one of the main distinctions, the structures of the networks themselves and their use cases are vastly different as well.
Google launches Cloud AI Platform Pipelines in beta to simplify machine learning development
Google today announced the beta launch of Cloud AI Platform Pipelines, a service designed to deploy robust, repeatable AI pipelines along with monitoring, auditing, version tracking, and reproducibility in the cloud. Google’s pitching it as a way to deliver an “easy to install” secure execution environment for machine learning workflows, which could reduce the amount of time enterprises spend bringing products to production.
“When you’re just prototyping a machine learning model in a notebook, it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make a [machine learning] workflow sustainable and scalable, things become more complex,” wrote Google product manager Anusha Ramesh and staff developer advocate Amy Unruh in a blog post. “A machine learning workflow can involve many steps with dependencies on each other, from data preparation and analysis, to training, to evaluation, to deployment, and more. It’s hard to compose and track these processes in an ad-hoc manner — for example, in a set of notebooks or scripts — and things like auditing and reproducibility become increasingly problematic.”
AI Platform Pipelines has two major parts: (1) the infrastructure for deploying and running structured AI workflows that are integrated with Google Cloud Platform services and (2) the pipeline tools for building, debugging, and sharing pipelines and components. The service runs on a Google Kubernetes Engine (GKE) cluster that’s automatically created as part of the installation process, and it’s accessible via the Cloud AI Platform dashboard. With AI Platform Pipelines, developers specify a pipeline using the Kubeflow Pipelines software development kit (SDK), or by customizing the TensorFlow Extended (TFX) Pipeline template with the TFX SDK. This SDK compiles the pipeline and submits it to the Pipelines REST API server, which stores and schedules the pipeline for execution.
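For a feel of what that looks like in practice, here is a minimal, hypothetical two-step pipeline written with the Kubeflow Pipelines v1 SDK and submitted to a Pipelines endpoint. The step contents, container images, and host URL are placeholders, and a TFX-template-based pipeline would follow a different, higher-level pattern.

```python
import kfp
from kfp import dsl

def preprocess_op():
    # Each step runs as its own container in the cluster; contents are placeholders.
    return dsl.ContainerOp(
        name="preprocess",
        image="alpine:3.10",
        command=["sh", "-c"],
        arguments=['echo "preprocessing data..."'],
    )

def train_op():
    return dsl.ContainerOp(
        name="train",
        image="alpine:3.10",
        command=["sh", "-c"],
        arguments=['echo "training model..."'],
    )

@dsl.pipeline(name="demo-pipeline", description="A minimal two-step workflow.")
def demo_pipeline():
    train = train_op()
    train.after(preprocess_op())   # express the dependency between the two steps

# Point the client at your AI Platform Pipelines endpoint (placeholder URL)
# and submit the pipeline for execution via the Pipelines REST API.
client = kfp.Client(host="https://<your-pipelines-endpoint>")
client.create_run_from_pipeline_func(demo_pipeline, arguments={})
```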
AI Pipelines uses the open source Argo workflow engine to run the pipeline and has additional microservices to record metadata, handle component IO, and schedule pipeline runs. Pipeline steps are executed as individual isolated pods in a cluster, and each component can leverage Google Cloud services such as Dataflow, AI Platform Training and Prediction, BigQuery, and others. Meanwhile, the pipelines can contain steps that perform GPU and tensor processing unit (TPU) computation in the cluster, directly leveraging features like autoscaling and node auto-provisioning.
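As one concrete, hypothetical example, a step can request an accelerator directly on its definition with the Kubeflow Pipelines v1 SDK. The image and command below are placeholders, and the set_gpu_limit call assumes a v1 SDK version where that method is available on the op.

```python
from kfp import dsl

def train_op():
    # Hypothetical GPU training step; the image and command are placeholders.
    op = dsl.ContainerOp(
        name="train-gpu",
        image="gcr.io/your-project/trainer:latest",
        command=["python", "train.py"],
    )
    # Request one NVIDIA GPU for this step; with autoscaling or node
    # auto-provisioning enabled, the cluster can bring up a GPU node on demand.
    op.set_gpu_limit(1)
    return op
```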
AI Platform Pipeline runs include automatic metadata tracking using ML Metadata, a library for recording and retrieving metadata associated with machine learning developer and data scientist workflows. Automatic metadata tracking logs the artifacts used in each pipeline step, pipeline parameters, and the linkage across the input/output artifacts, as well as the pipeline steps that created and consumed them.
In addition, AI Platform Pipelines supports pipeline versioning, which allows developers to upload multiple versions of the same pipeline and group them in the UI, as well as automatic artifact and lineage tracking. Native artifact tracking covers things like models, data statistics, and model evaluation metrics, while lineage tracking shows the history and versions of your models, data, and more.
Google says that in the near future, AI Platform Pipelines will gain multi-user isolation, which will let each person accessing the Pipelines cluster control who can access their pipelines and other resources. Other forthcoming features include workload identity to support transparent access to Google Cloud Services; a UI-based setup of off-cluster storage of backend data, including metadata, server data, job history, and metrics; simpler cluster upgrades; and more templates for authoring workflows.