benlangham-hackspace-individual
Hackspace - "Anger Bot" An Angry Chatbot
An individual project looking at modern chatbot abstraction. Project created as part of Creative Computing module hackspace at Bath Spa University.
Github repo: https://github.com/blanghamm/angerbot
Hosted project: https://hackspace-angrybot-v1.netlify.app/
Project Demonstration
Video of the project following the script, which is available in the GitHub readme.
Brief & Context
The second part of the hackspace module was a continuation of the experimental ideation process used during the collaborative project. We were brought together for a hackathon with the purpose of rapid idea generation, which would form the basis of our individual projects. Due to the Covid-19 situation, our planned second hackathon was completed online rather than in person like the first. This of course had some pitfalls, chiefly not being able to quickly talk through ideas with other members of the group. This blog will comment on certain struggles and show the process that led to the creation of the final prototype.
Much like the first hackspace experiment, the individual project required ideas, but unlike the first hackathon session the ideas were solely our own. The brief was very open, much like the collaborative project, and that made idea generation difficult at first: I had to remain conscious of scope, time constraints and overall personal ambition. Through this online session we were tasked with intervals of idea development, explaining to the group a number of ideas we had begun thinking about.
I had decided early on that my technology stack would have some dedicated influence on the overall project. After this online hackathon and the gathering of peer feedback, I finally settled on my chatbot-centred project. Previous applications of chatbots are helpful, friendly and functional; the chatbot, simply referred to as the "Abusive Chatbot", would remove all of those qualities from its design. It would focus heavily on argumentative conversations and provide no helpful interaction, pulling away from standard use cases for chatbots and machine learning applications. The idea was a playful project still wrapped in the context of modern chatbot applications but reversing the delivery. The development and refinement of the idea, including how it developed as part of The Scamper Method, is covered in the next section.
Ideation
Hackathon
The second hackathon, taken online due to Covid-19, was set up with a similar goal to the previous one: rapid idea generation with a number of modifiers to help facilitate and restrict our personal ideation process. Drawing on the previous hackathon and the methods we had been taught made it easier to quickly formulate a project.
First Idea
We spent an allotted amount of time generating ideas using the methods previously described. I had a personal project in mind from the first hackathon: a music visualisation app that generated shapes and geometric patterns by analysing songs. It would run in the browser and allow users to journey through an environment generated from the songs they presented to it. What I found early on was that I was not able to develop the idea further from this initial concept; applying the ideation processes to it proved difficult.
Second Idea
This led me to the next idea, which stemmed from a project I had created whilst studying my undergraduate degree. That project featured a sentiment analysis chat system: it would analyse the user's message and return an emoji representing the sentiment it had detected. Thinking about this previous project, I looked at most implementations of chatbots and sentiment analysis in modern applications. Most chatbots are created with the intention of being helpful, friendly and functional, providing a service for basic client needs. Applying The Scamper Method to this implementation helped formulate the main project idea: adapting a current use case into a fun and playful project, removing the key elements of most chatbots and replacing them with an unhelpful, unfriendly chatbot. It still presented some functionality, in that the intention was to create a humorous adaptation of existing chatbot models.
Methods
The Scamper Method also suggested a rearrangement of the user process, with the users' roles reimagined. They are no longer briefly interacting with a chatbot before being handed over to a human agent; instead, the chatbot features an output and vocabulary similar to a human agent, albeit slightly unfriendly and rather rude.
Though I have already mentioned the chosen project and the ideation techniques used to formulate the idea, there were some pivotal moments during the hackathon that helped move the project forward. The two ideas I've mentioned were presented to the group to gain peer feedback on which project was preferred. Most of the group found the Chatbot idea to be more interesting, enjoying the idea that rather than helping them it would engage them in argumentative conversation, reacting similarly to aggressive messages and not backing down from a challenge.
Modifiers
We were then given a number of modifiers during the hackathon. These were: Misbalance, Aberration, Conduction, Monochrome, Elevate and Division.
These modifiers were introduced in an attempt to push the ideas further, moving through each modifier to see if it would benefit the project. Through this process I found a number of applications for the chatbot. Using the Elevate modifier, I found specific applications for the project, such as an educational tool: developing the project for anger management and dealing with outbursts. It would allow users to have an argumentative conversation with a 'robot' that didn't have feelings, aiming to combat bullying and stop users from directing these outbursts at people who would be affected by the conversation. From there, the same modifier suggested using the chatbot to teach manners to children, having it react to abusive or nasty comments and try to elicit an emotional response in the user, the main intention being that they would realise they must be polite and friendly to everyone.
Ultimately I found that though the modifiers were able to generate some abstractions from the initial idea, it removed the humorous and playful core of the idea which had made it a popular choice when receiving peer feedback from the group. It serves a purpose in itself but does not need to have such a serious impact on its user base.
Research Development
Overview
After the hackathon was complete, I began researching the context of the application further and how it could be developed. I came up with a number of questions that I needed to answer.
How would it differ from most chatbots? What characteristics would it have? Did it need visuals to help with the conversations?
Differences
I had already covered the first question during the ideation process, but it was good to clarify it during research. The chatbot would be unhelpful and unfriendly, and wouldn't help a user achieve any goal, in contrast to existing chatbot models.
Characteristics
The next step would be creating a chatbot character that was playful and interesting for the user. With a brief outline of the concept covered, I began looking at fictional machines with human-like personalities, such as those in films or video games. 'Wheatley' from Portal 2 was a good example of a very self-aware machine personality, talking to itself and being overly sarcastic throughout. A combination of these types of robotic characters could be applied to the chatbot model, with multiple applications or modes that could be triggered by user input. I also looked at Marvin from The Hitchhiker's Guide to the Galaxy, a very depressive android. These characters seemed ideal as the basis for a character; combining them would create a bipolar chatbot, made more interesting for the user by having lots of varied outputs based on fictional characters.
Visuals
In combination with the chat interface that users would interact with, I decided that the chatbot itself would need a visual aspect to help articulate its characteristics. My initial thought for the visual element was a character that would become visibly frustrated and angry as the conversation progressed, signalling to the user that the conversation was negative. I also felt there would need to be a cut-off for the interaction: as the conversation became increasingly heated, the chatbot would eventually block the user from replying and then begin to calm itself down, helping to notify the user that they needed to stop momentarily.
Methods
[Image: Kanban board]
During this initial research development I felt it was necessary to create a Kanban board for time and organisation management. I had used this type of management system before and found that it helped me focus on project goals. Using this approach, with miniature scrums to decide what was next and to set up the tickets within the Kanban board, I would hopefully not deviate from the given tasks and would be able to stay motivated throughout the project. This use of the agile workflow helps with refinement throughout a project and is an industry standard in management and organisation. Though the agile methodology is normally used in a collaborative environment, I still felt it was appropriate as a development method for an individual project.
There were of course moments throughout the project where I simply ignored the Kanban board and focused on areas I felt needed more attention. This did lead to momentary tangents that, in retrospect, were not beneficial to the project. But for the most part it helped build a foundation for the development of the project and helped with documentation throughout, making it easier to identify the process that led to the final prototype.
Visual Development
Once the idea had been selected and the project organised into sprints, it was time for some base design prototyping. This would consist of paper prototyping and some basic wireframes, finally leading into high-fidelity prototypes that would serve as the final designs. I found that once I had decided on the core components of the application and its functionality, the visual ideas came very quickly. As the application was a chat interface, it didn't require as much immediate thought: it needed to share similar design principles with many existing chat interfaces, making it easier for users to understand what to do.
Visualising the interface
I'd decided very early on that the interface would consist of three components: the navigation, the chat box, and the animated character. There was no need to overcomplicate the application, and a minimalist design was to my personal taste. From this initial idea I began with some brief paper prototyping, which didn't require a lot of dedicated time.
Paper Prototyping
[Image: paper prototype sketches]
Wireframing
[Images: wireframes]
From these initial paper prototypes I moved forward with some low-fidelity prototypes using Figma, my design tool of choice. Figma allowed me to mock up quick prototypes and flesh out the overall look and feel of the application early on, using the Kanban board mentioned earlier to run through targets and allow the technical development to begin taking shape.
[Images: Figma low-fidelity prototypes]
Side Note
The wireframes feature a suggested-responses UI element between the chat box and the message input. This was not included during the initial stages, but as discussed in more detail later, user feedback showed it to be an essential feature.
Technical Development
GitHub repo: https://github.com/blanghamm/angerbot
Drive of development screenshots: https://drive.google.com/drive/folders/1W0JTiuSI7XX1uQ9_3FPOuvaM6YS7Jmq2?usp=sharing
Backend
The technical backend research had begun in parallel with the overall concept research and contextual positioning. This being my preferred area of development, I had a number of ideas about which technologies would work well for this particular project, specifically because of the project I had worked on during my undergraduate degree. I knew I needed to include some form of conversational machine learning application, and there were a number of free-to-use options available.
Initial Technologies
I initially looked at using a combination of Google's Dialogflow for creating the initial flow of conversation and IBM's Tone Analyser API, which would allow for sentiment analysis. What I found from this initial research was that IBM offered its own dialog-flow chatbot assistant, which could be integrated nicely with the Tone Analyser.
Before discovering that I could use IBM's Watson Assistant in combination with their Tone Analyser, I had begun experimenting with the Tone Analyser API on its own. Using the API and curl in the terminal, I was able to get some quick sentiment analysis, which was promising in the early stages. With a simple JSON file I could send a POST request to the Tone Analyser endpoint, and it would return a JSON object containing the detected sentiment, such as "anger", and a confidence score as a decimal between 0 and 1.
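As a rough sketch of how such a response could be interpreted client-side, the helper below picks the strongest tone from a Tone Analyser-style JSON object. The exact field names (`document_tone`, `tones`, `tone_id`, `score`) are assumptions based on the shape described above, not a verified reproduction of the live API.

```javascript
// Hypothetical sketch: extract the strongest tone from a
// Tone Analyser-style response. Field names are assumptions.
function dominantTone(response) {
  const tones = (response.document_tone && response.document_tone.tones) || [];
  if (tones.length === 0) return null;
  // Scores are decimals between 0 and 1; pick the highest.
  return tones.reduce((best, t) => (t.score > best.score ? t : best));
}

// Example response, shaped like the JSON object described above.
const sample = {
  document_tone: {
    tones: [
      { tone_id: "anger", score: 0.83 },
      { tone_id: "sadness", score: 0.41 },
    ],
  },
};

console.log(dominantTone(sample).tone_id); // "anger"
```

A helper like this keeps the rest of the application ignorant of the raw API shape, so the response format can change without touching the UI code.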
[Image: Tone Analyser response in the terminal]
From here I went on to map out the flow of the backend application. This was, however, early in the development process, and the flow would change quite dramatically once I decided to use the Watson Assistant. I have included the diagram anyway to show how the process developed later on.
[Image: backend flow diagram]
Moving Forward
I had initially intended to create a server API that the frontend application would connect to. The server would forward user data to the Tone Analyser, receive the detected sentiment back, then pull a reply from a database held in MongoDB and return the chatbot's response. Initially I thought this was a great idea, as it was complex enough for the time frame of the project and would test my technical ability. What I found was that keeping any structure in the conversation would require far more dialog analysis than this model could achieve.
This led to a conversation that highlighted IBM’s Watson Assistant which I mentioned earlier, it would bridge this gap and help keep a controlled flow of conversation between client and chatbot.
What came next
With the inclusion of two IBM technologies the next decision was based on how to connect everything up. When researching the combination of these two products a number of tutorials mentioned the use of Node-Red. In a subsequent 1-2-1 tutorial the module lecturer Lee Scott had mentioned Node-Red as well. Using Node-Red to bolt the two applications together and create a flow made the process much simpler, not having to create a server API to deal with all the requests that would be needed.
A video demonstrating the Node-Red Flow and how it connects up to the project.
Note: Sorry about the mono audio in your right ear.
Developing the Node-Red application also presented many new and welcome challenges. The most streamlined way of delivering the application was to upload it to the IBM Cloud, used in combination with a new tool called Docker. Docker allows you to create virtual containers for applications to run in, helping them run in any development environment (documentation: https://docs.docker.com/). As well as using Docker to run the application locally before uploading it to the IBM Cloud, I needed to familiarise myself with the IBM Cloud CLI (Command Line Interface). Though something new to learn, I found the experience beneficial to my personal development, and I expect Docker will be useful for future projects.
The Node-Red interface is intuitive and allows for quick integration of the IBM APIs that are key to this project's success. I have attached video documentation running through everything I used during development inside Node-Red.
The next addition to the Node-Red flow was HTTP request handling, which would allow the frontend to interact with the backend and deliver the experience to the user. I've used Postman a number of times before to test API endpoints, and this time was no different. Once the request options were added, the Node-Red application effectively became an API. I cover this briefly in the video documentation, but the message is sent via a POST request and the response contains two arrays: one with the sentiment analysis and the second with the return response from the chatbot.
Postman is a helpful tool for checking that everything works early in development, and it helps when designing the frontend application that has to interpret the information and display it appropriately. https://www.postman.com/
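To make the two-array response concrete, here is a small illustrative helper for unpacking it on the frontend. The payload shape and field names below are assumptions sketched from the description above, not the project's exact response format.

```javascript
// Sketch of handling the Node-Red endpoint's reply, assuming the
// two-array shape described above: one array of detected tones and
// one array of chatbot response strings. Names are illustrative.
function unpackReply(body) {
  const [tones, replies] = body;
  return {
    sentiment: tones.length ? tones[0].tone_id : "neutral",
    reply: replies.length ? replies[0] : "",
  };
}

// Example payload, mirroring the POST response described above.
const body = [
  [{ tone_id: "anger", score: 0.9 }],
  ["Oh, calm down."],
];

const { sentiment, reply } = unpackReply(body);
console.log(sentiment); // "anger"
console.log(reply);     // "Oh, calm down."
```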
[Image: Postman request and response]
Frontend
The frontend research was not undertaken until later, with the backend functionality being key early on. This was also because I had a clear idea of the technology I wanted to use for the frontend. I decided that the only piece of frontend technology required was React; developed by Facebook, it has become one of the most widely used frontend libraries for web-based applications.
React: https://reactjs.org/
Why React
Having used React during my undergraduate degree and being extremely interested in how it works, I began to look at the best way of using it when creating the application's frontend. The key concept behind React is composition: it allows you to break the UI into lots of smaller components, making it easier to organise and helping split complex logic components from purely visual components.
To illustrate this workflow, I've attached two screenshots: one shows the chat component, which contains the complex logic; the second shows a purely visual component that takes data from the chat component. The information is passed between components using props, a core React pattern that lets you send data down through your composition of components.
“Conceptually, components are like JavaScript functions. They accept arbitrary inputs (called “props”) and return React elements describing what should appear on the screen.” React Website.
Chat component screenshot - Logic Component
Message component screenshot - UI Component
As you can see in the chat screenshot, it contains a component called <Messages/>, wrapped inside the map function, which creates a new instance of the Messages component for every message in the array of user and chatbot messages. The Messages component simply takes the information passed from the Chat component as props (that prop being the "message" seen at the top) and uses that data to display the new messages inside the chat interface. I will attach a number of screenshots with the code and rendered component side by side to help illustrate my point.
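Since the screenshots aren't reproduced here, the pattern can be sketched framework-free. Plain objects stand in for React elements, and all names are hypothetical rather than taken from the project's actual components.

```javascript
// Illustrative sketch of the parent-maps-over-children pattern:
// "Chat" maps over the message array, creating one "Messages"
// child per entry and passing each message down as a prop.
// Real React would use JSX/createElement; plain objects stand in.
const Messages = (props) => ({ tag: "li", text: props.message.text });

const Chat = (props) =>
  props.messages.map((m, i) => Messages({ key: i, message: m }));

const rendered = Chat({
  messages: [
    { sender: "user", text: "Hello?" },
    { sender: "bot", text: "What do YOU want?" },
  ],
});

console.log(rendered.length);  // 2
console.log(rendered[1].text); // "What do YOU want?"
```

The key point is that each child is a pure function of the prop it receives, which is what makes the chat list easy to re-render whenever a new message arrives.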
This kind of modularity that react offers helps keep each section separate from each other and helps with the flow of the application. It is also very helpful for readability and doesn’t make my head hurt as much.
Notable Mentions
Throughout this process I was working with a lot of different data types, and I found at times this could be difficult to manage. One addition to the React library that I found super useful was 'prop-types', which helps clarify what data type a specific prop will be. This made debugging the application considerably easier. Of course, statically typed languages don't have this issue, but since JavaScript is dynamically typed, its flexibility can sometimes be its downfall. https://github.com/facebook/prop-types
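The prop-types library itself isn't reproduced here; instead, this is a simplified hand-rolled sketch of the idea it implements: declaring the expected type of each prop and collecting a warning when a component receives something else. All names are illustrative.

```javascript
// Simplified sketch of what prop-types does at runtime: compare
// each received prop's type against a declared expectation and
// collect warnings for mismatches. Not the real library's API.
function checkProps(componentName, propTypes, props) {
  const warnings = [];
  for (const [name, expected] of Object.entries(propTypes)) {
    const actual = typeof props[name];
    if (actual !== expected) {
      warnings.push(
        `Warning: ${componentName} expected prop '${name}' ` +
        `to be ${expected}, got ${actual}`
      );
    }
  }
  return warnings;
}

// A Messages-like component expecting a string "message" prop.
const messagesPropTypes = { message: "string" };

const ok = checkProps("Messages", messagesPropTypes, { message: "hi" });
const bad = checkProps("Messages", messagesPropTypes, { message: 42 });

console.log(ok.length);  // 0 - valid prop, no warnings
console.log(bad.length); // 1 - number passed where a string was expected
```

In development builds the real library logs these warnings to the console, which is exactly what made mismatched data easy to spot while debugging.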
Another extra that proved helpful in the development process was ESLint, which helps with overall code consistency and the removal or identification of bugs. https://eslint.org/
Analysis
Though I was familiar with React from previous projects, I learnt a lot of new techniques during this one, which ultimately benefited the project. Really harnessing the modularity and composition of React made working with it much easier. Taking complex logic away from specific components helped me think about the application as a whole. It was also beneficial in reducing the chance of errors, and in addressing errors without having to trawl through huge blocks of code.
Overall Analysis
The entire technical approach was my preferred area, in that I could continue to develop and learn new, emerging technologies. The combination of React and Node-Red was seamless, and the modularity of React worked really well for dealing with all the data received from the IBM systems. At a number of moments I did find myself rather stumped and frustrated, turning to helpful resources such as Stack Overflow and Medium blogs. The continued use of the Kanban board throughout the technical development also helped me stay on track and avoid focusing on unnecessary features, not to say that didn't happen on a number of occasions.
Peer Testing - Feedback
During the development process there were a number of peer feedback sessions that helped shape the next steps. The final peer review sessions highlighted a number of key issues with the application that I had to address. The first and initially unsettling problem was that the app kept crashing when everyone began to use it. Luckily, looking through the Watson Assistant documentation, I saw that the Lite plan only allows 2 concurrent users. Since the application is still a prototype this is not as problematic as it first seemed, so I made a note in the documentation on GitHub that it can only be run by 2 users simultaneously.
During this peer review a questionnaire was created using TypeForm, the form is available here: https://benjaminlangham20.typeform.com/to/R80oDhwY
[Image: TypeForm questionnaire]
The questions during the peer feedback session focused on the aesthetics of the application, with all testers finding it appealing. The feedback was positive in this sense, highlighting the playfulness of the application and also its functionality. One tester said: "Looks great and impressive that you've got any of it working really! I managed a couple of conversations with it."
Other feedback suggested improvements to the chat capability, this would ultimately change a number of key design decisions. Thankfully due to the setup of the application, any change in the data structure or the data was easy to implement.
What changed
After the peer review testing, it became apparent that the application needed to guide the user to the destination: as a prototype, the depth of data and research needed far exceeded what was achievable within the time frame. This led to multiple conversations regarding a user script for testing purposes.
The flow of the conversation would need to be guided from start to finish, but with some leeway. Having a basic script would help reduce the number of possibilities the Watson Assistant would need to deal with. This removed the need for a really extensive collection of conversational data, which had proved very difficult to build during development. The guidance script, available to any tester of the product in the readme on GitHub, would allow for a controlled conversation flow while still giving the user freedom, with a number of different phrasings of each sentence; Anger Bot would also feature a number of different return responses.
Using this new and improved setup helped with the feasibility of the application and also allowed a new UI element to be added: a set of suggested responses for the user to click. Given the flexibility and modularity of React, it was easy to add a new component that displayed the suggestions.
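The suggested-responses idea can be sketched as a pure state update: clicking a suggestion appends a user message to the chat, returning a new array rather than mutating, as React state conventions require. The names and suggestion strings below are illustrative, not taken from the project.

```javascript
// Illustrative suggested-responses sketch. Clicking a suggestion
// appends it to the message list as a user message, immutably
// (matching React's useState update rules). Names are hypothetical.
const suggestions = ["Why are you so rude?", "That's not helpful.", "Goodbye."];

function sendSuggestion(messages, suggestion) {
  // Spread into a new array; the original state is left untouched.
  return [...messages, { sender: "user", text: suggestion }];
}

const before = [{ sender: "bot", text: "What do you want?" }];
const after = sendSuggestion(before, suggestions[0]);

console.log(after.length);  // 2
console.log(after[1].text); // "Why are you so rude?"
console.log(before.length); // 1 - original state is unchanged
```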
[Image: suggested responses UI]
Critical Reflection
Development
Overall the requirements for the project were met, with a prototype created and hosted via Netlify. The development process was complex enough to remain challenging throughout, though a number of stages required considerably more time than others. The coding itself was relatively easy, having spent time using React beforehand; it was combining React with the Watson Assistant that proved difficult.
Positives
There are areas of the project that I think worked really well. The UI fits the context of the project and delivers a nice playful experience. The project also enabled me to learn a plethora of new technologies, such as Node-Red and all of the IBM products, as well as Docker and IBM CLI functions. The overall experience of making this project was positive and there are a number of skills learnt that have informed future iterations of Anger Bot.
Failures
However, a number of stumbling blocks and reflections highlighted areas for improvement. I found throughout the entire process, even with the shift to a guided script for testing purposes, that Watson Assistant was fundamentally geared towards another purpose. This made it difficult to fully achieve the goals of this project, and the reflection will lead to a change in future developments that I mention in more detail later. Another area for focus would be the overall scope of the project. Though it was helpful to move through the development process understanding and shifting certain ideas in different directions, it would have been more beneficial to spend time during research understanding the limitations of the chosen technology. Planning out the entire dialog route, with all possible conversational branches, was not achievable in this project's time frame and would also require a number of extra resources.
Though it was discussed previously that the visual component would react to the user's conversation throughout the scripted dialogue, ultimately this is where my skills fell short and would need to be greatly improved. I began by recreating the Anger Bot character in SVG format. Once created, it was imported into React and I attempted to manipulate it using Framer Motion, my chosen animation library. What I quickly discovered was that the entire process was very difficult and animating an SVG required a lot of time. Though I was disappointed not to fulfil one of the more playful features, it did highlight that more research was needed to find the best solution for the character.
Future developments
This project has for the most part been an interesting development process, though there have been a number of failures. The prototype is substantial enough to highlight the playfulness of the chatbot. I am planning to continue developing this project after the module is assessed, and I have decided to take a rather different approach for Version 2: changing the underlying chatbot functionality to something created in PyTorch, using a generative deep learning model. This would enable the project to analyse user input and generate responses itself, automating the process and hopefully producing better, more natural results.
Personal reflection / struggles
When starting the project there was a lot of motivation, with initial testing going well in the first week and the critical context for the application quickly established. Unfortunately, with everything as it stands at the moment, the speed of development did crawl to a halt over the Christmas period. This led to the project suffering in terms of the final prototype, with the level of interaction limited. I believe that with further development and the use of the new technologies described, Anger Bot could be brought from its prototype phase into a fully working application.