edwardsg491-blog
COMP 491-01
17 posts
Greg Edwards's school blog for 2019-2020 Senior Seminar w/ Professor Farhan Siddiqui
Facial Recognition Technology (2/11/20)
Note: Date formatted in MM/DD/YYYY
This is the first of the student-led, in-class discussions for the semester. From this point on, small groups of three students will lead a discussion about a given topic. These can vary in form and conduct, depending on the whims of the discussion leaders. This week, my two peers and I led a discussion about the ethical implications of facial recognition technology. Our chosen approach was to conduct the discussion as usual, but with more emphasis on highlighting key points of the arguments being presented, pressing the students on particular distinctions, and posing follow-up questions after substantial discussion. This was intended to prepare them to debate one another over a final question, but we didn't get the chance to conduct that closing debate because of how long it took to cover the earlier questions.
Overall, I think it went very well. There was less repetition among group members than usually occurs in the regular discussions, and some interesting ideas were shared and debated during discussion. Even though we didn’t get to conduct the debate that we had planned, I think we got our peers thinking critically enough about their own inclinations. It felt accessible and like everyone was able to provide their own perspectives on the issue at hand.
The part of the discussion which most surprised me is that the majority of the class's response to the first question was negative. That is, after discussing the question among themselves, they did not report thinking that the right to protect the information of one's face is guaranteed by the Social Contract. In writing the question, we thought it would be more likely for people to think that it was. Our reasoning was that people would instinctively want to believe - and expect others to believe - that their fellow members of society would respect that right if it were expressed. With regard to facial recognition technology, this would mean that, in today's society, there is an expectation for users of the technology to be restricted and/or to allow their subjects to "opt out", so to speak. Instead, the common view my classmates presented was that members of modern society, especially the young, are too flippant with this right and sacrifice it too easily. Essentially, pictures and videos of oneself (and others) have become such a common public commodity that one is practically expected to sacrifice said right. The persistence of malicious use of that information - the "violations" mentioned in the question - was certainly acknowledged, but it didn't seem to alarm them enough to sway them from their realism-based position.
I hadn't considered this perspective, so while I listened to my classmates' various points supporting this dominant view, I had to modify my approach to the question. As a follow-up to the discussion of the first question, I posed a counter-question along the lines of "If facial recognition technology purportedly does not violate any rights guaranteed by the social contract, then what does one who has been the victim of malicious activity using facial information do to protect oneself? Can they claim that their rights have been violated?" One response, after another small bout of in-group discussion, was that this relates to the difference between legal rights and moral rights - that non-explicit gray area of conflict in which legality and (socially subjective) morality might be at odds. Perhaps, in this case, one could argue that their legal rights have been transgressed, but that doesn't necessarily mean the act would be considered morally wrong. And since this technology will continue to evolve, and will likely improve in functional capability more quickly than we can write legislation for it ("The Pacing Problem", covered in last week's discussion on automated vehicles), we should expect the majority of malicious activity to fall within this gray area. I think there are some significant problems with this approach. For one, it seems to describe a very particular type of malicious activity that does not fully encapsulate the range of possible effects. The current applications of "deepfakes" in fraud and pornography, the high bias rate coupled with widespread implementation, and the most common issue - the loss of privacy - are just some examples of malicious activity with consequences far more serious than what this gray-zone approach seems willing to let lie. Also, the pacing problem in and of itself is objectively that - a problem. While some might disagree, I don't think it's something we should rely on as a way to sweep moral dilemmas under the carpet.
It is certainly an interesting debate. Facial recognition technology seems like one of the many frontiers of a new technological revolution, and the pacing problem, as mentioned before, is quite disconcerting in this regard. I personally have a lot of trepidation about this technology in particular. Another point we discussed is that facial data seems to be a much more personal form of biometric information than most. It is for this reason that consent and, once again, the option to "opt out" should be paramount. I've seen many technologies emerge as ostensibly optional while simultaneously working to render the alternatives obsolete. For someone like me, who bears a lot of apprehension about the misuse of this widely available personal information, this is a pretty formidable possibility. I chose this topic because it's one that I am quite wary of and felt that we would all benefit from discussing rigorously, and I hope that we were able to accomplish that, if only a little.
The readings and preparation material from this round of discussion were:
How Does Facial Recognition Work? by Andrew Heinzman on How to Geek
A Gentle Introduction to Deep Learning for Face Recognition by Jason Brownlee on Machine Learning Mastery
Facebook Research: Designing a face recognition application for people with visual impairments by Lindsay Reynolds and Shaomei Wu
How Ethical Is Facial Recognition Technology? by Yaroslav Kuflinski
Sections 2-5: Ethical aspects of facial recognition systems in public places by Philip Brey. Journal of Information, Communication & Ethics in Society; Bingley Vol. 2, Iss. 2, (2004): 97-109
The questions covered during class discussion were:
Is the right to protect the information of your face guaranteed by the Social Contract Theory? If not, explain why you think so. If so, what type of right is it? How does Facial Recognition Technology violate this right? What needs to change in order for this violation to be resolved? Does it need to be cultural, legal, or come from some other source?
What are some of the benefits and drawbacks that facial recognition could produce? Is facial recognition morally right or wrong under Act Utilitarianism and Rule Utilitarianism?
The best forms of Facial Recognition Technology are claimed to be around 97% accurate, but this changes among different ethnic and racial groups. Fingerprint scanners are commonly used for biometric identification and have around 98% accuracy. How do these two technologies compare? Do you consider one more personal than the other?
Ethical Issues with Autonomous Cars (02/03)
Note: Date formatted in MM/DD/YYYY
This is the second of two professor-led, in-class discussions for this semester. Beginning the following week, we will have student-led discussions where members of the senior seminar class form groups and guide their classmates through a discussion in whatever way they see fit. This week, we talked about ethical decision making regarding autonomous cars.
Most of the discussion was not about whether the usage of the cars themselves is ethical. At one point, when discussing responsibility and blame, we did briefly note that who was in control of the vehicle during an accident does seem to affect who may be deemed morally responsible. But we only mentioned comparing autonomous cars' accuracy to that of human drivers. This certainly is a topic that should be a focus in circles discussing autonomous vehicles or the automation of transportation generally. Since we are really only on the cusp of this technology, there is a lot of attention and scrutiny surrounding accidents and mechanical failures involving automated vehicles. There are statistics from the manufacturers of these vehicles about incidence rates, but even from just these few isolated incidents there is a lot to talk about. For this discussion, we proceeded under the assumption that these vehicles are significantly more accurate than human drivers.
In addition, we decidedly did not consider the ethical implications of the automation of vehicles leading to potentially new forms of terrorism, like car hacking. I'm glad we didn't, given how tough it was for us to answer the other questions already. But this is certainly a topic worth discussing, and I would hope that it is a discussion being conducted in legislative and judicial circles. With every new technology comes the possibility of malevolent use. Was it morally right to introduce gunpowder to armed forces? What about the Manhattan Project? I can sense that this will be a recurring (perhaps unspoken) undercurrent throughout the discussions that follow this semester. We will be continually discussing new technologies and how they fit into society - which will always include considering the possibility of new kinds of crime.
Thus, the majority of our discussion centered around life-or-death incidents in which these programmed vehicles have to make ethical decisions that mirror questions that have circulated for centuries. For instance, in a situation where a car must swerve and the only options are to hit a small girl or an old woman, what should the car do? In my opinion, this discussion did not prove very fruitful. I do think it is important to examine these philosophical questions through a modern lens in a classroom setting where the chance for multicultural perspective-sharing is high. But I will not pretend that this discussion was anything more than a class about the Trolley Problem in a 21st-century disguise. Much of the reasoning provided was circular and repetitive because of uncertainty (and possibly even discomfort) with making decisions that amount to universal rules for handling difficult situations.
Another assumption under which the discussion operated - unnoticed until the professor pointed it out - was that the AVs making these assessments and judgments actually have the technological capability to do so. Many of the situations we described involved split-second decisions, or decisions that need to be made far faster than a rational human driver could manage. To assess such a situation, compute an ethical result, and enact an action in time to actually bring about that result would require incredibly fast and robust technology. AVs have come a long way since their introduction, but it is not at all certain that these vehicles have this capability just yet.
All in all, I think the majority of our discussion of these questions amounted to reiterated variations of "we should treat all beings equally" and "who is responsible for making these choices?". It went unsaid, but I think we all agreed that the Utilitarian approach is flawed here. I usually went a step further to present conflicting ethical considerations during our small-group discussions of the questions. At one point we talked about using moral rules for the AV programming, and I countered that the universalization of most rules would likely lead to a logical contradiction. Take, for instance, the example from the reading of an autonomous vehicle choosing to hit a motorcyclist wearing a helmet rather than one without a helmet. Were this rule always applied, it would likely lead to motorcyclists no longer wearing helmets, which is more unsafe for everyone regardless of the situation. The same probably holds for pedestrians. One example we considered in class: if a group of pedestrians was jaywalking, but the autonomous vehicle was programmed to minimize the loss of human life, the car might veer off to avoid them and hit someone who was not violating the law and was, perhaps, just waiting for a bus on the sidewalk. Rules like these (minimizing the loss of human life) would likely produce socio-behavioral effects for pedestrians as well, like traveling in groups, since they would theoretically share the road with these autonomous vehicles.
One last thing that I would like to note (since this post is getting pretty long) is that I find it unrealistic to believe that any particular body of people should be solely responsible for these technologies. No matter whom one considers, there are conflicting interests, varying moral implications, and a nonconsensual assignment of too much responsibility. For example, a business's interests might misalign with those of legislators: a business might consider crashing into something like the Wall Street stock exchange too detrimental to tolerate, while legislation might require an autonomous vehicle to preserve human life at all costs. Also, imagine a future where the programming of a vehicle's "preferences" is conducted by the car owner - and then imagine the trauma introduced if that owner is involved in a crash. Does it seem like too much responsibility for a rational agent now?
The readings and preparation material from this round of discussion were:
Autonomous Driving: Technical, Legal and Social Aspects by Markus Maurer, Chris Gerdes, Barbara Lenz, and Hermann Winner, Chapter 4, "Why Ethics Matters for Autonomous Cars" (Patrick Lin)
A Moral Map for AI Cars by Amy Maxmen, nature.com
The social dilemma of autonomous vehicles by Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan
The Pacing Problem, the Collingridge Dilemma & Technological Determinism by Adam Thierer
The questions covered during the class discussion were:
Should autonomous vehicles (AVs) use utilitarian models that will sometimes require sacrificing the vehicle's passengers? Would you want to own such a vehicle? Would you ride in one operated by a car service (e.g. Uber)?
In situations where an accident is inevitable, what factors should an AV use, or not use, when targeting the accident victims?
Assuming AVs will contain utilitarian ethical models, should they be determined by the manufacturer? Legislation? Owner tunable?
Thierer says "We shape our tools and then our tools shape us." What does he mean? Can you think of examples that fit this statement?
Introduction to Ethics (01/28)
Note: Date formatted in MM/DD/YYYY
This is the first discussion of the second semester of my senior seminar class, where we will once again hold multiple in-class discussions throughout the semester, interspersed with our team capstone projects. The first two discussions - this one and the next, held on February 4, 2020 - will be facilitated by the professor, like all discussions from last semester. This semester, however, every discussion beyond those first two will be facilitated by small groups of students in the class. The professor has provided us with a list of her suggested topics to cover in our respective discussion periods, but the groups are free to cover any topic of their choosing. On their own, they must also find preparatory readings and other materials for their peers, as well as create questions from which their peers must choose and to which they must write a thoughtful response.
A particular focus for this semester's discussions is the study of ethics. Each discussion must cover three main components as a class: a general academic understanding of the topic, case studies from contemporary media, and an ethical analysis of the topic. In preparation for this discussion, we covered the most influential and ubiquitous ethical theories posited over the history of the field and how to apply them philosophically to ethical dilemmas. During class, we talked about our responses to the questions, which asked us to apply those theoretical approaches to hypothetical case studies dealing with the ethics of modern information technology and biometric security. The main ethical theories we read about were subjective relativism, cultural relativism, divine command theory, ethical egoism, Kantianism, act utilitarianism, rule utilitarianism, social contract theory, and virtue ethics - with only the latter five identified as workable.
As a whole, I thought the discussion went very well. Despite the monolithic introduction to the topics at hand, the majority of the class seemed to have a remarkably good grasp of these theories and how to apply them appropriately to the presented dilemmas - setting them in juxtaposition to, rather than conflating them with, their own personal senses of morality. In fact, there were a few times when my peers' responses surprised me and raised perspectives I hadn't anticipated. I'll take the time in this post to touch on a few of them.
The first came during my group's discussion of the first question, which does not focus on a particular scenario but instead on the concept of privacy and how social contract theory could supply an argument for either the morality of privacy or the lack thereof. There was some dissent among the class about whether privacy would be considered a positive right or a negative right, but my group settled on the latter because of our definition, which I found to be of particular interest and opened to the class for dissection. We defined privacy as the moral right to protect information without being required to divulge it to others. One can see that this is a negative right, given that in order to provide it, the agent protecting the information must simply be left alone. However, the crux is the inclusion of "information" in this definition. It is easy to produce examples that fit this brand of privacy - browser history, whereabouts, secrets, etc. - but there are also some confounding ones, like being able to look into one's home, or an action the agent prefers to keep private, that don't fit the category of "information" quite as nicely but are still widely regarded as part of a right to privacy.
We didn't come up with a solution to this particular issue, but it proved useful later when discussing what seemed like a solution to the moral dilemma of denying the right to privacy. We claimed that as long as consent is brought into the equation, the dilemma of when it is moral to breach the right to privacy seems to soften in many social situations. This also came up during our discussion of the third question, which asked us to evaluate a situation in which a malevolent "worm" program (the Blaster worm) was introduced into others' personal computers by exploiting security holes, and a similar but benevolent worm (the Nachi worm) was introduced to exterminate the first worm and patch the holes. Under Kantianism, we agreed that under either formulation of the Categorical Imperative the act of introducing the benevolent worm could not be seen as morally right, even with good intentions. This is most obvious under the second formulation: the worm treated the owners/users of those personal computers as means to an end, namely the elimination of the Blaster worm and the patching of security holes. However, we identified - first in our group and then as a class - that requiring the introduction of the worm to be consensual might fix this morality issue under Kantianism.
It seemed universally agreed that, socioculturally, the right to privacy also comes across as a limited right. In many contexts, the right to one's privacy, when characterized as the right to retain information, can morally be overridden or even revoked. Our example was a time-sensitive situation in which information is direly needed; in that case, it seems morally acceptable to interrogate the source of that information. Another example was cross-examination in a court setting. However, I expressed caution at this point of the discussion because the true acceptance of these situations seemed to me to rest on a complicated mixture of legal and moral standards - something I wished to avoid considering further for the sake of analytic simplicity.
The readings and preparation material from this round of discussion were:
Ethics for the Information Age by Michael J. Quinn, Chapter 2, “Introduction to Ethics”
'Black teenagers' vs. 'white teenagers': Why Google's algorithm displays racist results by Ethan Chiel 
The questions covered during class discussion were:
On p. 83, it is suggested that "the right to privacy" should be added to "life, liberty and property" when applying Social Contract Theory. What is privacy? Should it be a right? If not, why not? If so, is it a positive or negative right? An absolute right or a limited right?
Consider the scenario in 2.8.2 (p. 78) from the perspective of Kantianism and Act Utilitarianism. Was the release of the worm morally right under each framework?
Consider the scenario in 2.10.2 (p. 91) from the perspective of Rule Utilitarianism and Social Contract Theory. What would each of these frameworks require that Josh do?
Imagine you were in the Google CEO's position and had to decide whether or not to alter the image search algorithm. Give an argument for this decision from the perspective of each of the workable ethical frameworks: Kantianism, Act Utilitarianism, Rule Utilitarianism, Virtue Ethics and Social Contract Theory. Do they all give the same answer?
FOSS Success Stories & Criticisms (12/10/19)
Note: Date formatted in MM/DD/YYYY
This is the last discussion that will be conducted in class this semester. Next semester we may conduct more, in which case I will continue to update this blog with posts. In this discussion we focused on a couple of examples of FOSS success stories, as well as some criticisms of the cultural structure that has formed around FOSS since the beginning of the FOSS movement.
The two major success stories that we talked about were Red Hat and LinkedIn. Matt Asay also mentioned Amazon Web Services and Microsoft Azure in his TechRepublic article as modern giants of cloud computing. But much of the discussion centered on Red Hat. The business model that has driven Red Hat's success as an open source company is based, as Matt Asay deftly describes, on selling support services that aim to reduce the complexity of software and packages backed by Red Hat Linux. Essentially, they retain their objective to provide free code but earn revenue by selling services - maintenance, support, installation - to those who rely on Red Hat Linux and would otherwise find these tasks overly complex. Red Hat is the most ubiquitous example of a successful open source company. However, as Peter Levine points out, revenue-wise it is nowhere near the caliber of the corporate giants that utilize the very code Red Hat provides. A large part of his criticism is that Red Hat's business model, while highly successful, "does not enable adequate funding of ongoing investments". The funds originate from a service that customers could in principle do without, and are neither sufficient for nor designed to feed into the research and development processes that would be crucial to any comparable proprietary company.
One thing I brought up during the discussion is that, under the Red Hat model, the packaged software being provided needs to be just complex enough that purchasing the support services is more expedient - and thus profitable - for companies that want to use the code. However, this conflicts with the open source ideals that Red Hat so fiercely clutches to its chest and profusely touts. This could be because we do not yet have the proper mindset for navigating these complex and seemingly conflicting interests within the context of our heavily proprietary economic systems, sure. Or it could be that I don't know enough about economics or about the successful business models of other open source companies. Either way, I would suggest that it is pertinent for any company involved in OSS that, as Peter Levine says, "wants to be the next Red Hat" to consider these aspects of the model it seeks to emulate.
The part of the discussion about the criticisms of FOSS raised by Ashe Dryden was also particularly interesting. In discussing the term "meritocracy", there was a lot of talk about whether meritocracy genuinely governs the culture surrounding FOSS, or whether the opposite might be true. This was particularly interesting to me, as I had only considered the former. Some of my peers thought that the way FOSS has developed is not enough to constitute a "true meritocracy", whereas I had already accepted that FOSS was based on meritocracy and focused instead on critiquing how the meritocratic system had wronged FOSS communities. It may be that those who spoke of a "true meritocracy" misunderstand the term's meaning and structure, or that I and my like-minded peers accepted too readily what Ashe Dryden wrote.
The readings and preparation material from this round of discussion were:
What is the Secret of Red Hat's Success? by Ruth Suehle
Why There Will Never Be Another RedHat: The Economics Of Open Source by Peter Levine
Red Hat's open source success story built on killing complexity in IT by Matt Asay
The secrets to LinkedIn's open source success by Matt Asay
The Ethics of Unpaid Labor and the OSS Community by Ashe Dryden
(Supplemental) The new struggles facing open source by Matt Asay
(Supplemental) The top 14 new open source projects on InfoWorld
(Supplemental) Why Linux creator Linus Torvalds doesn't really care about open source
(Supplemental) Ganesh Prasad: Open Source-onomics: Examining some pseudo-economic arguments about Open Source
(Supplemental) Open Source Stories, a series of short films
The questions covered during class discussion were:
What is Red Hat's model? How does it work? What is Peter Levine's argument for why "the next Red Hat" will never happen? Do you agree?
What is a meritocracy? Does it exist in FOSS? The tech industry? In what ways yes? No?
What are some factors that would cause representational imbalances in FOSS that are larger than in traditional companies? How could FLOSS improve? What have your experiences/perceptions been like with your projects?
Is it fair to consider FOSS contributions for job candidate screening? What have your interview experiences been like?
Software Testing (11/26/19)
Note: Date formatted in MM/DD/YYYY
I was absent for this discussion, but still wanted to provide the prefatory materials and questions answered during the discussion.
The readings and preparation material from this round of discussion were:
What is Software Testing? Introduction, Definition, Basics & Types
7 Software Testing Principles: Learn with Examples
Introduction to Software Engineering/Testing (WikiBook)
(Supplemental) Software Testing Overview by David Young
(Supplemental) “Introduction to Software Testing” by Paul Ammann and Jeff Offutt (pp3-24)
(Supplemental) Software Testing Dictionary on Tutorials Point
(Supplemental) Software Errors Cost U.S. Economy $59.5 Billion Annually, a report from NIST
The questions covered during class discussion were:
What are black box, white box and gray box testing? What are the advantages and disadvantages of each? What are Verification and Validation? When does each happen?
What are the 7 testing principles? What does each mean? 
What is tested by each of the different levels of testing? Which are most important?
What are static and dynamic testing? What are they used for?
What are manual and automated testing? What are they used for? What are unit testing, integration testing, regression testing? What are they used for?
H/F/OSS Motivations (12/06/19)
Note: Date formatted in MM/DD/YYYY
In this round of discussion, we read and talked about the motivations that a business might have for getting involved with an H/F/OSS project. Overall, this discussion was neither particularly satisfying nor substantial, because much of what was discussed was not new material. We rehashed many points made in previous rounds: in discussing motivations for developer involvement in FOSS projects, we brought up points from our early discussions of FOSS; in discussing sustainable OSS development, we brought up points from our previous discussion on Software Processes; and even in the later parts of the discussion we returned to points from the FOSS Licensing and Software Processes discussions. In the future, I would suggest redesigning what this round of reading and discussion should focus on, gauging what would be most beneficial to cover at this point in the semester.
That being said, there was at least some new material worth discussing. The central question at hand - whether a business should get involved with FOSS projects - seemed rather intuitive, but there were still some interesting perspectives brought out during discussion. For instance, peers postulated that the relationship a company has with its "customers" is radically different in this type of structure. The company works more closely with its developer base and thus has a better-tuned sense of what its users want, but this is not necessarily a good thing. On the one hand, the company can make strides toward catering to its customers and making its product more marketable. On the other hand, if the desires of the customer base and the mission of the company misalign, these conflicting interests could cause a host of issues with the direction of development.
This brings up another interesting point: depending on the forkability of the project as determined by the license, the company relinquishes an aspect of control over it. The reliance of a company with a specific goal or set of goals (likely including making a profit) on its developer base for speed of progress gives that group influence over the direction of development. Speaking to the extreme, to prevent disagreement from evolving into mutinous sentiment among that user base, the company must tread very carefully, maintain constant, respectful, proper communication, and at times be willing to compromise. One of my group members made a point during this portion of the discussion that I found particularly well put - that there is an inherent entanglement of ethics and legality when mixing corporate principles with those of the open source software movement. What practices are just? Whose work is credited? Who is considered an employee? To this last question, our professor offered a rather wry note of gratitude: "Thank goodness the US tax structure does not consider money made to be money earned".
The readings and preparation material from this round of discussion were:
Toward an understanding of the motivation of open source software developers by Yunwen Ye and Kouichi Kishida
Build Something: The computer science answer to Habitat for Humanity by Annie Murphy Paul
Open Source Case for Business: Advocacy from the Open Source Initiative
The Business of Open Source Software: A Primer by Michael (Monty) Widenius, Linus Nyman
(Supplemental) Intrinsic motivation in open source software development, a report by Jurgen Bitzer, Wolfram Schrettl and Philipp Schroder
(Supplemental) Setting Up Shop: The Business of Open-Source Software by Frank Hecker
(Supplemental) The Magic Cauldron by Eric S. Raymond
(Supplemental) Introducing “Business Source”: The Future of Corporate Open Source Licensing? by Michael (Monty) Widenius and Linus Nyman
(Supplemental) Using Open Source in Business by Frank Wiles
(Supplemental) 5 Successful Business Models for Web-Based Open-Source Projects by Chris Meier
The questions covered during class discussion were:
What motivates individuals to contribute to H/FOSS projects? How can an OSS project have a sustainable development?
What are the advantages and disadvantages to companies of adopting a FOSS model? What are some common business models for FOSS? How do they work?
Why is FOSS widely considered more reliable or secure than closed source software?
Software Processes (12/03/19)
Note: Date formatted in MM/DD/YYYY
In this discussion, we covered different established processes and models for software development. These differ from software patterns and architectures, which describe code structure, in that they describe the outward approach to software development and how to organize it across time and developers. We discussed many very different models, which we divided into sub-types of general software processes, then compared and contrasted them to assess the appropriateness of each in different situations. The discussion went well - I felt that my peers and I were consistently engaged and found the concepts digestible enough to provide substantial and insightful input. The numerous readings were well-rounded and suitable for an introductory discussion of software processes. In particular, the variety of approaches taken for differing reasons was a great launching point for matching techniques and styles to the different needs of development.
I'd also like to note that this round of preparation and discussion harkened back to discussions at the beginning of the semester about H/FOSS communities. In those discussions, we talked about the general structures of communities that form around H/FOSS projects, the types of tools they use to communicate, how the nature of those tools combines with latent structures of seniority and observable skill level to shape how developers communicate, and other subtleties. While this discussion focused on people with largely different motivations (the focus of the next round of discussion) and organizational structure, much of what we discussed felt very similar; we discussed each process in terms of what is done, when, and how, but also who is involved with the process and what they are responsible for.
While initially researching the FOSS project that my team and I have been working with this semester (Jenkins), I had learned about Waterfall and Agile independently, along with their roles in the history of the project's development. This discussion, however, offered a much less chronologically linear perspective on these two distinct processes, treating them instead as live options to weigh against the central theme of the discussion ("Given these circumstances, which process should be used?").
Of the other processes discussed, which broadened my understanding of possible methods for software development, I was especially fond of the Scrum and Test-Driven Development (TDD) processes. Learning about Scrum was a fascinating look into an oft-employed team structure for a small group of highly skilled developers. The structure, while time-consuming, exhibits an interesting mixture of rigidity and fluidity that I'd like to touch upon. Though developers on a Scrum team work specifically to address the stories of users and stakeholders in each sprint, once a sprint begins, new input about additions to the backlog or other story details cannot be accommodated until the sprint ends. Also, despite the pillars of transparency, inspection, and adaptation, there is still a fixed, unchangeable schedule for when the sprint begins and ends and for each ceremony performed within that time frame.
We did not reach the portion of the discussion dedicated to test-driven development, so our instructor requested that we discuss it in our respective blog posts. Test-Driven Development is the process by which tests of what the code is meant to achieve are written first, and must fail, before the code that makes them pass is written. As Simon Allardice put it in his video (cited below), this structure is surprising to those who have never seen it. However, it offers many unexpected benefits, which remind me of the flaws described by Max Kanat-Alexander that we talked about in detail during our discussion on Software Flaws. He emphasized, in describing his three flaws, finding a balance when writing code between over-engineering (expecting too much of what the code will be used for in the future) and not designing code flexibly enough. This is exactly what Test-Driven Development excels at enforcing: by writing tests first, it sets out in very clear terms what the code needs to achieve. There is no confusion about what should be done with the development at that moment.
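To make the cycle concrete, here is a minimal sketch of my own of the "fail first" step, assuming JUnit 5 and hypothetical class names (this is an illustration, not an example from the course materials):

    // Written first: this test fails until Calculator exists and works,
    // which is the "red" step of the TDD cycle.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class CalculatorTest {
        @Test
        void addsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

    // Written second: the simplest code that turns the test "green".
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

Only after watching the test pass would one refactor, keeping the test green throughout.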
The readings and preparation material from this round of discussion were:
The Software Crisis - Georgia Tech - Software Development Process | Video by Udacity
Evidence of the Software Crisis - Georgia Tech - Software Development Process | Video by Udacity
Software Process Models by Thomas L. Alspaugh
What is Agile? | Video by Mark Shead
Intro to Scrum Videos | Video Playlist by Sally Elatta, president of Agile Transformation (Videos 1, 2 & 3)
Test-driven development tutorial: What is test-driven development (TDD)? | Video by Simon Allardice at lynda.com
(Supplemental) Wikipedia - Waterfall model
(Supplemental) Wikipedia - Spiral model
(Supplemental) Wikipedia - Agile software development
(Supplemental) Wikipedia - Test-driven development
(Supplemental) Wikipedia - Scrum (software development)
(Supplemental) The Agile Manifesto
(Supplemental) Embracing Agile by Darrell K. Rigby, Jeff Sutherland, and Hirotaka Takeuchi
(Supplemental) Youtube - Jim Coplien and Bob Martin Debate TDD
The questions covered during class discussion were:
How do the code-and-fix, waterfall, sashimi, spiral, and prototyping methodologies work? When is each appropriate? What are the benefits and drawbacks of each?
What is Agile Software Development? What are the Agile principles? What do they mean? What are the advantages of Agile?
What is Scrum? How does it work? What are story points and how do they work?
What is Test Driven Development? How does it work? What are its advantages? Disadvantages?
Software Patterns (11/19/19)
Note: Date formatted in MM/DD/YYYY
In this discussion, we talked about common patterns of software development. Specifically, we studied the general form of a design pattern, classifications of design patterns, and the 23 patterns set down in the canonical book Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (commonly known as the Gang of Four). The reading provided a good, simple introduction to these concepts. However, I noticed at multiple points during the discussion that the content we covered in preparation proved insufficient for pondering some of the questions posed. I would therefore suggest that a future discussion of software patterns include another preparatory work from a different source, so as to provide other perspectives that participants can compare and contrast.
One such limitation arose when considering the possibility of new patterns, or patterns beyond those we studied. The purpose of a design pattern is to address a commonly found problem in software design and provide a solution through a specified design. However, given that we had studied these concepts as described by one author (of the article) and the patterns set down by this single, albeit rather ubiquitous, group of developers, discussing whether what we studied was exhaustive of all possible patterns felt like grasping at straws. We pondered that new patterns might emerge from modifying existing ones, or that continuing technological advancement begets new problems to address. Perhaps patterns cannot truly work together unless a hybrid is constructed, and then one must consider whether that constitutes a "new" pattern. There were a lot of "maybe"s thrown around. Regardless, because we did not know all 23 patterns in detail, nor understand the process by which they were created, it was difficult to address the conception of new patterns.
One comparison that I had not made while preparing for this discussion was between design patterns and software architecture. Both govern the format of the code and the structures that make up a software project, but they differ in a way we had not explored in the reading. What I learned was that software architecture blueprints the project at a larger scale than design patterns do. Essentially, software architecture is expressed in concepts and ideas about communication between components, while design patterns describe how to structure the components so that they achieve the result the architecture details. A particularly interesting architecture proposed by a peer was one that does not assume uniformity among its components: each problem is addressed by a component dedicated to solving it, with components communicating through a generic, uniformly interpretable input/output format.
We did not reach the portion of the discussion dedicated to dispatch, so our instructor requested that we discuss it in our respective blog posts. Double dispatch is a specific case of multiple dispatch, which is in turn a form of dynamic dispatch. Dynamic dispatch is the process of choosing between polymorphic operations in an inheritance structure; it occurs at run time and depends on the runtime type of the object involved. The kind of dispatch dictates which types are consulted: in single dispatch, only the receiver's runtime type is used, while in multiple dispatch the runtime types of two or more arguments are used. Double dispatch is simply multiple dispatch over two objects. Java does not support multiple dispatch natively, but it can be simulated.
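As an illustration of that simulation (my own sketch with hypothetical type names, not material from the readings), the usual trick - the one underlying the Visitor pattern - is to chain two single dispatches:

    // First dispatch: the virtual call shape.accept(exporter) is selected
    // by the Shape's runtime type. Second dispatch: inside accept, 'this'
    // has a concrete static type, so overload resolution picks the matching
    // export() method - together simulating dispatch on both objects.
    interface Shape { void accept(Exporter e); }

    class Circle implements Shape {
        public void accept(Exporter e) { e.export(this); } // 'this' is a Circle here
    }

    class Square implements Shape {
        public void accept(Exporter e) { e.export(this); } // 'this' is a Square here
    }

    interface Exporter {
        void export(Circle c);
        void export(Square s);
    }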
The readings and preparation material from this round of discussion were:
Introduction to design patterns by Jeff Friesen, Part 1: Design pattern history and classification
Introduction to design patterns by Jeff Friesen, Part 2: Gang-of-four classics revisited
[Supplemental Material] Wikipedia - Software design patterns
[Supplemental Material] A Beginner’s Guide to Design Patterns by Nikko Bautista
[Supplemental Material] Design Patterns Video Tutorial Video Playlist by Derek Banas (Videos 1 & 2)
The questions covered during class discussion were:
What is the purpose of software design patterns? What are their advantages and disadvantages? How do you use them? How are they related to software architectures?
What are some common design patterns? What are they used for? Which ones have we used in classes here?
How do you choose a design pattern?  Can multiple design patterns work together? Where do new design patterns come from?
What is an anti-pattern? Is an anti-pattern used? What is double dispatch?
Software Flaws (11/08/19)
Note: Date formatted in MM/DD/YYYY
This week, we discussed major flaws commonly found in the process of software development, continuing our series of discussions on software development that began with Software Architecture. I thought this was a particularly interesting discussion, and I found I was able to connect with the material and provide more substantial input than usual. I think this is for a few reasons. For one, the concepts felt more attainable given my peers' and my shared coding experience, so I think we were all able to envision what these types of flaws might look like from code we have written and seen. Another reason is that the preparatory content was a series of video chats/interviews, which is very different from the usual online readings. This has the downside of being much less easily referenceable, but intake of the information was much easier for me, and I suspect for many others as well. Lastly, while Max Kanat-Alexander's descriptions of the three types of flaws were very precise and used a lot of key terms, viewed together they seem rather contradictory at a glance, and so questions about how best to strike the tricky balance he describes were lively and filled with possibility. I could tell from the questions my peers posed on the forum before class and during the in-class discussion that this last point was on everyone's mind. For context, Kanat-Alexander's first and third software flaws are writing code that is not yet needed and over-engineering, respectively, and the second is not making code easy to change. As he states, the practices that introduce the first and third flaws assume too much about the future of the software, while the practices that introduce the second assume too little. He therefore introduced the Incremental Development and Design model, in which he suggests that for each new addition (given multiple additions, they should be isolated and treated one at a time) the code design should be reconsidered so that the resulting system looks as though it was designed from the beginning to do what it now does.
In class, we discussed the ways in which this model - per Kanat-Alexander and per our own thinking - can best be implemented to guard against these key flaws. Suggestions included frequent refactoring, establishing I/O requirements up front, prioritizing by usage, and (lightly) using generics. One instance we discussed in depth was the usage and role of comments in code. It was interesting to think about what distinguishes good comments from bad. Obviously, as with any other piece of writing, the audience viewing the code should be kept in mind. But, assuming the audience is code-literate, we deduced that comments can clarify unclear sections of code, contributing to overall readability, but they simultaneously introduce visual noise - so they should be used sparingly, and only in the most critical and important parts of the code. One example of bad comments that I found particularly interesting was comments that describe the abstract locality of classes or methods, such as where else they are used. These comments would likely be helpful to a developer, but such information is usually better documented elsewhere, which moots the comments; moreover, without automation, keeping them up to date becomes overly complicated and unnecessary.
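A small, hypothetical before/after sketch of that guidance (my own illustration, not an example from the talk):

    class CommentExamples {
        // Bad: restates what the code plainly does and hand-tracks call
        // sites; both drift out of date as the code changes.
        // Returns the sum of a and b. Used by Invoice and Report.
        int add(int a, int b) {
            return a + b;
        }

        // Good: records a non-obvious decision the code alone cannot convey.
        // Currency amounts are kept as integer cents rather than doubles to
        // avoid floating-point rounding errors in monetary arithmetic.
        long totalCents(long priceCents, long taxCents) {
            return priceCents + taxCents;
        }
    }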
Another part of the discussion I want to touch upon was the idea of tying the Incremental Development and Design model to FOSS projects. Since the setting of the talk was a conversation between employees of a proprietary company, FOSS was not discussed in context. One peer's first thought on this relationship was that the model seems rather difficult to implement in the FOSS structure. As he put it, FOSS requires many different things to be done and contributed by many different people, so lateral development is unavoidable, and the overhead of coordinating it becomes complicated enough to seem impossible. But the next point brought up offered a new angle of appreciation for bug/issue trackers: with a good bug tracking system, much of that overhead can be automated, reducing the complexity of assigning responsibilities. When the question was brought to the larger class, another student added that a positive side effect of a good bug tracking system is that one can see the frequency of bugs/issues introduced by a feature addition or change, and use that frequency as a signal for judging whether the feature's design should be reviewed. On reflection, we found that though the concept seems daunting, in practice - because of the interconnected systems inherent in a FOSS project, which we covered in previous discussions - it becomes surprisingly seamless.
The readings and preparation material from this round of discussion were:
The Three Flaws in Software Design & How to Avoid Them Video Playlist: 4 Part Video Talk Series from Google Developer Max Kanat-Alexander
The questions covered during class discussion were:
How does Incremental Development and Design work?  How does it prevent the software flaws? How can the Incremental Development and Design approach be applied to FOSS?
What is the purpose of comments? What makes good comments? What makes bad comments?
What is worth designing or planning? What is not? How much effort should we put in to the design?
FOSS Licensing (11/01/19)
Note: Date formatted in MM/DD/YYYY
This week, we had a class discussion on the topic of licenses used in FOSS projects. This topic of legality is one that we had not yet explored in the context of our other discussion topics. However, as we learned, what users of a FOSS project may do under a specific license holds sway over the project's culture, which we have previously discussed at length. Those capabilities are also tied to the spirit of FOSS - a recurring theme since the beginning of this semester.
As in our discussion on Version Control, most of the questions posed seemed to me to tend toward comprehension, which is apt since the material covered a previously untouched subject. Also similarly, many members of the class took the opportunity to discuss with the professor many different, important questions regarding the legality of FOSS licensing that were not necessarily answered by our introductory readings. The difference is that these questions sprang from curiosity for deeper understanding, whereas in the Version Control discussion they arose more from confusion left by the preparatory material.
It is for these reasons that I want to spend the rest of this blog post discussing two things that I learned not from the reading material but from those teachable moments peppered throughout the discussion by the exploratory questions of students.
The first idea we discussed in class that was not explored in depth in the preparatory readings was how exactly the application of a license works. An underlying theme of the entire discussion on licensing - which reminds me of our discussion of Software Architecture - is that the truly exhaustive description of a given action's legality depends on the license in question, and it is important to keep this in mind when discussing any license in particular. However, an important general partition across all common licenses is between proprietary licenses and free licenses. Our professor, in response to the question of how exactly a license is applied, stated that most free licenses are written in such a way that simply applying the license in the usual ways - stating it on the FOSS website, including the license documentation with the source files, etc. - is equivalent to its legal application, whereas proprietary licenses would likely require contacting the proper agencies.
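As a concrete (and hypothetical) illustration of that "usual way" of applying a free license, a project might place the full license text in a LICENSE file at the repository root and add a short notice to each source file, for example:

    // SPDX-License-Identifier: MIT
    // Copyright (c) 2019 Example Contributor
    //
    // This file is part of ExampleProject, distributed under the terms of
    // the MIT License; see the LICENSE file in the project root for the
    // full license text.

The SPDX identifier line is a widely used, machine-readable convention for stating a file's license; the project and contributor names above are placeholders.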
The second insight I gathered from these exploratory, miniaturized sub-discussions concerns using expiration as a tool within a license's terms. This came up in questions about the legality of using code from a project whose license changed at some point after its official release. In trying to understand how seriously the decision to change a FOSS project's license should be weighed, we (implicitly) focused on the consequences of doing so. We noted that developers and users can choose which code to use - the code under the previous license or the code under the new license - and can legally use it in any way the respective license dictates. So, maintainers of a FOSS project cannot legally prevent anyone from redistributing or modifying code under the old license, so long as the old license's terms allow it. However, while maintainers cannot "revoke" that code, they can legally restrict access to it in one of two ways, provided the license terms permit: (1) they can remove all traces of the code (including controlled versions, repository commits, etc.) from before the date of the license change, and/or (2) they can set an expiration date - that is, an expiration of legal usage - for the software within the terms of the old license, which can then be enforced.
The readings and preparation material from this round of discussion were:
Producing Open Source Software by Karl Fogel, Ch. 2 - Getting Started, Section: Starting From What You Have
A Short Guide To Open Source Licenses by Cameron Chapman
Categories of free and nonfree software from the GNU Website
GPL V2 and GPL V3 - A listserve message from Linus Torvalds arguing for why he prefers V2
The questions covered during class discussion were:
Should a license be applied to a software? How is a license applied to a software? How and when should a project select a license? Can a software have multiple licenses?
What is the point of the MIT License? How are the MIT license and GPL different? Can a license be changed later? Which Licenses are applied to your selected projects? Why?
What is Tivoization? How does this relate to the GPL2 vs GPL3 debate?
Software Architectures (10/25/2019)
Note: Date formatted in MM/DD/YYYY
After a couple of weeks' break from in-class discussion, we returned to go over the basics of software architecture together. Our readings, listed below, were quite extensive (in particular, the Carnegie Mellon paper) and provided a thorough acquaintance with the concepts at hand. Something commented upon in the introductory sections of that paper, and which became more evident as we went, is that the architectural structures discussed are not necessarily universally agreed upon. The descriptions of their inner mechanisms were vague enough to convey the general intent of each, but not specific enough to understand deeply without contextual knowledge. Some models I had seen before in abstract representations of computing concepts, but had never seen applied to software design. My main critique of this week's reading/preparation and subsequent discussion is that the time reserved in class did not allow us to discuss the main six models in detail. There was an attempt to distribute the duty of discussing only two of the models each (instead of all six) when the appropriate questions came around, but since we were not assigned any beforehand, we ended up with overlap.
Given, though, that this was my first real introduction to the concepts of software architecture and engineering, a lot of understanding was rounded out during discussion. An interesting distinction - made more during class than in my reading - is the one between software architecture and implementation details. The conclusion we came to was, essentially, that the architecture dictates the agreed-upon form of how the software is to be organized (i.e., how the things that process data are connected and structured) in a generic instance, and is fundamentally important to the overall success of the software; it is analogous to data structures and algorithm design. Implementation details, meanwhile, encompass all the necessary adjustments, additions, or modifications that arise when integrating the architecture into a specific use. A distinction that, upon discussion, deepened our understanding of implementation details was that between functional and non-functional requirements. Many classmates had creative and eye-opening examples of these: operating system, mechanical capabilities, or data types as functional requirements, and security, maintainability, and performance as non-functional requirements, or quality attributes.
As I stated previously, we did not get much chance to discuss in depth the organization and implementation of the six main structures. This is unfortunate, because I felt these were the centrally important concepts for a proper understanding of basic software design. And since we did not spend time on formal instruction about them, there was no opportunity to use effective learning techniques like posing relevant questions or making constructive comparisons. A common thread I gathered from the peer forum posts and the discussion is that the case studies in the Carnegie Mellon paper were not demonstrative enough, and that other, possibly more detailed examples of those architectures and their respective implementation details would have been helpful.
The readings and preparation material from this round of discussion were:
David Hill's preface to the Microsoft Application Architecture Guide, 2nd edition.
An Introduction to Software Architecture - David Garlan and Mary Shaw (1994).
The questions covered during class discussion were:
What is Software Architecture? What are implementation details? Which matters more in a project: implementation details or overall design? Why?
What are the main software architectures discussed? Which of them have you seen? Where? What are their relative strengths and weaknesses?
Given an application to be built, how does a Software Architect choose an architecture? How and when would architectures be combined?
Version Control II (09/27/2019) & Version Control Activity (10/01/2019)
Note: Date formatted in MM/DD/YYYY
This post is the second report on the multi-class introduction, discussion, and practical activity surrounding the topic of version control. To see my first post, please follow this link.
This week's discussion, covering the remaining unexplored aspects of version control, was certainly a unique one. For one, the plan for the class was to cover the discussion questions in a limited block of time and then begin the first few sections of the Git/GitHub Workflow Activity, but we instead spent all of our time on the discussion questions, working through them both amongst ourselves and as questions for our professor. Second, the questions posed were ostensibly comprehension questions, a format we have used before, but only minimally and with the focus on exploring unfamiliar topics together to share information and deepen understanding. Lastly, even though comprehension questions should not have taken long to cover, we used all of the class time on them because nearly all of us were fairly confused by the reading and the concepts it covered. We struggled to provide substantial responses because most of us didn't quite know how to answer. Since this was common across the class, I would reason it was not due to a lack of preparation but to the complexity of the concepts and the relative brevity of the preparation material. This is also why the class devolved into a Q&A session with our professor: many of us needed things clarified or restated.
Regardless, we talked a lot about merging and rebasing; heads, the HEAD, and dangling commits; and fetching and pulling. We established the basic differences between each of these concepts and their uses. Everyone seemed to struggle with the distinctions, especially between merging and rebasing. As the professor said, I believe they will become clearer through practical use, but I think everyone could use more demonstrative examples or a review of what they are, when they occur, and the consequences of using them. I intend to revisit the preparatory materials listed in my previous blog post, as well as the supplementary readings and peer recommendations listed below.
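As a first attempt at the kind of demonstrative example I am asking for, here is the merge/rebase distinction as I currently understand it, sketched as commands (the branch name feature is hypothetical):

    git checkout feature   # a branch that has diverged from master
    git merge master       # option 1: join the two histories with a merge
                           # commit; the existing commits stay untouched
    # -- or, instead --
    git rebase master      # option 2: replay feature's commits on top of
                           # master's tip; history becomes linear, but the
                           # commits are rewritten as new commit objects
    git log HEAD           # HEAD is the pointer to the currently checked-out
                           # commit; a commit that no branch, tag, or HEAD
                           # can reach is a "dangling" commit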
Our in-class practical activity, which was meant to be completed within one class period, only reached the checkpoint where we, as a class, were to submit pull requests to the upstream repository maintained by our professor. Since she could not merge them until they had all been properly submitted, this is where we ended the activity for the day. The activity certainly provides reproducible steps and practical understanding of forking, cloning, making changes, staging changes, pushing changes, and the distinctions between the upstream, origin, and local repositories. However, since we have yet to conclude the activity, I can only comment on the progress so far. I hope that by the end we will have had the chance to collectively reflect on the concepts we explored. For instance, how does the upstream repository differ from the origin and local repositories? How are changes to files propagated upward to the upstream repository? How can we use the software we installed in Ubuntu during the Software Installation Activity (Java, Oracle JDK, Eclipse, Git, and Meld) in this process?
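For my own reference before we resume, here is the overall shape of that workflow as I understand it so far, with placeholder names (me, professor, project, my-change, changed-file.txt) standing in for the real ones used in the activity:

    git clone https://github.com/me/project.git   # local copy; origin = my fork
    cd project
    git remote add upstream https://github.com/professor/project.git
    git checkout -b my-change             # do the work on its own branch
    # ...edit files...
    git add changed-file.txt              # stage the change
    git commit -m "Describe the change"   # record it in the local repository
    git push origin my-change             # publish to my fork; the pull request
                                          # against upstream is opened on GitHub

In those terms, the local repository lives on my machine, origin is my public fork on GitHub, and upstream is the professor's repository that our pull requests target.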
The readings and preparation material from this round of discussion were:
Understanding Git Conceptually by Charles Duan, Sections 4-5
Git/GitHub Workflow Activity (Restricted Access)
The questions covered during class discussion were:
Merge vs. rebase: How are they the same? How are they different? What are the advantages and disadvantages of each?
What is a commit object? What is HEAD? What is a dangling commit?
What is the difference between fetch and pull? What is a fast-forward merge? When is a fast-forward merge done? (A short sketch of these follows below.)
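As referenced above, my working sketch of the fetch/pull/fast-forward distinctions (upstream and master are placeholder names):

    git fetch upstream         # download upstream's new commits and update the
                               # remote-tracking branches; my files are untouched
    git pull upstream master   # fetch, then merge upstream's master into the
                               # current branch; effectively fetch + merge
    # If the current branch has not diverged (its tip is an ancestor of the
    # incoming tip), the merge is a "fast-forward": the branch pointer simply
    # moves ahead and no merge commit is created.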
Supplementary reading:
A Visual Guide to Version Control
Intro to Distributed Version Control (Illustrated)
Git Videos from the Documentation on Git’s website
Peer recommendations:
Merging vs. Rebasing (hosted on Atlassian)
Git - Rebase and Merge by Filiz Senyuzluler
Version Control I (09/24/2019)
Note: Date formatted in MM/DD/YYYY
This week, we began our discussion of version control. This was the first of several successive rounds of reading, discussion, and practical activity on version control. We read and discussed what it is, what it is used for, how different kinds of version control systems function, and how branching, pull requests, and write access play a part in FOSS projects.
The beginning of the discussion, which we dedicated to defining often-used version control terms and the differences between terms with similar definitions, gave me more trouble than I anticipated. It made me realize that some of these terms are even closer in meaning than I thought. For instance, I had not known that the main difference between forks and clones is that a fork is a more global, public copy, which showed me how close the two definitions really are. So I resolved to dedicate time before the next discussion to learning these differences well.
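For the record, the summary of the fork/clone difference that I settled on, in command form (the repository names are hypothetical):

    # clone: a full local copy of a repository, history and all
    git clone https://github.com/someuser/project.git
    # fork: created on the hosting service itself (e.g. GitHub's "Fork" button),
    # giving me my own public, server-side copy; I would then clone that fork
    # to get a local copy I can actually edit
    git clone https://github.com/me/project.git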
The readings gave me a pretty good idea of version control, but the discussion definitely helped round out the concepts. As soon as I learned the definition, I could come up with reasons for its widespread adoption and common uses, but the idea of version control as a virtual workspace, one that lets every contributor develop the project without fear of interfering with anyone else's progress, only came to me in discussion. I like this analogy because it explains version control as the central force enabling parallel, collaborative development by many individuals, which reflects the spirit of OSS that we talked about in our first discussion.
When we started talking about centralized vs. decentralized version control systems, I could tell I wasn't the only one struggling to give a substantial response. My team was struggling too, and when the inter-group discussion began, many other students and I had questions about the two kinds of systems and the nature of their differences. This is likely because our readings did not cover them in depth; our professor clarified that we would go deeper in the next discussion. I hope we also revisit the lock-modify-unlock and copy-modify-merge models then, because I was struggling with those concepts as well.
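In the meantime, my rough take on the two models: lock-modify-unlock serializes work by letting one person at a time hold a file's lock, while copy-modify-merge, the model Git follows, lets everyone edit private working copies and reconciles concurrent changes when they merge. A sketch of the latter (the branch name is hypothetical):

    git checkout -b my-edits        # "copy": my own line of development
    git commit -am "Rework intro"   # "modify": commit freely, blocking no one
    git checkout master             # "merge": reconcile with whatever has
    git merge my-edits              # happened on master in the meantime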
Branching followed naturally from our discussion of centralized versus decentralized systems, because in covering their differences we had to mention how changes are propagated to the master branch. I think we all had a fair grasp of the honor system as well, and of how it builds trust throughout a developer base. One point I was unclear on after the reading, and tried to address during discussion, was whether a developer who has been brought into the honor system still has to document their changes and/or await permission to propagate them via pull requests. I did not get a very clear answer, so I'm hoping the second discussion will clear up this point of confusion.
The readings and preparation material from this round of discussion were:
Producing Open Source Software by Karl Fogel, Ch. 3 - Technical Infrastructure, Section: Version Control
Understanding Git Conceptually by Charles Duan, Sections 1-3
New Ubuntu User: How To Install Software In Ubuntu by Charlie Henson on YouTube
Linux Sysadmin Basics 05 -- Package Management with apt-get by tutoriaLinux on YouTube
Learn Git Branching: Introduction Sequence (1-4)
Software Installation Activity (Restricted Access)
The questions covered during class discussion were:
Define the following: Fork vs Clone, Branch, Commit vs Tag, Upstream vs Origin vs Local, Files vs Stage
Why do we need version control? What files in a project should version control be applied to – source code files? Anything else? Everything? What project files should not be version controlled?
What is the difference between a centralized version control system and a decentralized version control system? What are the pros and cons of each system? What type of version control system is Git? Does Git use a lock-modify-unlock model or a copy-modify-merge model and why?
When should branching be done? How can branches help collaborative development? Who has write access in FOSS? Who reviews requests for changes/patches/pull requests? What is the honor system? What are its pros and cons?
H/FOSS Project Tools/Structures/Communication (09/17/2019)
Note: Date formatted in MM/DD/YYYY
Once again, the effect of this round of reading and discussion differs from that of the previous rounds, which I described in my last post. To summarize: the first round built foundational knowledge of the nature of FOSS, and the second delved into the infrastructures used to sustain user/developer contribution and the responsibilities of project managers and contributors. This round focused on the communication tools FOSS projects use to refine their presentation to potential new users and contributors, to assess a project's activity and accessibility, and to foster sound development and problem-solving techniques. When we revisit projects for class and assess their progress via statistics, resources, and documentation, this information will serve as a convenient and appropriate checklist.
We discussed the importance of presentation in retaining potential new contributors, the complexity of bug tracking systems, and the different kinds of communication tools used by a project's developer base.
Some aspects of a FOSS project's presentation that attract potential new contributors seem intuitive but, as pointed out in the reading and during discussion, often go overlooked. These include placing the mission statement (with the words "open source" in it), a feature list, and links to other crucial information, like documentation, source code repositories, bug trackers, communication channels, and download info, all on the front page. Other suggestions that are less intuitive (to me, at least) but still worth implementing include claiming similar domains purely for redirection, sections dedicated specifically to new users and new developers, and easily accessible documentation of project architecture, roadmaps, version history, etc.
In line with the reading, the class seemed to mutually agree that "bug tracker" is a misnomer for the tool; "issue tracker" was an alternative title proposed by another student. We discussed how duplicate and non-issue tickets plague bug tracking systems and, although it was not required by the discussion question, efficient and easily implementable ways to prevent or triage such tickets. One pertinent point we revisited repeatedly was human versus automated moderation. From a developer's perspective it seems obvious that these two forms of moderation have different capabilities, and thus should be given different responsibilities for the kinds of issues they handle and how they handle them, but we noted that drawing that boundary well requires care and attention to the unique structure of each project.
Forms of communication are quite numerous, as my group learned during the inter-group exchange. We settled on three examples to demonstrate the gradient from synchronous to asynchronous communication: IRC channels, forums, and wikis. However, we quickly realized the categories could be expanded well beyond our chosen examples, as other groups' speakers enumerated email lists, project newsletters, collaboration software, blogs, and planets.
The readings and preparation material from this round of discussion were:
Producing Open Source Software by Karl Fogel, Ch. 2 - Getting Started, Section: Starting From What You Have
Producing Open Source Software by Karl Fogel, Ch. 3 - Technical Infrastructure, Section: Bug Tracker
Teaching Open Source by Greg DeKoenigsberg, Ch. 3 - Communicating, Sections 3.4-3.10
Producing Open Source Software by Karl Fogel, Ch. 3 - Technical Infrastructure, Section: IRC / Real-Time Chat Systems
The questions covered during class discussion were:
What aspects of a project would encourage people (users, contributors, ...) to get involved? How are you using these features to assess the quality of the projects while completing the project reviews?
What are bug tracking systems used for in open source projects? What does a typical ticket life cycle involve? What are some of the challenges that are experienced during the ticket life cycle and how are they addressed?
What are some of the asynchronous and synchronous FOSS communication tools? What is the purpose of each tool? What are the benefits and limitations of each tool?
H/FOSS Project Community (09/13/2019)
Note: Date formatted in MM/DD/YYYY
A positive effect of this round of reading material and discussion is that I have gained a more stable and detailed understanding of FOSS projects. The first round strove to establish understanding by distinguishing open source projects from proprietary software, developing the meaning of the term "free" and its relevance to that distinction, and covering the fundamental history of the FOSS movement. This time, we learned and spoke more about the infrastructure of FOSS projects and sought to answer questions fundamental to understanding them, such as "How do they work?", "What attracts developers to a project, and what makes them stay?", and "What are the structures that govern a project's progress like?". By exploring several demonstrative, generalized situations and examples, I have gained a better understanding of the communication, responsibilities, and expected outcomes of a FOSS project's developer base. This will be helpful when we begin joining projects' developer bases later in the semester.
Two concepts that I had trouble understanding prior to discussion, and came out of it understanding much better, were the single-vendor open source community type and development forks.
1. From the reading, I knew the definition of the single-vendor community type, but it still confused me, as it seemed like an outlier among the other described types. I had many questions about its structure and how it differed from the others, and in the discussion we touched on that type directly and discussed it at length. This confirmed my intuitive suspicion that the single-vendor type is likely a controversial one in the open source community, because it can seem to defile "the spirit of OSS". My group and I discussed at length how a contributor might be deterred by this type because of its inherently different power structure and the vendor's influence over the development of the community's culture, but also how, on the other hand, it uniquely introduces reliable, referable expertise and an ever-present dedication to the brand. These arguments about the pros and cons of the structure really helped fill out my understanding of this community type.
2. Development forks were something I seem to have accidentally read past at some point, because I was wholly unfamiliar with the term at the start of the discussion. However, introducing the term alongside its counterpart, hard forks, really helped round out the overall concept of forking and its role in an open source development community. From that, my group and I decided to move our working definition of forking, which had been closer to a definition of hard forking specifically, toward a more general idea that could then be specialized to convey either hard forking or development forking.
In both cases, I am glad we touched upon these points during the discussion. The last point I would like to mention is the Contributor Mountain. We all agreed that this is a well-visualized concept and a good tool for future reference. The main conclusion we formulated from discussing the tool, and from the very specific example through which the reading explored it, is that while the model might at times seem optimistic, it is surprisingly realistic to view every contribution from the community, regardless of the contributor's location on the Contributor Mountain, as helpful and effective. It is easy to find value in a contribution, no matter how futile it might seem from a developer's perspective or how indecipherable it might seem from a user's.
The readings and preparation material from this round of discussion were:
Four types of open source communities by Matthias Stürmer
Teaching Open Source by Greg DeKoenigsberg, Ch. 2 - Introduction to Free and Open Source Software, Sections 2.5.1-2.5.4, 2.6
Producing Open Source Software by Karl Fogel, Ch. 4 - Social and Political Infrastructure
Teaching Open Source by Greg DeKoenigsberg, Ch. 3 - The Lay of the Land, Sections 3.1-3.3
Producing Open Source Software by Karl Fogel, Ch. 8 - Managing Participants
The questions covered during class discussion were: 
What motivation do companies have to create single vendor open source projects? What motivation do developers have to join single vendor open source project communities? Can there be any possible reasons for a developer to avoid contributing to a single vendor open source project?
What is forking? What is the difference between “development forks” and “hard forks”? In what situations do hard forks occur and what are its consequences? How does the potential for forks prevent true dictators in an open source community? Should companies that release open source projects be worried about forkability?
What qualities enable an open source community to thrive and be successful? What are the various levels of contribution to a FOSS project?
H/FOSS History & Philosophy Discussion (09/06/2019)
Note: Date formatted in MM/DD/YYYY
Something I noted about the readings that prepared our class for this discussion was that each seemed to take a different tone on the usefulness of free/open source software. Eric Raymond's approach (in "The Cathedral and the Bazaar") was optimistic and full of praise, declaring how immensely useful the model could be to developers, whereas "Producing OSS: Introduction" was warier of its pitfalls and more willing to explore its faults.
The class discussion elaborated on this divided tone through many varied perspectives on the pros and cons of open source versus closed source project models. This aided my understanding: the assigned readings were written from the standpoint of developers teaching their readers, while the discussion weighed theoretical pros and cons among like-minded peers with similar experience. Many ideas also came up that I had not considered before. For instance, the difference in overhead and management burden between the two models was something I had pondered during the reading but still found nebulous. I had also not considered how vitally the open source model relies on the conscientiousness of its developer base to keep malicious code out of a project; I found this an interesting topic worthy of continued discussion. A very interesting and demonstrative example brought up by another student was the notorious DIY Drones program, which the terrorist group ISIS used to operate drones for bombing campaigns. This raises an important ethical question about the access that open source code provides.
This leads me to another topic of interest that came up within my group and inspired much discussion: Richard Stallman's video address detailing his "four freedoms". For context, my group spent some time discussing how we simultaneously respected his legacy and his self-written axioms for free software, and yet found his presented idea of "justice" regarding proprietary software far-fetched, quixotic, and economically unrealistic. As with my last example, here too lies an ethical dispute that could provide fodder for fruitful future discussion.
The readings and preparation material from this round of discussion were:
The Cathedral and the Bazaar by Eric S. Raymond
Producing Open Source Software by Karl Fogel, Ch. 1 - Introduction
Teaching Open Source by Greg DeKoenigsberg, Ch. 1- Foreword, Sections 1.1-1.4
Free software and the four freedoms, a video essay by Richard Stallman [VIDEO]
The questions covered during class discussion were:
What is the difference between the Cathedral and the Bazaar models? How is the Bazaar model beneficial to the software development process? How is it detrimental? Same questions for the Cathedral model.
What is the difference between free, open source and proprietary software? How can a FOSS project attract developers? Do FOSS projects compete with each other in any way? What benefits can be obtained by “opening up” a project (converting it to FOSS)? Are there any drawbacks?
How important are the four freedoms that Stallman talks about? Can proprietary software be considered "unjust" for users? Should the issue of injustice receive more attention, such as from the government? How would FOSS security issues compare to those of proprietary software?
COMP 491-01 [Fall 2019]
Welcome to my new blog!
This blog will be used exclusively to post original writing for my Senior Seminar course by me, Gregory Edwards, a Dickinson College Class of 2020 student. I will be posting reflective writing on reading and discussion topics from class related to free and open source software (FOSS), software engineering, and contemporary social and ethical issues in technology.
As of today’s post, I don’t have many plans for post-graduation. My hope is that I will be able to locate an internship or research opportunity to gain experience in a field that I develop an interest in over the coming two semesters. I, frankly, do not have a wealth of knowledge concerning possible career paths or graduate study after my time as an undergraduate.
- Gregory Edwards
Tuesday, 9/3/2019