comp491-mollerus · 1 year
Mollerus Reflection Blog Post
During my time as a student at Dickinson College, I have had the opportunity to study computer science in the context of a liberal arts education. Computer science courses are traditionally STEM-focused, but studying core topics such as data structures and algorithmic analysis alongside the non-STEM courses required by the school's distribution requirements has enhanced my education as a whole. By complementing the core curriculum with opportunities to explore other disciplines and how they relate to computer science, Dickinson's liberal arts education has allowed me to make significant progress toward the goals of the major and the mission of the college as a whole.
At the heart of Dickinson's computer science major are core classes such as data structures and algorithm analysis, along with courses on abstraction and implementation and elective topics in computer science. These courses have taught me the principles and skills necessary to take part in software engineering projects, while also giving me a mental framework through which I can continue to learn and expand my skill set after graduation. Over the past two years, I have applied the hard skills I learned in these core classes to two external internships and one part-time job. With time, I felt increasingly prepared to tackle the problems presented to me during these work experiences, and more comfortable exploring technical areas in which I had no prior experience. Much of the class material was cumulative, so as I approached my senior year I began to see more connections between topics, which strengthened both my general understanding of computer science and my confidence in applying that knowledge outside of school. I feel assured that as I enter the workforce after graduation, the core computer science curriculum has prepared me to absorb the technical knowledge I will need to thrive and grow as a professional.
As a student, I have spent a great deal of time reinforcing my understanding of class material by studying or working on my own. A large portion of my learning, however, has occurred in situations where I cooperated with classmates, such as labs and group projects. One aspect of software engineering I learned early on is that an individual's talent and work ethic can only go so far without capable teammates and a strong, coherent channel of communication. The senior seminar sequence has helped clearly define the components and conditions needed for healthy collaborative work. Through it, I have been able to consciously analyze my own behavior while collaborating on projects or labs, and how that behavior affects the quality of the work being produced. I try to recognize and counteract any unconscious biases while working with others so that I can avoid perpetuating common workplace injustices, such as sexism or cultural bias. Working in team settings has also improved my organizational abilities, for example when breaking a project down into chunks of work that each team member can tackle individually. Before beginning the computer science track, I had a strong preference for working alone rather than on a team when solving problems or tackling projects. Now that I have acquired new soft skills, along with an increased mindfulness of how my behavior affects the success of a group project, I have come to appreciate the true potential of working with others, and how essential it is to learn to work well with others on large-scale projects.
One of the areas where Dickinson's liberal arts education has really shone is in preparing me to communicate heavily technical ideas and concepts to audiences in layman's terms. While the core curriculum of the computer science major has trained me to pick up new technical tools and concepts, this alone is not enough to bridge the gap between the world of technology and everything that lies outside it. By fulfilling distribution requirements with classes in the humanities and business management, I have spent many semesters developing writing, communication and presentation skills, which have ultimately made me a much better writer and public speaker. During my internship this past summer, one of my assigned tasks was to present an overview of my technical project and accomplishments to a panel of high-level executives at the insurance company I worked for, including the CTO. While creating and practicing this presentation, I drew on interpretive skills I had been developing over the course of my time as a student, recalling essays I had written for religion and anthropology classes, for example. Although those classes were completely unrelated to the subject matter of my internship's technical project, those skills helped me translate technical ideas into business terminology and express abstract concepts in terms of the real business value my project generated over the three-month course of my internship. I was later congratulated by many of the executives in the audience, including the CTO, on the outcome of my projects as well as on the quality of my final presentation. More recently, during a job interview for a solutions engineering position, the interviewer told me that I displayed communication skills and adaptability they had not observed in candidates from larger, non-liberal arts universities and colleges. When asked about interests outside my major, it was easy to elaborate on highlights of classes I enjoyed, such as the 'God in America' religion class I took sophomore year. In a follow-up to the interview, I was told that I did very well and that I came across as a 'well-rounded' and open-minded candidate. Just a few days ago, I received an offer letter from the company for this position. I am quite certain that, were it not for Dickinson's mission to provide students with an interdisciplinary education, I would have appeared much more one-dimensional as a candidate for this position.
Had I chosen to attend a large, non-liberal arts university, I might have had the opportunity to devote more of my time and energy to major-related, technical classes. However, in doing so I would have denied myself the opportunity to develop crucial soft skills, to let my curiosity about other fields of study blossom, and to grow not only as a computer scientist but also as a global citizen and member of a community of open-minded thinkers. Dickinson has done an excellent job of preparing me for the real world, where most hard skills are acquired on the fly; I feel confident in my ability to pick up new tools and technologies using the framework of core concepts the computer science curriculum has given me, and in my ability to translate technical concepts into terms easily understood by non-STEM audiences. Through working alongside group project members, I have developed key interpersonal and leadership skills that will stay with me for life, and that will hopefully allow me to continue to bond with coworkers and colleagues and to produce high-quality, effective and efficient solutions to real-world problems.
comp491-mollerus · 2 years
Blog Post #2: Low/No-Code Architecture
The concept of project architecture plays a crucial role in the inception, lifecycle and direction of software. Organizing and coordinating millions of lines of code hosted on highly complex infrastructure is a difficult and important task, and there is no doubt that a poor or maladapted architecture can cause a project to fail or become obsolete even when paired with solid engineering. While there are several well-established architecture styles in common use, often in parallel with one another within a single project, new ideas constantly emerge as technology evolves. One of these is an industry-disrupting concept known as "Low-Code" or "No-Code" solutions. This architecture allows so-called 'citizen developers', people who have little to no programming experience, to design, upgrade and even deploy platforms.
What makes low/no-code architecture so valuable is that it can facilitate massive increases in project involvement from community or organization members who may not have any formal coding experience. These platforms often pair cloud computing technologies with visual interfaces that allow citizen developers to build their own applications or services. Freeing up much of a company's workforce to implement fresh ideas directly, while also lightening the load on developers, makes this architecture a smart early investment. A recent survey published by KPMG cited findings that "100% of enterprises who have implemented a low- and no-code development platform have seen ROI through these initiatives" (link to the survey). Adoption is also predicted to grow 28% annually through 2027, as more businesses and enterprises choose to adopt this emerging technology.
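To make the idea more concrete, here is a minimal sketch, in Python and purely illustrative, of the pattern that sits underneath many of these platforms: the visual builder exports a declarative specification of forms and rules, and a generic runtime interprets that specification, so the citizen developer never touches the interpreter code. The spec format, field names and rule vocabulary below are hypothetical assumptions, not drawn from any particular product.

```python
# Hypothetical, simplified spec that a visual builder might export.
# The vocabulary ("form", "rules", etc.) is illustrative only.
APP_SPEC = {
    "name": "expense-approval",
    "form": {
        "fields": [
            {"name": "amount", "type": "number", "required": True},
            {"name": "description", "type": "text", "required": False},
        ]
    },
    "rules": [
        # "When amount > 500, route to a manager; otherwise auto-approve."
        {"if": {"field": "amount", "op": ">", "value": 500},
         "then": "route_to_manager",
         "else": "auto_approve"},
    ],
}


def validate(submission: dict, spec: dict) -> list:
    """Check a submission against the form definition in the spec."""
    errors = []
    for field in spec["form"]["fields"]:
        if field["required"] and field["name"] not in submission:
            errors.append(f"missing required field: {field['name']}")
    return errors


def run_rules(submission: dict, spec: dict) -> str:
    """Evaluate the spec's (single) routing rule against a submission."""
    ops = {">": lambda a, b: a > b,
           "<": lambda a, b: a < b,
           "==": lambda a, b: a == b}
    for rule in spec["rules"]:
        cond = rule["if"]
        if ops[cond["op"]](submission[cond["field"]], cond["value"]):
            return rule["then"]
        return rule["else"]
    return "auto_approve"


if __name__ == "__main__":
    submission = {"amount": 750, "description": "team offsite"}
    print(validate(submission, APP_SPEC))   # -> []
    print(run_rules(submission, APP_SPEC))  # -> route_to_manager
```

The design point is that all of the "programming" lives in data that a drag-and-drop interface can safely edit, which is also why the approach tends to hit a ceiling once business needs outgrow the vocabulary the platform exposes.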
[Image: BuildBox no-code interface (source)]
This architecture is not without its drawbacks, however. It is generally accepted that no/low-code platforms can introduce security risks for the organizations that adopt them, because citizen developers may unknowingly create vulnerabilities while building applications or services that handle sensitive data or delicate processes with numerous dependencies. There are also concerns that this fosters 'shadow IT': applications built and run outside the IT department's oversight, which forces IT teams to spend a disproportionate amount of time tracking and supporting those users and their services.
Another limitation of low/no-code architecture is that it constrains the complexity and customizability of the platforms and infrastructure built on top of it. Companies investing in this new technology may therefore find that low/no-code solutions do not meet every aspect of their business needs, or that those solutions cannot be maintained or upgraded easily as business needs change over time.
That being said, this technology is still very much in its infancy, and it will take time to mature and reach its potential. Since this architecture allows employees with little or no programming experience to build software, it may make companies that implement it more attractive to job seekers who possess other important skills and domain knowledge, but not necessarily coding skills. Waiting will give companies that use low/no-code time to calibrate, and let the technology itself advance even further. This is yet another example of automated software building, alongside GitHub Copilot, which uses AI to automate the programming and engineering side of software development. It will be interesting to see how automation continues to play a larger role in the process of application and platform development.
comp491-mollerus · 2 years
Blog Post #1
As Artificial Intelligence takes the computing and technology sphere by storm, it is becoming increasingly evident that the licensing conventions and legal ethics surrounding open-source projects built on data and new technology are quickly becoming outdated. For decades, the consensus was that open-source licensing operated comfortably within the realm of "software". While legal grey zones certainly existed, the general perception of what counts as intellectual property was not the subject of much debate. With the introduction of data-driven solutions and Artificial Intelligence and Machine Learning models, new products and designs that transcend the definition of software are spilling onto the scene, causing more and more confusion about the rules that govern their distribution and fair use.
The Copilot controversy is a prime example of this issue taking hold of the open-source community, as well as the world of proprietary ownership. Such peculiar legal circumstances never arose until the machine learning capabilities behind projects like Copilot came to fruition. Much debate has been had over the ethics of scraping open-source repositories, and over whether projects like Copilot that capitalize on these data mining techniques have the right to declare themselves proprietary. Copilot is not the sole example of this dilemma. A recently published article discusses DALL-E, a new web-based, AI-powered art generator, in a similar context. The project relies on scraping other artists' work from all over the internet to train a highly complex model, which in turn produces eerily good images inspired by a user-given prompt. While some may argue that the model itself represents a unique idea, it is undeniable that it would not function the way it does without that training 'data' being made freely available in the first place. Moreover, the criteria for what counts as a violation of an artist's intellectual property become even less clear: some argue this is fair use, since the generator outputs a transformation of these works, while others consider it copyright infringement.
It is clear that a new, undefined category of AI-driven assets is stirring up controversy within the realm of technology and licensing. As large corporations such as Microsoft and Amazon lean into these unprecedented technological advances, with operational and data mining capabilities that far exceed those of smaller players, things can only get messier from here. Without adding new laws to our legal system to govern and protect intellectual property, we run the risk of discouraging artists and content creators from continuing to innovate and create. As AI continues to push the envelope of making the impossible possible, consumers, innovators and legislators must brace for change in order to adapt to this new, unpredictable environment.
comp491-mollerus · 2 years
This blog's purpose is to discuss numerous topics involving computing and computer science.
My plan after college is to get a job in the tech industry and gain professional experience by leveraging the skills I have picked up at Dickinson.
From this course, I hope to gain experience working on large projects, learn more about software engineering practice, and become familiar with the tools used to develop large-scale software.