# Customer Database Software
orangescrumblog · 13 days ago
Why Orangescrum is the Best Pivotal Tracker Alternative?
Looking for the best alternative to Pivotal Tracker? Discover why Orangescrum is the perfect project management solution with advanced features, seamless collaboration, robust task tracking, and customization options to meet all your workflow needs. Explore now!
geeconglobal · 6 months ago
Streamlining Application Development and Maintenance in London: Best Practices for Success
In the heart of London's bustling tech scene, businesses are continually seeking efficient and effective methods to develop and maintain their applications. With the rapid pace of technological advancement and the high expectations of consumers, it's crucial for companies to adopt best practices that ensure success. Here are some strategies to streamline application development and maintenance in London:
1. Adopt Agile Methodologies
Agile methodologies, including Scrum and Kanban, promote iterative development, collaboration, and flexibility. By adopting Agile, development teams in London can respond quickly to changes, ensure continuous improvement, and deliver high-quality applications that meet user needs.
2. Leverage DevOps Practices
DevOps combines development and operations to create a culture of collaboration and efficiency. Implementing DevOps practices such as continuous integration and continuous deployment (CI/CD) can significantly reduce the time to market, improve product quality, and enhance the reliability of applications.
3. Embrace Cloud Computing
Cloud computing offers scalability, flexibility, and cost-efficiency, making it an essential component for modern application development and maintenance. By utilizing cloud services from providers like AWS, Azure, or Google Cloud, companies can easily scale their applications based on demand and ensure robust performance and security.
4. Invest in Automation
Automation tools can greatly enhance productivity and reduce the risk of human error. Automated testing, deployment, and monitoring can help ensure that applications are thoroughly tested, reliably deployed, and continuously monitored for issues, allowing for quick resolution and minimal downtime.
5. Focus on User Experience (UX)
A great user experience is critical for the success of any application. Conducting user research, usability testing, and continuous feedback loops can help ensure that applications are intuitive, efficient, and meet the needs of their users. In London’s competitive market, a superior UX can be a significant differentiator.
6. Ensure Robust Security
With increasing cyber threats, security must be a top priority. Implementing best practices for application security, such as regular security assessments, penetration testing, and adherence to security standards like ISO/IEC 27001, can help protect sensitive data and maintain user trust.
7. Continuous Learning and Improvement
The tech landscape is constantly evolving, and staying ahead requires continuous learning and adaptation. Encouraging a culture of continuous improvement, providing ongoing training for development teams, and staying updated with the latest industry trends and technologies can help maintain a competitive edge.
By adopting these best practices, companies in London can streamline their application development and maintenance processes, delivering high-quality, reliable, and secure applications that meet the demands of today's fast-paced market. Embracing these strategies not only enhances operational efficiency but also drives innovation and growth in one of the world's leading tech hubs.
correctiveoralsurgery · 11 months ago
does anyone know of any basic office software that lets you like... tag things and nest them inside each other? i work at a school with a bunch of students and i'd love to be able to put my info for them in one place and then just... press buttons to sort them by grade, class, cycle, etc. but then also be able to just click a student and see every class theyre in
xbsoftware · 1 year ago
When developing a web application, you are always concerned about data. The major issues are how to manage it efficiently and store it securely. In some simple cases, you can solve them by applying sharding or partitioning, but if you're looking for absolute control, working with multiple databases can help. A solution we provide here may use multiple databases. We suggest using the approach described in the article to keep the sanity of the development team building such an application.
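As a hedged sketch of the multiple-database idea: route each business domain to its own database behind one accessor. The domain names and routing rule below are illustrative assumptions, not the article's actual design.

```python
# Hedged sketch: one router object hands out a separate database
# connection per business domain. Names here are invented for the demo.
import sqlite3

class DatabaseRouter:
    def __init__(self, mapping):
        # mapping: domain name -> sqlite path (":memory:" for this demo)
        self.connections = {name: sqlite3.connect(path)
                            for name, path in mapping.items()}

    def for_domain(self, domain):
        return self.connections[domain]

router = DatabaseRouter({"billing": ":memory:", "analytics": ":memory:"})
billing = router.for_domain("billing")
billing.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL)")
billing.execute("INSERT INTO invoices (total) VALUES (42.0)")
total = billing.execute("SELECT SUM(total) FROM invoices").fetchone()[0]
```

The point of the indirection is that each domain can later move to its own server without touching calling code.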
tech-ahead-corp · 1 year ago
Digital Marketing Services
TechAhead offers comprehensive digital marketing services that encompass various strategies and techniques, including SEO, social media marketing, content creation, and more, to help businesses establish a strong online presence and drive growth!
creativdigital1 · 2 years ago
Ecommerce Website Design Sydney
We have a great e-commerce web design team who can customise your e-commerce store specifically for your business. We value your business, so we work to make your site work for you. With careful planning and deliberation, intuitive analysis, and impressive, professional design, your e-commerce site will be flexible and effective.
Whether it is a simple online store or a large e-commerce project we have the expertise to ensure your project is a success.
lazeecomet · 24 days ago
The Story of KLogs: What happens when a Mechanical Engineer codes
Since i no longer work at Warehouse Automation Startup (WAS for short) and haven't for many years i feel as though i should recount the tale of the most bonkers program i ever wrote, but we need to establish some background
WAS has its HQ very far away from the big customer site and i worked as a Field Service Engineer (FSE) on site. so i learned early on that if a problem needed to be solved fast, WE had to do it. we never got many updates on what was coming down the pipeline for us or what issues were being worked on. this made us very independent
As such, we got good at reading the robot logs ourselves. it took too much time to send the logs off to HQ for analysis and get back what the problem was. we can read. now GETTING the logs is another thing.
the early robots we cut our teeth on used 2.4 GHz wifi to communicate with FSE's so dumping the logs was as simple as pushing a button in a little application and it would spit out a txt file
later on our robots were upgraded to use a 2.4 MHz xbee radio to communicate with us. which was FUCKING SLOW. and log dumping became a much more tedious process. you had to connect, go to logging mode, and then the robot would vomit all the logs in the past 2 min OR the entirety of its memory bank (only 2 options) into a terminal window. you would then save the terminal window and open it in a text editor to read them. it could take up to 5 min to dump the entire log file and if you didnt dump fast enough, the ACK messages from the control server would fill up the logs and erase the error as the memory overwrote itself.
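That overwrite behavior is just a fixed-size ring buffer doing its job; a toy model of it (buffer size and messages invented):

```python
# Toy model of the robot's fixed-size log memory as a ring buffer.
# Size and message formats are invented for illustration.
from collections import deque

log_memory = deque(maxlen=8)  # oldest entries fall off as new ones arrive

log_memory.append("ERROR: motor stall on axis 2")
for i in range(8):  # chatty ACKs from the control server keep arriving
    log_memory.append(f"ACK seq={i}")

# by the time someone finishes a 5-minute dump, the error line is gone
```

Nine appends into an eight-slot buffer means the one line anyone cared about is the one that got evicted.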
this missing logs problem was a Big Deal for software who now weren't getting every log from every error so a NEW method of saving logs was devised: the robot would just vomit the log data in real time over a DIFFERENT radio and we would save it to a KQL server. Thanks Daddy Microsoft.
now whats KQL you may be asking. why, its Microsofts very own SQL clone! its Kusto Query Language. never mind that the system uses a SQL database for daily operations. lets use this proprietary Microsoft thing because they are paying us
so yay, problem solved. we now never miss the logs. so how do we read them if they are split up line by line in a database? why with a query of course!
select * from tbLogs where RobotUID = [64CharLongString] and timestamp > [UnixTimeCode]
if this makes no sense to you, CONGRATULATIONS! you found the problem with this setup. Most FSE's were BAD at SQL which meant they didnt read logs anymore. If you do understand what the query is, CONGRATULATIONS! you see why this is Very Stupid.
You could not search by robot name. each robot had some arbitrarily assigned 64 character long string as an identifier and the timestamps were not set to local time. so you had to run a lookup query to find the right name and do some time zone math to figure out what part of the logs to read. oh yeah and you had to download KQL to view them. so now we had both SQL and KQL on our computers
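For illustration, the "time zone math" part alone looked something like this (a hedged sketch; the site's UTC offset is an assumed example):

```python
# Hedged sketch of the manual "time zone math": convert a local incident
# time to the Unix timestamp the logs were keyed on (UTC).
# The site's UTC offset here is invented; real sites vary and have DST.
from datetime import datetime, timezone, timedelta

site_tz = timezone(timedelta(hours=-5))  # assumed site offset, no DST handling
incident_local = datetime(2021, 6, 1, 14, 30, tzinfo=site_tz)
unix_ts = int(incident_local.timestamp())  # the value to plug into the query
```

Every FSE had to do this by hand, for every log pull, before they could even start reading.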
NOBODY in the field liked this.
But Daddy Microsoft comes to the rescue
see we didnt JUST get KQL with part of that deal. we got the entire Microsoft cloud suite. and some people (like me) had been automating emails and stuff with Power Automate
This is Microsoft Power Automate. its Microsoft's version of Scratch but it has hooks into everything Microsoft. SharePoint, Teams, Outlook, Excel, it can integrate with all of it. i had been using it to send an email once a day with a list of all the robots in maintenance.
this gave me an idea
and i checked
and Power Automate had hooks for KQL
KLogs is actually short for Kusto Logs
I did not know how to program in Power Automate but damn it anything is better than writing KQL queries. so i got to work. and about 2 months later i had a BEHEMOTH of a Power Automate program. it lagged the webpage and many times when i tried to edit something my changes wouldn't take and i would have to click in very specific ways to ensure none of my variables were getting nuked. i dont think this was the intended purpose of Power Automate but this is what it did
the KLogger would watch a list of Teams chats and when someone typed "klogs" or pasted a copy of an ERROR message, it would spring into action.
it extracted the robot name from the message and timestamp from teams
it would look up the name in the database to find the 64 long string UID and the location that robot was assigned to
it would reply to the message in teams saying it found a robot name and was getting logs
it would run a KQL query for the database and get the control system logs then export them into a CSV
it would save the CSV with a .xls extension into a folder in SharePoint (it would make a new folder for each day and location if it didnt have one already)
it would send ANOTHER message in teams with a LINK to the file in SharePoint
it would then enter a loop and scour the robot logs looking for the keyword ESTOP to find the error. (it did this because Kusto was SLOWER than the xbee radio and had up to a 10 min delay on syncing)
if it found the error, it would adjust its start and end timestamps to capture it and export the robot logs book-ended from the event by ~ 1 min. if it didnt, it would use the timestamp from when it was triggered +/- 5 min
it saved THOSE logs to SharePoint the same way as before
it would send ANOTHER message in teams with a link to the files
it would then check if the error was 1 of 3 very specific type of error with the camera. if it was it extracted the base64 jpg image saved in KQL as a byte array, do the math to convert it, and save that as a jpg in SharePoint (and link it of course)
and then it would terminate. and if it encountered an error anywhere in all of this, i had logic where it would spit back an error message in Teams as plaintext explaining what step failed and the program would close gracefully
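The book-ending rule from those steps can be sketched in Python (a loose reconstruction; the real thing was a Power Automate flow, and only the window sizes come from the post):

```python
# Loose Python reconstruction of KLogs' timestamp book-ending rule.
# Only the +/-1 min and +/-5 min windows come from the story above;
# everything else is invented scaffolding.
from datetime import datetime, timedelta

def log_window(trigger_time, estop_time=None):
    """Pick start/end timestamps for the robot-log export.

    If an ESTOP was found in the logs, book-end it by ~1 minute;
    otherwise fall back to the trigger timestamp +/- 5 minutes.
    """
    if estop_time is not None:
        return estop_time - timedelta(minutes=1), estop_time + timedelta(minutes=1)
    return trigger_time - timedelta(minutes=5), trigger_time + timedelta(minutes=5)

trigger = datetime(2020, 3, 4, 12, 0)
start, end = log_window(trigger)  # no ESTOP found: wide 10-minute net
```

The fallback window is deliberately wider because without the ESTOP keyword there is no anchor to narrow in on.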
I deployed it without asking anyone at one of the sites that was struggling. i just pointed it at their chat and turned it on. it had a bit of a rocky start (spammed chat) but man did the FSE's LOVE IT.
about 6 months later software deployed their answer to reading the logs: a webpage that acted as a nice GUI to the KQL database. much better than a CSV file
it still needed you to scroll through a big drop-down of robot names and enter a timestamp, but i noticed something. all that did was just change part of the URL and refresh the webpage
SO I MADE KLOGS 2 AND HAD IT GENERATE THE URL FOR YOU AND REPLY TO YOUR MESSAGE WITH IT. (it also still did the control server and jpg stuff). Theres a non-zero chance that klogs was still in use long after i left that job
now i dont recommend anyone use power automate like this. its clunky and weird. i had to make a variable called "Carrage Return" which was a blank text box that i pressed enter one time in because it was incapable of understanding \n or generating a new line in any capacity OTHER than this (thanks support forum).
im also sure this probably is giving the actual programmer people anxiety. imagine working at a company and then some rando you've never seen but only heard about as "the FSE whos really good at root causing stuff", in a department that does not do any coding, managed to, in their spare time, build and release an entire workflow piggybacking on your work without any oversight, code review, or permission.....and everyone liked it
geeconglobal · 7 months ago
Super Useful Tips To Improve Process Analysis
Improving process analysis is essential for enhancing efficiency, identifying bottlenecks, and optimizing workflows. Here are some super useful tips to help you improve process analysis:
Define Clear Objectives: Clearly define the goals and objectives of the process analysis. What specific outcomes are you looking to achieve? Having clear objectives will guide your analysis and ensure that it remains focused on the most critical aspects of the process.
Map Out the Process: Create visual process maps or flowcharts to document the current process from start to finish. Mapping out the process visually helps to identify the sequence of activities, decision points, and dependencies involved.
Identify Stakeholders: Involve key stakeholders who are directly or indirectly impacted by the process. Their insights and perspectives can provide valuable input during the analysis phase and help ensure that all relevant factors are considered.
Gather Data: Collect quantitative and qualitative data related to the process, including performance metrics, cycle times, error rates, and feedback from stakeholders. Use a variety of methods such as surveys, interviews, observation, and data analysis tools to gather comprehensive data.
Identify Pain Points and Bottlenecks: Analyze the process data to identify pain points, bottlenecks, and areas of inefficiency. Look for recurring issues, delays, redundancies, and unnecessary steps that may be hindering productivity or quality.
Benchmark Against Best Practices: Compare your current process against industry best practices and standards. Identify opportunities to adopt proven methodologies, tools, or techniques that can help streamline operations and improve outcomes.
Brainstorm Solutions: Once you've identified areas for improvement, facilitate brainstorming sessions with relevant stakeholders to generate ideas for optimization. Encourage creativity and innovation in exploring alternative approaches or solutions.
Prioritize Improvements: Prioritize improvement opportunities based on their potential impact, feasibility, and resource requirements. Focus on addressing high-priority issues that offer the greatest return on investment in terms of efficiency gains or cost savings.
Implement Changes Incrementally: Implement changes incrementally rather than attempting to overhaul the entire process at once. Break down larger initiatives into smaller, manageable steps and monitor the effects of each change before proceeding to the next.
Measure and Monitor Performance: Establish key performance indicators (KPIs) to track the effectiveness of process improvements over time. Continuously monitor performance metrics and solicit feedback from stakeholders to ensure that the process remains optimized and aligned with organizational goals.
By following these tips, you can enhance your process analysis efforts and drive continuous improvement within your organization.
customsoftwarephilippines · 2 years ago
EACOMM offers Custom HRIS and Payroll Software to give your HR Management the Competitive Advantage it needs.
Managing HR and Payroll for large organizations can be hard and time-consuming. Off-the-shelf systems don’t always work for HR management because they don’t take into account the unique reports and policies of each organization. The solution? A fully customized HRIS and Payroll system that lets you combine processes and reports.
EACOMM has been deploying customized HRIS and Payroll systems for over a decade. Their cutting-edge technology allows for microservice architecture and Artificial Intelligence components. EACOMM’s HRIS and Payroll System has several customizable modules such as Manpower Database; Source, Select & Hire Employee; Training & Development; HR Plans & Program; Attendance Monitoring; Employee Relationship Program (ER Program); Performance Measurement; Compensation and Benefits; and a 201 File Module.
EACOMM works closely with clients to determine which modules apply to their company. Through constant feedback, communication, and testing, EACOMM ensures that their customized HRIS and Payroll System captures all the processes you need automated. Partner with EACOMM to gain a competitive advantage through a fully customized HRIS and Payroll System. EACOMM: Your Ideal Partner for Custom HRIS and Payroll Software.
fluffmugger · 1 year ago
so this workplace I'm working at has me in because they need to be brought up to like bare minimum IT standards
I have a look at their core software. They have 25 staff. There are three logins.
"Ok how many people are sharing logins"
"all of them!" …..
"Ok this software also doesn't actually have user tiers. So everyone who uses this is a superuser. That's not good. please stop looking at me like that. Trust me, This is Not Good. This is So Not Good I am actually not allowed to use the words I want to use to describe it in a professional setting and I'm one of those people who has literally said 'cunt' to her bosses face. This means that everyone who has access to this software - WHICH IS ACCESSIBLE VIA THE INTERNET - can do every single function with it. Including sending content and KDMs that belong to some incredibly mercenary studios with large swathes of expensively suited lawyers"
"Oh our defense is that people don't know how to use half the functions"
"So you have everyone as a superuser on your customer database-cum-centralised-distribution software with the capacity to wipe everyone's logins and destroy the whole thing and they don't know how to use it?"
FUCK. ME.
metamatar · 1 year ago
polymath robotics finally did that episode they were promising. why robotics is so much harder than software as a service products. so much delicious shade on how modern web microservices products are basically about gluing together other mature products. like its absolutely bonkers that the public and vc expectations for robotics view doing physical things in the real world as the same domain of problem as processing information in a database.
now that i have quit my job i remember for three months we worked on improving traction control on slopes and the funding guys were always like. thats it? and im like. we're making serious progress on the problem at both the perception of the slope itself and good, stable control on a custom platform which is difficult to configure and tune. this is not a solved problem. this is not like writing a new UI feature.
xbsoftware · 2 years ago
How to Overcome Common Challenges in Data Engineering to Make Data Work for You With Minimum Effort
Information is the cornerstone of every business. The way you use the information you have may vary depending on the current needs of your organization. For some, a web app allowing users to check the dates of appointments with the veterinarian will do the trick. Some more demanding business owners may need to use Machine Learning and Artificial Intelligence algorithms to predict customer behavior and adapt their strategy accordingly.
Even if you work in a small company, at some point, you may find that a bunch of Excel tables can’t cover your needs anymore. Using different apps that work with different data formats makes everything even more complicated. Without a significant development background, it may be hard even to determine how many databases you need to remain efficient. In such a case, you can rely on data engineers. Their job is to take care of your data infrastructure, and the challenges they face along the way are what we’ll consider today.
tech-ahead-corp · 1 year ago
Web Development Company
A professional firm specializing in web development services, delivering high-quality websites and applications!
agapi-kalyptei · 4 months ago
crowdstrike hot take 5: so who was incompetent, really?
OK so it's the first Monday after the incident. CrowdStrike (CS) is being tight-lipped about the actual cause of the incident, which Microsoft estimates to have affected 8.5 million devices.
Here's an unconfirmed rumor: CS has been firing a lot of QA people and replacing them with AI. I will not base this post on that rumor. But...
Here's a fact: Wikipedia listed 8429 CS employees as of April 2024. Now the updated page says they have 7925 employees in their "Fiscal Year 2024".
Anyway. Here's a semi-technical video, by a former Microsoft engineer, if you want to catch up on what bluescreens and kernel-mode drivers are in the context of the CS incident. He also briefly mentions WHQL certification - a quality assurance option provided by Microsoft for companies who want to make sure their kernel drivers are top-notch.
Now conceptually, there are two types of updates - updates to a software itself, and a definition update. For a videogame, the software update would be a new feature or bugfixes, and content update would add a new map or textures or something. (Realistically they come hand in hand anyway.) For an antivirus/antimalware, a definition update is basically a list of red flags - a custom format file that instructs the main software on how to find threats.
The video mentions an important thing about the faulty update: while many people say "actually it wasn't a software update that broke it, it was a definition file", it seems that CS Falcon downloads an update file and executes code inside that file - thus avoiding the lengthy re-certification by Microsoft while effectively updating the software.
Some background: On audits in software
A lot of software development is unregulated. You can make a website, deploy it, and whether you post puppy pictures or promote terrorism, there's no one reviewing and approving your change. Laws still apply - even the puppy pictures can be problematic if they include humans who did not consent to have their photos taken and published - but no one's stopping you immediately from publishing them.
And a lot of software development IS regulated - you cannot make software for cars without certifications, you cannot use certain programming languages when developing software for spaceships or MRIs. Many industries like online casinos are regulated - IF you want to operate legally in most countries, you need a license, and you need to implement certain features ("responsible gaming"), and you must submit the actual source code for reviews.
This varies country by country (and state by state, in USA, Canada, etc) and can mean things like "you pay $200 for each change you want to put to production*", or it can mean "you have to pay $40'000 if you make a lot of changes and want to get re-certified".
*production means "web servers or software that goes to end customers", as opposed to "dev environment", "developer's laptop", "QA environment" or "staging" or "test machines", "test VMs" or any of the other hundreds way to test things before they go live.
The certification, and regular audits, involves several things:
Testing the software from user's perspective
Validating the transactions are reported correctly (so that you're not avoiding taxes)
Checking for the user-protecting features, like being able to set a monthly limit on depositing money, etc
Checking the source code to make sure customers are not being ripped off
Validating security and permissions, so a janitor can't download or delete production databases
Validating that you have the work process that you said you would - that you have Jira (or similar) tickets for everything that gets done and put to production, etc, and
...that you have Quality Assurance process in place, and that every change that goes to production is tested and approved
You can see why I highlighted the last point, right.
Now, to my knowledge, security software doesn't have its own set of legal requirements - if I want to develop an antivirus, I don't need a special permission from my government, I can write code, not test it at all, and start selling it for, idk for example $185 per machine it gets deployed to.
And here's the thing - while there certainly is a level of corruption / nepotism / favoritism in the IT industry, I don't think CrowdStrike became one of the biggest IT security providers in the world just by sweet talking companies. While there isn't any legal regulation, companies do choose carefully before investing into 3rd party solutions that drastically affect their whole IT. What I mean is, CrowdStrike probably wasn't always incompetent.
(Another rumor from youtube comments: A company with ~1000 employees was apparently pressured by an insurance company to use CrowdStrike - whether it's a genuine recommendation, an "affiliate link" or just plain old bribery... I do not know.)
WHY what happened is still very baffling
See, this is what would be the process if I was running a security solutions company:
a team is assigned a task. this task is documented
the team discusses the task if it's non-trivial, and they work on it together if possible
solo developer taking the task is not ideal, but very common, since you cannot parallelize (split it between several people) some tasks
while developing, ideally the developer can test everything from start to finish on their laptop. If doing it on their laptop isn't possible, then on a virtual machine (a computer that runs only inside software, and can be more or less stored in a file, duplicated, restored to a previous version, backed up, etc, just by copying that file)
in case of automated software updates, you would have "update channels". In this case it means... like if you have a main AO3 account where you put finished things, and then you'd have another AO3 account where you only put beta fics. So in my hypothetical company, you'd have a testing update channel for each developer or each team. The team would first publish their work only on their update channel, and then a separate QA team could test only their changes.
Either way, after maybe-mostly-finishing the task, the code changes would be bundled in something called a "pull request" or "PR" or "merge request". It's basically a web page that displays what was the code before and after. This PR would be reviewed by people who have NOT worked on the change, so they can check and potentially criticize the change. This is one of the most impactful things for software quality.
Either before or after the PR, the change would go to QA. First it would be tested just in the team's update channel. If it passes and no more development is needed on it, it would go to a QA update channel that joins all recent changes across all teams.
After that, it would be released to an early access or prerelease update channel, sometimes called a canary deploy. Generally, this would be either a limited amount - maybe 100 or 1000 computers, either used internally, or semi-randomly spread across real clients, or it could be as much as 10% of all customers' computers.
THEN YOU WAIT AND SEE IF THERE ARE NO ERROR REPORTS.
Basically ALL modern software (and websites! all the cookies!) collect "metrics" - like "how often each day is this running", or "did our application crash"
you absolutely MUST have graphs (monitoring - sometimes this is part of a discipline called "reliability engineering") that show visually things like the number of users online, how many customers are lagging behind with updates, how many errors are reported, how many viruses are being caught by our software. If anything goes up or down too much, it's a cause for concern. If 10% of your customers are suddenly offline after a canary deploy is out, you're shitting your pants.
ONLY after waiting for a while to see everything is okay, you can push the update to ALL clients. It is unfathomable how anyone would do that straight away, or maybe how someone could do it without proper checks, or how the wrong thing got sent to the update.
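The gate in that last step - release to a slice, watch the graphs, only then widen - can be sketched like this (all stage sizes and thresholds are invented):

```python
# Hedged sketch of a staged (canary) rollout gate. The stage percentages
# and the "2x baseline" halt threshold are invented for illustration.
def next_rollout_stage(current_pct, error_rate, baseline_error_rate,
                       stages=(1, 10, 100)):
    """Decide whether a staged deploy may widen to the next slice.

    Returns the next stage percentage, or None to halt the rollout
    because errors spiked versus baseline.
    """
    if error_rate > 2 * baseline_error_rate:
        return None  # error spike: stop, do not widen
    for stage in stages:
        if stage > current_pct:
            return stage
    return current_pct  # already fully rolled out
```

The whole point is that the halt check runs *before* each widening, which is exactly the check that a push-to-everyone update skips.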
As ClownStrike is still silent about the actual cause of the issue, we can only make guesses about how much they circumvented their own Quality Assurance process to push the faulty update to millions of computers.
It gets worse
Here's the thing: CrowdStrike itself allows users to create computer groups and let them choose the update channel. You, as a business customer, can say
these 100 unimportant laptops will have the latest update
these important servers will have N-1 update (one version behind)
the rest of the company will have N-2 update (two update versions behind)
CrowdStrike has ignored those settings. According to some youtube comments, supposedly they pushed the update to "only" 25% of all devices - which is worrying to think this could have gone even worse.
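Those per-group settings amount to something like this (a hedged sketch; the version numbering is invented):

```python
# Hedged sketch of the per-group update pinning customers configured.
# Version numbers are invented; "N-1"/"N-2" naming comes from the post.
def version_for_group(latest_version, channel):
    """Resolve which update version a device group should run.

    channel "N" = latest, "N-1" = one version behind, "N-2" = two behind.
    Honoring this setting is reportedly what got bypassed.
    """
    lag = {"N": 0, "N-1": 1, "N-2": 2}[channel]
    return latest_version - lag

latest = 731  # invented version counter
assert version_for_group(latest, "N-1") == 730
```

A customer pinning critical servers to N-1 or N-2 is exactly trying to let someone else's machines be the canary.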
Third time isn't the charm
And hey, do you know what happened two years before CrowdStrike was founded? The CEO George Kurtz was at the time, in 2010, the CTO of McAfee, the controversial / crappy security company (IMO offering one of the worst antivirus programs of all times, that was aggressively pushed through bundled OEM deals). In both 2009 and 2010 their enterprise software deleted a critical operating system file and bricked a lot of computers, possibly hundreds of thousands.
And yes, the trigger wasn't an update to the antivirus itself, but a faulty "definition update". Funny coincidence, huh.
earhartsease · 5 months ago
things that might have hinted to us sooner that we're autistic include our two year stint in the mid 2000s working customer services for a "buddhist ethical gift business" doing phones and computer stuff, mainly troubleshooting wholesale deliveries - during which we developed the following two habits that have not so far left us
every time we sent an email we'd murmur "fly like the wind!" as we'd just seen Toy Story 2 and that was our takeaway
we discovered a hack in the database software that allowed us to sort orders in reverse order by hitting the E key, and this started us intoning "eeeee" happily whenever we hit that key - which turned into a strange talismanic relationship with drawing a lowercase e in the air with our index finger to express appreciation (nobody knows that's what we're doing so it's a great stealth stim)
anyway it's a testament to our three fellow customer service workers that nobody batted an eyelid at any of this, but in retrospect all four of us were luminously on the spectrum
mitchipedia · 10 months ago
For a moment there, Lotus Notes appeared to do everything.
The program was a weird combination of email, databases, and workflow that allowed companies to stand up custom applications and deploy them to relevant groups of workers inside Notes.
Also:
… It provided not just your email, but an internal telephone directory, contact database, booking system for time off, company handbook, and more, all accessible via a single application and a single set of credentials, long before single sign-on became a thing.
Nowadays, it is common for most if not all of these functions to be delivered via separate web-based applications, each requiring a different login so you need to have dozens of different credentials, and each one sporting a different user interface. So I guess you could regard the web browser as an app runtime that is the ultimate successor to Notes?
Also:
Eventually, IBM, which had acquired Lotus in 1995, announced in 2012 that it would be discontinuing the Lotus brand altogether, before offloading Notes to Indian software outfit HCL Technologies in 2018.
The platform still survives, with HCL releasing Domino 14.0 last year, which, as The Register commented at the time, speaks to the “stickiness” of the custom workflows built on the platform.
Also:
But Notes is nowhere near holding the record for the oldest piece of software still being used. The US Defense Contract Management Agency (DCMA), which takes care of contracts for the Department of Defense (DoD), is said to have a program called Mechanization of Contract Administration Services (MOCAS), which was introduced in 1958, making it nearly twice as old.