#Microsoft Server operating system
radiantindia · 5 months ago
Text
Microsoft Windows Server 2022 Licensing: What You Need to Know
Understand the licensing options for Microsoft Windows Server 2022, including different editions, pricing, and how to choose the right license for your business.
0 notes
littlemssam · 10 months ago
Text
!Important Warning!
These Days some Mods containing Malware have been uploaded on various Sites.
The Sims After Dark Discord Server has posted the following Info regarding the Issue:
+++
Malware Update: What We Know Now
To recap, here are the mods we know for sure were affected by the recent malware outbreak:
"Cult Mod v2" uploaded to ModTheSims by PimpMySims (impostor account)
"Social Events - Unlimited Time" uploaded to CurseForge by MySims4 (single-use account)
"Weather and Forecast Cheat Menu" uploaded to The Sims Resource by MSQSIMS (hacked, real account)
"Seasons Cheats Menu" uploaded to The Sims Resource by MSQSIMS (hacked, real account)
Due to this malware using an exe file, we believe that anyone using a Mac or Linux device is completely unaffected by this.
If the exe file was downloaded and executed on your Windows device, it has likely stolen a vast amount of your data and saved passwords from your operating system, your internet browser (Chrome, Edge, Opera, Firefox, and more all affected), Discord, Steam, Telegram, and certain crypto wallets. Thank you to anadius for decompiling the exe.
To quickly check if you have been compromised, press Windows + R on your keyboard to open the Run window. Enter %AppData%/Microsoft/Internet Explorer/UserData in the prompt and hit OK. This will open up the folder the malware was using. If there is a file in this folder called Updater.exe, you have unfortunately fallen victim to the malware. We are unaware at this time if the malware has any function which would delete the file at a later time to cover its tracks.
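If you prefer to script the check, here is a small Python sketch that looks in the same Folder (assuming the standard %APPDATA% location; Windows only):

import os

# Folder the malware reportedly used (per the notice above); Windows only
folder = os.path.join(os.environ["APPDATA"], "Microsoft", "Internet Explorer", "UserData")
suspect = os.path.join(folder, "Updater.exe")

if os.path.isfile(suspect):
    print("Updater.exe found - you may be compromised:", suspect)
else:
    print("No Updater.exe found in", folder)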
To quickly remove the malware from your computer, Overwolf has put together a cleaner program to deal with it. This program should work even if you downloaded the malware outside of CurseForge. Download SimsVirusCleaner.exe from their github page linked here and run it. Once it has finished, it will give you an output about whether any files have been removed.
+++
For more Information please check the Sims After Dark Server News Channel! Or here https://scarletsrealm.com/malware-mod-information/
TwistedMexi made a Mod to help detect & block such Mods in the Future: https://www.patreon.com/posts/98126153
CurseForge took action and added safeguards to prevent such Files from being uploaded, so downloading there should be safe.
In general be careful where and what you download, and do not download my Mods at any Places other than my own Sites and my CurseForge Page.
2K notes · View notes
nenelonomh · 4 days ago
Text
guys,, don't forget to do your research (AI)
researching ai before using it is crucial for several reasons, ensuring that you make informed decisions and use the technology responsibly.
it actually makes me angry that people are too lazy or perhaps ignorant to spend 15ish minutes reading and researching to understand the implications of this new technology (same with people ignorantly using vapes, ugh!). this affects you, your health, the people around you, the environment, society, and has legal implications.
first, understanding the capabilities and limitations of ai helps set realistic expectations. knowing what ai can and cannot do allows you to utilize it effectively without overestimating its potential. for example, if you are using ai as a study tool - you must be aware that it is unable to explain complex concepts in detail. additionally! you must be aware of the effects it can have on your learning capabilities and how it can discourage you from learning with your class/peers/teacher.
second, ai systems often rely on large datasets, which can raise privacy concerns. researching how an ai handles data and what measures are in place to protect your information helps safeguard your privacy.
third, ai algorithms can sometimes exhibit bias due to the data they are trained on. understanding the sources of these biases and how they are addressed can help you choose ai tools that promote fairness and avoid perpetuating discrimination.
fourth, the environmental impact of ai, such as the energy consumption of data centers, is a growing concern. researching the environmental footprint of ai technologies can help you select solutions that are more sustainable and environmentally friendly.
!google and microsoft ai use renewable and efficient energy to power their data centres. ai also powers blue river technology, carbon engineering and xylem (applying herbicides only to weeds, combatting climate change, and managing water systems, respectively). (ai magazine)
!training large-scale ai models, especially language models, consumes massive amounts of electricity and water, leading to high carbon emissions and resource depletion. ai data centers consume significant amounts of electricity and produce electronic waste, contributing to environmental degradation. generative ai systems require enormous amounts of fresh water for cooling processors and generating electricity, which can strain water resources. the proliferation of ai servers leads to increased electronic waste, harming natural ecosystems. additionally, ai operations that rely on fossil fuels for electricity production contribute to greenhouse gas emissions and climate change.
fifth, being aware of the ethical implications of ai is important. ensuring that ai tools are used responsibly and ethically helps prevent misuse and protects individuals from potential harm.
finally, researching ai helps you stay informed about best practices and the latest advancements, allowing you to make the most of the technology while minimizing risks. by taking the time to research and understand ai, you can make informed decisions that maximize its benefits while mitigating potential downsides.
impact on critical thinking
ai can both support and hinder critical thinking. on one hand, it provides access to vast amounts of information and tools for analysis, which can enhance decision-making. on the other hand, over-reliance on ai can lead to a decline in human cognitive skills, as people may become less inclined to think critically and solve problems independently.
benefits of using ai in daily life
efficiency and productivity: ai automates repetitive tasks, freeing up time for more complex activities. for example, ai-powered chatbots can handle customer inquiries, allowing human employees to focus on more strategic tasks.
personalization: ai can analyze vast amounts of data to provide personalized recommendations, such as suggesting products based on past purchases or tailoring content to individual preferences.
healthcare advancements: ai is used in diagnostics, treatment planning, and even robotic surgeries, improving patient outcomes and healthcare efficiency.
enhanced decision-making: ai can process large datasets quickly, providing insights that help in making informed decisions in business, finance, and other fields.
convenience: ai-powered virtual assistants like siri and alexa make it easier to manage daily tasks, from setting reminders to controlling smart home devices.
limitations of using ai in daily life
job displacement: automation can lead to job losses in certain sectors, as machines replace human labor.
privacy concerns: ai systems often require large amounts of data, raising concerns about data privacy and security.
bias and fairness: ai algorithms can perpetuate existing biases if they are trained on biased data, leading to unfair or discriminatory outcomes.
dependence on technology: over-reliance on ai can reduce human skills and critical thinking abilities.
high costs: developing and maintaining ai systems can be expensive, which may limit access for smaller businesses or individuals.
further reading
mit horizon, kpmg, ai magazine, bcg, techopedia, technology review, microsoft, science direct-1, science direct-2
my personal standpoint is that people must educate themselves and be mindful of not only what ai they are using, but how they use it. we should not become reliant - we are our own people! balancing the use of ai with human skills and critical thinking is key to harnessing its full potential responsibly.
🫶nene
55 notes · View notes
reelmegabyte · 11 months ago
Text
ever wonder why spotify/discord/teams desktop apps kind of suck?
i don't do a lot of long form posts but. I realized that so many people aren't aware that a lot of the enshittification of using computers in the past decade or so has a lot to do with embedded webapps becoming so frequently used instead of creating native programs. and boy do i have some thoughts about this.
for those who are not blessed/cursed with computers knowledge: basically, most (graphical) programs used to be native programs (ever since we started widely using a graphical interface instead of just a text-based terminal). these are apps that feel like when you open up the settings on your computer, and one of the factors that make windows and mac programs look different (bc they use a different design language!). this was the standard for a long long time - your emails were served to you in a special email application like thunderbird or outlook, your documents were processed in something like microsoft word (again. On your own computer!). same goes for calendars, calculators, spreadsheets, and a whole bunch more - crucially, your computer didn't depend on the internet to do basic things, but being connected to the web was very much an appreciated luxury!
that leads us to the eventual rise of webapps that we are all so painfully familiar with today - gmail dot com/outlook, google docs, google/microsoft calendar, and so on. as html/css/js technology grew beyond just displaying text images and such, it became clear that it could be a lot more convenient to just run programs on some server somewhere, and serve the front end on a web interface for anyone to use. this is really very convenient!!!! it Also means a huge concentration of power (notice how suddenly google is one company providing you the SERVICE) - you're renting instead of owning. which means google is your landlord - the services you use every day are first and foremost means of hitting the year over year profit quota. its a pretty sweet deal to have a free email account in exchange for ads! email accounts used to be paid (simply because the provider had to store your emails somewhere. which takes up storage space which is physical hard drives), but now the standard as of hotmail/yahoo/gmail is to just provide a free service and shove ads in as much as you need to.
webapps can do a lot of things, but they didn't immediately replace software like skype or code editors or music players - software that requires more heavy system interaction or snappy audio/visual responses. in 2013, the electron framework came out - a way of packaging up a bundle of html/css/js into a neat little crossplatform application that could be downloaded and run like any other native application. there were significant upsides to this - web developers could suddenly use their webapp skills to build desktop applications that ran on any computer as long as it could support chrome*! the first application to be built on electron was the late code editor atom (rest in peace), but soon a whole lot of companies took note! some notable contemporary applications that use electron, or a similar webapp-embedded-in-a-little-chrome as a base are:
microsoft teams
notion
vscode
discord
spotify
anyone! who has paid even a little bit of attention to their computer - especially when using older/budget computers - knows just how much having chrome open can slow down your computer (firefox as well to a lesser extent. because its just built better <3)
whenever you have one of these programs open on your computer, it's running in a one-tab chrome browser. there is a whole extra chrome open just to run your discord. if you have discord, spotify, and notion open all at once, along with chrome itself, that's four chromes. needless to say, this uses a LOT of resources to deliver applications that are often much less polished and less integrated with the rest of the operating system. it also means that if you have no internet connection, sometimes the apps straight up do not work, since much of them rely heavily on being connected to their servers, where the heavy lifting is done.
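if you want to see this on your own machine, here's a rough python sketch using the psutil library - matching on process names is a crude heuristic, and the list of names is just a guess:

import psutil

# crude heuristic: well-known chromium/electron-based process names
NAMES = ("chrome", "msedge", "discord", "spotify", "teams", "notion", "electron")

total_rss = 0
for proc in psutil.process_iter(["name", "memory_info"]):
    name = (proc.info["name"] or "").lower()
    mem = proc.info["memory_info"]
    if mem and any(n in name for n in NAMES):
        total_rss += mem.rss  # resident memory of each matching process

print(f"chromium-ish processes: ~{total_rss / 2**20:.0f} MiB resident")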
taking this idea to the very furthest is the concept of chromebooks - dinky little laptops that were created to only run a web browser and webapps - simply a vessel to access the google dot com mothership. they have gotten better at running offline android/linux applications, but often the $200 chromebooks that are bought in bulk have almost no processing power of their own - why would you even need it? you have everything you could possibly need in the warm embrace of google!
all in all the average person in the modern age, using computers in the mainstream way, owns very little of their means of computing.
i started this post as a rant about the electron/webapp framework because i think that it sucks and it displaces proper programs. and now i've swiveled into getting pissed off at software services which is honestly the core issue. and i think things can be better!!!!!!!!!!! but to think about better computing culture one has to imagine living outside of capitalism.
i'm not the one to try to explain permacomputing specifically because there's already wonderful literature ^ but if anything here interested you, read this!!!!!!!!!! there is a beautiful world where computers live for decades and do less but do it well. and you just own it. come frolic with me Okay ? :]
*when i say chrome i technically mean chromium. but functionally it's the same thing
343 notes · View notes
lazeecomet · 23 days ago
Text
The Story of KLogs: What happens when a Mechanical Engineer codes
Since i no longer work at Warehouse Automation Startup (WAS for short) and haven't for many years, i feel as though i should recount the tale of the most bonkers program i ever wrote, but we need to establish some background
WAS has its HQ very far away from the big customer site and i worked as a Field Service Engineer (FSE) on site. so i learned early on that if a problem needed to be solved fast, WE had to do it. we never got many updates on what was coming down the pipeline for us or what issues were being worked on. this made us very independent
As such, we got good at reading the robot logs ourselves. it took too much time to send the logs off to HQ for analysis and get back what the problem was. we can read. now GETTING the logs is another thing.
the early robots we cut our teeth on used 2.4 GHz wifi to communicate with FSE's so dumping the logs was as simple as pushing a button in a little application and it would spit out a txt file
later on our robots were upgraded to use a 2.4 MHz XBee radio to communicate with us. which was FUCKING SLOW. and log dumping became a much more tedious process. you had to connect, go to logging mode, and then the robot would vomit all the logs in the past 2 min OR the entirety of its memory bank (only 2 options) into a terminal window. you would then save the terminal window and open it in a text editor to read them. it could take up to 5 min to dump the entire log file and if you didn't dump fast enough, the ACK messages from the control server would fill up the logs and erase the error as the memory overwrote itself.
this missing logs problem was a Big Deal for software who now weren't getting every log from every error so a NEW method of saving logs was devised: the robot would just vomit the log data in real time over a DIFFERENT radio and we would save it to a KQL server. Thanks Daddy Microsoft.
now what's KQL you may be asking. why, it's Microsoft's very own SQL clone! it's Kusto Query Language. never mind that the system uses a SQL database for daily operations. let's use this proprietary Microsoft thing because they are paying us
so yay, problem solved. we now never miss the logs. so how do we read them if they are split up line by line in a database? why with a query of course!
select * from tbLogs where RobotUID = [64CharLongString] and timestamp > [UnixTimeCode]
if this makes no sense to you, CONGRATULATIONS! you found the problem with this setup. Most FSE's were BAD at SQL which meant they didn't read logs anymore. If you do understand what the query is, CONGRATULATIONS! you see why this is Very Stupid.
You could not search by robot name. each robot had some arbitrarily assigned 64 character long string as an identifier and the timestamps were not set to local time. so you had to run a lookup query to find the right name and do some time zone math to figure out what part of the logs to read. oh yeah and you had to download KQL to view them. so now we had both SQL and KQL on our computers
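for flavor, here's roughly what that ritual amounted to (python sketch; every table, column, and robot name here is invented, and the real queries were KQL rather than SQL):

from datetime import datetime, timedelta, timezone

# all names invented for illustration
robot_name = "Robot_A7"
local_error_time = datetime(2020, 3, 14, 9, 26)   # when the site said it broke
site_utc_offset = timedelta(hours=-5)             # the time zone math, done by hand

utc_time = (local_error_time - site_utc_offset).replace(tzinfo=timezone.utc)
unix_ts = int(utc_time.timestamp())

# step 1: look up the 64-char UID for the robot you actually care about
lookup = f"select RobotUID from tbRobots where RobotName = '{robot_name}'"
# step 2: paste that UID into the real log query
logs = f"select * from tbLogs where RobotUID = '<64CharLongString>' and timestamp > {unix_ts}"
print(lookup)
print(logs)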
NOBODY in the field liked this.
But Daddy Microsoft comes to the rescue
see we didn't JUST get KQL as part of that deal. we got the entire Microsoft cloud suite. and some people (like me) had been automating emails and stuff with Power Automate
[image: Microsoft Power Automate]
This is Microsoft Power Automate. it's Microsoft's version of Scratch but it has hooks into everything Microsoft. SharePoint, Teams, Outlook, Excel, it can integrate with all of it. i had been using it to send an email once a day with a list of all the robots in maintenance.
this gave me an idea
and i checked
and Power Automate had hooks for KQL
KLogs is actually short for Kusto Logs
I did not know how to program in Power Automate but damn it anything is better than writing KQL queries. so i got to work. and about 2 months later i had a BEHEMOTH of a Power Automate program. it lagged the webpage and many times when i tried to edit something my changes wouldn't take and i would have to click in very specific ways to ensure none of my variables were getting nuked. i don't think this was the intended purpose of Power Automate but this is what it did
the KLogger would watch a list of Teams chats and when someone typed "klogs" or pasted a copy of an ERROR message, it would spring into action.
it extracted the robot name from the message and timestamp from teams
it would look up the name in the database to find the 64 long string UID and the location that robot was assigned to
it would reply to the message in teams saying it found a robot name and was getting logs
it would run a KQL query for the database and get the control system logs then export them into a CSV
it would save the CSV with a .xls extension into a folder in SharePoint (it would make a new folder for each day and location if it didn't have one already)
it would send ANOTHER message in teams with a LINK to the file in SharePoint
it would then enter a loop and scour the robot logs looking for the keyword ESTOP to find the error. (it did this because Kusto was SLOWER than the XBee radio and had up to a 10 min delay on syncing)
if it found the error, it would adjust its start and end timestamps to capture it and export the robot logs book-ended from the event by ~ 1 min. if it didn't, it would use the timestamp from when it was triggered +/- 5 min
it saved THOSE logs to SharePoint the same way as before
it would send ANOTHER message in teams with a link to the files
it would then check if the error was 1 of 3 very specific types of camera error. if it was, it extracted the base64 jpg image saved in KQL as a byte array, did the conversion (roughly like the sketch after this list), and saved that as a jpg in SharePoint (and linked it of course)
and then it would terminate. and if it encountered an error anywhere in all of this, i had logic where it would spit back an error message in Teams as plaintext explaining what step failed and the program would close gracefully
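the jpg step, in python terms (a sketch of the idea only - the real conversion lived in Power Automate expressions, and the column name here is invented):

import base64

# fake a base64 payload so the sketch actually runs; in KLogs this came
# back from the KQL query as a byte-array/string column
fake_jpg = bytes([0xFF, 0xD8, 0xFF, 0xE0]) + b"<imagine the rest of a jpg here>"
row = {"CameraImageB64": base64.b64encode(fake_jpg).decode()}

jpg_bytes = base64.b64decode(row["CameraImageB64"])
with open("camera_error.jpg", "wb") as f:
    f.write(jpg_bytes)  # KLogs saved this to SharePoint and linked it in Teams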
I deployed it without asking anyone at one of the sites that was struggling. i just pointed it at their chat and turned it on. it had a bit of a rocky start (spammed chat) but man did the FSE's LOVE IT.
about 6 months later software deployed their answer to reading the logs: a webpage that acted as a nice GUI to the KQL database. much better than a CSV file
it still needed you to scroll through a big drop-down of robot names and enter a timestamp, but i noticed something. all that did was just change part of the URL and refresh the webpage
SO I MADE KLOGS 2 AND HAD IT GENERATE THE URL FOR YOU AND REPLY TO YOUR MESSAGE WITH IT. (it also still did the control server and jpg stuff). There's a non-zero chance that klogs was still in use long after i left that job
now i don't recommend anyone use power automate like this. it's clunky and weird. i had to make a variable called "Carrage Return" which was a blank text box that i pressed enter one time in, because it was incapable of understanding \n or generating a new line in any capacity OTHER than this (thanks support forum).
i'm also sure this probably is giving the actual programmer people anxiety. imagine working at a company and then some rando you've never seen but only heard about as "the FSE who's really good at root causing stuff", in a department that does not do any coding, managed to, in their spare time, build and release an entire workflow piggybacking on your work without any oversight, code review, or permission.....and everyone liked it
53 notes · View notes
reality-detective · 1 year ago
Text
WIRES>]; ATTACK ON ISRAEL WAS A FALSE FLAG EVENT
_Israel with over 10,000 Spies in the military embedded inside IRAN, Saudi Arabia and world Militaries.... Israel's INTELLIGENCE Agencies, including MOSSAD which is deeply connected to CIA, MI6 .. > ALL knew the Hamas was going to attack Israel several weeks before and months ago including several hours before the attack<
_The United States knew the attack was coming, as did Australia, UK, Canada, EU INTELLIGENCE...... Several satellites over Iran, Israel, Palestine and near all captured thousands of troops moving towards Israel all MAJOR INTELLIGENCE AGENCIES knew the attack was coming and news reporters (Israeli spies) in Palestine all knew the attack was coming and tried to warn Israel and the military///// >
>EVERYONE KNEW THE ATTACK WAS COMING,, INCLUDING INDIA INTELLIGENCE WHO TRIED TO CONTACT ISRAEL ( but Israel commanders and President blocked ALL calls before the attack)
_WARNING
>This attack on Israel was an inside Job, with the help of CIA. MOSSAD, MI6 and large parts of the funding 6 billion $$$$$$$ from U.S. to Iran funded the operations.
_The weapons used came from the Ukraine Black market which came from NATO,>the U.S.
The ISRAELI President and Prime minister Netanyahu ALL STOOD DOWN before the attacks began and told the Israeli INTEL and military commanders to stand down<
___
There was no intelligence error. Israel intentionally let the attacks happen<
_______
FOG OF WAR
Both the deep state and the white hats wanted these EVENTS to take place.
BOTH the [ ds] and white hats are fighting for the future control of ISRAEL
SOURCES REPORT> " INSIDE OF ISRAELI BANKS , INTELLIGENCE AGENCIES AND UNDERGROUND BASES LAY THE WORLD INFORMATION/DATA/SERVERS ON HUMAN TRAFFICKING WORLD OPERATIONS CONNECTED TO PEDOPHILE RINGS.
]> [ EPSTEIN] was created by the MOSSAD
with the CIA MI6, and EPSTEIN got his funding from MOSSAD via Ghislaine Maxwell's father, Israeli super spy Robert Maxwell_ ( who worked for cia and mi6 also)/////
____
The past 2 years in Israel the military has become divided much like the U.S. military who are losing hope in the government leaders and sectors. Several Revolts have tried to start but were ended quickly.
🔥 Major PANIC has been hitting the Israeli INTEL, Prime minister and military commanders community as their corruption and crimes keep getting EXPOSED and major PANIC is happening as U S. IS COMING CLOSER TO DROPPING THE EPSTEIN FILES. EPSTEIN LIST AND THE MAJOR COUNTRIES WHO DEALT WITH EPSTEIN> ESPECIALLY ISRAEL WHO CREATED EPSTEIN w/cia/mi6
_
_
____
Before EPSTEIN was arrested, he was apprehended several times by the military intelligence ALLIANCE and he was working with white hats and gave ALL INFORMATION ON CIA. MI6 . MOSSAD. JP MORGAN. WORLD BANKS. GATES. ETC ETC ETC..>> ISRAEL<<BIG TECH
GOOGLE. FACEBOOK YOUTUBE MICROSOFT and their connection to world deep state cabal military intelligence and world control by the Elites and Globalist,<
_
This massive coming THE STORM is scaring the CIA. MOSSAD KAZARIAN MAFIA. MI6 ETC ECT . ect etc AND THEY ARE TRYING TO DESTROY ALL THE MILITARY INTELLIGENCE EVIDENCE INSIDE ISRAEL AND UNDERGROUND BUNKERS TO CONCEAL ALL THE EVIDENCE OF THE WORLD HUMAN TRAFFICKING TRADE
_ THE WORLD BIG TECH FACEBOOK GOOGLE YOUTUBE CONTROL
_THE WORLD MONEY LAUNDERING SYSTEM THAT IS CONNECTED FROM ISRAEL TO UKRAINE TO THE U S. TO NATO UN. U.S. INDUSTRIAL MILITARY COMPLEX SYSTEM
MAJOR PANIC IS HAPPENING IN ISRAEL AS THE MILITARY WAS PLANNING A 2024 COUP IN ISRAEL TO OVER THROW THE DEEP STATE MILITARY AND REGIMEN THAT CONNECTED TO CIA.MI6 > CLINTON'S ROCKEFELLERS.>>
( Not far from where Jesus once walked.... The KAZARIAN Mafia, the cabal, dark Families began the practice of ADRENOCHROME and their satanic rituals to the god Moloch, god of child sacrifice ..
Satanism..... This is why satanism is pushed through the world and world shopping centers and music and movies...)
- David Wilcock
Something definitely doesn't seem right and destroying evidence has been going on for a long time, think Oklahoma City bombing, 9/11's building 7 and even Waco Texas was about destroying evidence. Is this possible? Think about it and you decide. 🤔
107 notes · View notes
window-to-the-void · 1 year ago
Note
Hey so what's Linux and what's it do
It’s an operating system like macOS or Windows. It runs on like 90% of servers but very few people use it on desktop. I do because I’m extra like that I guess.
It’s better than Windows in some ways since it doesn’t have a bunch of ads and shit baked into it. Plus it’s free (both in the sense of not paying and the sense of freedom [like all the code is public so you know what’s running on your system, can change whatever you want if you have the skills to, etc]). Also it’s more customizable.
The downside is a lot of apps don't run on it (games that have really invasive anti-cheat [cough Valorant cough], Adobe products, Microsoft office — although a shit ton of Windows-only games do run totally fine). There's some alternatives, like libreoffice instead of MS office (also anything in a browser runs fine, so Google docs too). GIMP as a photoshop replacement does exist, it's fine for basic stuff but I've heard it's not great for advanced stuff.
55 notes · View notes
tonguetyd · 5 months ago
Note
literally annoyed that all coastal states (including my dumb glove shaped state) aren't 90% hydro/wind
i might not be an engineer but i am with you there
*drags soapbox out and jumps on top*
DO YOU KNOW HOW INFURIATING IT IS TO HAVE EVERYONE SAY “ELECTRIFY EVERYTHING” KNOWING FULL GODDAMN WELL THAT THE GRID 1) CANNOT SUPPORT IT AND 2) IS DRASTICALLY NOT BASED ON RENEWABLE ENERGY?!?!?!
Don’t get me wrong I love electric cars, I love heat pump systems, I love buildings and homes that can say they are fossil fuel free! Really! I do!
But it means FUCK ALL when you have!!!! Said electricity!!!! Sourced by fossil fuels!!!! I said this in my tags on the other post but New York City! Was operating on *COAL*!!!!! Up until like 5 years ago.
WE ARE SITTING IN THE MIDDLE OF A RIVER.
Not to mention the ocean which like. You ever been to the beach?! You know what there’s a whole hell of a lot of at the beach? Wind!!!!!!!! And yet we have literal campaigns saying “save our oceans! Say no to wind power!”
Idk bruh I feel like the fish are gonna be less happy in a boiling ocean than needing to swim around a giant turbine but. I’m not a fuckin fish so.
NOT TO MENTION (I am fully waving my hands around like a crazy person because this is the main thing that gets me going)
THE ELECTRICAL GRID OF THE UNITED STATES HAS NOT BEEN UPDATED ON LARGE SCALE LEVELS SINCE IT WAS BUILT IN THE 1950s AND 60s.
It is not DESIGNED to handle every building in the city of [random map location] Chicago being off of gas and completely electrified. It's not!!! The plants cannot handle it as it is now!
So not only do we not have renewable sources because somebody in Iowa doesn’t want to replace their corn field with a solar field/a rich Long Islander doesn’t want to replace their ocean view with a wind turbine! We also are actively encouraging people to put MORE of a strain on the grid with NO FUCKING SOLUTION TO MEET THAT DEMAND!
I used to deal with this *all* the time in my old job when I was working with smaller buildings - they ALWAYS needed an electrical upgrade from the street and like. The utility only has so many wires going to that building. And it's not planning on bringing in more for the most part!
(I am now vibrating with rage) and THEN you have the fuckin AI bros! Who have their data centers in the middle of nowhere because that’s a great place to have a lot of servers that you need right? Yeah sure, you know what those places don’t have? ELECTRICAL INFRASTRUCTURE TO SUPPORT THE STUPID AMOUNT OF POWER AI NEEDS!!!!!!!
Now the obvious solution is that the AI bros of Google and Microsoft and whoever the fuck just use their BILLIONS OF FUCKING DOLLARS IN PROFIT to be good neighbors and upgrade the fucking systems because truly what is the downside to that everybody fucking wins!
But what do I know. I’m just friendly neighborhood engineer.
*hops down from soapbox*
11 notes · View notes
navigniteitsolution · 10 days ago
Text
Expert Power Platform Services | Navignite LLP
Looking to streamline your business processes with custom applications? With over 10 years of extensive experience, our agency specializes in delivering top-notch Power Apps services that transform the way you operate. We harness the full potential of the Microsoft Power Platform to create solutions that are tailored to your unique needs.
Our Services Include:
Custom Power Apps Development: Building bespoke applications to address your specific business challenges.
Workflow Automation with Power Automate: Enhancing efficiency through automated workflows and processes.
Integration with Microsoft Suite: Seamless connectivity with SharePoint, Dynamics 365, Power BI, and other Microsoft tools.
Third-Party Integrations: Expertise in integrating Xero, QuickBooks, MYOB, and other external systems.
Data Migration & Management: Secure and efficient data handling using tools like XRM Toolbox.
Maintenance & Support: Ongoing support to ensure your applications run smoothly and effectively.
Our decade-long experience includes working with technologies like Azure Functions, Custom Web Services, and SQL Server, ensuring that we deliver robust and scalable solutions.
Why Choose Us?
Proven Expertise: Over 10 years of experience in Microsoft Dynamics CRM and Power Platform.
Tailored Solutions: Customized services that align with your business goals.
Comprehensive Skill Set: Proficient in plugin development, workflow management, and client-side scripting.
Client-Centric Approach: Dedicated to improving your productivity and simplifying tasks.
Boost your productivity and drive innovation with our expert Power Apps solutions.
Contact us today to elevate your business to the next level!
2 notes · View notes
elsa16744 · 1 month ago
Text
The AI Power Conundrum: Will Renewables Save the Day? 
The rapid advancement of artificial intelligence (AI) technologies is transforming industries and driving unprecedented innovation. However, the surge in AI applications comes with a significant challenge: the growing power demands required to sustain these systems. As we embrace the era of AI power, the question arises: can renewable energy rise to meet these increasing energy needs? 
Understanding the AI Power Demand 
AI systems, particularly those involving deep learning and large-scale data processing, consume vast amounts of electricity. Data centers housing AI servers are notorious for their high energy requirements, often leading to increased carbon emissions. As AI continues to evolve, the demand for energy is expected to skyrocket, posing a substantial challenge for sustainability. 
The Role of Renewable Energy 
Renewable energy sources, such as solar, wind, and hydroelectric power, present a viable solution to the AI power conundrum. These sources produce clean, sustainable energy that can help offset the environmental impact of AI technologies. By harnessing renewable energy, tech companies can significantly reduce their carbon footprint while meeting their power needs. 
Tech Giants Leading the Way 
Several tech giants are already paving the way by integrating renewable energy into their operations. Companies like Google, Amazon, and Microsoft are investing heavily in renewable energy projects to power their data centers. For instance, Google has committed to operating on 24/7 carbon-free energy by 2030. These initiatives demonstrate the potential for renewable energy to support the massive energy demands of AI power. 
Challenges and Opportunities 
While the integration of renewable energy is promising, it comes with its own set of challenges. Renewable energy sources can be intermittent, depending on weather conditions, which can affect their reliability. However, advancements in energy storage solutions and smart grid technologies are addressing these issues, ensuring a more stable and dependable supply of renewable energy. 
The synergy between AI and renewable energy also presents unique opportunities. AI can optimize the performance of renewable energy systems, predict energy demand, and enhance energy efficiency. This symbiotic relationship has the potential to accelerate the adoption of renewable energy and create a more sustainable future. 
Conclusion 
The AI power conundrum is a pressing issue that demands innovative solutions. Renewable energy emerges as a crucial player in addressing this challenge, offering a sustainable path forward. As tech companies continue to embrace renewable energy, the future of AI power looks promising, with the potential to achieve both technological advancement and environmental sustainability. 
By integrating renewable energy into the AI ecosystem, we can ensure that the growth of AI technologies does not come at the expense of our planet. The collaboration between AI and renewable energy is not just a possibility; it is a necessity for a sustainable future. 
2 notes · View notes
Text
Exploring Kerberos and its related attacks
Introduction
In the world of cybersecurity, authentication is the linchpin upon which secure communications and data access rely. Kerberos, a network authentication protocol developed by MIT, has played a pivotal role in securing networks, particularly in Microsoft Windows environments. In this in-depth exploration of Kerberos, we'll delve into its technical intricacies, vulnerabilities, and the countermeasures that can help organizations safeguard their systems.
Understanding Kerberos: The Fundamentals
At its core, Kerberos is designed to provide secure authentication for users and services over a non-secure network, such as the internet. It operates on the principle of "need-to-know," ensuring that only authenticated users can access specific resources. To grasp its inner workings, let's break down Kerberos into its key components:
1. Authentication Server (AS)
The AS is the initial point of contact for authentication. When a user requests access to a service, the AS verifies their identity and issues a Ticket Granting Ticket (TGT) if authentication is successful.
2. Ticket Granting Server (TGS)
Once a user has a TGT, they can request access to various services without re-entering their credentials. The TGS validates the TGT and issues a service ticket for the requested resource.
3. Realm
A realm in Kerberos represents a security domain. It defines a specific set of users, services, and authentication servers that share a common Kerberos database.
4. Service Principal
A service principal represents a network service (e.g., a file server or email server) within the realm. Each service principal has a unique encryption key.
Vulnerabilities in Kerberos
While Kerberos is a robust authentication protocol, it is not immune to vulnerabilities and attacks. Understanding these vulnerabilities is crucial for securing a network environment that relies on Kerberos for authentication.
1. AS-REP Roasting
AS-REP Roasting is a common attack that exploits weak user account settings. When a user's pre-authentication is disabled, an attacker can request authentication material for that user without presenting a password; the AS-REP they receive contains data encrypted with a key derived from the user's password. They can then brute-force that encrypted portion offline to recover the user's plaintext password.
2. Pass-the-Ticket Attacks
In a Pass-the-Ticket attack, an attacker steals a TGT or service ticket and uses it to impersonate a legitimate user or service. This attack can lead to unauthorized access and privilege escalation.
3. Golden Ticket Attacks
A Golden Ticket attack allows an attacker to forge TGTs, granting them unrestricted access to the domain. To execute this attack, the attacker needs to compromise the KDC's long-term secret key (in Active Directory, the key of the krbtgt account).
4. Silver Ticket Attacks
Silver Ticket attacks target specific services or resources. Attackers create forged service tickets to access a particular resource without having the user's password.
Technical Aspects and Formulas
To gain a deeper understanding of Kerberos and its related attacks, let's delve into some of the technical aspects and cryptographic formulas that underpin the protocol:
1. Kerberos Authentication Flow
The Kerberos authentication process involves several steps, including ticket requests, encryption, and decryption. It relies on various cryptographic algorithms, such as DES, AES, and HMAC.
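As a rough illustration of that flow, the toy Python model below traces the three exchanges. It is a sketch only: seal() merely labels which key would protect each part instead of performing real encryption, and real Kerberos adds nonces, lifetimes, flags, and proper AES/HMAC ciphersuites.

def seal(key_name, payload):
    # Stand-in for encryption: records which key would protect the payload.
    return {"protected_by": key_name, "payload": payload}

# 1. AS exchange: the client authenticates and receives a TGT
as_req = {"client": "alice", "service": "krbtgt/EXAMPLE.COM"}
tgt = seal("K_tgs", {"client": "alice", "session_key": "SK1", "timestamp": "t0"})
as_rep = {"tgt": tgt, "for_client": seal("K_alice", {"session_key": "SK1"})}

# 2. TGS exchange: the client presents the TGT plus an authenticator
tgs_req = {"tgt": tgt, "authenticator": seal("SK1", {"client": "alice", "timestamp": "t1"})}
ticket = seal("K_service", {"client": "alice", "session_key": "SK2"})
tgs_rep = {"ticket": ticket, "for_client": seal("SK1", {"session_key": "SK2"})}

# 3. AP exchange: the client presents the service ticket to the service
ap_req = {"ticket": ticket, "authenticator": seal("SK2", {"client": "alice", "timestamp": "t2"})}
print(ap_req)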
2. Ticket Granting Ticket (TGT) Structure
A TGT typically consists of a user's identity, the requested service, a timestamp, and other information encrypted with the TGS's secret key. The TGT structure can be expressed as:
TGT = E[K_TGS]( client identity, requested service, timestamp, lifetime, session key )

where E[K_TGS] denotes encryption under the TGS's secret key.
3. Encryption Keys
Kerberos relies on encryption keys generated during the authentication process. The user's password is typically used to derive these keys. The process involves key generation and hashing formulas.
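A simplified sketch of the password-to-key step for the AES encryption types (per RFC 3962 the salt is built from the realm and principal name and the default iteration count is 4096; the real string-to-key routine adds a final key-derivation step that this sketch omits):

import hashlib

password = b"correct horse battery staple"
salt = b"EXAMPLE.COMalice"   # realm concatenated with the principal name
iterations = 4096            # the RFC 3962 default

# Intermediate PBKDF2 output; real Kerberos derives the final AES key from this
aes_key_material = hashlib.pbkdf2_hmac("sha1", password, salt, iterations, dklen=32)
print(aes_key_material.hex())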
Mitigating Kerberos Vulnerabilities
To protect against Kerberos-related vulnerabilities and attacks, organizations can implement several strategies and countermeasures:
1. Enforce Strong Password Policies
Strong password policies can mitigate attacks like AS-REP Roasting. Ensure that users create complex, difficult-to-guess passwords and consider enabling pre-authentication.
2. Implement Multi-Factor Authentication (MFA)
MFA adds an extra layer of security by requiring users to provide multiple forms of authentication. This can thwart various Kerberos attacks.
3. Regularly Rotate Encryption Keys
Frequent rotation of encryption keys can limit an attacker's ability to use stolen tickets. Implement a key rotation policy and ensure it aligns with best practices.
4. Monitor and Audit Kerberos Traffic
Continuous monitoring and auditing of Kerberos traffic can help detect and respond to suspicious activities. Utilize security information and event management (SIEM) tools for this purpose.
5. Segment and Isolate Critical Systems
Isolating sensitive systems from less-trusted parts of the network can reduce the risk of lateral movement by attackers who compromise one system.
6. Patch and Update
Regularly update and patch your Kerberos implementation to mitigate known vulnerabilities and stay ahead of emerging threats.
4. Kerberos Encryption Algorithms
Kerberos employs various encryption algorithms to protect data during authentication and ticket issuance. Common cryptographic algorithms include:
DES (Data Encryption Standard): Historically used, but now considered weak due to its susceptibility to brute-force attacks.
3DES (Triple DES): An improvement over DES, it applies the DES encryption algorithm three times to enhance security.
AES (Advanced Encryption Standard): A strong symmetric encryption algorithm, widely used in modern Kerberos implementations for better security.
HMAC (Hash-based Message Authentication Code): Used for message integrity, HMAC ensures that messages have not been tampered with during transmission.
5. Key Distribution Center (KDC)
The KDC is the heart of the Kerberos authentication system. It consists of two components: the Authentication Server (AS) and the Ticket Granting Server (TGS). The AS handles initial authentication requests and issues TGTs, while the TGS validates these TGTs and issues service tickets. This separation of functions enhances security by minimizing exposure to attack vectors.
6. Salting and Nonces
To thwart replay attacks, Kerberos employs salting and nonces (random numbers). Salting involves appending a random value to a user's password before hashing, making it more resistant to dictionary attacks. Nonces are unique values generated for each authentication request to prevent replay attacks.
Now, let's delve into further Kerberos vulnerabilities and their technical aspects:
7. Ticket-Granting Ticket (TGT) Expiry Time
By default, TGTs have a relatively long expiry time, which can be exploited by attackers if they can intercept and reuse them. Administrators should consider reducing TGT lifetimes to mitigate this risk.
8. Ticket Granting Ticket Renewal
Kerberos allows TGT renewal without re-entering the password. While convenient, this feature can be abused by attackers if they manage to capture a TGT. Limiting the number of renewals or implementing MFA for renewals can help mitigate this risk.
9. Service Principal Name (SPN) Abuse
Attackers may exploit misconfigured SPNs to impersonate legitimate services. Regularly review and audit SPNs to ensure they are correctly associated with the intended services.
10. Kerberoasting
Kerberoasting is an attack where attackers target service accounts to obtain service tickets and attempt offline brute-force attacks to recover plaintext passwords. Robust password policies and regular rotation of service account passwords can help mitigate this risk.
11. Silver Ticket and Golden Ticket Attacks
To defend against Silver and Golden Ticket attacks, it's essential to implement strong password policies, limit privileges of service accounts, and monitor for suspicious behavior, such as unusual access patterns.
12. Kerberos Constrained Delegation
Kerberos Constrained Delegation allows a service to impersonate a user to access other services. Misconfigurations can lead to security vulnerabilities, so careful planning and configuration are essential.
Mitigation strategies to counter these vulnerabilities include:
13. Shorter Ticket Lifetimes
Reducing the lifespan of TGTs and service tickets limits the window of opportunity for attackers to misuse captured tickets.
14. Regular Password Changes
Frequent password changes for service accounts and users can thwart offline attacks and reduce the impact of credential compromise.
15. Least Privilege Principle
Implement the principle of least privilege for service accounts, limiting their access only to the resources they need, and monitor for unusual access patterns.
16. Logging and Monitoring
Comprehensive logging and real-time monitoring of Kerberos traffic can help identify and respond to suspicious activities, including repeated failed authentication attempts.
Kerberos Delegation: A Technical Deep Dive
1. Understanding Delegation in Kerberos
Kerberos delegation allows a service to act on behalf of a user to access other services without requiring the user to reauthenticate for each service. This capability enhances the efficiency and usability of networked applications, particularly in complex environments where multiple services need to interact on behalf of a user.
2. Types of Kerberos Delegation
Kerberos delegation can be categorized into two main types:
Constrained Delegation: This type of delegation restricts the services a service can access on behalf of a user. It allows administrators to specify which services a given service can impersonate for the user.
Unconstrained Delegation: In contrast, unconstrained delegation grants the service full delegation rights, enabling it to access any service on behalf of the user without restrictions. Unconstrained delegation poses higher security risks and is generally discouraged.
3. How Delegation Works
Here's a step-by-step breakdown of how delegation occurs within the Kerberos authentication process:
Initial Authentication: The user logs in and obtains a Ticket Granting Ticket (TGT) from the Authentication Server (AS).
Request to Access a Delegated Service: The user requests access to a service that supports delegation.
Service Ticket Request: The user's client requests a service ticket from the Ticket Granting Server (TGS) to access the delegated service. The TGS issues a service ticket for the delegated service and includes the user's TGT encrypted with the service's secret key.
Service Access: The user presents the service ticket to the delegated service. The service decrypts the ticket using its secret key and obtains the user's TGT.
Secondary Authentication: The delegated service can then use the user's TGT to authenticate to other services on behalf of the user without the user's direct involvement. This secondary authentication occurs transparently to the user.
4. Delegation and Impersonation
Kerberos delegation can be seen as a form of impersonation. The delegated service effectively impersonates the user to access other services. This impersonation is secure because the delegated service needs to present both the user's TGT and the service ticket for the delegated service, proving it has the user's explicit permission.
5. Delegation in Multi-Tier Applications
Kerberos delegation is particularly useful in multi-tier applications, where multiple services are involved in processing a user's request. It allows a front-end service to securely delegate authentication to a back-end service on behalf of the user.
6. Protocol Extensions for Delegation
Kerberos extensions, such as Service-for-User (S4U) extensions, enable a service to request service tickets on behalf of a user without needing the user's TGT. These extensions are valuable for cases where the delegated service cannot obtain the user's TGT directly.
7. Benefits of Kerberos Delegation
Efficiency: Delegation eliminates the need for the user to repeatedly authenticate to access multiple services, improving the user experience.
Security: Delegation is secure because it relies on Kerberos authentication and requires proper configuration to work effectively.
Scalability: Delegation is well-suited for complex environments with multiple services and tiers, enhancing scalability.
In this comprehensive exploration of Kerberos, we've covered a wide array of topics, from the fundamentals of its authentication process to advanced concepts like delegation.
Kerberos, as a network authentication protocol, forms the backbone of secure communication within organizations. Its core principles include the use of tickets, encryption, and a trusted third-party Authentication Server (AS) to ensure secure client-service interactions.
Security is a paramount concern in Kerberos. The protocol employs encryption, timestamps, and mutual authentication to guarantee that only authorized users gain access to network resources. Understanding these security mechanisms is vital for maintaining robust network security.
Despite its robustness, Kerberos is not impervious to vulnerabilities. Attacks like AS-REP Roasting, Pass-the-Ticket, Golden Ticket, and Silver Ticket attacks can compromise security. Organizations must be aware of these vulnerabilities to take appropriate countermeasures.
Implementing best practices is essential for securing Kerberos-based authentication systems. These practices include enforcing strong password policies, regular key rotation, continuous monitoring, and employee training.
Delving into advanced Kerberos concepts, we explored delegation – both constrained and unconstrained. Delegation allows services to act on behalf of users, enhancing usability and efficiency in complex, multi-tiered applications. Understanding delegation and its security implications is crucial in such scenarios.
Advanced Kerberos concepts introduce additional security considerations. These include implementing fine-grained access controls, monitoring for unusual activities, and regularly analyzing logs to detect and respond to security incidents.
So to conclude, Kerberos stands as a foundational authentication protocol that plays a pivotal role in securing networked environments. It offers robust security mechanisms and advanced features like delegation to enhance usability. Staying informed about Kerberos' complexities, vulnerabilities, and best practices is essential to maintain a strong security posture in the ever-evolving landscape of cybersecurity.
12 notes · View notes
itcourses-stuff · 3 months ago
Text
How to Become a Cloud Computing Engineer
Introduction:
Cloud computing has become a cornerstone of modern IT infrastructure, making the role of a Cloud Computing Engineer highly in demand. If you're looking to enter this field, here's a roadmap to help you get started:
Build a Strong Foundation in IT A solid understanding of computer networks, operating systems, and basic programming is essential. Consider getting a degree in Computer Science or Information Technology. Alternatively, Jetking offers cloud computing courses that can help you build your career and gain the technical knowledge needed.
Learn Cloud Platforms Familiarize yourself with popular cloud service providers such as AWS (Amazon Web Services), Microsoft Azure, and Google Cloud. Many platforms offer certification courses, like AWS Certified Solutions Architect, which will help validate your skills.
Gain Hands-On Experience Practical experience is critical. Set up your own cloud projects, manage databases, configure servers, and practice deploying applications. This will give you the real-world experience that employers seek.
Master Programming Languages Learn programming languages commonly used in cloud environments, such as Python, Java, or Ruby. Scripting helps automate tasks, making your work as a cloud engineer more efficient (see the short example after this list).
Understand Security in the Cloud Security is paramount in cloud computing. Gain knowledge of cloud security best practices, such as encryption, data protection, and compliance standards, to ensure safe operations and build toward mastery of cloud computing.
Get Certified Earning cloud certifications from AWS, Azure, or Google Cloud can enhance your credibility. Certifications like AWS Certified Cloud Practitioner or Microsoft Certified: Azure Fundamentals can give you a competitive edge.
Keep Learning Cloud technology evolves rapidly, so continuous learning is key. Stay updated by taking advanced courses and attending cloud tech conferences.
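For example, a few lines of Python with the AWS SDK (boto3) can automate a small everyday task like listing your S3 buckets. This is a sketch only, and it assumes AWS credentials are already configured on your machine:

import boto3

# Uses credentials from the environment or ~/.aws/credentials
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])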
Join Jetking today! Click Here
By building your expertise in these areas, you’ll be well on your way to a successful career as a Cloud Computing Engineer!
2 notes · View notes
seogoogle1 · 6 months ago
Text
Windows Server 2016: Revolutionizing Enterprise Computing
In the ever-evolving landscape of enterprise computing, Windows Server 2016 emerges as a beacon of innovation and efficiency, heralding a new era of productivity and scalability for businesses worldwide. Released by Microsoft in September 2016, Windows Server 2016 represents a significant leap forward in terms of security, performance, and versatility, empowering organizations to embrace the challenges of the digital age with confidence. In this in-depth exploration, we delve into the transformative capabilities of Windows Server 2016 and its profound impact on the fabric of enterprise IT.
Tumblr media
Introduction to Windows Server 2016
Windows Server 2016 stands as the cornerstone of Microsoft's server operating systems, offering a comprehensive suite of features and functionalities tailored to meet the diverse needs of modern businesses. From enhanced security measures to advanced virtualization capabilities, Windows Server 2016 is designed to provide organizations with the tools they need to thrive in today's dynamic business environment.
Key Features of Windows Server 2016
Enhanced Security: Security is paramount in Windows Server 2016, with features such as Credential Guard, Device Guard, and Just Enough Administration (JEA) providing robust protection against cyber threats. Shielded Virtual Machines (VMs) further bolster security by encrypting VMs to prevent unauthorized access.
Software-Defined Storage: Windows Server 2016 introduces Storage Spaces Direct, a revolutionary software-defined storage solution that enables organizations to create highly available and scalable storage pools using commodity hardware. With Storage Spaces Direct, businesses can achieve greater flexibility and efficiency in managing their storage infrastructure.
Improved Hyper-V: Hyper-V in Windows Server 2016 undergoes significant enhancements, including support for nested virtualization, Shielded VMs, and rolling upgrades. These features enable organizations to optimize resource utilization, improve scalability, and enhance security in virtualized environments.
Nano Server: Nano Server represents a lightweight and minimalistic installation option in Windows Server 2016, designed for cloud-native and containerized workloads. With reduced footprint and overhead, Nano Server enables organizations to achieve greater agility and efficiency in deploying modern applications.
Container Support: Windows Server 2016 embraces the trend of containerization with native support for Docker and Windows containers. By enabling organizations to build, deploy, and manage containerized applications seamlessly, Windows Server 2016 empowers developers to innovate faster and IT operations teams to achieve greater flexibility and scalability.
Benefits of Windows Server 2016
Windows Server 2016 offers a myriad of benefits that position it as the platform of choice for modern enterprise computing:
Enhanced Security: With advanced security features like Credential Guard and Shielded VMs, Windows Server 2016 helps organizations protect their data and infrastructure from a wide range of cyber threats, ensuring peace of mind and regulatory compliance.
Improved Performance: Windows Server 2016 delivers enhanced performance and scalability, enabling organizations to handle the demands of modern workloads with ease and efficiency.
Flexibility and Agility: With support for Nano Server and containers, Windows Server 2016 provides organizations with unparalleled flexibility and agility in deploying and managing their IT infrastructure, facilitating rapid innovation and adaptation to changing business needs.
Cost Savings: By leveraging features such as Storage Spaces Direct and Hyper-V, organizations can achieve significant cost savings through improved resource utilization, reduced hardware requirements, and streamlined management.
Future-Proofing: Windows Server 2016 is designed to support emerging technologies and trends, ensuring that organizations can stay ahead of the curve and adapt to new challenges and opportunities in the digital landscape.
Conclusion: Embracing the Future with Windows Server 2016
In conclusion, Windows Server 2016 stands as a testament to Microsoft's commitment to innovation and excellence in enterprise computing. With its advanced security, enhanced performance, and unparalleled flexibility, Windows Server 2016 empowers organizations to unlock new levels of efficiency, productivity, and resilience. Whether deployed on-premises, in the cloud, or in hybrid environments, Windows Server 2016 serves as the foundation for digital transformation, enabling organizations to embrace the future with confidence and achieve their full potential in the ever-evolving world of enterprise IT.
Website: https://microsoftlicense.com
5 notes · View notes
rosiemaesworld · 4 months ago
Text
1. **Convergence**:
- **Definition**: Convergence in ICT refers to the integration of multiple technologies, platforms, or services into a single, cohesive system.
- **Example**: Smartphones that combine telephone, internet browsing, email, GPS, and multimedia functions.
- **Impact**: It leads to more versatile devices and systems, simplifying user experience and increasing efficiency by reducing the need for multiple, separate devices.
2. **Social Media**:
- **Definition**: Social media consists of online platforms that facilitate the creation and sharing of user-generated content and interaction within virtual communities.
- **Example**: Facebook, Twitter, Instagram, LinkedIn.
- **Impact**: Social media has revolutionized communication and information sharing, influencing personal interactions, marketing strategies, public relations, and even political campaigns.
3. **Mobile Technologies**:
- **Definition**: Mobile technologies encompass portable devices and the infrastructure that enables wireless communication and internet access.
- **Example**: Smartphones, tablets, wearable devices like smartwatches.
- **Impact**: These technologies enable users to access information, communicate, and perform various tasks from virtually anywhere, enhancing connectivity and productivity.
4. **Assistive Media**:
- **Definition**: Assistive media includes tools and technologies designed to help individuals with disabilities access and use ICT effectively.
- **Example**: Screen readers for the visually impaired, voice recognition software, alternative input devices.
- **Impact**: Assistive media ensures accessibility and inclusivity, allowing people with disabilities to participate fully in the digital world, improving their quality of life and opportunities for education and employment.
5. **Cloud Computing**:
- **Definition**: Cloud computing involves delivering computing services—such as storage, processing power, and applications—over the internet, rather than from local servers or personal devices.
- **Example**: Google Drive, Microsoft Azure, Amazon Web Services (AWS).
- **Impact**: Cloud computing offers scalable, flexible, and cost-effective resources, enhancing collaboration, data accessibility, and operational efficiency for both individuals and organizations.
ROSIE MAE, RONDINA, SAIDUNA
govindhtech · 4 months ago
Text
Intel VTune Profiler For Data Parallel Python Applications
Intel VTune Profiler tutorial
This brief tutorial shows how to use Intel VTune Profiler to profile the performance of Python applications, using NumPy- and Numba-based example applications.
Analysing Performance in Applications and Systems
For HPC, cloud, IoT, media, storage, and other applications, Intel VTune Profiler optimises system performance, application performance, and system configuration.
Optimise the performance of the entire application, not just the accelerated part, across the CPU, GPU, and FPGA.
Multilingual profiling: profile SYCL, C, C++, C#, Fortran, OpenCL, Python, Go, Java, .NET, assembly, or any combination of these languages.
Application or System: Obtain detailed results mapped to source code, or coarse-grained system data over a longer time period.
Power: Maximise efficiency without triggering thermal or power-related throttling.
VTune platform profiler
It has the following features.
Optimisation of Algorithms
Find your code's "hot spots", the sections that take the longest to run.
Use the Flame Graph view to see hot code paths and the amount of time spent in each function and its callees.
Bottlenecks in Microarchitecture and Memory
Use microarchitecture exploration analysis to pinpoint the major hardware problems affecting your application’s performance.
Identify memory-access-related concerns, such as cache misses and high-bandwidth-utilisation issues.
Accelerators and XPUs
Improve data transfers and the GPU offload scheme for SYCL, OpenCL, Microsoft DirectX, or OpenMP offload code. Determine which GPU kernels take the longest, so they can be optimised further.
Examine GPU-bound programs for inefficient kernel algorithms or microarchitectural restrictions that may be causing performance problems.
Examine FPGA utilisation and the interactions between CPU and FPGA.
Technical summary: Determine the most time-consuming operations that are executing on the neural processing unit (NPU) and learn how much data is exchanged between the NPU and DDR memory.
Parallelism
Check the threading efficiency of the code. Determine which threading problems are affecting performance.
Examine compute-intensive or throughput HPC programs to determine how well they utilise memory, vectorisation, and the CPU.
Platform and I/O
Find the points in I/O-intensive applications where performance is stalled. Examine the hardware’s ability to handle I/O traffic produced by integrated accelerators or external PCIe devices.
Use System Overview to get a detailed overview of short-term workloads.
Multiple Nodes
Describe the performance characteristics of workloads that use OpenMP and large-scale Message Passing Interface (MPI) communication.
Identify scalability problems and get recommendations for deeper investigation.
Intel VTune Profiler
To improve Python performance while using Intel systems, install and utilise the Intel Distribution for Python and Data Parallel Extensions for Python with your applications.
Configure your VTune Profiler setup for use with Python.
To find performance issues and areas for improvement, profile three distinct implementations of the same Python application. This article uses the pairwise distance calculation, an algorithm commonly used in machine learning and data analytics; a minimal NumPy version is sketched after the list below.
The three implementations use the following packages.
Intel-optimised NumPy
Data Parallel Extension for NumPy (dpnp)
Data Parallel Extension for Numba (numba-dpex) on the GPU
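For reference, here is a minimal sketch of the pairwise distance algorithm in plain NumPy, assuming Euclidean distance between the rows of a 2-D array (the function and variable names are illustrative, not taken from Intel's sample code):

   import numpy as np

   def pairwise_distance(data):
       # Squared Euclidean norm of each row, as a column vector of shape (n, 1)
       sq_norms = np.einsum("ij,ij->i", data, data)[:, np.newaxis]
       # ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a.b; clip tiny negatives from round-off
       sq_dists = sq_norms + sq_norms.T - 2.0 * (data @ data.T)
       return np.sqrt(np.maximum(sq_dists, 0.0))

   points = np.random.default_rng(0).random((1024, 3))
   distances = pairwise_distance(points)   # (1024, 1024) matrix of distances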
Python’s NumPy and Data Parallel Extension
By providing optimised heterogeneous computing, the Intel Distribution for Python and the Data Parallel Extensions for Python offer a straightforward approach to developing high-performance machine learning (ML) and scientific applications.
The Intel Distribution for Python adds:
Scalability on PCs, powerful servers, and laptops utilising every CPU core available.
Assistance with the most recent Intel CPU instruction sets.
Accelerating core numerical and machine learning packages with libraries such as the Intel oneAPI Math Kernel Library (oneMKL) and Intel oneAPI Data Analytics Library (oneDAL) allows for near-native performance.
Productivity tools for compiling Python code into optimised native instructions.
Important Python bindings to help your Python project integrate Intel native tools more easily.
Three core packages make up the Data Parallel Extensions for Python (a dpnp sketch follows this list):
Data Parallel Extension for NumPy (dpnp)
Data Parallel Extension for Numba (numba-dpex)
dpctl (Data Parallel Control library), which provides tensor data structures, device selection, data allocation on devices, and support for user-defined data parallel extensions
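Because dpnp is designed as a NumPy-compatible, drop-in API, the same routine can in principle be offloaded to a SYCL device just by swapping the import. This is a sketch under that assumption (it presumes dpnp's coverage of einsum and matmul, which the versions in the reference configuration below provide):

   import dpnp as np   # NumPy-compatible API; arrays are allocated on the SYCL device

   points = np.random.rand(1024, 3)
   sq_norms = np.einsum("ij,ij->i", points, points)[:, np.newaxis]
   # Same arithmetic as the NumPy version, now executed on the device
   distances = np.sqrt(np.maximum(sq_norms + sq_norms.T - 2.0 * (points @ points.T), 0.0))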
To promptly identify and resolve unanticipated performance problems in machine learning (ML), artificial intelligence (AI), and other scientific workloads, it is best to obtain insights from comprehensive source-code-level analysis of compute and memory bottlenecks. Intel VTune Profiler can provide this for Python-based ML and AI programs as well as for C/C++ code. The methods for profiling such Python applications are the main topic of this article.
With the help of Intel VTune Profiler, developers can pinpoint the source lines causing performance loss and replace them with calls into the highly optimised Intel-optimised NumPy and Data Parallel Extensions for Python libraries.
Setting up and Installing
1. Install Intel Distribution for Python
2. Create and activate a Python Virtual Environment
   python -m venv pyenv
   source pyenv/bin/activate      # Linux; on Windows use pyenv\Scripts\activate
3. Install Python packages
   pip install numpy
   pip install dpnp
   pip install numba
   pip install numba-dpex
   pip install pyitt
Reference Configuration
The hardware and software components used for the reference example code are:
Software Components:
dpnp 0.14.0+189.gfcddad2474
mkl-fft 1.3.8
mkl-random 1.2.4
mkl-service 2.4.0
mkl-umath 0.1.1
numba 0.59.0
numba-dpex 0.21.4
numpy 1.26.4
pyitt 1.1.0
Operating System:
Linux, Ubuntu 22.04.3 LTS
CPU:
Intel Xeon Platinum 8480+
GPU:
Intel Data Center GPU Max 1550
The Example Application for NumPy
Intel will demonstrate how to use Intel VTune Profiler and its Intel Instrumentation and Tracing Technology (ITT) API to optimise a NumPy application step-by-step. The pairwise distance application, a well-liked approach in fields including biology, high performance computing (HPC), machine learning, and geographic data analytics, will be used in this article.
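For example, the pyitt package installed earlier exposes ITT tasks as Python decorators and context managers, so regions of interest show up by name in the VTune timeline. A minimal sketch, assuming pyitt's documented task API (the task names here are illustrative):

   import numpy as np
   import pyitt

   @pyitt.task          # the whole function becomes a named ITT task in VTune
   def pairwise_distance(data):
       sq_norms = np.einsum("ij,ij->i", data, data)[:, np.newaxis]
       return np.sqrt(np.maximum(sq_norms + sq_norms.T - 2.0 * (data @ data.T), 0.0))

   with pyitt.task("generate input"):   # context-manager form for ad-hoc regions
       points = np.random.default_rng(0).random((1024, 3))

   distances = pairwise_distance(points)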
Summary
The three stages of optimisation that we will discuss in this post are summarised as follows:
Step 1: Profiling the Intel-optimised NumPy pairwise distance implementation: understand the obstacles limiting the NumPy implementation's performance.
Step 2: Profiling the Data Parallel Extension for NumPy (dpnp) pairwise distance implementation: examine the implementation and see whether there is a performance gap.
Step 3: Profiling the Data Parallel Extension for Numba (numba-dpex) pairwise distance implementation on the GPU: analyse the kernel's GPU performance (a rough kernel sketch follows this list).
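To make step 3 concrete, here is a rough sketch of what a numba-dpex pairwise distance kernel might look like, written against the 0.21-era API from the reference configuration above. The kernel and launch syntax differ between numba-dpex releases, so treat the names below as assumptions rather than a definitive implementation:

   import math
   import dpnp
   import numba_dpex as dpex

   @dpex.kernel
   def pairwise_kernel(data, dist):
       # One work-item per (i, j) pair of rows
       i = dpex.get_global_id(0)
       j = dpex.get_global_id(1)
       acc = 0.0
       for k in range(data.shape[1]):
           diff = data[i, k] - data[j, k]
           acc += diff * diff
       dist[i, j] = math.sqrt(acc)

   n, dim = 1024, 3
   points = dpnp.random.rand(n, dim)                  # USM arrays on the SYCL device
   dist = dpnp.zeros((n, n), dtype=points.dtype)
   pairwise_kernel[dpex.Range(n, n)](points, dist)    # launch over an n x n range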
Boost Your Python NumPy Application
Intel has shown how to quickly discover compute and memory bottlenecks in a Python application using Intel VTune Profiler.
Intel VTune Profiler aids in identifying bottlenecks’ root causes and strategies for enhancing application performance.
It can assist in mapping the main bottleneck jobs to the source code/assembly level and displaying the related CPU/GPU time.
Even more comprehensive, developer-friendly profiling results can be obtained by using the Instrumentation and Tracing Technology API (ITT API).
Read more on govindhtech.com
techblog-365 · 1 year ago
Text
CLOUD COMPUTING: A CONCEPT OF A NEW ERA FOR DATA SCIENCE
Cloud computing has been one of the most interesting and rapidly evolving topics in computing over the recent decade. The concept of storing data on, or accessing software from, a computer you know nothing about seems confusing to many users. Many of the people and organizations that use cloud computing on a daily basis claim that they do not understand the subject. But the concept is not as confusing as it sounds. Cloud computing is a type of service in which computing resources are delivered over a network. In simple words, it can be compared to the electricity supply we use every day: we do not have to bother with how the electricity is generated or transported to our houses, or worry about where it comes from; all we do is use it. The idea behind cloud computing is the same: people and organizations can simply use it. This concept is a major development of the decade in computing.
Cloud computing is a service that lets a user sit in one location and remotely access data, software, or applications hosted in another location. Usually this is done through a web browser over a network, in most cases the internet. Browsers and internet access are available on almost all the devices people use these days. If a user wants to open a file on their device but does not have the necessary software to access it, they can use cloud computing to access that file with the help of the internet.
Cloud computing offers hundreds of thousands of services, and one of the most used is cloud storage. These services are accessible to the public throughout the globe, without requiring users to install the software on their devices. The general public can access and use these services from the cloud with the help of the internet. Many services are free up to a limit, after which users are billed for further usage. A few well-known cloud services are Dropbox, SugarSync, Amazon Cloud Drive, and Google Docs.
Finally, the availability of cloud services is not guaranteed, whether because of technical problems or because a service goes out of business. One example is Megaupload, a file-sharing service shut down by the U.S. government and the FBI over illegal file-sharing allegations. All files in its storage were deleted, and customers could not get their files back.
Service Models
Cloud Software as a Service (SaaS): Use the provider's applications running on a cloud infrastructure. They are accessible from various client devices through a thin-client interface such as a web browser. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, and storage. Examples: Google Apps, Microsoft Office 365, Petrosoft, Onlive, GT Nexus, Marketo, Casengo, TradeCard, Rally Software, Salesforce, ExactTarget, and CallidusCloud.
Cloud Platform as a Service (PaaS): Cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. Examples: AWS Elastic Beanstalk, Cloud Foundry, Heroku, Force.com, Engine Yard, Mendix, OpenShift, Google App Engine, AppScale, Windows Azure Cloud Services, OrangeScape, and Jelastic.
Cloud Infrastructure as a Service (IaaS): The cloud provider offers processing, storage, networks, and other fundamental computing resources. The consumer is able to deploy and run arbitrary software, which can include operating systems and applications. Examples: Amazon EC2, Google Compute Engine, HP Cloud, Joyent, Linode, NaviSite, Rackspace, Windows Azure, ReadySpace Cloud Services, and Internap Agile.
Deployment Models
Private Cloud: The cloud infrastructure is operated solely for one organization.
Community Cloud: The infrastructure is shared by several organizations and supports a specific community with shared concerns.
Public Cloud: The cloud infrastructure is made available to the general public.
Hybrid Cloud: The cloud infrastructure is a composition of two or more clouds.
Advantages of Cloud Computing
• Improved performance
• Better performance for large programs
• Unlimited storage capacity and computing power
• Reduced software costs
• Universal document access
• Just a computer with an internet connection is required
• Instant software updates
• No need to pay for or download an upgrade
Disadvantages of Cloud Computing
• Requires a constant internet connection
• Does not work well with low-speed connections
• Even with a fast connection, web-based applications can sometimes be slower than a similar program on your desktop PC
• Everything about the program, from the interface to the current document, has to be sent back and forth between your computer and the computers in the cloud
About Rang Technologies: Headquartered in New Jersey, Rang Technologies has spent over a decade delivering innovative solutions and top talent to help businesses get the most out of the latest technologies in their digital transformation journey.