msrlunatj · 1 month
A Guide to Linux Mint: The Linux Distribution for Productivity
1. Introduction
Introducing Linux Mint
Linux Mint is a Linux distribution based on Ubuntu, known for its focus on ease of use and accessibility. First released in 2006, Mint has gained popularity for its friendly interface and its ability to offer a user experience similar to that of traditional operating systems such as Windows.
The importance of Linux Mint in the Linux ecosystem
Linux Mint has long been one of the best-loved distributions among users looking for a smooth transition from other operating systems. Its focus on stability and ease of use makes it a popular choice for new users and for anyone who wants a reliable alternative to other operating systems.
2. History and Philosophy of Linux Mint
Origin and evolution of Linux Mint
Linux Mint was created by Clement Lefebvre as a friendlier, more accessible alternative to Ubuntu, with the goal of offering a complete, easy-to-use desktop environment from the very first boot. Over the years it has evolved to include a series of tools and features that improve the user experience.
Linux Mint's philosophy and free software
Linux Mint follows the principles of free and open source software, but unlike Debian and Ubuntu, Mint includes proprietary software and drivers to ensure a more complete user experience. Its motto, "Just Works", reflects its commitment to usability.
3. Key Features of Linux Mint
Ease of use
Linux Mint is designed to be intuitive and easy to use, with a desktop environment that eases the transition from other operating systems. It offers a familiar user experience with menus and panels that resemble those of Windows.
Package manager
APT (Advanced Package Tool) is Linux Mint's main package manager, inherited from Ubuntu. APT makes it easy to install, update and remove software from Mint's repositories.
Basic commands: sudo apt update, sudo apt install [package], sudo apt remove [package].
Supported package formats
Linux Mint is compatible with several package formats (example commands for trying them out follow this list):
.deb: The native format of Debian and Ubuntu, also used by Mint.
.snap: Snap, a universal package format developed by Canonical, can be used on Linux Mint, although snapd is blocked by default and has to be enabled manually.
.appimage: Portable files that can be run directly without installation.
.flatpak: Linux Mint can install support for Flatpak, a universal package format.
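If you want to try these formats from the terminal, the following is a minimal sketch. It assumes the Flathub remote and the org.videolan.VLC application ID purely as examples, and it assumes you actually want to undo Mint's default block on snapd (normally stored in /etc/apt/preferences.d/nosnap.pref) before using Snap:
# Flatpak: make sure Flatpak and the Flathub remote are available, then install an app
sudo apt install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.repo
flatpak install flathub org.videolan.VLC
# Snap: remove Mint's block on snapd, then install it
sudo rm /etc/apt/preferences.d/nosnap.pref
sudo apt update && sudo apt install snapd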
4. Installing Linux Mint
Minimum system requirements
Processor: 1 GHz or faster.
RAM: 2 GB minimum, 4 GB or more recommended.
Disk space: 20 GB of free disk space.
Graphics card: Support for a minimum resolution of 1024x768.
A DVD drive or USB port for installation.
Downloading and preparing the installation media
Linux Mint can be downloaded from the official website. A bootable USB drive can be prepared with tools such as Rufus or balenaEtcher.
Step-by-step installation guide
Choosing the installation environment: Linux Mint offers a simple graphical installer that guides users through the installation process.
Partition setup: The installer offers automatic and manual partitioning options to suit different needs.
Network configuration and software selection: During installation you configure the network options and can choose additional software.
First steps after installation
Updating the system: It is recommended to run sudo apt update && sudo apt upgrade after installation to make sure all software is up to date.
Installing additional drivers and software: Linux Mint can automatically detect and install additional drivers for your hardware (a terminal sketch of this follows below).
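Mint exposes this through its Driver Manager application. If you prefer the terminal, a minimal sketch, assuming the ubuntu-drivers tool that Mint inherits from its Ubuntu base is installed, looks like this:
sudo apt update
ubuntu-drivers devices          # list hardware that has additional drivers available
sudo ubuntu-drivers autoinstall # install the recommended drivers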
5. Desktop Environments in Linux Mint
Cinnamon (default)
Cinnamon offers a modern user experience with an intuitive design and plenty of customization options.
MATE
MATE provides a classic, stable desktop environment based on GNOME 2.
Xfce
Xfce is known for being lightweight and efficient, making it ideal for older systems or machines with limited resources.
6. Package Management in Linux Mint
APT: Linux Mint's package manager
Basic commands: apt-get, apt-cache, aptitude.
Installing and removing packages: sudo apt install [package], sudo apt remove [package].
Snap: universal packages
Basic Snap commands: sudo snap install [package], sudo snap remove [package].
Snap lets you install software with all of its dependencies in a single package, ensuring compatibility. (Remember that on Linux Mint snapd is blocked by default and must be enabled before these commands will work.)
Flatpak: universal packages
Basic Flatpak commands: flatpak install [repository] [package], flatpak uninstall [package].
Flatpak provides a way to distribute and run applications in isolated containers.
Linux Mint's Software Manager
Linux Mint includes the Update Manager and the Software Manager to simplify installing and updating applications.
7. Linux Mint in Business Environments and on Servers
Using Linux Mint in business environments
Linux Mint is popular on desktops thanks to its ease of use and stability. For servers, however, many companies choose Ubuntu Server or Debian for their server-specific features and support.
Maintenance and support
Linux Mint follows a release cycle based on Ubuntu's LTS releases, providing long-term updates and support.
8. Comparing Linux Mint with Other Distributions
Linux Mint vs. Ubuntu
Goal: Linux Mint offers a user experience closer to traditional operating systems, with a focus on simplicity and accessibility. Ubuntu, on the other hand, focuses on innovation and integration with Canonical's ecosystem.
Philosophy: Linux Mint includes more proprietary software and drivers for an out-of-the-box experience, while Ubuntu offers more flexibility and more frequent updates.
Linux Mint vs. Fedora
Goal: Fedora aims to deliver the latest Linux technologies, while Linux Mint focuses on a stable, familiar user experience.
Philosophy: Fedora prioritizes the integration of new technologies, while Mint takes a more conservative approach in favour of stability and familiarity.
Linux Mint vs. Arch Linux
Goal: Arch Linux is designed for advanced users who want full control over their system, while Linux Mint focuses on ease of use and an out-of-the-box experience.
Philosophy: Arch follows the KISS philosophy and a rolling release model, while Mint provides stable, preconfigured releases ready for immediate use.
9. Conclusion
Linux Mint as a friendly and productive choice
Linux Mint is an excellent choice for anyone looking for an easy-to-use Linux distribution with a familiar user experience. Its focus on stability and accessibility makes it a popular option for users who want a painless transition from other operating systems.
Final recommendations for anyone considering Linux Mint
Linux Mint is ideal for those who want a reliable, accessible operating system with a friendly desktop environment and a wide range of preinstalled tools and applications.
10. Frequently Asked Questions (FAQ)
Is Linux Mint suitable for beginners?
Yes, Linux Mint is very well suited to beginners thanks to its friendly interface and ease of use.
How do I update my Linux Mint system?
Running sudo apt update && sudo apt upgrade will keep your system up to date.
Is Linux Mint a good choice for servers?
Although Linux Mint is more popular on desktops, for servers many companies prefer Ubuntu Server or Debian.
punisheddonjuan · 8 months
How I ditched streaming services and learned to love Linux: A step-by-step guide to building your very own personal media streaming server (V2.0: REVISED AND EXPANDED EDITION)
This is a revised, corrected and expanded version of my tutorial on setting up a personal media server that previously appeared on my old blog (donjuan-auxenfers). I expect that that post is still making the rounds (hopefully with my addendum on modifying group share permissions in Ubuntu to circumvent 0x8007003B "Unexpected Network Error" messages in Windows when transferring files) but I have no way of checking. Anyway this new revised version of the tutorial corrects one or two small errors I discovered when rereading what I wrote, adds links to all products mentioned and is just more polished generally. I also expanded it a bit, pointing more adventurous users toward programs such as Sonarr/Radarr/Lidarr and Overseerr which can be used for automating user requests and media collection.
So then, what is this tutorial? This is a tutorial on how to build and set up your own personal media server using Ubuntu as an operating system and Plex (or Jellyfin) to not only manage your media, but to also stream that media to your devices both at home and abroad anywhere in the world where you have an internet connection. This is a tutorial about how, by building a personal media server and stuffing it full of films, television shows and music that you acquired through indiscriminate and voracious media piracy legal methods like ripping your own physical media to disk, you'll be free to completely ditch paid streaming services. No more will you have to pay for Disney+, Netflix, HBOMAX, Hulu, Amazon Prime, Peacock, CBS All Access, Paramount+, Crave or any other streaming service that is not named the Criterion Channel (which is actually good). If you want to watch your favourite films and television shows, you'll have your own custom service that only features things that you want to see, and where you have control over your own files and how they're delivered to you. And for music fans out there, both Jellyfin and Plex support music streaming, meaning you can even ditch music streaming services. Goodbye Spotify, Youtube Music, Tidal and Apple Music, welcome back unreasonably large MP3 (or FLAC) collections.
On the hardware front, I'm going to offer a few options catered towards differing budgets and media library sizes. Getting a media server up and running using this guide will cost you anywhere from $450 CDN/$325 USD at the entry level to $1500 CDN/$1100 USD at the high end. My own server was priced closer to the higher figure, with much of that cost being hard drives. If that seems excessive, consider for a moment, maybe you have a roommate, a close friend, or a family member who would be willing to chip in a few bucks towards your little project provided they get a share of the bounty. This is how my server was funded. It might also be worth thinking about cost over time, how much you spend yearly on subscriptions vs. a one time cost of setting up a server. Additionally there's just the joy of being able to scream "fuck you" at all those show cancelling, movie deleting, hedge fund vampire CEOs who run the studios through denying them your money. Drive a stake through David Zaslav's heart.
On the software side I will walk you step-by-step through installing Ubuntu as your server's operating system, configuring your storage as a RAIDz array with ZFS, sharing your zpool to Windows with Samba, running a remote connection between your server and your Windows PC, and then a little about getting started with Plex/Jellyfin. Every terminal command you will need to input will be provided, and I even share a custom bash script that will make used vs. available drive space on your server display correctly in Windows.
If you have a different preferred flavour of Linux (Arch, Manjaro, Redhat, Fedora, Mint, OpenSUSE, CentOS, Slackware etc. et. al.) and are aching to tell me off for being basic and using Ubuntu, this tutorial is not for you. The sort of person with a preferred Linux distro is the sort of person who can do this sort of thing in their sleep. Also I don't care. This tutorial is intended for the average home computer user. This is also why we’re not using a more exotic home server solution like running everything through Docker Containers and managing it through a dashboard like Homarr or Heimdall. While such solutions are fantastic and can be very easy to maintain once you have it all set up, wrapping your brain around Docker is a whole thing in and of itself. If you do follow this tutorial and had fun putting everything together, then I would encourage you to return in a year’s time, do your research and set up everything with Docker Containers.
Lastly, this is a tutorial aimed at Windows users. Although I was a daily user of OS X for many years (roughly 2008-2023) and I've dabbled quite a bit with various Linux distributions (mostly Ubuntu and Manjaro), my primary OS these days is Windows 11. Many things in this tutorial will still be applicable to Mac users, but others (e.g. setting up shares) you will have to look up for yourself. I doubt it would be difficult to do so.
Nothing in this tutorial will require feats of computing expertise. All you will need is a basic computer literacy (i.e. an understanding of what a filesystem and directory are, and a degree of comfort in the settings menu) and a willingness to learn a thing or two. While this guide may look overwhelming at first glance, it is only because I want to be as thorough as possible. I want you to understand exactly what it is you're doing, I don't want you to just blindly follow steps. If you half-way know what you’re doing, you will be much better prepared if you ever need to troubleshoot.
Honestly, once you have all the hardware ready it shouldn't take more than a weekend to get everything up and running.
(This tutorial is just shy of seven thousand words long so the rest is under the cut.)
Step One: Choosing Your Hardware
Linux is a lightweight operating system; depending on the distribution there's close to no bloat. There are recent distributions available at this very moment that will run perfectly fine on a fourteen year old i3 with 4GB of RAM. Moreover, running Plex or Jellyfin isn't resource intensive in 90% of use cases. All this is to say, we don't require an expensive or powerful computer. This means that there are several options available: 1) use an old computer you already have sitting around but aren't using 2) buy a used workstation from eBay, or what I believe to be the best option, 3) order an N100 Mini-PC from AliExpress or Amazon.
Note: If you already have an old PC sitting around that you’ve decided to use, fantastic, move on to the next step.
When weighing your options, keep a few things in mind: the number of people you expect to be streaming simultaneously at any one time, the resolution and bitrate of your media library (4k video takes a lot more processing power than 1080p) and most importantly, how many of those clients are going to be transcoding at any one time. Transcoding is what happens when the playback device does not natively support direct playback of the source file. This can happen for a number of reasons, such as the playback device's native resolution being lower than the file's internal resolution, or because the source file was encoded in a video codec unsupported by the playback device.
Ideally we want any transcoding to be performed by hardware. This means we should be looking for a computer with an Intel processor with Quick Sync. Quick Sync is a dedicated core on the CPU die designed specifically for video encoding and decoding. This specialized hardware makes for highly efficient transcoding both in terms of processing overhead and power draw. Without these Quick Sync cores, transcoding must be brute forced through software. This takes up much more of a CPU's processing power and requires much more energy. But not all Quick Sync cores are created equal and you need to keep this in mind if you've decided either to use an old computer or to shop for a used workstation on eBay.
Any Intel processor from second generation Core (Sandy Bridge circa 2011) onwards has Quick Sync cores. It's not until 6th gen (Skylake), however, that the cores support the H.265 HEVC codec. Intel's 10th gen (Comet Lake) processors introduce support for 10bit HEVC and HDR tone mapping. And the recent 12th gen (Alder Lake) processors brought with them hardware AV1 decoding. As an example, while an 8th gen (Coffee Lake) i5-8500 will be able to hardware transcode a H.265 encoded file, it will fall back to software transcoding if given a 10bit H.265 file. If you've decided to use that old PC or to look on eBay for an old Dell Optiplex keep this in mind.
Note 1: The price of old workstations varies wildly and fluctuates frequently. If you get lucky and go shopping shortly after a workplace has liquidated a large number of their workstations you can find deals for as low as $100 on a barebones system, but generally an i5-8500 workstation with 16gb RAM will cost you somewhere in the area of $260 CDN/$200 USD.
Note 2: The AMD equivalent to Quick Sync is called Video Core Next, and while it's fine, it's not as efficient and not as mature a technology. It was only introduced with the first generation Ryzen CPUs and it only got decent with their newest CPUs, we want something cheap.
Alternatively you could forgo having to keep track of what generation of CPU is equipped with Quick Sync cores that feature support for which codecs, and just buy an N100 mini-PC. For around the same price or less of a used workstation you can pick up a Mini-PC with an Intel N100 processor. The N100 is a four-core processor based on the 12th gen Alder Lake architecture and comes equipped with the latest revision of the Quick Sync cores. These little processors offer astounding hardware transcoding capabilities for their size and power draw. Otherwise they perform equivalent to an i5-6500, which isn't a terrible CPU. A friend of mine uses an N100 machine as a dedicated retro emulation gaming system and it does everything up to 6th generation consoles just fine. The N100 is also a remarkably efficient chip, it sips power. In fact, the difference between running one of these and an old workstation could work out to hundreds of dollars a year in energy bills depending on where you live.
You can find these Mini-PCs all over Amazon or for a little cheaper on AliExpress. They range in price from $170 CDN/$125 USD for a no name N100 with 8GB RAM to $280 CDN/$200 USD for a Beelink S12 Pro with 16GB RAM. The brand doesn't really matter, they're all coming from the same three factories in Shenzen, go for whichever one fits your budget or has features you want. 8GB RAM should be enough, Linux is lightweight and Plex only calls for 2GB RAM. 16GB RAM might result in a slightly snappier experience, especially with ZFS. A 256GB SSD is more than enough for what we need as a boot drive, but going for a bigger drive might allow you to get away with things like creating preview thumbnails for Plex, but it’s up to you and your budget.
The Mini-PC I wound up buying was a Firebat AK2 Plus with 8GB RAM and a 256GB SSD. It looks like this:
[Image: the Firebat AK2 Plus Mini-PC]
Note: Be forewarned that if you decide to order a Mini-PC from AliExpress, note the type of power adapter it ships with. The mini-PC I bought came with an EU power adapter and I had to supply my own North American power supply. Thankfully this is a minor issue as barrel plug 30W/12V/2.5A power adapters are plentiful and can be had for $10.
Step Two: Choosing Your Storage
Storage is the most important part of our build. It is also the most expensive. Thankfully it’s also the most easily upgrade-able down the line.
For people with a smaller media collection (4TB to 8TB), a more limited budget, or who will only ever have two simultaneous streams running, I would say that the most economical course of action would be to buy a USB 3.0 8TB external HDD. Something like this one from Western Digital or this one from Seagate. One of these external drives will cost you in the area of $200 CDN/$140 USD. Down the line you could add a second external drive or replace it with a multi-drive RAIDz set up such as detailed below.
If a single external drive is the path for you, move on to step three.
For people with larger media libraries (12TB+), who prefer media in 4k, or who care about data redundancy, the answer is a RAID array featuring multiple HDDs in an enclosure.
Note: If you are using an old PC or used workstation as your server and have the room for at least three 3.5" drives, and as many open SATA ports on your motherboard, you won't need an enclosure, just install the drives into the case. If your old computer is a laptop or doesn't have room for more internal drives, then I would suggest an enclosure.
The minimum number of drives needed to run a RAIDz array is three, and seeing as RAIDz is what we will be using, you should be looking for an enclosure with three to five bays. I think that four disks makes for a good compromise for a home server. Regardless of whether you go for a three, four, or five bay enclosure, do be aware that in a RAIDz1 array the space equivalent of one of the drives will be dedicated to parity, leaving usable space at a fraction of 1 − 1/n of the raw capacity, i.e. in a four bay enclosure equipped with four 12TB drives configured in a RAIDz1 array we would be left with a total of 36TB of usable space (48TB raw size). The reason for why we might sacrifice storage space in such a manner will be explained in the next section.
A four bay enclosure will cost somewhere in the area of $200 CDN/$140 USD. You don't need anything fancy, we don't need anything with hardware RAID controls (RAIDz is done entirely in software) or even USB-C. An enclosure with USB 3.0 will perform perfectly fine. Don't worry too much about USB speed bottlenecks. A mechanical HDD will be limited by the speed of its mechanism long before it will be limited by the speed of a USB connection. I've seen decent looking enclosures from TerraMaster, Yottamaster, Mediasonic and Sabrent.
When it comes to selecting the drives, as of this writing, the best value (dollar per gigabyte) are those in the range of 12TB to 20TB. I settled on 12TB drives myself. If 12TB to 20TB drives are out of your budget, go with what you can afford, or look into refurbished drives. I'm not sold on the idea of refurbished drives but many people swear by them.
When shopping for hard drives, search for drives designed specifically for NAS use. Drives designed for NAS use typically have better vibration dampening and are designed to be active 24/7. They will also often make use of CMR (conventional magnetic recording) as opposed to SMR (shingled magnetic recording). This nets them a sizable read/write performance bump over typical desktop drives. Seagate Ironwolf and Toshiba NAS are both well regarded brands when it comes to NAS drives. I would avoid Western Digital Red drives at this time. WD Reds were a go to recommendation up until earlier this year, when it was revealed that they feature firmware that will throw up false SMART warnings at the three year mark telling you to replace the drive, often when there is nothing at all wrong with it and it will likely be good for another six, seven, or more years.
Step Three: Installing Linux
For this step you will need a USB thumbdrive of at least 6GB in capacity, an .ISO of Ubuntu, and a way to make that thumbdrive bootable media.
First download a copy of Ubuntu desktop (for best performance we could download the Server release, but for new Linux users I would recommend against the server release. The server release is strictly command line interface only, and having a GUI is very helpful for most people. Not many people are wholly comfortable doing everything through the command line, I'm certainly not one of them, and I grew up with DOS 6.0.) 22.04.3 Jammy Jellyfish is the current Long Term Support release, this is the one to get.
Download the .ISO and then download and install balenaEtcher on your Windows PC. BalenaEtcher is an easy to use program for creating bootable media, you simply insert your thumbdrive, select the .ISO you just downloaded, and it will create a bootable installation media for you.
Once you've made a bootable media and you've got your Mini-PC (or you old PC/used workstation) in front of you, hook it directly into your router with an ethernet cable, and then plug in the HDD enclosure, a monitor, a mouse and a keyboard. Now turn that sucker on and hit whatever key gets you into the BIOS (typically ESC, DEL or F2). If you’re using a Mini-PC check to make sure that the P1 and P2 power limits are set correctly, my N100's P1 limit was set at 10W, a full 20W under the chip's power limit. Also make sure that the RAM is running at the advertised speed. My Mini-PC’s RAM was set at 2333Mhz out of the box when it should have been 3200Mhz. Once you’ve done that, key over to the boot order and place the USB drive first in the boot order. Then save the BIOS settings and restart.
After you restart you'll be greeted by Ubuntu's installation screen. Installing Ubuntu is really straightforward, select the "minimal" installation option, as we won't need anything on this computer except for a browser (Ubuntu comes preinstalled with Firefox) and Plex Media Server/Jellyfin Media Server. Also remember to delete and reformat that Windows partition! We don't need it.
Step Four: Installing ZFS and Setting Up the RAIDz Array
Note: If you opted for just a single external HDD skip this step and move onto setting up a Samba share.
Once Ubuntu is installed it's time to configure our storage by installing ZFS to build our RAIDz array. ZFS is a "next-gen" file system that is both massively flexible and massively complex. It's capable of snapshot backups and self-healing error correction, and ZFS pools can be configured with drives operating in a supplemental manner alongside the storage vdev (e.g. fast cache, dedicated secondary intent log, hot swap spares etc.). It's also a file system very amenable to fine tuning. Block and sector size are adjustable to use case and you're afforded the option of different methods of inline compression. If you'd like a very detailed overview and explanation of its various features and tips on tuning a ZFS array check out these articles from Ars Technica. For now we're going to ignore all these features and keep it simple, we're going to pull our drives together into a single vdev running in RAIDz which will be the entirety of our zpool, no fancy cache drive or SLOG.
Open up the terminal and type the following commands:
sudo apt update
then
sudo apt install zfsutils-linux
This will install the ZFS utility. Verify that it's installed with the following command:
zfs --version
Now, it's time to check that the HDDs we have in the enclosure are healthy, running, and recognized. We also want to find out their device IDs and take note of them:
sudo fdisk -l
Note: You might be wondering why some of these commands require "sudo" in front of them while others don't. "Sudo" is short for "super user do”. When and where "sudo" is used has to do with the way permissions are set up in Linux. Only the "root" user has the access level to perform certain tasks in Linux. As a matter of security and safety regular user accounts are kept separate from the "root" user. It's not advised (or even possible) to boot into Linux as "root" with most modern distributions. Instead by using "sudo" our regular user account is temporarily given the power to do otherwise forbidden things. Don't worry about it too much at this stage, but if you want to know more check out this introduction.
If everything is working you should get a list of the various drives detected along with their device IDs which will look like this: /dev/sdc. You can also check the device IDs of the drives by opening the disk utility app. Jot these IDs down as we'll need them for our next step, creating our RAIDz array.
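If you'd like a second opinion on which drive is which, a couple of other standard utilities show the same information. This is purely optional and the output will of course differ on your system:
lsblk -o NAME,SIZE,MODEL,SERIAL
# stable identifiers that map back to the /dev/sdX names
ls -l /dev/disk/by-id/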
RAIDz is similar to RAID-5 in that instead of striping your data over multiple disks, exchanging redundancy for speed and available space (RAID-0), or mirroring your data by writing two copies of every piece (RAID-1), it instead writes parity blocks across the disks in addition to striping, this provides a balance of speed, redundancy and available space. If a single drive fails, the parity blocks on the working drives can be used to reconstruct the entire array as soon as a replacement drive is added.
Additionally, RAIDz improves over some of the common RAID-5 flaws. It's more resilient and capable of self healing, as it is capable of automatically checking for errors against a checksum. It's more forgiving in this way, and it's likely that you'll be able to detect when a drive is dying well before it fails. A RAIDz array can survive the loss of any one drive.
Note: While RAIDz is indeed resilient, if a second drive fails during the rebuild, you're fucked. Always keep backups of things you can't afford to lose. This tutorial, however, is not about proper data safety.
To create the pool, use the following command:
sudo zpool create "zpoolnamehere" raidz "device IDs of drives we're putting in the pool"
For example, let's creatively name our zpool "mypool". This pool will consist of four drives which have the device IDs: sdb, sdc, sdd, and sde. The resulting command will look like this:
sudo zpool create mypool raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
If as an example you bought five HDDs and decided you wanted more redundancy, dedicating two drives to this purpose, we would modify the command to "raidz2" and the command would look something like the following:
sudo zpool create mypool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
An array configured like this is known as RAIDz2 and is able to survive two disk failures.
Once the zpool has been created, we can check its status with the command:
zpool status
Or more concisely with:
zpool list
The nice thing about ZFS as a file system is that a pool is ready to go immediately after creation. If we were to set up a traditional RAID-5 array using mdadm, we'd have to sit through a potentially hours long process of reformatting and partitioning the drives. Instead we're ready to go right out of the gate.
The zpool should be automatically mounted to the filesystem after creation, check on that with the following:
df -hT | grep zfs
Note: If your computer ever loses power suddenly, say in event of a power outage, you may have to re-import your pool. In most cases, ZFS will automatically import and mount your pool, but if it doesn’t and you can't see your array, simply open the terminal and type sudo zpool import -a.
By default a zpool is mounted at /"zpoolname". The pool should be under our ownership but let's make sure with the following command:
sudo chown -R "yourlinuxusername" /"zpoolname"
Note: Changing file and folder ownership with "chown" and file and folder permissions with "chmod" are essential commands for much of the admin work in Linux, but we won't be dealing with them extensively in this guide. If you'd like a deeper tutorial and explanation you can check out these two guides: chown and chmod.
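Purely as an illustration of what those two commands look like in practice (the pool and folder names below are the placeholder ones used throughout this guide, not something you need to run):
# give your user, and their group, ownership of the shared folder
sudo chown -R "yourlinuxusername":"yourlinuxusername" /"zpoolname"/"foldername"
# owner and group can read/write, everyone else can only read
sudo chmod -R 775 /"zpoolname"/"foldername"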
You can access the zpool file system through the GUI by opening the file manager (the Ubuntu default file manager is called Nautilus) and clicking on "Other Locations" on the sidebar, then entering the Ubuntu file system and looking for a folder with your pool's name. Bookmark the folder on the sidebar for easy access.
Your storage pool is now ready to go. Assuming that we already have some files on our Windows PC we want to copy over, we're going to need to install and configure Samba to make the pool accessible in Windows.
Step Five: Setting Up Samba/Sharing
Samba is what's going to let us share the zpool with Windows and allow us to write to it from our Windows machine. First let's install Samba with the following commands:
sudo apt-get update
then
sudo apt-get install samba
Next create a password for Samba.
sudo smbpasswd -a "yourlinuxusername"
It will then prompt you to create a password. Just reuse your Ubuntu user password for simplicity's sake.
Note: if you're using just a single external drive replace the zpool location in the following commands with wherever it is your external drive is mounted, for more information see this guide on mounting an external drive in Ubuntu.
After you've created a password we're going to create a shareable folder in our pool with this command
mkdir /"zpoolname"/"foldername"
Now we're going to open the smb.conf file and make that folder shareable. Enter the following command.
sudo nano /etc/samba/smb.conf
This will open the .conf file in nano, the terminal text editor program. Now at the end of smb.conf add the following entry:
["foldername"]
path = /"zpoolname"/"foldername"
available = yes
valid users = "yourlinuxusername"
read only = no
writable = yes
browseable = yes
guest ok = no
Ensure that there are no line breaks between the lines and that there's a space on both sides of the equals sign. Our next step is to allow Samba traffic through the firewall:
sudo ufw allow samba
Finally restart the Samba service:
sudo systemctl restart smbd
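Optionally, before or after restarting, you can ask Samba to sanity-check the configuration file for syntax errors using its bundled testparm utility:
testparm /etc/samba/smb.conf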
At this point we'll be able to access the pool, browse its contents, and read and write to it from Windows. But there's one more thing left to do, Windows doesn't natively support the ZFS file system and will read the used/available/total space in the pool incorrectly. Windows will read available space as total drive space, and all used space as null. This leads to Windows only displaying a dwindling amount of "available" space as the drives are filled. We can fix this! Functionally this doesn't actually matter, we can still write and read to and from the disk, it just makes it difficult to tell at a glance the proportion of used/available space, so this is an optional step but one I recommend (this step is also unnecessary if you're just using a single external drive). What we're going to do is write a little shell script in bash. Open nano with the terminal with the command:
nano
Now insert the following code:
#!/bin/bash
CUR_PATH=`pwd`
ZFS_CHECK_OUTPUT=$(zfs get type $CUR_PATH 2>&1 > /dev/null) > /dev/null
if [[ $ZFS_CHECK_OUTPUT == *not\ a\ ZFS* ]]
then
IS_ZFS=false
else
IS_ZFS=true
fi
if [[ $IS_ZFS = false ]]
then
df $CUR_PATH | tail -1 | awk '{print $2" "$4}'
else
USED=$((`zfs get -o value -Hp used $CUR_PATH` / 1024)) > /dev/null
AVAIL=$((`zfs get -o value -Hp available $CUR_PATH` / 1024)) > /dev/null
TOTAL=$(($USED+$AVAIL)) > /dev/null
echo $TOTAL $AVAIL
fi
Save the script as "dfree.sh" to /home/"yourlinuxusername" then make it executable with this command:
sudo chmod 774 dfree.sh
Now open smb.conf with sudo again:
sudo nano /etc/samba/smb.conf
Now add this entry to the top of the configuration file to direct Samba to use the results of our script when Windows asks for a reading on the pool's used/available/total drive space:
[global]
dfree command = /home/"yourlinuxusername"/dfree.sh
Save the changes to smb.conf and then restart Samba again with the terminal:
sudo systemctl restart smbd
Now there’s one more thing we need to do to fully set up the Samba share, and that’s to modify a hidden group permission. In the terminal window type the following command:
sudo usermod -a -G sambashare "yourlinuxusername"
Then restart samba again:
sudo systemctl restart smbd
If we don't do this last step, everything will appear to work fine, and you will even be able to see and map the drive from Windows and even begin transferring files, but you'd soon run into a lot of frustration, as every ten minutes or so a file would fail to transfer and you would get a window announcing "0x8007003B Unexpected Network Error". This window would require your manual input to continue the transfer with the file next in the queue. And at the end it would reattempt to transfer whichever files failed the first time around. 99% of the time they'll go through that second try, but this is still all a major pain in the ass. Especially if you've got a lot of data to transfer or you want to step away from the computer for a while.
It turns out samba can act a little weirdly with the higher read/write speeds of RAIDz arrays and transfers from Windows, and will intermittently crash and restart itself if this group option isn’t changed. Inputting the above command will prevent you from ever seeing that window.
The last thing we're going to do before switching over to our Windows PC is grab the IP address of our Linux machine. Enter the following command:
hostname -I
This will spit out this computer's IP address on the local network (it will look something like 192.168.0.x), write it down. It might be a good idea once you're done here to go into your router settings and reserve that IP for your Linux system in the DHCP settings. Check the manual for your specific model router on how to access its settings, typically it can be accessed by opening a browser and typing http://192.168.0.1 in the address bar, but your router may be different.
Okay we’re done with our Linux computer for now. Get on over to your Windows PC, open File Explorer, right click on Network and click "Map network drive". Select Z: as the drive letter (you don't want to map the network drive to a letter you could conceivably be using for other purposes) and enter the IP of your Linux machine and location of the share like so: \\"LINUXCOMPUTERLOCALIPADDRESSGOESHERE"\"zpoolnamegoeshere"\. Windows will then ask you for your username and password, enter the ones you set earlier in Samba and you're good. If you've done everything right it should look something like this:
[Image: the mapped network share in Windows File Explorer]
You can now start moving media over from Windows to the share folder. It's a good idea to have a hard line running to all machines. Moving files over Wi-Fi is going to be tortuously slow, the only thing that’s going to make the transfer time tolerable (hours instead of days) is a solid wired connection between both machines and your router.
Step Six: Setting Up Remote Desktop Access to Your Server
After the server is up and going, you’ll want to be able to access it remotely from Windows. Barring serious maintenance/updates, this is how you'll access it most of the time. On your Linux system open the terminal and enter:
sudo apt install xrdp
Then:
sudo systemctl enable xrdp
Once it's finished installing, open “Settings” on the sidebar and turn off "automatic login" in the User category. Then log out of your account. Attempting to remotely connect to your Linux computer while you’re logged in will result in a black screen!
Now get back on your Windows PC, open search and look for "RDP". A program called "Remote Desktop Connection" should pop up, open this program as an administrator by right-clicking and selecting “run as an administrator”. You’ll be greeted with a window. In the field marked “Computer” type in the IP address of your Linux computer. Press connect and you'll be greeted with a new window and prompt asking for your username and password. Enter your Ubuntu username and password here.
If everything went right, you’ll be logged into your Linux computer. If the performance is sluggish, adjust the display options. Lowering the resolution and colour depth do a lot to make the interface feel snappier.
Remote access is how we're going to be using our Linux system from now on, barring edge cases like needing to get into the BIOS or upgrading to a new version of Ubuntu. Everything else from performing maintenance like a monthly zpool scrub (this is important!!!) to checking zpool status and updating software can all be done remotely.
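For reference, a scrub and a follow-up status check are each a single command (using the example pool name from earlier). If you'd like to automate the monthly scrub, the last line is a sketch of a root crontab entry that runs it at 3am on the first of the month; the path to zpool may differ on your system, so check it with "which zpool" first:
sudo zpool scrub mypool
zpool status mypool
# example root crontab entry (edit with: sudo crontab -e)
0 3 1 * * /usr/sbin/zpool scrub mypool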
This is how my server lives its life now, happily humming and chirping away on the floor next to the couch in a corner of the living room.
Step Seven: Plex Media Server/Jellyfin
Okay we've got all the ground work finished and our server is almost up and running. We've got Ubuntu up and running, our storage array is primed, we've set up remote connections and sharing, and maybe we've moved over some of our favourite movies and TV shows.
Now we need to decide on the media server software to use which will stream our media to us and organize our library. For most people I’d recommend Plex. It just works 99% of the time. That said, Jellyfin has a lot to recommend it by too, even if it is rougher around the edges. Some people run both simultaneously, it’s not that big of an extra strain. I do recommend doing a little bit of your own research into the features each platform offers, but as a quick run down, consider some of the following points:
Plex is closed source and is funded through PlexPass purchases while Jellyfin is open source and entirely user driven. This means a number of things: for one, Plex requires you to purchase a "PlexPass" (purchased as a one time lifetime fee $159.99 CDN/$120 USD or paid for on a monthly or yearly subscription basis) in order to access certain features, like hardware transcoding (and we want hardware transcoding) or automated intro/credits detection and skipping, while Jellyfin offers those features for free. On the other hand, Plex supports a lot more devices than Jellyfin and updates more frequently. That said Jellyfin's Android/iOS apps are completely free, while the Plex Android/iOS apps must be activated for a one time cost of $6 CDN/$5 USD. But that $6 fee gets you a mobile app that is much more functional and features a unified UI across Android and iOS platforms, the Plex mobile apps are simply a more polished experience. The Jellyfin apps are a bit of a mess and the iOS and Android versions are very different from each other.
Jellyfin’s actual media player itself is more fully featured than Plex's, but on the other hand Jellyfin's UI, library customization and automatic media tagging really pale in comparison to Plex. Streaming your music library is free through both Jellyfin and Plex, but Plex offers the PlexAmp app for dedicated music streaming which boasts a number of fantastic features, unfortunately some of those fantastic features require a PlexPass. If your internet is down, Jellyfin can still do local streaming, while Plex can fail to play files. Jellyfin has a slew of neat niche features like support for Comic Book libraries with the .cbz/.cbt file types, but then Plex offers some free ad-supported TV and films, they even have a free channel that plays nothing but Classic Doctor Who.
Ultimately it's up to you, I settled on Plex because although some features are pay-walled, it just works. It's more reliable and easier to use, and a one-time fee is much easier to swallow than a subscription. I do also need to mention that Jellyfin does take a little extra bit of tinkering to get going in Ubuntu, you’ll have to set up process permissions, so if you're more tolerant to tinkering, Jellyfin might be up your alley and I’ll trust that you can follow their installation and configuration guide. For everyone else, I recommend Plex.
So pick your poison: Plex or Jellyfin.
Note: The easiest way to download and install either of these packages in Ubuntu is through Snap Store.
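If you'd rather avoid the Snap Store for Plex, there is also an official apt repository. The commands below are only a sketch of that route; the key and repository URLs shown are the ones Plex has published for some time, but verify them against Plex's own download instructions before running anything:
curl -fsSL https://downloads.plex.tv/plex-keys/PlexSign.key | sudo gpg --dearmor -o /usr/share/keyrings/plex.gpg
echo "deb [signed-by=/usr/share/keyrings/plex.gpg] https://downloads.plex.tv/repo/deb public main" | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt update && sudo apt install plexmediaserver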
After you've installed one (or both), opening either app will launch a browser window into the browser version of the app allowing you to set all the options server side.
The process of creating media libraries is essentially the same in both Plex and Jellyfin. You create separate libraries for Television, Movies, and Music and add the folders which contain the respective types of media to their respective libraries. The only difficult or time consuming aspect is ensuring that your files and folders follow the appropriate naming conventions:
Plex naming guide for Movies
Plex naming guide for Television
Jellyfin follows the same naming rules but I find their media scanner to be a lot less accurate and forgiving than Plex. Once you've selected the folders to be scanned the service will scan your files, tagging everything and adding metadata. Although I do find Plex more accurate, it can still erroneously tag some things and you might have to manually clean up some tags in a large library. (When I initially created my library it tagged the 1963-1989 Doctor Who as some Korean soap opera and I needed to manually select the correct match after which everything was tagged normally.) It can also be a bit testy with anime (especially OVAs), so be sure to check TVDB to ensure that you have your files and folders structured and named correctly. If something is not showing up at all, double check the name.
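As a purely illustrative sketch (the titles and folder names here are just examples), a library layout that both scanners handle well looks something like this:
Movies/
    Blade Runner (1982)/
        Blade Runner (1982).mkv
TV Shows/
    Doctor Who (1963)/
        Season 01/
            Doctor Who (1963) - s01e01 - An Unearthly Child.mkv
Music/
    Artist Name/
        Album Name/
            01 - Track Name.flac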
Once that's done, organizing and customizing your library is easy. You can set up collections, grouping items together to fit a theme or collect together all the entries in a franchise. You can make playlists, and add custom artwork to entries. It's fun setting up collections with posters to match, there are even several websites dedicated to help you do this like PosterDB. As an example, below are two collections in my library, one collecting all the entries in a franchise, the other follows a theme.
My Star Trek collection, featuring all eleven television series, and thirteen films.
My Best of the Worst collection, featuring sixty-nine films previously showcased on RedLetterMedia’s Best of the Worst. They’re all absolutely terrible and I love them.
As for settings, ensure you've got Remote Access going, it should work automatically and be sure to set your upload speed after running a speed test. In the library settings set the database cache to 2000MB to ensure a snappier and more responsive browsing experience, and then check that playback quality is set to original/maximum. If you’re severely bandwidth limited on your upload and have remote users, you might want to limit the remote stream bitrate to something more reasonable, just as a note of comparison Netflix’s 1080p bitrate is approximately 5Mbps, although almost anyone watching through a chromium based browser is streaming at 720p and 3mbps. Other than that you should be good to go. For actually playing your files, there's a Plex app for just about every platform imaginable. I mostly watch television and films on my laptop using the Windows Plex app, but I also use the Android app which can broadcast to the chromecast connected to the TV. Both are fully functional and easy to navigate, and I can also attest to the OS X version being equally functional.
Part Eight: Finding Media
Now, this is not really a piracy tutorial, there are plenty of those out there. But if you’re unaware, BitTorrent is free and pretty easy to use, just pick a client (qBittorrent is the best) and go find some public trackers to peruse. Just know now that all the best trackers are private and invite only, and that they can be exceptionally difficult to get into. I’m already on a few, and even then, some of the best ones are wholly out of my reach.
If you decide to take the left hand path and turn to Usenet you’ll have to pay. First you’ll need to sign up with a provider like Newshosting or EasyNews for access to Usenet itself, and then to actually find anything you’re going to need to sign up with an indexer like NZBGeek or NZBFinder. There are dozens of indexers, and many people cross post between them, but for more obscure media it’s worth checking multiple. You’ll also need a binary downloader like SABnzbd. That caveat aside, Usenet is faster, bigger, older, less traceable than BitTorrent, and altogether slicker. I honestly prefer it, and I'm kicking myself for taking this long to start using it because I was scared off by the price. I’ve found so many things on Usenet that I had sought in vain elsewhere for years, like a 2010 Italian film about a massacre perpetrated by the SS that played the festival circuit but never received a home media release; some absolute hero uploaded a rip of a festival screener DVD to Usenet, that sort of thing. Anyway, figure out the rest of this shit on your own and remember to use protection, get yourself behind a VPN, use a SOCKS5 proxy with your BitTorrent client, etc.
On the legal side of things, if you’re around my age, you (or your family) probably have a big pile of DVDs and Blu-Rays sitting around unwatched and half forgotten. Why not do a bit of amateur media preservation, rip them and upload them to your server for easier access? (Your tools for this are going to be Handbrake to do the ripping and AnyDVD to break any encryption.) I went to the trouble of ripping all my SCTV DVDs (five box sets worth) because none of it is on streaming nor could it be found on any pirate source I tried. I’m glad I did, forty years on it’s still one of the funniest shows to ever be on TV.
Part Nine/Epilogue: Sonarr/Radarr/Lidarr and Overseerr
There are a lot of ways to automate your server for better functionality or to add features you and other users might find useful. Sonarr, Radarr, and Lidarr are a part of a suite of “Servarr” services (there’s also Readarr for books and Whisparr for adult content) that allow you to automate the collection of new episodes of TV shows (Sonarr), new movie releases (Radarr) and music releases (Lidarr). They hook in to your BitTorrent client or Usenet binary newsgroup downloader and crawl your preferred Torrent trackers and Usenet indexers, alerting you to new releases and automatically grabbing them. You can also use these services to manually search for new media, and even replace/upgrade your existing media with better quality uploads. They’re really a little tricky to set up on a bare metal Ubuntu install (ideally you should be running them in Docker Containers), and I won’t be providing a step by step on installing and running them, I’m simply making you aware of their existence.
The other bit of kit I want to make you aware of is Overseerr which is a program that scans your Plex media library and will serve recommendations based on what you like. It also allows you and your users to request specific media. It can even be integrated with Sonarr/Radarr/Lidarr so that fulfilling those requests is fully automated.
And you're done. It really wasn't all that hard. Enjoy your media. Enjoy the control you have over that media. And be safe in the knowledge that no hedgefund CEO motherfucker who hates the movies but who is somehow in control of a major studio will be able to disappear anything in your library as a tax write-off.
tipsonunix · 2 years
Install VSCodium 1.70.0 on Ubuntu / Fedora & Alma Linux
This tutorial will be helpful for beginners to install VSCodium 1.70.0 on Ubuntu 22.04 LTS, Linux Mint 21, Pop OS 22.04 LTS, Fedora 36, Alma Linux 9 and Rocky Linux 9. VSCodium is a community-driven project that provides free/libre, open source binaries of the current VS Code release. VSCodium is not a fork. It automatically builds the Microsoft vscode repository into freely licensed binaries with a community-driven default configuration.
Install VSCodium 1.70.0 on Ubuntu / Linux Mint
Step 1: Make sure the system is up to date
sudo apt update && sudo apt upgrade
Step 2: Add the VSCodium GPG key
wget -qO - https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg | gpg --dearmor | sudo dd of=/usr/share/keyrings/vscodium-archive-keyring.gpg
Step 3: Add the repository
echo 'deb https://download.vscodium.com/debs vscodium main' | sudo tee /etc/apt/sources.list.d/vscodium.list
Step 4: Install VSCodium on Ubuntu / Linux Mint
sudo apt update && sudo apt install codium
Install VSCodium 1.70.0 on Fedora / Alma Linux / RHEL
Step 1: Make sure the system is up to date.
Step 2: Add the GPG key to your system
sudo rpmkeys --import https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/-/raw/master/pub.gpg
Step 3: Add the repository
cat >> /etc/yum.repos.d/vscodium.repo
Visual Studio Code Apt
Visit the VS Code install page and select the 32 or 64 bit installer. Install Visual Studio Code on Windows (not in your WSL file system). When prompted to Select Additional Tasks during installation, be sure to check the Add to PATH option so you can easily open a folder in WSL using the code command. Visual Studio Code is a free and open-source, cross-platform IDE or code editor that enables developers to develop applications and write code using a myriad of programming languages such as C, C++, Python, Go and Java, to mention a few. To install Visual Studio Code on Debian, Ubuntu and Linux Mint, first update your system, then install the downloaded DEB package file by running the APT command as follows: $ sudo apt install ./code.deb. The APT package manager should then start installing the DEB package file. At this point, Visual Studio Code should be installed.
Installation
See the Download Visual Studio Code page for a complete list of available installation options.
By downloading and using Visual Studio Code, you agree to the license terms and privacy statement.
Snap
Visual Studio Code is officially distributed as a Snap package in the Snap Store:
You can install it by running:
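The command itself is missing from this copy of the page; it should be the classic-confinement snap install, which at the time of writing is:
sudo snap install --classic code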
Once installed, the Snap daemon will take care of automatically updating VS Code in the background. You will get an in-product update notification whenever a new update is available.
Note: If snap isn't available in your Linux distribution, please check the following Installing snapd guide, which can help you get that set up.
Learn more about snaps from the official Snap Documentation.
Debian and Ubuntu based distributions
The easiest way to install Visual Studio Code for Debian/Ubuntu based distributions is to download and install the .deb package (64-bit), either through the graphical software center if it's available, or through the command line with:
Note that other binaries are also available on the VS Code download page.
Installing the .deb package will automatically install the apt repository and signing key to enable auto-updating using the system's package manager. Alternatively, the repository and key can also be installed manually with the following script:
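The script itself was lost in this copy. The sketch below reflects Microsoft's usual key and repository locations; compare it with the script on the current VS Code download page before running it:
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg
sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg
echo "deb [arch=amd64,arm64,armhf signed-by=/etc/apt/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main" | sudo tee /etc/apt/sources.list.d/vscode.list
rm -f packages.microsoft.gpg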
Then update the package cache and install the package using:
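Those commands were also stripped out; they amount to the following (apt-transport-https is only needed on older releases):
sudo apt install apt-transport-https
sudo apt update
sudo apt install code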
RHEL, Fedora, and CentOS based distributions
We currently ship the stable 64-bit VS Code in a yum repository, the following script will install the key and repository:
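The script is missing here as well; a sketch based on Microsoft's published yum repository follows (again, verify against the official instructions):
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'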
Then update the package cache and install the package using dnf (Fedora 22 and above):
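The dnf commands themselves did not survive the copy; they are simply:
dnf check-update
sudo dnf install code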
Or on older versions using yum:
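And the yum equivalents, likewise missing from this copy:
yum check-update
sudo yum install code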
Due to the manual signing process and the system we use to publish, the yum repo may lag behind and not get the latest version of VS Code immediately.
openSUSE and SLE-based distributions
The yum repository above also works for openSUSE and SLE-based systems, the following script will install the key and repository:
Then update the package cache and install the package using:
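The commands were stripped from this section too. The repository setup mirrors the Fedora script above, except the repo file is written to /etc/zypp/repos.d/vscode.repo; the refresh and install step is then:
sudo zypper refresh
sudo zypper install code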
AUR package for Arch Linux
There is a community-maintained Arch User Repository package for VS Code.
To get more information about the installation from the AUR, please consult the following wiki entry: Install AUR Packages.
Nix package for NixOS (or any Linux distribution using Nix package manager)
There is a community maintained VS Code Nix package in the nixpkgs repository. In order to install it using Nix, set allowUnfree option to true in your config.nix and execute:
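The command is missing here; assuming the nixpkgs attribute is named vscode (as it has been historically), it is:
nix-env -i vscode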
Installing .rpm package manually
The VS Code .rpm package (64-bit) can also be manually downloaded and installed, however, auto-updating won't work unless the repository above is installed. Once downloaded it can be installed using your package manager, for example with dnf:
Note that other binaries are also available on the VS Code download page.
Updates
VS Code ships monthly and you can see when a new release is available by checking the release notes. If the VS Code repository was installed correctly, then your system package manager should handle auto-updating in the same way as other packages on the system.
Note: Updates are automatic and run in the background for the Snap package.
Node.js
Node.js is a popular platform and runtime for easily building and running JavaScript applications. It also includes npm, a Package Manager for Node.js modules. You'll see Node.js and npm mentioned frequently in our documentation and some optional VS Code tooling requires Node.js (for example, the VS Code extension generator).
If you'd like to install Node.js on Linux, see Installing Node.js via package manager to find the Node.js package and installation instructions tailored to your Linux distribution. You can also install and support multiple versions of Node.js by using the Node Version Manager.
To learn more about JavaScript and Node.js, see our Node.js tutorial, where you'll learn about running and debugging Node.js applications with VS Code.
Setting VS Code as the default text editor
xdg-open
You can set the default text editor for text files (text/plain) that is used by xdg-open with the following command:
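Assuming VS Code's desktop entry is named code.desktop, the command is typically:
xdg-mime default code.desktop text/plain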
Debian alternatives system
Debian-based distributions allow setting a default editor using the Debian alternatives system, without concern for the MIME type. You can set this by running the following and selecting code:
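One way that matches the "run this and select code from a list" description:
sudo update-alternatives --config editor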
If Visual Studio Code doesn't show up as an alternative to editor, you need to register it:
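A typical registration command (the priority value 10 is arbitrary):
sudo update-alternatives --install /usr/bin/editor editor $(which code) 10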
Windows as a Linux developer machine
Another option for Linux development with VS Code is to use a Windows machine with the Windows Subsystem for Linux (WSL).
Windows Subsystem for Linux
With WSL, you can install and run Linux distributions on Windows. This enables you to develop and test your source code on Linux while still working locally on a Windows machine. WSL supports Linux distributions such as Ubuntu, Debian, SUSE, and Alpine available from the Microsoft Store.
When coupled with the Remote - WSL extension, you get full VS Code editing and debugging support while running in the context of a Linux distro on WSL.
See the Developing in WSL documentation to learn more or try the Working in WSL introductory tutorial.
Next steps
Once you have installed VS Code, these topics will help you learn more about it:
Additional Components - Learn how to install Git, Node.js, TypeScript, and tools like Yeoman.
User Interface - A quick orientation to VS Code.
User/Workspace Settings - Learn how to configure VS Code to your preferences through settings.
Common questions
Azure VM Issues
I'm getting a 'Running without the SUID sandbox' error?
You can safely ignore this error.
Debian and moving files to trash
If you see an error when deleting files from the VS Code Explorer on the Debian operating system, it might be because the trash implementation that VS Code is using is not there.
Run these commands to solve this issue:
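The missing piece is usually the GVfs trash backend; on older Debian releases it was provided by the gvfs-bin package (the package name may differ on newer releases):
sudo apt-get install gvfs-bin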
Conflicts with VS Code packages from other repositories
Some distributions, for example Pop!_OS provide their own code package. To ensure the official VS Code repository is used, create a file named /etc/apt/preferences.d/code with the following content:
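A pin file that does this looks like the following; it tells apt to always prefer the code package coming from Microsoft's repository:
Package: code
Pin: origin "packages.microsoft.com"
Pin-Priority: 9999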
'Visual Studio Code is unable to watch for file changes in this large workspace' (error ENOSPC)
When you see this notification, it indicates that the VS Code file watcher is running out of handles because the workspace is large and contains many files. Before adjusting platform limits, make sure that potentially large folders, such as Python .venv, are added to the files.watcherExclude setting (more details below). The current limit can be viewed by running:
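Typically:
cat /proc/sys/fs/inotify/max_user_watches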
The limit can be increased to its maximum by editing /etc/sysctl.conf (except on Arch Linux, read below) and adding this line to the end of the file:
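The line in question is:
fs.inotify.max_user_watches=524288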
The new value can then be loaded in by running sudo sysctl -p.
While 524,288 is the maximum number of files that can be watched, if you're in an environment that is particularly memory constrained, you may wish to lower the number. Each file watch takes up 1080 bytes, so assuming that all 524,288 watches are consumed, that results in an upper bound of around 540 MiB.
Arch-based distros (including Manjaro) require you to change a different file; follow these steps instead.
Another option is to exclude specific workspace directories from the VS Code file watcher with the files.watcherExclude setting. The default for files.watcherExclude excludes node_modules and some folders under .git, but you can add other directories that you don't want VS Code to track, as in the sketch below.
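A hedged example of what such a settings.json entry might look like (the folders listed are only illustrative; pick the ones that are actually large in your workspace):
"files.watcherExclude": {
    "**/.git/objects/**": true,
    "**/node_modules/**": true,
    "**/.venv/**": true
}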
I can't see Chinese characters in Ubuntu
We're working on a fix. In the meantime, open the application menu, then choose File > Preferences > Settings. In the Text Editor > Font section, set 'Font Family' to Droid Sans Mono, Droid Sans Fallback. If you'd rather edit the settings.json file directly, set editor.fontFamily as shown:
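The corresponding settings.json entry is:
"editor.fontFamily": "Droid Sans Mono, Droid Sans Fallback"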
Package git is not installed
This error can appear during installation and is typically caused by the package manager's lists being out of date. Try updating them and installing again:
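One likely recovery sequence, assuming the stale lists were the cause and that code is the package being retried:
sudo apt update
sudo apt install code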
The code bin command does not bring the window to the foreground on Ubuntu
Running code . on Ubuntu when VS Code is already open in the current directory will not bring VS Code into the foreground. This is a feature of the OS which can be disabled using ccsm.
Under General > General Options > Focus & Raise Behaviour, set 'Focus Prevention Level' to 'Off'. Remember this is an OS-level setting that will apply to all applications, not just VS Code.
Cannot install .deb package due to '/etc/apt/sources.list.d/vscode.list: No such file or directory'
This can happen when sources.list.d doesn't exist or you don't have access to create the file. To fix this, try manually creating the folder and an empty vscode.list file:
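Typically:
sudo mkdir -p /etc/apt/sources.list.d
sudo touch /etc/apt/sources.list.d/vscode.list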
Cannot move or resize the window while X forwarding a remote window
If you are using X forwarding to use VS Code remotely, you will need to use the native title bar to ensure you can properly manipulate the window. You can switch to using it by setting window.titleBarStyle to native.
Using the custom title bar
The custom title bar and menus were enabled by default on Linux for several months. The custom title bar has been a success on Windows, but the customer response on Linux suggests otherwise. Based on feedback, we have decided to make this setting opt-in on Linux and leave the native title bar as the default.
The custom title bar provides many benefits including great theming support and better accessibility through keyboard navigation and screen readers. Unfortunately, these benefits do not translate as well to the Linux platform. Linux has a variety of desktop environments and window managers that can make the VS Code theming look foreign to users. For users needing the accessibility improvements, we recommend enabling the custom title bar when running in accessibility mode using a screen reader. You can still manually set the title bar with the Window: Title Bar Style (window.titleBarStyle) setting.
Broken cursor in editor with display scaling enabled
Due to an upstream issue #14787 with Electron, the mouse cursor may render incorrectly with scaling enabled. If you notice that the usual text cursor is not being rendered inside the editor as you would expect, try falling back to the native menu bar by configuring the setting window.titleBarStyle to native.
Repository changed its origin value
If you receive an error similar to the following:
Use apt instead of apt-get and you will be prompted to accept the origin change:
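With apt this is simply:
sudo apt update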
sololinuxes · 5 years
Text
AppArmor vs SELinux
AppArmor vs SELinux. To strengthen the security mechanisms offered by the classic "ugo/rwx" permissions and access control lists, the United States National Security Agency (NSA) developed the mandatory access control framework we all know as SELinux (Security Enhanced Linux). AppArmor, on the other hand, started out as a proprietary product of the company Immunix. Novell later acquired AppArmor and handed the tool over to the community as open source; today Canonical manages its development and maintenance. RHEL, CentOS and Fedora are the best-known Linux distributions that use SELinux by default, while Ubuntu, Linux Mint and openSUSE are among the most prominent ones that use AppArmor (openSUSE even developed its own GUI to manage the tool's rules easily). A truly realistic comparison of the two would be very difficult: their goal is the same, but the way they operate differs greatly. So let's explain each tool briefly and you can decide which one suits you, since both can be used on practically any Linux distribution.
AppArmor
If there is one thing I like about AppArmor, it is its learning mode: it can work out automatically how our system should behave. Instead of policies managed through commands, AppArmor uses profiles defined in text files that can be edited very easily. You can list the predefined profiles (more can be added) with the following commands:
cd /etc/apparmor.d
dir
Example of editing a profile:
nano usr.bin.firefox
Digging deep into SELinux configuration is complex and requires an intermediate-to-advanced level of knowledge. That is precisely why AppArmor was created: it can do much the same job in a far simpler and less risky way (if you make a mistake, you simply edit the file again or disable the profile). To disable a profile you only have to run the following (Firefox in this example):
sudo ln -s /etc/apparmor.d/usr.sbin.firefox /etc/apparmor.d/disable/
AppArmor offers two predefined security modes, complain and enforce. Switching a profile to one of these modes is also a simple operation. In this example we modify the "usr.sbin.ntpd" profile:
# complain
aa-complain /etc/apparmor.d/usr.sbin.ntpd
# enforce
aa-enforce /etc/apparmor.d/usr.sbin.ntpd
We can also check the current status of AppArmor and see the enabled profiles:
apparmor_status
For more information, visit the Ubuntu wiki.
SELinux
Because of the way it operates, SELinux can be much stricter than AppArmor; many tools and web control panels even recommend disabling it to avoid problems. Like AppArmor, SELinux (Security Enhanced Linux) also has two protection modes:
enforcing: SELinux denies access according to the established policy rules.
permissive: SELinux does not block access, but violations are logged for later analysis.
You can check which mode you are using with the command:
getenforce
We can change the mode (or even disable SELinux) in its configuration file:
nano /etc/selinux/config
Change the line SELINUX=xxx to one of the following options: enforcing, permissive or disabled. Save the file, close the editor and reboot the system:
reboot
Then verify the status of SELinux:
sestatus
Now let's look at a couple of usage examples. One of the most common operations comes up when changing the SSH port (22 by default): obviously we must tell SELinux that ssh now uses another port, for example 123.
semanage port -a -t ssh_port_t -p tcp 123
Another example is changing the directory allowed as the web server root:
semanage fcontext -a -t httpd_sys_content_t "/srv/www(/.*)?"
restorecon -R -v /srv/www
You can find more information in its official wiki.
Final thoughts
The two security systems covered in this article give us tools to isolate applications, and to isolate a potential attacker from the rest of the system when an application is compromised. SELinux rule sets are extremely complex, but in exchange they allow finer control over how processes are isolated; policy generation can be automated, yet managing it is still hard if you are not an expert. AppArmor is simpler to use: profiles can be written by hand in your favourite editor or generated with "aa-logprof", and since AppArmor uses path-based control, the system is more transparent and can be audited independently. The choice is yours.
savetopnow · 6 years
Text
2018-04-06 12 LINUX now
LINUX
Linux Academy Blog
Introducing the Identity and Access Management (IAM) Deep Dive
Spring Content Releases – Week 1 Livestream Recaps
Announcing Google App Engine Deep Dive
Employee Spotlight: Favian Ramirez, Business Development Representative
Say hello to our new Practice Exams system!
Linux Insider
Bluestar Gives Arch Linux a Celestial Glow
Mozilla Trumpets Altered Reality Browser
Microsoft Offers New Tool to Grow Linux in Windows
New Firefox Extension Builds a Wall Around Facebook
Neptune 5: A Practically Perfect Plasma-Based Distro
Linux Journal
Tackling L33t-Speak
Subutai Blockchain Router v2.0, NixOS New Release, Slimbook Curve and More
VIDEO: When Linux Demos Go Wrong
How Wizards and Muggles Break Free from the Matrix
Richard Stallman's Privacy Proposal, Valve's Commitment to Linux, New WordPress Update and More
Linux Magazine
Solomon Hykes Leaves Docker
Red Hat Celebrates 25th Anniversary with a New Code Portal
Gnome 3.28 Released
Install Firefox in a Snap on Linux
OpenStack Queens Released
Linux Today
How to Install Ansible AWX on CentOS 7
Tips To Speed Up Ubuntu Linux
How to install Kodi on Your Raspberry Pi
Open Source Accounting Program GnuCash 3.0 Released With a New CSV Importer Tool Rewritten in C++
Linux Mint vs. MX Linux: What's Best for You?
Linux.com
Linux Kernel Developer: Steven Rostedt
Cybersecurity Vendor Selection: What Needs to Be in a Good Policy
5 Things to Know Before Adopting Microservice and Container Architectures
Why You Should Use Column-Indentation to Improve Your Code’s Readability
Learn Advanced SSH Commands with the New Cheat Sheet
Reddit Linux
​A top Linux security programmer, Matthew Garrett, has discovered Linux in Symantec's Norton Core Router. It appears Symantec has violated the GPL by not releasing its router's source code.
Unboxing the HiFive Unleashed, RISC-V GNU/Linux Board
patch runs ed, and ed can run anything
Open source is under attack from new EU copyright laws. Learn How the EU's Copyright Reform Threatens Open Source--and How to Fight It
Long overdue for a Linux phone, don't you agree?
Riba Linux
How to install Antergos 18.4 "KDE"
Antergos 18.4 "KDE" overview | For Everyone
SimbiOS 18.0 (Ocean) - Cinnamon | Meet SimbiOS.
How to install Archman Xfce 18.03
Archman Xfce 18.03 overview
Slashdot Linux
The FCC Is Refusing To Release Emails About Ajit Pai's 'Harlem Shake' Video
Motorola's Modular Smartphone Dream Is Too Young To Die
Microsoft Modifies Open-Source Code, Blows Hole In Windows Defender
Secret Service Warns of Chip Card Scheme
Coinbase Launches Early-Stage Venture Fund
Softpedia
LibreOffice 6.0.3
Fedora 27 / 28 Beta
OpenBSD 6.3
RaspArch 180402
4MLinux 24.1 / 25.0 Beta
Tecmint
GraphicsMagick – A Powerful Image Processing CLI Tool for Linux
Manage Your Passwords with RoboForm Everywhere: 5-Year Subscriptions
Gerbera – A UPnP Media Server That Let’s You Stream Media on Home Network
Android Studio – A Powerful IDE for Building Apps for All Android Devices
System Tar and Restore – A Versatile System Backup Script for Linux
nixCraft
OpenBSD 6.3 released ( Download of the day )
Book review: Ed Mastery
Linux/Unix desktop fun: sl – a mirror version of ls
Raspberry PI 3 model B+ Released: Complete specs and pricing
Debian Linux 9.4 released and here is how to upgrade it
Link
This year we have seen a huge influx of Linux users, and we are also seeing more distributions appear to try to pull people in. So let's talk about why 2020 is the best year for Linux (so far).
Windows 10 vs Linux
Now we need to understand: Windows 10 has nothing Linux can't do, while Linux has many things Windows 10 can't do. This is fantastic. Here is a simple table of basic things.
                                   Win10      Linux
OEM Support                        Yes        Yes
Functional default shells          No (!)     Yes
Ability to ignore shells           Yes        Yes
Graphical Environments             Yes        Yes
No GUI Options                     No         Yes
Easy software management           No (!2)    Yes
Easy customization                 Yes (!3)   Yes
Dedication to low-spec machines    No (!4)    Yes
Redistribution allowed             No         Yes
(!) - While Powershell and CMD exist, CMD is the default, and calling it "functional" is not exactly correct.
(!2) - Partly subjective, will be explained further.
(!3) - While easy, not many, and alternatives aren't easy to work with if they even exist.
(!4) - While ARM support exists, dedication to old machines with very low-spec hardware (below 4GB RAM, 2CPU Cores) is not great.
Not only is this list fairly large, it also doesn't take everything into account. Still, the point is made: the main things Windows 10 users might care about are covered. With that said, Linux is the better product, but it isn't as easy to get pre-installed. That is changing, though: System76, Juno Computers, Dell, HP and others offer options to get a form of Linux (usually Ubuntu or a derivative) shipped with the machine.
Now, what about phones? Linux has Android, but with how closed and locked down it is, Android is to Linux as macOS is to FreeBSD. Most phone operating systems ship with some Linux fork (i.e. Android); setting aside Android and iOS, KaiOS is the third largest. KaiOS is aimed more at feature phones, but it is somewhat available for smartphones too. Beyond that, we also have Plasma Mobile from KDE, Ubuntu Touch from UBPorts, a Manjaro mobile spin, Purism, LuneOS, Tizen, Sailfish and many, many others, and Volla Phone is making a bit of noise itself. There are also older systems still waiting to be forked, like Firefox OS, webOS, or any desktop Linux distro with a mobile desktop environment, so we can plausibly expect more Linux phones.
Now for the tasty part: there are more Linux contributors than ever, from people who contribute to small distributions to those starting their own. In the Ubuntu world we are seeing FOUR new desktop remixes, these being (in order of release) Ubuntu Cinnamon, Ubuntu Deepin/DDE, Ubuntu Unity and Ubuntu Lumina. Four new desktop remixes for Ubuntu, bringing the total to 11. While having 11 remixes may seem a little odd, note that this is good for Linux, mostly because it promotes options for those who care. Those who find Ubuntu boring can find a functionally similar distribution in a flavor or remix they might prefer. Depending on how well the project leads advertise each remix, they could (all by themselves) draw new users to their variant of Ubuntu while pulling more people toward Linux.
This is a lot so far, but it keeps getting better. We have seen Lenovo coming closer to Linux, doing an interview with LinuxForEveryone joined by someone from Fedora. So we have Lenovo approaching Linux, plus many Linux-specific OEMs such as System76, Juno Computers, Tuxedo Computers and probably a million others. Manjaro has an OEM, elementary OS has an OEM, Kubuntu has an OEM, Ubuntu has a billion OEMs, Pop!_OS is developed by an OEM, and we might also get more from Lubuntu, other remixes/flavors, and other distributions like Feren OS and Linux Mint. We might even see Drauger OS get an OEM for gaming-specific hardware.
I'm aiming to keep this article as short as I can, since I have to focus on Ubuntu Lumina at least somewhat, but I want to post a lot so I can have fun talking with the community while I work, and maybe inspire someone.
msrlunatj · 1 month
Text
Comprehensive Guide to Choosing a Linux Distribution: Everything You Need to Know
1. Introduction
A brief introduction to the Linux world
Linux is an open source operating system that has become a solid foundation for a wide variety of distributions, each tailored to different needs and users.
Why choosing the right distribution matters
Choosing the right Linux distribution can greatly improve the user experience. This decision affects ease of use, system stability and software availability, among other factors.
2. What Is a Linux Distribution?
Definition of a Linux distribution
A Linux distribution is an operating system made up of the Linux kernel, system software and applications, all packaged together to offer a specific user experience.
Key components of a distribution
Linux kernel: The core that interacts directly with the hardware.
Desktop environment: The graphical interface (GNOME, KDE, Xfce, etc.).
Package managers: Tools to install, update and manage software (APT, YUM, Pacman, etc.); see the quick examples below.
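As a quick, hedged illustration, installing the same program (htop, for example) looks slightly different under each of the main package managers:
sudo apt install htop        # Debian/Ubuntu and derivatives (APT)
sudo dnf install htop        # Fedora/RHEL family (DNF, the successor to YUM)
sudo pacman -S htop          # Arch/Manjaro (Pacman)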
How different distributions come about
Linux distributions are usually derived from common bases such as Debian, Red Hat or Arch, adapted to follow different philosophies, stability levels and purposes.
3. Types of Linux Distributions
Debian-based distributions
Main characteristics: Stability, a large community, long-term support.
Popular examples: Ubuntu, Linux Mint.
Red Hat-based distributions
Main characteristics: Enterprise orientation, robustness, commercial support.
Popular examples: Fedora, CentOS, RHEL.
Arch-based distributions
Main characteristics: Customization, simplicity, a focus on advanced users.
Popular examples: Arch Linux, Manjaro.
Specialized distributions
For servers: CentOS, Ubuntu Server.
For old hardware: Puppy Linux, Lubuntu.
For security: Kali Linux, Parrot OS.
For developers: Pop!_OS, Fedora Workstation.
4. Key Factors to Consider When Choosing a Distribution
User experience
Some distributions are designed to be friendly and easy to use (e.g. Linux Mint), while others require advanced knowledge (e.g. Arch Linux).
Hardware compatibility
It is crucial to make sure the distribution supports the available hardware, especially on older computers.
Package management
Simple installation and updating of software is essential, and this is where package managers come in.
Update frequency
Rolling release (continuous updates, as in Arch Linux) vs. fixed releases (stable cycles, as in Ubuntu).
Desktop environment
The desktop environment shapes the visual and functional experience. GNOME, KDE and Xfce are among the most common.
Intended use
Depending on whether the system will be used for office tasks, development, servers or security work, you should choose a distribution accordingly.
5. Comparative Guide to Popular Distributions
Ubuntu vs. Fedora
Goal: Ubuntu focuses on ease of use for the end user, while Fedora drives the adoption of newer technologies and serves as the base for Red Hat.
Philosophy: Ubuntu is built on simplicity and accessibility, while Fedora follows the philosophy of "Freedom, Friends, Features, First", prioritizing innovation.
Debian vs. Arch Linux
Goal: Debian prioritizes stability and security, making it ideal for servers, while Arch Linux is for users who want a personalized, constantly updated system.
Philosophy: Debian adheres to free software and stability, while Arch follows the "Keep It Simple, Stupid" (KISS) principle, offering a base system to build on according to the user's needs.
Kali Linux vs. Ubuntu
Goal: Kali Linux is designed for penetration testing and security audits, while Ubuntu is a general-purpose desktop distribution.
Philosophy: Kali Linux follows a philosophy of extreme specialization in security, while Ubuntu promotes an accessible, friendly experience for everyone.
Manjaro vs. CentOS
Goal: Manjaro aims to combine Arch's customization with ease of use, while CentOS is a stable, robust option for servers.
Philosophy: Manjaro is for users who want the latest technology with a gentler learning curve, while CentOS follows a philosophy of long-term stability and durability in enterprise environments.
6. How to Install and Try Linux Distributions
Ways to try distributions
Live USB/CD: Lets you run the distribution without installing it.
Virtual machine: Use software such as VirtualBox or VMware to try distributions without touching your main system.
Step-by-step installation guide
Preparing the installation media: Create a bootable USB with tools such as Rufus or Etcher.
Configuring the system during installation: Set up partitions, choose the desktop environment and the bootloader.
Post-installation: Update the system, install drivers and customize the environment (see the update examples below).
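As a sketch, the usual first post-installation step on each family of distributions is a full system update:
sudo apt update && sudo apt upgrade     # Debian/Ubuntu-based
sudo dnf upgrade --refresh              # Fedora-based
sudo pacman -Syu                        # Arch-based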
7. Recommended Distributions for Different Users
Beginners: Ubuntu, Linux Mint.
Intermediate users: Fedora, Manjaro.
Advanced users: Arch Linux, Debian.
Server administrators: CentOS, Ubuntu Server.
Developers and IT professionals: Fedora, Pop!_OS.
Security enthusiasts: Kali Linux, Parrot OS.
8. Conclusion
Summary of the key points
Choosing the right Linux distribution depends on several factors, including the user's experience, the intended purpose and personal preferences.
Final recommendations
Experimenting with different distributions through methods such as Live USBs or virtual machines is crucial to finding the one that best suits your needs.
Call to action
We invite you to try some of the distributions mentioned here and join the Linux community to keep learning and sharing.
9. Glossary of Terms
Kernel: The core of the operating system that manages communication between hardware and software.
Desktop environment: The graphical interface used to interact with the operating system.
Package manager: A tool that simplifies installing and managing software on a Linux distribution.
Rolling release: A development model in which software is updated continuously without major version jumps.
Fork: A project derived from another that follows its own development path.
10. FAQ (Frequently Asked Questions)
Which distribution is best for a beginner?
Ubuntu or Linux Mint are usually the best options for new users.
Can I install Linux alongside Windows?
Yes, you can install Linux in a dual-boot setup and choose between both systems when the computer starts.
Which distribution is best for a server?
CentOS and Ubuntu Server are popular choices for servers.
What is a rolling release distribution?
It is a type of distribution that is updated continuously, without waiting for new major versions.
tqvcancun · 4 years
Text
Fewer distributions and more applications: that is what the Linux desktop needs
"We don't need more Linux distributions. Stop making distros and create applications." It could hardly be put more clearly. The phrase is not mine but Alan Pope's, the well-known Canonical and Ubuntu developer, but it might as well be, because I subscribe to it entirely. Let me explain, along with the context in which it was said.
Make a Linux App
Make a Linux App is an initiative driven by Pope to encourage idle developers to contribute to the GNU/Linux ecosystem in the right direction, which is not creating yet another distribution that will only interest a handful of people (generally idle as well) and that nobody is asking for. Why? Because we already have plenty of distros, and very good ones. We don't need more!
What Pope proposes in Make a Linux App is, by contrast, plain common sense: creating an application, a good application, is harder, but far more beneficial for all GNU/Linux users regardless of the distribution they use, because although the lack of applications is not the weakest link in the Linux desktop experience, we are still behind Windows and Mac.
Make a Linux App explains why creating applications for Linux is positive both for the ecosystem and for the developer; it recommends starting points, for example the framework to use, including those of GNOME, KDE, elementary OS, Ubuntu Touch and (credit for pointing it out, despite the opposition it faces among many users) Electron; and it lists all the current distribution options, from AppImage to Flatpak, Snap and openSUSE's Open Build Service.
As you would expect, Pope has been even-handed enough not to put Canonical's solutions above the rest. The initiative also has the support of the main parties involved, that is, projects such as GNOME, KDE, elementary and UBPorts. Take a look at the site; it is worth it, and all the information is concise, so it will not drown you in endless details.
The unbearable lightness of some Linux distributions
Now then, why should the developer working on their own distribution, usually a rehash of a rehash, pay attention to this initiative? Because, directly and indirectly, it is what most users are asking for. Plain and simple. A good sample of this can be found in the popularity indexes of the distros, which I will illustrate with the results of our end-of-year survey.
If we look closely, of the 20 available options the bulk of the votes go to the big ones: Ubuntu, Linux Mint, Debian, Manjaro, Arch Linux, KDE neon, Fedora, Deepin, elementary OS, openSUSE… The popularity of the rest is residual and their user bases will be in line with that, with a few exceptions. And even if we add them all together, many other small distros that contribute no real value are still left out.
To give a couple of examples you will recognise, and whose contribution is more substantial: in recent times we have recommended MX Linux, a Debian derivative I consider quite interesting. However, the most interesting thing is not the distro itself but the tools it provides. Would it not be possible to abstract those tools from the distribution and offer them as a software package for Debian? The same goes for Peppermint, whose main value is offering a tool for creating web applications.
Mind you: these are two examples picked on the fly and, as I said, their contribution is more substantial than that of others whose existential justification boils down to changing the look and adding tonnes of preinstalled software. Besides, it is not always simpler for developer and user to add packages than to use something ready from the moment it is installed. Even so, in both cases we are talking about residual distros whose disappearance would not affect the Linux desktop at all.
Distros as a tool vs. general-purpose distros
On the other hand, we should not confuse distributions that work as a tool with general-purpose distributions, that is, the ones we install on our PC. Ubuntu, Fedora or Manjaro have nothing to do with Kali, Tails, Puppy, Robolinux, LibreELEC, SteamOS, SystemRescueCd… and so many others whose focus is not to serve as a day-to-day operating system but to cover a specific need. This is one of the great strengths of GNU/Linux and it should not change.
Redundancy does not equal fragmentation
Nor should redundancy be confused with fragmentation. There is no noteworthy fragmentation in GNU/Linux, no matter how many distributions contribute little or nothing, and the reason has already been stated: the big ones take almost the whole pie. Referring again to the results of our end-of-year survey, everything based on Debian took 70% of the votes… but of that percentage, more than 50% was shared between Ubuntu, Linux Mint and Debian; the rest went to KDE neon, Deepin, elementary OS, MX Linux, Zorin OS and little else.
So the main drawback of creating more general-purpose distributions is not that it generates fragmentation or that it harms the ecosystem: it is that it does not benefit it. As simple as that. Of course…
Free software is free for the good and for "the bad"
Here is the crux of the matter: free software is free for everything, including for anyone who wants to put together their own distribution instead of contributing to other areas where the effort would be more welcome. As the illustrious Linus Torvalds once said, free and open source software has triumphed on the back of the kind of selfishness of "I do this because it benefits me", with the caveat that the result is shared.
Therefore, it is enough for someone to want to make their own rehash for them to do it; nobody can stop them. What we can do, and that is what Make a Linux App is about, is ask them to consider other forms of contribution, such as creating an application or perhaps helping to maintain one that needs it. And there are many.
Image: Pixabay
Source: MuyLinux
tipsonunix · 3 years
Text
How to Install Google Chrome 98 in Ubuntu / Rocky Linux & Fedora
Google Chrome is one of the most widely used web browsers in the world. It is fast, easy to use and ships with solid security features. Google Chrome's newest version is 98, the second browser update of the year. This release contains developer-side changes and a few user-facing differences. This tutorial will help beginners install Google Chrome 98 on Ubuntu 20.04 LTS, Linux Mint 20.3, Rocky Linux 8, AlmaLinux 8 and Fedora 35.
Google Chrome 98 Changelog
- COLRv1 Color Gradient Vector Fonts in Chrome 98
- Remove SDES key exchange for WebRTC
- WritableStream controller AbortSignal
- Private Network Access preflight requests for subresources
- New window.open() popup vs. window behavior
For the complete changelog, refer to chromestatus.com
Install Google Chrome 98 on Ubuntu / Linux Mint
Google Chrome can be installed on Ubuntu and Linux Mint systems via its .deb file. Open the terminal and run the below command:
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
Install Chrome on Ubuntu / Linux Mint:
sudo dpkg -i google-chrome-stable_current_amd64.deb
Install Google Chrome 98 on Fedora
Google Chrome can be installed on Fedora via the terminal.
Step 1: Install Third-Party Repositories
sudo dnf install fedora-workstation-repositories
Step 2: Enable the Google Chrome Repository
sudo dnf config-manager --set-enabled google-chrome
Step 3: Install Google Chrome on Fedora
sudo dnf install google-chrome-stable
Install Google Chrome 98 on Rocky Linux 8 / AlmaLinux 8
Step 1: Create the Google Chrome repository with the below contents (the section header and here-document markers make the snippet a valid .repo file):
cat << EOF > /etc/yum.repos.d/google-chrome.repo
[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
EOF
Step 2: Update the repository
sudo dnf update -y
Step 3: Install Google Chrome 98 via DNF
sudo dnf install google-chrome-stable
Conclusion
From this tutorial, you have learned how to download and install google chrome 98 on Ubuntu 20.04 LTS, Ubuntu 22.04, Linux Mint 20.3, Rocky Linux 8, AlmaLinux 8, and Fedora 35 Do let us know your comments and feedback in the comments section below. If my articles on TipsonUNIX have helped you, kindly consider buying me a coffee as a token of appreciation.
Thank you for your support!
hubspotexamanswers · 7 years
Text
Fedora VS Ubuntu: How are they different?
New Linux distributions keep appearing, and for some users it is becoming tedious to try to keep up. Maybe you have even been asked to explain the differences between two Linux distributions. Such questions may seem strange, not least because Fedora and Ubuntu have both been around for a long time. But if the person asking is a beginner, they start to make sense.
Although neither distribution is new, both release new versions regularly. Ubuntu's most recent release was 17.10, which came out in October of last year, and Fedora released version 27 in November 2017. If you want, you can read more about the latest versions of Ubuntu here.
In a previous article we explained the differences between Ubuntu and Linux Mint
If you have already read it, you will see that the differences between the two distributions covered in this article are more pronounced.
1. History and development
The history of Ubuntu is much better known than that of Fedora. To summarize, Ubuntu was born from an unstable branch of Debian in October 2004. Fedora came earlier, in November 2003, and its development history is a bit more tangled. That first version was called Fedora Core 1 and was based on Red Hat Linux 9, which officially reached its end of life on April 30, 2004.
Fedora then emerged as a community-oriented alternative to Red Hat and had two main repositories. One of them was the Core , which was maintained by the developers of Red Hat, and the other was called Extras , which was maintained by the community.
However, in late 2003 Red Hat Linux merged with Fedora to become a single community distribution, with Red Hat Enterprise Linux as its commercially supported equivalent.
Until 2007, Core was part of the Fedora name, but as of Fedora 7, the Core and Extra repositories joined. Since then the distribution is called simply Fedora.
The biggest difference so far is that the original Red Hat Linux was essentially divided into Fedora and Red Hat Enterprise Linux, while Debian remains a complete and separate entity from Ubuntu, which imports packages from one of the Debian branches.
Although many think that Fedora is based on Red Hat Enterprise Linux (RHEL), nothing is further from reality. On the contrary, the new versions of RHEL are Fedora forks that are thoroughly tested for quality and stability before launch.
For example, RHEL 7 is based on Fedora repositories 19 and 20. The Fedora community also provides additional packages for RHEL in a repository called Extra Packages for Enterprise Linux (Extra Packages for Enterprise Linux), whose abbreviation is EPEL.
      Other groups include the Board of Forums, the IRC Board and the Membership Board of Developers. Users can apply for Ubuntu membership and volunteer as collaborators on various teams organized by the community.
2. Release and support cycle
Ubuntu releases a new version every six months, one in April and the other in October. Each fourth version is considered as long-term support (LTS). Therefore, Ubuntu releases an LTS version every two years. Each of them receives official support and updates for the next five years.
The regular versions used to have a support of 18 months, but as of 2013, the support period was shortened to 9 months.
Fedora does not have strict periods to launch new versions. They usually happen every 6 months, and have support for 13 months. That is why now the support period is greater than that of Ubuntu. Fedora, unlike Ubuntu, does not release LTS versions in the long term.
3. How are version names composed?
If you know the rules Ubuntu uses to name its versions, you will know that each release number is made up of two numbers: the first is the year and the second the month of release. That way you can deduce the exact release date of each version, as well as when the next one is due. For example, the latest version of Ubuntu is 17.10, released in October 2017.
Fedora keeps a simpler system and uses whole numbers, starting with 1 for the first version; it currently stands at version 27.
Ubuntu code names consist of two words starting with the same letter: the first is an adjective and the second an animal, usually an unusual one. These names are announced by Mark Shuttleworth with a brief introduction or anecdote related to the name. The latest stable version is named Artful Aardvark, an aardvark being a kind of African anteater.
Fedora 20 "Heisenbug" (2013) was the last Fedora release to carry a code name; all later versions are simply called "Fedora X", where X is the number following the previous release. Before that, anyone in the community could suggest a name, which qualified for approval if it followed a set of rules, and the Fedora Council made the final decision.
If Ubuntu's choice of code names seemed extravagant to you, look at how Fedora version names were chosen. A candidate name for a new version had to have a certain relationship to the previous one: the more unusual or novel, the better.
Names of living people and trademarked terms were not allowed. The relationship between the names of Fedora X and Fedora X+1 had to fit the formula "is a", so that the following held: X is a Y, and so is X+1. To shed some light on this confusion: Fedora 14 was called Laughlin and Fedora 15 Lovelock, and both Lovelock and Laughlin are towns in Nevada. However, the relationship between Fedora X and Fedora X+2 could not be the same one.
Now perhaps the reason why the developers have decided to stop assigning names to the new versions is clearer.
4. Editions and desktop environments
Fedora has three main editions: Cloud (Atomik), for servers (Server) and for workstations (Workstation). The names of the first two are very illustrative, and the edition for work stations is actually the edition that most people use, that of desktop and laptop computers (32 or 64 bits). The Fedora community also provides separate images of the three editions for ARM-based devices. There is also Fedora Rawhide, a continuously updated development version of Fedora that contains the latest compilations of all Fedora packages. Rawhide is a testing field for new packages, so it is not 100% stable.
Currently, Ubuntu outperforms Fedora in terms of quantity. Along with the standard desktop edition, Ubuntu offers separate products called Cloud, Server, Core and Ubuntu Touch, for mobile devices. The desktop edition supports 32-bit and 64-bit systems, and server images are available for different infrastructures (ARM, LinuxON and POWER8). There is also Ubuntu Kylin, a special edition of Ubuntu for Chinese users, which came out in 2010 as “Ubuntu Chinese Edition”, and which was renamed as an official subproject in 2013.
In terms of desktop environments, the main edition of Fedora 27 uses GNOME 3.26 with GNOME Shell. Ubuntu's historical default desktop environment (DE) is Unity, with other options provided through "Ubuntu flavors", which are variants of Ubuntu with different desktop environments: Kubuntu (KDE), Ubuntu GNOME, Ubuntu MATE, Xubuntu (Xfce) and Lubuntu (LXDE). In the latest versions of Ubuntu, specifically from 17.10 onwards, Unity was retired and GNOME became the default, accompanied by a new display server called Wayland.
The equivalent to Ubuntu flavors in Fedora are the Spins or “alternative desktops” . There are Spins with KDE, Xfce, LXDE, MATE and Cinnamon desktop environments, and a special Spin called Sugar on a Stick with a simplified learning environment. This project is designed for children and schools, particularly in developing countries.
Fedora also has labs, or “functional software packages . ” They are collections of software for specific functions that can be installed in an existing Fedora system or as an independent Linux distribution. The functional software packages available include Design Suite, Games, Robotics Suite, Security Lab and Scientific. Ubuntu provides something similar in the form of Edubuntu, Mythbuntu and Ubuntu Studio, subprojects with specialized applications for education, home entertainment systems and multimedia production, respectively.
5. Packages and repositories
The most notable differences between these two distributions are found in this section. As for package management, Fedora uses packages in .rpm format, while Ubuntu uses packages in .deb format. They are therefore not compatible with each other by default; you have to convert them with tools such as Alien. Ubuntu has also presented Snappy packages, which are much safer and easier to maintain.
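As an illustration only (converting packages is rarely the best route, and the file name below is hypothetical), alien can turn an .rpm into a .deb roughly like this:
sudo apt install alien
sudo alien --to-deb package.rpm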
With the exception of some binary firmware, Fedora does not include any proprietary software in its official repositories. This applies to graphics drivers, codecs and any other software restricted by patents and legal issues. The direct consequence of this is that Ubuntu has more packages in its repositories than Fedora.
One of the main objectives of Fedora is to provide only open source software, free and free. The community encourages users to find alternatives for their non-free applications. If you want to listen to MP3 music or play DVDs in Fedora, you will not find support for that in the official repositories. However, there are third-party repositories like RPMFusion that contain a lot of free and non-free software that you can install in Fedora.
Ubuntu aims to comply with the Debian Free Software Guidelines, but still makes many concessions. Unlike Fedora, Ubuntu includes proprietary drivers in its restricted branch of official repositories.
There is also a partner repository that contains software owned by Canonical’s associated providers: Skype and Adobe Flash Player, for example. It is possible to buy commercial applications from the Ubuntu Software Center and enable compatibility for DVD, MP3 and other popular codecs by simply installing a single package called “ubuntu-restricted-extras” from the repository.
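For example, on Ubuntu the commonly restricted codecs can be enabled with a single install:
sudo apt install ubuntu-restricted-extras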
Copr of Fedora’s Copr is a platform similar to the Ubuntu Personal Packet Files (PPA): they allow anyone to upload packages and create their own repository. The difference here is the same as with the general approach of software licenses. It is assumed that packets containing non-free components or anything else that is explicitly prohibited by the Fedora Project Board should not be loaded.
6. Audience and Objectives
Fedora has focused on three things: innovation, community and freedom. It offers and promotes exclusively free and open source software, and emphasizes the importance of each member of the community. It is developed by the community, and users are actively encouraged to participate in the project, not only as developers but also as writers, translators, designers and public speakers (Fedora Ambassadors). There is a special project that helps women who want to contribute, with the goal of fighting prejudice and gender segregation in technology and free software circles.
In addition, Fedora is often the first, or among the first, distributions to adopt and showcase new technologies and applications. It was one of the first distributions to integrate SELinux (Security Enhanced Linux), to ship the GNOME 3 desktop interface, to use Plymouth as the boot splash application, to adopt systemd as the default init system and to use Wayland instead of Xorg as the default display server.
Fedora developers collaborate with other distributions and upstream projects, to share their updates and contributions with the rest of the Linux ecosystem. Due to this constant experimentation and innovation, Fedora is often mislabeled as an unstable, next-generation distribution that is not suitable for beginners and everyday use. This is one of the most widespread Fedora myths, and the Fedora community is working hard to change this perception. Although developers and advanced users who want to try the latest features are part of the main target audience of Fedora, it can be used by anyone, just like Ubuntu.
Speaking of Ubuntu, some of the objectives of this distribution overlap with Fedora. Ubuntu also strives to innovate, but choose a much more user-friendly approach. By providing an operating system for mobile devices, Ubuntu tried to gain a foothold in the market and simultaneously boost its main project, although that dream was impossible.
Ubuntu is often proclaimed as the most popular Linux distribution, thanks to its strategy of being easy to use and simple enough for beginners and former Windows users. However, Fedora has an ace up its sleeve: Linus Torvalds, the creator of Linux, uses Fedora on his computers.
With all these points laid out, you can make the best decision for you and your computer.
ushf · 7 years
Text
Win10/F26 UEFI PITA
Trying to do a quick install of F26 onto a refurb T430. Turns out that the Win10 installation must have been done as legacy BIOS instead of UEFI for some reason. The quick solution was to boot and hit "enter" > "F10" > "Startup" > "Legacy only". However, I shall return to this and get the whole UEFI and Secure Boot chain working in a couple of months. So here are the appropriate links for converting the BIOS to UEFI etc:
https://social.technet.microsoft.com/wiki/contents/articles/14286.converting-windows-bios-installation-to-uefi.aspx
https://winaero.com/blog/install-windows-10-using-uefi-unified-extensible-firmware-interface/
https://tutorialsformyparents.com/how-i-dual-booted-linux-mint-alongside-windows-10-on-a-uefi-system/
https://msdn.microsoft.com/en-us/library/windows/hardware/dn640535(v=vs.85).aspx#gpt_faq_gpt_have_esp
http://linuxbsdos.com/2016/12/01/dual-boot-fedora-25-windows-10-on-a-computer-with-uefi-firmware/
https://geeksocket.in/blog/dualboot-fedora-windows/
Text
Windows Vs Linux: Distros
Before we begin, we need to address one of the more confusing aspects of the Linux platform. While Windows has maintained a fairly standard version structure, with updates and versions split into tiers, Linux is far more complex. Originally designed by Finnish student Linus Torvalds, the Linux kernel today underpins all Linux operating systems. However, as it remains open source, the system can be tweaked and modified by anyone for their own purposes. What we have as a result are hundreds of bespoke Linux-based operating systems known as distributions, or 'distros'. This makes it incredibly difficult to choose between them, far more complicated than simply picking Windows 7, Windows 8 or Windows 10.
Given the nature of open source software, these distros can vary wildly in functionality and sophistication, and many are constantly evolving. The choice can seem overwhelming, particularly as the differences between them aren't always immediately obvious. However, this does mean that consumers are free to try as many different Linux distros as they like, at no cost. The most popular of these, and the closest the platform has to a 'standard' OS, is Ubuntu, which strives to make these choices as simple as possible for new Linux users. Other highly popular distros include Debian, Linux Mint and Fedora, the last of which Torvalds personally uses on his own machines. There are also specialist builds that strip away functions to get the most from underpowered hardware, or distros that do the opposite and opt for fancy, graphically intense features.
Photo
PING: Calamares, KDE Neon, Netrunner, Linux AIO Ubuntu, Linux Mint, Fedora, Liri OS…
It is clear we are fully back in the routine: this PING comes loaded with distributions like few times before, and not for lack of other news, but because there has been a trickle of small yet interesting stories that fit best here, in our weekly link roundup. Let's get started before Saturday slips away!
Calamares 3.0. No, we are not starting with a distro but with the release of Calamares 3.0, the new version of this system installer that has become one of the most popular, at least among the smaller distributions. It brings many changes, though none especially striking (except for developers), judging by the official announcement.
KDE Neon. They could almost have made the announcement in unison, because within a matter of hours the KDE project's distribution adopted Calamares 3.0 as its installer by popular demand. More information on Jonathan Riddell's blog.
Netrunner Desktop 17.01.1. And since these things come in threes, the next to refresh its installation image to make room for Calamares 3.0 was Netrunner Desktop (based on KDE Neon). More information on the Netrunner blog.
Linux AIO Ubuntu 14.04.5. Completing the Linux AIO project releases we covered in last week's PING, you may want to grab the last one for Trusty Tahr, the farewell to Ubuntu 14.04 LTS in an all-in-one ISO. More information on the Linux AIO blog.
Linux Mint 18.1 KDE Beta. The wait to close the Serena cycle ends with the release that was still missing, the KDE Plasma edition. For now it is only a beta, but the stable version should not take much longer. More information on the Linux Mint blog.
Fedora LXQt? For now it is only a question, a proposal for Fedora 26 that still has to be discussed and that would not affect the LXDE spin in any way; the idea is to add, not to replace. We will tell you how it is resolved when it is known; in the meantime, the Fedora wiki has what there is so far.
Liri OS. From the "ashes" of Hawaii and Papyros comes Liri OS, a new distribution based on Qt5 and Material Design; given the precedents, we will see how far it gets. Even so, they deserve time, and for those who cannot wait, a nightly alpha is already available. You will find all the information starting from the official announcement.
PulseAudio 10.0. We close the distributions block with a component arguably more important than any single installer: the GNU/Linux sound server. This new version brings several changes, all listed in the release notes.
Fedora vs Ubuntu vs openSUSE vs Clear Linux For Intel Steam Gaming Performance. It can be said louder but not clearer: with this week's news about the Mesa improvements for Intel graphics, these Phoronix benchmarks arrive at a good time. And for the most dedicated, there is also a comparison with Windows 10 and Kaby Lake.
recalboxOS or RetroPie? We finish with one last recommended article, a tutorial from our colleagues at MuyComputer that fans of tinkering and retro gaming will love. It involves a Raspberry Pi, components, peripherals and very specific Linux distributions. You can read it in "How to build your own home arcade for less than 80 euros".
sololinuxes · 5 years
Text
Razones para cambiar a linux
Tumblr media
Razones para cambiar a linux. Prefieres Windows o Linux?. Si visitas mi sitio web sobre Linux, creo que ya se la respuesta, pero ¿cuáles son las razones para cambiar a Linux?. Linux es un sistema operativo 100% libre, eso está claro; pero existen otros motivos de peso para que mudes a linux, si es que no lo has echo ya, en el articulo de hoy vemos las mas importantes.  
Razones para cambiar a linux
Linux es gratis Una de los motivos por los que debes cambiar a linux es su precio, independientemente de la distribución que elijas,  Debian con Ubuntu y Linux MInt, derivados de Red Hat como Fedora o Centos, el propio Arch con Manjaro, y otras tantas que me dejo en el tintero; son gratis. Salvo alguna excepción o distribución especifica, la practica totalidad de distros linux no solo son gratuitas, también son de código abierto. Te parece poco?, pues tranquilo que aun tenemos más, porque de la misma forma que se distribuye linux, igualmente se concede la inmensa mayoría de su software. Soy consciente que "los de las ventanas" saldrán con lo de siempre... que si photoshop, que si no se que, bla,bla,bla, palabrería nada más. Vamos a ver, tu piensa, muchos son los que vienen a linux, muy pocos los que retornan a Windows.   Linux es estable Tras más de 20 años usando linux y anteriormente otros sistemas Unix, puedo contar con los dedos de mis manos los bloqueos que me han ocurrido por culpa del sistema, algo común en otros sistemas con sus típicas pantallas azules más conocidas como pantallazo de la muerte, jaja. Linux es capaz casi al 100% sin reiniciarlo y sin peligro de cosas extrañas. Si por algún caso, una aplicación o software se bloquea en tu Linux, puede matarlo con un solo comando de forma sencilla y confiable desde la terminal. En Windows, cruza los dedos y reza a San Pantuflo para que el administrador de tareas sea capaz de cerrar el proceso que genera inestabilidad.
Privacy and security
By design, Linux is a much more secure operating system than its paid counterparts. I have never seen a security problem on a system that was kept up to date (a short sketch of how to keep a Mint-style system updated follows at the end of this post). When the big companies in the sector, such as Google and Amazon, and even Microsoft itself, hardly ever suffer security breaches on their systems... there is a reason, and it is no coincidence: they all run Linux on their critical servers (Microsoft included). Obviously nobody can claim that Linux is 100% secure, but it is fair to call it the most secure option around.

Support and compatibility
Support. Support for Linux is unmatched by any other operating system. Whatever your problem and whichever distribution you use, tutorials, manuals, helpful users and more mean you will find a solution very quickly. There are also websites like this one, mailing lists, groups and forums whose sole purpose is to help you resolve your doubts and keep learning. Beyond the community, every Linux distribution has its own specific documentation and forums; a good example is the Arch Linux wiki, which is among the best you can find.

Compatibility. On other operating systems, every new release seems to demand a hardware upgrade. It makes you suspect that the operating system developers and the hardware makers have an understanding; if not, someone should explain it to me. With constant changes in design requirements and specifications, everything becomes obsolete in a way that is almost insulting to the user. It has a name, planned obsolescence, and Windows, Apple and the big hardware manufacturers are experts at it. Fortunately, this abusive practice does not exist in the Linux world, and it is one of the reasons why thousands of Windows 7 users (its support is ending) are seeking refuge in Linux distributions that look familiar and are easy to use; I recommend reading this article.

Final thoughts
We have covered the main reasons why you should switch to Linux right away. Each of them alone is reason enough to migrate, and that is leaving out others such as ease of use, excellent package management, flexibility, constant updates, speed, performance, the system's small footprint, and many more. Linux is waiting for you! This article is not an attack, it is simply how things are; I am aware that Windows has as much right to exist as any other living creature. Even so, my mind is made up: it won't be eating off my plate.

Telegram channels: Canal SoloLinux – Canal SoloWordpress. I hope this article is useful to you; you can help us keep the server running with a donation (PayPal), or simply share our articles on your website, blog, forum or social networks.
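One more sketch, added for this guide rather than taken from the original post: the "keep your system updated" advice above, applied to a Debian/Ubuntu-based distribution such as Linux Mint. The unattended-upgrades package is a common way to automate security patches; treat the exact setup as an assumption to adapt to your own system:

# Refresh the package lists and apply all pending updates
sudo apt update
sudo apt full-upgrade -y
# Optionally, let security fixes install themselves automatically
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

On non-APT distributions the idea is the same, only the package manager changes (dnf, pacman, zypper and so on).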
2018-03-12 06 LINUX now
LINUX
Linux Academy Blog
Linux Academy Weekly Roundup 109
The Story of Python 2 and 3
Happy International Women’s Day!
Month of Success – February 2018
AWS Security Essentials has been released!
Linux Insider
Deepin Desktop Props Up Pardus Linux
Kali Linux Security App Lands in Microsoft Store
Microsoft Gives Devs More Open Source Quantum Computing Goodies
Red Hat Adds Zing to High-Density Storage
When It's Time for a Linux Distro Change
Linux Journal
Weekend Reading: Using Python in Science and Machine Learning
What's the Geek Take on the GDPR?
Purism Announces Hardware Encryption, Debian for WSL, Slack Ending Support for IRC and More
Best Editor
Looking for New Writers and Meet Us at SCaLE 16x
Linux Magazine
OpenStack Queens Released
Kali Linux Comes to Windows
Ubuntu to Start Collecting Some Data with Ubuntu 18.04
CNCF Illuminates Serverless Vision
LibreOffice 6.0 Released
Linux Today
A quick and easy way to make your first open source contribution
How to Access Google Drive from Ubuntu Desktop
How to create a cron job with Kubernetes on a Raspberry Pi
How to Install and Configure Nibbleblog on Ubuntu 16.04
Install and integrate Rspamd
Linux.com
LFS462 Open Source Virtualization
LFS305 Deploying and Managing Linux on Azure
LFD450 Embedded Linux Development
LFD301 Introduction to Linux, Open Source Development and GIT
A Comparison of Three Linux 'App Stores'
Reddit Linux
Need some help
Linuxbrew is using almost no CPU
Linux Operating Systems for the Raspberry Pi
Why is it still a pain to install applications locally without root permissions?
Old 2008 laptop, Manjaro vs Chakra vs Neon vs Mint vs Solus vs Fedora vs Zorin vs Kubuntu
Riba Linux
How to install Pardus 17.2
Pardus 17.2 overview | a competitive and sustainable operating system
How to install SwagArch GNU/Linux 18.03
SwagArch GNU/Linux 18.03 overview | A simple and beautiful Everyday Desktop
How to install Nitrux 1.0.9
Slashdot Linux
MoviePass Wants To Gather a Whole Lot of Data About Its Users
Elon Musk: SpaceX's Mars Rocket Could Fly Short Flights By Next Year
Elon Musk: The Danger of AI is Much Greater Than Nuclear Warheads. We Need Regulatory Oversight Of AI Development.
EPA's Science Advisory Board Has Not Met in 6 Months
Report Says Radioactive Monitors Failed at Nuclear Plant
Softpedia
Linux Kernel 4.15.8 / 4.16 RC4
Linux Kernel 4.14.25 LTS / 4.9.86 LTS / 4.4.120 LTS / 4.1.50 LTS / 3.18.98 EOL / 3.16.55 LTS
GNOME 3.26.2 / 3.28.0 RC
siduction KDE 2018.2.0
siduction GNOME 2018.2.0
Tecmint
systemd-analyze – Find System Boot-up Performance Statistics in Linux
Learn Ethical Hacking Using Kali Linux From A to Z Course
Exodus – Safely Copy Linux Binaries From One Linux System to Another
How to Setup iSCSI Server (Target) and Client (Initiator) on Debian 9
How to Install Particular Package Version in CentOS and Ubuntu
nixCraft
Debian Linux 9.4 released and here is how to upgrade it
400K+ Exim MTA affected by overflow vulnerability on Linux/Unix
Book Review: SSH Mastery – OpenSSH, PuTTY, Tunnels & Keys
How to use Chomper Internet blocker for Linux to increase productivity
Linux/Unix desktop fun: Simulates the display from “The Matrix”