#or should I run straight debian instead
ramble about FreeBSD and Unix~~
how out of my depth would I be trying to install FreeBSD?
would it even boot on my machine?
am I smart enough to go through the install for the system itself as well as get the GUI that I want?
I think you have to go through the command line for quite a bit of time before you get a GUI up and running....
I started off being really interested in BSD/Unix in high school, and tried to fiddle around with a BSD live disc thing in a book (that I don't remember the name of) and then only fiddled around with Linux.
I've been watching videos on youtube of people expressing how stable FreeBSD's modern release is~~
I want to use it on my own hardware; but I believe that's a problem with it: it supports a fairly limited range of hardware, as opposed to Linux, which you could run on practically a toaster...
Is it really that much harder to deal with than Linux?
Of course I've only dealt with a few distros~~ the rundown of distros I've messed around with is:
Ubuntu (not anymore tho)
Debian (current os being Linux Mint Debian 6)
OpenSUSE briefly (tried to get my sibling to use it on their laptop, with them knowing next to nothing about Linux, sorry...)
Fedora back in high school, I ran it on a laptop for a while. I miss GNOME....
Mageia (I dual booted it on a computer running windows 7, also in or right after high school, so a long time ago)
attempted GhostBSD but it wouldn't boot after install from the live CD (also many years ago at this point)
I like to hop around and (hopefully I've stopped now, yeah right...) I can't make up my mind which I actually want to use permanently.
Linux Mint Debian edition is really good so far tho~~!!
Current PC is an ASUS ROG Strix that I bought on impulse many years ago~~ Was running windows 10, fixed the issue and now use the OS stated above~~
or maybe I should ditch Mint and run straight Debian... Thought of that too. It might have an easier time installing and actually booting than FreeBSD on this machine...
but then BSD and by extension unix is meant to be used on older hardware and to be efficient both in execution of things, and space.
"do one thing and do it well" iirc was a bit of the unix philosophy...
yeah, no I HATE technology /heavy sarcasm/
In this tutorial, I will show you how to set up a Docker environment for your Django project that uses PostgreSQL instead of the default SQLite DB in the development phase. In our previous article, we discussed how to dockerize a Django application.

Step 1: Install Docker Engine
You need a Docker runtime engine installed on your server/desktop. Our Docker installation guides should be of great help: How to install Docker on CentOS / Debian / Ubuntu.

Step 2: Set up the Environment
I have set up a GitHub repo for this project called django-postgresql-docker-dev. Feel free to fork it or clone/download it. First, create the folder to hold your project. My workspace folder is called django-postgresql-docker-dev. cd into the folder, then open it in your IDE.

Step 3: Create the Dockerfile
In a containerized environment, all applications live in a container. Containers themselves are built from images. You can create your own image or use other images from Docker Hub. A Dockerfile is a text document that Docker reads to automatically create/build an image. Our Dockerfile will list all the dependencies required by our project. Create a file named Dockerfile. In the file type the following:

# base image
FROM python:3
# maintainer
LABEL Author="CodeGenes"
# This environment variable ensures that python output is sent straight
# to the terminal without buffering it first
ENV PYTHONUNBUFFERED 1
# directory to store app source code
RUN mkdir /zuri
# switch to the /zuri directory so that everything runs from here
WORKDIR /zuri
# copy the app code to the image's working directory
COPY ./zuri /zuri
# let pip install the required packages
RUN pip install -r requirements.txt

Create the Django requirements.txt file and add your project's requirements. Note that I have some requirements that you may not need, but the most important ones are Django and psycopg2: Django>=2.1.3,=2.7,
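The tutorial is cut off before wiring Django to the PostgreSQL container, but that step typically uses Docker Compose. As a hedged sketch only (service names, database name and credentials below are placeholders of mine, not from the article), a minimal docker-compose.yml might look like:

```yaml
version: "3"

services:
  db:
    image: postgres:13          # any recent postgres tag works for development
    environment:
      POSTGRES_DB: zuri         # placeholder credentials, matching the project folder name
      POSTGRES_USER: zuri
      POSTGRES_PASSWORD: changeme

  web:
    build: .                    # uses the Dockerfile above
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./zuri:/zuri
    ports:
      - "8000:8000"
    depends_on:
      - db
```

Django's settings.py would then point its DATABASES HOST at db (the Compose service name) with psycopg2 as the driver.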
Maemo For Mac
This is a TECHNOLOGY PREVIEW of a new development tool for Maemo. MADDE stands for Maemo Application Development and Debugging Environment and offers the following features: command-line cross-compiling; multi-platform support (Linux (32-bit/64-bit), Windows, Mac OS X); configurable for different targets & toolchains. Maemo Community e.V.: Invitation to the General Assembly 01/2015; nomination period closed for the Q4 2014 council election; announcement of the Q4 2014 Community Council election. GPE is a suite of applications that was ported to Maemo. Search for GPE in the downloads section for your version of Maemo. (You'll probably want GPE Calendar, GPE Contacts, gpesyncd to start with.) These are standalone applications; there's no integration with the Maemo address book. You'll not find data from GPE in the Mail client or Chat.
The Nokia 770, N800 and N810 are 'Internet Tablets' running Maemo: a handheld Linux distribution based on Debian.
Although there is a command line flasher available for Mac OS X from Maemo, there's no official GUI interface for it. This has been written using Platypus and CocoaDialog and is, of course, supplied with no warranty.
This is not affiliated with Nokia and so if your machine turns into a mongoose and starts dancing ballet, don't blame me. Or blame me, but don't complain - or, more importantly, sue.
Usage
Download the latest Nokia image (large file ending in .bin, for example Nokia_770_0.2005.45-8.bin) and either select this file when prompted by 770Flasher, or just drag the file on to the 770Flasher icon.
Screenshot
770Flasher-2.0.dmg (Mac OS X disk image, 361K, requires 10.3 or above)
tablet-encode (aka 770-encode)
770-encode has now been renamed tablet-encode and moved to a larger project called mediautils.
Due to the unreliability of garage.maemo.org, there is a mirror here:
mediaserv
mediaserv is a project which allows you to convert, on-the-fly, video from a Linux, Unix or Mac OS X box and watch it on your Nokia Internet Tablet. It even integrates with VideoCenter.
Like tablet-encode, this is part of mediautils.
Due to the unreliability of garage.maemo.org, there is a mirror here:
mediaserv.tar.gz (Perl tarball, v0.05, 29K)
mud-builder
MUD is an auto-builder, designed to make it easier for people to port, in a simple and maintainable fashion, software to Maemo; customising the resulting packages to Maemo's subtle requirements.
More info can be found on its Garage page.
Wikipedia
Wikipedia is an excellent online resource and, tied with a network connection, a Nokia 770 is almost equivalent to the Hitchhiker's Guide to the Galaxy. Although not yet available offline for Maemo, it is possible to enhance Wikipedia to make it look better on the 770's screen.
The default skin contains a long left-hand column; however, by creating an account with Wikipedia (which is free), you can change the 'skin' to one more suited to a device such as the 770.
Default style
'MySkin' style
Usage
Create an account on Wikipedia.
Go to the URL http://en.wikipedia.org/wiki/User:YourUserName/myskin.css.
Paste the code below into the text area and click Save:
/* <pre><nowiki>*/ @import url('http://www.bleb.org/software/maemo/wikipedia/myskin.css'); /* </nowiki></pre> */
Go to your Preferences page and selectthe Skin category.
Select MySkin and click Save.
Backgrounds
Under development
I've currently got the following under development. For each there is a short description and links to screenshots and photos. If you have any questions on them, please don't hesitate to contact me. Updates will be provided in my diary.
Better Maemo planet layout
I don't like the new Maemo Planet that much. I've developed a user style for Firefox to turn it into this.
ArcEm
Acorn Archimedes emulator, allowing RISC OS to be run on an ARM device in your pocket. [1], [2], [photo 1], [photo 2].
NetSurf
A lightweight open source web browser, for when Opera is deciding to be temperamental. [1], [2], [3], [photo].
Galculator
A scientific calculator. No screenshots available, but a straight-forward port of a Glade application.
Java
Following on from Alexander Lash's work porting JamVM/Classpath/Jikes to Maemo, I've some thoughts on auto-Hildonisation of Java applications which could help make Java a suitable high-level language for Maemo application development.
Older stuff
Sylpheed
Sylpheed is a full-featured email client: supporting POP3, IMAP, SSL and everything else you'd expect. The full feature list can be seen at the Sylpheed homepage.
This is a port and Hildonisation of Sylpheed to integrate it as a proper Maemo application. It's not finished, and so should be viewed as an alpha-release. You may be better off using Claws or (even better, hopefully) Modest.
Known bugs
Not all windows are Hildonised yet (that is, many have menubars rather than pop-up menus, and so on).
Fix dependencies on N800 to avoid start-up problem (see this solution in the mean time).
Full-screen button doesn't work.
Some windows appear too small, others too big.
Select from middle of direction pad should open message in proper view window.
Problems with (some?) LDAP servers.
...
Limitations
No GPG support as yet.
Address book functionality removed due to a bug.
Built-in FAQ, manual and support for non-English languages removed forspace reasons.
Screenshots
sylpheed.deb (Maemo v2 package, v2.2.0rc-3, 511K)
Rebuilding from source
If the binary above whets your appetite for Maemo development, and you want to help with this port, the Maemo port is being maintained in a Subversion repository.
Username/password: guest/guest. [Browse the source]
Synchronisation and backup using rsync & make
Please note this has not been updated for the 2006 OS; instead I prefer bind-mounts. However, it is easily customisable.
The built-in backup/restore tool doesn't back up all your device's configuration or installed applications. This script (a Makefile) meets those requirements and allows for maintaining patched parts of the root filesystem across firmware upgrades.
Usage
Requires rsync and SSH (on both 770 & host computer) and make on the host.
To 'install' the script:
Create a new, empty directory, on a Unix-like box (e.g. Mac OS X, Linux, *BSD, Windows with cygwin) and ensure you have rsync, make and SSH installed.
Download Makefile.770sync and move it to the new directory, named Makefile.
Modify the line beginning REMOTE_DEVICE to point to your 770. For example, my 770 has a fixed IP, I have root access on it (by enabling R&D mode) and Dropbear is running on port 22 (the default), therefore the line in my local copy says:
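A hypothetical REMOTE_DEVICE line of the shape the Makefile expects (the IP address here is a placeholder, not the author's real value; the user@host:port/ format matches the override shown under bootstrap below):

```make
# hypothetical: fixed IP, root login, Dropbear on the default port 22
REMOTE_DEVICE = root@192.168.2.15:22/
```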
There are three 'targets' which can be executed to act onyour device. To execute them, run the following command:
make target
push
Push changes in the local copy to the remote device. This is effectively a restore from a backup, if pull has been previously run.
pull
Pull changes from the remote device to your local copy. Effectively performing a backup.
bootstrap
Similar to push but doesn't use rsync. This is useful when your device has just been reflashed and is missing any software. By just installing and starting SSH, this target can be used to restore your documents, changes and applications (such as rsync). An article on dillernet.com, Recovering From A Firmware Flash, has some techniques which may also help in this regard (specifically scripts to install the needed packages & SSH keys from the MMC card).
Since bootstrap will generally be required when reflashing and before SSH is running as root, SSH may well be listening on port 2222 (if started by an unprivileged user), rather than port 22. Therefore, you can override the REMOTE_DEVICE variable:
make [email protected]:2222/ bootstrap
Note: in this example, as the SSH server was started as anormal user, it would not be possible to restore symlinks in /etc.
Example
One common requirement when SSH is installed is starting it automatically when your device is turned on. This is easily done by creating a symlink, as described in the InstallSsh document in the wiki.
Unfortunately, when you reflash your device, this symlink will be lost. However, by using the script you can ensure that this (and similar changes) are put back on the device when you restore:
$ mkdir ~/770-sync
$ cd ~/770-sync
$ wget http://bleb.org/software/maemo/Makefile.770sync
$ mv Makefile.770sync Makefile
$ mkdir -p etc/rc2.d
$ ln -s /var/lib/install/etc/init.d/dropbear-server etc/rc2.d/S99dropbear-server
$ make push
As you can see, the local 770-sync directory contains a copy of any changes you've made to the file system. In addition, the user's home directory, the configuration and the installed applications are pulled back on a pull operation.
Citrix ICA client
The below screenshots show that the Citrix ARM Linux client can be got to run on a Nokia 770, although it is currently not much use: the virtual keyboard is tied to onboard GTK+ applications. A USB keyboard, or a Bluetooth keyboard using kbdd, should work, however.
The Citrix install file won't work with busybox's 'expr' implementation, nor without 'cpio'. The application itself requires a few extra debs (which fortunately Debian/ARM can provide):
libxaw6_4.3.0.dfsg.1-14sarge1_arm.deb
libxmu6_4.3.0.dfsg.1-14sarge1_arm.deb
libxp6_4.3.0.dfsg.1-14sarge1_arm.deb
libxpm4_4.3.0.dfsg.1-14sarge1_arm.deb
Unfortunately, with the advent of the 2006 OS and the use of EABI, older ARM Linux binaries will no longer work on the 770 without recompilation. Therefore, unless Citrix recompile and provide new binaries, or an open source client is made available, Citrix is not easily possible on a modern Maemo device.
vim/rsync
These ports were for the 2005 OS, and have now been removed. mud-builder is a project which aims to simplify simple ports like these in future.
Thursday 30th
This morning I wanted to start manipulating line-in audio, and set up the output to go through headphones for the sake of the people around me. Turns out the signal is a lot louder when it's in your ear, so I had to make a volume control. I first tried putting a 100 ohm potentiometer next to the switch, but it didn't have enough range. It was also 'before' the timer chips in the circuit, so it also changed the pitch of the output sound. I had a 2K ohm slider controlling one of the timer's resistors, so I swapped those and the volume control worked.
I also changed up the circuit so that by pressing a switch I could change where the speakers got their signal from - either both of the 555 timers, or just the first one. I thought this would give some variation in the sound, but it really wasn't a big difference. Because of this I put it back to how the circuit was, but it does make me wonder how much of an effect running dual timers really has. I know that sometimes they create some phase offset distortion sounds, but only at certain higher frequencies.
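Why the slider changes the pitch: in the standard 555 astable wiring, the output frequency depends on the two timing resistors and the capacitor. A small sketch of that relationship (the component values below are hypothetical, not measured from my circuit):

```python
def astable_frequency(r1_ohms, r2_ohms, c_farads):
    """Approximate output frequency (Hz) of a 555 timer wired in astable mode:
    f = 1.44 / ((R1 + 2*R2) * C)."""
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

# Hypothetical values: R1 = 1k, C = 100 nF, with the 2K slider acting as R2.
# Sliding R2 up lowers the pitch.
for r2 in (200, 1_000, 2_000):
    print(f"R2={r2} ohm -> {astable_frequency(1_000, r2, 100e-9):.0f} Hz")
```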
I created the line-in circuit today, and I think it was working. The first time that I thought it was working, it was actually skipping the transistor and going straight to the speaker, but now it is actually being distorted properly. Transistors work by sending current from a source (collector, 9 V+) out to the emitter, based on the input from the base. In this way, it functions as an amplifier, by using the line-in voltage as ‘cues’ that send the 9V though, so you’re actually hearing the 9V.
Friday 31st
So, at this point I’ve given up on using line in audio. I’ve reached the limit of my capabilities, and am pivoting my idea. I don’t know how to use line-in audio as
I had the idea of using my Raspberry Pi to automate the oscillator. It would be like the tune-able keyboard I made a couple of days ago, except the current would flow through the RPi's GPIO pins, and would trigger each resistor path/pitch in sequence. As this sequence played, you could change the pitch of each of the four/five steps by twisting the potentiometers.
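The stepping logic of that sequencer idea can be sketched independently of the hardware. The pin numbers below are hypothetical placeholders, and `set_pin` stands in for whatever GPIO backend is available (e.g. RPi.GPIO output); on real hardware you'd also sleep between steps for tempo:

```python
from itertools import cycle

# Hypothetical BCM pin numbers, one per resistor path/pitch step.
STEP_PINS = [17, 27, 22, 23]

def run_sequence(set_pin, steps=8):
    """Drive one pin at a time, in order, like a 4-step sequencer.
    set_pin(pin, state) is the hardware backend; here it can be any callable."""
    fired = []
    for pin in cycle(STEP_PINS):
        if len(fired) >= steps:
            break
        for p in STEP_PINS:
            set_pin(p, p == pin)  # only the active step's transistor conducts
        fired.append(pin)
    return fired
```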
So I tried to get it working today, but ran into a lot of issues. First, I spent most of the day just trying to remember/crack the password I had set, and forgotten. After a couple of hours I gave up and re-flashed the SD card. To control the Pi, I was running it headless and using SSH.
After I was finally able to SSH into the Pi, the next obstacle was internet connection; I brought my own router into the studio and wired my laptop and Pi into it. Although it didn't have a connection to the internet, it allowed me to find the IP of the Pi easily through the router's DHCP lease page. When I wrote a python script to control the GPIO pins, the RPi couldn't find the module. Turns out the OS I was running didn't come with it, so I would need to connect both my laptop and the Pi to the internet. This was also an issue because AUT uses PEAP on their WiFi network, so I didn't know how to connect with the Pi. I looked it up and there's a relatively simple solution - if you're running Raspbian or Debian or some normal OS, but as I have been using it as a media server my Raspberry Pi runs OSMC, which keeps its WiFi files somewhere else. At this point it was getting late and I was frustrated and tired, so I headed home.
But the issues didn’t stop there.
At home, I tried to set up the Pi and my laptop, but WiFi kept dropping out on my laptop for some reason, and the Pi’s WiFi adapter wasn’t connecting at all.
So, I found an old WiFi extender and tried to set that up, but an infuriating hour later I decided that it was actually broken. I felt like the victim of a mean spirited prank show, that everything that could have gone wrong, had. But, I kept trying; I relocated, reset the router, and gave it another go.
Finally, I could actually connect to the Pi through SSH, and this small victory gave me some hope. I kept working on it and eventually got the Pi recognizing a button press on the breadboard.
After looking into how I would control the oscillator, I realized I couldn't run the 9V's current through the Pi at all, but would have to use the Pi’s own current to control the 9V’s. The obvious answer was using relays, but that seemed like massive overkill. Then I remembered how the line-in circuit worked - using transistors to control a larger voltage circuit with a smaller one.
I looked up ‘transistor switch’ and it’s a thing!
So, I put it together on the breadboard and was able to switch an LED between three brightness states by ‘opening’ one of three transistors to complete the circuit - each with a different resistor.
Saturday and Sunday
Re-built my circuits with all the switches and dials through a box to keep it more organised.
Having issues with these transistor switches; they’re not working when they should be, and do work when they shouldn't. I can get them to trigger using positive output from the first timer into the base, and I guess I could use this, but the overall resistance of the circuit is affecting the frequencies so I can’t seem to get two different resistances/pitches.
I think this is happening because it's actually using the base pin's current, rather than the collector's. I don't know why it's doing this, but it means the first test with the Raspberry Pi and LED could have been a false positive - the LEDs may have just been powered by the Pi's current rather than the 9V's. I guess I could have tested this by disconnecting the battery, but at the time I didn't think to do that.
I think I’ll try re-building the original resistor keyboard from Wednesday, and then when I have that working I’ll sue the pi+transistor switch. Hopefully I’ve just rebuilt the circuit wrong and it's not a more fundamental issue with the transitions being used in this way.
For this last iteration, I actually disassembled my original dual timer oscillator and made the circuits more dense, so it only takes one breadboard instead of two. Although it does work, it doesn't sound as good or have the range that the last configuration had. I don't know how to fix it, and as the idea for the sequencer hasn't worked I'll be disappointed if my final instrument is a worse version of a previous iteration. Hopefully I can get it doing something interesting.
As I was writing this blog post, I had the idea to create two separate single timer oscillators, and use the Pi transistor switch to change power between them. This should work, but I guess I'll find out.
Neuer Blogpost: Transfer Raspbian from NOOBS partition to regular SD Card
The following blog post is in English only … and very nerdy. — The following blog post describes how to transfer a living Raspbian (a/k/a Debian on a Raspberry Pi) that was installed via NOOBS, freeing it from the abstruse NOOBS partition scheme and then storing it on a regular SD card with just a boot and a root partition.
The proposed method is extremely useful if you want to grow or shrink your Raspbian OS to a bigger or smaller SD card, or if you just want to backup your Raspi and not end up with a disk image that is sized like the entire SD card, even if it was only half full. All three of these scenarios can usually be tackled, e.g. with ApplePi-Baker v2, but it fails if your starting point was NOOBS. … As I googled for a HowTo like this and couldn't find anything, I figured that other people might find this worth reading as well.
So let the nerd talk begin. I require you to meet the following …
Prerequisites:
Your Raspi in question has booted into the Raspbian OS that you would like to transfer.
You are sitting in front of it, with a keyboard and a monitor connected to it. (An ssh remote access won’t suffice.)
A root password has been set and you know it. (A password-less sudo won’t suffice.)
You are logged into the text console as root. (If you’re still at the desktop, you should be able to switch to text console 2 with Ctrl-Alt-F2.)
The target SD card is attached to the Raspi via a USB card reader, but preferably not yet mounted.
First of all, make sure that the target SD card is big enough to hold the living system, duh. So please check with “df -h /”, just to be sure. Remember, a little bit of breathing space won’t harm. So if your Raspbian takes up 3.5 GB, it’s probably a good idea to use an 8 GB card. — Just saying …
Checking the existing partitions
Now check which partitions are actually mounted. Try “mount” and you should see which partition is mounted to “/” (your root partition) and which one is mounted to /boot (your boot partition). … Sometimes (e.g. with Debian 7 “Wheezy”) you might see that your root partition is mounted via /dev/root – in that case you might want to check with “ls -la /dev/root” where that symlink actually points to.
All that should also be reflected when you check "cat /etc/fstab": Depending on your prior NOOBS installation, your actual partition IDs might vary, but typically you'll have /dev/mmcblk0p6 to be your boot partition, and /dev/mmcblk0p7 to be your root partition. … Take a note of this – if nothing else, take a mental note.
Partitioning the target SD card
Yes, we’re gonna do that manually – here be dragons. But fear not, it’s not that difficult.
Your target SD card should be attached to your Raspi via an SD card reader, and should thus be available as /dev/sda. Make sure with “mount” that none of the sda1/sda2/sda3 etc. partitions are mounted – and unmount them prior to continuing.
Now please fire up the fdisk partition manager with "fdisk /dev/sda" and enter "p" to see what kinds of partitions are present. Those need to be deleted with the "d" command. … It will then ask you which partition to delete – or, if there's only one left, delete that automatically. – You may at any point check where you stand with "p" and make sure you've deleted each and every partition that was present before. Don't worry, nothing is permanent until the modified partition table is written back to the SD card. If anything goes wrong, simply hit Ctrl-C and start over.
After deleting the former partitions, we’ll now create the boot partition – which is supposed to be partition #1 at the beginning of the SD card, and typically it’s 60M big. The sequence of text commands to be entered (which you certainly shouldn’t follow blindly, but consciously – so nothing strange will happen along the way on your particular machine) should be like:
“n” – new partition
“p” – primary
“1” – partition #1
Enter – default starting position at the beginning of the SD card
“+60M” / Enter – we want to have that partition to be 60M big
“t” – change the partition type
(“1”) – select partition #1, probably selected automatically, as it’s the only one right now
“c” / Enter – “W95 FAT32 (LBA)”
When you check with “p”, you should see something like this:
/dev/sda1 2048 124928 61440 c W95 FAT32 (LBA)
Now for the root partition – this goes down a bit quicker:
“n” – new partition
“p” – primary
“2” – partition #2
Enter – default starting position right behind the boot partition
Enter – use the rest of the SD card
When you now check with “p”, you should see something like this (in my case on an 8 GB SD card):
/dev/sda1 2048 124928 61440+ c W95 FAT32 (LBA)
/dev/sda2 124929 15523839 7699455+ 83 Linux
Now the moment has come to write the new partition table. This will erase any previously existing partition(s) that was present before you deleted them. … This is the point of no return: Press “w” and hit the Enter key.
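As a side note on where fdisk's numbers come from: the 61440 in the Blocks column is just the "+60M" expressed in 1 KiB blocks. A tiny sketch of that arithmetic (assuming classic 512-byte sectors and fdisk's KiB block column):

```python
SECTOR_BYTES = 512  # classic MBR sector size, what fdisk converts "+60M" into

def mib_to_sectors(mib):
    """Number of 512-byte sectors in a given number of MiB."""
    return mib * 1024 * 1024 // SECTOR_BYTES

def mib_to_blocks(mib):
    """fdisk's 'Blocks' column counts 1 KiB blocks."""
    return mib * 1024

print(mib_to_sectors(60), mib_to_blocks(60))
```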
Formatting the freshly created partitions
Now we’ve got a DOS compatible boot partition and a Linux compatible root partition. All we have to do now for them to be usable is to format them, like this:
mkdosfs /dev/sda1
mkfs.ext4 /dev/sda2
mkdosfs will be done in a split second, while mkfs.ext4 will take a little longer while delivering a couple of status messages. Nothing unusual here.
Copying the boot partition
The DOS compatible boot partition contains a couple of firmware files and configuration scripts that the Raspi needs to boot up. It should be mounted at /boot – now it’s time to mount the target boot partition to be able to copy its contents.
mount -t vfat /dev/sda1 /mnt
You might omit the "-t vfat", as mount usually assumes "-t auto" nowadays. After that, please type "mount" to check that /dev/sda1 was indeed mounted at /mnt; then type "df -h /mnt" to check that you're really looking at the 60M partition that we just created and afterwards formatted.
As for the actual copying, type the following:
cd /boot ; tar cvf - . | (cd /mnt && tar xf -)
In plain English: First change into the /boot directory where the source files reside. Our nice tar tool is going to “create a tape archive”, but will then pipe its output straight to stdout. This will be taken up via stdin by the second tar process at the end of that pipe, which will first change into the /mnt directory, and then “unpack” what it receives. … Don’t worry, no actual .tar archive is being written during the process, it’s all happening on the fly in memory.
After the copying has finished, “ls -la /boot” and “ls -la /mnt” should display pretty much the exact same files. … If something went wrong, like if perhaps your original /boot partition is bigger than 60M, start over with deleting and creating the partition – but now create a bigger boot partition.
Modifying the command line
With /dev/sda1 still mounted as /mnt, this is the right time to edit the Raspi’s future boot-time command line.
So please enter “vi /mnt/cmdline.txt” (or use your preferred light-weight text editor instead). In there, you should see (among other things) something like “root=/dev/mmcblk0p7” (i.e. the root partition that you took a mental note about): Change that to “root=/dev/mmcblk0p2”, as this is going to be our new root partition; currently, it’s sda2, which will become mmcblk0p2 later. (No, we haven’t copied that one just yet, hold on.)
Save the cmdline.txt file, leave the /mnt directory (e.g. with “cd /”), and then unmount both /boot and /mnt with “umount /boot” and “umount /mnt“.
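The cmdline.txt edit is a single substring swap on a one-line file. For illustration (the sample line below is a made-up but typical Raspbian cmdline, not the one from my card), the same edit scripted in Python:

```python
import re

# A made-up but typical Raspbian cmdline.txt line (one line, space-separated options):
SAMPLE = ("dwc_otg.lpm_enable=0 console=tty1 "
          "root=/dev/mmcblk0p7 rootfstype=ext4 rootwait")

def retarget_root(cmdline, new_root="/dev/mmcblk0p2"):
    """Swap the root= option to point at the new root partition.
    The word boundary keeps rootfstype= and nfsroot= untouched."""
    return re.sub(r"\broot=\S+", "root=" + new_root, cmdline, count=1)

print(retarget_root(SAMPLE))
```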
Entering single user mode and freezing the root filesystem
Now for the big guns: We’re about to duplicate the content of the existing, living and breathing root partition. This takes a few preparations. First of all, let’s head into single user mode. — Note: Everything up to this point could have been done remotely via ssh, but once we’re kicking down networking and other essential system daemons in a second, all you’re going to have is the text console, i.e. your real monitor and your physical keyboard.
So enter “init 1” to switch to single user mode. Raspbian will now ask you for your root password to do so. … If root doesn’t have a password set, you won’t be able to switch into single user mode.
Once you’ve reached single user mode, please check whether your root filesystem is still mounted read/write or if it has been remounted read-only. Simply enter “mount” and see for yourself: If you should see something like
/dev/mmcblk0p7 on / type ext4 (ro,noatime)
then your root partition has been re-mounted "ro" = read-only. (In Raspbian 10 / "Buster", this should be the case.) But if you should see something like
/dev/root on / type ext4 (rw,noatime)
your root partition is still "rw" = "read/write". (I've seen this in Raspbian 7 / Wheezy.) In this case, I recommend freezing any write access to the root file system by entering "fsfreeze -f /" right now.
Copying the root partition
Now for the actual copying. Did you unmount all unnecessary partitions yet? If not, just to be sure, enter “umount -a”. Then it’s time to mount the target root partition on the target SD card. So enter the following:
mount -t ext4 /dev/sda2 /mnt
Again, you could probably omit the "-t ext4" and it should still mount perfectly fine. However, better check with "mount" that you've actually mounted the ext4 partition /dev/sda2 at /mnt.
As we are about to copy the content of the root partition, you should know that we need to leave out a couple of directories:
/mnt – well, duh, we’re going to drop our files there, so trying to copy them would lead to a loop
/proc
/sys
/run
The latter three directories do not contain real files, but virtual ones that represent the currently running Linux system one way or another. They can’t be copied, but it isn’t necessary, as their contents are being created dynamically anyway.
Brace yourself for a lengthy command, again consisting of two tar processes, one gathering the files, one writing them. Here goes:
cd / ; tar --exclude=./mnt --exclude=./proc --exclude=./sys --exclude=./run -cvf - . | (cd /mnt && tar xf -)
This is supposed to be one long line, as shown in this screenshot:
Please note that the standalone dash after “-cvf” is crucial in this case – it tells the first tar to write the archive to stdout (as opposed to when we were copying the boot partition earlier).
Hit enter – and have a little patience … actually a lot of patience, this might take a bit of time, copying a couple of GBs over USB2 to the target SD card.
As a matter of fact, the long list of filenames scrolling over the screen slows things down. You might switch over to another text console with Alt-F2 (or Alt-F3, in case you’re already on console 2). The other text console won’t be functional, but you will notice that the copying is faster, as you can tell by the more rapid flashing of the SD activity LED. Switch back and forth to watch what’s going on.
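If you’d like to convince yourself beforehand that the two-tar pipeline and the excludes behave as expected, you can rehearse the whole thing on a scratch directory first. All paths here are throwaway temp dirs, not the real partitions:

```shell
# Rehearse the two-tar copy on throwaway directories.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/etc" "$src/skipme"
echo "hello" > "$src/etc/config"
echo "junk"  > "$src/skipme/tmpfile"

# Same shape as the root-partition command: the first tar writes the
# archive to stdout ("-cf -"), the second extracts it in the target dir.
(cd "$src" && tar --exclude=./skipme -cf - .) | (cd "$dst" && tar xf -)

# The included file arrives intact; the excluded directory does not.
diff "$src/etc/config" "$dst/etc/config" && echo "copy OK"
[ ! -e "$dst/skipme" ] && echo "exclude OK"
```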
When something goes wrong: Interrupt the copying
If something looks odd – like perhaps when tar misbehaves and won’t honor the excludes: A simple Ctrl-C won’t be able to stop the copying.
I suggest the following method: Hit Ctrl-Z. This will suspend the copying as a stopped job. You can check with “jobs” that it’s still there. If you really want to interrupt the copy process, enter “killall tar” – the two tar processes should be the only ones running (which you could check with some variation of “ps” if you want to be sure). After killing the two tar processes, bring the suspended job back into the foreground with “fg” – and I promise it will terminate immediately.
After such an emergency stop, I’d go for “umount /mnt” to unmount the half-copied root filesystem – this might take a little while, as Raspbian is probably still syncing its write cache. After that, re-format the root partition with mkfs.ext4 and start over with a modified command line that works better for you. … No, you won’t have to re-format and re-copy the boot partition if that went well before.
If everything went well: Creating empty folders and modifying the fstab
Let’s assume the copy process took its time, but went through just fine. — If you froze the read/write root filesystem, this is the time to unfreeze it with “fsfreeze -u /”.
We excluded /mnt, /proc, /sys, and /run from the copying. Still, those four folders should exist as empty folders in the new root partition. So type the following:
cd /mnt ; mkdir mnt proc sys run
The most essential thing to do now, however, is to make sure that the copied Raspbian will mount the correct partitions as its boot and root partitions. So enter “vi /mnt/etc/fstab” (or use your preferred light-weight text editor instead again). In there, you should see something like
/dev/mmcblk0p6 /boot vfat defaults 0 2
/dev/mmcblk0p7 / ext4 defaults,noatime 0 1
Change that to
/dev/mmcblk0p1 /boot vfat defaults 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
and save the file.
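If you prefer not to edit the file interactively, the same change can be made with sed. This sketch operates on a scratch copy; on the real system the target would be /mnt/etc/fstab (and sed -i.bak would keep a backup, just in case):

```shell
# Rewrite the old NOOBS partition names to the new card's names.
# We operate on a scratch copy here; on the real system the target
# file would be /mnt/etc/fstab.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mmcblk0p6 /boot vfat defaults 0 2
/dev/mmcblk0p7 / ext4 defaults,noatime 0 1
EOF

sed -i -e 's#^/dev/mmcblk0p6 #/dev/mmcblk0p1 #' \
       -e 's#^/dev/mmcblk0p7 #/dev/mmcblk0p2 #' "$fstab"

cat "$fstab"
```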
One last thing: In case you have a /dev/root device that is a symlink to the actual root device, you should update that as well (I think Raspbian does that automatically, but better safe than sorry).
If “ls -la /mnt/dev/root” displays something like “/mnt/dev/root -> mmcblk0p7”, please enter the following:
cd /mnt/dev ; ln -sf mmcblk0p2 root
Now the root device symlink should be in place and correctly point to mmcblk0p2.
Shutting down, swapping cards, rebooting
Everything is prepared: Shut down your Raspi with “shutdown -h now” and wait until the SD activity LED flashes 10 times in a row before you can safely cut the power supply.
Now pull out the USB SD card reader, remove the card, and replace the Raspi’s original SD card with the one we just partitioned, formatted, and copied. Fingers crossed – once you power up the Raspi, it should boot the cloned system as if nothing had happened. With one major difference: You won’t see the “For recovery, hold Shift” message anymore, as we got rid of NOOBS.
The reward of everything we just did is: You can now power down your Raspi, take out the SD, pop it into your Mac’s SD card reader, and create a beautifully shrunken backup copy of it with ApplePi-Baker v2.
Additional notes about PARTUUIDs vs. /dev/mmcblk0p* device names
I recently noticed that in some Raspbian installations, both /etc/fstab and /boot/cmdline.txt do not in fact contain /dev/mmcblk0p* device names for the root and boot partitions, but PARTUUIDs or UUIDs instead. You can easily retrieve those IDs via “sudo blkid” for every block device that is connected to the system, even for non-mounted ones.
As a matter of fact, when copying partitions with ApplePi-Baker v2, it will leave these IDs unchanged – even when shrinking or growing partitions during the imaging process. At least that’s my experience, and that’s probably a good thing, as otherwise the cloned systems wouldn’t boot anymore.
Anyway: If it should become necessary at some point to use these UUIDs instead of device names to address partitions, you’ll have to edit those in /etc/fstab and /boot/cmdline.txt before booting the copied Raspbian.
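Here is a sketch of what that substitution could look like. The PARTUUID value below is a made-up placeholder; on a real system you would read the actual ID from “sudo blkid /dev/mmcblk0p2” and update /etc/fstab (and the root= parameter in /boot/cmdline.txt) accordingly:

```shell
# Point an fstab entry at its partition by PARTUUID instead of device
# name. "deadbeef-02" is a placeholder; get the real value via
#   sudo blkid /dev/mmcblk0p2
new_id='PARTUUID=deadbeef-02'

fstab=$(mktemp)
echo '/dev/mmcblk0p2 / ext4 defaults,noatime 0 1' > "$fstab"

# Swap the device-name field for the PARTUUID reference.
sed -i "s#^/dev/mmcblk0p2 #${new_id} #" "$fstab"
cat "$fstab"
```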
However: While I’ve seen those PARTUUIDs on a fresh regular installation of Raspbian 10 “Buster”, I never came across them when looking at NOOBS installations. If this should change at some point, I’ll look into it and update this blog post.
von GZB – Gero Zahns Blog – ger.oza.hn https://ift.tt/2UM5Rx5
Text
Blockchains are the new Linux, not the new Internet
Cryptocurrencies are booming beyond belief. Bitcoin is up sevenfold, to $2,500, in the last year. Three weeks ago the redoubtable Vinay Gupta, who led Ethereum’s initial release, published an essay entitled “What Does Ether At $100 Mean?” Since then it has doubled. Too many altcoins to name have skyrocketed in value along with the Big Two. ICOs are raking in money hand over fist over bicep. What the hell is going on?
(eta: in the whopping 48 hours since I first wrote that, those prices have tumbled considerably, but are still way, way up for the year.)
A certain seductive narrative has taken hold, is what is going on. This narrative, in its most extreme version, says that cryptocurrencies today are like the Internet in 1996: not just new technology but a radical new kind of technology, belittled or ignored by most, which has slowly and subtly grown in power and influence over the last several years, and is about to explode into worldwide relevance and importance with shocking speed and massive repercussions.
(Lest you think I’m overstating this, I got a PR pitch the other day which literally began: “Blockchain’s 1996 Internet moment is here,” as a preface to touting a $33 million ICO. Hey, what’s $33 million between friends? It’s now pretty much taken as given that we’re in a cryptocoin bubble.)
I understand the appeal of this narrative. I’m no blockchain skeptic. I’ve been writing about cryptocurrencies with fascination for six years now. I’ve been touting and lauding the power of blockchains – how they have the potential to make the Internet decentralized and permissionless again, and to give us all power over our own data – for years. I’m a true believer in permissionless money like Bitcoin. I called the initial launch of Ethereum a historic day.
But I can’t help but look at the state of cryptocurrencies today and wonder where the actual value is. I don’t mean financial value to speculators; I mean utility value to users. Because if nobody wants to actually use blockchain protocols and projects, those tokens which are supposed to reflect their value are ultimately, well, worthless.
Bitcoin, despite its ongoing internal strife, is very useful as permissionless global money, and has a legitimate shot at becoming a global reserve and settlement currency. Its anonymized descendants such as ZCash have added value to the initial Bitcoin proposition. (Similarly, Litecoin is now technically ahead of Bitcoin, thanks to the aforementioned ongoing strife.) Ethereum is very successful as a platform for developers.
But still, eight years after Bitcoin launched, Satoshi Nakamoto remains the only creator to have built a blockchain that an appreciable number of ordinary people actually want to use. (Ethereum is awesome, and Vitalik Buterin, like Gupta, is an honest-to-God visionary, but it remains a tool / solution / platform for developers.) No other blockchain-based software initiative seems to be at any real risk of hockey-sticking into general recognition, much less general usage.
With all due respect to Fred Wilson – another true believer and, to be clear, an enormous amount of respect is due – it says a lot that, in the midst of this massive boom, he’s citing Rare Pepe Cards, of all things, as a prime example of an interesting modern blockchain app. I mean, if that’s the state of the art …
Maybe I’m wrong; maybe Rare Pepe will be the next Pokémon Go. But on the other hand, maybe the ratio of speculation to actual value in the blockchain space has never been higher, which is saying a lot.
Some people argue that the technology is so amazing, so revolutionary, that if enough money is invested, the killer apps and protocols will come. That could hardly be more backwards. I’m not opposed to token sales, but they should follow “If you build something good enough, investors will flock to you,” not “If enough investors flock to us, we will build something good enough.”
A solid team working on an interesting project which hasn’t hit product-market fit should be able to raise a few million dollars – or, if you prefer, a couple of thousand bitcoin – and then, once their success is proven, they might sell another tranche of now-more-valuable tokens. But projects with hardly any users, and barely any tech, raising tens of millions? That smacks of a bubble made of snake oil – one all too likely to attract the heavy and unforgiving hand of the SEC.
That seductive narrative though! The Internet in 1996! I know. But hear me out. Maybe the belief that blockchains today are like the Internet in 1996 is completely wrong. Of course all analogies are flawed, but they’re useful – they’re how we think – and maybe there is another, more accurate, and far more telling, analogy here.
I propose a counter-narrative. I put it to you that blockchains today aren’t like the Internet in 1996; they’re more like Linux in 1996. That is in no way a dig – but, if true, it’s something of a death knell for those who hope to profit from mainstream usage of blockchain apps and protocols.
Decentralized blockchain solutions are vastly more democratic, and more technically compelling, than the hermetically-sealed, walled-garden, Stack-ruled Internet of today. Similarly, open-source Linux was vastly more democratic, and more technically compelling, than the Microsoft and Apple OSes which ruled computing at the time. But nobody used it except a tiny coterie of hackers. It was too clunky; too complicated; too counterintuitive; required jumping through too many hoops – and Linux’s dirty secret was that the mainstream solutions were, in fact, actually fine, for most people.
Sound familiar? Today there’s a lot of work going into decentralized distributed storage keyed on blockchain indexes: Storj, Sia, Blockstack, et al. This is amazing, groundbreaking work – but why would an ordinary person, one already comfortable with Box or Dropbox, switch over to Storj or Blockstack? The centralized solution works just fine for them, and, because it’s centralized, they know who to call if something goes wrong. Blockstack in particular is more than just storage – but what compelling pain point is it solving for the average user?
The similarities to Linux are striking. Linux was both much cheaper and vastly more powerful than the alternatives available at the time. It seemed incredibly, unbelievably disruptive. Neal Stephenson famously analogized 90s operating systems to cars. Windows was a rattling lemon of a station wagon; MacOS was a hermetically sealed Volkswagen Beetle; and then, weirdly – beyond weirdly – there was …
Linux, which is right next door, and which is not a business at all. It’s a bunch of RVs, yurts, tepees, and geodesic domes set up in a field and organized by consensus. The people who live there are making tanks. These are not old-fashioned, cast-iron Soviet tanks; these are more like the M1 tanks of the U.S. Army, made of space-age materials and jammed with sophisticated technology from one end to the other. But they are better than Army tanks. They’ve been modified in such a way that they never, ever break down, are light and maneuverable enough to use on ordinary streets, and use no more fuel than a subcompact car. These tanks are being cranked out, on the spot, at a terrific pace, and a vast number of them are lined up along the edge of the road with keys in the ignition. Anyone who wants can simply climb into one and drive it away for free.
Customers come to this crossroads in throngs, day and night. Ninety percent of them go straight to the biggest dealership and buy station wagons … They do not even look at the other dealerships.
I put it to you that just as yesterday’s ordinary consumers wouldn’t use Linux, today’s won’t use Bitcoin and other blockchain apps, even if Bitcoin and the other apps built atop blockchains are technically and politically amazing (which some are). I put it to you that the year of widespread consumer use of [Bitcoin | Ripple | Stellar | ZCash | decentralized ether apps | etc] is perhaps analogous to the year of [Ubuntu | Debian | Slackware | Red Hat | etc] on the desktop.
Please note: this is not a dismissive analogy, or one which in any way understates the potential eventual importance of the technology! There are two billion active Android devices out there, and every single one runs the Linux kernel. When they communicate with servers, aka the cloud, they communicate with vast, warehouse-sized data centers teeming with innumerable Linux boxes. Linux was immensely important and influential. Most of modern computing is arguably Linux-to-Linux.
It’s very easy to imagine a similar future for blockchains and cryptocurrencies. To quote my friend Shannon: “It [blockchain tech] definitely seems like it has a Linux-like adoption arc ahead of it: There’s going to be a bunch of doomed attempts to make it a commercially-viable consumer product while it gains dominance in vital behind-the-scenes applications.”
But if your 1996 investment thesis had been that ordinary people would adopt Linux en masse over the next decade – which would not have seemed at all crazy – then you would have been in for a giant world of hurt. Linux did not become important because ordinary people used it. Instead it became commodity infrastructure that powered the next wave of the Internet.
It’s easy to envision how and why an interwoven mesh of dozens of decentralized blockchains could slowly, over a period of years and years, become a similar category of crucial infrastructure: as a reserve/settlement currency, as replacements for huge swathes of today’s financial industry, as namespaces (such as domain names), as behind-the-scenes implementations of distributed storage systems, etc. – while ordinary people remain essentially blissfully unaware of their existence. It’s even easy to imagine them being commoditized. Does Ethereum gas cost too much? No problem; just switch your distributed system over to another, cheaper, blockchain.
So don’t tell me this is like the Internet in 1996 – not without compelling evidence. Instead, wake me up when cryptocurrency prices begin to track the demonstrated underlying value of the apps and protocols built on their blockchains. Because in the interim, in the absence of that value, I’m sorry to say that we seem to be talking about decentralized digital tulips.
Disclosure, since it seems requisite: I mostly avoid any financial interest, implicit or explicit, long or short, in any cryptocurrency, so that I can write about them sans bias. I do own precisely one bitcoin, though, which I purchased a couple of years ago because I felt silly not owning any while I was advising a (since defunct) Bitcoin-based company.
Text
Virtualize Like a Boss (Part 1)
Some of my colleagues may know that I have, for many years, maintained a collection of hosting accounts for my personal pursuits. For nearly a decade now there has been at least one dedicated Linux virtual machine that I've maintained at all times, hosting dozens of different domains; over the years IIS hosts have come and gone as well, plus several Mac OS servers, and even the occasional home-built contraption. All of them have existed, more or less, for a few basic purposes:
As platforms for me to use in testing out my ideas or in pursuing various projects.
As environments where I can set up certain development platforms for experimentation or study (like LAMP, MEAN, and all of those acronym-ish things).
As a place to store my personal stuff that I want to make accessible to the wider web.
As a giant dumping ground for the gazillions of (non-work) emails I get and never read.
In short, these environments have existed as the backbone of my own personal laboratory of sorts - one that exists as my virtual mad-scientist lair for personal and professional growth. And hey, it's been great - and valuable. But it has also been expensive.
I sat down a month or so back and I started doing the math on just how much all of this experimentation has been costing me, and I came up with an ugly figure: Over the past 10 years, I have spent (conservatively) at least $8,000 just paying someone to serve my bits and bytes - and that's not counting the cost of domain registrations... which (though obscenely high in my case) brings the total to well over $10,000.
Now don't get me wrong - in the grand scheme of things, I recognize that $10,000 over 10 years may not seem like that much. Plenty of people, after all, blow that much on cigarettes and booze alone. But not being one prone to excess in these areas... and looking at this from my analytical perspective, I just couldn't escape thinking of that $10,000 (which I've essentially poured into someone else's pants-pockets) as more like the $16,000 to $24,000 it could have been had I poured it instead into some suitable investment medium over that same timeframe. And that got me casting about looking for some alternatives.
Enter the fine folks at Antsle.
Antsle is a San Diego based startup that markets what they call a "personal cloud server". What the Antsle really is, however, is a nicely designed little server running a custom built OS that is really good at bare-metal virtualization. That is, it's a machine that you can stick on your home network, on which you can then spin up as many virtual machines as you might want (within reason, people)... for whatever purposes you might have. Home media server? Gotcha covered. Running a game server? Heck yes. Wanting to set up a MEAN stack for exploration, or wanting to write that killer web-app you've had bumping around your cranium? Covered. Basically, it's like having 1-n physical machines running all of the things you've always wanted to tinker with, except without cluttering up your living room, decimating your electric bill, and angering your spouse. And you can do all of that without having to pay your friendly neighborhood hosting provider one cent. At least that's how Antsle spins it.

Of course, we all know that although you can expose machines on your local network to the outside world, your ISP is probably going to be pretty upset if you try and run your publicly accessible Mumble server across your home internet connection. And rightfully so, since it straight-up violates their TOS. So, there are practical limits to how much you can really make publicly consumable from one of these machines, should you so choose.

That said, the value proposition of an Antsle really got me thinking: how much of what I have sitting out there with a public face actually needs that face? How much of it is stuff that I actually must expose, and how much of it is stuff I have out there just because I required a customizable and accessible web hosting environment in order to build it?
My answer to these questions is that at least 80% of what I do on my publicly accessible server(s) is stuff that can sit on my private network instead until such time that it graduates into something I'd like to share with others. Therefore (if you want to be just a tad bit overly simplistic) I've been paying 80% too much every month. For 10 years. What if I could move all of that onto one machine on my local network - a machine where I could install many different instances of operating systems, on which I could develop using the environments and platforms in which I am interested... and which I could pay for once and then use until it is old and tired?
Thus began my quest to virtualize my home lab.
Buy or Build?
Obviously with an Antsle you can get up and running doing virtualization for your home lab in minutes. And honestly, if you just want to save some hosting money and have something that gives you the flexibility to experiment and tinker across many different platforms, then get yourself an Antsle One (or if you are spendy, one of its big brothers) and be done with it. Antsle promises you can spin up virtual machines in seconds, and the Antsle OS takes care of creating local domains for your virtual machines, routing for them on your LAN, and more automatically. With all that, it's essentially a no-muss no-fuss sort of arrangement: you get to spin up environments to suit your whims with wild abandon, and Antsle takes care of all the murky details which would otherwise make this experience much more painful. Antsle will even allow you to spin up environments using templates where the OS is already installed, which means that in less than 10 seconds you can ostensibly go from zero to a fully configured machine running Debian, or CentOS, or Ubuntu, or FreeBSD, or even Windows... completely ready and just waiting for you to build something cool (and already on your local network ready for you to work your magic). From my perspective, that's a compelling value story.

So listen: if I just described you, stop reading right here. Go visit Antsle.com and pick up one of their machines - they have a 30 day money back guarantee, and they will even let you pay for it over 12 months, no credit check required. So really, what are you waiting for? Just go.
Walking the Path of Pain
The fact that you are still reading must mean that you, like me, enjoy torturing yourself rummaging about in the guts of complex systems. Because: "It's fun". Or maybe you just want to get more out of the experience than just having virtualization: you want to learn something about how virtualization works, and you want mastery over setting up and maintaining the infrastructure itself. You don't just want to virtualize, you want to virtualize like a boss. If so, the very first thing you will need to do is get your steamy hands on some decent hardware.
Hardware Selection
Decent hardware is not, as one might expect, hard to come by. This is especially true if you are a spendy ubermensch endowed with a Daddy Warbucks level of largesse. But if you, like me, are an economic miser looking for the greatest amount of bang for the least amount of buck, then you need to give thought to the desired characteristics of the system you want to deploy. For me, I wanted a system that had at least the bulk of the following characteristics:
I wanted professional grade equipment (no NUCs or other such gizmos.)
I wanted something that had a reasonable expandability in terms of memory and hard drive storage.
I wanted something that I could upgrade later if possible for greater processing power.
I wanted something that wasn't huge, or ugly, or loud.
I wanted something that didn't need to be rack-mounted.
I wanted something that wasn't incredibly expensive.
That may all sound terribly contradictory for a server that one is going to use for virtualization, but it turns out that it occurred to the good folks at HPE (that's Hewlett Packard Enterprise) that some crazy individuals might want exactly those things... so they cooked up all of those ingredients into a really lovely little server they call the HPE ProLiant MicroServer Gen8.
The ProLiant MicroServer Gen8 is a small-business class server, and it comes with all of the bells and whistles any normal HPE server would. It features two 1-gigabit network ports, plus a dedicated iLO 4 port (that's HP's Integrated Lights-Out server management system) so that you can access your server for maintenance and configuration over the network - whether it is powered on or not. It also features instant out-of-the-box provisioning and deployment, and it comes with four front-accessible drive bays that can be configured for RAID 0 - 5 using a built-in Smart Array B120i controller. It has 6 USB ports (4 USB2 and 2 USB3) - two accessible from the front - and it has a single PCIe expansion slot for who knows what you might want to shove in there. The CPU is socketed, so you can upgrade it to whatever pin-compatible generation of chip you can lay your hands on - a nice feature if you want to start with an entry-level machine. It has a beefy (for the size) power supply with an overly-large fan... which means the fan usually runs at a really low RPM, making it very quiet. It has fantastic build quality; the handsomely designed magnetic-closure front door appears to be cast metal, and the easily opened case is heavy-gauge steel. It even thoughtfully comes with a service tool (an Allen wrench, really) affixed to the front, right next to the drives you'll need it to remove. Finally, it accomplishes all of that in a small form factor (roughly a slightly-less-than 10 x 10 inch cube) that will sit on your desk happily (and handsomely). Congrats to HPE, because this is a very well thought-out and well-executed design.
The best thing about the MicroServer Gen8? Price. You can get one of these on Amazon for as little as $379, depending on which model you want to pick up. At the low end, you'll be getting an Intel Celeron (G1610T) based machine. But remember: this CPU is socketed and is pin compatible with up to a Xeon E3 v2. Some people are even pushing a beefier 4-core CPU (the Xeon E3-1265LV2) into the box, which appears to be fine even though the chip exceeds the max rated power a bit. So, even though the Gen8 is getting a little long in the tooth CPU-wise, you can upgrade this machine to a more powerful pin-compatible CPU later if you so choose without much fuss. Be aware that the low-end box will also come with no drives and only 4GB of RAM. So, CPU question aside, you'll obviously need to upgrade the machine at least a bit before use if you start at the entry level. If you plan to do that with parts you may have laying around, take note: the machine uses unbuffered RAM, and your mileage may vary with drives, so read up on what is compatible; WD and Seagate drives should work fine in capacities up to 4TB (HP says 3TB).
All in all I think the Gen8 is the little engine that could; although there are many, many other options for hardware to use, for me this little machine really offers an optimal mix of price / performance / palatability. Can you get yourself a blade server that will take more RAM and have more slots for the same ball-park price? You bet. However, unless you happen to have a rack sitting around waiting for it (or you don't mind a noisy jet engine taking up half your desk) that just isn't a great option for most people. I did opt for a higher-powered unit out of the chute; I purchased the 783959-S01 model, which comes loaded with 4 1TB drives, 8 GB of memory (one bank), and a Xeon E3 v2. I added a second stick of 8GB RAM to max it out, and still managed all of that for under $1000.
Two downsides for the power-max crowd: max memory on this box is, as I alluded to above, 16GB. That's a bit low if you want to virtualize like a boss and run all those VM's simultaneously... but from my perspective, I'm not likely to run more than 5 or 6 at the same time. So, this is adequate. The other downside is that the drives are not hot-swappable. But again, you want to virtualize your home lab, right? It's not like a drive failure is going to take down your enterprise just because you need to shut the box down for 10 minutes.
So, yeah: I love me some Gen 8 MicroServer. Full specs here.
Virtualizing...
Once you have your hardware lined up, you are going to need to confront the question of what route to use for virtualization. You can choose a hosted route, where you use a so-called "type 2" hypervisor to run your virtual machines on top of an operating system that is installed on your physical hardware. This will no doubt feel very familiar to anyone who has ever spun up a virtual machine on their desktop, because in practice this is a lot like running something like VMWare or VirtualBox locally and then spinning up virtual machines using these tools. And just as with those products, the virtual machines you run using a type 2 hypervisor all utilize the host operating system for their core services. As such they suffer from any limitations the host OS itself may bring to the table.

The alternative approach is using what is sometimes called a "bare metal" (type 1) hypervisor. These hypervisors are distinguished from type 2 hypervisors in that they install directly on the machine, with no operating system layer sitting between them and the hardware itself (hence the term "bare metal"). In real terms, when you use a type 1 hypervisor, the hypervisor itself is the operating system - one designed specifically to do only one thing as efficiently as possible: act as the host for 1 - n virtual machines. So which route to take?
There are probably good arguments for why one might want to use a type 2 (hosted) hypervisor over a type 1 (bare metal) hypervisor... I'm just not aware of any. Well ok, one: it's easier. But my admittedly narrow perspective on the matter is that it is best to eliminate as much as you can of what could be sitting between your virtual machines and the hardware they are running on, under the theory that added layers increase opportunities for inefficiency that can impact performance. Memory utilization in and of itself is a good reason to go with a type 1 hypervisor, as there is little reason for you to allow a bloated host OS to consume memory that would otherwise be available to your virtual machines were you instead running a slim and optimized hypervisor only. So, from my perspective, the choice really isn't between a type 1 and a type 2 hypervisor, it really boils down to which bare metal (type 1) hypervisor you are going to use.
To VMWare or Not to VMWare
There are quite a few type 1 hypervisors available, running on multiple Linux variants as well as Windows. Among them are products by Oracle, Microsoft, and VMWare, as well as open-source alternatives. So which to choose? This is a complicated topic, so I think it is helpful to break the question down into more easily digestible chunks. Personally, I'm in this not just to virtualize, but to learn. If I'm going to muck about installing a hypervisor, I want it to be one that is relevant. That means choosing one of the hypervisors that are most commonly used in industry, which narrows the field down quite a bit. Finally, being Mr. Cheap, I would like to install something that is free. And as audacious as that sounds, it turns out that there are in fact multiple type 1 hypervisors that are free.
Here's a short list that satisfies both criteria (commonly used and free):
VMWare ESXi
Microsoft Hyper-V
Citrix XenServer
To jump straight to the point, I chose VMWare ESXi. There are a number of reasons for this.
First, my perspective is that VMWare offers a more robust and well-supported product with more bells and whistles. Comparing the free tiers of these popular hypervisors, VMWare's offering strikes me as the strongest of the three. As an example, recent versions of VMWare ESXi feature a very nice web user interface for managing your ESXi instance and all of your virtual machines. You can even launch consoles for those virtual machines directly from the web client to interact with each VM and its operating system. By comparison, Microsoft Hyper-V is managed through PowerShell with no GUI at all, and enabling a GUI for the product is quite convoluted. And although XenServer offers a GUI, I feel it is nowhere near as evolved as ESXi's tooling, nor as easily accessed. With ESXi, once the server is up you can hit its IP address ( http://yourServerIP/ui/ ) and the GUI is already installed and ready to go.
Second, HP and VMWare have a partnership, and the folks at HP Enterprise have gone out of their way to ensure that installing VMWare on HPE iron is insanely simple. To that end, HPE provides customized installer ISO disk images for the VMWare product, preconfigured for easy installation using HPE's Intelligent Provisioning tool (get those images here). With Intelligent Provisioning, you need to do little more than point the server at a USB thumb-drive containing your ISO, and Intelligent Provisioning will do the rest. And as mentioned, once that process is done, the VMWare GUI is ready to roll. That means you can go straight to creating your virtual machines rather than wasting cycles on further server configuration; this is a huge productivity win.
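Getting the ISO onto that USB thumb-drive is itself a one-liner from any Linux workstation. Here's a hedged dry-run sketch using `dd`; the ISO filename and `/dev/sdX` are placeholder assumptions, not real values, so check your actual USB device with `lsblk` before running anything destructive:

```shell
# Dry-run sketch: writing an installer ISO to a USB stick with dd.
# Both values below are hypothetical placeholders.
ISO="VMware-ESXi-HPE-Custom.iso"   # hypothetical filename for the HPE image
DEV="/dev/sdX"                     # placeholder device; dd will erase it!

# Build the command and print it instead of running it, so nothing is overwritten.
CMD="dd if=$ISO of=$DEV bs=4M conv=fsync status=progress"
printf '%s\n' "$CMD"
```

The `printf` line only prints the command; swap it for direct execution (as root) once you have triple-checked the device, since `dd` will happily overwrite the wrong disk.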
In any case, for all of these reasons and more, my choice was VMWare, and that's what we'll cover installing and using. But not today, friends - I've gone on long enough. Look for how to install ESXi and spin up virtual machines like a pro in Part 2.
reblog mainly to think I guess, and also to figure out which one I'd like to try this with this time...
Tho with all of the different flavors of linux, I don't remember which desktop environment I like most. XFCE is lightweight and kind of old fashioned, cinnamon is a memory hog, kde is also one but kind of snazzy, I don't like ubuntu's at all... was it unity?
actually I'd been trying to stay away from ubuntu given what they've been doing. so that leaves me with debian. either linux mint debian, or just straight debian.
I love linux and technology to death and hate that I'm still a newbie at running linux/bsd/unix operating systems years after I got fascinated with it. makes me kinda sad.
but no time to learn like the present, huh?
these are the disks for the distros I've found off hand:
Debian 11.6.0 (mate, lxqt, lxde, cinnamon, and the standard install)
Linux mint 21 and its variations (that I'm ignoring because Ubuntu)
Linux mint debian editions 5-6
a windows 10 install disk just in case
had a flash drive of linux mint debian I think; but idk where it went, I took it over to my sibling's to install it on a laptop they had and it got lost in the mess.... it booted so fast too T_T;;
these are all the distros I have currently, as I haven't been paying attention to what the linux mint/debian teams were up to.
idk how tf I was able to run fedora on a computer back in the day, but in hindsight it was kind of cool~~!! and I liked fedora's desktop environment~~ it was probably also a memory hog but the different desktops and stuff was so snazzy~~ I forgot what the name for the environment was tho. GNOME?
reminds me of the time I tried to drop my sibling into the deep end and installed OpenSUSE on their system before reinstalling windows again, lol.
just moreso thinking to myself this time instead of a full blown rant about it. I guess I'll poke around with getting it to work at some point.
I never even thought about the installer putting the boot partition somewhere it shouldn't be. I thought the installer for whatever I was installing was smart enough to put it in the proper place. I seriously just thought that windows had its hooks so deep into the system that it was preventing linux from booting somehow. the thought never crossed my mind.
Guess I shouldn't feel stupid cause I'm learning~~ don't feel stupid for learning new things; even if it makes you feel stupid for a bit, it's a new piece of information that we didn't know until then.
don't feel stupid for learning~~
just screw around and find out~~! and if I do lose any data, just copy it back over from my external hard drive.
edit: I think maybe I found it~~!! Hopefully that's the issue; now, how to fix it. I poked around in the BIOS a bit and found that I can make a new boot path; perhaps I'll need to do that instead of it pointing to a windows partition that no longer exists.
now how to do that? idk where the linux boot thing is unless i can find the path while messing around in the bios after I install a distro.
hopefully now I'm one step closer to running linux on my beefier computer~~!! Didn't think to check the boot paths in the BIOS of all things cause I didn't know it was a thing.
will update or reblog this post if something happens~~ assuming people actually care. maybe I'm just writing this for myself but who knows.
now what distribution do I want to use? is the main question. look for a comparison of all the desktop environments on youtube? or pop in the disks and waste time booting them all and poking around?
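one way to skip the disc-swapping: if the current machine can spare the RAM, each live ISO can be poked at in QEMU without rebooting at all. rough dry-run sketch, assuming qemu is installed; the ISO filenames are made-up stand-ins for the discs listed above:

```shell
# Dry-run sketch: preview each live ISO in a QEMU VM instead of rebooting.
# Filenames below are hypothetical placeholders for the discs on hand.
for ISO in debian-11.6.0-cinnamon.iso lmde-6-cinnamon.iso; do
  # Build the launch command for each ISO: 4 GB RAM, KVM acceleration,
  # boot from the virtual CD drive ("-boot d").
  CMD="qemu-system-x86_64 -m 4096 -enable-kvm -cdrom $ISO -boot d"
  printf '%s\n' "$CMD"   # printed as a dry run; paste one to actually boot it
done
```

beats burning discs and waiting on the BIOS every time, assuming the host has virtualization enabled.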
stupid thing about the BIOS is: I changed the boot priority of the linux live cd entry and the windows boot loader itself, and I guess because no disk is in the drive, the linux live cd entry isn't the same thing as the cd drive itself... and it's still under the windows boot loader even after I saved the settings. ??
so this would mean:
choose a distro to use
install distro
restart machine not being surprised that it won't boot
hit key to enter bios
go to settings and change the boot path to whatever boot path linux uses
troubleshoot until the machine boots
congrats, you're now using linux on a computer you thought it wouldn't work on
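for the "change the boot path" step, there's also a way to do it from inside a linux live session instead of hunting through BIOS menus: efibootmgr can register a new UEFI boot entry. rough sketch with guessed (hypothetical) disk/partition/loader values; check the real ones with `lsblk` and `ls /boot/efi/EFI/` first:

```shell
# Dry-run sketch: register a new UEFI boot entry with efibootmgr.
# All three values below are placeholder guesses, not real paths.
DISK="/dev/nvme0n1"                # placeholder: disk holding the EFI partition
PART="1"                           # placeholder: EFI system partition number
LOADER='\EFI\debian\shimx64.efi'   # placeholder: debian's boot loader path

# Build the command and print it rather than running it (the real thing needs root).
CMD="efibootmgr --create --disk $DISK --part $PART --label debian --loader $LOADER"
printf '%s\n' "$CMD"
```

if the entry exists but the BIOS keeps ignoring it, efibootmgr with no arguments lists the current entries and boot order, which helps with the "troubleshoot until the machine boots" step too.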
windows/linux rant
unrelated to SMT or anything on my blog currently I guess...
I keep seeing the "prep for windows 11" thing show up on my computer, and it upsets me every time that you have to have a microsoft account to install it.
windows 10 reaches end of support next year~~
I hope by then the computer I'm using will actually let me run linux on it instead of an outdated windows os....
I want to run linux on this machine so bad but whenever I try to install it, it seems to install fine and dandy
and then it won't boot
I've snooped around in the BIOS, disabled the "smart startup" or "secure startup" or whatever else I thought was borking the boot for linux.
but it still won't boot~~~
and sadly i've got games on here that only work on windows anyway.
I'd much rather be able to stay relatively safe online and maintain an OS, than play the games I bought on steam....
something is preventing linux from booting on this machine, it upsets me when I think about it, and it upsets me more when I try to install linux and it goes fine, but then doesn't boot
cause I have to take the time to reinstall windows again.....
why computer? why won't you let linux boot? what do I need to do to you to have linux actually boot? I don't understand...
*confused screaming*
running an ASUS ROG Strix gaming pc that I bought on impulse years ago. so you'd think linux mint would boot just fine, right??
how do I fix it? if I can fix it? especially if I don't destroy the data on the second hard drive of this thing. I've got stuff I don't want to lose on here, preferably....
#personal#thoughts#thinking#windows#windows 10#windows 10 end of support next year#windows 11#i'd rather run linux than make a microsoft account#linux#linux mint#linux mint debian#debian#also straight debian linux#debian linux#don't remember which desktop environments i liked tho#it's been that long#all the stuff I've got might be out of date by now who knows#boot issue#boot issues#it won't boot#rant#rant post#vent#vent post#less of a vent this time around just recounting my history with linux and wondering which one to try this time#love linux and technology to death and the fact I'm still a newbie makes me kind of sad#but don't be sad cause I'm still learning new things and figuring stuff out#don't be sad for not knowing stuff cause you're learning#learning is just screw around and find out after all