Real men do not test backups, remember?

I have always said that real men don’t make backups of their important data :)
I do not want to lose data. I have been in the IT industry for some time, and I know that it is not a question of “IF a hard drive will fail”** … but “WHEN it will fail”. Here is a story we can all learn from:

About 20 years ago, I worked for a company which I shall not name, which used CVS as its source repository. All of the developers’ home directories were NFS mounted from a central Network Appliance shared storage (Network Appliance was the manufacturer of the NAS device), so everyone worked in and built on that one central storage pool. The CVS repository also lived in that same pool. Surprisingly, this actually worked pretty well, performance-wise.

One of the big advantages touted for this approach was that it meant that there was a single storage system to back up. Backing up the NA device automatically got all of the devs’ machines and a bunch more. Cool… as long as it gets done.

One day, the NA disk crashed. I don’t know if it was a RAID or what, but whatever the case, it was gone. CVS repo gone. Every single one of 50+ developers’ home directories, including their current checkouts of the codebase, gone. Probably 500 person-years of work, gone.

Backups to the rescue! Oops. It turns out that the sysadmin had never tested the backups. His backup script hadn’t had permission to recurse into all of the developers’ home directories, or into the CVS repo, and had simply skipped everything it couldn’t read. 500 person-years of work, really gone.

Almost.

Luckily, we had a major client running an installation of our hardware and software that was an order of magnitude bigger and more complex than any other client. To support this big client, we constantly kept one or two developers on site at their facility on the other side of the country. So those developers could work and debug problems, they had one of our workstations on-site, and of course *that* workstation used local disk. The code on that machine was about a week old, and it was only the tip of the tree, since CVS doesn’t keep a local copy of the history, only a single checked-out working tree.

But although we lost the entire history, including all previous tagged releases (there were snapshots of the releases of course… but they were all on the NA box), at least we had an only slightly outdated version of the current source code. The code was imported into a new CVS repo, and we got back to work.

In case you’re wondering about the hapless sysadmin, no he wasn’t fired. That week. He was given a couple of weeks to get the system back up and running, with good backups. He was called on the carpet and swore on his mother’s grave to the CEO that the backups were working. The next day, my boss deleted a file from his home directory and then asked the sysadmin to recover it from backup. The sysadmin was escorted from the building two minutes after he reported that he was unable to recover the file.

From a Slashdot comment by swillden.

** I am talking not only about HDDs, but about storage media in general. And this also applies to humans: we make mistakes too, and we lose data every day.

Solution to “The term ‘-Version’ is not recognized as the name of a cmdlet…”

When trying to run PowerShell.exe with the -Version command line argument, you may get the following error:
-Version : The term '-Version' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the name, or
if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ -Version 2.0 -InputFormat none -File C:\SomeFolder\YourScript.ps1
+ ~~~~~~~~
+ CategoryInfo : ObjectNotFound: (-Version:String) [], CommandNot
FoundException
+ FullyQualifiedErrorId : CommandNotFoundException

The command line used:
powershell.exe -ExecutionPolicy Bypass -Version 2.0 -InputFormat none -File C:\SomeFolder\YourScript.ps1

You can resolve this error by changing the order of the parameters so that -Version comes first:

powershell.exe -Version 2.0 -ExecutionPolicy Bypass -InputFormat none -File C:\SomeFolder\YourScript.ps1

This seems like a bug, because the official PowerShell documentation does not mention that the parameter order matters; what is more, its own examples use ExecutionPolicy before Version. See the PowerShell.exe help article on the Microsoft site.
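To confirm which engine version actually starts, one quick check (assuming the PowerShell 2.0 engine is installed on the machine; the script itself is not needed for this test) is to print $PSVersionTable from a CMD prompt:

powershell.exe -Version 2.0 -ExecutionPolicy Bypass -Command "$PSVersionTable.PSVersion"

If the output shows Major 2, the -Version parameter was honored; on systems where the 2.0 engine is not installed, the call reports an error instead.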

MS-DOS command redirection operators

This article is from our Febooti archive; it was relevant then, and I think it is still relevant today (a few details have changed).
Previous article: IF statement in DOS batch file.

[Image: DOS console window]

The two most commonly used redirection operators in the console are the output redirection operator (>) and the input redirection operator (<). The output redirection operator > is used to send command output somewhere other than the screen, for example to a plain text file. The next example shows how to put the results of the DIR command into a text document. Let us name the text file file-list.txt:

C:\>dir >file-list.txt

When this command is entered, either at a Windows CMD prompt or in a batch file/script, you do not see the directory listing like you would if you had simply entered the DIR command. Instead, a file is created, in this case named file-list.txt. This can be used to capture the output of practically any command.

Run this command several times and notice the changes. Each time you run it (DIR >file-list.txt), the result of the DIR command gets saved to that file-list.txt file. Note that “>” will erase the previous contents and save the new DIR listing. But what if we wanted to create a kind of log that appends each new run instead of overwriting the old one? For that we can use the append redirection operator, which is written by adding another greater-than sign (>>).

C:\>DIR >>file-list.txt
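As a small illustration of the log idea (the file name run-log.txt here is just an example), a batch file could append a time-stamped line followed by a fresh directory listing on every run:

C:\>echo Run at %DATE% %TIME% >>run-log.txt
C:\>dir >>run-log.txt

Each run adds a new block to the end of run-log.txt instead of overwriting it.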

Next we have the input redirection operator (<). It is used to send the contents of a file to a DOS command as its input (normally a command reads its input from the keyboard).

You would want to do this, for example, when you already have a text file prepared, or in a batch script that runs in unattended mode. You can enter the following command:

C:\>more <file-list.txt

If your DIR listing was very long, MORE pauses after a screen-full (one page) of text and waits for a key press before showing the next page. If the listing is short, the MORE command simply displays the whole text, much as if you had typed the TYPE command. The TYPE command is used to display the contents of a file; you do not need a redirection operator to use it.

C:\>type file-list.txt

This will type out the contents of file-list.txt to the screen. If the file contains more than one screen of text, it will scroll by very fast, and you may only see the last page.
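The two operators can also be combined on a single command line. As a sketch (sorted-list.txt is just an example name), the SORT command can read file-list.txt through input redirection and write the alphabetically sorted result to a new file through output redirection:

C:\>sort <file-list.txt >sorted-list.txt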

The next thing we will learn is the DOS pipe. The pipe character | can usually be typed at the command prompt by holding down the SHIFT key and pressing the key directly to the left of the Backspace key (or below Backspace on some keyboards). DOS piping is a technique that combines both the output and input redirection operators: we capture the output of one command and send it as the input to another command. Example:

C:\>dir | more

In this example, the pipe captures the output of the DIR command and sends it as input to the MORE command, which displays one screen-full of results at a time.
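More than two commands can be chained this way. For example (the ".txt" search string is arbitrary), the output of DIR can be filtered by the FIND command and the filtered result paged by MORE:

C:\>dir | find ".txt" | more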

You will come to find that batch scripting really does have a great amount of power. However, there will be times when batch scripting is not powerful enough. When you find yourself in such a situation, you may want to look at other scripting options such as the multi-platform Python, Perl, or Lua, or, if you need a fully native Windows way, PowerShell is the answer.
