Showing posts with label command line interface.

Wednesday, August 13, 2014

Basic Computer and OS Information Using uname

Need a little information about the computer and OS you're using? uname is the perfect command for that. Let's say you're installing some software on a friend's computer that you know nothing about, or maybe you're not sure whether you need the 64-bit or 32-bit version of a program. The uname command can save the day.

By typing man uname you can read all about the options that can be used with this command. For the sake of simplicity, and to keep this blog post short, we're going to use the -a option. This is the "all" option and gives you just about all the information you would get from using the separate options. For instance, uname -p reports the processor type; if it says x86_64, you know you can install that 64-bit version of the software you wanted.
Here's the output I get from uname -a (computer name anonymized out of paranoia):

Linux localhost.localdomain 3.14.8-200.fc20.x86_64 #1 SMP Mon Jun 16 21:57:53 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

This tells me that I'm running Linux; the network node name is localhost.localdomain, very helpful when doing networking; the kernel release is 3.14.8-200.fc20.x86_64, which translates to kernel version 3.14.8 built for Fedora 20 on x86_64, basically Fedora 20 64-bit. The #1 is the kernel's build number, i.e. how many times this kernel was built from its source tree. SMP stands for symmetric multiprocessing, meaning the kernel supports multiple processors (or cores). Next comes the date the kernel was compiled, Mon Jun 16 21:57:53 UTC 2014; then the machine architecture, x86_64, meaning it's a 64-bit system in this case; and finally the operating system, GNU/Linux.
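Each field of that output is also available through its own flag. A quick sketch (the example values in the comments are from my machine; yours will differ):

```shell
#!/bin/sh
# The fields of `uname -a`, one flag at a time:
uname -s   # kernel name, e.g. Linux
uname -n   # network node (host) name
uname -r   # kernel release, e.g. 3.14.8-200.fc20.x86_64
uname -m   # machine hardware name, e.g. x86_64 on a 64-bit system
```

Note that uname -m is often a more reliable way to spot a 64-bit system than -p, which can print "unknown" on some Linux setups.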

That's a lot of information from a quick keystroke. The command comes in handy when working on unfamiliar systems, or when logged on to a computer remotely. To find out more about uname, try either man uname or uname --help.

Wednesday, July 16, 2014

wget: A Super Duper Image Scooper!

I love downloading images on the internet! Cars, pin up girls, scenery and of course I've downloaded my fair share of stupid human tricks and fail pix.

Occasionally you come across a page of content and you just don't feel like clicking on each image or video individually. The other day I came across a site of hot rods with over a thousand images. That's a lot of clicks! Who has time for that? Here's what you do.

wget to the rescue!


You may have used wget to download stuff from the internet. A simple file grab with wget would look like this:

wget http://www.websiteName.com/filename.mp3

If you've never used wget to download a file before, search the internet for a file to download, open up a terminal, type wget, and paste the download link after it, as in the example above.

I don't know why, but when I download something from the command line using wget it seems to come down so much faster than downloading it in Firefox or another browser. So I often cut and paste download links from the internet and download with wget from the command line. Alright, I digress! Assuming you understand the very basics of wget, here's how we would use it to grab images from a web page.

Note: Use man wget to learn more about this command. This covers just the very basics, and is something I've been experimenting with.

The Command


wget -r --level=2 -v -A jpeg,jpg --wait=2 http://www.targetDomain.com/webpage.htm

The above is all one line. So what do we have here, and why does it work?

wget (command line utility used to download files)

-r (recursive: wget will follow links and continue scanning directories to find the images or videos)

--level=2 (this limits the scan to 2 levels of directories; the higher the number, the deeper wget will dig and the more it will download)

(If you wanted to download an entire website of files you could set --level=inf instead. NOT RECOMMENDED, but you could if you want. Note that simply dropping --level doesn't make the depth unlimited; wget defaults to 5 levels.)

-v (verbose: shows you what's happening as each file downloads; again, this is optional)

-A jpeg,jpg (this creates the accept list, in this example jpeg and jpg. You could just as easily change jpg to gif, flv, mp3, mp4, and so on, or mix it up and accept jpg and mp4, which would download images and video. You can add as many file types as you want here, separated by commas. You get the point!)

--wait=2 (this is really important: it makes wget pause 2 seconds between downloads. This command will download files so fast that you really want to add this to help decrease server load. If you were to download an entire site, you should probably increase this number to around 5 to 10 seconds. You don't want to DDoS the server.)

Finally ...


http://www.targetDomain.com/webpage.htm (the web page or website you want to download from. Again, I would refrain from downloading entire websites, as this can really strain the server you're downloading from.)

So there you have it. These are the very basics, and you could get a lot more detailed in creating a Super Duper Image and Video Scooper command. As always, type man wget to learn all the options of this powerful tool. Once you create a really great command, wrap it in a script so you can just run the script with a web address for instant downloading fun.
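As a starting point, here's a minimal sketch of such a script. The script name, the default placeholder URL, and the echo-before-running step are my own additions, not part of wget:

```shell
#!/bin/sh
# Sketch of an image-scooper script: builds the wget command from the post,
# shows it for review, and leaves the actual run commented out.
url="${1:-http://www.targetDomain.com/webpage.htm}"   # pass your own URL as $1
set -- wget -r --level=2 -v -A jpeg,jpg --wait=2 "$url"
echo "about to run: $*"
# Uncomment the next line to actually start downloading:
# "$@"
```

Printing the command first is a cheap dry run, so you can sanity-check the URL and options before hammering someone's server.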

Enjoy !



Tuesday, March 18, 2014

fedup Fedora 18 to Fedora 20

So, I didn't actually get fed up with Fedora 18, but I used fedup to upgrade to Fedora 20 and was pretty happy with the results. I had been considering trying another distro, but I was really happy with Fedora 18; then today I came across some software I wanted to try that was only available for Fedora 20. I looked into updating my system, and fedup was the suggested way to upgrade. I had access to a really fast internet connection today, so I decided that if I was going to try this, today would have to be the day. That saved me a ton of time on the package-download part of this process. All in all, the entire upgrade took about two hours. Here's what I did and what happened.

NOTE: You must use su or sudo for all commands! If you don't have a really fast internet connection, add some time to your install for package downloads. This could take a while.

So I decided to try the fedup upgrade from Fedora 18 to Fedora 20. The first thing I did was read the fedup wiki.

As per the instructions, I did a full system update to ensure everything was compatible with fedup.

yum update

I'm glad I did this, because I had made a few changes to Grub that I'd forgotten about, and they might have affected the upgrade. The update reset Grub back to its original configuration.

Once Fedora 18 was fully updated, I installed fedup:

yum install fedup

You want to make sure you get fedup-0.8.0-3; per my research, earlier versions can be buggy. Once fedup was installed, I rebooted the computer to make sure all the changes and updates took effect.

Once the computer rebooted, I typed:

fedup --network 20 --nogpgcheck

Unfortunately I didn't note down why, but the --nogpgcheck is necessary when upgrading from 18 to 20, though not when upgrading from 19 to 20.

At this point, if you don't have the Chrome browser installed, everything should go easy peasy; skip the Chrome steps. If you do have Chrome installed, keep reading.


Chrome

So here's where I ran into my first problem.

Downloading failed: failure: repodata/filelists.xml.gz from google-chrome: [Errno 256] No more mirrors to try.

Google Chrome and its repositories goofed me up good for about 15 minutes. If you run Google Chrome, uninstall it:

yum remove google-chrome

yum clean all

yum clean metadata

Then disable the Google repositories:

yum-config-manager --disable google-chrome

Once I did this, I re-ran:

fedup --network 20 --nogpgcheck

And everything went fairly smoothly from there; fedup took care of just about everything. At the end, where it asks to reboot, I ended up with a couple of warnings, but at that point I think it's almost too late to make any changes. I did some research before rebooting, and it looked like the warnings were known bugs that were OK to ignore. This was kind of scary, but all went well. If you get any warnings, USE A SEARCH ENGINE AND MAKE SURE IT'S A BUG AND NOT AN ACTUAL PROBLEM. It should also go without saying that you may want to back up any important data before you upgrade.

Once I rebooted, the entire install took a little over an hour on my computer. No problems at all; all my data and most of my configurations remained unchanged. For a little cleanup I simply did:

rpm --rebuilddb

I want to research additional cleanup, because I know there are a lot of unused files left behind. After I do some research, that may be a post for another day.

And that's it, you're done! I hope this goes as smoothly for you as it did for me, and enjoy your updated Fedora 20 install. Good luck!


Friday, March 14, 2014

Appending Sequential Numbers To A Word Or File Using Linux

I needed to append some sequential numbering to a word to use in a list. For example:

word01
word02
word03
word04

The sequence of numbers had to run from 1 to 4999. Now, I could have written this out 4999 times, but that would have been painfully boring and probably would have given me a terrible case of carpal tunnel by the end of it all. Plus, I use Linux; there must be an easier way to do this, and there is! It's called seq.

If you open up the terminal of your choice and type man seq, it pulls up a small man page with a few options you can use. The option we're interested in is -f, the printf-style floating-point FORMAT. If you want a better understanding of printf formats, they're worth reading up on. Here's a basic command using this format, with output:

seq -f "%04g" 6

output:

0001
0002
0003
0004
0005
0006


The "%04g" pads each number with zeros to 4 digits, starting with 0001 and continuing to 0006. This is pretty basic. Now we need to append these numbers to a word, starting with 01 and going up to 4999. Here's what we do:

seq -f "yourword%02g" 4999

output:

yourword01
yourword02
yourword03


And this goes on until it reaches yourword4999. That would have been a lot of typing.

So now we need to put all this into a text file for another program to reference. Here's what you do.

seq -f "yourword%02g" 4999 > reference_file.txt

By adding the greater-than symbol we redirect the output into a text file.

You can name the text file whatever you want. For my purposes it was a list file for another program I was running; each word pointed the program to a numbered directory on a computer. You could also use this to create sequential file names. For example:

touch $(seq -f "yourfilename%02g.txt" 10)
 
This would create yourfilename01.txt through yourfilename10.txt.
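A quick sanity check on the list file is worth the few seconds it takes (reference_file.txt is just the name from the example above):

```shell
#!/bin/sh
# Generate the list, then peek at both ends to confirm the numbering.
seq -f "yourword%02g" 4999 > reference_file.txt
head -n 1 reference_file.txt   # first entry: yourword01
tail -n 1 reference_file.txt   # last entry:  yourword4999
wc -l < reference_file.txt     # 4999 lines in total
```

Note that %02g only pads to two digits, so the numbers simply widen naturally once they pass 99, which is exactly what we want here.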

This is a great little program, and with a little imagination you could really put it to great use.

Sunday, February 9, 2014

Three Ways To Get Help From The Command Line

Most people who use Linux know about and use the man pages. If you're not familiar with man pages, they're pretty simple to use: open up the terminal program of your choice, and at the command prompt simply type man followed by the subject you want to learn about. For example:

man finger

Typing this at the command prompt brings up all the information on using finger: what it does, how it works, and how to use it. To close a man page, just press the q key and the terminal will quit man and return you to the command prompt.

There's a man page for just about everything in Linux. Want to learn more about the unzip program from my last post? Type man unzip. When you're done, press q to return to the command prompt.

O.K., most people know about man pages, and if you didn't, well, now you know. There is also a help command that does, you guessed it... help. If you type help help at the command prompt, it brings up some info on using help. Use help when you're not sure what a command does. For example, try this:

help pwd

This pulls up a little info on the pwd command. help isn't as in-depth or massive as the man pages (it only covers the shell's built-in commands), but from time to time when you're stuck, help can really help.

Finally, we have the info command. Want to learn more about using man pages? Type info man. This brings up, you guessed it... info on man pages. Again, just hit the q key to return to the command prompt. Here are some fun things to try with info:

info vi
info bash

You get the point. These are three easy ways to get help directly from the command line, and they can really save your life when you're in a pinch. It can make for some dry reading, but you'll be surprised at what you can learn from the man, info, and help pages.
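If you're ever unsure whether to reach for help or man, a quick sketch, assuming a bash-style shell where help only covers builtins:

```shell
#!/bin/sh
# `type` reports whether a command is a shell builtin or an external program,
# which tells you whether `help` or `man` is the right place to look.
type pwd   # a shell builtin     -> `help pwd` covers it
type ls    # an external program -> read `man ls` instead
```

Conveniently, type is itself a builtin, so this trick works even on a bare system.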

Saturday, February 8, 2014

Using unzip in the command line for multiple zip files

One reason for using Linux and the command line in a terminal program is the amount of time you can save on repetitive tasks. One of my resolutions for 2014 was to start using the command line more, to save time and of course learn a few tricks.

Today I tried two new things I never did with the command line. The first one was a total fail, and I'll shamefully explain what I tried to accomplish. The other thing I did was a success and saved me about a half hour of work.

The first thing I tried was downloading a bunch of zip files from a website at the same time, with one command. There were roughly 30 zip files I needed to download, and I was planning on using wget to grab all the files while I worked on other things. I have no clue what went wrong, but no matter what I tried, I just couldn't get wget to fetch the zip files I needed. I think it had something to do with the website's PHP server and some security features set on the server end. I'll be trying this again, because I know it can be done; I'm fairly sure I was doing it correctly and just needed to tweak something. I'll try it on an easier project in the future. So that was the fail.

Now for the success. I downloaded the 30 zip files manually and placed them all in a temp directory. Normally I use Ark in KDE and it works great, but unzipping 30 files with Ark would have taken at least 20 to 30 minutes. Instead I used unzip from the command line and it took about 30 seconds! The files I'm unzipping contain jpg images, so the file names really aren't that important, as long as I don't overwrite the duplicates and lose them.

Not knowing what I was doing, I made a directory called temp-test and copied all the files into it. Then, for the sake of simplicity and to keep this post from getting overly complicated and confusing for anyone newer to this like me, I ran unzip -B "*.zip" and unzip did the entire job in about 30 seconds. I now had every file unzipped and ready to use.

A couple of things worth mentioning. You need the quotes around "*.zip" so that the shell doesn't expand the wildcard itself; you want unzip, not the shell, to handle the pattern. Without the quotes I got:

caution: filename not matched:  filename.zip

I also needed to add the -B option to append a ~ to the end of any duplicate file names instead of overwriting them. This left me with about 300 backup files with duplicate names out of the 1000-plus files I unzipped, all in about 30 seconds.

This is the command I used.

unzip -B "*.zip"

unzip is the command to extract the files; -B is the option that backs up duplicate names, so if you have multiple files named 03.jpg it creates backups like 03.jpg~, 03.jpg~1, and so on. Finally, "*.zip" is the wildcard that matches every file with the .zip extension.

If you have no duplicate file names across your zips, that's great. I had some, which left me with close to 300 backup files that I needed to keep. I couldn't figure out how to fix this from the command line, but a quick fix using the Dolphin file manager (I think this will work with any file manager, though) was to move all the backup files into their own folder, highlight them all, right-click and select Rename. That pops up a window that lets you enter a name like dupe#.jpg; the hash mark adds a sequential number, creating dupe1.jpg, dupe2.jpg, up to dupe300.jpg. Luckily they were all jpgs, so any name would work as long as it kept the jpg extension, and this technique worked great.
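For what it's worth, a command-line version of that rename trick could look something like this sketch. The demo directory, the fake files, and the dupe naming are just illustrations, assuming the -B backups end in ~ or ~N as described above:

```shell
#!/bin/sh
# Rename the ~-suffixed backups unzip -B leaves behind to dupe1.jpg, dupe2.jpg, ...
mkdir -p dupes-demo && cd dupes-demo
touch 03.jpg '03.jpg~' '03.jpg~1'   # fake a couple of backups, just for the demo
i=1
for f in *.jpg~*; do
  [ -e "$f" ] || continue           # skip if the glob matched nothing
  mv -- "$f" "dupe$i.jpg"
  i=$((i+1))
done
ls dupe*.jpg
```

The glob expands in sorted order, so the numbering is stable from run to run.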

I do a lot of image work, so this worked perfectly for me. All in all, what took a few minutes with unzip and renaming the dupes in Dolphin would have taken 20 to 30 minutes using Ark. I'll be using this a lot for different work projects.

Thursday, June 27, 2013

Extract rar files with unrar, a CLI (command line interface)

Today I came across some rar files. For the most part, .rar files are compressed archives comparable to zip files. I didn't really know what to do with them in Linux; the file type was associated with Ark, but that didn't seem to work for them. I found a CLI tool that worked great called unrar.

Most Linux distros have this program available. Using Fedora with KDE, I opened Konsole, my terminal of choice, and ran yum search unrar. Yum found the program, and in my case I selected yum install unrar.x86_64 to match my system. Yum installed the program for me with no problems. This is a very light program; I should have noted the file size for this post, but I didn't. It's a small program that packs a great punch.

Using the program couldn't have been easier. In my terminal I simply went to the directory where I had downloaded the rar files, in my case cd Downloads/rar_files, then typed unrar x filename.rar and unrar extracted the files for me. The x option extracts the files. That's all there was to it.

The man page for the program is informative and has all the options and switches needed to operate the program successfully. unrar is a really easy-to-use command line interface for extracting rar files.