Dollar Prompt

300+ commands available at the dollar prompt

# Calculators

| command | standard | description | important flags |
|---------|----------|-------------|-----------------|
| bc | POSIX | arbitrary precision calculator | -l load transcendental functions |
| dc | | arbitrary precision, reverse Polish notation calculator | -f FILE run commands |
| factor | | finds the prime factors of a number | |
| math | | Mathematica kernel | |
| maxima | | Maxima, the open source version of Macsyma | |
| numpy | | | |
| octave | | | |
| r | | | |

bc is an arbitrary precision calculator:

```
$ bc
10 ^ 100
10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
```

Calculations are done with the same precision needed to express the most precise decimal in the expression. The precision in the variable scale will be used if it is greater than the most precise decimal in the expression. By default scale is zero:

```
$ bc
6.1 * 7.1
43.3
scale = 3
6.1 * 7.1
43.31
.01 ^ 50
0
scale = 1000
.01 ^ 50
.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
```

Invoke bc with the -l flag to get transcendental functions. scale will be set to 20 automatically:

```
$ bc -l
l(10) /* natural log */
2.30258509299404568401
e(1)
2.71828182845904523536
s(3.14) /* sine */
.00159265291648695254
c(3.14) /* cosine */
-.99999873172753954528
a(1) /* arc tangent */
.78539816339744830961
```

bc has user-defined variables, comments, and the control statements if, while, and for.

Unix also provides dc, an arbitrary precision reverse Polish notation calculator. Here are the basic stack operations:

| operator | operation |
|----------|-----------|
| p | peek at top of stack |
| n | pop stack and display popped value |
| r | swap top two stack values |
| d | push copy of top stack value onto the stack |
| c | clear stack |
| k | set precision |
| v | replace top of stack with its square root |
| sCHAR | pop stack and store value under character CHAR |
| lCHAR | push value stored under character CHAR onto stack |
| SCHAR, LCHAR | pop main stack and store on auxiliary stack under character CHAR; pop auxiliary stack under character CHAR and push onto main stack |
| [...]sCHAR | store macro under character CHAR |
| lCHARx | invoke the macro stored under character CHAR |
| q | exit dc; acts as return inside a macro |
| =CHAR, <CHAR, >CHAR | compare top two stack values and run the macro stored under character CHAR if true |
| !=CHAR, !<CHAR, !>CHAR | negated versions of the above operators |

dc lacks trig and log functions. Negative numbers are entered with an underscore prefix:

# Compression and Archiving

| algorithm | LZW | DEFLATE | DEFLATE | BZIP2 | LZMA | LZMA2 |
|---|---|---|---|---|---|---|
| date | 1984 | 1989 | 1992 | 1996 | 1999 | 2009 |
| compress/uncompress utilities | compress uncompress | zip unzip | gzip gunzip | bzip2 bunzip2 | lzma unlzma | xz unxz |
| suffixes | .Z | .zip | .gz | .bz2 | .lzma | .xz |
| magic number | 0x1f9d | none | 0x1f8b | BZh | none | .7zXZ |
| other utilities | | unzipsfx zipcloak zipinfo zipnote zipsplit | gzexe zcat zcmp zdiff zegrep zfgrep zforce zgrep zless zmore znew | bzcat bzcmp bzdiff bzegrep bzfgrep bzgrep bzip2recover bzless bzmore | lzcat lzmainfo | xzcat xzcmp xzdiff xzegrep xzgrep xzfgrep xzless xzmore |

Two early compression tools on Unix are compress and uncompress. Originally they used the LZW algorithm, though on some systems they are aliased to gzip and gunzip. The tools fell into disuse because of the patent on the LZW algorithm held by Unisys Corporation. LZW was also used in GIF images, and the patent motivated the development of the alternative PNG format. The LZW algorithm patent expired in 2003.

The DEFLATE algorithm is a mix of LZ77 and Huffman encoding. It was introduced by Philip Katz as part of his PKZIP software in 1989, though the underlying algorithm was made freely available. The DEFLATE algorithm was also used by gzip which appeared in 1992. The compression code used by gzip was spun off as the zlib library in 1995. PKZIP 2.0 introduced the .zip file format in 1993. On Unix the utilities for manipulating this format are called zip and unzip. Although both zip and gzip use the DEFLATE algorithm, zip performs both archiving and compression. Hence the file formats are not the same.

zcat was originally used with compress and uncompress as part of the LZW suite of compression tools. The GNU version of zcat will handle .Z, .zip, and .gz compressed files.

bzip2 uses the Burrows-Wheeler transform, described in 1994, among other techniques. It is a simple data compression tool like gzip.

The LZMA algorithm was published with the release of the 7-Zip utility, which was mostly used on Windows. The utility has a file format which uses both archiving and data compression and uses a .7z suffix. The 7z utility is sometimes available on Unix for manipulating this format. The lzma suite of utilities use the LZMA algorithm to perform simple data compression. It is recommended that the xz suite of tools which use the LZMA2 algorithm be used instead.

Here is the performance of the compression utilities when run on 4.1M of C source code. Jeff Dean mentions Zippy in his "numbers every engineer should know" talk. I used a Ruby script to call the library; a utility could probably be written in C that is faster. According to Dean's numbers it would take 0.041s to compress 4.1M with Zippy.

| tool | compression time | compressed size (% of original) | decompression time |
|------|------------------|---------------------------------|--------------------|
| snappy (aka zippy) | 0.168s | 37.6% | 0.157s |
| compress (.Z) | 0.168s | 36.4% | 0.086s |
| zip (.zip) | 0.286s | 22.1% | 0.072s |
| gzip (.gz) | 0.299s | 22.1% | 0.081s |
| bzip2 (.bz2) | 0.693s | 18.6% | 0.234s |
| lzma (.lzma) | 3.062s | 16.9% | 0.127s |
| xz (.xz) | 3.051s | 16.9% | 0.134s |

Archiving utilities:

ar, jar, tar, ranlib, zip/unzip

ar and ranlib are the traditional tools for creating and manipulating static libraries of compiled and linkable C code. On Mac OS X libtool is intended to replace these tools. Neither ar nor ranlib use compression. The format used by these tools varies from architecture to architecture. tar, by contrast, has a defined format. Like ar it is exclusively an archiver, not a data compressor. Data compression tools are often used on tarballs, and the following file suffix abbreviations are sometimes seen:

| abbreviation | expansion |
|--------------|-----------|
| .taz | .tar.Z |
| .tgz | .tar.gz |
| .tbz or .tb2 | .tar.bz2 |
| .txz | .tar.xz |
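A typical round trip with a gzip-compressed tarball (src/ is a hypothetical directory; the z flag selects gzip, while j and J select bzip2 and xz):

```
$ tar -czf src.tgz src/      # archive the directory and compress with gzip
$ tar -tzf src.tgz           # list the archive's contents
$ tar -xzf src.tgz           # extract into the current directory
```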

jar is the archiver for java .class files. It uses the zip file format and thus uses data compression.

• at
• atq
• atrm
• batch
• cal
• calendar
• crontab
• date
• mailx
• ncal

# Editors

a table of emacs, vi, and nano key bindings

| command | standard | description | appearance |
|---------|----------|-------------|------------|
| ctags | POSIX | generate tag file for vi | 3.0BSD (1979) |
| ed | POSIX | | Version 1 Unix (1971) |
| emacs | | GNU project Emacs | GNU Emacs 13, released in 1985, was the first public release of a version of Emacs implemented in C with a Lisp extension language |
| emacsclient | | tells a running Emacs to visit a file | |
| etags | | generate tag file for Emacs | |
| ex | POSIX | | 1BSD (1978) |
| nano | | Nano's ANOther editor, an enhanced free Pico clone | |
| vi | POSIX | | 2BSD (1979) |
| view | | | |
| vim | | Vi IMproved, a programmer's text editor | first Unix port was version 1.22 (1992) |
| vimdiff | | edit two, three or four versions of a file with Vim and show differences | |

ed was written for the first version of Unix by Ken Thompson, who had previously ported the QED editor to the CTSS operating system. Thompson introduced regular expressions in the QED port and they would also be a feature of ed.

emacs existed by 1976 on the ITS operating system. It was originally implemented in the macro language of the TECO editor. Gosling wrote a version of Emacs which was implemented in C and ran on Unix in 1981. It had an extension language called Mock Lisp, but Stallman did not like the language because it did not have lists. GNU Emacs was publicly released in 1985.

Bill Joy released ex and vi as part of the first and second BSD distributions in 1978 and 1979, respectively. vi was the first visual editor for Unix. ex differs from vi in that the visual editing mode is not enabled by default at the start of an edit session.

Traditionally vi was an editor with fast startup time and no extensibility. emacs had slow startup time, especially if the user had a lot of Lisp in the startup file. A vi user would exit and restart vi between edits, whereas an emacs user would typically leave the editor running all day. Modern hardware has made the performance difference between the two editors much less perceptible. Furthermore, on modern systems vi is usually a link to vim, an upgraded version of vi which appeared in 1991 and boasts features and extensions comparable to those of Emacs.

Today the editors are mostly distinguished by their key bindings. vi is bi-modal; the modes are called command mode and insert mode, and the ESC key returns from insert mode to command mode. Emacs is uni-modal: commands are distinguished by pressing the Ctrl or the Meta key in combination with a letter. The Meta key might be called the Alt or Option key on some keyboards. If no Meta key is available, the ESC key can be used, but unlike a modifier key it is pressed and released before the letter rather than held down.

vi is always installed with Unix systems, whereas emacs sometimes isn't. For this reason system administrators almost universally use vi.

nano is a clone of pico, which was a side product of the pine mail client. When the UW decided to move off the MVS operating system, it searched for a replacement for the Ben mail client written at UCLA. The UW opted to adapt the Elm mail client to suit its needs. The first public release was version 2.0 in 1992. For an editor the UW adapted Micro Emacs and simplified it further. The distribution license for pine became more restrictive with version 3.9.2 in 1996. nano was developed in 1999.

There is a Unix command called vimtutor for learning vim. Typing C-h t inside emacs will bring up the Emacs tutorial. nano doesn't require a tutorial: one can get by with just the 12 commands usually displayed at the bottom of the edit window. C-g brings up a screen documenting the remaining commands.

# Environment and Locale

| command | standard | flags | appearance |
|---------|----------|-------|------------|
| env | POSIX | -i, -u ENV | |
| getconf | POSIX | -a | |
| gettext | | | |
| hostname | | | |
| locale | POSIX | | |
| localedef | POSIX | | |
| ngettext | | | |
| printenv | | | |
| uname | POSIX | -a -s -n -r -v -m -o | 1977 (PWB Unix) |
| uptime | | | |

On some systems you can run getconf -a to show all the system configuration variables and their values. Here are a few select variables and the values I observed on my systems:

| variable | filesystem dependent? | description | linux | mac | cygwin |
|----------|-----------------------|-------------|-------|-----|--------|
| ARG_MAX | | maximum length of arguments passed to exec, including environment | 2097152 | 262144 | 1048576 |
| ATEXIT_MAX | | number of functions that can be registered with atexit | 2147483647 | 2147483647 | 2147483647 |
| CHILD_MAX | | number of simultaneous processes per user id | undefined | 266 | 512 |
| CHAR_MAX | | | 127 | 127 | unsupported |
| CHAR_MIN | | | -128 | -128 | unsupported |
| HOST_NAME_MAX | | | 64 | 255 | unsupported |
| INT_MAX | | | 2147483647 | 2147483647 | 2147483647 |
| INT_MIN | | | -2147483648 | -2147483648 | -2147483648 |
| LINE_MAX | | length of a command's input line | 2048 | 2048 | 2048 |
| LINK_MAX | yes | number of hard links to a file | 32000 | 32767 | 1 |
| LOGIN_NAME_MAX | | | 256 | 255 | 298 |
| NAME_MAX | yes | length of file name | 255 | 255 | 255 |
| NZERO | | default process priority | 20 | 20 | 20 |
| OPEN_MAX | | number of files a process can have open | 1024 | 256 | 1024 |
| PAGESIZE | | page size of memory | 4096 | 4096 | 65536 |
| PAGE_SIZE | | page size of memory | 4096 | 4096 | 65536 |
| PATH | | | /usr/bin:/bin:/usr/sbin:/sbin | /bin:/usr/bin | /bin |
| PATH_MAX | yes | length of pathname | 4096 | 1024 | 512 |
| PIPE_BUF | yes | | 4096 | 512 | 5120 |
| SHRT_MAX | | | 32767 | 32767 | 32767 |
| SHRT_MIN | | | -32768 | -32768 | -32768 |
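Individual variables can be queried by name; for example (the value shown is from my Linux system and will differ on yours):

```
$ getconf ARG_MAX
2097152
```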

The results of running uname on my systems:

The fields of uname -a correspond to -s, -n, -r, -v, -m, and (where supported) -o:

```
Linux foo 2.6.35-28-generic #50-Ubuntu SMP Fri Mar 18 18:42:20 UTC 2011 x86_64 GNU/Linux
Darwin foo.local 10.7.0 Darwin Kernel Version 10.7.0: Sat Jan 29 15:17:16 PST 2011; root:xnu-1504.9.37~1/RELEASE_I386 i386
CYGWIN_NT-6.1-WOW64 foo 1.7.9(0.237/5/3) 2011-03-29 10:10 i686 Cygwin
```

# File System

| cmd | flags |
|-----|-------|
| cd | |
| chgrp | |
| chmod | |
| chown | |
| chroot | |
| cp | |
| cpio | |
| df | |
| du | |
| find | |
| link | |
| ln | |
| locate | |
| ls | -a -l -F -i -r -R -1; sort order: -t -S |
| mdfind | |
| mkdir | |
| mkfifo | |
| mknod | |
| mktemp | -d -t -u |
| mount | |
| mv | |
| pwd | |
| readlink | |
| rm | |
| rmdir | |
| rsync | |
| stat | |
| touch | |
| truncate | |
| umount | |
| unlink | |

• apropos
• file
• info
• man
• manpath
• whatis

# Images

• gnuplot
• graphviz
• dot: directed graphs
• neato: undirected graphs
• twopi: radial layouts
• circo: circular layouts
• fdp: undirected graphs
• sfdp: large undirected graphs
• imagemagick
• convert
• potrace

# Internet and Web

• curl
• ftp
• lynx
• nc
• ping
• siege
• telnet
• wget

Neither ftp nor telnet has encryption, and both have been rendered obsolete by sftp and ssh from the ssh suite. Few servers provide telnet access these days, but the client is nevertheless usually installed in /usr/bin.

ftp remains a popular way to distribute files. Usually one uses the anonymous account, and by convention one can provide an email address as the password, though this is not necessary to log in.

One can use ftp in session mode or auto-fetch mode. The two modes are invoked as follows:

```
ftp [USER@HOST]
ftp ftp://[USER[:PASSWORD]@]HOST/PATH
```

In session mode the following commands are sufficient for downloading files. There is no way to download the contents of a directory recursively.

ftp commands:

| command | description |
|---------|-------------|
| ascii | set file transmission type to ascii; the default and opposite of binary |
| binary | set file transmission type to binary; the opposite of ascii |
| bye | terminate ftp session |
| cd DIR | change remote working directory |
| close | disconnect from host |
| dir | list remote directory contents |
| get FILE [LOCAL_FILE] | get a remote file; the local file defaults to the same name as the remote file |
| lcd DIR | change local working directory |
| lpwd | print local working directory |
| mget GLOB | get remote files which match glob expression |
| open HOST [PORT] | connect to host; the port defaults to the well known port for FTP, which is 21 |
| pwd | print remote working directory |
| system | display operating system type of remote host |

ping uses the ICMP protocol to perform a low level test of network connectivity. It sends ICMP messages of type 8 (echo request) and listens for ICMP messages of type 0 (echo reply) in response.

nc is used to listen or write to TCP and UDP ports. As an example, the following will launch an nc process listening to port 8008:

```
nc -l 8008 > /tmp/8008.out
```

Switching to a different terminal on the same machine, the following will write a message to the above process, which will in turn write the message to standard out.

```
echo "lorem ipsum" | nc localhost 8008
```

wget, curl, and lynx are useful for getting files over the internet via the HTTP protocol.

```
wget URL
wget URL -O LOCAL_FILE_NAME
wget -r -k -l LEVELS URL
```

wget is a quick way to download a file over the internet. The file will be written to the current working directory and have the same name as the basename of the URL unless the -O flag is used.

wget -r will recursively follow links in the domain and download their targets. This can be used to replicate a website. If the website is to be used for local viewing, the -k flag can be used to modify the links in the downloaded documents accordingly. The -l flag controls the depth of recursion and can be set to -l inf to get everything.

curl is also used for making HTTP requests. By default it writes the body of the HTTP request to standard out. The -I flag can be used to instead write the HTTP response header to standard out. There is a flag -d for specifying parameter data. It can be used multiple times. By default a POST is made if the -d flag is used, but the -G flag can be used to make a GET instead. The -b flag is used to provide cookie data. Cookies received from the server can be stored in a file with the -c flag.

```
curl -b COOKIE=VAL URL
curl -c COOKIE_JAR URL
curl -d PARAM=VAL URL
curl -G -d PARAM=VAL URL
curl -I URL
```

Lynx is a curses browser. It ignores JavaScript and CSS and cannot display images, so it is useless on many websites. It does handle cookies. The settings (option O) can be used to change the default behavior of asking whether each cookie should be accepted. Make sure to save the options, which will result in the file ~/.lynxrc being created.

The -dump flag can be used from the command line to get a URL and render it, to the best of lynx's ability, to standard out:

```
lynx -dump 'http://duckduckgo.com?q=hyperpolyglot !g'
```

# Languages and Build Tools

## C and C++

as

as is usually the GNU assembler. The GNU assembler by default uses AT&T syntax, though a directive is available to support Intel syntax. Other assemblers one is likely to encounter such as nasm and yasm use Intel syntax.

One can use gcc -S to convert C code to assembly. A few points about AT&T assembly syntax:

• mov instruction arguments are source, destination
• the suffixes b, w, l, q are used to indicate 8, 16, 32, and 64 bit lengths
• the \$ prefix is used for immediate values (i.e. constants) and the % prefix for register names
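As an illustration, incrementing a global variable might compile to something like the following on x86-64 (a sketch only; actual gcc -S output varies with compiler version and flags, and counter is a hypothetical symbol):

```
movl    counter(%rip), %eax    # load: arguments are source, destination
addl    $1, %eax               # $1 is an immediate value; %eax is a register
movl    %eax, counter(%rip)    # store the incremented value back
```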

autoconf

bison

c++filt

C++ name demangler. The mangled names can be provided as standard input or as command line parameters. The output of nm can be piped to c++filt.

cc

The default system C compiler. On Mac OS X and Ubuntu this is a link to gcc.

cmake

autoconf assumes a POSIX operating system and build environment with make and a POSIX shell. cmake, by contrast, can use the native build system of a greater variety of operating systems: originally the traditional Unix build tools and MSVC++ were supported.

A project that is to be built with cmake is configured by creating a CMakeLists.txt file in each directory. Here is an example:

```
$ cat CMakeLists.txt
cmake_minimum_required (VERSION 2.6)
project (HELLO)
add_subdirectory (Hello)
add_subdirectory (Demo)

$ cat Hello/CMakeLists.txt
add_library (Hello hello.cxx)

$ cat Demo/CMakeLists.txt
include_directories (${HELLO_SOURCE_DIR}/Hello)
link_directories (${HELLO_BINARY_DIR}/Hello)
add_executable (helloDemo demo.cxx demo_b.cxx)
target_link_libraries (helloDemo Hello)
```

cpp

cpp is the C preprocessor. Invoking it is equivalent to invoking gcc -E.

The -D, -U, -I and -o flags behave as for gcc.

• #if
• #ifdef
• #ifndef
• #else
• #elif
• #endif

#ifdef and #ifndef are alternatives to #if defined and #if ! defined. defined looks like a keyword but it is a C preprocessor operator. If a macro expands to an expression containing defined, the behavior is undefined by the C specification.

gcc

| option | description | example |
|--------|-------------|---------|
| -c | compile but don't link | gcc -c foo.c |
| --coverage | instrument code for coverage analysis (gcov) | |
| -DMACRO[=VAL] | define MACRO | |
| -E | preprocess but don't compile | |
| -g | include debugging symbols | |
| -I INCLUDE_DIR | add INCLUDE_DIR to the header search path | |
| -lLIB | search LIB when linking | |
| -L LIB_DIR | add LIB_DIR to the library search path | |
| -M | output Makefile dependencies | |
| -o EXECUTABLE | specify executable name | gcc foo.c -o foo |
| -O -O1 -O2 -O3 | optimize (-O is same as -O1) | |
| -O0 | don't optimize; the default | |
| -pg | profile (used with gprof) | |
| -S | compile but don't assemble | |
| -UMACRO | undefine MACRO | |
| -w | no warnings | |
| -Wall | lots of warnings | |

gdb

Compile an executable with gcc -g to include symbols for debugging.

gdb can be used to start the process to be debugged. It can attach to a running process. It can load a core dump. The syntax for invoking gdb in these situations is:

```
gdb EXECUTABLE
gdb EXECUTABLE PID
gdb EXECUTABLE CORE
```

Mac OS X does not create core dumps by default. You can configure your environment to get them with this shell command (csh syntax; under bash use ulimit -c):

```
limit coredumpsize 10000000
```

I'm not really sure what a good number is to provide as an argument, but if you set the number to zero you won't get a core. The cores appear in /cores.

Here are the commands for use within gdb:

• (ba)cktrace
• (b)reak LINE | FUNC
• (b)reak LINE | FUNC if CONDITION
• (cl)ear LINE | FUNC
• (c)ontinue
• (d)elete BREAKNUM
• (do)wn
• (info br)eakpoints
• (l)ist
• (load)
• (n)ext
• (p)rint VAR
• (q)uit
• (r)un ARG1 ARG2 …
• (set variable) VAR=VAL
• (s)tep
• (u)p

clear will remove all the breakpoints at a line number or function. delete will delete a breakpoint identified by the sequential number assigned to it when it was created. Breakpoints are numbered 1, 2, 3, … Multiple breakpoints at a line are useful if they have conditionals. When setting breakpoints by line number a file can also be specified, e.g. foo.c:5

step and next behave the same except at function calls: step stops at the start of the called function, whereas next executes the function and stops at the next line in the current frame if the function returned normally.

The frame containing the main routine is the outermost frame; the frame in which execution stopped is the innermost and is the current frame when the debugger gains control. up moves toward main; down moves back toward the innermost frame.

load is useful to load a shared object so that a breakpoint can be set on a function in it.

ifnames

ifnames displays the conditional arguments of the #if, #elif, #ifdef, and #ifndef directives in C code. It is thus a way to see what options one might want to specify with gcc -DMACRO.

m4

make

A Makefile consists of rules which have the following format:

```
TARGET ... : PREREQUISITE ...
	ACTION
	...
```

A rule usually has one target, zero or more prerequisites, and zero or more actions. The actions are collectively called the recipe for the rule.

When multiple targets are provided the effect is the same as defining separate rules (one for each target) with the same prerequisites and actions. The targets are not necessarily synonyms since the actions can inspect the $@ variable to get the target name.

When the target is invoked, make will first execute any of the prerequisites which can be defined using rules. If make cannot find a rule for a prerequisite it will exit with an error. Then make will execute the recipe.
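A minimal two-rule example (hypothetical file names) showing that make builds the prerequisite before running the recipe; the \t in the printf supplies the required tab:

```
$ printf 'greeting.txt:\n\techo Hello > greeting.txt\nshow: greeting.txt\n\t@cat greeting.txt\n' > Makefile
$ make show
echo Hello > greeting.txt
Hello
```

The first line of output is make echoing the recipe for greeting.txt; the @ on the cat action suppresses echoing for the second recipe.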

The actions of a recipe are a sequence of lines, each starting with a tab character and containing a shell expression. If the last character on an action line is a backslash \, the action continues on the following line. make will interpolate any makefile variables in the action and then fork a shell to execute the shell expression. Makefile variables start with a dollar sign; write a double dollar sign $$ to pass a literal dollar sign to the shell. make echoes each action before executing it, but this can be suppressed by putting an at sign @ after the tab character.

Here is an example of defining and using a Makefile variable:

```
hello = Hello, World!

hello :
	@echo $(hello)
```

There is a target with the same name as the variable. Targets and variables live in separate namespaces so there is no conflict.

Here is an example of how to define a pattern rule. In the recipe $@ refers to the target and $< refers to the first prerequisite.

```
%.html: %.md
	markdown $< > $@
```

• files with the same name as targets, .PHONY
• invoking make, default rule

nm

Displays the symbols in compiled code. The format is VALUE TYPE NAME. The value is an offset into the executable if the symbol is defined there; the type is U if it is not. Possibilities for defined symbols are t, d, and b, which indicate the section of the executable in which the symbol is defined. The type letter for a section is upper case when the symbol is external. There is an A type which appears to be used to mark the boundaries of sections in the executable.

objdump and otool

Disassemblers. Try this:

```
objdump -td a.out

otool -td a.out
```

size

Shows the sizes of the sections of the executable. On Linux the sections are text, data, and bss. The bss section contains the global and static variables which are uninitialized or zero-initialized.

strings

Dumps the ASCII strings in an executable. Typically these were string literals in the source. If a string literal contained non-ASCII characters, only the ASCII portion of the string may be printed. Use -o to show the offset of each string in the file and -n to specify a minimum string length.

strip

Removes debugging symbols which would be present if the executable were compiled with gcc -g.

yacc

On modern systems yacc is often an alias for bison -y. The -y instructs bison to imitate traditional yacc behavior and put output in y.tab.c. The same effect can be achieved with the -o y.tab.c option.

• ant
• jar
• java
• javac
• jdb
• mvn

• cpan
• perl

• pip
• python
• virtualenv

• erb
• irb
• gem
• rbenv
• ruby
• ruby-build

• lua
• luac

• tclsh

## JavaScript

js

Install SpiderMonkey for a command line JavaScript interpreter named js.

jsawk

jsawk expects a command line JavaScript interpreter named js. It operates on a serialized array of JSON objects which it finds on standard input. The jsawk script is JavaScript that is applied to each JSON object in the array in turn. The JSON object is referenced with the this keyword.

After each application of the jsawk script, the return value, if not null, is appended to the output array.

No-operation: passes the input to standard out unmodified:

```
echo '[{"foo":"bar"}, {"foo":"baz"}]' | jsawk ''
```

Always writes an empty JSON array [] to standard out:

```
echo '[{"foo":"bar"}, {"foo":"baz"}]' | jsawk 'return null'
```

Translation: replaces "foo" keys having value "bar" with "baz":

```
echo '[{"foo":"bar"}]' | jsawk 'this.foo = "baz"'
```

Filtering: removes objects where the value of "foo" is "baz":

```
echo '[{"foo":"bar"}, {"foo":"baz"}]' | jsawk 'if ( this.foo == "baz") { return null }'
```

The -b and -a flags can be used to mimic the BEGIN and END blocks of awk. -b and -a each take a jsawk script as an argument which is executed before and after the main jsawk script respectively if there is one. In before and after scripts the this keyword refers to the entire JSON array.

Here are two ways to get the number of JSON objects in the input array:

```
echo '[{foo:"bar"},{foo:"baz"}]' | jsawk -a 'return this.length'

echo '[{foo:"bar"},{foo:"baz"}]' | jsawk -b 'cnt = 0' 'cnt += 1; return null' -a 'return cnt'
```

• racket
• sbcl

• ocaml
• ocamlc
• ocamlopt
• ocamlrun

• ghc
• ghci
• runghc

• swipl

• erl

• gforth

## XML

• xmllint

xmllint is a quick way to check whether the XML in a file is well formed. It can be used as a pretty printer:

```
$ echo "<foo><bar>one</bar><bar>two</bar></foo>" | xmllint --format -
```

Python can be used to pretty print JSON:

```
echo '{"foo":"bar","bar":[1,2,3]}' | python -m json.tool
```

# Processes and Terminals

• bg
• clear
• fg
• ipcrm
• ipcs
• jobs
• kill
• mesg
• nice
• nohup
• ps
• renice
• reset
• screen
• script
• sleep
• strace
• stty
• tabs
• talk
• time
• timeout
• tmux
• top
• tput
• tset
• tty
• wait
• wall
• write

## tmux

• run reset if the terminal gets messed up by displaying a binary file
• C-b [ enters scrollback mode; ESC exits

# Shells

| command | standard | description | appearance |
|---------|----------|-------------|------------|
| bash | | | 1989 |
| csh | | | 1979 (2BSD) |
| dash | | | 1997: ash shell was ported to Linux; 2002: renamed dash |
| ksh | | | 1983 |
| sh | POSIX | | 1979 (Unix Version 7) |
| tcsh | | | 1983 |
| zsh | | | 1990 |

## bash

The default shell on Linux, and on Mac OS X since 10.3.

## zsh

• !NUM:s^FOO^BAR — repeat command NUM from history, replacing FOO with BAR

• openssl
• scp
• sftp
• slogin
• ssh
• ssh-agent
• ssh-keygen
• ssh-keyscan

# Text Processing and Encoding

| command | flags |
|---------|-------|
| aspell | |
| awk | |
| cat | -e -n -t -v |
| cksum | |
| comm | |
| csplit | |
| cut | |
| dd | |
| diff | |
| diff3 | |
| egrep | |
| expand | |
| fgrep | |
| fmt | |
| fold | |
| grep | -b -C NUM -c -E -F -H -h -i -L -l -n -o -P -q -r -v |
| head | |
| iconv | |
| ispell | |
| join | |
| less | |
| more | |
| nl | |
| od | |
| paste | |
| patch | |
| pr | |
| ptx | |
| pv | |
| rev | |
| sdiff | |
| sed | |
| seq | |
| shuf | |
| sort | -k start,end -n -r -R -t sep -u |
| spell | |
| split | |
| sum | |
| tac | |
| tail | |
| tee | |
| tr | -c complement, -d delete, -s squeeze |
| tsort | |
| unexpand | |
| uniq | |
| uudecode | |
| uuencode | |
| wc | |
| xxd | |
| yes | |

How to remove carriage returns from FILE1:

```
tr -d \\r < FILE1 > FILE2
```

• asciidoc
• enscript
• eqn
• groff
• grog
• grops
• grotty
• markdown
• nroff
• pic
• preconv
• tbl
• troff

• groups
• id
• login
• logname
• newgrp
• passwd
• su
• users
• w
• who
• whoami

• bzr
• ci
• co
• cvs
• git
• hg
• rcsdiff
• rlog
• svn

# Cygwin Variations

installing cron on cygwin

some of these are available in /cygdrive/c/Windows/SUA/bin

• pl instead of swipl
• clisp instead of sbcl
• at, atq, atrm, batch
• cal
• calendar
• clear
• crontab
• gem
• ghc
• lex
• localedef
• mailx
• mesg
• more
• ncal
• newgrp
• renice
• rev
• sbcl
• script
• swipl
• tabs
• talk
• tmux
• tput
• wait
• wall
• whereis
• write

# Shell Built-ins

The following built-ins are found in some or all of bash, csh, dash, ksh, sh, tcsh, and zsh:

• alias
• bg
• cd
• command
• exit
• export
• fc
• fg
• getopts
• hash
• history
• jobs
• read
• type
• ulimit
• umask
• unalias
• which

# Core Dollar Prompt

The core dollar prompt commands are a list of about 300 commands which are reliably available on Linux, Mac OS X, and Cygwin on Windows. These commands are not necessarily installed by default. There is a Ruby script which can be used to find commands which are missing on your particular installation.

The Bourne shell introduced in 1979 used a pound sign prompt # when the user was root and a dollar sign prompt \$ when the user was unprivileged. Modern shells permit the user to customize the prompt by setting the PS1 variable.
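For example, in bash (\u, \h, and \w are bash prompt escapes for the user, host, and working directory; the trailing \$ prints # for root and $ otherwise):

```
$ PS1='\u@\h:\w\$ '
```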

Some of the core dollar prompt commands date back to Bell Labs Research Unix and the Berkeley Software Distributions.

# Minimal Dollar Prompt

See the commands provided by BusyBox.

# Extended Dollar Prompt

• ghostscript
• x windows
• mail tools (mutt, postfix, maildrop, mailman, fetchmail, mail merge with sed and mail)
• chat (irssi)
• printing commands: lp, enscript, et al.
• webservers: httpd, apachectl, nginx, php
• databases: mysql, psql, mongo, redis-cli
• samba: smbclient, et al.

Under X Windows, use ssh -X to open an ssh connection that will have the DISPLAY environment variable set.

# Unix Files

• /dev/null
• /etc/passwd
• /etc/group
• /etc/hosts
• /etc/services
• /etc/protocols
• /usr/share/dict/words

# Obsolete Dollar Prompt

SCCS was written at Bell Labs in 1972. The initial version was implemented in Snobol and ran on the IBM System/370. It was rewritten in C and released as part of PWB Unix in 1977. It was perhaps the first version control system to keep a history of all the versions of a file in a space efficient manner. The suite of tools consisted of get, delta, sccsdiff, and admin. The generic names of the commands and the fact that the software has been almost completely unused in the last 20 years are the reason the software is regarded as obsolete. A clone called CSSC is available on Ubuntu Linux and MacPorts but not Cygwin.

The UUCP (Unix to Unix COpy) suite was the first RPC system for Unix. It was used at Bell Labs and distributed with Version 7 Unix in 1979. In the early days the computers were usually connected by modems, but the UUCP suite was later run over TCP/IP as well. uux was used to execute commands remotely, but the commands were queued for execution and the caller did not block waiting for the outcome. The uustat command could be used to inquire about the result of a queued command. The suite was used to push out software at Bell Labs. Another use of the protocol was Usenet. The suite was also used to route email before SMTP (1982), and the exclamation points that appeared in early email addresses were for this purpose. The suite is still available as a package on Ubuntu Linux and MacPorts.

The rlogin suite was introduced with 4.2BSD in 1983. It was more convenient than the UUCP suite, but it is also insecure, and the equivalent commands from the SSH suite should always be used instead. SSH was introduced in 1995 and an open source version, OpenSSH, was introduced in 1999.

page revision: 406, last edited: 30 Oct 2012 15:52
Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License