13 May 2014 @ 7:57 AM 

Hello !

A new post after a long time…

 

As you know, variables in bash are global by default: their scope runs from the initialisation of the variable (usually when you first assign to it) to its destruction (usually at the end of the script). The variable is available in all the functions within the script.

Now, if you want to reduce the scope of a variable and tie it to a specific function, you usually make its name unique (by starting it with an underscore, for instance), initialise it at the beginning of your function, then destroy it (unset) at the end of the function.

There is another option: you can use the keyword "local", like in other languages.

Here is an example of how to use it:

#!/bin/bash

MyFunction () {
    local TheVar
    TheVar=1
    echo "In The Function ${TheVar}"
}

declare -i TheVar
TheVar=99
echo "Before the Function ${TheVar}"
MyFunction
echo "After the Function ${TheVar}"

Here is the output:

Before the Function 99
In The Function 1
After the Function 99

And if you comment out the "local" part:

Before the Function 99
In The Function 1
After the Function 1

Another note on this: according to my tests on my bash version (GNU bash, version 3.2.51(1)-release), you can achieve the same result by simply declaring your variable (with "declare") within a function.

e.g.

#!/bin/bash

MyFunction () {
    declare -i TheVar
    TheVar=1
    echo "In The Function ${TheVar}"
}

declare -i TheVar
TheVar=99
echo "Before the Function ${TheVar}"
MyFunction
echo "After the Function ${TheVar}"

This gives the following results:

Before the Function 99
In The Function 1
After the Function 99

Note also that, if you re-use the same variable name (as in our example), once you declare the local variable, the content of the global variable of the same name is not accessible within the function anymore!

e.g.

#!/bin/bash

MyFunction () {
    #local TheVar
    echo "Before declaration: ${TheVar}"
    declare -i TheVar
    echo "After declaration, before assignment: ${TheVar}"
    TheVar=1
    echo "After assignment: ${TheVar}"
}

declare -i TheVar
TheVar=99
echo "Before the Function ${TheVar}"
MyFunction
echo "After the Function ${TheVar}"

This gives the following results:

Before the Function 99
Before declaration: 99
After declaration, before assignment:
After assignment: 1
After the Function 99

Think about this the next time you write recursive functions; it might help. Counters (as in while [ ${i} -lt 200 ]) should always be declared local within a function. Keep in mind also that the "local" keyword is only available within a function.
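For instance, here is a minimal sketch of the recursion case, where each call keeps its own copy of the counter thanks to "local"; the function name and the starting depth are just for illustration:

#!/bin/bash

CountDown () {
    local Level
    Level=${1}
    echo "Entering level ${Level}"
    if [ ${Level} -gt 0 ]; then
        CountDown $(( Level - 1 ))
    fi
    # Because Level is local, it still holds this call's value here,
    # even though the recursive call above used the same name.
    echo "Leaving level ${Level}"
}

CountDown 3

Each "Leaving level" line prints the right number on the way back up; with a global variable, they would all print 0.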

Hope you enjoyed and see you next time !

Posted By: Dimi
Last Edit: 13 May 2014 @ 07:57 AM

Categories: Bash, Snippets

 08 Oct 2013 @ 10:30 PM 

Hi guys,

Super short one today.

I ran into the following issue today: getting stdin piped to a bash function.

I tried the following:

mycoolfunction () {
    echo "This is what I got: ${1}"
}

It does not work:

echo "Hello!" | mycoolfunction
This is what I got:

 

As the pipe sends the data to stdin, not to the positional parameters, the correct way to do it is:

mycoolfunction () {
    read Whatever
    echo "This is what I got: ${Whatever}"
    unset Whatever
}

Results:

echo "Hello!" | mycoolfunction
This is what I got: Hello!

As simple as that…
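One caveat: read only consumes the first line of the piped input. If the function may receive several lines, a while loop around read handles them all. A minimal sketch (the function name is kept from above; the rest is just for illustration):

mycoolfunction () {
    local Line
    # IFS= and -r keep each line intact (leading blanks and backslashes included).
    while IFS= read -r Line; do
        echo "This is what I got: ${Line}"
    done
}

printf "Hello!\nWorld!\n" | mycoolfunction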

Posted By: Dimi
Last Edit: 08 Oct 2013 @ 10:50 AM

Categories: Uncategorized

 07 Jun 2013 @ 3:56 PM 

Hi Guys,

It has been a long time since we made an entry…

And today will be a short one.

 

How to generate a random number in Bash.

There is a variable called "$RANDOM". It provides a random integer between 0 and 32767:

echo $RANDOM
27652

Thus, if you want to generate a random number between two barriers, you can use:

$(($RANDOM % (<higher barrier> - (<lower barrier> - 1)) + <lower barrier>))

e.g.

echo $(($RANDOM % 100 + 1))
54
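Wrapped in a small function, this could look like the following sketch; the name rand_between and the bounds are just for illustration (and since ${RANDOM} tops out at 32767, keep the range smaller than that):

#!/bin/bash

rand_between () {
    local Lower Higher
    Lower=${1}
    Higher=${2}
    # Modulo folds ${RANDOM} into the size of the range, then we shift by the lower barrier.
    echo $(( RANDOM % (Higher - (Lower - 1)) + Lower ))
}

rand_between 1 100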

And if you want to get a random line from a file:

echo ${RANDOM} >/dev/null; cat AA_AA | sed -n "$(( ${RANDOM} % `cat AA_AA | wc -l` + 1 )) p"

The first "echo ${RANDOM}" is there to force the re-assignment of the ${RANDOM} value, which, for some reason, does not get updated when it is only used in the formula.

 

Have a good day !

Update: updated based on KenS's comment. Thank you.

Posted By: Dimi
Last Edit: 13 May 2014 @ 06:53 AM

Categories: Bash, Snippets

 17 Jan 2013 @ 11:26 PM 

In this post, we’ll talk about fetching collections of data into structures, using CURSORS and FETCH.

To fetch a collection of data row by row, consider the following block of code –

declare

    cursor cursor1 is
        select tabname, defname from REFTABLE order by tabname, refno;
    tableName  REFTABLE.tabname%TYPE;
    expression REFTABLE.defname%TYPE;

begin

    open cursor1;
    loop
        fetch cursor1 into tableName, expression;
        EXIT WHEN cursor1%NOTFOUND OR cursor1%NOTFOUND IS NULL;

        dbms_output.put_line(cursor1%ROWCOUNT || '. ' || tableName);
        -- INSERT FURTHER PROCESSING HERE
    end loop;
    close cursor1;

end;

In case you are dealing with huge collections and you only want to process the first 100 rows, replace the 'EXIT WHEN' line in the code above with the following –

EXIT WHEN cursor1%NOTFOUND OR cursor1%NOTFOUND IS NULL OR cursor1%ROWCOUNT>100;

CURSOR ATTRIBUTES (such as %NOTFOUND and %ROWCOUNT) can be used to control the fetching behaviour.
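As a side note, when you do not need explicit control over the OPEN/FETCH/CLOSE steps, a cursor FOR loop performs the same row-by-row processing with less bookkeeping. A minimal sketch against the same REFTABLE (the record name rec is just for illustration):

begin
    for rec in (select tabname, defname from REFTABLE order by tabname, refno)
    loop
        -- rec.tabname and rec.defname hold the current row;
        -- the cursor is opened, fetched and closed implicitly.
        dbms_output.put_line(rec.tabname);
    end loop;
end;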

An explicit cursor loop, as in the first block above, is a useful way of fetching collections of data row by row. In the next post we'll talk about BULK COLLECT, which fetches entire collections much faster.

Posted By: Kevin
Last Edit: 17 Jan 2013 @ 11:26 PM

Categories: basics, Snippets, SQL

 09 Jul 2012 @ 10:53 PM 

Improper handling of NULL values is a common cause of application failure. This is true for applications built on both Oracle and Sybase.

We’ll talk about how to avoid such problems when building where clauses in your SQLs.

Firstly, a NULL value is a value that is not known. A NULL value cannot be compared meaningfully with other values, not even with another NULL value.

When building your where clauses, use the IS NULL and the IS NOT NULL statements. The following example can be used to compare columns between 2 tables –

select T1.COL1, T2.COL1 from TABLE1 T1, TABLE2 T2 where
    (T1.COL1 <> T2.COL1) or (T1.COL1 is null and T2.COL1 is not null) or
    (T1.COL1 is not null and T2.COL1 is null)

Another solution is to use the Oracle DECODE function, which treats two NULL values as equal. To perform the above operation, consider –

select T1.COL1, T2.COL1 from TABLE1 T1, TABLE2 T2 where
(decode(T1.COL1,T2.COL1,1,0)=0)

Depending on the needs of your application, you may need to handle NULL values in different ways. NVL can be used in select statements to replace returned NULLs with a user defined expression.
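For example, a minimal sketch using NVL, assuming COL1 is numeric and that the sentinel value -1 never occurs in the real data:

select NVL(T1.COL1, -1) as COL1_NO_NULLS from TABLE1 T1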

Posted By: Kevin
Last Edit: 09 Jul 2012 @ 10:57 PM

Categories: basics, Snippets

 20 Apr 2012 @ 8:00 PM 

If you happen to write scripts for sftp, ftp (…) file transfers, you might want to check that the remote host accepts the connection on the port you specify before triggering anything.

What's the point of doing this test? It can be discussed, actually. But one might prefer to handle the case where the host is not responding or is refusing the connection without having to parse the answer printed on the command line by the sftp or ftp binary.

That's what I wanted to do, and my next challenge was to find a way to do this test without using telnet, ftp, sftp, ssh (etc.). The thing is, most of the examples you can find on the web use these commands.

The trick is to use the bash sockets, and especially this one: /dev/tcp/. We can try to write something into the socket

exec 3>/dev/tcp/REMOTE_DEST/REMOTE_PORT

and depending on the success or failure assume we could connect… or not. For example, let’s do the following:

exec 3>/dev/tcp/192.168.124.55/22 && echo "OK"

If the connection is not successful you will get the answer:

unix:solaris> exec 3>/dev/tcp/192.168.124.55/22 && echo "OK"
-bash: connect: Connection refused

And if it can connect, it will simply print "OK" on your screen:

unix:solaris> exec 3>/dev/tcp/192.168.124.55/22 && echo "OK"
OK

The return codes will be "1" when unsuccessful and "0" when successful.

Then it's up to you to wrap that command up in your function and properly handle the errors/exceptions/logs.

One example of a full test function:

testConnection ()
{
    [[ $VERBOSE = TRUE ]] && doVerbose "Testing the connection to the server ..."
    res="not Connected"

    # Note: the return code is 0 if successful and 1 if unsuccessful
    `exec 3>/dev/tcp/${REMOTE_DEST}/${REMOTE_PORT}` && res="Connected" || res="not Connected"
    if [[ ${res} != "Connected" ]]
    then
        doLog "Cannot connect to ${REMOTE_DEST}/${REMOTE_PORT}...[FAILURE]"
        quit "Cannot connect, server ${REMOTE_DEST}/${REMOTE_PORT} does not accept the connection"
    fi
    doLog "${res}...[OK]"
}

Where:

  1. doLog, quit and doVerbose are functions within the same script, dedicated to handling the log files, managing exits and managing the verbose mode if it is triggered while executing the script.
  2. ${REMOTE_DEST} and ${REMOTE_PORT} are variables set outside the function.
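One caveat: if the remote host silently drops the packets (a filtered port), the /dev/tcp attempt can hang for a long time. A minimal guard using the GNU coreutils timeout command, which may not be available on every platform (the 5-second limit is just an illustrative value):

# Fail after 5 seconds if the TCP connection cannot be established.
timeout 5 bash -c "exec 3>/dev/tcp/${REMOTE_DEST}/${REMOTE_PORT}" && echo "OK" || echo "KO"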

To go further:

/dev/tcp is one of the pseudo-devices Unix-like systems have (in this case it is bash itself that intercepts /dev/tcp paths in redirections, even when no such file exists on disk). We are more familiar with /dev/null, for example when we are only interested in the STDOUT stream and not STDERR (the famous 2>/dev/null), but there are actually a few of them: /dev/zero, /dev/random and /dev/full, for example.

You can read the Wikipedia page on device files, which is quite interesting.

Posted By: Nicolas
Last Edit: 20 Apr 2012 @ 10:38 AM

Categories: Bash, Snippets

 12 Jan 2012 @ 8:00 PM 

Hello all,

Today, a fun one; really, you can use this in pranks and all.

Imagine you have a script that needs a deactivation mechanism, and you do not want someone to find the deactivator easily (if you want to "lock" a script, for example).

There are many ways to do this; one easy one is to create a script with the name " " (a single space) and to execute it in your main script.

To create your space script (note that the backslash in "cat >\ " is followed by a single space, and that you end the input with Ctrl-D):

cat >\ 
echo " You should not start this script"
exit 0

and then you add the following line somewhere, for example after the functions (again, the backslash is followed by a single space):

. ./\ 

The script will exit with the message at that place, and it is very difficult for the user to find the cause.

Another option can be to encrypt the script using crypt (you can install it if you do not know where it is) so that the user cannot grep all the files to find the blocking one. You simply decrypt the file before running it (this can be done by your "space" script).

Oh, yeah, and a quick important note, on a related but different issue.

If you want to deactivate the following function:

function removeall {
    rm -Rf .
}

DO NOT try to comment it out like this:

#function removeall {
rm -Rf .
}

Because the function header is commented out but the body is not, the rm -Rf . now sits at the top level of the file, and it will simply remove everything every time someone loads your library!
Do it like this instead:

function removeall {
    return 0
    rm -Rf .
}

This is all for today; tomorrow we will talk about the misuse of the tr command.

Thank you for reading, and see you tomorrow !

Posted By: Dimi
Last Edit: 12 Jan 2012 @ 08:54 PM

Categories: Bash, basics, Snippets

 11 Jan 2012 @ 8:00 PM 

Hello,

You remember we talked about default variables earlier (look here). Today, we are going to see an even more efficient way to give variables a default value, for Bash users.

Note: although presented here for Bash, this parameter expansion is POSIX syntax and also works in ksh; still, should you be unsure which shell the user of your function will run, the way we explained earlier works everywhere.

So, this is the way to give a default value to a variable:

[ -z "$var" ] && var='default'

The same command, in its parameter-expansion form:

var=${var:-'default'}

It might look a bit more cryptic, but it is the idiomatic way to do it in Bash.

The right part of the :- operator can be a string, an integer or another variable.
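A quick illustration of the behaviour; the variable name and values are just for the demo:

#!/bin/bash

unset var
var=${var:-'default'}
echo "${var}"    # prints: default

var='already set'
var=${var:-'default'}
echo "${var}"    # prints: already set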

Thank you for reading, and see you tomorrow !

Posted By: Dimi
Last Edit: 06 Jan 2012 @ 11:51 AM

Categories: Uncategorized

 05 Jan 2012 @ 8:00 PM 

Hello all,

The last post before a long week-end (I will not be able to post tomorrow, nor on Monday and Tuesday). I'll see you on Wednesday.

Today, the topic is “The difference between find -exec and find | xargs”

The situation

You have to look into a specific directory and its subdirectories for all the files named "*log", because you are quite sure one of them contains the word "error".

You have two ways to do this:

1 – using -exec

This is a parameter to the find command that allows you to run an extra command on the found files. The syntax is as follows:

-exec <command> {} \;

<command> stands for your command.

{} will be replaced by the name of the found file.

\; terminates the command

i.e. you type:

find . -name \*log -exec grep -i error {} \;

and it will return all the lines containing "error", regardless of case.

2 – using xargs.

xargs is a command that allows the piped data to be passed as parameters to another command. The syntax is as follows:

| xargs <command>

xargs simply puts the piped data at the end of the command, i.e. you type:

find . -name \*log | xargs grep -i error

3 – Using a while loop.

Yes, you can do it like that, but it is not the topic of this discussion (see the sketch at the end of this post).

 

The difference

What is the main difference between the two?

-exec is going to take each file and execute the command on it. Using this, you will get a list of all the matching lines, but not the names of the files: grep, when given a single file as a parameter, assumes you know which file it is talking about!

A sub-process will be spawned for each file to be checked.

xargs, on the other hand, is going to pass the complete list of files to grep. The names of the files will be shown in the result list.

The separator for the parameters is the space, and this is OK as long as there is no space in the names of the files (but who puts spaces in file names? Ah, OK, users…).
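If your find and xargs support them (GNU and most modern BSD versions do), the -print0 and -0 options sidestep the problem by separating the file names with a NUL byte instead of whitespace:

find . -name \*log -print0 | xargs -0 grep -i error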

In fact, the failproof way to deal with this specific request is to use a while loop.
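A minimal sketch of that while loop: it handles spaces in file names (though not newlines), and the extra /dev/null argument forces grep to print the file name even though each call receives a single file:

find . -name \*log | while IFS= read -r FileName; do
    grep -i error "${FileName}" /dev/null
done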

 

The conclusion

Both ways can be useful, depending on the request and the situation. It is important that you understand the different ways these tools work, so that you can choose the best one for your usage.

Thank you for reading, and see you next Wednesday! (told you, long week-end for me 🙂 )

 

Posted By: Dimi
Last Edit: 02 Jan 2012 @ 05:17 PM

Categories: Bash, basics, Snippets

 04 Jan 2012 @ 8:00 PM 

Hello, and thank you for coming again !

I have noticed that some of you are coming via the RSS feed, and it is nice to see you are following this blog thoroughly !

Today, the topic is "the Equal Tilde operator".

The situation

You have a variable, and you need to check if the content is a single digit, or the letter “a” or the letter “A” or the letter “b” or the letter “B”

The solution

You do not use the equal tilde and you die of boredom:

if [[ ${var} = [0-9] ]] || [ "$var" = "a" ] || [ "$var" = "A" ] || [ "$var" = "b" ] || [ "$var" = "B" ] ; then

or you use the equal tilde operator:

if [[ "${var}" =~ ^([0-9]|a|b|A|B)$ ]] ; then

and you can live another day.

The equal tilde operator allows you to use regexes (the same ones you used for sed, remember?) in an if command.

Note the double square brackets: the =~ operator only works inside [[ ]].

Thank you for reading, and see you tomorrow !

 

Nota bene :

You should not put the right side of the =~ operator in quotes: a quoted right side is treated as a plain string, not as a regex.

If you do not want to escape everything that might need escaping, just put your complicated regex in a variable:

RegEx="My Complicated Regex"

if [[ "${var}" =~ ${RegEx} ]]; then
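Putting it together, a small runnable check; the sample values are just for the demo:

#!/bin/bash

RegEx='^([0-9]|a|b|A|B)$'

for var in 7 a Z 42; do
    if [[ "${var}" =~ ${RegEx} ]]; then
        echo "${var} matches"
    else
        echo "${var} does not match"
    fi
done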

 

Posted By: Dimi
Last Edit: 05 Jan 2012 @ 07:54 PM
