Wednesday, December 24, 2008
NPTL threads issue :-
The Linux NPTL thread library implements thread cancellation by throwing an (implementation-defined) exception that you are not allowed to finalize: you can catch it, but you have to rethrow it, or the runtime aborts the process. The problem is that you cannot portably identify this exception type in your C++ program. This is one of the issues with NPTL threads.
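A minimal sketch of the rule (the worker/main names are illustrative; the comment about the exception type is a GCC-specific detail, not portable):

#include <pthread.h>
#include <cstdio>

void* worker(void*) {
    try {
        for (;;)
            pthread_testcancel();          // a cancellation point
    } catch (...) {
        // On GCC/NPTL the cancellation "exception" is abi::__forced_unwind
        // (declared in <cxxabi.h>), but that is an implementation detail.
        std::printf("worker cleaning up\n");
        throw;   // must rethrow: swallowing it terminates the program
    }
    return 0;
}

int main() {
    pthread_t t;
    pthread_create(&t, 0, worker, 0);
    pthread_cancel(t);
    pthread_join(t, 0);
    return 0;
}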
Thursday, December 18, 2008
format specifiers,
This post is in regard to printing a string with %S. In glibc's printf, "%S" means a wide-character (16/32-bit) string, and passing an ordinary 8-bit string to it may lead to a "core", since the bytes are misinterpreted as wide characters.
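A small sketch of the difference (glibc printf, where "%S" is equivalent to "%ls"):

#include <cstdio>

int main() {
    const wchar_t* wide   = L"wide string";
    const char*    narrow = "narrow string";
    std::printf("%S\n", wide);      // OK: %S (same as %ls) expects wchar_t*
    std::printf("%s\n", narrow);    // narrow strings take %s
    // std::printf("%S\n", narrow); // WRONG: 8-bit bytes read as wide chars;
                                    // undefined behaviour, can dump core
    return 0;
}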
Tuesday, December 16, 2008
Mutex init,
Using a pthread mutex without calling pthread_mutex_init (i.e., without initialisation) leads to undefined behaviour. It typically dumps core under load conditions, and debugging the problem is not straightforward. People often forget this call and start using the mutex, so this post is added as a check point.
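A minimal sketch of correct initialization, both static and dynamic:

#include <pthread.h>

pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;   // static initialization

int main() {
    pthread_mutex_t m2;
    pthread_mutex_init(&m2, 0);     // dynamic initialization, before any lock

    pthread_mutex_lock(&m2);
    /* ...critical section... */
    pthread_mutex_unlock(&m2);

    pthread_mutex_destroy(&m2);
    return 0;
}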
Friday, November 21, 2008
C++ Important Concepts
Virtual Functions
Virtual Destructor
Operator Overloading
Abstraction
Inline functions
Copy constructors vs Assignment operator
Deep copy vs Shallow
Name Mangling
Namespaces
Access specifiers
friend functions and friend classes
Templates
Inheritance
static variables, static functions
Virtual inheritance
Object slicing
Compilation Process
Static vs Dynamic libraries
Basic operations on Linked List
How to find processor endianness?
For any class that has virtual function(s) [ and/or derived from a class that has virtual function(s)], compiler generates a 'virtual table' ('vtable' in short).
A vtable is an array of pointers to functions. The compiler also adds a hidden data member, say __vptr, to the class. This '__vptr' points to the vtable of the actual class of the object.
For example, consider how the compiler implements the virtual call v->run() through a pointer.
In vehicle class case, vehicle class has only one virtual member function called run. So compiler generates a vtable for vehicle class. This vtable has only one entry which points to vehicle::run. Assuming 'car' is derived from vehicle and overrides vehicle::run, car class vtable contains one entry. This entry points to car::run.
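A minimal sketch of the vehicle/car classes assumed above:

class vehicle {
public:
    virtual void run() {}   // vtable entry 0 of vehicle points to vehicle::run
};

class car : public vehicle {
public:
    void run() {}           // overrides: vtable entry 0 of car points to car::run
};

vehicle* v = new car;       // v's hidden __vptr points to car's vtable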
Every vehicle class object's '__vptr' (pointer to vtable) points to vehicle class vtable. This hidden pointer may be put at the end of the object. Every car class object's '__vptr' points to car class vtable. Whenever a virtual function is called like
v->run();
Compiler generates
((v->__vptr)[0])(v /* this pointer */); // call the function pointed to by the first entry in the vtable, because the first entry corresponds to the run member function
If a class has more than one virtual function, say 'N' virtual functions, then the corresponding vtable has 'N' entries. A call to the n-th virtual function is generated as
(v->__vptr[n - 1])(v /*, other arguments if any */);
Virtual Destructor
A virtual destructor is a destructor that is dispatched virtually, i.e., when the delete keyword is used with a pointer to a base class, the most-derived destructor is called.
Consider:
class B { public: ~B(); };
class D : public B { public: ~D(); };
class BVD { public: virtual ~BVD(); };
class DVD : public BVD { public: virtual ~DVD(); };

B* b = new D;
delete b;      // calls ~B() only, never ~D() !!! Undefined behaviour; likely crash or leak.

BVD* bvd = new DVD;
delete bvd;    // calls ~DVD() then ~BVD(); good.
Clearly, you want to have virtual destructors just so that the correct destructor is called when an object is deleted.
But what if your objects don't need any destructors? What if there are no cleanup activities that you need to write for your objects? In that case, YOU STILL NEED VIRTUAL DESTRUCTORS!
When the storage for an object is going to be destroyed, two pieces of information are required: the address and the size of the block of storage that is going to be returned to the heap. It is the destructor which supplies this information to the runtime system.
Operator Overloading
Operator overloading is the ability to tell the compiler how to perform a certain operation when its corresponding operator is used on one or more operands.
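A small illustrative sketch, using a hypothetical Complex class:

class Complex {
    double re, im;
public:
    Complex(double r, double i) : re(r), im(i) {}
    // tell the compiler what '+' means for two Complex values
    Complex operator+(const Complex& rhs) const {
        return Complex(re + rhs.re, im + rhs.im);
    }
};

int main() {
    Complex a(1, 2), b(3, 4);
    Complex c = a + b;   // compiles to a.operator+(b)
    return 0;
}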
Abstraction
Abstraction is the process of hiding unwanted details from the user, exposing only the essential interface.
Inline functions
When the compiler inline-expands a function call, the function's code gets inserted into the caller's code stream (conceptually similar to what happens with a #define macro). This can, depending on a zillion other things, improve performance, because the optimizer can procedurally integrate the called code — optimize the called code into the caller.
If the compiler inline-expands the call to a small function g(), all those memory operations could vanish. The registers wouldn't need to get written or read since there wouldn't be a function call, and the parameters wouldn't need to get written or read since the optimizer would know they're already in registers.
It can be implemented in two ways.
1) By defining in the declaration itself.
2) By including inline keyword while defining the member functions
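A short sketch of both ways (Point is a hypothetical class):

class Point {
    int x_, y_;
public:
    Point() : x_(0), y_(0) {}
    // 1) defined in the declaration itself -- implicitly inline
    int x() const { return x_; }
    int y() const;   // declared here, defined below
};

// 2) defined outside the class with the inline keyword
inline int Point::y() const { return y_; }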
Copy constructors vs Assignment operator
A copy constructor is used to initialize a newly declared variable from an existing variable. An assignment operator is used to assign an existing variable from another existing variable. The copy constructor and assignment operator do similar things. They both copy state from one object to another, leaving them with equivalent semantic state. In other words, both objects will behave the same way and return the same results when their methods are called.
A copy constructor doesn't need to delete previously allocated memory: since the object in question has just been created, it cannot already have its own allocated data. Another point to note about the copy constructor: when a function takes an object as an argument (instead of, e.g., a pointer or a reference), the copy constructor is called to pass a copy of the object as the argument. The copy constructor is also implicitly called when a function returns an object by value.
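A quick sketch of when each one runs (Buffer is a hypothetical class):

class Buffer {
public:
    Buffer() {}
};

void f(Buffer byValue) {}        // pass-by-value invokes the copy constructor

int main() {
    Buffer a;                    // default construction
    Buffer b = a;                // copy constructor: b is a brand-new object
    Buffer c(a);                 // copy constructor again, different spelling
    Buffer d;
    d = a;                       // assignment operator: d already exists
    f(a);                        // copy constructor makes the argument copy
    return 0;
}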
Deep copy vs Shallow
A deep copy copies all fields, and makes copies of dynamically allocated memory pointed to by the fields. To make a deep copy, you must write a copy constructor and overload the assignment operator; otherwise the copy will point to the original, with disastrous consequences.
A shallow copy of an object copies all of the member field values. This works well if the fields are values, but may not be what you want for fields that point to dynamically allocated memory. The pointer will be copied, but the memory it points to will not be copied: the field in both the original object and the copy will then point to the same dynamically allocated memory, which is not usually what you want. The default copy constructor and assignment operator make shallow copies.
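A sketch of a deep-copying class along those lines (the String name is illustrative):

#include <cstring>

class String {
    char* data;
public:
    String(const char* s) {
        data = new char[std::strlen(s) + 1];
        std::strcpy(data, s);
    }
    // deep copy: allocate fresh memory instead of sharing the pointer
    String(const String& o) {
        data = new char[std::strlen(o.data) + 1];
        std::strcpy(data, o.data);
    }
    String& operator=(const String& o) {
        if (this != &o) {                     // guard against self-assignment
            char* p = new char[std::strlen(o.data) + 1];
            std::strcpy(p, o.data);
            delete[] data;                    // safe: this object already owns data
            data = p;
        }
        return *this;
    }
    ~String() { delete[] data; }
};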
Name Mangling
Name mangling (the more politically correct term is name decoration, although it is rarely used) is a method used by a C++ compiler to generate unique names for the identifiers in a program. The exact details of the algorithm are compiler-dependent, and they may vary from one version to another.
Name mangling ensures that entities with seemingly identical names still get unique identifications. The resultant mangled name contains all the necessary information that may be needed by the linker, such as linkage type, scope, calling convention, and so on. When a global function is overloaded, the generated mangled name for each overloaded version is unique. Name mangling is also applied to variables. Thus, a local variable and a global variable with the same user-given name still get distinct mangled names.
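A hedged illustration: the exact mangled names are compiler-specific (the one shown is the GCC/Itanium ABI form), and extern "C" suppresses C++ mangling:

void print(int);      // mangled, e.g. _Z5printi with GCC (Itanium C++ ABI)
void print(double);   // mangled to a different name, so both can coexist

extern "C" void c_entry(int);   // no C++ mangling: the linker sees plain "c_entry"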
Namespaces
Namespaces allow us to group entities like classes, objects and functions under a name. This way the global scope can be divided into "sub-scopes", each one with its own name. The format of namespaces is:
namespace identifier { entities }
Access specifiers
Public, protected and private are the three access specifiers in C++. Public data members and member functions are accessible outside the class. Protected data members and member functions are only available to derived classes. Private data members and member functions can't be accessed outside the class. However, there is an exception: friend functions and friend classes.
friend functions and friend classes
Once a non-member function is declared as a friend, it can access the private data of the class. Similarly, when a class is declared as a friend, the friend class has access to the private data of the class which made it a friend.
This is a good way out given by C++ to avoid restrictions on private variables, but it should be used with caution. If all the functions and classes are declared as friends, then the concepts of encapsulation and data security go for a toss.
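A minimal sketch (Account and Auditor are hypothetical names):

class Account {
    double balance;                       // private
    friend void audit(const Account&);    // friend function
    friend class Auditor;                 // friend class
public:
    Account() : balance(0) {}
};

void audit(const Account& a) {
    double b = a.balance;   // OK: a friend may read private members
    (void)b;
}

class Auditor {
public:
    double peek(const Account& a) { return a.balance; }   // also OK
};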
Templates
Use function templates to write generic functions that can be used with arbitrary types. For example, one can write searching and sorting routines which can be used with any arbitrary type.
We also have the possibility to write class templates, so that a class can have members that use template parameters as types.
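A short sketch of both kinds of template:

#include <vector>

// function template: one generic routine for any type supporting '<'
template <typename T>
T maxOf(T a, T b) { return (a < b) ? b : a; }

// class template: members use the template parameter as a type
template <typename T>
class Stack {
    std::vector<T> items;
public:
    void push(const T& v) { items.push_back(v); }
    T pop() { T v = items.back(); items.pop_back(); return v; }
};

int main() {
    int    i = maxOf(3, 7);       // T deduced as int
    double d = maxOf(2.5, 1.5);   // T deduced as double
    (void)i; (void)d;
    Stack<int> s;
    s.push(42);
    return s.pop() == 42 ? 0 : 1;
}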
Inheritance
There are some points to be remembered about C++ inheritance. The protected and public variables or members of the base class are all accessible in the derived class, but a private member variable is not accessible by a derived class.
Some of the exceptions to be noted in C++ inheritance are as follows: the constructor and destructor of a base class are not inherited; the assignment operator is not inherited; the friend functions and friend classes of the base class are also not inherited.
If you derive with the public access specifier, the derived class gets the maximum access level: public members stay public, protected members stay protected, and private members remain inaccessible to the derived class.
If you derive with the protected access specifier, the derived class gets a limited access level: public members become protected, protected members stay protected, and private members remain inaccessible.
If you derive with the private access specifier, the derived class gets the minimum access level: public and protected members both become private, and private members remain inaccessible.
The way that the access specifiers, inheritance types, and derived classes interact causes a lot of confusion. To try and clarify things as much as possible:
First, the base class sets its access specifiers. The base class can always access its own members. The access specifiers only affect whether outsiders and derived classes can access those members.
Second, derived classes have access to base class members based on the access specifiers of the immediate parent. The way a derived class accesses inherited members is not affected by the inheritance method used!
Finally, derived classes can change the access type of inherited members based on the inheritance method used. This does not affect the derived classes own members, which have their own access specifiers. It only affects whether outsiders and classes derived from the derived class can access those inherited members.
static variable,static functions
For some reason, static has different meanings in different contexts. When specified on a function declaration, it makes the function local to the file.
When specified with a variable inside a function, it allows the variable to retain its value between calls to the function (see static variables). It seems a little strange that the same keyword has such different meanings.
* Static member functions have external linkage. These functions do not have a this pointer. As a result, the following restrictions apply to such functions:
They cannot access non-static class member data using the member-selection operators (. or ->). They cannot be declared as virtual. They cannot have the same name as a non-static function that has the same argument types.
* When a data member is declared as static, only one copy of the data is maintained for all objects of the class.
Static data members are not part of objects of a given class type; they are separate objects. As a result, the declaration of a static data member is not considered a definition. The data member is declared in class scope, but the definition is performed at file scope. These static members have external linkage.
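A minimal sketch of the declaration-vs-definition split:

class Counter {
public:
    static int count;                     // declaration, at class scope
    static int get() { return count; }    // no 'this': may touch only statics
    Counter()  { ++count; }
    ~Counter() { --count; }
};

int Counter::count = 0;                   // definition, at file scope

int main() {
    Counter a, b;
    return Counter::get();                // 2 -- one copy shared by all objects
}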
Virtual inheritance
With virtual inheritance there is only one copy of a common base subobject even if (because of multiple inheritance) that base appears more than once in the hierarchy. Virtual inheritance is a kind of inheritance that solves some of the problems caused by multiple inheritance (particularly the "diamond problem") by removing the ambiguity.
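A minimal diamond-problem sketch (class names are illustrative):

class Device { public: int id; };

// 'virtual' makes Printer and Scanner share a single Device subobject
class Printer : virtual public Device {};
class Scanner : virtual public Device {};

class Copier : public Printer, public Scanner {};

int main() {
    Copier c;
    c.id = 7;    // unambiguous: exactly one Device in the hierarchy
                 // (without 'virtual' above, this access would be ambiguous)
    return 0;
}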
Object slicing
Object slicing happens when we assign a Derived object to a Base object, or when we initialize a Base object with a Derived object. It also happens when we pass a Derived object by value to a function that takes a Base object, which eventually calls the copy constructor. Object slicing happens because the Base copy constructor and assignment operator do not know anything about Derived; only the Base part is copied.
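A small sketch showing the slice:

#include <cstdio>

class Base {
public:
    virtual const char* name() const { return "Base"; }
    virtual ~Base() {}
};

class Derived : public Base {
public:
    const char* name() const { return "Derived"; }
    int extra;   // lost when the object is sliced
};

void byValue(Base b)      { std::printf("%s\n", b.name()); }  // prints "Base"
void byRef(const Base& b) { std::printf("%s\n", b.name()); }  // prints "Derived"

int main() {
    Derived d;
    Base b = d;    // slicing: only the Base part of d is copied
    byValue(d);    // slices the argument
    byRef(d);      // no slicing: polymorphism preserved
    byRef(b);      // prints "Base": the sliced copy lost its Derived identity
    return 0;
}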
*The C++ run-time system makes sure that when memory allocation fails, an error function is activated (the new-handler; by default, operator new throws std::bad_alloc).
*The mutable keyword allows a data member to be modified even through an object defined as const (or inside a const member function).
Compilation Process
1. Driver - what we invoked as "cc". This is actually the "engine", that drives the whole set of tools the compiler is made of. We invoke it, and it begins to invoke the other tools one by one, passing the output of each tool as an input to the next tool.
2. C Pre-Processor - normally called "cpp". It takes a C source file, and handles all the pre-processor definitions (#include files, #define macros, conditional source code inclusion with #ifdef, etc.) You can invoke it separately on your program, usually with a command like:
cc -E single_source.c
3. The C Compiler - normally called "cc1". This is the actual compiler, that translates the input file into assembly language. As you saw, we used the "-c" flag to invoke it, along with the C Pre-Processor, (and possibly the optimizer too, read on), and the assembler.
4. Optimizer - sometimes comes as a separate module and sometimes is found inside the compiler module. This one handles the optimization on a representation of the code that is language-neutral. This way, you can use the same optimizer for compilers of different programming languages.
5. Assembler - sometimes called "as". This takes the assembly code generated by the compiler, and translates it into machine language code kept in object files. With gcc, you could tell the driver to generate only the assembly code, by a command like:
cc -S single_source.c
6. Linker-Loader - This is the tool that takes all the object files (and C libraries), and links them together to form one executable file, in a format the operating system supports. A common format these days is known as "ELF". On SunOS systems, and other older systems, a format named "a.out" was used. This format defines the internal structure of the executable file: location of the data segment, location of the code segment, location of debug information and so on.
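To watch the driver stop after each stage, the standard gcc driver flags can be used (file names illustrative):

cc -E single_source.c > single_source.i   # stop after preprocessing
cc -S single_source.c                     # stop after compilation: single_source.s
cc -c single_source.c                     # stop after assembling: single_source.o
cc single_source.o -o single_source       # let the linker produce the executable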
Static vs Dynamic libraries
When you compile a program with static libraries, the statically linked libraries are linked into the final executable by the linker. This increases the size of the executable. Likewise, when a library needs to be updated, you'll need to compile the new library and then recompile the application to take advantage of it. Ok, so why do we have static libraries then? Well, if you're booting your system into maintenance mode, static libraries can be beneficial.
Dynamic libraries are libraries that have the library name embedded into the executable, but the library itself is not compiled into the binary file. This makes upgrading libraries easier. You can upgrade the library and the application will still work, barring any unforeseen changes to the library itself. If the name of the library changes, you can just create a symbolic link to the old library name and you're back in business.
Also, since the library is not compiled into the executable, the executables tend to be smaller. If you have a lot of programs that use the same library, you can save considerable disk space, and you only need to upgrade the library once for all of those programs to carry on as if nothing has changed. Since the libraries are shared, the memory footprint goes down as well.
There is a third type of library used by programs: dynamically loaded libraries. These are built as normal shared or static libraries. The difference is that they are not loaded at program startup; instead, you use the dlopen() and dlsym() application programming interfaces to activate the library. This is how web browser plugins, modules (Apache), and just-in-time compilers work. When you are done using the library, you call dlclose() to remove it from memory. Errors are handled via the dlerror() application programming interface.
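A hedged sketch of that interface (the library path and symbol name are hypothetical; link with -ldl):

#include <dlfcn.h>
#include <cstdio>

int main() {
    void* handle = dlopen("./libplugin.so", RTLD_LAZY);   // load at run time
    if (!handle) {
        std::printf("dlopen failed: %s\n", dlerror());
        return 1;
    }
    // look up a symbol; the function-pointer cast is the usual POSIX idiom
    void (*entry)() = (void (*)()) dlsym(handle, "plugin_entry");
    if (entry)
        entry();
    dlclose(handle);   // unload when done
    return 0;
}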
Executable formats (a.out, ELF, etc.):- a.out is the venerable executable format that was common in Unix's early history and was originally Linux's only executable format. To this day, the default name of the executable output file of the GNU compiler is a.out (regardless of what its format is).
Some of the capabilities of ELF are dynamic linking, dynamic loading, imposing runtime control on a program, and an improved method for creating shared libraries[3]. The ELF representation of control data in an object file is platform independent, an additional improvement over previous binary formats.
The three main types of ELF files are executable, relocatable, and shared object files. These file types hold the code, data, and information about the program that the operating system and/or link editor need to perform the appropriate actions on these files. The three types of files are summarized as follows:
An executable file supplies information necessary for the operating system to create a process image suitable for executing the code and accessing the data contained within the file. A relocatable file describes how it should be linked with other object files to create an executable file or shared library. A shared object file contains information needed in both static and dynamic linking.
The ELF file format has five main parts: (1) the ELF header, (2) the program header table, (3) the section header table, (4) the ELF sections, and (5) the ELF segments.
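Each of these parts can be inspected with readelf:

readelf -h program     # (1) the ELF header
readelf -l program     # (2) the program header table and (5) the segments
readelf -S program     # (3) the section header table and (4) the sections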
Basic operations on Linked List
1. basic creation:
Node* tmp = new Node(newdat);   // Step 1: allocate the new node
tmp->setNext(n->getNext());     // Step 2(a): link it to n's successor
n->setNext(tmp);                // Step 2(b): splice it in after n
2.reverse:
struct node *p, *q, *r;
r = NULL;
p = head;
while (p) {
    q = p->next;
    p->next = r;
    r = p;
    p = q;
}
head = r;   /* note: r, not p -- p is NULL when the loop ends */
3. finding if it is circular (Floyd's two-pointer check):
struct node *pointer1 = head, *pointer2 = head;
while (pointer2 && pointer2->next) {
    pointer1 = pointer1->next;          /* advances one node  */
    pointer2 = pointer2->next->next;    /* advances two nodes */
    if (pointer1 == pointer2) { printf("circular\n"); break; }
}
How to find processor endianness?
The attribute of a system that indicates whether integers are represented with the most significant byte stored at the lowest address (big endian) or at the highest address (little endian).
Run-time performance penalties occur when using TCP/IP on a little endian processor, since protocol fields are big endian (network byte order) and must be byte-swapped. For that reason, it may be unwise to select a little endian processor for use in a device, such as a router or gateway, with an abundance of network functionality.
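The classic test stores a known integer and inspects its first byte:

#include <cstdio>

int main() {
    unsigned int x = 1;
    unsigned char* p = (unsigned char*)&x;   // look at the lowest-addressed byte
    if (*p == 1)
        std::printf("little endian\n");      // LSB lives at the lowest address
    else
        std::printf("big endian\n");         // MSB lives at the lowest address
    return 0;
}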
Wednesday, November 19, 2008
Brief Overview on IPCs
IPC - Inter Process Communication
Different types of IPC mechanisms
• pipes
• FIFOs (named pipes)
• message queues
• semaphores
• shared memory
Pipes
* Pipes can be used to communicate between related processes, i.e., parent and child, or between two children of a parent. Pipes provide one-way communication of data. A pipe has two ends, one for reading and one for writing.
* A pipe can be created by using int pipe(int *filedes)
* The pipe system call returns two file descriptors filedes[0] for reading and filedes[1] for writing.
* One process writes onto the pipe using the write end fd and the other process reads the pipe by using the read end fd.
* Just to make sure that each process does either writing or reading using the pipes, the corresponding fd is closed in that process.
* For example, if parent writes and child reads, then in parent process, the programmer can close the read fd and in the child process the write fd can be closed.
* For creating a two-way communication, we need to create two pipes.
* Pipes use kernel memory for the actual pipe buffer. A pipe has a finite size, which is at least 4096 bytes.
* Disadvantages: Pipes can be used only between related processes. Pipes do not have an entry in the name space, which is why they cannot be used between two unrelated processes.
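A minimal parent/child sketch of the calls described above:

#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

int main() {
    int fd[2];
    pipe(fd);                        // fd[0] = read end, fd[1] = write end
    if (fork() == 0) {               // child: the reader
        close(fd[1]);                // close the unused write end
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        std::printf("child read: %s\n", buf);
        _exit(0);
    }
    close(fd[0]);                    // parent: the writer; close unused read end
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(0);
    return 0;
}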
Named Pipes (FIFOs)
* The main difference between a FIFO and a normal pipe is that FIFOs have an entry in the name space, so they can be used between two unrelated processes.
* A FIFO can be created by using int mknod (char *pathname, int mode, int dev)
* Once the FIFO has been created, it needs to be opened for either reading or writing using the open system call.
* Pipes and FIFOs follow the rules below for reading and writing:
* A read of less data than is in the pipe or FIFO returns the requested amount; the remainder can be read by subsequent reads
* If more data is requested than is present, only the amount available is returned
* If there is no data and no writer on the pipe, the read returns zero (end of file)
* If two processes write simultaneously (each total less than the max limit), one process's data follows the other's but they won't intermix
* If a process writes onto a pipe that no process has open for reading, SIGPIPE is generated
* The unlink system call can be used to remove a FIFO.
* Pipes and FIFOs are called stream-oriented IPC mechanisms, since the data flowing through them is just a stream of bytes with no demarcation into fixed messages. Hence, when one process writes 100 bytes, another process can read them 20 bytes at a time over 5 reads.
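A hedged sketch using mkfifo(), the usual front end to mknod() for FIFOs (the path is illustrative; a reader such as 'cat /tmp/demo_fifo' must open the other end, or open() blocks):

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
    mkfifo("/tmp/demo_fifo", 0666);             // creates a name-space entry

    int fd = open("/tmp/demo_fifo", O_WRONLY);  // blocks until a reader opens it
    write(fd, "hi\n", 3);
    close(fd);

    unlink("/tmp/demo_fifo");                   // remove the FIFO
    return 0;
}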
Message Q, Semaphores, Shared Memory
* These 3 are called System V IPCs. They share a commonality: all three IPCs are identified by a key_t (an integer).
* System calls that operate these IPCs also are similar.
* replace ipc with msg/sem/shm to get the corresponding system call for each IPC mechanism
get - system call to create or open
ctl - system call to control operations
msgsnd/msgrcv - for send/receive on a message queue
semop - operations on semaphores
shmat/shmdt - operations on shared memory
Message Queues
* In Message Queues, different processes communicate with each other by means of messages which are predefined and agreed upon by all these processes. A message queue uses kernel memory and is basically a linked list.
* Each message in the queue is identified by three items:
1. message type
2. length of data portion // This is optional
3. data portion
* For receiving a message, int msgrcv(int msgqid, struct msgbuf *buf, int len, long msgtype, int flags) is used.
* The msgtype indicates the type of the message that needs to be read from the Q.
If the msgtype is 0, the first message on the Q is returned
If the msgtype is >0, the first message with that msgtype on the Q is returned
If the msgtype is <0, the first message with the lowest type less than or equal to the absolute value of msgtype is returned
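A minimal sketch of the msgget/msgsnd/msgrcv calls (the key value is illustrative):

#include <sys/ipc.h>
#include <sys/msg.h>
#include <cstring>
#include <cstdio>

struct mymsg { long mtype; char mtext[64]; };   // the long type field comes first

int main() {
    int qid = msgget((key_t)1234, IPC_CREAT | 0666);

    mymsg m;
    m.mtype = 1;                                   // message type, must be > 0
    std::strcpy(m.mtext, "hello");
    msgsnd(qid, &m, std::strlen(m.mtext) + 1, 0);  // length covers the data only

    mymsg r;
    msgrcv(qid, &r, sizeof r.mtext, 1, 0);         // first message of type 1
    std::printf("got: %s\n", r.mtext);

    msgctl(qid, IPC_RMID, 0);                      // remove the queue
    return 0;
}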
Shared Memory 
In message queues and other IPC mechanisms, the buffers used for communication live mainly in kernel memory, so the process must switch modes between user and kernel to access them, which makes them slow. In shared memory, the memory is mapped into user space, so there is no mode switch on access, which speeds things up.
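A minimal single-process sketch of shmget/shmat/shmdt (the key and size are illustrative):

#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstring>
#include <cstdio>

int main() {
    int shmid = shmget((key_t)5678, 4096, IPC_CREAT | 0666);

    char* p = (char*)shmat(shmid, 0, 0);    // map the segment into user space
    std::strcpy(p, "shared data");          // plain memory access: no mode switch
    std::printf("%s\n", p);

    shmdt(p);                               // detach from our address space
    shmctl(shmid, IPC_RMID, 0);             // mark the segment for removal
    return 0;
}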
Semaphores
* These are a means of synchronization. Semaphores are essentially global counters: a semaphore is a kind of global integer that is common to different processes on the system.
* How can we have a global variable that is common to different processes? Basically, the semaphore is maintained in kernel space, so that it can be accessed by different processes.
* Semaphores work with the same kind of calls as any other System V IPC mechanism, like
- semget ( key, numOfSemaphoresInSet, permissionFlags )
* With semaphores, we can create several resource counters (sub-semaphores) associated with the same semaphore key; the second parameter of semget indicates this. The maximum is a system-dependent limit (SEMMSL; 25 on some systems).
Permission-flag values can include IPC_CREAT and IPC_EXCL along with the normal permission bits.
* If we put just IPC_CREAT, then if no semaphore exists with the specified key, one is created; if a semaphore already exists that was created by another process, that semId is returned.
* If both IPC_CREAT and IPC_EXCL are mentioned, a semaphore is created only if none exists; otherwise semget fails. Basically, it gives exclusivity.
* We can set the value of a semaphore to any value we want by using the semctl system call.
* semctl ( semId, semNum, cmd, args ) - the second argument is the sub-semaphore number
Ex: semctl ( semId, 0, GETVAL ) - to get the value of a semaphore
semctl ( semId, 0, SETVAL, 13 ) - to set the value of a semaphore to 13
* How to use semaphores ?
It depends on the applications themselves. Let's say one process creates a semaphore, sets the value to 2 and starts using a resource. The protocol between the processes is that whenever the semaphore value is 0, the resource is free and a process may access it.
* The second process gets the value of the semaphore by using GETVAL in semctl and checks whether it is == 0; if not, it waits, otherwise it uses the resource.
* Even in the above procedure there is a synchronisation problem. Process 1 sets the value of the semaphore to 2 and starts using the resource. Once done, it resets the semaphore to 0.
* If two processes are waiting for this resource, Process 2 gets the value and checks whether it is zero. Just before the check, if the CPU switches to Process 3, it also reads the value as 0 and tries to use the resource.
* Once the CPU comes back to Process 2, its check passes and it too tries to use the resource, recreating exactly the synchronisation problem semaphores were designed to solve.
* To make these things straight, the operation of checking and changing the value of a semaphore must be atomic, which is achieved by using the "semop" system call with the help of the sembuf structure.
struct sembuf {
    unsigned short sem_num;   /* sub-semaphore index */
    short          sem_op;    /* operation */
    short          sem_flg;   /* flags */
};

semop(semId, sembufPtr, numOfOperationsInArray)
* The main field in the sembuf structure is sem_op, which is applied to the current value of the semaphore.
Let’s take an example to explain this.
If sembuf is assigned {1, 0, 0}, that means we want to operate on the sub-semaphore at index 1; the flags field we need not discuss at this time.
Let’s see the second argument meaning.
If sem_op is 0, semop blocks the execution until the value of the semaphore becomes zero. This is the atomic operation that gets and checks the semaphore value.
If sem_op is a positive value, this value is added to the current value of the semaphore and the function returns immediately (unlike SETVAL, this adds to the value rather than replacing it).
If sem_op is a negative value, its absolute value is subtracted from the current value of the semaphore, and the function returns only if the value after decrementing would be greater than or equal to zero. Otherwise, the call blocks the execution.
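A minimal sketch of the atomic P/V pattern with semop (the key value is illustrative; on Linux the caller must define union semun):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; struct semid_ds* buf; unsigned short* array; };

int main() {
    int semid = semget((key_t)4321, 1, IPC_CREAT | 0666);   // one sub-semaphore

    union semun arg;
    arg.val = 1;                       // 1 = resource available
    semctl(semid, 0, SETVAL, arg);

    struct sembuf lock   = { 0, -1, 0 };  // wait until value >= 1, then take it
    struct sembuf unlock = { 0, +1, 0 };  // give it back, waking any waiter

    semop(semid, &lock, 1);            // the check-and-decrement is atomic
    /* ...exclusive use of the shared resource... */
    semop(semid, &unlock, 1);

    semctl(semid, 0, IPC_RMID);        // remove the semaphore set
    return 0;
}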
Difference btw read and fread
1. "read" is a system call, and fread is a C (glibc) library function.
2. fread uses internal buffering, which speeds up reading; fread is more efficient on a block device like a disk.
3. read can be used for accessing a character device, like a network device.
4. fread operates on buffered streams (FILE*), whereas read works directly and unbuffered on file descriptors.
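A small sketch contrasting the two calls (the file path is illustrative):

#include <cstdio>     // fread: buffered stdio
#include <fcntl.h>    // open
#include <unistd.h>   // read: the raw system call

int main() {
    char buf[128];

    int fd = open("/etc/hostname", O_RDONLY);      // read(): one syscall per call
    ssize_t n = read(fd, buf, sizeof buf);
    close(fd);

    FILE* fp = std::fopen("/etc/hostname", "r");   // fread(): glibc buffers, so many
    size_t m = std::fread(buf, 1, sizeof buf, fp); // small reads cost few syscalls
    std::fclose(fp);

    return (n >= 0 && m <= sizeof buf) ? 0 : 1;
}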
Process and Memory Segments
Process:
* An instance of program in execution.
* Can be created by fork() system call
* the total number of new processes created by "n" successive fork() calls is 2^n - 1 (see the sketch below)
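A tiny sketch of that count:

#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

int main() {
    // every process alive executes each fork(), so 3 calls leave
    // 2^3 = 8 processes: the original plus 2^3 - 1 = 7 new ones
    for (int i = 0; i < 3; ++i)
        fork();
    std::printf("pid %d\n", (int)getpid());   // printed 8 times
    while (wait(0) > 0) {}                    // reap whatever children we own
    return 0;
}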
Different memory segments in a process
1. Code/text segment
* Contain code instructions. This segment can be shared across multiple running instances of the same program
2. Data Segment
* Contains global and static variable
2.a Zero Initialized Data segment ( bss segment )
* Global and statically allocated data that are initialized to zero by default are kept in what is colloquially called the BSS area of the process. Each process running the same program has its own BSS area. When running, the BSS data are placed in the data segment. In the executable file, they are stored in the BSS section.
* The format of a Linux/Unix executable is such that only variables that are initialized to a nonzero value occupy space in the executable's disk file. Thus, a large array declared 'static char somebuf[2048];', which is automatically zero-filled, does not take up 2 KB worth of disk space. (Some compilers have options that let you place zero-initialized data into the data segment.)
    
2.b Initialized Data segment
* Statically allocated and global data that are initialized with nonzero values live in the data segment. Each process running the same program has its own data segment. The portion of the executable file containing the data segment is the data section.
3. Heap segment
* All dynamically allocated memory
4. Stack segment –
* All local / automatic variables are stored in stack
* Function parameters and function return address are also stored in the stack
5. Env Segment
* All environment variables
6. Shared memory segment
7. mmaped memory segment
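A small program whose variables land in the segments listed above (run 'size ./a.out' to see the text, data and bss sizes):

#include <cstdlib>

int initialized = 42;     // data segment: nonzero-initialized, stored on disk
int zeroed[2048];         // bss segment: zero-filled, occupies no disk space

int main() {
    int local = 0;                           // stack
    char* dyn = (char*)std::malloc(64);      // heap
    std::free(dyn);
    return local;
}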
 
Difference btw System call and Library Function
A system call is an entry point to the kernel; system calls execute in kernel mode. Hence, whenever a system call is made from a user program, a mode switch has to be done from user mode to kernel mode. System calls are provided by each OS and are not portable. Example: read().
A library function usually works in a user mode which may or may not require any OS services, for example, strlen() which calculates the length of a particular string.
Tuesday, November 18, 2008
Important Unix Concepts
IPC mechanisms:
http://mia.ece.uic.edu/~papers/WWW/multi-process/multi-process.html
Shared memory is the fastest of all IPC schemes. The memory to be shared is mapped into the address space of the processes (that are sharing). The speed achieved is attributed to the fact that there is no kernel involvement. But this scheme needs synchronization.
One advantage of using a pipe as IPC: it does not need a flush operation. If all of the data written to a pipe is not read out, it stays there, buffered. This is useful if a remote server connection is lost in the middle; clients can still retrieve the data available in the pipe, unlike with sockets.
Synchronisation mechanisms:
http://mia.ece.uic.edu/~papers/WWW/multi-process/multi-process.html
* A semaphore is useful for managing an object that is shared by several threads or processes, but whose concurrent use should be limited. A mutex is useful for managing access to an object that should only be accessed by a single thread at a time.
* A mutex is essentially a semaphore with its max count = 1.
Scheduling:
On DNP we schedule some important processes as real-time and some other, less important processes as time-shared. A real-time process has higher priority than a time-shared process. Whenever the real-time process blocks for I/O, the time-shared processes are scheduled. A time-shared process is assigned the number of clock ticks it needs prior to scheduling.
Sockets, Client-Server:
http://users.actcom.co.il/~choo/lupg/tutorials/internetworking/internet-theory.html
* When you are finished using a socket, you can simply close its file descriptor with close. If there is still data waiting to be transmitted over the connection, normally close tries to complete this transmission. You can control this behavior using the SO_LINGER socket option to specify a timeout period. The shutdown function shuts down the connection of a socket. The argument how specifies what action to perform: SHUT_RD disallows further receiving, SHUT_WR disallows further sending, and SHUT_RDWR disallows both.
Real Time OS:
* An RTOS will have a deterministic scheduler. For any given set of tasks, your process will always execute every N microseconds or milliseconds exactly, and the same N from schedule to schedule.
* In UNIX and Windows the scheduler is usually soft-realtime (as opposed to some hard-realtime RTOSes). Soft-realtime means that the scheduler tries to assure your process runs every X milliseconds, but it may fail to do so on occasion.
* Modern RTOSes simply make sure that a) no interrupt is ever lost, and b) no interrupt can be blocked by a lower-priority process.
* The real difference between an RTOS and a general purpose OS is that with an RTOS the designers have taken care to ensure that the response times are known
fork:
The `fork()' function is used to create a new process from an existing process. The new process is called the child process, and the existing process is called the parent. You can tell which is which by checking the return value from `fork()'. The parent gets the child's pid returned to him, but the child gets 0 returned to him.
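A minimal sketch of that return-value convention:

#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        std::printf("child:  fork returned 0, my pid is %d\n", (int)getpid());
        _exit(0);
    } else if (pid > 0) {
        std::printf("parent: fork returned the child's pid %d\n", (int)pid);
        wait(0);
    }
    return 0;
}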
The child inherits the following from the parent:
* process credentials (real/effective/saved UIDs and GIDs)
* environment
* stack
* memory
* open file descriptors (note that the underlying file positions are shared between the parent and child, which can be confusing)
* close-on-exec flags
* signal handling settings
* nice value
* scheduler class
* process group ID
* session ID
* current working directory
* root directory
* file mode creation mask (umask)
* resource limits
* controlling terminal
The child does not inherit the following:
* process ID (the child gets its own, unique PID)
* parent process ID (the child's parent is the process that called fork)
* the child gets its own copy of the parent's file descriptors and directory streams
* process, text, data and other memory locks are NOT inherited.
* process times, in the tms struct
* resource utilizations are set to 0
* pending signals initialized to the empty set
* timers created by timer_create not inherited
* asynchronous input or output operations not inherited
Signals:
*A signal is a message which can be sent to a running process. Signals can be initiated by programs, users, or administrators.
*The signal() system call is used to set a signal handler for a single signal type. signal() accepts a signal number and a pointer to a signal handler function, and sets that handler to accept the given signal
*For example, the proper method of telling the Internet Daemon (inetd) to re-read its configuration file is to send it a SIGHUP signal.
*By default, the kill command sends the SIGTERM signal. If SIGTERM fails, we can escalate to the SIGKILL signal to stop the process.
*Two signals cannot be redefined by a signal handler: SIGKILL always terminates a process, and SIGSTOP always suspends it. These two signals cannot be "caught" by a signal handler.
*When a child process calls exit(), a SIGCHLD signal is sent by the system to the parent. If the parent has no explicit SIGCHLD handler and never calls wait(), the child process is kept in the zombie state until the parent process dies.
* What are synchronous and asynchronous signals? (See "signals and threads" below.)
* How to generate a particular signal? By using the kill command along with the signal number.
* Delivery of a signal to a process that is blocked/waiting in a system call interrupts that system call. For example, a process waiting in accept that gets a SIGUSR1 will come out of the accept system call.
* By sending signal zero to a particular PID, we can check for the process's existence. If signal delivery succeeds, the kill command returns zero; otherwise it returns a non-zero value.
* When kill() is called with PID 0, the signal is sent to all processes in the same process group.
* When kill() is called with PID -1, the signal is sent to all processes except swapper (PID 0), init (PID 1) and the calling process.
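A minimal handler sketch using sigaction(), which is preferred over signal():

#include <signal.h>
#include <unistd.h>
#include <cstdio>

volatile sig_atomic_t got_hup = 0;

void on_hup(int) { got_hup = 1; }   // e.g. "re-read the configuration file"

int main() {
    struct sigaction sa;
    sa.sa_handler = on_hup;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGHUP, &sa, 0);      // SIGKILL/SIGSTOP would be rejected here

    pause();                        // try: kill -HUP <pid> from another shell
    if (got_hup)
        std::printf("got SIGHUP\n");
    return 0;
}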
Select vs Poll:
*Note: `select()' was introduced in BSD, whereas `poll()' is an artifact of SysV STREAMS. As such, there are portability issues; pure BSD systems may still lack `poll()', whereas some older SVR3 systems may not have `select()'. SVR4 added `select()', and the Posix.1g standard defines both.
*`select()' and `poll()' essentially do the same thing, just differently. Both of them examine a set of file descriptors to see if specific events are pending on any, and then optionally wait for a specified time for an event to happen.
*[Important note: neither `select()' nor `poll()' does anything useful when applied to plain files; they are useful for sockets, pipes, ptys, ttys & possibly other character devices, but this is system-dependent.]
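A minimal select() sketch watching stdin with a timeout:

#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>
#include <cstdio>

int main() {
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(STDIN_FILENO, &rset);         // watch stdin for readability

    timeval tv;
    tv.tv_sec = 5;                       // give up after 5 seconds
    tv.tv_usec = 0;

    int n = select(STDIN_FILENO + 1, &rset, 0, 0, &tv);
    if (n > 0 && FD_ISSET(STDIN_FILENO, &rset))
        std::printf("stdin is readable\n");
    else if (n == 0)
        std::printf("timed out\n");
    return 0;
}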
Typical Process Layout (from the top of memory down):
* Command-line arguments and environment variables
* Stack
* Heap
* Uninitialized data (bss)
* Initialized data
* Text (code)
Differences between Process and Thread:
*fork() is an expensive system call. Creating a new process requires more system memory space thus it causes more load on the operating system in keeping track of active processes [Ste98].  In a multiple-process application, the only way for the processes to share resources is through an Interprocess Communication (IPC) object, which is maintained by and kept within the system. For the user program, usage of these IPC objects is normally simple and abstract but causes heavy system overhead.  On the other hand, in a multiple-thread application, all the threads share resources within user address space and load on the operating system is reduced. However, the synchronization between threads is much more complicated. Moreover, using system call functions in multiple-thread application may have some unusual implications
* In the M:N model, a user-level scheduler takes care of scheduling the CPU across threads.
* In the 1:1 model, the kernel scheduler takes care of scheduling the CPU across threads.
* Kernel threads are needed in multiprocessor environments to address some of the scheduling complexities.
* From a thread you can call fork, but the child inherits only the calling thread, not the other threads. If state owned by other threads must be re-established in the child, register handlers with the pthread_atfork function.
* What exactly does pthread_join do? The calling thread blocks until the given thread terminates; the main thread joins its child threads to make sure it stays alive until they have finished.
* An application can create both PTHREAD_SCOPE_SYSTEM and PTHREAD_SCOPE_PROCESS threads explicitly using the pthread_attr_setscope API. When no contention scope is specified, a thread created by pthread_create has a contention scope of PTHREAD_SCOPE_PROCESS. This change in default thread type is due to a POSIX standards requirement. Refer to http://docs.hp.com/en/5187-0701/ch07s15.html
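A minimal create/join sketch:

#include <pthread.h>
#include <cstdio>

void* work(void* arg) {
    std::printf("thread %ld running\n", (long)arg);
    return 0;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, 0, work, (void*)1L);
    pthread_create(&t2, 0, work, (void*)2L);

    // pthread_join blocks the caller until the target thread terminates,
    // so main stays alive until both workers finish
    pthread_join(t1, 0);
    pthread_join(t2, 0);
    return 0;
}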
clone system call:
clone creates a new process like fork(2) does. Unlike fork(2), clone allows the child process to share parts of its execution context with its parent process, such as the memory space, the table of file descriptors, and the table of signal handlers. The main use of clone is to implement threads: multiple threads of control in a program that run concurrently in a shared memory space.
signals and threads:
Synchronous signals, those caused by the thread itself (like SIGPIPE and SIGBUS), are delivered to the thread that caused them. Asynchronous signals (signals sent to the process by an external source) are delivered to an arbitrary thread within the process. To block a signal at thread level, a thread sets the appropriate mask by calling pthread_sigmask. Note that signal handlers are per-process while signal masks are per-thread.
Basic differences between Linux and Traditional Unix:
1. The Linux kernel is monolithic: it is large, composed of several logically different components.
2. Traditional Unix kernels are compiled and linked statically. Most modern kernels can dynamically load and unload some portions of the kernel code (typically, devicedrivers), which are usually called modules. Linux's support for modules is very good,since it is able to automatically load and unload modules on demand.
3. Kernel threading. Some modern Unix kernels, like Solaris 2.x and SVR4.2/MP, are organized as a set of kernel threads. A kernel thread is an execution context that can be independently scheduled; it may be associated with a user program, or it may run only some kernel functions. Context switches between kernel threads are usually much less expensive than context switches between ordinary processes, since the former usually operate on a common address space. Linux uses kernel threads in a very limited way to execute a few kernel functions periodically; since Linux kernel threads cannot execute user programs, they do not represent the basic execution context abstraction.
4. Multithreaded application support. Most modern operating systems have some kind of support for multithreaded applications, that is, user programs that are well designed in terms of many relatively independent execution flows sharing a large portion of the application data structures. A multithreaded user application could be composed of many lightweight processes (LWP), or processes that can operate on a common address space, common physical memory pages, common opened files, and so on.
Linux defines its own version of lightweight processes, which is different from the types used on other systems such as SVR4 and Solaris. While all the commercial Unix variants of LWP are based on kernel threads, Linux regards lightweight processes as the basic execution context and handles them via the nonstandard clone() system call.
5. Linux is a nonpreemptive kernel. This means that Linux cannot arbitrarily interleave execution flows while they are in privileged mode. Several sections of kernel code assume they can run and modify data structures without fear of being interrupted and having another thread alter those data structures. Usually, fully preemptive kernels are associated with special real-time operating systems. Currently, among conventional, general-purpose Unix systems, only Solaris 2.x and Mach 3.0 are fully preemptive kernels. SVR4.2/MP introduces some fixed preemption points as a method to get limited preemption capability.
6. Multiprocessor support. Several Unix kernel variants take advantage of multiprocessor systems. Linux 2.2 offers an evolving kind of support for symmetric multiprocessing (SMP), which means not only that the system can use multiple processors but also that any processor can handle any task; there is no discrimination among them. However, Linux 2.2 does not make optimal use of SMP. Several kernel activities that could be executed concurrently, like filesystem handling and networking, must now be executed sequentially.
7. Filesystem. Linux's standard filesystem lacks some advanced features, such as journaling. However, more advanced filesystems for Linux are available, although not included in the Linux source code; among them, IBM AIX's Journaling File System (JFS) and Silicon Graphics Irix's XFS filesystem. Thanks to a powerful object-oriented Virtual File System technology (inspired by Solaris and SVR4), porting a foreign filesystem to Linux is a relatively easy task.
8. On Linux 2.6, on every pthread_create call, kernel creates a corresponding LWP.
Linux kernel Mechanisms:
1. Bottom Half Handling: There are often times in a kernel when you do not want to do work at this moment. A good example of this is during interrupt processing. When the interrupt is asserted, the processor stops what it is doing and the operating system delivers the interrupt to the appropriate device driver. Device drivers should not spend too much time handling interrupts as, during this time, nothing else in the system can run. There is often some work that could just as well be done later on. Linux's bottom half handlers were invented so that device drivers and other parts of the Linux kernel could queue work to be done later on.
2. Task Queues: Task queues are the kernel's way of deferring work until later. Linux has a generic mechanism for queuing work on queues and for processing them later. Task queues are often used in conjunction with bottom half handlers; the timer task queue is processed when the timer queue bottom half handler runs.
3. Timers.
4. Interrupts: In Linux, the system timer (or clock) is programmed to generate a hardware interrupt 100 times a second (as defined by the HZ system parameter). The interrupt is accomplished by sending a signal to a special chip on the motherboard called an interrupt controller, which then sends an interrupt to the CPU. When the CPU receives this signal, it knows that the clock tick has occurred, and it jumps to a special part of the kernel that handles the clock interrupt. Scheduling priorities are also recalculated within this same section of code.
* When a page fault happens in kernel mode, the kernel panics. Special routines have been built into the kernel to deal with the panic and help the system shut down as gracefully as possible.
* When a process makes a system call, the behavior is similar to that of interrupts and exceptions. As with exception handling, the general-purpose registers and the number of the system call are pushed onto the stack. Next, the system call handler is invoked, which calls the routine within the kernel that does the actual work.
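As a concrete illustration of deferring work out of an interrupt handler, here is a minimal sketch using a tasklet, the 2.6-era successor to the old bottom half handlers. The names my_deferred_work, my_tasklet, and my_isr are hypothetical, and the handler signature assumes a 2.6.19+ kernel.

#include <linux/interrupt.h>

/* deferred part: runs later in softirq context, with interrupts
   enabled, so the system stays responsive */
static void my_deferred_work(unsigned long data)
{
    /* the non-urgent work queued by the interrupt handler */
}

static DECLARE_TASKLET(my_tasklet, my_deferred_work, 0);

/* urgent part: keep the interrupt handler as short as possible */
static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* acknowledge the device here, then defer the rest */
    tasklet_schedule(&my_tasklet);
    return IRQ_HANDLED;
}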
Linux boot sequence:
1. The processor comes out of reset and branches to the ROM startup code.
2. The ROM startup code initializes the CPU and memory controller, performing only minimal initialization of on-chip devices, such as the console serial port (typically SMC1 on 8xx devices) to provide boot diagnostic messages. It also sets up the memory map for the kernel to use in a format that is consistent across platforms, and then jumps to the boot loader.
3. The boot loader (e.g., the Linux Loader, LILO) decompresses the kernel (e.g., vmlinuz.x.x.x) into RAM and executes it.
4. The kernel (e.g., vmlinuz.x.x.x) sets up the caches, detects (via the bus) and initializes each of the hardware devices via the init function in each driver, mounts the root filesystem, and execs the init process, which is the ultimate parent of all user-mode processes, typically /sbin/init.
5. Executing the first program linked against the shared C runtime library (often init) causes the shared runtime library to be loaded.
6. In a typical Linux system, init reads /etc/inittab to execute the appropriate run control scripts from /etc/rc.d, which run the start scripts that initialize networking and other system services. The rc scripts start all the network daemons.
7. Finally, init spawns getty processes, which are responsible for all user logins.
The main differences between the 2.6 and 2.4 kernels:
1. The 2.6 scheduler changed to an O(1) algorithm, which works better on multiprocessor systems.
2. The 2.6 kernel was made partially preemptible: kernel code can now be preempted during much of its execution.
3. Some changes were made to the VM in 2.6 to improve physical page removal when many processes map the same page.
4. LinuxThreads was removed in 2.6 and replaced by the Native POSIX Thread Library (NPTL). NPTL works in a 1:1 manner and has proved to be faster. Under the old LinuxThreads each thread had a separate PID; with NPTL all threads share the process PID and each has its own kernel thread ID.
5. 2.6 also raised the maximum number of threads supported on the system to approximately 2 billion, whereas 2.4 supported 8192 threads per processor.
6. The workqueue interface was introduced in the 2.6 kernel, replacing the task queue interface used to schedule deferred kernel work (a minimal sketch follows this list).
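A minimal sketch of the workqueue interface mentioned in item 6, assuming the post-2.6.20 API (earlier 2.6 kernels passed a void * argument instead of the work_struct pointer). The names my_work and my_work_fn are hypothetical.

#include <linux/workqueue.h>

/* runs later in process context (a kernel worker thread), so it
   may sleep, unlike a bottom half or tasklet */
static void my_work_fn(struct work_struct *work)
{
    /* deferred kernel work goes here */
}

static DECLARE_WORK(my_work, my_work_fn);

/* from driver code (e.g. an interrupt handler), hand the work to
   the shared kernel worker threads:
       schedule_work(&my_work);
*/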
Steps involved in calling the main function:
1. GCC builds your program with crtbegin.o/crtend.o/crt1.o, and the other default libraries are linked dynamically by default. The starting address of the executable is set to that of _start.
2. The kernel loads the executable and sets up the text/data/bss/stack segments; in particular, the kernel allocates page(s) for the arguments and environment variables and pushes all the necessary information onto the stack.
3. Control is passed to _start. _start gets all the information from the stack set up by the kernel, sets up the argument stack for __libc_start_main, and calls it.
4. __libc_start_main initializes the necessary runtime state, especially the C library (such as malloc) and the thread environment, and then calls our main (see the demo below).
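A small demo of step 4: code registered with GCC's constructor attribute is run by the startup machinery before main is entered, which makes the ordering visible. The function name before_main is a hypothetical example.

#include <stdio.h>

/* registered with the startup code; __libc_start_main arranges
   for it to run before main */
__attribute__((constructor))
static void before_main(void)
{
    printf("initialization before main\n");
}

int main(void)
{
    printf("main\n");
    return 0;
}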
Some interesting problems/topics:
1. If a file is opened read-only, fclose() will not flush the buffer; it is better to call fflush() explicitly (note that fflush() is a C library function, not a system call). If file buffers are not flushed properly, there can be unexpected file corruption problems, particularly when files are shared between parent and child processes.
2. How do you find the network interfaces associated with a socket? Use the ioctl() system call with the socket fd as an argument (the SIOCGIFCONF request); the call returns the list of interfaces (see the sketch after this list).
3. How do you effectively kill children when the parent gets abruptly terminated? This can be achieved by using the prctl() system call in the children's logic, asking to be notified with a signal whenever the parent terminates. Note that this is supported by Linux only (see the sketch after this list).
4. Unix does not allocate PIDs in a strictly sequential, immediately-reused manner. This avoids new processes receiving unexpected signals that were intended for an earlier process with the same PID.
5. read() is a system call and fread() is a C (glibc) library function. fread() uses internal buffering, which speeds up reading; fread() is more efficient when used on a block device like a disk. read() can be used for accessing a character device such as a network device (a small contrast of the two appears after this list).
6. Difference between a system call and a library function: A system call is an entry point to the kernel; system calls are executed in kernel mode. Hence, whenever a system call is made from a user program, a mode switch has to be done from user mode to kernel mode. System calls are provided by each OS and are not portable. Example: read().
A library function usually works in user mode and may or may not require any OS services; for example, strlen(), which calculates the length of a particular string.
7. A function is called thread-safe if multiple threads can call it without any destructive results. A function can be made thread-safe by locking a mutex on entering the function and releasing it on exit, by locking the mutex only around the critical section of the code, or by locking the mutex while accessing the critical data and unlocking it once done (see the sketch after this list).
8. The HP OpenCall SS7 APIs (except the TimerLib) are POSIX.1c thread-restricted, level B. This means that the interface is not thread-safe, but that it can be used by any single "dedicated" thread of a multi-threaded application.
9. To see the individual sizes of the various segments of a library or object file, use the size command, which reports the sizes of the text, data, and bss segments.
10. Kernel panics are of two types: hard panics and soft panics. Hard panics usually happen in interrupt handlers, while soft panics generally happen in driver software. For debugging a hard panic, KDB can be used to get extra information for finding the root cause.
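A minimal sketch for item 2 above, using the SIOCGIFCONF ioctl request to enumerate the network interfaces on the host; the fixed-size ifreq array is a simplifying assumption (real code should retry with a larger buffer if the list is truncated).

#include <net/if.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct ifreq ifr[16];
    struct ifconf ifc;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return 1;

    ifc.ifc_len = sizeof(ifr);
    ifc.ifc_req = ifr;
    if (ioctl(fd, SIOCGIFCONF, &ifc) == 0) {
        int n = ifc.ifc_len / sizeof(struct ifreq);
        for (int i = 0; i < n; i++)
            printf("%s\n", ifr[i].ifr_name); /* e.g. lo, eth0 */
    }
    close(fd);
    return 0;
}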
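A minimal sketch for item 3 above: the child asks the kernel, via the Linux-only prctl() call with PR_SET_PDEATHSIG, to send it SIGKILL when its parent dies, so orphaned children do not linger. The structure of main here is a hypothetical example.

#define _GNU_SOURCE
#include <signal.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                       /* child */
        prctl(PR_SET_PDEATHSIG, SIGKILL); /* die when parent dies */
        /* guard against the race where the parent already exited */
        if (getppid() == 1)
            _exit(0);
        for (;;)
            pause();                      /* child's real work here */
    }
    /* parent continues; when it terminates (even abruptly),
       the kernel delivers SIGKILL to the child */
    sleep(1);
    return 0;
}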
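A small contrast for items 5 and 6 above: read() traps into the kernel on every invocation, while fread() goes through stdio's user-space buffer, so many small fread() calls may be served without a kernel entry. The file path is just an illustrative example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    /* system call: one mode switch into the kernel per call */
    int fd = open("/etc/hostname", O_RDONLY);
    ssize_t n = (fd >= 0) ? read(fd, buf, sizeof(buf)) : -1;
    if (fd >= 0)
        close(fd);

    /* library function: glibc buffers the data internally */
    FILE *fp = fopen("/etc/hostname", "r");
    size_t m = fp ? fread(buf, 1, sizeof(buf), fp) : 0;
    if (fp)
        fclose(fp);

    printf("read: %zd bytes, fread: %zu bytes\n", n, m);
    return 0;
}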
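A minimal sketch for item 7 above: a function made thread-safe by protecting its critical section with a mutex. The shared counter is a hypothetical example of critical data.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;              /* shared (critical) data */

void increment(void)
{
    pthread_mutex_lock(&lock);    /* enter critical section */
    counter++;                    /* safe: one thread at a time */
    pthread_mutex_unlock(&lock);  /* leave critical section */
}

Any number of threads can now call increment() concurrently without destructive results; note that the mutex is statically initialized, so it is valid before first use.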
Tuesday, November 11, 2008
What's this Blog for
We are planning to put here a consolidated list of FAQs that we have come across in the various technical areas we've worked on. Those include:
C, C++, CDMA, Linux, SS7, TCP-IP, SCTP, Mobile-IP. As more information is added, we expect this to become a reference for all of the above areas.
