What is a protocol analyzer?

A protocol analyzer is any device that captures and interprets the network traffic between two or more connected computer systems. The traffic can then be decoded so that it is possible to see what processes are occurring. By examining the flow of traffic, protocol analyzers can be used to find out where problems (such as bottlenecks or the failure of a network device) are on a LAN. 

Advanced protocol analyzers can also provide statistics on the traffic that can help to identify trends that may in future lead to further problems with the network.

How do you troubleshoot timing-based requirements?

There are two ways to troubleshoot timing-based requirements, as shown below.

Timing-based requirements can be tested using a logic analyzer: you set the trigger, start the timing, and calculate the difference between the start and end times.

Alternatively, you can instrument your code (some compiler suites provide this facility) and then use code-test or code-tap tools to collect the tags; analyzing the collected tags verifies that the timing requirements are met.
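The instrumentation approach can be sketched as follows: timestamped tags are logged at points of interest and analyzed afterwards. The timer variable and tag IDs here are hypothetical stand-ins for whatever free-running counter and collection tooling the target provides.

```c
#include <stdint.h>

/* Hypothetical free-running timer; on real hardware this would be a
 * memory-mapped counter register (the name is an assumption). */
uint32_t fake_timer;
static uint32_t read_timer(void) { return fake_timer; }

/* Tags logged at instrumentation points; a code tap or trace tool would
 * normally collect these for offline analysis. */
#define MAX_TAGS 64
struct tag_rec { int id; uint32_t t; };
struct tag_rec tag_log[MAX_TAGS];
int n_tags;

void tag(int id)                       /* drop a timestamped tag */
{
    if (n_tags < MAX_TAGS) {
        tag_log[n_tags].id = id;
        tag_log[n_tags].t  = read_timer();
        n_tags++;
    }
}

/* Elapsed ticks between the last occurrences of two tag ids. */
uint32_t elapsed(int start_id, int end_id)
{
    uint32_t t0 = 0, t1 = 0;
    for (int i = 0; i < n_tags; i++) {
        if (tag_log[i].id == start_id) t0 = tag_log[i].t;
        if (tag_log[i].id == end_id)   t1 = tag_log[i].t;
    }
    return t1 - t0;
}
```

In practice you would place `tag()` calls at the start and end of the code path whose deadline you are verifying, then compare `elapsed()` against the requirement.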

How are embedded systems designed to make troubleshooting easier?

With the help of a generic monitor built into the code, which can be invoked only in a certain mode (referred to as slots). This piece of code sits inside the box and lets the tester know what is happening within the box. Normally this piece of code is 100% dormant.

What is JTAG?

Joint Test Action Group:

JTAG is a standard specifying how to control and monitor the pins of compliant devices on a printed circuit board. 

Each device has four JTAG control lines plus an optional reset (TRST): a common clock (TCK), a mode select (TMS), and data lines that daisy-chain one device's TDO pin to the TDI pin of the next device. 

The protocol contains commands to read and set the values of the pins (and, optionally, internal registers) of devices. This is called "boundary scanning". The protocol makes board testing easier, as signals that are not visible at the board connector may be read and set. 

The protocol also allows the testing of equipment, connected to the JTAG port, to identify components on the board (by reading the device identification register) and to control and monitor the device's outputs. 

JTAG is not used during normal operation of a board.

What is a bootloader?

A bootloader is software, often written in a high-level language, that is usually flashed into EEPROM. On power-up it is invoked first and reads the next stage in from a fixed location in flash called the "boot block". When this program gains control, it is powerful enough to load the actual application and hand control over to it.
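A minimal sketch of the hand-off step, assuming an ARM-style vector table whose first word is the initial stack pointer and whose second word is the application entry point. The layout is an assumption for illustration; a real loader would also verify a checksum or signature before jumping.

```c
#include <stdint.h>

typedef void (*entry_fn)(void);

/* Return the application entry point found in a vector table, or NULL if
 * the flash looks erased (all ones). Taking the table as a parameter keeps
 * the sketch testable; on real hardware it would be a fixed flash address. */
entry_fn find_app_entry(const uint32_t *vectors)
{
    if (vectors[0] == 0xFFFFFFFFu)     /* erased flash: no valid image */
        return 0;
    return (entry_fn)(uintptr_t)vectors[1];
}

/* The bootloader's last act is simply:
 *
 *     entry_fn app = find_app_entry((const uint32_t *)APP_BASE);
 *     if (app) app();                -- control never returns
 */
```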

How do you analyze or troubleshoot a communication protocol?

Normally, communication protocols are tested thoroughly using protocol analyzers.

What is the difference between simulator & emulator?

An emulator is a system that performs in exactly the same way as another, though perhaps not at the same speed. A typical example would be emulation of one computer by (a program running on) another. You might use an emulation as a replacement for a system, whereas you would use a simulation if you just wanted to analyze it and make predictions about it.

A simulation attempts to predict aspects of the behavior of some system by creating an approximate (mathematical) model of it. This can be done by physical modeling, by writing a special-purpose computer program, or by using a more general simulation package, probably still aimed at a particular kind of simulation (e.g. structural engineering, fluid flow). Typical examples are aircraft flight simulators or electronic circuit simulators. A great many simulation languages exist.

What are In-circuit/Emulators/Debuggers?

A debugger is a software program used to break program execution at various locations in an application program, after which the user is presented with a debugger command prompt that allows them to set breakpoints, display or change memory, single-step, and so forth.

ICE is an electronic tool that allows for debugging beyond the capabilities of a standard software debugger. An ICE is essentially a hardware box with a 2 to 3 foot cable attached to it. At the end of the cable is a multi-pin connector connected to a CPU processor chip, which is identical to the processor on the target board. The target processor is removed from your target board, and the connector plugged in. The ICE allows for trapping the following types of activities: read/write to an address range, read/write a specific value from an address range, setting a breakpoint in EPROM, mapping emulator memory to target board memory, and other similar features. It also turns the host (a PC for example) into a debug station with supplied software. This allows for debugging of the target board even though the target does not have a keyboard, screen, or disk.

What is logic analyzer?

Logic would suggest that there are two types of logic analyzers. The first type is a logic analyzer, which enables timing analysis, whilst the second type enables state analysis. In reality, however, instruments often use a combination of these two functions. 

The state analyser allows you to monitor whether the operating sequence in a digital circuit matches the expected sequence. In this case the clock signal is taken from the circuit under test and drives the data sampling. The logic analyser must therefore be able to function at this speed. The analyser functions synchronously in this mode. 

The timing analyser is often used to determine the cause of an error detected by the state analyser. It enables you to find out, for example, if a specific signal has occurred too late or too early or if a glitch has caused false triggering. The analyser functions asynchronously in this mode. 

The type of application will determine the minimum number of channels that the logic analyser should have. When choosing a logic analyser the following key criteria are important: speed, width and depth. In other words, a logic analyser must be able to capture events as quickly as possible and must offer a sufficient number of channels and memory capacity. The predefined capability of an analyser is also linked to its triggering capacity. 

The disassembly software and probe adapters on a logic analyser play a crucial role in the efficiency when analyzing a complex processor with hundreds of connections. Due to the huge range of processor types currently on the market, each logic analyser must be compatible with literally hundreds of probe adapters. The disassembly software should also give a clear view of which instructions are actually executed and which conditional branches are actually taken.

What is troubleshooting, and why is troubleshooting difficult in embedded systems?

What is troubleshooting?

The process of finding and resolving bugs in software or hardware.

Why is troubleshooting difficult in embedded systems?

Most embedded systems involve both hardware and software, so when the system malfunctions, appreciable knowledge is required just to isolate the problem as hardware or software. We also often have to resort to crude debugging methods, such as inserting trace output and analyzing the behavior of the system. This makes troubleshooting a little tricky.

Swapping - Demand Paging

A kind of virtual memory where a page of memory will be paged in if an attempt is made to access it and it is not already present in main memory. This normally involves a memory management unit which looks up the virtual address in a page map to see if it is paged in. If it is not then the operating system will page it in, update the page map and restart the failed access. This implies that the processor must be able to recover from and restart a failed memory access or must be suspended while some other mechanism is used to perform the paging.

Virtual Memory

A technique in which a large memory space is simulated with a small amount of RAM, a disk, and special paging hardware. For example, a virtual 1-megabyte address space can be simulated with 64K of RAM, a 2-megabyte disk, and paging hardware. The paging hardware translates all memory accesses through a page table. If the page that contains the memory access is not loaded into memory a fault is generated which results in the least used page being written to disk and the desired page being read from disk and loaded into memory over the least used page.

MMU support

A hardware device or circuit that supports virtual memory and paging by translating virtual addresses into physical addresses.

The virtual address space (the range of addresses used by the processor) is divided into pages, whose size is 2^N, usually a few kilobytes. The bottom N bits of the address (the offset within a page) are left unchanged. The upper address bits are the (virtual) page number. The MMU contains a page table, which is indexed (possibly associatively) by the page number. Each page table entry (PTE) gives the physical page number corresponding to the virtual one. This is combined with the page offset to give the complete physical address.

A PTE may also include information about whether the page has been written to, when it was last used (for a least-recently-used replacement algorithm), what kind of processes (user mode, supervisor mode) may read and write it, and whether it should be cached.
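The translation described above can be sketched with 4 KB pages and a toy page table (the table contents here are arbitrary illustration values): the bottom N bits pass through unchanged and the upper bits index the table.

```c
#include <stdint.h>

#define PAGE_SHIFT  12u                      /* 4 KB pages: 2^12 bytes */
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define OFFSET_MASK (PAGE_SIZE - 1u)

/* Toy page table: index = virtual page number, value = physical page number. */
static const uint32_t page_table[] = { 7, 3, 42, 5 };

/* Translate a virtual address: the page offset is left unchanged, while the
 * virtual page number is replaced by the physical page number. */
uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* upper bits: page number */
    uint32_t offset = vaddr & OFFSET_MASK;   /* bottom N bits: offset   */
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}
```

A real MMU does the same lookup in hardware, possibly through an associative TLB, and raises a fault when the page is not present.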

Thread synchronization

Sometimes one task must wait for another task to finish before it can proceed. Consider a data acquisition application with two tasks: taskA, which acquires data, and taskB, which displays it. taskB cannot display new data until taskA fills in a global data structure with all the new data values. taskA and taskB are typically synchronized as follows: taskB waits for a message from taskA, and since no message is available, taskB suspends. When taskA runs and updates the data structure, it sends a message to taskB, which schedules taskB to run, at which time it displays the new data, then again waits for a message, and the scenario is repeated.
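The scenario above can be sketched with a single-slot mailbox: taskB blocks in `mbox_wait()` until taskA calls `mbox_post()` after updating the data. POSIX mutex/condition-variable primitives stand in here for whatever message service the RTOS actually provides.

```c
#include <pthread.h>

/* Single-slot mailbox: one pending message at a time. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    int             has_msg;
    int             msg;
} mbox_t;

void mbox_init(mbox_t *m)
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->ready, NULL);
    m->has_msg = 0;
}

void mbox_post(mbox_t *m, int msg)   /* taskA: called after updating data */
{
    pthread_mutex_lock(&m->lock);
    m->msg = msg;
    m->has_msg = 1;
    pthread_cond_signal(&m->ready);  /* wakes taskB if it is waiting */
    pthread_mutex_unlock(&m->lock);
}

int mbox_wait(mbox_t *m)             /* taskB: suspends until a message */
{
    pthread_mutex_lock(&m->lock);
    while (!m->has_msg)
        pthread_cond_wait(&m->ready, &m->lock);
    m->has_msg = 0;
    int msg = m->msg;
    pthread_mutex_unlock(&m->lock);
    return msg;
}
```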

What is a Process?

A process is a single executable module that runs concurrently with other executable modules. For example, in a multi-tasking environment that supports processes, like OS/2, a word processor, an Internet browser, and a database, are separate processes and can run concurrently. 

Processes are separate executable, loadable modules as opposed to threads, which are not loadable. Multiple threads of execution may occur within a process. For example, from within a data base application, a user may start both a spell check and a time consuming sort. In order to continue to accept further input from the user, the active thread could start two other concurrent threads of execution, one for the spell check and one for the sort. Contrast this with multiple .EXE files (processes) like a word processor, a database, and Internet browser, multi-tasking under OS/2 for example.

Real-Time Kernels

A real-time kernel is a set of software calls that provide for the creation of independent tasks, timer management, inter-task communication, memory management, and resource management.

Basic concepts of RTOS

Atomic: An operation is said to be atomic if it can be completed without interruption.

Context Switch: The process of changing execution from one process to the next. The time required to perform a context switch will have a significant impact on performance. So, knowing how much information is stored as part of a task's context may be important to you. The minimum amount of information required to perform a context switch is largely processor dependent; however, the RTOS vendor may have chosen to include some extra information that will slow down the context switch.

Cooperative multitasking: In a cooperative multitasking environment, generally the current running task will be allowed to run until it completes or until it chooses to yield to another task. Yielding to another task may be explicit through a call to a yield function, or it may be implied through a call to a function that may cause it to wait for an event or resource.

Counting Semaphore: A mechanism for synchronizing processes and their access to resources.

Critical Section: A section of code in a program that must be executed while no other piece of code is running. Typically, this means that all interrupts must be disabled. So, critical sections should be kept as small as possible.

Deadlock: Deadlock is a condition where multiple processes may be waiting for a resource to be made available which will never become available. Generally, the programmer must design the system to prevent deadlock from occurring. The typical approach to preventing deadlock in an embedded system is for tasks to always request needed resources in a predefined order.

Deferred interrupt processing: An approach used to minimize interrupt latency by reducing the amount of time spent inside interrupt service routines. Typically, the interrupt handler sends a message to the operating system indicating that the interrupt has occurred, allowing the operating system to invoke a task to handle the external event. The improved interrupt latency due to this approach should result in more deterministic behavior of the system.

Event: An event is generally a mechanism provided by the RTOS to permit communications between processes.

Interprocess communications: Processes or tasks generally need a synchronized method of communications. The relationship between the tasks could be various, including one-to-one, one-to-many, or many-to-many. An RTOS needs to provide mechanisms to permit all of these kinds of communications. Common mechanisms include mailboxes, queues, and events.

Memory management: Real-time operating systems frequently provide specialized memory management routines to help solve common embedded system problems. An RTOS may provide the ability to allocate memory in fixed-size blocks or from distinct memory pools, each of which may have special purposes. For example, distinct server processes may have unique memory pools for allocation. Typically the goal is to help avoid memory fragmentation that could lead to a system failure.

Mailbox: A mechanism provided by an RTOS for interprocess communications. A process may have a mailbox that other processes can send messages to.

Multitasking/Time slicing: A CPU typically can only have a single state or context; essentially, the CPU can only do one thing at a time. Multitasking is the process of frequently changing context to make it appear as if the CPU were performing multiple tasks simultaneously. This task switching may occur on either a cooperative or preemptive basis.

Mutual Exclusion: Refers to ensuring that a shared resource cannot be accessed simultaneously, to prevent unwanted interaction. One example of the need for mutual exclusion would be keyboard I/O. Generally, an operating system must ensure that only one process can have access to the keyboard at a time.

Preemptive multitasking: In a preemptive multitasking system, a supervisor or scheduling task can change which task is currently executing without the permission of the current task. In a preemptive system, the scheduler will generally switch tasks at a periodic interval known as the time slice.

Priority: In a real-time operating system, tasks can be assigned a priority. Usually, the scheduler will allow higher priority tasks to run before lower priority tasks. However, there may be a lot of variation between real-time operating systems in how they treat tasks of varying priorities. Some systems will not allow lower priority tasks to execute at all until the higher priority tasks have completed. Other systems may assign larger time slices to higher priority tasks, yet still allow low priority tasks to execute with a smaller time slice.

Priority Inheritance: An approach to dealing with the problem of priority inversion, which allows a low-priority task to become high priority if it owns a resource that a high-priority task is waiting for. The goal of priority inheritance is to minimize the amount of time high-priority tasks are required to wait for needed resources.

Priority Inversion: In a system where a high- and a low-priority task may both need access to the same resource, it is possible that the high-priority task may have to wait for the low-priority task to complete. This is called priority inversion. In such a situation, the low-priority task may not get its time slice until other high-priority tasks complete.

Process/Thread/Task: A process, thread, or task is a unit of execution. The use of these terms may be somewhat indistinguishable at times, especially from one RTOS to the next. However, some environments actually define more than one of these. For example, under Windows NT, a process is usually an executable program, and that process may have multiple smaller threads associated with it. Sometimes, as is the case with Windows NT, the terminology in part describes how much context information is associated with each.

Queue: A mechanism provided by an RTOS for interprocess communications. Usually, one process acts as a consumer, removing messages from the queue, and one or many processes may act as producers, adding messages to the queue.

Reentrant/Non-Reentrant: Reentrant code is code that does not rely on being executed without interruption before completion. Reentrant code can be used by multiple simultaneous tasks. Reentrant code generally does not access global data; variables within a reentrant function are allocated on the stack, so each instance of the function has its own private data. Non-reentrant code, to be used safely by multiple processes, should have access controlled via some synchronization method such as a semaphore.
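A minimal illustration of the difference: the first version keeps its working state in shared static storage and is therefore non-reentrant, while the second keeps everything on the caller's stack.

```c
/* Non-reentrant: the running total lives in shared static storage, so a
 * task preempted mid-loop can have its state clobbered by another caller. */
int last_sum;
int sum_to_n_bad(int n)
{
    last_sum = 0;
    for (int i = 1; i <= n; i++)
        last_sum += i;
    return last_sum;
}

/* Reentrant: all state is a local (stack) variable, so any number of tasks,
 * or an ISR interrupting a task, can call this simultaneously and safely. */
int sum_to_n(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    return sum;
}
```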

Resource: A resource is usually a mechanism provided by the RTOS to synchronize access to an object within the system that requires mutually exclusive access. The object within the system, also frequently called a resource, may be a physical device such as a keyboard or an EEPROM, or just a system variable.

Scheduler: The scheduler is the portion of the RTOS responsible for determining which task will run next. Real-time operating systems use a wide variety of algorithms for making this selection. Sometimes it is as simple as just allowing the next available process with the highest priority to run. Other systems may have a hybrid prioritized round-robin system where tasks are only allowed to run for a specific time slice before the next task is run.

Semaphore: A mechanism for ensuring mutual exclusion or synchronizing processing related to a resource.

Time slice: The largest amount of time that a task will be allowed to run in a preemptive multitasking system before the scheduler may invoke a new task.

Common scheduling policies are as follows:

  • First-Come-First-Served (FCFS)
  • Shortest-Job-First (SJF)
  • Priority Scheduling (PS)
  • Round-Robin Scheduling (RR)
  • Multi-level feedback queues (MLF)
  • Real-time scheduling
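As an illustration of one of these policies, a minimal round-robin pick function could look like the sketch below. The fixed task count and ready flags are simplifying assumptions; a real scheduler would walk a ready queue.

```c
#define NTASKS 3

int ready[NTASKS] = { 1, 1, 1 };    /* 1 = task is ready to run */
int current = -1;                   /* index of the last task run */

/* Round-robin pick: starting just after the current task, return the next
 * ready task in circular order, or -1 if nothing is ready (idle). */
int rr_next(void)
{
    for (int i = 1; i <= NTASKS; i++) {
        int cand = (current + i) % NTASKS;
        if (ready[cand]) {
            current = cand;
            return cand;
        }
    }
    return -1;
}
```

Each call hands out one time slice; tasks that block simply clear their ready flag and are skipped until they become ready again.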

What is an end-to-end checksum, and why and where is it used?

Embedded C interview Question

Answer

An end-to-end checksum is a checksum calculated over a memory region from start to end, to make sure the data is not corrupted. Normally weighted modulo arithmetic is used to calculate the checksum. It is used in memory chips to verify, at fabrication time, that all the memory bits are good, and in software images to ensure that the code has not been corrupted.
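A simple additive end-to-end checksum over a memory image can be sketched as follows. Real systems often use weighted sums or CRCs instead; plain byte addition modulo 2^16 is used here for clarity.

```c
#include <stdint.h>
#include <stddef.h>

/* Additive checksum over the whole region [start, start+len): any single
 * corrupted byte changes the sum and is therefore detected. */
uint16_t checksum(const uint8_t *start, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += start[i];
    return (uint16_t)(sum & 0xFFFFu);   /* modulo 2^16 */
}
```

The stored reference value is computed once over the image; at boot (or periodically) the same region is summed again and compared against it.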

What are the ways to achieve shared memory?

Answer

Computers using shared memory usually have some kind of local cache on each processor to reduce the number of accesses to shared memory. This requires a cache consistency protocol to ensure that one processor's cached copy of a shared memory location is invalidated when another processor writes to that location. 

The alternative to shared memory is message passing where all memory is private to some particular processor and processors communicate by sending messages down special links. This is usually slower than shared memory but it avoids the problems of contention for memory and can be implemented more cheaply.

What is shared RAM memory?

Answer

Memory in a parallel computer, usually RAM, which can be accessed by more than one processor, usually via a shared bus or network.

It usually takes longer for a processor to access shared memory than to access its own private memory because of contention for the processor-to-memory connections and because of other overheads associated with ensuring synchronized access.

What is dual-port memory?

Answer

Dual-port RAM is RAM that is simultaneously accessible, for both read and write, through two independent controller interfaces.

What is swap and fail over?

Answer

Failover is automatically switching to a redundant or standby server, system, or network upon the failure or abnormal termination of the currently active one (a "hot standby" or "warm standby"). Failover happens without human intervention. This feature is usually built into expensive systems, which must be available continuously.

What is the danger in using shared memory?

Answer

Data corruption is a critical issue when using shared memory. Have a look at the following example:

Mainline code:
1. Read variable X into register
2. Decrement register contents
3. Store result back at variable X

ISR code:
A. Read variable X into register
B. Increment register contents
C. Store result back at variable X

Let's say that the shared variable X is tracking the number of bytes in a buffer. The ISR puts a byte into the buffer and increments X. The mainline code reads a byte from the buffer and decrements X. Say that X starts out with a value of 4. The ISR puts a byte into the buffer and increments it to 5. The mainline code then reads a byte and decrements the count back to 4.

But if an interrupt occurs between lines 1 and 3 in the mainline code, the value of X will be corrupted. First, the mainline code reads X, which is 4, into a register. Then the ISR occurs, also reads 4, and increments X to 5. After the ISR completes, the mainline code finishes, storing the improper value 3 in X.
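The usual fix is to make the mainline's read-modify-write atomic by briefly disabling interrupts around it. The interrupt-control functions below are stubs standing in for platform-specific intrinsics (for example, `__disable_irq()`/`__enable_irq()` on many embedded compilers).

```c
/* Stubs standing in for real interrupt control (platform-specific). */
int irq_enabled = 1;
void disable_interrupts(void) { irq_enabled = 0; }
void enable_interrupts(void)  { irq_enabled = 1; }

/* Shared counter tracking bytes in the buffer (the variable X above). */
volatile int byte_count;

void mainline_consume(void)
{
    disable_interrupts();   /* critical section: ISR cannot slip between */
    byte_count--;           /* the read, decrement, and write back       */
    enable_interrupts();
}

void isr_produce(void)
{
    byte_count++;           /* ISRs typically run with interrupts masked */
}
```

Keeping the critical section this small bounds the added interrupt latency, per the critical-section advice in the RTOS glossary above.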

Working and Usage of JTAG? | Boundary scans | Test Pins | Test Access Port

One disadvantage of shrinking technology is that the testing of small devices gets exponentially more complex. When circuit boards were large, we tested them with techniques such as bed-of-nails, which employed small spring-loaded test probes to make connections with solder pads on the bottom of the board. Such test fixtures were custom made, expensive, and inefficient, and much of the testing could not be performed until the design was complete.

The problems with bed-of-nails testing were exacerbated as board dimensions got smaller and surface-mount packaging technology improved. If devices were mounted on both sides of a circuit board, no attachment points were left for the test equipment.

Boundary scans

To find a solution to these problems, a group of European electronics companies formed a consortium in 1985 called the Joint Test Action Group (JTAG). The consortium devised a specification for performing boundary-scan hardware testing at the IC level. In 1990, that specification resulted in IEEE 1149.1, a standard that established the details of access to any chip with a so-called JTAG port.

The specification JTAG devised uses boundary-scan technology, which enables engineers to perform extensive debugging and diagnostics on a system through a small number of dedicated test pins. Signals are scanned into and out of the I/O cells of a device serially to control its inputs and test the outputs under various conditions. Today, boundary-scan technology is probably the most popular and widely used design-for-test technique in the industry.

Test pins

Devices communicate to the world via a set of I/O pins. By themselves, these pins provide limited visibility into the workings of the device. However, devices that support boundary scan contain a shift-register cell for each signal pin of the device. These registers are connected in a dedicated path around the device's boundary (hence the name), as shown in Figure 1. The path creates a virtual access capability that circumvents the normal inputs and provides direct control of the device and detailed visibility at its outputs.

During testing, I/O signals enter and leave the chip through the boundary-scan cells. The boundary-scan cells can be configured to support external testing for interconnection between chips or internal testing for logic within the chip.

To provide the boundary scan capability, IC vendors add additional logic to each of their devices, including scan registers for each of the signal pins, a dedicated scan path connecting these registers, four or five additional pins, and control circuitry. The overhead for this additional logic is minimal and generally well worth the price to have efficient testing at the board level.

Test Access Port

The boundary-scan control signals, collectively referred to as the Test Access Port (TAP), define a serial protocol for scan-based devices. There are five pins: 

  • TCK (clock): synchronizes the internal state machine operations.
  • TMS (mode select): sampled at the rising edge of TCK to determine the next state.
  • TDI (data in): sampled at the rising edge of TCK and shifted into the device's test or programming logic when the internal state machine is in the correct state.
  • TDO (data out): represents the data shifted out of the device's test or programming logic; valid on the falling edge of TCK when the internal state machine is in the correct state.
  • TRST (reset, optional): when driven low, resets the internal state machine.

The TCK, TMS, and TRST input pins drive a 16-state TAP controller state machine. The TAP controller manages the exchange of data and instructions. The controller advances to the next state based on the value of the TMS signal at each rising edge of TCK.
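The 16-state TAP controller can be modeled as a simple lookup table, as in this sketch: the next state depends only on the current state and the TMS value at the rising edge of TCK. A useful property of the standard is that five TCK cycles with TMS held high return the controller to Test-Logic-Reset from any state.

```c
/* IEEE 1149.1 TAP controller states. */
typedef enum {
    TLR, RTI,
    SEL_DR, CAP_DR, SHIFT_DR, EXIT1_DR, PAUSE_DR, EXIT2_DR, UPD_DR,
    SEL_IR, CAP_IR, SHIFT_IR, EXIT1_IR, PAUSE_IR, EXIT2_IR, UPD_IR
} tap_state;

/* Transition table: tap_next[state][TMS]. */
static const tap_state tap_next[16][2] = {
    [TLR]      = { RTI,      TLR      },
    [RTI]      = { RTI,      SEL_DR   },
    [SEL_DR]   = { CAP_DR,   SEL_IR   },
    [CAP_DR]   = { SHIFT_DR, EXIT1_DR },
    [SHIFT_DR] = { SHIFT_DR, EXIT1_DR },
    [EXIT1_DR] = { PAUSE_DR, UPD_DR   },
    [PAUSE_DR] = { PAUSE_DR, EXIT2_DR },
    [EXIT2_DR] = { SHIFT_DR, UPD_DR   },
    [UPD_DR]   = { RTI,      SEL_DR   },
    [SEL_IR]   = { CAP_IR,   TLR      },
    [CAP_IR]   = { SHIFT_IR, EXIT1_IR },
    [SHIFT_IR] = { SHIFT_IR, EXIT1_IR },
    [EXIT1_IR] = { PAUSE_IR, UPD_IR   },
    [PAUSE_IR] = { PAUSE_IR, EXIT2_IR },
    [EXIT2_IR] = { SHIFT_IR, UPD_IR   },
    [UPD_IR]   = { RTI,      SEL_DR   }
};

/* Advance the controller one TCK rising edge. */
tap_state tap_step(tap_state s, int tms)
{
    return tap_next[s][tms ? 1 : 0];
}
```

JTAG adapter firmware uses exactly this model to plan the TMS sequence needed to reach Shift-DR or Shift-IR before clocking data through TDI/TDO.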

With the proper wiring, you can test multiple ICs or boards simultaneously. An external file, known as a Boundary-Scan Description Language (BSDL) file, defines the capabilities of any single device's boundary-scan logic.

Test process

The standard test process for verifying a device or circuit board using boundary-scan technology is as follows:

  • The tester applies test or diagnostic data on the input pins of the device.
  • The boundary-scan cells capture the data in the boundary-scan registers monitoring the input pins.
  • Data is scanned out of the device via the TDO pin, for verification.
  • Data can then be scanned into the device via the TDI pin.
  • The tester can then verify data on the output pins of the device.

Simple tests can find manufacturing defects such as unconnected pins, a missing device, an incorrect or rotated device on a circuit board, and even a failed or dead device.

The primary advantage of boundary-scan technology is the ability to observe data at the device inputs and control the data at the outputs independently of the application logic.

Another benefit is the ability to reduce the number of overall test points required for device access. With boundary scan there are no physical test points. This can help lower board fabrication costs and increase package density.

Boundary scan provides a better set of diagnostics than other test techniques. Conventional techniques apply test vectors (patterns) to the inputs of the device and monitor the outputs; if there is a problem with the test, it can be time consuming to isolate, and additional tests have to be run to pinpoint the failure. With boundary scan, the boundary-scan cells observe device responses by monitoring the input pins of the device. This enables easy isolation of various classes of test failures, such as a pin not making contact with the circuit board.

Boundary scan can be used for functional testing and debugging at various levels, from internal IC tests to board-level tests. The technology is even useful for hardware/software integration testing.

What are multicast, broadcast and point-to-point?

Answer

Multicast is an Ethernet addressing scheme used to send packets to devices of a certain type, or for broadcasting to all nodes. The least significant bit of the most significant byte of a multicast address is one.

Broadcast is a transmission to multiple, unspecified recipients. On Ethernet, a broadcast packet is a special type of multicast packet that all nodes on the network are always willing to receive.
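These addressing rules are easy to check in code; a sketch:

```c
#include <stdint.h>
#include <stdbool.h>

/* The multicast flag is the least significant bit of the first (most
 * significant) byte of the Ethernet address, as described above. */
bool is_multicast(const uint8_t mac[6])
{
    return (mac[0] & 0x01u) != 0;
}

/* The broadcast address is all ones; every node receives it, so it is a
 * special case of multicast. */
bool is_broadcast(const uint8_t mac[6])
{
    for (int i = 0; i < 6; i++)
        if (mac[i] != 0xFFu)
            return false;
    return true;
}
```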

Point-to-Point Protocol (PPP) is the protocol defined in RFC 1661, the Internet standard for transmitting network layer datagrams (e.g. IP packets) over serial point-to-point links. 

PPP has a number of advantages: it is designed to operate both over asynchronous connections and bit-oriented synchronous systems, it can configure connections to a remote network dynamically, and it can test that the link is usable. PPP can be configured to encapsulate different network layer protocols (such as IP, IPX, or AppleTalk) by using the appropriate Network Control Protocol (NCP).

What is graceful degradation?

Answer

When any critical error occurs, the system should not come down abruptly; instead it should shut down gracefully, not driving any output illegally, and come down in a planned way.

What is load balancing?

Answer

Techniques that aim to spread tasks among the processors in a parallel system, to avoid some processors being idle while others have tasks queuing for execution. Load balancing may be performed either by heavily loaded processors (with many tasks in their queues) sending tasks to other processors, by idle processors requesting work from others, by some centralized task-distribution mechanism, or by some combination of these. Some systems allow tasks to be moved after they have started executing ("task migration"); others do not. It is important that the overhead of executing the load-balancing algorithm does not contribute significantly to the overall processing or communications load.

What is IPC?

Answer

Inter-process communication (IPC) is an exchange of data between one process and another, either within the same computer or over a network. It implies a protocol that guarantees a response to a request. Examples are Unix sockets, RISC OS's messages, OS/2's Named Pipes, Microsoft Windows' DDE, Novell's SPX, and Macintosh's IAC. Although programs perform IPC automatically, an analogous function can be performed interactively when users cut and paste data from one process to another using a clipboard.

What does distributed computing mean?

Embedded C interview question

What does distributed computing mean?

Answer

Distributed Computing Environment (DCE) is an architecture consisting of standard programming interfaces, conventions and server functionalities (e.g. naming, distributed file system, remote procedure call) for distributing applications transparently across networks of heterogeneous computers. DCE is promoted and controlled by the Open Software Foundation.

What is Redundancy ?

Embedded C Interview Question

What is Redundancy?

Answer

The provision of multiple interchangeable components to perform a single function in order to cope with failures and errors. Redundancy normally applies primarily to hardware. For example, one might install two or even three computers to do the same job. There are several ways these could be used. They could all be active all the time thus giving extra performance through parallel processing as well as extra availability; one could be active and the others simply monitoring its activity so as to be ready to take over if it failed ("warm standby"); the "spares" could be kept turned off and only switched on when needed ("cold standby"). Another common form of hardware redundancy is disk mirroring.

Redundancy can also be used to detect and recover from errors, either in hardware or software. A well known example of this is the cyclic redundancy check which adds redundant data to a block in order to detect corruption during storage or transmission. If the cost of errors is high enough, e.g. in a safety-critical system, redundancy may be used in both hardware AND software with three separate computers programmed by three separate teams and some system to check that they all produce the same answer, or some kind of majority voting system.

Swap between little endian and big endian

Before going through this tutorial, first understand what endianness is and its types, and see the program to find whether a processor is little endian or big endian. Visiting that first is strongly recommended.

Program to swap between little endian and big endian

#include <stdio.h>

unsigned int swap_endian(unsigned int number)
{
   unsigned int byte0, byte1, byte2, byte3;
   byte0 = (number & 0x000000FF) >> 0;
   byte1 = (number & 0x0000FF00) >> 8;
   byte2 = (number & 0x00FF0000) >> 16;
   byte3 = (number & 0xFF000000) >> 24;
   return ((byte0 << 24) | (byte1 << 16) | (byte2 << 8) | (byte3 << 0));
}

int main(void)
{
    unsigned int swapped_value = swap_endian(1);
    printf("0x%08X\n", swapped_value);   /* prints 0x01000000 */
    return 0;
}

How to decide whether given processor is using little endian format or big endian format ?

It can be found easily with a small C/C++ program. Before going to that, we shall first understand what endianness is. There are two types: little endian and big endian. Endianness is the order in which a processor stores the bytes of a multi-byte variable in its memory.

Big endian means that the most significant byte of any multibyte data field is stored at the lowest memory address, which is also the address of the larger field. It is used, for example, by Motorola 68k processors and as the network byte order.

Little endian means that the least significant byte of any multibyte data field is stored at the lowest memory address, which is also the address of the larger field. It is used, for example, by Intel x86 processors.

Program to find a little endian or big endian

#include <stdio.h>

void print_Endian(void)
{
   unsigned int i = 1;
   char *c = (char *) &i;
   if(*c)
   {
       printf("\n\rLittle Endian");
   }
   else
   {
      printf("\n\rBig Endian");
   }
}

Explanation:

Consider an architecture that uses 4 bytes for an int, and int i is initialized to 1. On a little endian machine the byte 0x01 is stored at the lowest address of i, followed by three 0x00 bytes; on a big endian machine the order is reversed, with 0x01 at the highest address.

char *c = (char *)&i declares a char pointer that points to the address of i. As it is a char pointer, it reads only the first byte of i, so on a little endian machine the value of *c will be 1.

If it is a big endian machine, the value read through the char* will be 0.

So the if statement prints the endianness of the processor.


Also see: the program to swap between little endian and big endian.

Where are Local, Global, Static, Extern variables stored ?


  • Local variables are stored on the stack
  • Initialized global variables are stored in the data segment
  • Uninitialized global and static variables are stored in BSS (initialized static variables go to the data segment)
  • Extern declarations refer to variables stored in the data segment (or BSS) of the defining module
  • Memory obtained with malloc, calloc and realloc is stored on the heap

Different Memory segments in C - C++

There are 5 different memory segments in C/C++. They are 

  1. Text Segment
  2. Initialized Data Segment
  3. Uninitialized data segment
  4. Stack
  5. Heap
1. Text segment is also known as code memory; the compiled machine instructions of the program's functions are stored here.

2. Initialized data segment: a portion of the virtual address space where initialized static and global variables are stored.

3. Uninitialized data segment: also known as Block Started by Symbol (BSS). Uninitialized global and static variables are stored here, and the kernel initializes this region to zero.

4. Stack: local variables (and function call frames) are stored in this memory. The stack area adjoins the heap area and grows in the opposite direction; when the two meet, free memory is exhausted.

5. Heap: known as dynamic memory. Memory allocated during program execution (malloc, calloc, realloc) lives here. It begins at the end of BSS.

How to create Semaphores ? C/C++ Example

Embedded C interview Question

What is Semaphore ?
How to create Semaphores ?

Answer

A semaphore is a protected variable (or abstract data type) which can only be accessed using the following operations:

P(s)
{
   while (s == 0); /* wait until s > 0 */
   s = s - 1;
}

V(s)
{
   s = s + 1;
}

Init(s, v)
{
   s = v;
}

The value of a semaphore is the number of units of the resource which are free (if there is only one resource, a "binary semaphore" with values 0 or 1 is used). The P operation busy-waits (or maybe sleeps) until a resource is available, whereupon it immediately claims one. V is the inverse; it simply makes a resource available again after the process has finished using it. Init is only used to initialize the semaphore before any requests are made. The P and V operations must be indivisible, i.e. no other process can access the semaphore during their execution.

To avoid busy-waiting, a semaphore may have an associated queue of processes (usually a FIFO). If a process does a P on a semaphore, which is zero, the process is added to the semaphore's queue. When another process increments the semaphore by doing a V and there are tasks on the queue, one is taken off and resumed.

What are mutexes?

Embedded C interview Questions

What are mutexes?

Answer

A mutual exclusion object that allows multiple threads to synchronize access to a shared resource. A mutex has two states: locked and unlocked. Once a thread has locked a mutex, other threads attempting to lock it will block. When the locking thread unlocks (releases) the mutex, one of the blocked threads will acquire (lock) it and proceed.

If multiple threads or tasks are blocked on a locked mutex object, the one to take it and proceed when it becomes available is determined by some type of scheduling algorithm. For example, in a priority-based system, the highest priority blocked task will acquire the mutex and proceed. Another common set-up is to put blocked tasks on a first-in first-out queue.

How many assembly instructions does each of your high level code take?

Embedded C interview question

How many assembly instructions does each of your high level code take?

Answer

Any programming language, which provides some level of abstraction above assembly language. These normally use statements consisting of English-like keywords such as "FOR", "PRINT" or "GOTO", where each statement corresponds to several machine language instructions. It is much easier to program in a high-level language than in assembly language though the efficiency of execution depends on how good the compiler or interpreter is at optimizing the program. It is not possible to tell in the number of assembly instruction for a high level code. Depending on the optimization switches the number of Assembly instruction will vary.

Development platform and target platform

Embedded C interview question

What is development platform and target platform ?

Answer

Refers to the situation where two computers are involved in the software development process - one computer to develop software (edit, compile, link, etc.), referred to as the host, and one computer to run the software, referred to as the target. The target is the actual product on which the software is to run. In most common situations, the host and target are the same. For example, a word processor is developed on the PC and runs as a product on the PC. However, for various real-time and embedded systems, this is not the case. Consider a small single board computer that controls a robot arm. 

The software cannot be developed on this board because it has no keyboard, display, or disk, and therefore, a host computer, like a PC, is used to develop the software. At some point, when it is believed that the software is ready for testing, the software is compiled and linked to form an executable file, and the file is downloaded to the target, typically over an RS-232 serial connection. Debugging on the target is typically done with an ICE.

What does downloading an application to hardware mean ?

Embedded C interview question

What does downloading an application mean?

Answer

Refers to the transfer of executable code from a host to a target, typically using an RS-232 serial line. The target must have resident software (e.g., EPROM) that can read the incoming data, translate it if necessary (the file format may be ASCII hex for example) and load and run the code. If the target board has no resident software, then an ICE is required. The ICE connects to the host, typically via RS-232, accepts the download and loads the code into memory.

Board Support Package (BSP)? Embedded C interview question

Embedded C interview question

What is a Board Support Package (BSP)? 

Answer

BSP is a commonly used embedded industry term identifying a source or binary software package used to rapidly build an embedded operating system on a particular hardware platform. In Windows CE, a BSP is a collection of drivers and OEM Adaptation Layers (OALs), hardware abstraction layers (HALs), and BIOS files that are needed to allow an operating system to boot and make the peripherals function on a board.

Intel and Motorola architecture

Embedded C interview Question

What is Intel and Motorola architecture ?

Answer

Motorola architecture is RISC, which is based on the rapid execution of a sequence of simple instructions rather than on the provision of a large variety of complex instructions.

Intel architecture is CISC where each instruction can perform several low-level operations such as memory access, arithmetic operations or address calculations. The term was coined in contrast to Reduced Instruction Set Computer.

Reentrant routines and what is their use ?

Embedded C interview question

What are reentrant routines and what is their use?

Answer

Used to describe code, which can have multiple simultaneous, interleaved, or nested invocations, which will not interfere with each other. This is important for parallel processing, recursive functions or subroutines, and interrupt handling. 
It is usually easy to arrange for multiple invocations (e.g. calls to a subroutine) to share one copy of the code and any read-only data but, for the code to be re-entrant, each invocation must use its own copy of any modifiable data (or synchronized access to shared data). This is most often achieved using a stack and allocating local variables in a new stack frame for each invocation. Alternatively, the caller may pass in a pointer to a block of memory which that invocation can use (usually for outputting the result) or the code may allocate some memory on a heap, especially if the data must survive after the routine returns.

care must be taken while writing ISRs as opposed to normal sub routines

Embedded C interview Question

What care must you take while writing ISRs as opposed to normal sub routines?

Answer

Special care is required when writing an interrupt handler to ensure that either the interrupt, which triggered the handler’s execution, is masked out (inhibited) until the handler exits, or the handler is re-entrant so that multiple concurrent invocations will not interfere with each other.

difference between ISR and normal functions - Embedded C interview Question

Embedded C interview Question

What is the difference between ISR and normal functions?

Answer

The difference between an ISR and a normal routine is very slight and has to do with CPU opcodes. ISR routines end their routine with an "Interrupt Return (IRET)" whereas normal procedures end their routines with "Return (RET)" or "Far Return (RETF)"

Some compilers do not have the ability to correctly create ISRs. Often a compiler introduces a non-ansi compliant keyword "_interrupt" or "interrupt". Compilers known to support this keyword are Watcom C/C++, Borland C/C++, and Microsoft C 6.0. However, GCC does not support this keyword for x86 architecture and it would seem Visual C/C++ does not.

/* example of a clock tick ISR in Watcom C/C++ */
static volatile unsigned long clock_count;

void _interrupt ISR_clock (void)
{
   clock_count++;
}

Interrupt Service Routine (ISR) - Embedded C interview questions

Embedded C interview Question

What is Interrupt Service Routine (ISR) ?

Answer

ISR is a routine, which is executed when an interrupt occurs. Interrupt handlers typically deal with low-level events in the hardware of a computer system such as a character arriving at a serial port or a tick of a real-time clock. Special care is required when writing an interrupt handler to ensure that either the interrupt, which triggered the handler’s execution, is masked out (inhibited) until the handler exits, or the handler is re-entrant so that multiple concurrent invocations will not interfere with each other.

Cache Memory Embedded C interview Question


Embedded C interview Question

What is cache Memory?


Answer

A small fast memory holding recently accessed data, designed to speed up subsequent access to the same data. Most often applied to processor-memory access but also used for a local copy of data accessible over a network etc. 



When data is read from, or written to, main memory a copy is also saved in the cache, along with the associated main memory address. The cache monitors addresses of subsequent reads to see if the required data is already in the cache. If it is (a cache hit) then it is returned immediately and the main memory read is aborted (or not started). If the data is not cached (a cache miss) then it is fetched from main memory and also saved in the cache. 

The cache is built from faster memory chips than main memory so a cache hit takes much less time to complete than a normal memory access. The cache may be located on the same integrated circuit as the CPU, in order to further reduce the access time. In this case it is often known as primary cache since there may be a larger, slower secondary cache outside the CPU chip. 

The most important characteristic of a cache is its hit rate - the fraction of all memory accesses, which are satisfied from the cache. This in turn depends on the cache design but mostly on its size relative to the main memory. The size is limited by the cost of fast memory chips.

What is concurrency - Embedded C interview Question


Embedded C interview Question

What is concurrency?

Answer

A technique used in an operating system for sharing a single processor between several independent jobs. It is also called "multi-tasking", "multi-processing", "multiprogramming", "concurrency" or "process scheduling".

Unit Test Tools are in use today - Embedded C interview question

What kinds of Unit Test Tools are in use today?

AdaCAST and ATTOL are some of the tools used in Ada and C/C++ environments; VectorCAST is also used for unit testing.

Trade off - Embedded C interview Question

Embedded C interview Question

Should we do any code instrumentation for debug and if so, what is the trade-off?

Answer

By instrumentation we mean adding tags throughout the code. These tags can be monitored and collected for coverage and timing analysis using third-party software tools such as CodeTEST and CodeTAP. The trade-off is that instrumentation injects extra code that the processor has to execute, which reduces the throughput of the processor.

What is Bit and Byte Masking - Embedded C interview question


Embedded C interview question

What is bit/byte masking?

Answer

By masking we mean controlling whether the corresponding bit of a packed operand propagates into the result unchanged or inverted, or whether that bit of the result is cleared to 0 or set to 1.

Bit Manipulation and Byte Manipulation Embedded C Interview Question

Embedded C interview Question

What is Bit manipulation and Byte manipulation ?

Answer

Setting, clearing, checking and inverting bits are the bit manipulation techniques normally
used in embedded software; refer to the following for how each is done:
1. Check whether bit 7 is set: if (BITS & (1 << 7))
2. Set bit 7: BITS |= (1 << 7)
3. Clear bit 7: BITS &= ~(1 << 7)
4. Toggle bit 7: BITS ^= (1 << 7)

Embedded C interview question - Write an iterative function


Embedded C interview question

Write an iterative function

Answer

Repetition of a sequence of instructions is called iteration; it is a fundamental part of many algorithms. Iteration is characterized by a set of initial conditions, an iterative step and a termination condition.
E.g. Newton's iteration for the square root of n:
new_x = n / 2.0;
do
{
   x = new_x;
   new_x = 0.5 * (x + n / x);
} while (fabs(new_x - x) > epsilon);

Embedded C interview Question - TCB (w.r.t real-time) ?


Embedded C interview Question

What is TCB (w.r.t real-time) ?

Answer:

A task control block is a data structure that contains information about the task. For example, task name, start address, a pointer to the task's instance data structure, stack pointer, stack top, stack bottom, task number, message queue, etc.

Embedded C interview Questions - features supported in processor w.r.t. Virtual memory

Embedded C interview question

What are the features supported in processor w.r.t. Virtual memory ?

Virtual memory is supported through paging. For example, a virtual 1-megabyte address space can be simulated with 64K of RAM, a 2-megabyte disk, and paging hardware. The paging hardware translates all memory accesses through a page table. If the page that contains the memory access is not loaded into memory, a fault is generated, which results in the least-used page being written to disk and the desired page being read from disk and loaded into memory over it.

Embedded C interview questions - How to switch from real to protected mode?


Embedded C interview questions

How to switch from real to protected mode?

Answer
Every DOS Protected Mode Interface (DPMI) task runs on four different stacks: an application ring protected mode stack, a locked protected mode stack, a real mode stack, and a DPMI host ring 0 stack. The protected mode stack is the one the DPMI client was running on when it switched into protected mode by calling the protected mode entry point (although the client can switch to another protected mode stack if desired). The locked protected mode stack is provided by the DPMI server and is used for simulating hardware interrupts and processing real mode callbacks. The DPMI host provides the real mode stack, which is usually located in the data area provided by the client. The ring 0 stack is only accessible by the DPMI host. However, this stack may contain state information about the currently running program.

There are four different ways a client can force a mode switch between protected and real mode:

  • Execute the default interrupt reflection handler 
  • Use the translation services to call real mode code 
  • Use a real mode callback to switch from real to protected mode 
  • Use the raw mode switch functions 
All mode switches except for the raw mode switches will save some information on the DOS Protected Mode Interface (DPMI) host's ring 0 stack. This means that programs should not terminate while in nested mode switches unless they are using the raw mode switching services. However, even programs that use raw mode switches should not attempt to terminate from a hardware interrupt or exception handler since the DPMI host performs automatic mode and stack switching to provide these services.

Embedded C interview question - What is Instruction pipelining ?

Embedded C interview questions

What is Instruction pipelining ?

Pipelining is an architecture in which a sequence of functional units ("stages") performs a task in several steps, like an assembly line in a factory. Each functional unit takes inputs and produces outputs, which are stored in its output buffer. One stage's output buffer is the next stage's input buffer. This arrangement allows all the stages to work in parallel, thus giving greater throughput than if each input had to pass through the whole pipeline before the next input could enter. The costs are greater latency and complexity due to the need to synchronize the stages in some way so that different inputs do not interfere. The pipeline will only work at full efficiency if it can be filled and emptied at the same rate that it can process.

Embedded C interview question - What is the Main purpose of clock?

Embedded C interview question

What is the Main purpose of clock?


A processor's clock or one cycle thereof. The relative execution times of instructions on a computer are usually measured by number of clock cycles rather than seconds. One good reason for this is that clock rates for various models of the computer may increase as technology improves, and it is usually the relative times one is interested in when discussing the instruction set.

Interview Question - What are the various modes in Intel processor ?

Embedded C interview Question

What are the various modes in Intel processor ?

There are three types of modes in Intel processor. They are,

  • Real Mode
  • Virtual Mode
  • Protected Mode
In real mode, addresses are generated by adding an address offset to the value of a segment register shifted left by four bits. As the segment register and address offset are 16 bits long, this results in a 20-bit address. This is the origin of the one-megabyte (2^20) limit in real mode.

Virtual Mode: An operating mode provided by the Intel 80386 and later processors to allow real mode programs to run under operating systems, which use protected mode. In this sub-mode of protected mode, an operating environment is created which mimics the address calculation in real mode.

In protected mode,
the segment registers contain an index into a table of segment descriptors. Each segment descriptor contains the start address of the segment, to which the offset is added to generate the address. In addition, the segment descriptor contains memory protection information. This includes an offset limit and bits for write and read permission. This allows the processor to prevent memory accesses to certain data. The operating system can use this to protect different processes' memory from each other, hence the name "protected mode".

Interview Question - What is a start-up routine for the processor and, what does this routine do

Embedded C interview Question

What is a start-up routine for the processor and, what does this routine do.

A startup or entry routine in native assembly code is linked with the compiled source code, and the result is subsequently programmed into the EPROMs.

The routine is responsible for the following:

  • Initializes the stack pointer (SP). 
  • Initializes the frame pointer (FP). 
  • Clears the zero vars section. 
  • Initializes the I/O system. 
  • Sets up the exception vector tables. 
  • Copies initialized data to the vars section from ROM to RAM. 
  • Calls the main() function.

Embedded C interview question - how to initialize the interrupt vector table ?

Embedded C interview question

How to initialize the interrupt vector table ?

To make the interrupts work, we must: 

  • Make sure that the Interrupt Vector Table is in the right place and that the Vector Base Register points to it. 
  • Set up each interrupt source to generate an interrupt when the proper special conditions arise. 
  • Set the Interrupt Mask Register so that the expected interrupts are not masked out. 
  • Set the Interrupt Priority Mask in the Status Register low enough the interrupt request is not inhibited 

To initialize the interrupt vector table (w.r.t. 8259 operation): 

mov ax, 0 
mov es, ax 
mov di, 20h 
mov ax, offset int_service_0 
stosw 
mov ax, cs 
stosw 

The program should have a procedure int_service_0. 

mov ax, offset int_service_0 

This moves the offset of this procedure into AX 

mov ax, cs 

This moves your CS value into AX 

Hereby, we have put the offset address of int_service_0 into address 20h (i.e. 32d) and CS into address 22h. When INT 8 is raised, the offset of int_service_0 will be fetched into IP and its segment will be fetched into CS. Then the 8088 will go on to run int_service_0.

Embedded C interview Questions - How to initialize the processor ?

Embedded C interview questions

How to initialize the processor?

The following example shows how to initialize a numeric co-processor; here the 80C187 is taken as the example.

$mod186
name example_80C187_init
;
; FUNCTION: This function initializes the 80C187 numeric co-processor.
;
; SYNTAX:   extern unsigned char far 187_init (void);
;
; INPUTS:   None
;
; OUTPUTS:  unsigned char - 0000h -> False -> coprocessor not initialized
;                           ffffh -> True  -> coprocessor initialized
;
; NOTE: Parameters are passed on the stack as required by high-level languages.
;
lib_80186 segment public 'code'
assume cs:lib_80186
public _187_init

_187_init proc far
    push bp             ;save caller's bp
    mov bp, sp          ;get current top of stack
    cli                 ;disable maskable interrupts
    fninit              ;init 80C187 processor
    fnstcw [bp-2]       ;get current control word
    sti                 ;enable interrupts
    mov ax, [bp-2]
    and ax, 0300h       ;mask off unwanted control bits
    cmp ax, 0300h       ;PC bits = 11
    je Ok               ;yes: processor ok
    xor ax, ax          ;return false (80C187 not ok)
    pop bp              ;restore caller's bp
    ret
Ok: and [bp-2], 0fffeh  ;unmask possible exceptions
    fldcw [bp-2]
    mov ax, 0ffffh      ;return true (80C187 ok)
    pop bp              ;restore caller's bp
    ret
_187_init endp

lib_80186 ends
end

Embedded C Interview Question - What is the functionality of BIOS ?

Embedded C Interview Question

What is the functionality of BIOS?

BIOS is an acronym for basic input/output system, which is the part of system software that provides the lowest-level interface to peripheral devices and which controls the first stage of the system boot process, including installation of the operating system into memory.

What makes an OS a RTOS ? Embedded C interview questions

Embedded C interview questions

What makes an OS a RTOS ?

Answer

  • A RTOS (Real-Time Operating System) has to be multi-threaded and preemptible.
  • The notion of thread priority has to exist, as there is for the moment no deadline driven OS. 
  • The OS has to support predictable thread synchronization mechanisms 
  • A system of priority inheritance has to exist 
  • OS Behavior should be known 

so the following figures should be clearly given by the RTOS manufacturer:

  • The interrupt latency (i.e. time from interrupt to task run): this has to be compatible with application requirements and has to be predictable. This value depends on the number of simultaneous pending interrupts.
  • For every system call, the maximum time it takes. It should be predictable and independent from the number of objects in the system; 
  • The maximum time the OS and drivers mask the interrupts. 

The developer should also know the following points:

  • System Interrupt Levels. 
  • Device driver IRQ Levels, maximum time they take, etc

Embedded C interview question on What exactly is meant by real-time?

An embedded C interview question: what exactly is meant by real-time? Let us see the answer to this question in this tutorial.

Answer

A real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred.

Embedded C interview question on what is an Embedded System?

What is an Embedded System? is one of the frequently asked embedded C interview questions for freshers and those with less than 6 months of work experience. Let us see the answer below.

Answer 

Embedded control systems are designed around a MCU (Micro Controller Unit), which integrates on-chip program memory, data memory (RAM) and various peripheral functions, such as timers, displays, and keyboard and communication devices. Such systems usually require display drivers, device drivers to manage various devices and implement the application with or without operator intervention. Some systems require real time or quick response to real world inputs. Often they operate without any operator assistance or inputs round the clock. This requires very high system stability and error handling capability. 

Embedded Systems programming is different from PC application programming in the following aspects.

  • Unlike "processor" applications such as personal computers and workstations, the computing or controlling elements of the embedded control applications are buried inside the application.
  • Requires understanding of Hardware/System architecture. The user of the product is only concerned with the very top-level commands. Very rarely does an end-user know (or care to know) the embedded controller inside (unlike the conscientious PC programmers, who are intimately familiar not only with the processor type, but also its clock speed, DMA capabilities and so on).
  • Requires strict/careful timing characteristics.
  • Should be programmed to run on resource/ time/ performance constrained environments and fast response/ less memory.

Which of the following registers associated with a digital input/output port in an embedded system is written by software to configure whether the ports, or parts thereof, are inputs or outputs ?

Question

Which of the following registers associated with a digital input/output port in an embedded system is written by software to configure whether the ports, or parts thereof, are inputs or outputs ?

Options

A. Interrupt enable register
B. Interrupt request register
C. Data direction register
D. Data register
E. Open-collector register

Answer

C. Data direction register

Which of the following memory locations stores the address of the code designated to handle a particular interrupt programmed in ANSI C ?

Question

Which of the following memory locations stores the address of the code designated to handle a particular interrupt programmed in ANSI C ?

Options

A) Start Of program memory
B) Data direction register
C) Interrupt vector
D) Interrupt priority register
E) Interrupt enable mask

Answer

C) Interrupt vector

Embedded C - C++ IKM Assesment Question and Answers - 2

On an ANSI C implementation using 2's complement math, which of the following will be the output for the given program?

Program

#include<stdio.h>
int main()
{
   signed char j = 1;
   while(j<=255)
   {
      printf("%d ",j);
      ++j;
   }
   return 0;
}

Choice

A) 1 2 3 ... 127 128 0 1 2 3 ... 127 128 ... endlessly
B) 1 2 3 ... 127 -128 -127 -126 ... -2 -1 0 1 2 ... 127 -128 -127 ... endlessly
C) 1 2 3 ... 254 255 0 1 2 3 254 255 ... endlessly
D) 1 2 3 ... 255
E) 1 2 3 ... 127

Answer

B) 1 2 3 ... 127 -128 -127 -126 ... -2 -1 0 1 2 ... 127 -128 -127 ... endlessly 
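The wrap-around behind answer B can be reproduced on a host machine by emulating 8-bit two's-complement arithmetic. The sketch below is illustrative (the helper `to_signed_char` is not a standard routine); it samples the first values the loop would print:

```python
def to_signed_char(n):
    """Wrap an integer into the range of an 8-bit two's-complement signed char."""
    n &= 0xFF                     # keep only the low 8 bits
    return n - 256 if n > 127 else n

# Emulate the quiz loop: j starts at 1; after 127 it wraps to -128,
# so the condition j <= 255 is always true and the loop never terminates.
seq = []
j = 1
for _ in range(260):              # sample a few hundred iterations
    seq.append(j)
    j = to_signed_char(j + 1)

assert seq[126] == 127            # last value before the wrap
assert seq[127] == -128           # two's-complement wrap-around
assert max(seq) == 127            # j can never reach 255
```

Because a signed char can never exceed 127, the controlling expression `j <= 255` can never become false, which is why the printed sequence cycles endlessly.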

POWER SYSTEM CONGESTION MANAGEMENT USING THYRISTOR CONTROLLED SERIES CAPACITOR - Part - 1

ABSTRACT

This project describes an approach for determining the most suitable locations for installing FACTS devices and finding their optimal settings for congestion management.

Congestion management means the activities of the transmission system operator to relieve transmission constraints in competitive electricity market. Congestion occurs when the transmission network is unable to accommodate all of the desired transactions due to violation of the system operating limits. Congestion can be removed by using FACTS devices.

The FACTS device considered in this work is TCSC. In this project, Particle Swarm Optimization (PSO) algorithm is used for finding the optimal settings of installed FACTS device. The proposed approach used for locating and finding the optimal settings of FACTS devices reduces the congestion. A sample IEEE- 30 bus system is used to demonstrate the effectiveness of the proposed approach.
CHAPTER 1

1. INTRODUCTION

Power systems are commonly planned and operated so that the system remains secure under all conditions. In recent years, with the deregulation of the electricity market, the traditional concepts and practices of power systems have changed. This led to the introduction of Flexible AC Transmission System (FACTS) devices. These devices are able to modify voltage, phase angle, impedance and power flows at particular points in a power system. FACTS devices control the power flow in the network and reduce the flow in heavily loaded lines, resulting in increased loadability, lower system losses, improved stability and security of the network, and reduced cost of production.

Power exchanges in a deregulated system must be controlled in order to avoid line overloading, known as congestion. Because of congestion, the full capacity of the transmission lines may not be usable; relieving congestion allows the full capacity of the network to be used. Removing congestion in normal and contingency conditions, without reducing the stability and security margin, can be achieved through fast power control by FACTS devices in the transmission system. Their main function is to maximize the power flow. In the proposed work, a non-traditional optimization technique, the Particle Swarm Optimization (PSO) algorithm, is used to optimize the parameters of FACTS devices in a power system. The parameters taken into consideration are the locations and settings of the FACTS devices in the transmission lines. The simulation is performed on the IEEE 30-bus power system with more than one TCSC, modelled for steady-state studies.


CHAPTER 2

2. FACTS DEVICES IN AC POWER SYSTEM

2.1 FACTS CONTROLLERS TO A.C POWER SYSTEMS

To achieve both operational reliability and financial profitability, it has become clear that more efficient utilization and control of the existing transmission system infrastructure is required. Power electronics based equipment, or Flexible AC Transmission Systems (FACTS), provides proven technical solutions to address these new operating challenges. FACTS technologies allow for improved transmission system operation with minimal infrastructure investment, environmental impact, and implementation time compared to the construction of new transmission lines. Traditional solutions to upgrading the electrical transmission system infrastructure have been primarily in the form of new transmission lines, substations, and associated equipment. However, as experience has proven over the past decade or more, the process to permit, site, and construct new transmission lines has become extremely difficult, expensive, time-consuming, and controversial. FACTS technologies provide advanced solutions as cost-effective alternatives to new transmission line construction.

2.1.1 AC Power

Power is defined as the rate of flow of energy past a given point. In alternating current circuits, energy storage elements such as inductance and capacitance may result in periodic reversals of the direction of energy flow. The portion of power flow that, averaged over a complete cycle of the AC waveform, results in a net transfer of energy in one direction is known as real power. The portion of power flow due to stored energy, which returns to the source in each cycle, is known as reactive power.
2.1.2 Real, Reactive and Apparent Power

Consider a simple alternating current (AC) circuit consisting of a source and a load, where both the current and voltage are sinusoidal. If the load is purely resistive, the two quantities reverse their polarity at the same time, the direction of energy flow does not reverse, and only real power flows. If the load is purely reactive, then the voltage and current are 90 degrees out of phase and there is no net power flow. This energy flowing backwards and forwards is known as reactive power. A practical load will have resistive, inductive, and capacitive parts, and so both real and reactive power will flow to the load.

If a capacitor and an inductor are placed in parallel, the currents flowing through the inductor and the capacitor tend to cancel out rather than add. Conventionally, capacitors are considered to generate reactive power and inductors to consume it. This is the fundamental mechanism for controlling the power factor in electric power transmission; capacitors (or inductors) are inserted in a circuit to partially cancel the reactive power of the load.

The apparent power is the product of voltage and current. Apparent power is handy for sizing equipment or wiring. However, adding the apparent power for two loads will not accurately give the total apparent power unless they have the same displacement between current and voltage (the same power factor).
Engineers use the following terms to describe energy flow in a system (and assign each of them a different unit to differentiate between them):
·       Real power (P) - unit: watt (W)
·       Reactive power (Q) - unit: volt-ampere reactive (var)
·       Complex power (S) - unit: volt-ampere (VA)
·       Apparent power (|S|), the absolute value of the complex power S - unit: volt-ampere (VA)

In Figure 3.1, P is the real power, Q is the reactive power (in this case positive), S is the complex power and the length of S is the apparent power.

Reactive power does not transfer energy, so it is represented on the imaginary axis. Real power moves energy, so it is represented on the real axis.


          Figure 3.1 The apparent power is the vector sum of real and reactive power

The unit for all forms of power is the watt (symbol: W), but this unit is generally reserved for real power. Apparent power is conventionally expressed in volt-amperes (VA), since it is the product of rms voltage and rms current. The unit for reactive power is the var, which stands for volt-ampere reactive. Since reactive power flow transfers no net energy to the load, it is sometimes called "wattless" power.

Understanding the relationship between these three quantities lies at the heart of understanding power engineering. The mathematical relationship among them can be represented by vectors or expressed using complex numbers,
                                      S = P + jQ         

The complex value S is referred to as the complex power.                             
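The relationship S = P + jQ can be illustrated with complex arithmetic. The operating point below (230 V, 10 A, current lagging by 30°) is assumed purely for illustration:

```python
import cmath
import math

V = 230.0                          # rms voltage, V (assumed value)
I = 10.0                           # rms current, A (assumed value)
phi = math.radians(30)             # phase angle between voltage and current

S = V * I * cmath.exp(1j * phi)    # complex power S = P + jQ
P, Q = S.real, S.imag              # real power (W), reactive power (var)
apparent = abs(S)                  # apparent power |S| (VA)

assert math.isclose(apparent, V * I)            # |S| = V * I
assert math.isclose(P, V * I * math.cos(phi))   # P = VI cos(phi)
assert math.isclose(Q, V * I * math.sin(phi))   # Q = VI sin(phi)
```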
                                Table 2.1 Active and Reactive power

Instantaneous power                     p = instantaneous active power + instantaneous reactive power
Average active power                    P = VI cos φ     called simply active power
Average reactive power                  0                usually ignored
Maximum instantaneous active power      VI cos φ         usually ignored, as it is the same quantity as P
Maximum instantaneous reactive power    Q = VI sin φ     called simply reactive power

2.1.3 Reactive power

Reactive power is essential to move active power through the transmission and distribution system to the customer. While active power is the energy supplied, reactive power provides the important function of regulating voltage. 

Reactive power is used to provide the voltage levels necessary for active power to do useful work. 

The sources of reactive power:

·       Synchronous generators
·       Synchronous compensators
·       Capacitive and inductive compensators
·       Overhead lines and underground cables

Reactive power (vars) is required to maintain the voltage needed to deliver active power (watts) through transmission lines. Motor loads and other loads require reactive power to convert the flow of electrons into useful work. When there is not enough reactive power, the voltage sags and it is not possible to push the power demanded by loads through the lines.

2.1.4 Reactive power vs. system voltage

The voltage drop between two nodes 1 and 2, at voltages V1 and V2 respectively, connected by a short transmission line of impedance R + jX, is approximately

        ΔV ≈ (R·P2 + X·Q2) / V2

where P2 and Q2 are the real and reactive power at node 2. For most power networks X >> R, so the voltage drop is determined mainly by Q. If V1 is in phase advance of V2, then real power P flows from node 1 to node 2. If V1 > V2, then reactive power is transferred from node 1 to node 2. If, by varying the excitation of the generators at nodes 1 and 2, V2 is made greater than V1, then the direction of Q is reversed, from node 2 to node 1. Hence P can be sent from node 1 to node 2 or from node 2 to node 1 by suitably adjusting the amount of steam (or water) admitted to the turbine, and Q can be sent in either direction by adjusting the voltage magnitudes. These two operations are approximately independent of each other if X >> R.

                                   Figure 3.2 Voltage collapse phenomenon
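As a numerical illustration of the voltage-drop approximation ΔV ≈ (R·P2 + X·Q2)/V2, with per-unit values assumed purely for illustration:

```python
# Hypothetical short line and load, all in per unit (assumed values).
R, X = 0.05, 0.25                  # line resistance and reactance
P2, Q2, V2 = 0.8, 0.4, 1.0         # receiving-end real/reactive power and voltage

dV = (R * P2 + X * Q2) / V2        # approximate voltage drop
dV_q = (X * Q2) / V2               # contribution of the reactive-power term

assert abs(dV - 0.14) < 1e-12      # 0.05*0.8 + 0.25*0.4 = 0.14 pu
assert dV_q / dV > 0.7             # with X >> R, Q dominates the voltage drop
```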

2.1.5 Complex power flow

Consider two ideal voltage sources connected by a line of impedance Z = R + jX, as shown in Figure 3.3 below.

Figure 3.3 Two interconnected voltage sources
Let the phasor voltages be V1 = |V1|∠δ1 and V2 = |V2|∠δ2, and the line impedance Z = |Z|∠γ. For the assumed direction of current,

        I12 = (|V1|∠δ1 − |V2|∠δ2) / (|Z|∠γ)

            = (|V1|/|Z|)∠(δ1 − γ) − (|V2|/|Z|)∠(δ2 − γ)

The complex power S12 is given by

        S12 = V1 I12*

            = |V1|∠δ1 [ (|V1|/|Z|)∠(γ − δ1) − (|V2|/|Z|)∠(γ − δ2) ]

            = (|V1|²/|Z|)∠γ − (|V1||V2|/|Z|)∠(γ + δ1 − δ2)

Thus, the real and reactive powers at the sending end are

        P12 = (|V1|²/|Z|) cos γ − (|V1||V2|/|Z|) cos(γ + δ1 − δ2)                (2.1)

        Q12 = (|V1|²/|Z|) sin γ − (|V1||V2|/|Z|) sin(γ + δ1 − δ2)                (2.2)

Power system transmission lines have small resistance compared to the reactance. Assuming R = 0 (i.e., Z = X∠90°), the above equations become

        P12 = (|V1||V2|/X) sin(δ1 − δ2)                                          (2.3)

        Q12 = (|V1|/X) [ |V1| − |V2| cos(δ1 − δ2) ]                              (2.4)
Since R = 0, there are no transmission line losses and the real power sent equals the real power received.
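Equations (2.3) and (2.4) can be cross-checked against direct phasor arithmetic; the per-unit operating point below is assumed for illustration only:

```python
import cmath
import math

V1m, d1 = 1.05, math.radians(10)   # |V1| and delta1 (assumed values)
V2m, d2 = 1.00, 0.0                # |V2| and delta2
X = 0.3                            # lossless line: Z = X at 90 degrees

V1 = cmath.rect(V1m, d1)
V2 = cmath.rect(V2m, d2)
I12 = (V1 - V2) / (1j * X)
S12 = V1 * I12.conjugate()         # sending-end complex power S12 = V1 * I12*

# Closed-form equations (2.3) and (2.4):
P12 = (V1m * V2m / X) * math.sin(d1 - d2)
Q12 = (V1m / X) * (V1m - V2m * math.cos(d1 - d2))

assert math.isclose(S12.real, P12)   # both routes give the same real power
assert math.isclose(S12.imag, Q12)   # and the same reactive power
```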
2.1.6 Power system control

When discussing the creation, movement, and utilisation of electrical power, the power system can be separated into three areas:
·       Generation
·       Transmission
·       Distribution

The three main variables that can be directly controlled in the power system to impact its performance are:
·       Voltage
·       Angle, δ and
·       Impedance

2.2 FACTS DEVICES

Some of the FACTS controllers used for power system control are,

Table 2.2 Types of FACTS Devices

Type                                Parameter Controlled     FACTS Devices
Series Controllers                  Series                   TCSC, SSSC, TCPST
Shunt Controllers                   Shunt                    SVC, STATCOM
Combined Series-Shunt Controllers   Series & Shunt           UPFC

·       STATCOM - Static Synchronous Compensator
·       SVC            - Static Var Compensator
·       TCSC          - Thyristor Controlled Series Compensator
·       TCPST        - Thyristor Controlled Phase Shifting Transformer
·       UPFC          - Unified Power Flow Controller
·       SSSC           - Static Synchronous Series Compensator

Each of the above mentioned (and similar) controllers impact voltage, impedance, and/or angle (and power).

A STATCOM operates as a shunt-connected static var compensator whose capacitive or inductive output current can be controlled independently of the AC system voltage. An SVC is a shunt-connected static var generator or absorber whose output is adjusted to exchange capacitive or inductive current so as to maintain a specified bus voltage. An SSSC can be operated without an external electric energy source as a series compensator whose output voltage is in quadrature with the line current, for the purpose of increasing or decreasing the overall reactive voltage drop across the line and thereby controlling the transmitted electric power.

The UPFC is a combination of a STATCOM and an SSSC coupled via a common DC link. It is able to control the transmission line voltage, impedance and angle or, alternatively, the real and reactive power flow in the line. The TCSC is a capacitive reactance compensator which consists of a series capacitor bank shunted by a thyristor-controlled reactor, in order to provide a smoothly variable series capacitive reactance and thereby control the impedance of the transmission line.

TCPST is a phase shifting transformer adjusted by thyristor switches to provide a rapidly variable phase angle. Super Conducting Magnetic Energy Storage (SMES) is a super conducting electromagnetic energy storage device containing electronic converters that rapidly injects and/or absorbs real and /or reactive power and dynamically controls power flow in an AC system.

2.2.1 Thyristor Controlled Series Capacitor (TCSC)
         
Basically, the TCSC comprises a capacitor in series with the transmission line, in parallel with a TCR (a pair of anti-parallel thyristors in series with a reactor). Figure 3.4 shows the basic circuit of a TCSC.

2.2.2. Operation of TCSC
                      

       Figure 3.4 Thyristor Controlled Series Capacitor (TCSC)

The device can operate in three different modes:

      i.          Bypassed mode
         In this mode the thyristors are triggered into full conduction; the module behaves approximately like a parallel combination of the capacitor and the inductor. If the reactive impedance of the inductor is lower than that of the capacitor, the current through the device is inductive.

     ii.          Blocked mode
         The thyristors are blocked, the current through the reactor becomes zero, and the arrangement acts just like a fixed capacitor.

    iii.          Vernier mode
         The thyristors' conduction is controlled by a gate signal, so the TCSC has a controllable reactance in both the inductive and capacitive regions. This last case is of interest here. The thyristor firing angle (α) can vary from 90° up to a maximum inductive value in the inductive operating range, and from 180° down to a minimum capacitive value in the capacitive operating range. The maximum inductive impedance and the minimum capacitive impedance should be fixed in the design of the device to prevent a parallel resonance between the capacitor and the TCR at the fundamental frequency.

2.2.3 Benefits of utilizing FACTS devices

The benefits of utilizing FACTS devices in electrical transmission systems can be summarized as follows:
·       Better utilization of existing transmission system assets
·       Increased System Security
·       Increased transmission system reliability and availability
·       Increased quality of supply for sensitive industries
·       Environmental benefits



CHAPTER 3

                        3. CONGESTION MANAGEMENT

3.1 DEFINITION OF CONGESTION

In a market, when the producers and consumers of electric energy desire to produce and consume in amounts that would cause the transmission system to operate at or beyond one or more transfer limits, the system is said to be congested. Congestion is defined as the violation of one or more constraints in the network that are imposed to reflect the physical limitations of component facilities and that need to be satisfied so as to ensure the reliability of the power system. In this project, an analytical framework to solve problems arising in transmission congestion management is considered.

3.2 CAUSES FOR CONGESTION

Congestion occurs whenever the preferred generation or demand pattern of the various market players requires the provision of transmission services beyond the capability of the transmission system. When the constraints on the transmission networks are taken into account, the constrained transfer capabilities of the network may be unable to accommodate the preferred unconstrained market schedule without violating one or more constraints.

Therefore, congestion results from insufficient transfer capabilities to simultaneously transfer electricity between the various buying and selling entities.

Congestion introduces unavoidable losses in market efficiency, so that not all benefits foreseen in the restructuring of the electric power industry can be fully realised. Congestion is a major obstacle to vibrant, competitive electricity markets and therefore must be managed.


3.3 CONGESTION MANAGEMENT

          Congestion management means the activities of the transmission system operator to relieve transmission constraints in competitive electricity market.

Congestion management is about controlling the transmission system so that transfer limits are observed.

Effective management of congestion is a critically important contributor to the smooth functioning of a competitive electricity market, through its key role of minimizing the impacts of congestion. In this work, congestion is managed through the optimal location of FACTS devices.

CHAPTER 4

4. PARTICLE SWARM OPTIMIZATION

4.1 Overview of PSO

Population-based, cooperative and competitive stochastic search algorithms have become very popular in recent years in the arena of computational intelligence. Particle Swarm Optimization (PSO) is motivated by the simulation of social behaviour. It is a robust stochastic optimization technique based on the movement and intelligence of swarms.

It was developed in 1995 by James Kennedy (a social psychologist) and Russell Eberhart (an electrical engineer). It uses a number of agents (particles) that constitute a swarm moving around in the search space looking for the best solution. Each particle is treated as a point in an N-dimensional space which adjusts its "flying" according to its own flying experience as well as the flying experience of other particles. PSO thus applies the concept of social interaction to problem solving, and it is now applied to solving electrical engineering problems.

4.2 pbest value and gbest value

Each particle keeps track of the coordinates in the solution space that are associated with the best solution (fitness) it has achieved so far. This value is called the personal best, pbest.

Another best value that is tracked by the PSO is the best value obtained so far by any particle in the neighborhood of that particle. This value is called gbest.
The basic concept of PSO lies in accelerating each particle toward its pbest and the gbest locations, with a random weighted acceleration at each time step, as shown in Figure 3.5.


   Figure 3.5 Concept of modification of a searching point by PSO

Sk     : current searching point
Sk+1   : modified searching point
Vk     : current velocity
Vk+1   : modified velocity
Vpbest : velocity based on pbest
Vgbest : velocity based on gbest
 
Each particle tries to modify its position using the following               information:
·       The current positions,
·       The current velocities,
·       The distance between the current position and pbest,
·       The distance between the current position and the gbest.

The flowchart of Figure 3.6 can be summarized in the following steps:

1. Start: initialize each particle with random position and velocity vectors.
2. For each particle's position p, evaluate the fitness.
3. If fitness(p) is better than fitness(pbest), set pbest = p.
4. Set the best of the pbests as gbest.
5. Update each particle's velocity and position.
6. Repeat from step 2 until the maximum number of iterations is reached; then stop, giving gbest as the optimal solution.

                                 Figure 3.6 Flowchart for PSO Algorithm


4.3 Advantage of PSO

·       Unlike genetic algorithms, evolutionary programming and evolution strategies, PSO has no selection operation
·       All particles in PSO are kept as members of the population through the course of the run
·       PSO does not implement survival of the fittest
·       There is no crossover operation in PSO



4.4 Applications of PSO

The application of PSO helps in solving many power system problems, such as:

·       FACTS device location and design
·       Economic dispatch
·       Unit Commitment
·       Generation planning
·       Maintenance Scheduling
·       Capacitor placement

CHAPTER 5

5. POWER FLOW EQUATION

The power flow equation relates the power transfer between two buses to the electrical data of the system. The electrical data comprise the receiving- and sending-bus voltages, the power angle between the two buses, and the series impedance and natural capacitance of the transmission line connecting the two buses. We consider the PI model for a transmission line (Figure 3.7) and express the reactive power at the two ends as a function of the voltages VS and VR and the characteristics of the line (R, XL and XC).


Figure 3.7 Transmission line connecting two voltage buses

Using the phasor representation, (bar symbol above the respective quantity) we have for the voltages,

and for the currents,

The complex power at each end can be calculated by multiplying the voltage with the complex conjugate of the corresponding current. As we are interested in evaluating the reactive power Q (according to our definition, the amplitude of the instantaneous reactive power), we take the imaginary part of the complex powers, which are


Considering the resistance small compared with the inductive reactance (R << XL), the above equations can be simplified. This assumption does not affect the results, as the reactive power is stored, absorbed or produced by the reactive part of the network (inductance or capacitance). The simplified equations for the reactive power at the two ends are then
So far we have followed the standard procedure used by textbooks to introduce the power flow equations. As QR and QS are not equal, the reactive power loss is introduced as the difference of the two expressions.
The reactive power loss is explained to be the reactive power produced or absorbed by the line, depending on its sign. Accordingly, for a piece of electric network, the reactive power injected at one end will be the reactive power at the other end plus the reactive power produced or absorbed by the network element.

This unanimously accepted interpretation of reactive power loss contradicts the equally accepted interpretation that reactive energy is neither consumed nor produced but oscillates among different parts of the electric network. Here we would like to remind the reader that, in fact, it is not power that is lost, nor power that flows, but energy.

The confusion lies in the fact that the same term, reactive power, is used both for the amplitude of the instantaneous reactive power and for the average of the reactive power, two completely different concepts. The same confusion is avoided for the active power, as the amplitude of the instantaneous active power and the average active power happen to be the same.

The confusion is removed by interpreting the two facts as follows:

1. The average of the reactive power is zero: the energy flows for half a cycle in one direction and, for the second half cycle, the same amount of energy flows in the opposite direction. Therefore it is impossible to have a gain or loss of reactive energy (power).

2. The "loss of reactive power" should be seen as a loss in the amplitude of the instantaneous reactive power, which is not a loss of real power.

5.1 Line flows

 
      Figure 3.8 Transmission line model for calculating line flows

Consider the line connecting the two buses i and j. The line currents are,

        Iij = Il + Ii0
        Iji = -Il + Ij0

The complex powers are,

        Sij = Vi*conj(Iij)
        Sji = Vj*conj(Iji)
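Assuming a PI line model (series impedance Z, half the shunt admittance y0 at each end), the expressions above can be evaluated directly; the per-unit values are assumed purely for illustration:

```python
import cmath

Vi = cmath.rect(1.05, 0.05)        # bus i voltage phasor, pu (assumed)
Vj = cmath.rect(1.00, 0.00)        # bus j voltage phasor, pu (assumed)
Z = 0.02 + 0.10j                   # series line impedance, pu (assumed)
y0 = 0.015j                        # half shunt (charging) admittance at each end, pu

Il = (Vi - Vj) / Z                 # series-branch current
Iij = Il + Vi * y0                 # Iij = Il + Ii0
Iji = -Il + Vj * y0                # Iji = -Il + Ij0

Sij = Vi * Iij.conjugate()         # Sij = Vi * conj(Iij)
Sji = Vj * Iji.conjugate()         # Sji = Vj * conj(Iji)

loss = Sij + Sji                   # power not delivered: absorbed in the line

assert Sij.real > 0                # real power leaves the leading bus i
assert loss.real > 0               # series resistance dissipates real power
```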



CHAPTER 6

6. PROBLEM FORMULATION

In this project, more than one TCSC is considered, to maximize the power flow and reduce congestion.

Each potential solution of the optimization problem is considered as a particle in PSO. Initially the particles are randomly generated. All particles in PSO are kept as members of the population through the course of the run. It is the velocity of each particle that is updated, according to its own previous best solution and the best solution of its companions. The particles then fly with the updated velocities. Thus PSO is used to find the optimal settings of FACTS devices with the objective of maximizing the power flow.

6.1 Optimal Placement of TCSC

The essential idea of the proposed TCSC placement approach is to determine the branch which has the maximum power flow. Initially, the TCSC is placed in every branch of the IEEE 30-bus system in turn and the power flows are calculated. Then the maximum differences in line power flows are arranged in descending order for the different locations of the TCSC. The optimal location and number of TCSCs are obtained based upon the maximum power flow through the lines.

6.2 Optimal settings of TCSC

FACTS device constraints:

The TCSC reactance limit is given by,

        -0.7 XL ≤ XTCSC ≤ -0.2 XL

Where,
XL    - original line reactance in pu
XTCSC - reactance added to the line where the TCSC is placed, in pu
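In code, candidate TCSC settings can be forced to respect this limit. The helper below and the value of XL are assumed for illustration only:

```python
def clamp_tcsc(x_tcsc, x_line):
    """Clamp a candidate TCSC reactance to -0.7*XL <= XTCSC <= -0.2*XL."""
    lo, hi = -0.7 * x_line, -0.2 * x_line
    return min(max(x_tcsc, lo), hi)

XL = 0.30                                   # hypothetical line reactance (pu)

assert clamp_tcsc(-0.50, XL) == -0.7 * XL   # too capacitive: clipped to lower bound
assert clamp_tcsc(-0.01, XL) == -0.2 * XL   # too small: clipped to upper bound
assert clamp_tcsc(-0.10, XL) == -0.10       # already inside the limits
```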

6.3 Proposed method of PSO implementation

In PSO, particles fly in the search space with a velocity that is dynamically adjusted according to their own flying experience and the flying experience of their companions. The position of each agent is represented in the X-Y plane by its position (Sx, Sy) and velocity (Vx, Vy), with Vx along the X-axis and Vy along the Y-axis. Modification of the agent position is realized using this position and velocity information.
           
Bird flocking optimizes a certain objective function. Each agent knows its best value so far, called pbest, which contains the information on position and velocity. This information is the analogy of the personal experience of each agent. Moreover, each agent knows the best value so far in the group, gbest, among the pbests. This information is the analogy of the knowledge of how the other neighbouring agents have performed. Each agent tries to modify its position by considering its current position (Sx, Sy), its current velocity (Vx, Vy), the individual intelligence (pbest), and the group intelligence (gbest).

The following equations are utilized, in computing the position and velocities, in the X-Y plane:

Vid = W × Vid + C1 × rand × (Pid − Xid) + C2 × rand × (Pgd − Xid)        (2.5)
Xid = Xid + Vid                                                          (2.6)

Where,
Vid       : particle velocity
Xid       : current particle position (solution)
Pid       : pbest
Pgd       : gbest
rand      : random number between (0, 1)
C1 and C2 : learning factors, with c1min = c2min = 0.5 and c1max = c2max = 2.5:

C1 = c1max − ((c1max − c1min)/itermax) × iter
C2 = c2min + ((c2max − c2min)/itermax) × iter

W is the inertia weighting factor, decreased linearly over the run:

W = Wmax − ((Wmax − Wmin)/itermax) × iter

Where,
Wmax    - initial weight (taken as 0.9)
Wmin    - final weight (taken as 0.1)
iter    - current iteration number
itermax - maximum iterations

The particle velocities are clamped between -0.2 and +0.2.
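The update rules (2.5) and (2.6), together with the parameter schedules above, can be sketched as a minimal PSO loop. The fitness function here is a stand-in (minimizing x²), not the project's congestion objective, and all values are illustrative:

```python
import random

random.seed(0)

def fitness(x):
    return x * x                   # stand-in objective: minimize x^2

n_particles, iter_max = 10, 50
w_max, w_min = 0.9, 0.1            # inertia weight, 0.9 down to 0.1
c_min, c_max = 0.5, 2.5            # learning-factor bounds
v_lim = 0.2                        # velocity clamp

X = [random.uniform(-1.0, 1.0) for _ in range(n_particles)]
V = [random.uniform(-v_lim, v_lim) for _ in range(n_particles)]
pbest = X[:]                       # personal best positions
gbest = min(pbest, key=fitness)    # global best position

for it in range(iter_max):
    w = w_max - (w_max - w_min) / iter_max * it    # linearly decreasing inertia
    c1 = c_max - (c_max - c_min) / iter_max * it   # C1 decreases over the run
    c2 = c_min + (c_max - c_min) / iter_max * it   # C2 increases over the run
    for i in range(n_particles):
        V[i] = (w * V[i]
                + c1 * random.random() * (pbest[i] - X[i])   # pull toward pbest
                + c2 * random.random() * (gbest - X[i]))     # pull toward gbest
        V[i] = max(-v_lim, min(v_lim, V[i]))                 # clamp velocity
        X[i] += V[i]                                         # equation (2.6)
        if fitness(X[i]) < fitness(pbest[i]):
            pbest[i] = X[i]
    gbest = min(pbest, key=fitness)

assert fitness(gbest) < 1e-2       # the swarm converges near the optimum x = 0
```

In the actual project, X would hold candidate TCSC settings, the constraint of Section 6.2 would bound each dimension, and the fitness would measure line power flow rather than x².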




6.4 PSO Algorithm for Congestion management

Step 1: Each particle is initialized with random position and velocity vectors.
Step 2: The initial population of individuals is created by satisfying the FACTS device constraints.
Step 3: For each individual in the population, the fitness function is evaluated.
Step 4: The velocity is updated by equation (2.5) and a new population is created by equation (2.6).
Step 5: For each individual in the new population, the fitness function is evaluated using the updated velocities and positions.
Step 6: Initial and updated fitness values are combined, and the better individuals are passed to the next iteration.
Step 7: If the maximum iteration number is reached, go to the next step; else go to step 3.
Step 8: Print the best individual's settings and loss value.