Saturday, July 10, 2010

Thermal Imaging Technology

What is Infrared Radiation?

The Latin prefix "infra" means "below" or "beneath." Thus "infrared" refers to the region beyond or beneath the red end of the visible color spectrum. The infrared region is located between the visible and microwave regions of the electromagnetic spectrum. Because heated objects radiate energy in the infrared, it is often referred to as the heat region of the spectrum. All objects radiate some energy in the infrared, even objects at room temperature and frozen objects such as ice.

The higher the temperature of an object, the higher the spectral radiant energy, or emittance, at all wavelengths and the shorter the predominant or peak wavelength of the emissions. Peak emissions from objects at room temperature occur at 10 µm. The sun has an equivalent temperature of 5900 K and a peak wavelength of 0.53 µm (green light). It emits copious amounts of energy from the ultraviolet to beyond the far IR region.
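These peak wavelengths follow from Wien's displacement law: the peak wavelength in µm is roughly 2898 divided by the absolute temperature in kelvin. As a quick sanity check (a minimal sketch added here for illustration, not part of the original paper), the figures above can be reproduced in a few lines of Python:

    # Wien's displacement law: peak emission wavelength vs. absolute temperature
    WIEN_CONSTANT_UM_K = 2898.0  # Wien's displacement constant, in µm·K

    def peak_wavelength_um(temperature_k):
        """Peak emission wavelength (µm) of a blackbody at the given temperature."""
        return WIEN_CONSTANT_UM_K / temperature_k

    print(peak_wavelength_um(300))   # room temperature: ~9.7 µm (the "10 µm" above)
    print(peak_wavelength_um(5900))  # the sun: ~0.49 µm, near the quoted 0.53 µm

The small difference for the sun reflects rounding and the exact definition of "peak" used.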
Much of the IR emission spectrum is unusable for detection systems because the radiation is absorbed by water or carbon dioxide in the atmosphere. There are several wavelength bands, however, with good transmission.
  • The long wavelength IR (LWIR) band spans roughly 8-14 µm, with nearly 100% transmission on the 9-12 µm band. The LWIR band offers excellent visibility of most terrestrial objects.
  • The medium wavelength IR (MWIR or MIR) band (3.3-5.0 µm) also offers nearly 100% transmission, with the added benefit of lower ambient background noise.
  • Visible and short wavelength IR (SWIR or near IR, NIR) light (0.35-2.5 µm) corresponds to a band of high atmospheric transmission and peak solar illumination, yielding detectors with the best clarity and resolution of the three bands. Without moonlight or artificial illumination, however, SWIR imagers provide poor or no imagery of objects at 300K. 

Infrared Detectors

An infrared detector is simply a transducer of radiant energy, converting radiant energy in the infrared into a measurable form. Infrared detectors can be used for a variety of applications in the military, scientific, industrial, medical, security and automotive arenas. Since infrared radiation does not rely on visible light, it offers the possibility of seeing in the dark or through obscured conditions, by detecting the infrared energy emitted by objects. The detected energy is translated into imagery showing the energy differences between objects, thus allowing an otherwise obscured scene to be seen. For example, the left image below is what you might see in ordinary light on a dark night. The image at right is the same scene as seen with an infrared camera. Hot objects such as people stand out from the typically cooler backgrounds regardless of the available visible light.
Under infrared light, the world reveals features not apparent under regular visible light: people and animals are easily seen in total darkness, weaknesses are revealed in structures, components close to failure glow brighter, and visibility is improved in adverse conditions such as smoke or fog.

Infrared Detector Types

There are two fundamental methods of IR detection: energy detection and photon detection. Energy detectors respond to temperature changes generated by incident IR radiation through changes in material properties. Photon detectors generate free electrical carriers through the interaction of photons and bound electrons. Energy detectors are low cost and typically used in single detector applications; common applications include fire detection systems and automatic light switches. However, the simplicity of fabricating large 2D focal plane arrays in semiconductors has led to the use of photon detectors in almost all advanced IR detection systems. Recent advances in micromachining and materials science have led to the exciting field of uncooled detectors, which promise lower system and operation costs.

Energy Detectors

The absorption of IR energy heats the detection element in energy or thermal detectors, leading to changes in physical properties which can be detected by external instrumentation and which can be correlated to the scene under observation. Energy detectors contain two elements, an absorber and a thermal transducer. The following are examples of energy detectors.

Thermocouples / Thermopiles

Thermocouples are formed by joining two dissimilar metals which create a voltage at their junction. This voltage is proportional to the temperature of the junction. When a scene is optically focused onto a thermocouple, its temperature increases or decreases as the incident IR flux increases or decreases. The change in IR flux emitted by the scene can be detected by monitoring the voltage generated by the thermocouple. For sensitive detection, the thermocouple must be thermally insulated from its surroundings. For fast response, the thermocouple must be able to quickly release built up heat. This tradeoff between sensitivity of detection and the ability to respond to quickly changing scenes is inherent to all energy detectors.
A thermopile is a series of thermocouples connected together to provide increased responsivity.
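As a rough illustration (a sketch with assumed numbers, not from the original text), the output of an idealized thermopile scales linearly with both the number of junctions and the temperature difference; the Seebeck coefficient below is merely in the range of a typical metal thermocouple junction:

    # Idealized thermopile output: N series thermocouples, each producing a
    # Seebeck voltage proportional to the temperature difference it sees.
    SEEBECK_UV_PER_K = 41.0  # roughly a type-K junction, in µV per kelvin (assumed)

    def thermopile_voltage_uv(n_junctions, delta_t_k):
        return n_junctions * SEEBECK_UV_PER_K * delta_t_k

    # A 0.01 K scene-induced rise seen by a 100-junction thermopile:
    print(f"{thermopile_voltage_uv(100, 0.01):.1f} µV")  # 41.0 µV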

Pyroelectric Detectors

Pyroelectric detectors consist of a polarized material which, when subjected to changes in temperature, changes polarization. These detectors operate in a chopped system; the fluctuation in the exposure to the scene generates a corresponding fluctuation in polarization and thus an alternating current that can be monitored with an external amplifier.

Ferroelectric Detectors

Similar to pyroelectric detectors, ferroelectric detectors are based on a polarized material which, when subjected to changes in temperature, changes polarization.

Thermistors / Bolometers / Microbolometers

In thermistors, the resistance of the elements varies with temperature. One example of a thermistor is a bolometer. Bolometers function in one of two ways: monitoring voltage with constant current or monitoring current with constant voltage.
Advances in the micromachining of silicon have led to the exciting field of microbolometers. A microbolometer consists of an array of bolometers fabricated directly onto a silicon readout circuit. This technology has demonstrated excellent imagery in the IR. Although the performance of microbolometers currently falls short of that of photon detectors, development is underway to close the performance gap. Microbolometers can operate near room temperature and therefore do not need evacuated, cryogenically cooled dewars. This advantage brings with it the possibility of producing low cost night vision systems for both military and commercial markets.
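To make the thermistor principle concrete, here is a minimal sketch using the common beta model for thermistor resistance; the component values are assumptions chosen for illustration, not parameters of any particular microbolometer:

    import math

    # Thermistor beta model: resistance falls exponentially as temperature rises.
    R0, T0 = 10_000.0, 298.15  # 10 kΩ at 25 °C (illustrative values)
    BETA = 3950.0              # a common thermistor beta constant, in kelvin

    def resistance_ohms(temp_k):
        return R0 * math.exp(BETA * (1.0 / temp_k - 1.0 / T0))

    # Even a 0.1 K scene-induced temperature rise changes the readout measurably:
    print(f"{resistance_ohms(298.15):.0f} -> {resistance_ohms(298.25):.0f} ohms")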

Microcantilevers

Microcantilevers use the bimetal effect to measure IR radiation. This effect utilizes the difference in the thermal expansion coefficients of two different metals to cause a displacement of a microcantilever. In combination with a reference plate, this cantilever forms a capacitance. When infrared light is absorbed by the microcantilever, the microcantilever deflects and thus alters the capacitance of the structure. This change in capacitance is a measure of the incident infrared radiation.
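The capacitance change can be estimated with the ideal parallel-plate formula C = ε₀A/d. The sketch below uses made-up plate dimensions purely to show the order of magnitude involved:

    # Parallel-plate model of a microcantilever IR detector (illustrative numbers)
    EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

    def capacitance_f(area_m2, gap_m):
        return EPSILON_0 * area_m2 / gap_m

    area = (50e-6) ** 2   # hypothetical 50 µm x 50 µm plate
    gap_cold = 2.0e-6     # 2 µm gap with no IR absorbed
    gap_warm = 1.9e-6     # gap after a 0.1 µm thermally induced deflection

    delta_c = capacitance_f(area, gap_warm) - capacitance_f(area, gap_cold)
    print(f"capacitance change: {delta_c * 1e15:.2f} fF")  # ~0.58 fF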

Photon Detectors

Light interacts directly with the semiconductors in photon detectors to generate electrical carriers. Because these detectors do not function by changing temperature, they respond faster than energy detectors. However, these detectors will also pick up the IR radiation generated by their own mountings and accompanying optics and thus must be cooled to cryogenic temperatures to minimize background noise. The following are examples of photon detectors.

Intrinsic Detectors

Photovoltaic Intrinsic Detectors
Photovoltaic (PV) detectors generate photocurrents which can be monitored with a trans-impedance amplifier. These photocurrents are created when incident light with energy greater than or equal to the energy gap of the semiconductor strikes the detector, causing excited minority carriers to be swept across the photodiode's electrical junction.

PV devices operate in the diode's reverse bias region; this minimizes the current flow through the device which in turn minimizes power dissipation. In addition, PV detectors are low noise because the reverse bias diode junction is depleted of minority carriers. The highest performance PV detectors are fabricated from Si, Ge, GaAs, InSb, InGaAs, and from HgCdTe (MCT).
Photoconductive Intrinsic Detectors
Photoconductive (PC) detectors function similarly to PV detectors. Incident light with energy greater than or equal to the energy gap of the semiconductor generates majority electrical carriers. This results in a change in the resistance, and hence conductivity, of the detector. Examples of PC detector materials are lead sulfide (PbS), lead selenide (PbSe) and MCT.
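The cutoff wavelength of an intrinsic detector follows directly from the bandgap: λc (µm) ≈ 1.24 / Eg (eV). The sketch below uses approximate room-temperature bandgaps (values are indicative only); MCT is omitted because its bandgap, and hence cutoff, is tuned by composition:

    # Cutoff wavelength of an intrinsic photon detector from its bandgap.
    # Bandgap values are approximate room-temperature figures.
    BANDGAPS_EV = {
        "Si": 1.12,      # -> ~1.1 µm (NIR)
        "InGaAs": 0.75,  # -> ~1.7 µm (SWIR)
        "InSb": 0.17,    # -> ~7 µm
    }

    for material, eg in BANDGAPS_EV.items():
        print(f"{material}: cutoff ~{1.24 / eg:.1f} µm")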

Extrinsic Detectors

Extrinsic detectors are based on silicon (Si:X) or germanium (Ge:X) doped with impurities such as boron, arsenic and gallium. They are similar to intrinsic detectors. However, in extrinsic detectors carriers are excited from the impurity levels rather than over the bandgap of the base material. Both photovoltaic and photoconductive types exist.

Photo-emissive Detectors

Photo-emissive detectors are based on the emission of carriers from a metal into a semiconductor material through the absorption of light. A typical example is Platinum Silicide (PtSi) on Si.

Quantum Well Infrared Photodetector

The Quantum Well Infrared Photodetector (QWIP) is an infrared detector that consists of multiple alternating thin gallium arsenide (GaAs) and aluminum gallium arsenide (AlGaAs) layers. Carriers are generated by absorption of IR light inside quantum wells.

Detector Types and Materials Overview
The table below summarizes the main detector types and materials.
Many of these IR materials are based on compound semiconductors made of the III-V elements indium, gallium, arsenic and antimony, the II-VI elements mercury, cadmium and tellurium, or the IV-VI elements lead, sulfur and selenium. They can be combined into binary compounds such as GaAs, InSb, PbS and PbSe or into ternaries such as InGaAs or HgCdTe.

Infrared Detector Formats and Architectures

Infrared detectors are available as single element detectors in circular, rectangular, cruciform, and other geometries for reticle systems, as linear arrays, and as 2D focal plane arrays (FPAs).
Single element detectors are normally frontside illuminated and wire bonded devices. Linear and 2D arrays may be fabricated with a variety of device and signal output architectures.
First generation linear arrays were usually frontside illuminated, with the detector signal output connected by wire bonding to each element in the array. The signal from each element was then brought out of the vacuum package and connected to an individual room temperature preamplifier prior to interfacing with the imaging system display. Gain adjustments were usually made in the preamplifier circuitry. This approach limited first generation linear arrays to less than two hundred elements.
Second generation arrays, both linear and 2D, are frequently backside illuminated through a transparent substrate. Several alternative focal plane architectures are illustrated in the graph below.
The diagram below illustrates a detector array which is electrically connected directly to an array of preamplifiers and/or switches called a readout. The electrical connection is made with indium "bumps" which provide a soft metal interconnect for each pixel. This arrangement, commonly referred to as a "direct hybrid", facilitates the interconnection of large numbers of pixels to individual preamplifiers coupled with row and column multiplexers.
Indirect hybrid configurations (b) may be used with large linear arrays to interface the detector with a substrate having a similar thermal coefficient of expansion. These hybrids may also be used for serial hybridization, allowing the detector to be tested prior to committing the readout, and/or to accommodate readout unit cells having dimensions larger than the detector unit cell, increasing the charge storage capacity and thereby extending the dynamic range. Readouts and detectors are electrically interconnected by a patterned metal bus on a fanout substrate.
Monolithic detector arrays (c) have integrated detector and readout functions. Generally, in these arrays, the command and control signal processing electronics are adjacent to the detector array, rather than underneath. In this case, the signal processing circuits may be connected to the detector by wire bonds. In the monolithic configuration it is not necessary for the signal processing circuits to be on the same substrate as the detector/readout (as shown in the figure) or at the same temperature as the detector. Monolithic PtSi detector arrays can be made with signal processing incorporated on the periphery of the detector/readout chip through the use of silicon-based detector technology.
Z technology, as illustrated in figure (d), provides extended signal processing real estate for each pixel in the readout chip by extending the structure in the orthogonal direction. In the approach illustrated, stacked, thinned readout chips are glued together, and the detector array is connected to the edge of this signal processing stack with indium.
Finally, a "Loophole" approach, as illustrated in figure (e), relies on thinning the detector material after adhesively bonding it to the silicon readout. Detector elements are connected to the underlying readout with vias, which are etched through the detector material to contact pads on the readout and metallized.

History and Trends of Infrared Detectors

Infrared detectors are in general used to detect, image, and measure patterns of the thermal heat radiation which all objects emit. Early devices consisted of single detector elements that relied on a change in the temperature of the detector. Early thermal detectors were thermocouples and bolometers which are still used today. Thermal detectors are generally sensitive to all infrared wavelengths and operate at room temperature. Under these conditions, they have relatively low sensitivity and slow response.
Photon detectors were developed to improve sensitivity and response time. These detectors have been extensively developed since the 1940's. Lead sulfide (PbS) was the first practical IR detector. It is sensitive to infrared wavelengths up to ~3 µm.
Beginning in the late 1940's and continuing into the 1950's, a wide variety of new materials were developed for IR sensing. Lead selenide (PbSe), lead telluride (PbTe), and indium antimonide (InSb) extended the spectral range beyond that of PbS, providing sensitivity in the 3-5 µm medium wavelength (MWIR) atmospheric window.
The end of the 1950's saw the first introduction of semiconductor alloys, in the chemical table group III-V, IV-VI, and II-VI material systems. These alloys allowed the bandgap of the semiconductor, and hence its spectral response, to be custom tailored for specific applications. MCT (HgCdTe), a group II-VI material, has today become the most widely used of the tunable bandgap materials.
As photolithography became available in the early 1960's it was applied to make IR sensor arrays. Linear array technology was first demonstrated in PbS, PbSe, and InSb detectors. Photovoltaic (PV) detector development began with the availability of single crystal InSb material.
In the late 1960's and early 1970's, "first generation" linear arrays of intrinsic MCT photoconductive detectors were developed. These allowed LWIR forward-looking infrared (FLIR) systems to operate at 80K with a single stage cryoengine, making them much more compact, lighter, and significantly lower in power consumption.
The 1970's witnessed a mushrooming of IR applications combined with the start of high volume production of first generation sensor systems using linear arrays.
At the same time, other significant detector technology developments were taking place. Silicon technology spawned novel platinum silicide (PtSi) detector devices which have become standard commercial products for a variety of MWIR high resolution applications.
The invention of charge coupled devices (CCDs) in the late 1960's made it possible to envision "second generation" detector arrays coupled with on-focal-plane electronic analog signal readouts which could multiplex the signal from a very large array of detectors. Early assessment of this concept showed that photovoltaic detectors such as InSb, PtSi, and MCT detectors or high impedance photoconductors such as PbSe, PbS, and extrinsic silicon detectors were promising candidates because they had impedances suitable for interfacing with the FET input of readout multiplexers. PC MCT was not suitable due to its low impedance. Therefore, in the late 1970's through the 1980's, MCT technology efforts focused almost exclusively on PV device development because of the need for low power and high impedance for interfacing to readout input circuits in large arrays. This effort has been paying off in the 1990's with the birth of second generation IR detectors, which provide large arrays in both linear and 2D formats: linear arrays use TDI for scanning systems, while staring systems use square and rectangular 2D formats.
Monolithic extrinsic silicon detectors were demonstrated first in the mid 1970's. The monolithic extrinsic silicon approach was subsequently set aside because the process of integrated circuit fabrication degraded the detector quality. Monolithic PtSi detectors, however, in which the detector can be formed after the readout is processed, are now widely available.
Second generation devices have now been demonstrated with many detector materials and device types, including PbS, PbSe, InSb, extrinsic Si, PtSi, and PV MCT.
It has taken nearly two decades since the invention of the CCD to mature the integration of IR detectors coupled with electronic readouts on the focal plane. This progress brought with it the transition from first generation to second generation device production. The size and complexity of infrared image detectors correspond to the evolution of silicon integrated circuit size and complexity; this can be seen through comparison to dynamic random access memory chip trends (see graph below). Note that DRAMs require just one transistor per unit cell, whereas infrared sensor readouts require three or more, one of which must be a low noise analog device.
 This paper was provided by ISG - a world leader in Thermal Imaging Technology.

Saturday, July 3, 2010

Basics of Internet Communication

To send data from a device on one local area network to a device on another LAN, a standard way of communicating is required, since local area networks may use different types of technologies. This need led to the development of IP addressing and the many IP-based protocols for communicating over the Internet, which is a global system of interconnected computer networks. (LANs may also use IP addressing and IP protocols for communicating within a local area network, although using MAC addresses is sufficient for internal communication.) Before IP addressing is discussed, some of the basic elements of Internet communication such as routers, firewalls and Internet service providers are covered below.
Routers
To forward data packets from one LAN to another LAN via the Internet, a piece of networking equipment called a router must be used. A router routes information from one network to another based on IP addresses. It forwards only data packets that are to be sent to another network. A router is most commonly used for connecting a local network to the Internet. Traditionally, routers were referred to as gateways.
Firewalls
A firewall is designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in both hardware and software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks that are connected to the Internet. Messages entering or leaving the Internet pass through the firewall, which examines each message, and blocks those that do not meet the specified security criteria.
Internet connections
In order to connect a LAN to the Internet, a network connection via an Internet service provider (ISP) must be established. When connecting to the Internet, terms such as upstream and downstream are used. Upstream describes the transfer rate with which data can be uploaded from the device to the Internet; for instance, when video is sent from a network camera. Downstream is the transfer speed for downloading files; for instance, when video is received by a monitoring PC.
In most scenarios — for example, a laptop that is connected to the Internet — downloading information from the Internet is the most important speed to consider. In a network video application with a network camera at a remote site, the upstream speed is more relevant since data (video) from the network camera will be uploaded to the Internet.
IP addressing
Any device that wants to communicate with other devices via the Internet must have a unique and appropriate IP address. IP addresses are used to identify the sending and receiving devices. There are currently two IP versions: IP version 4 (IPv4) and IP version 6 (IPv6). The main difference between the two is that an IPv6 address is longer (128 bits, compared with 32 bits for an IPv4 address). IPv4 addresses are most commonly used today.
IPv4 addresses
IPv4 addresses are grouped into four blocks, and each block is separated by a dot. Each block represents a number between 0 and 255; for example, 192.168.12.23.
Certain blocks of IPv4 addresses have been reserved exclusively for private use. These private IP addresses are 10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255 and 192.168.0.0 to 192.168.255.255. Such addresses can only be used on private networks and are not allowed to be forwarded through a router to the Internet. All devices that want to communicate over the Internet must have their own individual, public IP address. A public IP address is an address allocated by an Internet service provider. An ISP can allocate either a dynamic IP address, which can change during a session, or a static address, which normally comes with a monthly fee.
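As an aside (not part of the original text), these reserved ranges are baked into the standard library of most languages; for example, Python's ipaddress module (Python 3.3 and later) can classify an address directly:

    import ipaddress

    # The three private IPv4 ranges listed above are recognized by is_private
    for addr in ["10.1.2.3", "172.16.0.5", "192.168.12.23", "8.8.8.8"]:
        ip = ipaddress.ip_address(addr)
        print(addr, "private" if ip.is_private else "public")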
Ports
A port number defines a particular service or application so that the receiving server (e.g., network camera) will know how to process the incoming data. When a computer sends data tied to a specific application, it usually automatically adds the port number to an IP address without the user’s knowledge.
Port numbers can range from 0 to 65535. Certain applications use port numbers that are pre-assigned to them by the Internet Assigned Numbers Authority (IANA). For example, a web service via HTTP is typically mapped to port 80 on a network camera.
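To illustrate (a minimal sketch; the camera address is hypothetical), a client reaches a specific service by combining an IP address with a port number. Here a raw HTTP request is sent to port 80, the conventional web service port on a network camera:

    import socket

    CAMERA_IP = "192.168.12.23"  # hypothetical camera address
    HTTP_PORT = 80               # the port pre-assigned to HTTP by IANA

    with socket.create_connection((CAMERA_IP, HTTP_PORT), timeout=5) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\nHost: " + CAMERA_IP.encode() + b"\r\n\r\n")
        print(sock.recv(256))  # first bytes of the camera's HTTP response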
Setting IPv4 addresses
In order for a network camera or video encoder to work in an IP network, an IP address must be assigned to it. Setting an IPv4 address for an Axis network video product can be done mainly in two ways: 1) automatically using DHCP (Dynamic Host Configuration Protocol), and 2) manually by either entering into the network video product’s interface a static IP address, a subnet mask and the IP address of the default router, or using a management software tool such as AXIS Camera Management.
DHCP manages a pool of IP addresses, which it can assign dynamically to a network camera/video encoder. The DHCP function is often performed by a broadband router, which in turn gets its IP addresses from an Internet service provider. Using a dynamic IP address means that the IP address for a network device may change from day to day. With dynamic IP addresses, it is recommended that users register a domain name (e.g., www.mycamera.com) for the network video product at a dynamic DNS (Domain Name System) server, which can always tie the domain name for the product to any IP address that is currently assigned to it.
Using DHCP to set an IPv4 address works as follows. When a network camera/video encoder comes online, it sends a query requesting configuration from a DHCP server. The DHCP server replies with an IP address and subnet mask. The network video product can then update a dynamic DNS server with its current IP address so that users can access the product using a domain name.
With AXIS Camera Management, the software can automatically find and set IP addresses and show the connection status. The software can also be used to assign static, private IP addresses for Axis network video products. This is recommended when using video management software to access network video products. In a network video system with potentially hundreds of cameras, a software program such as AXIS Camera Management is necessary in order to effectively manage the system.
NAT (Network address translation)
When a network device with a private IP address wants to send information via the Internet, it must do so using a router that supports NAT. Using this technique, the router can translate a private IP address into a public IP address without the sending host’s knowledge.
Port forwarding
To access cameras that are located on a private LAN via the Internet, the public IP address of the router should be used together with the corresponding port number for the network camera/video encoder on the private network.
Since a web service via HTTP is typically mapped to port 80, what happens then when there are several network cameras/video encoders using port 80 for HTTP in a private network? Instead of changing the default HTTP port number for each network video product, a router can be configured to associate a unique HTTP port number to a particular network video product’s IP address and default HTTP port. This is a process called port forwarding.
Port forwarding works as follows. Incoming data packets reach the router via the router’s public (external) IP address and a specific port number. The router is configured to forward any data coming into a predefined port number to a specific device on the private network side of the router. The router then replaces the sender’s address with its own private (internal) IP address. To a receiving client, it looks like the packets originated from the router. The reverse happens with outgoing data packets. The router replaces the private IP address of the source device with the router’s public IP address before the data is sent out over the Internet.

Thanks to port forwarding in the router, network cameras with private IP addresses on a local network can be accessed over the Internet. In this illustration, the router knows to forward data (request) coming into port 8032 to a network camera with a private IP address of 192.168.10.13 port 80. The network camera can then begin to send video.
Port forwarding is traditionally done by first configuring the router. Different routers have different ways of doing port forwarding, and there are web sites such as www.portfoward.com that offer step-by-step instructions for different routers. Usually port forwarding involves bringing up the router’s interface using an Internet browser, and entering the public (external) IP address of the router and a unique port number that is then mapped to the internal IP address of the specific network video product and its port number for the application.
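For readers who prefer code to configuration screens, the sketch below mimics what a router's port-forwarding rule does, using the example numbers from the illustration above. This is a toy user-space relay for illustration only, not how a router is actually implemented:

    import socket
    import threading

    LISTEN_PORT = 8032                    # the router's public-facing port
    CAMERA_ADDR = ("192.168.10.13", 80)   # the camera on the private network

    def pipe(src, dst):
        """Copy bytes one way until the connection closes."""
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def serve():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("", LISTEN_PORT))
        server.listen(5)
        while True:
            client, _ = server.accept()
            camera = socket.create_connection(CAMERA_ADDR)
            # Relay traffic both ways, like the forwarding rule in the router
            threading.Thread(target=pipe, args=(client, camera), daemon=True).start()
            threading.Thread(target=pipe, args=(camera, client), daemon=True).start()

    if __name__ == "__main__":
        serve()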
To make the task of port forwarding easier, Axis offers the NAT traversal feature in many of its network video products. NAT traversal will automatically attempt to configure port mapping in a NAT router on the network using UPnP™. In the network video product interface, users can manually enter the IP address of the NAT router. If a router is not manually specified, then the network video product will automatically search for NAT routers on the network and select the default router. In addition, the service will automatically select an HTTP port if none is manually entered.
IPv6 addresses
An IPv6 address is written in hexadecimal notation with colons subdividing the address into eight blocks of 16 bits each; for example, 2001:0da8:65b4:05d3:1315:7c1f:0461:7847.
The major advantages of IPv6, apart from the availability of a huge number of IP addresses, include enabling a device to automatically configure its IP address using its MAC address. For communication over the Internet, the host requests and receives from the router the necessary prefix of the public address block and additional information. The prefix and the host’s suffix are then used, so DHCP for IP address allocation and manual setting of IP addresses are no longer required with IPv6. Port forwarding is also no longer needed. Other benefits of IPv6 include renumbering to simplify switching entire corporate networks between providers, faster routing, point-to-point encryption according to IPsec, and connectivity using the same address in changing networks (Mobile IPv6).
An IPv6 address is enclosed in square brackets in a URL and a specific port can be addressed in the following way: http://[2001:0da8:65b4:05d3:1315:7c1f:0461:7847]:8081/
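A URL parser understands this bracket notation; for instance, Python's urllib separates the IPv6 address from the port (a small illustration added here):

    from urllib.parse import urlsplit

    parts = urlsplit("http://[2001:0da8:65b4:05d3:1315:7c1f:0461:7847]:8081/")
    print(parts.hostname)  # 2001:0da8:65b4:05d3:1315:7c1f:0461:7847 (no brackets)
    print(parts.port)      # 8081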
Setting an IPv6 address for an Axis network video product is as simple as checking a box to enable IPv6 in the product. The product will then receive an IPv6 address according to the configuration in the network router.
Data transport protocols for network video
The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) are the IP-based protocols used for sending data. These transport protocols act as carriers for many other protocols. For example, HTTP (Hyper Text Transfer Protocol), which is used to browse web pages on servers around the world using the Internet, is carried by TCP.
TCP provides a reliable, connection-based transmission channel. It handles the process of breaking large chunks of data into smaller packets and ensures that data sent from one end is received on the other. TCP’s reliability through retransmission may introduce significant delays. In general, TCP is used when reliable communication is preferred over transport latency.
UDP is a connectionless protocol and does not guarantee the delivery of data sent, thus leaving the whole control mechanism and error-checking to the application itself. UDP provides no retransmission of lost data and, therefore, does not introduce further delays.
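The difference is visible at the socket level. A minimal sketch (loopback addresses, nothing actually listening on the other end) contrasts the two transports:

    import socket

    # UDP: connectionless - each sendto() is a self-contained datagram with
    # no delivery guarantee and no retransmission, hence no added delay.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"video frame 1", ("127.0.0.1", 9000))  # fire and forget
    udp.close()

    # TCP: connection-based - the stack segments, acknowledges and retransmits,
    # so delivery is reliable but latency may grow under packet loss.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        tcp.connect(("127.0.0.1", 9000))  # fails unless a listener exists
        tcp.sendall(b"configuration command")
    except OSError as exc:
        print("TCP connect failed (no listener):", exc)
    finally:
        tcp.close()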

Saturday, June 12, 2010

How do motion sensing lights and Burglar Alarms work?

There are many different ways to create a motion sensor. For example:
  • It is common for stores to have a beam of light crossing the room near the door, and a photosensor on the other side of the room. When a customer breaks the beam, the photosensor detects the change in the amount of light and rings a bell.
  • Many grocery stores have automatic door openers that use a very simple form of radar to detect when someone passes near the door. The box above the door sends out a burst of microwave radio energy and waits for the reflected energy to bounce back. When a person moves into the field of microwave energy, it changes the amount of reflected energy or the time it takes for the reflection to arrive, and the box opens the door. Since these devices use radar, they often set off radar detectors.
  • The same thing can be done with ultrasonic sound waves, bouncing them off a target and waiting for the echo.
All of these are active sensors. They inject energy (light, microwaves or sound) into the environment in order to detect a change of some sort.
The "motion sensing" feature on most lights (and security systems) is a passive system that detects infrared energy. These sensors are therefore known as PIR (passive infrared) detectors or pyroelectric sensors. In order to make a sensor that can detect a human being, you need to make the sensor sensitive to the temperature of a human body. Humans, having a skin temperature of about 93 degrees F, radiate infrared energy with a wavelength between 9 and 10 micrometers. Therefore, the sensors are typically sensitive in the range of 8 to 12 micrometers.
The devices themselves are simple electronic components not unlike a photosensor. The infrared light bumps electrons off a substrate, and these electrons can be detected and amplified into a signal.
You have probably noticed that your light is sensitive to motion, but not to a person who is standing still. That's because the electronics package attached to the sensor is looking for a fairly rapid change in the amount of infrared energy it is seeing. When a person walks by, the amount of infrared energy in the field of view changes rapidly and is easily detected. You do not want the sensor detecting slower changes, like the sidewalk cooling off at night.
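In other words, the trigger logic is essentially a threshold on the rate of change of the sensed IR level. A toy sketch (with made-up sample values) captures the idea:

    # Toy PIR trigger: react to fast changes in the sensed IR level, ignore
    # slow drift such as the sidewalk cooling off at night.
    def detect_motion(samples, threshold=5.0):
        """Yield indices where the IR level jumps faster than the threshold."""
        for i in range(1, len(samples)):
            if abs(samples[i] - samples[i - 1]) > threshold:
                yield i

    ir_levels = [100, 100.2, 100.1, 112, 111, 100.5, 100.4]  # spike = person
    print(list(detect_motion(ir_levels)))  # -> [3, 5] (rapid rise, rapid fall)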
Your motion sensing light has a wide field of view because of the lens covering the sensor. Infrared energy is a form of light, so you can focus and bend it with plastic lenses. But it's not like there is a 2-D array of sensors in there. There is a single (or sometimes two) sensors inside looking for changes in infrared energy.
If you have a burglar alarm with motion sensors, you may have noticed that the motion sensors cannot "see" you when you are outside looking through a window. That is because glass is not very transparent to infrared energy. This, by the way, is the basis of a greenhouse. Light passes through the glass into the greenhouse and heats things up inside the greenhouse. The glass is then opaque to the infrared energy these heated things are emitting, so the heat is trapped inside the greenhouse. It makes sense that a motion detector sensitive to infrared energy cannot see through glass windows.

Tuesday, June 1, 2010

Fingerprint Verification

What is fingerprint verification technology?

A fingerprint in its narrow sense is an impression left by the friction ridges of a human finger. In a wider use of the term, fingerprints are the traces of an impression from the friction ridges of any part of a human or other primate hand. 

Since the early 20th century, fingerprint detection and analysis has been one of the most common and important forms of crime scene forensic investigation. More crimes have been solved with fingerprint evidence than with any other type of evidence. This fact has forced assailants to cover their hands during the commission of their crimes, making gloves the most essential tool for any would-be criminal perpetrator.
Fingerprint verification methods
There are two main sensing methods: optical and capacitive.

Optical fingerprint imaging involves capturing a digital image of the print using visible light. This type of sensor is, in essence, a specialized digital camera. The top layer of the sensor, where the finger is placed, is known as the touch surface. Beneath this layer is a light-emitting phosphor layer which illuminates the surface of the finger. The light reflected from the finger passes through the phosphor layer to an array of solid state pixels (a charge-coupled device) which captures a visual image of the fingerprint. A scratched or dirty touch surface can cause a bad image of the fingerprint. A disadvantage of this type of sensor is the fact that the imaging capabilities are affected by the quality of skin on the finger. For instance, a dirty or marked finger is difficult to image properly. Also, it is possible for an individual to erode the outer layer of skin on the fingertips to the point where the fingerprint is no longer visible. It can also be easily fooled by an image of a fingerprint if not coupled with a "live finger" detector. 

Capacitance sensors use principles associated with capacitance in order to form fingerprint images. In this method of imaging, the sensor array pixels each act as one plate of a parallel-plate capacitor, the dermal layer (which is electrically conductive) acts as the other plate, and the non-conductive epidermal layer acts as a dielectric.

Coverage
Fingerprint verification is widely used in access control, building management, banking, airport information systems, etc.

Video Transmission & Compression

During the past 18 years, traffic and freeway management agencies have been integrating the use of CCTV cameras into their operational programs. The heavy use of this technology has created a need to deploy very high bandwidth communication networks. The transmission of video is not very different from voice or data. Video is transmitted in either an analog or digital format. Video transmitted in an analog format must travel over coaxial cable or fiber optic cable. The bandwidth requirements cannot be easily handled by twisted pair configurations.
Video can be transmitted in a digital format via twisted pair. It can be transmitted in a broadband arrangement as full quality and full motion, or as a compressed signal offering lower image or motion qualities. Via twisted pair, video is either transmitted in a compressed format, or sent frame-by-frame. The frame-by-frame process is usually called "slow-scan video".
Full color broadcast analog video requires a substantial amount of bandwidth that far exceeds the capacity of the typical twisted pair analog voice communication circuit of 4 kHz. Early commercial television networks were connected via coaxial cable systems provided by AT&T Long Distance. These networks were very costly to operate and maintain, and had limited capability.
Transmission of analog video requires large amounts of bandwidth, and power. The most common use of analog video (outside of commercial broadcast TV) is for closed circuit surveillance systems. The cameras used in these systems use less bandwidth than traditional broadcast quality cameras, and are only required to send a signal for several hundred feet. For transmission distances (of analog video) of more than 500 feet, the system designer must resort to the use of triaxial cable, or fiber optics. Depending upon other requirements, the system designer can convert the video to another signal format. The video can be converted to a radio (or light) frequency, digitized, or compressed.
Cable companies have traditionally converted television broadcast signals to a radio frequency. With this technique, they can provide from 8 to 40 analog channels in a cable system using coaxial cable (more about multiplexing later in this chapter). Cable company operators wanting to provide hundreds of program channels will convert the video to a radio frequency, and then digitize. The cable company is able to take advantage of using both fiber and coaxial cable. These are called HFC (hybrid fiber coax) systems. Fiber is used to get the signal from the cable company main broadcast center to a group of houses. The existing coaxial cable is used to supply the signal to individual houses.
Early freeway management systems used analog video converted to RF and transmitted over coaxial cable. Later systems used fiber optic cable with either RF signal conversion, or frequency division multiplexing (see Multiplexing in this chapter).
With the introduction of high-bandwidth microprocessors and efficient video compression algorithms, there has been a shift from analog video transmission systems to digital systems. New processes such as Video over IP (Internet Protocol) and streaming video allow for the broadcast of video incident images to many user agencies via relatively low-cost communication networks. Before looking at the systems, let's take a look at the various types of video compression schemes.
Video Compression
Compressed Video – Since the mid-1990s, FMS system designers have turned to digital compression of video to maximize resources, and reduce overall communication systems costs. The digital compression of video allows system operators to move video between operation centers using standard communication networks technologies.
Video compression systems can be divided into two categories – hardware compression and software compression. All video compression systems use a Codec. The term Codec is an abbreviation for coder/decoder. A codec can be either a software application or a piece of hardware that processes video through complex algorithms, which compress the file and then decompress it for playback. Unlike other kinds of file-compression packages that require you to compress/decompress a file before viewing, video codecs decompress the video on the fly, allowing immediate viewing. This discussion will focus on hardware compression technologies.
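Some back-of-envelope arithmetic shows why compression matters. The frame size, rate and compression ratios below are illustrative assumptions, not figures from this chapter:

    # Rough bitrate arithmetic for digital CCTV video (illustrative numbers)
    width, height = 704, 480  # a common full-resolution digital CCTV frame
    fps = 30                  # frames per second
    bits_per_pixel = 16       # e.g., 4:2:2 sampling at 8 bits per component

    raw_bps = width * height * fps * bits_per_pixel
    print(f"uncompressed: {raw_bps / 1e6:.0f} Mbit/s")  # ~162 Mbit/s

    for name, ratio in [("~20:1 compression", 20), ("~100:1 compression", 100)]:
        print(f"{name}: {raw_bps / ratio / 1e6:.1f} Mbit/s")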

Thursday, May 20, 2010

Different ways to shutdown your PC

This post is not related to CCTV / Access Control / Attendance Systems.


1. The standard approach - click the Start Button with your mouse, then select the Turn Off menu and finally click the Turn Off icon on the Turn Off computer dialog.

2. Press Ctrl+Esc or the Win key, then press U twice - the fastest approach.

3. Get the Shutdown utility from Download.com - it adds shutdown shortcuts for you. Otherwise, create them yourself using approach 4.

4. Create a shutdown shortcut on your desktop. Right-click on the desktop, choose New > Shortcut and type shutdown -s -t 00 in the area where you are asked to specify the location of the program file. Now you can just double-click this icon to turn off the computer. The best location would be your Quick Launch bar.

5. Press the Win key + R to open the Run window. Type shutdown -s -t 00. [-s means shutdown, while -t specifies the delay in seconds after which the shutdown process begins.]

If some open processes or applications won't let you turn off, append a -f switch to force a shutdown by closing all active processes.

6. Press Win+M to minimize all windows and then Alt+F4 to bring up the Turn Off computer dialog.

7. Open Windows Task Manager (by right-clicking the Windows taskbar or pressing Ctrl+Alt+Del) and choose Shut Down from the menu. Useful when Windows is not responding.

8. Open Task Manager, click on the Shut Down menu, then hold the Ctrl key and click Turn Off: the PC will be turned off in about 3 seconds - the fastest method other than a hard shutdown.

Saturday, May 1, 2010

How does a Network Camera work ?

An IP Network Video Camera is a video camera with a built-in web server that can be controlled, monitored and viewed from virtually any location via high-speed Internet access.


A Network Camera has its own IP address and built-in computing functions to handle network communication. Everything required for viewing images over the network is built into the unit. An IP Network Video Camera can be described as a camera and a computer combined. It is connected directly to the network like any other network device, and it has built-in software for a web server, FTP server, FTP client and e-mail client. It also includes alarm input and relay output. More advanced Network Cameras can also be equipped with many other value-added functions such as motion detection and an analog video output.

The Network Camera's camera component captures the image, which can be described as light of different wavelengths, and transforms it into electrical signals. These signals are then converted from analog to digital format and transferred into the computer function, where the image is compressed and sent out over the network.

The lens of the Network Camera focuses the image onto the image sensor (CCD). Before reaching the image sensor, the image passes through the optical filter, which removes any infrared light so that the "correct" colors will be displayed. The image sensor converts the image, which is composed of light information, into electrical signals. These electrical, digital signals are now in a format that can be compressed and transferred over networks. The camera functions to manage the exposure (light level of the image), white balance (adjusting the color levels), image sharpness, and other aspects of image quality.
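Because the web server is built in, grabbing a still image can be as simple as an HTTP request. The sketch below is illustrative only - the address and snapshot path are hypothetical and vary by camera vendor:

    import urllib.request

    # Hypothetical address and path; consult your camera's documentation
    SNAPSHOT_URL = "http://192.168.1.90/jpg/image.jpg"

    with urllib.request.urlopen(SNAPSHOT_URL, timeout=5) as response:
        jpeg_bytes = response.read()

    with open("snapshot.jpg", "wb") as f:
        f.write(jpeg_bytes)
    print(f"saved {len(jpeg_bytes)} bytes")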
  A single camera setup
  • The camera turns video & audio into data
  • The camera connects to your Network, or directly to your Router, and transmits this data onto the network
  • This data can then be viewed as high quality images, and audio on any authorised PC, Mac or Mobile Phone; on the local network, or over the internet
  • The Recording Software supplied can be used to record and view up to 64 cameras on any compatible Windows PC or Laptop.
 To set up more than one camera
  • Each camera turns video & audio into data
  • Each camera connects to your Network via a Network Switch and transmits its data onto the network
  • This data can then be viewed as high quality images, and audio on any authorised PC, Mac or Mobile Phone; on the local network, or over the internet
  • The Recording Software supplied can be used to record and view up to 64 cameras on any compatible Windows PC or Laptop
 To set up multiple cameras over multiple sites
  • In the example below each site has 2 IP cameras
  • At site 1 the cameras are connected to the local network and recorded on a Laptop running the Xvision Recording Software
  • The cameras are also connected to the Internet via the router
  • At site 2 the cameras are connected to the Internet, no local recording or viewing is taking place
  • At Head Office the cameras are being recorded and viewed live on a Laptop running the Xvision Recording Software
  • The cameras can also be viewed from an iPhone (over 3G) by the Managing Director when he is out of the office.

License Plate Recognition, a Twenty-First Century Fact of Life

Terrorists were the intended targets for the first license plate readers deployed by New York City.  It was 2006 and the NYPD was involved in what was known as the Lower Manhattan Security Initiative, a counter-terrorism plan that involved setting up movable, random roadblocks in the Financial District. Thousands of cameras provided ancillary surveillance in the area south of Canal Street but the program revolved around special ones equipped with license plate reading technology.



Thank goodness the NYPD has been successful so far in quelling terrorist plots. They have expanded their use of license plate readers to attack everyday crime wherever it may be happening. According to an article in the New York Times, as of April 2011, New York was using 238 license plate readers. Of these, 130 are mobile, mounted on the backs of police cars that might be patrolling any street in the city’s five boroughs. The other 108 are fixed posts at city bridges and tunnels, as well as above other thoroughfares. License plate reading cameras differ from other surveillance IP cameras that monitor broad areas in that they are designed to focus on a small area, and are aimed low to the ground.

Police tracked down 3,659 stolen vehicles, and issued traffic tickets for 34,969 unregistered ones. In the period from 2010 to 2011 alone, they identified and recovered 248 vehicles bearing stolen license plates.

Divisions dealing with felonies have used the technology to their advantage as well. In 2011 a bank robber was apprehended after hijacking a livery cab in New Jersey and driving it through the Lincoln Tunnel to New York. Somewhere along the route, the license plate was detected and the car traced to a specific block in Queens. FBI agents, alerted by the NYPD, surveyed the block and the next morning apprehended the suspect, who had a loaded pistol in his possession.
In another case of violent crime, a murder suspect was arrested after several cameras spotted his plates in various locations. The police had but to connect the dots to find him sequestered in a closet in a relative’s home. 

How does this work in a city measuring 304.8 square miles (or 468.9 square miles if one counts the 165.6 square miles of water)? The data captured on the cameras are continuously checked against specific databases containing information on stolen vehicles, stolen license plates, and unregistered vehicles. In addition, the cameras’ files are downloaded twice daily to central computers where personnel update the databases each time. Investigators are then able to retrieve new information such as the license plate of a new suspect or the stolen license plate of one they’ve lost track of.

Technology Highlights:
This technology is gaining popularity in security and traffic installations. The concept assumes that every vehicle already displays its identity (the plate!), so no additional transmitter or transponder needs to be installed on the car.
The system uses illumination (such as infra-red) and a camera to take an image of the front or rear of the vehicle; image-processing software then analyzes the images and extracts the plate information. This data is used for enforcement, data collection, and (as in the access-control system featured above) can be used to open a gate if the car is authorized, or to keep a time record of entry or exit for automatic payment calculations.
A significant advantage of the LPR system is that it can keep an image record of the vehicle, which is useful for fighting crime and fraud ("an image is worth a thousand words"). An additional camera can focus on the driver's face and save the image for security reasons. Additionally, this technology requires no installation per car, unlike other technologies that require a transmitter to be added to each car or carried by the driver.

LPR is known by several other names, including:
  • Automatic Vehicle Identification (AVI)
  • Car Plate Recognition (CPR)
  • Automatic Number Plate Recognition (ANPR)
  • Car Plate Reader (CPR)
  • Optical Character Recognition (OCR) for Cars
Does it Work?
Early LPR systems suffered from a low recognition rate, lower than required by practical systems. External effects (sun and headlights, bad plates, a wide variety of plate types) and the limited capability of the recognition software and vision hardware yielded low quality systems.
However, recent improvements in the software and hardware have made LPR systems much more reliable and widespread. You can now find these systems in numerous installations, and the number of systems is growing exponentially, efficiently automating more and more tasks in different market segments. In many cases the LPR unit is added as a retrofit to existing solutions, such as a magnetic card reader or ticket dispenser/reader, in order to add more functionality to the existing facility.
Even if the recognition is not absolute, the application that depends on the recognition results can compensate for the errors and produce a virtually flawless system. For example, when comparing the recognition of the entry time of a car to the exit time in order to establish the parking time, the match (of entry versus exit) can allow some small degree of error without making a mistake. This intelligent integration can overcome some of the LPR flaws and yield dependable and fully automatic systems.
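One simple way to implement such tolerant matching is to pair an exit read with the closest stored entry read by edit distance, accepting at most one character of OCR error. A sketch (the plate values are invented):

    # Tolerant entry/exit matching: allow one character of OCR error.
    def edit_distance(a, b):
        """Classic Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    entries = ["5XYZ123", "8ABC901"]
    exit_read = "5XY2123"  # OCR confused 'Z' with '2'
    best = min(entries, key=lambda plate: edit_distance(plate, exit_read))
    if edit_distance(best, exit_read) <= 1:
        print("matched entry record:", best)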

A license plate recognition system uses special software that automatically reads license plates. The process involves four stages:
  • Image collection
  • Image analysis
  • Image and data storage
  • Data transmission

Image Collection
License plate capture cameras with CCD image sensors work with a pulsed infra-red light source to monitor a target area of passing vehicles. The illumination device contains up to 190 LEDs in the near infrared range and is capable of providing a high contrast black and white image similar to the image below.

Notice how the use of infra-red light suppresses most of the surrounding detail and allows the reflective license plate properties to make it dominant in the field. In addition, the TruViewLPR license plate capture camera lets the user alter the contrast by changing each video field up to sixty times per second, on a cycle of three different levels of brightness - low, medium, and high. Taken together, these allow for optimal plate image processing no matter the time of day or the condition of the license plate in question.

Image Analysis
The captured images are processed by a set of algorithms that extract only the license plate portion of the frame and send it to two different Optical Character Recognition engines for processing. It takes 200 milliseconds or less for the LPR software to analyze the images and produce an ALPR result. It then reports one of two reads: the read with the highest confidence score among all the captured images for that particular license plate, or the read that meets a pre-determined minimum level of confidence.


Data Storage
The image with the best results is now saved and linked with the results data. The data might consist of the plate number, the date and time, and the lane number.


Overview Camera
In addition, another camera may be used to furnish a scene overview showing a full view of the vehicle, which will be linked to the plate data and image, all stored and made available for subsequent queries. You can also add many IP cameras for multiple overviews when using VMS software.

Data and Image Management and Display
Stored data can be forwarded to a central server over a standard TCP/IP connection or using a wireless connection.

The LPR information can be displayed using Ocularis VMS software or using the Central Management console which will allow an operator to bring up ALPR events based on license plate number, date, time, lane, or other desired characteristics.
Applications
There are a number of applications where automated license plate recognition can be used. Image collection can take place in a triggered or non-triggered environment.

  • A non-triggered installation needs no detection device. In this mode, software known as the Virtual Vehicle Detector analyzes each image at a rate of sixty images per second for the presence of a license plate. This image, and additional images containing the vehicle’s license plate data, are captured and processed to extract the license plate characteristics.

  • A triggered mode requires a detection device and can be used in a number of applications. The trigger could be an in-ground loop or an optical trigger and is called for when several systems are to be tied together to a single event. Such parallel systems might be a vehicle classification system, a transponder system, a parking lot ticket dispenser, a weigh-in-motion system, and so on.

The LPR Software device can act as a lane controller, hosting a database that will permit or deny vehicle access into or out of a parking facility, gated community, or high-security compound. This can be done with the optional Universal Interface Controller (UIC) to provide contact closure outputs to open or close a gate or arm in response to queries of the database.
And so in this day and age, a license plate serves as more than just a way to determine if that’s your buddy in the silver Honda up ahead.



LPR systems normally consist of the following units:
  • Camera(s) - that take the images of the car (front or rear side)
  • Illumination - a controlled light that can light up the plate and allow day and night operation. In most cases the illumination is infra-red (IR), which is invisible to the driver.
  • Frame grabber - an interface board between the camera and the PC, allows the software to read the image information
  • Computer - normally a PC running Windows or Linux. It runs the LPR application which controls the system, reads the images, analyzes and identifies the plate, and interfaces with other applications and systems.
  • Software - the application and the recognition package. Usually the recognition package is supplied as a DLL (Dynamic Link Library).
  • Hardware - various input/output boards used to interface the external world (such as control boards and networking boards)
  • Database - the events are recorded in a local database or transmitted over the network. The data includes the recognition results and (optionally) the vehicle or driver-face image file

The following illustration shows a typical configuration of an LPR system (for example, a 2-lanes-in and 2-lanes-out access control system). The "SeeLane" system is a typical example of such a system.
The SeeLane application runs as a background Windows application on the PC (shown in the center), and interfaces to a set of SeeCarHead camera/illumination units (one for each vehicle), which are interfaced through the frame grabber. The application controls the sensors and gate controls via an I/O card that is connected through a terminal block to the inputs and outputs.
The application displays the results and can also send them via serial communication and via DDE messages to other applications. It writes the information to a local database or to optional remote databases (via the network).

Typical applications

LPR systems have a wide range of applications, all of which use the extracted plate number and optional images to create automated solutions to various problems. These include the following sample applications:
Parking - the plate number is used to automatically admit pre-paid members and to calculate the parking fee for non-members (by comparing the exit and entry times). The optional driver-face image can be used to prevent car hijacking.
In this example, a car is entering a car park at a busy shopping center. The car plate is recognized and stored. When the car later exits (through the gate on the right side), the plate is read again and the driver is charged for the duration of the parking. The gate opens automatically after payment - or if the vehicle has a monthly permit.
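A minimal sketch of the fee calculation at the exit gate, assuming a flat hourly rate and ISO-formatted timestamps (both illustrative):

    import math
    from datetime import datetime

    HOURLY_RATE = 2.50   # assumed currency units per started hour

    def parking_fee(entry_ts, exit_ts, monthly_permit=False):
        """Fee derived from the stored entry and exit recognition events."""
        if monthly_permit:
            return 0.0                        # permit holders pass free
        entry = datetime.fromisoformat(entry_ts)
        leave = datetime.fromisoformat(exit_ts)
        hours = (leave - entry).total_seconds() / 3600
        return max(1, math.ceil(hours)) * HOURLY_RATE

    # e.g. parking_fee("2010-04-30T08:15:00", "2010-04-30T11:05:00") -> 7.5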

Access Control - a gate automatically opens for authorized members of a secured area, thus replacing or assisting the security guard. The events are logged in a database and can be used to search the history of events.

In this example, the gate has just been automatically raised for the authorized vehicle, after being recognized by the system. A large outdoor display greets the driver. The event (result, time and image) is logged in the database.
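The permit/deny logic itself can be very small. In this Python sketch, AUTHORIZED, open_gate() and log_event() are hypothetical stand-ins for the member database, the UIC contact-closure output and the event log:

    AUTHORIZED = {"52MF492", "AB123CD"}       # example member plates

    def open_gate():
        print("contact closure: gate raised")

    def log_event(plate, granted):
        print(f"{plate}: {'granted' if granted else 'denied'}")

    def on_plate_read(plate):
        granted = plate in AUTHORIZED
        if granted:
            open_gate()
        log_event(plate, granted)             # result and time go to the database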

Tolling - the car number is used to calculate the travel fee on a toll road, or to double-check the ticket.

In this installation, the plate is read when the vehicle enters the toll lane and presents a pass card. The vehicle's information is retrieved from the database and compared against the pass information. In case of fraud, the operator is notified.
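The cross-check is a simple comparison. In this sketch, lookup_pass() is a hypothetical database query returning the plate registered to the presented pass:

    def verify_toll(plate, pass_id, lookup_pass):
        """Return True if the pass on file matches the plate that was read."""
        registered = lookup_pass(pass_id)
        if registered != plate:
            print(f"ALERT: plate {plate} does not match pass {pass_id}")
            return False                      # operator is notified of possible fraud
        return True

    # e.g. verify_toll("52MF492", "PASS-0042", lambda p: "52MF492") -> True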



Border Control - the car number is registered on every entry to or exit from the country and used to monitor border crossings. It can shorten the border-crossing turnaround time and cut the typically long lines.

This installation covers the borders of an entire country. Each vehicle is registered in a central database and linked to additional information such as passport data. This is used to track all border crossings.

Stolen cars - a list of stolen cars or cars with unpaid fines is used to raise an alert on passing 'hot' cars. The 'black list' can be updated in real time, providing an immediate alarm to the police force. The LPR system is deployed at the roadside and performs a real-time match between passing cars and the list. When a match is found, a siren or display is activated and the police officer is notified of the detected car and the reasons for stopping it.
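Because every passing car must be checked with minimal delay, the black list is best held in memory. A sketch of the roadside match, with the list refreshed in real time (plate values are made up):

    import threading

    hot_list = {"XYZ1234"}                    # example stolen/unpaid plates
    hot_list_lock = threading.Lock()

    def update_hot_list(new_plates):
        """Real-time refresh pushed from headquarters."""
        global hot_list
        with hot_list_lock:
            hot_list = set(new_plates)

    def check_plate(plate):
        with hot_list_lock:
            hit = plate in hot_list           # O(1) set-membership test
        if hit:
            print(f"ALERT: {plate} is on the hot list")  # siren/display/notify officer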

Enforcement - the plate number is used to produce a violation fine in speed or red-light systems. The manual process of preparing a violation fine is replaced by an automated one, which reduces the overhead and turnaround time. The fines can be viewed and paid on-line.

The photo is an example of a speeding car caught by a traffic camera. The rear vehicle plate is automatically extracted from the scanned film image, replacing a tedious manual operation and the need to develop and print each violation. The data block at the top right is additional speeding information that is automatically extracted from the developed film, used to complete the fine notice, and inserted into a database. Violators can pay the fine on-line and are presented with this photo, together with the speeding information, as proof.

Traffic control - vehicles can be directed to different lanes according to their entry permits (such as in university campus projects). The system effectively reduces traffic congestion and the number of attendants.

In this installation, the LPR-based system classifies the cars at a congested entrance into three types (authorized, known visitors, and unknown cars held for inquiry) and guides them to the appropriate lane. The system reduced the long waiting lines and simplified the security officers' workload.


Marketing Tool - the car plates may be used to compile a list of frequent visitors for marketing purposes, or to build a traffic profile (such as the frequency of entry versus the hour or day).


Travel - a number of LPR units are installed at different locations along city routes, and the plate numbers of passing vehicles are matched between the points. The average speed and travel time between these points can be calculated and presented in order to monitor municipal traffic loads. Additionally, the average speed may be used to issue a speeding ticket.

In this example the car is recognized at two points, and the violation notice shows the photos from both locations, taken from bridges above the highway. The car's average speed is calculated from the two reads, displayed if it passes the violation threshold, and optionally printed.
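The speed estimate is simple arithmetic on the two timestamped reads. In this sketch the distance between the two bridges and the violation threshold are assumed example values:

    from datetime import datetime

    DISTANCE_KM = 4.2          # assumed distance between the two camera points
    SPEED_LIMIT_KMH = 100.0    # assumed violation threshold

    def average_speed(ts_a, ts_b):
        """Average speed between two timestamped plate reads, in km/h."""
        hours = (datetime.fromisoformat(ts_b)
                 - datetime.fromisoformat(ts_a)).total_seconds() / 3600
        return DISTANCE_KM / hours

    speed = average_speed("2010-04-30T08:00:00", "2010-04-30T08:02:00")
    if speed > SPEED_LIMIT_KMH:
        print(f"violation: {speed:.0f} km/h")  # attach the photos from both points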


Airport Parking - in order to reduce ticket fraud or mistakes, the LPR unit is used to capture the plate number and an image of each car. The information may be used to calculate the parking time, or to provide proof of parking in case of a lost ticket - a typical problem in airport parking, where parking durations are relatively long (and expensive).

This photo shows the gate of a long-term airport parking lot. The car is recognized on entry, and the data is later used to establish the real entry time in case of a lost ticket.

Friday, April 30, 2010

About All Video Codecs

Codecs work in two ways – using temporal and spatial compression. Both schemes generally use "lossy" compression, which means information that is redundant or unnoticeable to the viewer is discarded (and hence not retrievable).

Temporal compression is a method of compression that looks for information that is not necessary for continuity to the human eye. It examines the video information on a frame-by-frame basis for changes between frames. For example, if you're working with video of a section of freeway, there's a lot of redundant information in the image: the background rarely changes, and most of the motion comes from vehicles passing through the scene. The compression algorithm compares the first frame (known as a key frame) with the next (called a delta frame) to find anything that changes. After the key frame, it keeps only the information that does change, thus discarding a large portion of each image, and it does this for every frame. If there is a scene change, it tags the first frame of the new scene as the next key frame and continues comparing the following frames with this new key frame. As the number of key frames increases, so does the amount of motion delay - as happens, for example, when an operator pans a camera from left to right.
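A toy Python illustration of the key/delta idea (real codecs use motion-compensated blocks rather than raw pixel differences, and the noise threshold here is an assumed value):

    import numpy as np

    THRESHOLD = 8   # assumed noise floor, in 8-bit intensity units

    def encode(frames):
        """Store the key frame in full, then only the pixels that change."""
        key = frames[0]
        stored = [("key", key.copy())]
        for frame in frames[1:]:
            changed = np.abs(frame.astype(int) - key.astype(int)) > THRESHOLD
            coords = np.argwhere(changed)     # coordinates of changed pixels
            values = frame[changed]           # their new values
            stored.append(("delta", coords, values))
        return stored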

Spatial compression uses a different method: it deletes information that is common to the entire file or to an entire sequence within the file. It also looks for redundant information but, instead of specifying each pixel in an area, it defines that area using coordinates.
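Run-length encoding is the simplest example of describing an area by coordinates rather than pixel by pixel; this toy sketch encodes one row of pixels as (value, run length) pairs:

    def rle_encode(row):
        """Encode a row of pixel values as (value, run_length) pairs."""
        if not row:
            return []
        runs, count = [], 1
        for prev, cur in zip(row, row[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        runs.append((row[-1], count))
        return runs

    # e.g. rle_encode([0, 0, 0, 255, 255, 0]) -> [(0, 3), (255, 2), (0, 1)]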

Both of these compression methods reduce the overall transmission bandwidth requirements. If this is not sufficient, one can make a larger reduction by reducing the frame rate (that is, how many frames of video go by in a given second). Depending on the degree of changes one makes in each of these areas, the final output can vary greatly in quality.

Hardware codecs are an efficient way to compress and decompress video files. They are expensive, but deliver high-quality results. Using a hardware-compression device delivers high-quality source video but requires viewers to have the same decompression device in order to watch it. Hardware codecs are often used in video conferencing, where the equipment of the audience and the broadcaster is configured in the same way. A number of standards have been developed for video compression - MPEG, JPEG, and the video-conferencing (H.26x) family.

Video Compression
MPEG stands for the Moving Picture Experts Group. MPEG is an ISO/IEC working group, established in 1988 to develop standards for digital audio and video formats. There are five MPEG standards being used or in development. Each compression standard was designed with a specific application and bit rate in mind, although MPEG compression scales well with increased bit rates.

Following is a list of video compression standards:
•MPEG-1 – designed for transmission rates of up to 1.5 Mbit/sec – is a standard for the compression of moving pictures and audio. It was based on CD-ROM video applications and is a popular standard for video on the Internet, transmitted as .mpg files. In addition, Layer 3 of MPEG-1 audio is the most popular standard for digital audio compression - known as MP3. This standard is available in most of the video codec units supplied for FMS and traffic management systems.

•MPEG-2 – designed for transmission rates between 1.5 and 15 Mbit/sec – is the standard on which digital television set-top boxes and DVD compression are based. It is based on MPEG-1, but designed for the compression and transmission of digital broadcast television. The most significant enhancement over MPEG-1 is its ability to efficiently compress interlaced video. MPEG-2 scales well to HDTV resolution and bit rates, obviating the need for an MPEG-3. This standard is also provided in many of the video codecs supplied for FMS.

•MPEG-4 – a standard for multimedia and Web compression – MPEG-4 uses object-based compression, similar in nature to the Virtual Reality Modeling Language (VRML). Individual objects within a scene are tracked separately and compressed together to create an MPEG-4 file. The files are sent as data packages and assembled at the viewer's end, and the result is a high-quality motion picture. The more image data that is sent, the greater the lag time (or latency) before the video begins to play. Currently, this compression standard is not suited to real-time traffic observation systems that require pan-tilt-zoom capability, because the "store and forward" scheme inhibits eye-hand coordination. However, this is an evolving standard, and the latency between image capture and image viewing is being reduced. The latency can be cut to a minimum if the image and motion quality do not have to meet commercial video production standards; most surveillance systems can function without that quality and can then use pan-tilt-zoom functions.

•MPEG-7 – this standard, currently under development, is also called the Multimedia Content Description Interface. When released, it is hoped that this standard will provide a framework for multimedia content that will include information on content manipulation, filtering and personalization, as well as the integrity and security of the content. Contrary to the previous MPEG standards, which described actual content, MPEG-7 will represent information about the content.

•MPEG-21 – work on this standard, also called the Multimedia Framework, has just begun. MPEG-21 will attempt to describe the elements needed to build an infrastructure for the delivery and consumption of multimedia content, and how they will relate to each other.

•JPEG – stands for the Joint Photographic Experts Group. It is also an ISO/IEC working group, but one that builds standards for continuous-tone image coding. JPEG is a lossy compression technique for full-color or gray-scale images that exploits the fact that the human eye will not notice small color changes. Motion JPEG (M-JPEG) is a standard used to compress images transmitted from CCTV cameras; it provides compressed motion in the same manner as MPEG but is based on the JPEG still-image standard.

•H.261 – is an ITU standard designed for two-way communication over ISDN lines (video conferencing) and supports data rates that are multiples of 64 Kbit/s.

•H.263 – is based on H.261 with enhancements that improve video quality over modems.

•H.264 – is the latest MPEG standard for video encoding, geared to take video beyond DVD quality by supporting high-definition CCTV video. H.264 can reduce the size of digital video by more than 80% compared with M-JPEG and by as much as 50% compared with MPEG-4, all without compromising image quality, which means that much less network bandwidth and storage space are required. Since typical storage costs for surveillance projects represent between 20 and 30 percent of the project cost, significant savings can be made.
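To put those percentages in perspective, here is a back-of-the-envelope Python calculation of monthly storage per camera (the 10 Mbit/s M-JPEG rate is an assumed example value; the 80% reduction is the figure quoted above):

    MJPEG_MBITS = 10.0                   # assumed per-camera M-JPEG bit rate
    H264_MBITS = MJPEG_MBITS * 0.2       # "more than 80%" smaller
    HOURS = 24 * 30                      # one month of continuous recording

    def gigabytes(mbit_per_s, hours):
        return mbit_per_s * 3600 * hours / 8 / 1000   # Mbit/s -> GB

    print(f"M-JPEG: {gigabytes(MJPEG_MBITS, HOURS):.0f} GB/month")  # ~3240 GB
    print(f"H.264 : {gigabytes(H264_MBITS, HOURS):.0f} GB/month")   # ~648 GB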

Advantages:-
 1. H.264 cameras reduce the amount of bandwidth needed. If your megapixel camera needed 10 Mbit/s before (with MJPEG), it might now need only 1.5 Mbit/s, so each camera saves a substantial amount of bandwidth.
 2. Eliminates barriers: Enables many more networks to support megapixel cameras.
 3. The bitstream is fully compatible with existing decoders with no error/drift.

 Disadvantages:-
 1. Using analytics with these cameras reduces the H.264 benefit.
 2. Costs a few hundred dollars more per camera.