
User:Extra-low-voltage/ELV systems and advanced functions of surveillance systems

From Wikipedia, the free encyclopedia

ELV Systems of Surveillance Systems[edit]

A surveillance system may be powered by any of the following three electrical supply schemes:

  • Centralized power supply
  • Independent power supply
  • PoE power supply

Centralized power supply[1][edit]

Centralized power supply means that the source equipment is installed in a power room and battery room, and power is distributed from a single point to the different pieces of surveillance equipment.

The advantages and disadvantages of centralized power supply[edit]

Advantages[edit]
  1. Centralized power supply makes it convenient to control and manage the power supplies of the surveillance system uniformly;
  2. Centralized power supply reduces cable costs;
  3. Centralized power supply keeps the project wiring tidy.
Disadvantages[edit]
  1. The transmission distance of the DC low-voltage supply may be long, resulting in high voltage loss;
  2. Poor anti-interference ability during transmission.

An example of centralized power supply: wind-solar hybrid power supply system[2][edit]

A solar-only power supply cannot meet the requirements of all-weather operation: the surveillance system must run continuously and stably even through extended periods of cloudy and rainy weather, and an inadequate supply creates blind spots in monitoring time.

The wind-solar hybrid power supply system utilizes the complementarity of wind and solar resources to achieve all-weather power generation, which effectively solves the above problem.

To avoid monitoring blind spots and unstable operation, an all-weather surveillance system is needed that can run continuously and stably 24 hours a day, so that the background monitoring center can fully track the situation at dangerous points and respond in time when something happens. If the power supply is insufficient, however, it is difficult for the surveillance system to operate stably.

Component and principle of wind-solar hybrid power supply system[edit]

The wind-solar hybrid power generation system can operate in the following three modes:

  1. wind power generation system supplies power to the loads alone;
  2. photovoltaic power generation system supplies power to the loads alone;
  3. wind power and photovoltaic power generation systems jointly supply power to the load.

The wind-solar hybrid power supply system can convert both the kinetic energy of the wind and the light energy of the sun into electrical energy. Thus, it is a hybrid power generation system.

The system mainly comprises wind turbines, solar cell modules, a wind-solar hybrid controller, battery packs, and inverters.

To utilize solar energy, the solar panels convert light into electrical energy via the photovoltaic effect. During the daytime, under sunlight, the solar cells generate electricity and charge the battery.

For wind power, the wind passes through the wind wheel and rotates it; the turbine connected to the wheel converts this mechanical energy into electrical energy, which charges the battery through the wind-solar hybrid controller.

The battery is the energy-storage component, used to regulate energy and balance loads. When the system generates too much energy, the battery stores the surplus; when the energy generated is not enough for the load to work normally, the battery supplies the shortfall.

The wind-solar hybrid controller adjusts the working state of the battery in real time according to the energy generated and the changing load. It manages the alternation between charging, discharging and float charging, keeping the system's power generation continuous, stable and reliable.

The inverter converts the DC power from the battery into the corresponding AC power, such as 24 V AC, so that AC load equipment operates normally. The inverter also performs automatic voltage regulation, which improves the power quality of the wind-solar hybrid power generation system.

Design of wind-solar hybrid power supply system[edit]

Designing such a system involves the following steps:

  1. determine the load requirements of the various elements of the video surveillance system;
  2. calculate battery capacity and system power generation;
  3. determine the number of batteries required for the system and assembly method of the batteries;
  4. determine the power ratings and models of the wind generator, solar generator, and battery pack according to the total power generation required;
  5. determine the installation position and assembly plan of each component of the wind-solar hybrid power supply system.
Determine load requirements[edit]

The loads of this wind-solar hybrid system mainly include an infrared camera (day/night), a PTZ, and communication devices. Their loads must be calculated to determine the required capacity of the generators and batteries.

The power loads of the wind-solar hybrid power supply system are the infrared camera, the PTZ, and the wireless MESH device. The working voltage of the infrared camera is AC 24 V and its working power is 10 W; the working voltages of the PTZ and the wireless MESH device are both DC 12 V, with working powers of 60 W and 5 W respectively. Each piece of equipment in the surveillance system must be able to work normally for 7 days without any external energy.
Power Consumption of Equipment of the Surveillance System
  Name of equipment   Working voltage   Power/W   Operating hours/(h/d)   Power consumption/(W·h)
  Infrared camera     AC 24 V           10        24                      240
  PTZ                 DC 12 V           60        4                       240
  MESH                DC 12 V           5         24                      120
Battery pack design[edit]

The configuration of the battery pack is one of the keys to a wind-solar power generation system. Battery capacity is determined by the number of days the battery pack must work alone, the daily discharge, the battery's own leakage level, and so on. Under special climatic conditions, the battery may be discharged until its remaining capacity is 20% of the normal rated capacity. The self-discharge rate increases as the battery ages or its temperature rises: for new batteries it is usually less than 5% of capacity, but for old batteries of poor quality it can rise to 10-15% of capacity.

The capacity of the battery must be sufficient to supply the loads for at least 7 days when no wind or solar power is generated. If the battery system is designed for 24 V, the usable depth of discharge is 70%, and the inverter efficiency is 90%, the required capacities are:

  Infrared camera: A = 7 d × 240 W·h / 0.9 / (24 V × 0.7) ≈ 112 A·h
  PTZ: B = 7 d × 240 W·h / (12 V × 0.7) = 200 A·h
  Wireless MESH device: C = 7 d × 120 W·h / (12 V × 0.7) = 100 A·h

Therefore, the total battery capacity is A + B + C = 412 A·h. To meet this requirement, a total of four 225 A·h/12 V batteries can be used: two single 12 V batteries are connected in series, and the two series strings are then connected in parallel, ensuring the stability and safety of the system.
Power generation calculation[edit]

When the system operates normally, the power generated must cover charging of the battery in addition to the rated loads; together, wind and solar generation must meet the rated load capacity.

If the system charge controller is designed for 24 V and its efficiency is taken as 90%, the required wind-solar hybrid generating power is 236 W.
Wind power components[edit]

The wind power generation component adopts an integrated vertical-axis wind turbine unit. The whole unit has only one moving part, the wind wheel. The turbine blades can capture wind from any direction (360°) and generate electricity normally in anything from light winds to typhoon weather.

If the system requires 236 W of generating power, the wind usually cannot reach the turbine's rated operating wind speed, so a wind power component with a rated power of 400 W/24 V is needed.
Solar power components[edit]

The power output capability of a solar photovoltaic cell array is closely related to its area. The larger the area, the greater the output power under the same lighting conditions. The structural design of the solar cell array should ensure that the connection between the module and the bracket is firm and reliable, and the solar cell module can be easily replaced.

If the system requires 236 W of generating power, and the battery pack must be charged in a short time to make the most of the solar irradiation period, four solar modules with a rated power of 100 W/24 V are needed.
Wind-solar intelligent controller[edit]

The controller applies maximum power point tracking to maximize the conversion of wind and solar power into battery charging current. The controller provides a series of warning and protection functions covering overcharge, overdischarge, overload, open circuit, short circuit, reverse connection, reverse discharge, and overheating.

If the system's starting voltage is set to 25 V, then when the voltage falls below 25 V the output enters an undervoltage state and delivers no power to the load, avoiding damage to the battery from overdischarge.
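The undervoltage cutoff described above can be sketched as follows. The 25 V threshold is from the text; the recovery voltage and the hysteresis behaviour are illustrative assumptions (real controllers add hysteresis so the output does not chatter around the threshold):

```python
def controller_output_enabled(battery_voltage, was_enabled=True,
                              cutoff=25.0, recover=26.0):
    """Undervoltage cutoff with hysteresis (sketch).

    Below `cutoff` the load is disconnected; it is reconnected only
    once the battery has recovered above `recover`.  The recover
    value is an illustrative assumption, not from the source.
    """
    if battery_voltage < cutoff:
        return False          # undervoltage: protect the battery
    if battery_voltage >= recover:
        return True           # battery recovered: reconnect the load
    return was_enabled        # in the hysteresis band: keep prior state
```
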
Sine wave inverter[edit]

The sine wave inverter converts the DC power of the battery into a sine wave AC output to power the infrared camera. The inverter provides under-voltage, over-current, and short-circuit protection, and through the MESH network supervisors can monitor it remotely in real time.

Lightning protection[edit]

In order to ensure the safe operation of the system in severe weather such as thunderstorms, lightning protection measures are required. There are mainly the following aspects:

  1. The ground wire is the key to lightning protection (therefore an ungrounded SELV arrangement is not appropriate here). The ground resistance should be less than 4 ohms.
  2. The brackets of the wind power and solar power modules should be well grounded, with a grounding resistance of less than 4 ohms.
  3. The DC output terminal and the AC output terminal of the inverter should have secondary lightning protection.
System installation design[edit]

The installation requirements of the wind-solar hybrid power supply system are simple installation and easy maintenance. The installation position of the wind wheel is required to satisfy the minimum wind speed of wind power generation and the need of 360-degree rotation of the wheel.

Suggestions for applying centralized power supply[3][edit]

  1. During repairs the system is often switched on and off. If all cameras are turned on at the same time, the inrush current is particularly large, which stresses the power supply and in serious cases can burn it out. Moreover, if all cameras share one power supply, a supply failure paralyzes the entire surveillance system; in particular, important entrances and exits can no longer be monitored, which may cause unnecessary trouble. The cameras should therefore be divided among several power supplies, so that when one supply fails, its cameras can be connected to the others without affecting the rest of the system.
  2. With a centralized supply, do not connect distant and nearby cameras to the same power supply. If they share a supply and the voltage is set high, the nearby camera may be burned out; if the voltage is set low, the distant camera may not work normally. For this reason, some cameras are designed with a wide supply range, working stably over a DC input range of 12-36 V, which eliminates the harm that unstable input voltage (surges, etc.) can cause to the camera.
  3. If the transmission distance is too long, a higher-voltage supply may be needed, such as 30 V or 36 V; supplying 220 V AC directly and transforming it down to the appropriate voltage at the front end can also be considered.

Independent power supply[edit]

Independent power supply means that each camera's front end is equipped with its own security monitoring power supply, each supply powering only one camera: 220 V AC is drawn directly from the security monitoring room, and a separate DC 12/24 V power adapter is connected to each camera.

The advantages and disadvantages of independent power supply[edit]

Advantages[edit]
  1. AC transmission has lower voltage loss and strong anti-interference capability.
  2. Since each piece of equipment corresponds to one power supply, a faulty camera can be located quickly.
  3. The power supply distance can be long, with essentially no loss along the way.
Disadvantages[edit]
  1. Construction is more troublesome because a power supply must be installed at every point.
  2. Costs are higher: multiple independent power adapters cost more than one centralized supply of the same output power.
  3. Safety and stability are weaker: the higher voltage is more dangerous, and line aging and rodent damage easily lead to short circuits or even fire.
  4. The run between the independent power supply and the camera is mostly outdoors and vulnerable to external damage such as lightning strikes.

Examples of an independent power supply[edit]

Island Satellite Earth station power supply system[edit]
Situation background[edit]

The satellite ground station is located in a coastal area with long periods of high temperature and humidity, heavy rainfall, a shallow groundwater table, severe salt-fog corrosion, and occasional typhoons. Since the station covers a wide yard area with scattered monitoring points, a 220 V AC power bus is used to transmit power, with independent supplies at the front end. A power supply may burn out after long-term operation: because the power supply box is unsealed for heat dissipation, it is unaffected under normal rainfall but is seriously flooded when a typhoon strikes.

System installation mode[edit]

Cameras are mounted on separate stainless steel poles, with each independent power adapter mounted in a small distribution box inside the pole.

Civil airport terminals video monitoring system[edit]
Situation background[edit]

There are a large number of camera points in the terminal. From the perspective of reliability, economy, and convenient construction, some cameras are powered by the method of independent power supply.

System installation mode[edit]

The cameras used for outdoor viaduct road and airside parking spot monitoring in front of the building adopt independent power supply mode, and each camera is separately equipped with power cord and power adapter.

Suggestions for applying independent power supply[edit]

Different situations place different demands on a video monitoring system. When designing an independently powered system, the design must not only meet the relevant building codes, but also reflect the needs of the relevant units investigated at the early design stage; the installation of the system's equipment and the routing of its pipework must likewise be adjusted to the conditions of the construction site. Only in this way can the monitoring system satisfy the code requirements, the building environment, and user needs to the greatest extent.

PoE Power Supply[4][edit]

PoE (Power over Ethernet) refers to technology that supplies DC power to IP-based terminals (such as IP telephones, wireless LAN access points, and network cameras) over the existing Ethernet Cat.5 cabling infrastructure, carrying power and data on the same cable without requiring any changes to the cabling.

Standard[5][edit]

To standardize and promote PoE applications, the IEEE 802.3 working group ratified the IEEE 802.3af standard in June 2003. As an extension of the Ethernet standard, it makes detailed provisions for the supply, transmission and reception of power over the network cable, and was the first international standard on power distribution over Ethernet. IEEE began developing the standard in 1999; the earliest participating manufacturers were 3Com, Intel, PowerDsine, Nortel, Mitel and National Semiconductor, and shortcomings of the draft restricted market growth until ratification. 802.3af clearly defines power detection and control in remote systems and specifies how IP phones, security systems, wireless LAN access points and other equipment are powered over Ethernet cables from routers, switches and hubs. Its development involved experts from many companies, which allowed the standard to be tested in many respects. In October 2009, the IEEE 802.3at standard was published in response to the needs of higher-power terminals: while remaining compatible with 802.3af, it provides a larger power budget for new applications.

Features[edit]

PoE technology keeps the existing network operating normally and the existing structured cabling secure, while minimizing cost.

System Components[edit]

A complete PoE system includes two kinds of component: power sourcing equipment (PSE) and powered devices (PD).

  • The PSE supplies power to Ethernet client devices and manages the entire PoE power-delivery process.
  • PDs are the PSE's loads, i.e. the client devices of a PoE system, such as IP phones, network security cameras, wireless APs, and many other Ethernet devices such as PDAs or mobile phone chargers (in fact, any device drawing no more than 13 W can obtain power from the RJ45 socket). In a monitoring system the PD is mainly the network camera (IPC).

Characteristic Parameter[6][edit]

Power Supply Characteristic Parameters
  Category             802.3af (PoE)   802.3at (PoE Plus)
  Classification       0~3             0~4
  Maximum current      350 mA          600 mA
  PSE output voltage   44~57 V DC      50~57 V DC
  PSE output power     ≤15.4 W         ≤30 W
  PD input voltage     36~57 V DC      42.5~57 V DC
  PD maximum power     12.95 W         25.5 W
  Cable requirements   Unstructured    CAT-5e or better
  Power cable pairs    2               2

Strengths and Weaknesses[edit]

Advantage[edit]
  • Simplified wiring and lower labor cost: one network cable carries both data and power, eliminating the expense and installation time of separate power supplies.
  • Safety and convenience: the PSE supplies power only to devices that need it, and the Ethernet cable carries voltage only when a powered device is connected, eliminating the risk of leakage on the line. Users can safely mix legacy and PoE devices on the same network and existing Ethernet cabling.
  • Easy remote management: like data transmission, PoE can be supervised and controlled with the Simple Network Management Protocol (SNMP), enabling functions such as night shutdown and remote restart.
Disadvantage[edit]
  • Insufficient power for some PDs: the 802.3af (PoE) output is under 15.4 W, which is enough for a typical IPC but not for high-power front-end equipment such as dome cameras.
  • Concentrated risk: a PoE switch usually powers many front-end IPCs at once, so any failure of the switch's PoE power module takes all of those cameras down.
  • Higher equipment and maintenance costs: compared with other supply methods, PoE increases after-sales maintenance work; in terms of safety and stability, separate power supplies remain the best.

Power Supply Work Process[edit]

  1. Detection: initially the PSE outputs a small voltage on the port until it detects that the cable terminates in a receiving device that supports the IEEE 802.3af standard.
  2. PD classification: after detecting a powered device, the PSE may classify it and estimate the power the PD requires.
  3. Start-up: during a configurable start-up period (generally less than 15 μs), the PSE begins supplying power to the PD from a low voltage, ramping up to 48 V DC.
  4. Power supply: the PSE provides stable, reliable 48 V DC, with the PD's consumption not exceeding 15.4 W.
  5. Power-off: if the PD is disconnected from the network, the PSE quickly (usually within 300-400 ms) stops supplying power and repeats the detection process to check whether a PD is connected to the cable.
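The steps above can be sketched as a toy state machine. The signature-resistance window of roughly 19-26.5 kΩ is the approximate 802.3af detection range; the state names and the function itself are illustrative, not part of any real API:

```python
def poe_port_cycle(signature_ohms, pd_power_w):
    """Toy walk-through of the PSE states described above.

    Returns the sequence of states the port passes through.  A valid
    802.3af detection signature is roughly a 19-26.5 kΩ resistance;
    power is granted only within the 15.4 W per-port budget.
    """
    states = ["DETECT"]
    if not (19_000 <= signature_ohms <= 26_500):
        states.append("IDLE")      # no valid PD: never apply full voltage
        return states
    states.append("CLASSIFY")      # estimate the PD's power class
    if pd_power_w > 15.4:
        states.append("FAULT")     # exceeds the 802.3af per-port budget
        return states
    states.append("POWER_UP")      # ramp from low voltage toward 48 V DC
    states.append("POWER_ON")      # steady 48 V DC supply
    return states
```
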

Power Supply Method[edit]

The POE standard defines two methods for delivering DC power to POE-compliant devices using Ethernet-based transmission cables:

  • Intermediate jumper method

An independent PoE power supply device is bridged between the switch and the PoE-capable terminal, generally using the unused wire pairs in the Ethernet cable to carry DC power. The midspan PSE is a specialized power-management device, usually co-located with the switch. It has two RJ45 jacks per port: one connects by a short cable to the switch (here a conventional switch without PoE function), the other to the remote device.

  • End-crossing method

Here the power supply equipment is integrated at the signal output end of the switch. Such integrated connections generally offer "dual" supply over both the spare pairs and the data pairs; for the data pairs, the center taps of the signal isolation transformers carry the DC supply. End-span can be expected to become popular quickly, because data and power share common wire pairs, eliminating the need for dedicated power wiring and making the RJ-45 socket all the more significant.

Distinguish Between Standard PoE and Non-standard PoE[edit]

Test with a multimeter, as follows:

  1. Power up the device, set the multimeter to measure voltage, and touch its two probes to the PSE's power supply pins (usually pins 1/2 and 3/6, or 4/5 and 7/8, of the RJ45 port). If a stable 48 V (or another value such as 12 V or 24 V) is measured, the device is non-standard: it has not attempted to detect the powered device (here, the multimeter) and simply applies the supply voltage directly.
  2. Conversely, if no stable voltage can be measured and the reading jumps between 2 and 10 V, it is standard PoE: at this stage the PSE is probing the PD side, and since the multimeter is not a valid PD, the PSE does not supply power and no stable voltage appears.

How to Choose a PoE Switch[edit]

  1. How much power the devices need: the output power of a PoE switch depends on the standard used. Under IEEE 802.3af the maximum supplied power is 15.4 W, and because of losses in the cable the maximum power available to the device is 12.95 W. A switch following IEEE 802.3at can power devices consuming up to about 25 W.
  2. How many devices can be powered at once: an important specification of a PoE switch is its total PoE power budget. Under IEEE 802.3af, a 24-port PoE switch with a total budget of 370 W can power all 24 ports (370/15.4 ≈ 24); but if each port is budgeted at the IEEE 802.3at maximum of 30 W, it can power at most 12 ports simultaneously (370/30 ≈ 12).
  3. The number of interfaces required, whether fiber ports are needed, managed or unmanaged, and speed (10/100/1000M).
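The port-budget arithmetic in point 2 can be sketched as follows (an illustrative helper, not a vendor tool):

```python
import math

def max_powered_ports(total_budget_w, per_port_w, port_count):
    """How many ports can draw PoE power at once, given the switch's
    total PoE budget and the per-port power draw."""
    return min(port_count, math.floor(total_budget_w / per_port_w))

# 24-port switch with a 370 W PoE budget, from the example above
af_ports = max_powered_ports(370, 15.4, 24)  # 802.3af budget: all 24 ports
at_ports = max_powered_ports(370, 30.0, 24)  # 802.3at budget: only 12 ports
```
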

Example: How to Choose POE Switch for Security Monitoring and Wireless Coverage[edit]

There are many types of PoE switches: Fast Ethernet, partial-gigabit and full-gigabit, managed and unmanaged, and with different numbers of ports. Choosing a suitable switch requires weighing all of these together. Take a project that requires high-definition monitoring as an example.

  • Step 1: Choose a Standard PoE Switch
  • Step 2: Select Fast or Gigabit Switch

In an actual solution, the number of cameras and parameters such as camera resolution, bit rate and frame rate must be considered together. Mainstream monitoring equipment manufacturers such as Hikvision and Dahua provide professional bandwidth calculation tools; users can work out the required bandwidth with them and select a suitable PoE switch.
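As a rough stand-in for such vendor calculators, the sizing logic can be sketched as follows (the 30% headroom factor is an illustrative assumption, not a vendor figure):

```python
def required_uplink_mbps(camera_count, bitrate_mbps, headroom=1.3):
    """Rough uplink estimate: the sum of camera bitrates plus ~30%
    headroom for bursts and protocol overhead (headroom factor is an
    illustrative assumption)."""
    return camera_count * bitrate_mbps * headroom

# e.g. 20 HD cameras at 4 Mbps each need ~104 Mbps, which exceeds a
# 100 Mbps uplink, so a gigabit switch would be chosen
need = required_uplink_mbps(20, 4)
```
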

  • Step 3: Select af or at standard PoE switch

Select according to the power of the monitoring equipment. For example, if a camera of a well-known brand draws at most 12 W, an af-standard switch is sufficient; a high-definition dome camera drawing up to 30 W requires an at-standard switch.

  • Step 4: Select the number of ports on the switch

PoE switches come in 4-, 8-, 16- and 24-port versions, etc.; the choice should weigh the power, number and location of the monitored equipment, the switch's power budget, and price.

Advanced Functions of Surveillance Systems[edit]

Surveillance Cloud Storage[edit]

Concept[edit]

The cloud storage function of a monitoring system takes cloud computing technology as its core. A single storage device has limited applicability; a distributed, networked system can instead provide massive service resources and data processing collaboratively, with cloud storage management and storage at its center. In practical applications, cloud storage technology can store and manage huge amounts of data while virtualizing and standardizing the access method, helping storage devices expand capacity and, on the basis of improved storage capability, reasonably reduce storage costs. Video surveillance cloud storage is a concept extended from cloud computing: through cluster applications, grid technology or distributed file systems, a large number of different types of storage devices in the network are made to work together via application software, jointly providing data storage and business access functions externally.

Classification[edit]

1 Ordinary home consumer cameras

These products are based on P2P connection and access, each with its own P2P platform. The camera joins the network by wire or Wi-Fi and connects through the platform in P2P fashion; a mobile app or computer client can then preview the camera's video in real time. These P2P monitoring platforms can also offer a cloud storage function, i.e. access to cloud storage space provided by IDC vendors. Consumers buy storage of a given retention length and size, store their camera's video there, and access it at any time through the same app or client. The core of this process is network traversal for the camera and the P2P connection.

2 Conventional professional video surveillance systems

① Via FTP, video data, pictures, etc. are written to the cloud storage space.

② Direct network connection (DAS or NAS): video data is written straight into the storage server over the IP network. This method is mainly used for private cloud storage; generally there is a dedicated video surveillance network to guarantee the security level of the network and the speed at which data is written.

The security video surveillance platform software is installed directly on the cloud storage server, or the cloud storage server is attached to the platform as a media storage and forwarding node, equivalent to expanding the platform's storage nodes.

③ Object cloud storage: considering the network environment and transmission speed, this is generally used to store alarm snapshots or short alarm videos. Object storage is accessed over the HTTP protocol: the service stores objects in key-value form, and users access an object's content uniquely by its name (key). Object cloud storage involves the concepts of storage space (Bucket), object/file (Object), and access key (AccessKey).
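The Bucket/Object/Key model described above can be sketched with a toy in-memory store (a real object storage service is reached over HTTP and authenticated with an AccessKey; this sketch omits both, and all names are illustrative):

```python
class ObjectStore:
    """Toy in-memory sketch of the bucket/object/key model: each
    bucket maps object keys to their stored bytes."""

    def __init__(self):
        self.buckets = {}

    def put_object(self, bucket, key, data):
        """Store `data` under (bucket, key), creating the bucket if needed."""
        self.buckets.setdefault(bucket, {})[key] = data

    def get_object(self, bucket, key):
        """Retrieve an object's content uniquely by its key."""
        return self.buckets[bucket][key]

# An alarm snapshot stored and fetched by key, as in the use case above
store = ObjectStore()
store.put_object("alarm-snapshots", "cam01/2024-01-01T00-00.jpg", b"jpeg-bytes")
```
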

Technologies[edit]

A video surveillance cloud storage system is a collection of multiple devices, applications, and services working together, and its implementation is predicated on the development of multiple technologies. According to the characteristics of video surveillance cloud storage and its application areas, the main video surveillance cloud storage technologies involve storage virtualization, distributed file system, cluster storage, centralized storage management, heterogeneous platform collaboration, automatic hierarchical storage, and of course, deduplication, data compression, and other technologies.

1 Storage Virtualization

Storage virtualization is most commonly understood as the abstracted representation of storage hardware resources. By integrating one or more target services or functions with other additional functions, useful and fully functional services are provided in a unified manner. The idea is to separate the logical image of a resource from the physical storage, providing a simplified, seamless virtual view of the resource for systems and administrators.

2 Distributed File System

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. HDFS provides high throughput for application data and is suitable for applications with large data sets. HDFS relaxes some POSIX requirements to allow streaming access to file system data.

HDFS has a master/slave architecture. A cluster has one name node, the master control server, which manages the file system namespace and coordinates client access to files. There are also a number of data nodes, typically one per physical node, each responsible for storage management on the physical node where it runs.
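The master/slave split can be sketched as follows: a single name node holding only metadata (file → block list, block → replica locations), with data nodes holding the blocks themselves. This is a simplified illustration of the idea, not the real HDFS API, and the tiny block size and replica count are for demonstration only:

```python
class DataNode:
    """Slave: stores actual data blocks on one physical node."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.blocks = {}                      # block id -> bytes


class NameNode:
    """Master: holds only the namespace and block placement, never file data."""

    def __init__(self, data_nodes, block_size=4, replication=2):
        self.data_nodes = data_nodes
        self.block_size = block_size
        self.replication = replication
        self.namespace = {}                   # path -> ordered list of block ids
        self._next_block = 0

    def write(self, path, data):
        block_ids = []
        for i in range(0, len(data), self.block_size):
            bid = self._next_block
            self._next_block += 1
            # place each block on `replication` data nodes, round-robin
            for r in range(self.replication):
                node = self.data_nodes[(bid + r) % len(self.data_nodes)]
                node.blocks[bid] = data[i:i + self.block_size]
            block_ids.append(bid)
        self.namespace[path] = block_ids

    def read(self, path):
        out = b""
        for bid in self.namespace[path]:
            # any replica will do; take the first data node holding the block
            node = next(n for n in self.data_nodes if bid in n.blocks)
            out += node.blocks[bid]
        return out


nodes = [DataNode(i) for i in range(3)]
nn = NameNode(nodes)
nn.write("/video/day1.dat", b"surveillance-frames")
```

Reading the file back goes through the name node for metadata, then straight to whichever data node holds each block, which is why the name node never becomes a data bottleneck.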

3 Clustered Storage

Clustered storage aggregates the storage space of multiple storage devices into a storage pool that presents a unified access interface and management interface to application servers. Through this interface, applications can transparently access and utilize the disks on all storage devices, giving full play to the performance and disk utilization of the devices. Data is stored on, and read from, multiple storage devices according to certain rules in order to obtain higher concurrent access performance.
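The rule most commonly used to spread data across a pool is striping: successive chunks go to successive devices round-robin, so reads and writes hit all devices concurrently. A minimal sketch (illustrative only, with lists standing in for disks):

```python
def write_striped(data, devices, chunk=4):
    """Split data into chunks and distribute them round-robin across devices."""
    for seq, i in enumerate(range(0, len(data), chunk)):
        devices[seq % len(devices)].append((seq, data[i:i + chunk]))


def read_striped(devices):
    """Reassemble: gather chunks from all devices and restore original order."""
    chunks = sorted(c for dev in devices for c in dev)
    return b"".join(part for _, part in chunks)


pool = [[], [], []]                  # three "disks" in the storage pool
write_striped(b"0123456789ABCDEF", pool, chunk=4)
```

Because every device ends up holding part of the data, a sequential read of the whole file becomes several parallel reads in a real clustered system.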

4 Centralized storage management

The video surveillance cloud storage management platform must support cross-data-center deployment and management, including cross-data-center user access scheduling, data migration, off-site data storage backup, and other functions.

Centralized management is also supported: the management platform is deployed in the central server room of the cloud, while storage nodes can be deployed in the surrounding server rooms, so that the storage equipment of each branch server room can be managed and scheduled by the platform in a unified way.

5 Automatic hierarchical storage

Automatic hierarchical storage refers to the function of migrating data blocks between different disk types and RAID levels. It balances performance against space usage, quickly puts data in the right place, and avoids so-called hotspots.
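The tiering decision itself can be sketched very simply: blocks whose access count exceeds a threshold migrate to the fast tier, cold blocks migrate down. The threshold value and tier names below are illustrative:

```python
def rebalance(blocks, hot_threshold=100):
    """blocks: {block_id: {"tier": str, "hits": int}} -- retier in place.

    Frequently accessed ("hot") blocks move to SSD; cold blocks to SATA.
    """
    for meta in blocks.values():
        meta["tier"] = "ssd" if meta["hits"] >= hot_threshold else "sata"


blocks = {
    "b1": {"tier": "sata", "hits": 500},   # hotspot -> should move up to SSD
    "b2": {"tier": "ssd",  "hits": 3},     # cold    -> should move down to SATA
}
rebalance(blocks)
```

Real arrays run this kind of policy periodically and also account for migration cost, but the core decision is this hit-count comparison per block.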

In the hierarchical data storage structure, storage devices are generally tape libraries, disks or disk arrays, etc., and disks can be divided into FC disks, SCSI disks, SATA disks, and many others according to their performance, while flash storage media (non-volatile random access memory (NVRAM)) can also be used as a higher level in the hierarchical data storage structure because of its higher performance.

Structure[edit]

The structure model of a cloud storage system consists of 4 layers: storage layer, base management layer, application interface layer, and access layer.

1 The storage layer is the most basic part of cloud storage. Storage devices can be FC Fibre Channel storage devices, IP storage devices such as NAS and iSCSI, or DAS storage devices such as SCSI or SAS. Storage devices in cloud storage are often large in number and geographically distributed, connected via WAN, Internet, or FC Fibre Channel networks. On top of the storage devices is a unified storage device management system that enables logical virtualization management of storage devices, multi-link redundancy management, and status monitoring and fault maintenance of hardware devices.

2 The base management layer is the core of cloud storage and the most difficult part to implement. It realizes collaboration between the multiple storage devices in cloud storage through technologies such as clustering, distributed file systems, and grid computing, so that many storage devices can provide a single service to the outside world with larger, stronger, and better data access performance. At the same time, a variety of data backup and disaster recovery technologies and measures ensure that data in the cloud storage will not be lost, guaranteeing the security and stability of the cloud storage itself.

3 The application interface layer is the most flexible and changeable part of cloud storage. Different cloud storage operators can develop different application service interfaces and provide different application services according to the actual business type: for example, video monitoring application platforms, IPTV and video-on-demand platforms, network drive platforms, remote data backup platforms, etc.

4 The access layer is where users of the cloud storage system connect. Any authorized user can log into the cloud storage system through a standard public application interface to use cloud storage services. Different cloud storage operators provide different types of access and different means of access.

Prerequisite for realization[edit]

1 Development of broadband network

Users connect to cloud storage through broadband access such as ADSL and DDN. Only if the broadband network is sufficiently developed can users obtain enough transmission bandwidth to move large volumes of data and truly benefit from the cloud storage service.

2 WEB2.0 technology

The core of Web 2.0 technology is sharing. Only through Web 2.0 technology can users of cloud storage achieve centralized storage and sharing of data, documents, pictures, and audio/video content across devices such as PCs, cell phones, and mobile multimedia terminals.

3 The development of application storage

Application storage is a kind of storage device that integrates application software into the storage device itself. It provides not only data storage but also application functionality, and can be regarded as a combination of server and storage device.

4 Clustering technology, grid technology, and distributed file system

Clustering technology, distributed file systems, grid computing, and related technologies are needed to achieve collaboration between multiple storage devices, so that many storage devices can provide the same service to the outside world with greater and better data access performance.

5 CDN content distribution, P2P technology, data compression technology

A CDN content distribution system and data encryption technology ensure that data in the cloud storage is not accessed by unauthorized users, while a variety of data backup and disaster recovery technologies ensure that the data will not be lost, guaranteeing the security and stability of the cloud storage itself.

Working process of system - The mode of actively acquiring the data stream[edit]

Video surveillance system: this system operates mainly on top of the cloud storage service so that the video surveillance function can be realized. In general, the video surveillance devices are installed and connected to the cloud storage system, the video data they capture is stored in the cloud storage system, and the relevant monitoring software is then used to manage and call up the video data.

1 The client in the monitoring center sends a signaling stream over the IP network to the storage management node in the cloud storage center, and the platform issues a recording plan.

2 The storage management node of the cloud storage center synchronizes the video plan to the storage node in the virtualized storage resource pool.

3 The storage node sends a stream-fetching command to the front-end device over the IP network, and the digital front-end returns the video data stream over the IP network.

4 The storage node synchronizes the stream information to the storage management node and, at the same time, sends the video data stream to the management server of the monitoring center over the IP network.
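The four steps above can be sketched as message passing between the components. This is a toy model of the flow, not a real surveillance API; all class names are illustrative:

```python
class FrontEndDevice:
    """Camera / digital front end."""

    def fetch_stream(self):
        return "video-data-stream"           # step 3: front end returns stream


class StorageNode:
    """Node in the virtualized storage resource pool."""

    def __init__(self, front_end):
        self.front_end = front_end
        self.plan = None

    def sync_plan(self, plan):               # step 2: plan pushed down
        self.plan = plan

    def record(self):                        # step 3: pull stream from camera
        stream = self.front_end.fetch_stream()
        return {"plan": self.plan, "stream": stream}


class StorageManagementNode:
    """Cloud storage center's management node."""

    def __init__(self, storage_node):
        self.storage_node = storage_node

    def receive_plan(self, plan):            # step 1: client sends recording plan
        self.storage_node.sync_plan(plan)    # step 2: sync to storage node
        return self.storage_node.record()    # step 4: stream info synced back


mgmt = StorageManagementNode(StorageNode(FrontEndDevice()))
result = mgmt.receive_plan({"camera": "cam01", "hours": "00:00-24:00"})
```

The key property of the "actively acquiring" mode is visible in the sketch: it is the storage node, not the client, that pulls the stream from the front-end device.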

Specific application scenarios[edit]

1 Massive video data applications

At this stage, video surveillance applications across industry sectors demand high definition and networking; especially in the context of smart city construction, the growing demand for monitoring systems in buildings makes the application of cloud storage technology imminent. A video surveillance system relying on cloud storage can use its greater storage capacity to respond effectively to the rapid growth of data volumes. For example, storage pools built from multiple petabyte-level storage products form a large storage space for effective storage and management of the video data in a building.

In addition, the growth of video data further promotes the development and upgrading of intelligent analysis and image retrieval functions in video surveillance. If data supervision still relies on manual processing, it is difficult to achieve significant control effectiveness, and monitoring management costs even increase because of the large number of personnel required. Therefore, with cloud storage, video surveillance systems can achieve efficient and convenient storage and management of massive video data through rapid retrieval and, through preset policies combined with the relevant algorithms, achieve intelligent analysis of video data. Intelligent analysis can also be extended through technology integration: automation, Internet of Things, and other systems operate in synergy with intelligent analysis, so that system decision-making is supported by a more accurate and comprehensive basis.

2 Software clustering and deployment

The application of cloud storage technology is premised on the organic combination of software platform deployment, real-world business, intelligent analysis, and so on. A monitoring scheme with "platform + IP SAN" at its core, deployed with the monitoring center as the carrier, can integrate real-world business with platform deployment, highlighting the all-round video query capability of the video surveillance system; on the basis of intelligent processing, video data can be efficiently aggregated and queried by key features, ensuring effective video query. Based on the relevant server resources and combined with cloud computing, video clip extraction can be achieved, and intelligent analysis realizes video editing and condensation, truly reflecting the comprehensive combination of the video surveillance system with actual business development and providing users with more convenient and comprehensive monitoring and management services.

At the same time, the server is the carrier of system construction, enabling cloud deployment of video surveillance systems. Because basic servers are general-purpose, there is no need to develop equipment hardware again: enterprises only need to develop on the cloud architecture and deploy the platform software and intelligent analysis software to give the video surveillance system virtualization, grid computing, and other capabilities.

Advantages[edit]

1 Storage security. Local storage keeps video on an SD card; once the device is stolen, the video stored on the SD card is also lost and cannot be recovered, so earlier footage cannot be viewed. Cloud storage is the opposite: when a smart camera records, the resulting surveillance video is uploaded directly to the cloud server, so even if the camera is vandalized, the video stored in the cloud remains intact and can provide solid evidence of wrongdoing.

2 Storage flexibility. With local SD card storage, once the smart camera loses power or is disconnected, it is impossible to view the video remotely, which is inconvenient. With cloud storage, the video data uploaded to the cloud is not limited by time, location, or network: it can be viewed anytime, anywhere.

3 Massive storage. Storage space and video clarity are closely related: a day of high-definition video recording requires about 10 GB of storage, and once the SD card is full, the device automatically overwrites the oldest video. Cloud storage offers a large amount of storage space; users can choose different cloud storage packages according to the video length they actually need, regardless of the performance of the storage medium. After activating the storage service, there is no need to worry about running out of space, and ultra-high-definition video can be recorded with confidence.
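The capacity arithmetic and the overwrite behaviour can be sketched together: at roughly 10 GB of HD footage per day (the figure given above), a fixed-size card behaves like a bounded buffer that drops the oldest day first. A minimal illustration:

```python
from collections import deque

GB_PER_DAY = 10                              # HD recording rate from the text


def retention_days(card_capacity_gb, gb_per_day=GB_PER_DAY):
    """Whole days of footage that fit before the card starts overwriting."""
    return card_capacity_gb // gb_per_day


# A 64 GB card keeps ~6 days; recording day 7 overwrites day 1,
# just like the SD card's automatic oldest-first overwrite.
card = deque(maxlen=retention_days(64))
for day in range(1, 8):
    card.append(f"day-{day}")
```

After the loop, the buffer holds only days 2 through 7: day 1 was silently overwritten, which is exactly the data-loss risk the cloud package avoids.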

4 Stable and reliable. When local storage is full, the memory card by default "cleans" the oldest video data by overwriting it, which wears the card heavily: a memory card supports only a limited number of write cycles and has a limited lifetime. The greater hidden risk is that once the memory card is damaged, all of the stored data is unrecoverable. Cloud storage has no such concern: video is uploaded and saved to the cloud in real time, which is comparatively reliable and stable.

In recent years, cases of users relying on smart cameras to prevent theft and catch thieves have become common; at the same time, cloud storage of surveillance video, coupled with real-time push alarm notifications, provides convenience and security for people's lives and home safety.

Surveillance Location Tracking[edit]

Concept[edit]

A target positioning and monitoring system is a system that uses image technology to detect a specific target, automatic control technology to locate the target, and video transmission technology to transmit real-time video. The intelligent video technology mentioned in video surveillance mainly refers to the automatic analysis and extraction of key information from the video source. If the camera is regarded as the human eye, then the intelligent video system or equipment can be regarded as the human brain. With the powerful data processing capability of the computer, intelligent video technology can analyze the massive data in the video stream at high speed, filter out information that users do not care about, and provide only the critical information useful to the monitor.

Detecting a specific target with image processing technology is known as object recognition: in the computer field, image processing is performed on acquired images, after which the objective can be found. In automatic control, automatic target positioning can be understood as using specific equipment to indicate the target in the actual scene. The target positioning system thus mainly comprises image acquisition, image processing (target recognition), target positioning, and other parts, and its main technical indicators include image processing speed, recognition accuracy, and positioning speed.

Video transmission technology covers two different types: wireless video transmission and wired video transmission. Wireless video transmission is based on wireless communication technology and involves video data processing, compression, transmission, and decompression. Its main technical indicators are video recovery quality and transmission robustness. Video compression and decompression is the core technology of a video transmission system: under the limits of network bandwidth and processor speed, video compression seeks to reduce the bit rate as much as possible to achieve real-time, effective transmission. Video compression is therefore an important condition for good video transmission.

Classification[edit]

Monitoring systems at home and abroad mainly include digitally controlled analog video monitoring and digital video surveillance. After years of development, digitally controlled analog video monitoring has become very mature, with stable performance and wide applicability, but it has some limiting factors, such as weak functional expansibility. With the rapid development of computer technology and image and video technology, digital video monitoring, a newer type of video monitoring technology, has developed rapidly; it solves many shortcomings of the analog system, but still needs further improvement and optimization.

In general, the development of video surveillance systems can be divided into four trends: front-end integration, digital surveillance, video networking, and system integration. Digitization is the precondition for networking, and networking is the basis for system integration, so the two biggest trends in video surveillance are digitization and networking. Digitization means that the information flowing in the system changes from analog to digital, with digital coding, compression, and open protocols; it improves data transmission, the system control mode, and the form of the video surveillance system, solving many drawbacks. Networking refers to the structure of the system evolving from centralized toward distributed: a multi-layer, hierarchical structure using a real-time multitasking, multi-user distributed operating system with microkernel technology to realize preemptive task scheduling. A modular and serialized design gives the system strong versatility, good openness, and flexible configuration, so digitization and networking are the inevitable trends of monitoring system development.

Structure[edit]

A target positioning and monitoring system can be divided into three main parts: the server, network transmission, and the client. Structurally, it can be divided into six modules: image acquisition, image processing, target positioning, data transmission, network transmission, and video monitoring. The server side comprises image acquisition, image processing, target positioning, and data processing; the video monitoring module belongs to the client side.

In terms of component design, the server consists of a camera, an image acquisition card, a laser, a galvanometer, and a PC. The wireless network consists of a wireless router and a network card. The configuration of the client is the same as that of the server.

Working process of system[edit]

The working process of the whole system is as follows:

1. Start the server and client hosts so that server-side image acquisition and positioning can run, then launch the monitoring application.

2. The server-side image acquisition and positioning application starts and broadcasts its own IP to client monitoring applications within the local area network (LAN); the client connects to the server's IP to establish the network connection.

3. Image acquisition and positioning run whether or not a client is connected. After image acquisition, the target is detected by image processing technology; the laser is then turned on, and positioning technology controls the galvanometer to complete the indicating function.

4. The client connects to the server, and the server transmits the video to the client.

5. The client is responsible for decompressing and parsing the video, and displays the corresponding video on the client monitoring application interface. The processed video is recorded and can be played back.

Schematic design[edit]


1. Design of target recognition algorithm

YOLO (You Only Look Once) is a target detection and localization algorithm based on deep neural networks; its biggest characteristic is that it runs fast and can be used in real-time systems. YOLO is an end-to-end one-stage algorithm in which candidate region proposal and target identification are accomplished in one step. The YOLOv3 algorithm builds on YOLOv1 and YOLOv2: the network structure is deepened and multi-scale features are used for object detection, which greatly improves prediction accuracy and the ability to recognize small objects while maintaining the speed advantage.

Baidu PaddlePaddle is China's first self-developed, open-source deep learning framework, and it can be easily deployed to devices of different architectures. PaddleDetection, the PaddlePaddle target detection module, implements a variety of mainstream target detection algorithms and provides rich data augmentation strategies, network components (such as backbone networks), and loss functions. Its integrated model compression and cross-platform high-performance deployment capabilities make it our preferred deep learning framework. The YOLOv3 model can be improved on the PaddlePaddle framework to achieve a smooth recognition effect and good recognition accuracy.

2. Video input/output design

OpenCV is used to obtain the USB camera video stream. There is no need to save the video with the recognition result boxes drawn, so the video-saving code is modified: a 3-second video clip is looped in memory, and when the target detection algorithm detects a target object, its appearance and departure times are recorded, and the video is saved to the long-term storage device starting from 3 seconds before the target was detected. If the target object is not detected for 3 seconds, saving stops and the system returns to cyclic detection mode.
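The loop described above can be sketched with a bounded buffer that always holds the most recent 3 seconds of frames, so saving can start 3 seconds *before* the detection. This is an illustrative sketch, with integers standing in for frames and the detector stubbed out as a boolean argument:

```python
from collections import deque


class PreEventRecorder:
    """Keep the last `pre_s` seconds in memory; on detection, persist them
    plus everything until `post_s` seconds pass with no target seen."""

    def __init__(self, fps, pre_s=3, post_s=3):
        self.pre = deque(maxlen=fps * pre_s)   # rolling pre-event buffer
        self.post_frames = fps * post_s
        self.countdown = 0
        self.saved = []                        # stands in for long-term storage

    def add_frame(self, frame, target_detected):
        if target_detected:
            self.saved.extend(self.pre)        # flush the 3 s before detection
            self.pre.clear()
            self.saved.append(frame)
            self.countdown = self.post_frames  # keep saving 3 s afterwards
        elif self.countdown > 0:
            self.saved.append(frame)
            self.countdown -= 1
        else:
            self.pre.append(frame)             # back to cyclic buffering


rec = PreEventRecorder(fps=10)
for i in range(50):                            # 5 s with no target
    rec.add_frame(i, target_detected=False)
rec.add_frame(50, target_detected=True)        # target appears once
for i in range(51, 91):                        # 4 s, target gone
    rec.add_frame(i, target_detected=False)
```

At 10 fps, the saved clip contains 30 pre-event frames, the detection frame, and 30 post-event frames; everything older simply cycled through the in-memory buffer and was discarded.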

3. Model selection

We use the pp-Yolo_mobilenetv3_large lightweight model as the baseline for model retraining, and retrain the pre-trained model with specific wildlife data, which effectively improves generalization. Compared with YOLO-Tiny, MobileNet has deeper layers but maintains a small amount of computation; it is a lightweight network structure optimized for mobile devices, launched by Google. Using MobileNetV3 as the backbone strikes a balance between precision and efficiency. MobileNetV3 combines the depthwise separable convolution of V1 and the inverted residual structure of V2, uses a lightweight attention mechanism to optimize performance, and uses the h-swish activation function instead of swish to reduce computation.

4. Frame extraction detection

Since adjacent frames are essentially the same, we do not need to invoke the target detection algorithm on every frame. There are two commonly used frame extraction methods: extracting frames by proportion, and extracting frames by FPS. In the intelligent video surveillance system, to keep target recognition smooth, we extract frames by FPS. A parameter fps_rate_max is added in the video processing module; if the FPS of the video is larger than fps_rate_max, frame extraction is carried out according to fps_rate_max. Frame extraction greatly reduces the computational pressure on the system and improves detection fluency.
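The fps_rate_max rule can be sketched as a simple sampling test per frame index (an illustrative implementation, not the project's actual code):

```python
def should_detect(frame_idx, video_fps, fps_rate_max):
    """Run detection on roughly fps_rate_max frames per second of video."""
    if video_fps <= fps_rate_max:
        return True                            # slow stream: detect every frame
    step = round(video_fps / fps_rate_max)     # e.g. 30 fps / 10 -> every 3rd
    return frame_idx % step == 0


# A 30 fps stream capped at 10 detections per second keeps every 3rd frame.
detections = sum(should_detect(i, 30, 10) for i in range(30))
```

One second of 30 fps video yields exactly 10 detector invocations, a 3x reduction in inference load with no visible loss, since adjacent frames are nearly identical.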

5. Model deployment

The completed model was trained on the deep learning server, compressed and quantized using PaddleSlim, and deployed to the Jetson TX2 using TensorRT. NVIDIA TensorRT is a high-performance deep learning inference library designed by NVIDIA for NVIDIA GPUs; it provides low latency and high throughput for deep learning inference applications, typically increasing the inference speed of image classification tasks by 3-6 times. PaddlePaddle integrates TensorRT in the form of subgraphs. In tests, the system reached only 14 fps without PaddleSlim and TensorRT acceleration, and 42 fps with acceleration.

Advantages[edit]


1. Reliable

The intelligent video monitoring system has completely changed the previous mode of monitoring and analysis by security workers. It analyzes the monitored picture through the intelligent video module embedded in the network camera or video server of the front-end equipment. The intelligent algorithm compares the picture with the security model defined by the user, and security threats are immediately reported to the monitoring center.

2. Accurate

The intelligent video monitoring system can effectively improve reporting accuracy and greatly reduce false alarms and missed alarms. The network cameras and video servers of an intelligent video surveillance system combine strong image processing capability with advanced intelligent algorithms, so users can define the characteristics of security threats more precisely. For example, a user can define a virtual warning line and specify that only crossing the line in the entering or leaving direction generates an alarm, while activity that merely passes by the line does not.
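The virtual warning line reduces to a small geometry test: a tracked object crosses the line when its consecutive positions lie on opposite sides of the line segment. A minimal sketch in 2-D pixel coordinates (the sign of the orientation test also distinguishes entering from leaving):

```python
def orient(p, q, r):
    """>0 if r is left of the directed line p->q, <0 if right, 0 if collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])


def crossed_line(prev_pos, curr_pos, a, b):
    """True if the move prev_pos -> curr_pos crosses warning segment a-b."""
    return (orient(a, b, prev_pos) * orient(a, b, curr_pos) < 0 and
            orient(prev_pos, curr_pos, a) * orient(prev_pos, curr_pos, b) < 0)


line_a, line_b = (0, 0), (0, 10)               # vertical warning line
alarm = crossed_line((-1, 5), (1, 5), line_a, line_b)      # walks through
no_alarm = crossed_line((1, 5), (2, 5), line_a, line_b)    # passes alongside
```

In a deployed system the positions come from the per-frame tracker, and the sign of `orient(line_a, line_b, prev_pos)` tells which side the object came from, so "enter-only" or "leave-only" alarm rules are a one-line check on that sign.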

3. Faster response speed

The intelligent video system has greater intelligence than an ordinary network video monitoring system. It can identify suspicious activities, such as someone leaving a suspicious object in a public place or someone staying too long in a sensitive area, and can bring the relevant monitoring picture to the attention of security personnel before a security threat occurs, giving the security department enough time to prepare for potential threats.

4. Expanded use of video resources

Whether in a traditional video surveillance system or a network video surveillance system, the video images monitored by the system can only be used for security surveillance; in an intelligent video system, these video resources can have more uses. For example, video cameras in shopping malls can be used to enhance customer service: the intelligent video system can automatically identify the characteristics of customers and notify customer service staff to serve them in time. Intelligence, digitization, and networking are the inevitable trends of video surveillance development, and the emergence of intelligent video monitoring is the direct embodiment of this trend. Compared with ordinary network video surveillance equipment, intelligent video surveillance equipment has more powerful image processing capability and intelligence, providing users with more advanced video analysis capability; it can greatly improve the capability of the video surveillance system and let video resources play a greater role.

Linkage with radar systems[edit]

Necessity[edit]

After linkage is configured between the radar and the video surveillance system at the monitoring front end, users can preset the linkage trigger conditions through software. When a target detected by the radar triggers the linkage, the front-end video station can lock onto and continuously track the target, and the linked video system captures real-time video images of the radar-detected target, achieving an intuitive surveillance effect.

Features[edit]

The radar-based positioning video linkage system links the target spatial/geographic location data acquired by the radar with the rotation angle and lens focal length of the integrated pan-tilt video monitoring imaging system through computer software, achieving real-time viewing of the scene image at that spatial/geographic location. The system consists of a positioning radar, the integrated pan-tilt video telescope monitoring imaging system, a radar linkage tracking server, radar linkage tracking software, a network communication transmission system, etc.

Compared with the conventional video surveillance system, the radar and video linkage system has the following advantages.

First, fast target detection speed.

Second, it can detect multiple targets at the same time.

Third, it is little affected by meteorological conditions and has high detection accuracy.

Fourth, the system is fully digitalized, networked, and intelligent, with a high degree of informatization.

Working Principle[edit]

(1) Target positioning by radar[edit]

The radar's active scanning beam performs directional scanning and ranging of the target, yielding the target's polar coordinates (bearing, distance) relative to the radar station. Combined with the absolute geodetic coordinates (latitude and longitude) of the radar station itself, a coordinate conversion then yields the absolute geodetic coordinates (latitude and longitude) of the target.
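Under a local flat-earth approximation, adequate for radar ranges of a few kilometres, the conversion from the radar's polar measurement to the target's latitude/longitude can be sketched as follows (a simplified illustration; a production system would use a proper geodetic datum):

```python
import math

EARTH_R = 6371000.0                            # mean earth radius, metres


def polar_to_geodetic(lat0, lon0, bearing_deg, dist_m):
    """Radar at (lat0, lon0); target at (bearing, distance) -> target lat/lon.

    Flat-earth approximation: bearing measured clockwise from true north.
    Fine for short ranges, increasingly wrong for long ones.
    """
    north = dist_m * math.cos(math.radians(bearing_deg))
    east = dist_m * math.sin(math.radians(bearing_deg))
    lat = lat0 + math.degrees(north / EARTH_R)
    lon = lon0 + math.degrees(east / (EARTH_R * math.cos(math.radians(lat0))))
    return lat, lon


# 1 degree of latitude is ~111.2 km: heading due north that far adds ~1 degree.
lat, lon = polar_to_geodetic(0.0, 0.0, 0.0, 111195.0)
```

The `cos(lat0)` factor accounts for meridians converging toward the poles: a degree of longitude is shorter than a degree of latitude everywhere except the equator.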

(2) Radar equipment-video equipment relative to coordinate conversion[edit]

From the absolute geodetic coordinates (latitude and longitude) of the radar equipment, of the video equipment to be linked, and of the target to be tracked, triangulation is used to solve the geodetic coordinates and derive the relative polar coordinates (bearing, distance) of the target with respect to the video equipment to be linked.
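The inverse step, deriving the target's bearing and distance as seen from the video station out of the two sets of geodetic coordinates, can be sketched under the same flat-earth assumption (illustrative only):

```python
import math

EARTH_R = 6371000.0                            # mean earth radius, metres


def geodetic_to_relative_polar(lat_obs, lon_obs, lat_tgt, lon_tgt):
    """Bearing (degrees clockwise from true north) and distance (metres)
    of the target as seen from the observing video station.

    Flat-earth sketch: valid for the short baselines of a linkage system.
    """
    north = math.radians(lat_tgt - lat_obs) * EARTH_R
    east = (math.radians(lon_tgt - lon_obs)
            * EARTH_R * math.cos(math.radians(lat_obs)))
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    return bearing, math.hypot(north, east)


# Target 1 degree due east of the station: bearing 90, distance ~111.2 km.
bearing, dist = geodetic_to_relative_polar(0.0, 0.0, 0.0, 1.0)
```

Feeding the radar-derived target coordinates and the video station's own coordinates through this function is exactly the "radar equipment to video equipment" conversion the linkage needs before it can drive the pan-tilt head.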

(3) Video synchronous linkage tracking[edit]

Using the target's polar coordinates (bearing, distance) relative to the video station, the parameters of the video equipment's pan-tilt head are adjusted: the target's relative bearing is converted into a relative scale value on the head's code plate, the relative scale is corrected against true north to obtain the absolute code plate scale value for the target, and the system commands the head to slew to the specified bearing and lock onto the target. Then, from the distance of the target relative to the video equipment, the lens potentiometer level is calculated from the lens magnification curve, and the system sets the lens magnification accordingly, ensuring that the locked target appears neither too small nor too large in the video picture.
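The two corrections in this step — turning the relative bearing into an absolute code-plate value via the head's true-north offset, and picking a lens magnification from the distance via the magnification curve — can be sketched as follows. The north offset and the curve points are illustrative values, not calibration data from any real head:

```python
def plate_absolute(north_offset_deg, relative_bearing_deg):
    """Absolute code-plate scale for the pan head, corrected to true north."""
    return (north_offset_deg + relative_bearing_deg) % 360.0


def zoom_from_distance(dist_m, curve):
    """Piecewise-linear lookup on a (distance, magnification) curve."""
    curve = sorted(curve)
    if dist_m <= curve[0][0]:
        return curve[0][1]
    for (d0, z0), (d1, z1) in zip(curve, curve[1:]):
        if dist_m <= d1:
            return z0 + (z1 - z0) * (dist_m - d0) / (d1 - d0)
    return curve[-1][1]                        # beyond the curve: max zoom


# Head mounted 350 deg off true north; target 20 deg right of the head's zero.
plate = plate_absolute(350.0, 20.0)            # absolute plate value: 10 deg
# Illustrative magnification curve: farther targets need more zoom.
curve = [(100, 1.0), (500, 5.0), (2000, 20.0)]
zoom = zoom_from_distance(300, curve)
```

The modulo handles wraparound at north, and the interpolated zoom is what keeps a target at any calibrated distance at a roughly constant size in the frame.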

After the system locks the target as above, if the tracked target is moving, intelligent analysis can also be used for fine video tracking of the target (see below). During fine tracking, even if the communication line between the radar station and the video station is interrupted, the video station can still maintain continuous tracking of the target.

Therefore, for a target detected by the radar within the monitoring range of the system, the linkage response can be triggered automatically according to the user's preset conditions: the PTZ head is driven autonomously to slew to the target location, lock onto the target, and begin continuous automatic tracking.

(4) Supported linkage equipment types (heavy-duty head, ball head, laser searchlight)[edit]

①A variable-speed network heavy-duty head with high load capacity, high precision, and good typhoon resistance, using a worm gear structure and grinding design.

②It is suitable for video surveillance occasions that need auto-focus, excellent image color reproduction, and a full frame rate. It can provide low-bit-rate 24-hour 360-degree HD infrared video images and overall solutions for security monitoring sites such as city security, road monitoring, airports, railroads, ports, campuses, scenic spots, stations, neighborhoods, squares, guard posts, parks, large venues, etc.

③ A laser searchlight that adopts a 50 W laser as its light source, offering high luminous efficiency, good color rendering, low power consumption, and a long service life. With a zoom-light design, the homogenization angle of the laser source can be adjusted from 5° to 100°.

(5) The types of radar supported (large radar, 4G small radar, PAR micro radar)[edit]

① A large radar platform that supports fusion and analysis of information from X-band radar, millimeter-wave radar, LIDAR, CCTV video surveillance, cloud-mirror telescope monitoring, cloud-mirror photoelectric integration systems, AIS, ADS-B, GPS/BeiDou, ocean buoys, UAVs, and other active and passive sensors. Each fused target is given a unified identifier, the detection and identification sources of a target can be distinguished, and the fused information is intelligently displayed on an integrated GIS platform.

② A 4G small radar characterized by small size, ultra-low microwave radiation power, and no magnetron. It is suitable for installation on vehicles, ships, and yachts of all sizes, poses no radiation hazard to the human body, and is one of the safest radars in the world.

③ A PAR micro radar widely used in substations, power plants, water plants, factories, heavy industrial sites, industrial and mining enterprises, material warehouses, schools, airports, aquaculture and animal-husbandry sites, key cultural-relic sites, military facilities, prisons, guardhouses, border lines, and other places with a strong demand for perimeter security.

Advantages[edit]

The following takes the use of a video surveillance system in a factory as an example of these advantages.

1 Locate and monitor assets, vehicles, tools, products

Location awareness is changing the way businesses operate: it tells an enterprise not only what assets it has but also where they are and when they move. This means being able to see how fast assets move through the facility, whether they are being processed as planned, where vehicles are located on-site, and more. Location tracking helps improve productivity because assets and personnel can be tracked, potential bottlenecks can be identified, and the efficiency of resource usage can be reviewed.

2 Prevent loss and misplacement of equipment

Being able to track and know the location of assets reduces equipment loss and misplacement. Previously, only items in the warehouse could be tracked; if an item was moved or taken away, managers had no way of knowing. Now there is no need to rely solely on inventory counts, since each asset's exact location is visible and can be tracked as it moves.

3 Locate employees during distress calls

One of the biggest benefits of a location system is safety: knowing where employees are makes it possible to help them in an emergency. For example, if a building catches fire, a video surveillance system can quickly identify who is in immediate danger and direct them to a safe place based on their location.

4  Control the use of devices

Access control over devices can help managers raise security to new standards, allowing entrepreneurs to ensure that unauthorized personnel do not enter dangerous or highly secure areas.

5 Enhance and streamline operations and processes  

This can mean finding a faster route for vehicles, understanding which machines are underperforming, or tracking how space is used. Leveraging location data helps make smarter decisions, improve resource allocation, reduce costs, and increase the speed and quality of production and asset performance.

Service Cluster Technology[edit]

Concept[edit]

With the rapid development of information technology, the informatization of enterprises and institutions in finance, taxation, and similar fields has continuously deepened. The expansion and concentration of business subsystems have created a need for comprehensive management of large numbers of hosts, databases, and application systems. On one hand, the normal operation of these systems must be ensured for the daily business of a large staff: early-warning work must be carried out promptly to find problems and eliminate hidden dangers, and once a serious problem occurs, an alarm must be raised immediately with the relevant information so that the system administrator can locate and handle the problem as quickly as possible. On the other hand, daily operation and maintenance experience must be accumulated to provide decision support for the comprehensive utilization and planning of system resources. A web-based server-cluster integrated monitoring system can meet these requirements: it solves the problems of real-time monitoring, effective management, and rational utilization of resources, and helps system administrators with daily management, operation, and maintenance work.

Structure[edit]

Basic Information Collection Layer[edit]

The main structure of the system is shown in the figure below.

File:System Architecture of Information Collection Layer.png

Information collection is the basis of the entire system. Information is collected into the monitoring database through both active and passive methods. Depending on monitoring needs, the collection period can be minutes, hours, or days. At the host level, the usage of the host's CPU, memory, file system, and other resources is collected periodically, and an active detection sequence pings the host and checks its ports. At the database level, a detection program periodically checks standard indicators such as database connectivity, tablespace usage, number of connections, backup time, and various hit rates, as well as user-defined detection indicators.
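The active host checks just described might be sketched in Python as follows. This is an analogy only (the original system is written in C#): a TCP connection attempt stands in for the ping probe, since raw ICMP requires elevated privileges, and all names are illustrative.

```python
import datetime
import shutil
import socket

def check_port(host, port, timeout=2.0):
    """Active probe: attempt a TCP connection to the given port
    (a stand-in for the ping/port checks described in the text)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def collect_filesystem_usage(path="/"):
    """Collect file-system usage as a percentage, like the periodic
    host resource collection described above."""
    usage = shutil.disk_usage(path)
    return round(100.0 * usage.used / usage.total, 2)

def collect_sample(host, port, path="/"):
    """Build one collection record destined for the monitoring database."""
    return {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "host": host,
        "port_open": check_port(host, port),
        "fs_used_pct": collect_filesystem_usage(path),
    }
```

In the real system such samples would be inserted into the monitoring database on a timer rather than returned as dictionaries.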

Logical analysis and judgment layer[edit]

Building on the large amount of information collected by the bottom layer, the logical analysis layer is responsible for analyzing and processing it. According to a threshold information table, this layer judges the CPU, memory, file system, ping, and port status of each host, as well as the connectivity, tablespace, and other indicator statuses of each database. Each status is classified as normal, warning, or error. In practice, some alerts are not useful: for example, the threshold table may treat a high file-system occupancy rate as a warning, but a host administrator may confirm that it has no adverse impact on the system. To keep useless warnings from disturbing the administrator, such messages can be shielded through a blocking-item configuration table. When a host is restarted or undergoes other maintenance, it can likewise be shielded in the blocking-item configuration table to avoid generating excessive alarm information.
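The normal/warning/error judgment with a blocking-item table could look like the following sketch; the threshold values and the `THRESHOLDS` and `BLOCKED` tables are invented for illustration.

```python
# Assumed threshold table: indicator -> (warning level, error level)
THRESHOLDS = {"cpu_pct": (80, 95), "fs_pct": (85, 95), "mem_pct": (80, 90)}

# Blocking-item table: (host, indicator) pairs whose alerts are shielded,
# e.g. after an administrator confirms the condition is harmless.
BLOCKED = {("host-03", "fs_pct")}

def judge(host, indicator, value):
    """Return 'normal', 'warning', or 'error' for one collected value,
    honouring the blocking-item configuration table."""
    if (host, indicator) in BLOCKED:
        return "normal"          # shielded: do not disturb the admin
    warn, err = THRESHOLDS[indicator]
    if value >= err:
        return "error"
    if value >= warn:
        return "warning"
    return "normal"
```

In the real system both tables would live in the monitoring database and be editable by administrators.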

Result presentation layer[edit]

After processing by the logical analysis layer, the information and status of each indicator are stored in the database, and the presentation layer displays these results to the user through web pages. Its main functions are: (1) Information display: data, status, and queries over the historical information and status of each indicator. (2) Alarm reminders: warnings and errors are shown in different text colors, for example warnings in pink and errors in bright red; a separate alarm page pops up in the foreground together with a warning sound, so that the administrator can respond in time. (3) Monthly reports: monthly report information is generated automatically for each host and database. (4) Decision support: appropriate graphics, tables, numerical analysis, and statistical analysis help users make decisions. With daily, weekly, and monthly charts of CPU and memory usage, administrators can analyze server load and plan and configure resources reasonably. At the database level, the system uses tablespace statistics from recent months to calculate each tablespace's monthly growth and the usable time of the remaining space, so that the administrator can track tablespace usage in time, respond accordingly, and plan storage.
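The tablespace forecast mentioned under decision support amounts to a simple growth extrapolation. A hedged sketch, assuming growth is taken as the average month-over-month increase:

```python
def months_remaining(monthly_sizes_gb, capacity_gb):
    """Estimate how many months of headroom remain in a tablespace,
    given its size at the end of recent months and its capacity.
    Growth is taken as the average month-over-month increase."""
    deltas = [b - a for a, b in zip(monthly_sizes_gb, monthly_sizes_gb[1:])]
    growth = sum(deltas) / len(deltas)
    if growth <= 0:
        return float("inf")      # not growing: no exhaustion forecast
    free = capacity_gb - monthly_sizes_gb[-1]
    return free / growth
```

For example, a tablespace that grew 10 GB to 16 GB over four months against a 36 GB capacity would be forecast to last about ten more months.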

The key technology of system realization[edit]

Realization of Basic Information Collection Layer[edit]

The system uses the C# language to implement the functions of each layer. The basic information collection layer defines five classes to support host information collection: a disk information class, a network IP address class, a network information class, a process information class, and a system information class. The disk information class obtains the disk name, total size, and available space; the network IP address class obtains the host's physical address and IP address; the network information class obtains the network card name, type, description, maximum rate, index, bytes received, bytes sent, connectivity status, and MAC address; the process information class obtains the process ID, process name, processor time used, physical memory allocated, and the full path of the process's main module; the system information class initializes the CPU counter and, on request, obtains CPU and memory information, partition information, the process list, IP addresses, network information, and application titles. The system uses WMI (Windows Management Instrumentation) technology to obtain information about local or remote hosts. Microsoft's .NET platform fully supports WMI: based on the .NET Framework managed platform, WMI.NET encapsulates the underlying WMI details and provides a unified, object-oriented interface for reading WMI object properties and calling their methods. During development, the System.Management namespace is imported first; it provides classes for accessing management objects, through which system resource information can be queried. This system mainly uses objects of the ManagementObject, ManagementClass, ManagementObjectSearcher, and ManagementObjectCollection classes. The host information collection process is triggered and executed periodically by a timer.
Based on the collected data, SQL insert statements are constructed, stored in a batch file, uploaded to the database server through FTP, and set to run automatically. The code that uploads and executes the SQL statements is implemented with the ProcessStartInfo and Process classes: ProcessStartInfo holds the information used to start a process, and Process starts the local system process.

Implementation of Analysis and Judgment Layer[edit]

After the collected data is aggregated into the monitoring database, the analysis and judgment layer completes the analysis work. Processing at this layer consists of three steps: acquiring the basic data, performing analysis and judgment, and updating the status information. Taking host port status as an example, the code analyzes and judges the port status based on the data in the port basic-configuration table and saves the results to the port status record table; here Public and DataAccess are custom classes that respectively provide public methods for obtaining dynamic host information and for executing database queries and updates.
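The three-step flow just described (read the port basic-configuration table, judge each port, update the port status record table) might be sketched as follows. The original implementation is C#; this Python analogy uses plain dicts to stand in for the two database tables.

```python
import socket

def port_status(host, port, timeout=2.0):
    """Step 2: probe the port and classify its status."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "normal"
    except OSError:
        return "error"

def refresh_port_table(status_table, port_config):
    """Steps 1 and 3: iterate over the port basic-configuration entries
    and write each result into the port status record table."""
    for (host, port) in port_config:
        status_table[(host, port)] = port_status(host, port)
    return status_table
```

A production version would also record timestamps and feed the results through the blocking-item configuration described earlier.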

Implementation of the result display layer[edit]

The result display layer uses .NET to create web applications that present functional interfaces as web pages. System administrators can view the overall status of each cluster and monitor server host performance and database status. The system combines ASP.NET and JavaScript to implement the information display function. On the application server side, IIS serves as the publishing platform for the ASP.NET pages and web directories. Because the page needs to fetch cluster status information from the background database every 3 minutes, three ASP.NET AJAX controls (ScriptManager, UpdatePanel, and Timer) are used to refresh the display periodically without reloading the page. Status information is displayed using both the DataList compound data control and the simple Label control. The DataList control showing each server's status is bound to the state data in the system database by setting its DataSource property and calling its DataBind() method; the Label control showing the cluster's integrated state has the data bound directly to its Text property.

Surveillance video playback[edit]

Concept[edit]

Surveillance video playback refers to tracing specific events that have occurred and supervising the working environment and production order of a site through precise recall and playback of the monitoring images stored in a video surveillance system.

Classification[edit]

Classified by development period, video surveillance systems that support surveillance video playback can be divided into three generations.[7]

The first generation of video surveillance is the traditional analog closed-circuit video surveillance system (CCTV).

This era began in the 1970s, with the video switching matrix as one of the core pieces of equipment. Multi-screen splitters, video matrices, analog monitors, tape recorders, and similar devices were the main equipment of the analog video surveillance era. Transmission mainly used coaxial cable, controlled by a host and carried in analog form. Video was stored mainly on VCRs.

The second generation of video surveillance is the "analog-digital" surveillance system (DVR).

The "analog-digital" monitoring system is a half-analog, half-digital scheme with the digital hard disk video recorder (DVR) at its core. The DVR is the iconic product of the semi-digital video surveillance era: it digitally encodes and stores analog video signals. Coaxial cable still carries the video signal from the camera to the DVR, which supports both recording and playback and offers IP network access with limited functions.

The third generation of video surveillance is a complete IP network video surveillance system IPVS.

Cameras in this system have a built-in web server and provide an Ethernet port directly. Instead of producing continuous analog video signals, they generate data encoded in JPEG or MPEG-4/H.264/H.265 that any authorized client can access, monitor, record, and copy from anywhere on the network. Because the system is open and decentralized, it has no core hardware device; video encoders, network cameras, and a central management platform are the main equipment of the fully digital video surveillance era. Video content analysis technology is mainly used to complete network video storage, playback, and transmission.

Classified by development stage, network video surveillance systems can be divided into the following four categories.

1 Close monitoring

In the early stage of video surveillance, coaxial cables mainly carried video images from front-end monitoring points into the monitoring center, where they were played on display devices. As monitoring points multiplied, display and recording equipment had to increase greatly as well, raising construction cost and management difficulty. The introduction of video matrix technology effectively solved these problems, allowing large numbers of video images to be switched, displayed, allocated, and shared. However, analog video is suited only to short-distance transmission; it cannot achieve long-distance, large-capacity transmission or multi-center, multi-level networking, which limits its range of application.

2 Network monitoring

In the mid-1990s, the emergence of optical transceivers solved the problem of long-distance transmission of video images. Digital optical transceivers with multiplexing not only improved the quality and capacity of video transmission but also enriched the types of transmission services, and networked video monitoring was realized over RS-232/422 control links. However, because RS-232/422 data rates are low and nodes cannot be numbered arbitrarily or managed remotely, the scale of networked video surveillance was restricted to a certain extent.

3 IP network monitoring

The continuing maturity of the network virtual matrix marks the entry of network video surveillance technology into the IP network era. The network virtual matrix uses the IP network as its medium and the TCP/IP protocol, forming a monitoring platform built from network video codecs, network switches, routers, network video storage devices, and a network video management platform. It achieves unified management of video across the entire network and supports flexible background operations.

4 Optical fiber network monitoring

Analog video can be digitally encoded without requiring video compression. The digitally processed video signal can then be transmitted over an optical fiber network, enabling front-end integration, networked transmission, digital processing, and system integration.

Common problem[edit]

The following problems often occur during surveillance video playback. Below is an analysis of the principles behind common problems, with solutions for reference.

Time skip: Part of the time period is missing from the monitoring playback.

Common causes

1. The camera's power supply is abnormal: when power is insufficient there is no image, and the video recorder skips the period with no image.

2. The signal-line connection is poor: because the cable run is too long or the copper content is low, the signal attenuates, the image freezes or intermittently disappears, and playback skips that time period.

3. The camera fails intermittently. While it is failing, recording is unsuccessful, so playback skips that time period.

4. The power line or signal line is in poor contact.

5. The hard disk is faulty, and the hard disk has bad sectors.

6. The monitoring system's equipment lost power and stopped working, so playback skips that period.

Some surveillance cameras lose video signal.

Common causes

1 The surveillance camera itself is faulty; it can be replaced with a new one.

2 The power supply is insufficient, causing the switching power supply to overheat and enter thermal protection; the power supply can be replaced.

3 The transmission line carries stray power, charging the electrolytic capacitors inside the camera and the distributor and blocking the video signal output; an anti-interference device can be added.

Video Structured Description Technology[edit]

Concept[edit]

If a technology could read video in place of a human and convert it into a descriptive language that both computers and people understand, then entering text keywords into an information management system could directly retrieve the corresponding image or video clip, greatly enhancing business effectiveness; with personalization, the system could even push the desired image or video clip automatically. This is the realistic and urgent need that video surveillance has for video structured description technology. Video structuring is a technique for extracting information from video content. Video structural description aims to parse video content into text information using spatiotemporal segmentation, feature selection, object recognition, and semantic web technology[8]. From the perspective of the data-processing flow, video structuring and description technology transforms surveillance video into information that humans and machines can understand, and further into intelligence used in public security operations, realizing the transformation of video data into information and intelligence.
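The text-keyword retrieval this paragraph envisions can be illustrated with a toy sketch. The `RECORDS` list and its attribute names are invented examples of structured descriptions, not a real VSD schema.

```python
# Illustrative structured records extracted from surveillance video:
# each clip is described by text attributes a person or machine can read.
RECORDS = [
    {"clip": "cam01_0830.mp4", "object": "vehicle", "color": "red",
     "action": "parked", "location": "gate 2"},
    {"clip": "cam01_0845.mp4", "object": "person", "color": "blue",
     "action": "running", "location": "gate 2"},
    {"clip": "cam02_0850.mp4", "object": "vehicle", "color": "white",
     "action": "moving", "location": "lot B"},
]

def search(keywords):
    """Return clips whose textual description contains every keyword:
    the kind of text-driven retrieval the paragraph describes."""
    kws = [k.lower() for k in keywords]
    hits = []
    for rec in RECORDS:
        text = " ".join(str(v) for v in rec.values()).lower()
        if all(k in text for k in kws):
            hits.append(rec["clip"])
    return hits
```

For example, `search(["red", "vehicle"])` returns only the first clip, while `search(["gate 2"])` returns both clips recorded there.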

The Background of the Emerging Video Structural Description Technology[edit]

Video surveillance is an integrated system with strong prevention capabilities, widely used in military, customs, police, fire-fighting, airport, railway, urban transport, and many other public settings. It is an important part of the security system because its information content is visual, accurate, timely, and rich, and it has become the main tool of security work for those reasons. However, with the large-scale construction of video surveillance systems around the world, clues and useful information often cannot be retrieved promptly from the mass of video big data, which greatly decreases efficiency in public security governance and crime prediction. The growing need for video-based applications raises the importance of organizing and parsing video content, yet accurate management and understanding of video content at the semantic level is still insufficient: the semantic gap between low-level features and high-level semantics cannot be bridged by either semi-automatic or manual methods[9]. To escape this dilemma, a semantics-based model named video structural description (VSD) is adopted for representing and organizing the content of videos. Video structural description aims to parse video content into text information.

The Structure of Video Structured Description Technology[edit]

The VSD technology generally adopts four sets of advanced algorithms: feature selection[10], object recognition[11], semantic web technology[12][13], and spatiotemporal segmentation[14]. The resulting interpreted text conveys the semantics of the video content for machine or human understanding. As the figure

shows, the VSD framework contains three sections. The first is video intelligent analysis, covering behavior analysis, object detection, and target tracking; with its aid, key information such as vehicles, pedestrians, and abnormal actions can be mined from the tremendous video data and transformed into a standardized format. The second is the construction of a policing repository database, which serves as domain knowledge (knowledge reasoning, data mining, information description) and assists crime prediction by drawing on millions of real cases. The last section is the underlying technical support: virtualization and cloud computing technologies provide an efficient computing environment for all the techniques above, as well as the storage environment for the data.

VSD is commonly designed as a hierarchical semantic data model with three layers[15][16]. Specifically: (1) Pattern recognition layer: this layer extracts and represents the content of large numbers of videos. Unlike existing video-content extraction and representation methods, VSD applies a domain ontology involving basic relations, principles, and events; temporal and spatial relations are set in the event and concept definitions, which users can employ to represent and annotate the semantic relations between objects in videos. (2) Surveillance video resources layer: whereas the pattern recognition layer lets VSD extract the content of a single video, this layer links video data through their semantic relations, much as the web uses hyperlinks to link resources[17]. (3) Application layer: the two layers above concentrate on processing video resources via their semantics; the application layer focuses on the demands of related applications, in which video resources are clustered and integrated.

The Applications of VSD Technology[edit]

VSD based surveillance systems[edit]

Video surveillance systems are widely used in many different fields. Nowadays, video surveillance plays a crucial role in guaranteeing security at buildings and communities such as airports, banks, and schools. With the availability of high-speed broadband wireless networks and the proliferation of inexpensive cameras, deploying thousands of cameras for security surveillance has become economically and technically feasible. VSD technology is expected to be applied in crime resolution, crime prevention, and crime protection. Much current research in video surveillance focuses on algorithms that analyze video and other media from multiple sources to automatically detect significant events. Surveillance systems based on the proposed VSD model are emerging, such as activity monitoring, intrusion detection, and pedestrian counting; tracing back to their underlying technology, these applications build on results from the pattern recognition layer. Because the video resources are organized by their association relations, deeper applications can be developed.

VSD based monitoring scope panorama stitching[edit]

Traditional video person tracking mainly studies low-level image pixel features to detect and recognize pedestrians, and relies on relatively simple pixel features for tracking, so its person-matching accuracy is low. VSD-based monitoring-scope panorama stitching instead tracks pedestrians by matching attributes from the structured description of the video, improving accuracy and potentially transforming complex video retrieval into a text search as straightforward as Baidu or Google[18].
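Attribute-based matching, as opposed to pixel-feature matching, can be sketched as a weighted comparison of structured attributes. The attribute names and weights below are illustrative assumptions, not a documented VSD attribute set.

```python
def attribute_match_score(query, candidate, weights=None):
    """Score how well a candidate detection matches a queried pedestrian
    by comparing structured attributes rather than raw pixels.
    Attribute names and weights are illustrative."""
    weights = weights or {"upper_color": 2.0, "lower_color": 2.0,
                          "gender": 1.0, "bag": 1.0}
    score = sum(w for attr, w in weights.items()
                if query.get(attr) == candidate.get(attr))
    return score / sum(weights.values())

def best_match(query, candidates):
    """Pick the candidate across camera views with the highest score."""
    return max(candidates, key=lambda c: attribute_match_score(query, c))
```

A real system would derive the attributes automatically from the pattern recognition layer and combine this score with appearance features.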

On the current market, a typical high-speed spherical camera is configured with 90 degrees of vertical rotation, 335 degrees of horizontal rotation, and autofocus and zoom functions. In practice, it is extremely inefficient and tedious in a multi-camera surveillance scene for several users to adjust the cameras separately, or to adjust them one by one. To cope with this, a spherical-camera-based surveillance-scene panorama stitching and automatic positioning system has been introduced. It helps users understand the panoramic scene and monitor a given spot instantly through automatic camera positioning. For example, the figure on the right presents a case application of crime detection based on VSD[19].

VSD based video searching[edit]

Content-based image retrieval (CBIR) applies computer vision techniques to the image retrieval problem: searching for specific digital images in large-scale databases. "Content-based" means the search analyzes the contents of the image rather than descriptions, keywords, or tags; applied to video searching, "content" typically refers to textures, profiles, shapes, and colors. CBIR is attractive in part because of the phenomenon that most web-based image search engines depend directly on metadata, which yields large quantities of garbage and noise. The CBIR method also saves the human effort of tagging images with keywords, which is neither efficient nor economical. A CBIR-equipped system therefore performs better, filtering images by their content and returning more accurate results. In summary, ontology-based video searching is similar to content-based retrieval in that both concentrate on the content of videos; here, the output of the pattern recognition layer is used for video searching.
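A minimal illustration of content-based comparison, assuming images are given as lists of (r, g, b) pixels: a coarse joint color histogram plus an L1 distance, one of the simplest content features CBIR systems use.

```python
def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels into a coarse joint histogram: a simple
    stand-in for the content features (color, texture) CBIR compares."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms: 0 means identical."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

Real CBIR systems combine many such features (texture, shape, deep embeddings) and index them for large-scale search.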

The Popular Research Topics in VSD[edit]

Facing practical public security issues, and unlike general computer vision research, the crucial objects of focus in surveillance videos are vehicles and persons.

1) For VSD-based person analysis of surveillance videos, the key point is person re-identification: matching pedestrians across non-overlapping camera views in a video surveillance network. Around this topic, various sub-technologies are emerging, such as behavior analysis, cross-camera tracking, and object retrieval. The ultimate barrier in this field is finding effective metrics and representations to measure similarity among objects captured by different cameras. A great many measurements have been proposed; among the most competitive is adapting the deep convolutional neural network architecture of Krizhevsky et al., pre-trained on a subset of ImageNet[20]. (For further interest, please refer to the research in[21][22].)
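The similarity measurement at the heart of cross-camera person matching is often a simple vector metric over deep features. A sketch using cosine similarity; the short feature vectors here are placeholders, not real CNN outputs.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors: a common choice
    of metric when matching the same person across cameras."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_gallery(query_vec, gallery):
    """Rank (label, feature) gallery detections by similarity to the query."""
    return sorted(gallery,
                  key=lambda item: cosine_similarity(query_vec, item[1]),
                  reverse=True)
```

In a re-identification system, the top-ranked gallery entries would be the candidate matches for the queried pedestrian.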

2) For VSD-based vehicle analysis of surveillance videos, CNNs also dominate performance on image retrieval problems (for example, a high-performing AlexNet model trained on the vehicle subset of ImageNet). To date, vehicle brand classification, vehicle verification, and vehicle retrieval have all achieved high accuracy with CNN algorithms.

3) Cloud computing is proposed as a promising next-generation computing paradigm because it promises consumers access to applications and data from a "cloud" wherever and whenever needed. In recent years, intelligent video systems such as urban video surveillance have attracted large numbers of newly emerging applications.

Face Recognition[edit]

The present era is one of rapid development of information technology and the Internet. Against this background, people's basic living standards are constantly improving, and alongside those standards they are increasingly concerned about their own security; the demand for video surveillance systems is therefore also increasing[23]. At the same time, video surveillance technology aligns well with the rapid development of the Internet industry and promises great business opportunities and economic benefits, making it a digital product of great interest in the current information technology industry[24].

Face Recognition in surveillance system

In recent years, video surveillance technology has developed very rapidly and has wide applications and demonstration platforms in military, economic, and many other areas. The technology is used to manage the security of neighborhoods, count visitors at tourist attractions, and monitor the operation of bank ATMs[25]. At the same time, to effectively prevent and deter crime, demand for unattended automatic video surveillance systems keeps increasing; the design goal of such intelligent systems is to minimize manual operation, steadily reduce the system's dependence on human labor, and automate the detection and tracking of people or core targets in complex environments through software[26]. At present, however, many parts of a video surveillance system still rely directly on manual work: in most cases the acquired video is treated only as evidence and does not play an active, real-time role in the system, the whole process requires a great deal of manpower and energy, the computer screens must be monitored manually in real time, and the surveillance video must be interpreted and processed manually to reach basic decisions[27].

This manual approach imposes a heavy workload on surveillance personnel, especially when many screens must be watched simultaneously, making it impossible to monitor all video effectively. Because abnormal events in monitored scenes are rare, manual monitoring wastes human resources, and a lapse in attention by on-duty personnel can easily cause a large number of missed alarms, which is unacceptable in scenarios that depend on surveillance [28].

Face recognition is a biometric technology that identifies people based on facial features. The term, also called portrait recognition or facial recognition, covers a series of related techniques that use a camera to capture an image or video stream containing a face, automatically detect and track the face in the image, and then recognise the detected face. Face recognition is a popular research area in computer technology and belongs to biometric recognition, which distinguishes individual organisms (generally, specifically humans) by their biological characteristics. Features studied in biometric recognition include the face, fingerprint, palm print, iris, retina, voice, body shape, and personal habits (for example, how hard and how often one taps a keyboard, or a signature). The corresponding technologies are face recognition, fingerprint recognition, palm print recognition, iris recognition, retina recognition, speaker recognition (voice can be used both for identity recognition and for speech-content recognition, but only the former is biometric), body shape recognition, keystroke recognition, and signature recognition.

Three key technologies are shown below.

1. Feature-based face detection: detects faces using colour, contour, texture, structure, or histogram features.

2. Template-matching-based face detection: extracts face templates from a database, matches the captured face image against the template library using a chosen matching strategy, and determines the face size and location from the correlation score and the size of the matched template.

3. Statistics-based face detection: collects large numbers of positive ("face") and negative ("non-face") samples and trains the system intensively with statistical methods to detect and classify face and non-face patterns.

The four main features are as follows.

1. Geometric features: uses distances and ratios between facial feature points as features; recognition is fast, memory requirements are relatively small, and sensitivity to lighting is reduced.

2. Model-based features: extracts face image features according to the probabilities of different feature states.

3. Statistical features: treats the face image as a random vector and uses statistical methods to identify different facial feature patterns; typical examples are eigenfaces, independent component analysis, and singular value decomposition.

4. Neural network features: uses large numbers of neural units to store and associatively recall face image features, recognising face images according to the probabilities of different neural unit states.
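The statistical (eigenface) approach in item 3 can be sketched briefly. The following is a minimal illustration, not a production implementation: the "faces" are random vectors standing in for flattened face images, and the image size and number of components are arbitrary assumptions.

```python
import numpy as np

# Toy eigenface sketch: treat each "face" as a flattened vector and use
# PCA (via singular value decomposition) to obtain a low-dimensional
# statistical feature space. All data here is random and illustrative.

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))      # 20 hypothetical 8x8 face images, flattened

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The right singular vectors are the principal components ("eigenfaces").
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:5]                    # keep the 5 strongest components

def project(face):
    """Project a flattened face image onto the eigenface space."""
    return eigenfaces @ (face - mean_face)

features = project(faces[0])
print(features.shape)                  # (5,)
```

Recognition then compares these low-dimensional feature vectors instead of raw pixels, which is what makes the statistical approach memory- and computation-efficient.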


Structure[edit]

In terms of architecture, the system is essentially a two-tier structure. The decentralised front-end monitoring hosts are responsible for face recording, data collection, and storage, while the centralised remote management end can play back this information.

The front-end monitoring host, once networked, completes video acquisition: video recording, card-number overlay, face detection and storage, and data retrieval.

The back-end works after networking: the intelligent monitoring hosts of the city branches are connected to the provincial bank monitoring centre through various network connection methods.

Workflow[edit]

1. Video processing/face capture: finds faces in video images, evaluates image quality, and submits candidates to the face recognition matching module;

2. Face recognition matching module: extracts feature templates from the captured photos and compares them against the blacklist database;

3. Blacklist photo collection: creates templates and adds the template data to the blacklist database;

4. Alarm display: according to the comparison result, displays the alarm or passes the alarm information to a PDA or other handheld terminal.
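The four-step workflow above can be sketched in a few lines. This is a hedged illustration only: the blacklist templates, the feature vectors, and the similarity threshold are all made-up values, and a real system would extract features with a trained model rather than use raw numbers.

```python
# Minimal sketch of the workflow: capture -> feature extraction ->
# blacklist comparison -> alarm display. All data is hypothetical.

BLACKLIST = {"person_042": [0.9, 0.1, 0.3]}   # template database (made-up vectors)
THRESHOLD = 0.95                               # similarity threshold (assumption)

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def process_capture(features):
    """Compare a captured face's features against the blacklist."""
    for person_id, template in BLACKLIST.items():
        if similarity(features, template) >= THRESHOLD:
            return f"ALARM: match for {person_id}"
    return "pass"

print(process_capture([0.88, 0.12, 0.31]))   # close to the stored template -> alarm
print(process_capture([0.0, 1.0, 0.0]))      # no match -> pass
```

In a deployed system the alarm string would instead be routed to the display or to a handheld terminal, as described in step 4.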

Face Recognition System[edit]

The system consists of five main components: face image acquisition, face detection, face image pre-processing, face image feature extraction, and matching and recognition.

Face image acquisition: different face images, such as static images, dynamic images, and images at different positions and with different expressions, can all be captured through the camera lens. When the user is within range of the capture device, it automatically searches for and captures the user's face image.

Face detection: Face detection is in practice mainly used for pre-processing of face recognition, i.e. to accurately mark the position and size of a face in an image. Face images contain a wealth of pattern features, such as histogram features, colour features, template features, structure features and Haar features. Face detection is about picking out the useful information from these features and using them to achieve face detection.

The mainstream face detection method uses the Adaboost learning algorithm based on the above features. The Adaboost algorithm is a classification method that combines a number of weak classifiers into a new, much stronger classifier.

The face detection process uses the Adaboost algorithm to select some rectangular features (weak classifiers) that best represent the face, construct the weak classifiers into a strong classifier according to a weighted voting process, and then connect a number of trained strong classifiers in series to form a cascaded classifier with a cascade structure, effectively improving the detection speed of the classifier.
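The weak-to-strong combination described above can be illustrated with a toy AdaBoost run on one-dimensional data. This is only a sketch of the boosting idea: the decision stumps stand in for the rectangular Haar-like features of a real detector, and the data and thresholds are invented for illustration.

```python
import math

# Toy AdaBoost: weak classifiers are decision stumps "x < t ?", and the
# strong classifier is their weighted vote. +1 = "face", -1 = "non-face".

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [+1, +1, +1, -1, -1, -1]

def make_stump(t, sign):
    return lambda x: sign if x < t else -sign

candidates = [make_stump(t, s) for t in (1.5, 2.5, 3.5, 4.5, 5.5) for s in (+1, -1)]

weights = [1.0 / len(X)] * len(X)
strong = []                            # list of (alpha, stump) pairs

for _ in range(3):                     # three boosting rounds
    # Pick the stump with the lowest weighted error on current weights.
    def weighted_error(h):
        return sum(w for w, xi, yi in zip(weights, X, y) if h(xi) != yi)
    h = min(candidates, key=weighted_error)
    err = max(weighted_error(h), 1e-10)
    alpha = 0.5 * math.log((1 - err) / err)
    strong.append((alpha, h))
    # Re-weight: misclassified samples get heavier, correct ones lighter.
    weights = [w * math.exp(-alpha * yi * h(xi)) for w, xi, yi in zip(weights, X, y)]
    total = sum(weights)
    weights = [w / total for w in weights]

def strong_classify(x):
    return 1 if sum(a * h(x) for a, h in strong) > 0 else -1

print([strong_classify(x) for x in X])   # matches y on this toy set
```

A cascade, as used in face detection, would chain several such strong classifiers in series so that easy non-face regions are rejected early and cheaply.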

Face image pre-processing: Image pre-processing for faces is the process of processing images based on face detection results and ultimately serving for feature extraction. The original image acquired by the system is often not directly usable due to various conditions and random interference, and must be subjected to image pre-processing such as grey-scale correction and noise filtering at an early stage of image processing. For face images, the pre-processing process mainly includes light compensation, grey scale transformation, histogram equalisation, normalisation, geometric correction, filtering and sharpening of the face image.

The purpose of face image pre-processing is to make further processing of the face image based on the system's detection of the face image, i.e. grey scale adjustment, image filtering, image size normalisation, etc., in order to facilitate the feature extraction of the face image. Face image pre-processing specifically refers to a series of complex processes such as lighting, rotation, cutting, filtering, noise reduction, zooming in and out of the face image captured by the system to make the face image meet the standard requirements for face image feature extraction in terms of lighting, angle, distance, size and any other aspects.


Pre-processing process:

1) Face alignment (to obtain a correctly positioned image of the face);

2) Light compensation, grey scale transformation, histogram equalisation and normalisation of the face image (to obtain a standardised face image of the same size and with the same range of grey scale values);

3) Geometric correction, median filtering (smoothing the image to remove noise) and sharpening.
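Step 2 above, together with size normalisation, can be sketched with NumPy. The 8x8 random array is a stand-in for a real detected face crop, and the 16x16 template size is an arbitrary assumption for illustration.

```python
import numpy as np

# Sketch of grey-scale histogram equalisation plus size normalisation
# of a face image, using a dummy 8-bit array in place of a real crop.

rng = np.random.default_rng(1)
img = rng.integers(50, 200, size=(8, 8), dtype=np.uint8)    # dummy grey image

# Histogram equalisation: map each grey level through the cumulative
# distribution so the output spreads over the full 0..255 range.
hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum()
cdf_norm = (cdf - cdf.min()) / (cdf.max() - cdf.min())      # scale to [0, 1]
equalised = (cdf_norm[img] * 255).astype(np.uint8)

# Size normalisation to a fixed template size (nearest-neighbour resize).
def resize_nn(a, h, w):
    rows = np.arange(h) * a.shape[0] // h
    cols = np.arange(w) * a.shape[1] // w
    return a[rows][:, cols]

standard = resize_nn(equalised, 16, 16)
print(standard.shape)          # (16, 16)
```

A full pipeline would precede this with face alignment and follow it with filtering and sharpening, as listed in steps 1 and 3.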

Face image feature extraction: the features used in face recognition systems are usually classified as visual features, pixel statistical features, face image transform-coefficient features, face image algebraic features, and so on. Face feature extraction, also known as face characterisation, is the process of modelling the features of a face. There are two main types of methods: knowledge-based characterisation, and characterisation based on algebraic features or statistical learning. Knowledge-based methods derive feature data for face classification from the shape of the facial organs and the distances between them; the geometric description of these parts and their structural relationships can serve as important features for recognition, and these features are called geometric features. Knowledge-based face representation mainly includes geometric-feature-based methods and template matching methods.

Matching and recognition: The feature data of the extracted face image is searched and matched with the feature template stored in the database by setting a threshold value, and when the similarity exceeds this threshold, the result obtained by matching is output. Face recognition is the process of comparing the features of the face to be recognised with the obtained face feature template and making a judgement on the identity information of the face based on the degree of similarity. This process is further divided into two categories: confirmation, which is a one-to-one image comparison process, and recognition, which is a one-to-many image matching comparison process.

Recognition algorithms[edit]

In general, face recognition systems include image acquisition, face localisation, image pre-processing, and face recognition (identification or identity finding). The input to the system is typically a single image or a series of images of an unidentified face and a number of images of faces with known identities or corresponding codes in a face database, while the output is a series of similarity scores indicating the identity of the face to be recognised.

Classification of face recognition algorithms

Feature-based recognition algorithms.

Appearance-based recognition algorithms.

Template-based recognition algorithms.

Recognition algorithms using neural network.

Light estimation model theory: a lighting pre-processing method based on gamma grey-scale correction is proposed, and corresponding light compensation and light balancing strategies are applied on the basis of the light estimation model.

Optimised statistical deformation correction theory: optimises face pose based on statistical deformation correction theory. Reinforced iteration theory: an effective extension of the DLFA face detection algorithm.

Unique real-time feature recognition theory: this theory focuses on the intermediate values of real-time face data to achieve the best trade-off between recognition speed and recognition performance.

Problems in face recognition systems[edit]

(1) The illumination problem in face recognition.

Illumination variation is the most critical factor affecting face recognition performance, and the extent to which this problem is solved is decisive for practical face recognition. It is necessary to separate the inherent face attributes from non-face attributes in the image, such as light source, shading and highlights, and to apply targeted lighting compensation in the pre-processing or normalisation stage, so as to eliminate the impact on recognition of shadows and highlights caused by non-uniform frontal lighting;

(2) Face detection and tracking problem.

Face detection is the preliminary work of face identification, while face tracking is the continuous tracking and detection of the motion trajectory and contour changes of the target face in the subsequent frames of the motion sequence according to the results of face detection and localization. A multi-level structured face detection and tracking system in a complex context can use face detection techniques such as template matching, feature sub-faces, and colour information, so that faces rotating in-plane can be detected and faces in motion of any pose can be tracked.

(3) De-duplication problem.

The face recognition monitoring system must quickly detect single and multiple face images in the captured video, automatically remove redundancy by discarding duplicate images, extract the corresponding face image features for rapid comparison, and output the corresponding result information.

(4) The pose problem in face recognition.

The pose problem involves the facial changes caused by the rotation of the head around three axes in the 3D vertical coordinate system, where the depth rotation in two directions perpendicular to the image plane will cause partial loss of facial information. One option is an approach based on pose invariant features, i.e. seeking those features that do not change with pose. Another option is to use a statistically based visual model that corrects the input pose image to a frontal image so that feature extraction and matching can be done in a uniform pose space.

Advantages of face recognition technology[edit]

1. Convenience

Capture equipment is simple and quick to use. Generally, common cameras can be used to capture face images without the need for particularly sophisticated specialised equipment. Image capture can be completed within seconds.

2. Friendliness

The method of identification by face is consistent with human habits and can be used by both humans and machines. Fingerprints, irises and other methods do not have this feature and a person without special training cannot use fingerprints and iris images for identification of other people.

3. Non-contact

The collection of face image information is different from the collection of fingerprint information. Using fingerprints to collect information requires the use of fingers to touch the collection device, which is unhygienic and easily offensive to the user, whereas with face image collection, the user does not need to have direct contact with the device.

4. Expandability

After face recognition, the subsequent data processing and application determine the actual use of the face recognition equipment, for example in access control, commuting card swipes, and terrorist identification; the technology is therefore highly scalable.

Alarm Linkage[edit]

Modern surveillance is used in a wide range of applications, for example community surveillance, restricted fences, prison surveillance, and highway surveillance. In practice, surveillance systems in specific areas such as community fence monitoring, prison access, and community roads at night cannot function in real time, because duty personnel who face multiple static screens for long periods easily become fatigued and lose vigilance, so accidents are not detected and stopped at an early stage.

A distributed video surveillance and linked alarm system can reasonably solve this problem. Images are extracted from the video stream at a fixed period and compared: a large difference between images extracted before and after indicates a change in the video information, and even a small change in the image ratio indicates a change in the surveillance information. The system can then sound a horn to alert duty personnel in time, so that they attend to the changing video and find the problem promptly. In other words, building a linked alarm system into the surveillance system helps to maximize its usefulness.

Monitoring points are usually distributed across different areas or regions. Each monitoring point has local video query and playback rights, but no rights to edit or delete video; the monitoring centre can delete expired videos. The video information of each area is uploaded to the monitoring centre, which manages all video files centrally, classified by time and by area, to ensure the security and integrity of the surveillance video.
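The periodic image-comparison idea behind the linked alarm can be sketched as frame differencing. This is a minimal illustration: the frames are random arrays standing in for video, and both thresholds are assumptions chosen for the example.

```python
import numpy as np

# Sketch of the linked alarm: compare frames extracted at a fixed period
# and raise an alarm when the changed area exceeds a threshold.

CHANGE_THRESHOLD = 0.10     # alarm if >10% of pixels changed noticeably (assumption)

def changed_fraction(prev, curr, pixel_delta=25):
    """Fraction of pixels whose grey level changed by more than pixel_delta."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float((diff > pixel_delta).mean())

def check_alarm(prev, curr):
    return changed_fraction(prev, curr) > CHANGE_THRESHOLD

rng = np.random.default_rng(2)
frame_a = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[:16, :16] = 255                      # simulate an object entering the scene

print(check_alarm(frame_a, frame_a))         # False: nothing changed
print(check_alarm(frame_a, frame_b))         # True: a large region changed
```

A real system would add temporal smoothing and lighting compensation so that gradual illumination changes do not trigger the horn.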

At present, domestic video surveillance is applied in many areas, mainly communities, factories, supermarkets, banks and offices. A number of domestic manufacturers specialize in surveillance hardware such as video capture cards, for example Tianmin, CTV and Aoyu, and countless companies develop monitoring-related software; most of their products offer stable performance and clear pictures and basically meet the requirements of video surveillance. In addition, infrared surveillance technology is widely applied in many fields: because of its thermal-radiation imaging principle, an infrared surveillance system can still obtain clear and satisfactory video even where light is insufficient or in complete darkness. However, the intelligence of domestic monitoring systems is low. On many occasions, monitoring relies on duty personnel and after-the-fact review, and it is difficult to achieve real-time monitoring. For example, large supermarkets have ubiquitous monitoring that covers goods in every corner, with staff on duty, yet theft of supermarket goods still happens repeatedly.

AI Video Analytics[edit]

By integrating the surveillance system with other sensors or image recognition technologies, we will be able to use the advanced features of the surveillance system to warn supervisors of danger or record accidents in advance. Among them, AI Video Analytics plays an important role in the alarm linkage function.

For example, users can install a surveillance system with AI Video Analytics over their private parking spaces. When the system recognizes that a parking space is occupied, it can alert the user in advance, who can then contact the occupant and urge them to leave. At the same time, a system with this function can record the environment: when other people or vehicles approach, key recording can be activated in advance, and if a collision accident occurs, the evidence can be secured more quickly.

We can also set up a surveillance system with this function at the intersection. Once the AI recognizes the vehicle, the camera system can be activated to record the license plate and driver information of the visiting vehicle.

At the same time, this surveillance system with AI Video Analytics will also play an important role in home security protection. We use AI video analysis technology to identify the approach of people. If a person is detected approaching, the facial recognition system can be activated in advance to unlock the door for the owner. If an unrecorded stranger is detected, the administrator will be alerted to report the visit of the stranger.

The function of intelligent video recognition system[edit]

(1) It can distinguish people, animals, vehicles and other objects for detection and tracking, and each camera can monitor 50 different targets at the same time.

(2) It can be connected to a variety of detection equipment such as cameras, access control, RFID, smart fences, GPS, radar, etc., and can be analyzed and integrated.

(3) A virtual boundary can be set.

(4) Security levels can be set according to the security policy (day/night, busy/light traffic, etc.), either for a specific area or for the whole site, and alarm areas can be created or changed.

(5) When an alarm is automatically linked and the PTZ camera image window pops up, the PTZ and lens control icons provided by the system can be used to lock the target manually or automatically, and the alarm can be delivered by voice, email, telephone, pager, etc.

(6) Double-clicking a target with the mouse pops up the linked monitoring image to view the target's details.

(7) It can effectively shield target capture from the effects of sunlight reflection on water, rain and snow, and similar conditions. Single-camera scene video analysis must cope with a variety of lighting and environmental factors, including changes due to shadows, weather and area lighting, as well as changes caused by searchlights, reflections and wind.

(8) Intrusion detection: It can distinguish human-sized intruders, while ignoring small animals and birds.

(9) Counting: Counting in multiple regions of interest and multiple moving directions is achieved on one camera.

(10) Queue management: It can provide comprehensive statistical chart reports to understand traffic, people flow/customer conversion rate, average waiting time, etc.

(11) Abnormal behavior detection: It can distinguish abnormal behaviors such as slipping and running.

(12) Wandering detection: The wandering alarm time can be preset.

(13) Crowded crowd management: In the set area, the number of people is counted in real time, and the alarm is given according to the preset crowded threshold.

(14) Abandoned object detection: in a busy and crowded environment, detects multiple abandoned objects in a camera scene; the minimum object size can be as small as 3% of the image.

(15) Theft detection: it can detect the removal of items in a busy and crowded environment.

(16) Illegal parking detection: It can detect illegal parking in a busy and crowded environment.

(17) Graffiti, posters and vandalism detection.

(18) Camera check function: judges different camera statuses, such as disconnected, poorly focused, damaged, moved, insufficient frame rate, or unusable due to weather such as fog, rain and snow.
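Items (3) and (8) above, the virtual boundary and size-filtered intrusion detection, can be sketched together. All coordinates, sizes, and the boundary position are illustrative values, and a real system would obtain the tracks and areas from a detector rather than hard-code them.

```python
# Sketch of a virtual boundary check that flags human-sized tracked
# objects crossing a line while ignoring small objects such as birds.

BOUNDARY_X = 100            # vertical virtual line in image coordinates (assumption)
MIN_HUMAN_AREA = 400        # ignore detections smaller than this (assumption)

def crosses_boundary(track, area):
    """track: list of (x, y) centroids over time for one tracked object."""
    if area < MIN_HUMAN_AREA:
        return False        # too small to be a person
    sides = [x >= BOUNDARY_X for x, _ in track]
    # A crossing occurred if the object changed sides between two frames.
    return any(a != b for a, b in zip(sides, sides[1:]))

person = [(80, 50), (95, 52), (110, 53)]     # walks across the line
bird = [(80, 10), (120, 12)]                 # crosses, but far too small

print(crosses_boundary(person, area=900))    # True
print(crosses_boundary(bird, area=50))       # False
```

The same side-change test generalises to arbitrary boundary polygons by substituting a point-in-polygon check for the `x >= BOUNDARY_X` comparison.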

Implementation of AI-based video analytics[edit]

Video recognition mainly comprises three stages: front-end video collection and transmission, intermediate video detection, and back-end analysis and processing. It requires the front-end capture camera to provide a clear and stable video signal, since signal quality directly affects the recognition result. The embedded intelligent analysis module then identifies, detects and analyses the video picture, filters out interference, and marks abnormal situations in the picture with targets and trajectories. The intelligent video analysis module is an algorithm based on artificial intelligence and pattern-recognition principles.

Front-end analysis

High-performance video analytics were once server-based because of their high demands on power and cooling capabilities that cameras couldn't provide. But in recent years, algorithm development and ever-increasing processing capabilities of front-end devices have made it possible to run advanced AI-based video analysis tools directly on the front-end. The advantages of front-end analytics applications are clear: they can access uncompressed video material with very low latency, ensuring real-time application, while avoiding the additional cost and complexity of moving data to the cloud for computation. Since fewer server resources are required in the monitoring system, the hardware and deployment costs involved in front-end analysis are also lower. In some applications, it is also possible to combine front-end processing with server processing, with the camera taking care of the preprocessing and then the server taking care of the detailing. Such a hybrid system can help scale analytics applications in a cost-effective manner by processing several camera streams.

AI video analysis module

The AI concept incorporates machine learning algorithms and deep learning algorithms. Both types automatically build a mathematical model, using substantial amounts of sample data (training data), to gain the ability to calculate results without being specifically programmed for it. An AI algorithm is developed through an iterative process, in which a cycle of collecting training data, labeling training data, using the labeled data to train the algorithm, and testing the trained algorithm, is repeated until the desired quality level is reached. After this, the algorithm is ready to use in an analytics application which can be purchased and deployed on a surveillance site. At this point, all the training is done and the application will not learn anything new.
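The iterative train-test cycle described above can be reduced to a toy sketch. Here the "model" is a single threshold fitted to invented one-dimensional data, and the target quality level is an assumption; the point is only the loop structure of train, test, and repeat until the desired quality is reached.

```python
# Toy illustration of the iterative development cycle: adjust the model
# and re-test until accuracy reaches the target. Data is made up.

samples = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
labels = [0, 0, 0, 1, 1, 1]            # labelled training data

def accuracy(threshold):
    preds = [1 if s > threshold else 0 for s in samples]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

TARGET_QUALITY = 0.99                  # desired quality level (assumption)
threshold = 0.0
while accuracy(threshold) < TARGET_QUALITY:
    threshold = round(threshold + 0.1, 1)   # "retrain" with the next candidate

print(threshold, accuracy(threshold))  # 0.6 1.0
```

Once the loop exits, the fitted parameter is frozen, mirroring the point above that a deployed analytics application no longer learns anything new.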

A typical task for AI-based video analytics is to visually detect humans and vehicles in a video stream and distinguish which is which. A machine learning algorithm has learned the combination of visual features that defines these objects. A deep learning algorithm is more refined and can - if trained for it - detect much more complex objects. But it also requires substantially larger efforts for development and training and much more computation resources when the finalized application is used. For well-specified surveillance needs, it should therefore be considered whether a dedicated, optimized machine learning application can be sufficient.

Hardware Acceleration

Algorithm development and increasing processing power of cameras have made it possible to run advanced AI-based video analytics directly on the camera (edge based) instead of having to perform the computations on a server (server based). This enables better realtime functionality because the applications have immediate access to uncompressed video material. With dedicated hardware accelerators, such as MLPU (machine learning processing unit) and DLPU (deep learning processing unit), in the cameras, edge-based analytics can be more power-efficiently implemented than with a CPU or GPU (graphics processing unit).

While analytics applications can typically run on many types of platforms, higher performance can be achieved on power-limited devices by using specialized hardware acceleration. Hardware accelerators enable more power-efficient analytics applications, and server and cloud computing resources can supplement them as appropriate.

• GPU (Graphics Processing Unit). GPUs are primarily used for graphics processing applications, but can also be used for AI acceleration on servers and cloud platforms. Although sometimes used in embedded systems (front-end), GPUs are not ideal for machine learning inference tasks in terms of power efficiency.

• MLPU (Machine Learning Processing Unit). The MLPU can accelerate the inference of certain classical machine learning algorithms for computer vision tasks with power-efficient processing. It is designed for simultaneous real-time object detection of a limited number of object types (e.g., people and vehicles).

• DLPU (Deep Learning Processing Unit). Cameras with built-in DLPU can accelerate general deep learning algorithm inference, while taking into account efficient power-saving processing, and achieve more refined target classification.

Factors Affecting Performance

Before an AI-based video analytics application is installed, the manufacturer’s recommendations based on known preconditions and limitations must be carefully studied and followed. Every surveillance installation is unique, and the application’s performance should be evaluated at each site. If the quality is found to be lower than expected, investigations should be made on a holistic level, and not focus only on the analytics application itself. The performance of video analytics is dependent on many factors related to camera hardware, camera configuration, video quality, scene dynamics, and illumination. In many cases, knowing the impact of these factors and optimizing them accordingly makes it possible to increase video analytics performance in the installation.

  1. ^ "Centralized power supply mode in monitoring engineering". 知乎专栏 (in Chinese). Retrieved 2022-03-19.
  2. ^ "Application of wind-solar hybrid power supply system in transmission line video surveillance". wappass.baidu.com. Retrieved 2022-03-19.
  3. ^ "Calculation Method of Centralized Power Supply for Monitoring System". wenku.baidu.com. Retrieved 2022-03-20.
  4. ^ IEEE 802.3at-2009, Amendment 3: Data Terminal Equipment (DTE) Power via the Media Dependent Interface (MDI) Enhancements, September 11, 2009.
  5. ^ IEEE 802.3-2012, Standard for Ethernet, IEEE Standards Association, December 28, 2012.
  6. ^ IEEE 802.3at-2009, Table 33-11.
  7. ^ "Explain the development process and development trend of video monitoring system in detail". elecfans.com. Retrieved 2019-03-01.
  8. ^ Xu, Zheng, et al. "The big data analytics and applications of the surveillance system using video structured description technology." Cluster Computing 19.3 (2016): 1283-1292.
  9. ^ Xu Z, Hu C, Mei L. Video structured description technology based intelligence analysis of surveillance videos for public security applications. Multimedia Tools and Applications. 2016 Oct;75(19):12155-72.
  10. ^ Javed K, Babri H, Saeed M (2012) Feature selection based on class-dependent densities for high-dimensional binary data. IEEE Trans Knowl Data Eng 24(3):465–477
  11. ^ Choi M, Torralba A, Willsky A (2012) A Tree-based context model for object recognition. IEEE Trans Pattern Anal Mach Intell 34(2):240–252
  12. ^ Liu Y, Zhang Q, Lionel MN (2010) Opportunity-based topology control in wireless sensor networks. IEEE Trans Parallel Distrib Syst 21(3):405–416
  13. ^ Plebani P, Pernici B (2009) URBE: Web service retrieval based on similarity evaluation. IEEE Trans Knowl Data Eng 21(11):1629–1642
  14. ^ Chen H and Ahuja N (2012) Exploiting nonlocal spatiotemporal structure for video segmentation. 2012 I.E. Conference on Computer Vision and Pattern Recognition, pp.741-748
  15. ^ Hu C, Zheng Xu, et al. Video Structured Description Technology for the New Generation Video Surveillance System. Frontiers of Computer Science, 10.1007/s11704-015-3482-x
  16. ^ Xu Z et al (2015) Semantic based representing and organizing surveillance big data using video structural description technology. J Syst Softw 102:217–225
  17. ^ Zhuge H (2009) Communities and emerging semantics in semantic link network: discovery and learning. IEEE Trans Knowl Data Eng 21(6):785–799
  18. ^ Zhao Y, Chen J, Xu X, Wang W. A monitoring scope panorama stitching and fast automatic positioning system based on spherical cameras. In2015 11th International Conference on Semantics, Knowledge and Grids (SKG) 2015 Aug 19 (pp. 192-196). IEEE.
  19. ^ Xu Z, Hu C, Mei L. Video structured description technology based intelligence analysis of surveillance videos for public security applications. Multimedia Tools and Applications. 2016 Oct;75(19):12155-72.
  20. ^ Li W, Zhao R, Xiao T, and Wang X (2014) DeepReID: Deep filter pairing neural network for person re-identification. In CVPR.
  21. ^ Felzenszwalb P, Girshick R, McAllester D, and Ramanan D (2010) Object Detection with Discriminatively Trained Part-Based Models. In PAMI
  22. ^ Li W, Zhao R, Xiao T, and Wang X (2014) DeepReID: Deep filter pairing neural network for person re-identification. In CVPR.
  23. ^ "Building a security monitoring platform based on broadcasting network[J]. China Cable TV, 2016(4):22-27".
  24. ^ "Tonglu County security monitoring platform design program [J]. China Cable TV, 2015(9):1070-1073".
  25. ^ "Shunde social security video surveillance system center platform construction [J]. Science and Wealth, 2016(9):23-28" (in Chinese).
  26. ^ "Technical System Architecture of Social Security Video Surveillance System Management Platform [J]. Communication World, 2015(13):127-128".
  27. ^ "Design and practice of security monitoring system along the railroad line of Yanmar Group [J]. Engineering Technology:Full Text Edition, 2016(10):00312-00312".
  28. ^ "Exploring a shared monitoring service cloud platform[J]. Radio and Television Information, 2015(10):34-40".