
IO

I/O (Input/Output) is divided into two parts: the I/O device and the I/O interface. On POSIX-compatible systems such as Linux [1], several I/O methods with different implementations and performance characteristics are available, and an application can choose the method that suits its situation.

Definition

I/O (Input/Output) is divided into two parts: the I/O device and the I/O interface.

On POSIX-compatible systems, such as Linux, I/O operations can be performed in several ways, including DIO (direct I/O), AIO (asynchronous I/O), and memory-mapped I/O. Different I/O methods have different implementations and performance characteristics, and an application can select the method appropriate to its situation.
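
As a minimal illustration of two of these methods, the Python sketch below opens a hypothetical file `data.bin` once with memory-mapped I/O and once with Linux's `O_DIRECT` flag. The file name, its size (assumed to be at least 4 KiB), and the block size are assumptions made for the example; the direct-I/O part is Linux-specific.

```python
import mmap
import os

# Memory-mapped I/O: map an (assumed, non-empty) file into the process
# address space and read it like a byte array.
with open("data.bin", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        first_kb = m[:1024]          # no explicit read() call needed

# Direct I/O: bypass the page cache with O_DIRECT (Linux-specific flag).
# The buffer, offset, and length must be aligned to the device block size,
# so a page-aligned anonymous mmap is used as the read buffer.
buf = mmap.mmap(-1, 4096)
fd = os.open("data.bin", os.O_RDONLY | os.O_DIRECT)
try:
    nread = os.readv(fd, [buf])      # one aligned 4 KiB read, straight from the disk
finally:
    os.close(fd)
```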

An input/output (I/O) stream can be regarded as the reading and writing of bytes or packed bytes, that is, taking data out and putting data in as a two-way exchange. In a linkage control system, the I/O module provides the conversion and isolation between the weak-current lines of the control system and the strong-current lines of the controlled equipment, preventing strong current from entering the system and ensuring its safety.

It connects to a dedicated-line control panel and is used to control important fire-fighting equipment (such as fire pumps, spray pumps, and fans); one module can provide start/stop control for one large piece of fire-fighting equipment.

The module has a plug-in structure: the base can first be mounted on the wall, just like installing a detector, and the switching module is inserted into the base after wiring and commissioning are complete, which makes installation and maintenance easy.

Either a passive (dry) changeover contact or a switched AC 220 V voltage can serve as the answer (feedback) signal.

Indicator lamps: the action lamp is red and the answer lamp is green. On activation the action lamp stays lit, and when the answer signal is received the answer lamp stays lit.

The I/O output port can be connected to a relay. The relay contacts are rated AC 250 V / 3 A and DC 30 V / 7 A; the start output is a set of normally open/normally closed contacts, and the stop output is a set of normally open contacts.

Installation and wiring

The mounting-hole spacing is 65 mm; the module is fixed at the installation position with two M4 screws or A4 self-tapping screws.

Terminal 1 is connected to the start terminal of the multi-line control panel; terminal 2 is connected to its stop terminal;

Terminal 3 is connected to the answer (feedback) terminal of the multi-line control panel; terminal 4 is connected to the power ground G;

Terminals 5 and 6 are the normally open contact output corresponding to the stop command;

Terminals 11 and 12 are connected to the AC 220 V answer signal;

Terminals 13 and 14 are the normally open contact output corresponding to the start command; terminals 14 and 15 are the normally closed contact output corresponding to the start command;

The contact outputs are passive (dry contacts).

Terminal 16 is connected to the positive pole of the 24 V power supply.

Application (connection to a dedicated-line control panel)

Note: either an AC 220 V signal or a passive closing contact can be used as the feedback signal.

The JBF-151F/D has only one answer input, which serves as the feedback for start output 1.

After a start command is issued, the JBF-151F/D provides one set of normally open or normally closed contacts; when the stop command is output, it provides only one pair of normally open contacts.

Improving IO performance with cache and RAID

From the metrics used to measure performance, we can see that a 15k RPM disk delivers only about 140 IOPS for random read/write access, yet in practice many storage systems are rated at 5000 IOPS or more. Where do such large IOPS figures come from? They come from additional storage technologies, the most widely used of which are cache and the redundant array of disks (RAID). The following discusses how cache and disk arrays improve storage IO performance.

Cache

Among current storage media, ordered from fastest to slowest, we have memory > flash > disk > tape, but the faster the medium, the higher the price. Although flash memory is developing rapidly, the disk is still undoubtedly the biggest bottleneck in a computer system. So, when disks must be used but performance needs to improve, a compromise is to embed a small amount of high-speed memory in the disk to hold frequently accessed data and improve read/write efficiency. This embedded memory is called a cache.

Caches exist at many levels: in the operating system, in the disk controller, in the CPU, and inside the disk itself. The purpose of all of these caches is the same: to improve the efficiency of the system.

Here we only consider caches related to IO performance. The caches directly related to IO performance are the file system cache, the disk controller cache, and the disk cache (also called the disk buffer). When calculating the performance of a disk system, the file system cache is not taken into account; we focus on the disk controller cache and the disk cache.

Whether it is the controller cache or the disk cache, its role falls into three parts: caching data, read-ahead, and write-back.

Cache data

First, data read by the system is kept in the cache, so that the next time the same data is needed it can be taken directly from the cache without accessing the disk. Of course, data cannot stay in the cache forever; cached data is generally managed with an LRU (least recently used) algorithm, which evicts data that has not been used for a long time, while frequently accessed data remains in the cache until the cache is flushed.
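
A minimal sketch of the LRU management described above, assuming a cache indexed by block number with an arbitrary capacity; it illustrates the algorithm only, not the implementation used by any particular controller or disk.

```python
from collections import OrderedDict

class LRUBlockCache:
    """Keep the most recently used disk blocks; evict the least recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block number -> cached data

    def get(self, block_no):
        """Return cached data for a block, or None on a miss."""
        if block_no not in self.blocks:
            return None
        self.blocks.move_to_end(block_no)    # mark as most recently used
        return self.blocks[block_no]

    def put(self, block_no, data):
        """Insert or refresh a block, evicting the oldest entry when full."""
        self.blocks[block_no] = data
        self.blocks.move_to_end(block_no)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # drop the least recently used block
```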

Read-ahead

Read-ahead means using a read-ahead algorithm to load data from the disk into the cache in advance, before the system issues an IO request. When the system later issues a read IO request, the cache is checked first: if the required data is already there (a hit), it is returned directly, and the disk is spared the sequence of seeking, rotational waiting, and data transfer, which saves a lot of time; if there is no hit, a real read command is sent to the disk to fetch the required data.
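
The sketch below illustrates one simple read-ahead policy under the assumption of sequential access: on a miss, the requested block and a few following blocks are fetched in one pass. The "disk", the cache, and the window size are all hypothetical illustration values.

```python
def read_block(disk, cache, block_no, readahead=4):
    """Serve a read from the cache when possible; on a miss, fetch the block
    and prefetch the next few sequential blocks (a simple read-ahead policy)."""
    if block_no in cache:                    # cache hit: no disk access needed
        return cache[block_no]
    # Cache miss: read the requested block plus `readahead` following blocks,
    # assuming access is likely to continue sequentially.
    for n in range(block_no, block_no + 1 + readahead):
        if n < len(disk) and n not in cache:
            cache[n] = disk[n]               # one disk access per block fetched
    return cache[block_no]

# Hypothetical "disk" of 16 blocks and an empty cache.
disk = [f"block-{i}".encode() for i in range(16)]
cache = {}
read_block(disk, cache, 0)    # miss: reads blocks 0-4 from the disk
read_block(disk, cache, 1)    # hit: served from the cache, no disk access
```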

The cache hit rate depends strongly on the size of the cache. In theory, the larger the cache, the more data it can hold and the higher the hit rate; of course, the cache cannot be made arbitrarily large, because of cost. If a large-capacity storage system is equipped with only a small read cache, the problem becomes worse: the small cache holds very little data relative to the whole system, so for random reads (the common case for database systems) the hit rate is very low. Such a cache not only fails to improve efficiency (most read IOs still go to the disk) but also wastes time checking the cache on every request.

The ratio of the amount of data served from the cache to all the data read is called the read cache hit ratio (Read Cache Hit Ratio). Suppose a storage system can reach 150 IOPS for small random reads without a cache, and its cache provides a hit ratio of 10%; then its effective IOPS becomes 150 / (1 - 10%) ≈ 167.
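
The figure above follows from a one-line relation: if a fraction h of read requests hit the cache, only (1 - h) of them reach the disk, so effective IOPS = disk IOPS / (1 - h). A small sketch restating the 150 IOPS, 10% example:

```python
def effective_read_iops(disk_iops, hit_ratio):
    """Effective IOPS when a fraction `hit_ratio` of reads is served from the cache."""
    return disk_iops / (1 - hit_ratio)

print(round(effective_read_iops(150, 0.10)))            # about 167, as in the text
for hit in (0.0, 0.10, 0.30, 0.50):
    print(f"hit ratio {hit:.0%}: {effective_read_iops(150, hit):.0f} IOPS")
```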

Write-back

First of all, the part of the cache used for the write-back function is called the write cache. In a storage system with the write cache enabled, a series of write IO commands issued by the operating system is not executed one by one; the write IOs are first written to the cache, and the accumulated modifications are later pushed to the disk in one pass. This effectively merges repeated writes to the same data into one, combines many small consecutive IOs into one large IO, and turns many random write IOs into a group of sequential write IOs, reducing the time spent on seeking and other disk operations and greatly improving write efficiency.
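
A minimal sketch of this merging idea, assuming the cached writes are simple (offset, data) pairs; real controllers work in terms of blocks and stripes, so this only illustrates the coalescing logic.

```python
def flush_write_cache(pending_writes):
    """Coalesce cached writes before pushing them to the disk.

    `pending_writes` is a list of (offset, data) pairs as issued by the OS.
    Later writes to the same offset overwrite earlier ones, and adjacent
    offsets are merged into one larger sequential write.
    """
    latest = {}                                   # offset -> most recent data
    for offset, data in pending_writes:
        latest[offset] = data                     # duplicate writes collapse to one

    merged = []
    for offset in sorted(latest):
        data = latest[offset]
        if merged and merged[-1][0] + len(merged[-1][1]) == offset:
            prev_off, prev_data = merged[-1]
            merged[-1] = (prev_off, prev_data + data)   # extend the previous run
        else:
            merged.append((offset, data))
    return merged                                  # each entry becomes one large IO

# Four small, partly duplicated writes become two sequential IOs.
print(flush_write_cache([(0, b"aa"), (8, b"cc"), (2, b"bb"), (0, b"AA")]))
```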

Although the write cache clearly improves efficiency, the problems it introduces are also serious: the cache is ordinary volatile memory, so all of its data is lost when power fails. Once a write IO issued by the operating system has been written to the cache, it is reported as successful even though the data has not actually reached the disk; if power is lost at that moment, the data in the cache is lost forever, which is catastrophic for the application. The usual way to solve this problem is to equip the cache with a battery so that its contents survive a power failure.

Like reads, writes also have a write cache hit ratio (Write Cache Hit Ratio), but unlike a read cache hit, a write cache hit does not eliminate the actual IO operation; the writes are merely merged.

In addition to the functions above, the controller cache and the disk cache perform other duties. For example, the disk cache holds the IO command queue: a single disk can only process one IO command at a time, but it can accept several, and the commands that have not yet been processed are kept in the IO queue in the cache.

RAID (Redundant Array of Inexpensive Disks)

If you are a database administrator or work with servers frequently, you should be familiar with RAID. As the cheapest redundant storage solution, RAID has long been ubiquitous in server storage. Among the various RAID levels, RAID10 and RAID5 are the most widely used (although RAID5 is nearing the end of its road and RAID6 is on the rise). The following discusses the impact of RAID0, RAID1, RAID5, RAID6, and RAID10 on disk performance. You should be familiar with the structure and working principle of each RAID level before reading on; the Wikipedia articles RAID, Standard RAID levels, and Nested RAID levels are recommended.

RAID0

RAID0 stripes data, spreading consecutive data across multiple disks for access. IO commands issued by the system (both reads and writes) can be executed on the disks in parallel, with each disk handling its own part of the request, and such parallel IO greatly improves the performance of the whole storage system. If a RAID0 array consists of n (n >= 2) disks and the random read/write IOPS of each disk is 140, then the IOPS of the whole array is 140 * n. Likewise, if the transmission capacity of the array bus allows, the throughput of RAID0 is n times that of a single disk.
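
The 140 * n figure is simple arithmetic, shown here as a short sketch; the 140 IOPS per-disk value is carried over from the example above, and bus and controller limits are ignored.

```python
def raid0_iops(single_disk_iops, disks):
    """Ideal random IOPS of a RAID0 stripe: each of the n disks serves
    its own share of the requests in parallel."""
    return single_disk_iops * disks

for n in (2, 4, 8):
    print(f"RAID0 with {n} disks: about {raid0_iops(140, n)} IOPS")
```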

Other RAID levels

RAID1 (mirroring) uses two hard disks and is generally used to mirror the system disk; a read IO costs one disk IO, while a write IO costs two disk IOs.

RAID10 improves both read and write IO performance and provides data redundancy. The number of disks used must be a multiple of 2 and at least 4, all of the same capacity. Its drawback is that IO capacity can only be expanded by adding the corresponding disks, so the number of disks needed to reach a given performance level increases by multiples. It can tolerate the loss of one disk in each mirrored pair.

RAID3 dedicates a single disk as a parity disk to achieve data redundancy.

In this arrangement, one hard disk is allowed to fail. Because any change to the data causes the parity disk to be rewritten, heavy write activity makes the parity disk the bottleneck of the whole system, so this RAID level is only suitable for environments with many read requests and few write requests. RAID3 has essentially been phased out and is generally replaced by RAID5.
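
For a rough comparison across the levels discussed here, the sketch below uses the common rule-of-thumb write penalties (one physical IO per random write for RAID0, two for RAID1/RAID10, four for RAID5, six for RAID6). The 8-disk array and the 140 IOPS per-disk value are assumptions carried over from the earlier examples, not measurements.

```python
# Rule-of-thumb write penalties: physical IOs caused by one random write
# at each RAID level (RAID5: read data + read parity + write data + write parity).
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def random_write_iops(level, disks, single_disk_iops=140):
    """Approximate random-write IOPS an array can sustain at a given RAID level."""
    return disks * single_disk_iops / WRITE_PENALTY[level]

for level in ("RAID0", "RAID10", "RAID5", "RAID6"):
    print(f"{level} on 8 disks: about {random_write_iops(level, 8):.0f} write IOPS")
```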
