In Computing, What Is a Kernel?
The kernel is the most fundamental part of an operating system: the software that gives the many application programs secure, mediated access to the computer's hardware. Because such access is limited, the kernel decides when, and for how long, a program may use a given piece of hardware. Kernels are commonly classified as monolithic kernels, hybrid kernels, and microkernels. Strictly speaking, a kernel is not a necessary part of a computer system.
- Chinese name: Kernel
- Foreign name: kernel
- Category: software
- First appeared: October 1991
- Types: monolithic kernel, hybrid kernel, microkernel
Kernel basics
- The kernel is the core of an operating system: the first layer of software extending the hardware, it provides the operating system's most basic functions and is the foundation on which the rest of the system runs.
- (Figure: kernel architecture) [1]
- In modern operating system design, to reduce the system's own overhead, the parts most closely tied to the hardware (such as interrupt handlers and device drivers), the basic, widely shared, and frequently executed modules (such as clock management and process scheduling), and the critical data structures are set apart so that they stay resident in memory and can be protected. This part is what is usually called the kernel of the operating system. [3]
- In early computer systems, programs could be loaded onto the machine and executed directly; the designers provided no hardware abstraction and no operating system support. Eventually, auxiliary programs such as program loaders and debuggers were designed into the core of the machine or burned into read-only memory. As these changes occurred, the concept of an operating system kernel gradually became clear. [4]
Kernel history and development
- The first public version of Linux was version 0.02, in October 1991. Two months later, in December 1991, version 0.11 was released; it was the first self-contained kernel that could be used without relying on Minix.
- A month after version 0.12 was released, in March 1992, the version number jumped to 0.95, reflecting the system's growing maturity. Even so, it took two more years: the landmark version 1.0.0 was not completed until March 1994.
- From about that point on, kernel development adopted a dual-track numbering scheme: even-numbered kernels (such as 2.4) were stable, production releases, while odd-numbered kernels (such as 2.5) were development versions in which new features were introduced.
- (Figure: the book Understanding the Linux Kernel) [5]
- Most of the discussion in the Post-Halloween document covers the major changes users need to be aware of and the system tools that must be updated in order to take advantage of them. The people who care about this information are mainly Linux distributors who want to know in advance what will be in the 2.6 kernel, as well as end users, who can work out whether any of their programs need to be upgraded to take advantage of the new features.
- The Kernel Janitors project maintains a list of minor defects that need fixing, along with their solutions. Most of these arise because applying a large patch to the kernel requires changing many parts of the code, for example affecting device drivers here and there. Newcomers to kernel development can start by picking entries from this list: small projects through which they can learn to write kernel code while getting the chance to contribute to the community.
- In another pre-release project, John Cherry tracked the errors and warnings produced when compiling each released kernel version. These compilation statistics declined steadily over time, and publishing the results in a systematic form made the progress clear at a glance. In many cases these warnings and error messages can be put to use much like the Kernel Janitors list, because compilation errors are usually caused by small bugs that take some effort to fix.
- Finally, there is Andrew Morton's "must-fix" list. Having been chosen as the maintainer of the 2.6 kernel release, he used his privilege to outline the issues he believed most urgently needed resolving before the final 2.6 release. The must-fix list contains bugs from the kernel Bugzilla system, features awaiting completion, and other known problems that, left unresolved, would block the 2.6 release. This information helps indicate what steps remain before a new kernel can ship; for anyone wondering when the long-awaited 2.6 kernel would be finished, it also provides valuable insight. [7]
Kernel classification
Monolithic kernel
- A monolithic kernel is one very large process. Internally it may be divided into modules (or layers, or some other grouping), but at run time it is a single large binary image. Its modules communicate by calling functions in other modules directly, rather than by message passing. [8]
- A monolithic kernel defines a high-level abstract interface over the hardware: a set of primitives (or system calls) that implement operating system services such as process management, the file system, and memory management, all provided by modules running in kernel mode.
- Even though every module serves these operations separately, the kernel code is highly integrated and hard to write correctly. Because all modules run in the same kernel address space, a small bug can crash the entire system. If development goes well, however, a monolithic architecture is rewarded with runtime efficiency.
- Many modern monolithic kernels, such as the Linux and FreeBSD kernels, can load executable modules at run time, allowing kernel functionality to be extended as needed while keeping the amount of code running permanently in kernel space to a minimum (see the sketch after this list).
- (Figure: the book Linux Kernel Programming Guide, third edition)
- The monolithic structure is a very attractive design: since all of the complex low-level operating system control code runs in a single address space, it is more efficient than an implementation spread across different address spaces. A monolithic kernel also tends to be easier to design correctly, so it can mature faster than a microkernel-based system.
- Examples of monolithic kernel architectures: traditional UNIX kernels, such as the versions released by the University of California, Berkeley, and the Linux kernel. [9]
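To make the module-loading idea concrete, here is a minimal sketch of a loadable Linux kernel module in C. It assumes a Linux system with the kernel headers installed and the usual out-of-tree module Makefile; the file name and log messages are illustrative, not taken from any real module.

```c
/* hello.c - a minimal loadable kernel module (illustrative sketch). */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;   /* 0 = success; a nonzero value would abort loading */
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

Once built, such a module would typically be inserted with `insmod hello.ko` and removed with `rmmod hello`: the running monolithic kernel is extended and shrunk without a reboot, which is exactly the mechanism the paragraph above describes.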
Microkernel
- The microkernel structure consists of a very simple hardware abstraction layer and a set of fairly primitive operations, or system calls. These primitives cover only the few things necessary to build a system: thread management, address spaces, and inter-process communication.
- The goal of a microkernel is to separate the implementation of system services from the system's basic operating rules. For example, a process's input/output locking service can be provided by a service component running outside the microkernel. These highly modular user-mode servers carry out the operating system's higher-level operations, which keeps the design of the kernel's core simple. The failure of one service component does not bring down the whole system: the kernel merely restarts that component, and the rest of the system is unaffected.
- The microkernel puts many OS services, such as file systems and device drivers, into separate processes, and processes invoke OS services through message passing; a microkernel system must therefore be multi-threaded. First-generation microkernels provided relatively many services in the kernel itself and so were called "fat microkernels"; the typical representative is Mach, the core of both GNU Hurd and Apple's server OS, which flourished for a time. Second-generation microkernels provide only the most basic OS services; the typical example is QNX, which is well known in academic circles and considered an advanced OS. [8]
- The microkernel itself provides only a small set of hardware abstractions; most functionality is carried out by special user-mode programs called servers. Microkernels are often used in embedded designs such as robots and medical devices, because the critical parts of the system live in separate, protected memory spaces. This is impossible in a monolithic design, even one that loads modules at run time (a message-passing server loop is sketched after this list).
- Examples of microkernels: AIX, BeOS, the L4 microkernel family, Mach (used by GNU Hurd and Mac OS X), Minix, MorphOS, QNX, RadiOS, VSTa. [9]
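As a rough analogy for the message-passing style described above, the sketch below implements a toy user-space "file server" loop in C. POSIX message queues stand in for real microkernel IPC primitives, and the queue name and message format are invented for illustration.

```c
/* fs_server.c - toy "server" loop in the microkernel style.
 * POSIX message queues stand in for microkernel IPC; on Linux,
 * compile with:  gcc fs_server.c -o fs_server -lrt */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

#define QUEUE_NAME "/fs_requests"   /* illustrative name */
#define MSG_SIZE   128

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = MSG_SIZE };
    mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDONLY, 0600, &attr);
    if (q == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    char buf[MSG_SIZE];
    for (;;) {
        /* Block until a client process sends a request message. */
        ssize_t n = mq_receive(q, buf, sizeof buf, NULL);
        if (n < 0) {
            perror("mq_receive");
            break;
        }
        printf("server: received request \"%.*s\"\n", (int)n, buf);
        /* A real server would perform the operation and send a reply;
         * if this process crashed, only this service would need a
         * restart - the kernel itself would be unaffected. */
    }
    mq_close(q);
    return 0;
}
```

The design point is the decoupling: client and server share no address space, only messages, which is what lets a microkernel restart a failed service in isolation.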
Hybrid kernel
- The hybrid kernel is very similar to the microkernel structure, except that more of its components run in kernel space.
- (Figure: the book FreeBSD kernel in detail) [10]
- A hybrid kernel is essentially a microkernel that allows some code which would otherwise run in user space to run in kernel space, making the kernel more efficient. This compromise was adopted by designers in response to early claims that microkernel-based systems perform poorly, although later experiments showed that pure microkernel systems can in fact be highly efficient. Most modern operating systems follow this design; Microsoft's Windows is a good example, as is XNU, the kernel of Apple's Mac OS X.
- Examples of hybrid kernels: the BeOS kernel, DragonFly BSD, the ReactOS kernel, and the NT-based operating systems such as Windows NT, Windows 2000, Windows XP, Windows Server 2003, and Windows Vista. [11]
Exokernel
- The exokernel, also known as a vertically structured operating system, is a more extreme design approach. [12]
- An exokernel provides no hardware abstractions of its own; instead, applications link against additional runtime libraries, through which they operate on the hardware directly or nearly directly.
- Its design philosophy is to let the designers of user programs decide how the hardware interface should look. The exokernel itself is very small, typically responsible only for system protection and the multiplexing of system resources.
- Traditional kernel designs (both monolithic and microkernel) abstract the hardware, hiding resources beneath a hardware abstraction layer or behind device drivers. In such systems, for example, an application that is allocated a piece of physical memory does not know where that memory actually resides.
- The goal of an exokernel is to let an application directly request a specific region of physical memory, a specific disk block, and so on. The system itself only guarantees that the requested resource is currently free, and the application is allowed to access it directly. Since the exokernel provides only such low-level operations, the accompanying runtime libraries must supply the higher-level abstractions that applications expect.
- (Figure: the book FreeBSD kernel in detail) [13]
- In theory, this design allows several different operating systems, such as Windows and Unix, to run on top of one exokernel, and it lets designers tune each part of the system for efficiency.
- Exokernel design is still at the research stage, and no commercial system has adopted it. Several conceptual operating systems are under development, such as Nemesis at the University of Cambridge, Citrix at the University of Glasgow, and a system at the Swiss Academy of Computer Science; MIT is also conducting research in this area. [11]
Monolithic kernel versus microkernel
- The monolithic structure is a very attractive design, since it is more efficient to implement all of the complex low-level operating system control code in a single address space than across different address spaces.
- In the early 1990s, the monolithic architecture was widely considered obsolete, and the decision to design Linux as a monolithic kernel rather than a microkernel sparked countless debates, most famously the Tanenbaum-Torvalds debate.
- Monolithic architectures tend to be easier to design correctly, and so can develop faster than microkernel architectures. Success stories are found in both camps.
- Although Mach is the best-known general-purpose microkernel, several others have been developed. L3 was a demonstration kernel built to prove that microkernel designs are not always slow. Its successor, L4, can even run the Linux kernel as one of its processes, in a separate address space.
- QNX is a microkernel system that has been under development since the 1980s, and it is closer to the microkernel ideal than Mach. It is used in specialized fields where a system failure caused by a software error cannot be tolerated: on a robot aboard a space shuttle, for example, or in a machine grinding telescope lenses, where one small mistake can cost thousands of dollars.
- Many people have come to believe that microkernel technology is useless because Mach failed to solve some of the problems the microkernel theory was meant to address. Mach enthusiasts argue that this is far too narrow a view, and unfortunately it seems that everyone has begun to accept it.
Kernel advantages
Kernel hardware abstraction
- The kernel provides hardware abstractions to carry out hardware operations, because those operations are very complicated. The abstraction hides this complexity and offers application software a concise, unified interface to the hardware, which makes programs simpler to design; a short illustration follows. [14]
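As a small illustration of that unified interface, the C sketch below writes the same bytes through the same `write()` system call to an ordinary file and to a device node; the file name is invented for the example.

```c
/* Same open/write interface for a disk file and a device node:
 * the kernel's abstraction hides the difference underneath. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello\n";

    int file = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    int dev  = open("/dev/null", O_WRONLY);   /* a device, not a file */
    if (file < 0 || dev < 0) {
        perror("open");
        return 1;
    }

    /* Identical call, very different hardware underneath. */
    write(file, msg, sizeof msg - 1);
    write(dev,  msg, sizeof msg - 1);

    close(file);
    close(dev);
    return 0;
}
```

From the program's point of view the disk file and the device behave identically; the kernel translates each `write()` into whatever the underlying hardware actually requires.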
Kernel source code management
- Historically, the Linux kernel never had a formal source code management or revision control system. Many developers ran their own modification controllers, but there was no official Linux CVS archive into which Linus Torvalds checked in accepted code and from which others could obtain it. This lack of revision control often left "generation gaps" between releases: no one really knew which changes had gone in, whether they integrated well together, or what new content to expect in the upcoming release. Often, problems could have been avoided if more developers had been as aware of those changes as their authors were.
- A real-time, centralized archive keeping the latest updates to the Linux kernel proved necessary. Every change or patch accepted into the kernel is tracked as a changeset. End users and developers can keep their own source archives and update them with the latest changesets on demand, with a simple command. For developers, this means always working against the latest copy of the code. Testers can use these logical changesets to pin down which change caused a problem, shortening debugging time. Even those who simply want to run the latest kernel benefit from the real-time, centralized archive, because they can update the moment a feature or bug fix they need is merged. And as code is merged into the kernel, any user can give immediate feedback and bug reports on it. [15]
Kernel parallel development
- As the Linux kernel has grown, it has become more complex and has attracted more developers.
- (Figure: kernel structure diagrams)
- During the development of 2.5, the kernel tree branched out dramatically. Because source code management tools keep separate lines of development in parallel, it became possible to parallelize parts of the development itself; indeed, some work has to proceed in parallel so that others can test changes before they are accepted. Kernel maintainers who keep their own trees work on specific components and goals, such as memory management, NUMA support, improved scalability, and code for particular architectures, while some trees collect and track fixes for many small defects.
- The advantage of this parallel development model is that developers who need to make sweeping changes, or who make many similar changes toward a specific goal, can work freely in a controlled environment without affecting the stability of the kernel everyone else uses. When they finish, they can release patches against the current Linux kernel implementing the changes made so far, so that testers in the community can easily try them out and give feedback. Once each piece has proven stable, it can be merged into the mainline Linux kernel individually, or even all at once.
Kernel code coverage analysis
- New tools have brought code coverage analysis to the kernel. Coverage analysis shows which lines of kernel code are executed during a given test run and, more importantly, which parts of the kernel have not been tested at all. That data matters because it indicates which new tests must be written to exercise those parts, so that the kernel gets more complete test coverage; a toy illustration follows.
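For a sense of what coverage analysis reports, here is a minimal user-space sketch in C. Compiled with `gcc --coverage` and run once, `gcov` would flag the untaken branch as unexecuted, the same kind of signal the kernel's coverage tooling produces at much larger scale; the function and file names are invented.

```c
/* coverage_demo.c - toy coverage target (illustrative names).
 * Build, run, and report with:
 *   gcc --coverage coverage_demo.c -o demo
 *   ./demo
 *   gcov coverage_demo.c
 */
#include <stdio.h>

/* The x < 0 branch is never taken by the single "test" in main(),
 * so a coverage report marks that line as unexecuted - a hint that
 * another test case is needed. */
static int clamp_positive(int x)
{
    if (x < 0)
        return 0;            /* unexecuted: coverage flags this line */
    return x;
}

int main(void)
{
    printf("%d\n", clamp_positive(5));   /* exercises only one path */
    return 0;
}
```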
A wealth of information about the kernel
- During development of what would become the 2.6 Linux kernel, beyond these automated information-management methods, different members of the open source community also collected and tracked an astonishing amount of information.
- For example, a status list was created on the KernelNewbies site to track new kernel features that had been proposed. The list's entries are sorted by status: if a feature is finished, it records which kernel includes it; if not, it records how much longer it is expected to take. Many entries link to the Web site of a larger project, or, for small items, to a copy of an e-mail message explaining the feature.
Kernel testing
Kernel application testing
- Test background
- In the past, Linux kernel testing methodology revolved around the open source development model. Since the code is released for review by other developers as soon as it is published, there has never been a formal verification cycle of the kind found in other forms of software development. The rationale behind this approach is Linus's Law: "given enough eyeballs, all bugs are shallow."
- (Figure: how to compile a kernel) [16]
- In reality, however, the kernel has many intricate interconnections, and even with sufficient scrutiny, many serious flaws slip through. Moreover, once the latest kernel is released, end users can (and often do) download and run it. When 2.4.0 was released, many in the community called for more organized testing, with test plans, repeatability during testing, and so on, to guarantee the rigor of specific tests and code reviews. Using all three of these methods together produces higher code quality than using only two. [17]
- Test items
- The earliest contributor to organized Linux testing was the Linux Test Project (LTP), whose purpose is to improve the quality of Linux through a more organized testing approach. Part of the project is the development of an automated test suite; the main suite the project develops is also called the Linux Test Project. When the 2.4.0 kernel was released, the LTP suite held only about 100 tests; as 2.4 and 2.5 grew and matured, so did the suite, which includes more than 2,000 tests, and the number keeps growing (a toy test of this kind is sketched below). [18]
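The sketch below gives the flavor of one such small, automated test: a plain C program that exercises a few kernel interfaces and reports PASS or FAIL through its output and exit status. It is written without the real LTP harness, and the temporary file name is invented.

```c
/* toy_test.c - a minimal self-checking test in the spirit of the
 * automated tests LTP collects (plain C, not the real LTP harness). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[5] = {0};

    /* Exercise a few kernel interfaces: write, lseek, read. */
    int fd = open("tst.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) {
        perror("open");
        return 2;   /* setup failure, not a test failure */
    }

    if (write(fd, "data", 4) != 4)               { puts("FAIL: write"); return 1; }
    if (lseek(fd, 0, SEEK_SET) != 0)             { puts("FAIL: lseek"); return 1; }
    if (read(fd, buf, 4) != 4
        || strcmp(buf, "data") != 0)             { puts("FAIL: read back"); return 1; }

    close(fd);
    unlink("tst.tmp");
    puts("PASS");
    return 0;
}
```

Thousands of small, independent checks like this, run automatically, are what turn a kernel release into something that can be verified rather than merely eyeballed.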
Kernel regression testing
- During the 2.5 development cycle, another practice the Linux Test Project adopted was continuous, multi-day regression testing of the Linux kernel using the LTP test suite. BitKeeper was used to create a real-time, centralized archive from which snapshots of the Linux kernel could be taken at any time. Before BitKeeper and snapshots were used, testers had to wait for a released kernel before testing could begin; now testing can begin as soon as the kernel changes.
- Another advantage of running multi-day regression tests with automated tools is that little changes between one run and the next: when a new regression appears, it is usually easy to detect which change is likely to have caused it (the sketch after this list shows the idea).
- Also, because the change is recent, it is still fresh in the developers' minds, which one hopes makes it easier for them to remember and revise the offending code. Perhaps Linus's Law should carry the corollary that some defects are easier to find than others, because those are precisely the ones that the multi-day kernel regression tests find and deal with. Being able to run these tests daily, throughout the development cycle and before the actual release, lets testers who focus on full release versions concentrate on the more serious and time-consuming defects. [19]
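The reason daily snapshots make regressions easy to localize is essentially a binary search over history, the same idea later packaged as `git bisect`. The toy C sketch below models it; the pass/fail history array is invented for illustration, and the search finds the first failing snapshot.

```c
/* bisect_demo.c - toy model of localizing a regression across snapshots.
 * history[i] records whether snapshot i fails the regression test
 * (0 = pass, 1 = fail); the data here is invented for illustration. */
#include <stdio.h>

static const int history[] = { 0, 0, 0, 0, 1, 1, 1, 1 };

int main(void)
{
    int lo = 0;                                            /* known-good snapshot */
    int hi = (int)(sizeof history / sizeof *history) - 1;  /* known-bad snapshot  */

    /* Invariant: history[lo] passes and history[hi] fails. */
    while (lo + 1 < hi) {
        int mid = (lo + hi) / 2;
        if (history[mid])
            hi = mid;   /* mid fails: offending change is at or before mid */
        else
            lo = mid;   /* mid passes: offending change is after mid */
    }
    printf("regression introduced at snapshot %d\n", hi);
    return 0;
}
```

With daily snapshots, each test run has only a day's worth of changes between the last good and first bad state, so the culprit is usually obvious with little or no searching.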
Kernel scalability testing
- Another group, Open Source Development Labs (OSDL), has also made important contributions to Linux testing. Shortly after the 2.4 kernel was released, OSDL created a system called the Scalable Test Platform (STP), an automated platform that lets developers and testers run the tests it provides on top of OSDL's hardware. Developers can even use the system to test their own kernel patches. The Scalable Test Platform simplifies testing because STP builds the kernel, sets up the tests, runs them, and collects the results, which can then be retrieved for in-depth comparison. Many people have no access to large systems, such as SMP machines with 8 processors, but with STP anyone can run tests on machines of that scale.
Kernel defect tracking
- One of the biggest improvements in organized Linux kernel testing since the 2.4 release has been defect tracking. In the past, defects found in the Linux kernel were reported to the Linux kernel mailing list, to the mailing list of a specific component or system, or directly to the individual maintaining the code in which the defect was found. As the number of people developing and testing Linux grew, the shortcomings of this system were quickly exposed: unless the person reporting a defect was remarkably persistent, defects were often missed, forgotten, or ignored.
- OSDL installed a defect tracking system to report and track defects in the Linux kernel. It is configured so that when a defect is reported against a component, the maintainer of that component is notified. The maintainer can accept and fix the defect, reassign it (if it turns out to belong to another part of the kernel), or close it (if it turns out not to be a real defect at all, such as a misconfigured system). A defect reported to a mailing list risks being lost as more and more mail floods in, but in a defect tracking system there is always a record of every defect and its current status.