What is Computer Programming?

The process of writing and editing a program for a computer to execute is called programming. Many kinds of software can be used to write programs. Representative programming languages include Java, BASIC, C, C++, Visual Basic (VB), Visual FoxPro (VF), SQL, and the web programming languages JSP, ASP, and PHP. Common development tools include Eclipse, Microsoft Visual Studio, Microsoft Visual Basic, and Microsoft SQL Server. Java is currently one of the most widely used programming languages, and C is often used as an introductory programming language in universities. BASIC, short for Beginner's All-purpose Symbolic Instruction Code, is a high-level computer language that has been widely used internationally.
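As a concrete illustration of "a program for a computer to execute", here is the classic first exercise in Java, one of the languages named above (the class name and helper method are illustrative, not from the original article):

```java
// A minimal Java program: the traditional first program written when
// learning a language. Compiling and running it is the whole cycle of
// programming in miniature: edit source code, translate it, execute it.
public class Hello {
    // Returning the text from a method keeps the logic testable.
    static String greeting() {
        return "Hello, world!";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```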

The parallel programming model is a bridge between the underlying architecture and the upper-level applications. Upwards, it hides the details of the parallel processors and gives programmers a way to express parallelism; downwards, it makes full use of hardware resources so that applications run efficiently and correctly. Task division, task mapping, data distribution, communication, and synchronization are the five key elements to consider when designing a parallel programming model. The task parallel programming model mainly targets shared-memory platforms, where data is divided into two storage classes, shared and private; the research focus of this model is therefore on the key elements of task division, task mapping, and synchronization. The task parallel programming model treats the task as the basic unit of parallelism and provides programming interfaces for task division and synchronization, handing that work to the programmer: the user divides the application into a large number of fine-grained tasks, while whether each task executes in parallel or serially, on which physical core it runs, and how synchronization between tasks is achieved are all decided by the runtime system. The task parallel programming model supports nested recursive parallelism and, with task stealing at the core of its thread-level scheduling, achieves high performance and dynamic load balancing [1].
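The division of labor described above — the programmer spawns fine-grained tasks, the work-stealing runtime decides where they run — can be sketched with Java's `ForkJoinPool`, whose scheduler is itself based on task stealing. The recursive Fibonacci example and the serial cutoff are illustrative choices, not from the original article:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Nested recursive task parallelism: each call spawns a logical subtask,
// and the pool's work-stealing scheduler maps those logical tasks onto
// physical worker threads for dynamic load balancing.
public class ParallelFib {
    static class FibTask extends RecursiveTask<Long> {
        final int n;
        FibTask(int n) { this.n = n; }

        @Override protected Long compute() {
            if (n < 2) return (long) n;              // serial base case
            FibTask left = new FibTask(n - 1);
            left.fork();                             // "spawn": may be stolen by another worker
            long right = new FibTask(n - 2).compute(); // run one branch in the current worker
            return left.join() + right;              // "sync": wait for the spawned child
        }
    }

    public static long fib(int n) {
        return new ForkJoinPool().invoke(new FibTask(n));
    }

    public static void main(String[] args) {
        System.out.println(fib(30));
    }
}
```

Note that the code only divides the work; which core executes each `FibTask`, and in what order, is entirely up to the runtime, exactly as the model prescribes.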
The task parallel programming model provides explicit programming interfaces for task partitioning and synchronization, together with an implicit task mapping mechanism; the former focuses on programmability, the latter on execution efficiency. At present the task parallel programming model supports irregular applications by separating logical tasks from physical threads, making programs independent of the number of processor cores. However, what the multi-core era needs is a parallel programming tool that is easy to program and highly productive across a wider range of application fields, and both the model's programming interface (parallel expression and data management) and its runtime support (task scheduling) [1] face the following challenges:
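The pairing of an explicit parallel expression with an implicit mapping mechanism can be seen in Java's parallel streams (an illustrative choice of API, not one named by the article): the programmer states only *that* a computation may run in parallel, while splitting and scheduling are left to the runtime.

```java
import java.util.stream.LongStream;

// Explicit expression, implicit mapping: .parallel() declares the
// parallelism; how the range is partitioned into tasks and scheduled
// onto cores is decided by the runtime (the common ForkJoinPool),
// independently of the number of processor cores.
public class SumSquares {
    public static long sumSquares(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()
                         .map(x -> x * x)
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumSquares(1000));
    }
}
```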
(1) The parallel modes that the model's programming interface can support are limited, and rich programming interfaces are required to express diverse forms of parallelism. For example, spawn/sync can implement nested parallel control structures but cannot efficiently implement loop-level parallelism: to write such programs with this model, data-parallel applications must first be converted into nested parallelism. In addition, unconditional atomic block structures and conditional atomic block structures are important parallel task structures, and how to express them and how to support them efficiently requires in-depth research [1];
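The conversion mentioned in point (1) — recasting a data-parallel loop as nested parallelism — can be sketched as a recursive range split (the grain size and the `sqrt` loop body are illustrative assumptions):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// A data-parallel loop expressed through nested spawn/sync: the index
// range is split recursively, each half spawned as a subtask, until a
// grain size is reached and the loop body runs serially.
public class ParallelFor {
    static final int GRAIN = 1024;   // serial cutoff (a tuning knob)

    static class LoopTask extends RecursiveAction {
        final double[] a; final int lo, hi;
        LoopTask(double[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

        @Override protected void compute() {
            if (hi - lo <= GRAIN) {
                for (int i = lo; i < hi; i++) a[i] = Math.sqrt(i); // serial loop body
            } else {
                int mid = (lo + hi) >>> 1;
                invokeAll(new LoopTask(a, lo, mid),   // spawn both halves,
                          new LoopTask(a, mid, hi));  // then sync on both
            }
        }
    }

    public static double[] fill(int n) {
        double[] a = new double[n];
        new ForkJoinPool().invoke(new LoopTask(a, 0, n));
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fill(10000)[9999]);
    }
}
```

The recursion is pure bookkeeping: the programmer wants a flat parallel loop, but the spawn/sync interface forces it into a tree of tasks, which is exactly the inefficiency the passage points out.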
(2) The model divides data into two types, shared and private, and communicates through shared data. However, some data are shared only by some of the tasks, or only by the tasks executed within a single thread; how to efficiently implement these different levels of data sharing remains open [1];
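The basic shared/private distinction of point (2) can be illustrated with a small counting example (the counter API and the even-number task are illustrative assumptions):

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

// Shared vs. private data in a shared-memory task model: `total` is
// shared by all tasks and must be updated through a thread-safe type,
// while `local` exists once per task and needs no synchronization.
public class SharedPrivate {
    public static long countEven(int n) {
        LongAdder total = new LongAdder();          // shared by all tasks
        IntStream.range(0, n).parallel().forEach(i -> {
            int local = i % 2;                      // private to this task
            if (local == 0) total.increment();      // communicate via shared data
        });
        return total.sum();
    }

    public static void main(String[] args) {
        System.out.println(countEven(100));
    }
}
```

The intermediate sharing levels the passage asks about (data shared by only some tasks, or only within one thread) have no equally direct construct here, which is precisely the gap being described.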
(3) The runtime system of the model is responsible for mapping logical tasks onto physical threads for execution, and its core task is to improve execution efficiency. The existing problems are: (a) the runtime system is a software layer that is linked with the application and runs in user space, and implementing task stealing in software has a cost, so the question is whether the runtime system overhead can be reduced further; (b) threads are selected at random for task stealing, without considering the storage hierarchy and architectural characteristics of the multi-core processor, which hurts cache utilization and locality-sensitive applications; task scheduling therefore needs locality-sensitive policies based on the level, capacity, and access latency of the storage components, and on the locality, reuse, and hierarchy of the data; (c) cluster systems and many-core processors are far more complex than multi-core processors and offer a far greater amount of computing resources, and how to manage and use these hardware resources, fully exploiting the parallelism and locality of the architecture to improve performance, also needs further study [1].
