What Does a Scientific Programmer Do?

A scientific computing framework is a software platform for the visual development and integration of computational workflows in scientific research fields (such as aerodynamics), providing file management, graphics processing, and database construction. It offers powerful practical functionality.

In software engineering terms, much scientific computing software is still developed with traditional techniques. Traditional development methods have the advantages of being easy to write and clear in purpose, and their execution speed can generally meet requirements, but their main disadvantages are: (1) the resulting software is small in scale, single in function, and offers little reuse; (2) multi-language joint development and secondary development are difficult; (3) due to the limitations of FORTRAN itself and of the application frameworks used, the visual interface is crude, the advanced features of modern operating systems and application servers cannot be fully exploited, and networked distributed management is impossible. When a calculation program is meant to be more than a formula calculator, a new development method is needed, and it must meet the following requirements: (1) the software is highly versatile, highly integrated, and capable of handling complexity; secondary development is convenient; it has a friendly human-machine interface under Windows; and it can make full use of the network for distributed management; (2) it cleanly separates the development of the scientific computing kernel from that of the other management functions, so that the work of the engineers who write the scientific computing programs and of the programmers responsible for the other design tasks is clearly divided, and the computing functionality is abstracted as far as possible from the specific problem.
Computing engineers can then write computing-kernel programs much as they did before; an existing program can often be turned into a new kernel for the computing framework with only slight modification, greatly improving the reuse of existing computing programs, while the peripheral programs can be developed with the most advanced methods, independent of the computing kernel.
The main problem in developing such scientific computing software is how to make the application framework drive the computing-program kernel efficiently, reasonably, and securely, so that it runs correctly and returns the right results. There are six main implementation methods: (1) using a file as an interface to pass parameters; (2) writing a shared-memory interface library so that the kernel can communicate with the main framework; (3) using a database to pass parameters and control the managed kernel; (4) rewriting the computing kernel as a FORTRAN dynamic link library that the main framework calls directly; (5) deploying the computing kernel as a COM/DCOM/ActiveX component; and (6) deploying it in a Web-based B/S (browser/server) structure.
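Method (1), file-based parameter passing, can be sketched as follows. This is a minimal illustration, not any particular framework's protocol: `run_kernel` is a hypothetical stand-in for a compiled compute kernel (in practice it would be a separate executable launched by the framework), and the JSON file format is purely illustrative.

```python
import json
import tempfile
from pathlib import Path

def run_kernel(input_path, output_path):
    """Stand-in for a compute kernel (e.g. a FORTRAN executable) that
    reads its parameters from one file and writes results to another."""
    params = json.loads(Path(input_path).read_text())
    # The "computation": sum the squares of the input values.
    result = sum(v * v for v in params["values"])
    Path(output_path).write_text(json.dumps({"result": result}))

def framework_call(values):
    """Framework side: write parameters, invoke the kernel, read results."""
    with tempfile.TemporaryDirectory() as tmp:
        in_file = Path(tmp) / "params.json"
        out_file = Path(tmp) / "results.json"
        in_file.write_text(json.dumps({"values": values}))
        run_kernel(in_file, out_file)   # in practice: subprocess.run([...])
        return json.loads(out_file.read_text())["result"]

print(framework_call([1.0, 2.0, 3.0]))  # -> 14.0
```

The file interface is the simplest of the six methods and requires no change to how the kernel is built, at the cost of file-system overhead on every call.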

TensorFlow Scientific Computing Framework

TensorFlow is open-source mathematical computation software that performs calculations in the form of a dataflow graph. The nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) passed between them. TensorFlow's flexible architecture lets it be deployed on one or more CPUs or GPUs in desktops and servers, or applied to mobile devices, all through a single API. TensorFlow was originally developed by researchers on the Google Brain team for research on machine learning and deep neural networks; since being open-sourced, it has been applied in almost every field.
TensorFlow features:
Mobility: TensorFlow is not just a conventional neural-network library. In fact, if you can express your computation as a dataflow graph, you can use TensorFlow. Users build the graph and write the inner-loop code that drives the computation, and TensorFlow helps assemble the subgraphs. Defining a new operation usually requires only writing a Python function; if the underlying data operation is missing, a small amount of C++ code defines it.
Strong adaptability: runs on different devices, such as CPUs, GPUs, mobile devices, and cloud platforms.
Auto-diff: TensorFlow's automatic differentiation capability benefits many graph-based machine learning algorithms.
Multiple programming languages: TensorFlow is easy to use, with a Python interface and a C++ interface; other languages can access these interfaces via SWIG. (SWIG, the Simplified Wrapper and Interface Generator, is an excellent open-source tool that integrates C/C++ code with all mainstream scripting languages.)
Optimized performance: to make full use of hardware resources, TensorFlow can assign different computing units of the graph to different devices for execution, and can run replicated copies of the computation across them.
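The two core ideas above, computation as a dataflow graph plus automatic differentiation, can be sketched in plain Python without TensorFlow itself. The `Node` class and the tiny add/multiply operator set here are illustrative assumptions, not TensorFlow's actual API:

```python
class Node:
    """A node in a tiny dataflow graph: an operation plus its input nodes."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value
        self.grad = 0.0

def const(v): return Node("const", value=v)
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

def forward(node):
    """Evaluate the graph bottom-up: each node's value from its inputs."""
    if node.op == "const":
        return node.value
    vals = [forward(n) for n in node.inputs]
    node.value = vals[0] + vals[1] if node.op == "add" else vals[0] * vals[1]
    return node.value

def backward(node, seed=1.0):
    """Reverse-mode auto-diff: push gradients from the output to the inputs."""
    node.grad += seed
    if node.op == "add":
        backward(node.inputs[0], seed)
        backward(node.inputs[1], seed)
    elif node.op == "mul":
        a, b = node.inputs
        backward(a, seed * b.value)   # d(ab)/da = b
        backward(b, seed * a.value)   # d(ab)/db = a

# f(x, y) = (x + y) * y  with x = 2, y = 3  ->  value 15, df/dx = 3, df/dy = 8
x, y = const(2.0), const(3.0)
f = mul(add(x, y), y)
print(forward(f))     # -> 15.0
backward(f)
print(x.grad, y.grad) # -> 3.0 8.0
```

Note that `y` feeds into the graph twice, so its gradient accumulates across both paths, which is exactly why graph frameworks sum gradients at fan-out points.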

Torch Scientific Computing Framework

Torch is a scientific computing framework supported by a large number of machine learning algorithms. It has existed for more than ten years, but its real rise came when Facebook open-sourced a large number of Torch's deep learning modules and extensions. Another distinctive feature of Torch is its use of the programming language Lua, which is also widely used for developing video games.
Torch advantages:
  • Simple model building
  • Highly modular
  • Fast and efficient GPU support
  • Access to C via LuaJIT
  • Numerical optimization procedures, etc.
  • Interfaces that can be embedded into iOS, Android, and FPGA backends

Caffe Scientific Computing Framework

Caffe, whose full name is Convolutional Architecture for Fast Feature Embedding, was developed by Jia Yangqing, a Ph.D. graduate of the University of California, Berkeley. It is a clear and efficient open-source deep learning framework maintained by the Berkeley Vision and Learning Center (BVLC). (Jia Yangqing has worked at MSRA, NEC, and Google Brain; he is also one of the authors of TensorFlow and now works at the Facebook FAIR lab.)
Caffe basic flow: Caffe follows a simple assumption about neural networks: all computation is expressed in the form of layers, and what a layer does is take some data and output some computed result. Convolution, for example, takes an image as input, convolves it with the layer's parameters, and outputs the result of the convolution. Each layer must implement two computations: forward computes the output from the input, and backward computes the gradient with respect to the input from the gradient passed down from above. Once these two functions are implemented, many layers can be connected into a network; what this network does is take our data as input (images, speech, etc.) and compute the output we need (such as recognition labels). During training, the labels let us compute a loss and gradients, and the gradients are then used to update the network parameters.
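The forward/backward contract described above can be sketched in plain Python. This `ReLULayer` is an illustrative toy, not Caffe's actual C++ `Layer` API; it just shows the two functions every layer must implement.

```python
class ReLULayer:
    """A Caffe-style layer: forward computes outputs from inputs,
    backward computes input gradients from output gradients."""

    def forward(self, bottom):
        self.bottom = bottom                      # cache inputs for backward
        return [max(0.0, v) for v in bottom]

    def backward(self, top_grad):
        # dL/dx = dL/dy where x > 0, and 0 elsewhere
        return [g if x > 0 else 0.0 for x, g in zip(self.bottom, top_grad)]

layer = ReLULayer()
out = layer.forward([-1.0, 2.0, 3.0])    # -> [0.0, 2.0, 3.0]
grads = layer.backward([1.0, 1.0, 1.0])  # -> [0.0, 1.0, 1.0]
print(out, grads)
```

Because every layer exposes the same two-method interface, a network is simply a chain of layers: forward passes flow bottom-to-top, backward passes flow top-to-bottom.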
Caffe advantages:
Quick to get started: models and their corresponding optimizations are given as plain-text schemas rather than code
Fast: able to run state-of-the-art models on large amounts of data
Modular: easy to extend to new tasks and settings
Open: open code and reference models for reproduction
Good community: anyone can participate in development and discussion under the BSD-2 license
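As an illustration of "models given as text rather than code": Caffe describes a network in a protobuf text file (a `.prototxt`). The following is a minimal hypothetical example, not a model from any real project:

```protobuf
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 28 dim: 28 } }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "data"
  top: "relu1"
}
```

The `bottom`/`top` fields name each layer's input and output blobs, so the text file fully determines the graph without any imperative code.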

Theano Scientific Computing Framework

Born at the University of Montreal in 2008, Theano has spawned a large number of deep learning Python packages, the most famous of which include Blocks and Keras. At Theano's core is a compiler for mathematical expressions: it takes your symbolic description of a computation and turns it into efficient code that uses numpy, efficient native libraries such as BLAS, and generated native (C++) code to run as fast as possible on the CPU or GPU. It was designed specifically to handle the computations required by the large neural-network algorithms used in deep learning; it was one of the first libraries of its kind (development began in 2007) and is considered an industry standard for deep learning research and development.
Theano advantages:
  • Tight NumPy integration: uses numpy.ndarray
  • GPU-accelerated computation: up to 140 times faster than the CPU (for 32-bit floats only)
  • Efficient symbolic differentiation: computes derivatives of functions of one or several inputs
  • Speed and stability optimizations
  • Dynamic C code generation: makes computation faster
  • Extensive unit testing and self-verification: detects and diagnoses many kinds of errors
  • Good flexibility
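Symbolic differentiation, the capability listed above, means producing a new expression for the derivative rather than a numerical estimate. It can be sketched in a few lines of plain Python; the tuple-based expression representation here is an illustrative assumption, not Theano's actual implementation:

```python
def diff(expr, var):
    """Symbolic differentiation over tiny expression trees.
    An expression is a variable name, a number, or ("+"|"*", left, right)."""
    if expr == var:
        return 1.0
    if not isinstance(expr, tuple):       # other variable or constant
        return 0.0
    op, a, b = expr
    if op == "+":
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                         # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))

def evaluate(expr, env):
    """Numerically evaluate an expression tree under a variable binding."""
    if not isinstance(expr, tuple):
        return env.get(expr, expr)        # variable lookup, or literal number
    op, a, b = expr
    x, y = evaluate(a, env), evaluate(b, env)
    return x + y if op == "+" else x * y

# d/dx (x * x + 3) = 2x;  at x = 4 the derivative is 8
f = ("+", ("*", "x", "x"), 3.0)
print(evaluate(diff(f, "x"), {"x": 4.0}))  # -> 8.0
```

A real compiler like Theano would additionally simplify the derivative expression and emit optimized native code for it, which is where the speed advantages listed above come from.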
