What is a hashtable?

In computer science, a hashtable is a data structure for storing data that consists of a list of values, called keys, paired with a corresponding list of values, called an array. For example, a business name could be paired with its address. Each value in the array has a position number referred to as a hash. A hash function is generally a set of instructions or an algorithm that maps each key value to a hash, connecting the business name to its address, its phone number, and its business category. The purpose of the hash function is to assign each key to a unique corresponding value in the array; this is commonly referred to as hashing. The hash function must be implemented correctly for the hashtable to work properly.
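As a rough illustration, here is a minimal sketch in Python; the business-name key, the array size, and the character-sum hash function are all invented for the example, and collisions are ignored for now.

```python
# A minimal sketch of the idea above, not a production hashtable.
# The array size and hash function are illustrative assumptions.

ARRAY_SIZE = 8  # size of the corresponding array of values

def hash_function(key: str) -> int:
    """Map a key to a position number (its hash) in the array."""
    return sum(ord(ch) for ch in key) % ARRAY_SIZE

# The array holds the values; each key's hash picks its slot.
array = [None] * ARRAY_SIZE

def put(key: str, value: str) -> None:
    array[hash_function(key)] = (key, value)  # collisions ignored here

def get(key: str):
    entry = array[hash_function(key)]
    return entry[1] if entry and entry[0] == key else None

put("Acme Corp", "123 Main St")  # business name paired with its address
print(get("Acme Corp"))          # -> 123 Main St
```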

The performance of a hashtable on a data set depends on the efficiency of its hash function. A good hash function typically ensures fast key lookups and an even distribution of mappings across the corresponding array. A hash collision occurs when two keys are assigned the same corresponding value. When a collision occurs, the hash function is usually run again until a unique corresponding value is found; this generally leads to longer hashing times. Although the number of keys in a hashtable is usually fixed, duplicate keys can sometimes appear. Even so, a well-designed hashtable has an effective hash function that maps each key to a unique corresponding value in the array.
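Running the hash again after a collision is often implemented as probing: retrying nearby positions until an unassigned one is found. The sketch below assumes the same toy array and hash function as above and uses linear probing, one common variant.

```python
# A hedged sketch of collision handling by probing: if a key's hash slot is
# taken, keep trying the next position until a free (or matching) slot is
# found. The array size and probing step are illustrative choices.

ARRAY_SIZE = 8
array = [None] * ARRAY_SIZE

def hash_function(key: str) -> int:
    return sum(ord(ch) for ch in key) % ARRAY_SIZE

def put(key: str, value: str) -> None:
    index = hash_function(key)
    for step in range(ARRAY_SIZE):           # probe at most once per slot
        slot = (index + step) % ARRAY_SIZE
        if array[slot] is None or array[slot][0] == key:
            array[slot] = (key, value)       # unique slot found
            return
    raise RuntimeError("array is full")

def get(key: str):
    index = hash_function(key)
    for step in range(ARRAY_SIZE):
        slot = (index + step) % ARRAY_SIZE
        if array[slot] is None:
            return None                      # key was never stored
        if array[slot][0] == key:
            return array[slot][1]
    return None
```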

An inefficient hash function can also produce clusters of mappings. If the hash function creates clusters of mappings for existing keys, it can extend the time required to find the corresponding values. This can slow down hashing of future keys as well, since most hash functions generally look for the next available position in the array: if a large cluster of values has already been assigned, it usually takes much longer to find an unassigned position for a new key.
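The contrived measurement below makes the clustering effect concrete. The keys are deliberately chosen to hash to neighboring positions, so each new insertion has to probe past the whole existing run, even though most of the array is still empty.

```python
# Illustrative only: with linear probing, keys that hash near one another
# pile up into a cluster, so a new key must walk past the whole run.

ARRAY_SIZE = 16
array = [None] * ARRAY_SIZE

def hash_function(n: int) -> int:
    return n % ARRAY_SIZE

def put(n: int) -> int:
    """Insert a key and return how many probes it took (cluster cost)."""
    index = hash_function(n)
    for step in range(ARRAY_SIZE):
        slot = (index + step) % ARRAY_SIZE
        if array[slot] is None:
            array[slot] = n
            return step + 1
    raise RuntimeError("array is full")

# These keys all hash to slots 0 or 1, forming one growing cluster.
for key in (16, 32, 48, 64, 17, 33):
    print(f"key {key}: {put(key)} probe(s)")
# Probe counts climb as the cluster grows, even though the
# array is still mostly empty.
```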

The load factor is another concept related to hash function efficiency. The load factor is the number of keys already stored relative to the overall size of the corresponding array in the hashtable; it is usually calculated by dividing the number of assigned keys by the size of the array. As the load factor increases, a good hash function will usually still maintain a roughly constant number of collisions and clusters, up to a certain point. This threshold can often be used to determine how effective the hash function is for a given number of keys, and when a new hash function may be needed.
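In code, the load factor is simply that division. The sketch below also shows a common convention, assumed here rather than taken from the article, of growing and rehashing the array once the load factor passes a threshold such as 0.75.

```python
# Load factor sketch: number of assigned keys divided by array size.
# The 0.75 threshold is a common illustrative choice, not a universal rule.

def load_factor(num_keys: int, array_size: int) -> float:
    return num_keys / array_size

MAX_LOAD = 0.75  # past this point, collisions and clusters tend to grow

num_keys, array_size = 6, 8
print(load_factor(num_keys, array_size))   # 0.75 -> at the threshold

if load_factor(num_keys + 1, array_size) > MAX_LOAD:
    array_size *= 2                        # grow the array and rehash keys
    print(f"rehash into a new array of size {array_size}")
```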

Many researchers in computer science have tried to create a perfect hash function, one that produces no collisions or clusters as the load factor grows. Theoretically, the key to producing a perfect hashtable is a perfect hash function. Researchers generally believe that a perfect hash function should maintain constant performance, measured by the number of collisions and clusters, as the load factor grows. Even in the worst case, a perfect hash function would still allow constant-time hashing without ever reaching such a threshold.
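Perfect hash functions do exist for a fixed, known set of keys. The sketch below brute-forces a salt until every key in a small invented key set lands in its own slot; the key names and the search bound are assumptions for illustration.

```python
# A hedged sketch of the idea for a fixed key set: search for a salt until
# every key lands in its own slot, giving a collision-free (perfect) hash
# for exactly those keys.

KEYS = ["name", "address", "phone", "category"]

def make_perfect_hash(keys):
    size = len(keys)                      # minimal: one slot per key
    for salt in range(100_000):           # search for a collision-free salt
        slots = {hash((salt, k)) % size for k in keys}
        if len(slots) == size:            # every key got a unique slot
            return lambda k, s=salt: hash((s, k)) % size
    raise ValueError("no perfect salt found in the searched range")

h = make_perfect_hash(KEYS)
print({k: h(k) for k in KEYS})            # four keys, four distinct slots
# Note: Python randomizes string hashing per process, so the exact slot
# numbers differ between runs, but within a run there are no collisions.
```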
