Go: The Idea Behind Sync.Pool
Original article: https://medium.com/swlh/go-the-idea-behind-sync-pool-32da5089df72
-----------------------
I recently ran into a Go garbage collection problem in one of my projects: a massive number of objects were allocated repeatedly, which put a huge workload on the GC. Using sync.Pool I was able to decrease the allocations and the GC workload.
What is sync.Pool?
One of the highlights of the Go 1.3 release was sync.Pool. It is a component of the sync package that provides a self-managed pool of temporary, reusable objects.
Why use sync.Pool?
We want to keep the GC overhead as low as possible. Frequent allocation and recycling of memory place a heavy burden on the GC. sync.Pool can cache objects that are temporarily unused so that they can be reused directly (without reallocation) the next time they are needed. This can reduce the GC workload and improve performance.
How to use sync.Pool?
First you need to set the New function. This function is called when there is no cached object in the pool. After that, you only need the Get and Put methods to retrieve and return objects. Note that a Pool must not be copied after first use.
Because the New function has the type func() interface{}, the Get method returns an interface{}, so you need a type assertion to obtain the concrete object:
package main

import "sync"

// A dummy struct
type Person struct {
	Name string
}

// Initializing the pool
var personPool = sync.Pool{
	// New optionally specifies a function to generate
	// a value when Get would otherwise return nil.
	New: func() interface{} { return new(Person) },
}

// Main function
func main() {
	// Get hold of an instance
	newPerson := personPool.Get().(*Person)
	// Defer the release: after that, the same instance
	// is reusable by another goroutine
	defer personPool.Put(newPerson)

	// Using the instance
	newPerson.Name = "Jack"
}
sync.Pool example
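As a slightly more realistic illustration (this buffer-pool example is my own and does not appear in the original article), the same pattern is often used to reuse relatively expensive objects such as bytes.Buffer. Note the Reset call: a pooled object keeps whatever state it had when it was put back.

package main

import (
	"bytes"
	"fmt"
	"sync"
)

// A pool of reusable buffers.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(name string) string {
	// Get a buffer: either a reused one or a fresh one from New.
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()            // clear any leftover data from a previous use
	defer bufPool.Put(buf) // return the buffer to the pool when done

	fmt.Fprintf(buf, "Hello, %s!", name)
	return buf.String()
}

func main() {
	fmt.Println(render("Jack"))
	fmt.Println(render("Jill"))
}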
Benchmark
package main

import (
	"sync"
	"testing"
)

type Person struct {
	Age int
}

var personPool = sync.Pool{
	New: func() interface{} { return new(Person) },
}

// Allocate a new Person on every iteration.
func BenchmarkWithoutPool(b *testing.B) {
	var p *Person
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for j := 0; j < 10000; j++ {
			p = new(Person)
			p.Age = 23
		}
	}
}

// Reuse Person instances through the pool.
func BenchmarkWithPool(b *testing.B) {
	var p *Person
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for j := 0; j < 10000; j++ {
			p = personPool.Get().(*Person)
			p.Age = 23
			personPool.Put(p)
		}
	}
}
sync.Pool benchmark
Benchmark result:
BenchmarkWithoutPool
BenchmarkWithoutPool-8 160698 ns/op 80001 B/op 10000 allocs/op
BenchmarkWithPool
BenchmarkWithPool-8 191163 ns/op 0 B/op 0 allocs/op
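For reference, the B/op and allocs/op columns appear because both benchmarks call b.ReportAllocs. Assuming the benchmark code above is saved in a file ending in _test.go, output in this format can be reproduced with:

go test -bench=.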
Trade-off
Everything in life is a trade-off. The pool also has its performance cost: using sync.Pool is much slower than simple initialization.
package main

import (
	"sync"
	"testing"
)

// Get/Put round trips through a sync.Pool.
func BenchmarkPool(b *testing.B) {
	var p sync.Pool
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			p.Put(1)
			p.Get()
		}
	})
}

// A trivial variable initialization for comparison.
func BenchmarkAllocation(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			i := 0
			_ = i
		}
	})
}
Benchmarking sync.Pool and simple Allocation
Benchmark result:
BenchmarkPool
BenchmarkPool-8 283395016 4.40 ns/op
BenchmarkAllocation
BenchmarkAllocation-8 1000000000 0.344 ns/op
How does sync.Pool work?
sync.Pool has two containers for objects: the local pool (active) and the victim cache (archived).
According to sync/pool.go, the package's init function registers poolCleanup with the runtime as the function that cleans up the pools. It is triggered by the GC.
func init() {
	runtime_registerPoolCleanup(poolCleanup)
}
When the GC is triggered, objects inside the victim cache are collected, and objects inside the local pool are then moved to the victim cache. In effect, an object that is never reused stays in the pool for at most two GC cycles before it is freed.
func poolCleanup() {
	// Drop victim caches from all pools.
	for _, p := range oldPools {
		p.victim = nil
		p.victimSize = 0
	}

	// Move primary cache to victim cache.
	for _, p := range allPools {
		p.victim = p.local
		p.victimSize = p.localSize
		p.local = nil
		p.localSize = 0
	}

	oldPools, allPools = allPools, nil
}
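This lifetime can be observed with the real sync.Pool in a small demonstration I put together (it is not from the original article) that uses runtime.GC to force collections: an object that is put into the pool and never retrieved survives the first GC by moving to the victim cache and is dropped by the second.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

var pool = sync.Pool{
	New: func() interface{} { return "allocated by New" },
}

func main() {
	pool.Put("cached value")
	runtime.GC() // poolCleanup runs: local pool -> victim cache
	// Usually still "cached value", served from the victim cache
	// (not guaranteed: the goroutine may have moved to another P).
	fmt.Println(pool.Get())

	pool.Put("cached value")
	runtime.GC()            // local pool -> victim cache
	runtime.GC()            // victim cache is dropped
	fmt.Println(pool.Get()) // "allocated by New": the cached value is gone
}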
New objects are put into the local pool, and calling the Put method stores an object in the local pool as well. Calling the Get method first tries to take an object from the local pool; if the local pool is empty, the object is taken from the victim cache instead, and if both are empty, New is called to allocate a fresh one.
sync.Pool localPool and victimCache
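The lookup order described above can be illustrated with a deliberately simplified, single-goroutine toy sketch of my own; the real implementation keeps per-P local pools and is lock-free, so this is only a model of the idea.

package main

import "fmt"

// toyPool is a toy model of the two containers described above.
type toyPool struct {
	local  []interface{} // active objects (primary cache)
	victim []interface{} // objects archived at the last GC cycle
	New    func() interface{}
}

// Get prefers the local pool, falls back to the victim cache,
// and only calls New when both are empty.
func (p *toyPool) Get() interface{} {
	if n := len(p.local); n > 0 {
		x := p.local[n-1]
		p.local = p.local[:n-1]
		return x
	}
	if n := len(p.victim); n > 0 {
		x := p.victim[n-1]
		p.victim = p.victim[:n-1]
		return x
	}
	if p.New != nil {
		return p.New()
	}
	return nil
}

// Put always stores objects into the local pool.
func (p *toyPool) Put(x interface{}) {
	p.local = append(p.local, x)
}

// gc mimics poolCleanup: the old victim cache is dropped and
// the local pool becomes the new victim cache.
func (p *toyPool) gc() {
	p.victim = p.local
	p.local = nil
}

func main() {
	p := &toyPool{New: func() interface{} { return "new object" }}
	p.Put("cached object")
	p.gc()               // archive: local pool -> victim cache
	fmt.Println(p.Get()) // "cached object", served from the victim cache
	fmt.Println(p.Get()) // "new object": both caches are empty, so New is called
}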
For your information, the Go 1.12 sync.Pool implementation uses mutex-based locking for thread-safe operations from multiple goroutines. Go 1.13 introduced a lock-free doubly linked list as the shared pool, which removes the mutex and improves shared access.
Conclusion
When you have an expensive object that you need to create frequently, using sync.Pool can be very beneficial.