
Go: The Idea Behind Sync.Pool


 

Original article: https://medium.com/swlh/go-the-idea-behind-sync-pool-32da5089df72

-----------------------

 

I recently ran into a problem with Go garbage collection in one of my projects: a massive number of objects were being allocated repeatedly, which created a huge GC workload. Using sync.Pool, I was able to reduce both the allocations and the GC workload.

What is sync.Pool?

sync.Pool, part of the standard sync package, caches allocated but currently unused objects so they can be retrieved and reused later instead of being allocated from scratch.

Why use sync.Pool?

Reusing objects through a pool cuts down the number of heap allocations and, with it, the amount of work the garbage collector has to do.

How to use sync.Pool?

Because the New field has the type func() interface{}, the Get method returns an interface{}, so you need a type assertion to obtain the concrete object:

package main

import "sync"

// A dummy struct
type Person struct {
	Name string
}

// Initializing pool
var personPool = sync.Pool{
	// New optionally specifies a function to generate
	// a value when Get would otherwise return nil.
	New: func() interface{} { return new(Person) },
}

// Main function
func main() {
	// Get hold of an instance
	newPerson := personPool.Get().(*Person)
	// Defer release function
	// After that, the same instance is
	// reusable by another goroutine
	defer personPool.Put(newPerson)

	// Using the instance
	newPerson.Name = "Jack"
}

  

sync.Pool example

Benchmark

package main

import (
	"sync"
	"testing"
)

type Person struct {
	Age int
}

var personPool = sync.Pool{
	New: func() interface{} { return new(Person) },
}

func BenchmarkWithoutPool(b *testing.B) {
	var p *Person
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for j := 0; j < 10000; j++ {
			p = new(Person)
			p.Age = 23
		}
	}
}

func BenchmarkWithPool(b *testing.B) {
	var p *Person
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for j := 0; j < 10000; j++ {
			p = personPool.Get().(*Person)
			p.Age = 23
			personPool.Put(p)
		}
	}
}

  

sync.Pool benchmark
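
The benchmarks sit in an ordinary _test.go file and can be run with the standard Go tooling; the exact invocation below is my assumption, not something stated in the article (the B/op and allocs/op columns come from the b.ReportAllocs() calls, so -benchmem is optional here):

go test -bench=. -benchmem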

Benchmark result:

BenchmarkWithoutPool
BenchmarkWithoutPool-8 160698 ns/op 80001 B/op 10000 allocs/op
BenchmarkWithPool
BenchmarkWithPool-8 191163 ns/op 0 B/op 0 allocs/op

Trade-off

Notice that in the benchmark above the pooled version is actually a bit slower per iteration even though it performs zero allocations: Get and Put are not free. The micro-benchmark below isolates that cost by comparing a Put/Get pair against a trivial local allocation.

// BenchmarkPool measures the cost of a Put/Get pair on the pool itself.
func BenchmarkPool(b *testing.B) {
	var p sync.Pool
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			p.Put(1)
			p.Get()
		}
	})
}

// BenchmarkAllocation measures a trivial local variable for comparison;
// the compiler can optimize this body away almost entirely.
func BenchmarkAllocation(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			i := 0
			i = i
		}
	})
}

  

Benchmarking sync.Pool and simple Allocation

Benchmark result:

BenchmarkPool
BenchmarkPool-8 283395016 4.40 ns/op
BenchmarkAllocation
BenchmarkAllocation-8 1000000000 0.344 ns/op

A Put/Get pair costs a few nanoseconds here, while the allocation benchmark measures little more than the loop itself, so the pool only pays off when the recycled objects are genuinely expensive to allocate or create real GC pressure.
 

How does sync.Pool work?

According to sync/pool.go, the package's init function registers poolCleanup with the runtime as the function that cleans the pools; the GC triggers it.

func init() {
	runtime_registerPoolCleanup(poolCleanup)
}

When a GC is triggered, the objects inside the victim cache are collected, and then the objects inside the local pool are moved to the victim cache.

func poolCleanup() {
	// Drop victim caches from all pools.
	for _, p := range oldPools {
		p.victim = nil
		p.victimSize = 0
	}

	// Move primary cache to victim cache.
	for _, p := range allPools {
		p.victim = p.local
		p.victimSize = p.localSize
		p.local = nil
		p.localSize = 0
	}

	oldPools, allPools = allPools, nil
}

New objects are put in the local pool, and calling Put places an object back into the local pool as well. Calling Get takes an object from the local pool first; if the local pool is empty, it falls back to the victim cache, and only when both are empty does Get call New.

sync.Pool localPool and victimCache
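
The two-stage lifetime is easy to observe. The following demo is my own sketch, not from the original article, and its output can vary with the Go version and goroutine scheduling, but it typically shows a pooled object surviving one GC cycle via the victim cache and disappearing after two:

package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	p := sync.Pool{New: func() interface{} { return "from New" }}

	p.Put("pooled")
	fmt.Println(p.Get()) // "pooled" - served from the local pool

	p.Put("pooled")
	runtime.GC()         // poolCleanup moves the local pool to the victim cache
	fmt.Println(p.Get()) // usually still "pooled", now served from the victim cache

	p.Put("pooled")
	runtime.GC()
	runtime.GC()         // after two GC cycles the object is gone
	fmt.Println(p.Get()) // "from New" - the New function runs again
}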

For your information, the Go 1.12 sync.Pool implementation used mutex-based locking for thread-safe operations from multiple goroutines. Go 1.13 introduced a lock-free doubly-linked list as the shared pool, which removes the mutex and improves shared access.
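
Either way, a Pool is safe for concurrent use by multiple goroutines without any locking on the caller's side. A minimal sketch of concurrent use (the buffer pool and worker names are my own illustration, not from the article):

package main

import (
	"bytes"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func worker(wg *sync.WaitGroup) {
	defer wg.Done()
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // a reused buffer may still hold data from a previous user
	buf.WriteString("hello")
	bufPool.Put(buf)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go worker(&wg)
	}
	wg.Wait()
}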

Conclusion

Object pooling with sync.Pool lets you reuse allocations instead of handing them straight to the garbage collector, which can significantly reduce allocation counts and GC workload. Keep the trade-off in mind, though: Get and Put have their own cost, and pooled objects survive only about two GC cycles before they are collected anyway.
