TRY AND ERROR

Notes on things I find interesting, things I'm studying, and other miscellany. Sometimes these posts will be written in English.

【Golang】What is a slice's capacity?

A slice is a variable-length data type backed by an array: it holds a pointer to its underlying (base) array, its length (the number of elements), and its capacity. A slice literal or make allocates the underlying array and returns a slice of it. Because a slice is a reference type, changing an element of the slice changes the corresponding element of the underlying array as well. That also means there is no need to pass a pointer to a slice; passing the slice itself copies no elements, so it is easy on memory. And if you append past the slice's capacity, the slice gets reallocated and no longer refers to the original array... and so on.

Plenty of blog posts explain things like the above, but I could hardly find one that explains what the capacity actually is, so here is a short memo.

The capacity of a slice is the number of elements counting from the first element of the slice to the last element of the underlying array, i.e. the array the slice was cut from.


From A Tour of Go:

The capacity of a slice is the number of elements in the underlying array, counting from the first element in the slice.

It looks like this:

s1 := []int{1,2,3,4,5}
// Here, the underlying array of the slice is [1,2,3,4,5]
fmt.Println(s1)
fmt.Println(len(s1))
fmt.Println(cap(s1))

/*
	s1: [1,2,3,4,5]
	len(s1): 5
	cap(s1): 5 -> elements from the slice's first element (1) to the underlying array's last element (5): (1,2,3,4,5) = 5 elements
*/

s2 := s1[0:3]
fmt.Println(s2)
fmt.Println(len(s2))
fmt.Println(cap(s2))

/*
	s2: [1,2,3]
	len(s2): 3
	cap(s2): 5 -> elements from the slice's first element (1) to the underlying array's last element (5): (1,2,3,4,5) = 5 elements
*/



s3 := s1[2:3]
fmt.Println(s3)
fmt.Println(len(s3))
fmt.Println(cap(s3))

/*
	s3: [3]
	len(s3): 1
	cap(s3): 3 -> elements from the slice's first element (3) to the underlying array's last element (5): (3,4,5) = 3 elements
*/
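
As a follow-up on the "append past the capacity" remark above, here is a minimal sketch (my own example, not from any of the sources quoted here) showing both cases: an append that fits within cap writes into the shared underlying array, while an append that exceeds cap allocates a new array, so the slice stops sharing storage.

package main

import "fmt"

func main() {
	base := []int{1, 2, 3, 4, 5}
	s := base[0:3] // len(s) = 3, cap(s) = 5: still backed by base's array

	s = append(s, 99) // fits within cap: writes into base's array
	fmt.Println(base) // [1 2 3 99 5]

	s = append(s, 6, 7) // exceeds cap: a new, larger array is allocated for s
	s[0] = 100          // no longer visible through base
	fmt.Println(base)   // [1 2 3 99 5]
	fmt.Println(s)      // [100 2 3 99 6 7]
}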

Field names when unmarshaling JSON into a struct in Go

When you handle JSON in Go, the standard approach is to pass the JSON text as a byte slice to json.Unmarshal in the encoding/json package, together with a pointer to a struct you have defined in advance. Doing this, I noticed that struct fields whose names do not start with an uppercase letter never get mapped.
I was curious, so I checked the official documentation, and it clearly says that Unmarshal only sets exported fields, "exported" being Go's scoping term for identifiers that are visible from other packages.

To unmarshal JSON into a struct, Unmarshal matches incoming object keys to the keys used by Marshal (either the struct field name or its tag), preferring an exact match but also accepting a case-insensitive match. Unmarshal will only set exported fields of the struct.
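
A minimal sketch of that behavior (the struct and field names are my own): the exported field gets populated, while the unexported one is silently left at its zero value.

package main

import (
	"encoding/json"
	"fmt"
)

type User struct {
	Name string `json:"name"` // exported: Unmarshal can set it
	age  int    // unexported: Unmarshal silently ignores it
}

func main() {
	data := []byte(`{"name": "gopher", "age": 10}`)

	var u User
	if err := json.Unmarshal(data, &u); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%+v\n", u) // {Name:gopher age:0}
}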

Details below.
json - The Go Programming Language

Roll back Atom's "Synchronize Settings" to a specific revision

Suppose you use "Synchronize Settings", the Atom editor package for synchronizing settings across multiple environments, and you have accidentally taken a backup. You can roll back to an earlier revision with the following steps, which is faster than reverting each file by hand.

Clone your gist to your local machine.

$ git clone [your gist repo url]

Check the git log and find the commit ID you want to roll back to.

$ git log
commit xxxxxxxxxxxxx

Reset your local branch to that commit ID.

$ git reset --hard xxxxxxxxxxxxx

Delete the remote master branch for now.

$ git push origin :master

Push your rolled-back local branch to remote master.

$ git push origin master

That's all.

Aurora multi-master preview announced

A multi-master capability for AWS Aurora has been announced as a preview.

# Announcement
https://aws.amazon.com/jp/about-aws/whats-new/2017/11/sign-up-for-the-preview-of-amazon-aurora-multi-master/

The preview is apparently available only on the MySQL-compatible edition.
Personally, I'm excited that adding masters will make it possible to scale writes horizontally.
I wonder when it will come out of preview...

AWS Aurora restarted due to an Out Of Memory error

Recently we had a problem where our Aurora database was restarting at the same time every day.
A batch job running a huge query was in progress at that time, so we guessed it was causing the restarts.
We asked AWS Technical Support about it and received roughly the following answer.

We think your guess is mostly correct. Judging from your CloudWatch metrics, the Aurora restarts are probably caused by the batch process.
By default, 75% of the memory on an Aurora instance is assigned to the InnoDB buffer pool.
That buffer is mainly used to cache data for queries, so other uses such as table caches, log buffers, and per-connection memory have to fit into the remaining 25%.
Therefore you cannot use the whole 25% just for your queries; in practice it is less than that.
In this case, the memory used by your batch query exceeded the memory actually available, so an OOM error occurred.

The possible remedies for this problem are as follows.

  • Decrease innodb_buffer_pool_size from the default (75%) to 50-60%.

Set innodb_buffer_pool_size to {DBInstanceClassMemory*2/4} in the parameter group console.

  • Upgrade the DB instance class.
  • Optimize your query.

Of these, we recommend the first action this time.

We took the first action, and we haven't faced the OOM problem since then.
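
For reference, the same change does not have to be made by hand in the console. Here is a minimal sketch using the AWS SDK for Go, assuming innodb_buffer_pool_size is set in the instance-level DB parameter group (the parameter group name below is hypothetical):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := rds.New(sess)

	// Lower innodb_buffer_pool_size from the default 3/4 of instance memory
	// to roughly half, as suggested by AWS support.
	out, err := svc.ModifyDBParameterGroup(&rds.ModifyDBParameterGroupInput{
		DBParameterGroupName: aws.String("my-aurora-instance-params"), // hypothetical name
		Parameters: []*rds.Parameter{
			{
				ParameterName:  aws.String("innodb_buffer_pool_size"),
				ParameterValue: aws.String("{DBInstanceClassMemory*2/4}"),
				ApplyMethod:    aws.String("pending-reboot"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("modified parameter group:", aws.StringValue(out.DBParameterGroupName))
}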
We learned a lot from AWS Technical Support and really appreciate their help.

A nasty bug in macOS High Sierra

WTF \(^o^)/ What the heck...

There is a nasty bug in macOS High Sierra on machines that allow multiple users to log in.
With it, anyone can apparently gain root privileges on the Mac without any particular technical skill.

Anyway, you should see this page for the details.
www.wired.com


The person who first discovered this bug must have freaked out, I guess.

Modifying the behavior of the jQuery plugin [Counter-Up] so the effect runs only on first view

I think this is one of the best plugins for dynamically counting up number text.

Counter-Up/jquery.counterup.js at master · bfintal/Counter-Up · GitHub

But it has gotten a little old recently, and there is a problem where the option named "triggerOnce" doesn't work.
Specifically, the count-up effect fires every time the element is scrolled into view, even though I set the "triggerOnce" option. Since I want the effect to run only the first time when that option is passed to the initializer, I fixed the problem by adding the code below to jquery.counterup.js.

// Start the count up
setTimeout($this.data('counterup-func'), $settings.delay);
$this.attr("data-counterup_finished", true);    // Add: mark this element as already counted

And

var counterUpper = function() {

    // Add start: skip elements that have already finished counting
    if ($this.data("counterup_finished") == true) {
        return false;
    }
    // Add end

    var nums = [];
    var divisions = $settings.time / $settings.delay;
    var num = $this.text();

    ...


That's all.