TRY AND ERRΦR

Notes on things that caught my interest, things I studied, and other miscellany. Sometimes these posts will be written in English.

【AWS CodeDeploy】Resource permissions in appspec.yml

I had a misunderstanding about the "permissions" section in AWS CodeDeploy's appspec.yml.
Say we have an app whose directory structure looks like this:

app
|-src
  |--something...
|-logs
  |--logfiles...

I wanted to set the logs directory's permissions to 757, so I wrote an appspec.yml like this:


appspec.yml

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/app
permissions:
  - object: logs
    mode: 757
    type:
      - directory

But it didn't work...
The AWS official documentation says the permissions section affects the resources contained within the object you specify, not the object itself:

type – Optional. The types of objects to which to apply the specified permissions. This can be set to file or directory. If file is specified, the permissions are applied only to files that are immediately contained within object after the copy operation (and not to object itself). If directory is specified, the permissions are recursively applied to all directories/folders that are anywhere within object after the copy operation (but not to object itself).

AppSpec 'permissions' Section (EC2/On-Premises Deployments Only) - AWS CodeDeploy


So I wrote a script that runs chmod on the logs directory, called it from the "AfterInstall" hook, and of course it worked fine.
Alternatively, it should also be possible to specify a higher-level directory (such as /) as the object and select the logs directory with pattern matching.
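For reference, the hook-based workaround can be sketched in appspec.yml like this; the script path, timeout, and target path are illustrative, not the exact values we used:

```yaml
hooks:
  AfterInstall:
    # scripts/fix_permissions.sh would contain something like:
    #   chmod 757 /var/www/html/app/logs
    - location: scripts/fix_permissions.sh
      timeout: 60
      runas: root
```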

AWS CodeDeploy logs that you should watch.

We run a web service on AWS, in an environment like the one below.
(In this case using Amazon Linux.)

WebServer:
    - Production:
        - ALB * 1
            - EC2 * 2
    - Dev:
        - EC2 * 1
DB:
    - Production: Aurora * 1
    - Dev: Aurora * 1


Our source code is managed under a GitHub organization account, and we deploy it from GitHub to the two production EC2 instances with AWS CodeDeploy.
It took quite a few attempts before an in-place deployment succeeded, and along the way we hit several errors in the CodeDeploy web console.
If you face CodeDeploy errors in an environment similar to ours, first check these logs on the EC2 instance; they will probably help you solve the problem.

・/opt/codedeploy-agent/deployment-root
・/var/log/aws/codedeploy-agent
・/var/log/cloud*
・/etc/codedeploy-agent/conf
・/tmp/codedeploy-agent.update.log

In addition to the above, you should also confirm the following:

  • The IAM roles attached to EC2 and CodeDeploy are correct.
  • The EC2 instances' status is healthy.
  • The AWS CodeDeploy agent is installed on the EC2 instances.
  • The GitHub organization account permits access from AWS CodeDeploy.

Reference: docs.aws.amazon.com

【Golang】What is a slice's capacity?

A slice is a variable-length sequence type backed by an underlying array: it holds a pointer into that array, the number of elements (length), and a capacity. A slice literal or make allocates the underlying array and returns a slice of it. Because a slice refers to that array, changing a slice element changes the array's element too; you therefore don't need to pass a pointer to a slice, and passing the slice itself copies no elements, which is easy on memory. If you append beyond the slice's capacity, it stops referring to the original array... and so on.

Many blog posts explain things like the above, but I could hardly find one that explains exactly what capacity means, so here is a short memo.

The capacity is the number of elements counting from the slice's first element to the last element of the underlying (base) array that the slice was cut from.


From A Tour of Go:

The capacity of a slice is the number of elements in the underlying array, counting from the first element in the slice.

Like this:

s1 := []int{1,2,3,4,5}
// At this point the slice's underlying array is [1,2,3,4,5]
fmt.Println(s1)
fmt.Println(len(s1))
fmt.Println(cap(s1))

/*
	s1: [1,2,3,4,5]
	len(s1): 5
	cap(s1): 5 -> 5 elements (1,2,3,4,5) from the slice's first element (1) to the last element of the underlying array (5)
*/

s2 := s1[0:3]
fmt.Println(s2)
fmt.Println(len(s2))
fmt.Println(cap(s2))

/*
	s2: [1,2,3]
	len(s2): 3
	cap(s2): 5 -> 5 elements (1,2,3,4,5) from the slice's first element (1) to the last element of the underlying array (5)
*/



s3 := s1[2:3]
fmt.Println(s3)
fmt.Println(len(s3))
fmt.Println(cap(s3))

/*
	s3: [3]
	len(s3): 1
	cap(s3): 3 -> 3 elements (3,4,5) from the slice's first element (3) to the last element of the underlying array (5)
*/
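The "append beyond capacity detaches the slice from its base array" behavior mentioned at the start can be seen in a small sketch like this (the variable names are my own):

```go
package main

import "fmt"

func main() {
	base := []int{1, 2, 3, 4, 5}
	s := base[0:3] // len=3, cap=5, shares base's underlying array

	s = append(s, 99) // still within cap, so it writes into base's array
	fmt.Println(base) // base[3] is now 99

	s = append(s, 100, 101) // len would become 6 > cap 5: a new array is allocated
	s[0] = -1               // writes no longer reach base
	fmt.Println(base)       // base[0] is still 1
	fmt.Println(s, len(s), cap(s))
}
```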

Field names when unmarshaling JSON into a Go struct

The standard way to handle JSON in Go is to pass the JSON as a byte slice to Unmarshal in the encoding/json package, along with a struct you defined in advance to decode into. Doing this, I noticed that unless a struct field name starts with an upper-case letter, Unmarshal cannot map data into it.
Curious, I checked the official documentation, and it clearly states that Unmarshal only sets exported fields, "exported" being Go's scoping term for identifiers that can be referenced from other packages.

To unmarshal JSON into a struct, Unmarshal matches incoming object keys to the keys used by Marshal (either the struct field name or its tag), preferring an exact match but also accepting a case-insensitive match. Unmarshal will only set exported fields of the struct.

Details below.
json - The Go Programming Language

Roll back Atom's "Synchronize Settings" to a specific revision.

Suppose you use "Synchronize Settings", an Atom editor package for synchronizing settings across multiple environments, and you have taken a backup by mistake. You can roll back with the following steps, which is faster than reverting each file by hand.

Clone your gist locally:

$ git clone [your gist repo url]

Check the git log and find the commit ID you want to roll back to:

$ git log
commit xxxxxxxxxxxxx

Reset your local branch to that commit:

$ git reset --hard xxxxxxxxxxxxx

Delete the remote master once:

$ git push origin :master

Push your rolled-back local commit to remote master:

$ git push origin master

That's all.
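As a self-contained illustration of the reset step, here is a throwaway-repo sketch; the file name, commit messages, and identity settings are invented:

```shell
#!/bin/sh
set -e

# Build a scratch repo with two commits, then roll back to the first.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

echo v1 > settings.cson
git add settings.cson
git commit -qm "first backup"
first=$(git rev-parse HEAD)

echo v2 > settings.cson
git commit -qam "accidental backup"

# The same operation as in the steps above:
git reset --hard -q "$first"
cat settings.cson   # prints: v1
```

From there, `git push origin :master` followed by `git push origin master` updates the remote as described above (a single `git push --force origin master` achieves the same, if the remote allows it).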

Amazon Aurora multi-master released in preview

A multi-master feature for AWS Aurora has been released in preview.

Announcement:
https://aws.amazon.com/jp/about-aws/whats-new/2017/11/sign-up-for-the-preview-of-amazon-aurora-multi-master/

The preview is apparently available only for the MySQL-compatible edition.
Personally, I'm excited that adding masters enables horizontal scaling of writes.
I wonder when it will leave preview...

AWS Aurora restarted due to an out-of-memory error

Recently we had a problem where our Aurora database kept restarting at the same time every day.
A batch job with a huge query ran at that time, so we guessed it was the cause of the restarts.
We asked AWS Technical Support about it and received roughly the following answer:

We think your guess is mostly correct. Judging from your CloudWatch metrics, the Aurora restarts were probably caused by the batch process.
By default, 75% of the memory on Aurora is assigned to the InnoDB buffer pool.
That buffer is mainly used for caching data for queries, so other uses such as table caches, log buffers, and per-connection memory come out of the remaining 25%.
Therefore you cannot use the full 25% just for your queries; in practice it is less.
In this case, the memory used by your batch job exceeded the memory actually available, so an OOM error occurred.

The conceivable actions for this problem are as follows:

  • Reduce "innodb_buffer_pool_size" from the default (75%) to 50-60%.

Set {DBInstanceClassMemory*2/4} for "innodb_buffer_pool_size" in the parameter group console.

  • Upgrade DBInstance class.
  • Optimize your query.
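To see what the reduction means in concrete numbers, here is a small sketch; the instance memory size (30.5 GiB) is a made-up example, not our actual instance class:

```python
GIB = 1024 ** 3
instance_memory = int(30.5 * GIB)  # hypothetical DBInstanceClassMemory

default_pool = instance_memory * 3 // 4  # default: 75% goes to the buffer pool
reduced_pool = instance_memory * 2 // 4  # suggested: {DBInstanceClassMemory*2/4}

# Memory left over for everything else (table caches, log buffers, connections...)
print(f"default: pool {default_pool / GIB:.2f} GiB, rest {(instance_memory - default_pool) / GIB:.2f} GiB")
print(f"reduced: pool {reduced_pool / GIB:.2f} GiB, rest {(instance_memory - reduced_pool) / GIB:.2f} GiB")
```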

Of these, we recommend the first action for now.

We took the first action, and we haven't faced the OOM problem since.
We learned a lot from AWS Technical Support and are truly grateful to them.