Miscellaneous notes on things that caught my interest, things I studied, and other topics. Sometimes these posts will be written in English.

【AWS CodeDeploy】Resource permissions by appspec.yml

I had a misunderstanding about the "permissions" section in appspec.yml of AWS CodeDeploy.
Say we have an app whose directory structure is like below.


I wanted to set the logs directory's permissions to 757, so I wrote this:


version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/app
permissions:
  - object: logs
    mode: 757
    type:
      - directory

But it didn't work...
The AWS official document says that the permissions section applies to the resources contained in the object you specify, not to the object itself.

type – Optional. The types of objects to which to apply the specified permissions. This can be set to file or directory. If file is specified, the permissions are applied only to files that are immediately contained within object after the copy operation (and not to object itself). If directory is specified, the permissions are recursively applied to all directories/folders that are anywhere within object after the copy operation (but not to object itself).

AppSpec 'permissions' Section (EC2/On-Premises Deployments Only) - AWS CodeDeploy

So I wrote a script that runs "chmod" on the logs directory and called it from the "AfterInstall" hook; then, of course, it worked.
It may also be possible to use pattern matching that selects the logs directory from a higher-level directory (such as /).
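A minimal sketch of that AfterInstall workaround. The script name and the temp-directory fallback are my assumptions for illustration; in a real deployment this file would be registered under the hooks:/AfterInstall: section of appspec.yml and APP_DIR would simply be /var/www/html/app.

```shell
#!/bin/bash
# Hypothetical AfterInstall hook script (e.g. scripts/fix_logs_permission.sh).
# The appspec 'permissions' section never touches the specified object itself,
# so chmod the logs directory explicitly here.
# The mktemp fallback exists only so this sketch can run anywhere.
APP_DIR="${APP_DIR:-$(mktemp -d)}"
mkdir -p "$APP_DIR/logs"
chmod 757 "$APP_DIR/logs"
stat -c %a "$APP_DIR/logs"   # prints 757
```

In appspec.yml the hook entry would point at this script via its location: key.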

AWS CodeDeploy logs that you should watch.

We run a web service on AWS, in an environment like the one below
(in this case on Amazon Linux):

    - Production:
        - ALB * 1
            - EC2 * 2
    - Dev:
        - EC2 * 1
    - Production: Aurora * 1
    - Dev: Aurora * 1

Our source code is managed in a GitHub organization account, and we are going to deploy it from GitHub to the two production EC2 instances with AWS CodeDeploy.
A while back it took us many attempts to get an in-place deployment to succeed, and along the way we hit several errors in the CodeDeploy web console.
If you face errors with CodeDeploy in an environment similar to ours, you should first check the CodeDeploy agent logs on the EC2 instances; they will probably help you solve the problem.
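On Amazon Linux, the CodeDeploy agent writes to two log files by default (the paths below are the agent's standard locations); reading them during a failing deployment usually reveals the cause:

```shell
# Agent's own log: start-up, polling, and communication errors
sudo tail -n 100 /var/log/aws/codedeploy-agent/codedeploy-agent.log

# Per-deployment log: lifecycle events and the output of your hook scripts
sudo tail -n 100 /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log
```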


In addition to the above, you should confirm the following:

  • The IAM roles attached to EC2 and CodeDeploy are correct.
  • The EC2 instances' status is healthy.
  • The AWS CodeDeploy agent is installed on the EC2 instances.
  • The GitHub organization account permits access from AWS CodeDeploy.






A slice is a data type with a variable, flexible length; it holds a pointer into a base (underlying) array, the number of elements, and the capacity of that base array.
When you use make or a slice literal, the base array is allocated and a slice of it is returned. Since a slice is a reference type, changing an element of the slice changes the corresponding element of the base array in the same way.
Therefore, besides it being unnecessary to take a pointer to a slice, this keeps memory use down. If you append elements to a slice beyond its capacity, the slice is switched to a new base array... and so on.
Many blogs explain slices like this, but I could hardly find one that explains the definition of a slice's capacity, so here is a small memo about it.
According to 'A Tour of Go', the capacity of a slice is the number of elements counting from the first element of the slice to the last element of the base array.

From 'A Tour of Go':

The capacity of a slice is the number of elements in the underlying array, counting from the first element in the slice.


s1 := []int{1,2,3,4,5}
// at this point, the slice's base array is [1,2,3,4,5]

	s1: [1,2,3,4,5]
	len(s1): 5
	cap(s1): 5 -> the number of elements from the slice's first element (1) to the base array's last element (5): (1,2,3,4,5) = 5 elements

s2 := s1[0:3]

	s2: [1,2,3]
	len(s2): 3
	cap(s2): 5 -> the number of elements from the slice's first element (1) to the base array's last element (5): (1,2,3,4,5) = 5 elements


s3 := s1[2:3]

	s3: [3]
	len(s3): 1
	cap(s3): 3 -> the number of elements from the slice's first element (3) to the base array's last element (5): (3,4,5) = 3 elements
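The append behaviour hand-waved above ("its reference to the base array gets away") can be observed directly. This small program builds on the s1/s3 example: within capacity, append writes into the shared base array; past capacity, it allocates a new one.

```go
package main

import "fmt"

func main() {
	s1 := []int{1, 2, 3, 4, 5}
	s3 := s1[2:3] // shares s1's base array: len 1, cap 3

	// Within capacity: append writes into the shared base array,
	// so s1 is modified too.
	s3 = append(s3, 99)
	fmt.Println(s1) // [1 2 3 99 5]

	// Beyond capacity: append allocates a new base array,
	// so s1 is no longer affected by writes through s3.
	s3 = append(s3, 0, 0)
	s3[0] = -1
	fmt.Println(s1) // [1 2 3 99 5]
	fmt.Println(s3) // [-1 99 0 0]
}
```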




When you need to handle JSON data in Go, the most popular way is the "Unmarshal" function in the encoding/json package, called with byte-encoded JSON and a prepared struct. However, the struct's field names must start with an upper-case letter; otherwise "Unmarshal" cannot map the data into the struct.
The official documentation says "Unmarshal will only set exported fields of the struct."
"Exported fields" means fields that are usable from other packages.

To unmarshal JSON into a struct, Unmarshal matches incoming object keys to the keys used by Marshal (either the struct field name or its tag), preferring an exact match but also accepting a case-insensitive match. Unmarshal will only set exported fields of the struct.

Details below.
json - The Go Programming Language

Rollback Atom's "Synchronize Settings" to specific revision.

Suppose you use "Synchronize Settings", the Atom editor package for synchronizing settings between multiple environments, and you have taken a backup by mistake. You can roll back in the following way, which is faster than reverting each file by hand.

Clone your gist to your local machine.

$ git clone [your gist repo url]

Check the git log and find the commit ID you want to roll back to.

$ git log
commit xxxxxxxxxxxxx

Reset your local branch to that commit ID.

$ git reset --hard xxxxxxxxxxxxx

Temporarily delete the remote master branch.

$ git push origin :master

Push your rolled-back local commits to the remote master.

$ git push origin master

That's all.


Multi-master capability for AWS Aurora has been released in preview.

# Summary


AWS Aurora restarted due to an Out Of Memory error

Recently we had a problem where our Aurora database restarted at the same time every day.
A batch job with a huge query was running at that time, so we guessed it was the cause of the restarts.
We asked AWS Technical Support about it and received the following answer:

We think your guess is almost correct. Judging from your CloudWatch metrics, the Aurora restarts were probably caused by the batch process.
By default, 75% of the memory on Aurora is assigned to the innodb_buffer_pool.
This buffer is mainly used for caching table and index data, so all other uses (table caches, log buffers, per-connection memory) share the remaining 25%.
Therefore you cannot use the full 25% of memory just for your queries; in practice it is less than that.
In this case, the memory used by your batch exceeded the memory actually available, so an OOM error occurred.

The conceivable actions for this problem are as follows:

  • Decrease innodb_buffer_pool_size from the default (75%) to 50-60%:
    set innodb_buffer_pool_size to {DBInstanceClassMemory*2/4} in the parameter group console.
  • Upgrade the DB instance class.
  • Optimize your query.

Of these, we recommend the first action this time.

This time we took the first action, and we haven't faced the OOM problem since.
We learned a lot from AWS Technical Support, and we really appreciate their help.
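As a rough worked example of what the parameter change means (the 16 GiB instance size below is hypothetical, not our actual class), halving the buffer pool doubles the memory left for connections, table caches, and log buffers:

```go
package main

import "fmt"

func main() {
	const memGiB = 16.0 // hypothetical instance memory

	defaultPool := memGiB * 3.0 / 4.0 // Aurora default: {DBInstanceClassMemory*3/4}
	reducedPool := memGiB * 2.0 / 4.0 // suggested:      {DBInstanceClassMemory*2/4}

	fmt.Printf("default: buffer pool %.1f GiB, everything else %.1f GiB\n",
		defaultPool, memGiB-defaultPool) // 12.0 GiB vs 4.0 GiB
	fmt.Printf("reduced: buffer pool %.1f GiB, everything else %.1f GiB\n",
		reducedPool, memGiB-reducedPool) // 8.0 GiB vs 8.0 GiB
}
```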