TRY AND ERROR

Notes on things that caught my interest, things I've studied, and other miscellanea. Sometimes these posts will be written in English.

Field names when unmarshaling JSON into a struct in Go

When handling JSON in Go, the standard pattern is to pass the JSON string as a byte slice to Unmarshal in the encoding/json package and have it fill a struct defined in advance. Doing this, I noticed that the mapping fails unless the struct's field names start with an upper-case letter.
Curious about it, I checked the official documentation, and it clearly states that Unmarshal only sets exported fields (exported in the Go scoping sense, i.e. referable from other packages).

When you need to handle JSON data in Go, the most popular way is to pass byte-encoded JSON, together with a prepared struct, to Unmarshal in the encoding/json package. But the struct's fields must be written with the first character in upper case; if not, Unmarshal cannot map the data into the struct.
The official documentation has a sentence saying exactly that: "Unmarshal will only set exported fields of the struct."
"Exported fields" here means fields that can be used from other packages.

To unmarshal JSON into a struct, Unmarshal matches incoming object keys to the keys used by Marshal (either the struct field name or its tag), preferring an exact match but also accepting a case-insensitive match. Unmarshal will only set exported fields of the struct.

Details below.
json - The Go Programming Language

Rolling back Atom's "Synchronize Settings" to a specific revision

Suppose you use "Synchronize Settings", the Atom package for synchronizing settings between multiple environments, and you have taken a backup by mistake. You can roll back with the following steps, which is faster than reverting each file by hand.

Clone your gist locally.

$ git clone [your gist repo url]

Check the git log and find the commit ID you want to roll back to.

$ git log
commit xxxxxxxxxxxxx

Reset your local branch to that commit ID.

$ git reset --hard xxxxxxxxxxxxx

First, delete the remote master branch.

$ git push origin :master

Push your rolled-back local commits to the remote master.

$ git push origin master
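If your remote allows it, the delete-and-push pair above can be collapsed into a single force push. Here is a runnable sketch against a throwaway local "remote" (the repo names and commit messages are made up):

```shell
# Set up a throwaway bare "remote" and a clone with two commits.
git init --bare remote.git
git clone remote.git work
cd work
git config user.email "you@example.com"
git config user.name "you"
git commit --allow-empty -m "good settings"
git commit --allow-empty -m "bad backup"
git push origin HEAD

# Roll back to the good commit and rewrite the remote in one step.
git reset --hard HEAD~1
git push --force origin HEAD
```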

That's all.

Aurora multi-master released in preview

A multi-master feature for AWS Aurora has been released in preview.

# Announcement
https://aws.amazon.com/jp/about-aws/whats-new/2017/11/sign-up-for-the-preview-of-amazon-aurora-multi-master/

The preview seems to be available only for the MySQL-compatible edition.
Personally, I'm thrilled that adding masters makes horizontal scaling of writes possible.
I wonder when it will come out of preview...

AWS Aurora restarted due to an Out Of Memory error

Recently we had a problem where our Aurora database kept restarting at the same time every day.
Since a batch job with a huge query was running at that time, we guessed it was the cause of the restarts.
We asked AWS Technical Support about it and received the following answer.

We think your guess is almost correct. Judging from your CloudWatch metrics, the Aurora restarts were probably caused by the batch process.
By default, 75% of the memory on Aurora is assigned to the InnoDB buffer pool (innodb_buffer_pool_size).
This buffer is mainly used for caching data and indexes, so other uses such as table caches, log buffers, and per-connection memory are assigned to the remaining 25%.
Therefore you cannot use the full 25% of memory just for your queries; in practice it is less than that.
In this case, the memory used by your batch exceeded the memory actually available, and an OOM error occurred.

The conceivable actions for this problem are as follows.

  • Decrease "innodb_buffer_pool_size" from the default (75%) to 50-60%.

Set "innodb_buffer_pool_size" to {DBInstanceClassMemory*2/4} in the parameter group console.

  • Upgrade the DB instance class.
  • Optimize your queries.

Of these, we recommend the first action this time.
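For reference, changing the parameter via the AWS CLI might look roughly like this. The parameter group name below is hypothetical, and depending on your setup the parameter may live in a DB parameter group or a DB cluster parameter group, so double-check against your own environment.

```shell
# {DBInstanceClassMemory*2/4} = 50% of instance memory (the default is *3/4 = 75%).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-aurora-params \
    --parameters "ParameterName=innodb_buffer_pool_size,ParameterValue={DBInstanceClassMemory*2/4},ApplyMethod=pending-reboot"
```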

We took the first action for the problem, and we haven't faced the OOM issue since then.
We learned a lot from AWS Technical Support and really appreciate their help.

A serious bug in macOS High Sierra

WTF\(^o^)/ Unbelievable!

There's a serious bug in macOS High Sierra when multiple users are allowed to log in: anyone seems to be able to gain root privileges on the Mac without any special technical skill.

Anyway, you'd better see this page for the details.
www.wired.com


The person who first discovered this bug must have freaked out, I guess.

Modified the behavior of the jQuery plugin "Counter-Up" so the effect runs only on first view

I think this is the most fantastic plugin for dynamically counting up number text.

Counter-Up/jquery.counterup.js at master · bfintal/Counter-Up · GitHub

But it has been getting a little old recently, and the option named "triggerOnce" no longer works.
The problem is that the count-up effect fires every time the element is scrolled into view, even though I set the "triggerOnce" option. Since I want the effect to fire only the first time when that option is passed to the initializer, I fixed it by adding the code below to jquery.counterup.js.

// Start the count up
setTimeout($this.data('counterup-func'), $settings.delay);
$this.attr("data-counterup_finished", true);    // Add

And

var counterUpper = function() {

    // Add start
    if ($this.data("counterup_finished") == true) {
        return false;
    }
    // Add end

    var nums = [];
    var divisions = $settings.time / $settings.delay;
    var num = $this.text();

    ...


That's all.

Make replication between an external MySQL and RDS Aurora

My external MySQL (i.e. non-RDS) holds a large volume of records and has been replicating between a master and a slave, both outside AWS. Dumping that data and importing it into Aurora is painfully slow and annoying. Instead of mysqldump, I first tried percona-xtrabackup, a third-party tool that can migrate a MySQL database via S3, but restoring into Aurora with percona-xtrabackup didn't work because the MySQL version wasn't supported. (According to the error message, restoring from S3 data is accepted only when the source MySQL is version 5.6.)
So I gave up on percona-xtrabackup, and below are the snippets showing how to set up replication between an external MySQL and RDS Aurora with mysqldump.


# Refer to this for details.
http://docs.aws.amazon.com/ja_jp/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.NonRDSRepl.html


This is my environment.

CentOS Linux 7.2.1511
mysql 5.7.16
innobackupex 2.4.8

First of all, on the source MySQL you need to add a replication user for Aurora.

mysql > CREATE USER 'repl_aurora'@'aurora_host' IDENTIFIED BY '<password>';
mysql > GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_aurora'@'aurora_host';


Then make the backup as a gzipped file. At the same time you need to pass the option "--master-data=2" so that the dump contains the MASTER_LOG_FILE and MASTER_LOG_POS statement.

$ MYSQL_PWD="xxxxxxxxxxxx" mysqldump --opt --all-databases --events --default-character-set=binary --master-data=2 -u myuser | gzip > ./backup.sql.gz


After the backup has finished, search for the MASTER_LOG_FILE and MASTER_LOG_POS statement like this.
With the zgrep command, you can grep inside a gzipped file as it is.

$ zgrep -i "CHANGE MASTER TO" backup.sql.gz > grepped.txt
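To pull the two values out of that line, a couple of sed substitutions work. The sample line below is made up for illustration; in practice you would read it from grepped.txt.

```shell
# A typical commented-out line in a --master-data=2 dump looks like this:
line="-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=45678;"

# Extract the binlog file name and position with sed.
log_file=$(echo "$line" | sed -E "s/.*MASTER_LOG_FILE='([^']+)'.*/\1/")
log_pos=$(echo "$line" | sed -E "s/.*MASTER_LOG_POS=([0-9]+).*/\1/")
echo "$log_file $log_pos"   # mysql-bin.000123 45678
```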

Import the backup data into Aurora.

$ zcat ./backup.sql.gz | mysql -u root -p -h aurora_host db_name


Set the master database info on Aurora with the provided stored procedure.
The 'mysql bin file' and 'position' values can be fetched from grepped.txt.

mysql > CALL mysql.rds_set_external_master ('source mysql host', port, 'repl_aurora', 'repl password', 'mysql bin file', position, 0);


Start replication.

mysql > CALL mysql.rds_start_replication;


Watch the replication status.

mysql > SHOW SLAVE STATUS \G