Elixir with Phoenix - JSToElm


We're doing it again: avoiding actual work by playing around with Elixir and the Phoenix framework, and I love it. Now, can we get it deployed in a reasonable way, so that the projects we work on can be shown off to the world?

Building it locally

  1. Easy enough to get it running with the instructions from Elixir and Phoenix
  2. Problems with Postgres install and running

    1. Searched "postgresql install ubuntu"
    2. sudo su - postgres, then pg_ctlcluster 11 main start
    3. Getting the default Postgres cluster running required pg_ctlcluster
    4. Finally running a local db; not my strong suit
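The local Postgres wrangling above roughly condenses to the following. This is a sketch, assuming the version-11 cluster mentioned above and the default `postgres` role; adjust both to your machine.

```shell
# install PostgreSQL on Ubuntu
sudo apt-get install -y postgresql

# start the default cluster; this was the missing piece for me
sudo pg_ctlcluster 11 main start

# give the default "postgres" role the password Phoenix's dev config expects
sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'postgres';"

# then, from the project root, create the dev database
mix ecto.create
```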

Package it up for deployment

  1. Add Distillery {:distillery, "~> 2.0"}

  2. Hex Package Manager for the BEAM ecosystem

    1. About Six Colors AB

      Hex was started as an open-source project in early 2014. It is still an open-source project, but today it is operated by the Swedish limited company “Six Colors AB”. The company was founded in 2018 by the creator of Hex, Eric Meadows-Jönsson. Six Colors supports the development of Hex and operates all services required to run Hex.

      By charging for private packages we can fund free open-source development and run reliable services for both paying customers and the open-source community.

Get it out there

  1. Setup a VM on Azure

    1. Basic Ubuntu VM: 1 CPU, 1 GB RAM
    2. 8 bucks a month
  2. Setup the Postgres

    1. 1 CPU & 5 GB storage
    2. 27 bucks a month!
  3. OK…now what?
  4. I can ssh into the VM 👍
  5. Probably need to get the postgres creds into the prod build for my app?

    1. set the env vars to be able to run MIX_ENV=prod mix release
$ mix deps.get --only prod
$ MIX_ENV=prod mix compile
$ npm run deploy --prefix assets
$ mix phx.digest

npm run deploy --prefix assets && MIX_ENV=prod mix do phx.digest, release --env=prod
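Getting the Postgres creds into the prod build, as mentioned above, means having the env vars set before the release commands run. A minimal sketch; the variable names here are assumptions, so match them to whatever config/prod.exs actually reads.

```shell
# hypothetical names: use whatever your prod config actually reads
export DATABASE_URL="ecto://hello_meow:sekrit@localhost/hello_meow_prod"
export SECRET_KEY_BASE="$(head -c 48 /dev/urandom | base64 | tr -d '\n')"
export MIX_ENV=prod

# sanity check before kicking off the build
echo "MIX_ENV=$MIX_ENV"
```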

  1. Let’s try this without an Nginx proxy 🤷‍♀️

    1. Port forwarding with iptables
    2. Tried Ubuntu's ufw, but it wasn't enabled on the VM by default
    3. So I went with iptables-style port forwarding
  2. Getting the DB connection is a struggle

    1. How do I do mix ecto.create for prod? Since, you know, there isn’t any mix?
    2. Ecto Migrations
    3. edeliver writeup from plataformatec
Attaching to /home/meowAdmin/hello-meow/var/erl_pipes/hello_meow@ (^D to exit)

iex(hello_meow@> path = Application.app_dir(:hello_meow, "priv/repo/migrations")
iex(hello_meow@> Ecto.Migrator.run(HelloMeow.Repo, path, :up, all: true)
22:08:49.855 [info] Already up
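For the record, the iptables-style port forwarding from a few steps back comes down to one NAT rule: redirect privileged port 80 to the port the app actually listens on. Port 4000 is the Phoenix default and an assumption here; adjust if your endpoint config differs.

```shell
# redirect incoming HTTP traffic to the Phoenix port
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4000

# check that the rule took
sudo iptables -t nat -L PREROUTING -n --line-numbers
```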

Continuous Delivery

One option

Digital Ocean Option

AWS Option

Next Try

Let’s go with the AWS option, and we’re going to roughly follow a couple of guides to get what we want. And what’s that?

  1. Continuous deployment

    1. Whatever I git commit to master is built and deployed to the running instance. Whether that’s a hot upgrade, or a deploy and drain of an EC2. Whatever, I’m not too concerned with the details at this point. I want to be able to build locally, test it out, and then trigger a deploy by doing what I already do: a git commit and/or merge.
    2. Want to be able to roll back easily. That might be the aws cli, or even a click on the AWS console. Either would be fine; I have no idea how this will work in practice with automatic build and deploy yet.
    3. Build fails if tests fail.

Elixir on AWS

or this one Elixir w/ Docker on AWS

diagram of what we're after

  1. AWS install cli and get signed in.

    1. setup ssh for CodeCommit
    2. config cli with IAM account
    3. Be sure our new IAM user has permissions for CloudFormation (this might be hairy)
    4. Skipping CloudFormation bc 😱
  2. Get the project pushed up to the git remote

  3. Create the RDS PostgreSQL instance and get the address, username, and password

  4. Create an encrypted S3 bucket for secrets

  5. Spin up 2 EC2 instances of the default linux and ssh into them with the *.pem that you download from the console

    1. not sure how with a PEM? yeah, me either: aws ec2 ssh
    2. turns out you can just use the PEM in the ssh command
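For anyone else wondering: the .pem goes straight into ssh with -i. The user and hostname below are placeholders, not values from this project.

```shell
# the key must not be world-readable, or ssh refuses to use it
chmod 400 my-key.pem

# default user is "ec2-user" on Amazon Linux, "ubuntu" on Ubuntu AMIs
ssh -i my-key.pem ec2-user@ec2-XX-XX-XX-XX.compute-1.amazonaws.com
```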
  6. ssh into the EC2 instance and install CodeDeploy, mmmmmmm… with a list of 8 commands?

    1. ummmm, ok 🤷‍♀️ codedeploy agent install
    2. rando site for install package by s3 region
    3. wget https://aws-codedeploy-us-east-2.s3.us-east-2.amazonaws.com/latest/install
    4. is your agent running? sudo service codedeploy-agent status
    5. At this point I haven’t seen CodeDeploy in action, but I have the agent running.
    6. Repeat on the 2nd instance
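The 8-ish commands from the agent install guide condense to roughly this, assuming Amazon Linux and the us-east-2 bucket from the wget above:

```shell
# dependencies for the agent installer
sudo yum update -y
sudo yum install -y ruby wget

# fetch the installer from the region-specific bucket (us-east-2 here)
cd /home/ec2-user
wget https://aws-codedeploy-us-east-2.s3.us-east-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto

# is your agent running?
sudo service codedeploy-agent status
```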
  7. That seemed to be the easy part. I’ve now got the CodeDeploy agent running on my instances. Now it appears I need an appspec.yml. Oh boy, do I love YAML files.

    1. Have the YAML file
    2. Slowly try to refine it
    3. Uploaded secrets to a secure S3 bucket with no public access
    4. Scripts pull them down and export them before the build
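The “pull them down and export them” step is a few lines of script. The bucket and file names here are made up for the sketch; the pattern is the point.

```shell
# fetch the env file from the locked-down bucket (names assumed)
aws s3 cp s3://hello-meow-secrets/prod.env /tmp/prod.env

# export every variable the file defines, then clean up
set -a
. /tmp/prod.env
set +a
rm /tmp/prod.env
```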
  8. Current cycle:

    1. Update code config
    2. git push
    3. get commit hash
    4. run codedeploy
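That cycle, as a script. The application and deployment-group names are placeholders, but aws deploy create-deployment with a --github-location is the real CLI shape.

```shell
# 1-2: push the config change
git push origin master

# 3: grab the commit hash we just pushed
COMMIT=$(git rev-parse HEAD)

# 4: kick off CodeDeploy against that exact commit
aws deploy create-deployment \
  --application-name hello-meow \
  --deployment-group-name hello-meow-group \
  --github-location repository=me/hello-meow,commitId=$COMMIT
```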
  9. Now on 8 failed deploys

  10. Then it occurs to me. Even if I get it running.

    2. 😡
  11. So, over to CodeBuild as a product.

    1. deploying to aws from hex:distillery

    2. Mess with CodeBuild for a couple of hours

    3. Realize that CodePipeline is really what I want, and that some options aren’t available unless you start from there

    4. Build a pipeline

      1. detour from aws to CircleCI for another couple hours

      2. Even within a project I will bounce when I hit a wall, rather than taking a 10 minute break and coming back to it. I was trying to be super good at time tracking, but it’s gone out the window for this project.

      3. Got it building and testing on CircleCI; then the aws orbs showed up when I went to figure out how to get the build from CI to S3, and finally onto the EC2 instances to run

      4. That seems like a real chore, using the same aws-cli that I was using in the AWS interface!!!!

      5. Back to aws 😭

      6. At least there are gobs of documentation and help articles

      7. Cool I like VPC (Virtual Private Cloud)

      8. load balancers need 2 subnets in different availability zones

      9. both need to have an internet gateway? which means the route tables for both public subnets created need to have the internet gateway … I think

      10. ok. got past that step. now

      AWS Certificate Manager (ACM) is the preferred tool to provision and store server certificates. If you previously stored a server certificate using IAM, you can deploy it to your load balancer. Learn more about HTTPS listeners and certificate management.

      1. so many steps

      2. ok. so have a cert for the domain.

      3. Use the name servers from AWS to populate the ‘nameservers’ on Hover.

      4. Now. Now we can set up the loadbalancer with a target ‘group’

      5. Remember we need to actually deploy the code from S3 to the EC2.

      6. Need to ssh in and install codedeploy again

      7. had to wipe the EC2 instances and start again

      8. Need Blue/Green so 2 ? 🤷‍♀️

      9. Also remember there is no codedeploy agent on the ec2’s anymore

      10. AND they are not publicly accessible so I can’t ssh into them and run the commands

      11. so that leads me to AWS Systems Manager 🤦‍♂️

        1. after fighting with IAM to get the EC2s to show up,
        2. I might have an outdated agent on the instance, to run what I need?

        The version of SSM Agent on the instance supports Session Manager, but the instance is not configured for use with AWS Systems Manager. Verify that the IAM instance profile attached to the instance includes the required permissions. Learn more

        1. I didn’t even know what a bastion instance was
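Once the instance profile and agent issues above are sorted, Session Manager stands in for both the bastion and the .pem. The instance id is a placeholder, and this assumes the session-manager-plugin is installed alongside the aws cli locally.

```shell
# open a shell on a private instance: no ssh, no bastion, no open port 22
aws ssm start-session --target i-0123456789abcdef0
```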
Published 14 Jun 2019

A show about learning Elm, Functional Programing, and generally leveling up as a JS developer.