Mirror of https://github.com/livebook-dev/livebook.git
Use a fixed port on deployments (#2989)
Commit 0208856133 (parent d230e0f6a7)
6 changed files with 79 additions and 58 deletions

@@ -1,6 +1,6 @@
# Clustering

-If you plan to run several Livebook instances behind a load balancer, you need to enable clustering via the `LIVEBOOK_CLUSTER` environment variable. This page describes how to configure the relevant environment variables.
+If you plan to run several Livebook instances behind a load balancer, you need to enable clustering via the `LIVEBOOK_CLUSTER` environment variable. This page describes how to configure the relevant environment variables. By default, Livebook uses port `13825` for nodes in a cluster to communicate.

If you are using [Livebook Teams](https://livebook.dev/teams/), you can deploy with the click of a button by running Livebook servers inside your infrastructure. To get started, open Livebook and click "Add Organization" in the sidebar. Once completed, open the Application pane in the sidebar (the one with a rocket icon) and click "Deploy with Livebook Teams". We provide templates for clustering on Fly.io and Kubernetes, with no need to follow the steps below.
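
As a rough illustration of the setup described above, here is a minimal sketch of one clustered instance (the image, host port, and DNS query are illustrative; only `LIVEBOOK_CLUSTER`, `LIVEBOOK_COOKIE`, and the default distribution port `13825` come from this commit):

```shell
# Hypothetical invocation; run the same command for each node behind the
# load balancer. All instances must share the cookie, resolve each other
# through the DNS query, and be able to reach each other on port 13825
# for distribution traffic.
docker run -d -p 8080:8080 \
  -e LIVEBOOK_CLUSTER="dns:livebook.internal" \
  -e LIVEBOOK_COOKIE="my-secret-cookie" \
  ghcr.io/livebook-dev/livebook
```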

@@ -20,7 +20,7 @@ Detects the hosting platform and automatically sets up a cluster using DNS configuration

If you're running Livebook in the **AWS ECS** environment, the `auto` configuration will automatically cluster based on the ECS Container Metadata HTTP API. The cluster's "deployment" name is based on a SHA checksum of the ECS Container Image ID. You largely don't need to care about this, but any Livebook deployment using the same image ID will be clustered together. If you want more containers (say, in **AWS Fargate**), increase the `desiredCount` of the family.

-While ECS and Fargate won't need any further configuration for clustering, you will need to do network level configuration to allow the containers to talk to other resources (databases, S3, etc), as well as be reached by the public internet, etc. That configuration is outside of the scope of this documentation. If you're having issues connecting, there's a good chance it's either you haven't setup the standard ports required, you haven't correctly setup or configured the security groups, or you haven't correctly configured the HTTP listeners/load balancers.
+While ECS and Fargate won't need any further configuration for clustering, you will need to do network-level configuration to allow the containers to talk to each other (using port 13825) and to other resources (databases, S3, etc). That configuration is outside the scope of this documentation. If you're having issues connecting, chances are you haven't set up the required ports, haven't correctly configured the security groups, or haven't correctly configured the HTTP listeners/load balancers.
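
For the security-group piece, a self-referencing ingress rule on the fixed distribution port is one common shape. A hedged sketch with the AWS CLI (the group IDs are placeholders, not part of this commit):

```shell
# Allow tasks in the same security group to reach each other on the
# Erlang distribution port 13825 (sg-... values are placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 13825 \
  --source-group sg-0123456789abcdef0
```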
#### Fly.io

@@ -28,7 +28,7 @@ When deploying Livebook to Fly.io, the `auto` configuration automatically connects
#### Kubernetes

When using the Livebook application, you can also choose "auto" for Kubernetes deployments, but those values are automatically replaced by a DNS query, such as `dns:livebook-headless.$(POD_NAMESPACE).svc.cluster.local`, when generating the relevant resource definitions. See [the Kubernetes section in the Docker guides](docker.md#Kubernetes) for an example.
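
To sanity-check that the DNS query resolves inside the cluster, something like the following should return one address per Livebook pod (a sketch assuming the headless Service named above, the `default` namespace, and an image that ships `nslookup`; none of these are fixed by this commit):

```shell
# Each running Livebook pod should appear as a separate A record.
kubectl exec -it deploy/livebook -- \
  nslookup livebook-headless.default.svc.cluster.local
```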
### `dns:QUERY`

@@ -1,12 +1,14 @@
if "!RELEASE_COMMAND!"=="rpc" goto remote
|
||||
if "!RELEASE_COMMAND!"=="remote" goto remote
|
||||
goto server
|
||||
|
||||
:server
|
||||
if exist "!RELEASE_ROOT!\user\env.bat" (
|
||||
call "!RELEASE_ROOT!\user\env.bat"
|
||||
)
|
||||
|
||||
set RELEASE_MODE=interactive
|
||||
if not defined RELEASE_DISTRIBUTION set RELEASE_DISTRIBUTION=none
|
||||
|
||||
if defined LIVEBOOK_NODE set RELEASE_NODE=!LIVEBOOK_NODE!
|
||||
if defined LIVEBOOK_COOKIE set RELEASE_COOKIE=!LIVEBOOK_COOKIE!
|
||||
set RELEASE_DISTRIBUTION=none
|
||||
|
||||
if not defined RELEASE_COOKIE (
|
||||
for /f "skip=1" %%X in ('wmic os get localdatetime') do if not defined TIMESTAMP set TIMESTAMP=%%X
|
||||
|

@@ -14,3 +16,12 @@ if not defined RELEASE_COOKIE (
)

cd !HOMEDRIVE!!HOMEPATH!
+goto end
+
+:remote
+set RELEASE_DISTRIBUTION=name
+if defined LIVEBOOK_NODE set RELEASE_NODE=!LIVEBOOK_NODE!
+if defined LIVEBOOK_COOKIE set RELEASE_COOKIE=!LIVEBOOK_COOKIE!
+goto end
+
+:end
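
The new `:remote` branch exists so that attaching to a running server picks up the fixed node name and cookie. A hedged usage sketch, shown for the Unix release script (the Windows flow above is equivalent; the values are illustrative):

```shell
# Attach an interactive shell to a running Livebook release. The node
# name and cookie must match what the server booted with.
LIVEBOOK_NODE="livebook@127.0.0.1" \
LIVEBOOK_COOKIE="my-secret-cookie" \
bin/livebook remote
```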

@@ -1,50 +1,49 @@
-if [ "$LIVEBOOK_CLUSTER" = "auto" ] && [ ! -z "$FLY_APP_NAME" ]; then
-  export LIVEBOOK_CLUSTER="dns:${FLY_APP_NAME}.internal"
+if [[ "$RELEASE_COMMAND" == "rpc" || "$RELEASE_COMMAND" == "remote" ]]; then
+  export RELEASE_DISTRIBUTION="name"
+  if [ ! -z "${LIVEBOOK_NODE}" ]; then export RELEASE_NODE=${LIVEBOOK_NODE}; fi
+  if [ ! -z "${LIVEBOOK_COOKIE}" ]; then export RELEASE_COOKIE=${LIVEBOOK_COOKIE}; fi
+else
+  if [ "$LIVEBOOK_CLUSTER" = "auto" ] && [ ! -z "$FLY_APP_NAME" ]; then
+    export LIVEBOOK_CLUSTER="dns:${FLY_APP_NAME}.internal"

-  case "$ERL_AFLAGS $ERL_ZFLAGS" in
-    *"-proto_dist"*) ;;
-    *)
-      export ERL_AFLAGS="$ERL_AFLAGS -proto_dist inet6_tcp"
-      ;;
-  esac
+    case "$ERL_AFLAGS $ERL_ZFLAGS" in
+      *"-proto_dist"*) ;;
+      *)
+        export ERL_AFLAGS="$ERL_AFLAGS -proto_dist inet6_tcp"
+        ;;
+    esac

-  if [ -z "${LIVEBOOK_NODE}" ]; then
-    deployment="$(echo "$FLY_IMAGE_REF" | shasum | cut -c 1-10)"
-    export LIVEBOOK_NODE="${FLY_APP_NAME}-${deployment}@${FLY_PRIVATE_IP}"
-  fi
-elif [ "$LIVEBOOK_CLUSTER" = "auto" ] && [ ! -z "$ECS_CONTAINER_METADATA_URI" ]; then
-  metadata="$(curl --silent $ECS_CONTAINER_METADATA_URI)"
-  machine_ip="$(echo $metadata | $RELEASE_ROOT/bin/livebook eval 'IO.read(:stdio, :eof) |> JSON.decode!() |> Map.fetch!("Networks") |> hd() |> Map.fetch!("IPv4Addresses") |> hd() |> IO.write()')"
-  image_id="$(echo $metadata | $RELEASE_ROOT/bin/livebook eval 'IO.read(:stdio, :eof) |> JSON.decode!() |> Map.fetch!("ImageID") |> IO.write()')"
-
-  if [ -z "${LIVEBOOK_NODE}" ]; then
-    deployment="$(echo $image_id | shasum | cut -c 1-10)"
-    export LIVEBOOK_NODE="livebook-${deployment}@${machine_ip}"
-  fi
-fi
+    if [ -z "${LIVEBOOK_NODE}" ]; then
+      deployment="$(echo "$FLY_IMAGE_REF" | shasum | cut -c 1-10)"
+      export LIVEBOOK_NODE="${FLY_APP_NAME}-${deployment}@${FLY_PRIVATE_IP}"
+    fi
+  elif [ "$LIVEBOOK_CLUSTER" = "auto" ] && [ ! -z "$ECS_CONTAINER_METADATA_URI" ]; then
+    metadata="$(curl --silent $ECS_CONTAINER_METADATA_URI)"
+    machine_ip="$(echo $metadata | $RELEASE_ROOT/bin/livebook eval 'IO.read(:stdio, :eof) |> JSON.decode!() |> Map.fetch!("Networks") |> hd() |> Map.fetch!("IPv4Addresses") |> hd() |> IO.write()')"
+    image_id="$(echo $metadata | $RELEASE_ROOT/bin/livebook eval 'IO.read(:stdio, :eof) |> JSON.decode!() |> Map.fetch!("ImageID") |> IO.write()')"
+
+    if [ -z "${LIVEBOOK_NODE}" ]; then
+      deployment="$(echo $image_id | shasum | cut -c 1-10)"
+      export LIVEBOOK_NODE="livebook-${deployment}@${machine_ip}"
+    else
+      export LIVEBOOK_NODE="${LIVEBOOK_NODE}@${machine_ip}"
+    fi
+  fi

-if [ -f "${RELEASE_ROOT}/user/env.sh" ]; then
-  . "${RELEASE_ROOT}/user/env.sh"
-fi
+  if [ -f "${RELEASE_ROOT}/user/env.sh" ]; then
+    . "${RELEASE_ROOT}/user/env.sh"
+  fi

-export RELEASE_MODE="interactive"
-if [ -z "${RELEASE_DISTRIBUTION}" ]; then export RELEASE_DISTRIBUTION="none"; fi
+  export RELEASE_MODE="interactive"
+  export RELEASE_DISTRIBUTION="none"

-# Mirror these values, so that it is easier to use "bin/release rpc",
-# though it still requires setting RELEASE_DISTRIBUTION=name
-if [ ! -z "${LIVEBOOK_NODE}" ]; then export RELEASE_NODE=${LIVEBOOK_NODE}; fi
-if [ ! -z "${LIVEBOOK_COOKIE}" ]; then export RELEASE_COOKIE=${LIVEBOOK_COOKIE}; fi
-
-# We remove the COOKIE file when assembling the release, because we
-# don't want to share the same cookie across users. The Elixir release
-# script attempts to read from that file, which would fail, therefore
-# we need to set it here. Also there is a very tiny time gap between we
-# start distribution and set the cookie during application boot, so we
-# specifically want the temporary node cookie to be random, rather than
-# a fixed value. Note that this value is overriden on boot, so other
-# than being the initial node cookie, we don't really use it.
-export RELEASE_COOKIE="${RELEASE_COOKIE:-$(cat /dev/urandom | env LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)}"
-
-cd $HOME
+  # We remove the COOKIE file when assembling the release, because we
+  # don't want to share the same cookie across users. The Elixir release
+  # script attempts to read from that file, which would fail, therefore
+  # we need to set it here. Also, there is a very tiny time gap between
+  # when we start distribution and when we set the cookie during
+  # application boot, so we specifically want the temporary node cookie
+  # to be random, rather than a fixed value. Note that this value is
+  # overridden on boot, so other than being the initial node cookie,
+  # we don't really use it.
+  export RELEASE_COOKIE="${RELEASE_COOKIE:-$(cat /dev/urandom | env LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)}"
+
+  cd $HOME
+fi
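
The ECS branch above shells out to `bin/livebook eval` to parse the metadata document, presumably because the image is not guaranteed to ship `jq`. For reference, an equivalent sketch of the same extraction written with `jq` (illustrative only, not part of the commit):

```shell
# Fetch the task metadata, pull the container IP and image ID, and derive
# the node name the same way the release script does.
metadata="$(curl --silent "$ECS_CONTAINER_METADATA_URI")"
machine_ip="$(echo "$metadata" | jq -r '.Networks[0].IPv4Addresses[0]')"
image_id="$(echo "$metadata" | jq -r '.ImageID')"
deployment="$(echo "$image_id" | shasum | cut -c 1-10)"
echo "livebook-${deployment}@${machine_ip}"
```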

@@ -1,12 +1,7 @@
#!/bin/sh
set -e

cd -P -- "$(dirname -- "$0")"

-# Livebook does not start EPMD automatically, but we want to start it
-# here, becasue we need it for clustering
-epmd -daemon
-
if [ -n "${FLAME_PARENT}" ]; then
  exec elixir ./start_flame.exs
elif [ -n "${LIVEBOOK_RUNTIME}" ]; then

rel/server/remote.vm.args.eex (new file, 9 lines)
@@ -0,0 +1,9 @@
+# Disable busy waiting so that we don't waste resources
+# Limit the maximal number of ports for the same reason
++sbwt none +sbwtdcpu none +sbwtdio none +Q 65536
+
+# Set custom EPMD module and port
+-epmd_module Elixir.Livebook.EPMD -erl_epmd_port 13825
+
+# Disable listening and do not start epmd (due to RELEASE_DISTRIBUTION=name)
+-dist_listen false -start_epmd false

@@ -1,4 +1,11 @@
# Disable busy waiting so that we don't waste resources
# Limit the maximal number of ports for the same reason
-# Set the custom EPMD module
-+sbwt none +sbwtdcpu none +sbwtdio none +Q 65536 -epmd_module Elixir.Livebook.EPMD
++sbwt none +sbwtdcpu none +sbwtdio none +Q 65536
+
+# Set custom EPMD module and port
+#
+# Note that we don't use -erl_epmd_port here on purpose,
+# because that also assumes all nodes we connect to
+# run on erl_epmd_port, and Livebook's attached node
+# needs to be able to connect regularly.
+-epmd_module Elixir.Livebook.EPMD -kernel inet_dist_listen_min 13825
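
Because `inet_dist_listen_min` is set (and `inet_dist_listen_max` defaults to it), the server's distribution listener ends up pinned to 13825. A quick verification sketch from inside a running container (assuming the `ss` utility is available in the image):

```shell
# The BEAM distribution socket should be bound to the fixed port.
ss -tln | grep 13825
```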