Nowadays, there are quite a few options for ML experiment tracking and management tools: MLflow, ClearML, W&B, Comet, Neptune, DagsHub, to name but a few. Some of these are (or are close to being) end-to-end MLOps solutions.
For this tutorial, I assume that you already want to install and use ClearML.
Setting Up & Accessing the ClearML GUI
The ClearML WebUI can either be hosted by ClearML itself (they offer a free tier and some paid plans) or self-hosted (the server is open-source after all, except probably for enterprise-specific features).
Here, we will focus only on the self-hosted version, in a specific setting that makes it slightly less straightforward to set up.
Context
Say you are in the following situation:
the clearml-server Docker Compose stack running on host workstation ‘Red’ (running Linux - in my case: Xubuntu 22.04) behind a firewall
a proxy server ‘Pi’ (for Raspberry Pi) (running Linux - in my case: Raspberry Pi OS/Raspbian) outside of the firewalled network
flowchart LR
A[
Client
Blue
<hr><p style="color:blue;">Request: `ssh localhost:9108`
via `ssh proxy.blissfox.xyz -p 22`</p><hr><p style="color:red;">Request: Red:8080</p>
]
B[
Proxy
Pi
<hr>< PROXY_IP >
]
C[[
<i class="fa-solid fa-shield"></i> Firewall#8201;
]]
D["
Workstation
Red
"]
E[["
Reverse tunneling
<hr>9108:9108
"]]
F[["
OVH
<hr>`proxy.blissfox.xyz`-->< ROUTER_IP >
"]]
H[["
<i class="fa-solid fa-shield"></i> UFW#8201;
<hr>22
"]]
I["
<i class="fa-solid fa-house-signal"></i><i class="fa-solid fa-shield"></i> Router#8201;#8201;#8201;
<hr>< ROUTER_IP >:22-->< PROXY_IP >:22
"]
J[["
<i class="fa-brands fa-docker"></i>
8080 (app)
8008 (api)
8081 (files)
"]]
B x-.-x C --- D --> E
D --> J
A x-.-x C
B --- E
A --> F --> I --> H --> B
A o-.->|???| J
linkStyle 2 stroke:blue
linkStyle 3 stroke:red
linkStyle 5 stroke:blue
linkStyle 6 stroke:blue
linkStyle 7 stroke:blue
linkStyle 8 stroke:blue
linkStyle 9 stroke:blue
linkStyle 10 stroke:red
subgraph "<i class="fa-solid fa-server"></i> Proxy#8201;#8201;#8201;"
B
E
H
I
end
subgraph "<i class="fa-solid fa-server"></i> Host#8201;#8201;#8201;"
C
D
J
end
subgraph "<i class="fa-solid fa-laptop"></i> Client <i class="fa-solid fa-desktop"></i>#8201;#8201;#8201;"
A
end
subgraph "<i class="fa-solid fa-cloud"></i> OVH#8201;#8201;#8201;"
F
end
We already have a reverse tunnel set up to bypass the firewall on the server, which forwards port 9108 to the proxy (see Reverse-tunneling to bypass a firewall with no fixed IP address), so that the client can connect over SSH to the server via the proxy (in blue) (we named the alias from a client to the workstation via the proxy server workstation_sshR). Say we run services on the server in Docker containers exposed on ports 8080, 8008, 8081, and we want to access said services from the client (in red).
We will see two ways to access the WebUI and services.
Thus, you can probably use only SSH tunnels to access the clearml-server if you wish (I did not run the tests for the API and fileserver access over SSH, but I see no reason why it would not work).
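For the SSH-tunnel route, a minimal sketch from the client, assuming the `workstation_sshR` alias defined in the reverse-tunneling setup (the ports are the ClearML defaults quoted later in this page):

```shell
# Sketch: forward the three ClearML ports from Red to the client through
# the existing proxied ssh alias, then browse http://localhost:8080.
ssh -N \
  -L 8080:localhost:8080 \
  -L 8008:localhost:8008 \
  -L 8081:localhost:8081 \
  workstation_sshR
```

`-N` opens the tunnels without running a remote command; stop it with Ctrl-C when done.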
A more generic approach is to use URLs with subdomains; that is the topic of the following subsection.
On the Web
Preliminary
How would we go about to access the clearml-server services through (sub)domain name(s)/URL(s)?
The main question compared to what we previously did with reverse tunneling, would be: Can we somehow configure a redirection from a URL to an IP address with a specific port specified?
After some searches:
Translated from the original French: ''Hello! DNS servers only resolve domain names to IP addresses. They handle neither the port (in your case 8080) nor the directories (in your case `/site1`). It may be possible to handle this via your router or directly from your NAS; in that case, your domain points to your IP (thus arrives at your router), the router redirects to the NAS with the right port, and then the NAS tries to match the domain to a directory.''
'’A CNAME record does not modify the host of the request. A record A.example.net CNAME B.example.net simply says that the ip of A should be resolved as if it was B. The request will still be directed to A if you try to access A.example.net.\ Then, in your reverse proxy, you create virtual hosts for both A and B, and route them to wherever you need, be it a specific directory or another host/port.\ In any case, depending on with whom you’ve registered your mydomain.com, they might allow free DDNS, so it would be worth checking.\ Regardless, in your case, I would set up a CNAME from something like ip1.mydomain.com to the ddns domain, and then your specific services can be CNAMEd to this ip1, like nextcloud.mydomain.com -> ip1.mydomain.com’’
Considering the preceding two notes, we should:
set reverse tunnels for each service port
set a (sub)domain name for each service
direct these (sub)domains to the router of the proxy server
set the router so that it bounces http/https requests to the proxy server Pi
set a reverse proxy on the Pi so that it has a virtual host config for each pair of (sub)domain request and service ports
flowchart LR
A[
Client
Blue
]
B[
Proxy
Pi
]
C[[
<i class="fa-solid fa-shield"></i> Firewall#8201;
]]
D["
Workstation
Red
"]
J[["
<i class="fa-brands fa-docker"></i>
8080 (app)
8008 (api)
8081 (files)
"]]
E[["
Reverse tunneling
<hr>8080:8080
8008:8008
8081:8081
"]]
F[["
OVH
"]]
G[["
NGINX
"]]
H[["
<i class="fa-solid fa-shield"></i> UFW#8201;
"]]
I["
<i class="fa-solid fa-house-signal"></i><i class="fa-solid fa-shield"></i> Router#8201;#8201;#8201;
"]
B x-.-x C --- D ---|<div class="numberCircle">0</div><div class="numberCircle">7</div><div class="numberCircle">8</div><div class="numberCircle">9</div>| J
A x-.-x C
J --->|<div class="numberCircle">1</div>| E
B ---|<div class="numberCircle">1</div>| E
A -->|<div class="numberCircle">5</div>| F -->|<div class="numberCircle">4</div><div class="numberCircle">5</div>| I -->|<div class="numberCircle">3</div><div class="numberCircle">4</div><div class="numberCircle">6</div>| H -->|<div class="numberCircle">2</div><div class="numberCircle">3</div><div class="numberCircle">6</div>| G -->|<div class="numberCircle">2</div><div class="numberCircle">6</div>| B
subgraph "<i class="fa-solid fa-server"></i> Proxy#8201;#8201;#8201;"
B
E
G
H
I
end
subgraph "<i class="fa-solid fa-server"></i> Host#8201;#8201;#8201;"
C
D
J
end
subgraph "<i class="fa-solid fa-laptop"></i> Client <i class="fa-solid fa-desktop"></i>#8201;#8201;#8201;#8201;#8201;"
A
end
subgraph "<i class="fa-solid fa-cloud"></i> OVH#8201;#8201;#8201;"
F
end
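The first step of the plan (one reverse tunnel per service port) could look like the following, run from Workstation Red. This is a sketch: `pi_proxy` is an assumed SSH alias for the Pi, analogous to the one used in the reverse-tunneling post.

```shell
# Sketch: open one reverse tunnel per ClearML port (8080 app, 8008 api,
# 8081 files) from Red to the Pi. `pi_proxy` is a hypothetical ssh alias.
ssh -N \
  -R 8080:localhost:8080 \
  -R 8008:localhost:8008 \
  -R 8081:localhost:8081 \
  pi_proxy
```

In practice you would run this under autossh or a systemd unit, as in the reverse-tunneling setup, so the tunnels survive reboots and drops.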
Getting a bit familiar with Docker Compose will help; have a look at docker compose.
To deploy the ClearML server:
Follow the Install guide:
’’
Linux and macOS
Deploy the ClearML Server in Linux or macOS using the pre-built Docker image.
For ClearML docker images, including previous versions, see https://hub.docker.com/r/allegroai/clearml. However, pulling the ClearML Docker image directly is not required. ClearML provides a docker-compose YAML file that does this. The docker-compose file is included in the instructions on this page.
For information about upgrading ClearML Server in Linux or macOS, see here.
INFO
If ClearML Server is being reinstalled, clearing browser cookies for ClearML Server is recommended. For example, for Firefox, go to Developer Tools > Storage > Cookies, and for Chrome, go to Developer Tools > Application > Cookies, and delete all cookies under the ClearML Server URL.
Prerequisites
For Linux users only:
Linux distribution must support Docker. For more information, see the Docker documentation.
Be logged in as a user with sudo privileges.
Use bash for all command-line instructions in this installation.
The ports 8080, 8081, and 8008 must be available for the ClearML Server services.
Deploying
CAUTION
By default, ClearML Server launches with unrestricted access. To restrict ClearML Server access, follow the instructions in the Security page.
MEMORY REQUIREMENT
Deploying the server requires a minimum of 4 GB of memory, 8 GB is recommended.
To launch ClearML Server on Linux or macOS:
Install Docker. The instructions depend upon the operating system:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
For macOS only, increase the memory allocation in Docker Desktop to 8GB.
i. In the top status bar, click the Docker icon.
ii. Click Preferences > Resources > Advanced, and then set the memory to at least 8192.
iii. Click Apply.
For Linux only, install docker-compose. Execute the following commands (for more information, see Install Docker Compose in the Docker documentation):
For Linux only, configure the ClearML Agent Services. If CLEARML_HOST_IP is not provided, then ClearML Agent Services uses the external public address of the ClearML Server. If CLEARML_AGENT_GIT_USER / CLEARML_AGENT_GIT_PASS are not provided, then ClearML Agent Services can’t access any private repositories for running service tasks.
Run docker-compose with the downloaded configuration file.
docker-compose -f /opt/clearml/docker-compose.yml up -d
The server is now running on http://localhost:8080.
Port Mapping
After deploying ClearML Server, the services expose the following ports:
Web server on port 8080
API server on port 8008
File server on port 8081
Restarting
To restart ClearML Server Docker deployment:
Stop and then restart the Docker containers by executing the following commands:
docker-compose -f /opt/clearml/docker-compose.yml down
docker-compose -f /opt/clearml/docker-compose.yml up -d
[…]
’’
curl: Client URL, ‘‘designed to transfer data using various protocols such as HTTP, HTTPS, FTP, SCP, SFTP, and more’‘[1]
-o/--output <file>: ‘‘Write output to instead of stdout.''[2]
-L/--location: ‘‘(HTTP/HTTPS) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place. If used together with -i/--include or -I/--head, headers from all requested pages will be shown. When authentication is used, curl only sends its credentials to the initial host. If a redirect takes curl to a different host, it won’t be able to intercept the user+password. See also --location-trusted on how to change this. You can limit the amount of redirects to follow by using the --max-redirs option.
When curl follows a redirect and the request is not a plain GET (for example POST or PUT), it will do the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl will re-send the following request using the same unmodified method.’‘[2]
For an example of --location, one can look at [3].
If there is a redirection from the initial URL to another, the request is resubmitted to the new URL thanks to -L/--location. The target file here is saved at -o <DESTINATION> instead of stdout.
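As an illustration of both flags together, the download step for the Compose file can be written as follows (the URL is the one given in the ClearML install guide at the time of writing; double-check it against the guide):

```shell
# Follow any redirect (-L) and write to a file (-o) instead of stdout
sudo curl -L https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml \
  -o /opt/clearml/docker-compose.yml
```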
The top-level version property is defined by the Compose Specification for backward compatibility. It is only informative.
Compose doesn’t use version to select an exact schema to validate the Compose file, but prefers the most recent schema when it’s implemented.
Compose validates whether it can fully parse the Compose file. If some fields are unknown, typically because the Compose file was written with fields defined by a newer version of the Specification, you’ll receive a warning message. ‘’
# Launch the services
docker-compose -f /opt/clearml/docker-compose.yml up -d
# Stop the services
docker-compose -f /opt/clearml/docker-compose.yml down
# Launch the services
docker compose -f /opt/clearml/docker-compose.yml up -d
# Stop the services
docker compose -f /opt/clearml/docker-compose.yml down
docker compose -p clearml-server -f /opt/clearml/docker-compose.yml up -d
This, in turn, allows you to shut down the compose project by giving only its name instead of the full file path again:
docker compose -p clearml-server down
Launching the compose, we get:
docker-compose -f /opt/clearml/docker-compose.yml up -d
Output
WARNING: The ELASTIC_PASSWORD variable is not set. Defaulting to a blank string.
WARNING: The CLEARML_HOST_IP variable is not set. Defaulting to a blank string.
WARNING: The CLEARML_AGENT_GIT_USER variable is not set. Defaulting to a blank string.
WARNING: The CLEARML_AGENT_GIT_PASS variable is not set. Defaulting to a blank string.
WARNING: Some services (agent-services) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
Creating network "clearml_backend" with driver "bridge"
Creating network "clearml_frontend" with driver "bridge"
Creating clearml-fileserver ... done
Creating clearml-elastic ... done
Creating clearml-redis ... done
Creating clearml-mongo ... done
Creating clearml-apiserver ... done
Creating async_delete ... done
Creating clearml-webserver ... done
Creating clearml-agent-services ... done
Docker Compose adds labels to each container that it creates. If you want to get all containers created by Compose, you can perform a container ls and apply a filter.
This will show all running containers created by Compose, regardless of the project name.
For example, I created some containers from different Compose projects. With the filter, I get only those, but no other containers that have not been created by Compose and therefore don’t have a project label.
$ base='\t\t\t\t\t\t'
$ compose='\t'
$ docker container ls --all \
    --filter label=com.docker.compose.project \
    --format "table $compose\t$base"
project service STATUS CONTAINER ID NAMES IMAGE PORTS NETWORKS MOUNTS
kafka kafka Up 5 minutes 3f97a460266e kafka_kafka_1 bitnami/kafka:3 0.0.0.0:9092->9092/tcp, :::9092->9092/tcp kafka_default kafka_kafka_da…,kafka_kafa_con…
kafka zookeeper Up 5 minutes 0b6f32ccd196 kafka_zookeeper_1 bitnami/zookeeper:3.7 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 8080/tcp kafka_default kafka_zookeepe…
manager db Up 22 minutes 4f0e799b4fd7 manager_db_1 da2cb49d7a8d 5432/tcp manager_default 0d667a0e48a280…
foo db Exited (0) 37 minutes ago e106c5cdbf5e foo_db_1 da2cb49d7a8d foo_default 5a87e93627b8f6…
foo backend Up 10 minutes 08a0873c0587 foo_backend_2 c316d5a335a5 80/tcp foo_default
foo frontend Up 10 minutes be723bf41aeb foo_frontend_1 c316d5a335a5 80/tcp foo_default
foo backend Up 10 minutes 5d91d4bcfcb3 foo_backend_1 c316d5a335a5 80/tcp foo_default
manager app Up 22 minutes 2ca4c0920807 manager_app_1 c316d5a335a5 80/tcp manager_default
manager app Up 22 minutes b2fa2b9724b0 manager_app_2 c316d5a335a5 80/tcp manager_default
loadbalancer app Exited (0) 37 minutes ago 791f4059b4af loadbalancer_app_1 c316d5a335a5 loadbalancer_default
If you want to see all containers regardless of their state, you can add the --all or short -a flag to the ls command, like I did in my example. Otherwise, only running containers are shown. ‘’
docker compose ls
Output
NAME STATUS CONFIG FILES
clearml-server running(7) /opt/clearml/docker-compose.yml
docker compose -p clearml-server ps
Output
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
async_delete allegroai/clearml:latest "python3 -m jobs.asy…" async_delete 10 days ago Up 6 days 8008/tcp, 8080-8081/tcp
clearml-apiserver allegroai/clearml:latest "/opt/clearml/wrappe…" apiserver 10 days ago Up 6 days 0.0.0.0:8008->8008/tcp, :::8008->8008/tcp, 8080-8081/tcp
clearml-elastic docker.elastic.co/elasticsearch/elasticsearch:7.17.7 "/bin/tini -- /usr/l…" elasticsearch 10 days ago Up 6 days 9200/tcp, 9300/tcp
clearml-fileserver allegroai/clearml:latest "/opt/clearml/wrappe…" fileserver 10 days ago Up 6 days 8008/tcp, 8080/tcp, 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp
clearml-mongo mongo:4.4.9 "docker-entrypoint.s…" mongo 10 days ago Up 6 days 27017/tcp
clearml-redis redis:5.0 "docker-entrypoint.s…" redis 10 days ago Up 6 days 6379/tcp
clearml-webserver allegroai/clearml:latest "/opt/clearml/wrappe…" webserver 10 days ago Up 6 days 8008/tcp, 8080-8081/tcp, 0.0.0.0:8080->80/tcp, :::8080->80/tcp
’’ Web login authentication can be configured in the ClearML Server in order to permit only users provided with credentials to access the ClearML system. Those credentials are a username and password.
Without web login authentication, ClearML Server does not restrict access (by default).
To add web login authentication to the ClearML Server:
In ClearML Server /opt/clearml/config/apiserver.conf, add the auth.fixed_users section and specify the users.
For example:
auth {
# Fixed users login credentials
# No other user will be able to login
fixed_users {
enabled: true
pass_hashed: false
users: [
  {
    username: "jane"
    password: "12345678"
    name: "Jane Doe"
  },
  {
    username: "john"
    password: "12345678"
    name: "John Doe"
  },
]
}
}
If the apiserver.conf file does not exist, create your own in ClearML Server’s /opt/clearml/config directory (or an alternate folder you configured), and input the modified configuration
Restart ClearML Server. ‘’
On Workstation Red: add the following to /opt/clearml/config/apiserver.conf (or create the file if it does not exist yet), modifying the username, name, and password to your needs:
auth {
# Fixed users login credentials
# No other user will be able to login
fixed_users {
enabled: true
pass_hashed: false
users: [
  {
    username: "RR5555"
    password: "my_password"
    name: "RR5555"
  },
]
}
}
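The quoted docs then ask to restart ClearML Server. With the project name set earlier, this can be done as follows (a sketch, assuming the compose was launched with -p clearml-server as shown above):

```shell
# Restart the stack so the new fixed_users configuration takes effect
docker compose -p clearml-server down
docker compose -p clearml-server -f /opt/clearml/docker-compose.yml up -d
```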
On Red: you can check that ports 8080, 8081, and 8008 are listening:
# Check ipv4 all ports
sudo lsof -Pn -i4
# Check ipv4 port 8080
sudo lsof -Pn -i4:8080
# Check ipv4 ports 8080, 8081, 8008 TCP in LISTEN state
sudo lsof -Pn -i4:8080,8081,8008 -sTCP:LISTEN
lsof (LiSt Open Files): ‘‘It’s commonly said that in Linux, everything is a file.’‘[1]; ‘‘An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream or a network file (Internet socket, NFS file or UNIX domain socket.) A specific file or all the files in a file system may be selected by path.’‘[2]
-i4: ‘‘If -i4 or -i6 is specified with no following address, only files of the indicated IP version, IPv4 or IPv6, are displayed.’‘[2]
-n: ‘‘This option inhibits the conversion of network numbers to host names for network files. Inhibiting conversion may make lsof run faster. It is also useful when host name lookup is not working properly.’‘[2]
-P: ‘‘This option inhibits the conversion of port numbers to port names for network files. Inhibiting the conversion may make lsof run a little faster. It is also useful when port name lookup is not working properly.’‘[2]
-iTCP -sTCP:LISTEN: ‘‘to list only network files with TCP state LISTEN’‘[2]
-i [i]: ‘‘An Internet address is specified in the form (Items in square brackets are optional.):
’’ A forward proxy, often called a proxy, proxy server, or web proxy, is a server that sits in front of a group of client machines. When those computers make requests to sites and services on the Internet, the proxy server intercepts those requests and then communicates with web servers on behalf of those clients, like a middleman.
[…]
[…]
A reverse proxy is a server that sits in front of one or more web servers, intercepting requests from clients. This is different from a forward proxy, where the proxy sits in front of the clients. With a reverse proxy, when clients send requests to the origin server of a website, those requests are intercepted at the network edge by the reverse proxy server. The reverse proxy server will then send requests to and receive responses from the origin server.
The difference between a forward and reverse proxy is subtle but important. A simplified way to sum it up would be to say that a forward proxy sits in front of a client and ensures that no origin server ever communicates directly with that specific client. On the other hand, a reverse proxy sits in front of an origin server and ensures that no client ever communicates directly with that origin server.
[…]
[…]
’’
Install NGINX on the Pi and start it:
’’
[…]
We should also run the following command to uninstall Apache2, since there is a chance that it is pre-installed on your system.
Failing to do so can cause the installation to fail, since Apache2 automatically starts up and occupies port 80; as we intend to use NGINX as our web server, we remove Apache2 from the system.
sudo apt remove apache2
[…]
sudo apt install nginx
sudo systemctl start nginx
You can check that nginx is running with the following command:
Setting the `server_names_hash_bucket_size` directive is necessary to solve the following problem, which ‘‘is most likely happening because of the long domain name’’:
sudo nginx -t -c /etc/nginx/nginx.conf
nginx: [emerg] could not build server_names_hash, you should increase server_names_hash_bucket_size: 32
nginx: configuration file /etc/nginx/nginx.conf test failed
64 is enough here; if more is needed, note that ‘‘the directive value should be increased to the next power of two’’.
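Concretely, the directive belongs in the `http` block of /etc/nginx/nginx.conf; a minimal fragment (the rest of the block is whatever your distribution ships):

```nginx
http {
    # next power of two above the default 32 reported as too small
    server_names_hash_bucket_size 64;
    # ... rest of the http block unchanged
}
```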
nginx: [emerg] "server_names_hash_bucket_size" directive is not allowed here in /etc/nginx/sites-available/clearml.conf:3
nginx: configuration file /etc/nginx/sites-available/clearml.conf test failed
(Basically, it faults the very first directive encountered, since, as a stand-alone file, its directives are not encapsulated in the enclosing directive they expect.)\ Test against /etc/nginx/nginx.conf instead; the former file is included, as is, in the latter:
sudo nginx -t -c /etc/nginx/nginx.conf
Resulting in:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
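For reference, a hypothetical sketch of what /etc/nginx/sites-available/clearml.conf could contain: one virtual host per subdomain (the subdomains set up later in this page), each proxying to the corresponding reverse-tunneled port on the Pi. Directive names are standard nginx; the exact headers to pass are a judgment call.

```nginx
server {
    listen 80;
    server_name app_clearml.blissfox.xyz;
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name api_clearml.blissfox.xyz;
    location / {
        proxy_pass http://localhost:8008;
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name files_clearml.blissfox.xyz;
    location / {
        proxy_pass http://localhost:8081;
        proxy_set_header Host $host;
    }
}
```

Remember to symlink the file into /etc/nginx/sites-enabled and reload nginx for it to take effect.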
For checking and disambiguation, in addition to the sources above, see also:
type nginx
man nginx
man /etc/init.d/nginx
man service nginx
and enable launching on boot:
sudo systemctl enable nginx
’’
1.4 Configure Nginx to Launch on Boot
Use the enable option with the systemctl command to enable Nginx:
sudo systemctl enable nginx
Use the disable option with the systemctl command to disable Nginx:
sudo systemctl disable nginx
’’
How to verify if nginx is running or not
curl localhost:80
Output (positive)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Output (negative)
curl: (7) Failed to connect to localhost port 80 after 0 ms: Connection refused
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-05-14 09:13:28 KST; 2 weeks 5 days ago
Docs: man:nginx(8)
Main PID: 656 (nginx)
Tasks: 5 (limit: 1595)
CPU: 1h 14min 5.109s
CGroup: /system.slice/nginx.service
├─ 656 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
├─2582 nginx: worker process
├─2583 nginx: worker process
├─2584 nginx: worker process
└─2585 nginx: worker process
May 14 09:13:26 proxy_host_name systemd[1]: Starting A high performance web server and a reverse proxy server...
May 14 09:13:28 proxy_host_name systemd[1]: Started A high performance web server and a reverse proxy server.
'’On systemd based versions such as Ubuntu Linux 16.04 LTS and above, make use of the command below’’:
systemctl status nginx
Output
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-05-14 09:13:28 KST; 2 weeks 5 days ago
Docs: man:nginx(8) Main PID: 656 (nginx)
Tasks: 5 (limit: 1595)
CPU: 1h 14min 5.971s
CGroup: /system.slice/nginx.service
├─ 656 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
├─2582 nginx: worker process
├─2583 nginx: worker process
├─2584 nginx: worker process
└─2585 nginx: worker process
May 14 09:13:26 proxy_host_name systemd[1]: Starting A high performance web server and a reverse proxy server...
May 14 09:13:28 proxy_host_name systemd[1]: Started A high performance web server and a reverse proxy server.
systemctl is-active nginx
active
'’You can use the exit value in your shell scripts as follows’’:
systemctl -q is-active nginx && echo "It is active, do something"
'’Probably system-dependent’’:
if [ -e /var/run/nginx.pid ]; then echo "nginx is running"; fi
nginx is running
'’You could use lsof to see what application is listening on port 80’’:
lsof -i TCP:80: Shows ‘‘what process uses a particular TCP port’’, here port 80
See previous note on lsof for more details and references.
'’None of the above answers worked for me so let me share my experience. I am running nginx in a docker container that has a port mapping (hostPort:containerPort) - 80:80 The above answers are giving me strange console output. Only the good old ‘nmap’ is working flawlessly even catching the nginx version. The command working for me is:
nmap -sV localhost -p 80
We are doing nmap using the -ServiceVersion switch on the localhost and port: 80. It works great for me. ‘’
'’Nmap is short for Network Mapper. It is an open-source Linux command-line tool that is used to scan IP addresses and ports in a network and to detect installed applications.
Nmap allows network admins to find which devices are running on their network, discover open ports and services, and detect vulnerabilities.’‘[1]
Regarding:
nmap -sV localhost -p 80
-p port ranges: ‘‘Only scan specified ports’‘[2]
-sV: ‘‘Version detection’‘[2]
Thus, it is looking at port 80 on localhost and trying to detect the version of the service using the port.
Starting Nmap 7.80 ( https://nmap.org ) at 2024-06-02 19:51 KST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00065s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE VERSION
80/tcp open http nginx 1.18.0
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 10.11 seconds
nmap -sV localhost -p 443
Output
Starting Nmap 7.80 ( https://nmap.org ) at 2024-06-02 19:52 KST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00066s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE VERSION
443/tcp open ssl/http nginx 1.18.0
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 16.89 seconds
nmap -sV localhost -p 8080
Output
Starting Nmap 7.80 ( https://nmap.org ) at 2024-06-02 19:52 KST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00062s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE VERSION
8080/tcp open http nginx 1.18.0
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 10.55 seconds
Example of a port not in use (and not ufw-allowed):
nmap -sV localhost -p 9001
Output
Starting Nmap 7.80 ( https://nmap.org ) at 2024-06-02 19:53 KST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00063s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE VERSION
9001/tcp closed tor-orport
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 4.19 seconds
'’UFW (uncomplicated firewall) is a firewall configuration tool that runs on top of iptables, included by default within Ubuntu distributions. It provides a streamlined interface for configuring common firewall use cases via the command line.’’
'’The default firewall configuration tool for Ubuntu is ufw. Developed to ease iptables firewall configuration, ufw provides a user friendly way to create an IPv4 or IPv6 host-based firewall. By default UFW is disabled.’’
You can either choose to allow, deny, etc:
by port/[optional:protocol]:
sudo ufw allow 80
sudo ufw allow 80/tcp
by service/application name:
sudo ufw allow http
On the Proxy server Pi:
sudo ufw allow 80/tcp
If you need to delete a rule, for instance, the one we just used:
sudo ufw delete allow 80/tcp
Note that ‘nginx’ only uses TCP regarding ports 80 and 443:
-a: ‘‘This option causes list selection options to be ANDed, as described above.’’
Indeed, without the -a option above, lsof will filter for -i4:80,443 OR -c nginx, thus including any results that fit one and/or the other. To only include results that fit both conditions at once, we need to ‘AND’ the options. For more details, see the man reference in the first lsof note.
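Put together, the ANDed query described above reads:

```shell
# IPv4 network files on ports 80/443 AND belonging to the nginx command;
# the -a in between ANDs the two selections
sudo lsof -Pn -i4:80,443 -a -c nginx
```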
Because we are reverse forwarding the relevant ports, there is no need to allow ports on the Workstation.
Tests such as:
curl http://localhost:8080
still work, as they are made locally, from behind the UFW firewall.
Similarly, because ports 8080, 8081, and 8008 are only used internally by NGINX or directly for the forwarding, and are not requested externally, they need not be allowed in the UFW firewall on the Proxy Pi.
Test
sudo ufw status
Status: active
To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
22 (v6)                    ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
Before sudo ufw allow 80/tcp on Pi:
$ nc -z -v app_clearml.blissfox.xyz 80
nc: connect to app_clearml.blissfox.xyz (182.226.43.201) port 80 (tcp) failed: Connection timed out
After sudo ufw allow 80/tcp on Pi:
$ nc -z -v app_clearml.blissfox.xyz 80
Connection to app_clearml.blissfox.xyz (182.226.43.201) 80 port [tcp/http] succeeded!
For insights on Netcat, have a look at the following sources:
As hinted by its name, netcat can do the same as cat (display a file content) but over a network.
For instance, say you want to transfer a file adrien.txt from one machine to another. Let’s use port 1234 (if you have a firewall enabled, you might have to open/allow the 1234/TCP port).
On the recipient machine, launch nc with the -l option to listen, and specify the port with the -p option:
nc -l -p 1234 > received_file.txt
On the sending machine, feed the file to transfer. Use the -w option to define a timeout just in case (in seconds). Specify both the recipient address (192.168.21.251) and port (1234):
nc -w 2 192.168.21.251 1234 < adrien.txt
The command returns instantaneously. On the recipient machine, the file has arrived, and the nc -l command terminates.
-z: ‘‘Specifies that nc should just scan for listening daemons, without sending any data to them. It is an error to use this option in conjunction with the -l option.’‘[1]
Using -l would be counterproductive: indeed, to listen on a port with nc -l, this port must be available, that is, not already in use:
blissfox-pi@blissfoxpi:~ $ nc -l 80
nc: Permission denied
blissfox-pi@blissfoxpi:~ $ sudo nc -l 80
nc: Address already in use
-l: ‘‘Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored.’‘[1]
'’The network statistics (netstat) command is a networking tool used for troubleshooting and configuration, that can also serve as a monitoring tool for connections over the network. Both incoming and outgoing connections, routing tables, port listening, and usage statistics are common uses for this command.’‘[1]
'’While in recent years netstat has been deprecated in favor of the ss command, you may still find it in your networking toolbox.’‘[1]
--numeric, -n: ‘‘Show numerical addresses instead of trying to determine symbolic host, port or user names.’‘[2]
-c, --continuous: ‘‘This will cause netstat to print the selected information every second continuously.’‘[2]
-p, --program: ‘‘Show the PID and name of the program to which each socket belongs.’‘[2]
--tcp, -t[2]
--udp, -u[2]
--raw, -w[2]
Note: though netstat has the -c option, it actually prints the command results one after the other instead of ‘refreshing’ (clearing and re-displaying). As a result, I would actually favour:
Configure NAT on the router to redirect ports 80 & 443 to the right target: Port forwarding settings

ON/OFF   Service port   Protocol   IP Address        Internal port
ON       80~80          TCP/IP     192.168.219.102   80
ON       443~443        TCP/IP     192.168.219.102   443
Since these need not receive external requests, there is also no need to forward ports 8080, 8081, and 8008 on the router to the same ports on the Proxy Pi.
Set OVH blissfox.xyz to redirect all the required URLs:

Domain                       Service
app_clearml.blissfox.xyz     app/webserver
api_clearml.blissfox.xyz     api
files_clearml.blissfox.xyz   files

to the IP address of proxy.blissfox.xyz: OVH > Web Cloud > Domain names > [your_domain_name] (e.g. blissfox.xyz) > DNS zone > Add an entry, and fill it in accordingly:
Following the Certbot Instructions – My HTTP website is running Nginx on Ubuntu, with two additions from other sources (installing snapd on the Pi, and details about the prompts when using certbot), the results I obtained with the certbot command, plus a final additional step and note (my own contributions are identified by [RR]):
Install the core snap in order to get the latest snapd:
$ sudo snap install core
core 16-2.45.2 from Canonical✓ installed
Some snaps require new snapd features and will show an error such as "snap "lxd" assumes unsupported features" during install. You can solve this issue by making sure the core snap is installed (snap install core) and it’s the latest version (snap refresh core).
To test your system, install the hello-world snap and make sure it runs correctly:
Ubuntu Core on the Raspberry Pi: Snap is an integral part of Ubuntu Core, which can be installed as the native Raspberry Pi operating system. Ubuntu Core provides more permissive access to the Raspberry Pi, and may enable functionality not easily mirrored when snap is installed from Raspberry Pi OS. A good example of this is low-level access to a Raspberry Pi’s GPIO pins.
Remove certbot-auto and any Certbot OS packages
If you have any Certbot packages installed using an OS package manager like apt, dnf, or yum, you should remove them before installing the Certbot snap to ensure that when you run the command certbot the snap is used rather than the installation from your OS package manager. The exact command to do this depends on your OS, but common examples are sudo apt-get remove certbot, sudo dnf remove certbot, or sudo yum remove certbot.
Install Certbot
Run this command on the command line on the machine to install Certbot.
sudo snap install --classic certbot
Prepare the Certbot command
Execute the following instruction on the command line on the machine to ensure that the certbot command can be run.
sudo ln -s /snap/bin/certbot /usr/bin/certbot
Choose how you’d like to run Certbot
Either get and install your certificates…
Run this command to get a certificate and have Certbot edit your nginx configuration automatically to serve it, turning on HTTPS access in a single step.
sudo certbot --nginx
Or, just get a certificate
If you’re feeling more conservative and would like to make the changes to your nginx configuration by hand, run this command.
sudo certbot certonly --nginx
’’ During the certificate granting process, Certbot asks a series of questions about the domain so it can properly request the certificate. You must agree to the terms of service and provide a valid administrative email address. Depending upon the server configuration, the messages displayed by Certbot might differ somewhat from what is shown here.
Run Certbot to start the certificate request. When Certbot runs, it requests and installs certificate file along with a private key file. When used with the NGINX plugin (--nginx), Certbot also automatically edits the configuration files for NGINX, which dramatically simplifies configuring HTTPS for your web server. If you prefer to manually adjust the configuration files, you can run Certbot using the certonly command.
Request a certificate and automatically configure it on NGINX (recommended):
sudo certbot --nginx
Request a certificate without configuring NGINX:
sudo certbot certonly --nginx
To request the certificate without relying on your NGINX installation, you can instead use the standalone plugin (--standalone).
During the installation process, Certbot will prompt you for some basic information including your email address and domain name.
Enter email address. The first prompt is to request an email address where Certbot can send urgent notices about the domain or registration. This should be the address of the web server administrator.
Accept terms of service. Certbot next asks you to agree to the Let’s Encrypt terms of service. Use the link in the output to download the PDF file and review the document. If you agree with the terms, enter Y. Entering N terminates the certificate request.
Optionally subscribe to mailing list. Certbot asks if you want to subscribe to the EFF mailing list. You can answer either Y or N without affecting the rest of the installation.
Enter domain name(s). Certbot now requests a domain name for the certificate. If there is a virtual host file for the domain, Certbot displays the names of the eligible domains. Select the numbers corresponding to the domains you are requesting certificates for, separated by spaces. If the domain doesn’t appear, you can enter the name for each domain without the http or https prefix. For each domain name, you should request separate certificates with and without the www prefix. If you have more than one domain to certify, separate the names with either a space or a comma.
www.example.com example.com
Certbot displays the names of domains configured in the server blocks within NGINX. Select the numbers corresponding to the domains you are requesting certificates for, separated by spaces.
Certbot then communicates with Let’s Encrypt to request the certificate(s) and perform any necessary challenges as defined in the ACME standard (see Challenge Types). In most cases, ownership can be proven through the HTTP challenge, which automatically adds a file on your web server. If you wish to change the challenge type or perform challenge manually, see the Manual section in the Certbot documentation.
If the operation is successful, Certbot confirms the certificates are enabled. It also displays some information about the directories where the certificates and key chains are stored, along with the expiration date. Certificates typically expire in 90 days. ‘’
[RR]
The first option, sudo certbot --nginx, is recommended and actually worked flawlessly: it identified the already-configured addresses and correctly modified /etc/nginx/sites-enabled/clearml.conf to switch from port 80 to port 443 with the newly generated certificates.
blissfox-pi@blissfoxpi:~ $ sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Enter email address (used for urgent renewal and security notices)(Enter 'c' to cancel): renardromain@hotmail.fr
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.4-April-3-2024.pdf. You must agree in
order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: N
Account registered.
Which names would you like to activate HTTPS for?
We recommend selecting either all domains, or all domains in a VirtualHost/server block.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: api_clearml.blissfox.xyz
2: app_clearml.blissfox.xyz
3: files_clearml.blissfox.xyz
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):
Requesting a certificate for api_clearml.blissfox.xyz and 2 more domains
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/api_clearml.blissfox.xyz/fullchain.pem
Key is saved at: /etc/letsencrypt/live/api_clearml.blissfox.xyz/privkey.pem
This certificate expires on 2024-08-11.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.
Deploying certificate
Successfully deployed certificate for api_clearml.blissfox.xyz to /etc/nginx/sites-enabled/clearml.conf
Successfully deployed certificate for app_clearml.blissfox.xyz to /etc/nginx/sites-enabled/clearml.conf
Successfully deployed certificate for files_clearml.blissfox.xyz to /etc/nginx/sites-enabled/clearml.conf
Congratulations! You have successfully enabled HTTPS on https://api_clearml.blissfox.xyz, https://app_clearml.blissfox.xyz, and https://files_clearml.blissfox.xyz
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
* Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
* Donating to EFF: https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Your file /etc/nginx/sites-available/clearml.conf should now look like:
server_names_hash_bucket_size 64;

server {
    server_name app_clearml.blissfox.xyz;

    location / {
        proxy_pass http://localhost:8080;
    }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/api_clearml.blissfox.xyz/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api_clearml.blissfox.xyz/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    server_name api_clearml.blissfox.xyz;

    location / {
        proxy_pass http://localhost:8008;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/api_clearml.blissfox.xyz/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api_clearml.blissfox.xyz/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    server_name files_clearml.blissfox.xyz;

    location / {
        proxy_pass http://localhost:8081;
    }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/api_clearml.blissfox.xyz/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api_clearml.blissfox.xyz/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = api_clearml.blissfox.xyz) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name api_clearml.blissfox.xyz;
    return 404; # managed by Certbot
}

server {
    if ($host = app_clearml.blissfox.xyz) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name app_clearml.blissfox.xyz;
    return 404; # managed by Certbot
}

server {
    if ($host = files_clearml.blissfox.xyz) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name files_clearml.blissfox.xyz;
    return 404; # managed by Certbot
}
Test automatic renewal
The Certbot packages on your system come with a cron job or systemd timer that will renew your certificates automatically before they expire. You will not need to run Certbot again, unless you change your configuration. You can test automatic renewal for your certificates by running this command:
sudo certbot renew --dry-run
The command to renew certbot is installed in one of the following locations:
/etc/crontab/
/etc/cron.*/*
systemctl list-timers
Confirm that Certbot worked
To confirm that your site is set up properly, visit https://yourwebsite.com/ in your browser and look for the lock icon in the URL bar.
[RR] Do not forget to allow HTTPS through UFW:
sudo ufw allow https
[RR]
Do not forget to modify the api section of clearml.conf accordingly, that is, changing http to https in the server URLs.
Forgetting to do so will result in errors such as:
Error: Action failed <400/12: tasks.create/v1.0 (Validation error (error for field 'name'. field is required!))
When attempting to create/init a clearml-task.
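For reference, the api section of the client-side clearml.conf would then point at the HTTPS endpoints. This is a sketch assuming the hostnames used throughout this post; your credentials will of course differ:

```
api {
    web_server: https://app_clearml.blissfox.xyz
    api_server: https://api_clearml.blissfox.xyz
    files_server: https://files_clearml.blissfox.xyz
    credentials {
        "access_key" = "<YOUR_ACCESS_KEY>"
        "secret_key" = "<YOUR_SECRET_KEY>"
    }
}
```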
Test
Check out your firewall on Proxy Pi:
$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443                        ALLOW       Anywhere
22 (v6)                    ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
Check out that https is working:
With curl:
curling the http address:
curl http://app_clearml.blissfox.xyz
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.18.0</center>
</body>
</html>
will lead you to a standard ‘error’ message.
Actually, there is a redirection to the https address. To follow the redirection using curl, you can use:
curl --location http://app_clearml.blissfox.xyz
Or you can directly curl the https address:
curl https://app_clearml.blissfox.xyz
These last two should lead you to the same page starting with:
You can also use hashed passwords instead of plain-text passwords. To do that:
Set pass_hashed: true
Use a base64-encoded hashed password in the password field instead of a plain-text password. Assuming Jane’s plain-text password is 123456, use the following bash command to generate the base64-encoded hashed password:
Use the command’s output as the user’s password. Resulting apiserver.conf file should look as follows:
auth {
    # Fixed users login credentials
    # No other user will be able to login
    fixed_users {
        enabled: true
        pass_hashed: true
        users: [
            {
                username: "jane"
                password: "JDJiJDEyJDk3OHBFcHFlNEsxTkFoZDlPcGZsbC5sU1pmM3huZ1RpeHc0ay5WUjlzTzN5WE1WRXJrUmhp"
                name: "Jane Doe"
            }
        ]
    }
}
If the apiserver.conf file does not exist, create it in ClearML Server’s /opt/clearml/config directory (or an alternate folder you configured), and input the modified configuration.
’’
Note that bcrypt is not a Python standard package, and thus has to be installed:
Pypi bcrypt: ‘‘Acceptable password hashing for your software and your servers (but you should really use argon2id or scrypt)’’
'’While bcrypt remains an acceptable choice for password storage, depending on your specific use case you may also want to consider using scrypt (either via standard library or cryptography) or argon2id via argon2_cffi.’’
Dependencies:
sudo apt-get install build-essential cargo
It seems that installing cargo is actually not required for the code snippet to work (tested in a ubuntu:jammy docker container with apt update, then installing python3.10 and python3-pip, and pip install bcrypt).
Cargo (e.g. jammy pkg): Rust package manager
’’ By default, ClearML Server comes with default values that are designed to allow to set it up quickly and to start working with the ClearML SDK.
However, this also means that the server must be secured by either preventing any external access, or by changing defaults so that the server’s credentials are not publicly known.
The ClearML Server default secrets can be found here, and can be changed using the secure.conf configuration file or using environment variables (see ClearML Server Feature Configurations).
Specifically, the relevant settings are:
secure.http.session_secret.apiserver
secure.auth.token_secret
secure.credentials.apiserver.user_key
secure.credentials.apiserver.user_secret
secure.credentials.webserver.user_key (automatically revoked by the server if using Web Login Authentication)
secure.credentials.webserver.user_secret (automatically revoked by the server if using Web Login Authentication)
secure.credentials.tests.user_key
secure.credentials.tests.user_secret
Securing the ClearML Server means also using Web Login Authentication, since the default “free access” login is inherently unsecure (and will not work once secure.credentials.webserver.user_key and secure.credentials.webserver.user_secret values are changed)
Example: Using Environment Variables
To set new values for these settings, use the following environment variables:
When generating new user keys and secrets, make sure to use sufficiently long strings (we use 30 chars for keys and 50-60 chars for secrets). See here for Python example code to generate these strings.
’’
{
    http {
        session_secret {
            apiserver: "Gx*gB-L2U8!Naqzd#8=7A4&+=In4H(da424H33ZTDQRGF6=FWw"
        }
    }
    auth {
        # token sign secret
        token_secret: "7E1ua3xP9GT2(cIQOfhjp+gwN6spBeCAmN-XuugYle00I=Wc+u"
    }
    credentials {
        # system credentials as they appear in the auth DB, used for intra-service communications
        apiserver {
            role: "system"
            user_key: "62T8CP7HGBC6647XF9314C2VY67RJO"
            user_secret: "FhS8VZv_I4%6Mo$8S1BWc$n$=o1dMYSivuiWU-Vguq7qGOKskG-d+b@tn_Iq"
        }
        webserver {
            role: "system"
            user_key: "EYVQ385RW7Y2QQUH88CZ7DWIQ1WUHP"
            user_secret: "yfc8KQo*GMXb*9p((qcYC7ByFIpF7I&4VH3BfUYXH%o9vX1ZUZQEEw1Inc)S"
            revoke_in_fixed_mode: true
        }
        services_agent {
            role: "admin"
            user_key: "P4BMJA7RK3TKBXGSY8OAA1FA8TOD11"
            user_secret: "9LsgSfa0SYz0zli1_c500ZcLqanre2xkWOpepyt1w-BKK3_DKPHrtoj3JSHvyy8bIi0"
        }
        tests {
            role: "user"
            display_name: "Default User"
            user_key: "EGRTCO8JMSIGI6S39GTP43NFWXDQOW"
            user_secret: "x!XTov_G-#vspE*Y(h$Anm&DIc5Ou-F)jsl$PdOyj5wG1&E!Z8"
            revoke_in_fixed_mode: true
        }
    }
}
import random
import string

sys_random = random.SystemRandom()


def get_random_string(length: int = 12, allowed_chars: str = string.ascii_letters + string.digits) -> str:
    """
    Returns a securely generated random string.

    The default length of 12 with the a-z, A-Z, 0-9 character set returns
    a 71-bit value. log_2((26+26+10)^12) =~ 71 bits.

    Taken from the django.utils.crypto module.
    """
    return "".join(sys_random.choice(allowed_chars) for _ in range(length))


def get_client_id(length: int = 20) -> str:
    """
    Create a random secret key.

    Taken from the Django project.
    """
    chars = string.ascii_uppercase + string.digits
    return get_random_string(length, chars)


def get_secret_key(length: int = 50) -> str:
    """
    Create a random secret key.

    Taken from the Django project.
    NOTE: asterisk is not supported due to issues with environment variables containing
    asterisks (in case the secret key is stored in an environment variable)
    """
    chars = string.ascii_letters + string.digits
    return get_random_string(length, chars)
’’ The ClearML Server uses the following configuration files:
apiserver.conf
hosts.conf
logging.conf
secure.conf
services.conf
When starting up, the ClearML Server will look for these configuration files, in the /opt/clearml/config directory (this path can be modified using the CLEARML_CONFIG_DIR environment variable). The default configuration files are in the clearml-server repository.
If you want to modify server configuration, and the relevant configuration file doesn’t exist, you can create the file, and input the relevant modified configuration.
Within the default structure, the services.conf file is represented by a subdirectory with service-specific .conf files. If services.conf is used to configure the server, any setting related to a file under the services subdirectory can simply be represented by a key within the services.conf file. For example, to override multi_task_histogram_limit that appears in the default/services/tasks.conf, the services.conf file should contain:
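The elided override presumably looks like the following fragment (a sketch: the key path mirrors the default services/tasks.conf, but the value 100 is an arbitrary assumption):

```
tasks {
    multi_task_histogram_limit: 100
}
```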
ClearML is able to interface with the most popular storage solutions in the market for storing model checkpoints, artifacts and charts.
Supported storage mediums include:
To use cloud storage with ClearML, install the clearml package for your cloud storage type, and then configure your storage credentials.
When uploading an object to a storage medium, each machine that uses the object must have access to it.
’’
Among the supported storage mediums, Amazon S3, Google Cloud Storage, and Azure Storage (Microsoft) are well-known proprietary cloud-based storage solutions.
Let’s have a look at the two remaining ones:
Ceph
’’
Original author(s):  Inktank Storage (Sage Weil, Yehuda Sadeh Weinraub, Gregory Farnum, Josh Durgin, Samuel Just, Wido den Hollander)
Developer(s):        Red Hat, Intel, CERN, Cisco, Fujitsu, SanDisk, Canonical and SUSE
Stable release:      18.2.0 (Reef) / 3 August 2023
Repository:          github.com/ceph/ceph
Written in:          C++, Python
Operating system:    Linux, FreeBSD, Windows
Type:                Distributed object store
License:             LGPLv2.1
Website:             ceph.io
Ceph (pronounced /ˈsɛf/) is a free and open-source software-defined storage platform that provides object storage, block storage, and file storage built on a common distributed cluster foundation. Ceph provides completely distributed operation without a single point of failure and scalability to the exabyte level, and is freely available. Since version 12 (Luminous), Ceph does not rely on any other conventional filesystem and directly manages HDDs and SSDs with its own storage backend BlueStore and can expose a POSIX filesystem.
Ceph replicates data with fault tolerance, using commodity hardware and Ethernet IP and requiring no specific hardware support. Ceph is highly available and ensures strong data durability through techniques including replication, erasure coding, snapshots and clones. By design, the system is both self-healing and self-managing, minimizing administration time and other costs.
Large-scale production Ceph deployments include CERN, OVH and DigitalOcean.
MinIO

Stable release:      2024-02-14T21-36-02Z / 14 February 2024
Repository:          github.com/minio/minio
Written in:          Go
Type:                Object storage
License:             GNU Affero GPL
Website:             min.io
MinIO is a High-Performance Object Storage system released under GNU Affero General Public License v3.0. It is API compatible with the Amazon S3 cloud storage service. It is capable of working with unstructured data such as photos, videos, log files, backups, and container images with the maximum supported object size being 50TB.
History & development
MinIO’s main developer is MinIO Inc, a Silicon Valley–based technology startup founded by Anand Babu Periasamy, Garima Kapoor, and Harshavardhana in November 2014.
Please do not hesitate to leave a comment below or dm me if you have any feedback, whether it is about a mistake I made in the articles or suggestions for improvements.