# Compatibility Notes
- libopenblas
- cURL
- Wget2
- Nginx
- iPerf 2
- iPerf 3
- Jetty
- etcd (distributed key-value store)
- CTorrent and opentracker
- http-server
## libopenblas
libopenblas is a fairly low-level library, and can get pulled in transitively via dependencies. For example, tgen uses libigraph, which links against liblapack, which in turn links against libblas.

When compiled with pthread support, libopenblas uses busy-loops in its worker threads.
There are several known workarounds:

- Use Shadow's `--model-unblocked-syscall-latency` feature. See busy-loops for details and caveats.
- Use a different implementation of libblas. e.g. on Ubuntu, there are several alternative packages that can provide libblas. In particular, `libblas3` doesn't have this issue.
- Install libopenblas compiled without pthread support. e.g. on Ubuntu this can be obtained by installing `libopenblas0-serial` instead of `libopenblas0-pthread`.
- Configure libopenblas to not use threads at runtime. This can be done by setting the environment variable `OPENBLAS_NUM_THREADS=1` in the process's `environment` attribute in the Shadow config. Example: tor-minimal.yaml:109
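The runtime workaround can be sketched as a Shadow config fragment along these lines (the host and program names are hypothetical, and the exact shape of the `environment` attribute may vary between Shadow versions, so check the configuration reference for yours):

```yaml
hosts:
  worker:
    network_node_id: 0
    processes:
    - path: my-blas-app   # hypothetical program that links libopenblas
      # Keep libopenblas single-threaded at runtime, avoiding its
      # pthread worker busy-loops.
      environment:
        OPENBLAS_NUM_THREADS: "1"
```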
## cURL

### Example
```yaml
general:
  stop_time: 10s
  model_unblocked_syscall_latency: true
network:
  graph:
    type: 1_gbit_switch
hosts:
  server:
    network_node_id: 0
    processes:
    - path: python3
      args: -m http.server 80
      start_time: 0s
      expected_final_state: running
  client1: &client_host
    network_node_id: 0
    processes:
    - path: curl
      args: -s server
      start_time: 2s
  client2: *client_host
  client3: *client_host
```
```bash
rm -rf shadow.data; shadow shadow.yaml > shadow.log
cat shadow.data/hosts/client1/curl.1000.stdout
```
### Notes

- Older versions of cURL use a busy loop that is incompatible with Shadow and will cause Shadow to deadlock. `model_unblocked_syscall_latency` works around this (see busy-loops). Newer versions of cURL, such as the version provided in Ubuntu 20.04, don't have this issue. See issue #1794 for details.
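For background, the kind of busy loop at issue can be illustrated in a few lines (this is an illustrative Python sketch, not cURL's actual code): a spin-wait makes no blocking syscalls, so Shadow has no point at which to advance simulated time, whereas a blocking wait does.

```python
import threading
import time

done = threading.Event()

def worker():
    # Simulate some background work completing.
    time.sleep(0.01)
    done.set()

threading.Thread(target=worker).start()

# Spin-wait: no blocking syscall in the loop body. On a real kernel the
# scheduler preempts us and the worker eventually runs; under Shadow,
# simulated time advances only at blocking syscalls, so a loop like this
# deadlocks unless model_unblocked_syscall_latency is enabled.
while not done.is_set():
    pass

# Blocking wait: parks the thread in a futex syscall, which Shadow can
# use as a point to advance simulated time.
done.wait()
print("done")
```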
## Wget2

### Example
```yaml
general:
  stop_time: 10s
network:
  graph:
    type: 1_gbit_switch
hosts:
  server:
    network_node_id: 0
    processes:
    - path: python3
      args: -m http.server 80
      start_time: 0s
      expected_final_state: running
  client1: &client_host
    network_node_id: 0
    processes:
    - path: wget2
      args: --no-tcp-fastopen server
      start_time: 2s
  client2: *client_host
  client3: *client_host
```
```bash
rm -rf shadow.data; shadow shadow.yaml > shadow.log
cat shadow.data/hosts/client1/index.html
```
### Notes

- Shadow doesn't support `TCP_FASTOPEN`, so you must run Wget2 with the `--no-tcp-fastopen` option.
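For reference, `TCP_FASTOPEN` is a Linux TCP socket option. The sketch below shows the server-side variant on a real kernel; Wget2, as a client, uses the related fast-open connect path, and the exact option it sets may differ by platform.

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
# Allow up to 5 pending fast-open requests. On a real Linux kernel this
# succeeds; Shadow doesn't implement TCP_FASTOPEN, which is why Wget2
# must be told not to attempt fast open at all.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 5)
s.listen()
s.close()
```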
## Nginx

### Example

#### shadow.yaml
```yaml
general:
  stop_time: 10s
network:
  graph:
    type: 1_gbit_switch
hosts:
  server:
    network_node_id: 0
    processes:
    - path: nginx
      args: -c ../../../nginx.conf -p .
      start_time: 0s
      expected_final_state: running
  client1: &client_host
    network_node_id: 0
    processes:
    - path: curl
      args: -s server
      start_time: 2s
  client2: *client_host
  client3: *client_host
```
#### nginx.conf
```nginx
error_log stderr;

# shadow wants to run nginx in the foreground
daemon off;

# shadow doesn't support some syscalls that nginx uses to set up and control
# worker child processes.
# https://github.com/shadow/shadow/issues/3174
master_process off;
worker_processes 0;

# don't use the system pid file
pid nginx.pid;

events {
  # we're not using any workers, so this is the maximum number
  # of simultaneous connections we can support
  worker_connections 1024;
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # shadow does not support sendfile()
  sendfile off;

  access_log off;

  server {
    listen 80;

    location / {
      root /var/www/html;
      index index.nginx-debian.html;
    }
  }
}
```
```bash
rm -rf shadow.data; shadow shadow.yaml > shadow.log
cat shadow.data/hosts/client1/curl.1000.stdout
```
### Notes

- Shadow currently doesn't support some syscalls that nginx uses to set up and control worker child processes, so you must disable additional processes using `master_process off` and `worker_processes 0`. See https://github.com/shadow/shadow/issues/3174.
- Shadow doesn't support `sendfile()`, so you must disable it using `sendfile off`.
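For context, `sendfile()` copies data between file descriptors inside the kernel, with no userspace buffer; with `sendfile off`, nginx falls back to ordinary `read()`/`write()`. A minimal sketch of the call itself (file-to-file, which Linux has allowed since 2.6.33):

```python
import os
import tempfile

# Source file with some payload.
src = tempfile.TemporaryFile()
src.write(b"hello from sendfile\n")
src.flush()

dst = tempfile.TemporaryFile()

# In-kernel copy: this is the syscall nginx uses when `sendfile on`
# is set, and the one Shadow doesn't implement.
sent = os.sendfile(dst.fileno(), src.fileno(), 0, 1024)

dst.seek(0)
data = dst.read()
```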
## iPerf 2

### Example
```yaml
general:
  stop_time: 10s
network:
  graph:
    type: 1_gbit_switch
hosts:
  server:
    network_node_id: 0
    processes:
    - path: iperf
      args: -s
      start_time: 0s
      expected_final_state: running
  client:
    network_node_id: 0
    processes:
    - path: iperf
      args: -c server -t 5
      start_time: 2s
```
```bash
rm -rf shadow.data; shadow shadow.yaml > shadow.log
```
### Notes

- You must use an iPerf 2 version >= `2.1.1`. Older versions of iPerf 2 have a no-syscall busy loop that is incompatible with Shadow.
## iPerf 3

### Example
```yaml
general:
  stop_time: 10s
  model_unblocked_syscall_latency: true
network:
  graph:
    type: 1_gbit_switch
hosts:
  server:
    network_node_id: 0
    processes:
    - path: iperf3
      args: -s --bind 0.0.0.0
      start_time: 0s
      # Tell shadow to expect this process to still be running at the end of the
      # simulation.
      expected_final_state: running
  client:
    network_node_id: 0
    processes:
    - path: iperf3
      args: -c server -t 5
      start_time: 2s
```
```bash
rm -rf shadow.data; shadow shadow.yaml > shadow.log
```
### Notes

- By default the iPerf 3 server binds to an IPv6 address, but Shadow doesn't support IPv6. Instead you need to bind the server to an IPv4 address such as 0.0.0.0.
- The iPerf 3 server exits with a non-zero error code and the message "unable to start listener for connections: Address already in use" after the client disconnects. This is likely due to Shadow not supporting the `SO_REUSEADDR` socket option.
- iPerf 3 uses a busy loop that is incompatible with Shadow and will cause Shadow to deadlock. A workaround is to use the `model_unblocked_syscall_latency` option.
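For context, `SO_REUSEADDR` is the standard way a server rebinds a recently used port. A minimal sketch of the pattern on a real kernel (the exact failure mode under Shadow may differ):

```python
import socket

# First server instance: bind, listen, then shut down.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s1.bind(("127.0.0.1", 0))
port = s1.getsockname()[1]
s1.listen()
s1.close()

# Second instance rebinds the same port. With SO_REUSEADDR this succeeds
# even if old connections linger in TIME_WAIT; without the option taking
# effect (as appears to happen under Shadow), bind() can fail with
# EADDRINUSE -- the "Address already in use" error iPerf 3 reports.
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s2.bind(("127.0.0.1", port))
s2.close()
```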
## Jetty
Running Jetty with the http module works, but we haven't tested anything more than this.
### Example

#### shadow.yaml
```yaml
general:
  stop_time: 10s
network:
  graph:
    type: 1_gbit_switch
hosts:
  server:
    network_node_id: 0
    processes:
    - path: java
      args: -jar ../../../jetty-home-12.0.12/start.jar jetty.http.port=80 --modules=http
      expected_final_state: running
  client1: &client_host
    network_node_id: 0
    processes:
    - path: curl
      args: -s server
      start_time: 2s
  client2: *client_host
  client3: *client_host
```
```bash
if [ ! -d jetty-home-12.0.12/ ]; then
  wget https://repo1.maven.org/maven2/org/eclipse/jetty/jetty-home/12.0.12/jetty-home-12.0.12.zip
  echo "2dc2c60a8a3cb84df64134bed4df1c45598118e9a228604eaeb8b9b42d80bc07  jetty-home-12.0.12.zip" | sha256sum -c
  unzip -q jetty-home-12.0.12.zip && rm jetty-home-12.0.12.zip
fi
rm -rf shadow.data; shadow shadow.yaml > shadow.log
cat shadow.data/hosts/client1/curl.1000.stdout
```
## etcd (distributed key-value store)

### Example

Example for etcd version 3.3.x.
```yaml
general:
  stop_time: 30s
network:
  graph:
    type: gml
    inline: |
      graph [
        node [
          id 0
          host_bandwidth_down "20 Mbit"
          host_bandwidth_up "20 Mbit"
        ]
        edge [
          source 0
          target 0
          latency "150 ms"
          packet_loss 0.01
        ]
      ]
hosts:
  server1:
    network_node_id: 0
    processes:
    - path: etcd
      args:
        --name server1
        --log-output=stdout
        --initial-cluster-token etcd-cluster-1
        --initial-cluster 'server1=http://server1:2380,server2=http://server2:2380,server3=http://server3:2380'
        --listen-client-urls http://0.0.0.0:2379
        --advertise-client-urls http://server1:2379
        --listen-peer-urls http://0.0.0.0:2380
        --initial-advertise-peer-urls http://server1:2380
      expected_final_state: running
    - path: etcdctl
      args: set my-key my-value
      start_time: 10s
  server2:
    network_node_id: 0
    processes:
    - path: etcd
      # each etcd peer must have a different start time
      # https://github.com/shadow/shadow/issues/2858
      start_time: 1ms
      args:
        --name server2
        --log-output=stdout
        --initial-cluster-token etcd-cluster-1
        --initial-cluster 'server1=http://server1:2380,server2=http://server2:2380,server3=http://server3:2380'
        --listen-client-urls http://0.0.0.0:2379
        --advertise-client-urls http://server2:2379
        --listen-peer-urls http://0.0.0.0:2380
        --initial-advertise-peer-urls http://server2:2380
      expected_final_state: running
    - path: etcdctl
      args: get my-key
      start_time: 12s
  server3:
    network_node_id: 0
    processes:
    - path: etcd
      start_time: 2ms
      args:
        --name server3
        --log-output=stdout
        --initial-cluster-token etcd-cluster-1
        --initial-cluster 'server1=http://server1:2380,server2=http://server2:2380,server3=http://server3:2380'
        --listen-client-urls http://0.0.0.0:2379
        --advertise-client-urls http://server3:2379
        --listen-peer-urls http://0.0.0.0:2380
        --initial-advertise-peer-urls http://server3:2380
      expected_final_state: running
    - path: etcdctl
      args: get my-key
      start_time: 12s
```
```bash
rm -rf shadow.data; shadow shadow.yaml > shadow.log
cat shadow.data/hosts/*/etcdctl.*.stdout
```
### Notes

- The etcd binary must not be statically linked. You can build a dynamically linked version by replacing `CGO_ENABLED=0` with `CGO_ENABLED=1` in etcd's `scripts/build.sh` and `scripts/build_lib.sh` scripts. The etcd packages included in the Debian and Ubuntu APT repositories are dynamically linked, so they can be used directly.
- Each etcd peer must be started at a different time, since etcd uses the current time as an RNG seed. See issue #2858 for details.
- If using an etcd version greater than 3.5.4, you must build etcd from source and comment out the keepalive period assignment, as Shadow does not support this.
## CTorrent and opentracker

### Example
```yaml
general:
  stop_time: 60s
network:
  graph:
    type: 1_gbit_switch
hosts:
  tracker:
    network_node_id: 0
    processes:
    - path: opentracker
      # Tell shadow to expect this process to still be running at the end of the
      # simulation.
      expected_final_state: running
  uploader:
    network_node_id: 0
    processes:
    - path: cp
      args: ../../../foo .
      start_time: 10s
    # Create the torrent file
    - path: ctorrent
      args: -t foo -s example.torrent -u http://tracker:6969/announce
      start_time: 11s
    # Serve the torrent
    - path: ctorrent
      args: example.torrent
      start_time: 12s
      expected_final_state: running
  downloader1: &downloader_host
    network_node_id: 0
    processes:
    # Download and share the torrent
    - path: ctorrent
      args: ../uploader/example.torrent
      start_time: 30s
      expected_final_state: running
  downloader2: *downloader_host
  downloader3: *downloader_host
  downloader4: *downloader_host
  downloader5: *downloader_host
```
```bash
echo "bar" > foo
rm -rf shadow.data; shadow shadow.yaml > shadow.log
cat shadow.data/hosts/downloader1/foo
```
### Notes

- Shadow must be run as a non-root user, since opentracker will attempt to drop privileges if it detects that the effective user is root.
## http-server

### Example
```yaml
general:
  stop_time: 10s
  model_unblocked_syscall_latency: true
network:
  graph:
    type: 1_gbit_switch
hosts:
  server:
    network_node_id: 0
    processes:
    - path: node
      args: /usr/local/bin/http-server -p 80 -d
      start_time: 3s
      expected_final_state: running
  client:
    network_node_id: 0
    processes:
    - path: curl
      args: -s server
      start_time: 5s
```
```bash
rm -rf shadow.data; shadow shadow.yaml > shadow.log
cat shadow.data/hosts/client/curl.1000.stdout
```
### Notes

- Either the Node.js runtime or http-server uses a busy loop that is incompatible with Shadow and will cause Shadow to deadlock. `model_unblocked_syscall_latency` works around this (see busy-loops).