# ANNEX 1: Troubleshooting
## How to know the version of your current OSM installation
Run the following command to find out the versions of the OSM client and the OSM NBI:
```bash
osm version
```
In some circumstances, it can be useful to check the `osm-devops` package installed in your system, since `osm-devops` is the package used to drive installations:
```bash
dpkg -l osm-devops
||/ Name                   Version           Architecture          Description
+++-======================-=================-=====================-=====================================
ii  osm-devops             7.0.0-1           all
```
To know the current version of the OSM client, you can also check the `python3-osmclient` package:
```bash
dpkg -l python3-osmclient
||/ Name                   Version           Architecture          Description
+++-======================-=================-=====================-=====================================
ii  python3-osmclient      7.0.0-1           all
```
## Recommended installation to facilitate troubleshooting
It is highly recommended to save a log of your installation:
```bash
$ ./install_osm.sh 2>&1 | tee osm_install_log.txt
```
## Common issues and troubleshooting
### Add User in Group
Add the non-root user used for the installation to the *sudo*, *lxd* and *docker* groups.
This avoids the following error:
_Finished installation of juju_ Password: **sg: failed to crypt password with previous salt: Invalid argument** ERROR No controllers registered.
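A minimal way to do this from a shell, assuming the *sudo*, *lxd* and *docker* groups already exist on the host:

```shell
# Add the installing (non-root) user to the required groups
# (assumes the sudo, lxd and docker groups already exist)
sudo usermod -a -G sudo,lxd,docker "$(id -un)"
# Membership takes effect on the next login; verify with:
id -nG "$(id -un)"
```

Remember to log out and log back in (or use `newgrp`) before re-running the installer, so that the new group membership is picked up.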
### Docker
#### Were all docker images successfully built?
Although controlled by the installer, you can check that the following images exist:
```bash
$ docker image ls
REPOSITORY               TAG      IMAGE ID       CREATED         SIZE
osm/light-ui             latest   1988aa262a97   18 hours ago    710MB
osm/lcm                  latest   c9ad59bf96aa   46 hours ago    667MB
osm/ro                   latest   812c987fcb16   46 hours ago    791MB
osm/nbi                  latest   584b4e0084a7   46 hours ago    497MB
osm/pm                   latest   1ad1e4099f52   46 hours ago    462MB
osm/mon                  latest   b17efa3412e3   46 hours ago    725MB
wurstmeister/kafka       latest   7cfc4e57966c   10 days ago     293MB
mysql                    5        0d16d0a97dd1   2 weeks ago     372MB
mongo                    latest   14c497d5c758   3 weeks ago     366MB
wurstmeister/zookeeper   latest   351aa00d2fe9   18 months ago   478MB
```
#### Are all processes/services running?
```bash
$ docker stack ps osm |grep -i running
```
10 docker containers should be running, and all 10 services should have at least 1 replica (1/1):
```bash
$ docker service ls
ID             NAME            MODE         REPLICAS   IMAGE                           PORTS
yuyiqh8ty8pv   osm_kafka       replicated   1/1        wurstmeister/kafka:latest       *:9092->9092/tcp
y585906h5vy5   osm_lcm         replicated   1/1        osm/lcm:latest
pcdi5vb86nt9   osm_light-ui    replicated   1/1        osm/light-ui:latest             *:80->80/tcp
i56jhl5k6re4   osm_mon         replicated   1/1        osm/mon:latest                  *:8662->8662/tcp
p5wyjtne93hp   osm_mongo       replicated   1/1        mongo:latest
iz5uncfdzu23   osm_nbi         replicated   1/1        osm/nbi:latest                  *:9999->9999/tcp
4ttw2v4z2g57   osm_pm          replicated   1/1        osm/pm:latest
xbg6bclp2anw   osm_ro          replicated   1/1        osm/ro:latest                   *:9090->9090/tcp
sf7rayfolncu   osm_ro-db       replicated   1/1        mysql:5
5bl73dhj1xl0   osm_zookeeper   replicated   1/1        wurstmeister/zookeeper:latest
```
#### Docker image failed to build
##### Err:1 `http://archive.ubuntu.com/ubuntu xenial InRelease`
In some cases, DNS resolution works on the host but fails when building the Docker container. This happens when Docker does not automatically determine the DNS server to use.
Check if the following works:
```bash
docker run busybox nslookup archive.ubuntu.com
```
If it does not work, you have to configure Docker to use the available DNS.
```bash
# Get the IP address you're using for DNS:
nmcli dev show | grep 'IP4.DNS'
# Create a new file, /etc/docker/daemon.json, that contains the following
# (replace the DNS IP address with the output from the previous step):
{
"dns": ["192.168.24.10"]
}
# Restart docker
sudo service docker restart
# Re-run
docker run busybox nslookup archive.ubuntu.com
# Now you should be able to re-run the installer and move past the DNS issue.
```
##### TypeError: `unsupported operand type(s) for -=: 'Retry' and 'int'`
In some cases, a MTU mismatch between the host and docker interfaces will cause this error while running pip. You can check this by running `ifconfig` and comparing the MTU of your host interface and the `docker_gwbridge` interface.
```bash
# Create a new file, /etc/docker/daemon.json, that contains the following
# (replace the MTU value with that of your host interface from the previous step):
{
"mtu": 1458
}
# Restart docker
sudo service docker restart
```
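To spot such a mismatch quickly, you can list the MTU of every interface (including `docker_gwbridge`, when present) straight from sysfs; this is a simple sketch using standard Linux paths:

```shell
# Print the MTU of every network interface so the host interface
# and docker_gwbridge can be compared side by side
for iface in /sys/class/net/*; do
    printf '%-20s %s\n' "$(basename "$iface")" "$(cat "$iface/mtu")"
done
```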
#### Problem deploying stack osm
##### `network netosm could not be found`
The error is `network "netosm" is declared as external, but could not be found. You need to create a swarm-scoped network before the stack is deployed`
It usually happens when a `docker system prune` is done while the stack is stopped. The following script will re-create the network:
```bash
#!/bin/bash
# Create OSM Docker Network ...
[ -z "$OSM_STACK_NAME" ] && OSM_STACK_NAME=osm
OSM_NETWORK_NAME=net${OSM_STACK_NAME}
echo Creating OSM Docker Network
DEFAULT_INTERFACE=$(route -n | awk '$1~/^0.0.0.0/ {print $8}')
DEFAULT_MTU=$(ip addr show $DEFAULT_INTERFACE | perl -ne 'if (/mtu\s(\d+)/) {print $1;}')
echo \# OSM_STACK_NAME = $OSM_STACK_NAME
echo \# OSM_NETWORK_NAME = $OSM_NETWORK_NAME
echo \# DEFAULT_INTERFACE = $DEFAULT_INTERFACE
echo \# DEFAULT_MTU = $DEFAULT_MTU
sg docker -c "docker network create --driver=overlay --attachable \
--opt com.docker.network.driver.mtu=${DEFAULT_MTU} \
${OSM_NETWORK_NAME}"
```
### Juju
#### Bootstrap hangs
If the Juju bootstrap takes a long time, stuck at this status...
```text
Installing Juju agent on bootstrap instance
Fetching Juju GUI 2.14.0
Waiting for address
Attempting to connect to 10.71.22.78:22
Connected to 10.71.22.78
Running machine configuration script...
```
...it usually indicates that the LXD container with the Juju controller is having trouble connecting to the internet.
Get the name of the LXD container. It will begin with '`juju-`' and end with '`-0`'.
```bash
lxc list
+-----------------+---------+---------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-----------------+---------+---------------------+------+------------+-----------+
| juju-0383f2-0 | RUNNING | 10.195.8.57 (eth0) | | PERSISTENT | |
+-----------------+---------+---------------------+------+------------+-----------+
```
Next, tail the output of cloud-init to see where the bootstrap is stuck.
```bash
lxc exec juju-0383f2-0 -- tail -f /var/log/cloud-init-output.log
```
#### Is Juju running?
If running, you should see something like this:
```bash
$ juju status
Model Controller Cloud/Region Version SLA
default osm localhost/localhost 2.3.7 unsupported
```
#### ERROR controller osm already exists
Did the OSM installation fail during the Juju installation with an error like "ERROR controller osm already exists"?
```bash
$ ./install_osm.sh
...
ERROR controller "osm" already exists
ERROR try was stopped
### Jum Agu 24 15:19:33 WIB 2018 install_juju: FATAL error: Juju installation failed
BACKTRACE:
### FATAL /usr/share/osm-devops/jenkins/common/logging 39
### install_juju /usr/share/osm-devops/installers/full_install_osm.sh 564
### install_lightweight /usr/share/osm-devops/installers/full_install_osm.sh 741
### main /usr/share/osm-devops/installers/full_install_osm.sh 1033
```
Try to destroy the Juju controller and run the installation again:
```bash
$ juju destroy-controller osm --destroy-all-models -y
$ ./install_osm.sh
```
If that does not work, you can destroy the Juju container and run the installation again:
```bash
# Destroy the Juju container (replace "*" with the name shown by `lxc list`)
lxc stop juju-*
lxc delete juju-*
# Unregister the controller, since we've manually freed the resources associated with it
juju unregister -y osm
# Verify that there are no controllers
juju list-controllers
# Run the installation again
./install_osm.sh
```
### LXD
#### ERROR profile default: `/etc/default/lxd-bridge` has IPv6 enabled
Make sure that you follow the instructions in the [Quickstart](01-quickstart.md).
When asked whether you want to proceed with the installation and configuration of LXD, Juju, Docker CE and the initialization of a local Docker swarm as prerequisites, please answer "y".
When dialog messages related to LXD configuration are shown, please answer in the following way:
- Do you want to configure the LXD bridge? Yes
- Do you want to setup an IPv4 subnet? Yes
- << Default values apply for next questions >>
- **Do you want to setup an IPv6 subnet? No**
### Configuration
### VIMs
#### Is the VIM URL reachable and operational?
When there are problems accessing the VIM URL, an error message similar to the following is shown after attempting to instantiate network services:
```text
Error: "VIM Exception vimconnConnectionException ConnectFailure: Unable to establish connection to <URL>"
```
- To debug potential issues with the connection, in the case of an OpenStack VIM, you can install the OpenStack client in the OSM VM and run some basic tests. For instance:
```bash
$ # Install the OpenStack client
$ sudo apt-get install python-openstackclient
$ # Load your OpenStack credentials. For instance, if your credentials are saved in a file named 'myVIM-openrc.sh', you can load them with:
$ source myVIM-openrc.sh
$ # Test if the VIM API is operational with a simple command. For instance:
$ openstack image list
```
If the openstack client works, then make sure that you can reach the VIM from the RO docker:
```bash
$ docker exec -it osm_ro.1.xxxxx bash
$ curl <URL_CONTROLLER>
```
_In some cases, the errors come from the fact that the VIM was added to OSM using names in the URL that are not Fully Qualified Domain Names (FQDN)._
When adding a VIM to OSM, you must always use FQDNs or IP addresses. Note that "controller" and similar names are not proper FQDNs (a suffix should be added). Non-FQDN names might be interpreted by Docker's dnsmasq as a Docker container name to be resolved, which is not the case. In addition, all the VIM endpoints should also be FQDNs or IP addresses, thus guaranteeing that all subsequent API calls can reach the appropriate endpoint.
Think of an NFV infrastructure with tens of VIMs: first you would have to use a different name for each controller (controller1, controller2, etc.), and then you would have to add all those entries to the `/etc/hosts` file of every machine interacting with the different VIMs, not only OSM. This is bad practice.
However, it is useful to have a means to work with lab environments using non-FQDN names. There are three options. You are probably looking for the third one, but we recommend the first:
- Option 1. Change the admin URL and/or public URL of the endpoints to use an IP address or an FQDN. You might find this interesting if you want to bring your Openstack setup to production.
- Option 2. Modify `/etc/hosts` in the docker RO container. This is not persistent after reboots or restarts of the osm docker stack.
- Option 3. Modify `/etc/osm/docker/docker-compose.yaml` in the host, adding extra_hosts in the ro section with the entries that you want to add to `/etc/hosts` in the RO docker:
```yaml
ro:
extra_hosts:
controller: 1.2.3.4
```
Then restart the stack:
```bash
docker stack rm osm
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm
```
This is persistent after reboots and restarts of the osm docker stack.
#### Authentication
**What should I check if the VIM authentication is failing?**
Typically, you will get the following error message:
Error: `"VIM Exception vimconnUnexpectedResponse Unauthorized: The request you have made requires authentication. (HTTP 401)"`
If your OpenStack URL is based on HTTPS, OSM will check by default the authenticity of your VIM using the appropriate public certificate. The recommended way to solve this is by modifying `/etc/osm/docker/docker-compose.yaml` in the host, sharing the host file (e.g. `/home/ubuntu/cafile.crt`) by adding a volume to the `ro` section as follows:
```yaml
ro:
...
volumes:
- /home/ubuntu/cafile.crt:/etc/osm/cafile.crt
```
Then, when creating the VIM, you should use the config option `ca_cert` as follows:
```bash
$ # Create the VIM with all the usual options, and add the config option to specify the certificate
$ osm vim-create VIM-NAME ... --config '{ca_cert: /etc/osm/cafile.crt}'
```
For casual testing, when adding the VIM account to OSM, you can use `insecure: True` as part of the VIM config parameters:
```bash
$ osm vim-create VIM-NAME ... --config '{insecure: True}'
```
**Is the VIM management network reachable from OSM (e.g. via ssh, port 22)?**
The simplest check consists of deploying a VM attached to the management network and trying to access it, e.g. via ssh, from the OSM host.
For instance, in the case of an OpenStack VIM you could try something like this:
```bash
$ openstack server create --image ubuntu --flavor m1.small --nic mgmtnet test
```
If this does not work, typically it is due to one of these issues:
- Security group policy in your VIM is blocking your traffic (contact your admin to fix it)
- IP address space in the management network is not routable from outside (or in the reverse direction, for the ACKs).
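As a lighter-weight variant of the ssh test, you can probe the management address on port 22 with bash's `/dev/tcp`; the IP below is an illustrative placeholder:

```shell
# Probe TCP port 22 on a management IP (placeholder value; replace it
# with the address of the deployed test VM)
MGMT_IP=127.0.0.1
if timeout 5 bash -c "exec 3<>/dev/tcp/${MGMT_IP}/22" 2>/dev/null; then
    echo "port 22 reachable"
else
    echo "port 22 unreachable"
fi
```

If the probe reports the port as unreachable, one of the two issues above is the likely cause.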
### Operational issues
### Running out of disk space
If you upgrade your OSM installation frequently, you might find that your disk runs out of space. The reason is that previous containers and Docker images might still be consuming disk space. Running the following two commands should be enough to clean up your Docker setup:
```bash
docker system prune
docker image prune
```
If you are still experiencing disk space issues, the logs of one of the containers could be the cause. Check which containers are consuming the most space (typically kafka-exporter):
```bash
du -sk /var/lib/docker/containers/* |sort -n
docker ps |grep <CONTAINER_ID>
```
Then, remove the stack and redeploy it again after doing a prune:
```bash
docker stack rm osm_metrics
docker system prune
docker image prune
docker stack deploy -c /etc/osm/docker/osm_metrics/docker-compose.yml osm_metrics
```
### VCA (juju)
#### Status is not coherent with running NS
In extraordinary situations, the output of `juju status` could show pending units that should have been removed when deleting an NS. In those situations, you can clean up VCA by following the procedure below:
```bash
juju status -m <NS_ID>
juju remove-application -m <NS_ID> <application>
juju resolved -m <NS_ID> <unit> --no-retry # You will likely have to run this several times, as it will probably hit an error in the next queued hook. Once the last hook is marked resolved, the charm will continue its removal
```
The following page also shows [how to remove different Juju objects](https://docs.jujucharms.com/2.1/en/charms-destroy)
#### Dump Juju Logs
To dump the Juju debug-logs, run this command:
```bash
juju debug-log --replay --no-tail > juju-debug.log
juju debug-log --replay --no-tail -m <NS_ID>
juju debug-log --replay --no-tail -m <NS_ID> --include <UNIT>
```
#### Manual recovery of Juju
If Juju gets into a corrupt state and you cannot run `juju status` or contact the Juju controller, you might need to manually remove the controller and register it again, making OSM aware of the new controller.
```bash
# Stop and delete all juju containers, then unregister the controller
lxc list
lxc stop juju-*   # replace "*" with the right values
lxc delete juju-* # replace "*" with the right values
juju unregister -y osm
# Create the controller again
sg lxd -c "juju bootstrap --bootstrap-series=xenial localhost osm"
# Get controller IP and update it in relevant OSM env files
controller_ip=$(juju show-controller osm|grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}')
sudo sed -i 's/^OSMMON_VCA_HOST.*$/OSMMON_VCA_HOST='$controller_ip'/' /etc/osm/docker/mon.env
sudo sed -i 's/^OSMLCM_VCA_HOST.*$/OSMLCM_VCA_HOST='$controller_ip'/' /etc/osm/docker/lcm.env
# Get the juju password and feed it to the OSM env files
function parse_juju_password {
password_file="${HOME}/.local/share/juju/accounts.yaml"
local controller_name=$1
local s='[[:space:]]*' w='[a-zA-Z0-9_-]*' fs=$(echo @|tr @ '\034')
sed -ne "s|^\($s\):|\1|" \
-e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $password_file |
awk -F$fs -v controller=$controller_name '{
indent = length($1)/2;
vname[indent] = $2;
for (i in vname) {if (i > indent) {delete vname[i]}}
if (length($3) > 0) {
vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
if (match(vn,controller) && match($2,"password")) {
printf("%s",$3);
}
}
}'
}
juju_password=$(parse_juju_password osm)
sudo sed -i 's/^OSMMON_VCA_SECRET.*$/OSMMON_VCA_SECRET='$juju_password'/' /etc/osm/docker/mon.env
sudo sed -i 's/^OSMLCM_VCA_SECRET.*$/OSMLCM_VCA_SECRET='$juju_password'/' /etc/osm/docker/lcm.env
juju_pubkey=$(cat $HOME/.local/share/juju/ssh/juju_id_rsa.pub)
sudo sed -i 's/^OSMLCM_VCA_PUBKEY.*$/OSMLCM_VCA_PUBKEY='$juju_pubkey'/' /etc/osm/docker/mon.env
sudo sed -i 's/^OSMLCM_VCA_PUBKEY.*$/OSMLCM_VCA_PUBKEY='$juju_pubkey'/' /etc/osm/docker/lcm.env
# Restart the OSM stack
docker stack rm osm
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm
```
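If you want to sanity-check the YAML parsing before wiring the password into the env files, the `parse_juju_password` helper can be exercised against a throwaway `accounts.yaml`; the controller name and password below are made-up sample values:

```shell
# Exercise the parse_juju_password helper from the recovery script above
# against a throwaway accounts.yaml (sample controller name and password)
function parse_juju_password {
    password_file="${HOME}/.local/share/juju/accounts.yaml"
    local controller_name=$1
    local s='[[:space:]]*' w='[a-zA-Z0-9_-]*' fs=$(echo @|tr @ '\034')
    sed -ne "s|^\($s\):|\1|" \
        -e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
        -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $password_file |
    awk -F$fs -v controller=$controller_name '{
        indent = length($1)/2;
        vname[indent] = $2;
        for (i in vname) {if (i > indent) {delete vname[i]}}
        if (length($3) > 0) {
            vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
            if (match(vn,controller) && match($2,"password")) {
                printf("%s",$3);
            }
        }
    }'
}

# Point HOME at a scratch directory holding a minimal accounts.yaml
scratch=$(mktemp -d)
mkdir -p "$scratch/.local/share/juju"
cat > "$scratch/.local/share/juju/accounts.yaml" <<'EOF'
controllers:
  osm:
    user: admin
    password: 's3cret'
EOF
HOME="$scratch" parse_juju_password osm   # prints: s3cret
```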
#### Slow deployment of charms
You can make deployment of charms quicker by:
- Upgrading your LXD installation to use ZFS (see the LXD configuration for OSM Release FIVE)
- After LXD re-installation, you might need to reinstall the juju controller: [Reinstall Juju controller](#manual-recovery-of-juju)
- Preventing Juju from running `apt-get update && apt-get upgrade` when starting a machine: [Disable OS upgrades in charms](14-advanced-charm-development.md#disable-os-upgrades)
- Building periodically a custom image that will be used as base image for all the charms: [Custom base image for charms](14-advanced-charm-development.md#build-a-custom-cloud-image)
### Instantiation Errors
#### File juju_id_rsa.pub not found
- **ERROR**: `ERROR creating VCA model name 'xxxx': Traceback (most recent call last): File "/usr/lib/python3/dist-packages/osm_lcm/ns.py", line 822, in instantiate await ... [Errno 2] No such file or directory: '/root/.local/share/juju/ssh/juju_id_rsa.pub'`
- **CAUSE**: Normally, a migration from Release FIVE does not properly set the environment for LCM
- **SOLUTION**: Ensure the variable **OSMLCM_VCA_PUBKEY** is properly set in the file `/etc/osm/docker/lcm.env`. Its value must match the output of `cat $HOME/.local/share/juju/ssh/juju_id_rsa.pub`. If it does not, add or change it. Then restart OSM, or just the LCM service, with `docker service update osm_lcm --force --env-add OSMLCM_VCA_PUBKEY=""`
### NBI Errors
#### Cannot login after migration to 6.0.2
- **ERROR**: NBI always returns "UNAUTHORIZED". You cannot log in with either the UI or the CLI. The CLI shows the error "`can't find a default project for this user`" or "`project admin not allowed for this user`".
- **CAUSE**: Normally appears after a migration to release 6.0.2, due to a slight incompatibility with users created by older versions.
- **SOLUTION**: Delete the admin user and restart NBI so that a new compatible user is created, by running these commands:
```bash
curl --insecure https://localhost:9999/osm/test/db-clear/users
docker service update osm_nbi --force
```
### Checking the logs
You can check the logs of any container with the following commands:
```bash
docker logs $(docker ps -aqf "name=osm_mon" -n 1)
docker logs $(docker ps -aqf "name=osm_pol" -n 1)
docker logs $(docker ps -aqf "name=osm_lcm" -n 1)
docker logs $(docker ps -aqf "name=osm_nbi" -n 1)
docker logs $(docker ps -aqf "name=osm_light-ui" -n 1)
docker logs $(docker ps -aqf "name=osm_ro.1" -n 1)
docker logs $(docker ps -aqf "name=osm_ro-db" -n 1)
docker logs $(docker ps -aqf "name=osm_mongo" -n 1)
docker logs $(docker ps -aqf "name=osm_kafka" -n 1)
docker logs $(docker ps -aqf "name=osm_zookeeper" -n 1)
docker logs $(docker ps -aqf "name=osm_keystone.1" -n 1)
docker logs $(docker ps -aqf "name=osm_keystone-db" -n 1)
docker logs $(docker ps -aqf "name=osm_prometheus" -n 1)
```
For each container, logs can be found under:
```bash
/var/lib/docker/containers/DOCKER_ID/DOCKER_ID-json.log
```
The DOCKER_ID can be obtained this way (e.g. for MON):
```bash
docker ps -aqf "name=osm_mon" -n 1 --no-trunc
```
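Putting the two together, the log path is derived from the full (untruncated) container ID; the ID below is an illustrative placeholder:

```shell
# Derive the JSON log file path from a full container ID
# (placeholder ID; in practice use the output of the command above)
DOCKER_ID=0123456789abcdef
echo "/var/lib/docker/containers/${DOCKER_ID}/${DOCKER_ID}-json.log"
```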
### Changing the log level
You can change the log level of any container by updating the container with the right `LOG_LEVEL` env var.
Log levels are:
- ERROR
- WARNING
- INFO
- DEBUG
For instance, to increase the log level to DEBUG for the NBI in a deployment of OSM over docker swarm:
```bash
docker service update --env-add OSMNBI_LOG_LEVEL=DEBUG osm_nbi
```
For instance, to set the log level to INFO for the MON in a deployment of OSM over K8s:
```bash
kubectl -n osm set env deployment mon OSMMON_GLOBAL_LOGLEVEL=INFO
```
## How to report an issue
**If you have bugs or issues to be reported, please use [Bugzilla](https://osm.etsi.org/bugzilla)**
**If you have questions or feedback, feel free to contact us through:**
- **the mailing list [OSM_TECH@list.etsi.org](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=OSM_TECH@list.etsi.org)**
- **the [Slack work space](https://join.slack.com/t/opensourcemano/shared_invite/enQtMzQ3MzYzNTQ0NDIyLWVkNTE4ZjZjNWI0ZTQyN2VhOTI1MjViMzU1NWYwMWM3ODI4NTQyY2VlODA2ZjczMWIyYTFkZWNiZmFkM2M2ZDk)**
**Please be patient. Answers may take a few days.**
------
Please provide some context to your questions. As an example, find below some guidelines:
- In case of an installation issue:
- The full command used to run the installer and the full output of the installer (or at least enough context) might help on finding the solution.
- It is highly recommended to run the installer command capturing standard output and standard error, so that you can send them for analysis if needed. E.g.:
```bash
./install_osm.sh 2>&1 | tee osm_install.log
```
- In case of operational issues, the following information might help:
- Version of OSM that you are using
- Logs of the system. Check <https://osm.etsi.org/wikipub/index.php/Common_issues_and_troubleshooting> to know how to get them.
- Details on the actions you made to get that error so that we could reproduce it.
- IP network details in order to help troubleshooting potential network issues. For instance:
- Client IP address (browser, command line client, etc.) from where you are trying to access OSM
- IP address of the machine where OSM is running
- IP addresses of the containers
- NAT rules in the machine where OSM is running
Common sense applies here, so you don't need to send everything, but just enough information to diagnose the issue and find a proper solution.