
Prometheus error #11

Closed
pryorda opened this issue Jun 29, 2018 · 25 comments
Labels
bug Something isn't working

Comments


pryorda commented Jun 29, 2018

From @smallfish01 on August 9, 2017 9:36

Hi, when I completed the configuration and restarted the Prometheus service, I got the error below:
time="2017-08-09T17:34:37+08:00" level=info msg="Loading configuration file prometheus.yml" source="main.go:252"
time="2017-08-09T17:34:37+08:00" level=error msg="Error reading file "/opt/vmware_exporter/config.yml": yaml: unmarshal errors:
line 1: cannot unmarshal !!map into []*config.TargetGroup" source="file.go:199"
time="2017-08-09T17:34:38+08:00" level=error msg="Error reading file "/opt/vmware_exporter/config.yml": yaml: unmarshal errors:
line 1: cannot unmarshal !!map into []*config.TargetGroup" source="file.go:199"

My config.yml configuration is:
default:
  vmware_user: '[email protected]'
  vmware_password: 'Er4545'
  ignore_ssl: True

esx:
  vmware_user: 'root'
  vmware_password: 'Er4545'
  ignore_ssl: True

Do you know why?

Thank you!

Copied from original issue: rverchere/vmware_exporter#13


pryorda commented Jun 29, 2018

From @wtip on August 9, 2017 18:1

What is the above log output from? The Prometheus server or the vmware_exporter?
If you are using

default:
  vmware_user: '[email protected]'
  vmware_password: 'Er4545'
  ignore_ssl: True

esx:
  vmware_user: 'root'
  vmware_password: 'Er4545'
  ignore_ssl: True

as your Prometheus server configuration, it's not going to work. That is the configuration file for the vmware_exporter itself.
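
For comparison, Prometheus expects a file_sd_configs targets file to be a list of target groups (that is what the "cannot unmarshal !!map into []*config.TargetGroup" error is complaining about), roughly like this sketch (the file path is just a placeholder):

  # hypothetical targets file, e.g. /etc/prometheus/targets/vmware.yml
  - targets:
      - '192.168.100.10'
    labels:
      env: 'office'   # optional static labels, example only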


pryorda commented Jun 29, 2018

From @smallfish01 on August 10, 2017 3:6

Hi Wtip,

Thanks for your reply.
The log is from the Prometheus server; the vmware_exporter is OK, I can see the port has been opened.

netstat -anp|grep 9272

tcp 0 0 0.0.0.0:9272 0.0.0.0:* LISTEN 6268/python

So do you mean I should remove "- /opt/vmware_exporter/config.yml" from my Prometheus configuration?

My Prometheus configuration is below:

For VMware ESXi:

  - job_name: 'vmware_esx'
    metrics_path: '/metrics'
    file_sd_configs:
      - files:
        - /opt/vmware_exporter/config.yml
    params:
      section: [esx]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.100.10:9272


pryorda commented Jun 29, 2018

From @smallfish01 on August 10, 2017 3:8

When I access http://192.168.100.22:9272/ in a web browser, I get:

No Such Resource
No such child resource.


pryorda commented Jun 29, 2018

From @wtip on August 10, 2017 15:0

@smallfish01 Yes, you are using the file_sd_configs Prometheus configuration incorrectly.
Have a look at https://prometheus.io/docs/operating/configuration/#<file_sd_config> and https://prometheus.io/blog/2015/06/01/advanced-service-discovery/#custom-service-discovery

However, it might be easier for you to use the first configuration example, which doesn't use file-based service discovery.

  - job_name: 'vmware_vcenter'
    metrics_path: '/metrics'
    static_configs:
      - targets:
        - 'vcenter.company.com'
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: localhost:9272

Also, you can't just access http://192.168.100.22:9272/
You need to use something like http://192.168.100.22:9272/metrics?target=vcenter.company.com, but change vcenter.company.com to the domain name of your vCenter or ESXi server.


pryorda commented Jun 29, 2018

From @smallfish01 on August 11, 2017 11:48

Hi William,

I really appreciate your great help! The errors are gone now, but the instance shows as down on the Prometheus web UI; the ESXi server status is "Down" and the error is "context deadline exceeded".
I tried restarting vmware_exporter.py and got this error:

./vmware_exporter.py &

[1] 5734
[root@office-monitoring vmware_exporter]# Starting web server on port 9272

[root@office-monitoring vmware_exporter]# [2017-08-11 11:41:43.229451+00:00] Start collecting vcenter metrics for 192.168.100.10
[2017-08-11 11:41:58.228206+00:00] Start collecting vcenter metrics for 192.168.100.10
[2017-08-11 11:42:13.228314+00:00] Start collecting vcenter metrics for 192.168.100.10
[2017-08-11 11:42:28.228214+00:00] Start collecting vcenter metrics for 192.168.100.10
[2017-08-11 11:42:43.228200+00:00] Start collecting vcenter metrics for 192.168.100.10
[2017-08-11 11:42:58.228250+00:00] Start collecting vcenter metrics for 192.168.100.10
Unhandled error in Deferred:
Unhandled error in Deferred:
Unhandled error in Deferred:
Unhandled error in Deferred:
Unhandled error in Deferred:

The log says "Start collecting vcenter metrics for 192.168.100.10", but the server is an ESXi host, not vCenter; how can I change that?

Many thanks!


pryorda commented Jun 29, 2018

From @smallfish01 on August 11, 2017 11:53

I just added:

    params:
      section: [esx]

into prometheus.yml and restarted it, but the issue still exists.

For VMware ESXi:

  - job_name: 'vmware_esx'
    metrics_path: '/metrics'
    static_configs:
      - targets:
        - '192.168.100.10'
    params:
      section: [esx]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9272


pryorda commented Jun 29, 2018

From @wtip on August 11, 2017 13:27

@smallfish01 After you start up the vmware_exporter, are you able to go to http://192.168.100.22:9272/metrics?section=esx&target=esxiservername.company.com and get a list of metrics? Please remember to change the IP address and server name.
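
For reference, that query string is what Prometheus itself should end up requesting once params sits at the job level alongside static_configs and relabel_configs; a rough, untested sketch using the addresses from your config:

  - job_name: 'vmware_esx'
    metrics_path: '/metrics'
    params:
      section: [esx]                   # appended to every scrape as ?section=esx
    static_configs:
      - targets: ['192.168.100.10']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target   # becomes ?target=192.168.100.10
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9272    # actually scrape the exporter, not the ESXi host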


pryorda commented Jun 29, 2018

From @smallfish01 on August 12, 2017 7:25

I ran http://192.168.100.22:9272/metrics?section=esx&target=192.168.100.10 and cannot get a list of metrics; the VMware target status is "down" on the Prometheus server at http://192.168.100.22:9090.


pryorda commented Jun 29, 2018

From @wtip on August 12, 2017 19:15

I'm not sure what the issue is.
What version of ESXi are you using?
Do you have a config.yml in the same directory as vmware_exporter.py?
Are the username and password under the esx: section correct?
Did you do a pip install -r requirements.txt?


pryorda commented Jun 29, 2018

From @smallfish01 on August 14, 2017 7:1

Hello,
1) ESXi version is 6.0.0 Update 3.
2) Yes.
3) Yes.

cat config.yml

default:
  vmware_user: '[email protected]'
  vmware_password: 'allar430'
  ignore_ssl: True

esx:
  vmware_user: 'root'
  vmware_password: 'allar430'
  ignore_ssl: True

4) Installed the requirements.


pryorda commented Jun 29, 2018

From @wtip on August 14, 2017 14:6

@smallfish01 Very strange. You could try my fork that includes some additional error logging. https://github.com/wtip/vmware_exporter
Maybe this will tell you what the problem is.


pryorda commented Jun 29, 2018

From @rverchere on August 17, 2017 15:16

Hi,

I think your scrape interval is too short, and your Prometheus server asks for metrics before the vmware_exporter has sent its previous results:

[2017-08-11 11:41:58.228206+00:00] Start collecting vcenter metrics for 192.168.100.10
[2017-08-11 11:42:13.228314+00:00] Start collecting vcenter metrics for 192.168.100.10
[2017-08-11 11:42:28.228214+00:00] Start collecting vcenter metrics for 192.168.100.10

Could you please try a longer interval? In my case, I had to change the scrape interval to 30s (prometheus.yml config):

  - job_name: 'vmware_vcenter'
    metrics_path: '/metrics'
    scrape_interval: 30s
    scrape_timeout: 30s
    [...]


pryorda commented Jun 29, 2018

From @rverchere on September 7, 2017 10:35

Hi @smallfish01, did you try the previous recommendation?


pryorda commented Jun 29, 2018

From @smallfish01 on September 7, 2017 11:58

Hi rverchere,

Thanks for your help, and sorry for the trouble. I tried what you said and changed prometheus.yml as below:

For VMware ESXi:

  - job_name: 'vmware_esx'
    metrics_path: '/metrics'
    scrape_interval: 30s
    scrape_timeout: 30s
    static_configs:
      - targets:
        - '192.168.100.10'
    params:
      section: [esx]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9272

netstat -anp|grep 9272

tcp 0 0 0.0.0.0:9272 0.0.0.0:* LISTEN 753/python

I still cannot open the URL http://192.168.100.16:9272/metrics?section=esx&target=192.168.100.20, and I also get this error from Prometheus:
[screenshot: Prometheus target error, 2017-09-07 19:56]

systemctl status vmware_exporter

● vmware_exporter.service - Prometheus VMWare Exporter
Loaded: loaded (/etc/systemd/system/vmware_exporter.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2017-09-05 12:43:46 CST; 2 days ago
Main PID: 753 (python)
Memory: 26.8M
CGroup: /system.slice/vmware_exporter.service
└─753 python /opt/vmware_exporter/vmware_exporter.py

Sep 07 19:55:27 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:55:27 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:55:27 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:57:57 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:57:57 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:57:57 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:57:57 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:57:57 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:57:57 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:
Sep 07 19:57:57 office-monitoring vmware_exporter.py[753]: Unhandled error in Deferred:


pryorda commented Jun 29, 2018

From @smallfish01 on September 8, 2017 6:6

Hi Rverchere,
I tried yesterday and set up prometheus.yml as below:

  - job_name: 'vmware_esx'
    metrics_path: '/metrics'
    scrape_interval: 30s
    scrape_timeout: 30s
    static_configs:
      - targets:
        - '192.168.100.10'
    params:
      section: [esx]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9272

But I still got the error on the Prometheus web UI:
[screenshot: Prometheus target error, 2017-09-07 19:56]

The error log:
vmware_exporter.py: Unhandled error in Deferred:


pryorda commented Jun 29, 2018

From @wangyf123 on November 6, 2017 13:11

@smallfish01 I encountered the same error: the vmware target status is DOWN and the error is "server returned HTTP status 404 Not Found".
[screenshot: Prometheus target status]

vmware_exporter config.yml:
[screenshot: config.yml]

prometheus.yml:
[screenshot: prometheus.yml]


pryorda commented Jun 29, 2018

From @wangyf123 on November 8, 2017 5:44

Resolved by launching the node exporter server with the command:
./node_exporter/node_exporter.py


pryorda commented Jun 29, 2018

From @smallfish01 on November 10, 2017 1:22

@wangyf123 Thanks for your update! I will have a try.


pryorda commented Jun 29, 2018

From @hmmxp on January 29, 2018 2:58

Dear All,

I am having the same issue.

My Environment Is As Follows

All Exporter (Node Exporter, VMware Exporter) Is On Server A With IP 10.10.10.1
Prometheus Is On Server B With IP 10.10.10.2

Prometheus Is Version 2.0

Exporter Server Is Running Python 3.4 and pip list shows the following:
asn1crypto (0.24.0)
attrs (17.4.0)
Automat (0.6.0)
certifi (2018.1.18)
cffi (1.11.4)
chardet (3.0.4)
constantly (15.1.0)
cryptography (2.1.4)
hyperlink (17.3.1)
idna (2.6)
incremental (17.5.0)
pip (9.0.1)
prometheus-client (0.0.19)
pyasn1 (0.4.2)
pyasn1-modules (0.2.1)
pycparser (2.18)
pyOpenSSL (17.5.0)
pytz (2017.3)
pyvmomi (6.0.0.2016.4)
PyYAML (3.12)
requests (2.18.4)
service-identity (17.0.0)
setuptools (36.6.0)
six (1.11.0)
Twisted (17.9.0)
urllib3 (1.22)
vmware-exporter (0.1.0)
yamlconfig (0.3.1)
zope.interface (4.4.3)

My VMware ESXi environment is a mix of v5.0, v5.5 and v6.0, and VMware vCenter is v6.0.

When browsing http://10.10.10.1:9272/metrics?section=esx&target=esx01, it shows "No Target Defined".


pryorda commented Jun 29, 2018

From @rverchere on January 30, 2018 22:42

> Exporter Server Is Running Python 3.4 and pip list shows the following:

Hi, I did not test with Python 3, could you try with Python 2.7?


pryorda commented Jun 29, 2018

From @hmmxp on January 31, 2018 0:45

Using Python 2 and v0.1.1

Accessing http://localhost:9272/metrics?target=vcenter.company.com, I see the following:

web.Server Traceback (most recent call last):
exceptions.UnicodeEncodeError: 'ascii' codec can't encode character u'\u6708' in position 6: ordinal not in range(128)
/usr/lib/python2.7/site-packages/Twisted-17.9.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py:653 in _runCallbacks
652 try:
653 current.result = callback(current.result, *args, **kw)
654 if current.result is current:

/opt/vmware_exporter.py:84 in generate_latest_target
83 k, v.replace('\\', r'\\').replace('\n', r'\n').replace('"', r'\"'))
84 for k, v in sorted(labels.items())]))
85 else:
exceptions.UnicodeEncodeError: 'ascii' codec can't encode character u'\u6708' in position 6: ordinal not in range(128)


pryorda commented Jun 29, 2018

From @wangyf123 on January 31, 2018 5:46

> Using Python 2 and v0.1.1
>
> Accessing http://localhost:9272/metrics?target=vcenter.company.com, I see the following:
>
> web.Server Traceback (most recent call last):
> exceptions.UnicodeEncodeError: 'ascii' codec can't encode character u'\u6708' in position 6: ordinal not in range(128)
> /usr/lib/python2.7/site-packages/Twisted-17.9.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py:653 in _runCallbacks
> 652 try:
> 653 current.result = callback(current.result, *args, **kw)
> 654 if current.result is current:
>
> /opt/vmware_exporter.py:84 in generate_latest_target
> 83 k, v.replace('\\', r'\\').replace('\n', r'\n').replace('"', r'\"'))
> 84 for k, v in sorted(labels.items())]))
> 85 else:
> exceptions.UnicodeEncodeError: 'ascii' codec can't encode character u'\u6708' in position 6: ordinal not in range(128)

This is a UTF-8 encoding error; just add the following code to vmware_exporter.py:

    import sys
    reload(sys)
    sys.setdefaultencoding("utf-8")
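
Another option, instead of changing the default encoding globally, is to keep the label values as unicode and only encode to UTF-8 when the exposition text is written out. A rough sketch with illustrative helper names (not the change that later shipped in 0.1.2):

    # -*- coding: utf-8 -*-
    # Sketch only: escape Prometheus label values and return UTF-8 bytes,
    # so non-ASCII VM/host names don't hit the implicit ascii codec.
    def escape_label_value(value):
        return value.replace('\\', r'\\').replace('\n', r'\n').replace('"', r'\"')

    def render_labels(labels):
        body = u','.join(u'%s="%s"' % (k, escape_label_value(v))
                         for k, v in sorted(labels.items()))
        return (u'{%s}' % body).encode('utf-8')  # bytes are safe to write to the Twisted request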


pryorda commented Jun 29, 2018

From @rverchere on January 31, 2018 20:12

> This is a UTF-8 encoding error; just add the following code to vmware_exporter.py:

Thanks @wangyf123, I've just fixed the issue in version 0.1.2!


pryorda commented Jun 29, 2018

I've changed this up and have not tested with Prometheus. If you choose to use my container or source, make sure that you look at the new documentation.

pryorda added the bug label on Jun 29, 2018

pryorda commented Sep 11, 2018

Closing because of inactivity.

pryorda closed this as completed on Sep 11, 2018