binaries/dump_operator_names.cc missing iostream include #125134

Open
phetdam opened this issue Apr 29, 2024 · 0 comments
Labels
module: build, module: windows, module: wsl, triaged


phetdam commented Apr 29, 2024

Issue description

When building the v2.3.0 checkout on WSL1 Ubuntu 22.04 with GCC 11.3.0 using the build-libtorch-2.3.0.sh script below (GitHub won't let me attach it), with the BUILD_BINARY CMake option set to ON, compilation of binaries/dump_operator_names.cc fails.

#!/usr/bin/bash
#
# build-libtorch-2.3.0.sh
#
# Author: Derek Huang
# Brief: Build libtorch 2.3.0 from source
# Copyright: MIT License
#
# Build includes C++ programs, i.e. BUILD_BINARY is ON. No tests are installed,
# i.e. INSTALL_TEST is OFF, and installation goes to ./libtorch-2.3.0. The new
# C++11 ABI is used (the default with newer GCC compilers). CPU-only.
#
# Originally tried on WSL1 Ubuntu 22.04 LTS with GCC 11.3.0 and Python 3.10.6.
#

##
# Clone PyTorch Git repo if it does not exist and initialize submodules.
#
# v2.3.0 tag is checked out and working directory will be top-level repo dir.
#
git_setup() {
    # if repo does not exist, clone
    echo "Cloning PyTorch repo..."
    if [ ! -d pytorch ]
    then
        git clone --recursive https://github.com/pytorch/pytorch.git
        echo "Cloning PyTorch repo... done"
    else
        echo "Cloning PyTorch repo... skipped"
    fi
    # checkout 2.3.0 + update submodules
    cd pytorch && git checkout v2.3.0
    git submodule sync && git submodule update --init --recursive
}

##
# Create and activate Python virtualenv for libtorch install + install reqs.
#
python_setup() {
    # if it doesn't exist, create
    echo "Creating $(python --version) venv torch_venv..."
    if [ ! -d torch_venv ]
    then
        python3 -m venv torch_venv
        echo "Creating $(python --version) venv torch_venv... done"
    else
        echo "Creating $(python --version) venv torch_venv... skipped"
    fi
    # activate if not activated
    echo "Activating venv torch_venv..."
    if [ $VIRTUAL_ENV != $(realpath torch_venv) ]
    then
        source torch_venv/bin/activate
        echo "Activating venv torch_venv... done"
    else
        echo "Activating venv torch_venv... skipped"
    fi
    # install Python requirements
    pip install -r requirements.txt
}

##
# Main function.
#
# Args:
#   Array of command-line arguments
#
main() {
    # clone PyTorch Git repo + checkout 2.3.0 + init submodules if necessary
    git_setup
    # create + activate virtual environment + install requirements if necessary
    python_setup
    # CMake command. C++11 ABI, defaults to CPU build, but explicitly no CUDA
    cmake -S . -B build-cpu -DCMAKE_INSTALL_PREFIX=libtorch-2.3.0 \
        -D_GLIBCXX_USE_CXX11_ABI=ON -DBUILD_BINARY=ON \
        -DBUILD_PYTHON=OFF -DBUILD_TEST=ON -DINSTALL_TEST=OFF -DUSE_CUDA=OFF
    # note: using $(nproc) / 2 is to relieve some load on the build machine
    cmake --build build-cpu -j$(($(nproc) / 2))
}

set -e
main "$@"

Code example

The GCC compilation error is as follows (leading directories and subsequent "not a member" complaints removed for brevity):

dump_operator_names.cc:31:10: error: ‘cout’ is not a member of ‘std’
   31 |     std::cout << "function name: " << func.name() << std::endl;
      |          ^~~~
dump_operator_names.cc:20:1: note: ‘std::cout’ is defined in header ‘<iostream>’; did you forget to ‘#include <iostream>’?
   19 | #include <torch/csrc/jit/serialization/import.h>
  +++ |+#include <iostream>
   20 | #include <torch/csrc/jit/runtime/instruction.h>

Just adding #include <iostream> is enough for the dump_operator_names target to build successfully. For completeness, the missing <string> and <unordered_set> headers should also be included, e.g. as in this sample Git diff:

diff --git a/binaries/dump_operator_names.cc b/binaries/dump_operator_names.cc
index f77f93bf592..93e18b854e3 100644
--- a/binaries/dump_operator_names.cc
+++ b/binaries/dump_operator_names.cc
@@ -21,6 +21,9 @@
 #include <c10/util/Flags.h>
 
 #include <fstream>
+#include <iostream>
+#include <string>
+#include <unordered_set>
 
 namespace torch {
 namespace jit {
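
After applying the diff, rebuilding just the offending target is a quick way to verify the fix; a minimal sketch, assuming the build-cpu build tree from the script above and that the CMake target is named dump_operator_names:

# rebuild only the previously failing binary
cmake --build build-cpu --target dump_operator_names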

This seems simple enough to fix since it's just a few missing includes. However, the issue is still present in the current HEAD checkout.
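
To check whether a given checkout already carries the include, something like the following (illustrative, run from the repo root) is enough:

# prints nothing on a checkout that still lacks the include
grep -n 'include <iostream>' binaries/dump_operator_names.cc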

Addendum on run_plan_mpi.cc

There is also an issue where binaries/run_plan_mpi.cc cannot find the mpi.h header, even though the MPI include path is determined correctly and I can compile a small C test program like the following:

/**
 * @file mpiver.c
 * @author Derek Huang
 * @brief C program to print MPI major/minor versions
 * @copyright MIT License
 *
 * Compile: mpicc -Wall -o mpiver mpiver.c
 */

#include <stdio.h>
#include <stdlib.h>

#include <mpi.h>

int
main(void)
{
  int mpi_major, mpi_minor;
  MPI_Get_version(&mpi_major, &mpi_minor);
  printf("MPI version: %d.%d\n", mpi_major, mpi_minor);  // 3.1 for me
  return EXIT_SUCCESS;
}
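
For reference, the MPI compiler wrappers can print the flags (including the include path) they pass to the underlying compiler; the exact option depends on the MPI implementation, e.g.:

mpicc -show             # MPICH-style wrappers
mpicc --showme:compile  # Open MPI wrappers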

However, I think this is better left to a separate issue.

System Info

Output from running torch/utils/collect_env.py with the torch_venv virtual environment created by build-libtorch-2.3.0.sh.
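
(Rough invocation for reproducibility, assuming the working directory is the pytorch checkout created by the script:)

source torch_venv/bin/activate
python torch/utils/collect_env.py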

Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.4.0-22621-Microsoft-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Address sizes:       36 bits physical, 48 bits virtual
Byte Order:          Little Endian
CPU(s):              20
On-line CPU(s) list: 0-19
Vendor ID:           GenuineIntel
Model name:          12th Gen Intel(R) Core(TM) i7-12800H
CPU family:          6
Model:               154
Thread(s) per core:  2
Core(s) per socket:  14
Socket(s):           1
Stepping:            3
CPU max MHz:         2400.0000
CPU min MHz:         0.0000
BogoMIPS:            4800.00
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm pni pclmulqdq monitor est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave osxsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni umip gfni vaes vpclmulqdq rdpid ibrs ibpb stibp ssbd
Hypervisor vendor:   Windows Subsystem for Linux
Virtualization type: container

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[conda] Could not collect

cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite

cpuhrsch added the module: build, module: windows, triaged, and module: wsl labels on Apr 30, 2024