Project dependencies may have API risk issues #35

PyDeps commented Oct 27, 2022

Hi, in NER-BERT-pytorch, inappropriate dependency versioning constraints can introduce risks.

Below are the dependencies and version constraints that the project is currently using:

tensorflow>=1.11.0
torch>=0.4.1
tqdm
pytorch-pretrained-bert==0.4.0
apex

The == version constraint introduces a risk of dependency conflicts because it restricts the dependency to a single version.
Constraints with no upper bound, or *, introduce a risk of missing-API errors, because the latest version of a dependency may remove APIs that the project calls.

After further analysis, in this project:
The version constraint of the dependency tqdm can be changed to >=4.36.0,<=4.64.0.
The version constraint of the dependency pytorch-pretrained-bert can be changed to >=0.3.0,<=0.6.2.

These suggested changes reduce dependency conflicts as much as possible while still allowing the newest versions that do not raise errors when the project calls them.
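For reference, here is a sketch of what the adjusted requirements could look like; only the two constraints above change, and the remaining entries are kept exactly as the project currently declares them:

tensorflow>=1.11.0
torch>=0.4.1
tqdm>=4.36.0,<=4.64.0
pytorch-pretrained-bert>=0.3.0,<=0.6.2
apex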

The project invokes all of the following methods.

Methods called from tqdm:
tqdm.trange
Methods called from pytorch-pretrained-bert:
pytorch_pretrained_bert.BertForTokenClassification
All methods called across the project:
torch.nn.DataParallel.to
sentences.append
metrics.classification_report
print
numpy.argmax
list
str
loss.mean.backward
self.__dict__.update
collections.defaultdict.items
tqdm.trange
file.write
ps.append
data_loader.DataLoader.data_iterator
zip
json.dump
chunks.append
torch.nn.DataParallel.train
filter
set
torch.nn.DataParallel
any
model
tqdm.trange.set_postfix
torch.nn.DataParallel.parameters
rs.append
logging.info
self.load_tags.append
self.tag2idx.get
logging.getLogger.addHandler
torch.tensor.to
file_sentences.write
metrics.items
batch_output.detach.cpu.numpy.detach
logging.getLogger.setLevel
self.load_tags
s.append
logging.StreamHandler
isinstance
collections.defaultdict
utils.load_checkpoint
numpy.average
train_and_evaluate
torch.optim.Adam
file_tags.write
torch.nn.DataParallel.half
torch.save
get_entities
train
hasattr
float
words.append
format
enumerate
utils.RunningAverage.update
sum
utils.RunningAverage
logging.StreamHandler.setFormatter
set.append
pytorch_pretrained_bert.BertTokenizer.from_pretrained
torch.cuda.device_count
open
batch_output.detach.cpu.numpy
range
utils.Params
pytorch_pretrained_bert.BertConfig.from_json_file
line.strip.split
self.tokenizer.tokenize
utils.set_logger
model.load_state_dict
torch.cuda.manual_seed_all
pytorch_pretrained_bert.BertForTokenClassification
pred_tags.extend
ImportError
e.d2.add
torch.optim.lr_scheduler.LambdaLR
build_tags
batch_tags.to.numpy.to
row_fmt.format
torch.optim.Adam.backward
min
logging.getLogger
evaluate.evaluate
logging.FileHandler
apex.optimizers.FusedAdam
join
json.load
max
shutil.copyfile
chunk.split
dataset.append
random.seed
model.classifier.named_parameters
loss_avg
argparse.ArgumentParser.parse_args
hasattr.state_dict
data_loader.DataLoader
torch.nn.DataParallel.named_parameters
apex.optimizers.FP16_Optimizer
e.d1.add
ValueError
numpy.ones
data_loader.DataLoader.load_data
true_tags.extend
logging.FileHandler.setFormatter
argparse.ArgumentParser.add_argument
self.tokenizer.convert_tokens_to_ids
torch.load
torch.optim.lr_scheduler.LambdaLR.step
next
torch.nn.DataParallel.zero_grad
load_dataset
save_dataset
end_of_chunk
torch.nn.DataParallel.eval
line.strip.strip
start_of_chunk
optimizer_to_save.state_dict
torch.manual_seed
optimizer.load_state_dict
f1s.append
os.path.join
pytorch_pretrained_bert.BertForTokenClassification.from_pretrained
batch_tags.to.numpy
self.load_sentences_tags
logging.Formatter
numpy.sum
len.format
torch.device
line.strip
os.path.isfile
batch_data.gt
set.update
idx2tag.get
os.mkdir
evaluate
utils.save_checkpoint
argparse.ArgumentParser
tag.strip
loss.mean.mean
loss.mean.item
torch.nn.utils.clip_grad_norm_
random.shuffle
torch.optim.Adam.step
os.path.exists
metrics.f1_score
len
torch.cuda.is_available
torch.tensor
batch_output.detach.cpu
os.makedirs
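
As a hedged sketch (not part of the original report), one way to sanity-check a relaxed constraint is to verify that the entry points listed above still resolve in the installed versions. The check script below is hypothetical; the module and attribute names are copied from the invocation list, and it assumes the packages are installed in the current environment.

import importlib

# (module, dotted attribute path) pairs taken from the invocation list above
ENTRY_POINTS = [
    ("tqdm", "trange"),
    ("pytorch_pretrained_bert", "BertForTokenClassification"),
    ("pytorch_pretrained_bert", "BertTokenizer.from_pretrained"),
    ("pytorch_pretrained_bert", "BertConfig.from_json_file"),
]

def resolve(module_name, dotted_attr):
    # Return True if module_name.dotted_attr can be imported and resolved.
    try:
        obj = importlib.import_module(module_name)
        for part in dotted_attr.split("."):
            obj = getattr(obj, part)
        return True
    except (ImportError, AttributeError):
        return False

if __name__ == "__main__":
    for module_name, attr in ENTRY_POINTS:
        status = "ok" if resolve(module_name, attr) else "MISSING"
        print(module_name + "." + attr + ": " + status)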

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.
