Pinned
-
Basic-UI-for-GPT-J-6B-with-low-vram (Public)
A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Loading the model requires 12 GB of free RAM. (See the low-memory loading sketch after this list.)
-
Basic-UI-for-GPT-Neo-with-low-vram (Public)
A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum).
-
Temporal-Neuron-Variance-Pruning-Demo (Public)
An implementation of "Variance Pruning: Pruning Language Models via Temporal Neuron Variance" by Berry Weinstein and Yonatan Belinkov. (See the pruning sketch after this list.)
Jupyter Notebook · 1 star
-
saving-and-loading-large-models-pytorch (Public)
I am using this to load GPT-J-6B without excessive RAM usage. (See the sharded save/load sketch after this list.)
-
auto-function-serving (Public)
A Python package that automatically offloads a function call to an HTTP server running on localhost, using a decorator. Compatible with multiprocessing, pickle, Flask, FastAPI, async, etc. (See the decorator sketch after this list.)
Python · 1 star
-
strongmock (Public)
StrongMock is a powerful mocking library for Python that leverages low-level ctypes functionality to provide extensive mocking capabilities. Some care may be needed while using this.
Python · 2 stars
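Below are a few short sketches referenced from the pinned entries above. First, for Basic-UI-for-GPT-J-6B-with-low-vram: a minimal way to load GPT-J-6B without an excessive RAM spike is to pull the fp16 checkpoint with Hugging Face transformers and enable low_cpu_mem_usage. This is only a sketch of the general idea, assuming the EleutherAI/gpt-j-6B Hub id and its float16 branch; the repository reaches its much lower VRAM figures with its own loading and offloading scheme, which is not shown here.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# fp16 weights roughly halve RAM and VRAM use; low_cpu_mem_usage avoids
# materialising a second full copy of the weights in system RAM while loading.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",      # assumed Hub id
    revision="float16",         # fp16 branch of the checkpoint
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

model = model.to("cuda")        # needs roughly 12 GB of VRAM when fully on GPU
inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))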
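For Temporal-Neuron-Variance-Pruning-Demo, the rough idea of variance-based pruning can be sketched as: record each hidden unit's activations over many inputs, rank units by activation variance, and zero out the least variable ones. The toy layer, data, and pruning fraction below are made up for illustration; the paper and the repository define the actual criterion and procedure.

import torch
import torch.nn as nn

layer = nn.Linear(128, 256)         # toy layer whose hidden units we prune

# Collect hidden activations over a stream of inputs.
acts = []
with torch.no_grad():
    for _ in range(32):
        x = torch.randn(16, 128)
        acts.append(torch.relu(layer(x)))
acts = torch.cat(acts, dim=0)       # (num_samples, 256)

# Per-neuron variance of the collected activations.
variance = acts.var(dim=0)          # (256,)

# Zero out the weights and biases of the lowest-variance neurons.
prune_fraction = 0.3
k = int(prune_fraction * variance.numel())
prune_idx = torch.argsort(variance)[:k]
with torch.no_grad():
    layer.weight[prune_idx] = 0.0
    layer.bias[prune_idx] = 0.0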
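For saving-and-loading-large-models-pytorch, one way to keep peak RAM low is to save every tensor in the state dict to its own file and copy the shards into the model one at a time, so the whole checkpoint is never held in memory at once. This is only a sketch of that idea under assumed names (save_sharded, load_sharded, a directory of .pt files); it is not the repository's actual format or API.

import os
import torch

def save_sharded(model, ckpt_dir):
    # One file per tensor, so nothing ever holds the whole checkpoint at once.
    os.makedirs(ckpt_dir, exist_ok=True)
    for name, tensor in model.state_dict().items():
        torch.save(tensor, os.path.join(ckpt_dir, name + ".pt"))

def load_sharded(model, ckpt_dir):
    # state_dict() tensors share storage with the model's parameters, so
    # copying into them updates the model in place, one shard at a time.
    state = model.state_dict()
    for name, param in state.items():
        shard = torch.load(os.path.join(ckpt_dir, name + ".pt"), map_location="cpu")
        with torch.no_grad():
            param.copy_(shard)
        del shard

After save_sharded(model, "checkpoint/"), the model can later be rebuilt (for example with uninitialised weights) and filled in with load_sharded(model, "checkpoint/").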
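For auto-function-serving, the general pattern the description refers to can be sketched as a decorator that registers a function and returns a proxy which pickles the arguments, POSTs them to an HTTP server on localhost, and unpickles the result. Everything below (serve_over_http, start_server, the port, the wire format) is made up for illustration and is not the package's actual API or process model.

import functools
import pickle
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

_REGISTRY = {}   # function name -> callable, used on the server side
PORT = 8765      # assumed port

class _Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        payload = pickle.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = _REGISTRY[payload["name"]](*payload["args"], **payload["kwargs"])
        body = pickle.dumps(result)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

def serve_over_http(func):
    # Register the real function and hand back a proxy that calls it over HTTP.
    _REGISTRY[func.__name__] = func

    @functools.wraps(func)
    def proxy(*args, **kwargs):
        data = pickle.dumps({"name": func.__name__, "args": args, "kwargs": kwargs})
        with urlopen(Request(f"http://127.0.0.1:{PORT}", data=data)) as resp:
            return pickle.loads(resp.read())

    return proxy

def start_server():
    server = HTTPServer(("127.0.0.1", PORT), _Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

@serve_over_http
def heavy_computation(x):
    return x * x

if __name__ == "__main__":
    start_server()
    print(heavy_computation(12))    # the call is executed by the local server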