= Artificial intelligence
{wiki}
= AI
{c}
{synonym}
{title2}
= Artificial general intelligence
{parent=Artificial intelligence}
{wiki}
= AGI
{c}
{synonym}
{title2}
Given enough computational power per dollar, AGI is inevitable, but it is not certain that it will ever happen, given the <Moore's law>[end of Moore's Law].
Alternatively, it could also be achieved with genetically modified biological brains + <brain in a vat>.
Imagine a brain the size of a building, perfectly engineered to solve certain engineering problems, and giving hints to human operators + taking feedback from cameras and audio attached to the operators.
This likely implies <transhumanism> and <mind uploading>.
<Ciro Santilli> joined the silicon industry at one point to help increase our computational capacity and reach AGI.
Ciro believes that the easiest route to full AI, if any, could involve <Ciro's 2D reinforcement learning games>.
= AGI research has become a taboo in the early 21st century
{c}
{parent=Artificial general intelligence}
Due to the failures of earlier generations, which believed they would quickly achieve <AGI> (leading to the AI winters), 21st century researchers have been very afraid of even trying it, for fear of being considered a <crank (person)>, and have instead gone only for smaller subset problems like better neural network designs.
While there is fundamental value in such subset problems, a general view towards the final goal is also very important: we will likely never reach AGI without it.
This is voiced for example in <Superintelligence by Nick Bostrom (2014)> section "Opinions about the future of machine intelligence" which in turn quotes Nils Nilsson:
\Q[
There may, however, be a residual cultural effect on the AI community of its earlier history that makes many mainstream researchers reluctant to align themselves with over-grand ambition. Thus Nils Nilsson, one of the old-timers in the field, complains that his present-day colleagues lack the boldness of spirit that propelled the pioneers of his own generation:
\Q[
Concern for "respectability" has had, I think, a stultifying effect on some AI researchers. I hear them saying things like, "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability." One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human thought - and away from "strong AI" - the variety that attempts to mechanize human-level intelligence.
]
Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.
]
= AI complete
{c}
{parent=Artificial general intelligence}
{wiki}
= Instrumental goal
{c}
{parent=Artificial general intelligence}
= Instrumental convergence
{c}
{parent=Instrumental goal}
{wiki}
= AGI research entity
{c}
{parent=Artificial general intelligence}
* 2020 https://towardsdatascience.com/four-ai-companies-on-the-bleeding-edge-of-artificial-general-intelligence-b17227a0b64a Top 4 AI companies leading in the race towards Artificial General Intelligence
* Douglas Hofstadter according to https://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/ The Man Who Would Teach Machines to Think (2013) by <James Somers>
= AGI software
{c}
{parent=Artificial general intelligence}
= Artificial general intelligence software
{synonym}
{title2}
* https://ai.stackexchange.com/questions/5428/how-can-people-contribute-to-agi-research mentions:
* https://github.com/opennars/opennars
* https://github.com/brohrer/robot-brain-project
= OpenCog
{c}
{parent=AGI software}
{wiki}
= Ben Goertzel
{c}
{parent=OpenCog}
{tag=AGI research entity}
{wiki}
https://www.reddit.com/r/artificial/comments/b38hbk/what_do_my_fellow_ai_researchers_think_of_ben/ What do my fellow AI researchers think of Ben Goertzel and his research?
= SingularityNET
{c}
{parent=Ben Goertzel}
{wiki}
https://singularitynet.io/
<Ben Goertzel>'s <fog computing> project to try and help achieve <AGI>.
= NuNET
{c}
{parent=SingularityNET}
{tag=Fog computing}
= Turing test
{c}
{parent=Artificial general intelligence}
{wiki}
= CAPTCHA
{c}
{parent=Turing test}
{wiki}
= reCAPTCHA
{c}
{parent=CAPTCHA}
{wiki}
= AI alignment
{c}
{parent=Artificial intelligence}
{wiki}
As highlighted e.g. at <Human compatible by Stuart J. Russell (2019)>, AI alignment is intrinsically linked to the idea of <utility> in <economy>.
= AI safety
{c}
{parent=AI alignment}
Basically ensuring that good <AI alignment> allows us to survive the singularity.
= AI training game
{c}
{parent=Artificial intelligence}
{tag=Serious game}
<Ciro Santilli> took a stab at it with <Ciro's 2D reinforcement learning games>, but he didn't sink enough time into that project.
= gvgai
{c}
{parent=AI training game}
http://www.gvgai.net/
Similar goals to <Ciro's 2D reinforcement learning games>, but they were focusing mostly on discrete games.
The group seems to have died off circa 2020, which is a shame.
= Can AGI be trained in simulations?
{c}
{parent=AI training game}
Or is real world data necessary, e.g. with <robots>?
Fundamental question related to <Ciro's 2D reinforcement learning games>.
Bibliography:
* https://youtu.be/i0UyKsAEaNI?t=120 How to Build AGI? Ilya Sutskever interview by Lex Fridman (2020)
= OpenAI
{c}
{parent=AI training game}
{wiki}
\Q[In 2019, <OpenAI> transitioned from non-profit to for-profit]
so what's the point of "Open" in the name anymore?
* https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ "The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism."
* https://archive.ph/wXBtB How OpenAI Sold its Soul for \$1 Billion
* https://www.reddit.com/r/GPT3/comments/n2eo86/is_gpt3_open_source/
= OpenAI Gym
{c}
{parent=OpenAI}
https://github.com/openai/gym
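The classic Gym interface boils down to two methods: `reset()` returns an initial observation, and `step(action)` returns `(observation, reward, done, info)` (this is the pre-0.26 API; newer versions split `done` into `terminated`/`truncated`). A minimal self-contained sketch with a hypothetical one-step environment, not using the library itself:

```python
import random

class GuessBitEnv:
    """Hypothetical toy environment following the classic Gym interface."""

    def reset(self):
        # Start a new episode: secretly pick a bit the agent must guess.
        self.target = random.randint(0, 1)
        return 0  # dummy observation: the agent sees nothing useful

    def step(self, action):
        # Reward 1 for a correct guess, 0 otherwise; episode ends immediately.
        reward = 1.0 if action == self.target else 0.0
        return 0, reward, True, {}  # (observation, reward, done, info)

env = GuessBitEnv()
obs = env.reset()
obs, reward, done, info = env.step(random.randint(0, 1))
# done is always True here since every episode lasts a single step
```

Real Gym environments follow the same loop, just with richer observation and action spaces.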
= Artificial intelligence bibliography
{c}
{parent=Artificial intelligence}
{wiki}
= Human compatible by Stuart J. Russell (2019)
{c}
{parent=Artificial intelligence bibliography}
{tag=AI alignment}
{tag=Good book}
{wiki=Human_compatible}
The key takeaway is that setting an explicit <value function> for an <AGI> entity is a good way to destroy the world due to poor <AI alignment>. We are more likely to not destroy the world by creating an AI whose goal is to "do what humans want it to do", but in such a way that it does not know beforehand what it is that humans want, and has to learn it from them.
Some other cool ideas:
* a big thing that is missing for AGI in the 2010's is some kind of more hierarchical representation of the continuous input data of the world, e.g.:
* when we behave, we do things in subroutines. E.g. life goal: sate hunger. Subgoal: apply for some grant. Subsubgoal: eat, sleep, take a shower. Subsubsubgoal: move muscles to get me to the table and open a can.
* we can group continuous things into higher objects, e.g. all these pixels I'm seeing in front of me are a computer. So I treat all of them as a single object in my mind.
* <game theory> can be seen as part of <artificial intelligence> that deals with scenarios where multiple intelligent agents are involved
* <probability> plays a crucial role in our everyday living, even though we don't think about it very explicitly. He gives a very good example of the cost/risk tradeoffs of planning a trip to the airport to catch a plane. E.g.:
* should you leave 2 days in advance to be sure you'll get there?
* should you pay an armed escort to make sure you are not attacked on the way?
* <economy>, and notably the study of the <utility>, is intrinsically linked to <AI alignment>
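The book's central proposal, an AI that is uncertain about the human's reward function and learns it from observed human behavior, can be sketched as a toy Bayesian update. This is a hypothetical two-reward example, not from the book: the agent assumes the human picks their preferred action with high probability, and shifts posterior mass accordingly.

```python
# Two candidate reward functions the human might hold (hypothetical).
candidate_rewards = {
    "likes_tea":    {"tea": 1.0, "coffee": 0.0},
    "likes_coffee": {"tea": 0.0, "coffee": 1.0},
}
posterior = {h: 0.5 for h in candidate_rewards}  # uniform prior

def observe_human_choice(choice, noise=0.1):
    """Bayes update: assume the human picks their best action w.p. 1 - noise."""
    for h, rewards in candidate_rewards.items():
        best = max(rewards, key=rewards.get)
        likelihood = (1 - noise) if choice == best else noise
        posterior[h] *= likelihood
    z = sum(posterior.values())
    for h in posterior:
        posterior[h] /= z  # renormalize

observe_human_choice("tea")
observe_human_choice("tea")
# Probability mass shifts towards "likes_tea" after repeated tea choices.
```

The point of the sketch: the agent never hard-codes a value function; it only ever holds a distribution over what the human might want, which is the property Russell argues keeps it corrigible.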
= Superintelligence by Nick Bostrom (2014)
{c}
{parent=Artificial intelligence bibliography}
{wiki=Superintelligence:_Paths,_Dangers,_Strategies}
Good points:
* <Post mortem connectome extraction with microtome>
* the idea of a singleton, i.e. one centralized power, possibly AGI-based, that decisively takes over the planet/reachable universe
* <AGI research has become a taboo in the early 21st century> section "Opinions about the future of machine intelligence"