Background
I am a freelancer and the operator of the open source AI project https://github.com/CloudOrc/SolidUI.
This article is written from the perspective of AI product engineering.
Brief Description
Data visualization and AI-generated visualization differ by only a few words, but they are products of different eras, each with its own context.
Data Visualization Development
A brief history of big data: relational databases developed from the 1960s to the 1980s; around 1980, data warehouse theory matured and the concept of business intelligence was proposed; from 2000 to 2010, the Hadoop ecosystem emerged and massive data sets were moved into it; after 2010, advances in compute engines led people to decompose the data warehouse into data lake systems and OLAP systems. Today there are two main approaches: processing data with a unified data lake stack, or computing over data with lightweight MPP-style OLAP engines.
A brief history of visualization: from 1980 to 1990, office software made interactive two-dimensional charts possible; from 2000 to 2010, business intelligence visualization appeared, bundled with the data warehouse to generate two-dimensional charts; from 2010 to the present, data formats have grown more complex, both structured and unstructured, and 2D and 3D visualization systems have matured.
Since 2010, the standards for data visualization have been largely settled, and data visualization can be defined simply: presenting data as graphics and images.
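As a concrete illustration of that definition, the minimal sketch below turns a small table into a two-dimensional chart. It assumes a Python environment with pandas and matplotlib available, and the dataset is made up for the example.

```python
# Minimal example of "presenting data as graphics and images":
# a small table in, a two-dimensional chart out.
import pandas as pd
import matplotlib.pyplot as plt

# A tiny, made-up dataset (for illustration only).
sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "revenue": [120, 135, 150, 170],
})

fig, ax = plt.subplots()
ax.bar(sales["month"], sales["revenue"])  # classic 2D bar chart
ax.set_xlabel("Month")
ax.set_ylabel("Revenue")
ax.set_title("Monthly revenue")
fig.savefig("monthly_revenue.png")        # the data is now an image
```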
AI-Generated Visualization Development
This is not directly related to data visualization: AI-generated visualization presents the behavior of large models as graphics and images.
The history of large generative AI models dates back to the early 2010s. Early natural language processing (NLP) models were small and limited, but as research and technology advanced, they grew larger and more complex. Some important milestones:
Word2Vec (2013) - Word2Vec is a self-supervised method that represents words as vectors in a high-dimensional space, so that words with similar meanings lie close together in the embedding space. It was one of the first widely adopted dense vector representations for natural language processing tasks.
Seq2Seq and the Attention Mechanism (2014-2015) - The sequence-to-sequence (Seq2Seq) model is an end-to-end deep learning architecture widely used in tasks such as machine translation and summarization. Researchers later introduced the attention mechanism to improve performance on long sequences.
Transformer (2017) - The Transformer is a neural network architecture that abandons the traditional recurrent neural network (RNN). It introduces a multi-head self-attention mechanism that can be computed in parallel and scales well, which made training very large models practical (a minimal sketch of the underlying attention operation follows this list).
ELMo (2018) - ELMo (Embeddings from Language Models) is a pre-trained language model that uses a bidirectional LSTM (Long Short-Term Memory) network to produce context-dependent word embeddings. It was one of the first methods to successfully transfer pre-trained language models to downstream NLP tasks.
BERT (2018) - BERT (Bidirectional Encoder Representations from Transformers) is a Transformer-based pre-trained model. By pre-training with the Masked Language Model (MLM) and Next Sentence Prediction (NSP) objectives, BERT achieves a deep understanding of text and significant performance gains across a variety of natural language processing tasks.
GPT (2018-2021) - The GPT (Generative Pre-trained Transformer) series is a family of large-scale pre-trained language models developed by OpenAI. From GPT to GPT-3, these models have grown steadily in size and capability. GPT-3 in particular has 175 billion parameters, making it one of the largest language models of its time, and it shows remarkable performance on generative tasks such as text generation, translation, summarization, and question answering.
During this period, many other models and architectures emerged, such as ULMFiT, RoBERTa, ALBERT, and T5, each breaking performance records on natural language processing tasks. Today, with further advances in computing power and research, the size and capabilities of large generative AI models continue to grow.
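To make the Transformer entry above a little more concrete, here is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of multi-head self-attention. It only illustrates the formula softmax(QK^T / sqrt(d_k))V; a real Transformer adds learned projections, multiple heads, masking, and much more.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

# Toy example: 3 tokens with 4-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
# In a real Transformer, Q, K and V are learned linear projections of x;
# reusing x directly keeps the sketch self-contained.
print(scaled_dot_product_attention(x, x, x).shape)    # (3, 4)
```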
Generative AI took off with OpenAI's GPT models, which opened a new era for visualization. No standard has formed yet; it is still being explored.
A Precise Definition of SolidUI
The precise definition of SolidUI is AI-generated visualization: presenting the behavior of vertical-domain models as graphics and images.
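As a rough, hypothetical sketch of what that definition means in practice (the function names below are invented for illustration and are not SolidUI's actual API), the pipeline looks like this: a natural-language request goes to a model, the model returns a chart specification, and the specification is rendered as an image. In a real system, ask_model would call a large language model; here it returns a canned specification so the sketch stays self-contained.

```python
# Hypothetical AI-generated visualization pipeline (not SolidUI's API):
# natural-language prompt -> model -> chart specification -> rendered image.
import json
import matplotlib.pyplot as plt

def ask_model(prompt: str) -> str:
    """Stand-in for a large-model call; returns a canned chart spec as JSON."""
    return json.dumps({
        "chart": "bar",
        "x": ["Q1", "Q2", "Q3", "Q4"],
        "y": [10, 14, 9, 17],
        "title": "Orders per quarter",
    })

def render(spec_json: str, path: str) -> None:
    """Render a minimal chart specification with matplotlib."""
    spec = json.loads(spec_json)
    fig, ax = plt.subplots()
    if spec["chart"] == "bar":
        ax.bar(spec["x"], spec["y"])
    ax.set_title(spec["title"])
    fig.savefig(path)

render(ask_model("Show orders per quarter as a bar chart"), "orders.png")
```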
Features
Minimalist workflow and concise design
Support for multi-dimensional charts
Support for multiple data sources
Support for different plug-ins, for example the Hugging Face ecosystem
Support for plug-in bots
Support for SolidUI's self-developed models
Support for third-party large model systems
Containerized deployment
Development
Evolution of Thinking
Many features are still constrained by old-era thinking. For example, version 0.1.0 can be seen as a visualization application framework. Starting with version 0.2.0, a chat framework was developed, and the evolution of the application system has been pushed forward continuously.
Amid the current wave of opportunity, we continue to explore downstream, and since version 0.3.0 we have begun exploring the model system.
Because we are in the middle of this wave, many old-era thinking patterns, whether in products, technology, or operations, have been disrupted, especially in the AI field, where the changes are particularly large. Thinking in AI-product terms does not mean keeping what stays the same, fine-tuning a few differences, and ending up with a first-class product; it does not work that way. Many generation models of the old era are simply outdated. While the AI generation system has not yet formed a standard, we need to explore more boundaries and keep probing the bottom.
Keep Probing the Bottom
Anyone who has actually built an AI startup will find that in so-called AI entrepreneurship, even setting aside sales and other commercial concerns, roughly 80% of the work is product engineering and only 20% is underlying technology, and that is on a good day. If you start a company now and you do not happen to be OpenAI or Anthropic, technology may account for 10%, which is still not bad. At that point you realize that for those who came first, not only the technology but also the scale of the industry and even the distribution of benefits have become historical burdens.
Fortunately, SolidUI is an open source product, so it can keep breaking through and become a first-class product. Many of the things that get built may not be what the main creator originally wanted to do, but there is no need to carry that burden forever; the new era has new conventions.
Compatibility, Migration, and Deprecation
Compatibility, migration, and deprecation are unavoidable problems in SolidUI's continuous exploration. The great thing about open source is that, in global collaboration, there are constant additions and forked paths, yet exploration of the main path never stops.
Moat?
Continued investment in research and development to maintain technological innovation and a leading position
Excellent product design and user experience that give the product a competitive advantage
Close community cooperation and support that form a healthy ecosystem
Accumulated information and knowledge, which is itself a form of capital
Conclusion
SolidUI is an open source AI-generated visualization product with innovation and breakthrough at its core, constantly challenging old-era thinking. Through continuous research and development, excellent product design, community cooperation, and accumulated industry experience, SolidUI strives to build a moat in the field of generative AI products.