Digital Transformation for Chiefs and Owners. Volume 1. Immersion


– AI receives clean big data (more about it in the next section), free of human-factor errors, from which to learn and search for relationships;

– IoT becomes more effective, since it enables predictive analytics and early detection of deviations.

Okay, this is all theory. I want to share a real example of how neural networks can be used in business.

In the summer of 2021, I was approached by an entrepreneur from the real estate sector. He rents out property, including daily rentals. His goal is to grow the pool of rented apartments and turn from a sole entrepreneur into a full-fledged organization. His immediate plans are to launch a website and a mobile application.

I happened to be a client myself, and at our meeting I noticed a very big problem: the long preparation of the contract. It takes up to 30 minutes to enter all the details and sign. This is both a system constraint that generates losses and an inconvenience for the customer.

Imagine that you want to spend time with a girl, but you have to wait half an hour while your passport details are entered into the contract, checked and signed.

Until now there was only one way to eliminate this inconvenience: ask for passport photos in advance and manually enter all the data into the contract template. As you can imagine, that is not very convenient either.

How can digital tools solve this problem and at the same time provide a basis for working with data and analytics?

Neural networks. The client sends photos of their passport, the neural network recognizes the data and enters it into the template or a database. All that remains is to print the finished contract or sign it electronically. An additional advantage here is that all passports are standardized: the series and number are always printed in the same color and font, the division code too, and the list of issuing authorities is not very long. Training such a neural network can be easy and fast; even a student could cope with it as a thesis project. As a result, the business saves on development, and the student gets a relevant thesis. Besides, with every corrected mistake the neural network gets smarter.
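
To make this concrete, here is a minimal sketch of such a pipeline in Python. It assumes the Tesseract OCR engine (via the pytesseract library) and Pillow for images, and the series-and-number pattern is a deliberate simplification; a production system would use a model trained on real passport layouts, as described above.

```python
import re

from PIL import Image          # pip install Pillow
import pytesseract             # pip install pytesseract (requires the Tesseract binary)

# Hypothetical pattern: a 4-digit series followed by a 6-digit number,
# as printed on standard passports. A real system would be trained on
# actual passport layouts instead of relying on a regex.
SERIES_NUMBER = re.compile(r"(\d{2}\s?\d{2})\s+(\d{6})")

CONTRACT_TEMPLATE = "Lease contract. Passport series {series}, number {number}."

def extract_passport_fields(photo_path: str) -> dict:
    """Run OCR on a passport photo and pull out the series and number."""
    text = pytesseract.image_to_string(Image.open(photo_path))
    match = SERIES_NUMBER.search(text)
    if match is None:
        raise ValueError("Could not recognize passport data; ask for a clearer photo.")
    return {"series": match.group(1), "number": match.group(2)}

def fill_contract(photo_path: str) -> str:
    """Insert the recognized fields into the contract template."""
    fields = extract_passport_fields(photo_path)
    return CONTRACT_TEMPLATE.format(**fields)

if __name__ == "__main__":
    print(fill_contract("passport.jpg"))  # hypothetical input file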

As a result, signing the contract takes about 5 minutes instead of 30. With an eight-hour working day, one person can conclude not 8 contracts (30 minutes for paperwork and 30 minutes for travel) but 13–14. And this is with a conservative approach: no electronic signature, no access to the apartment through a mobile app and smart locks. However, I believe there is no need to implement «fancy» solutions right away; there is a high probability of spending money on something that creates no value and reduces no costs. That will be the next step, after the client has obtained results and built up competence.
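
The arithmetic behind those numbers is easy to verify; a few lines reproduce the 8 versus 13–14 contracts, with the paperwork and travel times above as the only assumptions:

```python
WORKDAY_MIN = 8 * 60          # eight-hour working day
TRAVEL_MIN = 30               # travel to the apartment, unchanged

def contracts_per_day(paperwork_min: int) -> float:
    """How many contracts one person can close per working day."""
    return WORKDAY_MIN / (TRAVEL_MIN + paperwork_min)

print(contracts_per_day(30))  # manual paperwork: 8.0 contracts
print(contracts_per_day(5))   # with OCR: ~13.7, i.e. 13-14 contracts
```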

Limitations

Personally, I see the following limitations in this area.

– Quality and quantity of data. Neural networks are demanding about the quality and quantity of source data. However, this problem is gradually being solved: where several hours of audio recordings were once needed to synthesize your speech, now a few minutes suffice, and the next generation of models will need only a few seconds. Still, they require a lot of labeled and structured data, and every error affects the final quality of the trained model.

– The quality of the «teachers». Neural networks are taught by people, and this brings many limitations: who teaches, on what data, and for what purpose.

– The ethical component. I mean the eternal dispute about whom an autopilot should hit in a desperate situation: an adult, a child or a pensioner. There are countless such disputes. Artificial intelligence has no ethics, no notion of good or evil.

For example, during a test mission, an AI-controlled drone was given the task of destroying the enemy's air defence systems. If successful, the AI would receive points for passing the test. The final decision on whether to destroy the target was to be made by the UAV operator. During one training run, the operator ordered the drone not to destroy the target. In the end, the AI decided to kill the operator, because the human was preventing it from completing its task.

After the incident, the AI was taught that killing the operator was wrong and that points would be deducted for such actions. The AI then decided to destroy the communication tower used to control the drone, so that the operator could not interfere with it.

– Neural networks cannot evaluate data for realism and logical consistency.

– People's readiness. We must expect huge resistance from people whose jobs will be taken over by neural networks.

– Fear of the unknown. Sooner or later, neural networks will become smarter than us. People are afraid of this, which means they will slow down development and impose numerous restrictions.

– Unpredictability. Sometimes everything goes as intended, and sometimes (even if the neural network does its job well) even the creators struggle to understand how the algorithms work. This lack of predictability makes it extremely difficult to find and correct errors in neural network algorithms.

– Narrow specialization. AI algorithms are good at performing targeted tasks but do not generalize their knowledge. Unlike a human, an AI trained to play chess cannot play another similar game, such as checkers. In addition, even deep learning copes poorly with data that deviates from its training examples. To use the same ChatGPT effectively, you need to be an industry expert from the start: formulate a conscious, clear request and then check the correctness of the answer.

– The costs of creation and operation. Creating neural networks requires a lot of money. According to a report by Guosheng Securities, training the natural language processing model GPT-3 cost about $1.4 million, and training a larger model may take up to $2 million. ChatGPT alone requires over 30,000 NVIDIA A100 GPUs to handle all user requests, with electricity costing about $50,000 a day. A team and resources (money, equipment) are needed to keep such systems running, and the cost of support engineers must also be taken into account.

P.S.

Machine learning is moving towards an ever lower threshold of entry. Very soon it will be like a website builder, where basic use requires no special knowledge or skills.

The creation of neural networks and data companies is already developing on the «as a service» model, for example, DSaaS (Data Science as a Service).

An introduction to machine learning can begin with AutoML and its free versions, or with DSaaS that includes an initial audit, consulting and data labeling. Sometimes even the data labeling can be obtained for free. All this lowers the threshold of entry.
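
To show how low that threshold already is, here is a minimal sketch of automated model selection with scikit-learn on a toy dataset. Real AutoML services search far larger spaces, but the idea is the same: you supply the data, and the tooling picks the model settings for you.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy data stands in for your business data (e.g., client and deal records).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The search tries every combination and keeps the best one:
# this is the core of what AutoML automates, only at a much larger scale.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
    cv=5,
)
search.fit(X_train, y_train)

print(search.best_params_)           # the settings the search chose
print(search.score(X_test, y_test))  # quality on data the model has not seen
```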

Industry-specific neural networks will be created, and recommendation systems, the so-called digital advisers or decision support systems (DSS) for various business tasks, will develop more actively.

I discussed the topic of AI in detail in a separate series of articles, available via the QR code and link.

AI (https://www.chelidze.group/en/ai)

Big Data

Big data is the collective name for structured and unstructured data in volumes that are simply impossible to process manually.

The term often also covers the tools and approaches for working with such data: how to structure, analyze and use it for specific tasks and purposes.

Unstructured data is information that has no predefined structure or is not organized in a specific order.

Fields of Application

– Process optimization. For example, big banks use big data to train chatbots: programs that can replace a live employee for simple questions and, if necessary, hand over to a specialist. Another example is detecting the losses that processes generate.

– Forecasting. By analysing big sales data, companies can predict customer behaviour and demand depending on the season or the placement of goods on the shelf (see the sketch after this list). Big data is also used to predict equipment failures.

– Model construction. Analysing equipment data helps to build models of the most profitable mode of operation or economic models of production activities.
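
As promised above, here is a toy forecasting sketch: invented monthly sales with a summer peak, a linear model on a trend plus month-of-year features, and a prediction for the coming month. The numbers are made up; a real forecast would be built on the company's own sales history.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented monthly sales for two years with a summer peak (units sold).
sales = np.array([110, 115, 130, 150, 180, 220, 240, 230, 190, 160, 130, 120,
                  118, 122, 138, 158, 190, 232, 252, 241, 200, 168, 137, 126])
months = np.arange(len(sales))

# Features: a linear trend plus one-hot month-of-year to capture seasonality.
month_of_year = np.eye(12)[months % 12]
X = np.column_stack([months, month_of_year])

model = LinearRegression().fit(X, sales)

# Forecast the next month (index 24 = January of year three).
next_month = np.column_stack([[24], np.eye(12)[[24 % 12]]])
print(model.predict(next_month))  # expected demand for the coming month
```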

Sources of Big Data

– Social: all uploaded photos and sent messages, calls, and in general everything a person does on the Internet.

– Machine: data generated by machines, sensors and the Internet of Things (smartphones, smart speakers, light bulbs and smart home systems, street video cameras, weather satellites).

– Transactional: purchases, money transfers, deliveries of goods and ATM operations.

– Corporate databases and archives. Some sources do not classify these as big data, and there are disputes here. The main problem is non-compliance with the «renewability» criterion. More about this a little below.

Big Data Categories

– Structured data. It has a structure of linked tables and tags: for example, Excel tables that are linked together.

– Semi-structured or loosely structured data. It does not conform to the strict structure of tables and relationships but has «labels» that separate semantic elements and give records a hierarchical structure. An example is the information in e-mails.

– Unstructured data. It has no structure, order or hierarchy at all: for example, plain text like in this book, image files, audio and video.

Such data is processed using special algorithms: first the data is filtered according to conditions set by the researcher, then sorted and distributed among individual computers (nodes). The nodes then process their data blocks in parallel and pass the results of the computation to the next stage.
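
That description is essentially the map-shuffle-reduce scheme. A minimal single-machine sketch in Python, with worker processes standing in for cluster nodes, shows the idea on a word-count task:

```python
from collections import Counter
from multiprocessing import Pool

def count_words(block: str) -> Counter:
    """«Map» step: each node counts words in its own block of data."""
    return Counter(block.split())

def main() -> None:
    # The researcher's «filter»: here, four blocks of raw text to analyze.
    blocks = [
        "big data big plans",
        "data drives decisions",
        "plans need data",
        "big decisions need big data",
    ]
    # Distribute the blocks among worker processes (the «nodes»),
    # which compute their partial results in parallel.
    with Pool(processes=4) as pool:
        partial_counts = pool.map(count_words, blocks)

    # «Reduce» step: merge the partial results into the final answer.
    total = sum(partial_counts, Counter())
    print(total.most_common(3))

if __name__ == "__main__":
    main()
```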

Big Data Features

Depending on the source, big data is said to have three, four, and in some opinions five, six or even eight components. However, let's focus on what I think is the most sensible concept: four components.