Deng Jiejin, Zhang Yijia, Dong Xinrui, Zhang Fuyao and Lu Mingyu, School of Information Science and Technology, Dalian Maritime University, Dalian 116024, Liaoning, China
Predicting drug-target interaction (DTI) plays a crucial role in the study of drug repositioning. In computer-aided drug development, high-performance computers are used to simulate drug development tasks, which is a promising area of research. In drug-target affinity prediction, compared with the several statistical and machine learning-based models that have been proposed, deep learning approaches outperform traditional methods. To improve accuracy, deep learning-based prediction of drug-target affinity (DTA) has been a constant focus of research. More and more advanced models have been proposed in recent years, but they ignore the information carried by different forms of data and do not make full use of it. This paper proposes a novel end-to-end learning framework for DTA prediction called MultiDTA. In this model, we use multi-channel inputs to predict drug-target affinity, which makes full use of the different information in the data. Specifically, we extract sequence and contextual information of drugs and proteins using convolutional neural networks and LSTM networks. The drug and protein sequences are converted into graph structures, and graph neural networks are then used to extract spatial structure information from drugs and proteins. We conduct extensive experiments to compare our proposed model with state-of-the-art models, and it is highly competitive. The code of MultiDTA and the relevant data are available at: https://github.com/dengjiejin/MultiDTA.
drug-target affinity prediction, representation learning, multi-channel inputs, GCN.
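The sequence channels described above depend on turning raw SMILES strings and amino-acid sequences into fixed-length integer vectors before they reach the CNN and LSTM layers. A minimal sketch of that common preprocessing step, assuming illustrative vocabularies and maximum lengths that are not taken from the MultiDTA paper:

```python
# Hypothetical sketch: integer-encoding drug SMILES and protein sequences
# for the CNN/LSTM channels. Vocabularies and max lengths are assumptions.

SMILES_VOCAB = {ch: i + 1 for i, ch in enumerate("#()+-=123456789BCFHINOPS[]clnos")}
PROTEIN_VOCAB = {ch: i + 1 for i, ch in enumerate("ACDEFGHIKLMNPQRSTVWY")}

def encode(seq, vocab, max_len):
    """Map characters to integer ids, pad with 0, truncate to max_len."""
    ids = [vocab.get(ch, 0) for ch in seq[:max_len]]
    return ids + [0] * (max_len - len(ids))

drug = encode("CC(=O)Oc1ccccc1", SMILES_VOCAB, 100)   # an aspirin-like SMILES fragment
target = encode("MKTAYIAKQR", PROTEIN_VOCAB, 1000)    # a short protein prefix
```

The padded integer vectors can then be fed to an embedding layer in whichever deep learning framework the model is built with.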
Mei Yin, Nanjing, China
With the black hole explosion under incredibly high temperatures leading to cosmic formation billions of years ago, all matter was in a gaseous phase. As the temperature dropped, adjacent atoms making up the gases attracted each other under atomic attractive forces and formed a variety of big, small or tiny gaseous lumps. As the temperature kept dropping, a tiny gaseous lump became colder, contracted, got smaller, and turned into a liquid and subsequently a solid, according to the principle of expanding when heated and contracting when cooled in general cases. Gradually it developed and formed a human- or animal-like fetus or a plant-like seed. If it had the same composition as a human, a cow or a sunflower, then a human, a cow or a sunflower formed. Similar cases happened with other humans, animals and plants. Humans neither evolved from apes nor shared a common ancestor with apes.
Origins of humans, animals and plants, cosmic formation, properties and counts of brain neurons in language and thinking areas, brain structure, black hole.
Huizhu Li and Lixi Fang, School of Information, Central University of Finance and Economics, Beijing, 102206
In recent years, cloud computing has received extensive attention. Cloud storage is one of the most widely used services in cloud computing, with a wide range of applications, but some security issues have followed. One of the more serious is that users' private data can be leaked or modified by cloud storage service providers, causing serious losses to users. In response to this problem, this paper proposes a recursive polynomial secret sharing threshold scheme in which users and cloud storage service providers jointly manage users' private data. In this scheme, the last two polynomials in the recursive equation are used to distribute secret shares to cloud storage service providers and users respectively, so that only users and cloud storage service providers that exceed the threshold can cooperate to recover the key. The paper then analyzes the security of the scheme from three aspects: correctness, computational security and robustness. Finally, a simple example shows that the scheme can effectively protect the confidentiality of users' private data.
Cloud Storage, Threshold Secret Sharing, Recursive Secret Sharing, Cloud Storage Service Provider.
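The recursive scheme above extends classical polynomial threshold sharing. As background, here is a minimal sketch of standard Shamir (t, n) secret sharing over a prime field, the building block such schemes generalize; this is not the paper's recursive construction, and the prime and secret are illustrative:

```python
# Minimal Shamir (t, n) threshold sharing over GF(P): any t shares recover
# the secret via Lagrange interpolation at x = 0; fewer reveal nothing.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte key

def make_shares(secret, t, n):
    """Split `secret` into n shares using a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret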
L O Toriola-Coker1*, Nureni Asafe Yekini1, H Alaka2, 1School of Engineering, Yaba College of Technology, Yaba, Lagos, Nigeria, 2University of Hertfordshire, Hatfield, Hertfordshire, UK
Artificial intelligence technology is based on the design of machines or computer applications that mimic human intelligence. The use of artificial intelligence in teaching and learning in civil engineering is a welcome development. This paper presents a conceptual framework of artificial intelligence systems for teaching, learning and administration in civil engineering education. The proposed system is to be designed using the following tools: Extensible Markup Language (XML) to develop the GUI, Hypertext Preprocessor (PHP) for the web user interface (WUI), Apache for the middleware, MySQL for the database design, and UML to visualize the design of the system. If the system is developed and implemented, it will go a long way toward advancing teaching, learning and educational administration in the civil engineering profession.
Artificial Intelligence, learning management system, Civil Engineering, MySQL.
Jichuan Wang1, Yu Sun2, 1Northwood High School, 4515 Portola Pkwy, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA 91768
Often, when we create a design or drawing on a digital platform, we face the problem that the lines we created do not close up the shape correctly. When we use the paint bucket tool, it fills the whole screen with one click, and we need to look through the whole design again to find where it wasn't closed up correctly, which not only takes up much of our time but is also very annoying. By defining a rule for whether a shape is completely closed or open, this application checks whether the shapes we created are closed by going through each pixel and detecting whether there are holes around it. The application could serve anyone creating 2D designs on a computer, whether for hobby or work, saving the time spent looking through the whole design again to find errors that are often really tiny.
Shapes, checker, fixer, pixel.
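One standard way to implement the closure test described above is to flood-fill the background from the canvas border: if the fill reaches a point inside the shape, the outline has a hole. A hypothetical sketch on a binary pixel grid (the paper's exact rule and data layout are not specified here; 1 marks a line pixel, 0 an empty one):

```python
# Flood-fill closure test: a shape is closed iff its interior point is
# unreachable from the canvas border through empty (0) cells.

def outside_reachable(grid):
    """Return the set of empty cells reachable from the border (4-connected)."""
    h, w = len(grid), len(grid[0])
    seen, stack = set(), []
    for x in range(w):
        stack += [(0, x), (h - 1, x)]
    for y in range(h):
        stack += [(y, 0), (y, w - 1)]
    while stack:
        y, x = stack.pop()
        if 0 <= y < h and 0 <= x < w and grid[y][x] == 0 and (y, x) not in seen:
            seen.add((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return seen

def is_closed(grid, inner):
    """True if the interior point `inner` is sealed off from the outside."""
    return inner not in outside_reachable(grid)
```

The cells where the outside fill meets a gap in the outline are exactly the "holes" a fixer would need to report or patch.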
Uchenna Onyemaechi1, Eberechi Ruth Uchenna2 and Nopriadi Saputra2, 1Management Department, Abia State University, Uturu, Nigeria and 2Management Department, Bina Nusantara University, Jakarta, Indonesia
This study examined the influences of environment and technology on the relationship between telecommuting and workplace performance in service industries in Nigeria. The objectives of the study were to examine whether environment as a moderating factor significantly affects the relationship between telecommuting and workplace performance, and to determine the extent to which technology as a moderating variable influences that relationship. To achieve these objectives, a survey research design was adopted. The population of the study was one thousand one hundred and ninety-five (1195), and the Taro Yamane formula was used to derive the sample size of three hundred (300). The techniques employed in analyzing the data were descriptive statistics and ordinal logistic regression analysis. The results indicated that environment as a moderating factor significantly affects the relationship between telecommuting and workplace performance. It was also revealed that technology as a moderating variable has a significant influence on that relationship. Based on the findings, the study concluded that environment and technology to a reasonable extent influence the relationship between telecommuting and workplace performance. It was recommended, among other things, that organizations ensure that the environment of their telecommuters is conducive and has good network and internet connectivity if it is to affect the relationship between telecommuting and workplace performance.
environment, technology, telecommuting, workplace performance.
Xiang Zhang1, Yan Liu1, Gong Chen1 and Sheng-hua Zhong2, 1Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China, 2College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
Chatbots have long been an important research topic in artificial intelligence and have attracted much attention recently. Despite great progress in language ability, the interactions between users and chatbots remain rather generic, short-term, and transactional. It has always been challenging to develop truly personal chatbots, let alone establish long-term and affective connections. This paper first brings up “nurture” as a new interaction mode with chatbots. We introduce the nurture framework and design the learning algorithm and nurture functions accordingly. We then present LightBlue, a platform that allows non-professionals to nurture personal chatbots from scratch. Experiments on both closed-domain and open-domain tasks have shown the effectiveness of the proposed framework and demonstrated a promising way to establish long-term interaction between users and chatbots.
Personal Chatbot, Conversation Agent, Nurture, Human-chatbot Interaction, Long-term Relationship.
Yuyang Lou1, Yu Sun2, 1Charles Wright Academy, 7723 Chambers Creek Rd W, Tacoma, WA 98467, 2California State Polytechnic University, Pomona, CA 91768
In the past few years, the internet and online social networks have developed drastically, promoting the development of online learning programs. These programs provide opportunities for a digital learning experience that allows students to explore beyond what's taught in school. However, having a clear understanding of which topic might interest a user and motivate the user to explore it further is hard for both the user and the learning program. This paper proposes a centralized method of predicting what the user would be interested in and providing them with educational content recommendations. Our design builds upon eye-tracking techniques, which allow us to capture users' eye movements, and object recognition achieved by machine learning, which allows us to examine the specific object the users are looking at and provide data for analyzing the users' interests. Our results show a success rate of __% in analyzing what the user is truly looking at, using our decision heuristic among other techniques.
Eye Tracking, Deep learning, computer vision.
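Once the detector has produced bounding boxes and the tracker has produced gaze points, the interest-analysis step reduces to counting how long the gaze dwells inside each box. A simplified sketch of that matching step, assuming an illustrative box format and labels (not the paper's actual data structures):

```python
# Tally gaze samples per detected object to rank what the user looked at
# most. boxes maps a label to an axis-aligned box (x1, y1, x2, y2).
from collections import Counter

def rank_interest(gaze_points, boxes):
    """Return (label, dwell_count) pairs sorted by descending dwell."""
    dwell = Counter()
    for gx, gy in gaze_points:
        for label, (x1, y1, x2, y2) in boxes.items():
            if x1 <= gx <= x2 and y1 <= gy <= y2:
                dwell[label] += 1
    return dwell.most_common()
```

With gaze sampled at a fixed rate, each count is proportional to dwell time, so the top-ranked label is a natural input for content recommendation.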
Basim Mahbooba, Mohan Timilsina and Martin Serrano, The Insight Centre, NUI Galway, Galway City, Ireland
Identifying network attacks is a crucial task for Internet of Things (IoT) security. The increasing number of IoT devices is creating massive amounts of data and opening new security vulnerabilities that malicious users can exploit to gain access. Recently, the IoT security research community has been using data-driven approaches to detect anomalies, intrusions, and cyber-attacks. However, getting accurate IoT attack data is time-consuming and expensive. On the other hand, evaluating complex security systems requires costly and sophisticated modelling practices with expert security professionals. Thus, we have used simulated datasets to create different possible scenarios for IoT data labeled with malicious and non-malicious nodes. For each scenario, we tested off-the-shelf machine learning algorithms for malicious node detection. Experiments on the scenarios demonstrate the benefits of simulated datasets for assessing the performance of ML algorithms.
IoT Simulation, Data Labels, Malicious Nodes, Attacks, Trust, Prediction.
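The workflow above (simulate labeled node behaviour, then fit an off-the-shelf classifier) can be illustrated with a toy example. The features (packet drop rate, forwarding delay) and the nearest-centroid classifier are illustrative assumptions standing in for the paper's actual simulation and ML algorithms:

```python
# Toy pipeline: simulate labeled IoT nodes, fit a nearest-centroid
# classifier, and use it to flag malicious nodes.
import random

def simulate(n, malicious_ratio=0.3, seed=42):
    """Generate n nodes as ((drop_rate, delay_ms), label) with label 1 = malicious."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        bad = rng.random() < malicious_ratio
        drop = rng.gauss(0.6 if bad else 0.1, 0.05)   # packet drop rate
        delay = rng.gauss(80 if bad else 20, 5.0)     # forwarding delay (ms)
        data.append(((drop, delay), int(bad)))
    return data

def fit_centroids(data):
    """Compute the mean feature vector per class."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (drop, delay), label in data:
        s = sums[label]
        s[0] += drop; s[1] += delay; s[2] += 1
    return {c: (s[0] / s[2], s[1] / s[2]) for c, s in sums.items() if s[2]}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    return min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
```

In practice the same simulated data would be fed to standard library classifiers; the point of the sketch is only the simulate-label-evaluate loop.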
Chengyang Li, Tianbo Huang, Xiarun Chen, Chenglin Xie, Weiping Wen, School of Software and Microelectronics, Peking University, Beijing, China
Code obfuscation increases the difficulty of understanding programs and improves software security, and OLLVM in particular offers the possibility of cross-platform code obfuscation. For OLLVM, we provide enhanced solutions for control flow obfuscation and identifier obfuscation. First, we propose a nested-switch obfuscation scheme and in-degree obfuscation for bogus blocks in the control flow obfuscation. Secondly, an identifier obfuscation scheme is presented at the LLVM layer to fill OLLVM's gap at this level. Finally, we experimentally verify the enhancement effect of the control flow method and the identifier obfuscation effect, and show that a program's security can be further improved with little overhead.
Software Protection, Code Obfuscation, Control Flow Obfuscation, Identifier Obfuscation, LLVM.
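The nested-switch scheme above builds on control-flow flattening, where the original basic blocks become cases of a dispatcher driven by a state variable, hiding the control-flow graph. A conceptual illustration in Python (the real passes operate on LLVM IR, not source; this only sketches the idea):

```python
# Control-flow flattening sketch: the loop of Euclid's gcd is rewritten as
# a dispatcher whose state variable selects the next basic block, so the
# original branch structure is no longer visible in the code layout.

def gcd_flattened(a, b):
    state = 0
    while True:
        if state == 0:      # block A: loop condition
            state = 1 if b != 0 else 2
        elif state == 1:    # block B: loop body
            a, b = b, a % b
            state = 0
        elif state == 2:    # block C: return
            return a
```

A nested-switch variant would split the state space across an outer and inner dispatcher, and bogus blocks with extra in-edges would be mixed into the cases.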
Isaac Tijerina1 and Dr. Soma Datta2, 1Master of Science in Software Engineering Student, University of Houston – Clear Lake, Houston, Texas, USA, 2Associate Professor of Software Engineering, University of Houston – Clear Lake, Houston, Texas, USA
Objective: The focus of this study is to review what has already been accomplished with speech-to-text in programming and its general application in other fields. What is found can then be applied to furthering the application of speech-to-text to coding. Results: It was found that the state of modern speech-to-text is in constant motion. Work is constantly being done to improve speech-to-text and apply it to various fields; it is being used in medicine, education, machinery control, and many others. It is also seen that systems still struggle with a user's accent when it differs from the native accent of the language.
Speech to text, computer speech recognition, speech to text accent, spoken word recognition.
Chandra Shekhar Gautam, Research Scholar, A.P.S. University, Rewa, and Dr. Rakesh Kumar Katare, Head, Dept. of Computer Science, A.P.S. University, Rewa (M.P.), India
With the growth of the internet, a huge amount of data is being generated every second. Companies rely on data analytics to expand their business and stay competitive in the market. Over time, big data analytics technologies have become more affordable for small companies. Unfortunately, small companies usually find it difficult to make the best use of these resources due to wrong assumptions about big data or because they are unable to meet the infrastructural requirements that big data analysis involves. MapReduce is the basic software framework used in the field of big data because of its high scalability. The use of parallel genetic algorithms in MapReduce gives more accurate, close-to-optimal values, and due to its parallel nature, huge volumes of data can be handled properly. This survey aims to collect and organize most of the recent publications on parallel genetic algorithms using MapReduce for improving processing performance. The paper presents a comparative view of different parallel models of genetic algorithms for MapReduce and also provides detailed information on different platforms and tools for MapReduce.
Big Data, Map Reduce, Genetic Algorithm, Map Reduce using Global Parallelization, Map Reduce using Coarse Grained Parallelization, Map Reduce using Fine Grained Parallelization.
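In the global-parallelization model surveyed above, one GA generation maps naturally onto MapReduce: the map phase evaluates each individual's fitness independently, and the reduce phase performs selection. A toy single-process sketch of that structure, using an illustrative OneMax fitness and uniform crossover (real deployments would run the map phase across a Hadoop-style cluster):

```python
# Global-parallelization GA sketch: map = fitness evaluation (independent
# per individual), reduce = truncation selection over the scored population.
import random

def map_phase(population):
    """Map: emit (fitness, individual) pairs -- embarrassingly parallel."""
    return [(sum(bits), bits) for bits in population]   # OneMax fitness

def reduce_phase(scored, k):
    """Reduce: keep the k fittest individuals (truncation selection)."""
    return [bits for _, bits in sorted(scored, key=lambda p: -p[0])[:k]]

def evolve(population, generations, k):
    for _ in range(generations):
        parents = reduce_phase(map_phase(population), k)
        population = [
            [random.choice(pair) for pair in zip(*random.sample(parents, 2))]
            for _ in range(len(population))
        ]  # uniform crossover between two sampled parents
    return max(population, key=sum)
```

Coarse- and fine-grained variants differ only in where selection happens: per-partition islands versus per-neighbourhood, rather than one global reduce.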
Mengcheng Han1 and Yu Sun2, 1Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, 2California State Polytechnic University, Pomona, CA 91768
How can image recognition and object analysis bring convenience to people's lives? With this question in mind, I started to explore and build this automatic car license plate detection and analysis project. Although cameras are widely used for recording and analyzing vehicle information, such intelligent devices come at a great cost. Guided by recent research on machine learning approaches, we address this financial problem by designing and implementing a mobile phone application that automatically uses the camera installed on the phone to analyze the information on a car license plate. Our design provides users with an accessible way to analyze license plates in complex environments.
Machine learning, LPR, OCR libraries.
Abdulaziz A. Afandi, Department of Engineering College, Islamic University of Medina, Medina, Saudi Arabia
The need to reduce inventory holding costs and increase system operational availability is the main motivation behind improving spare parts inventory management in a major power utility company in Medina. This paper reports on an effort to optimize the order quantities of spare parts by improving the method of forecasting demand. The study focuses on equipment that has frequent spare-part purchase orders with uncertain demand. The demand follows a lumpy pattern, which makes conventional forecasting methods less effective. A comparison was made by benchmarking various forecasting methods against experts' criteria to select the most suitable method for the case study. Three actual data sets were used to make the forecast in this case study. Two neural network (NN) approaches were utilized and compared, namely long short-term memory (LSTM) and the multilayer perceptron (MLP). The results, as expected, showed that the NN models gave better results than the traditional forecasting method (a judgmental method). In addition, the LSTM model had higher predictive accuracy than the MLP model.
ANN, long short-term memory, multilayer perceptron, forecasting demand, inventory management, spare parts.
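Both NN forecasters compared above are typically trained on the same supervised framing of the demand history: a sliding window of past demands predicts the next value. A hedged sketch of that shared preparation step (the window length is an assumption, not the paper's setting):

```python
# Turn a demand time series into (lookback-window -> next value) pairs,
# the supervised form both an LSTM and an MLP can be trained on.

def make_windows(series, lookback):
    """Return (inputs, targets): each input holds `lookback` past demands."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

# Lumpy demand: many zeros punctuated by occasional large orders.
X, y = make_windows([0, 3, 0, 0, 7, 0, 1], lookback=3)
```

The many zero targets this produces on lumpy data are exactly why plain accuracy-style metrics mislead and why the choice of forecaster matters here.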
Yiwen Liu1, Yu Sun2, 1The High School Affiliated to Renmin University, 37 Zhongguancun Street, Haidian, Beijing, 2California State Polytechnic University, Pomona, CA 91768
Citizens nowadays are often born and raised in well-developed urban areas and have rarely, if ever, experienced the difficulties that wildlife suffer due to human actions. They may possess sympathy, but these individuals are seldom aware of how their deeds affect the lives of other species on planet Earth. However, if someone reveals the bloody truth to people, they are likely to change for the greater good. In this paper, we mainly used Unity modeling and Java programming skills to develop an animal simulation game to show the damage done by mankind and evoke empathy, so that players may alter their actions to preserve the environment. The player starts the game as an animal in a randomly generated map and controls the animal to move around and consume water and food for survival. Meanwhile, the animal must avoid the poisoned human lands closing in on its habitat. Eventually, the player will starve or be poisoned and fail to survive. By setting this outcome, we hope to arouse sympathy and lead to some alteration of players' habits.
Simulator, self-manipulation, protection, ecosystem.
Copyright © SIGML 2022