Search results for: memetic algorithms
404 Low-Cost Image Processing System for Evaluating Pavement Surface Distress
Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa
Abstract:
Most asphalt pavement condition evaluations use rating frameworks in which pavement distress is estimated by type, extent, and severity. Rating is carried out through the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost technique for pavement distress image analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for the detection of potholes and cracks. Longitudinal cracks and potholes are detected using Fuzzy C-Means (FCM) clustering followed by a spectral theory algorithm. The framework comprises three phases: image acquisition, processing, and feature extraction. A digital camera (GoPro) with a holder is used to capture pavement distress images from a moving vehicle. The FCM classifier and spectral theory algorithms are used to compute features and classify longitudinal cracks and potholes. The MATLAB R2016a image processing toolkit is used for performance analysis to assess the viability of pavement distress detection on selected urban stretches of Bengaluru city, India. The outcomes of image evaluation with the semi-automated image processing framework represented the features of longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images are validated against the actual dimensions, and the dimensional variability is about 0.46. The linear regression model y = 1.171x - 0.155 is obtained from the existing and experimental (image processing) areas. The R² value obtained from the best-fit line is 0.807, which is considered a 'large positive linear association' in the linear regression model.
Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means
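As a rough, self-contained illustration of the clustering step the abstract names (not the authors' MATLAB pipeline, and omitting the spectral-theory post-processing), a minimal fuzzy c-means in Python can separate darker distress pixels from the pavement background; the synthetic image and the thresholding rule below are assumptions made for the sketch:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means on a 1-D feature vector (pixel intensities)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)              # membership-weighted centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = d ** (-2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# Synthetic grayscale image: a dark longitudinal "crack" on a lighter background.
rng = np.random.default_rng(1)
img = rng.normal(0.7, 0.05, (64, 64))
img[30:34, :] = rng.normal(0.2, 0.05, (4, 64))

centers, u = fuzzy_c_means(img.ravel())
labels = u.argmax(axis=0).reshape(img.shape)             # hard labels from memberships
crack_mask = labels == centers.argmin()                  # darker cluster = candidate distress
print("flagged pixel fraction:", crack_mask.mean())
```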
403 Predictive Modeling of Bridge Conditions Using Random Forest
Authors: Miral Selim, May Haggag, Ibrahim Abotaleb
Abstract:
The aging of transportation infrastructure presents significant challenges, particularly concerning the monitoring and maintenance of bridges. This study investigates the application of Random Forest algorithms for predictive modeling of bridge conditions, utilizing data from the US National Bridge Inventory (NBI). The research is significant as it aims to improve bridge management through data-driven insights that can enhance maintenance strategies and contribute to overall safety. Random Forest is chosen for its robustness, ability to handle complex, non-linear relationships among variables, and its effectiveness in feature importance evaluation. The study begins with comprehensive data collection and cleaning, followed by the identification of key variables influencing bridge condition ratings, including age, construction materials, environmental factors, and maintenance history. Random Forest is utilized to examine the relationships between these variables and the predicted bridge conditions. The dataset is divided into training and testing subsets to evaluate the model's performance. The findings demonstrate that the Random Forest model effectively enhances the understanding of factors affecting bridge conditions. By identifying bridges at greater risk of deterioration, the model facilitates proactive maintenance strategies, which can help avoid costly repairs and minimize service disruptions. Additionally, this research underscores the value of data-driven decision-making, enabling better resource allocation to prioritize maintenance efforts where they are most necessary. In summary, this study highlights the efficiency and applicability of Random Forest in predictive modeling for bridge management. Ultimately, these findings pave the way for more resilient and proactive management of bridge systems, ensuring their longevity and reliability for future use.
Keywords: data analysis, random forest, predictive modeling, bridge management
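A minimal sketch of the workflow described above, using scikit-learn; the file name, column names, and rating field are placeholders, not the study's actual NBI schema:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical NBI-style table; real column names in the inventory differ.
df = pd.read_csv("nbi_bridges.csv")           # placeholder file name
features = ["age", "material", "adt", "climate_zone", "maintenance_events"]
X = pd.get_dummies(df[features], columns=["material", "climate_zone"])
y = df["condition_rating"]                    # e.g., an NBI 0-9 condition rating

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Feature importances point to the main drivers of deterioration.
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.3f}")
```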
402 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographics on Public Health
Authors: Irfan Ahmad Afip
Abstract:
The main research objective is to accurately identify the possibility of a leptospirosis outbreak and its severity in a certain area based on input features to a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by features such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is subject to these features, then the epidemic severity for an area will differ depending on its environmental setting, because the features will influence the possibility and severity of an outbreak. Specifically, the research objective is three-fold, namely: (a) to identify the relevant multivariate features and visualize the patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model and machine learning algorithms. Several secondary data features were collected from locations in the state of Negeri Sembilan, Malaysia, based on their likely relevance to determining outbreak severity in the area. The relevant features then become inputs to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capabilities, and a multivariate regression model has proven more precise prognostic capabilities than univariate models. The expected outcome of this research is to establish a correlation between social demographic and hydrometeorological features and the Leptospira bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationship established can be beneficial for health departments or urban planners to inspect and prepare for future outcomes in event detection and system health monitoring.
Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression
401 Review of Strategies for Hybrid Energy Storage Management System in Electric Vehicle Application
Authors: Kayode A. Olaniyi, Adeola A. Ogunleye, Tola M. Osifeko
Abstract:
Electric vehicles (EVs) appear to be gaining increasing patronage as a feasible alternative to internal combustion engine vehicles (ICEVs) owing to their low emissions and high operating efficiency. EV energy storage systems are required to handle high energy and power density within the constraints of limited space, operating temperature, weight, and cost. The choice of strategies for energy storage evaluation, monitoring, and control remains a challenging task. This paper presents a review of various energy storage technologies and recent research on battery evaluation techniques used in EV applications. It also underscores strategies for hybrid energy storage management and control schemes for improving EV stability and reliability. The study reveals that, despite the advances recorded in battery technologies, there is still no cell which possesses both the optimum power and energy densities, among other requirements, for EV application. However, combining two or more energy storage devices as a hybrid, allowing the advantageous attributes of each device to be utilized, is a promising solution. The review also reveals that state of charge (SoC) is the most crucial quantity for battery estimation. The conventional method of SoC measurement is, however, questioned in the literature, and adaptive algorithms that include all models of disturbances are being proposed. The review further suggests that a heuristic-based approach is commonly adopted in the development of strategies for hybrid energy storage system management. The alternative, optimization-based approach is found to be more accurate but is memory- and computation-intensive and as such is not recommended for most real-time applications.
Keywords: battery state estimation, hybrid electric vehicle, hybrid energy storage, state of charge, state of health
400 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue
Authors: Rachel Y. Zhang, Christopher K. Anderson
Abstract:
A critical aspect of revenue management is a firm’s ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexity of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including k-nearest neighbors, support vector machine, regression tree, and artificial neural network algorithms. The out-of-sample performance of the above approaches to forecasting hotel demand is illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A’s data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The proposed models will be valuable for hotel revenue prediction given the basic characteristics of a hotel property, and they can also be applied in performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel’s revenue performance, which would have considerable potential usefulness in both revenue prediction and evaluation.
Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine
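The benchmarking idea, comparing out-of-sample error across the four learners named above, can be sketched with scikit-learn; the synthetic demand data below is a stand-in for the proprietary Las Vegas sample:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

# X: market-level predictors (price, review score, chain scale, ...);
# y: room-nights demanded. Both are hypothetical stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 100 - 8 * X[:, 0] + 3 * X[:, 1] + rng.normal(scale=5, size=500)

models = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=7)),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "RegressionTree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 16),
                                      max_iter=2000, random_state=0)),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: out-of-sample MAE = {mae:.2f}")
```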
399 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, automating the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, so substantial time is needed to reconfigure production lines when producing different products, thereby impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aims of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. This method enhances the configuration efficiency of CNC production lines and establishes an adaptive capability that allows the production line to respond promptly to changes in demand. This minimizes production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.
Keywords: evolutionary algorithms, multi-objective optimization, Pareto optimality, layout optimization, operations sequence
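A toy sketch of the NSGA-II step on a deliberately simplified sequencing problem (two conflicting objectives: total tardiness versus tooling changeovers), using the pymoo library; the priority-decoding scheme and the job data are assumptions rather than the paper's model, and the API shown is pymoo 0.6-style:

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class SequenceProblem(ElementwiseProblem):
    """Toy stand-in for the paper's layout/sequencing model: one priority
    value per job is decoded into a processing order on a single CNC cell."""
    def __init__(self, proc, due, family):
        self.proc, self.due, self.family = proc, due, family
        super().__init__(n_var=len(proc), n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        order = np.argsort(x)                     # decode priorities -> sequence
        finish = np.cumsum(self.proc[order])
        tardiness = np.maximum(0.0, finish - self.due[order]).sum()
        changeovers = (np.diff(self.family[order]) != 0).sum()
        out["F"] = [tardiness, changeovers]       # both minimized by NSGA-II

proc = np.array([4.0, 2.5, 6.0, 3.0, 5.5, 2.0])
due = np.array([6.0, 5.0, 20.0, 9.0, 16.0, 4.0])
family = np.array([0, 1, 0, 1, 0, 1])            # tooling family per job

res = minimize(SequenceProblem(proc, due, family),
               NSGA2(pop_size=40), ("n_gen", 80), seed=1, verbose=False)
print("Pareto-optimal (tardiness, changeovers):\n", res.F)
```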
398 A Comprehensive Theory of Communication with Biological and Non-Biological Intelligence for a 21st Century Curriculum
Authors: Thomas Schalow
Abstract:
It is commonly recognized that our present curriculum is not preparing students to function in the 21st century. This is particularly true in regard to communication needs across cultures - both human and non-human. In this paper, a comprehensive theory of communication, based on communication with non-human cultures and intelligences, is presented to meet the following three imminent contingencies: communicating with sentient biological intelligences, communicating with extraterrestrial intelligences, and communicating with artificial super-intelligences. The paper begins with the argument that we need to become much more serious about communicating with the non-human, intelligent life forms that already exist around us here on Earth. We need to broaden our definition of communication and reach out to other sentient life forms in order to provide humanity with a better perspective of its place within our ecosystem. The paper next examines the science and philosophy behind CETI (communication with extraterrestrial intelligences) and how it could prove useful even in the absence of contact with alien life. However, CETI’s assumptions and methodology need to be revised in accordance with the communication theory proposed in this paper if we are truly serious about finding and communicating with life beyond Earth. The final theme explored in this paper is communication with non-biological super-intelligences. Humanity has never been truly compelled to converse with other species, and our failure to seriously consider such intercourse has left us largely unprepared to deal with communication in a future that will be mediated and controlled by computer algorithms. Fortunately, our experience dealing with other cultures can provide us with a framework for this communication. The basic concepts behind intercultural communication can be applied to the three types of communication envisioned in this paper if we are willing to recognize that we are in fact dealing with other cultures when we interact with other species, alien life, and artificial super-intelligence. The ideas considered in this paper will require a new mindset for humanity, but a new disposition will yield substantial gains. A curriculum that is truly ready for the 21st century needs to be aligned with this new theory of communication.
Keywords: artificial intelligence, CETI, communication, language
397 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, with remarkable effects. However, cascading different impairment compensation algorithms increases transmission delay. With the widespread application of deep neural networks (DNNs) in communication, multi-impairment compensation based on a DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models can optimize the constellation mapping signals at the transmitter and compensate multiple impairments of the OFDM decoded signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for a 16-QAM coherent optical OFDM signal and demonstrate and analyze transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate for multiple impairments in fiber transmission effectively.
Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
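A minimal stand-in for the receiver-side DNN idea: a small dense network trained to invert a toy nonlinear distortion of 16-QAM symbols. The distortion model, network shape, and training setup are assumptions; the paper's actual channel, loss function, and architecture are not reproduced here:

```python
import numpy as np
import tensorflow as tf

# Toy channel: an amplitude-dependent phase rotation plus noise (a crude
# caricature of fiber nonlinearity), applied to 16-QAM I/Q symbols.
levels = np.array([-3, -1, 1, 3], dtype=np.float32)
rng = np.random.default_rng(0)
tx = rng.choice(levels, size=(20000, 2))                 # transmitted I/Q
r2 = (tx ** 2).sum(axis=1, keepdims=True)
phase = 0.05 * r2                                        # toy nonlinear rotation
c, s = np.cos(phase)[:, 0], np.sin(phase)[:, 0]
rx = np.stack([c * tx[:, 0] - s * tx[:, 1],
               s * tx[:, 0] + c * tx[:, 1]], axis=1)
rx = rx + rng.normal(scale=0.1, size=rx.shape)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),                            # corrected I/Q estimate
])
model.compile(optimizer="adam", loss="mse")
model.fit(rx, tx, epochs=5, batch_size=256, verbose=0)

est = model.predict(rx[:3], verbose=0)
print(np.round(est, 2), "\nvs transmitted:\n", tx[:3])
```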
396 Quoting Jobshops Due Dates Subject to Exogenous Factors in Developing Nations
Authors: Idris M. Olatunde, Kareem B.
Abstract:
In manufacturing systems, especially job shops, service performance is a key factor that determines customer satisfaction. Service performance depends not only on the quality of the output but on the delivery lead times as well. Besides product quality enhancement, delivery lead time must be minimized for optimal patronage. Quoting accurate due dates is a sine qua non for job shop operational survival in a globally competitive environment. Quoting accurate due dates in job shops has been a herculean task that has nearly defied solution by the many methods employed, owing to the complex job routing nature of the system. This class of NP-hard problems possesses no exact algorithm that can guarantee an optimal solution. The job shop operational problem is more complex in developing nations due to some peculiar factors. Operational complexity in job shops emanates from political instability, a poor economy, limited technological know-how, and an unpromising socio-political environment. These exogenous factors were hardly considered in previous studies on scheduling problems related to due date determination in job shops. This study fills the gap left by past studies by developing a dynamic model that incorporates the exogenous factors for accurate determination of due dates for jobs of varying complexity. Real data from six job shops selected from different parts of Nigeria were used to test the efficacy of the model, and the outcomes were analyzed statistically. The results of the analyses showed that the model is more promising in determining accurate due dates than the traditional models deployed by many job shops, in terms of patronage and lead time minimization.
Keywords: due dates prediction, improved performance, customer satisfaction, dynamic model, exogenous factors, job shops
395 Transmission Line Congestion Management Using Hybrid Fish-Bee Algorithm with Unified Power Flow Controller
Authors: P. Valsalal, S. Thangalakshmi
Abstract:
There is a widespread changeover in the electrical power industry worldwide from the old-style monopolistic structure toward a horizontally distributed competitive structure, to meet the demand of rising consumption. When the transmission lines of a deregulated system are incapable of accommodating the entire service need, the lines are overloaded, or congested. The intermediary between customer and power producer, nominated as the Independent System Operator (ISO), must lessen the congestion without violating transmission line restrictions. Among the existing approaches for congestion management, the most frequently used are generation rescheduling and load curtailment. There is a limit to rescheduling the generators, and further load may not be supplied with the prevailing resources unless more private power producers are added to the system at considerably raised cost. Hence, congestion is relieved by appropriate Flexible AC Transmission System (FACTS) devices, which boost the existing transfer capacity of transmission lines. The FACTS device, namely the Unified Power Flow Controller (UPFC), is preferred, and the correct placement of the UPFC is vital: it should be positioned in the most highly congested line. Hence, the weak line is identified using a power flow performance index with a new objective function and the proposed hybrid Fish-Bee algorithm. Further, locating the UPFC at the appropriate line reduces branch loading and minimizes voltage deviation. The power transfer capacity of the lines is determined with and without the UPFC in the identified congested line of the IEEE 30-bus structure, and the simulated results are compared with prevailing algorithms. It is observed that the transfer capacity of the existing line is increased with the presented algorithm, thus alleviating the congestion.
Keywords: available line transfer capability, congestion management, FACTS device, hybrid Fish-Bee algorithm, ISO, UPFC
394 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks
Authors: Bahareh Golchin, Nooshin Riahi
Abstract:
One of the most significant issues to attract attention in recent years is recognizing sentiments and emotions in social media texts. The analysis of sentiments and emotions is intended to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features in written text. This indicates the size of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions, and opinions of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. Using this social network, users can share their information and opinions about personal issues, policies, products, events, etc. Its data can be used for appropriate classification of emotional states because of its availability. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two bidirectional long short-term memory networks and a convolutional network. The results obtained from this study, with an average accuracy of 93%, show good results extracted from the proposed framework and improved accuracy compared to previous work.
Keywords: emotion classification, sentiment analysis, social networks, deep neural networks
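The described architecture, two bidirectional LSTM layers feeding a convolutional layer with a four-class softmax output, can be sketched in Keras; the vocabulary size, sequence length, and layer widths below are assumed values, not the paper's settings:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, N_CLASSES = 20000, 50, 4   # assumed sizes; tune to the corpus

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, 128),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dropout(0.5),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)  # tokenized tweets
```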
393 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network
Authors: Z. Abdollahi Biron, P. Pisu
Abstract:
Connected vehicles are one of the promising technologies for future intelligent transportation systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange their information with each other and the infrastructure. Although this interconnection of vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges comes with this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management systems of the individual vehicles and the overall system. Such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect the individual vehicle’s safe operation and in turn create a potentially unsafe node in the vehicular network. Further, sending faulty sensor information to other vehicles, together with failures in actuators, may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Cooperative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Further, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
Keywords: fault diagnostics, communication network, connected vehicles, packet drop out, platoon
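The packet-drop handling of the Kalman-filter augmentation can be illustrated compactly: run the time update every cycle, and skip the measurement update whenever the V2V packet is lost. This is a simplified 1-D gap/relative-speed model with assumed constants, not the paper's CACC formulation, and the sliding-mode observer is not shown:

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])        # constant-velocity model: [gap, relative speed]
H = np.array([[1.0, 0.0]])             # the V2V packet reports the gap only
Q = np.diag([1e-3, 1e-2])              # process noise (assumed)
R = np.array([[0.05]])                 # measurement noise (assumed)

x = np.array([[20.0], [0.0]])          # state estimate
P = np.eye(2)

def kf_step(x, P, z=None):
    # Time update runs every cycle; measurement update only if a packet arrived.
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:                  # z is None when the packet was dropped
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for k in range(100):
    true_gap = 20.0 - 0.02 * k
    z = np.array([[true_gap + rng.normal(scale=0.2)]])
    packet_ok = rng.random() > 0.3     # 30% packet drop rate (assumed)
    x, P = kf_step(x, P, z if packet_ok else None)
print("estimated gap, relative speed:", x.ravel())
```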
392 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm
Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi
Abstract:
To find the location and severity of damage that occurs in a structure, changes in its static and dynamic characteristics can be used. Non-destructive techniques are the more common, economical, and reliable way to detect global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of a truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure changes its stiffness, so this method determines damage based on changes in the structural stiffness parameters. Changes in the static response caused by structural damage are used to produce a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios are defined for damage detection (single and multiple damage scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best possible damage identification; if the best result is obtained, the method can be considered reliable. This strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that damage detection in multiple damage scenarios yields efficient answers. Even the existence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm
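A minimal sketch of the approach on a toy three-spring chain: the chromosome holds one stiffness-reduction factor per element, and the fitness is the mismatch between measured and computed static displacements. The structure, GA operators, and constants are illustrative assumptions, not the paper's truss model:

```python
import numpy as np

def stiffness(d, k=np.array([1e4, 1e4, 1e4])):
    """Stiffness matrix of a 3-spring chain (node 0 fixed); d = damage per element."""
    ke = k * (1.0 - d)
    return np.array([[ke[0] + ke[1], -ke[1],        0.0],
                     [-ke[1],        ke[1] + ke[2], -ke[2]],
                     [0.0,           -ke[2],        ke[2]]])

f = np.array([0.0, 0.0, 100.0])                  # static load at the free end
d_true = np.array([0.0, 0.3, 0.0])               # "unknown" damage to recover
u_meas = np.linalg.solve(stiffness(d_true), f)   # simulated measured displacements

def objective(d):                                # mismatch of static responses
    return np.linalg.norm(np.linalg.solve(stiffness(d), f) - u_meas)

rng = np.random.default_rng(0)
pop = rng.random((60, 3)) * 0.8                  # damage factors in [0, 0.8)
for _ in range(200):
    scores = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(scores)][:20]       # keep the fittest individuals
    pairs = rng.integers(0, 20, size=(40, 2))    # arithmetic crossover + mutation
    children = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2
    children += rng.normal(scale=0.02, size=children.shape)
    pop = np.clip(np.vstack([parents, children]), 0.0, 0.8)

best = min(pop, key=objective)
print("recovered damage factors:", np.round(best, 3))    # expect roughly [0, 0.3, 0]
```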
391 Facilitating Written Biology Assessment in Large-Enrollment Courses Using Machine Learning
Authors: Luanna B. Prevost, Kelli Carter, Margaurete Romero, Kirsti Martinez
Abstract:
Writing is an essential scientific practice, yet in several countries the increasing size of university science classes limits the use of written assessments. Written assessments allow students to demonstrate their learning in their own words and permit faculty to evaluate students’ understanding. However, the time and resources required to grade written assessments prohibit their use in large-enrollment science courses. This study examined the use of machine learning algorithms to automatically analyze student writing and provide timely feedback to faculty about students' writing in biology. Written responses to questions about matter and energy transformation were collected from large-enrollment undergraduate introductory biology classrooms. Responses were analyzed using the LightSide text mining and classification software. Cohen’s kappa was used to measure agreement between the LightSide models and human raters. Predictive models achieved agreement with human coding of 0.7 Cohen’s kappa or greater. The models captured that, when writing about matter-energy transformation at the ecosystem level, students focused primarily on the concepts of heat loss, recycling of matter, and conservation of matter and energy. Models were also produced to capture writing about processes such as decomposition and biochemical cycling. The models created in this study can be used to provide automatic feedback about students' understanding of these concepts to biology faculty who wish to use formative written assessments in large-enrollment biology classes but do not have the time or personnel for manual grading.
Keywords: machine learning, written assessment, biology education, text mining
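LightSide is the tool actually used in the study; the equivalent train-then-check-agreement workflow can be sketched with scikit-learn, with the toy answers and human codes below standing in for the real corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

# Hypothetical stand-in data: short student answers, human-coded 1 if the
# response invokes conservation/recycling of matter or energy, else 0.
answers = ["energy is lost as heat between trophic levels",
           "matter is recycled by decomposers back into the soil",
           "the plant creates energy from nothing",
           "atoms are conserved as biomass decomposes"] * 50
codes = [1, 1, 0, 1] * 50

X_train, X_test, y_train, y_test = train_test_split(
    answers, codes, test_size=0.3, random_state=0)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(X_train, y_train)

# Agreement between model predictions and human coding, as in the study.
print("Cohen's kappa:", cohen_kappa_score(y_test, clf.predict(X_test)))
```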
390 Metagenomics-Based Molecular Epidemiology of Viral Diseases
Authors: Vyacheslav Furtak, Merja Roivainen, Olga Mirochnichenko, Majid Laassri, Bella Bidzhieva, Tatiana Zagorodnyaya, Vladimir Chizhikov, Konstantin Chumakov
Abstract:
Molecular epidemiology and environmental surveillance are parts of a rational strategy to control infectious diseases. They have been widely used in the worldwide campaign to eradicate poliomyelitis, which otherwise would be complicated by the inability to rapidly respond to outbreaks and determine sources of the infection. The conventional scheme involves isolation of viruses from patients and the environment, followed by their identification through nucleotide sequence analysis to determine phylogenetic relationships. This is a tedious and time-consuming process that yields definitive results when it may be too late to implement countermeasures. Because of the difficulty of high-throughput full-genome sequencing, most such studies are conducted by sequencing only capsid genes or parts of them. Therefore, important information about the contribution of other parts of the genome, and of inter- and intra-species recombination, to viral evolution is not captured. Here we propose a new approach based on rapid concentration of sewage samples with tangential flow filtration, followed by deep sequencing and reconstruction of the nucleotide sequences of viruses present in the samples. The entire nucleic acid content of each sample is sequenced, thus preserving in digital format the complete spectrum of viruses. A set of rapid algorithms was developed to separate deep-sequencing reads into discrete populations corresponding to each virus and assemble them into full-length consensus contigs, as well as to generate a complete profile of sequence heterogeneities in each of them. This provides an effective approach to study the molecular epidemiology and evolution of natural viral populations.
Keywords: poliovirus, eradication, environmental surveillance, laboratory diagnosis
389 Hardware-In-The-Loop Relative Motion Control: Theory, Simulation and Experimentation
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper presents a guidance and control (G&C) strategy to address the spacecraft maneuvering problem for future rendezvous and docking (RVD) missions. The proposed strategy allows safe and propellant-efficient trajectories for space servicing missions, including tasks such as approaching, inspecting, and capturing. This work provides the validation test results of the G&C laws using a hardware-in-the-loop (HIL) setup with two robotic mockups representing the chaser and the target spacecraft. Through this paper, the challenges of relative motion control in space are first summarized, in particular the constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Second, the proposed algorithm is introduced by presenting the formulation of constrained model predictive control (MPC) to optimize fuel consumption and explicitly handle the physical and geometric constraints in the system, e.g., thruster or line-of-sight (LOS) constraints. Additionally, the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description and explained accordingly. The resulting convex optimization problem allows real-time implementation, supported by a detailed discussion of the computational time requirements and the results obtained with respect to the onboard computer and future trends in space processor capabilities. Finally, the performance of the algorithm is presented in the scope of a potential future mission and the available equipment. The results also cover a comparison of the proposed algorithms with a linear-quadratic regulator (LQR) based control law to highlight the clear advantages of the MPC formulation.
Keywords: autonomous vehicles, embedded optimization, real-time experiment, rendezvous and docking, space robotics
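The flavor of the constrained, fuel-minimizing MPC can be conveyed with a 1-D double-integrator sketch in cvxpy; the paper's formulation (relative orbital dynamics, LOS cones, dual-quaternion coupling) is far richer, and all constants below are assumptions:

```python
import numpy as np
import cvxpy as cp

dt, N = 1.0, 30
A = np.array([[1, dt], [0, 1]])        # discrete double integrator
B = np.array([[0.5 * dt**2], [dt]])

x = cp.Variable((2, N + 1))            # [relative position, relative velocity]
u = cp.Variable((1, N))                # thrust acceleration

cost = cp.sum(cp.abs(u))               # L1 cost as a propellant-use proxy
constr = [x[:, 0] == np.array([100.0, 0.0])]   # start 100 m from the target
for k in range(N):
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[:, k]) <= 0.5]         # thruster saturation
constr += [x[:, N] == 0]               # dock: zero relative position and velocity

cp.Problem(cp.Minimize(cost), constr).solve()
print("fuel proxy:", cost.value, "first control:", u.value[:, 0])
```

In a receding-horizon setting, only the first control is applied each cycle and the problem is re-solved, which is what makes the computational-time discussion above central to onboard feasibility.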
388 A Study on the Korean Connected Industrial Parks Smart Logistics IT Financial Enterprise Architecture
Authors: Ilgoun Kim, Jongpil Jeong
Abstract:
Recently, a connected industrial parks (CIPs) architecture has been proposed using new technologies such as RFID, cloud computing, CPS, big data, 5G, IIoT, VR-AR, and ventral AI algorithms based on IoT. This research notes the vehicle junction problem (VJP) as a more specific detail of the CIPs architectural models. The VJP includes 'efficient AI physical connection challenges for vehicles', 'financial issues with complex vehicle physical connections', and 'the welfare and working conditions of the personnel involved in complex vehicle physical connections'. In this paper, we propose a public solution architecture for the 'electronic financial problem of complex vehicle physical connections' as a detailed task within the VJP. The research seeks solutions to business, consumer, and Korean social problems through technological advancement. We studied how the beneficiaries of technological development can benefit from it: not only specific companies, but the many consumers in Korean society and the many managers of small Korean companies. In order to implement the CIPs architecture using the new technologies more specifically, we note the VJP within the smart factory industrial complex and the process of achieving vehicle junction performance among several electronic processes. This research proposes a more detailed, integrated public finance enterprise architecture within the overall CIPs architecture. The main details of the public integrated financial enterprise architecture are organized into four main categories: 'business', 'data', 'technique', and 'finance'.
Keywords: enterprise architecture, IT finance, smart logistics, CIPs
387 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems
Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer
Abstract:
This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed-loop control must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem such that the optimal trade-offs among these design goals are found. To the authors' best knowledge, such a problem has not been studied in a multi-objective setting so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm), and the dominance concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (called the Pareto set) which represent the optimal trade-offs among the selected design criteria. The function evaluation of the Pareto set is called the Pareto front. The solution set is introduced to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for multi-objective optimal design of multi-loop control systems.
Keywords: cascade control, multi-loop control systems, multi-objective optimization, optimal control
386 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporary defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: artificial neural network multilayer perceptron (ANN-MLP), artificial neural network radial basis function (ANN-RBF), logistic regression (LR), and support vector machines (SVM). For each method, different parameters were analyzed in order to obtain a range of results from which the best of each technique could be compared. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for temporary defaulters). However, the best accuracy does not always indicate the best technique. For instance, in the classification of temporary defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
385 Writing a Parametric Design Algorithm Based on Recreation and Structural Analysis of Patkane Model: The Case Study of Oshtorjan Mosque
Authors: Behnoush Moghiminia, Jesus Anaya Diaz
Abstract:
The current study attempts to present the relationship between structural development, Patkaneh (one of the Iranian geometric patterns), and parametric algorithms by introducing two practical methods. While having a structural function, Patkaneh is also used as an ornamental element; clarifying this dual role can be helpful in the scientific and practical study of Patkaneh. The current study aims to use Patkaneh as a parametric form generator based on an algorithm. This paper attempts to express how a more complete algorithm of this covering can be obtained based on parametric study and analysis of a sample Patkaneh, and it also investigates the relationship between the development of the geometrical pattern of Patkaneh, as a structural-decorative element of Iranian architecture, and digital design. In this regard, to achieve the research purposes, the researchers investigated one of the oldest types of Patkaneh in the architectural history of Iran, the northern entrance Patkaneh of Oshtorjan Jame’ Mosque. An accurate investigation of the historical background was done to answer the research questions. Then, by investigating the structural behavior of Patkaneh, its decorative or structural-decorative role was examined to eliminate ambiguity. The geometrical structure of Patkaneh was then analyzed by introducing two practical methods. The first method is based on the constituent units of Patkaneh (square and diamond) and investigates the interactive relationships between them in 2D and 3D; it is appropriate for cases where there are rational and regular geometrical relationships. The second method is based on the separation of the floors and the investigation of their interrelation; it is practical when the constituent units are not geometrically regular and are highly diverse. Finally, the parametric form algorithms of these methods were codified.
Keywords: geometric properties, parametric design, Patkaneh, structural analysis
384 Rating Agreement: Machine Learning for Environmental, Social, and Governance Disclosure
Authors: Nico Rosamilia
Abstract:
The study evaluates the importance of non-financial disclosure practices for regulators, investors, businesses, and markets. It aims to create a sector-specific set of indicators for environmental, social, and governance (ESG) performance as an alternative to the ratings of the agencies. The existing literature extensively studies the implementation of ESG rating systems; this study, by contrast, has a twofold outcome. Firstly, it should generalize incentive systems and governance policies for ESG and sustainability principles, and therefore contribute to the EU Sustainable Finance Disclosure Regulation. Secondly, it concerns the market and investors by highlighting successful sustainable investing. Indeed, the study contemplates the effect of ESG adoption practices on corporate value. The research explores the asset pricing angle in order to shed light on the fragmented debate on the finance of ESG: investors may be misguided about the positive or negative effects of ESG on performance. The paper proposes a different method to evaluate ESG performance. By comparing the results of a traditional econometric approach (Lasso) with a machine learning algorithm (Random Forest), the study establishes a set of indicators for ESG performance. The research thereby also contributes empirically to the theoretical strands of literature regarding model selection and variable importance in a finance framework. The algorithms yield sector-specific indicators. This set of indicators defines an alternative to the compounded scores of ESG rating agencies and avoids the possible offsetting effect of aggregated scores. With this approach, the paper defines a sector-specific set of indicators to standardize ESG disclosure. Additionally, it tries to shed light on the unresolved question of the direction of the ESG effect on corporate value (the problem of endogeneity).
Keywords: ESG ratings, non-financial information, value of firms, sustainable finance
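The Lasso-versus-Random-Forest comparison for indicator selection can be sketched with scikit-learn; the synthetic data below stands in for sector-level ESG indicators and a firm-value proxy, both of which are assumptions:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical setup: X holds disclosed ESG indicators for firms in one
# sector, y a firm-value proxy (e.g., Tobin's Q). Synthetic stand-ins here.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))                # 12 candidate ESG indicators
y = 0.8 * X[:, 2] - 0.5 * X[:, 7] + rng.normal(scale=0.5, size=400)

Xs = StandardScaler().fit_transform(X)

lasso = LassoCV(cv=5).fit(Xs, y)              # econometric-style selection
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

selected = np.flatnonzero(lasso.coef_ != 0)   # indicators Lasso keeps
top_rf = np.argsort(rf.feature_importances_)[::-1][:3]
print("Lasso keeps indicators:", selected)
print("Random Forest top indicators:", top_rf)
```

Agreement between the two selections is what would support a stable, sector-specific indicator set; disagreement flags indicators whose apparent importance is method-dependent.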
383 Analyzing the Efficiency of Initiatives Taken against Disinformation during Election Campaigns: Case Study of Young Voters
Authors: Fatima-Zohra Ghedir
Abstract:
Social media platforms have been actively working on solutions and have combined their efforts with media, policymakers, educators, and researchers to protect citizens and prevent interference in information, political discourse, and elections. Facebook, for instance, deleted fake accounts, implemented fake-account and fake-content detection algorithms, partnered with news agencies to manually fact-check content, and changed its newsfeed display. Twitter and Instagram regularly communicate their efforts and notify their users of improvements and safety guidelines. More funds have been allocated to media literacy programs to empower citizens in preparation for coming elections. This paper investigates the efficiency of these initiatives and analyzes the metrics used to measure their success or failure. The objective is also to determine the segments of the population more prone to fall into disinformation traps during elections despite the measures taken over the last four years. This study will also examine the groups who were positively impacted by these measures. This paper relies on both desk and field methodologies. For this study, a survey was administered to French students aged between 17 and 29 years old, and semi-guided interviews were conducted with a similar audience. The analysis of the survey and the interviews shows that respondents were exposed to the initiatives described above and are aware of the existence of disinformation issues. However, they do not understand what disinformation really entails or means. For instance, for most of them, disinformation is synonymous with an opposing point of view, without taking into account the truthfulness of the content. Besides, they still consume and believe the information shared by their friends and family, with little questioning about the way their close ones get informed.
Keywords: democratic elections, disinformation, foreign interference, social media, success metrics
382 Quality Analysis of Vegetables Through Image Processing
Authors: Abdul Khalique Baloch, Ali Okatan
Abstract:
The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we reviewed the literature, identified gaps in it, suggested a better approach, designed the algorithm, and developed software to measure quality from images, where the accuracy of the image analysis shows better results; we compare these results with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. This research focuses on sorting food and vegetables from images: after processing the images, the application sorts and grades them, producing fewer errors than manual human-based sorting and grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This document is about the quality detection of fruit and vegetables using images. Many customers suffer due to unhealthy food and vegetables from suppliers, and no proper quality measurement is followed by hotel managements, so we developed software to measure the quality of fruits and vegetables from images; it tells you whether your fruits and vegetables are fresh or rotten. Some of the techniques reviewed in this work for grading and feature extraction include digital image processing, ResNet, VGG16, CNNs, and transfer learning. The application uses an open-source dataset of images, the language used is Python, and a framework for the system is designed.
Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria
381 Developing a Green Strategic Management Model with Regard to HSE-MS
Authors: Amin Padash, Gholam Reza Nabi Bid Hendi, Hassan Hoveidi
Abstract:
Purpose: The aim of this research is to develop a model for green management based on the health, safety, and environmental management system (HSE-MS). An HSE-MS can be a powerful tool for organizations to improve their environmental, health, and safety performance and to enhance their business efficiency toward green management. Model: The model developed in this study can be used by industries as a guideline for implementing green management by considering the health, safety, and environmental management system. Case Study: The Pars Special Economic/Energy Zone Organization, on behalf of Iran’s Petroleum Ministry and the National Iranian Oil Company (NIOC), manages and develops the South and North oil and gas fields in the region. Methodology: In terms of objective, this research is applied; in terms of implementation, it is descriptive and also prescriptive. We used multiple criteria decision-making (MCDM) techniques to determine the priorities of the factors. Based on a process approach, the model consists of the following steps and components: first, the factors involved in green issues are determined and a framework is built on them; then, using the MCDM algorithm TOPSIS, the priorities of the basic variables are determined. The authors believe that the proposed model and the results of this research can help industry managers implement green initiatives according to the health, safety, and environmental management system in a more efficient and effective manner. Findings and conclusion: The basic factors involved in green issues and their weights are the main finding; the model and the relations between factors are the other findings of this research. The case considered is a petrochemical company, toward promoting the system of ecological industry thinking.
Keywords: Fuzzy-AHP method, green management, health, safety and environmental management system, MCDM technique, TOPSIS
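TOPSIS itself is compact enough to show in full; the strategy scores and the weight vector (which in this study would come from the Fuzzy-AHP step) are illustrative assumptions:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS: rank alternatives by closeness to the ideal solution.
    matrix: alternatives x criteria; benefit[j] is True if higher is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    v = norm * weights                                    # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                        # closeness in [0, 1]

# Hypothetical green-management factors scored for three candidate strategies.
scores = np.array([[7.0, 5.0, 200.0],      # rows: strategies
                   [6.0, 8.0, 150.0],      # cols: HSE score, env. score, cost
                   [9.0, 6.0, 300.0]])
weights = np.array([0.4, 0.4, 0.2])        # e.g., from a (Fuzzy-)AHP pairwise step
benefit = np.array([True, True, False])    # cost is a non-benefit criterion

print("closeness to ideal:", topsis(scores, weights, benefit).round(3))
```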
380 Efficient Chiller Plant Control Using Modern Reinforcement Learning
Authors: Jingwei Du
Abstract:
The need to optimize air conditioning systems in existing buildings calls for control methods designed with energy efficiency as a primary goal. The majority of current control methods fall into two categories: empirical and model-based. To be effective, the former relies heavily on engineering expertise and the latter requires extensive historical data. Reinforcement learning (RL), on the other hand, is a model-free approach that explores the environment to obtain an optimal control strategy, often referred to as a 'policy'. This research adopts proximal policy optimization (PPO) to improve chiller plant control and to enable the RL agent to collaborate with experienced engineers. It exploits the fact that, while the industry lacks historical data, abundant operational data is available, which allows the agent to learn and evolve safely under human supervision. Thanks to the development of language models, renewed interest in RL has led to modern, online, policy-based RL algorithms such as PPO. This research took inspiration from 'alignment', a process that utilizes human feedback to fine-tune a pretrained model in case of unsafe content. The methodology can be summarized in three steps. First, an initial policy model is generated based on minimal prior knowledge. Next, the prepared PPO agent is deployed so that feedback from both a critic model and human experts can be collected for future fine-tuning. Finally, the agent learns and adapts itself to the specific chiller plant, updates the policy model, and is ready for the next iteration. Besides the proposed approach, this study also used traditional RL methods to optimize the same simulated chiller plants for comparison, and it turns out that the proposed method is both safe and effective and requires little to no historical data to start up.
Keywords: chiller plant, control methods, energy efficiency, proximal policy optimization, reinforcement learning
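A minimal sketch of wiring PPO to a plant model with stable-baselines3 and gymnasium (assuming SB3 >= 2.0); the environment below is a toy surrogate with invented dynamics and reward, and the paper's human-feedback fine-tuning loop and critic-model supervision are not reproduced:

```python
import numpy as np
import gymnasium as gym
from stable_baselines3 import PPO

class ChillerEnv(gym.Env):
    """Toy surrogate of a chiller plant: observe the cooling load, act on a
    normalized setpoint, get rewarded for meeting load with little energy."""
    def __init__(self):
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.load = self.np_random.uniform(-0.5, 0.5)
        self.t = 0
        return np.array([self.load, 0.0], dtype=np.float32), {}

    def step(self, action):
        self.t += 1
        a = float(action[0])
        # Energy penalty grows with |action|; comfort penalty with load mismatch.
        reward = -abs(a) - (a - self.load) ** 2
        obs = np.array([self.load, a], dtype=np.float32)
        return obs, reward, self.t >= 96, False, {}

model = PPO("MlpPolicy", ChillerEnv(), verbose=0)
model.learn(total_timesteps=20_000)    # in practice: operational data + human review
```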
379 Modelling Insider Attacks in Public Cloud
Authors: Roman Kulikov, Svetlana Kolesnikova
Abstract:
Over the last decade, cloud computing technologies have rapidly become ubiquitous. Each year, more organizations, corporations, internet services, and social networks trust their business-sensitive information to the public cloud. Data storage in the public cloud is protected by security mechanisms such as firewalls, cryptographic algorithms, backups, etc. In this way, however, only outsider attacks can be prevented, whereas virtualization tools can be easily compromised by an insider. The protection of the public cloud's critical elements from internal intruders remains extremely challenging. A hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems (OSs) to share a single hardware processor in cloud computing. One of the hypervisor's functions is to enforce access control policies. Furthermore, it prevents guest OSs from disrupting each other and from accessing each other's memory or disk space. The hypervisor is one of the most critical and vulnerable elements in the cloud computing infrastructure; nevertheless, it has been poorly protected from compromise by insiders. By exploiting certain vulnerabilities, privilege escalation can easily be achieved in insider attacks on a hypervisor. In this way, an internal intruder who has compromised one process is able to gain control of the entire virtual machine. The consequences of insider attacks in the public cloud may therefore be more catastrophic and significant to virtual tools and sensitive data than those of outsider attacks. So far, almost no preventive security countermeasures have been developed, and little attention has been paid to developing models to assist risk mitigation strategies. In this paper, a formal model of insider attacks on the hypervisor is designed. Our analysis identifies critical hypervisor vulnerabilities that can be easily compromised by an internal intruder; consequently, possible conditions for successful attack implementation are uncovered. Hence, the development of preventive security countermeasures can be improved on the basis of the proposed model.
Keywords: insider attack, public cloud, cloud computing, hypervisor
378 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association
Authors: Jacky Liu
Abstract:
This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies on sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time-series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation
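The point-spread regression step can be sketched with the xgboost package; the feature matrix below is synthetic, standing in for the engineered rolling-statistics features, and the hyperparameters are assumed values:

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical features: rolling team stats, rest days, home/away, etc.
# Target is the point spread (home score minus away score), following the
# paper's argument that spread prediction beats game-winner prediction.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = 3.0 + 5.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=8.0, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("spread MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```

A betting simulation would then compare the predicted spread against the bookmaker's line and only place bets where the model's edge exceeds the vig.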
377 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform
Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee
Abstract:
This research presents a multi-modal simulation for the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. To bring present life into the virtual environment, a scheme is presented for the rapid and efficient construction of a 360° panoramic view; an acoustic heritage model and a crowd model are then presented and incorporated into the 360° panoramic view. For the reconstruction of past life, a crowd is simulated and rendered in an old trading port. The keystone of this research, however, is a virtual walkthrough on a mobile device that shows virtual present life in 2D and virtual past life in 3D, both within an environment of virtual heritage sites in George Town. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on the mobile platform. The 2D crowd is used to portray present life in the 360° panoramic view of a virtual heritage environment based on an extension of Newtonian laws. Secondly, the 2D crowd is animated and rendered in 3D with improved variety and incorporated into the virtual past life using the Unity3D game engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques, and algorithms of this research. The virtual walkthrough was demonstrated to a group of respondents and evaluated through user-centred evaluation by navigating around the demonstration system. The results of the evaluation, based on questionnaires, show that the presented virtual walkthrough has been successfully deployed through a multi-modal simulation and that such a virtual walkthrough would be particularly useful in virtual tour and virtual museum applications.
Keywords: Boid algorithm, crowd simulation, mobile platform, Newtonian laws, virtual heritage
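The classical Boid rules the 3D crowd builds on (separation, alignment, cohesion) fit in a short numpy sketch; the neighborhood radii and weights below are assumed tuning values, not those of the paper's enhanced variant:

```python
import numpy as np

N, STEPS = 50, 200
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (N, 2))
vel = rng.normal(0, 1, (N, 2))

def limit(v, vmax=2.0):
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return np.where(speed > vmax, v * vmax / speed, v)

for _ in range(STEPS):
    d = pos[:, None, :] - pos[None, :, :]            # pairwise offsets
    dist = np.linalg.norm(d, axis=2) + np.eye(N)     # avoid self / div-by-zero
    near = dist < 10.0                               # perception radius

    cohesion = np.array([pos[m].mean(axis=0) for m in near]) - pos
    alignment = np.array([vel[m].mean(axis=0) for m in near]) - vel
    too_close = dist < 3.0
    separation = np.array([
        (d[i][too_close[i]] / dist[i][too_close[i], None] ** 2).sum(axis=0)
        for i in range(N)])                          # push away from crowding

    vel = limit(vel + 0.01 * cohesion + 0.05 * alignment + 0.1 * separation)
    pos = (pos + vel) % 100.0                        # wrap around the world

print("mean speed after flocking:", np.linalg.norm(vel, axis=1).mean())
```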
376 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as an outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of a society goes hand in hand not only with its language but also with how machines process that natural language. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: gendered grammar, misogynistic language, natural language processing, neural networks
375 Readability of Trauma-Related Patient Education Materials from the AAOS and OTA Websites
Authors: Diane Ghanem, Oscar Covarrubias, Ridge Maxson, Samir Sabharwal, Babar Shafiq
Abstract:
Introduction: Web-based resources serve as a fundamental educational platform for orthopaedic trauma patients; however, they are notoriously written at a high reading grade level and are often too complicated for patients to benefit from them. The aim of this study is to perform an updated assessment of the readability of the AAOS trauma-related educational articles and compare their readability with that of the injury-specific patient education materials developed by the OTA. Methods: All forty-six trauma-related articles on the AAOS patient education website were analyzed for readability. Two independent reviewers used (1) the Flesch-Kincaid Grade Level (FKGL) and (2) the Flesch Reading Ease (FRE) algorithms to calculate the readability level. Mean readability scores were compared across body-part categories. A one-sample t-test was done to compare the mean FKGL with the recommended 6th-grade readability level and the average American adult reading level. A two-sample t-test was used to compare the readability scores of the AAOS trauma-related articles with those of the OTA. Results: The average FKGL and FRE for the AAOS articles were 8.9±0.74 and 57.2±5.8, respectively. All articles were written above the 6th-grade reading level, and the average readability of the AAOS articles was significantly greater than both the recommended 6th-grade level and the average American adult reading level. The AAOS articles were significantly harder to read than the OTA articles (higher FKGL: 8.9±0.74 vs. 8.1±1.14; lower FRE: 57.2±5.8 vs. 65.6±6.6). Excellent agreement was observed between raters for the FKGL, 0.956 (95% CI 0.922-0.975), and the FRE, 0.993 (95% CI 0.987-0.996). Discussion: Our findings suggest that, after almost a decade, the readability of the AAOS trauma-related articles remains unchanged. The AAOS and OTA trauma patient education materials have high readability levels and may be too difficult for patient comprehension. A need remains to improve the readability of these commonly used trauma education materials.
Keywords: American Academy of Orthopaedic Surgeons, FKGL, FRE, Orthopaedic Trauma Association, patient education, readability
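The two readability formulas are public and easy to reproduce; the sketch below uses their standard coefficients with a crude syllable heuristic (real analyses typically use a dictionary-backed counter or an established tool), and the sample text is invented:

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic; real tools use pronunciation dictionaries."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences               # words per sentence
    spw = syllables / len(words)               # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return fkgl, fre

sample = ("After surgery, keep the cast dry. Call your doctor if you notice "
          "increasing pain, numbness, or swelling.")
fkgl, fre = readability(sample)
print(f"FKGL: {fkgl:.1f}  FRE: {fre:.1f}")     # target: FKGL <= 6 for patients
```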