Search results for: optimization algorithms
3262 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images
Authors: Masood Varshosaz, Kamyar Hasanpour
Abstract:
In recent years, we have seen growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep the data acquisition cost low, augmentation techniques can be used to create additional data from existing images. There are many such techniques that can generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, it is important to evaluate the impact of data augmentation on the performance of the deep learning models. In this paper, we evaluated the main currently available 2D data augmentation techniques on a standard convolutional network trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combination. The results showed that the augmented models perform 1-3% better than a base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. Thus, we suggest a new method that employs simulated 3D human models to generate new data for training the network.
Keywords: human recognition, deep learning, drones, disaster mitigation
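As an illustration, a 2D augmentation pipeline of the kind evaluated here can be sketched with torchvision; the parameter values (rotation range, crop scale, shift fraction) are assumptions for demonstration, not the settings used by the authors.

    import torchvision.transforms as T

    # The augmentations evaluated above: rotation, scaling, random cropping,
    # flipping, and shifting; parameter values are illustrative only.
    augment = T.Compose([
        T.RandomRotation(degrees=15),                     # rotation
        T.RandomResizedCrop(size=224, scale=(0.8, 1.0)),  # scaling + random cropping
        T.RandomHorizontalFlip(p=0.5),                    # flipping
        T.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # shifting
        T.ToTensor(),
    ])
    # augmented = augment(pil_image)  # apply per PIL training image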
Procedia PDF Downloads 101
3261 Prediction of Physical Properties and Sound Absorption Performance of Automotive Interior Materials
Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Seong-Jin Cho, Tae-Hyeon Oh, Dae-Kyu Park
Abstract:
The sound absorption coefficient is an important design consideration because noise affects the perceived quality of a car. In practice it is tuned through extensive field experiments, because predicting it for multi-layer materials is unreliable. In this paper, we present the design of sound absorption for multi-layer automotive interior materials using software that estimates the sound absorption coefficient for a reverberation chamber. Additionally, we introduce a method for estimating the physical properties required to predict the sound absorption coefficient of multi-layer car interior materials; these properties are calculated by an inverse algorithm. This makes it very economical to obtain physical property information without expensive equipment. A correlation test is carried out to ensure the reliability of the estimates; the data used for the correlation are sound absorption coefficients measured in the reverberation chamber. In this way, the design of automotive interior materials becomes economical and efficient, and design optimization of the sound absorption coefficient is also easy to implement.
Keywords: sound absorption coefficient, optimization design, inverse algorithm, automotive interior material, multiple layers nonwoven, scaled reverberation chamber, sound impedance tubes
Procedia PDF Downloads 312
3260 TimeTune: Personalized Study Plans Generation with Google Calendar Integration
Authors: Chevon Fernando, Banuka Athuraliya
Abstract:
The purpose of this research is to provide a solution to students’ time management, which often becomes an issue because students must balance their studies with personal commitments. "TimeTune," an AI-based study planner that makes study timeframes adjustable by combining modern machine learning algorithms with calendar applications, is presented as the proposed solution. The research focuses on the development of LSTM models that connect to the Google Calendar API to build learning paths fitted to an individual student's daily life and study history. A key finding of this research is the success in building an LSTM model that predicts optimal study times and, integrated with real-time Google Calendar data, generates timetables automatically in a personalized and customized manner. The methodology encompasses Agile development practices and Object-Oriented Analysis and Design (OOAD) principles, focusing on user-centric design and iterative development. By adopting this method, students can significantly reduce the tension associated with poor study habits and time management. In conclusion, "TimeTune" represents an advanced step in personalized education technology. Its innovative combination of ML algorithms and calendar integration stands to steadily improve students' lives, and by helping them maintain a balanced academic and personal life, the application promises to reduce the stress of managing their studies.
Keywords: personalized learning, study planner, time management, calendar integration
Procedia PDF Downloads 53
3259 Cryptographic Resource Allocation Algorithm Based on Deep Reinforcement Learning
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must fully cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing cyber threat environment, and traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
Keywords: cloud computing, cryptography on-demand service, reinforcement learning, workflow scheduling
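The abstract does not state the exact objective; one hedged reading of the multi-objective formulation is a weighted scalarization of the three criteria used as the agents' reward, where the weights w_i are assumptions:

    R = -\left( w_1 E_{task} + w_2 C_{mig} \right) + w_3 F_{service}, \qquad w_1 + w_2 + w_3 = 1

Here E_task is task energy consumption, C_mig migration cost, and F_service the fitness of differentiated services, so maximizing R trades the three objectives off against one another.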
Procedia PDF Downloads 24
3258 Coordinated Renewal Planning of Civil Infrastructure Systems
Authors: Hesham Osman
Abstract:
The challenges facing aging urban infrastructure systems require a more holistic and comprehensive approach to their management. The large number of urban infrastructure renewal activities occurring in cities throughout the world leads to social, economic and environmental impacts on the communities in their vicinity. As such, a coordinated effort is required to streamline these activities. This paper presents a framework to enable temporal (time-based) coordination of water, sewer and road intervention activities. Intervention activities include routine maintenance, renewal, and replacement of physical assets. The coordination framework considers 1) life-cycle costs, 2) infrastructure level-of-service, and 3) risk exposure to system operators. The model enables infrastructure asset managers to trade off delaying versus bringing forward the intervention activities of one system so that they are executed in conjunction with another co-located system in the right-of-way. The framework relies on a combination of meta-heuristics and goal-based optimization. To demonstrate the applicability of the framework, a case study of a major infrastructure corridor in Cairo, Egypt is presented. Results show that the framework can be scaled up to include other infrastructure systems located in the right-of-way, such as electricity, gas and telecom, provided that information can be shared among these entities.
Keywords: infrastructure, rehabilitation, construction, optimization
Procedia PDF Downloads 300
3257 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures like electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in simulations and the repulsion distance in experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined; these offer valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was discovered by analyzing the eddy current and field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and magnetic field gradient increase with particle size in eddy current separation. Based on this, we further found that increasing the curvature of magnetic field lines within particles can also increase the eddy current force, providing an optimized method for improving the separation efficiency of fine particles. By combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of magnetic field lines, and the electrical conductivity/density of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can be fixed accordingly. In summary, these results can guide the design and optimization of ECS and also expand its application areas.
Keywords: eddy current separation, particle size, numerical simulation, metal recovery
Procedia PDF Downloads 94
3256 A Mixed Integer Programming Model for Optimizing the Layout of an Emergency Department
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
In recent years, demand for healthcare services has dramatically increased. As the demand for healthcare services increases, so does the necessity of constructing new healthcare buildings and redesigning and renovating existing ones. Increasing demands necessitate the use of optimization techniques to improve the overall service efficiency in healthcare settings. However, the high complexity of care processes remains the major challenge to accomplishing this goal. This study proposes a method based on process mining results to address the high complexity of care processes and to find the optimal layout of the various medical centers in an emergency department. The ProM framework is used to discover clinical pathway patterns and relationships between activities. A sequence clustering plug-in is used to remove infrequent events and to derive the process model in the form of a Markov chain. The process mining results serve as input for the next phase, which consists of the development of the optimization model. Comparison of the current ED design with the one obtained from the proposed method indicates that a carefully designed layout can significantly decrease the distances that patients must travel.
Keywords: mixed integer programming, facility layout problem, process mining, healthcare operation management
Procedia PDF Downloads 345
3255 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications
Authors: Farhad Salek, Shahaboddin Resalati
Abstract:
The production of electric vehicles (EVs) featuring lithium-ion battery technology has substantially escalated over the past decade, demonstrating a steady and persistent upward trajectory. The imminent retirement of electric vehicle (EV) batteries after approximately eight years underscores the critical need for their redirection towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution for such concerns involves extending the operational lifespan of electric vehicle (EV) batteries through their utilization in stationary energy storage systems in secondary applications. Such adoption, however, requires addressing the safety concerns associated with batteries’ knee points and thermal runaway. This paper develops an accurate mathematical model representative of second-life battery packs at a cell-to-pack scale using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries' aging history in order to develop a degradation model. The degradation model is integrated with the ECM to reflect the impacts of the cycle aging mechanism on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the life span of the batteries in various operating conditions. The methodology and the algorithms introduced in this paper can be considered the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
Keywords: second life battery, electric vehicles, degradation, neural network
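For reference, a first-order Thevenin equivalent circuit is one common form of the ECM methodology named above (the circuit order actually used by the authors is not stated); it relates terminal voltage to open-circuit voltage, ohmic resistance, and a single RC polarization branch:

    V_t = V_{OC}(SOC) - I R_0 - V_1, \qquad \frac{dV_1}{dt} = -\frac{V_1}{R_1 C_1} + \frac{I}{C_1}

Degradation then enters as drift in the parameters R_0, R_1, C_1 and capacity, which is what the neural network forecasts from the aging history.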
Procedia PDF Downloads 71
3254 Yawning Computing Using Bayesian Networks
Authors: Serge Tshibangu, Turgay Celik, Zenzo Ncube
Abstract:
Road crashes kill more than a million people every year and leave millions more injured or permanently disabled. Various annual reports reveal that the percentage of fatal crashes due to fatigue or the driver falling asleep comes directly after the percentage of fatal crashes due to intoxicated drivers, and is higher than the combined percentage of fatal crashes due to illegal/unsafe U-turns and illegal/unsafe reversing. Although a relatively small percentage of police reports on road accidents highlights drowsiness and fatigue, the importance of these factors is greater than we might think, hidden by the undercounting of such events. Some scenarios show that these factors are significant in accidents involving fatalities and injuries. Hence the need for an automatic driver fatigue detection system in order to considerably reduce the number of accidents owing to fatigue. This research approaches the driver fatigue detection problem in an innovative way by combining cues collected from both temporal analysis of drivers’ faces and the environment. Monotony in the driving environment is inter-related with visual symptoms of fatigue on drivers’ faces to achieve fatigue detection. Optical and infrared (IR) sensors are used to analyse the monotony of the driving environment and to detect the visual symptoms of fatigue on the human face. Internal cues from drivers’ faces and external cues from the environment are combined using machine learning algorithms to automatically detect fatigue.
Keywords: intelligent transportation systems, Bayesian networks, yawning computing, machine learning algorithms
Procedia PDF Downloads 459
3253 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been made in this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on performance criteria, the most important being accuracy of skew angle detection, range of detectable skew angles, processing speed, computational complexity and, consequently, memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform depends largely on how fine the angular step size is; finer steps consume more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts by estimating the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy. Then, to increase accuracy, if the angle estimated by the basic Hough algorithm is x degrees, we rerun the basic algorithm over a range of ±x degrees at an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented in MATLAB, and the memory space and processing time are tabulated, with skew angles assumed to lie between 0° and 45°. The MATLAB simulation results show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
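The coarse-to-fine refinement described above can be sketched in a few lines using scikit-image's Hough line transform; `edges` is an assumed binarized document image, and the ±1° refinement window and 0.1° step are illustrative assumptions, not the paper's values.

    import numpy as np
    from skimage.transform import hough_line

    def estimate_skew(edges, center=0.0, half_range=45.0, step=1.0):
        # Accumulate votes over [center - half_range, center + half_range]
        # at the given angular step and return the strongest angle (degrees).
        thetas = np.deg2rad(np.arange(center - half_range,
                                      center + half_range + step, step))
        hspace, angles, _ = hough_line(edges, theta=thetas)
        return np.rad2deg(angles[hspace.max(axis=0).argmax()])

    coarse = estimate_skew(edges, step=1.0)           # whole-degree pass
    fine = estimate_skew(edges, center=coarse,
                         half_range=1.0, step=0.1)    # one-decimal refinement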
Procedia PDF Downloads 164
3252 NaOH/Pumice and LiOH/Pumice as Heterogeneous Solid Base Catalysts for Biodiesel Production from Soybean Oil: An Optimization Study
Authors: Joy Marie Mora, Mark Daniel De Luna, Tsair-Wang Chung
Abstract:
The transesterification reaction of soybean oil with methanol was carried out to produce fatty acid methyl esters (FAME) using calcined alkali metals (Na and Li) supported on pumice silica as the solid base catalysts. The pumice silica catalyst was activated by loading alkali metal ions onto its surface via an ion-exchange method. Response surface methodology (RSM) in combination with a Box-Behnken design (BBD) was used to optimize the operating parameters in biodiesel production, namely: reaction temperature, methanol to oil molar ratio, reaction time, and catalyst concentration. Using the optimized sets of parameters, the FAME yields with the sodium and lithium silicate catalysts were 98.80% and 98.77%, respectively. A pseudo-first-order kinetic equation was applied to evaluate the kinetic parameters of the reaction. The prepared catalysts were characterized by several techniques, such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), a Brunauer-Emmett-Teller (BET) sorptometer, and scanning electron microscopy (SEM). In addition, the reusability of the catalysts was successfully tested in two subsequent cycles.
Keywords: alkali metal, biodiesel, Box-Behnken design, heterogeneous catalyst, kinetics, optimization, pumice, transesterification
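The pseudo-first-order treatment mentioned here, under the usual assumption of a large excess of methanol, reduces the rate law to a form linear in time, from which the rate constant k is read off as the slope (this is the standard form of the model; the abstract does not spell out the equation):

    -\ln(1 - X) = k\,t

where X is the FAME conversion at time t.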
Procedia PDF Downloads 307
3251 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study Utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, there is still an incomplete understanding of its underlying pathophysiology. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method. We applied the different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular disease), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
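The feature selection step described above can be sketched as follows; `X` (the comorbidity-code matrix) and `y` (IBS vs. control labels) are assumed names, and the hyperparameters are illustrative.

    import numpy as np
    from xgboost import XGBClassifier

    # Fit on all comorbidity codes, then keep the top 10% of features by
    # importance (~50 here) for refitting the downstream models.
    model = XGBClassifier(n_estimators=200, eval_metric="logloss")
    model.fit(X, y)                        # y: 1 = IBS code present, 0 = control
    k = max(1, int(0.10 * X.shape[1]))     # top 10% of features
    top_idx = np.argsort(model.feature_importances_)[::-1][:k]
    X_top = X[:, top_idx]                  # refit the other classifiers on X_top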
Procedia PDF Downloads 124
3250 Energy Efficient Retrofitting and Optimization of Dual Mixed Refrigerant Natural Gas Liquefaction Process
Authors: Muhammad Abdul Qyyum, Kinza Qadeer, Moonyong Lee
Abstract:
Globally, liquefied natural gas (LNG) has drawn interest as a green energy source in comparison with other fossil fuels, mainly because of its ease of transport and low carbon dioxide emissions. It is expected that demand for LNG will grow steadily over the next few decades. In addition, because the demand for clean energy is increasing, LNG production facilities are expanding into new natural gas reserves across the globe. However, LNG production is an energy and cost intensive process because of the huge power requirements for compression and refrigeration. Therefore, one of the major challenges in the LNG industry is to improve the energy efficiency of existing LNG processes through economic and ecological strategies. Advancements in expansion devices such as the two-phase cryogenic expander (TPE) and the cryogenic hydraulic turbine (HT) can be exploited for energy and cost benefits in natural gas liquefaction: retrofitting the conventional Joule-Thomson (JT) valve with a TPE or HT has the potential to improve the energy efficiency of LNG processes. This research investigated the feasibility of retrofitting a dual mixed refrigerant (DMR) process by replacing the isenthalpic expansion with isentropic expansion for energy-efficient LNG production. To take full advantage of the proposed retrofit, the proposed DMR schemes were optimized using a Coggins optimization approach, implemented in the Microsoft Visual Studio (MVS) environment and linked to a rigorous HYSYS® model. The results showed that the energy requirement of the proposed isentropic-expansion-based DMR process can be reduced by up to 26.5% in comparison with the conventional isenthalpic-based DMR process using JT valves. Utilizing the recovered energy to boost the natural gas feed pressure can further improve the energy efficiency of the LNG process, up to 34% compared to the base case. This work will help process engineers overcome challenges relating to the energy efficiency and safety of LNG processes. Furthermore, the proposed retrofitting scheme can also be implemented to improve the energy efficiency of other isenthalpic-expansion-based, energy-intensive cryogenic processes.
Keywords: cryogenic liquid turbine, Coggins optimization, dual mixed refrigerant, energy efficient LNG process, two-phase expander
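The thermodynamic distinction driving these savings is the standard one: a JT valve expands the stream at constant enthalpy and recovers no work, while an ideal expander follows constant entropy and returns shaft work that can, for example, boost the feed pressure:

    \text{JT valve: } h_{out} = h_{in},\; w = 0 \qquad \text{expander: } s_{out} = s_{in},\; w = h_{in} - h_{out} > 0

A real expander recovers a fraction of this ideal work, set by its isentropic efficiency.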
Procedia PDF Downloads 149
3249 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies
Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk
Abstract:
The application of AI-powered algorithms in healthcare continues to flourish. In particular, access to healthcare information, including patient health history, diagnostic data, and PII (Personally Identifiable Information), is paramount to the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person’s information and privacy has become even more important. Arguably, the increased adoption of healthcare AI has concentrated attention on the security risks to healthcare data and the corresponding protection measures, leading to escalated analysis and enforcement. Since these challenges arise from the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems. Consequently, this project proposes AI-powered safeguards and policies/laws to protect the privacy of healthcare data. The project presents the best-in-class techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods such as federated learning, cryptographic techniques, differential privacy methods, and hybrid methods are discussed, together with potential cyber threats, data security concerns, and prospects. The project also discusses some of the relevant data security acts/laws that govern the collection, storage, and processing of healthcare data to guarantee that owners’ privacy is preserved. This inquiry discusses various gaps and uncertainties associated with healthcare AI data collection procedures and identifies potential correction/mitigation measures.
Keywords: data privacy, artificial intelligence (AI), healthcare AI, data sharing, healthcare organizations (HCOs)
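Of the methods named above, differential privacy is the simplest to sketch: the Laplace mechanism below is the textbook release of a numeric query under epsilon-differential privacy (the function and parameter names are illustrative, not from the project).

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        # Add Laplace(0, sensitivity/epsilon) noise to a query answer; smaller
        # epsilon means stronger privacy and noisier output.
        rng = rng or np.random.default_rng()
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # e.g., release a patient count (sensitivity 1: one person changes it by 1)
    noisy_count = laplace_mechanism(true_value=412, sensitivity=1, epsilon=0.5)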
Procedia PDF Downloads 100
3248 Integrated Intensity and Spatial Enhancement Technique for Color Images
Authors: Evan W. Krieger, Vijayan K. Asari, Saibabu Arigela
Abstract:
Video imagery for real-time security and surveillance applications is typically captured in complex lighting conditions. These less-than-ideal conditions can result in imagery with underexposed or overexposed regions, and it is also typical for the video to be too low in resolution for certain applications. The purpose of security and surveillance video is to enable accurate conclusions to be drawn from the images seen in the video. Therefore, if poor lighting and low resolution occur in the captured video, the ability to make accurate conclusions based on the received information is reduced. We propose a solution to this problem by using image preprocessing to improve these images before use in a particular application. The proposed algorithm integrates an intensity enhancement algorithm with a super resolution technique. The intensity enhancement portion consists of a nonlinear inverse sine transformation and an adaptive contrast enhancement. The super resolution section is a single-image super resolution technique: a Fourier phase feature based method that uses a machine learning approach with kernel regression. The proposed technique intelligently integrates these algorithms to produce a high quality output while also being more efficient than their sequential use. This integration is accomplished by performing the proposed algorithm on the intensity image produced from the original color image. After enhancement and super resolution, a color restoration technique is employed to obtain an improved-visibility color image.
Keywords: dynamic range compression, multi-level Fourier features, nonlinear enhancement, super resolution
Procedia PDF Downloads 558
3247 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method
Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen
Abstract:
This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine to bore an over-length workpiece. In order to verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels each, in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effects of these parameters and to optimize the machine setting for the best surface finish. The controlled factors with the most effect are, in order, depth of cut, spindle speed, length of workpiece, and feed rate. A confirmation test is performed to verify the optimal setting obtained from the Taguchi method, and the result is satisfactory.
Keywords: design of experiment, Taguchi design, optimization, analysis of variance, machining parameters, horizontal boring tool
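Since surface roughness is to be minimized, the signal-to-noise ratio analysis presumably uses Taguchi's smaller-the-better criterion, the standard form for this goal (the abstract does not state it explicitly):

    S/N = -10 \log_{10}\!\left( \frac{1}{n} \sum_{i=1}^{n} y_i^{2} \right)

where y_i are the measured roughness values in a trial and n is the number of replicates; the level of each factor that maximizes S/N is selected.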
Procedia PDF Downloads 443
3246 Optimization of Hemp Fiber Reinforced Concrete for Various Environmental Conditions
Authors: Zoe Chang, Max Williams, Gautham Das
Abstract:
The purpose of this study is to evaluate the incorporation of hemp fibers (HF) in concrete. Hemp fiber reinforced concrete (HFRC) is becoming more popular as an alternative to regular mix designs. This study evaluated the compressive strength of HFRC with respect to the mixing procedure. Hemp fibers were obtained from the manufacturer and hand-processed to ensure uniformity in width and length. The fibers were added to the concrete in both wet and dry mixes to investigate and optimize the mix design process. Results indicated that the dry mix had a compressive strength of 1157 psi compared to 985 psi for the wet mix; the dry mix compressive strength was within range of the standard mix compressive strength of 1533 psi. The statistical analysis revealed that the mix design process needs further optimization and uniformity concerning the addition of HF. Regression analysis gave the standard mix design a coefficient of 0.9, compared to 0.375 for the dry mix, indicating variation in the mixing process. During the dry mix, the addition of plain hemp fibers caused them to intertwine, creating lumps and inconsistency. During the wet mixing process, however, combining water and hemp fibers before incorporation allows the fibers to disperse uniformly within the mix; hence the regression analysis indicated a better coefficient of 0.55. This study concludes that HFRC is a viable alternative to regular mixes; however, more research surrounding its characteristics needs to be conducted.
Keywords: hemp fibers, hemp reinforced concrete, wet & dry, freeze thaw testing, compressive strength
Procedia PDF Downloads 204
3245 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms
Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga
Abstract:
Today's websites contain very interesting applications, but there are only a few methodologies to analyze user navigation through a website and determine whether the site is being put to correct use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they contain a lot of interesting information about users of the system. Analyzing web logs has become a challenge due to the huge log volume; finding interesting patterns is not easy because of the size and distribution of the logs and the importance of minor details in each entry. Web logs thus contain very important data about users and the site which have not been put to good use. Retrieving interesting information from the logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site into an effective and efficient one. The model we built is able to detect attacks, malfunctioning of the system, and anomalies. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is used only in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files, a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering results for the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms self-evaluate in order to feed better parametric values into the next run (see the sketch below); if a cluster is found to be too large, micro-clustering is used. Using the cluster signature module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to an associative rule learning module; if it outputs confidence and support of value 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications for finance, university, and news and media websites.
Keywords: anomaly detection, clustering, pattern recognition, web sessions
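A minimal sketch of the self-evaluating clustering loop described above, using scikit-learn's DBSCAN and silhouette scoring; `sessions` is an assumed feature matrix built from the indexed web sessions, and the eps candidates are illustrative.

    from sklearn.cluster import DBSCAN
    from sklearn.metrics import silhouette_score

    # Try several eps values and keep the labeling with the best silhouette
    # coefficient; a real run would also tune min_samples and feed the EM step.
    best_labels, best_score = None, -1.0
    for eps in (0.3, 0.5, 0.8, 1.2):
        labels = DBSCAN(eps=eps, min_samples=5).fit_predict(sessions)
        if len(set(labels)) > 1:               # need more than one group to score
            score = silhouette_score(sessions, labels)
            if score > best_score:
                best_labels, best_score = labels, score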
Procedia PDF Downloads 290
3244 Dynamic Route Optimization in Vehicle Adhoc Networks: A Heuristics Routing Protocol
Authors: Rafi Ullah, Shah Muhammad Emaduddin, Taha Jilani
Abstract:
Vehicle Adhoc Networks (VANETs) belong to a special class of Mobile Adhoc Networks (MANETs) with high mobility. The network is created by roadside vehicles equipped with communication devices such as GPS and WiFi. Since the environment is highly dynamic, owing to differences in speed, the high mobility of vehicles and the weak stability of network connections, it is a challenging task to design an efficient routing protocol for such an unstable environment. Our proposed algorithm uses a heuristic to calculate the optimal path for routing packets efficiently, in collaboration with several other parameters such as geographical location, speed, priority, the distance among vehicles, communication range, and network congestion. We have incorporated probabilistic, heuristic and machine learning based approaches, in conjunction with the relay function of the memory buffer, to keep the packet moving towards the destination. These parameters, when used in collaboration, provide a very strong and admissible heuristic. We have mathematically proved that the proposed technique is efficient for the routing of packets, especially in medical emergency situations. These networks can be used for medical emergency, security, entertainment and routing purposes.
Keywords: heuristics routing, intelligent routing, VANET, route optimization
Procedia PDF Downloads 186
3243 Study of Launch Recovery Control Dynamics of Retro Propulsive Reusable Rockets
Authors: Pratyush Agnihotri
Abstract:
Space missions are very costly because transportation to space is highly expensive; there is therefore a need to achieve complete reusability in our launch vehicles to make missions far more economical through recovery of the hardware. Launcher reusability is the most efficient approach to decreasing the cost of access to space, but it remains a great technical hurdle for the aerospace industry. A major part of the difficulty lies in the guidance and control procedures and algorithms, specifically those of the controlled landing phase, which must enable a precise landing with low fuel margins. Although state-of-the-art approaches for navigation and control exist, viz. hybrid navigation and robust control, for powered descent and landing of the first stage of a launch vehicle the guidance and control need to enable on-board optimization. First, a CAD model of the launch vehicle, i.e., the SpaceX Falcon 9 rocket, is presented for a better understanding of the architecture that must be identified for the guidance and control solution for recovery of the launcher. The focus is on providing the landing-phase guidance scheme for recovery and reusability of the first stage using retro propulsion. After reviewing various GNC solutions, online convex and successive optimization are explored as guidance schemes to achieve the prerequisite landing accuracy.
Keywords: guidance, navigation, control, retro propulsion, reusable rockets
Procedia PDF Downloads 96
3242 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially those that are longitudinal, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's Disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm to handle missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes of AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and a recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme gradient boosting, missing data, Alzheimer's disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest
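XGBoost's appeal here is that it learns a default branch for missing values at every split, so NaN entries need no imputation. A hedged sketch of the evaluation protocol follows; `X`, `y`, the hyperparameters and the label coding are assumptions for illustration.

    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from xgboost import XGBClassifier

    # X may contain np.nan (~28% here); y in {0: CN, 1: EMCI, 2: LMCI, 3: AD}.
    clf = XGBClassifier(n_estimators=300, eval_metric="mlogloss")
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(scores.mean())    # 10-fold cross-validated multiclass accuracy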
Procedia PDF Downloads 191
3241 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and the performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that focuses on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards from sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model was tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over the difficulty of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
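For reference, the Weight of Evidence that the Hybrid Model estimates per bin is conventionally defined as (this is the standard definition, not spelled out in the abstract):

    \mathrm{WoE}_i = \ln\!\left( \frac{g_i / G}{b_i / B} \right)

where g_i and b_i are the good and bad counts in bin i, and G and B the portfolio totals. Matching the ML score distribution instead of computing this ratio directly is what lets the model assign stable WoE values to bins whose raw good/bad counts are too sparse.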
Procedia PDF Downloads 140
3240 Design and Implementation of Low-code Model-building Methods
Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu
Abstract:
This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies the model deployment process. The core strength of this method lies in its ease of use and efficiency: users do not need a deep programming background and can complete the design and implementation of complex models with simple drag-and-drop operations. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method makes more efficient use of computing resources and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability and can adapt to the needs of different application scenarios.
Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment
Procedia PDF Downloads 35
3239 Introduction to Multi-Agent Deep Deterministic Policy Gradient
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must fully cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing cyber threat environment, and traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem, and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents
Procedia PDF Downloads 31
3238 Facile Synthesis and Structure Characterization of Europium (III) Tungstate Nanoparticles
Authors: Mehdi Rahimi-Nasrabadi, Seied Mahdi Pourmortazavi
Abstract:
Taguchi robust design, a statistical method, was applied to optimize the process parameters for tunable, simple and fast synthesis of europium (III) tungstate nanoparticles. Europium (III) tungstate nanoparticles were synthesized by a chemical precipitation reaction involving direct addition of an aqueous europium ion solution to the tungstate reagent dissolved in aqueous media. The effects of several synthesis variables, i.e., europium and tungstate concentrations, flow rate of cation reagent addition, and reactor temperature, on the particle size of europium (III) tungstate nanoparticles were studied experimentally in order to tune the particle size. Analysis of variance shows the importance of controlling the tungstate concentration, cation feeding flow rate and temperature for the preparation of europium (III) tungstate nanoparticles by the proposed chemical precipitation reaction. Finally, europium (III) tungstate nanoparticles were synthesized under the optimum conditions of the proposed method, and the morphology and chemical composition of the prepared nano-material were characterized by means of X-ray diffraction, scanning electron microscopy, transmission electron microscopy, FT-IR spectroscopy, and fluorescence.
Keywords: europium (III) tungstate, nano-material, particle size control, procedure optimization
Procedia PDF Downloads 399
3237 Applications for Additive Manufacturing Technology for Reducing the Weight of Body Parts of Gas Turbine Engines
Authors: Liubov Magerramova, Mikhail Petrov, Vladimir Isakov, Liana Shcherbinina, Suren Gukasyan, Daniil Povalyukhin, Olga Klimova-Korsmik, Darya Volosevich
Abstract:
Aircraft engines are developing along the path of increasing service life, strength, reliability, and safety. The building of gas turbine engine body parts is a complex design and technological task; particularly complex in design and manufacturing are the casings of the input stages of helicopter gearboxes and the central drives of aircraft engines. Traditional technologies, such as precision casting or isothermal forging, are characterized by significant limitations in parts production. For parts like housings, additive technologies guarantee spatial freedom and limitless, flexible design. This article presents the results of computational and experimental studies. These investigations justify the applicability of additive technologies (AT) to reduce the weight of aircraft gearbox housing parts by up to 32%. This is possible through geometrical optimization, compared to classical, less flexible manufacturing methods and as-cast aircraft parts with overly conservative safety factors. Using the example of the body of the input stage of an aircraft gearbox, a visualization of the layer-by-layer manufacturing of a part, accounting for thermal deformation, was demonstrated.
Keywords: additive technologies, gas turbine engines, topological optimization, synthesis process
Procedia PDF Downloads 122
3236 On-Chip Sensor Ellipse Distribution Method and Equivalent Mapping Technique for Real-Time Hardware Trojan Detection and Location
Authors: Longfei Wang, Selçuk Köse
Abstract:
Hardware Trojans have become a great concern as integrated circuit (IC) technology advances and not all manufacturing steps of an IC are accomplished within one company. Real-time hardware Trojan detection is proven to be a feasible way to detect randomly activated Trojans that cannot be detected at the testing stage. On-chip sensors are a strong candidate for implementing real-time hardware Trojan detection; however, the optimization of on-chip sensors has not been thoroughly investigated, and the location of Trojans has not been carefully explored. An on-chip sensor ellipse distribution method and an equivalent mapping technique are proposed in this paper, based on the characteristics of the on-chip power delivery network, to address the optimization and distribution of on-chip sensors for real-time hardware Trojan detection as well as to estimate the location and current consumption of a hardware Trojan. Simulation results verify that hardware Trojan activation can be effectively detected and the location of a hardware Trojan can be efficiently estimated, with less than 5% error for a realistic power grid, using our proposed methods. The proposed techniques therefore lay a solid foundation for the isolation and even deactivation of hardware Trojans through their accurate localization.
Keywords: hardware trojan, on-chip sensor, power distribution network, power/ground noise
Procedia PDF Downloads 395
3235 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day, and the growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data; they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves query execution performance by maximizing the degree of parallel execution, contributing to QoS (Quality of Service) improvement in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query
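The abstract does not detail the attribute measure used by the Matching algorithm, but the keywords name Hamming distance; a heavily hedged sketch of comparing attributes by the Hamming distance between their binary query-usage vectors (the workload matrix and all names are assumed) might look like:

    import numpy as np

    # usage[q, a] = 1 if frequent query q accesses attribute a (assumed workload)
    usage = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 1]])

    def hamming(a, b):
        # Distance between two attributes' usage columns: a small distance means
        # the attributes are requested together and are candidates to co-locate.
        return int(np.sum(usage[:, a] != usage[:, b]))

    n = usage.shape[1]
    dist = [[hamming(i, j) for j in range(n)] for i in range(n)]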
Procedia PDF Downloads 159
3234 Tree-Based Inference for Regionalization: A Comparative Study of Global Topological Perturbation Methods
Authors: Orhun Aydin, Mark V. Janikas, Rodrigo Alves, Renato Assuncao
Abstract:
In this paper, a tree-based perturbation methodology for regionalization inference is presented. Regionalization is a constrained optimization problem that aims to create groups with similar attributes while satisfying spatial contiguity constraints. As in any constrained optimization problem, the spatial constraint may hinder convergence to a global minimum, resulting in spatially contiguous members of a group with dissimilar attributes. This paper presents a general methodology for rigorously perturbing spatial constraints through the use of random spanning trees. The framework can be used to quantify the effect of the spatial constraints on the overall regionalization result. We compare several types of stochastic spanning trees used in inference problems such as fuzzy regionalization and determining the number of regions. The performance of stochastic spanning trees is juxtaposed against the traditional permutation-based hypothesis testing frequently used in spatial statistics. Inference results for fuzzy regionalization and for determining the number of regions are presented on the Local Area Personal Incomes for Texas Counties provided by the Bureau of Economic Analysis.
Keywords: regionalization, constrained clustering, probabilistic inference, fuzzy clustering
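One standard way to draw random spanning trees for such perturbation is to assign random weights to the edges of the contiguity graph and take a minimum spanning tree; each draw yields a different tree. Whether the paper uses this particular sampler is an assumption, and `G` (the county adjacency graph) is an assumed input.

    import random
    import networkx as nx

    def random_spanning_tree(G, rng):
        # Randomize edge weights; the MST of the reweighted graph is then a
        # random spanning tree of the original contiguity graph.
        H = G.copy()
        for u, v in H.edges():
            H[u][v]["weight"] = rng.random()
        return nx.minimum_spanning_tree(H, weight="weight")

    trees = [random_spanning_tree(G, random.Random(seed)) for seed in range(100)]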
Procedia PDF Downloads 232
3233 Field Oriented Control of Electrical Motor for Efficiency Improvement of Aerial Vehicle
Authors: Francois Defay
Abstract:
Uses of unmanned aerial vehicles (UAVs) are increasing across many applications, and long-endurance UAVs are required for inspection or transportation in deserted places. The global optimization of efficiency is the aim of the work at ISAE-SUPAERO: from the propulsion system to the motor control, global optimization can significantly increase overall efficiency. This paper deals with the global improvement of the efficiency of electrical propulsion for aerial vehicles. The application case study is a small 2 kg airplane. A global model is presented in order to validate the electric motor in a complete simulation from aerodynamics to battery. The classical control of the permanent magnet synchronous drive is compared to field-oriented control, which has not yet been applied to UAVs. The experimental results presented show an efficiency increase of more than 10 percent. A complete model and simulation based on MATLAB/Simulink are presented in this paper and compared to the experimental study. Finally, this paper presents solutions to increase the endurance of the electrical aerial vehicle and provides models to optimize the global consumption for a specific mission. The next step is to use this model and the control to work with distributed propulsion, which is the future for short-range aircraft.
Keywords: electrical propulsion, endurance, field-oriented control, UAV
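Field-oriented control rests on projecting the measured stator currents into the rotating rotor frame via the Clarke and Park transforms (standard equations, stated here for reference), so that flux and torque can be regulated independently through the direct and quadrature components:

    i_d = i_\alpha \cos\theta + i_\beta \sin\theta, \qquad i_q = -i_\alpha \sin\theta + i_\beta \cos\theta

where \theta is the electrical rotor angle; in a typical FOC scheme, separate current loops on i_d and i_q then regulate flux and torque, which is what yields the efficiency gain over classical control.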
Procedia PDF Downloads 238