Search results for: back propagation neural network model
19329 An Efficient Proxy Signature Scheme Over a Secure Communications Network
Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi
Abstract:
A proxy signature scheme permits an original signer to delegate his/her signing capability to a proxy signer, who then generates a signed message on behalf of the original signer. The two parties must be able to authenticate one another and agree on a secret encryption key in order to communicate securely over an unreliable public network. Authenticated key agreement protocols play an important role in building a secure communications network between the two parties. In this paper, we present a secure proxy signature scheme over an efficient and secure authenticated key agreement protocol based on the discrete logarithm problem.
Keywords: proxy signature, warrant partial delegation, key agreement, discrete logarithm
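The abstract does not give the protocol's details; as a rough illustration of a discrete-logarithm-based key agreement, a toy Diffie-Hellman-style exchange can be sketched as follows (the prime, generator, and secrets are illustrative stand-ins, far too small for real use):

```python
# Toy Diffie-Hellman-style key agreement over a prime field.
# Security rests on the discrete logarithm problem: recovering
# a from G^a mod P is assumed hard for properly sized parameters.
P = 2**61 - 1          # a Mersenne prime (toy size, not secure)
G = 5                  # public generator

def keypair(secret):
    """Return (secret, public) where public = G^secret mod P."""
    return secret, pow(G, secret, P)

def shared_key(my_secret, their_public):
    """Both parties compute the same value: G^(a*b) mod P."""
    return pow(their_public, my_secret, P)

a, A = keypair(123456789)   # original signer's side
b, B = keypair(987654321)   # proxy signer's side
key = shared_key(a, B)      # equals shared_key(b, A)
```

Each side combines its own secret with the other side's public value, arriving at the same key without ever transmitting a secret.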
Procedia PDF Downloads 345
19328 Impact of Joule Heating on the Electrical Conduction Behavior of Carbon Composite Laminates under Simulated Lightning Strike
Authors: Hong Yu, Dirk Heider, Suresh Advani
Abstract:
Increasing demands for high-strength and lightweight materials in the aircraft industry have prompted the wide use of carbon composites in recent decades. Carbon composite laminates used on aircraft structures are subject to lightning strikes. Unlike their metal/alloy counterparts, carbon fiber reinforced composites have lower electrical conductivity, yielding more severe damage due to Joule heating. The anisotropic nature of composite laminates makes the electrical and thermal conduction within carbon composite laminates even more complicated. A good understanding of the electrical conduction behavior of carbon composites is the key to effective lightning protection design. The goal of this study is to numerically and experimentally investigate the impact of the ultra-high temperature induced by simulated lightning strikes on the electrical conduction of carbon composites. A lightning simulator is designed to apply a standard lightning current waveform to composite laminates. Multiple carbon composite laminates made from IM7 and AS4 carbon fiber are tested, and the transient resistance data are recorded. A microstructure-based resistor network model is developed to describe the electrical and thermal conduction behavior, with consideration of temperature-dependent material properties. Material degradations such as thermal and electrical breakdown are also modeled to include the effect of the high current and high temperature induced by lightning strikes. The good match between the simulation results and experimental data indicates that the developed model captures the major conduction mechanisms. A parametric study is then conducted using the validated model to investigate the effect of system parameters such as fiber volume fraction, inter-ply interface quality, and lightning current waveforms.
Keywords: carbon composite, Joule heating, lightning strike, resistor network
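The paper's microstructure-based resistor network model is not specified in the abstract; the underlying idea, solving a resistor network from Kirchhoff's current law in matrix form, can be sketched on an invented circuit (a 1 V source feeding three equal resistors in series, with two unknown junction nodes):

```python
import numpy as np

# Minimal nodal analysis of a resistor network.
# Circuit (invented for illustration):
#   1 V source -- R1 -- node 0 -- R2 -- node 1 -- R3 -- ground
R1 = R2 = R3 = 100.0

# Kirchhoff's current law at each unknown node gives G @ V = I,
# where G is the conductance matrix and I the source injections.
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
I = np.array([1.0 / R1, 0.0])   # current injected by the 1 V source

V = np.linalg.solve(G, I)       # node voltages
```

The full model in the paper additionally makes each resistance temperature-dependent and updates it as Joule heating raises local temperatures; the linear solve above would then sit inside an iteration loop.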
Procedia PDF Downloads 228
19327 The Effects of Zinc Oxide Nanoparticles Loaded with Indole-3-Acetic Acid and Indole-3-Butyric Acid on in vitro Rooting of Apple Microcuttings
Authors: Shabnam Alizadeh, Hatice Dumanoglu
Abstract:
Plant tissue culture is a substantial plant propagation technique for mass clonal production of fruit species throughout the year, regardless of season. However, rooting success must be enhanced in difficult-to-root genotypes, for which classical auxin applications in clonal propagation are inadequate to solve the rooting problem. Nanoparticles, having physical and chemical properties different from the bulk material, could enhance rooting success through the controlled release of these substances when loaded with auxin, since as a carrier system they can deliver the active substance to the target cells. The purpose of this study is to investigate the effects of zinc oxide nanoparticles loaded with indole-3-acetic acid (IAA-nZnO) and indole-3-butyric acid (IBA-nZnO) on the in vitro rooting of microcuttings in a difficult-to-root apple genotype (Malus domestica Borkh.). Rooting treatments consisted of IBA or IAA at concentrations of 0.5, 1.0, 2.0 and 3.0 mg/L, and nZnO, IAA-nZnO and IBA-nZnO at doses of 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 and 6.0 mg/L. All components were added to half-strength Murashige and Skoog (MS) basal medium with 2% sucrose and 0.7% agar before autoclaving. In the study, no rooting occurred in the control and nZnO applications. In particular, the 1.0 mg/L and 2.0 mg/L IBA-nZnO applications (containing 0.5 mg/L and 0.9 mg/L IBA, respectively) stood out as effective treatments, with rooting rates of 40.3% and 70.4%, rooting levels of 2.0±0.4 and 2.3±0.4, average root numbers of 2.6±0.7 and 2.5±0.6, and average root lengths of 20.4±1.6 mm and 20.2±3.4 mm.
Keywords: auxin, Malus, nanotechnology, zinc oxide nanoparticles
Procedia PDF Downloads 144
19326 An Early Detection of Type 2 Diabetes Using the K-Nearest Neighbor Algorithm
Authors: Ng Liang Shen, Ngahzaifa Abdul Ghani
Abstract:
This research aimed at developing an early warning system for pre-diabetics and diabetics by analyzing simple and easily determinable signs and symptoms of diabetes among the people living in Malaysia, using a Particle Swarm Optimized Artificial Neural Network. With the skyrocketing prevalence of Type 2 diabetes in Malaysia, the system can be used to encourage affected people to seek further medical attention to prevent the onset of diabetes or start managing it early enough to avoid the associated complications. The study sought to find out the best predictive variables of Type 2 Diabetes Mellitus, developed a system to diagnose diabetes from the variables using Artificial Neural Networks, and tested the system for accuracy in recognizing the patterns generated from diabetes diagnosis results by machine learning algorithms, even at primary or advanced stages.
Keywords: diabetes diagnosis, Artificial Neural Networks, artificial intelligence, soft computing, medical diagnosis
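The title names the k-nearest neighbor algorithm; a minimal pure-Python sketch of such a classifier is shown below, on invented symptom features (the study's actual variables and data are not given in the abstract):

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy data (hypothetical): [fasting glucose (mmol/L), BMI]
# label 1 = elevated diabetes risk, 0 = low risk
X = [[5.0, 22.0], [5.4, 24.0], [7.8, 31.0], [8.2, 33.0]]
y = [0, 0, 1, 1]

pred = knn_predict(X, y, [7.9, 30.0], k=3)
```

A new patient's readings are compared against the training records by Euclidean distance, and the dominant label among the closest records becomes the prediction.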
Procedia PDF Downloads 336
19325 Model of a Context-Aware Middleware for Mobile Workers
Authors: Esraa Moustafa, Gaetan Rey, Stephane Lavirotte, Jean-Yves Tigli
Abstract:
With the development of the Internet of Things and the Web of Things, computing becomes more pervasive, invisible and present everywhere. In fact, in our environment, we are surrounded by multiple devices that deliver (web) services that meet the needs of the users. However, the mobility of these devices, as well as of the users, has important repercussions that challenge the software design of these applications, because the variability of the environment cannot be anticipated at design time. Thus, it is interesting to dynamically discover the environment and adapt the application during its execution to the new contextual conditions. We therefore propose a model of a context-aware middleware that can address this issue through a monitoring service that is capable of reasoning, and observation channels capable of calculating the context at runtime. The monitoring service evaluates the pre-defined X-Query predicates in the context manager and uses Prolog to deduce the services needed to respond. An independent observation channel for each different predicate is then dynamically generated by the monitoring service, depending on the current state of the environment. Each channel sends its result directly to the context manager, which consequently calculates the context based on all the predicates’ results while preserving the reactivity of the self-adaptive system.
Keywords: auto-adaptation, context-awareness, middleware, reasoning engine
Procedia PDF Downloads 251
19324 Simulation of Forest Fire Using Wireless Sensor Network
Authors: Mohammad F. Fauzi, Nurul H. Shahba M. Shahrun, Nurul W. Hamzah, Mohd Noah A. Rahman, Afzaal H. Seyal
Abstract:
In this paper, we propose a simulation system using a Wireless Sensor Network (WSN) that will be distributed around the forest for early forest fire detection and to locate the areas affected. In Brunei Darussalam, approximately 78% of the nation is covered by forest. Since the forest is Brunei’s most precious natural asset, it is very important to protect and conserve it. The hot climate in Brunei Darussalam can lead to forest fires, which can be a fatal threat to the preservation of the forest. The process consists of getting data from the sensors, analyzing the data and producing an alert. The key factors that we are going to analyze are the surrounding temperature, wind speed and wind direction, and the humidity of the air and soil.
Keywords: forest fire monitor, humidity, wind direction, wireless sensor network
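The sensor-to-alert pipeline described above can be sketched as a simple rule over the monitored factors. The thresholds below are invented for illustration; a deployed system would calibrate them to local conditions:

```python
def fire_alert(temp_c, wind_ms, air_hum_pct, soil_hum_pct):
    """Return True when sensor readings suggest elevated fire risk.

    Mirrors the factors the paper monitors: temperature, wind speed,
    and air/soil humidity. Threshold values are illustrative only.
    """
    hot_and_dry = temp_c > 35.0 and air_hum_pct < 30.0
    spread_risk = wind_ms > 8.0 or soil_hum_pct < 15.0
    return hot_and_dry and spread_risk

# Example readings from two hypothetical sensor nodes:
risky = fire_alert(38.0, 10.0, 25.0, 20.0)   # hot, dry, windy
calm = fire_alert(28.0, 3.0, 70.0, 40.0)     # mild, humid
```

In a WSN deployment, each node would report its readings to a base station, which evaluates a rule like this per node to localize the affected area.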
Procedia PDF Downloads 454
19323 Building Green Infrastructure Networks Based on Cadastral Parcels Using Network Analysis
Authors: Gon Park
Abstract:
Seoul in South Korea established the 2030 Seoul City Master Plan, which contains green-link projects to connect critical green areas within the city. However, the plan does not include detailed analyses of green infrastructure that incorporate land-cover information into its many structural classes. This study maps green infrastructure networks of Seoul to complement the city's green plans by identifying and ranking green areas. Hubs and links, the main elements of green infrastructure, were identified by incorporating cadastral data of 967,502 parcels into 135 land-use maps using a geographic information system. Network analyses were used to rank the hubs and links of a green infrastructure map by applying a force-directed algorithm, weighted values, and binary relationships, with metrics of density, distance, and centrality. The results indicate that network analyses using cadastral parcel data can be used as a framework to identify and rank hubs, links, and networks for green infrastructure planning under variable scenarios of green areas in cities.
Keywords: cadastral data, green infrastructure, network analysis, parcel data
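One of the centrality metrics mentioned above can be illustrated on a toy hub/link graph. Degree centrality, the simplest ranking, counts how many links touch each hub (node names are invented; the study additionally weights edges and uses density and distance metrics):

```python
from collections import defaultdict

# Toy green-infrastructure graph: hubs are green areas, links are
# connections between them. Names are hypothetical.
edges = [("park_a", "park_b"), ("park_a", "river"),
         ("park_a", "forest"), ("park_b", "river")]

# Degree centrality: count the links incident to each hub.
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Rank hubs from most to least connected.
ranked = sorted(degree, key=degree.get, reverse=True)
```

The top-ranked hubs are the candidates to prioritize in green-link planning, since removing them would fragment the network most.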
Procedia PDF Downloads 206
19322 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques
Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo
Abstract:
Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high-pollution events that led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters; afterwards, the K-means clustering technique was implemented to corroborate the relations found previously and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night and early morning) to check for possible variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors that most influence PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. It was also found that O3 levels are directly proportional to wind speed and radiation, while O3 levels and humidity are inversely related.
Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years. In rainy periods (March-June and September-December), some trends regarding precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, and between Parque Simón Bolívar and Las Ferias. It was verified, by applying the same technique per year, that the aforementioned trends prevailed during the study period. It was concluded that the PCA algorithm is useful to establish preliminary relationships among variables, and K-means clustering to find patterns in the data and understand their distribution. The discovery of patterns in the data allows using these clusters as input to an Artificial Neural Network prediction model.
Keywords: air pollution, air quality modelling, data mining, particulate matter
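The PCA step described above can be sketched with NumPy on synthetic data that mimics one finding of the study, the anticorrelation between wind speed and PM10 (the real analysis used measurements from 13 stations; the data here are generated):

```python
import numpy as np

# Synthetic stand-in for the station data: PM10 falls as wind speed rises.
rng = np.random.default_rng(0)
wind = rng.normal(3.0, 1.0, 200)                    # m/s
pm10 = 80.0 - 10.0 * wind + rng.normal(0.0, 2.0, 200)  # ug/m3, anticorrelated
X = np.column_stack([wind, pm10])

# PCA via SVD on standardized variables.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, components = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / (s**2).sum()   # variance fraction per component
```

Because the two variables are strongly (anti)correlated, the first principal component captures nearly all of the variance; this is the kind of primary relation the study extracted before clustering with K-means.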
Procedia PDF Downloads 258
19321 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms, namely linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks, for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) were extracted as features for every SMILES representation in the dataset, giving a total of 189 features for training and testing for every molecule. Each algorithm was trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing was recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
Keywords: random forest, machine learning, comparison, feature extraction
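A sketch of the comparison's winning setup, a random forest classifier, is shown below on synthetic binary features standing in for the 189-feature vectors (computing MACCS keys and RDKit properties requires RDKit and is omitted; data and labels here are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the 189-feature molecular vectors:
# toy binary "keys" with a class label that depends on two of them.
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(400, 20)).astype(float)
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # toy solubility class

# Train on the first 300 molecules, score accuracy on the held-out 100.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:300], y[:300])
acc = model.score(X[300:], y[300:])
```

The same fit/score pattern, with different estimator classes, is how the five algorithms in the study would be compared on a common train/test split.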
Procedia PDF Downloads 40
19320 Non-Destructive Testing of Metal Pipes with Ultrasonic Sensors Based on Determination of Maximum Ultrasonic Frequency
Authors: Herlina Abdul Rahim, Javad Abbaszadeh, Ruzairi Abdul Rahim
Abstract:
In this research, non-invasive ultrasonic transmission tomography is investigated. In order to model ultrasonic wave scattering for different thicknesses of metal pipes, two-dimensional (2D) finite element modeling (FEM) has been utilized. The wall thickness variation of the metal pipe and its influence on the propagation of the ultrasonic pressure wave are explored in this paper, including frequency analysis to find the maximum applicable frequency. The simulation results have been compared to experimental data and are shown to provide key insight for this well-defined experimental case by explaining the reconstructed images obtained from the experimental setup. Finally, the experimental results, which are useful for further investigation of the application of ultrasonic transmission tomography in industry, are illustrated.
Keywords: ultrasonic transmission tomography, ultrasonic sensors, ultrasonic wave, non-invasive tomography, metal pipe
Procedia PDF Downloads 359
19319 Electromagnetic Radiation Generation by Two-Color Sinusoidal Laser Pulses Propagating in Plasma
Authors: Nirmal Kumar Verma, Pallavi Jha
Abstract:
Generation of electromagnetic radiation oscillating at frequencies in the terahertz range by the propagation of two-color laser pulses in plasma is an active area of research, due to its potential applications in various areas including security screening, material characterization, and spectroscopic techniques. Due to its nonionizing nature and its ability to penetrate several millimeters, THz radiation is suitable for the diagnosis of cancerous cells. Traditional THz emitters, like optically active crystals, are subject to material breakdown when irradiated with high-power laser radiation, and hence to low conversion efficiencies. This problem is not encountered in laser-plasma-based THz radiation sources. The present paper is devoted to the study of enhanced electromagnetic radiation generation by the propagation of two-color, linearly polarized laser pulses through magnetized plasma. The two orthogonally polarized laser pulses co-propagate along the same direction. The direction of the external magnetic field is such that one of the two laser pulses propagates in the ordinary mode, while the other propagates in the extraordinary mode through the homogeneous plasma. A transverse electromagnetic wave with frequency in the THz range is generated due to the presence of the static magnetic field. It is observed that larger-amplitude terahertz radiation can be generated by mixing the ordinary and extraordinary modes of two-color laser pulses, as compared with a single laser pulse propagating in the extraordinary mode.
Keywords: two-color laser pulses, electromagnetic radiation, magnetized plasma, ordinary and extraordinary modes
Procedia PDF Downloads 285
19318 A Global Organizational Theory for the 21st Century
Authors: Troy A. Tyre
Abstract:
Organizational behavior and organizational change are elements of the ever-changing global business environment. Leadership and organizational behavior are 21st century disciplines. Network marketing organizations need to understand the ever-changing nature of global business and be ready and willing to adapt to the environment. Network marketing organizations face a challenge in keeping up with a rapid escalation in global growth. Network marketing growth has been steady and global, yet these organizations have been slow to develop a 21st century global strategy to manage it, degrading organizational behavior, job satisfaction, and customer service while increasing attrition. An organizational behavior and leadership theory for the 21st century is therefore needed to help network marketing develop a global business strategy to manage the rapid escalation in growth that affects organizational behavior. Managing growth means organizational leadership must develop and adapt to the organizational environment. Growth comes with an open mind and one’s departure from the comfort zone. Leadership growth operates in the tacit dimension. Systems thinking and the adaptation of mental models can help shift organizational behavior. Shifting organizational behavior requires organizational learning, which occurs through single-loop, double-loop, and triple-loop learning. Triple-loop learning is the most difficult, but the most rewarding. Tools such as Theory U can aid in developing a landscape for organizational behavioral development. Additionally, awareness of espoused and portrayed actions is imperative. Theories of motivation, cross-cultural diversity, and communication are instrumental in founding an organizational behavior suited for the 21st century.
Keywords: global, leadership, network marketing, organizational behavior
Procedia PDF Downloads 553
19317 Bringing Together Student Collaboration and Research Opportunities to Promote Scientific Understanding and Outreach Through a Seismological Community
Authors: Michael Ray Brunt
Abstract:
China has been the site of some of the most significant earthquakes in history; however, earthquake monitoring has long been the province of universities and research institutions. The China Digital Seismographic Network was initiated in 1983 and improved significantly during 1992-1993. Data from the CDSN is widely used by government and research institutions, but this data is generally not readily accessible to middle and high school students. An educational seismic network in China is needed to provide collaboration and research opportunities for students and to engage students around the country in a scientific understanding of earthquake hazards and risks while promoting community awareness. In 2022, the Tsinghua International School (THIS) Seismology Team, made up of enthusiastic students and facilitated by two experienced teachers, was established. As a group, the team’s objective is to install seismographs in schools throughout China, thus creating an educational seismic network that shares data from the THIS Educational Seismic Network (THIS-ESN) and facilitates collaboration. The THIS-ESN initiative will enhance education and outreach in China about earthquake risks and hazards, introduce seismology to a wider audience, stimulate interest in research among students, and develop students’ programming, data collection and analysis skills. It will also encourage and inspire young minds to pursue science, technology, engineering, the arts, and math (STEAM) career fields. The THIS-ESN utilizes small, low-cost RaspberryShake seismographs as a powerful tool linked into a global network, giving schools and the public access to real-time seismic data from across China, increasing earthquake monitoring capabilities in the respective areas, and adding to the available data sets regionally and worldwide, helping create a denser seismic network.
The RaspberryShake seismograph is compatible with free seismic-data viewing platforms such as SWARM, and the RaspberryShake web programs and mobile apps are designed specifically for teaching seismology and seismic data interpretation, providing opportunities to enhance understanding. The RaspberryShake is powered by an operating system embedded in the Raspberry Pi, which makes it an easy platform for teaching students basic computer communication concepts by utilizing processing tools to investigate, plot, and manipulate data. The THIS Seismology Team believes strongly in creating opportunities for committed students to become part of the seismological community by engaging in the analysis of real-time scientific data with tangible outcomes. Students will feel proud of the important work they are doing to understand the world around them and will become advocates, spreading their knowledge back into their homes and communities and helping to improve overall community resilience. We trust that, in studying the results the seismograph stations yield, students will grasp how subjects like physics and computer science apply in real life. By spreading information, we hope students across the country can appreciate how and why earthquakes bear on their lives, develop practical skills in STEAM, and engage in the global seismic monitoring effort. By providing such an opportunity to schools across the country, we are confident that we will be an agent of change for society.
Keywords: collaboration, outreach, education, seismology, earthquakes, public awareness, research opportunities
Procedia PDF Downloads 72
19316 Long-Term Resilience Performance Assessment of Dual and Singular Water Distribution Infrastructures Using a Complex Systems Approach
Authors: Kambiz Rasoulkhani, Jeanne Cole, Sybil Sharvelle, Ali Mostafavi
Abstract:
Dual water distribution systems have been proposed as solutions to enhance the sustainability and resilience of urban water systems by improving performance and decreasing energy consumption. The objective of this study was to evaluate the long-term resilience and robustness of dual water distribution systems versus singular water distribution systems under various stressors such as demand fluctuation, aging infrastructure, and funding constraints. To this end, the long-term dynamics of these infrastructure systems was captured using a simulation model that integrates institutional agency decision-making processes with physical infrastructure degradation to evaluate the long-term transformation of water infrastructure. A set of model parameters that varies for dual and singular distribution infrastructure based on the system attributes, such as pipes length and material, energy intensity, water demand, water price, average pressure and flow rate, as well as operational expenditures, were considered and input in the simulation model. Accordingly, the model was used to simulate various scenarios of demand changes, funding levels, water price growth, and renewal strategies. The long-term resilience and robustness of each distribution infrastructure were evaluated based on various performance measures including network average condition, break frequency, network leakage, and energy use. An ecologically-based resilience approach was used to examine regime shifts and tipping points in the long-term performance of the systems under different stressors. Also, Classification and Regression Tree analysis was adopted to assess the robustness of each system under various scenarios. Using data from the City of Fort Collins, the long-term resilience and robustness of the dual and singular water distribution systems were evaluated over a 100-year analysis horizon for various scenarios. 
The results of the analysis enabled: (i) comparison between dual and singular water distribution systems in terms of long-term performance, resilience, and robustness; (ii) identification of renewal strategies and decision factors that enhance the long-term resiliency and robustness of dual and singular water distribution systems under different stressors.
Keywords: complex systems, dual water distribution systems, long-term resilience performance, multi-agent modeling, sustainable and resilient water systems
Procedia PDF Downloads 292
19315 Frequent-Pattern Tree Algorithm Application to S&P and Equity Indexes
Authors: E. Younsi, H. Andriamboavonjy, A. David, S. Dokou, B. Lemrabet
Abstract:
Software and time optimization are very important factors in financial markets, which are competitive fields, and the emergence of new computer tools further stresses the challenge. In this context, any improvement of technical indicators that generate a buy or sell signal is a major issue, and many tools have been created to make them more effective. This concern for efficiency leads, in the present paper, to a search for the best (and most innovative) way of obtaining the largest improvement in these indicators. The approach consists in attaching a signature to frequent market configurations by applying a frequent-pattern extraction method, which is here the most appropriate way to optimize investment strategies. The goal of the proposed trading algorithm is to find the most accurate signatures, using a back-testing procedure applied to technical indicators to improve their performance. The problem is then to determine the signatures which, combined with an indicator, outperform this indicator alone. To do this, the FP-Tree algorithm has been preferred, as it appears to be the most efficient algorithm to perform this task.
Keywords: quantitative analysis, back-testing, computational models, apriori algorithm, pattern recognition, data mining, FP-tree
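The support counting that underlies FP-tree mining can be illustrated by brute force on a tiny set of "market configuration" transactions (the paper builds an actual FP-tree for efficiency; item names here are invented indicator states):

```python
from itertools import combinations
from collections import Counter

# Toy transactions: each set is one observed market configuration,
# described by hypothetical indicator states.
transactions = [
    {"rsi_low", "macd_up", "vol_spike"},
    {"rsi_low", "macd_up"},
    {"macd_up", "vol_spike"},
    {"rsi_low", "macd_up", "vol_spike"},
]
min_support = 3   # a pattern is "frequent" if it appears in >= 3 transactions

# Count the support of every item pair (brute force; an FP-tree
# computes the same counts without enumerating candidates).
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

frequent_pairs = {p for p, c in pair_counts.items() if c >= min_support}
```

The frequent pairs found this way are the candidate "signatures" the paper attaches to market configurations before back-testing them together with a technical indicator.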
Procedia PDF Downloads 361
19314 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification in agriculture has historically been based on classical machine learning strategies that make use of hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization across the classes classified by the system when the elements show significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have great learning capacity and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method, based on deep convolutional neural networks, for the task of disease level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease-level presence were defined, in collaboration with agronomists, for the algorithm to classify. Disease-level assessments performed by experts provided ground truth data for the disease scores of the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% on the discrimination of the disease level classes.
Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks
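The keywords name residual neural networks; the key idea, a skip connection that adds the input back to a transformed version of itself, can be sketched with NumPy (weights here are random stand-ins, and the real model uses convolutions rather than dense layers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + f(x)): the shortcut lets information (and, in
    training, gradients) bypass the learned transformation f."""
    h = relu(x @ w1)
    return relu(x + h @ w2)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))             # toy feature vector
w1 = rng.normal(scale=0.1, size=(8, 8))
w2 = rng.normal(scale=0.1, size=(8, 8))
y = residual_block(x, w1, w2)
```

When the learned transformation contributes nothing (w2 = 0), the block reduces to relu(x), which is why very deep stacks of such blocks remain trainable.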
Procedia PDF Downloads 331
19313 Scientific Development as Diffusion on a Social Network: An Empirical Case Study
Authors: Anna Keuchenius
Abstract:
Broadly speaking, scientific development is studied either in a qualitative manner with a focus on the behavior and interpretations of academics, as in the sociology of science and science studies, or in a quantitative manner with a focus on the analysis of publications, as in scientometrics and bibliometrics. The two come with different sets of methodologies and few cross-references. This paper contributes to bridging this divide by approaching the process of scientific progress from a qualitative sociological angle on the one hand, and using quantitative and computational techniques on the other. As a case study, we analyze the diffusion of Granovetter's hypothesis from his 1973 paper 'The Strength of Weak Ties.' A network is constructed of all scientists who have referenced this particular paper, with directed edges to all other researchers who are referenced alongside Granovetter's 1973 paper. Studying the structure and growth of this network over time, we find that Granovetter's hypothesis is used by distinct communities of scientists, each with their own key narrative into which the hypothesis is fitted. The diffusion within the communities shares similarities with the diffusion of an innovation, in which innovators, early adopters, and an early-late majority can clearly be distinguished. Furthermore, the network structure shows that each community is clustered around one or a few hub scientists who are disproportionately often referenced and seem largely responsible for carrying the hypothesis into their scientific subfield. The larger implication of this case study is that the diffusion of scientific hypotheses and ideas is not the spreading of well-defined objects over a network. Rather, diffusion is a process in which the object itself dynamically changes in concurrence with its spread.
It is therefore argued that the methodology presented in this paper has potential beyond the scientific domain, in the study of the diffusion of other not-well-defined objects, such as opinions, behavior, and ideas.
Keywords: diffusion of innovations, network analysis, scientific development, sociology of science
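The network construction described above, directed edges from each citing scientist to the works referenced alongside the focal paper, can be sketched in a few lines (the bibliographies below are invented for illustration):

```python
from collections import defaultdict

# Focal work whose diffusion is traced.
FOCAL = "granovetter1973"

# Hypothetical bibliographies of papers in the citing corpus.
bibliographies = {
    "paper_a": ["granovetter1973", "burt1992", "coleman1988"],
    "paper_b": ["granovetter1973", "burt1992"],
    "paper_c": ["watts1998"],   # does not cite the focal work: excluded
}

# For every paper citing the focal work, add directed edges to the
# works it references concurrently.
edges = defaultdict(set)
for citer, refs in bibliographies.items():
    if FOCAL in refs:
        edges[citer] = {r for r in refs if r != FOCAL}
```

Community detection and growth analysis over time would then run on the resulting graph; works that appear in many edge sets (here, burt1992) are candidates for the hub role the study describes.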
Procedia PDF Downloads 306
19312 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management
Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro
Abstract:
This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient overall, while configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with producer data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. 
The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE values as low as 0.01 mm/day and 0.03 mm/day. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yielded values between 0.02 and 0.04 for the producer data. Furthermore, the results of this study suggest that the developed technique can be applied to other locations by using site-specific data to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the model's response to different seasonal and climatic conditions.Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization
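The modeling approach above can be illustrated with a minimal single-hidden-layer MLP trained by plain SGD. The synthetic "weather" inputs and the linear ET₀ proxy below are assumptions for demonstration only, not the study's dataset or architecture:

```python
import math
import random

random.seed(0)

# Toy data: four normalized inputs (temperature, humidity, radiation, wind)
# and a hypothetical linear ET0 proxy as the regression target.
def make_sample():
    temp, rh, rad, wind = (random.random() for _ in range(4))
    et0 = 0.5 * temp + 0.3 * rad + 0.1 * wind - 0.2 * rh + 0.3
    return [temp, rh, rad, wind], et0

data = [make_sample() for _ in range(200)]

H = 5  # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    return h, sum(w * hi for w, hi in zip(w2, h)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

lr = 0.05
initial = mse()
for _ in range(50):            # epochs
    for x, y in data:          # plain SGD, one sample at a time
        h, pred = forward(x)
        err = pred - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            for i in range(4):
                w1[j][i] -= lr * grad_h * x[i]
        b2 -= lr * err
final = mse()
print(f"MSE before: {initial:.4f}, after: {final:.4f}")
```

In the study, a far richer input set, the Adam optimizer, and real INMET/producer observations take the place of this toy setup.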
Procedia PDF Downloads 4819311 Isolated Contraction of Deep Lumbar Paraspinal Muscle with Magnetic Nerve Root Stimulation: A Pilot Study
Authors: Shi-Uk Lee, Chae Young Lim
Abstract:
Objective: The aim of this study was to evaluate the changes in lumbar deep muscle thickness and cross-sectional area using ultrasonography with magnetic stimulation. Methods: Twelve healthy volunteers (39.6±10.0 years) with no low back pain in the previous 3 months participated in this study. All participants underwent X-ray and electrophysiologic studies to confirm that they had no problems with their backs. Magnetic stimulation was applied to the L5 and S1 roots with a figure-eight coil, as in a previous study. To confirm proper motor root stimulation, surface electrodes were placed on the tibialis anterior (L5) and abductor hallucis (S1) muscles; the hot spots of magnetic stimulation were located at 50% of maximal stimulator output, and the stimulation threshold was determined by lowering the intensity in 5% steps. Ultrasonography was used to assess the changes in L5 and S1 lumbar multifidus (superficial and deep) cross-sectional area and thickness with maximal magnetic stimulation. Cross-sectional area (CSA) and thickness were evaluated with the image analysis program ImageJ (National Institutes of Health, USA). The Wilcoxon signed-rank test was used to compare outcomes before and after stimulation. Results: The mean minimal threshold was 29.6±3.8% of maximal stimulation intensity. With minimal magnetic stimulation, the thickness of the L5 and S1 deep multifidus (DM) increased from 1.25±0.20 and 1.42±0.23 cm to 1.40±0.27 and 1.56±0.34 cm, respectively (P=0.005, P=0.003). The CSA of the L5 and S1 DM also increased from 2.26±0.18 and 1.40±0.26 cm² to 2.37±0.18 and 1.56±0.34 cm², respectively (P=0.002, P=0.002). However, the thickness of the L5 and S1 superficial multifidus (SM) did not change (from 1.92±0.21 and 2.04±0.20 cm to 1.91±0.33 and 1.96±0.33 cm; P=0.211, P=0.199), and the CSA of the L5 and S1 SM was also unchanged (from 4.29±0.53 and 5.48±0.32 cm² to 4.42±0.42 and 5.64±0.38 cm²). 
With maximal magnetic stimulation, the thickness of the L5 and S1 DM and SM increased (L5 DM, 1.29±0.26 to 1.46±0.27 cm, P=0.028; L5 SM, 2.01±0.42 to 2.24±0.39 cm, P=0.005; S1 DM, 1.29±0.19 to 1.67±0.29 cm, P=0.002; S1 SM, 1.90±0.36 to 2.30±0.36 cm, P=0.002). The CSA of the L5 and S1 DM and SM also increased (all P values were 0.002). Conclusions: Deep lumbar muscles can be stimulated with lumbar motor root magnetic stimulation. With minimal stimulation, the thickness and CSA of the lumbosacral deep multifidus increased in this study. Further studies are needed to confirm whether similar results are obtained in patients with chronic low back pain. Lumbar magnetic stimulation might strengthen deep lumbar muscles without discomfort.Keywords: magnetic stimulation, lumbar multifidus, strengthening, ultrasonography
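The before/after comparison above relies on the Wilcoxon signed-rank test; a minimal pure-Python sketch of the test statistic (the sample thickness values are hypothetical, and ties in absolute differences are handled by average ranks):

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed-rank statistic W = min(W+, W-); zero diffs dropped."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):                     # average ranks over ties
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1                  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical thickness measurements (cm) before/after stimulation:
before = [1.25, 1.42, 1.30, 1.20, 1.35]
after  = [1.40, 1.56, 1.28, 1.45, 1.52]
print(wilcoxon_w(before, after))
```

A small W relative to its null distribution (looked up in tables or computed exactly for small n) yields the P values reported above.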
Procedia PDF Downloads 37119310 Theoretical Analysis of the Optical and Solid State Properties of Thin Film
Authors: E. I. Ugwu
Abstract:
A theoretical analysis of the optical and solid state properties of ZnS thin film is presented, using a beam propagation technique in which a scalar wave is propagated through a thin film deposited on a substrate, under the assumption that the dielectric medium is sectioned into a homogeneous reference dielectric term and a perturbed dielectric term representing the deposited thin film. Together, these two terms constitute an arbitrary complex dielectric function that describes the dielectric perturbation imposed by the medium on the system. This function is substituted into a scalar wave equation for which an appropriate Green's function is defined and solved using a series technique. The Green's function values are then used in the Dyson and Lippmann-Schwinger equations, in conjunction with the Born approximation, to compute the propagated field for different input wavelengths, during which the influence of the dielectric constants and the mesh size of the thin film on the propagating field is shown. The computed field is used in turn to generate data for the band gap, solid state, and optical properties of the thin film, such as reflectance and transmittance, and the band gap obtained is found to be in close agreement with the experimental value.Keywords: scalar wave, optical and solid state properties, thin film, dielectric medium, perturbation, Lippmann Schwinger equations, Green’s Function, propagation
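The first Born approximation step can be illustrated with a minimal 1D sketch (the grid, wavenumber, and perturbation strength below are assumptions): the scattered field is the discretized integral psi_s(x) = sum over x' of G(x, x') V(x') psi_0(x') dx', using the 1D Helmholtz Green's function G = exp(ik|x - x'|) / (2ik):

```python
import cmath
import math

k = 2.0 * math.pi / 0.5            # wavenumber for an assumed 0.5-unit wavelength
dx = 0.01
xs = [i * dx for i in range(200)]  # 1D grid (arbitrary units)

def greens(x, xp):
    """1D Helmholtz Green's function G(x, x') = exp(ik|x - x'|) / (2ik)."""
    return cmath.exp(1j * k * abs(x - xp)) / (2j * k)

def scattered_field(perturbation):
    """First Born approximation: psi_s = sum_x' G(x,x') V(x') psi_0(x') dx."""
    psi0 = [cmath.exp(1j * k * x) for x in xs]        # incident plane wave
    return [sum(greens(x, xp) * perturbation(xp) * p0 * dx
                for xp, p0 in zip(xs, psi0)) for x in xs]

# A hypothetical film occupying 0.5 < x < 1.0 with dielectric perturbation 0.3:
film = lambda x: 0.3 * k ** 2 if 0.5 < x < 1.0 else 0.0
psi_s = scattered_field(film)
print(max(abs(v) for v in psi_s))
```

The study instead solves the Green's function by a series technique and iterates via the Dyson and Lippmann-Schwinger equations; this sketch only shows the lowest-order term of that scheme.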
Procedia PDF Downloads 43819309 Suitable Die Shaping for a Rectangular Shape Bottle by Application of FEM and AI Technique
Authors: N. Ploysook, R. Rugsaj, C. Suvanjumrat
Abstract:
The characteristic requirement for producing rectangular bottles is a uniform thickness of the plastic bottle wall. Die shaping is an effective technique for controlling the wall thickness of bottles. An advanced technique, the finite element method (FEM), was used to simulate blowing a parison into a rectangular bottle, reducing the plastic wasted by trial-and-error die shaping and parison control. An artificial intelligence (AI) technique comprising an artificial neural network and a genetic algorithm was selected to optimize the die gap shape from the FEM results. The AI technique could find a suitable die gap shape for parison blow molding that does not depend on a parison control method, producing rectangular bottles with uniform walls. In particular, this application can be used with inexpensive blow molding machines without a parison controller, reducing the production cost of the bottle blow molding process.Keywords: AI, bottle, die shaping, FEM
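The genetic-algorithm half of the approach can be sketched as follows. The quadratic surrogate below is a hypothetical stand-in for the FEM-trained neural network mapping a die-gap profile to wall-thickness deviation, and the target profile is invented for demonstration:

```python
import random

random.seed(1)

# Toy surrogate standing in for the FEM/ANN model: maps a die-gap profile
# (5 control points) to a predicted deviation from uniform wall thickness.
TARGET = [1.0, 1.2, 1.5, 1.2, 1.0]  # hypothetical ideal gap profile

def thickness_deviation(profile):
    return sum((g - t) ** 2 for g, t in zip(profile, TARGET))

def evolve(pop_size=30, generations=120, sigma=0.1):
    pop = [[random.uniform(0.5, 2.0) for _ in range(5)] for _ in range(pop_size)]
    best0 = min(map(thickness_deviation, pop))   # best cost before evolution
    for _ in range(generations):
        pop.sort(key=thickness_deviation)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, 4)           # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(5)              # single-gene Gaussian mutation
            child[i] += random.gauss(0.0, sigma)
            children.append(child)
        pop = parents + children
    return min(pop, key=thickness_deviation), best0

best, best0 = evolve()
print([round(g, 2) for g in best], round(thickness_deviation(best), 4))
```

Because the top half of each generation survives unchanged, the best cost never worsens; in the paper, each evaluation is an ANN prediction trained on FEM blow-molding simulations rather than this closed-form surrogate.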
Procedia PDF Downloads 23819308 A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms
Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak
Abstract:
In logistics networks, opened facilities are expected to operate continuously over a long time horizon without failure; in real-world problems, however, facilities may face disruptions. This paper studies a reliable joint inventory-location problem that optimizes the costs of facility location, customer assignment, and inventory management decisions when facilities face failure risks. In our model, we assume that when a facility fails, its customers may be reassigned to other operational facilities; otherwise, they must endure high penalty costs associated with losing service. To bring the model closer to real-world problems, it is formulated based on the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) indicating that a customer is not assigned to any facility. The problem is a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, while the second minimizes the maximum expected customer cost under normal and failure scenarios. To solve this model, the NSGA-II and MOSS algorithms are applied to find the Pareto archive solutions. Response Surface Methodology (RSM) is also applied to optimize the NSGA-II algorithm parameters. We compare the performance of the two algorithms on three metrics, and the results show that NSGA-II is more suitable for our model.Keywords: joint inventory-location problem, facility location, NSGAII, MOSS
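The Pareto archive maintained by both algorithms can be illustrated with a minimal non-dominated filter for bi-objective minimization (the objective-value pairs below are hypothetical candidate solutions):

```python
def pareto_front(points):
    """Return the non-dominated points for bi-objective minimization."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (objective 1, objective 2) values, e.g. (construction +
# holding cost, maximum expected customer cost) for candidate solutions:
candidates = [(1, 5), (2, 4), (3, 3), (4, 6), (5, 1)]
print(pareto_front(candidates))
# → [(1, 5), (2, 4), (3, 3), (5, 1)]
```

NSGA-II and MOSS both generate candidates and keep exactly this kind of archive; the decision maker then picks a trade-off point from the resulting front.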
Procedia PDF Downloads 52519307 The Development of an Accident Causation Model Specific to Agriculture: The Irish Farm Accident Causation Model
Authors: Carolyn Scott, Rachel Nugent
Abstract:
The agricultural industry in Ireland and worldwide is one of the most dangerous occupations with respect to occupational health and safety accidents and fatalities. Many accident causation models have been developed in safety research to understand the underlying and contributory factors that lead to the occurrence of an accident. Due to the uniqueness of the agricultural sector, current accident causation theories cannot be applied. This paper presents an accident causation model named the Irish Farm Accident Causation Model (IFACM), which has been specifically tailored to the needs of Irish farms. The IFACM is a theoretical and practical model of accident causation that arranges causal factors into a graphic representation of originating, shaping, and contributory factors that lead to accidents when unsafe acts and conditions are created and not rectified by control measures. Causes of farm accidents were gathered through a thorough literature review and collated into a graphical representation of the underlying causes of a farm accident. The IFACM was validated retrospectively through case study analysis and peer review. Participants in the case study (n=10) identified causes that led to a farm accident in which they were involved. A root cause analysis was conducted to understand the contributory factors surrounding each farm accident, traced back to the 'root cause'. Experts in farm safety accident causation in the agricultural industry have peer-reviewed the IFACM. The accident causation process is complex, and accident prevention requires a comprehensive understanding of it: to prevent accidents, their causes must be known. There is little research on the key causes and contributory factors of unsafe behaviours and accidents on Irish farms. The focus of this research is to gain a deep understanding of the causality of accidents on Irish farms. 
The results suggest that the IFACM framework is helpful for the analysis of the causes of accidents within the agricultural industry in Ireland. The research also suggests that there may be international applicability if further research is carried out. Furthermore, significant learning can be obtained from considering the underlying causes of accidents.Keywords: farm safety, farm accidents, accident causation, root cause analysis
Procedia PDF Downloads 7819306 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks
Authors: Sulemana Ibrahim
Abstract:
Food security remains a paramount global challenge, with vulnerable regions grappling with hunger and malnutrition. This study explores data-driven strategies aimed at improving food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The analysis begins with the development of robust machine learning models that harness remote sensing data, historical crop yield records, and meteorological data to forecast crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, provide critical insight into anticipated harvests, enabling proactive measures against food insecurity. The research then examines supply chain optimization, applying linear programming and network optimization techniques to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable allocation of food resources. Network analysis tools, complemented by data-driven simulation methodologies, reveal opportunities for improving the efficacy of these critical lifelines. The study also considers the ethical implications and privacy concerns associated with the extensive use of data for food security, and the proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aim of this research is to forge a nexus between data science and food security policy, providing actionable insights to mitigate food insecurity. 
The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aims to revitalize food security in the most vulnerable regions, improving the quality of life for millions worldwide.Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks
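The supply chain step rests on classic transportation-style linear programs; a minimal brute-force sketch for a tiny two-source, two-destination instance (the supplies, demands, and unit costs are hypothetical):

```python
# Two supply points, two demand points; ship x[i][j] units from i to j.
supply = [20, 30]
demand = [25, 25]
cost = [[2, 3],
        [4, 1]]  # hypothetical unit shipping costs

def solve_2x2():
    """Enumerate x00; the remaining flows are fixed by supply/demand balance."""
    best = None
    for x00 in range(min(supply[0], demand[0]) + 1):
        x01 = supply[0] - x00
        x10 = demand[0] - x00
        x11 = demand[1] - x01
        if min(x01, x10, x11) < 0:
            continue  # infeasible split
        total = (cost[0][0] * x00 + cost[0][1] * x01 +
                 cost[1][0] * x10 + cost[1][1] * x11)
        if best is None or total < best[0]:
            best = (total, [[x00, x01], [x10, x11]])
    return best

total, plan = solve_2x2()
print(total, plan)
# → 85 [[20, 0], [5, 25]]
```

Realistic instances with many sources, sinks, and capacity constraints require a proper LP or network-flow solver rather than enumeration, but the objective and balance constraints are the same.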
Procedia PDF Downloads 6219305 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs
Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle
Abstract:
Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.Keywords: meteorological data, Washington D.C., DCNet data, NAM model
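Quantifying the NAM prediction error against station observations of this kind comes down to simple paired metrics, with the caveat that wind direction must be compared on the circle. A sketch with hypothetical sample values:

```python
import math

def ang_diff(a, b):
    """Signed difference a - b in degrees, wrapped to [-180, 180)."""
    return (a - b + 180) % 360 - 180

def bias(pred, obs):
    return sum(p - o for p, o in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical paired samples: NAM wind speed (m/s) vs. station observations.
nam_speed = [4.0, 5.5, 3.2, 6.1]
obs_speed = [3.5, 5.0, 3.0, 6.5]
print(round(bias(nam_speed, obs_speed), 3), round(rmse(nam_speed, obs_speed), 3))

# Wind direction errors must wrap around 360 degrees:
print(ang_diff(350.0, 10.0))
# → -20.0
```

Error statistics of this form, stratified by season or stability regime, are the basis for deriving correction factors that adjust model outputs toward local observations.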
Procedia PDF Downloads 23319304 Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia
Authors: The Danh Phan
Abstract:
House price forecasting is a central topic in real estate market research. Effective house price prediction models not only allow home buyers and real estate agents to make better data-driven decisions but may also benefit the property policymaking process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models that could be deployed as an application for house buyers and sellers. Data analytics shows a high discrepancy between house prices in the most expensive and the most affordable suburbs of Melbourne. In addition, experiments demonstrate that the combination of stepwise feature selection and a Support Vector Machine (SVM), evaluated by the Mean Squared Error (MSE) measurement, consistently outperforms other models in terms of prediction accuracy.Keywords: house price prediction, regression trees, neural network, support vector machine, stepwise
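The stepwise component can be illustrated with a simple greedy forward selection on residuals (a stagewise approximation to stepwise regression; the toy features and prices below are hypothetical, not the Melbourne data):

```python
def fit_line(x, y):
    """Ordinary least squares slope/intercept for a single feature."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    var = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / var
    return slope, my - slope * mx

def forward_stagewise(features, y, steps=2):
    """Greedily pick the feature whose univariate fit to the current
    residuals lowers MSE the most; repeat on the updated residuals."""
    resid = list(y)
    order = []
    for _ in range(steps):
        best = None
        for name, x in features.items():
            s, b = fit_line(x, resid)
            mse = sum((r - (s * xi + b)) ** 2 for xi, r in zip(x, resid)) / len(y)
            if best is None or mse < best[0]:
                best = (mse, name, s, b)
        _, name, s, b = best
        order.append(name)
        resid = [r - (s * xi + b) for xi, r in zip(features[name], resid)]
    return order, sum(r * r for r in resid) / len(y)

# Toy data where price depends only on floor area:
features = {"area": [1.0, 2.0, 3.0, 4.0], "rooms": [1.0, 0.0, 1.0, 0.0]}
price = [2.0, 4.0, 6.0, 8.0]
order, final_mse = forward_stagewise(features, price)
print(order, final_mse)
```

In the study, the surviving features are then fed to an SVM regressor, and competing pipelines are compared on held-out MSE exactly as this sketch compares candidate features.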
Procedia PDF Downloads 23119303 Prevalence of Work-Related Musculoskeletal Disorder among Dental Personnel in Perak
Authors: Nursyafiq Ali Shibramulisi, Nor Farah Fauzi, Nur Azniza Zawin Anuar, Nurul Atikah Azmi, Janice Hew Pei Fang
Abstract:
Background: Work-related musculoskeletal disorders (WRMSDs) among dental personnel have been underestimated and under-reported worldwide, and specifically in Malaysia. The problem arises and progresses slowly over time, as it results from injury accumulated throughout the period of work. Several risk factors, such as repetitive movement, static posture, vibration, and adopting poor working postures, have been identified as contributing to WRMSDs in dental practice. Dental personnel are at higher risk of this problem given the nature of their work. It causes pain and dysfunction syndromes among them, resulting in absence from work and substandard services to their patients. Methodology: A cross-sectional study involving 19 government dental clinics in Perak was conducted over a period of 3 months. Those who met the criteria were selected to participate in this study. The Malay version of the Self-Reported Nordic Musculoskeletal Discomfort Form was used to identify the prevalence of WRMSDs, while the intensity of pain in the respective regions was evaluated using a 10-point scale according to 'Pain as The 5ᵗʰ Vital Sign' by MOH Malaysia; the data were then analyzed using SPSS version 25. Descriptive statistics, including mean and SD or median and IQR, were used for numerical data. Categorical data were described by percentages. Pearson's chi-square test and Spearman's correlation were used to find associations between the prevalence of WRMSDs and socio-demographic data. Results: 159 dentists, 73 dental therapists, 26 dental lab technicians, 81 dental surgery assistants, and 23 dental attendants participated in this study. The mean age of the participants was 34.9±7.4 years, and their mean years of service was 9.97±7.5. Most were female (78.5%), Malay (71.3%), married (69.6%), and right-handed (90.1%). 
The highest prevalence of WRMSDs was in the neck (58.0%), followed by the shoulder (48.1%), upper back (42.0%), lower back (40.6%), hand/wrist (31.5%), feet (21.3%), knee (12.2%), thigh (7.7%), and lastly elbow (6.9%). Most of those who reported neck pain rated their pain at 2 out of 10 (19.5%), while most of those who suffered upper back discomfort rated their pain at 6 out of 10 (17.8%). A significant relationship was found between age and pain in the neck (p=0.007), elbow (p=0.027), lower back (p=0.032), thigh (p=0.039), knee (p=0.001), and feet (p<0.001) regions. Job position was also found to have a significant relationship with pain experienced in the lower back (p=0.018), thigh (p=0.011), and knee and feet (p<0.001). Conclusion: The prevalence of WRMSDs among dental personnel in Perak was found to be high. Age and job position had significant relationships with pain experienced in several regions. Intervention programs should be planned and conducted to prevent and reduce the occurrence of WRMSDs, and harmful or unergonomic practices should be avoided.Keywords: WRMSD, ergonomic, dentistry, dental
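The association tests above use Pearson's chi-square statistic; a minimal implementation for a contingency table (the counts below are hypothetical, not the study's data):

```python
def chi_square(table):
    """Pearson's chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / n  # under independence
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: pain / no pain, cross-tabulated by two job positions.
table = [[10, 20],
         [30, 40]]
print(round(chi_square(table), 2))
# → 0.79
```

Comparing the statistic against the chi-square distribution with (rows-1)(cols-1) degrees of freedom yields the p values reported above.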
Procedia PDF Downloads 8819302 Experimental Networks Synchronization of Chua’s Circuit in Different Topologies
Authors: Manuel Meranza-Castillon, Rolando Diaz-Castillo, Adrian Arellano-Delgado, Cesar Cruz-Hernandez, Rosa Martha Lopez-Gutierrez
Abstract:
In this work, we deal with experimental network synchronization of chaotic nodes with different topologies. Our approach is based on complex systems theory, and we use a master-slave configuration to couple the nodes in the networks. In particular, we design and electronically implement complex dynamical networks composed of nine coupled chaotic Chua's circuits with the following topologies: nearest-neighbor, small-world, open ring, star, and global. Network synchronization is also evaluated according to a particular coupling strength for each topology. This study is relevant to possible applications in the private transmission of information in a chaotic communication network with multiple users.Keywords: complex networks, Chua's circuit, experimental synchronization, multiple users
Procedia PDF Downloads 34819301 Framing the Dynamics and Functioning of Different Variants of Terrorist Organizations: A Business Model Perspective
Authors: Eisa Younes Alblooshi
Abstract:
Counterterrorism strategies, to be effective and efficient, require a sound understanding of the dynamics and interlinked organizational elements of the terrorist outfits being combated, both to recognize the strong points that will be guarded against and to identify the vulnerable zones that can be targeted for optimal results in a timely fashion. In this research, a unique model of organizational imperatives was developed by likening terrorist organizations to traditional commercial ones, in order to understand in detail the dynamics of interconnectivity and dependencies and the related compulsions facing the leaderships of such outfits, which provide counterterrorism agencies with opportunities to forge better strategies. This involved assessing the evolving organizational dynamics and imperatives of different types of terrorist organizations, enabling the construction of a prototype model that defines the progression and linkages of their organizational elements. Detailed analysis was required of how the various elements are connected, with their sequencing identified, as an outfit positions itself with respect to its external environment and internal dynamics. A case study focusing on a transnational, radical, religious, state-sponsored terrorist organization was conducted to validate the research findings and to further strengthen the specific counterterrorism strategies. Six variants of the business model of terrorist organizations were identified, categorized by outreach, mission, and state-sponsorship status. The variants represent the vast majority of terrorist organizations acting locally or globally. 
The model shows the progression and dynamics of these organizations across dimensions including mission, leadership, outreach, and state-sponsorship status, which shape the organizational structure, degree of autonomy, preference divergence within the fold, recruitment core, and propagation avenues, down to the organization's capacity to adapt and, critically, its life cycle. A major advantage of the model is the ability to map terrorist organizations onto the identified variants, allowing for flexibility and internal differences, and giving researchers and counterterrorism agencies a clear blueprint of an organization's footprint while highlighting the areas to be evaluated for focused target-zone selection and the timing of counterterrorism interventions. Special consideration is given to the dimension of financing, in the context of recent developments in cryptocurrencies, hawala, and global anti-money-laundering initiatives. Specific counterterrorism strategies and intervention points have been identified for each model variant, with a view to efficient and effective deployment of resources.Keywords: terrorism, counterterrorism, model, strategy
Procedia PDF Downloads 15819300 5G Future Hyper-Dense Networks: An Empirical Study and Standardization Challenges
Authors: W. Hashim, H. Burok, N. Ghazaly, H. Ahmad Nasir, N. Mohamad Anas, A. F. Ismail, K. L. Yau
Abstract:
Future communication networks require devices that work on a single platform but support heterogeneous operations, leading to service diversity and functional flexibility. This paper proposes two cognitive mechanisms, termed cognitive hybrid functions, applied in multiple broadband user terminals in order to maintain reliable connectivity and prevent unnecessary interference. By employing such mechanisms, especially in a future hyper-dense network, we can observe their performance in terms of optimized speed and power-saving efficiency. Results were obtained from several empirical laboratory studies. It was found that selecting a reliable network improved optimized speed by up to 37% compared with operation without such a function. In terms of power adjustment, evaluation of this mechanism shows that transmit power can be reduced by 5 dB while maintaining the same level of throughput. We also discuss the issues impacting future telecommunication standards whenever such devices are deployed.Keywords: dense network, intelligent network selection, multiple networks, transmit power adjustment
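The power-adjustment idea can be sketched as an iterative back-off that lowers transmit power while a Shannon-capacity estimate of throughput stays above target. The link loss figure, normalized bandwidth, and dB values below are assumptions for illustration, not the paper's measured link budget:

```python
import math

LINK_LOSS_DB = 10.0  # assumed noise-plus-path-loss figure

def spectral_efficiency(p_dbm):
    """Shannon estimate, bits/s/Hz, for a given transmit power."""
    snr_db = p_dbm - LINK_LOSS_DB
    return math.log2(1.0 + 10.0 ** (snr_db / 10.0))

def back_off(p_dbm, target_bps_hz, step_db=1.0):
    """Lower power one step at a time while the throughput target still holds."""
    while spectral_efficiency(p_dbm - step_db) >= target_bps_hz:
        p_dbm -= step_db
    return p_dbm

p_final = back_off(30.0, target_bps_hz=5.0)
print(p_final, round(spectral_efficiency(p_final), 2))
# → 25.0 5.03
```

A cognitive terminal would run this kind of loop against measured rather than modeled throughput, backing power off until just before the service target is violated.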
Procedia PDF Downloads 376