Search results for: logistics network optimization
5380 A Study on Improvement of the Torque Ripple and Demagnetization Characteristics of a PMSM
Authors: Yong Min You
Abstract:
Research on the torque ripple of Permanent Magnet Synchronous Motors (PMSMs) has progressed rapidly, since torque ripple affects the noise and vibration of electric vehicles. There are several ways to reduce torque ripple, such as increasing the number of slots and poles, notching the rotor and stator teeth, and skewing the rotor and stator. However, these conventional methods have disadvantages in terms of material cost and productivity. The demagnetization characteristic of PMSMs must also be ensured for electric vehicle applications. Due to rare-earth supply issues, demand for Dy-free permanent magnets, which can be applied to PMSMs for electric vehicles, has been increasing. Because Dy-free permanent magnets have lower coercivity, the demagnetization characteristic becomes more significant. To improve the torque ripple as well as the demagnetization characteristics, which are significant parameters for electric vehicle application, an unequal air-gap model is proposed for a PMSM. A shape optimization is performed to optimize the design variables of the unequal air-gap model. The optimal design variables are the shape of the unequal air-gap and the angle between the V-shape magnets. The optimization process is performed by Latin Hypercube Sampling (LHS), the Kriging method, and a Genetic Algorithm (GA). Finite element analysis (FEA) is also utilized to analyze the torque and demagnetization characteristics. The torque ripple and the demagnetization temperature of the initial 45 kW PMSM model with an unequal air-gap are 10% and 146.8 degrees, respectively, which are reaching a critical level for electric vehicle application. Therefore, the unequal air-gap model is proposed, and an optimization process is then conducted. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 7.7%. In addition, the demagnetization temperature of the optimized model was increased by 1.8% while maintaining the efficiency. From these results, the shape-optimized unequal air-gap PMSM has demonstrated its usefulness in improving the torque ripple and demagnetization temperature for electric vehicles.
Keywords: permanent magnet synchronous motor, optimal design, finite element method, torque ripple
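The surrogate-assisted loop described in this abstract (Latin Hypercube Sampling of design points, a Kriging model fitted to FEA responses, and a Genetic Algorithm searching the surrogate) can be sketched in a generic form. The snippet below is only an illustration of that workflow, not the authors' implementation: fea_torque_ripple is a hypothetical stand-in for the finite element evaluation, and the two normalized design variables loosely represent the air-gap shape parameter and the V-magnet angle.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fea_torque_ripple(x):
    # Placeholder for the FEA evaluation of torque ripple at a design point.
    # x = [air_gap_shape, v_magnet_angle]; a real study would call an FEA solver here.
    return (x[0] - 0.6) ** 2 + 0.5 * (x[1] - 0.3) ** 2 + 0.02 * np.sin(20 * x[0])

# 1) Latin Hypercube Sampling of the two design variables (normalized to [0, 1]).
sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=30)
y = np.array([fea_torque_ripple(x) for x in X])

# 2) Kriging (Gaussian process) surrogate of the expensive FEA response.
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X, y)

# 3) Simple genetic algorithm searching the cheap surrogate instead of the FEA model.
rng = np.random.default_rng(0)
pop = rng.random((40, 2))
for _ in range(60):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[:20]]            # selection: keep the best half
    children = (parents[rng.integers(0, 20, 20)] +
                parents[rng.integers(0, 20, 20)]) / 2  # crossover: blend two parents
    children += rng.normal(0, 0.05, children.shape)    # mutation
    pop = np.clip(np.vstack([parents, children]), 0, 1)

best = pop[np.argmin(surrogate.predict(pop))]
print("surrogate optimum (normalized design variables):", best)
```

In a real study the surrogate optimum would be re-evaluated with FEA and, if needed, added to the sample set before repeating the loop.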
Procedia PDF Downloads 275
5379 Optimization of Oxygen Plant Parameters Simulating with MATLAB
Authors: B. J. Sonani, J. K. Ratnadhariya, Srinivas Palanki
Abstract:
Cryogenic engineering is a fast-growing branch of modern technology. There are various applications of cryogenic engineering, such as liquefaction in gas industries, metal industries, medical science, space technology, and transportation. Low-temperature technology has enabled superconducting materials, which help reduce friction and wear in various components of such systems. Liquid oxygen, hydrogen, and helium play a vital role in space applications. The liquefaction process produces very low-temperature liquids for various research and industrial applications. The air liquefaction system for oxygen plants in gas industries is based on the Claude cycle. The effect of process parameters on the overall system is difficult to analyse by manual calculations, and this provides the motivation to use process simulators for understanding the steady state and dynamic behaviour of such systems. The parametric study of this system via MATLAB simulations provides useful guidelines for the preliminary design of an air liquefaction system based on the Claude cycle. Every organization is trying to reduce costs and obtain optimum plant performance in order to stay competitive in the market.
Keywords: cryogenic, liquefaction, low-temperature, oxygen, Claude cycle, optimization, MATLAB
Procedia PDF Downloads 322
5378 Enhancing Rupture Pressure Prediction for Corroded Pipes Through Finite Element Optimization
Authors: Benkouiten Imene, Chabli Ouerdia, Boutoutaou Hamid, Kadri Nesrine, Bouledroua Omar
Abstract:
Algeria is actively enhancing gas productivity by augmenting the supply flow. However, this effort has led to increased internal pressure, posing a potential risk to the pipeline's integrity, particularly in the presence of corrosion defects. Sonatrach relies on a vast network of pipelines spanning 24,000 kilometers for the transportation of gas and oil. The aging of these pipelines raises the likelihood of corrosion both internally and externally, heightening the risk of ruptures. To address this issue, a comprehensive inspection is imperative, utilizing specialized scraping tools. These advanced tools furnish a detailed assessment of all pipeline defects. It is essential to recalculate the pressure parameters to safeguard the corroded pipeline's integrity while ensuring the continuity of production. In this context, Sonatrach employs symbolic pressure limit calculations, such as ASME B31G (2009) and the modified ASME B31G (2012). The aim of this study is to perform a comparative analysis of various limit pressure calculation methods documented in the literature, namely DNV RP F-101, SHELL, P-CORRC, NETTO, and CSA Z662. This comparative assessment will be based on a dataset comprising 329 burst tests published in the literature. Ultimately, we intend to introduce a novel approach grounded in the finite element method, employing ANSYS software.
Keywords: pipeline burst pressure, burst test, corrosion defect, corroded pipeline, finite element method
Procedia PDF Downloads 58
5377 Neighborhood Graph-Optimized Preserving Discriminant Analysis for Image Feature Extraction
Authors: Xiaoheng Tan, Xianfang Li, Tan Guo, Yuchuan Liu, Zhijun Yang, Hongye Li, Kai Fu, Yufang Wu, Heling Gong
Abstract:
The image data collected in reality often have high dimensions and contain noise and redundant information. Therefore, it is necessary to extract a compact feature expression of the original perceived image. In this process, effective use of prior knowledge such as the data structure distribution and sample labels is the key to enhancing image feature discrimination and robustness. Based on the above considerations, this paper proposes a local preserving discriminant feature learning model based on graph optimization. The model has the following characteristics: (1) The locality preserving constraint can effectively excavate and preserve the local structural relationships between data. (2) The flexibility of graph learning can be improved by constructing a new local geometric structure graph using label information and a nearest-neighbor threshold. (3) The L₂,₁ norm is used to redefine LDA, and a diagonal matrix is introduced as the scale factor of LDA to select samples, which improves the robustness of feature learning. The validity and robustness of the proposed algorithm are verified by experiments on two public image datasets.
Keywords: feature extraction, graph optimization, local preserving projection, linear discriminant analysis, L₂,₁ norm
Procedia PDF Downloads 149
5376 The Use of Layered Neural Networks for Classifying Hierarchical Scientific Fields of Study
Authors: Colin Smith, Linsey S Passarella
Abstract:
Due to the proliferation and decentralized nature of academic publication, to the authors' best knowledge no widely accepted scheme exists for organizing papers by their scientific field of study (FoS). While many academic journals require author-provided keywords for papers, these keywords vary widely in scope and are not consistent across papers, journals, or field domains, necessitating alternative approaches to paper classification. Past attempts to perform field-of-study (FoS) classification on scientific texts have largely used non-hierarchical FoS schemas or ignored the schema's inherently hierarchical structure, e.g., by compressing the structure into a single layer for multi-label classification. In this paper, we introduce an application of a Layered Neural Network (LNN) to the problem of performing supervised hierarchical classification of scientific fields of study on research papers. In this approach, paper embeddings from a pretrained language model are fed into a top-down LNN. Beginning with a single neural network (NN) for the highest layer of the class hierarchy, each node uses a separate local NN to classify the subsequent subfield child node(s) for an input embedding of concatenated paper titles and abstracts. We compare our LNN-FOS method to other recent machine learning methods using the Microsoft Academic Graph (MAG) FoS hierarchy and find that LNN-FOS offers increased classification accuracy at each FoS hierarchical level.
Keywords: hierarchical classification, layered neural network, scientific field of study, scientific taxonomy
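A minimal sketch of the top-down, per-node classification idea is given below. It is not the LNN-FOS implementation: a two-level toy taxonomy and random vectors stand in for the MAG hierarchy and the pretrained-language-model embeddings, and logistic regression replaces the local neural networks at each node.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy two-level taxonomy: top-level fields and their subfields (illustrative only).
taxonomy = {
    "Computer Science": ["Machine Learning", "Databases"],
    "Physics": ["Optics", "Condensed Matter"],
}

def train_local_classifiers(X, top_labels, sub_labels):
    """Train one classifier for the root and one local classifier per top-level node."""
    root_clf = LogisticRegression(max_iter=1000).fit(X, top_labels)
    sub_clfs = {}
    for field in taxonomy:
        mask = top_labels == field
        if mask.sum() > 1:
            sub_clfs[field] = LogisticRegression(max_iter=1000).fit(X[mask], sub_labels[mask])
    return root_clf, sub_clfs

def predict_hierarchy(x, root_clf, sub_clfs):
    """Top-down inference: predict the field first, then route to that field's classifier."""
    field = root_clf.predict(x.reshape(1, -1))[0]
    sub = sub_clfs[field].predict(x.reshape(1, -1))[0] if field in sub_clfs else None
    return field, sub

# Synthetic "paper embeddings" standing in for pretrained language-model vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
top = rng.choice(list(taxonomy), size=200)
sub = np.array([rng.choice(taxonomy[t]) for t in top])

root_clf, sub_clfs = train_local_classifiers(X, top, sub)
print(predict_hierarchy(X[0], root_clf, sub_clfs))
```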
Procedia PDF Downloads 133
5375 Concentrated Whey Protein Drink with Orange Flavor: Protein Modification and Formulation
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh
Abstract:
The application of whey protein in the drink industry to enhance the nutritional value of products is important. However, the gelification of the protein during thermal treatment and shelf life imposes some limitations on its application. The main goal of this research was therefore the manufacture of a highly concentrated whey protein orange drink with an appropriate shelf life. Whey protein was hydrolyzed from 5 to 30% (in 5 percent intervals, at six stages), and then the thermal stability of samples with a 10% protein concentration was tested under acidic conditions (T = 90 °C, pH = 4.2, 5 minutes) and neutral conditions (T = 120 °C, pH = 6.7, 20 minutes). Furthermore, to study the shelf life of the heat-treated samples over 4 months at 4 and 24 °C, time sweep rheological tests were performed. Under neutral conditions, the 5 to 20% hydrolyzed samples showed gelling during thermal treatment, whereas under acidic conditions gelling occurred only in the 5 to 10 percent hydrolyzed samples. This phenomenon could be related to the difference in hydrodynamic radius and zeta potential of samples with different levels of hydrolyzation under acidic and neutral conditions. To study the gelification of the heat-resistant protein solutions during shelf life, time sweep analyses were performed for 4 months at 7-day intervals. Crossover was observed for all heat-resistant neutral samples at both storage temperatures, while it was not seen in the heat-resistant acidic samples with degrees of hydrolysis of 25 and 30 percent at 4 and 20 °C. It could be concluded that the 25 and 30 percent hydrolyzed acidic samples were stable during heat treatment and 4 months of storage, which makes them a good choice for manufacturing high-protein drinks. The Scheffe polynomial model and numerical optimization were employed for modeling and optimizing the high-protein orange drink formula. The Scheffe model significantly predicted the overall acceptance index (P value < 0.05) of the sensory analysis. The coefficient of determination (R²) of 0.94, the adjusted coefficient of determination (R²adj) of 0.90, the insignificance of the lack-of-fit test, and an F value of 64.21 showed the accuracy of the model. Moreover, the coefficient of variation (C.V.) was 6.8%, which suggests the replicability of the experimental data. The desirability function value achieved was 0.89, which indicates the high accuracy of the optimization. The optimum formulation was found to be: modified whey protein solution (65.30%), natural orange juice (33.50%), stevia sweetener (0.05%), orange peel oil (0.15%), and citric acid (1%). It is worth mentioning that this study produced an appropriate model for the application of whey protein in the drink industry without a bitter flavor or gelification during heat treatment and shelf life.
Keywords: crossover, orange beverage, protein modification, optimization
Procedia PDF Downloads 62
5374 The Interplay between Autophagy and Macrophages' Polarization in Wound Healing: A Genetic Regulatory Network Analysis
Authors: Mayada Mazher, Ahmed Moustafa, Ahmed Abdellatif
Abstract:
Background: Autophagy is a highly conserved eukaryotic catabolic process implicated in many pathophysiologies, such as wound healing. Autophagy-associated genes serve as a scaffolding platform for signal transduction of macrophage polarization during the inflammatory phase of wound healing and the tissue repair process. In the current study, we report a model for the interplay between autophagy-associated genes and macrophage polarization associated genes. Methods: In silico analysis was performed on 249 autophagy-related genes retrieved from the public autophagy database and on gene expression data retrieved from Gene Expression Omnibus (GEO); the GSE81922 and GSE69607 microarray datasets yielded 199 differentially expressed genes (DEGs) related to macrophage polarization. An integrated protein-protein interaction network was constructed for the autophagy and macrophage gene sets. The gene sets were then used for GO term pathway enrichment analysis. Common transcription factors for autophagy and macrophage polarization were identified. Finally, microRNAs enriched in both autophagy and macrophages were predicted. Results: In silico prediction of common transcription factors in the DEGs of the macrophage and autophagy gene sets revealed a new role for the transcription factors HOMEZ, GABPA, ELK1 and REL, which commonly regulate the macrophage-associated genes IL6, IL1M, IL1B, NOS1 and SOC3 and the autophagy-related genes Atg12, Rictor, Rb1cc1, Gaparab1 and Atg16l1. Conclusions: Autophagy and macrophage polarization are interdependent cellular processes, and both autophagy-related proteins and macrophage polarization related proteins coordinate in tissue remodelling via a transcription factor and microRNA regulatory network. The current work highlights a potential new role for the transcription factors HOMEZ, GABPA, ELK1 and REL in wound healing.
Keywords: autophagy related proteins, integrated network analysis, macrophages polarization M1 and M2, tissue remodelling
Procedia PDF Downloads 152
5373 Cross-Linked Amyloglucosidase Aggregates: A New Carrier Free Immobilization Strategy for Continuous Saccharification of Starch
Authors: Sidra Pervez, Afsheen Aman, Shah Ali Ul Qader
Abstract:
The importance of attaining optimum performance of an enzyme is often a question of devising an effective method for its immobilization. Cross-linked enzyme aggregates (CLEAs) are a new approach for the immobilization of enzymes using a carrier-free strategy. This method is exquisitely simple (involving precipitation of the enzyme from aqueous buffer followed by cross-linking of the resulting physical aggregates of enzyme molecules) and amenable to rapid optimization. Among many industrial enzymes, amyloglucosidase is an important amylolytic enzyme that hydrolyzes alpha (1→4) and alpha (1→6) glycosidic bonds in the starch molecule and produces glucose as the sole end product. Glucose liberated by amyloglucosidase can be used for the production of ethanol and glucose syrups. Besides this, amyloglucosidase can be widely used in various food and pharmaceutical industries. For the production of amyloglucosidase on a commercial scale, filamentous fungi of the genus Aspergillus are mostly used because they secrete large amounts of enzymes extracellularly. The current investigation was based on the isolation and identification of filamentous fungi of the genus Aspergillus for the production of amyloglucosidase in submerged fermentation and the optimization of cultivation parameters for starch saccharification. Natural isolates were identified as Aspergillus niger KIBGE-IB36, Aspergillus fumigatus KIBGE-IB33, Aspergillus flavus KIBGE-IB34 and Aspergillus terreus KIBGE-IB35 on a taxonomical basis and by 18S rDNA analysis, and their sequences were submitted to GenBank. Among them, Aspergillus fumigatus KIBGE-IB33 was selected on the basis of maximum enzyme production. After optimization of the fermentation conditions, the enzyme was immobilized as CLEAs. Different parameters were optimized for maximum immobilization of amyloglucosidase. Data on enzyme stability (thermal and storage) and reusability suggest the applicability of the immobilized amyloglucosidase for continuous saccharification of starch in industrial processes.
Keywords: aspergillus, immobilization, industrial processes, starch saccharification
Procedia PDF Downloads 496
5372 Identifying the Factors affecting on the Success of Energy Usage Saving in Municipality of Tehran
Authors: Rojin Bana Derakhshan, Abbas Toloie
Abstract:
For the purpose of optimizing and developing energy efficiency in buildings, it is necessary to recognize the key elements of success in the optimization of energy consumption before performing any actions. Surveying principal components is one of the most valuable results of linear algebra, since it offers a simple, non-parametric method where other approaches become confusing. Accordingly, an energy management system was implemented according to the international energy management standard ISO 50001:2011, and all energy parameters in the building were measured through an energy audit. In this study, data mining is used to determine the key elements influencing energy saving in buildings. The approach is based on statistical data mining techniques using a feature selection method and fuzzy logic, converting the data from a massive to a compressed form that is then used to increase the selected features. In addition, the influence and share of each energy consumption element in energy dissipation, expressed in percent, are identified as separate norms using the results of the energy audit and the measurement of all energy-consuming parameters and identified variables. Accordingly, the energy saving solutions are divided into three categories: low-, medium-, and high-expense solutions.
Keywords: energy saving, key elements of success, optimization of energy consumption, data mining
Procedia PDF Downloads 468
5371 Net-Trainer-ST: A Swiss Army Knife for Pentesting, Based on Single Board Computer, for Cybersecurity Professionals and Hobbyists
Authors: K. Hołda, D. Śliwa, K. Daniec, A. Nawrat
Abstract:
This article was created as part of a developed master's thesis. It presents a newly developed device intended to support the work of specialists dealing with broadly understood cybersecurity. The device is designed to automate security tests. In addition, it simulates potential cyberattacks in the most realistic way possible, without causing permanent damage to the network, in order to maximize the quality of the subsequent corrections to the tested network systems. The proposed solution is a fully operational prototype created from commonly available electronic components and a single-board computer. The focus of the article is not only on the hardware part of the device but also on the theory and application of the implemented cybersecurity tests and examples of their results.
Keywords: Raspberry Pi, ethernet, automated cybersecurity tests, ARP, DNS, backdoor, TCP, password sniffing
Procedia PDF Downloads 125
5370 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. This method converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and the Yuan method. Inspired by these previous works, we modified the step size of the steepest descent method. We then compare the modified method against the Barzilai and Borwein method, the alternate minimization gradient method, and the Yuan method for quadratic function cases in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the results of the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes converge faster than the other methods, especially for cases with large dimensions.
Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence
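For readers unfamiliar with the baseline being modified, the sketch below contrasts classic steepest descent (with exact line search on a quadratic) against the Barzilai-Borwein step size alpha_k = s^T s / s^T y, where s = x_k - x_{k-1} and y = g_k - g_{k-1}. The test matrix, iteration count, and initial step are illustrative and unrelated to the paper's experiments; the authors' new step sizes are not reproduced here.

```python
import numpy as np

# Quadratic test problem f(x) = 0.5 x^T A x - b^T x with gradient g(x) = A x - b.
rng = np.random.default_rng(1)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)      # symmetric positive definite
b = rng.normal(size=50)

def grad(x):
    return A @ x - b

def steepest_descent_exact(x, iters=500):
    """Classic steepest descent with the exact line search for quadratics."""
    for _ in range(iters):
        g = grad(x)
        alpha = (g @ g) / (g @ (A @ g))   # exact minimizer along -g
        x = x - alpha * g
    return x

def steepest_descent_bb(x, iters=500):
    """Steepest descent with the Barzilai-Borwein step alpha = s^T s / s^T y."""
    g_old, x_old = grad(x), x.copy()
    x = x - 1e-4 * g_old               # small initial gradient step
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_old, g - g_old
        alpha = (s @ s) / (s @ y)
        x_old, g_old = x.copy(), g.copy()
        x = x - alpha * g
    return x

x_star = np.linalg.solve(A, b)
for name, solver in [("exact line search", steepest_descent_exact),
                     ("Barzilai-Borwein", steepest_descent_bb)]:
    err = np.linalg.norm(solver(np.zeros(50)) - x_star)
    print(f"{name}: error {err:.2e}")
```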
Procedia PDF Downloads 540
5369 The Rise of Darknet: A Call for Understanding Online Communication of Terrorist Groups in Indonesia
Authors: Aulia Dwi Nastiti
Abstract:
A number of studies and reports on terrorism have continuously addressed the role of the internet and online activism in supporting terrorist and extremist groups. In particular, they stress social media usage, which generally serves for terrorist propaganda as well as justification of their causes. While those analyses are important for understanding how social media is a vital tool for global network terrorism, they are inadequate for explaining the online communication patterns that enable terrorism acts. Beyond apparent online narratives, there is a deep cyber sphere where the very vein of the terrorism movement lies. That is a hidden space in the internet called the 'darknet'. Recent investigations, particularly in the Middle Eastern context, have shed some light on the fact that this invisible space in the internet is fundamental to maintaining terrorist activities. Despite that, a limited number of studies examine the darknet within the issue of terrorist movements in the Indonesian context, where the country is considered quite a hotbed for extremist groups. Therefore, this paper attempts to provide an insight into darknet operation in Indonesian cases. By exploring how the darknet is used by Indonesian-based extremist groups, this paper maps out the communication patterns of terrorist groups on the internet, which appear as an intermingled network. It shows that internet usage is differentiated across different levels of anonymity for distinct purposes. This paper further argues that the emerging terrorist communication network calls for a more comprehensive counterterrorism strategy on the internet.
Keywords: communication pattern, counterterrorism, darknet, extremist groups, terrorism
Procedia PDF Downloads 293
5368 Scheduling Residential Daily Energy Consumption Using Bi-criteria Optimization Methods
Authors: Li-hsing Shih, Tzu-hsun Yen
Abstract:
Because of the long-term commitment to net zero carbon emissions, utility companies include more renewable energy supply, which generates electricity with time and weather restrictions. This leads to time-of-use electricity pricing to reflect the actual cost of energy supply. From an end-user point of view, better residential energy management is needed to incorporate the time-of-use prices and assist end users in scheduling their daily use of electricity. This study uses bi-criteria optimization methods to schedule daily energy consumption by minimizing the electricity cost and maximizing the comfort of end users. Different from most previous research, this study schedules users' activities rather than household appliances to obtain better measures of users' comfort/satisfaction. The relation between each activity and the use of different appliances can be defined by users. The comfort level is at its highest when the time and duration of an activity completely meet the user's expectation, and the comfort level decreases when the time and duration do not meet expectations. A questionnaire survey was conducted to collect data for establishing regression models that describe users' comfort levels when the execution time and duration of activities are different from user expectations. Six regression models representing the comfort levels for six types of activities were established using the responses to the questionnaire survey. A computer program was developed to evaluate the electricity cost and the comfort level for each feasible schedule and then find the non-dominated schedules. The epsilon constraint method is used to find the optimal schedule out of the non-dominated schedules. A hypothetical case is presented to demonstrate the effectiveness of the proposed approach and the computer program. Using the program, users can obtain the optimal schedule of daily energy consumption by inputting the intended time and duration of activities and the given time-of-use electricity prices.
Keywords: bi-criteria optimization, energy consumption, time-of-use price, scheduling
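The epsilon-constraint step can be illustrated on a single activity: enumerate candidate start hours under time-of-use prices, score each by cost and a comfort function, keep the non-dominated schedules, then minimize cost subject to a comfort threshold. The prices, activity data, and linear comfort function below are invented placeholders for the paper's regression-based comfort models.

```python
import numpy as np

# Illustrative time-of-use prices per hour (24 values) and one 2-hour activity.
price = np.array([0.08] * 7 + [0.20] * 4 + [0.12] * 6 + [0.25] * 4 + [0.08] * 3)
duration, power_kw, preferred_start = 2, 1.5, 19   # user prefers to start at 19:00

def cost(start):
    return power_kw * price[start:start + duration].sum()

def comfort(start):
    # Stand-in for the regression-based comfort models: highest at the preferred
    # start time, decreasing linearly with the deviation from it.
    return max(0.0, 1.0 - 0.15 * abs(start - preferred_start))

candidates = [(s, cost(s), comfort(s)) for s in range(24 - duration + 1)]

# Keep only non-dominated schedules (no other schedule is at least as cheap AND
# at least as comfortable, with one of the two strictly better).
pareto = [c for c in candidates
          if not any(o[1] <= c[1] and o[2] >= c[2] and (o[1] < c[1] or o[2] > c[2])
                     for o in candidates)]

# Epsilon-constraint: minimize cost subject to comfort >= epsilon.
epsilon = 0.7
feasible = [c for c in pareto if c[2] >= epsilon]
best = min(feasible, key=lambda c: c[1])
print(f"start hour {best[0]}, cost {best[1]:.2f}, comfort {best[2]:.2f}")
```

Sweeping epsilon over the comfort range traces out the full Pareto front, which is how a user-chosen trade-off would be selected in practice.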
Procedia PDF Downloads 60
5367 Lessons from Implementation of a Network-Wide Safety Huddle in Behavioral Health
Authors: Deborah Weidner, Melissa Morgera
Abstract:
The model of care delivery in the Behavioral Health Network (BHN) is integrated across all five regions of Hartford Healthcare and thus spans the entirety of the state of Connecticut, with care provided in seven inpatient settings and over 30 ambulatory outpatient locations. While safety has been a core priority of the BHN in alignment with High Reliability practices, safety initiatives have historically been facilitated locally in each region or within each entity, with interventions implemented locally as opposed to throughout the network. To address this, the BHN introduced a network-wide Safety Huddle during 2022. Launched in January, the BHN Safety Huddle brought together internal stakeholders, including medical and administrative leaders, along with executive institute leadership, quality, and risk management. Bringing leaders together and introducing a network-wide safety huddle into the way we work has increased awareness of safety events occurring in behavioral health areas as well as the systemization of countermeasures to prevent future events. One significant discussion topic presented in huddles has pertained to environmental design and patient access to potentially dangerous items, addressing some of the most relevant factors resulting in harm to behavioral health patients in inpatient and emergency settings. The safety huddle has improved visibility of potential environmental safety risks through the generation of over 15 safety alerts cascaded throughout the BHN and also spurred a rapid improvement project focused on standardization of patient belonging searches to reduce patient access to potentially dangerous items on inpatient units. Safety events pertaining to potentially dangerous items decreased by 31% as a result of standardized interventions implemented across the network and as a result of increased awareness. A second positive outcome originating from the BHN Safety Huddle was the implementation of a recommendation to increase the emergency Narcan® (naloxone) supply on hand in ambulatory settings of the BHN after incidents involving accidental overdose resulted in higher doses of naloxone administration. By increasing the emergency supply of naloxone on hand in all ambulatory and residential settings, colleagues are better prepared to respond in an emergency situation should a patient experience an overdose while on site. Lastly, discussions in the safety huddle spurred a new initiative within the BHN to improve responsiveness to assaultive incidents through a consultation service. This consult service, aligned with one of the network's improvement priorities to reduce harm events related to assaultive incidents, was born out of a huddle discussion in which it was identified that additional interventions may be needed in providing clinical care to patients who are experiencing multiple and/or frequent safety events.
Keywords: quality, safety, behavioral health, risk management
Procedia PDF Downloads 83
5366 The Impact of Milk Transport on Its Quality
Authors: Urszula Malaga-Toboła, Marek Gugała, Rafał Kornas, Robert Rusinek, Marek Gancarz
Abstract:
The work focuses on presenting the elements that determine the quality of fresh milk in the context of the quality of its transport. The quality of the raw material depends on the quality of transport. Milk transport involves many activities in which, apart from the temperature and sterility of the means of transport, it is important not to expose the raw material to shocks. Recently, there have been changes in the milk supply chain, affecting the logistics processes between its links. Based on the conducted research and analyses, it was found that the condition of the road surface on which milk is transported affects its quality. For the T1 milk transport route (gravel roads of very poor and poor quality), the lowest number of bacteria and the highest number of somatic cells, fat content, and temperature of the transported milk were obtained. A well-organized integrated transport system is a real need for most companies today. The analysis showed significant differences in the quality of milk delivered to the dairy.
Keywords: fresh milk, transport, milk quality, dairy
Procedia PDF Downloads 81
5365 Secure Proxy Signature Based on Factoring and Discrete Logarithm
Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi
Abstract:
A digital signature is an electronic signature form used by an original signer to sign a specific document. When the original signer is not in his office or travels outside, he/she delegates his/her signing capability to a proxy signer, and the proxy signer then generates a signing message on behalf of the original signer. The two parties must be able to authenticate one another and agree on a secret encryption key in order to communicate securely over an unreliable public network. Authenticated key agreement protocols have an important role in building a secure communications network between the two parties. In this paper, we present a secure proxy signature scheme over an efficient and secure authenticated key agreement protocol based on the factoring and discrete logarithm problems.
Keywords: discrete logarithm, factoring, proxy signature, key agreement
Procedia PDF Downloads 309
5364 Modeling Studies on the Elevated Temperatures Formability of Tube Ends Using RSM
Authors: M. J. Davidson, N. Selvaraj, L. Venugopal
Abstract:
Elevated-temperature forming studies on the expansion of thin-walled tube ends have been carried out in the present work. The influence of the process parameters, namely the die angle, the die ratio, and the operating temperature, on the expansion of tube ends at elevated temperatures is investigated. The range of operating parameters has been identified by performing extensive simulation studies. The hot forming parameters have been evaluated for AA2014 alloy for the simulation studies. An experimental matrix has been developed from the feasible range obtained from the simulation results. Design of experiments is used for the optimization of the process parameters. Response Surface Methodology (RSM) and a Box-Behnken design (BBD) are used for developing the mathematical model for expansion. Analysis of variance (ANOVA) is used to analyze the influence of the process parameters on the expansion of the tube ends. The effects of various process combinations on expansion are analyzed through graphical representations. The developed model is found to be appropriate, as the coefficient of determination is very high, equal to 0.9726. The predicted values are found to coincide well with the experimental results, within acceptable error limits.
Keywords: expansion, optimization, Response Surface Methodology (RSM), ANOVA, BBD, residuals, regression, tube
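The RSM step amounts to fitting a full second-order polynomial that links the coded factors (die angle, die ratio, temperature) to the measured expansion, then judging the fit by its coefficient of determination. The sketch below shows such a fit on synthetic data standing in for the Box-Behnken runs; the coefficients and noise level are invented and not the paper's values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for the Box-Behnken runs: coded factors in [-1, 1]
# (die angle, die ratio, temperature) and a measured expansion response.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 3))
y = (5 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2]
     - 0.6 * X[:, 0] ** 2 + 0.4 * X[:, 0] * X[:, 2] + rng.normal(0, 0.05, 15))

# Full second-order response surface: linear, interaction and squared terms.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, y)
print("R^2 of the fitted response surface:", r2_score(y, rsm.predict(X)))
```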
Procedia PDF Downloads 509
5363 A Neural Approach for the Offline Recognition of the Arabic Handwritten Words of the Algerian Departments
Authors: Salim Ouchtati, Jean Sequeira, Mouldi Bedda
Abstract:
In this work, we present an offline system for the recognition of Arabic handwritten words of the Algerian departments. The study is based mainly on the evaluation of the performance of a neural network trained with the gradient backpropagation algorithm. The parameters used to form the input vector of the neural network are extracted from the binary images of the handwritten word by several methods: distribution parameters, centered moments of the different projections, and Barr features. It should be noted that these methods are applied to segments obtained after dividing the binary image of the word into six segments. The classification is achieved by a multi-layer perceptron. Detailed experiments are carried out, and satisfactory recognition results are reported.
Keywords: handwritten word recognition, neural networks, image processing, pattern recognition, features extraction
Procedia PDF Downloads 513
5362 Optimization of Strategies and Models Review for Optimal Technologies-Based on Fuzzy Schemes for Green Architecture
Authors: Ghada Elshafei, A. Elazim Negm
Abstract:
Recently, green architecture has become a significant path toward a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment. Moreover, new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures in order to keep the built environment more sustainable. The most common objective is that green buildings should be designed to minimize the overall impact of the built environment on ecosystems in general, and particularly on human health and the natural environment. This leads to protecting occupant health, improving employee productivity, reducing pollution, and sustaining the environment. In green building design, multiple parameters, which may be interrelated, contradictory, vague, and of a qualitative/quantitative nature, are commonly used. This paper presents a comprehensive critical state-of-the-art review of current practices based on fuzzy techniques and their combinations. It also presents how green architecture and buildings can be improved using the technologies employed in the analysis to seek optimal green solution strategies and models that assist in making the best possible decision out of different alternatives.
Keywords: green architecture/building, technologies, optimization, strategies, fuzzy techniques, models
Procedia PDF Downloads 475
5361 ATC in Competitive Electricity Market Using TCSC
Authors: S. K. Gupta, Richa Bansal
Abstract:
In a deregulated power system structure, power producers and customers share a common transmission network for wheeling power from the point of generation to the point of consumption. All parties in this open access environment may try to purchase energy from the cheaper source for greater profit margins, which may lead to overloading and congestion of certain corridors of the transmission network. This may result in violation of line flow, voltage and stability limits and thereby undermine system security. Utilities therefore need to determine their Available Transfer Capability (ATC) adequately to ensure that system reliability is maintained while serving a wide range of bilateral and multilateral transactions. This paper presents a power transfer distribution factor approach based on AC load flow for the determination and enhancement of ATC. The study has been carried out for the IEEE 24-bus Reliability Test System.
Keywords: available transfer capability, FACTS devices, power transfer distribution factors, electric
Procedia PDF Downloads 497
5360 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection
Authors: Muhammad Ali
Abstract:
Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scan, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic. Beyond IoT development, and mainly for smart and IoT applications, there is a necessity for intelligent processing and analysis of data, so our approach focuses on security. We train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, considering IoT applications, with ANOVA-based feature selection producing fewer predictors to evaluate network traffic and help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, D.T., and R.F., which give the most satisfactory test accuracy with fast detection. The evaluation of ML metrics includes precision, recall, F1 score, FPR, NPV, G.M., MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with less prediction time, with an accuracy of 99.98%.
Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection
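The core pipeline (ANOVA F-test feature selection followed by a classifier such as Random Forest) can be sketched as follows. Synthetic multi-class data stands in for the labelled IoT traffic, and the number of selected features and trees are arbitrary choices, not the study's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for labelled IoT network traffic (attack / anomaly classes).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# ANOVA F-test keeps the k features whose class-wise means differ the most,
# shrinking the input before the classifier sees it.
model = make_pipeline(SelectKBest(score_func=f_classif, k=15),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
print("random forest accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

Swapping the final estimator for KNN, SVM, or naive Bayes reproduces the kind of model comparison the abstract describes.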
Procedia PDF Downloads 125
5359 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder
Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa
Abstract:
Earthquakes are natural hazards that can trigger the most dangerous hazard, a tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami which struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia and Singapore. More than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but this hazard can be mitigated by earthquake forecasting. Early preparation is the key factor in reducing damages and consequences. We aim to investigate earthquake patterns quantitatively so that the trend can be known. We study the earthquakes that have happened on Andalas Island, Indonesia, over the last decade. Andalas is an island with high seismicity; more than a thousand events occur in a year. This is because Andalas Island lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is needed for mitigation action; thus, a tsunami forecasting method is presented in this work. Neural networks have been used widely in much research to estimate earthquakes, and it is established that earthquakes can be predicted by using the backpropagation method. At first, the ANN is trained to predict the tsunami of 26 December 2004 by using earthquake data recorded before it. Then, after we obtain the trained ANN, we apply it to predict the next earthquake. Not all earthquakes will trigger a tsunami; there are some characteristics of an earthquake that can cause a tsunami. A wrong decision can cause other problems in society, so we need a method to reduce the possibility of a wrong decision. Fuzzy TOPSIS is a statistical method that is widely used as a decision seconder with respect to given parameters. The fuzzy TOPSIS method can make the best decision on whether an earthquake causes a tsunami or not. This work combines earthquake prediction using the neural network method with fuzzy TOPSIS to determine whether the predicted earthquake triggers a tsunami wave or not. The neural network model is capable of capturing non-linear relationships, and fuzzy TOPSIS is capable of determining the best decision better than other statistical methods in tsunami prediction.
Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami
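As a rough illustration of the decision stage, the snippet below implements classical (crisp) TOPSIS ranking; the fuzzy variant used in the paper replaces crisp scores with fuzzy numbers but follows the same distance-to-ideal logic. The attributes, weights, and benefit/cost directions for the candidate earthquakes are invented for the example.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (classical TOPSIS)."""
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)   # vector normalization
    v = norm * weights                                                 # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)          # closeness coefficient in [0, 1]

# Rows: candidate earthquakes; columns: magnitude, depth (km), distance to coast (km).
events = np.array([[9.1, 30.0, 150.0],
                   [6.5, 80.0, 300.0],
                   [7.8, 15.0, 100.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])   # larger magnitude raises tsunami risk; greater depth and distance lower it

print("closeness to the 'tsunamigenic' ideal:", topsis(events, weights, benefit))
```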
Procedia PDF Downloads 496
5358 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems
Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu
Abstract:
In the research of automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method perform excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP
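A lightweight sketch of the expanded retrieval module is shown below: a query is answered from both a text index and a one-hop neighborhood of matched knowledge-graph entities, and the merged context is what would be passed to the generator. The toy corpus and graph are placeholders, TF-IDF stands in for the paper's retriever, and the GNN encoding of the graph is omitted for brevity.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy document store and knowledge graph (placeholders for the journal corpus and extracted KG).
docs = ["Graph neural networks aggregate neighbor features.",
        "Retrieval-augmented generation grounds answers in retrieved text.",
        "Knowledge graphs store entities and typed relations."]
kg = nx.Graph([("GNN", "message passing"), ("GNN", "knowledge graph"),
               ("knowledge graph", "entity"), ("RAG", "retrieval"), ("RAG", "LLM")])

vectorizer = TfidfVectorizer().fit(docs)
doc_vecs = vectorizer.transform(docs)

def retrieve(question, k=2):
    # 1) Text retrieval via TF-IDF cosine similarity over the document store.
    sims = cosine_similarity(vectorizer.transform([question]), doc_vecs)[0]
    text_hits = [docs[i] for i in sims.argsort()[::-1][:k]]
    # 2) Graph retrieval: one-hop neighborhood of any entity mentioned in the question.
    entities = [n for n in kg.nodes if n.lower() in question.lower()]
    graph_facts = [f"{e} -- {nbr}" for e in entities for nbr in kg.neighbors(e)]
    return text_hits, graph_facts

text_ctx, graph_ctx = retrieve("How does a GNN use a knowledge graph?")
prompt = "Context:\n" + "\n".join(text_ctx + graph_ctx) + "\nQuestion: ..."
print(prompt)   # this combined context is what would be fed to the answer generator
```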
Procedia PDF Downloads 41
5357 Modeling Intention to Use 3PL Services: An Application of the Theory of Planned Behavior
Authors: Nasrin Akter, Prem Chhetri, Shams Rahman
Abstract:
The present study tested Ajzen's Theory of Planned Behavior (TPB) model to explain the formation of business customers' intention to use 3PL services in Bangladesh. The findings show that the TPB model has a good fit to the data. Based on theoretical support and suggested modification indices, a refined TPB model was developed afterwards which provides better predictive power for intention. Consistent with the theory, the results of a structural equation analysis revealed that the intention to use 3PL services is predicted by attitude and subjective norms but not by perceived behavioral control. Further investigation indicated that the paths between (attitude and intention) and (subjective norms and intention) did not statistically differ between 3PL users and non-users. Findings of this research provide an evidence base to formulate business strategies to increase the use of 3PL services in Bangladesh to enhance productivity and to gain economic efficiency.
Keywords: Bangladesh, intention, third-party logistics, Theory of Planned Behavior
Procedia PDF Downloads 581
5356 An Approach to Analyze Testing of Nano On-Chip Networks
Authors: Farnaz Fotovvatikhah, Javad Akbari
Abstract:
The test time of a test architecture is an important factor which depends on the architecture's delay and test patterns. Here, a new network-on-chip-based architecture to store the test results is presented. In addition, a simple analytical model is proposed to calculate link test time for a built-in self-tester (BIST) and an external tester (Ext) in multiprocessor systems. The results extracted from the model are verified using FPGA implementation and experimental measurements. Systems consisting of 16, 25, and 36 processors are implemented and simulated, and the test time is calculated. In addition, BIST and Ext are compared in terms of test time under different conditions, such as different numbers of test patterns and nodes. Using the model, the maximum testing frequency can be calculated and the test structure can be optimized for high-speed testing.
Keywords: test, nano on-chip network, JTAG, modelling
Procedia PDF Downloads 488
5355 Awareness and Utilization of Social Network Tools among Agricultural Science Students in Colleges of Education in Ogun State, Nigeria
Authors: Adebowale Olukayode Efunnowo
Abstract:
This study was carried out to assess the awareness and utilization of Social Network Tools (SNTs) among agricultural science students in Colleges of Education in Ogun State, Nigeria. Simple random sampling techniques were used to select 280 respondents from the study area. Descriptive statistics were used to describe the objectives, while Pearson Product Moment Correlation was used to test the hypothesis. The results showed that the majority (71.8%) of the respondents were single, with a mean age of 20 years. Almost all (95.7%) of the respondents were aware of Facebook and 2go as Social Network Tools (SNTs), while 85.0% of the respondents were not aware of Blackplanet, LinkedIn, MyHeritage and Bebo. Many (41.1%) of the respondents held the view that using SNTs can enhance extensive literature surveys, increase internet browsing potential, promote teaching proficiency, and provide updates on research outcomes. However, 51.4% of the respondents perceived SNT usage as something meant for lecturers/adults only, while 16.1% considered it as mainly used by internet fraudsters. Findings revealed that about 50.0% of the respondents browsed Facebook and 2go daily, while more than 80% of the respondents used Blackplanet, MyHeritage, Skyrock, Bebo, LinkedIn and My YearBook as the need arises. Major constraints to the awareness and utilization of SNTs were the high cost and poor quality of ICT facilities (77.1%), epileptic power supply (75.0%), inadequate telecommunication infrastructure (71.1%), low technical know-how (62.9%) and inadequate computer knowledge (61.1%). The result of the PPMC analysis showed that there was an inverse relationship between constraints and utilization of SNTs at p < 0.05. It can be concluded that constraints affect the efficient and effective utilization of SNTs in the study area. It is hereby recommended that the management of colleges of education and agricultural institutes should provide good internet connectivity, computer facilities, and an alternative power supply in order to increase the awareness and utilization of SNTs among students.
Keywords: awareness, utilization, social network tools, constraints, students
Procedia PDF Downloads 352
5354 Quality versus Excellence: The Importance of Employees Knowing the Difference
Authors: Chris Nelson
Abstract:
Quality and excellence are qualitative topics that are usually addressed based on knowledge and past experience from leadership and those in charge of the organization. The significance of this study is to highlight the differences and similarities between these two mindsets and how operational staff can most appropriately use them in the workplace. Quality and excellence are two words that are talked about a lot in the manufacturing world. Buzzwords such as operational excellence, quality controls, and efficiencies are discussed in the boardroom as well as on the shop floor. These terms are used quite frequently and with good reason. When a person visits their favorite local restaurant, they go because 1) they like the food and 2) the people are some of the greatest individuals to be around. With that in mind, they know the restaurant always puts out quality food. They do not always go because the quality of the food is far superior to that of other restaurants, but the quality of the ingredients always meets their expectations. When they compare this to the term excellence, they are disappointed: the food never looks like the pictures on the menu. But when have you ever been to a restaurant where the food looks the same as on the menu? When evaluating which buzzword to use as a guiding star, it is simple: excellence. The corporation can accomplish these goals by operating at a standard that far exceeds customers' wants and needs.
Keywords: industrial engineering, innovation, management and technology, logistics and scheduling, six sigma
Procedia PDF Downloads 205
5353 Using Historical Data for Stock Prediction
Authors: Sofia Stoica
Abstract:
In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-nearest neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed the best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that using historical data is enough to obtain high accuracies, and a simple algorithm like linear regression has performance similar to more sophisticated models while taking less time and fewer resources to implement.
Keywords: finance, machine learning, opening price, stock market
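A minimal version of the linear-regression baseline can be framed as predicting the next close from a window of lagged closes. The random-walk series below stands in for the real five-year price history, and the lag length and split ratio are arbitrary rather than the paper's choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

# Synthetic daily closing prices standing in for five years of real stock data.
rng = np.random.default_rng(0)
prices = 150 + np.cumsum(rng.normal(0, 1.5, size=1250))

def make_lagged(series, n_lags=5):
    """Use the previous n_lags closes as features to predict the next close."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lagged(prices)
split = int(0.8 * len(X))                 # chronological split, no shuffling
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test MAPE: %.2f%%" % (100 * mean_absolute_percentage_error(y[split:], pred)))
```

Replacing LinearRegression with KNeighborsRegressor or a small Keras sequential model gives the other baselines mentioned in the abstract.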
Procedia PDF Downloads 190
5352 A Palmprint Identification System Based Multi-Layer Perceptron
Authors: David P. Tantua, Abdulkader Helwan
Abstract:
Biometrics has recently been used for human identification systems based on biological traits such as fingerprints and iris scans. Biometrics-based identification systems show great efficiency and accuracy in such human identification applications. However, these types of systems have so far been based on some image processing techniques only, which may decrease the efficiency of such applications. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn using a backpropagation learning algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing steps are as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and clearing unwanted components of the image. The second phase is to feed those processed images into a neural network classifier, which will adaptively learn and create a class for each different image. 100 different images are used for training the system. Since this is an identification system, it should be tested with the same images. Therefore, the same 100 images are used for testing it, and any image outside the training set should be unrecognized. The experimental results show that the developed system has a high accuracy of 100%, and it can be implemented in real-life applications.
Keywords: biometrics, biological traits, multi-layer perceptron neural network, image skeletonizing, edge detection using canny operator
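The stated pipeline (median filtering, skeletonizing, Canny edge detection, then an MLP trained by backpropagation) can be sketched as below. Random images and a 32x32 resolution stand in for the CASIA palmprint database, so the printed accuracy is meaningless; the structure of the pipeline is the point.

```python
import numpy as np
from skimage.feature import canny
from skimage.filters import median
from skimage.morphology import skeletonize
from sklearn.neural_network import MLPClassifier

def extract_features(img):
    """Median filtering, skeletonizing and Canny edge detection, flattened as features."""
    smoothed = median(img)                               # suppress salt-and-pepper noise
    skeleton = skeletonize(smoothed > smoothed.mean())   # thin the binarized palm pattern
    edges = canny(smoothed.astype(float))                # edge map from the Canny operator
    return np.concatenate([skeleton.ravel(), edges.ravel()]).astype(float)

# Random grayscale images as placeholders for the CASIA palmprint images.
rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))
labels = np.arange(100)                                  # one identity per image, as in the study

X = np.array([extract_features(img) for img in images])

# Multi-layer perceptron trained with backpropagation; tested on the training images,
# mirroring the closed-set identification protocol described in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X, labels)
print("identification accuracy on enrolled images:", clf.score(X, labels))
```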
Procedia PDF Downloads 371
5351 UniFi: Universal Filter Model for Image Enhancement
Authors: Aleksei Samarin, Artyom Nazarenko, Valentin Malykh
Abstract:
Image enhancement is becoming more and more popular, especially on mobile devices. Nowadays, a common approach is to enhance an image using a convolutional neural network (CNN). Such a network should be of significant size; otherwise, the possibility of artifacts occurring grows. The existing large CNNs are computationally expensive, which can be crucial for mobile devices. Another important flaw of such models is that they are poorly interpretable. There is another approach to image enhancement, namely the usage of predefined filters in combination with the prediction of their applicability. We present an approach following this paradigm which outperforms both existing CNN-based and filter-based approaches on the image enhancement task. It is easily adaptable for mobile devices since it has only 47 thousand parameters. It achieves the best SSIM of 0.919 on RANDOM250 (MIT Adobe FiveK) among small models and is three times faster than previous models.
Keywords: universal filter, image enhancement, neural networks, computer vision
Procedia PDF Downloads 101