Search results for: cost-reflective network pricing method
20045 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning
Authors: Yangzhi Li
Abstract:
Network transfer of information and performance customization is now a viable method of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to robot autonomous recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and dataset deep learning, have not yet been explored in the context of intelligent construction. Relevant transdisciplinary research in both theory and practice still has gaps. Optimizing high-performance computing and autonomous recognition visual guidance technologies improves the robot's grasp of the scene and capacity for autonomous operation. Intelligent vision guidance technology for industrial robots has a serious issue with camera calibration, and the use of intelligent visual guidance and identification technologies for industrial robots in industrial production has strict accuracy requirements. Visual recognition systems therefore still face precision challenges. Such imprecision directly impacts the effectiveness and quality of industrial production, so research on the positioning precision of visual guidance and recognition technology must be strengthened. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study will identify the position of target components by detecting the information at the boundary and corner of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. The RGB image's inclination detection and the depth image's verification will be used to determine the component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route will be constructed using the point cloud information. Keywords: robotic construction, robotic assembly, visual guidance, machine learning
Procedia PDF Downloads 86
20044 Concentric Circle Detection Based on Edge Pre-Classification and Extended RANSAC
Authors: Zhongjie Yu, Hancheng Yu
Abstract:
In this paper, we propose an effective method to detect concentric circles with imperfect edges. First, the gradient of each edge pixel is coded and a 2-D lookup table is built to speed up normal generation. Then we use an accumulator to estimate the rough center and collect plausible edges of concentric circles through gradient and distance. Next, we apply a contour-based method, which uses the intersection of contours and edges, to pre-classify the edges. Finally, we use the extended RANSAC method to find all the candidate circles. The center of the concentric circles is determined by the two circles with the highest concentricity. Experimental results demonstrate that the proposed method has both good performance and accuracy for the detection of concentric circles. Keywords: concentric circle detection, gradient, contour, edge pre-classification, RANSAC
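As a rough illustration of the circle-fitting stage named in the abstract, the sketch below shows a generic RANSAC circle fit in Python: sample three edge pixels, fit the circle through them, and keep the hypothesis with the most inliers. It is a minimal sketch of textbook RANSAC, not the authors' extended variant or their pre-classification step; the tolerance, iteration count and demo data are illustrative.

```python
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Return (cx, cy, r) of the circle through three non-collinear points, or None."""
    A = 2.0 * np.array([[p2[0] - p1[0], p2[1] - p1[1]],
                        [p3[0] - p1[0], p3[1] - p1[1]]])
    b = np.array([p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2,
                  p3[0]**2 - p1[0]**2 + p3[1]**2 - p1[1]**2])
    if abs(np.linalg.det(A)) < 1e-9:        # collinear sample: no unique circle
        return None
    cx, cy = np.linalg.solve(A, b)
    return cx, cy, float(np.hypot(p1[0] - cx, p1[1] - cy))

def ransac_circle(edge_pts, n_iter=500, tol=1.5, seed=0):
    """Fit one circle by RANSAC: sample 3 edge pixels, fit, count inliers, keep the best."""
    rng = np.random.default_rng(seed)
    best_fit, best_inliers = None, 0
    for _ in range(n_iter):
        sample = edge_pts[rng.choice(len(edge_pts), 3, replace=False)]
        fit = circle_from_3_points(*sample)
        if fit is None:
            continue
        cx, cy, r = fit
        dist = np.abs(np.hypot(edge_pts[:, 0] - cx, edge_pts[:, 1] - cy) - r)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best_fit, best_inliers = fit, inliers
    return best_fit, best_inliers

# Tiny demo: noisy edge pixels on a circle of radius 20 centred at (50, 40).
theta = np.linspace(0, 2 * np.pi, 200)
pts = np.c_[50 + 20 * np.cos(theta), 40 + 20 * np.sin(theta)]
pts += np.random.default_rng(1).normal(0, 0.3, pts.shape)
print(ransac_circle(pts))
```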
Procedia PDF Downloads 131
20043 Sustainability of Telecom Operators Orange-CI, MTN-CI, and MOOV Africa in Cote D’Ivoire
Authors: Odile Amoncou, Djedje-Kossu Zahui
Abstract:
The increased demand for digital communications during the COVID-19 pandemic has seen an unprecedented surge in new telecom infrastructure around the world. The expansion has been more remarkable in countries with developing telecom infrastructures. Particularly, the three telecom operators in Cote d’Ivoire, Orange CI, MTN CI, and MOOV Africa, have considerably scaled up their exploitation technologies and capacities in terms of towers, fiber optic installation, and customer service hubs. The trend will likely continue upward while expanding the carbon footprint of the Ivorian telecom operators. Therefore, the corporate social and environmental responsibilities of these telecommunication companies can no longer be overlooked. This paper assesses the sustainability of the three Ivorian telecommunication network operators by applying a combination of commonly used sustainability management indexes. These tools are streamlined and adapted to the relatively young and developing digital network of Cote D’Ivoire. We trust that this article will push the respective CEOs to make sustainability a top strategic priority and understand the substantial potential returns in terms of savings, new products, and new clients while improving their corporate image. In addition, good sustainability management can broaden their stakeholder base. Keywords: sustainability of telecom operators, sustainability management index, carbon footprint, digital communications
Procedia PDF Downloads 88
20042 Genetic Algorithm Based Deep Learning Parameters Tuning for Robot Object Recognition and Grasping
Authors: Delowar Hossain, Genci Capi
Abstract:
This paper addresses the problem of tuning deep learning (DL) parameters using a genetic algorithm (GA) in order to improve the performance of the DL method. We present a GA-based DL method for robot object recognition and grasping. The GA is used to optimize the DL parameters during the learning procedure with respect to a fitness function. After the evolution process finishes, we obtain the optimal set of DL parameters. To evaluate the performance of our method, we consider the object recognition and robot grasping tasks. Experimental results show that our method is efficient for robot object recognition and grasping. Keywords: deep learning, genetic algorithm, object recognition, robot grasping
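To make the GA-over-hyperparameters idea concrete, here is a minimal Python sketch of a generic genetic loop (truncation selection, uniform crossover, random mutation) over a small DL hyperparameter grid. The search space, operators and the dummy surrogate fitness are assumptions for illustration; in the paper the fitness would come from training and validating the actual recognition/grasping network.

```python
import random

# Hypothetical hyperparameter search space; the paper's actual genes will differ.
SPACE = {"lr": [1e-4, 1e-3, 1e-2], "filters": [16, 32, 64], "batch": [16, 32, 64]}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    """Stand-in for the expensive step: train the recognition/grasping network with
    these hyperparameters and return its validation accuracy. Dummy surrogate here."""
    return -abs(ind["lr"] - 1e-3) + ind["filters"] / 100.0

def evolve(pop_size=10, generations=5, mut_rate=0.2):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in SPACE}        # uniform crossover
            if random.random() < mut_rate:                                 # random mutation
                gene = random.choice(list(SPACE))
                child[gene] = random.choice(SPACE[gene])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # best hyperparameter set found by the GA
```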
Procedia PDF Downloads 353
20041 Pharmacokinetic Monitoring of Glimepiride and Ilaprazole in Rat Plasma by High Performance Liquid Chromatography with Diode Array Detection
Authors: Anil P. Dewani, Alok S. Tripathi, Anil V. Chandewar
Abstract:
The present manuscript reports the development and validation of a quantitative high performance liquid chromatography method for the pharmacokinetic evaluation of Glimepiride (GLM) and Ilaprazole (ILA) in rat plasma. The plasma samples were processed by solid-phase extraction (SPE). The analytes were resolved on a Phenomenex C18 column (4.6 mm × 250 mm; 5 µm particle size) using an isocratic elution mode comprising methanol:water (80:20 % v/v), with the pH of the water adjusted to 3 using formic acid. The total run time was 10 min at a common detection wavelength of 225 nm, and the flow rate throughout was 1 mL/min. The method was validated over the concentration range from 10 to 600 ng/mL for GLM and ILA in rat plasma. Metformin (MET) was used as the internal standard. Validation data demonstrated the method to be selective, sensitive, accurate and precise. The limits of detection were 1.54 and 4.08, and the limits of quantification were 5.15 and 13.62, for GLM and ILA respectively; the method demonstrated excellent linearity with correlation coefficients (r²) of 0.999. The intra- and inter-day precision (RSD%) values were < 2.0% for both ILA and GLM. The method was successfully applied in pharmacokinetic studies following oral administration in rats. Keywords: pharmacokinetics, glimepiride, ilaprazole, HPLC, SPE
Procedia PDF Downloads 369
20040 Moving Target Defense against Various Attack Models in Time Sensitive Networks
Authors: Johannes Günther
Abstract:
Time Sensitive Networking (TSN), standardized in the IEEE 802.1 standard, has received increasing attention in the context of mission-critical systems. Such mission-critical systems, e.g., in the automotive, aviation, industrial, and smart factory domains, are responsible for coordinating complex functionalities in real time. In many of these contexts, a reliable data exchange fulfilling hard time constraints and quality of service (QoS) conditions is of critical importance. TSN standards are able to provide guarantees for deterministic communication behaviour, which is in contrast to common best-effort approaches. Therefore, the superior QoS guarantees of TSN may aid in the development of new technologies which rely on low latencies and specific bandwidth demands being fulfilled. TSN extends existing Ethernet protocols with numerous standards, providing means for synchronization, management, and overall real-time focused capabilities. These additional QoS guarantees, as well as management mechanisms, lead to an increased attack surface for potential malicious attackers. As TSN guarantees certain deadlines for priority traffic, an attacker may degrade the QoS by delaying a packet beyond its deadline or even execute a denial of service (DoS) attack if the delays lead to packets being dropped. However, thus far, security concerns have not played a major role in the design of such standards. Thus, while TSN does provide valuable additional characteristics to existing common Ethernet protocols, it leads to new attack vectors on networks and allows for a range of potential attacks. One answer to these security risks is to deploy defense mechanisms according to a moving target defense (MTD) strategy. The core idea relies on the reduction of the attackers' knowledge about the network. Typically, mission-critical systems suffer from an asymmetric disadvantage. DoS or QoS-degradation attacks may be preceded by long periods of reconnaissance, during which the attacker may learn about the network topology, its characteristics, traffic patterns, priorities, bandwidth demands, periodic characteristics on links and switches, and so on. Here, we implemented and tested several MTD-like defense strategies against different attacker models of varying capabilities and budgets, as well as collaborative attacks of multiple attackers within a network, all within the context of TSN networks. We modelled the networks and tested our defense strategies on an OMNET++ testbench, with networks of different sizes and topologies, ranging from a couple of dozen hosts and switches to significantly larger set-ups. Keywords: network security, time sensitive networking, moving target defense, cyber security
Procedia PDF Downloads 73
20039 Computational Study of Chromatographic Behavior of a Series of S-Triazine Pesticides Based on Their in Silico Biological and Lipophilicity Descriptors
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
In this paper, quantitative structure-retention relationship (QSRR) analysis was applied in order to correlate in silico biological and lipophilicity molecular descriptors with retention values for the set of selected s-triazine herbicides. The in silico generated biological and lipophilicity descriptors were discriminated using the generalized pair correlation method (GPCM). According to this method, significant differences between independent variables can be detected even when they correlate almost equally with the dependent variable. Using the established multiple linear regression (MLR) models, some biological characteristics could be predicted. The established MLR models were evaluated statistically, and the most suitable models were selected and ranked using the sum of ranking differences (SRD) method. In this method, the averages of the experimentally obtained values are used as reference values. Additionally, using the SRD method, similarities among the investigated s-triazine herbicides can be observed. These analyses were conducted in order to characterize the selected s-triazine herbicides for future investigations regarding their biodegradability. This study is financially supported by COST action TD1305. Keywords: descriptors, generalized pair correlation method, pesticides, sum of ranking differences
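Since the ranking step is central here, the following Python sketch illustrates the basic sum of ranking differences (SRD) computation: each candidate model's ranking of the objects is compared with the ranking induced by the reference (here, row averages, as in the abstract). The toy prediction matrix is invented for illustration, and the sketch omits the usual randomization validation of SRD.

```python
import numpy as np
from scipy.stats import rankdata

def srd(values, reference):
    """Sum of ranking differences between one model's predictions and the reference."""
    return np.abs(rankdata(values) - rankdata(reference)).sum()

# Toy example: rows = s-triazine compounds, columns = candidate MLR models.
predictions = np.array([
    [0.62, 0.60, 0.70],
    [0.81, 0.85, 0.78],
    [0.43, 0.40, 0.52],
    [0.95, 0.90, 0.88],
])
reference = predictions.mean(axis=1)   # the paper uses averaged experimental values as reference
scores = {f"model_{j}": srd(predictions[:, j], reference) for j in range(predictions.shape[1])}
print(scores)   # the model with the smallest SRD is ranked best
```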
Procedia PDF Downloads 295
20038 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction
Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan
Abstract:
Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly. Medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful for reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients’ medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential in facilitating the management of a patient with a specific disease. Therefore, health interventions or lifestyle changes can be implemented based on these models to improve the health conditions of individuals at risk. Keywords: decision trees, neural network, myocardial infarction, data mining
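For readers who want to reproduce this kind of comparison, the hedged Python sketch below trains a neural network and a decision tree on a synthetic stand-in dataset and derives sensitivity, specificity and predictive values from the confusion matrix. The data, model settings and scikit-learn toolchain are assumptions for illustration; the study itself used SPSS and Clementine on real patient records.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the 350-record MI risk-factor dataset.
X, y = make_classification(n_samples=350, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"neural network": MLPClassifier(max_iter=2000, random_state=0),
          "decision tree": DecisionTreeClassifier(random_state=0)}

for name, model in models.items():
    y_hat = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    sensitivity = tp / (tp + fn)     # ability to detect the true MI cases
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    print(name, round(sensitivity, 2), round(specificity, 2), round(ppv, 2), round(npv, 2))
```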
Procedia PDF Downloads 429
20037 Spectral Analysis Approaches for Simultaneous Determination of Binary Mixtures with Overlapping Spectra: An Application on Pseudoephedrine Sulphate and Loratadine
Authors: Sara El-Hanboushy, Hayam Lotfy, Yasmin Fayez, Engy Shokry, Mohammed Abdelkawy
Abstract:
Simple, specific, accurate and precise spectrophotometric methods are developed and validated for the simultaneous determination of pseudoephedrine sulphate (PSE) and loratadine (LOR) in a combined dosage form based on spectral analysis techniques. Pseudoephedrine (PSE) in the binary mixture could be analyzed either by using its resolved zero-order absorption spectrum at its λmax of 256.8 nm after subtraction of the LOR spectrum, or in the presence of the LOR spectrum by the absorption correction method at 256.8 nm, the dual wavelength (DWL) method at 254 nm and 273 nm, the induced dual wavelength (IDWL) method at 256 nm and 272 nm, and the ratio difference (RD) method at 256 nm and 262 nm. Loratadine (LOR) in the mixture could be analyzed directly at 280 nm without any interference from the PSE spectrum, or at 250 nm using its recovered zero-order absorption spectrum obtained by constant multiplication (CM). In addition, simultaneous determination of PSE and LOR in their mixture could be achieved by the induced amplitude modulation (IAM) method coupled with amplitude multiplication (PM). Keywords: dual wavelength (DW), induced amplitude modulation method (IAM) coupled with amplitude multiplication (PM), loratadine, pseudoephedrine sulphate, ratio difference (RD)
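The dual-wavelength idea can be summarized numerically: pick two wavelengths at which the interfering component absorbs equally, so the absorbance difference of the mixture depends only on the analyte. The Python sketch below uses invented absorptivity coefficients and concentrations purely to show the cancellation; it is not the paper's calibration data.

```python
# Minimal numeric illustration of the dual-wavelength principle (illustrative numbers):
# at 254 nm and 273 nm the interferent (LOR) is assumed to absorb equally, so the
# absorbance difference of the mixture depends only on PSE.

eps_pse = {254: 0.040, 273: 0.010}   # hypothetical absorptivities (a.u. per conc. unit)
eps_lor = {254: 0.025, 273: 0.025}   # equal at the two chosen wavelengths

def absorbance(wl, c_pse, c_lor):
    """Beer's law for the binary mixture at wavelength wl (additive absorbances)."""
    return eps_pse[wl] * c_pse + eps_lor[wl] * c_lor

c_pse, c_lor = 20.0, 15.0            # "unknown" sample concentrations
delta_A = absorbance(254, c_pse, c_lor) - absorbance(273, c_pse, c_lor)

# The LOR terms cancel, so delta_A is linear in the PSE concentration alone:
slope = eps_pse[254] - eps_pse[273]
print(delta_A / slope)               # recovers c_pse = 20.0
```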
Procedia PDF Downloads 321
20036 Efficacy and Safety of Updated Target Therapies for Treatment of Platinum-Resistant Recurrent Ovarian Cancer
Authors: John Hang Leung, Shyh-Yau Wang, Hei-Tung Yip, Fion Ho Tsung-Chin, Agnes LF Chan
Abstract:
Objectives: Platinum-resistant ovarian cancer has a short overall survival of 9–12 months and limited treatment options. The combination of immunotherapy and targeted therapy appears to be a promising treatment option for patients with ovarian cancer, particularly for patients with platinum-resistant recurrent ovarian cancer (PRrOC). However, there are no direct head-to-head clinical trials comparing their efficacy and toxicity. We, therefore, used a network meta-analysis to directly and indirectly compare seven newer immunotherapies or targeted therapies combined with chemotherapy in platinum-resistant relapsed ovarian cancer, including antibody-drug conjugates, PD-1 (programmed death-1) and PD-L1 (programmed death-ligand 1) inhibitors, PARP (poly ADP-ribose polymerase) inhibitors, TKIs (tyrosine kinase inhibitors), and antiangiogenic agents. Methods: We searched the PubMed (Public/Publisher MEDLINE), EMBASE (Excerpta Medica Database), and Cochrane Library electronic databases for phase II and III trials involving PRrOC patients treated with immunotherapy or targeted therapy plus chemotherapy. The quality of the included trials was assessed using the GRADE method. The primary outcome compared was progression-free survival; the secondary outcomes were overall survival and safety. Results: Seven randomized controlled trials involving a total of 2058 PRrOC patients were included in this analysis. Bevacizumab plus chemotherapy showed statistically significant differences in PFS (progression-free survival) but not OS (overall survival) compared with all targeted and immunotherapy regimens of interest; however, according to the heatmap analysis, bevacizumab plus chemotherapy had a statistically significant risk of ≥grade 3 SAEs (severe adverse effects), particularly hematological severe adverse events (neutropenia, anemia, leukopenia, and thrombocytopenia). Conclusions: Bevacizumab plus chemotherapy resulted in better PFS compared with all the regimens of interest for the treatment of PRrOC. However, there were statistically significant differences in SAEs, as bevacizumab plus chemotherapy was associated with a greater risk of hematological SAEs. Keywords: platinum-resistant recurrent ovarian cancer, network meta-analysis, immune checkpoint inhibitors, target therapy, antiangiogenic agents
Procedia PDF Downloads 80
20035 The Shannon Entropy and Multifractional Markets
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
Introduced by Shannon in 1948 in the field of information theory as the average rate at which information is produced by a stochastic set of data, the concept of entropy has gained much attention as a measure of the uncertainty and unpredictability associated with a dynamical system, eventually described by a stochastic process. In particular, the Shannon entropy measures the degree of order/disorder of a given signal and provides useful information about the underlying dynamical process. It has found widespread application in a variety of fields, such as cryptography, statistical physics and finance. In this regard, many contributions have employed different measures of entropy in an attempt to characterize financial time series in terms of market efficiency, market crashes and/or financial crises. The Shannon entropy has also been considered as a measure of the risk of a portfolio or as a tool in asset pricing. This work investigates the theoretical link between the Shannon entropy and the multifractional Brownian motion (mBm), a stochastic process which has recently been the focus of renewed interest in finance as a driving model of stochastic volatility. In particular, after exploring the current state of research in this area and highlighting some of the key results and open questions that remain, we show a well-defined relationship between the Shannon (log)entropy and the memory function H(t) of the mBm. In detail, we allow both the length of the time series and the time scale to vary across the analysis to study how the relation changes. On the one hand, applications are developed after generating surrogates of mBm trajectories based on different memory functions; on the other hand, an empirical analysis of several international stock indexes, which confirms the previous results, concludes the work. Keywords: Shannon entropy, multifractional Brownian motion, Hurst–Holder exponent, stock indexes
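As a small, self-contained illustration of the quantities involved, the Python sketch below generates fractional Brownian motion surrogates with a fixed Hurst exponent (an mBm would let H vary over time) and estimates the Shannon differential entropy of their increments with a histogram estimator. The generation method (Cholesky of the fBm covariance), sample sizes and estimator are illustrative choices, not the paper's procedure.

```python
import numpy as np

def fbm(n, H, T=1.0, seed=None):
    """Fractional Brownian motion on (0, T] via Cholesky of its covariance matrix
    (fixed Hurst exponent H; an mBm would let H vary with time)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return L @ rng.standard_normal(n)

def shannon_entropy(x, bins=30):
    """Histogram (plug-in) estimate of the differential Shannon entropy, in nats."""
    counts, edges = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum() + np.log(edges[1] - edges[0])

# Entropy of the increments for two memory levels: at a fixed sampling step, rougher
# paths (lower H) have larger increments and hence higher differential entropy.
for H in (0.3, 0.7):
    path = fbm(1024, H, seed=1)
    print(H, round(shannon_entropy(np.diff(path)), 3))
```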
Procedia PDF Downloads 110
20034 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models
Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan
Abstract:
Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between hearing- and speech-impaired communities and those who do not understand sign language. Due to the increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) has been used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features from each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with more videos from the healthcare domain, and the model is evaluated on this extended dataset. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies. Keywords: sign language recognition, deep learning, convolutional neural network, recurrent neural network
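The CNN-per-frame plus RNN-over-time pattern described above can be written down compactly; the PyTorch sketch below is a minimal stand-in with placeholder layer sizes, frame counts and number of sign classes, and is not the architecture or hyperparameters used in the paper.

```python
import torch
import torch.nn as nn

class CNNRNNSignClassifier(nn.Module):
    """Per-frame CNN features + LSTM over the frame sequence (sizes are placeholders)."""
    def __init__(self, n_classes=50, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                      # spatial features per frame
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)   # temporal modelling
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                          # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.rnn(feats)
        return self.head(h_n[-1])                      # logits over sign classes

# Smoke test on a random clip batch: 2 clips, 16 frames, 112x112 RGB.
logits = CNNRNNSignClassifier()(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)   # torch.Size([2, 50])
```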
Procedia PDF Downloads 28
20033 Collocation Method for Coupled System of Boundary Value Problems with Cubic B-Splines
Authors: K. N. S. Kasi Viswanadham
Abstract:
Coupled systems of second-order linear and nonlinear boundary value problems occur in various fields of science and engineering. In the formulation of the problem, any one of 81 possible types of boundary conditions may occur. These 81 possible boundary conditions are written as a combination of four boundary conditions. To solve a coupled system of boundary value problems with these converted boundary conditions, a collocation method with cubic B-splines as basis functions has been developed. In the collocation method, the mesh points of the space variable domain have been selected as the collocation points. The basis functions have been redefined into a new set of basis functions which match in number the mesh points in the space variable domain. The solution of a nonlinear boundary value problem has been obtained as the limit of a sequence of solutions of linear boundary value problems generated by the quasilinearization technique. Several linear and nonlinear boundary value problems are presented to test the efficiency of the proposed method, and it is found that the numerical results obtained by the present method are in good agreement with the exact solutions available in the literature. Keywords: collocation method, coupled system, cubic b-splines, mesh points
Procedia PDF Downloads 209
20032 An Approach for Pattern Recognition and Prediction of Information Diffusion Model on Twitter
Authors: Amartya Hatua, Trung Nguyen, Andrew Sung
Abstract:
In this paper, we study the information diffusion process on Twitter as a multivariate time series problem. Our model concerns three measures (volume, network influence, and sentiment of tweets) based on 10 features, and we collected 27 million tweets to build our information diffusion time series dataset for analysis. Then, different time series clustering techniques with Dynamic Time Warping (DTW) distance were used to identify different patterns of information diffusion. Finally, we built the information diffusion prediction models for new hashtags, which comprise two phases: the first phase is recognizing the pattern using k-NN with DTW distance; the second phase is building the forecasting model using the traditional Autoregressive Integrated Moving Average (ARIMA) model and the non-linear recurrent neural network of Long Short-Term Memory (LSTM). Preliminary results of performance evaluation between different forecasting models show that LSTM with clustering information notably outperforms other models. Therefore, our approach can be applied in real-world applications to analyze and predict the information diffusion characteristics of selected topics or memes (hashtags) in Twitter. Keywords: ARIMA, DTW, information diffusion, LSTM, RNN, time series clustering, time series forecasting, Twitter
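The first phase, pattern recognition under the DTW distance, can be sketched in a few lines of Python. The implementation below is a plain dynamic-programming DTW and a 1-nearest-neighbour assignment to cluster prototypes; the prototypes and the toy series are invented, and the forecasting phase (ARIMA/LSTM) is omitted.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nearest_pattern(new_series, cluster_prototypes):
    """Phase 1: assign a new hashtag's early time series to the closest diffusion
    pattern (k-NN with k=1 under the DTW distance)."""
    dists = {name: dtw(new_series, proto) for name, proto in cluster_prototypes.items()}
    return min(dists, key=dists.get)

# Toy prototypes for two diffusion shapes (burst-and-decay vs. slow build); illustrative only.
prototypes = {"burst": np.array([1, 9, 7, 4, 2, 1, 1], float),
              "slow_build": np.array([1, 2, 3, 4, 5, 6, 7], float)}
print(nearest_pattern(np.array([2, 8, 6, 3, 2, 1], float), prototypes))   # -> "burst"
```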
Procedia PDF Downloads 391
20031 Traditional Dyeing of Silk with Natural Dyes by Eco-Friendly Method
Authors: Samera Salimpour Abkenar
Abstract:
In traditional dyeing of natural fibers with natural dyes, metal salts are commonly used to increase color stability. This method always carries the risk of environmental pollution (contamination of arable soils and fresh groundwater) due to the release of dyeing effluents containing large amounts of metal. Therefore, researchers are always looking for new methods to obtain a green dyeing system. In this research, the use of the enzymatic dyeing method to prevent environmental pollution with metals and reduce production costs has been proposed. After degumming and bleaching, raw silk fabrics were dyed with natural dyes (madder and sumac) by three methods (pre-mordanting with a metal salt, one-step enzymatic dyeing, and two-step enzymatic dyeing). Results show that silk dyed with natural dyes by the enzymatic method has higher color strength and colorfastness than that pretreated with a metal salt. Also, the amount of residual dye in the dyeing wastewater is significantly reduced by the enzymatic method. It is found that the enzymatic dyeing method leads to improved dye absorption, color strength and soft hand, no change in color shade, lower production costs (due to the low dyeing temperature), and a significant reduction in environmental pollution. Keywords: eco-friendly, natural dyes, silk, traditional dyeing
Procedia PDF Downloads 190
20030 Marketing of Non Timber Forest Products and Forest Management in Kaffa Biosphere Reserve, Ethiopia
Authors: Amleset Haile
Abstract:
Non-timber forest products (NTFPs) are harvested for both subsistence and commercial use and play a key role in the livelihoods of millions of rural people. NTFPs are an important source of household income in rural southwest Ethiopia, Kaffa. Market players at various levels in the marketing chains were interviewed to gather information on elements of the marketing system: products, product differentiation, value addition, pricing, promotion, distribution, and marketing chains. The study, therefore, was conducted in the Kaffa Biosphere Reserve of southwest Ethiopia with the main objectives of assessing and analyzing the contribution of NTFPs to rural livelihoods and to the conservation of the biosphere reserve, and of identifying factors influencing the marketing of NTFPs. Five villages were selected based on their proximity gradient from Bonga town and the availability of NTFPs. A formal survey was carried out on rural households selected using stratified random sampling. The results indicate that local people practice diverse livelihood activities for survival, mainly crop cultivation (cereals and cash crops) and livestock husbandry, the gathering of forest products, and off-farm/off-forest activities. NTFP trade is not a common phenomenon in southwest Ethiopia. The greatest opportunity exists for local-level marketing of spices and other non-timber forest products. Very little local value addition takes place within the region, and as a result local market players have little control. Policy interventions are required to enhance the returns to local collectors, which will also contribute to the sustainable management of forest resources in the Kaffa Biosphere Reserve. Keywords: forest management, biosphere reserve, marketing, local people
Procedia PDF Downloads 540
20029 Planning Strategy for Sustainable Transportation in Heritage Areas
Authors: Hassam Hassan Elborombaly
Abstract:
The pollution generated by transportation modes and traffic congestion in heritage areas has led to the deterioration of historic buildings and of the urban heritage in historic cities. Accordingly, this paper attempts to diagnose the transport and traffic problems in historic cities in general and in heritage cities in particular, and to investigate methods for conserving the urban heritage from the negative effects of traffic congestion and of traditional road modes of transportation. It also attempts to explore possible areas for intervention to mitigate transportation and traffic problems in the light of the principles of the sustainable transportation framework. It aims to draw conclusions and propose recommendations that would increase the efficiency and effectiveness of transportation plans in historic Cairo and consequently achieve sustainable transportation. Problems: In historic cities, public paths compose an irregular network enclosing large residential plots (defined as super blocks, quarters or hettas). The blocks represent the basic morphological units in historic cities. Each super block incorporates several uses (i.e., residential, non-residential, service uses and others). Local paths reach the interior of the super blocks through (a) an organized inner core, which deals mainly with residential functions mixed with handicraft activities and is composed of several local path units, and (b) the other core, which is bound by the public paths and contains a combination of residential, commercial and social activities. Objectives: 1- To provide amenity, convenience and comfort for visitors and people who live and work in the area; pedestrianization, accessibility and safety are to be reinforced while respecting the organic urban pattern. 2- To enhance street life, vitality and activity in order to attract people and increase economic prosperity. Research contents: the relation between residential areas and transportation in the inner core; analytical studies of historic areas in heritage cities; sustainable transportation planning in heritage cities; a dynamic and flexible methodology for achieving a sustainable transportation network for heritage cities; results and recommendations. Keywords: irregular network, public paths, sustainable transportation, urban heritage
Procedia PDF Downloads 532
20028 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts
Authors: Ş. Karabulut, A. Güllü, A. Güldaş, R. Gürbüz
Abstract:
This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness values of the specimens after the face milling process. Experimental data was collected and imported to the artificial neural network model. A multilayer perceptron model was used with the back propagation algorithm employing the input parameters of lead angle, cutting speed and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. Artificial neural network and regression analysis were used to predict surface roughness. The values thus predicted were compared with the collected experimental data, and the corresponding percentage error was computed. Analysis results revealed that the lead angle is the dominant factor affecting surface roughness. Experimental results indicated an improvement in the surface roughness value with decreasing lead angle value from 88° to 45°.Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis
Procedia PDF Downloads 448
20027 Fault Location Identification in High Voltage Transmission Lines
Authors: Khaled M. El Naggar
Abstract:
This paper introduces a digital method for fault section identification in transmission lines. The method uses a digitized set of the measured short-circuit current to locate faults in electrical power systems. The digitized current is used to construct an overdetermined system of equations. The problem is then formulated and solved using the proposed digital optimization technique to find the fault distance. The proposed optimization methodology is an application of the simulated annealing optimization technique. The method is tested using a practical case study to evaluate its performance. The accurate results obtained show that the algorithm can be used as a powerful tool in the area of power system protection. Keywords: optimization, estimation, faults, measurement, high voltage, simulated annealing
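Since the optimization step is standard simulated annealing, a generic Python sketch is given below: it minimizes a residual function of the candidate fault distance with geometric cooling. The residual here is a toy placeholder; in the paper it would be the mismatch of the overdetermined current equations, which are not reproduced.

```python
import math
import random

def simulated_annealing(residual, lo, hi, n_iter=5000, T0=1.0, alpha=0.999, seed=0):
    """Minimise residual(x) over [lo, hi]: accept worse moves with probability
    exp(-delta/T) and cool the temperature geometrically."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    f = residual(x)
    best_x, best_f, T = x, f, T0
    for _ in range(n_iter):
        cand = min(hi, max(lo, x + rng.gauss(0, 0.05 * (hi - lo))))   # local random move
        fc = residual(cand)
        if fc < f or rng.random() < math.exp(-(fc - f) / max(T, 1e-12)):
            x, f = cand, fc
            if f < best_f:
                best_x, best_f = x, f
        T *= alpha
    return best_x, best_f

# Toy placeholder residual with its minimum at 42.5 km; in the paper this would be the
# sum of squared mismatches of the overdetermined short-circuit current equations.
print(simulated_annealing(lambda d: (d - 42.5) ** 2, 0.0, 100.0))
```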
Procedia PDF Downloads 393
20026 Performance Evaluation of Hierarchical Location-Based Services Coupled to the Greedy Perimeter Stateless Routing Protocol for Wireless Sensor Networks
Authors: Rania Khadim, Mohammed Erritali, Abdelhakim Maaden
Abstract:
Nowadays, wireless sensor networks have attracted worldwide research and industrial interest because they can be applied in various areas. Geographic routing protocols are very well suited to those networks because they use location information when they need to route packets. Obviously, location information is maintained by location-based services provided by network nodes in a distributed way. In this paper, we choose to evaluate the performance of two hierarchical rendezvous location-based services, GLS (Grid Location Service) and HLS (Hierarchical Location Service), coupled to the GPSR (Greedy Perimeter Stateless Routing) protocol for wireless sensor networks. The simulations were performed using the NS2 simulator to evaluate the performance and power of the two services in terms of location overhead, request travel time (RTT) and query success ratio (QSR). This work also presents a new scalability study of both GLS and HLS; specifically, what happens as the number of nodes N increases. The study focuses on three qualitative metrics: the location maintenance cost, the location query cost and the storage cost. Keywords: location-based services, routing protocols, scalability, wireless sensor networks
Procedia PDF Downloads 372
20025 Robust Adaptation to Background Noise in Multichannel C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev, Viktor M. Denisov
Abstract:
A robust sequential nonparametric method is proposed for real-time adaptation to background noise parameters. The distribution of the background noise was modelled as a Huber contamination mixture. The method is designed to operate as an adaptation unit included inside the detection subsystem of an integrated multichannel monitoring system. The proposed method guarantees a given size of the non-asymptotic confidence set for the noise parameters. Properties of the suggested method are rigorously proved. The proposed algorithm has been successfully tested in real conditions of a functioning C-OTDR monitoring system, which was designed to monitor railways. Keywords: guaranteed estimation, multichannel monitoring systems, non-asymptotic confidence set, contamination mixture
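To give a feel for why robust estimators are used under a Huber contamination model, the short Python sketch below contrasts the classical standard deviation with a median/MAD scale estimate on simulated contaminated noise. The mixture parameters are invented, and the sketch is not the sequential, confidence-set algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Huber contamination mixture: (1 - eps) * N(0, 1) clean noise + eps * N(0, 10^2) outliers.
eps, n = 0.1, 5000
clean = rng.normal(0.0, 1.0, n)
outliers = rng.normal(0.0, 10.0, n)
noise = np.where(rng.random(n) < eps, outliers, clean)

# Classical vs. robust estimates of the background-noise scale.
classical_std = noise.std()
mad = np.median(np.abs(noise - np.median(noise)))
robust_std = 1.4826 * mad          # MAD rescaled for consistency at the Gaussian

print(round(classical_std, 2), round(robust_std, 2))
# The classical estimate is inflated by the contamination; the robust one stays near 1.
```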
Procedia PDF Downloads 430
20024 Spectrum Allocation Using Cognitive Radio in Wireless Mesh Networks
Authors: Ayoub Alsarhan, Ahmed Otoom, Yousef Kilani, Abdel-Rahman al-GHuwairi
Abstract:
Wireless mesh networks (WMNs) have emerged recently to improve internet access and other networking services. WMNs provide network access to the clients and other networking functions such as routing, and packet forwarding. Spectrum scarcity is the main challenge that limits the performance of WMNs. Cognitive radio is proposed to solve spectrum scarcity problem. In this paper, we consider a cognitive wireless mesh network where unlicensed users (secondary users, SUs) can access free spectrum that is allocated to spectrum owners (primary users, PUs). Although considerable research has been conducted on spectrum allocation, spectrum assignment is still considered an important challenging problem. This problem can be solved using cognitive radio technology that allows SUs to intelligently locate free bands and access them without interfering with PUs. Our scheme considers several heuristics for spectrum allocation. These heuristics include: channel error rate, PUs activities, channel capacity and channel switching time. Performance evaluation of the proposed scheme shows that the scheme is able to allocate the unused spectrum for SUs efficiently.Keywords: cognitive radio, dynamic spectrum access, spectrum management, spectrum sharing, wireless mesh networks
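One way to read the heuristics listed above (channel error rate, PU activity, capacity, switching time) is as terms in a weighted channel score by which an SU could rank free bands. The Python sketch below does exactly that with made-up weights and channel figures; it is an illustration of the idea, not the allocation scheme evaluated in the paper.

```python
# Illustrative ranking of free channels for a secondary user using the heuristics
# named in the abstract; the weights and channel figures are invented for the sketch.
channels = [
    {"id": 1, "error_rate": 0.02, "pu_activity": 0.30, "capacity_mbps": 6.0, "switch_ms": 4.0},
    {"id": 2, "error_rate": 0.10, "pu_activity": 0.05, "capacity_mbps": 8.0, "switch_ms": 9.0},
    {"id": 3, "error_rate": 0.01, "pu_activity": 0.60, "capacity_mbps": 5.0, "switch_ms": 2.0},
]
weights = {"error_rate": -4.0, "pu_activity": -3.0, "capacity_mbps": 0.5, "switch_ms": -0.1}

def score(ch):
    """Higher is better: reward capacity, penalise errors, PU activity and switching time."""
    return sum(weights[k] * ch[k] for k in weights)

best = max(channels, key=score)
print(sorted((round(score(c), 2), c["id"]) for c in channels), "->", best["id"])
```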
Procedia PDF Downloads 529
20023 On the Bootstrap P-Value Method in Identifying Out of Control Signals in Multivariate Control Chart
Authors: O. Ikpotokin
Abstract:
In any production process, every product is expected to attain a certain standard, but the presence of assignable causes of variability affects the process, thereby leading to low product quality. The ability to identify and remove this type of variability reduces its overall effect, thereby improving the quality of the product. In the case of a univariate control chart signal, it is easy to detect the problem and give a solution since it is related to a single quality characteristic. However, the problems involved in the use of a multivariate control chart are the violation of the multivariate normality assumption and the difficulty in identifying the quality characteristic(s) that resulted in the out-of-control signals. The purpose of this paper is to examine the use of a non-parametric control chart (the bootstrap approach) for obtaining a control limit, to overcome the problem of the multivariate distributional assumption, and the p-value method for detecting out-of-control signals. Results from a performance study show that the proposed bootstrap method enables the setting of a control limit that enhances the detection of out-of-control signals, while the p-value method also improves the identification of out-of-control variables. Keywords: bootstrap control limit, p-value method, out-of-control signals, p-value, quality characteristics
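A hedged sketch of the general idea follows: compute a Hotelling-type T² statistic for in-control (Phase I) data, set an upper control limit from a bootstrap of those statistics instead of the F-distribution, and attach an empirical p-value to a new observation. The simulated data, quantile level and number of resamples are illustrative, and the sketch does not reproduce the paper's specific procedure for isolating the responsible variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase I in-control data: 200 observations on 3 quality characteristics (simulated).
X = rng.normal(size=(200, 3))
mean, cov_inv = X.mean(axis=0), np.linalg.inv(np.cov(X, rowvar=False))

def t2(x):
    """Hotelling-type T^2 distance of one observation from the in-control centre."""
    d = x - mean
    return float(d @ cov_inv @ d)

# Bootstrap control limit: resample the in-control statistics and take a high quantile,
# avoiding the multivariate-normal assumption behind the usual F-based limit.
stats = np.array([t2(x) for x in X])
boot = np.array([np.quantile(rng.choice(stats, size=len(stats), replace=True), 0.995)
                 for _ in range(2000)])
ucl = boot.mean()

new_obs = np.array([2.5, -2.0, 1.8])              # incoming observation to monitor
p_value = (stats >= t2(new_obs)).mean()           # empirical p-value vs. in-control data
print(round(ucl, 2), round(t2(new_obs), 2), round(p_value, 4), t2(new_obs) > ucl)
```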
Procedia PDF Downloads 347
20022 Numerical Method for Heat Transfer Problem in a Block Having an Interface
Authors: Beghdadi Lotfi, Bouziane Abdelhafid
Abstract:
A finite volume method for quadrilateral unstructured meshes is developed to predict two-dimensional steady-state solutions of the conduction equation. In this scheme, based on the integration around the polygonal control volume, the derivatives of the conduction equation are converted into closed line integrals using a formulation of the Stokes theorem. To validate the accuracy of the method, two numerical experiments are used: conduction in a regular block (with a known analytical solution) and conduction in a rotated block (a case with curved boundaries). The numerical results show good agreement with the analytical results. To demonstrate the accuracy of the method, the absolute and root-mean-square errors versus the grid size are examined quantitatively. Keywords: Stokes theorem, unstructured grid, heat transfer, complex geometry
Procedia PDF Downloads 290
20021 Method of Visual Prosthesis Design Based on Biologically Inspired Design
Authors: Shen Jian, Hu Jie, Zhu Guo Niu, Peng Ying Hong
Abstract:
There are two issues with the traditional visual prosthesis: the lack of a systematic design method and a low level of humanization. To tackle those obstacles, a visual prosthesis design method based on biologically inspired design is proposed. Firstly, a constrained FBS knowledge cell model is applied to construct the functional model of the visual prosthesis in the biological field. Then the clustering results of the engineering domain are obtained with the use of the cross-domain knowledge cell clustering algorithm. Finally, a prototype system is designed to support the biologically inspired design, where conflicts are resolved by TRIZ and other tools, and the validity of the method is verified by the solution scheme. Keywords: knowledge-based engineering, visual prosthesis, biologically inspired design, biomedical engineering
Procedia PDF Downloads 192
20020 The Communication Library DIALOG for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
Modern experiments in high energy physics impose great demands on the reliability, the efficiency, and the data rate of Data Acquisition Systems (DAQ). This contribution focuses on the development and deployment of the new communication library DIALOG for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. The iFDAQ, utilizing a hardware event builder, is designed to be able to read out data at the maximum rate of the experiment. The DIALOG library is a communication system for both distributed and mixed environments; it provides a network-transparent inter-process communication layer. Using the high-performance and modern C++ framework Qt and its Qt Network API, the DIALOG library presents an alternative to the previously used DIM library. The DIALOG library was fully incorporated into all processes in the iFDAQ during the 2016 run. From the software point of view, it might be considered a significant improvement of the iFDAQ in comparison with the previous run. To extend the possibilities of debugging, the online monitoring of communication among processes via the DIALOG GUI is a desirable feature. In the paper, we present the DIALOG library from several perspectives and discuss it in detail. Moreover, an efficiency measurement and a comparison with the DIM library with respect to the iFDAQ requirements are provided. Keywords: data acquisition system, DIALOG library, DIM library, FPGA, Qt framework, TCP/IP
Procedia PDF Downloads 316
20019 Method Validation for Determining Platinum and Palladium in Catalysts Using Inductively Coupled Plasma Optical Emission Spectrometry
Authors: Marin Senila, Oana Cadar, Thorsten Janisch, Patrick Lacroix-Desmazes
Abstract:
The study presents the analytical capability and validation of a method based on microwave-assisted acid digestion for the quantitative determination of platinum and palladium in catalysts using inductively coupled plasma optical emission spectrometry (ICP-OES). In order to validate the method, the main figures of merit, such as limit of detection, limit of quantification, precision and accuracy, were considered, and the measurement uncertainty was estimated based on the bottom-up approach according to the international guidelines of ISO/IEC 17025. Limits of detection, estimated from the blank signal using the 3s criterion, were 3.0 mg/kg for Pt and 3.6 mg/kg for Pd, while limits of quantification were 9.0 mg/kg for Pt and 10.8 mg/kg for Pd. Precisions, evaluated as standard deviations of repeatability (n=5 parallel samples), were less than 10% for both precious metals. Accuracies of the method, verified by recovery estimation using the certified reference material NIST SRM 2557 (pulverized recycled monolith), were 99.4% for Pt and 101% for Pd. The obtained limits of quantification and accuracies were satisfactory for the intended purpose. The paper offers all the steps necessary to validate the determination method for Pt and Pd in catalysts using inductively coupled plasma optical emission spectrometry. Keywords: catalyst analysis, ICP-OES, method validation, platinum, palladium
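The figures of merit reported above follow standard formulas, which the short Python sketch below applies to invented numbers: LOD from the 3s blank criterion, LOQ from a common 10s convention (conventions vary), recovery against a certified value, and precision as RSD%. The raw readings and calibration slope are not the paper's data.

```python
import statistics

# Illustrative raw numbers (not the paper's data): ten blank readings and a
# calibration slope in intensity units per (mg/kg).
blank_signal = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7, 10.4, 10.1]
slope = 0.25

s_blank = statistics.stdev(blank_signal)
lod = 3 * s_blank / slope            # 3s criterion mentioned in the abstract
loq = 10 * s_blank / slope           # a common 10s convention (conventions vary)

# Accuracy as recovery against a certified reference value, precision as RSD%.
measured = [101.2, 99.5, 100.8, 98.9, 100.3]
certified = 100.0
recovery_pct = statistics.mean(measured) / certified * 100
rsd_pct = statistics.stdev(measured) / statistics.mean(measured) * 100
print(round(lod, 2), round(loq, 2), round(recovery_pct, 1), round(rsd_pct, 1))
```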
Procedia PDF Downloads 167
20018 Preservation of Coconut Toddy Sediments as a Leavening Agent for Bakery Products
Authors: B. R. Madushan, S. B. Navaratne, I. Wickramasinge
Abstract:
Toddy sediment (TS) was cultured in a PDA medium to determine the initial yeast load, and it also underwent sun, shade, solar, dehumidified cold air (DCA) and hot air oven (at 40, 50 and 60 °C) drying with a view to preserving the viability of the yeast. Thereafter, the study was conducted according to a two-factor factorial design in order to determine the best preservation method. Therein, the dried TS from the best drying method was taken and divided into two portions. One portion was mixed at a 3:7 ratio of TS:rice flour, and the mixture was divided into two again. While one portion was kept under in-house conditions, the other was kept in a refrigerator. The same procedure was followed for the remaining portion of TS, but with corn flour at the same ratio. All treatments were vacuum packed in triple laminate pouches, and the best preservation method was determined in terms of the leavening index (LI). The TS obtained from the best preservation method was used to make foods (bread and hoppers), and its organoleptic properties were evaluated against those of ordinary foods using a sensory panel with a five-point hedonic scale. Results revealed that the yeast load of fresh TS was 58×10⁶ CFU/g. The best drying method for preserving the viability of the yeast was DCA, because the LI of this treatment (96%) is higher than that of the other three treatments. The organoleptic properties of foods prepared from the best preservation method are the same as those of ordinary foods according to the duo-trio test. Keywords: biological leavening agent, coconut toddy, fermentation, yeast
Procedia PDF Downloads 343
20017 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult, with limited information about the nature of the attack. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), since current IDSs have some limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out the suspicious events but do not show how the events relate to each other or which event possibly causes another event to happen. Because of this, it is important to investigate new methods capable of performing the attack-trajectory tracking task quickly with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given the time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect the relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect the causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts a significant amount of time, if they could be obtained at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, for more than 85% of causal pairs, the average time difference between the cause and effect events is within 5 minutes in both the computed and observed data. This result can be used as a preventive measure against future attacks. Although the forecast may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks. Keywords: causality, multilevel graph, cyber-attacks, prediction
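As a toy proxy for the kind of dependence the framework tests for, the Python sketch below checks whether events of type B tend to occur within a five-minute window after events of type A and compares the observed score with a permutation baseline. The event streams are simulated and the test is a simplification; the paper's framework uses richer features (e.g., port numbers) and proper conditional independence tests.

```python
import numpy as np

rng = np.random.default_rng(7)

def lead_score(times_a, times_b, window=300.0):
    """Fraction of B events that occur within `window` seconds after the most recent A event."""
    times_a = np.sort(times_a)
    hits = 0
    for t in times_b:
        idx = np.searchsorted(times_a, t) - 1
        if idx >= 0 and 0.0 <= t - times_a[idx] <= window:
            hits += 1
    return hits / len(times_b)

def permutation_pvalue(times_a, times_b, horizon, n_perm=500):
    """Compare the observed lead score with scores for uniformly shuffled B times."""
    observed = lead_score(times_a, times_b)
    null = [lead_score(times_a, rng.uniform(0, horizon, len(times_b))) for _ in range(n_perm)]
    return observed, float(np.mean([s >= observed for s in null]))

# Toy event streams over one day: B tends to follow A by roughly 1-4 minutes.
horizon = 86400.0
times_a = np.sort(rng.uniform(0, horizon, 60))
times_b = np.sort(times_a[:40] + rng.uniform(60, 240, 40))
print(permutation_pvalue(times_a, times_b, horizon))   # high score, near-zero p-value
```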
Procedia PDF Downloads 156
20016 The Application of the Analytic Basis Function Expansion Triangular-z Nodal Method for Neutron Diffusion Calculation
Authors: Kunpeng Wang, Hongchun Wu, Liangzhi Cao, Chuanqi Zhao
Abstract:
The distributions of the homogeneous neutron flux within a node were expanded into a set of analytic basis functions which satisfy the diffusion equation at any point in a triangular-z node for each energy group, and nodes were coupled with each other through both the zero- and first-order partial neutron current moments across all the interfaces of the triangular prism at the same time. Based on this method, a code, TABFEN, has been developed and applied to solve the neutron diffusion equation in complicated geometries. In addition, after a series of numerical derivations, one can obtain the adjoint neutron diffusion equations in matrix form, which have the same form as the neutron diffusion equation; therefore, they can be solved by TABFEN, and the low-high scan strategy is adopted to improve efficiency. Four benchmark problems are tested by this method to verify its feasibility; the results show good agreement with the references, which demonstrates the efficiency and feasibility of this method. Keywords: analytic basis function expansion method, arbitrary triangular-z node, adjoint neutron flux, complicated geometry
Procedia PDF Downloads 445