Search results for: Fisher-Yates algorithm
385 Hybridization of Manually Extracted and Convolutional Features for Classification of Chest X-Ray of COVID-19
Authors: M. Bilal Ishfaq, Adnan N. Qureshi
Abstract:
COVID-19 is a highly infectious disease that was first reported in Wuhan, the capital city of Hubei province in China, and then spread rapidly throughout the world. On 11 March 2020, the World Health Organisation (WHO) declared it a pandemic. Since COVID-19 is highly contagious, it has affected approximately 219 million people worldwide and caused 4.55 million deaths, bringing the importance of accurate diagnosis of respiratory diseases such as pneumonia and COVID-19 to the forefront. In this paper, we propose a hybrid approach for the automated detection of COVID-19 using medical imaging, presenting the hybridization of manually extracted and convolutional features. Our approach combines Haralick texture features and convolutional features extracted from chest X-rays and CT scans. We also employ the minimum redundancy maximum relevance (MRMR) feature selection algorithm to reduce computational complexity and enhance classification performance. The proposed model is evaluated on four publicly available datasets: Chest X-ray Pneumonia, COVID-19 Pneumonia, COVID-19 CTMaster, and VinBig. The results demonstrate high accuracy and effectiveness, with accuracies of 0.9925 on the Chest X-ray Pneumonia dataset, 0.9895 on the COVID-19, Pneumonia and Normal Chest X-ray dataset, 0.9806 on the COVID-19 CTMaster dataset, and 0.9398 on the VinBig dataset. We further evaluate the effectiveness of the proposed model using ROC curves, where the AUC for the best-performing model reaches 0.96. Our model provides a promising tool for the early detection and accurate diagnosis of COVID-19, which can assist healthcare professionals in making informed treatment decisions and improving patient outcomes. The results are plausible, and the system can be deployed in a clinical or research setting to assist in the diagnosis of COVID-19.
Keywords: COVID-19, feature engineering, artificial neural networks, radiology images
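The MRMR selection step described above can be sketched as a greedy search that balances relevance (feature-label mutual information) against redundancy (feature-feature mutual information). The following Python sketch is illustrative only and is not the authors' implementation; the synthetic data, the scikit-learn mutual-information estimators, and the number of selected features are all assumptions.

```python
# Minimal greedy MRMR sketch (illustrative; not the paper's implementation).
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))           # stand-in for fused Haralick + CNN features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic labels (assumption)

relevance = mutual_info_classif(X, y, random_state=0)
selected = [int(np.argmax(relevance))]

while len(selected) < 10:                # keep 10 features (assumption)
    best_score, best_j = -np.inf, None
    for j in range(X.shape[1]):
        if j in selected:
            continue
        # Redundancy: mean MI between candidate j and already-selected features.
        red = np.mean([mutual_info_regression(X[:, [k]], X[:, j], random_state=0)[0]
                       for k in selected])
        score = relevance[j] - red       # MID criterion: relevance minus redundancy
        if score > best_score:
            best_score, best_j = score, j
    selected.append(best_j)

print("selected feature indices:", selected)
```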
384 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage
Authors: Meng H. Lean, Wei-Ping L. Chu
Abstract:
The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used for the solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories of charges that make it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and the lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4 × 10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment is increased with spheroids due to the increase in surface area, especially so for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.
Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport
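As a worked illustration of the field-dependent Poole-Frenkel mobility mentioned above, the short sketch below evaluates the standard form μ(E) = μ₀·exp(β√E/(kT)); the zero-field mobility, relative permittivity, temperature, and field range are placeholder values, not parameters from the paper.

```python
# Poole-Frenkel field-dependent mobility (standard form; placeholder parameters).
import numpy as np

q = 1.602e-19            # elementary charge [C]
k_B = 1.381e-23          # Boltzmann constant [J/K]
eps0 = 8.854e-12         # vacuum permittivity [F/m]
eps_r = 3.0              # relative permittivity of the polymer (assumption)
T = 300.0                # temperature [K] (assumption)
mu0 = 1e-14              # zero-field mobility [m^2/(V*s)] (assumption)

beta = np.sqrt(q**3 / (np.pi * eps0 * eps_r))   # Poole-Frenkel constant
E = np.logspace(6, 8, 5)                        # applied fields [V/m]
mu = mu0 * np.exp(beta * np.sqrt(E) / (k_B * T))

for field, mob in zip(E, mu):
    print(f"E = {field:.2e} V/m  ->  mu = {mob:.3e} m^2/(V s)")
```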
383 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation
Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes
Abstract:
The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of many materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the case of a single material, each point of the discretized domain is represented by one of two values of a function, where the value is 1 if the element belongs to the structure and 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous interpolation (power) function whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method penalizes intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one we explore here is the ordered SIMP method, which has the advantage of not relying on additional variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization
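A minimal sketch of the SIMP-style interpolation underlying the discussion above: element stiffness is a power-law function of a pseudo-density, and the ordered variant maps density intervals to different materials so that one variable serves any number of materials. The material values and penalty exponent below are toy placeholders, not the authors' data.

```python
# SIMP power-law interpolation and a toy 'ordered' material mapping (illustrative).
import numpy as np

E_min, E_0, p = 1e-9, 1.0, 3.0   # void stiffness, solid stiffness, penalty (assumptions)

def simp_stiffness(rho):
    """Classical SIMP: penalized stiffness for pseudo-density rho in [0, 1]."""
    return E_min + rho**p * (E_0 - E_min)

# Ordered SIMP idea: a single density variable selects among several materials,
# so the variable count does not grow with the number of materials.
materials = [(0.0, 0.0), (0.4, 0.45), (0.7, 0.75), (1.0, 1.0)]  # (density, stiffness) toy pairs

def ordered_material(rho):
    """Return the stiffness of the material whose normalized density is nearest rho."""
    densities = np.array([m[0] for m in materials])
    return materials[int(np.argmin(np.abs(densities - rho)))][1]

for rho in (0.0, 0.3, 0.5, 0.8, 1.0):
    print(f"rho={rho:.1f}: SIMP E={simp_stiffness(rho):.3f}, "
          f"ordered material E={ordered_material(rho):.2f}")
```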
382 Hydrogen Production at the Forecourt from Off-Peak Electricity and Its Role in Balancing the Grid
Authors: Abdulla Rahil, Rupert Gammon, Neil Brown
Abstract:
The rapid growth of renewable energy sources and their integration into the grid have been motivated by the depletion of fossil fuels and environmental issues. Unfortunately, the grid is unable to cope with the predicted growth of renewable energy, which would lead to its instability. To solve this problem, energy storage devices could be used. Electrolytic hydrogen production from an electrolyser is considered a promising option since it is a clean energy source (zero emissions). Flexible operation of an electrolyser (producing hydrogen during the off-peak electricity period and stopping at other times) could bring about many benefits, such as reducing the cost of hydrogen and helping to balance the electric system. This paper investigates the price of hydrogen under flexible operation compared with continuous operation, while serving the customer (a hydrogen filling station) without interruption. An optimization algorithm is applied to investigate the hydrogen station in both cases (flexible and continuous operation). Three different scenarios are tested to see whether the off-peak electricity price could enhance the reduction of the hydrogen cost: a standard one-tier tariff during the day (assumed 12 p/kWh) while still satisfying the demand for hydrogen; using off-peak electricity at a lower price (assumed 5 p/kWh) and shutting down the electrolyser at other times; and using lower-price electricity at off-peak times and higher-price electricity at other times. This study looks at Derna city, which is located on the coast of the Mediterranean Sea (32° 46′ 0″ N, 22° 38′ 0″ E) and has high wind-resource potential. Hourly wind speed data collected over 24½ years, from 1990 to 2014, were used, in addition to hourly radiation and hourly electricity demand data collected over a one-year period, together with the petrol station data.
Keywords: hydrogen filling station, off-peak electricity, renewable energy, electrolytic hydrogen
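The three tariff scenarios lend themselves to a simple back-of-the-envelope electricity-cost comparison. The sketch below uses the paper's assumed tariffs (12 p/kWh standard, 5 p/kWh off-peak); the electrolyser specific energy consumption, the off-peak window, and the daily hydrogen demand are illustrative assumptions, not the study's optimized values.

```python
# Rough electricity-cost comparison for the three tariff scenarios (illustrative).
ENERGY_PER_KG_H2 = 55.0       # kWh of electricity per kg H2 (typical figure; assumption)
DAILY_DEMAND_KG = 100.0       # hydrogen demand of the filling station per day (assumption)
OFF_PEAK_HOURS = 8            # hours per day at the off-peak tariff (assumption)

STANDARD_TARIFF = 0.12        # GBP/kWh (12 p/kWh, scenario 1 of the paper)
OFF_PEAK_TARIFF = 0.05        # GBP/kWh (5 p/kWh, scenarios 2 and 3)

daily_energy = ENERGY_PER_KG_H2 * DAILY_DEMAND_KG   # kWh/day

# Scenario 1: continuous operation at the standard tariff.
cost_1 = daily_energy * STANDARD_TARIFF

# Scenario 2: flexible operation, producing only during off-peak hours
# (requires an oversized electrolyser plus storage to still meet demand).
cost_2 = daily_energy * OFF_PEAK_TARIFF

# Scenario 3: continuous operation with a two-tier tariff.
off_peak_share = OFF_PEAK_HOURS / 24
cost_3 = daily_energy * (off_peak_share * OFF_PEAK_TARIFF
                         + (1 - off_peak_share) * STANDARD_TARIFF)

for name, cost in [("continuous/standard", cost_1),
                   ("flexible/off-peak", cost_2),
                   ("continuous/two-tier", cost_3)]:
    print(f"{name:22s}: GBP {cost:8.2f}/day -> GBP {cost / DAILY_DEMAND_KG:.2f}/kg H2")
```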
381 Influence of Hydrophobic Surface on Flow Past Square Cylinder
Authors: S. Ajith Kumar, Vaisakh S. Rajan
Abstract:
In external flows, vortex shedding behind bluff bodies causes a large number of engineering structures to experience unsteady loads, which can result in structural failure. Vortex shedding can even turn out to be disastrous, as in the Tacoma Narrows Bridge failure. We therefore need to control vortex shedding by reducing the unsteady forces acting on the bluff body. For circular cylinders, a hydrophobic surface on an otherwise no-slip body is found to delay separation and drastically minimize the effects of vortex shedding. Flow over a square cylinder behaves differently, as separation can take place at either of the two corner separation points (front or rear). In this study, an attempt is made to numerically elucidate the effect of a hydrophobic surface on flow over a square cylinder. A 2D numerical simulation has been performed to understand the effects of the slip surface on the flow past a square cylinder. The details of the numerical algorithm will be presented at the time of the conference. A non-dimensional parameter, the Knudsen number, is defined to quantify the slip on the cylinder surface based on Maxwell's slip model. The slip condition at the wall affects the vorticity distribution around the cylinder and the flow separation. In the numerical analysis, we observed that the hydrophobic surface enhances the shedding frequency and damps down the amplitude of oscillations of the square cylinder. We also found that the slip reduces the aerodynamic force coefficients, such as the coefficient of lift (CL) and the coefficient of drag (CD). Hence, replacing a no-slip surface with a hydrophobic surface can be treated as an effective drag reduction strategy, and the introduction of a hydrophobic surface is found to be an effective method for reducing vortex-induced vibrations (VIV) and thereby controlling structural failures.
Keywords: drag reduction, flow past square cylinder, flow control, hydrophobic surfaces, vortex shedding
380 Improved Distance Estimation in Dynamic Environments through Multi-Sensor Fusion with Extended Kalman Filter
Authors: Iffat Ara Ebu, Fahmida Islam, Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball
Abstract:
The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. Limitations of single sensors such as cameras or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an extended Kalman filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection. Data analysis is performed using MATLAB. Qualitative (visualization of fused data vs ground truth) and quantitative metrics (RMSE, MAE) are employed for performance assessment. Initial results with simulated data demonstrate accurate distance estimation compared to individual sensors. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data from a Chevrolet Blazer. In summary, this research demonstrates that multi-sensor fusion with an EKF significantly improves distance estimation accuracy in dynamic environments. This is supported by comprehensive evaluation metrics, with validation transitioning from simulated to real-world data, paving the way for safer and more reliable autonomous vehicle control.
Keywords: sensor fusion, EKF, MATLAB, MAVS, autonomous vehicle, ADAS
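A minimal sketch of the kind of camera-radar fusion filter described above: a constant-velocity range state updated with two noisy distance measurements. With a linear measurement model the EKF reduces to a standard Kalman filter, as here; with nonlinear models the same loop becomes an EKF by linearizing the measurement function with its Jacobian at each step. The noise variances and simulated data are assumptions, not the authors' MAVS parameters.

```python
# Minimal camera+radar distance fusion with a (linear) Kalman filter; all parameters are assumptions.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model: [range, range-rate]
Q = np.diag([0.01, 0.1])                   # plant noise (assumption)
H = np.array([[1.0, 0.0], [1.0, 0.0]])     # camera and radar both measure range
R = np.diag([4.0, 0.25])                   # camera noisier than radar (assumption)

rng = np.random.default_rng(1)
x_true = np.array([50.0, -2.0])            # target closing at 2 m/s
x, P = np.array([45.0, 0.0]), np.eye(2) * 10.0

for step in range(50):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, np.sqrt(np.diag(R)))  # [camera, radar] measurements

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update (fuses both sensors in one step)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"true range {x_true[0]:6.2f} m, fused estimate {x[0]:6.2f} m")
```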
379 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA
Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell
Abstract:
Hedgerows play an important role in a wide range of ecological habitats, landscape and agricultural management, carbon sequestration, and wood production. Accurate hedgerow detection using satellite imagery is a challenging problem in remote sensing because, from a spatial viewpoint, a hedge is very similar to a linear object such as a road, while from a spectral viewpoint it is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled the automatic detection of hedges through the acquisition of images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data have provided the opportunity to detect hedgerows as line features, but difficulties remain in monitoring and characterizing them at the landscape scale. In this research, TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015, are used to detect hedgerows at a test site in Fermoy, Ireland. Dual-polarization (HH/VV) Spotlight data are used for hedgerow detection. Various SAR image techniques, integrated on a trial-and-error basis with classification algorithms such as texture analysis, support vector machines, k-means, and random forests, are used to detect hedgerows and characterize them. We apply Shannon entropy (ShE) and backscattering analysis of single and double bounce in the polarimetric analysis to drive the object-oriented classification and finally extract the hedgerow network. The work is still in progress, and other methods will also need to be applied to find the best approach for the study area. Here we present the preliminary finding that polarimetric TerraSAR-X imagery can potentially detect hedgerows.
Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis
378 Safeguarding the Construction Industry: Interrogating and Mitigating Emerging Risks from AI in Construction
Authors: Abdelrhman Elagez, Rolla Monib
Abstract:
This empirical study investigates the observed risks associated with adopting Artificial Intelligence (AI) technologies in the construction industry and proposes potential mitigation strategies. While AI has transformed several industries, the construction industry is slowly adopting advanced technologies like AI, introducing new risks that lack critical analysis in the current literature. A comprehensive literature review identified a research gap, highlighting the lack of critical analysis of risks and the need for a framework to measure and mitigate the risks of AI implementation in the construction industry. Consequently, an online survey was conducted with 24 project managers and construction professionals, possessing experience ranging from 1 to 30 years (with an average of 6.38 years), to gather industry perspectives and concerns relating to AI integration. The survey results yielded several significant findings. Firstly, respondents exhibited a moderate level of familiarity (66.67%) with AI technologies, while the industry's readiness for AI deployment and current usage rates remained low at 2.72 out of 5. Secondly, the top-ranked barriers to AI adoption were identified as lack of awareness, insufficient knowledge and skills, data quality concerns, high implementation costs, absence of prior case studies, and the uncertainty of outcomes. Thirdly, the most significant risks associated with AI use in construction were perceived to be a lack of human control (decision-making), accountability, algorithm bias, data security/privacy, and lack of legislation and regulations. Additionally, the participants acknowledged the value of factors such as education, training, organizational support, and communication in facilitating AI integration within the industry. These findings emphasize the necessity for tailored risk assessment frameworks, guidelines, and governance principles to address the identified risks and promote the responsible adoption of AI technologies in the construction sector.
Keywords: risk management, construction, artificial intelligence, technology
377 Tool for Maxillary Sinus Quantification in Computed Tomography Exams
Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina
Abstract:
The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, aid thermoregulation, and impart resonance to the voice, among other roles. Thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person. Many diseases may affect the development process of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and where possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors' institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM), based on features such as pixel value, spatial distribution, and shape. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, yielding the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. In the statistical comparison between both methods, the linear regression showed a strong association and low dispersion between variables, the Bland-Altman analysis showed no significant differences between the methods, and the Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
Keywords: maxillary sinus, support vector machine, region growing, volume quantification
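The seeded region-growing step can be illustrated with a small self-contained sketch: starting from a seed pixel (which the paper obtains from the SVM detector), 4-connected neighbours are absorbed while they stay within a tolerance of the running region mean. The toy image, seed location, and tolerance below are assumptions, not values from the study.

```python
# Toy 2D region growing from a seed pixel (illustrative of the RG step only).
from collections import deque
import numpy as np

def region_grow(image, seed, tol=20.0):
    """Grow a region from `seed` while |pixel - region mean| <= tol (4-connectivity)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# Synthetic slice: a dark sinus-like cavity (value ~30) inside brighter tissue (~120).
img = np.full((64, 64), 120.0)
img[20:40, 25:45] = 30.0
img += np.random.default_rng(0).normal(0, 3, img.shape)

mask = region_grow(img, seed=(30, 35), tol=25.0)
print("segmented pixels:", int(mask.sum()), "(expected ~", 20 * 20, ")")
```

Summing such masks over all slices and multiplying by the voxel volume gives a volume estimate, which is the role this step plays in the pipeline described above.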
376 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and the adequate intervention will most probably reduce future maintenance costs, minimize downtime, and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These enable interpretation of the obtained prediction errors, allow conclusions to be drawn about the state of the structure, and thus support decision making regarding its maintenance.
Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
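The model-free scheme above can be illustrated with a small sketch: train a neural network to predict the next acceleration sample from recent history under healthy conditions, then flag growth in the prediction error. The synthetic signals, network size, frequency shift, and threshold rule below are assumptions for illustration, not the bridge data or the paper's statistical tests.

```python
# Sketch of prediction-error damage detection on synthetic signals (not the bridge data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(0, 30, 0.01)

def acceleration(freq):
    """Synthetic acceleration channel dominated by one natural frequency."""
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.normal(size=t.size)

def lagged(sig, k=10):
    """Predict the next sample from the previous k samples (autoregressive features)."""
    X = np.column_stack([sig[i:len(sig) - k + i] for i in range(k)])
    return X, sig[k:]

X_h, y_h = lagged(acceleration(1.5))              # 'healthy' training data
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_h, y_h)

err_h = np.abs(model.predict(X_h) - y_h)
X_d, y_d = lagged(acceleration(1.65))             # stiffness loss shifts the frequency
err_d = np.abs(model.predict(X_d) - y_d)

threshold = err_h.mean() + 3 * err_h.std()        # simple outlier rule on the error
print(f"healthy error {err_h.mean():.4f}, 'damaged' error {err_d.mean():.4f}, "
      f"alarm: {err_d.mean() > threshold}")
```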
375 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components
Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea
Abstract:
Students' textual feedback can hold unique patterns and useful information about the learning process: it can reveal the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key input for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using part-of-speech (PoS) features with four machine learning algorithms: support vector machines, decision trees, random forests, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment-related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model, for the whole data set and the new data subsets, that automatically detects their sentiment. The significance of this paper is in comparing the performance of the above four algorithms using part-of-speech features against the performance of the same algorithms using n-gram features. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithms, interpreting mined patterns, and consolidating the discovered knowledge. The experiments show that models using either feature set performed very well on the first task, but on the second task, models that used part-of-speech features underperformed in comparison with models that used unigram and bigram features.
Keywords: assessment, part of speech, sentiment analysis, student feedback
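To make the PoS-feature idea concrete, the sketch below converts each feedback comment into part-of-speech tag counts and trains an SVM, one of the four classifiers named above. The tiny toy comments and labels are invented for illustration, and NLTK's tokenizer and tagger models must be downloaded first; this is not the paper's pipeline or data.

```python
# PoS-count features + SVM for short feedback texts (toy data; illustrative only).
# Requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

texts = [
    "The exam questions were clear and fair.",          # assessment-related
    "Coursework deadlines were explained very well.",   # assessment-related
    "The lecture hall was too cold.",                   # not assessment-related
    "Lab equipment often failed during sessions.",      # not assessment-related
]
labels = [1, 1, 0, 0]   # invented labels (assumption)

def pos_counts(text):
    """Map a sentence to counts of its part-of-speech tags."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    counts = {}
    for tag in tags:
        counts[tag] = counts.get(tag, 0) + 1
    return counts

vec = DictVectorizer()
X = vec.fit_transform([pos_counts(t) for t in texts])
clf = SVC(kernel="linear").fit(X, labels)

test = "The marking criteria were transparent."
print(clf.predict(vec.transform([pos_counts(test)])))   # 1 -> assessment-related
```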
374 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes; this task has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. This method avoids the complex problem of form and structure in different classes of organisms. The models are evaluated on empirical data, and their classification performance is compared with other methods. Our system consists of three phases. The first, called transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) coding of the DNA barcodes, the Fourier transform, and power spectrum signal processing. The second, called approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third, the classification of DNA barcodes, is realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)
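The first (transformation) phase can be sketched directly: map each nucleotide to its electron-ion interaction pseudopotential value and take the Fourier power spectrum. The EIIP constants are the standard Nair-Sreenadhan values; the toy sequence is an assumption, and the downstream MLWNN stage is not shown.

```python
# Phase 1 sketch: EIIP coding of a DNA barcode and its Fourier power spectrum.
import numpy as np

# Standard EIIP values per nucleotide (Nair & Sreenadhan).
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(seq):
    """Numeric EIIP signal -> |FFT|^2 (the features later fed to the wavelet network)."""
    signal = np.array([EIIP[base] for base in seq.upper()])
    signal = signal - signal.mean()          # remove the DC component
    return np.abs(np.fft.rfft(signal)) ** 2

barcode = "ACGTTAGCCGATTACGGATCCGTA"          # toy barcode fragment (assumption)
ps = power_spectrum(barcode)
print("spectrum length:", len(ps))
print("dominant frequency bin:", int(np.argmax(ps[1:]) + 1))
```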
373 Control Algorithm Design of Single-Phase Inverter For ZnO Breakdown Characteristics Tests
Authors: Kashif Habib, Zeeshan Ayyub
Abstract:
ZnO voltage-dependent resistors are widely used as components of electrical systems for overvoltage protection. They have wide application prospects in superconducting energy removal, generator de-excitation, and overvoltage protection of electrical and electronic equipment. At present, however, research on ZnO voltage-dependent resistors has largely stopped at their nonlinear voltage-current characteristics and overvoltage protection applications. There has been no further study of their overvoltage breakdown characteristics, such as the combustion phenomena, the voltage/current measured at breakdown, and the effect on surrounding equipment; this remains a blind spot in their application. Therefore, when testing the characteristics of a ZnO voltage-dependent resistor, we need to design a reasonable test power supply that keeps the terminal voltage sinusoidal, simulating real power-frequency (PF) supply conditions. We put forward a solution that uses an inverter to generate controllable power. This paper focuses on the breakdown-characteristics test power supply for nonlinear ZnO voltage-dependent resistors. Based on mature switching power supply technology, we propose a power control system built around an inverter. The supply realizes a sinusoidal voltage output from a three-phase PF AC input, with three control modes (RMS, peak, and average) for the current output. We choose the TMS320F2812M as the control core of the hardware platform. It converts the three-phase input into a controlled single-phase sinusoidal voltage through a rectifier, filter, and inverter. The designed controller produces SPWM to obtain the controlled voltage source via an appropriate multi-loop control strategy, while executing data acquisition and display, system protection, start-up logic control, etc. The TMS320F2812M is able to complete the multi-loop control quickly and achieves good control of the inverter output.
Keywords: ZnO, multi-loop control, SPWM, non-linear load
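The SPWM generation at the heart of the controller can be sketched as a comparison between a sinusoidal reference and a triangular carrier. The carrier frequency, modulation index, and sample rate below are placeholders for illustration, not the TMS320F2812M firmware.

```python
# Sine-triangle SPWM sketch (placeholder parameters; not the DSP implementation).
import numpy as np

f_ref = 50.0          # fundamental output frequency [Hz]
f_car = 5000.0        # triangular carrier frequency [Hz] (assumption)
m = 0.8               # modulation index (assumption)
fs = 200_000          # simulation sample rate [Hz]

t = np.arange(0, 0.02, 1 / fs)                     # one fundamental period
reference = m * np.sin(2 * np.pi * f_ref * t)

# Triangle wave in [-1, 1] built from a sawtooth phase.
phase = (t * f_car) % 1.0
carrier = 4 * np.abs(phase - 0.5) - 1.0

gate = (reference > carrier).astype(int)           # switching signal for one inverter leg

duty = gate.mean()
print(f"average duty cycle over one period: {duty:.3f} (expected ~0.5 for a sine)")
```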
372 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract:
In recent years, object detection has gained much attention and is a very active research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not been well solved yet, and this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Given varying image-capturing conditions, algorithms need to consider a variety of possible environmental factors, as colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variations in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, that reduces the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach is applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show promising results for the proposed approach in comparison with existing methods.
Keywords: image processing, illumination equalization, shadow filtering, object detection
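A condensed sketch of this kind of pipeline using OpenCV: convert to YCrCb, segment without fixed threshold values via Otsu's method on the luma channel, then clean the mask with erosion/dilation and extract contours. The synthetic image, the structuring-element size, and the Otsu choice are assumptions standing in for the paper's unspecified adaptive steps.

```python
# YCrCb segmentation + morphological cleanup sketch (OpenCV; parameters are assumptions).
import cv2
import numpy as np

img = np.zeros((240, 320, 3), dtype=np.uint8)      # stand-in for a wood-stack photo
cv2.rectangle(img, (80, 60), (240, 180), (40, 90, 140), -1)   # synthetic "object"

ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y_channel = ycrcb[:, :, 0]

# Otsu's threshold adapts to each image, avoiding fixed threshold values.
_, mask = cv2.threshold(y_channel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((5, 5), np.uint8)                 # structuring element (assumption)
mask = cv2.erode(mask, kernel, iterations=1)       # remove small shadow/illumination noise
mask = cv2.dilate(mask, kernel, iterations=2)      # restore and join object regions

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("detected regions:", len(contours))
```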
371 Enhancing the Resilience of Combat System-Of-Systems Under Certainty and Uncertainty: Two-Phase Resilience Optimization Model and Deep Reinforcement Learning-Based Recovery Optimization Method
Authors: Xueming Xu, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
A combat system-of-systems (CSoS) comprises various types of functional combat entities that interact to meet corresponding task requirements in the present and future. Enhancing the resilience of CSoS holds significant military value in optimizing the operational planning process, improving military survivability, and ensuring the successful completion of operational tasks. Accordingly, this research proposes an integrated framework called CSoS resilience enhancement (CSoSRE) to enhance the resilience of CSoS from a recovery perspective. Specifically, this research presents a two-phase resilience optimization model to define a resilience optimization objective for CSoS. This model considers not only task baseline, recovery cost, and recovery time limit but also the characteristics of emergency recovery and comprehensive recovery. Moreover, the research extends it from the deterministic case to the stochastic case to describe the uncertainty in the recovery process. Based on this, a resilience-oriented recovery optimization method based on deep reinforcement learning (RRODRL) is proposed to determine a set of entities requiring restoration and their recovery sequence, thereby enhancing the resilience of CSoS. This method improves the deep Q-learning algorithm by designing a discount factor that adapts to changes in CSoS state at different phases, simultaneously considering the network’s structural and functional characteristics within CSoS. Finally, extensive experiments are conducted to test the feasibility, effectiveness and superiority of the proposed framework. The obtained results offer useful insights for guiding operational recovery activity and designing a more resilient CSoS.
Keywords: combat system-of-systems, resilience optimization model, recovery optimization method, deep reinforcement learning, certainty and uncertainty
370 A Pilot Study of Influences of Scan Speed on Image Quality for Digital Tomosynthesis
Authors: Li-Ting Huang, Yu-Hsiang Shen, Cing-Ciao Ke, Sheng-Pin Tseng, Fan-Pin Tseng, Yu-Ching Ni, Chia-Yu Lin
Abstract:
Chest radiography is the most common technique for the diagnosis and follow-up of pulmonary diseases. However, lesions superimposed on normal structures are difficult to detect in chest radiography. Chest tomosynthesis is a relatively new technique for obtaining 3D section images from a set of low-dose projections acquired over a limited angular range. However, there are some limitations to chest tomosynthesis: patients undergoing it have to be able to hold their breath firmly for 10 seconds. A digital tomosynthesis system with an advanced reconstruction algorithm and a high-stability motion mechanism was developed by our research group, and the system is expected to be able to perform a bidirectional chest scan within 10 seconds. The purpose of this study is to determine the influence of scan speed on image quality for our digital tomosynthesis system. The major factors that lead to image blurring are the motion of the X-ray source and of the patient. For the former, an experiment imaging a chest phantom at three different scan speeds (6 cm/s, 8 cm/s, and 15 cm/s) was performed to understand the influence of scan speed on image quality. For the latter, a normal Sprague-Dawley (SD) rat was imaged both alive and after sacrifice to assess the impact of breathing motion on image quality. In both experiments, the profiles of the regions of interest (ROIs) and the contrast-to-noise ratios (CNRs) of the ROIs relative to normal tissue in the reconstructed images were examined to assess degradation of image quality. The preliminary results show no obvious degradation of image quality with increasing scan speed, possibly due to the advanced hardware and software designs of the system. This implies that a higher speed (15 cm/s) than that of the commercialized tomosynthesis system (12 cm/s) is achieved by the proposed system, and therefore a complete chest scan within 10 seconds is expected.
Keywords: chest radiography, digital tomosynthesis, image quality, scan speed
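The CNR metric used above has a standard form, CNR = |μ_ROI − μ_bg| / σ_bg. A minimal computation on synthetic pixel arrays (assumed values, not the phantom data) is shown below.

```python
# Contrast-to-noise ratio of an ROI against normal-tissue background (synthetic values).
import numpy as np

rng = np.random.default_rng(0)
roi = rng.normal(loc=140.0, scale=8.0, size=500)          # lesion ROI pixels (assumption)
background = rng.normal(loc=100.0, scale=10.0, size=500)  # normal tissue (assumption)

cnr = abs(roi.mean() - background.mean()) / background.std()
print(f"CNR = {cnr:.2f}")
```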
369 Modified 'Perturb and Observe' with 'Incremental Conductance' Algorithm for Maximum Power Point Tracking
Authors: H. Fuad Usman, M. Rafay Khan Sial, Shahzaib Hamid
Abstract:
The trend toward renewable energy resources has been amplified by global warming and other environment-related complications in the 21st century. Recent research has strongly emphasized the generation of electrical power from renewable resources such as solar, wind, hydro, and geothermal. The use of the photovoltaic cell has become widespread for domestic and commercial purposes all over the world. Although a single cell gives a low voltage output, connecting a number of cells in series forms a complete photovoltaic module. As its use has become popular, prices of photovoltaic cells have fallen, giving customers confidence in adopting this source for their electricity needs. A photovoltaic cell delivers its maximum power at a single specific operating point for a given temperature and level of solar intensity received at the surface, and this point shifts over a large range depending on manufacturing factors, temperature conditions, insolation intensity, instantaneous shading conditions, and the aging of the photovoltaic cells. Two improved algorithms are proposed in this article for MPPT. The widely used algorithms are 'incremental conductance' and 'perturb and observe'. To extract the maximum power from the source to the load, the duty cycle of the converter is effectively controlled. After assessing the previous techniques, this paper presents an improved and reformed approach to harvesting the maximum power point of photovoltaic cells. A thorough review of previous ideas was carried out before constructing the improvement to the traditional MPPT technique; each technique has its own merits and limitations under various weather conditions. An improved technique implementing both 'perturb and observe' and 'incremental conductance' is introduced.
Keywords: duty cycle, MPPT (Maximum Power Point Tracking), perturb and observe (P&O), photovoltaic module
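One common way to combine the two algorithms named above is to use the incremental-conductance condition (comparing dI/dV with −I/V) to decide the perturbation direction while retaining P&O's simple duty-cycle stepping. The sketch below follows that generic pattern under assumed step sizes and a toy PV curve; it is not the authors' exact modification.

```python
# Hybrid P&O / incremental-conductance MPPT step (a common combination; assumptions noted).
def mppt_step(v, i, v_prev, i_prev, duty, step=0.005):
    """Return an updated converter duty cycle from two successive PV measurements."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                      # P&O-style fallback when voltage is unchanged
        if di > 0:
            duty -= step
        elif di < 0:
            duty += step
    else:
        g = di / dv                  # incremental conductance
        if g > -i / v:               # left of the MPP: raise the operating voltage
            duty -= step
        elif g < -i / v:             # right of the MPP: lower the operating voltage
            duty += step
        # g == -i/v: at the MPP, hold the duty cycle
    return min(max(duty, 0.0), 1.0)

# Toy walk along a PV curve I(V) = 8 - 0.4*V, whose MPP lies at V = 10 (illustrative).
duty, v, i = 0.5, 12.0, 8 - 0.4 * 12.0
v_prev, i_prev = 12.5, 8 - 0.4 * 12.5
for _ in range(25):
    new_duty = mppt_step(v, i, v_prev, i_prev, duty)
    v_prev, i_prev = v, i
    v = v + (duty - new_duty) * 20.0      # crude converter model: duty down -> voltage up
    i = 8 - 0.4 * v
    duty = new_duty
print(f"settled near V = {v:.2f} (MPP of the toy curve is 10.0)")
```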
368 Security of Database Using Chaotic Systems
Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem
Abstract:
Database (DB) security demands permitting authorized users' actions and prohibiting the actions of unauthorized users and intruders on the DB and the objects inside it. Successful organizations demand the confidentiality of their DBs: they do not allow unauthorized access to their data/information, and they also demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are the security concerns. There are four types of controls for obtaining DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communication. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rössler, Lorenz, etc.) or discrete (logistic, Hénon, etc.) systems. The important characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. Pseudo-Random Number Generators (PRNGs) derived from the different chaotic algorithms are implemented using Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), where the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic logistic maps and another based on two chaotic Hénon maps, where the two chaotic algorithms run side by side, starting from random, independent initial conditions and parameters (the encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST
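The hybrid logistic-map PRNG proposed above can be sketched as two independently seeded logistic maps running side by side, with their outputs combined bit-wise. The seeds, map parameter, warm-up length, and the XOR combination rule below are illustrative assumptions, not the paper's construction or key schedule.

```python
# Hybrid PRNG sketch: two logistic maps combined by XOR (illustrative parameters).
def logistic_stream(x, r=3.99):
    """Yield an endless stream of bytes from a logistic map x <- r*x*(1-x)."""
    while True:
        x = r * x * (1.0 - x)
        yield int(x * 256) & 0xFF       # quantize the chaotic state to one byte

def hybrid_bytes(n, seed1=0.123456, seed2=0.654321):
    """Run two maps side by side from independent keys and XOR their bytes."""
    s1, s2 = logistic_stream(seed1), logistic_stream(seed2)
    # Discard a warm-up transient so the output decorrelates from the seeds.
    for _ in range(1000):
        next(s1); next(s2)
    return bytes(next(s1) ^ next(s2) for _ in range(n))

print(hybrid_bytes(16).hex())
```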
367 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores
Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi
Abstract:
In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial for leveraging synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, the network's predictions and the synergy scores of the two drugs with others within their clusters are used to predict the synergy score of the drug pair under consideration. This approach facilitates comparative analysis with clustering- and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods such as DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
Keywords: drug synergy, clustering, prediction, machine learning, deep learning
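The drug-clustering step relies on the Markov clustering (MCL) algorithm, which alternates expansion (squaring the column-stochastic adjacency matrix) and inflation (element-wise powering followed by renormalization). Below is a compact sketch of plain MCL on a toy synergy graph; the inflation parameter and toy weights are assumptions, and this is not the ClusterSyn pipeline itself.

```python
# Plain Markov clustering (MCL) on a toy drug-synergy adjacency matrix (illustrative).
import numpy as np

def mcl(adjacency, inflation=2.0, iters=50):
    """Alternate expansion (M @ M) and inflation (element-wise power + renormalize)."""
    M = adjacency + np.eye(len(adjacency))      # self-loops stabilize the iteration
    M = M / M.sum(axis=0)                       # column-normalize to a stochastic matrix
    for _ in range(iters):
        M = M @ M                               # expansion
        M = M ** inflation                      # inflation
        M = M / M.sum(axis=0)
    # Group columns (drugs) by the row that attracts most of their probability mass.
    clusters = {}
    for col in range(M.shape[1]):
        attractor = int(np.argmax(M[:, col]))
        clusters.setdefault(attractor, []).append(col)
    return list(clusters.values())

# Toy graph: drugs 0-2 synergize with each other; drugs 3-4 form a second cluster.
A = np.array([[0, 5, 4, 0, 0],
              [5, 0, 6, 0, 0],
              [4, 6, 0, 1, 0],
              [0, 0, 1, 0, 7],
              [0, 0, 0, 7, 0]], dtype=float)

print(mcl(A))   # expected grouping: [[0, 1, 2], [3, 4]]
```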
366 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for the identification of activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is the key to solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Furthermore, more features result in high computational complexity, while fewer features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and normalize the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined in the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
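For context, a plain (non-optimized) autoencoder baseline of the kind HO-DAE builds on can be sketched in a few lines. The layer sizes, feature dimension, and synthetic data below are assumptions, and the meta-heuristic weight search that defines HO-DAE is deliberately not shown.

```python
# Plain autoencoder baseline (illustrative; the heuristic optimization of HO-DAE is not shown).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64)).astype("float32")   # stand-in for extracted EEG features

inputs = tf.keras.Input(shape=(64,))
h = tf.keras.layers.Dense(32, activation="relu")(inputs)
code = tf.keras.layers.Dense(8, activation="relu")(h)      # compressed feature set
h2 = tf.keras.layers.Dense(32, activation="relu")(code)
outputs = tf.keras.layers.Dense(64)(h2)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")          # MSE between input and reconstruction
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

encoder = tf.keras.Model(inputs, code)
reduced = encoder.predict(X, verbose=0)
print("reduced feature shape:", reduced.shape)             # (1000, 8)
```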
365 Different Types of Amyloidosis Revealed with Positive Cardiac Scintigraphy with Tc-99M DPD-SPECT
Authors: Ioannis Panagiotopoulos, Efstathios Kastritis, Anastasia Katinioti, Georgios Efthymiadis, Argyrios Doumas, Maria Koutelou
Abstract:
Introduction: Transthyretin amyloidosis (ATTR) is a rare but serious infiltrative disease. Myocardial scintigraphy with DPD has emerged as the most effective, non-invasive, highly sensitive, and highly specific diagnostic method for cardiac ATTR amyloidosis. However, there are cases in which additional laboratory investigations reveal AL amyloidosis or other diseases despite a positive DPD scintigraphy. We describe the experience from the Onassis Cardiac Surgery Center and the monitoring center for infiltrative myocardial diseases of the cardiology clinic at AHEPA. Materials and Methods: All patients with clinical suspicion of cardiac or extracardiac amyloidosis undergo a myocardial scintigraphy scan with Tc-99m DPD. In this way, over 500 patients have been examined. Further diagnostic approach based on clinical and imaging findings includes laboratory investigation and invasive techniques (e.g., biopsy). Results: Out of 76 patients in total with positive myocardial scintigraphy Grade 2 or 3 according to the Perugini scale, 8 were proven to suffer from AL amyloidosis during the investigation of paraproteinemia. Among these patients, 3 showed Grade 3 uptake, while the rest were graded as Grade 2, or 2 to 3. Additionally, one patient presented diffuse and unusual radiopharmaceutical uptake in soft tissues throughout the body without cardiac involvement. These findings raised suspicions, leading to the analysis of κ and λ light chains in the serum, as well as immunostaining of proteins in the serum and urine of these specific patients. The final diagnosis was AL amyloidosis. Conclusion: The value of DPD scintigraphy in the diagnosis of cardiac amyloidosis from transthyretin is undisputed. However, positive myocardial scintigraphy with DPD should not automatically lead to the diagnosis of ATTR amyloidosis. Laboratory differentiation between ATTR and AL amyloidosis is crucial, as both prognosis and therapeutic strategy are dramatically altered. Laboratory exclusion of paraproteinemia is a necessary and essential step in the diagnostic algorithm of ATTR amyloidosis for all positive myocardial scintigraphy with diphosphonate tracers, since >20% of patients with Grade 3 and 2 uptake may conceal AL amyloidosis.
Keywords: AL amyloidosis, amyloidosis, ATTR, myocardial scintigraphy, Tc-99m DPD
364 Analysis of NMDA Receptor 2B Subunit Gene (GRIN2B) mRNA Expression in the Peripheral Blood Mononuclear Cells of Alzheimer's Disease Patients
Authors: Ali̇ Bayram, Semih Dalkilic, Remzi Yigiter
Abstract:
The N-methyl-D-aspartate (NMDA) receptor is a subtype of glutamate receptor and plays a pivotal role in learning, memory, neuronal plasticity, neurotoxicity, and synaptic mechanisms. Animal experiments have suggested that glutamate-induced excitotoxic injury and NMDA receptor blockade lead to amnesia and other neurodegenerative diseases, including Alzheimer's disease (AD), Huntington's disease, and amyotrophic lateral sclerosis. The aim of this study is to investigate the association between the expression level of GRIN2B, the gene coding for the NMDA receptor 2B subunit, and Alzheimer's disease. The study was approved by the local ethics committees, and it was conducted according to the principles of the Declaration of Helsinki and guidelines for Good Clinical Practice. Peripheral blood was collected from 50 patients diagnosed with AD and 49 healthy control individuals. Total RNA was isolated with the RNeasy midi kit (Qiagen) according to the manufacturer's instructions. After RNA quality and quantity were checked with a spectrophotometer, GRIN2B expression levels were determined by quantitative real-time PCR (qRT-PCR). Statistical analyses were performed, and the variance between the two groups was compared with the Mann-Whitney U test in GraphPad InStat, with a 95% confidence interval and p < 0.05. After statistical analysis, we determined that GRIN2B expression levels were downregulated in the AD patient group with respect to the control group, although expression levels within each group showed high variability. Based on these results, we speculate that GRIN2B expression level is associated with AD, but it is necessary to validate these results with a bigger sample size.
Keywords: Alzheimer's disease, N-methyl-d-aspartate receptor, NR2B, GRIN2B, mRNA expression, RT-PCR
363 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves
Authors: Dmytro Zubov, Francesco Volponi
Abstract:
In this paper, the D-Wave quantum computing Ising model is used for forecasting positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data, respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea level (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predicting heat wave values because it does not require the estimation of a probability distribution from scarce observations. The proposed forecast quantum computing algorithm is simulated on a traditional computer architecture, with combinatorial optimization of the Ising model parameters, for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (the ratio of successful predictions to the total number of predictions) on the validation sample (2011-2014) shows that the Ising model with three qubits has 100% accuracy, which is quite significant compared to other methods. However, the number of identified heat waves is small (only one out of nineteen in this case). The other models, with 2, 4, and 5 qubits, have 20%, 3.8%, and 3.8% accuracy, respectively. The presented three-qubit forecast model is applied to the prediction of heat waves at five other locations: Aurel Vlaicu, Romania (accuracy 28.6%); Bratislava, Slovakia (21.7%); Brussels, Belgium (33.3%); Sofia, Bulgaria (50%); and Akhisar, Turkey (21.4%). These predictions are not ideal, but they are not zero. They can be used independently or together with predictions generated by other methods. The loss of human life, as well as the environmental, economic, and material damage from extreme air temperatures, could be reduced if some heat waves are predicted. Even a small success rate implies a large socio-economic benefit.
Keywords: heat wave, D-wave, forecast, Ising model, quantum computing
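As a classical illustration of an Ising-style formulation like the one described above, past days can be encoded as spins s_i ∈ {−1, +1} (hot or not hot) and a heat wave predicted when the negated Ising energy, −E = Σ w_i·s_i + Σ J·s_i·s_j, crosses a threshold. The weights, coupling, threshold, and toy inputs below are invented for illustration and are not the paper's calibrated coefficients.

```python
# Toy Ising-style heat-wave predictor over 3 past days (invented weights; illustrative).
import numpy as np

w = np.array([0.2, 0.5, 1.0])   # per-day field weights, recent days weighted more (assumption)
J = 0.3                         # uniform pairwise coupling between days (assumption)
threshold = 1.2                 # decision threshold on -E (assumption)

def predict_heat_wave(spins):
    """spins: three values in {-1,+1} for days t-3..t-1 (hot = +1). Predict day t."""
    field = np.dot(w, spins)
    coupling = J * sum(spins[i] * spins[j]
                       for i in range(3) for j in range(i + 1, 3))
    return (field + coupling) >= threshold   # -E of the Ising model vs threshold

print(predict_heat_wave([+1, +1, +1]))   # three hot days in a row -> True
print(predict_heat_wave([-1, -1, +1]))   # only yesterday hot -> False
```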
362 Improvement of the Geometric of Dental Bridge Framework through Automatic Program
Authors: Rong-Yang Lai, Jia-Yu Wu, Chih-Han Chang, Yung-Chung Chen
Abstract:
The dental bridge is one of the clinical treatments for missing teeth. A bridge is generally designed with two layers: an inner framework layer (zirconia) and an outer layer of porcelain fused to the framework. The design of a conventional bridge is generally based on the antagonist tooth profile, so that the framework is evenly indented by an equal thickness from the outer contour. All-ceramic dental bridges made of zirconia have demonstrated remarkable potential to withstand the higher physiological occlusal loads of the posterior region, but it has been found that there is still a risk of all-ceramic bridge failure within five years. How to reduce the incidence of failure is thus still a problem to be solved. Therefore, the objective of this study is to develop mechanical designs for all-ceramic dental bridge frameworks that reduce stress and enhance fracture resistance under given loading conditions, using the finite element method. In this study, dental design software is used to design the bridge from tooth CT images. After the model is built, the Bi-directional Evolutionary Structural Optimization (BESO) algorithm, implemented in finite element software, is employed to analyze the finite element results and determine the distribution of the materials in the bridge; BESO searches for the optimum distribution of two different materials, namely porcelain and zirconia. Based on the previously calculated stress value of each element, an element whose stress exceeds the threshold value is replaced by the framework material; when the difference in the maximum stress peak between iterations is less than 0.1%, the calculation is complete. After the bridge design is completed, the stress distribution of the whole structure has changed: BESO reduces the peak principal stress by 10% in the outer-layer porcelain and avoids tensile stress failure.
Keywords: dental bridge, finite element analysis, framework, automatic program
361 Pharmacokinetic Modeling of Valsartan in Dog following a Single Oral Administration
Authors: In-Hwan Baek
Abstract:
Valsartan is a potent and highly selective antagonist of the angiotensin II type 1 receptor and is widely used for the treatment of hypertension. The aim of this study was to investigate the pharmacokinetic properties of valsartan in dogs following oral administration of a single dose, using quantitative modeling approaches. Forty beagle dogs were randomly divided into two groups. Group A (n=20) was administered a single oral dose of valsartan 80 mg (Diovan® 80 mg), and group B (n=20) was administered a single oral dose of valsartan 160 mg (Diovan® 160 mg) in the morning after an overnight fast. Blood samples were collected into heparinized tubes before and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12 and 24 h following oral administration. The plasma concentrations of valsartan were determined using LC-MS/MS. Non-compartmental pharmacokinetic analyses were performed using WinNonlin Standard Edition software, and modeling approaches were performed using maximum-likelihood estimation via the expectation maximization (MLEM) algorithm with sampling, using ADAPT 5 software. After a single dose of valsartan 80 mg, the mean maximum concentration (Cmax) was 2.68 ± 1.17 μg/mL at 1.83 ± 1.27 h. The area under the plasma concentration-versus-time curve from time zero to the last measurable concentration (AUC24h) was 13.21 ± 6.88 μg·h/mL. After dosing with valsartan 160 mg, the mean Cmax was 4.13 ± 1.49 μg/mL at 1.80 ± 1.53 h, and the AUC24h was 26.02 ± 12.07 μg·h/mL. The Cmax and AUC values increased in proportion to the increment in valsartan dose, while the elimination rate constant, half-life, apparent total clearance, and apparent volume of distribution were not significantly different between the doses. Valsartan pharmacokinetics fit a one-compartment model with first-order absorption and elimination following single doses of valsartan 80 mg and 160 mg. In addition, high inter-individual variability was identified in the absorption rate constant. In conclusion, valsartan displays dose-dependent pharmacokinetics in dogs, and subsequent quantitative modeling approaches provided detailed pharmacokinetic information on valsartan. The current findings provide useful information in dogs that will aid the future development of improved formulations or fixed-dose combinations.Keywords: dose-dependent, modeling, pharmacokinetics, valsartan
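The one-compartment model with first-order absorption and elimination has a standard closed-form concentration profile, C(t) = D·ka / ((V/F)·(ka − ke)) · (e^(−ke·t) − e^(−ka·t)). A minimal sketch follows; the parameter values are hypothetical illustrations, not the study's fitted estimates:

```python
import numpy as np

def one_compartment_oral(t, dose, ka, ke, v_f):
    """Plasma concentration after a single oral dose for a one-compartment
    model with first-order absorption (ka) and elimination (ke); v_f is the
    apparent volume of distribution V/F. Assumes ka != ke (no flip-flop)."""
    return dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical parameters, evaluated at the study's sampling schedule (hours);
# dose in mg and v_f in L give concentrations in mg/L (numerically ug/mL).
t = np.array([0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 24])
conc = one_compartment_oral(t, dose=80, ka=1.5, ke=0.25, v_f=25.0)
```

With this parameterization, Cmax occurs at tmax = ln(ka/ke)/(ka − ke), which is the quantity the reported 1.83 h and 1.80 h peak times correspond to.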
Procedia PDF Downloads 297360 Geometric Nonlinear Dynamic Analysis of Cylindrical Composite Sandwich Shells Subjected to Underwater Blast Load
Authors: Mustafa Taskin, Ozgur Demir, M. Mert Serveren
Abstract:
The precise study of the impact of underwater explosions on structures is of great importance in the design and engineering calculations of floating structures, especially those used for military purposes, as well as of power generation facilities such as offshore platforms, which can become targets in case of war. Considering that ship and submarine structures are mostly curved surfaces, it is extremely important and interesting to examine the destructive effects of underwater explosions on curvilinear surfaces. In this study, a geometric nonlinear dynamic analysis of cylindrical composite sandwich shells subjected to an instantaneous pressure load is performed. The instantaneous pressure load is defined as an underwater explosion, and the effects of the liquid medium are taken into account. Equations exist in the literature for the pressure due to underwater explosions, but they were obtained for flat plates; for this reason, the instantaneous pressure load equations are adapted to curvilinear structures before proceeding with the analyses. Fluid-solid interaction is defined using Taylor's plate theory. The lower and upper layers of the cylindrical composite sandwich shell are modeled as composite laminates, and the middle layer consists of a soft core. The geometric nonlinear dynamic equations of the shell are obtained by Hamilton's principle, taking into account the von Kármán theory of large displacements. The time-dependent geometric nonlinear equations of motion are then solved with the help of the generalized differential quadrature method (GDQM), and the dynamic behavior of cylindrical composite sandwich shells exposed to an underwater explosion is investigated. An algorithm that can work parametrically for the solution has been developed within the scope of the study.Keywords: cylindrical composite sandwich shells, generalized differential quadrature method, geometric nonlinear dynamic analysis, underwater explosion
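In GDQM, a derivative at a grid point is approximated as a weighted sum of the function values at all grid points, with weights obtained from Lagrange interpolation (Shu's explicit formulas). A minimal sketch of the first-order differentiation matrix, independent of the shell equations in the paper:

```python
import numpy as np

def gdq_weights(x):
    """First-order GDQ differentiation matrix on an arbitrary 1-D grid x,
    via Shu's formulas: a_ij = M'(x_i) / ((x_i - x_j) * M'(x_j)) for i != j,
    where M'(x_i) = prod_{k != i} (x_i - x_k), and a_ii = -sum_{j != i} a_ij."""
    n = len(x)
    Mp = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                   for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = Mp[i] / ((x[i] - x[j]) * Mp[j])
        A[i, i] = -A[i].sum()  # rows of a differentiation matrix sum to zero
    return A

# Sanity check: differentiate f(x) = x^2 on a Chebyshev-Gauss-Lobatto grid.
x = np.cos(np.pi * np.arange(11) / 10)
print(np.allclose(gdq_weights(x) @ x**2, 2 * x))  # True
```

Applying such matrices along each spatial coordinate reduces the shell PDEs to a system of ordinary differential equations in time, which is then integrated numerically.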
Procedia PDF Downloads 192359 Exploring the Connectedness of Ad Hoc Mesh Networks in Rural Areas
Authors: Ibrahim Obeidat
Abstract:
Reaching a fully-connected network of mobile nodes in rural areas has received great attention among network researchers. This attention arose due to the complexity and high cost of setting up the needed infrastructure for these networks, in addition to the low transmission range these nodes have. Terranet technology, as an example, employs an ad-hoc mesh network where each node has a transmission range not exceeding one kilometer; this means that two nodes are able to communicate with each other if they are within one kilometer of each other, otherwise a third party plays the role of the "relay". In Terranet, as an idea to reduce network setup cost, every node in the network is considered a router responsible for forwarding data between other nodes, which results in a decentralized collaborative environment. Most research on Terranet presents ideas on how to encourage mobile nodes to become more cooperative by leaving their devices in the "ON" state as long as possible while accepting the role of relay (router). This research addresses the issue of finding the percentage of nodes in an ad-hoc mesh network within rural areas that should play the role of relay at every time slot, relative to the actual area coverage of the nodes, in order for the network to reach full connectivity. To the best of our knowledge, no research to date has discussed this issue. The research is carried out through an implementation that builds an adjacency matrix as an indicator of the connectivity between network members. This matrix is continually updated until each value in it gives the number of hops that must be followed to reach from one node to another. After repeating the algorithm on different area sizes, different coverage percentages for each size, and different relay percentages several times, the extracted results show that for area coverage of less than 5%, 40% of the nodes need to be relays, whereas 10% is enough for areas with node coverage greater than 5%.Keywords: ad-hoc mesh networks, network connectivity, mobile ad-hoc networks, Terranet, adjacency matrix, simulator, wireless sensor networks, peer to peer networks, vehicular ad hoc networks, relay
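The hop-count matrix described here can be built by starting from the 1 km adjacency matrix and relaxing paths through each possible relay, Floyd-Warshall style, until every entry holds the hop distance between a pair of nodes. A minimal sketch under those assumptions (node positions in kilometers are hypothetical inputs, not the simulator used in the study):

```python
import numpy as np

def hop_count_matrix(positions, radio_range=1.0):
    """Build the adjacency matrix of nodes within radio range of each other,
    then relax it (Floyd-Warshall style) until each entry holds the number
    of hops between two nodes; np.inf marks unreachable pairs."""
    n = len(positions)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    hops = np.where(d <= radio_range, 1.0, np.inf)
    np.fill_diagonal(hops, 0.0)
    for k in range(n):  # relax paths through each intermediate relay k
        hops = np.minimum(hops, hops[:, k:k + 1] + hops[k:k + 1, :])
    return hops

def fully_connected(hops):
    """The mesh is fully connected when every pair has a finite hop count."""
    return np.isfinite(hops).all()

# Hypothetical run: 50 nodes scattered over a 10 km x 10 km rural area.
positions = np.random.uniform(0, 10, size=(50, 2))
print(fully_connected(hop_count_matrix(positions)))
```

Repeating such a run over different area sizes and relay percentages is the kind of sweep from which the 40% / 10% relay thresholds above would be extracted.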
Procedia PDF Downloads 282358 Multi Biometric Personal Identification System Based On Hybrid Intelligence Method
Authors: Laheeb M. Ibrahim, Ibrahim A. Salih
Abstract:
Biometrics is a technology that has been widely used in many official and commercial identification applications. Increased concerns about security in recent years have resulted in more attention being given to biometric-based verification techniques. Here, a novel fusion approach combining palmprint and dental traits is suggested. These authentication traits have been employed in a range of biometric applications and can identify persons both postmortem (PM) and antemortem (AM). Besides improving accuracy, the fusion of biometrics has several advantages, such as deterring spoofing activities and reducing enrolment failure. In this paper, unimodal biometric systems are first built using the palmprint and dental traits, applying to each an artificial neural network classifier and a hybrid technique that combines swarm intelligence and a neural network; an attempt is then made to combine the palmprint and dental biometrics. Principally, the fusion of palmprint and dental biometrics and their potential application as biometric identifiers has been explored. To address this issue, investigations have been carried out into the relative performance of several statistical data fusion techniques for integrating the information in both unimodal and multimodal biometrics, and the results of the multimodal approach have been compared with each of the two single-trait authentication approaches. This paper studies the feature and decision fusion levels in multimodal biometrics. To determine the genuine acceptance rate (GAR), parallel system decision fusion including the AND, OR, and majority voting rules has been used. The backpropagation method used for classification yields GAR results of (92%, 99%, 97%) for these rules respectively, while the hybrid technique used for classification yields (95%, 99%, 98%) respectively. To determine the accuracy of the multibiometric system, feature-level fusion has been used with the same classification methods as before; the results are (98%, 99%) respectively, while different methods used to determine the GAR at the feature level yield 98%.Keywords: back propagation neural network BP ANN, multibiometric system, parallel system decision-fusion, particle swarm optimization PSO
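A hedged sketch of the parallel decision-fusion rules named above (AND, OR, majority voting), applied to the accept/reject decisions of individual matchers; the function name and interface are illustrative, not the paper's implementation:

```python
def decision_fusion(decisions, rule):
    """Fuse per-matcher accept/reject decisions (e.g., from the palmprint
    and dental matchers) at the decision level in a parallel system."""
    if rule == "AND":
        return all(decisions)              # strict: every matcher must accept
    if rule == "OR":
        return any(decisions)              # lenient: one acceptance suffices
    if rule == "majority":
        return 2 * sum(decisions) > len(decisions)  # ties reject
    raise ValueError(f"unknown fusion rule: {rule}")

# Example: palmprint matcher accepts, dental matcher rejects.
print(decision_fusion([True, False], "AND"))  # False
print(decision_fusion([True, False], "OR"))   # True
```

AND fusion lowers the false acceptance rate at the cost of GAR, while OR fusion does the opposite; the reported GAR triples (92%, 99%, 97%) reflect exactly this trade-off across the three rules.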
Procedia PDF Downloads 533357 Effects of Cacao Agroforestry and Landscape Composition on Farm Biodiversity and Household Dietary Diversity
Authors: Marlene Yu Lilin Wätzold, Wisnu Harto Adiwijoyo, Meike Wollni
Abstract:
Land-use conversion from tropical forests to cash crop production in the form of monocultures has drastic consequences for biodiversity. Meanwhile, high dependence on cash crop production is often associated with a decrease in other food crop production, thereby affecting household dietary diversity. Additionally, deforestation has been found to reduce households' dietary diversity, as forests often offer various food sources. Agroforestry systems are seen as a potential solution for improving local biodiversity as well as providing a range of provisioning ecosystem services, such as timber and other food crops. While a number of studies have analyzed the effects of agroforestry on biodiversity as well as on household livelihood indicators, little is understood about potential trade-offs or synergies between the two. This interdisciplinary study aims to fill this gap by assessing cacao agroforestry's role in enhancing local bird diversity as well as farm household dietary diversity. Additionally, we will take a landscape perspective and investigate the ways in which landscape composition, such as the proximity to forests and forest patches, can contribute to local bird diversity as well as to households' dietary diversity. Our study will take place in two agro-ecological zones in Ghana and is based on household surveys of 500 cacao farm households. Using a subsample of 120 cacao plots, we will assess shade tree diversity and density using drone flights and a computer vision tree detection algorithm. Bird density and diversity will be assessed using sound recorders kept in the cacao plots for 24 hours. Landscape composition will be assessed via remote sensing images. The results of our study are of high importance, as they will allow us to understand the effects of agroforestry and landscape composition in simultaneously improving ecosystem services.Keywords: agroforestry, biodiversity, landscape composition, nutrition
Procedia PDF Downloads 113356 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules
Authors: Michael Naderhirn, Marco Pavone
Abstract:
Problem statement, motivation, and aim of work: So far, control algorithms have been developed by control engineers in such a way that the controller is shown to fit a specification by testing. When it comes to the certification of an autonomous car in highly complex scenarios, the challenge is much greater, since such a controller must mathematically guarantee that it implements the rules of the road while also guaranteeing aspects like safety and real-time executability. What if it became possible to solve this demanding problem by combining formal verification and system theory? The aim of this work is to present a workflow that solves the above-mentioned problem. Summary of the presented results / main outcomes: We show the use of an English-like language to transform the rules of the road into system specifications for an autonomous car. The language-based specifications are used to define system functions and interfaces. Based on these, a formal model is developed that correctly captures the specifications. On the other side, a mathematical model describing the system's dynamics is used to calculate the system's reachability set, which is further used to determine the system input boundaries. A motion planning algorithm is then applied inside the system boundaries to find an optimized trajectory, in combination with the formal specification model, while satisfying the specifications. The result is a control strategy that can be applied in real time, independent of the scenario, with a mathematical guarantee of satisfying a predefined specification. We demonstrate the applicability of the method in simulated driving scenarios and a potential certification. Originality, significance, and benefit: To the authors' best knowledge, this is the first time that an automated workflow has been shown which combines a specification in an English-like language and a mathematical model, in a formally verified way, to synthesize a controller for potential real-time applications like autonomous driving.Keywords: formal system verification, reachability, real time controller, hybrid system
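The reachability step in such a workflow can be illustrated with a deliberately coarse interval over-approximation for a discrete-time linear system. Production verification tools use much tighter set representations (zonotopes, support functions), so the sketch below is only a conceptual stand-in with hypothetical system matrices, not the authors' tool chain:

```python
import numpy as np

def interval_reach(A, B, x_lo, x_hi, u_lo, u_hi, steps):
    """Coarse interval over-approximation of the states reachable by
    x+ = A x + B u with x in [x_lo, x_hi] and u in [u_lo, u_hi].
    If the returned box stays inside a rule-of-the-road constraint
    (e.g., a speed limit), the constraint is guaranteed to hold."""
    Ap, An = np.maximum(A, 0), np.minimum(A, 0)  # monotone split of A
    Bp, Bn = np.maximum(B, 0), np.minimum(B, 0)  # monotone split of B
    for _ in range(steps):
        x_lo, x_hi = (Ap @ x_lo + An @ x_hi + Bp @ u_lo + Bn @ u_hi,
                      Ap @ x_hi + An @ x_lo + Bp @ u_hi + Bn @ u_lo)
    return x_lo, x_hi

# Hypothetical double-integrator (position, velocity) with bounded acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
lo, hi = interval_reach(A, B, np.zeros(2), np.zeros(2),
                        np.array([-2.0]), np.array([2.0]), steps=50)
```

Bounds of this kind are what allow a motion planner to be restricted to input boundaries within which the formal specification can never be violated, regardless of the disturbance realization inside the modeled intervals.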
Procedia PDF Downloads 241