Search results for: measurement accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6148

1078 A Machine Learning Approach for Detecting and Locating Hardware Trojans

Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He

Abstract:

The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware Trojan detection method for large-scale circuits. Because HTs are additional redundant circuitry and thus introduce changes in physical characteristics such as structure, area, and power consumption, we propose a machine-learning-based detection method built on the physical characteristics of gate-level netlists. This method recasts hardware Trojan detection as a machine-learning binary classification problem over physical characteristics, greatly improving detection speed. To address the imbalanced-data problem, in which HT circuit samples are far fewer than Trojan-free circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve classifier performance. We trained and validated three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, on Trust-Hub benchmark circuits, and all achieved good results. In case studies based on the AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness for detecting variant HTs, we designed variant HTs from open-source HTs. The proposed method delivers robust detection accuracy with millisecond-level detection times for IC and FPGA design flows and shows good detection performance for library-variant HTs.
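The class-balancing step above can be illustrated with a minimal sketch of the SMOTE interpolation idea that underlies SMOTETomek (the Tomek-link cleaning step is omitted); the "netlist features" here are synthetic toy data, not Trust-Hub measurements:

```python
import numpy as np

def smote_oversample(minority, n_new, k=3, rng=None):
    """Generate synthetic minority samples by interpolating between each
    sample and one of its k nearest minority neighbours (the SMOTE idea)."""
    if rng is None:
        rng = np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]        # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                          # interpolation fraction
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)

# Toy "netlist features": 40 Trojan-free nets vs. only 4 Trojan nets.
rng = np.random.default_rng(42)
clean = rng.normal(0.0, 1.0, size=(40, 3))
trojan = rng.normal(4.0, 1.0, size=(4, 3))
new = smote_oversample(trojan, n_new=36, rng=rng)
balanced = np.vstack([trojan, new])                 # now 40 minority samples
```

Because the synthetic points are convex combinations of real minority points, they stay inside the Trojan cluster, letting the classifier see a balanced training set without inventing out-of-distribution samples.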

Keywords: hardware trojans, physical properties, machine learning, hardware security

Procedia PDF Downloads 153
1077 Comparison of an Anthropomorphic PRESAGE® Dosimeter and Radiochromic Film with a Commercial Radiation Treatment Planning System for Breast IMRT: A Feasibility Study

Authors: Khalid Iqbal

Abstract:

This work presents a comparison of an anthropomorphic PRESAGE® dosimeter and radiochromic film measurements with a commercial treatment planning system to determine the feasibility of PRESAGE® for 3D dosimetry in breast IMRT. An anthropomorphic PRESAGE® phantom was created in the shape of a breast phantom. A five-field IMRT plan was generated with a commercially available treatment planning system and delivered to the PRESAGE® phantom. The anthropomorphic PRESAGE® was scanned with the Duke midsized optical CT scanner (DMOS-RPC), and the OD distribution was converted to dose. Comparisons were performed between the dose distribution calculated with the Pinnacle3 treatment planning system, PRESAGE®, and EBT2 film measurements. DVHs, gamma maps, and line profiles were used to evaluate the agreement. Gamma map comparisons showed that Pinnacle3 agreed with PRESAGE®, as greater than 95% of comparison points for the PTV passed a ±3%/±3 mm criterion when the outer 8 mm of phantom data were excluded. Edge artifacts were observed in the optical CT reconstruction, from the surface to approximately 8 mm depth. These artifacts resulted in dose differences between Pinnacle3 and PRESAGE® of up to 5% between the surface and a depth of 8 mm and decreased with increasing depth in the phantom. Line profile comparisons between all three independent measurements yielded a maximum difference of 2% within the central 80% of the field width. For the breast IMRT plan studied, the Pinnacle3 calculations agreed with PRESAGE® measurements to within the ±3%/±3 mm gamma criterion. This work demonstrates the feasibility of fashioning PRESAGE® into an anthropomorphic shape and establishes the accuracy of Pinnacle3 for breast IMRT. Furthermore, these data have established the groundwork for future investigations into 3D dosimetry with more complex anthropomorphic phantoms.
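The ±3%/±3 mm gamma criterion used above can be sketched as a one-dimensional global gamma analysis; the dose profiles below are synthetic Gaussians, not the measured PRESAGE® or EBT2 data:

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """1-D global gamma analysis: for each reference point, take the minimum
    gamma over all measured points; gamma <= 1 counts as a pass."""
    x = np.arange(len(ref)) * spacing_mm
    norm = ref.max()                                   # global normalization
    passes = 0
    for i, d_ref in enumerate(ref):
        dd = (meas - d_ref) / (dose_tol * norm)        # dose-difference term
        dx = (x - x[i]) / dist_mm                      # distance-to-agreement term
        gamma = np.sqrt(dd ** 2 + dx ** 2).min()
        passes += gamma <= 1.0
    return passes / len(ref)

x = np.linspace(0, 50, 101)                      # 0.5 mm grid
ref = np.exp(-((x - 25) / 10) ** 2)              # reference profile
meas = 1.01 * np.exp(-((x - 25.4) / 10) ** 2)    # 1% scaled, 0.4 mm shifted
rate = gamma_pass_rate(ref, meas, spacing_mm=0.5)
```

A small dose scaling plus a sub-millimetre shift stays well inside the ±3%/±3 mm tolerance, so the pass rate is essentially 100%, mirroring the >95% agreement reported for the PTV.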

Keywords: 3D dosimetry, PRESAGE®, IMRT, QA, EBT2 GAFCHROMIC film

Procedia PDF Downloads 420
1076 Effects of Rations with High Amount of Crude Fiber on Rumen Fermentation in Suckler Cows

Authors: H. Scholz, P. Kuehne, G. Heckenberger

Abstract:

Problems during the calving period (December until May) often result from a high body condition score (BCS) at calving. At the end of the grazing period (frequently after early weaning), an increase in BCS can often be observed under German conditions. In the last eight weeks before calving, body condition should therefore be reduced, or at least not increased. Rations with a higher amount of crude fiber (rations with straw or late-mown grass silage) can be used for this purpose. Fermentative digestion of fiber is slow and incomplete, so fermentation in the rumen may be depressed over a long feeding period. Against this background, the feed intake of suckler cows (8 weeks before calving) on different rations and the fermentation in the rumen were checked by sampling rumen fluid. Eight suckler cows (Charolais) were fed a Total Mixed Ration (TMR) in the last eight weeks before calving and grass silage after calving. The amount of crude fiber in the TMR (grass silage, straw, mineral) before calving was varied by the addition of straw (30% [TMR1] vs. 60% [TMR2] of dry matter). After calving, grass silage [GS] was fed ad libitum, and the last measurement of rumen fluid took place on pasture [PS]. Rumen fluid, plasma, body weight, and backfat thickness were collected. Rumen fluid pH was assessed using an electronic pH meter. Volatile fatty acids (VFA), sedimentation, methylene blue reduction, and the number of infusoria were measured. From these four parameters, an "index of rumen fermentation" [IRF] was formed. Fixed effects of treatment (TMR1, TMR2, GS, and PS) and number of lactations (3-7 lactations) were analyzed by ANOVA using SPSS version 25.0 (significance at p ≤ 0.05). Rumen fluid pH was significantly influenced by the variants (TMR1: 6.6; TMR2: 6.9; GS: 6.6; PS: 6.9) but was not affected by the other effects.
The IRF indicated disturbed fermentation in the rumen when feeding TMR1 and TMR2 with their high amounts of crude fiber (score > 10.0 points) and a very good fermentation environment during grazing on pasture (score: 6.9 points). Furthermore, significant differences were found for VFA, methylene blue reduction, and the number of infusoria. The use of rations with a high amount of crude fiber from weaning to calving may cause deviations from undisturbed fermentation in the rumen and adversely affect feed utilization in the rumen.
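The "index of rumen fermentation" combines four rumen-fluid parameters into a single score. A hypothetical scoring sketch might look like the following; the band cut-offs are illustrative assumptions, not the authors' actual scheme:

```python
def irf_score(sedimentation_min, methylene_blue_min, vfa_mmol_l, infusoria_per_ml):
    """Hypothetical 'index of rumen fermentation': each of the four rumen-fluid
    parameters is scored 1 (good) to 3 (poor) and the scores are summed, so
    4 points is ideal and high totals suggest disturbed fermentation.
    All cut-off values below are illustrative assumptions."""
    def band_lo(value, good, fair):     # lower values are better
        return 1 if value <= good else (2 if value <= fair else 3)
    def band_hi(value, good, fair):     # higher values are better
        return 1 if value >= good else (2 if value >= fair else 3)
    return (band_lo(sedimentation_min, 8, 15)
            + band_lo(methylene_blue_min, 3, 6)
            + band_hi(vfa_mmol_l, 100, 70)
            + band_hi(infusoria_per_ml, 300_000, 150_000))
```

Under this toy scheme, a healthy sample scores the minimum of 4, while slow sedimentation, slow methylene blue reduction, low VFA, and few infusoria together push the total well above the "disturbed" threshold.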

Keywords: rumen fermentation, suckler cow, digestibility organic matter, crude fiber

Procedia PDF Downloads 148
1075 Apollo Quality Program: The Essential Framework for Implementing Patient Safety

Authors: Anupam Sibal

Abstract:

Apollo Quality Program (AQP) was launched across the Apollo Group of Hospitals to address four patient safety areas: safety during clinical handovers, medication safety, surgical safety, and the six International Patient Safety Goals (IPSGs) of JCI. A measurable, online quality dashboard covering 20 process and outcome parameters was devised for monthly monitoring. The expected outcomes were also defined and categorized into green, yellow, and red ranges. An audit methodology was devised to check the processes behind the measurable dashboard. Documented clinical handovers were introduced for the first time at many locations for in-house patient transfers, nursing handover, and physician handover. Prototype forms using the SBAR format were made. Patient identifiers, read-back for verbal orders, safety of high-alert medications, site marking, time-outs, and falls risk assessment were introduced at all hospitals irrespective of accreditation status. Surgical site infection (SSI) was measured for 30 days postoperatively. All hospitals now tracked the time of administration of antimicrobial prophylaxis before surgery. Situations with a high risk of retained foreign bodies were delineated and precautionary measures instituted. The audit of medications prescribed in discharge summaries was made uniform. Formularies, prescription audits, and other means of reducing medication errors were implemented. There has been a marked increase in compliance with processes and in patient safety outcomes. Compliance with read-back for verbal orders rose from 86.83% in April 2011 to 96.95% in June 2015; with the policy for high-alert medications from 87.83% to 98.82%; with measures to prevent wrong-site, wrong-patient, wrong-procedure surgery from 85.75% to 97.66%; with hand-washing from 69.18% to 92.54%; and with antimicrobial prophylaxis within one hour before incision from 79.43% to 93.46%.
The percentage of patients excluded from the SSI calculation for lack of follow-up over the requisite time frame decreased from 21.25% to 10.25%. The average AQP score for all Apollo Hospitals improved from 62 in April 2011 to 87.7 in June 2015.
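The green/yellow/red dashboard ranges can be sketched as a simple categorisation of compliance percentages; the thresholds and parameter names below are illustrative assumptions, not Apollo's actual cut-offs:

```python
def traffic_light(value, green_min=95.0, yellow_min=85.0):
    """Categorise a compliance percentage into the dashboard's green/yellow/red
    ranges. The 95/85 thresholds are illustrative, not the program's own."""
    if value >= green_min:
        return "green"
    if value >= yellow_min:
        return "yellow"
    return "red"

# June 2015 compliance figures from the abstract, mapped to dashboard colours.
dashboard = {
    "read-back for verbal orders": 96.95,
    "high-alert medication policy": 98.82,
    "wrong-site/patient/procedure prevention": 97.66,
    "hand-washing": 92.54,
    "antimicrobial prophylaxis <1h before incision": 93.46,
}
status = {name: traffic_light(pct) for name, pct in dashboard.items()}
```

A monthly job over the 20 dashboard parameters would flag any "red" or "yellow" item for audit, which is the monitoring loop the program describes.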

Keywords: clinical handovers, international patient safety goals, medication safety, surgical safety

Procedia PDF Downloads 262
1074 High-Resolution Flood Hazard Mapping Using Two-Dimensional Hydrodynamic Model Anuga: Case Study of Jakarta, Indonesia

Authors: Hengki Eko Putra, Dennish Ari Putro, Tri Wahyu Hadi, Edi Riawan, Junnaedhi Dewa Gede, Aditia Rojali, Fariza Dian Prasetyo, Yudhistira Satya Pribadi, Dita Fatria Andarini, Mila Khaerunisa, Raditya Hanung Prakoswa

Abstract:

Catastrophe risk management is only possible if we can calculate the exposed risks. Jakarta is an important city economically, socially, and politically, and at the same time it is exposed to severe floods; yet flood risk calculation in the area is still very limited. This study calculated the flood risk for Jakarta using the two-dimensional model ANUGA. The 2-D model ANUGA and the 1-D model HEC-RAS were used to calculate the flood risk from 13 major rivers in Jakarta. ANUGA can simulate the physical and dynamical interaction of streamflow with river geometry and land cover to produce a 1-meter-resolution inundation map. The streamflow values used as model input were obtained from hydrological analysis of rainfall data using the hydrologic model HEC-HMS. The probabilistic streamflow was derived from probabilistic rainfall using the Log-Pearson III, Normal, and Gumbel statistical distributions, with goodness of fit checked by the Chi-Square and Kolmogorov-Smirnov tests. The 2007 flood event was used as a benchmark to evaluate the accuracy of the model output. Property damage estimates were calculated from flood depth for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods against housing value data from BPS-Statistics Indonesia and the Centre for Research and Development of Housing and Settlements, Ministry of Public Works, Indonesia. The vulnerability factor was derived from flood insurance claims. Jakarta's flood loss estimates for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods are, respectively, Rp 1.30 t; Rp 16.18 t; Rp 16.85 t; Rp 21.21 t; Rp 24.32 t; and Rp 24.67 t, out of a total building value of Rp 434.43 t.
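The probabilistic-streamflow step can be illustrated with a method-of-moments Gumbel (EV1) fit that returns the discharge for a given return period; the annual peak values below are invented for illustration (the study also fitted Log-Pearson III and Normal distributions):

```python
import math
import statistics

def gumbel_quantile(sample, return_period):
    """Fit a Gumbel (EV1) distribution to annual-maximum data by the method of
    moments and return the value for a given return period (years)."""
    mean = statistics.mean(sample)
    std = statistics.stdev(sample)
    beta = std * math.sqrt(6) / math.pi      # scale parameter
    mu = mean - 0.5772 * beta                # location (Euler-Mascheroni const.)
    p = 1 - 1 / return_period                # annual non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Illustrative annual peak discharges (m3/s), not the study's actual data.
peaks = [310, 275, 420, 505, 290, 360, 445, 380, 330, 475]
q5 = gumbel_quantile(peaks, 5)
q100 = gumbel_quantile(peaks, 100)
```

The fitted quantiles for each return period are what drive the hydrodynamic model runs, which is why the loss estimates in the abstract grow with return period.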

Keywords: 2D hydrodynamic model, ANUGA, flood, flood modeling

Procedia PDF Downloads 280
1073 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble

Authors: Jaehong Yu, Seoung Bum Kim

Abstract:

Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focused on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we proposed an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, through the use of the multiple-k ensemble idea, FRRM does not require the true number of clusters to be determined in advance. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors.
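A compact sketch of the random-subspace, multiple-k idea (not the authors' exact FRRM formulation): features are scored by how well clusters found in random subspaces separate their values, averaged over an ensemble that varies both the subspace and k:

```python
import numpy as np

def kmeans(X, k, rng, iters=20):
    """Minimal Lloyd's k-means; returns cluster labels."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def frrm_like_scores(X, ks=(2, 3, 4), n_subspaces=30, subspace_size=2, seed=0):
    """Score each feature by between-cluster over total variance, averaged
    over clusterings on random subspaces with randomly varied k."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(X.shape[1])
    counts = np.zeros(X.shape[1])
    for _ in range(n_subspaces):
        feats = rng.choice(X.shape[1], subspace_size, replace=False)
        labels = kmeans(X[:, feats], ks[rng.integers(len(ks))], rng)
        for f in feats:
            col = X[:, f]
            between = sum((labels == j).sum() * (col[labels == j].mean() - col.mean()) ** 2
                          for j in set(labels)) / len(col)
            scores[f] += between / (col.var() + 1e-12)
            counts[f] += 1
    return scores / np.maximum(counts, 1)

# Two informative features carrying cluster structure, two pure-noise features.
rng = np.random.default_rng(1)
informative = np.r_[rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))]
noise = rng.normal(0, 1, (100, 2))
X = np.hstack([informative, noise])
ranking = frrm_like_scores(X)
```

Because k varies across the ensemble, no single "true" number of clusters has to be supplied, which is the key property the abstract claims for the multiple-k idea.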

Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking

Procedia PDF Downloads 342
1072 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, economy, and health. However, much research has concentrated on supervised models trained within specific domains, and their effectiveness diminishes when applied to identifying fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news to its specific domain in advance, judgments within the corresponding field may be more accurate. However, these approaches disregard the fact that a news record can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the domain embeddings discovered without supervision extracts the comprehensive features of the news. Finally, a classifier determines the authenticity of the news. To verify the proposed framework, tests were conducted on existing, widely used datasets, and the experimental results demonstrate that this method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.
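The shape of a soft, multi-domain embedding can be sketched with a keyword-overlap stand-in; the paper learns these embeddings without supervision, so the seed vocabularies below are purely illustrative:

```python
from collections import Counter

# Illustrative seed vocabularies, not learned domain representations.
DOMAIN_SEEDS = {
    "politics": {"election", "senate", "vote", "president"},
    "health": {"vaccine", "virus", "doctor", "hospital"},
    "economy": {"market", "stock", "inflation", "bank"},
}

def domain_embedding(text):
    """Soft multi-domain representation of a text: one weight per domain,
    normalised to sum to 1, so an article can belong to several domains at
    once instead of being forced under a single hard domain label."""
    tokens = Counter(text.lower().split())
    weights = {d: sum(tokens[w] for w in seeds) for d, seeds in DOMAIN_SEEDS.items()}
    total = sum(weights.values()) or 1
    return {d: w / total for d, w in weights.items()}

emb = domain_embedding("President urges vaccine rollout as market reacts to the vote")
```

The resulting vector would then be concatenated with the text features before classification; the point of the sketch is only that membership is distributed over domains rather than a single label.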

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 106
1071 Spatio-Temporal Risk Analysis of Cancer to Assess Environmental Exposures in Coimbatore, India

Authors: Janani Selvaraj, M. Prashanthi Devi, P. B. Harathi

Abstract:

Epidemiologic studies conducted over several decades have provided evidence to suggest that long-term exposure to elevated ambient levels of particulate air pollution is associated with increased mortality. Air quality risk management is significant in developing countries, and it highlights the need to understand the role of ecologic covariates in the association between air pollution and mortality. Several new methods show promise in exploring the geographical distribution of disease and identifying high-risk areas using epidemiological maps. However, the addition of the temporal attribute gives a more in-depth picture of the disease burden with respect to forecasting measures. In recent years, new methods developed in the reanalysis were useful for exploring the spatial structure of the data and the impact of spatial autocorrelation on estimates of risk associated with exposure to air pollution. Based on this, our present study aims to explore the spatial and temporal distribution of lung cancer cases in the Coimbatore district of Tamil Nadu in relation to air pollution risk areas. A spatio-temporal moving average was computed using the CrimeStat software and visualized in ArcGIS 10.1 to document the spatio-temporal movement of the disease in the study region. The random walk analysis performed showed the progression of peak cancer incidence in the intersection regions of the Coimbatore North and South taluks, which include major commercial and residential areas such as Gandhipuram, Peelamedu, Ganapathy, etc. Our study shows evidence that daily exposure to high air pollutant concentration zones may lead to the risk of lung cancer. The observations from the present study will be useful in delineating high-risk zones of environmental exposure that contribute to the increase of cancer among daily commuters.
Through our study, we suggest that spatially resolved exposure models over relevant time frames will delineate high-risk zones better than reliance solely on statistical theory about the impact of measurement error and on empirical findings.
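The spatio-temporal moving average can be sketched as a count of case events within a spatial radius and a time window around each query point; this is a simplified stand-in for the CrimeStat routine, with invented coordinates:

```python
import math

def st_moving_average(events, x, y, t, radius_km, window_days):
    """Spatio-temporal moving average: local intensity at (x, y, t) is the
    count of events within a spatial radius and a time window. A simplified
    stand-in for the CrimeStat routine used in the study."""
    hits = 0
    for ex, ey, et in events:
        if abs(et - t) <= window_days and math.hypot(ex - x, ey - y) <= radius_km:
            hits += 1
    return hits

# Illustrative (x km, y km, day) case records, not the study's data.
cases = [(1.0, 1.0, 10), (1.5, 0.8, 12), (0.9, 1.2, 14),
         (9.0, 9.0, 11), (1.1, 1.0, 200)]
hot = st_moving_average(cases, 1.0, 1.0, 12, radius_km=2, window_days=5)
cold = st_moving_average(cases, 9.0, 9.0, 200, radius_km=2, window_days=5)
```

Sliding the query point over a space-time grid and mapping the counts is what produces the moving hot-spot surfaces that the study visualized in ArcGIS.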

Keywords: air pollution, cancer, spatio-temporal analysis, India

Procedia PDF Downloads 517
1070 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, damage to vehicles, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Automatic distress detection is therefore needed to reduce the cost of manual inspection and avoid more serious damage by implementing appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a deep convolutional neural network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested on a multi-label classification task. In addition, to get the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The model is optimized by checking all feasible combinations and selecting the best-performing one. After optimization, the model's performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
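"Checking all feasible combinations and selecting the best-performing one" is an exhaustive grid search. A sketch with an illustrative search space and a stand-in evaluator (real use would train and validate the DCNN for each configuration):

```python
from itertools import product

# Illustrative search space; the study's actual grids may differ.
space = {
    "n_conv_blocks": [2, 3, 4],
    "n_filters": [16, 32],
    "kernel_size": [3, 5],
    "optimizer": ["adam", "sgd"],
    "batch_size": [16, 32],
    "learning_rate": [1e-3, 1e-4],
}

def exhaustive_search(space, evaluate):
    """Enumerate every combination of hyperparameter values and keep the
    best-scoring configuration. `evaluate` would train and validate a DCNN;
    here it is replaced by a stand-in scoring function."""
    best_cfg, best_score = None, float("-inf")
    for values in product(*space.values()):
        cfg = dict(zip(space.keys(), values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy evaluator: pretend deeper nets with adam and lr=1e-3 validate best.
toy = lambda c: c["n_conv_blocks"] + (c["optimizer"] == "adam") + (c["learning_rate"] == 1e-3)
best, score = exhaustive_search(space, toy)
```

The space above has 3 x 2 x 2 x 2 x 2 x 2 = 96 combinations; exhaustive search is only tractable when each dimension has few candidate values, which is the trade-off implicit in the paper's strategy.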

Keywords: distress pavement, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 99
1069 Creating Database and Building 3D Geological Models: A Case Study on Bac Ai Pumped Storage Hydropower Project

Authors: Nguyen Chi Quang, Nguyen Duong Tri Nguyen

Abstract:

This article is a first step toward researching and outlining the structure of a geotechnical database for the geological survey of a power project; in this report, the database was created for the Bac Ai pumped storage hydropower project. To provide a method of organizing and storing geological and topographic survey data and experimental results in a spatial database, the RockWorks software is used to bring optimal efficiency to the process of exploiting, using, and analyzing data in service of design work in power engineering consulting. Three-dimensional (3D) geotechnical models are created from the survey data, covering stratigraphy, lithology, porosity, etc. The results of the 3D geotechnical model for the Bac Ai pumped storage hydropower project include six closely stacked stratigraphic formations built with the Horizons method, whereas the modeling of engineering geological parameters is performed with geostatistical methods. Accuracy and reliability are assessed through error statistics, empirical evaluation, and expert methods. The three-dimensional model analysis allows better visualization of volumetric calculations, excavation and backfilling of the lake area, tunneling of power pipelines, and calculation of on-site construction material reserves. In general, the application of engineering geological modeling makes the design work more intuitive and comprehensive, helping construction designers better identify and offer the most optimal design solutions for the project. The database always ensures updating and synchronization, and enables 3D modeling of geological and topographic data to be integrated with the design data according to building information modeling. This is also the base platform for BIM & GIS integration.

Keywords: database, engineering geology, 3D Model, RockWorks, Bac Ai pumped storage hydropower project

Procedia PDF Downloads 174
1068 Characterization of Kevlar 29 for Multifunction Applications

Authors: Doaa H. Elgohary, Dina M. Hamoda, S. Yahia

Abstract:

Technical textiles refer to textile materials that are engineered and designed to have specific functionalities and performance characteristics beyond their traditional use as apparel or upholstery fabrics. These textiles are usually developed for unique properties such as strength, durability, flame retardancy, chemical resistance, waterproofing, insulation, and other special properties. The development and use of technical textiles are constantly evolving, driven by advances in materials science, manufacturing technologies, and the demand for innovative solutions in various industries. Kevlar 29 is a type of aramid fiber developed by DuPont. It is a high-performance material known for its exceptional strength and resistance to impact, abrasion, and heat. Kevlar 29 belongs to the Kevlar family, which includes different types of aramid fibers. It is primarily used in applications that require strength and durability, such as ballistic protection and body armor for military and law enforcement personnel. It is also used in the aerospace and automotive industries to reinforce composite materials, as well as in various industrial applications. Two different Kevlar samples coated with copper lithium silicate (CLS) were used; ten different mechanical and physical properties (weight, thickness, tensile strength, elongation, stiffness, air permeability, water permeability, puncture resistance, thermal conductivity, and spray test) were measured to verify their functional performance efficiency. The influence on the different mechanical properties was statistically analyzed using an independent t-test with a significance level of P = 0.05. A radar plot was calculated and evaluated to determine the best-performing sample. The independent t-test showed that all variables were significantly affected by yarn count except water permeability, which showed no significant effect.
All properties were evaluated for samples 1 and 2, and a radar chart was used to determine the better-performing sample. The radar chart area was calculated, showing that sample 1 recorded the best performance, followed by sample 2. The surface morphology of all samples and of the coating material was examined using a scanning electron microscope (SEM), and Fourier transform infrared (FTIR) spectroscopy measurements were performed for the two samples.
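The radar chart area used to compare the two samples can be computed as the sum of the triangles between consecutive spokes; the property scores below are illustrative, not the measured values:

```python
import math

def radar_area(values):
    """Area of the polygon traced on a radar chart with n equally spaced axes:
    the sum over consecutive spokes of (1/2) * r_i * r_{i+1} * sin(2*pi/n)."""
    n = len(values)
    step = 2 * math.pi / n
    return 0.5 * math.sin(step) * sum(
        values[i] * values[(i + 1) % n] for i in range(n))

# Illustrative normalised scores on ten properties, not the measured values.
sample1 = [0.9, 0.8, 0.85, 0.9, 0.7, 0.95, 0.8, 0.85, 0.9, 0.75]
sample2 = [0.7, 0.6, 0.8, 0.7, 0.65, 0.8, 0.7, 0.75, 0.8, 0.6]
```

Comparing `radar_area(sample1)` with `radar_area(sample2)` reproduces the paper's ranking logic: the sample whose polygon encloses the larger area is judged the better all-round performer.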

Keywords: copper lithium silicate, independent t-test, kevlar, technical textiles

Procedia PDF Downloads 83
1067 Lack of Physical Activity in Schools: Study Carried Out on School-Aged Adolescents

Authors: Bencharif Meriem, Sersar Ibrahim, Djaafri Zineb

Abstract:

Introduction and purpose of the study: Education plays a fundamental role in the lives of young people, but what about their physical well-being as they spend long hours sitting at school? School inactivity is a problem that deserves particular attention because it can have significant repercussions on the health and development of students. The aim of this study was to describe and evaluate the physical activity of students in different settings: in class, at recess, and in the canteen. Material and methods: A physical activity diary and an anthropometric measurement sheet (weight, height) were provided to 123 school-aged adolescents. The measurements were carried out according to international recommendations. The statistical tests were carried out with R software, version 3.2.4. The significance threshold retained was 0.05. Results and Statistical Analysis: One hundred and twenty-three students agreed to participate in the study. Their average age was 16.5±1.60 years. Overweight was present in 8.13% and obesity in 4.06%. During physical education and sports classes, all students played sports, with an average of 1.94±1.00 hours/week, and 74.00% sweated or were out of breath during these hours of physical activity. It was also noted that boys practiced sports more than girls (p<0.0001). Each day, on average, students spent 39.78±37.85 min walking or running during recess. On the other hand, they spent, on average, 4.25±2.65 hours per day sitting in class, at recess, in the canteen, etc., without counting time spent in front of a screen. The increasing use of screens has become a major concern for parents and educators. On average, students spent approximately 42.90±38.41 min per day using screens (computer, tablet, telephone, video games, etc.) in class, at recess, in the canteen, and at home, contributing to a prolonged sedentary lifestyle.
On average, students sat for more than 1.5 hours without moving for at least 2 minutes in a row approximately 1.72±0.71 times per day. Conclusion: These students spent many hours sitting at school. This prolonged inactivity can have negative consequences on their health, including problems with posture and cardiovascular health. It is crucial that schools, educators, and parents collaborate to promote more active learning environments where students can move more and thus contribute to their overall well-being. It is time to rethink how we approach education and student health to give them a healthier, more active future.

Keywords: physical activity, sedentary behavior, adolescents, school

Procedia PDF Downloads 64
1066 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant

Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani

Abstract:

Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes. Their ability to learn from large datasets through artificial intelligence makes them particularly effective models. The primary obstacle in improving the performance of these models is the careful choice of suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using a sophisticated approach: hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) for an optimized deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m3 and a total volume of 1914 m3; its internal diameter and height are 19 m and 7.14 m, respectively. The data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques were applied to the pre-processed data (MinMaxScaler, RobustScaler, and StandardScaler) and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
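The RobustScaler normalization and the reported error metrics can be sketched as follows; the flow values are toy data chosen to show why median/IQR scaling resists outliers:

```python
import statistics

def robust_scale(column):
    """RobustScaler idea: centre on the median and divide by the interquartile
    range, so a single outlier influences the scaling far less than the
    mean/std used by StandardScaler."""
    q = statistics.quantiles(column, n=4)        # q[0] = Q1, q[2] = Q3
    iqr = (q[2] - q[0]) or 1.0
    med = statistics.median(column)
    return [(v - med) / iqr for v in column]

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

# Toy biogas flows with one outlier; the median/IQR scaling keeps the bulk of
# the data in a narrow range despite the 9000 spike.
flows = [1200, 1350, 1280, 1500, 9000]
scaled = robust_scale(flows)
```

These are the same MAE/RMSE definitions behind the 164.610 and 223.429 figures quoted above; only the data here is invented.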

Keywords: anaerobic digestion, biogas production, deep neural network, hybrid BO-TPE, hyperparameter tuning

Procedia PDF Downloads 43
1065 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa

Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam

Abstract:

Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects. Such aspects include ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has largely remained elusive due to the complexity of the species structure and their distribution. Therefore, the objective of the current study was to examine the utility of the advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. This study offers relatively accurate information that is important for forest managers making informed decisions regarding management and conservation protocols for TTS.
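Overall accuracy and total disagreement follow directly from a confusion matrix; the 3-class matrix below is illustrative, not the study's actual SVM or ANN results:

```python
def overall_accuracy(confusion):
    """Overall accuracy (OA) from a confusion matrix: correctly classified
    samples (the diagonal) over all samples; total disagreement is 1 - OA."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Illustrative 3-class matrix (rows = reference, columns = predicted),
# e.g. three tree-species classes from a supervised classification.
cm = [[50, 5, 5],
      [4, 45, 6],
      [6, 6, 48]]
oa = overall_accuracy(cm)
disagreement = 1 - oa
```

The 77.00%/23.00% and 75.00%/25.00% pairs reported above are exactly this OA/disagreement split computed on the two classifiers' confusion matrices.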

Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines

Procedia PDF Downloads 519
1064 Evaluation and Analysis of ZigBee-Based Wireless Sensor Network: Home Monitoring as Case Study

Authors: Omojokun G. Aju, Adedayo O. Sule

Abstract:

ZigBee wireless sensor and control networking is one of the most widely deployed wireless technologies of recent years. This is because ZigBee is an open-standard, lightweight, low-cost, low-speed, low-power protocol that allows true interoperability between systems. It is built on the existing IEEE 802.15.4 protocol and therefore combines the IEEE 802.15.4 features with newly added features to meet the required functionalities, thereby finding applications in a wide variety of wireless networked systems. ZigBee's current focus is on embedded applications of general-purpose, inexpensive, self-organising networks which require low to medium data rates, a high number of nodes, and very low power consumption, such as home/industrial automation, embedded sensing, medical data collection, smart lighting, safety and security sensor networks, and monitoring systems. Although the ZigBee design specification includes security features to protect data communication confidentiality and integrity, when simplicity and low cost are the goals, security is normally traded off. A great deal of research has been carried out on ZigBee technology, with emphasis mainly placed on ZigBee network performance characteristics such as energy efficiency, throughput, robustness, packet delay, and delivery ratio in different scenarios and applications. This paper investigates and analyses the data accuracy, network implementation difficulties, and security challenges of ZigBee network applications in star-based and mesh-based topologies, with emphasis on the home monitoring application, using ZigBee ProBee ZE-10 development boards for the network setup. The paper also exposes some factors that need to be considered when designing ZigBee network applications and suggests ways in which ZigBee networks can be designed to provide greater resilience against network attacks.

Keywords: home monitoring, IEEE 802.15.4, topology, wireless security, wireless sensor network (WSN), ZigBee

Procedia PDF Downloads 387
1063 A Damage Level Assessment Model for Extra High Voltage Transmission Towers

Authors: Huan-Chieh Chiu, Hung-Shuo Wu, Chien-Hao Wang, Yu-Cheng Yang, Ching-Ya Tseng, Joe-Air Jiang

Abstract:

Power failure resulting from tower collapse due to violent seismic events can bring enormous and inestimable losses. The Chi-Chi earthquake, for example, strongly struck Taiwan and caused huge damage to the power system on September 21, 1999. Nearly 10% of extra high voltage (EHV) transmission towers were damaged in the earthquake. Therefore, the seismic hazards of EHV transmission towers should be monitored and evaluated. The ultimate goal of this study is to establish a damage level assessment model for EHV transmission towers. The earthquake data provided by the Taiwan Central Weather Bureau serve as a reference and lay the foundation for the subsequent earthquake simulations and analyses. Parameters related to the damage level at each point of an EHV tower are simulated and analyzed using the data from monitoring stations once an earthquake occurs. Through the Fourier transform, the seismic wave is decomposed into its frequency components, and the data are presented as a response spectrum. With this method, the seismic frequency that damages EHV towers the most is clearly identified. An estimation model is then built to determine the damage level caused by a future seismic event. Finally, instead of relying on visual observation by inspectors, the proposed model can provide a power company with the damage information of a transmission tower. Using the model, the manpower required for visual observation can be reduced, and the accuracy of the damage level estimation can be substantially improved. Such a model is greatly useful for health and construction monitoring because of the advantages of long-term evaluation of structural characteristics and long-term damage detection.
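The spectral step described above (Fourier-transforming a seismic record to find the most damaging frequency) can be sketched in a few lines. The waveform, sampling rate, and dominant component below are synthetic placeholders, not data from the monitoring stations.

```python
# Hedged sketch: identifying the dominant frequency of a seismic record via FFT.
# The ground-acceleration signal is synthetic (2 Hz component plus noise).
import numpy as np

fs = 100.0                              # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)
acc = (1.0 * np.sin(2 * np.pi * 2.0 * t)
       + 0.2 * np.random.default_rng(1).normal(size=t.size))

spec = np.abs(np.fft.rfft(acc))         # amplitude spectrum
freqs = np.fft.rfftfreq(acc.size, d=1 / fs)
dominant = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
print(f"dominant frequency: {dominant:.2f} Hz")
```

In the study's setting, the same peak-picking over recorded spectra would flag the frequency band most damaging to the towers.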

Keywords: damage level monitoring, drift ratio, fragility curve, smart grid, transmission tower

Procedia PDF Downloads 302
1062 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias correction steps, which are based on biological considerations, such as GC content, and applied in single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived based on the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
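The alternating idea behind XAEM can be illustrated with a stripped-down analogue: alternating least squares on a bilinear model Y ≈ XB where both factors are unknown, estimated jointly across samples. The real method alternates EM steps on count data; this sketch only shows the alternation skeleton, and all sizes and data are made up.

```python
# Hedged sketch: alternating estimation for a bilinear model Y ≈ X B,
# with both X (design) and B (expression) unknown. Simplified to least
# squares; XAEM itself uses an alternating EM on read counts.
import numpy as np

rng = np.random.default_rng(0)
p, k, n = 30, 2, 10                      # bins, isoforms, samples (assumed)
X_true = rng.uniform(0.1, 1.0, (p, k))
B_true = rng.uniform(1.0, 5.0, (k, n))
Y = X_true @ B_true + 0.01 * rng.normal(size=(p, n))

X = rng.uniform(0.1, 1.0, (p, k))        # random init of the design matrix
for _ in range(200):
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)        # update B, X fixed
    X = np.linalg.lstsq(B.T, Y.T, rcond=None)[0].T   # update X, B fixed

resid = np.linalg.norm(Y - X @ B) / np.linalg.norm(Y)
print(f"relative residual: {resid:.2e}")
```

Because many samples share the same X, the alternation can absorb unknown systematic biases into X instead of assuming them away, which is the key point of the bilinear formulation.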

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 146
1061 Meta-Magnetic Properties of LaFe₁₂B₆ Type Compounds

Authors: Baptiste Vallet-Simond, Léopold V. B. Diop, Olivier Isnard

Abstract:

The antiferromagnetic itinerant-electron compound LaFe₁₂B₆ occupies a special place among rare-earth iron-rich intermetallics: it presents exotic magnetic and physical properties. The most relevant are the unusual amplitude-modulated spin configuration defined by a propagation vector k = (¼, ¼, ¼), the remarkably weak Fe magnetic moment (0.43 μB) in the antiferromagnetic ground state, the especially low magnetic ordering temperature TN = 36 K for an Fe-rich phase, a multicritical point in the complex magnetic phase diagram, both normal and inverse magnetocaloric effects, and huge hydrostatic pressure effects. Both the antiferromagnetic (AFM) and paramagnetic (PM) states can be transformed into the ferromagnetic (FM) state via a field-induced first-order metamagnetic transition. Of particular interest is the low-temperature magnetization process. This process is discontinuous and involves unexpectedly large metamagnetic transitions consisting of a succession of steep magnetization jumps separated by plateaus, giving rise to an unusual avalanche-like behavior. The metamagnetic transition is accompanied by giant magnetoresistance and large magnetostriction. In the present work, we report on the intrinsic magnetic properties of the La₁₋ₓPrₓFe₁₂B₆ series of compounds exhibiting sharp metamagnetic transitions. The structural, magnetic, magneto-transport, and magnetostrictive properties of the La₁₋ₓPrₓFe₁₂B₆ system were studied by combining a wide variety of measurement techniques. Magnetic measurements were performed up to µ0H = 10 T. It was found that the proportion of Pr has a strong influence on the magnetic properties of this series of compounds. At x = 0.05, the ground state at 2 K is that of an antiferromagnet, but the critical transition field Hc is lowered from Hc = 6 T at x = 0 to Hc = 2.5 T at x = 0.05. Starting from x = 0.10, the ground state is a coexistence of AFM and FM parts. At x = 0.30, the AFM order has completely vanished, and only the FM part remains. However, metamagnetic transitions are still observed at higher temperatures (above 100 K for x = 0.30) from the paramagnetic (PM) state to a forced FM state, and such transitions are likewise accompanied by strong magnetocaloric, magnetostrictive, and magnetoresistance effects. The Curie temperatures of the probed compositions, from x = 0.05 to x = 0.30, are spread over the range of 40 K to 100 K.

Keywords: metamagnetism, RMB intermetallic, magneto-transport effect, metamagnetic transitions

Procedia PDF Downloads 74
1060 Enhanced Dielectric Properties of La Substituted CoFe2O4 Magnetic Nanoparticles

Authors: M. Vadivel, R. Ramesh Babu

Abstract:

Spinel ferrite magnetic nanomaterials have received a great deal of attention in recent years due to their wide range of potential applications in various fields such as magnetic data storage and microwave device applications. Among the family of spinel ferrites, cobalt ferrite (CoFe2O4) has been widely used in the field of high-frequency applications because of its remarkable material qualities such as moderate saturation magnetization, high coercivity, large permeability at higher frequency and high electrical resistivity. For aforementioned applications, the materials should have an improved electrical property, especially enhancement in the dielectric properties. It is well known that the substitution of rare earth metal cations in Fe3+ site of CoFe2O4 nanoparticles leads to structural distortion and thus significantly influences the structural and morphological properties whereas greatly modifies the electrical and magnetic properties of a material. In the present investigation, we report on the influence of lanthanum (La3+) ion substitution on the structural, morphological, dielectric and magnetic properties of CoFe2O4 magnetic nanoparticles prepared by co-precipitation method. Powder X-ray diffraction patterns reveal the formation of inverse cubic spinel structure with the signature of LaFeO3 phase at higher La3+ ion concentrations. Raman and Fourier transform infrared spectral analysis also confirms the formation of inverse cubic spinel structure and Fe-O symmetrical stretching vibrations of CoFe2O4 nanoparticles, respectively. Transmission electron microscopy study reveals that the size of the particles gradually increases with increasing La3+ ion concentrations whereas the agglomeration gets slightly reduced for La3+ ion substituted CoFe2O4 nanoparticles than that of undoped CoFe2O4 nanoparticles. 
Dielectric properties such as dielectric constant and dielectric loss were recorded as a function of frequency and temperature, revealing that the dielectric constant gradually increases with increasing temperature as well as with La3+ ion concentration. The increased dielectric constant may be attributed to the formation of the LaFeO3 secondary phase at higher La3+ ion concentrations. Magnetic measurement demonstrates that the saturation magnetization gradually decreases from 61.45 to 25.13 emu/g with increasing La3+ ion concentration, owing to the nonmagnetic nature of the substituted La3+ ions.

Keywords: cobalt ferrite, co-precipitation, dielectric properties, saturation magnetization

Procedia PDF Downloads 320
1059 Proniosomes as a Carrier for Ocular Drug Delivery

Authors: Rawia M. Khalil, Ghada Abd-Elbary, Mona Basha, Ghada E. A. Awad, Hadeer A. Elhashemy

Abstract:

Background: Bacterial infections of the eye are clinical conditions responsible for ocular morbidity and blindness. Conjunctivitis is an inflammation of the conjunctiva, often due to Staphylococcus aureus. Lomefloxacin HCl (LXN) is a third-generation fluoroquinolone antibiotic with a broad spectrum against a wide range of bacteria; it is very effective against staphylococcal infections, especially of the conjunctiva (conjunctivitis). The present study aims to develop and evaluate novel ocular proniosomal gels of LXN in order to improve its ocular bioavailability for the management of bacterial conjunctivitis. Materials and methods: Proniosomes were prepared by the coacervation phase separation method using different types of nonionic surfactants (Span 60, 40, 20; Tween 20, 40, 60, 80; Brij 35, 98, 72), both individually and as mixtures with Span 60. The formed gels were characterized for entrapment efficiency, vesicle size, and in vitro drug release. The optimum proniosomal gel, P-LXN 7, was further characterized by pH measurement, transmission electron microscopy (TEM), and differential scanning calorimetry (DSC), as well as a stability study and microbiological evaluation. The results revealed that only Span 60 was able to form a stable LXN proniosomal gel when used individually, while the other nonionic surfactants formed gels only in combination with Span 60 at different ratios. The optimum proniosomal gel, P-LXN 7 (Span 60:Tween 60, 9:1), appeared as spherical vesicles with high entrapment efficiency (>80%), appropriate vesicle size (187 nm), and controlled drug release over 12 h. DSC confirmed the amorphous nature and the uniformity of LXN inclusion within the vesicles. The physical stability study did not show any significant changes in appearance, entrapment efficiency, or vesicle size after storage for 3 months at 4°C. An ocular irritancy test revealed that P-LXN 7 was safe, well tolerated, and suitable for ocular delivery. The in vivo antibacterial activity of P-LXN 7, evaluated using the susceptibility test and topical therapy of induced ocular conjunctivitis, confirmed the enhanced antibacterial therapeutic efficacy of the LXN proniosomal gel compared to the commercially available LXN eye drops (Orchacin®). Conclusions: Our results suggest that proniosomal gels could provide a promising carrier of LXN for efficient ocular treatment of bacterial conjunctivitis.

Keywords: bacterial conjunctivitis, lomefloxacin HCl, ocular drug delivery, proniosomes

Procedia PDF Downloads 231
1058 Structural Analysis of Phase Transformation and Particle Formation in Metastable Metallic Thin Films Grown by Plasma-Enhanced Atomic Layer Deposition

Authors: Pouyan Motamedi, Ken Bosnick, Ken Cadien, James Hogan

Abstract:

Growth of conformal ultrathin metal films has attracted considerable attention recently. Plasma-enhanced atomic layer deposition (PEALD) is a method capable of growing conformal thin films at low temperatures, with exemplary control over thickness. The authors have recently reported on the growth of metastable epitaxial nickel thin films via PEALD, along with a comprehensive characterization of the films and a study of the relationship between the growth parameters and the film characteristics. The goal of the current study is to use these films as a case study to investigate temperature-activated phase transformation and agglomeration in ultrathin metallic films. For this purpose, metastable hexagonal nickel thin films were annealed using a controlled heating/cooling apparatus. The transformations in the crystal structure were observed via in-situ synchrotron X-ray diffraction. The samples were annealed to various temperatures in the range of 400–1100 °C. The onset and progression of particle formation were studied in-situ via laser measurements. In addition, a four-point probe measurement tool was used to record the changes in the resistivity of the films, which is affected by phase transformation as well as by roughening and agglomeration. Thin films annealed at various temperature steps were then studied via atomic force microscopy, scanning electron microscopy, and high-resolution transmission electron microscopy, in order to better understand the correlated mechanisms through which phase transformation and particle formation occur. The results indicate that the onset of the hcp-to-bcc transformation is at 400 °C, while particle formation commences at 590 °C. If the annealed films are quenched after transformation, but prior to agglomeration, they show a noticeable drop in resistivity. This can be attributed to the fact that the hcp films are grown epitaxially and are under severe tensile strain, and annealing leads to relaxation of the mismatch strain. In general, the results shed light on the nature of structural transformation in nickel thin films and in metallic thin films more broadly.

Keywords: atomic layer deposition, metastable, nickel, phase transformation, thin film

Procedia PDF Downloads 331
1057 Evaluation and Proposal for Improvement of the Flow Measurement Equipment in the Bellavista Drinking Water System of the City of Azogues

Authors: David Quevedo, Diana Coronel

Abstract:

The present article carries out an evaluation of the drinking water system in the Bellavista sector of the city of Azogues, with the purpose of determining the appropriate equipment to record the actual consumption flows of the inhabitants of the sector. Given that the study area is located in a rural and economically disadvantaged area, there is an urgent need to establish a control system for drinking water consumption in order to conserve and manage this vital resource in the best possible way, considering that the water source supplying the sector is approximately 9 km away. The research began with the collection of cartographic, demographic, and statistical data for the sector, determining the coverage area, the population projection, and a provision that guarantees the supply of drinking water to meet the water needs of the sector's inhabitants. By means of hydraulic modeling with EPANET 2.0, the United States Environmental Protection Agency's software for modeling drinking water distribution systems, theoretical hydraulic data were obtained and used to design and justify the most suitable measuring equipment for the Bellavista drinking water system. Assuming a minimum service life of 30 years for the drinking water system, future flow rates were calculated for the design of the macro-measuring device. After analyzing the network, it was evident that the Bellavista sector has an average consumption of 102.87 liters per person per day; however, since Ecuadorian regulations recommend a provision of 180 liters per person per day for the geographical conditions of the sector, this value was used for the analysis. With all the collected and calculated information, the conclusion was reached that the Bellavista drinking water system needs a 125 mm electromagnetic macro-measuring device for the first three quinquennia of its service life and a 150 mm device for the following three quinquennia. The importance of having equipment that provides real and reliable data lies in enabling control of water consumption by the population of the sector, measured through micro-measuring devices installed at the entrance of each household; their readings should match those of the macro-measuring device placed after the water storage tank outlet, in order to detect losses that may occur due to leaks in the drinking water system or illegal connections.
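The sizing arithmetic described above (population projection over the service life, the 180 L/person/day provision, conversion to a design flow) can be sketched as follows. The current population, growth rate, and peaking factor are illustrative assumptions, not figures from the study.

```python
# Hedged sketch of the macro-meter sizing arithmetic: project the population,
# apply the regulatory provision, convert to a design flow in L/s.
pop_now = 1200            # assumed current population of the sector
growth = 0.015            # assumed annual growth rate
years = 30                # minimum service life stated in the abstract
provision = 180           # L/person/day (Ecuadorian regulation value)

pop_future = pop_now * (1 + growth) ** years
mean_flow_lps = pop_future * provision / 86_400    # average-day flow, L/s
max_day_flow = 1.3 * mean_flow_lps                 # typical max-day factor (assumed)
print(f"projected population: {pop_future:.0f}, design flow: {max_day_flow:.2f} L/s")
```

The design flow at the end of each quinquennium, computed this way, is what drives the choice between the 125 mm and 150 mm meter diameters.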

Keywords: macrometer, hydraulics, endowment, water

Procedia PDF Downloads 78
1056 The Effects of Shift Work on Neurobehavioral Performance: A Meta Analysis

Authors: Thomas Vlasak, Tanja Dujlociv, Alfred Barth

Abstract:

Shift work is an essential element of modern labor, ensuring ideal conditions of service for today's economy and society. Despite its beneficial properties, its impact on the neurobehavioral performance of exposed subjects remains controversial. This meta-analysis aims to provide a first summary of the effects regarding the association between shift work exposure and different cognitive functions. A literature search was performed via the databases PubMed, PsycINFO, PsycARTICLES, MedLine, PsycNET, and Scopus, including eligible studies up to December 2020 that compared shift workers with non-shift workers on neurobehavioral performance tests. A random-effects model was carried out using Hedges' g as the meta-analytical effect size with a restricted maximum likelihood estimator to summarize the mean differences between the exposure group and controls. The heterogeneity of effect sizes was addressed by a sensitivity analysis using funnel plots, Egger's tests, p-curve analysis, meta-regressions, and subgroup analysis. The meta-analysis included 18 studies, resulting in a total sample of 18,802 participants and 37 effect sizes concerning six different neurobehavioral outcomes. The results showed significantly worse performance in shift workers compared to non-shift workers in the following cognitive functions, with g (95% CI): processing speed 0.16 (0.02–0.30), working memory 0.28 (0.51–0.50), psychomotor vigilance 0.21 (0.05–0.37), cognitive control 0.86 (0.45–1.27), and visual attention 0.19 (0.11–0.26). Neither significant moderating effects of publication year or study quality nor significant subgroup differences regarding type of shift or type of profession were indicated for the cognitive outcomes. These are the first meta-analytical findings that associate shift work with decreased cognitive performance in processing speed, working memory, psychomotor vigilance, cognitive control, and visual attention. Further studies should focus on a more homogeneous measurement of cognitive functions, a precise assessment of shift work experience, and occupation types that are underrepresented in the current literature (e.g., law enforcement). In occupations where shift work is fundamental (e.g., healthcare, industry, law enforcement), protective countermeasures should be promoted for workers.
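The pooling step named above (a random-effects model over standardized mean differences) can be sketched with the classical DerSimonian-Laird estimator; the abstract's restricted maximum likelihood variant differs in how the between-study variance is estimated, but the structure is the same. The effect sizes and variances below are made up for illustration.

```python
# Hedged sketch: pooling per-study Hedges' g values with a DerSimonian-Laird
# random-effects model. All inputs are illustrative, not the study's data.
import numpy as np

g = np.array([0.05, 0.45, 0.30, 0.10, 0.80])    # per-study Hedges' g
v = np.array([0.02, 0.03, 0.015, 0.025, 0.04])  # per-study variances

w = 1 / v                                        # fixed-effect weights
g_fe = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - g_fe) ** 2)                  # Cochran's Q statistic
df = len(g) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                    # between-study variance

w_re = 1 / (v + tau2)                            # random-effects weights
g_re = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled g = {g_re:.3f}, 95% CI ({g_re - 1.96*se:.3f}, {g_re + 1.96*se:.3f})")
```

The pooled g and its 95% CI are reported in exactly this form in the abstract's results (e.g., processing speed 0.16 (0.02–0.30)).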

Keywords: meta-analysis, neurobehavioral performance, occupational psychology, shift work

Procedia PDF Downloads 111
1055 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can be easily attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model; this way, we can compare performance, in terms of generalization, on the unseen data across all models. The best CNN (AlexNet) with the appropriate loss function and optimizer achieves a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's accuracy together with its parameter count and mean average error rate, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
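The paper's central observation, that different losses score the same predictions differently and therefore drive training differently, can be seen with two of the losses it names. The scores and labels below are toy values, not outputs of the paper's networks.

```python
# Hedged sketch: cross-entropy vs. hinge loss on identical raw scores.
# Different losses give different values (and gradients) for the same
# predictions, which is why the loss choice matters per model.
import numpy as np

logits = np.array([2.0, -1.0, 0.3, -0.2])   # raw scores for the "live" class
y = np.array([1, 0, 1, 0])                  # 1 = live finger, 0 = spoof

p = 1 / (1 + np.exp(-logits))               # sigmoid probabilities
cross_entropy = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y_pm = 2 * y - 1                            # hinge loss uses labels in {-1, +1}
hinge = np.mean(np.maximum(0.0, 1.0 - y_pm * logits))

print(f"cross-entropy: {cross_entropy:.3f}, hinge: {hinge:.3f}")
```

Note how the hinge loss is exactly zero for the confidently correct first two scores but cross-entropy still penalizes them slightly; near the decision boundary the two losses weight errors quite differently.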

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 140
1054 The Utility of Sonographic Features of Lymph Nodes during EBUS-TBNA for Predicting Malignancy

Authors: Atefeh Abedini, Fatemeh Razavi, Mihan Pourabdollah Toutkaboni, Hossein Mehravaran, Arda Kiani

Abstract:

In countries with a high prevalence of tuberculosis, such as Iran, the differentiation of malignant tumors from non-malignant ones is very important. In this study, conducted for the first time in the Iranian population, ultrasonographic morphological characteristics observed in patients undergoing EBUS were used to distinguish non-malignant from malignant lymph nodes. The morphological characteristics of lymph nodes, consisting of size, shape, vascular pattern, echogenicity, margin, coagulation necrosis sign, calcification, and central hilar structure, were obtained during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) and compared with the final pathology results. During the study period, a total of 253 lymph nodes were evaluated in 93 cases. Round shape, non-hilar vascular pattern, heterogeneous echogenicity, hyperechogenicity, distinct margin, and the presence of the necrosis sign were significantly more frequent in malignant nodes. On the other hand, the presence of calcification and of a central hilar structure was significantly more frequent in benign nodes (p-value ˂ 0.05). Multivariate logistic regression showed that size > 1 cm, heterogeneous echogenicity, hyperechogenicity, the presence of the necrosis sign, and the absence of a central hilar structure are independent predictive factors for malignancy. The accuracy of each of these factors is 42.29%, 71.54%, 71.90%, 73.51%, and 65.61%, respectively. Of 74 malignant lymph nodes, 100% had at least one of these independent factors. According to our results, the morphological characteristics of lymph nodes assessed during EBUS-TBNA can play a role in the prediction of malignancy.
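A multivariate logistic regression over binary nodal features, as described above, has roughly the following shape. The feature matrix and malignancy labels here are synthetic, generated under assumed coefficients, and stand in for the study's 253-node dataset.

```python
# Hedged sketch: multivariate logistic regression for malignancy prediction
# from binary sonographic features. All data are synthetic, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 253                                   # number of nodes, as in the study
# columns: size>1cm, heterogeneous echo, hyperechoic, necrosis sign, no hilar structure
X = rng.integers(0, 2, size=(n, 5)).astype(float)
# synthetic labels driven mainly by the necrosis sign and echogenicity (assumed)
logit = -1.5 + 1.2 * X[:, 3] + 0.8 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])      # per-feature odds ratios
acc = model.score(X, y)
print("odds ratios:", np.round(odds_ratios, 2), "accuracy:", round(acc, 3))
```

Features whose odds ratio stays above 1 after adjustment for the others are the "independent predictive factors" the abstract reports.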

Keywords: EBUS-TBNA, malignancy, nodal characteristics, pathology

Procedia PDF Downloads 140
1053 Efficacy of Phonological Awareness Intervention for People with Language Impairment

Authors: I. Wardana Ketut, I. Suparwa Nyoman

Abstract:

This study investigated the form and characteristics of speech sounds produced by three Balinese subjects who had recovered from aphasia, and examined their language impairment from both linguistic and neuronal points of view. The failure to produce accurate speech sounds was caused by impairment of the motor cortex, indicating lesions in the left-hemispheric language zone. The sound articulation phenomena took the forms of phoneme deletion, replacement, or assimilation in individual words, and of meaning building in anomic aphasia. The Balinese sound patterns were therefore elicited by showing pictures to the subjects and recording their responses, in order to recognize which individual consonants or vowels they produced unclearly and to find out how the sound disorders occurred. The physiology of sound production by the subjects' speech organs could show not only the accuracy of articulation but also the level of severity of the lesion they suffered from. The subjects' speech sounds were investigated, classified, and analyzed to establish how impaired the lingual units were, and observed to clarify the weaknesses of the sound characters in either place or manner of articulation. Many fricative and stop consonants were replaced by glottal or palatal sounds because cranial nerves such as the facial, trigeminal, and hypoglossal nerves were impaired after the stroke. The phonological intervention was applied through a technique called phonemic articulation drill, and an examination was conducted to determine whether any change had been obtained. The findings show that some weak articulations turned into clearer sounds and that simple meanings could be conveyed through language. The hierarchy of the functional parts of the brain played an important role in language formulation and processing. From these findings, it can be emphasized that this study supports the view that the role of the right hemisphere in recovery from aphasia is associated with functional brain reorganization.

Keywords: aphasia, intervention, phonology, stroke

Procedia PDF Downloads 197
1052 Breast Cancer Sensing and Imaging Utilized Printed Ultra Wide Band Spherical Sensor Array

Authors: Elyas Palantei, Dewiani, Farid Armin, Ardiansyah

Abstract:

A high-precision printed microwave sensor intended for sensing and monitoring potential breast cancer in women's breast tissue was numerically optimized. The single UWB printed sensor element, successfully modeled through several numerical optimizations, was fabricated in multiple copies and incorporated into a bra to form a spherical sensor array. One sample of the UWB microwave sensor obtained through numerical computation and optimization was chosen for fabrication. In total, the spherical sensor array consists of twelve stair patch structures, and each element was individually measured to characterize its electrical properties, especially the return loss parameter. The comparison of the S11 profiles of all UWB sensor elements is discussed. The constructed UWB sensor was verified using HFSS simulations, CST simulations, and experimental measurement. Numerically, both HFSS and CST confirmed a potential operating bandwidth of the UWB sensor of approximately 4.5 GHz. However, the measured bandwidth was about 1.2 GHz due to technical difficulties during the manufacturing step. The implemented UWB microwave sensing and monitoring system consists of the 12-element UWB printed sensor array, a vector network analyzer (VNA) serving as the transceiver and signal processing part, and a desktop PC or laptop acting as the image processing and display unit. In practice, all the reflected power collected from the whole surface of the artificial breast model is grouped into a number of pixel color classes positioned at the corresponding row and column (pixel number). The total number of power pixels applied in the 2D imaging process was set to 100 (a power distribution pixel dimension of 10x10). This was determined by considering the total area of a breast phantom of average Asian breast size and by matching the physical dimension of a single UWB sensor. The resulting microwave imaging results are plotted, and some technical problems that arose in developing the breast sensing and monitoring system are examined in the paper.
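The imaging step described above, binning reflected-power readings onto a 10x10 pixel grid and assigning each pixel a color class, can be sketched as follows. The power values, units, and number of color classes are assumptions for illustration.

```python
# Hedged sketch: mapping reflected power onto a 10x10 grid of color classes,
# as in the 2D imaging step. Power values are synthetic (dBm assumed).
import numpy as np

rng = np.random.default_rng(0)
power = rng.uniform(-60, -20, size=(10, 10))   # reflected power per pixel
power[4:6, 4:6] = -10                          # stronger reflection: mock anomaly

n_classes = 5                                  # number of pixel color classes (assumed)
edges = np.linspace(power.min(), power.max(), n_classes + 1)
classes = np.clip(np.digitize(power, edges) - 1, 0, n_classes - 1)

print("anomaly pixel class:", classes[5, 5], "of", n_classes - 1)
```

A region of tissue that reflects more strongly than its surroundings ends up in the highest color class, which is what makes a potential lesion stand out in the rendered image.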

Keywords: UWB sensor, UWB microwave imaging, spherical array, breast cancer monitoring, 2D-medical imaging

Procedia PDF Downloads 197
1051 Reliability Analysis of Construction Schedule Plan Based on Building Information Modelling

Authors: Lu Ren, You-Liang Fang, Yan-Gang Zhao

Abstract:

In recent years, the application of BIM (Building Information Modelling) to construction schedule planning has attracted growing research attention. To assess whether a BIM-based construction schedule plan can be completed on time, some researchers have introduced reliability theory into the evaluation. In this process, the uncertain factors affecting the construction schedule plan are treated as random variables, whose probability distributions are commonly assumed to be normal and determined by two parameters, the mean and standard deviation, estimated from statistical data. In practical engineering, however, most of the uncertain influence factors are not normally distributed, so evaluating the construction schedule plan under the normality assumption can yield unreasonable results. To obtain a more reasonable evaluation, the distributions of the random variables must be described more comprehensively. For this purpose, the cubic normal distribution is introduced in this paper to describe the distribution of an arbitrary random variable; it is determined by the first four moments (mean, standard deviation, skewness, and kurtosis). In this paper, the BIM model is first built from the design information of the structure and the construction schedule plan is made based on BIM; the cubic normal distribution is then used to describe the distributions of the random variables from the statistical data collected on the factors influencing the construction schedule plan. On this basis, the reliability analysis of the BIM-based construction schedule plan can be carried out more reasonably, and more accurate evaluation results can be given, providing a reference for implementing the actual construction schedule plan.
In the last part of this paper, the efficiency and accuracy of the proposed methodology for the reliability analysis of the BIM-based construction schedule plan are demonstrated through a practical engineering case.
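As a minimal sketch of the moment-based description the abstract relies on, the following Python fragment computes the first four sample moments of an influence factor and evaluates the third-order (cubic) polynomial normal transformation. The sample data and polynomial coefficients are hypothetical; fitting the coefficients from the target moments, which the paper's method requires, is only indicated, not solved here.

```python
import math

def first_four_moments(samples):
    """Sample mean, standard deviation, skewness, and kurtosis
    (population-style estimators, divisor n)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in samples) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in samples) / (n * var ** 2)
    return mean, std, skew, kurt

def cubic_normal(u, mean, std, a, b, c, d):
    """Evaluate X = mean + std * (a + b*u + c*u^2 + d*u^3) for a
    standard-normal u. The coefficients a..d would be solved from the
    target skewness and kurtosis; with (a, b, c, d) = (0, 1, 0, 0) the
    transformation reduces to an ordinary normal variable."""
    return mean + std * (a + b * u + c * u ** 2 + d * u ** 3)
```

With non-zero c and d, the transformed variable reproduces the skewness and kurtosis observed in the statistical data, which a two-parameter normal model cannot.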

Keywords: BIM, construction schedule plan, cubic normal distribution, reliability analysis

Procedia PDF Downloads 152
1050 Evaluation of Automated Analyzers of Polycyclic Aromatic Hydrocarbons and Black Carbon in a Coke Oven Plant by Comparison with Analytical Methods

Authors: L. Angiuli, L. Trizio, R. Giua, A. Digilio, M. Tutino, P. Dambruoso, F. Mazzone, C. M. Placentino

Abstract:

In the winter of 2014, a series of measurements was performed to evaluate the behavior of real-time PAH and black carbon analyzers in a coke oven plant located in Taranto, a city in Southern Italy. Data were collected both inside and outside the plant, at air quality monitoring sites, and PM2.5 and PM1 were measured concurrently. Particle-bound PAHs were measured by two methods: (1) aerosol photoionization using an Ecochem PAS 2000 analyzer, and (2) collection on PM2.5 and PM1 quartz filters followed by gas chromatography/mass spectrometry (GC/MS) analysis. Black carbon was determined both in real time by a Magee Aethalometer AE22 analyzer and by a semi-continuous Sunset Lab EC/OC instrument. PM2.5 and PM1 levels were higher inside than outside the plant, while real-time PAH values were higher outside than inside. Inside the plant, the Ecochem PAS 2000 reported concentrations not significantly different from those determined on the filters during low-pollution days, but at higher concentrations the automated instrument underestimated PAH levels. At the external site, Ecochem PAS 2000 real-time concentrations were consistently higher than the filter-based values. Likewise, real-time black carbon values were consistently lower than the EC concentrations obtained by the Sunset EC/OC instrument at the inner site, while outside the plant the real-time values were comparable to the Sunset EC values. These results show that, in a coke plant, real-time PAH and black carbon analyzers in their factory configuration provide only qualitative information, with poor accuracy and a tendency to underestimate concentrations. A site-specific calibration is therefore needed before these instruments are installed at highly polluted sites.
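The site-specific calibration called for above can be sketched as a least-squares line fitted between co-located real-time readings and the filter-based reference concentrations. The Python below is an illustrative sketch, not the authors' procedure; the function names and data are hypothetical.

```python
def fit_calibration(realtime, reference):
    """Ordinary least squares: reference ~ slope * realtime + intercept.
    Both inputs are paired concentrations from a co-location campaign."""
    n = len(realtime)
    mx = sum(realtime) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in realtime)
    sxy = sum((x - mx) * (y - my) for x, y in zip(realtime, reference))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def calibrate(raw, slope, intercept):
    """Correct a raw analyzer reading with the fitted site-specific line."""
    return slope * raw + intercept
```

A slope well above 1 would indicate the underestimation at high concentrations reported here; because the bias differed between the inner and outer sites, each site would need its own fit.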

Keywords: black carbon, coke oven plant, PAH, PAS, aethalometer

Procedia PDF Downloads 346
1049 Evaluating Structural Crack Propagation Induced by Soundless Chemical Demolition Agent Using an Energy Release Rate Approach

Authors: Shyaka Eugene

Abstract:

The efficient and safe demolition of structures is a critical challenge in civil engineering and construction. This study develops optimal demolition strategies by investigating crack propagation in beams induced by soundless cracking agents, which are widely used in controlled demolition owing to their non-explosive and environmentally friendly nature. The research employs a comprehensive experimental and computational approach to analyze crack initiation, propagation, and eventual failure in beams subjected to soundless cracking agents. In the experiments, various cracking agents are applied under controlled conditions to understand their effects on the structural integrity of beams, with high-resolution imaging and strain measurements capturing the crack propagation process. In parallel, numerical simulations are conducted using advanced finite element analysis (FEA) techniques to model crack propagation in beams, considering parameters such as cracking agent composition, loading conditions, and beam properties; the FEA models are validated against the experimental results to ensure their accuracy in predicting crack propagation patterns. The findings provide valuable insights for optimizing demolition strategies, allowing engineers and demolition experts to make informed decisions on the selection of cracking agents, their application techniques, and structural reinforcement methods. Ultimately, this research contributes to safer, more efficient, and more sustainable demolition practices in the construction industry, reducing environmental impact and protecting adjacent structures and the surrounding environment.
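The energy release rate named in the keywords is commonly evaluated through Irwin's relation G = K_I^2 / E', where E' = E/(1 - nu^2) in plane strain and E' = E in plane stress. The Python sketch below illustrates this standard relation only; the values and function are illustrative assumptions, not taken from the study.

```python
def energy_release_rate(K_I, E, nu, plane_strain=True):
    """Irwin's relation G = K_I**2 / E' for a mode-I crack.
    K_I: stress intensity factor [Pa*sqrt(m)], E: Young's modulus [Pa],
    nu: Poisson's ratio. Returns G in J/m^2."""
    E_eff = E / (1.0 - nu ** 2) if plane_strain else E
    return K_I ** 2 / E_eff
```

Comparing G against the material's critical energy release rate G_c gives the propagation criterion (the crack advances when G >= G_c), which is the kind of check the validated FEA models can evaluate along a candidate crack path.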

Keywords: expansion pressure, energy release rate, soundless chemical demolition agent, crack propagation

Procedia PDF Downloads 66