Search results for: Fault detection and classification
213 Optimization of Thermopile Sensor Performance of Polycrystalline Silicon Film
Authors: Li Long, Thomas Ortlepp
Abstract:
A theoretical model for the optimization of thermopile sensor performance is developed for thermoelectric-based infrared radiation detection. It is shown that the performance of a polycrystalline silicon film thermopile sensor can be optimized according to the thermoelectric quality factor, the sensor layer structure factor and the sensor layout shape factor. Based on the properties of electrons, phonons, grain boundaries and their interactions, the thermoelectric quality factor of polycrystalline silicon is analyzed with the relaxation time approximation of the Boltzmann transport equation. The model includes the effects of grain structure, grain boundary trap properties and doping concentration. The layer structure factor of the sensor is analyzed with respect to the infrared absorption coefficient. The effect of layout design is characterized by the shape factor, which is calculated for different sensor designs. Double-layer polycrystalline silicon thermopile infrared sensors on a suspended support membrane have been designed and fabricated with a CMOS-compatible process. The theoretical approach is confirmed by measurement results.
Keywords: Polycrystalline silicon film, relaxation time approximation, specific detectivity, thermal conductivity, thermopile infrared sensor.
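For reference, the thermoelectric quality factor referred to above is conventionally written as the figure of merit below; this standard form is an assumption on our part, since the abstract does not spell out the authors' exact expression.

```latex
% Conventional thermoelectric figure of merit (assumed standard form):
% S       - Seebeck coefficient of the thermocouple pair
% \sigma  - electrical conductivity
% \lambda - thermal conductivity of the polysilicon film
Z = \frac{S^{2}\sigma}{\lambda}
```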
212 Harnessing the Power of AI: Transforming DevSecOps for Enhanced Cloud Security
Authors: Ashly Joseph, Jithu Paulose
Abstract:
The increased usage of cloud computing has revolutionized the IT landscape, but it has also raised new security concerns. DevSecOps emerged as a way of tackling these difficulties by integrating security into the software development process. However, the rising complexity and sophistication of cyber threats call for more advanced solutions. This paper looks into the usage of artificial intelligence (AI) techniques in the DevSecOps framework to increase cloud security. The study uses quantitative and qualitative techniques to assess the usefulness of AI approaches such as machine learning, natural language processing, and deep learning in reducing security issues. It thoroughly examines the symbiotic relationship between AI and DevSecOps, concentrating on how AI may be seamlessly integrated into the continuous integration and continuous delivery (CI/CD) pipeline, automated security testing, and real-time monitoring methods. The findings emphasize AI's huge potential to improve threat detection, risk assessment, and incident response capabilities. Furthermore, the paper examines the implications and challenges of using AI in DevSecOps workflows, considering factors such as scalability, interpretability, and adaptability. This paper adds to a better understanding of AI's revolutionary role in cloud security and provides valuable insights for practitioners and scholars in the field.
Keywords: Cloud Security, DevSecOps, Artificial Intelligence, AI, Machine Learning, Natural Language Processing, NLP, cybersecurity, AI-driven Security.
211 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning model. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, with the best configuration yielding an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record
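A minimal sketch of the feature-enrichment step described above: lexical semantic vectors are concatenated with word2vec embeddings before being fed to the CNN. The vector dimensions and the lookup tables are illustrative assumptions, not the authors' actual values.

```python
import numpy as np

# Hypothetical lookup tables: word2vec embeddings (100-d) and
# lexical semantic vectors (20-d) for each medical term.
word2vec = {"fever": np.random.rand(100), "cough": np.random.rand(100)}
lsv = {"fever": np.random.rand(20), "cough": np.random.rand(20)}

def enrich(term):
    """Concatenate the word2vec embedding with the LSV of a term."""
    return np.concatenate([word2vec[term], lsv[term]])

features = np.stack([enrich(t) for t in ["fever", "cough"]])
print(features.shape)  # (2, 120): enriched input rows for the CNN
```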
210 Analysis of Reflectance Photoplethysmograph Sensors
Authors: Fu-Hsuan Huang, Po-Jung Yuan, Kang-Ping Lin, Hen-Hong Chang, Cheng-Lun Tsai
Abstract:
Photoplethysmography is a simple measurement of the variation in blood volume in tissue. It detects the pulse signal of the heartbeat as well as the low-frequency signal of vasoconstriction and vasodilation. Transmission-type measurement is limited to a few specific positions, such as the index finger, that offer a short path length for light. Reflectance-type measurement can be conveniently applied to most parts of the body surface. This study analyzed the factors that determine the quality of the reflectance photoplethysmograph signal, including the emitter-detector distance, wavelength, light intensity, and optical properties of skin tissue. Light emitting diodes (LEDs) with four different visible wavelengths were used as the light emitters, and a phototransistor was used as the light detector. A micro translation stage adjusted the emitter-detector distance from 2 mm to 15 mm. The reflective photoplethysmograph signals were measured at different sites. The optimal emitter-detector distance was chosen to provide a large dynamic range for low-frequency drifting without signal saturation and a high perfusion index. Among the four wavelengths, yellowish-green (571 nm) light with a proper emitter-detector distance of 2 mm was the most suitable for obtaining a steady and reliable reflectance photoplethysmograph signal.
Keywords: Reflectance photoplethysmograph, Perfusion index, Signal-to-noise ratio.
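A rough sketch of the perfusion index used above to rank emitter-detector distances; the AC/DC decomposition here (mean as DC, peak-to-peak as AC) is a common convention and an assumption on our part.

```python
import numpy as np

def perfusion_index(ppg):
    """Perfusion index = pulsatile (AC) over static (DC) component, in %."""
    dc = np.mean(ppg)               # static baseline component
    ac = np.max(ppg) - np.min(ppg)  # pulsatile peak-to-peak amplitude
    return 100.0 * ac / dc

# Synthetic PPG: baseline plus a 1.2 Hz "heartbeat" oscillation.
t = np.linspace(0, 10, 1000)
ppg = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
print(f"PI = {perfusion_index(ppg):.2f}%")
```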
209 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models were applied to predict one-year mortality, over a comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) was used, with the Model for End-Stage Liver Disease (MELD) prediction of mortality as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, due to the sparsity of the temporal information it needs, but FSM together with an ensemble machine learning technique further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
208 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations
Authors: Satyanadh Gundimada, Vijayan K Asari
Abstract:
A novel feature selection strategy to improve recognition accuracy on faces affected by nonuniform illumination, partial occlusions and varying expressions is proposed in this paper. The technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a high improvement in classification performance compared to baseline algorithms.
Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.
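The criterion function named above (between-class over within-class variance) can be sketched as follows for a two-class case; the equal-weight pooling of the class variances is our simplification.

```python
import numpy as np

def discriminability(feature_a, feature_b):
    """Ratio of between-class variance to within-class variance
    for one candidate feature, evaluated over two sample sets."""
    mean_all = np.mean(np.concatenate([feature_a, feature_b]))
    between = (np.mean(feature_a) - mean_all) ** 2 + \
              (np.mean(feature_b) - mean_all) ** 2
    within = np.var(feature_a) + np.var(feature_b)
    return between / within

a = np.random.normal(0.0, 1.0, 50)   # class 1 feature values
b = np.random.normal(2.0, 1.0, 50)   # class 2 feature values
print(discriminability(a, b))        # larger = more discriminative
```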
207 Recognition Machine (RM) for On-line and Isolated Flight Deck Officer (FDO) Gestures
Authors: Deniz T. Sodiri, Venkat V S S Sastry
Abstract:
The paper presents an on-line recognition machine (RM) for continuous/isolated, dynamic and static gestures that arise in Flight Deck Officer (FDO) training. RM is based on a generic pattern recognition framework. Gestures are represented as templates using summary statistics. The proposed recognition algorithm exploits the temporal and spatial characteristics of gestures via dynamic programming and a Markovian process. The algorithm predicts the corresponding index of incremental input data in the templates in an on-line mode. Accumulated consistency in the sequence of predictions provides a similarity measurement (score) between the input data and the templates. The algorithm provides an intuitive mechanism for automatic detection of the start/end frames of continuous gestures. In the present paper, we consider isolated gestures. The performance of RM is evaluated using four datasets: artificial (W TTest), hand motion (Yang) and FDO (tracker and vision-based). RM achieves comparable results which are in agreement with other on-line and off-line algorithms such as hidden Markov models (HMM) and dynamic time warping (DTW). The proposed algorithm has the additional advantage of providing timely feedback for training purposes.
Keywords: On-line Recognition Algorithm, Isolated Dynamic/Static Gesture Recognition, On-line Markovian/Dynamic Programming, Training in Virtual Environments.
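For context, a minimal version of dynamic time warping (DTW), one of the comparator algorithms named above; this is the textbook formulation, not the paper's RM.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Textbook DTW: cumulative cost of the best monotonic alignment."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

print(dtw_distance([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # small = similar gestures
```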
206 Research on Simulation and Validation of an Airborne Enhanced Ground Proximity Warning System
Authors: Ma Shidong, He Yuncheng, Wang Zhong, Yang Guoqing
Abstract:
In this paper, an enhanced ground proximity warning simulation and validation system is designed and implemented. First, based on a square grid and sub-grid structure, a global digital terrain database is designed and constructed. Terrain data searching is implemented by querying the latitude and longitude bands and separated zones of the global terrain database with the current aircraft position. A combination of dynamic scheduling and hierarchical scheduling is adopted to schedule the terrain data, so that terrain data can be read into and deleted from memory dynamically. Secondly, using the scope, distance and approach speed of the dangerous terrain ahead, together with a security-profile calculation method, collision threat detection is executed in real time and caution and warning alarms are provided. Following this scheme, the enhanced ground proximity warning simulation system was implemented. Simulations were carried out to verify real-time terrain display and alarm triggering, and the results show that the simulation system behaves correctly, reasonably and stably.
Keywords: enhanced ground proximity warning system, digital terrain, look-ahead terrain alarm, terrain display, simulation and validation
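A toy sketch of the latitude/longitude band lookup described above; the 1-degree tiling and the key scheme are illustrative assumptions, not the paper's actual database layout.

```python
def terrain_tile_key(lat_deg, lon_deg, tile_size_deg=1.0):
    """Map an aircraft position to the key of the terrain tile
    (latitude band / longitude zone) that must be scheduled into memory."""
    lat_band = int(lat_deg // tile_size_deg)
    lon_zone = int(lon_deg // tile_size_deg)
    return (lat_band, lon_zone)

# Tiles around the current position are loaded; distant ones can be evicted.
current = terrain_tile_key(39.90, 116.40)
neighbours = [(current[0] + i, current[1] + j)
              for i in (-1, 0, 1) for j in (-1, 0, 1)]
print(current, len(neighbours))
```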
205 The Comparison of Nuclear Factor Kappa Beta (NFκB) Activation in Rattus norvegicus Strain Wistar Induced by a High Fat Diet (HFD) of Various Durations
Authors: Titin Andri Wihastuti, Djanggan Sargowo
Abstract:
NFκB is a transcription factor regulating many functions of the vessel wall. Under normal conditions, NFκB shows diffuse cytoplasmic expression, suggesting that the system is inactive. Activation of NFκB provides a potential pathway for the rapid transcription of a variety of genes encoding cytokines, growth factors, adhesion molecules and procoagulatory factors, and it is likely to play an important role in chronic inflammatory diseases, including atherosclerosis. There are many stimuli with the potential to activate NFκB, including hyperlipidemia. We used 24 rats divided into 6 groups. The HFD was given by an ad libitum procedure for 2, 4, and 6 months. The parameters in this study were the amount of NFκB activation, H2O2 as a reactive oxygen species (ROS) and VCAM-1 as a product of NFκB activation. The H2O2 colorimetric assay was performed directly using an Anti Rat H2O2 ELISA Kit. NFκB and VCAM-1 were detected in the rat aorta and measured by ELISA kit and immunohistochemistry. There were significant differences in H2O2, NFκB and VCAM-1 levels after 2, 4 and 6 months of HFD. This suggests that HFD induces ROS formation and increases the activation of NFκB, a marker of atherosclerosis, through hyperlipidemia, a classical atherosclerosis risk factor.
Keywords: High Fat Diet, NFκB, H2O2, atherosclerosis.
204 Intrinsic Electromagnetic Fields and Atom-Field Coupling in Living Cells
Authors: Masroor H. S. Bukhari, Z. H. Shah
Abstract:
The possibility of intrinsic electromagnetic fields within living cells and their resonant self-interaction and interaction with ambient electromagnetic fields is suggested on the basis of a theoretical and experimental study. It is reported that intrinsic electromagnetic fields are produced in the form of radio-frequency and infra-red photons within atoms (which may be coupled or uncoupled) in cellular structures, such as the cell cytoskeleton and plasma membrane. A model is presented for the interaction of these photons among themselves or with atoms under a dipole-dipole coupling, induced by single-photon or two-photon processes. This resonance is manifested by conspicuous field amplification, and it is argued that it is possible for these resonant photons to undergo tunnelling in the form of evanescent waves over a short range (of a few nanometers to micrometres). This effect, suggested as a resonant photon tunnelling mechanism in this report, may enable these fields to act as intracellular signal communication devices and as bridges between macromolecules or cellular structures in the cell cytoskeleton, organelles or membrane. A brief overview of an experimental technique and a review of some preliminary results are presented on the detection of these fields produced in living cell membranes under physiological conditions.
Keywords: bioelectromagnetism, cell membrane, evanescent waves, photon tunnelling, resonance.
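The short-range evanescent behaviour invoked above is conventionally described by an exponential amplitude decay; the form below is the standard one and an assumption here, as the abstract gives no explicit expression.

```latex
% Standard evanescent-wave amplitude decay (assumed form):
% E_0 - field amplitude at the interface, z - distance, d - penetration depth
E(z) = E_{0}\, e^{-z/d}
```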
203 Human Face Detection and Segmentation using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper we propose a novel method for human face segmentation that exploits the elliptical structure of the human head and the information present in the edge map of the image. The approach uses the fact that the eigenvalues of a covariance matrix represent an elliptical structure: the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of an ellipse. The other elliptical parameters are used to identify the centre and orientation of the face. Since an elliptical Hough transform would require a 5D Hough space, the Circular Hough Transform (CHT) is used to evaluate the elliptical parameters. A sparse matrix technique is used to perform the CHT, as it squeezes out zero elements and stores only a small number of non-zero elements, thereby requiring less storage space and computational time. A neighborhood suppression scheme is used to identify the valid Hough peaks. The accurate position of the circumference pixels for occluded and distorted ellipses is identified using Bresenham's raster scan algorithm, which uses geometrical symmetry properties. This method does not require the evaluation of tangents for curvature contours, which are very sensitive to noise. The method has been evaluated on several images with different face orientations.
Keywords: Circular Hough Transform, Covariance matrix, Eigenvalues, Elliptical Hough Transform, Face segmentation, Raster Scan Algorithm.
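A small sketch of the eigenvalue step described above: the covariance matrix of edge-pixel coordinates yields the axial lengths and orientation of the enclosing ellipse. The 2-sigma scaling of the axes is our illustrative choice.

```python
import numpy as np

def ellipse_from_edges(points):
    """Estimate ellipse centre, axial lengths and orientation from the
    eigen-decomposition of the edge pixels' covariance matrix."""
    centre = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    minor, major = 2.0 * np.sqrt(eigvals)               # ~2-sigma axial lengths
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])    # major-axis orientation
    return centre, major, minor, angle

pts = np.random.multivariate_normal([50, 80], [[25, 10], [10, 9]], 500)
print(ellipse_from_edges(pts))
```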
202 Optical Fish Tracking in Fishways using Neural Networks
Authors: Alvaro Rodriguez, Maria Bermudez, Juan R. Rabuñal, Jeronimo Puertas
Abstract:
One of the main issues in Computer Vision is extracting the movement of one or several points or objects of interest in an image or video sequence to conduct any kind of study or control process. Different techniques to solve this problem have been applied in numerous areas such as surveillance systems, traffic analysis, motion capture, image compression and navigation systems, where the specific characteristics of each scenario determine the approach to the problem. This paper puts forward a Computer Vision based algorithm to analyze fish trajectories in the high-turbulence conditions of artificial structures called vertical slot fishways, which are designed to allow the upstream migration of fish through obstructions in rivers. The suggested algorithm calculates the position of the fish at every instant, starting from images recorded with a camera and using neural networks to perform fish detection on the images. Laboratory tests have been carried out in a full-scale fishway model with live fish, allowing the reconstruction of fish trajectories and the measurement of fish velocities and accelerations. These data can provide useful information for designing more effective vertical slot fishways.
Keywords: Computer Vision, Neural Network, Fishway, Fish Trajectory, Tracking
201 A Watermarking Scheme for MP3 Audio Files
Authors: Dimitrios Koukopoulos, Yiannis Stamatiou
Abstract:
In this work, we present what is, to the best of our knowledge, the first efficient digital watermarking scheme for MPEG audio layer 3 files that operates directly in the compressed data domain, while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented taking special care for the efficient usage of the two limited resources of computer systems: time and space. It offers the industrial user watermark embedding and detection in time immediately comparable to the real music time of the original audio file, depending on the MPEG compression, while the end user/audience does not face any artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability to compression/recompression attacks of algorithms operating in the PCM data domain, as it places the watermark in the scale-factor domain and not in the digitized sound data. The strength of our scheme, which allows it to be used with success in both authentication and copyright protection, relies on the fact that ownership of the audio file is established not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.
Keywords: Audio watermarking, MPEG audio layer 3, hard instance generation, NP-completeness.
200 A Robust and Adaptive Unscented Kalman Filter for the Air Fine Alignment of the Strapdown Inertial Navigation System/GPS
Authors: Jian Shi, Baoguo Yu, Haonan Jia, Meng Liu, Ping Huang
Abstract:
To adapt to the flexibility of modern warfare, a large number of guided weapons are launched from aircraft, so the inertial navigation system loaded in the weapon needs to undergo an alignment process in the air. This article addresses the problems of inaccurate system modeling under large misalignment angles, reduced filtering accuracy caused by outliers, and noise changes in GPS signals with the following methods: first, considering the large misalignment errors of the Strapdown Inertial Navigation System (SINS)/GPS, a more accurate model is built rather than making a small-angle approximation, and the Unscented Kalman Filter (UKF) algorithm is used to estimate the state; then, taking into account the impact of GPS noise changes on the fine alignment algorithm, an innovation-based adaptive filtering algorithm is introduced to estimate the GPS noise in real time; at the same time, in order to improve the anti-interference ability of the air fine alignment algorithm, a robust filtering algorithm based on outlier detection is combined with it to improve the robustness of the algorithm. The algorithm improves alignment accuracy and robustness under interference conditions, which is verified by simulation.
Keywords: Air alignment, fine alignment, inertial navigation system, integrated navigation system, UKF.
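The innovation-based noise estimation mentioned above is commonly written as a windowed average of innovation outer products; this standard form is an assumption, since the abstract does not give the authors' exact estimator.

```latex
% Innovation-based adaptive estimate of the measurement noise covariance
% (a common windowed form, assumed here):
% \nu_j = z_j - H_j \hat{x}_{j|j-1} is the innovation, N the window length
\hat{R}_k = \frac{1}{N} \sum_{j=k-N+1}^{k} \nu_j \nu_j^{\top}
            - H_k P_{k|k-1} H_k^{\top}
```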
199 Land Suitability Prediction Modelling for Agricultural Crops Using Machine Learning Approach: A Case Study of Khuzestan Province, Iran
Authors: Saba Gachpaz, Hamid Reza Heidari
Abstract:
The sharp increase in population growth puts more pressure on agricultural areas to satisfy the food supply. This necessitates increased resource consumption and underscores the importance of addressing sustainable agricultural development along with other environmental considerations. Land-use management is a crucial factor in obtaining optimum productivity. Machine learning is a widely used technique in the agricultural sector, from yield prediction to customer behavior; the method learns patterns and correlations from a data set. In this study, nine physical control factors, namely soil classification, electrical conductivity, normalized difference water index (NDWI), groundwater level, elevation, annual precipitation, pH of water, annual mean temperature, and slope, in the alluvial plain in Khuzestan (an agricultural hotspot in Iran) are used to decide the best agricultural land use for both rainfed and irrigated agriculture for 10 different crops. For this purpose, each variable was imported into ArcGIS and a raster layer was obtained. In the next step, using training samples, all layers were imported into the Python environment. A random forest model was applied, and the weight of each variable was specified. In the final step, results were visualized using a digital elevation model, and the importance of all factors for each of the crops was obtained. Our results show that although 62% of the study area is allocated to agricultural purposes, only 42.9% of these areas can be classified as suitable for cultivation.
Keywords: Land suitability, machine learning, random forest, sustainable agriculture.
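A condensed sketch of the modelling step described above, using scikit-learn's random forest; the synthetic data and feature names are placeholders for the raster layers extracted in ArcGIS.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

factors = ["soil_class", "EC", "NDWI", "groundwater", "elevation",
           "precipitation", "pH", "mean_temp", "slope"]

# Placeholder training samples: rows = locations, columns = the 9 factors.
X = np.random.rand(500, len(factors))
y = np.random.randint(0, 2, 500)   # 1 = suitable, 0 = unsuitable

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Per-factor weights, analogous to the variable importances reported above.
for name, w in sorted(zip(factors, model.feature_importances_),
                      key=lambda p: -p[1]):
    print(f"{name:14s} {w:.3f}")
```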
198 Graph Codes: 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval
Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje
Abstract:
Multimedia indexing and retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based algorithms for similarity are quite inefficient and computation intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a graph projection into a 2D space (Graph Code) as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelisation. In consequence, we prove that the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.
Keywords: indexing, retrieval, multimedia, graph code, graph algorithm
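An illustrative and heavily simplified reading of the 2D projection idea: a feature graph is flattened into a fixed-order matrix so that similarity becomes cheap matrix arithmetic rather than graph traversal. The encoding below is our toy stand-in, not the authors' actual Graph Code definition.

```python
import numpy as np

def graph_code(nodes, edges, vocabulary):
    """Project a feature graph onto a fixed node vocabulary:
    diagonal marks node presence, off-diagonal marks edges."""
    index = {n: i for i, n in enumerate(vocabulary)}
    m = np.zeros((len(vocabulary), len(vocabulary)))
    for n in nodes:
        m[index[n], index[n]] = 1.0
    for a, b in edges:
        m[index[a], index[b]] = 1.0
    return m

vocab = ["person", "car", "tree", "street"]
g1 = graph_code(["person", "car"], [("person", "car")], vocab)
g2 = graph_code(["person", "tree"], [("person", "tree")], vocab)
# Similarity as simple matrix overlap: parallelisable, no traversal needed.
print(np.sum(g1 * g2) / np.sum(np.maximum(g1, g2)))
```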
197 Satellite Sensing for Evaluation of an Irrigation System in the Cotton-Wheat Zone
Authors: Sadia Iqbal, Faheem Iqbal, Furqan Iqbal
Abstract:
Efficient utilization of existing water is a pressing need for Pakistan, due to its rising population, the reduction in present storage capacity, and poor canal delivery efficiency of 30 to 40%. A study was conducted to evaluate an irrigation system in the cotton-wheat zone of Pakistan after watercourse lining, on the basis of cropping pattern and salinity. The study employed an index-based approach combining a geographic information system with field data. Satellite images from different years were used to examine the effective area. Several combinations of the ratios of signals received in different spectral bands were tested for the development of this index; the near-infrared and thermal-IR spectral bands proved the most effective, as this combination allowed easy detection of salt-affected areas and the cropping pattern of the study area. Results showed that 9.97% of the area was salt-affected in 1992 and 9.17% in 2000, falling to 2.29% in 2005. Similarly, 45% of the area was under vegetation in 1992, improving to 56% in 2000 and 65% in 2005. On the basis of these results, the evaluation shows a 30% performance increase after the watercourse improvement.
Keywords: Salinity, remote sensing index, salinity index, cropping pattern.
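A generic normalized band-ratio index of the kind described above; the exact band combination and threshold the authors settled on are not given in the abstract, so both are illustrative assumptions.

```python
import numpy as np

def band_ratio_index(band_a, band_b):
    """Normalized ratio of two spectral bands, e.g. band_a = near-infrared
    and band_b = thermal-IR signals, as in the combination named above."""
    band_a = band_a.astype(float)
    band_b = band_b.astype(float)
    return (band_a - band_b) / (band_a + band_b + 1e-9)  # avoid divide-by-zero

nir = np.random.randint(0, 255, (4, 4))   # placeholder NIR band
tir = np.random.randint(0, 255, (4, 4))   # placeholder thermal-IR band
index_map = band_ratio_index(nir, tir)
salt_mask = index_map > 0.3               # illustrative threshold
print(index_map.round(2), salt_mask.sum())
```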
196 Detection of Arcobacter and Helicobacter pylori Contamination in Organic Vegetables by Cultural and PCR Methods
Authors: Miguel García-Ferrús, Ana González, María A. Ferrús
Abstract:
The most demanded organic foods worldwide are those that are consumed fresh, such as fruits and vegetables. However, there is a knowledge gap about some aspects of organic food microbiological quality and safety. Organic fruits and vegetables are more exposed to pathogenic microorganisms due to surface contact with natural fertilizers such as animal manure, wastes and vermicompost used during farming. Therefore, the objective of this work was to study the contamination of organic fresh green leafy vegetables by two emergent pathogens, Arcobacter spp. and Helicobacter pylori. For this purpose, a total of 24 vegetable samples, 13 lettuce and 11 spinach, were acquired from 10 different organic supermarkets and greengrocers and analyzed by culture and PCR. Arcobacter spp. was detected in five samples (20%) by PCR, four spinach and one lettuce; one spinach sample was also positive by culture. For H. pylori, the VacA gene-specific band was detected in 12 vegetable samples (50%), 10 lettuce and two spinach. Isolation in a selective medium did not yield any positive result, possibly because of low contamination levels together with the presence of the organism in its viable but non-culturable form. The results show significant levels of H. pylori and Arcobacter contamination in organic vegetables that are generally consumed raw, which seems to confirm that these foods can act as transmission vehicles to humans.
Keywords: Arcobacter spp., Helicobacter pylori, organic vegetables, Polymerase Chain Reaction, PCR.
195 An Efficient Architecture for Interleaved Modular Multiplication
Authors: Ahmad M. Abdel Fattah, Ayman M. Bahaa El-Din, Hossam M.A. Fahmy
Abstract:
Modular multiplication is the basic operation in most public key cryptosystems, such as RSA, DSA, ECC, and DH key exchange. Unfortunately, very large operands (on the order of 1024 or 2048 bits) must be used to provide sufficient security strength. The use of such big numbers dramatically slows down the whole cipher system, especially when running on embedded processors. So far, customized hardware accelerators, developed on FPGAs or ASICs, have been the best choice for accelerating modular multiplication in embedded environments. On the other hand, many algorithms have been developed to speed up such operations; examples are the Montgomery modular multiplication and the interleaved modular multiplication algorithms. Combining customized hardware with an efficient algorithm is expected to provide a much faster cipher system. This paper introduces an enhanced architecture for computing the modular multiplication of two large numbers X and Y modulo a given modulus M. The proposed design is compared with three previous architectures that depend on carry save adders and look-up tables, where the look-up tables must be loaded with a set of pre-computed values. Our proposed architecture uses the same carry save addition, but replaces both the look-up tables and the pre-computations with an enhanced version of sign detection techniques. The proposed architecture supports higher frequencies than the other architectures and also achieves a better overall absolute time for a single operation.
Keywords: Montgomery multiplication, modular multiplication, efficient architecture, FPGA, RSA.
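For reference, the bit-serial interleaved modular multiplication named above computes X*Y mod M by interleaving shift, add and conditional subtraction; this software rendering of the textbook algorithm mirrors what the hardware parallelises, not the paper's specific architecture.

```python
def interleaved_mod_mul(x, y, m, k):
    """Compute (x * y) mod m by scanning the k bits of x from MSB to LSB,
    interleaving the doubling step with the modular reduction (requires y < m)."""
    r = 0
    for i in reversed(range(k)):
        r = 2 * r                 # shift partial result left
        if (x >> i) & 1:
            r += y                # add multiplicand for a set bit
        if r >= m:                # at most two conditional subtractions
            r -= m                #   restore the invariant r < m
        if r >= m:
            r -= m
    return r

x, y, m = 0x9ABC, 0x1234, 0xFFF1
assert interleaved_mod_mul(x, y, m, 16) == (x * y) % m
```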
194 Influence of Compactive Efforts on Cement-Bagasse Ash Treatment of Expansive Black Cotton Soil
Authors: G. Moses, K. J. Osinubi
Abstract:
A laboratory study was undertaken on the influence of compactive effort on expansive black cotton soil specimens treated with up to 8% ordinary Portland cement (OPC) admixed with up to 8% bagasse ash (BA) by dry weight of soil and compacted using the energies of the standard Proctor (SP), West African Standard (WAS) or "intermediate", and modified Proctor (MP) efforts. The expansive black cotton soil was classified as A-7-6 (16) or CL using the American Association of State Highway and Transportation Officials (AASHTO) and Unified Soil Classification System (USCS), respectively. The 7-day unconfined compressive strength (UCS) values of the natural soil for the SP, WAS and MP compactive efforts are 286, 401 and 515 kN/m2, respectively, while the peak values of 1019, 1328 and 1420 kN/m2, recorded at the 8% OPC/6% BA, 8% OPC/2% BA and 6% OPC/4% BA treatments, respectively, were less than the UCS value of 1710 kN/m2 conventionally used as the criterion for adequate cement stabilization. The soaked California bearing ratio (CBR) values of the OPC/BA-stabilized soil increased with higher energy level, from 2, 4 and 10% for the natural soil to peak values of 55, 18 and 8%, recorded at the 8% OPC/4% BA, 8% OPC/2% BA and 8% OPC/4% BA treatments when the SP, WAS and MP compactive efforts were used, respectively. The durability of specimens was determined by immersion in water. Treatment with the 8% OPC/4% BA blend gave a resistance to loss in strength of 50%, which is acceptable given the harsh test condition of a 7-day soaking period to which specimens were subjected, instead of the 4-day soaking period for which a minimum resistance to loss in strength of 80% is specified. Finally, an optimal blend of 8% OPC/4% BA is recommended for the treatment of expansive black cotton soil for use as a sub-base material.
Keywords: Bagasse ash, California bearing ratio, Compaction, Durability, Ordinary Portland cement, Unconfined compressive strength.
193 Effect of L-Arginine on Neuromuscular Transmission of the Chick Biventer Cervicis Muscle
Authors: S. Asadzadeh Vostakolaei
Abstract:
In this study, the effect of L-arginine was examined at the neuromuscular junction of the chick biventer cervicis muscle. L-Arginine at 500 μg/ml decreased the twitch response to electrical stimulation and produced a rightward shift of the dose-response curve for acetylcholine or carbachol. L-Arginine at 1000 μg/ml produced a strong rightward shift of the dose-response curve for acetylcholine or carbachol with a reduction in efficacy. The inhibitory effect of L-arginine on the twitch response was blocked by caffeine (200 μg/ml). NO levels were also measured in chick biventer cervicis muscle homogenates, using a spectrophotometric method for the direct detection of NO, nitrite and nitrate. Total nitrite (nitrite + nitrate) was measured by spectrophotometer at 540 nm after the conversion of nitrate to nitrite by copperized cadmium granules. NO levels were found to be significantly increased at L-arginine concentrations of 500 and 1000 μg/ml in comparison with the control group (p<0.001). These findings indicate a possible role of increased NO levels in the suppressive action of L-arginine on the twitch response. In addition, the results indicate that the post-junctional antagonistic action of L-arginine is probably the result of impaired sarcoplasmic reticulum (SR) Ca2+ release.
Keywords: Chick, L-Arginine, Nitric Oxide, Skeletal muscle.
Keywords: Chick, L-Arginine, Nitric Oxide, Skeletal muscle.
192 A Study on the Developing Method of the BIM (Building Information Modeling) Software Based On Cloud Computing Environment
Authors: Byung-Kon Kim
Abstract:
As Architecture, Engineering and Construction (AEC) industry projects have grown larger and more complex, the use of BIM for 3D design and simulation has increased significantly. Typical applications of BIM, such as clash detection and alternative measures based on 3-dimensional planning, have accordingly expanded into process management, cost and quantity management, structural analysis, regulation checking, and various other domains of virtual design and construction. At present, commercial BIM software operates in a single-user environment, so the initial cost is high and the investment is frequently wasted. Cloud computing, a next-generation internet technology, enables simple internet devices (such as PCs, tablets and smartphones) to use the services and resources of BIM software. In this paper, we suggest a method of developing BIM software based on a cloud computing environment in order to expand the utilization of BIM and reduce the cost of BIM software. First, as benchmarking, we surveyed successful cases of BIM and cloud computing. We then analyzed the needs and opportunities for BIM and cloud computing in the AEC industry. Finally, we suggested the main functions of BIM software based on a cloud computing environment and developed a simple prototype of cloud computing BIM software for basic BIM model viewing.
Keywords: Construction IT, BIM (Building Information Modeling), Cloud Computing, BIM Service Based Cloud Computing, Viewer Based BIM Server, 3D Design.
191 The Role of Velocity Map Quality in Estimation of Intravascular Pressure Distribution
Authors: Ali Pashaee, Parisa Shooshtari, Gholamreza Atae, Nasser Fatouraee
Abstract:
Phase-contrast MR imaging methods are widely used for the measurement of blood flow velocity components, and other tools such as CT and ultrasound are also available for velocity map detection in intravascular studies. These data are used to derive flow characteristics, and some clinical applications use the pressure distribution in the diagnosis of intravascular disorders such as vascular stenosis. In this paper, an approach to the measurement of the intravascular pressure field from the velocity field obtained from flow images is proposed. The method uses an algorithm to solve the nonlinear Navier-Stokes equations, assuming blood to be an incompressible and Newtonian fluid. Flow images usually suffer from a lack of spatial resolution, so our attempt is to consider the effect of spatial resolution on the pressure distribution estimated by this method. In order to achieve this aim, the velocity map of a numerical phantom was derived at six different spatial resolutions. To determine the effects of vascular stenoses on the pressure distribution, a stenotic phantom geometry was considered. A comparison between the pressure distribution obtained from the phantom and the pressure resulting from the algorithm is presented. We also compared the effects of collocated and staggered computational grids on the pressure distribution resulting from this algorithm.
Keywords: Flow imaging, pressure distribution estimation, phantom, resolution.
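Under the stated incompressible Newtonian assumptions, pressure is typically recovered from a measured velocity field via the pressure Poisson equation, obtained by taking the divergence of the Navier-Stokes momentum equation; the form below is the standard one, not necessarily the authors' exact discretization.

```latex
% Pressure Poisson equation for incompressible flow (standard form):
% \rho - density, \mathbf{u} - measured (divergence-free) velocity field
\nabla^{2} p = -\rho\, \nabla \cdot \left[ (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right]
```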
190 A Case Study of Applying Virtual Prototyping in Construction
Authors: Stephen C. W. Kong
Abstract:
The use of 3D computer-aided design (CAD) models to support construction project planning has been increasing in recent years. 3D CAD models reveal more planning ideas by visually showing the construction site environment at different stages of the construction process. Using 3D CAD models together with scheduling software to prepare a construction plan can identify errors in process sequence and spatial arrangement, which is vital to the success of a construction project. A number of 4D (3D plus time) CAD tools have been developed and utilized in different construction projects due to the awareness of their importance. Virtual prototyping extends the idea of 4D CAD by integrating more features for simulating the real construction process. Virtual prototyping originates from the manufacturing industry, where the production of products such as cars and airplanes is virtually simulated in a computer before they are built in the factory. Virtual prototyping integrates 3D CAD, a simulation engine, analysis tools (such as structural analysis and collision detection), and a knowledge base to streamline the whole product design and production process. In this paper, we present the application of a virtual prototyping software package that has been used in a few construction projects in Hong Kong to support construction project planning. Specifically, the paper presents an implementation of virtual prototyping in a residential building project in Hong Kong. The applicability, difficulties and benefits of construction virtual prototyping are examined based on this project.
Keywords: construction project planning, prefabrication, simulation, virtual prototyping.
189 Adaptive Block State Update Method for Separating Background
Authors: Youngsuck Ji, Youngjoon Han, Hernsoo Hahn
Abstract:
In this paper, we propose a robust moving object detection method that handles lighting effects in night street images by updating a block-based reference background model using block-state analysis. The experimental images were acquired as a color video sequence from a stationary camera. When artificial illumination such as street lights or sign lights appears suddenly, the reference background model is updated with this information. Natural illumination generally changes gradually over time, whereas artificial illumination appears suddenly, so a two-stage process is used to detect artificial illumination precisely. The first stage compares the current image with the reference background block by block, identifying the changed blocks. The second stage compares the edge map of the current image with the edge map of the reference background image, making it possible to estimate the illumination in any block. This information enables accurate detection of objects and artificial illumination and yields a cleaner reference background. Blocks are classified by block-state analysis into four states: transient, stationary, background, and artificial illumination; Fig. 1 shows the characteristics of each block state [1]. Experimental results show that the presented approach works well in the presence of illumination variance.
Keywords: Block-state, Edge component, Reference background, Artificial illumination.
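A toy version of the two-stage block test described above: an intensity difference flags changed blocks, and an edge-map difference separates genuine objects from sudden artificial illumination. The block size and thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

BLOCK = 16  # illustrative block size

def block_states(frame, background, frame_edges, bg_edges,
                 t_intensity=12.0, t_edge=8.0):
    """Classify each block: 'background', 'illumination' (intensity changed
    but edge structure preserved) or 'object' (edge structure changed)."""
    h, w = frame.shape
    states = {}
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            sl = np.s_[y:y + BLOCK, x:x + BLOCK]
            d_int = np.abs(frame[sl] - background[sl]).mean()
            d_edge = np.abs(frame_edges[sl] - bg_edges[sl]).mean()
            if d_int < t_intensity:
                states[(y, x)] = "background"
            elif d_edge < t_edge:
                states[(y, x)] = "illumination"  # update the reference here
            else:
                states[(y, x)] = "object"
    return states

f = np.random.rand(64, 64) * 255
print(block_states(f, f * 0.9, np.zeros_like(f), np.zeros_like(f)))
```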
188 Obesity and Bone Mineral Density in Patients with Large Joint Osteoarthritis
Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Zaverukha, Roksolana Povoroznyuk
Abstract:
Along with the global aging of the population, the number of people with somatic diseases is increasing, including such interrelated pathologies as obesity, osteoarthritis (OA) and osteoporosis (OP). The objective of the study is to examine the connection between body mass index (BMI), OA, the bone mineral density (BMD) of the lumbar spine and femoral neck, and the trabecular bone score (TBS) in postmenopausal women with OA. We observed 359 postmenopausal women (50-89 years old) and divided them into four age groups: 50-59, 60-69, 70-79 and over 80 years old. In addition, according to the American College of Rheumatology (ACR) clinical classification criteria for knee and hip OA, we divided them into two groups: group I, 117 females with symptomatic OA (including 89 patients with knee OA and 28 patients with hip OA), and group II, 242 women with normal functional activity of the large joints. The analysis took into account BMI as classified by the World Health Organization (WHO); a diagnosis of obesity was established when BMI was above 30 kg/m2. Among women with obesity, symptomatic OA was detected in 44 postmenopausal women (41.1%), and normal functional activity of the large joints in 63 women (58.9%). Symptomatic OA was also detected in 73 women with normal BMI (29.0% of cases). According to a chi-squared (χ2) test, a significantly higher BMI was detected in postmenopausal women with OA (χ2 = 5.05, p = 0.02). Women with symptomatic OA had a significantly higher BMD of the lumbar spine compared with women who had normal functional activity of the large joints. No significant differences in femoral neck BMD or TBS were detected between the group with OA and the group with normal functional activity of the large joints.
Keywords: Bone mineral density, BMD, body mass index, BMI, obesity, overweight, postmenopausal women, osteoarthritis.
187 A Holographic Infotainment System for Connected and Driverless Cars: An Exploratory Study of Gesture Based Interaction
Authors: Nicholas Lambert, Seungyeon Ryu, Mehmet Mulla, Albert Kim
Abstract:
In this paper, an interactive in-car interface called HoloDash is presented. It is intended to provide information and infotainment in both autonomous vehicles and 'connected cars', vehicles equipped with Internet access via cellular services. The research focuses on the development of interactive avatars for this system and its gesture-based control. It is a case study in the development of a possible human-centred means of presenting a connected or autonomous vehicle's on-board diagnostics through a projected 'holographic' infotainment system, termed a Holographic Human Vehicle Interface (HHIV), as it utilises a dashboard projection unit and gesture detection. The research also examines the suitability of gestures in an automotive environment, given that the system might be used in both driver-controlled and driverless vehicles. Using human-centred design methods, questions were posed to test subjects and preferences were discovered in terms of the gesture interface and the user experience of passengers within the vehicle. These affirm the benefits of this mode of visual communication for both connected and driverless cars.
Keywords: Holographic interface, human-computer interaction, user-centered design, gestures.
186 End-to-End Spanish-English Sequence Learning Translation Model
Authors: Vidhu Mitha Goutham, Ruma Mukherjee
Abstract:
The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. While translation tasks increasingly require dynamic sequence learning, stable, cost-free open-source models are scarce. We survey and compare current translation techniques and propose a modified sequence-to-sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path to higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder, we use Fairseq tools to produce an end-to-end, bilingually trained Spanish-English machine translation model that includes source language detection. We acquire competitive results using a duo-lingo-corpus-trained model, providing prospective, ready-made plug-in use for compound sentences and document translations. Our model serves as a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, it also identifies itself as a well-optimized deep neural network model and solution.
Keywords: Attention, encoder-decoder, Fairseq, Seq2Seq, Spanish, translation.
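A heavily condensed sketch of the encoder-decoder shape described above (convolutional encoder, recurrent decoder) in PyTorch; attention, Fairseq tooling, tokenization and training are omitted, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 64, 128  # illustrative sizes

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.Conv1d(EMB, HID, kernel_size=3, padding=1)  # CNN encoder
        self.tgt_emb = nn.Embedding(VOCAB, EMB)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)             # RNN decoder
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt):
        enc = self.encoder(self.src_emb(src).transpose(1, 2))  # (B, HID, S)
        h0 = enc.mean(dim=2).unsqueeze(0)     # pooled source context as h0
        dec, _ = self.decoder(self.tgt_emb(tgt), h0)
        return self.out(dec)                  # (B, T, VOCAB) logits

model = TinySeq2Seq()
src = torch.randint(0, VOCAB, (2, 7))   # Spanish token ids (placeholder)
tgt = torch.randint(0, VOCAB, (2, 5))   # English token ids (placeholder)
print(model(src, tgt).shape)            # torch.Size([2, 5, 1000])
```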
185 Thermographic Tests of Curved GFRP Structures with Delaminations: Numerical Modelling vs. Experimental Validation
Authors: P. D. Pastuszak
Abstract:
The present work is devoted to thermographic studies of curved composite panels (unidirectional GFRP) with subsurface defects. Various artificial defects, created by inserting PTFE strips between individual layers of the laminate during the manufacturing stage, are studied. The analysis is conducted both with the finite element method and with experiments. To simulate transient heat transfer in a 3D model with embedded defects of various sizes, the ANSYS package is used. Pulsed thermography combined with an optical excitation source provides good results for flat surfaces, but composite structures are mostly used in complex components, e.g., pipes, corners and stiffeners, where a local decrease of mechanical properties can significantly reduce the strength of the entire structure. The application of active thermography procedures to defect detection and evaluation in this type of element seems more appropriate than other NDT techniques. Nevertheless, there are various uncertainties connected with the correct interpretation of the acquired data. In this paper, important factors concerning infrared thermography measurements of curved surfaces in the form of cylindrical panels are considered. In addition, temperature effects on the surface resulting from the complex geometry and from embedded and real defects are also presented.
Keywords: Active thermography, finite element analysis, composite, curved structures, defects.
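For readers new to pulsed thermography, the surface temperature of a defect-free half-space after an instantaneous heat pulse follows the classical one-dimensional solution below; deviations from this decay over a subsurface defect are what the inspection exploits. This is the standard textbook result, not a formula from the paper.

```latex
% Classical 1D surface response to an instantaneous pulse (Parker's solution):
% Q - absorbed energy per unit area,
% e = \sqrt{k \rho c} - thermal effusivity of the laminate
\Delta T(t) = \frac{Q}{e\,\sqrt{\pi t}}
```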
184 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers' needs in this strong market, especially the needs of those who are looking to change their service providers. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish it; churn prediction has become a very important machine learning classification topic in the sector. Understanding the factors of customer churn and how customers behave is essential to building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers' churn based on their past service usage history. Toward this objective, the study makes use of feature selection, normalization, and feature engineering. It then compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting, with performance evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows better results: Gradient Boosting with the feature selection technique outperformed the others, achieving a 99% F1 score and 99% AUC, and all other experiments achieved good results as well.
Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.
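A compact sketch of the winning configuration described above (normalization, feature selection, Gradient Boosting, scored with F1 and ROC-AUC) in scikit-learn; the synthetic data and the k=10 selection are placeholders for the Orange dataset and the study's actual choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)   # stand-in for the Orange data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Normalization -> feature selection -> Gradient Boosting, as in the study.
model = make_pipeline(MinMaxScaler(),
                      SelectKBest(f_classif, k=10),   # illustrative k
                      GradientBoostingClassifier(random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
proba = model.predict_proba(X_te)[:, 1]
print(f"F1  = {f1_score(y_te, pred):.3f}")
print(f"AUC = {roc_auc_score(y_te, proba):.3f}")
```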