Search results for: edge detection
68 A Study of RSCMAC Enhanced GPS Dynamic Positioning
Authors: Ching-Tsan Chiang, Sheng-Jie Yang, Jing-Kai Huang
Abstract:
The purpose of this research is to develop and apply the RSCMAC to enhance the dynamic accuracy of the Global Positioning System (GPS). GPS devices provide accurate positioning, speed detection and a highly precise time standard for over 98% of the earth's surface. The overall operation of the GPS includes 24 satellites in space; signal transmission over two frequency carrier waves (Link 1 and Link 2) and two sets of pseudorandom codes (C/A code and P code); and on-earth monitoring stations or client GPS receivers. With only four satellites in view, the client position and its elevation can be determined rapidly, and the more satellites that are receivable, the more accurately the position can be decoded. The standard positioning accuracy of simplified GPS receivers has improved greatly, but because of satellite clock error, tropospheric delay and ionospheric delay, current measurement accuracy is on the level of 5-15 m. To increase dynamic GPS positioning accuracy, most researchers rely on an inertial navigation system (INS) or on additional sensors or maps for assistance. This research exploits the RSCMAC advantages of fast learning, guaranteed learning convergence and the ability to handle time-related dynamic system problems, together with a static positioning calibration structure, to improve GPS dynamic accuracy. The improvement is achieved by using the RSCMAC with GPS receivers to collect dynamic error data, predict the error, and then use the predicted error to correct the GPS dynamic positioning data. The ultimate purpose of this research is to reduce the dynamic positioning error of low-cost GPS receivers, so that economic benefits are enhanced while accuracy is increased.
Keywords: Dynamic Error, GPS, Prediction, RSCMAC.
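The correct-by-predicted-error idea described above can be illustrated with a minimal sketch in which an ordinary least-squares predictor stands in for the RSCMAC (the network itself is not reproduced here); the feature choices, data and dimensions are illustrative assumptions only.

```python
import numpy as np

# Hypothetical training set: receiver features at each epoch (e.g. speed,
# heading change, number of visible satellites) and the dynamic position
# error measured against a reference trajectory.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
e_train = X_train @ np.array([0.8, -0.3, 0.1]) + 0.05 * rng.normal(size=500)

# Stand-in for the RSCMAC: a linear least-squares error predictor.
W, *_ = np.linalg.lstsq(X_train, e_train, rcond=None)

def corrected_position(raw_position, features):
    """Subtract the predicted dynamic error from the raw GPS output."""
    return raw_position - features @ W

# One new epoch (illustrative numbers).
print(corrected_position(120.53, rng.normal(size=3)))
```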
67 Progressive AAM Based Robust Face Alignment
Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim
Abstract:
AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. If the initial values are even a little distant from the global optimum, there is a good possibility that AAM-based face alignment will converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face and then localizes the feature points of the whole face using this information. The proposed algorithm exploits the fact that the feature points of the inner part of the face are less variable and less affected by the background surrounding the face than those of the outer part (such as the chin contour). It consists of two stages: a modeling and relation-derivation stage and a fitting stage. The modeling and relation-derivation stage first constructs two AAM models, the inner-face AAM model and the whole-face AAM model, and then derives a relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. In the fitting stage, the algorithm aligns the face progressively through two phases. In the first phase, it finds the feature parameter vector fitting the inner-face AAM model to a new input face image; in the second phase, it localizes the whole facial feature points of the new input image based on the whole-face AAM model, using the initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Experiments verify that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.
Keywords: Face Alignment, AAM, facial feature detection, model matching.
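A minimal numpy sketch of the relation-derivation step described above: a least-squares relation matrix maps a fitted inner-face parameter vector to an initial whole-face parameter vector for the second fitting phase. The paired parameter vectors, dimensions and helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, d_inner, d_whole = 200, 10, 20

# Hypothetical paired AAM parameter vectors from the modeling stage:
# P_inner[i] fits the inner-face model, P_whole[i] fits the whole-face model.
P_inner = rng.normal(size=(n_train, d_inner))
P_whole = rng.normal(size=(n_train, d_whole))

# Relation matrix R (with bias) via least squares: p_whole ~ [p_inner, 1] @ R
A = np.hstack([P_inner, np.ones((n_train, 1))])
R, *_ = np.linalg.lstsq(A, P_whole, rcond=None)

def initial_whole_params(p_inner_fit):
    """Phase-2 initialization: map the fitted inner parameters to a starting
    whole-face parameter vector before the final AAM search."""
    return np.append(p_inner_fit, 1.0) @ R

print(initial_whole_params(rng.normal(size=d_inner)).shape)  # (20,)
```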
66 Data Privacy and Safety with Large Language Models
Authors: Ashly Joseph, Jithu Paulose
Abstract:
Large language models (LLMs) have revolutionized natural language processing capabilities, enabling applications such as chatbots, dialogue agents, and image and video generators. Nevertheless, their training on extensive datasets comprising personal information poses notable privacy and safety hazards. This study examines methods for addressing these challenges, specifically focusing on approaches to enhance the security of LLM outputs, safeguard user privacy, and adhere to data protection rules. We explore several methods, including post-processing detection algorithms, content filtering, reinforcement learning from human and AI feedback, and the difficulties of maintaining a balance between model safety and performance. The study also emphasizes the dangers of unintentional data leakage, privacy issues related to user prompts, and the possibility of data breaches. We highlight the significance of corporate data governance rules and best practices for engaging with chatbots. In addition, we analyze the development of data protection frameworks, evaluate the adherence of LLMs to the General Data Protection Regulation (GDPR), and examine privacy legislation in academic and business policies. We demonstrate the difficulties and remedies involved in preserving data privacy and security in the age of sophisticated artificial intelligence by employing case studies and real-life instances. This article seeks to educate stakeholders on practical strategies for improving the security and privacy of LLMs, while also assuring their responsible and ethical implementation.
Keywords: Data privacy, large language models, artificial intelligence, machine learning, cybersecurity, general data protection regulation, data safety.
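As one concrete instance of the post-processing detection and content-filtering measures mentioned above, the sketch below redacts a few common personal identifiers from an LLM response before it reaches the user. The regex patterns and redaction policy are illustrative assumptions, not a complete or recommended safeguard.

```python
import re

# Illustrative patterns for common personal identifiers; a production filter
# would use vetted detectors (and likely a named-entity model) instead.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_llm_output(text: str) -> str:
    """Post-process an LLM response, replacing detected identifiers."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact_llm_output("Contact jane.doe@example.com or +44 20 7946 0958."))
```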
65 Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in non-destructive testing and structural health monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier, owing to their capability to propagate over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the frequency bandwidth reliable for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as threshold choice, number of cycles per bit and bit rate are optimized. Experimental results are compared based on the average bit error percentage. The results show high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a decrease in bit rate. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.
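A minimal sketch of the coherent BPSK demodulation step described above: the received Lamb-wave signal is mixed with a synchronized reference carrier, integrated over each bit period, and the sign of each correlation decides the bit. The sampling rate, carrier frequency and cycles per bit are assumed values, not the experimental settings.

```python
import numpy as np

fs, fc, cycles_per_bit = 5e6, 250e3, 10          # assumed sampling/carrier setup
samples_per_bit = int(fs * cycles_per_bit / fc)

def bpsk_modulate(bits):
    t = np.arange(len(bits) * samples_per_bit) / fs
    phase = np.repeat(np.where(np.array(bits) == 1, 0.0, np.pi), samples_per_bit)
    return np.cos(2 * np.pi * fc * t + phase)

def bpsk_demodulate(rx):
    t = np.arange(len(rx)) / fs
    mixed = rx * np.cos(2 * np.pi * fc * t)        # coherent reference carrier
    # Integrate (sum) over each bit period and decide on the sign.
    correlations = mixed.reshape(-1, samples_per_bit).sum(axis=1)
    return (correlations > 0).astype(int)

bits = [1, 0, 1, 1, 0]
rx = bpsk_modulate(bits) + 0.3 * np.random.default_rng(2).normal(size=len(bits) * samples_per_bit)
print(bpsk_demodulate(rx))                          # recovers [1 0 1 1 0]
```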
64 Multipath Routing Protocol Using Basic Reconstruction Routing (BRR) Algorithm in Wireless Sensor Network
Authors: K. Rajasekaran, Kannan Balasubramanian
Abstract:
A sensor network consists of multiple detection locations called sensor nodes, each of which is tiny, lightweight and portable. Single-path routing protocols in wireless sensor networks can lead to holes in the network, since only the nodes present on the single path are used for data transmission. Apart from advantages such as reduced computation, complexity and resource utilization, they have drawbacks such as reduced throughput, increased traffic load and delayed data delivery. Therefore, multipath routing protocols are preferred for WSNs. Distributing the traffic among multiple paths increases the network lifetime. We propose a scheme in which data are transmitted through a dominant path to save energy. To obtain a high delivery ratio, a basic route reconstruction protocol is utilized to reconstruct the path whenever a failure is detected. A basic reconstruction routing (BRR) algorithm is proposed, in which a node can leap over a path failure by using routing information that already exists in its neighbourhood while the gathered data are transmitted from the source to the sink. To save energy and attain a high data delivery ratio, data are transmitted along multiple paths, which is achieved by the BRR algorithm whenever a failure is detected. Further, an analysis of how the proposed protocol overcomes the drawbacks of existing protocols is presented. The performance of our protocol is compared to AOMDV and the energy-efficient node-disjoint multipath routing protocol (EENDMRP). The system is implemented using NS-2.34. The simulation results show that the proposed protocol achieves a high delivery ratio with low energy consumption.
Keywords: Multipath routing, WSN, energy efficient routing, alternate route, assured data delivery.
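The path-repair idea can be sketched as follows: when a node on the dominant path detects a failed next hop, it falls back to a route built from locally cached neighbour links instead of re-initiating route discovery from the source. The topology, failure set and helper functions are hypothetical, and the breadth-first search is a simple stand-in for the BRR repair step.

```python
from collections import deque

# Hypothetical WSN topology (adjacency lists), dominant path and a detected failure.
links = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"],
         "C": ["A", "B", "sink"], "sink": ["C"]}
dominant_path = ["S", "A", "C", "sink"]
failed = {("A", "C")}

def neighbour_route(node, sink, links, failed):
    """Rebuild a route from `node` using only cached neighbour links,
    skipping links known to have failed (stand-in for the BRR repair step)."""
    queue, seen = deque([[node]]), {node}
    while queue:
        path = queue.popleft()
        if path[-1] == sink:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and (path[-1], nxt) not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def forward(path, links, failed, sink="sink"):
    """Follow the dominant path, leaping over a failed hop via neighbour info."""
    route, i = list(path), 0
    while route[i] != sink:
        if (route[i], route[i + 1]) in failed:          # failure detected here
            route = route[:i + 1] + neighbour_route(route[i], sink, links, failed)[1:]
        i += 1
    return route

print(forward(dominant_path, links, failed))   # ['S', 'A', 'S', 'B', 'C', 'sink']
```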
63 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks
Authors: B. Golchin, N. Riahi
Abstract:
One of the issues that has received much attention in recent years is recognizing the sentiments and emotions in social media texts. The analysis of sentiments and emotions aims to recognize conceptual information such as the opinions, feelings, attitudes and emotions of people towards products, services, organizations, people, topics, events and features in written text, which indicates the size of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions and inclinations of people about their own products, services, or events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. On this social network, users share their information and opinions about personal issues, policies, products, events, etc., and its readily available data can be labelled with emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage, given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two Bidirectional Long Short-Term Memory networks and a convolutional network. The results of this study, with an average accuracy of 93%, show that the proposed framework yields good results and improved accuracy compared to previous work.
Keywords: emotion classification, sentiment analysis, social networks, deep neural networks
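A minimal Keras sketch of a four-class tweet classifier combining bidirectional LSTM layers with a convolutional block, in the spirit of the architecture described above; the vocabulary size, sequence length and layer widths are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, NUM_CLASSES = 20000, 50, 4      # assumed hyperparameters

model = models.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # four emotion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(padded_token_ids, emotion_labels, validation_split=0.1, epochs=5)
```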
62 Optimization and Validation for Determination of VOCs from Lime Fruit Citrus aurantifolia (Christm.) with and without California Red Scale Aonidiella aurantii (Maskell) Infested by Using HS-SPME-GC-FID/MS
Authors: K. Mohammed, M. Agarwal, J. Mewman, Y. Ren
Abstract:
An optimized technique has been developed for extracting the volatile organic compounds that contribute to the aroma of lime fruit (Citrus aurantifolia). The volatile organic compounds of healthy lime fruit and fruit infested with California red scale Aonidiella aurantii were characterized using headspace solid-phase microextraction (HS-SPME) combined with gas chromatography (GC) coupled with flame ionization detection (FID) and gas chromatography with mass spectrometry (GC-MS), as a simple, efficient and non-destructive extraction method. A three-phase 50/30 μm PDMS/DVB/CAR fibre was used for the extraction process. The optimal sealing and fibre exposure times for volatiles from whole lime fruit to reach equilibrium in the headspace of the chamber were 16 and 4 hours, respectively, and 5 min was selected as the desorption time of the three-phase fibre. Herbivore activity induces indirect plant defenses, such as the emission of herbivore-induced plant volatiles (HIPVs), which can be used by natural enemies for host location. GC-MS analysis showed qualitative differences among the volatiles emitted by infested and healthy lime fruit. The GC-MS analysis allowed the initial identification of 18 compounds, with similarities higher than 85% in accordance with the NIST mass spectral library. One of these, D-limonene, was increased by A. aurantii infestation, and three, undecane, α-farnesene and 7-epi-α-selinene, were decreased. From an applied point of view, the application of the above-mentioned VOCs may help boost the efficiency of biocontrol programs and natural enemies' production techniques.
Keywords: Lime fruit, Citrus aurantifolia, California red scale, Aonidiella aurantii, VOCs, HS-SPME/GC-FID-MS.
61 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
60 Non-Destructive Testing of Carbon Fiber Reinforced Plastic by Infrared Thermography Methods
Authors: W. Swiderski
Abstract:
Composite materials are one answer to the growing demand for materials with better structural and service properties. Composite materials also permit the deliberate shaping of desirable properties beyond what can be reached with metals, ceramics or polymers alone. In recent years, composite materials have been used widely in aerospace, energy, transportation, medicine, etc. Fiber-reinforced composites, including carbon fiber, glass fiber and aramid fiber, have become major structural materials. A typical defect arising during manufacture and operation is delamination damage of layered composites. When delamination damage spreads, it may lead to fracture of the composite. One of the many methods used in non-destructive testing of composites is active infrared thermography. In active thermography, it is necessary to deliver energy to the examined sample in order to obtain significant temperature differences indicating the presence of subsurface anomalies. To detect possible defects in composite materials, different methods of thermal stimulation can be applied to the tested material, including heating lamps, lasers, eddy currents, microwaves or ultrasound. The use of a suitable source of thermal stimulation can have a decisive influence on whether defects are detected or missed. Samples of multilayer carbon composite structures were prepared with deliberately introduced defects for comparative purposes. Very thin defects of different sizes and shapes, made of 0.1 mm thick Teflon or copper inserts, were screened. Non-destructive testing was carried out using the following sources of thermal stimulation: heating lamp, flash lamp, ultrasound and eddy currents. The results are reported in the paper.
Keywords: Non-destructive testing, IR thermography, composite material, thermal stimulation.
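Active-thermography sequences are commonly processed by Fourier-transforming each pixel's temperature history and inspecting the resulting phase images (pulse-phase thermography). The sketch below illustrates that generic step on a synthetic thermogram stack; it is not necessarily the exact processing used in the paper, and the array shapes and frequency bin are illustrative.

```python
import numpy as np

# Hypothetical thermogram sequence: (frames, height, width) after flash heating.
rng = np.random.default_rng(3)
frames = rng.normal(size=(256, 64, 64)) + np.linspace(1.0, 0.2, 256)[:, None, None]

# FFT along the time axis for every pixel.
spectrum = np.fft.rfft(frames, axis=0)

# Phase image at a chosen low frequency bin: subsurface defects tend to show up
# as local phase contrast relative to sound material.
phase_image = np.angle(spectrum[3])
amplitude_image = np.abs(spectrum[3])

print(phase_image.shape, amplitude_image.shape)    # (64, 64) (64, 64)
```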
59 Investigation of Combined use of MFCC and LPC Features in Speech Recognition Systems
Authors: K. R. Aida-Zade, C. Ardil, S. S. Rustamov
Abstract:
The paper states the automatic speech recognition problem, the purpose of speech recognition and its application fields. The principles of building a speech recognition system for Azerbaijani speech and the problems arising in such a system are investigated. The algorithms for computing speech features, the main part of a speech recognition system, are analyzed. From this point of view, algorithms for determining the Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) coefficients, which express the basic speech features, are developed. Combined use of MFCC and LPC cepstra in the speech recognition system is suggested to improve its reliability. To this end, the recognition system is divided into MFCC-based and LPC-based recognition subsystems. The training and recognition processes are carried out in both subsystems separately, and the recognition system accepts a decision only when the results of the two subsystems coincide, which decreases the error rate during recognition. The training and recognition processes are realized by artificial neural networks in the automatic speech recognition system, and the neural networks are trained by the conjugate gradient method. The paper investigates the problems observed with respect to the number of speech features when training the neural networks of the MFCC- and LPC-based speech recognition subsystems. The variation in the results of neural networks trained from different initial points is analyzed. A methodology for the combined use of neural networks trained from different initial points is suggested to improve the reliability of the recognition system and increase recognition quality, and the practical results obtained are shown.
Keywords: Speech recognition, cepstral analysis, voice activation detection algorithm, Mel Frequency Cepstral Coefficients, features of speech, Cepstral Mean Subtraction, neural networks, Linear Predictive Coding.
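A minimal sketch of the agreement-based decision rule described above: two subsystems, one fed MFCC features and one fed LPC features, must return the same label for a word to be accepted. Feature extraction is sketched with librosa (assumed available) and scikit-learn MLPs stand in for the conjugate-gradient-trained networks; the placeholder training data are random and purely illustrative.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(y, sr):
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

def lpc_features(y, order=12):
    return librosa.lpc(y, order=order)[1:]         # drop the leading coefficient (1.0)

# Placeholder "training": random feature vectors and labels, purely so the
# sketch runs end to end; real subsystems are trained on labelled utterances.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=40)
clf_mfcc = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(rng.normal(size=(40, 13)), labels)
clf_lpc = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(rng.normal(size=(40, 12)), labels)

def recognize(y, sr):
    """Accept a word only when the MFCC and LPC subsystems agree, else reject."""
    w_mfcc = clf_mfcc.predict([mfcc_features(y, sr)])[0]
    w_lpc = clf_lpc.predict([lpc_features(y)])[0]
    return w_mfcc if w_mfcc == w_lpc else None     # rejection lowers the error rate

print(recognize(rng.normal(size=16000).astype(np.float32), 16000))
```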
58 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network
Authors: Z. Abdollahi Biron, P. Pisu
Abstract:
Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange their information with each other and with the infrastructure. Although this interconnection of vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges accompany this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management systems of the individual vehicles and the overall system. Such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect an individual vehicle's safe operation and in turn create a potentially unsafe node in the vehicular network. Further, sending faulty sensor information to other vehicles, and failures in actuators, may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Co-operative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Further, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
Keywords: Fault diagnostics, communication network, connected vehicles, packet drop out, platoon.
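A minimal sketch of the Kalman-filter idea used to suppress lost information: when a packet from the preceding vehicle is dropped, only the prediction step runs; when a measurement arrives, the usual update is applied. The one-dimensional gap/relative-speed model and noise covariances are illustrative assumptions.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])              # state: [gap, relative speed]
H = np.array([[1.0, 0.0]])                         # only the gap is measured
Q, R = 0.01 * np.eye(2), np.array([[0.25]])

x, P = np.array([[20.0], [0.0]]), np.eye(2)

def step(x, P, measurement=None):
    """One estimation step; `measurement=None` models a dropped packet."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    if measurement is not None:                    # update only if the packet arrived
        y = measurement - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x, P = x + K @ y, (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(4)
for k in range(50):
    z = None if rng.random() < 0.3 else np.array([[20.0 + 0.5 * rng.normal()]])
    x, P = step(x, P, z)
print(x.ravel())
```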
57 Clustering for Detection of Population Groups at Risk from Anticholinergic Medication
Authors: Amirali Shirazibeheshti, Tarik Radwan, Alireza Ettefaghian, Farbod Khanizadeh, George Wilson, Cristina Luca
Abstract:
Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To further assess this, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to cluster patients into multiple risk groups according to anticholinergic burden scores of multiple medicines prescribed to patients to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature which categorizes the risk on a scale of 1 to 3. Given the patients’ prescription data on the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters that represent different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medication. Our findings show that this group of patients is located within more deprived areas of London compared to the population of other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed to female than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system that is well-equipped with tools implementing appropriate techniques of artificial intelligence.
Keywords: Anticholinergic medication, socioeconomic status, deprivation, clustering, risk analysis.
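A minimal sketch of the pipeline described above: a weighted anticholinergic burden score is derived per patient from their prescriptions and the scores are grouped with scikit-learn's mean-shift clustering. The burden table, weighting rule and prescription records are illustrative placeholders, not the study's data.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Illustrative anticholinergic burden scores (scale 1-3); drug names and values
# are placeholders standing in for the literature-derived table.
burden = {"amitriptyline": 3, "oxybutynin": 3, "loperamide": 1, "ranitidine": 1}

# Hypothetical prescription records: drug -> number of issues per patient.
patients = [
    {"amitriptyline": 6, "loperamide": 2},
    {"ranitidine": 1},
    {"oxybutynin": 4, "amitriptyline": 3},
    {"loperamide": 1},
]

def weighted_risk(prescriptions):
    """Burden score weighted by how often each drug was prescribed."""
    return sum(burden[drug] * n for drug, n in prescriptions.items())

scores = np.array([[weighted_risk(p)] for p in patients], dtype=float)
risk_group = MeanShift().fit_predict(scores)       # unsupervised risk grouping
print(list(zip(scores.ravel(), risk_group)))
```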
56 Lightweight and Seamless Distributed Scheme for the Smart Home
Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro
Abstract:
Security of the smart home in terms of behavior activity pattern recognition is a distinct issue compared to the security issues of other scenarios. Sensor devices (of low and high capacity) interact and negotiate with each other by detecting the daily behavior activities of individuals to execute common tasks. Once a device (e.g., surveillance camera, smartphone or light detection sensor) is compromised, an adversary can gain access to that specific device and can disrupt daily behavior activities by altering the data and commands. In this scenario, a group of common instruction processes may become involved and generate a deadlock. Therefore, an effective and suitable security solution is required for the smart home architecture. This paper proposes a seamless distributed scheme that fortifies wireless devices of low computational capability for secure communication. The proposed scheme is based on a lightweight session-key process that maintains a cryptographic link for the trajectory by recognizing individuals' behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Analysis of experiments reveals that the proposed scheme strengthens the devices against device seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of LCS, and keeps the network free of deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme is efficient in terms of computation and communication.
Keywords: Authentication, key-session, security, wireless sensors.
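A minimal sketch of a lightweight token check of the general kind described: a per-session key is derived from a pre-shared secret, a device identity and a fresh nonce, and short messages are authenticated with an HMAC, which is inexpensive enough for low-capacity sensors. The key-derivation inputs and message format are assumptions; this is not the paper's exact protocol.

```python
import hashlib
import hmac
import os

PRE_SHARED_SECRET = b"factory-provisioned-secret"     # illustrative provisioning

def derive_session_key(device_id: bytes, nonce: bytes) -> bytes:
    """Per-session key bound to the device identity and a fresh nonce."""
    return hashlib.sha256(PRE_SHARED_SECRET + device_id + nonce).digest()

def make_token(session_key: bytes, message: bytes) -> bytes:
    return hmac.new(session_key, message, hashlib.sha256).digest()

def verify(session_key: bytes, message: bytes, token: bytes) -> bool:
    return hmac.compare_digest(make_token(session_key, message), token)

nonce = os.urandom(16)
key = derive_session_key(b"LCS-042", nonce)
msg = b"motion:hallway:1"
token = make_token(key, msg)
print(verify(key, msg, token))          # True; a tampered message would fail
```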
55 Computational Feasibility Study of a Torsional Wave Transducer for Tissue Stiffness Monitoring
Authors: Rafael Muñoz, Juan Melchor, Alicia Valera, Laura Peralta, Guillermo Rus
Abstract:
A torsional piezoelectric ultrasonic transducer design is proposed to measure shear moduli in soft tissue with direct access availability, using the shear wave elastography technique. The measurement of shear moduli of tissues is a challenging problem, mainly because of a) the difficulty of isolating a pure shear wave, given the interference of multiple waves of different types (P, S, even guided) emitted by the transducers and reflected at geometric boundaries, and b) the highly attenuating nature of soft tissue. An immediate application that overcomes these drawbacks is the measurement of changes in cervix stiffness to estimate the gestational age at delivery. The design has been optimized using a finite element model (FEM) and a semi-analytical estimator of the probability of detection (POD) to determine a suitable geometry, materials and generated waves. The technique is based on measuring the time of flight between emitter and receiver to infer the shear wave velocity. Current research is centered on prototype testing and validation. The geometric optimization of the transducer was able to annihilate the compressional wave emission, generating a quite pure torsional shear wave. Mechanical and electromagnetic coupling between emitter and receiver signals is currently the research focus. Conclusions: the design overcomes the main problems described. The almost pure torsional shear wave, along with the short time of flight, avoids the possibility of multiple-wave interference. The short propagation distance reduces the effect of attenuation and allows the emission of very low energies, assuring good biological safety for human use.
Keywords: Cervix ripening, preterm birth, shear modulus, shear wave elastography, soft tissue, torsional wave.
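A minimal sketch of the time-of-flight measurement on which the design relies: cross-correlate the received torsional burst with the emitted one, take the lag of the correlation peak as the time of flight, and divide the propagation distance by it to obtain the shear wave speed. The waveform, distance and sampling rate are illustrative values chosen to be plausible for soft tissue.

```python
import numpy as np

fs, f0, distance_m = 100e3, 1e3, 0.015     # assumed sampling rate, burst frequency, path length
t = np.arange(0, 2e-3, 1 / fs)
burst = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)   # emitted torsional tone burst

true_delay_s = 5e-3                                        # unknown in practice
delay = int(true_delay_s * fs)
rx = np.zeros(delay + burst.size + 300)
rx[delay:delay + burst.size] += 0.4 * burst                # attenuated arrival
rx += 0.02 * np.random.default_rng(5).normal(size=rx.size)

corr = np.correlate(rx, burst, mode="full")
lag = np.argmax(corr) - (burst.size - 1)                   # samples between emission and arrival
tof = lag / fs
print(f"time of flight = {tof*1e3:.2f} ms, shear speed = {distance_m / tof:.2f} m/s")
```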
54 Combination of Different Classifiers for Cardiac Arrhythmia Recognition
Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari
Abstract:
This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS-complex geometrical feature extraction method and a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images, and each of them is divided into eight polar sectors. Then, the curve length of each excerpted segment is calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) network and three Multi-Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies, was designed and implemented. The new proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis), the discrimination power of the classifier in isolating the different beat types of each record was assessed, and an average accuracy of Acc = 98.51% was obtained. The proposed method was also applied to six arrhythmia classes (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), and an average value of Acc = 95.6% was achieved. To evaluate the performance quality of the new proposed hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
Keywords: Feature extraction, curve length method, Support Vector Machine, Learning Vector Quantization, Multi-Layer Perceptron, fusion (hybrid) classification, arrhythmia classification, supervised learning machine.
53 A Damage Level Assessment Model for Extra High Voltage Transmission Towers
Authors: Huan-Chieh Chiu, Hung-Shuo Wu, Chien-Hao Wang, Yu-Cheng Yang, Ching-Ya Tseng, Joe-Air Jiang
Abstract:
Power failure resulting from tower collapse due to violent seismic events can bring enormous and inestimable losses. The Chi-Chi earthquake, for example, struck Taiwan strongly on September 21, 1999 and caused huge damage to the power system: nearly 10% of extra high voltage (EHV) transmission towers were damaged in the earthquake. Therefore, the seismic hazards of EHV transmission towers should be monitored and evaluated. The ultimate goal of this study is to establish a damage level assessment model for EHV transmission towers. Earthquake data provided by the Taiwan Central Weather Bureau serve as a reference and lay the foundation for the subsequent earthquake simulations and analyses. Once an earthquake occurs, parameters related to the damage level of each point of an EHV tower are simulated and analyzed using the data from monitoring stations. Through the Fourier transform, the seismic wave is analyzed and decomposed into its frequency components, and the data are shown as a response spectrum. With this method, the seismic frequencies that damage EHV towers the most are clearly identified. An estimation model is built to determine the damage level caused by a future seismic event. Finally, instead of relying on visual observation by inspectors, the proposed model can provide a power company with damage information for a transmission tower. Using the model, the manpower required for visual observation can be reduced, and the accuracy of the damage level estimation can be substantially improved. Such a model is greatly useful for health and construction monitoring because of its advantages of long-term evaluation of structural characteristics and long-term damage detection.
Keywords: Smart grid, EHV transmission tower, response spectrum, damage level monitoring.
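A minimal sketch of the frequency-analysis step: Fourier-transform a ground-acceleration record and locate the frequency carrying the most energy, the kind of spectral information the assessment model relates to tower damage. The synthetic record and sampling rate are illustrative.

```python
import numpy as np

fs = 100.0                                        # samples per second (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(6)
# Synthetic ground acceleration: a dominant 1.5 Hz component plus broadband noise.
accel = np.sin(2 * np.pi * 1.5 * t) * np.exp(-t / 20) + 0.3 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)

dominant = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
print(f"dominant seismic frequency ~ {dominant:.2f} Hz")
```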
52 Supervisory Controller with Three-State Energy Saving Mode for Induction Motor in Fluid Transportation
Authors: O. S. Ebrahim, K. O. Shawky, M. O. Ebrahim, P. K. Jain
Abstract:
An induction motor (IM) driving a pump is the main consumer of electricity in a typical fluid transportation system (FTS). Changing the connection of the stator windings from delta to star at no load can achieve noticeable active and reactive energy savings. This paper proposes a supervisory hysteresis liquid-level control with a three-state energy saving mode (ESM) for the IM in an FTS including a storage tank. The IM pump drive comprises a modified star/delta switch and a hydromantic coupler. The three-state ESM is defined alongside normal running and named by analogy with computer ESMs as follows: a sleeping mode in which the motor runs at no load with the delta stator connection, a hibernate mode in which the motor runs at no load with the star connection, and motor shutdown as the third energy saver mode. A logic flow chart is synthesized to select the motor state at no load for the best energetic cost reduction, considering the motor thermal capacity used. An artificial neural network (ANN) state estimator, based on the recurrent architecture, is constructed and trained in order to provide fault-tolerant capability for the supervisory controller. Wald's sequential test is used for sensor fault detection. Theoretical analysis, preliminary experimental testing and computer simulations are performed to show the effectiveness of the proposed control in terms of reliability, power quality and energy/coenergy cost reduction, with the suggestion of power factor correction.
Keywords: Artificial Neural Network, ANN, Energy Saving Mode, ESM, Induction Motor, IM, star/delta switch, supervisory control, fluid transportation, reliability, power quality.
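Wald's sequential test for sensor fault detection can be sketched as follows: accumulate the log-likelihood ratio of successive residuals under healthy versus faulty hypotheses and decide once the sum crosses thresholds set by the chosen error probabilities. The Gaussian residual model and all parameter values are illustrative assumptions.

```python
import numpy as np

alpha, beta = 0.01, 0.01                   # false-alarm and missed-detection rates
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
sigma, bias_faulty = 0.2, 0.5              # residual std; mean shift under a fault

def sprt(residuals):
    """Wald sequential test on sensor residuals; returns decision and sample count."""
    llr = 0.0
    for k, r in enumerate(residuals, start=1):
        # log-likelihood ratio of N(bias_faulty, sigma^2) vs N(0, sigma^2)
        llr += (r * bias_faulty - 0.5 * bias_faulty**2) / sigma**2
        if llr >= A:
            return "faulty", k
        if llr <= B:
            return "healthy", k
    return "undecided", len(residuals)

rng = np.random.default_rng(7)
print(sprt(rng.normal(0.0, sigma, 200)))                 # healthy sensor
print(sprt(rng.normal(bias_faulty, sigma, 200)))         # biased (faulty) sensor
```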
51 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru
Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar
Abstract:
Nowadays, Heritage Building Information Modeling (HBIM) is considered an efficient tool to represent and manage information on Cultural Heritage (CH). The basis of this tool relies on a 3D model generally obtained from a Cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired Level of Development (LOD), Level of Information (LOI) and Grade of Generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using the Recap Pro, Revit and Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings) and architectural (e.g., cornices, moldings and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added to the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI and GOG levels. In addition, the easy implementation of the method, as well as the use of only one BIM software package with its respective plugin for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.
Keywords: Cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit.
50 Land Use Land Cover Changes in Response to Urban Sprawl within North-West Anatolia, Turkey
Authors: Melis Inalpulat, Levent Genc
Abstract:
In the present study, an attempt was made to quantify the Land Use Land Cover (LULC) transformation over three decades around the urban regions of the Balıkesir, Bursa, and Çanakkale provincial centers (PCs) in Turkey. Landsat imagery acquired in 1984, 1999 and 2014 was used to determine the LULC change. Images were classified using the supervised classification technique, and five main LULC classes were considered: forest (F), agricultural land (A), residential area (urban) - bare soil (R-B), water surface (W), and other (O). Change detection analyses were conducted for 1984-1999 and 1999-2014, and the results were evaluated. Conversions of LULC types to the R-B class were investigated. In addition, population changes (1985-2014) were assessed based on census data, the relations between population and the urban areas were stated, and future populations and urban area needs were forecasted for 2030. The results of the LULC analysis indicated that the urban areas, which fall under the R-B class, expanded in all PCs. During 1984-1999, the R-B class within the Balıkesir, Bursa and Çanakkale PCs was found to have increased by 7.1%, 8.4%, and 2.9%, respectively. The trend continued in the 1999-2014 term, and the increases reached 15.7%, 15.5%, and 10.2% at the end of the 30-year period (1984-2014). Furthermore, since the A class in all provinces was found to be the principal contributor to the R-B class, urban sprawl led to the loss of agricultural lands. Moreover, the areas of the R-B classes were highly correlated with population within all PCs (R2 > 0.992). Based on this relation, both future populations and R-B class areas were forecasted. The estimated increases in R-B class areas for the Balıkesir, Bursa, and Çanakkale PCs were 1,586 ha, 7,999 ha and 854 ha, respectively, and thus the forecasted values for 2030 are 7,838 ha, 27,866 ha, and 2,486 ha for Balıkesir, Bursa, and Çanakkale, meaning 7.7%, 8.2%, and 9.7% more R-B class area is expected in the PCs, in the same order.
Keywords: Landsat, LULC change, population, urban sprawl.
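A minimal sketch of the forecasting step: fit a linear relation between census population and classified R-B (urban) area and project the area implied by a forecast 2030 population. The numbers below are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical census populations and classified urban (R-B) areas in hectares.
population = np.array([152_000, 198_000, 241_000])        # e.g. 1985, 1999, 2014
urban_ha = np.array([3_100, 4_600, 6_200])

slope, intercept = np.polyfit(population, urban_ha, 1)    # R-B area vs population
r = np.corrcoef(population, urban_ha)[0, 1]

population_2030 = 290_000                                  # forecast population
urban_2030 = slope * population_2030 + intercept
print(f"R^2 = {r**2:.3f}, projected urban area in 2030 ~ {urban_2030:.0f} ha")
```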
49 Identification of Flexographic-printed Newspapers with NIR Spectral Imaging
Authors: Raimund Leitner, Susanne Rosskopf
Abstract:
Near-infrared (NIR) spectroscopy is a widely used method for material identification in laboratory and industrial applications. While standard spectrometers only allow measurement at one sampling point at a time, NIR Spectral Imaging techniques can measure, in real time, both the size and shape of an object as well as identify the material the object is made of. Online classification and sorting of recovered paper with NIR Spectral Imaging (SI) is used successfully in the paper recycling industry throughout Europe. Recently, the globalisation of recycling material streams has caused water-based flexographic-printed newspapers, mainly from the UK and Italy, to appear in central Europe as well. These flexo-printed newspapers are not sufficiently de-inkable with the standard de-inking process originally developed for offset-printed paper. This de-inking process removes the ink from recovered paper and is the fundamental processing step to produce high-quality paper from recovered paper. Thus, flexo-printed newspapers are a growing problem for the recycling industry, as they reduce the quality of the produced paper if their amount exceeds a certain limit within the recovered paper material. This paper presents the results of a research project for the development of an automated entry inspection system for recovered paper that was jointly conducted by CTR AG (Austria) and PTS Papiertechnische Stiftung (Germany). Within the project, an NIR SI prototype for the identification of flexo-printed newspaper has been developed. The prototype can identify and sort out flexo-printed newspapers in real time and achieves a detection accuracy for flexo-printed newspaper of over 95%. NIR SI, the technology the prototype is based on, allows the development of inspection systems for incoming goods in a paper production facility, as well as industrial sorting systems for recovered paper in the recycling industry, in the near future.
Keywords: Spectral imaging, imaging spectroscopy, NIR, water-based flexographic, flexo-printed, recovered paper, real-time classification.
48 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment
Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto
Abstract:
Forest inventories are essential to assess the composition, structure and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor intensive, time-consuming and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resources extraction was carried out using LAStools, GIS, ENVI and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM) and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands to be used in the extraction of forest cover classes using ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above-Ground Biomass (AGB) and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. For the extracted canopy height, 80% of the tree heights range from 12 m to 17 m. The CS of the three forest covers based on the AGB were 20,819.59 kg/20x20 m for closed broadleaf, 8,609.82 kg/20x20 m for broadleaf plantation and 15,545.57 kg/20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and high CS.
Keywords: Carbon stock, forest inventory, LiDAR, tree count.
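A minimal sketch of the canopy-height step behind such LiDAR inventories: subtract the digital terrain model from the digital surface model to get per-pixel canopy heights, then derive simple stand statistics such as canopy cover and the share of heights in the 12-17 m class. Random rasters stand in for the LiDAR-derived grids; a real workflow would read the LAStools outputs instead.

```python
import numpy as np

rng = np.random.default_rng(8)
dtm = 100 + rng.normal(scale=2.0, size=(500, 500))                 # terrain elevation (m)
dsm = dtm + np.clip(rng.normal(14, 4, size=(500, 500)), 0, None)   # surface elevation (m)

chm = dsm - dtm                                             # canopy height model
chm[chm < 0] = 0

canopy_cover = np.mean(chm > 5.0) * 100                     # % pixels above 5 m
share_12_17 = np.mean((chm >= 12) & (chm <= 17)) * 100
print(f"canopy cover ~ {canopy_cover:.1f}%, heights in 12-17 m ~ {share_12_17:.1f}%")
```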
47 A Robust Visual SLAM for Indoor Dynamic Environment
Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to gather information in unknown environments to achieve simultaneous localization and mapping of the environment. This technology has a wide range of applications in autonomous driving, virtual reality, and other related fields. Current VSLAM research can maintain high accuracy in static environments. But in dynamic environments, the presence of moving objects in the scene can reduce the stability of the VSLAM system, leading to inaccurate localization and mapping, or even system failure. In this paper, a robust VSLAM method is proposed to effectively address the challenges of dynamic environments. We propose a dynamic region removal scheme based on a semantic segmentation neural network and geometric constraints. Firstly, a semantic segmentation neural network is used to extract the prior active motion region, prior static region, and prior passive motion region in the environment. Then, a lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow is used to divide the environment into static regions and dynamic regions; thus, the dynamic object region is successfully eliminated. Finally, only the static region is used by the tracking thread. Our research is based on the ORB-SLAM3 system, which is one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORB-SLAM3 by 70%-98.5% in highly dynamic environments.
Keywords: Dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM.
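A minimal sketch of the dynamic-region removal idea: a segmentation-derived mask marks pixels belonging to prior dynamic objects, and feature extraction for the tracking thread is restricted to the remaining static region by passing the inverted mask to the detector. OpenCV's ORB detector and the hand-made mask are stand-ins, not the ORB-SLAM3 front end itself.

```python
import cv2
import numpy as np

frame = np.random.default_rng(9).integers(0, 255, (480, 640), dtype=np.uint8)

# Hypothetical per-pixel dynamic mask (e.g. from the segmentation network plus
# the motion-consistency check): 255 where a dynamic object was detected.
dynamic_mask = np.zeros_like(frame)
dynamic_mask[100:300, 200:400] = 255

static_mask = cv2.bitwise_not(dynamic_mask)       # keep only the static region

orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(frame, static_mask)
print(f"{len(keypoints)} static-region keypoints passed to the tracking thread")
```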
46 Analyzing the Changing Pattern of Nigerian Vegetation Zones and Its Ecological and Socio-Economic Implications Using Spot-Vegetation Sensor
Authors: B. L. Gadiga
Abstract:
This study assesses the major ecological zones in Nigeria with a view to understanding the spatial pattern of vegetation zones and the implications for conservation within a period of sixteen (16) years. The satellite images used for this study were acquired from SPOT-VEGETATION between 1998 and 2013. The annual NDVI images selected for this study were derived from the SPOT-4 sensor and were acquired within the same season (November) in order to reduce differences in spectral reflectance due to seasonal variations. The images were sliced into five classes based on the literature and knowledge of the area (i.e., <0.16 non-vegetated areas; 0.16-0.22 Sahel Savannah; 0.22-0.40 Sudan Savannah; 0.40-0.47 Guinea Savannah and >0.47 forest zone). Classification of the 1998 and 2013 images into forested and non-forested areas showed that the forested area decreased from 511,691 km2 in 1998 to 478,360 km2 in 2013. The differencing change detection method was performed on the 1998 and 2013 NDVI images to identify areas of ecological concern. The result shows that areas undergoing vegetation degradation cover 73,062 km2, while areas witnessing some form of restoration cover 86,315 km2. The result also shows that there is a weak correlation between rainfall and the vegetation zones: the non-vegetated areas have a correlation coefficient (r) of 0.0088, the Sahel Savannah belt 0.1988, the Sudan Savannah belt -0.3343, the Guinea Savannah belt 0.0328 and the forest belt 0.2635. The low correlation can be associated with the encroachment of the Sudan Savannah belt into the forest belt of the south-eastern part of the country, as revealed by the image analysis. The degradation of the forest vegetation is therefore responsible for the serious erosion problems witnessed in the south-east. The study recommends constant monitoring of vegetation and strict enforcement of environmental laws in the country.
Keywords: Vegetation, NDVI, SPOT-vegetation, ecology, degradation.
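A minimal sketch of the two processing steps described above: slice an NDVI image into the five classes using the stated thresholds, and subtract the two dates' NDVI images to flag degradation and restoration. Random arrays stand in for the SPOT-VEGETATION rasters, and the ±0.1 change threshold is an assumption.

```python
import numpy as np

thresholds = [0.16, 0.22, 0.40, 0.47]
classes = ["non-vegetated", "Sahel Savannah", "Sudan Savannah",
           "Guinea Savannah", "forest"]

def classify(ndvi):
    return np.digitize(ndvi, thresholds)           # 0..4 -> index into `classes`

rng = np.random.default_rng(10)
ndvi_1998 = rng.uniform(0.05, 0.70, size=(300, 300))
ndvi_2013 = np.clip(ndvi_1998 + rng.normal(0, 0.08, size=(300, 300)), 0, 1)

labels_1998 = classify(ndvi_1998)
for i, name in enumerate(classes):
    print(f"{name}: {(labels_1998 == i).mean() * 100:.1f}% of pixels (1998)")

change = ndvi_2013 - ndvi_1998                     # differencing change detection
print(f"degraded: {(change < -0.1).mean() * 100:.1f}%, "
      f"restored: {(change > 0.1).mean() * 100:.1f}%")
```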
45 Implicit Responses for Assessment of Autism Based on Natural Behaviors Obtained Inside Immersive Virtual Environment
Authors: E. Olmos-Raya, A. Cascales Martínez, N. Minto de Sousa, M. Alcañiz Raya
Abstract:
The late detection and subjective assessment of Autism Spectrum Disorder (ASD) impose a difficulty on the child's clinical and family environment. The results shown in this paper are part of a research project on the assessment and training of social skills in children with ASD, whose overall goal is the use of virtual environments together with physiological measures in order to find a new model of objective ASD assessment based on measures of implicit brain processes. In particular, this work contributes by studying the differences and changes in the Skin Conductance Response (SCR) and Eye Tracking (ET) between a typical development group (TD group) and an ASD group (ASD group) after several combined stimuli, using a low-cost Immersive Virtual Environment (IVE). Subjects were exposed to a virtual environment that showed natural scenes stimulating the visual, auditory and olfactory perceptual systems. While immersed in the IVE, subjects showed natural behaviors as SCR and ET were measured. This study compared measures of subjects diagnosed with ASD (N = 18) with a control group of subjects with typical development (N = 10) when exposed to three different conditions: visual only (V), visual and auditory (VA) and visual, auditory and olfactory (VAO) stimulation. SCR and ET measures were also correlated with the Autism Diagnostic Observation Schedule (ADOS) test. SCR measures showed significant differences across the experimental conditions between groups. The ASD group presented higher levels of SCR, while we did not find significant differences between groups regarding DF. We found highly significant correlations across all the experimental conditions between SCR measures and the imagination and symbolic thinking subscale of the ADOS test. Regarding the correlation between ET measures and the ADOS test, the results showed a significant relationship between the VA condition and communication scores.
Keywords: Autism, electrodermal activity, eye tracking, immersive virtual environment, virtual reality.
44 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera
Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl
Abstract:
Conventional contact-based vital signs monitoring sensors such as pulse oximeters or electrocardiogram (ECG) may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Therefore, remote monitoring of the vital sign is desired in both clinical and non-clinical settings to overcome these issues. Camera-based vital signs monitoring is a recent technology for these applications with many positive attributes. However, there are still limited camera-based studies on neonates in a clinical setting. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants at the Neonatal Intensive Care Unit (NICU) in Flinders Medical Centre were remotely monitored using a digital camera applying color and motion-based computational methods. The region-of-interest (ROI) was efficiently selected by incorporating an image decomposition method. Furthermore, spatial averaging, spectral analysis, band-pass filtering, and peak detection were also used to extract both HR and RR. The experimental results were validated with the ground truth data obtained from an ECG monitor and showed a strong correlation using the Pearson correlation coefficient (PCC) 0.9794 and 0.9412 for HR and RR, respectively. The root mean square errors (RMSE) between camera-based data and ECG data for HR and RR were 2.84 beats/min and 2.91 breaths/min, respectively. A Bland Altman analysis of the data also showed a close correlation between both data sets with a mean bias of 0.60 beats/min and 1 breath/min, and the lower and upper limit of agreement -4.9 to + 6.1 beats/min and -4.4 to +6.4 breaths/min for both HR and RR, respectively. Therefore, video camera imaging may replace conventional contact-based monitoring in NICU and has potential applications in other contexts such as home health monitoring.
Keywords: Neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition.
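A minimal sketch of the signal chain after ROI selection: spatially average the ROI in each frame, band-pass filter the trace around plausible neonatal heart rates, and count peaks to estimate HR. The synthetic video, cut-off frequencies and peak spacing are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fps = 30.0
t = np.arange(0, 30, 1 / fps)                      # 30 s of video (assumed)
rng = np.random.default_rng(11)

# Synthetic ROI stack: subtle intensity pulsation at ~2.4 Hz (144 bpm) plus noise.
roi = 120 + 0.5 * np.sin(2 * np.pi * 2.4 * t)[:, None, None] \
      + rng.normal(0, 1.0, (t.size, 20, 20))

trace = roi.mean(axis=(1, 2))                       # spatial averaging per frame

# Band-pass 1.5-4 Hz (~90-240 bpm, a plausible neonatal HR band).
b, a = butter(3, [1.5 / (fps / 2), 4.0 / (fps / 2)], btype="band")
filtered = filtfilt(b, a, trace - trace.mean())

peaks, _ = find_peaks(filtered, distance=fps / 4)   # at most one beat per 0.25 s
hr_bpm = len(peaks) / (t[-1] - t[0]) * 60
print(f"estimated heart rate ~ {hr_bpm:.0f} beats/min")
```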
43 Assessment of Predictive Confounders for the Prevalence of Breast Cancer among Iraqi Population: A Retrospective Study from Baghdad, Iraq
Authors: Nadia H. Mohammed, Anmar Al-Taie, Fadia H. Al-Sultany
Abstract:
Although breast cancer prevalence continues to increase, mortality has been decreasing as a result of early detection and improvements in adjuvant systemic therapy. Nevertheless, this disease requires further efforts to understand and identify the associated potential risk factors that could play a role in the prevalence of this malignancy among Iraqi women. The objective of this study was to assess the influence of certain predictive risk factors on the prevalence of breast cancer types among a sample of Iraqi women diagnosed with breast cancer. This was a retrospective observational study carried out at the National Cancer Research Center, College of Medicine, Baghdad University, from November 2017 to January 2018. Data of 100 patients with breast cancer whose biopsies were examined at the National Cancer Research Center were included in this study. Data were collected to structure a detailed assessment of the patients' demographic, medical and cancer records. The majority of study participants (94%) suffered from ductal breast cancer, with a mean age of 49.57 years. Among these women, 48.9% were obese with a body mass index (BMI) of 35 kg/m2, 68.1% had a positive family history of breast cancer and 66% had low parity. 40.4% had stage II ductal breast cancer, followed by 25.5% with stage III. It was found that 59.6% and 68.1% had positive oestrogen receptor sensitivity and positive human epidermal growth factor (HER2/neu) receptor sensitivity, respectively. Regarding the predictive impact of certain variables on the incidence of ductal breast cancer, a positive family history of breast cancer (P < 0.0001), low parity (P < 0.0001), stage I and II breast cancer (P = 0.02) and positive HER2/neu status (P < 0.0001) were significant predictive factors among the study participants. The results of this study provide relevant evidence for a significant positive and potential association between certain risk factors and the prevalence of breast cancer among Iraqi women.
Keywords: Ductal breast cancer, hormone sensitivity, Iraq, risk factors.
42 Screening Wheat Parents of Mapping Population for Heat and Drought Tolerance, Detection of Wheat Genetic Variation
Authors: H.R. Balouchi
Abstract:
To evaluate the genetic variation of wheat (Triticum aestivum) as affected by heat and drought stress, eight Australian wheat genotypes that are parents of Doubled Haploid (DH) mapping populations were studied at the vegetative stage. The water stress experiment was conducted at 65% field capacity in a growth room, and the heat stress experiment was conducted in the research field under irrigation over summer. Results show that water stress decreased dry shoot weight and RWC but increased osmolarity and mean Fv/Fm values in all varieties except Krichauff. Krichauff and Kukri had the maximum RWC under drought stress. The Trident variety showed maximum WUE, osmolarity (610 mM/kg), dry matter, quantum yield and Fv/Fm (0.815) under water stress. The recovery of quantum yield was apparent between 4 and 7 days after stress in all varieties; nevertheless, a further increase in water stress after that led to a strong decrease in quantum yield. There was genetic variation in leaf pigment content among varieties under heat stress. Heat stress significantly decreased the total chlorophyll content measured by SPAD. Krichauff had the maximum anthocyanin content (2.978 A/g FW), chlorophyll a+b (2.001 mg/g FW) and chlorophyll a (1.502 mg/g FW). The maximum chlorophyll b (0.515 mg/g FW) and carotenoid (0.234 mg/g FW) contents belonged to Kukri. The quantum yield of all varieties decreased significantly when the air temperature increased from 28 °C to 36 °C during the 6 days; however, the recovery of quantum yield was apparent after the 8th day in all varieties. The maximum decrease and recovery in quantum yield were observed in Krichauff. The drought- and heat-tolerant and moderately tolerant wheat genotypes included Trident, Krichauff, Kukri and RAC875, whereas Molineux, Berkut and Excalibur were clustered into the most sensitive and moderately sensitive genotypes. Finally, the results show that there was significant genetic variation among the eight varieties studied under heat and water stress.
Keywords: Abiotic stress, Genetic variation, Fluorescence, Wheat genotypes.
41 Attitude and Knowledge of Primary Health Care Physicians and Local Inhabitants about Leishmaniasis and Sandfly in West Alexandria
Authors: Randa M. Ali, Naguiba F. Loutfy, Osama M. Awad
Abstract:
Leishmaniasis is the collective name for a number of diseases caused by protozoan flagellates of the genus Leishmania, which are transmitted by phlebotomine sandflies. The disease has diverse clinical manifestations and is found in many areas of the world, particularly in Africa, Latin America, South and Central Asia, the Mediterranean basin and the Middle East. This study was done to assess the knowledge and attitude of primary health care physicians (PHPs) about leishmaniasis and to assess the awareness of local inhabitants about the disease and its vector in four areas in west Alexandria, Egypt. It is a cross-sectional survey that was conducted in four PHC units in west Alexandria. All physicians currently working in these units during the study period were invited to participate in the study; only 20 PHPs completed the questionnaire. Sixty local inhabitants were selected randomly from the four areas of the study, 15 from each area. Data were collected through two specially designed questionnaires. Results showed that 11 (55%) of the physicians had satisfactory knowledge, answering more than 9 (60%) of the 14 questions about leishmaniasis and sandfly. On the other hand, when the attitude of the primary health care physicians towards leishmaniasis was measured, 17 (85%) had a good attitude and 3 (15%) had a poor attitude. The second questionnaire showed that the awareness of local inhabitants about leishmaniasis and the sandfly as a vector of the disease is poor and needs to be corrected: 90% of the interviewed inhabitants had not heard about leishmaniasis, and only 3 (5%) of them said they knew the sandfly and its role in the transmission of leishmaniasis. We conclude that the knowledge and attitudes of physicians are acceptable; however, there is room for improvement, which could be achieved through formal training courses, distribution of guidelines and raising the awareness of primary health care physicians about the importance of early detection and notification of cases of leishmaniasis. Health education to raise public awareness of the vector and the disease is also necessary, because related studies have demonstrated that for inhabitants to take adequate protective measures against the vector, they should perceive that it is responsible for causing a disease.
Keywords: Attitude, knowledge, PHP, leishmaniasis, sandfly, local inhabitants, inside and outside housing conditions.
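For the scoring rule described above (knowledge rated satisfactory when more than 9 of the 14 questions, roughly 60%, are answered), a trivial hedged sketch is shown below; the sample scores and variable names are assumptions, not study data.

TOTAL_QUESTIONS = 14
SATISFACTORY_THRESHOLD = 9  # more than 9 correct answers => satisfactory

def knowledge_level(correct_answers):
    """Classify a physician's questionnaire score."""
    return "satisfactory" if correct_answers > SATISFACTORY_THRESHOLD else "unsatisfactory"

scores = [12, 8, 10, 7, 11]  # hypothetical scores out of 14
n_satisfactory = sum(knowledge_level(s) == "satisfactory" for s in scores)
print(f"{n_satisfactory}/{len(scores)} physicians with satisfactory knowledge")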
40 Forensic Medical Capacities of Research of Saliva Stains on Physical Evidence after Washing
Authors: Saule Mussabekova
Abstract:
Recent advances in genetics have sharply increased the capacity to form reliable evidence in forensic examinations; traces of biological origin are thus important sources of information about a crime. Currently, sexual offenses have increased around the world, and among them are those in which the criminals use various detergents to remove traces of their crime. A feature of modern synthetic detergents is the presence of biological additives, namely enzymes, which purposefully destroy stains of biological origin. To study the nature and extent of the impact of modern washing powders on saliva stains on physical evidence, specially prepared test specimens of different types of fabric to which saliva had been applied were examined. Materials and Methods: Washing machines from well-known manufacturers of household appliances, with different production characteristics, and advertised brands of washing powder were used for test washing. Over 3,500 experimental samples were tested. After washing, traces of saliva were identified using modern forensic medical research methods. Results: The influence of different washing programs, types of washing machines and washing powders on the detection and identification of saliva stains on physical evidence after washing was tested, and the corresponding dependences were revealed. The results of experimental and practical expert studies have shown that in most cases it is not possible to draw conclusions on the identification of saliva traces on physical evidence after washing. This is a consequence of the effect of biological additives and other additional factors on traces of saliva during washing. Conclusions: On the basis of the results of the study, the feasibility of examining saliva stains on physical evidence after washing is established. The use of modern molecular genetic methods makes it possible to partially solve the problems arising in the study of laundered evidence. Additional study of physical evidence after washing facilitates the detection and investigation of sexual offenses against women and children.
Keywords: Saliva research, modern synthetic detergents, laundry detergents, forensic medicine.
39 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity; however, there is still much to learn about it, and investigating rsfMRI connectivity may aid in the detection of abnormal activity. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as Independent Component Analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was used to identify abnormal networks in patients relative to healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. The two-sample t-test analysis revealed abnormalities in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, whereas the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, with a p-value of 0.001. This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
Keywords: Independent Component Analysis, Resting State Network, refractory epilepsy, rsfMRI.
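The pipeline described in this abstract (group ICA followed by a two-sample t-test between patients and controls) can be sketched in simplified form with scikit-learn's FastICA and SciPy on synthetic data. This is an assumption-laden illustration of the general approach, not the authors' implementation: the array sizes, the back-projection step and the use of FastICA are stand-ins.

import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Synthetic stand-in for rsfMRI data: 26 patients + 19 controls,
# each with 200 time points over 500 voxels (real data would be 4D volumes).
n_patients, n_controls, n_time, n_voxels = 26, 19, 200, 500
data = rng.standard_normal((n_patients + n_controls, n_time, n_voxels))

# "Group ICA" stand-in: temporally concatenate all subjects, extract 20 components.
concatenated = data.reshape(-1, n_voxels)        # (subjects * time, voxels)
ica = FastICA(n_components=20, random_state=0, max_iter=500)
ica.fit(concatenated)
spatial_maps = ica.components_                   # (components, voxels)

# Simplified per-subject loading of each component (crude back-projection).
subject_loadings = np.array(
    [np.abs(subj @ spatial_maps.T).mean(axis=0) for subj in data]
)                                                # (subjects, components)

# Two-sample t-test per component: patients versus healthy controls.
t_vals, p_vals = ttest_ind(subject_loadings[:n_patients],
                           subject_loadings[n_patients:], axis=0)
print("components with p < 0.05:", np.where(p_vals < 0.05)[0])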