Search results for: detection and estimation
722 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements
Authors: Faustina Masocha, Vusani Moyo
Abstract:
For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies, and the same has been presumed about management earnings announcements. Despite both dividend and earnings announcements being considered informative, a number of researchers have questioned their credibility and found both to contain only short-term signals. Regarding dividend announcements, some authors argued that although they might contain important information that can move share prices, and consequently generate abnormal returns, their degree of informativeness is lower than that of other signaling tools such as earnings announcements. Yet this claim has been refuted by other researchers, who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. Considering the above, it is apparent that both dividends and earnings have been hypothesized to have a signaling impact, which prompts the question of which of these two signaling tools is more informative. To answer this question, two follow-up questions were asked. The first sought to determine which event has the greater effect on share prices, while the second focused on which event influences trading volume the most. To answer the first question and evaluate the effect of each event on share prices, an event study methodology was employed on a sample of the top 10 JSE-listed companies, using data collected from 2012 to 2019, to determine whether shareholders gained abnormal returns (ARs) around announcement dates. The event that resulted in the most persistent and largest ARs was considered more informative.
For the second follow-up question, an investigation was conducted to determine whether dividend or earnings announcements influenced trading patterns, resulting in abnormal trading volumes (ATV) around announcement time. The event that resulted in the most ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days, and hypothesis testing, it was found that announcements of earnings increases resulted in the most ARs and Cumulative Abnormal Returns (CARs) and had a lasting effect, whereas the effect of dividend announcements lasted only until day +3. This supports empirical arguments that the signaling effect of dividends has been diminishing. It was also found that when reported earnings declined relative to the previous period, trading volume increased, resulting in ATV. Although dividend announcements did result in abnormal returns, these were smaller than those earned around earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements.
Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory
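The market-model event study described above can be sketched in a few lines; the OLS fit, the 20-day estimation window, and all input data below are illustrative assumptions rather than the authors' actual procedure:

```python
import numpy as np

def abnormal_returns(stock, market, est_len=20):
    """Market-model event study: fit alpha/beta on the estimation
    window, then compute abnormal returns (AR) and their cumulative
    sum (CAR) over the remaining event-window days."""
    beta, alpha = np.polyfit(market[:est_len], stock[:est_len], 1)
    expected = alpha + beta * market[est_len:]   # model-implied returns
    ar = stock[est_len:] - expected              # abnormal returns
    return ar, np.cumsum(ar)                     # AR and CAR
```

With a 20-day estimation period followed by a 21-day event window, a persistent positive AR series produces a steadily rising CAR, which is the pattern the abstract reports for earnings-increase announcements.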
Procedia PDF Downloads 172
721 Development of a Multi-Locus DNA Metabarcoding Method for Endangered Animal Species Identification
Authors: Meimei Shi
Abstract:
Objectives: The identification of endangered species, especially the simultaneous detection of multiple species in complex samples, plays a critical role in alleged wildlife crime incidents and prevents illegal trade. This study aimed to develop a multi-locus DNA metabarcoding method for endangered animal species identification. Methods: Several pairs of universal primers were designed based on conserved mitochondrial gene regions. Experimental mixtures were artificially prepared by mixing well-defined species, including endangered species, e.g., forest musk deer, bear, tiger, pangolin, and sika deer. The artificial samples were prepared with 1-16 well-characterized species at DNA concentrations of 1% to 100%. After multiplex-PCR amplification and parameter modification, the amplified products were analyzed by capillary electrophoresis and used for NGS library preparation. The DNA metabarcoding was carried out based on Illumina MiSeq amplicon sequencing. The data were processed with quality trimming, read filtering, and OTU clustering; representative sequences were searched using BLASTn. Results: Based on the parameter modification and multiplex-PCR amplification results, five primer sets targeting COI, Cytb, 12S, and 16S were selected as the NGS library amplification primer panel. High-throughput sequencing data analysis showed that the established multi-locus DNA metabarcoding method was sensitive and could accurately identify all species in the artificial mixtures, including the endangered animal species Moschus berezovskii, Ursus thibetanus, Panthera tigris, Manis pentadactyla, and Cervus nippon at a DNA concentration of 1%. In conclusion, the established species identification method provides technical support for customs and forensic scientists to prevent the illegal trade of endangered animals and their products.
Keywords: DNA metabarcoding, endangered animal species, mitochondria nucleic acid, multi-locus
Procedia PDF Downloads 140
720 Impacts of Urbanization on Forest and Agriculture Areas in Savannakhet Province, Lao People's Democratic Republic
Authors: Chittana Phompila
Abstract:
The current population growth pushes increasing demands for natural resources and living space. In Laos, urban areas have been expanding rapidly in recent years, and this rapid urbanization can have negative impacts on landscapes, including forest and agriculture lands. The primary objectives of this research were 1) to map current urban areas in a large city in Savannakhet province, Laos, 2) to compare changes in urbanization between 1990 and 2018, and 3) to estimate the forest and agriculture areas lost to expanding urban areas over this period within the study area. Landsat 8 data were used, and existing GIS data were collected from government sectors, including spatial data on rivers, lakes, roads, vegetated areas, and other land use/land cover classes. An object-based classification (OBC) approach was applied in eCognition for image processing and analysis of urban areas. Historical data from other Landsat instruments (Landsat 5 and 7) allowed comparison of changes in urbanization in 1990, 2000, 2010, and 2018 in the study area. Only three main land cover classes were classified, namely forest, agriculture, and urban areas. A change detection approach was applied to illustrate changes in built-up areas over these periods. Our study shows an overall map accuracy of 95% (kappa ≈ 0.8). It was found that there is ineffective control over land-use conversions from forests and agriculture to urban areas in many main cities across the province, and a large area of agriculture and forest has been lost to this conversion. Uncontrolled urban expansion and inappropriate land use planning can put pressure on resource utilisation and, as a consequence, can lead to food insecurity and national economic downturn in the long term.
Keywords: urbanisation, forest cover, agriculture areas, Landsat 8 imagery
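A post-classification change detection of the kind described can be sketched as a per-pixel cross-tabulation of the two classified maps; the class codes (0 = forest, 1 = agriculture, 2 = urban) and the toy arrays are hypothetical stand-ins for the actual Landsat classifications:

```python
import numpy as np

def change_matrix(before, after, n_classes=3):
    """Cross-tabulate pixel counts between two classified land-cover maps;
    entry [b, a] counts pixels of class b at date 1 that became class a at
    date 2, so off-diagonal cells such as [0, 2] are forest-to-urban loss."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for b, a in zip(before.ravel(), after.ravel()):
        m[b, a] += 1
    return m
```

Summing a class's row off-diagonal entries gives the area (in pixels) that the class lost between the two dates.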
Procedia PDF Downloads 159
719 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
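Of the chemometric preprocessing steps listed, standard normal variate is simple to sketch; this is the generic formulation (each spectrum centered and scaled by its own statistics), not necessarily the authors' exact pipeline:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)
    by its own mean and standard deviation, removing multiplicative
    scatter effects before modelling."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd
```

After SNV, two spectra that differ only by an offset and a gain become identical, which is why it is a standard scatter correction for NIR data.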
Procedia PDF Downloads 99
718 Estimating Understory Species Diversity of West Timor Tropical Savanna, Indonesia: The Basis for Planning an Integrated Management of Agricultural and Environmental Weeds and Invasive Species
Authors: M. L. Gaol, I. W. Mudita
Abstract:
Indonesia is well known as a country covered by lush tropical rain forests, but in fact, in the southeastern part of the country, within the areas geologically known as the Lesser Sunda, the dominant vegetation is tropical savanna. The Lesser Sunda is a chain of islands located closer to Australia than to islands in the other parts of the country. Among the islands in the chain closest to Australia, and thereby most strongly affected by the hot and dry Australian climate, is the island of Timor, the western part of which belongs to Indonesia while the eastern part is the sovereign state of East Timor. Despite being the most dominant vegetation cover, the tropical savanna of West Timor, especially its understory, is rarely investigated. This research was therefore carried out to investigate the structure, composition, and diversity of the understory of this tropical savanna as the basis for examining the possibility of introducing other species for various purposes. For this research, 14 terrestrial communities representing the major types of existing savannas in West Timor were selected with the aid of the most recently available satellite imagery. At each community, one 50 m x 50 m stand most likely representing the community was chosen as the site of observation for the type of savanna under investigation. At each of the 14 communities, 20 plots of 1 m x 1 m were placed at random to identify understory species, count the total number of individuals, and estimate the cover of each species. Based on such counts and estimates, the importance value of each species was later calculated. The results indicated that the understory of the savanna in West Timor consisted of 73 understory species, of which 18 are grasses and 55 are non-grasses.
Although fewer in species number than non-grasses, grass species dominated the savanna as indicated by their share of individuals (65.33% vs 34.67%), species cover (57.80% vs 42.20%), and importance value (123.15 vs 76.85). Across the 14 communities, the lowest density of grass was 13.50/m2 and the highest was 417.50/m2. All 18 grass species found are common agricultural weeds, whereas 10 of the 55 non-grass species are commonly found as agricultural weeds, environmental weeds, or invasive species. In terms of better managing the savanna in the region, these findings provide the basis for planning a more integrated approach to managing such agricultural and environmental weeds as well as invasive species by considering the structure, composition, and species diversity of the understory existing at each site. These findings also provide the basis for better understanding the flora of the region as a whole and for developing a flora database of West Timor in the future.
Keywords: tropical savanna, understory species, integrated management, weedy and invasive species
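The importance values quoted above (relative density plus relative cover) can be computed as sketched below; the species names and numbers are made up for illustration:

```python
def importance_values(counts, covers):
    """Importance value per species = relative density (%) + relative
    cover (%), from per-species individual counts and cover estimates.
    With two components, the values sum to 200 across all species."""
    total_n = sum(counts.values())
    total_c = sum(covers.values())
    return {sp: 100 * counts[sp] / total_n + 100 * covers[sp] / total_c
            for sp in counts}
```

Computed this way, the grass vs non-grass split of 123.15 vs 76.85 in the abstract is simply the two groups' pooled relative density plus pooled relative cover.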
Procedia PDF Downloads 136
717 Empirical Analysis of the Global Impact of Cybercrime Laws on Cyber Attacks and Malware Types
Authors: Essang Anwana Onuntuei, Chinyere Blessing Azunwoke
Abstract:
The study focused on probing the effectiveness of online consumer privacy and protection laws, electronic transaction laws, privacy and data protection laws, and cybercrime legislation amid frequent cyber-attacks and malware types worldwide. An empirical analysis was employed to uncover associations and causal links between the stringency and implementation of these legal structures and the prevalence of cyber threats. A purposive sample of seventy-eight countries (thirteen from each of six continents) was chosen to study the challenges linked with current regulations and possible avenues for improving cybersecurity through refined legal approaches. The findings establish whether the frequencies of cyber-attacks and malware types vary significantly. The results also showed that various cybercrime laws differ statistically and that electronic transaction law does not statistically impact the frequency of cyber-attacks. Likewise, online consumer privacy and protection law was found not to influence the total number of cyber-attacks, and privacy and data protection laws do not statistically impact the total number of cyber-attacks worldwide. The calculated values further showed that cybercrime law does not statistically impact the total number of cyber-attacks and that combining multiple cyber laws does not significantly impact the total number of cyber-attacks worldwide. Suggestions were produced based on these findings, contributing to the ongoing debate on the validity of legal approaches in battling cybercrime and shielding consumers in the digital age.
Keywords: cybercrime legislation, cyber attacks, consumer privacy and protection law, detection, electronic transaction law, prevention, privacy and data protection law, prohibition, prosecution
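One common way to test whether attack frequencies "vary significantly" across groups of countries is a one-way ANOVA; the abstract does not name its test statistic, so the F computation below, and the toy data, are only a plausible sketch:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for testing whether group means differ, e.g. attack
    counts grouped by continent: between-group mean square divided by
    within-group mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_x) - len(groups)
    return (ssb / df_b) / (ssw / df_w)
```

The F value would then be compared against the F distribution with (df_b, df_w) degrees of freedom to decide significance.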
Procedia PDF Downloads 42
716 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution, and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, whereby the estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shape beam; it also suffers from main-lobe deviation and high sidelobe levels in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace, and the corresponding eigenvalues.
Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, suppressing the divergence of the small eigenvalues in the noise subspace and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better overall performance.
Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
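The eigenvalue-correction step can be sketched as follows; the correction index value, the way the noise subspace is selected, and the single-constraint weight formula are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def corrected_covariance(R, n_dominant, p=0.5):
    """Eigendecompose the sample covariance, exponentially correct the
    small (noise-subspace) eigenvalues to suppress their divergence at
    low snapshot counts, and rebuild the matrix.  p < 1 is the assumed
    correction index compressing the spread of the small eigenvalues."""
    w, V = np.linalg.eigh(R)                  # eigenvalues, ascending
    w = w.copy()
    w[:-n_dominant] = w[:-n_dominant] ** p    # correct noise eigenvalues
    return (V * w) @ V.conj().T

def lcmv_weights(R, a):
    """Distortionless weights for a single steering-vector constraint:
    w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)
```

Rebuilding R from the corrected spectrum before forming the LCMV weights is what stabilises the pattern when the snapshot count is small.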
Procedia PDF Downloads 130
715 The Role of Risk Attitudes and Networks on the Migration Decision: Empirical Evidence from the United States
Authors: Tamanna Rimi
Abstract:
A large body of literature has discussed the determinants of the migration decision; however, the potential role of individual risk attitudes in that decision has so far been overlooked. The migration literature has studied how the expected income differential influences migration flows for a risk-neutral individual. Yet migration also takes place when there is no expected income differential, or even when the variability of income appears lower than in the current location. This migration puzzle motivates a recent trend in the literature that analyzes how attitudes towards risk influence the decision to migrate. However, the significance of risk attitudes for the migration decision has been addressed mostly from a theoretical perspective in the mainstream migration literature. The efficiency of the labor market and the overall economy are largely influenced by migration in many countries; therefore, attitudes towards risk as a determinant of migration deserve more attention in empirical studies. To the author's best knowledge, this is the first study to examine the relationship between relative risk aversion and the migration decision in the US, considering movement across the United States as a means of migration. In addition, this paper explores the network effect of the increasing size of one's own ethnic group in a location on the migration decision, and how attitudes towards risk vary with the network effect. Two ethnic groups (Asian and Hispanic) are considered in this regard. For the empirical estimation, this paper uses two sources of data: 1) U.S. census data for social, economic, and health research, 2010 (IPUMS) and 2) the University of Michigan Health and Retirement Study, 2010 (HRS). In order to measure relative risk aversion, this study uses the 'Two Sample Two-Stage Instrumental Variable (TS2SIV)' technique.
This is similar to the 'Two Sample Instrumental Variable (TSIV)' technique of Angrist (1990) and Angrist and Krueger (1992). Using a probit model, the empirical investigation yields the following results: (i) risk attitude has a significantly large impact on the migration decision, with more risk-averse people being less likely to migrate; (ii) the impact of risk attitude on migration varies with other demographic characteristics such as age and sex; (iii) people living among a higher concentration of households of their own ethnicity in a particular place are expected to migrate less from their current place; (iv) the effect of risk attitudes on migration varies with the network effect. The overall findings of this paper relating risk attitude, the migration decision, and the network effect are a significant contribution towards addressing the gap between migration theory and empirical study in the migration literature.
Keywords: migration, network effect, risk attitude, U.S. market
Procedia PDF Downloads 162
714 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital communication. With the growth of richer content and higher display resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying video content of high quality. In this paper, we propose a first end-to-end deep video compression model that jointly upgrades all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, reusing a single image in place of the duplicate images, recognizing and detecting minute changes using a generative adversarial network (GAN), and recording them with long short-term memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to each and every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality levels demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and more than 50% in size when compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
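The per-channel SVD step (reducing the dimension of each [R, G, B] utility matrix by keeping its leading latent factors) can be sketched as below; the rank k and the frame shapes are illustrative assumptions:

```python
import numpy as np

def lowrank_channel(channel, k):
    """Rank-k truncated SVD approximation of one colour channel:
    keep only the top-k singular triplets (latent factors)."""
    U, s, Vt = np.linalg.svd(channel, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def compress_frame(frame, k=8):
    """Apply the per-channel low-rank approximation to an H x W x 3
    RGB frame."""
    return np.stack([lowrank_channel(frame[..., c], k) for c in range(3)],
                    axis=-1)
```

Storing the truncated factors instead of the full channel matrices is what yields the dimensionality (and hence size) reduction per frame.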
Procedia PDF Downloads 187
713 Predicting Low Birth Weight Using Machine Learning: A Study on 53,637 Ethiopian Birth Data
Authors: Kehabtimer Shiferaw Kotiso, Getachew Hailemariam, Abiy Seifu Estifanos
Abstract:
Introduction: Despite low birth weight (LBW) accounting for the largest share of neonatal mortality and morbidity, predicting births with LBW for better intervention preparation is challenging. This study aims to predict LBW using a dataset encompassing 53,637 birth cohorts collected from 36 primary hospitals across seven regions in Ethiopia from February 2022 to June 2024. Methods: We identified ten explanatory variables related to maternal and neonatal characteristics, including maternal education, age, residence, history of miscarriage or abortion, history of preterm birth, type of pregnancy, number of livebirths, number of stillbirths, antenatal care frequency, and sex of the fetus to predict LBW. Using WEKA 3.8.2, we developed and compared seven machine learning algorithms. Data preprocessing included handling missing values, outlier detection, and ensuring data integrity in birth weight records. Model performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and area under the Receiver Operating Characteristic curve (ROC AUC) using 10-fold cross-validation. Results: The decision tree, J48, logistic regression, and gradient boosted trees models achieved the highest accuracy (94.5% to 94.6%) with a precision of 93.1% to 93.3%, an F1-score of 92.7% to 93.1%, and a ROC AUC of 71.8% to 76.6%. Conclusion: This study demonstrates the effectiveness of machine learning models in predicting LBW. The high accuracy and recall rates achieved indicate that these models can serve as valuable tools for healthcare policymakers and providers in identifying at-risk newborns and implementing timely interventions to achieve the sustainable development goal (SDG) targets related to neonatal mortality.
Keywords: low birth weight, machine learning, classification, neonatal mortality, Ethiopia
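The evaluation metrics reported (accuracy, precision, recall, F1) all reduce to the four confusion-matrix counts; this small helper is a generic sketch, not WEKA's implementation, with label 1 assumed to mark the LBW class:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for a binary classifier,
    computed from true/false positive and negative counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
```

In 10-fold cross-validation these counts are accumulated (or the metrics averaged) over the ten held-out folds.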
Procedia PDF Downloads 22
712 Tuberculosis Massive Active Case Discovery in East Jakarta 2016-2017: The Role of Ketuk Pintu Layani Dengan Hati and Juru Pemantau Batuk (Jumantuk) Cadre Programs
Authors: Ngabilas Salama
Abstract:
Background: Indonesia has the 2nd highest number of tuberculosis (TB) cases, accounting for 1,020,000 new cases per year, of which only 30% have been reported. To find the missing 70%, a massive active case discovery was conducted through two programs: Ketuk Pintu Layani Dengan Hati (KPLDH) and Kader Juru Pemantau Batuk (Jumantuk cadres), the latter also playing a role in child TB screening. Methods: Data were collected and analyzed through the Tuberculosis Integrated Online System from 2014 to 2017, involving 129 DOTS facilities with 86 primary health centers in East Jakarta. Results: East Jakarta has 2,900,722 people. The KPLDH program started in February 2016 with 84 teams (310 people); the Jumantuk cadres were formed 4 months later (218 people). The numbers of new TB cases in East Jakarta (primary health centers) from 2014 to June 2017 are as follows: 6,499 (2,637); 7,438 (2,651); 8,948 (3,211); 5,701 (1,830). Meanwhile, the percentage of child TB case discovery in primary health centers was 8.5%, 9.8%, and 12.1% from 2014 to 2016, respectively. In 2017, child TB case discovery was 13.1% for the first 3 months and 16.5% for the next 3 months. Discussion: The increase in TB incidence from 2014 to 2017 was 14.4%, 20.3%, and 27.4%, respectively, in East Jakarta, and 0.5%, 21.1%, and 14% in primary health centers. This reveals the positive role of KPLDH and Jumantuk in TB detection and reporting. Likewise, these programs were responsible for the increase in child TB case discovery, especially in the first 3 months of 2017 (Ketuk Pintu TB Day program) and the next 3 months (active TB screening). Conclusion: KPLDH and Jumantuk are actively involved in increasing TB case discovery in both adults and children.
Keywords: tuberculosis, case discovery program, primary health center, cadre
Procedia PDF Downloads 331
711 Recent Progress in the Uncooled Mid-Infrared Lead Selenide Polycrystalline Photodetector
Authors: Hao Yang, Lei Chen, Ting Mei, Jianbang Zheng
Abstract:
Currently, uncooled PbSe photodetectors in the mid-infrared range (2-5 μm) with sensitization technology extract more photoelectric response than traditional ones and enable room-temperature (300 K) photo-detection with high detectivity, which has attracted wide attention in many fields. This technology generally comprises film fabrication by vapor phase deposition (VPD) and a sensitizing process with doping of oxygen and iodine. Many works presented in recent years provide a high-temperature activation method with oxygen/iodine vapor diffusion, which reveals that oxygen or iodine plays an important role in the sensitization of PbSe material. In this paper, we provide our latest experimental results and discussions on the stoichiometry of oxygen and iodine and its influence on the polycrystalline structure and photo-response. The experimental results revealed that the crystal orientation was transformed from (200) to (420) by sensitization, and a responsivity of 5.42 A/W was obtained at the optimal stoichiometry of oxygen and iodine, with a molecular density of I2 of ~1.51×10¹² mm⁻³ and an oxygen pressure of ~1 MPa. We verified that I2 plays a role in transporting oxygen into the crystal lattice, although this is actually not its major role. XPS data revealed that iodine sensitization changed the atomic proportion of Pb from 34.5% to 25.0% compared with samples without iodine, resulting in a proportion of about 1:1 between Pb and Se atoms through the sublimation of PbI2 during the sensitization process; the Pb/Se atomic proportion is controlled by the I/O atomic proportion in the polycrystalline grains, which is a very important factor for improving the responsivity of the uncooled PbSe photodetector. Moreover, a novel sensitization and dopant activation method is proposed using oxygen ion implantation with a low ion energy of < 500 eV and a beam current of ~120 μA/cm².
These results may be helpful for understanding the sensitization mechanism of polycrystalline lead salt materials.
Keywords: polycrystalline PbSe, sensitization, transport, stoichiometry
Procedia PDF Downloads 349
710 On-Line Super Critical Fluid Extraction, Supercritical Fluid Chromatography, Mass Spectrometry, a Technique in Pharmaceutical Analysis
Authors: Narayana Murthy Akurathi, Vijaya Lakshmi Marella
Abstract:
The literature is reviewed with regard to on-line supercritical fluid extraction (SFE) coupled directly with supercritical fluid chromatography (SFC)-mass spectrometry, which is typically more sensitive than conventional LC-MS/MS and GC-MS/MS. It is becoming increasingly interesting to use on-line techniques that combine sample preparation, separation, and detection in one analytical setup. This involves less human intervention, uses small amounts of sample and organic solvent, and yields enhanced analyte enrichment in a shorter time. The sample extraction is performed under light shielding and anaerobic conditions, preventing the degradation of thermolabile analytes. The technique may be able to analyze compounds over a wide polarity range, as SFC generally uses carbon dioxide, which is collected as a by-product of other chemical reactions or from the atmosphere, and so contributes no new chemicals to the environment. The diffusion of solutes in supercritical fluids is about ten times greater than in liquids and about three times less than in gases, which decreases resistance to mass transfer in the column and allows fast, high-resolution separations. The drawback of SFC when using carbon dioxide as the mobile phase is that the direct introduction of water samples poses a series of problems; water must therefore be eliminated before it reaches the analytical column. Hundreds of compounds can be analysed simultaneously simply by enclosing the sample in an extraction vessel. This is mainly applicable in the pharmaceutical industry, where the technique can analyse fatty acids and phospholipids (which have many analogues with very similar UV spectra) and trace additives in polymers; cleaning validation can be conducted by placing a swab sample in an extraction vessel; and hundreds of pesticides can be analysed with good resolution.
Keywords: super critical fluid extraction (SFE), super critical fluid chromatography (SFC), LCMS/MS, GCMS/MS
Procedia PDF Downloads 391
709 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectral energy in a low-dimensional space; increasing the level of separation between classes in feature space improves the achievable classification accuracy. Neural networks extract features through a variety of nonlinear transformations and mathematical optimizations, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as the training dataset. The Fourier transform implementation used to extract frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4, and the readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, managed to improve the classification accuracy of the neural networks. The single-prediction approach to discriminating gamma and neutron events using deep machine learning has shown high accuracy. The paper's findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes.
Tuning deep machine learning models by hyperparameter optimization of neural network models enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improve the final prediction.Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
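The spectrogram preprocessing described in the abstract — frequency components of the time signal extracted by a windowed Fourier transform — can be sketched as follows. This is a minimal pure-Python illustration, not the authors' Geant4/CdTe pipeline; the frame length, hop size, and Hann window choice are assumptions:

```python
import math

def hann(n):
    """Hann window of length n (one common choice of windowing function)."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram: windowed DFT of overlapping frames.
    Returns one row of |X[k]| (one-sided, k = 0..frame_len/2) per frame."""
    win = hann(frame_len)
    rows = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        seg = [s * w for s, w in zip(signal[start:start + frame_len], win)]
        row = []
        for k in range(frame_len // 2 + 1):
            re = sum(s * math.cos(2 * math.pi * k * n / frame_len)
                     for n, s in enumerate(seg))
            im = -sum(s * math.sin(2 * math.pi * k * n / frame_len)
                      for n, s in enumerate(seg))
            row.append(math.hypot(re, im))
        rows.append(row)
    return rows

# A pure tone that falls exactly in DFT bin 8 of a 64-sample frame
tone = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])  # strongest bin, frame 0
```

Each row of the spectrogram can then be fed to the classifier as a time-frequency feature vector.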
Procedia PDF Downloads 150708 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation
Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau
Abstract:
In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space–time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package ‘lori’ (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa
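The penalized Poisson imputation itself is implemented in the R package ‘lori’; as a much-simplified stand-in for the idea, a main-effects Poisson model mu[i][j] = r[i]·c[j] can be fitted to the observed cells by iterative proportional fitting and used to fill the gaps. The toy site-by-year counts below are hypothetical:

```python
def impute_counts(Y, iters=200):
    """Impute missing (None) cells of a site-by-year count matrix with a
    main-effects Poisson model mu[i][j] = r[i] * c[j], fitted to the
    observed cells by iterative proportional fitting."""
    R, C = len(Y), len(Y[0])
    r, c = [1.0] * R, [1.0] * C
    for _ in range(iters):
        for i in range(R):                    # rescale row factors to row totals
            obs = [j for j in range(C) if Y[i][j] is not None]
            fit = sum(r[i] * c[j] for j in obs)
            if fit > 0:
                r[i] *= sum(Y[i][j] for j in obs) / fit
        for j in range(C):                    # rescale column factors likewise
            obs = [i for i in range(R) if Y[i][j] is not None]
            fit = sum(r[i] * c[j] for i in obs)
            if fit > 0:
                c[j] *= sum(Y[i][j] for i in obs) / fit
    return [[Y[i][j] if Y[i][j] is not None else r[i] * c[j]
             for j in range(C)] for i in range(R)]

# Three sites x three years; site 1 was not surveyed in year 1
counts = [[10, 12, 11],
          [20, None, 22],
          [5, 6, 5]]
imputed = impute_counts(counts)
```

The imputed cell ends up near the value the site and year margins jointly suggest (here, roughly twice the year-1 counts of the smaller sites); ‘lori’ adds ridge-type penalties and low-rank interaction terms on top of this idea.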
Procedia PDF Downloads 156707 Development of a Fuzzy Logic Based Model for Monitoring Child Pornography
Authors: Mariam Ismail, Kazeem Rufai, Jeremiah Balogun
Abstract:
A study was conducted to apply fuzzy logic to the development of a monitoring model for child pornography based on associated risk factors, which can be used by forensic experts or integrated into forensic systems for the early detection of child pornographic activities. Several methods were adopted in the study: an extensive review of related works was carried out to identify the factors associated with child pornography, the factors were validated by an expert sex psychologist and guidance counselor, and relevant data were collected. Fuzzy membership functions were used to fuzzify the identified variables alongside the risk of the occurrence of child pornography, based on inference rules provided by the experts consulted, and the fuzzy logic expert system was simulated using the Fuzzy Logic Toolbox in MATLAB Release 2016. The results showed that four categories of risk factors were required for assessing the risk of a suspect committing child pornography offenses, and that two and three triangular membership functions were used to formulate the risk factors, according to whether two or three labels were assigned. Five fuzzy logic models were formulated: the first four assess the impact of each category on child pornography, while the fifth takes the four outputs of those models as the inputs required for assessing the overall risk of child pornography. It was concluded that factors related to personal traits, social traits, history of child pornography crimes, and self-regulatory deficiency traits of the suspects are required for assessing the risk of child pornography crimes. 
Using the values of the risk factors selected in this study, the risk of child pornography can be readily assessed in order to determine the likelihood of a suspect perpetrating the crime.Keywords: fuzzy, membership functions, pornography, risk factors
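The triangular membership functions and rule-based risk aggregation mentioned in the abstract can be sketched as follows; the labels, rule set, and Sugeno-style defuzzification values are illustrative assumptions, not the study's MATLAB model:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_of(personal, social):
    """Two toy rules on 0-10 trait scores, with Sugeno-style weighted
    defuzzification: LOW risk is given representative value 20 and HIGH
    risk value 80 (both assumptions)."""
    low = lambda x: trimf(x, -5.0, 0.0, 5.0)    # 'low' label
    high = lambda x: trimf(x, 5.0, 10.0, 15.0)  # 'high' label
    w_high = min(high(personal), high(social))  # rule 1: both high -> high risk
    w_low = max(low(personal), low(social))     # rule 2: either low -> low risk
    if w_low + w_high == 0.0:
        return 50.0                             # no rule fires: neutral risk
    return (20.0 * w_low + 80.0 * w_high) / (w_low + w_high)

r = risk_of(8.0, 9.0)   # both trait scores high -> risk near 80
```

The study's five-model structure would chain four such blocks (one per risk-factor category) into a fifth that aggregates their outputs.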
Procedia PDF Downloads 130706 Electrochemical Biosensor for the Detection of Botrytis spp. in Temperate Legume Crops
Authors: Marzia Bilkiss, Muhammad J. A. Shiddiky, Mostafa K. Masud, Prabhakaran Sambasivam, Ido Bar, Jeremy Brownlie, Rebecca Ford
Abstract:
Early diagnosis and quantitation of the causal pathogen species enable accurate and timely disease control, a key goal of Integrated Disease Management (IDM) for preventing losses. This could significantly reduce costs to growers and reduce any flow-on impacts on the environment from excessive chemical spraying. The necrotrophic fungal disease botrytis grey mould, caused by Botrytis cinerea and Botrytis fabae, significantly reduces temperate legume yield and grain quality under favourable environmental conditions in Australia and worldwide. Several immunogenic and molecular probe-type protocols have been developed for their diagnosis, but these have varying levels of species-specificity, sensitivity, and consequent usefulness within the paddock. To substantially improve speed, accuracy, and sensitivity, advanced nanoparticle-based biosensor approaches have been developed. For this, two sets of primers were designed, one for Botrytis cinerea and one for Botrytis fabae, which showed species specificity with an initial sensitivity of two genomic copies/µl in pure fungal backgrounds using multiplexed quantitative PCR. During further validation, quantitative PCR detected 100 spores on artificially infected legume leaves. Simultaneously, an electro-catalytic assay was developed for both target fungal DNAs using functionalised magnetic nanoparticles. This was extremely sensitive, able to detect a single spore within a raw total plant nucleic acid extract background. We believe that the translation of this technology to the field will enable quantitative assessment of pathogen load for future accurate decision support of informed botrytis grey mould management.Keywords: biosensor, botrytis grey mould, sensitive, species specific
Procedia PDF Downloads 173705 Enzyme-Producing Psychrophilic Pseudomonas spp. Isolated from Poultry Meats
Authors: Ali Aydin, Mert Sudagidan, Aysen Coban, Alparslan Kadir Devrim
Abstract:
Pseudomonas spp. (specifically, P. fluorescens and P. fragi) are considered the principal spoilage microorganisms of refrigerated poultry meats. Higher levels of psychrophilic spoilage Pseudomonas spp. on carcasses at the end of processing shorten the shelf life of the refrigerated product. The aims of the study were the identification of psychrophilic Pseudomonas spp. with proteolytic and lipolytic activities from poultry meats by 16S rRNA and rpoB gene sequencing, investigation of protease- and lipase-related genes, and determination of the proteolytic activity of Pseudomonas spp. In the isolation procedure, chicken meat samples collected from local markets and slaughterhouses were homogenized, and the lysates were incubated on Standard Methods agar and Skim Milk agar for selection of proteolytic bacteria, and on tributyrin agar for selection of lipolytic bacteria, at +4 °C for 7 days. After detection of proteolytic and lipolytic colonies, the isolates were first analyzed by biochemical tests such as Gram staining and catalase and oxidase tests. DNA sequencing analysis and comparison with GenBank revealed that the 126 strongly enzyme-producing Pseudomonas isolates were identified as predominantly P. fluorescens (n=55) and P. fragi (n=42), followed by Pseudomonas spp. (n=24), P. cedrina (n=2), P. poae (n=1), P. koreensis (n=1), and P. gessardi (n=1). Additionally, the strains were screened for the protease-related aprX gene, which was detected in 69/126 strains, whereas the lipase-related lipA gene was found in 9 Pseudomonas strains. Protease activity was determined using a commercially available protease assay kit, and 5 strains showed high protease activity. The results showed that psychrophilic Pseudomonas strains were present in chicken meat samples and that they can produce protease and lipase at levels important for food spoilage, decreasing food quality and safety.Keywords: Pseudomonas, chicken meat, protease, lipase
Procedia PDF Downloads 387704 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS
Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan
Abstract:
Stress analysis plays an important role in the optimization of structures, and prior stress estimation helps in better product design. Composites find wide usage in industrial and domestic applications due to their strength-to-weight ratio; in the aircraft industry especially, composites are used extensively because of their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. They have the drawback of delamination and debonding because the bonding materials are weaker than the parent materials, so composite joints should be analysed properly before use in practical conditions. In the present work, a composite plate with an elastic pin is analysed using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered solid element SOLID46 for the composite plate and the solid element SOLID45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter of the geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on joint strength: the deflection and load sharing of the pin increase, while other parameters such as overall stress, pin stress, and contact pressure reduce because of the lesser load on the plate material. Regarding the material effect, a pin with higher Young's modulus shows less deflection, but the other parameters increase. Interference analysis shows increases in overall stress, pin stress, and contact stress along with the pin bearing load; this increase should be understood properly when raising the load-carrying capacity of the joint. 
Generally, every structure is preloaded to increase the compressive stress in the joint and thereby its load-carrying capacity, but for composites the stress increase should be analysed carefully because of delamination and debonding arising from failure of the bonding materials. When the results for an isotropic combination are compared with the composite joint, the isotropic joint shows more uniform results with lower values for all parameters, mainly due to the applied layer angle combinations. All the results are presented with the necessary pictorial plots.Keywords: bearing force, frictional force, finite element analysis, ANSYS
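Layered elements such as SOLID46 build their stiffness from classical lamination theory; a minimal sketch of the rotated ply stiffness and the in-plane extensional stiffness matrix A is given below. The material values and the [0/90]s layup are hypothetical, not those of the study:

```python
import math

def qbar(E1, E2, G12, nu12, theta_deg):
    """In-plane reduced stiffnesses of an orthotropic ply rotated by theta
    (classical lamination theory): (Q11b, Q12b, Q22b, Q16b, Q26b, Q66b)."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    Q11, Q22, Q12, Q66 = E1 / d, E2 / d, nu12 * E2 / d, G12
    m, n = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    Q11b = Q11 * m**4 + 2 * (Q12 + 2 * Q66) * m**2 * n**2 + Q22 * n**4
    Q22b = Q11 * n**4 + 2 * (Q12 + 2 * Q66) * m**2 * n**2 + Q22 * m**4
    Q12b = (Q11 + Q22 - 4 * Q66) * m**2 * n**2 + Q12 * (m**4 + n**4)
    Q66b = (Q11 + Q22 - 2 * Q12 - 2 * Q66) * m**2 * n**2 + Q66 * (m**4 + n**4)
    Q16b = (Q11 - Q12 - 2 * Q66) * m**3 * n + (Q12 - Q22 + 2 * Q66) * m * n**3
    Q26b = (Q11 - Q12 - 2 * Q66) * m * n**3 + (Q12 - Q22 + 2 * Q66) * m**3 * n
    return Q11b, Q12b, Q22b, Q16b, Q26b, Q66b

def a_matrix(layup, ply_t, E1, E2, G12, nu12):
    """Extensional stiffness A_ij = sum over plies of Qbar_ij * thickness."""
    A = [[0.0] * 3 for _ in range(3)]
    for theta in layup:
        Q11b, Q12b, Q22b, Q16b, Q26b, Q66b = qbar(E1, E2, G12, nu12, theta)
        Qb = [[Q11b, Q12b, Q16b], [Q12b, Q22b, Q26b], [Q16b, Q26b, Q66b]]
        for i in range(3):
            for j in range(3):
                A[i][j] += Qb[i][j] * ply_t
    return A

# Hypothetical carbon/epoxy-like ply (moduli in GPa, thickness in mm),
# stacked as a balanced symmetric cross-ply [0/90]s
A = a_matrix([0, 90, 90, 0], ply_t=0.125, E1=140.0, E2=10.0, G12=5.0, nu12=0.3)
```

For this balanced cross-ply, A11 equals A22 and the shear-coupling terms A16 and A26 vanish, which is one quick sanity check on a layered-element setup.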
Procedia PDF Downloads 334703 A Comparative Study of Black Carbon Emission Characteristics from Marine Diesel Engines Using Light Absorption Method
Authors: Dongguk Im, Gunfeel Moon, Younwoo Nam, Kangwoo Chun
Abstract:
The need to protect the environment is now widely recognized worldwide. In the shipping industry, the International Maritime Organization (IMO) has been regulating pollutants emitted from ships through MARPOL 73/78. Recently, the Marine Environment Protection Committee (MEPC) of IMO, at its 68th session, approved a definition of Black Carbon (BC) specified by the following physical properties: light absorption, refractory behaviour, insolubility, and morphology. The committee also agreed on the need for a protocol for any voluntary measurement studies to identify the most appropriate measurement methods. The Filter Smoke Number (FSN), based on light absorption, is categorized as one of the IMO-relevant BC measurement methods. EUROMOT provided FSN measurement data (measured by smoke meter) for 31 different engines (low-, medium-, and high-speed marine engines) of member companies at the 3rd International Council on Clean Transportation (ICCT) workshop on marine BC. The comparison of FSN indicated that BC emissions from low-speed marine diesel engines ranged from 0.009 to 0.179 FSN, while those from medium- and high-speed marine diesel engines ranged from 0.012 to 3.2 FSN. In view of the low FSN measured from low-speed engines, an experimental study was conducted using both a low-speed marine diesel engine (2-stroke, 7,400 kW at 129 rpm) and a high-speed marine diesel engine (4-stroke, 403 kW at 1,800 rpm) under the E3 test cycle. The results revealed that FSN ranged from 0.01 to 0.16 and from 1.09 to 1.35 for the low- and high-speed engines, respectively. The smoke meter's measurement range is 0 to 10 FSN; relative to this range, the FSN values from low-speed engines are near the detection limit (0.002 FSN, or ~0.02 mg/m³). 
These results suggest that the measurement range of the smoke meter should be adapted to enhance the measurement accuracy of marine BC and to evaluate the performance of BC abatement technologies.Keywords: black carbon, filter smoke number, international maritime organization, marine diesel engine (two and four stroke), particulate matter
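FSN is related to soot mass concentration through empirical smoke-meter correlations. The AVL-type form sketched below reproduces the ~0.02 mg/m³ quoted at the 0.002 FSN detection limit, but its coefficients are an assumption taken from the automotive smoke-meter literature, not from this study:

```python
import math

def fsn_to_soot_mg_m3(fsn):
    """Empirical FSN-to-soot-concentration correlation (AVL-type form,
    coefficients assumed): c [mg/m^3] = (1/0.405) * 4.95 * FSN * exp(0.38 * FSN)."""
    return (1.0 / 0.405) * 4.95 * fsn * math.exp(0.38 * fsn)

limit = fsn_to_soot_mg_m3(0.002)   # near the abstract's 0.002 FSN detection limit
```

At low FSN the relation is nearly linear (the exponential factor is ~1), which is why a tight detection limit in FSN translates directly into a tight limit in mg/m³.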
Procedia PDF Downloads 276702 Tool for Maxillary Sinus Quantification in Computed Tomography Exams
Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina
Abstract:
The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, contribute to thermoregulation, impart resonance to the voice, and more. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of the sinuses. The incidence of rhinosinusitis and other pathologies in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams of University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) based on features such as pixel value, spatial distribution, shape and others. 
The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist, using Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. In the statistical comparison between both methods, the linear regression showed a strong association and low dispersion between variables. The Bland–Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.Keywords: maxillary sinus, support vector machine, region growing, volume quantification
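The region-growing step, seeded by the SVM-detected pixels, can be sketched on a toy 2-D slice as follows (the seed, tolerance, and image values are illustrative; the real tool operates on full CT volumes):

```python
from collections import deque

def region_grow(img, seeds, tol):
    """Grow a region from seed pixels, accepting 4-neighbours whose
    intensity lies within tol of the mean seed intensity."""
    rows, cols = len(img), len(img[0])
    mean_seed = sum(img[r][c] for r, c in seeds) / len(seeds)
    mask = [[False] * cols for _ in range(rows)]
    q = deque(seeds)
    for r, c in seeds:
        mask[r][c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols and not mask[rr][cc]
                    and abs(img[rr][cc] - mean_seed) <= tol):
                mask[rr][cc] = True
                q.append((rr, cc))
    return mask

# Toy "slice": a dark air-filled sinus (values ~0) inside brighter bone (~100)
img = [[100, 100, 100, 100, 100],
       [100,   2,   1,   3, 100],
       [100,   1,   0,   2, 100],
       [100, 100, 100, 100, 100]]
mask = region_grow(img, [(2, 2)], tol=10)
area_pixels = sum(v for row in mask for v in row)   # segmented pixel count
```

Repeating this per slice and summing the masked voxels (scaled by voxel size) yields the volume; morphological opening/closing would then prune false positives, as the abstract describes.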
Procedia PDF Downloads 504701 Land Use Land Cover Changes in Response to Urban Sprawl within North-West Anatolia, Turkey
Authors: Melis Inalpulat, Levent Genc
Abstract:
In the present study, an attempt was made to characterize the Land Use Land Cover (LULC) transformation over three decades around the urban regions of Balıkesir, Bursa, and Çanakkale provincial centers (PCs) in Turkey. Landsat imageries acquired in 1984, 1999 and 2014 were used to determine the LULC change. Images were classified using the supervised classification technique, and five main LULC classes were considered: forest (F), agricultural land (A), residential area (urban) - bare soil (R-B), water surface (W), and other (O). Change detection analyses were conducted for 1984-1999 and 1999-2014, and the results were evaluated. Conversions of LULC types to the R-B class were investigated. In addition, population changes (1985-2014) were assessed based on census data, the relations between population and the urban areas were stated, and future populations and urban area needs were forecasted for 2030. The results of the LULC analysis indicated that urban areas, which are covered by the R-B class, expanded in all PCs. During 1984-1999, the R-B class within the Balıkesir, Bursa and Çanakkale PCs was found to have increased by 7.1%, 8.4%, and 2.9%, respectively. The trend continued in the 1999-2014 term, and the increment percentages reached 15.7%, 15.5%, and 10.2% at the end of the 30-year period (1984-2014). Furthermore, since the A class in all provinces was found to be the principal contributor to the R-B class, urban sprawl led to the loss of agricultural lands. Moreover, the areas of the R-B classes were highly correlated with population within all PCs (R2>0.992). On this basis, both future populations and R-B class areas were forecasted. The estimated values of increase in the R-B class areas for the Balıkesir, Bursa, and Çanakkale PCs were 1,586 ha, 7,999 ha and 854 ha, respectively. 
Accordingly, the forecasted values for 2030 are 7,838 ha, 27,866 ha, and 2,486 ha for Balıkesir, Bursa, and Çanakkale, respectively; thus, 7.7%, 8.2%, and 9.7% more R-B class area is expected within the corresponding PCs.Keywords: landsat, LULC change, population, urban sprawl
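The population-to-urban-area relation and the 2030 forecast follow ordinary least squares; a sketch with hypothetical census values (not the study's data) is:

```python
def linfit(xs, ys):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical census populations (thousands) and matching R-B (urban) areas (ha)
pop = [150.0, 200.0, 260.0]        # 1984, 1999, 2014 (illustrative values)
area = [3000.0, 4000.0, 5200.0]    # exactly 20 ha per 1000 people here
a, b = linfit(pop, area)
area_2030 = a + b * 320.0          # forecast at an assumed 2030 population
```

In the study, a first regression of population on census year gives the 2030 population, which is then inserted into the population-area line as above.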
Procedia PDF Downloads 262700 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and an adequate intervention will most probably reduce future maintenance costs, minimize downtime, and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure, and thus support decision making regarding its maintenance.Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
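The prediction-error idea — train a model on the healthy state and flag growth in one-step prediction error — can be sketched with a linear autoregressive predictor standing in for the trained ANN; the acceleration signals below are synthetic, not the bridge data:

```python
import math

def fit_ar2(y):
    """Least-squares one-step predictor y[t] ~ a*y[t-1] + b*y[t-2],
    a simple stand-in here for the trained neural-network predictor."""
    s11 = s12 = s22 = c1 = c2 = 0.0
    for t in range(2, len(y)):
        y1, y2 = y[t - 1], y[t - 2]
        s11 += y1 * y1; s12 += y1 * y2; s22 += y2 * y2
        c1 += y[t] * y1; c2 += y[t] * y2
    det = s11 * s22 - s12 * s12                 # 2x2 normal equations
    return (c1 * s22 - c2 * s12) / det, (s11 * c2 - s12 * c1) / det

def rms_error(y, a, b):
    """Root-mean-square one-step prediction error under the fitted model."""
    errs = [(y[t] - a * y[t - 1] - b * y[t - 2]) ** 2 for t in range(2, len(y))]
    return math.sqrt(sum(errs) / len(errs))

healthy = [math.sin(0.5 * t) for t in range(400)]   # baseline response
damaged = [math.sin(0.7 * t) for t in range(400)]   # shifted dynamics
a, b = fit_ar2(healthy)                             # "train" on the healthy state
e_healthy = rms_error(healthy, a, b)                # near zero on baseline
e_damaged = rms_error(damaged, a, b)                # grows when dynamics change
```

A hypothesis test (or clustering) on batches of such errors then decides whether the shift is statistically significant, as in the paper.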
Procedia PDF Downloads 209699 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy for a single component than for an assembly. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analyses in the time domain, but they treat the whole structure as nonlinear even if there is one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and yields the Nonlinear Frequency Response Functions (NLFRFs) through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. 
The first one is a two-DOF spring-mass-damper system, and the second example considers a full-aircraft FE model. Despite the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber
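One way to realize the FRF ‘updating’ for a localized nonlinearity is harmonic equivalent linearization: replace the nonlinear spring by an amplitude-dependent stiffness and iterate the linear FRF. The sketch below, on a two-DOF chain with a cubic (hardening) spring at the first DOF, is an illustration of the idea, not the authors' NL SDMM implementation; all parameter values are assumptions:

```python
import math

def solve2(A, f):
    """Solve a 2x2 complex system A x = f by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((f[0] * A[1][1] - A[0][1] * f[1]) / det,
            (A[0][0] * f[1] - f[0] * A[1][0]) / det)

def nl_response(omega, k1=1.0, k2=1.0, c=0.3, k3=0.1, iters=200):
    """Steady-state response of a 2-DOF chain (unit masses) with a cubic
    spring at DOF 1: update the linear FRF with the amplitude-dependent
    equivalent stiffness k_eq = k1 + (3/4) * k3 * |x1|^2."""
    k_eq, x1 = k1, 0.0
    for _ in range(iters):
        A = [[k_eq + k2 - omega**2 + 1j * omega * c, -k2],
             [-k2, k2 - omega**2 + 1j * omega * c]]
        x1, _x2 = solve2(A, (1.0, 0.0))          # unit force at DOF 1
        k_new = k1 + 0.75 * k3 * abs(x1) ** 2
        k_eq = 0.5 * k_eq + 0.5 * k_new          # under-relaxed update
    return abs(x1), k_eq

w_res = math.sqrt((3 - math.sqrt(5)) / 2)        # first linear natural frequency
amp_lin, _ = nl_response(w_res, k3=0.0)          # k3 = 0 recovers the linear FRF
amp_nl, k_eq = nl_response(w_res, k3=0.1)        # hardening spring active
```

Only the modified entry of the dynamic stiffness changes between iterations, which mirrors the SDMM idea: the linear model is solved once and then cheaply ‘updated’ at the nonlinear DOF.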
Procedia PDF Downloads 266698 Prevalence and Comparison of Detection Methods for Candida Species in Vaginal Specimens from Pregnant and Non-Pregnant Saudi Women
Authors: Yazeed Al-Sheikh
Abstract:
Pregnancy represents a risk factor for the occurrence of vulvovaginal candidiasis. To investigate the prevalence of vaginal carriage of Candida species in Saudi pregnant and non-pregnant women, 707 high vaginal swab (HVS) specimens were examined by direct microscopy (10% KOH and Giemsa staining) and cultured in parallel on Sabouraud Dextrose Agar (SDA) as well as on “CHROM agar Candida” medium. As expected, Candida-positive cultures were observed more frequently in the pregnant group (24%) than in the non-pregnant group (17%). The frequency of positive cultures was correlated with pregnancy (P=0.047), parity (P=0.001), use of contraceptives (P=0.146), use of antibiotics (P=0.128), and diabetes (P < 0.0001). Out of the 707 examined HVS specimens, 157 (22%) were yeast-culture-positive on Sabouraud Dextrose Agar or “CHROM agar Candida”. In comparison, the sensitivities of the direct 10% KOH and Giemsa stain microscopic examination methods were 84% (132/157) and 95% (149/157), respectively, both with 100% specificity. As for the identity of the 157 recovered yeast isolates, based on API 20C carbohydrate assimilation biotype, germ tube, and chlamydospore formation, C. albicans and C. glabrata constituted 80.3% and 12.7%, respectively. The rates of C. tropicalis, C. kefyr, and C. famata or C. utilis were 2.6, 1.3, and 0.6%, respectively. Saccharomyces cerevisiae and Rhodotorula mucilaginosa yeasts were also encountered, at frequencies of 1.3 and 0.6%, respectively. Finally, among all 157 recovered yeast isolates, no strains resistant to ketoconazole were detected, whereas 5% of the C. albicans and as many as 55% of the non-albicans isolates (majority C. glabrata) showed resistance to fluconazole. 
Our findings may prove helpful for the ongoing determination of the causative species of vaginal candidiasis during pregnancy, for its laboratory diagnosis and control, and for possible measures to minimize the incidence of disease-associated pre-term delivery.Keywords: vaginal candidiasis, Candida spp., pregnancy, risk factors, API 20C-yeast biotypes, giemsa stain, antifungal agents
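The reported sensitivities follow directly from the counts against the culture reference standard:

```python
def sensitivity(tp, fn):
    """True-positive rate: detected positives / all true positives."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: detected negatives / all true negatives."""
    return tn / (tn + fp)

# 157 culture-positive specimens: KOH microscopy found 132, Giemsa found 149
sens_koh = sensitivity(132, 157 - 132)
sens_giemsa = sensitivity(149, 157 - 149)
# both methods gave no false positives among the 550 culture-negative specimens
spec_both = specificity(707 - 157, 0)
```

Rounding these fractions to whole percentages reproduces the abstract's 84% and 95% sensitivities and 100% specificity.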
Procedia PDF Downloads 241697 MRI Findings in Children with Intractable Epilepsy Compared to Children with Drug-Responsive Epilepsy
Authors: Susan Amirsalari, Azime Khosrinejad, Elham Rahimian
Abstract:
Objective: Epilepsy is a common brain disorder characterized by a persistent tendency to develop in neurological, cognitive, and psychological contents. Magnetic Resonance Imaging (MRI) is a neuroimaging test facilitating the detection of structural epileptogenic lesions. This study aimed to compare the MRI findings between patients with intractable and drug-responsive epilepsy. Material & methods: This case-control study was conducted from 2007 to 2019. The research population encompassed all 1-16- year-old patients with intractable epilepsy referred to the Shafa Neuroscience Center (n=72) (a case group) and drug-responsive patients referred to the pediatric neurology clinic of Baqiyatallah Hospital (a control group). Results: There were 72 (23.5%) patients in the intractable epilepsy group and 200 (76.5%) patients in the drug-responsive group. The participants' mean age was 6.70 ±4.13 years, and there were 126 males and 106 females in this study Normal brain MRI was noticed in 21 (29.16%) patients in the case group and 184 (92.46%) patients in the control group. Neuronal migration disorder (NMD)was also exhibited in 7 (9.72%) patients in the case group and no patient in the control group. There were hippocampal abnormalities and focal lesions (mass, dysplasia, etc.) in 10 (13.88%) patients in the case group and only 1 (0.05%) patient in the control group. Gliosis and porencephalic cysts were presented in 3 (4.16%) patients in the case group and no patient in the control group. Cerebral and cerebellar atrophy was revealed in 8 (11.11%) patients in the case group and 4 (2.01%) patients in the control group. Corpus callosum agenesis, hydrocephalus, brain malacia, and developmental cyst were more frequent in the case group; however, the difference between the groups was not significant. 
Conclusion: MRI findings such as hippocampal abnormalities, focal lesions (mass, dysplasia), NMD, porencephalic cysts, gliosis, and atrophy are significantly more frequent in children with intractable epilepsy than in those with drug-responsive epilepsy.
Keywords: magnetic resonance imaging, intractable epilepsy, drug-responsive epilepsy, neuronal migration disorder
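The between-group frequency differences reported above can be checked with a standard Pearson chi-square test on a 2×2 table. A minimal sketch, using the counts given in the abstract for hippocampal abnormalities/focal lesions (the function is the generic textbook formula, not the study's own analysis pipeline):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hippocampal abnormalities / focal lesions (counts from the abstract):
# case group: 10 of 72; control group: 1 of 200
chi2 = chi_square_2x2(10, 72 - 10, 1, 200 - 1)
print(round(chi2, 2))  # well above the 3.84 cutoff for p < 0.05 at 1 df
```

With such a sparse control cell, Fisher's exact test would normally be preferred in practice; the chi-square version is shown only because it is self-contained.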
Procedia PDF Downloads 456
696 The Magnitude and Associated Factors of Coagulation Abnormalities Among Liver Disease Patients at the University of Gondar Comprehensive Specialized Hospital, Northwest Ethiopia
Authors: Melkamu A., Woldu B., Sitotaw C., Seyoum M., Aynalem M.
Abstract:
Background: Liver disease is any condition that affects liver cells and their function. It is directly linked to coagulation disorders since most coagulation factors are produced by the liver. Therefore, this study aimed to assess the magnitude and associated factors of coagulation abnormalities among liver disease patients. Methods: A cross-sectional study was conducted from August to October 2022 among 307 consecutively selected study participants at the University of Gondar Comprehensive Specialized Hospital. Sociodemographic and clinical data were collected using a structured questionnaire and a data extraction sheet, respectively. About 2.7 mL of venous blood was collected and analyzed by the Genrui CA51 coagulation analyzer. Data were entered into Epi-data and exported to STATA version 14 software for analysis. The findings were described in terms of frequencies and proportions. Factors associated with coagulation abnormalities were analyzed by bivariable and multivariable logistic regression. Results: A total of 307 study participants were included. Among them, the magnitudes of prolonged Prothrombin Time (PT) and Activated Partial Thromboplastin Time (APTT) were 68.08% and 63.51%, respectively. The presence of anemia (AOR = 2.97, 95% CI: 1.26, 7.03), a lack of a vegetable feeding habit (AOR = 2.98, 95% CI: 1.42, 6.24), no history of blood transfusion (AOR = 3.72, 95% CI: 1.78, 7.78), and a lack of physical exercise (AOR = 3.23, 95% CI: 1.60, 6.52) were significantly associated with prolonged PT. Similarly, the presence of anemia (AOR = 3.02, 95% CI: 1.34, 6.76), a lack of a vegetable feeding habit (AOR = 2.64, 95% CI: 1.34, 5.20), no history of blood transfusion (AOR = 2.28, 95% CI: 1.09, 4.79), and a lack of physical exercise (AOR = 2.35, 95% CI: 1.16, 4.78) were significantly associated with abnormal APTT. Conclusion: Patients with liver disease had substantial coagulation abnormalities.
Being anemic, having no transfusion history, a lack of physical activity, and a lack of vegetable consumption showed a significant association with coagulopathy. Therefore, early detection and management of coagulation abnormalities in liver disease patients are critical.
Keywords: coagulation, liver disease, PT, APTT
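As a reminder of how adjusted odds ratios such as those above are obtained, a logistic regression coefficient β with standard error SE maps to AOR = exp(β), with a 95% CI of exp(β ± 1.96·SE). A minimal sketch, where the β and SE values are hypothetical placeholders chosen only to roughly reproduce the anemia-PT figure, not the study's actual estimates:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient and its standard
    error into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for anemia on prolonged PT
aor, lo, hi = odds_ratio_ci(beta=1.089, se=0.44)
print(round(aor, 2), round(lo, 2), round(hi, 2))
```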
Procedia PDF Downloads 606
695 Determination of Vinpocetine in Tablets with the Vinpocetine-Selective Electrode and Possibilities of Application in Pharmaceutical Analysis
Authors: Faisal A. Salih
Abstract:
Vinpocetine (Vin) is the ethyl ester of apovincamic acid and a semisynthetic derivative of vincamine, an alkaloid from the periwinkle plant Vinca minor. This compound stimulates cerebral metabolism: it increases the uptake of glucose and oxygen, as well as the consumption of these substances by brain tissue. Vinpocetine enhances blood flow in the brain and has vasodilating, antihypertensive, and antiplatelet effects. Vinpocetine also appears to improve the ability to acquire new memories and to restore memories that have been lost. The drug has been used clinically for the treatment of cerebrovascular disorders such as stroke and dementia-related memory disorders, as well as in ophthalmology and otorhinolaryngology. It has no reported side effects, and no toxicity has been reported with long-term use. For the quantitative determination of Vin in dosage forms, HPLC methods are generally used. A promising alternative is potentiometry with a Vin-selective electrode, which does not require expensive equipment or materials. Another advantage of the potentiometric method is that tablets and solutions for injection can be analyzed directly without separation from matrix components, which reduces both analysis time and cost. In this study, it was found that, with a suitable choice of plasticizer, an electrode with the following membrane composition: PVC (32.8 wt.%), ortho-nitrophenyl octyl ether (66.6 wt.%), tetrakis(4-chlorophenyl)borate (0.6 wt.%), exhibits excellent analytical performance: a lower detection limit (LDL) of 1.2·10⁻⁷ M, a linear response range (LRR) of 1·10⁻³–3.9·10⁻⁶ M, and a slope of the electrode function of 56.2±0.2 mV/decade. Vin masses per average tablet weight, determined by direct potentiometry (DP) and potentiometric titration (PT) for two different sets of 10 tablets from separate blister packs, were 100.35±0.2 mg and 100.36±0.1 mg.
The mass of Vin in individual tablets, determined using DP, was 9.87±0.02–10.16±0.02 mg, with an RSD of 0.13–0.35%. The procedure has very good reproducibility, and excellent agreement with the declared amounts was observed.
Keywords: vinpocetine, potentiometry, ion-selective electrode, pharmaceutical analysis
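Direct potentiometry with such an electrode rests on the Nernstian calibration E = E⁰ + S·log₁₀(C); once E⁰ and the slope S are known, an unknown concentration follows by inverting the relation. A minimal round-trip sketch using the reported slope of 56.2 mV/decade (the E⁰ value and the measured potential are hypothetical placeholders, not measurements from the study):

```python
def concentration_from_potential(e_mv, e0_mv, slope_mv=56.2):
    """Invert the Nernstian calibration E = E0 + S*log10(C)
    to recover the analyte concentration C (in mol/L)."""
    return 10 ** ((e_mv - e0_mv) / slope_mv)

# Round trip: a 1.0e-4 M Vin solution with a hypothetical E0 = 400 mV
e0 = 400.0
e = e0 + 56.2 * (-4)          # potential the calibration predicts
c = concentration_from_potential(e, e0)
print(f"{c:.1e} M")
```

In practice E⁰ and S would be fitted from standard solutions spanning the linear response range before any tablet extract is measured.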
Procedia PDF Downloads 766
694 Dynamic Analysis of Commodity Price Fluctuation and Fiscal Management in Sub-Saharan Africa
Authors: Abidemi C. Adegboye, Nosakhare Ikponmwosa, Rogers A. Akinsokeji
Abstract:
For many resource-rich developing countries, fiscal policy has become a key tool for short-run fiscal management, since it is considered to play a critical role in injecting part of resource rents into the economy. However, given its instability, reliance on revenue from commodity exports renders fiscal management, budgetary planning, and the efficient use of public resources difficult. In this study, the linkage between commodity prices and fiscal operations among a sample of commodity-exporting countries in sub-Saharan Africa (SSA) is investigated. The main question is whether commodity price fluctuations affect the effectiveness of fiscal policy as a macroeconomic stabilization tool in these countries. Fiscal management effectiveness is considered the ability of fiscal policy to react countercyclically to output gaps in the economy. Fiscal policy is measured as the ratio of the fiscal deficit to GDP and the ratio of government spending to GDP; the output gap is measured as a Hodrick-Prescott filter of output growth for each country; and commodity prices are associated with each country based on its main export commodity. Given the dynamic nature of fiscal policy effects on the economy over time, a dynamic framework is devised for the empirical analysis. The panel cointegration and error correction methodology is used to explain the relationships. In particular, the study employs the panel ECM technique to trace the short-term effects of commodity prices on fiscal management and uses the fully modified OLS (FMOLS) technique to determine the long-run relationships. These procedures provide sufficient estimation of the dynamic effects of commodity prices on fiscal policy. The data cover the period 1992 to 2016 for 11 SSA countries. The study finds that the elasticity of the fiscal policy measures with respect to the output gap is significant and positive, suggesting that fiscal policy is actually procyclical among the countries in the sample.
This implies that fiscal management in these countries follows the trend of economic performance. Moreover, fiscal policy has not performed well in delivering macroeconomic stabilization for these countries. The difficulty in applying fiscal stabilization measures is attributable to unstable revenue inflows caused by the highly volatile nature of commodity prices in the international market. For commodity-exporting countries in SSA to improve fiscal management, therefore, the study suggests that fiscal planning should be largely decoupled from commodity revenues, domestic revenue bases improved, and longer-term perspectives adopted in fiscal policy management.
Keywords: commodity prices, ECM, fiscal policy, fiscal procyclicality, fully modified OLS, Sub-Saharan Africa
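The cyclicality test described above reduces, in its simplest form, to regressing a fiscal measure on the output gap and checking the sign of the slope: a positive coefficient signals procyclical spending. A deliberately simplified sketch with made-up numbers (the study itself uses panel ECM and FMOLS, which this toy bivariate OLS does not replicate):

```python
def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Made-up series: output gap (%) and government spending / GDP (%)
output_gap = [-2.0, -1.0, 0.0, 1.0, 2.0]
spend_gdp  = [18.0, 19.0, 20.5, 21.0, 22.5]

beta = ols_slope(output_gap, spend_gdp)
print(beta > 0)  # spending rises with the gap -> procyclical
```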
Procedia PDF Downloads 164
693 A Dynamic Cardiac Single Photon Emission Computed Tomography Using a Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information about the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. Standard rest and stress radionuclide doses of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF values estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), while the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal territories than for normal ones (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: Visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin, coronary flow reserve
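Since CFR is defined in the abstract simply as the ratio of stress to rest MBF, per-territory values are a one-line computation once both flow maps exist. A minimal sketch with illustrative numbers (the MBF values are hypothetical, chosen inside the 0.44-3.81 ml/min/g range reported; the 2.0 cutoff for a reduced CFR is a commonly used clinical threshold, not one stated in the abstract):

```python
def coronary_flow_reserve(stress_mbf, rest_mbf):
    """CFR = stress MBF / rest MBF for each coronary territory."""
    return {t: stress_mbf[t] / rest_mbf[t] for t in stress_mbf}

# Hypothetical per-territory MBF values in ml/min/g
stress = {"LAD": 2.4, "RCA": 2.7, "LCX": 1.2}
rest   = {"LAD": 0.8, "RCA": 0.9, "LCX": 0.8}

cfr = coronary_flow_reserve(stress, rest)
reduced = [t for t, v in cfr.items() if v < 2.0]  # assumed cutoff
print(cfr, reduced)
```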
Procedia PDF Downloads 151