Search results for: accurate tagging algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5810


1010 Recent Advances in Pulse Width Modulation Techniques and Multilevel Inverters

Authors: Satish Kumar Peddapelli

Abstract:

This paper presents advances in pulse width modulation (PWM) techniques, which refer to methods of carrying information on a train of pulses, with the information encoded in the width of the pulses. Pulse width modulation is used to control the inverter output voltage. This is done by exercising the control within the inverter itself, adjusting the ON and OFF periods of the inverter switches. With a fixed DC input voltage, the inverter produces a controllable AC output voltage; in variable-speed AC motor drives, the AC output voltage is obtained from a constant DC voltage by means of an inverter. Recent developments in power electronics and semiconductor technology have led to improvements in power electronic systems. Hence, different circuit configurations, namely multilevel inverters, have become popular and have attracted considerable research interest. A fast Space-Vector Pulse Width Modulation (SVPWM) method for the five-level inverter is also discussed. In this method, the space vector diagram of the five-level inverter is decomposed into six space vector diagrams of three-level inverters. In turn, each of these six three-level space vector diagrams is decomposed into six space vector diagrams of two-level inverters. After decomposition, all the remaining procedures for the three-level SVPWM are carried out as for a conventional two-level inverter. The proposed method reduces the algorithm complexity and the execution time, and it can also be applied to multilevel inverters with more than five levels. An experimental setup for a three-level diode-clamped inverter is developed using a TMS320LF2407 DSP controller, and the experimental results are analysed.
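The first step of the decomposition described above, locating the reference voltage vector within one of the six 60° sectors (each corresponding to a lower-level sub-hexagon), can be sketched as follows. This is an illustrative geometry sketch, not the paper's DSP implementation; the function names and the unit sub-hexagon radius are assumptions.

```python
import math

def sector_of(v_alpha, v_beta):
    """Return which of the six 60-degree sectors (1..6) the reference
    voltage vector lies in -- the first step when decomposing a
    five-level space-vector diagram into six three-level diagrams."""
    angle = math.degrees(math.atan2(v_beta, v_alpha)) % 360.0
    return int(angle // 60) + 1

def shift_to_subhexagon(v_alpha, v_beta, sector, radius=1.0):
    """Translate the reference vector toward the centre of the selected
    sub-hexagon, so the remaining duty-cycle calculation can reuse the
    conventional two-level SVPWM procedure (illustrative geometry only)."""
    centre_angle = math.radians((sector - 1) * 60 + 30)
    return (v_alpha - radius * math.cos(centre_angle),
            v_beta - radius * math.sin(centre_angle))
```

The same sector test is applied recursively at each decomposition level, which is what keeps the execution time close to that of a two-level modulator.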

Keywords: five-level inverter, space vector pulse width modulation, diode clamped inverter, electrical engineering

Procedia PDF Downloads 386
1009 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables product quality to be secured through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of the same production conditions within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
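As a rough illustration of the boosting approach named above, the following is a minimal AdaBoost built from one-feature threshold stumps. The study itself applies a library classifier to Bosch production data; this sketch only shows the weight-and-reweight mechanics, with labels assumed to be in {-1, +1}.

```python
import numpy as np

def fit_adaboost_stumps(X, y, n_rounds=20):
    """Minimal AdaBoost with single-feature threshold stumps.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # sample weights
    ensemble = []                      # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for j in range(d):             # exhaustive stump search
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, pred)
        err, j, t, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)      # stump vote weight
        w *= np.exp(-alpha * y * pred)             # upweight mistakes
        w /= w.sum()
        ensemble.append((j, t, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote over all stumps."""
    score = np.zeros(len(X))
    for j, t, pol, alpha in ensemble:
        score += alpha * np.where(pol * (X[:, j] - t) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)
```

In the leakage setting, the columns of X would be the gauge, mating, and end-of-line measurements, and y the leak/no-leak outcome.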

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 161
1008 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate Pilot-Assisted Channel Estimation for DC-biased Optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error-rate (BER) of PACE-DCO-OFDM. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
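The two estimators compared above can be sketched as follows, assuming pilot symbols on known subcarriers. The channel autocorrelation matrix `R_hh` supplied to the LMMSE step is an assumed input; this is a textbook-style sketch, not the authors' MATLAB/Simulink model.

```python
import numpy as np

def ls_estimate(y_pilot, x_pilot):
    """Least-squares channel estimate at the pilot subcarriers: H_ls = Y / X."""
    return y_pilot / x_pilot

def lmmse_estimate(h_ls, R_hh, snr_linear):
    """LMMSE smoothing of the LS estimate, given a channel autocorrelation
    matrix R_hh:  H_lmmse = R_hh (R_hh + (1/SNR) I)^-1 H_ls."""
    n = len(h_ls)
    W = R_hh @ np.linalg.inv(R_hh + np.eye(n) / snr_linear)
    return W @ h_ls
```

The LMMSE filter trades extra matrix arithmetic (and knowledge of R_hh and the SNR) for the noise suppression that produces the lower BER reported above; at very high SNR it collapses back to the LS estimate.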

Keywords: channel estimation, OFDM, pilot-assist, VLC

Procedia PDF Downloads 178
1007 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure such as canopy height. However, LiDAR’s coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they do not carry height information. Hence, integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate prediction performance, two new predicted canopy height maps (CHMs), one for each year, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the corresponding airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show a mean absolute error (MAE) for 2018 of 2.93 m for the RFR model and 1.71 m for the CNN model, and for 2021 of 3.35 m and 3.78 m, respectively. These results demonstrate the feasibility of using the RFR and CNN models developed in this research to predict large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
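The MAE figures reported above can be computed per pixel with a small helper; a minimal sketch, assuming both canopy height models are co-registered 10 m rasters with NaN marking no-data pixels.

```python
import numpy as np

def mean_absolute_error(chm_lidar, chm_predicted):
    """MAE between a LiDAR-derived canopy height model and a predicted
    CHM, evaluated per 10 m pixel (NaN pixels, e.g. water, are skipped)."""
    diff = np.abs(np.asarray(chm_lidar, float) - np.asarray(chm_predicted, float))
    return np.nanmean(diff)
```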

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 90
1006 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics

Authors: Sairi Satari

Abstract:

Introduction: Six-sigma is a metric that quantifies process performance as a rate of defects per million opportunities. Sigma methodology can be applied in the chemical pathology laboratory to evaluate process performance and provide evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control, and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirement of the specification. Methodology: Twenty-one analytes were chosen: alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid and urea. Total allowable error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). Bias was calculated from the end-of-cycle report of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC). Sigma was calculated as: Sigma = (Total Error − Bias) / CV. Analytical performance was graded as follows: sigma > 6 is world class, sigma > 5 is excellent, sigma > 4 is good, sigma between 3 and 4 is satisfactory, and sigma < 3 is poor. Results: Based on the calculation, 76% of analytes are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, LDH, magnesium, potassium, triglyceride and uric acid), 14% are excellent (calcium, protein and urea), and 10% (chloride and sodium) require more frequent IQC per day. Conclusion: Based on this study, IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
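The sigma formula and the grading scale quoted above translate directly into code; `performance_grade` follows the thresholds as stated in the abstract.

```python
def sigma_metric(total_error_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |Bias|) / CV, all expressed in percent of the
    analyte concentration (TEa = total allowable error)."""
    return (total_error_pct - abs(bias_pct)) / cv_pct

def performance_grade(sigma):
    """Grading scale as given in the abstract."""
    if sigma > 6:
        return "world class"
    if sigma > 5:
        return "excellent"
    if sigma > 4:
        return "good"
    if sigma > 3:
        return "satisfactory"
    return "poor"
```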

Keywords: sigma metrics, analytical performance, total error, bias

Procedia PDF Downloads 170
1005 Case Study: Optimization of Contractor’s Financing through Allocation of Subcontractors

Authors: Helen S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

In many countries, the construction industry relies heavily on outsourcing models to execute projects and expand business in a diverse market. Such extensive integration of subcontractors is becoming an influential factor in the contractor’s cash flow management. Accordingly, subcontractors’ financial terms are pivotal components for the well-being of the contractor’s cash flow. The aim of this research is to study the contractor’s cash flow with respect to the owner’s and subcontractors’ payment management plans, considering variable advance payment, payment frequency, and lag and retention policies. The model is developed to provide contractors with a decision support tool that can assist in selecting the optimum subcontracting plan to minimize the contractor’s financing limits and optimize profit values. The model is built using Microsoft Excel VBA coding, and a genetic algorithm is utilized as the optimization tool. Three objective functions are investigated: minimizing the highest negative overdraft value, minimizing the net present worth of overdraft, and maximizing the project net profit. The model is validated on a full-scale project which includes both self-performed and subcontracted work packages. The results show potential in optimizing the contractor’s negative cash flow values and, at the same time, assisting contractors in selecting suitable subcontractors to achieve the objective function.
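The first objective function, minimizing the highest negative overdraft, can be sketched as a simple fitness evaluation over a hypothetical monthly cash-flow profile; this is an illustrative sketch, not the authors' Excel VBA model.

```python
def worst_overdraft(owner_receipts, subcontractor_payments):
    """Run the cumulative cash balance period by period and return the
    largest negative (overdraft) value reached; 0.0 if never negative.
    A genetic algorithm would try to drive this value toward zero by
    reshuffling the subcontractor payment plan."""
    balance, worst = 0.0, 0.0
    for received, paid in zip(owner_receipts, subcontractor_payments):
        balance += received - paid
        worst = min(worst, balance)
    return worst
```

In a GA, each candidate subcontractor allocation would be encoded as a chromosome, and this value (negated) would serve as its fitness.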

Keywords: cash flow optimization, payment plan, procurement management, subcontracting plan

Procedia PDF Downloads 129
1004 Enhanced Model for Risk-Based Assessment of Employee Security with Bring Your Own Device Using Cyber Hygiene

Authors: Saidu I. R., Shittu S. S.

Abstract:

As the trend of personal devices accessing corporate data continues to rise through Bring Your Own Device (BYOD) practices, organizations recognize the potential cost reduction and productivity gains. However, the associated security risks pose a significant threat to these benefits. Often, organizations adopt BYOD environments without fully considering the vulnerabilities introduced by human factors in this context. This study presents an enhanced assessment model that evaluates the security posture of employees in BYOD environments using cyber hygiene principles. The framework assesses users' adherence to best practices and guidelines for maintaining a secure computing environment, employing scales and the Euclidean distance formula. Using this distance measure, the study quantifies the gap between users' security practices and the organization's optimal security policies. To facilitate user evaluation, a simple and intuitive interface for automated assessment is developed. To validate the effectiveness of the proposed framework, design science research methods are employed, and empirical assessments are conducted using five artifacts to analyze user suitability in BYOD environments. By addressing human-factor vulnerabilities through the assessment of cyber hygiene practices, this study aims to enhance the overall security of BYOD environments and enable organizations to leverage the advantages of this evolving trend while mitigating potential risks.
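The Euclidean-distance scoring described above can be sketched directly; the per-practice score vectors and their scale are assumptions for illustration.

```python
import math

def hygiene_gap(user_scores, policy_scores):
    """Euclidean distance between a user's cyber-hygiene practice scores
    and the organisation's optimal-policy scores, one number per practice
    (e.g. patching, passwords, device locking). Smaller = better hygiene."""
    return math.sqrt(sum((u - p) ** 2 for u, p in zip(user_scores, policy_scores)))
```

A user whose scores equal the policy vector has a gap of zero; the gap grows with every practice that falls short of the organisational optimum.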

Keywords: security, BYOD, vulnerability, risk, cyber hygiene

Procedia PDF Downloads 74
1003 High-Throughput Artificial Guide RNA Sequence Design for Type I, II and III CRISPR/Cas-Mediated Genome Editing

Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii

Abstract:

A huge revolution has emerged in genome engineering with the discovery of CRISPR (clustered regularly interspaced short palindromic repeats) and CRISPR-associated (Cas) genes in bacteria. The function of the type II Streptococcus pyogenes (Sp) CRISPR/Cas9 system has been confirmed in various species. Other S. thermophilus (St) CRISPR-Cas systems, CRISPR1-Cas and CRISPR3-Cas, have also been reported to prevent phage infection. The CRISPR1-Cas system interferes by cleaving foreign dsDNA entering the cell in a length-specific and orientation-dependent manner. The S. thermophilus CRISPR3-Cas system also acts by cleaving phage dsDNA genomes at the same specific position inside the targeted protospacer as observed in the CRISPR1-Cas system. It is worth mentioning that, for effective DNA cleavage activity, RNA-guided Cas9 orthologs require their own specific PAM (protospacer adjacent motif) sequences. Activity levels depend on the sequence of the protospacer and specific combinations of favorable PAM bases. Therefore, based on the specific length and sequence of the PAM, followed by a constant length of target site for the three Cas9 orthologs, a well-organized procedure is required for high-throughput and accurate mining of possible target sites in a large genomic dataset. Consequently, we created a reliable procedure to explore potential gRNA sequences for the type I (Streptococcus thermophilus), II (Streptococcus pyogenes), and III (Streptococcus thermophilus) CRISPR/Cas systems. To mine CRISPR target sites, four different search modes for sgRNA binding to the target DNA strand were applied: i) coding strand search, ii) anti-coding strand search, iii) both-strand search, and iv) paired-gRNA search. The output of this procedure highlights the power of comparative genome mining for different CRISPR/Cas systems. It could yield a repertoire of Cas9 variants with expanded gRNA design capabilities and will pave the way for further advanced genome and epigenome engineering.
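Search mode iii (both-strand searching) can be sketched with a PAM regular expression. The NGG default below is SpCas9's PAM; other orthologs would substitute their own pattern. This is an illustrative scanner, not the authors' pipeline, and reported positions on the "-" strand are relative to the reverse-complemented sequence.

```python
import re

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_targets(genome, spacer_len=20, pam=r"(?=[ACGT]GG)"):
    """Scan both strands for <spacer><PAM> sites. The PAM is given as a
    zero-width lookahead so that overlapping sites are not missed.
    Returns (strand, spacer_start, spacer_sequence) tuples."""
    hits = []
    for strand, seq in (("+", genome), ("-", revcomp(genome))):
        for m in re.finditer(pam, seq):
            start = m.start() - spacer_len
            if start >= 0:                      # full-length spacer fits
                hits.append((strand, start, seq[start:m.start()]))
    return hits
```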

Keywords: CRISPR/Cas systems, gRNA mining, Streptococcus pyogenes, Streptococcus thermophilus

Procedia PDF Downloads 255
1002 Investigation of Leishmaniasis, Babesiosis, Ehrlichiosis, Dirofilariasis, and Hepatozoonosis in Referred Dogs to Veterinary Hospitals in Tehran, 2022

Authors: Mohamad Bolandmartabe, Nafiseh Hassani, Saeed Abdi Darake, Maryam Asghari

Abstract:

Dogs are highly susceptible to diseases, nutritional problems, toxins, and parasites, with parasitic infections being common and causing hardship in their lives. Some important internal parasites include worms (such as roundworms and tapeworms) and protozoa, which can lead to anemia in dogs. Important bloodborne parasites in dogs include microfilariae and adult forms of Dirofilaria immitis, Dipetalonema reconditum, Babesia, Trypanosoma, Hepatozoon, Leishmania, Ehrlichia, and Hemobartonella. Babesia and Hemobartonella are parasites that reside inside red blood cells and cause regenerative anemia by directly destroying the red blood cells. Hepatozoon, Leishmania, and Ehrlichia are also parasites that reside within white blood cells and can infiltrate other tissues, such as the liver and lymph nodes. Since intermediate hosts are more commonly found in the open environment, the prevalence of parasites in stray and free-roaming dogs is higher compared to pet dogs. Furthermore, pet dogs are less exposed to internal and external parasites due to better care, hygiene, and being predominantly indoors. Therefore, they are less likely to be affected by them. Among the parasites, Leishmania carries significant importance as it is shared between dogs and humans, causing a dangerous disease known as visceral Leishmaniasis or kala-azar and cutaneous Leishmaniasis. Furthermore, dogs can act as reservoirs and spread the disease agent within human communities. Therefore, timely and accurate diagnosis of these diseases in dogs can be highly beneficial in preventing their occurrence in humans. In this article, we employed the Giemsa staining technique under a light microscope for the identification of bloodborne parasites in dogs. However, considering the negative impact of these parasites on the natural life of dogs, the development of chronic diseases, and the gradual loss of the animal's well-being, rapid and timely diagnosis is essential. 
Serological methods and PCR are available for the diagnosis of certain parasites, which have high sensitivity and desirable characteristics. Therefore, this research aims to investigate the molecular aspects of bloodborne parasites in dogs referred to veterinary hospitals in Tehran city.

Keywords: leishmaniasis, babesiosis, ehrlichiosis, dirofilariasis, hepatozoonosis

Procedia PDF Downloads 98
1001 Preliminary Study of Gold Nanostars/Enhanced Filter for Keratitis Microorganism Raman Fingerprint Analysis

Authors: Chi-Chang Lin, Jian-Rong Wu, Jiun-Yan Chiu

Abstract:

Myopia, a ubiquitous condition that must be corrected with optical lenses, affects many people in their daily lives. In recent years, younger people have taken a growing interest in contact lenses because of their convenience and aesthetics. Clinically, incorrect contact lens use and unsupervised cleaning increase the risk of corneal infection, known as ocular keratitis. To meet the identification needs, a new detection or analysis method offering rapid and more accurate identification of clinical microorganisms is urgently required. In our study, we take advantage of Raman spectroscopy, whose unique fingerprints for different functional groups make it a distinct and fast examination tool for microorganisms. Raman scattering signals are normally too weak for detection, especially in biological samples, so we applied special SERS enhancement substrates to generate stronger Raman signals. The SERS filter designed in this work was prepared by depositing silver nanoparticles directly onto a cellulose filter surface, and suspended nanoparticles, gold nanostars (AuNSs), were also introduced to achieve better enhancement for low-concentration analytes (i.e., various bacteria). Our research also focuses on the shape effect of the synthetic AuNSs, whose needle-like surface morphology may create more hot spots and thus higher SERS enhancement. We utilized the newly designed SERS technology to distinguish bacteria from ocular keratitis at the strain level, and specific Raman and SERS fingerprints were grouped by pattern recognition. We report a new method combining different SERS substrates that can be applied to clinical microorganism detection at the strain level with simple, rapid preparation and low cost. The presented SERS technology not only shows great potential for clinical bacteria detection but can also be used for environmental pollution and food safety analysis.

Keywords: bacteria, gold nanostars, Raman spectroscopy, surface-enhanced Raman scattering filter

Procedia PDF Downloads 165
1000 Isolation and Identification of Salmonella spp and Salmonella enteritidis, from Distributed Chicken Samples in the Tehran Province using Culture and PCR Techniques

Authors: Seyedeh Banafsheh Bagheri Marzouni, Sona Rostampour Yasouri

Abstract:

Salmonella is one of the most important pathogens common to humans and animals worldwide. Globally, the prevalence of the disease in humans is due to the consumption of food contaminated with animal-derived Salmonella. These foods include eggs, red meat, chicken, and milk. Contamination of chicken and its products with Salmonella may occur at any stage of the chicken processing chain. Salmonella infection is usually not fatal; however, it is dangerous in some individuals, such as infants, children, the elderly, pregnant women, or individuals with weakened immune systems. If Salmonella infection enters the bloodstream, tissues throughout the body may become contaminated. Therefore, determining the potential risk of Salmonella at various stages is essential from the perspective of consumers and public health. The aim of this study is to isolate and identify Salmonella from chicken samples distributed in the Tehran market using the gold-standard culture method and PCR techniques based on the specific genes invA and ent. During 2022-2023, sampling was performed using swabs from the liver and intestinal contents of distributed chickens in the Tehran province, with a total of 120 samples taken under aseptic conditions. The samples were first pre-enriched in buffered peptone water (BPW) overnight. Then, the samples were incubated in selective enrichment media, including TT broth and RVS medium, at 37°C and 42°C, respectively, for 18 to 24 hours. Organisms that grew in the liquid medium and produced turbidity were transferred to selective media (XLD and BGA) and incubated overnight at 37°C for isolation. Suspected Salmonella colonies were selected for DNA extraction, and PCR was performed using specific primers targeting the invA and ent genes. The results indicated that 94 samples were positive for Salmonella by PCR: 71 samples were positive for the invA gene and 23 for the ent gene. Although the culture technique is the gold standard, PCR is a faster and more accurate method. Rapid detection by PCR enables the identification of Salmonella contamination in food items and the implementation of necessary measures for disease control and prevention.

Keywords: culture, PCR, salmonella spp, salmonella enteritidis

Procedia PDF Downloads 70
999 Frontal Oscillatory Activity and Phase–Amplitude Coupling during Chan Meditation

Authors: Arthur C. Tsai, Chii-Shyang Kuo, Vincent S. C. Chien, Michelle Liou, Philip E. Cheng

Abstract:

Meditation enhances mental abilities and is an antidote to anxiety. However, very little is known about the brain mechanisms and cortico-subcortical interactions underlying meditation-induced anxiety relief. In this study, changes in phase-amplitude coupling (PAC), in which the amplitude of the beta frequency band is modulated in phase with the delta rhythm, were investigated after eight weeks of meditation training. The study hypothesized that, through concentrated but relaxed mental training, delta-beta coupling in the frontal regions is attenuated. The delta-beta coupling analysis was applied within and between maximally independent component sources returned by the extended infomax independent component analysis (ICA) algorithm on the continuous EEG data recorded during meditation. A unique meditative concentration task of relaxing body and mind was used with a constant level of moderate mental effort, so as to approach an ‘emptiness’ meditative state. A pre-test/post-test control group design was used. To evaluate cross-frequency phase-amplitude coupling of the component sources, the modulation index (MI) with circular phase statistics was estimated. Our findings reveal a significant delta-beta decoupling in a set of frontal regions bilaterally. In addition, the beta frequency band of the prefrontal component was amplitude-modulated in phase with the delta rhythm of the medial frontal component.
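The modulation index has several variants; a minimal mean-vector-length sketch (Canolty-style) is shown below on a synthetic phase/amplitude pair. In practice the delta phase and beta amplitude would first be extracted by band-pass filtering and a Hilbert transform, and the abstract's circular statistics would then assess significance.

```python
import numpy as np

def modulation_index(slow_phase, fast_amplitude):
    """Mean-vector-length PAC estimate: the length of the amplitude-weighted
    mean phase vector, normalised by the mean amplitude. Near 0 = no
    coupling; larger = the fast amplitude is locked to the slow phase."""
    z = fast_amplitude * np.exp(1j * slow_phase)
    return np.abs(z.mean()) / fast_amplitude.mean()
```

With a beta envelope of the form 1 + cos(delta phase), this index evaluates to 0.5, while a constant envelope yields 0, which is the direction of change the delta-beta decoupling result describes.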

Keywords: phase-amplitude coupling, ICA, meditation, EEG

Procedia PDF Downloads 423
998 Non-Invasive Pre-Implantation Genetic Assessment Using NGS in IVF Clinical Routine

Authors: Katalin Gombos, Bence Gálik, Krisztina Ildikó Kalács, Krisztina Gödöny, Ákos Várnagy, József Bódis, Attila Gyenesei, Gábor L. Kovács

Abstract:

Although non-invasive pre-implantation genetic testing for aneuploidy (NIPGT-A) is potentially appropriate for assessing the chromosomal ploidy of the embryo, its practical application in routine IVF centers has not yet begun in the absence of a formal recommendation. We developed a comprehensive workflow for a clinically applicable NIPGT-A strategy based on next-generation sequencing (NGS) technology. We performed MALBAC whole-genome amplification and NGS on spent blastocyst culture media of Day 3 embryos fertilized with intra-cytoplasmic sperm injection (ICSI). Spent embryonic culture media of embryos with good morphological quality scores were enrolled in further analysis, with blank culture media as background control. Chromosomal abnormalities were identified by an optimized bioinformatics pipeline applying a copy number variation (CNV) detection algorithm. We demonstrate a comprehensive workflow covering both wet- and dry-lab procedures supporting a clinically applicable NIPGT-A strategy. It can be carried out within 48 h, which is critical for same-cycle blastocyst transfer, but is also suitable for “freeze all” and “elective frozen embryo” strategies. The described integrated approach of non-invasively evaluating the embryonic DNA content of the culture media can potentially supplement existing pre-implantation genetic screening methods.
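The CNV-detection step can be caricatured as a binned read-count ratio against the background control; real NIPGT-A pipelines add GC correction, segmentation, and per-chromosome aggregation, so the thresholds below are illustrative assumptions only.

```python
import numpy as np

def call_cnv(sample_counts, control_counts, gain=1.2, loss=0.8):
    """Per-bin copy-number ratio of median-normalised sample read counts
    (spent culture medium) against a background control (blank medium),
    flagging each genomic bin as gain / loss / neutral."""
    s = np.asarray(sample_counts, float)
    c = np.asarray(control_counts, float)
    ratio = (s / np.median(s)) / (c / np.median(c))
    return ["gain" if r > gain else "loss" if r < loss else "neutral"
            for r in ratio]
```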

Keywords: next generation sequencing, in vitro fertilization, embryo assessment, non-invasive pre-implantation genetic testing

Procedia PDF Downloads 154
997 Singular Perturbed Vector Field Method Applied to the Problem of Thermal Explosion of Polydisperse Fuel Spray

Authors: Ophir Nave

Abstract:

In our research, we present the concept of the singularly perturbed vector field (SPVF) method and its application to thermal explosion in diesel spray combustion. Given a system of governing equations containing hidden multi-scale variables, the SPVF method transforms and decomposes the system into fast and slow singularly perturbed subsystems (SPS). The SPVF method enables us to understand the complex system and simplify the calculations. Powerful analytical, numerical and asymptotic methods (e.g., the method of integral (invariant) manifold (MIM), the homotopy analysis method (HAM), etc.) can then be applied to each subsystem. We compare the results obtained by the method of integral invariant manifold and by SPVF applied to a spray droplet combustion model. The research deals with the development of an innovative method for extracting fast and slow variables in physical-mathematical models. The method we developed, called the singularly perturbed vector field method, is based on a numerical algorithm for global quasi-linearization applied to a given physical model. The SPVF method has been applied successfully to combustion processes, and our results were compared with experimental results. The SPVF is a general numerical and asymptotic method that reveals the hierarchy (multi-scale structure) of a given system.

Keywords: polydisperse spray, model reduction, asymptotic analysis, multi-scale systems

Procedia PDF Downloads 217
996 Accelerator Mass Spectrometry Analysis of Isotopes of Plutonium in PM₂.₅

Authors: C. G. Mendez-Garcia, E. T. Romero-Guzman, H. Hernandez-Mendoza, C. Solis, E. Chavez-Lomeli, E. Chamizo, R. Garcia-Tenorio

Abstract:

Plutonium is present in different concentrations in the environment and in biological samples, related to nuclear weapons testing, nuclear waste recycling and accidental discharges from nuclear plants. This radioisotope is considered among the most radiotoxic substances, particularly when it enters the human body through inhalation of insoluble powders or aerosols, which is the main reason for determining its concentration in the atmosphere. In addition, the ²⁴⁰Pu/²³⁹Pu isotopic ratio provides information about the origin of the source. PM₂.₅ sampling was carried out in the Metropolitan Zone of the Valley of Mexico (MZVM) from February 18th to March 17th, 2015, on quartz filters. There have been significant recent developments in sample preparation and accurate measurement to detect the ultra-trace levels at which plutonium is found in the environment. Accelerator mass spectrometry (AMS) is a technique that allows detection levels of around femtograms (10⁻¹⁵ g). The AMS determinations include the chemical isolation of Pu. The Pu separation involved acidic digestion and radiochemical purification using an anion exchange resin. Finally, the source is prepared by pressing the Pu into the corresponding cathodes. To the authors' knowledge, these aerosols have shown deviations of the ²³⁵U/²³⁸U ratio from the natural value, suggesting that an anthropogenic source could be altering it. Determining the concentrations of the Pu isotopes can be a useful tool to clarify this presence in the atmosphere. The first results showed a mean activity concentration of ²³⁹Pu of 280 nBq m⁻³, and the ²⁴⁰Pu/²³⁹Pu ratio of 0.025 corresponds to a weapons-production source; these results corroborate that an anthropogenic influence is increasing the concentration of radioactive material in PM₂.₅. To the authors' knowledge, activity concentrations of ²³⁹⁺²⁴⁰Pu of around a few tens of nBq m⁻³ and ²⁴⁰Pu/²³⁹Pu ratios of 0.17 have been reported for Total Suspended Particles (TSP). The preliminary results in the MZVM show higher activity concentrations of Pu isotopes (40 to 700 nBq m⁻³) and a lower ²⁴⁰Pu/²³⁹Pu ratio than reported. These results are of the order of the activity concentrations of Pu in high-purity weapons-grade material.

Keywords: aerosols, fallout, mass spectrometry, radiochemistry, tracer, ²⁴⁰Pu/²³⁹Pu ratio

Procedia PDF Downloads 165
995 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance.
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
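The conflation operation itself — the normalized product of the input densities — can be sketched numerically. The normal inputs and recovery times below are purely illustrative, not the paper's data; with normal inputs the conflated mean reproduces the familiar inverse-variance weighting of weighted least squares:

```python
import numpy as np

def conflate(pdf_values, dx):
    """Conflation: the normalized product of density values sampled on a uniform grid."""
    q = np.prod(pdf_values, axis=0)
    return q / (q.sum() * dx)          # renormalize so the product integrates to 1

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-20.0, 40.0, 6001)
dx = x[1] - x[0]
severe = normal_pdf(x, 12.0, 4.0)      # hypothetical severe-event recovery time (weeks)
nuisance = normal_pdf(x, 6.0, 2.0)     # hypothetical nuisance-event recovery time (weeks)
q = conflate([severe, nuisance], dx)
mean = (x * q).sum() * dx
# Inverse-variance weighting: (12/16 + 6/4) / (1/16 + 1/4) = 7.2 weeks
print(round(mean, 2))
```

Because the tighter (lower-variance) input receives more weight, the conflated mean lies between the parents but closer to the nuisance-event distribution.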

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 102
994 An Investigation of the Relationship Between Privacy Crisis, Public Discourse on Privacy, and Key Performance Indicators at Facebook (2004–2021)

Authors: Prajwal Eachempati, Laurent Muzellec, Ashish Kumar Jha

Abstract:

We use Facebook as a case study to investigate the complex relationship between the firm’s public discourse (and actions) surrounding data privacy and the performance of a business model based on monetizing users’ data. We do so by looking at the evolution of public discourse over time (2004–2021) and relating topics to revenue and stock market evolution. Drawing from archival sources such as Zuckerberg's public statements, we use the LDA topic modelling algorithm to reveal 19 topics regrouped into 6 major themes. We first show how, by using persuasive and convincing language that promises better protection of consumer data usage but also emphasizes greater user control over their own data, the privacy issue is being reframed as one of greater user control and responsibility. Second, we aim to understand and put a value on the extent to which privacy disclosures have a potential impact on the financial performance of social media firms. We found a significant relationship between the topics pertaining to privacy and social media/technology, the sentiment score, and stock market prices. Revenue is found to be impacted by topics pertaining to politics and to new product and service innovations, while the number of active users is not impacted by the topics unless moderated by external control variables such as Return on Assets and Brand Equity.

Keywords: public discourses, data protection, social media, privacy, topic modeling, business models, financial performance

Procedia PDF Downloads 92
993 Macroscopic Support Structure Design for the Tool-Free Support Removal of Laser Powder Bed Fusion-Manufactured Parts Made of AlSi10Mg

Authors: Tobias Schmithuesen, Johannes Henrich Schleifenbaum

Abstract:

The additive manufacturing process laser powder bed fusion (LPBF) offers many advantages over conventional manufacturing processes. For example, almost any complex part can be produced, such as topologically optimized lightweight parts, which would be inconceivable with conventional manufacturing processes. A major challenge posed by the LPBF process, however, is, in most cases, the need to use and remove support structures on critically inclined part surfaces (α < 45° relative to the substrate plate). These are mainly used for dimensionally accurate mapping of part contours and to reduce distortion by absorbing process-related internal stresses. Furthermore, they serve to transfer the process heat to the substrate plate and are, therefore, indispensable for the LPBF process. A major obstacle to the economical use of the LPBF process in industrial process chains is currently still the high manual effort involved in removing support structures. According to the state of the art (SoA), the parts are usually treated with simple hand tools (e.g., pliers, chisels) or by machining (e.g., milling, turning). New automatable approaches are the removal of support structures by means of wet chemical ablation and thermal deburring. According to the SoA, support structures are essentially adapted to the LPBF process and not to potential post-processing steps. The aim of this study is the determination of support structure designs that are adapted to the mentioned post-processing approaches. In the first step, the essential boundary conditions for complete removal by means of the respective approaches are identified. Afterward, a representative demonstrator part with various macroscopic support structure designs will be LPBF-manufactured and tested with regard to complete powder and support removability. Finally, based on the results, potentially suitable support structure designs for the respective approaches will be derived.
The investigations are carried out using the example of the aluminum alloy AlSi10Mg.

Keywords: additive manufacturing, laser powder bed fusion, laser beam melting, selective laser melting, post processing, tool-free, wet chemical ablation, thermal deburring, aluminum alloy, AlSi10Mg

Procedia PDF Downloads 90
992 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 49
991 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorism and extremism activities on the internet are becoming among the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. To achieve these measures, there is a need for intelligence gathering via the internet. This includes real-time monitoring of potential websites that are used for recruitment and information dissemination, among other operations, by extremist groups. However, with billions of active webpages, real-time monitoring of all webpages becomes almost impossible. To narrow down the search domain, there is a need for efficient webpage classification techniques. This research proposes a new approach, tagged the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7500 webpages obtained through the TENE-webcrawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, namely ‘pro-extremist’, ‘anti-extremist’ and ‘neutral’, with 2500 webpages in each category. A supervised learning algorithm was then applied to the classified dataset in order to build the model. The results obtained were compared with existing classification methods using prediction accuracy and runtime. It was observed that our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.
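The supervised step can be sketched with a generic three-class text pipeline; the tiny labelled snippets and TF-IDF features below are placeholders for the 7500 ICCRC webpages and the Posit/SentiStrength feature sets actually used:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled snippets standing in for crawled webpages
pages = [
    "join our cause and fight the enemy", "take up arms for the struggle",
    "we condemn all violence and extremism", "reject hatred and radical violence",
    "local weather forecast for the weekend", "recipe ideas for a family dinner",
]
labels = ["pro", "pro", "anti", "anti", "neutral", "neutral"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(pages, labels)                       # build the model on the labelled classes
print(clf.predict(["we denounce radical violence"]))
```

Prediction accuracy and runtime of such a pipeline are then compared against the baseline classifiers, as in the study.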

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 276
990 The Relationship between Body Positioning and Badminton Smash Quality

Authors: Gongbing Shan, Shiming Li, Zhao Zhang, Bingjun Wan

Abstract:

Badminton originated in ancient civilizations in Europe and Asia more than 2000 years ago. Presently, it is played almost everywhere, with an estimated 220 million people playing badminton regularly, ranging from professionals to recreational players; it is the second most played sport in the world after soccer. In Asia, the popularity of badminton and the involvement of people surpass soccer. Unfortunately, scientific research on badminton skills is hardly proportional to badminton’s popularity. A search of the literature has shown that the body of biomechanical investigations is relatively small. One of the dominant skills in badminton is the forehand overhead smash, which accounts for about one fifth of attacks during games. Empirical evidence shows that one has to adjust the body position in relation to the incoming shuttlecock to produce a powerful and accurate smash. Therefore, positioning is a fundamental aspect influencing smash quality. A search of the literature has shown that there is a dearth of studies on this fundamental aspect. The goals of this study were to determine the influence of positioning and training experience on smash quality in order to discover information that could help in learning/acquiring the skill. Using a 10-camera, 3D motion capture system (VICON MX, 200 frames/s) and a 15-segment, full-body biomechanical model, 14 skilled and 15 novice players were measured and analyzed. The results reveal that body positioning has a direct influence on the quality of a smash, especially on the shuttlecock release angle and the clearance height (passing over the net) of offensive players. The results also suggest that, for training proper positioning, one could adopt a self-selected comfortable position facing a statically hung shuttlecock and then step one foot back – a practical reference marker for learning. This perceptional marker could be applied in guiding the learning and training of beginners.
As one gains experience through repetitive training, improved limb coordination would further increase smash quality. The researchers hope that the findings will benefit practitioners in developing effective training programs for beginners.

Keywords: 3D motion analysis, biomechanical modeling, shuttlecock release speed, shuttlecock release angle, clearance height

Procedia PDF Downloads 497
989 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between the factors and performance, and to develop linear predictor models for time and cost. Methods: we applied the solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of input data scale on costs, although it notably impacts the execution time.
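The screening step of a two-level factorial design reduces to an ordinary least-squares fit on a ±1-coded design matrix. A noise-free sketch with three invented factors (the study itself screened five factors over 48 replicated clusters):

```python
import numpy as np
from itertools import product

# Full two-level design for three illustrative factors: data size, nodes, cores
design = np.array(list(product([-1.0, 1.0], repeat=3)))
# Hypothetical execution times generated from a known linear model (seconds)
y = 100.0 + 30.0 * design[:, 0] - 20.0 * design[:, 1] + 2.0 * design[:, 2]
X = np.column_stack([np.ones(len(design)), design])   # intercept + main effects
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))   # recovers [100, 30, -20, 2]: data size dominates, cores barely matter
```

Ranking the fitted coefficients by magnitude is exactly the screening-and-ranking step described above.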

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 118
988 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block Nv32, Shenvsi Oilfield, China

Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan

Abstract:

The significant effect of CO₂ on the global climate and the environment has attracted growing concern worldwide. Enhanced oil recovery (EOR) associated with sequestration of CO₂, particularly into depleted oil reservoirs, is considered a viable approach under financial limitations, since it improves oil recovery from the existing reservoir and strengthens the link between global-scale CO₂ capture and geological sequestration. Consequently, practical measurements are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model to estimate the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs from twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct the 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in block Nv32, Shenvsi oilfield. Well logs were used to predict petrophysical properties such as porosity and permeability, as well as lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone with proportions of 38.09%, 32.42%, and 29.49%, respectively. Well-log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios.
These estimated values of porosity, permeability, lithofacies, and net-to-gross ratio were upscaled and distributed laterally using the Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate the 3D reservoir geological models. The reservoir geological models show lateral heterogeneities of the reservoir properties and lithofacies, and the best reservoir rocks exist in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368
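The theoretical capacity behind such estimates is typically a volumetric product, M = A · h · NTG · φ · ρCO₂ · E. The sketch below uses entirely hypothetical inputs; the paper's actual volumetrics come from the 3D property models themselves:

```python
def co2_capacity_kg(area_m2, thickness_m, ntg, porosity, rho_co2_kg_m3, efficiency):
    """Volumetric CO2 storage estimate: M = A * h * NTG * phi * rho_CO2 * E (kg)."""
    return area_m2 * thickness_m * ntg * porosity * rho_co2_kg_m3 * efficiency

# Hypothetical block: 20 km^2 area, 25 m gross thickness, net-to-gross 0.6,
# porosity 15%, supercritical CO2 density 700 kg/m^3, storage efficiency 4%
mass = co2_capacity_kg(2.0e7, 25.0, 0.6, 0.15, 700.0, 0.04)
print(round(mass / 1e9, 2), "Mt")   # ~1.26 Mt for these assumed inputs
```

In practice each input carries a distribution rather than a single value, which is where the geological-uncertainty treatment enters.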

Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32

Procedia PDF Downloads 175
987 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign Languages (SL) are the most accomplished forms of gestural communication. Therefore, their automatic analysis is a real challenge, which is interestingly tied to their lexical and syntactic organization levels. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for the visual recognition of complex, structured hand gestures such as are found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMMs) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of eigendecomposition to non-square matrices, used here to reduce multi-attribute hand gesture data to feature vectors. SVD optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate Type-2 fuzzy operators that permit us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both the random and fuzzy uncertainties existing universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals and achieve better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
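The SVD feature-extraction idea can be sketched in a few lines: the leading singular values of the (generally non-square) hand-image matrix form a compact descriptor. This illustrates the general technique, not the paper's exact preprocessing:

```python
import numpy as np

def svd_features(image, k=10):
    """Top-k singular values of an image matrix, normalized into a feature vector."""
    s = np.linalg.svd(np.asarray(image, dtype=float), compute_uv=False)
    s = s[:k]                      # keep the dominant geometric structure
    return s / s.sum()             # normalize for scale invariance

rng = np.random.default_rng(1)
img = rng.random((64, 48))         # a stand-in for a segmented hand image
f = svd_features(img)
print(f.shape, round(float(f.sum()), 6))
```

Such k-dimensional vectors then serve as the observables fed to the HMM in both training and recognition.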

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 460
986 Feasibility Study of Particle Image Velocimetry in the Muzzle Flow Fields during the Intermediate Ballistic Phase

Authors: Moumen Abdelhafidh, Stribu Bogdan, Laboureur Delphine, Gallant Johan, Hendrick Patrick

Abstract:

This study is part of an ongoing effort to improve the understanding of phenomena occurring during the intermediate ballistic phase, such as muzzle flows. A thorough comprehension of muzzle flow fields is essential for optimizing muzzle device and projectile design. This flow characterization has heretofore been almost entirely limited to local and intrusive measurement techniques, such as pressure measurements using pencil probes. Consequently, the body of quantitative experimental data is limited, as is the number of numerical codes validated in this field. The objective of the work presented here is to demonstrate the applicability of the Particle Image Velocimetry (PIV) technique in the challenging environment of the propellant flow of a .300 Blackout weapon, in order to provide accurate velocity measurements. The key points of a successful PIV measurement are the selection of the particle tracers, their seeding technique, and their tracking characteristics. We have experimentally investigated the aforementioned points by evaluating the resistance, gas dispersion, and laser light reflection, as well as the response to a step change across the Mach disk, for five different solid tracers using two seeding methods. To this end, an experimental setup was developed, consisting of a PIV system, a combustion chamber pressure measurement, classical high-speed schlieren visualization, and an aerosol spectrometer. The latter is used to determine the particle size distribution in the muzzle flow. The experimental results demonstrated the ability of PIV to accurately resolve the salient features of the propellant flow, such as the underexpanded jet and vortex rings, as well as the instantaneous velocity field, with maximum centreline velocities of more than 1000 m/s. Furthermore, naturally present unburned particles in the gas, and solid ZrO₂ particles with a nominal size of 100 nm when coated on the propellant powder, are suitable as tracers. However, the TiO₂ particles intended to act as a tracer surprisingly not only melted but also functioned as a combustion accelerator and decreased the number of particles in the propellant gas.

Keywords: intermediate ballistic, muzzle flow fields, particle image velocimetry, propellant gas, particle size distribution, under expanded jet, solid particle tracers

Procedia PDF Downloads 160
985 Designing and Prototyping Permanent Magnet Generators for Wind Energy

Authors: T. Asefi, J. Faiz, M. A. Khan

Abstract:

This paper introduces dual-rotor axial flux machines with surface-mounted and spoke-type ferrite permanent magnets with concentrated windings as alternatives to a generator with surface-mounted Nd-Fe-B magnets. The output power, voltage, speed, and air gap clearance for all the generators are identical. The machine designs are optimized for minimum mass using a population-based algorithm, assuming the same efficiency as the Nd-Fe-B machine. A finite element analysis (FEA) is applied to predict the performance, EMF, developed torque, cogging torque, no-load losses, leakage flux, and efficiency of both ferrite generators and of the Nd-Fe-B generator. To minimize cogging torque, different rotor pole topologies and different pole-arc to pole-pitch ratios are investigated by means of 3D FEA. It was found that the surface-mounted ferrite generator topology is unable to develop the nominal electromagnetic torque, has higher torque ripple, and is heavier than the spoke-type machine. Furthermore, it was shown that the spoke-type ferrite permanent magnet generator has favorable performance and could be an alternative to rare-earth permanent magnet generators, particularly in wind energy applications. Finally, the analytical and numerical results are verified using experimental results.

Keywords: axial flux, permanent magnet generator, dual rotor, ferrite permanent magnet generator, finite element analysis, wind turbines, cogging torque, population-based algorithms

Procedia PDF Downloads 149
984 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform

Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park

Abstract:

Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structure, bearings, and motors called a gimbal, in which a gyroscope is installed. EO elements, such as a CCD camera and an IR camera, are mounted on the gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, a versatile target localization functionality has been needed in order to cooperate with the weapon systems mounted on the same platform, and the target information, such as its location or velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and alignment errors of each component. In particular, the type of moving platform can affect the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the elevation angle of the gimbal, since the altitude of the EO stabilized platform is relatively low. The farther the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle. This can hamper the precise acquisition of the target information. So far, there have been many studies on EO stabilized platforms for flying vehicles. However, few researchers have focused on ship-borne EO stabilized platforms. In this paper, we deal with a target localization method in which an EO stabilized platform is located on the mast of a surface ship. In particular, we need to overcome the limitation caused by the obliqueness of the elevation angle of the gimbal. We introduce a well-known approach for target localization using the Unscented Kalman Filter (UKF) and present a problem definition showing the above-mentioned limitation. Finally, we show the effectiveness of the approach through computer simulations.
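The obliqueness problem can be seen from flat-earth geometry alone: with a low mast, the line of sight to a distant target meets the sea surface at a very shallow depression angle, so a tiny angular error maps into a large range error — which is why the LRF and filtering matter. A sketch with assumed numbers, not the paper's error model:

```python
import math

def ground_range(mast_height_m, depression_deg):
    """Horizontal range at which the line of sight meets the sea surface (flat-earth sketch)."""
    return mast_height_m / math.tan(math.radians(depression_deg))

h = 20.0                                    # assumed mast height above the sea surface
nominal = ground_range(h, 0.23)             # ~5 km target seen at a very shallow angle
perturbed = ground_range(h, 0.23 + 0.05)    # a mere 0.05 deg angular error
print(round(nominal), round(nominal - perturbed))   # the range shifts by hundreds of metres
```

The same 0.05° error at a steep depression angle would shift the fix by only a few metres, which is exactly the asymmetry the abstract describes.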

Keywords: target localization, ship-borne electro-optical stabilized platform, unscented kalman filter

Procedia PDF Downloads 517
983 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This burgeoning field calls for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore," as the evaluation criterion. This proposed score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through the equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
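The scoring idea — folding labelled-source information into an entropy-based uncertainty criterion via the harmonic mean — can be sketched as follows; the exact components of the paper's CombinedScore may differ:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a model's predicted class probabilities."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def combined_score(ood_entropy, source_score):
    """Harmonic-mean combination of an OOD entropy score and a labelled-source score."""
    return 2.0 * ood_entropy * source_score / (ood_entropy + source_score)

confident = predictive_entropy([0.97, 0.01, 0.01, 0.01])   # low uncertainty
uncertain = predictive_entropy([0.25, 0.25, 0.25, 0.25])   # maximal uncertainty, ln 4
print(round(confident, 3), round(uncertain, 3))
print(combined_score(1.0, 3.0))   # the harmonic mean leans toward the smaller input
```

The harmonic mean penalizes a model that looks good on only one of the two criteria, since one small input drags the combined value down.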

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 123
982 Novel p22-Monoclonal Antibody Based Blocking ELISA for the Detection of African Swine Fever Virus Antibodies in Serum

Authors: Ghebremedhin Tsegay, Weldu Tesfagaber, Yuanmao Zhu, Xijun He, Wan Wang, Zhenjiang Zhang, Encheng Sun, Jinya Zhang, Yuntao Guan, Fang Li, Renqiang Liu, Zhigao Bu, Dongming Zhao*

Abstract:

African swine fever (ASF) is a highly infectious viral disease of pigs, resulting in significant economic losses worldwide. As there are no approved vaccines or treatments, the control of ASF depends entirely on early diagnosis and culling of infected pigs. Thus, highly specific and sensitive diagnostic assays are required for accurate and early diagnosis of ASF virus (ASFV). Currently, only a few recombinant proteins have been tested and validated for use as reagents in ASF diagnostic assays. The most promising ones for ASFV antibody detection are p72, p30, p54, and pp62. So far, three ELISA kits based on these recombinant proteins have been commercialized. Due to the complex nature of the virus and the varied forms of the disease, robust serodiagnostic assays are still required. The ASFV p22 protein, encoded by the KP177R gene, is located in the inner membrane of the viral particle and appears transiently in the plasma membrane early after virus infection. The p22 protein interacts with numerous cellular proteins involved in the processes of phagocytosis and endocytosis through different cellular pathways. However, p22 does not seem to be involved in virus replication or swine pathogenicity. In this study, E. coli-expressed recombinant p22 protein was used to generate a monoclonal antibody (mAb), and its potential use for the development of a blocking ELISA (bELISA) was evaluated. A total of 806 pig serum samples were tested to evaluate the bELISA. According to the receiver operating characteristic (ROC) analysis, a sensitivity of 100% and a specificity of 98.10% were recorded when the percent inhibition (PI) cut-off value was set at 47%. The novel assay was able to detect the antibodies as early as 9 days post infection. Finally, a highly sensitive, specific, and rapid novel p22-mAb-based bELISA assay was developed and optimized for the detection of antibodies against genotype I and II ASFVs. It is a promising candidate for the early and accurate detection of the antibodies and is highly expected to play a valuable role in the containment and prevention of ASF.
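Evaluating the assay at the chosen cut-off reduces to confusion-matrix counts on percent-inhibition (PI) readings; a sketch with invented sera (the actual validation used 806 samples and ROC analysis):

```python
def sens_spec(pi_values, infected, cutoff=47.0):
    """Sensitivity and specificity of a blocking ELISA at a given PI cut-off (%)."""
    tp = sum(pi >= cutoff and y for pi, y in zip(pi_values, infected))
    fn = sum(pi < cutoff and y for pi, y in zip(pi_values, infected))
    tn = sum(pi < cutoff and not y for pi, y in zip(pi_values, infected))
    fp = sum(pi >= cutoff and not y for pi, y in zip(pi_values, infected))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical PI readings (%) and true infection status
pi = [88.0, 72.5, 55.1, 49.0, 31.2, 12.4, 8.9, 50.3]
y = [True, True, True, True, False, False, False, False]
print(sens_spec(pi, y))   # (1.0, 0.75): one uninfected serum crosses the cut-off
```

An ROC analysis simply sweeps the cut-off over all observed PI values and picks the point that best trades sensitivity against specificity.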

Keywords: ASFV, blocking ELISA, diagnosis, monoclonal antibodies, sensitivity, specificity

Procedia PDF Downloads 75
981 Investigating Non-suicidal Self-Injury Discussions on Twitter

Authors: Muhammad Abubakar Alhassan, Diane Pennington

Abstract:

Social networking sites have become a space for people to discuss public health issues such as non-suicidal self-injury (NSSI). There are thousands of tweets containing self-harm and self-injury hashtags on Twitter. It is difficult to distinguish between the different users who participate in self-injury discussions on Twitter and how their opinions change over time. Also, it is challenging to understand the topics surrounding NSSI discussions on Twitter. We retrieved tweets using the #selfharm and #selfinjury hashtags and investigated those from the United Kingdom. We applied inductive coding and grouped tweeters into different categories. This study used the Latent Dirichlet Allocation (LDA) algorithm to infer the optimum number of topics that describes our corpus. Our findings revealed that many of those participating in NSSI discussions are non-professional users, as opposed to medical experts and academics. Support organisations, medical teams, and academics were campaigning positively on raising self-injury awareness and recovery. Using the LDAvis visualisation technique, we selected the top 20 most relevant terms from each topic and interpreted the topics as: children and youth well-being; self-harm misjudgement; mental health awareness; school and mental health support; and suicide and mental health issues. More than 50% of these topics were discussed in England compared to Scotland, Wales, Ireland, and Northern Ireland. Our findings highlight the advantages of using the Twitter social network in tackling the problem of self-injury through awareness. There is a need to study the potential risks associated with the use of social networks among self-injurers.
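Inferring an appropriate number of topics can be sketched by comparing candidate LDA models on perplexity (lower is better); the miniature corpus below is a placeholder for the retrieved UK tweets, and coherence scores with LDAvis-style inspection are an equally common criterion:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [  # hypothetical stand-ins for #selfharm / #selfinjury tweets
    "reaching out for support and recovery",
    "awareness campaign for mental health support",
    "school counsellors help young people cope",
    "talk to someone about how you feel today",
    "recovery is possible with the right help",
    "raising awareness of self injury in schools",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
scores = {}
for k in (2, 3, 4):                       # candidate topic counts
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    scores[k] = lda.perplexity(X)         # held-out data would be used in practice
best_k = min(scores, key=scores.get)
print(best_k, {k: round(v, 1) for k, v in scores.items()})
```

On a real corpus, the selected model's topic-term distributions are then inspected and labelled, as the study did with its five interpreted themes.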

Keywords: self-harm, non-suicidal self-injury, Twitter, social networks

Procedia PDF Downloads 131