Search results for: binary vector quantization (BVQ)


214 Low Field Microwave Absorption and Magnetic Anisotropy in TM Co-Doped ZnO System

Authors: J. Das, T. S. Mahule, V. V. Srinivasu

Abstract:

Electron spin resonance (ESR) study at 9.45 GHz and a field modulation frequency of 100 Hz was performed on bulk polycrystalline samples of Mn:TM (Fe/Ni) and Mn:RE (Gd/Sm) co-doped ZnO with composition Zn1-x(Mn:TM/RE)xO, synthesised by the solid state reaction route and sintered at 500 °C. The room temperature microwave absorption data, collected by sweeping the DC magnetic field from -500 to 9500 G, for the Mn:Fe and Mn:Ni co-doped ZnO samples exhibit a rarely reported non-resonant low field absorption (NRLFA) in addition to a strong absorption at around 3350 G, usually associated with ferromagnetic resonance (FMR) satisfying Larmor’s relation due to absorption in the fully saturated state. The observed low field absorption is distinct from ferromagnetic resonance even at low temperature and shows hysteresis. Interestingly, it is in opposite phase with respect to the main ESR signal of the samples, which indicates that the low field absorption has a minimum at zero magnetic field whereas the ESR signal has a maximum. The major resonance peak as well as the low field absorption peak are asymmetric, indicating magnetic anisotropy in the sample, normally associated with intrinsic ferromagnetism. The anisotropy parameter for the Mn:Ni co-doped ZnO sample is noticeably higher. The g values also support the presence of oxygen vacancies and clusters in the samples. These samples have shown room temperature ferromagnetism in SQUID measurements. However, in the rare earth (RE) co-doped samples (Zn1-x(Mn:Gd/Sm)xO), which show paramagnetic behavior at room temperature, the low field microwave signals are not observed. As microwave currents due to itinerant electrons can lead to ohmic losses inside the sample, we speculate that the more delocalized 3d electrons contributed by the TM dopants facilitate such microwave currents, leading to the loss and hence absorption at low field, which is also supported by the increase in current with increased microwave power. Besides, since Fe and Ni have intrinsic spin polarization with polarizability of around 45%, doping of Fe and Ni is expected to enhance spin polarization related effects in ZnO. We emphasize that in this case Fe and Ni doping contribute to a polarized current which interacts with the magnetization (spin) vector and gets scattered, giving rise to the absorption loss.

Keywords: co-doping, electron spin resonance, hysteresis, non-resonant microwave absorption

Procedia PDF Downloads 307
213 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection is focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis that comes before feature ranking. Indeed, after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound and combined with the search, or as the search alternative and replacement. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process since a large number of redundant features is eliminated at each step. The proposed algorithm uses the clustering algorithm bound and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Patterns (LBP) image histograms, on five color texture data sets, Outex, NewBarktex, Parquet, Stex and USPtex, demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
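
A minimal sketch of the clustering-guided sequential selection idea described above is given below; the class-correlation measure, the hierarchical clustering of features, the 1-NN classifier and the thresholds are illustrative assumptions, not the authors' exact algorithm or separability measure.

```python
# Sketch: feature clustering-based sequential forward selection (illustrative only).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def select_features(X, y, n_clusters=10, relevance_threshold=0.1, max_features=20):
    # Step 1: remove irrelevant features with a simple class-correlation measure.
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    kept = np.where(relevance > relevance_threshold)[0]

    # Step 2: cluster the remaining features on their pairwise correlation.
    corr = np.abs(np.corrcoef(X[:, kept], rowvar=False))
    dist = 1.0 - corr
    np.fill_diagonal(dist, 0.0)
    links = linkage(squareform(dist, checks=False), method="average")
    cluster_of = dict(zip(kept, fcluster(links, t=n_clusters, criterion="maxclust")))

    # Step 3: sequential forward search; once a feature is selected, the other
    # features of its cluster are discarded (redundancy elimination).
    candidates, selected = list(kept), []
    clf = KNeighborsClassifier(n_neighbors=1)
    while candidates and len(selected) < max_features:
        scores = [cross_val_score(clf, X[:, selected + [f]], y, cv=3).mean()
                  for f in candidates]
        best = candidates[int(np.argmax(scores))]
        selected.append(best)
        candidates = [f for f in candidates if cluster_of[f] != cluster_of[best]]
    return selected
```

Here X would hold one column per texture feature (e.g., Haralick or LBP-histogram features) and y the numeric texture class labels.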

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix

Procedia PDF Downloads 124
212 Transgressing Gender Norms in Addiction Treatment

Authors: Sara Matsuzaka

Abstract:

At the center of emerging policy debates on the rights of transgender individuals in public accommodations is the collision of gender binary views with transgender perspectives that challenge conventional gender norms. The results of such socio-political debates could have significant ramifications for the policies and infrastructures of public and private institutions nationwide, including within the addiction treatment field. Despite having disproportionately high rates of substance use disorder compared to the general population, transgender individuals experience significant barriers to engaging in addiction treatment programs. Inpatient addiction treatment centers were originally designed to treat heterosexual cisgender populations and, as such, feature gender-segregated housing, bathrooms, and counseling sessions. Such heteronormative structural barriers, combined with exposure to stigmatizing attitudes, may dissuade transgender populations from benefiting from the addiction treatment they so direly need. A literature review is performed to explore the mechanisms by which gender segregation alienates transgender populations within inpatient addiction treatment. The constituent parts of the current debate on the rights of transgender individuals in public accommodations are situated in the context of inpatient addiction treatment facilities. Minority Stress Theory is used as a theoretical framework for understanding substance abuse issues among transgender populations as a maladaptive behavioral response for coping with chronic stressors related to gender minority status and intersecting identities. The findings include that, despite having disproportionately high rates of substance use disorder compared to the general population, transgender individuals experience significant barriers to engaging in and benefiting from addiction treatment. These barriers take the form of anticipated or real interpersonal stigma and discrimination by service providers, and of structural stigma in the form of policy and programmatic components of addiction treatment that marginalize transgender populations. Transphobic manifestations within addiction treatment may dissuade transgender individuals from seeking help, if not reinforce a lifetime of stigmatizing experiences, potentially exacerbating their substance use issues. Conclusive recommendations for social workers and addiction treatment professionals include: (1) dismantling institutional policies around gender segregation that alienate transgender individuals, (2) developing policies that provide full protections for transgender clients against discrimination based on their gender identity, and (3) implementing trans-affirmative cultural competency training requirements for all staff. Directions for future research are provided.

Keywords: addiction treatment, gender segregation, stigma, transgender

Procedia PDF Downloads 201
211 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, quantum deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images, so it remains unclear how much better they are than classical approaches. Only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to grayscale and resized to 28 × 28 pixels, and 50,000 training and 10,000 test images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may further increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of their processing speed and accuracy when used to perform classification on colored classes.
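
A minimal sketch of the hybrid quantum feature-extraction step is shown below, using PennyLane's default.qubit simulator; the 2×2 patch size, the RY-rotation encoding and the entangling layer are assumptions made for illustration and are not the authors' exact QCNN circuit.

```python
# Sketch: quanvolution-style feature extraction on grayscale 28x28 images.
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def patch_circuit(pixels):
    # Encode a 2x2 patch (pixel values scaled to [0, 1]) as rotation angles.
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # Small entangling block acting as the "quantum filter".
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Expectation values become classical features for the downstream classifier.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def quanvolve(image):
    """Slide the 4-qubit circuit over a 28x28 grayscale image with stride 2."""
    out = np.zeros((14, 14, n_qubits))
    for r in range(14):
        for c in range(14):
            patch = image[2 * r:2 * r + 2, 2 * c:2 * c + 2].reshape(n_qubits)
            out[r, c] = patch_circuit(patch)
    return out
```

The resulting 14×14×4 feature maps would then be fed to a classical classifier, with the measurements carried out on the classical side as described above.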

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 116
210 Polymeric Composites with Synergetic Carbon and Layered Metallic Compounds for Supercapacitor Application

Authors: Anukul K. Thakur, Ram Bilash Choudhary, Mandira Majumder

Abstract:

In this technologically driven world, it is necessary to develop better, faster and smaller electronic devices for various applications to keep pace with fast-developing modern life. In addition, it is also required to develop sustainable and clean sources of energy in this era in which the environment is threatened by pollution and its severe consequences. The supercapacitor has gained tremendous attention in recent years because of its attractive properties, such as being essentially maintenance-free, having high specific power and power density, excellent pulse charge/discharge characteristics, a long cycle life, a very simple charging circuit, and safe operation. Binary and ternary composites of conducting polymers with carbon and other layered transition metal dichalcogenides have shown tremendous progress in the last few decades. Compared with the bulk conducting polymer, conducting polymer composites have these days gained more attention because of their high electrical conductivity, large surface area, short ion-transport length and superior electrochemical activity. These properties make them very suitable for several energy storage applications. On the other hand, carbon materials have also been studied intensively, owing to their large specific surface area, very light weight, excellent chemical-mechanical properties and wide operating temperature range. They have been extensively employed in the fabrication of carbon-based energy storage devices and also as electrode materials in supercapacitors. Incorporation of carbon materials into the polymers increases the electrical conductivity of the resulting polymeric composite due to the high electrical conductivity, high surface area and interconnectivity of the carbon. Further, polymeric composites based on layered transition metal dichalcogenides such as molybdenum disulfide (MoS2) are also considered important because they are thin indirect band gap semiconductors with a band gap of around 1.2 to 1.9 eV. Amongst the various 2D materials, MoS2 has received much attention because of its unique structure consisting of a graphene-like hexagonal arrangement of Mo and S atoms stacked layer by layer to give S-Mo-S sandwiches with weak van der Waals forces between them. It shows higher intrinsic fast ionic conductivity than oxides and higher theoretical capacitance than graphite.

Keywords: supercapacitor, layered transition-metal dichalcogenide, conducting polymer, ternary, carbon

Procedia PDF Downloads 244
209 The Presence of Investor Overconfidence in the South African Exchange Traded Fund Market

Authors: Damien Kunjal, Faeezah Peerbhai

Abstract:

Despite the increasing popularity of exchange-traded funds (ETFs), ETF investment choices may not always be rational. Excess trading volume, misevaluations of securities, and excess return volatility present in financial markets can be attributed to the influence of the overconfidence bias. Whilst previous research has explored the overconfidence bias in stock markets, this study focuses on trading in ETF markets. Therefore, the objective of this study is to investigate the presence of investor overconfidence in the South African ETF market. Using vector autoregressive models, the lead-lag relationship between market turnover and the market return is examined for the market of South African ETFs tracking domestic benchmarks and for the market of South African ETFs tracking international benchmarks over the period from November 2000 to August 2019. Consistent with the overconfidence hypothesis, a positive relationship between current market turnover and lagged market return is found for both markets, even after controlling for market volatility and cross-sectional dispersion. This relationship holds for both market and individual ETF turnover, suggesting that investors are overconfident when trading in South African ETFs tracking domestic benchmarks and in South African ETFs tracking international benchmarks, since trading activity depends on past market returns. Additionally, using the global recession as a structural break, this study finds that investor overconfidence is more pronounced after the global recession, suggesting that investors perceive ETFs as risk-reducing assets due to their diversification benefits. Overall, the results of this study indicate that the overconfidence bias has a significant influence on ETF investment choices, therefore suggesting that the South African ETF market is inefficient since investors’ decisions are based on their biases. As a result, the effect of investor overconfidence can account for the difference between the fair value of ETFs and their current market prices. This finding has implications for policymakers whose responsibility is to promote the efficiency of the South African ETF market, as well as for ETF investors and traders who trade in the South African ETF market.
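
Below is a minimal sketch of the lead-lag test with a vector autoregression in statsmodels; the file name, column names and maximum lag order are assumptions for illustration, not the authors' exact specification.

```python
# Sketch: does lagged market return help explain current market turnover?
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("za_etf_market.csv", parse_dates=["date"], index_col="date")
data = df[["turnover", "return", "volatility", "dispersion"]].dropna()

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")   # lag order chosen by AIC

# Overconfidence hypothesis: coefficients on lagged returns in the turnover
# equation are positive and jointly significant.
print(results.params["turnover"])
print(results.test_causality("turnover", ["return"], kind="f").summary())
```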

Keywords: exchange-traded fund, market return, market turnover, overconfidence, trading activity

Procedia PDF Downloads 155
208 Impact of Import Restriction on Rice Production in Nigeria

Authors: C. O. Igberi, M. U. Amadi

Abstract:

This research paper on the impact of import restriction on rice production in Nigeria is aimed at proffering valid solutions to the age-old problem of rice self-sufficiency through a better understanding of policy measures used in the past, in this case, the effectiveness of the rice import restriction of the early 90’s. It tries to answer the questions of whether import restriction boosts domestic rice production and which macroeconomic factors determine Gross Domestic Rice Product (GDRP). The research questions are investigated through literature and analytical frameworks: time series data on the GDRP, Gross Fixed Capital Formation (GFCF), average foreign rice producers’ prices (PPF), domestic producers’ prices (PPN) and the labour force (LABF) are collated for analysis (with an import restriction dummy variable, POL1). The research objectives/hypotheses are analysed using cointegration, a Vector Error Correction Model (VECM), Impulse Response Functions (IRF) and Granger Causality Tests (GCT). Results show that in the short-run error correction specification for GDRP, a one percent (1%) deviation away from the long-run equilibrium in a current quarter is only corrected by 0.14% in the subsequent quarter. Also, the rice import restriction policy had no significant effect on the GDRP over this period. Other findings show that the policy period has, in fact, had effects on the PPN and LABF. The chosen variables are valid macroeconomic factors that explain the GDRP of Nigeria, as deduced from the IRF and GCT, and in the long run. Policy recommendations suggest that import restriction is not disqualified as a veritable tool for improving domestic rice production; rather, better enforcement procedures and strict adherence to the policy dictates are needed. Furthermore, accompanying policies which drive public and private capital investment and accumulation must be introduced. Also, the employment rate and labour substitution in the agricultural sector should not be drastically changed; rather, their welfare and efficiency should be improved.
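
A minimal sketch of the VECM, Granger-causality and impulse-response steps with statsmodels is given below; the file name, quarterly frequency, lag order and cointegration rank are illustrative assumptions.

```python
# Sketch: VECM for GDRP with an import-restriction policy dummy as exogenous regressor.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

df = pd.read_csv("nigeria_rice_quarterly.csv", index_col="quarter")
endog = df[["GDRP", "GFCF", "PPF", "PPN", "LABF"]]
exog = df[["POL1"]]                      # import-restriction dummy (1 during restriction)

res = VECM(endog, exog=exog, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print(res.alpha)                         # speed-of-adjustment (error-correction) terms
print(res.test_granger_causality(caused="GDRP", causing=["PPN", "LABF"]).summary())

irf = res.irf(8)                         # impulse responses, 8 quarters ahead
# irf.plot() would display the response paths of each variable to each shock
```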

Keywords: import restriction, gross domestic rice production, cointegration, VECM, Granger causality, impulse response function

Procedia PDF Downloads 193
207 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption, and GDP for Turkey: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Turkey, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips–Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. All the variables in this study show very strong significant long-term effects on GDP in the country. The long-run equilibrium in the VECM suggests negative long-run causalities from the consumption of petroleum products and the direct combustion of crude oil, coal and natural gas to GDP. Conversely, positive impacts of CO2 emissions and electricity consumption on GDP are found to be significant in Turkey during the period. There exists a short-run bidirectional relationship between electricity consumption and natural gas consumption: a positive unidirectional causality runs from electricity consumption to natural gas consumption, while a negative unidirectional causality runs from natural gas consumption to electricity consumption. Moreover, GDP has a negative effect on electricity consumption in Turkey in the short run. Overall, the results support arguments that there are relationships among environmental quality, energy use and economic output, but the associations can differ by the source of energy in the case of Turkey over the period 1980-2010.
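
A minimal sketch of the unit-root and cointegration steps is shown below, using the Phillips-Perron test from the arch package and the Johansen test from statsmodels; the file and variable names and the lag setting are assumptions for illustration.

```python
# Sketch: PP unit-root tests and Johansen cointegration test for the Turkish series.
import pandas as pd
from arch.unitroot import PhillipsPerron
from statsmodels.tsa.vector_ar.vecm import coint_johansen

df = pd.read_csv("turkey_1980_2010.csv", index_col="year")
series = df[["GDP", "CO2", "oil", "coal", "gas", "electricity"]]

# Phillips-Perron test for a unit root in each series (levels).
for name, s in series.items():
    print(name, round(PhillipsPerron(s.dropna()).pvalue, 3))

# Johansen maximum-likelihood test for the number of cointegrating relations.
jres = coint_johansen(series, det_order=0, k_ar_diff=1)
print("trace statistics:   ", jres.lr1)
print("5% critical values: ", jres.cvt[:, 1])
```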

Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis

Procedia PDF Downloads 503
206 Genotypic Identification of Oral Bacteria Using 16S rRNA in Children with and without Early Childhood Caries in Kelantan, Malaysia

Authors: Zuliani Mahmood, Thirumulu Ponnuraj Kannan, Yean Yean Chan, Salahddin A. Al-Hudhairy

Abstract:

Caries is the most common childhood disease; it develops due to disturbances in the physiological equilibrium of the dental plaque, resulting in demineralization of tooth structures. Plaque and dentine samples were collected from three different tooth surfaces representing caries progression (intact, over the carious lesion, and dentine) in children with early childhood caries (ECC, n=36). In caries-free (CF) children, plaque samples were collected from sound tooth surfaces at baseline and after one year (n=12). The genomic DNA was extracted from all samples and subjected to 16S rRNA PCR amplification. The end products were cloned into the pCR®2.1-TOPO® vector. Five randomly selected positive clones collected from each surface were sent for sequencing. Identification of the bacterial clones was performed using BLAST against the GenBank database. In the ECC group, the frequency of Lactobacillus sp. detected was significantly higher in the dentine (p = 0.031) than over the cavitated lesion. The highest frequency of bacteria detected in the intact surfaces was Fusobacterium nucleatum subsp. polymorphum (33.3%), while Streptococcus mutans was detected over the carious lesions and dentine surfaces at frequencies of 33.3% and 52.7%, respectively. Fusobacterium nucleatum subsp. polymorphum was also found to be highest in the CF group (41.6%). Follow-up at the end of one year showed that the frequency of Corynebacterium matruchotii detected was highest in those who remained caries free (16.6%), while Porphyromonas catoniae was highest in those who developed caries (25%). In conclusion, Streptococcus mutans and Porphyromonas catoniae are strongly associated with caries progression, while Lactobacillus sp. is restricted to deep carious lesions. Fusobacterium nucleatum subsp. polymorphum and Corynebacterium matruchotii may play a role in sustaining the healthy equilibrium in the dental plaque. These identified bacteria show promise as potential biomarkers in diagnosis, which could help in the management of dental caries in children.

Keywords: early childhood caries, genotypic identification, oral bacteria, 16S rRNA

Procedia PDF Downloads 270
205 The Effect of a Probiotic Diet on htauE14 in a Rodent Model of Alzheimer’s Disease

Authors: C. Flynn, Q. Yuan, C. Reinhardt

Abstract:

Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder affecting broad areas of the cerebral cortex and hippocampus. More than 95% of AD cases are sporadic AD, in which both genetic and environmental risk factors play a role. The main pathological features of AD include the widespread deposition of amyloid-beta and neurofibrillary tau tangles in the brain. The earliest brain pathology related to AD has been defined as hyperphosphorylated soluble tau in the noradrenergic locus coeruleus (LC) neurons, as characterized by Braak. However, the cause of this pathology and the ultimate progression of AD are not understood. Increasing research points to a connection between the gut microbiota and the brain, and mounting evidence has shown that there is a bidirectional interaction between the two, known as the gut-brain axis. This axis can allow for bidirectional movement of neuroinflammatory cytokines and pathogenic misfolded proteins, as seen in AD. Prebiotics and probiotics have been shown to have a beneficial effect on gut health and can strengthen the gut barrier as well as the blood-brain barrier, preventing the spread of these pathogens across the gut-brain axis. Our laboratory has recently established a pretangle tau rat model, in which we selectively express pseudo-phosphorylated human tau (htauE14) in the LC neurons of TH-Cre rats. LC htauE14 produced pathological changes in rats resembling those of preclinical AD pathology (reduced olfactory discrimination and LC degeneration). In this work, we will investigate the effects of pre/probiotic ingestion on AD behavioral deficits, blood inflammation/cytokines, and various brain markers in our experimental rat model of AD. An adeno-associated viral vector containing a human tau gene pseudophosphorylated at 14 sites (common in LC pretangles) will be infused into 2-3-month-old TH-Cre rats. Fecal and blood samples will be taken before surgery and at various post-surgery time points. A collection of behavioral tests will be performed, and immunohistochemistry/western blotting techniques will be used to observe various biomarkers. This work aims to elucidate the relationship between gut health and AD progression by strengthening the gut-brain relationship, and to observe the overall effect on tau formation and tau pathology in AD brains.

Keywords: alzheimer’s disease, aging, gut microbiome, neurodegeneration

Procedia PDF Downloads 129
204 Risk Factors Associated with Dengue Fever Outbreak in Diredawa Administration City, Ethiopia, October 2015: A Case Control Study

Authors: Luna Degife, Desalegn Belay, Yoseph Worku, Tigist Tesfaye, Assefa Tufa, Abyot Bekele, Zegeye Hailemariam, Abay Hagos

Abstract:

Half of the world’s population is at risk of Dengue Fever (DF), a highly under-recognized and underreported mosquito-borne viral disease with high prevalence in tropical and subtropical regions. Globally, an estimated 50 to 200 million cases and 20,000 DF deaths occur annually as per the World Health Organization report. In Ethiopia, the first outbreak occurred in 2013 in Diredawa administration city. Afterward, three outbreaks were reported from the eastern part of the country. We received a report of the fifth DF outbreak for Ethiopia, and the second for Diredawa city, on October 4, 2015. We conducted the investigation to confirm the outbreak, identify the risk factors for the repeated occurrence of the disease and implement control measures. We conducted an unmatched case-control study and defined a suspected DF case as any person with fever of 2-7 days and 2 or more of the following: headache, arthralgia, myalgia, rash, or bleeding from any part of the body. Controls were residents of Diredawa city without DF symptoms. We interviewed 70 cases and 140 controls from all health facilities in Diredawa city from October 7 to 15, 2015. Epi Info version 7.1.5.0 was used to analyze the data, and multivariable logistic regression was conducted to assess risk factors for DF. Sixty-nine blood samples were collected for laboratory confirmation. The mean age (±SD) was 23.7±9.5 years for cases and 31.2±13 years for controls. Close contact with a DF patient (adjusted odds ratio (AOR)=5.36, 95% confidence interval (CI): 2.75-10.44), non-use of long-lasting insecticidal nets (AOR=2.74, 95% CI: 1.06-7.08) and availability of stagnant water in the village (AOR=3.61, 95% CI: 1.31-9.93) were independent risk factors associated with higher rates of the disease. Forty-two samples tested positive. Endemicity of DF is becoming a concern for Diredawa city after the first outbreak. Therefore, effective vector control activities need to be part of long-term preventive measures.

Keywords: dengue fever, Diredawa, outbreak, risk factors, second

Procedia PDF Downloads 266
203 Periplasmic Expression of Anti-RoxP Antibody Fragments in Escherichia coli

Authors: Caspar S. Carson, Gabriel W. Prather, Nicholas E. Wong, Jeffery R. Anton, William H. McCoy

Abstract:

Cutibacterium acnes is a commensal bacterium found on human skin that has been linked to acne. C. acnes can also be an opportunistic pathogen when it infiltrates the body during surgery. This pathogen can cause dangerous infections of medical implants, such as shoulder replacements, leading to life-threatening blood infections. Compounding this issue, C. acnes resistance to many antibiotics has become an increasing problem worldwide, creating a need for special forms of treatment. C. acnes expresses the protein RoxP, and it requires this protein to colonize human skin. Though this protein is required for C. acnes skin colonization, its function is not yet understood. Inhibition of RoxP function might be an effective treatment for C. acnes infections. To develop such reagents, the McCoy Laboratory generated four unique anti-RoxP antibodies. Preliminary studies in the McCoy Lab have established that each antibody binds a distinct site on RoxP. To assess the potential of these antibodies as therapeutics, it is necessary to specifically characterize these antibody epitopes and evaluate them in assays that assess their ability to inhibit RoxP-dependent C. acnes growth. To provide material for these studies, an antibody expression construct, Fv-clasp(v2), was adapted to encode anti-RoxP antibody sequences. The author hypothesizes that this expression strategy can produce sufficient amounts of >95% pure antibody fragments for further characterization of these antibodies. Four anti-RoxP Fv-clasp(v2) expression constructs (pET vector-based) were transformed into E. coli BL21-Gold(DE3) cells and a small-scale expression and purification trial was performed for each construct to evaluate anti-RoxP Fv-clasp(v2) yield and purity. Successful expression and purification of these antibody constructs will allow for their use in structural studies, such as protein crystallography and cryogenic electron microscopy. Such studies would help to define the antibody binding sites on RoxP, which could then be leveraged in the development of certain methods to treat C. acnes infection through RoxP inhibition.

Keywords: structural biology, protein expression, infectious disease, antibody, therapeutics, E. coli

Procedia PDF Downloads 50
202 Production and Purification of Salmonella Typhimurium MisL Autotransporter Protein in Escherichia coli

Authors: Neslihan Taskale Karatug, Mustafa Akcelik

Abstract:

Some literature data show that the MisL protein plays a role in the host immune response formed against Salmonella Typhimurium. The aim of the present study is to determine the role of the protein in S. Typhimurium pathogenicity. To describe certain functions of the protein, the recombinant MisL protein was first produced and purified. PCR was performed using a primer set targeting the passenger domain of the misL gene on the S. Typhimurium LT2 genome. The amplicon and the pET28a vector were enzymatically cleaved with EcoRI and NheI. The digested DNA was purified with the High Pure PCR Product Purification Kit. The ligation reaction was carried out with the purified products. After preparation of competent Escherichia coli DH5α, the ligation mix was transformed into the cells by electroporation. To confirm the presence of the insert gene, recombinant plasmid DNA of DH5α was isolated with a high pure plasmid DNA kit. Once the recombinant plasmid was verified, it was electroporated into BL21. The cells were induced with IPTG. After induction, the presence of the recombinant protein was checked by SDS-PAGE. The recombinant MisL protein was purified using a HisPur Ni-NTA spin column. The pure protein was visualized by SDS-PAGE and western blot immunoassay. The concentration of the protein was measured with a BCA Protein Assay kit. Following ligation of the digested products (the 2 kb misL fragment and the 5.4 kb pET28a vector), a band of about 7.4 kb was visualized on the gel, and the construct was named pNT01. The pNT01 recombinant plasmid was transformed into DH5α and colonies were selected on selective medium. Plasmid DNA isolation from them was carried out. PCR performed on pNT01 to check for misL showed a 2 kb band on the agarose gel. After electroporation of the plasmid and induction of the cells, the 68 kDa MisL protein was observed. After purification of the protein, only a single band was observed on SDS-PAGE. Association of the pure protein with an anti-His antibody was verified by the western blot assay. The concentration of the pure MisL protein was determined to be 345 μg/mL. As a next step, polyclonal antibodies will be produced using the obtained pure recombinant MisL protein. The role of the protein in the immune response will then be elucidated through further assays.

Keywords: cloning, Escherichia coli, recombinant protein purification, Salmonella Typhimurium

Procedia PDF Downloads 379
201 Analysis of the Savings Behaviour of Rice Farmers in Tiaong, Quezon, Philippines

Authors: Angelika Kris D. Dalangin, Cesar B. Quicoy

Abstract:

Rice farming is a major source of livelihood and employment in the Philippines, but it requires a substantial amount of capital. Capital may come from income (farm, non-farm, and off-farm), savings and credit. However, rice farmers suffer from a lack of capital due to high costs of inputs and low productivity. Capital insufficiency, coupled with low productivity, hinders them from meeting their basic household and production needs. Hence, they resort to borrowing money, mostly from informal lenders who charge very high interest rates. As another source of capital, savings can help rice farmers meet their basic needs for both the household and the farm. However, information is inadequate on whether the farmers save or not, as well as on why they do not depend on savings to augment their lack of capital. Thus, it is worth analyzing how rice farmers save. The study revealed, using actual savings, which is the difference between household income and expenditure, that about three-fourths (72%) of the farmers interviewed are savers. However, when they were asked whether they are savers or not, more than half of them considered themselves non-savers. This gap shows that there are many farmers who think that they do not have savings at all; hence they continue to borrow money and do not depend on savings to augment their lack of capital. The study also identified the forms of savings, saving motives, and savings utilization among rice farmers. Results revealed that, for the past 12 months, most of the farmers saved cash at home for liquidity purposes, while others deposited cash in banks and/or saved in the form of livestock. Among the most important reasons of farmers for saving are daily household expenses, building a house, emergencies, retirement, and the next production cycle. Furthermore, the study assessed the factors affecting the rice farmers’ savings behaviour using logistic regression. Results showed that the significant factors were the presence of non-farm income, per capita net farm income, and per capita household expenses. The presence of non-farm income and per capita net farm income positively affect the farmers’ savings behaviour; per capita household expenses, on the other hand, have a negative effect. The effects of per capita net farm income and household expenses are, however, negligible, as they change the probability that a farmer is a saver only very slightly. Generally, income and expenditure proved to be significant factors that affect the savings behaviour of the rice farmers. However, most farmers could not save regularly due to low farm income and high household and farm expenditures. Thus, it is highly recommended that the government develop programs or implement policies that will create more jobs for the farmers and their family members. In addition, programs and policies should be implemented to increase farm productivity and income.
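
A minimal sketch of such a binary logit specification with statsmodels is shown below; the file and variable names are assumptions for illustration only.

```python
# Sketch: binary logit of saver status on income and expenditure variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tiaong_rice_farmers.csv")      # saver coded 1, non-saver 0
model = smf.logit("saver ~ has_nonfarm_income + farm_income_pc + hh_expense_pc",
                  data=df).fit()
print(model.summary())
print(np.exp(model.params))                       # odds ratios for each factor
```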

Keywords: agricultural economics, agricultural finance, binary logistic regression, logit, Philippines, Quezon, rice farmers, savings, savings behaviour

Procedia PDF Downloads 220
200 Kinematic Analysis of Human Gait for Typical Postures of Walking, Running and Cart Pulling

Authors: Nupur Karmaker, Hasin Aupama Azhari, Abdul Al Mortuza, Abhijit Chanda, Golam Abu Zakaria

Abstract:

Purpose: The purpose of gait analysis is to determine the biomechanics of the joints, the phases of the gait cycle, the graphical and analytical analysis of the degree of rotation, the electrical activity of the muscles, and the force exerted on the hip joint during different forms of locomotion: walking, running and cart pulling. Methods and Materials: Visual gait analysis and electromyography were used to detect the degree of rotation of the joints and the electrical activity of the muscles. In the cinematography method, an object is observed from different sides and its video is recorded. The cart pulling sequence was divided into frames with respect to time using video splitter software. The phases of the gait cycle, degrees of joint rotation, EMG profiles and force analysis during walking and running were taken from published papers. The gait cycle and degrees of joint rotation during cart pulling were obtained using a video camera, a stop watch, video splitter software and Microsoft Excel. Results and Discussion: During cart pulling, the force exerted on the hip is the resultant of several forces. The force on the hip is the vector sum of the force Fg = mg, due to the body weight of the person, and Fa = ma, due to the acceleration (change of velocity). The maximum stance phase is observed during cart pulling and the minimum during running. The maximum degree of rotation is observed at the hip joint during cart pulling, at the knee during running, and at the ankle during cart pulling. The minimum degree of rotation of the hip is observed during walking and of the ankle during running. During cart pulling, the dynamic force depends on the walking velocity, body weight and load weight. Conclusions: 80% of people suffer from gait-related diseases as they age. Proper care should be taken during cart pulling. It would be beneficial to establish a gait laboratory to determine gait-related diseases. If the way of cart pulling is changed, i.e., the design of the cart pulling machine and the load bearing system, then it would be possible to reduce the risk of limb loss, flat foot syndrome and varicose veins in the lower limb.
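
A minimal numerical illustration of the resultant hip force described above (the vector sum of Fg = mg and Fa = ma) is given below; the mass, load and acceleration values are assumptions chosen only to show the calculation.

```python
# Sketch: resultant force during cart pulling as a simple 2D vector sum.
import numpy as np

m = 70.0 + 150.0            # assumed body mass plus cart load (kg)
g = 9.81                    # gravitational acceleration (m/s^2)
a = np.array([0.4, 0.0])    # assumed forward acceleration while pulling (m/s^2)

Fg = np.array([0.0, -m * g])   # weight, acting vertically downward
Fa = m * a                     # inertial term from the change in velocity
F = Fg + Fa                    # resultant force transmitted through the hip

print("resultant force (N):", F, "| magnitude (N):", round(np.linalg.norm(F), 1))
```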

Keywords: kinematic, gait, gait lab, phase, force analysis

Procedia PDF Downloads 571
199 Standard Essential Patents for Artificial Intelligence Hardware and the Implications For Intellectual Property Rights

Authors: Wendy de Gomez

Abstract:

Standardization is a critical element in the ability of a society to reduce uncertainty, subjectivity, misrepresentation, and interpretation while simultaneously contributing to innovation. Technological standardization is critical to codify specific operationalization through legal instruments that provide rules of development, expectation, and use. In the current emerging technology landscape, Artificial Intelligence (AI) hardware, as a general-use technology, has seen incredible growth, as evidenced by AI technology patents between 2012 and 2018 in the United States Patent and Trademark Office (USPTO) AI dataset. However, as outlined in the 2023 United States Government National Standards Strategy for Critical and Emerging Technology, the codification through standardization of emerging technologies such as AI has not kept pace with their actual technological proliferation. This gap has the potential to cause significantly divergent outcomes for AI in both the short and long term. This original empirical research provides an overview of the standardization efforts around AI in different geographies and provides a background to standardization law. It quantifies the longitudinal trend of Artificial Intelligence hardware patents through the USPTO AI dataset. It seeks evidence of existing Standard Essential Patents (SEPs) among these AI hardware patents through a text analysis of the statement of patent history and the field of the invention of these patents in Patent Vector, and examines their determination as Standard Essential Patents and their inclusion in existing AI technology standards across the four main AI standards bodies: the European Telecommunications Standards Institute (ETSI); the International Telecommunication Union Telecommunication Standardization Sector (ITU-T); the Institute of Electrical and Electronics Engineers (IEEE); and the International Organization for Standardization (ISO). Once the analysis is complete, the paper discusses both the theoretical and operational implications of F/RAND licensing agreements for the owners of these Standard Essential Patents in the United States court and administrative system. It concludes with an evaluation of how Standard Setting Organizations (SSOs) can work with SEP owners more effectively through various forms of intellectual property mechanisms such as patent pools.
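
A minimal sketch of the patent-text screening step is shown below; the keyword list, the CSV export and the column names are assumptions for illustration and do not reproduce the actual Patent Vector workflow.

```python
# Sketch: flag AI hardware patents whose text fields suggest standard-essentiality.
import pandas as pd

SEP_SIGNALS = ["standard essential", "ETSI", "ITU-T", "IEEE", "ISO/IEC",
               "3GPP", "FRAND"]

patents = pd.read_csv("uspto_ai_hardware_patents.csv")        # assumed export
text = (patents["field_of_invention"].fillna("") + " " +
        patents["statement_of_history"].fillna(""))

patents["sep_candidate"] = text.str.contains("|".join(SEP_SIGNALS), case=False)
trend = patents.groupby("grant_year")["sep_candidate"].sum()
print(trend)    # longitudinal count of SEP-candidate AI hardware patents per year
```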

Keywords: patents, artificial intelligence, standards, F/RAND agreements

Procedia PDF Downloads 75
198 Preschoolers’ Selective Trust in Moral Promises

Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu

Abstract:

Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as in their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. The characteristics of speakers can influence children’s trust judgments. According to Mayer et al.’s model of trust, these characteristics, including ability, benevolence, and integrity, can influence children’s trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals’ adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children’s trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05 and 1-β = 0.85, using G*Power 3.1. This study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (within-subjects factor: moral vs. immoral promises) and the fulfilment of promises (within-subjects factor: kept vs. broken promises) on children’s trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving keeping or breaking moral/immoral promises, in order to investigate children’s trust judgments. Experiment 2 utilized single choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust. The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5-6.5-year-old children were more likely than the 3.5-4.5-year-old children to trust both promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises compared to those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers’ degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgments about moral promises among preschoolers aged 3.5-6.5 years. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
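
A minimal sketch reproducing this type of a-priori power analysis in Python rather than G*Power is shown below; treating it as a one-degree-of-freedom chi-square goodness-of-fit test is an assumption made for illustration.

```python
# Sketch: required sample size for w = 0.30, alpha = 0.05, power = 0.85.
from statsmodels.stats.power import GofChisquarePower

n = GofChisquarePower().solve_power(effect_size=0.30, alpha=0.05,
                                    power=0.85, n_bins=2)   # n_bins=2 -> df = 1
print(round(n))   # approximately 100 participants
```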

Keywords: promise, trust, moral judgement, preschoolers

Procedia PDF Downloads 41
197 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia

Authors: Yonas Shuke Kitawa

Abstract:

Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that represents the spatial correlation by treating the areal units as the vertices of a graph and the neighbor relations as its edges. Furthermore, we used aggregated malaria counts from southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with the malaria threat in the area. On the other hand, the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum. Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of either species were observed in districts located in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
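
A minimal sketch of representing the areal units as graph vertices and deriving the binary neighborhood (weight) matrix from the edge set is shown below; the district names and edges are assumptions, and the optimization over candidate edge sets is not reproduced.

```python
# Sketch: build the binary neighborhood matrix W used by a CAR prior from a graph.
import numpy as np
import networkx as nx

districts = ["District A", "District B", "District C", "District D", "District E"]  # assumed
edges = [("District A", "District B"), ("District B", "District C"),
         ("District C", "District D"), ("District B", "District E")]                # assumed neighbors

G = nx.Graph()
G.add_nodes_from(districts)
G.add_edges_from(edges)

W = nx.to_numpy_array(G, nodelist=districts)   # W[i, j] = 1 if districts i and j are neighbors
D = np.diag(W.sum(axis=1))                     # number of neighbors of each district
print(W)
```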

Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weighting matrix

Procedia PDF Downloads 64
196 Oligoalkylamine Modified Poly(Amidoamine) Generation 4.5 Dendrimer for the Delivery of Small Interfering RNA

Authors: Endris Yibru Hanurry, Wei-Hsin Hsu, Hsieh-Chih Tsai

Abstract:

In recent years, the discovery of small interfering RNAs (siRNAs) has received great attention for the treatment of cancer and other diseases. However, the therapeutic efficacy of siRNAs faces many drawbacks because of their short half-life in blood circulation, poor membrane penetration, weak endosomal escape and inadequate release into the cytosol. To overcome these drawbacks, we designed a non-viral vector by conjugating the polyamidoamine generation 4.5 dendrimer (PDG4.5) with diethylenetriamine (DETA) and tetraethylenepentamine (TEPA), followed by binding with siRNA to form polyplexes through electrostatic interaction. The results of 1H nuclear magnetic resonance (NMR), 13C NMR, correlation spectroscopy, heteronuclear single-quantum correlation spectroscopy, and Fourier transform infrared spectroscopy confirmed the successful conjugation of DETA and TEPA with PDG4.5. Then, the size, surface charge, morphology, binding ability, stability, release behaviour, toxicity and cellular internalization were analyzed to explore the physicochemical and biological properties of the PDG4.5-DETA and PDG4.5-TEPA polyplexes at specific N/P ratios. The polyplexes (N/P = 8) exhibited spherical nanosized particles (125 and 85 nm) with optimum surface charges (13 and 26 mV), showed strong siRNA binding ability, protected the siRNA against enzyme digestion and showed acceptable biocompatibility toward HeLa cells. Qualitatively, fluorescence microscopy images revealed the delocalization (Manders’ coefficient 0.63 and 0.53 for PDG4.5-DETA and PDG4.5-TEPA, respectively) of the polyplexes and the translocation of the siRNA throughout the cytosol, demonstrating decent cellular internalization and intracellular biodistribution of the polyplexes in HeLa cells. Quantitatively, the flow cytometry results indicated that a significant (P < 0.05) amount of siRNA was internalized by cells treated with the PDG4.5-DETA (68.5%) and PDG4.5-TEPA (73%) polyplexes. Generally, PDG4.5-DETA and PDG4.5-TEPA were ideal nanocarriers of siRNA in vitro and might be used as promising candidates for in vivo studies and future pharmaceutical applications.

Keywords: non-viral carrier, oligoalkylamine, poly(amidoamine) dendrimer, polyplexes, siRNA

Procedia PDF Downloads 119
195 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering

Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher

Abstract:

Land-cover and land-use change information are important because of their practical uses in various applications, including deforestation, damage assessment, disaster monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detection of change through optical remote sensing images is not a trivial task due to many factors, including the vagueness of the boundaries between changed and unchanged regions and the spatial dependence of the pixels on their neighborhood. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most optical remote sensing images, the transition between the two clusters (change and no change) is overlapping, and the existing methods are incapable of providing accurate cluster boundaries. In this regard, a methodology has been proposed which uses fuzzy c-means clustering to tackle the problem of vagueness between the changed and unchanged classes by formulating soft boundaries between them. Furthermore, in order to exploit the neighborhood information of the pixels, the input patterns are generated for each pixel from the bi-temporal images using 3×3, 5×5 and 7×7 windows. The between-image and within-image spatial dependence of the pixels on their neighborhood is quantified using the Pearson product-moment correlation and Moran’s I statistic, respectively. The proposed technique consists of two phases. First, the between-image and within-image spatial correlations are calculated to utilize the information that the pixels at different locations may not be independent. Second, the fuzzy c-means technique is used to produce two clusters from the input features, not only taking care of the vagueness between the changed and unchanged classes but also exploiting the spatial correlation of the pixels. To show the effectiveness of the proposed technique, experiments are conducted on multispectral and bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 Thematic Mapper optical image of Los Angeles, California, is used in this study; it covers a long forest fire that continued from July until October 2009. Early and later forest fire optical remote sensing images were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on the earth’s surface) and compared with the existing K-means clustering technique. Experimental results showed that the proposed technique performs better than the existing technique. The proposed technique can easily be extended to optical hyperspectral images and is suitable for many practical applications.
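
A minimal NumPy sketch of the fuzzy c-means step on per-pixel change features is shown below; it illustrates the soft two-cluster partition only and is not the authors' exact pipeline or feature construction.

```python
# Sketch: fuzzy c-means on per-pixel features (e.g. difference magnitude plus a
# neighborhood correlation score), yielding soft "change"/"no change" memberships.
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])        # initial soft memberships
    centers = None
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # fuzzy-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)         # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return U_new, centers
        U = U_new
    return U, centers

# U.argmax(axis=1) gives a hard change map, while U keeps the soft class boundary.
```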

Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing

Procedia PDF Downloads 163
194 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions; for example, extracting clinical concepts from medical texts such as medical condition, medication, treatment, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that does not require costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed). Then, the system builds a contextual model containing vector representations of concepts in the corpus, in an unsupervised manner (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide: ‘dry mouth,’ ‘itchy skin,’ and ‘blurred vision’). Then, the system matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, there is a need to extract medical concepts from unseen medical text. The system extracts key phrases from the new text, then matches them against the complete set of terms from Step 2, and the most semantically similar ones are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: “The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be early signs of diabetes.” Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in Step 2) representing their domain-specific concepts, and to automatically annotate a large number of texts (in Step 3) for classification/summarization of medical reports.
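
A minimal sketch of the Step 2 seed-set expansion with phrase embeddings is shown below; using gensim's Word2Vec as a stand-in for the Phrase2Vec-style contextual model, and the example seed terms, are assumptions made for illustration.

```python
# Sketch: expand a seed set of concept terms via embedding similarity.
import numpy as np
from gensim.models import Word2Vec

def expand_concept(corpus, seeds, topn=30):
    """corpus: tokenised in-domain sentences with multi-word phrases pre-joined
    (e.g. 'dry_mouth'); seeds: terms representing the target medical concept."""
    model = Word2Vec(corpus, vector_size=200, window=5, min_count=1, workers=4)
    seed_vec = np.mean([model.wv[t] for t in seeds if t in model.wv], axis=0)
    # Terms closest to the seed centroid form the expanded concept set.
    return model.wv.similar_by_vector(seed_vec, topn=topn)

# e.g. expand_concept(pubmed_sentences, ["dry_mouth", "itchy_skin", "blurred_vision"])
```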

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 125
193 National Assessment for Schools in Saudi Arabia: Score Reliability and Plausible Values

Authors: Dimiter M. Dimitrov, Abdullah Sadaawi

Abstract:

The National Assessment for Schools (NAFS) in Saudi Arabia consists of standardized tests in Mathematics, Reading, and Science for school grade levels 3, 6, and 9. One main goal is to classify students into four categories of NAFS performance (minimal, basic, proficient, and advanced) by school and for the entire national sample. The NAFS scoring and equating are performed on a bounded scale (D-scale, ranging from 0 to 1) in the framework of the recently developed “D-scoring method of measurement.” The specificity of the NAFS measurement framework and the complexity of the data presented both challenges and opportunities for (a) estimating score reliability for schools, (b) setting cut-scores for the classification of students into categories of performance, and (c) generating plausible values for distributions of student performance on the D-scale. The estimation of score reliability at the school level was performed in the framework of generalizability theory (GT), with students “nested” within schools and test items “nested” within test forms. The GT design was executed via multilevel modeling syntax in R. Cut-scores (on the D-scale) for the classification of students into performance categories were derived via a recently developed standard-setting method, referred to as the “Response Vector for Mastery” (RVM) method. For each school, the classification of students into categories of NAFS performance was based on distributions of plausible values for the students’ scores on the NAFS tests by grade level (3, 6, and 9) and subject (Mathematics, Reading, and Science). Plausible values (on the D-scale) for each individual student were generated via random selection from a logit-normal distribution with parameters derived from the student’s D-score and its conditional standard error, SE(D). All procedures related to D-scoring, equating, generating plausible values, and classifying students into performance levels were executed via a computer program in R developed for the purpose of NAFS data analysis.
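
The plausible-value step can be sketched as follows, assuming the logit of the D-score is used as the mean of the sampling distribution and the logit-scale standard deviation is obtained from SE(D) by a delta-method approximation; the paper's exact parameterization is not given here, so this mapping is an assumption.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def plausible_values(d_score, se_d, n_pv=5, seed=0):
    """Draw plausible values on the bounded (0, 1) D-scale from a logit-normal.

    Assumption: the logit-scale standard deviation comes from SE(D) via the
    delta method, sigma ~= SE(D) / (D * (1 - D)).
    """
    rng = np.random.default_rng(seed)
    mu = logit(d_score)
    sigma = se_d / (d_score * (1.0 - d_score))
    return inv_logit(rng.normal(mu, sigma, size=n_pv))

# Example: five plausible values for a student with D = 0.62 and SE(D) = 0.04.
pvs = plausible_values(d_score=0.62, se_d=0.04)
```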

Keywords: large-scale assessment, reliability, generalizability theory, plausible values

Procedia PDF Downloads 5
192 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what is known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day where GOD’s throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its functions, attributes, classes, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, that is outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform quadrillions (a quadrillion is 10^15) of floating-point operations per second. A number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and what is smaller than that, in the actual explosion, and it’s all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed, it can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that ‘WE used to copy what you used to do’. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle to every thought, to every action.
This brings up the idea of how scary the Day of Judgment will be, when one realizes that it is going to be a fully immersive video as we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 100
191 Correlation between the Larvae Density (Diptera: Culicidae) and Physicochemical Characteristics of Habitats in Mazandaran Province, Northern Iran

Authors: Seyed Hassan Nikookar, Mahmoud Fazeli-Dinan, Seyyed Payman Ziapour, Ahmad-Ali Enayati

Abstract:

Background: Mosquitoes seek all kinds of aquatic habitats for laying eggs. The characteristics of water habitats are important factors in determining whether a mosquito can survive and successfully complete its developmental stages. Physicochemical factors can play an important role in vector control programs. This study investigated whether physicochemical factors that differ between habitats affect larval density in Mazandaran province. Methods: Larvae were collected with a standard 350 ml dipper for 15-20 minutes from fixed habitats in 16 villages of 30 townships; the specimens were identified using a morphological key. Water samples were collected during larval collection and were evaluated for temperature (°C), acidity (pH), turbidity (NTU), electrical conductivity (μS/cm), alkalinity (mg/l), total hardness (mg/l), nitrate (mg/l), chloride (mg/l), phosphate (mg/l), and sulfate (mg/l) in the selected habitats using standard methods. The Spearman correlation coefficient was used to analyze the data. Results: In total, 7,566 mosquito larvae of three genera and 15 species were collected from the fixed habitats. Cx. pipiens was the dominant species except in the villages of Tileno, Zavat, Asad Abad, and Shah Mansur Mahale, where An. maculipennis and Cx. torrentium were the predominant species. Turbidity in Karat Koti, chloride in Al Tappeh, nitrate, phosphate and sulfate in Chalmardi, and electrical conductivity, alkalinity and total hardness in Komishan villages were significantly higher than in other villages (P < 0.05). There were significant positive correlations between Cx. pipiens and electrical conductivity, alkalinity, total hardness and chloride, and between Cx. tritaeniorhynchus and chloride, whereas a significant negative correlation was observed between sulfate and Cx. perexiguus. Conclusion: The correlations observed between physicochemical factors and larval density may confirm the effect of these parameters on the breeding activities of mosquitoes and could facilitate larval control programs through the manipulation of such factors.
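
The correlation analysis described above can be reproduced in outline with the Spearman rank correlation; the sketch below uses hypothetical per-habitat records and column names, not the study's data.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-habitat records: larval density plus physicochemical readings.
df = pd.DataFrame({
    "cx_pipiens_density": [12, 30, 7, 45, 22, 5],
    "conductivity_uS_cm": [310, 720, 250, 910, 640, 200],
    "chloride_mg_l":      [18, 42, 12, 55, 39, 10],
    "sulfate_mg_l":       [25, 14, 33, 9, 17, 40],
})

# Rank correlation between larval density and each physicochemical variable.
for col in ["conductivity_uS_cm", "chloride_mg_l", "sulfate_mg_l"]:
    rho, p = spearmanr(df["cx_pipiens_density"], df[col])
    print(f"{col}: rho={rho:.2f}, p={p:.3f}")
```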

Keywords: anopheles, culex, culiseta, physicochemical, habitats, larvae density, correlation

Procedia PDF Downloads 256
190 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class is absolutely rare, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one classifier in which the total error is minimized, and removing the between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains like credit scoring, customer churn prediction, financial distress, etc., that typically involve imbalanced data sets.
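
A simplified sketch of the within-class oversampling idea follows: model-based (Gaussian mixture) clustering locates sub-clusters of the minority class, and synthetic examples are generated by SMOTE-like interpolation inside each sub-cluster so that smaller sub-clusters are boosted. The complexity-based allocation and the Lowner-John ellipsoid step from the paper are not reproduced here; the equal per-cluster quota and all names are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_oversample(X_min, target_size, n_clusters=3, seed=0):
    """Oversample a minority class so every sub-cluster contributes equally."""
    rng = np.random.default_rng(seed)
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(X_min)
    labels = gmm.predict(X_min)
    per_cluster = target_size // n_clusters
    synthetic = []
    for k in range(n_clusters):
        members = X_min[labels == k]
        if len(members) < 2:
            continue
        # Interpolate between random pairs inside the sub-cluster (SMOTE-like).
        i = rng.integers(0, len(members), per_cluster)
        j = rng.integers(0, len(members), per_cluster)
        lam = rng.random((per_cluster, 1))
        synthetic.append(members[i] + lam * (members[j] - members[i]))
    return np.vstack(synthetic)

# X_min: minority-class feature matrix (n_samples, n_features), hypothetical data.
X_min = np.random.default_rng(1).normal(size=(60, 4))
X_new = cluster_oversample(X_min, target_size=120)
```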

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 408
189 Chikungunya Virus Infection among Patients with Febrile Illness Attending University of Maiduguri Teaching Hospital, Nigeria

Authors: Abdul-Dahiru El-Yuguda, Saka Saheed Baba, Tawa Monilade Adisa, Mustapha Bala Abubakar

Abstract:

Background: Chikungunya (CHIK) virus, a previously anecdotally described arbovirus, is now assuming a worldwide public health burden. CHIK virus infection is characterized by potentially life-threatening and debilitating arthritis in addition to high fever, arthralgia, myalgia, headache and rash. Method: Three hundred and seventy (370) serum samples were collected from outpatients with febrile illness attending the University of Maiduguri Teaching Hospital, Nigeria, and were tested for CHIK virus IgG and IgM antibodies using enzyme-linked immunosorbent assays (ELISAs). Result: Out of the 370 sera tested, 39 (10.5%) were positive for the presence of CHIK virus antibodies. A total of 24 (6.5%) tested positive for CHIK virus IgM only, while none (0.0%) was positive for CHIK virus IgG only, and 15 (4.1%) of the serum samples were positive for both IgG and IgM antibodies. A significant difference (p<0.0001) was observed in the distribution of CHIK virus antibodies in relation to gender. The males had an IgM antibody prevalence of 8.5% as against 4.6% observed in females. On the other hand, 4.6% of the females were positive for concurrent CHIK virus IgG and IgM antibodies compared to a prevalence of 3.4% observed in males. Only the age groups ≤ 60 years and the undisclosed age group were positive for the presence of CHIK virus IgG and/or IgM antibodies. No significant difference (p>0.05) was observed in the seasonal prevalence of CHIK virus antibodies among the study subjects. Analysis of the prevalence of CHIK virus antibodies in relation to the clinical presentation (as observed by clinicians) of the patients revealed that headache and fever were the most frequently encountered ailments. Conclusion: The CHIK virus IgM and concurrent IgM and IgG antibody prevalence rates of 6.5% and 4.1% observed in this study indicate current infection, and the absence of IgG antibodies alone shows that the infection is not endemic but sporadic. Recommendation: Further studies should be carried out to establish the seasonal prevalence of CHIK virus infection vis-à-vis vector dynamics in the study area. A comprehensive study needs to be carried out on the molecular characterization of the CHIK virus circulating in Nigeria with a view to developing a CHIK virus vaccine.

Keywords: Chikungunya virus, IgM and IgG antibodies, febrile patients, enzyme linked immunosorbent assay

Procedia PDF Downloads 380
188 Prevalence of Fast-Food Consumption on Overweight or Obesity on Employees (Age Between 25-45 Years) in Private Sector; A Cross-Sectional Study in Colombo, Sri Lanka

Authors: Arosha Rashmi De Silva, Ananda Chandrasekara

Abstract:

This study seeks to comprehensively examine the influence of fast-food consumption and physical activity levels on the body weight of young employees within the private sector of Sri Lanka. The escalating popularity of fast food has raised concerns about its nutritional content and associated health ramifications. To investigate this phenomenon, a cohort of 100 individuals aged between 25 and 45, employed in Sri Lanka's private sector, participated in this research. These participants provided socio-demographic data through a standardized questionnaire, enabling the characterization of their backgrounds. Additionally, participants disclosed their frequency of fast-food consumption and engagement in physical activities, utilizing validated assessment tools. The collected data was compiled into an Excel spreadsheet and subjected to rigorous statistical analysis. Descriptive statistics, such as percentages and proportions, were employed to delineate the body weight status of the participants. Employing chi-square tests, our study identified significant associations between fast-food consumption, levels of physical activity, and body weight categories. Furthermore, through binary logistic regression analysis, potential risk factors contributing to overweight and obesity within the young employee cohort were elucidated. Our findings revealed a disconcerting trend, with 6% of participants classified as underweight, 32% within the normal weight range, and a substantial 62% categorized as overweight or obese. These outcomes underscore the alarming prevalence of overweight and obesity among young private-sector employees, particularly within the bustling urban landscape of Colombo, Sri Lanka. The data strongly imply a robust correlation between fast-food consumption, sedentary behaviors, and higher body weight categories, reflective of the evolving lifestyle patterns associated with the nation's economic growth. This study emphasizes the urgent need for effective interventions to counter the detrimental effects of fast-food consumption. The implementation of awareness campaigns elucidating the adverse health consequences of fast food, coupled with comprehensive nutritional education, can empower individuals to make informed dietary choices. Workplace interventions, including the provision of healthier meal alternatives and the facilitation of physical activity opportunities, are essential in fostering a healthier workforce and mitigating the escalating burden of overweight and obesity in Sri Lanka.
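
The two analyses named above, a chi-square test of association and binary logistic regression, can be sketched as follows on hypothetical survey records; variable names and the fast-food threshold are illustrative, not from the study.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression

# Hypothetical survey records for the employee cohort.
rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "fast_food_weekly": rng.integers(0, 7, n),        # meals per week
    "activity_minutes": rng.integers(0, 300, n),      # weekly physical activity
    "overweight_or_obese": rng.integers(0, 2, n),     # BMI >= 25 indicator
})

# Chi-square test: frequent fast food (>= 3 meals/week) vs. weight status.
table = pd.crosstab(df["fast_food_weekly"] >= 3, df["overweight_or_obese"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")

# Binary logistic regression on the two exposures.
X = df[["fast_food_weekly", "activity_minutes"]]
y = df["overweight_or_obese"]
model = LogisticRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_[0])))
```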

Keywords: fast food consumption, obese, overweight, physical activity level

Procedia PDF Downloads 39
187 3D Modeling for Frequency and Time-Domain Airborne EM Systems with Topography

Authors: C. Yin, B. Zhang, Y. Liu, J. Cai

Abstract:

Airborne EM (AEM) is an effective geophysical exploration tool, especially suitable for rugged mountain areas. In these areas, topography has serious effects on AEM system responses. However, until now little has been reported on the topographic effect on airborne EM systems. In this paper, an edge-based unstructured finite-element (FE) method is developed for 3D topographic modeling for both frequency- and time-domain airborne EM systems. Starting from the frequency-domain Maxwell equations, a vector Helmholtz equation is derived to obtain a stable and accurate solution. Considering that the AEM transmitter and receiver are both located in the air, the scattered-field method is used in our modeling. The Galerkin method is applied to discretize the Helmholtz equation to obtain the final FE equations. Solving the FE equations yields the frequency-domain AEM responses. To accelerate the calculation, the response of the source in free space is used as the primary field and the PARDISO direct solver is used to deal with the problem of multiple transmitting sources. After calculating the frequency-domain AEM responses, a Hankel transform is applied to obtain the time-domain AEM responses. To check the accuracy of the present algorithm and to analyze the characteristics of the topographic effect on airborne EM systems, both the frequency- and time-domain AEM responses are simulated for three model groups: 1) a flat half-space model that has a semi-analytical solution for the EM response; 2) a valley or hill earth model; 3) a valley or hill earth with an anomalous body embedded. Numerical experiments show that, close to the node points of the topography, AEM responses demonstrate sharp changes. Special attention needs to be paid to topographic effects when interpreting AEM survey data over rugged topographic areas. Besides, the profile of the AEM responses mirrors the topographic earth surface. In comparison to the topographic effect, which mainly occurs at the high-frequency end and in early time channels, the EM responses of underground conductors mainly occur at low frequencies and in later time channels. For the signal of the same time channel, the dB/dt field reflects changes of conductivity better than the B-field. The research in this paper will serve airborne EM in the identification and correction of topographic effects.
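
For reference, the scattered-field (secondary-field) equation the abstract describes can be written in a conventional form, assuming the quasi-static approximation and an e^{iωt} time dependence; the paper's exact notation is not reproduced here, so the symbols below are conventional rather than the authors'.

```latex
% Secondary-field vector Helmholtz (curl-curl) equation, quasi-static limit,
% with the total field split as E = E^p + E^s (primary plus scattered):
\nabla \times \nabla \times \mathbf{E}^{s}
  + i\omega\mu_0\,\sigma\,\mathbf{E}^{s}
  = -\,i\omega\mu_0\,\bigl(\sigma - \sigma_p\bigr)\,\mathbf{E}^{p}
```

Here σ is the conductivity of the model including topography, σ_p the background conductivity used to compute the free-space or background primary field E^p, and the Galerkin discretization described above would be applied to an equation of this form.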

Keywords: 3D, Airborne EM, forward modeling, topographic effect

Procedia PDF Downloads 306
186 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories, on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative for feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second one used unigrams (single words), the third one used bigrams (two-word sequences) and the last experiment used a combination of unigrams and bigrams to extract features for classification. To test the effectiveness of the extracted feature set for the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed using the remaining 80% of the dataset, where SW performed the best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by the bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
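
The local-alignment scoring at the heart of the SW-based feature extraction can be sketched as below, applied to word tokens rather than protein residues; the scoring parameters and example strings are illustrative, and the paper's exact feature construction is not reproduced.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score between two token sequences (Smith-Waterman)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]   # DP score matrix
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Word-level comparison of two snippets, as a similarity score between documents.
doc1 = "patient with morbid obesity and type 2 diabetes".split()
doc2 = "history of obesity and type 2 diabetes mellitus".split()
score = smith_waterman(doc1, doc2)
```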

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 289
185 Macroeconomic Effects and Dynamics of Natural Disaster Damages: Evidence from SETX on the Resiliency Hypothesis

Authors: Agim Kukelii, Gevorg Sargsyan

Abstract:

This study, focusing on a base regional area (the county level), estimates the effect of natural disaster damages on aggregate personal income, aggregate wages, wages per worker, aggregate employment, and aggregate income transfers. The study further estimates the dynamics of personal income, employment, and wages under natural disaster shocks. Southeast Texas, located at the center of the Gulf Coast, is hit by meteorologically and hydrologically caused natural disasters every year. On average, there are more than four natural disasters per year, causing estimated damage averaging 2.2% of real personal income. The study uses the panel data method to estimate the average effect of natural disasters on the area’s economy (personal income, wages, employment, and income transfers). It also uses a Panel Vector Autoregressive (PVAR) model to study the dynamics of macroeconomic variables under natural disaster shocks. The study finds that the average effect of natural disasters is positive for personal income and income transfers and negative for wages and employment. The PVAR and impulse response function estimates reveal that natural disaster shocks cause a decrease in personal income, employment, and wages; however, the economy’s variables bounce back after three years. The novelty of this study rests on several aspects. First, this is the first study to investigate the effects of natural disasters on macroeconomic variables at a regional level. Second, the study uses direct measures of natural disaster damages. Third, the study estimates that the time the local economy takes to absorb natural disaster damage shocks is three years. This is a relatively good reaction by the local economy, therefore adding to the “resiliency” hypothesis. The study has several implications for policymakers, businesses, and households. First, this study serves to increase the awareness of local stakeholders that natural disaster damages do worsen macroeconomic variables such as personal income, employment, and wages, beyond the immediate damage to residential and commercial properties, physical infrastructure, and the discomfort in daily lives. Second, the study estimates that these effects linger in the economy on average for three years, which policymakers should factor in as the time the area needs to remain a focus of attention.
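
As a didactic approximation only (not the estimator used in the study), a fixed-effects panel VAR(1) can be sketched by demeaning within counties, regressing each variable on one lag of all variables, and tracing impulse responses from the resulting coefficient matrix; the data below are simulated and all variable names are assumptions.

```python
import numpy as np
import pandas as pd

# Simulated county-year panel for three macro variables (hypothetical data).
rng = np.random.default_rng(0)
counties, years, vars_ = 8, 25, ["income", "employment", "wages"]
frames = []
for c in range(counties):
    x = rng.normal(size=(years, 3)).cumsum(axis=0)
    frames.append(pd.DataFrame(x, columns=vars_).assign(county=c, year=range(years)))
panel = pd.concat(frames, ignore_index=True)

# Within transform (remove county fixed effects), then build one lag per county.
demeaned = panel.groupby("county")[vars_].transform(lambda s: s - s.mean())
demeaned[["county", "year"]] = panel[["county", "year"]]
lagged = demeaned.groupby("county")[vars_].shift(1).add_suffix("_lag")
data = pd.concat([demeaned, lagged], axis=1).dropna()

# Equation-by-equation OLS: y_t = A @ y_{t-1} + e_t.
X = data[[v + "_lag" for v in vars_]].to_numpy()
A = np.column_stack(
    [np.linalg.lstsq(X, data[v].to_numpy(), rcond=None)[0] for v in vars_]
).T

# Impulse response of each variable to a unit shock in "income" over 5 years.
shock = np.array([1.0, 0.0, 0.0])
irf = [np.linalg.matrix_power(A, h) @ shock for h in range(6)]
```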

Keywords: natural disaster damages, macroeconomics effects, PVAR, panel data

Procedia PDF Downloads 78