Search results for: robust estimator
571 The Simultaneous Application of Chemical and Biological Markers to Identify Reliable Indicators of Untreated Human Waste and Fecal Pollution in Urban Philadelphia Source Waters
Authors: Stafford Stewart, Hui Yu, Rominder Suri
Abstract:
This paper publishes the results of the first known study conducted in urban Philadelphia waterways that simultaneously utilized anthropogenic chemical and biological markers to identify suitable indicators of untreated human waste and fecal pollution. A total of 13 outfall samples, 30 surface water samples, and 2 groundwater samples were analyzed for fecal contamination and untreated human waste using a suite of 25 chemical markers and 5 bio-markers. Pearson rank correlation tests were conducted to establish associations between the abundances of bio-markers and the concentrations of chemical markers. Results show that the 16S rRNA gene of human-associated Bacteroidales (BacH) was very strongly correlated (0.76 – 0.97, p < 0.05) with the labile chemical markers acetaminophen, cotinine, estriol, and urobilin. Likewise, human-specific F-RNA coliphages (F-RNA-II) and the labile chemical markers urobilin, ibuprofen, cotinine, and estriol were significantly correlated (0.77 – 0.95, p < 0.05). Similarly, a strong positive correlation (0.67 – 0.91, p < 0.05) was evident between the abundances of the bio-markers BacH and F-RNA-II and the concentrations of the conservative markers trimethoprim, meprobamate, diltiazem, triclocarban, metformin, sucralose, gemfibrozil, sulfamethoxazole, and carbamazepine. Human mitochondrial DNA (MitoH) correlated moderately with the labile markers nicotine and salicylic acid as well as with the conservative markers metformin and triclocarban (0.31 – 0.47, p < 0.05). This study showed that by associating chemical and biological markers, a robust technique was developed for fingerprinting source-specific untreated waste and fecal contamination in source waters.
Keywords: anthropogenic markers, bacteroidales, fecal pollution, source waters, wastewater
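The marker-screening step above reduces, at its core, to computing a correlation coefficient between each bio-marker abundance series and each chemical-marker concentration series. A minimal numpy sketch of that computation (illustrative only; the function name and sample values are hypothetical, not the study's code):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two marker series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

# e.g. BacH abundance vs. acetaminophen concentration across samples
r = pearson_r([1.0, 2.1, 2.9, 4.2], [0.5, 1.1, 1.4, 2.0])
```

The significance levels reported above (p < 0.05) would additionally require a t-test on r with n - 2 degrees of freedom.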
Procedia PDF Downloads 105
570 RNAseq Reveals Hypervirulence-Specific Host Responses to M. tuberculosis Infection
Authors: Gina Leisching, Ray-Dean Pietersen, Carel Van Heerden, Paul Van Helden, Ian Wiid, Bienyameen Baker
Abstract:
The distinguishing factors that characterize the host response to infection with virulent Mycobacterium tuberculosis (M.tb) remain largely unresolved. We present an infection study with two genetically closely related M.tb strains that have vastly different pathogenic characteristics. The early host response to infection with these detergent-free cultured strains was analyzed through RNAseq in an attempt to provide information on the subtleties which may ultimately contribute to the virulent phenotype. Murine bone marrow-derived macrophages (BMDMs) were infected with either a hyper- (R5527) or hypovirulent (R1507) Beijing M. tuberculosis clinical isolate. RNAseq revealed 69 differentially expressed host genes in BMDMs during comparison of these two transcriptomes. Pathway analysis revealed activation of the stress-induced and growth-inhibitory Gadd45 signaling pathway in hypervirulent-infected BMDMs. Upstream regulators of interferon activation such as IRF3 and IRF7 were predicted to be upregulated in hypovirulent-infected BMDMs. Additional analysis of the host immune response through ELISA and qPCR included the use of human THP-1 macrophages, where a robust proinflammatory response was observed after infection with the hypervirulent strain. RNAseq revealed two early-response genes (IER3 and SAA3) and two host-defence genes (OASL1 and SLPI) that were significantly upregulated by the hypervirulent strain. The roles of these genes under M.tb infection conditions are largely unknown, but here we provide validation of their presence with the use of qPCR and Western blot. Further analysis into their biological role under infection with virulent M.tb is required.
Keywords: host-response, Mycobacterium tuberculosis, RNAseq, virulence
Procedia PDF Downloads 208
569 Unlocking the Genetic Code: Exploring the Potential of DNA Barcoding for Biodiversity Assessment
Authors: Mohammed Ahmed Ahmed Odah
Abstract:
DNA barcoding is a crucial method for assessing and monitoring species diversity amidst escalating threats to global biodiversity. The author explores DNA barcoding's potential as a robust and reliable tool for biodiversity assessment. It begins with a comprehensive review of existing literature, delving into the theoretical foundations, methodologies and applications of DNA barcoding. The suitability of various DNA regions, like the COI gene, as universal barcodes is extensively investigated. Additionally, the advantages and limitations of different DNA sequencing technologies and bioinformatics tools are evaluated within the context of DNA barcoding. To evaluate the efficacy of DNA barcoding, diverse ecosystems, including terrestrial, freshwater and marine habitats, are sampled. Extracted DNA from collected specimens undergoes amplification and sequencing of the target barcode region. Comparison of the obtained DNA sequences with reference databases allows for the identification and classification of the sampled organisms. Findings demonstrate that DNA barcoding accurately identifies species, even in cases where morphological identification proves challenging. Moreover, it sheds light on cryptic and endangered species, aiding conservation efforts. The author also investigates patterns of genetic diversity and evolutionary relationships among different taxa through the analysis of genetic data. This research contributes to the growing knowledge of DNA barcoding and its applicability for biodiversity assessment. The advantages of this approach, such as speed, accuracy and cost-effectiveness, are highlighted, along with areas for improvement. 
By unlocking the genetic code, DNA barcoding enhances our understanding of biodiversity, supports conservation initiatives and informs evidence-based decision-making for the sustainable management of ecosystems.
Keywords: DNA barcoding, biodiversity assessment, genetic code, species identification, taxonomic resolution, next-generation sequencing
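The identification step described above, comparing an obtained barcode sequence with a reference database, can be sketched as a best-identity lookup. This toy version assumes pre-aligned, equal-length sequences and a hypothetical reference dictionary; real pipelines use alignment tools such as BLAST against curated barcode databases:

```python
def identity(seq_a, seq_b):
    """Fraction of matching positions between two aligned barcode sequences."""
    assert len(seq_a) == len(seq_b)
    return sum(a == b for a, b in zip(seq_a, seq_b)) / len(seq_a)

def classify(query, reference_db, threshold=0.97):
    """Assign the query to the reference species with the highest identity,
    if it exceeds a barcode-gap threshold (~97% is a common COI heuristic)."""
    name = max(reference_db, key=lambda k: identity(query, reference_db[k]))
    score = identity(query, reference_db[name])
    return (name, score) if score >= threshold else (None, score)
```

A query that falls below the threshold is left unassigned, which is how cryptic or unrepresented taxa would surface in practice.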
Procedia PDF Downloads 23
568 Non Enzymatic Electrochemical Sensing of Glucose Using Manganese Doped Nickel Oxide Nanoparticles Decorated Carbon Nanotubes
Authors: Anju Joshi, C. N. Tharamani
Abstract:
Diabetes is one of the leading causes of death at present and remains an important concern as the prevalence of the disease is increasing at an alarming rate. Therefore, it is crucial to diagnose the accurate levels of glucose for developing an efficient therapeutic for diabetes. Due to the availability of convenient and compact self-testing, continuous monitoring of glucose is feasible nowadays. Enzyme-based electrochemical sensing of glucose is quite popular because of its high selectivity but suffers from drawbacks like complicated purification and immobilization procedures, denaturation, high cost, and low sensitivity due to indirect electron transfer. Hence, designing a robust enzyme-free platform using transition metal oxides remains crucial for the efficient and sensitive determination of glucose. In the present work, manganese-doped nickel oxide nanoparticles (Mn-NiO) have been synthesized onto the surface of multiwalled carbon nanotubes using a simple microwave-assisted approach for non-enzymatic electrochemical sensing of glucose. The morphology and structure of the synthesized nanostructures were characterized using scanning electron microscopy (SEM) and X-ray diffraction (XRD). We demonstrate that the synthesized nanostructures show enormous potential for electrocatalytic oxidation of glucose with high sensitivity and selectivity. Cyclic voltammetry and square wave voltammetry studies suggest superior sensitivity and selectivity of Mn-NiO decorated carbon nanotubes towards the non-enzymatic determination of glucose. A linear response between the peak current and the concentration of glucose has been found in the concentration range of 0.01 μM – 10000 μM, which suggests the potential efficacy of Mn-NiO decorated carbon nanotubes for sensitive determination of glucose.
Keywords: diabetes, glucose, Mn-NiO decorated carbon nanotubes, non-enzymatic
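The reported linear peak-current/concentration relationship corresponds to a one-line least-squares calibration; the slope is the sensor's sensitivity. A sketch with made-up numbers (the real calibration values are not given in the abstract):

```python
import numpy as np

# Hypothetical calibration points: glucose concentration (uM) vs. peak current (uA)
conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0, 10000.0])
current = 0.05 * conc + 0.2          # idealized linear response for illustration

slope, intercept = np.polyfit(conc, current, 1)   # slope = sensitivity (uA/uM)
```

In practice a range spanning six decades like this is often fit piecewise or on a log scale; a single linear fit here is only for illustration.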
Procedia PDF Downloads 234
567 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images
Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek
Abstract:
Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging because of its common characteristics shared with other ultrasound modalities in addition to the three orthogonal planes (i.e., axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in some applications, for example, the evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, as well as an automatic approach to detect the ribs. In fact, rib detection is considered one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or a set of images. Second, the normalized images are smoothed by using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). Qualitative and quantitative evaluations are performed on a total of 22 cases. For all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on rib borders are 15.1188 mm and 14.7184 mm, respectively.
Keywords: Automated 3D Breast Ultrasound, Eigenvalues of Hessian matrix, Nipple detection, Rib detection
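The core of the rib-detection step, per-voxel eigenvalues of the 3D Hessian, can be sketched with finite differences in numpy (illustrative only; the paper's smoothing scales and filters are not reproduced here):

```python
import numpy as np

def hessian_eigenvalues(volume):
    """Per-voxel eigenvalues of the 3D Hessian via finite differences.
    Bright tube- or sheet-like structures such as ribs produce one or two
    large-magnitude negative eigenvalues, which drives the detection."""
    grads = np.gradient(volume)
    H = np.empty(volume.shape + (3, 3))
    for i, gi in enumerate(grads):
        for j, gij in enumerate(np.gradient(gi)):
            H[..., i, j] = gij
    # symmetrize to guard against finite-difference asymmetry
    H = 0.5 * (H + np.swapaxes(H, -1, -2))
    return np.linalg.eigvalsh(H)  # eigenvalues sorted ascending per voxel
```

A vesselness-style score built from these eigenvalues, combined with the breast mask, would then suppress false positives as described above.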
Procedia PDF Downloads 329
566 Uncertainty and Volatility in Middle East and North Africa Stock Market during the Arab Spring
Authors: Ameen Alshugaa, Abul Mansur Masih
Abstract:
This paper sheds light on the economic impacts of political uncertainty caused by the civil uprisings that swept the Arab World and have been collectively known as the Arab Spring. Measuring documented effects of political uncertainty on regional stock market indices, we examine the impact of the Arab Spring on the volatility of stock markets in eight countries in the Middle East and North Africa (MENA) region: Egypt, Lebanon, Jordan, the United Arab Emirates, Qatar, Bahrain, Oman and Kuwait. This analysis also permits testing the existence of financial contagion among equity markets in the MENA region during the Arab Spring. To capture the time-varying and multi-horizon nature of the evidence of volatility and contagion in the eight MENA stock markets, we apply two robust methodologies on consecutive data from November 2008 to March 2014: MGARCH-DCC and Continuous Wavelet Transforms (CWT). Our results indicate two key findings. First, the discrepancies between volatile stock markets of countries directly impacted by the Arab Spring and countries that were not directly impacted indicate that international investors may still enjoy portfolio diversification and investment in MENA markets. Second, the lack of financial contagion during the Arab Spring suggests that there is little evidence of cointegration among MENA markets. Providing a general analysis of the economic situation and the investment climate in the MENA region during and after the Arab Spring, this study bears significant importance for policy makers, local and international investors, and market regulators.
Keywords: portfolio diversification, MENA region, stock market indices, MGARCH-DCC, wavelet analysis, CWT
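As a crude illustration of the time-varying comovement that MGARCH-DCC estimates parametrically, a rolling-window correlation between two index return series already shows how cross-market dependence evolves through a sample (the function and window size are illustrative, not the paper's estimator):

```python
import numpy as np

def rolling_corr(x, y, window):
    """Rolling-window correlation between two return series: a simple,
    non-parametric stand-in for the dynamic conditional correlations
    that an MGARCH-DCC model would estimate."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    out = np.full(len(x), np.nan)          # undefined until a full window exists
    for t in range(window - 1, len(x)):
        xs = x[t - window + 1 : t + 1]
        ys = y[t - window + 1 : t + 1]
        out[t] = np.corrcoef(xs, ys)[0, 1]
    return out
```

A sustained rise in such correlations after a shock is the informal signature of contagion that the paper's formal tests did not find.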
Procedia PDF Downloads 290
565 Mindfulness and Mental Resilience Training for Pilots: Enhancing Cognitive Performance and Stress Management
Authors: Nargiza Nuralieva
Abstract:
The study delves into assessing the influence of mindfulness and mental resilience training on the cognitive performance and stress management of pilots. Employing a meticulous literature search across databases such as Medline and Google Scholar, the study used specific keywords to target a wide array of studies. Inclusion criteria were stringent, focusing on peer-reviewed studies in English that utilized designs like randomized controlled trials, with a specific interest in interventions related to mindfulness or mental resilience training for pilots and measured outcomes pertaining to cognitive performance and stress management. The initial literature search identified a pool of 123 articles, with subsequent screening resulting in the exclusion of 77 based on title and abstract. The remaining 54 articles underwent a more rigorous full-text screening, leading to the exclusion of 41. Additionally, five studies were selected from the World Health Organization's clinical trials database. A total of 11 articles from meta-analyses were retained for examination, underscoring the study's dedication to a meticulous and robust inclusion process. The interventions varied widely, incorporating mixed approaches, Cognitive Behavioral Therapy (CBT)-based, and mindfulness-based techniques. The analysis uncovered positive effects across these interventions. Specifically, mixed interventions demonstrated a Standardized Mean Difference (SMD) of 0.54, CBT-based interventions showed an SMD of 0.29, and mindfulness-based interventions exhibited an SMD of 0.43. Long-term effects at a 6-month follow-up suggested sustained impacts for both mindfulness-based (SMD: 0.63) and CBT-based interventions (SMD: 0.73), albeit with notable heterogeneity.
Keywords: mindfulness, mental resilience, pilots, cognitive performance, stress management
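The effect sizes quoted above are standardized mean differences. Cohen's d with a pooled standard deviation, sketched below, is the basic SMD computation such meta-analyses rest on (illustrative, not the meta-analytic software actually used):

```python
import numpy as np

def standardized_mean_difference(treat, control):
    """Cohen's d: difference in group means divided by the pooled SD,
    the SMD scale used for the intervention effects reported above."""
    t = np.asarray(treat, dtype=float)
    c = np.asarray(control, dtype=float)
    nt, nc = len(t), len(c)
    pooled_var = ((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2)
    return float((t.mean() - c.mean()) / np.sqrt(pooled_var))
```

Meta-analyses typically also apply the small-sample (Hedges' g) correction and weight studies by inverse variance before pooling.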
Procedia PDF Downloads 54
564 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks
Authors: Tugba Bayoglu
Abstract:
Aerodynamic parameter estimation is very crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, in the literature, there are few missile aerodynamic parameter identification studies, for three main reasons: (1) most air to air missiles cannot fly with constant speed, (2) missile flight test number and flight duration are much less than those of fixed wing aircraft, (3) variation of the missile aerodynamic parameters with respect to Mach number is higher than that of fixed wing aircraft. In addition to these challenges, identification of aerodynamic parameters for high wind angles by using classical estimation techniques brings another difficulty to the estimation process. The reason for this is that most estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics. However, for missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, in this study, we aim to solve the nonlinear aerodynamic parameter identification problem for a supersonic air to air missile by using Artificial Neural Networks. The proposed method will be tested by using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, by using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.
Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification
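The proposed estimator replaces polynomial or spline aerodynamic models with a neural network regressor. A deliberately tiny numpy sketch of that idea, fitting a made-up nonlinear coefficient C(alpha, Mach) with a one-hidden-layer network, illustrates the mechanics (all names and data are hypothetical; the study's six-DOF simulation and network design are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: noisy samples of a nonlinear coefficient surface
alpha = rng.uniform(-0.3, 0.3, (512, 1))        # angle of attack (rad)
mach = rng.uniform(1.2, 3.0, (512, 1))          # supersonic Mach range
X = np.hstack([alpha, mach])
y = np.sin(4 * alpha) * (1 + 0.2 * mach) + 0.01 * rng.standard_normal((512, 1))

# One-hidden-layer network trained by plain batch gradient descent
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr, losses = 0.02, []
for _ in range(800):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    losses.append(float((err ** 2).mean()))
    dpred = 2 * err / len(X)                     # backpropagation
    dh = dpred @ W2.T * (1 - h ** 2)
    W2 -= lr * (h.T @ dpred); b2 -= lr * dpred.sum(0)
    W1 -= lr * (X.T @ dh);   b1 -= lr * dh.sum(0)
```

The actual study would feed measured flight variables rather than a synthetic surface, but the training loop has the same shape.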
Procedia PDF Downloads 278
563 MRI Quality Control Using Texture Analysis and Spatial Metrics
Authors: Kumar Kanudkuri, A. Sandhya
Abstract:
Typically, in a MRI clinical setting, there are several protocols run, each indicated for a specific anatomy and disease condition. However, these protocols or parameters within them can change over time due to changes to the recommendations by the physician groups or updates in the software or by the availability of new technologies. Most of the time, the changes are performed by the MRI technologist to account for either time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. However, providing proper guidelines to MRI technologists is important so that they do not change the parameters that negatively impact the image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and might change among operators as well as for the same operator at various times. Therefore, overcoming these constraints is essential for a more impartial evaluation of quality. This makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To solve this problem, we propose a robust, open-source, and automated MRI image quality control tool. We designed and developed an automatic analysis tool that measures MRI image quality (IQ) metrics (Signal to Noise Ratio (SNR), Signal to Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray level co-occurrence matrix (GLCM), slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution) with good accuracy. A standardized quality report is generated that incorporates metrics that impact diagnostic quality.
Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy
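Of the listed IQ metrics, SNR is the simplest to automate: the mean signal in a uniform phantom ROI divided by the standard deviation of a background (air) region. A sketch, assuming the ROIs are supplied as index slices (the tool's actual ROI-placement logic is not described in the abstract):

```python
import numpy as np

def snr(image, signal_roi, background_roi):
    """Phantom SNR: mean of a uniform signal ROI over the standard
    deviation of a background (air) ROI; ROIs are tuples of slices."""
    signal_mean = image[signal_roi].mean()
    noise_std = image[background_roi].std()
    return float(signal_mean / noise_std)
```

The other metrics (SNRU, VIF, FSIM, GLCM texture features) build on the same pattern of extracting ROIs from the standardized ACR phantom geometry.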
Procedia PDF Downloads 168
562 Drop Impact Study on Flexible Superhydrophobic Surface Containing Micro-Nano Hierarchical Structures
Authors: Abinash Tripathy, Girish Muralidharan, Amitava Pramanik, Prosenjit Sen
Abstract:
Superhydrophobic surfaces are abundant in nature. Several surfaces, such as the wings of a butterfly, the legs of a water strider, the feet of a gecko, and the lotus leaf, show extreme water-repellence behaviour. Self-cleaning, stain-free fabrics, spill-resistant protective wear, drag reduction in micro-fluidic devices, etc. are a few applications of superhydrophobic surfaces. In order to design a robust superhydrophobic surface, it is important to understand the interaction of water with superhydrophobic surface textures. In this work, we report a simple coating method for creating a large-scale flexible superhydrophobic paper surface. The surface consists of multiple layers of silanized zirconia microparticles decorated with zirconia nanoparticles. A water contact angle as high as 159 ± 1° and contact angle hysteresis of less than 8° were observed. Drop impact studies on the superhydrophobic paper surface were carried out by impinging water droplets and capturing the dynamics through high-speed imaging. During the drop impact, the Weber number was varied from 20 to 80 by altering the impact velocity of the drop, and parameters such as contact time and normalized spread diameter were obtained. In contrast to earlier literature reports, we observed contact time to be dependent on impact velocity on the superhydrophobic surface. Total contact time was split into two components, spread time and recoil time. The recoil time was found to be dependent on the impact velocity, while the spread time on the surface did not show much variation with the impact velocity. Further, the normalized spreading parameter was found to increase with increase in impact velocity.
Keywords: contact angle, contact angle hysteresis, contact time, superhydrophobic
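The Weber number used to parameterize the impacts is a one-line formula comparing inertial to capillary forces:

```python
def weber_number(density, velocity, diameter, surface_tension):
    """We = rho * v^2 * D / sigma, the ratio of inertial to capillary
    forces that the study varied between 20 and 80 via impact velocity."""
    return density * velocity ** 2 * diameter / surface_tension

# Example (illustrative values): a 2 mm water drop at 1 m/s
# rho = 1000 kg/m^3, sigma = 0.072 N/m  ->  We ~ 27.8
we = weber_number(1000.0, 1.0, 0.002, 0.072)
```

Inverting the same formula gives the impact velocities needed to span the reported We = 20 to 80 range for a given drop size.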
Procedia PDF Downloads 425
561 Associations and Interactions of Delivery Mode and Antibiotic Exposure with Infant Cortisol Level: A Correlational Study
Authors: Samarpreet Singh, Gerald Giesbrecht
Abstract:
Both c-section delivery and antibiotic exposure are linked to gut microbiota imbalance in infants. Such disturbance is associated with Hypothalamic-Pituitary-Adrenal (HPA) axis function. However, the literature contains only contradictory evidence for the association between c-sections and the HPA axis. Therefore, this study aims to test whether the mode of delivery and antibiotic exposure are associated with the HPA axis, and whether exposure to both interacts with HPA-axis function. It was hypothesized that associations and interactions would be observed. Secondary data analysis was used for this correlational study. Data for the mode of delivery and antibiotic exposure variables were documented from hospital records or self-report questionnaires. Cortisol levels (area under the curve with respect to increase (AUCi) and area under the curve with respect to ground (AUCg)) were based on saliva collected from three-month-old infants during the lab visit and after drawing blood. One-way and between-subject ANOVA analyses were run on the data. No significant association between delivery mode and infant cortisol level was found for either AUCi or AUCg, p > .05. Only the infant's AUCg was found to be significantly higher if there was antibiotic exposure at delivery (p = .001) or if the mother was exposed during pregnancy (p < .05). Infants born by c-section and exposed to antibiotics by three months had higher AUCi than those born vaginally, p < .02. These results imply that antibiotic exposure before three months is associated with an infant's stress response, and the association might be stronger when antibiotic exposure follows a c-section birth. However, more robust and causal evidence is needed in future studies, given the statistically weak sample sizes of some groups. Nevertheless, the results of this study still highlight the unintended consequences of antibiotic exposure during delivery and pregnancy.
Keywords: HPA-axis, antibiotics, c-section, gut-microbiota, development, stress
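The two cortisol outcomes, AUCg and AUCi, are conventionally computed with the trapezoid rule over the saliva sampling times (Pruessner-style formulas); a sketch with hypothetical sample values:

```python
import numpy as np

def cortisol_aucs(times, levels):
    """AUCg (area with respect to ground) and AUCi (area with respect to
    increase from the baseline sample), both via the trapezoid rule."""
    t = np.asarray(times, dtype=float)
    c = np.asarray(levels, dtype=float)
    auc_g = float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))
    auc_i = auc_g - float(c[0] * (t[-1] - t[0]))   # subtract the baseline rectangle
    return auc_g, auc_i

# e.g. hypothetical saliva samples at 0, 10 and 20 minutes after the stressor
auc_g, auc_i = cortisol_aucs([0, 10, 20], [2.0, 4.0, 2.0])
```

AUCg captures total output while AUCi isolates the reactive rise above baseline, which is why the two measures can diverge in the results above.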
Procedia PDF Downloads 70
560 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. In this regard, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, whose number is increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, in order to classify the most related and effective input variables. Yet the selection of dominant input variables among wastewater treatment parameters, which could effectively lead to more accurate prediction of water quality, has often been overlooked. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city's wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target parameter to determine water quality. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used those variables identified by the mutual information (MI) measure. With the optimal ANN structure, model B showed up to a 15% increase in Determination Coefficient (DC) compared with model A. Thus, this study highlights the advantage of the MI method in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant
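Model B's input selection rests on estimating the mutual information between each candidate variable and the BOD target. A simple histogram-based MI estimator sketches the idea (the bin count and variable names are illustrative; the study's exact estimator is not specified):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of MI(x; y) in nats, usable for ranking candidate
    input variables against a target such as effluent BOD."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))
```

Unlike PCA loadings, MI captures nonlinear dependence, which is one plausible reason model B outperformed model A.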
Procedia PDF Downloads 128
559 The Planner's Pentangle: A Proposal for a 21st-Century Model of Planning for Sustainable Development
Authors: Sonia Hirt
Abstract:
The Planner's Triangle, an oft-cited model that visually defined planning as the search for sustainability to balance the three basic priorities of equity, economy, and environment, has influenced planning theory and practice for a quarter of a century. In this essay, we argue that the triangle requires updating and expansion. Even if planners keep sustainability as their key core aspiration at the center of their imaginary geometry, the triangle's vertices have to be rethought. Planners should move on to a 21st-century concept. We propose a Planner's Pentangle with five basic priorities as vertices of a new conceptual polygon. These five priorities are Wellbeing, Equity, Economy, Environment, and Esthetics (WE⁴). The WE⁴ concept more accurately and fully represents planning's history. This is especially true in the United States, where public art and public health played pivotal roles in the establishment of the profession in the late 19th and early 20th centuries. It also more accurately represents planning's future, as both health/wellness and aesthetic concerns are becoming increasingly important in the 21st century. The pentangle can become an effective tool for understanding and visualizing planning's history and present. Planning has a long history of representing urban presents and futures as conceptual models in visual form. Such models can play an important role in understanding and shaping practice. For over two decades, one such model, the Planner's Triangle, stood apart as the expression of planning's pursuit of sustainability. But if the model is outdated and insufficiently robust, it can diminish our understanding of planning practice, as well as the appreciation of the profession among non-planners. Thus, we argue for a new conceptual model of what planners do.
Keywords: sustainable development, planning for sustainable development, planner's triangle, planner's pentangle, planning and health, planning and art, planning history
Procedia PDF Downloads 139
558 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods
Authors: Kizito Ugochukwu Nwajeri
Abstract:
This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in various fields such as science, engineering, and applied mathematics. Traditional numerical approaches, such as finite difference and shooting methods, often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods. These tests showcase significant improvements in accuracy and computational efficiency compared to conventional methods, indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications in real-world problems, offering a promising alternative to existing numerical techniques.
Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods
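For contrast with the hybrid block scheme, the classical finite-difference baseline it competes with can be written in a few lines: discretize the linear test problem y'' = -y, y(0) = 0, y(1) = 1 (exact solution sin x / sin 1) and solve the resulting tridiagonal system (an illustrative test problem, not one from the paper):

```python
import numpy as np

n = 50                              # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# (y[i-1] - 2*y[i] + y[i+1]) / h^2 + y[i] = 0  ->  tridiagonal system A y = b
A = (np.diag(np.full(n, -2.0 + h ** 2))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = np.zeros(n)
b[-1] = -1.0                        # boundary value y(1) = 1 moved to the RHS

y = np.linalg.solve(A, b)
exact = np.sin(x) / np.sin(1.0)
max_err = float(np.abs(y - exact).max())
```

This second-order scheme is the kind of conventional method whose accuracy the hybrid block approach aims to improve upon while computing several solution points per block.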
Procedia PDF Downloads 30
557 Innovation of a New Plant Tissue Culture Medium for Large Scale Plantlet Production in Potato (Solanum tuberosum L.)
Authors: Ekramul Hoque, Zinat Ara Eakut Zarin, Ershad Ali
Abstract:
The growth and development of explants is governed by the nutrient medium. Ammonium nitrate (NH4NO3) is a major salt of stock solution-1 in the preparation of tissue culture media. However, it has several demerits: it is used in the preparation of explosives and for other destructive activities, and hence it is completely banned in our country. A new chemical was identified as a substitute for ammonium nitrate, and the concentrations of the other major- and minor-salt ingredients were modified from the MS medium, so the formulation of the new medium differs entirely from the MS nutrient composition. The widely used MS medium composition served as the first check treatment, and MS powder (Duchefa Biocheme, The Netherlands) as the second check treatment. The experiments were carried out at the Department of Biotechnology, Sher-e-Bangla Agricultural University, Dhaka, Bangladesh. Two potato varieties, viz. Diamant and Asterix, were used as experimental materials. The regeneration potential of potato on the new medium was the best compared with the two check treatments. The traits node number, leaf number, shoot length, and root length were highest on the new medium. The plantlets were healthy, robust, and strong compared to plantlets regenerated from the check treatments. Three subsequent sub-cultures were made on the new medium to observe the growth pattern of the plantlets, and the new medium again showed the best performance for all parameters under study. The regenerated plantlets produced good-quality minitubers under field conditions. Hence, it is concluded that a new plant tissue culture medium was developed at the Department of Biotechnology, Sher-e-Bangla Agricultural University, Dhaka, Bangladesh, under the leadership of Professor Dr. Md. Ekramul Hoque.
Keywords: new medium, potato, regeneration, ammonium nitrate
Procedia PDF Downloads 93
556 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities in their characters. Exhaustive datasets are required to recognize the characters, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on pictures with noise and different lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention neural network (VAN), were used to experiment with the dataset. The VAN architecture, incorporating additional channels for Canny edge features, was adopted as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that solely used the original grayscale images. When tested on each language separately, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada words. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
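The edge-feature preprocessing amounts to stacking an edge map as an extra input channel alongside the grayscale image. The sketch below uses a dependency-free Sobel gradient magnitude as a stand-in for the Canny detector the paper uses (in practice, OpenCV's cv2.Canny would be the drop-in replacement):

```python
import numpy as np

def edge_channel(gray):
    """Normalized gradient-magnitude edge map (Sobel), an illustrative
    stand-in for the Canny edge features used in the paper."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def with_edge_channel(gray):
    """Stack normalized grayscale + edge map into an (H, W, 2) network input."""
    return np.stack([gray.astype(float) / 255.0, edge_channel(gray)], axis=-1)
```

The two-channel tensor then feeds the CNN or VAN exactly as a single-channel image would, with the first convolution widened to accept the extra channel.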
Procedia PDF Downloads 515
555 Physical Modeling of Woodwind Ancient Greek Musical Instruments: The Case of Plagiaulos
Authors: Dimitra Marini, Konstantinos Bakogiannis, Spyros Polychronopoulos, Georgios Kouroupetroglou
Abstract:
Archaeomusicology cannot entirely depend on the study of excavated ancient musical instruments, as most of the time their condition is not ideal (i.e., missing or eroded parts) and, moreover, because of the concern of damaging the originals during experiments. In order to overcome the above obstacles, researchers build replicas. This technique is still the most popular one, although it is rather expensive and time-consuming. Throughout the last decades, the development of physical modeling techniques has provided tools that enable the study of musical instruments through their digitally simulated models. This is not only a more cost- and time-efficient technique but also provides additional flexibility, as the user can easily modify parameters such as geometrical features and materials. This paper thoroughly describes the steps to create a physical model of a woodwind ancient Greek instrument, Plagiaulos. This instrument could be considered the ancestor of the modern flute due to the common geometry and air-jet excitation mechanism. Plagiaulos is comprised of a single resonator with an open end and a number of tone holes. The combination of closed and open tone holes produces the pitch variations. In this work, the effects of all the instrument's components are described by means of physics and then simulated based on digital waveguides. The synthesized sound of the proposed model complies with the theory, highlighting its validity. Further, the synthesized sound of the model simulating the Plagiaulos of Koile (2nd century BCE) was compared with its replica built in our laboratory by following the scientific methodologies of archaeomusicology. The aforementioned results verify that robust dynamic digital tools can be introduced in the field of computational, experimental archaeomusicology.
Keywords: archaeomusicology, digital waveguides, musical acoustics, physical modeling
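The digital-waveguide principle behind such models can be illustrated with a minimal resonator: a delay line with a lossy, sign-inverting reflection models a cylindrical bore, and shortening the effective delay, as opening a tone hole effectively does, raises the pitch. This is a deliberately simplified sketch, not the authors' Plagiaulos model; all names and constants are illustrative.

```python
# Minimal single-delay-line waveguide resonator (illustrative sketch).

def waveguide_impulse(delay_len, n_samples, loss=0.995):
    """Impulse response of a lossy waveguide loop with an inverting reflection."""
    line = [0.0] * delay_len
    line[0] = 1.0  # excitation impulse entering the bore
    out = []
    for n in range(n_samples):
        y = line[n % delay_len]
        out.append(y)
        # reflect at the open end: sign inversion plus a small energy loss
        line[n % delay_len] = -loss * y
    return out

def fundamental_hz(delay_len, sample_rate=44100):
    # one round trip through the line and its inverted reflection
    return sample_rate / (2 * delay_len)

# Halving the delay doubles the pitch, mirroring an opened tone hole.
low, high = fundamental_hz(100), fundamental_hz(50)
```

The impulse recirculates every `delay_len` samples with alternating sign and shrinking amplitude, which is the decaying periodic oscillation a simple bore produces.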
Procedia PDF Downloads 112
554 Elements of Sector Benchmarking in Physical Education Curriculum: An Indian Perspective
Authors: Kalpana Sharma, Jyoti Mann
Abstract:
The study was designed as an institutional analysis to develop a clear understanding of the processes involved in the functioning, and the determinants influencing the layout, of the physical education teacher education program in India. This can further be recommended for the selection of parameters for creating sector benchmarking for physical education teacher training institutions across India. 165 stakeholders, including students, teachers, parents, and administrators, were surveyed from the seven identified institutions and universities from different states of India. They were surveyed on the basis of seven broad parameters associated with the postgraduate physical education program in India. A physical education program assessment tool of 52 items was designed and administered among the stakeholders selected for the survey. An item analysis of the contents was concluded through a review process by selected experts working in higher education with experience in physical education teacher training. The data were collected from the stakeholders of the selected institutions through the Physical Education Program Assessment Tool (PEPAT). The hypothesis that the PE teacher education program is independent of the institution was significant. The study directed a need towards a robust admission process emphasizing the identification and selection of potential candidates and quality control of intake, with a scientific process developed according to Indian education policies and academic structure. The results revealed that the universities do not have similar functional and delivery processes for the physical education teacher training program.
The study reflects the need for physical education universities and institutions to identify the best practices to be followed in the delivery of physical education programs, through strategic management studies on the identified parameters, before establishing strict standards and norms for achieving excellence in physical education in India.
Keywords: assessment, benchmarking, curriculum, physical education, teacher education
Procedia PDF Downloads 557
553 Empathy and Yoga Philosophy: Both Eastern and Western Concepts
Authors: Jacqueline Jasmine Kumar
Abstract:
This paper seeks to challenge the predominant Western-centric paradigm concerning empathy by conducting an exploration of its presence within both Western and Eastern philosophical traditions. The primary focus of this inquiry is the examination of the Indian yogic tradition, encompassing the four yogas: bhakti (love/devotion), karma (action), jnāna (knowledge), and rāja (psychic control). Through this examination, it is demonstrated that empathy does not exclusively originate from Western philosophical thought. Rather than superimposing the Western conceptualization of empathy onto the tenets of Indian philosophy, this study endeavours to unearth a distinct array of ideas and concepts within the four yogas, which significantly contribute to our comprehension of empathy as a universally relevant phenomenon. To achieve this objective, an innovative approach is adopted, delving into various facets of empathy, including the propositional, affective/intuitive, perspective-taking, and actionable dimensions. This approach intentionally deviates from conventional Western frameworks, shifting the emphasis towards lived morality as opposed to engagement in abstract theoretical discourse. While it is acknowledged that the explicit term "empathy" may not be overtly articulated within the yogic tradition, a scrupulous examination reveals the underlying substance and significance of this phenomenon. Throughout this comparative analysis, the paper aims to lay a robust foundation for the discourse of empathy within the context of the human experience. By assimilating insights gleaned from the Indian yogic tradition, it contributes to the expansion of our comprehension of empathy, enabling an exploration of its multifaceted dimensions. Ultimately, this scholarly endeavour facilitates the development of a more comprehensive and inclusive perspective on empathy, transcending cultural boundaries and enriching our collective repository of knowledge.
Keywords: Bhakti, Yogic, Jnana, Karma
Procedia PDF Downloads 76
552 Adaptive Energy-Aware Routing (AEAR) for Optimized Performance in Resource-Constrained Wireless Sensor Networks
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Wireless Sensor Networks (WSNs) are crucial for numerous applications, yet they face significant challenges due to resource constraints such as limited power and memory. Traditional routing algorithms like Dijkstra, Ad hoc On-Demand Distance Vector (AODV), and Bellman-Ford, while effective in path establishment and discovery, are not optimized for the unique demands of WSNs due to their large memory footprint and power consumption. This paper introduces the Adaptive Energy-Aware Routing (AEAR) model, a solution designed to address these limitations. AEAR integrates reactive route discovery, localized decision-making using geographic information, energy-aware metrics, and dynamic adaptation to provide a robust and efficient routing strategy. We present a detailed comparative analysis using a dataset of 50 sensor nodes, evaluating power consumption, memory footprint, and path cost across the AEAR, Dijkstra, AODV, and Bellman-Ford algorithms. Our results demonstrate that AEAR significantly reduces power consumption and memory usage while optimizing path cost. This improvement is achieved through adaptive mechanisms that balance energy efficiency and link quality, ensuring prolonged network lifespan and reliable communication. The AEAR model's superior performance underlines its potential as a viable routing solution for energy-constrained WSN environments, paving the way for more sustainable and resilient sensor network deployments.
Keywords: wireless sensor networks (WSNs), adaptive energy-aware routing (AEAR), routing algorithms, energy efficiency, network lifespan
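The kind of energy-aware metric AEAR combines can be sketched with a shortest-path search whose hop cost blends link distance with the residual energy of the receiving node, so routes steer around nearly depleted sensors. The weighting scheme and all names below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: Dijkstra search with an energy-aware hop cost.
import heapq

def energy_aware_path(graph, energy, src, dst, alpha=0.5):
    """graph: {node: [(neighbor, distance), ...]}; energy: {node: 0..1}.
    Hop cost = alpha*distance + (1-alpha)*(1/residual_energy)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            cost = alpha * w + (1 - alpha) / max(energy[v], 1e-6)
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Two routes from A to D: via B (shorter, but B is nearly drained) or via C.
graph = {"A": [("B", 1.0), ("C", 2.0)], "B": [("D", 1.0)], "C": [("D", 2.0)]}
energy = {"A": 1.0, "B": 0.05, "C": 0.9, "D": 1.0}
```

With B drained the route detours through C; restoring B's energy makes the geometrically shorter route attractive again, which is the adaptive behaviour the abstract describes.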
Procedia PDF Downloads 34
551 Generalized Additive Model for Estimating Propensity Score
Authors: Tahmidul Islam
Abstract:
The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the most used technique in many studies. The logistic regression model is also used with cubic splines for retaining flexibility in the model. However, choosing the functional form of the logistic regression model has been a question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests and neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to assess the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and to compare its performance with usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. Various simple and complex models have been considered for treatment under several situations (small/large samples, low/high numbers of treatment units), and we examined which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models.
The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
Keywords: accuracy, covariate balance, generalized additive model, logistic regression, non-linearity, propensity score matching
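The covariate-balance criterion used to compare the two PS models can be illustrated with the standardized mean difference (SMD): after matching, an |SMD| below roughly 0.1 is conventionally read as adequate balance. This is a stdlib-only sketch of that diagnostic; the 0.1 threshold is the common convention, not a value from this study.

```python
# Sketch: standardized mean difference as a covariate-balance diagnostic.
from statistics import mean, variance

def smd(treated, control):
    """Standardized mean difference for one covariate (pooled SD denominator)."""
    pooled_sd = ((variance(treated) + variance(control)) / 2) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd

def balanced(treated, control, threshold=0.1):
    """Conventional rule of thumb: |SMD| < 0.1 suggests adequate balance."""
    return abs(smd(treated, control)) < threshold
```

After matching on either PS model, one would compute `smd` per covariate and compare how many covariates each model leaves unbalanced.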
Procedia PDF Downloads 365
550 Assessing the Effects of Sub-Concussive Head Impacts on Clinical Measures of Neurologic Function
Authors: Gianluca Del Rossi
Abstract:
Sub-concussive impacts occur frequently in collision sports such as American tackle football. Sub-concussive level impacts are defined as hits to the head that do not result in the clinical manifestation of concussion injury. Presently, little is known about the short-term effects of repeated sub-concussive blows to the head. Therefore, the purpose of this investigation was to determine whether standard clinical measures could detect acute impairments in neurologic function resulting from the accumulation of sub-concussive impacts throughout a season of high school American tackle football. Simple reaction time, using the ruler-drop test, and oculomotor performance, using the King-Devick (KD) test, were assessed in 15 athletes prior to the start of the athletic season, each week during the season, and once following its completion. The mean reaction times and fastest KD scores recorded for each participant at each test session were analyzed to assess changes in reaction time and oculomotor performance over the course of the season. Analyses of KD data revealed improvements in oculomotor performance from baseline measurements (i.e., decreased time), with most weekly comparisons to baseline being significantly different. Statistical tests performed on the mean reaction times obtained via the ruler-drop test throughout the season revealed statistically significant declines (i.e., increased time) between baseline and weeks 3, 4, 10, and 12 of the athletic season. The inconsistent and contrasting findings between KD data and reaction time demonstrate the need to identify more robust clinical measures to definitively assess whether repeated sub-concussive impacts to the head are acutely detrimental to patients.
Keywords: head injury, mTBI and sport, subclinical head trauma, sub-concussive impacts
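The ruler-drop test mentioned above converts the distance the ruler falls before being caught into a reaction time via the free-fall relation d = (1/2) g t², i.e. t = sqrt(2d / g). A minimal sketch of that conversion:

```python
# Sketch: reaction time from ruler-drop catch distance via free fall.
import math

G = 9.81  # gravitational acceleration, m/s^2

def ruler_drop_reaction_time(distance_m):
    """Reaction time (s) from the distance (m) the ruler falls before the catch."""
    return math.sqrt(2 * distance_m / G)

# Catching the ruler after 20 cm of free fall:
t = ruler_drop_reaction_time(0.20)  # roughly 0.2 s
```

A longer catch distance maps to a slower reaction time, which is how the weekly declines reported in the abstract would appear in the raw measurements.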
Procedia PDF Downloads 203
549 A Robust System for Foot Arch Type Classification from Static Foot Pressure Distribution Data Using Linear Discriminant Analysis
Authors: R. Periyasamy, Deepak Joshi, Sneh Anand
Abstract:
Foot posture assessment is important for evaluating foot types that cause gait and postural defects in all age groups. Although different methods are used for classification of foot arch type in clinical/research examination, there is no clear approach for selecting the most appropriate measurement system. Therefore, the aim of this study was to develop a system for evaluation of foot type as a clinical decision-making aid for diagnosis of flat and normal arches, based on the Arch Index (AI) and a foot pressure distribution parameter, the Power Ratio (PR). The accuracy of the system was evaluated for 27 subjects with ages ranging from 24 to 65 years. Foot area measurements (hindfoot, midfoot, and forefoot) were acquired simultaneously from foot pressure intensity images using the portable PedoPowerGraph system, and the images were analyzed in the frequency domain to obtain the foot pressure distribution parameter, PR. From our results, we obtain 100% classification accuracy of normal and flat feet by using the linear discriminant analysis method. We observe no misclassification of foot types because foot pressure distribution data are incorporated rather than the arch index (AI) alone. We found that the mid-foot pressure distribution ratio data and arch index (AI) values are well correlated with foot arch type based on visual analysis. Therefore, this paper suggests that the proposed system is accurate and makes it easy to determine foot arch type from the arch index (AI) together with mid-foot pressure distribution ratio data rather than the physical area of contact alone. Hence, such a computational-tool-based system can help clinicians assess foot structure and cross-check their diagnosis of flat foot against the mid-foot pressure distribution.
Keywords: arch index, computational tool, static foot pressure intensity image, foot pressure distribution, linear discriminant analysis
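The Arch Index feeding the classifier above divides the toeless footprint into three equal-length regions and takes the mid-foot share of the total contact area. A minimal sketch, using the conventional Cavanagh-Rodgers cut-offs (AI > 0.26 flat, AI < 0.21 high arch); those thresholds are the standard convention, assumed here rather than taken from this paper.

```python
# Sketch: Arch Index from regional contact areas, with conventional cut-offs.

def arch_index(hindfoot_area, midfoot_area, forefoot_area):
    """AI = mid-foot contact area as a fraction of the total footprint area."""
    return midfoot_area / (hindfoot_area + midfoot_area + forefoot_area)

def classify_arch(ai):
    """Conventional Cavanagh-Rodgers bands (assumed, not the paper's)."""
    if ai > 0.26:
        return "flat"
    if ai < 0.21:
        return "high"
    return "normal"
```

The paper's contribution is precisely that AI alone can misclassify, which is why the LDA system adds the mid-foot pressure distribution ratio as a second feature.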
Procedia PDF Downloads 495
548 Analysis of Real Time Seismic Signal Dataset Using Machine Learning
Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.
Abstract:
Due to the closeness between seismic signals and non-seismic signals, it is difficult to detect earthquakes using conventional methods. In order to distinguish between seismic events and non-seismic events depending on their amplitude, our study processes the data that come from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, recursive short-term average/long-term average (STA/LTA), and Carl STA/LTA for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on significant feature extraction for machine learning-based seismic event detection. This serves as motivation for compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension reduction techniques due to the temporal complexity. The proposed notable features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a demonstration using a hybrid dataset (captured by different sensors) shows how this model may also be employed in a real-time setting while lowering false alarm rates. The planned study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). Wideband seismic signals from the BSVK and CUKG station sensors, located near Basavakalyan, Karnataka, and at the Central University of Karnataka, respectively, make up the experimental dataset.
Keywords: Carl STA/LTA, feature extraction, real time, dataset, machine learning, seismic detection
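The recursive STA/LTA trigger named above can be sketched compactly: short- and long-term averages of signal energy are updated sample by sample, and an event is declared when their ratio crosses a trigger threshold. The window lengths and threshold below are illustrative, not the study's settings.

```python
# Sketch: recursive STA/LTA event trigger on a 1-D signal.

def sta_lta(signal, n_sta=5, n_lta=50, threshold=4.0):
    """Return sample indices where the recursive STA/LTA ratio exceeds threshold."""
    sta = lta = 1e-9  # tiny seeds avoid division by zero
    triggers = []
    for i, x in enumerate(signal):
        e = x * x  # instantaneous energy
        sta += (e - sta) / n_sta   # fast exponential average
        lta += (e - lta) / n_lta   # slow exponential average
        if i >= n_lta and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background with a burst of amplitude 10 starting at sample 100:
signal = [0.1] * 100 + [10.0] * 20 + [0.1] * 30
onsets = sta_lta(signal)
```

Because the STA reacts to the burst much faster than the LTA, the ratio spikes right at the onset, which is the trigger ratio the abstract refers to.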
Procedia PDF Downloads 123
547 The Effect of Object Presentation on Action Memory in School-Aged Children
Authors: Farzaneh Badinlou, Reza Kormi-Nouri, Monika Knopf
Abstract:
Enacted tasks are typically remembered better than when the same task materials are only verbally encoded, a robust finding referred to as the enactment effect. It has been assumed that the enactment effect is independent of object presence, but in adults the size of the effect can be increased by providing objects at the study phase. To clarify these issues in children, free recall and cued recall performance for action phrases with or without real objects were compared in 410 school-aged children from four age groups (8, 10, 12, and 14 years old). In this study, subjects were instructed to learn a series of action phrases under three encoding conditions: participants listened to verbal action phrases (VTs: verbal tasks), performed the phrases (SPTs: subject-performed tasks), or observed the experimenter perform the phrases (EPTs: experimenter-performed tasks). Then, free recall and cued recall memory tests were administered. The results revealed that real objects, compared with imaginary objects, improved recall performance in SPTs and EPTs, but more so in VTs. It was also found that object presence was not necessary for the occurrence of the enactment effect, but it changed the size of the effect in all age groups. The size of the enactment effect was more pronounced for imaginary objects than for real objects in both free recall and cued recall memory tests in children. It was discussed that SPTs and EPTs differentially facilitate item-specific and relational information processing and that providing objects can moderate the processing underlying the encoding conditions.
Keywords: action memory, enactment effect, item-specific processing, object, relational processing, school-aged children
Procedia PDF Downloads 238
546 On the Solution of Fractional-Order Dynamical Systems Endowed with Block Hybrid Methods
Authors: Kizito Ugochukwu Nwajeri
Abstract:
This paper presents a distinct approach to solving fractional dynamical systems using hybrid block methods (HBMs). Fractional calculus extends the concept of derivatives and integrals to non-integer orders and finds increasing application in fields such as physics, engineering, and finance. However, traditional numerical techniques often struggle to accurately capture the complex behaviors exhibited by these systems. To address this challenge, we develop HBMs that integrate single-step and multi-step methods, enabling the simultaneous computation of multiple solution points while maintaining high accuracy. Our approach employs polynomial interpolation and collocation techniques to derive a system of equations that effectively models the dynamics of fractional systems. We also directly incorporate boundary and initial conditions into the formulation, enhancing the stability and convergence properties of the numerical solution. An adaptive step-size mechanism is introduced to optimize performance based on the local behavior of the solution. Extensive numerical simulations are conducted to evaluate the proposed methods, demonstrating significant improvements in accuracy and efficiency compared to traditional numerical approaches. The results indicate that our hybrid block methods are robust and versatile, making them suitable for a wide range of applications involving fractional dynamical systems. This work contributes to the existing literature by providing an effective numerical framework for analyzing complex behaviors in fractional systems, thereby opening new avenues for research and practical implementation across various disciplines.
Keywords: fractional calculus, numerical simulation, stability and convergence, adaptive step-size mechanism, collocation methods
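For readers unfamiliar with fractional dynamics, the classical Grünwald-Letnikov scheme gives a feel for what such solvers must handle: the derivative of order 0 < α ≤ 1 at step n depends on the whole solution history through binomial-coefficient weights. This is a standard explicit baseline, sketched here for context; it is not the authors' hybrid block method.

```python
# Sketch: explicit Grünwald-Letnikov stepping for D^alpha y = f(t, y), y(0)=y0.

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_0..w_n via the standard recursion."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1 - (alpha + 1) / k))
    return w

def solve_fractional(f, y0, alpha, h, n_steps):
    """h^{-a} * sum_k w_k y_{n-k} = f  =>  y_n = h^a f - history term."""
    w = gl_weights(alpha, n_steps)
    y = [y0]
    for n in range(1, n_steps + 1):
        # history term: weighted sum over ALL past values (the system's memory)
        hist = sum(w[k] * y[n - k] for k in range(1, n + 1))
        y.append(h ** alpha * f((n - 1) * h, y[n - 1]) - hist)
    return y
```

For α = 1 the weights collapse to (1, −1, 0, …) and the scheme reduces to explicit Euler; the ever-growing history sum for fractional α is exactly the cost that motivates more efficient schemes such as the block methods of this paper.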
Procedia PDF Downloads 41
545 Functional Connectivity Signatures of Polygenic Depression Risk in Youth
Authors: Louise Moles, Steve Riley, Sarah D. Lichenstein, Marzieh Babaeianjelodar, Robert Kohler, Annie Cheng, Corey Horien, Abigail Greene, Wenjing Luo, Jonathan Ahern, Bohan Xu, Yize Zhao, Chun Chieh Fan, R. Todd Constable, Sarah W. Yip
Abstract:
Background: Risks for depression are myriad and include both genetic and brain-based factors. However, relationships between these systems are poorly understood, limiting understanding of disease etiology, particularly at the developmental level. Methods: We use a data-driven machine learning approach, connectome-based predictive modeling (CPM), to identify functional connectivity signatures associated with polygenic risk scores for depression (DEP-PRS) among youth from the Adolescent Brain and Cognitive Development (ABCD) study across diverse brain states, i.e., during resting state, affective working memory, response inhibition, and reward processing. Results: Using 10-fold cross-validation with 100 iterations and permutation testing, CPM identified connectivity signatures of DEP-PRS across all examined brain states (rho's=0.20-0.27, p's<.001). Across brain states, DEP-PRS was positively predicted by increased connectivity between frontoparietal and salience networks, increased motor-sensory network connectivity, decreased salience-to-subcortical connectivity, and decreased subcortical-to-motor-sensory connectivity. Subsampling analyses demonstrated that model accuracies were robust across random subsamples of N=1,000, N=500, and N=250 but became unstable at N=100. Conclusions: These data, for the first time, identify neural networks of polygenic depression risk in a large sample of youth before the onset of significant clinical impairment. Identified networks may be considered potential treatment targets or vulnerability markers for depression risk.
Keywords: genetics, functional connectivity, pre-adolescents, depression
Procedia PDF Downloads 57
544 Family Firms Performance: Examining the Impact of Digital and Technological Capabilities using Partial Least Squares Structural Equation Modeling and Necessary Condition Analysis
Authors: Pedro Mota Veiga
Abstract:
This study comprehensively evaluates the repercussions of innovation, digital advancements, and technological capabilities on the operational performance of companies across fifteen European Union countries following the initial wave of the COVID-19 pandemic. Drawing insights from longitudinal data sourced from the 2019 World Bank business surveys and the subsequent 2020 World Bank COVID-19 follow-up business surveys, our extensive examination involves a diverse sample of 5,763 family businesses. In exploring the relationships between these variables, we adopt a nuanced approach to assess the impact of innovation and digital and technological capabilities on performance. This analysis unfolds along two distinct perspectives: one rooted in necessity and the other in sufficiency. The methodological framework employed integrates partial least squares structural equation modeling (PLS-SEM) with necessary condition analysis (NCA), providing a robust foundation for drawing meaningful conclusions. The findings of the study underscore a positive influence on the performance of family firms stemming from both technological capabilities and digital advancements. Furthermore, it is pertinent to highlight the indirect contribution of innovation to enhanced performance, operating through its impact on digital capabilities. This research contributes valuable insights to the broader understanding of how innovation, coupled with digital and technological capabilities, can serve as pivotal factors in shaping the post-COVID-19 landscape for businesses across the European Union. The intricate analysis of family businesses, in particular, adds depth to the comprehension of the dynamics at play in diverse economic contexts within the European Union.Keywords: digital capabilities, technological capabilities, family firms performance, innovation, NCA, PLS-SEM
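The NCA side of the analysis asks a different question from the regression side: is a capability necessary for high performance, i.e., is the upper-left corner of the (capability, performance) scatter empty? A minimal sketch of the ceiling-envelopment idea (CE-FDH style) with toy data; this is an illustrative reading of the technique, not the authors' implementation.

```python
# Sketch: a free-disposal-hull ceiling line for necessary condition analysis.

def ce_fdh_ceiling(points):
    """For each observed x, the maximum y among points with x' <= x."""
    xs = sorted({x for x, _ in points})
    pts = sorted(points)
    ceiling, best, i = {}, float("-inf"), 0
    for x in xs:
        while i < len(pts) and pts[i][0] <= x:
            best = max(best, pts[i][1])
            i += 1
        ceiling[x] = best
    return ceiling

# Toy (capability, performance) observations:
ceiling = ce_fdh_ceiling([(1, 2), (2, 3), (2, 1), (3, 5)])
```

A ceiling that stays low at small x means high performance is never observed without the capability, the empirical signature of a necessary condition.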
Procedia PDF Downloads 62
543 Functional Feeding Groups and Trophic Levels of Benthic Macroinvertebrates Assemblages in Albertine Rift Rivers and Streams in South Western Uganda
Authors: Peace Liz Sasha Musonge
Abstract:
Behavioral aspects of species nutrition, such as feeding methods and food type, are archetypal biological traits signifying how species have adapted to their environment. The concept of functional feeding group (FFG) analysis is currently used to ascertain the trophic levels of the aquatic food web in a specific microhabitat. However, in Eastern Africa, information about the FFG classification of benthic macroinvertebrates in highland rivers and streams is almost absent, and existing studies have fragmented datasets. For this reason, we carried out a robust study to determine the feeding type, trophic level, and FFGs of 56 macroinvertebrate taxa (identified to family level) from Albertine rift valley streams. Our findings showed that all five major functional feeding groups were represented: Gatherer Collectors (GC), Predators (PR), Shredders (SH), Scrapers (SC), and Filterer Collectors (FC). The most dominant functional feeding group was the Gatherer Collectors (GC), which accounted for 53.5% of the total population. The most abundant GC families were Baetidae (7813 individuals), Chironomidae NTP (5628), and Caenidae (1848). The majority of the macroinvertebrate population feeds on fine particulate organic matter (FPOM) from the stream bottom. In terms of taxa richness, the Predators (PR) had the highest value (24 taxa), and the Filterer Collectors had the fewest (3). The families with the highest numbers of Predators (PR) were Corixidae (1024 individuals), Coenagrionidae (445), and Libellulidae (283). However, Predators accounted for only 7.4% of the population. The findings highlighted the functional feeding groups and habitat type of macroinvertebrate communities along an altitudinal gradient.
Keywords: trophic levels, functional feeding groups, macroinvertebrates, Albertine rift
Procedia PDF Downloads 233
542 Thick Data Techniques for Identifying Abnormality in Video Frames for Wireless Capsule Endoscopy
Authors: Jinan Fiaidhi, Sabah Mohammed, Petros Zezos
Abstract:
Capsule endoscopy (CE) is an established noninvasive diagnostic modality for investigating small bowel disease. CE has a pivotal role in assessing patients with suspected bleeding or identifying evidence of active Crohn's disease in the small bowel. However, CE produces lengthy videos of at least eighty thousand frames, captured at a rate of 2 frames per second. Gastroenterologists cannot dedicate 8 to 15 hours to reading the CE video frames to arrive at a diagnosis. This is why analyzing CE videos with modern artificial intelligence techniques becomes a necessity. However, machine learning, including deep learning, has failed to report robust results because of the lack of large samples to train its neural nets. In this paper, we describe a thick data approach that learns from a few anchor images. We use well-established datasets like KVASIR and CrohnIPI to filter candidate frames that include interesting anomalies in any CE video. We identify candidate frames based on feature extraction to provide representative measures of the anomaly, like the size of the anomaly and the color contrast compared to the image background, and later feed these features to a decision tree that can classify the candidate frames as showing a condition like Crohn's disease. Based on the availability of ulcer areas in the candidate frames, our thick data approach reported an accuracy of 89.9% for detecting Crohn's disease on KVASIR and 83.3% on CrohnIPI. We are continuing our research to fine-tune our approach by adding more thick data methods for enhancing diagnosis accuracy.
Keywords: thick data analytics, capsule endoscopy, Crohn's disease, siamese neural network, decision tree
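The candidate-frame filtering step described above can be sketched as two interpretable features per frame, the anomaly's size as a fraction of the frame and its intensity contrast against the background, fed to a shallow decision rule. The feature definitions, thresholds, and names below are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: interpretable frame features plus a hand-built decision stump.

def frame_features(frame, anomaly_mask):
    """frame: 2D grayscale values; anomaly_mask: same-shape 0/1 mask."""
    n = sum(len(row) for row in frame)
    inside = [v for rf, rm in zip(frame, anomaly_mask)
              for v, m in zip(rf, rm) if m]
    outside = [v for rf, rm in zip(frame, anomaly_mask)
               for v, m in zip(rf, rm) if not m]
    size = len(inside) / n  # anomaly area as a fraction of the frame
    contrast = (abs(sum(inside) / len(inside) - sum(outside) / len(outside))
                if inside and outside else 0.0)
    return size, contrast

def candidate_frame(size, contrast, min_size=0.02, min_contrast=30.0):
    """Decision stump: flag frames worth a gastroenterologist's attention."""
    return size >= min_size and contrast >= min_contrast

# A 4x4 frame with a small bright region marked by the mask:
frame = [[100] * 4 for _ in range(4)]
frame[0][0] = frame[0][1] = 200
mask = [[1, 1, 0, 0], [0] * 4, [0] * 4, [0] * 4]
size, contrast = frame_features(frame, mask)
```

Only frames passing the stump would reach the diagnostic stage, which is how the approach cuts an eighty-thousand-frame video down to a reviewable candidate set.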
Procedia PDF Downloads 155