Search results for: ethnic identification
1180 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction
Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal
Abstract:
The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in closed-section tunnels is valuable for analyzing hydrodynamic performance but can pose significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. It is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel. This approach reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and the boundary layer noise is presented. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
Keywords: acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation
Procedia PDF Downloads 332
1179 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice
Authors: Diana Reckien
Abstract:
Vulnerability assessments are increasingly used to support policy-making in complex environments such as urban areas. Usually, vulnerability studies involve the construction of aggregate (sub-)indices and the subsequent mapping of those indices across an area of interest. Vulnerability studies offer several advantages: they are effective communication tools, can inform a wider debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and no single framework or methodology has proven to serve best in all environments; indicators vary greatly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped can easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting drivers of vulnerability in space. There is therefore an urgent need to develop the methodology of vulnerability studies further towards a common framework, which is one motivation for this paper. We introduce a social vulnerability approach and compare it with bio-physical and sectoral vulnerability approaches in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area.
The second approach involves variable reduction, mostly Principal Component Analysis (PCA), which reduces a set of interrelated variables to a smaller number of weakly correlated components, which are likewise added to form a composite index. We test these two approaches to index construction, as well as two different metrics of the input variables, on the area of New York City and compare the outcomes for the five boroughs. Our analysis shows that the mapping exercise yields markedly different results in the outer parts of the boroughs, such as outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in current adaptation policy. We infer from this that current adaptation policy and practice in NY may need to be reconsidered, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e. the high-density areas of Manhattan, central Brooklyn, central Queens and the southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage should hazards strike there. This is conceivable, e.g., during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of the recent planning practice of NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analysis points towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in current adaptation practice in New York City.
Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity
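The two index-construction approaches described above can be sketched as follows, assuming a units-by-indicators matrix; the min-max normalisation, equal weights, and number of components are illustrative choices, not the study's:

```python
import numpy as np

def additive_index(X, weights):
    """Weighted additive composite: min-max normalise each
    indicator column, then take the weighted sum per unit area."""
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    return Xn @ np.asarray(weights)

def pca_index(X, n_components=2):
    """PCA-based composite: project standardised indicators onto
    the leading principal components and sum the scores
    (variance-weighted sums are another common choice)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(Xs, rowvar=False)
    w, V = np.linalg.eigh(cov)
    idx = np.argsort(w)[::-1][:n_components]
    return (Xs @ V[:, idx]).sum(axis=1)
```

Mapping either output per census tract reproduces, in miniature, the comparison the abstract makes between the additive and PCA-based vulnerability maps.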
Procedia PDF Downloads 394
1178 Defining Priority Areas for Biodiversity Conservation to Support for Zoning Protected Areas: A Case Study from Vietnam
Authors: Xuan Dinh Vu, Elmar Csaplovics
Abstract:
There has been an increasing need for methods to define priority areas for biodiversity conservation, since the effectiveness of biodiversity conservation in protected areas largely depends on the availability of material resources. The identification of priority areas requires the integration of biodiversity data with social data on human pressures and responses. However, the deficit of comprehensive data and reliable methods remains a key challenge in zoning where the demand for conservation is most urgent and where the outcomes of conservation strategies can be maximized. To fill this gap, the study applied the environmental Condition–Pressure–Response model to derive a set of criteria for identifying priority areas for biodiversity conservation. Our empirical data were compiled from 185 respondents, categorized into three main groups (governmental administration, research institutions, and protected areas in Vietnam), using a well-designed questionnaire. The Analytic Hierarchy Process (AHP) was then used to determine the weight of each criterion. Our results show that the priority level for biodiversity conservation can be described by three main indicators, condition, pressure, and response, with weights of 26%, 41%, and 33%, respectively. Based on these three indicators, 7 criteria and 15 sub-criteria were developed to support the definition of priority areas for biodiversity conservation and the zoning of protected areas. In addition, our study revealed that the governmental administration and protected area groups focused on the 'Pressure' indicator, while the research institution group emphasized the importance of the 'Response' indicator in the evaluation process.
Our results provide recommendations for applying the developed criteria to identify priority areas for biodiversity conservation in Vietnam.
Keywords: biodiversity conservation, condition–pressure–response model, criteria, priority areas, protected areas
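The AHP weighting used above can be sketched as follows: the weights are the normalised principal eigenvector of a reciprocal pairwise-comparison matrix, and Saaty's consistency ratio checks the judgments. The 3x3 example in the test simply reproduces the reported 26/41/33 weights from a perfectly consistent matrix; the random-index table is the standard Saaty one:

```python
import numpy as np

# Saaty random-index values for matrix sizes 3-5
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(pairwise):
    """Priority weights = normalised principal eigenvector of a
    reciprocal pairwise-comparison matrix (Saaty's AHP)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

def consistency_ratio(pairwise):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1);
    CR < 0.1 is the usual acceptance threshold."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lam = np.max(np.real(np.linalg.eigvals(A)))
    return ((lam - n) / (n - 1)) / RANDOM_INDEX[n]
```

In practice, each respondent group's comparison matrix would be processed this way and the group weights aggregated.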
Procedia PDF Downloads 169
1177 Geophysical Methods of Mapping Groundwater Aquifer System: Perspectives and Inferences From Lisana Area, Western Margin of the Central Main Ethiopian Rift
Authors: Esubalew Yehualaw Melaku, Tigistu Haile Eritro
Abstract:
In this study, two basic geophysical methods are applied to map the groundwater aquifer system in the Lisana area along the Guder River, northeast of Hosanna town, near the western margin of the Central Main Ethiopian Rift. The main target of the study is to map the potential aquifer zone and investigate the groundwater potential for current and future development of the resource in the Gode area. The geophysical methods employed are Vertical Electrical Sounding (VES) and magnetic survey techniques. Electrical sounding was used to examine and map the depth to the potential aquifer zone of the groundwater and its distribution over the area, while the magnetic survey was used to delineate contacts between lithologic units and geological structures. The 2D magnetic modeling and the geoelectric sections are used to identify weak zones, which control the groundwater flow and storage system. The geophysical survey comprises twelve VES readings collected using a Schlumberger array along six profile lines, and more than four hundred (400) magnetic readings at about 10 m station intervals along four profiles and 20 m intervals along three random profiles. The results revealed that the potential aquifer in the area lies at depths ranging from 45 m to 92 m. This corresponds to the highly weathered/fractured ignimbrite and pumice layer with sandy soil, which is the main water-bearing horizon. Overall, the neighborhoods of four VES points (VES-2, VES-3, VES-10, and VES-11) show good water-bearing zones in the study area.
Keywords: vertical electrical sounding, magnetic survey, aquifer, groundwater potential
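For the Schlumberger array used in the VES survey, apparent resistivity follows from the measured voltage and current via the standard geometric factor; a minimal sketch (the electrode spacings and readings in the test are made-up illustrative values, not survey data):

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn, dV, I):
    """Apparent resistivity (ohm*m) for a Schlumberger array:
    rho_a = K * dV / I, with geometric factor
    K = pi * ((AB/2)^2 - (MN/2)^2) / MN,
    where AB/2 is the half current-electrode spacing and MN the
    potential-electrode spacing (all in metres)."""
    k = math.pi * (ab_half ** 2 - (mn / 2.0) ** 2) / mn
    return k * dV / I
```

A sounding curve is built by computing rho_a at successively larger AB/2 spacings and inverting it for the layered-resistivity model that locates the aquifer.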
Procedia PDF Downloads 77
1176 Identification and Characterization of Groundwater Recharge Sites in Kuwait
Authors: Dalal Sadeqi
Abstract:
Groundwater is an important component of Kuwait's water resources. Although limited in quantity and often poor in quality, the significance of this natural source of water cannot be overemphasized. Recharge of groundwater in Kuwait occurs during periodic storm events, especially in open desert areas. Runoff water dissolves accumulated surficial meteoric salts and subsequently leaches them into the groundwater following a period of evaporative enrichment at or near the soil surface. Geochemical processes governing groundwater recharge vary in time and space. Stable isotope (18O and 2H) and geochemical signatures are commonly used to gain insight into recharge processes and groundwater salinization mechanisms, particularly in arid and semiarid regions. This article addresses the mechanism used in identifying and characterizing the main watershed areas in Kuwait using stable isotopes, in an attempt to determine favorable groundwater recharge sites in the country. Stable isotopes of both rainwater and groundwater were targeted in different hydrogeological settings. Additionally, data and information obtained from subsurface logs in the study area were collected and analyzed to develop a better understanding of the lateral and vertical extent of the groundwater aquifers. Geographic Information System (GIS) and RockWorks 3D modelling software were used to map the hydrogeomorphology of the study area and the subsurface lithology of the investigated aquifers. The collected data and information, including major ion chemistry, isotopes, subsurface characteristics, and hydrogeomorphology, were integrated in a GIS platform to identify and map suitable natural recharge areas as part of an integrated water resources management scheme that addresses the challenge of sustaining the country's groundwater reserves.
Keywords: scarcity, integrated, recharge, isotope
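One common way stable isotopes flag the evaporative enrichment mentioned above is the deuterium excess relative to the global meteoric water line (δ2H = 8·δ18O + 10): samples with d-excess well below about 10 per mil have typically evaporated before recharge. The formula is standard (Dansgaard), but its use here is illustrative of the approach, not the author's exact workflow:

```python
def d_excess(d18o, d2h):
    """Deuterium excess d = delta2H - 8 * delta18O (per mil).
    Waters on the global meteoric water line have d ~ 10;
    markedly lower values suggest evaporative enrichment of the
    runoff before it recharged the aquifer."""
    return d2h - 8.0 * d18o
```

Plotting d-excess per sampling site alongside major-ion chemistry in the GIS layer helps separate fresh, rapidly recharged water from evaporated water.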
Procedia PDF Downloads 113
1175 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies
Authors: Masoud Sheidai
Abstract:
Biologists working in population genetics and phylogeny face diverse research tasks, such as assessing populations' genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach concise conclusions, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, various bioinformatic and statistical methods, based on well-documented assumptions, have become the proper analytical tools in the hands of researchers. Species delineation is usually carried out with clustering methods such as K-means clustering, based on distance measures appropriate to the studied features of the organisms. A well-defined species is assumed to be separable from other taxa by molecular barcodes. Species relationships are studied using molecular markers, which are analyzed by methods such as multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Species population structuring and genetic divergence are usually investigated by PCoA and PCA methods and network diagrams, supported by bootstrapping of the data. The association of genes and DNA sequences with ecological and geographical variables is determined by latent factor mixed models (LFMM) and redundancy analysis (RDA), which are based on Bayesian and distance methods, respectively. Molecular and morphological characters differentiating the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We illustrate these methods and related conclusions with examples from different edible and medicinal plant species.
Keywords: GWAS analysis, K-means clustering, LFMM, multidimensional scaling, redundancy analysis
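The K-means clustering mentioned above for species delineation can be sketched with plain Lloyd's iterations; this is a toy implementation for illustration, whereas a real study would use an established library and a problem-specific distance measure:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's K-means: repeatedly assign each point to the
    nearest centroid, then move each centroid to the mean of its
    cluster. X is an (n_samples, n_features) float array."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # distances: (n_samples, k)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Applied to a marker-distance matrix embedded by PCoA, the cluster labels would correspond to candidate species groupings.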
Procedia PDF Downloads 122
1174 Prediction of B-Cell Epitope for 24 Mite Allergens: An in Silico Approach towards Epitope-Based Immune Therapeutics
Authors: Narjes Ebrahimi, Soheila Alyasin, Navid Nezafat, Hossein Esmailzadeh, Younes Ghasemi, Seyed Hesamodin Nabavizadeh
Abstract:
Immunotherapy with allergy vaccines is of great importance in allergen-specific immunotherapy. In recent years, B-cell epitope-based vaccines have attracted considerable attention, and the prediction of epitopes is crucial to the design of these types of allergy vaccines. B-cell epitopes may be linear or conformational, and the prerequisite for the identification of conformational epitopes is information about the allergens' tertiary structures. Bioinformatics approaches have paved the way towards the design of epitope-based allergy vaccines through the prediction of tertiary structures and epitopes. Mites are among the major allergy contributors: several mite allergens can elicit allergic reactions, yet their structures and epitopes are not well established. B-cell epitopes of various groups of mite allergens (24 allergens in 6 allergen groups) were therefore predicted in the present work. Tertiary structures of the 17 allergens with unknown structure were predicted and refined with the RaptorX and GalaxyRefine servers, respectively, and the predicted structures were further evaluated by the Rampage, ProSA-web, ERRAT and Verify 3D servers. Linear and conformational B-cell epitopes were identified with the ElliPro, Bcepred, and DiscoTope 2 servers. To improve the accuracy level, consensus epitopes were selected, yielding 54 conformational and 133 linear consensus epitopes. Furthermore, overlapping epitopes in each allergen group were defined following sequence alignment of the allergens in each group, and the predicted epitopes were compared with experimentally identified epitopes. The presented results provide valuable information for further studies on allergy vaccine design.
Keywords: B-cell epitope, immunotherapy, in silico prediction, mite allergens, tertiary structure
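The consensus-epitope selection described above can be sketched as a simple vote across servers: keep the residue positions called epitopic by at least a minimum number of predictors. The positions and threshold below are hypothetical; the study's actual selection criteria may differ:

```python
from collections import Counter

def consensus_epitopes(predictions, min_servers=2):
    """Residue positions called epitopic by at least `min_servers`
    of the per-server predictions, where each prediction is a set
    of residue positions."""
    counts = Counter(p for pred in predictions for p in pred)
    return {p for p, c in counts.items() if c >= min_servers}
```

Runs of consecutive consensus positions would then be reported as linear consensus epitopes, and spatially clustered positions on the modelled structure as conformational ones.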
Procedia PDF Downloads 159
1173 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System
Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin
Abstract:
The most robust and economical method for the laboratory diagnosis of TB is to identify mycobacterial acid-fast bacilli (AFB) under acid-fast staining, despite its disadvantages of low sensitivity and labor intensity. Though digital pathology has become popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. The objective was therefore to evaluate such an automated system for the identification of AFB. A total of 5,930 smears were enrolled in this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection, and 272 AFB smears were used for transfer learning to increase accuracy. Referee medical technicians served as the gold standard in cases of result discrepancy. For a total of 1,726 AFB smears, the automated system's accuracy, sensitivity and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity of human technicians was only 33.8% (38/142), whereas the automated system achieved 74.6% (106/142), which is significantly higher; this is the first such automated microscope system for TB smear testing evaluated in a controlled trial. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, the system is capable of remote access via the internet and can be deployed in areas with limited medical resources.
Keywords: TB smears, automated microscope, artificial intelligence, medical imaging
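The reported performance figures follow from the standard 2x2 confusion-table definitions; a minimal sketch (the tp/fp/tn/fn counts in the test are back-calculated from the percentages quoted above, not taken from the paper's raw tables):

```python
def smear_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from a 2x2 confusion
    table: tp = true positives, fp = false positives,
    tn = true negatives, fn = false negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,        # correct calls overall
        "sensitivity": tp / (tp + fn),        # positives detected
        "specificity": tn / (tn + fp),        # negatives cleared
    }
```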
Procedia PDF Downloads 227
1172 Application and Limitation of Heavy Metal Pollution Indicators in Coastal Environment of Pakistan
Authors: Noor Us Saher
Abstract:
Oceans and marine areas are of great importance, mainly with regard to food resources, fishery products and the livelihoods that rely on them. Aquatic pollution is common, with various chemicals entering mainly from urban, industrial and commercial sources, including oil and chemical spills. Many hazardous wastes and industrial effluents contaminate nearby areas and begin to affect the marine environment. These contaminated conditions may become worse in aquatic environments situated beside the world's largest cities, which are hubs of various commercial activities. Heavy metal contamination is one of the most serious problems facing marine environments, and during past decades it has intensified with increasing urbanization and industrialization. The coastal regions of Pakistan face severe threats from various organic and inorganic pollutants, especially the estuarine and coastal areas of Karachi, the most populated and industrialized city situated along the coastline. Metal contamination causes severe toxicity in biota, resulting in the degradation of marine environments and the depletion of fishery resources and their sustainability. There are several abiotic (air, water and sediment) and biotic (fauna and flora) indicators of metal contamination. However, all of these indicators have limitations and complexities that delay their application to rehabilitation and conservation in the marine environment. Inadequate evidence has been presented on this significant topic to date, and this study discusses metal pollution and its consequences along the marine environment of Pakistan. The study further helps in the identification of possible hazards to the ecological system and allied resources, informing management strategies and decision-making for sustainable approaches.
Keywords: coastal and estuarine environment, heavy metals pollution, pollution indicators, Pakistan
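Sediment-based metal pollution indicators of the kind discussed above are often expressed as simple ratios to background values; one widely used example, not necessarily the one applied in this study, is Müller's geoaccumulation index:

```python
import math

def igeo(concentration, background):
    """Mueller geoaccumulation index
    Igeo = log2(Cn / (1.5 * Bn)),
    where Cn is the measured sediment concentration, Bn the
    geochemical background, and 1.5 buffers natural background
    variation. Igeo < 0 is conventionally read as unpolluted,
    with higher classes indicating increasing contamination."""
    return math.log2(concentration / (1.5 * background))
```

Computing Igeo per metal per sampling station gives a comparable contamination map across estuarine and coastal sites.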
Procedia PDF Downloads 247
1171 Genetic Characterization of Acanthamoeba Isolates from Amoebic Keratitis Patients
Authors: Sumeeta Khurana, Kirti Megha, Amit Gupta, Rakesh Sehgal
Abstract:
Background: Amoebic keratitis is a painful, vision-threatening infection caused by the free-living pathogenic amoeba Acanthamoeba. It can be misdiagnosed and is very difficult to treat if not suspected early. The epidemiology of the Acanthamoeba genotypes causing infection in our geographical area is, to the best of our knowledge, not yet known. Objective: To characterize Acanthamoeba isolates from amoebic keratitis patients. Methods: A total of 19 isolates were included, obtained from patients with amoebic keratitis presenting over the last 10 years to the Advanced Eye Centre at the Postgraduate Institute of Medical Education and Research, a tertiary care centre in North India. Corneal scrapings, lens solution and lens cases (for lens wearers) were collected for microscopic examination, culture and molecular diagnosis. All isolates were maintained on non-nutrient agar overlaid with E. coli, and 13 strains were axenised and maintained in modified peptone yeast dextrose agar. Identification of Acanthamoeba genotypes was based on amplification of the diagnostic fragment 3 (DF3) region of the 18S rRNA gene followed by sequencing. Nucleotide similarity searches were performed with BLAST against the GenBank database (http://www.ncbi.nlm.nih.gov/blast), and multiple sequence alignments were generated using CLUSTAL X. Results: Nine of the 19 Acanthamoeba isolates were found to belong to genotype T4, followed by 6 isolates of genotype T11, 3 of T5 and 1 of T3. Conclusion: T4 is the predominant Acanthamoeba genotype in our geographical area. Further studies should focus on differences in the pathogenicity of these genotypes and their clinical significance.
Keywords: Acanthamoeba, free living amoeba, keratitis, genotype, ocular
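The genotype assignment described above rests on sequence similarity of the DF3 region; a toy sketch, using made-up sequences and simple ungapped identity in place of the BLAST alignment actually used:

```python
def percent_identity(a, b):
    """Ungapped percent identity between two pre-aligned
    sequences of equal length."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

def assign_genotype(query, references):
    """Pick the reference genotype whose DF3 sequence is most
    similar to the query. The reference sequences used here are
    hypothetical stand-ins, not real genotype references."""
    return max(references, key=lambda g: percent_identity(query, references[g]))
```

Real genotyping additionally applies the conventional threshold of about 5% sequence divergence to separate genotypes, which a similarity ranking alone does not capture.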
Procedia PDF Downloads 234
1170 Efficiency of PCR-RFLP for the Identification of Adulteries in Meat Formulation
Authors: Hela Gargouri, Nizar Moalla, Hassen Hadj Kacem
Abstract:
Meat adulteration, which affects the safety and quality of food, is becoming one of the main public concerns across the world. Its drastic consequences for the meat industry have highlighted the urgent necessity of controlling product quality and have exposed the complexity of both supply and processing circuits. Given the scale of the problem, authenticity testing of foods, particularly meat and its products, is deemed crucial to avoid unfair market competition and to protect consumers from fraudulent practices of meat adulteration. The adoption of authentication methods by food quality-control laboratories is becoming a priority issue. However, in some developing countries the number of food tests is still insignificant, although a variety of processed and traditional meat products are widely consumed. Little attention has been paid to providing an easy, fast, reproducible, and low-cost molecular test that could be conducted in a basic laboratory. In the current study, a 359 bp fragment of the cytochrome b gene was mapped by PCR-RFLP, first using fresh biological supports (DNA and meat) and then turkey salami as an example of commercial processed meat. The technique was established through several optimizations, notably the selection of restriction enzymes. Digestion with BsmAI, SspI, and TaaI succeeded in identifying the seven included animal species, both when the meat consisted of a single species and when it was a mixture of different origins. In this study, PCR-RFLP with a universal primer met our needs by providing an indirect sequencing method that identifies, through restriction enzymes, the specificities characterizing different species on the same amplicon, thereby reducing the number of tests required.
Keywords: adulteration, animal species, authentication, meat, mtDNA, PCR-RFLP
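The PCR-RFLP readout described above is a pattern of restriction-fragment lengths; an in-silico digest can be sketched by cutting the amplicon at every recognition site. The example uses the EcoRI site GAATTC purely for illustration, not the BsmAI/SspI/TaaI enzymes of the study, and places the cut at the start of the site rather than at the enzyme's true offset:

```python
def digest_fragments(seq, site):
    """Fragment lengths obtained by cutting `seq` at every
    occurrence of the recognition `site`. For simplicity the cut
    is placed at the start of the site; real enzymes cut at a
    defined offset within or beyond it."""
    cuts = []
    pos = seq.find(site)
    while pos != -1:
        cuts.append(pos)
        pos = seq.find(site, pos + 1)
    cuts = [c for c in cuts if c > 0]          # drop zero-length fragment
    bounds = [0] + cuts + [len(seq)]
    return [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]
```

Species identification then amounts to matching the predicted fragment-length pattern of each candidate species' amplicon against the observed gel bands.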
Procedia PDF Downloads 111
1169 Identification of Significant Genes in Rheumatoid Arthritis, Melanoma Metastasis, Ulcerative Colitis and Crohn's Disease
Authors: Krishna Pal Singh, Shailendra Kumar Gupta, Olaf Wolkenhauer
Abstract:
Background: Our study aimed to identify common genes and potential targets across four diseases: rheumatoid arthritis, melanoma metastasis, ulcerative colitis, and Crohn's disease. We used a network and systems biology approach to identify the hub gene, which can act as a potential target for all four disease conditions. The regulatory network was extracted from the protein-protein interaction (PPI) network using the MCODE module in Cytoscape. Our objective was to investigate the significance of hub genes in these diseases using gene ontology and KEGG pathway enrichment analysis. Methods: Our methodology involved collecting disease gene-related information from the DisGeNET database, constructing the PPI network and screening for core genes, and then conducting gene ontology and KEGG pathway enrichment analysis. Results: We found that IL6 plays a critical role in all four disease conditions and in different pathways that can be associated with their development. Conclusions: The theoretical importance of our research is that we employed various systems and structural biology techniques to identify a crucial protein that could serve as a promising target for treating multiple diseases. Our data collection and analysis procedures involved rigorous scrutiny, ensuring high-quality results. We conclude that IL6 plays a significant role in all four diseases and can act as a potential target for treating them. Our findings may have important implications for the development of novel therapeutic interventions for these diseases.
Keywords: melanoma metastasis, rheumatoid arthritis, inflammatory bowel diseases, integrated bioinformatics analysis
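Hub-gene screening of the kind described above can be sketched, at its simplest, as ranking PPI nodes by degree; MCODE actually scores dense neighbourhoods, so degree is only a proxy, and the edge list below is a hypothetical illustration:

```python
from collections import defaultdict

def hub_genes(edges, top=1):
    """Rank the nodes of a PPI edge list by degree and return the
    `top` highest-degree nodes as hub candidates. MCODE uses a
    density-based criterion; plain degree is the simplest proxy."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return sorted(deg, key=deg.get, reverse=True)[:top]
```

The candidates would then be taken forward to gene ontology and KEGG enrichment, as in the study's workflow.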
Procedia PDF Downloads 87
1168 Assessment Using Copulas of Simultaneous Damage to Multiple Buildings Due to Tsunamis
Authors: Yo Fukutani, Shuji Moriguchi, Takuma Kotani, Terada Kenjiro
Abstract:
If risk management of company-owned assets, risk assessment of real estate portfolios, and risk identification for entire regions are to be implemented, it is necessary to consider simultaneous damage to multiple buildings. This research focuses on the Sagami Trough earthquake tsunami, which could have a significant effect on the Japanese capital region, and proposes a method for simultaneous damage assessment using copulas that can take into account the correlation of tsunami depths and building damage between two sites. First, the tsunami inundation depths at two sites were simulated using a nonlinear long-wave equation. The tsunamis were simulated by varying the slip amount (five cases) and the depth (five cases) for each of 10 sources along the Sagami Trough. For each source, the frequency distribution of the tsunami inundation depth was evaluated using the response surface method. Monte Carlo simulation was then conducted, and the frequency distributions of tsunami inundation depth were evaluated at the target sites for all sources of the Sagami Trough; these serve as the marginal distributions. Kendall's tau for the tsunami inundation simulations at the two sites was 0.83. Based on this value, Gaussian, t, Clayton, and Gumbel copulas (n = 10,000) were generated, and the joint distributions of the damage rate were evaluated using the marginal distributions and the copulas. With the correlation of the tsunami inundation depths at the two sites taken into account, the expected value hardly changed compared with the uncorrelated case, but the ninety-ninth percentile of the damage rate was approximately 2%, and the maximum was approximately 6%, when using the Gumbel copula.
Keywords: copulas, Monte-Carlo simulation, probabilistic risk assessment, tsunamis
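The Gaussian-copula step described above can be sketched as follows: convert Kendall's tau to the normal correlation via rho = sin(pi*tau/2), sample correlated normals, and map them to dependent uniform pairs, which would then be fed through each site's marginal inundation-depth distribution. This sketches the sampling only, not the authors' full workflow:

```python
import math
import random
from statistics import NormalDist

def gaussian_copula_pairs(tau, n, seed=1):
    """Sample n dependent uniform pairs from a Gaussian copula
    whose Kendall's tau equals `tau`. Uses the standard relation
    rho = sin(pi * tau / 2) between Kendall's tau and the normal
    correlation, then maps correlated normals through the normal
    CDF to get uniforms."""
    rho = math.sin(math.pi * tau / 2.0)
    rng = random.Random(seed)
    phi = NormalDist().cdf
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((phi(z1), phi(z2)))
    return pairs
```

Each uniform pair (u, v) is converted to a depth pair via the inverse CDFs of the two sites' marginal distributions, preserving the tau = 0.83 rank dependence reported above.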
Procedia PDF Downloads 141
1167 [Keynote Talk]: Unlocking Transformational Resilience in the Aftermath of a Flood Disaster: A Case Study from Cumbria
Authors: Kate Crinion, Martin Haran, Stanley McGreal, David McIlhatton
Abstract:
Past research has demonstrated that disasters continue to escalate in frequency and magnitude worldwide, representing a key concern for the global community. Understanding and responding to the increasing risk posed by disaster events has become a central task for disaster managers. An emerging trend within the literature acknowledges the need to move beyond coping and reinstatement of the status quo, towards incremental adaptive change and transformational actions for long-term sustainable development. As such, a growing strand of research concerns understanding the change required to address ever-increasing and unpredictable disaster events. Capturing transformational capacity and resilience, however, is not without its difficulties, which explains the dearth of attempts to capture this capacity. Adopting a case study approach, this research seeks to enhance awareness of transformational resilience by identifying key components and indicators that determine the resilience of flood-affected communities within Cumbria. Grounding and testing a theoretical resilience framework within the case studies permits the identification of how perceptions of risk influence community resilience actions. Further, it assesses how levels of social capital and connectedness affect the extent of interplay between the resources and capacities that drive transformational resilience. This research thus seeks to expand the existing body of knowledge by enhancing awareness of resilience in post-disaster-affected communities, investigating indicators of community capacity building and resilience actions that facilitate transformational resilience during the recovery and reconstruction phase of a flood disaster.
Keywords: capacity building, community, flooding, transformational resilience
Procedia PDF Downloads 288
1166 Identification of Breeding Objectives for Begait Goat in Western Tigray, North Ethiopia
Authors: Hagos Abraham, Solomon Gizaw, Mengistu Urge
Abstract:
A sound breeding objective is the basis for genetic improvement in the overall economic merit of farm animals. The Begait goat is one of the identified breeds in Ethiopia; it is a multipurpose breed, serving as a source of cash income and of food (meat and milk). Despite its importance, no formal breeding objectives exist for the Begait goat. The objective of the present study was to identify breeding objectives for the breed through two approaches, an own-flock ranking experiment and deterministic bio-economic models, as a preliminary step towards designing sustainable breeding programs for the breed. In the own-flock ranking experiment, a total of forty-five households were visited at their homesteads and asked to select from their own flock, with reasons, the best, second best, and third best does and the most inferior doe. The age and the previous reproduction and production records of the identified animals were recorded, and live body weight and some linear body measurements were taken. The bio-economic model included performance traits (weights, daily weight gain, kidding interval, litter size, milk yield, kid mortality, pregnancy and replacement rates) and economic parameters (revenues and costs). Close agreement was observed between the farmers' rankings and the bio-economic model results. In general, the results of the present study indicate that Begait goat owners could improve the performance of their goats and the profitability of their farms by selecting for litter size, six-month weight, pre-weaning kid survival rate and milk yield.
Keywords: bio-economic model, economic parameters, own-flock ranking, performance traits
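The revenue side of a deterministic bio-economic model like the one above can be sketched as kiddings per year times litter size, survival, market weight and price; this is a toy calculation with hypothetical values, and a full model would subtract feed, labour and other costs:

```python
def annual_doe_output(litter_size, kidding_interval_days, survival,
                      six_month_weight_kg, price_per_kg):
    """Annual saleable output per doe:
    (kiddings per year) x (litter size) x (pre-weaning survival)
    x (six-month market weight) x (price per kg).
    A toy revenue-side sketch of a bio-economic model; cost terms
    are deliberately omitted."""
    kiddings_per_year = 365.0 / kidding_interval_days
    return (kiddings_per_year * litter_size * survival
            * six_month_weight_kg * price_per_kg)
```

Differentiating such a profit function with respect to each trait is how bio-economic models derive the economic weights that ranked litter size, six-month weight, survival and milk yield as the key selection traits.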
Procedia PDF Downloads 65
1165 A Phenomenological Approach to Computational Modeling of Analogy
Authors: José Eduardo García-Mendiola
Abstract:
In this work, a phenomenological approach to the computational modeling of analogy processing is carried out. The paper considers the structure of analogy, grounding the genesis of its elements in Husserl's genetic theory of association. Among the processes involved in drawing analogical inferences, one is crucial for efficient retrieval of base cases from long-term memory: analogical transference grounded in familiarity. In general, analogical reasoning has been described as a way in which a conscious agent tries to determine or define a certain scope of objects and the relationships between them using prior knowledge of another, familiar domain of objects and relations. However, a complete description of the analogy process requires deeper phenomenological consideration insofar as its simulation by computer programs is the aim; such consideration also gives an idea of how complex a fully computational account of the elements of analogy would be. In fact, familiarity is not the result of a mere chain of repetitions of objects or events; it is generated insofar as the object, attribute, or event in question can be integrated into a context that takes shape as functionalities and functional approaches or perspectives on the object are defined. Familiarity is not generated by identifying parts or objective determinations in isolation from those functionalities and approaches; rather, at the core of familiarity between entities of different kinds lies the way they are functionally encoded.
Hoping to make deeper inroads into these topics, this essay considers how cognitive-computational perspectives can build on the phenomenological projection of the analogy process, both by reviewing achievements already obtained and by exploring new theoretical-experimental configurations for implementing analogy models on special-purpose as well as general-purpose machines.
Keywords: analogy, association, encoding, retrieval
Procedia PDF Downloads 121
1164 Dissection of Genomic Loci for Yellow Vein Mosaic Virus Resistance in Okra (Abelmoschus esculentas)
Authors: Rakesh Kumar Meena, Tanushree Chatterjee
Abstract:
Okra (Abelmoschus esculentus L. Moench), or lady's finger, is an important vegetable crop belonging to the Malvaceae family. Unfortunately, the production and productivity of okra are severely affected by yellow vein mosaic virus (YVMV). A cross of AO:189 (resistant parent) x AO:191 (susceptible parent) was used to develop the mapping population, which comprised 143 individuals (F₂:F₃). The population was characterized by physiological and pathological observations. Screening of 360 DNA markers was performed to survey parental polymorphism between the contrasting parents AO:189 and AO:191; of these, 84 polymorphic markers were used for genotyping the mapping population. The markers were distributed across four linkage groups (LG1, LG2, LG3, and LG4). LG3 covered the longest span (106.8 cM) with the maximum number of markers (27), while LG1 was the smallest linkage group in terms of length (71.2 cM). QTL identification using the composite interval mapping approach detected two prominent QTLs, QTL1 and QTL2, for resistance against YVMV disease. These QTLs were placed in the marker intervals NBS-LRR72-Path02 and NBS-LRR06-NBS-LRR65 on linkage groups 02 and 04, respectively. The LOD values of QTL1 and QTL2 were 5.7 and 6.8, accounting for 19% and 27% of the total phenotypic variation, respectively. The findings provide two linked markers that can be used as efficient diagnostic tools to distinguish between YVMV-resistant and susceptible okra cultivars/genotypes. Lines identified as highly resistant to YVMV infection can be used as donor lines for this trait. This will be instrumental in accelerating the trait improvement program in okra and will substantially reduce the yield losses caused by this viral disease.
Keywords: Okra, yellow vein mosaic virus, resistant, linkage map, QTLs
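The LOD scores above come from composite interval mapping; as a rough illustration of what a LOD score measures, the sketch below computes a single-marker LOD (a simplification of the CIM analysis used in the study) by comparing regression fits with and without the marker. The function name and the synthetic 0/1/2 genotype coding are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def single_marker_lod(genotypes, phenotypes):
    """LOD score and phenotypic variance explained for one marker.

    Compares the residual sum of squares of a null model (phenotype
    mean only) against a linear regression of phenotype on genotype
    (coded 0/1/2): LOD = (n/2) * log10(RSS_null / RSS_marker).
    """
    g = np.asarray(genotypes, dtype=float)
    y = np.asarray(phenotypes, dtype=float)
    n = len(y)
    rss0 = float(np.sum((y - y.mean()) ** 2))
    X = np.column_stack([np.ones(n), g])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = float(np.sum((y - X @ beta) ** 2))
    lod = (n / 2.0) * np.log10(rss0 / rss1)
    pve = 1.0 - rss1 / rss0  # R^2, i.e., phenotypic variance explained
    return lod, pve

# Synthetic F2-style data: 143 individuals with an additive marker effect.
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=143)
pheno = 2.0 * geno + rng.normal(0.0, 1.0, size=143)
lod, pve = single_marker_lod(geno, pheno)
```

A LOD above the conventional threshold of about 3 would flag the marker as linked to the trait.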
Procedia PDF Downloads 214
1163 The Optimization of TICSI in the Convergence Mechanism of Urban Water Management
Authors: M. Macchiaroli, L. Dolores, V. Pellecchia
Abstract:
With the recent Resolution n. 580/2019/R/idr, the Italian Regulatory Authority for Energy, Networks and Environment (ARERA) has introduced, for urban water managements characterized by persistent critical issues in the planning and organization of the service and in the implementation of the interventions necessary to improve infrastructure and management quality, a new mechanism for determining tariffs: the regulatory scheme of Convergence. The aim of this regulatory scheme is to overcome the Water Service Divide in order to improve the stability of local institutional structures, technical quality, and contractual quality, as well as to guarantee transparency for users of the service. The Convergence scheme presupposes the identification of the cost items to be considered in the tariff in parametric terms, distinguishing three possible cases according to the type of historical data available to the manager. The study focuses in particular on operators that have neither data on tariff revenues nor data on operating costs. In this case, the manager's constraint on revenues (VRG) is estimated on the basis of a reference benchmark and becomes the starting point for defining the structure of the tariff classes, in compliance with the TICSI provisions (Integrated Text for tariff classes, ARERA Resolution n. 665/2017/R/idr). The proposed model implements recent studies on optimization models for the definition of tariff classes in compliance with the constraints dictated by TICSI in the application of the Convergence mechanism, offering itself as a support tool for managers and the local water regulatory authority in the decision-making process.
Keywords: decision-making process, economic evaluation of projects, optimizing tools, urban water management, water tariff
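As a toy illustration of how a VRG can anchor tariff-class construction, the sketch below solves for a base tariff given per-class billed volumes and fixed class multipliers so that revenue matches the VRG. The function and the multiplier structure are simplifying assumptions for illustration; the actual TICSI-compliant optimization involves many more constraints than this.

```python
def base_tariff_for_vrg(vrg, volumes, multipliers):
    """Base tariff t0 such that billed revenue equals the VRG.

    Class i is billed at t0 * multipliers[i] per unit volume, so
    revenue = sum(volumes[i] * t0 * multipliers[i]) must equal vrg.
    """
    weighted_volume = sum(v * m for v, m in zip(volumes, multipliers))
    return vrg / weighted_volume

# Hypothetical example: two user classes, the second billed at twice
# the base rate.
t0 = base_tariff_for_vrg(1000.0, [100.0, 50.0], [1.0, 2.0])
revenue = 100.0 * t0 * 1.0 + 50.0 * t0 * 2.0
```

With these invented figures, the base tariff is 5.0 and billed revenue exactly recovers the VRG of 1000.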
Procedia PDF Downloads 118
1162 Investigating Potential Pest Management Strategies for Citrus Gall Wasp in Australia
Authors: M. Yazdani, J. F. Carragher
Abstract:
Citrus gall wasp (CGW), Bruchophagus fellis (Hym.: Eurytomidae), is an Australian native insect pest. CGW has now become a problem of national concern, threatening the viability of the entire Australian citrus industry. CGW appears to exhibit a preference for certain citrus species; growers report that grapefruit and lemons are most severely infested, with oranges and mandarins affected to a lesser extent. Given the specificity of these host plant-insect interactions, it is speculated that plant volatiles play a significant role in host recognition. To address whether plant volatiles are involved in host plant preference by CGW, we tested the behavioral response of CGW to plants in a wind tunnel. The results showed that CGW had a significantly higher preference for grapefruit and lemon than for the other cultivars, with the lowest preference recorded for mandarin (chi-square test, P<0.001). Because CGW exhibited a detectable choice, further studies were undertaken to identify the components of the volatiles from each species. We trapped the volatile chemicals emitted by a 30 cm tip of each plant onto a solid Porapak matrix. Eluted extracts were then analysed by gas chromatography-mass spectrometry (GC-MS), and the presumptive identities of the major compounds from each species were inferred from the MS library. Although the same major compounds were present in all of the cultivars, their relative ratios differed between species. Next, we will validate the identities of the key volatiles using authentic standards and establish their ability to elicit olfactory responses in CGW in wind tunnel and field experiments. Identification of the semiochemicals involved in host location by CGW is of interest not only from an ecological perspective but also for the development of novel pest control strategies.
Keywords: Citrus gall wasp, Bruchophagus fellis, volatiles, semiochemicals, IPM
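The host-preference result above rests on a chi-square test; a minimal sketch of such a test against a uniform no-preference expectation is shown below. The landing counts are invented for illustration, not the study's data.

```python
def chi_square_uniform(observed):
    """Pearson chi-square statistic against a uniform (no-preference)
    expectation across the choice categories."""
    n = sum(observed)
    k = len(observed)
    expected = n / k
    return sum((o - expected) ** 2 / expected for o in observed)

# Invented wind-tunnel choice counts:
# grapefruit, lemon, orange, mandarin, other.
counts = [30, 25, 5, 5, 5]
stat = chi_square_uniform(counts)
# With df = 4 the 0.001 critical value is about 18.47, so a statistic
# this large would indicate a highly significant host preference.
```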
Procedia PDF Downloads 229
1161 Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles in the Kidney Disease
Authors: Leonardo C. Pacheco-Londoño, Nataly J Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta-Hoyos, Elkin Navarro, Gustavo Aroca-Martinez, Karin Rondón-Payares, Alberto C. Espinosa-Garavito, Samuel P. Hernández-Rivera
Abstract:
At the Life Science Research Center at Simon Bolivar University, a primary focus is the diagnosis of various diseases, and the use of gold nanoparticles (Au-NPs) in diverse biomedical applications is continually expanding. In the present study, Au-NPs were employed as substrates for surface-enhanced Raman spectroscopy (SERS) aimed at diagnosing kidney diseases arising from lupus nephritis (LN), preeclampsia (PC), and hypertension (H). Discrimination models for distinguishing patients with and without kidney disease were developed from the SERS signals of urine samples by partial least squares-discriminant analysis (PLS-DA). A comparative study of the Raman signals across the three conditions was conducted, leading to the identification of potential metabolite signals. Model performance was assessed through cross-validation and external validation, determining parameters such as sensitivity and specificity; the models showed average values of 0.9 for both parameters. Additionally, a secondary analysis was performed using machine learning (ML) models, in which different ML algorithms were evaluated for their efficiency. Finally, it is worth highlighting that this collaborative effort involved two university research centers and two healthcare institutions, ensuring ethical treatment of patient samples and informed consent.
Keywords: SERS, Raman, PLS-DA, kidney diseases
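PLS-DA projects high-dimensional spectra onto a few latent components before classifying. A minimal one-component sketch with synthetic "spectra" is below; this is an illustrative simplification, not the authors' actual cross-validated multi-component model, and all names and data are invented.

```python
import numpy as np

def plsda_one_component(X, y):
    """Minimal one-component PLS-DA for binary labels y in {0, 1}.

    The weight vector maximizes covariance between spectra and class
    labels; samples are projected onto it, and a regression on the
    scores is thresholded at 0.5.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    mu = X.mean(axis=0)
    Xc, yc = X - mu, y - y.mean()
    w = Xc.T @ yc
    w = w / np.linalg.norm(w)
    t = Xc @ w                        # latent scores
    b = float(t @ yc) / float(t @ t)  # regress labels on scores
    intercept = float(y.mean())

    def predict(X_new):
        scores = (np.asarray(X_new, dtype=float) - mu) @ w
        return (b * scores + intercept > 0.5).astype(int)

    return predict

# Synthetic "spectra": class 1 has elevated intensity in 5 channels.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(40, 20))
X1 = rng.normal(0.0, 1.0, size=(40, 20))
X1[:, :5] += 3.0
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)
predict = plsda_one_component(X, y)
accuracy = float((predict(X) == y).mean())
```

In practice, the number of components is chosen by cross-validation, and sensitivity and specificity are reported on held-out data rather than the training set used here.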
Procedia PDF Downloads 42
1160 Vibration Based Damage Detection and Stiffness Reduction of Bridges: Experimental Study on a Small Scale Concrete Bridge
Authors: Mirco Tarozzi, Giacomo Pignagnoli, Andrea Benedetti
Abstract:
Structural systems are often subjected to degradation processes due to different kinds of phenomena, such as unexpected loadings, ageing of the materials, and fatigue cycles. This is especially true for bridges, whose safety evaluation is crucial for planning maintenance. This paper discusses the experimental evaluation of stiffness reduction from frequency changes due to a uniform damage scenario. For this purpose, a 1:4-scale bridge was built in the laboratory of the University of Bologna. It is made of concrete, and its cross section is composed of a slab connected to four beams. This concrete deck is 6 m long and 3 m wide, and its natural frequencies were identified dynamically by exciting it with an impact hammer, a dropping weight, or by walking on it randomly. A set of loading cycles was then applied to the bridge in order to produce a uniformly distributed crack pattern. During the loading phase, both the cracking moment and the yielding moment were reached. In order to define the relationship between frequency variation and loss of stiffness, the natural frequencies of the bridge were identified before and after the occurrence of damage at each load step. The behavior of breathing cracks and its effect on the natural frequencies was taken into account in the analytical calculations. Using an exponential function derived from a large number of experimental tests in the literature, it was possible to predict the stiffness reduction from the measured frequency variations. During the load tests, crack opening and midspan vertical displacement were also monitored.
Keywords: concrete bridge, damage detection, dynamic test, frequency shifts, operational modal analysis
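The authors used an empirical exponential fit from the literature; as a simpler reference point, for a single mode with unchanged mass, stiffness scales with the square of the natural frequency, so a measured frequency drop maps directly to a stiffness loss. The sketch below shows this idealized relation (not the paper's fitted function).

```python
def stiffness_reduction(f_undamaged, f_damaged):
    """Stiffness loss implied by a natural-frequency drop for a single
    mode with unchanged mass: k is proportional to f**2, so
    delta_k / k = 1 - (f_d / f_0)**2."""
    return 1.0 - (f_damaged / f_undamaged) ** 2

# Under this model, a 10% frequency drop implies a 19% stiffness loss.
loss = stiffness_reduction(10.0, 9.0)
```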
Procedia PDF Downloads 183
1159 Numerical Calculation and Analysis of Fine Echo Characteristics of Underwater Hemispherical Cylindrical Shell
Authors: Hongjian Jia
Abstract:
A finite-length cylindrical shell with a spherical cap is a typical engineering approximation of actual underwater targets. Research on the omnidirectional acoustic scattering characteristics of this target model can provide a favorable basis for the detection and identification of actual underwater targets. The elastic resonance characteristics of the target result from the combined effect of target length, shell-thickness ratio, and material. Under different materials and geometric dimensions, the coincidence resonance characteristics of the target differ markedly. Addressing this problem, this paper obtains the omnidirectional acoustic scattering field of an underwater hemispherical cylindrical shell by numerical calculation and studies, in turn, the influence of the target's geometric parameters (length, shell-thickness ratio) and material parameters on its coincidence resonance characteristics. The study found that the formant interval is not a stable value and changes with the incident angle. The formant interval is only weakly affected by target length and shell-thickness ratio but is significantly affected by material properties, making it an effective feature for classifying and identifying targets of different materials. A quadratic polynomial is used to fit the relationship between the formant interval and the angle. The results show that the three fitting coefficients for the stainless steel and aluminum targets are significantly different and can be used as effective feature parameters to characterize the target materials.
Keywords: hemispherical cylindrical shell, fine echo characteristics, geometric and material parameters, formant interval
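The quadratic fit of formant interval versus incident angle can be sketched with a least-squares polynomial fit; the angle grid, units, and coefficients below are invented for illustration, not the paper's values.

```python
import numpy as np

# Illustrative formant-interval data: a quadratic trend in incident
# angle (coefficients invented, units arbitrary).
angles = np.linspace(0.0, 90.0, 19)
interval = 0.002 * angles**2 - 0.15 * angles + 12.0

# A degree-2 least-squares fit recovers the three coefficients that
# the paper uses as material-discriminating features.
coeffs = np.polyfit(angles, interval, 2)
```

Comparing the three fitted coefficients across targets is then a simple feature-based classification step.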
Procedia PDF Downloads 107
1158 Sociology Curriculum and Capabilities Formation: A Case Study of Two South African Universities
Authors: B. Manyonga
Abstract:
Across the world, higher education (HE) is expanding rapidly, and issues of curriculum change have become more contentious and political than ever before. Although research informing curriculum review in the social sciences, and particularly in sociology, has been conducted, much of the analysis has been devoted to teaching and transmitting disciplinary knowledge, student identity, and epistemology, with little focus on curriculum conceptualisation and capability formation. This paper builds on and contributes to accumulating knowledge in the field of sociology curriculum design in the South African HE context. Drawing on the principles of the Capabilities Approach (CA) of Amartya Sen and Martha Nussbaum, the paper argues that sociology curriculum conceptualisation may be enriched by identifying capabilities for students. The sociological canon thus ought to be the vehicle through which student capabilities are developed. The CA throws fresh light on how a curriculum ought to be designed to offer students real opportunities, expanding individuals' choices to be and do what they want. The paper uses a case study of two South African universities, presenting an analysis of qualitative data collected from undergraduate sociology lecturers. The major findings indicate that no clear philosophy guides the conceptualisation of the curriculum; conceptualisation is instead based on lecturer expertise, research activity, and responses to topical and societal issues. Sociology lecturers reported that they do not consult students on what they want to do and be as a result of studying for a sociology degree. Although lecturers recognise some human development capabilities, such as critical thinking, multiple perspectives, and problem solving, as important for sociology students, there is little evidence of how these are being cultivated in students.
Taken together, the results suggest that the sociological canon is regarded as the starting point for curriculum planning and construction.
Keywords: capabilities approach, graduate attributes, higher education, sociology curriculum
Procedia PDF Downloads 255
1157 South African Breast Cancer Mutation Spectrum: Pitfalls to Copy Number Variation Detection Using Internationally Designed Multiplex Ligation-Dependent Probe Amplification and Next Generation Sequencing Panels
Authors: Jaco Oosthuizen, Nerina C. Van Der Merwe
Abstract:
The National Health Laboratory Service in Bloemfontein has been the diagnostic testing facility for 1830 familial breast cancer patients since 1997. Of this cohort, 540 were comprehensively screened using high-resolution melting analysis or next-generation sequencing for the presence of point mutations and/or indels. Approximately 90% of these patients still remain undiagnosed, as they are BRCA1/2 negative. Multiplex ligation-dependent probe amplification (MLPA) was initially added to screen for copy number variation but, with the introduction of next-generation sequencing in 2017, was substituted and is currently used as a confirmation assay. The aim was to investigate the viability of utilizing internationally designed copy number variation detection assays, based mostly on European/Caucasian genomic data, in a South African context. The MLPA technique is based on the hybridization and subsequent ligation of multiple probes to a targeted exon. The ligated probes are amplified using conventional polymerase chain reaction, followed by fragment analysis by means of capillary electrophoresis. The experimental design of the assay followed the guidelines of MRC-Holland. For BRCA1 (P002-D1) and BRCA2 (P045-B3), both multiplex assays were validated, and results were confirmed using a secondary probe set for each gene. The next-generation sequencing technique is based on target amplification via multiplex polymerase chain reaction, after which the amplicons are sequenced in parallel on a semiconductor chip. Amplified read counts are visualized as relative copy numbers to determine the median of the absolute values of all pairwise differences. Various experimental parameters, such as DNA quality, quantity, and signal intensity or read depth, were verified using positive and negative patients previously tested internationally.
DNA quality and quantity proved to be the critical factors during the verification of both assays. Quantity directly influenced the relative copy number frequency, whereas DNA quality and salt concentration influenced denaturation consistency in both assays. Multiplex ligation-dependent probe amplification produced false positives through ligation failure when a variant present within the ligation site inhibited ligation. Next-generation sequencing produced false positives through read dropout when primer sequences did not meet optimal multiplex binding kinetics owing to population variants in the primer binding site. The analytical sensitivity and specificity for the South African population have been proven, and verification produced repeatable reactions with regard to the detection of relative copy number differences. Both the multiplex ligation-dependent probe amplification and next-generation sequencing multiplex panels need to be optimized to accommodate South African polymorphisms present within the country's genetically diverse ethnic groups, in order to reduce the false copy number variation positive rate and increase performance efficiency.
Keywords: familial breast cancer, multiplex ligation-dependent probe amplification, next generation sequencing, South Africa
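MLPA copy number calls are typically made from per-probe dosage quotients (the sample peak relative to a reference, after normalizing each run to its own total signal), with conventional thresholds of roughly 0.7 and 1.3 for deletion and duplication. The sketch below illustrates this convention; the peak values and thresholds are assumptions for illustration, not the laboratory's validated settings.

```python
def dosage_quotients(sample_peaks, reference_peaks):
    """Per-probe dosage quotient: each run is normalized to its own
    total signal, then the sample/reference ratio is taken."""
    s_total = sum(sample_peaks)
    r_total = sum(reference_peaks)
    return [(s / s_total) / (r / r_total)
            for s, r in zip(sample_peaks, reference_peaks)]

def classify_probe(dq, low=0.7, high=1.3):
    """Conventional cutoffs: <0.7 deletion, >1.3 duplication."""
    if dq < low:
        return "deletion"
    if dq > high:
        return "duplication"
    return "normal"

# Invented peak areas: probe 2 shows a heterozygous-deletion-like drop.
dqs = dosage_quotients([100.0, 50.0, 100.0], [100.0, 100.0, 100.0])
calls = [classify_probe(dq) for dq in dqs]
```

A ligation-site variant would mimic exactly this pattern, which is why the abstract stresses confirming calls with a secondary probe set.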
Procedia PDF Downloads 230
1156 Trends in Blood Pressure Control and Associated Risk Factors Among US Adults with Hypertension from 2013 to 2020: Insights from NHANES Data
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Controlling blood pressure is critical to reducing the risk of cardiovascular disease. However, BP control rates (systolic BP < 140 mm Hg and diastolic BP < 90 mm Hg) have declined since 2013, warranting further analysis to identify contributing factors and potential interventions. This study investigates the factors associated with the decline in blood pressure (BP) control among U.S. adults with hypertension over the past decade. Data from the U.S. National Health and Nutrition Examination Survey (NHANES) were used to assess BP control trends between 2013 and 2020. The analysis included 18,927 U.S. adults with hypertension aged 18 years and older who completed study interviews and examinations. The dataset, obtained from the cardioStatsUSA and RNHANES R packages, was merged based on survey IDs. Key variables analyzed included demographic factors, lifestyle behaviors, hypertension status, BMI, comorbidities, antihypertensive medication use, and cardiovascular disease history. The prevalence of BP control declined from 78.0% in 2013-2014 to 71.6% in 2017-2020. Non-Hispanic Whites had the highest BP control prevalence (33.6% in 2013-2014), but this declined to 26.5% by 2017-2020. In contrast, BP control among Non-Hispanic Blacks increased slightly. Younger adults (aged 18-44) exhibited better BP control, but control rates declined over time. Obesity prevalence increased, contributing to poorer BP control. Antihypertensive medication use rose from 26.1% to 29.2% across the study period. Lifestyle behaviors, such as smoking and diet, also affected BP control, with nonsmokers and those with better diets showing higher control rates. Key findings indicate significant disparities in blood pressure control across racial/ethnic groups. Non-Hispanic Black participants had consistently higher odds (OR ranging from 1.84 to 2.33) of poor blood pressure control compared to Non-Hispanic Whites, while odds among Non-Hispanic Asians varied by cycle. 
Younger age groups (18-44 and 45-64) showed significantly lower odds of poor blood pressure control compared to those aged 75+, indicating better control in younger populations. Men had consistently higher odds of poor control than women, though this disparity decreased slightly in 2017-2020. Medical comorbidities such as diabetes and chronic kidney disease were associated with significantly higher odds of poor blood pressure control across all cycles. Participants with chronic kidney disease had particularly elevated odds (OR=5.54 in 2015-2016), underscoring the challenge of managing hypertension in these populations. Antihypertensive medication use was also linked with higher odds of poor control, suggesting potential difficulties in achieving target blood pressure despite treatment. Lifestyle factors such as alcohol consumption and physical activity showed no consistent association with blood pressure control. However, dietary quality appeared protective, with those reporting an excellent diet showing lower odds (OR=0.64) of poor control in the overall sample. Increased BMI was associated with higher odds of poor blood pressure control, particularly in the 30-35 and 35+ BMI categories during 2015-2016. The study highlights a significant decline in BP control among U.S. adults with hypertension, particularly among certain demographic groups and those with increasing obesity rates. Lifestyle behaviors, antihypertensive medication use, and socioeconomic factors all played a role in these trends.
Keywords: diabetes, blood pressure, obesity, logistic regression, odds ratio
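The odds ratios discussed above come from multivariable logistic regression on survey data; as a minimal unweighted illustration, an odds ratio and its 95% Wald confidence interval can be computed directly from a 2x2 table. The cell counts below are invented, not NHANES figures.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald confidence interval from a 2x2 table:
    a = exposed with poor control,   b = exposed with good control,
    c = unexposed with poor control, d = unexposed with good control."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts: 40/100 exposed vs 20/100 unexposed with poor control.
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
```

Multivariable logistic regression generalizes this by adjusting the odds ratio for covariates such as age, sex, BMI, and comorbidities.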
Procedia PDF Downloads 6
1155 Structure and Magnetic Properties of Low-Temperature Synthesized M-W Hexaferrite Composites
Authors: Young-Min Kang
Abstract:
M-type Sr-hexaferrite (SrFe12O19) is one of the most widely utilized materials in permanent magnets due to its low price, outstanding chemical stability, and appropriate hard magnetic properties. For an M-type Sr-hexaferrite with a saturation magnetization (MS) of ~74.0 emu/g, the practical limits of remanent flux density (Br) and maximum energy product (BH)max are ~4.6 kG and ~5.3 MGOe. Meanwhile, W-type hexaferrite (SrFe18O27), with a higher MS of ~81 emu/g, is a good candidate for the development of an enhanced ferrite magnet. However, the W-type hexaferrite is stable only at temperatures above 1350 ºC in air, making it hard to control grain size and coercivity. We report here high-MS M-W composite hexaferrites synthesized at 1250 ºC in air by doping Ca, Co, Mn, and Zn into the hexaferrite structure. Hexaferrite samples of stoichiometric SrFe12O19 (SrM) and Ca-Co-Mn-Zn-doped hexaferrite (Sr0.7Ca0.3Fen-0.6Co0.2Mn0.2Zn0.2Oa) were prepared by a conventional solid-state reaction process with varying Fe content (10 ≤ n ≤ 17). X-ray diffraction (XRD) and field-emission scanning electron microscopy (FE-SEM) were performed for phase identification and microstructural observation, respectively. Magnetic hysteresis curves were measured at room temperature (300 K) using a vibrating sample magnetometer (VSM). A single M-type phase was obtained in the non-doped SrM sample after calcination in the range 1200-1300 ºC, with MS in the range 72-72.6 emu/g. The Ca-Co-Mn-Zn-doped SrM with Fe content 10 ≤ n ≤ 13 showed both M- and W-phase peaks in XRD after calcination at 1250 ºC. The sample with n=13 showed MS of 70.7, 75.3, and 78.0 emu/g after calcination at 1200, 1250, and 1300 ºC, respectively. The MS values above that of non-doped SrM (~72 emu/g) are attributed to the volume portion of the W-phase. It was also revealed that the high-MS W-phase could not form if any one of Ca, Co, or Zn was missing from the substitution.
These elements are critical for forming the W-phase at a calcination temperature of 1250 ºC, which is 100 ºC lower than the calcination temperature required for non-doped Sr-hexaferrites.
Keywords: M-type hexaferrite, W-type hexaferrite, saturation magnetization, low-temperature synthesis
Procedia PDF Downloads 165
1154 Economic Impacts of Sanctuary and Immigration and Customs Enforcement Policies Inclusive and Exclusive Institutions
Authors: Alexander David Natanson
Abstract:
This paper focuses on the effects of sanctuary and Immigration and Customs Enforcement (ICE) policies on local economies. "Sanctuary cities" refers to municipal jurisdictions that limit their cooperation with the federal government's efforts to enforce immigration law. Using county-level data on economic indicators from the American Community Survey and ICE data from 2006 to 2018, this study isolates the effects of local immigration policies on U.S. counties. This is accomplished by simultaneously studying the policies' effects in counties where immigrant families are persecuted via collaboration with ICE, in contrast to counties that provide protections. The analysis includes a difference-in-differences and two-way fixed effects model. Results are robust to nearest-neighbor matching, to the random assignment of treatment, to different cutoffs for immigration policies, and to a regression discontinuity model comparing bordering counties with opposite policies. Results are also robust after restricting the data to single-year policy adoption, using the Sun and Abraham estimator, and with event-study estimation to address the staggered treatment issue. In addition, the study reverses the estimation, modeling what drives the choice of policy, to detect reverse causality biases in the estimated policy impact on economic factors. The evidence demonstrates that providing protections to undocumented immigrants increases economic activity. The estimates show gains in per capita income ranging from 3.1 to 7.2 percent, in median wages from 1.7 to 2.6 percent, and in GDP from 2.4 to 4.1 percent. Regarding labor, sanctuary counties saw total employment increase by 2.3 to 4 percent, while the unemployment rate declined by 12 to 17 percent.
The data further show that ICE policies have no statistically significant effects on income, median wages, or GDP, but adverse effects on total employment, with declines of 1 to 2 percent, mostly in rural counties, and an increase in unemployment of around 7 percent in urban counties. In addition, results show a decline in the foreign-born population in ICE counties but no change in sanctuary counties. The study finds similar results for sanctuary counties when separating the data by urban/rural status, educational attainment, gender, ethnic group, economic quintile, and number of business establishments. The takeaway from this study is that institutional inclusion underpins the dynamic nature of an economy, as inclusion allows for economic expansion through the extension of fundamental freedoms to newcomers. Inclusive policies show positive effects on economic outcomes with no evident increase in population. To make sense of these results, the hypothesis and theoretical model propose that inclusive immigration policies condition the effect of immigration by decreasing the uncertainties and constraints on immigrants' interactions in their communities, reducing the costs imposed by fear of deportation and constant criminalization, and allowing immigrants to optimize their human capital.
Keywords: inclusive and exclusive institutions, matching, fixed effects, time trends, regression discontinuity, difference-in-differences, randomization inference, Sun and Abraham estimator
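The difference-in-differences design underlying these estimates reduces, in the canonical 2x2 case, to a simple contrast of group means; the study itself uses two-way fixed effects and the Sun and Abraham estimator to handle staggered adoption, and the numbers below are invented for illustration.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Canonical 2x2 difference-in-differences: the treated group's
    change minus the control group's change, which nets out common
    time trends under the parallel-trends assumption."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Invented county-group means (e.g., per capita income in thousands):
# sanctuary counties rise from 100 to 110, comparison counties from
# 100 to 104, implying a policy effect of 6.
effect = did_estimate(100.0, 110.0, 100.0, 104.0)
```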
Procedia PDF Downloads 83
1153 Active Thermography Technique for High-Entropy Alloy Characterization Deposited with Cold Spray Technique
Authors: Nazanin Sheibanian, Raffaella Sesana, Sedat Ozbilen
Abstract:
In recent years, high-entropy alloys (HEAs) have attracted considerable attention due to their unique properties and potential applications. In this study, novel HEA coatings were prepared on Mg substrates using mechanically alloyed HEA powder feedstocks based on the Al_(0.1-0.5)CoCrCuFeNi and MnCoCrCuFeNi multi-material systems. The coatings were deposited by the cold spray (CS) process using three different process gas (N2) temperatures (650 °C, 750 °C, and 850 °C) to examine the effect of gas temperature on coating properties. Infrared thermography, a non-destructive technique, was examined as a possible quality-control method for HEA coatings applied to magnesium substrates: active thermography was employed to characterize coating properties through the thermal response of the coating. Various HEA chemical compositions and deposition temperatures were investigated. As part of this study, a comprehensive macro- and microstructural analysis of the CS HEA coatings was conducted using macrophotography, optical microscopy, scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM+EDS), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), microhardness tests, roughness measurements, and porosity assessments. These analyses provided insight into phase identification, microstructure characterization, deposition and particle deformation behavior, bonding mechanisms, and possible relationships between physical properties and thermal responses. The results show that the maximum relative radiance (∆RMax) of each sample differs depending on both the chemical composition of the HEA and the temperature at which cold spray is applied.
Keywords: active thermography, coating, cold spray, high-entropy alloy, material characterization
Procedia PDF Downloads 69
1152 Improvement of Production of γ-Aminobutyric Acid by Lactobacillus plantarum Isolated from Indigenous Fermented Durian (Tempoyak)
Authors: Yetti Marlida, Harnentis, Yuliaty Shafan Nur
Abstract:
Background: Tempoyak is a dish made from fermented durian fruit. It is usually consumed as a side dish with rice and only rarely eaten directly, since many people cannot tolerate its sour taste and aroma; it is also used as a seasoning. Its sour taste results from the fermentation of the durian flesh used as the raw material. Tempoyak is well known in Indonesia, particularly in Padang, Bengkulu, Palembang, Lampung, and Kalimantan, and is also popular in Malaysia. The purpose of this research was to improve the production of γ-aminobutyric acid (GABA) by Lactobacillus plantarum isolated from indigenous fermented durian (tempoyak). Lactic acid bacteria (LAB) previously isolated from tempoyak and able to produce GABA were selected. The study began with identification of the selected LAB by 16S rRNA sequencing, followed by optimization of GABA production with respect to culture conditions: initial pH, temperature, glutamate concentration, incubation time, and carbon and nitrogen sources. Results: Identification by polymerase chain reaction of the 16S rRNA gene (sequence length 1,400 bp) and phylogenetic analysis assigned the isolate (coded Y3) to Lactobacillus plantarum. GABA production was highest at pH 6.0, 30 °C, 0.4% glutamate, 60 h of incubation, and with glucose and yeast extract as the carbon and nitrogen sources. Conclusions: Under these optimum fermentation conditions, GABA production reached 66.06 mM.
Keywords: lactic acid bacteria, γ-aminobutyric acid, indigenous fermented durian, PCR
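The optimization described above varies one culture condition at a time and keeps the level giving the highest GABA titre. A minimal sketch of that selection step; only the reported optima (pH 6.0, 30 °C, 0.4% glutamate, 60 h, 66.06 mM) come from the abstract, and all other titre values below are hypothetical placeholders:

```python
def best_level(results):
    """Return the factor level with the highest measured GABA titre (mM)."""
    return max(results, key=results.get)

# Hypothetical one-factor-at-a-time screen of initial pH (titres in mM);
# the reported optimum pH 6.0 with 66.06 mM is from the abstract.
ph_screen = {5.0: 41.2, 5.5: 52.8, 6.0: 66.06, 6.5: 58.3}
print(best_level(ph_screen))  # 6.0
```

The same selection would be repeated for temperature, glutamate concentration, incubation time, and carbon/nitrogen source, fixing each winner before screening the next factor.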
Procedia PDF Downloads 140
1151 Difference in Virulence Factor Genes Between Transient and Persistent Streptococcus uberis Intramammary Infection in Dairy Cattle
Authors: Anyaphat Srithanasuwan, Noppason Pangprasit, Montira Intanon, Phongsakorn Chuammitri, Witaya Suriyasathaporn, Ynte H. Schukken
Abstract:
Streptococcus uberis is one of the most common mastitis-causing pathogens, with a wide range of intramammary infection (IMI) durations and pathogenicity. This study aimed to compare shared and unique virulence-factor gene clusters distinguishing persistent and transient strains of S. uberis. A total of 139 S. uberis strains were isolated from three small-holder dairy herds with a high prevalence of S. uberis mastitis. The duration of IMI was used to categorize the bacteria into two groups: transient strains (IMI shorter than 1 month) and persistent strains (IMI longer than 2 months). Six representative S. uberis strains, three from each group, were selected for analysis. All transient strains exhibited distinct multi-locus sequence types (MLSTs), indicating a highly diverse transient S. uberis population; in contrast, the sequence types of the persistent strains were already available in the online database (PubMLST). Virulence genes were identified from whole-genome sequencing (WGS) data. Differences in genome size and in the number of virulence genes were found. For example, persistent strains carried the bca gene (alpha-C protein), which is important for attachment and invasion, and the capsule-formation genes hasAB, which support evasion of antimicrobial mechanisms and survival during persistence. These findings suggest a genetic-level difference between the two strain types. Consequently, a comprehensive study of all 139 S. uberis isolates will be conducted through in-depth WGS analysis on an Illumina platform.
Keywords: Streptococcus uberis, mastitis, whole-genome sequence, intramammary infection, persistent S. uberis, transient S. uberis
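The MLST step above amounts to looking up each strain's allelic profile in a reference table: a hit returns a known sequence type, a miss flags a putatively novel type. A minimal sketch of that lookup; the profiles and ST labels below are invented for illustration and are not real PubMLST entries:

```python
def classify_sequence_type(profile, known_profiles):
    """Return the matching sequence type if the allelic profile is already
    in the database, else None (a putatively novel, unrecorded type)."""
    return known_profiles.get(tuple(profile))

# Hypothetical PubMLST-style table: 7-locus allelic profile -> sequence type
pubmlst = {
    (1, 2, 1, 3, 2, 1, 4): "ST-5",
    (2, 2, 1, 1, 2, 3, 4): "ST-86",
}
persistent_strain = [1, 2, 1, 3, 2, 1, 4]  # matches a recorded ST
transient_strain = [3, 1, 5, 2, 4, 1, 1]   # no match -> novel type

print(classify_sequence_type(persistent_strain, pubmlst))  # ST-5
print(classify_sequence_type(transient_strain, pubmlst))   # None
```

In the study's terms, persistent strains resolved to recorded STs while every transient strain fell through to the "novel" branch, which is what signalled the high diversity of the transient population.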
Procedia PDF Downloads 62