Search results for: adaptive thresholding based on RGB color
1061 Monitoring of Vector Mosquitoes of Diseases in Areas of Energy Employment Influence in the Amazon (Amapa State), Brazil
Authors: Ribeiro Tiago Magalhães
Abstract:
Objective: The objective of this study was to evaluate the influence of a hydroelectric power plant in the state of Amapá, and to present the results obtained by dimensioning the diversity of the main mosquito vectors involved in the transmission of pathogens that cause diseases such as malaria, dengue and leishmaniasis. Methodology: The present study was conducted on the banks of the Araguari River, in the municipalities of Porto Grande and Ferreira Gomes in the southern region of Amapá State. Nine monitoring campaigns were conducted, the first in April 2014 and the last in March 2016. The capture sites were selected so as to prioritize areas with possible occurrence of the species considered of greatest importance for public health and areas of contact between the wild environment and humans. Sampling efforts aimed to identify the local vector fauna and to relate it to the transmission of diseases. Accordingly, three collection phases were established, covering the periods of greatest hematophagic activity. Sampling was carried out using Shannon shack and CDC light traps and by collecting specimens with the hold method. This procedure was carried out during the morning (between 08:00 and 11:00), afternoon-twilight (between 15:30 and 18:30) and night (between 18:30 and 22:00). In the specific capture methodology using the CDC equipment, the delimited times were from 18:00 until 06:00 the following day. Results: A total of 32 species of mosquitoes were identified, and a total of 2,962 specimens were taxonomically assigned to three families (Culicidae, Psychodidae and Simuliidae) and to genera including Psorophora, Sabethes, Simulium, Uranotaenia and Wyeomyia, besides those represented by the family Psychodidae which, owing to its morphological complexity, allows safe identification (without diaphanization and slide mounting for microscopy) only at the subfamily level (Phlebotominae).
Conclusion: The nine monitoring campaigns carried out provided the basis for outlining the possible epidemiological structure in the areas of influence of the Cachoeira Caldeirão HPP, in order to point out which of the established sampling points would present, according to the group of mosquitoes identified, the greatest possibilities of disease acquisition. However, what should mainly be considered are the future events arising from reservoir filling. This argument is based on the fact that the reproductive success of Culicidae is intrinsically related to the aquatic environment, where its larvae develop until adulthood. As the water surface expands into new environments during the formation of the reservoir, the development and hatching of the eggs deposited in the substrate can be modified, causing a sudden explosion in the abundance of some genera, especially Anopheles, which prefers denser forest environments close to the water.
Keywords: Amazon, hydroelectric power plants
Procedia PDF Downloads 193
1060 Patterns of Libido, Sexual Activity and Sexual Performance in Female Migraineurs
Authors: John Farr Rothrock
Abstract:
Although migraine traditionally has been assumed to convey a relative decrease in libido, sexual activity and sexual performance, recent data have suggested that the female migraine population is far from homogeneous in this regard. We sought to determine the levels of libido, sexual activity and sexual performance in the female migraine patient population both generally and according to clinical phenotype. In this single-blind study, a consecutive series of sexually active new female patients aged 25-55 initially presenting to a university-based headache clinic and having a >1 year history of migraine were asked to anonymously complete a survey assessing their sexual histories generally and as they related to their headache disorder, along with the 19-item Female Sexual Function Index (FSFI). To serve as two separate control groups, 100 sexually active females with no history of migraine and 100 female migraineurs from the general (non-clinic) population, matched for age, marital status, educational background and socioeconomic status, completed a similar survey. Over a period of 3 months, 188 consecutive migraine patients were invited to participate. Twenty declined, and 28 of the remaining 160 potential subjects failed to meet the inclusion criterion for “sexually active” (i.e., heterosexual intercourse at a frequency of > once per month in each of the preceding 6 months). In all groups, younger age (p<.005), higher educational level attained (p<.05) and higher socioeconomic status (p<.025) correlated with a higher monthly frequency of intercourse and a higher likelihood of intercourse resulting in orgasm. Relative to the 100 control subjects with no history of migraine, the two migraine groups (total n=232) reported a lower monthly frequency of intercourse and recorded a lower FSFI score (both p<.025), but the contribution to this difference came primarily from the chronic migraine (CM) subgroup (n=92).
Patients with low frequency episodic migraine (LFEM) and mid frequency episodic migraine (MFEM) reported a higher FSFI score, higher monthly frequency of intercourse, higher likelihood of intercourse resulting in orgasm and higher likelihood of multiple active sex partners than controls. All migraine subgroups reported a decreased likelihood of engaging in intercourse during an active migraine attack, but relative to the CM subgroup (8/92=9%), a higher proportion of patients in the LFEM (12/49=25%), MFEM (14/67=21%) and high frequency episodic migraine (HFEM: 6/14=43%) subgroups reported utilizing intercourse - and orgasm specifically - as a means of potentially terminating a migraine attack. In the clinic vs non-clinic groups there were no significant differences in the dependent variables assessed. Research subjects with LFEM and MFEM may report a level of libido, frequency of intercourse and likelihood of orgasm-associated intercourse that exceeds what is reported by age-matched controls free of migraine. Many patients with LFEM, MFEM and HFEM appear to utilize intercourse/orgasm as a means to potentially terminate an acute migraine attack.
Keywords: migraine, female, libido, sexual activity, phenotype
Procedia PDF Downloads 76
1059 Patterns and Predictors of Intended Service Use among Frail Older Adults in Urban China
Authors: Yuanyuan Fu
Abstract:
Background and Purpose: Along with social and economic change, the traditional family-based care function for older people has gradually weakened in contemporary China. Acknowledging this situation, and in order to better meet older people's needs for formal services and improve the quality of later life, this study seeks to identify patterns of intended service use among frail older people living in the communities and to examine the determinants that explain heterogeneous variations in these patterns. Additionally, this study also tested the relationship between cultural values and intended service use patterns, and the mediating role of enabling factors between the two. Methods: Participants were recruited from Haidian District, Beijing, China in 2015. A multi-stage sampling method was adopted to select sub-districts, communities and people aged 70 years or older. After screening, 577 older people with limitations in daily life were successfully interviewed. After data cleaning, 550 samples were included for data analysis. This study establishes a conceptual framework based on the Andersen model (including predisposing factors, enabling factors and need factors), and further develops it by adding cultural value factors (including attitudes towards filial piety and attitudes towards social face). Using latent class analysis (LCA), this study classifies the overall patterns of older people's formal service utilization. Fourteen types of formal services were taken into account, including housework, voluntary support, transportation, home-delivered meals, home-delivery medical care, an elderly canteen and day-care/respite care, among others. Structural equation modeling (SEM) was used to examine the direct effect of cultural values on service use patterns, and the mediating effect of the enabling factors.
Results: The LCA identified a hierarchical structure of service use patterns: multiple intended service use (N=69, 13%), selective intended service use (N=129, 23%), and light intended service use (N=352, 64%). In the SEM, after controlling for predisposing factors and need factors, the results showed a significant direct effect of cultural values on older people's intended service use patterns. Enabling factors had a partial mediation effect on the relationship between cultural values and the patterns. Conclusions and Implications: Differentiation of formal services may be important for meeting frail older people's service needs and for distributing program resources by identifying target populations for intervention, which may inform specific interventions to better support frail older people. Additionally, cultural values had a unique direct effect on the intended service use patterns of frail older people in China, enriching our theoretical understanding of the sources of cultural values and their impacts. The findings also highlighted the mediation effects of enabling factors on the relationship between cultural value factors and intended service use patterns. This study suggests that researchers and service providers should pay more attention to the important role of cultural value factors in shaping intended service use patterns, and be more sensitive to the mediating effect of enabling factors when discussing the relationship between cultural values and the patterns.
Keywords: frail older people, intended service use pattern, cultural values, enabling factors, contemporary China, latent class analysis
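Latent class analysis of the kind described above groups respondents from patterns of binary indicators. As a rough illustration only (not the study's code, which was not published here; all data and variable names below are invented), a minimal EM estimator for a binary-indicator latent class model can be sketched in Python:

```python
import numpy as np

def lca_em(X, n_classes, n_iter=200, seed=0):
    """Minimal EM for a latent class model with binary indicators.

    X: (n_samples, n_items) 0/1 array. Returns class proportions `pi`,
    item-response probabilities `theta` (n_classes x n_items), and the
    posterior class responsibilities for each respondent.
    """
    rng = np.random.default_rng(seed)
    n, n_items = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, (n_classes, n_items))
    for _ in range(n_iter):
        # E-step: posterior P(class | responses), computed in log space
        log_post = (X @ np.log(theta).T
                    + (1 - X) @ np.log(1 - theta).T
                    + np.log(pi))
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate proportions and item probabilities
        pi = resp.mean(axis=0)
        theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, resp

# Synthetic check: two classes with clearly distinct response profiles
rng = np.random.default_rng(1)
z = rng.integers(0, 2, 500)
true_theta = np.array([[0.9, 0.9, 0.1, 0.1],
                       [0.1, 0.1, 0.9, 0.9]])
X = (rng.uniform(size=(500, 4)) < true_theta[z]).astype(int)
pi, theta, resp = lca_em(X, n_classes=2)
```

In practice, model selection (number of classes) would be guided by information criteria such as BIC, which the sketch omits.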
Procedia PDF Downloads 223
1058 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement
Authors: Rajkumar Ghosh
Abstract:
Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. 
By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modeling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.
Keywords: earthquake, out-of-sequence thrust, disaster, human life
Procedia PDF Downloads 74
1057 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells
Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez
Abstract:
Intercellular communication is a necessary condition for cellular functions and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. In the case of cancerous cells, these take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications, being also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a wide spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical ones to computational ones. Regarding cellular and molecular processes in cancer, their study has also found valuable support in different simulation tools that, covering a spectrum as mentioned above, have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals to the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication, and therefore in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the transformation of the cells that surround a cancerous cell.
Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation
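Gillespie's algorithm, which the abstract names as one of Cellulat's two key elements, draws an exponentially distributed waiting time from the summed reaction propensities and then picks which reaction fires in proportion to its propensity. A minimal stand-alone sketch for a birth-death species (illustrative only; the rates and reactions are hypothetical and not taken from the Cellulat model):

```python
import random

def gillespie_birth_death(k_on, k_off, x0, t_max, seed=42):
    """Gillespie SSA for a single birth-death species X:
    birth  0 -> X  with constant propensity k_on
    death  X -> 0  with propensity k_off * X
    Returns the sampled trajectory as a list of (time, count) pairs.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_max:
        a_birth = k_on
        a_death = k_off * x
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)          # exponential waiting time
        if rng.random() * a_total < a_birth:   # pick reaction by propensity
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = gillespie_birth_death(k_on=10.0, k_off=0.1, x0=0, t_max=100.0)
```

The count fluctuates stochastically around the deterministic steady state k_on/k_off; real signaling networks simply extend the propensity list to many coupled reactions.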
Procedia PDF Downloads 249
1056 Safety Profile of Human Papillomavirus Vaccines: A Post-Licensure Analysis of the Vaccine Adverse Events Reporting System, 2007-2017
Authors: Giulia Bonaldo, Alberto Vaccheri, Ottavio D'Annibali, Domenico Motola
Abstract:
The Human Papilloma Virus (HPV) has been shown to be the cause of different types of carcinomas, most notably cervical intraepithelial neoplasia. From the early 1980s to today, thanks first to preventive screening campaigns (Pap test) and later to the introduction of HPV vaccines on the market, the number of new cases of cervical cancer has decreased significantly. Three HPV vaccines are currently approved: Cervarix® (HPV2 - virus types: 16 and 18), Gardasil® (HPV4 - 6, 11, 16, 18) and Gardasil 9® (HPV9 - 6, 11, 16, 18, 31, 33, 45, 52, 58), all of which protect against the two high-risk HPV types (16, 18) that are mainly involved in cervical cancers. Although the remarkable effectiveness of these vaccines has been demonstrated, in recent years there have been many complaints about their risk-benefit profile due to Adverse Events Following Immunization (AEFI). The purpose of this study is to support the ongoing discussion on the safety profile of HPV vaccines with real-life data derived from spontaneous reports of suspected AEFIs collected in the Vaccine Adverse Events Reporting System (VAERS). VAERS is a freely available national vaccine safety surveillance database of AEFI, co-administered by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). We collected all reports between January 2007 and December 2017 related to the HPV vaccines, with a brand name (HPV2, HPV4, HPV9) or without (HPVX). A disproportionality analysis using the Reporting Odds Ratio (ROR) with 95% confidence interval and p value ≤ 0.05 was performed. Over the 10-year period, 54889 reports of AEFI related to HPV vaccines were retrieved in VAERS, corresponding to 224863 vaccine-event pairs. The highest number of reports was related to Gardasil (n = 42244), followed by Gardasil 9 (7212) and Cervarix (3904). The brand name of the HPV vaccine was not reported in 1529 cases.
The two events most frequently reported and statistically significant for each vaccine were: dizziness (n = 5053) ROR = 1.28 (CI95% 1.24 – 1.31) and syncope (4808) ROR = 1.21 (1.17 – 1.25) for Gardasil; injection site pain (305) ROR = 1.40 (1.25 – 1.57) and injection site erythema (297) ROR = 1.88 (1.67 – 2.10) for Gardasil 9; and headache (672) ROR = 1.14 (1.06 – 1.23) and loss of consciousness (528) ROR = 1.71 (1.57 – 1.87) for Cervarix. In total, we collected 406 reports of death and 2461 cases of permanent disability over the ten-year period. Events consisting of incorrect vaccine storage or incorrect administration were not considered. The AEFI analysis showed that the most frequently reported events are non-serious and listed in the corresponding SmPCs. Beyond these, potential safety signals arose regarding less frequent and more severe AEFIs that would deserve further investigation. This already happened with the referral to the European Medicines Agency (EMA) for the adverse events POTS (Postural Orthostatic Tachycardia Syndrome) and CRPS (Complex Regional Pain Syndrome) associated with anti-papillomavirus vaccines.
Keywords: adverse drug reactions, pharmacovigilance, safety, vaccines
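The ROR used in this analysis is computed from a 2x2 contingency table of reports, with the 95% confidence interval obtained on the log scale. A small sketch (the counts in the usage example are made up, not taken from VAERS):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Disproportionality ROR with a 95% CI from a 2x2 report table.

    a: reports of the event with the vaccine of interest
    b: reports of all other events with the vaccine of interest
    c: reports of the event with all other vaccines
    d: reports of all other events with all other vaccines
    """
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(ROR)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

# Hypothetical counts: 120 reports of an event among 5000 for one vaccine,
# versus 800 among 50000 for all other vaccines
ror, lo, hi = reporting_odds_ratio(120, 5000 - 120, 800, 50000 - 800)
```

A signal is conventionally flagged when the lower bound of the CI exceeds 1, subject to a minimum report count.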
Procedia PDF Downloads 161
1055 Multivariate Ecoregion Analysis of Nutrient Runoff From Agricultural Land Uses in North America
Authors: Austin P. Hopkins, R. Daren Harmel, Jim A. Ippolito, P. J. A. Kleinman, D. Sahoo
Abstract:
Field-scale runoff and water quality data are critical to understanding the fate and transport of nutrients applied to agricultural lands and to minimizing their off-site transport, because it is at that scale that agricultural management decisions are typically made based on hydrologic, soil, and land use factors. However, regional influences such as precipitation, temperature, and prevailing cropping systems and land use patterns also impact nutrient runoff. In the present study, the recently updated MANAGE (Measured Annual Nutrient loads from Agricultural Environments) database was used to conduct an ecoregion-level analysis of nitrogen and phosphorus runoff from agricultural lands in North America. Specifically, annual N and P runoff loads for cropland and grasslands in North American Level II EPA ecoregions were presented, and the impact of factors such as land use, tillage, and fertilizer timing and placement on N and P runoff was analyzed. To this end, we compiled annual N and P runoff load data (i.e., dissolved, particulate, and total N and P, kg/ha/yr) for each Level II EPA ecoregion and for various agricultural management practices (i.e., land use, tillage, fertilizer timing, fertilizer placement) within each ecoregion to showcase the analyses possible with the data in MANAGE. Potential differences in N and P runoff loads were evaluated between and within ecoregions with statistical and graphical approaches. Because the data were not normally distributed, non-parametric analyses, mainly Mann-Whitney tests, were conducted in R on median values weighted by the site-years of data, and we used Dunn tests and box-and-whisker plots to visually and statistically evaluate significant differences. Of the 50 North American ecoregions, 11 were found to have sufficient data and site-years to be utilized in the analysis.
When examining ecoregions alone, it was observed that ER 9.2 (Temperate Prairies) had a significantly higher total N, at 11.7 kg/ha/yr, than ER 9.4 (South Central Semi-Arid Prairies), with a total N of 2.4 kg/ha/yr. When examining total P, it was observed that ER 8.5 (Mississippi Alluvial and Southeast USA Coastal Plains) had a higher load, at 3.0 kg/ha/yr, than ER 8.2 (Southeastern USA Plains), with a load of 0.25 kg/ha/yr. Tillage and land use had strong effects on nutrient loads. In ER 9.2 (Temperate Prairies), conventional tillage had a total N load of 36.0 kg/ha/yr while conservation tillage had a total N load of 4.8 kg/ha/yr. In all relevant ecoregions, when corn was the predominant land use, total N levels increased significantly compared to grassland or other grains. In ER 8.4 (Ozark-Ouachita), corn had a total N of 22.1 kg/ha/yr while grazed grassland had a total N of 2.9 kg/ha/yr. The interactions that agricultural management practices have with one another, combined with ecological conditions, and their impacts on continental aquatic nutrient loads still hold intricacies that need to be explored. This research provides a stepping stone to further understanding of land and resource stewardship and best management practices.
Keywords: water quality, ecoregions, nitrogen, phosphorus, agriculture, best management practices, land use
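A Mann-Whitney comparison of the kind used in this study can be sketched as follows (the study's tests were run in R; this is an illustrative pure-Python version using the normal approximation, and the load values below are invented, not MANAGE data):

```python
import math

def _ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Returns (U, p). No tie correction is applied, so the p-value is
    approximate; suitable for quick checks on non-normal load data.
    """
    n1, n2 = len(x), len(y)
    r = _ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2.0
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma  # u <= mu, so z <= 0
    p = 2.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return u, min(p, 1.0)

# Hypothetical annual total-N loads (kg/ha/yr) under two tillage systems
conventional = [36.0, 28.5, 41.2, 33.7, 30.1, 39.8]
conservation = [4.8, 6.1, 3.9, 5.5, 7.2, 4.4]
u, p = mann_whitney_u(conventional, conservation)
```

For small samples or heavily tied data, an exact test (as offered by R's `wilcox.test` or SciPy) is preferable to this approximation.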
Procedia PDF Downloads 77
1054 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose
Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini
Abstract:
Treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, the coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one of these coupling processes. Its principle is based on a reaction step (e.g., complexation) between metal ions and a polymer, followed by a step involving the rejection of the formed species by a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated in depth herein for the removal of nickel ions by adding chitosan and carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions with or without the presence of NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions with a filtration cell equipped with a polyamide thin-film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, it was found that nickel rejection decreases from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: the increase in the electrolyte concentration screens the electrostatic interaction between ions and the membrane fixed charge, which decreases their rejection.
It was shown that the addition of a sufficient amount of polymer (greater than 10⁻² M of monomer units) can offset this decrease and allow good metal removal. However, the permeation flux was found to be somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH value, the better the removal performance. The two polymers showed similar performance enhancement at natural pH. However, chitosan proved more efficient under slightly basic conditions (above its pKa), whereas CMC demonstrated very weak rejection performance when the pH is below its pKa. In terms of metal rejection, chitosan is thus probably the better option for basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan under natural conditions (5 < pH < 8), since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water showed that the increase in metal ion rejection induced by the polymer addition is very low, due to competition between the various ions present in the complex mixture.
Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration
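The rejection percentages quoted above follow the standard observed-rejection definition R = 1 - Cp/Cf (permeate concentration over feed concentration). A trivial sketch, with hypothetical concentrations chosen only to illustrate rejection falling as the feed concentration rises:

```python
def observed_rejection(c_feed_ppm, c_permeate_ppm):
    """Observed rejection R = 1 - Cp/Cf; 1.0 = full retention, 0.0 = none."""
    return 1.0 - c_permeate_ppm / c_feed_ppm

# Hypothetical Ni(II) measurements without added polymer
for cf, cp in [(1.0, 0.2), (10.0, 4.0), (100.0, 95.0)]:
    print(f"feed {cf:6.1f} ppm -> rejection {100 * observed_rejection(cf, cp):.0f}%")
```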
Procedia PDF Downloads 161
1053 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
Default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and which enjoys psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need of relevance in discourse. However, in Katsos' study, the experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than with ad hoc scales. Neither default accounts nor Relevance Theory can fully explain this result. Thus, there are two questionable points about this result: (1) Is it possible that the strange discrepancy is due to other factors instead of the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In my Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown via PowerPoint in the form of pictures, and each procedure will be done under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test materials will be transformed into written words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room in our lab. Reading times for the target parts, i.e., words containing scalar implicatures, will be recorded.
We presume that in the group with lexical scales, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, which will make the participants expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without the support of a fixed mental context for the scale. Thus, whether the new input is informative or not does not matter at all, and the reading time of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? We might be able to assume, based on our experiments, that a single dominant processing paradigm may not be plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As for the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also let the possible default or standardized paradigm override the role of context. However, the objects in an ad hoc scale are not usually treated as scalar members in mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate a scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
Procedia PDF Downloads 320
1052 Profiling of Bacterial Communities Present in Feces, Milk, and Blood of Lactating Cows Using 16S rRNA Metagenomic Sequencing
Authors: Khethiwe Mtshali, Zamantungwa T. H. Khumalo, Stanford Kwenda, Ismail Arshad, Oriel M. M. Thekisoe
Abstract:
Ecologically, the gut, mammary glands and bloodstream consist of distinct microbial communities of commensals, mutualists and pathogens, forming a complex ecosystem of niches. The by-products derived from these body sites, i.e., faeces, milk and blood, respectively, have many uses in rural communities, where they aid in the facilitation of day-to-day household activities and occasional rituals. Thus, although livestock rearing plays a vital role in the sustenance of the livelihoods of rural communities, it may serve as a potent reservoir of different pathogenic organisms that could have devastating health and economic implications. This study aimed to simultaneously explore the microbial profiles of corresponding faecal, milk and blood samples from lactating cows using 16S rRNA metagenomic sequencing. Bacterial communities were inferred through the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline coupled with the SILVA database v138. All downstream analyses were performed in R v3.6.1. Alpha-diversity metrics showed significant differences between faeces and blood and between faeces and milk, but did not vary significantly between blood and milk (Kruskal-Wallis, P < 0.05). Beta-diversity metrics on Principal Coordinate Analysis (PCoA) and Non-Metric Multidimensional Scaling (NMDS) clustered samples by type, suggesting that the microbial communities of the studied niches are significantly different (PERMANOVA, P < 0.05). A number of taxa were significantly differentially abundant (DA) between groups based on the Wald test implemented in the DESeq2 package (Padj < 0.01). The majority of the DA taxa were significantly enriched in faeces rather than in milk and blood, except for the genus Anaplasma, which was significantly enriched in blood and was, in turn, the most abundant taxon overall. A total of 30 phyla, 74 classes, 156 orders, 243 families and 408 genera were obtained from the overall analysis.
The most abundant phyla obtained across the three body sites were Firmicutes, Bacteroidota, and Proteobacteria. A total of 58 genus-level taxa were simultaneously detected across the sample groups, while bacterial signatures of at least 8 of these occurred concurrently in corresponding faeces, milk and blood samples from the same pooled group of animals. The important taxa identified in this study could be categorized into four potentially pathogenic clusters: i) arthropod-borne; ii) food-borne and zoonotic; iii) mastitogenic; and iv) metritic and abortigenic. This study provides insight into the microbial composition of bovine faeces, milk, and blood and the extent of their overlap. It further highlights the potential risk of disease occurrence and transmission between the animals and the inhabitants of the sampled rural community, given the unsanitary practices associated with the use of cattle by-products.
Keywords: microbial profiling, 16S rRNA, NGS, feces, milk, blood, lactating cows, small-scale farmers
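The alpha-diversity comparison described above can be sketched in a few lines. The ASV counts below are hypothetical, and the helpers are simplified stand-ins for the DADA2/R ecosystem the study actually used (the Kruskal-Wallis statistic here uses the no-ties formula):

```python
import math

def shannon_index(counts):
    """Shannon alpha diversity H' = -sum(p_i * ln p_i) over nonzero ASV counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def kruskal_h(groups):
    """Kruskal-Wallis H statistic across groups (no-ties formula)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)

# Hypothetical ASV count tables (rows = samples) for the three body sites:
# faeces samples are more even (higher diversity) than milk or blood.
faeces = [[120, 80, 40, 30, 10], [100, 90, 50, 25, 15], [110, 85, 45, 20, 12]]
milk = [[200, 20, 5], [180, 30, 8], [190, 25, 6]]
blood = [[300, 10, 2], [280, 15, 3], [290, 12, 1]]

alpha = [[shannon_index(s) for s in grp] for grp in (faeces, milk, blood)]
h_stat = kruskal_h(alpha)
```

With two degrees of freedom, an H above the chi-square critical value of 5.99 corresponds to P < 0.05, mirroring the significant between-site differences reported in the abstract.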
Procedia PDF Downloads 109
1051 Methodology for the Determination of Triterpenic Compounds in Apple Extracts
Authors: Mindaugas Liaudanskas, Darius Kviklys, Kristina Zymonė, Raimondas Raudonis, Jonas Viškelis, Norbertas Uselis, Pranas Viškelis, Valdimaras Janulis
Abstract:
Apples are among the most commonly consumed fruits in the world. Based on data from 2014, approximately 84.63 million tons of apples are grown per annum. Apples are widely used in the food industry to produce various products and drinks (juice, wine, and cider); they are also consumed unprocessed. In the human diet, apples are an important source of different groups of biologically active compounds that can contribute positively to the prevention of various diseases. They are a source of various biologically active substances, especially vitamins, organic acids, micro- and macro-elements, pectins, and phenolic, triterpenic, and other compounds. Among the biologically active compounds found in apples, triterpenic compounds, which are characterized by versatile biological activity, are among the most promising and most significant for human health. A specific analytical procedure, including sample preparation and High Performance Liquid Chromatography (HPLC) analysis, was developed, optimized, and validated for the detection of triterpenic compounds in samples of whole apples, their peels, and flesh from the widespread apple cultivars 'Aldas', 'Auksis', 'Connel Red', 'Ligol', 'Lodel', and 'Rajka' grown in Lithuanian climatic conditions. The conditions for triterpenic compound extraction were optimized: the extraction solvent was 100% (v/v) acetone, and the extraction was performed in an ultrasound bath for 10 min. Isocratic elution (with an eluent ratio of 88% solvent A and 12% solvent B) was applied for a rapid separation of the triterpenic compounds. The validation of the methodology was performed on the basis of the ICH recommendations. The following validation characteristics were evaluated: the selectivity of the method (specificity), precision, the detection and quantitation limits of the analytes, and linearity. The obtained parameter values confirm the suitability of the methodology for the analysis of triterpenic compounds. 
Using the optimised and validated HPLC technique, four triterpenic compounds were separated and identified, and their specificity was confirmed. These compounds were corosolic acid, betulinic acid, oleanolic acid, and ursolic acid. Ursolic acid was the dominant compound in all the tested apple samples. The detected amount of betulinic acid was the lowest of all the identified triterpenic compounds. The greatest amounts of triterpenic compounds were detected in the whole-apple and apple-peel samples of the 'Lodel' cultivar; thus, apples and apple extracts of this cultivar are potentially valuable for use in medical practice, for the prevention of various diseases, for adjunct therapy, for the isolation of individual compounds with a specific biological effect, and for the development and production of dietary supplements and functional food enriched in biologically active compounds.
Acknowledgements: This work was supported by a grant from the Research Council of Lithuania, project No. MIP-17-8.
Keywords: apples, HPLC, triterpenic compounds, validation
Procedia PDF Downloads 172
1050 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate a human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations of both the reconstruction error and the distinctiveness of images with their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
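The distinctiveness measure described above (Euclidean distance to the nearest neighbour in latent space) and its correlation with memorability can be sketched on toy data. The latent codes and scores below are illustrative assumptions, not values from the MemCat analysis:

```python
import math

def distinctiveness(latents):
    """Euclidean distance from each latent vector to its nearest neighbour."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(dist(v, w) for j, w in enumerate(latents) if j != i)
            for i, v in enumerate(latents)]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

# Toy latent codes (stand-ins for bottleneck activations) and memorability
# scores: the more isolated codes are given higher scores here.
latents = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (6.0, 0.0), (10.0, 0.0)]
memorability = [0.40, 0.42, 0.55, 0.70, 0.90]

d = distinctiveness(latents)   # nearest-neighbour distances
r = pearson(d, memorability)   # strongly positive, as in the study
```

A strongly positive `r` on such data mirrors the reported finding that images with more distinct latent representations tend to be more memorable.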
Procedia PDF Downloads 88
1049 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization lend themselves to an evolutionary algorithmic approach, since the weights of antenna parameters depend directly on geometric characteristics. An evolutionary algorithm can be explained simply in terms of a given quality function to be maximized. We can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations. However, the antenna parameters and geometries are too wide-ranging to fit into a single function. Therefore, the weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtain the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. The antenna parameters, such as gain and directivity, are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that occur during simulations. HFSS is chosen for simulations and results. 
MATLAB is used to generate the computations and combinations and to log the data. MATLAB is also used to apply machine learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as the slot line characteristic impedance, the stripline impedance, the slot line width, the flare aperture size, and the dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization, with the Vivaldi antenna as the test case.
Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
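The evolutionary loop described above (fitness evaluation, selection, recombination, mutation) can be sketched as follows. The quality function is a toy surrogate, not a real Vivaldi antenna model, and the parameter names and optimum are illustrative assumptions:

```python
import random

def fitness(genome):
    # Toy surrogate for an antenna quality function (purely illustrative,
    # NOT a real electromagnetic model): best at slot_width=0.5, flare=2.0.
    slot_width, flare = genome
    return -((slot_width - 0.5) ** 2 + (flare - 2.0) ** 2)

def evolve(pop_size=20, generations=60, seed=42):
    rng = random.Random(seed)
    # Random initial candidates drawn from the parameter domain.
    pop = [[rng.uniform(0, 1), rng.uniform(0, 4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # recombination
            i = rng.randrange(len(child))
            child[i] += rng.gauss(0, 0.1)                 # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the elite half is carried over unchanged, the best fitness never degrades; in a real workflow the surrogate `fitness` would be replaced by a solver call scoring gain, bandwidth, or efficiency.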
Procedia PDF Downloads 108
1048 Molecular Identification of Camel Tick and Investigation of Its Natural Infection by Rickettsia and Borrelia in Saudi Arabia
Authors: Reem Alajmi, Hind Al Harbi, Tahany Ayaad, Zainab Al Musawi
Abstract:
Hard ticks of the genus Hyalomma (family: Ixodidae) are obligate ectoparasites in all their life stages on some domestic animals, mainly camels and cattle. Ticks may lead to many economic and public health problems because of their blood-feeding behavior. They also act as vectors for many bacterial, viral and protozoan agents which may cause serious diseases, such as tick-borne encephalitis, Rocky Mountain spotted fever, Q fever and Lyme disease, that can affect humans and/or animals. In the present study, molecular identification of ticks that attack camels in the Riyadh region, Saudi Arabia, based on the partial sequence of the mitochondrial 16S rRNA gene was applied. The present study also aims to detect natural infections of the collected camel ticks with Rickettsia spp. and Borrelia spp. using PCR/hybridization of the citrate synthase encoding gene present in bacterial cells. Hard ticks infesting camels were collected from different camels located on a farm in the Riyadh region, Saudi Arabia. Results of the present study showed that the collected specimens belong to two species: Hyalomma dromedarii, representing 99% of the identified specimens, and Hyalomma marginatum, which accounts for 1% of the identified ticks. The molecular identification was made by blasting the sequences obtained in this study against sequences already present and identified in GenBank. All obtained sequences of H. dromedarii specimens showed 97-100% identity with the same gene sequence of the same species (Accession # L34306.1), which was used as a reference. Meanwhile, no intraspecific variation of H. marginatum was measured because only one specimen was collected. Results also showed that the intraspecific variability between individuals of H. dromedarii ranged from 0.2-6.6% in 92% of the samples, while the remaining 7% of the total H. dromedarii samples showed about 10.3% individual differences. However, the interspecific variability between H. dromedarii and H. marginatum was approximately 18.3%. 
On the other hand, using the PCR/hybridization technique, we could detect natural infection of camel ticks with Rickettsia spp. and Borrelia spp. Results revealed the natural presence of both bacteria in the collected ticks: Rickettsia spp. infection was present in 29% of the collected ticks, while 35% of the collected specimens were infected with Borrelia spp. The valuable results obtained from the present study are a new record for the molecular identification of camel ticks in Riyadh, Saudi Arabia, and their natural infection with both Rickettsia spp. and Borrelia spp. These results may help scientists to devise a direct control strategy for ticks in order to protect one of the most economically important animals, the camel. The results of this project also spotlight the diseases that might be transmitted by ticks, supporting a direct protective plan to prevent the spread of these dangerous agents. Further molecular studies are needed to confirm the results of the present study by using other mitochondrial and nuclear genes for tick identification.
Keywords: camel ticks, Rickettsia spp., Borrelia spp., mitochondrial 16S rRNA gene
Procedia PDF Downloads 276
1047 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings
Authors: Nadine Maier, Martin Mensinger, Enea Tallushi
Abstract:
In general, two types of launching bearings are used today in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may have an influence on the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account eccentricities which occur during incremental launching and whether this depends on the respective launching bearing. To this end, large-scale buckling tests were carried out at the Technical University of Munich on longitudinally stiffened plates under biaxial stresses with the two different types of launching bearings and eccentric load introduction. Based on the experimental results, a numerical model was validated. Currently, we are evaluating different parameters for both types of launching bearings, such as the load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. This, as well as the mesh size, is taken into account in the numerical calculations of the parametric study. As a geometric imperfection, the scaled first buckling mode is applied. A bilinear material curve is used, and a GMNIA (geometrically and materially nonlinear analysis with imperfections) is performed to determine the load capacity. 
Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the last converged load step. To evaluate the introduction of the transverse load, the transverse stress concentration is plotted along a defined longitudinal section of the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to allow an assessment of the case relevant in practice. The input and the output are automated and depend on the given parameters. Thus, we are able to adapt our model to different geometric dimensions and load conditions. The programming is done with the help of APDL and a Python code. This allows us to evaluate and compare more parameters faster, and input and output errors are avoided. It is therefore possible to evaluate a large spectrum of parameters in a short time, which allows a practical evaluation of different parameters for buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model and shows the first influences on the buckling behavior under eccentric and multi-axial load introduction.
Keywords: buckling behavior, eccentric load introduction, incremental launching, large-scale buckling tests, multi-axial stress states, parametric numerical modelling
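The automated parametric study described above can be sketched as a simple sweep driver. The parameter grid and values below are illustrative assumptions, and the solver call is stubbed where the real workflow would invoke ANSYS via APDL:

```python
from itertools import product

# Hypothetical parameter grid mirroring the study's variables (names and
# values are assumptions, not those of the actual model).
grid = {
    "load_length_mm":   [200, 400],
    "eccentricity_mm":  [0, 5, 10],
    "stiffener_gap_mm": [500, 750],
    "web_thickness_mm": [10, 12],
}

def run_case(params):
    # Stub for the FE solve; a real run would write an APDL input file,
    # launch the solver, and parse the ultimate load from the results.
    return {"params": params, "ultimate_load_kN": None}

def sweep(grid):
    """Enumerate the full factorial combination of all grid parameters."""
    keys = list(grid)
    return [run_case(dict(zip(keys, combo)))
            for combo in product(*(grid[k] for k in keys))]

results = sweep(grid)
```

Automating the enumeration this way is what makes a large parameter spectrum tractable: the grid above already yields 2 x 3 x 2 x 2 = 24 cases, each dispatched identically.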
Procedia PDF Downloads 106
1046 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences
Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur
Abstract:
In four studies, we examined whether sellers and buyers differ not only in their subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy given objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss-attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should emerge only under certain boundary conditions. In Study 1, a published dataset was reanalyzed, in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between his or her pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under time constraints. A published dataset was reanalyzed, in which 84 participants were asked to provide selling and buying prices for monetary lotteries under three deliberation-time conditions (5, 10, and 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time. 
Post-hoc tests revealed that there were main effects of perspective both in the 5-second and in the 10-second deliberation-time conditions, but not in the 15-second condition. Thus, sellers' EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times to test whether the effect decays with repeated presentation. The results showed that the difference between buyers' and sellers' EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss-attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was done by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one's perspective in a trading negotiation may improve price accuracy.
Keywords: decision making, endowment effect, pricing, loss aversion, loss attention
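The per-participant EV-sensitivity measure used in these studies, a Spearman rank correlation between prices and expected values, can be sketched as follows. The pricing data are invented to illustrate a seller whose prices track EV more closely than a buyer's:

```python
def spearman(x, y):
    """Spearman rank correlation; assumes distinct values (no tie handling)."""
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order, start=1):
            r[i] = pos
        return r
    rx, ry = rank(x), rank(y)
    n = len(x)
    m = (n + 1) / 2                      # mean rank, same for both vectors
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)  # rank variance, same for both
    return cov / var

# Hypothetical pricing data for five lotteries with increasing EVs.
expected_values = [10, 25, 40, 55, 70]
seller_prices = [12, 27, 38, 60, 71]   # perfectly ordered by EV
buyer_prices = [20, 15, 35, 30, 50]    # two rank inversions

rho_seller = spearman(seller_prices, expected_values)
rho_buyer = spearman(buyer_prices, expected_values)
```

Here the seller's rank correlation is 1.0 and the buyer's is 0.8, echoing the reported pattern of greater EV sensitivity for sellers.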
Procedia PDF Downloads 344
1045 Antimicrobial and Anti-Biofilm Activity of Non-Thermal Plasma
Authors: Jan Masak, Eva Kvasnickova, Vladimir Scholtz, Olga Matatkova, Marketa Valkova, Alena Cejkova
Abstract:
Microbial colonization of medical instruments, catheters, implants, etc., is a serious problem in the spread of nosocomial infections. Biofilms exhibit enormous resistance to environmental stresses: the resistance of biofilm populations to antibiotics or biocides is often two to three orders of magnitude higher than that of suspension populations. Of particular interest are substances or physical processes that primarily cause the destruction of the biofilm, so that the released cells can be killed by existing antibiotics. In addition, agents that do not have a strong lethal effect do not exert such significant selection pressure toward further enhanced resistance. Non-thermal plasma (NTP) is defined as a neutral, ionized gas composed of particles (photons, electrons, positive and negative ions, free radicals, and excited or non-excited molecules) in permanent interaction. In this work, the effect of NTP generated by a cometary corona with a metallic grid on the formation and stability of biofilm and on the metabolic activity of cells in biofilm was studied. NTP was applied to biofilm populations of Staphylococcus epidermidis DBM 3179, Pseudomonas aeruginosa DBM 3081, DBM 3777, ATCC 15442 and ATCC 10145, Escherichia coli DBM 3125, and Candida albicans DBM 2164 grown on solid media in Petri dishes and on the surface of the titanium alloy (Ti6Al4V) used for the production of joint replacements. Erythromycin (for S. epidermidis), polymyxin B (for E. coli and P. aeruginosa), amphotericin B (for C. albicans), and ceftazidime (for P. aeruginosa) were used to study the combined effect of NTP and antibiotics. Biofilms were quantified by the crystal violet assay. The metabolic activity of the cells in biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) colorimetric test, based on the reduction of MTT to formazan by the dehydrogenase system of living cells. 
Fluorescence microscopy was applied to visualize the biofilm on the surface of the titanium alloy; SYTO 13 was used as a fluorescence probe to stain cells in the biofilm. It was shown that biofilm populations of all studied microorganisms are very sensitive to the type of NTP used. The inhibition zone of biofilm recorded after 60 minutes of exposure to NTP exceeded 20 cm², except for P. aeruginosa DBM 3777 and ATCC 10145, where it was about 9 cm². The metabolic activity of cells in biofilm also differed between individual microbial strains. High sensitivity to NTP was observed in S. epidermidis, in which the metabolic activity of the biofilm decreased to 15% after 30 minutes of NTP exposure and to 1% after 60 minutes. Conversely, the metabolic activity of the cells of C. albicans decreased only to 53% after 30 minutes of NTP exposure; nevertheless, this result can be considered very good. Suitable combinations of NTP exposure time and antibiotic concentration achieved, in most cases, a remarkable synergistic effect on the reduction of the metabolic activity of the biofilm cells. For example, in the case of P. aeruginosa DBM 3777, a combination of 30 minutes of NTP with 1 mg/l of ceftazidime resulted in a decrease in metabolic activity to below 4%.
Keywords: anti-biofilm activity, antibiotic, non-thermal plasma, opportunistic pathogens
Procedia PDF Downloads 183
1044 Automated System: Managing the Production and Distribution of Radiopharmaceuticals
Authors: Shayma Mohammed, Adel Trabelsi
Abstract:
Radiopharmacy is the art of preparing high-quality, radioactive, medicinal products for use in diagnosis and therapy. Unlike normal medicines, radiopharmaceuticals have a dual (radioactive and medicinal) nature that makes their management highly critical. One of the most convincing applications of modern technologies is the ability to delegate the execution of repetitive tasks to programming scripts. Automation has found its way into even the most skilled jobs, improving a company's overall performance by allowing human workers to focus on tasks more important than document filling. This project aims to contribute to the implementation of a comprehensive system to ensure rigorous management of radiopharmaceuticals through a platform that links the Nuclear Medicine Service Management System to the Nuclear Radiopharmacy Management System, in accordance with the recommendations of the World Health Organization (WHO) and the International Atomic Energy Agency (IAEA). In this project, we attempt to build a web application that targets radiopharmacies; the platform is built atop the inherently compatible web stack, which allows it to work in virtually any environment. Different technologies are used in this project (PHP, Symfony, MySQL Workbench, Bootstrap, Angular 7, Visual Studio Code, and TypeScript). The operating principle of the platform is mainly based on two parts: a Radiopharmaceutical Backoffice for the Radiopharmacist, who is responsible for the realization of radiopharmaceutical preparations and their delivery, and a Medical Backoffice for the Doctor, who holds the authorization for the possession and use of radionuclides and is responsible for ordering radioactive products. The application consists of seven modules: Production, Quality Control/Quality Assurance, Release, General Management, References, Transport, and Stock Management. 
It allows eight classes of users: the Production Manager (PM), Quality Control Manager (QCM), Stock Manager (SM), General Manager (GM), Client (Doctor), Parking and Transport Manager (PTM), Qualified Person (QP), and Technical and Production Staff. The result is a digital platform bringing together all players involved in the use of radiopharmaceuticals and integrating the stages of preparation, production, and distribution. Web technologies, in particular, promise to offer all the benefits of automation while requiring no more than a web browser to act as a user client, which is a strength because the web stack is by nature multi-platform. This platform will provide a traceability system for radiopharmaceutical products to ensure the safety and radioprotection of the actors involved and of patients. The new integrated platform is an alternative to writing all the boilerplate paperwork manually, which is a tedious and error-prone task. It will minimize manual human manipulation, which has proven to be the main source of error in nuclear medicine. A codified electronic transfer of information from radiopharmaceutical preparation to delivery will further reduce the risk of maladministration.
Keywords: automated system, management, radiopharmacy, technical papers
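The traceability chain from preparation to delivery described above can be sketched as a minimal state machine over a batch record. The state names and role codes below are assumptions for illustration, not the platform's actual schema (the real system is a PHP/Symfony and Angular web application):

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle of a radiopharmaceutical batch.
STATES = ["produced", "qc_passed", "released", "delivered"]

@dataclass
class Batch:
    radionuclide: str
    activity_mbq: float
    state: str = "produced"
    history: list = field(default_factory=list)  # audit trail of transitions

    def advance(self, actor):
        """Move the batch to the next state, recording who signed off."""
        nxt = STATES[STATES.index(self.state) + 1]  # raises at end of chain
        self.history.append((self.state, nxt, actor))
        self.state = nxt

b = Batch("Tc-99m", 740.0)
b.advance("QCM")   # Quality Control Manager signs off
b.advance("QP")    # Qualified Person releases the batch
b.advance("PTM")   # Transport Manager confirms delivery
```

Because every transition is appended to `history` with its actor, the record itself becomes the codified electronic transfer of information that replaces manual paperwork.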
Procedia PDF Downloads 155
1043 Implementing Quality Improvement Projects to Enhance Contraception and Abortion Care Service Provision and Pre-Service Training of Health Care Providers
Authors: Munir Kassa, Mengistu Hailemariam, Meghan Obermeyer, Kefelegn Baruda, Yonas Getachew, Asnakech Dessie
Abstract:
Improving the quality of the sexual and reproductive health services that women receive is expected to have an impact on women's satisfaction with the services, on their continued use and, ultimately, on their ability to achieve their fertility goals or reproductive intentions. Surprisingly, however, there is little empirical evidence either of whether this expectation is correct or of how best to improve service quality within sexual and reproductive health programs so that these impacts can be achieved. The recent focus on quality has prompted more physicians to do quality improvement work, but often without the needed skill sets, which results in poorly conceived and ultimately unsuccessful improvement initiatives. As this renders the work unpublishable, it further impedes progress in the field of health care improvement and widens the quality chasm. Since 2014, the Center for International Reproductive Health Training (CIRHT) has worked diligently with 11 teaching hospitals across Ethiopia to increase access to contraception and abortion care services. This work has included improving pre-service training through education and curriculum development, expanding hands-on training to better learn critical techniques and counseling skills, and fostering a 'team science' approach to research by encouraging scientific exploration. This is the first time this systematic approach has been applied and documented to improve access to high-quality services in Ethiopia. The purpose of this article is to report the initiatives undertaken and the findings reached by the clinical service team at CIRHT, in an effort to provide a pragmatic approach to quality improvement projects. An audit containing nearly 300 questions about several aspects of patient care, including structure, process, and outcome indicators, was completed by each teaching hospital's quality improvement team. 
This baseline audit assisted in identifying major gaps and barriers, and each team was responsible for determining specific quality improvement aims and tasks to support change interventions using Shewhart's Cycle for Learning and Improvement (the Plan-Do-Study-Act model). To measure progress over time, the quality improvement teams met biweekly and compiled monthly data for review. In addition, site visits to each hospital were completed by the clinical service team to ensure monitoring and support. The results indicate that applying an evidence-based, participatory approach to quality improvement has the potential to increase the accessibility and quality of services in a short amount of time. Continued ownership and on-site support are also vital in promoting sustainability. This approach could be adapted and applied in similar contexts, particularly in other African countries.
Keywords: abortion, contraception, quality improvement, service provision
Procedia PDF Downloads 220
1042 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding
Authors: Ines Oliveira, Ana Reis
Abstract:
Magnetic Pulse Welding (MPW) is a cold solid-state welding process accomplished by the electromagnetically driven, high-speed, low-angle impact between two metallic surfaces. It has the same working principle as Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous compared to other impact processes. Nevertheless, current MPW applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. As a result, the mechanical resistance, morphology, and structure of the weld interface in MPW of the Al/Cu dissimilar pair were investigated. The effect of the process parameters, namely gap, standoff distance, and energy, was studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in welds of the Al/Cu dissimilar pair produced by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets. The thickness and the composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes occur during the process. It is also found that lower pulse energies are preferable. The relationship between increased energy and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating is generated. 
In addition, more energy means a higher flyer velocity: the air in the gap between the parts to be welded is expelled, and the resulting aerodynamic drag (fluid friction), which is proportional to the square of the velocity, further contributes to the generation of heat. As the kinetic energy also increases with the square of velocity, the dissipation of this energy through plastic work and jet generation will also contribute to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. Bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel's surface conditions. No correlation was clearly identified between surface roughness/scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.
Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation
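The velocity-squared scaling invoked in the abstract (both aerodynamic drag and kinetic energy grow with v²) can be sketched numerically. The drag coefficient, flyer area, mass, and velocities below are illustrative placeholders, not values from the study:

```python
# Illustrative sketch (not from the paper): both the aerodynamic drag on
# the flyer and its kinetic energy scale with the square of velocity, so
# doubling the flyer velocity roughly quadruples both heat contributions.

def drag_force(v, rho=1.2, cd=1.0, area=1e-4):
    """Aerodynamic drag F = 0.5 * rho * Cd * A * v^2 (placeholder values)."""
    return 0.5 * rho * cd * area * v**2

def kinetic_energy(v, mass=0.01):
    """Kinetic energy E = 0.5 * m * v^2 (placeholder flyer mass)."""
    return 0.5 * mass * v**2

v1, v2 = 300.0, 600.0  # m/s, hypothetical flyer velocities
assert abs(drag_force(v2) / drag_force(v1) - 4.0) < 1e-9
assert abs(kinetic_energy(v2) / kinetic_energy(v1) - 4.0) < 1e-9
```

This is why the abstract recommends minimizing pulse energy: both heat sources grow quadratically with the flyer velocity that the pulse energy drives.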
1041 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems
Authors: Moustafa Osman Mohammed
Abstract:
This paper introduces Luhmann's autopoietic social systems, starting with the original concept of autopoiesis developed by biologists and scientists, including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained in the three existing groups of ecological phenomena: interaction, social and medical sciences. This hypothesis model, nevertheless, has a nonlinear interaction with its natural environment, an 'interactional cycle' for the exchange of photon energy with molecules without any changes in topology. The external forces in the system's environment might be concomitant with the influence of natural fluctuations (e.g. radioactive radiation, electromagnetic waves). The cantilever sensor deploys insights for the future chip processor for prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board as an intra-chip/inter-chip transmission, drawing approximately 1.7 mA at 3.3 V to service detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate multiple functions of the vessels' natural de-cellular structure for replenishment. Meanwhile, the interior actuators deploy base-pair complementarity of nucleotides for the symmetric arrangement in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably.
The exterior actuators are designed with the ability to sense different variations in the corresponding patterns regarding beat-to-beat heart rate variability (HRV) for spatial autocorrelation of molecular communication, which consists of human electromagnetic, piezoelectric, electrostatic and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype-enabled dynamic energy sensor has been investigated in the laboratory for inclusion of nanoscale devices in the architecture, with a fuzzy logic control for detection of thermal and electrostatic changes, using optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial aspects of molecular frictional properties are adjusted to each other and form unique spatial structure modules, providing the environment's mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.
Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system
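The encode/assemble/decode cycle mentioned for DNA strings can be loosely illustrated with a two-bits-per-nucleotide mapping plus base-pair complementarity (A-T, C-G). This is a generic sketch of the idea, not the paper's actual system:

```python
# Hedged illustration: encoding a byte stream as a DNA string using two
# bits per nucleotide, and deriving the complementary strand via base
# pairing, so that decoding reconstructs the original source.
ENC = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DEC = {v: k for k, v in ENC.items()}
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def encode(data: bytes) -> str:
    # Each byte becomes four nucleotides, most significant bits first.
    return "".join(ENC[(b >> shift) & 0b11]
                   for b in data for shift in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for nt in strand[i:i + 4]:
            b = (b << 2) | DEC[nt]
        out.append(b)
    return bytes(out)

def complement(strand: str) -> str:
    return "".join(COMP[nt] for nt in strand)

msg = b"DNA"
strand = encode(msg)
assert decode(strand) == msg
assert complement(complement(strand)) == strand
```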
1040 A Study on Green Building Certification Systems within the Context of Anticipatory Systems
Authors: Taner Izzet Acarer, Ece Ceylan Baba
Abstract:
This paper examines green building certification systems and their current processes in comparison with anticipatory systems. Rapid growth of the human population and depletion of natural resources are causing irreparable damage to urban and natural environments. In this context, the concept of 'sustainable architecture' emerged in the 20th century so as to establish and maintain standards for livable urban spaces, to improve the quality of urban life, and to preserve natural resources for future generations. The construction industry is responsible for a large part of resource consumption, and it is believed that the 'green building' designs that emerge in the construction industry can reduce environmental problems and contribute to sustainable development around the world. A building must meet a specific set of criteria, set forth through various certification systems, in order to be eligible for designation as a green building. It is disputable whether the methods used by green building certification systems today truly serve the purpose of creating a sustainable world. Accordingly, this study investigates the rating systems used by the most popular green building certification programs, including LEED (Leadership in Energy and Environmental Design), BREEAM (Building Research Establishment Environmental Assessment Method) and DGNB (Deutsche Gesellschaft für Nachhaltiges Bauen), in terms of 'Anticipatory Systems' in accordance with the certification processes and their goals, while discussing their contribution to architecture. The basic methodology of the study is as follows. First, a brief historical and literature review of green buildings and certification systems is presented. Second, the processes of green building certification systems are examined with the help of anticipatory systems.
Anticipatory Systems are a set of systems designed to generate action-oriented projections and to forecast potential side effects using the most current data. Anticipatory Systems pull the future into the present and take action based on future predictions. Although they do not claim to see into the future, they can provide foresight data. When shaping the foresight data, Anticipatory Systems use feedforward instead of feedback, enabling them to forecast the system's behavior and potential side effects by establishing a correlation between the system's present/past behavior and projected results. In this study, the goals and current status of the LEED, BREEAM and DGNB rating systems, which were created using the feedback technique, are examined and presented in a chart. In addition, by examining these rating systems with an anticipatory system that uses the feedforward method, the negative influences of potential side effects on the purpose and current status of the rating systems are shown in another chart. By comparing the two sets of data, the findings show that the rating systems are used for goals different from the purposes they are aiming for. In conclusion, the side effects of green building certification systems are stated using anticipatory system models.
Keywords: anticipatory systems, BREEAM, certificate systems, DGNB, green buildings, LEED
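The feedback/feedforward contrast at the heart of the argument can be sketched with a minimal toy controller. The gains, setpoint, and disturbance values are hypothetical, chosen only to show the difference in mechanism:

```python
# Minimal sketch (illustrative, not from the paper): a feedback controller
# corrects only after an error has been observed, while a feedforward
# (anticipatory) controller acts on a forecast of the disturbance before
# its effect is felt.

def feedback_step(output, setpoint, gain=0.5):
    # Correction is based on the already-realized error.
    return output + gain * (setpoint - output)

def feedforward_step(output, predicted_disturbance):
    # Correction cancels the anticipated disturbance directly.
    return output - predicted_disturbance

setpoint, disturbance = 10.0, 2.0
disturbed = setpoint + disturbance

ff = feedforward_step(disturbed, disturbance)   # cancelled in one step
fb = feedback_step(disturbed, setpoint)         # error reduced gradually

assert ff == setpoint
assert abs(fb - setpoint) < disturbance  # smaller, but not eliminated
```

In the study's terms, the certification systems behave like `feedback_step` (reacting to past data), while an anticipatory evaluation would behave like `feedforward_step` (acting on projected side effects).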
1039 Unfolding Architectural Assemblages: Mapping Contemporary Spatial Objects' Affective Capacity
Authors: Panagiotis Roupas, Yota Passia
Abstract:
This paper aims at establishing an index of design mechanisms, immanent in spatial objects, based on the affective capacity of their material formations. While spatial objects (design objects, buildings, urban configurations, etc.) are regarded as systems composed of interacting parts, within the premises of assemblage theory, their ability to affect and to be affected has not yet been mapped or sufficiently explored. This ability lies in excess, a latent potentiality they contain, not transcendental but immanent in their pre-subjective aesthetic power. As spatial structures are theorized as assemblages, composed of heterogeneous elements that enter into relations with one another, and since all assemblages are parts of larger assemblages, their components' ability to engage is contingent. We thus seek to unfold the mechanisms inherent in spatial objects that allow the constituent parts of design assemblages to perpetually enter into new assemblages. To map an architectural assemblage's affective ability, spatial objects are analyzed along two axes. The first axis focuses on the relations that the assemblage's material and expressive components develop in order to enter assemblages. Material components refer to those material elements that an assemblage requires in order to exist, while expressive components include non-linguistic elements (sense impressions) as well as linguistic ones (beliefs). The second axis records the processes known as a-signifying signs, or a-signs, which are the triggering mechanisms able to territorialize or deterritorialize, stabilize or destabilize the assemblage and thus allow it to assemble anew. As a-signs cannot be isolated from matter, we point to their resulting effects, which, without entering the linguistic level, are expressed in terms of intensity fields: modulations, movements, speeds, rhythms, spasms, etc. They belong to a molecular level where they operate in the pre-subjective world of perceptions, effects, drives, and emotions.
A-signs have been introduced as intensities that transform the object beyond meaning, beyond fixed or known cognitive procedures. To that end, from an archive of more than 100 spatial objects by contemporary architects and designers, an affective mechanisms index has been created, where each a-sign is connected with the list of effects it triggers, which thoroughly defines it. And vice versa, the same effect can be triggered by different a-signs, allowing the design object to lie in a perpetual state of becoming. To define spatial objects, a-signs are categorized in terms of their aesthetic power to affect and to be affected on the basis of the general categories of form, structure and surface. Thus, each part's degree of contingency is evaluated and measured, and finally, a-signs are introduced as material information that is immanent in the spatial object while conferring no meaning; they only convey information without semantic content. Through this index, we are able to analyze and direct the final form of the spatial object while at the same time establishing the mechanism to measure its continuous transformation.
Keywords: affective mechanisms index, architectural assemblages, a-signifying signs, cartography, virtual
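The index described above (each a-sign mapped to the effects it triggers, and the same effect reachable from several a-signs) is structurally a many-to-many mapping. A minimal sketch with hypothetical entries, not the study's actual archive data:

```python
# Hedged sketch (hypothetical data): the affective mechanisms index as a
# many-to-many mapping between a-signs and the effects they trigger.
from collections import defaultdict

index = {
    "rhythm":     ["movement", "intensity shift"],
    "spasm":      ["intensity shift", "destabilization"],
    "modulation": ["movement"],
}

# Invert the index: effect -> list of a-signs that can trigger it,
# mirroring the "vice versa" lookup described in the abstract.
by_effect = defaultdict(list)
for a_sign, effects in index.items():
    for effect in effects:
        by_effect[effect].append(a_sign)

# The same effect can indeed be triggered by different a-signs.
assert by_effect["movement"] == ["rhythm", "modulation"]
assert by_effect["intensity shift"] == ["rhythm", "spasm"]
```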
1038 Abnormal Pap Smear Detection by Application of Revised Bethesda System in Commercial Sex Workers and a Control Group: A Comparative Study
Authors: Priyanka Manghani, Manthan Patel, Rahul Peddawad
Abstract:
Cervical cancer is a major public health hurdle in the area of women's health. The most common cause of cervical cancer is the Human Papilloma Virus (HPV). Human papilloma virus has various genotypes, with HPV 16 and HPV 18 being the major etiological factors causing carcinoma of the cervix. Early screening and detection by Papanicolaou (Pap) smears is an effective method for identifying premalignant and malignant lesions. In case of existing pre-malignant lesions/cervical dysplasias found with HPV 16 or 18, appropriate follow-up can be done to prevent them from developing into a neoplasm. Aims and Objectives: Primary aim: to study various abnormal cervical cytology reports as detected by Pap smear tests, using the Bethesda System, in women at a tertiary care hospital. Secondary aim: to discuss the importance of the Pap smear in cervical cancer screening programs. Materials and Methods: Our study is a prospective study based on 101 women in the age group 20-40 years who attended the out-patient department of Obstetrics and Gynecology at a tertiary care hospital with chief complaints of white/foul vaginal discharge, post-coital bleeding, low back pain, irregular menstruation, etc. Of the women tested, 60 were commercial sex workers, thus constituting a high-risk group for HPV infection. All women underwent conventional cytology. For all the abnormal smears, further cervical biopsies were done, and the final diagnosis was made on the basis of histopathology (gold standard). Results: Of these patients, 16 presented with normal smears, of which 2 belonged to the category of commercial sex workers (3.33%) and 14 were from the normal/control group (34.15%). 44 women presented with inflammatory smears, of which 30 were commercial sex workers (50%) and 14 were from the control group (34.15%). A total of 11 women presented with infectious etiology, with 6 being commercial sex workers (10%) and 5 (12.2%) being in the control group.
A total of 8 patients presented with low-grade squamous intraepithelial lesion (LSIL), with 7 (11.7%) being commercial sex workers and 1 (2.44%) patient belonging to the control group. A total of 7 patients presented with high-grade squamous intraepithelial lesion (HSIL), with 6 (10%) being commercial sex workers and 1 (2.44%) belonging to the control group. 9 patients in total presented with atypical squamous cells of undetermined significance (ASCUS), with 6 (10%) being commercial sex workers and 3 (7.32%) belonging to the control group. Squamous cell carcinoma (SCC) was found only in 1 (1.7%) commercial sex worker. Conclusion: We conclude that HSIL, LSIL, SCC and sexually related infections are comparatively more common in vulnerable groups such as sex workers due to a variety of factors such as multiple sexual partners and poor genital hygiene. Early screening and follow-up interventions are highly needed for them, along with health education on risk factors and emphasis on the importance of Pap smear screening.
Keywords: cervical cancer, papanicolaou (pap) smear, bethesda system, neoplasm
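The within-group percentages reported above follow from the counts and the group sizes (60 commercial sex workers, 41 controls). A short sketch reproducing that arithmetic from the abstract's own numbers:

```python
# Percentages per cytology category, computed within each group:
# commercial sex workers (n=60) and controls (n=41), as in the abstract.
counts = {                      # (CSW count, control count)
    "normal":       (2, 14),
    "inflammatory": (30, 14),
    "infectious":   (6, 5),
    "LSIL":         (7, 1),
    "HSIL":         (6, 1),
    "ASCUS":        (6, 3),
    "SCC":          (1, 0),
}
n_csw, n_ctrl = 60, 41

percent = {k: (round(100 * csw / n_csw, 2), round(100 * ctrl / n_ctrl, 2))
           for k, (csw, ctrl) in counts.items()}

assert percent["normal"] == (3.33, 34.15)
assert percent["LSIL"] == (11.67, 2.44)   # abstract rounds 11.67% to 11.7%
assert percent["SCC"][0] == 1.67          # abstract rounds 1.67% to 1.7%
```

The check confirms the reported figures are consistent with group sizes of 60 and 41.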
1037 Islam in Nation Building: Case Studies of Kazakhstan and Kyrgyzstan
Authors: Etibar Guliyev, Durdana Jafarli
Abstract:
The breakdown of the Soviet Union in the early 1990s and the 9/11 attacks resulted in global changes that created a totally new geopolitical situation for the Muslim-populated republics of the former Soviet Union. Located between great powers such as China and Russia, as well as theocratic states like Iran and Afghanistan, the newly independent Central Asian states faced a dilemma in choosing a new politico-ideological course for development. The policies dubbed Perestroika and Glasnost, leading to the collapse of the world's once superpower, brought about a considerable rise in the national and religious self-consciousness of the Muslim population of the USSR, where religion was prohibited under strict communist rule. Moreover, the religious movements prohibited during the Soviet era acted as part of the national struggle to gain freedom from Moscow. The policies adopted by the Central Asian countries to manage religious revival and extremism vary dramatically from each other. As Kazakhstan and Kyrgyzstan are located between Russia and China and host a considerable Russian population, these countries treated the Islamic revival more tolerantly, trying to benefit from it in the nation-building process. The importance of the topic can be explained by the fact that it investigates an alternative way of managing religious activities and movements. The recent developments in the Middle East, Syria and Iraq in particular, and the fact that hundreds of fighters from the Central Asian republics joined the ISIL terrorist organization, once again highlight the implications of the proper regulation of religious activities not only for domestic but also for regional and global politics. The paper is based on multiple research methods. The process-tracing method was exploited to better understand the Russification and anti-religious policies to which the Central Asian countries were subject during the Soviet era.
The comparative analysis method was also used to better understand the common and distinct features of the politics of religion in Kazakhstan and Kyrgyzstan and the rest of the Central Asian countries. Various legislative acts, as well as secondary sources, were investigated to this end. The paper mostly adopts a constructivist approach and a theory suggesting that religion supports national identity when there is a third force that threatens both and when elements of national identity are weak. Preliminary findings suggest that, in line with policies aimed at the gradual reduction of Russian influence, as well as in the face of ever-increasing migration from China, the mentioned countries incorporated some Islamic elements into domestic policies as part and parcel of national culture. Kazakhstan and Kyrgyzstan did not suppress religious activities, as was the case in neighboring states, but allowed Islamic movements, in a controlled way, relative freedom of action, which in turn led to less violent religious extremism, further boosting national identity.
Keywords: identity, Islam, nationalism, terrorism
1036 Liquid Waste Management in Cluster Development
Authors: Abheyjit Singh, Kulwant Singh
Abstract:
There is a gradual depletion of the water table in the earth's crust, and it is necessary to conserve water and reduce its scarcity. This can only be done by rainwater harvesting, recycling of water, judicious consumption/utilization of water and adopting unique treatment measures. Domestic waste is generated in residential areas, commercial settings, and institutions. Waste, in general, is unwanted, undesirable, and nevertheless an inevitable and inherent product of social, economic, and cultural life. In a cluster, a need-based system is formed where the project is designed for systematic analysis, collection of sewage from the cluster, treating it and then recycling it for multifarious uses. Liquid waste may consist of sanitary sewage/domestic waste, industrial waste, storm waste, or mixed waste. Sewage contains both suspended and dissolved particles, and the total amount of organic material is related to the strength of the sewage. Untreated domestic sanitary sewage has a BOD (Biochemical Oxygen Demand) of 200 mg/l and a TSS (Total Suspended Solids) of about 240 mg/l. Industrial waste may have BOD and TSS values much higher than those of sanitary sewage. Another type of impurity in wastewater is plant nutrients, especially compounds of nitrogen (N) and phosphorus (P); raw sanitary sewage contains approx. 35 mg/l of nitrogen and 10 mg/l of phosphorus. Finally, the pathogen content of the waste is expected to be proportional to the concentration of fecal coliform bacteria. The coliform concentration in raw sanitary sewage is roughly 1 billion per liter. The choice of sewage disposal technique depends on several conditions: the nature of the soil formation, availability of land, quantity of sewage to be disposed of, the degree of treatment required and the relative cost of the disposal technique.
The adopted Thappar Model (India) has the following design parameters, consisting of a screen chamber, a digestion tank, a skimming tank, a stabilization tank, an oxidation pond and a water storage pond. The screen chamber is used to remove plastic and other solids. The digestion tank is designed as an anaerobic tank with a retention period of 8 hours. The skimming tank has an outlet kept 1 meter below the surface, maintaining an anaerobic condition at the bottom and also helping in organic solids removal. The stabilization tank is designed as a primary settling tank. The oxidation pond is a facultative pond with a depth of 1.5 meters. The storage pond is designed as per the requirement. The cost of the Thappar model is Rs. 185 lakh per 3,000 to 4,000 population, and the area required is 1.5 acres. The complete structure will be lined as per the requirement. The annual maintenance will be Rs. 5 lakh per year. The project is useful for water conservation and for supplying treated water for irrigation, decreases BOD, and prevents damage to community assets and economic loss to the farmer community from inundation. There will be a healthy and clean environment in the community.
Keywords: collection, treatment, utilization, economic
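The cost and load figures quoted above can be sanity-checked with back-of-the-envelope arithmetic. The per-capita water use of 135 l/person/day below is an assumed design norm, not a number from the abstract:

```python
# Back-of-the-envelope sketch of the figures quoted in the abstract.
capital_cost_rs = 185 * 100_000          # Rs. 185 lakh per unit

# Per-capita capital cost falls as the served population grows.
cost_per_person = {p: capital_cost_rs / p for p in (3000, 4000)}
assert round(cost_per_person[3000]) == 6167   # ~Rs. 6,167 per person
assert round(cost_per_person[4000]) == 4625   # ~Rs. 4,625 per person

# Raw sewage strength from the abstract: BOD 200 mg/l. The daily flow of
# 135 l/person/day is an assumption for illustration only.
bod_mg_per_l = 200
flow_l_per_person_day = 135
bod_g_per_person_day = bod_mg_per_l * flow_l_per_person_day / 1000
assert bod_g_per_person_day == 27.0           # ~27 g BOD/person/day
```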
1035 Improving Online Learning Engagement through a Kid-Teach-Kid Approach for High School Students during the Pandemic
Authors: Alexander Huang
Abstract:
Online learning sessions have become an indispensable complement to in-classroom learning sessions in the past two years due to the emergence of Covid-19. Due to social distancing requirements, many courses and interaction-intensive sessions, ranging from music classes to debate camps, have moved online. However, online learning imposes a significant challenge for engaging students effectively during learning sessions. To resolve this problem, Project PWR, a non-profit organization formed by high school students, developed an online kid-teach-kid learning environment to boost students' learning interest and further improve their engagement during online learning. Fundamentally, the kid-teach-kid learning model creates an affinity space that forms learning groups, where like-minded peers can learn and teach their interests. The teacher role can also help a kid identify the instructional task and set the rules and procedures for the activities. The approach also structures initial discussions to reveal a range of ideas, similar experiences, thinking processes and language use, and lowers the student-to-teacher ratio, which enriches the online learning experience for upcoming lessons. In such a manner, a kid can practice both the teacher role and the student role to accumulate experience in conveying ideas and questions over the online session more efficiently and effectively. In this research work, we conducted two case studies involving a 3D-Design course and a Speech and Debate course taught by high-school kids. Through Project PWR, a kid first needs to design the course syllabus based on a provided template to become a student-teacher. Then, the Project PWR academic committee evaluates the syllabus and offers comments and suggestions for changes. Upon approval of a syllabus, an experienced adult volunteer mentor is assigned to interview the student-teacher and monitor the lectures' progress.
Student-teachers construct a comprehensive final evaluation for their students, which they grade at the end of the course. Moreover, each course requires conducting midterm and final evaluations through a set of survey replies provided by students to assess the student-teacher's performance. The uniqueness of Project PWR lies in its established kid-teach-kid affinity space. Our research results showed that Project PWR can create a closed-loop system where a student can help a teacher improve and vice versa, thus improving overall student engagement. As a result, Project PWR's approach can train teachers and students to become better online learners and give them a solid understanding of what to prepare for and what to expect from future online classes. The kid-teach-kid learning model can significantly improve students' engagement in online courses through Project PWR, effectively supplementing the traditional teacher-centric model that the Covid-19 pandemic has impacted substantially. Project PWR enables kids to share their interests and bond with one another, making the online learning environment effective and promoting positive and effective personal online one-on-one interactions.
Keywords: kid-teach-kid, affinity space, online learning, engagement, student-teacher
1034 Cytochrome B Diversity and Phylogeny of Egyptian Sheep Breeds
Authors: Othman E. Othman, Agnés Germot, Daniel Petit, Abderrahman Maftah
Abstract:
Threats to biodiversity are increasing due to the loss of genetic diversity within the species utilized in agriculture. Due to the progressive substitution of the less productive, locally adapted and native breeds by highly productive breeds, the number of threatened breeds has increased. In these conditions, it is more strategically important than ever to preserve as much farm animal diversity as possible, to ensure a prompt and proper response to the needs of future generations. Mitochondrial DNA (mtDNA) sequencing has been used to explain the origins of many modern domestic livestock species. Studies based on sequencing of sheep mitochondrial DNA showed that there are five maternal lineages in the world for domestic sheep breeds: A, B, C, D and E. Because of the eastern location of Egypt in the Mediterranean basin and the presence of fat-tailed sheep breeds (a character quite common in Turkey and Syria, where genotypes seem quite primitive), phylogenetic studies of Egyptian sheep breeds become particularly attractive. We aimed in this work to clarify the genetic affinities, biodiversity and phylogeny of five Egyptian sheep breeds using cytochrome B sequencing. Blood samples were collected from 63 animals belonging to the five tested breeds: Barki, Rahmani, Ossimi, Saidi and Sohagi. Total DNA was extracted, and a specific primer allowed the conventional PCR amplification of the cytochrome B region of mtDNA (approximately 1272 bp). PCR-amplified products were purified and sequenced. The alignment of the sixty-three samples was done using BioEdit software. DnaSP 5.00 software was used to identify the sequence variation and polymorphic sites in the aligned sequences. The results showed the presence of 34 polymorphic sites, leading to the formation of 18 haplotypes. The haplotype diversity in the five tested breeds ranged from 0.676 in the Rahmani breed to 0.894 in the Sohagi breed.
The genetic distances (D) and the average number of pairwise differences (Dxy) between breeds were estimated. The lowest distance was observed between Rahmani and Saidi (D: 1.674 and Dxy: 0.00150), while the highest distance was observed between Ossimi and Sohagi (D: 5.233 and Dxy: 0.00475). A neighbour-joining (phylogeny) tree was constructed using MEGA 5.0 software. The sequences of the 63 analyzed samples were aligned with reference sequences of different haplogroups. The phylogeny results showed the presence of three haplogroups (HapA, HapB and HapC) in the 63 examined samples. The other two haplogroups described in the literature (HapD and HapE) were not found. The results showed that 50 out of the 63 tested animals cluster with haplogroup B (79.37%), whereas 7 tested animals cluster with haplogroup A (11.11%) and 6 animals cluster with haplogroup C (9.52%). In conclusion, the phylogenetic reconstructions showed that the majority of Egyptian sheep breeds belong to haplogroup B, which is the dominant haplogroup in Eastern Mediterranean countries like Syria and Turkey. Some individuals belong to haplogroups A and C, suggesting that crosses were made with other breeds to select for growth and wool quality traits.
Keywords: cytochrome B, diversity, phylogeny, Egyptian sheep breeds
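The haplotype diversity statistic reported per breed (the values 0.676-0.894 above) is conventionally Nei's gene diversity, h = n/(n-1) * (1 - Σ p_i²). A sketch with hypothetical haplotype counts, not the study's actual per-breed data:

```python
# Sketch of the haplotype (gene) diversity statistic; the counts below
# are hypothetical, chosen only to show the behavior of the formula.

def haplotype_diversity(counts):
    """Nei's h = n/(n-1) * (1 - sum(p_i^2)) from haplotype counts."""
    n = sum(counts)
    freqs = [c / n for c in counts]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

# Uniform haplotype frequencies maximize diversity for a given n and k.
even = haplotype_diversity([5, 5, 5, 5])      # ~0.789
skewed = haplotype_diversity([17, 1, 1, 1])   # ~0.284
assert even > skewed

# A sample fixed on a single haplotype has zero diversity.
assert haplotype_diversity([20]) == 0.0
```

Values near 0.9, like Sohagi's 0.894, therefore indicate haplotypes spread fairly evenly across the sampled animals, while lower values like Rahmani's 0.676 indicate one or a few dominant haplotypes.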
1033 Critical Discourse Analysis of Xenophobia in UK Political Party Blogs
Authors: Nourah Almulhim
Abstract:
This paper takes a critical discourse analysis (CDA) approach to investigate discourse and ideology in political blogs, focusing in particular on the Conservative Home blog from the UK's current governing party. The discourse strategies of the Conservative party member as the blogger, alongside the discourse used by members of the public who reply to the blog in the below-the-line comments, will be examined. The blog discourse reflects the writer's political identity and authorial voice. The below-the-line comments enable members of the public to engage in creating adversative positions, introducing different language users who bring their own individual and collective identities. These language users can play the role of news reporters, political analysts, protesters or supporters of a specific agenda and of current socio-political topics or events. This study takes a qualitative approach to analyze the discriminatory context towards Islam/Muslims in 'The Conservative Home' blog. A cognitive approach is adopted, and an analysis of dominant discourses in the blog text and the below-the-line comments is used. The focus of the study is, firstly, on the construction of self/collective national identity in comparison to Muslim identity, highlighting the in-group and out-group construction. Second, the types of attitudes, whether feelings or judgments, related to these social actors as they are explicated to draw on social values. Third, the role of discursive strategies in justifying and legitimizing Islamophobic discriminatory practices. Therefore, the analysis is based on the systematic analysis of social actors, drawing on actors, actions, and arguments to explicate identity construction and its development in the different discourses. A socio-semantic categorization of social actors is implemented to draw out the discursive strategies, in addition to using the literature to understand these strategies.
An appraisal analysis is further used to classify attitudes and elaborate on core values in both genres. Finally, the grammar of othering is applied to explain how discriminatory dichotomies of 'Us' vs. 'Them' actions are carried in discourse. Some of the key findings of the analysis can be summarized in two main points. First, the discursive practices used to represent Muslims/Islam as different from 'Us' differ between the two genres, as the blogger uses a covert voice while the commenters generally use an overt voice. That is to say, the blogger uses a mitigated strategy to represent Muslim identity, for example, using the noun phrase 'British Muslim' but then representing them as 'radical' and 'terrorists'. The contrary holds in the below-the-line comments, where a direct strategy with an active declarative voice is used to negatively represent Muslim identity as 'oppressors' and 'terrorists', with no inclusion of the noun phrase 'British Muslims'. Second, the negotiation of 'British' identity and values, such as culture and democracy, as being unique and under threat by Muslims is prominent in the comment section, while in the blog article these standpoints are not represented.
Keywords: xenophobia, blogs, identity, critical discourse analysis
1032 Oil-price Volatility and Economic Prosperity in Nigeria: Empirical Evidence
Authors: Yohanna Panshak
Abstract:
The impact of macroeconomic instability on economic growth and prosperity has been at the forefront of many discourses among researchers and policy makers and has generated a lot of controversy over the years. This has generated a series of research efforts towards understanding the remote causes of this phenomenon: its nature, determinants and how it can be targeted and mitigated. While some have opined that the root cause of macroeconomic flux in Nigeria is oil-price volatility, others view the issue as resulting from a constellation of structural constraints both within and outside the shores of the country. Research works of scholars such as Akpan (2009), Aliyu (2009) and Olomola (2006) argue that oil volatility can determine economic growth or has the potential of doing so. On the contrary, Darby (1982) and Cerralo (2005) share the opinion that it can slow down growth. The earlier argument rests on the understanding that for net oil-exporting economies, a price upbeat directly increases real national income through higher export earnings, whereas the latter alludes to the case of net oil-importing countries, which experience price rises, increased input costs, reduced non-oil demand, low investment, a fall in tax revenues and ultimately an increase in the budget deficit, which will further reduce welfare levels. Therefore, assessing the precise impact of oil price volatility on virtually any economy is a function of whether it is an oil-exporting or oil-importing nation. Research on oil price volatility and its outcome on the growth of the Nigerian economy is evolving and marching towards resolving Nigeria's macroeconomic instability, as long as oil revenue remains the mainstay and driver of socio-economic engineering. Recently, a major importer of Nigeria's oil, the United States, made a historic breakthrough in a more efficient source of energy for its economy, with the capacity of serving a significant part of the world.
This undoubtedly suggests a threat to the exchange earnings of the country. The need to understand fluctuation in its major export commodity is critical. This paper leans on the renaissance growth theory, with greater focus on the theoretical work of Lee (1998), a leading proponent of this school, who makes a clear-cut distinction between oil price changes and oil price volatility. Based on the above background, the research seeks to empirically examine the impact of oil-price volatility on government expenditure using quarterly time series data spanning 1986:1 to 2014:4. A Vector Autoregression (VAR) econometric approach shall be used. The structural properties of the model shall be tested using the Augmented Dickey-Fuller and Phillips-Perron tests. Relevant diagnostic tests for heteroscedasticity, serial correlation and normality shall also be carried out. Policy recommendations shall be offered based on the empirical findings, which it is believed will assist policy makers not only in Nigeria but the world over.
Keywords: oil-price, volatility, prosperity, budget, expenditure
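Lee's (1998) distinction between oil-price *changes* and oil-price *volatility* can be operationalized simply: changes are period-to-period returns, while volatility is the dispersion of those returns over time (e.g. a rolling standard deviation, or the conditional variance of a GARCH model). A self-contained sketch on a simulated series, not the paper's actual quarterly data:

```python
# Illustrative sketch: oil-price changes vs. oil-price volatility on a
# simulated quarterly price series (random walk with 5% noise; entirely
# hypothetical, not Nigerian data).
import math
import random

random.seed(0)
prices = [50.0]
for _ in range(99):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.05)))

# "Changes": simple period-to-period returns.
returns = [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]

# "Volatility": rolling sample standard deviation of those returns.
def rolling_std(xs, window):
    out = []
    for i in range(window, len(xs) + 1):
        w = xs[i - window:i]
        m = sum(w) / window
        out.append(math.sqrt(sum((x - m) ** 2 for x in w) / (window - 1)))
    return out

volatility = rolling_std(returns, window=8)  # 8 quarters = 2 years
assert len(volatility) == len(returns) - 8 + 1
assert all(v >= 0 for v in volatility)
```

In a fuller treatment along the paper's lines, the series would be tested for unit roots (ADF, Phillips-Perron) before entering a VAR alongside government expenditure.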